diff --git a/README.en.md b/README.en.md
deleted file mode 100644
index 9a85dc6f038831b7ef2bd05627789c420575d82a..0000000000000000000000000000000000000000
--- a/README.en.md
+++ /dev/null
@@ -1,433 +0,0 @@
-# openEuler OpenStack SIG
-
-## Mission and Vision
-
-OpenStack is an open source cloud computing project initiated by NASA and Rackspace and since then continuously developed and maintained by major open source contributors and vendors. It is licensed under the Apache License and is free and open source software.
-
-OpenStack is currently the world's most widely deployed open source cloud software and has been validated in large-scale production environments. It contains a series of software components that provide common services for cloud infrastructure.
-
-As a well-known cloud computing open source community, the OpenStack community has many individuals and corporate organizations around the world contributing code.
-
-The openEuler OpenStack SIG is committed to combining diverse computing power to contribute platform enhancements better suited to industry needs back to the OpenStack community, and regularly organizes meetings to provide suggestions and feedback for community development.
-
-## SIG Work Objectives and Scope
-
-- Provide native OpenStack on top of openEuler to build an open and reliable cloud computing technology stack.
-- Hold regular meetings to collect requests from developers and vendors and discuss the development of the OpenStack community.
-
-## Organization Meetings
-
-Public meeting time: bi-weekly regular meeting, Wednesday afternoon 3:00-4:00 (UTC+8)
-
-Meeting agenda and summary:
-
-## Members
-
-### Maintainer list
-
-- Chen Shuo [@joec88](https://gitee.com/joec88) joseph.chn1988@gmail.com
-- Li Kunshan [@liksh](https://gitee.com/liksh) li_kunshan@163.com
-- Huang Tianhua [@huangtianhua](https://gitee.com/huangtianhua) huangtianhua223@gmail.com
-- Wang Xiyuan [@xiyuanwang](https://gitee.com/xiyuanwang) wangxiyuan1007@gmail.com
-- Zhang Fan [@zh-f](https://gitee.com/zh-f) zh.f@outlook.com
-- Zhang Ying [@zhangy1317](https://gitee.com/zhangy1317) zhangy1317@foxmail.com
-- Liu Sheng [@sean-lau](https://gitee.com/sean-lau) liusheng2048@gmail.com - Retired
-
-### Contact details
-
-- Mailing list: openstack@openeuler.org. Click the OpenStack link on the [openEuler mailing list page](https://openeuler.org/zh/community/mailing-list/) to subscribe.
-- Contact the maintainers to join our WeChat discussion group.
-
-## OpenStack Version Support List
-
-The OpenStack SIG collects OpenStack version requirements through user feedback and determines the OpenStack version evolution roadmap through open discussion among its members. Planned versions may be adjusted as requirements and manpower change. The OpenStack SIG welcomes more developers and vendors to jointly improve OpenStack support on openEuler.
- -● - Released -○ - Planning - -| | Queens | Rocky | Train | Ussuri | Victoria | Wallaby | Xena | Yoga | -|:-----------------------:|:------:|:-----:|:-----:|:------:|:--------:|:-------:|:----:|:----:| -| openEuler 20.03 LTS SP2 | ● | ● | | | | | | | -| openEuler 20.03 LTS SP3 | ● | ● | ● | | | | | | -| openEuler 21.03 | | | | | ● | | | | -| openEuler 21.09 | | | | | | ● | | | -| openEuler 22.03 LTS | | | ○ | | | | | | -| openEuler 22.09 LTS | | | | | | | | ○ | - - -| | Queens | Rocky | Train | Victoria | Wallaby | -|:---------: |:------:|:-----:|:-----:|:--------:|:-------:| -| Keystone | ● | ● | ● | ● | ● | -| Glance | ● | ● | ● | ● | ● | -| Nova | ● | ● | ● | ● | ● | -| Cinder | ● | ● | ● | ● | ● | -| Neutron | ● | ● | ● | ● | ● | -| Tempest | ● | ● | ● | ● | ● | -| Horizon | ● | ● | ● | ● | ● | -| Ironic | ● | ● | ● | ● | ● | -| Placement | | | ● | ● | ● | -| Trove | ● | ● | ● | | ● | -| Kolla | ● | ● | ● | | ● | -| Rally | ● | ● | | | | -| Swift | | | ● | | ● | -| Heat | | | ● | | | -| Ceilometer | | | ● | | | -| Aodh | | | ● | | | -| Cyborg | | | ● | | | -| Gnocchi | | | ● | | ● | - -Note: openEuler 20.03 LTS SP2 doesn't support Rally -### oepkg Repository List - -The packages of OpenStack Queens and Rocky are in the oepkg repository: - -20.03-LTS-SP2 Rocky: https://repo.oepkgs.net/openEuler/rpm/openEuler-20.03-LTS-SP2/budding-openeuler/openstack/rocky/ - -20.03-LTS-SP2 Queens: https://repo.oepkgs.net/openEuler/rpm/openEuler-20.03-LTS-SP2/budding-openeuler/openstack/queens/ - -## How to Contribue - -The OpenStack SIG adheres to the four open principles of the OpenStack community: open source, open design, open development, open community. Developers, users, and vendors are welcome to participate in SIG contributions in various open source modes, including but not limited to: - -1. Submit issues to the SIG and report requirements and software package bugs. -2. Communicate with the SIG through the mailing list. -3. Join the SIG WeChat discussion group to receive the latest SIG updates in real time and discuss various technologies with industry developers. -4. Participate in bi-weekly meetings to discuss real-time technical issues and SIG roadmaps. -5. Participate in SIG software development, including RPM package creation, environment deployment and testing, automatic tool development, and document compilation. -6. Participate in OpenStack open source project donation, SIG self-developed project development, etc. - -There are many other things you can do in the SIG. Just ensure that the work you do is related to OpenStack and open source. OpenStack SIG welcomes your participation. - -## Directory structure of this project - -```none -└── docs "Installation and test documents" -| └── install "Installation document directory" -| └── test "Test document directory" -└── example "Example files" -└── templet "RPM packaging specifications" -└── tools "OpenStack packaging and dependency analysis" - └── oos "OpenStack SIG development tool" - └── docker "Basic container environment for OpenStack SIG development" - -``` - -## Item List - -### Unified entry - -- - -OpenStack contains many projects. To facilitate management, a unified entry project has been set up. Users and developers can submit issues in the project if they have any questions about the OpenStack SIG and OpenStack sub-projects. - -### SIG developped projects (in alphabetical order) - -- -- - -OpenStack doesn't support openEuler by default in deployment, build system. Our SIG provides some plugins -to support it. 
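The oepkg Repository List above gives only bare URLs; to consume one of them it can be written into a dnf/yum repo file. A minimal sketch, assuming the 20.03-LTS-SP2 Rocky repository and that the published tree ends in an architecture directory (as in the install guides in `docs/`); the repo id and file name below are arbitrary:

```shell
cat << 'EOF' > /etc/yum.repos.d/openstack-rocky-oepkg.repo
[openstack-rocky-oepkg]
name=OpenStack Rocky (oepkg)
baseurl=https://repo.oepkgs.net/openEuler/rpm/openEuler-20.03-LTS-SP2/budding-openeuler/openstack/rocky/$basearch/
enabled=1
gpgcheck=0
EOF

yum clean all && yum makecache
```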
- -### RPM package projects (in alphabetical order) - -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- diff --git a/README.md b/README.md deleted file mode 100644 index 90cf3e7bbd0c48fbf6b34f9dff65ad13f3485ed3..0000000000000000000000000000000000000000 --- a/README.md +++ /dev/null @@ -1,472 +0,0 @@ -# openEuler OpenStack SIG - -## 使命和愿景 - -OpenStack是由美国国家航空航天局和Rackspace发起的,后由各大开源贡献者和厂商持续开发、维护的开源云计算软件。以Apache授权条款授权,并且是自由和开放源代码软件。 - -OpenStack是目前全球部署最广泛的、经过大规模生产环境验证的开源云软件,其中包括一系列软件组件,为云基础架构提供通用服务。 - -OpenStack社区作为著名云计算开源社区,在全球范围拥有众多个人及企业组织提供代码贡献。 - -openEuler OpenStack SIG致力于结合多样性算力为openstack社区贡献更适合行业发展的平台功能增强,并且定期组织会议为社区发展提供建议和回馈。 - -## SIG 工作目标和范围 - -- 在openEuler之上提供原生的OpenStack,构建开放可靠的云计算技术栈。 -- 定期召开会议,收集开发者、厂商诉求,讨论OpenStack社区发展。 - -## 组织会议 - -公开的会议时间:双周例会,周三下午3:00-4:00(北京时间) - -会议链接:通过微信群消息和邮件列表发出 - -会议纪要: - -## 成员 - -### Maintainer列表 - -- 陈硕[@joec88](https://gitee.com/joec88) joseph.chn1988@gmail.com -- 李昆山[@liksh](https://gitee.com/liksh) li_kunshan@163.com -- 黄填华[@huangtianhua](https://gitee.com/huangtianhua) huangtianhua223@gmail.com -- 王玺源[@xiyuanwang](https://gitee.com/xiyuanwang) wangxiyuan1007@gmail.com -- 张帆[@zh-f](https://gitee.com/zh-f) zh.f@outlook.com -- 张迎[@zhangy1317](https://gitee.com/zhangy1317) zhangy1317@foxmail.com -- 刘胜[@sean-lau](https://gitee.com/sean-lau) liusheng2048@gmail.com - 已退休 - -### 联系方式 - -- 邮件列表:openstack@openeuler.org,邮件订阅请在[页面](https://www.openeuler.org/zh/community/mailing-list/)中单击OpenStack链接。 -- Wechat讨论群,请联系Maintainer入群 - -## OpenStack版本支持列表 - -OpenStack SIG通过用户反馈等方式收集OpenStack版本需求,经过SIG组内成员公开讨论决定OpenStack的版本演进路线。规划中的版本可能因为需求更变、人力变动等原因进行调整。OpenStack SIG欢迎更多开发者、厂商参与,共同完善openEuler的OpenStack支持。 - -● - 已支持 -○ - 规划中/开发中 -▲ - 部分openEuler版本支持 - -| | Queens | Rocky | Train | Ussuri | Victoria | Wallaby | Xena | Yoga | -|:-----------------------:|:------:|:-----:|:-----:|:------:|:--------:|:-------:|:----:|:----:| -| openEuler 20.03 LTS SP1 | | | ○ | | | | | | -| openEuler 20.03 LTS SP2 | ● | ● | | | | | | | -| openEuler 20.03 LTS SP3 | ● | ● | ● | | | | | | -| openEuler 21.03 | | | | | ● | | | | -| openEuler 21.09 | | | | | | ● | | | -| openEuler 22.03 LTS | | | ● | | | ● | | | -| openEuler 22.09 | | | | | | | | ○ | - - -| | Queens | Rocky | Train | Victoria | Wallaby | -|:---------: |:------:|:-----:|:-----:|:--------:|:-------:| -| Keystone | ● | ● | ● | ● | ● | -| Glance | ● | ● | ● | ● | ● | -| Nova | ● | ● | ● | ● | ● | -| Cinder | ● | ● | ● | ● | ● | -| Neutron | ● | ● | ● | ● | ● | -| Tempest | ● | ● | ● | ● | ● | -| Horizon | ● | ● | ● | ● | ● | -| Ironic | ● | ● | ● | ● | ● | -| Placement | | | ● | ● | ● | -| Trove | ● | ● | ● | | ● | -| Kolla | ● | ● | ● | | ● | -| Rally | ▲ | ▲ | | | | -| Swift | | | ● | | 
● | -| Heat | | | ● | | ▲ | -| Ceilometer | | | ● | | ▲ | -| Aodh | | | ● | | ▲ | -| Cyborg | | | ● | | ▲ | -| Gnocchi | | | ● | | ● | - -Note: - -1. openEuler 20.03 LTS SP2不支持Rally -2. openEuler 21.09 不支持Heat、Ceilometer、Swift、Aodh和Cyborg - -### oepkg软件仓地址列表 - -Queens、Rocky版本的支持放在官方认证的第三方软件平台oepkg: - -20.03-LTS-SP2 Rocky: https://repo.oepkgs.net/openEuler/rpm/openEuler-20.03-LTS-SP2/budding-openeuler/openstack/queens/ - -20.03-LTS-SP3 Rocky: https://repo.oepkgs.net/openEuler/rpm/openEuler-20.03-LTS-SP3/budding-openeuler/openstack/rocky/ - -20.03-LTS-SP2 Queens: https://repo.oepkgs.net/openEuler/rpm/openEuler-20.03-LTS-SP2/budding-openeuler/openstack/queens/ - -20.03-LTS-SP3 Rocky: https://repo.oepkgs.net/openEuler/rpm/openEuler-20.03-LTS-SP3/budding-openeuler/openstack/rocky/ - -## 本项目目录结构 - -```none -└── docs "安装、测试文档" -| └── install "安装文档目录" -| └── test "测试文档目录" -└── example "示例文件" -└── templet "RPM打包规范" -└── tools "openstack打包、依赖分析等工作" - └── oos "OpenStack SIG开发工具" - └── docker "OpenStack SIG开发基础容器环境" -``` - -## 如何贡献 - -OpenStack SIG秉承OpenStack社区4个Open原则(Open source、Open Design、Open Development、Open Community),欢迎开发者、用户、厂商以各种开源方式参与SIG贡献,包括但不限于: - -1. [提交Issue](https://gitee.com/openeuler/openstack/issues/new) - 如果您在使用OpenStack时遇到了任何问题,可以向SIG提交ISSUE,包括不限于使用疑问、软件包BUG、特性需求等等。 -2. 参与技术讨论 - 通过邮件列表、微信群、在线例会等方式,与SIG成员实时讨论OpenStack技术。 -3. 参与SIG的软件开发测试工作 - 1. OpenStack SIG跟随openEuler版本开发的节奏,每几个月对外发布不同版本的OpenStack,每个版本包含了几百个RPM软件包,开发者可以参与到这些RPM包的开发工作中。 - 2. OpenStack SIG包括一些来自厂商捐献、自主研发的项目,开发者可以参与相关项目的开发工作。 - 3. openEuler新版本发布后,用户可以测试试用对应的OpenStack,相关BUG和问题可以提交到SIG。 - 4. OpenStack SIG还提供了一系列提高开发效率的工具和文档,用户可以帮忙优化、完善。 -4. 技术预言、联合创新 - OpenStack SIG欢迎各种形式的联合创新,邀请各位开发者以开源的方式、以SIG为平台,创造属于国人的云计算新技术。如果您有idea或开发意愿,欢迎加入SIG。 - -当然,贡献形式不仅包含这些,其他任何与OpenStack相关、与开源相关的事务都可以带到SIG中。OpenStack SIG欢迎您的参与。 - -## Maintainer的加入和退出 - -秉承开源开放的理念,OpenStack SIG在maintainer成员的加入和退出方面也有一定的规范和要求。 - -### 如何成为maintainer - -maintainer作为SIG的直接负责人,拥有代码合入、路标规划、提名maintainer等方面的权利,同时也有软件质量看护、版本开发的义务。如果您想成为OpenStack SIG的一名maintainer,需要满足以下几点要求: - -1. 持续参与OpenStack SIG开发贡献,不小于一个openEuler release周期(一般为3个月) -2. 持续参与OpenStack SIG代码检视,review排名应不低于SIG平均量 -3. 定时参加OpenStack SIG例会(一般为双周一次),一个openEuler release周期一般包括6次例会,缺席次数应不大于2次 - -加分项: - -1. 积极参加OpenStack SIG组织的各种活动,比如线上分享、线下meetup或峰会等。 -2. 
帮助SIG扩展运营范围,进行联合技术创新,例如主动开源新项目,吸引新的开发者、厂商加入SIG等。 - -SIG maintainer每个季度会组织闭门会议,审视当前贡献数据,根据贡献者满足相关要求,经讨论达成一致后并且贡献者愿意担任maintainer一职时,SIG会向openEuler TC提出相关申请 - -### maintainer的退出 - -当SIG maintainer因为自身原因(工作变动、业务调整等原因),无法再担任maintainer一职时,可主动申请退出。 - -SIG maintainer每半年也会例行审视当前maintainer列表,如果发现有不再适合担任maintainer的贡献者(贡献不足、不再活跃等原因),经讨论达成一致后,会向openEuler TC提出相关申请。 - -## 项目清单 - -### 统一入口 - -- - -OpenStack包含项目众多,为了方便管理,设置了统一入口项目,用户、开发者对OpenStack SIG以及各OpenStack子项目有任何问题,可以在该项目中提交Issue。 - -### SIG自研项目(按字母顺序) - -- -- -- - -### RPM构建项目(按字母顺序) - -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- diff --git a/docs/branch_introduce.md b/docs/branch_introduce.md deleted file mode 100644 index 769bcb192d9efe16cac69dbd9fad3e954439432d..0000000000000000000000000000000000000000 --- a/docs/branch_introduce.md +++ /dev/null @@ -1,28 +0,0 @@ -# OpenStack oepkg分支介绍 - -OpenStack Train版本才开始完全支持python3,openEuler 20.03-LTS-SP2支持Queens和Rocky版本需要引入python2软件包,根据社区意见将OpenStack的软件包发布到官方认证的第三方软件包平台[oepkg](https://oepkgs.net/zh/),代码托管放在openEuler社区,由Jenkins CI保证基本门槛,由[OBS](https://build.openeuler.org/)构建软件Rpm包。基于openEuler分支开发规范,OpenStack软件包仓库针对OpenStack Rocky和Queens版本的开发,创建了如下几个oepkg分支: - -## oepkg_openstack-common_oe-20.03-LTS-Next分支 -基于openEuler 20.03-LTS版本的common公共包开发分支,作为20.03-LTS版本common开发主线,跟随openEuler社区开发节奏后续拉出对应common的SP分支 - -## oepkg_openstack-common_oe-20.03-LTS-SP2分支 -从oepkg_openstack-common_oe-20.03-LTS-Next分支拉出的对应20.03-LTS-SP2版本的common公共包分支 - -## oepkg_openstack-rocky_oe-20.03-LTS-Next分支 -基于openEuler 20.03-LTS版本的rocky开发分支,作为20.03-LTS版本OpenStack Rocky开发主线,跟随openEuler社区开发节奏后续拉出对应Rocky版本的SP分支 - -## oepkg_openstack-rocky_oe-20.03-LTS-SP2分支 -从oepkg_openstack-rocky_oe-20.03-LTS-Next分支拉出的对应20.03-LTS-SP2版本的rocky分支 - -## oepkg_openstack-queens_oe-20.03-LTS-Next分支 -基于openEuler 20.03-LTS版本的queens开发分支,作为20.03-LTS版本OpenStack Queens开发主线,跟随openEuler社区开发节奏后续拉出对应Queens版本的SP分支 - -## oepkg_openstack-queens_oe-20.03-LTS-SP2分支 -从oepkg_openstack-queens_oe-20.03-LTS-Next分支拉出的对应20.03-LTS-SP2版本的OpenStack Queens分支 - - -注意:上述提及的common包最终在oepkg上面并不对用户呈现,用户看到的是OpenStack的Queens和Rocky版本,其中common+queens构成[Queens](https://repo.oepkgs.net/openEuler/rpm/openEuler-20.03-LTS-SP2/budding-openeuler/openstack/queens),common+rocky构成[Rocky](https://repo.oepkgs.net/openEuler/rpm/openEuler-20.03-LTS-SP2/budding-openeuler/openstack/rocky) - -## 分支维护规范(以rocky分支为例): -rocky SP分支拉出以后,允许Bug、CVE安全漏洞以及其他必须的适配修改,在rocky Next分支提交PR修改,合入后同步到对应rocky SP分支。后续更新发布到[oepkg](https://repo.oepkgs.net/openEuler/rpm/openEuler-20.03-LTS-SP2/budding-openeuler/openstack) - diff --git a/docs/install/devstack-success.png b/docs/install/devstack-success.png deleted file mode 100644 index 
9ad9ffbd55b95dcb35fb1a9621f3dd92b603c161..0000000000000000000000000000000000000000 Binary files a/docs/install/devstack-success.png and /dev/null differ diff --git a/docs/install/devstack.md b/docs/install/devstack.md deleted file mode 100644 index dcce1ecbcf76ce295c62a74a7c4338db275078ec..0000000000000000000000000000000000000000 --- a/docs/install/devstack.md +++ /dev/null @@ -1,172 +0,0 @@ -# 使用Devstack安装OpenStack - -目前OpenStack原生Devstack项目已经支持在openEuler上安装OpenStack,其中openEuler 20.03 LTS SP2已经过验证,并且有上游官方CI保证质量。其他版本的openEuler需要用户自行测试(2022-04-25 openEuler master分支已验证)。 - -## 安装步骤 - -准备一个openEuler环境, 20.03 LTS SP2[虚拟机镜像地址](https://repo.openeuler.org/openEuler-20.03-LTS-SP2/virtual_machine_img/), master[虚拟机镜像地址](http://121.36.84.172/dailybuild/openEuler-Mainline/) - -1. 配置yum源 - - **openEuler 20.03 LTS SP2**: - - openEuler官方源中缺少了一些OpenStack需要的RPM包,因此需要先配上OpenStack SIG在oepkg中准备好的RPM源 - ``` - vi /etc/yum.repos.d/openeuler.repo - - [openstack] - name=openstack - baseurl=https://repo.oepkgs.net/openEuler/rpm/openEuler-20.03-LTS-SP2/budding-openeuler/openstack-master-ci/aarch64/ - enabled=1 - gpgcheck=0 - ``` - - **openEuler master**: - - 使用master的RPM源 - ``` - vi /etc/yum.repos.d/openeuler.repo - - [mainline] - name=mainline - baseurl=http://119.3.219.20:82/openEuler:/Mainline/standard_aarch64/ - gpgcheck=false - - [epol] - name=epol - baseurl=http://119.3.219.20:82/openEuler:/Epol/standard_aarch64/ - gpgcheck=false - ``` - -2. 前期准备 - - **openEuler 20.03 LTS SP2**: - - 在一些版本的openEuler官方镜像的默认源中,EPOL-update的URL可能配置不正确,需要修改 - - ``` - vi /etc/yum.repos.d/openEuler.repo - - # 把[EPOL-UPDATE]URL改成 - baseurl=http://repo.openeuler.org/openEuler-20.03-LTS-SP2/EPOL/update/main/$basearch/ - ``` - - **openEuler master**: - - ``` - yum remove python3-pip # 系统的pip与devstack pip冲突,需要先删除 - # master的虚机环境缺少了一些依赖,devstack不会自动安装,需要手动安装 - yum install iptables tar wget python3-devel httpd-devel iscsi-initiator-utils libvirt python3-libvirt qemu memcached - ``` - -3. 下载devstack - - ``` - yum update - yum install git - cd /opt/ - git clone https://opendev.org/openstack/devstack - ``` - -4. 初始化devstack环境配置 - - ``` - # 创建stack用户 - /opt/devstack/tools/create-stack-user.sh - # 修改目录权限 - chown -R stack:stack /opt/devstack - chmod -R 755 /opt/devstack - chmod -R 755 /opt/stack - # 切换到要部署的openstack版本分支,以yoga为例,不切换的话,默认安装的是master版本的openstack - git checkout stable/yoga - ``` - -5. 初始化devstack配置文件 - - ``` - 切换到stack用户 - su stack - 此时,请确认stack用户的PATH环境变量是否包含了`/usr/sbin`,如果没有,则需要执行 - PATH=$PATH:/usr/sbin - 新增配置文件 - vi /opt/devstack/local.conf - - [[local|localrc]] - DATABASE_PASSWORD=root - RABBIT_PASSWORD=root - SERVICE_PASSWORD=root - ADMIN_PASSWORD=root - OVN_BUILD_FROM_SOURCE=True - ``` - - openEuler没有提供OVN的RPM软件包,因此需要配置`OVN_BUILD_FROM_SOURCE=True`, 从源码编译OVN - - 另外如果使用的是arm64虚拟机环境,则需要配置libvirt嵌套虚拟化,在`local.conf`中追加如下配置: - - ``` - [[post-config|$NOVA_CONF]] - [libvirt] - cpu_mode=custom - cpu_model=cortex-a72 - ``` - 如果安装Ironic,需要提前安装依赖: - ```bash - sudo dnf install syslinux-nonlinux - ``` - - **openEuler master的特殊配置**: 由于devstack还没有适配最新的openEuler,我们需要手动修复一些问题: - - 1. 修改devstack源码 - - ``` - vi /opt/devstack/tools/fixup_stuff.sh - 把fixup_openeuler方法中的所有echo语句删掉 - (echo '[openstack-ci]' - echo 'name=openstack' - echo 'baseurl=https://repo.oepkgs.net/openEuler/rpm/openEuler-20.03-LTS-SP2/budding-openeuler/openstack-master-ci/'$arch'/' - echo 'enabled=1' - echo 'gpgcheck=0') | sudo tee -a /etc/yum.repos.d/openstack-master.repo > /dev/null - ``` - 2. 
修改requirements源码 - - Yoga版keystone的依赖`setproctitle`的devstack默认版本不支持python3.10,需要升级,手动下载requirements项目并修改 - ``` - cd /opt/stack - git clone https://opendev.org/openstack/requirements --branch stable/yoga - vi /opt/stack/requirements/upper-constraints.txt - setproctitle===1.2.3 - ``` - - 3. OpenStack horizon有BUG,无法正常安装。这里我们暂时不安装horizon,修改`local.conf`,新增一行: - - ``` - [[local|localrc]] - disable_service horizon - ``` - - 如果确实有对horizon的需求,则需要解决以下问题: - ``` - # 1. horizon依赖的pyScss默认为1.3.7版本,不支持python3.10 - # 解决方法:需要提前clone`requirements`项目并修改代码 - vi /opt/stack/requirements/upper-constraints.txt - pyScss===1.4.0 - - # 2. horizon依赖httpd的mod_wsgi插件,但目前openEuler的mod_wsgi构建异常(2022-04-25)(解决后yum install mod_wsgi即可),无法从yum安装 - # 解决方法:手动源码build mod_wsgi并配置,该过程较复杂,这里略过 - ``` - - 4. dstat服务依赖的`pcp-system-tools`构建异常(2022-04-25)(解决后yum install pcp-system-tools即可),无法从yum安装,暂时先不安装dstat - - ``` - [[local|localrc]] - disable_service dstat - ``` - -6. 部署OpenStack - - 进入devstack目录,执行`./stack.sh`,等待OpenStack完成安装部署。 - - -部署成功的截图展示: - -![devstack-success](./devstack-success.png) diff --git a/docs/install/openEuler-20.03-LTS-SP1/.keep b/docs/install/openEuler-20.03-LTS-SP1/.keep deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/docs/install/openEuler-20.03-LTS-SP2/OpenStack-queens.md b/docs/install/openEuler-20.03-LTS-SP2/OpenStack-queens.md deleted file mode 100644 index 51516291e73efdc017705f14d6d906ae005bbf0d..0000000000000000000000000000000000000000 --- a/docs/install/openEuler-20.03-LTS-SP2/OpenStack-queens.md +++ /dev/null @@ -1,2048 +0,0 @@ -# OpenStack-Queens 部署指南 - - - -- [OpenStack-Queens 部署指南](#openstack-queens-部署指南) - - [OpenStack 简介](#openstack-简介) - - [约定](#约定) - - [准备环境](#准备环境) - - [环境配置](#环境配置) - - [安装 SQL DataBase](#安装-sql-database) - - [安装 RabbitMQ](#安装-rabbitmq) - - [安装 Memcached](#安装-memcached) - - [安装 OpenStack](#安装-openstack) - - [Keystone 安装](#keystone-安装) - - [Glance 安装](#glance-安装) - - [Nova 安装](#nova-安装) - - [Neutron 安装](#neutron-安装) - - [Cinder 安装](#cinder-安装) - - [horizon 安装](#horizon-安装) - - [Tempest 安装](#tempest-安装) - - [Ironic 安装](#ironic-安装) - - [Kolla 安装](#kolla-安装) - - [Trove 安装](#trove-安装) - - - -## OpenStack 简介 - -OpenStack 是一个社区,也是一个项目。它提供了一个部署云的操作平台或工具集,为组织提供可扩展的、灵活的云计算。 - -作为一个开源的云计算管理平台,OpenStack 由nova、cinder、neutron、glance、keystone、horizon等几个主要的组件组合起来完成具体工作。OpenStack 支持几乎所有类型的云环境,项目目标是提供实施简单、可大规模扩展、丰富、标准统一的云计算管理平台。OpenStack 通过各种互补的服务提供了基础设施即服务(IaaS)的解决方案,每个服务提供 API 进行集成。 - -openEuler 20.03-LTS-SP2 版本官方认证的第三方oepkg yum 源已经支持 Openstack-Queens 版本,用户可以配置好oepkg yum 源后根据此文档进行 OpenStack 部署。 - -## 约定 - -Openstack 支持多种形态部署,此文档支持`ALL in One`以及`Distributed`两种部署方式,按照如下方式约定: - -`ALL in One`模式: - -```text -忽略所有可能的后缀 -``` - -`Distributed`模式: - -```text -以 `(CTL)` 为后缀表示此条配置或者命令仅适用`控制节点` -以 `(CPT)` 为后缀表示此条配置或者命令仅适用`计算节点` -除此之外表示此条配置或者命令同时适用`控制节点`和`计算节点` -``` - -***注意*** - -涉及到以上约定的服务如下: - -- Cinder -- Nova -- Neutron - -## 准备环境 - -### 环境配置 - -1. 配置 20.03-LTS-SP2 官方认证的第三方源 oepkg - - ```shell - cat << EOF >> /etc/yum.repos.d/OpenStack_Queens.repo - [openstack_queens] - name=OpenStack_Queens - baseurl=https://repo.oepkgs.net/openEuler/rpm/openEuler-20.03-LTS-SP2/budding-openeuler/openstack/queens/$basearch/ - gpgcheck=0 - enabled=1 - EOF - - yum clean all && yum makecache - ``` - -2. 
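Before moving on, it can help to confirm that the repository configured above is actually enabled and serving packages. An optional sanity check, assuming the `openstack_queens` repo id from the file just created:

```shell
# The Queens oepkg repository should appear among the enabled repos
yum repolist enabled | grep -i openstack_queens

# And OpenStack packages should resolve from it
yum --disablerepo='*' --enablerepo=openstack_queens list available | head
```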
修改主机名以及映射 - - 设置各个节点的主机名 - - ```shell - hostnamectl set-hostname controller (CTL) - hostnamectl set-hostname compute (CPT) - ``` - - 假设controller节点的IP是`10.0.0.11`,compute节点的IP是`10.0.0.12`(如果存在的话),则于`/etc/hosts`新增如下: - - ```shell - 10.0.0.11 controller - 10.0.0.12 compute - ``` - -### 安装 SQL DataBase - -1. 执行如下命令,安装软件包。 - - ```shell - yum install mariadb mariadb-server python2-PyMySQL - ``` - -2. 执行如下命令,创建并编辑 `/etc/my.cnf.d/openstack.cnf` 文件。 - - ```shell - vim /etc/my.cnf.d/openstack.cnf - - [mysqld] - bind-address = 10.0.0.11 - default-storage-engine = innodb - innodb_file_per_table = on - max_connections = 4096 - collation-server = utf8_general_ci - character-set-server = utf8 - ``` - - ***注意*** - - **其中 `bind-address` 设置为控制节点的管理IP地址。** - -3. 启动 DataBase 服务,并为其配置开机自启动: - - ```shell - systemctl enable mariadb.service - systemctl start mariadb.service - ``` - -4. 配置DataBase的默认密码(可选) - - ```shell - mysql_secure_installation - ``` - - ***注意*** - - **根据提示进行即可** - -### 安装 RabbitMQ - -1. 执行如下命令,安装软件包。 - - ```shell - yum install rabbitmq-server - ``` - -2. 启动 RabbitMQ 服务,并为其配置开机自启动。 - - ```shell - systemctl enable rabbitmq-server.service - systemctl start rabbitmq-server.service - ``` - -3. 添加 OpenStack用户。 - - ```shell - rabbitmqctl add_user openstack RABBIT_PASS - ``` - - ***注意*** - - **替换 `RABBIT_PASS`,为 OpenStack 用户设置密码** - -4. 设置openstack用户权限,允许进行配置、写、读: - - ```shell - rabbitmqctl set_permissions openstack ".*" ".*" ".*" - ``` - -### 安装 Memcached - -1. 执行如下命令,安装依赖软件包。 - - ```shell - yum install memcached python2-memcached - ``` - -2. 编辑 `/etc/sysconfig/memcached` 文件。 - - ```shell - vim /etc/sysconfig/memcached - - OPTIONS="-l 127.0.0.1,::1,controller" - ``` - -3. 执行如下命令,启动 Memcached 服务,并为其配置开机启动。 - - ```shell - systemctl enable memcached.service - systemctl start memcached.service - ``` - 服务启动后,可以通过命令`memcached-tool controller stats`确保启动正常,服务可用,其中可以将`controller`替换为控制节点的管理IP地址。 - -## 安装 OpenStack - -### Keystone 安装 - -1. 创建 keystone 数据库并授权。 - - ``` sql - mysql -u root -p - - MariaDB [(none)]> CREATE DATABASE keystone; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \ - IDENTIFIED BY 'KEYSTONE_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \ - IDENTIFIED BY 'KEYSTONE_DBPASS'; - MariaDB [(none)]> exit - ``` - - ***注意*** - - **替换 `KEYSTONE_DBPASS`,为 Keystone 数据库设置密码** - -2. 安装软件包。 - - ```shell - yum install openstack-keystone httpd python2-mod_wsgi - ``` - -3. 配置keystone相关配置 - - ```shell - vim /etc/keystone/keystone.conf - - [database] - connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone - - [token] - provider = fernet - ``` - - ***解释*** - - [database]部分,配置数据库入口 - - [token]部分,配置token provider - - ***注意:*** - - **替换 `KEYSTONE_DBPASS` 为 Keystone 数据库的密码** - -4. 同步数据库。 - - ```shell - su -s /bin/sh -c "keystone-manage db_sync" keystone - ``` - -5. 初始化Fernet密钥仓库。 - - ```shell - keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone - keystone-manage credential_setup --keystone-user keystone --keystone-group keystone - ``` - -6. 启动服务。 - - ```shell - keystone-manage bootstrap --bootstrap-password ADMIN_PASS \ - --bootstrap-admin-url http://controller:5000/v3/ \ - --bootstrap-internal-url http://controller:5000/v3/ \ - --bootstrap-public-url http://controller:5000/v3/ \ - --bootstrap-region-id RegionOne - ``` - - ***注意*** - - **替换 `ADMIN_PASS`,为 admin 用户设置密码** - -7. 
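Before installing the individual OpenStack services, it can be worth confirming that the three base services are healthy and that the Keystone database is reachable. An optional check, reusing the `KEYSTONE_DBPASS` chosen above:

```shell
# All three units should report "active"
systemctl is-active mariadb.service rabbitmq-server.service memcached.service

# The openstack RabbitMQ user should show ".*" ".*" ".*" permissions
rabbitmqctl list_permissions

# The keystone database created above should be reachable with the keystone credentials
mysql -u keystone -pKEYSTONE_DBPASS -h controller -e 'SHOW DATABASES;'
```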
配置Apache HTTP server - - ```shell - vim /etc/httpd/conf/httpd.conf - - ServerName controller - ``` - - ```shell - ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/ - ``` - - ***解释*** - - 配置 `ServerName` 项引用控制节点 - - ***注意*** - **如果 `ServerName` 项不存在则需要创建** - -8. 启动Apache HTTP服务。 - - ```shell - systemctl enable httpd.service - systemctl start httpd.service - ``` - -9. 创建环境变量配置。 - - ```shell - cat << EOF >> ~/.admin-openrc - export OS_PROJECT_DOMAIN_NAME=Default - export OS_USER_DOMAIN_NAME=Default - export OS_PROJECT_NAME=admin - export OS_USERNAME=admin - export OS_PASSWORD=ADMIN_PASS - export OS_AUTH_URL=http://controller:5000/v3 - export OS_IDENTITY_API_VERSION=3 - export OS_IMAGE_API_VERSION=2 - EOF - ``` - - ***注意*** - - **替换 `ADMIN_PASS` 为 admin 用户的密码** - -10. 依次创建domain, projects, users, roles,需要先安装好python2-openstackclient: - - ``` - yum install python2-openstackclient - ``` - - 导入环境变量 - - ```shell - source ~/.admin-openrc - ``` - - 创建project `service`,其中 domain `default` 在 keystone-manage bootstrap 时已创建 - - ```shell - openstack domain create --description "An Example Domain" example - ``` - - ```shell - openstack project create --domain default --description "Service Project" service - ``` - - 创建(non-admin)project `myproject`,user `myuser` 和 role `myrole`,为 `myproject` 和 `myuser` 添加角色`myrole` - - ```shell - openstack project create --domain default --description "Demo Project" myproject - openstack user create --domain default --password-prompt myuser - openstack role create myrole - openstack role add --project myproject --user myuser myrole - ``` - -11. 验证 - - 取消临时环境变量OS_AUTH_URL和OS_PASSWORD: - - ```shell - source ~/.admin-openrc - unset OS_AUTH_URL OS_PASSWORD - ``` - - 为admin用户请求token: - - ```shell - openstack --os-auth-url http://controller:5000/v3 \ - --os-project-domain-name Default --os-user-domain-name Default \ - --os-project-name admin --os-username admin token issue - ``` - - 为myuser用户请求token: - - ```shell - openstack --os-auth-url http://controller:5000/v3 \ - --os-project-domain-name Default --os-user-domain-name Default \ - --os-project-name myproject --os-username myuser token issue - ``` - -### Glance 安装 - -1. 创建数据库、服务凭证和 API 端点 - - 创建数据库: - - ```sql - mysql -u root -p - - MariaDB [(none)]> CREATE DATABASE glance; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \ - IDENTIFIED BY 'GLANCE_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \ - IDENTIFIED BY 'GLANCE_DBPASS'; - MariaDB [(none)]> exit - ``` - - ***注意:*** - - **替换 `GLANCE_DBPASS`,为 glance 数据库设置密码** - - 创建服务凭证 - - ```shell - source ~/.admin-openrc - - openstack user create --domain default --password-prompt glance - openstack role add --project service --user glance admin - openstack service create --name glance --description "OpenStack Image" image - ``` - - 创建镜像服务API端点: - - ```shell - openstack endpoint create --region RegionOne image public http://controller:9292 - openstack endpoint create --region RegionOne image internal http://controller:9292 - openstack endpoint create --region RegionOne image admin http://controller:9292 - ``` - -2. 安装软件包 - - ```shell - yum install openstack-glance - ``` - -3. 
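Before editing the Glance configuration files, the service account and endpoints created above can be verified from the command line. An optional check, assuming the admin credentials file from the Keystone section:

```shell
source ~/.admin-openrc

# The glance user, the image service and its three endpoints should all be listed
openstack user list | grep glance
openstack service list --long | grep image
openstack endpoint list --service glance
```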
配置glance相关配置: - - ```shell - vim /etc/glance/glance-api.conf - - [database] - connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance - - [keystone_authtoken] - www_authenticate_uri = http://controller:5000 - auth_url = http://controller:5000 - memcached_servers = controller:11211 - auth_type = password - project_domain_name = Default - user_domain_name = Default - project_name = service - username = glance - password = GLANCE_PASS - - [paste_deploy] - flavor = keystone - - [glance_store] - stores = file,http - default_store = file - filesystem_store_datadir = /var/lib/glance/images/ - ``` - - ```shell - vim /etc/glance/glance-registry.conf - - [database] - connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance - - [keystone_authtoken] - www_authenticate_uri = http://controller:5000 - auth_url = http://controller:5000 - memcached_servers = controller:11211 - auth_type = password - project_domain_name = Default - user_domain_name = Default - project_name = service - username = glance - password = GLANCE_PASS - - [paste_deploy] - flavor = keystone - - [glance_store] - stores = file,http - default_store = file - filesystem_store_datadir = /var/lib/glance/images/ - ``` - - ***解释:*** - - [database]部分,配置数据库入口 - - [keystone_authtoken] [paste_deploy]部分,配置身份认证服务入口 - - [glance_store]部分,配置本地文件系统存储和镜像文件的位置 - - ***注意*** - - **替换 `GLANCE_DBPASS` 为 glance 数据库的密码** - - **替换 `GLANCE_PASS` 为 glance 用户的密码** - -4. 同步数据库: - - ```shell - su -s /bin/sh -c "glance-manage db_sync" glance - ``` - -5. 启动服务: - - ```shell - systemctl enable openstack-glance-api.service openstack-glance-registry.service - systemctl start openstack-glance-api.service openstack-glance-registry.service - ``` - -6. 验证 - - 下载镜像 - - ```shell - source ~/.admin-openrc - - wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img - ``` - - ***注意*** - - **如果您使用的环境是鲲鹏架构,请下载arm64版本的镜像** - - 向Image服务上传镜像: - - ```shell - openstack image create --disk-format qcow2 --container-format bare \ - --file cirros-0.4.0-x86_64-disk.img --public cirros - ``` - - 确认镜像上传并验证属性: - - ```shell - openstack image list - ``` - -### Nova 安装 - -1. 
创建数据库、服务凭证和 API 端点 - - 创建数据库: - - ```sql - mysql -u root -p (CPT) - - MariaDB [(none)]> CREATE DATABASE nova_api; - MariaDB [(none)]> CREATE DATABASE nova; - MariaDB [(none)]> CREATE DATABASE nova_cell0; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \ - IDENTIFIED BY 'NOVA_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \ - IDENTIFIED BY 'NOVA_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \ - IDENTIFIED BY 'NOVA_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \ - IDENTIFIED BY 'NOVA_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \ - IDENTIFIED BY 'NOVA_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \ - IDENTIFIED BY 'NOVA_DBPASS'; - MariaDB [(none)]> exit - ``` - - ***注意*** - - **替换NOVA_DBPASS,为nova数据库设置密码** - - ```shell - source ~/.admin-openrc (CPT) - ``` - - 创建nova服务凭证: - - ```shell - openstack user create --domain default --password-prompt nova (CTP) - openstack role add --project service --user nova admin (CPT) - openstack service create --name nova --description "OpenStack Compute" compute (CPT) - ``` - - 创建placement服务凭证: - - ```shell - openstack user create --domain default --password-prompt placement (CPT) - openstack role add --project service --user placement admin (CPT) - openstack service create --name placement --description "Placement API" placement (CPT) - ``` - - 创建nova API端点: - - ```shell - openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1 (CPT) - openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1 (CPT) - openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1 (CPT) - ``` - - 创建placement API端点: - - ```shell - openstack endpoint create --region RegionOne placement public http://controller:8778 (CPT) - openstack endpoint create --region RegionOne placement internal http://controller:8778 (CPT) - openstack endpoint create --region RegionOne placement admin http://controller:8778 (CPT) - ``` - -2. 安装软件包 - - ```shell - yum install openstack-nova-api openstack-nova-conductor openstack-nova-console \ - openstack-nova-novncproxy openstack-nova-scheduler openstack-nova-placement-api (CTL) - - yum install openstack-nova-compute (CPT) - ``` - - ***注意*** - - **如果为arm64结构,还需要执行以下命令** - - ```shell - yum install edk2-aarch64 (CPT) - ``` - -3. 
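As with Glance, the Compute prerequisites created above can be checked before touching nova.conf: the three databases and the compute/placement endpoints should all exist. An optional sketch, reusing `NOVA_DBPASS`:

```shell
source ~/.admin-openrc

# nova, nova_api and nova_cell0 should all appear
mysql -u nova -pNOVA_DBPASS -h controller -e 'SHOW DATABASES;'

# Both services should expose public/internal/admin endpoints
openstack endpoint list --service compute
openstack endpoint list --service placement
```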
配置nova相关配置 - - ```shell - vim /etc/nova/nova.conf - - [DEFAULT] - enabled_apis = osapi_compute,metadata - transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/ - my_ip = 10.0.0.1 - use_neutron = true - firewall_driver = nova.virt.firewall.NoopFirewallDriver - compute_driver=libvirt.LibvirtDriver (CPT) - instances_path = /var/lib/nova/instances/ (CPT) - lock_path = /var/lib/nova/tmp (CPT) - - [api_database] - connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api (CTL) - - [database] - connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova (CTL) - - [api] - auth_strategy = keystone - - [keystone_authtoken] - www_authenticate_uri = http://controller:5000/ - auth_url = http://controller:5000/ - memcached_servers = controller:11211 - auth_type = password - project_domain_name = Default - user_domain_name = Default - project_name = service - username = nova - password = NOVA_PASS - - [vnc] - enabled = true - server_listen = $my_ip - server_proxyclient_address = $my_ip - novncproxy_base_url = http://controller:6080/vnc_auto.html (CPT) - - [libvirt] - virt_type = qemu (CPT) - cpu_mode = custom (CPT) - cpu_model = cortex-a7 (CPT) - - [glance] - api_servers = http://controller:9292 - - [oslo_concurrency] - lock_path = /var/lib/nova/tmp (CTL) - - [placement] - region_name = RegionOne - project_domain_name = Default - project_name = service - auth_type = password - user_domain_name = Default - auth_url = http://controller:5000/v3 - username = placement - password = PLACEMENT_PASS - - [neutron] - auth_url = http://controller:5000 - auth_type = password - project_domain_name = default - user_domain_name = default - region_name = RegionOne - project_name = service - username = neutron - password = NEUTRON_PASS - service_metadata_proxy = true (CTL) - metadata_proxy_shared_secret = METADATA_SECRET (CTL) - ``` - - ***解释*** - - [default]部分,启用计算和元数据的API,配置RabbitMQ消息队列入口,配置my_ip,启用网络服务neutron; - - [api_database] [database]部分,配置数据库入口; - - [api] [keystone_authtoken]部分,配置身份认证服务入口; - - [vnc]部分,启用并配置远程控制台入口; - - [glance]部分,配置镜像服务API的地址; - - [oslo_concurrency]部分,配置lock path; - - [placement]部分,配置placement服务的入口。 - - ***注意*** - - **替换 `RABBIT_PASS` 为 RabbitMQ 中 openstack 账户的密码;** - - **配置 `my_ip` 为控制节点的管理IP地址;** - - **替换 `NOVA_DBPASS` 为nova数据库的密码;** - - **替换 `NOVA_PASS` 为nova用户的密码;** - - **替换 `PLACEMENT_PASS` 为placement用户的密码;** - - **替换 `NEUTRON_PASS` 为neutron用户的密码;** - - **替换`METADATA_SECRET`为合适的元数据代理secret。** - - **额外** - - 手动增加Placement API接入配置。 - - ```shell - vim /etc/httpd/conf.d/00-nova-placement-api.conf (CTL) - - - = 2.4> - Require all granted - - - Order allow,deny - Allow from all - - - ``` - - 重启httpd服务: - - ```shell - systemctl restart httpd (CTL) - ``` - - 确定是否支持虚拟机硬件加速(x86架构): - - ```shell - egrep -c '(vmx|svm)' /proc/cpuinfo (CPT) - ``` - - 如果返回值为0则不支持硬件加速,需要配置libvirt使用QEMU而不是KVM: - - ```shell - vim /etc/nova/nova.conf (CPT) - - [libvirt] - virt_type = qemu - ``` - - 如果返回值为1或更大的值,则支持硬件加速,不需要进行额外的配置 - - ***注意*** - - **如果为arm64结构,还需要执行以下命令** - - ```shell - mkdir -p /usr/share/AAVMF - chown nova:nova /usr/share/AAVMF - - ln -s /usr/share/edk2/aarch64/QEMU_EFI-pflash.raw \ - /usr/share/AAVMF/AAVMF_CODE.fd (CPT) - ln -s /usr/share/edk2/aarch64/vars-template-pflash.raw \ - /usr/share/AAVMF/AAVMF_VARS.fd (CPT) - - vim /etc/libvirt/qemu.conf - - nvram = ["/usr/share/AAVMF/AAVMF_CODE.fd: \ - /usr/share/AAVMF/AAVMF_VARS.fd", \ - "/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw: \ - /usr/share/edk2/aarch64/vars-template-pflash.raw"] (CPT) - ``` - -4. 
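After the httpd restart above, the Placement API should answer on port 8778 even before the Nova services themselves are started; its root URL returns a small JSON version document. An optional check (curl is assumed to be installed):

```shell
# Should print the Placement API version document
curl -s http://controller:8778 | python2 -m json.tool
```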
同步数据库 - - 同步nova-api数据库: - - ```shell - su -s /bin/sh -c "nova-manage api_db sync" nova (CTL) - ``` - - 注册cell0数据库: - - ```shell - su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova (CTL) - ``` - - 创建cell1 cell: - - ```shell - su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova (CTL) - ``` - - 同步nova数据库: - - ```shell - su -s /bin/sh -c "nova-manage db sync" nova (CTL) - ``` - - 验证cell0和cell1注册正确: - - ```shell - su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova (CTL) - ``` - - 添加计算节点到openstack集群 - - ```shell - su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova (CPT) - ``` - -5. 启动服务 - - ```shell - systemctl enable \ (CTL) - openstack-nova-api.service \ - openstack-nova-consoleauth.service \ - openstack-nova-scheduler.service \ - openstack-nova-conductor.service \ - openstack-nova-novncproxy.service - - systemctl start \ (CTL) - openstack-nova-api.service \ - openstack-nova-consoleauth.service \ - openstack-nova-scheduler.service \ - openstack-nova-conductor.service \ - openstack-nova-novncproxy.service - ``` - - ```shell - systemctl enable libvirtd.service openstack-nova-compute.service (CPT) - systemctl start libvirtd.service openstack-nova-compute.service (CPT) - ``` - -6. 验证 - - ```shell - source ~/.admin-openrc (CTL) - ``` - - 列出服务组件,验证每个流程都成功启动和注册: - - ```shell - openstack compute service list (CTL) - ``` - - 列出身份服务中的API端点,验证与身份服务的连接: - - ```shell - openstack catalog list (CTL) - ``` - - 列出镜像服务中的镜像,验证与镜像服务的连接: - - ```shell - openstack image list (CTL) - ``` - - 检查cells和placement API是否运作成功,以及其他必要条件是否已具备。 - - ```shell - nova-status upgrade check (CTL) - ``` - -### Neutron 安装 - -1. 创建数据库、服务凭证和 API 端点 - - 创建数据库: - - ```sql - mysql -u root -p (CTL) - - MariaDB [(none)]> CREATE DATABASE neutron; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \ - IDENTIFIED BY 'NEUTRON_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \ - IDENTIFIED BY 'NEUTRON_DBPASS'; - MariaDB [(none)]> exit - ``` - - ***注意*** - - **替换 `NEUTRON_DBPASS` 为 neutron 数据库设置密码。** - - ```shell - source ~/.admin-openrc (CTL) - ``` - - 创建neutron服务凭证 - - ```shell - openstack user create --domain default --password-prompt neutron (CTL) - openstack role add --project service --user neutron admin (CTL) - openstack service create --name neutron --description "OpenStack Networking" network (CTL) - ``` - - 创建Neutron服务API端点: - - ```shell - openstack endpoint create --region RegionOne network public http://controller:9696 (CTL) - openstack endpoint create --region RegionOne network internal http://controller:9696 (CTL) - openstack endpoint create --region RegionOne network admin http://controller:9696 (CTL) - ``` - -2. 安装软件包: - - ```shell - yum install openstack-neutron openstack-neutron-linuxbridge-agent ebtables ipset \ (CTL) - openstack-neutron-l3-agent openstack-neutron-dhcp-agent \ - openstack-neutron-metadata-agent - ``` - - ```shell - yum install openstack-neutron-linuxbridge-agent ebtables ipset (CPT) - ``` - -3. 
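As an alternative to re-running `nova-manage cell_v2 discover_hosts` every time a compute node is added, the scheduler can discover new hosts periodically. A hedged sketch of the relevant nova.conf option on the controller; the 300-second interval is only an example:

```shell
vim /etc/nova/nova.conf          (CTL)

[scheduler]
discover_hosts_in_cells_interval = 300
```

After changing the option, restart `openstack-nova-scheduler.service` for it to take effect.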
配置neutron相关配置: - - 配置主体配置 - - ```shell - vim /etc/neutron/neutron.conf - - [database] - connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron (CTL) - - [DEFAULT] - core_plugin = ml2 (CTL) - service_plugins = router (CTL) - allow_overlapping_ips = true (CTL) - transport_url = rabbit://openstack:RABBIT_PASS@controller - auth_strategy = keystone - notify_nova_on_port_status_changes = true (CTL) - notify_nova_on_port_data_changes = true (CTL) - api_workers = 3 (CTL) - - [keystone_authtoken] - www_authenticate_uri = http://controller:5000 - auth_url = http://controller:5000 - memcached_servers = controller:11211 - auth_type = password - project_domain_name = Default - user_domain_name = Default - project_name = service - username = neutron - password = NEUTRON_PASS - - [nova] - auth_url = http://controller:5000 (CTL) - auth_type = password (CTL) - project_domain_name = Default (CTL) - user_domain_name = Default (CTL) - region_name = RegionOne (CTL) - project_name = service (CTL) - username = nova (CTL) - password = NOVA_PASS (CTL) - - [oslo_concurrency] - lock_path = /var/lib/neutron/tmp - ``` - - ***解释*** - - [database]部分,配置数据库入口; - - [default]部分,启用ml2插件和router插件,允许ip地址重叠,配置RabbitMQ消息队列入口; - - [default] [keystone]部分,配置身份认证服务入口; - - [default] [nova]部分,配置网络来通知计算网络拓扑的变化; - - [oslo_concurrency]部分,配置lock path。 - - ***注意*** - - **替换`NEUTRON_DBPASS`为 neutron 数据库的密码;** - - **替换`RABBIT_PASS`为 RabbitMQ中openstack 账户的密码;** - - **替换`NEUTRON_PASS`为 neutron 用户的密码;** - - **替换`NOVA_PASS`为 nova 用户的密码。** - - 配置ML2插件: - - ```shell - vim /etc/neutron/plugins/ml2/ml2_conf.ini - - [ml2] - type_drivers = flat,vlan,vxlan - tenant_network_types = vxlan - mechanism_drivers = linuxbridge,l2population - extension_drivers = port_security - - [ml2_type_flat] - flat_networks = provider - - [ml2_type_vxlan] - vni_ranges = 1:1000 - - [securitygroup] - enable_ipset = true - ``` - - 创建/etc/neutron/plugin.ini的符号链接 - - ```shell - ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini - ``` - - **注意** - - **[ml2]部分,启用 flat、vlan、vxlan 网络,启用 linuxbridge 及 l2population 机制,启用端口安全扩展驱动;** - - **[ml2_type_flat]部分,配置 flat 网络为 provider 虚拟网络;** - - **[ml2_type_vxlan]部分,配置 VXLAN 网络标识符范围;** - - **[securitygroup]部分,配置允许 ipset。** - - **补充** - - **l2 的具体配置可以根据用户需求自行修改,本文使用的是provider network + linuxbridge** - - 配置 Linux bridge 代理: - - ```shell - vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini - - [linux_bridge] - physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME - - [vxlan] - enable_vxlan = true - local_ip = OVERLAY_INTERFACE_IP_ADDRESS - l2_population = true - - [securitygroup] - enable_security_group = true - firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver - ``` - - ***解释*** - - [linux_bridge]部分,映射 provider 虚拟网络到物理网络接口; - - [vxlan]部分,启用 vxlan 覆盖网络,配置处理覆盖网络的物理网络接口 IP 地址,启用 layer-2 population; - - [securitygroup]部分,允许安全组,配置 linux bridge iptables 防火墙驱动。 - - ***注意*** - - **替换`PROVIDER_INTERFACE_NAME`为物理网络接口;** - - **替换`OVERLAY_INTERFACE_IP_ADDRESS`为控制节点的管理IP地址。** - - 配置Layer-3代理: - - ```shell - vim /etc/neutron/l3_agent.ini (CTL) - - [DEFAULT] - interface_driver = linuxbridge - ``` - - ***解释*** - - 在[default]部分,配置接口驱动为linuxbridge - - 配置DHCP代理: - - ```shell - vim /etc/neutron/dhcp_agent.ini (CTL) - - [DEFAULT] - interface_driver = linuxbridge - dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq - enable_isolated_metadata = true - ``` - - ***解释*** - - [default]部分,配置linuxbridge接口驱动、Dnsmasq DHCP驱动,启用隔离的元数据。 - - 配置metadata代理: - - ```shell - vim /etc/neutron/metadata_agent.ini (CTL) 
- - [DEFAULT] - nova_metadata_host = controller - metadata_proxy_shared_secret = METADATA_SECRET - ``` - - ***解释*** - - [default]部分,配置元数据主机和shared secret。 - - ***注意*** - - **替换`METADATA_SECRET`为合适的元数据代理secret。** - -4. 配置nova相关配置 - - ```shell - vim /etc/nova/nova.conf - - [neutron] - auth_url = http://controller:5000 - auth_type = password - project_domain_name = Default - user_domain_name = Default - region_name = RegionOne - project_name = service - username = neutron - password = NEUTRON_PASS - service_metadata_proxy = true (CTL) - metadata_proxy_shared_secret = METADATA_SECRET (CTL) - ``` - - ***解释*** - - [neutron]部分,配置访问参数,启用元数据代理,配置secret。 - - ***注意*** - - **替换`NEUTRON_PASS`为 neutron 用户的密码;** - - **替换`METADATA_SECRET`为合适的元数据代理secret。** - -5. 同步数据库: - - ```shell - su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \ - --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron - ``` - -6. 重启计算API服务: - - ```shell - systemctl restart openstack-nova-api.service - ``` - -7. 启动网络服务 - - ```shell - systemctl enable openstack-neutron-server.service \ (CTL) - openstack-neutron-linuxbridge-agent.service openstack-neutron-dhcp-agent.service \ - openstack-neutron-metadata-agent.service openstack-neutron-l3-agent.service - systemctl restart openstack-nova-api.service openstack-neutron-server.service (CTL) - openstack-neutron-linuxbridge-agent.service openstack-neutron-dhcp-agent.service \ - openstack-neutron-metadata-agent.service openstack-neutron-l3-agent.service - - systemctl enable openstack-neutron-linuxbridge-agent.service (CPT) - systemctl restart openstack-neutron-linuxbridge-agent.service openstack-nova-compute.service (CPT) - ``` - -8. 验证 - - 列出代理验证 neutron 代理启动成功: - - ```shell - openstack network agent list - ``` - -### Cinder 安装 - -1. 创建数据库、服务凭证和 API 端点 - - 创建数据库: - - ```sql - mysql -u root -p - - MariaDB [(none)]> CREATE DATABASE cinder; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \ - IDENTIFIED BY 'CINDER_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \ - IDENTIFIED BY 'CINDER_DBPASS'; - MariaDB [(none)]> exit - ``` - - ***注意*** - - **替换 `CINDER_DBPASS` 为cinder数据库设置密码。** - - ```shell - source ~/.admin-openrc - ``` - - 创建cinder服务凭证: - - ```shell - openstack user create --domain default --password-prompt cinder - openstack role add --project service --user cinder admin - openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2 - openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3 - ``` - - 创建块存储服务API端点: - - ```shell - openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(project_id\)s - openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(project_id\)s - openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(project_id\)s - openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s - openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s - openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s - ``` - -2. 安装软件包: - - ```shell - yum install openstack-cinder-api openstack-cinder-scheduler (CTL) - ``` - - ```shell - yum install lvm2 device-mapper-persistent-data scsi-target-utils rpcbind nfs-utils \ (CPT) - openstack-cinder-volume openstack-cinder-backup - ``` - -3. 
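With the Neutron services from the previous section running, a provider network matching the `flat_networks = provider` ML2 setting can be created to exercise the network path end to end. A minimal sketch; the subnet range, gateway and DNS server below are placeholders that must match the real provider network:

```shell
source ~/.admin-openrc

openstack network create --share --external \
  --provider-physical-network provider \
  --provider-network-type flat provider

openstack subnet create --network provider \
  --allocation-pool start=192.168.1.100,end=192.168.1.200 \
  --dns-nameserver 8.8.8.8 --gateway 192.168.1.1 \
  --subnet-range 192.168.1.0/24 provider
```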
准备存储设备,以下仅为示例: - - ```shell - pvcreate /dev/vdb - vgcreate cinder-volumes /dev/vdb - - vim /etc/lvm/lvm.conf - - - devices { - ... - filter = [ "a/vdb/", "r/.*/"] - ``` - - ***解释*** - - 在devices部分,添加过滤以接受/dev/vdb设备拒绝其他设备。 - -4. 准备NFS - - ```shell - mkdir -p /root/cinder/backup - - cat << EOF >> /etc/export - /root/cinder/backup 192.168.1.0/24(rw,sync,no_root_squash,no_all_squash) - EOF - - ``` - -5. 配置cinder相关配置: - - ```shell - vim /etc/cinder/cinder.conf - - [DEFAULT] - transport_url = rabbit://openstack:RABBIT_PASS@controller - auth_strategy = keystone - my_ip = 10.0.0.11 - enabled_backends = lvm (CPT) - backup_driver=cinder.backup.drivers.nfs.NFSBackupDriver (CPT) - backup_share=HOST:PATH (CPT) - - [database] - connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder - - [keystone_authtoken] - www_authenticate_uri = http://controller:5000 - auth_url = http://controller:5000 - memcached_servers = controller:11211 - auth_type = password - project_domain_name = Default - user_domain_name = Default - project_name = service - username = cinder - password = CINDER_PASS - - [oslo_concurrency] - lock_path = /var/lib/cinder/tmp - - [lvm] - volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver (CPT) - volume_group = cinder-volumes (CPT) - iscsi_protocol = iscsi (CPT) - iscsi_helper = tgtadm (CPT) - ``` - - ***解释*** - - [database]部分,配置数据库入口; - - [DEFAULT]部分,配置RabbitMQ消息队列入口,配置my_ip; - - [DEFAULT] [keystone_authtoken]部分,配置身份认证服务入口; - - [oslo_concurrency]部分,配置lock path。 - - ***注意*** - - **替换`CINDER_DBPASS`为 cinder 数据库的密码;** - - **替换`RABBIT_PASS`为 RabbitMQ 中 openstack 账户的密码;** - - **配置`my_ip`为控制节点的管理 IP 地址;** - - **替换`CINDER_PASS`为 cinder 用户的密码;** - - **替换`HOST:PATH`为 NFS的HOSTIP和共享路径 用户的密码;** - -6. 同步数据库: - - ```shell - su -s /bin/sh -c "cinder-manage db sync" cinder (CTL) - ``` - -7. 配置nova: - - ```shell - vim /etc/nova/nova.conf (CTL) - - [cinder] - os_region_name = RegionOne - ``` - -8. 重启计算API服务 - - ```shell - systemctl restart openstack-nova-api.service - ``` - -9. 启动cinder服务 - - ```shell - systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service (CTL) - systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service (CTL) - ``` - - ```shell - systemctl enable rpcbind.service nfs-server.service tgtd.service iscsid.service \ (CPT) - openstack-cinder-volume.service \ - openstack-cinder-backup.service - systemctl start rpcbind.service nfs-server.service tgtd.service iscsid.service \ (CPT) - openstack-cinder-volume.service \ - openstack-cinder-backup.service - ``` - - ***注意*** - - 当cinder使用tgtadm的方式挂卷的时候,要修改/etc/tgt/tgtd.conf,内容如下,保证tgtd可以发现cinder-volume的iscsi target。 - - ``` - include /var/lib/cinder/volumes/* - ``` - -10. 验证 - - ```shell - source ~/.admin-openrc - openstack volume service list - ``` - -### horizon 安装 - -1. 安装软件包 - - ```shell - yum install openstack-dashboard - ``` - -2. 修改文件 - - 修改变量 - - ```text - vim /etc/openstack-dashboard/local_settings - - ALLOWED_HOSTS = ['*', ] - OPENSTACK_HOST = "controller" - OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST - ``` - -3. 重启 httpd 服务 - - ```shell - systemctl restart httpd - ``` - -4. 验证 - 打开浏览器,输入网址,登录 horizon。 - - ***注意*** - - **替换HOSTIP为控制节点管理平面IP地址** - -### Tempest 安装 - -Tempest是OpenStack的集成测试服务,如果用户需要全面自动化测试已安装的OpenStack环境的功能,则推荐使用该组件。否则,可以不用安装 - -1. 安装Tempest - - ```shell - yum install openstack-tempest - ``` - -2. 初始化目录 - - ```shell - tempest init mytest - ``` - -3. 
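Beyond `openstack volume service list`, creating a small test volume exercises the LVM backend configured in the Cinder section above end to end. An optional smoke test; the volume name is arbitrary:

```shell
source ~/.admin-openrc

openstack volume create --size 1 test-volume
openstack volume list                 # status should reach "available"
openstack volume delete test-volume
```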
修改配置文件。 - - ```shell - cd mytest - vi etc/tempest.conf - ``` - - tempest.conf中需要配置当前OpenStack环境的信息,具体内容可以参考[官方示例](https://docs.openstack.org/tempest/latest/sampleconf.html) - -4. 执行测试 - - ```shell - tempest run - ``` - -### Ironic 安装 - -Ironic是OpenStack的裸金属服务,如果用户需要进行裸机部署则推荐使用该组件。否则,可以不用安装。 - -1. 设置数据库 - - 裸金属服务在数据库中存储信息,创建一个**ironic**用户可以访问的**ironic**数据库,替换**IRONIC_DBPASSWORD**为合适的密码 - - ```sql - mysql -u root -p - - MariaDB [(none)]> CREATE DATABASE ironic CHARACTER SET utf8; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'localhost' \ - IDENTIFIED BY 'IRONIC_DBPASSWORD'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'%' \ - IDENTIFIED BY 'IRONIC_DBPASSWORD'; - ``` - -2. 创建服务用户认证 - - 1、创建Bare Metal服务用户 - - ```shell - openstack user create --password IRONIC_PASSWORD \ - --email ironic@example.com ironic - openstack role add --project service --user ironic admin - openstack service create --name ironic - --description "Ironic baremetal provisioning service" baremetal - - openstack service create --name ironic-inspector --description "Ironic inspector baremetal provisioning service" baremetal-introspection - openstack user create --password IRONIC_INSPECTOR_PASSWORD --email ironic_inspector@example.com ironic_inspector - openstack role add --project service --user ironic-inspector admin - ``` - - 2、创建Bare Metal服务访问入口 - - ```shell - openstack endpoint create --region RegionOne baremetal admin http://$IRONIC_NODE:6385 - openstack endpoint create --region RegionOne baremetal public http://$IRONIC_NODE:6385 - openstack endpoint create --region RegionOne baremetal internal http://$IRONIC_NODE:6385 - openstack endpoint create --region RegionOne baremetal-introspection internal http://172.20.19.13:5050/v1 - openstack endpoint create --region RegionOne baremetal-introspection public http://172.20.19.13:5050/v1 - openstack endpoint create --region RegionOne baremetal-introspection admin http://172.20.19.13:5050/v1 - ``` - -3. 配置ironic-api服务 - - 配置文件路径/etc/ironic/ironic.conf - - 1、通过**connection**选项配置数据库的位置,如下所示,替换**IRONIC_DBPASSWORD**为**ironic**用户的密码,替换**DB_IP**为DB服务器所在的IP地址: - - ```shell - [database] - - # The SQLAlchemy connection string used to connect to the - # database (string value) - - connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic - ``` - - 2、通过以下选项配置ironic-api服务使用RabbitMQ消息代理,替换**RPC_\***为RabbitMQ的详细地址和凭证 - - ```shell - [DEFAULT] - - # A URL representing the messaging driver to use and its full - # configuration. (string value) - - transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/ - ``` - - 用户也可自行使用json-rpc方式替换rabbitmq - - 3、配置ironic-api服务使用身份认证服务的凭证,替换**PUBLIC_IDENTITY_IP**为身份认证服务器的公共IP,替换**PRIVATE_IDENTITY_IP**为身份认证服务器的私有IP,替换**IRONIC_PASSWORD**为身份认证服务中**ironic**用户的密码: - - ```shell - [DEFAULT] - - # Authentication strategy used by ironic-api: one of - # "keystone" or "noauth". "noauth" should not be used in a - # production environment because all authentication will be - # disabled. (string value) - - auth_strategy=keystone - - [keystone_authtoken] - # Authentication type to load (string value) - auth_type=password - # Complete public Identity API endpoint (string value) - www_authenticate_uri=http://PUBLIC_IDENTITY_IP:5000 - # Complete admin Identity API endpoint. (string value) - auth_url=http://PRIVATE_IDENTITY_IP:5000 - # Service username. (string value) - username=ironic - # Service account password. (string value) - password=IRONIC_PASSWORD - # Service tenant name. 
(string value) - project_name=service - # Domain name containing project (string value) - project_domain_name=Default - # User's domain name (string value) - user_domain_name=Default - ``` - - 4、创建裸金属服务数据库表 - - ```shell - ironic-dbsync --config-file /etc/ironic/ironic.conf create_schema - ``` - - 5、重启ironic-api服务 - - ```shell - sudo systemctl restart openstack-ironic-api - ``` - -4. 配置ironic-conductor服务 - - 1、替换**HOST_IP**为conductor host的IP - - ```shell - [DEFAULT] - - # IP address of this host. If unset, will determine the IP - # programmatically. If unable to do so, will use "127.0.0.1". - # (string value) - - my_ip=HOST_IP - ``` - - 2、配置数据库的位置,ironic-conductor应该使用和ironic-api相同的配置。替换**IRONIC_DBPASSWORD**为**ironic**用户的密码,替换DB_IP为DB服务器所在的IP地址: - - ```shell - [database] - - # The SQLAlchemy connection string to use to connect to the - # database. (string value) - - connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic - ``` - - 3、通过以下选项配置ironic-api服务使用RabbitMQ消息代理,ironic-conductor应该使用和ironic-api相同的配置,替换**RPC_\***为RabbitMQ的详细地址和凭证 - - ```shell - [DEFAULT] - - # A URL representing the messaging driver to use and its full - # configuration. (string value) - - transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/ - ``` - - 用户也可自行使用json-rpc方式替换rabbitmq - - 4、配置凭证访问其他OpenStack服务 - - 为了与其他OpenStack服务进行通信,裸金属服务在请求其他服务时需要使用服务用户与OpenStack Identity服务进行认证。这些用户的凭据必须在与相应服务相关的每个配置文件中进行配置。 - - ```shell - [neutron] - 访问Openstack网络服务 - [glance] - 访问Openstack镜像服务 - [swift] - 访问Openstack对象存储服务 - [cinder] - 访问Openstack块存储服务 - [inspector] - 访问Openstack裸金属introspection服务 - [service_catalog] - 一个特殊项用于保存裸金属服务使用的凭证,该凭证用于发现注册在Openstack身份认证服务目录中的自己的API URL端点 - ``` - - 简单起见,可以对所有服务使用同一个服务用户。为了向后兼容,该用户应该和ironic-api服务的[keystone_authtoken]所配置的为同一个用户。但这不是必须的,也可以为每个服务创建并配置不同的服务用户。 - - 在下面的示例中,用户访问openstack网络服务的身份验证信息配置为: - - ```shell - 网络服务部署在名为RegionOne的身份认证服务域中,仅在服务目录中注册公共端点接口 - - 请求时使用特定的CA SSL证书进行HTTPS连接 - - 与ironic-api服务配置相同的服务用户 - - 动态密码认证插件基于其他选项发现合适的身份认证服务API版本 - ``` - - ```shell - [neutron] - - # Authentication type to load (string value) - auth_type = password - # Authentication URL (string value) - auth_url=https://IDENTITY_IP:5000/ - # Username (string value) - username=ironic - # User's password (string value) - password=IRONIC_PASSWORD - # Project name to scope to (string value) - project_name=service - # Domain ID containing project (string value) - project_domain_id=default - # User's domain id (string value) - user_domain_id=default - # PEM encoded Certificate Authority to use when verifying - # HTTPs connections. (string value) - cafile=/opt/stack/data/ca-bundle.pem - # The default region_name for endpoint URL discovery. (string - # value) - region_name = RegionOne - # List of interfaces, in order of preference, for endpoint - # URL. (list value) - valid_interfaces=public - ``` - - 默认情况下,为了与其他服务进行通信,裸金属服务会尝试通过身份认证服务的服务目录发现该服务合适的端点。如果希望对一个特定服务使用一个不同的端点,则在裸金属服务的配置文件中通过endpoint_override选项进行指定: - - ```shell - [neutron] ... 
endpoint_override = 
   ```

   5、配置允许的驱动程序和硬件类型

   通过设置enabled_hardware_types设置ironic-conductor服务允许使用的硬件类型:

   ```shell
   [DEFAULT]
   enabled_hardware_types = ipmi
   ```

   配置硬件接口:

   ```shell
   enabled_boot_interfaces = pxe
   enabled_deploy_interfaces = direct,iscsi
   enabled_inspect_interfaces = inspector
   enabled_management_interfaces = ipmitool
   enabled_power_interfaces = ipmitool
   ```

   配置接口默认值:

   ```shell
   [DEFAULT]
   default_deploy_interface = direct
   default_network_interface = neutron
   ```

   如果启用了任何使用Direct deploy的驱动,必须安装和配置镜像服务的Swift后端。Ceph对象网关(RADOS网关)也支持作为镜像服务的后端。

   6、重启ironic-conductor服务

   ```shell
   sudo systemctl restart openstack-ironic-conductor
   ```

5. 配置ironic-inspector服务

   配置文件路径/etc/ironic-inspector/inspector.conf

   1、创建数据库

   ```shell
   # mysql -u root -p

   MariaDB [(none)]> CREATE DATABASE ironic_inspector CHARACTER SET utf8;
   MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic_inspector.* TO 'ironic_inspector'@'localhost' \
     IDENTIFIED BY 'IRONIC_INSPECTOR_DBPASSWORD';
   MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic_inspector.* TO 'ironic_inspector'@'%' \
     IDENTIFIED BY 'IRONIC_INSPECTOR_DBPASSWORD';
   ```

   2、通过**connection**选项配置数据库的位置,如下所示,替换**IRONIC_INSPECTOR_DBPASSWORD**为**ironic_inspector**用户的密码,替换**DB_IP**为DB服务器所在的IP地址:

   ```shell
   [database]
   backend = sqlalchemy
   connection = mysql+pymysql://ironic_inspector:IRONIC_INSPECTOR_DBPASSWORD@DB_IP/ironic_inspector
   ```

   3、配置消息队列通信地址

   ```shell
   [DEFAULT]
   transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
   ```

   4、设置keystone认证

   ```shell
   [DEFAULT]

   auth_strategy = keystone

   [ironic]

   api_endpoint = http://IRONIC_API_HOST_ADDRESS:6385
   auth_type = password
   auth_url = http://PUBLIC_IDENTITY_IP:5000
   auth_strategy = keystone
   ironic_url = http://IRONIC_API_HOST_ADDRESS:6385
   os_region = RegionOne
   project_name = service
   project_domain_name = Default
   user_domain_name = Default
   username = IRONIC_SERVICE_USER_NAME
   password = IRONIC_SERVICE_USER_PASSWORD
   ```

   5、配置ironic inspector dnsmasq服务

   ```shell
   # 配置文件地址:/etc/ironic-inspector/dnsmasq.conf
   port=0
   interface=enp3s0 #替换为实际监听网络接口
   dhcp-range=172.20.19.100,172.20.19.110 #替换为实际dhcp地址范围
   bind-interfaces
   enable-tftp

   dhcp-match=set:efi,option:client-arch,7
   dhcp-match=set:efi,option:client-arch,9
   dhcp-match=aarch64, option:client-arch,11
   dhcp-boot=tag:aarch64,grubaa64.efi
   dhcp-boot=tag:!aarch64,tag:efi,grubx64.efi
   dhcp-boot=tag:!aarch64,tag:!efi,pxelinux.0

   tftp-root=/tftpboot #替换为实际tftpboot目录
   log-facility=/var/log/dnsmasq.log
   ```

   6、启动服务

   ```shell
   systemctl enable --now openstack-ironic-inspector.service
   systemctl enable --now openstack-ironic-inspector-dnsmasq.service
   ```

6. deploy ramdisk镜像制作

   Q版的ramdisk镜像支持通过ironic-python-agent服务或disk-image-builder工具制作,也可以使用社区最新的ironic-python-agent-builder,用户还可以自行选择其他工具制作。
   若使用Q版原生工具,则需要安装对应的软件包。

   ```
   yum install openstack-ironic-python-agent
   或者
   yum install diskimage-builder
   ```

   具体的使用方法可以参考[官方文档](https://docs.openstack.org/ironic/queens/install/deploy-ramdisk.html)

   这里介绍使用ironic-python-agent-builder构建ironic使用的deploy镜像的完整过程。

   1. 安装 ironic-python-agent-builder

      1. 安装工具:

         ```shell
         pip install ironic-python-agent-builder
         ```

      2. 修改以下文件中的python解释器:

         ```shell
         /usr/bin/yum /usr/libexec/urlgrabber-ext-down
         ```

      3. 
安装其它必须的工具: - - ```shell - yum install git - ``` - - 由于`DIB`依赖`semanage`命令,所以在制作镜像之前确定该命令是否可用:`semanage --help`,如果提示无此命令,安装即可: - - ```shell - # 先查询需要安装哪个包 - [root@localhost ~]# yum provides /usr/sbin/semanage - 已加载插件:fastestmirror - Loading mirror speeds from cached hostfile - * base: mirror.vcu.edu - * extras: mirror.vcu.edu - * updates: mirror.math.princeton.edu - policycoreutils-python-2.5-34.el7.aarch64 : SELinux policy core python utilities - 源 :base - 匹配来源: - 文件名 :/usr/sbin/semanage - # 安装 - [root@localhost ~]# yum install policycoreutils-python - ``` - - 2. 制作镜像 - - 如果是`arm`架构,需要添加: - ```shell - export ARCH=aarch64 - ``` - - 基本用法: - - ```shell - usage: ironic-python-agent-builder [-h] [-r RELEASE] [-o OUTPUT] [-e ELEMENT] - [-b BRANCH] [-v] [--extra-args EXTRA_ARGS] - distribution - - positional arguments: - distribution Distribution to use - - optional arguments: - -h, --help show this help message and exit - -r RELEASE, --release RELEASE - Distribution release to use - -o OUTPUT, --output OUTPUT - Output base file name - -e ELEMENT, --element ELEMENT - Additional DIB element to use - -b BRANCH, --branch BRANCH - If set, override the branch that is used for ironic- - python-agent and requirements - -v, --verbose Enable verbose logging in diskimage-builder - --extra-args EXTRA_ARGS - Extra arguments to pass to diskimage-builder - ``` - - 举例说明: - - ```shell - ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky - ``` - - 3. 允许ssh登陆 - - 初始化环境变量,然后制作镜像: - - ```shell - export DIB_DEV_USER_USERNAME=ipa \ - export DIB_DEV_USER_PWDLESS_SUDO=yes \ - export DIB_DEV_USER_PASSWORD='123' - ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky -e selinux-permissive -e devuser - ``` - - 4. 指定代码仓库 - - 初始化对应的环境变量,然后制作镜像: - - ```shell - # 指定仓库地址以及版本 - DIB_REPOLOCATION_ironic_python_agent=git@172.20.2.149:liuzz/ironic-python-agent.git - DIB_REPOREF_ironic_python_agent=origin/develop - - # 直接从gerrit上clone代码 - DIB_REPOLOCATION_ironic_python_agent=https://review.opendev.org/openstack/ironic-python-agent - DIB_REPOREF_ironic_python_agent=refs/changes/43/701043/1 - ``` - - 参考:[source-repositories](https://docs.openstack.org/diskimage-builder/latest/elements/source-repositories/README.html)。 - - 指定仓库地址及版本验证成功。 - -### Kolla 安装 - -Kolla为OpenStack服务提供生产环境可用的容器化部署的功能。openEuler 20.03 LTS SP2中引入了Kolla和Kolla-ansible服务。 - -Kolla的安装十分简单,只需要安装对应的RPM包即可 - -``` -yum install openstack-kolla openstack-kolla-ansible -``` - -安装完后,就可以使用`kolla-ansible`, `kolla-build`, `kolla-genpwd`, `kolla-mergepwd`等命令了。 - -### Trove 安装 -Trove是OpenStack的数据库服务,如果用户使用OpenStack提供的数据库服务则推荐使用该组件。否则,可以不用安装。 - -1. 设置数据库 - - 数据库服务在数据库中存储信息,创建一个**trove**用户可以访问的**trove**数据库,替换**TROVE_DBPASSWORD**为合适的密码 - - ```sql - mysql -u root -p - - MariaDB [(none)]> CREATE DATABASE trove CHARACTER SET utf8; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'localhost' \ - IDENTIFIED BY 'TROVE_DBPASSWORD'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'%' \ - IDENTIFIED BY 'TROVE_DBPASSWORD'; - ``` - -2. 
创建服务用户认证 - - 1、创建**Trove**服务用户 - - ```shell - openstack user create --password TROVE_PASSWORD \ - --email trove@example.com trove - openstack role add --project service --user trove admin - openstack service create --name trove - --description "Database service" database - ``` - **解释:** `TROVE_PASSWORD` 替换为`trove`用户的密码 - - 2、创建**Database**服务访问入口 - - ```shell - openstack endpoint create --region RegionOne database public http://$TROVE_NODE:8779/v1.0/%\(tenant_id\)s - openstack endpoint create --region RegionOne database internal http://$TROVE_NODE:8779/v1.0/%\(tenant_id\)s - openstack endpoint create --region RegionOne database admin http://$TROVE_NODE:8779/v1.0/%\(tenant_id\)s - ``` - **解释:** `$TROVE_NODE` 替换为Trove的API服务部署节点 - -3. 安装和配置**Trove**各组件 - 1、安装**Trove**包 - ```shell script - yum install openstack-trove python-troveclient - ``` - 2. 配置`trove.conf` - ```shell script - vim /etc/trove/trove.conf - - [DEFAULT] - bind_host=TROVE_NODE_IP - log_dir = /var/log/trove - - auth_strategy = keystone - # Config option for showing the IP address that nova doles out - add_addresses = True - network_label_regex = ^NETWORK_LABEL$ - api_paste_config = /etc/trove/api-paste.ini - - trove_auth_url = http://controller:35357/v3/ - nova_compute_url = http://controller:8774/v2 - cinder_url = http://controller:8776/v1 - - nova_proxy_admin_user = admin - nova_proxy_admin_pass = ADMIN_PASS - nova_proxy_admin_tenant_name = service - taskmanager_manager = trove.taskmanager.manager.Manager - use_nova_server_config_drive = True - - # Set these if using Neutron Networking - network_driver=trove.network.neutron.NeutronDriver - network_label_regex=.* - - - transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/ - - [database] - connection = mysql+pymysql://trove:TROVE_DBPASS@controller/trove - - [keystone_authtoken] - www_authenticate_uri = http://controller:5000/v3/ - auth_url=http://controller:35357/v3/ - #auth_uri = http://controller/identity - #auth_url = http://controller/identity_admin - auth_type = password - project_domain_name = default - user_domain_name = default - project_name = service - username = trove - password = TROVE_PASS - - ``` - **解释:** - - `[Default]`分组中`bind_host`配置为Trove部署节点的IP - - `nova_compute_url` 和 `cinder_url` 为Nova和Cinder在Keystone中创建的endpoint - - `nova_proxy_XXX` 为一个能访问Nova服务的用户信息,上例中使用`admin`用户为例 - - `transport_url` 为`RabbitMQ`连接信息,`RABBIT_PASS`替换为RabbitMQ的密码 - - `[database]`分组中的`connection` 为前面在mysql中为Trove创建的数据库信息 - - Trove的用户信息中`TROVE_PASS`替换为实际trove用户的密码 - - 3. 配置`trove-taskmanager.conf` - ```shell script - vim /etc/trove/trove-taskmanager.conf - - [DEFAULT] - log_dir = /var/log/trove - trove_auth_url = http://controller/identity/v2.0 - nova_compute_url = http://controller:8774/v2 - cinder_url = http://controller:8776/v1 - transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/ - - [database] - connection = mysql+pymysql://trove:TROVE_DBPASS@controller/trove - ``` - **解释:** 参照`trove.conf`配置 - - 4. 配置`trove-conductor.conf` - ```shell script - vim /etc/trove/trove-conductor.conf - - [DEFAULT] - log_dir = /var/log/trove - trove_auth_url = http://controller/identity/v2.0 - nova_compute_url = http://controller:8774/v2 - cinder_url = http://controller:8776/v1 - transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/ - - [database] - connection = mysql+pymysql://trove:trove@controller/trove - ``` - **解释:** 参照`trove.conf`配置 - - 5. 
配置`trove-guestagent.conf` - ```shell script - vim /etc/trove/trove-guestagent.conf - [DEFAULT] - rabbit_host = controller - rabbit_password = RABBIT_PASS - nova_proxy_admin_user = admin - nova_proxy_admin_pass = ADMIN_PASS - nova_proxy_admin_tenant_name = service - trove_auth_url = http://controller/identity_admin/v2.0 - ``` - **解释:** `guestagent`是trove中一个独立组件,需要预先内置到Trove通过Nova创建的虚拟 - 机镜像中,在创建好数据库实例后,会起guestagent进程,负责通过消息队列(RabbitMQ)向Trove上 - 报心跳,因此需要配置RabbitMQ的用户和密码信息。 - - 6. 生成数据`Trove`数据库表 - ```shell script - su -s /bin/sh -c "trove-manage db_sync" trove - ``` -4. 完成安装配置 - 1. 配置**Trove**服务自启动 - ```shell script - systemctl enable openstack-trove-api.service \ - openstack-trove-taskmanager.service \ - openstack-trove-conductor.service - ``` - 2. 启动服务 - ```shell script - systemctl start openstack-trove-api.service \ - openstack-trove-taskmanager.service \ - openstack-trove-conductor.service - ``` - \ No newline at end of file diff --git a/docs/install/openEuler-20.03-LTS-SP2/OpenStack-rocky.md b/docs/install/openEuler-20.03-LTS-SP2/OpenStack-rocky.md deleted file mode 100644 index 0dcd355cc7a26b3840d8658e222cc919f546e0a3..0000000000000000000000000000000000000000 --- a/docs/install/openEuler-20.03-LTS-SP2/OpenStack-rocky.md +++ /dev/null @@ -1,2104 +0,0 @@ - - -# OpenStack-Rocky 部署指南 - - - -- [OpenStack-Rocky 部署指南](#openstack-rocky-部署指南) - - - [OpenStack 简介](#openstack-简介) - - [准备环境](#准备环境) - - - [环境配置](#环境配置) - - [安装 SQL DataBase](#安装-sql-database) - - [安装 RabbitMQ](#安装-rabbitmq) - - [安装 Memcached](#安装-memcached) - - [安装 OpenStack](#安装-openstack) - - - [Keystone 安装](#keystone-安装) - - - [Glance 安装](#glance-安装) - - - [Nova 安装](#nova-安装) - - - [Neutron 安装](#neutron-安装) - - - [Cinder 安装](#cinder-安装) - - - [Horizon 安装](#Horizon-安装) - - - [Tempest 安装](#tempest-安装) - - - [Ironic 安装](#ironic-安装) - - - [Kolla 安装](#kolla-安装) - - - [Trove 安装](#Trove-安装) - - - -## OpenStack 简介 - -OpenStack 是一个社区,也是一个项目。它提供了一个部署云的操作平台或工具集,为组织提供可扩展的、灵活的云计算。 - -作为一个开源的云计算管理平台,OpenStack 由nova、cinder、neutron、glance、keystone、horizon等几个主要的组件组合起来完成具体工作。OpenStack 支持几乎所有类型的云环境,项目目标是提供实施简单、可大规模扩展、丰富、标准统一的云计算管理平台。OpenStack 通过各种互补的服务提供了基础设施即服务(IaaS)的解决方案,每个服务提供 API 进行集成。 - -openEuler 20.03-LTS-SP2 版本官方认证的第三方oepkg yum 源已经支持 Openstack-Rocky 版本,用户可以配置好oepkg yum 源后根据此文档进行 OpenStack 部署。 - - -## 准备环境 -### OpenStack yum源配置 - -配置 20.03-LTS-SP2 官方认证的第三方源 oepkg,以x86_64为例 - -```shell -$ cat << EOF >> /etc/yum.repos.d/OpenStack_Rocky.repo -[openstack_rocky] -name=OpenStack_Rocky -baseurl=https://repo.oepkgs.net/openEuler/rpm/openEuler-20.03-LTS-SP2/budding-openeuler/openstack/rocky/x86_64/ -gpgcheck=0 -enabled=1 -EOF -``` - -```shell -$ yum clean all && yum makecache -``` - -### 环境配置 - -在`/etc/hosts`中添加controller信息,例如节点IP是`10.0.0.11`,则新增: - -``` -10.0.0.11 controller -``` - -### 安装 SQL DataBase - -1. 执行如下命令,安装软件包。 - - ```shell - $ yum install mariadb mariadb-server python2-PyMySQL - ``` -2. 创建并编辑 `/etc/my.cnf.d/openstack.cnf` 文件。 - - 复制如下内容到文件,其中 bind-address 设置为控制节点的管理IP地址。 - ```ini - [mysqld] - bind-address = 10.0.0.11 - default-storage-engine = innodb - innodb_file_per_table = on - max_connections = 4096 - collation-server = utf8_general_ci - character-set-server = utf8 - ``` - -3. 启动 DataBase 服务,并为其配置开机自启动: - - ```shell - $ systemctl enable mariadb.service - $ systemctl start mariadb.service - ``` -### 安装 RabbitMQ - -1. 执行如下命令,安装软件包。 - - ```shell - $ yum install rabbitmq-server - ``` - -2. 启动 RabbitMQ 服务,并为其配置开机自启动。 - - ```shell - $ systemctl enable rabbitmq-server.service - $ systemctl start rabbitmq-server.service - ``` -3. 
添加 OpenStack用户。 - - ```shell - $ rabbitmqctl add_user openstack RABBIT_PASS - ``` -4. 替换 RABBIT_PASS,为OpenStack用户设置密码 - -5. 设置openstack用户权限,允许进行配置、写、读: - - ```shell - $ rabbitmqctl set_permissions openstack ".*" ".*" ".*" - ``` - -### 安装 Memcached - -1. 执行如下命令,安装依赖软件包。 - - ```shell - $ yum install memcached python2-memcached - ``` -2. 编辑 `/etc/sysconfig/memcached` 文件,添加以下内容 - - ```shell - OPTIONS="-l 127.0.0.1,::1,controller" - ``` - OPTIONS 修改为实际环境中控制节点的管理IP地址。 - -3. 执行如下命令,启动 Memcached 服务,并为其配置开机启动。 - - ```shell - $ systemctl enable memcached.service - $ systemctl start memcached.service - ``` - -## 安装 OpenStack - -### Keystone 安装 - -1. 以 root 用户访问数据库,创建 keystone 数据库并授权。 - - ```shell - $ mysql -u root -p - ``` - - ```sql - MariaDB [(none)]> CREATE DATABASE keystone; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \ - IDENTIFIED BY 'KEYSTONE_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \ - IDENTIFIED BY 'KEYSTONE_DBPASS'; - MariaDB [(none)]> exit - ``` - 替换 KEYSTONE_DBPASS,为 Keystone 数据库设置密码 - -2. 执行如下命令,安装软件包。 - - ```shell - $ yum install openstack-keystone httpd python2-mod_wsgi - ``` - -3. 配置keystone,编辑 `/etc/keystone/keystone.conf` 文件。在[database]部分,配置数据库入口。在[token]部分,配置token provider - - ```ini - [database] - connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone - [token] - provider = fernet - ``` - 替换KEYSTONE_DBPASS为Keystone数据库的密码 - -4. 执行如下命令,同步数据库。 - - ```shell - su -s /bin/sh -c "keystone-manage db_sync" keystone - ``` - -5. 执行如下命令,初始化Fernet密钥仓库。 - - ```shell - $ keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone - $ keystone-manage credential_setup --keystone-user keystone --keystone-group keystone - ``` - -6. 执行如下命令,启动身份服务。 - - ```shell - $ keystone-manage bootstrap --bootstrap-password ADMIN_PASS \ - --bootstrap-admin-url http://controller:5000/v3/ \ - --bootstrap-internal-url http://controller:5000/v3/ \ - --bootstrap-public-url http://controller:5000/v3/ \ - --bootstrap-region-id RegionOne - ``` - 替换 ADMIN_PASS,为 admin 用户设置密码。 - -7. 编辑 `/etc/httpd/conf/httpd.conf` 文件,配置Apache HTTP server - - ```shell - $ vim /etc/httpd/conf/httpd.conf - ``` - - 配置 ServerName 项引用控制节点,如下所示。 - ``` - ServerName controller - ``` - - 如果 ServerName 项不存在则需要创建。 - -8. 执行如下命令,为 `/usr/share/keystone/wsgi-keystone.conf` 文件创建链接。 - - ```shell - $ ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/ - ``` - -9. 完成安装,执行如下命令,启动Apache HTTP服务。 - - ```shell - $ systemctl enable httpd.service - $ systemctl start httpd.service - ``` - -10. 安装OpenStackClient - - ```shell - $ yum install python2-openstackclient - ``` - -11. 创建 OpenStack client 环境脚本 - - 创建admin用户的环境变量脚本: - - ```shell - # vim admin-openrc - - export OS_PROJECT_DOMAIN_NAME=Default - export OS_USER_DOMAIN_NAME=Default - export OS_PROJECT_NAME=admin - export OS_USERNAME=admin - export OS_PASSWORD=ADMIN_PASS - export OS_AUTH_URL=http://controller:5000/v3 - export OS_IDENTITY_API_VERSION=3 - export OS_IMAGE_API_VERSION=2 - ``` - - 替换ADMIN_PASS为admin用户的密码, 与上述`keystone-manage bootstrap` 命令中设置的密码一致 - 运行脚本加载环境变量: - - ```shell - $ source admin-openrc - ``` - -12. 
分别执行如下命令,创建domain, projects, users, roles。 - - 创建domain ‘example’: - - ```shell - $ openstack domain create --description "An Example Domain" example - ``` - - 注:domain ‘default’在 keystone-manage bootstrap 时已创建 - - 创建project ‘service’: - - ```shell - $ openstack project create --domain default --description "Service Project" service - ``` - - 创建(non-admin)project ’myproject‘,user ’myuser‘ 和 role ’myrole‘,为‘myproject’和‘myuser’添加角色‘myrole’: - - ```shell - $ openstack project create --domain default --description "Demo Project" myproject - $ openstack user create --domain default --password-prompt myuser - $ openstack role create myrole - $ openstack role add --project myproject --user myuser myrole - ``` - -13. 验证 - - 取消临时环境变量OS_AUTH_URL和OS_PASSWORD: - - ```shell - $ unset OS_AUTH_URL OS_PASSWORD - ``` - - 为admin用户请求token: - - ```shell - $ openstack --os-auth-url http://controller:5000/v3 \ - --os-project-domain-name Default --os-user-domain-name Default \ - --os-project-name admin --os-username admin token issue - ``` - - 为myuser用户请求token: - - ```shell - $ openstack --os-auth-url http://controller:5000/v3 \ - --os-project-domain-name Default --os-user-domain-name Default \ - --os-project-name myproject --os-username myuser token issue - ``` - - -### Glance 安装 - -1. 创建数据库、服务凭证和 API 端点 - - 创建数据库: - - 以 root 用户访问数据库,创建 glance 数据库并授权。 - - ```shell - $ mysql -u root -p - ``` - - - - ```sql - MariaDB [(none)]> CREATE DATABASE glance; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \ - IDENTIFIED BY 'GLANCE_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \ - IDENTIFIED BY 'GLANCE_DBPASS'; - MariaDB [(none)]> exit - ``` - - 替换 GLANCE_DBPASS,为 glance 数据库设置密码。 - - ```shell - $ source admin-openrc - ``` - - 执行以下命令,分别完成创建 glance 服务凭证、创建glance用户和添加‘admin’角色到用户‘glance’。 - - ```shell - $ openstack user create --domain default --password-prompt glance - $ openstack role add --project service --user glance admin - $ openstack service create --name glance --description "OpenStack Image" image - ``` - 创建镜像服务API端点: - - ```shell - $ openstack endpoint create --region RegionOne image public http://controller:9292 - $ openstack endpoint create --region RegionOne image internal http://controller:9292 - $ openstack endpoint create --region RegionOne image admin http://controller:9292 - ``` - -2. 安装和配置 - - 安装软件包: - - ```shell - $ yum install openstack-glance - ``` - 配置glance: - - 编辑 `/etc/glance/glance-api.conf` 文件: - - 在[database]部分,配置数据库入口 - - 在[keystone_authtoken] [paste_deploy]部分,配置身份认证服务入口 - - 在[glance_store]部分,配置本地文件系统存储和镜像文件的位置 - - ```ini - [database] - # ... - connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance - [keystone_authtoken] - # ... - www_authenticate_uri = http://controller:5000 - auth_url = http://controller:5000 - memcached_servers = controller:11211 - auth_type = password - project_domain_name = Default - user_domain_name = Default - project_name = service - username = glance - password = GLANCE_PASS - [paste_deploy] - # ... - flavor = keystone - [glance_store] - # ... - stores = file,http - default_store = file - filesystem_store_datadir = /var/lib/glance/images/ - ``` - - 编辑 `/etc/glance/glance-registry.conf` 文件: - - 在[database]部分,配置数据库入口 - - 在[keystone_authtoken] [paste_deploy]部分,配置身份认证服务入口 - - ```ini - [database] - # ... - connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance - [keystone_authtoken] - # ... 
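# 以下 [keystone_authtoken] 配置与 glance-api.conf 中的同名配置保持一致即可,GLANCE_PASS 为 glance 用户的密码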
- www_authenticate_uri = http://controller:5000 - auth_url = http://controller:5000 - memcached_servers = controller:11211 - auth_type = password - project_domain_name = Default - user_domain_name = Default - project_name = service - username = glance - password = GLANCE_PASS - [paste_deploy] - # ... - flavor = keystone - ``` - - 其中,替换 GLANCE_DBPASS 为 glance 数据库的密码,替换 GLANCE_PASS 为 glance 用户的密码。 - - 同步数据库: - - ```shell - $ su -s /bin/sh -c "glance-manage db_sync" glance - ``` - 启动镜像服务: - - ```shell - $ systemctl enable openstack-glance-api.service openstack-glance-registry.service - $ systemctl start openstack-glance-api.service openstack-glance-registry.service - ``` - -3. 验证 - - 下载镜像 - ```shell - $ source admin-openrc - # 注意:如果您使用的环境是鲲鹏架构,请下载arm64版本的镜像。 - $ wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img - ``` - - 向Image服务上传镜像: - - ```shell - $ glance image-create --name "cirros" --file cirros-0.4.0-x86_64-disk.img --disk-format qcow2 --container-format bare --visibility=public - ``` - - 确认镜像上传并验证属性: - - ```shell - $ glance image-list - ``` -### Nova 安装 - -1. 创建数据库、服务凭证和 API 端点 - - 创建数据库: - - 作为root用户访问数据库,创建nova、nova_api、nova_cell0 数据库并授权 - - ```shell - $ mysql -u root -p - ``` - - ```SQL - MariaDB [(none)]> CREATE DATABASE nova_api; - MariaDB [(none)]> CREATE DATABASE nova; - MariaDB [(none)]> CREATE DATABASE nova_cell0; - MariaDB [(none)]> CREATE DATABASE placement; - - MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \ - IDENTIFIED BY 'NOVA_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \ - IDENTIFIED BY 'NOVA_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \ - IDENTIFIED BY 'NOVA_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \ - IDENTIFIED BY 'NOVA_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \ - IDENTIFIED BY 'NOVA_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \ - IDENTIFIED BY 'NOVA_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' \ - IDENTIFIED BY 'PLACEMENT_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' \ - IDENTIFIED BY 'PLACEMENT_DBPASS'; - MariaDB [(none)]> exit - ``` - 替换NOVA_DBPASS及PLACEMENT_DBPASS,为nova及placement数据库设置密码 - - 执行如下命令,完成创建nova服务凭证、创建nova用户以及添加‘admin’角色到用户‘nova’。 - - ```shell - $ . 
admin-openrc - $ openstack user create --domain default --password-prompt nova - $ openstack role add --project service --user nova admin - $ openstack service create --name nova --description "OpenStack Compute" compute - ``` - - 创建计算服务API端点: - - ```shell - $ openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1 - $ openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1 - $ openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1 - ``` - - 创建placement用户并添加‘admin’角色到用户‘placement’: - ```shell - $ openstack user create --domain default --password-prompt placement - $ openstack role add --project service --user placement admin - ``` - - 创建placement服务凭证及API服务端点: - ```shell - $ openstack service create --name placement --description "Placement API" placement - $ openstack endpoint create --region RegionOne placement public http://controller:8778 - $ openstack endpoint create --region RegionOne placement internal http://controller:8778 - $ openstack endpoint create --region RegionOne placement admin http://controller:8778 - ``` - -2. 安装和配置 - - 安装软件包: - - ```shell - $ yum install openstack-nova-api openstack-nova-conductor \ - openstack-nova-novncproxy openstack-nova-scheduler openstack-nova-compute \ - openstack-nova-placement-api openstack-nova-console - ``` - - 配置nova: - - 编辑 `/etc/nova/nova.conf` 文件: - - 在[default]部分,启用计算和元数据的API,配置RabbitMQ消息队列入口,配置my_ip,启用网络服务neutron; - - 在[api_database] [database] [placement_database]部分,配置数据库入口; - - 在[api] [keystone_authtoken]部分,配置身份认证服务入口; - - 在[vnc]部分,启用并配置远程控制台入口; - - 在[glance]部分,配置镜像服务API的地址; - - 在[oslo_concurrency]部分,配置lock path; - - 在[placement]部分,配置placement服务的入口。 - - ```ini - [DEFAULT] - # ... - enabled_apis = osapi_compute,metadata - transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/ - my_ip = 10.0.0.11 - use_neutron = true - firewall_driver = nova.virt.firewall.NoopFirewallDriver - compute_driver = libvirt.LibvirtDriver - instances_path = /var/lib/nova/instances/ - [api_database] - # ... - connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api - [database] - # ... - connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova - [placement_database] - # ... - connection = mysql+pymysql://placement:PLACEMENT_DBPASS@controller/placement - [api] - # ... - auth_strategy = keystone - [keystone_authtoken] - # ... - www_authenticate_uri = http://controller:5000/ - auth_url = http://controller:5000/ - memcached_servers = controller:11211 - auth_type = password - project_domain_name = Default - user_domain_name = Default - project_name = service - username = nova - password = NOVA_PASS - [vnc] - enabled = true - # ... - server_listen = $my_ip - server_proxyclient_address = $my_ip - novncproxy_base_url = http://controller:6080/vnc_auto.html - [glance] - # ... - api_servers = http://controller:9292 - [oslo_concurrency] - # ... - lock_path = /var/lib/nova/tmp - [placement] - # ... - region_name = RegionOne - project_domain_name = Default - project_name = service - auth_type = password - user_domain_name = Default - auth_url = http://controller:5000/v3 - username = placement - password = PLACEMENT_PASS - [neutron] - # ... 
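# 以下 [neutron] 凭证与后文 Neutron 安装章节中创建的 neutron 服务用户保持一致,NEUTRON_PASS 为该用户的密码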
- auth_url = http://controller:5000 - auth_type = password - project_domain_name = Default - user_domain_name = Default - region_name = RegionOne - project_name = service - username = neutron - password = NEUTRON_PASS - ``` - - 替换RABBIT_PASS为RabbitMQ中openstack账户的密码; - - 配置my_ip为控制节点的管理IP地址; - - 替换NOVA_DBPASS为nova数据库的密码; - - 替换PLACEMENT_DBPASS为placement数据库的密码; - - 替换NOVA_PASS为nova用户的密码; - - 替换PLACEMENT_PASS为placement用户的密码; - - 替换NEUTRON_PASS为neutron用户的密码; - - 编辑`/etc/httpd/conf.d/00-nova-placement-api.conf`,增加Placement API接入配置 - - ```xml - - = 2.4> - Require all granted - - - Order allow,deny - Allow from all - - - ``` - - 重启httpd服务: - - ```shell - $ systemctl restart httpd - ``` - - 同步nova-api数据库: - - ```shell - $ su -s /bin/sh -c "nova-manage api_db sync" nova - ``` - 注册cell0数据库: - - ```shell - $ su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova - ``` - 创建cell1 cell: - - ```shell - $ su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova - ``` - 同步nova数据库: - - ```shell - $ su -s /bin/sh -c "nova-manage db sync" nova - ``` - 验证cell0和cell1注册正确: - - ```shell - su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova - ``` - 确定是否支持虚拟机硬件加速(x86架构): - - ```shell - $ egrep -c '(vmx|svm)' /proc/cpuinfo - ``` - - 如果返回值为0则不支持硬件加速,需要配置libvirt使用QEMU而不是KVM: - **注意:** 如果是在ARM64的服务器上,还需要在配置`cpu_mode`为`custom`,`cpu_model`为`cortex-a72` - - ```ini - # vim /etc/nova/nova.conf - [libvirt] - # ... - virt_type = qemu - cpu_mode = custom - cpu_model = cortex-a72 - ``` - 如果返回值为1或更大的值,则支持硬件加速,不需要进行额外的配置 - - ***注意*** - - **如果为arm64结构,还需要在`compute`节点执行以下命令** - - ```shell - mkdir -p /usr/share/AAVMF - ln -s /usr/share/edk2/aarch64/QEMU_EFI-pflash.raw \ - /usr/share/AAVMF/AAVMF_CODE.fd - ln -s /usr/share/edk2/aarch64/vars-template-pflash.raw \ - /usr/share/AAVMF/AAVMF_VARS.fd - chown nova:nova /usr/share/AAVMF -R - - vim /etc/libvirt/qemu.conf - - nvram = ["/usr/share/AAVMF/AAVMF_CODE.fd:/usr/share/AAVMF/AAVMF_VARS.fd", - "/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw:/usr/share/edk2/aarch64/vars-template-pflash.raw" - ] - ``` - - 启动计算服务及其依赖项,并配置其开机启动: - - ```shell - $ systemctl enable \ - openstack-nova-api.service \ - openstack-nova-scheduler.service \ - openstack-nova-conductor.service \ - openstack-nova-novncproxy.service - $ systemctl start \ - openstack-nova-api.service \ - openstack-nova-scheduler.service \ - openstack-nova-conductor.service \ - openstack-nova-novncproxy.service - ``` - ```bash - $ systemctl enable libvirtd.service openstack-nova-compute.service - $ systemctl start libvirtd.service openstack-nova-compute.service - ``` - 添加计算节点到cell数据库: - - 确认计算节点存在: - - ```bash - $ . admin-openrc - $ openstack compute service list --service nova-compute - ``` - 注册计算节点: - - ```bash - $ su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova - ``` - -3. 验证 - - ```shell - $ . admin-openrc - ``` - 列出服务组件,验证每个流程都成功启动和注册: - - ```shell - $ openstack compute service list - ``` - - 列出身份服务中的API端点,验证与身份服务的连接: - - ```shell - $ openstack catalog list - ``` - - 列出镜像服务中的镜像,验证与镜像服务的连接: - - ```shell - $ openstack image list - ``` - - 检查cells和placement API是否运作成功,以及其他必要条件是否已具备。 - - ```shell - $ nova-status upgrade check - ``` -### Neutron 安装 - -1. 
创建数据库、服务凭证和 API 端点 - - 创建数据库: - - 作为root用户访问数据库,创建 neutron 数据库并授权。 - - ```shell - $ mysql -u root -p - ``` - - ```sql - MariaDB [(none)]> CREATE DATABASE neutron; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \ - IDENTIFIED BY 'NEUTRON_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \ - IDENTIFIED BY 'NEUTRON_DBPASS'; - MariaDB [(none)]> exit - ``` - 替换NEUTRON_DBPASS,为neutron数据库设置密码。 - - ```shell - $ . admin-openrc - ``` - 执行如下命令,完成创建 neutron 服务凭证、创建neutron用户和添加‘admin’角色到‘neutron’用户操作。 - - 创建neutron服务 - - ```shell - $ openstack user create --domain default --password-prompt neutron - $ openstack role add --project service --user neutron admin - $ openstack service create --name neutron --description "OpenStack Networking" network - ``` - 创建网络服务API端点: - - ```shell - $ openstack endpoint create --region RegionOne network public http://controller:9696 - $ openstack endpoint create --region RegionOne network internal http://controller:9696 - $ openstack endpoint create --region RegionOne network admin http://controller:9696 - ``` - -2. 安装和配置 Self-service 网络 - - 安装软件包: - - ```shell - $ yum install openstack-neutron openstack-neutron-ml2 \ - openstack-neutron-linuxbridge ebtables ipset - ``` - 配置neutron: - - 编辑 /etc/neutron/neutron.conf 文件: - - 在[database]部分,配置数据库入口; - - 在[default]部分,启用ml2插件和router插件,允许ip地址重叠,配置RabbitMQ消息队列入口; - - 在[default] [keystone]部分,配置身份认证服务入口; - - 在[default] [nova]部分,配置网络来通知计算网络拓扑的变化; - - 在[oslo_concurrency]部分,配置lock path。 - - ```ini - [database] - # ... - connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron - [DEFAULT] - # ... - core_plugin = ml2 - service_plugins = router - allow_overlapping_ips = true - transport_url = rabbit://openstack:RABBIT_PASS@controller - auth_strategy = keystone - notify_nova_on_port_status_changes = true - notify_nova_on_port_data_changes = true - [keystone_authtoken] - # ... - www_authenticate_uri = http://controller:5000 - auth_url = http://controller:5000 - memcached_servers = controller:11211 - auth_type = password - project_domain_name = Default - user_domain_name = Default - project_name = service - username = neutron - password = NEUTRON_PASS - [nova] - # ... - auth_url = http://controller:5000 - auth_type = password - project_domain_name = Default - user_domain_name = Default - region_name = RegionOne - project_name = service - username = nova - password = NOVA_PASS - [oslo_concurrency] - # ... - lock_path = /var/lib/neutron/tmp - ``` - - 替换NEUTRON_DBPASS为neutron数据库的密码; - - 替换RABBIT_PASS为RabbitMQ中openstack账户的密码; - - 替换NEUTRON_PASS为neutron用户的密码; - - 替换NOVA_PASS为nova用户的密码。 - - 配置ML2插件: - - 编辑 /etc/neutron/plugins/ml2/ml2_conf.ini 文件: - - 在[ml2]部分,启用 flat、vlan、vxlan 网络,启用网桥及 layer-2 population 机制,启用端口安全扩展驱动; - - 在[ml2_type_flat]部分,配置 flat 网络为 provider 虚拟网络; - - 在[ml2_type_vxlan]部分,配置 VXLAN 网络标识符范围; - - 在[securitygroup]部分,配置允许 ipset。 - - ```ini - # vim /etc/neutron/plugins/ml2/ml2_conf.ini - [ml2] - # ... - type_drivers = flat,vlan,vxlan - tenant_network_types = vxlan - mechanism_drivers = linuxbridge,l2population - extension_drivers = port_security - [ml2_type_flat] - # ... - flat_networks = provider - [ml2_type_vxlan] - # ... - vni_ranges = 1:1000 - [securitygroup] - # ... 
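# 启用 ipset 可以提高安全组 iptables 规则的匹配效率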
- enable_ipset = true - ``` - 配置 Linux bridge 代理: - - 编辑 /etc/neutron/plugins/ml2/linuxbridge_agent.ini 文件: - - 在[linux_bridge]部分,映射 provider 虚拟网络到物理网络接口; - - 在[vxlan]部分,启用 vxlan 覆盖网络,配置处理覆盖网络的物理网络接口 IP 地址,启用 layer-2 population; - - 在[securitygroup]部分,允许安全组,配置 linux bridge iptables 防火墙驱动。 - - ```ini - [linux_bridge] - physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME - [vxlan] - enable_vxlan = true - local_ip = OVERLAY_INTERFACE_IP_ADDRESS - l2_population = true - [securitygroup] - # ... - enable_security_group = true - firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver - ``` - 替换PROVIDER_INTERFACE_NAME为物理网络接口; - - 替换OVERLAY_INTERFACE_IP_ADDRESS为控制节点的管理IP地址。 - - 配置Layer-3代理: - - 编辑 /etc/neutron/l3_agent.ini 文件: - - 在[default]部分,配置接口驱动为linuxbridge - - ```ini - [DEFAULT] - # ... - interface_driver = linuxbridge - ``` - 配置DHCP代理: - - 编辑 /etc/neutron/dhcp_agent.ini 文件: - - 在[default]部分,配置linuxbridge接口驱动、Dnsmasq DHCP驱动,启用隔离的元数据。 - - ```ini - [DEFAULT] - # ... - interface_driver = linuxbridge - dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq - enable_isolated_metadata = true - ``` - 配置metadata代理: - - 编辑 /etc/neutron/metadata_agent.ini 文件: - - 在[default]部分,配置元数据主机和shared secret。 - - ```ini - [DEFAULT] - # ... - nova_metadata_host = controller - metadata_proxy_shared_secret = METADATA_SECRET - ``` - 替换METADATA_SECRET为合适的元数据代理secret。 - - -3. 配置计算服务 - - 编辑 /etc/nova/nova.conf 文件: - - 在[neutron]部分,配置访问参数,启用元数据代理,配置secret。 - - ```ini - [neutron] - # ... - auth_url = http://controller:5000 - auth_type = password - project_domain_name = Default - user_domain_name = Default - region_name = RegionOne - project_name = service - username = neutron - password = NEUTRON_PASS - service_metadata_proxy = true - metadata_proxy_shared_secret = METADATA_SECRET - ``` - - 替换NEUTRON_PASS为neutron用户的密码; - - 替换METADATA_SECRET为合适的元数据代理secret。 - - - -4. 完成安装 - - 添加配置文件链接: - - ```shell - $ ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini - ``` - - 同步数据库: - - ```shell - $ su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \ - --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron - ``` - - 重启计算API服务: - - ```shell - $ systemctl restart openstack-nova-api.service - ``` - - 启动网络服务并配置开机启动: - - ```shell - $ systemctl enable neutron-server.service \ - neutron-linuxbridge-agent.service neutron-dhcp-agent.service \ - neutron-metadata-agent.service - $ systemctl start neutron-server.service \ - neutron-linuxbridge-agent.service neutron-dhcp-agent.service \ - neutron-metadata-agent.service - $ systemctl enable neutron-l3-agent.service - $ systemctl start neutron-l3-agent.service - ``` - -5. 验证 - - 列出代理验证 neutron 代理启动成功: - - ```shell - $ openstack network agent list - ``` - - -### Cinder 安装 - - -1. 
创建数据库、服务凭证和 API 端点 - - 创建数据库: - - 作为root用户访问数据库,创建cinder数据库并授权。 - - ```shell - $ mysql -u root -p - MariaDB [(none)]> CREATE DATABASE cinder; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \ - IDENTIFIED BY 'CINDER_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \ - IDENTIFIED BY 'CINDER_DBPASS'; - MariaDB [(none)]> exit - ``` - 替换CINDER_DBPASS,为cinder数据库设置密码。 - - ```shell - $ source admin-openrc - ``` - - 创建cinder服务凭证: - - 创建cinder用户 - - 添加‘admin’角色到用户‘cinder’ - - 创建cinderv2和cinderv3服务 - - ```shell - $ openstack user create --domain default --password-prompt cinder - $ openstack role add --project service --user cinder admin - $ openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2 - $ openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3 - ``` - 创建块存储服务API端点: - - ```shell - $ openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(project_id\)s - $ openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(project_id\)s - $ openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(project_id\)s - $ openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s - $ openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s - $ openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s - ``` - -2. 安装和配置控制节点 - - 安装软件包: - - ```shell - $ yum install openstack-cinder - ``` - 配置cinder: - - 编辑 `/etc/cinder/cinder.conf` 文件: - - 在[database]部分,配置数据库入口; - - 在[DEFAULT]部分,配置RabbitMQ消息队列入口,配置my_ip; - - 在[DEFAULT] [keystone_authtoken]部分,配置身份认证服务入口; - - 在[oslo_concurrency]部分,配置lock path。 - - ```ini - [database] - # ... - connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder - [DEFAULT] - # ... - transport_url = rabbit://openstack:RABBIT_PASS@controller - auth_strategy = keystone - my_ip = 10.0.0.11 - [keystone_authtoken] - # ... - www_authenticate_uri = http://controller:5000 - auth_url = http://controller:5000 - memcached_servers = controller:11211 - auth_type = password - project_domain_name = Default - user_domain_name = Default - project_name = service - username = cinder - password = CINDER_PASS - [oslo_concurrency] - # ... - lock_path = /var/lib/cinder/tmp - ``` - 替换CINDER_DBPASS为cinder数据库的密码; - - 替换RABBIT_PASS为RabbitMQ中openstack账户的密码; - - 配置my_ip为控制节点的管理IP地址; - - 替换CINDER_PASS为cinder用户的密码; - - 同步数据库: - - ```shell - $ su -s /bin/sh -c "cinder-manage db sync" cinder - ``` - 配置计算使用块存储: - - 编辑 /etc/nova/nova.conf 文件。 - - ```ini - [cinder] - os_region_name = RegionOne - ``` - 完成安装: - - 重启计算API服务 - - ```shell - $ systemctl restart openstack-nova-api.service - ``` - 启动块存储服务 - - ```shell - $ systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service - $ systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service - ``` - -3. 安装和配置存储节点(LVM) - - 安装软件包: - - ```shell - $ yum install lvm2 device-mapper-persistent-data scsi-target-utils python2-keystone \ - openstack-cinder-volume - ``` - - 创建LVM物理卷 /dev/sdb: - - ```shell - $ pvcreate /dev/sdb - ``` - 创建LVM卷组 cinder-volumes: - - ```shell - $ vgcreate cinder-volumes /dev/sdb - ``` - 编辑 /etc/lvm/lvm.conf 文件: - - 在devices部分,添加过滤以接受/dev/sdb设备拒绝其他设备。 - - ```ini - devices { - - # ... 
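# 过滤规则中 a 表示接受(accept),r 表示拒绝(reject),以下配置仅接受 /dev/sdb 并拒绝其它设备
# 注意:如果存储节点的操作系统盘同样使用了 LVM,还需要同时接受系统盘所在设备,示例(假设系统盘为 /dev/sda,仅为演示用的假设值):
# filter = [ "a/sda/", "a/sdb/", "r/.*/"]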
- - filter = [ "a/sdb/", "r/.*/"] - ``` - - 编辑 `/etc/cinder/cinder.conf` 文件: - - 在[lvm]部分,使用LVM驱动、cinder-volumes卷组、iSCSI协议和适当的iSCSI服务配置LVM后端。 - - 在[DEFAULT]部分,启用LVM后端,配置镜像服务API的位置。 - - ```ini - [lvm] - volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver - volume_group = cinder-volumes - target_protocol = iscsi - target_helper = lioadm - [DEFAULT] - # ... - enabled_backends = lvm - glance_api_servers = http://controller:9292 - ``` - - ***注意*** - - 当cinder使用tgtadm的方式挂卷的时候,要修改/etc/tgt/tgtd.conf,内容如下,保证tgtd可以发现cinder-volume的iscsi target。 - - ``` - include /var/lib/cinder/volumes/* - ``` - 完成安装: - - ```shell - $ systemctl enable openstack-cinder-volume.service tgtd.service iscsid.service - $ systemctl start openstack-cinder-volume.service tgtd.service iscsid.service - ``` - -4. 安装和配置存储节点(ceph RBD) - - 安装软件包: - - ```shell - $ yum install ceph-common python2-rados python2-rbd python2-keystone openstack-cinder-volume - ``` - - 在[DEFAULT]部分,启用LVM后端,配置镜像服务API的位置。 - - ```ini - [DEFAULT] - enabled_backends = ceph-rbd - ``` - - 添加ceph rbd配置部分,配置块命名与enabled_backends中保持一致 - - ```ini - [ceph-rbd] - glance_api_version = 2 - rados_connect_timeout = -1 - rbd_ceph_conf = /etc/ceph/ceph.conf - rbd_flatten_volume_from_snapshot = False - rbd_max_clone_depth = 5 - rbd_pool = # RBD存储池名称 - rbd_secret_uuid = # 随机生成SECRET UUID - rbd_store_chunk_size = 4 - rbd_user = - volume_backend_name = ceph-rbd - volume_driver = cinder.volume.drivers.rbd.RBDDriver - ``` - - 配置存储节点ceph客户端,需要保证/etc/ceph/目录中包含ceph集群访问配置,包括ceph.conf以及keyring - - ```shell - [root@openeuler ~]# ll /etc/ceph - -rw-r--r-- 1 root root 82 Jun 16 17:11 ceph.client..keyring - -rw-r--r-- 1 root root 1.5K Jun 16 17:11 ceph.conf - -rw-r--r-- 1 root root 92 Jun 16 17:11 rbdmap - ``` - - 在存储节点检查ceph集群是否正常可访问 - - ```shell - [root@openeuler ~]# ceph --user cinder -s - cluster: - id: b7b2fac6-420f-4ec1-aea2-4862d29b4059 - health: HEALTH_OK - - services: - mon: 3 daemons, quorum VIRT01,VIRT02,VIRT03 - mgr: VIRT03(active), standbys: VIRT02, VIRT01 - mds: cephfs_virt-1/1/1 up {0=VIRT03=up:active}, 2 up:standby - osd: 15 osds: 15 up, 15 in - - data: - pools: 7 pools, 1416 pgs - objects: 5.41M objects, 19.8TiB - usage: 49.3TiB used, 59.9TiB / 109TiB avail - pgs: 1414 active - - io: - client: 2.73MiB/s rd, 22.4MiB/s wr, 3.21kop/s rd, 1.19kop/s wr - ``` - - 启动服务 - - ```shell - $ systemctl enable openstack-cinder-volume.service - $ systemctl start openstack-cinder-volume.service - ``` - - - -5. 安装和配置备份服务 - - 编辑 /etc/cinder/cinder.conf 文件: - - 在[DEFAULT]部分,配置备份选项 - - ```ini - [DEFAULT] - # ... - # 注意: openEuler 21.03中没有提供OpenStack Swift软件包,需要用户自行安装。或者使用其他的备份后端,例如,NFS。NFS已经过测试验证,可以正常使用。 - backup_driver = cinder.backup.drivers.swift.SwiftBackupDriver - backup_swift_url = SWIFT_URL - ``` - 替换SWIFT_URL为对象存储服务的URL,该URL可以通过对象存储API端点找到: - - ```shell - $ openstack catalog show object-store - ``` - 完成安装: - - ```shell - $ systemctl enable openstack-cinder-backup.service - $ systemctl start openstack-cinder-backup.service - ``` - -6. 验证 - - 列出服务组件验证每个步骤成功: - ```shell - $ source admin-openrc - $ openstack volume service list - ``` - - 注:目前暂未对swift组件进行支持,有条件的同学可以配置对接ceph。 - -### Horizon 安装 - -1. 安装软件包 - - ```shell - $ yum install openstack-dashboard - ``` -2. 
修改文件`/usr/share/openstack-dashboard/openstack_dashboard/local/local_settings.py` - - 修改变量 - - ```ini - ALLOWED_HOSTS = ['*', ] - OPENSTACK_HOST = "controller" - OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST - OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True - SESSION_ENGINE = 'django.contrib.sessions.backends.cache' - CACHES = { - 'default': { - 'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache', - 'LOCATION': 'controller:11211', - } - } - ``` - 新增变量 - ```ini - OPENSTACK_API_VERSIONS = { - "identity": 3, - "image": 2, - "volume": 3, - } - WEBROOT = "/dashboard/" - COMPRESS_OFFLINE = True - OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "default" - OPENSTACK_KEYSTONE_DEFAULT_ROLE = "admin" - LOGIN_URL = '/dashboard/auth/login/' - LOGOUT_URL = '/dashboard/auth/logout/' - ``` -3. 修改文件/etc/httpd/conf.d/openstack-dashboard.conf - ```xml - WSGIDaemonProcess dashboard - WSGIProcessGroup dashboard - WSGISocketPrefix run/wsgi - WSGIApplicationGroup %{GLOBAL} - - WSGIScriptAlias /dashboard /usr/share/openstack-dashboard/openstack_dashboard/wsgi/django.wsgi - Alias /dashboard/static /usr/share/openstack-dashboard/static - - - Options All - AllowOverride All - Require all granted - - - - Options All - AllowOverride All - Require all granted - - ``` -4. 在/usr/share/openstack-dashboard目录下执行 - ```shell - $ ./manage.py compress - ``` -5. 重启 httpd 服务 - ```shell - $ systemctl restart httpd - ``` -5. 验证 - 打开浏览器,输入网址http://,登录 horizon。 - -### Tempest 安装 - -Tempest是OpenStack的集成测试服务,如果用户需要全面自动化测试已安装的OpenStack环境的功能,则推荐使用该组件。否则,可以不用安装 - -1. 安装Tempest - ```shell - $ yum install openstack-tempest - ``` -2. 初始化目录 - - ```shell - $ tempest init mytest - ``` -3. 修改配置文件。 - - ```shell - $ cd mytest - $ vi etc/tempest.conf - ``` - tempest.conf中需要配置当前OpenStack环境的信息,具体内容可以参考[官方示例](https://docs.openstack.org/tempest/latest/sampleconf.html) - -4. 执行测试 - - ```shell - $ tempest run - ``` - -### Ironic 安装 - -Ironic是OpenStack的裸金属服务,如果用户需要进行裸机部署则推荐使用该组件。否则,可以不用安装。 - -1. 设置数据库 - - 裸金属服务在数据库中存储信息,创建一个**ironic**用户可以访问的**ironic**数据库,替换**IRONIC_DBPASSWORD**为合适的密码 - - ```shell - $ mysql -u root -p - ``` - - ```sql - MariaDB [(none)]> CREATE DATABASE ironic CHARACTER SET utf8; - - MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'localhost' \ - IDENTIFIED BY 'IRONIC_DBPASSWORD'; - - MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'%' \ - IDENTIFIED BY 'IRONIC_DBPASSWORD'; - ``` - -2. 
组件安装与配置 - - ##### 创建服务用户认证 - - 1、创建Bare Metal服务用户 - - ```shell - $ openstack user create --password IRONIC_PASSWORD \ - --email ironic@example.com ironic - $ openstack role add --project service --user ironic admin - $ openstack service create --name ironic --description \ - "Ironic baremetal provisioning service" baremetal - - $ openstack service create --name ironic-inspector --description "Ironic inspector baremetal provisioning service" baremetal-introspection - $ openstack user create --password IRONIC_INSPECTOR_PASSWORD --email ironic_inspector@example.com ironic_inspector - $ openstack role add --project service --user ironic-inspector admin - ``` - - 2、创建Bare Metal服务访问入口 - - ```shell - $ openstack endpoint create --region RegionOne baremetal admin http://$IRONIC_NODE:6385 - $ openstack endpoint create --region RegionOne baremetal public http://$IRONIC_NODE:6385 - $ openstack endpoint create --region RegionOne baremetal internal http://$IRONIC_NODE:6385 - $ openstack endpoint create --region RegionOne baremetal-introspection internal http://$IRONIC_NODE:5050/v1 - $ openstack endpoint create --region RegionOne baremetal-introspection public http://$IRONIC_NODE:5050/v1 - $ openstack endpoint create --region RegionOne baremetal-introspection admin http://$IRONIC_NODE:5050/v1 - ``` - - ##### 配置ironic-api服务 - - 配置文件路径/etc/ironic/ironic.conf - - 1、通过**connection**选项配置数据库的位置,如下所示,替换**IRONIC_DBPASSWORD**为**ironic**用户的密码,替换**DB_IP**为DB服务器所在的IP地址: - - ```ini - [database] - - # The SQLAlchemy connection string used to connect to the - # database (string value) - - connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic - ``` - - 2、通过以下选项配置ironic-api服务使用RabbitMQ消息代理,替换**RPC_\***为RabbitMQ的详细地址和凭证 - - ```ini - [DEFAULT] - - # A URL representing the messaging driver to use and its full - # configuration. (string value) - - transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/ - ``` - - 用户也可自行使用json-rpc方式替换rabbitmq - - 3、配置ironic-api服务使用身份认证服务的凭证,替换**PUBLIC_IDENTITY_IP**为身份认证服务器的公共IP,替换**PRIVATE_IDENTITY_IP**为身份认证服务器的私有IP,替换**IRONIC_PASSWORD**为身份认证服务中**ironic**用户的密码: - - ```ini - [DEFAULT] - - # Authentication strategy used by ironic-api: one of - # "keystone" or "noauth". "noauth" should not be used in a - # production environment because all authentication will be - # disabled. (string value) - - auth_strategy=keystone - force_config_drive = True - - [keystone_authtoken] - # Authentication type to load (string value) - auth_type=password - # Complete public Identity API endpoint (string value) - www_authenticate_uri=http://PUBLIC_IDENTITY_IP:5000 - # Complete admin Identity API endpoint. (string value) - auth_url=http://PRIVATE_IDENTITY_IP:5000 - # Service username. (string value) - username=ironic - # Service account password. (string value) - password=IRONIC_PASSWORD - # Service tenant name. (string value) - project_name=service - # Domain name containing project (string value) - project_domain_name=Default - # User's domain name (string value) - user_domain_name=Default - ``` - - 4、需要在配置文件中指定ironic日志目录 - - ``` - [DEFAULT] - log_dir = /var/log/ironic/ - ``` - - 5、创建裸金属服务数据库表 - - ```shell - $ ironic-dbsync --config-file /etc/ironic/ironic.conf create_schema - ``` - - 6、重启ironic-api服务 - - ```shell - $ systemctl restart openstack-ironic-api - ``` - - ##### 配置ironic-conductor服务 - - 1、替换**HOST_IP**为conductor host的IP - - ```ini - [DEFAULT] - - # IP address of this host. If unset, will determine the IP - # programmatically. If unable to do so, will use "127.0.0.1". 
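# 示例(假设 conductor 部署在管理 IP 为 10.0.0.11 的控制节点上,该取值仅为演示用的假设值):
# my_ip = 10.0.0.11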
- # (string value) - - my_ip=HOST_IP - ``` - - 2、配置数据库的位置,ironic-conductor应该使用和ironic-api相同的配置。替换**IRONIC_DBPASSWORD**为**ironic**用户的密码,替换DB_IP为DB服务器所在的IP地址: - - ```ini - [database] - - # The SQLAlchemy connection string to use to connect to the - # database. (string value) - - connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic - ``` - - 3、通过以下选项配置ironic-api服务使用RabbitMQ消息代理,ironic-conductor应该使用和ironic-api相同的配置,替换**RPC_\***为RabbitMQ的详细地址和凭证 - - ```ini - [DEFAULT] - - # A URL representing the messaging driver to use and its full - # configuration. (string value) - - transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/ - ``` - - 用户也可自行使用json-rpc方式替换rabbitmq - - 4、配置凭证访问其他OpenStack服务 - - 为了与其他OpenStack服务进行通信,裸金属服务在请求其他服务时需要使用服务用户与OpenStack Identity服务进行认证。这些用户的凭据必须在与相应服务相关的每个配置文件中进行配置。 - - [neutron] - 访问Openstack网络服务 - [glance] - 访问Openstack镜像服务 - [swift] - 访问Openstack对象存储服务 - [cinder] - 访问Openstack块存储服务 - [inspector] - 访问Openstack裸金属introspection服务 - [service_catalog] - 一个特殊项用于保存裸金属服务使用的凭证,该凭证用于发现注册在Openstack身份认证服务目录中的自己的API URL端点 - - 简单起见,可以对所有服务使用同一个服务用户。为了向后兼容,该用户应该和ironic-api服务的[keystone_authtoken]所配置的为同一个用户。但这不是必须的,也可以为每个服务创建并配置不同的服务用户。 - - 在下面的示例中,用户访问openstack网络服务的身份验证信息配置为: - - 网络服务部署在名为RegionOne的身份认证服务域中,仅在服务目录中注册公共端点接口 - - 请求时使用特定的CA SSL证书进行HTTPS连接 - - 与ironic-api服务配置相同的服务用户 - - 动态密码认证插件基于其他选项发现合适的身份认证服务API版本 - - ```ini - [neutron] - - # Authentication type to load (string value) - auth_type = password - # Authentication URL (string value) - auth_url=https://IDENTITY_IP:5000/ - # Username (string value) - username=ironic - # User's password (string value) - password=IRONIC_PASSWORD - # Project name to scope to (string value) - project_name=service - # Domain ID containing project (string value) - project_domain_id=default - # User's domain id (string value) - user_domain_id=default - # PEM encoded Certificate Authority to use when verifying - # HTTPs connections. (string value) - cafile=/opt/stack/data/ca-bundle.pem - # The default region_name for endpoint URL discovery. (string - # value) - region_name = RegionOne - # List of interfaces, in order of preference, for endpoint - # URL. (list value) - valid_interfaces=public - ``` - - 默认情况下,为了与其他服务进行通信,裸金属服务会尝试通过身份认证服务的服务目录发现该服务合适的端点。如果希望对一个特定服务使用一个不同的端点,则在裸金属服务的配置文件中通过endpoint_override选项进行指定: - - ```ini - [neutron] - # ... 
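# 示例(假设希望 ironic 固定使用 Neutron API 地址 http://controller:9696,该地址仅为演示用的假设值;若不设置,则按上文所述通过服务目录自动发现端点):
# endpoint_override = http://controller:9696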
- endpoint_override = - ``` - - 5、配置允许的驱动程序和硬件类型 - - 通过设置enabled_hardware_types设置ironic-conductor服务允许使用的硬件类型: - - ```ini - [DEFAULT] - enabled_hardware_types = ipmi - ``` - - 配置硬件接口: - - ```ini - enabled_boot_interfaces = pxe - enabled_deploy_interfaces = direct,iscsi - enabled_inspect_interfaces = inspector - enabled_management_interfaces = ipmitool - enabled_power_interfaces = ipmitool - ``` - - 配置接口默认值: - - ```ini - [DEFAULT] - default_deploy_interface = direct - default_network_interface = neutron - ``` - - 如果启用了任何使用Direct deploy的驱动,必须安装和配置镜像服务的Swift后端。Ceph对象网关(RADOS网关)也支持作为镜像服务的后端。 - - 6、重启ironic-conductor服务 - - ```shell - $ systemctl restart openstack-ironic-conductor - ``` - - ##### 配置ironic-inspector服务 - - 配置文件路径`/etc/ironic-inspector/inspector.conf` - - 1、创建数据库 - - ```shell - $ mysql -u root -p - ``` - ```sql - MariaDB [(none)]> CREATE DATABASE ironic_inspector CHARACTER SET utf8; - - MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic_inspector.* TO 'ironic_inspector'@'localhost' \ IDENTIFIED BY 'IRONIC_INSPECTOR_DBPASSWORD'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic_inspector.* TO 'ironic_inspector'@'%' \ - IDENTIFIED BY 'IRONIC_INSPECTOR_DBPASSWORD'; - ``` - - 2、通过**connection**选项配置数据库的位置,如下所示,替换**IRONIC_INSPECTOR_DBPASSWORD**为**ironic_inspector**用户的密码,替换**DB_IP**为DB服务器所在的IP地址: - - ```ini - [database] - backend = sqlalchemy - connection = mysql+pymysql://ironic_inspector:IRONIC_INSPECTOR_DBPASSWORD@DB_IP/ironic_inspector - ``` - - 3、调用 ironic-inspector-dbsync 生成表 - - ``` - ironic-inspector-dbsync --config-file /etc/ironic-inspector/inspector.conf upgrade - ``` - - 4、配置消息队列通信地址 - - ```ini - [DEFAULT] - transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/ - ``` - - 5、设置keystone认证 - - ```ini - [DEFAULT] - - auth_strategy = keystone - - [ironic] - - api_endpoint = http://IRONIC_API_HOST_ADDRRESS:6385 - auth_type = password - auth_url = http://PUBLIC_IDENTITY_IP:5000 - auth_strategy = keystone - ironic_url = http://IRONIC_API_HOST_ADDRRESS:6385 - os_region = RegionOne - project_name = service - project_domain_name = Default - user_domain_name = Default - username = IRONIC_SERVICE_USER_NAME - password = IRONIC_SERVICE_USER_PASSWORD - ``` - - 6、配置ironic inspector dnsmasq服务 - - ```ini - # 配置文件地址:/etc/ironic-inspector/dnsmasq.conf - port=0 - interface=enp3s0 #替换为实际监听网络接口 - dhcp-range=172.20.19.100,172.20.19.110 #替换为实际dhcp地址范围 - bind-interfaces - enable-tftp - - dhcp-match=set:efi,option:client-arch,7 - dhcp-match=set:efi,option:client-arch,9 - dhcp-match=aarch64, option:client-arch,11 - dhcp-boot=tag:aarch64,grubaa64.efi - dhcp-boot=tag:!aarch64,tag:efi,grubx64.efi - dhcp-boot=tag:!aarch64,tag:!efi,pxelinux.0 - - tftp-root=/tftpboot #替换为实际tftpboot目录 - log-facility=/var/log/dnsmasq.log - ``` - - 7、启动服务 - - ```shell - $ systemctl enable --now openstack-ironic-inspector.service - $ systemctl enable --now openstack-ironic-inspector-dnsmasq.service - ``` - - 8、如果节点单独部署ironic服务还需要部署启动iscsid.service服务 - - ``` - $ systemctl enable openstack-cinder-volume.service tgtd.service iscsid.service - $ systemctl start openstack-cinder-volume.service tgtd.service iscsid.service - ``` - - **注意**:arm架构支持不完全,需要根据自己情况进行适配; - -3. deploy ramdisk镜像制作 - - 目前ramdisk镜像支持通过ironic python agent builder来进行制作,这里介绍下使用这个工具构建ironic使用的deploy镜像的完整过程。(用户也可以根据自己的情况获取ironic-python-agent,这里提供使用ipa-builder制作ipa方法) - - ##### 安装 ironic-python-agent-builder - - 2. 安装工具: - - ```shell - $ pip install ironic-python-agent-builder - ``` - - 3. 
修改以下文件中的python解释器: - - ```shell - $ /usr/bin/yum /usr/libexec/urlgrabber-ext-down - ``` - - 4. 安装其它必须的工具: - - ```shell - $ yum install git - ``` - - 由于`DIB`依赖`semanage`命令,所以在制作镜像之前确定该命令是否可用:`semanage --help`,如果提示无此命令,安装即可: - - ```shell - # 先查询需要安装哪个包 - [root@localhost ~]# yum provides /usr/sbin/semanage - 已加载插件:fastestmirror - Loading mirror speeds from cached hostfile - * base: mirror.vcu.edu - * extras: mirror.vcu.edu - * updates: mirror.math.princeton.edu - policycoreutils-python-2.5-34.el7.aarch64 : SELinux policy core python utilities - 源 :base - 匹配来源: - 文件名 :/usr/sbin/semanage - # 安装 - [root@localhost ~]# yum install policycoreutils-python - ``` - - ##### 制作镜像 - - 如果是`aarch64`架构,还需要添加: - - ```shell - $ export ARCH=aarch64 - ``` - - ###### 普通镜像 - - 基本用法: - - ```shell - usage: ironic-python-agent-builder [-h] [-r RELEASE] [-o OUTPUT] [-e ELEMENT] - [-b BRANCH] [-v] [--extra-args EXTRA_ARGS] - distribution - - positional arguments: - distribution Distribution to use - - optional arguments: - -h, --help show this help message and exit - -r RELEASE, --release RELEASE - Distribution release to use - -o OUTPUT, --output OUTPUT - Output base file name - -e ELEMENT, --element ELEMENT - Additional DIB element to use - -b BRANCH, --branch BRANCH - If set, override the branch that is used for ironic- - python-agent and requirements - -v, --verbose Enable verbose logging in diskimage-builder - --extra-args EXTRA_ARGS - Extra arguments to pass to diskimage-builder - ``` - - 举例说明: - - ```shell - $ ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky - ``` - - ###### 允许ssh登陆 - - 初始化环境变量,然后制作镜像: - - ```shell - $ export DIB_DEV_USER_USERNAME=ipa \ - $ export DIB_DEV_USER_PWDLESS_SUDO=yes \ - $ export DIB_DEV_USER_PASSWORD='123' - $ ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky -e selinux-permissive -e devuser - ``` - - ###### 指定代码仓库 - - 初始化对应的环境变量,然后制作镜像: - - ```ini - # 指定仓库地址以及版本 - DIB_REPOLOCATION_ironic_python_agent=git@172.20.2.149:liuzz/ironic-python-agent.git - DIB_REPOREF_ironic_python_agent=origin/develop - - # 直接从gerrit上clone代码 - DIB_REPOLOCATION_ironic_python_agent=https://review.opendev.org/openstack/ironic-python-agent - DIB_REPOREF_ironic_python_agent=refs/changes/43/701043/1 - ``` - - 参考:[source-repositories](https://docs.openstack.org/diskimage-builder/latest/elements/source-repositories/README.html)。 - - 指定仓库地址及版本验证成功。 - -### Kolla 安装 - -Kolla为OpenStack服务提供生产环境可用的容器化部署的功能。openEuler 20.03 LTS SP2中引入了Kolla和Kolla-ansible服务。 - -Kolla的安装十分简单,只需要安装对应的RPM包即可 - -```shell -$ yum install openstack-kolla openstack-kolla-ansible -``` - -安装完后,就可以使用`kolla-ansible`, `kolla-build`, `kolla-genpwd`, `kolla-mergepwd`等命令了。 - -### Trove 安装 -Trove是OpenStack的数据库服务,如果用户使用OpenStack提供的数据库服务则推荐使用该组件。否则,可以不用安装。 - -1. 设置数据库 - - 数据库服务在数据库中存储信息,创建一个**trove**用户可以访问**trove**数据库,替换**TROVE_DBPASSWORD**为对应密码 - - ```shell - $ mysql -u root -p - ``` - - ```sql - MariaDB [(none)]> CREATE DATABASE trove CHARACTER SET utf8; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'localhost' \ - IDENTIFIED BY 'TROVE_DBPASSWORD'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'%' \ - IDENTIFIED BY 'TROVE_DBPASSWORD'; - ``` - -2. 
创建服务用户认证 - - 1、创建**Trove**服务用户 - - ```shell - $ openstack user create --password TROVE_PASSWORD \ - --email trove@example.com trove - $ openstack role add --project service --user trove admin - $ openstack service create --name trove - --description "Database service" database - ``` - **解释:** `TROVE_PASSWORD` 替换为`trove`用户的密码 - - 2、创建**Database**服务访问入口 - - ```shell - $ openstack endpoint create --region RegionOne database public http://$TROVE_NODE:8779/v1.0/%\(tenant_id\)s - $ openstack endpoint create --region RegionOne database internal http://$TROVE_NODE:8779/v1.0/%\(tenant_id\)s - $ openstack endpoint create --region RegionOne database admin http://$TROVE_NODE:8779/v1.0/%\(tenant_id\)s - ``` - **解释:** `$TROVE_NODE` 替换为Trove的API服务部署节点 - -3. 安装和配置**Trove**各组件 - - 1、安装**Trove**包 - - ```shell - $ yum install openstack-trove python-troveclient - ``` - 2、配置`/etc/trove/trove.conf` - - ```ini - [DEFAULT] - bind_host=TROVE_NODE_IP - log_dir = /var/log/trove - - auth_strategy = keystone - # Config option for showing the IP address that nova doles out - add_addresses = True - network_label_regex = ^NETWORK_LABEL$ - api_paste_config = /etc/trove/api-paste.ini - - trove_auth_url = http://controller:35357/v3/ - nova_compute_url = http://controller:8774/v2 - cinder_url = http://controller:8776/v1 - - nova_proxy_admin_user = admin - nova_proxy_admin_pass = ADMIN_PASS - nova_proxy_admin_tenant_name = service - taskmanager_manager = trove.taskmanager.manager.Manager - use_nova_server_config_drive = True - - # Set these if using Neutron Networking - network_driver=trove.network.neutron.NeutronDriver - network_label_regex=.* - transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/ - - [database] - connection = mysql+pymysql://trove:TROVE_DBPASS@controller/trove - - [keystone_authtoken] - www_authenticate_uri = http://controller:5000/v3/ - auth_url=http://controller:35357/v3/ - #auth_uri = http://controller/identity - #auth_url = http://controller/identity_admin - auth_type = password - project_domain_name = Default - user_domain_name = Default - project_name = service - username = trove - password = TROVE_PASS - - ``` - **解释:** - - `[Default]`分组中`bind_host`配置为Trove部署节点的IP - - `nova_compute_url` 和 `cinder_url` 为Nova和Cinder在Keystone中创建的endpoint - - `nova_proxy_XXX` 为一个能访问Nova服务的用户信息,上例中使用`admin`用户为例 - - `transport_url` 为`RabbitMQ`连接信息,`RABBIT_PASS`替换为RabbitMQ的密码 - - `[database]`分组中的`connection` 为前面在mysql中为Trove创建的数据库信息 - - Trove的用户信息中`TROVE_PASS`替换为实际trove用户的密码 - - 3、配置`/etc/trove/trove-taskmanager.conf` - - ```ini - [DEFAULT] - log_dir = /var/log/trove - trove_auth_url = http://controller/identity/v2.0 - nova_compute_url = http://controller:8774/v2 - cinder_url = http://controller:8776/v1 - transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/ - - [database] - connection = mysql+pymysql://trove:TROVE_DBPASS@controller/trove - ``` - **解释:** 参照`trove.conf`配置 - 4、配置`/etc/trove/trove-conductor.conf` - - ```ini - [DEFAULT] - log_dir = /var/log/trove - trove_auth_url = http://controller/identity/v2.0 - nova_compute_url = http://controller:8774/v2 - cinder_url = http://controller:8776/v1 - transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/ - - [database] - connection = mysql+pymysql://trove:trove@controller/trove - ``` - **解释:** 参照`trove.conf`配置 - - 5、配置`/etc/trove/trove-guestagent.conf` - - ```ini - [DEFAULT] - rabbit_host = controller - rabbit_password = RABBIT_PASS - nova_proxy_admin_user = admin - nova_proxy_admin_pass = ADMIN_PASS - nova_proxy_admin_tenant_name = service - 
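# RABBIT_PASS、ADMIN_PASS 需分别替换为 RabbitMQ 中 openstack 用户的密码和 admin 用户的密码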
trove_auth_url = http://controller/identity_admin/v2.0 - ``` - **解释:** `guestagent`是trove中一个独立组件,需要预先内置到Trove通过Nova创建的虚拟 - 机镜像中,在创建好数据库实例后,会起guestagent进程,负责通过消息队列(RabbitMQ)向Trove上 - 报心跳,因此需要配置RabbitMQ的用户和密码信息。 - - 6、生成数据`Trove`数据库表 - - ```shell - $ su -s /bin/sh -c "trove-manage db_sync" trove - ``` - -4. 完成安装配置 - 1、配置**Trove**服务自启动 - - ```shell - $ systemctl enable openstack-trove-api.service \ - openstack-trove-taskmanager.service \ - openstack-trove-conductor.service - ``` - 2、启动服务 - - ```shell - $ systemctl start openstack-trove-api.service \ - openstack-trove-taskmanager.service \ - openstack-trove-conductor.service - ``` \ No newline at end of file diff --git a/docs/install/openEuler-20.03-LTS-SP3/OpenStack-queens.md b/docs/install/openEuler-20.03-LTS-SP3/OpenStack-queens.md deleted file mode 100644 index 52488b6f1ea1f684de514efdc82b7ce46dea2d7c..0000000000000000000000000000000000000000 --- a/docs/install/openEuler-20.03-LTS-SP3/OpenStack-queens.md +++ /dev/null @@ -1,2016 +0,0 @@ -# OpenStack-Queens 部署指南 - - - -- [OpenStack-Queens 部署指南](#openstack-queens-部署指南) - - [OpenStack 简介](#openstack-简介) - - [约定](#约定) - - [准备环境](#准备环境) - - [环境配置](#环境配置) - - [安装 SQL DataBase](#安装-sql-database) - - [安装 RabbitMQ](#安装-rabbitmq) - - [安装 Memcached](#安装-memcached) - - [安装 OpenStack](#安装-openstack) - - [Keystone 安装](#keystone-安装) - - [Glance 安装](#glance-安装) - - [Nova 安装](#nova-安装) - - [Neutron 安装](#neutron-安装) - - [Cinder 安装](#cinder-安装) - - [horizon 安装](#horizon-安装) - - [Tempest 安装](#tempest-安装) - - [Ironic 安装](#ironic-安装) - - [Kolla 安装](#kolla-安装) - - [Trove 安装](#trove-安装) - - [Rally 安装](#rally-安装) - - - -## OpenStack 简介 - -OpenStack 是一个社区,也是一个项目。它提供了一个部署云的操作平台或工具集,为组织提供可扩展的、灵活的云计算。 - -作为一个开源的云计算管理平台,OpenStack 由 nova、cinder、neutron、glance、keystone、horizon 等几个主要的组件组合起来完成具体工作。OpenStack 支持几乎所有类型的云环境,项目目标是提供实施简单、可大规模扩展、丰富、标准统一的云计算管理平台。OpenStack 通过各种互补的服务提供了基础设施即服务(IaaS)的解决方案,每个服务提供 API 进行集成。 - -openEuler 20.03-LTS-SP3 版本官方认证的第三方 oepkg yum 源已经支持 Openstack-Queens 版本,用户可以配置好 oepkg yum 源后根据此文档进行 OpenStack 部署。 - -## 约定 - -Openstack 支持多种形态部署,此文档支持`ALL in One`以及`Distributed`两种部署方式,按照如下方式约定: - -`ALL in One`模式: - -```text -忽略所有可能的后缀 -``` - -`Distributed`模式: - -```text -以 `(CTL)` 为后缀表示此条配置或者命令仅适用`控制节点` -以 `(CPT)` 为后缀表示此条配置或者命令仅适用`计算节点` -除此之外表示此条配置或者命令同时适用`控制节点`和`计算节点` -``` - -***注意*** - -涉及到以上约定的服务如下: - -- Cinder -- Nova -- Neutron - -## 准备环境 - -### 环境配置 - -1. 配置 20.03-LTS-SP3 官方认证的第三方源 oepkg - - ```shell - cat << EOF >> /etc/yum.repos.d/OpenStack_Queens.repo - [openstack_queens] - name=OpenStack_Queens - baseurl=https://repo.oepkgs.net/openEuler/rpm/openEuler-20.03-LTS-SP3/budding-openeuler/openstack/queens/$basearch/ - gpgcheck=0 - enabled=1 - EOF - ``` - - ***注意*** - - 如果环境启用了Epol源,需要提高queens仓的优先级,设置priority=1: - ```shell - cat << EOF >> /etc/yum.repos.d/OpenStack_Queens.repo - [openstack_queens] - name=OpenStack_Queens - baseurl=https://repo.oepkgs.net/openEuler/rpm/openEuler-20.03-LTS-SP3/budding-openeuler/openstack/queens/$basearch/ - gpgcheck=0 - enabled=1 - priority=1 - EOF - ``` - - ```shell - $ yum clean all && yum makecache - ``` - - -2. 修改主机名以及映射 - - 设置各个节点的主机名 - - ```shell - hostnamectl set-hostname controller (CTL) - hostnamectl set-hostname compute (CPT) - ``` - - 假设controller节点的IP是`10.0.0.11`,compute节点的IP是`10.0.0.12`(如果存在的话),则于`/etc/hosts`新增如下: - - ```shell - 10.0.0.11 controller - 10.0.0.12 compute - ``` - -### 安装 SQL DataBase - -1. 执行如下命令,安装软件包。 - - ```shell - yum install mariadb mariadb-server python2-PyMySQL - ``` - -2. 
执行如下命令,创建并编辑 `/etc/my.cnf.d/openstack.cnf` 文件。 - - ```shell - vim /etc/my.cnf.d/openstack.cnf - - [mysqld] - bind-address = 10.0.0.11 - default-storage-engine = innodb - innodb_file_per_table = on - max_connections = 4096 - collation-server = utf8_general_ci - character-set-server = utf8 - ``` - - ***注意*** - - **其中 `bind-address` 设置为控制节点的管理IP地址。** - -3. 启动 DataBase 服务,并为其配置开机自启动: - - ```shell - systemctl enable mariadb.service - systemctl start mariadb.service - ``` - -4. 配置DataBase的默认密码(可选) - - ```shell - mysql_secure_installation - ``` - - ***注意*** - - **根据提示进行即可** - -### 安装 RabbitMQ - -1. 执行如下命令,安装软件包。 - - ```shell - yum install rabbitmq-server - ``` - -2. 启动 RabbitMQ 服务,并为其配置开机自启动。 - - ```shell - systemctl enable rabbitmq-server.service - systemctl start rabbitmq-server.service - ``` - -3. 添加 OpenStack用户。 - - ```shell - rabbitmqctl add_user openstack RABBIT_PASS - ``` - - ***注意*** - - **替换 `RABBIT_PASS`,为 OpenStack 用户设置密码** - -4. 设置openstack用户权限,允许进行配置、写、读: - - ```shell - rabbitmqctl set_permissions openstack ".*" ".*" ".*" - ``` - -### 安装 Memcached - -1. 执行如下命令,安装依赖软件包。 - - ```shell - yum install memcached python2-memcached - ``` - -2. 编辑 `/etc/sysconfig/memcached` 文件。 - - ```shell - vim /etc/sysconfig/memcached - - OPTIONS="-l 127.0.0.1,::1,controller" - ``` - -3. 执行如下命令,启动 Memcached 服务,并为其配置开机启动。 - - ```shell - systemctl enable memcached.service - systemctl start memcached.service - ``` - 服务启动后,可以通过命令`memcached-tool controller stats`确保启动正常,服务可用,其中可以将`controller`替换为控制节点的管理IP地址。 - -## 安装 OpenStack - -### Keystone 安装 - -1. 创建 keystone 数据库并授权。 - - ``` sql - mysql -u root -p - - MariaDB [(none)]> CREATE DATABASE keystone; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \ - IDENTIFIED BY 'KEYSTONE_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \ - IDENTIFIED BY 'KEYSTONE_DBPASS'; - MariaDB [(none)]> exit - ``` - - ***注意*** - - **替换 `KEYSTONE_DBPASS`,为 Keystone 数据库设置密码** - -2. 安装软件包。 - - ```shell - yum install openstack-keystone httpd python2-mod_wsgi - ``` - -3. 配置keystone相关配置 - - ```shell - vim /etc/keystone/keystone.conf - - [database] - connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone - - [token] - provider = fernet - ``` - - ***解释*** - - [database]部分,配置数据库入口 - - [token]部分,配置token provider - - ***注意:*** - - **替换 `KEYSTONE_DBPASS` 为 Keystone 数据库的密码** - -4. 同步数据库。 - - ```shell - su -s /bin/sh -c "keystone-manage db_sync" keystone - ``` - -5. 初始化Fernet密钥仓库。 - - ```shell - keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone - keystone-manage credential_setup --keystone-user keystone --keystone-group keystone - ``` - -6. 启动服务。 - - ```shell - keystone-manage bootstrap --bootstrap-password ADMIN_PASS \ - --bootstrap-admin-url http://controller:5000/v3/ \ - --bootstrap-internal-url http://controller:5000/v3/ \ - --bootstrap-public-url http://controller:5000/v3/ \ - --bootstrap-region-id RegionOne - ``` - - ***注意*** - - **替换 `ADMIN_PASS`,为 admin 用户设置密码** - -7. 配置Apache HTTP server - - ```shell - vim /etc/httpd/conf/httpd.conf - - ServerName controller - ``` - - ```shell - ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/ - ``` - - ***解释*** - - 配置 `ServerName` 项引用控制节点 - - ***注意*** - **如果 `ServerName` 项不存在则需要创建** - -8. 启动Apache HTTP服务。 - - ```shell - systemctl enable httpd.service - systemctl start httpd.service - ``` - -9. 
创建环境变量配置。 - - ```shell - cat << EOF >> ~/.admin-openrc - export OS_PROJECT_DOMAIN_NAME=Default - export OS_USER_DOMAIN_NAME=Default - export OS_PROJECT_NAME=admin - export OS_USERNAME=admin - export OS_PASSWORD=ADMIN_PASS - export OS_AUTH_URL=http://controller:5000/v3 - export OS_IDENTITY_API_VERSION=3 - export OS_IMAGE_API_VERSION=2 - EOF - ``` - - ***注意*** - - **替换 `ADMIN_PASS` 为 admin 用户的密码** - -10. 依次创建domain, projects, users, roles,需要先安装好python2-openstackclient: - - ``` - yum install python2-openstackclient - ``` - - 导入环境变量 - - ```shell - source ~/.admin-openrc - ``` - - 创建project `service`,其中 domain `default` 在 keystone-manage bootstrap 时已创建 - - ```shell - openstack domain create --description "An Example Domain" example - ``` - - ```shell - openstack project create --domain default --description "Service Project" service - ``` - - 创建(non-admin)project `myproject`,user `myuser` 和 role `myrole`,为 `myproject` 和 `myuser` 添加角色`myrole` - - ```shell - openstack project create --domain default --description "Demo Project" myproject - openstack user create --domain default --password-prompt myuser - openstack role create myrole - openstack role add --project myproject --user myuser myrole - ``` - -11. 验证 - - 取消临时环境变量OS_AUTH_URL和OS_PASSWORD: - - ```shell - source ~/.admin-openrc - unset OS_AUTH_URL OS_PASSWORD - ``` - - 为admin用户请求token: - - ```shell - openstack --os-auth-url http://controller:5000/v3 \ - --os-project-domain-name Default --os-user-domain-name Default \ - --os-project-name admin --os-username admin token issue - ``` - - 为myuser用户请求token: - - ```shell - openstack --os-auth-url http://controller:5000/v3 \ - --os-project-domain-name Default --os-user-domain-name Default \ - --os-project-name myproject --os-username myuser token issue - ``` - -### Glance 安装 - -1. 创建数据库、服务凭证和 API 端点 - - 创建数据库: - - ```sql - mysql -u root -p - - MariaDB [(none)]> CREATE DATABASE glance; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \ - IDENTIFIED BY 'GLANCE_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \ - IDENTIFIED BY 'GLANCE_DBPASS'; - MariaDB [(none)]> exit - ``` - - ***注意:*** - - **替换 `GLANCE_DBPASS`,为 glance 数据库设置密码** - - 创建服务凭证 - - ```shell - source ~/.admin-openrc - - openstack user create --domain default --password-prompt glance - openstack role add --project service --user glance admin - openstack service create --name glance --description "OpenStack Image" image - ``` - - 创建镜像服务API端点: - - ```shell - openstack endpoint create --region RegionOne image public http://controller:9292 - openstack endpoint create --region RegionOne image internal http://controller:9292 - openstack endpoint create --region RegionOne image admin http://controller:9292 - ``` - -2. 安装软件包 - - ```shell - yum install openstack-glance - ``` - -3. 
配置glance相关配置: - - ```shell - vim /etc/glance/glance-api.conf - - [database] - connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance - - [keystone_authtoken] - www_authenticate_uri = http://controller:5000 - auth_url = http://controller:5000 - memcached_servers = controller:11211 - auth_type = password - project_domain_name = Default - user_domain_name = Default - project_name = service - username = glance - password = GLANCE_PASS - - [paste_deploy] - flavor = keystone - - [glance_store] - stores = file,http - default_store = file - filesystem_store_datadir = /var/lib/glance/images/ - ``` - - ```shell - vim /etc/glance/glance-registry.conf - - [database] - connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance - - [keystone_authtoken] - www_authenticate_uri = http://controller:5000 - auth_url = http://controller:5000 - memcached_servers = controller:11211 - auth_type = password - project_domain_name = Default - user_domain_name = Default - project_name = service - username = glance - password = GLANCE_PASS - - [paste_deploy] - flavor = keystone - - [glance_store] - stores = file,http - default_store = file - filesystem_store_datadir = /var/lib/glance/images/ - ``` - - ***解释:*** - - [database]部分,配置数据库入口 - - [keystone_authtoken] [paste_deploy]部分,配置身份认证服务入口 - - [glance_store]部分,配置本地文件系统存储和镜像文件的位置 - - ***注意*** - - **替换 `GLANCE_DBPASS` 为 glance 数据库的密码** - - **替换 `GLANCE_PASS` 为 glance 用户的密码** - -4. 同步数据库: - - ```shell - su -s /bin/sh -c "glance-manage db_sync" glance - ``` - -5. 启动服务: - - ```shell - systemctl enable openstack-glance-api.service openstack-glance-registry.service - systemctl start openstack-glance-api.service openstack-glance-registry.service - ``` - -6. 验证 - - 下载镜像 - - ```shell - source ~/.admin-openrc - - wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img - ``` - - ***注意*** - - **如果您使用的环境是鲲鹏架构,请下载arm64版本的镜像** - - 向Image服务上传镜像: - - ```shell - openstack image create --disk-format qcow2 --container-format bare \ - --file cirros-0.4.0-x86_64-disk.img --public cirros - ``` - - 确认镜像上传并验证属性: - - ```shell - openstack image list - ``` - -### Nova 安装 - -1. 
创建数据库、服务凭证和 API 端点 - - 创建数据库: - - ```sql - mysql -u root -p (CPT) - - MariaDB [(none)]> CREATE DATABASE nova_api; - MariaDB [(none)]> CREATE DATABASE nova; - MariaDB [(none)]> CREATE DATABASE nova_cell0; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \ - IDENTIFIED BY 'NOVA_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \ - IDENTIFIED BY 'NOVA_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \ - IDENTIFIED BY 'NOVA_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \ - IDENTIFIED BY 'NOVA_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \ - IDENTIFIED BY 'NOVA_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \ - IDENTIFIED BY 'NOVA_DBPASS'; - MariaDB [(none)]> exit - ``` - - ***注意*** - - **替换NOVA_DBPASS,为nova数据库设置密码** - - ```shell - source ~/.admin-openrc (CPT) - ``` - - 创建nova服务凭证: - - ```shell - openstack user create --domain default --password-prompt nova (CTP) - openstack role add --project service --user nova admin (CPT) - openstack service create --name nova --description "OpenStack Compute" compute (CPT) - ``` - - 创建placement服务凭证: - - ```shell - openstack user create --domain default --password-prompt placement (CPT) - openstack role add --project service --user placement admin (CPT) - openstack service create --name placement --description "Placement API" placement (CPT) - ``` - - 创建nova API端点: - - ```shell - openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1 (CPT) - openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1 (CPT) - openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1 (CPT) - ``` - - 创建placement API端点: - - ```shell - openstack endpoint create --region RegionOne placement public http://controller:8778 (CPT) - openstack endpoint create --region RegionOne placement internal http://controller:8778 (CPT) - openstack endpoint create --region RegionOne placement admin http://controller:8778 (CPT) - ``` - -2. 安装软件包 - - ```shell - yum install openstack-nova-api openstack-nova-conductor openstack-nova-console \ - novnc openstack-nova-novncproxy openstack-nova-scheduler \ - openstack-nova-placement-api (CTL) - - yum install openstack-nova-compute (CPT) - ``` - - ***注意*** - - **如果为arm64结构,还需要执行以下命令** - - ```shell - yum install edk2-aarch64 (CPT) - ``` - -3. 
配置nova相关配置 - - ```shell - vim /etc/nova/nova.conf - - [DEFAULT] - enabled_apis = osapi_compute,metadata - transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/ - my_ip = 10.0.0.11 - use_neutron = true - firewall_driver = nova.virt.firewall.NoopFirewallDriver - compute_driver=libvirt.LibvirtDriver (CPT) - instances_path = /var/lib/nova/instances/ (CPT) - lock_path = /var/lib/nova/tmp (CPT) - - [api_database] - connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api (CTL) - - [database] - connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova (CTL) - - [api] - auth_strategy = keystone - - [keystone_authtoken] - www_authenticate_uri = http://controller:5000/ - auth_url = http://controller:5000/ - memcached_servers = controller:11211 - auth_type = password - project_domain_name = Default - user_domain_name = Default - project_name = service - username = nova - password = NOVA_PASS - - [vnc] - enabled = true - server_listen = $my_ip - server_proxyclient_address = $my_ip - novncproxy_base_url = http://controller:6080/vnc_auto.html (CPT) - - [glance] - api_servers = http://controller:9292 - - [oslo_concurrency] - lock_path = /var/lib/nova/tmp (CTL) - - [placement] - region_name = RegionOne - project_domain_name = Default - project_name = service - auth_type = password - user_domain_name = Default - auth_url = http://controller:5000/v3 - username = placement - password = PLACEMENT_PASS - - [neutron] - auth_url = http://controller:5000 - auth_type = password - project_domain_name = default - user_domain_name = default - region_name = RegionOne - project_name = service - username = neutron - password = NEUTRON_PASS - service_metadata_proxy = true (CTL) - metadata_proxy_shared_secret = METADATA_SECRET (CTL) - ``` - - ***解释*** - - [default]部分,启用计算和元数据的API,配置RabbitMQ消息队列入口,配置my_ip,启用网络服务neutron; - - [api_database] [database]部分,配置数据库入口; - - [api] [keystone_authtoken]部分,配置身份认证服务入口; - - [vnc]部分,启用并配置远程控制台入口; - - [glance]部分,配置镜像服务API的地址; - - [oslo_concurrency]部分,配置lock path; - - [placement]部分,配置placement服务的入口。 - - ***注意*** - - **替换 `RABBIT_PASS` 为 RabbitMQ 中 openstack 账户的密码;** - - **配置 `my_ip` 为控制节点的管理IP地址;** - - **替换 `NOVA_DBPASS` 为nova数据库的密码;** - - **替换 `NOVA_PASS` 为nova用户的密码;** - - **替换 `PLACEMENT_PASS` 为placement用户的密码;** - - **替换 `NEUTRON_PASS` 为neutron用户的密码;** - - **替换`METADATA_SECRET`为合适的元数据代理secret。** - - **额外** - - 手动增加Placement API接入配置。 - - ```shell - vim /etc/httpd/conf.d/00-nova-placement-api.conf (CTL) - - <Directory /usr/bin> - <IfVersion >= 2.4> - Require all granted - </IfVersion> - <IfVersion < 2.4> - Order allow,deny - Allow from all - </IfVersion> - </Directory> - ``` - - 重启httpd服务: - - ```shell - systemctl restart httpd (CTL) - ``` - - 确定是否支持虚拟机硬件加速(x86架构): - - ```shell - egrep -c '(vmx|svm)' /proc/cpuinfo (CPT) - ``` - - 如果返回值为0则不支持硬件加速,需要配置libvirt使用QEMU而不是KVM: - - ```shell - vim /etc/nova/nova.conf (CPT) - - [libvirt] - virt_type = qemu - ``` - - 如果返回值为1或更大的值,则支持硬件加速,不需要进行额外的配置 - - ***注意*** - - **如果为arm64架构,还需要在计算节点执行以下命令** - - ```shell - mkdir -p /usr/share/AAVMF - chown nova:nova /usr/share/AAVMF - - ln -s /usr/share/edk2/aarch64/QEMU_EFI-pflash.raw \ - /usr/share/AAVMF/AAVMF_CODE.fd - ln -s /usr/share/edk2/aarch64/vars-template-pflash.raw \ - /usr/share/AAVMF/AAVMF_VARS.fd - - vim /etc/libvirt/qemu.conf - - nvram = ["/usr/share/AAVMF/AAVMF_CODE.fd:/usr/share/AAVMF/AAVMF_VARS.fd", - "/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw:/usr/share/edk2/aarch64/vars-template-pflash.raw"] - ``` - - 并且当ARM架构下的部署环境为嵌套虚拟化时,`libvirt`配置如下: - - ```shell - [libvirt] - virt_type = qemu - cpu_mode = custom - cpu_model = cortex-a72 - ``` - -4. 
同步数据库 - - 同步nova-api数据库: - - ```shell - su -s /bin/sh -c "nova-manage api_db sync" nova (CTL) - ``` - - 注册cell0数据库: - - ```shell - su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova (CTL) - ``` - - 创建cell1 cell: - - ```shell - su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova (CTL) - ``` - - 同步nova数据库: - - ```shell - su -s /bin/sh -c "nova-manage db sync" nova (CTL) - ``` - - 验证cell0和cell1注册正确: - - ```shell - su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova (CTL) - ``` - - 添加计算节点到openstack集群 - - ```shell - su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova (CPT) - ``` - -5. 启动服务 - - ```shell - systemctl enable \ (CTL) - openstack-nova-api.service \ - openstack-nova-consoleauth.service \ - openstack-nova-scheduler.service \ - openstack-nova-conductor.service \ - openstack-nova-novncproxy.service - - systemctl start \ (CTL) - openstack-nova-api.service \ - openstack-nova-consoleauth.service \ - openstack-nova-scheduler.service \ - openstack-nova-conductor.service \ - openstack-nova-novncproxy.service - ``` - - ```shell - systemctl enable libvirtd.service openstack-nova-compute.service (CPT) - systemctl start libvirtd.service openstack-nova-compute.service (CPT) - ``` - -6. 验证 - - ```shell - source ~/.admin-openrc (CTL) - ``` - - 列出服务组件,验证每个流程都成功启动和注册: - - ```shell - openstack compute service list (CTL) - ``` - - 列出身份服务中的API端点,验证与身份服务的连接: - - ```shell - openstack catalog list (CTL) - ``` - - 列出镜像服务中的镜像,验证与镜像服务的连接: - - ```shell - openstack image list (CTL) - ``` - - 检查cells和placement API是否运作成功,以及其他必要条件是否已具备。 - - ```shell - nova-status upgrade check (CTL) - ``` - -### Neutron 安装 - -1. 创建数据库、服务凭证和 API 端点 - - 创建数据库: - - ```sql - mysql -u root -p (CTL) - - MariaDB [(none)]> CREATE DATABASE neutron; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \ - IDENTIFIED BY 'NEUTRON_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \ - IDENTIFIED BY 'NEUTRON_DBPASS'; - MariaDB [(none)]> exit - ``` - - ***注意*** - - **替换 `NEUTRON_DBPASS` 为 neutron 数据库设置密码。** - - ```shell - source ~/.admin-openrc (CTL) - ``` - - 创建neutron服务凭证 - - ```shell - openstack user create --domain default --password-prompt neutron (CTL) - openstack role add --project service --user neutron admin (CTL) - openstack service create --name neutron --description "OpenStack Networking" network (CTL) - ``` - - 创建Neutron服务API端点: - - ```shell - openstack endpoint create --region RegionOne network public http://controller:9696 (CTL) - openstack endpoint create --region RegionOne network internal http://controller:9696 (CTL) - openstack endpoint create --region RegionOne network admin http://controller:9696 (CTL) - ``` - -2. 安装软件包: - - ```shell - yum install openstack-neutron openstack-neutron-linuxbridge-agent \ (CTL) - ebtables ipset openstack-neutron-l3-agent \ - openstack-neutron-dhcp-agent \ - openstack-neutron-metadata-agent - ``` - - ```shell - yum install openstack-neutron-linuxbridge-agent ebtables ipset (CPT) - ``` - -3. 
配置neutron相关配置: - - 配置主体配置 - - ```shell - vim /etc/neutron/neutron.conf - - [database] - connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron (CTL) - - [DEFAULT] - core_plugin = ml2 (CTL) - service_plugins = router (CTL) - allow_overlapping_ips = true (CTL) - transport_url = rabbit://openstack:RABBIT_PASS@controller - auth_strategy = keystone - notify_nova_on_port_status_changes = true (CTL) - notify_nova_on_port_data_changes = true (CTL) - api_workers = 3 (CTL) - - [keystone_authtoken] - www_authenticate_uri = http://controller:5000 - auth_url = http://controller:5000 - memcached_servers = controller:11211 - auth_type = password - project_domain_name = Default - user_domain_name = Default - project_name = service - username = neutron - password = NEUTRON_PASS - - [nova] - auth_url = http://controller:5000 (CTL) - auth_type = password (CTL) - project_domain_name = Default (CTL) - user_domain_name = Default (CTL) - region_name = RegionOne (CTL) - project_name = service (CTL) - username = nova (CTL) - password = NOVA_PASS (CTL) - - [oslo_concurrency] - lock_path = /var/lib/neutron/tmp - ``` - - ***解释*** - - [database]部分,配置数据库入口; - - [default]部分,启用ml2插件和router插件,允许ip地址重叠,配置RabbitMQ消息队列入口; - - [default] [keystone]部分,配置身份认证服务入口; - - [default] [nova]部分,配置网络来通知计算网络拓扑的变化; - - [oslo_concurrency]部分,配置lock path。 - - ***注意*** - - **替换`NEUTRON_DBPASS`为 neutron 数据库的密码;** - - **替换`RABBIT_PASS`为 RabbitMQ中openstack 账户的密码;** - - **替换`NEUTRON_PASS`为 neutron 用户的密码;** - - **替换`NOVA_PASS`为 nova 用户的密码。** - - 配置ML2插件: - - ```shell - vim /etc/neutron/plugins/ml2/ml2_conf.ini - - [ml2] - type_drivers = flat,vlan,vxlan - tenant_network_types = vxlan - mechanism_drivers = linuxbridge,l2population - extension_drivers = port_security - - [ml2_type_flat] - flat_networks = provider - - [ml2_type_vxlan] - vni_ranges = 1:1000 - - [securitygroup] - enable_ipset = true - ``` - - 创建/etc/neutron/plugin.ini的符号链接 - - ```shell - ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini - ``` - - **注意** - - **[ml2]部分,启用 flat、vlan、vxlan 网络,启用 linuxbridge 及 l2population 机制,启用端口安全扩展驱动;** - - **[ml2_type_flat]部分,配置 flat 网络为 provider 虚拟网络;** - - **[ml2_type_vxlan]部分,配置 VXLAN 网络标识符范围;** - - **[securitygroup]部分,配置允许 ipset。** - - **补充** - - **l2 的具体配置可以根据用户需求自行修改,本文使用的是provider network + linuxbridge** - - 配置 Linux bridge 代理: - - ```shell - vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini - - [linux_bridge] - physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME - - [vxlan] - enable_vxlan = true - local_ip = OVERLAY_INTERFACE_IP_ADDRESS - l2_population = true - - [securitygroup] - enable_security_group = true - firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver - ``` - - ***解释*** - - [linux_bridge]部分,映射 provider 虚拟网络到物理网络接口; - - [vxlan]部分,启用 vxlan 覆盖网络,配置处理覆盖网络的物理网络接口 IP 地址,启用 layer-2 population; - - [securitygroup]部分,允许安全组,配置 linux bridge iptables 防火墙驱动。 - - ***注意*** - - **替换`PROVIDER_INTERFACE_NAME`为物理网络接口;** - - **替换`OVERLAY_INTERFACE_IP_ADDRESS`为控制节点的管理IP地址。** - - 配置Layer-3代理: - - ```shell - vim /etc/neutron/l3_agent.ini (CTL) - - [DEFAULT] - interface_driver = linuxbridge - ``` - - ***解释*** - - 在[default]部分,配置接口驱动为linuxbridge - - 配置DHCP代理: - - ```shell - vim /etc/neutron/dhcp_agent.ini (CTL) - - [DEFAULT] - interface_driver = linuxbridge - dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq - enable_isolated_metadata = true - ``` - - ***解释*** - - [default]部分,配置linuxbridge接口驱动、Dnsmasq DHCP驱动,启用隔离的元数据。 - - 配置metadata代理: - - ```shell - vim /etc/neutron/metadata_agent.ini (CTL) 
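    # 注:下方的 METADATA_SECRET 需与后续 nova.conf 中 [neutron] 段配置的
    # metadata_proxy_shared_secret 保持一致,二者不同将导致元数据代理不可用。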
- - [DEFAULT] - nova_metadata_host = controller - metadata_proxy_shared_secret = METADATA_SECRET - ``` - - ***解释*** - - [default]部分,配置元数据主机和shared secret。 - - ***注意*** - - **替换`METADATA_SECRET`为合适的元数据代理secret。** - -4. 配置nova相关配置 - - ```shell - vim /etc/nova/nova.conf - - [neutron] - auth_url = http://controller:5000 - auth_type = password - project_domain_name = Default - user_domain_name = Default - region_name = RegionOne - project_name = service - username = neutron - password = NEUTRON_PASS - service_metadata_proxy = true (CTL) - metadata_proxy_shared_secret = METADATA_SECRET (CTL) - ``` - - ***解释*** - - [neutron]部分,配置访问参数,启用元数据代理,配置secret。 - - ***注意*** - - **替换`NEUTRON_PASS`为 neutron 用户的密码;** - - **替换`METADATA_SECRET`为合适的元数据代理secret。** - -5. 同步数据库: - - ```shell - su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \ - --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron - ``` - -6. 重启计算API服务: - - ```shell - systemctl restart openstack-nova-api.service - ``` - -7. 启动网络服务 - - ```shell - systemctl enable openstack-neutron-server.service \ (CTL) - openstack-neutron-linuxbridge-agent.service openstack-neutron-dhcp-agent.service \ - openstack-neutron-metadata-agent.service openstack-neutron-l3-agent.service - systemctl restart openstack-nova-api.service openstack-neutron-server.service \ (CTL) - openstack-neutron-linuxbridge-agent.service openstack-neutron-dhcp-agent.service \ - openstack-neutron-metadata-agent.service openstack-neutron-l3-agent.service - - systemctl enable openstack-neutron-linuxbridge-agent.service (CPT) - systemctl restart openstack-neutron-linuxbridge-agent.service openstack-nova-compute.service (CPT) - ``` - -8. 验证 - - 列出代理验证 neutron 代理启动成功: - - ```shell - openstack network agent list - ``` - -### Cinder 安装 - -1. 创建数据库、服务凭证和 API 端点 - - 创建数据库: - - ```sql - mysql -u root -p - - MariaDB [(none)]> CREATE DATABASE cinder; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \ - IDENTIFIED BY 'CINDER_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \ - IDENTIFIED BY 'CINDER_DBPASS'; - MariaDB [(none)]> exit - ``` - - ***注意*** - - **替换 `CINDER_DBPASS` 为cinder数据库设置密码。** - - ```shell - source ~/.admin-openrc - ``` - - 创建cinder服务凭证: - - ```shell - openstack user create --domain default --password-prompt cinder - openstack role add --project service --user cinder admin - openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2 - openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3 - ``` - - 创建块存储服务API端点: - - ```shell - openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(project_id\)s - openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(project_id\)s - openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(project_id\)s - openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s - openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s - openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s - ``` - -2. 安装软件包: - - ```shell - yum install openstack-cinder-api openstack-cinder-scheduler (CTL) - ``` - - ```shell - yum install lvm2 device-mapper-persistent-data scsi-target-utils rpcbind nfs-utils \ (CPT) - openstack-cinder-volume openstack-cinder-backup - ``` - -3. 
准备存储设备,以下仅为示例: - - ```shell - pvcreate /dev/vdb - vgcreate cinder-volumes /dev/vdb - - vim /etc/lvm/lvm.conf - - - devices { - ... - filter = [ "a/vdb/", "r/.*/"] - ``` - - ***解释*** - - 在devices部分,添加过滤以接受/dev/vdb设备拒绝其他设备。 - -4. 准备NFS - - ```shell - mkdir -p /root/cinder/backup - - cat << EOF >> /etc/exports - /root/cinder/backup 192.168.1.0/24(rw,sync,no_root_squash,no_all_squash) - EOF - - ``` - -5. 配置cinder相关配置: - - ```shell - vim /etc/cinder/cinder.conf - - [DEFAULT] - transport_url = rabbit://openstack:RABBIT_PASS@controller - auth_strategy = keystone - my_ip = 10.0.0.11 - enabled_backends = lvm (CPT) - backup_driver=cinder.backup.drivers.nfs.NFSBackupDriver (CPT) - backup_share=HOST:PATH (CPT) - - [database] - connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder - - [keystone_authtoken] - www_authenticate_uri = http://controller:5000 - auth_url = http://controller:5000 - memcached_servers = controller:11211 - auth_type = password - project_domain_name = Default - user_domain_name = Default - project_name = service - username = cinder - password = CINDER_PASS - - [oslo_concurrency] - lock_path = /var/lib/cinder/tmp - - [lvm] - volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver (CPT) - volume_group = cinder-volumes (CPT) - iscsi_protocol = iscsi (CPT) - iscsi_helper = tgtadm (CPT) - ``` - - ***解释*** - - [database]部分,配置数据库入口; - - [DEFAULT]部分,配置RabbitMQ消息队列入口,配置my_ip; - - [DEFAULT] [keystone_authtoken]部分,配置身份认证服务入口; - - [oslo_concurrency]部分,配置lock path。 - - ***注意*** - - **替换`CINDER_DBPASS`为 cinder 数据库的密码;** - - **替换`RABBIT_PASS`为 RabbitMQ 中 openstack 账户的密码;** - - **配置`my_ip`为控制节点的管理 IP 地址;** - - **替换`CINDER_PASS`为 cinder 用户的密码;** - - **替换`HOST:PATH`为 NFS 服务端的 HOST IP 和共享路径。** - -6. 同步数据库: - - ```shell - su -s /bin/sh -c "cinder-manage db sync" cinder (CTL) - ``` - -7. 配置nova: - - ```shell - vim /etc/nova/nova.conf (CTL) - - [cinder] - os_region_name = RegionOne - ``` - -8. 重启计算API服务 - - ```shell - systemctl restart openstack-nova-api.service - ``` - -9. 启动cinder服务 - - ```shell - systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service (CTL) - systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service (CTL) - ``` - - ```shell - systemctl enable rpcbind.service nfs-server.service tgtd.service iscsid.service \ (CPT) - openstack-cinder-volume.service \ - openstack-cinder-backup.service - systemctl start rpcbind.service nfs-server.service tgtd.service iscsid.service \ (CPT) - openstack-cinder-volume.service \ - openstack-cinder-backup.service - ``` - - ***注意*** - - 当cinder使用tgtadm的方式挂卷的时候,要修改/etc/tgt/tgtd.conf,内容如下,保证tgtd可以发现cinder-volume的iscsi target。 - - ``` - include /var/lib/cinder/volumes/* - ``` - -10. 验证 - - ```shell - source ~/.admin-openrc - openstack volume service list - ``` - -### horizon 安装 - -1. 安装软件包 - - ```shell - yum install openstack-dashboard - ``` - -2. 修改文件 - - 修改变量 - - ```text - vim /etc/openstack-dashboard/local_settings - - ALLOWED_HOSTS = ['*', ] - OPENSTACK_HOST = "controller" - OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST - ``` - -3. 重启 httpd 服务 - - ```shell - systemctl restart httpd - ``` - -4. 验证 - 打开浏览器,输入网址(通常为 `http://HOSTIP/dashboard`),登录 horizon。 - - ***注意*** - - **替换HOSTIP为控制节点管理平面IP地址** - -### Tempest 安装 - -Tempest是OpenStack的集成测试服务,如果用户需要全面自动化测试已安装的OpenStack环境的功能,则推荐使用该组件。否则,可以不用安装。 - -1. 安装Tempest - - ```shell - yum install openstack-tempest - ``` - -2. 初始化目录 - - ```shell - tempest init mytest - ``` - -3. 
修改配置文件。 - - ```shell - cd mytest - vi etc/tempest.conf - ``` - - tempest.conf中需要配置当前OpenStack环境的信息,具体内容可以参考[官方示例](https://docs.openstack.org/tempest/latest/sampleconf.html) - -4. 执行测试 - - ```shell - tempest run - ``` - -### Ironic 安装 - -Ironic是OpenStack的裸金属服务,如果用户需要进行裸机部署则推荐使用该组件。否则,可以不用安装。 - -1. 设置数据库 - - 裸金属服务在数据库中存储信息,创建一个**ironic**用户可以访问的**ironic**数据库,替换**IRONIC_DBPASSWORD**为合适的密码 - - ```sql - mysql -u root -p - - MariaDB [(none)]> CREATE DATABASE ironic CHARACTER SET utf8; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'localhost' \ - IDENTIFIED BY 'IRONIC_DBPASSWORD'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'%' \ - IDENTIFIED BY 'IRONIC_DBPASSWORD'; - ``` - -2. 安装软件包 - - ```shell - yum install openstack-ironic-api openstack-ironic-conductor python2-ironicclient - ``` - - 启动服务 - - ```shell - systemctl enable openstack-ironic-api openstack-ironic-conductor - systemctl start openstack-ironic-api openstack-ironic-conductor - ``` - - -3. 创建服务用户认证 - - 1、创建Bare Metal服务用户 - - ```shell - openstack user create --password IRONIC_PASSWORD \ - --email ironic@example.com ironic - openstack role add --project service --user ironic admin - openstack service create --name ironic --description "Ironic baremetal provisioning service" baremetal - - ``` - - 2、创建Bare Metal服务访问入口 - - ```shell - openstack endpoint create --region RegionOne baremetal admin http://$IRONIC_NODE:6385 - openstack endpoint create --region RegionOne baremetal public http://$IRONIC_NODE:6385 - openstack endpoint create --region RegionOne baremetal internal http://$IRONIC_NODE:6385 - ``` - -4. 配置ironic-api服务 - - 配置文件路径/etc/ironic/ironic.conf - - 1、通过**connection**选项配置数据库的位置,如下所示,替换**IRONIC_DBPASSWORD**为**ironic**用户的密码,替换**DB_IP**为DB服务器所在的IP地址: - - ```shell - [database] - - # The SQLAlchemy connection string used to connect to the - # database (string value) - - connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic - ``` - - 2、通过以下选项配置ironic-api服务使用RabbitMQ消息代理,替换**RPC_\***为RabbitMQ的详细地址和凭证 - - ```shell - [DEFAULT] - - # A URL representing the messaging driver to use and its full - # configuration. (string value) - - transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/ - ``` - - 用户也可自行使用json-rpc方式替换rabbitmq - - 3、配置ironic-api服务使用身份认证服务的凭证,替换**PUBLIC_IDENTITY_IP**为身份认证服务器的公共IP,替换**PRIVATE_IDENTITY_IP**为身份认证服务器的私有IP,替换**IRONIC_PASSWORD**为身份认证服务中**ironic**用户的密码: - - ```shell - [DEFAULT] - - # Authentication strategy used by ironic-api: one of - # "keystone" or "noauth". "noauth" should not be used in a - # production environment because all authentication will be - # disabled. (string value) - - auth_strategy=keystone - - [keystone_authtoken] - # Authentication type to load (string value) - auth_type=password - # Complete public Identity API endpoint (string value) - www_authenticate_uri=http://PUBLIC_IDENTITY_IP:5000 - # Complete admin Identity API endpoint. (string value) - auth_url=http://PRIVATE_IDENTITY_IP:5000 - # Service username. (string value) - username=ironic - # Service account password. (string value) - password=IRONIC_PASSWORD - # Service tenant name. (string value) - project_name=service - # Domain name containing project (string value) - project_domain_name=Default - # User's domain name (string value) - user_domain_name=Default - ``` - - 4、创建裸金属服务数据库表 - - ```shell - ironic-dbsync --config-file /etc/ironic/ironic.conf create_schema - ``` - - 5、重启ironic-api服务 - - ```shell - sudo systemctl restart openstack-ironic-api - ``` - -5. 
配置ironic-conductor服务 - - 1、替换**HOST_IP**为conductor host的IP - - ```shell - [DEFAULT] - - # IP address of this host. If unset, will determine the IP - # programmatically. If unable to do so, will use "127.0.0.1". - # (string value) - - my_ip=HOST_IP - ``` - - 2、配置数据库的位置,ironic-conductor应该使用和ironic-api相同的配置。替换**IRONIC_DBPASSWORD**为**ironic**用户的密码,替换DB_IP为DB服务器所在的IP地址: - - ```shell - [database] - - # The SQLAlchemy connection string to use to connect to the - # database. (string value) - - connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic - ``` - - 3、通过以下选项配置ironic-api服务使用RabbitMQ消息代理,ironic-conductor应该使用和ironic-api相同的配置,替换**RPC_\***为RabbitMQ的详细地址和凭证 - - ```shell - [DEFAULT] - - # A URL representing the messaging driver to use and its full - # configuration. (string value) - - transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/ - ``` - - 用户也可自行使用json-rpc方式替换rabbitmq - - 4、配置凭证访问其他OpenStack服务 - - 为了与其他OpenStack服务进行通信,裸金属服务在请求其他服务时需要使用服务用户与OpenStack Identity服务进行认证。这些用户的凭据必须在与相应服务相关的每个配置文件中进行配置。 - - ```shell - [neutron] - 访问Openstack网络服务 - [glance] - 访问Openstack镜像服务 - [swift] - 访问Openstack对象存储服务 - [cinder] - 访问Openstack块存储服务 - [inspector] - 访问Openstack裸金属introspection服务 - [service_catalog] - 一个特殊项用于保存裸金属服务使用的凭证,该凭证用于发现注册在Openstack身份认证服务目录中的自己的API URL端点 - ``` - - 简单起见,可以对所有服务使用同一个服务用户。为了向后兼容,该用户应该和ironic-api服务的[keystone_authtoken]所配置的为同一个用户。但这不是必须的,也可以为每个服务创建并配置不同的服务用户。 - - 在下面的示例中,用户访问openstack网络服务的身份验证信息配置为: - - ```shell - 网络服务部署在名为RegionOne的身份认证服务域中,仅在服务目录中注册公共端点接口 - - 请求时使用特定的CA SSL证书进行HTTPS连接 - - 与ironic-api服务配置相同的服务用户 - - 动态密码认证插件基于其他选项发现合适的身份认证服务API版本 - ``` - - ```shell - [neutron] - - # Authentication type to load (string value) - auth_type = password - # Authentication URL (string value) - auth_url=https://IDENTITY_IP:5000/ - # Username (string value) - username=ironic - # User's password (string value) - password=IRONIC_PASSWORD - # Project name to scope to (string value) - project_name=service - # Domain ID containing project (string value) - project_domain_id=default - # User's domain id (string value) - user_domain_id=default - # PEM encoded Certificate Authority to use when verifying - # HTTPs connections. (string value) - cafile=/opt/stack/data/ca-bundle.pem - # The default region_name for endpoint URL discovery. (string - # value) - region_name = RegionOne - # List of interfaces, in order of preference, for endpoint - # URL. (list value) - valid_interfaces=public - ``` - - 默认情况下,为了与其他服务进行通信,裸金属服务会尝试通过身份认证服务的服务目录发现该服务合适的端点。如果希望对一个特定服务使用一个不同的端点,则在裸金属服务的配置文件中通过endpoint_override选项进行指定: - - ```shell - [neutron] ... endpoint_override = - ``` - - 5、配置允许的驱动程序和硬件类型 - - 通过设置enabled_hardware_types设置ironic-conductor服务允许使用的硬件类型: - - ```shell - [DEFAULT] enabled_hardware_types = ipmi - ``` - - 配置硬件接口: - - ```shell - enabled_boot_interfaces = pxe enabled_deploy_interfaces = direct,iscsi enabled_inspect_interfaces = inspector enabled_management_interfaces = ipmitool enabled_power_interfaces = ipmitool - ``` - - 配置接口默认值: - - ```shell - [DEFAULT] default_deploy_interface = direct default_network_interface = neutron - ``` - - 如果启用了任何使用Direct deploy的驱动,必须安装和配置镜像服务的Swift后端。Ceph对象网关(RADOS网关)也支持作为镜像服务的后端。 - - 6、重启ironic-conductor服务 - - ```shell - sudo systemctl restart openstack-ironic-conductor - ``` - -6. 
deploy ramdisk镜像制作 - - Q版的ramdisk镜像支持通过ironic-python-agent服务或disk-image-builder工具制作,也可以使用社区最新的ironic-python-agent-builder。用户也可以自行选择其他工具制作。 - 若使用Q版原生工具,则需要安装对应的软件包。 - - ``` - yum install openstack-ironic-python-agent - 或者 - yum install diskimage-builder - ``` - 具体的使用方法可以参考[官方文档](https://docs.openstack.org/ironic/queens/install/deploy-ramdisk.html) - - 这里介绍下使用ironic-python-agent-builder构建ironic使用的deploy镜像的完整过程。 - - 1. 安装 ironic-python-agent-builder - - - 1. 安装工具: - - ```shell - pip install ironic-python-agent-builder - ``` - - 2. 修改以下文件中的python解释器: - - ```shell - /usr/bin/yum /usr/libexec/urlgrabber-ext-down - ``` - - 3. 安装其它必须的工具: - - ```shell - yum install git - ``` - - 由于`DIB`依赖`semanage`命令,所以在制作镜像之前确定该命令是否可用:`semanage --help`,如果提示无此命令,安装即可: - - ```shell - # 先查询需要安装哪个包 - [root@localhost ~]# yum provides /usr/sbin/semanage - 已加载插件:fastestmirror - Loading mirror speeds from cached hostfile - * base: mirror.vcu.edu - * extras: mirror.vcu.edu - * updates: mirror.math.princeton.edu - policycoreutils-python-2.5-34.el7.aarch64 : SELinux policy core python utilities - 源 :base - 匹配来源: - 文件名 :/usr/sbin/semanage - # 安装 - [root@localhost ~]# yum install policycoreutils-python - ``` - - 2. 制作镜像 - - 如果是`arm`架构,需要添加: - ```shell - export ARCH=aarch64 - ``` - - 基本用法: - - ```shell - usage: ironic-python-agent-builder [-h] [-r RELEASE] [-o OUTPUT] [-e ELEMENT] - [-b BRANCH] [-v] [--extra-args EXTRA_ARGS] - distribution - - positional arguments: - distribution Distribution to use - - optional arguments: - -h, --help show this help message and exit - -r RELEASE, --release RELEASE - Distribution release to use - -o OUTPUT, --output OUTPUT - Output base file name - -e ELEMENT, --element ELEMENT - Additional DIB element to use - -b BRANCH, --branch BRANCH - If set, override the branch that is used for ironic- - python-agent and requirements - -v, --verbose Enable verbose logging in diskimage-builder - --extra-args EXTRA_ARGS - Extra arguments to pass to diskimage-builder - ``` - - 举例说明: - - ```shell - ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky - ``` - - 3. 允许ssh登陆 - - 初始化环境变量,然后制作镜像: - - ```shell - export DIB_DEV_USER_USERNAME=ipa \ - export DIB_DEV_USER_PWDLESS_SUDO=yes \ - export DIB_DEV_USER_PASSWORD='123' - ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky -e selinux-permissive -e devuser - ``` - - 4. 
指定代码仓库 - - 初始化对应的环境变量,然后制作镜像: - - ```shell - # 指定仓库地址以及版本 - DIB_REPOLOCATION_ironic_python_agent=git@172.20.2.149:liuzz/ironic-python-agent.git - DIB_REPOREF_ironic_python_agent=origin/develop - - # 直接从gerrit上clone代码 - DIB_REPOLOCATION_ironic_python_agent=https://review.opendev.org/openstack/ironic-python-agent - DIB_REPOREF_ironic_python_agent=refs/changes/43/701043/1 - ``` - - 参考:[source-repositories](https://docs.openstack.org/diskimage-builder/latest/elements/source-repositories/README.html)。 - - 指定仓库地址及版本验证成功。 - -在Queens中,我们还提供了ironic-inspector等服务,用户可根据自身需求安装。 - -### Kolla 安装 - -Kolla 为 OpenStack 服务提供生产环境可用的容器化部署的功能。openEuler 20.03 LTS SP2中已经引入了Kolla和Kolla-ansible服务,但是Kolla 以及 Kolla-ansible 原生并不支持 openEuler, -因此 Openstack SIG 在openEuler 20.03 LTS SP3中提供了 `openstack-kolla-plugin` 和 `openstack-kolla-ansible-plugin` 这两个补丁包。 - -Kolla的安装十分简单,只需要安装对应的RPM包即可 - -支持 openEuler 版本: - -```shell -yum install openstack-kolla-plugin openstack-kolla-ansible-plugin -``` - -不支持 openEuler 版本: - -```shell -yum install openstack-kolla openstack-kolla-ansible -``` - -安装完后,就可以使用`kolla-ansible`, `kolla-build`, `kolla-genpwd`, `kolla-mergepwd`等命令了。 - -### Trove 安装 -Trove是OpenStack的数据库服务,如果用户使用OpenStack提供的数据库服务则推荐使用该组件。否则,可以不用安装。 - -1. 设置数据库 - - 数据库服务在数据库中存储信息,创建一个**trove**用户可以访问的**trove**数据库,替换**TROVE_DBPASSWORD**为合适的密码 - - ```sql - mysql -u root -p - - MariaDB [(none)]> CREATE DATABASE trove CHARACTER SET utf8; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'localhost' \ - IDENTIFIED BY 'TROVE_DBPASSWORD'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'%' \ - IDENTIFIED BY 'TROVE_DBPASSWORD'; - ``` - -2. 创建服务用户认证 - - 1、创建**Trove**服务用户 - - ```shell - openstack user create --password TROVE_PASSWORD \ - --email trove@example.com trove - openstack role add --project service --user trove admin - openstack service create --name trove --description "Database service" database - ``` - **解释:** `TROVE_PASSWORD` 替换为`trove`用户的密码 - - 2、创建**Database**服务访问入口 - - ```shell - openstack endpoint create --region RegionOne database public http://$TROVE_NODE:8779/v1.0/%\(tenant_id\)s - openstack endpoint create --region RegionOne database internal http://$TROVE_NODE:8779/v1.0/%\(tenant_id\)s - openstack endpoint create --region RegionOne database admin http://$TROVE_NODE:8779/v1.0/%\(tenant_id\)s - ``` - **解释:** `$TROVE_NODE` 替换为Trove的API服务部署节点 - -3. 安装和配置**Trove**各组件 - 1、安装**Trove**包 - ```shell script - yum install openstack-trove python2-troveclient - ``` - 2. 
配置`trove.conf` - ```shell script - vim /etc/trove/trove.conf - - [DEFAULT] - bind_host=TROVE_NODE_IP - log_dir = /var/log/trove - - auth_strategy = keystone - # Config option for showing the IP address that nova doles out - add_addresses = True - network_label_regex = ^NETWORK_LABEL$ - api_paste_config = /etc/trove/api-paste.ini - - trove_auth_url = http://controller:35357/v3/ - nova_compute_url = http://controller:8774/v2 - cinder_url = http://controller:8776/v1 - - nova_proxy_admin_user = admin - nova_proxy_admin_pass = ADMIN_PASS - nova_proxy_admin_tenant_name = service - taskmanager_manager = trove.taskmanager.manager.Manager - use_nova_server_config_drive = True - - # Set these if using Neutron Networking - network_driver=trove.network.neutron.NeutronDriver - network_label_regex=.* - - - transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/ - - [database] - connection = mysql+pymysql://trove:TROVE_DBPASS@controller/trove - - [keystone_authtoken] - www_authenticate_uri = http://controller:5000/v3/ - auth_url=http://controller:35357/v3/ - #auth_uri = http://controller/identity - #auth_url = http://controller/identity_admin - auth_type = password - project_domain_name = default - user_domain_name = default - project_name = service - username = trove - password = TROVE_PASS - - ``` - **解释:** - - `[Default]`分组中`bind_host`配置为Trove部署节点的IP - - `nova_compute_url` 和 `cinder_url` 为Nova和Cinder在Keystone中创建的endpoint - - `nova_proxy_XXX` 为一个能访问Nova服务的用户信息,上例中使用`admin`用户为例 - - `transport_url` 为`RabbitMQ`连接信息,`RABBIT_PASS`替换为RabbitMQ的密码 - - `[database]`分组中的`connection` 为前面在mysql中为Trove创建的数据库信息 - - Trove的用户信息中`TROVE_PASS`替换为实际trove用户的密码 - - 3. 配置`trove-taskmanager.conf` - ```shell script - vim /etc/trove/trove-taskmanager.conf - - [DEFAULT] - log_dir = /var/log/trove - trove_auth_url = http://controller/identity/v2.0 - nova_compute_url = http://controller:8774/v2 - cinder_url = http://controller:8776/v1 - transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/ - - [database] - connection = mysql+pymysql://trove:TROVE_DBPASS@controller/trove - ``` - **解释:** 参照`trove.conf`配置 - - 4. 配置`trove-conductor.conf` - ```shell script - vim /etc/trove/trove-conductor.conf - - [DEFAULT] - log_dir = /var/log/trove - trove_auth_url = http://controller/identity/v2.0 - nova_compute_url = http://controller:8774/v2 - cinder_url = http://controller:8776/v1 - transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/ - - [database] - connection = mysql+pymysql://trove:trove@controller/trove - ``` - **解释:** 参照`trove.conf`配置 - - 5. 配置`trove-guestagent.conf` - ```shell script - vim /etc/trove/trove-guestagent.conf - [DEFAULT] - rabbit_host = controller - rabbit_password = RABBIT_PASS - nova_proxy_admin_user = admin - nova_proxy_admin_pass = ADMIN_PASS - nova_proxy_admin_tenant_name = service - trove_auth_url = http://controller/identity_admin/v2.0 - ``` - **解释:** `guestagent`是trove中一个独立组件,需要预先内置到Trove通过Nova创建的虚拟 - 机镜像中,在创建好数据库实例后,会起guestagent进程,负责通过消息队列(RabbitMQ)向Trove上 - 报心跳,因此需要配置RabbitMQ的用户和密码信息。 - - 6. 生成数据`Trove`数据库表 - ```shell script - su -s /bin/sh -c "trove-manage db_sync" trove - ``` -4. 完成安装配置 - 1. 配置**Trove**服务自启动 - ```shell script - systemctl enable openstack-trove-api.service \ - openstack-trove-taskmanager.service \ - openstack-trove-conductor.service - ``` - 2. 
启动服务 - ```shell script - systemctl start openstack-trove-api.service \ - openstack-trove-taskmanager.service \ - openstack-trove-conductor.service - ``` - -### Rally 安装 - -Rally是OpenStack提供的性能测试工具。只需要简单的安装即可。 - -``` -yum install openstack-rally openstack-rally-plugins -``` diff --git a/docs/install/openEuler-20.03-LTS-SP3/OpenStack-rocky.md b/docs/install/openEuler-20.03-LTS-SP3/OpenStack-rocky.md deleted file mode 100644 index e640679d747e9e20db29bb15501e29235805ae62..0000000000000000000000000000000000000000 --- a/docs/install/openEuler-20.03-LTS-SP3/OpenStack-rocky.md +++ /dev/null @@ -1,2048 +0,0 @@ - - -# OpenStack-Rocky 部署指南 - - - -- [OpenStack-Rocky 部署指南](#openstack-rocky-部署指南) - - - [OpenStack 简介](#openstack-简介) - - [准备环境](#准备环境) - - - [环境配置](#环境配置) - - [安装 SQL DataBase](#安装-sql-database) - - [安装 RabbitMQ](#安装-rabbitmq) - - [安装 Memcached](#安装-memcached) - - [安装 OpenStack](#安装-openstack) - - - [Keystone 安装](#keystone-安装) - - - [Glance 安装](#glance-安装) - - - [Nova 安装](#nova-安装) - - - [Neutron 安装](#neutron-安装) - - - [Cinder 安装](#cinder-安装) - - - [Horizon 安装](#Horizon-安装) - - - [Tempest 安装](#tempest-安装) - - - [Ironic 安装](#ironic-安装) - - - [Kolla 安装](#kolla-安装) - - - [Trove 安装](#Trove-安装) - - - [Rally 安装](#Rally-安装) - - -## OpenStack 简介 - -OpenStack 是一个社区,也是一个项目。它提供了一个部署云的操作平台或工具集,为组织提供可扩展的、灵活的云计算。 - -作为一个开源的云计算管理平台,OpenStack 由 nova、cinder、neutron、glance、keystone、horizon 等几个主要的组件组合起来完成具体工作。OpenStack 支持几乎所有类型的云环境,项目目标是提供实施简单、可大规模扩展、丰富、标准统一的云计算管理平台。OpenStack 通过各种互补的服务提供了基础设施即服务(IaaS)的解决方案,每个服务提供 API 进行集成。 - -openEuler 20.03-LTS-SP3 版本官方认证的第三方 oepkg yum 源已经支持 Openstack-Rocky 版本,用户可以配置好 oepkg yum 源后根据此文档进行 OpenStack 部署。 - - -## 准备环境 -### OpenStack yum源配置 - -配置 20.03-LTS-SP3 官方认证的第三方源 oepkg - -```shell -$ cat << EOF >> /etc/yum.repos.d/OpenStack_Rocky.repo -[openstack_rocky] -name=OpenStack_Rocky -baseurl=https://repo.oepkgs.net/openEuler/rpm/openEuler-20.03-LTS-SP3/budding-openeuler/openstack/rocky/$basearch/ -gpgcheck=0 -enabled=1 -EOF -``` - -***注意*** - -如果环境启用了Epol源,需要提高rocky仓的优先级,设置priority=1: -```shell -$ cat << EOF >> /etc/yum.repos.d/OpenStack_Rocky.repo -[openstack_rocky] -name=OpenStack_Rocky -baseurl=https://repo.oepkgs.net/openEuler/rpm/openEuler-20.03-LTS-SP3/budding-openeuler/openstack/rocky/$basearch/ -gpgcheck=0 -enabled=1 -priority=1 -EOF -``` - -```shell -$ yum clean all && yum makecache -``` - -### 环境配置 - -在`/etc/hosts`中添加controller信息,例如节点IP是`10.0.0.11`,则新增: - -``` -10.0.0.11 controller -``` - -### 安装 SQL DataBase - -1. 执行如下命令,安装软件包。 - - ```shell - $ yum install mariadb mariadb-server python2-PyMySQL - ``` -2. 创建并编辑 `/etc/my.cnf.d/openstack.cnf` 文件。 - - 复制如下内容到文件,其中 bind-address 设置为控制节点的管理IP地址。 - ```ini - [mysqld] - bind-address = 10.0.0.11 - default-storage-engine = innodb - innodb_file_per_table = on - max_connections = 4096 - collation-server = utf8_general_ci - character-set-server = utf8 - ``` - -3. 启动 DataBase 服务,并为其配置开机自启动: - - ```shell - $ systemctl enable mariadb.service - $ systemctl start mariadb.service - ``` -### 安装 RabbitMQ - -1. 执行如下命令,安装软件包。 - - ```shell - $ yum install rabbitmq-server - ``` - -2. 启动 RabbitMQ 服务,并为其配置开机自启动。 - - ```shell - $ systemctl enable rabbitmq-server.service - $ systemctl start rabbitmq-server.service - ``` -3. 添加 OpenStack用户。 - - ```shell - $ rabbitmqctl add_user openstack RABBIT_PASS - ``` -4. 替换 RABBIT_PASS,为OpenStack用户设置密码 - -5. 设置openstack用户权限,允许进行配置、写、读: - - ```shell - $ rabbitmqctl set_permissions openstack ".*" ".*" ".*" - ``` - -### 安装 Memcached - -1. 
执行如下命令,安装依赖软件包。 - - ```shell - $ yum install memcached python2-memcached - ``` -2. 编辑 `/etc/sysconfig/memcached` 文件,添加以下内容 - - ```shell - OPTIONS="-l 127.0.0.1,::1,controller" - ``` - OPTIONS 修改为实际环境中控制节点的管理IP地址。 - -3. 执行如下命令,启动 Memcached 服务,并为其配置开机启动。 - - ```shell - $ systemctl enable memcached.service - $ systemctl start memcached.service - ``` - -## 安装 OpenStack - -### Keystone 安装 - -1. 以 root 用户访问数据库,创建 keystone 数据库并授权。 - - ```shell - $ mysql -u root -p - ``` - - ```sql - MariaDB [(none)]> CREATE DATABASE keystone; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \ - IDENTIFIED BY 'KEYSTONE_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \ - IDENTIFIED BY 'KEYSTONE_DBPASS'; - MariaDB [(none)]> exit - ``` - 替换 KEYSTONE_DBPASS,为 Keystone 数据库设置密码 - -2. 执行如下命令,安装软件包。 - - ```shell - $ yum install openstack-keystone httpd python2-mod_wsgi - ``` - -3. 配置keystone,编辑 `/etc/keystone/keystone.conf` 文件。在[database]部分,配置数据库入口。在[token]部分,配置token provider - - ```ini - [database] - connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone - [token] - provider = fernet - ``` - 替换KEYSTONE_DBPASS为Keystone数据库的密码 - -4. 执行如下命令,同步数据库。 - - ```shell - su -s /bin/sh -c "keystone-manage db_sync" keystone - ``` - -5. 执行如下命令,初始化Fernet密钥仓库。 - - ```shell - $ keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone - $ keystone-manage credential_setup --keystone-user keystone --keystone-group keystone - ``` - -6. 执行如下命令,启动身份服务。 - - ```shell - $ keystone-manage bootstrap --bootstrap-password ADMIN_PASS \ - --bootstrap-admin-url http://controller:5000/v3/ \ - --bootstrap-internal-url http://controller:5000/v3/ \ - --bootstrap-public-url http://controller:5000/v3/ \ - --bootstrap-region-id RegionOne - ``` - 替换 ADMIN_PASS,为 admin 用户设置密码。 - -7. 编辑 `/etc/httpd/conf/httpd.conf` 文件,配置Apache HTTP server - - ```shell - $ vim /etc/httpd/conf/httpd.conf - ``` - - 配置 ServerName 项引用控制节点,如下所示。 - ``` - ServerName controller - ``` - - 如果 ServerName 项不存在则需要创建。 - -8. 执行如下命令,为 `/usr/share/keystone/wsgi-keystone.conf` 文件创建链接。 - - ```shell - $ ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/ - ``` - -9. 完成安装,执行如下命令,启动Apache HTTP服务。 - - ```shell - $ systemctl enable httpd.service - $ systemctl start httpd.service - ``` - -10. 安装OpenStackClient - - ```shell - $ yum install python2-openstackclient - ``` - -11. 创建 OpenStack client 环境脚本 - - 创建admin用户的环境变量脚本: - - ```shell - # vim admin-openrc - - export OS_PROJECT_DOMAIN_NAME=Default - export OS_USER_DOMAIN_NAME=Default - export OS_PROJECT_NAME=admin - export OS_USERNAME=admin - export OS_PASSWORD=ADMIN_PASS - export OS_AUTH_URL=http://controller:5000/v3 - export OS_IDENTITY_API_VERSION=3 - export OS_IMAGE_API_VERSION=2 - ``` - - 替换ADMIN_PASS为admin用户的密码, 与上述`keystone-manage bootstrap` 命令中设置的密码一致 - 运行脚本加载环境变量: - - ```shell - $ source admin-openrc - ``` - -12. 
分别执行如下命令,创建domain, projects, users, roles。 - - 创建domain ‘example’: - - ```shell - $ openstack domain create --description "An Example Domain" example - ``` - - 注:domain ‘default’在 keystone-manage bootstrap 时已创建 - - 创建project ‘service’: - - ```shell - $ openstack project create --domain default --description "Service Project" service - ``` - - 创建(non-admin)project ’myproject‘,user ’myuser‘ 和 role ’myrole‘,为‘myproject’和‘myuser’添加角色‘myrole’: - - ```shell - $ openstack project create --domain default --description "Demo Project" myproject - $ openstack user create --domain default --password-prompt myuser - $ openstack role create myrole - $ openstack role add --project myproject --user myuser myrole - ``` - -13. 验证 - - 取消临时环境变量OS_AUTH_URL和OS_PASSWORD: - - ```shell - $ unset OS_AUTH_URL OS_PASSWORD - ``` - - 为admin用户请求token: - - ```shell - $ openstack --os-auth-url http://controller:5000/v3 \ - --os-project-domain-name Default --os-user-domain-name Default \ - --os-project-name admin --os-username admin token issue - ``` - - 为myuser用户请求token: - - ```shell - $ openstack --os-auth-url http://controller:5000/v3 \ - --os-project-domain-name Default --os-user-domain-name Default \ - --os-project-name myproject --os-username myuser token issue - ``` - - -### Glance 安装 - -1. 创建数据库、服务凭证和 API 端点 - - 创建数据库: - - 以 root 用户访问数据库,创建 glance 数据库并授权。 - - ```shell - $ mysql -u root -p - ``` - - - - ```sql - MariaDB [(none)]> CREATE DATABASE glance; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \ - IDENTIFIED BY 'GLANCE_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \ - IDENTIFIED BY 'GLANCE_DBPASS'; - MariaDB [(none)]> exit - ``` - - 替换 GLANCE_DBPASS,为 glance 数据库设置密码。 - - ```shell - $ source admin-openrc - ``` - - 执行以下命令,分别完成创建 glance 服务凭证、创建glance用户和添加‘admin’角色到用户‘glance’。 - - ```shell - $ openstack user create --domain default --password-prompt glance - $ openstack role add --project service --user glance admin - $ openstack service create --name glance --description "OpenStack Image" image - ``` - 创建镜像服务API端点: - - ```shell - $ openstack endpoint create --region RegionOne image public http://controller:9292 - $ openstack endpoint create --region RegionOne image internal http://controller:9292 - $ openstack endpoint create --region RegionOne image admin http://controller:9292 - ``` - -2. 安装和配置 - - 安装软件包: - - ```shell - $ yum install openstack-glance - ``` - 配置glance: - - 编辑 `/etc/glance/glance-api.conf` 文件: - - 在[database]部分,配置数据库入口 - - 在[keystone_authtoken] [paste_deploy]部分,配置身份认证服务入口 - - 在[glance_store]部分,配置本地文件系统存储和镜像文件的位置 - - ```ini - [database] - # ... - connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance - [keystone_authtoken] - # ... - www_authenticate_uri = http://controller:5000 - auth_url = http://controller:5000 - memcached_servers = controller:11211 - auth_type = password - project_domain_name = Default - user_domain_name = Default - project_name = service - username = glance - password = GLANCE_PASS - [paste_deploy] - # ... - flavor = keystone - [glance_store] - # ... - stores = file,http - default_store = file - filesystem_store_datadir = /var/lib/glance/images/ - ``` - - 编辑 `/etc/glance/glance-registry.conf` 文件: - - 在[database]部分,配置数据库入口 - - 在[keystone_authtoken] [paste_deploy]部分,配置身份认证服务入口 - - ```ini - [database] - # ... - connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance - [keystone_authtoken] - # ... 
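    # 注:以下认证配置与 glance-api.conf 中的 [keystone_authtoken] 保持一致,
    # GLANCE_PASS 为 keystone 中 glance 用户的密码。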
- www_authenticate_uri = http://controller:5000 - auth_url = http://controller:5000 - memcached_servers = controller:11211 - auth_type = password - project_domain_name = Default - user_domain_name = Default - project_name = service - username = glance - password = GLANCE_PASS - [paste_deploy] - # ... - flavor = keystone - ``` - - 其中,替换 GLANCE_DBPASS 为 glance 数据库的密码,替换 GLANCE_PASS 为 glance 用户的密码。 - - 同步数据库: - - ```shell - $ su -s /bin/sh -c "glance-manage db_sync" glance - ``` - 启动镜像服务: - - ```shell - $ systemctl enable openstack-glance-api.service openstack-glance-registry.service - $ systemctl start openstack-glance-api.service openstack-glance-registry.service - ``` - -3. 验证 - - 下载镜像 - ```shell - $ source admin-openrc - # 注意:如果您使用的环境是鲲鹏架构,请下载arm64版本的镜像。 - $ wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img - ``` - - 向Image服务上传镜像: - - ```shell - $ glance image-create --name "cirros" --file cirros-0.4.0-x86_64-disk.img --disk-format qcow2 --container-format bare --visibility=public - ``` - - 确认镜像上传并验证属性: - - ```shell - $ glance image-list - ``` -### Nova 安装 - -1. 创建数据库、服务凭证和 API 端点 - - 创建数据库: - - 作为root用户访问数据库,创建nova、nova_api、nova_cell0 数据库并授权 - - ```shell - $ mysql -u root -p - ``` - - ```SQL - MariaDB [(none)]> CREATE DATABASE nova_api; - MariaDB [(none)]> CREATE DATABASE nova; - MariaDB [(none)]> CREATE DATABASE nova_cell0; - MariaDB [(none)]> CREATE DATABASE placement; - - MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \ - IDENTIFIED BY 'NOVA_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \ - IDENTIFIED BY 'NOVA_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \ - IDENTIFIED BY 'NOVA_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \ - IDENTIFIED BY 'NOVA_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \ - IDENTIFIED BY 'NOVA_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \ - IDENTIFIED BY 'NOVA_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' \ - IDENTIFIED BY 'PLACEMENT_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' \ - IDENTIFIED BY 'PLACEMENT_DBPASS'; - MariaDB [(none)]> exit - ``` - 替换NOVA_DBPASS及PLACEMENT_DBPASS,为nova及placement数据库设置密码 - - 执行如下命令,完成创建nova服务凭证、创建nova用户以及添加‘admin’角色到用户‘nova’。 - - ```shell - $ . 
admin-openrc - $ openstack user create --domain default --password-prompt nova - $ openstack role add --project service --user nova admin - $ openstack service create --name nova --description "OpenStack Compute" compute - ``` - - 创建计算服务API端点: - - ```shell - $ openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1 - $ openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1 - $ openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1 - ``` - - 创建placement用户并添加‘admin’角色到用户‘placement’: - ```shell - $ openstack user create --domain default --password-prompt placement - $ openstack role add --project service --user placement admin - ``` - - 创建placement服务凭证及API服务端点: - ```shell - $ openstack service create --name placement --description "Placement API" placement - $ openstack endpoint create --region RegionOne placement public http://controller:8778 - $ openstack endpoint create --region RegionOne placement internal http://controller:8778 - $ openstack endpoint create --region RegionOne placement admin http://controller:8778 - ``` - -2. 安装和配置 - - 安装软件包: - - ```shell - $ yum install openstack-nova-api openstack-nova-conductor \ - openstack-nova-novncproxy openstack-nova-scheduler openstack-nova-compute \ - openstack-nova-placement-api openstack-nova-console - ``` - - 配置nova: - - 编辑 `/etc/nova/nova.conf` 文件: - - 在[default]部分,启用计算和元数据的API,配置RabbitMQ消息队列入口,配置my_ip,启用网络服务neutron; - - 在[api_database] [database] [placement_database]部分,配置数据库入口; - - 在[api] [keystone_authtoken]部分,配置身份认证服务入口; - - 在[vnc]部分,启用并配置远程控制台入口; - - 在[glance]部分,配置镜像服务API的地址; - - 在[oslo_concurrency]部分,配置lock path; - - 在[placement]部分,配置placement服务的入口。 - - ```ini - [DEFAULT] - # ... - enabled_apis = osapi_compute,metadata - transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/ - my_ip = 10.0.0.11 - use_neutron = true - firewall_driver = nova.virt.firewall.NoopFirewallDriver - compute_driver = libvirt.LibvirtDriver - instances_path = /var/lib/nova/instances/ - [api_database] - # ... - connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api - [database] - # ... - connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova - [placement_database] - # ... - connection = mysql+pymysql://placement:PLACEMENT_DBPASS@controller/placement - [api] - # ... - auth_strategy = keystone - [keystone_authtoken] - # ... - www_authenticate_uri = http://controller:5000/ - auth_url = http://controller:5000/ - memcached_servers = controller:11211 - auth_type = password - project_domain_name = Default - user_domain_name = Default - project_name = service - username = nova - password = NOVA_PASS - [vnc] - enabled = true - # ... - server_listen = $my_ip - server_proxyclient_address = $my_ip - novncproxy_base_url = http://controller:6080/vnc_auto.html - [glance] - # ... - api_servers = http://controller:9292 - [oslo_concurrency] - # ... - lock_path = /var/lib/nova/tmp - [placement] - # ... - region_name = RegionOne - project_domain_name = Default - project_name = service - auth_type = password - user_domain_name = Default - auth_url = http://controller:5000/v3 - username = placement - password = PLACEMENT_PASS - [neutron] - # ... 
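-    # 说明:[neutron] 小节中的 NEUTRON_PASS 为 neutron 用户的密码,需与后文 Neutron 安装章节创建该用户时设置的密码一致;
-    # 与元数据代理相关的 service_metadata_proxy、metadata_proxy_shared_secret 将在 Neutron 章节的“配置计算服务”步骤中补充。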
- auth_url = http://controller:5000 - auth_type = password - project_domain_name = Default - user_domain_name = Default - region_name = RegionOne - project_name = service - username = neutron - password = NEUTRON_PASS - ``` - - 替换RABBIT_PASS为RabbitMQ中openstack账户的密码; - - 配置my_ip为控制节点的管理IP地址; - - 替换NOVA_DBPASS为nova数据库的密码; - - 替换PLACEMENT_DBPASS为placement数据库的密码; - - 替换NOVA_PASS为nova用户的密码; - - 替换PLACEMENT_PASS为placement用户的密码; - - 替换NEUTRON_PASS为neutron用户的密码; - - 编辑`/etc/httpd/conf.d/00-nova-placement-api.conf`,增加Placement API接入配置 - - ```xml - - = 2.4> - Require all granted - - - Order allow,deny - Allow from all - - - ``` - - 重启httpd服务: - - ```shell - $ systemctl restart httpd - ``` - - 同步nova-api数据库: - - ```shell - $ su -s /bin/sh -c "nova-manage api_db sync" nova - ``` - 注册cell0数据库: - - ```shell - $ su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova - ``` - 创建cell1 cell: - - ```shell - $ su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova - ``` - 同步nova数据库: - - ```shell - $ su -s /bin/sh -c "nova-manage db sync" nova - ``` - 验证cell0和cell1注册正确: - - ```shell - su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova - ``` - 确定是否支持虚拟机硬件加速(x86架构): - - ```shell - $ egrep -c '(vmx|svm)' /proc/cpuinfo - ``` - - 如果返回值为0则不支持硬件加速,需要配置libvirt使用QEMU而不是KVM: - **注意:** 如果是在ARM64的服务器上,还需要在配置`cpu_mode`为`custom`,`cpu_model`为`cortex-a72` - - ```ini - # vim /etc/nova/nova.conf - [libvirt] - # ... - virt_type = qemu - cpu_mode = custom - cpu_model = cortex-a72 - ``` - 如果返回值为1或更大的值,则支持硬件加速,不需要进行额外的配置 - - ***注意*** - - **如果为arm64结构,还需要在`compute`节点执行以下命令** - - ```shell - mkdir -p /usr/share/AAVMF - ln -s /usr/share/edk2/aarch64/QEMU_EFI-pflash.raw \ - /usr/share/AAVMF/AAVMF_CODE.fd - ln -s /usr/share/edk2/aarch64/vars-template-pflash.raw \ - /usr/share/AAVMF/AAVMF_VARS.fd - chown nova:nova /usr/share/AAVMF -R - - vim /etc/libvirt/qemu.conf - - nvram = ["/usr/share/AAVMF/AAVMF_CODE.fd:/usr/share/AAVMF/AAVMF_VARS.fd", - "/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw:/usr/share/edk2/aarch64/vars-template-pflash.raw" - ] - ``` - - 启动计算服务及其依赖项,并配置其开机启动: - - ```shell - $ systemctl enable \ - openstack-nova-api.service \ - openstack-nova-scheduler.service \ - openstack-nova-conductor.service \ - openstack-nova-novncproxy.service - $ systemctl start \ - openstack-nova-api.service \ - openstack-nova-scheduler.service \ - openstack-nova-conductor.service \ - openstack-nova-novncproxy.service - ``` - ```bash - $ systemctl enable libvirtd.service openstack-nova-compute.service - $ systemctl start libvirtd.service openstack-nova-compute.service - ``` - 添加计算节点到cell数据库: - - 确认计算节点存在: - - ```bash - $ . admin-openrc - $ openstack compute service list --service nova-compute - ``` - 注册计算节点: - - ```bash - $ su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova - ``` - -3. 验证 - - ```shell - $ . admin-openrc - ``` - 列出服务组件,验证每个流程都成功启动和注册: - - ```shell - $ openstack compute service list - ``` - - 列出身份服务中的API端点,验证与身份服务的连接: - - ```shell - $ openstack catalog list - ``` - - 列出镜像服务中的镜像,验证与镜像服务的连接: - - ```shell - $ openstack image list - ``` - - 检查cells和placement API是否运作成功,以及其他必要条件是否已具备。 - - ```shell - $ nova-status upgrade check - ``` -### Neutron 安装 - -1. 
创建数据库、服务凭证和 API 端点 - - 创建数据库: - - 作为root用户访问数据库,创建 neutron 数据库并授权。 - - ```shell - $ mysql -u root -p - ``` - - ```sql - MariaDB [(none)]> CREATE DATABASE neutron; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \ - IDENTIFIED BY 'NEUTRON_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \ - IDENTIFIED BY 'NEUTRON_DBPASS'; - MariaDB [(none)]> exit - ``` - 替换NEUTRON_DBPASS,为neutron数据库设置密码。 - - ```shell - $ . admin-openrc - ``` - 执行如下命令,完成创建 neutron 服务凭证、创建neutron用户和添加‘admin’角色到‘neutron’用户操作。 - - 创建neutron服务 - - ```shell - $ openstack user create --domain default --password-prompt neutron - $ openstack role add --project service --user neutron admin - $ openstack service create --name neutron --description "OpenStack Networking" network - ``` - 创建网络服务API端点: - - ```shell - $ openstack endpoint create --region RegionOne network public http://controller:9696 - $ openstack endpoint create --region RegionOne network internal http://controller:9696 - $ openstack endpoint create --region RegionOne network admin http://controller:9696 - ``` - -2. 安装和配置 Self-service 网络 - - 安装软件包: - - ```shell - $ yum install openstack-neutron openstack-neutron-ml2 \ - openstack-neutron-linuxbridge ebtables ipset - ``` - 配置neutron: - - 编辑 /etc/neutron/neutron.conf 文件: - - 在[database]部分,配置数据库入口; - - 在[default]部分,启用ml2插件和router插件,允许ip地址重叠,配置RabbitMQ消息队列入口; - - 在[default] [keystone]部分,配置身份认证服务入口; - - 在[default] [nova]部分,配置网络来通知计算网络拓扑的变化; - - 在[oslo_concurrency]部分,配置lock path。 - - ```ini - [database] - # ... - connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron - [DEFAULT] - # ... - core_plugin = ml2 - service_plugins = router - allow_overlapping_ips = true - transport_url = rabbit://openstack:RABBIT_PASS@controller - auth_strategy = keystone - notify_nova_on_port_status_changes = true - notify_nova_on_port_data_changes = true - [keystone_authtoken] - # ... - www_authenticate_uri = http://controller:5000 - auth_url = http://controller:5000 - memcached_servers = controller:11211 - auth_type = password - project_domain_name = Default - user_domain_name = Default - project_name = service - username = neutron - password = NEUTRON_PASS - [nova] - # ... - auth_url = http://controller:5000 - auth_type = password - project_domain_name = Default - user_domain_name = Default - region_name = RegionOne - project_name = service - username = nova - password = NOVA_PASS - [oslo_concurrency] - # ... - lock_path = /var/lib/neutron/tmp - ``` - - 替换NEUTRON_DBPASS为neutron数据库的密码; - - 替换RABBIT_PASS为RabbitMQ中openstack账户的密码; - - 替换NEUTRON_PASS为neutron用户的密码; - - 替换NOVA_PASS为nova用户的密码。 - - 配置ML2插件: - - 编辑 /etc/neutron/plugins/ml2/ml2_conf.ini 文件: - - 在[ml2]部分,启用 flat、vlan、vxlan 网络,启用网桥及 layer-2 population 机制,启用端口安全扩展驱动; - - 在[ml2_type_flat]部分,配置 flat 网络为 provider 虚拟网络; - - 在[ml2_type_vxlan]部分,配置 VXLAN 网络标识符范围; - - 在[securitygroup]部分,配置允许 ipset。 - - ```ini - # vim /etc/neutron/plugins/ml2/ml2_conf.ini - [ml2] - # ... - type_drivers = flat,vlan,vxlan - tenant_network_types = vxlan - mechanism_drivers = linuxbridge,l2population - extension_drivers = port_security - [ml2_type_flat] - # ... - flat_networks = provider - [ml2_type_vxlan] - # ... - vni_ranges = 1:1000 - [securitygroup] - # ... 
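-    # 说明:开启 ipset 后,安全组规则会聚合为 ipset 集合再交由 iptables 匹配,可减少规则条目、提升匹配效率。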
- enable_ipset = true - ``` - 配置 Linux bridge 代理: - - 编辑 /etc/neutron/plugins/ml2/linuxbridge_agent.ini 文件: - - 在[linux_bridge]部分,映射 provider 虚拟网络到物理网络接口; - - 在[vxlan]部分,启用 vxlan 覆盖网络,配置处理覆盖网络的物理网络接口 IP 地址,启用 layer-2 population; - - 在[securitygroup]部分,允许安全组,配置 linux bridge iptables 防火墙驱动。 - - ```ini - [linux_bridge] - physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME - [vxlan] - enable_vxlan = true - local_ip = OVERLAY_INTERFACE_IP_ADDRESS - l2_population = true - [securitygroup] - # ... - enable_security_group = true - firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver - ``` - 替换PROVIDER_INTERFACE_NAME为物理网络接口; - - 替换OVERLAY_INTERFACE_IP_ADDRESS为控制节点的管理IP地址。 - - 配置Layer-3代理: - - 编辑 /etc/neutron/l3_agent.ini 文件: - - 在[default]部分,配置接口驱动为linuxbridge - - ```ini - [DEFAULT] - # ... - interface_driver = linuxbridge - ``` - 配置DHCP代理: - - 编辑 /etc/neutron/dhcp_agent.ini 文件: - - 在[default]部分,配置linuxbridge接口驱动、Dnsmasq DHCP驱动,启用隔离的元数据。 - - ```ini - [DEFAULT] - # ... - interface_driver = linuxbridge - dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq - enable_isolated_metadata = true - ``` - 配置metadata代理: - - 编辑 /etc/neutron/metadata_agent.ini 文件: - - 在[default]部分,配置元数据主机和shared secret。 - - ```ini - [DEFAULT] - # ... - nova_metadata_host = controller - metadata_proxy_shared_secret = METADATA_SECRET - ``` - 替换METADATA_SECRET为合适的元数据代理secret。 - - -3. 配置计算服务 - - 编辑 /etc/nova/nova.conf 文件: - - 在[neutron]部分,配置访问参数,启用元数据代理,配置secret。 - - ```ini - [neutron] - # ... - auth_url = http://controller:5000 - auth_type = password - project_domain_name = Default - user_domain_name = Default - region_name = RegionOne - project_name = service - username = neutron - password = NEUTRON_PASS - service_metadata_proxy = true - metadata_proxy_shared_secret = METADATA_SECRET - ``` - - 替换NEUTRON_PASS为neutron用户的密码; - - 替换METADATA_SECRET为合适的元数据代理secret。 - - - -4. 完成安装 - - 添加配置文件链接: - - ```shell - $ ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini - ``` - - 同步数据库: - - ```shell - $ su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \ - --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron - ``` - - 重启计算API服务: - - ```shell - $ systemctl restart openstack-nova-api.service - ``` - - 启动网络服务并配置开机启动: - - ```shell - $ systemctl enable neutron-server.service \ - neutron-linuxbridge-agent.service neutron-dhcp-agent.service \ - neutron-metadata-agent.service - $ systemctl start neutron-server.service \ - neutron-linuxbridge-agent.service neutron-dhcp-agent.service \ - neutron-metadata-agent.service - $ systemctl enable neutron-l3-agent.service - $ systemctl start neutron-l3-agent.service - ``` - -5. 验证 - - 列出代理验证 neutron 代理启动成功: - - ```shell - $ openstack network agent list - ``` - - -### Cinder 安装 - - -1. 
创建数据库、服务凭证和 API 端点 - - 创建数据库: - - 作为root用户访问数据库,创建cinder数据库并授权。 - - ```shell - $ mysql -u root -p - MariaDB [(none)]> CREATE DATABASE cinder; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \ - IDENTIFIED BY 'CINDER_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \ - IDENTIFIED BY 'CINDER_DBPASS'; - MariaDB [(none)]> exit - ``` - 替换CINDER_DBPASS,为cinder数据库设置密码。 - - ```shell - $ source admin-openrc - ``` - - 创建cinder服务凭证: - - 创建cinder用户 - - 添加‘admin’角色到用户‘cinder’ - - 创建cinderv2和cinderv3服务 - - ```shell - $ openstack user create --domain default --password-prompt cinder - $ openstack role add --project service --user cinder admin - $ openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2 - $ openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3 - ``` - 创建块存储服务API端点: - - ```shell - $ openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(project_id\)s - $ openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(project_id\)s - $ openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(project_id\)s - $ openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s - $ openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s - $ openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s - ``` - -2. 安装和配置控制节点 - - 安装软件包: - - ```shell - $ yum install openstack-cinder - ``` - 配置cinder: - - 编辑 `/etc/cinder/cinder.conf` 文件: - - 在[database]部分,配置数据库入口; - - 在[DEFAULT]部分,配置RabbitMQ消息队列入口,配置my_ip; - - 在[DEFAULT] [keystone_authtoken]部分,配置身份认证服务入口; - - 在[oslo_concurrency]部分,配置lock path。 - - ```ini - [database] - # ... - connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder - [DEFAULT] - # ... - transport_url = rabbit://openstack:RABBIT_PASS@controller - auth_strategy = keystone - my_ip = 10.0.0.11 - [keystone_authtoken] - # ... - www_authenticate_uri = http://controller:5000 - auth_url = http://controller:5000 - memcached_servers = controller:11211 - auth_type = password - project_domain_name = Default - user_domain_name = Default - project_name = service - username = cinder - password = CINDER_PASS - [oslo_concurrency] - # ... - lock_path = /var/lib/cinder/tmp - ``` - 替换CINDER_DBPASS为cinder数据库的密码; - - 替换RABBIT_PASS为RabbitMQ中openstack账户的密码; - - 配置my_ip为控制节点的管理IP地址; - - 替换CINDER_PASS为cinder用户的密码; - - 同步数据库: - - ```shell - $ su -s /bin/sh -c "cinder-manage db sync" cinder - ``` - 配置计算使用块存储: - - 编辑 /etc/nova/nova.conf 文件。 - - ```ini - [cinder] - os_region_name = RegionOne - ``` - 完成安装: - - 重启计算API服务 - - ```shell - $ systemctl restart openstack-nova-api.service - ``` - 启动块存储服务 - - ```shell - $ systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service - $ systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service - ``` - -3. 安装和配置存储节点(LVM) - - 安装软件包: - - ```shell - $ yum install lvm2 device-mapper-persistent-data scsi-target-utils python2-keystone \ - openstack-cinder-volume - ``` - - 创建LVM物理卷 /dev/sdb: - - ```shell - $ pvcreate /dev/sdb - ``` - 创建LVM卷组 cinder-volumes: - - ```shell - $ vgcreate cinder-volumes /dev/sdb - ``` - 编辑 /etc/lvm/lvm.conf 文件: - - 在devices部分,添加过滤以接受/dev/sdb设备拒绝其他设备。 - - ```ini - devices { - - # ... 
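-        # 说明:filter 规则按顺序匹配,"a/" 表示接受(accept),"r/" 表示拒绝(reject)。
-        # 示例(假设操作系统盘 /dev/sda 也使用了 LVM,此时需将其一并接受,否则系统卷组将无法被扫描到):
-        # filter = [ "a/sda/", "a/sdb/", "r/.*/"]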
- - filter = [ "a/sdb/", "r/.*/"] - ``` - - 编辑 `/etc/cinder/cinder.conf` 文件: - - 在[lvm]部分,使用LVM驱动、cinder-volumes卷组、iSCSI协议和适当的iSCSI服务配置LVM后端。 - - 在[DEFAULT]部分,启用LVM后端,配置镜像服务API的位置。 - - ```ini - [lvm] - volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver - volume_group = cinder-volumes - target_protocol = iscsi - target_helper = lioadm - [DEFAULT] - # ... - enabled_backends = lvm - glance_api_servers = http://controller:9292 - ``` - - ***注意*** - - 当cinder使用tgtadm的方式挂卷的时候,要修改/etc/tgt/tgtd.conf,内容如下,保证tgtd可以发现cinder-volume的iscsi target。 - - ``` - include /var/lib/cinder/volumes/* - ``` - 完成安装: - - ```shell - $ systemctl enable openstack-cinder-volume.service tgtd.service iscsid.service - $ systemctl start openstack-cinder-volume.service tgtd.service iscsid.service - ``` - -4. 安装和配置存储节点(ceph RBD) - - 安装软件包: - - ```shell - $ yum install ceph-common python2-rados python2-rbd python2-keystone openstack-cinder-volume - ``` - - 在[DEFAULT]部分,启用LVM后端,配置镜像服务API的位置。 - - ```ini - [DEFAULT] - enabled_backends = ceph-rbd - ``` - - 添加ceph rbd配置部分,配置块命名与enabled_backends中保持一致 - - ```ini - [ceph-rbd] - glance_api_version = 2 - rados_connect_timeout = -1 - rbd_ceph_conf = /etc/ceph/ceph.conf - rbd_flatten_volume_from_snapshot = False - rbd_max_clone_depth = 5 - rbd_pool = # RBD存储池名称 - rbd_secret_uuid = # 随机生成SECRET UUID - rbd_store_chunk_size = 4 - rbd_user = - volume_backend_name = ceph-rbd - volume_driver = cinder.volume.drivers.rbd.RBDDriver - ``` - - 配置存储节点ceph客户端,需要保证/etc/ceph/目录中包含ceph集群访问配置,包括ceph.conf以及keyring - - ```shell - [root@openeuler ~]# ll /etc/ceph - -rw-r--r-- 1 root root 82 Jun 16 17:11 ceph.client..keyring - -rw-r--r-- 1 root root 1.5K Jun 16 17:11 ceph.conf - -rw-r--r-- 1 root root 92 Jun 16 17:11 rbdmap - ``` - - 在存储节点检查ceph集群是否正常可访问 - - ```shell - [root@openeuler ~]# ceph --user cinder -s - cluster: - id: b7b2fac6-420f-4ec1-aea2-4862d29b4059 - health: HEALTH_OK - - services: - mon: 3 daemons, quorum VIRT01,VIRT02,VIRT03 - mgr: VIRT03(active), standbys: VIRT02, VIRT01 - mds: cephfs_virt-1/1/1 up {0=VIRT03=up:active}, 2 up:standby - osd: 15 osds: 15 up, 15 in - - data: - pools: 7 pools, 1416 pgs - objects: 5.41M objects, 19.8TiB - usage: 49.3TiB used, 59.9TiB / 109TiB avail - pgs: 1414 active - - io: - client: 2.73MiB/s rd, 22.4MiB/s wr, 3.21kop/s rd, 1.19kop/s wr - ``` - - 启动服务 - - ```shell - $ systemctl enable openstack-cinder-volume.service - $ systemctl start openstack-cinder-volume.service - ``` - - - -5. 安装和配置备份服务 - - 编辑 /etc/cinder/cinder.conf 文件: - - 在[DEFAULT]部分,配置备份选项 - - ```ini - [DEFAULT] - # ... - # 注意: openEuler 21.03中没有提供OpenStack Swift软件包,需要用户自行安装。或者使用其他的备份后端,例如,NFS。NFS已经过测试验证,可以正常使用。 - backup_driver = cinder.backup.drivers.swift.SwiftBackupDriver - backup_swift_url = SWIFT_URL - ``` - 替换SWIFT_URL为对象存储服务的URL,该URL可以通过对象存储API端点找到: - - ```shell - $ openstack catalog show object-store - ``` - 完成安装: - - ```shell - $ systemctl enable openstack-cinder-backup.service - $ systemctl start openstack-cinder-backup.service - ``` - -6. 验证 - - 列出服务组件验证每个步骤成功: - ```shell - $ source admin-openrc - $ openstack volume service list - ``` - - 注:目前暂未对swift组件进行支持,有条件的同学可以配置对接ceph。 - -### Horizon 安装 - -1. 安装软件包 - - ```shell - $ yum install openstack-dashboard - ``` -2. 
修改文件`/usr/share/openstack-dashboard/openstack_dashboard/local/local_settings.py`
-
-    修改变量
-
-    ```ini
-    ALLOWED_HOSTS = ['*', ]
-    OPENSTACK_HOST = "controller"
-    OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
-    OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
-    SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
-    CACHES = {
-        'default': {
-             'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
-             'LOCATION': 'controller:11211',
-        }
-    }
-    ```
-    新增变量
-    ```ini
-    OPENSTACK_API_VERSIONS = {
-        "identity": 3,
-        "image": 2,
-        "volume": 3,
-    }
-    WEBROOT = "/dashboard/"
-    COMPRESS_OFFLINE = True
-    OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "default"
-    OPENSTACK_KEYSTONE_DEFAULT_ROLE = "admin"
-    LOGIN_URL = '/dashboard/auth/login/'
-    LOGOUT_URL = '/dashboard/auth/logout/'
-    ```
-3. 修改文件/etc/httpd/conf.d/openstack-dashboard.conf
-    ```xml
-    WSGIDaemonProcess dashboard
-    WSGIProcessGroup dashboard
-    WSGISocketPrefix run/wsgi
-    WSGIApplicationGroup %{GLOBAL}
-
-    WSGIScriptAlias /dashboard /usr/share/openstack-dashboard/openstack_dashboard/wsgi/django.wsgi
-    Alias /dashboard/static /usr/share/openstack-dashboard/static
-
-    <Directory /usr/share/openstack-dashboard/openstack_dashboard/wsgi>
-        Options All
-        AllowOverride All
-        Require all granted
-    </Directory>
-
-    <Directory /usr/share/openstack-dashboard/static>
-        Options All
-        AllowOverride All
-        Require all granted
-    </Directory>
-    ```
-4. 在/usr/share/openstack-dashboard目录下执行
-    ```shell
-    $ ./manage.py compress
-    ```
-5. 重启 httpd 服务
-    ```shell
-    $ systemctl restart httpd
-    ```
-6. 验证
-    打开浏览器,输入网址`http://<控制节点IP>/dashboard`,登录 horizon。
-
-### Tempest 安装
-
-Tempest是OpenStack的集成测试服务,如果用户需要全面自动化测试已安装的OpenStack环境的功能,则推荐使用该组件。否则,可以不用安装。
-
-1. 安装Tempest
-    ```shell
-    $ yum install openstack-tempest
-    ```
-2. 初始化目录
-
-    ```shell
-    $ tempest init mytest
-    ```
-3. 修改配置文件。
-
-    ```shell
-    $ cd mytest
-    $ vi etc/tempest.conf
-    ```
-    tempest.conf中需要配置当前OpenStack环境的信息,具体内容可以参考[官方示例](https://docs.openstack.org/tempest/latest/sampleconf.html)。
-
-4. 执行测试
-
-    ```shell
-    $ tempest run
-    ```
-
-### Ironic 安装
-
-Ironic是OpenStack的裸金属服务,如果用户需要进行裸机部署则推荐使用该组件。否则,可以不用安装。
-
-1. 设置数据库
-
-    裸金属服务在数据库中存储信息,创建一个**ironic**用户可以访问的**ironic**数据库,替换**IRONIC_DBPASSWORD**为合适的密码
-
-    ```shell
-    $ mysql -u root -p
-    ```
-
-    ```sql
-    MariaDB [(none)]> CREATE DATABASE ironic CHARACTER SET utf8;
-
-    MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'localhost' \
-     IDENTIFIED BY 'IRONIC_DBPASSWORD';
-
-    MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'%' \
-     IDENTIFIED BY 'IRONIC_DBPASSWORD';
-    ```
-
-2. 安装软件包
-
-    ```shell
-    yum install openstack-ironic-api openstack-ironic-conductor python2-ironicclient
-    ```
-
-    启动服务
-
-    ```shell
-    systemctl enable openstack-ironic-api openstack-ironic-conductor
-    systemctl start openstack-ironic-api openstack-ironic-conductor
-    ```
-
-3. 
组件安装与配置 - - ##### 创建服务用户认证 - - 1、创建Bare Metal服务用户 - - ```shell - $ openstack user create --password IRONIC_PASSWORD \ - --email ironic@example.com ironic - $ openstack role add --project service --user ironic admin - $ openstack service create --name ironic --description \ - "Ironic baremetal provisioning service" baremetal - ``` - - 2、创建Bare Metal服务访问入口 - - ```shell - $ openstack endpoint create --region RegionOne baremetal admin http://$IRONIC_NODE:6385 - $ openstack endpoint create --region RegionOne baremetal public http://$IRONIC_NODE:6385 - $ openstack endpoint create --region RegionOne baremetal internal http://$IRONIC_NODE:6385 - ``` - - ##### 配置ironic-api服务 - - 配置文件路径/etc/ironic/ironic.conf - - 1、通过**connection**选项配置数据库的位置,如下所示,替换**IRONIC_DBPASSWORD**为**ironic**用户的密码,替换**DB_IP**为DB服务器所在的IP地址: - - ```ini - [database] - - # The SQLAlchemy connection string used to connect to the - # database (string value) - - connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic - ``` - - 2、通过以下选项配置ironic-api服务使用RabbitMQ消息代理,替换**RPC_\***为RabbitMQ的详细地址和凭证 - - ```ini - [DEFAULT] - - # A URL representing the messaging driver to use and its full - # configuration. (string value) - - transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/ - ``` - - 用户也可自行使用json-rpc方式替换rabbitmq - - 3、配置ironic-api服务使用身份认证服务的凭证,替换**PUBLIC_IDENTITY_IP**为身份认证服务器的公共IP,替换**PRIVATE_IDENTITY_IP**为身份认证服务器的私有IP,替换**IRONIC_PASSWORD**为身份认证服务中**ironic**用户的密码: - - ```ini - [DEFAULT] - - # Authentication strategy used by ironic-api: one of - # "keystone" or "noauth". "noauth" should not be used in a - # production environment because all authentication will be - # disabled. (string value) - - auth_strategy=keystone - force_config_drive = True - - [keystone_authtoken] - # Authentication type to load (string value) - auth_type=password - # Complete public Identity API endpoint (string value) - www_authenticate_uri=http://PUBLIC_IDENTITY_IP:5000 - # Complete admin Identity API endpoint. (string value) - auth_url=http://PRIVATE_IDENTITY_IP:5000 - # Service username. (string value) - username=ironic - # Service account password. (string value) - password=IRONIC_PASSWORD - # Service tenant name. (string value) - project_name=service - # Domain name containing project (string value) - project_domain_name=Default - # User's domain name (string value) - user_domain_name=Default - ``` - - 4、需要在配置文件中指定ironic日志目录 - - ``` - [DEFAULT] - log_dir = /var/log/ironic/ - ``` - - 5、创建裸金属服务数据库表 - - ```shell - $ ironic-dbsync --config-file /etc/ironic/ironic.conf create_schema - ``` - - 6、重启ironic-api服务 - - ```shell - $ systemctl restart openstack-ironic-api - ``` - - ##### 配置ironic-conductor服务 - - 1、替换**HOST_IP**为conductor host的IP - - ```ini - [DEFAULT] - - # IP address of this host. If unset, will determine the IP - # programmatically. If unable to do so, will use "127.0.0.1". - # (string value) - - my_ip=HOST_IP - ``` - - 2、配置数据库的位置,ironic-conductor应该使用和ironic-api相同的配置。替换**IRONIC_DBPASSWORD**为**ironic**用户的密码,替换DB_IP为DB服务器所在的IP地址: - - ```ini - [database] - - # The SQLAlchemy connection string to use to connect to the - # database. (string value) - - connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic - ``` - - 3、通过以下选项配置ironic-api服务使用RabbitMQ消息代理,ironic-conductor应该使用和ironic-api相同的配置,替换**RPC_\***为RabbitMQ的详细地址和凭证 - - ```ini - [DEFAULT] - - # A URL representing the messaging driver to use and its full - # configuration. 
(string value) - - transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/ - ``` - - 用户也可自行使用json-rpc方式替换rabbitmq - - 4、配置凭证访问其他OpenStack服务 - - 为了与其他OpenStack服务进行通信,裸金属服务在请求其他服务时需要使用服务用户与OpenStack Identity服务进行认证。这些用户的凭据必须在与相应服务相关的每个配置文件中进行配置。 - - [neutron] - 访问Openstack网络服务 - [glance] - 访问Openstack镜像服务 - [swift] - 访问Openstack对象存储服务 - [cinder] - 访问Openstack块存储服务 - [inspector] - 访问Openstack裸金属introspection服务 - [service_catalog] - 一个特殊项用于保存裸金属服务使用的凭证,该凭证用于发现注册在Openstack身份认证服务目录中的自己的API URL端点 - - 简单起见,可以对所有服务使用同一个服务用户。为了向后兼容,该用户应该和ironic-api服务的[keystone_authtoken]所配置的为同一个用户。但这不是必须的,也可以为每个服务创建并配置不同的服务用户。 - - 在下面的示例中,用户访问openstack网络服务的身份验证信息配置为: - - 网络服务部署在名为RegionOne的身份认证服务域中,仅在服务目录中注册公共端点接口 - - 请求时使用特定的CA SSL证书进行HTTPS连接 - - 与ironic-api服务配置相同的服务用户 - - 动态密码认证插件基于其他选项发现合适的身份认证服务API版本 - - ```ini - [neutron] - - # Authentication type to load (string value) - auth_type = password - # Authentication URL (string value) - auth_url=https://IDENTITY_IP:5000/ - # Username (string value) - username=ironic - # User's password (string value) - password=IRONIC_PASSWORD - # Project name to scope to (string value) - project_name=service - # Domain ID containing project (string value) - project_domain_id=default - # User's domain id (string value) - user_domain_id=default - # PEM encoded Certificate Authority to use when verifying - # HTTPs connections. (string value) - cafile=/opt/stack/data/ca-bundle.pem - # The default region_name for endpoint URL discovery. (string - # value) - region_name = RegionOne - # List of interfaces, in order of preference, for endpoint - # URL. (list value) - valid_interfaces=public - ``` - - 默认情况下,为了与其他服务进行通信,裸金属服务会尝试通过身份认证服务的服务目录发现该服务合适的端点。如果希望对一个特定服务使用一个不同的端点,则在裸金属服务的配置文件中通过endpoint_override选项进行指定: - - ```ini - [neutron] - # ... - endpoint_override = - ``` - - 5、配置允许的驱动程序和硬件类型 - - 通过设置enabled_hardware_types设置ironic-conductor服务允许使用的硬件类型: - - ```ini - [DEFAULT] - enabled_hardware_types = ipmi - ``` - - 配置硬件接口: - - ```ini - enabled_boot_interfaces = pxe - enabled_deploy_interfaces = direct,iscsi - enabled_inspect_interfaces = inspector - enabled_management_interfaces = ipmitool - enabled_power_interfaces = ipmitool - ``` - - 配置接口默认值: - - ```ini - [DEFAULT] - default_deploy_interface = direct - default_network_interface = neutron - ``` - - 如果启用了任何使用Direct deploy的驱动,必须安装和配置镜像服务的Swift后端。Ceph对象网关(RADOS网关)也支持作为镜像服务的后端。 - - 6、重启ironic-conductor服务 - - ```shell - $ systemctl restart openstack-ironic-conductor - ``` - -4. deploy ramdisk镜像制作 - - 目前ramdisk镜像支持通过ironic python agent builder来进行制作,这里介绍下使用这个工具构建ironic使用的deploy镜像的完整过程。(用户也可以根据自己的情况获取ironic-python-agent,这里提供使用ipa-builder制作ipa方法) - - ##### 安装 ironic-python-agent-builder - - 1. 安装工具: - - ```shell - $ pip install ironic-python-agent-builder - ``` - - 2. 修改以下文件中的python解释器: - - ```shell - $ /usr/bin/yum /usr/libexec/urlgrabber-ext-down - ``` - - 3. 
安装其它必须的工具: - - ```shell - $ yum install git - ``` - - 由于`DIB`依赖`semanage`命令,所以在制作镜像之前确定该命令是否可用:`semanage --help`,如果提示无此命令,安装即可: - - ```shell - # 先查询需要安装哪个包 - [root@localhost ~]# yum provides /usr/sbin/semanage - 已加载插件:fastestmirror - Loading mirror speeds from cached hostfile - * base: mirror.vcu.edu - * extras: mirror.vcu.edu - * updates: mirror.math.princeton.edu - policycoreutils-python-2.5-34.el7.aarch64 : SELinux policy core python utilities - 源 :base - 匹配来源: - 文件名 :/usr/sbin/semanage - # 安装 - [root@localhost ~]# yum install policycoreutils-python - ``` - - ##### 制作镜像 - - 如果是`aarch64`架构,还需要添加: - - ```shell - $ export ARCH=aarch64 - ``` - - ###### 普通镜像 - - 基本用法: - - ```shell - usage: ironic-python-agent-builder [-h] [-r RELEASE] [-o OUTPUT] [-e ELEMENT] - [-b BRANCH] [-v] [--extra-args EXTRA_ARGS] - distribution - - positional arguments: - distribution Distribution to use - - optional arguments: - -h, --help show this help message and exit - -r RELEASE, --release RELEASE - Distribution release to use - -o OUTPUT, --output OUTPUT - Output base file name - -e ELEMENT, --element ELEMENT - Additional DIB element to use - -b BRANCH, --branch BRANCH - If set, override the branch that is used for ironic- - python-agent and requirements - -v, --verbose Enable verbose logging in diskimage-builder - --extra-args EXTRA_ARGS - Extra arguments to pass to diskimage-builder - ``` - - 举例说明: - - ```shell - $ ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky - ``` - - ###### 允许ssh登陆 - - 初始化环境变量,然后制作镜像: - - ```shell - $ export DIB_DEV_USER_USERNAME=ipa \ - $ export DIB_DEV_USER_PWDLESS_SUDO=yes \ - $ export DIB_DEV_USER_PASSWORD='123' - $ ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky -e selinux-permissive -e devuser - ``` - - ###### 指定代码仓库 - - 初始化对应的环境变量,然后制作镜像: - - ```ini - # 指定仓库地址以及版本 - DIB_REPOLOCATION_ironic_python_agent=git@172.20.2.149:liuzz/ironic-python-agent.git - DIB_REPOREF_ironic_python_agent=origin/develop - - # 直接从gerrit上clone代码 - DIB_REPOLOCATION_ironic_python_agent=https://review.opendev.org/openstack/ironic-python-agent - DIB_REPOREF_ironic_python_agent=refs/changes/43/701043/1 - ``` - - 参考:[source-repositories](https://docs.openstack.org/diskimage-builder/latest/elements/source-repositories/README.html)。 - - 指定仓库地址及版本验证成功。 - -在Rocky中,我们还提供了ironic-inspector等服务,用户可根据自身需求安装。 - -### Kolla 安装 - -Kolla为OpenStack服务提供生产环境可用的容器化部署的功能。openEuler 20.03 LTS SP2中已经引入了Kolla和Kolla-ansible服务,但是Kolla 以及 Kolla-ansible 原生并不支持 openEuler, -因此 Openstack SIG 在openEuler 20.03 LTS SP3中提供了 `openstack-kolla-plugin` 和 `openstack-kolla-ansible-plugin` 这两个补丁包。 - -Kolla的安装十分简单,只需要安装对应的RPM包即可 - -支持 openEuler 版本: - -```shell -yum install openstack-kolla-plugin openstack-kolla-ansible-plugin -``` - -不支持 openEuler 版本: - -```shell -yum install openstack-kolla openstack-kolla-ansible -``` - -安装完后,就可以使用`kolla-ansible`, `kolla-build`, `kolla-genpwd`, `kolla-mergepwd`等命令了。 - -### Trove 安装 - -Trove是OpenStack的数据库服务,如果用户使用OpenStack提供的数据库服务则推荐使用该组件。否则,可以不用安装。 - -1. 设置数据库 - - 数据库服务在数据库中存储信息,创建一个**trove**用户可以访问**trove**数据库,替换**TROVE_DBPASSWORD**为对应密码 - - ```shell - $ mysql -u root -p - ``` - - ```sql - MariaDB [(none)]> CREATE DATABASE trove CHARACTER SET utf8; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'localhost' \ - IDENTIFIED BY 'TROVE_DBPASSWORD'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'%' \ - IDENTIFIED BY 'TROVE_DBPASSWORD'; - ``` - -2. 
创建服务用户认证 - - 1、创建**Trove**服务用户 - - ```shell - $ openstack user create --password TROVE_PASSWORD \ - --email trove@example.com trove - $ openstack role add --project service --user trove admin - $ openstack service create --name trove - --description "Database service" database - ``` - **解释:** `TROVE_PASSWORD` 替换为`trove`用户的密码 - - 2、创建**Database**服务访问入口 - - ```shell - $ openstack endpoint create --region RegionOne database public http://$TROVE_NODE:8779/v1.0/%\(tenant_id\)s - $ openstack endpoint create --region RegionOne database internal http://$TROVE_NODE:8779/v1.0/%\(tenant_id\)s - $ openstack endpoint create --region RegionOne database admin http://$TROVE_NODE:8779/v1.0/%\(tenant_id\)s - ``` - **解释:** `$TROVE_NODE` 替换为Trove的API服务部署节点 - -3. 安装和配置**Trove**各组件 - - 1、安装**Trove**包 - - ```shell - $ yum install openstack-trove python2-troveclient - ``` - 2、配置`/etc/trove/trove.conf` - - ```ini - [DEFAULT] - bind_host=TROVE_NODE_IP - log_dir = /var/log/trove - - auth_strategy = keystone - # Config option for showing the IP address that nova doles out - add_addresses = True - network_label_regex = ^NETWORK_LABEL$ - api_paste_config = /etc/trove/api-paste.ini - - trove_auth_url = http://controller:35357/v3/ - nova_compute_url = http://controller:8774/v2 - cinder_url = http://controller:8776/v1 - - nova_proxy_admin_user = admin - nova_proxy_admin_pass = ADMIN_PASS - nova_proxy_admin_tenant_name = service - taskmanager_manager = trove.taskmanager.manager.Manager - use_nova_server_config_drive = True - - # Set these if using Neutron Networking - network_driver=trove.network.neutron.NeutronDriver - network_label_regex=.* - transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/ - - [database] - connection = mysql+pymysql://trove:TROVE_DBPASS@controller/trove - - [keystone_authtoken] - www_authenticate_uri = http://controller:5000/v3/ - auth_url=http://controller:35357/v3/ - #auth_uri = http://controller/identity - #auth_url = http://controller/identity_admin - auth_type = password - project_domain_name = Default - user_domain_name = Default - project_name = service - username = trove - password = TROVE_PASS - - ``` - **解释:** - - `[Default]`分组中`bind_host`配置为Trove部署节点的IP - - `nova_compute_url` 和 `cinder_url` 为Nova和Cinder在Keystone中创建的endpoint - - `nova_proxy_XXX` 为一个能访问Nova服务的用户信息,上例中使用`admin`用户为例 - - `transport_url` 为`RabbitMQ`连接信息,`RABBIT_PASS`替换为RabbitMQ的密码 - - `[database]`分组中的`connection` 为前面在mysql中为Trove创建的数据库信息 - - Trove的用户信息中`TROVE_PASS`替换为实际trove用户的密码 - - 3、配置`/etc/trove/trove-taskmanager.conf` - - ```ini - [DEFAULT] - log_dir = /var/log/trove - trove_auth_url = http://controller/identity/v2.0 - nova_compute_url = http://controller:8774/v2 - cinder_url = http://controller:8776/v1 - transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/ - - [database] - connection = mysql+pymysql://trove:TROVE_DBPASS@controller/trove - ``` - **解释:** 参照`trove.conf`配置 - 4、配置`/etc/trove/trove-conductor.conf` - - ```ini - [DEFAULT] - log_dir = /var/log/trove - trove_auth_url = http://controller/identity/v2.0 - nova_compute_url = http://controller:8774/v2 - cinder_url = http://controller:8776/v1 - transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/ - - [database] - connection = mysql+pymysql://trove:trove@controller/trove - ``` - **解释:** 参照`trove.conf`配置 - - 5、配置`/etc/trove/trove-guestagent.conf` - - ```ini - [DEFAULT] - rabbit_host = controller - rabbit_password = RABBIT_PASS - nova_proxy_admin_user = admin - nova_proxy_admin_pass = ADMIN_PASS - nova_proxy_admin_tenant_name = service - 
trove_auth_url = http://controller/identity_admin/v2.0 - ``` - **解释:** `guestagent`是trove中一个独立组件,需要预先内置到Trove通过Nova创建的虚拟 - 机镜像中,在创建好数据库实例后,会起guestagent进程,负责通过消息队列(RabbitMQ)向Trove上 - 报心跳,因此需要配置RabbitMQ的用户和密码信息。 - - 6、生成数据`Trove`数据库表 - - ```shell - $ su -s /bin/sh -c "trove-manage db_sync" trove - ``` - -4. 完成安装配置 - 1、配置**Trove**服务自启动 - - ```shell - $ systemctl enable openstack-trove-api.service \ - openstack-trove-taskmanager.service \ - openstack-trove-conductor.service - ``` - 2、启动服务 - - ```shell - $ systemctl start openstack-trove-api.service \ - openstack-trove-taskmanager.service \ - openstack-trove-conductor.service - ``` - -### Rally 安装 - -Rally是OpenStack提供的性能测试工具。只需要简单的安装即可。 - -``` -yum install openstack-rally openstack-rally-plugins -``` diff --git a/docs/install/openEuler-20.03-LTS-SP3/OpenStack-train.md b/docs/install/openEuler-20.03-LTS-SP3/OpenStack-train.md deleted file mode 100644 index 281ac95eec10425b28fe9ac3e584366977506104..0000000000000000000000000000000000000000 --- a/docs/install/openEuler-20.03-LTS-SP3/OpenStack-train.md +++ /dev/null @@ -1,2842 +0,0 @@ -# OpenStack-Train 部署指南 - - - -- [OpenStack-Train 部署指南](#openstack-train-部署指南) - - [OpenStack 简介](#openstack-简介) - - [约定](#约定) - - [准备环境](#准备环境) - - [环境配置](#环境配置) - - [安装 SQL DataBase](#安装-sql-database) - - [安装 RabbitMQ](#安装-rabbitmq) - - [安装 Memcached](#安装-memcached) - - [安装 OpenStack](#安装-openstack) - - [Keystone 安装](#keystone-安装) - - [Glance 安装](#glance-安装) - - [Placement安装](#placement安装) - - [Nova 安装](#nova-安装) - - [Neutron 安装](#neutron-安装) - - [Cinder 安装](#cinder-安装) - - [horizon 安装](#horizon-安装) - - [Tempest 安装](#tempest-安装) - - [Ironic 安装](#ironic-安装) - - [Kolla 安装](#kolla-安装) - - [Trove 安装](#trove-安装) - - [Swift 安装](#swift-安装) - - [Cyborg 安装](#cyborg-安装) - - [Aodh 安装](#aodh-安装) - - [Gnocchi 安装](#gnocchi-安装) - - [Ceilometer 安装](#ceilometer-安装) - - [Heat 安装](#heat-安装) - - -## OpenStack 简介 - -OpenStack 是一个社区,也是一个项目。它提供了一个部署云的操作平台或工具集,为组织提供可扩展的、灵活的云计算。 - -作为一个开源的云计算管理平台,OpenStack 由nova、cinder、neutron、glance、keystone、horizon等几个主要的组件组合起来完成具体工作。OpenStack 支持几乎所有类型的云环境,项目目标是提供实施简单、可大规模扩展、丰富、标准统一的云计算管理平台。OpenStack 通过各种互补的服务提供了基础设施即服务(IaaS)的解决方案,每个服务提供 API 进行集成。 - -openEuler 20.03-LTS-SP3 版本官方源已经支持 OpenStack-Train 版本,用户可以配置好 yum 源后根据此文档进行 OpenStack 部署。 - -## 约定 - -OpenStack 支持多种形态部署,此文档支持`ALL in One`以及`Distributed`两种部署方式,按照如下方式约定: - -`ALL in One`模式: - -```text -忽略所有可能的后缀 -``` - -`Distributed`模式: - -```text -以 `(CTL)` 为后缀表示此条配置或者命令仅适用`控制节点` -以 `(CPT)` 为后缀表示此条配置或者命令仅适用`计算节点` -以 `(STG)` 为后缀表示此条配置或者命令仅适用`存储节点` -除此之外表示此条配置或者命令同时适用`控制节点`和`计算节点` -``` - -***注意*** - -涉及到以上约定的服务如下: - -- Cinder -- Nova -- Neutron - -## 准备环境 - -### 环境配置 - -1. 配置 20.03-LTS-SP3 官方yum源,需要启用EPOL软件仓以支持OpenStack - - ```shell - cat << EOF >> /etc/yum.repos.d/20.03-LTS-SP3-OpenStack_Train.repo - [OS] - name=OS - baseurl=http://repo.openeuler.org/openEuler-20.03-LTS-SP3/OS/$basearch/ - enabled=1 - gpgcheck=1 - gpgkey=http://repo.openeuler.org/openEuler-20.03-LTS-SP3/OS/$basearch/RPM-GPG-KEY-openEuler - - [everything] - name=everything - baseurl=http://repo.openeuler.org/openEuler-20.03-LTS-SP3/everything/$basearch/ - enabled=1 - gpgcheck=1 - gpgkey=http://repo.openeuler.org/openEuler-20.03-LTS-SP3/everything/$basearch/RPM-GPG-KEY-openEuler - - [EPOL] - name=EPOL - baseurl=http://repo.openeuler.org/openEuler-20.03-LTS-SP3/EPOL/$basearch/ - enabled=1 - gpgcheck=1 - gpgkey=http://repo.openeuler.org/openEuler-20.03-LTS-SP3/OS/$basearch/RPM-GPG-KEY-openEuler - EOF - - yum clean all && yum makecache - ``` - -2. 
修改主机名以及映射 - - 设置各个节点的主机名 - - ```shell - hostnamectl set-hostname controller (CTL) - hostnamectl set-hostname compute (CPT) - ``` - - 假设controller节点的IP是`10.0.0.11`,compute节点的IP是`10.0.0.12`(如果存在的话),则于`/etc/hosts`新增如下: - - ```shell - 10.0.0.11 controller - 10.0.0.12 compute - ``` - -### 安装 SQL DataBase - -1. 执行如下命令,安装软件包。 - - ```shell - yum install mariadb mariadb-server python3-PyMySQL - ``` - -2. 执行如下命令,创建并编辑 `/etc/my.cnf.d/openstack.cnf` 文件。 - - ```shell - vim /etc/my.cnf.d/openstack.cnf - - [mysqld] - bind-address = 10.0.0.11 - default-storage-engine = innodb - innodb_file_per_table = on - max_connections = 4096 - collation-server = utf8_general_ci - character-set-server = utf8 - ``` - - ***注意*** - - **其中 `bind-address` 设置为控制节点的管理IP地址。** - -3. 启动 DataBase 服务,并为其配置开机自启动: - - ```shell - systemctl enable mariadb.service - systemctl start mariadb.service - ``` - -4. 配置DataBase的默认密码(可选) - - ```shell - mysql_secure_installation - ``` - - ***注意*** - - **根据提示进行即可** - -### 安装 RabbitMQ - -1. 执行如下命令,安装软件包。 - - ```shell - yum install rabbitmq-server - ``` - -2. 启动 RabbitMQ 服务,并为其配置开机自启动。 - - ```shell - systemctl enable rabbitmq-server.service - systemctl start rabbitmq-server.service - ``` - -3. 添加 OpenStack用户。 - - ```shell - rabbitmqctl add_user openstack RABBIT_PASS - ``` - - ***注意*** - - **替换 `RABBIT_PASS`,为 OpenStack 用户设置密码** - -4. 设置openstack用户权限,允许进行配置、写、读: - - ```shell - rabbitmqctl set_permissions openstack ".*" ".*" ".*" - ``` - -### 安装 Memcached - -1. 执行如下命令,安装依赖软件包。 - - ```shell - yum install memcached python3-memcached - ``` - -2. 编辑 `/etc/sysconfig/memcached` 文件。 - - ```shell - vim /etc/sysconfig/memcached - - OPTIONS="-l 127.0.0.1,::1,controller" - ``` - -3. 执行如下命令,启动 Memcached 服务,并为其配置开机启动。 - - ```shell - systemctl enable memcached.service - systemctl start memcached.service - ``` - - ***注意*** - - **服务启动后,可以通过命令`memcached-tool controller stats`确保启动正常,服务可用,其中可以将`controller`替换为控制节点的管理IP地址。** - -## 安装 OpenStack - -### Keystone 安装 - -1. 创建 keystone 数据库并授权。 - - ``` sql - mysql -u root -p - - MariaDB [(none)]> CREATE DATABASE keystone; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \ - IDENTIFIED BY 'KEYSTONE_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \ - IDENTIFIED BY 'KEYSTONE_DBPASS'; - MariaDB [(none)]> exit - ``` - - ***注意*** - - **替换 `KEYSTONE_DBPASS`,为 Keystone 数据库设置密码** - -2. 安装软件包。 - - ```shell - yum install openstack-keystone httpd mod_wsgi - ``` - -3. 配置keystone相关配置 - - ```shell - vim /etc/keystone/keystone.conf - - [database] - connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone - - [token] - provider = fernet - ``` - - ***解释*** - - [database]部分,配置数据库入口 - - [token]部分,配置token provider - - ***注意:*** - - **替换 `KEYSTONE_DBPASS` 为 Keystone 数据库的密码** - -4. 同步数据库。 - - ```shell - su -s /bin/sh -c "keystone-manage db_sync" keystone - ``` - -5. 初始化Fernet密钥仓库。 - - ```shell - keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone - keystone-manage credential_setup --keystone-user keystone --keystone-group keystone - ``` - -6. 启动服务。 - - ```shell - keystone-manage bootstrap --bootstrap-password ADMIN_PASS \ - --bootstrap-admin-url http://controller:5000/v3/ \ - --bootstrap-internal-url http://controller:5000/v3/ \ - --bootstrap-public-url http://controller:5000/v3/ \ - --bootstrap-region-id RegionOne - ``` - - ***注意*** - - **替换 `ADMIN_PASS`,为 admin 用户设置密码** - -7. 
配置Apache HTTP server - - ```shell - vim /etc/httpd/conf/httpd.conf - - ServerName controller - ``` - - ```shell - ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/ - ``` - - ***解释*** - - 配置 `ServerName` 项引用控制节点 - - ***注意*** - **如果 `ServerName` 项不存在则需要创建** - -8. 启动Apache HTTP服务。 - - ```shell - systemctl enable httpd.service - systemctl start httpd.service - ``` - -9. 创建环境变量配置。 - - ```shell - cat << EOF >> ~/.admin-openrc - export OS_PROJECT_DOMAIN_NAME=Default - export OS_USER_DOMAIN_NAME=Default - export OS_PROJECT_NAME=admin - export OS_USERNAME=admin - export OS_PASSWORD=ADMIN_PASS - export OS_AUTH_URL=http://controller:5000/v3 - export OS_IDENTITY_API_VERSION=3 - export OS_IMAGE_API_VERSION=2 - EOF - ``` - - ***注意*** - - **替换 `ADMIN_PASS` 为 admin 用户的密码** - -10. 依次创建domain, projects, users, roles,需要先安装好python3-openstackclient: - - ```shell - yum install python3-openstackclient - ``` - - 导入环境变量 - - ```shell - source ~/.admin-openrc - ``` - - 创建project `service`,其中 domain `default` 在 keystone-manage bootstrap 时已创建 - - ```shell - openstack domain create --description "An Example Domain" example - ``` - - ```shell - openstack project create --domain default --description "Service Project" service - ``` - - 创建(non-admin)project `myproject`,user `myuser` 和 role `myrole`,为 `myproject` 和 `myuser` 添加角色`myrole` - - ```shell - openstack project create --domain default --description "Demo Project" myproject - openstack user create --domain default --password-prompt myuser - openstack role create myrole - openstack role add --project myproject --user myuser myrole - ``` - -11. 验证 - - 取消临时环境变量OS_AUTH_URL和OS_PASSWORD: - - ```shell - source ~/.admin-openrc - unset OS_AUTH_URL OS_PASSWORD - ``` - - 为admin用户请求token: - - ```shell - openstack --os-auth-url http://controller:5000/v3 \ - --os-project-domain-name Default --os-user-domain-name Default \ - --os-project-name admin --os-username admin token issue - ``` - - 为myuser用户请求token: - - ```shell - openstack --os-auth-url http://controller:5000/v3 \ - --os-project-domain-name Default --os-user-domain-name Default \ - --os-project-name myproject --os-username myuser token issue - ``` - -### Glance 安装 - -1. 创建数据库、服务凭证和 API 端点 - - 创建数据库: - - ```sql - mysql -u root -p - - MariaDB [(none)]> CREATE DATABASE glance; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \ - IDENTIFIED BY 'GLANCE_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \ - IDENTIFIED BY 'GLANCE_DBPASS'; - MariaDB [(none)]> exit - ``` - - ***注意:*** - - **替换 `GLANCE_DBPASS`,为 glance 数据库设置密码** - - 创建服务凭证 - - ```shell - source ~/.admin-openrc - - openstack user create --domain default --password-prompt glance - openstack role add --project service --user glance admin - openstack service create --name glance --description "OpenStack Image" image - ``` - - 创建镜像服务API端点: - - ```shell - openstack endpoint create --region RegionOne image public http://controller:9292 - openstack endpoint create --region RegionOne image internal http://controller:9292 - openstack endpoint create --region RegionOne image admin http://controller:9292 - ``` - -2. 安装软件包 - - ```shell - yum install openstack-glance - ``` - -3. 
配置glance相关配置: - - ```shell - vim /etc/glance/glance-api.conf - - [database] - connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance - - [keystone_authtoken] - www_authenticate_uri = http://controller:5000 - auth_url = http://controller:5000 - memcached_servers = controller:11211 - auth_type = password - project_domain_name = Default - user_domain_name = Default - project_name = service - username = glance - password = GLANCE_PASS - - [paste_deploy] - flavor = keystone - - [glance_store] - stores = file,http - default_store = file - filesystem_store_datadir = /var/lib/glance/images/ - ``` - - ***解释:*** - - [database]部分,配置数据库入口 - - [keystone_authtoken] [paste_deploy]部分,配置身份认证服务入口 - - [glance_store]部分,配置本地文件系统存储和镜像文件的位置 - - ***注意*** - - **替换 `GLANCE_DBPASS` 为 glance 数据库的密码** - - **替换 `GLANCE_PASS` 为 glance 用户的密码** - -4. 同步数据库: - - ```shell - su -s /bin/sh -c "glance-manage db_sync" glance - ``` - -5. 启动服务: - - ```shell - systemctl enable openstack-glance-api.service - systemctl start openstack-glance-api.service - ``` - -6. 验证 - - 下载镜像 - - ```shell - source ~/.admin-openrc - - wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img - ``` - - ***注意*** - - **如果您使用的环境是鲲鹏架构,请下载aarch64版本的镜像;已对镜像cirros-0.5.2-aarch64-disk.img进行测试。** - - 向Image服务上传镜像: - - ```shell - openstack image create --disk-format qcow2 --container-format bare \ - --file cirros-0.4.0-x86_64-disk.img --public cirros - ``` - - 确认镜像上传并验证属性: - - ```shell - openstack image list - ``` - -### Placement安装 - -1. 创建数据库、服务凭证和 API 端点 - - 创建数据库: - - 作为 root 用户访问数据库,创建 placement 数据库并授权。 - - ```shell - mysql -u root -p - MariaDB [(none)]> CREATE DATABASE placement; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' \ - IDENTIFIED BY 'PLACEMENT_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' \ - IDENTIFIED BY 'PLACEMENT_DBPASS'; - MariaDB [(none)]> exit - ``` - - ***注意*** - - **替换 `PLACEMENT_DBPASS` 为 placement 数据库设置密码** - - ```shell - source admin-openrc - ``` - - 执行如下命令,创建 placement 服务凭证、创建 placement 用户以及添加‘admin’角色到用户‘placement’。 - - 创建Placement API服务 - - ```shell - openstack user create --domain default --password-prompt placement - openstack role add --project service --user placement admin - openstack service create --name placement --description "Placement API" placement - ``` - - 创建placement服务API端点: - - ```shell - openstack endpoint create --region RegionOne placement public http://controller:8778 - openstack endpoint create --region RegionOne placement internal http://controller:8778 - openstack endpoint create --region RegionOne placement admin http://controller:8778 - ``` - -2. 安装和配置 - - 安装软件包: - - ```shell - yum install openstack-placement-api - ``` - - 配置placement: - - 编辑 /etc/placement/placement.conf 文件: - - 在[placement_database]部分,配置数据库入口 - - 在[api] [keystone_authtoken]部分,配置身份认证服务入口 - - ```shell - # vim /etc/placement/placement.conf - [placement_database] - # ... - connection = mysql+pymysql://placement:PLACEMENT_DBPASS@controller/placement - [api] - # ... - auth_strategy = keystone - [keystone_authtoken] - # ... 
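-    # 说明:以下认证参数的写法与其他服务的 [keystone_authtoken] 一致;如需显式指定对外认证地址,
-    # 也可按需补充(示例,假设 controller 主机名可解析):
-    # www_authenticate_uri = http://controller:5000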
- auth_url = http://controller:5000/v3 - memcached_servers = controller:11211 - auth_type = password - project_domain_name = Default - user_domain_name = Default - project_name = service - username = placement - password = PLACEMENT_PASS - ``` - - 其中,替换 PLACEMENT_DBPASS 为 placement 数据库的密码,替换 PLACEMENT_PASS 为 placement 用户的密码。 - - 同步数据库: - - ```shell - su -s /bin/sh -c "placement-manage db sync" placement - ``` - - 启动httpd服务: - - ```shell - systemctl restart httpd - ``` - -3. 验证 - - 执行如下命令,执行状态检查: - - ```shell - . admin-openrc - placement-status upgrade check - ``` - - 安装osc-placement,列出可用的资源类别及特性: - - ```shell - yum install python3-osc-placement - openstack --os-placement-api-version 1.2 resource class list --sort-column name - openstack --os-placement-api-version 1.6 trait list --sort-column name - ``` - -### Nova 安装 - -1. 创建数据库、服务凭证和 API 端点 - - 创建数据库: - - ```sql - mysql -u root -p (CTL) - - MariaDB [(none)]> CREATE DATABASE nova_api; - MariaDB [(none)]> CREATE DATABASE nova; - MariaDB [(none)]> CREATE DATABASE nova_cell0; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \ - IDENTIFIED BY 'NOVA_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \ - IDENTIFIED BY 'NOVA_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \ - IDENTIFIED BY 'NOVA_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \ - IDENTIFIED BY 'NOVA_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \ - IDENTIFIED BY 'NOVA_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \ - IDENTIFIED BY 'NOVA_DBPASS'; - MariaDB [(none)]> exit - ``` - - ***注意*** - - **替换NOVA_DBPASS,为nova数据库设置密码** - - ```shell - source ~/.admin-openrc (CTL) - ``` - - 创建nova服务凭证: - - ```shell - openstack user create --domain default --password-prompt nova (CTL) - openstack role add --project service --user nova admin (CTL) - openstack service create --name nova --description "OpenStack Compute" compute (CTL) - ``` - - 创建nova API端点: - - ```shell - openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1 (CTL) - openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1 (CTL) - openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1 (CTL) - ``` - -2. 安装软件包 - - ```shell - yum install openstack-nova-api openstack-nova-conductor \ (CTL) - openstack-nova-novncproxy openstack-nova-scheduler - - yum install openstack-nova-compute (CPT) - ``` - - ***注意*** - - **如果为arm64结构,还需要执行以下命令** - - ```shell - yum install edk2-aarch64 (CPT) - ``` - -3. 
配置nova相关配置 - - ```shell - vim /etc/nova/nova.conf - - [DEFAULT] - enabled_apis = osapi_compute,metadata - transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/ - my_ip = 10.0.0.1 - use_neutron = true - firewall_driver = nova.virt.firewall.NoopFirewallDriver - compute_driver=libvirt.LibvirtDriver (CPT) - instances_path = /var/lib/nova/instances/ (CPT) - lock_path = /var/lib/nova/tmp (CPT) - - [api_database] - connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api (CTL) - - [database] - connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova (CTL) - - [api] - auth_strategy = keystone - - [keystone_authtoken] - www_authenticate_uri = http://controller:5000/ - auth_url = http://controller:5000/ - memcached_servers = controller:11211 - auth_type = password - project_domain_name = Default - user_domain_name = Default - project_name = service - username = nova - password = NOVA_PASS - - [vnc] - enabled = true - server_listen = $my_ip - server_proxyclient_address = $my_ip - novncproxy_base_url = http://controller:6080/vnc_auto.html (CPT) - - [glance] - api_servers = http://controller:9292 - - [oslo_concurrency] - lock_path = /var/lib/nova/tmp (CTL) - - [placement] - region_name = RegionOne - project_domain_name = Default - project_name = service - auth_type = password - user_domain_name = Default - auth_url = http://controller:5000/v3 - username = placement - password = PLACEMENT_PASS - - [neutron] - auth_url = http://controller:5000 - auth_type = password - project_domain_name = default - user_domain_name = default - region_name = RegionOne - project_name = service - username = neutron - password = NEUTRON_PASS - service_metadata_proxy = true (CTL) - metadata_proxy_shared_secret = METADATA_SECRET (CTL) - ``` - - ***解释*** - - [default]部分,启用计算和元数据的API,配置RabbitMQ消息队列入口,配置my_ip,启用网络服务neutron; - - [api_database] [database]部分,配置数据库入口; - - [api] [keystone_authtoken]部分,配置身份认证服务入口; - - [vnc]部分,启用并配置远程控制台入口; - - [glance]部分,配置镜像服务API的地址; - - [oslo_concurrency]部分,配置lock path; - - [placement]部分,配置placement服务的入口。 - - ***注意*** - - **替换 `RABBIT_PASS` 为 RabbitMQ 中 openstack 账户的密码;** - - **配置 `my_ip` 为控制节点的管理IP地址;** - - **替换 `NOVA_DBPASS` 为nova数据库的密码;** - - **替换 `NOVA_PASS` 为nova用户的密码;** - - **替换 `PLACEMENT_PASS` 为placement用户的密码;** - - **替换 `NEUTRON_PASS` 为neutron用户的密码;** - - **替换`METADATA_SECRET`为合适的元数据代理secret。** - - **额外** - - 确定是否支持虚拟机硬件加速(x86架构): - - ```shell - egrep -c '(vmx|svm)' /proc/cpuinfo (CPT) - ``` - - 如果返回值为0则不支持硬件加速,需要配置libvirt使用QEMU而不是KVM: - - ```shell - vim /etc/nova/nova.conf (CPT) - - [libvirt] - virt_type = qemu - ``` - - 如果返回值为1或更大的值,则支持硬件加速,则`virt_type`可以配置为`kvm` - - ***注意*** - - **如果为arm64结构,还需要在计算节点执行以下命令** - - ```shell - - mkdir -p /usr/share/AAVMF - chown nova:nova /usr/share/AAVMF - - ln -s /usr/share/edk2/aarch64/QEMU_EFI-pflash.raw \ - /usr/share/AAVMF/AAVMF_CODE.fd - ln -s /usr/share/edk2/aarch64/vars-template-pflash.raw \ - /usr/share/AAVMF/AAVMF_VARS.fd - - vim /etc/libvirt/qemu.conf - - nvram = ["/usr/share/AAVMF/AAVMF_CODE.fd: \ - /usr/share/AAVMF/AAVMF_VARS.fd", \ - "/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw: \ - /usr/share/edk2/aarch64/vars-template-pflash.raw"] - ``` - - 并且当ARM架构下的部署环境为嵌套虚拟化时,`libvirt`配置如下: - - ```shell - [libvirt] - virt_type = qemu - cpu_mode = custom - cpu_model = cortex-a72 - ``` - -4. 
同步数据库 - - 同步nova-api数据库: - - ```shell - su -s /bin/sh -c "nova-manage api_db sync" nova (CTL) - ``` - - 注册cell0数据库: - - ```shell - su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova (CTL) - ``` - - 创建cell1 cell: - - ```shell - su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova (CTL) - ``` - - 同步nova数据库: - - ```shell - su -s /bin/sh -c "nova-manage db sync" nova (CTL) - ``` - - 验证cell0和cell1注册正确: - - ```shell - su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova (CTL) - ``` - - 添加计算节点到openstack集群 - - ```shell - su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova (CTL) - ``` - -5. 启动服务 - - ```shell - systemctl enable \ (CTL) - openstack-nova-api.service \ - openstack-nova-scheduler.service \ - openstack-nova-conductor.service \ - openstack-nova-novncproxy.service - - systemctl start \ (CTL) - openstack-nova-api.service \ - openstack-nova-scheduler.service \ - openstack-nova-conductor.service \ - openstack-nova-novncproxy.service - ``` - - ```shell - systemctl enable libvirtd.service openstack-nova-compute.service (CPT) - systemctl start libvirtd.service openstack-nova-compute.service (CPT) - ``` - -6. 验证 - - ```shell - source ~/.admin-openrc (CTL) - ``` - - 列出服务组件,验证每个流程都成功启动和注册: - - ```shell - openstack compute service list (CTL) - ``` - - 列出身份服务中的API端点,验证与身份服务的连接: - - ```shell - openstack catalog list (CTL) - ``` - - 列出镜像服务中的镜像,验证与镜像服务的连接: - - ```shell - openstack image list (CTL) - ``` - - 检查cells是否运作成功,以及其他必要条件是否已具备。 - - ```shell - nova-status upgrade check (CTL) - ``` - -### Neutron 安装 - -1. 创建数据库、服务凭证和 API 端点 - - 创建数据库: - - ```sql - mysql -u root -p (CTL) - - MariaDB [(none)]> CREATE DATABASE neutron; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \ - IDENTIFIED BY 'NEUTRON_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \ - IDENTIFIED BY 'NEUTRON_DBPASS'; - MariaDB [(none)]> exit - ``` - - ***注意*** - - **替换 `NEUTRON_DBPASS` 为 neutron 数据库设置密码。** - - ```shell - source ~/.admin-openrc (CTL) - ``` - - 创建neutron服务凭证 - - ```shell - openstack user create --domain default --password-prompt neutron (CTL) - openstack role add --project service --user neutron admin (CTL) - openstack service create --name neutron --description "OpenStack Networking" network (CTL) - ``` - - 创建Neutron服务API端点: - - ```shell - openstack endpoint create --region RegionOne network public http://controller:9696 (CTL) - openstack endpoint create --region RegionOne network internal http://controller:9696 (CTL) - openstack endpoint create --region RegionOne network admin http://controller:9696 (CTL) - ``` - -2. 安装软件包: - - ```shell - yum install openstack-neutron openstack-neutron-linuxbridge ebtables ipset \ (CTL) - openstack-neutron-ml2 - ``` - - ```shell - yum install openstack-neutron-linuxbridge ebtables ipset (CPT) - ``` - -3. 
配置neutron相关配置: - - 配置主体配置 - - ```shell - vim /etc/neutron/neutron.conf - - [database] - connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron (CTL) - - [DEFAULT] - core_plugin = ml2 (CTL) - service_plugins = router (CTL) - allow_overlapping_ips = true (CTL) - transport_url = rabbit://openstack:RABBIT_PASS@controller - auth_strategy = keystone - notify_nova_on_port_status_changes = true (CTL) - notify_nova_on_port_data_changes = true (CTL) - api_workers = 3 (CTL) - - [keystone_authtoken] - www_authenticate_uri = http://controller:5000 - auth_url = http://controller:5000 - memcached_servers = controller:11211 - auth_type = password - project_domain_name = Default - user_domain_name = Default - project_name = service - username = neutron - password = NEUTRON_PASS - - [nova] - auth_url = http://controller:5000 (CTL) - auth_type = password (CTL) - project_domain_name = Default (CTL) - user_domain_name = Default (CTL) - region_name = RegionOne (CTL) - project_name = service (CTL) - username = nova (CTL) - password = NOVA_PASS (CTL) - - [oslo_concurrency] - lock_path = /var/lib/neutron/tmp - ``` - - ***解释*** - - [database]部分,配置数据库入口; - - [default]部分,启用ml2插件和router插件,允许ip地址重叠,配置RabbitMQ消息队列入口; - - [default] [keystone]部分,配置身份认证服务入口; - - [default] [nova]部分,配置网络来通知计算网络拓扑的变化; - - [oslo_concurrency]部分,配置lock path。 - - ***注意*** - - **替换`NEUTRON_DBPASS`为 neutron 数据库的密码;** - - **替换`RABBIT_PASS`为 RabbitMQ中openstack 账户的密码;** - - **替换`NEUTRON_PASS`为 neutron 用户的密码;** - - **替换`NOVA_PASS`为 nova 用户的密码。** - - 配置ML2插件: - - ```shell - vim /etc/neutron/plugins/ml2/ml2_conf.ini - - [ml2] - type_drivers = flat,vlan,vxlan - tenant_network_types = vxlan - mechanism_drivers = linuxbridge,l2population - extension_drivers = port_security - - [ml2_type_flat] - flat_networks = provider - - [ml2_type_vxlan] - vni_ranges = 1:1000 - - [securitygroup] - enable_ipset = true - ``` - - 创建/etc/neutron/plugin.ini的符号链接 - - ```shell - ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini - ``` - - **注意** - - **[ml2]部分,启用 flat、vlan、vxlan 网络,启用 linuxbridge 及 l2population 机制,启用端口安全扩展驱动;** - - **[ml2_type_flat]部分,配置 flat 网络为 provider 虚拟网络;** - - **[ml2_type_vxlan]部分,配置 VXLAN 网络标识符范围;** - - **[securitygroup]部分,配置允许 ipset。** - - **补充** - - **l2 的具体配置可以根据用户需求自行修改,本文使用的是provider network + linuxbridge** - - 配置 Linux bridge 代理: - - ```shell - vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini - - [linux_bridge] - physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME - - [vxlan] - enable_vxlan = true - local_ip = OVERLAY_INTERFACE_IP_ADDRESS - l2_population = true - - [securitygroup] - enable_security_group = true - firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver - ``` - - ***解释*** - - [linux_bridge]部分,映射 provider 虚拟网络到物理网络接口; - - [vxlan]部分,启用 vxlan 覆盖网络,配置处理覆盖网络的物理网络接口 IP 地址,启用 layer-2 population; - - [securitygroup]部分,允许安全组,配置 linux bridge iptables 防火墙驱动。 - - ***注意*** - - **替换`PROVIDER_INTERFACE_NAME`为物理网络接口;** - - **替换`OVERLAY_INTERFACE_IP_ADDRESS`为控制节点的管理IP地址。** - - 配置Layer-3代理: - - ```shell - vim /etc/neutron/l3_agent.ini (CTL) - - [DEFAULT] - interface_driver = linuxbridge - ``` - - ***解释*** - - 在[default]部分,配置接口驱动为linuxbridge - - 配置DHCP代理: - - ```shell - vim /etc/neutron/dhcp_agent.ini (CTL) - - [DEFAULT] - interface_driver = linuxbridge - dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq - enable_isolated_metadata = true - ``` - - ***解释*** - - [default]部分,配置linuxbridge接口驱动、Dnsmasq DHCP驱动,启用隔离的元数据。 - - 配置metadata代理: - - ```shell - vim /etc/neutron/metadata_agent.ini (CTL) 
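-   # 说明:以下为 metadata agent 的最小示例配置;此处的 METADATA_SECRET 需与后续 nova.conf 的
-   # [neutron] 配置中 metadata_proxy_shared_secret 使用同一个值,否则元数据请求无法通过校验。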
- - [DEFAULT] - nova_metadata_host = controller - metadata_proxy_shared_secret = METADATA_SECRET - ``` - - ***解释*** - - [default]部分,配置元数据主机和shared secret。 - - ***注意*** - - **替换`METADATA_SECRET`为合适的元数据代理secret。** - -4. 配置nova相关配置 - - ```shell - vim /etc/nova/nova.conf - - [neutron] - auth_url = http://controller:5000 - auth_type = password - project_domain_name = Default - user_domain_name = Default - region_name = RegionOne - project_name = service - username = neutron - password = NEUTRON_PASS - service_metadata_proxy = true (CTL) - metadata_proxy_shared_secret = METADATA_SECRET (CTL) - ``` - - ***解释*** - - [neutron]部分,配置访问参数,启用元数据代理,配置secret。 - - ***注意*** - - **替换`NEUTRON_PASS`为 neutron 用户的密码;** - - **替换`METADATA_SECRET`为合适的元数据代理secret。** - -5. 同步数据库: - - ```shell - su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \ - --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron - ``` - -6. 重启计算API服务: - - ```shell - systemctl restart openstack-nova-api.service - ``` - -7. 启动网络服务 - - ```shell - systemctl enable neutron-server.service neutron-linuxbridge-agent.service \ (CTL) - neutron-dhcp-agent.service neutron-metadata-agent.service \ - neutron-l3-agent.service - - systemctl restart neutron-server.service neutron-linuxbridge-agent.service \ (CTL) - neutron-dhcp-agent.service neutron-metadata-agent.service \ - neutron-l3-agent.service - - systemctl enable neutron-linuxbridge-agent.service (CPT) - systemctl restart neutron-linuxbridge-agent.service openstack-nova-compute.service (CPT) - ``` - -8. 验证 - - 验证 neutron 代理启动成功: - - ```shell - openstack network agent list - ``` - -### Cinder 安装 - -1. 创建数据库、服务凭证和 API 端点 - - 创建数据库: - - ```sql - mysql -u root -p - - MariaDB [(none)]> CREATE DATABASE cinder; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \ - IDENTIFIED BY 'CINDER_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \ - IDENTIFIED BY 'CINDER_DBPASS'; - MariaDB [(none)]> exit - ``` - - ***注意*** - - **替换 `CINDER_DBPASS` 为cinder数据库设置密码。** - - ```shell - source ~/.admin-openrc - ``` - - 创建cinder服务凭证: - - ```shell - openstack user create --domain default --password-prompt cinder - openstack role add --project service --user cinder admin - openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2 - openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3 - ``` - - 创建块存储服务API端点: - - ```shell - openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(project_id\)s - openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(project_id\)s - openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(project_id\)s - openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s - openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s - openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s - ``` - -2. 安装软件包: - - ```shell - yum install openstack-cinder-api openstack-cinder-scheduler (CTL) - ``` - - ```shell - yum install lvm2 device-mapper-persistent-data scsi-target-utils rpcbind nfs-utils \ (STG) - openstack-cinder-volume openstack-cinder-backup - ``` - -3. 准备存储设备,以下仅为示例: - - ```shell - pvcreate /dev/vdb - vgcreate cinder-volumes /dev/vdb - - vim /etc/lvm/lvm.conf - - - devices { - ... 
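-   # 说明:filter 中 "a" 表示接受(accept),"r" 表示拒绝(reject)。
-   # 假设操作系统盘也使用 LVM(例如位于 /dev/vda),则需要一并接受该设备,示例:
-   # filter = [ "a/vda/", "a/vdb/", "r/.*/"]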
-   filter = [ "a/vdb/", "r/.*/"]
-   ```

-   ***解释***

-   在devices部分,添加过滤器以接受/dev/vdb设备,并拒绝其他所有设备。

-4. 准备NFS

-   ```shell
-   mkdir -p /root/cinder/backup

-   cat << EOF >> /etc/exports
-   /root/cinder/backup 192.168.1.0/24(rw,sync,no_root_squash,no_all_squash)
-   EOF

-   ```

-5. 配置cinder相关配置:

-   ```shell
-   vim /etc/cinder/cinder.conf

-   [DEFAULT]
-   transport_url = rabbit://openstack:RABBIT_PASS@controller
-   auth_strategy = keystone
-   my_ip = 10.0.0.11
-   enabled_backends = lvm (STG)
-   backup_driver=cinder.backup.drivers.nfs.NFSBackupDriver (STG)
-   backup_share=HOST:PATH (STG)

-   [database]
-   connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder

-   [keystone_authtoken]
-   www_authenticate_uri = http://controller:5000
-   auth_url = http://controller:5000
-   memcached_servers = controller:11211
-   auth_type = password
-   project_domain_name = Default
-   user_domain_name = Default
-   project_name = service
-   username = cinder
-   password = CINDER_PASS

-   [oslo_concurrency]
-   lock_path = /var/lib/cinder/tmp

-   [lvm]
-   volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver (STG)
-   volume_group = cinder-volumes (STG)
-   iscsi_protocol = iscsi (STG)
-   iscsi_helper = tgtadm (STG)
-   ```

-   ***解释***

-   [database]部分,配置数据库入口;

-   [DEFAULT]部分,配置RabbitMQ消息队列入口,配置my_ip;

-   [DEFAULT] [keystone_authtoken]部分,配置身份认证服务入口;

-   [oslo_concurrency]部分,配置lock path;

-   [DEFAULT]中的enabled_backends、backup_driver、backup_share以及[lvm]部分仅在存储节点配置,分别用于启用LVM后端和NFS备份。

-   ***注意***

-   **替换`CINDER_DBPASS`为 cinder 数据库的密码;**

-   **替换`RABBIT_PASS`为 RabbitMQ 中 openstack 账户的密码;**

-   **配置`my_ip`为控制节点的管理 IP 地址;**

-   **替换`CINDER_PASS`为 cinder 用户的密码;**

-   **替换`HOST:PATH`为 NFS 服务端的主机 IP 和共享路径(即步骤4中导出的目录)。**

-6. 同步数据库:

-   ```shell
-   su -s /bin/sh -c "cinder-manage db sync" cinder (CTL)
-   ```

-7. 配置nova:

-   ```shell
-   vim /etc/nova/nova.conf (CTL)

-   [cinder]
-   os_region_name = RegionOne
-   ```

-8. 重启计算API服务

-   ```shell
-   systemctl restart openstack-nova-api.service
-   ```

-9. 启动cinder服务

-   ```shell
-   systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service (CTL)
-   systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service (CTL)
-   ```

-   ```shell
-   systemctl enable rpcbind.service nfs-server.service tgtd.service iscsid.service \ (STG)
-                    openstack-cinder-volume.service \
-                    openstack-cinder-backup.service
-   systemctl start rpcbind.service nfs-server.service tgtd.service iscsid.service \ (STG)
-                   openstack-cinder-volume.service \
-                   openstack-cinder-backup.service
-   ```

-   ***注意***

-   当cinder使用tgtadm的方式挂卷的时候,要修改/etc/tgt/tgtd.conf,内容如下,保证tgtd可以发现cinder-volume的iscsi target。

-   ```shell
-   include /var/lib/cinder/volumes/*
-   ```

-10. 验证

-    ```shell
-    source ~/.admin-openrc
-    openstack volume service list
-    ```

-### horizon 安装

-1. 安装软件包

-   ```shell
-   yum install openstack-dashboard
-   ```

-2. 修改文件

-   修改变量

-   ```text
-   vim /etc/openstack-dashboard/local_settings

-   OPENSTACK_HOST = "controller"
-   ALLOWED_HOSTS = ['*', ]

-   SESSION_ENGINE = 'django.contrib.sessions.backends.cache'

-   CACHES = {
-       'default': {
-            'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
-            'LOCATION': 'controller:11211',
-       }
-   }

-   OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
-   OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
-   OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
-   OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"

-   OPENSTACK_API_VERSIONS = {
-       "identity": 3,
-       "image": 2,
-       "volume": 3,
-   }
-   ```

-3. 重启 httpd 服务

-   ```shell
-   systemctl restart httpd.service memcached.service
-   ```

-4. 
验证 - 打开浏览器,输入网址,登录 horizon。 - - ***注意*** - - **替换HOSTIP为控制节点管理平面IP地址** - -### Tempest 安装 - -Tempest是OpenStack的集成测试服务,如果用户需要全面自动化测试已安装的OpenStack环境的功能,则推荐使用该组件。否则,可以不用安装。 - -1. 安装Tempest - - ```shell - yum install openstack-tempest - ``` - -2. 初始化目录 - - ```shell - tempest init mytest - ``` - -3. 修改配置文件。 - - ```shell - cd mytest - vi etc/tempest.conf - ``` - - tempest.conf中需要配置当前OpenStack环境的信息,具体内容可以参考[官方示例](https://docs.openstack.org/tempest/latest/sampleconf.html) - -4. 执行测试 - - ```shell - tempest run - ``` - -5. 安装tempest扩展(可选) - OpenStack各个服务本身也提供了一些tempest测试包,用户可以安装这些包来丰富tempest的测试内容。在Train中,我们提供了Cinder、Glance、Keystone、Ironic、Trove的扩展测试,用户可以执行如下命令进行安装使用: - ``` - yum install python3-cinder-tempest-plugin python3-glance-tempest-plugin python3-ironic-tempest-plugin python3-keystone-tempest-plugin python3-trove-tempest-plugin - ``` - -### Ironic 安装 - -Ironic是OpenStack的裸金属服务,如果用户需要进行裸机部署则推荐使用该组件。否则,可以不用安装。 - -1. 设置数据库 - - 裸金属服务在数据库中存储信息,创建一个**ironic**用户可以访问的**ironic**数据库,替换**IRONIC_DBPASSWORD**为合适的密码 - - ```sql - mysql -u root -p - - MariaDB [(none)]> CREATE DATABASE ironic CHARACTER SET utf8; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'localhost' \ - IDENTIFIED BY 'IRONIC_DBPASSWORD'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'%' \ - IDENTIFIED BY 'IRONIC_DBPASSWORD'; - ``` -2. 安装软件包 - - ```shell - yum install openstack-ironic-api openstack-ironic-conductor python3-ironicclient - ``` - - 启动服务 - - ```shell - systemctl enable openstack-ironic-api openstack-ironic-conductor - systemctl start openstack-ironic-api openstack-ironic-conductor - ``` - -3. 创建服务用户认证 - - 1、创建Bare Metal服务用户 - - ```shell - openstack user create --password IRONIC_PASSWORD \ - --email ironic@example.com ironic - openstack role add --project service --user ironic admin - openstack service create --name ironic \ - --description "Ironic baremetal provisioning service" baremetal - ``` - - 2、创建Bare Metal服务访问入口 - - ```shell - openstack endpoint create --region RegionOne baremetal admin http://$IRONIC_NODE:6385 - openstack endpoint create --region RegionOne baremetal public http://$IRONIC_NODE:6385 - openstack endpoint create --region RegionOne baremetal internal http://$IRONIC_NODE:6385 - ``` - -4. 配置ironic-api服务 - - 配置文件路径/etc/ironic/ironic.conf - - 1、通过**connection**选项配置数据库的位置,如下所示,替换**IRONIC_DBPASSWORD**为**ironic**用户的密码,替换**DB_IP**为DB服务器所在的IP地址: - - ```shell - [database] - - # The SQLAlchemy connection string used to connect to the - # database (string value) - - connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic - ``` - - 2、通过以下选项配置ironic-api服务使用RabbitMQ消息代理,替换**RPC_\***为RabbitMQ的详细地址和凭证 - - ```shell - [DEFAULT] - - # A URL representing the messaging driver to use and its full - # configuration. (string value) - - transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/ - ``` - - 用户也可自行使用json-rpc方式替换rabbitmq - - 3、配置ironic-api服务使用身份认证服务的凭证,替换**PUBLIC_IDENTITY_IP**为身份认证服务器的公共IP,替换**PRIVATE_IDENTITY_IP**为身份认证服务器的私有IP,替换**IRONIC_PASSWORD**为身份认证服务中**ironic**用户的密码: - - ```shell - [DEFAULT] - - # Authentication strategy used by ironic-api: one of - # "keystone" or "noauth". "noauth" should not be used in a - # production environment because all authentication will be - # disabled. 
(string value) - - auth_strategy=keystone - - [keystone_authtoken] - # Authentication type to load (string value) - auth_type=password - # Complete public Identity API endpoint (string value) - www_authenticate_uri=http://PUBLIC_IDENTITY_IP:5000 - # Complete admin Identity API endpoint. (string value) - auth_url=http://PRIVATE_IDENTITY_IP:5000 - # Service username. (string value) - username=ironic - # Service account password. (string value) - password=IRONIC_PASSWORD - # Service tenant name. (string value) - project_name=service - # Domain name containing project (string value) - project_domain_name=Default - # User's domain name (string value) - user_domain_name=Default - - ``` - - 4、创建裸金属服务数据库表 - - ```shell - ironic-dbsync --config-file /etc/ironic/ironic.conf create_schema - ``` - - 5、重启ironic-api服务 - - ```shell - sudo systemctl restart openstack-ironic-api - ``` - -5. 配置ironic-conductor服务 - - 1、替换**HOST_IP**为conductor host的IP - - ```shell - [DEFAULT] - - # IP address of this host. If unset, will determine the IP - # programmatically. If unable to do so, will use "127.0.0.1". - # (string value) - - my_ip=HOST_IP - ``` - - 2、配置数据库的位置,ironic-conductor应该使用和ironic-api相同的配置。替换**IRONIC_DBPASSWORD**为**ironic**用户的密码,替换DB_IP为DB服务器所在的IP地址: - - ```shell - [database] - - # The SQLAlchemy connection string to use to connect to the - # database. (string value) - - connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic - ``` - - 3、通过以下选项配置ironic-api服务使用RabbitMQ消息代理,ironic-conductor应该使用和ironic-api相同的配置,替换**RPC_\***为RabbitMQ的详细地址和凭证 - - ```shell - [DEFAULT] - - # A URL representing the messaging driver to use and its full - # configuration. (string value) - - transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/ - ``` - - 用户也可自行使用json-rpc方式替换rabbitmq - - 4、配置凭证访问其他OpenStack服务 - - 为了与其他OpenStack服务进行通信,裸金属服务在请求其他服务时需要使用服务用户与OpenStack Identity服务进行认证。这些用户的凭据必须在与相应服务相关的每个配置文件中进行配置。 - - ```shell - [neutron] - 访问OpenStack网络服务 - [glance] - 访问OpenStack镜像服务 - [swift] - 访问OpenStack对象存储服务 - [cinder] - 访问OpenStack块存储服务 - [inspector] - 访问OpenStack裸金属introspection服务 - [service_catalog] - 一个特殊项用于保存裸金属服务使用的凭证,该凭证用于发现注册在OpenStack身份认证服务目录中的自己的API URL端点 - ``` - - 简单起见,可以对所有服务使用同一个服务用户。为了向后兼容,该用户应该和ironic-api服务的[keystone_authtoken]所配置的为同一个用户。但这不是必须的,也可以为每个服务创建并配置不同的服务用户。 - - 在下面的示例中,用户访问OpenStack网络服务的身份验证信息配置为: - - ```shell - 网络服务部署在名为RegionOne的身份认证服务域中,仅在服务目录中注册公共端点接口 - - 请求时使用特定的CA SSL证书进行HTTPS连接 - - 与ironic-api服务配置相同的服务用户 - - 动态密码认证插件基于其他选项发现合适的身份认证服务API版本 - ``` - - ```shell - [neutron] - - # Authentication type to load (string value) - auth_type = password - # Authentication URL (string value) - auth_url=https://IDENTITY_IP:5000/ - # Username (string value) - username=ironic - # User's password (string value) - password=IRONIC_PASSWORD - # Project name to scope to (string value) - project_name=service - # Domain ID containing project (string value) - project_domain_id=default - # User's domain id (string value) - user_domain_id=default - # PEM encoded Certificate Authority to use when verifying - # HTTPs connections. (string value) - cafile=/opt/stack/data/ca-bundle.pem - # The default region_name for endpoint URL discovery. (string - # value) - region_name = RegionOne - # List of interfaces, in order of preference, for endpoint - # URL. (list value) - valid_interfaces=public - ``` - - 默认情况下,为了与其他服务进行通信,裸金属服务会尝试通过身份认证服务的服务目录发现该服务合适的端点。如果希望对一个特定服务使用一个不同的端点,则在裸金属服务的配置文件中通过endpoint_override选项进行指定: - - ```shell - [neutron] ... 
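-   # 示例(假设 Neutron API 即本文部署的 http://controller:9696,请按实际环境替换):
-   # endpoint_override = http://controller:9696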
endpoint_override = 
-   ```

-   5、配置允许的驱动程序和硬件类型

-   通过enabled_hardware_types选项设置ironic-conductor服务允许使用的硬件类型:

-   ```shell
-   [DEFAULT]
-   enabled_hardware_types = ipmi
-   ```

-   配置硬件接口:

-   ```shell
-   enabled_boot_interfaces = pxe
-   enabled_deploy_interfaces = direct,iscsi
-   enabled_inspect_interfaces = inspector
-   enabled_management_interfaces = ipmitool
-   enabled_power_interfaces = ipmitool
-   ```

-   配置接口默认值:

-   ```shell
-   [DEFAULT]
-   default_deploy_interface = direct
-   default_network_interface = neutron
-   ```

-   如果启用了任何使用Direct deploy的驱动,必须安装和配置镜像服务的Swift后端。Ceph对象网关(RADOS网关)也支持作为镜像服务的后端。

-   6、重启ironic-conductor服务

-   ```shell
-   sudo systemctl restart openstack-ironic-conductor
-   ```

-6. 配置httpd服务

-   1. 创建ironic要使用的httpd的root目录并设置属主属组,目录路径要和/etc/ironic/ironic.conf中[deploy]组的http_root配置项指定的路径一致。

-      ```
-      mkdir -p /var/lib/ironic/httproot
-      chown ironic.ironic /var/lib/ironic/httproot
-      ```

-   2. 安装和配置httpd服务

-      1. 安装httpd服务,已有请忽略

-         ```
-         yum install httpd -y
-         ```
-      2. 创建/etc/httpd/conf.d/openstack-ironic-httpd.conf文件,内容如下:

-         ```
-         Listen 8080

-         <VirtualHost *:8080>
-             ServerName ironic.openeuler.com

-             ErrorLog "/var/log/httpd/openstack-ironic-httpd-error_log"
-             CustomLog "/var/log/httpd/openstack-ironic-httpd-access_log" "%h %l %u %t \"%r\" %>s %b"

-             DocumentRoot "/var/lib/ironic/httproot"
-             <Directory "/var/lib/ironic/httproot">
-                 Options Indexes FollowSymLinks
-                 Require all granted
-             </Directory>
-             LogLevel warn
-             AddDefaultCharset UTF-8
-             EnableSendfile on
-         </VirtualHost>
-         ```

-         注意监听的端口要和/etc/ironic/ironic.conf里[deploy]选项中http_url配置项中指定的端口一致。

-      3. 重启httpd服务。

-         ```
-         systemctl restart httpd
-         ```
-7. deploy ramdisk镜像制作

-   T版(Train)的ramdisk镜像支持通过ironic-python-agent服务或disk-image-builder工具制作,也可以使用社区最新的ironic-python-agent-builder。用户也可以自行选择其他工具制作。
-   若使用T版原生工具,则需要安装对应的软件包。

-   ```shell
-   yum install openstack-ironic-python-agent
-   # 或者
-   yum install diskimage-builder
-   ```

-   具体的使用方法可以参考[官方文档](https://docs.openstack.org/ironic/queens/install/deploy-ramdisk.html)

-   这里介绍下使用ironic-python-agent-builder构建ironic使用的deploy镜像的完整过程。

-   1. 安装 ironic-python-agent-builder

-      1. 安装工具:

-         ```shell
-         pip install ironic-python-agent-builder
-         ```

-      2. 修改以下文件中的python解释器:

-         ```shell
-         /usr/bin/yum /usr/libexec/urlgrabber-ext-down
-         ```

-      3. 安装其它必须的工具:

-         ```shell
-         yum install git
-         ```

-         由于`DIB`依赖`semanage`命令,所以在制作镜像之前确定该命令是否可用:`semanage --help`,如果提示无此命令,安装即可:

-         ```shell
-         # 先查询需要安装哪个包
-         [root@localhost ~]# yum provides /usr/sbin/semanage
-         已加载插件:fastestmirror
-         Loading mirror speeds from cached hostfile
-          * base: mirror.vcu.edu
-          * extras: mirror.vcu.edu
-          * updates: mirror.math.princeton.edu
-         policycoreutils-python-2.5-34.el7.aarch64 : SELinux policy core python utilities
-         源    :base
-         匹配来源:
-         文件名    :/usr/sbin/semanage
-         # 安装
-         [root@localhost ~]# yum install policycoreutils-python
-         ```

-   2. 
制作镜像 - - 如果是`arm`架构,需要添加: - ```shell - export ARCH=aarch64 - ``` - - 基本用法: - - ```shell - usage: ironic-python-agent-builder [-h] [-r RELEASE] [-o OUTPUT] [-e ELEMENT] - [-b BRANCH] [-v] [--extra-args EXTRA_ARGS] - distribution - - positional arguments: - distribution Distribution to use - - optional arguments: - -h, --help show this help message and exit - -r RELEASE, --release RELEASE - Distribution release to use - -o OUTPUT, --output OUTPUT - Output base file name - -e ELEMENT, --element ELEMENT - Additional DIB element to use - -b BRANCH, --branch BRANCH - If set, override the branch that is used for ironic- - python-agent and requirements - -v, --verbose Enable verbose logging in diskimage-builder - --extra-args EXTRA_ARGS - Extra arguments to pass to diskimage-builder - ``` - - 举例说明: - - ```shell - ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky - ``` - - 3. 允许ssh登陆 - - 初始化环境变量,然后制作镜像: - - ```shell - export DIB_DEV_USER_USERNAME=ipa \ - export DIB_DEV_USER_PWDLESS_SUDO=yes \ - export DIB_DEV_USER_PASSWORD='123' - ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky -e selinux-permissive -e devuser - ``` - - 4. 指定代码仓库 - - 初始化对应的环境变量,然后制作镜像: - - ```shell - # 指定仓库地址以及版本 - DIB_REPOLOCATION_ironic_python_agent=git@172.20.2.149:liuzz/ironic-python-agent.git - DIB_REPOREF_ironic_python_agent=origin/develop - - # 直接从gerrit上clone代码 - DIB_REPOLOCATION_ironic_python_agent=https://review.opendev.org/openstack/ironic-python-agent - DIB_REPOREF_ironic_python_agent=refs/changes/43/701043/1 - ``` - - 参考:[source-repositories](https://docs.openstack.org/diskimage-builder/latest/elements/source-repositories/README.html)。 - - 指定仓库地址及版本验证成功。 - - 5. 注意 - 原生的openstack里的pxe配置文件的模版不支持arm64架构,需要自己对原生openstack代码进行修改: - - 在T版中,社区的ironic仍然不支持arm64位的uefi pxe启动,表现为生成的grub.cfg文件(一般位于/tftpboot/下)格式不对而导致pxe启动失败 - - 需要用户对生成grub.cfg的代码逻辑自行修改。 - - ironic向ipa发送查询命令执行状态请求的tls报错: - - T版的ipa和ironic默认都会开启tls认证的方式向对方发送请求,跟据官网的说明进行关闭即可。 - - 1. 修改ironic配置文件(/etc/ironic/ironic.conf)下面的配置中添加ipa-insecure=1: - - ``` - [agent] - verify_ca = False - - [pxe] - pxe_append_params = nofb nomodeset vga=normal coreos.autologin ipa-insecure=1 - ``` - - 2. ramdisk镜像中添加ipa配置文件/etc/ironic_python_agent/ironic_python_agent.conf并配置tls的配置如下: - - /etc/ironic_python_agent/ironic_python_agent.conf (需要提前创建/etc/ironic_python_agent目录) - - ``` - [DEFAULT] - enable_auto_tls = False - ``` - - 设置权限: - - ``` - chown -R ipa.ipa /etc/ironic_python_agent/ - ``` - - 3. 修改ipa服务的服务启动文件,添加配置文件选项 - - vim usr/lib/systemd/system/ironic-python-agent.service - - ``` - [Unit] - Description=Ironic Python Agent - After=network-online.target - - [Service] - ExecStartPre=/sbin/modprobe vfat - ExecStart=/usr/local/bin/ironic-python-agent --config-file /etc/ironic_python_agent/ironic_python_agent.conf - Restart=always - RestartSec=30s - - [Install] - WantedBy=multi-user.target - ``` - - -在Train中,我们还提供了ironic-inspector等服务,用户可根据自身需求安装。 - -### Kolla 安装 - -Kolla为OpenStack服务提供生产环境可用的容器化部署的功能。 - -Kolla的安装十分简单,只需要安装对应的RPM包即可 - -``` -yum install openstack-kolla openstack-kolla-ansible -``` - -安装完后,就可以使用`kolla-ansible`, `kolla-build`, `kolla-genpwd`, `kolla-mergepwd`等命令进行相关的镜像制作和容器环境部署了。 - -### Trove 安装 -Trove是OpenStack的数据库服务,如果用户使用OpenStack提供的数据库服务则推荐使用该组件。否则,可以不用安装。 - -1. 
设置数据库 - - 数据库服务在数据库中存储信息,创建一个**trove**用户可以访问的**trove**数据库,替换**TROVE_DBPASSWORD**为合适的密码 - - ```sql - mysql -u root -p - - MariaDB [(none)]> CREATE DATABASE trove CHARACTER SET utf8; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'localhost' \ - IDENTIFIED BY 'TROVE_DBPASSWORD'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'%' \ - IDENTIFIED BY 'TROVE_DBPASSWORD'; - ``` - -2. 创建服务用户认证 - - 1、创建**Trove**服务用户 - - ```shell - openstack user create --domain default --password-prompt trove - openstack role add --project service --user trove admin - openstack service create --name trove --description "Database" database - ``` - **解释:** `TROVE_PASSWORD` 替换为`trove`用户的密码 - - 2、创建**Database**服务访问入口 - - ```shell - openstack endpoint create --region RegionOne database public http://controller:8779/v1.0/%\(tenant_id\)s - openstack endpoint create --region RegionOne database internal http://controller:8779/v1.0/%\(tenant_id\)s - openstack endpoint create --region RegionOne database admin http://controller:8779/v1.0/%\(tenant_id\)s - ``` - -3. 安装和配置**Trove**各组件 - - 1、安装**Trove**包 - ```shell script - yum install openstack-trove python3-troveclient - ``` - - 2. 配置`trove.conf` - ```shell script - vim /etc/trove/trove.conf - - [DEFAULT] - log_dir = /var/log/trove - trove_auth_url = http://controller:5000/ - nova_compute_url = http://controller:8774/v2 - cinder_url = http://controller:8776/v1 - swift_url = http://controller:8080/v1/AUTH_ - rpc_backend = rabbit - transport_url = rabbit://openstack:RABBIT_PASS@controller:5672 - auth_strategy = keystone - add_addresses = True - api_paste_config = /etc/trove/api-paste.ini - nova_proxy_admin_user = admin - nova_proxy_admin_pass = ADMIN_PASSWORD - nova_proxy_admin_tenant_name = service - taskmanager_manager = trove.taskmanager.manager.Manager - use_nova_server_config_drive = True - # Set these if using Neutron Networking - network_driver = trove.network.neutron.NeutronDriver - network_label_regex = .* - - [database] - connection = mysql+pymysql://trove:TROVE_DBPASSWORD@controller/trove - - [keystone_authtoken] - www_authenticate_uri = http://controller:5000/ - auth_url = http://controller:5000/ - auth_type = password - project_domain_name = default - user_domain_name = default - project_name = service - username = trove - password = TROVE_PASSWORD - ``` - **解释:** - - `[Default]`分组中`nova_compute_url` 和 `cinder_url` 为Nova和Cinder在Keystone中创建的endpoint - - `nova_proxy_XXX` 为一个能访问Nova服务的用户信息,上例中使用`admin`用户为例 - - `transport_url` 为`RabbitMQ`连接信息,`RABBIT_PASS`替换为RabbitMQ的密码 - - `[database]`分组中的`connection` 为前面在mysql中为Trove创建的数据库信息 - - Trove的用户信息中`TROVE_PASSWORD`替换为实际trove用户的密码 - - 3. 配置`trove-guestagent.conf` - ```shell script - vim /etc/trove/trove-guestagent.conf - - rabbit_host = controller - rabbit_password = RABBIT_PASS - trove_auth_url = http://controller:5000/ - ``` - **解释:** `guestagent`是trove中一个独立组件,需要预先内置到Trove通过Nova创建的虚拟 - 机镜像中,在创建好数据库实例后,会起guestagent进程,负责通过消息队列(RabbitMQ)向Trove上 - 报心跳,因此需要配置RabbitMQ的用户和密码信息。 - **从Victoria版开始,Trove使用一个统一的镜像来跑不同类型的数据库,数据库服务运行在Guest虚拟机的Docker容器中。** - - `RABBIT_PASS`替换为RabbitMQ的密码 - - 4. 生成数据`Trove`数据库表 - ```shell script - su -s /bin/sh -c "trove-manage db_sync" trove - ``` - -4. 完成安装配置 - 1. 配置**Trove**服务自启动 - ```shell script - systemctl enable openstack-trove-api.service \ - openstack-trove-taskmanager.service \ - openstack-trove-conductor.service - ``` - 2. 
启动服务 - ```shell script - systemctl start openstack-trove-api.service \ - openstack-trove-taskmanager.service \ - openstack-trove-conductor.service - ``` -### Swift 安装 - -Swift 提供了弹性可伸缩、高可用的分布式对象存储服务,适合存储大规模非结构化数据。 - -1. 创建服务凭证、API端点。 - - 创建服务凭证 - - ``` shell - #创建swift用户: - openstack user create --domain default --password-prompt swift - #admin为swift用户添加角色: - openstack role add --project service --user swift admin - #创建swift服务实体: - openstack service create --name swift --description "OpenStack Object Storage" object-store - ``` - - 创建swift API 端点: - - ```shell - openstack endpoint create --region RegionOne object-store public http://controller:8080/v1/AUTH_%\(project_id\)s - openstack endpoint create --region RegionOne object-store internal http://controller:8080/v1/AUTH_%\(project_id\)s - openstack endpoint create --region RegionOne object-store admin http://controller:8080/v1 - ``` - - -2. 安装软件包: - - ```shell - yum install openstack-swift-proxy python3-swiftclient python3-keystoneclient python3-keystonemiddleware memcached (CTL) - ``` - -3. 配置proxy-server相关配置 - - Swift RPM包里已经包含了一个基本可用的proxy-server.conf,只需要手动修改其中的ip和swift password即可。 - - ***注意*** - - **注意替换password为您swift在身份服务中为用户选择的密码** - -4. 安装和配置存储节点 (STG) - - 安装支持的程序包: - ```shell - yum install xfsprogs rsync - ``` - - 将/dev/vdb和/dev/vdc设备格式化为 XFS - - ```shell - mkfs.xfs /dev/vdb - mkfs.xfs /dev/vdc - ``` - - 创建挂载点目录结构: - - ```shell - mkdir -p /srv/node/vdb - mkdir -p /srv/node/vdc - ``` - - 找到新分区的 UUID: - - ```shell - blkid - ``` - - 编辑/etc/fstab文件并将以下内容添加到其中: - - ```shell - UUID="" /srv/node/vdb xfs noatime 0 2 - UUID="" /srv/node/vdc xfs noatime 0 2 - ``` - - 挂载设备: - - ```shell - mount /srv/node/vdb - mount /srv/node/vdc - ``` - ***注意*** - - **如果用户不需要容灾功能,以上步骤只需要创建一个设备即可,同时可以跳过下面的rsync配置** - - (可选)创建或编辑/etc/rsyncd.conf文件以包含以下内容: - - ```shell - [DEFAULT] - uid = swift - gid = swift - log file = /var/log/rsyncd.log - pid file = /var/run/rsyncd.pid - address = MANAGEMENT_INTERFACE_IP_ADDRESS - - [account] - max connections = 2 - path = /srv/node/ - read only = False - lock file = /var/lock/account.lock - - [container] - max connections = 2 - path = /srv/node/ - read only = False - lock file = /var/lock/container.lock - - [object] - max connections = 2 - path = /srv/node/ - read only = False - lock file = /var/lock/object.lock - ``` - **替换MANAGEMENT_INTERFACE_IP_ADDRESS为存储节点上管理网络的IP地址** - - 启动rsyncd服务并配置它在系统启动时启动: - - ```shell - systemctl enable rsyncd.service - systemctl start rsyncd.service - ``` - -5. 在存储节点安装和配置组件 (STG) - - 安装软件包: - - ```shell - yum install openstack-swift-account openstack-swift-container openstack-swift-object - ``` - - 编辑/etc/swift目录的account-server.conf、container-server.conf和object-server.conf文件,替换bind_ip为存储节点上管理网络的IP地址。 - - 确保挂载点目录结构的正确所有权: - - ```shell - chown -R swift:swift /srv/node - ``` - - 创建recon目录并确保其拥有正确的所有权: - - ```shell - mkdir -p /var/cache/swift - chown -R root:swift /var/cache/swift - chmod -R 775 /var/cache/swift - ``` - -6. 
创建账号环 (CTL)

-   切换到/etc/swift目录。

-   ```shell
-   cd /etc/swift
-   ```

-   创建基础account.builder文件:

-   ```shell
-   swift-ring-builder account.builder create 10 1 1
-   ```

-   将每个存储节点添加到环中:

-   ```shell
-   swift-ring-builder account.builder add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6202 --device DEVICE_NAME --weight DEVICE_WEIGHT
-   ```

-   **替换STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS为存储节点上管理网络的IP地址。替换DEVICE_NAME为同一存储节点上的存储设备名称**

-   ***注意***
-   **对每个存储节点上的每个存储设备重复此命令**

-   验证环的内容:

-   ```shell
-   swift-ring-builder account.builder
-   ```

-   重新平衡环:

-   ```shell
-   swift-ring-builder account.builder rebalance
-   ```

-7. 创建容器环 (CTL)

-   切换到`/etc/swift`目录。

-   创建基础`container.builder`文件:

-   ```shell
-   swift-ring-builder container.builder create 10 1 1
-   ```

-   将每个存储节点添加到环中:

-   ```shell
-   swift-ring-builder container.builder \
-     add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6201 \
-     --device DEVICE_NAME --weight 100

-   ```

-   **替换STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS为存储节点上管理网络的IP地址。替换DEVICE_NAME为同一存储节点上的存储设备名称**

-   ***注意***
-   **对每个存储节点上的每个存储设备重复此命令**

-   验证环的内容:

-   ```shell
-   swift-ring-builder container.builder
-   ```

-   重新平衡环:

-   ```shell
-   swift-ring-builder container.builder rebalance
-   ```

-8. 创建对象环 (CTL)

-   切换到`/etc/swift`目录。

-   创建基础`object.builder`文件:

-   ```shell
-   swift-ring-builder object.builder create 10 1 1
-   ```

-   将每个存储节点添加到环中:

-   ```shell
-   swift-ring-builder object.builder \
-     add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6200 \
-     --device DEVICE_NAME --weight 100
-   ```

-   **替换STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS为存储节点上管理网络的IP地址。替换DEVICE_NAME为同一存储节点上的存储设备名称**

-   ***注意***
-   **对每个存储节点上的每个存储设备重复此命令**

-   验证环的内容:

-   ```shell
-   swift-ring-builder object.builder
-   ```

-   重新平衡环:

-   ```shell
-   swift-ring-builder object.builder rebalance
-   ```

-   分发环配置文件:

-   将`account.ring.gz`、`container.ring.gz`以及`object.ring.gz`文件复制到每个存储节点以及其他运行代理服务的节点上的`/etc/swift`目录。

-
-9. 
完成安装 - - 编辑`/etc/swift/swift.conf`文件 - - ``` shell - [swift-hash] - swift_hash_path_suffix = test-hash - swift_hash_path_prefix = test-hash - - [storage-policy:0] - name = Policy-0 - default = yes - ``` - - **用唯一值替换 test-hash** - - 将swift.conf文件复制到/etc/swift每个存储节点和运行代理服务的任何其他节点上的目录。 - - 在所有节点上,确保配置目录的正确所有权: - - ```shell - chown -R root:swift /etc/swift - ``` - - 在控制器节点和运行代理服务的任何其他节点上,启动对象存储代理服务及其依赖项,并将它们配置为在系统启动时启动: - - ```shell - systemctl enable openstack-swift-proxy.service memcached.service - systemctl start openstack-swift-proxy.service memcached.service - ``` - - 在存储节点上,启动对象存储服务并将它们配置为在系统启动时启动: - - ```shell - systemctl enable openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service - - systemctl start openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service - - systemctl enable openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service - - systemctl start openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service - - systemctl enable openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service - - systemctl start openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service - ``` - -### Cyborg 安装 - -Cyborg为OpenStack提供加速器设备的支持,包括 GPU, FPGA, ASIC, NP, SoCs, NVMe/NOF SSDs, ODP, DPDK/SPDK等等。 - -1. 初始化对应数据库 - -``` -CREATE DATABASE cyborg; -GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'localhost' IDENTIFIED BY 'CYBORG_DBPASS'; -GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'%' IDENTIFIED BY 'CYBORG_DBPASS'; -``` - -2. 创建对应Keystone资源对象 - -``` -$ openstack user create --domain default --password-prompt cyborg -$ openstack role add --project service --user cyborg admin -$ openstack service create --name cyborg --description "Acceleration Service" accelerator - -$ openstack endpoint create --region RegionOne \ - accelerator public http://:6666/v1 -$ openstack endpoint create --region RegionOne \ - accelerator internal http://:6666/v1 -$ openstack endpoint create --region RegionOne \ - accelerator admin http://:6666/v1 -``` - -3. 安装Cyborg - -``` -yum install openstack-cyborg -``` - -4. 
配置Cyborg - -修改`/etc/cyborg/cyborg.conf` - -``` -[DEFAULT] -transport_url = rabbit://%RABBITMQ_USER%:%RABBITMQ_PASSWORD%@%OPENSTACK_HOST_IP%:5672/ -use_syslog = False -state_path = /var/lib/cyborg -debug = True - -[database] -connection = mysql+pymysql://%DATABASE_USER%:%DATABASE_PASSWORD%@%OPENSTACK_HOST_IP%/cyborg - -[service_catalog] -project_domain_id = default -user_domain_id = default -project_name = service -password = PASSWORD -username = cyborg -auth_url = http://%OPENSTACK_HOST_IP%/identity -auth_type = password - -[placement] -project_domain_name = Default -project_name = service -user_domain_name = Default -password = PASSWORD -username = placement -auth_url = http://%OPENSTACK_HOST_IP%/identity -auth_type = password - -[keystone_authtoken] -memcached_servers = localhost:11211 -project_domain_name = Default -project_name = service -user_domain_name = Default -password = PASSWORD -username = cyborg -auth_url = http://%OPENSTACK_HOST_IP%/identity -auth_type = password -``` - -自行修改对应的用户名、密码、IP等信息 - -5. 同步数据库表格 - -``` -cyborg-dbsync --config-file /etc/cyborg/cyborg.conf upgrade -``` - -6. 启动Cyborg服务 - -``` -systemctl enable openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent -systemctl start openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent -``` - -### Aodh 安装 - -1. 创建数据库 - -``` -CREATE DATABASE aodh; - -GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'localhost' IDENTIFIED BY 'AODH_DBPASS'; - -GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'%' IDENTIFIED BY 'AODH_DBPASS'; -``` - -2. 创建对应Keystone资源对象 - -``` -openstack user create --domain default --password-prompt aodh - -openstack role add --project service --user aodh admin - -openstack service create --name aodh --description "Telemetry" alarming - -openstack endpoint create --region RegionOne alarming public http://controller:8042 - -openstack endpoint create --region RegionOne alarming internal http://controller:8042 - -openstack endpoint create --region RegionOne alarming admin http://controller:8042 -``` - -3. 安装Aodh - -``` -yum install openstack-aodh-api openstack-aodh-evaluator openstack-aodh-notifier openstack-aodh-listener openstack-aodh-expirer python3-aodhclient -``` - -4. 修改配置文件 - -``` -[database] -connection = mysql+pymysql://aodh:AODH_DBPASS@controller/aodh - -[DEFAULT] -transport_url = rabbit://openstack:RABBIT_PASS@controller -auth_strategy = keystone - -[keystone_authtoken] -www_authenticate_uri = http://controller:5000 -auth_url = http://controller:5000 -memcached_servers = controller:11211 -auth_type = password -project_domain_id = default -user_domain_id = default -project_name = service -username = aodh -password = AODH_PASS - -[service_credentials] -auth_type = password -auth_url = http://controller:5000/v3 -project_domain_id = default -user_domain_id = default -project_name = service -username = aodh -password = AODH_PASS -interface = internalURL -region_name = RegionOne -``` - -5. 初始化数据库 - -``` -aodh-dbsync -``` - -6. 启动Aodh服务 - -``` -systemctl enable openstack-aodh-api.service openstack-aodh-evaluator.service openstack-aodh-notifier.service openstack-aodh-listener.service - -systemctl start openstack-aodh-api.service openstack-aodh-evaluator.service openstack-aodh-notifier.service openstack-aodh-listener.service -``` - -### Gnocchi 安装 - -1. 创建数据库 - -``` -CREATE DATABASE gnocchi; - -GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'localhost' IDENTIFIED BY 'GNOCCHI_DBPASS'; - -GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'%' IDENTIFIED BY 'GNOCCHI_DBPASS'; -``` - -2. 
创建对应Keystone资源对象 - -``` -openstack user create --domain default --password-prompt gnocchi - -openstack role add --project service --user gnocchi admin - -openstack service create --name gnocchi --description "Metric Service" metric - -openstack endpoint create --region RegionOne metric public http://controller:8041 - -openstack endpoint create --region RegionOne metric internal http://controller:8041 - -openstack endpoint create --region RegionOne metric admin http://controller:8041 -``` - -3. 安装Gnocchi - -``` -yum install openstack-gnocchi-api openstack-gnocchi-metricd python3-gnocchiclient -``` - -4. 修改配置文件`/etc/gnocchi/gnocchi.conf` - -``` -[api] -auth_mode = keystone -port = 8041 -uwsgi_mode = http-socket - -[keystone_authtoken] -auth_type = password -auth_url = http://controller:5000/v3 -project_domain_name = Default -user_domain_name = Default -project_name = service -username = gnocchi -password = GNOCCHI_PASS -interface = internalURL -region_name = RegionOne - -[indexer] -url = mysql+pymysql://gnocchi:GNOCCHI_DBPASS@controller/gnocchi - -[storage] -# coordination_url is not required but specifying one will improve -# performance with better workload division across workers. -coordination_url = redis://controller:6379 -file_basepath = /var/lib/gnocchi -driver = file -``` - -5. 初始化数据库 - -``` -gnocchi-upgrade -``` - -6. 启动Gnocchi服务 - -``` -systemctl enable openstack-gnocchi-api.service openstack-gnocchi-metricd.service - -systemctl start openstack-gnocchi-api.service openstack-gnocchi-metricd.service -``` - -### Ceilometer 安装 - -1. 创建对应Keystone资源对象 - -``` -openstack user create --domain default --password-prompt ceilometer - -openstack role add --project service --user ceilometer admin - -openstack service create --name ceilometer --description "Telemetry" metering -``` - -2. 安装Ceilometer - -``` -yum install openstack-ceilometer-notification openstack-ceilometer-central -``` - -3. 修改配置文件`/etc/ceilometer/pipeline.yaml` - -``` -publishers: - # set address of Gnocchi - # + filter out Gnocchi-related activity meters (Swift driver) - # + set default archive policy - - gnocchi://?filter_project=service&archive_policy=low -``` - -4. 修改配置文件`/etc/ceilometer/ceilometer.conf` - -``` -[DEFAULT] -transport_url = rabbit://openstack:RABBIT_PASS@controller - -[service_credentials] -auth_type = password -auth_url = http://controller:5000/v3 -project_domain_id = default -user_domain_id = default -project_name = service -username = ceilometer -password = CEILOMETER_PASS -interface = internalURL -region_name = RegionOne -``` - -5. 初始化数据库 - -``` -ceilometer-upgrade -``` - -6. 启动Ceilometer服务 - -``` -systemctl enable openstack-ceilometer-notification.service openstack-ceilometer-central.service - -systemctl start openstack-ceilometer-notification.service openstack-ceilometer-central.service -``` - -### Heat 安装 - -1. 创建**heat**数据库,并授予**heat**数据库正确的访问权限,替换**HEAT_DBPASS**为合适的密码 - -``` -CREATE DATABASE heat; -GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'localhost' IDENTIFIED BY 'HEAT_DBPASS'; -GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'%' IDENTIFIED BY 'HEAT_DBPASS'; -``` - -2. 创建服务凭证,创建**heat**用户,并为其增加**admin**角色 - -``` -openstack user create --domain default --password-prompt heat -openstack role add --project service --user heat admin -``` - -3. 
创建**heat**和**heat-cfn**服务及其对应的API端点 - -``` -openstack service create --name heat --description "Orchestration" orchestration -openstack service create --name heat-cfn --description "Orchestration" cloudformation -openstack endpoint create --region RegionOne orchestration public http://controller:8004/v1/%\(tenant_id\)s -openstack endpoint create --region RegionOne orchestration internal http://controller:8004/v1/%\(tenant_id\)s -openstack endpoint create --region RegionOne orchestration admin http://controller:8004/v1/%\(tenant_id\)s -openstack endpoint create --region RegionOne cloudformation public http://controller:8000/v1 -openstack endpoint create --region RegionOne cloudformation internal http://controller:8000/v1 -openstack endpoint create --region RegionOne cloudformation admin http://controller:8000/v1 -``` - -4. 创建stack管理的额外信息,包括**heat**domain及其对应domain的admin用户**heat_domain_admin**, -**heat_stack_owner**角色,**heat_stack_user**角色 - -``` -openstack user create --domain heat --password-prompt heat_domain_admin -openstack role add --domain heat --user-domain heat --user heat_domain_admin admin -openstack role create heat_stack_owner -openstack role create heat_stack_user -``` - -5. 安装软件包 - -``` -yum install openstack-heat-api openstack-heat-api-cfn openstack-heat-engine -``` - -6. 修改配置文件`/etc/heat/heat.conf` - -``` -[DEFAULT] -transport_url = rabbit://openstack:RABBIT_PASS@controller -heat_metadata_server_url = http://controller:8000 -heat_waitcondition_server_url = http://controller:8000/v1/waitcondition -stack_domain_admin = heat_domain_admin -stack_domain_admin_password = HEAT_DOMAIN_PASS -stack_user_domain_name = heat - -[database] -connection = mysql+pymysql://heat:HEAT_DBPASS@controller/heat - -[keystone_authtoken] -www_authenticate_uri = http://controller:5000 -auth_url = http://controller:5000 -memcached_servers = controller:11211 -auth_type = password -project_domain_name = default -user_domain_name = default -project_name = service -username = heat -password = HEAT_PASS - -[trustee] -auth_type = password -auth_url = http://controller:5000 -username = heat -password = HEAT_PASS -user_domain_name = default - -[clients_keystone] -auth_uri = http://controller:5000 -``` - -7. 初始化**heat**数据库表 - -``` -su -s /bin/sh -c "heat-manage db_sync" heat -``` - -8. 
启动服务 - -``` -systemctl enable openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service -systemctl start openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service -``` diff --git a/docs/install/openEuler-21.09/OpenStack-wallaby.md b/docs/install/openEuler-21.09/OpenStack-wallaby.md deleted file mode 100644 index 6bb13afb29c298c9aa023fed9363e85fc3294460..0000000000000000000000000000000000000000 --- a/docs/install/openEuler-21.09/OpenStack-wallaby.md +++ /dev/null @@ -1,2688 +0,0 @@ -# OpenStack-Wallaby 部署指南 - - - -- [OpenStack-Wallaby 部署指南](#openstack-wallaby-部署指南) - - [OpenStack 简介](#openstack-简介) - - [约定](#约定) - - [准备环境](#准备环境) - - [环境配置](#环境配置) - - [安装 SQL DataBase](#安装-sql-database) - - [安装 RabbitMQ](#安装-rabbitmq) - - [安装 Memcached](#安装-memcached) - - [安装 OpenStack](#安装-openstack) - - [Keystone 安装](#keystone-安装) - - [Glance 安装](#glance-安装) - - [Placement安装](#placement安装) - - [Nova 安装](#nova-安装) - - [Neutron 安装](#neutron-安装) - - [Cinder 安装](#cinder-安装) - - [horizon 安装](#horizon-安装) - - [Tempest 安装](#tempest-安装) - - [Ironic 安装](#ironic-安装) - - [Kolla 安装](#kolla-安装) - - [Trove 安装](#trove-安装) - - [Swift 安装](#swift-安装) - - -## OpenStack 简介 - -OpenStack 是一个社区,也是一个项目。它提供了一个部署云的操作平台或工具集,为组织提供可扩展的、灵活的云计算。 - -作为一个开源的云计算管理平台,OpenStack 由nova、cinder、neutron、glance、keystone、horizon等几个主要的组件组合起来完成具体工作。OpenStack 支持几乎所有类型的云环境,项目目标是提供实施简单、可大规模扩展、丰富、标准统一的云计算管理平台。OpenStack 通过各种互补的服务提供了基础设施即服务(IaaS)的解决方案,每个服务提供 API 进行集成。 - -openEuler 21.09 版本官方源已经支持 OpenStack-Wallaby 版本,用户可以配置好 yum 源后根据此文档进行 OpenStack 部署。 - -## 约定 - -OpenStack 支持多种形态部署,此文档支持`ALL in One`以及`Distributed`两种部署方式,按照如下方式约定: - -`ALL in One`模式: - -```text -忽略所有可能的后缀 -``` - -`Distributed`模式: - -```text -以 `(CTL)` 为后缀表示此条配置或者命令仅适用`控制节点` -以 `(CPT)` 为后缀表示此条配置或者命令仅适用`计算节点` -以 `(STG)` 为后缀表示此条配置或者命令仅适用`存储节点` -除此之外表示此条配置或者命令同时适用`控制节点`和`计算节点` -``` - -***注意*** - -涉及到以上约定的服务如下: - -- Cinder -- Nova -- Neutron - -## 准备环境 - -### 环境配置 - -1. 配置 21.09 官方yum源,需要启用EPOL软件仓以支持OpenStack - - ```shell - cat << EOF >> /etc/yum.repos.d/21.09-OpenStack_Wallaby.repo - [OS] - name=OS - baseurl=http://repo.openeuler.org/openEuler-21.09/OS/$basearch/ - enabled=1 - gpgcheck=1 - gpgkey=http://repo.openeuler.org/openEuler-21.09/OS/$basearch/RPM-GPG-KEY-openEuler - - [everything] - name=everything - baseurl=http://repo.openeuler.org/openEuler-21.09/everything/$basearch/ - enabled=1 - gpgcheck=1 - gpgkey=http://repo.openeuler.org/openEuler-21.09/everything/$basearch/RPM-GPG-KEY-openEuler - - [EPOL] - name=EPOL - baseurl=http://repo.openeuler.org/openEuler-21.09/EPOL/$basearch/ - enabled=1 - gpgcheck=1 - gpgkey=http://repo.openeuler.org/openEuler-21.09/OS/$basearch/RPM-GPG-KEY-openEuler - EOF - - yum clean all && yum makecache - ``` - -2. 修改主机名以及映射 - - 设置各个节点的主机名 - - ```shell - hostnamectl set-hostname controller (CTL) - hostnamectl set-hostname compute (CPT) - ``` - - 假设controller节点的IP是`10.0.0.11`,compute节点的IP是`10.0.0.12`(如果存在的话),则于`/etc/hosts`新增如下: - - ```shell - 10.0.0.11 controller - 10.0.0.12 compute - ``` - -### 安装 SQL DataBase - -1. 执行如下命令,安装软件包。 - - ```shell - yum install mariadb mariadb-server python3-PyMySQL - ``` - -2. 执行如下命令,创建并编辑 `/etc/my.cnf.d/openstack.cnf` 文件。 - - ```shell - vim /etc/my.cnf.d/openstack.cnf - - [mysqld] - bind-address = 10.0.0.11 - default-storage-engine = innodb - innodb_file_per_table = on - max_connections = 4096 - collation-server = utf8_general_ci - character-set-server = utf8 - ``` - - ***注意*** - - **其中 `bind-address` 设置为控制节点的管理IP地址。** - -3. 
启动 DataBase 服务,并为其配置开机自启动: - - ```shell - systemctl enable mariadb.service - systemctl start mariadb.service - ``` - -4. 配置DataBase的默认密码(可选) - - ```shell - mysql_secure_installation - ``` - - ***注意*** - - **根据提示进行即可** - -### 安装 RabbitMQ - -1. 执行如下命令,安装软件包。 - - ```shell - yum install rabbitmq-server - ``` - -2. 启动 RabbitMQ 服务,并为其配置开机自启动。 - - ```shell - systemctl enable rabbitmq-server.service - systemctl start rabbitmq-server.service - ``` - -3. 添加 OpenStack用户。 - - ```shell - rabbitmqctl add_user openstack RABBIT_PASS - ``` - - ***注意*** - - **替换 `RABBIT_PASS`,为 OpenStack 用户设置密码** - -4. 设置openstack用户权限,允许进行配置、写、读: - - ```shell - rabbitmqctl set_permissions openstack ".*" ".*" ".*" - ``` - -### 安装 Memcached - -1. 执行如下命令,安装依赖软件包。 - - ```shell - yum install memcached python3-memcached - ``` - -2. 编辑 `/etc/sysconfig/memcached` 文件。 - - ```shell - vim /etc/sysconfig/memcached - - OPTIONS="-l 127.0.0.1,::1,controller" - ``` - -3. 执行如下命令,启动 Memcached 服务,并为其配置开机启动。 - - ```shell - systemctl enable memcached.service - systemctl start memcached.service - ``` - - ***注意*** - - **服务启动后,可以通过命令`memcached-tool controller stats`确保启动正常,服务可用,其中可以将`controller`替换为控制节点的管理IP地址。** - -## 安装 OpenStack - -### Keystone 安装 - -1. 创建 keystone 数据库并授权。 - - ``` sql - mysql -u root -p - - MariaDB [(none)]> CREATE DATABASE keystone; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \ - IDENTIFIED BY 'KEYSTONE_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \ - IDENTIFIED BY 'KEYSTONE_DBPASS'; - MariaDB [(none)]> exit - ``` - - ***注意*** - - **替换 `KEYSTONE_DBPASS`,为 Keystone 数据库设置密码** - -2. 安装软件包。 - - ```shell - yum install openstack-keystone httpd mod_wsgi - ``` - -3. 配置keystone相关配置 - - ```shell - vim /etc/keystone/keystone.conf - - [database] - connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone - - [token] - provider = fernet - ``` - - ***解释*** - - [database]部分,配置数据库入口 - - [token]部分,配置token provider - - ***注意:*** - - **替换 `KEYSTONE_DBPASS` 为 Keystone 数据库的密码** - -4. 同步数据库。 - - ```shell - su -s /bin/sh -c "keystone-manage db_sync" keystone - ``` - -5. 初始化Fernet密钥仓库。 - - ```shell - keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone - keystone-manage credential_setup --keystone-user keystone --keystone-group keystone - ``` - -6. 启动服务。 - - ```shell - keystone-manage bootstrap --bootstrap-password ADMIN_PASS \ - --bootstrap-admin-url http://controller:5000/v3/ \ - --bootstrap-internal-url http://controller:5000/v3/ \ - --bootstrap-public-url http://controller:5000/v3/ \ - --bootstrap-region-id RegionOne - ``` - - ***注意*** - - **替换 `ADMIN_PASS`,为 admin 用户设置密码** - -7. 配置Apache HTTP server - - ```shell - vim /etc/httpd/conf/httpd.conf - - ServerName controller - ``` - - ```shell - ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/ - ``` - - ***解释*** - - 配置 `ServerName` 项引用控制节点 - - ***注意*** - **如果 `ServerName` 项不存在则需要创建** - -8. 启动Apache HTTP服务。 - - ```shell - systemctl enable httpd.service - systemctl start httpd.service - ``` - -9. 创建环境变量配置。 - - ```shell - cat << EOF >> ~/.admin-openrc - export OS_PROJECT_DOMAIN_NAME=Default - export OS_USER_DOMAIN_NAME=Default - export OS_PROJECT_NAME=admin - export OS_USERNAME=admin - export OS_PASSWORD=ADMIN_PASS - export OS_AUTH_URL=http://controller:5000/v3 - export OS_IDENTITY_API_VERSION=3 - export OS_IMAGE_API_VERSION=2 - EOF - ``` - - ***注意*** - - **替换 `ADMIN_PASS` 为 admin 用户的密码** - -10. 
依次创建domain, projects, users, roles,需要先安装好python3-openstackclient: - - ```shell - yum install python3-openstackclient - ``` - - 导入环境变量 - - ```shell - source ~/.admin-openrc - ``` - - 创建project `service`,其中 domain `default` 在 keystone-manage bootstrap 时已创建 - - ```shell - openstack domain create --description "An Example Domain" example - ``` - - ```shell - openstack project create --domain default --description "Service Project" service - ``` - - 创建(non-admin)project `myproject`,user `myuser` 和 role `myrole`,为 `myproject` 和 `myuser` 添加角色`myrole` - - ```shell - openstack project create --domain default --description "Demo Project" myproject - openstack user create --domain default --password-prompt myuser - openstack role create myrole - openstack role add --project myproject --user myuser myrole - ``` - -11. 验证 - - 取消临时环境变量OS_AUTH_URL和OS_PASSWORD: - - ```shell - source ~/.admin-openrc - unset OS_AUTH_URL OS_PASSWORD - ``` - - 为admin用户请求token: - - ```shell - openstack --os-auth-url http://controller:5000/v3 \ - --os-project-domain-name Default --os-user-domain-name Default \ - --os-project-name admin --os-username admin token issue - ``` - - 为myuser用户请求token: - - ```shell - openstack --os-auth-url http://controller:5000/v3 \ - --os-project-domain-name Default --os-user-domain-name Default \ - --os-project-name myproject --os-username myuser token issue - ``` - -### Glance 安装 - -1. 创建数据库、服务凭证和 API 端点 - - 创建数据库: - - ```sql - mysql -u root -p - - MariaDB [(none)]> CREATE DATABASE glance; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \ - IDENTIFIED BY 'GLANCE_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \ - IDENTIFIED BY 'GLANCE_DBPASS'; - MariaDB [(none)]> exit - ``` - - ***注意:*** - - **替换 `GLANCE_DBPASS`,为 glance 数据库设置密码** - - 创建服务凭证 - - ```shell - source ~/.admin-openrc - - openstack user create --domain default --password-prompt glance - openstack role add --project service --user glance admin - openstack service create --name glance --description "OpenStack Image" image - ``` - - 创建镜像服务API端点: - - ```shell - openstack endpoint create --region RegionOne image public http://controller:9292 - openstack endpoint create --region RegionOne image internal http://controller:9292 - openstack endpoint create --region RegionOne image admin http://controller:9292 - ``` - -2. 安装软件包 - - ```shell - yum install openstack-glance - ``` - -3. 配置glance相关配置: - - ```shell - vim /etc/glance/glance-api.conf - - [database] - connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance - - [keystone_authtoken] - www_authenticate_uri = http://controller:5000 - auth_url = http://controller:5000 - memcached_servers = controller:11211 - auth_type = password - project_domain_name = Default - user_domain_name = Default - project_name = service - username = glance - password = GLANCE_PASS - - [paste_deploy] - flavor = keystone - - [glance_store] - stores = file,http - default_store = file - filesystem_store_datadir = /var/lib/glance/images/ - ``` - - ***解释:*** - - [database]部分,配置数据库入口 - - [keystone_authtoken] [paste_deploy]部分,配置身份认证服务入口 - - [glance_store]部分,配置本地文件系统存储和镜像文件的位置 - - ***注意*** - - **替换 `GLANCE_DBPASS` 为 glance 数据库的密码** - - **替换 `GLANCE_PASS` 为 glance 用户的密码** - -4. 同步数据库: - - ```shell - su -s /bin/sh -c "glance-manage db_sync" glance - ``` - -5. 启动服务: - - ```shell - systemctl enable openstack-glance-api.service - systemctl start openstack-glance-api.service - ``` - -6. 
验证 - - 下载镜像 - - ```shell - source ~/.admin-openrc - - wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img - ``` - - ***注意*** - - **如果您使用的环境是鲲鹏架构,请下载aarch64版本的镜像;已对镜像cirros-0.5.2-aarch64-disk.img进行测试。** - - 向Image服务上传镜像: - - ```shell - openstack image create --disk-format qcow2 --container-format bare \ - --file cirros-0.4.0-x86_64-disk.img --public cirros - ``` - - 确认镜像上传并验证属性: - - ```shell - openstack image list - ``` - -### Placement安装 - -1. 创建数据库、服务凭证和 API 端点 - - 创建数据库: - - 作为 root 用户访问数据库,创建 placement 数据库并授权。 - - ```shell - mysql -u root -p - MariaDB [(none)]> CREATE DATABASE placement; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' \ - IDENTIFIED BY 'PLACEMENT_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' \ - IDENTIFIED BY 'PLACEMENT_DBPASS'; - MariaDB [(none)]> exit - ``` - - ***注意*** - - **替换 `PLACEMENT_DBPASS` 为 placement 数据库设置密码** - - ```shell - source admin-openrc - ``` - - 执行如下命令,创建 placement 服务凭证、创建 placement 用户以及添加‘admin’角色到用户‘placement’。 - - 创建Placement API服务 - - ```shell - openstack user create --domain default --password-prompt placement - openstack role add --project service --user placement admin - openstack service create --name placement --description "Placement API" placement - ``` - - 创建placement服务API端点: - - ```shell - openstack endpoint create --region RegionOne placement public http://controller:8778 - openstack endpoint create --region RegionOne placement internal http://controller:8778 - openstack endpoint create --region RegionOne placement admin http://controller:8778 - ``` - -2. 安装和配置 - - 安装软件包: - - ```shell - yum install openstack-placement-api - ``` - - 配置placement: - - 编辑 /etc/placement/placement.conf 文件: - - 在[placement_database]部分,配置数据库入口 - - 在[api] [keystone_authtoken]部分,配置身份认证服务入口 - - ```shell - # vim /etc/placement/placement.conf - [placement_database] - # ... - connection = mysql+pymysql://placement:PLACEMENT_DBPASS@controller/placement - [api] - # ... - auth_strategy = keystone - [keystone_authtoken] - # ... - auth_url = http://controller:5000/v3 - memcached_servers = controller:11211 - auth_type = password - project_domain_name = Default - user_domain_name = Default - project_name = service - username = placement - password = PLACEMENT_PASS - ``` - - 其中,替换 PLACEMENT_DBPASS 为 placement 数据库的密码,替换 PLACEMENT_PASS 为 placement 用户的密码。 - - 同步数据库: - - ```shell - su -s /bin/sh -c "placement-manage db sync" placement - ``` - - 启动httpd服务: - - ```shell - systemctl restart httpd - ``` - -3. 验证 - - 执行如下命令,执行状态检查: - - ```shell - . admin-openrc - placement-status upgrade check - ``` - - 安装osc-placement,列出可用的资源类别及特性: - - ```shell - yum install python3-osc-placement - openstack --os-placement-api-version 1.2 resource class list --sort-column name - openstack --os-placement-api-version 1.6 trait list --sort-column name - ``` - -### Nova 安装 - -1. 
创建数据库、服务凭证和 API 端点 - - 创建数据库: - - ```sql - mysql -u root -p (CTL) - - MariaDB [(none)]> CREATE DATABASE nova_api; - MariaDB [(none)]> CREATE DATABASE nova; - MariaDB [(none)]> CREATE DATABASE nova_cell0; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \ - IDENTIFIED BY 'NOVA_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \ - IDENTIFIED BY 'NOVA_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \ - IDENTIFIED BY 'NOVA_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \ - IDENTIFIED BY 'NOVA_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \ - IDENTIFIED BY 'NOVA_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \ - IDENTIFIED BY 'NOVA_DBPASS'; - MariaDB [(none)]> exit - ``` - - ***注意*** - - **替换NOVA_DBPASS,为nova数据库设置密码** - - ```shell - source ~/.admin-openrc (CTL) - ``` - - 创建nova服务凭证: - - ```shell - openstack user create --domain default --password-prompt nova (CTL) - openstack role add --project service --user nova admin (CTL) - openstack service create --name nova --description "OpenStack Compute" compute (CTL) - ``` - - 创建nova API端点: - - ```shell - openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1 (CTL) - openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1 (CTL) - openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1 (CTL) - ``` - -2. 安装软件包 - - ```shell - yum install openstack-nova-api openstack-nova-conductor \ (CTL) - openstack-nova-novncproxy openstack-nova-scheduler - - yum install openstack-nova-compute (CPT) - ``` - - ***注意*** - - **如果为arm64结构,还需要执行以下命令** - - ```shell - yum install edk2-aarch64 (CPT) - ``` - -3. 
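在编辑 nova.conf 之前,建议先确认本节点的管理 IP(稍后用于 `my_ip` 配置),并备份默认配置文件以便回退,以下仅为示例:

```shell
# 查看本机网卡及 IP,确认管理网络地址
ip addr show
# 备份默认配置,便于出错时回退
cp /etc/nova/nova.conf /etc/nova/nova.conf.bak
```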
配置nova相关配置 - - ```shell - vim /etc/nova/nova.conf - - [DEFAULT] - enabled_apis = osapi_compute,metadata - transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/ - my_ip = 10.0.0.1 - use_neutron = true - firewall_driver = nova.virt.firewall.NoopFirewallDriver - compute_driver=libvirt.LibvirtDriver (CPT) - instances_path = /var/lib/nova/instances/ (CPT) - lock_path = /var/lib/nova/tmp (CPT) - - [api_database] - connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api (CTL) - - [database] - connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova (CTL) - - [api] - auth_strategy = keystone - - [keystone_authtoken] - www_authenticate_uri = http://controller:5000/ - auth_url = http://controller:5000/ - memcached_servers = controller:11211 - auth_type = password - project_domain_name = Default - user_domain_name = Default - project_name = service - username = nova - password = NOVA_PASS - - [vnc] - enabled = true - server_listen = $my_ip - server_proxyclient_address = $my_ip - novncproxy_base_url = http://controller:6080/vnc_auto.html (CPT) - - [libvirt] - virt_type = qemu (CPT) - cpu_mode = custom (CPT) - cpu_model = cortex-a72 (CPT) - - [glance] - api_servers = http://controller:9292 - - [oslo_concurrency] - lock_path = /var/lib/nova/tmp (CTL) - - [placement] - region_name = RegionOne - project_domain_name = Default - project_name = service - auth_type = password - user_domain_name = Default - auth_url = http://controller:5000/v3 - username = placement - password = PLACEMENT_PASS - - [neutron] - auth_url = http://controller:5000 - auth_type = password - project_domain_name = default - user_domain_name = default - region_name = RegionOne - project_name = service - username = neutron - password = NEUTRON_PASS - service_metadata_proxy = true (CTL) - metadata_proxy_shared_secret = METADATA_SECRET (CTL) - ``` - - ***解释*** - - [default]部分,启用计算和元数据的API,配置RabbitMQ消息队列入口,配置my_ip,启用网络服务neutron; - - [api_database] [database]部分,配置数据库入口; - - [api] [keystone_authtoken]部分,配置身份认证服务入口; - - [vnc]部分,启用并配置远程控制台入口; - - [glance]部分,配置镜像服务API的地址; - - [oslo_concurrency]部分,配置lock path; - - [placement]部分,配置placement服务的入口。 - - ***注意*** - - **替换 `RABBIT_PASS` 为 RabbitMQ 中 openstack 账户的密码;** - - **配置 `my_ip` 为控制节点的管理IP地址;** - - **替换 `NOVA_DBPASS` 为nova数据库的密码;** - - **替换 `NOVA_PASS` 为nova用户的密码;** - - **替换 `PLACEMENT_PASS` 为placement用户的密码;** - - **替换 `NEUTRON_PASS` 为neutron用户的密码;** - - **替换`METADATA_SECRET`为合适的元数据代理secret。** - - **额外** - - 确定是否支持虚拟机硬件加速(x86架构): - - ```shell - egrep -c '(vmx|svm)' /proc/cpuinfo (CPT) - ``` - - 如果返回值为0则不支持硬件加速,需要配置libvirt使用QEMU而不是KVM: - - ```shell - vim /etc/nova/nova.conf (CPT) - - [libvirt] - virt_type = qemu - ``` - - 如果返回值为1或更大的值,则支持硬件加速,不需要进行额外的配置 - - ***注意*** - - **如果为arm64结构,还需要执行以下命令** - - ```shell - vim /etc/libvirt/qemu.conf - - nvram = ["/usr/share/AAVMF/AAVMF_CODE.fd: \ - /usr/share/AAVMF/AAVMF_VARS.fd", \ - "/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw: \ - /usr/share/edk2/aarch64/vars-template-pflash.raw"] - - vim /etc/qemu/firmware/edk2-aarch64.json - - { - "description": "UEFI firmware for ARM64 virtual machines", - "interface-types": [ - "uefi" - ], - "mapping": { - "device": "flash", - "executable": { - "filename": "/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw", - "format": "raw" - }, - "nvram-template": { - "filename": "/usr/share/edk2/aarch64/vars-template-pflash.raw", - "format": "raw" - } - }, - "targets": [ - { - "architecture": "aarch64", - "machines": [ - "virt-*" - ] - } - ], - "features": [ - - ], - "tags": [ - - ] - } - - (CPT) - ``` - -4. 
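在执行同步之前,可以先用 nova 账户验证数据库连通性(示例,将 `NOVA_DBPASS` 替换为实际密码):

```shell
mysql -u nova -pNOVA_DBPASS -h controller -e "SHOW DATABASES;" (CTL)
```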
同步数据库 - - 同步nova-api数据库: - - ```shell - su -s /bin/sh -c "nova-manage api_db sync" nova (CTL) - ``` - - 注册cell0数据库: - - ```shell - su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova (CTL) - ``` - - 创建cell1 cell: - - ```shell - su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova (CTL) - ``` - - 同步nova数据库: - - ```shell - su -s /bin/sh -c "nova-manage db sync" nova (CTL) - ``` - - 验证cell0和cell1注册正确: - - ```shell - su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova (CTL) - ``` - - 添加计算节点到openstack集群 - - ```shell - su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova (CPT) - ``` - -5. 启动服务 - - ```shell - systemctl enable \ (CTL) - openstack-nova-api.service \ - openstack-nova-scheduler.service \ - openstack-nova-conductor.service \ - openstack-nova-novncproxy.service - - systemctl start \ (CTL) - openstack-nova-api.service \ - openstack-nova-scheduler.service \ - openstack-nova-conductor.service \ - openstack-nova-novncproxy.service - ``` - - ```shell - systemctl enable libvirtd.service openstack-nova-compute.service (CPT) - systemctl start libvirtd.service openstack-nova-compute.service (CPT) - ``` - -6. 验证 - - ```shell - source ~/.admin-openrc (CTL) - ``` - - 列出服务组件,验证每个流程都成功启动和注册: - - ```shell - openstack compute service list (CTL) - ``` - - 列出身份服务中的API端点,验证与身份服务的连接: - - ```shell - openstack catalog list (CTL) - ``` - - 列出镜像服务中的镜像,验证与镜像服务的连接: - - ```shell - openstack image list (CTL) - ``` - - 检查cells是否运作成功,以及其他必要条件是否已具备。 - - ```shell - nova-status upgrade check (CTL) - ``` - -### Neutron 安装 - -1. 创建数据库、服务凭证和 API 端点 - - 创建数据库: - - ```sql - mysql -u root -p (CTL) - - MariaDB [(none)]> CREATE DATABASE neutron; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \ - IDENTIFIED BY 'NEUTRON_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \ - IDENTIFIED BY 'NEUTRON_DBPASS'; - MariaDB [(none)]> exit - ``` - - ***注意*** - - **替换 `NEUTRON_DBPASS` 为 neutron 数据库设置密码。** - - ```shell - source ~/.admin-openrc (CTL) - ``` - - 创建neutron服务凭证 - - ```shell - openstack user create --domain default --password-prompt neutron (CTL) - openstack role add --project service --user neutron admin (CTL) - openstack service create --name neutron --description "OpenStack Networking" network (CTL) - ``` - - 创建Neutron服务API端点: - - ```shell - openstack endpoint create --region RegionOne network public http://controller:9696 (CTL) - openstack endpoint create --region RegionOne network internal http://controller:9696 (CTL) - openstack endpoint create --region RegionOne network admin http://controller:9696 (CTL) - ``` - -2. 安装软件包: - - ```shell - yum install openstack-neutron openstack-neutron-linuxbridge ebtables ipset \ (CTL) - openstack-neutron-ml2 - ``` - - ```shell - yum install openstack-neutron-linuxbridge ebtables ipset (CPT) - ``` - -3. 
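在开始配置之前,可以先确认各节点的物理网卡名称,后文 linuxbridge 配置中的 `PROVIDER_INTERFACE_NAME` 需要替换为实际的网卡名(以下命令仅为示例):

```shell
ip -o link show
```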
配置neutron相关配置: - - 配置主体配置 - - ```shell - vim /etc/neutron/neutron.conf - - [database] - connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron (CTL) - - [DEFAULT] - core_plugin = ml2 (CTL) - service_plugins = router (CTL) - allow_overlapping_ips = true (CTL) - transport_url = rabbit://openstack:RABBIT_PASS@controller - auth_strategy = keystone - notify_nova_on_port_status_changes = true (CTL) - notify_nova_on_port_data_changes = true (CTL) - api_workers = 3 (CTL) - - [keystone_authtoken] - www_authenticate_uri = http://controller:5000 - auth_url = http://controller:5000 - memcached_servers = controller:11211 - auth_type = password - project_domain_name = Default - user_domain_name = Default - project_name = service - username = neutron - password = NEUTRON_PASS - - [nova] - auth_url = http://controller:5000 (CTL) - auth_type = password (CTL) - project_domain_name = Default (CTL) - user_domain_name = Default (CTL) - region_name = RegionOne (CTL) - project_name = service (CTL) - username = nova (CTL) - password = NOVA_PASS (CTL) - - [oslo_concurrency] - lock_path = /var/lib/neutron/tmp - ``` - - ***解释*** - - [database]部分,配置数据库入口; - - [default]部分,启用ml2插件和router插件,允许ip地址重叠,配置RabbitMQ消息队列入口; - - [default] [keystone]部分,配置身份认证服务入口; - - [default] [nova]部分,配置网络来通知计算网络拓扑的变化; - - [oslo_concurrency]部分,配置lock path。 - - ***注意*** - - **替换`NEUTRON_DBPASS`为 neutron 数据库的密码;** - - **替换`RABBIT_PASS`为 RabbitMQ中openstack 账户的密码;** - - **替换`NEUTRON_PASS`为 neutron 用户的密码;** - - **替换`NOVA_PASS`为 nova 用户的密码。** - - 配置ML2插件: - - ```shell - vim /etc/neutron/plugins/ml2/ml2_conf.ini - - [ml2] - type_drivers = flat,vlan,vxlan - tenant_network_types = vxlan - mechanism_drivers = linuxbridge,l2population - extension_drivers = port_security - - [ml2_type_flat] - flat_networks = provider - - [ml2_type_vxlan] - vni_ranges = 1:1000 - - [securitygroup] - enable_ipset = true - ``` - - 创建/etc/neutron/plugin.ini的符号链接 - - ```shell - ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini - ``` - - **注意** - - **[ml2]部分,启用 flat、vlan、vxlan 网络,启用 linuxbridge 及 l2population 机制,启用端口安全扩展驱动;** - - **[ml2_type_flat]部分,配置 flat 网络为 provider 虚拟网络;** - - **[ml2_type_vxlan]部分,配置 VXLAN 网络标识符范围;** - - **[securitygroup]部分,配置允许 ipset。** - - **补充** - - **l2 的具体配置可以根据用户需求自行修改,本文使用的是provider network + linuxbridge** - - 配置 Linux bridge 代理: - - ```shell - vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini - - [linux_bridge] - physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME - - [vxlan] - enable_vxlan = true - local_ip = OVERLAY_INTERFACE_IP_ADDRESS - l2_population = true - - [securitygroup] - enable_security_group = true - firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver - ``` - - ***解释*** - - [linux_bridge]部分,映射 provider 虚拟网络到物理网络接口; - - [vxlan]部分,启用 vxlan 覆盖网络,配置处理覆盖网络的物理网络接口 IP 地址,启用 layer-2 population; - - [securitygroup]部分,允许安全组,配置 linux bridge iptables 防火墙驱动。 - - ***注意*** - - **替换`PROVIDER_INTERFACE_NAME`为物理网络接口;** - - **替换`OVERLAY_INTERFACE_IP_ADDRESS`为控制节点的管理IP地址。** - - 配置Layer-3代理: - - ```shell - vim /etc/neutron/l3_agent.ini (CTL) - - [DEFAULT] - interface_driver = linuxbridge - ``` - - ***解释*** - - 在[default]部分,配置接口驱动为linuxbridge - - 配置DHCP代理: - - ```shell - vim /etc/neutron/dhcp_agent.ini (CTL) - - [DEFAULT] - interface_driver = linuxbridge - dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq - enable_isolated_metadata = true - ``` - - ***解释*** - - [default]部分,配置linuxbridge接口驱动、Dnsmasq DHCP驱动,启用隔离的元数据。 - - 配置metadata代理: - - ```shell - vim /etc/neutron/metadata_agent.ini (CTL) 
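# 注意:以下配置仅在控制节点 (CTL) 执行;
# METADATA_SECRET 需与 nova.conf 中 [neutron] 段的 metadata_proxy_shared_secret 保持一致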
- - [DEFAULT] - nova_metadata_host = controller - metadata_proxy_shared_secret = METADATA_SECRET - ``` - - ***解释*** - - [default]部分,配置元数据主机和shared secret。 - - ***注意*** - - **替换`METADATA_SECRET`为合适的元数据代理secret。** - -4. 配置nova相关配置 - - ```shell - vim /etc/nova/nova.conf - - [neutron] - auth_url = http://controller:5000 - auth_type = password - project_domain_name = Default - user_domain_name = Default - region_name = RegionOne - project_name = service - username = neutron - password = NEUTRON_PASS - service_metadata_proxy = true (CTL) - metadata_proxy_shared_secret = METADATA_SECRET (CTL) - ``` - - ***解释*** - - [neutron]部分,配置访问参数,启用元数据代理,配置secret。 - - ***注意*** - - **替换`NEUTRON_PASS`为 neutron 用户的密码;** - - **替换`METADATA_SECRET`为合适的元数据代理secret。** - -5. 同步数据库: - - ```shell - su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \ - --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron - ``` - -6. 重启计算API服务: - - ```shell - systemctl restart openstack-nova-api.service - ``` - -7. 启动网络服务 - - ```shell - systemctl enable neutron-server.service neutron-linuxbridge-agent.service \ (CTL) - neutron-dhcp-agent.service neutron-metadata-agent.service \ - systemctl enable neutron-l3-agent.service - systemctl restart openstack-nova-api.service neutron-server.service (CTL) - neutron-linuxbridge-agent.service neutron-dhcp-agent.service \ - neutron-metadata-agent.service neutron-l3-agent.service - - systemctl enable neutron-linuxbridge-agent.service (CPT) - systemctl restart neutron-linuxbridge-agent.service openstack-nova-compute.service (CPT) - ``` - -8. 验证 - - 验证 neutron 代理启动成功: - - ```shell - openstack network agent list - ``` - -### Cinder 安装 - -1. 创建数据库、服务凭证和 API 端点 - - 创建数据库: - - ```sql - mysql -u root -p - - MariaDB [(none)]> CREATE DATABASE cinder; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \ - IDENTIFIED BY 'CINDER_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \ - IDENTIFIED BY 'CINDER_DBPASS'; - MariaDB [(none)]> exit - ``` - - ***注意*** - - **替换 `CINDER_DBPASS` 为cinder数据库设置密码。** - - ```shell - source ~/.admin-openrc - ``` - - 创建cinder服务凭证: - - ```shell - openstack user create --domain default --password-prompt cinder - openstack role add --project service --user cinder admin - openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2 - openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3 - ``` - - 创建块存储服务API端点: - - ```shell - openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(project_id\)s - openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(project_id\)s - openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(project_id\)s - openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s - openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s - openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s - ``` - -2. 安装软件包: - - ```shell - yum install openstack-cinder-api openstack-cinder-scheduler (CTL) - ``` - - ```shell - yum install lvm2 device-mapper-persistent-data scsi-target-utils rpcbind nfs-utils \ (STG) - openstack-cinder-volume openstack-cinder-backup - ``` - -3. 
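在准备存储设备之前,可以先在存储节点上确认可用的块设备,下文中的 /dev/vdb 仅为示意,请根据实际环境替换(示例):

```shell
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT (STG)
```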
准备存储设备,以下仅为示例: - - ```shell - pvcreate /dev/vdb - vgcreate cinder-volumes /dev/vdb - - vim /etc/lvm/lvm.conf - - - devices { - ... - filter = [ "a/vdb/", "r/.*/"] - ``` - - ***解释*** - - 在devices部分,添加过滤以接受/dev/vdb设备拒绝其他设备。 - -4. 准备NFS - - ```shell - mkdir -p /root/cinder/backup - - cat << EOF >> /etc/export - /root/cinder/backup 192.168.1.0/24(rw,sync,no_root_squash,no_all_squash) - EOF - - ``` - -5. 配置cinder相关配置: - - ```shell - vim /etc/cinder/cinder.conf - - [DEFAULT] - transport_url = rabbit://openstack:RABBIT_PASS@controller - auth_strategy = keystone - my_ip = 10.0.0.11 - enabled_backends = lvm (STG) - backup_driver=cinder.backup.drivers.nfs.NFSBackupDriver (STG) - backup_share=HOST:PATH (STG) - - [database] - connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder - - [keystone_authtoken] - www_authenticate_uri = http://controller:5000 - auth_url = http://controller:5000 - memcached_servers = controller:11211 - auth_type = password - project_domain_name = Default - user_domain_name = Default - project_name = service - username = cinder - password = CINDER_PASS - - [oslo_concurrency] - lock_path = /var/lib/cinder/tmp - - [lvm] - volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver (STG) - volume_group = cinder-volumes (STG) - iscsi_protocol = iscsi (STG) - iscsi_helper = tgtadm (STG) - ``` - - ***解释*** - - [database]部分,配置数据库入口; - - [DEFAULT]部分,配置RabbitMQ消息队列入口,配置my_ip; - - [DEFAULT] [keystone_authtoken]部分,配置身份认证服务入口; - - [oslo_concurrency]部分,配置lock path。 - - ***注意*** - - **替换`CINDER_DBPASS`为 cinder 数据库的密码;** - - **替换`RABBIT_PASS`为 RabbitMQ 中 openstack 账户的密码;** - - **配置`my_ip`为控制节点的管理 IP 地址;** - - **替换`CINDER_PASS`为 cinder 用户的密码;** - - **替换`HOST:PATH`为 NFS的HOSTIP和共享路径 用户的密码;** - -6. 同步数据库: - - ```shell - su -s /bin/sh -c "cinder-manage db sync" cinder (CTL) - ``` - -7. 配置nova: - - ```shell - vim /etc/nova/nova.conf (CTL) - - [cinder] - os_region_name = RegionOne - ``` - -8. 重启计算API服务 - - ```shell - systemctl restart openstack-nova-api.service - ``` - -9. 启动cinder服务 - - ```shell - systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service (CTL) - systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service (CTL) - ``` - - ```shell - systemctl enable rpcbind.service nfs-server.service tgtd.service iscsid.service \ (STG) - openstack-cinder-volume.service \ - openstack-cinder-backup.service - systemctl start rpcbind.service nfs-server.service tgtd.service iscsid.service \ (STG) - openstack-cinder-volume.service \ - openstack-cinder-backup.service - ``` - - ***注意*** - - 当cinder使用tgtadm的方式挂卷的时候,要修改/etc/tgt/tgtd.conf,内容如下,保证tgtd可以发现cinder-volume的iscsi target。 - - ```shell - include /var/lib/cinder/volumes/* - ``` - -10. 验证 - - ```shell - source ~/.admin-openrc - openstack volume service list - ``` - -### horizon 安装 - -1. 安装软件包 - - ```shell - yum install openstack-dashboard - ``` - -2. 修改文件 - - 修改变量 - - ```text - vim /etc/openstack-dashboard/local_settings - - OPENSTACK_HOST = "controller" - ALLOWED_HOSTS = ['*', ] - - SESSION_ENGINE = 'django.contrib.sessions.backends.cache' - - CACHES = { - 'default': { - 'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache', - 'LOCATION': 'controller:11211', - } - } - - OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST - OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True - OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default" - OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user" - - OPENSTACK_API_VERSIONS = { - "identity": 3, - "image": 2, - "volume": 3, - } - ``` - -3. 
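修改完成后,可以用如下命令快速确认关键配置项是否已按本环境替换(仅为示例):

```shell
grep -E "OPENSTACK_HOST|ALLOWED_HOSTS|OPENSTACK_KEYSTONE_URL" /etc/openstack-dashboard/local_settings
```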
重启 httpd 服务 - - ```shell - systemctl restart httpd.service memcached.service - ``` - -4. 验证 - 打开浏览器,输入网址,登录 horizon。 - - ***注意*** - - **替换HOSTIP为控制节点管理平面IP地址** - -### Tempest 安装 - -Tempest是OpenStack的集成测试服务,如果用户需要全面自动化测试已安装的OpenStack环境的功能,则推荐使用该组件。否则,可以不用安装。 - -1. 安装Tempest - - ```shell - yum install openstack-tempest - ``` - -2. 初始化目录 - - ```shell - tempest init mytest - ``` - -3. 修改配置文件。 - - ```shell - cd mytest - vi etc/tempest.conf - ``` - - tempest.conf中需要配置当前OpenStack环境的信息,具体内容可以参考[官方示例](https://docs.openstack.org/tempest/latest/sampleconf.html) - -4. 执行测试 - - ```shell - tempest run - ``` - -5. 安装tempest扩展(可选) - OpenStack各个服务本身也提供了一些tempest测试包,用户可以安装这些包来丰富tempest的测试内容。在Wallaby中,我们提供了Cinder、Glance、Keystone、Ironic、Trove的扩展测试,用户可以执行如下命令进行安装使用: - ``` - yum install python3-cinder-tempest-plugin python3-glance-tempest-plugin python3-ironic-tempest-plugin python3-keystone-tempest-plugin python3-trove-tempest-plugin - ``` - -### Ironic 安装 - -Ironic是OpenStack的裸金属服务,如果用户需要进行裸机部署则推荐使用该组件。否则,可以不用安装。 - -1. 设置数据库 - - 裸金属服务在数据库中存储信息,创建一个**ironic**用户可以访问的**ironic**数据库,替换**IRONIC_DBPASSWORD**为合适的密码 - - ```sql - mysql -u root -p - - MariaDB [(none)]> CREATE DATABASE ironic CHARACTER SET utf8; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'localhost' \ - IDENTIFIED BY 'IRONIC_DBPASSWORD'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'%' \ - IDENTIFIED BY 'IRONIC_DBPASSWORD'; - ``` - -2. 创建服务用户认证 - - 1、创建Bare Metal服务用户 - - ```shell - openstack user create --password IRONIC_PASSWORD \ - --email ironic@example.com ironic - openstack role add --project service --user ironic admin - openstack service create --name ironic - --description "Ironic baremetal provisioning service" baremetal - - openstack service create --name ironic-inspector --description "Ironic inspector baremetal provisioning service" baremetal-introspection - openstack user create --password IRONIC_INSPECTOR_PASSWORD --email ironic_inspector@example.com ironic_inspector - openstack role add --project service --user ironic-inspector admin - ``` - - 2、创建Bare Metal服务访问入口 - - ```shell - openstack endpoint create --region RegionOne baremetal admin http://$IRONIC_NODE:6385 - openstack endpoint create --region RegionOne baremetal public http://$IRONIC_NODE:6385 - openstack endpoint create --region RegionOne baremetal internal http://$IRONIC_NODE:6385 - openstack endpoint create --region RegionOne baremetal-introspection internal http://172.20.19.13:5050/v1 - openstack endpoint create --region RegionOne baremetal-introspection public http://172.20.19.13:5050/v1 - openstack endpoint create --region RegionOne baremetal-introspection admin http://172.20.19.13:5050/v1 - ``` - -3. 配置ironic-api服务 - - 配置文件路径/etc/ironic/ironic.conf - - 1、通过**connection**选项配置数据库的位置,如下所示,替换**IRONIC_DBPASSWORD**为**ironic**用户的密码,替换**DB_IP**为DB服务器所在的IP地址: - - ```shell - [database] - - # The SQLAlchemy connection string used to connect to the - # database (string value) - - connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic - ``` - - 2、通过以下选项配置ironic-api服务使用RabbitMQ消息代理,替换**RPC_\***为RabbitMQ的详细地址和凭证 - - ```shell - [DEFAULT] - - # A URL representing the messaging driver to use and its full - # configuration. 
(string value) - - transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/ - ``` - - 用户也可自行使用json-rpc方式替换rabbitmq - - 3、配置ironic-api服务使用身份认证服务的凭证,替换**PUBLIC_IDENTITY_IP**为身份认证服务器的公共IP,替换**PRIVATE_IDENTITY_IP**为身份认证服务器的私有IP,替换**IRONIC_PASSWORD**为身份认证服务中**ironic**用户的密码: - - ```shell - [DEFAULT] - - # Authentication strategy used by ironic-api: one of - # "keystone" or "noauth". "noauth" should not be used in a - # production environment because all authentication will be - # disabled. (string value) - - auth_strategy=keystone - host = controller - memcache_servers = controller:11211 - enabled_network_interfaces = flat,noop,neutron - default_network_interface = noop - transport_url = rabbit://openstack:RABBITPASSWD@controller:5672/ - enabled_hardware_types = ipmi - enabled_boot_interfaces = pxe - enabled_deploy_interfaces = direct - default_deploy_interface = direct - enabled_inspect_interfaces = inspector - enabled_management_interfaces = ipmitool - enabled_power_interfaces = ipmitool - enabled_rescue_interfaces = no-rescue,agent - isolinux_bin = /usr/share/syslinux/isolinux.bin - logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s - - [keystone_authtoken] - # Authentication type to load (string value) - auth_type=password - # Complete public Identity API endpoint (string value) - www_authenticate_uri=http://PUBLIC_IDENTITY_IP:5000 - # Complete admin Identity API endpoint. (string value) - auth_url=http://PRIVATE_IDENTITY_IP:5000 - # Service username. (string value) - username=ironic - # Service account password. (string value) - password=IRONIC_PASSWORD - # Service tenant name. (string value) - project_name=service - # Domain name containing project (string value) - project_domain_name=Default - # User's domain name (string value) - user_domain_name=Default - - [agent] - deploy_logs_collect = always - deploy_logs_local_path = /var/log/ironic/deploy - deploy_logs_storage_backend = local - image_download_source = http - stream_raw_images = false - force_raw_images = false - verify_ca = False - - [oslo_concurrency] - - [oslo_messaging_notifications] - transport_url = rabbit://openstack:123456@172.20.19.25:5672/ - topics = notifications - driver = messagingv2 - - [oslo_messaging_rabbit] - amqp_durable_queues = True - rabbit_ha_queues = True - - [pxe] - ipxe_enabled = false - pxe_append_params = nofb nomodeset vga=normal coreos.autologin ipa-insecure=1 - image_cache_size = 204800 - tftp_root=/var/lib/tftpboot/cephfs/ - tftp_master_path=/var/lib/tftpboot/cephfs/master_images - - [dhcp] - dhcp_provider = none - ``` - - 4、创建裸金属服务数据库表 - - ```shell - ironic-dbsync --config-file /etc/ironic/ironic.conf create_schema - ``` - - 5、重启ironic-api服务 - - ```shell - sudo systemctl restart openstack-ironic-api - ``` - -4. 配置ironic-conductor服务 - - 1、替换**HOST_IP**为conductor host的IP - - ```shell - [DEFAULT] - - # IP address of this host. If unset, will determine the IP - # programmatically. If unable to do so, will use "127.0.0.1". - # (string value) - - my_ip=HOST_IP - ``` - - 2、配置数据库的位置,ironic-conductor应该使用和ironic-api相同的配置。替换**IRONIC_DBPASSWORD**为**ironic**用户的密码,替换DB_IP为DB服务器所在的IP地址: - - ```shell - [database] - - # The SQLAlchemy connection string to use to connect to the - # database. 
(string value) - - connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic - ``` - - 3、通过以下选项配置ironic-api服务使用RabbitMQ消息代理,ironic-conductor应该使用和ironic-api相同的配置,替换**RPC_\***为RabbitMQ的详细地址和凭证 - - ```shell - [DEFAULT] - - # A URL representing the messaging driver to use and its full - # configuration. (string value) - - transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/ - ``` - - 用户也可自行使用json-rpc方式替换rabbitmq - - 4、配置凭证访问其他OpenStack服务 - - 为了与其他OpenStack服务进行通信,裸金属服务在请求其他服务时需要使用服务用户与OpenStack Identity服务进行认证。这些用户的凭据必须在与相应服务相关的每个配置文件中进行配置。 - - ```shell - [neutron] - 访问OpenStack网络服务 - [glance] - 访问OpenStack镜像服务 - [swift] - 访问OpenStack对象存储服务 - [cinder] - 访问OpenStack块存储服务 - [inspector] - 访问OpenStack裸金属introspection服务 - [service_catalog] - 一个特殊项用于保存裸金属服务使用的凭证,该凭证用于发现注册在OpenStack身份认证服务目录中的自己的API URL端点 - ``` - - 简单起见,可以对所有服务使用同一个服务用户。为了向后兼容,该用户应该和ironic-api服务的[keystone_authtoken]所配置的为同一个用户。但这不是必须的,也可以为每个服务创建并配置不同的服务用户。 - - 在下面的示例中,用户访问OpenStack网络服务的身份验证信息配置为: - - ```shell - 网络服务部署在名为RegionOne的身份认证服务域中,仅在服务目录中注册公共端点接口 - - 请求时使用特定的CA SSL证书进行HTTPS连接 - - 与ironic-api服务配置相同的服务用户 - - 动态密码认证插件基于其他选项发现合适的身份认证服务API版本 - ``` - - ```shell - [neutron] - - # Authentication type to load (string value) - auth_type = password - # Authentication URL (string value) - auth_url=https://IDENTITY_IP:5000/ - # Username (string value) - username=ironic - # User's password (string value) - password=IRONIC_PASSWORD - # Project name to scope to (string value) - project_name=service - # Domain ID containing project (string value) - project_domain_id=default - # User's domain id (string value) - user_domain_id=default - # PEM encoded Certificate Authority to use when verifying - # HTTPs connections. (string value) - cafile=/opt/stack/data/ca-bundle.pem - # The default region_name for endpoint URL discovery. (string - # value) - region_name = RegionOne - # List of interfaces, in order of preference, for endpoint - # URL. (list value) - valid_interfaces=public - ``` - - 默认情况下,为了与其他服务进行通信,裸金属服务会尝试通过身份认证服务的服务目录发现该服务合适的端点。如果希望对一个特定服务使用一个不同的端点,则在裸金属服务的配置文件中通过endpoint_override选项进行指定: - - ```shell - [neutron] ... endpoint_override = - ``` - - 5、配置允许的驱动程序和硬件类型 - - 通过设置enabled_hardware_types设置ironic-conductor服务允许使用的硬件类型: - - ```shell - [DEFAULT] enabled_hardware_types = ipmi - ``` - - 配置硬件接口: - - ```shell - enabled_boot_interfaces = pxe enabled_deploy_interfaces = direct,iscsi enabled_inspect_interfaces = inspector enabled_management_interfaces = ipmitool enabled_power_interfaces = ipmitool - ``` - - 配置接口默认值: - - ```shell - [DEFAULT] default_deploy_interface = direct default_network_interface = neutron - ``` - - 如果启用了任何使用Direct deploy的驱动,必须安装和配置镜像服务的Swift后端。Ceph对象网关(RADOS网关)也支持作为镜像服务的后端。 - - 6、重启ironic-conductor服务 - - ```shell - sudo systemctl restart openstack-ironic-conductor - ``` - -5. 
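conductor 重启后,可以通过如下命令确认其已注册且 ipmi 硬件类型可用(示例,需已安装 python3-ironicclient 并导入 admin 环境变量):

```shell
source ~/.admin-openrc
openstack baremetal driver list
```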
配置ironic-inspector服务 - - 配置文件路径/etc/ironic-inspector/inspector.conf - - 1、创建数据库 - - ```shell - # mysql -u root -p - - MariaDB [(none)]> CREATE DATABASE ironic_inspector CHARACTER SET utf8; - - MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic_inspector.* TO 'ironic_inspector'@'localhost' \ IDENTIFIED BY 'IRONIC_INSPECTOR_DBPASSWORD'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic_inspector.* TO 'ironic_inspector'@'%' \ - IDENTIFIED BY 'IRONIC_INSPECTOR_DBPASSWORD'; - ``` - - 2、通过**connection**选项配置数据库的位置,如下所示,替换**IRONIC_INSPECTOR_DBPASSWORD**为**ironic_inspector**用户的密码,替换**DB_IP**为DB服务器所在的IP地址: - - ```shell - [database] - backend = sqlalchemy - connection = mysql+pymysql://ironic_inspector:IRONIC_INSPECTOR_DBPASSWORD@DB_IP/ironic_inspector - min_pool_size = 100 - max_pool_size = 500 - pool_timeout = 30 - max_retries = 5 - max_overflow = 200 - db_retry_interval = 2 - db_inc_retry_interval = True - db_max_retry_interval = 2 - db_max_retries = 5 - ``` - - 3、配置消息度列通信地址 - - ```shell - [DEFAULT] - transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/ - - ``` - - 4、设置keystone认证 - - ```shell - [DEFAULT] - - auth_strategy = keystone - timeout = 900 - rootwrap_config = /etc/ironic-inspector/rootwrap.conf - logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s - log_dir = /var/log/ironic-inspector - state_path = /var/lib/ironic-inspector - use_stderr = False - - [ironic] - api_endpoint = http://IRONIC_API_HOST_ADDRRESS:6385 - auth_type = password - auth_url = http://PUBLIC_IDENTITY_IP:5000 - auth_strategy = keystone - ironic_url = http://IRONIC_API_HOST_ADDRRESS:6385 - os_region = RegionOne - project_name = service - project_domain_name = Default - user_domain_name = Default - username = IRONIC_SERVICE_USER_NAME - password = IRONIC_SERVICE_USER_PASSWORD - - [keystone_authtoken] - auth_type = password - auth_url = http://control:5000 - www_authenticate_uri = http://control:5000 - project_domain_name = default - user_domain_name = default - project_name = service - username = ironic_inspector - password = IRONICPASSWD - region_name = RegionOne - memcache_servers = control:11211 - token_cache_time = 300 - - [processing] - add_ports = active - processing_hooks = $default_processing_hooks,local_link_connection,lldp_basic - ramdisk_logs_dir = /var/log/ironic-inspector/ramdisk - always_store_ramdisk_logs = true - store_data =none - power_off = false - - [pxe_filter] - driver = iptables - - [capabilities] - boot_mode=True - ``` - - 5、配置ironic inspector dnsmasq服务 - - ```shell - # 配置文件地址:/etc/ironic-inspector/dnsmasq.conf - port=0 - interface=enp3s0 #替换为实际监听网络接口 - dhcp-range=172.20.19.100,172.20.19.110 #替换为实际dhcp地址范围 - bind-interfaces - enable-tftp - - dhcp-match=set:efi,option:client-arch,7 - dhcp-match=set:efi,option:client-arch,9 - dhcp-match=aarch64, option:client-arch,11 - dhcp-boot=tag:aarch64,grubaa64.efi - dhcp-boot=tag:!aarch64,tag:efi,grubx64.efi - dhcp-boot=tag:!aarch64,tag:!efi,pxelinux.0 - - tftp-root=/tftpboot #替换为实际tftpboot目录 - log-facility=/var/log/dnsmasq.log - ``` - - 6、关闭ironic provision网络子网的dhcp - - ``` - openstack subnet set --no-dhcp 72426e89-f552-4dc4-9ac7-c4e131ce7f3c - ``` - - 7、初始化ironic-inspector服务的数据库 - - 在控制节点执行: - - ``` - ironic-inspector-dbsync --config-file /etc/ironic-inspector/inspector.conf upgrade - ``` - - 8、启动服务 - - ```shell - systemctl enable --now openstack-ironic-inspector.service - systemctl enable --now 
openstack-ironic-inspector-dnsmasq.service - ``` - -6. 配置httpd服务 - - 1. 创建ironic要使用的httpd的root目录并设置属主属组,目录路径要和/etc/ironic/ironic.conf中[deploy]组中http_root 配置项指定的路径要一致。 - - ``` - mkdir -p /var/lib/ironic/httproot ``chown ironic.ironic /var/lib/ironic/httproot - ``` - - - - 2. 安装和配置httpd服务 - - - - 1. 安装httpd服务,已有请忽略 - - ``` - yum install httpd -y - ``` - - - - 2. 创建/etc/httpd/conf.d/openstack-ironic-httpd.conf文件,内容如下: - - ``` - Listen 8080 - - - ServerName ironic.openeuler.com - - ErrorLog "/var/log/httpd/openstack-ironic-httpd-error_log" - CustomLog "/var/log/httpd/openstack-ironic-httpd-access_log" "%h %l %u %t \"%r\" %>s %b" - - DocumentRoot "/var/lib/ironic/httproot" - - Options Indexes FollowSymLinks - Require all granted - - LogLevel warn - AddDefaultCharset UTF-8 - EnableSendfile on - - - ``` - - 注意监听的端口要和/etc/ironic/ironic.conf里[deploy]选项中http_url配置项中指定的端口一致。 - - 3. 重启httpd服务。 - - ``` - systemctl restart httpd - ``` - - - -7. deploy ramdisk镜像制作 - - W版的ramdisk镜像支持通过ironic-python-agent服务或disk-image-builder工具制作,也可以使用社区最新的ironic-python-agent-builder。用户也可以自行选择其他工具制作。 - 若使用W版原生工具,则需要安装对应的软件包。 - - ```shell - yum install openstack-ironic-python-agent - 或者 - yum install diskimage-builder - ``` - - 具体的使用方法可以参考[官方文档](https://docs.openstack.org/ironic/queens/install/deploy-ramdisk.html) - - 这里介绍下使用ironic-python-agent-builder构建ironic使用的deploy镜像的完整过程。 - - 1. 安装 ironic-python-agent-builder - - - 1. 安装工具: - - ```shell - pip install ironic-python-agent-builder - ``` - - 2. 修改以下文件中的python解释器: - - ```shell - /usr/bin/yum /usr/libexec/urlgrabber-ext-down - ``` - - 3. 安装其它必须的工具: - - ```shell - yum install git - ``` - - 由于`DIB`依赖`semanage`命令,所以在制作镜像之前确定该命令是否可用:`semanage --help`,如果提示无此命令,安装即可: - - ```shell - # 先查询需要安装哪个包 - [root@localhost ~]# yum provides /usr/sbin/semanage - 已加载插件:fastestmirror - Loading mirror speeds from cached hostfile - * base: mirror.vcu.edu - * extras: mirror.vcu.edu - * updates: mirror.math.princeton.edu - policycoreutils-python-2.5-34.el7.aarch64 : SELinux policy core python utilities - 源 :base - 匹配来源: - 文件名 :/usr/sbin/semanage - # 安装 - [root@localhost ~]# yum install policycoreutils-python - ``` - - 2. 制作镜像 - - 如果是`arm`架构,需要添加: - ```shell - export ARCH=aarch64 - ``` - - 基本用法: - - ```shell - usage: ironic-python-agent-builder [-h] [-r RELEASE] [-o OUTPUT] [-e ELEMENT] - [-b BRANCH] [-v] [--extra-args EXTRA_ARGS] - distribution - - positional arguments: - distribution Distribution to use - - optional arguments: - -h, --help show this help message and exit - -r RELEASE, --release RELEASE - Distribution release to use - -o OUTPUT, --output OUTPUT - Output base file name - -e ELEMENT, --element ELEMENT - Additional DIB element to use - -b BRANCH, --branch BRANCH - If set, override the branch that is used for ironic- - python-agent and requirements - -v, --verbose Enable verbose logging in diskimage-builder - --extra-args EXTRA_ARGS - Extra arguments to pass to diskimage-builder - ``` - - 举例说明: - - ```shell - ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky - ``` - - 3. 允许ssh登陆 - - 初始化环境变量,然后制作镜像: - - ```shell - export DIB_DEV_USER_USERNAME=ipa \ - export DIB_DEV_USER_PWDLESS_SUDO=yes \ - export DIB_DEV_USER_PASSWORD='123' - ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky -e selinux-permissive -e devuser - ``` - - 4. 
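镜像制作完成后,通常会在指定的输出路径生成 kernel 和 initramfs 两个文件,可以按如下方式确认(示例):

```shell
ls -lh /mnt/ironic-agent-ssh.kernel /mnt/ironic-agent-ssh.initramfs
```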
指定代码仓库 - - 初始化对应的环境变量,然后制作镜像: - - ```shell - # 指定仓库地址以及版本 - DIB_REPOLOCATION_ironic_python_agent=git@172.20.2.149:liuzz/ironic-python-agent.git - DIB_REPOREF_ironic_python_agent=origin/develop - - # 直接从gerrit上clone代码 - DIB_REPOLOCATION_ironic_python_agent=https://review.opendev.org/openstack/ironic-python-agent - DIB_REPOREF_ironic_python_agent=refs/changes/43/701043/1 - ``` - - 参考:[source-repositories](https://docs.openstack.org/diskimage-builder/latest/elements/source-repositories/README.html)。 - - 指定仓库地址及版本验证成功。 - - 5. 注意 - -原生的openstack里的pxe配置文件的模版不支持arm64架构,需要自己对原生openstack代码进行修改: - -在W版中,社区的ironic仍然不支持arm64位的uefi pxe启动,表现为生成的grub.cfg文件(一般位于/tftpboot/下)格式不对而导致pxe启动失败,如下: - -生成的错误配置文件: - -![erro](/Users/andy_lee/Downloads/erro.png) - -如上图所示,arm架构里寻找vmlinux和ramdisk镜像的命令分别是linux和initrd,上图所示的标红命令是x86架构下的uefi pxe启动。 - -需要用户对生成grub.cfg的代码逻辑自行修改。 - -ironic向ipa发送查询命令执行状态请求的tls报错: - -w版的ipa和ironic默认都会开启tls认证的方式向对方发送请求,跟据官网的说明进行关闭即可。 - -1. 修改ironic配置文件(/etc/ironic/ironic.conf)下面的配置中添加ipa-insecure=1: - -``` -[agent] -verify_ca = False - -[pxe] -pxe_append_params = nofb nomodeset vga=normal coreos.autologin ipa-insecure=1 -``` - -2) ramdisk镜像中添加ipa配置文件/etc/ironic_python_agent/ironic_python_agent.conf并配置tls的配置如下: - -/etc/ironic_python_agent/ironic_python_agent.conf (需要提前创建/etc/ironic_python_agent目录) - -``` -[DEFAULT] -enable_auto_tls = False -``` - -设置权限: - -``` -chown -R ipa.ipa /etc/ironic_python_agent/ -``` - -3. 修改ipa服务的服务启动文件,添加配置文件选项 - - vim usr/lib/systemd/system/ironic-python-agent.service - - ``` - [Unit] - Description=Ironic Python Agent - After=network-online.target - - [Service] - ExecStartPre=/sbin/modprobe vfat - ExecStart=/usr/local/bin/ironic-python-agent --config-file /etc/ironic_python_agent/ironic_python_agent.conf - Restart=always - RestartSec=30s - - [Install] - WantedBy=multi-user.target - ``` - - - -### Kolla 安装 - -Kolla为OpenStack服务提供生产环境可用的容器化部署的功能。openEuler 21.09中引入了Kolla和Kolla-ansible服务。 - -Kolla的安装十分简单,只需要安装对应的RPM包即可 - -``` -yum install openstack-kolla openstack-kolla-ansible -``` - -安装完后,就可以使用`kolla-ansible`, `kolla-build`, `kolla-genpwd`, `kolla-mergepwd`等命令了。 - -### Trove 安装 -Trove是OpenStack的数据库服务,如果用户使用OpenStack提供的数据库服务则推荐使用该组件。否则,可以不用安装。 - -1. 设置数据库 - - 数据库服务在数据库中存储信息,创建一个**trove**用户可以访问的**trove**数据库,替换**TROVE_DBPASSWORD**为合适的密码 - - ```sql - mysql -u root -p - - MariaDB [(none)]> CREATE DATABASE trove CHARACTER SET utf8; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'localhost' \ - IDENTIFIED BY 'TROVE_DBPASSWORD'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'%' \ - IDENTIFIED BY 'TROVE_DBPASSWORD'; - ``` - -2. 创建服务用户认证 - - 1、创建**Trove**服务用户 - - ```shell - openstack user create --password TROVE_PASSWORD \ - --email trove@example.com trove - openstack role add --project service --user trove admin - openstack service create --name trove - --description "Database service" database - ``` - **解释:** `TROVE_PASSWORD` 替换为`trove`用户的密码 - - 2、创建**Database**服务访问入口 - - ```shell - openstack endpoint create --region RegionOne database public http://controller:8779/v1.0/%\(tenant_id\)s - openstack endpoint create --region RegionOne database internal http://controller:8779/v1.0/%\(tenant_id\)s - openstack endpoint create --region RegionOne database admin http://controller:8779/v1.0/%\(tenant_id\)s - ``` - -3. 安装和配置**Trove**各组件 - 1、安装**Trove**包 - ```shell script - yum install openstack-trove python-troveclient - ``` - 2. 
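安装完成后,可以先确认相关软件包已经就绪(以下命令仅为示例):

```shell
rpm -qa | grep -i trove
```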
配置`trove.conf` - ```shell script - vim /etc/trove/trove.conf - - [DEFAULT] - bind_host=TROVE_NODE_IP - log_dir = /var/log/trove - network_driver = trove.network.neutron.NeutronDriver - management_security_groups = - nova_keypair = trove-mgmt - default_datastore = mysql - taskmanager_manager = trove.taskmanager.manager.Manager - trove_api_workers = 5 - transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/ - reboot_time_out = 300 - usage_timeout = 900 - agent_call_high_timeout = 1200 - use_syslog = False - debug = True - - # Set these if using Neutron Networking - network_driver=trove.network.neutron.NeutronDriver - network_label_regex=.* - - - transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/ - - [database] - connection = mysql+pymysql://trove:TROVE_DBPASS@controller/trove - - [keystone_authtoken] - project_domain_name = Default - project_name = service - user_domain_name = Default - password = trove - username = trove - auth_url = http://controller:5000/v3/ - auth_type = password - - [service_credentials] - auth_url = http://controller:5000/v3/ - region_name = RegionOne - project_name = service - password = trove - project_domain_name = Default - user_domain_name = Default - username = trove - - [mariadb] - tcp_ports = 3306,4444,4567,4568 - - [mysql] - tcp_ports = 3306 - - [postgresql] - tcp_ports = 5432 - ``` - **解释:** - - `[Default]`分组中`bind_host`配置为Trove部署节点的IP - - `nova_compute_url` 和 `cinder_url` 为Nova和Cinder在Keystone中创建的endpoint - - `nova_proxy_XXX` 为一个能访问Nova服务的用户信息,上例中使用`admin`用户为例 - - `transport_url` 为`RabbitMQ`连接信息,`RABBIT_PASS`替换为RabbitMQ的密码 - - `[database]`分组中的`connection` 为前面在mysql中为Trove创建的数据库信息 - - Trove的用户信息中`TROVE_PASS`替换为实际trove用户的密码 - - 5. 配置`trove-guestagent.conf` - ```shell script - vim /etc/trove/trove-guestagent.conf - - [DEFAULT] - log_file = trove-guestagent.log - log_dir = /var/log/trove/ - ignore_users = os_admin - control_exchange = trove - transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/ - rpc_backend = rabbit - command_process_timeout = 60 - use_syslog = False - debug = True - - [service_credentials] - auth_url = http://controller:5000/v3/ - region_name = RegionOne - project_name = service - password = TROVE_PASS - project_domain_name = Default - user_domain_name = Default - username = trove - - [mysql] - docker_image = your-registry/your-repo/mysql - backup_docker_image = your-registry/your-repo/db-backup-mysql:1.1.0 - ``` - **解释:** `guestagent`是trove中一个独立组件,需要预先内置到Trove通过Nova创建的虚拟 - 机镜像中,在创建好数据库实例后,会起guestagent进程,负责通过消息队列(RabbitMQ)向Trove上 - 报心跳,因此需要配置RabbitMQ的用户和密码信息。 - **从Victoria版开始,Trove使用一个统一的镜像来跑不同类型的数据库,数据库服务运行在Guest虚拟机的Docker容器中。** - - `transport_url` 为`RabbitMQ`连接信息,`RABBIT_PASS`替换为RabbitMQ的密码 - - Trove的用户信息中`TROVE_PASS`替换为实际trove用户的密码 - - 6. 生成数据`Trove`数据库表 - ```shell script - su -s /bin/sh -c "trove-manage db_sync" trove - ``` -4. 完成安装配置 - 1. 配置**Trove**服务自启动 - ```shell script - systemctl enable openstack-trove-api.service \ - openstack-trove-taskmanager.service \ - openstack-trove-conductor.service - ``` - 2. 启动服务 - ```shell script - systemctl start openstack-trove-api.service \ - openstack-trove-taskmanager.service \ - openstack-trove-conductor.service - ``` -### Swift 安装 - -Swift 提供了弹性可伸缩、高可用的分布式对象存储服务,适合存储大规模非结构化数据。 - -1. 
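以下创建用户和端点的命令需要 admin 权限,执行前请先导入环境变量(示例):

```shell
source ~/.admin-openrc (CTL)
```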
创建服务凭证、API端点。 - - 创建服务凭证 - - ``` shell - #创建swift用户: - openstack user create --domain default --password-prompt swift - #admin为swift用户添加角色: - openstack role add --project service --user swift admin - #创建swift服务实体: - openstack service create --name swift --description "OpenStack Object Storage" object-store - ``` - - 创建swift API 端点: - - ```shell - openstack endpoint create --region RegionOne object-store public http://controller:8080/v1/AUTH_%\(project_id\)s - openstack endpoint create --region RegionOne object-store internal http://controller:8080/v1/AUTH_%\(project_id\)s - openstack endpoint create --region RegionOne object-store admin http://controller:8080/v1 - ``` - - -2. 安装软件包: - - ```shell - yum install openstack-swift-proxy python3-swiftclient python3-keystoneclient python3-keystonemiddleware memcached (CTL) - ``` - -3. 配置proxy-server相关配置 - - Swift RPM包里已经包含了一个基本可用的proxy-server.conf,只需要手动修改其中的ip和swift password即可。 - - ***注意*** - - **注意替换password为您swift在身份服务中为用户选择的密码** - -4. 安装和配置存储节点 (STG) - - 安装支持的程序包: - ```shell - yum install xfsprogs rsync - ``` - - 将/dev/vdb和/dev/vdc设备格式化为 XFS - - ```shell - mkfs.xfs /dev/vdb - mkfs.xfs /dev/vdc - ``` - - 创建挂载点目录结构: - - ```shell - mkdir -p /srv/node/vdb - mkdir -p /srv/node/vdc - ``` - - 找到新分区的 UUID: - - ```shell - blkid - ``` - - 编辑/etc/fstab文件并将以下内容添加到其中: - - ```shell - UUID="" /srv/node/vdb xfs noatime 0 2 - UUID="" /srv/node/vdc xfs noatime 0 2 - ``` - - 挂载设备: - - ```shell - mount /srv/node/vdb - mount /srv/node/vdc - ``` - ***注意*** - - **如果用户不需要容灾功能,以上步骤只需要创建一个设备即可,同时可以跳过下面的rsync配置** - - (可选)创建或编辑/etc/rsyncd.conf文件以包含以下内容: - - ```shell - [DEFAULT] - uid = swift - gid = swift - log file = /var/log/rsyncd.log - pid file = /var/run/rsyncd.pid - address = MANAGEMENT_INTERFACE_IP_ADDRESS - - [account] - max connections = 2 - path = /srv/node/ - read only = False - lock file = /var/lock/account.lock - - [container] - max connections = 2 - path = /srv/node/ - read only = False - lock file = /var/lock/container.lock - - [object] - max connections = 2 - path = /srv/node/ - read only = False - lock file = /var/lock/object.lock - ``` - **替换MANAGEMENT_INTERFACE_IP_ADDRESS为存储节点上管理网络的IP地址** - - 启动rsyncd服务并配置它在系统启动时启动: - - ```shell - systemctl enable rsyncd.service - systemctl start rsyncd.service - ``` - -5. 在存储节点安装和配置组件 (STG) - - 安装软件包: - - ```shell - yum install openstack-swift-account openstack-swift-container openstack-swift-object - ``` - - 编辑/etc/swift目录的account-server.conf、container-server.conf和object-server.conf文件,替换bind_ip为存储节点上管理网络的IP地址。 - - 确保挂载点目录结构的正确所有权: - - ```shell - chown -R swift:swift /srv/node - ``` - - 创建recon目录并确保其拥有正确的所有权: - - ```shell - mkdir -p /var/cache/swift - chown -R root:swift /var/cache/swift - chmod -R 775 /var/cache/swift - ``` - -6. 创建账号环 (CTL) - - 切换到/etc/swift目录。 - - ```shell - cd /etc/swift - ``` - - 创建基础account.builder文件: - - ```shell - swift-ring-builder account.builder create 10 1 1 - ``` - - 将每个存储节点添加到环中: - - ```shell - swift-ring-builder account.builder add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6202 --device DEVICE_NAME --weight DEVICE_WEIGHT - ``` - - **替换STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS为存储节点上管理网络的IP地址。替换DEVICE_NAME为同一存储节点上的存储设备名称** - - ***注意 *** - **对每个存储节点上的每个存储设备重复此命令** - - 验证戒指内容: - - ```shell - swift-ring-builder account.builder - ``` - - 重新平衡戒指: - - ```shell - swift-ring-builder account.builder rebalance - ``` - -7. 
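后续容器环、对象环的操作方式与账号环相同,注意 rebalance 需要针对各自的 builder 文件执行,例如容器环(示例):

```shell
swift-ring-builder container.builder rebalance
```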
创建容器环 (CTL) - - 切换到`/etc/swift`目录。 - - 创建基础`container.builder`文件: - - ```shell - swift-ring-builder container.builder create 10 1 1 - ``` - - 将每个存储节点添加到环中: - - ```shell - swift-ring-builder container.builder \ - add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6201 \ - --device DEVICE_NAME --weight 100 - - ``` - - **替换STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS为存储节点上管理网络的IP地址。替换DEVICE_NAME为同一存储节点上的存储设备名称** - - ***注意*** - **对每个存储节点上的每个存储设备重复此命令** - - 验证戒指内容: - - ```shell - swift-ring-builder container.builder - ``` - - 重新平衡戒指: - - ```shell - swift-ring-builder account.builder rebalance - ``` - -8. 创建对象环 (CTL) - - 切换到`/etc/swift`目录。 - - 创建基础`object.builder`文件: - - ```shell - swift-ring-builder object.builder create 10 1 1 - ``` - - 将每个存储节点添加到环中 - - ```shell - swift-ring-builder object.builder \ - add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6200 \ - --device DEVICE_NAME --weight 100 - ``` - - **替换STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS为存储节点上管理网络的IP地址。替换DEVICE_NAME为同一存储节点上的存储设备名称** - - ***注意 *** - **对每个存储节点上的每个存储设备重复此命令** - - 验证戒指内容: - - ```shell - swift-ring-builder object.builder - ``` - - 重新平衡戒指: - - ```shell - swift-ring-builder account.builder rebalance - ``` - - 分发环配置文件: - - 将`account.ring.gz`,`container.ring.gz`以及 `object.ring.gz`文件复制到`/etc/swift`每个存储节点和运行代理服务的任何其他节点上目录。 - - - -9. 完成安装 - - 编辑`/etc/swift/swift.conf`文件 - - ``` shell - [swift-hash] - swift_hash_path_suffix = test-hash - swift_hash_path_prefix = test-hash - - [storage-policy:0] - name = Policy-0 - default = yes - ``` - - **用唯一值替换 test-hash** - - 将swift.conf文件复制到/etc/swift每个存储节点和运行代理服务的任何其他节点上的目录。 - - 在所有节点上,确保配置目录的正确所有权: - - ```shell - chown -R root:swift /etc/swift - ``` - - 在控制器节点和运行代理服务的任何其他节点上,启动对象存储代理服务及其依赖项,并将它们配置为在系统启动时启动: - - ```shell - systemctl enable openstack-swift-proxy.service memcached.service - systemctl start openstack-swift-proxy.service memcached.service - ``` - - 在存储节点上,启动对象存储服务并将它们配置为在系统启动时启动: - - ```shell - systemctl enable openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service - - systemctl start openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service - - systemctl enable openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service - - systemctl start openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service - - systemctl enable openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service - - systemctl start openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service - ``` diff --git a/docs/install/openEuler-22.03-LTS/OpenStack-train.md b/docs/install/openEuler-22.03-LTS/OpenStack-train.md deleted file mode 100644 index a8d3f44096ef061a59711470f828a1e296a2d0cb..0000000000000000000000000000000000000000 --- a/docs/install/openEuler-22.03-LTS/OpenStack-train.md +++ /dev/null @@ -1,2959 +0,0 @@ -# OpenStack-Train 部署指南 - - - -- [OpenStack-Train 部署指南](#openstack-train-部署指南) - - [OpenStack 简介](#openstack-简介) - - [约定](#约定) - - [准备环境](#准备环境) - - [环境配置](#环境配置) - - 
[安装 SQL DataBase](#安装-sql-database) - - [安装 RabbitMQ](#安装-rabbitmq) - - [安装 Memcached](#安装-memcached) - - [安装 OpenStack](#安装-openstack) - - [Keystone 安装](#keystone-安装) - - [Glance 安装](#glance-安装) - - [Placement安装](#placement安装) - - [Nova 安装](#nova-安装) - - [Neutron 安装](#neutron-安装) - - [Cinder 安装](#cinder-安装) - - [horizon 安装](#horizon-安装) - - [Tempest 安装](#tempest-安装) - - [Ironic 安装](#ironic-安装) - - [Kolla 安装](#kolla-安装) - - [Trove 安装](#trove-安装) - - [Swift 安装](#swift-安装) - - [Cyborg 安装](#cyborg-安装) - - [Aodh 安装](#aodh-安装) - - [Gnocchi 安装](#gnocchi-安装) - - [Ceilometer 安装](#ceilometer-安装) - - [Heat 安装](#heat-安装) - - [快速安装 OpenStack](#快速安装-openstack) - - -## OpenStack 简介 - -OpenStack 是一个社区,也是一个项目。它提供了一个部署云的操作平台或工具集,为组织提供可扩展的、灵活的云计算。 - -作为一个开源的云计算管理平台,OpenStack 由nova、cinder、neutron、glance、keystone、horizon等几个主要的组件组合起来完成具体工作。OpenStack 支持几乎所有类型的云环境,项目目标是提供实施简单、可大规模扩展、丰富、标准统一的云计算管理平台。OpenStack 通过各种互补的服务提供了基础设施即服务(IaaS)的解决方案,每个服务提供 API 进行集成。 - -openEuler 22.03-LTS版本官方源已经支持 OpenStack-Train 版本,用户可以配置好 yum 源后根据此文档进行 OpenStack 部署。 - -## 约定 - -OpenStack 支持多种形态部署,此文档支持`ALL in One`以及`Distributed`两种部署方式,按照如下方式约定: - -`ALL in One`模式: - -```text -忽略所有可能的后缀 -``` - -`Distributed`模式: - -```text -以 `(CTL)` 为后缀表示此条配置或者命令仅适用`控制节点` -以 `(CPT)` 为后缀表示此条配置或者命令仅适用`计算节点` -以 `(STG)` 为后缀表示此条配置或者命令仅适用`存储节点` -除此之外表示此条配置或者命令同时适用`控制节点`和`计算节点` -``` - -***注意*** - -涉及到以上约定的服务如下: - -- Cinder -- Nova -- Neutron - -## 准备环境 - -### 环境配置 - -1. 启动OpenStack Train yum源 - - ```shell - yum update - yum install openstack-release-train - yum clean all && yum makecache - ``` - - **注意**:如果你的环境的YUM源没有启用EPOL,需要同时配置EPOL - - ```shell - vi /etc/yum.repos.d/openEuler.repo - - [EPOL] - name=EPOL - baseurl=http://repo.openeuler.org/openEuler-22.03-LTS/EPOL/main/$basearch/ - enabled=1 - gpgcheck=1 - gpgkey=http://repo.openeuler.org/openEuler-22.03-LTS/OS/$basearch/RPM-GPG-KEY-openEuler - EOF - ``` - -2. 修改主机名以及映射 - - 设置各个节点的主机名 - - ```shell - hostnamectl set-hostname controller (CTL) - hostnamectl set-hostname compute (CPT) - ``` - - 假设controller节点的IP是`10.0.0.11`,compute节点的IP是`10.0.0.12`(如果存在的话),则于`/etc/hosts`新增如下: - - ```shell - 10.0.0.11 controller - 10.0.0.12 compute - ``` - -### 安装 SQL DataBase - -1. 执行如下命令,安装软件包。 - - ```shell - yum install mariadb mariadb-server python3-PyMySQL - ``` - -2. 执行如下命令,创建并编辑 `/etc/my.cnf.d/openstack.cnf` 文件。 - - ```shell - vim /etc/my.cnf.d/openstack.cnf - - [mysqld] - bind-address = 10.0.0.11 - default-storage-engine = innodb - innodb_file_per_table = on - max_connections = 4096 - collation-server = utf8_general_ci - character-set-server = utf8 - ``` - - ***注意*** - - **其中 `bind-address` 设置为控制节点的管理IP地址。** - -3. 启动 DataBase 服务,并为其配置开机自启动: - - ```shell - systemctl enable mariadb.service - systemctl start mariadb.service - ``` - -4. 配置DataBase的默认密码(可选) - - ```shell - mysql_secure_installation - ``` - - ***注意*** - - **根据提示进行即可** - -### 安装 RabbitMQ - -1. 执行如下命令,安装软件包。 - - ```shell - yum install rabbitmq-server - ``` - -2. 启动 RabbitMQ 服务,并为其配置开机自启动。 - - ```shell - systemctl enable rabbitmq-server.service - systemctl start rabbitmq-server.service - ``` - -3. 添加 OpenStack用户。 - - ```shell - rabbitmqctl add_user openstack RABBIT_PASS - ``` - - ***注意*** - - **替换 `RABBIT_PASS`,为 OpenStack 用户设置密码** - -4. 设置openstack用户权限,允许进行配置、写、读: - - ```shell - rabbitmqctl set_permissions openstack ".*" ".*" ".*" - ``` - -### 安装 Memcached - -1. 执行如下命令,安装依赖软件包。 - - ```shell - yum install memcached python3-memcached - ``` - -2. 
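安装完成后,可以确认 python3-memcached 提供的 memcache 模块能够正常导入(仅为示例):

```shell
python3 -c "import memcache"
```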
编辑 `/etc/sysconfig/memcached` 文件。 - - ```shell - vim /etc/sysconfig/memcached - - OPTIONS="-l 127.0.0.1,::1,controller" - ``` - -3. 执行如下命令,启动 Memcached 服务,并为其配置开机启动。 - - ```shell - systemctl enable memcached.service - systemctl start memcached.service - ``` - - ***注意*** - - **服务启动后,可以通过命令`memcached-tool controller stats`确保启动正常,服务可用,其中可以将`controller`替换为控制节点的管理IP地址。** - -## 安装 OpenStack - -### Keystone 安装 - -1. 创建 keystone 数据库并授权。 - - ``` sql - mysql -u root -p - - MariaDB [(none)]> CREATE DATABASE keystone; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \ - IDENTIFIED BY 'KEYSTONE_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \ - IDENTIFIED BY 'KEYSTONE_DBPASS'; - MariaDB [(none)]> exit - ``` - - ***注意*** - - **替换 `KEYSTONE_DBPASS`,为 Keystone 数据库设置密码** - -2. 安装软件包。 - - ```shell - yum install openstack-keystone httpd mod_wsgi - ``` - -3. 配置keystone相关配置 - - ```shell - vim /etc/keystone/keystone.conf - - [database] - connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone - - [token] - provider = fernet - ``` - - ***解释*** - - [database]部分,配置数据库入口 - - [token]部分,配置token provider - - ***注意:*** - - **替换 `KEYSTONE_DBPASS` 为 Keystone 数据库的密码** - -4. 同步数据库。 - - ```shell - su -s /bin/sh -c "keystone-manage db_sync" keystone - ``` - -5. 初始化Fernet密钥仓库。 - - ```shell - keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone - keystone-manage credential_setup --keystone-user keystone --keystone-group keystone - ``` - -6. 启动服务。 - - ```shell - keystone-manage bootstrap --bootstrap-password ADMIN_PASS \ - --bootstrap-admin-url http://controller:5000/v3/ \ - --bootstrap-internal-url http://controller:5000/v3/ \ - --bootstrap-public-url http://controller:5000/v3/ \ - --bootstrap-region-id RegionOne - ``` - - ***注意*** - - **替换 `ADMIN_PASS`,为 admin 用户设置密码** - -7. 配置Apache HTTP server - - ```shell - vim /etc/httpd/conf/httpd.conf - - ServerName controller - ``` - - ```shell - ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/ - ``` - - ***解释*** - - 配置 `ServerName` 项引用控制节点 - - ***注意*** - **如果 `ServerName` 项不存在则需要创建** - -8. 启动Apache HTTP服务。 - - ```shell - systemctl enable httpd.service - systemctl start httpd.service - ``` - -9. 创建环境变量配置。 - - ```shell - cat << EOF >> ~/.admin-openrc - export OS_PROJECT_DOMAIN_NAME=Default - export OS_USER_DOMAIN_NAME=Default - export OS_PROJECT_NAME=admin - export OS_USERNAME=admin - export OS_PASSWORD=ADMIN_PASS - export OS_AUTH_URL=http://controller:5000/v3 - export OS_IDENTITY_API_VERSION=3 - export OS_IMAGE_API_VERSION=2 - EOF - ``` - - ***注意*** - - **替换 `ADMIN_PASS` 为 admin 用户的密码** - -10. 依次创建domain, projects, users, roles,需要先安装好python3-openstackclient: - - ```shell - yum install python3-openstackclient - ``` - - 导入环境变量 - - ```shell - source ~/.admin-openrc - ``` - - 创建project `service`,其中 domain `default` 在 keystone-manage bootstrap 时已创建 - - ```shell - openstack domain create --description "An Example Domain" example - ``` - - ```shell - openstack project create --domain default --description "Service Project" service - ``` - - 创建(non-admin)project `myproject`,user `myuser` 和 role `myrole`,为 `myproject` 和 `myuser` 添加角色`myrole` - - ```shell - openstack project create --domain default --description "Demo Project" myproject - openstack user create --domain default --password-prompt myuser - openstack role create myrole - openstack role add --project myproject --user myuser myrole - ``` - -11. 
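在进行下面的验证之前,可以先列出刚创建的 project、user 和 role,确认创建成功(示例):

```shell
openstack project list
openstack user list
openstack role list
```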
验证 - - 取消临时环境变量OS_AUTH_URL和OS_PASSWORD: - - ```shell - source ~/.admin-openrc - unset OS_AUTH_URL OS_PASSWORD - ``` - - 为admin用户请求token: - - ```shell - openstack --os-auth-url http://controller:5000/v3 \ - --os-project-domain-name Default --os-user-domain-name Default \ - --os-project-name admin --os-username admin token issue - ``` - - 为myuser用户请求token: - - ```shell - openstack --os-auth-url http://controller:5000/v3 \ - --os-project-domain-name Default --os-user-domain-name Default \ - --os-project-name myproject --os-username myuser token issue - ``` - -### Glance 安装 - -1. 创建数据库、服务凭证和 API 端点 - - 创建数据库: - - ```sql - mysql -u root -p - - MariaDB [(none)]> CREATE DATABASE glance; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \ - IDENTIFIED BY 'GLANCE_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \ - IDENTIFIED BY 'GLANCE_DBPASS'; - MariaDB [(none)]> exit - ``` - - ***注意:*** - - **替换 `GLANCE_DBPASS`,为 glance 数据库设置密码** - - 创建服务凭证 - - ```shell - source ~/.admin-openrc - - openstack user create --domain default --password-prompt glance - openstack role add --project service --user glance admin - openstack service create --name glance --description "OpenStack Image" image - ``` - - 创建镜像服务API端点: - - ```shell - openstack endpoint create --region RegionOne image public http://controller:9292 - openstack endpoint create --region RegionOne image internal http://controller:9292 - openstack endpoint create --region RegionOne image admin http://controller:9292 - ``` - -2. 安装软件包 - - ```shell - yum install openstack-glance - ``` - -3. 配置glance相关配置: - - ```shell - vim /etc/glance/glance-api.conf - - [database] - connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance - - [keystone_authtoken] - www_authenticate_uri = http://controller:5000 - auth_url = http://controller:5000 - memcached_servers = controller:11211 - auth_type = password - project_domain_name = Default - user_domain_name = Default - project_name = service - username = glance - password = GLANCE_PASS - - [paste_deploy] - flavor = keystone - - [glance_store] - stores = file,http - default_store = file - filesystem_store_datadir = /var/lib/glance/images/ - ``` - - ***解释:*** - - [database]部分,配置数据库入口 - - [keystone_authtoken] [paste_deploy]部分,配置身份认证服务入口 - - [glance_store]部分,配置本地文件系统存储和镜像文件的位置 - - ***注意*** - - **替换 `GLANCE_DBPASS` 为 glance 数据库的密码** - - **替换 `GLANCE_PASS` 为 glance 用户的密码** - -4. 同步数据库: - - ```shell - su -s /bin/sh -c "glance-manage db_sync" glance - ``` - -5. 启动服务: - - ```shell - systemctl enable openstack-glance-api.service - systemctl start openstack-glance-api.service - ``` - -6. 验证 - - 下载镜像 - - ```shell - source ~/.admin-openrc - - wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img - ``` - - ***注意*** - - **如果您使用的环境是鲲鹏架构,请下载aarch64版本的镜像;已对镜像cirros-0.5.2-aarch64-disk.img进行测试。** - - 向Image服务上传镜像: - - ```shell - openstack image create --disk-format qcow2 --container-format bare \ - --file cirros-0.4.0-x86_64-disk.img --public cirros - ``` - - 确认镜像上传并验证属性: - - ```shell - openstack image list - ``` - -### Placement安装 - -1. 
创建数据库、服务凭证和 API 端点 - - 创建数据库: - - 作为 root 用户访问数据库,创建 placement 数据库并授权。 - - ```shell - mysql -u root -p - MariaDB [(none)]> CREATE DATABASE placement; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' \ - IDENTIFIED BY 'PLACEMENT_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' \ - IDENTIFIED BY 'PLACEMENT_DBPASS'; - MariaDB [(none)]> exit - ``` - - ***注意*** - - **替换 `PLACEMENT_DBPASS` 为 placement 数据库设置密码** - - ```shell - source admin-openrc - ``` - - 执行如下命令,创建 placement 服务凭证、创建 placement 用户以及添加‘admin’角色到用户‘placement’。 - - 创建Placement API服务 - - ```shell - openstack user create --domain default --password-prompt placement - openstack role add --project service --user placement admin - openstack service create --name placement --description "Placement API" placement - ``` - - 创建placement服务API端点: - - ```shell - openstack endpoint create --region RegionOne placement public http://controller:8778 - openstack endpoint create --region RegionOne placement internal http://controller:8778 - openstack endpoint create --region RegionOne placement admin http://controller:8778 - ``` - -2. 安装和配置 - - 安装软件包: - - ```shell - yum install openstack-placement-api - ``` - - 配置placement: - - 编辑 /etc/placement/placement.conf 文件: - - 在[placement_database]部分,配置数据库入口 - - 在[api] [keystone_authtoken]部分,配置身份认证服务入口 - - ```shell - # vim /etc/placement/placement.conf - [placement_database] - # ... - connection = mysql+pymysql://placement:PLACEMENT_DBPASS@controller/placement - [api] - # ... - auth_strategy = keystone - [keystone_authtoken] - # ... - auth_url = http://controller:5000/v3 - memcached_servers = controller:11211 - auth_type = password - project_domain_name = Default - user_domain_name = Default - project_name = service - username = placement - password = PLACEMENT_PASS - ``` - - 其中,替换 PLACEMENT_DBPASS 为 placement 数据库的密码,替换 PLACEMENT_PASS 为 placement 用户的密码。 - - 同步数据库: - - ```shell - su -s /bin/sh -c "placement-manage db sync" placement - ``` - - 启动httpd服务: - - ```shell - systemctl restart httpd - ``` - -3. 验证 - - 执行如下命令,执行状态检查: - - ```shell - . admin-openrc - placement-status upgrade check - ``` - - 安装osc-placement,列出可用的资源类别及特性: - - ```shell - yum install python3-osc-placement - openstack --os-placement-api-version 1.2 resource class list --sort-column name - openstack --os-placement-api-version 1.6 trait list --sort-column name - ``` - -### Nova 安装 - -1. 
创建数据库、服务凭证和 API 端点 - - 创建数据库: - - ```sql - mysql -u root -p (CTL) - - MariaDB [(none)]> CREATE DATABASE nova_api; - MariaDB [(none)]> CREATE DATABASE nova; - MariaDB [(none)]> CREATE DATABASE nova_cell0; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \ - IDENTIFIED BY 'NOVA_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \ - IDENTIFIED BY 'NOVA_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \ - IDENTIFIED BY 'NOVA_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \ - IDENTIFIED BY 'NOVA_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \ - IDENTIFIED BY 'NOVA_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \ - IDENTIFIED BY 'NOVA_DBPASS'; - MariaDB [(none)]> exit - ``` - - ***注意*** - - **替换NOVA_DBPASS,为nova数据库设置密码** - - ```shell - source ~/.admin-openrc (CTL) - ``` - - 创建nova服务凭证: - - ```shell - openstack user create --domain default --password-prompt nova (CTL) - openstack role add --project service --user nova admin (CTL) - openstack service create --name nova --description "OpenStack Compute" compute (CTL) - ``` - - 创建nova API端点: - - ```shell - openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1 (CTL) - openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1 (CTL) - openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1 (CTL) - ``` - -2. 安装软件包 - - ```shell - yum install openstack-nova-api openstack-nova-conductor \ (CTL) - openstack-nova-novncproxy openstack-nova-scheduler - - yum install openstack-nova-compute (CPT) - ``` - - ***注意*** - - **如果为arm64结构,还需要执行以下命令** - - ```shell - yum install edk2-aarch64 (CPT) - ``` - -3. 
配置nova相关配置 - - ```shell - vim /etc/nova/nova.conf - - [DEFAULT] - enabled_apis = osapi_compute,metadata - transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/ - my_ip = 10.0.0.1 - use_neutron = true - firewall_driver = nova.virt.firewall.NoopFirewallDriver - compute_driver=libvirt.LibvirtDriver (CPT) - instances_path = /var/lib/nova/instances/ (CPT) - lock_path = /var/lib/nova/tmp (CPT) - - [api_database] - connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api (CTL) - - [database] - connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova (CTL) - - [api] - auth_strategy = keystone - - [keystone_authtoken] - www_authenticate_uri = http://controller:5000/ - auth_url = http://controller:5000/ - memcached_servers = controller:11211 - auth_type = password - project_domain_name = Default - user_domain_name = Default - project_name = service - username = nova - password = NOVA_PASS - - [vnc] - enabled = true - server_listen = $my_ip - server_proxyclient_address = $my_ip - novncproxy_base_url = http://controller:6080/vnc_auto.html (CPT) - - [glance] - api_servers = http://controller:9292 - - [oslo_concurrency] - lock_path = /var/lib/nova/tmp (CTL) - - [placement] - region_name = RegionOne - project_domain_name = Default - project_name = service - auth_type = password - user_domain_name = Default - auth_url = http://controller:5000/v3 - username = placement - password = PLACEMENT_PASS - - [neutron] - auth_url = http://controller:5000 - auth_type = password - project_domain_name = default - user_domain_name = default - region_name = RegionOne - project_name = service - username = neutron - password = NEUTRON_PASS - service_metadata_proxy = true (CTL) - metadata_proxy_shared_secret = METADATA_SECRET (CTL) - ``` - - ***解释*** - - [default]部分,启用计算和元数据的API,配置RabbitMQ消息队列入口,配置my_ip,启用网络服务neutron; - - [api_database] [database]部分,配置数据库入口; - - [api] [keystone_authtoken]部分,配置身份认证服务入口; - - [vnc]部分,启用并配置远程控制台入口; - - [glance]部分,配置镜像服务API的地址; - - [oslo_concurrency]部分,配置lock path; - - [placement]部分,配置placement服务的入口。 - - ***注意*** - - **替换 `RABBIT_PASS` 为 RabbitMQ 中 openstack 账户的密码;** - - **配置 `my_ip` 为控制节点的管理IP地址;** - - **替换 `NOVA_DBPASS` 为nova数据库的密码;** - - **替换 `NOVA_PASS` 为nova用户的密码;** - - **替换 `PLACEMENT_PASS` 为placement用户的密码;** - - **替换 `NEUTRON_PASS` 为neutron用户的密码;** - - **替换`METADATA_SECRET`为合适的元数据代理secret。** - - **额外** - - 确定是否支持虚拟机硬件加速(x86架构): - - ```shell - egrep -c '(vmx|svm)' /proc/cpuinfo (CPT) - ``` - - 如果返回值为0则不支持硬件加速,需要配置libvirt使用QEMU而不是KVM: - - ```shell - vim /etc/nova/nova.conf (CPT) - - [libvirt] - virt_type = qemu - ``` - - 如果返回值为1或更大的值,则支持硬件加速,则`virt_type`可以配置为`kvm` - - ***注意*** - - **如果为arm64结构,还需要在计算节点执行以下命令** - - ```shell - - mkdir -p /usr/share/AAVMF - chown nova:nova /usr/share/AAVMF - - ln -s /usr/share/edk2/aarch64/QEMU_EFI-pflash.raw \ - /usr/share/AAVMF/AAVMF_CODE.fd - ln -s /usr/share/edk2/aarch64/vars-template-pflash.raw \ - /usr/share/AAVMF/AAVMF_VARS.fd - - vim /etc/libvirt/qemu.conf - - nvram = ["/usr/share/AAVMF/AAVMF_CODE.fd: \ - /usr/share/AAVMF/AAVMF_VARS.fd", \ - "/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw: \ - /usr/share/edk2/aarch64/vars-template-pflash.raw"] - ``` - - 并且当ARM架构下的部署环境为嵌套虚拟化时,`libvirt`配置如下: - - ```shell - [libvirt] - virt_type = qemu - cpu_mode = custom - cpu_model = cortex-a72 - ``` - -4. 
同步数据库 - - 同步nova-api数据库: - - ```shell - su -s /bin/sh -c "nova-manage api_db sync" nova (CTL) - ``` - - 注册cell0数据库: - - ```shell - su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova (CTL) - ``` - - 创建cell1 cell: - - ```shell - su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova (CTL) - ``` - - 同步nova数据库: - - ```shell - su -s /bin/sh -c "nova-manage db sync" nova (CTL) - ``` - - 验证cell0和cell1注册正确: - - ```shell - su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova (CTL) - ``` - - 添加计算节点到openstack集群 - - ```shell - su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova (CTL) - ``` - -5. 启动服务 - - ```shell - systemctl enable \ (CTL) - openstack-nova-api.service \ - openstack-nova-scheduler.service \ - openstack-nova-conductor.service \ - openstack-nova-novncproxy.service - - systemctl start \ (CTL) - openstack-nova-api.service \ - openstack-nova-scheduler.service \ - openstack-nova-conductor.service \ - openstack-nova-novncproxy.service - ``` - - ```shell - systemctl enable libvirtd.service openstack-nova-compute.service (CPT) - systemctl start libvirtd.service openstack-nova-compute.service (CPT) - ``` - -6. 验证 - - ```shell - source ~/.admin-openrc (CTL) - ``` - - 列出服务组件,验证每个流程都成功启动和注册: - - ```shell - openstack compute service list (CTL) - ``` - - 列出身份服务中的API端点,验证与身份服务的连接: - - ```shell - openstack catalog list (CTL) - ``` - - 列出镜像服务中的镜像,验证与镜像服务的连接: - - ```shell - openstack image list (CTL) - ``` - - 检查cells是否运作成功,以及其他必要条件是否已具备。 - - ```shell - nova-status upgrade check (CTL) - ``` - -### Neutron 安装 - -1. 创建数据库、服务凭证和 API 端点 - - 创建数据库: - - ```sql - mysql -u root -p (CTL) - - MariaDB [(none)]> CREATE DATABASE neutron; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \ - IDENTIFIED BY 'NEUTRON_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \ - IDENTIFIED BY 'NEUTRON_DBPASS'; - MariaDB [(none)]> exit - ``` - - ***注意*** - - **替换 `NEUTRON_DBPASS` 为 neutron 数据库设置密码。** - - ```shell - source ~/.admin-openrc (CTL) - ``` - - 创建neutron服务凭证 - - ```shell - openstack user create --domain default --password-prompt neutron (CTL) - openstack role add --project service --user neutron admin (CTL) - openstack service create --name neutron --description "OpenStack Networking" network (CTL) - ``` - - 创建Neutron服务API端点: - - ```shell - openstack endpoint create --region RegionOne network public http://controller:9696 (CTL) - openstack endpoint create --region RegionOne network internal http://controller:9696 (CTL) - openstack endpoint create --region RegionOne network admin http://controller:9696 (CTL) - ``` - -2. 安装软件包: - - ```shell - yum install openstack-neutron openstack-neutron-linuxbridge ebtables ipset \ (CTL) - openstack-neutron-ml2 - ``` - - ```shell - yum install openstack-neutron-linuxbridge ebtables ipset (CPT) - ``` - -3. 
配置neutron相关配置: - - 配置主体配置 - - ```shell - vim /etc/neutron/neutron.conf - - [database] - connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron (CTL) - - [DEFAULT] - core_plugin = ml2 (CTL) - service_plugins = router (CTL) - allow_overlapping_ips = true (CTL) - transport_url = rabbit://openstack:RABBIT_PASS@controller - auth_strategy = keystone - notify_nova_on_port_status_changes = true (CTL) - notify_nova_on_port_data_changes = true (CTL) - api_workers = 3 (CTL) - - [keystone_authtoken] - www_authenticate_uri = http://controller:5000 - auth_url = http://controller:5000 - memcached_servers = controller:11211 - auth_type = password - project_domain_name = Default - user_domain_name = Default - project_name = service - username = neutron - password = NEUTRON_PASS - - [nova] - auth_url = http://controller:5000 (CTL) - auth_type = password (CTL) - project_domain_name = Default (CTL) - user_domain_name = Default (CTL) - region_name = RegionOne (CTL) - project_name = service (CTL) - username = nova (CTL) - password = NOVA_PASS (CTL) - - [oslo_concurrency] - lock_path = /var/lib/neutron/tmp - ``` - - ***解释*** - - [database]部分,配置数据库入口; - - [default]部分,启用ml2插件和router插件,允许ip地址重叠,配置RabbitMQ消息队列入口; - - [default] [keystone]部分,配置身份认证服务入口; - - [default] [nova]部分,配置网络来通知计算网络拓扑的变化; - - [oslo_concurrency]部分,配置lock path。 - - ***注意*** - - **替换`NEUTRON_DBPASS`为 neutron 数据库的密码;** - - **替换`RABBIT_PASS`为 RabbitMQ中openstack 账户的密码;** - - **替换`NEUTRON_PASS`为 neutron 用户的密码;** - - **替换`NOVA_PASS`为 nova 用户的密码。** - - 配置ML2插件: - - ```shell - vim /etc/neutron/plugins/ml2/ml2_conf.ini - - [ml2] - type_drivers = flat,vlan,vxlan - tenant_network_types = vxlan - mechanism_drivers = linuxbridge,l2population - extension_drivers = port_security - - [ml2_type_flat] - flat_networks = provider - - [ml2_type_vxlan] - vni_ranges = 1:1000 - - [securitygroup] - enable_ipset = true - ``` - - 创建/etc/neutron/plugin.ini的符号链接 - - ```shell - ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini - ``` - - **注意** - - **[ml2]部分,启用 flat、vlan、vxlan 网络,启用 linuxbridge 及 l2population 机制,启用端口安全扩展驱动;** - - **[ml2_type_flat]部分,配置 flat 网络为 provider 虚拟网络;** - - **[ml2_type_vxlan]部分,配置 VXLAN 网络标识符范围;** - - **[securitygroup]部分,配置允许 ipset。** - - **补充** - - **l2 的具体配置可以根据用户需求自行修改,本文使用的是provider network + linuxbridge** - - 配置 Linux bridge 代理: - - ```shell - vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini - - [linux_bridge] - physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME - - [vxlan] - enable_vxlan = true - local_ip = OVERLAY_INTERFACE_IP_ADDRESS - l2_population = true - - [securitygroup] - enable_security_group = true - firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver - ``` - - ***解释*** - - [linux_bridge]部分,映射 provider 虚拟网络到物理网络接口; - - [vxlan]部分,启用 vxlan 覆盖网络,配置处理覆盖网络的物理网络接口 IP 地址,启用 layer-2 population; - - [securitygroup]部分,允许安全组,配置 linux bridge iptables 防火墙驱动。 - - ***注意*** - - **替换`PROVIDER_INTERFACE_NAME`为物理网络接口;** - - **替换`OVERLAY_INTERFACE_IP_ADDRESS`为控制节点的管理IP地址。** - - 配置Layer-3代理: - - ```shell - vim /etc/neutron/l3_agent.ini (CTL) - - [DEFAULT] - interface_driver = linuxbridge - ``` - - ***解释*** - - 在[default]部分,配置接口驱动为linuxbridge - - 配置DHCP代理: - - ```shell - vim /etc/neutron/dhcp_agent.ini (CTL) - - [DEFAULT] - interface_driver = linuxbridge - dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq - enable_isolated_metadata = true - ``` - - ***解释*** - - [default]部分,配置linuxbridge接口驱动、Dnsmasq DHCP驱动,启用隔离的元数据。 - - 配置metadata代理: - - ```shell - vim /etc/neutron/metadata_agent.ini (CTL) 
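    # 补充说明:此处的 METADATA_SECRET 需要与 nova.conf 中 [neutron] 段的
    # metadata_proxy_shared_secret 保持一致,否则元数据代理将无法通过校验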
- - [DEFAULT] - nova_metadata_host = controller - metadata_proxy_shared_secret = METADATA_SECRET - ``` - - ***解释*** - - [default]部分,配置元数据主机和shared secret。 - - ***注意*** - - **替换`METADATA_SECRET`为合适的元数据代理secret。** - -4. 配置nova相关配置 - - ```shell - vim /etc/nova/nova.conf - - [neutron] - auth_url = http://controller:5000 - auth_type = password - project_domain_name = Default - user_domain_name = Default - region_name = RegionOne - project_name = service - username = neutron - password = NEUTRON_PASS - service_metadata_proxy = true (CTL) - metadata_proxy_shared_secret = METADATA_SECRET (CTL) - ``` - - ***解释*** - - [neutron]部分,配置访问参数,启用元数据代理,配置secret。 - - ***注意*** - - **替换`NEUTRON_PASS`为 neutron 用户的密码;** - - **替换`METADATA_SECRET`为合适的元数据代理secret。** - -5. 同步数据库: - - ```shell - su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \ - --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron - ``` - -6. 重启计算API服务: - - ```shell - systemctl restart openstack-nova-api.service - ``` - -7. 启动网络服务 - - ```shell - systemctl enable neutron-server.service neutron-linuxbridge-agent.service \ (CTL) - neutron-dhcp-agent.service neutron-metadata-agent.service \ - neutron-l3-agent.service - - systemctl restart neutron-server.service neutron-linuxbridge-agent.service \ (CTL) - neutron-dhcp-agent.service neutron-metadata-agent.service \ - neutron-l3-agent.service - - systemctl enable neutron-linuxbridge-agent.service (CPT) - systemctl restart neutron-linuxbridge-agent.service openstack-nova-compute.service (CPT) - ``` - -8. 验证 - - 验证 neutron 代理启动成功: - - ```shell - openstack network agent list - ``` - -### Cinder 安装 - -1. 创建数据库、服务凭证和 API 端点 - - 创建数据库: - - ```sql - mysql -u root -p - - MariaDB [(none)]> CREATE DATABASE cinder; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \ - IDENTIFIED BY 'CINDER_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \ - IDENTIFIED BY 'CINDER_DBPASS'; - MariaDB [(none)]> exit - ``` - - ***注意*** - - **替换 `CINDER_DBPASS` 为cinder数据库设置密码。** - - ```shell - source ~/.admin-openrc - ``` - - 创建cinder服务凭证: - - ```shell - openstack user create --domain default --password-prompt cinder - openstack role add --project service --user cinder admin - openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2 - openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3 - ``` - - 创建块存储服务API端点: - - ```shell - openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(project_id\)s - openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(project_id\)s - openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(project_id\)s - openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s - openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s - openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s - ``` - -2. 安装软件包: - - ```shell - yum install openstack-cinder-api openstack-cinder-scheduler (CTL) - ``` - - ```shell - yum install lvm2 device-mapper-persistent-data scsi-target-utils rpcbind nfs-utils \ (STG) - openstack-cinder-volume openstack-cinder-backup - ``` - -3. 准备存储设备,以下仅为示例: - - ```shell - pvcreate /dev/vdb - vgcreate cinder-volumes /dev/vdb - - vim /etc/lvm/lvm.conf - - - devices { - ... 
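    # 补充(仅为示例说明):filter 中 "a" 表示接受(accept),"r" 表示拒绝(reject);
    # 如果操作系统盘同样位于 LVM 之上,还需要同时接受系统盘所在的设备,例如:
    # filter = [ "a/vda/", "a/vdb/", "r/.*/"]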
    filter = [ "a/vdb/", "r/.*/"]
    ```

    ***解释***

    在 devices 部分添加 filter 配置,只接受 /dev/vdb 设备,拒绝其他所有设备。

4. 准备NFS

    ```shell
    mkdir -p /root/cinder/backup

    cat << EOF >> /etc/exports
    /root/cinder/backup 192.168.1.0/24(rw,sync,no_root_squash,no_all_squash)
    EOF
    ```

5. 配置cinder相关配置:

    ```shell
    vim /etc/cinder/cinder.conf

    [DEFAULT]
    transport_url = rabbit://openstack:RABBIT_PASS@controller
    auth_strategy = keystone
    my_ip = 10.0.0.11
    enabled_backends = lvm (STG)
    backup_driver=cinder.backup.drivers.nfs.NFSBackupDriver (STG)
    backup_share=HOST:PATH (STG)

    [database]
    connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder

    [keystone_authtoken]
    www_authenticate_uri = http://controller:5000
    auth_url = http://controller:5000
    memcached_servers = controller:11211
    auth_type = password
    project_domain_name = Default
    user_domain_name = Default
    project_name = service
    username = cinder
    password = CINDER_PASS

    [oslo_concurrency]
    lock_path = /var/lib/cinder/tmp

    [lvm]
    volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver (STG)
    volume_group = cinder-volumes (STG)
    iscsi_protocol = iscsi (STG)
    iscsi_helper = tgtadm (STG)
    ```

    ***解释***

    [database]部分,配置数据库入口;

    [DEFAULT]部分,配置RabbitMQ消息队列入口,配置my_ip;

    [DEFAULT] [keystone_authtoken]部分,配置身份认证服务入口;

    [oslo_concurrency]部分,配置lock path。

    ***注意***

    **替换`CINDER_DBPASS`为 cinder 数据库的密码;**

    **替换`RABBIT_PASS`为 RabbitMQ 中 openstack 账户的密码;**

    **配置`my_ip`为控制节点的管理 IP 地址;**

    **替换`CINDER_PASS`为 cinder 用户的密码;**

    **替换`HOST:PATH`为 NFS 服务端的 IP(或主机名)和共享路径。**

6. 同步数据库:

    ```shell
    su -s /bin/sh -c "cinder-manage db sync" cinder (CTL)
    ```

7. 配置nova:

    ```shell
    vim /etc/nova/nova.conf (CTL)

    [cinder]
    os_region_name = RegionOne
    ```

8. 重启计算API服务

    ```shell
    systemctl restart openstack-nova-api.service
    ```

9. 启动cinder服务

    ```shell
    systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service (CTL)
    systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service (CTL)
    ```

    ```shell
    systemctl enable rpcbind.service nfs-server.service tgtd.service iscsid.service \ (STG)
                     openstack-cinder-volume.service \
                     openstack-cinder-backup.service
    systemctl start rpcbind.service nfs-server.service tgtd.service iscsid.service \ (STG)
                    openstack-cinder-volume.service \
                    openstack-cinder-backup.service
    ```

    ***注意***

    当cinder使用tgtadm的方式挂卷的时候,要修改/etc/tgt/tgtd.conf,内容如下,保证tgtd可以发现cinder-volume的iscsi target。

    ```shell
    include /var/lib/cinder/volumes/*
    ```

10. 验证

    ```shell
    source ~/.admin-openrc
    openstack volume service list
    ```

### horizon 安装

1. 安装软件包

    ```shell
    yum install openstack-dashboard
    ```

2. 修改文件

    修改变量

    ```text
    vim /etc/openstack-dashboard/local_settings

    OPENSTACK_HOST = "controller"
    ALLOWED_HOSTS = ['*', ]

    SESSION_ENGINE = 'django.contrib.sessions.backends.cache'

    CACHES = {
        'default': {
            'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
            'LOCATION': 'controller:11211',
        }
    }

    OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
    OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
    OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
    OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"

    OPENSTACK_API_VERSIONS = {
        "identity": 3,
        "image": 2,
        "volume": 3,
    }
    ```

3. 重启 httpd 服务

    ```shell
    systemctl restart httpd.service memcached.service
    ```
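    **补充**

    重启完成后,可以先用如下命令确认 dashboard 已经可以正常响应。以下仅为示例:假设 `openstack-dashboard` 使用默认的 `WEBROOT`(`/dashboard`),`HOSTIP` 为控制节点管理平面IP地址,请按实际环境替换。

    ```shell
    curl -L http://HOSTIP/dashboard
    ```

4. 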
验证 - 打开浏览器,输入网址,登录 horizon。 - - ***注意*** - - **替换HOSTIP为控制节点管理平面IP地址** - -### Tempest 安装 - -Tempest是OpenStack的集成测试服务,如果用户需要全面自动化测试已安装的OpenStack环境的功能,则推荐使用该组件。否则,可以不用安装。 - -1. 安装Tempest - - ```shell - yum install openstack-tempest - ``` - -2. 初始化目录 - - ```shell - tempest init mytest - ``` - -3. 修改配置文件。 - - ```shell - cd mytest - vi etc/tempest.conf - ``` - - tempest.conf中需要配置当前OpenStack环境的信息,具体内容可以参考[官方示例](https://docs.openstack.org/tempest/latest/sampleconf.html) - -4. 执行测试 - - ```shell - tempest run - ``` - -5. 安装tempest扩展(可选) - OpenStack各个服务本身也提供了一些tempest测试包,用户可以安装这些包来丰富tempest的测试内容。在Train中,我们提供了Cinder、Glance、Keystone、Ironic、Trove的扩展测试,用户可以执行如下命令进行安装使用: - ``` - yum install python3-cinder-tempest-plugin python3-glance-tempest-plugin python3-ironic-tempest-plugin python3-keystone-tempest-plugin python3-trove-tempest-plugin - ``` - -### Ironic 安装 - -Ironic是OpenStack的裸金属服务,如果用户需要进行裸机部署则推荐使用该组件。否则,可以不用安装。 - -1. 设置数据库 - - 裸金属服务在数据库中存储信息,创建一个**ironic**用户可以访问的**ironic**数据库,替换**IRONIC_DBPASSWORD**为合适的密码 - - ```sql - mysql -u root -p - - MariaDB [(none)]> CREATE DATABASE ironic CHARACTER SET utf8; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'localhost' \ - IDENTIFIED BY 'IRONIC_DBPASSWORD'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'%' \ - IDENTIFIED BY 'IRONIC_DBPASSWORD'; - ``` -2. 安装软件包 - - ```shell - yum install openstack-ironic-api openstack-ironic-conductor python3-ironicclient - ``` - - 启动服务 - - ```shell - systemctl enable openstack-ironic-api openstack-ironic-conductor - systemctl start openstack-ironic-api openstack-ironic-conductor - ``` - -3. 创建服务用户认证 - - 1、创建Bare Metal服务用户 - - ```shell - openstack user create --password IRONIC_PASSWORD \ - --email ironic@example.com ironic - openstack role add --project service --user ironic admin - openstack service create --name ironic \ - --description "Ironic baremetal provisioning service" baremetal - ``` - - 2、创建Bare Metal服务访问入口 - - ```shell - openstack endpoint create --region RegionOne baremetal admin http://$IRONIC_NODE:6385 - openstack endpoint create --region RegionOne baremetal public http://$IRONIC_NODE:6385 - openstack endpoint create --region RegionOne baremetal internal http://$IRONIC_NODE:6385 - ``` - -4. 配置ironic-api服务 - - 配置文件路径/etc/ironic/ironic.conf - - 1、通过**connection**选项配置数据库的位置,如下所示,替换**IRONIC_DBPASSWORD**为**ironic**用户的密码,替换**DB_IP**为DB服务器所在的IP地址: - - ```shell - [database] - - # The SQLAlchemy connection string used to connect to the - # database (string value) - - connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic - ``` - - 2、通过以下选项配置ironic-api服务使用RabbitMQ消息代理,替换**RPC_\***为RabbitMQ的详细地址和凭证 - - ```shell - [DEFAULT] - - # A URL representing the messaging driver to use and its full - # configuration. (string value) - - transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/ - ``` - - 用户也可自行使用json-rpc方式替换rabbitmq - - 3、配置ironic-api服务使用身份认证服务的凭证,替换**PUBLIC_IDENTITY_IP**为身份认证服务器的公共IP,替换**PRIVATE_IDENTITY_IP**为身份认证服务器的私有IP,替换**IRONIC_PASSWORD**为身份认证服务中**ironic**用户的密码: - - ```shell - [DEFAULT] - - # Authentication strategy used by ironic-api: one of - # "keystone" or "noauth". "noauth" should not be used in a - # production environment because all authentication will be - # disabled. 
(string value) - - auth_strategy=keystone - - [keystone_authtoken] - # Authentication type to load (string value) - auth_type=password - # Complete public Identity API endpoint (string value) - www_authenticate_uri=http://PUBLIC_IDENTITY_IP:5000 - # Complete admin Identity API endpoint. (string value) - auth_url=http://PRIVATE_IDENTITY_IP:5000 - # Service username. (string value) - username=ironic - # Service account password. (string value) - password=IRONIC_PASSWORD - # Service tenant name. (string value) - project_name=service - # Domain name containing project (string value) - project_domain_name=Default - # User's domain name (string value) - user_domain_name=Default - - ``` - - 4、创建裸金属服务数据库表 - - ```shell - ironic-dbsync --config-file /etc/ironic/ironic.conf create_schema - ``` - - 5、重启ironic-api服务 - - ```shell - sudo systemctl restart openstack-ironic-api - ``` - -5. 配置ironic-conductor服务 - - 1、替换**HOST_IP**为conductor host的IP - - ```shell - [DEFAULT] - - # IP address of this host. If unset, will determine the IP - # programmatically. If unable to do so, will use "127.0.0.1". - # (string value) - - my_ip=HOST_IP - ``` - - 2、配置数据库的位置,ironic-conductor应该使用和ironic-api相同的配置。替换**IRONIC_DBPASSWORD**为**ironic**用户的密码,替换DB_IP为DB服务器所在的IP地址: - - ```shell - [database] - - # The SQLAlchemy connection string to use to connect to the - # database. (string value) - - connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic - ``` - - 3、通过以下选项配置ironic-api服务使用RabbitMQ消息代理,ironic-conductor应该使用和ironic-api相同的配置,替换**RPC_\***为RabbitMQ的详细地址和凭证 - - ```shell - [DEFAULT] - - # A URL representing the messaging driver to use and its full - # configuration. (string value) - - transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/ - ``` - - 用户也可自行使用json-rpc方式替换rabbitmq - - 4、配置凭证访问其他OpenStack服务 - - 为了与其他OpenStack服务进行通信,裸金属服务在请求其他服务时需要使用服务用户与OpenStack Identity服务进行认证。这些用户的凭据必须在与相应服务相关的每个配置文件中进行配置。 - - ```shell - [neutron] - 访问OpenStack网络服务 - [glance] - 访问OpenStack镜像服务 - [swift] - 访问OpenStack对象存储服务 - [cinder] - 访问OpenStack块存储服务 - [inspector] - 访问OpenStack裸金属introspection服务 - [service_catalog] - 一个特殊项用于保存裸金属服务使用的凭证,该凭证用于发现注册在OpenStack身份认证服务目录中的自己的API URL端点 - ``` - - 简单起见,可以对所有服务使用同一个服务用户。为了向后兼容,该用户应该和ironic-api服务的[keystone_authtoken]所配置的为同一个用户。但这不是必须的,也可以为每个服务创建并配置不同的服务用户。 - - 在下面的示例中,用户访问OpenStack网络服务的身份验证信息配置为: - - ```shell - 网络服务部署在名为RegionOne的身份认证服务域中,仅在服务目录中注册公共端点接口 - - 请求时使用特定的CA SSL证书进行HTTPS连接 - - 与ironic-api服务配置相同的服务用户 - - 动态密码认证插件基于其他选项发现合适的身份认证服务API版本 - ``` - - ```shell - [neutron] - - # Authentication type to load (string value) - auth_type = password - # Authentication URL (string value) - auth_url=https://IDENTITY_IP:5000/ - # Username (string value) - username=ironic - # User's password (string value) - password=IRONIC_PASSWORD - # Project name to scope to (string value) - project_name=service - # Domain ID containing project (string value) - project_domain_id=default - # User's domain id (string value) - user_domain_id=default - # PEM encoded Certificate Authority to use when verifying - # HTTPs connections. (string value) - cafile=/opt/stack/data/ca-bundle.pem - # The default region_name for endpoint URL discovery. (string - # value) - region_name = RegionOne - # List of interfaces, in order of preference, for endpoint - # URL. (list value) - valid_interfaces=public - ``` - - 默认情况下,为了与其他服务进行通信,裸金属服务会尝试通过身份认证服务的服务目录发现该服务合适的端点。如果希望对一个特定服务使用一个不同的端点,则在裸金属服务的配置文件中通过endpoint_override选项进行指定: - - ```shell - [neutron] ... 
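    # 示例(假设网络服务的内部端点为 http://controller:9696,仅作说明):
    # endpoint_override = http://controller:9696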
    endpoint_override = 
    ```

    5、配置允许的驱动程序和硬件类型

    通过设置enabled_hardware_types设置ironic-conductor服务允许使用的硬件类型:

    ```shell
    [DEFAULT]
    enabled_hardware_types = ipmi
    ```

    配置硬件接口:

    ```shell
    enabled_boot_interfaces = pxe
    enabled_deploy_interfaces = direct,iscsi
    enabled_inspect_interfaces = inspector
    enabled_management_interfaces = ipmitool
    enabled_power_interfaces = ipmitool
    ```

    配置接口默认值:

    ```shell
    [DEFAULT]
    default_deploy_interface = direct
    default_network_interface = neutron
    ```

    如果启用了任何使用Direct deploy的驱动,必须安装和配置镜像服务的Swift后端。Ceph对象网关(RADOS网关)也支持作为镜像服务的后端。

    6、重启ironic-conductor服务

    ```shell
    sudo systemctl restart openstack-ironic-conductor
    ```

6. 配置httpd服务

    1. 创建ironic要使用的httpd的root目录并设置属主属组,目录路径要和/etc/ironic/ironic.conf中[deploy]组中http_root配置项指定的路径保持一致。

        ```
        mkdir -p /var/lib/ironic/httproot
        chown ironic.ironic /var/lib/ironic/httproot
        ```

    2. 安装和配置httpd服务

        1. 安装httpd服务,已有请忽略

            ```
            yum install httpd -y
            ```

        2. 创建/etc/httpd/conf.d/openstack-ironic-httpd.conf文件,内容如下:

            ```
            Listen 8080

            <VirtualHost *:8080>
                ServerName ironic.openeuler.com

                ErrorLog "/var/log/httpd/openstack-ironic-httpd-error_log"
                CustomLog "/var/log/httpd/openstack-ironic-httpd-access_log" "%h %l %u %t \"%r\" %>s %b"

                DocumentRoot "/var/lib/ironic/httproot"
                <Directory "/var/lib/ironic/httproot">
                    Options Indexes FollowSymLinks
                    Require all granted
                </Directory>

                LogLevel warn
                AddDefaultCharset UTF-8
                EnableSendfile on
            </VirtualHost>
            ```

            注意监听的端口要和/etc/ironic/ironic.conf里[deploy]选项中http_url配置项中指定的端口一致。

        3. 重启httpd服务。

            ```
            systemctl restart httpd
            ```

7. deploy ramdisk镜像制作

    T版的ramdisk镜像支持通过ironic-python-agent服务或disk-image-builder工具制作,也可以使用社区最新的ironic-python-agent-builder。用户也可以自行选择其他工具制作。
    若使用T版原生工具,则需要安装对应的软件包。

    ```shell
    yum install openstack-ironic-python-agent
    # 或者
    yum install diskimage-builder
    ```

    具体的使用方法可以参考[官方文档](https://docs.openstack.org/ironic/queens/install/deploy-ramdisk.html)

    这里介绍下使用ironic-python-agent-builder构建ironic使用的deploy镜像的完整过程。

    1. 安装 ironic-python-agent-builder

        1. 安装工具:

            ```shell
            pip install ironic-python-agent-builder
            ```

        2. 修改以下文件中的python解释器:

            ```shell
            /usr/bin/yum
            /usr/libexec/urlgrabber-ext-down
            ```

        3. 安装其它必须的工具:

            ```shell
            yum install git
            ```

            由于`DIB`依赖`semanage`命令,所以在制作镜像之前确定该命令是否可用:`semanage --help`,如果提示无此命令,安装即可:

            ```shell
            # 先查询需要安装哪个包
            [root@localhost ~]# yum provides /usr/sbin/semanage
            已加载插件:fastestmirror
            Loading mirror speeds from cached hostfile
             * base: mirror.vcu.edu
             * extras: mirror.vcu.edu
             * updates: mirror.math.princeton.edu
            policycoreutils-python-2.5-34.el7.aarch64 : SELinux policy core python utilities
            源    :base
            匹配来源:
            文件名    :/usr/sbin/semanage
            # 安装
            [root@localhost ~]# yum install policycoreutils-python
            ```

    2. 
制作镜像 - - 如果是`arm`架构,需要添加: - ```shell - export ARCH=aarch64 - ``` - - 基本用法: - - ```shell - usage: ironic-python-agent-builder [-h] [-r RELEASE] [-o OUTPUT] [-e ELEMENT] - [-b BRANCH] [-v] [--extra-args EXTRA_ARGS] - distribution - - positional arguments: - distribution Distribution to use - - optional arguments: - -h, --help show this help message and exit - -r RELEASE, --release RELEASE - Distribution release to use - -o OUTPUT, --output OUTPUT - Output base file name - -e ELEMENT, --element ELEMENT - Additional DIB element to use - -b BRANCH, --branch BRANCH - If set, override the branch that is used for ironic- - python-agent and requirements - -v, --verbose Enable verbose logging in diskimage-builder - --extra-args EXTRA_ARGS - Extra arguments to pass to diskimage-builder - ``` - - 举例说明: - - ```shell - ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky - ``` - - 3. 允许ssh登陆 - - 初始化环境变量,然后制作镜像: - - ```shell - export DIB_DEV_USER_USERNAME=ipa \ - export DIB_DEV_USER_PWDLESS_SUDO=yes \ - export DIB_DEV_USER_PASSWORD='123' - ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky -e selinux-permissive -e devuser - ``` - - 4. 指定代码仓库 - - 初始化对应的环境变量,然后制作镜像: - - ```shell - # 指定仓库地址以及版本 - DIB_REPOLOCATION_ironic_python_agent=git@172.20.2.149:liuzz/ironic-python-agent.git - DIB_REPOREF_ironic_python_agent=origin/develop - - # 直接从gerrit上clone代码 - DIB_REPOLOCATION_ironic_python_agent=https://review.opendev.org/openstack/ironic-python-agent - DIB_REPOREF_ironic_python_agent=refs/changes/43/701043/1 - ``` - - 参考:[source-repositories](https://docs.openstack.org/diskimage-builder/latest/elements/source-repositories/README.html)。 - - 指定仓库地址及版本验证成功。 - - 5. 注意 - 原生的openstack里的pxe配置文件的模版不支持arm64架构,需要自己对原生openstack代码进行修改: - - 在T版中,社区的ironic仍然不支持arm64位的uefi pxe启动,表现为生成的grub.cfg文件(一般位于/tftpboot/下)格式不对而导致pxe启动失败 - - 需要用户对生成grub.cfg的代码逻辑自行修改。 - - ironic向ipa发送查询命令执行状态请求的tls报错: - - T版的ipa和ironic默认都会开启tls认证的方式向对方发送请求,跟据官网的说明进行关闭即可。 - - 1. 修改ironic配置文件(/etc/ironic/ironic.conf)下面的配置中添加ipa-insecure=1: - - ``` - [agent] - verify_ca = False - - [pxe] - pxe_append_params = nofb nomodeset vga=normal coreos.autologin ipa-insecure=1 - ``` - - 2. ramdisk镜像中添加ipa配置文件/etc/ironic_python_agent/ironic_python_agent.conf并配置tls的配置如下: - - /etc/ironic_python_agent/ironic_python_agent.conf (需要提前创建/etc/ironic_python_agent目录) - - ``` - [DEFAULT] - enable_auto_tls = False - ``` - - 设置权限: - - ``` - chown -R ipa.ipa /etc/ironic_python_agent/ - ``` - - 3. 修改ipa服务的服务启动文件,添加配置文件选项 - - vim usr/lib/systemd/system/ironic-python-agent.service - - ``` - [Unit] - Description=Ironic Python Agent - After=network-online.target - - [Service] - ExecStartPre=/sbin/modprobe vfat - ExecStart=/usr/local/bin/ironic-python-agent --config-file /etc/ironic_python_agent/ironic_python_agent.conf - Restart=always - RestartSec=30s - - [Install] - WantedBy=multi-user.target - ``` - - -在Train中,我们还提供了ironic-inspector等服务,用户可根据自身需求安装。 - -### Kolla 安装 - -Kolla为OpenStack服务提供生产环境可用的容器化部署的功能。 - -Kolla的安装十分简单,只需要安装对应的RPM包即可 - -``` -yum install openstack-kolla openstack-kolla-ansible -``` - -安装完后,就可以使用`kolla-ansible`, `kolla-build`, `kolla-genpwd`, `kolla-mergepwd`等命令进行相关的镜像制作和容器环境部署了。 - -### Trove 安装 -Trove是OpenStack的数据库服务,如果用户使用OpenStack提供的数据库服务则推荐使用该组件。否则,可以不用安装。 - -1. 
设置数据库 - - 数据库服务在数据库中存储信息,创建一个**trove**用户可以访问的**trove**数据库,替换**TROVE_DBPASSWORD**为合适的密码 - - ```sql - mysql -u root -p - - MariaDB [(none)]> CREATE DATABASE trove CHARACTER SET utf8; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'localhost' \ - IDENTIFIED BY 'TROVE_DBPASSWORD'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'%' \ - IDENTIFIED BY 'TROVE_DBPASSWORD'; - ``` - -2. 创建服务用户认证 - - 1、创建**Trove**服务用户 - - ```shell - openstack user create --domain default --password-prompt trove - openstack role add --project service --user trove admin - openstack service create --name trove --description "Database" database - ``` - **解释:** `TROVE_PASSWORD` 替换为`trove`用户的密码 - - 2、创建**Database**服务访问入口 - - ```shell - openstack endpoint create --region RegionOne database public http://controller:8779/v1.0/%\(tenant_id\)s - openstack endpoint create --region RegionOne database internal http://controller:8779/v1.0/%\(tenant_id\)s - openstack endpoint create --region RegionOne database admin http://controller:8779/v1.0/%\(tenant_id\)s - ``` - -3. 安装和配置**Trove**各组件 - - 1、安装**Trove**包 - ```shell script - yum install openstack-trove python3-troveclient - ``` - - 2. 配置`trove.conf` - ```shell script - vim /etc/trove/trove.conf - - [DEFAULT] - log_dir = /var/log/trove - trove_auth_url = http://controller:5000/ - nova_compute_url = http://controller:8774/v2 - cinder_url = http://controller:8776/v1 - swift_url = http://controller:8080/v1/AUTH_ - rpc_backend = rabbit - transport_url = rabbit://openstack:RABBIT_PASS@controller:5672 - auth_strategy = keystone - add_addresses = True - api_paste_config = /etc/trove/api-paste.ini - nova_proxy_admin_user = admin - nova_proxy_admin_pass = ADMIN_PASSWORD - nova_proxy_admin_tenant_name = service - taskmanager_manager = trove.taskmanager.manager.Manager - use_nova_server_config_drive = True - # Set these if using Neutron Networking - network_driver = trove.network.neutron.NeutronDriver - network_label_regex = .* - - [database] - connection = mysql+pymysql://trove:TROVE_DBPASSWORD@controller/trove - - [keystone_authtoken] - www_authenticate_uri = http://controller:5000/ - auth_url = http://controller:5000/ - auth_type = password - project_domain_name = default - user_domain_name = default - project_name = service - username = trove - password = TROVE_PASSWORD - ``` - **解释:** - - `[Default]`分组中`nova_compute_url` 和 `cinder_url` 为Nova和Cinder在Keystone中创建的endpoint - - `nova_proxy_XXX` 为一个能访问Nova服务的用户信息,上例中使用`admin`用户为例 - - `transport_url` 为`RabbitMQ`连接信息,`RABBIT_PASS`替换为RabbitMQ的密码 - - `[database]`分组中的`connection` 为前面在mysql中为Trove创建的数据库信息 - - Trove的用户信息中`TROVE_PASSWORD`替换为实际trove用户的密码 - - 3. 配置`trove-guestagent.conf` - ```shell script - vim /etc/trove/trove-guestagent.conf - - rabbit_host = controller - rabbit_password = RABBIT_PASS - trove_auth_url = http://controller:5000/ - ``` - **解释:** `guestagent`是trove中一个独立组件,需要预先内置到Trove通过Nova创建的虚拟 - 机镜像中,在创建好数据库实例后,会起guestagent进程,负责通过消息队列(RabbitMQ)向Trove上 - 报心跳,因此需要配置RabbitMQ的用户和密码信息。 - **从Victoria版开始,Trove使用一个统一的镜像来跑不同类型的数据库,数据库服务运行在Guest虚拟机的Docker容器中。** - - `RABBIT_PASS`替换为RabbitMQ的密码 - - 4. 生成数据`Trove`数据库表 - ```shell script - su -s /bin/sh -c "trove-manage db_sync" trove - ``` - -4. 完成安装配置 - 1. 配置**Trove**服务自启动 - ```shell script - systemctl enable openstack-trove-api.service \ - openstack-trove-taskmanager.service \ - openstack-trove-conductor.service - ``` - 2. 
启动服务 - ```shell script - systemctl start openstack-trove-api.service \ - openstack-trove-taskmanager.service \ - openstack-trove-conductor.service - ``` -### Swift 安装 - -Swift 提供了弹性可伸缩、高可用的分布式对象存储服务,适合存储大规模非结构化数据。 - -1. 创建服务凭证、API端点。 - - 创建服务凭证 - - ``` shell - #创建swift用户: - openstack user create --domain default --password-prompt swift - #为swift用户添加admin角色: - openstack role add --project service --user swift admin - #创建swift服务实体: - openstack service create --name swift --description "OpenStack Object Storage" object-store - ``` - - 创建swift API 端点: - - ```shell - openstack endpoint create --region RegionOne object-store public http://controller:8080/v1/AUTH_%\(project_id\)s - openstack endpoint create --region RegionOne object-store internal http://controller:8080/v1/AUTH_%\(project_id\)s - openstack endpoint create --region RegionOne object-store admin http://controller:8080/v1 - ``` - - -2. 安装软件包: - - ```shell - yum install openstack-swift-proxy python3-swiftclient python3-keystoneclient python3-keystonemiddleware memcached (CTL) - ``` - -3. 配置proxy-server相关配置 - - Swift RPM包里已经包含了一个基本可用的proxy-server.conf,只需要手动修改其中的ip和swift password即可。 - - ***注意*** - - **注意替换password为您在身份服务中为swift用户选择的密码** - -4. 安装和配置存储节点 (STG) - - 安装支持的程序包: - ```shell - yum install xfsprogs rsync - ``` - - 将/dev/vdb和/dev/vdc设备格式化为 XFS - - ```shell - mkfs.xfs /dev/vdb - mkfs.xfs /dev/vdc - ``` - - 创建挂载点目录结构: - - ```shell - mkdir -p /srv/node/vdb - mkdir -p /srv/node/vdc - ``` - - 找到新分区的 UUID: - - ```shell - blkid - ``` - - 编辑/etc/fstab文件并将以下内容添加到其中: - - ```shell - UUID="" /srv/node/vdb xfs noatime 0 2 - UUID="" /srv/node/vdc xfs noatime 0 2 - ``` - - 挂载设备: - - ```shell - mount /srv/node/vdb - mount /srv/node/vdc - ``` - ***注意*** - - **如果用户不需要容灾功能,以上步骤只需要创建一个设备即可,同时可以跳过下面的rsync配置** - - (可选)创建或编辑/etc/rsyncd.conf文件以包含以下内容: - - ```shell - [DEFAULT] - uid = swift - gid = swift - log file = /var/log/rsyncd.log - pid file = /var/run/rsyncd.pid - address = MANAGEMENT_INTERFACE_IP_ADDRESS - - [account] - max connections = 2 - path = /srv/node/ - read only = False - lock file = /var/lock/account.lock - - [container] - max connections = 2 - path = /srv/node/ - read only = False - lock file = /var/lock/container.lock - - [object] - max connections = 2 - path = /srv/node/ - read only = False - lock file = /var/lock/object.lock - ``` - **替换MANAGEMENT_INTERFACE_IP_ADDRESS为存储节点上管理网络的IP地址** - - 启动rsyncd服务并配置它在系统启动时启动: - - ```shell - systemctl enable rsyncd.service - systemctl start rsyncd.service - ``` - -5. 在存储节点安装和配置组件 (STG) - - 安装软件包: - - ```shell - yum install openstack-swift-account openstack-swift-container openstack-swift-object - ``` - - 编辑/etc/swift目录的account-server.conf、container-server.conf和object-server.conf文件,替换bind_ip为存储节点上管理网络的IP地址。 - - 确保挂载点目录结构的正确所有权: - - ```shell - chown -R swift:swift /srv/node - ``` - - 创建recon目录并确保其拥有正确的所有权: - - ```shell - mkdir -p /var/cache/swift - chown -R root:swift /var/cache/swift - chmod -R 775 /var/cache/swift - ``` - -6. 
创建账号环 (CTL)

    切换到/etc/swift目录。

    ```shell
    cd /etc/swift
    ```

    创建基础account.builder文件:

    ```shell
    swift-ring-builder account.builder create 10 1 1
    ```

    将每个存储节点添加到环中:

    ```shell
    swift-ring-builder account.builder add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6202 --device DEVICE_NAME --weight DEVICE_WEIGHT
    ```

    **替换STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS为存储节点上管理网络的IP地址。替换DEVICE_NAME为同一存储节点上的存储设备名称**

    ***注意***
    **对每个存储节点上的每个存储设备重复此命令**

    验证环的内容:

    ```shell
    swift-ring-builder account.builder
    ```

    重新平衡环:

    ```shell
    swift-ring-builder account.builder rebalance
    ```

7. 创建容器环 (CTL)

    切换到`/etc/swift`目录。

    创建基础`container.builder`文件:

    ```shell
    swift-ring-builder container.builder create 10 1 1
    ```

    将每个存储节点添加到环中:

    ```shell
    swift-ring-builder container.builder \
      add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6201 \
      --device DEVICE_NAME --weight 100
    ```

    **替换STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS为存储节点上管理网络的IP地址。替换DEVICE_NAME为同一存储节点上的存储设备名称**

    ***注意***
    **对每个存储节点上的每个存储设备重复此命令**

    验证环的内容:

    ```shell
    swift-ring-builder container.builder
    ```

    重新平衡环:

    ```shell
    swift-ring-builder container.builder rebalance
    ```

8. 创建对象环 (CTL)

    切换到`/etc/swift`目录。

    创建基础`object.builder`文件:

    ```shell
    swift-ring-builder object.builder create 10 1 1
    ```

    将每个存储节点添加到环中:

    ```shell
    swift-ring-builder object.builder \
      add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6200 \
      --device DEVICE_NAME --weight 100
    ```

    **替换STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS为存储节点上管理网络的IP地址。替换DEVICE_NAME为同一存储节点上的存储设备名称**

    ***注意***
    **对每个存储节点上的每个存储设备重复此命令**

    验证环的内容:

    ```shell
    swift-ring-builder object.builder
    ```

    重新平衡环:

    ```shell
    swift-ring-builder object.builder rebalance
    ```

    分发环配置文件:

    将`account.ring.gz`,`container.ring.gz`以及`object.ring.gz`文件复制到每个存储节点和运行代理服务的任何其他节点上的`/etc/swift`目录。
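    **补充**

    以下为分发 ring 文件的示例命令,假设存在一个主机名为 `storage01` 的独立存储节点,请按实际环境中的节点列表替换;如果是 all in one 部署,文件已在本机,无需复制。

    ```shell
    scp /etc/swift/account.ring.gz /etc/swift/container.ring.gz \
        /etc/swift/object.ring.gz root@storage01:/etc/swift/
    ```

9. 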
完成安装 - - 编辑`/etc/swift/swift.conf`文件 - - ``` shell - [swift-hash] - swift_hash_path_suffix = test-hash - swift_hash_path_prefix = test-hash - - [storage-policy:0] - name = Policy-0 - default = yes - ``` - - **用唯一值替换 test-hash** - - 将swift.conf文件复制到/etc/swift每个存储节点和运行代理服务的任何其他节点上的目录。 - - 在所有节点上,确保配置目录的正确所有权: - - ```shell - chown -R root:swift /etc/swift - ``` - - 在控制器节点和运行代理服务的任何其他节点上,启动对象存储代理服务及其依赖项,并将它们配置为在系统启动时启动: - - ```shell - systemctl enable openstack-swift-proxy.service memcached.service - systemctl start openstack-swift-proxy.service memcached.service - ``` - - 在存储节点上,启动对象存储服务并将它们配置为在系统启动时启动: - - ```shell - systemctl enable openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service - - systemctl start openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service - - systemctl enable openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service - - systemctl start openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service - - systemctl enable openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service - - systemctl start openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service - ``` - -### Cyborg 安装 - -Cyborg为OpenStack提供加速器设备的支持,包括 GPU, FPGA, ASIC, NP, SoCs, NVMe/NOF SSDs, ODP, DPDK/SPDK等等。 - -1. 初始化对应数据库 - -``` -CREATE DATABASE cyborg; -GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'localhost' IDENTIFIED BY 'CYBORG_DBPASS'; -GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'%' IDENTIFIED BY 'CYBORG_DBPASS'; -``` - -2. 创建对应Keystone资源对象 - -``` -$ openstack user create --domain default --password-prompt cyborg -$ openstack role add --project service --user cyborg admin -$ openstack service create --name cyborg --description "Acceleration Service" accelerator - -$ openstack endpoint create --region RegionOne \ - accelerator public http://:6666/v1 -$ openstack endpoint create --region RegionOne \ - accelerator internal http://:6666/v1 -$ openstack endpoint create --region RegionOne \ - accelerator admin http://:6666/v1 -``` - -3. 安装Cyborg - -``` -yum install openstack-cyborg -``` - -4. 
配置Cyborg - -修改`/etc/cyborg/cyborg.conf` - -``` -[DEFAULT] -transport_url = rabbit://%RABBITMQ_USER%:%RABBITMQ_PASSWORD%@%OPENSTACK_HOST_IP%:5672/ -use_syslog = False -state_path = /var/lib/cyborg -debug = True - -[database] -connection = mysql+pymysql://%DATABASE_USER%:%DATABASE_PASSWORD%@%OPENSTACK_HOST_IP%/cyborg - -[service_catalog] -project_domain_id = default -user_domain_id = default -project_name = service -password = PASSWORD -username = cyborg -auth_url = http://%OPENSTACK_HOST_IP%/identity -auth_type = password - -[placement] -project_domain_name = Default -project_name = service -user_domain_name = Default -password = PASSWORD -username = placement -auth_url = http://%OPENSTACK_HOST_IP%/identity -auth_type = password - -[keystone_authtoken] -memcached_servers = localhost:11211 -project_domain_name = Default -project_name = service -user_domain_name = Default -password = PASSWORD -username = cyborg -auth_url = http://%OPENSTACK_HOST_IP%/identity -auth_type = password -``` - -自行修改对应的用户名、密码、IP等信息 - -5. 同步数据库表格 - -``` -cyborg-dbsync --config-file /etc/cyborg/cyborg.conf upgrade -``` - -6. 启动Cyborg服务 - -``` -systemctl enable openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent -systemctl start openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent -``` - -### Aodh 安装 - -1. 创建数据库 - -``` -CREATE DATABASE aodh; - -GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'localhost' IDENTIFIED BY 'AODH_DBPASS'; - -GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'%' IDENTIFIED BY 'AODH_DBPASS'; -``` - -2. 创建对应Keystone资源对象 - -``` -openstack user create --domain default --password-prompt aodh - -openstack role add --project service --user aodh admin - -openstack service create --name aodh --description "Telemetry" alarming - -openstack endpoint create --region RegionOne alarming public http://controller:8042 - -openstack endpoint create --region RegionOne alarming internal http://controller:8042 - -openstack endpoint create --region RegionOne alarming admin http://controller:8042 -``` - -3. 安装Aodh - -``` -yum install openstack-aodh-api openstack-aodh-evaluator openstack-aodh-notifier openstack-aodh-listener openstack-aodh-expirer python3-aodhclient -``` - -4. 修改配置文件 - -``` -[database] -connection = mysql+pymysql://aodh:AODH_DBPASS@controller/aodh - -[DEFAULT] -transport_url = rabbit://openstack:RABBIT_PASS@controller -auth_strategy = keystone - -[keystone_authtoken] -www_authenticate_uri = http://controller:5000 -auth_url = http://controller:5000 -memcached_servers = controller:11211 -auth_type = password -project_domain_id = default -user_domain_id = default -project_name = service -username = aodh -password = AODH_PASS - -[service_credentials] -auth_type = password -auth_url = http://controller:5000/v3 -project_domain_id = default -user_domain_id = default -project_name = service -username = aodh -password = AODH_PASS -interface = internalURL -region_name = RegionOne -``` - -5. 初始化数据库 - -``` -aodh-dbsync -``` - -6. 启动Aodh服务 - -``` -systemctl enable openstack-aodh-api.service openstack-aodh-evaluator.service openstack-aodh-notifier.service openstack-aodh-listener.service - -systemctl start openstack-aodh-api.service openstack-aodh-evaluator.service openstack-aodh-notifier.service openstack-aodh-listener.service -``` - -### Gnocchi 安装 - -1. 创建数据库 - -``` -CREATE DATABASE gnocchi; - -GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'localhost' IDENTIFIED BY 'GNOCCHI_DBPASS'; - -GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'%' IDENTIFIED BY 'GNOCCHI_DBPASS'; -``` - -2. 
创建对应Keystone资源对象 - -``` -openstack user create --domain default --password-prompt gnocchi - -openstack role add --project service --user gnocchi admin - -openstack service create --name gnocchi --description "Metric Service" metric - -openstack endpoint create --region RegionOne metric public http://controller:8041 - -openstack endpoint create --region RegionOne metric internal http://controller:8041 - -openstack endpoint create --region RegionOne metric admin http://controller:8041 -``` - -3. 安装Gnocchi - -``` -yum install openstack-gnocchi-api openstack-gnocchi-metricd python3-gnocchiclient -``` - -4. 修改配置文件`/etc/gnocchi/gnocchi.conf` - -``` -[api] -auth_mode = keystone -port = 8041 -uwsgi_mode = http-socket - -[keystone_authtoken] -auth_type = password -auth_url = http://controller:5000/v3 -project_domain_name = Default -user_domain_name = Default -project_name = service -username = gnocchi -password = GNOCCHI_PASS -interface = internalURL -region_name = RegionOne - -[indexer] -url = mysql+pymysql://gnocchi:GNOCCHI_DBPASS@controller/gnocchi - -[storage] -# coordination_url is not required but specifying one will improve -# performance with better workload division across workers. -coordination_url = redis://controller:6379 -file_basepath = /var/lib/gnocchi -driver = file -``` - -5. 初始化数据库 - -``` -gnocchi-upgrade -``` - -6. 启动Gnocchi服务 - -``` -systemctl enable openstack-gnocchi-api.service openstack-gnocchi-metricd.service - -systemctl start openstack-gnocchi-api.service openstack-gnocchi-metricd.service -``` - -### Ceilometer 安装 - -1. 创建对应Keystone资源对象 - -``` -openstack user create --domain default --password-prompt ceilometer - -openstack role add --project service --user ceilometer admin - -openstack service create --name ceilometer --description "Telemetry" metering -``` - -2. 安装Ceilometer - -``` -yum install openstack-ceilometer-notification openstack-ceilometer-central -``` - -3. 修改配置文件`/etc/ceilometer/pipeline.yaml` - -``` -publishers: - # set address of Gnocchi - # + filter out Gnocchi-related activity meters (Swift driver) - # + set default archive policy - - gnocchi://?filter_project=service&archive_policy=low -``` - -4. 修改配置文件`/etc/ceilometer/ceilometer.conf` - -``` -[DEFAULT] -transport_url = rabbit://openstack:RABBIT_PASS@controller - -[service_credentials] -auth_type = password -auth_url = http://controller:5000/v3 -project_domain_id = default -user_domain_id = default -project_name = service -username = ceilometer -password = CEILOMETER_PASS -interface = internalURL -region_name = RegionOne -``` - -5. 初始化数据库 - -``` -ceilometer-upgrade -``` - -6. 启动Ceilometer服务 - -``` -systemctl enable openstack-ceilometer-notification.service openstack-ceilometer-central.service - -systemctl start openstack-ceilometer-notification.service openstack-ceilometer-central.service -``` - -### Heat 安装 - -1. 创建**heat**数据库,并授予**heat**数据库正确的访问权限,替换**HEAT_DBPASS**为合适的密码 - -``` -CREATE DATABASE heat; -GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'localhost' IDENTIFIED BY 'HEAT_DBPASS'; -GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'%' IDENTIFIED BY 'HEAT_DBPASS'; -``` - -2. 创建服务凭证,创建**heat**用户,并为其增加**admin**角色 - -``` -openstack user create --domain default --password-prompt heat -openstack role add --project service --user heat admin -``` - -3. 
创建**heat**和**heat-cfn**服务及其对应的API端点 - -``` -openstack service create --name heat --description "Orchestration" orchestration -openstack service create --name heat-cfn --description "Orchestration" cloudformation -openstack endpoint create --region RegionOne orchestration public http://controller:8004/v1/%\(tenant_id\)s -openstack endpoint create --region RegionOne orchestration internal http://controller:8004/v1/%\(tenant_id\)s -openstack endpoint create --region RegionOne orchestration admin http://controller:8004/v1/%\(tenant_id\)s -openstack endpoint create --region RegionOne cloudformation public http://controller:8000/v1 -openstack endpoint create --region RegionOne cloudformation internal http://controller:8000/v1 -openstack endpoint create --region RegionOne cloudformation admin http://controller:8000/v1 -``` - -4. 创建stack管理的额外信息,包括**heat**domain及其对应domain的admin用户**heat_domain_admin**, -**heat_stack_owner**角色,**heat_stack_user**角色 - -``` -openstack user create --domain heat --password-prompt heat_domain_admin -openstack role add --domain heat --user-domain heat --user heat_domain_admin admin -openstack role create heat_stack_owner -openstack role create heat_stack_user -``` - -5. 安装软件包 - -``` -yum install openstack-heat-api openstack-heat-api-cfn openstack-heat-engine -``` - -6. 修改配置文件`/etc/heat/heat.conf` - -``` -[DEFAULT] -transport_url = rabbit://openstack:RABBIT_PASS@controller -heat_metadata_server_url = http://controller:8000 -heat_waitcondition_server_url = http://controller:8000/v1/waitcondition -stack_domain_admin = heat_domain_admin -stack_domain_admin_password = HEAT_DOMAIN_PASS -stack_user_domain_name = heat - -[database] -connection = mysql+pymysql://heat:HEAT_DBPASS@controller/heat - -[keystone_authtoken] -www_authenticate_uri = http://controller:5000 -auth_url = http://controller:5000 -memcached_servers = controller:11211 -auth_type = password -project_domain_name = default -user_domain_name = default -project_name = service -username = heat -password = HEAT_PASS - -[trustee] -auth_type = password -auth_url = http://controller:5000 -username = heat -password = HEAT_PASS -user_domain_name = default - -[clients_keystone] -auth_uri = http://controller:5000 -``` - -7. 初始化**heat**数据库表 - -``` -su -s /bin/sh -c "heat-manage db_sync" heat -``` - -8. 启动服务 - -``` -systemctl enable openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service -systemctl start openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service -``` - -## 快速安装 OpenStack - -OpenStack SIG还提供了一键部署OpenStack all in one或三节点的ansible脚本,用户可以使用该脚本快速部署一套基于openEuler RPM的OpenStack环境。下面以all in one举例说明使用方法 - -1. 安装OpenStack SIG工具 - - ```shell - pip install openstack-sig-tool - ``` - -2. 配置openstack yum 源 - - ```shell - yum install openstack-release-train - ``` - - **注意**:如果你的环境的YUM源没有启用EPOL,需要同时配置EPOL - - ```shell - vi /etc/yum.repos.d/openEuler.repo - - [EPOL] - name=EPOL - baseurl=http://repo.openeuler.org/openEuler-22.03-LTS/EPOL/main/$basearch/ - enabled=1 - gpgcheck=1 - gpgkey=http://repo.openeuler.org/openEuler-22.03-LTS/OS/$basearch/RPM-GPG-KEY-openEuler - EOF - -3. 
刷新ansible配置 - - 打开`/usr/local/etc/inventory/all_in_one.yaml`,根据当前机器环境和需求修改对应配置。内容如下 - - ```shell - all: - hosts: - controller: - ansible_host: - ansible_ssh_private_key_file: - ansible_ssh_user: root - vars: - mysql_root_password: root - mysql_project_password: root - rabbitmq_password: root - project_identity_password: root - enabled_service: - - keystone - - neutron - - cinder - - placement - - nova - - glance - - horizon - - aodh - - ceilometer - - cyborg - - gnocchi - - kolla - - heat - - swift - - trove - - tempest - neutron_provider_interface_name: br-ex - default_ext_subnet_range: 10.100.100.0/24 - default_ext_subnet_gateway: 10.100.100.1 - neutron_dataplane_interface_name: eth1 - cinder_block_device: vdb - swift_storage_devices: - - vdc - swift_hash_path_suffix: ash - swift_hash_path_prefix: has - children: - compute: - hosts: controller - storage: - hosts: controller - network: - hosts: controller - vars: - test-key: test-value - dashboard: - hosts: controller - vars: - allowed_host: '*' - kolla: - hosts: controller - vars: - # We add openEuler OS support for kolla in OpenStack Queens/Rocky release - # Set this var to true if you want to use it in Q/R - openeuler_plugin: false - ``` - - **关键配置** - - | 配置项 | 解释 | - |---|---| - | ansible_host | all in one节点IP | - | ansible_ssh_private_key_file | ansible脚本登录all in one节点时使用的登录秘钥 | - | ansible_ssh_user | ansible脚本登录all in one节点时使用的登录用户 | - | enabled_service | 安装服务列表,根据用户需求自行删减 | - | neutron_provider_interface_name | neutron L3网桥名称 | - | default_ext_subnet_range | neutron私网IP段 | - | default_ext_subnet_gateway | neutron私网gateway | - | neutron_dataplane_interface_name | neutron使用的网卡,推荐使用一张新的网卡,以免和现有网卡冲突,发现all in one主机断连的情况 | - | cinder_block_device | cinder使用的卷设备名 | - | swift_storage_devices | swift使用的卷设备名 | - -4. 执行安装命令 - - ```shell - oos env setup all_in_one - ``` - - 该命令执行后,OpenStack all in one环境就部署成功了 - - 环境变量文件在当前用户的根目录下,名叫`.admin-openrc` - -5. 
初始化tempest环境 - - 如果用户想使用该环境运行tempest测试的话,可以执行命令`oos env init all_in_one`,会自动把tempest需要的OpenStack资源自动创建好。 - - 命令执行成功后,在用户的根目录下会生成`mytest`目录,进入其中就可以执行`tempest run`命令了。 diff --git a/docs/install/openEuler-22.03-LTS/OpenStack-wallaby.md b/docs/install/openEuler-22.03-LTS/OpenStack-wallaby.md deleted file mode 100644 index 0533954b4600903922c98bef73d60ab53f2e0d4b..0000000000000000000000000000000000000000 --- a/docs/install/openEuler-22.03-LTS/OpenStack-wallaby.md +++ /dev/null @@ -1,3220 +0,0 @@ -# OpenStack-Wallaby 部署指南 - - - -- [OpenStack-Wallaby 部署指南](#openstack-wallaby-部署指南) - - [OpenStack 简介](#openstack-简介) - - [约定](#约定) - - [准备环境](#准备环境) - - [环境配置](#环境配置) - - [安装 SQL DataBase](#安装-sql-database) - - [安装 RabbitMQ](#安装-rabbitmq) - - [安装 Memcached](#安装-memcached) - - [安装 OpenStack](#安装-openstack) - - [Keystone 安装](#keystone-安装) - - [Glance 安装](#glance-安装) - - [Placement安装](#placement安装) - - [Nova 安装](#nova-安装) - - [Neutron 安装](#neutron-安装) - - [Cinder 安装](#cinder-安装) - - [horizon 安装](#horizon-安装) - - [Tempest 安装](#tempest-安装) - - [Ironic 安装](#ironic-安装) - - [Kolla 安装](#kolla-安装) - - [Trove 安装](#trove-安装) - - [Swift 安装](#swift-安装) - - [Cyborg 安装](#cyborg-安装) - - [Aodh 安装](#aodh-安装) - - [Gnocchi 安装](#gnocchi-安装) - - [Ceilometer 安装](#ceilometer-安装) - - [Heat 安装](#heat-安装) - - [快速安装 OpenStack](#快速安装-openstack) - - -## OpenStack 简介 - -OpenStack 是一个社区,也是一个项目。它提供了一个部署云的操作平台或工具集,为组织提供可扩展的、灵活的云计算。 - -作为一个开源的云计算管理平台,OpenStack 由nova、cinder、neutron、glance、keystone、horizon等几个主要的组件组合起来完成具体工作。OpenStack 支持几乎所有类型的云环境,项目目标是提供实施简单、可大规模扩展、丰富、标准统一的云计算管理平台。OpenStack 通过各种互补的服务提供了基础设施即服务(IaaS)的解决方案,每个服务提供 API 进行集成。 - -openEuler 22.03 LTS 版本官方源已经支持 OpenStack-Wallaby 版本,用户可以配置好 yum 源后根据此文档进行 OpenStack 部署。 - -## 约定 - -OpenStack 支持多种形态部署,此文档支持`ALL in One`以及`Distributed`两种部署方式,按照如下方式约定: - -`ALL in One`模式: - -```text -忽略所有可能的后缀 -``` - -`Distributed`模式: - -```text -以 `(CTL)` 为后缀表示此条配置或者命令仅适用`控制节点` -以 `(CPT)` 为后缀表示此条配置或者命令仅适用`计算节点` -以 `(STG)` 为后缀表示此条配置或者命令仅适用`存储节点` -除此之外表示此条配置或者命令同时适用`控制节点`和`计算节点` -``` - -***注意*** - -涉及到以上约定的服务如下: - -- Cinder -- Nova -- Neutron - -## 准备环境 - -### 环境配置 - -1. 配置 22.03 LTS 官方yum源,需要启用EPOL软件仓以支持OpenStack - - ```shell - yum update - yum install openstack-release-wallaby - yum clean all && yum makecache - ``` - - **注意**:如果你的环境的YUM源没有启用EPOL,需要同时配置EPOL - - ```shell - vi /etc/yum.repos.d/openEuler.repo - - [EPOL] - name=EPOL - baseurl=http://repo.openeuler.org/openEuler-22.03-LTS/EPOL/main/$basearch/ - enabled=1 - gpgcheck=1 - gpgkey=http://repo.openeuler.org/openEuler-22.03-LTS/OS/$basearch/RPM-GPG-KEY-openEuler - EOF - -2. 修改主机名以及映射 - - 设置各个节点的主机名 - - ```shell - hostnamectl set-hostname controller (CTL) - hostnamectl set-hostname compute (CPT) - ``` - - 假设controller节点的IP是`10.0.0.11`,compute节点的IP是`10.0.0.12`(如果存在的话),则于`/etc/hosts`新增如下: - - ```shell - 10.0.0.11 controller - 10.0.0.12 compute - ``` - -### 安装 SQL DataBase - -1. 执行如下命令,安装软件包。 - - ```shell - yum install mariadb mariadb-server python3-PyMySQL - ``` - -2. 执行如下命令,创建并编辑 `/etc/my.cnf.d/openstack.cnf` 文件。 - - ```shell - vim /etc/my.cnf.d/openstack.cnf - - [mysqld] - bind-address = 10.0.0.11 - default-storage-engine = innodb - innodb_file_per_table = on - max_connections = 4096 - collation-server = utf8_general_ci - character-set-server = utf8 - ``` - - ***注意*** - - **其中 `bind-address` 设置为控制节点的管理IP地址。** - -3. 启动 DataBase 服务,并为其配置开机自启动: - - ```shell - systemctl enable mariadb.service - systemctl start mariadb.service - ``` - -4. 
配置DataBase的默认密码(可选) - - ```shell - mysql_secure_installation - ``` - - ***注意*** - - **根据提示进行即可** - -### 安装 RabbitMQ - -1. 执行如下命令,安装软件包。 - - ```shell - yum install rabbitmq-server - ``` - -2. 启动 RabbitMQ 服务,并为其配置开机自启动。 - - ```shell - systemctl enable rabbitmq-server.service - systemctl start rabbitmq-server.service - ``` - -3. 添加 OpenStack用户。 - - ```shell - rabbitmqctl add_user openstack RABBIT_PASS - ``` - - ***注意*** - - **替换 `RABBIT_PASS`,为 OpenStack 用户设置密码** - -4. 设置openstack用户权限,允许进行配置、写、读: - - ```shell - rabbitmqctl set_permissions openstack ".*" ".*" ".*" - ``` - -### 安装 Memcached - -1. 执行如下命令,安装依赖软件包。 - - ```shell - yum install memcached python3-memcached - ``` - -2. 编辑 `/etc/sysconfig/memcached` 文件。 - - ```shell - vim /etc/sysconfig/memcached - - OPTIONS="-l 127.0.0.1,::1,controller" - ``` - -3. 执行如下命令,启动 Memcached 服务,并为其配置开机启动。 - - ```shell - systemctl enable memcached.service - systemctl start memcached.service - ``` - - ***注意*** - - **服务启动后,可以通过命令`memcached-tool controller stats`确保启动正常,服务可用,其中可以将`controller`替换为控制节点的管理IP地址。** - -## 安装 OpenStack - -### Keystone 安装 - -1. 创建 keystone 数据库并授权。 - - ``` sql - mysql -u root -p - - MariaDB [(none)]> CREATE DATABASE keystone; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \ - IDENTIFIED BY 'KEYSTONE_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \ - IDENTIFIED BY 'KEYSTONE_DBPASS'; - MariaDB [(none)]> exit - ``` - - ***注意*** - - **替换 `KEYSTONE_DBPASS`,为 Keystone 数据库设置密码** - -2. 安装软件包。 - - ```shell - yum install openstack-keystone httpd mod_wsgi - ``` - -3. 配置keystone相关配置 - - ```shell - vim /etc/keystone/keystone.conf - - [database] - connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone - - [token] - provider = fernet - ``` - - ***解释*** - - [database]部分,配置数据库入口 - - [token]部分,配置token provider - - ***注意:*** - - **替换 `KEYSTONE_DBPASS` 为 Keystone 数据库的密码** - -4. 同步数据库。 - - ```shell - su -s /bin/sh -c "keystone-manage db_sync" keystone - ``` - -5. 初始化Fernet密钥仓库。 - - ```shell - keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone - keystone-manage credential_setup --keystone-user keystone --keystone-group keystone - ``` - -6. 启动服务。 - - ```shell - keystone-manage bootstrap --bootstrap-password ADMIN_PASS \ - --bootstrap-admin-url http://controller:5000/v3/ \ - --bootstrap-internal-url http://controller:5000/v3/ \ - --bootstrap-public-url http://controller:5000/v3/ \ - --bootstrap-region-id RegionOne - ``` - - ***注意*** - - **替换 `ADMIN_PASS`,为 admin 用户设置密码** - -7. 配置Apache HTTP server - - ```shell - vim /etc/httpd/conf/httpd.conf - - ServerName controller - ``` - - ```shell - ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/ - ``` - - ***解释*** - - 配置 `ServerName` 项引用控制节点 - - ***注意*** - **如果 `ServerName` 项不存在则需要创建** - -8. 启动Apache HTTP服务。 - - ```shell - systemctl enable httpd.service - systemctl start httpd.service - ``` - -9. 创建环境变量配置。 - - ```shell - cat << EOF >> ~/.admin-openrc - export OS_PROJECT_DOMAIN_NAME=Default - export OS_USER_DOMAIN_NAME=Default - export OS_PROJECT_NAME=admin - export OS_USERNAME=admin - export OS_PASSWORD=ADMIN_PASS - export OS_AUTH_URL=http://controller:5000/v3 - export OS_IDENTITY_API_VERSION=3 - export OS_IMAGE_API_VERSION=2 - EOF - ``` - - ***注意*** - - **替换 `ADMIN_PASS` 为 admin 用户的密码** - -10. 
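An optional check, assuming `controller` resolves to the controller node as configured in `/etc/hosts`. Requesting the Identity API version document confirms Apache is serving Keystone before any projects or users are created:

```shell
# Keystone should answer on port 5000 with a JSON version document
curl http://controller:5000/v3
```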
依次创建domain, projects, users, roles,需要先安装好python3-openstackclient: - - ```shell - yum install python3-openstackclient - ``` - - 导入环境变量 - - ```shell - source ~/.admin-openrc - ``` - - 创建project `service`,其中 domain `default` 在 keystone-manage bootstrap 时已创建 - - ```shell - openstack domain create --description "An Example Domain" example - ``` - - ```shell - openstack project create --domain default --description "Service Project" service - ``` - - 创建(non-admin)project `myproject`,user `myuser` 和 role `myrole`,为 `myproject` 和 `myuser` 添加角色`myrole` - - ```shell - openstack project create --domain default --description "Demo Project" myproject - openstack user create --domain default --password-prompt myuser - openstack role create myrole - openstack role add --project myproject --user myuser myrole - ``` - -11. 验证 - - 取消临时环境变量OS_AUTH_URL和OS_PASSWORD: - - ```shell - source ~/.admin-openrc - unset OS_AUTH_URL OS_PASSWORD - ``` - - 为admin用户请求token: - - ```shell - openstack --os-auth-url http://controller:5000/v3 \ - --os-project-domain-name Default --os-user-domain-name Default \ - --os-project-name admin --os-username admin token issue - ``` - - 为myuser用户请求token: - - ```shell - openstack --os-auth-url http://controller:5000/v3 \ - --os-project-domain-name Default --os-user-domain-name Default \ - --os-project-name myproject --os-username myuser token issue - ``` - -### Glance 安装 - -1. 创建数据库、服务凭证和 API 端点 - - 创建数据库: - - ```sql - mysql -u root -p - - MariaDB [(none)]> CREATE DATABASE glance; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \ - IDENTIFIED BY 'GLANCE_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \ - IDENTIFIED BY 'GLANCE_DBPASS'; - MariaDB [(none)]> exit - ``` - - ***注意:*** - - **替换 `GLANCE_DBPASS`,为 glance 数据库设置密码** - - 创建服务凭证 - - ```shell - source ~/.admin-openrc - - openstack user create --domain default --password-prompt glance - openstack role add --project service --user glance admin - openstack service create --name glance --description "OpenStack Image" image - ``` - - 创建镜像服务API端点: - - ```shell - openstack endpoint create --region RegionOne image public http://controller:9292 - openstack endpoint create --region RegionOne image internal http://controller:9292 - openstack endpoint create --region RegionOne image admin http://controller:9292 - ``` - -2. 安装软件包 - - ```shell - yum install openstack-glance - ``` - -3. 配置glance相关配置: - - ```shell - vim /etc/glance/glance-api.conf - - [database] - connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance - - [keystone_authtoken] - www_authenticate_uri = http://controller:5000 - auth_url = http://controller:5000 - memcached_servers = controller:11211 - auth_type = password - project_domain_name = Default - user_domain_name = Default - project_name = service - username = glance - password = GLANCE_PASS - - [paste_deploy] - flavor = keystone - - [glance_store] - stores = file,http - default_store = file - filesystem_store_datadir = /var/lib/glance/images/ - ``` - - ***解释:*** - - [database]部分,配置数据库入口 - - [keystone_authtoken] [paste_deploy]部分,配置身份认证服务入口 - - [glance_store]部分,配置本地文件系统存储和镜像文件的位置 - - ***注意*** - - **替换 `GLANCE_DBPASS` 为 glance 数据库的密码** - - **替换 `GLANCE_PASS` 为 glance 用户的密码** - -4. 同步数据库: - - ```shell - su -s /bin/sh -c "glance-manage db_sync" glance - ``` - -5. 启动服务: - - ```shell - systemctl enable openstack-glance-api.service - systemctl start openstack-glance-api.service - ``` - -6. 
验证 - - 下载镜像 - - ```shell - source ~/.admin-openrc - - wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img - ``` - - ***注意*** - - **如果您使用的环境是鲲鹏架构,请下载aarch64版本的镜像;已对镜像cirros-0.5.2-aarch64-disk.img进行测试。** - - 向Image服务上传镜像: - - ```shell - openstack image create --disk-format qcow2 --container-format bare \ - --file cirros-0.4.0-x86_64-disk.img --public cirros - ``` - - 确认镜像上传并验证属性: - - ```shell - openstack image list - ``` - -### Placement安装 - -1. 创建数据库、服务凭证和 API 端点 - - 创建数据库: - - 作为 root 用户访问数据库,创建 placement 数据库并授权。 - - ```shell - mysql -u root -p - MariaDB [(none)]> CREATE DATABASE placement; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' \ - IDENTIFIED BY 'PLACEMENT_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' \ - IDENTIFIED BY 'PLACEMENT_DBPASS'; - MariaDB [(none)]> exit - ``` - - ***注意*** - - **替换 `PLACEMENT_DBPASS` 为 placement 数据库设置密码** - - ```shell - source admin-openrc - ``` - - 执行如下命令,创建 placement 服务凭证、创建 placement 用户以及添加‘admin’角色到用户‘placement’。 - - 创建Placement API服务 - - ```shell - openstack user create --domain default --password-prompt placement - openstack role add --project service --user placement admin - openstack service create --name placement --description "Placement API" placement - ``` - - 创建placement服务API端点: - - ```shell - openstack endpoint create --region RegionOne placement public http://controller:8778 - openstack endpoint create --region RegionOne placement internal http://controller:8778 - openstack endpoint create --region RegionOne placement admin http://controller:8778 - ``` - -2. 安装和配置 - - 安装软件包: - - ```shell - yum install openstack-placement-api - ``` - - 配置placement: - - 编辑 /etc/placement/placement.conf 文件: - - 在[placement_database]部分,配置数据库入口 - - 在[api] [keystone_authtoken]部分,配置身份认证服务入口 - - ```shell - # vim /etc/placement/placement.conf - [placement_database] - # ... - connection = mysql+pymysql://placement:PLACEMENT_DBPASS@controller/placement - [api] - # ... - auth_strategy = keystone - [keystone_authtoken] - # ... - auth_url = http://controller:5000/v3 - memcached_servers = controller:11211 - auth_type = password - project_domain_name = Default - user_domain_name = Default - project_name = service - username = placement - password = PLACEMENT_PASS - ``` - - 其中,替换 PLACEMENT_DBPASS 为 placement 数据库的密码,替换 PLACEMENT_PASS 为 placement 用户的密码。 - - 同步数据库: - - ```shell - su -s /bin/sh -c "placement-manage db sync" placement - ``` - - 启动httpd服务: - - ```shell - systemctl restart httpd - ``` - -3. 验证 - - 执行如下命令,执行状态检查: - - ```shell - . admin-openrc - placement-status upgrade check - ``` - - 安装osc-placement,列出可用的资源类别及特性: - - ```shell - yum install python3-osc-placement - openstack --os-placement-api-version 1.2 resource class list --sort-column name - openstack --os-placement-api-version 1.6 trait list --sort-column name - ``` - -### Nova 安装 - -1. 
创建数据库、服务凭证和 API 端点 - - 创建数据库: - - ```sql - mysql -u root -p (CTL) - - MariaDB [(none)]> CREATE DATABASE nova_api; - MariaDB [(none)]> CREATE DATABASE nova; - MariaDB [(none)]> CREATE DATABASE nova_cell0; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \ - IDENTIFIED BY 'NOVA_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \ - IDENTIFIED BY 'NOVA_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \ - IDENTIFIED BY 'NOVA_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \ - IDENTIFIED BY 'NOVA_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \ - IDENTIFIED BY 'NOVA_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \ - IDENTIFIED BY 'NOVA_DBPASS'; - MariaDB [(none)]> exit - ``` - - ***注意*** - - **替换NOVA_DBPASS,为nova数据库设置密码** - - ```shell - source ~/.admin-openrc (CTL) - ``` - - 创建nova服务凭证: - - ```shell - openstack user create --domain default --password-prompt nova (CTL) - openstack role add --project service --user nova admin (CTL) - openstack service create --name nova --description "OpenStack Compute" compute (CTL) - ``` - - 创建nova API端点: - - ```shell - openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1 (CTL) - openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1 (CTL) - openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1 (CTL) - ``` - -2. 安装软件包 - - ```shell - yum install openstack-nova-api openstack-nova-conductor \ (CTL) - openstack-nova-novncproxy openstack-nova-scheduler - - yum install openstack-nova-compute (CPT) - ``` - - ***注意*** - - **如果为arm64结构,还需要执行以下命令** - - ```shell - yum install edk2-aarch64 (CPT) - ``` - -3. 
配置nova相关配置 - - ```shell - vim /etc/nova/nova.conf - - [DEFAULT] - enabled_apis = osapi_compute,metadata - transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/ - my_ip = 10.0.0.1 - use_neutron = true - firewall_driver = nova.virt.firewall.NoopFirewallDriver - compute_driver=libvirt.LibvirtDriver (CPT) - instances_path = /var/lib/nova/instances/ (CPT) - lock_path = /var/lib/nova/tmp (CPT) - - [api_database] - connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api (CTL) - - [database] - connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova (CTL) - - [api] - auth_strategy = keystone - - [keystone_authtoken] - www_authenticate_uri = http://controller:5000/ - auth_url = http://controller:5000/ - memcached_servers = controller:11211 - auth_type = password - project_domain_name = Default - user_domain_name = Default - project_name = service - username = nova - password = NOVA_PASS - - [vnc] - enabled = true - server_listen = $my_ip - server_proxyclient_address = $my_ip - novncproxy_base_url = http://controller:6080/vnc_auto.html (CPT) - - [libvirt] - virt_type = qemu (CPT) - cpu_mode = custom (CPT) - cpu_model = cortex-a72 (CPT) - - [glance] - api_servers = http://controller:9292 - - [oslo_concurrency] - lock_path = /var/lib/nova/tmp (CTL) - - [placement] - region_name = RegionOne - project_domain_name = Default - project_name = service - auth_type = password - user_domain_name = Default - auth_url = http://controller:5000/v3 - username = placement - password = PLACEMENT_PASS - - [neutron] - auth_url = http://controller:5000 - auth_type = password - project_domain_name = default - user_domain_name = default - region_name = RegionOne - project_name = service - username = neutron - password = NEUTRON_PASS - service_metadata_proxy = true (CTL) - metadata_proxy_shared_secret = METADATA_SECRET (CTL) - ``` - - ***解释*** - - [default]部分,启用计算和元数据的API,配置RabbitMQ消息队列入口,配置my_ip,启用网络服务neutron; - - [api_database] [database]部分,配置数据库入口; - - [api] [keystone_authtoken]部分,配置身份认证服务入口; - - [vnc]部分,启用并配置远程控制台入口; - - [glance]部分,配置镜像服务API的地址; - - [oslo_concurrency]部分,配置lock path; - - [placement]部分,配置placement服务的入口。 - - ***注意*** - - **替换 `RABBIT_PASS` 为 RabbitMQ 中 openstack 账户的密码;** - - **配置 `my_ip` 为控制节点的管理IP地址;** - - **替换 `NOVA_DBPASS` 为nova数据库的密码;** - - **替换 `NOVA_PASS` 为nova用户的密码;** - - **替换 `PLACEMENT_PASS` 为placement用户的密码;** - - **替换 `NEUTRON_PASS` 为neutron用户的密码;** - - **替换`METADATA_SECRET`为合适的元数据代理secret。** - - **额外** - - 确定是否支持虚拟机硬件加速(x86架构): - - ```shell - egrep -c '(vmx|svm)' /proc/cpuinfo (CPT) - ``` - - 如果返回值为0则不支持硬件加速,需要配置libvirt使用QEMU而不是KVM: - - ```shell - vim /etc/nova/nova.conf (CPT) - - [libvirt] - virt_type = qemu - ``` - - 如果返回值为1或更大的值,则支持硬件加速,不需要进行额外的配置 - - ***注意*** - - **如果为arm64结构,还需要执行以下命令** - - ```shell - vim /etc/libvirt/qemu.conf - - nvram = ["/usr/share/AAVMF/AAVMF_CODE.fd: \ - /usr/share/AAVMF/AAVMF_VARS.fd", \ - "/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw: \ - /usr/share/edk2/aarch64/vars-template-pflash.raw"] - - vim /etc/qemu/firmware/edk2-aarch64.json - - { - "description": "UEFI firmware for ARM64 virtual machines", - "interface-types": [ - "uefi" - ], - "mapping": { - "device": "flash", - "executable": { - "filename": "/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw", - "format": "raw" - }, - "nvram-template": { - "filename": "/usr/share/edk2/aarch64/vars-template-pflash.raw", - "format": "raw" - } - }, - "targets": [ - { - "architecture": "aarch64", - "machines": [ - "virt-*" - ] - } - ], - "features": [ - - ], - "tags": [ - - ] - } - - (CPT) - ``` - -4. 
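An optional pre-check before syncing the databases, assuming the grants created earlier and the `NOVA_DBPASS` placeholder. Verifying that the `nova` account can reach its databases from the controller catches typos in the `connection` strings early:

```shell
# Should list nova, nova_api and nova_cell0 (replace NOVA_DBPASS with the real password)
mysql -u nova -pNOVA_DBPASS -h controller -e "SHOW DATABASES;"
```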
同步数据库 - - 同步nova-api数据库: - - ```shell - su -s /bin/sh -c "nova-manage api_db sync" nova (CTL) - ``` - - 注册cell0数据库: - - ```shell - su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova (CTL) - ``` - - 创建cell1 cell: - - ```shell - su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova (CTL) - ``` - - 同步nova数据库: - - ```shell - su -s /bin/sh -c "nova-manage db sync" nova (CTL) - ``` - - 验证cell0和cell1注册正确: - - ```shell - su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova (CTL) - ``` - - 添加计算节点到openstack集群 - - ```shell - su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova (CPT) - ``` - -5. 启动服务 - - ```shell - systemctl enable \ (CTL) - openstack-nova-api.service \ - openstack-nova-scheduler.service \ - openstack-nova-conductor.service \ - openstack-nova-novncproxy.service - - systemctl start \ (CTL) - openstack-nova-api.service \ - openstack-nova-scheduler.service \ - openstack-nova-conductor.service \ - openstack-nova-novncproxy.service - ``` - - ```shell - systemctl enable libvirtd.service openstack-nova-compute.service (CPT) - systemctl start libvirtd.service openstack-nova-compute.service (CPT) - ``` - -6. 验证 - - ```shell - source ~/.admin-openrc (CTL) - ``` - - 列出服务组件,验证每个流程都成功启动和注册: - - ```shell - openstack compute service list (CTL) - ``` - - 列出身份服务中的API端点,验证与身份服务的连接: - - ```shell - openstack catalog list (CTL) - ``` - - 列出镜像服务中的镜像,验证与镜像服务的连接: - - ```shell - openstack image list (CTL) - ``` - - 检查cells是否运作成功,以及其他必要条件是否已具备。 - - ```shell - nova-status upgrade check (CTL) - ``` - -### Neutron 安装 - -1. 创建数据库、服务凭证和 API 端点 - - 创建数据库: - - ```sql - mysql -u root -p (CTL) - - MariaDB [(none)]> CREATE DATABASE neutron; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \ - IDENTIFIED BY 'NEUTRON_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \ - IDENTIFIED BY 'NEUTRON_DBPASS'; - MariaDB [(none)]> exit - ``` - - ***注意*** - - **替换 `NEUTRON_DBPASS` 为 neutron 数据库设置密码。** - - ```shell - source ~/.admin-openrc (CTL) - ``` - - 创建neutron服务凭证 - - ```shell - openstack user create --domain default --password-prompt neutron (CTL) - openstack role add --project service --user neutron admin (CTL) - openstack service create --name neutron --description "OpenStack Networking" network (CTL) - ``` - - 创建Neutron服务API端点: - - ```shell - openstack endpoint create --region RegionOne network public http://controller:9696 (CTL) - openstack endpoint create --region RegionOne network internal http://controller:9696 (CTL) - openstack endpoint create --region RegionOne network admin http://controller:9696 (CTL) - ``` - -2. 安装软件包: - - ```shell - yum install openstack-neutron openstack-neutron-linuxbridge ebtables ipset \ (CTL) - openstack-neutron-ml2 - ``` - - ```shell - yum install openstack-neutron-linuxbridge ebtables ipset (CPT) - ``` - -3. 
配置neutron相关配置: - - 配置主体配置 - - ```shell - vim /etc/neutron/neutron.conf - - [database] - connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron (CTL) - - [DEFAULT] - core_plugin = ml2 (CTL) - service_plugins = router (CTL) - allow_overlapping_ips = true (CTL) - transport_url = rabbit://openstack:RABBIT_PASS@controller - auth_strategy = keystone - notify_nova_on_port_status_changes = true (CTL) - notify_nova_on_port_data_changes = true (CTL) - api_workers = 3 (CTL) - - [keystone_authtoken] - www_authenticate_uri = http://controller:5000 - auth_url = http://controller:5000 - memcached_servers = controller:11211 - auth_type = password - project_domain_name = Default - user_domain_name = Default - project_name = service - username = neutron - password = NEUTRON_PASS - - [nova] - auth_url = http://controller:5000 (CTL) - auth_type = password (CTL) - project_domain_name = Default (CTL) - user_domain_name = Default (CTL) - region_name = RegionOne (CTL) - project_name = service (CTL) - username = nova (CTL) - password = NOVA_PASS (CTL) - - [oslo_concurrency] - lock_path = /var/lib/neutron/tmp - ``` - - ***解释*** - - [database]部分,配置数据库入口; - - [default]部分,启用ml2插件和router插件,允许ip地址重叠,配置RabbitMQ消息队列入口; - - [default] [keystone]部分,配置身份认证服务入口; - - [default] [nova]部分,配置网络来通知计算网络拓扑的变化; - - [oslo_concurrency]部分,配置lock path。 - - ***注意*** - - **替换`NEUTRON_DBPASS`为 neutron 数据库的密码;** - - **替换`RABBIT_PASS`为 RabbitMQ中openstack 账户的密码;** - - **替换`NEUTRON_PASS`为 neutron 用户的密码;** - - **替换`NOVA_PASS`为 nova 用户的密码。** - - 配置ML2插件: - - ```shell - vim /etc/neutron/plugins/ml2/ml2_conf.ini - - [ml2] - type_drivers = flat,vlan,vxlan - tenant_network_types = vxlan - mechanism_drivers = linuxbridge,l2population - extension_drivers = port_security - - [ml2_type_flat] - flat_networks = provider - - [ml2_type_vxlan] - vni_ranges = 1:1000 - - [securitygroup] - enable_ipset = true - ``` - - 创建/etc/neutron/plugin.ini的符号链接 - - ```shell - ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini - ``` - - **注意** - - **[ml2]部分,启用 flat、vlan、vxlan 网络,启用 linuxbridge 及 l2population 机制,启用端口安全扩展驱动;** - - **[ml2_type_flat]部分,配置 flat 网络为 provider 虚拟网络;** - - **[ml2_type_vxlan]部分,配置 VXLAN 网络标识符范围;** - - **[securitygroup]部分,配置允许 ipset。** - - **补充** - - **l2 的具体配置可以根据用户需求自行修改,本文使用的是provider network + linuxbridge** - - 配置 Linux bridge 代理: - - ```shell - vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini - - [linux_bridge] - physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME - - [vxlan] - enable_vxlan = true - local_ip = OVERLAY_INTERFACE_IP_ADDRESS - l2_population = true - - [securitygroup] - enable_security_group = true - firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver - ``` - - ***解释*** - - [linux_bridge]部分,映射 provider 虚拟网络到物理网络接口; - - [vxlan]部分,启用 vxlan 覆盖网络,配置处理覆盖网络的物理网络接口 IP 地址,启用 layer-2 population; - - [securitygroup]部分,允许安全组,配置 linux bridge iptables 防火墙驱动。 - - ***注意*** - - **替换`PROVIDER_INTERFACE_NAME`为物理网络接口;** - - **替换`OVERLAY_INTERFACE_IP_ADDRESS`为控制节点的管理IP地址。** - - 配置Layer-3代理: - - ```shell - vim /etc/neutron/l3_agent.ini (CTL) - - [DEFAULT] - interface_driver = linuxbridge - ``` - - ***解释*** - - 在[default]部分,配置接口驱动为linuxbridge - - 配置DHCP代理: - - ```shell - vim /etc/neutron/dhcp_agent.ini (CTL) - - [DEFAULT] - interface_driver = linuxbridge - dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq - enable_isolated_metadata = true - ``` - - ***解释*** - - [default]部分,配置linuxbridge接口驱动、Dnsmasq DHCP驱动,启用隔离的元数据。 - - 配置metadata代理: - - ```shell - vim /etc/neutron/metadata_agent.ini (CTL) 
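    # The metadata agent proxies instance metadata requests to nova-api:
    # nova_metadata_host points at the controller, and metadata_proxy_shared_secret
    # must match the metadata_proxy_shared_secret configured in nova.conf's
    # [neutron] section (see the nova configuration step below).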
- - [DEFAULT] - nova_metadata_host = controller - metadata_proxy_shared_secret = METADATA_SECRET - ``` - - ***解释*** - - [default]部分,配置元数据主机和shared secret。 - - ***注意*** - - **替换`METADATA_SECRET`为合适的元数据代理secret。** - -4. 配置nova相关配置 - - ```shell - vim /etc/nova/nova.conf - - [neutron] - auth_url = http://controller:5000 - auth_type = password - project_domain_name = Default - user_domain_name = Default - region_name = RegionOne - project_name = service - username = neutron - password = NEUTRON_PASS - service_metadata_proxy = true (CTL) - metadata_proxy_shared_secret = METADATA_SECRET (CTL) - ``` - - ***解释*** - - [neutron]部分,配置访问参数,启用元数据代理,配置secret。 - - ***注意*** - - **替换`NEUTRON_PASS`为 neutron 用户的密码;** - - **替换`METADATA_SECRET`为合适的元数据代理secret。** - -5. 同步数据库: - - ```shell - su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \ - --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron - ``` - -6. 重启计算API服务: - - ```shell - systemctl restart openstack-nova-api.service - ``` - -7. 启动网络服务 - - ```shell - systemctl enable neutron-server.service neutron-linuxbridge-agent.service \ (CTL) - neutron-dhcp-agent.service neutron-metadata-agent.service \ - systemctl enable neutron-l3-agent.service - systemctl restart openstack-nova-api.service neutron-server.service (CTL) - neutron-linuxbridge-agent.service neutron-dhcp-agent.service \ - neutron-metadata-agent.service neutron-l3-agent.service - - systemctl enable neutron-linuxbridge-agent.service (CPT) - systemctl restart neutron-linuxbridge-agent.service openstack-nova-compute.service (CPT) - ``` - -8. 验证 - - 验证 neutron 代理启动成功: - - ```shell - openstack network agent list - ``` - -### Cinder 安装 - -1. 创建数据库、服务凭证和 API 端点 - - 创建数据库: - - ```sql - mysql -u root -p - - MariaDB [(none)]> CREATE DATABASE cinder; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \ - IDENTIFIED BY 'CINDER_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \ - IDENTIFIED BY 'CINDER_DBPASS'; - MariaDB [(none)]> exit - ``` - - ***注意*** - - **替换 `CINDER_DBPASS` 为cinder数据库设置密码。** - - ```shell - source ~/.admin-openrc - ``` - - 创建cinder服务凭证: - - ```shell - openstack user create --domain default --password-prompt cinder - openstack role add --project service --user cinder admin - openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2 - openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3 - ``` - - 创建块存储服务API端点: - - ```shell - openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(project_id\)s - openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(project_id\)s - openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(project_id\)s - openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s - openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s - openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s - ``` - -2. 安装软件包: - - ```shell - yum install openstack-cinder-api openstack-cinder-scheduler (CTL) - ``` - - ```shell - yum install lvm2 device-mapper-persistent-data scsi-target-utils rpcbind nfs-utils \ (STG) - openstack-cinder-volume openstack-cinder-backup - ``` - -3. 
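An optional check on the storage node, assuming the example device name `/dev/vdb` used in the next step. Before creating the LVM volume group it is worth confirming that the intended block device exists and carries no partitions or filesystems:

```shell
# /dev/vdb (and any additional devices planned for Swift) should appear empty
lsblk -f
```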
准备存储设备,以下仅为示例: - - ```shell - pvcreate /dev/vdb - vgcreate cinder-volumes /dev/vdb - - vim /etc/lvm/lvm.conf - - - devices { - ... - filter = [ "a/vdb/", "r/.*/"] - ``` - - ***解释*** - - 在devices部分,添加过滤以接受/dev/vdb设备拒绝其他设备。 - -4. 准备NFS - - ```shell - mkdir -p /root/cinder/backup - - cat << EOF >> /etc/export - /root/cinder/backup 192.168.1.0/24(rw,sync,no_root_squash,no_all_squash) - EOF - - ``` - -5. 配置cinder相关配置: - - ```shell - vim /etc/cinder/cinder.conf - - [DEFAULT] - transport_url = rabbit://openstack:RABBIT_PASS@controller - auth_strategy = keystone - my_ip = 10.0.0.11 - enabled_backends = lvm (STG) - backup_driver=cinder.backup.drivers.nfs.NFSBackupDriver (STG) - backup_share=HOST:PATH (STG) - - [database] - connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder - - [keystone_authtoken] - www_authenticate_uri = http://controller:5000 - auth_url = http://controller:5000 - memcached_servers = controller:11211 - auth_type = password - project_domain_name = Default - user_domain_name = Default - project_name = service - username = cinder - password = CINDER_PASS - - [oslo_concurrency] - lock_path = /var/lib/cinder/tmp - - [lvm] - volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver (STG) - volume_group = cinder-volumes (STG) - iscsi_protocol = iscsi (STG) - iscsi_helper = tgtadm (STG) - ``` - - ***解释*** - - [database]部分,配置数据库入口; - - [DEFAULT]部分,配置RabbitMQ消息队列入口,配置my_ip; - - [DEFAULT] [keystone_authtoken]部分,配置身份认证服务入口; - - [oslo_concurrency]部分,配置lock path。 - - ***注意*** - - **替换`CINDER_DBPASS`为 cinder 数据库的密码;** - - **替换`RABBIT_PASS`为 RabbitMQ 中 openstack 账户的密码;** - - **配置`my_ip`为控制节点的管理 IP 地址;** - - **替换`CINDER_PASS`为 cinder 用户的密码;** - - **替换`HOST:PATH`为 NFS 的HOSTIP和共享路径的密码;** - -6. 同步数据库: - - ```shell - su -s /bin/sh -c "cinder-manage db sync" cinder (CTL) - ``` - -7. 配置nova: - - ```shell - vim /etc/nova/nova.conf (CTL) - - [cinder] - os_region_name = RegionOne - ``` - -8. 重启计算API服务 - - ```shell - systemctl restart openstack-nova-api.service - ``` - -9. 启动cinder服务 - - ```shell - systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service (CTL) - systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service (CTL) - ``` - - ```shell - systemctl enable rpcbind.service nfs-server.service tgtd.service iscsid.service \ (STG) - openstack-cinder-volume.service \ - openstack-cinder-backup.service - systemctl start rpcbind.service nfs-server.service tgtd.service iscsid.service \ (STG) - openstack-cinder-volume.service \ - openstack-cinder-backup.service - ``` - - ***注意*** - - 当cinder使用tgtadm的方式挂卷的时候,要修改/etc/tgt/tgtd.conf,内容如下,保证tgtd可以发现cinder-volume的iscsi target。 - - ```shell - include /var/lib/cinder/volumes/* - ``` - -10. 验证 - - ```shell - source ~/.admin-openrc - openstack volume service list - ``` - -### horizon 安装 - -1. 安装软件包 - - ```shell - yum install openstack-dashboard - ``` - -2. 修改文件 - - 修改变量 - - ```text - vim /etc/openstack-dashboard/local_settings - - OPENSTACK_HOST = "controller" - ALLOWED_HOSTS = ['*', ] - - SESSION_ENGINE = 'django.contrib.sessions.backends.cache' - - CACHES = { - 'default': { - 'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache', - 'LOCATION': 'controller:11211', - } - } - - OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST - OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True - OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default" - OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user" - - OPENSTACK_API_VERSIONS = { - "identity": 3, - "image": 2, - "volume": 3, - } - ``` - -3. 
重启 httpd 服务 - - ```shell - systemctl restart httpd.service memcached.service - ``` - -4. 验证 - 打开浏览器,输入网址,登录 horizon。 - - ***注意*** - - **替换HOSTIP为控制节点管理平面IP地址** - -### Tempest 安装 - -Tempest是OpenStack的集成测试服务,如果用户需要全面自动化测试已安装的OpenStack环境的功能,则推荐使用该组件。否则,可以不用安装。 - -1. 安装Tempest - - ```shell - yum install openstack-tempest - ``` - -2. 初始化目录 - - ```shell - tempest init mytest - ``` - -3. 修改配置文件。 - - ```shell - cd mytest - vi etc/tempest.conf - ``` - - tempest.conf中需要配置当前OpenStack环境的信息,具体内容可以参考[官方示例](https://docs.openstack.org/tempest/latest/sampleconf.html) - -4. 执行测试 - - ```shell - tempest run - ``` - -5. 安装tempest扩展(可选) - OpenStack各个服务本身也提供了一些tempest测试包,用户可以安装这些包来丰富tempest的测试内容。在Wallaby中,我们提供了Cinder、Glance、Keystone、Ironic、Trove的扩展测试,用户可以执行如下命令进行安装使用: - ``` - yum install python3-cinder-tempest-plugin python3-glance-tempest-plugin python3-ironic-tempest-plugin python3-keystone-tempest-plugin python3-trove-tempest-plugin - ``` - -### Ironic 安装 - -Ironic是OpenStack的裸金属服务,如果用户需要进行裸机部署则推荐使用该组件。否则,可以不用安装。 - -1. 设置数据库 - - 裸金属服务在数据库中存储信息,创建一个**ironic**用户可以访问的**ironic**数据库,替换**IRONIC_DBPASSWORD**为合适的密码 - - ```sql - mysql -u root -p - - MariaDB [(none)]> CREATE DATABASE ironic CHARACTER SET utf8; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'localhost' \ - IDENTIFIED BY 'IRONIC_DBPASSWORD'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'%' \ - IDENTIFIED BY 'IRONIC_DBPASSWORD'; - ``` - -2. 创建服务用户认证 - - 1、创建Bare Metal服务用户 - - ```shell - openstack user create --password IRONIC_PASSWORD \ - --email ironic@example.com ironic - openstack role add --project service --user ironic admin - openstack service create --name ironic - --description "Ironic baremetal provisioning service" baremetal - - openstack service create --name ironic-inspector --description "Ironic inspector baremetal provisioning service" baremetal-introspection - openstack user create --password IRONIC_INSPECTOR_PASSWORD --email ironic_inspector@example.com ironic_inspector - openstack role add --project service --user ironic-inspector admin - ``` - - 2、创建Bare Metal服务访问入口 - - ```shell - openstack endpoint create --region RegionOne baremetal admin http://$IRONIC_NODE:6385 - openstack endpoint create --region RegionOne baremetal public http://$IRONIC_NODE:6385 - openstack endpoint create --region RegionOne baremetal internal http://$IRONIC_NODE:6385 - openstack endpoint create --region RegionOne baremetal-introspection internal http://172.20.19.13:5050/v1 - openstack endpoint create --region RegionOne baremetal-introspection public http://172.20.19.13:5050/v1 - openstack endpoint create --region RegionOne baremetal-introspection admin http://172.20.19.13:5050/v1 - ``` - -3. 配置ironic-api服务 - - 配置文件路径/etc/ironic/ironic.conf - - 1、通过**connection**选项配置数据库的位置,如下所示,替换**IRONIC_DBPASSWORD**为**ironic**用户的密码,替换**DB_IP**为DB服务器所在的IP地址: - - ```shell - [database] - - # The SQLAlchemy connection string used to connect to the - # database (string value) - - connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic - ``` - - 2、通过以下选项配置ironic-api服务使用RabbitMQ消息代理,替换**RPC_\***为RabbitMQ的详细地址和凭证 - - ```shell - [DEFAULT] - - # A URL representing the messaging driver to use and its full - # configuration. 
(string value) - - transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/ - ``` - - 用户也可自行使用json-rpc方式替换rabbitmq - - 3、配置ironic-api服务使用身份认证服务的凭证,替换**PUBLIC_IDENTITY_IP**为身份认证服务器的公共IP,替换**PRIVATE_IDENTITY_IP**为身份认证服务器的私有IP,替换**IRONIC_PASSWORD**为身份认证服务中**ironic**用户的密码: - - ```shell - [DEFAULT] - - # Authentication strategy used by ironic-api: one of - # "keystone" or "noauth". "noauth" should not be used in a - # production environment because all authentication will be - # disabled. (string value) - - auth_strategy=keystone - host = controller - memcache_servers = controller:11211 - enabled_network_interfaces = flat,noop,neutron - default_network_interface = noop - transport_url = rabbit://openstack:RABBITPASSWD@controller:5672/ - enabled_hardware_types = ipmi - enabled_boot_interfaces = pxe - enabled_deploy_interfaces = direct - default_deploy_interface = direct - enabled_inspect_interfaces = inspector - enabled_management_interfaces = ipmitool - enabled_power_interfaces = ipmitool - enabled_rescue_interfaces = no-rescue,agent - isolinux_bin = /usr/share/syslinux/isolinux.bin - logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s - - [keystone_authtoken] - # Authentication type to load (string value) - auth_type=password - # Complete public Identity API endpoint (string value) - www_authenticate_uri=http://PUBLIC_IDENTITY_IP:5000 - # Complete admin Identity API endpoint. (string value) - auth_url=http://PRIVATE_IDENTITY_IP:5000 - # Service username. (string value) - username=ironic - # Service account password. (string value) - password=IRONIC_PASSWORD - # Service tenant name. (string value) - project_name=service - # Domain name containing project (string value) - project_domain_name=Default - # User's domain name (string value) - user_domain_name=Default - - [agent] - deploy_logs_collect = always - deploy_logs_local_path = /var/log/ironic/deploy - deploy_logs_storage_backend = local - image_download_source = http - stream_raw_images = false - force_raw_images = false - verify_ca = False - - [oslo_concurrency] - - [oslo_messaging_notifications] - transport_url = rabbit://openstack:123456@172.20.19.25:5672/ - topics = notifications - driver = messagingv2 - - [oslo_messaging_rabbit] - amqp_durable_queues = True - rabbit_ha_queues = True - - [pxe] - ipxe_enabled = false - pxe_append_params = nofb nomodeset vga=normal coreos.autologin ipa-insecure=1 - image_cache_size = 204800 - tftp_root=/var/lib/tftpboot/cephfs/ - tftp_master_path=/var/lib/tftpboot/cephfs/master_images - - [dhcp] - dhcp_provider = none - ``` - - 4、创建裸金属服务数据库表 - - ```shell - ironic-dbsync --config-file /etc/ironic/ironic.conf create_schema - ``` - - 5、重启ironic-api服务 - - ```shell - sudo systemctl restart openstack-ironic-api - ``` - -4. 配置ironic-conductor服务 - - 1、替换**HOST_IP**为conductor host的IP - - ```shell - [DEFAULT] - - # IP address of this host. If unset, will determine the IP - # programmatically. If unable to do so, will use "127.0.0.1". - # (string value) - - my_ip=HOST_IP - ``` - - 2、配置数据库的位置,ironic-conductor应该使用和ironic-api相同的配置。替换**IRONIC_DBPASSWORD**为**ironic**用户的密码,替换DB_IP为DB服务器所在的IP地址: - - ```shell - [database] - - # The SQLAlchemy connection string to use to connect to the - # database. 
(string value) - - connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic - ``` - - 3、通过以下选项配置ironic-api服务使用RabbitMQ消息代理,ironic-conductor应该使用和ironic-api相同的配置,替换**RPC_\***为RabbitMQ的详细地址和凭证 - - ```shell - [DEFAULT] - - # A URL representing the messaging driver to use and its full - # configuration. (string value) - - transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/ - ``` - - 用户也可自行使用json-rpc方式替换rabbitmq - - 4、配置凭证访问其他OpenStack服务 - - 为了与其他OpenStack服务进行通信,裸金属服务在请求其他服务时需要使用服务用户与OpenStack Identity服务进行认证。这些用户的凭据必须在与相应服务相关的每个配置文件中进行配置。 - - ```shell - [neutron] - 访问OpenStack网络服务 - [glance] - 访问OpenStack镜像服务 - [swift] - 访问OpenStack对象存储服务 - [cinder] - 访问OpenStack块存储服务 - [inspector] - 访问OpenStack裸金属introspection服务 - [service_catalog] - 一个特殊项用于保存裸金属服务使用的凭证,该凭证用于发现注册在OpenStack身份认证服务目录中的自己的API URL端点 - ``` - - 简单起见,可以对所有服务使用同一个服务用户。为了向后兼容,该用户应该和ironic-api服务的[keystone_authtoken]所配置的为同一个用户。但这不是必须的,也可以为每个服务创建并配置不同的服务用户。 - - 在下面的示例中,用户访问OpenStack网络服务的身份验证信息配置为: - - ```shell - 网络服务部署在名为RegionOne的身份认证服务域中,仅在服务目录中注册公共端点接口 - - 请求时使用特定的CA SSL证书进行HTTPS连接 - - 与ironic-api服务配置相同的服务用户 - - 动态密码认证插件基于其他选项发现合适的身份认证服务API版本 - ``` - - ```shell - [neutron] - - # Authentication type to load (string value) - auth_type = password - # Authentication URL (string value) - auth_url=https://IDENTITY_IP:5000/ - # Username (string value) - username=ironic - # User's password (string value) - password=IRONIC_PASSWORD - # Project name to scope to (string value) - project_name=service - # Domain ID containing project (string value) - project_domain_id=default - # User's domain id (string value) - user_domain_id=default - # PEM encoded Certificate Authority to use when verifying - # HTTPs connections. (string value) - cafile=/opt/stack/data/ca-bundle.pem - # The default region_name for endpoint URL discovery. (string - # value) - region_name = RegionOne - # List of interfaces, in order of preference, for endpoint - # URL. (list value) - valid_interfaces=public - ``` - - 默认情况下,为了与其他服务进行通信,裸金属服务会尝试通过身份认证服务的服务目录发现该服务合适的端点。如果希望对一个特定服务使用一个不同的端点,则在裸金属服务的配置文件中通过endpoint_override选项进行指定: - - ```shell - [neutron] ... endpoint_override = - ``` - - 5、配置允许的驱动程序和硬件类型 - - 通过设置enabled_hardware_types设置ironic-conductor服务允许使用的硬件类型: - - ```shell - [DEFAULT] enabled_hardware_types = ipmi - ``` - - 配置硬件接口: - - ```shell - enabled_boot_interfaces = pxe enabled_deploy_interfaces = direct,iscsi enabled_inspect_interfaces = inspector enabled_management_interfaces = ipmitool enabled_power_interfaces = ipmitool - ``` - - 配置接口默认值: - - ```shell - [DEFAULT] default_deploy_interface = direct default_network_interface = neutron - ``` - - 如果启用了任何使用Direct deploy的驱动,必须安装和配置镜像服务的Swift后端。Ceph对象网关(RADOS网关)也支持作为镜像服务的后端。 - - 6、重启ironic-conductor服务 - - ```shell - sudo systemctl restart openstack-ironic-conductor - ``` - -5. 
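An optional client-side verification after restarting ironic-api and ironic-conductor, assuming a baremetal CLI client such as the `python3-ironicclient` package is available (package name may differ) and the admin credentials are loaded. It confirms the hardware types enabled above were accepted:

```shell
yum install python3-ironicclient
source ~/.admin-openrc

# The ipmi hardware type enabled in ironic.conf should be listed
openstack baremetal driver list
```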
配置ironic-inspector服务 - - 配置文件路径/etc/ironic-inspector/inspector.conf - - 1、创建数据库 - - ```shell - # mysql -u root -p - - MariaDB [(none)]> CREATE DATABASE ironic_inspector CHARACTER SET utf8; - - MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic_inspector.* TO 'ironic_inspector'@'localhost' \ IDENTIFIED BY 'IRONIC_INSPECTOR_DBPASSWORD'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic_inspector.* TO 'ironic_inspector'@'%' \ - IDENTIFIED BY 'IRONIC_INSPECTOR_DBPASSWORD'; - ``` - - 2、通过**connection**选项配置数据库的位置,如下所示,替换**IRONIC_INSPECTOR_DBPASSWORD**为**ironic_inspector**用户的密码,替换**DB_IP**为DB服务器所在的IP地址: - - ```shell - [database] - backend = sqlalchemy - connection = mysql+pymysql://ironic_inspector:IRONIC_INSPECTOR_DBPASSWORD@DB_IP/ironic_inspector - min_pool_size = 100 - max_pool_size = 500 - pool_timeout = 30 - max_retries = 5 - max_overflow = 200 - db_retry_interval = 2 - db_inc_retry_interval = True - db_max_retry_interval = 2 - db_max_retries = 5 - ``` - - 3、配置消息度列通信地址 - - ```shell - [DEFAULT] - transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/ - - ``` - - 4、设置keystone认证 - - ```shell - [DEFAULT] - - auth_strategy = keystone - timeout = 900 - rootwrap_config = /etc/ironic-inspector/rootwrap.conf - logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s - log_dir = /var/log/ironic-inspector - state_path = /var/lib/ironic-inspector - use_stderr = False - - [ironic] - api_endpoint = http://IRONIC_API_HOST_ADDRRESS:6385 - auth_type = password - auth_url = http://PUBLIC_IDENTITY_IP:5000 - auth_strategy = keystone - ironic_url = http://IRONIC_API_HOST_ADDRRESS:6385 - os_region = RegionOne - project_name = service - project_domain_name = Default - user_domain_name = Default - username = IRONIC_SERVICE_USER_NAME - password = IRONIC_SERVICE_USER_PASSWORD - - [keystone_authtoken] - auth_type = password - auth_url = http://control:5000 - www_authenticate_uri = http://control:5000 - project_domain_name = default - user_domain_name = default - project_name = service - username = ironic_inspector - password = IRONICPASSWD - region_name = RegionOne - memcache_servers = control:11211 - token_cache_time = 300 - - [processing] - add_ports = active - processing_hooks = $default_processing_hooks,local_link_connection,lldp_basic - ramdisk_logs_dir = /var/log/ironic-inspector/ramdisk - always_store_ramdisk_logs = true - store_data =none - power_off = false - - [pxe_filter] - driver = iptables - - [capabilities] - boot_mode=True - ``` - - 5、配置ironic inspector dnsmasq服务 - - ```shell - # 配置文件地址:/etc/ironic-inspector/dnsmasq.conf - port=0 - interface=enp3s0 #替换为实际监听网络接口 - dhcp-range=172.20.19.100,172.20.19.110 #替换为实际dhcp地址范围 - bind-interfaces - enable-tftp - - dhcp-match=set:efi,option:client-arch,7 - dhcp-match=set:efi,option:client-arch,9 - dhcp-match=aarch64, option:client-arch,11 - dhcp-boot=tag:aarch64,grubaa64.efi - dhcp-boot=tag:!aarch64,tag:efi,grubx64.efi - dhcp-boot=tag:!aarch64,tag:!efi,pxelinux.0 - - tftp-root=/tftpboot #替换为实际tftpboot目录 - log-facility=/var/log/dnsmasq.log - ``` - - 6、关闭ironic provision网络子网的dhcp - - ``` - openstack subnet set --no-dhcp 72426e89-f552-4dc4-9ac7-c4e131ce7f3c - ``` - - 7、初始化ironic-inspector服务的数据库 - - 在控制节点执行: - - ``` - ironic-inspector-dbsync --config-file /etc/ironic-inspector/inspector.conf upgrade - ``` - - 8、启动服务 - - ```shell - systemctl enable --now openstack-ironic-inspector.service - systemctl enable --now 
openstack-ironic-inspector-dnsmasq.service - ``` - -6. 配置httpd服务 - - 1. 创建ironic要使用的httpd的root目录并设置属主属组,目录路径要和/etc/ironic/ironic.conf中[deploy]组中http_root 配置项指定的路径要一致。 - - ``` - mkdir -p /var/lib/ironic/httproot ``chown ironic.ironic /var/lib/ironic/httproot - ``` - - - - 2. 安装和配置httpd服务 - - - - 1. 安装httpd服务,已有请忽略 - - ``` - yum install httpd -y - ``` - - - - 2. 创建/etc/httpd/conf.d/openstack-ironic-httpd.conf文件,内容如下: - - ``` - Listen 8080 - - - ServerName ironic.openeuler.com - - ErrorLog "/var/log/httpd/openstack-ironic-httpd-error_log" - CustomLog "/var/log/httpd/openstack-ironic-httpd-access_log" "%h %l %u %t \"%r\" %>s %b" - - DocumentRoot "/var/lib/ironic/httproot" - - Options Indexes FollowSymLinks - Require all granted - - LogLevel warn - AddDefaultCharset UTF-8 - EnableSendfile on - - - ``` - - 注意监听的端口要和/etc/ironic/ironic.conf里[deploy]选项中http_url配置项中指定的端口一致。 - - 3. 重启httpd服务。 - - ``` - systemctl restart httpd - ``` - - - -7. deploy ramdisk镜像制作 - - W版的ramdisk镜像支持通过ironic-python-agent服务或disk-image-builder工具制作,也可以使用社区最新的ironic-python-agent-builder。用户也可以自行选择其他工具制作。 - 若使用W版原生工具,则需要安装对应的软件包。 - - ```shell - yum install openstack-ironic-python-agent - 或者 - yum install diskimage-builder - ``` - - 具体的使用方法可以参考[官方文档](https://docs.openstack.org/ironic/queens/install/deploy-ramdisk.html) - - 这里介绍下使用ironic-python-agent-builder构建ironic使用的deploy镜像的完整过程。 - - 1. 安装 ironic-python-agent-builder - - - 1. 安装工具: - - ```shell - pip install ironic-python-agent-builder - ``` - - 2. 修改以下文件中的python解释器: - - ```shell - /usr/bin/yum /usr/libexec/urlgrabber-ext-down - ``` - - 3. 安装其它必须的工具: - - ```shell - yum install git - ``` - - 由于`DIB`依赖`semanage`命令,所以在制作镜像之前确定该命令是否可用:`semanage --help`,如果提示无此命令,安装即可: - - ```shell - # 先查询需要安装哪个包 - [root@localhost ~]# yum provides /usr/sbin/semanage - 已加载插件:fastestmirror - Loading mirror speeds from cached hostfile - * base: mirror.vcu.edu - * extras: mirror.vcu.edu - * updates: mirror.math.princeton.edu - policycoreutils-python-2.5-34.el7.aarch64 : SELinux policy core python utilities - 源 :base - 匹配来源: - 文件名 :/usr/sbin/semanage - # 安装 - [root@localhost ~]# yum install policycoreutils-python - ``` - - 2. 制作镜像 - - 如果是`arm`架构,需要添加: - ```shell - export ARCH=aarch64 - ``` - - 基本用法: - - ```shell - usage: ironic-python-agent-builder [-h] [-r RELEASE] [-o OUTPUT] [-e ELEMENT] - [-b BRANCH] [-v] [--extra-args EXTRA_ARGS] - distribution - - positional arguments: - distribution Distribution to use - - optional arguments: - -h, --help show this help message and exit - -r RELEASE, --release RELEASE - Distribution release to use - -o OUTPUT, --output OUTPUT - Output base file name - -e ELEMENT, --element ELEMENT - Additional DIB element to use - -b BRANCH, --branch BRANCH - If set, override the branch that is used for ironic- - python-agent and requirements - -v, --verbose Enable verbose logging in diskimage-builder - --extra-args EXTRA_ARGS - Extra arguments to pass to diskimage-builder - ``` - - 举例说明: - - ```shell - ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky - ``` - - 3. 允许ssh登陆 - - 初始化环境变量,然后制作镜像: - - ```shell - export DIB_DEV_USER_USERNAME=ipa \ - export DIB_DEV_USER_PWDLESS_SUDO=yes \ - export DIB_DEV_USER_PASSWORD='123' - ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky -e selinux-permissive -e devuser - ``` - - 4. 
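A hedged follow-up, assuming the builder produced `/mnt/ironic-agent-ssh.kernel` and `/mnt/ironic-agent-ssh.initramfs` (the usual diskimage-builder output names for the example invocation above). For the `direct` deploy interface the resulting kernel and ramdisk are typically registered in Glance so they can later be referenced from each node's `deploy_kernel`/`deploy_ramdisk` driver_info:

```shell
source ~/.admin-openrc

openstack image create deploy-kernel --public \
  --disk-format aki --container-format aki \
  --file /mnt/ironic-agent-ssh.kernel

openstack image create deploy-ramdisk --public \
  --disk-format ari --container-format ari \
  --file /mnt/ironic-agent-ssh.initramfs
```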
指定代码仓库 - - 初始化对应的环境变量,然后制作镜像: - - ```shell - # 指定仓库地址以及版本 - DIB_REPOLOCATION_ironic_python_agent=git@172.20.2.149:liuzz/ironic-python-agent.git - DIB_REPOREF_ironic_python_agent=origin/develop - - # 直接从gerrit上clone代码 - DIB_REPOLOCATION_ironic_python_agent=https://review.opendev.org/openstack/ironic-python-agent - DIB_REPOREF_ironic_python_agent=refs/changes/43/701043/1 - ``` - - 参考:[source-repositories](https://docs.openstack.org/diskimage-builder/latest/elements/source-repositories/README.html)。 - - 指定仓库地址及版本验证成功。 - - 5. 注意 - -原生的openstack里的pxe配置文件的模版不支持arm64架构,需要自己对原生openstack代码进行修改: - -在W版中,社区的ironic仍然不支持arm64位的uefi pxe启动,表现为生成的grub.cfg文件(一般位于/tftpboot/下)格式不对而导致pxe启动失败,如下: - -生成的错误配置文件: - -![erro](/Users/andy_lee/Downloads/erro.png) - -如上图所示,arm架构里寻找vmlinux和ramdisk镜像的命令分别是linux和initrd,上图所示的标红命令是x86架构下的uefi pxe启动。 - -需要用户对生成grub.cfg的代码逻辑自行修改。 - -ironic向ipa发送查询命令执行状态请求的tls报错: - -w版的ipa和ironic默认都会开启tls认证的方式向对方发送请求,跟据官网的说明进行关闭即可。 - -1. 修改ironic配置文件(/etc/ironic/ironic.conf)下面的配置中添加ipa-insecure=1: - -``` -[agent] -verify_ca = False - -[pxe] -pxe_append_params = nofb nomodeset vga=normal coreos.autologin ipa-insecure=1 -``` - -2) ramdisk镜像中添加ipa配置文件/etc/ironic_python_agent/ironic_python_agent.conf并配置tls的配置如下: - -/etc/ironic_python_agent/ironic_python_agent.conf (需要提前创建/etc/ironic_python_agent目录) - -``` -[DEFAULT] -enable_auto_tls = False -``` - -设置权限: - -``` -chown -R ipa.ipa /etc/ironic_python_agent/ -``` - -3. 修改ipa服务的服务启动文件,添加配置文件选项 - - vim usr/lib/systemd/system/ironic-python-agent.service - - ``` - [Unit] - Description=Ironic Python Agent - After=network-online.target - - [Service] - ExecStartPre=/sbin/modprobe vfat - ExecStart=/usr/local/bin/ironic-python-agent --config-file /etc/ironic_python_agent/ironic_python_agent.conf - Restart=always - RestartSec=30s - - [Install] - WantedBy=multi-user.target - ``` - - - -### Kolla 安装 - -Kolla为OpenStack服务提供生产环境可用的容器化部署的功能。openEuler 22.03 LTS中引入了Kolla和Kolla-ansible服务。 - -Kolla的安装十分简单,只需要安装对应的RPM包即可 - -``` -yum install openstack-kolla openstack-kolla-ansible -``` - -安装完后,就可以使用`kolla-ansible`, `kolla-build`, `kolla-genpwd`, `kolla-mergepwd`等命令了。 - -### Trove 安装 -Trove是OpenStack的数据库服务,如果用户使用OpenStack提供的数据库服务则推荐使用该组件。否则,可以不用安装。 - -1. 设置数据库 - - 数据库服务在数据库中存储信息,创建一个**trove**用户可以访问的**trove**数据库,替换**TROVE_DBPASSWORD**为合适的密码 - - ```sql - mysql -u root -p - - MariaDB [(none)]> CREATE DATABASE trove CHARACTER SET utf8; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'localhost' \ - IDENTIFIED BY 'TROVE_DBPASSWORD'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'%' \ - IDENTIFIED BY 'TROVE_DBPASSWORD'; - ``` - -2. 创建服务用户认证 - - 1、创建**Trove**服务用户 - - ```shell - openstack user create --password TROVE_PASSWORD \ - --email trove@example.com trove - openstack role add --project service --user trove admin - openstack service create --name trove - --description "Database service" database - ``` - **解释:** `TROVE_PASSWORD` 替换为`trove`用户的密码 - - 2、创建**Database**服务访问入口 - - ```shell - openstack endpoint create --region RegionOne database public http://controller:8779/v1.0/%\(tenant_id\)s - openstack endpoint create --region RegionOne database internal http://controller:8779/v1.0/%\(tenant_id\)s - openstack endpoint create --region RegionOne database admin http://controller:8779/v1.0/%\(tenant_id\)s - ``` - -3. 安装和配置**Trove**各组件 - - 1、安装**Trove**包 - ```shell script - yum install openstack-trove python-troveclient - ``` - 2. 
配置`trove.conf` - ```shell script - vim /etc/trove/trove.conf - - [DEFAULT] - bind_host=TROVE_NODE_IP - log_dir = /var/log/trove - network_driver = trove.network.neutron.NeutronDriver - management_security_groups = - nova_keypair = trove-mgmt - default_datastore = mysql - taskmanager_manager = trove.taskmanager.manager.Manager - trove_api_workers = 5 - transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/ - reboot_time_out = 300 - usage_timeout = 900 - agent_call_high_timeout = 1200 - use_syslog = False - debug = True - - # Set these if using Neutron Networking - network_driver=trove.network.neutron.NeutronDriver - network_label_regex=.* - - - transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/ - - [database] - connection = mysql+pymysql://trove:TROVE_DBPASS@controller/trove - - [keystone_authtoken] - project_domain_name = Default - project_name = service - user_domain_name = Default - password = trove - username = trove - auth_url = http://controller:5000/v3/ - auth_type = password - - [service_credentials] - auth_url = http://controller:5000/v3/ - region_name = RegionOne - project_name = service - password = trove - project_domain_name = Default - user_domain_name = Default - username = trove - - [mariadb] - tcp_ports = 3306,4444,4567,4568 - - [mysql] - tcp_ports = 3306 - - [postgresql] - tcp_ports = 5432 - ``` - **解释:** - - `[Default]`分组中`bind_host`配置为Trove部署节点的IP - - `nova_compute_url` 和 `cinder_url` 为Nova和Cinder在Keystone中创建的endpoint - - `nova_proxy_XXX` 为一个能访问Nova服务的用户信息,上例中使用`admin`用户为例 - - `transport_url` 为`RabbitMQ`连接信息,`RABBIT_PASS`替换为RabbitMQ的密码 - - `[database]`分组中的`connection` 为前面在mysql中为Trove创建的数据库信息 - - Trove的用户信息中`TROVE_PASS`替换为实际trove用户的密码 - - 5. 配置`trove-guestagent.conf` - ```shell script - vim /etc/trove/trove-guestagent.conf - - [DEFAULT] - log_file = trove-guestagent.log - log_dir = /var/log/trove/ - ignore_users = os_admin - control_exchange = trove - transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/ - rpc_backend = rabbit - command_process_timeout = 60 - use_syslog = False - debug = True - - [service_credentials] - auth_url = http://controller:5000/v3/ - region_name = RegionOne - project_name = service - password = TROVE_PASS - project_domain_name = Default - user_domain_name = Default - username = trove - - [mysql] - docker_image = your-registry/your-repo/mysql - backup_docker_image = your-registry/your-repo/db-backup-mysql:1.1.0 - ``` - **解释:** `guestagent`是trove中一个独立组件,需要预先内置到Trove通过Nova创建的虚拟 - 机镜像中,在创建好数据库实例后,会起guestagent进程,负责通过消息队列(RabbitMQ)向Trove上 - 报心跳,因此需要配置RabbitMQ的用户和密码信息。 - **从Victoria版开始,Trove使用一个统一的镜像来跑不同类型的数据库,数据库服务运行在Guest虚拟机的Docker容器中。** - - `transport_url` 为`RabbitMQ`连接信息,`RABBIT_PASS`替换为RabbitMQ的密码 - - Trove的用户信息中`TROVE_PASS`替换为实际trove用户的密码 - - 6. 生成数据`Trove`数据库表 - ```shell script - su -s /bin/sh -c "trove-manage db_sync" trove - ``` -4. 完成安装配置 - 1. 配置**Trove**服务自启动 - ```shell script - systemctl enable openstack-trove-api.service \ - openstack-trove-taskmanager.service \ - openstack-trove-conductor.service - ``` - 2. 启动服务 - ```shell script - systemctl start openstack-trove-api.service \ - openstack-trove-taskmanager.service \ - openstack-trove-conductor.service - ``` -### Swift 安装 - -Swift 提供了弹性可伸缩、高可用的分布式对象存储服务,适合存储大规模非结构化数据。 - -1. 
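An optional verification of the Trove installation above, assuming the admin credentials are loaded and `python-troveclient` is installed as described earlier:

```shell
source ~/.admin-openrc

# The three Trove services should report "active"
systemctl is-active openstack-trove-api.service \
  openstack-trove-taskmanager.service \
  openstack-trove-conductor.service

# An empty list (with no error) indicates trove-api and its database connection work
openstack database instance list
```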
创建服务凭证、API端点。 - - 创建服务凭证 - - ``` shell - #创建swift用户: - openstack user create --domain default --password-prompt swift - #为swift用户添加admin角色: - openstack role add --project service --user swift admin - #创建swift服务实体: - openstack service create --name swift --description "OpenStack Object Storage" object-store - ``` - - 创建swift API 端点: - - ```shell - openstack endpoint create --region RegionOne object-store public http://controller:8080/v1/AUTH_%\(project_id\)s - openstack endpoint create --region RegionOne object-store internal http://controller:8080/v1/AUTH_%\(project_id\)s - openstack endpoint create --region RegionOne object-store admin http://controller:8080/v1 - ``` - - -2. 安装软件包: - - ```shell - yum install openstack-swift-proxy python3-swiftclient python3-keystoneclient python3-keystonemiddleware memcached (CTL) - ``` - -3. 配置proxy-server相关配置 - - Swift RPM包里已经包含了一个基本可用的proxy-server.conf,只需要手动修改其中的ip和swift password即可。 - - ***注意*** - - **注意替换password为您在身份服务中为swift用户选择的密码** - -4. 安装和配置存储节点 (STG) - - 安装支持的程序包: - ```shell - yum install xfsprogs rsync - ``` - - 将/dev/vdb和/dev/vdc设备格式化为 XFS - - ```shell - mkfs.xfs /dev/vdb - mkfs.xfs /dev/vdc - ``` - - 创建挂载点目录结构: - - ```shell - mkdir -p /srv/node/vdb - mkdir -p /srv/node/vdc - ``` - - 找到新分区的 UUID: - - ```shell - blkid - ``` - - 编辑/etc/fstab文件并将以下内容添加到其中: - - ```shell - UUID="" /srv/node/vdb xfs noatime 0 2 - UUID="" /srv/node/vdc xfs noatime 0 2 - ``` - - 挂载设备: - - ```shell - mount /srv/node/vdb - mount /srv/node/vdc - ``` - ***注意*** - - **如果用户不需要容灾功能,以上步骤只需要创建一个设备即可,同时可以跳过下面的rsync配置** - - (可选)创建或编辑/etc/rsyncd.conf文件以包含以下内容: - - ```shell - [DEFAULT] - uid = swift - gid = swift - log file = /var/log/rsyncd.log - pid file = /var/run/rsyncd.pid - address = MANAGEMENT_INTERFACE_IP_ADDRESS - - [account] - max connections = 2 - path = /srv/node/ - read only = False - lock file = /var/lock/account.lock - - [container] - max connections = 2 - path = /srv/node/ - read only = False - lock file = /var/lock/container.lock - - [object] - max connections = 2 - path = /srv/node/ - read only = False - lock file = /var/lock/object.lock - ``` - **替换MANAGEMENT_INTERFACE_IP_ADDRESS为存储节点上管理网络的IP地址** - - 启动rsyncd服务并配置它在系统启动时启动: - - ```shell - systemctl enable rsyncd.service - systemctl start rsyncd.service - ``` - -5. 在存储节点安装和配置组件 (STG) - - 安装软件包: - - ```shell - yum install openstack-swift-account openstack-swift-container openstack-swift-object - ``` - - 编辑/etc/swift目录的account-server.conf、container-server.conf和object-server.conf文件,替换bind_ip为存储节点上管理网络的IP地址。 - - 确保挂载点目录结构的正确所有权: - - ```shell - chown -R swift:swift /srv/node - ``` - - 创建recon目录并确保其拥有正确的所有权: - - ```shell - mkdir -p /var/cache/swift - chown -R root:swift /var/cache/swift - chmod -R 775 /var/cache/swift - ``` - -6. 创建账号环 (CTL) - - 切换到/etc/swift目录。 - - ```shell - cd /etc/swift - ``` - - 创建基础account.builder文件: - - ```shell - swift-ring-builder account.builder create 10 1 1 - ``` - - 将每个存储节点添加到环中: - - ```shell - swift-ring-builder account.builder add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6202 --device DEVICE_NAME --weight DEVICE_WEIGHT - ``` - - **替换STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS为存储节点上管理网络的IP地址。替换DEVICE_NAME为同一存储节点上的存储设备名称** - - ***注意 *** - **对每个存储节点上的每个存储设备重复此命令** - - 验证戒指内容: - - ```shell - swift-ring-builder account.builder - ``` - - 重新平衡戒指: - - ```shell - swift-ring-builder account.builder rebalance - ``` - -7. 
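A concrete worked example with assumed values: a single storage node at `10.0.0.41` exposing the two devices `vdb` and `vdc` prepared earlier; adjust to the real environment. The placeholder `add` command above is run once per device, then the ring is rebalanced:

```shell
cd /etc/swift
swift-ring-builder account.builder add --region 1 --zone 1 --ip 10.0.0.41 --port 6202 --device vdb --weight 100
swift-ring-builder account.builder add --region 1 --zone 1 --ip 10.0.0.41 --port 6202 --device vdc --weight 100
swift-ring-builder account.builder
swift-ring-builder account.builder rebalance
```

The same pattern repeats below for the container ring (port 6201) and the object ring (port 6200).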
创建容器环 (CTL) - - 切换到`/etc/swift`目录。 - - 创建基础`container.builder`文件: - - ```shell - swift-ring-builder container.builder create 10 1 1 - ``` - - 将每个存储节点添加到环中: - - ```shell - swift-ring-builder container.builder \ - add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6201 \ - --device DEVICE_NAME --weight 100 - - ``` - - **替换STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS为存储节点上管理网络的IP地址。替换DEVICE_NAME为同一存储节点上的存储设备名称** - - ***注意*** - **对每个存储节点上的每个存储设备重复此命令** - - 验证戒指内容: - - ```shell - swift-ring-builder container.builder - ``` - - 重新平衡戒指: - - ```shell - swift-ring-builder container.builder rebalance - ``` - -8. 创建对象环 (CTL) - - 切换到`/etc/swift`目录。 - - 创建基础`object.builder`文件: - - ```shell - swift-ring-builder object.builder create 10 1 1 - ``` - - 将每个存储节点添加到环中 - - ```shell - swift-ring-builder object.builder \ - add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6200 \ - --device DEVICE_NAME --weight 100 - ``` - - **替换STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS为存储节点上管理网络的IP地址。替换DEVICE_NAME为同一存储节点上的存储设备名称** - - ***注意 *** - **对每个存储节点上的每个存储设备重复此命令** - - 验证戒指内容: - - ```shell - swift-ring-builder object.builder - ``` - - 重新平衡戒指: - - ```shell - swift-ring-builder object.builder rebalance - ``` - - 分发环配置文件: - - 将`account.ring.gz`,`container.ring.gz`以及 `object.ring.gz`文件复制到每个存储节点和运行代理服务的任何其他节点上的`/etc/swift`目录。 - - - -9. 完成安装 - - 编辑`/etc/swift/swift.conf`文件 - - ``` shell - [swift-hash] - swift_hash_path_suffix = test-hash - swift_hash_path_prefix = test-hash - - [storage-policy:0] - name = Policy-0 - default = yes - ``` - - **用唯一值替换 test-hash** - - 将swift.conf文件复制到/etc/swift每个存储节点和运行代理服务的任何其他节点上的目录。 - - 在所有节点上,确保配置目录的正确所有权: - - ```shell - chown -R root:swift /etc/swift - ``` - - 在控制器节点和运行代理服务的任何其他节点上,启动对象存储代理服务及其依赖项,并将它们配置为在系统启动时启动: - - ```shell - systemctl enable openstack-swift-proxy.service memcached.service - systemctl start openstack-swift-proxy.service memcached.service - ``` - - 在存储节点上,启动对象存储服务并将它们配置为在系统启动时启动: - - ```shell - systemctl enable openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service - - systemctl start openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service - - systemctl enable openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service - - systemctl start openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service - - systemctl enable openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service - - systemctl start openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service - ``` -### Cyborg 安装 - -Cyborg为OpenStack提供加速器设备的支持,包括 GPU, FPGA, ASIC, NP, SoCs, NVMe/NOF SSDs, ODP, DPDK/SPDK等等。 - -1. 初始化对应数据库 - -``` -CREATE DATABASE cyborg; -GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'localhost' IDENTIFIED BY 'CYBORG_DBPASS'; -GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'%' IDENTIFIED BY 'CYBORG_DBPASS'; -``` - -2. 
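Looking back at the Swift deployment completed above, an optional smoke test can be run before continuing with Cyborg, assuming the admin credentials are loaded and `python3-swiftclient` is installed on the proxy node as described earlier:

```shell
source ~/.admin-openrc

# A normal response confirms the proxy, rings and storage services cooperate
swift stat

# Create a test container and upload a small object
echo "hello swift" > /tmp/hello.txt
openstack container create demo
openstack object create demo /tmp/hello.txt
openstack object list demo
```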
创建对应Keystone资源对象 - -``` -$ openstack user create --domain default --password-prompt cyborg -$ openstack role add --project service --user cyborg admin -$ openstack service create --name cyborg --description "Acceleration Service" accelerator - -$ openstack endpoint create --region RegionOne \ - accelerator public http://:6666/v1 -$ openstack endpoint create --region RegionOne \ - accelerator internal http://:6666/v1 -$ openstack endpoint create --region RegionOne \ - accelerator admin http://:6666/v1 -``` - -3. 安装Cyborg - -``` -yum install openstack-cyborg -``` - -4. 配置Cyborg - -修改`/etc/cyborg/cyborg.conf` - -``` -[DEFAULT] -transport_url = rabbit://%RABBITMQ_USER%:%RABBITMQ_PASSWORD%@%OPENSTACK_HOST_IP%:5672/ -use_syslog = False -state_path = /var/lib/cyborg -debug = True - -[database] -connection = mysql+pymysql://%DATABASE_USER%:%DATABASE_PASSWORD%@%OPENSTACK_HOST_IP%/cyborg - -[service_catalog] -project_domain_id = default -user_domain_id = default -project_name = service -password = PASSWORD -username = cyborg -auth_url = http://%OPENSTACK_HOST_IP%/identity -auth_type = password - -[placement] -project_domain_name = Default -project_name = service -user_domain_name = Default -password = PASSWORD -username = placement -auth_url = http://%OPENSTACK_HOST_IP%/identity -auth_type = password - -[keystone_authtoken] -memcached_servers = localhost:11211 -project_domain_name = Default -project_name = service -user_domain_name = Default -password = PASSWORD -username = cyborg -auth_url = http://%OPENSTACK_HOST_IP%/identity -auth_type = password -``` - -自行修改对应的用户名、密码、IP等信息 - -5. 同步数据库表格 - -``` -cyborg-dbsync --config-file /etc/cyborg/cyborg.conf upgrade -``` - -6. 启动Cyborg服务 - -``` -systemctl enable openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent -systemctl start openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent -``` - -### Aodh 安装 - -1. 创建数据库 - -``` -CREATE DATABASE aodh; - -GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'localhost' IDENTIFIED BY 'AODH_DBPASS'; - -GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'%' IDENTIFIED BY 'AODH_DBPASS'; -``` - -2. 创建对应Keystone资源对象 - -``` -openstack user create --domain default --password-prompt aodh - -openstack role add --project service --user aodh admin - -openstack service create --name aodh --description "Telemetry" alarming - -openstack endpoint create --region RegionOne alarming public http://controller:8042 - -openstack endpoint create --region RegionOne alarming internal http://controller:8042 - -openstack endpoint create --region RegionOne alarming admin http://controller:8042 -``` - -3. 安装Aodh - -``` -yum install openstack-aodh-api openstack-aodh-evaluator openstack-aodh-notifier openstack-aodh-listener openstack-aodh-expirer python3-aodhclient -``` - -***注意*** - -aodh依赖的软件包pytho3-pyparsing在openEuler的OS仓不适配,需要覆盖安装OpenStack对应版本,可以使用`yum list |grep pyparsing |grep OpenStack | awk '{print $2}'`获取对应的版本 - -VERSION,然后再`yum install -y python3-pyparsing-VERSION`覆盖安装适配的pyparsing - -4. 
修改配置文件 - -``` -[database] -connection = mysql+pymysql://aodh:AODH_DBPASS@controller/aodh - -[DEFAULT] -transport_url = rabbit://openstack:RABBIT_PASS@controller -auth_strategy = keystone - -[keystone_authtoken] -www_authenticate_uri = http://controller:5000 -auth_url = http://controller:5000 -memcached_servers = controller:11211 -auth_type = password -project_domain_id = default -user_domain_id = default -project_name = service -username = aodh -password = AODH_PASS - -[service_credentials] -auth_type = password -auth_url = http://controller:5000/v3 -project_domain_id = default -user_domain_id = default -project_name = service -username = aodh -password = AODH_PASS -interface = internalURL -region_name = RegionOne -``` - -5. 初始化数据库 - -``` -aodh-dbsync -``` - -6. 启动Aodh服务 - -``` -systemctl enable openstack-aodh-api.service openstack-aodh-evaluator.service openstack-aodh-notifier.service openstack-aodh-listener.service - -systemctl start openstack-aodh-api.service openstack-aodh-evaluator.service openstack-aodh-notifier.service openstack-aodh-listener.service -``` - -### Gnocchi 安装 - -1. 创建数据库 - -``` -CREATE DATABASE gnocchi; - -GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'localhost' IDENTIFIED BY 'GNOCCHI_DBPASS'; - -GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'%' IDENTIFIED BY 'GNOCCHI_DBPASS'; -``` - -2. 创建对应Keystone资源对象 - -``` -openstack user create --domain default --password-prompt gnocchi - -openstack role add --project service --user gnocchi admin - -openstack service create --name gnocchi --description "Metric Service" metric - -openstack endpoint create --region RegionOne metric public http://controller:8041 - -openstack endpoint create --region RegionOne metric internal http://controller:8041 - -openstack endpoint create --region RegionOne metric admin http://controller:8041 -``` - -3. 安装Gnocchi - -``` -yum install openstack-gnocchi-api openstack-gnocchi-metricd python3-gnocchiclient -``` - -4. 修改配置文件`/etc/gnocchi/gnocchi.conf` - -``` -[api] -auth_mode = keystone -port = 8041 -uwsgi_mode = http-socket - -[keystone_authtoken] -auth_type = password -auth_url = http://controller:5000/v3 -project_domain_name = Default -user_domain_name = Default -project_name = service -username = gnocchi -password = GNOCCHI_PASS -interface = internalURL -region_name = RegionOne - -[indexer] -url = mysql+pymysql://gnocchi:GNOCCHI_DBPASS@controller/gnocchi - -[storage] -# coordination_url is not required but specifying one will improve -# performance with better workload division across workers. -coordination_url = redis://controller:6379 -file_basepath = /var/lib/gnocchi -driver = file -``` - -5. 初始化数据库 - -``` -gnocchi-upgrade -``` - -6. 启动Gnocchi服务 - -``` -systemctl enable openstack-gnocchi-api.service openstack-gnocchi-metricd.service - -systemctl start openstack-gnocchi-api.service openstack-gnocchi-metricd.service -``` - -### Ceilometer 安装 - -1. 创建对应Keystone资源对象 - -``` -openstack user create --domain default --password-prompt ceilometer - -openstack role add --project service --user ceilometer admin - -openstack service create --name ceilometer --description "Telemetry" metering -``` - -2. 安装Ceilometer - -``` -yum install openstack-ceilometer-notification openstack-ceilometer-central -``` - -3. 修改配置文件`/etc/ceilometer/pipeline.yaml` - -``` -publishers: - # set address of Gnocchi - # + filter out Gnocchi-related activity meters (Swift driver) - # + set default archive policy - - gnocchi://?filter_project=service&archive_policy=low -``` - -4. 
修改配置文件`/etc/ceilometer/ceilometer.conf` - -``` -[DEFAULT] -transport_url = rabbit://openstack:RABBIT_PASS@controller - -[service_credentials] -auth_type = password -auth_url = http://controller:5000/v3 -project_domain_id = default -user_domain_id = default -project_name = service -username = ceilometer -password = CEILOMETER_PASS -interface = internalURL -region_name = RegionOne -``` - -5. 初始化数据库 - -``` -ceilometer-upgrade -``` - -6. 启动Ceilometer服务 - -``` -systemctl enable openstack-ceilometer-notification.service openstack-ceilometer-central.service - -systemctl start openstack-ceilometer-notification.service openstack-ceilometer-central.service -``` - -### Heat 安装 - -1. 创建**heat**数据库,并授予**heat**数据库正确的访问权限,替换**HEAT_DBPASS**为合适的密码 - -``` -CREATE DATABASE heat; -GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'localhost' IDENTIFIED BY 'HEAT_DBPASS'; -GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'%' IDENTIFIED BY 'HEAT_DBPASS'; -``` - -2. 创建服务凭证,创建**heat**用户,并为其增加**admin**角色 - -``` -openstack user create --domain default --password-prompt heat -openstack role add --project service --user heat admin -``` - -3. 创建**heat**和**heat-cfn**服务及其对应的API端点 - -``` -openstack service create --name heat --description "Orchestration" orchestration -openstack service create --name heat-cfn --description "Orchestration" cloudformation -openstack endpoint create --region RegionOne orchestration public http://controller:8004/v1/%\(tenant_id\)s -openstack endpoint create --region RegionOne orchestration internal http://controller:8004/v1/%\(tenant_id\)s -openstack endpoint create --region RegionOne orchestration admin http://controller:8004/v1/%\(tenant_id\)s -openstack endpoint create --region RegionOne cloudformation public http://controller:8000/v1 -openstack endpoint create --region RegionOne cloudformation internal http://controller:8000/v1 -openstack endpoint create --region RegionOne cloudformation admin http://controller:8000/v1 -``` - -4. 创建stack管理的额外信息,包括**heat**domain及其对应domain的admin用户**heat_domain_admin**, -**heat_stack_owner**角色,**heat_stack_user**角色 - -``` -openstack user create --domain heat --password-prompt heat_domain_admin -openstack role add --domain heat --user-domain heat --user heat_domain_admin admin -openstack role create heat_stack_owner -openstack role create heat_stack_user -``` - -5. 安装软件包 - -``` -yum install openstack-heat-api openstack-heat-api-cfn openstack-heat-engine -``` - -6. 修改配置文件`/etc/heat/heat.conf` - -``` -[DEFAULT] -transport_url = rabbit://openstack:RABBIT_PASS@controller -heat_metadata_server_url = http://controller:8000 -heat_waitcondition_server_url = http://controller:8000/v1/waitcondition -stack_domain_admin = heat_domain_admin -stack_domain_admin_password = HEAT_DOMAIN_PASS -stack_user_domain_name = heat - -[database] -connection = mysql+pymysql://heat:HEAT_DBPASS@controller/heat - -[keystone_authtoken] -www_authenticate_uri = http://controller:5000 -auth_url = http://controller:5000 -memcached_servers = controller:11211 -auth_type = password -project_domain_name = default -user_domain_name = default -project_name = service -username = heat -password = HEAT_PASS - -[trustee] -auth_type = password -auth_url = http://controller:5000 -username = heat -password = HEAT_PASS -user_domain_name = default - -[clients_keystone] -auth_uri = http://controller:5000 -``` - -7. 初始化**heat**数据库表 - -``` -su -s /bin/sh -c "heat-manage db_sync" heat -``` - -8. 
启动服务 - -``` -systemctl enable openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service -systemctl start openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service -``` - -## 快速安装 OpenStack - -OpenStack SIG还提供了一键部署OpenStack all in one或三节点的ansible脚本,用户可以使用该脚本快速部署一套基于openEuler RPM的OpenStack环境。下面以all in one举例说明使用方法 - -1. 安装OpenStack SIG工具 - - ```shell - pip install openstack-sig-tool - ``` - -2. 配置openstack yum 源 - - ```shell - yum install openstack-release-wallaby - ``` - - **注意**:如果你的环境的YUM源没有启用EPOL,需要同时配置EPOL - - ```shell - vi /etc/yum.repos.d/openEuler.repo - - [EPOL] - name=EPOL - baseurl=http://repo.openeuler.org/openEuler-22.03-LTS/EPOL/main/$basearch/ - enabled=1 - gpgcheck=1 - gpgkey=http://repo.openeuler.org/openEuler-22.03-LTS/OS/$basearch/RPM-GPG-KEY-openEuler - EOF - -3. 刷新ansible配置 - - 打开`/usr/local/etc/inventory/all_in_one.yaml`,根据当前机器环境和需求修改对应配置。内容如下 - - ```shell - all: - hosts: - controller: - ansible_host: - ansible_ssh_private_key_file: - ansible_ssh_user: root - vars: - mysql_root_password: root - mysql_project_password: root - rabbitmq_password: root - project_identity_password: root - enabled_service: - - keystone - - neutron - - cinder - - placement - - nova - - glance - - horizon - - aodh - - ceilometer - - cyborg - - gnocchi - - kolla - - heat - - swift - - trove - - tempest - neutron_provider_interface_name: br-ex - default_ext_subnet_range: 10.100.100.0/24 - default_ext_subnet_gateway: 10.100.100.1 - neutron_dataplane_interface_name: eth1 - cinder_block_device: vdb - swift_storage_devices: - - vdc - swift_hash_path_suffix: ash - swift_hash_path_prefix: has - children: - compute: - hosts: controller - storage: - hosts: controller - network: - hosts: controller - vars: - test-key: test-value - dashboard: - hosts: controller - vars: - allowed_host: '*' - kolla: - hosts: controller - vars: - # We add openEuler OS support for kolla in OpenStack Queens/Rocky release - # Set this var to true if you want to use it in Q/R - openeuler_plugin: false - ``` - - **关键配置** - - | 配置项 | 解释 | - |---|---| - | ansible_host | all in one节点IP | - | ansible_ssh_private_key_file | ansible脚本登录all in one节点时使用的登录秘钥 | - | ansible_ssh_user | ansible脚本登录all in one节点时使用的登录用户 | - | enabled_service | 安装服务列表,根据用户需求自行删减 | - | neutron_provider_interface_name | neutron L3网桥名称 | - | default_ext_subnet_range | neutron私网IP段 | - | default_ext_subnet_gateway | neutron私网gateway | - | neutron_dataplane_interface_name | neutron使用的网卡,推荐使用一张新的网卡,以免和现有网卡冲突,发现all in one主机断连的情况 | - | cinder_block_device | cinder使用的卷设备名 | - | swift_storage_devices | swift使用的卷设备名 | - -4. 执行安装命令 - - ```shell - oos env setup all_in_one - ``` - - 该命令执行后,OpenStack all in one环境就部署成功了 - - 环境变量文件在当前用户的根目录下,名叫`.admin-openrc` - -5. 
初始化tempest环境 - - 如果用户想使用该环境运行tempest测试的话,可以执行命令`oos env init all_in_one`,会自动把tempest需要的OpenStack资源自动创建好。 - - 命令执行成功后,在用户的根目录下会生成`mytest`目录,进入其中就可以执行`tempest run`命令了。 diff --git a/docs/spec/priority_vm.md b/docs/spec/priority_vm.md deleted file mode 100644 index 4ba56ded2446e90da0c35196e7d7237162972cbe..0000000000000000000000000000000000000000 --- a/docs/spec/priority_vm.md +++ /dev/null @@ -1,127 +0,0 @@ - -# 高低优先级虚拟机混部 - -虚拟机混合部署是指把对CPU、IO、Memory等资源有不同需求的虚拟机通过调度方式部署、迁移到同一个计算节点上,从而使得节点的资源得到充分利用。 - -虚拟机混合部署的场景有多种,比如通过动态资源调度满足节点资源的动态调整;根据用户使用习惯动态调整节点虚拟机分布等等。而虚拟机高低优先级调度也是其中的一种实现方法。 - -在OpenStack Nova中引入虚拟机高低优先级技术,可以一定程度上满足虚拟机的混合部署要求。 - -本特性主要针对OpenStack Nova虚拟机创建、迁移功能,介绍虚拟机高低优先级调度的设计与实现。 - -## 实现方案 - -在Nova的虚拟机创建、迁移流程中引入高低优先级概念,虚拟机对象新增高低优先级属性。高优先级虚拟机在调度的过程中,会尽可能的调度到资源充足的节点,这样的节点需要至少满足内存不超卖、高优先级虚拟机所用CPU不超卖的要求。 - -## 实现细节 - -本特性的实现基于即将发布的OpenStack Yoga版本,承载于openEuler 22.09创新版本中。 - -对高低优先混部节点使用 Host Aggregate来识别: - -* 通过`aggregate_instance_extra_specs:priority_mix=true`属性区别是否为混部节点 -* 计算节点配置参数`cpu_shared_set`中配置为低优先级虚拟机预留CPU,`cpu_dedicated_set`中配置高优先级虚拟机可使用的CPU -* 在`nova.conf`的`default`块中增加`cpu_priority_mix_enable`配置, 默认值为False,标识是否允许CPU混用。 - -创建虚拟机时,API请求需要增加`os:scheduler_hints.priority`属性来设置高低优先级机器类型,或者使用已经设置aggregate_instance_extra_specs:priority属性的flavor。 - -### 资源模型 - -* VM对象可选属性`os:scheduler_hints.priority`中设置`priority`值,不进行设置时表示是一个普通VM。`priority`可被设置成`high`或`low`,分别表示高低优先级。 - -* flavor extra_specs设置`hw:cpu_priority`字段,标识为高低优先级虚拟机规格,设置与`os:scheduler_hints.priority`一致,值为`high`或`low`。 - -* flavor extra_specs设置`aggregate_instance_extra_specs:priority=true`,与Host Aggregate中一致。 - -* `nova.conf`的`default`块中参数`cpu_priority_mix_enable`设置为True后,低优先级虚拟机可使用高优先级的虚拟机绑定的CPU,即低优先级虚拟机可使用的CPU为`cpu_shared_set`与`cpu_dedicated_set`中设置的CPU号之和。 - -### API - -创建虚拟机API中可选参数`os:scheduler_hints.priority`可被设置成`high`或`low`,此参数不和flavor中`hw:cpu_priority`属性同时使用。 - -``` -POST v2/servers -{ - "OS-SCH-HNT:scheduler_hints": {"priority": "high"} -} -``` - -迁移API不变 - -### Scheduler - -调度名词解释: - -* 高优虚拟机真实CPU: cpu_dedicated_set指定CPU -* 低优虚拟机真实CPU:cpu_dedicated_set + cpu_shared_set指定CPU -* 高优可售卖CPU数:高优虚拟机真实CPU -* 低优可售卖CPU数:低优虚拟机真实CPU * `cpu_allocation_ratio` - 高优可售卖CPU数 - -新增支持高低优先虚拟机调度Filter `PriorityFilter`,此filter和`numa_topology_filter`不共存: -* 高优先级虚拟机:节点剩余高优可售卖CPU数 > 虚拟机vCPU规格,虚拟机topology为去除低优预留cpu后拓扑 -* 低优先级虚拟机:节点剩余低优可售卖CPU数 > 虚拟机vCPU规格,虚拟机topology为低优虚拟机真实CPU的拓扑 - -举例: -当前物理机CPU有1-12核给虚机使用,准备高优虚机售卖8(1-8)核,低优预留核4(9-12)核,计算节点CPU超卖比例2,那么可以 -创建4核虚机数量如下: -高优机器可创建数量:8 / 4 = 2(台) - -低优机器最多可创建数量:(12 * 2 - 8) / 4 = 4(台) - -\--------------------------------------------- -CPU:     | 1| 2| 3| 4| 5| 6| 7| 8| 9|10|11|12| -\--------------------------------------------- -高优:    |   HVM1  |  HVM2  | -\--------------------------------------------- -低优:    |  LVM1  LVM2   LVM3  LVM4   | -\--------------------------------------------- - -新增Weighter `PriorityWeighter`,此Weighter和`COREWeigher`不共存: -* 高优先级剩余CPU:高优可售卖CPU数 - 高优已使用CPU -* 低优先级剩余CPU:低优可售卖CPU数 - 低优已使用cpu - -### Compute - -#### 资源上报 - -在nova-compute服务上报资源中增加预留cpu信息、高优虚拟机已使用cpu、低优已使用cpu。 - -#### 资源分配绑定 - -高低优先级机器创建按照priority标志分配CPU: - -* 高优先级虚拟机绑定`cpu_dedicated_set`中指定CPU -* 低优先级虚拟机绑定所有真实售卖CPU - -#### 资源预分配 - -在虚拟机的生命周期管理中,高低优先级机器创建按照priority标志进行CPU、内存资源预分配: - -新增优先级虚拟机资源预留,此资源预留与当前numa_topology资源预留不共存 - -* 虚拟机生命周期管理过程,包括创建、(冷、热)迁移、规格变更、疏散、解冻 -* 高低优先级混部虚拟机只能在允许混部宿主机上(冷、热)迁移、规格变更、疏散、解冻 -* 虚拟机(冷、热)迁移、规格变更、疏散、解冻不能超出资源分配比例 - -#### 虚拟机xml - -高低优先级机器创建按照priority标志,对虚拟机进行标识 - -* Libirt XML中新增属性 `high_prio_machine.slice`, `low_prio_machine.slice`,分别表示高低优先级虚拟机。 - -## 开发节奏 - -开发者: - -* 王玺源 -* 郭雷 -* 
马干林 -* 韩光宇 -* 张迎 - -时间点: - -* 2022-04-01到2022-05-30 完成开发 -* 2022-06-01到2022-06-30 完成测试、联调 -* 2022-09-30正式发布 - diff --git "a/docs/test/openEuler 20.03 LTS SP3\347\211\210\346\234\254OpenStack\346\265\213\350\257\225\346\212\245\345\221\212.md" "b/docs/test/openEuler 20.03 LTS SP3\347\211\210\346\234\254OpenStack\346\265\213\350\257\225\346\212\245\345\221\212.md" deleted file mode 100644 index 47b0f3d8e54dbcfc31fbdbd1ef29d3e8e6879d16..0000000000000000000000000000000000000000 --- "a/docs/test/openEuler 20.03 LTS SP3\347\211\210\346\234\254OpenStack\346\265\213\350\257\225\346\212\245\345\221\212.md" +++ /dev/null @@ -1,118 +0,0 @@ -![openEuler ico](../../images/openEuler.png) - -版权所有 © 2021 openEuler社区 - 您对“本文档”的复制、使用、修改及分发受知识共享(Creative Commons)署名—相同方式共享4.0国际公共许可协议(以下简称“CC BY-SA 4.0”)的约束。为了方便用户理解,您可以通过访问[https://creativecommons.org/licenses/by-sa/4.0/](https://creativecommons.org/licenses/by-sa/4.0/)了解CC BY-SA 4.0的概要 (但不是替代)。CC BY-SA 4.0的完整协议内容您可以访问如下网址获取:[https://creativecommons.org/licenses/by-sa/4.0/legalcode。](https://creativecommons.org/licenses/by-sa/4.0/legalcode。) - -修订记录 - -|日期|修订版本|修改描述|作者| -|:----|:----|:----|:----| -|2021-12-10|1|初稿及同步Train版本测试情况|黄填华| -| | | | | -| | | | | - -关键词: - -OpenStack - -摘要: - -在openEuler 20.03 LTS SP3版本中提供OpenStack Queens、Rocky、Train版本的RPM安装包。方便用户快速部署OpenStack。 - -缩略语清单: - -|缩略语|英文全名|中文解释| -|:----|:----|:----| -|CLI|Command Line Interface|命令行工具| -|ECS|Elastic Cloud Server|弹性云服务器| - -# 1 特性概述 - -在openEuler 20.03 LTS SP2 release中提供OpenStack Queens、Rocky RPM安装包支持,包括项目:Keystone、Glance、Nova、Neutron、Cinder、Ironic、Trove、Kolla、Horizon、Tempest以及每个项目配套的CLI。 -openEuler 20.03 LTS SP3 release增加了OpenStack Train版本RPM安装包支持,包括项目:Keystone、Glance、Placement、Nova、Neutron、Cinder、Ironic、Trove、Kolla、Heat、Aodh、Ceilometer、Gnocchi、Swift、Horizon、Tempest以及每个项目配套的CLI。 - -# 2 特性测试信息 - -本节描述被测对象的版本信息和测试的时间及测试轮次,包括依赖的硬件。 - -|版本名称|测试起始时间|测试结束时间| -|:----|:----|:----| -|openEuler 20.03 LTS SP3 RC1
(OpenStack Train版本各组件的安装部署测试)|2021.11.25|2021.11.30| -|openEuler 20.03 LTS SP3 RC1
(OpenStack Train版本基本功能测试,包括虚拟机,卷,网络相关资源的增删改查)|2021.12.1|2021.12.2| -|openEuler 20.03 LTS SP3 RC2
(OpenStack Train版本tempest集成测试)|2021.12.3|2021.12.9| -|openEuler 20.03 LTS SP3 RC3
(OpenStack Train版本问题回归测试)|2021.12.10|2021.12.12| -|openEuler 20.03 LTS SP3 RC3
(OpenStack Queens&Rocky版本各组件的安装部署测试)|2021.12.10|2021.12.13| -|openEuler 20.03 LTS SP3 RC3
(OpenStack Queens&Rocky版本基本功能测试,包括虚拟机,卷,网络相关资源的增删改查)|2021.12.14|2021.12.16| -|openEuler 20.03 LTS SP3 RC4
(OpenStack Queens&Rocky版本tempest集成测试)|2021.12.17|2021.12.20| -|openEuler 20.03 LTS SP3 RC4
(OpenStack Queens&Rocky版本问题回归测试)|2021.12.21|2021.12.23| - -描述特性测试的硬件环境信息 - -|硬件型号|硬件配置信息|备注| -|:----|:----|:----| -|华为云ECS|Intel Cascade Lake 3.0GHz 8U16G|华为云x86虚拟机| -|华为云ECS|Huawei Kunpeng 920 2.6GHz 8U16G|华为云arm64虚拟机| -|TaiShan 200-2280|Kunpeng 920,48 Core@2.6GHz*2; 256GB DDR4 RAM|ARM架构服务器| - -# 3 测试结论概述 - -## 3.1 测试整体结论 - -OpenStack Queens版本,共计执行Tempest用例1164个,主要覆盖了API测试和功能测试,Skip用例52个(全是openStack Queens版中已废弃的功能或接口,如Keystone V1、Cinder V1等),失败用例3个(测试用例本身问题),其他1109个用例全部通过,发现问题已解决,回归通过,无遗留风险,整体质量良好。 - -OpenStack Rocky版本,共计执行Tempest用例1197个,主要覆盖了API测试和功能测试,Skip用例101个(全是openStack Rocky版中已废弃的功能或接口,如KeystoneV1、Cinder V1等),其他1096个用例全部通过,发现问题已解决,回归通过,无遗留风险,整体质量良好。 - -OpenStack Train版本除了Cyborg(Cyborg安装部署正常,功能不可用)各组件基本功能正常,共计执行Tempest用例1179个,主要覆盖了API测试和功能测试,Skip用例115个(包括已废弃的功能或接口,如Keystone V1、Cinder V1等,包括一些复杂功能,比如文件注入,虚拟机配置等),其他1064个用例全部通过,共计发现问题14个(包括libvirt 1个问题),均已解决,回归通过,无遗留风险,整体质量良好。 - -|测试活动|tempest集成测试| -|:----|:----| -|接口测试|API全覆盖| -|功能测试|Queens版本覆盖Tempest所有相关测试用例1164个,其中Skip 52个,Fail 3个,其他全通过。| -|功能测试|Rocky版本覆盖Tempest所有相关测试用例1197个,其中Skip 101个,其他全通过。| -|功能测试|Train版本覆盖Tempest所有相关测试用例1179个,其中Skip 115个,其他全通过。| - -|测试活动|功能测试| -|:----|:----| -|功能测试|虚拟机(KVM、Qemu)、存储(lvm)、网络资源(linuxbridge)管理操作正常| - -## 3.2 约束说明 - -本次测试没有覆盖OpenStack Queens、Rocky版中明确废弃的功能和接口,因此不能保证已废弃的功能和接口(前文提到的Skip的用例)在openEuler 20.03 LTS SP3上能正常使用,另外Cyborg功能不可用。 - -## 3.3 遗留问题分析 - -### 3.3.1 Queens&Rocky遗留问题影响以及规避措施 - -|问题单号|问题描述|问题级别|问题影响和规避措施|当前状态| -|:----|:----|:----|:----|:----| -|1|targetcli软件包与python2-rtslib-fb包冲突,无法安装|中|使用tgtadm代替lioadm命令|解决中| -|2|python2-flake8软件包依赖低版本的pyflakes,导致yum update命令报出警告|低|使用yum update --nobest命令升级软件包|解决中| - -### 3.3.2 Train版本问题统计 - -| |问题总数|严重|主要|次要|不重要| -|:----|:----|:----|:----|:----|:----| -|数目|14|1|6|7| | -|百分比|100|7.1|42.9|50| | - - -# 4 测试执行 - -## 4.1 测试执行统计数据 - -*本节内容根据测试用例及实际执行情况进行特性整体测试的统计,可根据第二章的测试轮次分开进行统计说明。* - -|版本名称|测试用例数|用例执行结果|发现问题单数| -|:----|:----|:----|:----| -|openEuler 20.03 LTS SP3 OpenStack Queens|1164|通过1109个,skip 52个,Fail 3个|0| -|openEuler 20.03 LTS SP3 OpenStack Rocky|1197|通过1096个,skip 101个|0| -|openEuler 20.03 LTS SP3 OpenStack Train|1179|通过1064个,skip 115个|14| - -## 4.2 后续测试建议 - -1. 涵盖主要的性能测试 -2. 
覆盖更多的driver/plugin测试 - -# 5 附件 - -*N/A* diff --git a/docs/test/openEuler-20.03-LTS-SP2.md b/docs/test/openEuler-20.03-LTS-SP2.md deleted file mode 100644 index 9225c0035ae6612cb4fca830ded0dfe19ad626d5..0000000000000000000000000000000000000000 --- a/docs/test/openEuler-20.03-LTS-SP2.md +++ /dev/null @@ -1,109 +0,0 @@ -![openEuler ico](../../images/openEuler.png) - -版权所有 © 2021 openEuler社区 - 您对“本文档”的复制、使用、修改及分发受知识共享(Creative Commons)署名—相同方式共享4.0国际公共许可协议(以下简称“CC BY-SA 4.0”)的约束。为了方便用户理解,您可以通过访问[https://creativecommons.org/licenses/by-sa/4.0/](https://creativecommons.org/licenses/by-sa/4.0/)了解CC BY-SA 4.0的概要 (但不是替代)。CC BY-SA 4.0的完整协议内容您可以访问如下网址获取:[https://creativecommons.org/licenses/by-sa/4.0/legalcode。](https://creativecommons.org/licenses/by-sa/4.0/legalcode。) - -修订记录 - -|日期|修订版本|修改描述|作者| -|:----|:----|:----|:----| -|2021-6-16|1|初稿|王玺源| -|2021-6-17|2|增加Rocky版本测试报告|黄填华| -| | | | | -| | | | | - -关键词: - -OpenStack - -摘要: - -在openEuler 20.03 LTS SP2版本中提供OpenStack Queens、Rocky版本的RPM安装包。方便用户快速部署OpenStack。 - -缩略语清单: - -|缩略语|英文全名|中文解释| -|:----|:----|:----| -|CLI|Command Line Interface|命令行工具| -|ECS|Elastic Cloud Server|弹性云服务器| - -# 1 特性概述 - -在openEuler 20.03 LTS SP2 release中提供OpenStack Queens、Rocky RPM安装包支持,包括项目:Keystone、Glance、Nova、Neutron、Cinder、Ironic、Trove、Kolla、Horizon、Tempest以及每个项目配套的CLI。 - -# 2 特性测试信息 - -本节描述被测对象的版本信息和测试的时间及测试轮次,包括依赖的硬件。 - -|版本名称|测试起始时间|测试结束时间| -|:----|:----|:----| -|openEuler 20.03 LTS SP2
(OpenStack各组件的安装部署测试)|2021.6.1|2021.6.7| -|openEuler 20.03 LTS SP2
(OpenStack基本功能测试,包括虚拟机,卷,网络相关资源的增删改查)|2021.6.8|2021.6.10| -|openEuler 20.03 LTS SP2
(OpenStack tempest集成测试)|2021.6.11|2021.6.15| -|openEuler 20.03 LTS SP2
(问题回归测试)|2021.6.16|2021.6.17| - -描述特性测试的硬件环境信息 - -|硬件型号|硬件配置信息|备注| -|:----|:----|:----| -|华为云ECS|Intel Cascade Lake 3.0GHz 8U16G|华为云x86虚拟机| -|华为云ECS|Huawei Kunpeng 920 2.6GHz 8U16G|华为云arm64虚拟机| -|TaiShan 200-2280|Kunpeng 920,48 Core@2.6GHz*2; 256GB DDR4 RAM|ARM架构服务器| - -# 3 测试结论概述 - -## 3.1 测试整体结论 - -OpenStack Queens版本,共计执行Tempest用例1164个,主要覆盖了API测试和功能测试,通过7*24的长稳测试,Skip用例52个(全是openStack Queens版中已废弃的功能或接口,如Keystone V1、Cinder V1等),失败用例3个(测试用例本身问题),其他1109个用例全部通过,发现问题已解决,回归通过,无遗留风险,整体质量良好。 - -OpenStack Rocky版本,共计执行Tempest用例1197个,主要覆盖了API测试和功能测试,通过7*24的长稳测试,Skip用例105个(全是openStack Rocky版中已废弃的功能或接口,如KeystoneV1、Cinder V1等,和不支持的barbican项目),失败用例1个,其他1091个用例全部通过,发现问题已解决,回归通过,无遗留风险,整体质量良好。 - -|测试活动|tempest集成测试| -|:----|:----| -|接口测试|API全覆盖| -|功能测试|Queens版本覆盖Tempest所有相关测试用例1164个,其中Skip 52个,Fail 3个,其他全通过。| -|功能测试|Rocky版本覆盖Tempest所有相关测试用例1197个,其中Skip 105个,Fail 1个, 其他全通过。| - -|测试活动|功能测试| -|:----|:----| -|功能测试|虚拟机(KVM、Qemu)、存储(lvm、NFS、Ceph后端)、网络资源(linuxbridge、openvswitch)管理操作正常| - -## 3.2 约束说明 - -本次测试没有覆盖OpenStack Queens、Rocky版中明确废弃的功能和接口,因此不能保证已废弃的功能和接口(前文提到的Skip的用例)在openEuler 20.03 LTS SP2上能正常使用。 - -## 3.3 遗留问题分析 - -### 3.3.1 遗留问题影响以及规避措施 - -|问题单号|问题描述|问题级别|问题影响和规避措施|当前状态| -|:----|:----|:----|:----|:----| -|1|targetcli软件包与python2-rtslib-fb包冲突,无法安装|中|使用tgtadm代替lioadm命令|解决中| -|2|python2-flake8软件包依赖低版本的pyflakes,导致yum update命令报出警告|低|使用yum update --nobest命令升级软件包|解决中| - -### 3.3.2 问题统计 - -| |问题总数|严重|主要|次要|不重要| -|:----|:----|:----|:----|:----|:----| -|数目|14|3|6|5| | -|百分比|100|21.4|42.8|35.8| | - -# 4 测试执行 - -## 4.1 测试执行统计数据 - -*本节内容根据测试用例及实际执行情况进行特性整体测试的统计,可根据第二章的测试轮次分开进行统计说明。* - -|版本名称|测试用例数|用例执行结果|发现问题单数| -|:----|:----|:----|:----| -|openEuler 20.03 LTS SP2 OpenStack Queens|1164|通过1109个,skip 52个,Fail 3个|7| -|openEuler 20.03 LTS SP2 OpenStack Rocky|1197|通过1001个,skip 101个|7| - -## 4.2 后续测试建议 - -1. 涵盖主要的性能测试 -2. 覆盖更多的driver/plugin测试 - -# 5 附件 - -*N/A* diff --git a/docs/test/openEuler-22.03-LTS.md b/docs/test/openEuler-22.03-LTS.md deleted file mode 100644 index 2c9813a4b0ca6cda09df15e7576df32b2511cc80..0000000000000000000000000000000000000000 --- a/docs/test/openEuler-22.03-LTS.md +++ /dev/null @@ -1,145 +0,0 @@ -# openEuler 22.03 LTS 测试报告 - -![openEuler ico](../../images/openEuler.png) - -版权所有 © 2021 openEuler社区 -您对“本文档”的复制、使用、修改及分发受知识共享(Creative Commons)署名—相同方式共享4.0国际公共许可协议(以下简称“CC BY-SA 4.0”)的约束。为了方便用户理解,您可以通过访问[https://creativecommons.org/licenses/by-sa/4.0/](https://creativecommons.org/licenses/by-sa/4.0/)了解CC BY-SA 4.0的概要 (但不是替代)。CC BY-SA 4.0的完整协议内容您可以访问如下网址获取:[https://creativecommons.org/licenses/by-sa/4.0/legalcode。](https://creativecommons.org/licenses/by-sa/4.0/legalcode。) - -修订记录 - -|日期|修订版本|修改描述|作者| -|:----|:----|:----|:----| -|2022-03-21|1|初稿|李佳伟| - -关键词: - -OpenStack - -摘要: - -在 ```openEuler 22.03 LTS``` 版本中提供 ```OpenStack Train```、```OpenStack Wallaby``` 版本的 ```RPM``` 安装包,方便用户快速部署 ```OpenStack```。 - -缩略语清单: - -|缩略语|英文全名|中文解释| -|:----|:----|:----| -|CLI|Command Line Interface|命令行工具| -|ECS|Elastic Cloud Server|弹性云服务器| - -## 1 特性概述 - -在 ```openEuler 22.03 LTS``` 版本中提供 ```OpenStack Train```、```OpenStack Wallaby``` 版本的```RPM```安装包,包括以下项目以及每个项目配套的 ```CLI```。 - -- Keystone - -- Neutron - -- Cinder - -- Nova - -- Placement - -- Glance - -- Horizon - -- Aodh - -- Ceilometer - -- Cyborg - -- Gnocchi - -- Heat - -- Swift - -- Ironic - -- Kolla - -- Trove - -- Tempest - -## 2 特性测试信息 - -本节描述被测对象的版本信息和测试的时间及测试轮次,包括依赖的硬件。 - -|版本名称|测试起始时间|测试结束时间| -|:----|:----|:----| -|openEuler 22.03 LTS RC1
(OpenStack Train版本各组件的安装部署测试)|2022.02.20|2022.02.27| -|openEuler 22.03 LTS RC1
(OpenStack Train版本基本功能测试,包括虚拟机,卷,网络相关资源的增删改查)|2022.02.28|2022.03.03| -|openEuler 22.03 LTS RC2
(OpenStack Train版本tempest集成测试)|2022.03.04|2022.03.07| -|openEuler 22.03 LTS RC3
(OpenStack Train版本问题回归测试)|2022.03.08|2022.03.09| -|openEuler 22.03 LTS RC3
(OpenStack Wallaby版本各组件的安装部署测试)|2022.03.10|2022.03.15| -|openEuler 22.03 LTS RC3
(OpenStack Wallaby版本基本功能测试,包括虚拟机,卷,网络相关资源的增删改查)|2022.03.16|2022.03.19| -|openEuler 22.03 LTS RC4
(OpenStack Wallaby版本tempest集成测试)|2022.03.20|2022.03.21| -|openEuler 22.03 LTS RC4
(OpenStack Wallaby版本问题回归测试)|2022.03.21|2022.03.22| - -描述特性测试的硬件环境信息 - -|硬件型号|硬件配置信息|备注| -|:----|:----|:----| -|华为云ECS|Intel Cascade Lake 3.0GHz 8U16G|华为云x86虚拟机| -|华为云ECS|Huawei Kunpeng 920 2.6GHz 8U16G|华为云arm64虚拟机| -|TaiShan 200-2280|Kunpeng 920,48 Core@2.6GHz*2; 256GB DDR4 RAM|ARM架构服务器| - -## 3 测试结论概述 - -### 3.1 测试整体结论 - -```OpenStack Train``` 版本,共计执行 ```Tempest``` 用例 ```1354``` 个,主要覆盖了 ```API``` 测试和功能测试,通过 ```7*24``` 的长稳测试,```Skip``` 用例 ```64``` 个(全是 ```OpenStack Train``` 版中已废弃的功能或接口,如Keystone V1、Cinder V1等),失败用例 ```1``` 个(测试用例本身问题),其他 ```1289``` 个用例全部通过,发现问题已解决,回归通过,无遗留风险,整体质量良好。 - -```OpenStack Wallaby``` 版本,共计执行 ```Tempest``` 用例 ```1164``` 个,主要覆盖了API测试和功能测试,通过 ```7*24``` 的长稳测试,```Skip``` 用例 ```70``` 个(全是 ```OpenStack Wallaby``` 版中已废弃的功能或接口,如KeystoneV1、Cinder V1等,和不支持的barbican项目),失败用例 ```6``` 个,其他 ```1088``` 个用例全部通过,发现问题已解决,回归通过,无遗留风险,整体质量良好。 - -|测试活动|tempest集成测试| -|:----|:----| -|接口测试|API全覆盖| -|功能测试|Train版本覆盖Tempest所有相关测试用例1354个,其中Skip 64个,Fail 1个,其他全通过。| -|功能测试|Wallaby版本覆盖Tempest所有相关测试用例1164个,其中Skip 70个,Fail 6个, 其他全通过。| - -|测试活动|功能测试| -|:----|:----| -|功能测试|虚拟机(KVM、Qemu)、存储(lvm、NFS、Ceph后端)、网络资源(linuxbridge、openvswitch)管理操作正常| - -### 3.2 约束说明 - -本次测试没有覆盖 ```OpenStack Train```、```OpenStack Wallaby``` 版中明确废弃的功能和接口,因此不能保证已废弃的功能和接口(前文提到的Skip的用例)在 ```openEuler 22.03 LTS``` 上能正常使用。 - -### 3.3 遗留问题分析 - -#### 3.3.1 遗留问题影响以及规避措施 - -|问题单号|问题描述|问题级别|问题影响和规避措施|当前状态| -|:----|:----|:----|:----|:----| -|N/A|N/A|N/A|N/A|N/A| - -#### 3.3.2 问题统计 - -| |问题总数|严重|主要|次要|不重要| -|:----|:----|:----|:----|:----|:----| -|数目|10|2|6|2|0| -|百分比|100|20|60|20|0| - -## 4 测试执行 - -### 4.1 测试执行统计数据 - -*本节内容根据测试用例及实际执行情况进行特性整体测试的统计,可根据第二章的测试轮次分开进行统计说明。* - -|版本名称|测试用例数|用例执行结果|发现问题单数| -|:----|:----|:----|:----| -|openEuler 22.03 LTS OpenStack Train|1354|通过1289个,skip 64个,Fail 1个|7| -|openEuler 22.03 LTS OpenStack Wallaby|1164|通过1088个,skip 70个,Fail 6个|3| - -### 4.2 后续测试建议 - -1. 涵盖主要的性能测试 -2. 
覆盖更多的driver/plugin测试 - -## 5 附件 - -*N/A* diff --git a/example/README.md b/example/README.md deleted file mode 100644 index a29818b82e8ca05345925d5d25903857251ad52f..0000000000000000000000000000000000000000 --- a/example/README.md +++ /dev/null @@ -1,7 +0,0 @@ -# Example - -example目录存放了一些openstack相关的文件demo,方便用户参考。 - -## openstack-config - -该目录存放了openstack 3节点部署架构下,每个服务的配置文件样例。 diff --git a/example/openstack-train-config/compute/linuxbridge_agent.ini b/example/openstack-train-config/compute/linuxbridge_agent.ini deleted file mode 100644 index 211412d330d11828413ba1a4bf37a2df376ca790..0000000000000000000000000000000000000000 --- a/example/openstack-train-config/compute/linuxbridge_agent.ini +++ /dev/null @@ -1,11 +0,0 @@ -[linux_bridge] -#physical_interface_mappings = provider:eth0 - -[vxlan] -enable_vxlan = true -local_ip = 192.168.1.120 -l2_population = true - -[securitygroup] -enable_security_group = true -firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver diff --git a/example/openstack-train-config/compute/neutron.conf b/example/openstack-train-config/compute/neutron.conf deleted file mode 100644 index 7ecd78562c2f51c6540bfcdfa96215b930bf16ed..0000000000000000000000000000000000000000 --- a/example/openstack-train-config/compute/neutron.conf +++ /dev/null @@ -1,17 +0,0 @@ -[DEFAULT] -transport_url = rabbit://openstack:openstack@controller -auth_strategy = keystone - -[keystone_authtoken] -www_authenticate_uri = http://controller:5000 -auth_url = http://controller:5000 -memcached_servers = controller:11211 -auth_type = password -project_domain_name = default -user_domain_name = default -project_name = service -username = neutron -password = neutron - -[oslo_concurrency] -lock_path = /var/lib/neutron/tmp diff --git a/example/openstack-train-config/compute/nova.conf b/example/openstack-train-config/compute/nova.conf deleted file mode 100644 index 72082ac5e1afbc59e5628182908fcce205c41b8c..0000000000000000000000000000000000000000 --- a/example/openstack-train-config/compute/nova.conf +++ /dev/null @@ -1,62 +0,0 @@ -[DEFAULT] -enabled_apis = osapi_compute,metadata -transport_url = rabbit://openstack:openstack@controller:5672/ -my_ip = 192.168.1.120 -use_neutron = true -firewall_driver = nova.virt.firewall.NoopFirewallDriver -compute_driver=libvirt.LibvirtDriver -instances_path = /var/lib/nova/instances/ -lock_path = /var/lib/nova/tmp - -[api] -auth_strategy = keystone - -[keystone_authtoken] -www_authenticate_uri = http://controller:5000/ -auth_url = http://controller:5000/ -memcached_servers = controller:11211 -auth_type = password -project_domain_name = Default -user_domain_name = Default -project_name = service -username = nova -password = nova - -[vnc] -enabled = true -server_listen = 0.0.0.0 -server_proxyclient_address = $my_ip -novncproxy_base_url = http://controller:6080/vnc_auto.html - -[libvirt] -virt_type = qemu -cpu_mode = custom -cpu_models = cortex-a72 -num_pcie_ports = 12 - -[glance] -api_servers = http://controller:9292 - -[oslo_concurrency] -lock_path = /var/lib/nova/tmp - -[placement] -region_name = RegionOne -project_domain_name = Default -project_name = service -auth_type = password -user_domain_name = Default -auth_url = http://controller:5000/v3 -username = placement -password = placement - -[neutron] -auth_url = http://controller:5000 -auth_type = password -project_domain_name = default -user_domain_name = default -region_name = RegionOne -project_name = service -username = neutron -password = neutron - diff --git 
a/example/openstack-train-config/controller/cinder.conf b/example/openstack-train-config/controller/cinder.conf deleted file mode 100644 index eb557307376ffac8e9a2e652bb925505388def9d..0000000000000000000000000000000000000000 --- a/example/openstack-train-config/controller/cinder.conf +++ /dev/null @@ -1,21 +0,0 @@ -[DEFAULT] -transport_url = rabbit://openstack:openstack@controller -auth_strategy = keystone -my_ip = 192.168.1.157 - -[database] -connection = mysql+pymysql://cinder:cinder@controller/cinder - -[keystone_authtoken] -www_authenticate_uri = http://controller:5000 -auth_url = http://controller:5000 -memcached_servers = controller:11211 -auth_type = password -project_domain_name = Default -user_domain_name = Default -project_name = service -username = cinder -password = cinder - -[oslo_concurrency] -lock_path = /var/lib/cinder/tmp diff --git a/example/openstack-train-config/controller/dhcp_agent.ini b/example/openstack-train-config/controller/dhcp_agent.ini deleted file mode 100644 index bfc2439f124de3010b834d17194ba94c05ebc7fd..0000000000000000000000000000000000000000 --- a/example/openstack-train-config/controller/dhcp_agent.ini +++ /dev/null @@ -1,4 +0,0 @@ -[DEFAULT] -interface_driver = linuxbridge -dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq -enable_isolated_metadata = true diff --git a/example/openstack-train-config/controller/glance-api.conf b/example/openstack-train-config/controller/glance-api.conf deleted file mode 100644 index 2a9ca75d9b0a48d65be68b3d326220301761dd49..0000000000000000000000000000000000000000 --- a/example/openstack-train-config/controller/glance-api.conf +++ /dev/null @@ -1,22 +0,0 @@ -[database] -connection = mysql+pymysql://glance:glance@controller/glance - -[keystone_authtoken] -www_authenticate_uri = http://controller:5000 -auth_url = http://controller:5000 -memcached_servers = controller:11211 -auth_type = password -project_domain_name = Default -user_domain_name = Default -project_name = service -username = glance -password = glance - -[paste_deploy] -flavor = keystone - -[glance_store] -stores = file,http -default_store = file -filesystem_store_datadir = /var/lib/glance/images/ - diff --git a/example/openstack-train-config/controller/heat.conf b/example/openstack-train-config/controller/heat.conf deleted file mode 100644 index 5b8d32919258199daf6f52bf491ff821b4726d33..0000000000000000000000000000000000000000 --- a/example/openstack-train-config/controller/heat.conf +++ /dev/null @@ -1,33 +0,0 @@ -[database] -connection = mysql+pymysql://heat:heat@controller/heat - - -[DEFAULT] -transport_url = rabbit://openstack:openstack@controller -heat_metadata_server_url = http://controller:8000 -heat_waitcondition_server_url = http://controller:8000/v1/waitcondition -stack_domain_admin = heat_domain_admin -stack_domain_admin_password = heat -stack_user_domain_name = heat - -[keystone_authtoken] -www_authenticate_uri = http://controller:5000 -auth_url = http://controller:5000 -memcached_servers = controller:11211 -auth_type = password -project_domain_name = default -user_domain_name = default -project_name = service -username = heat -password = heat - -[trustee] -auth_type = password -auth_url = http://controller:5000 -username = heat -password = heat -user_domain_name = default - -[clients_keystone] -auth_uri = http://controller:5000 - diff --git a/example/openstack-train-config/controller/l3_agent.ini b/example/openstack-train-config/controller/l3_agent.ini deleted file mode 100644 index 
33b6f1e4d98b101545036eb0fb2dc1f886c1362f..0000000000000000000000000000000000000000 --- a/example/openstack-train-config/controller/l3_agent.ini +++ /dev/null @@ -1,2 +0,0 @@ -[DEFAULT] -interface_driver = linuxbridge diff --git a/example/openstack-train-config/controller/linuxbridge_agent.ini b/example/openstack-train-config/controller/linuxbridge_agent.ini deleted file mode 100644 index db4b71720886e53efb840c146c1d25c2506a90b4..0000000000000000000000000000000000000000 --- a/example/openstack-train-config/controller/linuxbridge_agent.ini +++ /dev/null @@ -1,11 +0,0 @@ -[linux_bridge] -physical_interface_mappings = provider:eth1 - -[vxlan] -enable_vxlan = true -local_ip = 192.168.1.157 -l2_population = true - -[securitygroup] -enable_security_group = true -firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver diff --git a/example/openstack-train-config/controller/metadata_agent.ini b/example/openstack-train-config/controller/metadata_agent.ini deleted file mode 100644 index 3bb716c93d5acfd978546aadb5f22d6f08352025..0000000000000000000000000000000000000000 --- a/example/openstack-train-config/controller/metadata_agent.ini +++ /dev/null @@ -1,3 +0,0 @@ -[DEFAULT] -nova_metadata_host = controller -metadata_proxy_shared_secret = meta_share diff --git a/example/openstack-train-config/controller/ml2_conf.ini b/example/openstack-train-config/controller/ml2_conf.ini deleted file mode 100644 index cfce00f37f0cff523d088fe8e93dc7d6fc86b13b..0000000000000000000000000000000000000000 --- a/example/openstack-train-config/controller/ml2_conf.ini +++ /dev/null @@ -1,14 +0,0 @@ -[ml2] -type_drivers = flat,vlan,vxlan -tenant_network_types = vxlan -mechanism_drivers = linuxbridge,l2population -extension_drivers = port_security - -[ml2_type_flat] -flat_networks = provider - -[ml2_type_vxlan] -vni_ranges = 1:1000 - -[securitygroup] -enable_ipset = true diff --git a/example/openstack-train-config/controller/neutron.conf b/example/openstack-train-config/controller/neutron.conf deleted file mode 100644 index 6df5228abaf9119e2cb0562c7f70d413902167cc..0000000000000000000000000000000000000000 --- a/example/openstack-train-config/controller/neutron.conf +++ /dev/null @@ -1,46 +0,0 @@ -[database] -connection = mysql+pymysql://neutron:neutron@controller/neutron - -[DEFAULT] -core_plugin = ml2 -service_plugins = router -allow_overlapping_ips = true -transport_url = rabbit://openstack:openstack@controller -auth_strategy = keystone -notify_nova_on_port_status_changes = true -notify_nova_on_port_data_changes = true -api_workers = 2 - -[keystone_authtoken] -www_authenticate_uri = http://controller:5000 -auth_url = http://controller:5000/v3 -memcached_servers = controller:11211 -auth_type = password -project_domain_name = Default -user_domain_name = Default -project_name = service -username = neutron -password = neutron - -[nova] -auth_url = http://controller:5000 -auth_type = password -project_domain_name = Default -user_domain_name = Default -region_name = RegionOne -project_name = service -username = nova -password = nova - -[placement] -region_name = RegionOne -project_domain_name = Default -project_name = service -auth_type = password -user_domain_name = Default -auth_url = http://controller:5000/v3 -username = placement -password = placement - -[oslo_concurrency] -lock_path = /var/lib/neutron/tmp diff --git a/example/openstack-train-config/controller/nova.conf b/example/openstack-train-config/controller/nova.conf deleted file mode 100644 index 
8bacec2e67f4f19cfb97619417ba3ed7af8cb9c7..0000000000000000000000000000000000000000 --- a/example/openstack-train-config/controller/nova.conf +++ /dev/null @@ -1,58 +0,0 @@ -[DEFAULT] -enabled_apis = osapi_compute,metadata -transport_url = rabbit://openstack:openstack@controller:5672/ -my_ip = 192.168.1.157 -use_neutron = true -firewall_driver = nova.virt.firewall.NoopFirewallDriver - -[api] -auth_strategy = keystone - -[api_database] -connection = mysql+pymysql://nova:nova@controller/nova_api - -[database] -connection = mysql+pymysql://nova:nova@controller/nova - -[keystone_authtoken] -www_authenticate_uri = http://controller:5000/ -auth_url = http://controller:5000/ -memcached_servers = controller:11211 -auth_type = password -project_domain_name = Default -user_domain_name = Default -project_name = service -username = nova -password = nova - -[glance] -api_servers = http://controller:9292 - -[neutron] -auth_url = http://controller:5000 -auth_type = password -project_domain_name = default -user_domain_name = default -region_name = RegionOne -project_name = service -username = neutron -password = neutron - -[placement] -region_name = RegionOne -project_domain_name = Default -project_name = service -auth_type = password -user_domain_name = Default -auth_url = http://controller:5000/v3 -username = placement -password = placement - -[vnc] -enabled = true -server_listen = $my_ip -server_proxyclient_address = $my_ip - -[oslo_concurrency] -lock_path = /var/lib/nova/tmp - diff --git a/example/openstack-train-config/controller/placement.conf b/example/openstack-train-config/controller/placement.conf deleted file mode 100644 index be38a6e5994f10cbc18175fa746304468bbfb5c8..0000000000000000000000000000000000000000 --- a/example/openstack-train-config/controller/placement.conf +++ /dev/null @@ -1,15 +0,0 @@ -[placement_database] -connection = mysql+pymysql://placement:placement@controller/placement - -[api] -auth_strategy = keystone - -[keystone_authtoken] -auth_url = http://controller:5000/v3 -memcached_servers = controller:11211 -auth_type = password -project_domain_name = Default -user_domain_name = Default -project_name = service -username = placement -password = placement diff --git a/example/openstack-train-config/storage/cinder.conf b/example/openstack-train-config/storage/cinder.conf deleted file mode 100644 index 2b1c6f5ab3066cf11bf6ba0c5a94a2e1dcb3cbdf..0000000000000000000000000000000000000000 --- a/example/openstack-train-config/storage/cinder.conf +++ /dev/null @@ -1,30 +0,0 @@ -[DEFAULT] -transport_url = rabbit://openstack:openstack@controller -auth_strategy = keystone -my_ip = 192.168.1.120 -enabled_backends = lvm -#backup_driver=cinder.backup.drivers.nfs.NFSBackupDriver -#backup_share=HOST:PATH - -[database] -connection = mysql+pymysql://cinder:cinder@controller/cinder - -[keystone_authtoken] -www_authenticate_uri = http://controller:5000 -auth_url = http://controller:5000 -memcached_servers = controller:11211 -auth_type = password -project_domain_name = Default -user_domain_name = Default -project_name = service -username = cinder -password = cinder - -[oslo_concurrency] -lock_path = /var/lib/cinder/tmp - -[lvm] -volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver -volume_group = cinder-volumes -iscsi_protocol = iscsi -iscsi_helper = tgtadm diff --git a/example/openstack-train-config/tempest/tempest.conf b/example/openstack-train-config/tempest/tempest.conf deleted file mode 100644 index aa0d051a9d68589fdff979f01796bffb7da4aec4..0000000000000000000000000000000000000000 --- 
a/example/openstack-train-config/tempest/tempest.conf +++ /dev/null @@ -1,98 +0,0 @@ -[DEFAULT] -log_dir = /root/mytest/logs -log_file = tempest.log -#debug = true - -[auth] -admin_username = admin -admin_password = admin -admin_project_name = admin -admin_domain_name = Default - -[identity] -auth_version = v3 -uri_v3 = http://controller:5000/v3 - -[identity-feature-enabled] -security_compliance = true -project_tags = true -application_credentials = true - -[compute] -flavor_ref = 3757173b-21a5-49b5-b5fc-7b18a2f514a2 -flavor_ref_alt = 83824e43-4a63-49c4-b055-a8aeace13010 -image_ref = 53e9e0c5-58b5-41e9-ac77-e89045066246 -image_ref_alt = 33e90c23-8c9b-4e91-8319-f1b38ada916d -min_microversion = 2.1 -max_microversion = 2.79 -min_compute_nodes = 2 -fixed_network_name = private - -[scenario] -img_file = /home/hxj/cirros-0.5.2-aarch64-disk.img -img_container_format = bare -img_disk_format = qcow2 - -[compute-feature-enabled] -change_password = true -swap_volume = true -volume_multiattach = true -resize = true -#volume_backed_live_migration = true -#block_migration_for_live_migration = true -#block_migrate_cinder_iscsi = true -#scheduler_enabled_filters = DifferentHostFilter -vnc_console = true -live_migration = false - -[oslo_concurrency] -lock_path = /root/mytest/tempest_lock - -[volume] -min_microversion = 3.0 -max_microversion = 3.59 -backend_names = lvm - -[volume-feature-enabled] -backup = false -multi_backend = true -manage_volume = true -manage_snapshot = true -extend_attached_volume = true - -[service_available] -nova = true -cinder = true -neutron = true -glance = true -horizon = true -heat = true -placement = true -swift = true -keystone = true - -[placement] -min_microversion = 1.0 -max_microversion = 1.36 - -[network] -public_network_id = dde72509-9c1f-4d07-ba07-8477f6e89815 -project_network_cidr = 172.188.0.0/16 -floating_network_name = public-network - -[network-feature-enabled] -port_security = true -ipv6_subnet_attributes = true -qos_placement_physnet = true - -[image-feature-enabled] -import_image = true - -[validation] -image_ssh_user = cirros -image_ssh_password = gocubsgo -image_alt_ssh_user = cirros -image_alt_ssh_password = gocubsgo - -[debug] -trace_requests = .* diff --git a/example/openstack-victoria-config/compute/linuxbridge_agent.ini b/example/openstack-victoria-config/compute/linuxbridge_agent.ini deleted file mode 100644 index 04936d726eac7bd3b417357e231f80ea43a568e7..0000000000000000000000000000000000000000 --- a/example/openstack-victoria-config/compute/linuxbridge_agent.ini +++ /dev/null @@ -1,12 +0,0 @@ -[linux_bridge] -physical_interface_mappings = provider:ens3 - -[vxlan] -enable_vxlan = true -local_ip = 192.168.1.191 -l2_population = true - - -[securitygroup] -enable_security_group = true -firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver diff --git a/example/openstack-victoria-config/compute/neutron.conf b/example/openstack-victoria-config/compute/neutron.conf deleted file mode 100644 index ed350056a9d0f2efc850410c02d03d20233e46c9..0000000000000000000000000000000000000000 --- a/example/openstack-victoria-config/compute/neutron.conf +++ /dev/null @@ -1,17 +0,0 @@ -[DEFAULT] -transport_url = rabbit://openstack:root@controller -auth_strategy = keystone - -[keystone_authtoken] -www_authenticate_uri = http://controller:5000 -auth_url = http://controller:5000 -memcached_servers = controller:11211 -auth_type = password -project_domain_name = default -user_domain_name = default -project_name = service -username = neutron -password = root 
- -[oslo_concurrency] -lock_path = /var/lib/neutron/tmp diff --git a/example/openstack-victoria-config/compute/nova.conf b/example/openstack-victoria-config/compute/nova.conf deleted file mode 100644 index 06a88f9718515613376f96494c574708c2aa7f59..0000000000000000000000000000000000000000 --- a/example/openstack-victoria-config/compute/nova.conf +++ /dev/null @@ -1,57 +0,0 @@ -[DEFAULT] -enabled_apis = osapi_compute,metadata -transport_url = rabbit://openstack:root@controller -my_ip = 192.168.1.191 -compute_driver = libvirt.LibvirtDriver -instances_path = /var/lib/nova/instances/ -allow_resize_to_same_host = true - -[api] -auth_strategy = keystone - -[keystone_authtoken] -www_authenticate_uri = http://controller:5000/ -auth_url = http://controller:5000/ -memcached_servers = controller:11211 -auth_type = password -project_domain_name = Default -user_domain_name = Default -project_name = service -username = nova -password = root - -[vnc] -enabled = true -server_listen = 0.0.0.0 -server_proxyclient_address = $my_ip -novncproxy_base_url = http://controller:6080/vnc_auto.html - -[glance] -api_servers = http://controller:9292 - -[oslo_concurrency] -lock_path = /var/lib/nova/tmp - -[placement] -region_name = RegionOne -project_domain_name = Default -project_name = service -auth_type = password -user_domain_name = Default -auth_url = http://controller:5000/v3 -username = placement -password = root - -[libvirt] -virt_type = qemu - -[neutron] -auth_url = http://controller:5000 -auth_type = password -project_domain_name = default -user_domain_name = default -region_name = RegionOne -project_name = service -username = neutron -password = root - diff --git a/example/openstack-victoria-config/controller/glance-api.conf b/example/openstack-victoria-config/controller/glance-api.conf deleted file mode 100644 index a8bebe503699220ccc110f845d93d1de4aeb88c7..0000000000000000000000000000000000000000 --- a/example/openstack-victoria-config/controller/glance-api.conf +++ /dev/null @@ -1,21 +0,0 @@ -[database] -connection = mysql+pymysql://glance:root@controller/glance - -[keystone_authtoken] -www_authenticate_uri = http://controller:5000 -auth_url = http://controller:5000 -memcached_servers = controller:11211 -auth_type = password -project_domain_name = Default -user_domain_name = Default -project_name = service -username = glance -password = root - -[paste_deploy] -flavor = keystone - -[glance_store] -stores = file,http -default_store = file -filesystem_store_datadir = /var/lib/glance/images/ diff --git a/example/openstack-victoria-config/controller/keystone.conf b/example/openstack-victoria-config/controller/keystone.conf deleted file mode 100644 index 6440acf4161ff2ce4e7e4f9a6f44ec6d68bc78e4..0000000000000000000000000000000000000000 --- a/example/openstack-victoria-config/controller/keystone.conf +++ /dev/null @@ -1,10 +0,0 @@ -[database] -connection = mysql+pymysql://keystone:root@localhost/keystone - -[token] -provider = fernet - -[security_compliance] -lockout_failure_attempts = 2 -lockout_duration = 5 -unique_last_password_count = 2 diff --git a/example/openstack-victoria-config/controller/linuxbridge_agent.ini b/example/openstack-victoria-config/controller/linuxbridge_agent.ini deleted file mode 100644 index 64a00138b4113f396ef6024cb752ff6095501a16..0000000000000000000000000000000000000000 --- a/example/openstack-victoria-config/controller/linuxbridge_agent.ini +++ /dev/null @@ -1,12 +0,0 @@ -[linux_bridge] -physical_interface_mappings = provider:ens3, public:br-ex - -[vxlan] -enable_vxlan = true 
-local_ip = 192.168.1.196 -l2_population = true - - -[securitygroup] -enable_security_group = true -firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver diff --git a/example/openstack-victoria-config/controller/ml2_conf.ini b/example/openstack-victoria-config/controller/ml2_conf.ini deleted file mode 100644 index 7a0ab2345fd3f00dabd8fb64d32f1ff8105f4289..0000000000000000000000000000000000000000 --- a/example/openstack-victoria-config/controller/ml2_conf.ini +++ /dev/null @@ -1,15 +0,0 @@ -[ml2] -type_drivers = flat,vlan,vxlan -tenant_network_types = vxlan -mechanism_drivers = linuxbridge,l2population -extension_drivers = port_security - - -[ml2_type_flat] -flat_networks = provider, public - -[ml2_type_vxlan] -vni_ranges = 1:1000 - -[securitygroup] -enable_ipset = true diff --git a/example/openstack-victoria-config/controller/neutron.conf b/example/openstack-victoria-config/controller/neutron.conf deleted file mode 100644 index 5f70f6df48b3fd0f16be2d1d6b89ef76f088fd79..0000000000000000000000000000000000000000 --- a/example/openstack-victoria-config/controller/neutron.conf +++ /dev/null @@ -1,36 +0,0 @@ -[DEFAULT] -core_plugin = ml2 -service_plugins = router -allow_overlapping_ips = true -transport_url = rabbit://openstack:root@controller -auth_strategy = keystone -notify_nova_on_port_status_changes = true -notify_nova_on_port_data_changes = true - -[nova] -auth_url = http://controller:5000 -auth_type = password -project_domain_name = default -user_domain_name = default -region_name = RegionOne -project_name = service -username = nova -password = root - -[database] -connection = mysql+pymysql://neutron:root@controller/neutron - -[keystone_authtoken] -www_authenticate_uri = http://controller:5000 -auth_url = http://controller:5000 -memcached_servers = controller:11211 -auth_type = password -project_domain_name = default -user_domain_name = default -project_name = service -username = neutron -password = root - -[oslo_concurrency] -lock_path = /var/lib/neutron/tmp - diff --git a/example/openstack-victoria-config/controller/nova.conf b/example/openstack-victoria-config/controller/nova.conf deleted file mode 100644 index 8e6468709b2831578b9c4a529f7f11abbe53cbc1..0000000000000000000000000000000000000000 --- a/example/openstack-victoria-config/controller/nova.conf +++ /dev/null @@ -1,72 +0,0 @@ -[DEFAULT] -enabled_apis = osapi_compute,metadata -transport_url = rabbit://openstack:root@controller:5672/ -my_ip = 192.168.1.196 -osapi_compute_workers = 2 -debug=true -allow_resize_to_same_host = true - -[api_database] -connection = mysql+pymysql://nova:root@controller/nova_api - -[database] -connection = mysql+pymysql://nova:root@controller/nova - -[api] -auth_strategy = keystone - -[scheduler] -workers = 2 - -[conductor] -workers = 2 - -[keystone_authtoken] -www_authenticate_uri = http://controller:5000/ -auth_url = http://controller:5000/ -memcached_servers = controller:11211 -auth_type = password -project_domain_name = Default -user_domain_name = Default -project_name = service -username = nova -password = root - -[vnc] -enabled = true -server_listen = $my_ip -server_proxyclient_address = $my_ip - -[glance] -api_servers = http://controller:9292 - -[oslo_concurrency] -lock_path = /var/lib/nova/tmp - -[placement] -region_name = RegionOne -project_domain_name = Default -project_name = service -auth_type = password -user_domain_name = Default -auth_url = http://controller:5000/v3 -username = placement -password = root - -[cinder] -os_region_name = RegionOne - -[neutron] -auth_url 
= http://controller:5000 -auth_type = password -project_domain_name = default -user_domain_name = default -region_name = RegionOne -project_name = service -username = neutron -password = root -service_metadata_proxy = true -metadata_proxy_shared_secret = metadata - -[filter_scheduler] -enabled_filters = DifferentHostFilter diff --git a/example/openstack-victoria-config/controller/placement.conf b/example/openstack-victoria-config/controller/placement.conf deleted file mode 100644 index 335ca2f2365dfe0e201752f875e6d4435601783f..0000000000000000000000000000000000000000 --- a/example/openstack-victoria-config/controller/placement.conf +++ /dev/null @@ -1,15 +0,0 @@ -[placement_database] -connection = mysql+pymysql://placement:root@controller/placement - -[api] -auth_strategy = keystone - -[keystone_authtoken] -auth_url = http://controller:5000/v3 -memcached_servers = controller:11211 -auth_type = password -project_domain_name = Default -user_domain_name = Default -project_name = service -username = placement -password = root diff --git a/example/openstack-victoria-config/tempest/tempest-centos.conf b/example/openstack-victoria-config/tempest/tempest-centos.conf deleted file mode 100644 index 4723e025cf21d2be5ff791cfce15d01e395b47a4..0000000000000000000000000000000000000000 --- a/example/openstack-victoria-config/tempest/tempest-centos.conf +++ /dev/null @@ -1,74 +0,0 @@ -[DEFAULT] -log_dir = /root/test/logs -log_file = tempest.log - -[auth] -admin_username = admin -admin_password = openlab@123 -admin_project_name = admin -admin_domain_name = Default - -[identity] -auth_version = v3 -uri_v3 = http://controller:5000/v3 - -[scenario] -img_file = /opt/cirros-0.5.1-x86_64-disk.img -img_container_format = bare -img_disk_format = raw - -[compute] -flavor_ref = 4 -flavor_ref_alt = 6 -image_ref = fd1b3c60-bbcb-430d-9282-1b431c08b02a -image_ref_alt = feedfcf8-332a-4bd1-9fba-abe64846c86f -min_microversion = 2.1 -max_microversion = 2.87 -min_compute_nodes = 2 - -[compute-feature-enabled] -change_password = true -swap_volume = true -volume_multiattach = true -resize = true -volume_backed_live_migration = true -block_migration_for_live_migration = true -block_migrate_cinder_iscsi = true - -[oslo_concurrency] -lock_path = /root/test/tempest_lock - -[volume] -min_microversion = 3.0 -max_microversion = 3.62 -backend_names = lvm, lvm-2 -volume_size = 10 - -[volume-feature-enabled] -backup = false -multi_backend = true -manage_volume = true -manage_snapshot = true - -[service_available] -swift = false -nova = true -cinder = true -neutron = true -glance = true -horizon = true - -[placement] -min_microversion = 1.0 -max_microversion = 1.36 - -[network] -public_network_id = 6b5d48c1-20c3-441e-89d1-00703c13e3e5 -project_network_cidr = 11.100.0.0/16 -floating_network_name = 6b5d48c1-20c3-441e-89d1-00703c13e3e5 - -[validation] -image_ssh_user = centos -image_alt_ssh_user = centos -ping_timeout = 600 -ssh_timeout = 600 diff --git a/example/openstack-victoria-config/tempest/tempest-cirros.conf b/example/openstack-victoria-config/tempest/tempest-cirros.conf deleted file mode 100644 index a1fcf9cfafe854ba8dd4e8dfb094b6e9064d3607..0000000000000000000000000000000000000000 --- a/example/openstack-victoria-config/tempest/tempest-cirros.conf +++ /dev/null @@ -1,90 +0,0 @@ -[DEFAULT] -log_dir = /root/test/logs -log_file = tempest.log -debug = true - -[auth] -admin_username = admin -admin_password = openlab@123 -admin_project_name = admin -admin_domain_name = Default - -[identity] -auth_version = v3 -uri_v3 = 
http://controller:5000/v3 - -[identity-feature-enabled] -security_compliance = true -project_tags = true -application_credentials = true - -[compute] -flavor_ref = 1 -flavor_ref_alt = 3 -image_ref = 298467c3-30eb-44c7-8e1d-7ce77f54a599 -image_ref_alt = 6094b91d-e6f6-48fc-a5c6-b2e6ea32d1ad -min_microversion = 2.1 -max_microversion = 2.87 -min_compute_nodes = 2 - -[scenario] -img_file = /opt/cirros-0.5.1-x86_64-disk.img -img_container_format = bare -img_disk_format = raw - -[compute-feature-enabled] -change_password = true -swap_volume = true -volume_multiattach = true -resize = true -volume_backed_live_migration = true -block_migration_for_live_migration = true -block_migrate_cinder_iscsi = true -#scheduler_enabled_filters = DifferentHostFilter - -[oslo_concurrency] -lock_path = /root/test/tempest_lock - -[volume] -min_microversion = 3.0 -max_microversion = 3.62 -backend_names = lvm, lvm-2 - -[volume-feature-enabled] -backup = true -multi_backend = true -manage_volume = true -manage_snapshot = true -extend_attached_volume = true - -[service_available] -swift = false -nova = true -cinder = true -neutron = true -glance = true -horizon = true - -[placement] -min_microversion = 1.0 -max_microversion = 1.36 - -[network] -public_network_id = 6b5d48c1-20c3-441e-89d1-00703c13e3e5 -project_network_cidr = 11.100.0.0/16 -floating_network_name = 6b5d48c1-20c3-441e-89d1-00703c13e3e5 - -[network-feature-enabled] -port_security = true -ipv6_subnet_attributes = true -qos_placement_physnet = true - -[image-feature-enabled] -import_image = true - -[validation] -image_ssh_user = cirros -image_alt_ssh_user = cirros - -[debug] -trace_requests = .* diff --git a/site/404.html b/site/404.html new file mode 100644 index 0000000000000000000000000000000000000000..92afaf80f511e3c45b09473d029f5ac8e97fa2ff --- /dev/null +++ b/site/404.html @@ -0,0 +1,198 @@ + + + + + + + + OpenStack SIG Doc + + + + + + + + + + + +
+ + +
+ +
+
+
    +
+
+
+
+
+ + +

404

+ +

Page not found

+ + +
+
+ +
+
+ +
+ +
+ +
+ + + + + +
diff --git a/site/contribute/rpm-packaging-reference/index.html b/site/contribute/rpm-packaging-reference/index.html new file mode 100644 index 0000000000000000000000000000000000000000..29adcafd4f1a0c02408e54a6dac190ea876c1a67 --- /dev/null +++ b/site/contribute/rpm-packaging-reference/index.html @@ -0,0 +1,376 @@
RPM Development Workflow - OpenStack SIG Doc
+ + +
+ +
+
+
    +
+
+
+
+
+ +

Overview of the SIG RPM Packaging Workflow

+

One of the OpenStack SIG's long-term development tasks is building and maintaining the RPM packages for the various OpenStack releases. To help developers who have just joined the SIG understand the packaging workflow more quickly, this page walks through it for reference.

+

The Excel Spreadsheet

+

When packaging, the SIG collects the packages that need work in a shared spreadsheet so that developers can handle them collaboratively. The current format of the spreadsheet is as follows:

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
| Project Name | openEuler Repo | SIG | Repo version | Required (Min) Version | lt Version | ne Version | Upper Version | Status | Requires | Depth | Author | PR link | PR status |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| pyrsistent | python-pyrsistent | sig-python-modules | 0.18.0 | 0.18.1 | [] | | 0.18.1 | Need Upgrade | [] | 13 | | | |
| ... | | | | | | | | | | | | | |
+

The "Project Name" column is the software project's name. The "openEuler Repo" column is the project's repository name on openEuler's Gitee, which is also the package name in the openEuler system; all openEuler package repositories live under https://gitee.com/src-openeuler. The "SIG" column records which SIG the package belongs to.

+

When handling a package, first look at the "Status" column, which indicates the package's state. There are six statuses in total, and developers should act according to the "Status" value.

+
    +
  1. "OK": the current version is usable as is; no action is needed.
  2. "Need Create Repo": the package does not exist in openEuler yet, so a new repository must be created under src-openeuler on Gitee. The procedure is described in the community guide on adding a new package. After the repository is created and initialized, add the package to the required OBS project.
  3. "Need Create Branch": the required branch does not exist in the repository; the developer needs to create and initialize it.
  4. "Need Init Branch": the branch exists but contains no source package of any version, so it must be initialized and the package on this branch added to the required OBS project. The developer uploads the required source tarball, spec file, and so on (see the sketch after this list). Taking the Yoga adaptation in the 22.09 development cycle as an example, this work is done directly on the master branch: the get_gitee_project_version project was marked "Need Init Branch", and before processing, the master branch of its corresponding repository python-neutron-tempest-plugin contained only README.md and README.en.md, so the developer had to initialize the branch.
  5. "Need Downgrade": the package must be downgraded. Handle these cases last, and only after confirming with the SIG.
  6. "Need Upgrade": the package must be upgraded.
+
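A minimal manual sketch of the "Need Init Branch" case, referenced from item 4 above. In practice the oos tool automates this (see below); the account name, tarball version, and file names here are hypothetical placeholders, and the change still has to go through a pull request against src-openeuler:

```
# Hypothetical manual initialization of an empty master branch.
# Assumes the src-openeuler repository has already been forked to your own Gitee account.
git clone https://gitee.com/<your-account>/python-neutron-tempest-plugin.git
cd python-neutron-tempest-plugin

# Add the required source tarball and spec file (names and version are placeholders).
cp ~/rpmbuild/SOURCES/neutron-tempest-plugin-<version>.tar.gz .
cp ~/rpmbuild/SPECS/python-neutron-tempest-plugin.spec .

git add .
git commit -m "Initialize master branch with neutron-tempest-plugin sources and spec"
git push origin master   # then open a PR from your fork against src-openeuler on Gitee
```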

Once you know how a package should be handled, work from the version information. "Repo version" is the package version currently on the corresponding branch of the repository. "Required (Min) Version" is the minimum version required; if it is followed by "(Must)", that exact version must be used. "Upper Version" is the highest version that may be used. When "Required (Min) Version" and "Upper Version" differ, prefer "Required (Min) Version"; for example, when upgrading a package, upgrade to "Required (Min) Version" first.

+

"Requires" lists the package's dependencies, and "Depth" indicates its level in the dependency tree: packages at Depth 1 are dependencies of packages at Depth 0, and so on, so a higher-Depth package is a dependency of lower-Depth ones. Rows with a higher "Depth" should therefore be handled first, although a package with no dependencies ("Requires" is []) can be handled directly at any time. If a particular package has to be handled early, follow its "Requires" and deal with its dependencies first.

+
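As an illustration of the ordering rule above: if the shared spreadsheet were exported to CSV (a purely hypothetical packages.csv with Depth as the 11th column), the rows could be listed deepest-first, which is the order in which they should be handled:

```
# Hypothetical CSV export of the spreadsheet; column 1 = Project Name,
# column 9 = Status, column 11 = Depth. Sort by Depth, highest first.
sort -t',' -k11,11 -nr packages.csv | cut -d',' -f1,9,11
```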

When you take on a package, first put your name in the "Author" column so other developers know it is already being handled. After the PR (pull request) is submitted, paste its link into the "PR link" column; once the PR is merged, mark "Done" in the "PR status" column.

+

How the SIG Handles Packaging Work

+

At present the SIG handles packaging work mainly with oos, a tool the SIG wrote itself; see the oos README for details. The operations involved in the different statuses, such as upgrading, initializing a branch, or adding a package to an OBS project, all have corresponding implementations in oos.

+

The following example upgrades the python-pyrsistent package for the Yoga release to demonstrate the packaging flow and familiarize developers with the OpenStack SIG's oos-based process; once the basic flow is clear, the oos README covers the remaining operations. The python-pyrsistent entry appears in the table above: the package needs to be upgraded from 0.18.0 to 0.18.1. Yoga is planned for the openEuler 22.09 development cycle, and as it is currently May 2022, the change can be submitted directly to the master branch.

+

Sign the CLA

+

Submitting contributions to the openEuler community requires signing the CLA.

+

Developers taking part in the openEuler community for the first time can start with the openEuler contribution guide for an overview of how contributing works.

+

Environment Setup

+
dnf install rpm-build rpmdevtools git
+
+# Create the ~/rpmbuild tree; this is also oos's default working directory
+rpmdev-setuptree
+
+pip install openstack-sig-tool==1.0.6
+

Note: openstack-sig-tool 1.1.0 reworked the oos spec command. The oos spec operations in the flow below correspond to version 1.0.6. Installing the newer oos and following its README is recommended.

+
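If you prefer the reworked commands, a newer openstack-sig-tool can be installed instead of pinning 1.0.6; a small sketch (the exact latest version is whatever PyPI currently ships):

```
# Install the latest openstack-sig-tool rather than the pinned 1.0.6,
# then confirm which version was installed.
pip install -U openstack-sig-tool
pip show openstack-sig-tool | grep Version
```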

Generate a Personal Access Token (PAT) for Your Gitee Account

+

First, open the "Settings" page of your Gitee account.

+

[Screenshot: Gitee account Settings page]

+

Select "Private tokens", then click "Generate new token". Store the generated personal access token (PAT) somewhere safe: Gitee cannot show it again, and if it is lost it can only be regenerated.

+

[Screenshot: Private tokens page]

+

Generate and Submit the Spec for python-pyrsistent

+
export GITEE_PAT=<your gitee pat>
+oos spec push --name python-pyrsistent --version 0.18.1 -dp
+
+-dp, --do-push
+    [Optional] Whether to push to the Gitee repository and open a PR; if omitted, the change is only committed to the local repository
+

Note that the --name argument here is the "Project Name" column from the spreadsheet.

+

The oos spec push command automatically runs through the following steps:

+
    +
  1. Fork the repository given by --name into the Gitee account associated with the PAT.
  2. Clone the repository locally; the default path is ~/rpmbuild/src-repos.
  3. Download the source tarball according to --name and --version and generate the spec file (reusing the changelog already in the repository). The default path for this stage is ~/rpmbuild.
  4. Run the RPM build locally. If the local build passes, the spec file and source tarball are updated in the git repository automatically; with -dp, the push and PR creation are performed automatically as well. If the local build fails, the flow stops.
+

If the local build fails, edit the generated spec file and then run:

+
oos spec push --name python-pyrsistent --version 0.18.1 -dp -rs
+
+-rs, --reuse-spec
+    [Optional] Reuse the existing spec file instead of regenerating it.
+

Repeat this cycle until the upload succeeds.

+
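While iterating on the spec by hand, it can also help to rebuild locally with rpmbuild before re-running oos; a sketch, assuming the sources and spec generated by oos sit under the ~/rpmbuild tree created earlier (the exact spec path and the noarch output directory are guesses for a pure-Python package):

```
# Rebuild the package locally from the generated spec (path/filename may differ).
rpmbuild -ba ~/rpmbuild/SPECS/python-pyrsistent.spec

# Inspect the resulting packages.
ls ~/rpmbuild/RPMS/noarch/ ~/rpmbuild/SRPMS/
```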

Note 1: When upgrading, generate the spec file with oos spec push rather than oos spec build: push preserves the changelog of the spec already in the repository, whereas build generates a brand-new changelog.

+

Note 2: When debugging errors, refer to the spec file already in the repository. Apart from its changelog, the current spec is regenerated by oos, so problems earlier contributors ran into may recur here, and how they resolved them is a useful reference.

+

Note 3: oos also supports batch processing; see the oos README if you want to try it.

+

PR Gate Checks

+

At this point the forked repository is visible in your own Gitee account. Open the repository under your account and click the highlighted link shown below to get back to the original repository.

+

[Screenshot: link back to the original repository]

+

In the original repository you can see the automatically submitted PR, and in the PR the comments from openeuler-ci-bot:

+

[Screenshot: gate check results posted in the PR]

+

For code openEuler hosts on Gitee, submitting a PR automatically triggers the gate checks. A package that builds locally may still fail to build in the gate; in the screenshot above, for example, this submission failed to build. Click the highlighted part to open the build details for the corresponding architecture.

+

Using the error messages in the build details log, modify the local spec and run again:

+
oos spec push --name python-pyrsistent --version 0.18.1 -dp -rs
+

The gate tests are then re-run automatically.

+

For details on the gate checks and the meaning of each result, see the community's gate check guide (《门禁功能指导手册》).

+

PR Review

+

Once a PR passes the gate checks, it needs to be reviewed by a maintainer of the SIG that owns the repository. To speed things up, you can manually @ the relevant maintainer after the gate passes and ask for a review. After the PR is submitted, openeuler-ci-bot leaves a comment like the one shown below; the people @-mentioned in it are the maintainers of the SIG that owns the repository.

+

[Screenshot: maintainers @-mentioned in the openeuler-ci-bot comment]

+

Notes and Pitfalls

+

This section records a few special issues you may run into.

+

Tests Not Actually Executed

+

In the spec files oos generates, the %check section defaults to %{__python3} setup.py test. For some packages this does not actually run any tests, yet the gate still reports success, so developers need to spot the problem themselves. Some ways to check (a possible fix is sketched after the figure below):

+
    +
  1. If a spec file already existed, look at how its %check section was written previously. If it was not %{__python3} setup.py test, pay particular attention.
  2. Open the gate's build details (see "PR Gate Checks" above) and inspect the %check part of the build log. The figure below is a screenshot of the log opened from build details with "view as text" selected; it shows that the number of tests actually run is 0.
+

[Screenshot: build log showing 0 tests actually run]

+
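If the default %check does turn out to be a no-op for a package, one possible fix (a sketch, not SIG policy; it assumes the package ships pytest-style tests and that python3-pytest is added to BuildRequires) is to invoke the test runner explicitly in the spec:

```
# Spec-file excerpt: replace the generated no-op with an explicit test run.
%check
%{__python3} -m pytest
```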

Inconsistent Package Names

+

A small number of packages may hit a mismatch between the package name used in the spec oos generates and the existing package name, for example one using a hyphen - and the other an underscore _. In that case the originally used package name takes precedence; do not rename the existing package.

+

As a temporary workaround, the developer can manually change the relevant places in the spec file back to the original package name. At the same time, oos has a mapping-correction mechanism, so you can file an issue and the SIG will fix the mapping in oos.

+ +
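A hypothetical illustration of that temporary workaround: if the existing repository and package are named python-foo-bar but the generated spec uses python-foo_bar, patch the spec back to the original name (all names here are made up):

```
# Keep the package name the repository already uses (hypothetical names).
sed -i 's/python-foo_bar/python-foo-bar/g' python-foo-bar.spec
grep -n '^Name:' python-foo-bar.spec   # confirm Name: matches the existing package name
```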
+
+ +
+
+ +
+ +
+ +
.wy-alert-danger.warning,.wy-alert.wy-alert-danger{background:#fdf3f2}.rst-content .danger .admonition-title,.rst-content .danger .wy-alert-title,.rst-content .error .admonition-title,.rst-content .error .wy-alert-title,.rst-content .wy-alert-danger.admonition-todo .admonition-title,.rst-content .wy-alert-danger.admonition-todo .wy-alert-title,.rst-content .wy-alert-danger.admonition .admonition-title,.rst-content .wy-alert-danger.admonition .wy-alert-title,.rst-content .wy-alert-danger.attention .admonition-title,.rst-content .wy-alert-danger.attention .wy-alert-title,.rst-content .wy-alert-danger.caution .admonition-title,.rst-content .wy-alert-danger.caution .wy-alert-title,.rst-content .wy-alert-danger.hint .admonition-title,.rst-content .wy-alert-danger.hint .wy-alert-title,.rst-content .wy-alert-danger.important .admonition-title,.rst-content .wy-alert-danger.important .wy-alert-title,.rst-content .wy-alert-danger.note .admonition-title,.rst-content .wy-alert-danger.note .wy-alert-title,.rst-content .wy-alert-danger.seealso .admonition-title,.rst-content .wy-alert-danger.seealso .wy-alert-title,.rst-content .wy-alert-danger.tip .admonition-title,.rst-content .wy-alert-danger.tip .wy-alert-title,.rst-content .wy-alert-danger.warning .admonition-title,.rst-content .wy-alert-danger.warning .wy-alert-title,.rst-content .wy-alert.wy-alert-danger .admonition-title,.wy-alert.wy-alert-danger .rst-content .admonition-title,.wy-alert.wy-alert-danger .wy-alert-title{background:#f29f97}.rst-content .admonition-todo,.rst-content .attention,.rst-content .caution,.rst-content .warning,.rst-content .wy-alert-warning.admonition,.rst-content .wy-alert-warning.danger,.rst-content .wy-alert-warning.error,.rst-content .wy-alert-warning.hint,.rst-content .wy-alert-warning.important,.rst-content .wy-alert-warning.note,.rst-content .wy-alert-warning.seealso,.rst-content .wy-alert-warning.tip,.wy-alert.wy-alert-warning{background:#ffedcc}.rst-content .admonition-todo .admonition-title,.rst-content .admonition-todo .wy-alert-title,.rst-content .attention .admonition-title,.rst-content .attention .wy-alert-title,.rst-content .caution .admonition-title,.rst-content .caution .wy-alert-title,.rst-content .warning .admonition-title,.rst-content .warning .wy-alert-title,.rst-content .wy-alert-warning.admonition .admonition-title,.rst-content .wy-alert-warning.admonition .wy-alert-title,.rst-content .wy-alert-warning.danger .admonition-title,.rst-content .wy-alert-warning.danger .wy-alert-title,.rst-content .wy-alert-warning.error .admonition-title,.rst-content .wy-alert-warning.error .wy-alert-title,.rst-content .wy-alert-warning.hint .admonition-title,.rst-content .wy-alert-warning.hint .wy-alert-title,.rst-content .wy-alert-warning.important .admonition-title,.rst-content .wy-alert-warning.important .wy-alert-title,.rst-content .wy-alert-warning.note .admonition-title,.rst-content .wy-alert-warning.note .wy-alert-title,.rst-content .wy-alert-warning.seealso .admonition-title,.rst-content .wy-alert-warning.seealso .wy-alert-title,.rst-content .wy-alert-warning.tip .admonition-title,.rst-content .wy-alert-warning.tip .wy-alert-title,.rst-content .wy-alert.wy-alert-warning .admonition-title,.wy-alert.wy-alert-warning .rst-content .admonition-title,.wy-alert.wy-alert-warning .wy-alert-title{background:#f0b37e}.rst-content .note,.rst-content .seealso,.rst-content .wy-alert-info.admonition,.rst-content .wy-alert-info.admonition-todo,.rst-content .wy-alert-info.attention,.rst-content .wy-alert-info.caution,.rst-content 
.wy-alert-info.danger,.rst-content .wy-alert-info.error,.rst-content .wy-alert-info.hint,.rst-content .wy-alert-info.important,.rst-content .wy-alert-info.tip,.rst-content .wy-alert-info.warning,.wy-alert.wy-alert-info{background:#e7f2fa}.rst-content .note .admonition-title,.rst-content .note .wy-alert-title,.rst-content .seealso .admonition-title,.rst-content .seealso .wy-alert-title,.rst-content .wy-alert-info.admonition-todo .admonition-title,.rst-content .wy-alert-info.admonition-todo .wy-alert-title,.rst-content .wy-alert-info.admonition .admonition-title,.rst-content .wy-alert-info.admonition .wy-alert-title,.rst-content .wy-alert-info.attention .admonition-title,.rst-content .wy-alert-info.attention .wy-alert-title,.rst-content .wy-alert-info.caution .admonition-title,.rst-content .wy-alert-info.caution .wy-alert-title,.rst-content .wy-alert-info.danger .admonition-title,.rst-content .wy-alert-info.danger .wy-alert-title,.rst-content .wy-alert-info.error .admonition-title,.rst-content .wy-alert-info.error .wy-alert-title,.rst-content .wy-alert-info.hint .admonition-title,.rst-content .wy-alert-info.hint .wy-alert-title,.rst-content .wy-alert-info.important .admonition-title,.rst-content .wy-alert-info.important .wy-alert-title,.rst-content .wy-alert-info.tip .admonition-title,.rst-content .wy-alert-info.tip .wy-alert-title,.rst-content .wy-alert-info.warning .admonition-title,.rst-content .wy-alert-info.warning .wy-alert-title,.rst-content .wy-alert.wy-alert-info .admonition-title,.wy-alert.wy-alert-info .rst-content .admonition-title,.wy-alert.wy-alert-info .wy-alert-title{background:#6ab0de}.rst-content .hint,.rst-content .important,.rst-content .tip,.rst-content .wy-alert-success.admonition,.rst-content .wy-alert-success.admonition-todo,.rst-content .wy-alert-success.attention,.rst-content .wy-alert-success.caution,.rst-content .wy-alert-success.danger,.rst-content .wy-alert-success.error,.rst-content .wy-alert-success.note,.rst-content .wy-alert-success.seealso,.rst-content .wy-alert-success.warning,.wy-alert.wy-alert-success{background:#dbfaf4}.rst-content .hint .admonition-title,.rst-content .hint .wy-alert-title,.rst-content .important .admonition-title,.rst-content .important .wy-alert-title,.rst-content .tip .admonition-title,.rst-content .tip .wy-alert-title,.rst-content .wy-alert-success.admonition-todo .admonition-title,.rst-content .wy-alert-success.admonition-todo .wy-alert-title,.rst-content .wy-alert-success.admonition .admonition-title,.rst-content .wy-alert-success.admonition .wy-alert-title,.rst-content .wy-alert-success.attention .admonition-title,.rst-content .wy-alert-success.attention .wy-alert-title,.rst-content .wy-alert-success.caution .admonition-title,.rst-content .wy-alert-success.caution .wy-alert-title,.rst-content .wy-alert-success.danger .admonition-title,.rst-content .wy-alert-success.danger .wy-alert-title,.rst-content .wy-alert-success.error .admonition-title,.rst-content .wy-alert-success.error .wy-alert-title,.rst-content .wy-alert-success.note .admonition-title,.rst-content .wy-alert-success.note .wy-alert-title,.rst-content .wy-alert-success.seealso .admonition-title,.rst-content .wy-alert-success.seealso .wy-alert-title,.rst-content .wy-alert-success.warning .admonition-title,.rst-content .wy-alert-success.warning .wy-alert-title,.rst-content .wy-alert.wy-alert-success .admonition-title,.wy-alert.wy-alert-success .rst-content .admonition-title,.wy-alert.wy-alert-success .wy-alert-title{background:#1abc9c}.rst-content 
.wy-alert-neutral.admonition,.rst-content .wy-alert-neutral.admonition-todo,.rst-content .wy-alert-neutral.attention,.rst-content .wy-alert-neutral.caution,.rst-content .wy-alert-neutral.danger,.rst-content .wy-alert-neutral.error,.rst-content .wy-alert-neutral.hint,.rst-content .wy-alert-neutral.important,.rst-content .wy-alert-neutral.note,.rst-content .wy-alert-neutral.seealso,.rst-content .wy-alert-neutral.tip,.rst-content .wy-alert-neutral.warning,.wy-alert.wy-alert-neutral{background:#f3f6f6}.rst-content .wy-alert-neutral.admonition-todo .admonition-title,.rst-content .wy-alert-neutral.admonition-todo .wy-alert-title,.rst-content .wy-alert-neutral.admonition .admonition-title,.rst-content .wy-alert-neutral.admonition .wy-alert-title,.rst-content .wy-alert-neutral.attention .admonition-title,.rst-content .wy-alert-neutral.attention .wy-alert-title,.rst-content .wy-alert-neutral.caution .admonition-title,.rst-content .wy-alert-neutral.caution .wy-alert-title,.rst-content .wy-alert-neutral.danger .admonition-title,.rst-content .wy-alert-neutral.danger .wy-alert-title,.rst-content .wy-alert-neutral.error .admonition-title,.rst-content .wy-alert-neutral.error .wy-alert-title,.rst-content .wy-alert-neutral.hint .admonition-title,.rst-content .wy-alert-neutral.hint .wy-alert-title,.rst-content .wy-alert-neutral.important .admonition-title,.rst-content .wy-alert-neutral.important .wy-alert-title,.rst-content .wy-alert-neutral.note .admonition-title,.rst-content .wy-alert-neutral.note .wy-alert-title,.rst-content .wy-alert-neutral.seealso .admonition-title,.rst-content .wy-alert-neutral.seealso .wy-alert-title,.rst-content .wy-alert-neutral.tip .admonition-title,.rst-content .wy-alert-neutral.tip .wy-alert-title,.rst-content .wy-alert-neutral.warning .admonition-title,.rst-content .wy-alert-neutral.warning .wy-alert-title,.rst-content .wy-alert.wy-alert-neutral .admonition-title,.wy-alert.wy-alert-neutral .rst-content .admonition-title,.wy-alert.wy-alert-neutral .wy-alert-title{color:#404040;background:#e1e4e5}.rst-content .wy-alert-neutral.admonition-todo a,.rst-content .wy-alert-neutral.admonition a,.rst-content .wy-alert-neutral.attention a,.rst-content .wy-alert-neutral.caution a,.rst-content .wy-alert-neutral.danger a,.rst-content .wy-alert-neutral.error a,.rst-content .wy-alert-neutral.hint a,.rst-content .wy-alert-neutral.important a,.rst-content .wy-alert-neutral.note a,.rst-content .wy-alert-neutral.seealso a,.rst-content .wy-alert-neutral.tip a,.rst-content .wy-alert-neutral.warning a,.wy-alert.wy-alert-neutral a{color:#2980b9}.rst-content .admonition-todo p:last-child,.rst-content .admonition p:last-child,.rst-content .attention p:last-child,.rst-content .caution p:last-child,.rst-content .danger p:last-child,.rst-content .error p:last-child,.rst-content .hint p:last-child,.rst-content .important p:last-child,.rst-content .note p:last-child,.rst-content .seealso p:last-child,.rst-content .tip p:last-child,.rst-content .warning p:last-child,.wy-alert p:last-child{margin-bottom:0}.wy-tray-container{position:fixed;bottom:0;left:0;z-index:600}.wy-tray-container li{display:block;width:300px;background:transparent;color:#fff;text-align:center;box-shadow:0 5px 5px 0 rgba(0,0,0,.1);padding:0 24px;min-width:20%;opacity:0;height:0;line-height:56px;overflow:hidden;-webkit-transition:all .3s ease-in;-moz-transition:all .3s ease-in;transition:all .3s ease-in}.wy-tray-container li.wy-tray-item-success{background:#27ae60}.wy-tray-container 
li.wy-tray-item-info{background:#2980b9}.wy-tray-container li.wy-tray-item-warning{background:#e67e22}.wy-tray-container li.wy-tray-item-danger{background:#e74c3c}.wy-tray-container li.on{opacity:1;height:56px}@media screen and (max-width:768px){.wy-tray-container{bottom:auto;top:0;width:100%}.wy-tray-container li{width:100%}}button{font-size:100%;margin:0;vertical-align:baseline;*vertical-align:middle;cursor:pointer;line-height:normal;-webkit-appearance:button;*overflow:visible}button::-moz-focus-inner,input::-moz-focus-inner{border:0;padding:0}button[disabled]{cursor:default}.btn{display:inline-block;border-radius:2px;line-height:normal;white-space:nowrap;text-align:center;cursor:pointer;font-size:100%;padding:6px 12px 8px;color:#fff;border:1px solid rgba(0,0,0,.1);background-color:#27ae60;text-decoration:none;font-weight:400;font-family:Lato,proxima-nova,Helvetica Neue,Arial,sans-serif;box-shadow:inset 0 1px 2px -1px hsla(0,0%,100%,.5),inset 0 -2px 0 0 rgba(0,0,0,.1);outline-none:false;vertical-align:middle;*display:inline;zoom:1;-webkit-user-drag:none;-webkit-user-select:none;-moz-user-select:none;-ms-user-select:none;user-select:none;-webkit-transition:all .1s linear;-moz-transition:all .1s linear;transition:all .1s linear}.btn-hover{background:#2e8ece;color:#fff}.btn:hover{background:#2cc36b;color:#fff}.btn:focus{background:#2cc36b;outline:0}.btn:active{box-shadow:inset 0 -1px 0 0 rgba(0,0,0,.05),inset 0 2px 0 0 rgba(0,0,0,.1);padding:8px 12px 6px}.btn:visited{color:#fff}.btn-disabled,.btn-disabled:active,.btn-disabled:focus,.btn-disabled:hover,.btn:disabled{background-image:none;filter:progid:DXImageTransform.Microsoft.gradient(enabled = false);filter:alpha(opacity=40);opacity:.4;cursor:not-allowed;box-shadow:none}.btn::-moz-focus-inner{padding:0;border:0}.btn-small{font-size:80%}.btn-info{background-color:#2980b9!important}.btn-info:hover{background-color:#2e8ece!important}.btn-neutral{background-color:#f3f6f6!important;color:#404040!important}.btn-neutral:hover{background-color:#e5ebeb!important;color:#404040}.btn-neutral:visited{color:#404040!important}.btn-success{background-color:#27ae60!important}.btn-success:hover{background-color:#295!important}.btn-danger{background-color:#e74c3c!important}.btn-danger:hover{background-color:#ea6153!important}.btn-warning{background-color:#e67e22!important}.btn-warning:hover{background-color:#e98b39!important}.btn-invert{background-color:#222}.btn-invert:hover{background-color:#2f2f2f!important}.btn-link{background-color:transparent!important;color:#2980b9;box-shadow:none;border-color:transparent!important}.btn-link:active,.btn-link:hover{background-color:transparent!important;color:#409ad5!important;box-shadow:none}.btn-link:visited{color:#9b59b6}.wy-btn-group .btn,.wy-control .btn{vertical-align:middle}.wy-btn-group{margin-bottom:24px;*zoom:1}.wy-btn-group:after,.wy-btn-group:before{display:table;content:""}.wy-btn-group:after{clear:both}.wy-dropdown{position:relative;display:inline-block}.wy-dropdown-active .wy-dropdown-menu{display:block}.wy-dropdown-menu{position:absolute;left:0;display:none;float:left;top:100%;min-width:100%;background:#fcfcfc;z-index:100;border:1px solid #cfd7dd;box-shadow:0 2px 2px 0 rgba(0,0,0,.1);padding:12px}.wy-dropdown-menu>dd>a{display:block;clear:both;color:#404040;white-space:nowrap;font-size:90%;padding:0 12px;cursor:pointer}.wy-dropdown-menu>dd>a:hover{background:#2980b9;color:#fff}.wy-dropdown-menu>dd.divider{border-top:1px solid #cfd7dd;margin:6px 
0}.wy-dropdown-menu>dd.search{padding-bottom:12px}.wy-dropdown-menu>dd.search input[type=search]{width:100%}.wy-dropdown-menu>dd.call-to-action{background:#e3e3e3;text-transform:uppercase;font-weight:500;font-size:80%}.wy-dropdown-menu>dd.call-to-action:hover{background:#e3e3e3}.wy-dropdown-menu>dd.call-to-action .btn{color:#fff}.wy-dropdown.wy-dropdown-up .wy-dropdown-menu{bottom:100%;top:auto;left:auto;right:0}.wy-dropdown.wy-dropdown-bubble .wy-dropdown-menu{background:#fcfcfc;margin-top:2px}.wy-dropdown.wy-dropdown-bubble .wy-dropdown-menu a{padding:6px 12px}.wy-dropdown.wy-dropdown-bubble .wy-dropdown-menu a:hover{background:#2980b9;color:#fff}.wy-dropdown.wy-dropdown-left .wy-dropdown-menu{right:0;left:auto;text-align:right}.wy-dropdown-arrow:before{content:" ";border-bottom:5px solid #f5f5f5;border-left:5px solid transparent;border-right:5px solid transparent;position:absolute;display:block;top:-4px;left:50%;margin-left:-3px}.wy-dropdown-arrow.wy-dropdown-arrow-left:before{left:11px}.wy-form-stacked select{display:block}.wy-form-aligned .wy-help-inline,.wy-form-aligned input,.wy-form-aligned label,.wy-form-aligned select,.wy-form-aligned textarea{display:inline-block;*display:inline;*zoom:1;vertical-align:middle}.wy-form-aligned .wy-control-group>label{display:inline-block;vertical-align:middle;width:10em;margin:6px 12px 0 0;float:left}.wy-form-aligned .wy-control{float:left}.wy-form-aligned .wy-control label{display:block}.wy-form-aligned .wy-control select{margin-top:6px}fieldset{margin:0}fieldset,legend{border:0;padding:0}legend{width:100%;white-space:normal;margin-bottom:24px;font-size:150%;*margin-left:-7px}label,legend{display:block}label{margin:0 0 .3125em;color:#333;font-size:90%}input,select,textarea{font-size:100%;margin:0;vertical-align:baseline;*vertical-align:middle}.wy-control-group{margin-bottom:24px;max-width:1200px;margin-left:auto;margin-right:auto;*zoom:1}.wy-control-group:after,.wy-control-group:before{display:table;content:""}.wy-control-group:after{clear:both}.wy-control-group.wy-control-group-required>label:after{content:" *";color:#e74c3c}.wy-control-group .wy-form-full,.wy-control-group .wy-form-halves,.wy-control-group .wy-form-thirds{padding-bottom:12px}.wy-control-group .wy-form-full input[type=color],.wy-control-group .wy-form-full input[type=date],.wy-control-group .wy-form-full input[type=datetime-local],.wy-control-group .wy-form-full input[type=datetime],.wy-control-group .wy-form-full input[type=email],.wy-control-group .wy-form-full input[type=month],.wy-control-group .wy-form-full input[type=number],.wy-control-group .wy-form-full input[type=password],.wy-control-group .wy-form-full input[type=search],.wy-control-group .wy-form-full input[type=tel],.wy-control-group .wy-form-full input[type=text],.wy-control-group .wy-form-full input[type=time],.wy-control-group .wy-form-full input[type=url],.wy-control-group .wy-form-full input[type=week],.wy-control-group .wy-form-full select,.wy-control-group .wy-form-halves input[type=color],.wy-control-group .wy-form-halves input[type=date],.wy-control-group .wy-form-halves input[type=datetime-local],.wy-control-group .wy-form-halves input[type=datetime],.wy-control-group .wy-form-halves input[type=email],.wy-control-group .wy-form-halves input[type=month],.wy-control-group .wy-form-halves input[type=number],.wy-control-group .wy-form-halves input[type=password],.wy-control-group .wy-form-halves input[type=search],.wy-control-group .wy-form-halves input[type=tel],.wy-control-group .wy-form-halves 
input[type=text],.wy-control-group .wy-form-halves input[type=time],.wy-control-group .wy-form-halves input[type=url],.wy-control-group .wy-form-halves input[type=week],.wy-control-group .wy-form-halves select,.wy-control-group .wy-form-thirds input[type=color],.wy-control-group .wy-form-thirds input[type=date],.wy-control-group .wy-form-thirds input[type=datetime-local],.wy-control-group .wy-form-thirds input[type=datetime],.wy-control-group .wy-form-thirds input[type=email],.wy-control-group .wy-form-thirds input[type=month],.wy-control-group .wy-form-thirds input[type=number],.wy-control-group .wy-form-thirds input[type=password],.wy-control-group .wy-form-thirds input[type=search],.wy-control-group .wy-form-thirds input[type=tel],.wy-control-group .wy-form-thirds input[type=text],.wy-control-group .wy-form-thirds input[type=time],.wy-control-group .wy-form-thirds input[type=url],.wy-control-group .wy-form-thirds input[type=week],.wy-control-group .wy-form-thirds select{width:100%}.wy-control-group .wy-form-full{float:left;display:block;width:100%;margin-right:0}.wy-control-group .wy-form-full:last-child{margin-right:0}.wy-control-group .wy-form-halves{float:left;display:block;margin-right:2.35765%;width:48.82117%}.wy-control-group .wy-form-halves:last-child,.wy-control-group .wy-form-halves:nth-of-type(2n){margin-right:0}.wy-control-group .wy-form-halves:nth-of-type(odd){clear:left}.wy-control-group .wy-form-thirds{float:left;display:block;margin-right:2.35765%;width:31.76157%}.wy-control-group .wy-form-thirds:last-child,.wy-control-group .wy-form-thirds:nth-of-type(3n){margin-right:0}.wy-control-group .wy-form-thirds:nth-of-type(3n+1){clear:left}.wy-control-group.wy-control-group-no-input .wy-control,.wy-control-no-input{margin:6px 0 0;font-size:90%}.wy-control-no-input{display:inline-block}.wy-control-group.fluid-input input[type=color],.wy-control-group.fluid-input input[type=date],.wy-control-group.fluid-input input[type=datetime-local],.wy-control-group.fluid-input input[type=datetime],.wy-control-group.fluid-input input[type=email],.wy-control-group.fluid-input input[type=month],.wy-control-group.fluid-input input[type=number],.wy-control-group.fluid-input input[type=password],.wy-control-group.fluid-input input[type=search],.wy-control-group.fluid-input input[type=tel],.wy-control-group.fluid-input input[type=text],.wy-control-group.fluid-input input[type=time],.wy-control-group.fluid-input input[type=url],.wy-control-group.fluid-input input[type=week]{width:100%}.wy-form-message-inline{padding-left:.3em;color:#666;font-size:90%}.wy-form-message{display:block;color:#999;font-size:70%;margin-top:.3125em;font-style:italic}.wy-form-message p{font-size:inherit;font-style:italic;margin-bottom:6px}.wy-form-message p:last-child{margin-bottom:0}input{line-height:normal}input[type=button],input[type=reset],input[type=submit]{-webkit-appearance:button;cursor:pointer;font-family:Lato,proxima-nova,Helvetica Neue,Arial,sans-serif;*overflow:visible}input[type=color],input[type=date],input[type=datetime-local],input[type=datetime],input[type=email],input[type=month],input[type=number],input[type=password],input[type=search],input[type=tel],input[type=text],input[type=time],input[type=url],input[type=week]{-webkit-appearance:none;padding:6px;display:inline-block;border:1px solid #ccc;font-size:80%;font-family:Lato,proxima-nova,Helvetica Neue,Arial,sans-serif;box-shadow:inset 0 1px 3px #ddd;border-radius:0;-webkit-transition:border .3s linear;-moz-transition:border .3s linear;transition:border 
.3s linear}input[type=datetime-local]{padding:.34375em .625em}input[disabled]{cursor:default}input[type=checkbox],input[type=radio]{padding:0;margin-right:.3125em;*height:13px;*width:13px}input[type=checkbox],input[type=radio],input[type=search]{-webkit-box-sizing:border-box;-moz-box-sizing:border-box;box-sizing:border-box}input[type=search]::-webkit-search-cancel-button,input[type=search]::-webkit-search-decoration{-webkit-appearance:none}input[type=color]:focus,input[type=date]:focus,input[type=datetime-local]:focus,input[type=datetime]:focus,input[type=email]:focus,input[type=month]:focus,input[type=number]:focus,input[type=password]:focus,input[type=search]:focus,input[type=tel]:focus,input[type=text]:focus,input[type=time]:focus,input[type=url]:focus,input[type=week]:focus{outline:0;outline:thin dotted\9;border-color:#333}input.no-focus:focus{border-color:#ccc!important}input[type=checkbox]:focus,input[type=file]:focus,input[type=radio]:focus{outline:thin dotted #333;outline:1px auto #129fea}input[type=color][disabled],input[type=date][disabled],input[type=datetime-local][disabled],input[type=datetime][disabled],input[type=email][disabled],input[type=month][disabled],input[type=number][disabled],input[type=password][disabled],input[type=search][disabled],input[type=tel][disabled],input[type=text][disabled],input[type=time][disabled],input[type=url][disabled],input[type=week][disabled]{cursor:not-allowed;background-color:#fafafa}input:focus:invalid,select:focus:invalid,textarea:focus:invalid{color:#e74c3c;border:1px solid #e74c3c}input:focus:invalid:focus,select:focus:invalid:focus,textarea:focus:invalid:focus{border-color:#e74c3c}input[type=checkbox]:focus:invalid:focus,input[type=file]:focus:invalid:focus,input[type=radio]:focus:invalid:focus{outline-color:#e74c3c}input.wy-input-large{padding:12px;font-size:100%}textarea{overflow:auto;vertical-align:top;width:100%;font-family:Lato,proxima-nova,Helvetica Neue,Arial,sans-serif}select,textarea{padding:.5em .625em;display:inline-block;border:1px solid #ccc;font-size:80%;box-shadow:inset 0 1px 3px #ddd;-webkit-transition:border .3s linear;-moz-transition:border .3s linear;transition:border .3s linear}select{border:1px solid #ccc;background-color:#fff}select[multiple]{height:auto}select:focus,textarea:focus{outline:0}input[readonly],select[disabled],select[readonly],textarea[disabled],textarea[readonly]{cursor:not-allowed;background-color:#fafafa}input[type=checkbox][disabled],input[type=radio][disabled]{cursor:not-allowed}.wy-checkbox,.wy-radio{margin:6px 0;color:#404040;display:block}.wy-checkbox input,.wy-radio input{vertical-align:baseline}.wy-form-message-inline{display:inline-block;*display:inline;*zoom:1;vertical-align:middle}.wy-input-prefix,.wy-input-suffix{white-space:nowrap;padding:6px}.wy-input-prefix .wy-input-context,.wy-input-suffix .wy-input-context{line-height:27px;padding:0 8px;display:inline-block;font-size:80%;background-color:#f3f6f6;border:1px solid #ccc;color:#999}.wy-input-suffix .wy-input-context{border-left:0}.wy-input-prefix .wy-input-context{border-right:0}.wy-switch{position:relative;display:block;height:24px;margin-top:12px;cursor:pointer}.wy-switch:before{left:0;top:0;width:36px;height:12px;background:#ccc}.wy-switch:after,.wy-switch:before{position:absolute;content:"";display:block;border-radius:4px;-webkit-transition:all .2s ease-in-out;-moz-transition:all .2s ease-in-out;transition:all .2s ease-in-out}.wy-switch:after{width:18px;height:18px;background:#999;left:-3px;top:-3px}.wy-switch 
span{position:absolute;left:48px;display:block;font-size:12px;color:#ccc;line-height:1}.wy-switch.active:before{background:#1e8449}.wy-switch.active:after{left:24px;background:#27ae60}.wy-switch.disabled{cursor:not-allowed;opacity:.8}.wy-control-group.wy-control-group-error .wy-form-message,.wy-control-group.wy-control-group-error>label{color:#e74c3c}.wy-control-group.wy-control-group-error input[type=color],.wy-control-group.wy-control-group-error input[type=date],.wy-control-group.wy-control-group-error input[type=datetime-local],.wy-control-group.wy-control-group-error input[type=datetime],.wy-control-group.wy-control-group-error input[type=email],.wy-control-group.wy-control-group-error input[type=month],.wy-control-group.wy-control-group-error input[type=number],.wy-control-group.wy-control-group-error input[type=password],.wy-control-group.wy-control-group-error input[type=search],.wy-control-group.wy-control-group-error input[type=tel],.wy-control-group.wy-control-group-error input[type=text],.wy-control-group.wy-control-group-error input[type=time],.wy-control-group.wy-control-group-error input[type=url],.wy-control-group.wy-control-group-error input[type=week],.wy-control-group.wy-control-group-error textarea{border:1px solid #e74c3c}.wy-inline-validate{white-space:nowrap}.wy-inline-validate .wy-input-context{padding:.5em .625em;display:inline-block;font-size:80%}.wy-inline-validate.wy-inline-validate-success .wy-input-context{color:#27ae60}.wy-inline-validate.wy-inline-validate-danger .wy-input-context{color:#e74c3c}.wy-inline-validate.wy-inline-validate-warning .wy-input-context{color:#e67e22}.wy-inline-validate.wy-inline-validate-info .wy-input-context{color:#2980b9}.rotate-90{-webkit-transform:rotate(90deg);-moz-transform:rotate(90deg);-ms-transform:rotate(90deg);-o-transform:rotate(90deg);transform:rotate(90deg)}.rotate-180{-webkit-transform:rotate(180deg);-moz-transform:rotate(180deg);-ms-transform:rotate(180deg);-o-transform:rotate(180deg);transform:rotate(180deg)}.rotate-270{-webkit-transform:rotate(270deg);-moz-transform:rotate(270deg);-ms-transform:rotate(270deg);-o-transform:rotate(270deg);transform:rotate(270deg)}.mirror{-webkit-transform:scaleX(-1);-moz-transform:scaleX(-1);-ms-transform:scaleX(-1);-o-transform:scaleX(-1);transform:scaleX(-1)}.mirror.rotate-90{-webkit-transform:scaleX(-1) rotate(90deg);-moz-transform:scaleX(-1) rotate(90deg);-ms-transform:scaleX(-1) rotate(90deg);-o-transform:scaleX(-1) rotate(90deg);transform:scaleX(-1) rotate(90deg)}.mirror.rotate-180{-webkit-transform:scaleX(-1) rotate(180deg);-moz-transform:scaleX(-1) rotate(180deg);-ms-transform:scaleX(-1) rotate(180deg);-o-transform:scaleX(-1) rotate(180deg);transform:scaleX(-1) rotate(180deg)}.mirror.rotate-270{-webkit-transform:scaleX(-1) rotate(270deg);-moz-transform:scaleX(-1) rotate(270deg);-ms-transform:scaleX(-1) rotate(270deg);-o-transform:scaleX(-1) rotate(270deg);transform:scaleX(-1) rotate(270deg)}@media only screen and (max-width:480px){.wy-form button[type=submit]{margin:.7em 0 0}.wy-form input[type=color],.wy-form input[type=date],.wy-form input[type=datetime-local],.wy-form input[type=datetime],.wy-form input[type=email],.wy-form input[type=month],.wy-form input[type=number],.wy-form input[type=password],.wy-form input[type=search],.wy-form input[type=tel],.wy-form input[type=text],.wy-form input[type=time],.wy-form input[type=url],.wy-form input[type=week],.wy-form label{margin-bottom:.3em;display:block}.wy-form input[type=color],.wy-form input[type=date],.wy-form 
input[type=datetime-local],.wy-form input[type=datetime],.wy-form input[type=email],.wy-form input[type=month],.wy-form input[type=number],.wy-form input[type=password],.wy-form input[type=search],.wy-form input[type=tel],.wy-form input[type=time],.wy-form input[type=url],.wy-form input[type=week]{margin-bottom:0}.wy-form-aligned .wy-control-group label{margin-bottom:.3em;text-align:left;display:block;width:100%}.wy-form-aligned .wy-control{margin:1.5em 0 0}.wy-form-message,.wy-form-message-inline,.wy-form .wy-help-inline{display:block;font-size:80%;padding:6px 0}}@media screen and (max-width:768px){.tablet-hide{display:none}}@media screen and (max-width:480px){.mobile-hide{display:none}}.float-left{float:left}.float-right{float:right}.full-width{width:100%}.rst-content table.docutils,.rst-content table.field-list,.wy-table{border-collapse:collapse;border-spacing:0;empty-cells:show;margin-bottom:24px}.rst-content table.docutils caption,.rst-content table.field-list caption,.wy-table caption{color:#000;font:italic 85%/1 arial,sans-serif;padding:1em 0;text-align:center}.rst-content table.docutils td,.rst-content table.docutils th,.rst-content table.field-list td,.rst-content table.field-list th,.wy-table td,.wy-table th{font-size:90%;margin:0;overflow:visible;padding:8px 16px}.rst-content table.docutils td:first-child,.rst-content table.docutils th:first-child,.rst-content table.field-list td:first-child,.rst-content table.field-list th:first-child,.wy-table td:first-child,.wy-table th:first-child{border-left-width:0}.rst-content table.docutils thead,.rst-content table.field-list thead,.wy-table thead{color:#000;text-align:left;vertical-align:bottom;white-space:nowrap}.rst-content table.docutils thead th,.rst-content table.field-list thead th,.wy-table thead th{font-weight:700;border-bottom:2px solid #e1e4e5}.rst-content table.docutils td,.rst-content table.field-list td,.wy-table td{background-color:transparent;vertical-align:middle}.rst-content table.docutils td p,.rst-content table.field-list td p,.wy-table td p{line-height:18px}.rst-content table.docutils td p:last-child,.rst-content table.field-list td p:last-child,.wy-table td p:last-child{margin-bottom:0}.rst-content table.docutils .wy-table-cell-min,.rst-content table.field-list .wy-table-cell-min,.wy-table .wy-table-cell-min{width:1%;padding-right:0}.rst-content table.docutils .wy-table-cell-min input[type=checkbox],.rst-content table.field-list .wy-table-cell-min input[type=checkbox],.wy-table .wy-table-cell-min input[type=checkbox]{margin:0}.wy-table-secondary{color:grey;font-size:90%}.wy-table-tertiary{color:grey;font-size:80%}.rst-content table.docutils:not(.field-list) tr:nth-child(2n-1) td,.wy-table-backed,.wy-table-odd td,.wy-table-striped tr:nth-child(2n-1) td{background-color:#f3f6f6}.rst-content table.docutils,.wy-table-bordered-all{border:1px solid #e1e4e5}.rst-content table.docutils td,.wy-table-bordered-all td{border-bottom:1px solid #e1e4e5;border-left:1px solid #e1e4e5}.rst-content table.docutils tbody>tr:last-child td,.wy-table-bordered-all tbody>tr:last-child td{border-bottom-width:0}.wy-table-bordered{border:1px solid #e1e4e5}.wy-table-bordered-rows td{border-bottom:1px solid #e1e4e5}.wy-table-bordered-rows tbody>tr:last-child td{border-bottom-width:0}.wy-table-horizontal td,.wy-table-horizontal th{border-width:0 0 1px;border-bottom:1px solid #e1e4e5}.wy-table-horizontal tbody>tr:last-child td{border-bottom-width:0}.wy-table-responsive{margin-bottom:24px;max-width:100%;overflow:auto}.wy-table-responsive 
table{margin-bottom:0!important}.wy-table-responsive table td,.wy-table-responsive table th{white-space:nowrap}a{color:#2980b9;text-decoration:none;cursor:pointer}a:hover{color:#3091d1}a:visited{color:#9b59b6}html{height:100%}body,html{overflow-x:hidden}body{font-family:Lato,proxima-nova,Helvetica Neue,Arial,sans-serif;font-weight:400;color:#404040;min-height:100%;background:#edf0f2}.wy-text-left{text-align:left}.wy-text-center{text-align:center}.wy-text-right{text-align:right}.wy-text-large{font-size:120%}.wy-text-normal{font-size:100%}.wy-text-small,small{font-size:80%}.wy-text-strike{text-decoration:line-through}.wy-text-warning{color:#e67e22!important}a.wy-text-warning:hover{color:#eb9950!important}.wy-text-info{color:#2980b9!important}a.wy-text-info:hover{color:#409ad5!important}.wy-text-success{color:#27ae60!important}a.wy-text-success:hover{color:#36d278!important}.wy-text-danger{color:#e74c3c!important}a.wy-text-danger:hover{color:#ed7669!important}.wy-text-neutral{color:#404040!important}a.wy-text-neutral:hover{color:#595959!important}.rst-content .toctree-wrapper>p.caption,h1,h2,h3,h4,h5,h6,legend{margin-top:0;font-weight:700;font-family:Roboto Slab,ff-tisa-web-pro,Georgia,Arial,sans-serif}p{line-height:24px;font-size:16px;margin:0 0 24px}h1{font-size:175%}.rst-content .toctree-wrapper>p.caption,h2{font-size:150%}h3{font-size:125%}h4{font-size:115%}h5{font-size:110%}h6{font-size:100%}hr{display:block;height:1px;border:0;border-top:1px solid #e1e4e5;margin:24px 0;padding:0}.rst-content code,.rst-content tt,code{white-space:nowrap;max-width:100%;background:#fff;border:1px solid #e1e4e5;font-size:75%;padding:0 5px;font-family:SFMono-Regular,Menlo,Monaco,Consolas,Liberation Mono,Courier New,Courier,monospace;color:#e74c3c;overflow-x:auto}.rst-content tt.code-large,code.code-large{font-size:90%}.rst-content .section ul,.rst-content .toctree-wrapper ul,.rst-content section ul,.wy-plain-list-disc,article ul{list-style:disc;line-height:24px;margin-bottom:24px}.rst-content .section ul li,.rst-content .toctree-wrapper ul li,.rst-content section ul li,.wy-plain-list-disc li,article ul li{list-style:disc;margin-left:24px}.rst-content .section ul li p:last-child,.rst-content .section ul li ul,.rst-content .toctree-wrapper ul li p:last-child,.rst-content .toctree-wrapper ul li ul,.rst-content section ul li p:last-child,.rst-content section ul li ul,.wy-plain-list-disc li p:last-child,.wy-plain-list-disc li ul,article ul li p:last-child,article ul li ul{margin-bottom:0}.rst-content .section ul li li,.rst-content .toctree-wrapper ul li li,.rst-content section ul li li,.wy-plain-list-disc li li,article ul li li{list-style:circle}.rst-content .section ul li li li,.rst-content .toctree-wrapper ul li li li,.rst-content section ul li li li,.wy-plain-list-disc li li li,article ul li li li{list-style:square}.rst-content .section ul li ol li,.rst-content .toctree-wrapper ul li ol li,.rst-content section ul li ol li,.wy-plain-list-disc li ol li,article ul li ol li{list-style:decimal}.rst-content .section ol,.rst-content .section ol.arabic,.rst-content .toctree-wrapper ol,.rst-content .toctree-wrapper ol.arabic,.rst-content section ol,.rst-content section ol.arabic,.wy-plain-list-decimal,article ol{list-style:decimal;line-height:24px;margin-bottom:24px}.rst-content .section ol.arabic li,.rst-content .section ol li,.rst-content .toctree-wrapper ol.arabic li,.rst-content .toctree-wrapper ol li,.rst-content section ol.arabic li,.rst-content section ol li,.wy-plain-list-decimal li,article ol 
li{list-style:decimal;margin-left:24px}.rst-content .section ol.arabic li ul,.rst-content .section ol li p:last-child,.rst-content .section ol li ul,.rst-content .toctree-wrapper ol.arabic li ul,.rst-content .toctree-wrapper ol li p:last-child,.rst-content .toctree-wrapper ol li ul,.rst-content section ol.arabic li ul,.rst-content section ol li p:last-child,.rst-content section ol li ul,.wy-plain-list-decimal li p:last-child,.wy-plain-list-decimal li ul,article ol li p:last-child,article ol li ul{margin-bottom:0}.rst-content .section ol.arabic li ul li,.rst-content .section ol li ul li,.rst-content .toctree-wrapper ol.arabic li ul li,.rst-content .toctree-wrapper ol li ul li,.rst-content section ol.arabic li ul li,.rst-content section ol li ul li,.wy-plain-list-decimal li ul li,article ol li ul li{list-style:disc}.wy-breadcrumbs{*zoom:1}.wy-breadcrumbs:after,.wy-breadcrumbs:before{display:table;content:""}.wy-breadcrumbs:after{clear:both}.wy-breadcrumbs>li{display:inline-block;padding-top:5px}.wy-breadcrumbs>li.wy-breadcrumbs-aside{float:right}.rst-content .wy-breadcrumbs>li code,.rst-content .wy-breadcrumbs>li tt,.wy-breadcrumbs>li .rst-content tt,.wy-breadcrumbs>li code{all:inherit;color:inherit}.breadcrumb-item:before{content:"/";color:#bbb;font-size:13px;padding:0 6px 0 3px}.wy-breadcrumbs-extra{margin-bottom:0;color:#b3b3b3;font-size:80%;display:inline-block}@media screen and (max-width:480px){.wy-breadcrumbs-extra,.wy-breadcrumbs li.wy-breadcrumbs-aside{display:none}}@media print{.wy-breadcrumbs li.wy-breadcrumbs-aside{display:none}}html{font-size:16px}.wy-affix{position:fixed;top:1.618em}.wy-menu a:hover{text-decoration:none}.wy-menu-horiz{*zoom:1}.wy-menu-horiz:after,.wy-menu-horiz:before{display:table;content:""}.wy-menu-horiz:after{clear:both}.wy-menu-horiz li,.wy-menu-horiz ul{display:inline-block}.wy-menu-horiz li:hover{background:hsla(0,0%,100%,.1)}.wy-menu-horiz li.divide-left{border-left:1px solid #404040}.wy-menu-horiz li.divide-right{border-right:1px solid #404040}.wy-menu-horiz a{height:32px;display:inline-block;line-height:32px;padding:0 16px}.wy-menu-vertical{width:300px}.wy-menu-vertical header,.wy-menu-vertical p.caption{color:#55a5d9;height:32px;line-height:32px;padding:0 1.618em;margin:12px 0 0;display:block;font-weight:700;text-transform:uppercase;font-size:85%;white-space:nowrap}.wy-menu-vertical ul{margin-bottom:0}.wy-menu-vertical li.divide-top{border-top:1px solid #404040}.wy-menu-vertical li.divide-bottom{border-bottom:1px solid #404040}.wy-menu-vertical li.current{background:#e3e3e3}.wy-menu-vertical li.current a{color:grey;border-right:1px solid #c9c9c9;padding:.4045em 2.427em}.wy-menu-vertical li.current a:hover{background:#d6d6d6}.rst-content .wy-menu-vertical li tt,.wy-menu-vertical li .rst-content tt,.wy-menu-vertical li code{border:none;background:inherit;color:inherit;padding-left:0;padding-right:0}.wy-menu-vertical li button.toctree-expand{display:block;float:left;margin-left:-1.2em;line-height:18px;color:#4d4d4d;border:none;background:none;padding:0}.wy-menu-vertical li.current>a,.wy-menu-vertical li.on a{color:#404040;font-weight:700;position:relative;background:#fcfcfc;border:none;padding:.4045em 1.618em}.wy-menu-vertical li.current>a:hover,.wy-menu-vertical li.on a:hover{background:#fcfcfc}.wy-menu-vertical li.current>a:hover button.toctree-expand,.wy-menu-vertical li.on a:hover button.toctree-expand{color:grey}.wy-menu-vertical li.current>a button.toctree-expand,.wy-menu-vertical li.on a 
button.toctree-expand{display:block;line-height:18px;color:#333}.wy-menu-vertical li.toctree-l1.current>a{border-bottom:1px solid #c9c9c9;border-top:1px solid #c9c9c9}.wy-menu-vertical .toctree-l1.current .toctree-l2>ul,.wy-menu-vertical .toctree-l2.current .toctree-l3>ul,.wy-menu-vertical .toctree-l3.current .toctree-l4>ul,.wy-menu-vertical .toctree-l4.current .toctree-l5>ul,.wy-menu-vertical .toctree-l5.current .toctree-l6>ul,.wy-menu-vertical .toctree-l6.current .toctree-l7>ul,.wy-menu-vertical .toctree-l7.current .toctree-l8>ul,.wy-menu-vertical .toctree-l8.current .toctree-l9>ul,.wy-menu-vertical .toctree-l9.current .toctree-l10>ul,.wy-menu-vertical .toctree-l10.current .toctree-l11>ul{display:none}.wy-menu-vertical .toctree-l1.current .current.toctree-l2>ul,.wy-menu-vertical .toctree-l2.current .current.toctree-l3>ul,.wy-menu-vertical .toctree-l3.current .current.toctree-l4>ul,.wy-menu-vertical .toctree-l4.current .current.toctree-l5>ul,.wy-menu-vertical .toctree-l5.current .current.toctree-l6>ul,.wy-menu-vertical .toctree-l6.current .current.toctree-l7>ul,.wy-menu-vertical .toctree-l7.current .current.toctree-l8>ul,.wy-menu-vertical .toctree-l8.current .current.toctree-l9>ul,.wy-menu-vertical .toctree-l9.current .current.toctree-l10>ul,.wy-menu-vertical .toctree-l10.current .current.toctree-l11>ul{display:block}.wy-menu-vertical li.toctree-l3,.wy-menu-vertical li.toctree-l4{font-size:.9em}.wy-menu-vertical li.toctree-l2 a,.wy-menu-vertical li.toctree-l3 a,.wy-menu-vertical li.toctree-l4 a,.wy-menu-vertical li.toctree-l5 a,.wy-menu-vertical li.toctree-l6 a,.wy-menu-vertical li.toctree-l7 a,.wy-menu-vertical li.toctree-l8 a,.wy-menu-vertical li.toctree-l9 a,.wy-menu-vertical li.toctree-l10 a{color:#404040}.wy-menu-vertical li.toctree-l2 a:hover button.toctree-expand,.wy-menu-vertical li.toctree-l3 a:hover button.toctree-expand,.wy-menu-vertical li.toctree-l4 a:hover button.toctree-expand,.wy-menu-vertical li.toctree-l5 a:hover button.toctree-expand,.wy-menu-vertical li.toctree-l6 a:hover button.toctree-expand,.wy-menu-vertical li.toctree-l7 a:hover button.toctree-expand,.wy-menu-vertical li.toctree-l8 a:hover button.toctree-expand,.wy-menu-vertical li.toctree-l9 a:hover button.toctree-expand,.wy-menu-vertical li.toctree-l10 a:hover button.toctree-expand{color:grey}.wy-menu-vertical li.toctree-l2.current li.toctree-l3>a,.wy-menu-vertical li.toctree-l3.current li.toctree-l4>a,.wy-menu-vertical li.toctree-l4.current li.toctree-l5>a,.wy-menu-vertical li.toctree-l5.current li.toctree-l6>a,.wy-menu-vertical li.toctree-l6.current li.toctree-l7>a,.wy-menu-vertical li.toctree-l7.current li.toctree-l8>a,.wy-menu-vertical li.toctree-l8.current li.toctree-l9>a,.wy-menu-vertical li.toctree-l9.current li.toctree-l10>a,.wy-menu-vertical li.toctree-l10.current li.toctree-l11>a{display:block}.wy-menu-vertical li.toctree-l2.current>a{padding:.4045em 2.427em}.wy-menu-vertical li.toctree-l2.current li.toctree-l3>a{padding:.4045em 1.618em .4045em 4.045em}.wy-menu-vertical li.toctree-l3.current>a{padding:.4045em 4.045em}.wy-menu-vertical li.toctree-l3.current li.toctree-l4>a{padding:.4045em 1.618em .4045em 5.663em}.wy-menu-vertical li.toctree-l4.current>a{padding:.4045em 5.663em}.wy-menu-vertical li.toctree-l4.current li.toctree-l5>a{padding:.4045em 1.618em .4045em 7.281em}.wy-menu-vertical li.toctree-l5.current>a{padding:.4045em 7.281em}.wy-menu-vertical li.toctree-l5.current li.toctree-l6>a{padding:.4045em 1.618em .4045em 8.899em}.wy-menu-vertical li.toctree-l6.current>a{padding:.4045em 
8.899em}.wy-menu-vertical li.toctree-l6.current li.toctree-l7>a{padding:.4045em 1.618em .4045em 10.517em}.wy-menu-vertical li.toctree-l7.current>a{padding:.4045em 10.517em}.wy-menu-vertical li.toctree-l7.current li.toctree-l8>a{padding:.4045em 1.618em .4045em 12.135em}.wy-menu-vertical li.toctree-l8.current>a{padding:.4045em 12.135em}.wy-menu-vertical li.toctree-l8.current li.toctree-l9>a{padding:.4045em 1.618em .4045em 13.753em}.wy-menu-vertical li.toctree-l9.current>a{padding:.4045em 13.753em}.wy-menu-vertical li.toctree-l9.current li.toctree-l10>a{padding:.4045em 1.618em .4045em 15.371em}.wy-menu-vertical li.toctree-l10.current>a{padding:.4045em 15.371em}.wy-menu-vertical li.toctree-l10.current li.toctree-l11>a{padding:.4045em 1.618em .4045em 16.989em}.wy-menu-vertical li.toctree-l2.current>a,.wy-menu-vertical li.toctree-l2.current li.toctree-l3>a{background:#c9c9c9}.wy-menu-vertical li.toctree-l2 button.toctree-expand{color:#a3a3a3}.wy-menu-vertical li.toctree-l3.current>a,.wy-menu-vertical li.toctree-l3.current li.toctree-l4>a{background:#bdbdbd}.wy-menu-vertical li.toctree-l3 button.toctree-expand{color:#969696}.wy-menu-vertical li.current ul{display:block}.wy-menu-vertical li ul{margin-bottom:0;display:none}.wy-menu-vertical li ul li a{margin-bottom:0;color:#d9d9d9;font-weight:400}.wy-menu-vertical a{line-height:18px;padding:.4045em 1.618em;display:block;position:relative;font-size:90%;color:#d9d9d9}.wy-menu-vertical a:hover{background-color:#4e4a4a;cursor:pointer}.wy-menu-vertical a:hover button.toctree-expand{color:#d9d9d9}.wy-menu-vertical a:active{background-color:#2980b9;cursor:pointer;color:#fff}.wy-menu-vertical a:active button.toctree-expand{color:#fff}.wy-side-nav-search{display:block;width:300px;padding:.809em;margin-bottom:.809em;z-index:200;background-color:#2980b9;text-align:center;color:#fcfcfc}.wy-side-nav-search input[type=text]{width:100%;border-radius:50px;padding:6px 12px;border-color:#2472a4}.wy-side-nav-search img{display:block;margin:auto auto .809em;height:45px;width:45px;background-color:#2980b9;padding:5px;border-radius:100%}.wy-side-nav-search .wy-dropdown>a,.wy-side-nav-search>a{color:#fcfcfc;font-size:100%;font-weight:700;display:inline-block;padding:4px 6px;margin-bottom:.809em;max-width:100%}.wy-side-nav-search .wy-dropdown>a:hover,.wy-side-nav-search>a:hover{background:hsla(0,0%,100%,.1)}.wy-side-nav-search .wy-dropdown>a img.logo,.wy-side-nav-search>a img.logo{display:block;margin:0 auto;height:auto;width:auto;border-radius:0;max-width:100%;background:transparent}.wy-side-nav-search .wy-dropdown>a.icon img.logo,.wy-side-nav-search>a.icon img.logo{margin-top:.85em}.wy-side-nav-search>div.version{margin-top:-.4045em;margin-bottom:.809em;font-weight:400;color:hsla(0,0%,100%,.3)}.wy-nav .wy-menu-vertical header{color:#2980b9}.wy-nav .wy-menu-vertical a{color:#b3b3b3}.wy-nav .wy-menu-vertical a:hover{background-color:#2980b9;color:#fff}[data-menu-wrap]{-webkit-transition:all .2s ease-in;-moz-transition:all .2s ease-in;transition:all .2s 
ease-in;position:absolute;opacity:1;width:100%;opacity:0}[data-menu-wrap].move-center{left:0;right:auto;opacity:1}[data-menu-wrap].move-left{right:auto;left:-100%;opacity:0}[data-menu-wrap].move-right{right:-100%;left:auto;opacity:0}.wy-body-for-nav{background:#fcfcfc}.wy-grid-for-nav{position:absolute;width:100%;height:100%}.wy-nav-side{position:fixed;top:0;bottom:0;left:0;padding-bottom:2em;width:300px;overflow-x:hidden;overflow-y:hidden;min-height:100%;color:#9b9b9b;background:#343131;z-index:200}.wy-side-scroll{width:320px;position:relative;overflow-x:hidden;overflow-y:scroll;height:100%}.wy-nav-top{display:none;background:#2980b9;color:#fff;padding:.4045em .809em;position:relative;line-height:50px;text-align:center;font-size:100%;*zoom:1}.wy-nav-top:after,.wy-nav-top:before{display:table;content:""}.wy-nav-top:after{clear:both}.wy-nav-top a{color:#fff;font-weight:700}.wy-nav-top img{margin-right:12px;height:45px;width:45px;background-color:#2980b9;padding:5px;border-radius:100%}.wy-nav-top i{font-size:30px;float:left;cursor:pointer;padding-top:inherit}.wy-nav-content-wrap{margin-left:300px;background:#fcfcfc;min-height:100%}.wy-nav-content{padding:1.618em 3.236em;height:100%;max-width:800px;margin:auto}.wy-body-mask{position:fixed;width:100%;height:100%;background:rgba(0,0,0,.2);display:none;z-index:499}.wy-body-mask.on{display:block}footer{color:grey}footer p{margin-bottom:12px}.rst-content footer span.commit tt,footer span.commit .rst-content tt,footer span.commit code{padding:0;font-family:SFMono-Regular,Menlo,Monaco,Consolas,Liberation Mono,Courier New,Courier,monospace;font-size:1em;background:none;border:none;color:grey}.rst-footer-buttons{*zoom:1}.rst-footer-buttons:after,.rst-footer-buttons:before{width:100%;display:table;content:""}.rst-footer-buttons:after{clear:both}.rst-breadcrumbs-buttons{margin-top:12px;*zoom:1}.rst-breadcrumbs-buttons:after,.rst-breadcrumbs-buttons:before{display:table;content:""}.rst-breadcrumbs-buttons:after{clear:both}#search-results .search li{margin-bottom:24px;border-bottom:1px solid #e1e4e5;padding-bottom:24px}#search-results .search li:first-child{border-top:1px solid #e1e4e5;padding-top:24px}#search-results .search li a{font-size:120%;margin-bottom:12px;display:inline-block}#search-results .context{color:grey;font-size:90%}.genindextable li>ul{margin-left:24px}@media screen and (max-width:768px){.wy-body-for-nav{background:#fcfcfc}.wy-nav-top{display:block}.wy-nav-side{left:-300px}.wy-nav-side.shift{width:85%;left:0}.wy-menu.wy-menu-vertical,.wy-side-nav-search,.wy-side-scroll{width:auto}.wy-nav-content-wrap{margin-left:0}.wy-nav-content-wrap .wy-nav-content{padding:1.618em}.wy-nav-content-wrap.shift{position:fixed;min-width:100%;left:85%;top:0;height:100%;overflow:hidden}}@media screen and (min-width:1100px){.wy-nav-content-wrap{background:rgba(0,0,0,.05)}.wy-nav-content{margin:0;background:#fcfcfc}}@media print{.rst-versions,.wy-nav-side,footer{display:none}.wy-nav-content-wrap{margin-left:0}}.rst-versions{position:fixed;bottom:0;left:0;width:300px;color:#fcfcfc;background:#1f1d1d;font-family:Lato,proxima-nova,Helvetica Neue,Arial,sans-serif;z-index:400}.rst-versions a{color:#2980b9;text-decoration:none}.rst-versions .rst-badge-small{display:none}.rst-versions .rst-current-version{padding:12px;background-color:#272525;display:block;text-align:right;font-size:90%;cursor:pointer;color:#27ae60;*zoom:1}.rst-versions .rst-current-version:after,.rst-versions .rst-current-version:before{display:table;content:""}.rst-versions 
.rst-current-version:after{clear:both}.rst-content .code-block-caption .rst-versions .rst-current-version .headerlink,.rst-content .eqno .rst-versions .rst-current-version .headerlink,.rst-content .rst-versions .rst-current-version .admonition-title,.rst-content code.download .rst-versions .rst-current-version span:first-child,.rst-content dl dt .rst-versions .rst-current-version .headerlink,.rst-content h1 .rst-versions .rst-current-version .headerlink,.rst-content h2 .rst-versions .rst-current-version .headerlink,.rst-content h3 .rst-versions .rst-current-version .headerlink,.rst-content h4 .rst-versions .rst-current-version .headerlink,.rst-content h5 .rst-versions .rst-current-version .headerlink,.rst-content h6 .rst-versions .rst-current-version .headerlink,.rst-content p .rst-versions .rst-current-version .headerlink,.rst-content table>caption .rst-versions .rst-current-version .headerlink,.rst-content tt.download .rst-versions .rst-current-version span:first-child,.rst-versions .rst-current-version .fa,.rst-versions .rst-current-version .icon,.rst-versions .rst-current-version .rst-content .admonition-title,.rst-versions .rst-current-version .rst-content .code-block-caption .headerlink,.rst-versions .rst-current-version .rst-content .eqno .headerlink,.rst-versions .rst-current-version .rst-content code.download span:first-child,.rst-versions .rst-current-version .rst-content dl dt .headerlink,.rst-versions .rst-current-version .rst-content h1 .headerlink,.rst-versions .rst-current-version .rst-content h2 .headerlink,.rst-versions .rst-current-version .rst-content h3 .headerlink,.rst-versions .rst-current-version .rst-content h4 .headerlink,.rst-versions .rst-current-version .rst-content h5 .headerlink,.rst-versions .rst-current-version .rst-content h6 .headerlink,.rst-versions .rst-current-version .rst-content p .headerlink,.rst-versions .rst-current-version .rst-content table>caption .headerlink,.rst-versions .rst-current-version .rst-content tt.download span:first-child,.rst-versions .rst-current-version .wy-menu-vertical li button.toctree-expand,.wy-menu-vertical li .rst-versions .rst-current-version button.toctree-expand{color:#fcfcfc}.rst-versions .rst-current-version .fa-book,.rst-versions .rst-current-version .icon-book{float:left}.rst-versions .rst-current-version.rst-out-of-date{background-color:#e74c3c;color:#fff}.rst-versions .rst-current-version.rst-active-old-version{background-color:#f1c40f;color:#000}.rst-versions.shift-up{height:auto;max-height:100%;overflow-y:scroll}.rst-versions.shift-up .rst-other-versions{display:block}.rst-versions .rst-other-versions{font-size:90%;padding:12px;color:grey;display:none}.rst-versions .rst-other-versions hr{display:block;height:1px;border:0;margin:20px 0;padding:0;border-top:1px solid #413d3d}.rst-versions .rst-other-versions dd{display:inline-block;margin:0}.rst-versions .rst-other-versions dd a{display:inline-block;padding:6px;color:#fcfcfc}.rst-versions.rst-badge{width:auto;bottom:20px;right:20px;left:auto;border:none;max-width:300px;max-height:90%}.rst-versions.rst-badge .fa-book,.rst-versions.rst-badge .icon-book{float:none;line-height:30px}.rst-versions.rst-badge.shift-up .rst-current-version{text-align:right}.rst-versions.rst-badge.shift-up .rst-current-version .fa-book,.rst-versions.rst-badge.shift-up .rst-current-version .icon-book{float:left}.rst-versions.rst-badge>.rst-current-version{width:auto;height:30px;line-height:30px;padding:0 6px;display:block;text-align:center}@media screen and 
(max-width:768px){.rst-versions{width:85%;display:none}.rst-versions.shift{display:block}}.rst-content .toctree-wrapper>p.caption,.rst-content h1,.rst-content h2,.rst-content h3,.rst-content h4,.rst-content h5,.rst-content h6{margin-bottom:24px}.rst-content img{max-width:100%;height:auto}.rst-content div.figure,.rst-content figure{margin-bottom:24px}.rst-content div.figure .caption-text,.rst-content figure .caption-text{font-style:italic}.rst-content div.figure p:last-child.caption,.rst-content figure p:last-child.caption{margin-bottom:0}.rst-content div.figure.align-center,.rst-content figure.align-center{text-align:center}.rst-content .section>a>img,.rst-content .section>img,.rst-content section>a>img,.rst-content section>img{margin-bottom:24px}.rst-content abbr[title]{text-decoration:none}.rst-content.style-external-links a.reference.external:after{font-family:FontAwesome;content:"\f08e";color:#b3b3b3;vertical-align:super;font-size:60%;margin:0 .2em}.rst-content blockquote{margin-left:24px;line-height:24px;margin-bottom:24px}.rst-content pre.literal-block{white-space:pre;margin:0;padding:12px;font-family:SFMono-Regular,Menlo,Monaco,Consolas,Liberation Mono,Courier New,Courier,monospace;display:block;overflow:auto}.rst-content div[class^=highlight],.rst-content pre.literal-block{border:1px solid #e1e4e5;overflow-x:auto;margin:1px 0 24px}.rst-content div[class^=highlight] div[class^=highlight],.rst-content pre.literal-block div[class^=highlight]{padding:0;border:none;margin:0}.rst-content div[class^=highlight] td.code{width:100%}.rst-content .linenodiv pre{border-right:1px solid #e6e9ea;margin:0;padding:12px;font-family:SFMono-Regular,Menlo,Monaco,Consolas,Liberation Mono,Courier New,Courier,monospace;user-select:none;pointer-events:none}.rst-content div[class^=highlight] pre{white-space:pre;margin:0;padding:12px;display:block;overflow:auto}.rst-content div[class^=highlight] pre .hll{display:block;margin:0 -12px;padding:0 12px}.rst-content .linenodiv pre,.rst-content div[class^=highlight] pre,.rst-content pre.literal-block{font-family:SFMono-Regular,Menlo,Monaco,Consolas,Liberation Mono,Courier New,Courier,monospace;font-size:12px;line-height:1.4}.rst-content div.highlight .gp,.rst-content div.highlight span.linenos{user-select:none;pointer-events:none}.rst-content div.highlight span.linenos{display:inline-block;padding-left:0;padding-right:12px;margin-right:12px;border-right:1px solid #e6e9ea}.rst-content .code-block-caption{font-style:italic;font-size:85%;line-height:1;padding:1em 0;text-align:center}@media print{.rst-content .codeblock,.rst-content div[class^=highlight],.rst-content div[class^=highlight] pre{white-space:pre-wrap}}.rst-content .admonition,.rst-content .admonition-todo,.rst-content .attention,.rst-content .caution,.rst-content .danger,.rst-content .error,.rst-content .hint,.rst-content .important,.rst-content .note,.rst-content .seealso,.rst-content .tip,.rst-content .warning{clear:both}.rst-content .admonition-todo .last,.rst-content .admonition-todo>:last-child,.rst-content .admonition .last,.rst-content .admonition>:last-child,.rst-content .attention .last,.rst-content .attention>:last-child,.rst-content .caution .last,.rst-content .caution>:last-child,.rst-content .danger .last,.rst-content .danger>:last-child,.rst-content .error .last,.rst-content .error>:last-child,.rst-content .hint .last,.rst-content .hint>:last-child,.rst-content .important .last,.rst-content .important>:last-child,.rst-content .note .last,.rst-content .note>:last-child,.rst-content .seealso 
.last,.rst-content .seealso>:last-child,.rst-content .tip .last,.rst-content .tip>:last-child,.rst-content .warning .last,.rst-content .warning>:last-child{margin-bottom:0}.rst-content .admonition-title:before{margin-right:4px}.rst-content .admonition table{border-color:rgba(0,0,0,.1)}.rst-content .admonition table td,.rst-content .admonition table th{background:transparent!important;border-color:rgba(0,0,0,.1)!important}.rst-content .section ol.loweralpha,.rst-content .section ol.loweralpha>li,.rst-content .toctree-wrapper ol.loweralpha,.rst-content .toctree-wrapper ol.loweralpha>li,.rst-content section ol.loweralpha,.rst-content section ol.loweralpha>li{list-style:lower-alpha}.rst-content .section ol.upperalpha,.rst-content .section ol.upperalpha>li,.rst-content .toctree-wrapper ol.upperalpha,.rst-content .toctree-wrapper ol.upperalpha>li,.rst-content section ol.upperalpha,.rst-content section ol.upperalpha>li{list-style:upper-alpha}.rst-content .section ol li>*,.rst-content .section ul li>*,.rst-content .toctree-wrapper ol li>*,.rst-content .toctree-wrapper ul li>*,.rst-content section ol li>*,.rst-content section ul li>*{margin-top:12px;margin-bottom:12px}.rst-content .section ol li>:first-child,.rst-content .section ul li>:first-child,.rst-content .toctree-wrapper ol li>:first-child,.rst-content .toctree-wrapper ul li>:first-child,.rst-content section ol li>:first-child,.rst-content section ul li>:first-child{margin-top:0}.rst-content .section ol li>p,.rst-content .section ol li>p:last-child,.rst-content .section ul li>p,.rst-content .section ul li>p:last-child,.rst-content .toctree-wrapper ol li>p,.rst-content .toctree-wrapper ol li>p:last-child,.rst-content .toctree-wrapper ul li>p,.rst-content .toctree-wrapper ul li>p:last-child,.rst-content section ol li>p,.rst-content section ol li>p:last-child,.rst-content section ul li>p,.rst-content section ul li>p:last-child{margin-bottom:12px}.rst-content .section ol li>p:only-child,.rst-content .section ol li>p:only-child:last-child,.rst-content .section ul li>p:only-child,.rst-content .section ul li>p:only-child:last-child,.rst-content .toctree-wrapper ol li>p:only-child,.rst-content .toctree-wrapper ol li>p:only-child:last-child,.rst-content .toctree-wrapper ul li>p:only-child,.rst-content .toctree-wrapper ul li>p:only-child:last-child,.rst-content section ol li>p:only-child,.rst-content section ol li>p:only-child:last-child,.rst-content section ul li>p:only-child,.rst-content section ul li>p:only-child:last-child{margin-bottom:0}.rst-content .section ol li>ol,.rst-content .section ol li>ul,.rst-content .section ul li>ol,.rst-content .section ul li>ul,.rst-content .toctree-wrapper ol li>ol,.rst-content .toctree-wrapper ol li>ul,.rst-content .toctree-wrapper ul li>ol,.rst-content .toctree-wrapper ul li>ul,.rst-content section ol li>ol,.rst-content section ol li>ul,.rst-content section ul li>ol,.rst-content section ul li>ul{margin-bottom:12px}.rst-content .section ol.simple li>*,.rst-content .section ol.simple li ol,.rst-content .section ol.simple li ul,.rst-content .section ul.simple li>*,.rst-content .section ul.simple li ol,.rst-content .section ul.simple li ul,.rst-content .toctree-wrapper ol.simple li>*,.rst-content .toctree-wrapper ol.simple li ol,.rst-content .toctree-wrapper ol.simple li ul,.rst-content .toctree-wrapper ul.simple li>*,.rst-content .toctree-wrapper ul.simple li ol,.rst-content .toctree-wrapper ul.simple li ul,.rst-content section ol.simple li>*,.rst-content section ol.simple li ol,.rst-content section ol.simple li 
ul,.rst-content section ul.simple li>*,.rst-content section ul.simple li ol,.rst-content section ul.simple li ul{margin-top:0;margin-bottom:0}.rst-content .line-block{margin-left:0;margin-bottom:24px;line-height:24px}.rst-content .line-block .line-block{margin-left:24px;margin-bottom:0}.rst-content .topic-title{font-weight:700;margin-bottom:12px}.rst-content .toc-backref{color:#404040}.rst-content .align-right{float:right;margin:0 0 24px 24px}.rst-content .align-left{float:left;margin:0 24px 24px 0}.rst-content .align-center{margin:auto}.rst-content .align-center:not(table){display:block}.rst-content .code-block-caption .headerlink,.rst-content .eqno .headerlink,.rst-content .toctree-wrapper>p.caption .headerlink,.rst-content dl dt .headerlink,.rst-content h1 .headerlink,.rst-content h2 .headerlink,.rst-content h3 .headerlink,.rst-content h4 .headerlink,.rst-content h5 .headerlink,.rst-content h6 .headerlink,.rst-content p.caption .headerlink,.rst-content p .headerlink,.rst-content table>caption .headerlink{opacity:0;font-size:14px;font-family:FontAwesome;margin-left:.5em}.rst-content .code-block-caption .headerlink:focus,.rst-content .code-block-caption:hover .headerlink,.rst-content .eqno .headerlink:focus,.rst-content .eqno:hover .headerlink,.rst-content .toctree-wrapper>p.caption .headerlink:focus,.rst-content .toctree-wrapper>p.caption:hover .headerlink,.rst-content dl dt .headerlink:focus,.rst-content dl dt:hover .headerlink,.rst-content h1 .headerlink:focus,.rst-content h1:hover .headerlink,.rst-content h2 .headerlink:focus,.rst-content h2:hover .headerlink,.rst-content h3 .headerlink:focus,.rst-content h3:hover .headerlink,.rst-content h4 .headerlink:focus,.rst-content h4:hover .headerlink,.rst-content h5 .headerlink:focus,.rst-content h5:hover .headerlink,.rst-content h6 .headerlink:focus,.rst-content h6:hover .headerlink,.rst-content p.caption .headerlink:focus,.rst-content p.caption:hover .headerlink,.rst-content p .headerlink:focus,.rst-content p:hover .headerlink,.rst-content table>caption .headerlink:focus,.rst-content table>caption:hover .headerlink{opacity:1}.rst-content p a{overflow-wrap:anywhere}.rst-content .wy-table td p,.rst-content .wy-table td ul,.rst-content .wy-table th p,.rst-content .wy-table th ul,.rst-content table.docutils td p,.rst-content table.docutils td ul,.rst-content table.docutils th p,.rst-content table.docutils th ul,.rst-content table.field-list td p,.rst-content table.field-list td ul,.rst-content table.field-list th p,.rst-content table.field-list th ul{font-size:inherit}.rst-content .btn:focus{outline:2px solid}.rst-content table>caption .headerlink:after{font-size:12px}.rst-content .centered{text-align:center}.rst-content .sidebar{float:right;width:40%;display:block;margin:0 0 24px 24px;padding:24px;background:#f3f6f6;border:1px solid #e1e4e5}.rst-content .sidebar dl,.rst-content .sidebar p,.rst-content .sidebar ul{font-size:90%}.rst-content .sidebar .last,.rst-content .sidebar>:last-child{margin-bottom:0}.rst-content .sidebar .sidebar-title{display:block;font-family:Roboto Slab,ff-tisa-web-pro,Georgia,Arial,sans-serif;font-weight:700;background:#e1e4e5;padding:6px 12px;margin:-24px -24px 24px;font-size:100%}.rst-content .highlighted{background:#f1c40f;box-shadow:0 0 0 2px #f1c40f;display:inline;font-weight:700}.rst-content .citation-reference,.rst-content .footnote-reference{vertical-align:baseline;position:relative;top:-.4em;line-height:0;font-size:90%}.rst-content .citation-reference>span.fn-bracket,.rst-content 
.footnote-reference>span.fn-bracket{display:none}.rst-content .hlist{width:100%}.rst-content dl dt span.classifier:before{content:" : "}.rst-content dl dt span.classifier-delimiter{display:none!important}html.writer-html4 .rst-content table.docutils.citation,html.writer-html4 .rst-content table.docutils.footnote{background:none;border:none}html.writer-html4 .rst-content table.docutils.citation td,html.writer-html4 .rst-content table.docutils.citation tr,html.writer-html4 .rst-content table.docutils.footnote td,html.writer-html4 .rst-content table.docutils.footnote tr{border:none;background-color:transparent!important;white-space:normal}html.writer-html4 .rst-content table.docutils.citation td.label,html.writer-html4 .rst-content table.docutils.footnote td.label{padding-left:0;padding-right:0;vertical-align:top}html.writer-html5 .rst-content dl.citation,html.writer-html5 .rst-content dl.field-list,html.writer-html5 .rst-content dl.footnote{display:grid;grid-template-columns:auto minmax(80%,95%)}html.writer-html5 .rst-content dl.citation>dt,html.writer-html5 .rst-content dl.field-list>dt,html.writer-html5 .rst-content dl.footnote>dt{display:inline-grid;grid-template-columns:max-content auto}html.writer-html5 .rst-content aside.citation,html.writer-html5 .rst-content aside.footnote,html.writer-html5 .rst-content div.citation{display:grid;grid-template-columns:auto auto minmax(.65rem,auto) minmax(40%,95%)}html.writer-html5 .rst-content aside.citation>span.label,html.writer-html5 .rst-content aside.footnote>span.label,html.writer-html5 .rst-content div.citation>span.label{grid-column-start:1;grid-column-end:2}html.writer-html5 .rst-content aside.citation>span.backrefs,html.writer-html5 .rst-content aside.footnote>span.backrefs,html.writer-html5 .rst-content div.citation>span.backrefs{grid-column-start:2;grid-column-end:3;grid-row-start:1;grid-row-end:3}html.writer-html5 .rst-content aside.citation>p,html.writer-html5 .rst-content aside.footnote>p,html.writer-html5 .rst-content div.citation>p{grid-column-start:4;grid-column-end:5}html.writer-html5 .rst-content dl.citation,html.writer-html5 .rst-content dl.field-list,html.writer-html5 .rst-content dl.footnote{margin-bottom:24px}html.writer-html5 .rst-content dl.citation>dt,html.writer-html5 .rst-content dl.field-list>dt,html.writer-html5 .rst-content dl.footnote>dt{padding-left:1rem}html.writer-html5 .rst-content dl.citation>dd,html.writer-html5 .rst-content dl.citation>dt,html.writer-html5 .rst-content dl.field-list>dd,html.writer-html5 .rst-content dl.field-list>dt,html.writer-html5 .rst-content dl.footnote>dd,html.writer-html5 .rst-content dl.footnote>dt{margin-bottom:0}html.writer-html5 .rst-content dl.citation,html.writer-html5 .rst-content dl.footnote{font-size:.9rem}html.writer-html5 .rst-content dl.citation>dt,html.writer-html5 .rst-content dl.footnote>dt{margin:0 .5rem .5rem 0;line-height:1.2rem;word-break:break-all;font-weight:400}html.writer-html5 .rst-content dl.citation>dt>span.brackets:before,html.writer-html5 .rst-content dl.footnote>dt>span.brackets:before{content:"["}html.writer-html5 .rst-content dl.citation>dt>span.brackets:after,html.writer-html5 .rst-content dl.footnote>dt>span.brackets:after{content:"]"}html.writer-html5 .rst-content dl.citation>dt>span.fn-backref,html.writer-html5 .rst-content dl.footnote>dt>span.fn-backref{text-align:left;font-style:italic;margin-left:.65rem;word-break:break-word;word-spacing:-.1rem;max-width:5rem}html.writer-html5 .rst-content dl.citation>dt>span.fn-backref>a,html.writer-html5 
.rst-content dl.footnote>dt>span.fn-backref>a{word-break:keep-all}html.writer-html5 .rst-content dl.citation>dt>span.fn-backref>a:not(:first-child):before,html.writer-html5 .rst-content dl.footnote>dt>span.fn-backref>a:not(:first-child):before{content:" "}html.writer-html5 .rst-content dl.citation>dd,html.writer-html5 .rst-content dl.footnote>dd{margin:0 0 .5rem;line-height:1.2rem}html.writer-html5 .rst-content dl.citation>dd p,html.writer-html5 .rst-content dl.footnote>dd p{font-size:.9rem}html.writer-html5 .rst-content aside.citation,html.writer-html5 .rst-content aside.footnote,html.writer-html5 .rst-content div.citation{padding-left:1rem;padding-right:1rem;font-size:.9rem;line-height:1.2rem}html.writer-html5 .rst-content aside.citation p,html.writer-html5 .rst-content aside.footnote p,html.writer-html5 .rst-content div.citation p{font-size:.9rem;line-height:1.2rem;margin-bottom:12px}html.writer-html5 .rst-content aside.citation span.backrefs,html.writer-html5 .rst-content aside.footnote span.backrefs,html.writer-html5 .rst-content div.citation span.backrefs{text-align:left;font-style:italic;margin-left:.65rem;word-break:break-word;word-spacing:-.1rem;max-width:5rem}html.writer-html5 .rst-content aside.citation span.backrefs>a,html.writer-html5 .rst-content aside.footnote span.backrefs>a,html.writer-html5 .rst-content div.citation span.backrefs>a{word-break:keep-all}html.writer-html5 .rst-content aside.citation span.backrefs>a:not(:first-child):before,html.writer-html5 .rst-content aside.footnote span.backrefs>a:not(:first-child):before,html.writer-html5 .rst-content div.citation span.backrefs>a:not(:first-child):before{content:" "}html.writer-html5 .rst-content aside.citation span.label,html.writer-html5 .rst-content aside.footnote span.label,html.writer-html5 .rst-content div.citation span.label{line-height:1.2rem}html.writer-html5 .rst-content aside.citation-list,html.writer-html5 .rst-content aside.footnote-list,html.writer-html5 .rst-content div.citation-list{margin-bottom:24px}html.writer-html5 .rst-content dl.option-list kbd{font-size:.9rem}.rst-content table.docutils.footnote,html.writer-html4 .rst-content table.docutils.citation,html.writer-html5 .rst-content aside.footnote,html.writer-html5 .rst-content aside.footnote-list aside.footnote,html.writer-html5 .rst-content div.citation-list>div.citation,html.writer-html5 .rst-content dl.citation,html.writer-html5 .rst-content dl.footnote{color:grey}.rst-content table.docutils.footnote code,.rst-content table.docutils.footnote tt,html.writer-html4 .rst-content table.docutils.citation code,html.writer-html4 .rst-content table.docutils.citation tt,html.writer-html5 .rst-content aside.footnote-list aside.footnote code,html.writer-html5 .rst-content aside.footnote-list aside.footnote tt,html.writer-html5 .rst-content aside.footnote code,html.writer-html5 .rst-content aside.footnote tt,html.writer-html5 .rst-content div.citation-list>div.citation code,html.writer-html5 .rst-content div.citation-list>div.citation tt,html.writer-html5 .rst-content dl.citation code,html.writer-html5 .rst-content dl.citation tt,html.writer-html5 .rst-content dl.footnote code,html.writer-html5 .rst-content dl.footnote tt{color:#555}.rst-content .wy-table-responsive.citation,.rst-content .wy-table-responsive.footnote{margin-bottom:0}.rst-content .wy-table-responsive.citation+:not(.citation),.rst-content .wy-table-responsive.footnote+:not(.footnote){margin-top:24px}.rst-content .wy-table-responsive.citation:last-child,.rst-content 
.wy-table-responsive.footnote:last-child{margin-bottom:24px}.rst-content table.docutils th{border-color:#e1e4e5}html.writer-html5 .rst-content table.docutils th{border:1px solid #e1e4e5}html.writer-html5 .rst-content table.docutils td>p,html.writer-html5 .rst-content table.docutils th>p{line-height:1rem;margin-bottom:0;font-size:.9rem}.rst-content table.docutils td .last,.rst-content table.docutils td .last>:last-child{margin-bottom:0}.rst-content table.field-list,.rst-content table.field-list td{border:none}.rst-content table.field-list td p{line-height:inherit}.rst-content table.field-list td>strong{display:inline-block}.rst-content table.field-list .field-name{padding-right:10px;text-align:left;white-space:nowrap}.rst-content table.field-list .field-body{text-align:left}.rst-content code,.rst-content tt{color:#000;font-family:SFMono-Regular,Menlo,Monaco,Consolas,Liberation Mono,Courier New,Courier,monospace;padding:2px 5px}.rst-content code big,.rst-content code em,.rst-content tt big,.rst-content tt em{font-size:100%!important;line-height:normal}.rst-content code.literal,.rst-content tt.literal{color:#e74c3c;white-space:normal}.rst-content code.xref,.rst-content tt.xref,a .rst-content code,a .rst-content tt{font-weight:700;color:#404040;overflow-wrap:normal}.rst-content kbd,.rst-content pre,.rst-content samp{font-family:SFMono-Regular,Menlo,Monaco,Consolas,Liberation Mono,Courier New,Courier,monospace}.rst-content a code,.rst-content a tt{color:#2980b9}.rst-content dl{margin-bottom:24px}.rst-content dl dt{font-weight:700;margin-bottom:12px}.rst-content dl ol,.rst-content dl p,.rst-content dl table,.rst-content dl ul{margin-bottom:12px}.rst-content dl dd{margin:0 0 12px 24px;line-height:24px}.rst-content dl dd>ol:last-child,.rst-content dl dd>p:last-child,.rst-content dl dd>table:last-child,.rst-content dl dd>ul:last-child{margin-bottom:0}html.writer-html4 .rst-content dl:not(.docutils),html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple){margin-bottom:24px}html.writer-html4 .rst-content dl:not(.docutils)>dt,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple)>dt{display:table;margin:6px 0;font-size:90%;line-height:normal;background:#e7f2fa;color:#2980b9;border-top:3px solid #6ab0de;padding:6px;position:relative}html.writer-html4 .rst-content dl:not(.docutils)>dt:before,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple)>dt:before{color:#6ab0de}html.writer-html4 .rst-content dl:not(.docutils)>dt .headerlink,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple)>dt .headerlink{color:#404040;font-size:100%!important}html.writer-html4 .rst-content dl:not(.docutils) dl:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple)>dt,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) dl:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple)>dt{margin-bottom:6px;border:none;border-left:3px solid #ccc;background:#f0f0f0;color:#555}html.writer-html4 .rst-content dl:not(.docutils) dl:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple)>dt .headerlink,html.writer-html5 .rst-content 
dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) dl:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple)>dt .headerlink{color:#404040;font-size:100%!important}html.writer-html4 .rst-content dl:not(.docutils)>dt:first-child,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple)>dt:first-child{margin-top:0}html.writer-html4 .rst-content dl:not(.docutils) code.descclassname,html.writer-html4 .rst-content dl:not(.docutils) code.descname,html.writer-html4 .rst-content dl:not(.docutils) tt.descclassname,html.writer-html4 .rst-content dl:not(.docutils) tt.descname,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) code.descclassname,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) code.descname,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) tt.descclassname,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) tt.descname{background-color:transparent;border:none;padding:0;font-size:100%!important}html.writer-html4 .rst-content dl:not(.docutils) code.descname,html.writer-html4 .rst-content dl:not(.docutils) tt.descname,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) code.descname,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) tt.descname{font-weight:700}html.writer-html4 .rst-content dl:not(.docutils) .optional,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) .optional{display:inline-block;padding:0 4px;color:#000;font-weight:700}html.writer-html4 .rst-content dl:not(.docutils) .property,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) .property{display:inline-block;padding-right:8px;max-width:100%}html.writer-html4 .rst-content dl:not(.docutils) .k,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) .k{font-style:italic}html.writer-html4 .rst-content dl:not(.docutils) .descclassname,html.writer-html4 .rst-content dl:not(.docutils) .descname,html.writer-html4 .rst-content dl:not(.docutils) .sig-name,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) .descclassname,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) .descname,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) .sig-name{font-family:SFMono-Regular,Menlo,Monaco,Consolas,Liberation Mono,Courier New,Courier,monospace;color:#000}.rst-content .viewcode-back,.rst-content .viewcode-link{display:inline-block;color:#27ae60;font-size:80%;padding-left:24px}.rst-content .viewcode-back{display:block;float:right}.rst-content p.rubric{margin-bottom:12px;font-weight:700}.rst-content 
code.download,.rst-content tt.download{background:inherit;padding:inherit;font-weight:400;font-family:inherit;font-size:inherit;color:inherit;border:inherit;white-space:inherit}.rst-content code.download span:first-child,.rst-content tt.download span:first-child{-webkit-font-smoothing:subpixel-antialiased}.rst-content code.download span:first-child:before,.rst-content tt.download span:first-child:before{margin-right:4px}.rst-content .guilabel{border:1px solid #7fbbe3;background:#e7f2fa;font-size:80%;font-weight:700;border-radius:4px;padding:2.4px 6px;margin:auto 2px}.rst-content :not(dl.option-list)>:not(dt):not(kbd):not(.kbd)>.kbd,.rst-content :not(dl.option-list)>:not(dt):not(kbd):not(.kbd)>kbd{color:inherit;font-size:80%;background-color:#fff;border:1px solid #a6a6a6;border-radius:4px;box-shadow:0 2px grey;padding:2.4px 6px;margin:auto 0}.rst-content .versionmodified{font-style:italic}@media screen and (max-width:480px){.rst-content .sidebar{width:100%}}span[id*=MathJax-Span]{color:#404040}.math{text-align:center}@font-face{font-family:Lato;src:url(fonts/lato-normal.woff2?bd03a2cc277bbbc338d464e679fe9942) format("woff2"),url(fonts/lato-normal.woff?27bd77b9162d388cb8d4c4217c7c5e2a) format("woff");font-weight:400;font-style:normal;font-display:block}@font-face{font-family:Lato;src:url(fonts/lato-bold.woff2?cccb897485813c7c256901dbca54ecf2) format("woff2"),url(fonts/lato-bold.woff?d878b6c29b10beca227e9eef4246111b) format("woff");font-weight:700;font-style:normal;font-display:block}@font-face{font-family:Lato;src:url(fonts/lato-bold-italic.woff2?0b6bb6725576b072c5d0b02ecdd1900d) format("woff2"),url(fonts/lato-bold-italic.woff?9c7e4e9eb485b4a121c760e61bc3707c) format("woff");font-weight:700;font-style:italic;font-display:block}@font-face{font-family:Lato;src:url(fonts/lato-normal-italic.woff2?4eb103b4d12be57cb1d040ed5e162e9d) format("woff2"),url(fonts/lato-normal-italic.woff?f28f2d6482446544ef1ea1ccc6dd5892) format("woff");font-weight:400;font-style:italic;font-display:block}@font-face{font-family:Roboto Slab;font-style:normal;font-weight:400;src:url(fonts/Roboto-Slab-Regular.woff2?7abf5b8d04d26a2cafea937019bca958) format("woff2"),url(fonts/Roboto-Slab-Regular.woff?c1be9284088d487c5e3ff0a10a92e58c) format("woff");font-display:block}@font-face{font-family:Roboto Slab;font-style:normal;font-weight:700;src:url(fonts/Roboto-Slab-Bold.woff2?9984f4a9bda09be08e83f2506954adbe) format("woff2"),url(fonts/Roboto-Slab-Bold.woff?bed5564a116b05148e3b3bea6fb1162a) format("woff");font-display:block} diff --git a/site/css/theme_extra.css b/site/css/theme_extra.css new file mode 100644 index 0000000000000000000000000000000000000000..ab0631a1803c69462236d93ebb172e4451c0f06c --- /dev/null +++ b/site/css/theme_extra.css @@ -0,0 +1,197 @@ +/* + * Wrap inline code samples otherwise they shoot of the side and + * can't be read at all. + * + * https://github.com/mkdocs/mkdocs/issues/313 + * https://github.com/mkdocs/mkdocs/issues/233 + * https://github.com/mkdocs/mkdocs/issues/834 + */ +.rst-content code { + white-space: pre-wrap; + word-wrap: break-word; + padding: 2px 5px; +} + +/** + * Make code blocks display as blocks and give them the appropriate + * font size and padding. 
+ * + * https://github.com/mkdocs/mkdocs/issues/855 + * https://github.com/mkdocs/mkdocs/issues/834 + * https://github.com/mkdocs/mkdocs/issues/233 + */ +.rst-content pre code { + white-space: pre; + word-wrap: normal; + display: block; + padding: 12px; + font-size: 12px; +} + +/** + * Fix code colors + * + * https://github.com/mkdocs/mkdocs/issues/2027 + */ +.rst-content code { + color: #E74C3C; +} + +.rst-content pre code { + color: #000; + background: #f8f8f8; +} + +/* + * Fix link colors when the link text is inline code. + * + * https://github.com/mkdocs/mkdocs/issues/718 + */ +a code { + color: #2980B9; +} +a:hover code { + color: #3091d1; +} +a:visited code { + color: #9B59B6; +} + +/* + * The CSS classes from highlight.js seem to clash with the + * ReadTheDocs theme causing some code to be incorrectly made + * bold and italic. + * + * https://github.com/mkdocs/mkdocs/issues/411 + */ +pre .cs, pre .c { + font-weight: inherit; + font-style: inherit; +} + +/* + * Fix some issues with the theme and non-highlighted code + * samples. Without and highlighting styles attached the + * formatting is broken. + * + * https://github.com/mkdocs/mkdocs/issues/319 + */ +.rst-content .no-highlight { + display: block; + padding: 0.5em; + color: #333; +} + + +/* + * Additions specific to the search functionality provided by MkDocs + */ + +.search-results { + margin-top: 23px; +} + +.search-results article { + border-top: 1px solid #E1E4E5; + padding-top: 24px; +} + +.search-results article:first-child { + border-top: none; +} + +form .search-query { + width: 100%; + border-radius: 50px; + padding: 6px 12px; + border-color: #D1D4D5; +} + +/* + * Improve inline code blocks within admonitions. + * + * https://github.com/mkdocs/mkdocs/issues/656 + */ + .rst-content .admonition code { + color: #404040; + border: 1px solid #c7c9cb; + border: 1px solid rgba(0, 0, 0, 0.2); + background: #f8fbfd; + background: rgba(255, 255, 255, 0.7); +} + +/* + * Account for wide tables which go off the side. + * Override borders to avoid weirdness on narrow tables. + * + * https://github.com/mkdocs/mkdocs/issues/834 + * https://github.com/mkdocs/mkdocs/pull/1034 + */ +.rst-content .section .docutils { + width: 100%; + overflow: auto; + display: block; + border: none; +} + +td, th { + border: 1px solid #e1e4e5 !important; + border-collapse: collapse; +} + +/* + * Without the following amendments, the navigation in the theme will be + * slightly cut off. This is due to the fact that the .wy-nav-side has a + * padding-bottom of 2em, which must not necessarily align with the font-size of + * 90 % on the .rst-current-version container, combined with the padding of 12px + * above and below. These amendments fix this in two steps: First, make sure the + * .rst-current-version container has a fixed height of 40px, achieved using + * line-height, and then applying a padding-bottom of 40px to this container. In + * a second step, the items within that container are re-aligned using flexbox. + * + * https://github.com/mkdocs/mkdocs/issues/2012 + */ + .wy-nav-side { + padding-bottom: 40px; +} + +/* For section-index only */ +.wy-menu-vertical .current-section p { + background-color: #e3e3e3; + color: #404040; +} + +/* + * The second step of above amendment: Here we make sure the items are aligned + * correctly within the .rst-current-version container. 
Using flexbox, we + * achieve it in such a way that it will look like the following: + * + * [No repo_name] + * Next >> // On the first page + * << Previous Next >> // On all subsequent pages + * + * [With repo_name] + * Next >> // On the first page + * << Previous Next >> // On all subsequent pages + * + * https://github.com/mkdocs/mkdocs/issues/2012 + */ +.rst-versions .rst-current-version { + padding: 0 12px; + display: flex; + font-size: initial; + justify-content: space-between; + align-items: center; + line-height: 40px; +} + +/* + * Please note that this amendment also involves removing certain inline-styles + * from the file ./mkdocs/themes/readthedocs/versions.html. + * + * https://github.com/mkdocs/mkdocs/issues/2012 + */ +.rst-current-version span { + flex: 1; + text-align: center; +} diff --git a/site/img/contribute/rpm-packaging-reference/check_log.png b/site/img/contribute/rpm-packaging-reference/check_log.png new file mode 100644 index 0000000000000000000000000000000000000000..5bcc020fcc6411a6b79dc117cbe262181c56bd63 Binary files /dev/null and b/site/img/contribute/rpm-packaging-reference/check_log.png differ diff --git a/site/img/contribute/rpm-packaging-reference/gateway.png b/site/img/contribute/rpm-packaging-reference/gateway.png new file mode 100644 index 0000000000000000000000000000000000000000..a3553b6b7a2e45f8d64c025efb72e99d7fa92154 Binary files /dev/null and b/site/img/contribute/rpm-packaging-reference/gateway.png differ diff --git a/site/img/contribute/rpm-packaging-reference/maintainer.png b/site/img/contribute/rpm-packaging-reference/maintainer.png new file mode 100644 index 0000000000000000000000000000000000000000..7ae5f77b2b25063e9c111df8c38373b42bc6ad9e Binary files /dev/null and b/site/img/contribute/rpm-packaging-reference/maintainer.png differ diff --git a/site/img/contribute/rpm-packaging-reference/pat.png b/site/img/contribute/rpm-packaging-reference/pat.png new file mode 100644 index 0000000000000000000000000000000000000000..9eaa84d9ab487ebd247d81dec6377aaaa2014276 Binary files /dev/null and b/site/img/contribute/rpm-packaging-reference/pat.png differ diff --git a/site/img/contribute/rpm-packaging-reference/redirect_git_repo.png b/site/img/contribute/rpm-packaging-reference/redirect_git_repo.png new file mode 100644 index 0000000000000000000000000000000000000000..702c63d95a5111c7725bb9cdcfa00f42c95fd9ee Binary files /dev/null and b/site/img/contribute/rpm-packaging-reference/redirect_git_repo.png differ diff --git a/site/img/contribute/rpm-packaging-reference/setting.png b/site/img/contribute/rpm-packaging-reference/setting.png new file mode 100644 index 0000000000000000000000000000000000000000..5f5784ee4ae6087a3ef11f2be4cfbb2dcb212728 Binary files /dev/null and b/site/img/contribute/rpm-packaging-reference/setting.png differ diff --git a/site/img/favicon.ico b/site/img/favicon.ico new file mode 100644 index 0000000000000000000000000000000000000000..e85006a3ce1c6fd81faa6d5a13095519c4a6fc96 Binary files /dev/null and b/site/img/favicon.ico differ diff --git a/site/img/install/ironic-err.png b/site/img/install/ironic-err.png new file mode 100644 index 0000000000000000000000000000000000000000..1edfa4fee7013d859ff85a4afdd81e7cbbfda2a8 Binary files /dev/null and b/site/img/install/ironic-err.png differ diff --git a/site/img/install/topology1.PNG b/site/img/install/topology1.PNG new file mode 100644 index 0000000000000000000000000000000000000000..1a23d5dbd20f230cb22420a77647b06c370ebe87 Binary files /dev/null and b/site/img/install/topology1.PNG differ diff 
--git a/site/img/install/topology2.PNG b/site/img/install/topology2.PNG new file mode 100644 index 0000000000000000000000000000000000000000..847e82a6f92a13986487a5f3967df5ecf9e791c7 Binary files /dev/null and b/site/img/install/topology2.PNG differ diff --git a/site/img/install/topology3.PNG b/site/img/install/topology3.PNG new file mode 100644 index 0000000000000000000000000000000000000000..b0b4d37933d79377b7273ca6f4167793580231aa Binary files /dev/null and b/site/img/install/topology3.PNG differ diff --git a/site/img/install/wechat_group_assistant.jpg b/site/img/install/wechat_group_assistant.jpg new file mode 100644 index 0000000000000000000000000000000000000000..a07ac046cb721fae6dc36d3dcdd04231f16f7e96 Binary files /dev/null and b/site/img/install/wechat_group_assistant.jpg differ diff --git a/images/openEuler.png b/site/img/openEuler.png similarity index 100% rename from images/openEuler.png rename to site/img/openEuler.png diff --git a/site/img/spec/l3_scheduler.png b/site/img/spec/l3_scheduler.png new file mode 100644 index 0000000000000000000000000000000000000000..79c3429a6e5bb395b3c4fcc5c5be11b02b074e88 Binary files /dev/null and b/site/img/spec/l3_scheduler.png differ diff --git a/site/img/spec/router_1.png b/site/img/spec/router_1.png new file mode 100644 index 0000000000000000000000000000000000000000..69bb45752768f143e91d068986bc77314443abb4 Binary files /dev/null and b/site/img/spec/router_1.png differ diff --git a/site/img/spec/router_2.png b/site/img/spec/router_2.png new file mode 100644 index 0000000000000000000000000000000000000000..12ff927ee38407e0f6c5d41c9c1cfa1252165375 Binary files /dev/null and b/site/img/spec/router_2.png differ diff --git a/site/img/spec/router_3.png b/site/img/spec/router_3.png new file mode 100644 index 0000000000000000000000000000000000000000..344fe2aeda7a6d94596b6cb1c8887edca7a4b129 Binary files /dev/null and b/site/img/spec/router_3.png differ diff --git a/site/index.html b/site/index.html new file mode 100644 index 0000000000000000000000000000000000000000..54e0e9434ddf572e253dfaf6fc87606f818c4664 --- /dev/null +++ b/site/index.html @@ -0,0 +1,919 @@ + + + + + + + + OpenStack SIG Doc + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • + +
  • +
  • +
+
+
+
+
+ +

openEuler OpenStack SIG

+

SIG 工作目标和范围

+
    +
  • 在openEuler之上提供原生的OpenStack,构建开放可靠的云计算技术栈。
  • +
  • 定期召开会议,收集开发者、厂商诉求,讨论OpenStack社区发展。
  • +
+

组织会议

+

公开的会议时间:月度例会,每月中下旬的某个周三下午3:00-4:00(北京时间)

+

会议链接:通过微信群消息和邮件列表发出

+

会议纪要: https://etherpad.openeuler.org/p/sig-openstack-meetings

+

OpenStack版本支持列表

+

OpenStack SIG通过用户反馈等方式收集OpenStack版本需求,经过SIG组内成员公开讨论决定OpenStack的版本演进路线。规划中的版本可能因为需求变更、人力变动等原因进行调整。OpenStack SIG欢迎更多开发者、厂商参与,共同完善openEuler的OpenStack支持。

+

● - 已支持 +○ - 规划中/开发中 +▲ - 部分openEuler版本支持

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
QueensRockyTrainUssuriVictoriaWallabyXenaYogaAntelope
openEuler 20.03 LTS SP1
openEuler 20.03 LTS SP2
openEuler 20.03 LTS SP3
openEuler 20.03 LTS SP4
openEuler 21.03
openEuler 21.09
openEuler 22.03 LTS
openEuler 22.03 LTS SP1
openEuler 22.03 LTS SP2
openEuler 22.03 LTS SP3
openEuler 22.03 LTS SP4
openEuler 22.09
openEuler 24.03 LTS
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
QueensRockyTrainVictoriaWallabyYogaAntelope
Keystone
Glance
Nova
Cinder
Neutron
Tempest
Horizon
Ironic
Placement
Trove
Kolla
Rally
Swift
Heat
Ceilometer
Aodh
Cyborg
Gnocchi
OpenStack-helm
Barbican
Octavia
Designate
Manila
Masakari
Mistral
Senlin
Zaqar
+

Note:

+
    +
  1. openEuler 20.03 LTS SP2不支持Rally
  2. Heat、Ceilometer、Swift、Aodh和Cyborg只在22.03 LTS以上版本支持
  3. Barbican、Octavia、Designate、Manila、Masakari、Mistral、Senlin和Zaqar只在22.03 LTS SP2以上版本支持
+
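如果不确定当前openEuler版本是否提供某个服务的RPM包,可以在目标环境上用类似下面的命令查询(仅为示例,这里以Heat、Barbican为例,具体包名以各版本实际发布为准):

    dnf info openstack-heat openstack-barbican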

oepkg软件仓地址列表

+

Queens、Rocky、Train版本的软件包发布在SIG官方认证的第三方软件平台oepkg上:

+
    +
  • +

    20.03-LTS-SP1 Train: https://repo.oepkgs.net/openEuler/rpm/openEuler-20.03-LTS-SP1/contrib/openstack/train/

    +

    该Train版本不是纯原生代码,包含了智能网卡支持的相关代码,用户使用前请自行评估

    +
  • +
  • +

    20.03-LTS-SP2 Rocky: https://repo.oepkgs.net/openEuler/rpm/openEuler-20.03-LTS-SP2/budding-openeuler/openstack/rocky/

    +
  • +
  • +

    20.03-LTS-SP3 Rocky: https://repo.oepkgs.net/openEuler/rpm/openEuler-20.03-LTS-SP3/budding-openeuler/openstack/rocky/

    +
  • +
  • +

    20.03-LTS-SP2 Queens: https://repo.oepkgs.net/openEuler/rpm/openEuler-20.03-LTS-SP2/budding-openeuler/openstack/queens/

    +
  • +
+

另外,20.03-LTS-SP1虽然有Queens、Rocky版本的软件包,但未经过验证,请谨慎使用:

+
    +
  • +

    20.03-LTS-SP1 Queens: https://repo.oepkgs.net/openEuler/rpm/openEuler-20.03-LTS-SP1/contrib/openstack/queens/

    +
  • +
  • +

    20.03-LTS-SP1 Rocky: https://repo.oepkgs.net/openEuler/rpm/openEuler-20.03-LTS-SP1/contrib/openstack/rocky/

    +
  • +
+
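以20.03-LTS-SP2的Queens源为例,可以按如下方式把上述oepkg仓库配置为yum源(repo文件名与仓库名仅为示例,可按需调整;其他版本只需替换baseurl):

    cat << 'EOF' >> /etc/yum.repos.d/OpenStack_Queens.repo
    [openstack_queens]
    name=OpenStack_Queens
    baseurl=https://repo.oepkgs.net/openEuler/rpm/openEuler-20.03-LTS-SP2/budding-openeuler/openstack/queens/$basearch/
    gpgcheck=0
    enabled=1
    EOF

    yum clean all && yum makecache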

Maintainer的加入和退出

+

秉承开源开放的理念,OpenStack SIG在maintainer成员的管理方面也有一定的规范和要求。

+

如何成为maintainer

+

maintainer作为SIG的直接负责人,拥有代码合入、路标规划、提名maintainer等方面的权利,同时也有软件质量看护、版本开发的义务。如果您想成为OpenStack SIG的一名maintainer,需要满足以下几点要求:

+
    +
  1. 持续参与OpenStack SIG开发贡献,不小于一个openEuler release周期(一般为3个月)
  2. 持续参与OpenStack SIG代码检视,review排名应不低于SIG平均水平
  3. 定期参加OpenStack SIG例会(一般为双周一次),一个openEuler release周期一般包括6次例会,缺席次数应不大于2次
+

加分项:

+
    +
  1. 积极参加OpenStack SIG组织的各种活动,比如线上分享、线下meetup或峰会等。
  2. 帮助SIG扩展运营范围,进行联合技术创新,例如主动开源新项目,吸引新的开发者、厂商加入SIG等。
+

SIG maintainer每个季度会组织闭门会议,审视当前贡献数据;若贡献者满足相关要求,经讨论达成一致,且贡献者本人愿意担任maintainer一职,SIG会向openEuler TC提出相关申请。

+

活跃maintainer

+

参考Apache基金会等社区,结合SIG具体情况,引入活跃maintainer机制。

+

对于无法保持长期高活跃,但愿意继续承担SIG责任的maintainer,maintainer角色保留。

+

非高活跃maintainer责任与权限:

+
    +
  • 保持SIG动态跟进,参与SIG重大事务。
  • +
  • 参与SIG决策。活跃maintainer对SIG事务决策具备更高权重,意见相左时以活跃maintainer的意见为准。
  • +
  • 不具备提名权限。
  • +
+

活跃maintainer在SIG主页列表中被列出。

+

当SIG maintainer因为自身原因,无法保持长期高活跃时,可主动申请退出高活跃状态。SIG maintainer每半年例行审视当前maintainer列表,更新活跃列表。

+

maintainer的退出

+

当SIG maintainer因为自身原因(工作变动、业务调整等),无法再担任maintainer一职时,可主动申请退出。 +SIG maintainer每年也会例行审视当前maintainer列表,如果发现不再适合担任maintainer的贡献者(无法保障参与等原因),经讨论达成一致后,会向openEuler TC提出相关申请。

+

活跃Maintainer

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
姓名Gitee ID邮箱公司
郑挺tzing_tzhengting13@huawei.com华为
王东兴desert-sailordongxing.wang_a@thundersoft.com创达奥思维
王静Accessacwangjing@uniontech.com统信软件
+

Maintainer/Committer列表

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
姓名Gitee ID邮箱公司
陈硕joec88joseph.chn1988@gmail.com中国联通
李昆山likshli_kunshan@163.com中国联通
黄填华huangtianhuahuangtianhua223@gmail.com华为
王玺源xiyuanwangwangxiyuan1007@gmail.com华为
张帆zh-fzh.f@outlook.com中国电信
张迎zhangy1317zhangy1317@foxmail.com中国联通
韩光宇han-guangyuhanguangyu@uniontech.com统信软件
王东兴desert-sailordongxing.wang_a@thundersoft.com创达奥思维
郑挺tzing_tzhengting13@huawei.com华为
王静Accessacwangjing@uniontech.com统信软件
+

如何贡献

+

OpenStack SIG秉承OpenStack社区4个Open原则(Open source、Open Design、Open Development、Open Community),欢迎开发者、用户、厂商以各种开源方式参与SIG贡献,包括但不限于:

+
    +
  1. 提交Issue + 如果您在使用OpenStack时遇到了任何问题,可以向SIG提交ISSUE,包括不限于使用疑问、软件包BUG、特性需求等等。
  2. +
  3. 参与技术讨论 + 通过邮件列表、微信群、在线例会等方式,与SIG成员实时讨论OpenStack技术。
  4. +
  5. 参与SIG的软件开发测试工作
      +
    1. OpenStack SIG跟随openEuler版本开发的节奏,每几个月对外发布不同版本的OpenStack,每个版本包含了几百个RPM软件包,开发者可以参与到这些RPM包的开发工作中。
    2. +
    3. OpenStack SIG包括一些来自厂商捐献、自主研发的项目,开发者可以参与相关项目的开发工作。
    4. +
    5. openEuler新版本发布后,用户可以测试试用对应的OpenStack,相关BUG和问题可以提交到SIG。
    6. +
    7. OpenStack SIG还提供了一系列提高开发效率的工具和文档,用户可以帮忙优化、完善。
    8. +
    +
  6. +
  7. 技术预研、联合创新 + OpenStack SIG欢迎各种形式的联合创新,邀请各位开发者以开源的方式、以SIG为平台,创造属于国人的云计算新技术。如果您有idea或开发意愿,欢迎加入SIG。
  8. +
+

当然,贡献形式不仅包含这些,其他任何与OpenStack相关、与开源相关的事务都可以带到SIG中。OpenStack SIG欢迎您的参与。

+

项目清单

+

SIG包含的全部项目:https://gitee.com/openeuler/openstack/blob/master/tools/oos/etc/openeuler_sig_repo.yaml

+

OpenStack包含项目众多,为了方便管理,设置了统一入口项目,用户、开发者对OpenStack SIG以及各OpenStack子项目有任何问题,可以在该项目中提交Issue。

+ +

SIG同时联合各大厂商、开发者,创建了一系列自研项目:

+ +

交流群

+

添加小助手回复"加群"进入openEuler sig-OpenStack交流群
+assistant

+ +
+
+ +
+
+ +
+ +
+ +
+ + + + + 下一章 » + + +
+ + + + + + + + + + + diff --git a/site/install/devstack/index.html b/site/install/devstack/index.html new file mode 100644 index 0000000000000000000000000000000000000000..5e37e54b9dff830a5a76397d9b869fbdbbba59b8 --- /dev/null +++ b/site/install/devstack/index.html @@ -0,0 +1,349 @@ + + + + + + + + devstack - OpenStack SIG Doc + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • + + +
  • +
  • +
+
+
+
+
+ +

使用Devstack安装OpenStack

+ +

目前OpenStack原生Devstack项目已经支持在openEuler上安装OpenStack,其中openEuler 20.03 LTS SP2已经过验证,并且有上游官方CI保证质量。其他版本的openEuler需要用户自行测试(2022-04-25 openEuler master分支已验证)。

+

安装步骤

+

准备一个openEuler环境, 20.03 LTS SP2虚拟机镜像地址, master虚拟机镜像地址

+
    +
  1. +

    配置yum源

    +

    openEuler 20.03 LTS SP2

    +

    openEuler官方源中缺少了一些OpenStack需要的RPM包,因此需要先配上OpenStack SIG在oepkg中准备好的RPM源

    +
    vi /etc/yum.repos.d/openeuler.repo
    +
    +[openstack]
    +name=openstack
    +baseurl=https://repo.oepkgs.net/openEuler/rpm/openEuler-20.03-LTS-SP2/budding-openeuler/openstack-master-ci/aarch64/
    +enabled=1
    +gpgcheck=0
    +

    openEuler master:

    +

    使用master的RPM源:

    +
    vi /etc/yum.repos.d/openeuler.repo
    +
    +[mainline]
    +name=mainline
    +baseurl=http://119.3.219.20:82/openEuler:/Mainline/standard_aarch64/
    +gpgcheck=false
    +
    +[epol]
    +name=epol
    +baseurl=http://119.3.219.20:82/openEuler:/Epol/standard_aarch64/
    +gpgcheck=false
    +
  2. +
  3. +

    前期准备

    +

    openEuler 20.03 LTS SP2

    +

    在一些版本的openEuler官方镜像的默认源中,EPOL-update的URL可能配置不正确,需要修改

    +
    vi /etc/yum.repos.d/openEuler.repo
    +
    +# 把[EPOL-UPDATE]URL改成
    +baseurl=http://repo.openeuler.org/openEuler-20.03-LTS-SP2/EPOL/update/main/$basearch/
    +

    openEuler master:

    +
    yum remove python3-pip # 系统的pip与devstack pip冲突,需要先删除
    +# master的虚机环境缺少了一些依赖,devstack不会自动安装,需要手动安装
    +yum install iptables tar wget python3-devel httpd-devel iscsi-initiator-utils libvirt python3-libvirt qemu memcached
    +
  4. +
  5. +

    下载devstack

    +
    yum update
    +yum install git
    +cd /opt/
    +git clone https://opendev.org/openstack/devstack.git
    +
  6. +
  7. +

    初始化devstack环境配置

    +
    # 创建stack用户
    +/opt/devstack/tools/create-stack-user.sh
    +# 修改目录权限
    +chown -R stack:stack /opt/devstack
    +chmod -R 755 /opt/devstack
    +chmod -R 755 /opt/stack
    +# 切换到要部署的openstack版本分支,以yoga为例,不切换的话,默认安装的是master版本的openstack
    +cd /opt/devstack
    +git checkout stable/yoga
    +
  8. +
  9. +

    初始化devstack配置文件

    +
    # 切换到stack用户
    +su stack
    +# 此时,请确认stack用户的PATH环境变量是否包含了/usr/sbin,如果没有,则需要执行
    +PATH=$PATH:/usr/sbin
    +# 新增配置文件
    +vi /opt/devstack/local.conf
    +
    +[[local|localrc]]
    +DATABASE_PASSWORD=root
    +RABBIT_PASSWORD=root
    +SERVICE_PASSWORD=root
    +ADMIN_PASSWORD=root
    +OVN_BUILD_FROM_SOURCE=True
    +

    openEuler没有提供OVN的RPM软件包,因此需要配置OVN_BUILD_FROM_SOURCE=True, 从源码编译OVN

    +

    另外如果使用的是arm64虚拟机环境,则需要配置libvirt嵌套虚拟化,在local.conf中追加如下配置:

    +
    [[post-config|$NOVA_CONF]]
    +[libvirt]
    +cpu_mode=custom
    +cpu_model=cortex-a72
    +

    如果安装Ironic,需要提前安装依赖:

    +
    sudo dnf install syslinux-nonlinux
    +

    openEuler master的特殊配置: 由于devstack还没有适配最新的openEuler,我们需要手动修复一些问题:

    +
      +
    1. +

      修改devstack源码

      +

      vi /opt/devstack/tools/fixup_stuff.sh
      +# 把fixup_openeuler方法中的所有echo语句删掉,即删除如下内容:
      +(echo '[openstack-ci]'
      +echo 'name=openstack'
      +echo 'baseurl=https://repo.oepkgs.net/openEuler/rpm/openEuler-20.03-LTS-SP2/budding-openeuler/openstack-master-ci/'$arch'/'
      +echo 'enabled=1'
      +echo 'gpgcheck=0') | sudo tee -a /etc/yum.repos.d/openstack-master.repo > /dev/null
      + 2. 修改requirements源码

      +

      Yoga版keystone依赖的setproctitle在devstack默认的依赖约束(upper-constraints)中版本过低,不支持python3.10,需要升级;可手动下载requirements项目并修改:

      cd /opt/stack
      +git clone https://opendev.org/openstack/requirements --branch stable/yoga
      +vi /opt/stack/requirements/upper-constraints.txt
      +setproctitle===1.2.3

      +
    2. +
    3. +

      OpenStack horizon有BUG,无法正常安装。这里我们暂时不安装horizon,修改local.conf,新增一行:

      +
      [[local|localrc]]
      +disable_service horizon
      +

      如果确实有对horizon的需求,则需要解决以下问题:

      +
      # 1. horizon依赖的pyScss默认为1.3.7版本,不支持python3.10
      +# 解决方法:需要提前clone`requirements`项目并修改代码
      +vi /opt/stack/requirements/upper-constraints.txt
      +pyScss===1.4.0
      +
      +# 2. horizon依赖httpd的mod_wsgi插件,但目前openEuler的mod_wsgi构建异常(2022-04-25)(解决后yum install mod_wsgi即可),无法从yum安装
      +# 解决方法:手动源码build mod_wsgi并配置,该过程较复杂,这里略过
      +
    4. +
    5. +

      dstat服务依赖的pcp-system-tools构建异常(2022-04-25)(解决后yum install pcp-system-tools即可),无法从yum安装,暂时先不安装dstat

      +
      [[local|localrc]]
      +disable_service dstat
      +
    6. +
    +
  10. +
  11. +

    部署OpenStack

    +

    进入devstack目录,以stack用户执行./stack.sh,等待OpenStack完成安装部署(完整执行示例见本节末尾)。

    +
  12. +
+ +
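完整的执行方式大致如下(假设devstack已按前述步骤克隆到/opt/devstack并完成配置,仅供参考):

    su stack
    cd /opt/devstack
    ./stack.sh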
+
+ +
+
+ +
+ +
+ +
+ + + + « 上一章 + + + 下一章 » + + +
+ + + + + + + + + diff --git a/site/install/openEuler-20.03-LTS-SP2/OpenStack-queens/index.html b/site/install/openEuler-20.03-LTS-SP2/OpenStack-queens/index.html new file mode 100644 index 0000000000000000000000000000000000000000..f1da2a661a5008a083626f63507a3b5b8765e8c0 --- /dev/null +++ b/site/install/openEuler-20.03-LTS-SP2/OpenStack-queens/index.html @@ -0,0 +1,1733 @@ + + + + + + + + openEuler-20.03-LTS-SP2_Queens - OpenStack SIG Doc + + + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • + + +
  • +
  • +
+
+
+
+
+ +

OpenStack-Queens 部署指南

+ +

OpenStack 简介

+

OpenStack 是一个社区,也是一个项目。它提供了一个部署云的操作平台或工具集,为组织提供可扩展的、灵活的云计算。

+

作为一个开源的云计算管理平台,OpenStack 由nova、cinder、neutron、glance、keystone、horizon等几个主要的组件组合起来完成具体工作。OpenStack 支持几乎所有类型的云环境,项目目标是提供实施简单、可大规模扩展、丰富、标准统一的云计算管理平台。OpenStack 通过各种互补的服务提供了基础设施即服务(IaaS)的解决方案,每个服务提供 API 进行集成。

+

openEuler 20.03-LTS-SP2 版本官方认证的第三方oepkg yum 源已经支持 Openstack-Queens 版本,用户可以配置好oepkg yum 源后根据此文档进行 OpenStack 部署。

+

约定

+

OpenStack 支持多种部署形态,此文档支持ALL in One以及Distributed两种部署方式,按照如下方式约定:

+

ALL in One模式:

+
忽略所有可能的后缀
+

Distributed模式:

+
以 `(CTL)` 为后缀表示此条配置或者命令仅适用`控制节点`
+以 `(CPT)` 为后缀表示此条配置或者命令仅适用`计算节点`
+除此之外表示此条配置或者命令同时适用`控制节点`和`计算节点`
+

注意

+

涉及到以上约定的服务如下:

+
    +
  • Cinder
  • +
  • Nova
  • +
  • Neutron
  • +
+
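例如,下文“环境配置”一节中设置主机名的命令就采用了该约定:

    hostnamectl set-hostname controller                                                            (CTL)
    hostnamectl set-hostname compute                                                               (CPT)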

准备环境

+

环境配置

+
    +
  1. +

    配置 20.03-LTS-SP2 官方认证的第三方源 oepkg

    +
    # 注意:用引号包裹EOF,避免$basearch在写入repo文件前被shell展开
    cat << 'EOF' >> /etc/yum.repos.d/OpenStack_Queens.repo
    +[openstack_queens]
    +name=OpenStack_Queens
    +baseurl=https://repo.oepkgs.net/openEuler/rpm/openEuler-20.03-LTS-SP2/budding-openeuler/openstack/queens/$basearch/
    +gpgcheck=0
    +enabled=1
    +EOF
    +
    +yum clean all && yum makecache
    +
  2. +
  3. +

    修改主机名以及映射

    +

    设置各个节点的主机名

    +
    hostnamectl set-hostname controller                                                            (CTL)
    +hostnamectl set-hostname compute                                                               (CPT)
    +

    假设controller节点的IP是10.0.0.11,compute节点的IP是10.0.0.12(如果存在的话),则在/etc/hosts中新增如下内容:

    +
    10.0.0.11   controller
    +10.0.0.12   compute
    +
  4. +
+

安装 SQL DataBase

+
    +
  1. +

    执行如下命令,安装软件包。

    +
    yum install mariadb mariadb-server python2-PyMySQL
    +
  2. +
  3. +

    执行如下命令,创建并编辑 /etc/my.cnf.d/openstack.cnf 文件。

    +
    vim /etc/my.cnf.d/openstack.cnf
    +
    +[mysqld]
    +bind-address = 10.0.0.11
    +default-storage-engine = innodb
    +innodb_file_per_table = on
    +max_connections = 4096
    +collation-server = utf8_general_ci
    +character-set-server = utf8
    +

    注意

    +

    其中 bind-address 设置为控制节点的管理IP地址。

    +
  4. +
  5. +

    启动 DataBase 服务,并为其配置开机自启动:

    +
    systemctl enable mariadb.service
    +systemctl start mariadb.service
    +
  6. +
  7. +

    配置DataBase的默认密码(可选)

    +
    mysql_secure_installation
    +

    注意

    +

    根据提示进行即可

    +
  8. +
+
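(可选)数据库启动后,可以用如下命令简单确认服务已在控制节点的管理IP上监听并可登录(仅为示例,IP以实际环境为准):

    ss -tnlp | grep 3306
    mysql -h 10.0.0.11 -u root -p -e "SHOW DATABASES;"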

安装 RabbitMQ

+
    +
  1. +

    执行如下命令,安装软件包。

    +
    yum install rabbitmq-server
    +
  2. +
  3. +

    启动 RabbitMQ 服务,并为其配置开机自启动。

    +
    systemctl enable rabbitmq-server.service
    +systemctl start rabbitmq-server.service
    +
  4. +
  5. +

    添加 OpenStack用户。

    +
    rabbitmqctl add_user openstack RABBIT_PASS
    +

    注意

    +

    替换 RABBIT_PASS,为 OpenStack 用户设置密码

    +
  6. +
  7. +

    设置openstack用户权限,允许进行配置、写、读:

    +
    rabbitmqctl set_permissions openstack ".*" ".*" ".*"
    +
  8. +
+
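(可选)可以用如下命令确认openstack用户及其权限已正确配置:

    rabbitmqctl list_users
    rabbitmqctl list_permissions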

安装 Memcached

+
    +
  1. +

    执行如下命令,安装依赖软件包。

    +
    yum install memcached python2-memcached
    +
  2. +
  3. +

    编辑 /etc/sysconfig/memcached 文件。

    +
    vim /etc/sysconfig/memcached
    +
    +OPTIONS="-l 127.0.0.1,::1,controller"
    +
  4. +
  5. +

    执行如下命令,启动 Memcached 服务,并为其配置开机启动。

    +

    systemctl enable memcached.service
    +systemctl start memcached.service

    服务启动后,可以通过命令 memcached-tool controller stats 确认服务启动正常、可用,其中可以将 controller 替换为控制节点的管理IP地址(示例见本节末尾)。

    +
  6. +
+
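例如,按照上文的配置,可以直接使用主机名controller(或控制节点的管理IP)进行检查:

    memcached-tool controller stats
    memcached-tool 10.0.0.11 stats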

安装 OpenStack

+

Keystone 安装

+
    +
  1. +

    创建 keystone 数据库并授权。

    +
    mysql -u root -p
    +
    +MariaDB [(none)]> CREATE DATABASE keystone;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
    +IDENTIFIED BY 'KEYSTONE_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
    +IDENTIFIED BY 'KEYSTONE_DBPASS';
    +MariaDB [(none)]> exit
    +

    注意

    +

    替换 KEYSTONE_DBPASS,为 Keystone 数据库设置密码

    +
  2. +
  3. +

    安装软件包。

    +
    yum install openstack-keystone httpd python2-mod_wsgi
    +
  4. +
  5. +

    配置keystone相关配置

    +
    vim /etc/keystone/keystone.conf
    +
    +[database]
    +connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone
    +
    +[token]
    +provider = fernet
    +

    解释

    +

    [database]部分,配置数据库入口

    +

    [token]部分,配置token provider

    +

    注意:

    +

    替换 KEYSTONE_DBPASS 为 Keystone 数据库的密码

    +
  6. +
  7. +

    同步数据库。

    +
    su -s /bin/sh -c "keystone-manage db_sync" keystone
    +
  8. +
  9. +

    初始化Fernet密钥仓库。

    +
    keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
    +keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
    +
  10. +
  11. +

    启动服务。

    +
    keystone-manage bootstrap --bootstrap-password ADMIN_PASS \
    +--bootstrap-admin-url http://controller:5000/v3/ \
    +--bootstrap-internal-url http://controller:5000/v3/ \
    +--bootstrap-public-url http://controller:5000/v3/ \
    +--bootstrap-region-id RegionOne
    +

    注意

    +

    替换 ADMIN_PASS,为 admin 用户设置密码

    +
  12. +
  13. +

    配置Apache HTTP server

    +
    vim /etc/httpd/conf/httpd.conf
    +
    +ServerName controller
    +
    ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
    +

    解释

    +

    配置 ServerName 项引用控制节点

    +

    注意 +如果 ServerName 项不存在则需要创建

    +
  14. +
  15. +

    启动Apache HTTP服务。

    +
    systemctl enable httpd.service
    +systemctl start httpd.service
    +
  16. +
  17. +

    创建环境变量配置。

    +
    cat << EOF >> ~/.admin-openrc
    +export OS_PROJECT_DOMAIN_NAME=Default
    +export OS_USER_DOMAIN_NAME=Default
    +export OS_PROJECT_NAME=admin
    +export OS_USERNAME=admin
    +export OS_PASSWORD=ADMIN_PASS
    +export OS_AUTH_URL=http://controller:5000/v3
    +export OS_IDENTITY_API_VERSION=3
    +export OS_IMAGE_API_VERSION=2
    +EOF
    +

    注意

    +

    替换 ADMIN_PASS 为 admin 用户的密码

    +
  18. +
  19. +

    依次创建domain, projects, users, roles,需要先安装好python2-openstackclient:

    +
    yum install python2-openstackclient
    +

    导入环境变量

    +
    source ~/.admin-openrc
    +

    创建project service,其中 domain default 在 keystone-manage bootstrap 时已创建

    +
    openstack domain create --description "An Example Domain" example
    +
    openstack project create --domain default --description "Service Project" service
    +

    创建(non-admin)project myproject、user myuser 和 role myrole,并为 myproject 项目中的 myuser 用户添加角色 myrole

    +
    openstack project create --domain default --description "Demo Project" myproject
    +openstack user create --domain default --password-prompt myuser
    +openstack role create myrole
    +openstack role add --project myproject --user myuser myrole
    +
  20. +
  21. +

    验证

    +

    取消临时环境变量OS_AUTH_URL和OS_PASSWORD:

    +
    source ~/.admin-openrc
    +unset OS_AUTH_URL OS_PASSWORD
    +

    为admin用户请求token:

    +
    openstack --os-auth-url http://controller:5000/v3 \
    +--os-project-domain-name Default --os-user-domain-name Default \
    +--os-project-name admin --os-username admin token issue
    +

    为myuser用户请求token:

    +
    openstack --os-auth-url http://controller:5000/v3 \
    +--os-project-domain-name Default --os-user-domain-name Default \
    +--os-project-name myproject --os-username myuser token issue
    +
  22. +
+
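(可选)完成上述验证后,还可以列出keystone-manage bootstrap阶段创建的服务端点,进一步确认Keystone工作正常:

    source ~/.admin-openrc
    openstack endpoint list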

Glance 安装

+
    +
  1. +

    创建数据库、服务凭证和 API 端点

    +

    创建数据库:

    +
    mysql -u root -p
    +
    +MariaDB [(none)]> CREATE DATABASE glance;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
    +IDENTIFIED BY 'GLANCE_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
    +IDENTIFIED BY 'GLANCE_DBPASS';
    +MariaDB [(none)]> exit
    +

    Note: replace GLANCE_DBPASS with the password to set for the glance database.

    Create the service credentials:

    +
    source ~/.admin-openrc
    +
    +openstack user create --domain default --password-prompt glance
    +openstack role add --project service --user glance admin
    +openstack service create --name glance --description "OpenStack Image" image
    +

    Create the Image service API endpoints:

    +
    openstack endpoint create --region RegionOne image public http://controller:9292
    +openstack endpoint create --region RegionOne image internal http://controller:9292
    +openstack endpoint create --region RegionOne image admin http://controller:9292
    +
    Install the packages:

    yum install openstack-glance
    Configure Glance:

    +
    vim /etc/glance/glance-api.conf
    +
    +[database]
    +connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
    +
    +[keystone_authtoken]
    +www_authenticate_uri  = http://controller:5000
    +auth_url = http://controller:5000
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +project_name = service
    +username = glance
    +password = GLANCE_PASS
    +
    +[paste_deploy]
    +flavor = keystone
    +
    +[glance_store]
    +stores = file,http
    +default_store = file
    +filesystem_store_datadir = /var/lib/glance/images/
    +
    vim /etc/glance/glance-registry.conf
    +
    +[database]
    +connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
    +
    +[keystone_authtoken]
    +www_authenticate_uri  = http://controller:5000
    +auth_url = http://controller:5000
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +project_name = service
    +username = glance
    +password = GLANCE_PASS
    +
    +[paste_deploy]
    +flavor = keystone
    +
    +[glance_store]
    +stores = file,http
    +default_store = file
    +filesystem_store_datadir = /var/lib/glance/images/
    +

    Explanation:

    In the [database] section, configure the database connection.

    In the [keystone_authtoken] and [paste_deploy] sections, configure the Identity service access.

    In the [glance_store] section, configure the local filesystem store and the location of image files.

    Note:

    Replace GLANCE_DBPASS with the password of the glance database.

    Replace GLANCE_PASS with the password of the glance user.
    Synchronize the database:

    su -s /bin/sh -c "glance-manage db_sync" glance
    Start the services:

    systemctl enable openstack-glance-api.service openstack-glance-registry.service
    systemctl start openstack-glance-api.service openstack-glance-registry.service
    Verification

    Download a test image:

    source ~/.admin-openrc

    wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img

    Note: if your environment runs on the Kunpeng (arm64) architecture, download the arm64 version of the image instead.

    Upload the image to the Image service:

    openstack image create --disk-format qcow2 --container-format bare \
                           --file cirros-0.4.0-x86_64-disk.img --public cirros

    Confirm the upload and check the image properties (see also the sketch below):

    openstack image list
    +
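    To inspect a single image in more detail, a minimal sketch (the image name cirros matches the upload step above):

    ```shell
    # Show the properties of the uploaded image; status should be "active"
    openstack image show cirros
    ```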
Nova Installation

    Create the database, service credentials, and API endpoints.

    Create the database:

    +
    mysql -u root -p                                                                               (CPT)
    +
    +MariaDB [(none)]> CREATE DATABASE nova_api;
    +MariaDB [(none)]> CREATE DATABASE nova;
    +MariaDB [(none)]> CREATE DATABASE nova_cell0;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> exit
    +

    Note: replace NOVA_DBPASS with the password to set for the nova database.

    source ~/.admin-openrc                                                                         (CPT)

    Create the nova service credentials:

    +
    openstack user create --domain default --password-prompt nova                                  (CPT)
    +openstack role add --project service --user nova admin                                         (CPT)
    +openstack service create --name nova --description "OpenStack Compute" compute                 (CPT)
    +

    Create the placement service credentials:

    +
    openstack user create --domain default --password-prompt placement                             (CPT)
    +openstack role add --project service --user placement admin                                    (CPT)
    +openstack service create --name placement --description "Placement API" placement              (CPT)
    +

    Create the Compute service API endpoints:

    +
    openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1        (CPT)
    +openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1      (CPT)
    +openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1         (CPT)
    +

    Create the Placement API endpoints:

    +
    openstack endpoint create --region RegionOne placement public http://controller:8778           (CPT)
    +openstack endpoint create --region RegionOne placement internal http://controller:8778         (CPT)
    +openstack endpoint create --region RegionOne placement admin http://controller:8778            (CPT)
    +
    Install the packages:

    yum install openstack-nova-api openstack-nova-conductor openstack-nova-console \
    openstack-nova-novncproxy openstack-nova-scheduler openstack-nova-placement-api                (CTL)

    yum install openstack-nova-compute                                                             (CPT)

    Note: on the arm64 architecture, the following command is also required:

    +
    yum install edk2-aarch64                                                                       (CPT)
    +
    Configure Nova:

    +
    vim /etc/nova/nova.conf
    +
    +[DEFAULT]
    +enabled_apis = osapi_compute,metadata
    +transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
    +my_ip = 10.0.0.1
    +use_neutron = true
    +firewall_driver = nova.virt.firewall.NoopFirewallDriver
    +compute_driver=libvirt.LibvirtDriver                                                           (CPT)
    +instances_path = /var/lib/nova/instances/                                                      (CPT)
    +lock_path = /var/lib/nova/tmp                                                                  (CPT)
    +
    +[api_database]
    +connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api                              (CTL)
    +
    +[database]
    +connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova                                  (CTL)
    +
    +[api]
    +auth_strategy = keystone
    +
    +[keystone_authtoken]
    +www_authenticate_uri = http://controller:5000/
    +auth_url = http://controller:5000/
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +project_name = service
    +username = nova
    +password = NOVA_PASS
    +
    +[vnc]
    +enabled = true
    +server_listen = $my_ip
    +server_proxyclient_address = $my_ip
    +novncproxy_base_url = http://controller:6080/vnc_auto.html                                     (CPT)
    +
    +[libvirt]
    +virt_type = qemu                                                                               (CPT)
    +cpu_mode = custom                                                                              (CPT)
    +cpu_model = cortex-a7                                                                          (CPT)
    +
    +[glance]
    +api_servers = http://controller:9292
    +
    +[oslo_concurrency]
    +lock_path = /var/lib/nova/tmp                                                                  (CTL)
    +
    +[placement]
    +region_name = RegionOne
    +project_domain_name = Default
    +project_name = service
    +auth_type = password
    +user_domain_name = Default
    +auth_url = http://controller:5000/v3
    +username = placement
    +password = PLACEMENT_PASS
    +
    +[neutron]
    +auth_url = http://controller:5000
    +auth_type = password
    +project_domain_name = default
    +user_domain_name = default
    +region_name = RegionOne
    +project_name = service
    +username = neutron
    +password = NEUTRON_PASS
    +service_metadata_proxy = true                                                                  (CTL)
    +metadata_proxy_shared_secret = METADATA_SECRET                                                 (CTL)
    +

    Explanation:

    In the [DEFAULT] section, enable the compute and metadata APIs, configure the RabbitMQ message queue entry, configure my_ip, and enable the Networking service (neutron).

    In the [api_database] and [database] sections, configure the database connections.

    In the [api] and [keystone_authtoken] sections, configure the Identity service access.

    In the [vnc] section, enable and configure the remote console access.

    In the [glance] section, configure the Image service API address.

    In the [oslo_concurrency] section, configure the lock path.

    In the [placement] section, configure the Placement service access.

    Note:

    Replace RABBIT_PASS with the password of the openstack account in RabbitMQ.

    Set my_ip to the management IP address of the controller node.

    Replace NOVA_DBPASS with the password of the nova database.

    Replace NOVA_PASS with the password of the nova user.

    Replace PLACEMENT_PASS with the password of the placement user.

    Replace NEUTRON_PASS with the password of the neutron user.

    Replace METADATA_SECRET with a suitable metadata proxy secret.

    Additionally, add the Placement API access configuration manually:

    +
    vim /etc/httpd/conf.d/00-nova-placement-api.conf                                               (CTL)
    +
    +<Directory /usr/bin>
    +   <IfVersion >= 2.4>
    +      Require all granted
    +   </IfVersion>
    +   <IfVersion < 2.4>
    +      Order allow,deny
    +      Allow from all
    +   </IfVersion>
    +</Directory>
    +

    Restart the httpd service:

    +
    systemctl restart httpd                                                                        (CTL)
    +

    Determine whether the hardware supports virtual machine acceleration (x86 architecture):

    +
    egrep -c '(vmx|svm)' /proc/cpuinfo                                                             (CPT)
    +

    If the command returns 0, hardware acceleration is not supported and libvirt must be configured to use QEMU instead of KVM:

    +
    vim /etc/nova/nova.conf                                                                        (CPT)
    +
    +[libvirt]
    +virt_type = qemu
    +

    If the command returns 1 or greater, hardware acceleration is supported and no extra configuration is needed.

    Note: on the arm64 architecture, the following commands are also required:

    +
    mkdir -p /usr/share/AAVMF
    +chown nova:nova /usr/share/AAVMF
    +
    +ln -s /usr/share/edk2/aarch64/QEMU_EFI-pflash.raw \
    +      /usr/share/AAVMF/AAVMF_CODE.fd                                                           (CPT)
    +ln -s /usr/share/edk2/aarch64/vars-template-pflash.raw \
    +      /usr/share/AAVMF/AAVMF_VARS.fd                                                           (CPT)
    +
    +vim /etc/libvirt/qemu.conf
    +
    +nvram = ["/usr/share/AAVMF/AAVMF_CODE.fd: \
    +         /usr/share/AAVMF/AAVMF_VARS.fd", \
    +         "/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw: \
    +         /usr/share/edk2/aarch64/vars-template-pflash.raw"]                                    (CPT)
    +
    Synchronize the databases.

    Synchronize the nova-api database:

    +
    su -s /bin/sh -c "nova-manage api_db sync" nova                                                (CTL)
    +

    Register the cell0 database:

    +
    su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova                                          (CTL)
    +

    Create the cell1 cell:

    +
    su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova                 (CTL)
    +

    Synchronize the nova database:

    +
    su -s /bin/sh -c "nova-manage db sync" nova                                                    (CTL)
    +

    Verify that cell0 and cell1 are registered correctly:

    +
    su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova                                         (CTL)
    +

    Add the compute node to the OpenStack cluster (see the optional discovery-interval sketch below):

    +
    su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova                           (CPT)
    +
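    Optionally, nova-scheduler can discover new compute hosts on its own instead of requiring the command above after every addition. A minimal sketch of this setting in /etc/nova/nova.conf on the controller (the 300-second interval is only an illustrative value):

    ```shell
    vim /etc/nova/nova.conf                                                                        (CTL)

    [scheduler]
    discover_hosts_in_cells_interval = 300
    ```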
    Start the services:

    +
    systemctl enable \                                                                             (CTL)
    +openstack-nova-api.service \
    +openstack-nova-consoleauth.service \
    +openstack-nova-scheduler.service \
    +openstack-nova-conductor.service \
    +openstack-nova-novncproxy.service
    +
    +systemctl start \                                                                              (CTL)
    +openstack-nova-api.service \
    +openstack-nova-consoleauth.service \
    +openstack-nova-scheduler.service \
    +openstack-nova-conductor.service \
    +openstack-nova-novncproxy.service
    +
    systemctl enable libvirtd.service openstack-nova-compute.service                               (CPT)
    +systemctl start libvirtd.service openstack-nova-compute.service                                (CPT)
    +
    Verification

    source ~/.admin-openrc                                                                         (CTL)

    List the service components to verify that each process started and registered successfully:

    +
    openstack compute service list                                                                 (CTL)
    +

    List the API endpoints in the Identity service to verify connectivity to the Identity service:

    +
    openstack catalog list                                                                         (CTL)
    +

    List the images in the Image service to verify connectivity to the Image service:

    +
    openstack image list                                                                            (CTL)
    +

    Check whether the cells and the Placement API are working correctly and whether the other prerequisites are in place:

    +
    nova-status upgrade check                                                                       (CTL)
    +
Neutron Installation

    Create the database, service credentials, and API endpoints.

    Create the database:

    +
    mysql -u root -p                                                                               (CTL)
    +
    +MariaDB [(none)]> CREATE DATABASE neutron;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
    +IDENTIFIED BY 'NEUTRON_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
    +IDENTIFIED BY 'NEUTRON_DBPASS';
    +MariaDB [(none)]> exit
    +

    Note: replace NEUTRON_DBPASS with the password to set for the neutron database.

    source ~/.admin-openrc                                                                         (CTL)

    Create the neutron service credentials:

    +
    openstack user create --domain default --password-prompt neutron                               (CTL)
    +openstack role add --project service --user neutron admin                                      (CTL)
    +openstack service create --name neutron --description "OpenStack Networking" network           (CTL)
    +

    Create the Networking service API endpoints:

    +
    openstack endpoint create --region RegionOne network public http://controller:9696             (CTL)
    +openstack endpoint create --region RegionOne network internal http://controller:9696           (CTL)
    +openstack endpoint create --region RegionOne network admin http://controller:9696              (CTL)
    +
    Install the packages:

    +
    yum install openstack-neutron openstack-neutron-linuxbridge-agent ebtables ipset \             (CTL)
    +            openstack-neutron-l3-agent openstack-neutron-dhcp-agent \
    +            openstack-neutron-metadata-agent
    +
    yum install openstack-neutron-linuxbridge-agent ebtables ipset                                       (CPT)
    +
    Configure Neutron.

    Configure the main body:

    +
    vim /etc/neutron/neutron.conf
    +
    +[database]
    +connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron                         (CTL)
    +
    +[DEFAULT]
    +core_plugin = ml2                                                                              (CTL)
    +service_plugins = router                                                                       (CTL)
    +allow_overlapping_ips = true                                                                   (CTL)
    +transport_url = rabbit://openstack:RABBIT_PASS@controller
    +auth_strategy = keystone
    +notify_nova_on_port_status_changes = true                                                      (CTL)
    +notify_nova_on_port_data_changes = true                                                        (CTL)
    +api_workers = 3                                                                                (CTL)
    +
    +[keystone_authtoken]
    +www_authenticate_uri = http://controller:5000
    +auth_url = http://controller:5000
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +project_name = service
    +username = neutron
    +password = NEUTRON_PASS
    +
    +[nova]
    +auth_url = http://controller:5000                                                              (CTL)
    +auth_type = password                                                                           (CTL)
    +project_domain_name = Default                                                                  (CTL)
    +user_domain_name = Default                                                                     (CTL)
    +region_name = RegionOne                                                                        (CTL)
    +project_name = service                                                                         (CTL)
    +username = nova                                                                                (CTL)
    +password = NOVA_PASS                                                                           (CTL)
    +
    +[oslo_concurrency]
    +lock_path = /var/lib/neutron/tmp
    +

    Explanation:

    In the [database] section, configure the database connection.

    In the [DEFAULT] section, enable the ML2 and router plug-ins, allow overlapping IP addresses, and configure the RabbitMQ message queue entry.

    In the [DEFAULT] and [keystone_authtoken] sections, configure the Identity service access.

    In the [DEFAULT] and [nova] sections, configure Networking to notify Compute of network topology changes.

    In the [oslo_concurrency] section, configure the lock path.

    Note:

    Replace NEUTRON_DBPASS with the password of the neutron database.

    Replace RABBIT_PASS with the password of the openstack account in RabbitMQ.

    Replace NEUTRON_PASS with the password of the neutron user.

    Replace NOVA_PASS with the password of the nova user.

    +

    Configure the ML2 plug-in:

    +
    vim /etc/neutron/plugins/ml2/ml2_conf.ini                                                      (CTL)
    +
    +[ml2]
    +type_drivers = flat,vlan,vxlan
    +tenant_network_types = vxlan
    +mechanism_drivers = linuxbridge,l2population
    +extension_drivers = port_security
    +
    +[ml2_type_flat]
    +flat_networks = provider
    +
    +[ml2_type_vxlan]
    +vni_ranges = 1:1000
    +
    +[securitygroup]
    +enable_ipset = true
    +

    Create a symbolic link to /etc/neutron/plugin.ini:

    +
    ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
    +

    Note:

    In the [ml2] section, enable flat, VLAN, and VXLAN networks, enable the linuxbridge and l2population mechanisms, and enable the port security extension driver.

    In the [ml2_type_flat] section, configure the flat network as the provider virtual network.

    In the [ml2_type_vxlan] section, configure the VXLAN network identifier range.

    In the [securitygroup] section, enable ipset.

    Supplement: the L2 configuration can be adjusted to the user's needs; this document uses a provider network with linuxbridge.

    +

    Configure the Linux bridge agent:

    +
    vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
    +
    +[linux_bridge]
    +physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME
    +
    +[vxlan]
    +enable_vxlan = true
    +local_ip = OVERLAY_INTERFACE_IP_ADDRESS
    +l2_population = true
    +
    +[securitygroup]
    +enable_security_group = true
    +firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
    +

    Explanation:

    In the [linux_bridge] section, map the provider virtual network to the physical network interface.

    In the [vxlan] section, enable the VXLAN overlay network, configure the IP address of the physical interface that handles overlay traffic, and enable layer-2 population.

    In the [securitygroup] section, enable security groups and configure the Linux bridge iptables firewall driver.

    Note:

    Replace PROVIDER_INTERFACE_NAME with the name of the physical network interface.

    Replace OVERLAY_INTERFACE_IP_ADDRESS with the management IP address of the controller node.

    +

    Configure the Layer-3 agent:

    +
    vim /etc/neutron/l3_agent.ini                                                                   (CTL)
    +
    +[DEFAULT]
    +interface_driver = linuxbridge
    +

    Explanation: in the [DEFAULT] section, set the interface driver to linuxbridge.

    Configure the DHCP agent:

    +
    vim /etc/neutron/dhcp_agent.ini                                                                (CTL)
    +
    +[DEFAULT]
    +interface_driver = linuxbridge
    +dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
    +enable_isolated_metadata = true
    +

    Explanation: in the [DEFAULT] section, configure the linuxbridge interface driver and the Dnsmasq DHCP driver, and enable isolated metadata.

    Configure the metadata agent:

    +
    vim /etc/neutron/metadata_agent.ini                                                            (CTL)
    +
    +[DEFAULT]
    +nova_metadata_host = controller
    +metadata_proxy_shared_secret = METADATA_SECRET
    +

    Explanation: in the [DEFAULT] section, configure the metadata host and the shared secret.

    Note: replace METADATA_SECRET with a suitable metadata proxy secret.

    +
    Configure the related Nova options:

    +
    vim /etc/nova/nova.conf
    +
    +[neutron]
    +auth_url = http://controller:5000
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +region_name = RegionOne
    +project_name = service
    +username = neutron
    +password = NEUTRON_PASS
    +service_metadata_proxy = true                                                                  (CTL)
    +metadata_proxy_shared_secret = METADATA_SECRET                                                 (CTL)
    +

    Explanation: in the [neutron] section, configure the access parameters, enable the metadata proxy, and configure the secret.

    Note:

    Replace NEUTRON_PASS with the password of the neutron user.

    Replace METADATA_SECRET with a suitable metadata proxy secret.

    +
    Synchronize the database:

    +
    su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
    +--config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
    +
    Restart the Compute API service:

    +
    systemctl restart openstack-nova-api.service
    +
    Start the Networking services:

    +
    systemctl enable openstack-neutron-server.service \                                            (CTL)
    +openstack-neutron-linuxbridge-agent.service openstack-neutron-dhcp-agent.service \
    +openstack-neutron-metadata-agent.service openstack-neutron-l3-agent.service
    +systemctl restart openstack-nova-api.service openstack-neutron-server.service \               (CTL)
    +openstack-neutron-linuxbridge-agent.service openstack-neutron-dhcp-agent.service \
    +openstack-neutron-metadata-agent.service openstack-neutron-l3-agent.service
    +
    +systemctl enable openstack-neutron-linuxbridge-agent.service                                   (CPT)
    +systemctl restart openstack-neutron-linuxbridge-agent.service openstack-nova-compute.service   (CPT)
    +
    Verification (see the provider network sketch below for a further check)

    List the agents to verify that the neutron agents started successfully:

    +
    openstack network agent list
    +
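    As a further end-to-end check, you can create a provider network and subnet. This is a minimal sketch: the physical network name provider matches the ML2 configuration above, while the 203.0.113.0/24 range, gateway, and DNS server are placeholder values to adjust to the actual environment.

    ```shell
    source ~/.admin-openrc

    # Flat provider network mapped to the physical network "provider"
    openstack network create --share --external \
      --provider-physical-network provider \
      --provider-network-type flat provider

    # Subnet on the provider network (placeholder addressing)
    openstack subnet create --network provider \
      --allocation-pool start=203.0.113.101,end=203.0.113.250 \
      --dns-nameserver 8.8.8.8 --gateway 203.0.113.1 \
      --subnet-range 203.0.113.0/24 provider
    ```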
Cinder Installation

    Create the database, service credentials, and API endpoints.

    Create the database:

    +
    mysql -u root -p
    +
    +MariaDB [(none)]> CREATE DATABASE cinder;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \
    +IDENTIFIED BY 'CINDER_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \
    +IDENTIFIED BY 'CINDER_DBPASS';
    +MariaDB [(none)]> exit
    +

    Note: replace CINDER_DBPASS with the password to set for the cinder database.

    source ~/.admin-openrc

    Create the cinder service credentials:

    +
    openstack user create --domain default --password-prompt cinder
    +openstack role add --project service --user cinder admin
    +openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
    +openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
    +

    Create the Block Storage service API endpoints:

    +
    openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(project_id\)s
    +openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(project_id\)s
    +openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(project_id\)s
    +openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s
    +openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s
    +openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s
    +
    Install the packages:

    +
    yum install openstack-cinder-api openstack-cinder-scheduler                                    (CTL)
    +
    yum install lvm2 device-mapper-persistent-data scsi-target-utils rpcbind nfs-utils \           (CPT)
    +            openstack-cinder-volume openstack-cinder-backup
    +
    Prepare the storage device (the following is only an example):

    +
    pvcreate /dev/vdb
    +vgcreate cinder-volumes /dev/vdb
    +
    +vim /etc/lvm/lvm.conf
    +
    +
    +devices {
    +...
    +filter = [ "a/vdb/", "r/.*/"]
    +

    Explanation: in the devices section, add a filter that accepts the /dev/vdb device and rejects all other devices.

    Prepare NFS:

    +
    mkdir -p /root/cinder/backup
    +
    +cat << EOF >> /etc/exports
    +/root/cinder/backup 192.168.1.0/24(rw,sync,no_root_squash,no_all_squash)
    +EOF
    +
    +
    Configure Cinder:

    +
    vim /etc/cinder/cinder.conf
    +
    +[DEFAULT]
    +transport_url = rabbit://openstack:RABBIT_PASS@controller
    +auth_strategy = keystone
    +my_ip = 10.0.0.11
    +enabled_backends = lvm                                                                         (CPT)
    +backup_driver=cinder.backup.drivers.nfs.NFSBackupDriver                                        (CPT)
    +backup_share=HOST:PATH                                                                         (CPT)
    +
    +[database]
    +connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder
    +
    +[keystone_authtoken]
    +www_authenticate_uri = http://controller:5000
    +auth_url = http://controller:5000
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +project_name = service
    +username = cinder
    +password = CINDER_PASS
    +
    +[oslo_concurrency]
    +lock_path = /var/lib/cinder/tmp
    +
    +[lvm]
    +volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver                                      (CPT)
    +volume_group = cinder-volumes                                                                  (CPT)
    +iscsi_protocol = iscsi                                                                         (CPT)
    +iscsi_helper = tgtadm                                                                          (CPT)
    +

    Explanation:

    In the [database] section, configure the database connection.

    In the [DEFAULT] section, configure the RabbitMQ message queue entry and my_ip.

    In the [DEFAULT] and [keystone_authtoken] sections, configure the Identity service access.

    In the [oslo_concurrency] section, configure the lock path.

    Note:

    Replace CINDER_DBPASS with the password of the cinder database.

    Replace RABBIT_PASS with the password of the openstack account in RabbitMQ.

    Set my_ip to the management IP address of the controller node.

    Replace CINDER_PASS with the password of the cinder user.

    Replace HOST:PATH with the NFS host IP and the shared path.

    +
    Synchronize the database:

    +
    su -s /bin/sh -c "cinder-manage db sync" cinder                                                (CTL)
    +
    Configure Nova:

    +
    vim /etc/nova/nova.conf                                                                        (CTL)
    +
    +[cinder]
    +os_region_name = RegionOne
    +
    Restart the Compute API service:

    +
    systemctl restart openstack-nova-api.service
    +
    Start the Cinder services:

    +
    systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service               (CTL)
    +systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service                (CTL)
    +
    systemctl enable rpcbind.service nfs-server.service tgtd.service iscsid.service \              (CPT)
    +                 openstack-cinder-volume.service \
    +                 openstack-cinder-backup.service
    +systemctl start rpcbind.service nfs-server.service tgtd.service iscsid.service \               (CPT)
    +                openstack-cinder-volume.service \
    +                openstack-cinder-backup.service
    +

    Note: when Cinder attaches volumes via tgtadm, modify /etc/tgt/tgtd.conf as follows so that tgtd can discover the iSCSI targets of cinder-volume:

    +
    include /var/lib/cinder/volumes/*
    +
    Verification (see the test-volume sketch below):

    +
    source ~/.admin-openrc
    +openstack volume service list
    +
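    To confirm that volumes can actually be created, a minimal sketch (the name test-volume and the 1 GB size are only examples):

    ```shell
    openstack volume create --size 1 test-volume
    openstack volume list
    ```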
Horizon Installation

    Install the packages:

    yum install openstack-dashboard

    Modify the variables:
    vim /etc/openstack-dashboard/local_settings
    +
    +ALLOWED_HOSTS = ['*', ]
    +OPENSTACK_HOST = "controller"
    +OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
    +
    Restart the httpd service:

    +
    systemctl restart httpd
    +
    Verification: open a browser, go to http://HOSTIP/dashboard/, and log in to Horizon.

    Note: replace HOSTIP with the management-plane IP address of the controller node.

    +
Tempest Installation

Tempest is the OpenStack integration test suite. It is recommended if you want comprehensive automated functional testing of the installed OpenStack environment; otherwise it can be skipped.

    Install Tempest:

    +
    yum install openstack-tempest
    +
    Initialize a working directory:

    +
    tempest init mytest
    +
    Modify the configuration file:

    cd mytest
    vi etc/tempest.conf

    tempest.conf must be filled in with the details of the current OpenStack environment; refer to the official sample for the full set of options. A minimal sketch follows below.

    +
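    A minimal sketch of the sections that usually need to be filled in (all values below are placeholders for the current environment; IMAGE_UUID and FLAVOR_ID must refer to an existing image and flavor):

    ```shell
    vi etc/tempest.conf

    [auth]
    admin_username = admin
    admin_password = ADMIN_PASS
    admin_project_name = admin
    admin_domain_name = Default

    [identity]
    uri_v3 = http://controller:5000/v3

    [compute]
    image_ref = IMAGE_UUID
    flavor_ref = FLAVOR_ID
    ```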
    Run the tests:

    +
    tempest run
    +
Ironic Installation

Ironic is the OpenStack bare metal provisioning service. It is recommended if you need to deploy bare-metal machines; otherwise it can be skipped.

1. Set up the database

The Bare Metal service stores information in a database. Create an ironic database that the ironic user can access, replacing IRONIC_DBPASSWORD with a suitable password:

+
mysql -u root -p
+
+MariaDB [(none)]> CREATE DATABASE ironic CHARACTER SET utf8;
+MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'localhost' \
+IDENTIFIED BY 'IRONIC_DBPASSWORD';
+MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'%' \
+IDENTIFIED BY 'IRONIC_DBPASSWORD';
+
2. Create the service user and credentials

(1) Create the Bare Metal service user:

+
openstack user create --password IRONIC_PASSWORD \
+                      --email ironic@example.com ironic
+openstack role add --project service --user ironic admin
+openstack service create --name ironic \
+                         --description "Ironic baremetal provisioning service" baremetal
+
+openstack service create --name ironic-inspector --description     "Ironic inspector baremetal provisioning service" baremetal-introspection
+openstack user create --password IRONIC_INSPECTOR_PASSWORD --email ironic_inspector@example.com ironic_inspector
+openstack role add --project service --user ironic-inspector admin
+

(2) Create the Bare Metal service endpoints:

+
openstack endpoint create --region RegionOne baremetal admin http://$IRONIC_NODE:6385
+openstack endpoint create --region RegionOne baremetal public http://$IRONIC_NODE:6385
+openstack endpoint create --region RegionOne baremetal internal http://$IRONIC_NODE:6385
+openstack endpoint create --region RegionOne baremetal-introspection internal http://172.20.19.13:5050/v1
+openstack endpoint create --region RegionOne baremetal-introspection public http://172.20.19.13:5050/v1
+openstack endpoint create --region RegionOne baremetal-introspection admin http://172.20.19.13:5050/v1
+
3. Configure the ironic-api service

Configuration file path: /etc/ironic/ironic.conf

(1) Configure the database location through the connection option as shown below. Replace IRONIC_DBPASSWORD with the password of the ironic user and DB_IP with the IP address of the database server:

+
[database]
+
+# The SQLAlchemy connection string used to connect to the
+# database (string value)
+
+connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic
+

(2) Configure the ironic-api service to use the RabbitMQ message broker through the following option, replacing RPC_* with the RabbitMQ address details and credentials:

+
[DEFAULT]
+
+# A URL representing the messaging driver to use and its full
+# configuration. (string value)
+
+transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
+

Alternatively, json-rpc can be used instead of RabbitMQ.

+

(3) Configure the ironic-api service to use the Identity service credentials. Replace PUBLIC_IDENTITY_IP with the public IP of the Identity server, PRIVATE_IDENTITY_IP with the private IP of the Identity server, and IRONIC_PASSWORD with the password of the ironic user in the Identity service:

+
[DEFAULT]
+
+# Authentication strategy used by ironic-api: one of
+# "keystone" or "noauth". "noauth" should not be used in a
+# production environment because all authentication will be
+# disabled. (string value)
+
+auth_strategy=keystone
+
+[keystone_authtoken]
+# Authentication type to load (string value)
+auth_type=password
+# Complete public Identity API endpoint (string value)
+www_authenticate_uri=http://PUBLIC_IDENTITY_IP:5000
+# Complete admin Identity API endpoint. (string value)
+auth_url=http://PRIVATE_IDENTITY_IP:5000
+# Service username. (string value)
+username=ironic
+# Service account password. (string value)
+password=IRONIC_PASSWORD
+# Service tenant name. (string value)
+project_name=service
+# Domain name containing project (string value)
+project_domain_name=Default
+# User's domain name (string value)
+user_domain_name=Default
+

(4) Create the Bare Metal service database tables:

+
ironic-dbsync --config-file /etc/ironic/ironic.conf create_schema
+

(5) Restart the ironic-api service:

+
sudo systemctl restart openstack-ironic-api
+
4. Configure the ironic-conductor service

(1) Replace HOST_IP with the IP of the conductor host:

+
[DEFAULT]
+
+# IP address of this host. If unset, will determine the IP
+# programmatically. If unable to do so, will use "127.0.0.1".
+# (string value)
+
+my_ip=HOST_IP
+

(2) Configure the database location; ironic-conductor should use the same configuration as ironic-api. Replace IRONIC_DBPASSWORD with the password of the ironic user and DB_IP with the IP address of the database server:

+
[database]
+
+# The SQLAlchemy connection string to use to connect to the
+# database. (string value)
+
+connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic
+

(3) Configure the RabbitMQ message broker through the following option; ironic-conductor should use the same configuration as ironic-api. Replace RPC_* with the RabbitMQ address details and credentials:

+
[DEFAULT]
+
+# A URL representing the messaging driver to use and its full
+# configuration. (string value)
+
+transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
+

Alternatively, json-rpc can be used instead of RabbitMQ.

+

(4) Configure credentials for accessing other OpenStack services.

To communicate with other OpenStack services, the Bare Metal service needs service user credentials to authenticate with the Identity service when making requests. These credentials must be configured in each configuration section associated with the corresponding service:

+
[neutron] - access the OpenStack Networking service
+[glance] - access the OpenStack Image service
+[swift] - access the OpenStack Object Storage service
+[cinder] - access the OpenStack Block Storage service
+[inspector] - access the OpenStack bare metal introspection service
+[service_catalog] - a special entry holding the credentials the Bare Metal service uses to discover its own API URL endpoint as registered in the Identity service catalog
+

For simplicity, the same service user can be used for all services. For backward compatibility, it should be the same user configured in [keystone_authtoken] for the ironic-api service. This is not mandatory, however; a different service user can be created and configured for each service.

+

In the following example, the authentication information for accessing the OpenStack Networking service is configured as follows:

The Networking service is deployed in the Identity service region named RegionOne, and only the public endpoint interface is registered in the service catalog.

Requests use a specific CA SSL certificate for HTTPS connections.

The same service user as configured for ironic-api is used.

The dynamic password authentication plugin discovers a suitable Identity service API version based on the other options.
+
[neutron]
+
+# Authentication type to load (string value)
+auth_type = password
+# Authentication URL (string value)
+auth_url=https://IDENTITY_IP:5000/
+# Username (string value)
+username=ironic
+# User's password (string value)
+password=IRONIC_PASSWORD
+# Project name to scope to (string value)
+project_name=service
+# Domain ID containing project (string value)
+project_domain_id=default
+# User's domain id (string value)
+user_domain_id=default
+# PEM encoded Certificate Authority to use when verifying
+# HTTPs connections. (string value)
+cafile=/opt/stack/data/ca-bundle.pem
+# The default region_name for endpoint URL discovery. (string
+# value)
+region_name = RegionOne
+# List of interfaces, in order of preference, for endpoint
+# URL. (list value)
+valid_interfaces=public
+

By default, to communicate with another service, the Bare Metal service tries to discover a suitable endpoint for that service through the Identity service catalog. To use a different endpoint for a particular service, specify it with the endpoint_override option in the Bare Metal service configuration:

+
[neutron] ... endpoint_override = <NEUTRON_API_ADDRESS>
+

(5) Configure the enabled drivers and hardware types.

Set the hardware types allowed by the ironic-conductor service via enabled_hardware_types:

+
[DEFAULT] enabled_hardware_types = ipmi
+

Configure the hardware interfaces:

+
enabled_boot_interfaces = pxe enabled_deploy_interfaces = direct,iscsi enabled_inspect_interfaces = inspector enabled_management_interfaces = ipmitool enabled_power_interfaces = ipmitool
+

Configure the interface defaults:

+
[DEFAULT] default_deploy_interface = direct default_network_interface = neutron
+

If any driver that uses direct deploy is enabled, the Swift backend of the Image service must be installed and configured. The Ceph Object Gateway (RADOS Gateway) is also supported as an Image service backend.

+

(6) Restart the ironic-conductor service (a verification sketch follows below):

+
sudo systemctl restart openstack-ironic-conductor
+
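A minimal sketch to confirm that the conductor registered its enabled hardware types (assuming the openstack baremetal CLI, provided by python-ironicclient, is installed):

```shell
source ~/.admin-openrc
openstack baremetal driver list
```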
5. Configure the ironic-inspector service

Configuration file path: /etc/ironic-inspector/inspector.conf

(1) Create the database:

+
# mysql -u root -p
+
+MariaDB [(none)]> CREATE DATABASE ironic_inspector CHARACTER SET utf8;
+
+MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic_inspector.* TO 'ironic_inspector'@'localhost' \     IDENTIFIED BY 'IRONIC_INSPECTOR_DBPASSWORD';
+MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic_inspector.* TO 'ironic_inspector'@'%' \
+IDENTIFIED BY 'IRONIC_INSPECTOR_DBPASSWORD';
+

(2) Configure the database location through the connection option as shown below. Replace IRONIC_INSPECTOR_DBPASSWORD with the password of the ironic_inspector user and DB_IP with the IP address of the database server:

+
[database]
+backend = sqlalchemy
+connection = mysql+pymysql://ironic_inspector:IRONIC_INSPECTOR_DBPASSWORD@DB_IP/ironic_inspector
+

(3) Configure the message queue address:

+
[DEFAULT] transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
+

(4) Configure Keystone authentication:

+
[DEFAULT]
+
+auth_strategy = keystone
+
+[ironic]
+
+api_endpoint = http://IRONIC_API_HOST_ADDRRESS:6385
+auth_type = password
+auth_url = http://PUBLIC_IDENTITY_IP:5000
+auth_strategy = keystone
+ironic_url = http://IRONIC_API_HOST_ADDRRESS:6385
+os_region = RegionOne
+project_name = service
+project_domain_name = Default
+user_domain_name = Default
+username = IRONIC_SERVICE_USER_NAME
+password = IRONIC_SERVICE_USER_PASSWORD
+

(5) Configure the ironic-inspector dnsmasq service:

+
# Configuration file: /etc/ironic-inspector/dnsmasq.conf
+port=0
+interface=enp3s0                         # replace with the actual listening network interface
+dhcp-range=172.20.19.100,172.20.19.110   # replace with the actual DHCP address range
+bind-interfaces
+enable-tftp
+
+dhcp-match=set:efi,option:client-arch,7
+dhcp-match=set:efi,option:client-arch,9
+dhcp-match=aarch64, option:client-arch,11
+dhcp-boot=tag:aarch64,grubaa64.efi
+dhcp-boot=tag:!aarch64,tag:efi,grubx64.efi
+dhcp-boot=tag:!aarch64,tag:!efi,pxelinux.0
+
+tftp-root=/tftpboot                       # replace with the actual tftpboot directory
+log-facility=/var/log/dnsmasq.log
+

(6) Start the services:

+
systemctl enable --now openstack-ironic-inspector.service
+systemctl enable --now openstack-ironic-inspector-dnsmasq.service
+
6. Build the deploy ramdisk image

The Queens ramdisk image can be built with the ironic-python-agent service or the disk-image-builder tool, or with the community's latest ironic-python-agent-builder. Users may also choose other tools. If the native Queens tools are used, install the corresponding packages:

+

yum install openstack-ironic-python-agent
+or
+yum install diskimage-builder
+
+Refer to the official documentation for detailed usage.

+

The following describes the complete process of building the deploy image used by Ironic with ironic-python-agent-builder.

+
    +
  1. +

    Install ironic-python-agent-builder

    +
    1. Install the tool:
    +
    +    ```shell
    +    pip install ironic-python-agent-builder
    +    ```
    +
    +2. Modify the python interpreter in the following files:
    +
    +    ```shell
    +    /usr/bin/yum /usr/libexec/urlgrabber-ext-down
    +    ```
    +
    +3. Install the other required tools:
    +
    +    ```shell
    +    yum install git
    +    ```
    +
    +    Because `DIB` depends on the `semanage` command, confirm that it is available before building the image: `semanage --help`. If the command is missing, install the package that provides it:
    +
    +    ```shell
    +    # 先查询需要安装哪个包
    +    [root@localhost ~]# yum provides /usr/sbin/semanage
    +    已加载插件:fastestmirror
    +    Loading mirror speeds from cached hostfile
    +    * base: mirror.vcu.edu
    +    * extras: mirror.vcu.edu
    +    * updates: mirror.math.princeton.edu
    +    policycoreutils-python-2.5-34.el7.aarch64 : SELinux policy core python utilities
    +    源    :base
    +    匹配来源:
    +    文件名    :/usr/sbin/semanage
    +    # 安装
    +    [root@localhost ~]# yum install policycoreutils-python
    +    ```
    +
    Build the image

    If the architecture is `arm`, add:
    +```shell
    +export ARCH=aarch64
    +```
    +
    +Basic usage:
    +
    +```shell
    +usage: ironic-python-agent-builder [-h] [-r RELEASE] [-o OUTPUT] [-e ELEMENT]
    +                                    [-b BRANCH] [-v] [--extra-args EXTRA_ARGS]
    +                                    distribution
    +
    +positional arguments:
    +    distribution          Distribution to use
    +
    +optional arguments:
    +    -h, --help            show this help message and exit
    +    -r RELEASE, --release RELEASE
    +                        Distribution release to use
    +    -o OUTPUT, --output OUTPUT
    +                        Output base file name
    +    -e ELEMENT, --element ELEMENT
    +                        Additional DIB element to use
    +    -b BRANCH, --branch BRANCH
    +                        If set, override the branch that is used for ironic-
    +                        python-agent and requirements
    +    -v, --verbose         Enable verbose logging in diskimage-builder
    +    --extra-args EXTRA_ARGS
    +                        Extra arguments to pass to diskimage-builder
    +```
    +
    +Example:
    +
    +```shell
    +ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky
    +```
    +
    Allow SSH login

    Initialize the environment variables, then build the image:
    +
    +```shell
    +export DIB_DEV_USER_USERNAME=ipa
    +export DIB_DEV_USER_PWDLESS_SUDO=yes
    +export DIB_DEV_USER_PASSWORD='123'
    +ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky -e selinux-permissive -e devuser
    +```
    +
    Specify the code repository

    Initialize the corresponding environment variables, then build the image:
    +
    +```shell
    +# Specify the repository location and version
    +DIB_REPOLOCATION_ironic_python_agent=git@172.20.2.149:liuzz/ironic-python-agent.git
    +DIB_REPOREF_ironic_python_agent=origin/develop
    +
    +# Clone the code directly from gerrit
    +DIB_REPOLOCATION_ironic_python_agent=https://review.opendev.org/openstack/ironic-python-agent
    +DIB_REPOREF_ironic_python_agent=refs/changes/43/701043/1
    +```
    +
    +Reference: [source-repositories](https://docs.openstack.org/diskimage-builder/latest/elements/source-repositories/README.html).
    +
    +Specifying the repository location and version in this way has been verified to work.
    +
Kolla Installation

Kolla provides production-ready containerized deployment for OpenStack services. openEuler 20.03 LTS SP2 introduces the Kolla and kolla-ansible packages.

+

Installing Kolla is simple: just install the corresponding RPM packages:

+
yum install openstack-kolla openstack-kolla-ansible
+

After installation, the kolla-ansible, kolla-build, kolla-genpwd, and kolla-mergepwd commands are available; a minimal usage sketch follows below.

+

Trove Installation

+

Trove is the OpenStack Database service. It is recommended if you want to use the database service provided by OpenStack; otherwise it can be skipped.

+
1. Set up the database

The Database service stores information in a database. Create a trove database that the trove user can access, replacing TROVE_DBPASSWORD with a suitable password:

+
mysql -u root -p
+
+MariaDB [(none)]> CREATE DATABASE trove CHARACTER SET utf8;
+MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'localhost' \
+IDENTIFIED BY 'TROVE_DBPASSWORD';
+MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'%' \
+IDENTIFIED BY 'TROVE_DBPASSWORD';
+
2. Create the service user and credentials

(1) Create the Trove service user:

+

openstack user create --password TROVE_PASSWORD \
+                      --email trove@example.com trove
+openstack role add --project service --user trove admin
+openstack service create --name trove \
+                         --description "Database service" database
+ Explanation: replace TROVE_PASSWORD with the password of the trove user.

+

(2) Create the Database service endpoints:

+

openstack endpoint create --region RegionOne database public http://$TROVE_NODE:8779/v1.0/%\(tenant_id\)s
+openstack endpoint create --region RegionOne database internal http://$TROVE_NODE:8779/v1.0/%\(tenant_id\)s
+openstack endpoint create --region RegionOne database admin http://$TROVE_NODE:8779/v1.0/%\(tenant_id\)s
+ Explanation: replace $TROVE_NODE with the node where the Trove API service is deployed.

+
3. Install and configure the Trove components

(1) Install the Trove packages:

yum install openstack-trove python-troveclient

(2) Configure trove.conf:

vim /etc/trove/trove.conf
    +
    +[DEFAULT]
    +bind_host=TROVE_NODE_IP
    +log_dir = /var/log/trove
    +
    +auth_strategy = keystone
    +# Config option for showing the IP address that nova doles out
    +add_addresses = True
    +network_label_regex = ^NETWORK_LABEL$
    +api_paste_config = /etc/trove/api-paste.ini
    +
    +trove_auth_url = http://controller:35357/v3/
    +nova_compute_url = http://controller:8774/v2
    +cinder_url = http://controller:8776/v1
    +
    +nova_proxy_admin_user = admin
    +nova_proxy_admin_pass = ADMIN_PASS
    +nova_proxy_admin_tenant_name = service
    +taskmanager_manager = trove.taskmanager.manager.Manager
    +use_nova_server_config_drive = True
    +
    +# Set these if using Neutron Networking
    +network_driver=trove.network.neutron.NeutronDriver
    +network_label_regex=.*
    +
    +
    +transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
    +
    +[database]
    +connection = mysql+pymysql://trove:TROVE_DBPASS@controller/trove
    +
    +[keystone_authtoken]
    +www_authenticate_uri = http://controller:5000/v3/
    +auth_url=http://controller:35357/v3/
    +#auth_uri = http://controller/identity
    +#auth_url = http://controller/identity_admin
    +auth_type = password
    +project_domain_name = default
    +user_domain_name = default
    +project_name = service
    +username = trove
    +password = TROVE_PASS
    +
Explanation:

- In the [DEFAULT] group, set bind_host to the IP of the node where Trove is deployed.
- nova_compute_url and cinder_url are the endpoints created for Nova and Cinder in Keystone.
- nova_proxy_XXX is the information of a user that can access the Nova service; the example above uses the admin user.
- transport_url is the RabbitMQ connection information; replace RABBIT_PASS with the RabbitMQ password.
- In the [database] group, connection is the database information created for Trove in MySQL above.
- In the Trove user information, replace TROVE_PASS with the actual password of the trove user.

(3) Configure trove-taskmanager.conf:

vim /etc/trove/trove-taskmanager.conf

[DEFAULT]
log_dir = /var/log/trove
trove_auth_url = http://controller/identity/v2.0
nova_compute_url = http://controller:8774/v2
cinder_url = http://controller:8776/v1
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/

[database]
connection = mysql+pymysql://trove:TROVE_DBPASS@controller/trove

Explanation: configure it by analogy with trove.conf.
+
+(4) Configure `trove-conductor.conf`:
+```shell script
+vim /etc/trove/trove-conductor.conf
+
+[DEFAULT]
+log_dir = /var/log/trove
+trove_auth_url = http://controller/identity/v2.0
+nova_compute_url = http://controller:8774/v2
+cinder_url = http://controller:8776/v1
+transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
+
+[database]
+connection = mysql+pymysql://trove:TROVE_DBPASS@controller/trove
+ Explanation: configure it by analogy with trove.conf.

+
(5) Configure trove-guestagent.conf:

vim /etc/trove/trove-guestagent.conf

[DEFAULT]
rabbit_host = controller
rabbit_password = RABBIT_PASS
nova_proxy_admin_user = admin
nova_proxy_admin_pass = ADMIN_PASS
nova_proxy_admin_tenant_name = service
trove_auth_url = http://controller/identity_admin/v2.0

Explanation: guestagent is an independent component of Trove that must be pre-installed in the virtual machine images Trove creates through Nova. After a database instance is created, the guestagent process starts and reports heartbeats to Trove through the message queue (RabbitMQ), so the RabbitMQ user and password must be configured here.
    +
(6) Create the Trove database tables:

su -s /bin/sh -c "trove-manage db_sync" trove

(7) Complete the installation and configuration.

Enable the Trove services to start at boot:

systemctl enable openstack-trove-api.service \
openstack-trove-taskmanager.service \
openstack-trove-conductor.service

Start the services (a verification sketch follows below):

systemctl start openstack-trove-api.service \
openstack-trove-taskmanager.service \
openstack-trove-conductor.service
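A minimal sketch to confirm the Database service API responds (assuming python-troveclient is installed as above and the admin credentials are loaded):

```shell
source ~/.admin-openrc
openstack database instance list
```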
diff --git a/site/install/openEuler-20.03-LTS-SP2/OpenStack-rocky/index.html b/site/install/openEuler-20.03-LTS-SP2/OpenStack-rocky/index.html
new file mode 100644

OpenStack-Rocky Deployment Guide

+ +

Introduction to OpenStack

+

OpenStack is both a community and a project. It provides an operating platform and tool set for deploying clouds, offering organizations scalable and flexible cloud computing.

+

As an open source cloud computing management platform, OpenStack combines several major components, such as nova, cinder, neutron, glance, keystone, and horizon, to do its work. OpenStack supports almost every type of cloud environment; the project aims to provide a cloud management platform that is simple to deploy, massively scalable, feature-rich, and standardized. OpenStack delivers an Infrastructure-as-a-Service (IaaS) solution through a set of complementary services, each of which offers an API for integration.

+

The officially certified third-party oepkg yum repository for openEuler 20.03-LTS-SP2 already provides the OpenStack Rocky release. After configuring the oepkg yum repository, users can deploy OpenStack by following this document.

+

Preparing the Environment

+

Configuring the OpenStack yum Repository

+

Configure the officially certified third-party oepkg repository for 20.03-LTS-SP2, using x86_64 as an example (a quick repository check follows after the commands below):

+
$ cat << EOF >> /etc/yum.repos.d/OpenStack_Rocky.repo
+[openstack_rocky]
+name=OpenStack_Rocky
+baseurl=https://repo.oepkgs.net/openEuler/rpm/openEuler-20.03-LTS-SP2/budding-openeuler/openstack/rocky/x86_64/
+gpgcheck=0
+enabled=1
+EOF
+
$ yum clean all && yum makecache
+
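To confirm the repository is usable before continuing, a minimal sketch:

```shell
$ yum repolist | grep openstack_rocky
$ yum info openstack-keystone
```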

Environment Configuration

+

Add the controller entry to /etc/hosts. For example, if the node IP is 10.0.0.11, add:

+
10.0.0.11   controller
+

Installing the SQL Database

+
    +
  1. Run the following command to install the packages:

     $ yum install mariadb mariadb-server python2-PyMySQL

  2. Create and edit the /etc/my.cnf.d/openstack.cnf file.

     Copy the following content into the file, setting bind-address to the management IP address of the controller node:

     [mysqld]
     bind-address = 10.0.0.11
     default-storage-engine = innodb
     innodb_file_per_table = on
     max_connections = 4096
     collation-server = utf8_general_ci
     character-set-server = utf8

    +
  3. Start the database service and enable it at boot (optionally harden it afterwards; see the sketch below):

    +
    $ systemctl enable mariadb.service
    +$ systemctl start mariadb.service
    +
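     Optionally harden the database service before creating the OpenStack databases; a minimal sketch using the interactive helper shipped with MariaDB:

     ```shell
     $ mysql_secure_installation
     ```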

Installing RabbitMQ

  1. Run the following command to install the package:

     $ yum install rabbitmq-server
    +
  2. Start the RabbitMQ service and enable it at boot:

     $ systemctl enable rabbitmq-server.service
     $ systemctl start rabbitmq-server.service

  3. Add the openstack user:

     $ rabbitmqctl add_user openstack RABBIT_PASS

     Replace RABBIT_PASS with the password to set for the openstack user.

    +
  4. Set the permissions of the openstack user to allow configure, write, and read access (a verification sketch follows below):

    +
    $ rabbitmqctl set_permissions openstack ".*" ".*" ".*"
    +
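     To confirm the user and permissions took effect, a minimal sketch:

     ```shell
     $ rabbitmqctl list_users
     $ rabbitmqctl list_permissions
     ```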
Installing Memcached

  1. Run the following command to install the packages:

     $ yum install memcached python2-memcached

  2. Edit the /etc/sysconfig/memcached file and add the following:

    +

     OPTIONS="-l 127.0.0.1,::1,controller"

     In OPTIONS, use the management IP address of the controller node in your environment.

  3. Run the following commands to start the Memcached service and enable it at boot (a quick check follows below):

    +
    $ systemctl enable memcached.service
    +$ systemctl start memcached.service
    +
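     A quick check that Memcached is running and listening (memcached-tool ships with the memcached package):

     ```shell
     $ systemctl status memcached.service
     $ memcached-tool controller stats | head
     ```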
Installing OpenStack

Keystone Installation

  1. Access the database as the root user, create the keystone database, and grant privileges:

    +
    $ mysql -u root -p
    +

    MariaDB [(none)]> CREATE DATABASE keystone;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
    +IDENTIFIED BY 'KEYSTONE_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
    +IDENTIFIED BY 'KEYSTONE_DBPASS';
    +MariaDB [(none)]> exit
    Replace KEYSTONE_DBPASS with the password to set for the keystone database.

    +
  2. Run the following command to install the packages:

    +
    $ yum install openstack-keystone httpd python2-mod_wsgi
    +
  3. Configure Keystone by editing the /etc/keystone/keystone.conf file. In the [database] section, configure the database connection; in the [token] section, configure the token provider:

    +

    [database]
    +connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone
    +[token]
    +provider = fernet
    Replace KEYSTONE_DBPASS with the password of the keystone database.

    +
  4. Run the following command to synchronize the database:

    +
    su -s /bin/sh -c "keystone-manage db_sync" keystone
    +
  5. Run the following commands to initialize the Fernet key repositories:

    +
    $ keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
    +$ keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
    +
  6. Run the following command to bootstrap the Identity service:

    +

    $ keystone-manage bootstrap --bootstrap-password ADMIN_PASS \
    +--bootstrap-admin-url http://controller:5000/v3/ \
    +--bootstrap-internal-url http://controller:5000/v3/ \
    +--bootstrap-public-url http://controller:5000/v3/ \
    +--bootstrap-region-id RegionOne
    Replace ADMIN_PASS with the password to set for the admin user.

    +
  7. Edit the /etc/httpd/conf/httpd.conf file to configure the Apache HTTP server:

    +
    $ vim /etc/httpd/conf/httpd.conf
    +

    Set the ServerName option to reference the controller node, as shown below:

    ServerName controller

    If the ServerName entry does not exist, create it.

    +
  8. Run the following command to create a link to the /usr/share/keystone/wsgi-keystone.conf file:

    +
    $ ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
    +
  9. To complete the installation, run the following commands to start the Apache HTTP service:

    +
    $ systemctl enable httpd.service
    +$ systemctl start httpd.service
    +
  10. Install the OpenStack client:

    +
    $ yum install python2-openstackclient
    +
  11. Create the OpenStack client environment script.

      Create the environment variable script for the admin user:

    +
    # vim admin-openrc
    +
    +export OS_PROJECT_DOMAIN_NAME=Default
    +export OS_USER_DOMAIN_NAME=Default
    +export OS_PROJECT_NAME=admin
    +export OS_USERNAME=admin
    +export OS_PASSWORD=ADMIN_PASS
    +export OS_AUTH_URL=http://controller:5000/v3
    +export OS_IDENTITY_API_VERSION=3
    +export OS_IMAGE_API_VERSION=2
    +

    Replace ADMIN_PASS with the password of the admin user, the same one set in the keystone-manage bootstrap command above. Run the script to load the environment variables:

    +
    $ source admin-openrc
    +
  12. Run the following commands to create the domain, projects, users, and roles.

      Create the domain 'example':

    +
    $ openstack domain create --description "An Example Domain" example
    +

    Note: the domain 'default' was already created by keystone-manage bootstrap.

    +

    Create the project 'service':

    +
    $ openstack project create --domain default --description "Service Project" service
    +

    Create the (non-admin) project 'myproject', user 'myuser', and role 'myrole', then add the role 'myrole' to 'myproject' and 'myuser':

    +
    $ openstack project create --domain default --description "Demo Project" myproject
    +$ openstack user create --domain default --password-prompt myuser
    +$ openstack role create myrole
    +$ openstack role add --project myproject --user myuser myrole
    +
  13. Verification.

      Unset the temporary OS_AUTH_URL and OS_PASSWORD environment variables:

    +
    $ unset OS_AUTH_URL OS_PASSWORD
    +

    Request a token for the admin user:

    +
    $ openstack --os-auth-url http://controller:5000/v3 \
    +--os-project-domain-name Default --os-user-domain-name Default \
    +--os-project-name admin --os-username admin token issue
    +

    Request a token for the myuser user:

    +
    $ openstack --os-auth-url http://controller:5000/v3 \
    +--os-project-domain-name Default --os-user-domain-name Default \
    +--os-project-name myproject --os-username myuser token issue
    +
Glance Installation

  1. Create the database, service credentials, and API endpoints.

     Create the database:

     Access the database as the root user, create the glance database, and grant privileges:

    +
    $ mysql -u root -p
    +
    MariaDB [(none)]> CREATE DATABASE glance;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
    +IDENTIFIED BY 'GLANCE_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
    +IDENTIFIED BY 'GLANCE_DBPASS';
    +MariaDB [(none)]> exit
    +

     Replace GLANCE_DBPASS with the password to set for the glance database.

    +
    $ source admin-openrc
    +

     Run the following commands to create the glance service credentials, create the glance user, and add the 'admin' role to the user 'glance':

    +

    $ openstack user create --domain default --password-prompt glance
    +$ openstack role add --project service --user glance admin
    +$ openstack service create --name glance --description "OpenStack Image" image
     Create the Image service API endpoints:

    +
    $ openstack endpoint create --region RegionOne image public http://controller:9292
    +$ openstack endpoint create --region RegionOne image internal http://controller:9292
    +$ openstack endpoint create --region RegionOne image admin http://controller:9292
    +
  2. Installation and configuration.

     Install the packages:

     $ yum install openstack-glance

     Configure Glance:

    +

     Edit the /etc/glance/glance-api.conf file:

     In the [database] section, configure the database connection.

     In the [keystone_authtoken] and [paste_deploy] sections, configure the Identity service access.

     In the [glance_store] section, configure the local filesystem store and the location of image files.

    +
    [database]
    +# ...
    +connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
    +[keystone_authtoken]
    +# ...
    +www_authenticate_uri  = http://controller:5000
    +auth_url = http://controller:5000
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +project_name = service
    +username = glance
    +password = GLANCE_PASS
    +[paste_deploy]
    +# ...
    +flavor = keystone
    +[glance_store]
    +# ...
    +stores = file,http
    +default_store = file
    +filesystem_store_datadir = /var/lib/glance/images/
    +

     Edit the /etc/glance/glance-registry.conf file:

     In the [database] section, configure the database connection.

     In the [keystone_authtoken] and [paste_deploy] sections, configure the Identity service access.

     [database]
     # ...
     connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
     [keystone_authtoken]
     # ...
     www_authenticate_uri = http://controller:5000
     auth_url = http://controller:5000
     memcached_servers = controller:11211
     auth_type = password
     project_domain_name = Default
     user_domain_name = Default
     project_name = service
     username = glance
     password = GLANCE_PASS
     [paste_deploy]
     # ...
     flavor = keystone

     Replace GLANCE_DBPASS with the password of the glance database and GLANCE_PASS with the password of the glance user.

    +

     Synchronize the database:

     $ su -s /bin/sh -c "glance-manage db_sync" glance

     Start the Image services:

    +
    $ systemctl enable openstack-glance-api.service openstack-glance-registry.service
    +$ systemctl start openstack-glance-api.service openstack-glance-registry.service
    +
  4. +
  5. +

    验证

    +

    下载镜像:

    $ source admin-openrc

    注意:如果您使用的环境是鲲鹏架构,请下载arm64版本的镜像(示例见下文)。

    $ wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img

    向Image服务上传镜像:

    $ glance image-create --name "cirros" --file cirros-0.4.0-x86_64-disk.img --disk-format qcow2 --container-format bare --visibility=public

    确认镜像上传并验证属性:

    $ glance image-list

    +
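    对于arm64(鲲鹏)环境,下面给出一个可能的下载示例,仅作示意(假设仍使用cirros 0.4.0,实际文件名请以下载站点提供的为准),上传镜像时将--file参数替换为对应文件即可:

    $ wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-aarch64-disk.img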

    Nova 安装

    +
  6. +
  7. +

    创建数据库、服务凭证和 API 端点

    +

    创建数据库:

    +

    作为root用户访问数据库,创建nova、nova_api、nova_cell0 数据库并授权

    +
    $ mysql -u root -p
    +

    MariaDB [(none)]> CREATE DATABASE nova_api;
    +MariaDB [(none)]> CREATE DATABASE nova;
    +MariaDB [(none)]> CREATE DATABASE nova_cell0;
    +MariaDB [(none)]> CREATE DATABASE placement;
    +
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' \
    +IDENTIFIED BY 'PLACEMENT_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' \
    +IDENTIFIED BY 'PLACEMENT_DBPASS';
    +MariaDB [(none)]> exit
    +替换NOVA_DBPASS及PLACEMENT_DBPASS,为nova及placement数据库设置密码

    +

    执行如下命令,完成创建nova服务凭证、创建nova用户以及添加‘admin’角色到用户‘nova’。

    +
    $ . admin-openrc
    +$ openstack user create --domain default --password-prompt nova
    +$ openstack role add --project service --user nova admin
    +$ openstack service create --name nova --description "OpenStack Compute" compute
    +

    创建计算服务API端点:

    +
    $ openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1
    +$ openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1
    +$ openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1
    +

    创建placement用户并添加‘admin’角色到用户‘placement’: +

    $ openstack user create --domain default --password-prompt placement
    +$ openstack role add --project service --user placement admin

    +

    创建placement服务凭证及API服务端点: +

    $ openstack service create --name placement --description "Placement API" placement
    +$ openstack endpoint create --region RegionOne placement public http://controller:8778
    +$ openstack endpoint create --region RegionOne placement internal http://controller:8778
    +$ openstack endpoint create --region RegionOne placement admin http://controller:8778

    +
  8. +
  9. +

    安装和配置

    +

    安装软件包:

    +
    $ yum install openstack-nova-api openstack-nova-conductor \
    +  openstack-nova-novncproxy openstack-nova-scheduler openstack-nova-compute \
    +  openstack-nova-placement-api openstack-nova-console
    +

    配置nova:

    +

    编辑 /etc/nova/nova.conf 文件:

    +

    在[default]部分,启用计算和元数据的API,配置RabbitMQ消息队列入口,配置my_ip,启用网络服务neutron;

    +

    在[api_database] [database] [placement_database]部分,配置数据库入口;

    +

    在[api] [keystone_authtoken]部分,配置身份认证服务入口;

    +

    在[vnc]部分,启用并配置远程控制台入口;

    +

    在[glance]部分,配置镜像服务API的地址;

    +

    在[oslo_concurrency]部分,配置lock path;

    +

    在[placement]部分,配置placement服务的入口。

    +
    [DEFAULT]
    +# ...
    +enabled_apis = osapi_compute,metadata
    +transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
    +my_ip = 10.0.0.11
    +use_neutron = true
    +firewall_driver = nova.virt.firewall.NoopFirewallDriver
    +compute_driver = libvirt.LibvirtDriver
    +instances_path = /var/lib/nova/instances/
    +[api_database]
    +# ...
    +connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api
    +[database]
    +# ...
    +connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova
    +[placement_database]
    +# ...
    +connection = mysql+pymysql://placement:PLACEMENT_DBPASS@controller/placement
    +[api]
    +# ...
    +auth_strategy = keystone
    +[keystone_authtoken]
    +# ...
    +www_authenticate_uri = http://controller:5000/
    +auth_url = http://controller:5000/
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +project_name = service
    +username = nova
    +password = NOVA_PASS
    +[vnc]
    +enabled = true
    +# ...
    +server_listen = $my_ip
    +server_proxyclient_address = $my_ip
    +novncproxy_base_url = http://controller:6080/vnc_auto.html
    +[glance]
    +# ...
    +api_servers = http://controller:9292
    +[oslo_concurrency]
    +# ...
    +lock_path = /var/lib/nova/tmp
    +[placement]
    +# ...
    +region_name = RegionOne
    +project_domain_name = Default
    +project_name = service
    +auth_type = password
    +user_domain_name = Default
    +auth_url = http://controller:5000/v3
    +username = placement
    +password = PLACEMENT_PASS
    +[neutron]
    +# ...
    +auth_url = http://controller:5000
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +region_name = RegionOne
    +project_name = service
    +username = neutron
    +password = NEUTRON_PASS
    +

    替换RABBIT_PASS为RabbitMQ中openstack账户的密码;

    +

    配置my_ip为控制节点的管理IP地址;

    +

    替换NOVA_DBPASS为nova数据库的密码;

    +

    替换PLACEMENT_DBPASS为placement数据库的密码;

    +

    替换NOVA_PASS为nova用户的密码;

    +

    替换PLACEMENT_PASS为placement用户的密码;

    +

    替换NEUTRON_PASS为neutron用户的密码;

    +

    编辑/etc/httpd/conf.d/00-nova-placement-api.conf,增加Placement API接入配置

    +
    <Directory /usr/bin>
    +   <IfVersion >= 2.4>
    +      Require all granted
    +   </IfVersion>
    +   <IfVersion < 2.4>
    +      Order allow,deny
    +      Allow from all
    +   </IfVersion>
    +</Directory>
    +

    重启httpd服务:

    +
    $ systemctl restart httpd
    +

    同步nova-api数据库:

    +

    $ su -s /bin/sh -c "nova-manage api_db sync" nova
    +注册cell0数据库:

    +

    $ su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
    +创建cell1 cell:

    +

    $ su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
    +同步nova数据库:

    +

    $ su -s /bin/sh -c "nova-manage db sync" nova
    +验证cell0和cell1注册正确:

    +

    su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova
    +确定是否支持虚拟机硬件加速(x86架构):

    +
    $ egrep -c '(vmx|svm)' /proc/cpuinfo
    +

    如果返回值为0则不支持硬件加速,需要配置libvirt使用QEMU而不是KVM。注意:如果是在ARM64的服务器上,还需要配置cpu_mode为custom、cpu_model为cortex-a72,如下所示:

    +

    # vim /etc/nova/nova.conf
    +[libvirt]
    +# ...
    +virt_type = qemu
    +cpu_mode = custom
    +cpu_model = cortex-a72
    +如果返回值为1或更大的值,则支持硬件加速,不需要进行额外的配置

    +

    注意

    +

    如果为arm64结构,还需要在compute节点执行以下命令

    +
    mkdir -p /usr/share/AAVMF
    +ln -s /usr/share/edk2/aarch64/QEMU_EFI-pflash.raw \
    +      /usr/share/AAVMF/AAVMF_CODE.fd
    +ln -s /usr/share/edk2/aarch64/vars-template-pflash.raw \
    +      /usr/share/AAVMF/AAVMF_VARS.fd
    +chown nova:nova /usr/share/AAVMF -R
    +
    +vim /etc/libvirt/qemu.conf
    +
    +nvram = ["/usr/share/AAVMF/AAVMF_CODE.fd:/usr/share/AAVMF/AAVMF_VARS.fd",
    +     "/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw:/usr/share/edk2/aarch64/vars-template-pflash.raw"
    +]
    +

    启动计算服务及其依赖项,并配置其开机启动:

    +

    $ systemctl enable \
    +openstack-nova-api.service \
    +openstack-nova-scheduler.service \
    +openstack-nova-conductor.service \
    +openstack-nova-novncproxy.service
    +$ systemctl start \
    +openstack-nova-api.service \
    +openstack-nova-scheduler.service \
    +openstack-nova-conductor.service \
    +openstack-nova-novncproxy.service
    +
    $ systemctl enable libvirtd.service openstack-nova-compute.service
    +$ systemctl start libvirtd.service openstack-nova-compute.service
    +添加计算节点到cell数据库:

    +

    确认计算节点存在:

    +

    $ . admin-openrc
    +$ openstack compute service list --service nova-compute
    +注册计算节点:

    +
    $ su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
    +
  10. +
  11. +

    验证

    +

    $ . admin-openrc
    +列出服务组件,验证每个流程都成功启动和注册:

    +
    $ openstack compute service list
    +

    列出身份服务中的API端点,验证与身份服务的连接:

    +
    $ openstack catalog list
    +

    列出镜像服务中的镜像,验证与镜像服务的连接:

    +
    $ openstack image list
    +

    检查cells和placement API是否运作成功,以及其他必要条件是否已具备。

    +
    $ nova-status upgrade check
    +

    Neutron 安装

    +
  12. +
  13. +

    创建数据库、服务凭证和 API 端点

    +

    创建数据库:

    +

    作为root用户访问数据库,创建 neutron 数据库并授权。

    +
    $ mysql -u root -p
    +

    MariaDB [(none)]> CREATE DATABASE neutron;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
    +IDENTIFIED BY 'NEUTRON_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
    +IDENTIFIED BY 'NEUTRON_DBPASS';
    +MariaDB [(none)]> exit
    +替换NEUTRON_DBPASS,为neutron数据库设置密码。

    +

    $ . admin-openrc
    +执行如下命令,完成创建 neutron 服务凭证、创建neutron用户和添加‘admin’角色到‘neutron’用户操作。

    +

    创建neutron服务

    +

    $ openstack user create --domain default --password-prompt neutron
    +$ openstack role add --project service --user neutron admin
    +$ openstack service create --name neutron --description "OpenStack Networking" network
    +创建网络服务API端点:

    +
    $ openstack endpoint create --region RegionOne network public http://controller:9696
    +$ openstack endpoint create --region RegionOne network internal http://controller:9696
    +$ openstack endpoint create --region RegionOne network admin http://controller:9696
    +
  14. +
  15. +

    安装和配置 Self-service 网络

    +

    安装软件包:

    +

    $ yum install openstack-neutron openstack-neutron-ml2 \
    +openstack-neutron-linuxbridge ebtables ipset
    +配置neutron:

    +

    编辑 /etc/neutron/neutron.conf 文件:

    +

    在[database]部分,配置数据库入口;

    +

    在[default]部分,启用ml2插件和router插件,允许ip地址重叠,配置RabbitMQ消息队列入口;

    +

    在[default] [keystone]部分,配置身份认证服务入口;

    +

    在[default] [nova]部分,配置网络来通知计算网络拓扑的变化;

    +

    在[oslo_concurrency]部分,配置lock path。

    +
    [database]
    +# ...
    +connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron
    +[DEFAULT]
    +# ...
    +core_plugin = ml2
    +service_plugins = router
    +allow_overlapping_ips = true
    +transport_url = rabbit://openstack:RABBIT_PASS@controller
    +auth_strategy = keystone
    +notify_nova_on_port_status_changes = true
    +notify_nova_on_port_data_changes = true
    +[keystone_authtoken]
    +# ...
    +www_authenticate_uri = http://controller:5000
    +auth_url = http://controller:5000
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +project_name = service
    +username = neutron
    +password = NEUTRON_PASS
    +[nova]
    +# ...
    +auth_url = http://controller:5000
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +region_name = RegionOne
    +project_name = service
    +username = nova
    +password = NOVA_PASS
    +[oslo_concurrency]
    +# ...
    +lock_path = /var/lib/neutron/tmp
    +

    替换NEUTRON_DBPASS为neutron数据库的密码;

    +

    替换RABBIT_PASS为RabbitMQ中openstack账户的密码;

    +

    替换NEUTRON_PASS为neutron用户的密码;

    +

    替换NOVA_PASS为nova用户的密码。

    +

    配置ML2插件:

    +

    编辑 /etc/neutron/plugins/ml2/ml2_conf.ini 文件:

    +

    在[ml2]部分,启用 flat、vlan、vxlan 网络,启用网桥及 layer-2 population 机制,启用端口安全扩展驱动;

    +

    在[ml2_type_flat]部分,配置 flat 网络为 provider 虚拟网络;

    +

    在[ml2_type_vxlan]部分,配置 VXLAN 网络标识符范围;

    +

    在[securitygroup]部分,配置允许 ipset。

    +

    # vim /etc/neutron/plugins/ml2/ml2_conf.ini
    +[ml2]
    +# ...
    +type_drivers = flat,vlan,vxlan
    +tenant_network_types = vxlan
    +mechanism_drivers = linuxbridge,l2population
    +extension_drivers = port_security
    +[ml2_type_flat]
    +# ...
    +flat_networks = provider
    +[ml2_type_vxlan]
    +# ...
    +vni_ranges = 1:1000
    +[securitygroup]
    +# ...
    +enable_ipset = true
    +配置 Linux bridge 代理:

    +

    编辑 /etc/neutron/plugins/ml2/linuxbridge_agent.ini 文件:

    +

    在[linux_bridge]部分,映射 provider 虚拟网络到物理网络接口;

    +

    在[vxlan]部分,启用 vxlan 覆盖网络,配置处理覆盖网络的物理网络接口 IP 地址,启用 layer-2 population;

    +

    在[securitygroup]部分,允许安全组,配置 linux bridge iptables 防火墙驱动。

    +

    [linux_bridge]
    +physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME
    +[vxlan]
    +enable_vxlan = true
    +local_ip = OVERLAY_INTERFACE_IP_ADDRESS
    +l2_population = true
    +[securitygroup]
    +# ...
    +enable_security_group = true
    +firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
    +替换PROVIDER_INTERFACE_NAME为物理网络接口;

    +

    替换OVERLAY_INTERFACE_IP_ADDRESS为控制节点的管理IP地址。

    +

    配置Layer-3代理:

    +

    编辑 /etc/neutron/l3_agent.ini 文件:

    +

    在[default]部分,配置接口驱动为linuxbridge

    +

    [DEFAULT]
    +# ...
    +interface_driver = linuxbridge
    +配置DHCP代理:

    +

    编辑 /etc/neutron/dhcp_agent.ini 文件:

    +

    在[default]部分,配置linuxbridge接口驱动、Dnsmasq DHCP驱动,启用隔离的元数据。

    +

    [DEFAULT]
    +# ...
    +interface_driver = linuxbridge
    +dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
    +enable_isolated_metadata = true
    +配置metadata代理:

    +

    编辑 /etc/neutron/metadata_agent.ini 文件:

    +

    在[default]部分,配置元数据主机和shared secret。

    +

    [DEFAULT]
    +# ...
    +nova_metadata_host = controller
    +metadata_proxy_shared_secret = METADATA_SECRET
    +替换METADATA_SECRET为合适的元数据代理secret。

    +
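    METADATA_SECRET可以是任意字符串,只需保证neutron的metadata代理与下文nova侧的配置取值一致。下面给出一种可能的生成方式(仅作示意):

    $ openssl rand -hex 10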
  16. +
  17. +

    配置计算服务

    +

    编辑 /etc/nova/nova.conf 文件:

    +

    在[neutron]部分,配置访问参数,启用元数据代理,配置secret。

    +
    [neutron]
    +# ...
    +auth_url = http://controller:5000
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +region_name = RegionOne
    +project_name = service
    +username = neutron
    +password = NEUTRON_PASS
    +service_metadata_proxy = true
    +metadata_proxy_shared_secret = METADATA_SECRET
    +

    替换NEUTRON_PASS为neutron用户的密码;

    +

    替换METADATA_SECRET为合适的元数据代理secret。

    +
  18. +
  19. +

    完成安装

    +

    添加配置文件链接:

    +
    $ ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
    +

    同步数据库:

    +
    $ su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
    +--config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
    +

    重启计算API服务:

    +
    $ systemctl restart openstack-nova-api.service
    +

    启动网络服务并配置开机启动:

    +
    $ systemctl enable neutron-server.service \
    +neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
    +neutron-metadata-agent.service
    +$ systemctl start neutron-server.service \
    +neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
    +neutron-metadata-agent.service
    +$ systemctl enable neutron-l3-agent.service
    +$ systemctl start neutron-l3-agent.service
    +
  20. +
  21. +

    验证

    +

    列出代理验证 neutron 代理启动成功:

    +
    $ openstack network agent list
    +
  22. +
+

Cinder 安装

+
    +
  1. +

    创建数据库、服务凭证和 API 端点

    +

    创建数据库:

    +

    作为root用户访问数据库,创建cinder数据库并授权。

    +

    $ mysql -u root -p
    +MariaDB [(none)]> CREATE DATABASE cinder;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \
    +IDENTIFIED BY 'CINDER_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \
    +IDENTIFIED BY 'CINDER_DBPASS';
    +MariaDB [(none)]> exit
    +替换CINDER_DBPASS,为cinder数据库设置密码。

    +
    $ source admin-openrc
    +

    创建cinder服务凭证:

    +

    创建cinder用户

    +

    添加‘admin’角色到用户‘cinder’

    +

    创建cinderv2和cinderv3服务

    +

    $ openstack user create --domain default --password-prompt cinder
    +$ openstack role add --project service --user cinder admin
    +$ openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
    +$ openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
    +创建块存储服务API端点:

    +
    $ openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(project_id\)s
    +$ openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(project_id\)s
    +$ openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(project_id\)s
    +$ openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s
    +$ openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s
    +$ openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s
    +
  2. +
  3. +

    安装和配置控制节点

    +

    安装软件包:

    +

    $ yum install openstack-cinder
    +配置cinder:

    +

    编辑 /etc/cinder/cinder.conf 文件:

    +

    在[database]部分,配置数据库入口;

    +

    在[DEFAULT]部分,配置RabbitMQ消息队列入口,配置my_ip;

    +

    在[DEFAULT] [keystone_authtoken]部分,配置身份认证服务入口;

    +

    在[oslo_concurrency]部分,配置lock path。

    +

    [database]
    +# ...
    +connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder
    +[DEFAULT]
    +# ...
    +transport_url = rabbit://openstack:RABBIT_PASS@controller
    +auth_strategy = keystone
    +my_ip = 10.0.0.11
    +[keystone_authtoken]
    +# ...
    +www_authenticate_uri = http://controller:5000
    +auth_url = http://controller:5000
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +project_name = service
    +username = cinder
    +password = CINDER_PASS
    +[oslo_concurrency]
    +# ...
    +lock_path = /var/lib/cinder/tmp
    +替换CINDER_DBPASS为cinder数据库的密码;

    +

    替换RABBIT_PASS为RabbitMQ中openstack账户的密码;

    +

    配置my_ip为控制节点的管理IP地址;

    +

    替换CINDER_PASS为cinder用户的密码;

    +

    同步数据库:

    +

    $ su -s /bin/sh -c "cinder-manage db sync" cinder
    +配置计算使用块存储:

    +

    编辑 /etc/nova/nova.conf 文件。

    +

    [cinder]
    +os_region_name = RegionOne
    +完成安装:

    +

    重启计算API服务

    +

    $ systemctl restart openstack-nova-api.service
    +启动块存储服务

    +
    $ systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
    +$ systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service
    +
  4. +
  5. +

    安装和配置存储节点(LVM)

    +

    安装软件包:

    +
    $ yum install lvm2 device-mapper-persistent-data scsi-target-utils python2-keystone \
    +openstack-cinder-volume
    +

    创建LVM物理卷 /dev/sdb:

    +

    $ pvcreate /dev/sdb
    +创建LVM卷组 cinder-volumes:

    +

    $ vgcreate cinder-volumes /dev/sdb
    +编辑 /etc/lvm/lvm.conf 文件:

    +

    在devices部分,添加过滤以接受/dev/sdb设备拒绝其他设备。

    +
    devices {
    # ...
    filter = [ "a/sdb/", "r/.*/"]
    }

    编辑 /etc/cinder/cinder.conf 文件:

    +

    在[lvm]部分,使用LVM驱动、cinder-volumes卷组、iSCSI协议和适当的iSCSI服务配置LVM后端。

    +

    在[DEFAULT]部分,启用LVM后端,配置镜像服务API的位置。

    +
    [lvm]
    +volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
    +volume_group = cinder-volumes
    +target_protocol = iscsi
    +target_helper = lioadm
    +[DEFAULT]
    +# ...
    +enabled_backends = lvm
    +glance_api_servers = http://controller:9292
    +

    注意

    +

    当cinder使用tgtadm的方式挂卷的时候,要修改/etc/tgt/tgtd.conf,内容如下,保证tgtd可以发现cinder-volume的iscsi target。

    +

    include /var/lib/cinder/volumes/*
    +完成安装:

    +
    $ systemctl enable openstack-cinder-volume.service tgtd.service iscsid.service
    +$ systemctl start openstack-cinder-volume.service tgtd.service iscsid.service
    +
  6. +
  7. +

    安装和配置存储节点(ceph RBD)

    +

    安装软件包:

    +
    $ yum install ceph-common python2-rados python2-rbd python2-keystone openstack-cinder-volume
    +

    在[DEFAULT]部分,启用ceph RBD后端。

    +
    [DEFAULT]
    +enabled_backends = ceph-rbd
    +

    添加ceph rbd配置部分([ceph-rbd]),其名称需与enabled_backends中配置的后端名保持一致

    +
    [ceph-rbd]
    +glance_api_version = 2
    +rados_connect_timeout = -1
    +rbd_ceph_conf = /etc/ceph/ceph.conf
    +rbd_flatten_volume_from_snapshot = False
    +rbd_max_clone_depth = 5
    +rbd_pool = <RBD_POOL_NAME>  # RBD存储池名称
    +rbd_secret_uuid = <rbd_secret_uuid> # 随机生成SECRET UUID
    +rbd_store_chunk_size = 4
    +rbd_user = <RBD_USER_NAME>
    +volume_backend_name = ceph-rbd
    +volume_driver = cinder.volume.drivers.rbd.RBDDriver
    +
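    关于rbd_secret_uuid,下面给出一个可能的生成与注册示意(假设计算节点使用libvirt,且<RBD_USER_NAME>对应的ceph keyring已经配置好;实际步骤请以ceph与nova/cinder的对接文档为准):

    # 生成一个UUID,填入上面配置中的rbd_secret_uuid
    $ uuidgen
    # 在计算节点的libvirt中注册同一UUID对应的secret
    $ cat > secret.xml << EOF
    <secret ephemeral='no' private='no'>
      <uuid>上一步生成的UUID</uuid>
      <usage type='ceph'>
        <name>client.<RBD_USER_NAME> secret</name>
      </usage>
    </secret>
    EOF
    $ virsh secret-define --file secret.xml
    $ virsh secret-set-value --secret 上一步生成的UUID --base64 $(ceph auth get-key client.<RBD_USER_NAME>)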

    配置存储节点ceph客户端,需要保证/etc/ceph/目录中包含ceph集群访问配置,包括ceph.conf以及keyring

    +
    [root@openeuler ~]# ll /etc/ceph
    +-rw-r--r-- 1 root root   82 Jun 16 17:11 ceph.client.<rbd_user>.keyring
    +-rw-r--r-- 1 root root 1.5K Jun 16 17:11 ceph.conf
    +-rw-r--r-- 1 root root   92 Jun 16 17:11 rbdmap
    +

    在存储节点检查ceph集群是否正常可访问

    +
    [root@openeuler ~]# ceph --user cinder -s
    +  cluster:
    +    id:     b7b2fac6-420f-4ec1-aea2-4862d29b4059
    +    health: HEALTH_OK
    +
    +  services:
    +    mon: 3 daemons, quorum VIRT01,VIRT02,VIRT03
    +    mgr: VIRT03(active), standbys: VIRT02, VIRT01
    +    mds: cephfs_virt-1/1/1 up  {0=VIRT03=up:active}, 2 up:standby
    +    osd: 15 osds: 15 up, 15 in
    +
    +  data:
    +    pools:   7 pools, 1416 pgs
    +    objects: 5.41M objects, 19.8TiB
    +    usage:   49.3TiB used, 59.9TiB / 109TiB avail
    +    pgs:     1414 active
    +
    +  io:
    +    client:   2.73MiB/s rd, 22.4MiB/s wr, 3.21kop/s rd, 1.19kop/s wr
    +

    启动服务

    +
    $ systemctl enable openstack-cinder-volume.service
    +$ systemctl start openstack-cinder-volume.service
    +
  8. +
  9. +

    安装和配置备份服务

    +

    编辑 /etc/cinder/cinder.conf 文件:

    +

    在[DEFAULT]部分,配置备份选项

    +

    [DEFAULT]
    +# ...
    +# 注意: openEuler 21.03中没有提供OpenStack Swift软件包,需要用户自行安装。或者使用其他的备份后端,例如,NFS。NFS已经过测试验证,可以正常使用。
    +backup_driver = cinder.backup.drivers.swift.SwiftBackupDriver
    +backup_swift_url = SWIFT_URL
    +替换SWIFT_URL为对象存储服务的URL,该URL可以通过对象存储API端点找到:

    +

    $ openstack catalog show object-store
    +完成安装:

    +
    $ systemctl enable openstack-cinder-backup.service
    +$ systemctl start openstack-cinder-backup.service
    +
  10. +
  11. +

    验证

    +

    列出服务组件验证每个步骤成功: +

    $ source admin-openrc
    +$ openstack volume service list

    +

    注:目前暂未对swift组件进行支持,有条件的同学可以配置对接ceph。

    +
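    如果选择对接ceph作为备份后端,下面给出一个可能的cinder.conf配置示意(假设已在ceph中创建名为backups的存储池和cinder-backup用户,选项名请以所用cinder版本为准):

    [DEFAULT]
    backup_driver = cinder.backup.drivers.ceph.CephBackupDriver
    backup_ceph_conf = /etc/ceph/ceph.conf
    backup_ceph_user = cinder-backup
    backup_ceph_pool = backups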
  12. +
+

Horizon 安装

+
  1. 安装软件包

     $ yum install openstack-dashboard

  2. 修改文件 /usr/share/openstack-dashboard/openstack_dashboard/local/local_settings.py

     修改变量

     ALLOWED_HOSTS = ['*', ]
     OPENSTACK_HOST = "controller"
     OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
     OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
     SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
     CACHES = {
         'default': {
              'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
              'LOCATION': 'controller:11211',
         }
     }

     新增变量

     OPENSTACK_API_VERSIONS = {
         "identity": 3,
         "image": 2,
         "volume": 3,
     }
     WEBROOT = "/dashboard/"
     COMPRESS_OFFLINE = True
     OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "default"
     OPENSTACK_KEYSTONE_DEFAULT_ROLE = "admin"
     LOGIN_URL = '/dashboard/auth/login/'
     LOGOUT_URL = '/dashboard/auth/logout/'

  3. 修改文件 /etc/httpd/conf.d/openstack-dashboard.conf

     WSGIDaemonProcess dashboard
     WSGIProcessGroup dashboard
     WSGISocketPrefix run/wsgi
     WSGIApplicationGroup %{GLOBAL}

     WSGIScriptAlias /dashboard /usr/share/openstack-dashboard/openstack_dashboard/wsgi/django.wsgi
     Alias /dashboard/static /usr/share/openstack-dashboard/static

     <Directory /usr/share/openstack-dashboard/openstack_dashboard/wsgi>
       Options All
       AllowOverride All
       Require all granted
     </Directory>

     <Directory /usr/share/openstack-dashboard/static>
       Options All
       AllowOverride All
       Require all granted
     </Directory>

  4. 在 /usr/share/openstack-dashboard 目录下执行

     $ ./manage.py compress

  5. 重启 httpd 服务

     $ systemctl restart httpd

  6. 验证

     打开浏览器,输入网址 http://<控制节点IP>/dashboard,登录 horizon。
+
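    部署完成后,也可以先在命令行用curl确认dashboard页面可访问(仅作示意,假设WEBROOT为/dashboard/):

    $ curl -I http://controller/dashboard/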

Tempest 安装

+

Tempest是OpenStack的集成测试服务,如果用户需要全面自动化测试已安装的OpenStack环境的功能,则推荐使用该组件。否则,可以不用安装

+
  1. 安装Tempest

     $ yum install openstack-tempest

  2. 初始化目录

     $ tempest init mytest

  3. 修改配置文件

     $ cd mytest
     $ vi etc/tempest.conf

     tempest.conf中需要配置当前OpenStack环境的信息,具体内容可以参考官方示例,也可参考本节之后给出的最小配置示意。

  4. 执行测试

     $ tempest run
+
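    下面给出一个最小化的tempest.conf配置片段,仅作示意(假设身份认证地址与前文部署一致;镜像、flavor的ID需按实际环境填写,完整选项请参考官方示例):

    [auth]
    admin_username = admin
    admin_password = ADMIN_PASS
    admin_project_name = admin
    admin_domain_name = Default

    [identity]
    uri_v3 = http://controller:5000/v3

    [compute]
    image_ref = <glance中镜像的ID>
    flavor_ref = <flavor的ID>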

Ironic 安装

+

Ironic是OpenStack的裸金属服务,如果用户需要进行裸机部署则推荐使用该组件。否则,可以不用安装。

+
    +
  1. 设置数据库
  2. +
+

裸金属服务在数据库中存储信息,创建一个ironic用户可以访问的ironic数据库,替换IRONIC_DBPASSWORD为合适的密码

+
$ mysql -u root -p 
+
MariaDB [(none)]> CREATE DATABASE ironic CHARACTER SET utf8; 
+
+MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'localhost' \     
+IDENTIFIED BY 'IRONIC_DBPASSWORD'; 
+
+MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'%' \     
+IDENTIFIED BY 'IRONIC_DBPASSWORD';
+
    +
  1. 组件安装与配置
  2. +
+

##### 创建服务用户认证

+

1、创建Bare Metal服务用户

+
$ openstack user create --password IRONIC_PASSWORD \ 
+--email ironic@example.com ironic 
+$ openstack role add --project service --user ironic admin 
+$ openstack service create --name ironic --description \ 
+"Ironic baremetal provisioning service" baremetal 
+
+$ openstack service create --name ironic-inspector --description     "Ironic inspector baremetal provisioning service" baremetal-introspection 
+$ openstack user create --password IRONIC_INSPECTOR_PASSWORD --email ironic_inspector@example.com ironic_inspector 
+$ openstack role add --project service --user ironic-inspector admin
+

2、创建Bare Metal服务访问入口

+
$ openstack endpoint create --region RegionOne baremetal admin http://$IRONIC_NODE:6385 
+$ openstack endpoint create --region RegionOne baremetal public http://$IRONIC_NODE:6385 
+$ openstack endpoint create --region RegionOne baremetal internal http://$IRONIC_NODE:6385 
+$ openstack endpoint create --region RegionOne baremetal-introspection internal http://$IRONIC_NODE:5050/v1 
+$ openstack endpoint create --region RegionOne baremetal-introspection public http://$IRONIC_NODE:5050/v1 
+$ openstack endpoint create --region RegionOne baremetal-introspection admin http://$IRONIC_NODE:5050/v1
+

##### 配置ironic-api服务

+

配置文件路径/etc/ironic/ironic.conf

+

1、通过connection选项配置数据库的位置,如下所示,替换IRONIC_DBPASSWORD为ironic用户的密码,替换DB_IP为DB服务器所在的IP地址:

+
[database] 
+
+# The SQLAlchemy connection string used to connect to the 
+# database (string value) 
+
+connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic
+

2、通过以下选项配置ironic-api服务使用RabbitMQ消息代理,替换RPC_*为RabbitMQ的详细地址和凭证

+
[DEFAULT] 
+
+# A URL representing the messaging driver to use and its full 
+# configuration. (string value) 
+
+transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
+

用户也可自行使用json-rpc方式替换rabbitmq

+
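例如,假设所用ironic版本支持通过rpc_transport选项切换RPC实现,则可以按如下方式改用json-rpc(仅作示意):

[DEFAULT]
rpc_transport = json-rpc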

3、配置ironic-api服务使用身份认证服务的凭证,替换PUBLIC_IDENTITY_IP为身份认证服务器的公共IP,替换PRIVATE_IDENTITY_IP为身份认证服务器的私有IP,替换IRONIC_PASSWORD为身份认证服务中ironic用户的密码:

+
[DEFAULT] 
+
+# Authentication strategy used by ironic-api: one of 
+# "keystone" or "noauth". "noauth" should not be used in a 
+# production environment because all authentication will be 
+# disabled. (string value) 
+
+auth_strategy=keystone 
+force_config_drive = True
+
+[keystone_authtoken] 
+# Authentication type to load (string value) 
+auth_type=password 
+# Complete public Identity API endpoint (string value) 
+www_authenticate_uri=http://PUBLIC_IDENTITY_IP:5000 
+# Complete admin Identity API endpoint. (string value) 
+auth_url=http://PRIVATE_IDENTITY_IP:5000 
+# Service username. (string value) 
+username=ironic 
+# Service account password. (string value) 
+password=IRONIC_PASSWORD 
+# Service tenant name. (string value) 
+project_name=service 
+# Domain name containing project (string value) 
+project_domain_name=Default 
+# User's domain name (string value) 
+user_domain_name=Default
+

4、需要在配置文件中指定ironic日志目录

+
[DEFAULT]
+log_dir = /var/log/ironic/
+

5、创建裸金属服务数据库表

+
$ ironic-dbsync --config-file /etc/ironic/ironic.conf create_schema
+

6、重启ironic-api服务

+
$ systemctl restart openstack-ironic-api
+

##### 配置ironic-conductor服务

+

1、替换HOST_IP为conductor host的IP

+
[DEFAULT] 
+
+# IP address of this host. If unset, will determine the IP 
+# programmatically. If unable to do so, will use "127.0.0.1". 
+# (string value) 
+
+my_ip=HOST_IP
+

2、配置数据库的位置,ironic-conductor应该使用和ironic-api相同的配置。替换IRONIC_DBPASSWORD为ironic用户的密码,替换DB_IP为DB服务器所在的IP地址:

+
[database] 
+
+# The SQLAlchemy connection string to use to connect to the 
+# database. (string value) 
+
+connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic
+

3、通过以下选项配置ironic-api服务使用RabbitMQ消息代理,ironic-conductor应该使用和ironic-api相同的配置,替换RPC_*为RabbitMQ的详细地址和凭证

+
[DEFAULT] 
+
+# A URL representing the messaging driver to use and its full 
+# configuration. (string value) 
+
+transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
+

用户也可自行使用json-rpc方式替换rabbitmq

+

4、配置凭证访问其他OpenStack服务

+

为了与其他OpenStack服务进行通信,裸金属服务在请求其他服务时需要使用服务用户与OpenStack Identity服务进行认证。这些用户的凭据必须在与相应服务相关的每个配置文件中进行配置。

+

- [neutron] - 访问OpenStack网络服务
- [glance] - 访问OpenStack镜像服务
- [swift] - 访问OpenStack对象存储服务
- [cinder] - 访问OpenStack块存储服务
- [inspector] - 访问OpenStack裸金属introspection服务
- [service_catalog] - 一个特殊项,用于保存裸金属服务使用的凭证,该凭证用于发现注册在OpenStack身份认证服务目录中的自己的API URL端点

+

简单起见,可以对所有服务使用同一个服务用户。为了向后兼容,该用户应该和ironic-api服务的[keystone_authtoken]所配置的为同一个用户。但这不是必须的,也可以为每个服务创建并配置不同的服务用户。

+

在下面的示例中,用户访问openstack网络服务的身份验证信息配置为:

+

网络服务部署在名为RegionOne的身份认证服务域中,仅在服务目录中注册公共端点接口

+

请求时使用特定的CA SSL证书进行HTTPS连接

+

与ironic-api服务配置相同的服务用户

+

动态密码认证插件基于其他选项发现合适的身份认证服务API版本

+
[neutron] 
+
+# Authentication type to load (string value) 
+auth_type = password 
+# Authentication URL (string value) 
+auth_url=https://IDENTITY_IP:5000/ 
+# Username (string value) 
+username=ironic 
+# User's password (string value) 
+password=IRONIC_PASSWORD 
+# Project name to scope to (string value) 
+project_name=service 
+# Domain ID containing project (string value) 
+project_domain_id=default 
+# User's domain id (string value) 
+user_domain_id=default 
+# PEM encoded Certificate Authority to use when verifying 
+# HTTPs connections. (string value) 
+cafile=/opt/stack/data/ca-bundle.pem 
+# The default region_name for endpoint URL discovery. (string 
+# value) 
+region_name = RegionOne 
+# List of interfaces, in order of preference, for endpoint 
+# URL. (list value) 
+valid_interfaces=public
+

默认情况下,为了与其他服务进行通信,裸金属服务会尝试通过身份认证服务的服务目录发现该服务合适的端点。如果希望对一个特定服务使用一个不同的端点,则在裸金属服务的配置文件中通过endpoint_override选项进行指定:

+
[neutron] 
+# ...
+endpoint_override = <NEUTRON_API_ADDRESS>
+

5、配置允许的驱动程序和硬件类型

+

通过设置enabled_hardware_types设置ironic-conductor服务允许使用的硬件类型:

+
[DEFAULT] 
+enabled_hardware_types = ipmi 
+

配置硬件接口:

+
enabled_boot_interfaces = pxe
+enabled_deploy_interfaces = direct,iscsi
+enabled_inspect_interfaces = inspector
+enabled_management_interfaces = ipmitool
+enabled_power_interfaces = ipmitool
+

配置接口默认值:

+
[DEFAULT]
+default_deploy_interface = direct
+default_network_interface = neutron
+

如果启用了任何使用Direct deploy的驱动,必须安装和配置镜像服务的Swift后端。Ceph对象网关(RADOS网关)也支持作为镜像服务的后端。

+
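例如,假设镜像服务使用Swift作为后端,可以在glance-api.conf中做类似如下的配置(仅作示意,具体选项请以所用glance版本为准):

[glance_store]
stores = file,http,swift
default_store = swift
swift_store_create_container_on_put = True
swift_store_config_file = /etc/glance/glance-swift.conf
default_swift_reference = ref1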

6、重启ironic-conductor服务

+
$ systemctl restart openstack-ironic-conductor
+

##### 配置ironic-inspector服务

+

配置文件路径/etc/ironic-inspector/inspector.conf

+

1、创建数据库

+

$ mysql -u root -p 
+
MariaDB [(none)]> CREATE DATABASE ironic_inspector CHARACTER SET utf8; 
+
+MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic_inspector.* TO 'ironic_inspector'@'localhost' \     IDENTIFIED BY 'IRONIC_INSPECTOR_DBPASSWORD'; 
+MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic_inspector.* TO 'ironic_inspector'@'%' \     
+IDENTIFIED BY 'IRONIC_INSPECTOR_DBPASSWORD';

+

2、通过connection选项配置数据库的位置,如下所示,替换IRONIC_INSPECTOR_DBPASSWORD为ironic_inspector用户的密码,替换DB_IP为DB服务器所在的IP地址:

+
[database] 
+backend = sqlalchemy 
+connection = mysql+pymysql://ironic_inspector:IRONIC_INSPECTOR_DBPASSWORD@DB_IP/ironic_inspector
+

3、调用 ironic-inspector-dbsync 生成表

+
ironic-inspector-dbsync --config-file /etc/ironic-inspector/inspector.conf upgrade
+

4、配置消息队列通信地址

+
[DEFAULT]
+transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
+

5、设置keystone认证

+
[DEFAULT] 
+
+auth_strategy = keystone 
+
+[ironic] 
+
+api_endpoint = http://IRONIC_API_HOST_ADDRRESS:6385 
+auth_type = password 
+auth_url = http://PUBLIC_IDENTITY_IP:5000 
+auth_strategy = keystone 
+ironic_url = http://IRONIC_API_HOST_ADDRRESS:6385 
+os_region = RegionOne 
+project_name = service 
+project_domain_name = Default 
+user_domain_name = Default 
+username = IRONIC_SERVICE_USER_NAME 
+password = IRONIC_SERVICE_USER_PASSWORD
+

6、配置ironic inspector dnsmasq服务

+
# 配置文件地址:/etc/ironic-inspector/dnsmasq.conf 
+port=0 
+interface=enp3s0                         #替换为实际监听网络接口 
+dhcp-range=172.20.19.100,172.20.19.110   #替换为实际dhcp地址范围 
+bind-interfaces 
+enable-tftp 
+
+dhcp-match=set:efi,option:client-arch,7 
+dhcp-match=set:efi,option:client-arch,9 
+dhcp-match=aarch64, option:client-arch,11 
+dhcp-boot=tag:aarch64,grubaa64.efi 
+dhcp-boot=tag:!aarch64,tag:efi,grubx64.efi 
+dhcp-boot=tag:!aarch64,tag:!efi,pxelinux.0 
+
+tftp-root=/tftpboot                       #替换为实际tftpboot目录 
+log-facility=/var/log/dnsmasq.log
+

7、启动服务

+
$ systemctl enable --now openstack-ironic-inspector.service 
+$ systemctl enable --now openstack-ironic-inspector-dnsmasq.service
+

8、如果节点单独部署ironic服务还需要部署启动iscsid.service服务

+
$ systemctl enable openstack-cinder-volume.service tgtd.service iscsid.service
+$ systemctl start openstack-cinder-volume.service tgtd.service iscsid.service
+

注意:arm架构支持不完全,需要根据自己情况进行适配;

+
    +
  1. deploy ramdisk镜像制作
  2. +
+

目前ramdisk镜像支持通过ironic python agent builder来进行制作,这里介绍下使用这个工具构建ironic使用的deploy镜像的完整过程。(用户也可以根据自己的情况获取ironic-python-agent,这里提供使用ipa-builder制作ipa方法)

+

##### 安装 ironic-python-agent-builder

+
    +
  1. +

    安装工具:

    +
    $ pip install ironic-python-agent-builder
    +
  2. +
  3. +

    修改以下文件中的python解释器:

    +
    /usr/bin/yum
    /usr/libexec/urlgrabber-ext-down
    +
  4. +
  5. +

    安装其它必须的工具:

    +
    $ yum install git
    +

    由于DIB依赖semanage命令,所以在制作镜像之前确定该命令是否可用:semanage --help,如果提示无此命令,安装即可:

    +
    # 先查询需要安装哪个包
    +[root@localhost ~]# yum provides /usr/sbin/semanage
    +已加载插件:fastestmirror
    +Loading mirror speeds from cached hostfile
    + * base: mirror.vcu.edu
    + * extras: mirror.vcu.edu
    + * updates: mirror.math.princeton.edu
    +policycoreutils-python-2.5-34.el7.aarch64 : SELinux policy core python utilities
    +源    :base
    +匹配来源:
    +文件名    :/usr/sbin/semanage
    +# 安装
    +[root@localhost ~]# yum install policycoreutils-python
    +
  6. +
+

##### 制作镜像

+

如果是aarch64架构,还需要添加:

+
$ export ARCH=aarch64
+

###### 普通镜像

+

基本用法:

+
usage: ironic-python-agent-builder [-h] [-r RELEASE] [-o OUTPUT] [-e ELEMENT]
+                                   [-b BRANCH] [-v] [--extra-args EXTRA_ARGS]
+                                   distribution
+
+positional arguments:
+  distribution          Distribution to use
+
+optional arguments:
+  -h, --help            show this help message and exit
+  -r RELEASE, --release RELEASE
+                        Distribution release to use
+  -o OUTPUT, --output OUTPUT
+                        Output base file name
+  -e ELEMENT, --element ELEMENT
+                        Additional DIB element to use
+  -b BRANCH, --branch BRANCH
+                        If set, override the branch that is used for ironic-
+                        python-agent and requirements
+  -v, --verbose         Enable verbose logging in diskimage-builder
+  --extra-args EXTRA_ARGS
+                        Extra arguments to pass to diskimage-builder
+

举例说明:

+
$ ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky
+

###### 允许ssh登录

+

初始化环境变量,然后制作镜像:

+
$ export DIB_DEV_USER_USERNAME=ipa \
+$ export DIB_DEV_USER_PWDLESS_SUDO=yes \
+$ export DIB_DEV_USER_PASSWORD='123'
+$ ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky -e selinux-permissive -e devuser
+

###### 指定代码仓库

+

初始化对应的环境变量,然后制作镜像:

+
# 指定仓库地址以及版本
+DIB_REPOLOCATION_ironic_python_agent=git@172.20.2.149:liuzz/ironic-python-agent.git
+DIB_REPOREF_ironic_python_agent=origin/develop
+
+# 直接从gerrit上clone代码
+DIB_REPOLOCATION_ironic_python_agent=https://review.opendev.org/openstack/ironic-python-agent
+DIB_REPOREF_ironic_python_agent=refs/changes/43/701043/1
+

参考:source-repositories

+

指定仓库地址及版本验证成功。

+

Kolla 安装

+

Kolla为OpenStack服务提供生产环境可用的容器化部署的功能。openEuler 20.03 LTS SP2中引入了Kolla和Kolla-ansible服务。

+

Kolla的安装十分简单,只需要安装对应的RPM包即可

+
$ yum install openstack-kolla openstack-kolla-ansible
+

安装完后,就可以使用kolla-ansible, kolla-build, kolla-genpwd, kolla-mergepwd等命令了。

+
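下面给出一组可能的使用示意(假设构建基于CentOS的镜像并执行all-in-one部署,具体参数和inventory路径请以kolla/kolla-ansible文档为准):

# 为各服务生成随机密码(默认写入/etc/kolla/passwords.yml)
$ kolla-genpwd
# 构建keystone、nova相关的容器镜像
$ kolla-build --base centos keystone nova
# 按/etc/kolla/globals.yml的配置执行all-in-one部署
$ kolla-ansible -i /usr/share/kolla-ansible/ansible/inventory/all-in-one deploy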

Trove 安装

+

Trove是OpenStack的数据库服务,如果用户使用OpenStack提供的数据库服务则推荐使用该组件。否则,可以不用安装。

+
    +
  1. 设置数据库
  2. +
+

数据库服务在数据库中存储信息,创建一个trove用户可以访问trove数据库,替换TROVE_DBPASSWORD为对应密码

+
$ mysql -u root -p
+
MariaDB [(none)]> CREATE DATABASE trove CHARACTER SET utf8;
+MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'localhost' \
+IDENTIFIED BY 'TROVE_DBPASSWORD';
+MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'%' \
+IDENTIFIED BY 'TROVE_DBPASSWORD';
+
    +
  1. 创建服务用户认证
  2. +
+

1、创建Trove服务用户

+

$ openstack user create --password TROVE_PASSWORD \
+                      --email trove@example.com trove
+$ openstack role add --project service --user trove admin
+$ openstack service create --name trove \
+                         --description "Database service" database

 解释: TROVE_PASSWORD 替换为trove用户的密码

+

2、创建Database服务访问入口

+

$ openstack endpoint create --region RegionOne database public http://$TROVE_NODE:8779/v1.0/%\(tenant_id\)s
+$ openstack endpoint create --region RegionOne database internal http://$TROVE_NODE:8779/v1.0/%\(tenant_id\)s
+$ openstack endpoint create --region RegionOne database admin http://$TROVE_NODE:8779/v1.0/%\(tenant_id\)s
+ 解释: $TROVE_NODE 替换为Trove的API服务部署节点

+
    +
  1. 安装和配置Trove各组件
  2. +
+

1、安装Trove

+

$ yum install openstack-trove python-troveclient
+ 2、配置/etc/trove/trove.conf

+

[DEFAULT]
+bind_host=TROVE_NODE_IP
+log_dir = /var/log/trove
+
+auth_strategy = keystone
+# Config option for showing the IP address that nova doles out
+add_addresses = True
+network_label_regex = ^NETWORK_LABEL$
+api_paste_config = /etc/trove/api-paste.ini
+
+trove_auth_url = http://controller:35357/v3/
+nova_compute_url = http://controller:8774/v2
+cinder_url = http://controller:8776/v1
+
+nova_proxy_admin_user = admin
+nova_proxy_admin_pass = ADMIN_PASS
+nova_proxy_admin_tenant_name = service
+taskmanager_manager = trove.taskmanager.manager.Manager
+use_nova_server_config_drive = True
+
+# Set these if using Neutron Networking
+network_driver=trove.network.neutron.NeutronDriver
+network_label_regex=.*
+transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
+
+[database]
+connection = mysql+pymysql://trove:TROVE_DBPASS@controller/trove
+
+[keystone_authtoken]
+www_authenticate_uri = http://controller:5000/v3/
+auth_url=http://controller:35357/v3/
+#auth_uri = http://controller/identity
+#auth_url = http://controller/identity_admin
+auth_type = password
+project_domain_name = Default
+user_domain_name = Default
+project_name = service
+username = trove
+password = TROVE_PASS
+
 解释:
 - [DEFAULT]分组中bind_host配置为Trove部署节点的IP
 - nova_compute_url和cinder_url为Nova和Cinder在Keystone中创建的endpoint
 - nova_proxy_XXX为一个能访问Nova服务的用户信息,上例中以admin用户为例
 - transport_url为RabbitMQ连接信息,RABBIT_PASS替换为RabbitMQ的密码
 - [database]分组中的connection为前面在mysql中为Trove创建的数据库信息
 - Trove的用户信息中TROVE_PASS替换为实际trove用户的密码

+

3、配置/etc/trove/trove-taskmanager.conf

+

[DEFAULT]
+log_dir = /var/log/trove
+trove_auth_url = http://controller/identity/v2.0
+nova_compute_url = http://controller:8774/v2
+cinder_url = http://controller:8776/v1
+transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
+
+[database]
+connection = mysql+pymysql://trove:TROVE_DBPASS@controller/trove
 解释: 参照trove.conf配置

4、配置/etc/trove/trove-conductor.conf

+

[DEFAULT]
+log_dir = /var/log/trove
+trove_auth_url = http://controller/identity/v2.0
+nova_compute_url = http://controller:8774/v2
+cinder_url = http://controller:8776/v1
+transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
+
+[database]
+connection = mysql+pymysql://trove:trove@controller/trove
+ 解释: 参照trove.conf配置

+

5、配置/etc/trove/trove-guestagent.conf

+

[DEFAULT]
+rabbit_host = controller
+rabbit_password = RABBIT_PASS
+nova_proxy_admin_user = admin
+nova_proxy_admin_pass = ADMIN_PASS
+nova_proxy_admin_tenant_name = service
+trove_auth_url = http://controller/identity_admin/v2.0
 解释: guestagent是trove中一个独立组件,需要预先内置到Trove通过Nova创建的虚拟机镜像中。在创建好数据库实例后,会启动guestagent进程,负责通过消息队列(RabbitMQ)向Trove上报心跳,因此需要配置RabbitMQ的用户和密码信息。

+

6、生成Trove数据库表

+
$ su -s /bin/sh -c "trove-manage db_sync" trove
+
    +
  1. 完成安装配置

1、配置Trove服务自启动
  2. +
+

$ systemctl enable openstack-trove-api.service \
+openstack-trove-taskmanager.service \
+openstack-trove-conductor.service 
+ 2、启动服务

+
$ systemctl start openstack-trove-api.service \
+openstack-trove-taskmanager.service \
+openstack-trove-conductor.service
diff --git a/site/install/openEuler-20.03-LTS-SP3/OpenStack-queens/index.html b/site/install/openEuler-20.03-LTS-SP3/OpenStack-queens/index.html
new file mode 100644
index 0000000000000000000000000000000000000000..a9f9ab292294598252574a9d0cff5016761ba43f
OpenStack-Queens 部署指南

+ +

OpenStack 简介

+

OpenStack 是一个社区,也是一个项目。它提供了一个部署云的操作平台或工具集,为组织提供可扩展的、灵活的云计算。

+

作为一个开源的云计算管理平台,OpenStack 由 nova、cinder、neutron、glance、keystone、horizon 等几个主要的组件组合起来完成具体工作。OpenStack 支持几乎所有类型的云环境,项目目标是提供实施简单、可大规模扩展、丰富、标准统一的云计算管理平台。OpenStack 通过各种互补的服务提供了基础设施即服务(IaaS)的解决方案,每个服务提供 API 进行集成。

+

openEuler 20.03-LTS-SP3 版本官方认证的第三方 oepkg yum 源已经支持 Openstack-Queens 版本,用户可以配置好 oepkg yum 源后根据此文档进行 OpenStack 部署。

+

约定

+

Openstack 支持多种形态部署,此文档支持ALL in One以及Distributed两种部署方式,按照如下方式约定:

+

ALL in One模式:

+
忽略所有可能的后缀
+

Distributed模式:

+
以 `(CTL)` 为后缀表示此条配置或者命令仅适用`控制节点`
+以 `(CPT)` 为后缀表示此条配置或者命令仅适用`计算节点`
+除此之外表示此条配置或者命令同时适用`控制节点`和`计算节点`
+

注意

+

涉及到以上约定的服务如下:

+
    +
  • Cinder
  • +
  • Nova
  • +
  • Neutron
  • +
+

准备环境

+

环境配置

+
    +
  1. +

    配置 20.03-LTS-SP3 官方认证的第三方源 oepkg

    +
    cat << EOF >> /etc/yum.repos.d/OpenStack_Queens.repo
    +[openstack_queens]
    +name=OpenStack_Queens
    +baseurl=https://repo.oepkgs.net/openEuler/rpm/openEuler-20.03-LTS-SP3/budding-openeuler/openstack/queens/$basearch/
    +gpgcheck=0
    +enabled=1
    +EOF
    +

    注意

    +

    如果环境启用了Epol源,需要提高queens仓的优先级,设置priority=1: +

    cat << EOF >> /etc/yum.repos.d/OpenStack_Queens.repo
    +[openstack_queens]
    +name=OpenStack_Queens
    +baseurl=https://repo.oepkgs.net/openEuler/rpm/openEuler-20.03-LTS-SP3/budding-openeuler/openstack/queens/$basearch/
    +gpgcheck=0
    +enabled=1
    +priority=1
    +EOF

    +
    $ yum clean all && yum makecache
    +
  2. +
  3. +

    修改主机名以及映射

    +

    设置各个节点的主机名

    +
    hostnamectl set-hostname controller                                                            (CTL)
    +hostnamectl set-hostname compute                                                               (CPT)
    +

    假设controller节点的IP是10.0.0.11,compute节点的IP是10.0.0.12(如果存在的话),则于/etc/hosts新增如下:

    +
    10.0.0.11   controller
    +10.0.0.12   compute
    +
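    配置完成后,建议在各节点之间以主机名互ping,确认解析与连通性正常(示意):

    ping -c 2 controller
    ping -c 2 compute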
  4. +
+

安装 SQL DataBase

+
    +
  1. +

    执行如下命令,安装软件包。

    +
    yum install mariadb mariadb-server python2-PyMySQL
    +
  2. +
  3. +

    执行如下命令,创建并编辑 /etc/my.cnf.d/openstack.cnf 文件。

    +
    vim /etc/my.cnf.d/openstack.cnf
    +
    +[mysqld]
    +bind-address = 10.0.0.11
    +default-storage-engine = innodb
    +innodb_file_per_table = on
    +max_connections = 4096
    +collation-server = utf8_general_ci
    +character-set-server = utf8
    +

    注意

    +

    其中 bind-address 设置为控制节点的管理IP地址。

    +
  4. +
  5. +

    启动 DataBase 服务,并为其配置开机自启动:

    +
    systemctl enable mariadb.service
    +systemctl start mariadb.service
    +
  6. +
  7. +

    配置DataBase的默认密码(可选)

    +
    mysql_secure_installation
    +

    注意

    +

    根据提示进行即可

    +
  8. +
+

安装 RabbitMQ

+
    +
  1. +

    执行如下命令,安装软件包。

    +
    yum install rabbitmq-server
    +
  2. +
  3. +

    启动 RabbitMQ 服务,并为其配置开机自启动。

    +
    systemctl enable rabbitmq-server.service
    +systemctl start rabbitmq-server.service
    +
  4. +
  5. +

    添加 OpenStack用户。

    +
    rabbitmqctl add_user openstack RABBIT_PASS
    +

    注意

    +

    替换 RABBIT_PASS,为 OpenStack 用户设置密码

    +
  6. +
  7. +

    设置openstack用户权限,允许进行配置、写、读:

    +
    rabbitmqctl set_permissions openstack ".*" ".*" ".*"
    +
  8. +
+

安装 Memcached

+
    +
  1. +

    执行如下命令,安装依赖软件包。

    +
    yum install memcached python2-memcached
    +
  2. +
  3. +

    编辑 /etc/sysconfig/memcached 文件。

    +
    vim /etc/sysconfig/memcached
    +
    +OPTIONS="-l 127.0.0.1,::1,controller"
    +
  4. +
  5. +

    执行如下命令,启动 Memcached 服务,并为其配置开机启动。

    +

    systemctl enable memcached.service
    +systemctl start memcached.service

    服务启动后,可以通过命令 memcached-tool controller stats 确认服务启动正常、可用,其中可以将 controller 替换为控制节点的管理IP地址。

    +
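    例如,可以执行如下命令查看memcached统计信息:

    memcached-tool controller stats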
  6. +
+

安装 OpenStack

+

Keystone 安装

+
    +
  1. +

    创建 keystone 数据库并授权。

    +
    mysql -u root -p
    +
    +MariaDB [(none)]> CREATE DATABASE keystone;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
    +IDENTIFIED BY 'KEYSTONE_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
    +IDENTIFIED BY 'KEYSTONE_DBPASS';
    +MariaDB [(none)]> exit
    +

    注意

    +

    替换 KEYSTONE_DBPASS,为 Keystone 数据库设置密码

    +
  2. +
  3. +

    安装软件包。

    +
    yum install openstack-keystone httpd python2-mod_wsgi
    +
  4. +
  5. +

    配置keystone相关配置

    +
    vim /etc/keystone/keystone.conf
    +
    +[database]
    +connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone
    +
    +[token]
    +provider = fernet
    +

    解释

    +

    [database]部分,配置数据库入口

    +

    [token]部分,配置token provider

    +

    注意:

    +

    替换 KEYSTONE_DBPASS 为 Keystone 数据库的密码

    +
  6. +
  7. +

    同步数据库。

    +
    su -s /bin/sh -c "keystone-manage db_sync" keystone
    +
  8. +
  9. +

    初始化Fernet密钥仓库。

    +
    keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
    +keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
    +
  10. +
  11. +

    启动服务。

    +
    keystone-manage bootstrap --bootstrap-password ADMIN_PASS \
    +--bootstrap-admin-url http://controller:5000/v3/ \
    +--bootstrap-internal-url http://controller:5000/v3/ \
    +--bootstrap-public-url http://controller:5000/v3/ \
    +--bootstrap-region-id RegionOne
    +

    注意

    +

    替换 ADMIN_PASS,为 admin 用户设置密码

    +
  12. +
  13. +

    配置Apache HTTP server

    +
    vim /etc/httpd/conf/httpd.conf
    +
    +ServerName controller
    +
    ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
    +

    解释

    +

    配置 ServerName 项引用控制节点

    +

    注意 +如果 ServerName 项不存在则需要创建

    +
  14. +
  15. +

    启动Apache HTTP服务。

    +
    systemctl enable httpd.service
    +systemctl start httpd.service
    +
  16. +
  17. +

    创建环境变量配置。

    +
    cat << EOF >> ~/.admin-openrc
    +export OS_PROJECT_DOMAIN_NAME=Default
    +export OS_USER_DOMAIN_NAME=Default
    +export OS_PROJECT_NAME=admin
    +export OS_USERNAME=admin
    +export OS_PASSWORD=ADMIN_PASS
    +export OS_AUTH_URL=http://controller:5000/v3
    +export OS_IDENTITY_API_VERSION=3
    +export OS_IMAGE_API_VERSION=2
    +EOF
    +

    注意

    +

    替换 ADMIN_PASS 为 admin 用户的密码

    +
  18. +
  19. +

    依次创建domain, projects, users, roles,需要先安装好python2-openstackclient:

    +
    yum install python2-openstackclient
    +

    导入环境变量

    +
    source ~/.admin-openrc
    +

    创建project service,其中 domain default 在 keystone-manage bootstrap 时已创建

    +
    openstack domain create --description "An Example Domain" example
    +
    openstack project create --domain default --description "Service Project" service
    +

    创建(non-admin)project myproject,user myuser 和 role myrole,为 myprojectmyuser 添加角色myrole

    +
    openstack project create --domain default --description "Demo Project" myproject
    +openstack user create --domain default --password-prompt myuser
    +openstack role create myrole
    +openstack role add --project myproject --user myuser myrole
    +
  20. +
  21. +

    验证

    +

    取消临时环境变量OS_AUTH_URL和OS_PASSWORD:

    +
    source ~/.admin-openrc
    +unset OS_AUTH_URL OS_PASSWORD
    +

    为admin用户请求token:

    +
    openstack --os-auth-url http://controller:5000/v3 \
    +--os-project-domain-name Default --os-user-domain-name Default \
    +--os-project-name admin --os-username admin token issue
    +

    为myuser用户请求token:

    +
    openstack --os-auth-url http://controller:5000/v3 \
    +--os-project-domain-name Default --os-user-domain-name Default \
    +--os-project-name myproject --os-username myuser token issue
    +
  22. +
+

Glance 安装

+
    +
  1. +

    创建数据库、服务凭证和 API 端点

    +

    创建数据库:

    +
    mysql -u root -p
    +
    +MariaDB [(none)]> CREATE DATABASE glance;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
    +IDENTIFIED BY 'GLANCE_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
    +IDENTIFIED BY 'GLANCE_DBPASS';
    +MariaDB [(none)]> exit
    +

    注意:

    +

    替换 GLANCE_DBPASS,为 glance 数据库设置密码

    +

    创建服务凭证

    +
    source ~/.admin-openrc
    +
    +openstack user create --domain default --password-prompt glance
    +openstack role add --project service --user glance admin
    +openstack service create --name glance --description "OpenStack Image" image
    +

    创建镜像服务API端点:

    +
    openstack endpoint create --region RegionOne image public http://controller:9292
    +openstack endpoint create --region RegionOne image internal http://controller:9292
    +openstack endpoint create --region RegionOne image admin http://controller:9292
    +
  2. +
  3. +

    安装软件包

    +
    yum install openstack-glance
    +
  4. +
  5. +

    配置glance相关配置:

    +
    vim /etc/glance/glance-api.conf
    +
    +[database]
    +connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
    +
    +[keystone_authtoken]
    +www_authenticate_uri  = http://controller:5000
    +auth_url = http://controller:5000
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +project_name = service
    +username = glance
    +password = GLANCE_PASS
    +
    +[paste_deploy]
    +flavor = keystone
    +
    +[glance_store]
    +stores = file,http
    +default_store = file
    +filesystem_store_datadir = /var/lib/glance/images/
    +
    vim /etc/glance/glance-registry.conf
    +
    +[database]
    +connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
    +
    +[keystone_authtoken]
    +www_authenticate_uri  = http://controller:5000
    +auth_url = http://controller:5000
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +project_name = service
    +username = glance
    +password = GLANCE_PASS
    +
    +[paste_deploy]
    +flavor = keystone
    +
    +[glance_store]
    +stores = file,http
    +default_store = file
    +filesystem_store_datadir = /var/lib/glance/images/
    +

    解释:

    +

    [database]部分,配置数据库入口

    +

    [keystone_authtoken] [paste_deploy]部分,配置身份认证服务入口

    +

    [glance_store]部分,配置本地文件系统存储和镜像文件的位置

    +

    注意

    +

    替换 GLANCE_DBPASS 为 glance 数据库的密码

    +

    替换 GLANCE_PASS 为 glance 用户的密码

    +
  6. +
  7. +

    同步数据库:

    +
    su -s /bin/sh -c "glance-manage db_sync" glance
    +
  8. +
  9. +

    启动服务:

    +
    systemctl enable openstack-glance-api.service openstack-glance-registry.service
    +systemctl start openstack-glance-api.service openstack-glance-registry.service
    +
  10. +
  11. +

    验证

    +

    下载镜像

    +
    source ~/.admin-openrc
    +
    +wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
    +

    注意

    +

    如果您使用的环境是鲲鹏架构,请下载arm64版本的镜像

    +

    向Image服务上传镜像:

    +
    openstack image create --disk-format qcow2 --container-format bare \
    +                       --file cirros-0.4.0-x86_64-disk.img --public cirros
    +

    确认镜像上传并验证属性:

    +
    openstack image list
    +
  12. +
+

Nova 安装

+
    +
  1. +

    创建数据库、服务凭证和 API 端点

    +

    创建数据库:

    +
    mysql -u root -p                                                                               (CTL)
    +
    +MariaDB [(none)]> CREATE DATABASE nova_api;
    +MariaDB [(none)]> CREATE DATABASE nova;
    +MariaDB [(none)]> CREATE DATABASE nova_cell0;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> exit
    +

    注意

    +

    替换NOVA_DBPASS,为nova数据库设置密码

    +
    source ~/.admin-openrc                                                                         (CTL)
    +

    创建nova服务凭证:

    +
    openstack user create --domain default --password-prompt nova                                  (CTL)
    +openstack role add --project service --user nova admin                                         (CTL)
    +openstack service create --name nova --description "OpenStack Compute" compute                 (CTL)
    +

    创建placement服务凭证:

    +
    openstack user create --domain default --password-prompt placement                             (CTL)
    +openstack role add --project service --user placement admin                                    (CTL)
    +openstack service create --name placement --description "Placement API" placement              (CTL)
    +

    创建nova API端点:

    +
    openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1        (CTL)
    +openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1      (CTL)
    +openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1         (CTL)
    +

    创建placement API端点:

    +
    openstack endpoint create --region RegionOne placement public http://controller:8778           (CTL)
    +openstack endpoint create --region RegionOne placement internal http://controller:8778         (CTL)
    +openstack endpoint create --region RegionOne placement admin http://controller:8778            (CTL)
    +
  2. +
  3. +

    安装软件包

    +
    yum install openstack-nova-api openstack-nova-conductor openstack-nova-console \
    +novnc openstack-nova-novncproxy openstack-nova-scheduler \
    +openstack-nova-placement-api                                                         (CTL)
    +
    +yum install openstack-nova-compute                                                   (CPT)
    +

    注意

    +

    如果为arm64结构,还需要执行以下命令

    +
    yum install edk2-aarch64                                                                       (CPT)
    +
  4. +
  5. +

    配置nova相关配置

    +
    vim /etc/nova/nova.conf
    +
    +[DEFAULT]
    +enabled_apis = osapi_compute,metadata
    +transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
    +my_ip = 10.0.0.1
    +use_neutron = true
    +firewall_driver = nova.virt.firewall.NoopFirewallDriver
    +compute_driver = libvirt.LibvirtDriver                                                         (CPT)
    +instances_path = /var/lib/nova/instances/                                                      (CPT)
    +lock_path = /var/lib/nova/tmp                                                                  (CPT)
    +logdir = /var/log/nova/
    +
    +
    +[api_database]
    +connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api                              (CTL)
    +
    +[database]
    +connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova 
    +
    +[api]
    +auth_strategy = keystone
    +
    +[keystone_authtoken]
    +www_authenticate_uri = http://controller:5000/
    +auth_url = http://controller:5000/
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +project_name = service
    +username = nova
    +password = NOVA_PASS
    +
    +[vnc]
    +enabled = true
    +server_listen = $my_ip
    +server_proxyclient_address = $my_ip
    +novncproxy_base_url = http://controller:6080/vnc_auto.html                                     (CPT)
    +
    +[glance]
    +api_servers = http://controller:9292
    +
    +[oslo_concurrency]
    +lock_path = /var/lib/nova/tmp                                                                  (CTL)
    +
    +[placement]
    +region_name = RegionOne
    +project_domain_name = Default
    +project_name = service
    +auth_type = password
    +user_domain_name = Default
    +auth_url = http://controller:5000/v3
    +username = placement
    +password = PLACEMENT_PASS
    +
    +[neutron]
    +auth_url = http://controller:5000
    +auth_type = password
    +project_domain_name = default
    +user_domain_name = default
    +region_name = RegionOne
    +project_name = service
    +username = neutron
    +password = NEUTRON_PASS
    +service_metadata_proxy = true                                                                  (CTL)
    +metadata_proxy_shared_secret = METADATA_SECRET                                                 (CTL)
    +

    解释

    +

    [default]部分,启用计算和元数据的API,配置RabbitMQ消息队列入口,配置my_ip,启用网络服务neutron;

    +

    [api_database] [database]部分,配置数据库入口;

    +

    [api] [keystone_authtoken]部分,配置身份认证服务入口;

    +

    [vnc]部分,启用并配置远程控制台入口;

    +

    [glance]部分,配置镜像服务API的地址;

    +

    [oslo_concurrency]部分,配置lock path;

    +

    [placement]部分,配置placement服务的入口。

    +

    注意

    +

    替换 RABBIT_PASS 为 RabbitMQ 中 openstack 账户的密码;

    +

    配置 my_ip 为控制节点的管理IP地址;

    +

    替换 NOVA_DBPASS 为nova数据库的密码;

    +

    替换 NOVA_PASS 为nova用户的密码;

    +

    替换 PLACEMENT_PASS 为placement用户的密码;

    +

    替换 NEUTRON_PASS 为neutron用户的密码;

    +

    替换METADATA_SECRET为合适的元数据代理secret。

    +

    额外

    +

    手动增加Placement API接入配置。

    +
    vim /etc/httpd/conf.d/00-nova-placement-api.conf                                               (CTL)
    +
    +<Directory /usr/bin>
    +   <IfVersion >= 2.4>
    +      Require all granted
    +   </IfVersion>
    +   <IfVersion < 2.4>
    +      Order allow,deny
    +      Allow from all
    +   </IfVersion>
    +</Directory>
    +

    重启httpd服务:

    +
    systemctl restart httpd                                                                        (CTL)
    +

    确定是否支持虚拟机硬件加速(x86架构):

    +
    egrep -c '(vmx|svm)' /proc/cpuinfo                                                             (CPT)
    +

    如果返回值为0则不支持硬件加速,需要配置libvirt使用QEMU而不是KVM:

    +
    vim /etc/nova/nova.conf                                                                        (CPT)
    +
    +[libvirt]
    +virt_type = qemu
    +

    如果返回值为1或更大的值,则支持硬件加速,不需要进行额外的配置

    +

    注意

    +

    如果为arm64架构,还需要在计算节点执行以下命令

    +
    mkdir -p /usr/share/AAVMF
    +chown nova:nova /usr/share/AAVMF
    +
    +ln -s /usr/share/edk2/aarch64/QEMU_EFI-pflash.raw \
    +      /usr/share/AAVMF/AAVMF_CODE.fd
    +ln -s /usr/share/edk2/aarch64/vars-template-pflash.raw \
    +      /usr/share/AAVMF/AAVMF_VARS.fd
    +
    +vim /etc/libvirt/qemu.conf
    +
    +nvram = ["/usr/share/AAVMF/AAVMF_CODE.fd: \
    +         /usr/share/AAVMF/AAVMF_VARS.fd", \
    +         "/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw: \
    +         /usr/share/edk2/aarch64/vars-template-pflash.raw"]
    +

    并且当ARM架构下的部署环境为嵌套虚拟化时,libvirt配置如下:

    +
    [libvirt]
    +virt_type = qemu
    +cpu_mode = custom
    +cpu_model = cortex-a72
    +
  6. +
  7. +

    同步数据库

    +

    同步nova-api数据库:

    +
    su -s /bin/sh -c "nova-manage api_db sync" nova                                                (CTL)
    +

    注册cell0数据库:

    +
    su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova                                          (CTL)
    +

    创建cell1 cell:

    +
    su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova                 (CTL)
    +

    同步nova数据库:

    +
    su -s /bin/sh -c "nova-manage db sync" nova                                                    (CTL)
    +

    验证cell0和cell1注册正确:

    +
    su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova                                         (CTL)
    +

    添加计算节点到openstack集群

    +
    su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova                           (CPT)
    +
  8. +
  9. +

    启动服务

    +
    systemctl enable \                                                                             (CTL)
    +openstack-nova-api.service \
    +openstack-nova-consoleauth.service \
    +openstack-nova-scheduler.service \
    +openstack-nova-conductor.service \
    +openstack-nova-novncproxy.service
    +
    +systemctl start \                                                                              (CTL)
    +openstack-nova-api.service \
    +openstack-nova-consoleauth.service \
    +openstack-nova-scheduler.service \
    +openstack-nova-conductor.service \
    +openstack-nova-novncproxy.service
    +
    systemctl enable libvirtd.service openstack-nova-compute.service                               (CPT)
    +systemctl start libvirtd.service openstack-nova-compute.service                                (CPT)
    +
  10. +
  11. +

    验证

    +
    source ~/.admin-openrc                                                                         (CTL)
    +

    列出服务组件,验证每个流程都成功启动和注册:

    +
    openstack compute service list                                                                 (CTL)
    +

    列出身份服务中的API端点,验证与身份服务的连接:

    +
    openstack catalog list                                                                         (CTL)
    +

    列出镜像服务中的镜像,验证与镜像服务的连接:

    +
    openstack image list                                                                           (CTL)
    +

    检查cells和placement API是否运作成功,以及其他必要条件是否已具备。

    +
    nova-status upgrade check                                                                      (CTL)
    +
  12. +
+

Neutron 安装

+
    +
  1. +

    创建数据库、服务凭证和 API 端点

    +

    创建数据库:

    +
    mysql -u root -p                                                                               (CTL)
    +
    +MariaDB [(none)]> CREATE DATABASE neutron;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
    +IDENTIFIED BY 'NEUTRON_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
    +IDENTIFIED BY 'NEUTRON_DBPASS';
    +MariaDB [(none)]> exit
    +

    注意

    +

    替换 NEUTRON_DBPASS 为 neutron 数据库设置密码。

    +
    source ~/.admin-openrc                                                                         (CTL)
    +

    创建neutron服务凭证

    +
    openstack user create --domain default --password-prompt neutron                               (CTL)
    +openstack role add --project service --user neutron admin                                      (CTL)
    +openstack service create --name neutron --description "OpenStack Networking" network           (CTL)
    +

    创建Neutron服务API端点:

    +
    openstack endpoint create --region RegionOne network public http://controller:9696             (CTL)
    +openstack endpoint create --region RegionOne network internal http://controller:9696           (CTL)
    +openstack endpoint create --region RegionOne network admin http://controller:9696              (CTL)
    +
  2. +
  3. +

    安装软件包:

    +
    yum install openstack-neutron openstack-neutron-linuxbridge-agent \      (CTL)
    +            ebtables ipset openstack-neutron-l3-agent \
    +            openstack-neutron-dhcp-agent \
    +            openstack-neutron-metadata-agent
    +
    yum install openstack-neutron-linuxbridge-agent ebtables ipset                      (CPT)
    +
  4. +
  5. +

    配置neutron相关配置:

    +

    配置主体配置

    +
    vim /etc/neutron/neutron.conf
    +
    +[database]
    +connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron                         (CTL)
    +
    +[DEFAULT]
    +core_plugin = ml2                                                                              (CTL)
    +service_plugins = router                                                                       (CTL)
    +allow_overlapping_ips = true                                                                   (CTL)
    +transport_url = rabbit://openstack:RABBIT_PASS@controller
    +auth_strategy = keystone
    +notify_nova_on_port_status_changes = true                                                      (CTL)
    +notify_nova_on_port_data_changes = true                                                        (CTL)
    +api_workers = 3                                                                                (CTL)
    +
    +[keystone_authtoken]
    +www_authenticate_uri = http://controller:5000
    +auth_url = http://controller:5000
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +project_name = service
    +username = neutron
    +password = NEUTRON_PASS
    +
    +[nova]
    +auth_url = http://controller:5000                                                              (CTL)
    +auth_type = password                                                                           (CTL)
    +project_domain_name = Default                                                                  (CTL)
    +user_domain_name = Default                                                                     (CTL)
    +region_name = RegionOne                                                                        (CTL)
    +project_name = service                                                                         (CTL)
    +username = nova                                                                                (CTL)
    +password = NOVA_PASS                                                                           (CTL)
    +
    +[oslo_concurrency]
    +lock_path = /var/lib/neutron/tmp
    +

    解释

    +

    [database]部分,配置数据库入口;

    +

    [default]部分,启用ml2插件和router插件,允许ip地址重叠,配置RabbitMQ消息队列入口;

    +

    [default] [keystone]部分,配置身份认证服务入口;

    +

    [default] [nova]部分,配置网络来通知计算网络拓扑的变化;

    +

    [oslo_concurrency]部分,配置lock path。

    +

    注意

    +

    替换NEUTRON_DBPASS为 neutron 数据库的密码;

    +

    替换RABBIT_PASS为 RabbitMQ中openstack 账户的密码;

    +

    替换NEUTRON_PASS为 neutron 用户的密码;

    +

    替换NOVA_PASS为 nova 用户的密码。

    +

    配置ML2插件:

    +
    vim /etc/neutron/plugins/ml2/ml2_conf.ini
    +
    +[ml2]
    +type_drivers = flat,vlan,vxlan
    +tenant_network_types = vxlan
    +mechanism_drivers = linuxbridge,l2population
    +extension_drivers = port_security
    +
    +[ml2_type_flat]
    +flat_networks = provider
    +
    +[ml2_type_vxlan]
    +vni_ranges = 1:1000
    +
    +[securitygroup]
    +enable_ipset = true
    +

    创建/etc/neutron/plugin.ini的符号链接

    +
    ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
    +

    注意

    +

    [ml2]部分,启用 flat、vlan、vxlan 网络,启用 linuxbridge 及 l2population 机制,启用端口安全扩展驱动;

    +

    [ml2_type_flat]部分,配置 flat 网络为 provider 虚拟网络;

    +

    [ml2_type_vxlan]部分,配置 VXLAN 网络标识符范围;

    +

    [securitygroup]部分,配置允许 ipset。

    +

    补充

    +

    l2 的具体配置可以根据用户需求自行修改,本文使用的是provider network + linuxbridge

    +

    配置 Linux bridge 代理:

    +
    vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
    +
    +[linux_bridge]
    +physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME
    +
    +[vxlan]
    +enable_vxlan = true
    +local_ip = OVERLAY_INTERFACE_IP_ADDRESS
    +l2_population = true
    +
    +[securitygroup]
    +enable_security_group = true
    +firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
    +

    解释

    +

    [linux_bridge]部分,映射 provider 虚拟网络到物理网络接口;

    +

    [vxlan]部分,启用 vxlan 覆盖网络,配置处理覆盖网络的物理网络接口 IP 地址,启用 layer-2 population;

    +

    [securitygroup]部分,允许安全组,配置 linux bridge iptables 防火墙驱动。

    +

    注意

    +

    替换PROVIDER_INTERFACE_NAME为物理网络接口;

    +

    替换OVERLAY_INTERFACE_IP_ADDRESS为控制节点的管理IP地址。

    +
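
    例如,假设 provider 网络使用物理网卡 eth0、本节点的 overlay IP 为 10.0.0.11(两者均为示意值,请按实际环境替换占位符),对应配置大致如下:

    ```ini
    [linux_bridge]
    # PROVIDER_INTERFACE_NAME 替换为实际的物理网卡名,此处以 eth0 为例
    physical_interface_mappings = provider:eth0

    [vxlan]
    enable_vxlan = true
    # OVERLAY_INTERFACE_IP_ADDRESS 替换为本节点处理 overlay 流量的 IP,此处以 10.0.0.11 为例
    local_ip = 10.0.0.11
    l2_population = true
    ```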

    配置Layer-3代理:

    +
    vim /etc/neutron/l3_agent.ini                                                                   (CTL)
    +
    +[DEFAULT]
    +interface_driver = linuxbridge
    +

    解释

    +

    在[default]部分,配置接口驱动为linuxbridge

    +

    配置DHCP代理:

    +
    vim /etc/neutron/dhcp_agent.ini                                                                (CTL)
    +
    +[DEFAULT]
    +interface_driver = linuxbridge
    +dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
    +enable_isolated_metadata = true
    +

    解释

    +

    [default]部分,配置linuxbridge接口驱动、Dnsmasq DHCP驱动,启用隔离的元数据。

    +

    配置metadata代理:

    +
    vim /etc/neutron/metadata_agent.ini                                                            (CTL)
    +
    +[DEFAULT]
    +nova_metadata_host = controller
    +metadata_proxy_shared_secret = METADATA_SECRET
    +

    解释

    +

    [default]部分,配置元数据主机和shared secret。

    +

    注意

    +

    替换METADATA_SECRET为合适的元数据代理secret。

    +
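
    METADATA_SECRET 可以自行指定,也可以用 openssl 随机生成一个(以下命令仅为一种示意做法):

    ```shell
    # 生成一段随机字符串作为元数据代理 secret
    openssl rand -hex 10
    ```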
  6. +
  7. +

    配置nova相关配置

    +
    vim /etc/nova/nova.conf
    +
    +[neutron]
    +auth_url = http://controller:5000
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +region_name = RegionOne
    +project_name = service
    +username = neutron
    +password = NEUTRON_PASS
    +service_metadata_proxy = true                                                                  (CTL)
    +metadata_proxy_shared_secret = METADATA_SECRET                                                 (CTL)
    +

    解释

    +

    [neutron]部分,配置访问参数,启用元数据代理,配置secret。

    +

    注意

    +

    替换NEUTRON_PASS为 neutron 用户的密码;

    +

    替换METADATA_SECRET为合适的元数据代理secret。

    +
  8. +
  9. +

    同步数据库:

    +
    su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
    +--config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
    +
  10. +
  11. +

    重启计算API服务:

    +
    systemctl restart openstack-nova-api.service
    +
  12. +
  13. +

    启动网络服务

    +
    systemctl enable openstack-neutron-server.service \                                            (CTL)
    +openstack-neutron-linuxbridge-agent.service openstack-neutron-dhcp-agent.service \
    +openstack-neutron-metadata-agent.service openstack-neutron-l3-agent.service
    +systemctl restart openstack-nova-api.service openstack-neutron-server.service \                (CTL)
    +openstack-neutron-linuxbridge-agent.service openstack-neutron-dhcp-agent.service \
    +openstack-neutron-metadata-agent.service openstack-neutron-l3-agent.service
    +
    +systemctl enable openstack-neutron-linuxbridge-agent.service                                   (CPT)
    +systemctl restart openstack-neutron-linuxbridge-agent.service openstack-nova-compute.service   (CPT)
    +
  14. +
  15. +

    验证

    +

    列出代理验证 neutron 代理启动成功:

    +
    openstack network agent list
    +
  16. +
+

Cinder 安装

+
    +
  1. +

    创建数据库、服务凭证和 API 端点

    +

    创建数据库:

    +
    mysql -u root -p
    +
    +MariaDB [(none)]> CREATE DATABASE cinder;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \
    +IDENTIFIED BY 'CINDER_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \
    +IDENTIFIED BY 'CINDER_DBPASS';
    +MariaDB [(none)]> exit
    +

    注意

    +

    替换 CINDER_DBPASS 为cinder数据库设置密码。

    +
    source ~/.admin-openrc
    +

    创建cinder服务凭证:

    +
    openstack user create --domain default --password-prompt cinder
    +openstack role add --project service --user cinder admin
    +openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
    +openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
    +

    创建块存储服务API端点:

    +
    openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(project_id\)s
    +openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(project_id\)s
    +openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(project_id\)s
    +openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s
    +openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s
    +openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s
    +
  2. +
  3. +

    安装软件包:

    +
    yum install openstack-cinder-api openstack-cinder-scheduler              (CTL)
    +
    yum install lvm2 device-mapper-persistent-data scsi-target-utils rpcbind nfs-utils \           (CPT)
    +            openstack-cinder-volume openstack-cinder-backup
    +
  4. +
  5. +

    准备存储设备,以下仅为示例:

    +
    pvcreate /dev/vdb
    +vgcreate cinder-volumes /dev/vdb
    +
    +vim /etc/lvm/lvm.conf
    +
    +
    +devices {
    +...
    +filter = [ "a/vdb/", "r/.*/"]
    +

    解释

    +

    在devices部分,添加过滤以接受/dev/vdb设备拒绝其他设备。

    +
  6. +
  7. +

    准备NFS

    +
    mkdir -p /root/cinder/backup
    +
    +cat << EOF >> /etc/exports
    +/root/cinder/backup 192.168.1.0/24(rw,sync,no_root_squash,no_all_squash)
    +EOF
    +
    +
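
    写入 /etc/exports 后,可以让 NFS 服务重新加载导出目录并确认共享已生效(假设 nfs-server 已安装并运行,网段与上面的示例一致,仅为示意):

    ```shell
    # 重新导出 /etc/exports 中的全部目录
    exportfs -r
    # 查看本机当前导出的 NFS 共享
    showmount -e localhost
    ```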
  8. +
  9. +

    配置cinder相关配置:

    +
    vim /etc/cinder/cinder.conf
    +
    +[DEFAULT]
    +transport_url = rabbit://openstack:RABBIT_PASS@controller
    +auth_strategy = keystone
    +my_ip = 10.0.0.11
    +enabled_backends = lvm                                                                         (CPT)
    +backup_driver=cinder.backup.drivers.nfs.NFSBackupDriver                                        (CPT)
    +backup_share=HOST:PATH                                                                         (CPT)
    +
    +[database]
    +connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder
    +
    +[keystone_authtoken]
    +www_authenticate_uri = http://controller:5000
    +auth_url = http://controller:5000
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +project_name = service
    +username = cinder
    +password = CINDER_PASS
    +
    +[oslo_concurrency]
    +lock_path = /var/lib/cinder/tmp
    +
    +[lvm]
    +volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver                                      (CPT)
    +volume_group = cinder-volumes                                                                  (CPT)
    +iscsi_protocol = iscsi                                                                         (CPT)
    +iscsi_helper = tgtadm                                                                          (CPT)
    +

    解释

    +

    [database]部分,配置数据库入口;

    +

    [DEFAULT]部分,配置RabbitMQ消息队列入口,配置my_ip;

    +

    [DEFAULT] [keystone_authtoken]部分,配置身份认证服务入口;

    +

    [oslo_concurrency]部分,配置lock path。

    +

    注意

    +

    替换CINDER_DBPASS为 cinder 数据库的密码;

    +

    替换RABBIT_PASS为 RabbitMQ 中 openstack 账户的密码;

    +

    配置my_ip为控制节点的管理 IP 地址;

    +

    替换CINDER_PASS为 cinder 用户的密码;

    +

    替换HOST:PATH为 NFS的HOSTIP和共享路径;

    +
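
    例如,若 NFS 服务端为前面准备共享目录的节点(此处以 192.168.1.10 为示意 IP)、共享路径为 /root/cinder/backup,则可写成:

    ```ini
    backup_share = 192.168.1.10:/root/cinder/backup
    ```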
  10. +
  11. +

    同步数据库:

    +
    su -s /bin/sh -c "cinder-manage db sync" cinder                                                (CTL)
    +
  12. +
  13. +

    配置nova:

    +
    vim /etc/nova/nova.conf                                                                        (CTL)
    +
    +[cinder]
    +os_region_name = RegionOne
    +
  14. +
  15. +

    重启计算API服务

    +
    systemctl restart openstack-nova-api.service
    +
  16. +
  17. +

    启动cinder服务

    +
    systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service               (CTL)
    +systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service                (CTL)
    +
    systemctl enable rpcbind.service nfs-server.service tgtd.service iscsid.service \              (CPT)
    +                 openstack-cinder-volume.service \
    +                 openstack-cinder-backup.service
    +systemctl start rpcbind.service nfs-server.service tgtd.service iscsid.service \               (CPT)
    +                openstack-cinder-volume.service \
    +                openstack-cinder-backup.service
    +

    注意

    +

    当cinder使用tgtadm的方式挂卷的时候,要修改/etc/tgt/tgtd.conf,内容如下,保证tgtd可以发现cinder-volume的iscsi target。

    +
    include /var/lib/cinder/volumes/*
    +
  18. +
  19. +

    验证

    +
    source ~/.admin-openrc
    +openstack volume service list
    +
  20. +
+

horizon 安装

+
    +
  1. +

    安装软件包

    +
    yum install openstack-dashboard
    +
  2. +
  3. +

    修改文件

    +

    修改变量

    +
    vim /etc/openstack-dashboard/local_settings
    +
    +ALLOWED_HOSTS = ['*', ]
    +OPENSTACK_HOST = "controller"
    +OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
    +
  4. +
  5. +

    重启 httpd 服务

    +
    systemctl restart httpd
    +
  6. +
  7. +

    验证

    打开浏览器,输入网址http://HOSTIP/dashboard/,登录 horizon。

    +

    注意

    +

    替换HOSTIP为控制节点管理平面IP地址

    +
  8. +
+

Tempest 安装

+

Tempest是OpenStack的集成测试服务,如果用户需要全面自动化测试已安装的OpenStack环境的功能,则推荐使用该组件。否则,可以不用安装

+
    +
  1. +

    安装Tempest

    +
    yum install openstack-tempest
    +
  2. +
  3. +

    初始化目录

    +
    tempest init mytest
    +
  4. +
  5. +

    修改配置文件。

    +
    cd mytest
    +vi etc/tempest.conf
    +

    tempest.conf中需要配置当前OpenStack环境的信息,具体内容可以参考官方示例

    +
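
    下面是一个最小化的 tempest.conf 配置示意(其中密码、镜像 ID、flavor ID、网络 ID 等均为占位值,需按实际环境填写,完整字段请以官方示例为准):

    ```ini
    [auth]
    admin_username = admin
    admin_password = ADMIN_PASS
    admin_project_name = admin
    admin_domain_name = Default

    [identity]
    uri_v3 = http://controller:5000/v3
    auth_version = v3

    [compute]
    image_ref = IMAGE_ID
    flavor_ref = FLAVOR_ID

    [network]
    public_network_id = PUBLIC_NETWORK_ID
    ```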
  6. +
  7. +

    执行测试

    +
    tempest run
    +
  8. +
+

Ironic 安装

+

Ironic是OpenStack的裸金属服务,如果用户需要进行裸机部署则推荐使用该组件。否则,可以不用安装。

+
    +
  1. 设置数据库
  2. +
+

裸金属服务在数据库中存储信息,创建一个ironic用户可以访问的ironic数据库,替换IRONIC_DBPASSWORD为合适的密码

+
mysql -u root -p
+
+MariaDB [(none)]> CREATE DATABASE ironic CHARACTER SET utf8;
+MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'localhost' \
+IDENTIFIED BY 'IRONIC_DBPASSWORD';
+MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'%' \
+IDENTIFIED BY 'IRONIC_DBPASSWORD';
+
    +
  1. 安装软件包
  2. +
+
yum install openstack-ironic-api openstack-ironic-conductor python2-ironicclient
+

启动服务

+
systemctl enable openstack-ironic-api openstack-ironic-conductor
+systemctl start openstack-ironic-api openstack-ironic-conductor
+
    +
  1. 创建服务用户认证
  2. +
+

1、创建Bare Metal服务用户

+
openstack user create --password IRONIC_PASSWORD \
+                      --email ironic@example.com ironic
+openstack role add --project service --user ironic admin
+openstack service create --name ironic --description "Ironic baremetal provisioning service" baremetal
+
+

2、创建Bare Metal服务访问入口

+
openstack endpoint create --region RegionOne baremetal admin http://$IRONIC_NODE:6385
+openstack endpoint create --region RegionOne baremetal public http://$IRONIC_NODE:6385
+openstack endpoint create --region RegionOne baremetal internal http://$IRONIC_NODE:6385
+
    +
  1. 配置ironic-api服务
  2. +
+

配置文件路径/etc/ironic/ironic.conf

+

1、通过connection选项配置数据库的位置,如下所示,替换IRONIC_DBPASSWORD为ironic用户的密码,替换DB_IP为DB服务器所在的IP地址:

+
[database]
+
+# The SQLAlchemy connection string used to connect to the
+# database (string value)
+
+connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic
+

2、通过以下选项配置ironic-api服务使用RabbitMQ消息代理,替换RPC_*为RabbitMQ的详细地址和凭证

+
[DEFAULT]
+
+# A URL representing the messaging driver to use and its full
+# configuration. (string value)
+
+transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
+

用户也可自行使用json-rpc方式替换rabbitmq

+

3、配置ironic-api服务使用身份认证服务的凭证,替换PUBLIC_IDENTITY_IP为身份认证服务器的公共IP,替换PRIVATE_IDENTITY_IP为身份认证服务器的私有IP,替换IRONIC_PASSWORD为身份认证服务中ironic用户的密码:

+
[DEFAULT]
+
+# Authentication strategy used by ironic-api: one of
+# "keystone" or "noauth". "noauth" should not be used in a
+# production environment because all authentication will be
+# disabled. (string value)
+
+auth_strategy=keystone
+
+[keystone_authtoken]
+# Authentication type to load (string value)
+auth_type=password
+# Complete public Identity API endpoint (string value)
+www_authenticate_uri=http://PUBLIC_IDENTITY_IP:5000
+# Complete admin Identity API endpoint. (string value)
+auth_url=http://PRIVATE_IDENTITY_IP:5000
+# Service username. (string value)
+username=ironic
+# Service account password. (string value)
+password=IRONIC_PASSWORD
+# Service tenant name. (string value)
+project_name=service
+# Domain name containing project (string value)
+project_domain_name=Default
+# User's domain name (string value)
+user_domain_name=Default
+

4、创建裸金属服务数据库表

+
ironic-dbsync --config-file /etc/ironic/ironic.conf create_schema
+

5、重启ironic-api服务

+
sudo systemctl restart openstack-ironic-api
+
    +
  1. 配置ironic-conductor服务
  2. +
+

1、替换HOST_IP为conductor host的IP

+
[DEFAULT]
+
+# IP address of this host. If unset, will determine the IP
+# programmatically. If unable to do so, will use "127.0.0.1".
+# (string value)
+
+my_ip=HOST_IP
+

2、配置数据库的位置,ironic-conductor应该使用和ironic-api相同的配置。替换IRONIC_DBPASSWORD为ironic用户的密码,替换DB_IP为DB服务器所在的IP地址:

+
[database]
+
+# The SQLAlchemy connection string to use to connect to the
+# database. (string value)
+
+connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic
+

3、通过以下选项配置ironic-api服务使用RabbitMQ消息代理,ironic-conductor应该使用和ironic-api相同的配置,替换RPC_*为RabbitMQ的详细地址和凭证

+
[DEFAULT]
+
+# A URL representing the messaging driver to use and its full
+# configuration. (string value)
+
+transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
+

用户也可自行使用json-rpc方式替换rabbitmq

+

4、配置凭证访问其他OpenStack服务

+

为了与其他OpenStack服务进行通信,裸金属服务在请求其他服务时需要使用服务用户与OpenStack Identity服务进行认证。这些用户的凭据必须在与相应服务相关的每个配置文件中进行配置。

+
[neutron] - 访问Openstack网络服务
+[glance] - 访问Openstack镜像服务
+[swift] - 访问Openstack对象存储服务
+[cinder] - 访问Openstack块存储服务
+[inspector] - 访问Openstack裸金属introspection服务
+[service_catalog] - 一个特殊项用于保存裸金属服务使用的凭证,该凭证用于发现注册在Openstack身份认证服务目录中的自己的API URL端点
+

简单起见,可以对所有服务使用同一个服务用户。为了向后兼容,该用户应该和ironic-api服务的[keystone_authtoken]所配置的为同一个用户。但这不是必须的,也可以为每个服务创建并配置不同的服务用户。

+

在下面的示例中,用户访问openstack网络服务的身份验证信息配置为:

+
网络服务部署在名为RegionOne的身份认证服务域中,仅在服务目录中注册公共端点接口
+
+请求时使用特定的CA SSL证书进行HTTPS连接
+
+与ironic-api服务配置相同的服务用户
+
+动态密码认证插件基于其他选项发现合适的身份认证服务API版本
+
[neutron]
+
+# Authentication type to load (string value)
+auth_type = password
+# Authentication URL (string value)
+auth_url=https://IDENTITY_IP:5000/
+# Username (string value)
+username=ironic
+# User's password (string value)
+password=IRONIC_PASSWORD
+# Project name to scope to (string value)
+project_name=service
+# Domain ID containing project (string value)
+project_domain_id=default
+# User's domain id (string value)
+user_domain_id=default
+# PEM encoded Certificate Authority to use when verifying
+# HTTPs connections. (string value)
+cafile=/opt/stack/data/ca-bundle.pem
+# The default region_name for endpoint URL discovery. (string
+# value)
+region_name = RegionOne
+# List of interfaces, in order of preference, for endpoint
+# URL. (list value)
+valid_interfaces=public
+

默认情况下,为了与其他服务进行通信,裸金属服务会尝试通过身份认证服务的服务目录发现该服务合适的端点。如果希望对一个特定服务使用一个不同的端点,则在裸金属服务的配置文件中通过endpoint_override选项进行指定:

+
[neutron] ... endpoint_override = <NEUTRON_API_ADDRESS>
+

5、配置允许的驱动程序和硬件类型

+

通过设置enabled_hardware_types设置ironic-conductor服务允许使用的硬件类型:

+
[DEFAULT]
+enabled_hardware_types = ipmi
+

配置硬件接口:

+
enabled_boot_interfaces = pxe
+enabled_deploy_interfaces = direct,iscsi
+enabled_inspect_interfaces = inspector
+enabled_management_interfaces = ipmitool
+enabled_power_interfaces = ipmitool
+

配置接口默认值:

+
[DEFAULT]
+default_deploy_interface = direct
+default_network_interface = neutron
+

如果启用了任何使用Direct deploy的驱动,必须安装和配置镜像服务的Swift后端。Ceph对象网关(RADOS网关)也支持作为镜像服务的后端。

+

6、重启ironic-conductor服务

+
sudo systemctl restart openstack-ironic-conductor
+
    +
  1. deploy ramdisk镜像制作
  2. +
+

Q版的ramdisk镜像支持通过ironic-python-agent服务或disk-image-builder工具制作,也可以使用社区最新的ironic-python-agent-builder。用户也可以自行选择其他工具制作。若使用Q版原生工具,则需要安装对应的软件包。

+

yum install openstack-ironic-python-agent
+或者
+yum install diskimage-builder
+ 具体的使用方法可以参考官方文档

+
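
例如,若选择 diskimage-builder,构建 deploy ramdisk 的命令大致如下(元素名、发行版和输出文件名仅为示意,具体请以 diskimage-builder 官方文档为准):

```shell
# 使用 ironic-agent 元素基于 centos7 构建 deploy 内核与 ramdisk,
# 预期输出 ironic-deploy.kernel 和 ironic-deploy.initramfs
disk-image-create ironic-agent centos7 -o ironic-deploy
```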

这里介绍下使用ironic-python-agent-builder构建ironic使用的deploy镜像的完整过程。

+
    +
  1. +

    安装 ironic-python-agent-builder

    +
    1. 安装工具:
    +
    +    ```shell
    +    pip install ironic-python-agent-builder
    +    ```
    +
    +2. 修改以下文件中的python解释器:
    +
    +    ```shell
    +    /usr/bin/yum /usr/libexec/urlgrabber-ext-down
    +    ```
    +
    +3. 安装其它必须的工具:
    +
    +    ```shell
    +    yum install git
    +    ```
    +
    +    由于`DIB`依赖`semanage`命令,所以在制作镜像之前确定该命令是否可用:`semanage --help`,如果提示无此命令,安装即可:
    +
    +    ```shell
    +    # 先查询需要安装哪个包
    +    [root@localhost ~]# yum provides /usr/sbin/semanage
    +    已加载插件:fastestmirror
    +    Loading mirror speeds from cached hostfile
    +    * base: mirror.vcu.edu
    +    * extras: mirror.vcu.edu
    +    * updates: mirror.math.princeton.edu
    +    policycoreutils-python-2.5-34.el7.aarch64 : SELinux policy core python utilities
    +    源    :base
    +    匹配来源:
    +    文件名    :/usr/sbin/semanage
    +    # 安装
    +    [root@localhost ~]# yum install policycoreutils-python
    +    ```
    +
  2. +
  3. +

    制作镜像

    +
    如果是`arm`架构,需要添加:
    +```shell
    +export ARCH=aarch64
    +```
    +
    +基本用法:
    +
    +```shell
    +usage: ironic-python-agent-builder [-h] [-r RELEASE] [-o OUTPUT] [-e ELEMENT]
    +                                    [-b BRANCH] [-v] [--extra-args EXTRA_ARGS]
    +                                    distribution
    +
    +positional arguments:
    +    distribution          Distribution to use
    +
    +optional arguments:
    +    -h, --help            show this help message and exit
    +    -r RELEASE, --release RELEASE
    +                        Distribution release to use
    +    -o OUTPUT, --output OUTPUT
    +                        Output base file name
    +    -e ELEMENT, --element ELEMENT
    +                        Additional DIB element to use
    +    -b BRANCH, --branch BRANCH
    +                        If set, override the branch that is used for ironic-
    +                        python-agent and requirements
    +    -v, --verbose         Enable verbose logging in diskimage-builder
    +    --extra-args EXTRA_ARGS
    +                        Extra arguments to pass to diskimage-builder
    +```
    +
    +举例说明:
    +
    +```shell
    +ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky
    +```
    +
  4. +
  5. +

    允许ssh登录

    +
    初始化环境变量,然后制作镜像:
    +
    +```shell
    +export DIB_DEV_USER_USERNAME=ipa \
    +export DIB_DEV_USER_PWDLESS_SUDO=yes \
    +export DIB_DEV_USER_PASSWORD='123'
    +ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky -e selinux-permissive -e devuser
    +```
    +
  6. +
  7. +

    指定代码仓库

    +
    初始化对应的环境变量,然后制作镜像:
    +
    +```shell
    +# 指定仓库地址以及版本
    +DIB_REPOLOCATION_ironic_python_agent=git@172.20.2.149:liuzz/ironic-python-agent.git
    +DIB_REPOREF_ironic_python_agent=origin/develop
    +
    +# 直接从gerrit上clone代码
    +DIB_REPOLOCATION_ironic_python_agent=https://review.opendev.org/openstack/ironic-python-agent
    +DIB_REPOREF_ironic_python_agent=refs/changes/43/701043/1
    +```
    +
    +参考:[source-repositories](https://docs.openstack.org/diskimage-builder/latest/elements/source-repositories/README.html)。
    +
    +指定仓库地址及版本验证成功。
    +
  8. +
+

在Queens中,我们还提供了ironic-inspector等服务,用户可根据自身需求安装。

+

Kolla 安装

+

Kolla 为 OpenStack 服务提供生产环境可用的容器化部署的功能。openEuler 20.03 LTS SP2中已经引入了Kolla和Kolla-ansible服务,但是Kolla 以及 Kolla-ansible 原生并不支持 openEuler,因此 Openstack SIG 在openEuler 20.03 LTS SP3中提供了 openstack-kolla-plugin 和 openstack-kolla-ansible-plugin 这两个补丁包。

+

Kolla的安装十分简单,只需要安装对应的RPM包即可

+

支持 openEuler 版本:

+
yum install openstack-kolla-plugin openstack-kolla-ansible-plugin
+

不支持 openEuler 版本:

+
yum install openstack-kolla openstack-kolla-ansible
+

安装完后,就可以使用kolla-ansible, kolla-build, kolla-genpwd, kolla-mergepwd等命令了。

+
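
以下给出一个假设性的 kolla-ansible all-in-one 部署流程示意(inventory 路径、/etc/kolla/globals.yml 等均取 kolla-ansible 的默认值,实际路径与参数请以所安装版本的文档为准):

```shell
# 为各服务生成随机密码,写入 /etc/kolla/passwords.yml
kolla-genpwd
# 使用自带的 all-in-one inventory 进行部署前准备、检查和部署
kolla-ansible -i /usr/share/kolla-ansible/ansible/inventory/all-in-one bootstrap-servers
kolla-ansible -i /usr/share/kolla-ansible/ansible/inventory/all-in-one prechecks
kolla-ansible -i /usr/share/kolla-ansible/ansible/inventory/all-in-one deploy
```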

Trove 安装

+

Trove是OpenStack的数据库服务,如果用户使用OpenStack提供的数据库服务则推荐使用该组件。否则,可以不用安装。

+
    +
  1. 设置数据库
  2. +
+

数据库服务在数据库中存储信息,创建一个trove用户可以访问的trove数据库,替换TROVE_DBPASSWORD为合适的密码

+
mysql -u root -p
+
+MariaDB [(none)]> CREATE DATABASE trove CHARACTER SET utf8;
+MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'localhost' \
+IDENTIFIED BY 'TROVE_DBPASSWORD';
+MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'%' \
+IDENTIFIED BY 'TROVE_DBPASSWORD';
+
    +
  1. 创建服务用户认证
  2. +
+

1、创建Trove服务用户

+

openstack user create --password TROVE_PASSWORD \
+                      --email trove@example.com trove
+openstack role add --project service --user trove admin
+openstack service create --name trove --description "Database service" database
+ 解释: TROVE_PASSWORD 替换为trove用户的密码

+

2、创建Database服务访问入口

+

openstack endpoint create --region RegionOne database public http://$TROVE_NODE:8779/v1.0/%\(tenant_id\)s
+openstack endpoint create --region RegionOne database internal http://$TROVE_NODE:8779/v1.0/%\(tenant_id\)s
+openstack endpoint create --region RegionOne database admin http://$TROVE_NODE:8779/v1.0/%\(tenant_id\)s
+ 解释: $TROVE_NODE 替换为Trove的API服务部署节点

+
    +
  1. 安装和配置Trove各组件

     1、安装Trove包

     ```shell script
     yum install openstack-trove python2-troveclient
     ```
    2. 配置`trove.conf`
    +```shell script
    +vim /etc/trove/trove.conf
    +
    +[DEFAULT]
    +bind_host=TROVE_NODE_IP
    +log_dir = /var/log/trove
    +
    +auth_strategy = keystone
    +# Config option for showing the IP address that nova doles out
    +add_addresses = True
    +network_label_regex = ^NETWORK_LABEL$
    +api_paste_config = /etc/trove/api-paste.ini
    +
    +trove_auth_url = http://controller:35357/v3/
    +nova_compute_url = http://controller:8774/v2
    +cinder_url = http://controller:8776/v1
    +
    +nova_proxy_admin_user = admin
    +nova_proxy_admin_pass = ADMIN_PASS
    +nova_proxy_admin_tenant_name = service
    +taskmanager_manager = trove.taskmanager.manager.Manager
    +use_nova_server_config_drive = True
    +
    +# Set these if using Neutron Networking
    +network_driver=trove.network.neutron.NeutronDriver
    +network_label_regex=.*
    +
    +
    +transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
    +
    +[database]
    +connection = mysql+pymysql://trove:TROVE_DBPASS@controller/trove
    +
    +[keystone_authtoken]
    +www_authenticate_uri = http://controller:5000/v3/
    +auth_url=http://controller:35357/v3/
    +#auth_uri = http://controller/identity
    +#auth_url = http://controller/identity_admin
    +auth_type = password
    +project_domain_name = default
    +user_domain_name = default
    +project_name = service
    +username = trove
    +password = TROVE_PASS
    +
    + 解释:
  2. +
  3. [Default]分组中bind_host配置为Trove部署节点的IP
  4. +
  5. nova_compute_url 和 cinder_url 为Nova和Cinder在Keystone中创建的endpoint
  6. +
  7. nova_proxy_XXX 为一个能访问Nova服务的用户信息,上例中使用admin用户为例
  8. +
  9. transport_url 为RabbitMQ连接信息,RABBIT_PASS替换为RabbitMQ的密码
  10. +
  11. [database]分组中的connection 为前面在mysql中为Trove创建的数据库信息
  12. +
  13. +

    Trove的用户信息中TROVE_PASS替换为实际trove用户的密码

    +
  14. +
  15. +

    配置trove-taskmanager.conf

    ```shell script
    vim /etc/trove/trove-taskmanager.conf

    [DEFAULT]
    log_dir = /var/log/trove
    trove_auth_url = http://controller/identity/v2.0
    nova_compute_url = http://controller:8774/v2
    cinder_url = http://controller:8776/v1
    transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/

    [database]
    connection = mysql+pymysql://trove:TROVE_DBPASS@controller/trove
    ```

    **解释:** 参照`trove.conf`配置
+
+4. 配置`trove-conductor.conf`
+```shell script
+vim /etc/trove/trove-conductor.conf
+
+[DEFAULT]
+log_dir = /var/log/trove
+trove_auth_url = http://controller/identity/v2.0
+nova_compute_url = http://controller:8774/v2
+cinder_url = http://controller:8776/v1
+transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
+
+[database]
+connection = mysql+pymysql://trove:trove@controller/trove
+ 解释: 参照trove.conf配置

+
    +
  5. 配置trove-guestagent.conf

     ```shell script
     vim /etc/trove/trove-guestagent.conf

     [DEFAULT]
     rabbit_host = controller
     rabbit_password = RABBIT_PASS
     nova_proxy_admin_user = admin
     nova_proxy_admin_pass = ADMIN_PASS
     nova_proxy_admin_tenant_name = service
     trove_auth_url = http://controller/identity_admin/v2.0
     ```

     **解释:** `guestagent`是trove中一个独立组件,需要预先内置到Trove通过Nova创建的虚拟机镜像中,在创建好数据库实例后,会起guestagent进程,负责通过消息队列(RabbitMQ)向Trove上报心跳,因此需要配置RabbitMQ的用户和密码信息。

  6. 生成`Trove`数据库表

     ```shell script
     su -s /bin/sh -c "trove-manage db_sync" trove
     ```
  2. +
  3. 完成安装配置
  4. +
  1. 配置Trove服务自启动

     ```shell script
     systemctl enable openstack-trove-api.service \
     openstack-trove-taskmanager.service \
     openstack-trove-conductor.service
     ```

  2. 启动服务

     ```shell script
     systemctl start openstack-trove-api.service \
     openstack-trove-taskmanager.service \
     openstack-trove-conductor.service
     ```
  6. +
+

Rally 安装

+

Rally是OpenStack提供的性能测试工具。只需要简单的安装即可。

+
yum install openstack-rally openstack-rally-plugins
+ +
+
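
安装完成后,一个典型的使用流程示意如下(deployment 名称与场景文件路径均为示意值,场景文件请参考 rally 自带的 samples 或官方文档):

```shell
# 基于当前环境变量中的认证信息注册一个 deployment
source ~/.admin-openrc
rally deployment create --fromenv --name=openeuler-openstack
# 运行一个场景任务并生成 HTML 报告
rally task start my-scenario.json
rally task report --out rally_report.html
```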
diff --git a/site/install/openEuler-20.03-LTS-SP3/OpenStack-rocky/index.html b/site/install/openEuler-20.03-LTS-SP3/OpenStack-rocky/index.html
new file mode 100644
index 0000000000000000000000000000000000000000..39ded27dd12611a7baf0e13b2d8ba725da7b60ef
--- /dev/null
+++ b/site/install/openEuler-20.03-LTS-SP3/OpenStack-rocky/index.html
@@ -0,0 +1,1606 @@

OpenStack-Rocky 部署指南

+ +

OpenStack 简介

+

OpenStack 是一个社区,也是一个项目。它提供了一个部署云的操作平台或工具集,为组织提供可扩展的、灵活的云计算。

+

作为一个开源的云计算管理平台,OpenStack 由 nova、cinder、neutron、glance、keystone、horizon 等几个主要的组件组合起来完成具体工作。OpenStack 支持几乎所有类型的云环境,项目目标是提供实施简单、可大规模扩展、丰富、标准统一的云计算管理平台。OpenStack 通过各种互补的服务提供了基础设施即服务(IaaS)的解决方案,每个服务提供 API 进行集成。

+

openEuler 20.03-LTS-SP3 版本官方认证的第三方 oepkg yum 源已经支持 Openstack-Rocky 版本,用户可以配置好 oepkg yum 源后根据此文档进行 OpenStack 部署。

+

准备环境

+

OpenStack yum源配置

+

配置 20.03-LTS-SP3 官方认证的第三方源 oepkg

+
$ cat << EOF >> /etc/yum.repos.d/OpenStack_Rocky.repo
+[openstack_rocky]
+name=OpenStack_Rocky
+baseurl=https://repo.oepkgs.net/openEuler/rpm/openEuler-20.03-LTS-SP3/budding-openeuler/openstack/rocky/$basearch/
+gpgcheck=0
+enabled=1
+EOF
+

注意

+

如果环境启用了Epol源,需要提高rocky仓的优先级,设置priority=1: +

$ cat << EOF >> /etc/yum.repos.d/OpenStack_Rocky.repo
+[openstack_rocky]
+name=OpenStack_Rocky
+baseurl=https://repo.oepkgs.net/openEuler/rpm/openEuler-20.03-LTS-SP3/budding-openeuler/openstack/rocky/$basearch/
+gpgcheck=0
+enabled=1
+priority=1
+EOF

+
$ yum clean all && yum makecache
+

环境配置

+

/etc/hosts中添加controller信息,例如节点IP是10.0.0.11,则新增:

+
10.0.0.11   controller
+

安装 SQL DataBase

+
    +
  1. +

    执行如下命令,安装软件包。

    +

    $ yum install mariadb mariadb-server python2-PyMySQL
    +2. 创建并编辑 /etc/my.cnf.d/openstack.cnf 文件。

    +

    复制如下内容到文件,其中 bind-address 设置为控制节点的管理IP地址。 +

    [mysqld]
    +bind-address = 10.0.0.11
    +default-storage-engine = innodb
    +innodb_file_per_table = on
    +max_connections = 4096
    +collation-server = utf8_general_ci
    +character-set-server = utf8

    +
  2. +
  3. +

    启动 DataBase 服务,并为其配置开机自启动:

    +
    $ systemctl enable mariadb.service
    +$ systemctl start mariadb.service
    +

    安装 RabbitMQ

    +
  4. +
  5. +

    执行如下命令,安装软件包。

    +
    $ yum install rabbitmq-server
    +
  6. +
  7. +

    启动 RabbitMQ 服务,并为其配置开机自启动。

    +

    $ systemctl enable rabbitmq-server.service
    +$ systemctl start rabbitmq-server.service
    +3. 添加 OpenStack用户。

    +

    $ rabbitmqctl add_user openstack RABBIT_PASS
    +4. 替换 RABBIT_PASS,为OpenStack用户设置密码

    +
  8. +
  9. +

    设置openstack用户权限,允许进行配置、写、读:

    +
    $ rabbitmqctl set_permissions openstack ".*" ".*" ".*"
    +
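
    可以通过以下命令确认用户和权限已正确设置(输出内容因环境而异):

    ```shell
    rabbitmqctl list_users
    rabbitmqctl list_permissions
    ```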
  10. +
+

安装 Memcached

+
    +
  1. +

    执行如下命令,安装依赖软件包。

    +

    $ yum install memcached python2-memcached
    +2. 编辑 /etc/sysconfig/memcached 文件,添加以下内容

    +

    OPTIONS="-l 127.0.0.1,::1,controller"
    +OPTIONS 修改为实际环境中控制节点的管理IP地址。

    +
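
    例如,控制节点管理 IP 为 10.0.0.11 时(示意值),也可以直接写成:

    ```shell
    OPTIONS="-l 127.0.0.1,::1,10.0.0.11"
    ```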
  2. +
  3. +

    执行如下命令,启动 Memcached 服务,并为其配置开机启动。

    +
    $ systemctl enable memcached.service
    +$ systemctl start memcached.service
    +
  4. +
+

安装 OpenStack

+

Keystone 安装

+
    +
  1. +

    以 root 用户访问数据库,创建 keystone 数据库并授权。

    +
    $ mysql -u root -p
    +

    MariaDB [(none)]> CREATE DATABASE keystone;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
    +IDENTIFIED BY 'KEYSTONE_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
    +IDENTIFIED BY 'KEYSTONE_DBPASS';
    +MariaDB [(none)]> exit
    +替换 KEYSTONE_DBPASS,为 Keystone 数据库设置密码

    +
  2. +
  3. +

    执行如下命令,安装软件包。

    +
    $ yum install openstack-keystone httpd python2-mod_wsgi
    +
  4. +
  5. +

    配置keystone,编辑 /etc/keystone/keystone.conf 文件。在[database]部分,配置数据库入口。在[token]部分,配置token provider

    +

    [database]
    +connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone
    +[token]
    +provider = fernet
    +替换KEYSTONE_DBPASS为Keystone数据库的密码

    +
  6. +
  7. +

    执行如下命令,同步数据库。

    +
    su -s /bin/sh -c "keystone-manage db_sync" keystone
    +
  8. +
  9. +

    执行如下命令,初始化Fernet密钥仓库。

    +
    $ keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
    +$ keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
    +
  10. +
  11. +

    执行如下命令,启动身份服务。

    +

    $ keystone-manage bootstrap --bootstrap-password ADMIN_PASS \
    +--bootstrap-admin-url http://controller:5000/v3/ \
    +--bootstrap-internal-url http://controller:5000/v3/ \
    +--bootstrap-public-url http://controller:5000/v3/ \
    +--bootstrap-region-id RegionOne
    +替换 ADMIN_PASS,为 admin 用户设置密码。

    +
  12. +
  13. +

    编辑 /etc/httpd/conf/httpd.conf 文件,配置Apache HTTP server

    +
    $ vim /etc/httpd/conf/httpd.conf
    +

    配置 ServerName 项引用控制节点,如下所示。 +

    ServerName controller

    +

    如果 ServerName 项不存在则需要创建。

    +
  14. +
  15. +

    执行如下命令,为 /usr/share/keystone/wsgi-keystone.conf 文件创建链接。

    +
    $ ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
    +
  16. +
  17. +

    完成安装,执行如下命令,启动Apache HTTP服务。

    +
    $ systemctl enable httpd.service
    +$ systemctl start httpd.service
    +
  18. +
  19. +

    安装OpenStackClient

    +
    $ yum install python2-openstackclient
    +
  20. +
  21. +

    创建 OpenStack client 环境脚本

    +

    创建admin用户的环境变量脚本:

    +
    # vim admin-openrc
    +
    +export OS_PROJECT_DOMAIN_NAME=Default
    +export OS_USER_DOMAIN_NAME=Default
    +export OS_PROJECT_NAME=admin
    +export OS_USERNAME=admin
    +export OS_PASSWORD=ADMIN_PASS
    +export OS_AUTH_URL=http://controller:5000/v3
    +export OS_IDENTITY_API_VERSION=3
    +export OS_IMAGE_API_VERSION=2
    +

    替换ADMIN_PASS为admin用户的密码,与上述keystone-manage bootstrap 命令中设置的密码一致。

    运行脚本加载环境变量:

    +
    $ source admin-openrc
    +
  22. +
  23. +

    分别执行如下命令,创建domain, projects, users, roles。

    +

    创建domain ‘example’:

    +
    $ openstack domain create --description "An Example Domain" example
    +

    注:domain ‘default’在 keystone-manage bootstrap 时已创建

    +

    创建project ‘service’:

    +
    $ openstack project create --domain default --description "Service Project" service
    +

    创建(non-admin)project ’myproject‘,user ’myuser‘ 和 role ’myrole‘,为‘myproject’和‘myuser’添加角色‘myrole’:

    +
    $ openstack project create --domain default --description "Demo Project" myproject
    +$ openstack user create --domain default --password-prompt myuser
    +$ openstack role create myrole
    +$ openstack role add --project myproject --user myuser myrole
    +
  24. +
  25. +

    验证

    +

    取消临时环境变量OS_AUTH_URL和OS_PASSWORD:

    +
    $ unset OS_AUTH_URL OS_PASSWORD
    +

    为admin用户请求token:

    +
    $ openstack --os-auth-url http://controller:5000/v3 \
    +--os-project-domain-name Default --os-user-domain-name Default \
    +--os-project-name admin --os-username admin token issue
    +

    为myuser用户请求token:

    +
    $ openstack --os-auth-url http://controller:5000/v3 \
    +--os-project-domain-name Default --os-user-domain-name Default \
    +--os-project-name myproject --os-username myuser token issue
    +
  26. +
+

Glance 安装

+
    +
  1. +

    创建数据库、服务凭证和 API 端点

    +

    创建数据库:

    +

    以 root 用户访问数据库,创建 glance 数据库并授权。

    +
    $ mysql -u root -p
    +
    MariaDB [(none)]> CREATE DATABASE glance;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
    +IDENTIFIED BY 'GLANCE_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
    +IDENTIFIED BY 'GLANCE_DBPASS';
    +MariaDB [(none)]> exit
    +

    替换 GLANCE_DBPASS,为 glance 数据库设置密码。

    +
    $ source admin-openrc
    +

    执行以下命令,分别完成创建 glance 服务凭证、创建glance用户和添加‘admin’角色到用户‘glance’。

    +

    $ openstack user create --domain default --password-prompt glance
    +$ openstack role add --project service --user glance admin
    +$ openstack service create --name glance --description "OpenStack Image" image
    +创建镜像服务API端点:

    +
    $ openstack endpoint create --region RegionOne image public http://controller:9292
    +$ openstack endpoint create --region RegionOne image internal http://controller:9292
    +$ openstack endpoint create --region RegionOne image admin http://controller:9292
    +
  2. +
  3. +

    安装和配置

    +

    安装软件包:

    +

    $ yum install openstack-glance
    +配置glance:

    +

    编辑 /etc/glance/glance-api.conf 文件:

    +

    在[database]部分,配置数据库入口

    +

    在[keystone_authtoken] [paste_deploy]部分,配置身份认证服务入口

    +

    在[glance_store]部分,配置本地文件系统存储和镜像文件的位置

    +
    [database]
    +# ...
    +connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
    +[keystone_authtoken]
    +# ...
    +www_authenticate_uri  = http://controller:5000
    +auth_url = http://controller:5000
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +project_name = service
    +username = glance
    +password = GLANCE_PASS
    +[paste_deploy]
    +# ...
    +flavor = keystone
    +[glance_store]
    +# ...
    +stores = file,http
    +default_store = file
    +filesystem_store_datadir = /var/lib/glance/images/
    +

    编辑 /etc/glance/glance-registry.conf 文件:

    +

    在[database]部分,配置数据库入口

    +

    在[keystone_authtoken] [paste_deploy]部分,配置身份认证服务入口

    +

    ```ini
    [database]
    # ...
    connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
    [keystone_authtoken]
    # ...
    www_authenticate_uri = http://controller:5000
    auth_url = http://controller:5000
    memcached_servers = controller:11211
    auth_type = password
    project_domain_name = Default
    user_domain_name = Default
    project_name = service
    username = glance
    password = GLANCE_PASS
    [paste_deploy]
    # ...
    flavor = keystone
    ```

    +

    其中,替换 GLANCE_DBPASS 为 glance 数据库的密码,替换 GLANCE_PASS 为 glance 用户的密码。

    +

    同步数据库:

    +

    $ su -s /bin/sh -c "glance-manage db_sync" glance
    +启动镜像服务:

    +
    $ systemctl enable openstack-glance-api.service openstack-glance-registry.service
    +$ systemctl start openstack-glance-api.service openstack-glance-registry.service
    +
  4. +
  5. +

    验证

    +

    下载镜像:

    ```shell
    $ source admin-openrc
    ```

    注意:如果您使用的环境是鲲鹏架构,请下载arm64版本的镜像。

    ```shell
    $ wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
    ```

    向Image服务上传镜像:

    ```shell
    $ glance image-create --name "cirros" --file cirros-0.4.0-x86_64-disk.img --disk-format qcow2 --container-format bare --visibility=public
    ```

    确认镜像上传并验证属性:

    ```shell
    $ glance image-list
    ```

    +

    Nova 安装

    +
  6. +
  7. +

    创建数据库、服务凭证和 API 端点

    +

    创建数据库:

    +

    作为root用户访问数据库,创建nova、nova_api、nova_cell0 数据库并授权

    +
    $ mysql -u root -p
    +

    MariaDB [(none)]> CREATE DATABASE nova_api;
    +MariaDB [(none)]> CREATE DATABASE nova;
    +MariaDB [(none)]> CREATE DATABASE nova_cell0;
    +MariaDB [(none)]> CREATE DATABASE placement;
    +
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' \
    +IDENTIFIED BY 'PLACEMENT_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' \
    +IDENTIFIED BY 'PLACEMENT_DBPASS';
    +MariaDB [(none)]> exit
    +替换NOVA_DBPASS及PLACEMENT_DBPASS,为nova及placement数据库设置密码

    +

    执行如下命令,完成创建nova服务凭证、创建nova用户以及添加‘admin’角色到用户‘nova’。

    +
    $ . admin-openrc
    +$ openstack user create --domain default --password-prompt nova
    +$ openstack role add --project service --user nova admin
    +$ openstack service create --name nova --description "OpenStack Compute" compute
    +

    创建计算服务API端点:

    +
    $ openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1
    +$ openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1
    +$ openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1
    +

    创建placement用户并添加‘admin’角色到用户‘placement’: +

    $ openstack user create --domain default --password-prompt placement
    +$ openstack role add --project service --user placement admin

    +

    创建placement服务凭证及API服务端点: +

    $ openstack service create --name placement --description "Placement API" placement
    +$ openstack endpoint create --region RegionOne placement public http://controller:8778
    +$ openstack endpoint create --region RegionOne placement internal http://controller:8778
    +$ openstack endpoint create --region RegionOne placement admin http://controller:8778

    +
  8. +
  9. +

    安装和配置

    +

    安装软件包:

    +
    $ yum install openstack-nova-api openstack-nova-conductor \
    +  openstack-nova-novncproxy openstack-nova-scheduler openstack-nova-compute \
    +  openstack-nova-placement-api openstack-nova-console
    +

    配置nova:

    +

    编辑 /etc/nova/nova.conf 文件:

    +

    在[default]部分,启用计算和元数据的API,配置RabbitMQ消息队列入口,配置my_ip,启用网络服务neutron;

    +

    在[api_database] [database] [placement_database]部分,配置数据库入口;

    +

    在[api] [keystone_authtoken]部分,配置身份认证服务入口;

    +

    在[vnc]部分,启用并配置远程控制台入口;

    +

    在[glance]部分,配置镜像服务API的地址;

    +

    在[oslo_concurrency]部分,配置lock path;

    +

    在[placement]部分,配置placement服务的入口。

    +
    [DEFAULT]
    +# ...
    +enabled_apis = osapi_compute,metadata
    +transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
    +my_ip = 10.0.0.11
    +use_neutron = true
    +firewall_driver = nova.virt.firewall.NoopFirewallDriver
    +compute_driver = libvirt.LibvirtDriver
    +instances_path = /var/lib/nova/instances/
    +[api_database]
    +# ...
    +connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api
    +[database]
    +# ...
    +connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova
    +[placement_database]
    +# ...
    +connection = mysql+pymysql://placement:PLACEMENT_DBPASS@controller/placement
    +[api]
    +# ...
    +auth_strategy = keystone
    +[keystone_authtoken]
    +# ...
    +www_authenticate_uri = http://controller:5000/
    +auth_url = http://controller:5000/
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +project_name = service
    +username = nova
    +password = NOVA_PASS
    +[vnc]
    +enabled = true
    +# ...
    +server_listen = $my_ip
    +server_proxyclient_address = $my_ip
    +novncproxy_base_url = http://controller:6080/vnc_auto.html
    +[glance]
    +# ...
    +api_servers = http://controller:9292
    +[oslo_concurrency]
    +# ...
    +lock_path = /var/lib/nova/tmp
    +[placement]
    +# ...
    +region_name = RegionOne
    +project_domain_name = Default
    +project_name = service
    +auth_type = password
    +user_domain_name = Default
    +auth_url = http://controller:5000/v3
    +username = placement
    +password = PLACEMENT_PASS
    +[neutron]
    +# ...
    +auth_url = http://controller:5000
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +region_name = RegionOne
    +project_name = service
    +username = neutron
    +password = NEUTRON_PASS
    +

    替换RABBIT_PASS为RabbitMQ中openstack账户的密码;

    +

    配置my_ip为控制节点的管理IP地址;

    +

    替换NOVA_DBPASS为nova数据库的密码;

    +

    替换PLACEMENT_DBPASS为placement数据库的密码;

    +

    替换NOVA_PASS为nova用户的密码;

    +

    替换PLACEMENT_PASS为placement用户的密码;

    +

    替换NEUTRON_PASS为neutron用户的密码;

    +

    编辑/etc/httpd/conf.d/00-nova-placement-api.conf,增加Placement API接入配置

    +
    <Directory /usr/bin>
    +   <IfVersion >= 2.4>
    +      Require all granted
    +   </IfVersion>
    +   <IfVersion < 2.4>
    +      Order allow,deny
    +      Allow from all
    +   </IfVersion>
    +</Directory>
    +

    重启httpd服务:

    +
    $ systemctl restart httpd
    +

    同步nova-api数据库:

    +

    $ su -s /bin/sh -c "nova-manage api_db sync" nova
    +注册cell0数据库:

    +

    $ su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
    +创建cell1 cell:

    +

    $ su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
    +同步nova数据库:

    +

    $ su -s /bin/sh -c "nova-manage db sync" nova
    +验证cell0和cell1注册正确:

    +

    su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova
    +确定是否支持虚拟机硬件加速(x86架构):

    +
    $ egrep -c '(vmx|svm)' /proc/cpuinfo
    +

    如果返回值为0则不支持硬件加速,需要配置libvirt使用QEMU而不是KVM。
    注意:如果是在ARM64的服务器上,还需要配置cpu_mode为custom,cpu_model为cortex-a72:

    +

    # vim /etc/nova/nova.conf
    +[libvirt]
    +# ...
    +virt_type = qemu
    +cpu_mode = custom
    +cpu_model = cortex-a72
    +如果返回值为1或更大的值,则支持硬件加速,不需要进行额外的配置

    +

    注意

    +

    如果为arm64架构,还需要在compute节点执行以下命令

    +
    mkdir -p /usr/share/AAVMF
    +ln -s /usr/share/edk2/aarch64/QEMU_EFI-pflash.raw \
    +      /usr/share/AAVMF/AAVMF_CODE.fd
    +ln -s /usr/share/edk2/aarch64/vars-template-pflash.raw \
    +      /usr/share/AAVMF/AAVMF_VARS.fd
    +chown nova:nova /usr/share/AAVMF -R
    +
    +vim /etc/libvirt/qemu.conf
    +
    +nvram = ["/usr/share/AAVMF/AAVMF_CODE.fd:/usr/share/AAVMF/AAVMF_VARS.fd",
    +     "/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw:/usr/share/edk2/aarch64/vars-template-pflash.raw"
    +]
    +

    启动计算服务及其依赖项,并配置其开机启动:

    +

    $ systemctl enable \
    +openstack-nova-api.service \
    +openstack-nova-scheduler.service \
    +openstack-nova-conductor.service \
    +openstack-nova-novncproxy.service
    +$ systemctl start \
    +openstack-nova-api.service \
    +openstack-nova-scheduler.service \
    +openstack-nova-conductor.service \
    +openstack-nova-novncproxy.service
    +
    $ systemctl enable libvirtd.service openstack-nova-compute.service
    +$ systemctl start libvirtd.service openstack-nova-compute.service
    +添加计算节点到cell数据库:

    +

    确认计算节点存在:

    +

    $ . admin-openrc
    +$ openstack compute service list --service nova-compute
    +注册计算节点:

    +
    $ su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
    +
  10. +
  11. +

    验证

    +

    $ . admin-openrc
    +列出服务组件,验证每个流程都成功启动和注册:

    +
    $ openstack compute service list
    +

    列出身份服务中的API端点,验证与身份服务的连接:

    +
    $ openstack catalog list
    +

    列出镜像服务中的镜像,验证与镜像服务的连接:

    +
    $ openstack image list
    +

    检查cells和placement API是否运作成功,以及其他必要条件是否已具备。

    +
    $ nova-status upgrade check
    +

    Neutron 安装

    +
  12. +
  13. +

    创建数据库、服务凭证和 API 端点

    +

    创建数据库:

    +

    作为root用户访问数据库,创建 neutron 数据库并授权。

    +
    $ mysql -u root -p
    +

    MariaDB [(none)]> CREATE DATABASE neutron;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
    +IDENTIFIED BY 'NEUTRON_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
    +IDENTIFIED BY 'NEUTRON_DBPASS';
    +MariaDB [(none)]> exit
    +替换NEUTRON_DBPASS,为neutron数据库设置密码。

    +

    $ . admin-openrc
    +执行如下命令,完成创建 neutron 服务凭证、创建neutron用户和添加‘admin’角色到‘neutron’用户操作。

    +

    创建neutron服务

    +

    $ openstack user create --domain default --password-prompt neutron
    +$ openstack role add --project service --user neutron admin
    +$ openstack service create --name neutron --description "OpenStack Networking" network
    +创建网络服务API端点:

    +
    $ openstack endpoint create --region RegionOne network public http://controller:9696
    +$ openstack endpoint create --region RegionOne network internal http://controller:9696
    +$ openstack endpoint create --region RegionOne network admin http://controller:9696
    +
  14. +
  15. +

    安装和配置 Self-service 网络

    +

    安装软件包:

    +

    $ yum install openstack-neutron openstack-neutron-ml2 \
    +openstack-neutron-linuxbridge ebtables ipset
    +配置neutron:

    +

    编辑 /etc/neutron/neutron.conf 文件:

    +

    在[database]部分,配置数据库入口;

    +

    在[default]部分,启用ml2插件和router插件,允许ip地址重叠,配置RabbitMQ消息队列入口;

    +

    在[default] [keystone_authtoken]部分,配置身份认证服务入口;

    +

    在[default] [nova]部分,配置网络来通知计算网络拓扑的变化;

    +

    在[oslo_concurrency]部分,配置lock path。

    +
    [database]
    +# ...
    +connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron
    +[DEFAULT]
    +# ...
    +core_plugin = ml2
    +service_plugins = router
    +allow_overlapping_ips = true
    +transport_url = rabbit://openstack:RABBIT_PASS@controller
    +auth_strategy = keystone
    +notify_nova_on_port_status_changes = true
    +notify_nova_on_port_data_changes = true
    +[keystone_authtoken]
    +# ...
    +www_authenticate_uri = http://controller:5000
    +auth_url = http://controller:5000
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +project_name = service
    +username = neutron
    +password = NEUTRON_PASS
    +[nova]
    +# ...
    +auth_url = http://controller:5000
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +region_name = RegionOne
    +project_name = service
    +username = nova
    +password = NOVA_PASS
    +[oslo_concurrency]
    +# ...
    +lock_path = /var/lib/neutron/tmp
    +

    替换NEUTRON_DBPASS为neutron数据库的密码;

    +

    替换RABBIT_PASS为RabbitMQ中openstack账户的密码;

    +

    替换NEUTRON_PASS为neutron用户的密码;

    +

    替换NOVA_PASS为nova用户的密码。

    +

    配置ML2插件:

    +

    编辑 /etc/neutron/plugins/ml2/ml2_conf.ini 文件:

    +

    在[ml2]部分,启用 flat、vlan、vxlan 网络,启用网桥及 layer-2 population 机制,启用端口安全扩展驱动;

    +

    在[ml2_type_flat]部分,配置 flat 网络为 provider 虚拟网络;

    +

    在[ml2_type_vxlan]部分,配置 VXLAN 网络标识符范围;

    +

    在[securitygroup]部分,配置允许 ipset。

    +

    # vim /etc/neutron/plugins/ml2/ml2_conf.ini
    +[ml2]
    +# ...
    +type_drivers = flat,vlan,vxlan
    +tenant_network_types = vxlan
    +mechanism_drivers = linuxbridge,l2population
    +extension_drivers = port_security
    +[ml2_type_flat]
    +# ...
    +flat_networks = provider
    +[ml2_type_vxlan]
    +# ...
    +vni_ranges = 1:1000
    +[securitygroup]
    +# ...
    +enable_ipset = true
    +配置 Linux bridge 代理:

    +

    编辑 /etc/neutron/plugins/ml2/linuxbridge_agent.ini 文件:

    +

    在[linux_bridge]部分,映射 provider 虚拟网络到物理网络接口;

    +

    在[vxlan]部分,启用 vxlan 覆盖网络,配置处理覆盖网络的物理网络接口 IP 地址,启用 layer-2 population;

    +

    在[securitygroup]部分,允许安全组,配置 linux bridge iptables 防火墙驱动。

    +

    [linux_bridge]
    +physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME
    +[vxlan]
    +enable_vxlan = true
    +local_ip = OVERLAY_INTERFACE_IP_ADDRESS
    +l2_population = true
    +[securitygroup]
    +# ...
    +enable_security_group = true
    +firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
    +替换PROVIDER_INTERFACE_NAME为物理网络接口;

    +

    替换OVERLAY_INTERFACE_IP_ADDRESS为控制节点的管理IP地址。

    +
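    If you are unsure which values to fill in, the sketch below shows one way to look them up; "eth0" is only an example interface name:

    ```
    # List candidate interface names for PROVIDER_INTERFACE_NAME
    ip -o link show
    # Show the IPv4 address to use as OVERLAY_INTERFACE_IP_ADDRESS ("eth0" is an example)
    ip -4 addr show eth0
    ```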

    配置Layer-3代理:

    +

    编辑 /etc/neutron/l3_agent.ini 文件:

    +

    在[default]部分,配置接口驱动为linuxbridge

    +

    [DEFAULT]
    +# ...
    +interface_driver = linuxbridge
    +配置DHCP代理:

    +

    编辑 /etc/neutron/dhcp_agent.ini 文件:

    +

    在[default]部分,配置linuxbridge接口驱动、Dnsmasq DHCP驱动,启用隔离的元数据。

    +

    [DEFAULT]
    +# ...
    +interface_driver = linuxbridge
    +dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
    +enable_isolated_metadata = true
    +配置metadata代理:

    +

    编辑 /etc/neutron/metadata_agent.ini 文件:

    +

    在[default]部分,配置元数据主机和shared secret。

    +

    [DEFAULT]
    +# ...
    +nova_metadata_host = controller
    +metadata_proxy_shared_secret = METADATA_SECRET
    +替换METADATA_SECRET为合适的元数据代理secret。

    +
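    Any sufficiently random string works for METADATA_SECRET; one possible way to generate it:

    ```
    # Example only: generate a random 32-character hex string for METADATA_SECRET
    openssl rand -hex 16
    ```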
  16. +
  17. +

    配置计算服务

    +

    编辑 /etc/nova/nova.conf 文件:

    +

    在[neutron]部分,配置访问参数,启用元数据代理,配置secret。

    +
    [neutron]
    +# ...
    +auth_url = http://controller:5000
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +region_name = RegionOne
    +project_name = service
    +username = neutron
    +password = NEUTRON_PASS
    +service_metadata_proxy = true
    +metadata_proxy_shared_secret = METADATA_SECRET
    +

    替换NEUTRON_PASS为neutron用户的密码;

    +

    替换METADATA_SECRET为合适的元数据代理secret。

    +
  18. +
  19. +

    完成安装

    +

    添加配置文件链接:

    +
    $ ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
    +

    同步数据库:

    +
    $ su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
    +--config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
    +

    重启计算API服务:

    +
    $ systemctl restart openstack-nova-api.service
    +

    启动网络服务并配置开机启动:

    +
    $ systemctl enable neutron-server.service \
    +neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
    +neutron-metadata-agent.service
    +$ systemctl start neutron-server.service \
    +neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
    +neutron-metadata-agent.service
    +$ systemctl enable neutron-l3-agent.service
    +$ systemctl start neutron-l3-agent.service
    +
  20. +
  21. +

    验证

    +

    列出代理验证 neutron 代理启动成功:

    +
    $ openstack network agent list
    +
  22. +
+

Cinder 安装

+
    +
  1. +

    创建数据库、服务凭证和 API 端点

    +

    创建数据库:

    +

    作为root用户访问数据库,创建cinder数据库并授权。

    +

    $ mysql -u root -p
    +MariaDB [(none)]> CREATE DATABASE cinder;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \
    +IDENTIFIED BY 'CINDER_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \
    +IDENTIFIED BY 'CINDER_DBPASS';
    +MariaDB [(none)]> exit
    +替换CINDER_DBPASS,为cinder数据库设置密码。

    +
    $ source admin-openrc
    +

    创建cinder服务凭证:

    +

    创建cinder用户

    +

    添加‘admin’角色到用户‘cinder’

    +

    创建cinderv2和cinderv3服务

    +

    $ openstack user create --domain default --password-prompt cinder
    +$ openstack role add --project service --user cinder admin
    +$ openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
    +$ openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
    +创建块存储服务API端点:

    +
    $ openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(project_id\)s
    +$ openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(project_id\)s
    +$ openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(project_id\)s
    +$ openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s
    +$ openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s
    +$ openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s
    +
  2. +
  3. +

    安装和配置控制节点

    +

    安装软件包:

    +

    $ yum install openstack-cinder
    +配置cinder:

    +

    编辑 /etc/cinder/cinder.conf 文件:

    +

    在[database]部分,配置数据库入口;

    +

    在[DEFAULT]部分,配置RabbitMQ消息队列入口,配置my_ip;

    +

    在[DEFAULT] [keystone_authtoken]部分,配置身份认证服务入口;

    +

    在[oslo_concurrency]部分,配置lock path。

    +

    [database]
    +# ...
    +connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder
    +[DEFAULT]
    +# ...
    +transport_url = rabbit://openstack:RABBIT_PASS@controller
    +auth_strategy = keystone
    +my_ip = 10.0.0.11
    +[keystone_authtoken]
    +# ...
    +www_authenticate_uri = http://controller:5000
    +auth_url = http://controller:5000
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +project_name = service
    +username = cinder
    +password = CINDER_PASS
    +[oslo_concurrency]
    +# ...
    +lock_path = /var/lib/cinder/tmp
    +替换CINDER_DBPASS为cinder数据库的密码;

    +

    替换RABBIT_PASS为RabbitMQ中openstack账户的密码;

    +

    配置my_ip为控制节点的管理IP地址;

    +

    替换CINDER_PASS为cinder用户的密码;

    +

    同步数据库:

    +

    $ su -s /bin/sh -c "cinder-manage db sync" cinder
    +配置计算使用块存储:

    +

    编辑 /etc/nova/nova.conf 文件。

    +

    [cinder]
    +os_region_name = RegionOne
    +完成安装:

    +

    重启计算API服务

    +

    $ systemctl restart openstack-nova-api.service
    +启动块存储服务

    +
    $ systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
    +$ systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service
    +
  4. +
  5. +

    安装和配置存储节点(LVM)

    +

    安装软件包:

    +
    $ yum install lvm2 device-mapper-persistent-data scsi-target-utils python2-keystone \
    +openstack-cinder-volume
    +

    创建LVM物理卷 /dev/sdb:

    +

    $ pvcreate /dev/sdb
    +创建LVM卷组 cinder-volumes:

    +

    $ vgcreate cinder-volumes /dev/sdb
    +编辑 /etc/lvm/lvm.conf 文件:

    +

    在devices部分,添加过滤以接受/dev/sdb设备拒绝其他设备。

    +
    devices {
    +
    +# ...
    +
    +filter = [ "a/sdb/", "r/.*/"]
    +
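    An optional sanity check after creating the physical volume, the volume group and the filter (standard LVM tools):

    ```
    # Confirm LVM still sees /dev/sdb and that the cinder-volumes group exists
    pvs /dev/sdb
    vgs cinder-volumes
    ```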

    编辑 /etc/cinder/cinder.conf 文件:

    +

    在[lvm]部分,使用LVM驱动、cinder-volumes卷组、iSCSI协议和适当的iSCSI服务配置LVM后端。

    +

    在[DEFAULT]部分,启用LVM后端,配置镜像服务API的位置。

    +
    [lvm]
    +volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
    +volume_group = cinder-volumes
    +target_protocol = iscsi
    +target_helper = lioadm
    +[DEFAULT]
    +# ...
    +enabled_backends = lvm
    +glance_api_servers = http://controller:9292
    +

    注意

    +

    当cinder使用tgtadm的方式挂卷的时候,要修改/etc/tgt/tgtd.conf,内容如下,保证tgtd可以发现cinder-volume的iscsi target。

    +

    include /var/lib/cinder/volumes/*
    +完成安装:

    +
    $ systemctl enable openstack-cinder-volume.service tgtd.service iscsid.service
    +$ systemctl start openstack-cinder-volume.service tgtd.service iscsid.service
    +
  6. +
  7. +

    安装和配置存储节点(ceph RBD)

    +

    安装软件包:

    +
    $ yum install ceph-common python2-rados python2-rbd python2-keystone openstack-cinder-volume
    +

    在[DEFAULT]部分,启用ceph RBD后端。

    +
    [DEFAULT]
    +enabled_backends = ceph-rbd
    +

    添加ceph rbd配置部分,配置块命名与enabled_backends中保持一致

    +
    [ceph-rbd]
    +glance_api_version = 2
    +rados_connect_timeout = -1
    +rbd_ceph_conf = /etc/ceph/ceph.conf
    +rbd_flatten_volume_from_snapshot = False
    +rbd_max_clone_depth = 5
    +rbd_pool = <RBD_POOL_NAME>  # RBD存储池名称
    +rbd_secret_uuid = <rbd_secret_uuid> # 随机生成SECRET UUID
    +rbd_store_chunk_size = 4
    +rbd_user = <RBD_USER_NAME>
    +volume_backend_name = ceph-rbd
    +volume_driver = cinder.volume.drivers.rbd.RBDDriver
    +
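    rbd_secret_uuid above is a placeholder; a hedged example of generating one is shown below. The same UUID also has to be registered as a libvirt secret on the compute nodes so that Nova can attach the RBD volumes.

    ```
    # Example only: generate a UUID to use as rbd_secret_uuid
    uuidgen
    ```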

    配置存储节点ceph客户端,需要保证/etc/ceph/目录中包含ceph集群访问配置,包括ceph.conf以及keyring

    +
    [root@openeuler ~]# ll /etc/ceph
    +-rw-r--r-- 1 root root   82 Jun 16 17:11 ceph.client.<rbd_user>.keyring
    +-rw-r--r-- 1 root root 1.5K Jun 16 17:11 ceph.conf
    +-rw-r--r-- 1 root root   92 Jun 16 17:11 rbdmap
    +

    在存储节点检查ceph集群是否正常可访问

    +
    [root@openeuler ~]# ceph --user cinder -s
    +  cluster:
    +    id:     b7b2fac6-420f-4ec1-aea2-4862d29b4059
    +    health: HEALTH_OK
    +
    +  services:
    +    mon: 3 daemons, quorum VIRT01,VIRT02,VIRT03
    +    mgr: VIRT03(active), standbys: VIRT02, VIRT01
    +    mds: cephfs_virt-1/1/1 up  {0=VIRT03=up:active}, 2 up:standby
    +    osd: 15 osds: 15 up, 15 in
    +
    +  data:
    +    pools:   7 pools, 1416 pgs
    +    objects: 5.41M objects, 19.8TiB
    +    usage:   49.3TiB used, 59.9TiB / 109TiB avail
    +    pgs:     1414 active
    +
    +  io:
    +    client:   2.73MiB/s rd, 22.4MiB/s wr, 3.21kop/s rd, 1.19kop/s wr
    +

    启动服务

    +
    $ systemctl enable openstack-cinder-volume.service
    +$ systemctl start openstack-cinder-volume.service
    +
  8. +
  9. +

    安装和配置备份服务

    +

    编辑 /etc/cinder/cinder.conf 文件:

    +

    在[DEFAULT]部分,配置备份选项

    +

    [DEFAULT]
    +# ...
    +# 注意: openEuler 21.03中没有提供OpenStack Swift软件包,需要用户自行安装。或者使用其他的备份后端,例如,NFS。NFS已经过测试验证,可以正常使用。
    +backup_driver = cinder.backup.drivers.swift.SwiftBackupDriver
    +backup_swift_url = SWIFT_URL
    +替换SWIFT_URL为对象存储服务的URL,该URL可以通过对象存储API端点找到:

    +

    $ openstack catalog show object-store
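    The comment above mentions NFS as a tested alternative backup backend; a hedged sketch of that configuration is shown below (option names follow the upstream cinder NFS backup driver; NFS_SERVER and the export path are placeholders):

    ```
    [DEFAULT]
    # Hedged alternative sketch: back up to an NFS share instead of Swift
    backup_driver = cinder.backup.drivers.nfs.NFSBackupDriver
    backup_share = NFS_SERVER:/nfs/cinder_backup
    ```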
    +完成安装:

    +
    $ systemctl enable openstack-cinder-backup.service
    +$ systemctl start openstack-cinder-backup.service
    +
  10. +
  11. +

    验证

    +

    列出服务组件验证每个步骤成功: +

    $ source admin-openrc
    +$ openstack volume service list

    +

    注:目前暂未对swift组件进行支持,有条件的同学可以配置对接ceph。

    +
  12. +
+

Horizon 安装

+
    +
  1. +

    安装软件包

    +

    $ yum install openstack-dashboard
    +2. 修改文件/usr/share/openstack-dashboard/openstack_dashboard/local/local_settings.py

    +

    修改变量

    +

    ALLOWED_HOSTS = ['*', ]
    +OPENSTACK_HOST = "controller"
    +OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
    +OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
    +SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
    +CACHES = {
    +    'default': {
    +         'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
    +         'LOCATION': 'controller:11211',
    +    }
    +}
    +新增变量 +
    OPENSTACK_API_VERSIONS = {
    +    "identity": 3,
    +    "image": 2,
    +    "volume": 3,
    +}
    +WEBROOT = "/dashboard/"
    +COMPRESS_OFFLINE = True
    +OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "default"
    +OPENSTACK_KEYSTONE_DEFAULT_ROLE = "admin"
    +LOGIN_URL = '/dashboard/auth/login/'
    +LOGOUT_URL = '/dashboard/auth/logout/'
    +3. 修改文件/etc/httpd/conf.d/openstack-dashboard.conf +
    WSGIDaemonProcess dashboard
    +WSGIProcessGroup dashboard
    +WSGISocketPrefix run/wsgi
    +WSGIApplicationGroup %{GLOBAL}
    +
    +WSGIScriptAlias /dashboard /usr/share/openstack-dashboard/openstack_dashboard/wsgi/django.wsgi
    +Alias /dashboard/static /usr/share/openstack-dashboard/static
    +
    +<Directory /usr/share/openstack-dashboard/openstack_dashboard/wsgi>
    +  Options All
    +  AllowOverride All
    +  Require all granted
    +</Directory>
    +
    +<Directory /usr/share/openstack-dashboard/static>
    +  Options All
    +  AllowOverride All
    +  Require all granted
    +</Directory>
    +4. 在/usr/share/openstack-dashboard目录下执行 +
    $ ./manage.py compress
    +5. 重启 httpd 服务 +
    $ systemctl restart httpd
    +6. 验证 +打开浏览器,输入网址http://,登录 horizon。

    +
  2. +
+

Tempest 安装

+

Tempest是OpenStack的集成测试服务,如果用户需要全面自动化测试已安装的OpenStack环境的功能,则推荐使用该组件。否则,可以不用安装

+
    +
  1. 安装Tempest +
    $ yum install openstack-tempest
  2. +
  3. +

    初始化目录

    +

    $ tempest init mytest
    +3. 修改配置文件。

    +

    $ cd mytest
    +$ vi etc/tempest.conf
    +tempest.conf中需要配置当前OpenStack环境的信息,具体内容可以参考官方示例

    +
  4. +
  5. +

    执行测试

    +
    $ tempest run
    +
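    A full run can take a long time; hedged examples of narrowing the run with flags from the tempest CLI:

    ```
    # Run only the smoke-tagged tests
    tempest run --smoke
    # Or run a regex-selected subset, e.g. the identity API tests
    tempest run --regex '^tempest.api.identity'
    ```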
  6. +
+

Ironic 安装

+

Ironic是OpenStack的裸金属服务,如果用户需要进行裸机部署则推荐使用该组件。否则,可以不用安装。

+
    +
  1. 设置数据库
  2. +
+

裸金属服务在数据库中存储信息,创建一个ironic用户可以访问的ironic数据库,替换IRONIC_DBPASSWORD为合适的密码

+
$ mysql -u root -p 
+
MariaDB [(none)]> CREATE DATABASE ironic CHARACTER SET utf8; 
+
+MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'localhost' \     
+IDENTIFIED BY 'IRONIC_DBPASSWORD'; 
+
+MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'%' \     
+IDENTIFIED BY 'IRONIC_DBPASSWORD';
+
    +
  1. 安装软件包
  2. +
+
yum install openstack-ironic-api openstack-ironic-conductor python2-ironicclient
+

启动服务

+
systemctl enable openstack-ironic-api openstack-ironic-conductor
+systemctl start openstack-ironic-api openstack-ironic-conductor
+
    +
  1. 组件安装与配置
  2. +
+

##### 创建服务用户认证

+

1、创建Bare Metal服务用户

+
$ openstack user create --password IRONIC_PASSWORD \ 
+--email ironic@example.com ironic 
+$ openstack role add --project service --user ironic admin 
+$ openstack service create --name ironic --description \ 
+"Ironic baremetal provisioning service" baremetal 
+

2、创建Bare Metal服务访问入口

+
$ openstack endpoint create --region RegionOne baremetal admin http://$IRONIC_NODE:6385 
+$ openstack endpoint create --region RegionOne baremetal public http://$IRONIC_NODE:6385 
+$ openstack endpoint create --region RegionOne baremetal internal http://$IRONIC_NODE:6385 
+
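$IRONIC_NODE above is a placeholder for the host that runs ironic-api; it can simply be exported beforehand, for example (the value "controller" is only an example):

```
# Example only: point IRONIC_NODE at the node that runs ironic-api
export IRONIC_NODE=controller
```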

##### 配置ironic-api服务

+

配置文件路径/etc/ironic/ironic.conf

+

1、通过connection选项配置数据库的位置,如下所示,替换IRONIC_DBPASSWORD为ironic用户的密码,替换DB_IP为DB服务器所在的IP地址:

+
[database] 
+
+# The SQLAlchemy connection string used to connect to the 
+# database (string value) 
+
+connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic
+

2、通过以下选项配置ironic-api服务使用RabbitMQ消息代理,替换RPC_*为RabbitMQ的详细地址和凭证

+
[DEFAULT] 
+
+# A URL representing the messaging driver to use and its full 
+# configuration. (string value) 
+
+transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
+

用户也可自行使用json-rpc方式替换rabbitmq

+

3、配置ironic-api服务使用身份认证服务的凭证,替换PUBLIC_IDENTITY_IP为身份认证服务器的公共IP,替换PRIVATE_IDENTITY_IP为身份认证服务器的私有IP,替换IRONIC_PASSWORD为身份认证服务中ironic用户的密码:

+
[DEFAULT] 
+
+# Authentication strategy used by ironic-api: one of 
+# "keystone" or "noauth". "noauth" should not be used in a 
+# production environment because all authentication will be 
+# disabled. (string value) 
+
+auth_strategy=keystone 
+force_config_drive = True
+
+[keystone_authtoken] 
+# Authentication type to load (string value) 
+auth_type=password 
+# Complete public Identity API endpoint (string value) 
+www_authenticate_uri=http://PUBLIC_IDENTITY_IP:5000 
+# Complete admin Identity API endpoint. (string value) 
+auth_url=http://PRIVATE_IDENTITY_IP:5000 
+# Service username. (string value) 
+username=ironic 
+# Service account password. (string value) 
+password=IRONIC_PASSWORD 
+# Service tenant name. (string value) 
+project_name=service 
+# Domain name containing project (string value) 
+project_domain_name=Default 
+# User's domain name (string value) 
+user_domain_name=Default
+

4、需要在配置文件中指定ironic日志目录

+
[DEFAULT]
+log_dir = /var/log/ironic/
+

5、创建裸金属服务数据库表

+
$ ironic-dbsync --config-file /etc/ironic/ironic.conf create_schema
+

6、重启ironic-api服务

+
$ systemctl restart openstack-ironic-api
+

##### 配置ironic-conductor服务

+

1、替换HOST_IP为conductor host的IP

+
[DEFAULT] 
+
+# IP address of this host. If unset, will determine the IP 
+# programmatically. If unable to do so, will use "127.0.0.1". 
+# (string value) 
+
+my_ip=HOST_IP
+

2、配置数据库的位置,ironic-conductor应该使用和ironic-api相同的配置。替换IRONIC_DBPASSWORD为ironic用户的密码,替换DB_IP为DB服务器所在的IP地址:

+
[database] 
+
+# The SQLAlchemy connection string to use to connect to the 
+# database. (string value) 
+
+connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic
+

3、通过以下选项配置ironic-api服务使用RabbitMQ消息代理,ironic-conductor应该使用和ironic-api相同的配置,替换RPC_*为RabbitMQ的详细地址和凭证

+
[DEFAULT] 
+
+# A URL representing the messaging driver to use and its full 
+# configuration. (string value) 
+
+transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
+

用户也可自行使用json-rpc方式替换rabbitmq

+

4、配置凭证访问其他OpenStack服务

+

为了与其他OpenStack服务进行通信,裸金属服务在请求其他服务时需要使用服务用户与OpenStack Identity服务进行认证。这些用户的凭据必须在与相应服务相关的每个配置文件中进行配置。

+

[neutron] - 访问Openstack网络服务
[glance] - 访问Openstack镜像服务
[swift] - 访问Openstack对象存储服务
[cinder] - 访问Openstack块存储服务
[inspector] - 访问Openstack裸金属introspection服务
[service_catalog] - 一个特殊项用于保存裸金属服务使用的凭证,该凭证用于发现注册在Openstack身份认证服务目录中的自己的API URL端点

+

简单起见,可以对所有服务使用同一个服务用户。为了向后兼容,该用户应该和ironic-api服务的[keystone_authtoken]所配置的为同一个用户。但这不是必须的,也可以为每个服务创建并配置不同的服务用户。

+

在下面的示例中,用户访问openstack网络服务的身份验证信息配置为:

+

网络服务部署在名为RegionOne的身份认证服务域中,仅在服务目录中注册公共端点接口

+

请求时使用特定的CA SSL证书进行HTTPS连接

+

与ironic-api服务配置相同的服务用户

+

动态密码认证插件基于其他选项发现合适的身份认证服务API版本

+
[neutron] 
+
+# Authentication type to load (string value) 
+auth_type = password 
+# Authentication URL (string value) 
+auth_url=https://IDENTITY_IP:5000/ 
+# Username (string value) 
+username=ironic 
+# User's password (string value) 
+password=IRONIC_PASSWORD 
+# Project name to scope to (string value) 
+project_name=service 
+# Domain ID containing project (string value) 
+project_domain_id=default 
+# User's domain id (string value) 
+user_domain_id=default 
+# PEM encoded Certificate Authority to use when verifying 
+# HTTPs connections. (string value) 
+cafile=/opt/stack/data/ca-bundle.pem 
+# The default region_name for endpoint URL discovery. (string 
+# value) 
+region_name = RegionOne 
+# List of interfaces, in order of preference, for endpoint 
+# URL. (list value) 
+valid_interfaces=public
+

默认情况下,为了与其他服务进行通信,裸金属服务会尝试通过身份认证服务的服务目录发现该服务合适的端点。如果希望对一个特定服务使用一个不同的端点,则在裸金属服务的配置文件中通过endpoint_override选项进行指定:

+
[neutron] 
+# ...
+endpoint_override = <NEUTRON_API_ADDRESS>
+

5、配置允许的驱动程序和硬件类型

+

通过设置enabled_hardware_types设置ironic-conductor服务允许使用的硬件类型:

+
[DEFAULT] 
+enabled_hardware_types = ipmi 
+

配置硬件接口:

+
enabled_boot_interfaces = pxe
+enabled_deploy_interfaces = direct,iscsi
+enabled_inspect_interfaces = inspector
+enabled_management_interfaces = ipmitool
+enabled_power_interfaces = ipmitool
+

配置接口默认值:

+
[DEFAULT]
+default_deploy_interface = direct
+default_network_interface = neutron
+

如果启用了任何使用Direct deploy的驱动,必须安装和配置镜像服务的Swift后端。Ceph对象网关(RADOS网关)也支持作为镜像服务的后端。

+

6、重启ironic-conductor服务

+
$ systemctl restart openstack-ironic-conductor
+
    +
  1. deploy ramdisk镜像制作
  2. +
+

目前ramdisk镜像支持通过ironic python agent builder来进行制作,这里介绍下使用这个工具构建ironic使用的deploy镜像的完整过程。(用户也可以根据自己的情况获取ironic-python-agent,这里提供使用ipa-builder制作ipa方法)

+

##### 安装 ironic-python-agent-builder

+
    +
  1. +

    安装工具:

    +
    $ pip install ironic-python-agent-builder
    +
  2. +
  3. +

    修改以下文件中的python解释器:

    +
    /usr/bin/yum
    +/usr/libexec/urlgrabber-ext-down
    +
  4. +
  5. +

    安装其它必须的工具:

    +
    $ yum install git
    +

    由于DIB依赖semanage命令,所以在制作镜像之前确定该命令是否可用:semanage --help,如果提示无此命令,安装即可:

    +
    # 先查询需要安装哪个包
    +[root@localhost ~]# yum provides /usr/sbin/semanage
    +已加载插件:fastestmirror
    +Loading mirror speeds from cached hostfile
    + * base: mirror.vcu.edu
    + * extras: mirror.vcu.edu
    + * updates: mirror.math.princeton.edu
    +policycoreutils-python-2.5-34.el7.aarch64 : SELinux policy core python utilities
    +源    :base
    +匹配来源:
    +文件名    :/usr/sbin/semanage
    +# 安装
    +[root@localhost ~]# yum install policycoreutils-python
    +
  6. +
+

##### 制作镜像

+

如果是aarch64架构,还需要添加:

+
$ export ARCH=aarch64
+

###### 普通镜像

+

基本用法:

+
usage: ironic-python-agent-builder [-h] [-r RELEASE] [-o OUTPUT] [-e ELEMENT]
+                                   [-b BRANCH] [-v] [--extra-args EXTRA_ARGS]
+                                   distribution
+
+positional arguments:
+  distribution          Distribution to use
+
+optional arguments:
+  -h, --help            show this help message and exit
+  -r RELEASE, --release RELEASE
+                        Distribution release to use
+  -o OUTPUT, --output OUTPUT
+                        Output base file name
+  -e ELEMENT, --element ELEMENT
+                        Additional DIB element to use
+  -b BRANCH, --branch BRANCH
+                        If set, override the branch that is used for ironic-
+                        python-agent and requirements
+  -v, --verbose         Enable verbose logging in diskimage-builder
+  --extra-args EXTRA_ARGS
+                        Extra arguments to pass to diskimage-builder
+

举例说明:

+
$ ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky
+

###### 允许ssh登录

+

初始化环境变量,然后制作镜像:

+
$ export DIB_DEV_USER_USERNAME=ipa
+$ export DIB_DEV_USER_PWDLESS_SUDO=yes
+$ export DIB_DEV_USER_PASSWORD='123'
+$ ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky -e selinux-permissive -e devuser
+

###### 指定代码仓库

+

初始化对应的环境变量,然后制作镜像:

+
# 指定仓库地址以及版本
+DIB_REPOLOCATION_ironic_python_agent=git@172.20.2.149:liuzz/ironic-python-agent.git
+DIB_REPOREF_ironic_python_agent=origin/develop
+
+# 直接从gerrit上clone代码
+DIB_REPOLOCATION_ironic_python_agent=https://review.opendev.org/openstack/ironic-python-agent
+DIB_REPOREF_ironic_python_agent=refs/changes/43/701043/1
+

参考:source-repositories

+

指定仓库地址及版本验证成功。

+

在Rocky中,我们还提供了ironic-inspector等服务,用户可根据自身需求安装。

+

Kolla 安装

+

Kolla为OpenStack服务提供生产环境可用的容器化部署的功能。openEuler 20.03 LTS SP2中已经引入了Kolla和Kolla-ansible服务,但是Kolla 以及 Kolla-ansible 原生并不支持 openEuler,因此 Openstack SIG 在openEuler 20.03 LTS SP3中提供了 openstack-kolla-plugin 和 openstack-kolla-ansible-plugin 这两个补丁包。

+

Kolla的安装十分简单,只需要安装对应的RPM包即可

+

支持 openEuler 版本:

+
yum install openstack-kolla-plugin openstack-kolla-ansible-plugin
+

不支持 openEuler 版本:

+
yum install openstack-kolla openstack-kolla-ansible
+

安装完后,就可以使用kolla-ansible, kolla-build, kolla-genpwd, kolla-mergepwd等命令了。

+
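An optional quick check that the tools are available on PATH after installation:

```
# Print usage information for the installed kolla tools
kolla-ansible --help
kolla-build --help
kolla-genpwd --help
```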

Trove 安装

+

Trove是OpenStack的数据库服务,如果用户使用OpenStack提供的数据库服务则推荐使用该组件。否则,可以不用安装。

+
    +
  1. 设置数据库
  2. +
+

数据库服务在数据库中存储信息,创建一个trove用户可以访问trove数据库,替换TROVE_DBPASSWORD为对应密码

+
$ mysql -u root -p
+
MariaDB [(none)]> CREATE DATABASE trove CHARACTER SET utf8;
+MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'localhost' \
+IDENTIFIED BY 'TROVE_DBPASSWORD';
+MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'%' \
+IDENTIFIED BY 'TROVE_DBPASSWORD';
+
    +
  1. 创建服务用户认证
  2. +
+

1、创建Trove服务用户

+

$ openstack user create --password TROVE_PASSWORD \
+                      --email trove@example.com trove
+$ openstack role add --project service --user trove admin
+$ openstack service create --name trove \
+                         --description "Database service" database
+ 解释: TROVE_PASSWORD 替换为trove用户的密码

+

2、创建Database服务访问入口

+

$ openstack endpoint create --region RegionOne database public http://$TROVE_NODE:8779/v1.0/%\(tenant_id\)s
+$ openstack endpoint create --region RegionOne database internal http://$TROVE_NODE:8779/v1.0/%\(tenant_id\)s
+$ openstack endpoint create --region RegionOne database admin http://$TROVE_NODE:8779/v1.0/%\(tenant_id\)s
+ 解释: $TROVE_NODE 替换为Trove的API服务部署节点

+
    +
  1. 安装和配置Trove各组件
  2. +
+

1、安装Trove

+

$ yum install openstack-trove python2-troveclient
+ 2、配置/etc/trove/trove.conf

+

[DEFAULT]
+bind_host=TROVE_NODE_IP
+log_dir = /var/log/trove
+
+auth_strategy = keystone
+# Config option for showing the IP address that nova doles out
+add_addresses = True
+network_label_regex = ^NETWORK_LABEL$
+api_paste_config = /etc/trove/api-paste.ini
+
+trove_auth_url = http://controller:35357/v3/
+nova_compute_url = http://controller:8774/v2
+cinder_url = http://controller:8776/v1
+
+nova_proxy_admin_user = admin
+nova_proxy_admin_pass = ADMIN_PASS
+nova_proxy_admin_tenant_name = service
+taskmanager_manager = trove.taskmanager.manager.Manager
+use_nova_server_config_drive = True
+
+# Set these if using Neutron Networking
+network_driver=trove.network.neutron.NeutronDriver
+network_label_regex=.*
+transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
+
+[database]
+connection = mysql+pymysql://trove:TROVE_DBPASS@controller/trove
+
+[keystone_authtoken]
+www_authenticate_uri = http://controller:5000/v3/
+auth_url=http://controller:35357/v3/
+#auth_uri = http://controller/identity
+#auth_url = http://controller/identity_admin
+auth_type = password
+project_domain_name = Default
+user_domain_name = Default
+project_name = service
+username = trove
+password = TROVE_PASS
+
+ 解释:
+ - [Default]分组中bind_host配置为Trove部署节点的IP
+ - nova_compute_url 和 cinder_url 为Nova和Cinder在Keystone中创建的endpoint
+ - nova_proxy_XXX 为一个能访问Nova服务的用户信息,上例中使用admin用户为例
+ - transport_url 为RabbitMQ连接信息,RABBIT_PASS替换为RabbitMQ的密码
+ - [database]分组中的connection 为前面在mysql中为Trove创建的数据库信息
+ - Trove的用户信息中TROVE_PASS替换为实际trove用户的密码

+

3、配置/etc/trove/trove-taskmanager.conf

+

[DEFAULT]
+log_dir = /var/log/trove
+trove_auth_url = http://controller/identity/v2.0
+nova_compute_url = http://controller:8774/v2
+cinder_url = http://controller:8776/v1
+transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
+
+[database]
+connection = mysql+pymysql://trove:TROVE_DBPASS@controller/trove
+ 解释: 参照trove.conf配置
+
+ 4、配置/etc/trove/trove-conductor.conf

+

[DEFAULT]
+log_dir = /var/log/trove
+trove_auth_url = http://controller/identity/v2.0
+nova_compute_url = http://controller:8774/v2
+cinder_url = http://controller:8776/v1
+transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
+
+[database]
+connection = mysql+pymysql://trove:trove@controller/trove
+ 解释: 参照trove.conf配置

+

5、配置/etc/trove/trove-guestagent.conf

+

[DEFAULT]
+rabbit_host = controller
+rabbit_password = RABBIT_PASS
+nova_proxy_admin_user = admin
+nova_proxy_admin_pass = ADMIN_PASS
+nova_proxy_admin_tenant_name = service
+trove_auth_url = http://controller/identity_admin/v2.0
+ 解释: guestagent是trove中一个独立组件,需要预先内置到Trove通过Nova创建的虚拟机镜像中,在创建好数据库实例后,会起guestagent进程,负责通过消息队列(RabbitMQ)向Trove上报心跳,因此需要配置RabbitMQ的用户和密码信息。

+

6、生成数据Trove数据库表

+
$ su -s /bin/sh -c "trove-manage db_sync" trove
+
    +
  1. 完成安装配置

     1、配置Trove服务自启动
  2. +
+

$ systemctl enable openstack-trove-api.service \
+openstack-trove-taskmanager.service \
+openstack-trove-conductor.service 
+ 2、启动服务

+
$ systemctl start openstack-trove-api.service \
+openstack-trove-taskmanager.service \
+openstack-trove-conductor.service
+

Rally 安装

+

Rally是OpenStack提供的性能测试工具。只需要简单的安装即可。

+
yum install openstack-rally openstack-rally-plugins
diff --git a/site/install/openEuler-20.03-LTS-SP3/OpenStack-train/index.html b/site/install/openEuler-20.03-LTS-SP3/OpenStack-train/index.html
new file mode 100644
index 0000000000000000000000000000000000000000..f4f369789d5ac9c9ca528538d0fa9f80144b5e9f
--- /dev/null
+++ b/site/install/openEuler-20.03-LTS-SP3/OpenStack-train/index.html
@@ -0,0 +1,2307 @@
+ openEuler-20.03-LTS-SP3_Train - OpenStack SIG Doc

OpenStack-Train 部署指南

+ +

OpenStack 简介

+

OpenStack 是一个社区,也是一个项目。它提供了一个部署云的操作平台或工具集,为组织提供可扩展的、灵活的云计算。

+

作为一个开源的云计算管理平台,OpenStack 由nova、cinder、neutron、glance、keystone、horizon等几个主要的组件组合起来完成具体工作。OpenStack 支持几乎所有类型的云环境,项目目标是提供实施简单、可大规模扩展、丰富、标准统一的云计算管理平台。OpenStack 通过各种互补的服务提供了基础设施即服务(IaaS)的解决方案,每个服务提供 API 进行集成。

+

openEuler 20.03-LTS-SP3 版本官方源已经支持 OpenStack-Train 版本,用户可以配置好 yum 源后根据此文档进行 OpenStack 部署。

+

约定

+

OpenStack 支持多种形态部署,此文档支持ALL in One以及Distributed两种部署方式,按照如下方式约定:

+

ALL in One模式:

+
忽略所有可能的后缀
+

Distributed模式:

+
以 `(CTL)` 为后缀表示此条配置或者命令仅适用`控制节点`
+以 `(CPT)` 为后缀表示此条配置或者命令仅适用`计算节点`
+以 `(STG)` 为后缀表示此条配置或者命令仅适用`存储节点`
+除此之外表示此条配置或者命令同时适用`控制节点`和`计算节点`
+

注意

+

涉及到以上约定的服务如下:

+
    +
  • Cinder
  • +
  • Nova
  • +
  • Neutron
  • +
+

准备环境

+

环境配置

+
    +
  1. +

    配置 20.03-LTS-SP3 官方yum源,需要启用EPOL软件仓以支持OpenStack

    +
    # Quote the heredoc delimiter so $basearch is written literally into the repo file (not expanded by the shell)
    +cat << 'EOF' >> /etc/yum.repos.d/20.03-LTS-SP3-OpenStack_Train.repo
    +[OS]
    +name=OS
    +baseurl=http://repo.openeuler.org/openEuler-20.03-LTS-SP3/OS/$basearch/
    +enabled=1
    +gpgcheck=1
    +gpgkey=http://repo.openeuler.org/openEuler-20.03-LTS-SP3/OS/$basearch/RPM-GPG-KEY-openEuler
    +
    +[everything]
    +name=everything
    +baseurl=http://repo.openeuler.org/openEuler-20.03-LTS-SP3/everything/$basearch/
    +enabled=1
    +gpgcheck=1
    +gpgkey=http://repo.openeuler.org/openEuler-20.03-LTS-SP3/everything/$basearch/RPM-GPG-KEY-openEuler
    +
    +[EPOL]
    +name=EPOL
    +baseurl=http://repo.openeuler.org/openEuler-20.03-LTS-SP3/EPOL/main/$basearch/
    +enabled=1
    +gpgcheck=1
    +gpgkey=http://repo.openeuler.org/openEuler-20.03-LTS-SP3/OS/$basearch/RPM-GPG-KEY-openEuler
    +EOF
    +
    +yum clean all && yum makecache
    +
  2. +
  3. +

    修改主机名以及映射

    +

    设置各个节点的主机名

    +
    hostnamectl set-hostname controller                                                            (CTL)
    +hostnamectl set-hostname compute                                                               (CPT)
    +

    假设controller节点的IP是10.0.0.11,compute节点的IP是10.0.0.12(如果存在的话),则于/etc/hosts新增如下:

    +
    10.0.0.11   controller
    +10.0.0.12   compute
    +
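    An optional check that name resolution works on every node after editing /etc/hosts (the compute entry only exists in Distributed mode):

    ```
    # Each node should be able to resolve and reach the others by name
    ping -c 1 controller
    ping -c 1 compute
    ```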
  4. +
+

安装 SQL DataBase

+
    +
  1. +

    执行如下命令,安装软件包。

    +
    yum install mariadb mariadb-server python3-PyMySQL
    +
  2. +
  3. +

    执行如下命令,创建并编辑 /etc/my.cnf.d/openstack.cnf 文件。

    +
    vim /etc/my.cnf.d/openstack.cnf
    +
    +[mysqld]
    +bind-address = 10.0.0.11
    +default-storage-engine = innodb
    +innodb_file_per_table = on
    +max_connections = 4096
    +collation-server = utf8_general_ci
    +character-set-server = utf8
    +

    注意

    +

    其中 bind-address 设置为控制节点的管理IP地址。

    +
  4. +
  5. +

    启动 DataBase 服务,并为其配置开机自启动:

    +
    systemctl enable mariadb.service
    +systemctl start mariadb.service
    +
  6. +
  7. +

    配置DataBase的默认密码(可选)

    +
    mysql_secure_installation
    +

    注意

    +

    根据提示进行即可

    +
  8. +
+

安装 RabbitMQ

+
    +
  1. +

    执行如下命令,安装软件包。

    +
    yum install rabbitmq-server
    +
  2. +
  3. +

    启动 RabbitMQ 服务,并为其配置开机自启动。

    +
    systemctl enable rabbitmq-server.service
    +systemctl start rabbitmq-server.service
    +
  4. +
  5. +

    添加 OpenStack用户。

    +
    rabbitmqctl add_user openstack RABBIT_PASS
    +

    注意

    +

    替换 RABBIT_PASS,为 OpenStack 用户设置密码

    +
  6. +
  7. +

    设置openstack用户权限,允许进行配置、写、读:

    +
    rabbitmqctl set_permissions openstack ".*" ".*" ".*"
    +
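    An optional check that the user and its permissions were applied:

    ```
    # The openstack user should be listed with ".*" configure/write/read permissions
    rabbitmqctl list_users
    rabbitmqctl list_permissions
    ```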
  8. +
+

安装 Memcached

+
    +
  1. +

    执行如下命令,安装依赖软件包。

    +
    yum install memcached python3-memcached
    +
  2. +
  3. +

    编辑 /etc/sysconfig/memcached 文件。

    +
    vim /etc/sysconfig/memcached
    +
    +OPTIONS="-l 127.0.0.1,::1,controller"
    +
  4. +
  5. +

    执行如下命令,启动 Memcached 服务,并为其配置开机启动。

    +
    systemctl enable memcached.service
    +systemctl start memcached.service
    +

    注意

    +

    服务启动后,可以通过命令memcached-tool controller stats确保启动正常,服务可用,其中可以将controller替换为控制节点的管理IP地址。

    +
  6. +
+

安装 OpenStack

+

Keystone 安装

+
    +
  1. +

    创建 keystone 数据库并授权。

    +
    mysql -u root -p
    +
    +MariaDB [(none)]> CREATE DATABASE keystone;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
    +IDENTIFIED BY 'KEYSTONE_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
    +IDENTIFIED BY 'KEYSTONE_DBPASS';
    +MariaDB [(none)]> exit
    +

    注意

    +

    替换 KEYSTONE_DBPASS,为 Keystone 数据库设置密码

    +
  2. +
  3. +

    安装软件包。

    +
    yum install openstack-keystone httpd mod_wsgi
    +
  4. +
  5. +

    配置keystone相关配置

    +
    vim /etc/keystone/keystone.conf
    +
    +[database]
    +connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone
    +
    +[token]
    +provider = fernet
    +

    解释

    +

    [database]部分,配置数据库入口

    +

    [token]部分,配置token provider

    +

    注意:

    +

    替换 KEYSTONE_DBPASS 为 Keystone 数据库的密码

    +
  6. +
  7. +

    同步数据库。

    +
    su -s /bin/sh -c "keystone-manage db_sync" keystone
    +
  8. +
  9. +

    初始化Fernet密钥仓库。

    +
    keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
    +keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
    +
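    An optional check: both key repositories should now exist and be owned by the keystone user:

    ```
    ls -l /etc/keystone/fernet-keys/ /etc/keystone/credential-keys/
    ```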
  10. +
  11. +

    启动服务。

    +
    keystone-manage bootstrap --bootstrap-password ADMIN_PASS \
    +--bootstrap-admin-url http://controller:5000/v3/ \
    +--bootstrap-internal-url http://controller:5000/v3/ \
    +--bootstrap-public-url http://controller:5000/v3/ \
    +--bootstrap-region-id RegionOne
    +

    注意

    +

    替换 ADMIN_PASS,为 admin 用户设置密码

    +
  12. +
  13. +

    配置Apache HTTP server

    +
    vim /etc/httpd/conf/httpd.conf
    +
    +ServerName controller
    +
    ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
    +

    解释

    +

    配置 ServerName 项引用控制节点

    +

    注意 +如果 ServerName 项不存在则需要创建

    +
  14. +
  15. +

    启动Apache HTTP服务。

    +
    systemctl enable httpd.service
    +systemctl start httpd.service
    +
  16. +
  17. +

    创建环境变量配置。

    +
    cat << EOF >> ~/.admin-openrc
    +export OS_PROJECT_DOMAIN_NAME=Default
    +export OS_USER_DOMAIN_NAME=Default
    +export OS_PROJECT_NAME=admin
    +export OS_USERNAME=admin
    +export OS_PASSWORD=ADMIN_PASS
    +export OS_AUTH_URL=http://controller:5000/v3
    +export OS_IDENTITY_API_VERSION=3
    +export OS_IMAGE_API_VERSION=2
    +EOF
    +

    注意

    +

    替换 ADMIN_PASS 为 admin 用户的密码

    +
  18. +
  19. +

    依次创建domain, projects, users, roles,需要先安装好python3-openstackclient:

    +
    yum install python3-openstackclient
    +

    导入环境变量

    +
    source ~/.admin-openrc
    +

    创建project service,其中 domain default 在 keystone-manage bootstrap 时已创建

    +
    openstack domain create --description "An Example Domain" example
    +
    openstack project create --domain default --description "Service Project" service
    +

    创建(non-admin)project myproject、user myuser 和 role myrole,为 myproject 中的用户 myuser 添加角色 myrole

    +
    openstack project create --domain default --description "Demo Project" myproject
    +openstack user create --domain default --password-prompt myuser
    +openstack role create myrole
    +openstack role add --project myproject --user myuser myrole
    +
  20. +
  21. +

    验证

    +

    取消临时环境变量OS_AUTH_URL和OS_PASSWORD:

    +
    source ~/.admin-openrc
    +unset OS_AUTH_URL OS_PASSWORD
    +

    为admin用户请求token:

    +
    openstack --os-auth-url http://controller:5000/v3 \
    +--os-project-domain-name Default --os-user-domain-name Default \
    +--os-project-name admin --os-username admin token issue
    +

    为myuser用户请求token:

    +
    openstack --os-auth-url http://controller:5000/v3 \
    +--os-project-domain-name Default --os-user-domain-name Default \
    +--os-project-name myproject --os-username myuser token issue
    +
  22. +
+

Glance 安装

+
    +
  1. +

    创建数据库、服务凭证和 API 端点

    +

    创建数据库:

    +
    mysql -u root -p
    +
    +MariaDB [(none)]> CREATE DATABASE glance;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
    +IDENTIFIED BY 'GLANCE_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
    +IDENTIFIED BY 'GLANCE_DBPASS';
    +MariaDB [(none)]> exit
    +

    注意:

    +

    替换 GLANCE_DBPASS,为 glance 数据库设置密码

    +

    创建服务凭证

    +
    source ~/.admin-openrc
    +
    +openstack user create --domain default --password-prompt glance
    +openstack role add --project service --user glance admin
    +openstack service create --name glance --description "OpenStack Image" image
    +

    创建镜像服务API端点:

    +
    openstack endpoint create --region RegionOne image public http://controller:9292
    +openstack endpoint create --region RegionOne image internal http://controller:9292
    +openstack endpoint create --region RegionOne image admin http://controller:9292
    +
  2. +
  3. +

    安装软件包

    +
    yum install openstack-glance
    +
  4. +
  5. +

    配置glance相关配置:

    +
    vim /etc/glance/glance-api.conf
    +
    +[database]
    +connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
    +
    +[keystone_authtoken]
    +www_authenticate_uri  = http://controller:5000
    +auth_url = http://controller:5000
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +project_name = service
    +username = glance
    +password = GLANCE_PASS
    +
    +[paste_deploy]
    +flavor = keystone
    +
    +[glance_store]
    +stores = file,http
    +default_store = file
    +filesystem_store_datadir = /var/lib/glance/images/
    +

    解释:

    +

    [database]部分,配置数据库入口

    +

    [keystone_authtoken] [paste_deploy]部分,配置身份认证服务入口

    +

    [glance_store]部分,配置本地文件系统存储和镜像文件的位置

    +

    注意

    +

    替换 GLANCE_DBPASS 为 glance 数据库的密码

    +

    替换 GLANCE_PASS 为 glance 用户的密码

    +
  6. +
  7. +

    同步数据库:

    +
    su -s /bin/sh -c "glance-manage db_sync" glance
    +
  8. +
  9. +

    启动服务:

    +
    systemctl enable openstack-glance-api.service
    +systemctl start openstack-glance-api.service
    +
  10. +
  11. +

    验证

    +

    下载镜像

    +
    source ~/.admin-openrc
    +
    +wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
    +

    注意

    +

    如果您使用的环境是鲲鹏架构,请下载aarch64版本的镜像;已对镜像cirros-0.5.2-aarch64-disk.img进行测试。

    +
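    For the aarch64 image mentioned in the note, a hedged example assuming the standard cirros download layout:

    ```
    wget http://download.cirros-cloud.net/0.5.2/cirros-0.5.2-aarch64-disk.img
    ```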

    向Image服务上传镜像:

    +
    openstack image create --disk-format qcow2 --container-format bare \
    +                       --file cirros-0.4.0-x86_64-disk.img --public cirros
    +

    确认镜像上传并验证属性:

    +
    openstack image list
    +
  12. +
+

Placement安装

+
    +
  1. +

    创建数据库、服务凭证和 API 端点

    +

    创建数据库:

    +

    作为 root 用户访问数据库,创建 placement 数据库并授权。

    +
    mysql -u root -p
    +MariaDB [(none)]> CREATE DATABASE placement;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' \
    +IDENTIFIED BY 'PLACEMENT_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' \
    +IDENTIFIED BY 'PLACEMENT_DBPASS';
    +MariaDB [(none)]> exit
    +

    注意

    +

    替换 PLACEMENT_DBPASS 为 placement 数据库设置密码

    +
    source ~/.admin-openrc
    +

    执行如下命令,创建 placement 服务凭证、创建 placement 用户以及添加‘admin’角色到用户‘placement’。

    +

    创建Placement API服务

    +
    openstack user create --domain default --password-prompt placement
    +openstack role add --project service --user placement admin
    +openstack service create --name placement --description "Placement API" placement
    +

    创建placement服务API端点:

    +
    openstack endpoint create --region RegionOne placement public http://controller:8778
    +openstack endpoint create --region RegionOne placement internal http://controller:8778
    +openstack endpoint create --region RegionOne placement admin http://controller:8778
    +
  2. +
  3. +

    安装和配置

    +

    安装软件包:

    +
    yum install openstack-placement-api
    +

    配置placement:

    +

    编辑 /etc/placement/placement.conf 文件:

    +

    在[placement_database]部分,配置数据库入口

    +

    在[api] [keystone_authtoken]部分,配置身份认证服务入口

    +
    # vim /etc/placement/placement.conf
    +[placement_database]
    +# ...
    +connection = mysql+pymysql://placement:PLACEMENT_DBPASS@controller/placement
    +[api]
    +# ...
    +auth_strategy = keystone
    +[keystone_authtoken]
    +# ...
    +auth_url = http://controller:5000/v3
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +project_name = service
    +username = placement
    +password = PLACEMENT_PASS
    +

    其中,替换 PLACEMENT_DBPASS 为 placement 数据库的密码,替换 PLACEMENT_PASS 为 placement 用户的密码。

    +

    同步数据库:

    +
    su -s /bin/sh -c "placement-manage db sync" placement
    +

    启动httpd服务:

    +
    systemctl restart httpd
    +
  4. +
  5. +

    验证

    +

    执行如下命令,执行状态检查:

    +
    source ~/.admin-openrc
    +placement-status upgrade check
    +

    安装osc-placement,列出可用的资源类别及特性:

    +
    yum install python3-osc-placement
    +openstack --os-placement-api-version 1.2 resource class list --sort-column name
    +openstack --os-placement-api-version 1.6 trait list --sort-column name
    +
  6. +
+

Nova 安装

+
    +
  1. +

    创建数据库、服务凭证和 API 端点

    +

    创建数据库:

    +
    mysql -u root -p                                                                               (CTL)
    +
    +MariaDB [(none)]> CREATE DATABASE nova_api;
    +MariaDB [(none)]> CREATE DATABASE nova;
    +MariaDB [(none)]> CREATE DATABASE nova_cell0;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> exit
    +

    注意

    +

    替换NOVA_DBPASS,为nova数据库设置密码

    +
    source ~/.admin-openrc                                                                         (CTL)
    +

    创建nova服务凭证:

    +
    openstack user create --domain default --password-prompt nova                                  (CTL)
    +openstack role add --project service --user nova admin                                         (CTL)
    +openstack service create --name nova --description "OpenStack Compute" compute                 (CTL)
    +

    创建nova API端点:

    +
    openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1        (CTL)
    +openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1      (CTL)
    +openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1         (CTL)
    +
  2. +
  3. +

    安装软件包

    +
    yum install openstack-nova-api openstack-nova-conductor \                                      (CTL)
    +openstack-nova-novncproxy openstack-nova-scheduler 
    +
    +yum install openstack-nova-compute                                                             (CPT)
    +

    注意

    +

    如果为arm64结构,还需要执行以下命令

    +
    yum install edk2-aarch64                                                                       (CPT)
    +
  4. +
  5. +

    配置nova相关配置

    +
    vim /etc/nova/nova.conf
    +
    +[DEFAULT]
    +enabled_apis = osapi_compute,metadata
    +transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
    +my_ip = 10.0.0.11
    +use_neutron = true
    +firewall_driver = nova.virt.firewall.NoopFirewallDriver
    +compute_driver=libvirt.LibvirtDriver                                                           (CPT)
    +instances_path = /var/lib/nova/instances/                                                      (CPT)
    +lock_path = /var/lib/nova/tmp                                                                  (CPT)
    +logdir = /var/log/nova/
    +
    +[api_database]
    +connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api                              (CTL)
    +
    +[database]
    +connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova                                  (CTL)
    +
    +[api]
    +auth_strategy = keystone
    +
    +[keystone_authtoken]
    +www_authenticate_uri = http://controller:5000/
    +auth_url = http://controller:5000/
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +project_name = service
    +username = nova
    +password = NOVA_PASS
    +
    +[vnc]
    +enabled = true
    +server_listen = $my_ip
    +server_proxyclient_address = $my_ip
    +novncproxy_base_url = http://controller:6080/vnc_auto.html                                     (CPT)
    +
    +[glance]
    +api_servers = http://controller:9292
    +
    +[oslo_concurrency]
    +lock_path = /var/lib/nova/tmp                                                                  (CTL)
    +
    +[placement]
    +region_name = RegionOne
    +project_domain_name = Default
    +project_name = service
    +auth_type = password
    +user_domain_name = Default
    +auth_url = http://controller:5000/v3
    +username = placement
    +password = PLACEMENT_PASS
    +
    +[neutron]
    +auth_url = http://controller:5000
    +auth_type = password
    +project_domain_name = default
    +user_domain_name = default
    +region_name = RegionOne
    +project_name = service
    +username = neutron
    +password = NEUTRON_PASS
    +service_metadata_proxy = true                                                                  (CTL)
    +metadata_proxy_shared_secret = METADATA_SECRET                                                 (CTL)
    +

    解释

    +

    [default]部分,启用计算和元数据的API,配置RabbitMQ消息队列入口,配置my_ip,启用网络服务neutron;

    +

    [api_database] [database]部分,配置数据库入口;

    +

    [api] [keystone_authtoken]部分,配置身份认证服务入口;

    +

    [vnc]部分,启用并配置远程控制台入口;

    +

    [glance]部分,配置镜像服务API的地址;

    +

    [oslo_concurrency]部分,配置lock path;

    +

    [placement]部分,配置placement服务的入口。

    +

    注意

    +

    替换 RABBIT_PASS 为 RabbitMQ 中 openstack 账户的密码;

    +

    配置 my_ip 为控制节点的管理IP地址;

    +

    替换 NOVA_DBPASS 为nova数据库的密码;

    +

    替换 NOVA_PASS 为nova用户的密码;

    +

    替换 PLACEMENT_PASS 为placement用户的密码;

    +

    替换 NEUTRON_PASS 为neutron用户的密码;

    +

    替换METADATA_SECRET为合适的元数据代理secret。

    +

    额外

    +

    确定是否支持虚拟机硬件加速(x86架构):

    +
    egrep -c '(vmx|svm)' /proc/cpuinfo                                                             (CPT)
    +

    如果返回值为0则不支持硬件加速,需要配置libvirt使用QEMU而不是KVM:

    +
    vim /etc/nova/nova.conf                                                                        (CPT)
    +
    +[libvirt]
    +virt_type = qemu
    +

    如果返回值为1或更大的值,则支持硬件加速,则virt_type可以配置为kvm

    +

    注意

    +

    如果为arm64结构,还需要在计算节点执行以下命令

    +
    
    +mkdir -p /usr/share/AAVMF
    +chown nova:nova /usr/share/AAVMF
    +
    +ln -s /usr/share/edk2/aarch64/QEMU_EFI-pflash.raw \
    +      /usr/share/AAVMF/AAVMF_CODE.fd
    +ln -s /usr/share/edk2/aarch64/vars-template-pflash.raw \
    +      /usr/share/AAVMF/AAVMF_VARS.fd
    +
    +vim /etc/libvirt/qemu.conf
    +
    +nvram = ["/usr/share/AAVMF/AAVMF_CODE.fd:/usr/share/AAVMF/AAVMF_VARS.fd",
    +         "/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw:/usr/share/edk2/aarch64/vars-template-pflash.raw"]
    +

    并且当ARM架构下的部署环境为嵌套虚拟化时,libvirt配置如下:

    +
    [libvirt]
    +virt_type = qemu
    +cpu_mode = custom
    +cpu_model = cortex-a72
    +
  6. +
  7. +

    同步数据库

    +

    同步nova-api数据库:

    +
    su -s /bin/sh -c "nova-manage api_db sync" nova                                                (CTL)
    +

    注册cell0数据库:

    +
    su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova                                          (CTL)
    +

    创建cell1 cell:

    +
    su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova                 (CTL)
    +

    同步nova数据库:

    +
    su -s /bin/sh -c "nova-manage db sync" nova                                                    (CTL)
    +

    验证cell0和cell1注册正确:

    +
    su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova                                         (CTL)
    +

    添加计算节点到openstack集群

    +
    su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova                           (CTL)
    +
  8. +
  9. +

    启动服务

    +
    systemctl enable \                                                                             (CTL)
    +openstack-nova-api.service \
    +openstack-nova-scheduler.service \
    +openstack-nova-conductor.service \
    +openstack-nova-novncproxy.service
    +
    +systemctl start \                                                                              (CTL)
    +openstack-nova-api.service \
    +openstack-nova-scheduler.service \
    +openstack-nova-conductor.service \
    +openstack-nova-novncproxy.service
    +
    systemctl enable libvirtd.service openstack-nova-compute.service                               (CPT)
    +systemctl start libvirtd.service openstack-nova-compute.service                                (CPT)
    +
  10. +
  11. +

    验证

    +
    source ~/.admin-openrc                                                                         (CTL)
    +

    列出服务组件,验证每个流程都成功启动和注册:

    +
    openstack compute service list                                                                 (CTL)
    +

    列出身份服务中的API端点,验证与身份服务的连接:

    +
    openstack catalog list                                                                         (CTL)
    +

    列出镜像服务中的镜像,验证与镜像服务的连接:

    +
    openstack image list                                                                           (CTL)
    +

    检查cells是否运作成功,以及其他必要条件是否已具备。

    +
    nova-status upgrade check                                                                      (CTL)
    +
  12. +
+

Neutron 安装

+
    +
  1. +

    创建数据库、服务凭证和 API 端点

    +

    创建数据库:

    +
    mysql -u root -p                                                                               (CTL)
    +
    +MariaDB [(none)]> CREATE DATABASE neutron;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
    +IDENTIFIED BY 'NEUTRON_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
    +IDENTIFIED BY 'NEUTRON_DBPASS';
    +MariaDB [(none)]> exit
    +

    注意

    +

    替换 NEUTRON_DBPASS 为 neutron 数据库设置密码。

    +
    source ~/.admin-openrc                                                                         (CTL)
    +

    创建neutron服务凭证

    +
    openstack user create --domain default --password-prompt neutron                               (CTL)
    +openstack role add --project service --user neutron admin                                      (CTL)
    +openstack service create --name neutron --description "OpenStack Networking" network           (CTL)
    +

    创建Neutron服务API端点:

    +
    openstack endpoint create --region RegionOne network public http://controller:9696             (CTL)
    +openstack endpoint create --region RegionOne network internal http://controller:9696           (CTL)
    +openstack endpoint create --region RegionOne network admin http://controller:9696              (CTL)
    +
  2. +
  3. +

    安装软件包:

    +
    yum install openstack-neutron openstack-neutron-linuxbridge ebtables ipset \                   (CTL)
    +openstack-neutron-ml2
    +
    yum install openstack-neutron-linuxbridge ebtables ipset                                       (CPT)
    +
  4. +
  5. +

    配置neutron相关配置:

    +

    配置主体配置

    +
    vim /etc/neutron/neutron.conf
    +
    +[database]
    +connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron                         (CTL)
    +
    +[DEFAULT]
    +core_plugin = ml2                                                                              (CTL)
    +service_plugins = router                                                                       (CTL)
    +allow_overlapping_ips = true                                                                   (CTL)
    +transport_url = rabbit://openstack:RABBIT_PASS@controller
    +auth_strategy = keystone
    +notify_nova_on_port_status_changes = true                                                      (CTL)
    +notify_nova_on_port_data_changes = true                                                        (CTL)
    +api_workers = 3                                                                                (CTL)
    +
    +[keystone_authtoken]
    +www_authenticate_uri = http://controller:5000
    +auth_url = http://controller:5000
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +project_name = service
    +username = neutron
    +password = NEUTRON_PASS
    +
    +[nova]
    +auth_url = http://controller:5000                                                              (CTL)
    +auth_type = password                                                                           (CTL)
    +project_domain_name = Default                                                                  (CTL)
    +user_domain_name = Default                                                                     (CTL)
    +region_name = RegionOne                                                                        (CTL)
    +project_name = service                                                                         (CTL)
    +username = nova                                                                                (CTL)
    +password = NOVA_PASS                                                                           (CTL)
    +
    +[oslo_concurrency]
    +lock_path = /var/lib/neutron/tmp
    +

    解释

    +

    [database]部分,配置数据库入口;

    +

    [DEFAULT]部分,启用ml2插件和router插件,允许ip地址重叠,配置RabbitMQ消息队列入口;

    +

    [DEFAULT]和[keystone_authtoken]部分,配置身份认证服务入口;

    +

    [DEFAULT]和[nova]部分,配置网络来通知计算网络拓扑的变化;

    +

    [oslo_concurrency]部分,配置lock path。

    +

    注意

    +

    替换NEUTRON_DBPASS为 neutron 数据库的密码;

    +

    替换RABBIT_PASS为 RabbitMQ中openstack 账户的密码;

    +

    替换NEUTRON_PASS为 neutron 用户的密码;

    +

    替换NOVA_PASS为 nova 用户的密码。

    +

    配置ML2插件:

    +
    vim /etc/neutron/plugins/ml2/ml2_conf.ini                                                      (CTL)
    +
    +[ml2]
    +type_drivers = flat,vlan,vxlan
    +tenant_network_types = vxlan
    +mechanism_drivers = linuxbridge,l2population
    +extension_drivers = port_security
    +
    +[ml2_type_flat]
    +flat_networks = provider
    +
    +[ml2_type_vxlan]
    +vni_ranges = 1:1000
    +
    +[securitygroup]
    +enable_ipset = true
    +

    创建/etc/neutron/plugin.ini的符号链接

    +
    ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
    +

    注意

    +

    [ml2]部分,启用 flat、vlan、vxlan 网络,启用 linuxbridge 及 l2population 机制,启用端口安全扩展驱动;

    +

    [ml2_type_flat]部分,配置 flat 网络为 provider 虚拟网络;

    +

    [ml2_type_vxlan]部分,配置 VXLAN 网络标识符范围;

    +

    [securitygroup]部分,配置允许 ipset。

    +

    补充

    +

    l2 的具体配置可以根据用户需求自行修改,本文使用的是provider network + linuxbridge

    +

    配置 Linux bridge 代理:

    +
    vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
    +
    +[linux_bridge]
    +physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME
    +
    +[vxlan]
    +enable_vxlan = true
    +local_ip = OVERLAY_INTERFACE_IP_ADDRESS
    +l2_population = true
    +
    +[securitygroup]
    +enable_security_group = true
    +firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
    +

    解释

    +

    [linux_bridge]部分,映射 provider 虚拟网络到物理网络接口;

    +

    [vxlan]部分,启用 vxlan 覆盖网络,配置处理覆盖网络的物理网络接口 IP 地址,启用 layer-2 population;

    +

    [securitygroup]部分,允许安全组,配置 linux bridge iptables 防火墙驱动。

    +

    注意

    +

    替换PROVIDER_INTERFACE_NAME为物理网络接口;

    +

    替换OVERLAY_INTERFACE_IP_ADDRESS为控制节点的管理IP地址。

    +

    配置Layer-3代理:

    +
    vim /etc/neutron/l3_agent.ini                                                                  (CTL)
    +
    +[DEFAULT]
    +interface_driver = linuxbridge
    +

    解释

    +

    在[default]部分,配置接口驱动为linuxbridge

    +

    配置DHCP代理:

    +
    vim /etc/neutron/dhcp_agent.ini                                                                (CTL)
    +
    +[DEFAULT]
    +interface_driver = linuxbridge
    +dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
    +enable_isolated_metadata = true
    +

    解释

    +

    [default]部分,配置linuxbridge接口驱动、Dnsmasq DHCP驱动,启用隔离的元数据。

    +

    配置metadata代理:

    +
    vim /etc/neutron/metadata_agent.ini                                                            (CTL)
    +
    +[DEFAULT]
    +nova_metadata_host = controller
    +metadata_proxy_shared_secret = METADATA_SECRET
    +

    解释

    +

    [default]部分,配置元数据主机和shared secret。

    +

    注意

    +

    替换METADATA_SECRET为合适的元数据代理secret。

    +
  6. +
  7. +

    配置nova相关配置

    +
    vim /etc/nova/nova.conf
    +
    +[neutron]
    +auth_url = http://controller:5000
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +region_name = RegionOne
    +project_name = service
    +username = neutron
    +password = NEUTRON_PASS
    +service_metadata_proxy = true                                                                  (CTL)
    +metadata_proxy_shared_secret = METADATA_SECRET                                                 (CTL)
    +

    解释

    +

    [neutron]部分,配置访问参数,启用元数据代理,配置secret。

    +

    注意

    +

    替换NEUTRON_PASS为 neutron 用户的密码;

    +

    替换METADATA_SECRET为合适的元数据代理secret。

    +
  8. +
  9. +

    同步数据库:

    +
    su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \                  (CTL)
    +--config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
    +
  10. +
  11. +

    重启计算API服务:

    +
    systemctl restart openstack-nova-api.service
    +
  12. +
  13. +

    启动网络服务

    +
    systemctl enable neutron-server.service neutron-linuxbridge-agent.service \                    (CTL)
    +neutron-dhcp-agent.service neutron-metadata-agent.service \
    +neutron-l3-agent.service
    +
    +systemctl restart neutron-server.service neutron-linuxbridge-agent.service \                   (CTL)
    +neutron-dhcp-agent.service neutron-metadata-agent.service \
    +neutron-l3-agent.service
    +
    +systemctl enable neutron-linuxbridge-agent.service                                             (CPT)
    +systemctl restart neutron-linuxbridge-agent.service openstack-nova-compute.service             (CPT)
    +
  14. +
  15. +

    验证

    +

    验证 neutron 代理启动成功:

    +
    openstack network agent list
    +
  16. +
+

Cinder 安装

+
    +
  1. +

    创建数据库、服务凭证和 API 端点

    +

    创建数据库:

    +
    mysql -u root -p
    +
    +MariaDB [(none)]> CREATE DATABASE cinder;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \
    +IDENTIFIED BY 'CINDER_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \
    +IDENTIFIED BY 'CINDER_DBPASS';
    +MariaDB [(none)]> exit
    +

    注意

    +

    替换 CINDER_DBPASS 为cinder数据库设置密码。

    +
    source ~/.admin-openrc
    +

    创建cinder服务凭证:

    +
    openstack user create --domain default --password-prompt cinder
    +openstack role add --project service --user cinder admin
    +openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
    +openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
    +

    创建块存储服务API端点:

    +
    openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(project_id\)s
    +openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(project_id\)s
    +openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(project_id\)s
    +openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s
    +openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s
    +openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s
    +
  2. +
  3. +

    安装软件包:

    +
    yum install openstack-cinder-api openstack-cinder-scheduler                                    (CTL)
    +
    yum install lvm2 device-mapper-persistent-data scsi-target-utils rpcbind nfs-utils \           (STG)
    +            openstack-cinder-volume openstack-cinder-backup
    +
  4. +
  5. +

    准备存储设备,以下仅为示例:

    +
    pvcreate /dev/vdb
    +vgcreate cinder-volumes /dev/vdb
    +
    +vim /etc/lvm/lvm.conf
    +
    +
    +devices {
    +...
    +filter = [ "a/vdb/", "r/.*/"]
    +}
    +

    解释

    +

    在devices部分,添加过滤以接受/dev/vdb设备拒绝其他设备。

    +
  6. +
  7. +

    准备NFS

    +
    mkdir -p /root/cinder/backup
    +
    +cat << EOF >> /etc/exports
    +/root/cinder/backup 192.168.1.0/24(rw,sync,no_root_squash,no_all_squash)
    +EOF
    +
    +
  8. +
  9. +

    配置cinder相关配置:

    +
    vim /etc/cinder/cinder.conf
    +
    +[DEFAULT]
    +transport_url = rabbit://openstack:RABBIT_PASS@controller
    +auth_strategy = keystone
    +my_ip = 10.0.0.11
    +enabled_backends = lvm                                                                         (STG)
    +backup_driver=cinder.backup.drivers.nfs.NFSBackupDriver                                        (STG)
    +backup_share=HOST:PATH                                                                         (STG)
    +
    +[database]
    +connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder
    +
    +[keystone_authtoken]
    +www_authenticate_uri = http://controller:5000
    +auth_url = http://controller:5000
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +project_name = service
    +username = cinder
    +password = CINDER_PASS
    +
    +[oslo_concurrency]
    +lock_path = /var/lib/cinder/tmp
    +
    +[lvm]
    +volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver                                      (STG)
    +volume_group = cinder-volumes                                                                  (STG)
    +iscsi_protocol = iscsi                                                                         (STG)
    +iscsi_helper = tgtadm                                                                          (STG)
    +

    解释

    +

    [database]部分,配置数据库入口;

    +

    [DEFAULT]部分,配置RabbitMQ消息队列入口,配置my_ip;

    +

    [DEFAULT] [keystone_authtoken]部分,配置身份认证服务入口;

    +

    [oslo_concurrency]部分,配置lock path。

    +

    注意

    +

    替换CINDER_DBPASS为 cinder 数据库的密码;

    +

    替换RABBIT_PASS为 RabbitMQ 中 openstack 账户的密码;

    +

    配置my_ip为控制节点的管理 IP 地址;

    +

    替换CINDER_PASS为 cinder 用户的密码;

    +

    替换HOST:PATH为 NFS的HOSTIP和共享路径;

    +
  10. +
  11. +

    同步数据库:

    +
    su -s /bin/sh -c "cinder-manage db sync" cinder                                                (CTL)
    +
  12. +
  13. +

    配置nova:

    +
    vim /etc/nova/nova.conf                                                                        (CTL)
    +
    +[cinder]
    +os_region_name = RegionOne
    +
  14. +
  15. +

    重启计算API服务

    +
    systemctl restart openstack-nova-api.service
    +
  16. +
  17. +

    启动cinder服务

    +
    systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service               (CTL)
    +systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service                (CTL)
    +
    systemctl enable rpcbind.service nfs-server.service tgtd.service iscsid.service \              (STG)
    +                 openstack-cinder-volume.service \
    +                 openstack-cinder-backup.service
    +systemctl start rpcbind.service nfs-server.service tgtd.service iscsid.service \               (STG)
    +                openstack-cinder-volume.service \
    +                openstack-cinder-backup.service
    +

    注意

    +

    当cinder使用tgtadm的方式挂卷的时候,要修改/etc/tgt/tgtd.conf,内容如下,保证tgtd可以发现cinder-volume的iscsi target。

    +
    include /var/lib/cinder/volumes/*
    +
  18. +
  19. +

    验证

    +
    source ~/.admin-openrc
    +openstack volume service list
    +
  20. +
+

horizon 安装

+
    +
  1. +

    安装软件包

    +
    yum install openstack-dashboard
    +
  2. +
  3. +

    修改文件

    +

    修改变量

    +
    vim /etc/openstack-dashboard/local_settings
    +
    +OPENSTACK_HOST = "controller"
    +ALLOWED_HOSTS = ['*', ]
    +
    +SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
    +
    +CACHES = {
    +'default': {
    +     'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
    +     'LOCATION': 'controller:11211',
    +    }
    +}
    +
    +OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
    +OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
    +OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
    +OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
    +
    +OPENSTACK_API_VERSIONS = {
    +    "identity": 3,
    +    "image": 2,
    +    "volume": 3,
    +}
    +
  4. +
  5. +

    重启 httpd 服务

    +
    systemctl restart httpd.service memcached.service
    +
  6. +
  7. +

    验证:打开浏览器,输入网址http://HOSTIP/dashboard/,登录 horizon。

    +

    注意

    +

    替换HOSTIP为控制节点管理平面IP地址

    +
  8. +
+

Tempest 安装

+

Tempest是OpenStack的集成测试服务,如果用户需要全面自动化测试已安装的OpenStack环境的功能,则推荐使用该组件。否则,可以不用安装。

+
    +
  1. +

    安装Tempest

    +
    yum install openstack-tempest
    +
  2. +
  3. +

    初始化目录

    +
    tempest init mytest
    +
  4. +
  5. +

    修改配置文件。

    +
    cd mytest
    +vi etc/tempest.conf
    +

    tempest.conf中需要配置当前OpenStack环境的信息,具体内容可以参考官方示例

    +
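    下面给出一个最小化的tempest.conf配置片段作为参考(仅为示例草稿,其中ADMIN_PASS、<IMAGE_ID>、<FLAVOR_ID>均为假设的占位符,具体选项请以官方示例为准):

    +
    [auth]
    +# 管理员凭证,供tempest动态创建测试租户使用
    +admin_username = admin
    +admin_password = ADMIN_PASS
    +admin_project_name = admin
    +admin_domain_name = Default
    +
    +[identity]
    +# Keystone v3认证入口
    +uri_v3 = http://controller:5000/v3
    +
    +[compute]
    +# 测试使用的镜像与规格
    +image_ref = <IMAGE_ID>
    +flavor_ref = <FLAVOR_ID>
    +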
  6. +
  7. +

    执行测试

    +
    tempest run
    +
  8. +
  9. +

    安装tempest扩展(可选)

    OpenStack各个服务本身也提供了一些tempest测试包,用户可以安装这些包来丰富tempest的测试内容。在Train中,我们提供了Cinder、Glance、Keystone、Ironic、Trove的扩展测试,用户可以执行如下命令进行安装使用:

    yum install python3-cinder-tempest-plugin python3-glance-tempest-plugin python3-ironic-tempest-plugin python3-keystone-tempest-plugin python3-trove-tempest-plugin

    +
  10. +
+

Ironic 安装

+

Ironic是OpenStack的裸金属服务,如果用户需要进行裸机部署则推荐使用该组件。否则,可以不用安装。

+
    +
  1. 设置数据库
  2. +
+

裸金属服务在数据库中存储信息,创建一个ironic用户可以访问的ironic数据库,替换IRONIC_DBPASSWORD为合适的密码

+

mysql -u root -p
+
+MariaDB [(none)]> CREATE DATABASE ironic CHARACTER SET utf8;
+MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'localhost' \
+IDENTIFIED BY 'IRONIC_DBPASSWORD';
+MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'%' \
+IDENTIFIED BY 'IRONIC_DBPASSWORD';
+

  2. 安装软件包

+
yum install openstack-ironic-api openstack-ironic-conductor python3-ironicclient
+

启动服务

+
systemctl enable openstack-ironic-api openstack-ironic-conductor
+systemctl start openstack-ironic-api openstack-ironic-conductor
+
    +
  1. 创建服务用户认证
  2. +
+

1、创建Bare Metal服务用户

+
openstack user create --password IRONIC_PASSWORD \
+                      --email ironic@example.com ironic
+openstack role add --project service --user ironic admin
+openstack service create --name ironic \
+                         --description "Ironic baremetal provisioning service" baremetal
+

2、创建Bare Metal服务访问入口

+
openstack endpoint create --region RegionOne baremetal admin http://$IRONIC_NODE:6385
+openstack endpoint create --region RegionOne baremetal public http://$IRONIC_NODE:6385
+openstack endpoint create --region RegionOne baremetal internal http://$IRONIC_NODE:6385
+
    +
  1. 配置ironic-api服务
  2. +
+

配置文件路径/etc/ironic/ironic.conf

+

1、通过connection选项配置数据库的位置,如下所示,替换IRONIC_DBPASSWORD为ironic用户的密码,替换DB_IP为DB服务器所在的IP地址:

+
[database]
+
+# The SQLAlchemy connection string used to connect to the
+# database (string value)
+
+connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic
+

2、通过以下选项配置ironic-api服务使用RabbitMQ消息代理,替换RPC_*为RabbitMQ的详细地址和凭证

+
[DEFAULT]
+
+# A URL representing the messaging driver to use and its full
+# configuration. (string value)
+
+transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
+

用户也可自行使用json-rpc方式替换rabbitmq

+
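若使用json-rpc,可参考如下配置草稿(示例,具体选项以所用ironic版本的文档为准):

+
[DEFAULT]
+
+# 使用json-rpc代替消息队列进行ironic-api与ironic-conductor之间的RPC通信
+rpc_transport = json-rpc
+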

3、配置ironic-api服务使用身份认证服务的凭证,替换PUBLIC_IDENTITY_IP为身份认证服务器的公共IP,替换PRIVATE_IDENTITY_IP为身份认证服务器的私有IP,替换IRONIC_PASSWORD为身份认证服务中ironic用户的密码:

+
[DEFAULT]
+
+# Authentication strategy used by ironic-api: one of
+# "keystone" or "noauth". "noauth" should not be used in a
+# production environment because all authentication will be
+# disabled. (string value)
+
+auth_strategy=keystone
+
+[keystone_authtoken]
+# Authentication type to load (string value)
+auth_type=password
+# Complete public Identity API endpoint (string value)
+www_authenticate_uri=http://PUBLIC_IDENTITY_IP:5000
+# Complete admin Identity API endpoint. (string value)
+auth_url=http://PRIVATE_IDENTITY_IP:5000
+# Service username. (string value)
+username=ironic
+# Service account password. (string value)
+password=IRONIC_PASSWORD
+# Service tenant name. (string value)
+project_name=service
+# Domain name containing project (string value)
+project_domain_name=Default
+# User's domain name (string value)
+user_domain_name=Default
+
+

4、创建裸金属服务数据库表

+
ironic-dbsync --config-file /etc/ironic/ironic.conf create_schema
+

5、重启ironic-api服务

+
sudo systemctl restart openstack-ironic-api
+
    +
  1. 配置ironic-conductor服务
  2. +
+

1、替换HOST_IP为conductor host的IP

+
[DEFAULT]
+
+# IP address of this host. If unset, will determine the IP
+# programmatically. If unable to do so, will use "127.0.0.1".
+# (string value)
+
+my_ip=HOST_IP
+

2、配置数据库的位置,ironic-conductor应该使用和ironic-api相同的配置。替换IRONIC_DBPASSWORD为ironic用户的密码,替换DB_IP为DB服务器所在的IP地址:

+
[database]
+
+# The SQLAlchemy connection string to use to connect to the
+# database. (string value)
+
+connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic
+

3、通过以下选项配置ironic-api服务使用RabbitMQ消息代理,ironic-conductor应该使用和ironic-api相同的配置,替换RPC_*为RabbitMQ的详细地址和凭证

+
[DEFAULT]
+
+# A URL representing the messaging driver to use and its full
+# configuration. (string value)
+
+transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
+

用户也可自行使用json-rpc方式替换rabbitmq

+

4、配置凭证访问其他OpenStack服务

+

为了与其他OpenStack服务进行通信,裸金属服务在请求其他服务时需要使用服务用户与OpenStack Identity服务进行认证。这些用户的凭据必须在与相应服务相关的每个配置文件中进行配置。

+
[neutron] - 访问OpenStack网络服务
+[glance] - 访问OpenStack镜像服务
+[swift] - 访问OpenStack对象存储服务
+[cinder] - 访问OpenStack块存储服务
+[inspector] - 访问OpenStack裸金属introspection服务
+[service_catalog] - 一个特殊项用于保存裸金属服务使用的凭证,该凭证用于发现注册在OpenStack身份认证服务目录中的自己的API URL端点
+

简单起见,可以对所有服务使用同一个服务用户。为了向后兼容,该用户应该和ironic-api服务的[keystone_authtoken]所配置的为同一个用户。但这不是必须的,也可以为每个服务创建并配置不同的服务用户。

+

在下面的示例中,用户访问OpenStack网络服务的身份验证信息配置为:

+
网络服务部署在名为RegionOne的身份认证服务域中,仅在服务目录中注册公共端点接口
+
+请求时使用特定的CA SSL证书进行HTTPS连接
+
+与ironic-api服务配置相同的服务用户
+
+动态密码认证插件基于其他选项发现合适的身份认证服务API版本
+
[neutron]
+
+# Authentication type to load (string value)
+auth_type = password
+# Authentication URL (string value)
+auth_url=https://IDENTITY_IP:5000/
+# Username (string value)
+username=ironic
+# User's password (string value)
+password=IRONIC_PASSWORD
+# Project name to scope to (string value)
+project_name=service
+# Domain ID containing project (string value)
+project_domain_id=default
+# User's domain id (string value)
+user_domain_id=default
+# PEM encoded Certificate Authority to use when verifying
+# HTTPs connections. (string value)
+cafile=/opt/stack/data/ca-bundle.pem
+# The default region_name for endpoint URL discovery. (string
+# value)
+region_name = RegionOne
+# List of interfaces, in order of preference, for endpoint
+# URL. (list value)
+valid_interfaces=public
+

默认情况下,为了与其他服务进行通信,裸金属服务会尝试通过身份认证服务的服务目录发现该服务合适的端点。如果希望对一个特定服务使用一个不同的端点,则在裸金属服务的配置文件中通过endpoint_override选项进行指定:

+
[neutron] ... endpoint_override = <NEUTRON_API_ADDRESS>
+

5、配置允许的驱动程序和硬件类型

+

通过设置enabled_hardware_types设置ironic-conductor服务允许使用的硬件类型:

+
[DEFAULT]
+enabled_hardware_types = ipmi
+

配置硬件接口:

+
enabled_boot_interfaces = pxe
+enabled_deploy_interfaces = direct,iscsi
+enabled_inspect_interfaces = inspector
+enabled_management_interfaces = ipmitool
+enabled_power_interfaces = ipmitool
+

配置接口默认值:

+
[DEFAULT]
+default_deploy_interface = direct
+default_network_interface = neutron
+

如果启用了任何使用Direct deploy的驱动,必须安装和配置镜像服务的Swift后端。Ceph对象网关(RADOS网关)也支持作为镜像服务的后端。

+

6、重启ironic-conductor服务

+
sudo systemctl restart openstack-ironic-conductor
+
    +
  1. +

    配置httpd服务

    +
  2. +
  3. +

    创建ironic要使用的httpd的root目录并设置属主属组,目录路径要和/etc/ironic/ironic.conf中[deploy]组中http_root配置项指定的路径一致。

    +
    mkdir -p /var/lib/ironic/httproot
    +chown ironic.ironic /var/lib/ironic/httproot
    +
  4. +
  5. +

    安装和配置httpd服务

    +
      +
    1. +

      安装httpd服务,已有请忽略

      +

      yum install httpd -y
      +

    2. 创建/etc/httpd/conf.d/openstack-ironic-httpd.conf文件,内容如下:

      +
      Listen 8080
      +
      +<VirtualHost *:8080>
      +    ServerName ironic.openeuler.com
      +
      +    ErrorLog "/var/log/httpd/openstack-ironic-httpd-error_log"
      +    CustomLog "/var/log/httpd/openstack-ironic-httpd-access_log" "%h %l %u %t \"%r\" %>s %b"
      +
      +    DocumentRoot "/var/lib/ironic/httproot"
      +    <Directory "/var/lib/ironic/httproot">
      +        Options Indexes FollowSymLinks
      +        Require all granted
      +    </Directory>
      +    LogLevel warn
      +    AddDefaultCharset UTF-8
      +    EnableSendfile on
      +</VirtualHost>
      +
      +

      注意监听的端口要和/etc/ironic/ironic.conf里[deploy]选项中http_url配置项中指定的端口一致。

      +
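      例如,/etc/ironic/ironic.conf中[deploy]部分可参考如下草稿(IP为假设的占位符,需与上文httpd的根目录和监听端口保持一致):

      +
      [deploy]
      +# http_root需与上文创建的httpd根目录一致
      +http_root = /var/lib/ironic/httproot
      +# http_url中的端口需与httpd监听端口(本例为8080)一致
      +http_url = http://<控制节点管理IP>:8080
      +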
    2. +
    3. +

      重启httpd服务。

      +

      systemctl restart httpd
      +

  7. deploy ramdisk镜像制作

      +
    4. +
    +
  6. +
+

T版的ramdisk镜像支持通过ironic-python-agent服务或disk-image-builder工具制作,也可以使用社区最新的ironic-python-agent-builder。用户也可以自行选择其他工具制作。

若使用T版原生工具,则需要安装对应的软件包。

+
yum install openstack-ironic-python-agent
+或者
+yum install diskimage-builder
+

具体的使用方法可以参考官方文档

+

这里介绍下使用ironic-python-agent-builder构建ironic使用的deploy镜像的完整过程。

+
    +
  1. +

    安装 ironic-python-agent-builder

    +
    1. 安装工具:
    +
    +    ```shell
    +    pip install ironic-python-agent-builder
    +    ```
    +
    +2. 修改以下文件中的python解释器:
    +
    +    ```shell
    +    /usr/bin/yum /usr/libexec/urlgrabber-ext-down
    +    ```
    +
    +3. 安装其它必须的工具:
    +
    +    ```shell
    +    yum install git
    +    ```
    +
    +    由于`DIB`依赖`semanage`命令,所以在制作镜像之前确定该命令是否可用:`semanage --help`,如果提示无此命令,安装即可:
    +
    +    ```shell
    +    # 先查询需要安装哪个包
    +    [root@localhost ~]# yum provides /usr/sbin/semanage
    +    已加载插件:fastestmirror
    +    Loading mirror speeds from cached hostfile
    +    * base: mirror.vcu.edu
    +    * extras: mirror.vcu.edu
    +    * updates: mirror.math.princeton.edu
    +    policycoreutils-python-2.5-34.el7.aarch64 : SELinux policy core python utilities
    +    源    :base
    +    匹配来源:
    +    文件名    :/usr/sbin/semanage
    +    # 安装
    +    [root@localhost ~]# yum install policycoreutils-python
    +    ```
    +
  2. +
  3. +

    制作镜像

    +
    如果是`arm`架构,需要添加:
    +```shell
    +export ARCH=aarch64
    +```
    +
    +基本用法:
    +
    +```shell
    +usage: ironic-python-agent-builder [-h] [-r RELEASE] [-o OUTPUT] [-e ELEMENT]
    +                                    [-b BRANCH] [-v] [--extra-args EXTRA_ARGS]
    +                                    distribution
    +
    +positional arguments:
    +    distribution          Distribution to use
    +
    +optional arguments:
    +    -h, --help            show this help message and exit
    +    -r RELEASE, --release RELEASE
    +                        Distribution release to use
    +    -o OUTPUT, --output OUTPUT
    +                        Output base file name
    +    -e ELEMENT, --element ELEMENT
    +                        Additional DIB element to use
    +    -b BRANCH, --branch BRANCH
    +                        If set, override the branch that is used for ironic-
    +                        python-agent and requirements
    +    -v, --verbose         Enable verbose logging in diskimage-builder
    +    --extra-args EXTRA_ARGS
    +                        Extra arguments to pass to diskimage-builder
    +```
    +
    +举例说明:
    +
    +```shell
    +ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky
    +```
    +
  4. +
  5. +

    允许ssh登录

    +
    初始化环境变量,然后制作镜像:
    +
    +```shell
    +export DIB_DEV_USER_USERNAME=ipa \
    +export DIB_DEV_USER_PWDLESS_SUDO=yes \
    +export DIB_DEV_USER_PASSWORD='123'
    +ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky -e selinux-permissive -e devuser
    +```
    +
  6. +
  7. +

    指定代码仓库

    +
    初始化对应的环境变量,然后制作镜像:
    +
    +```shell
    +# 指定仓库地址以及版本
    +DIB_REPOLOCATION_ironic_python_agent=git@172.20.2.149:liuzz/ironic-python-agent.git
    +DIB_REPOREF_ironic_python_agent=origin/develop
    +
    +# 直接从gerrit上clone代码
    +DIB_REPOLOCATION_ironic_python_agent=https://review.opendev.org/openstack/ironic-python-agent
    +DIB_REPOREF_ironic_python_agent=refs/changes/43/701043/1
    +```
    +
    +参考:[source-repositories](https://docs.openstack.org/diskimage-builder/latest/elements/source-repositories/README.html)。
    +
    +指定仓库地址及版本验证成功。
    +
  8. +
  9. +

    注意

    原生的openstack里的pxe配置文件的模板不支持arm64架构,需要自己对原生openstack代码进行修改:

    +

    在T版中,社区的ironic仍然不支持arm64位的uefi pxe启动,表现为生成的grub.cfg文件(一般位于/tftpboot/下)格式不对而导致pxe启动失败

    +

    需要用户对生成grub.cfg的代码逻辑自行修改。

    +

    ironic向ipa发送查询命令执行状态请求的tls报错:

    +

    T版的ipa和ironic默认都会开启tls认证的方式向对方发送请求,跟据官网的说明进行关闭即可。

    +
      +
    1. 修改ironic配置文件(/etc/ironic/ironic.conf)下面的配置中添加ipa-insecure=1:
    2. +
    +
    [agent]
    +verify_ca = False
    +
    +[pxe]
    +pxe_append_params = nofb nomodeset vga=normal coreos.autologin ipa-insecure=1
    +
      +
    1. ramdisk镜像中添加ipa配置文件/etc/ironic_python_agent/ironic_python_agent.conf并配置tls的配置如下:
    2. +
    +

    /etc/ironic_python_agent/ironic_python_agent.conf (需要提前创建/etc/ironic_python_agent目录)

    +
    [DEFAULT]
    +enable_auto_tls = False
    +

    设置权限:

    +
    chown -R ipa.ipa /etc/ironic_python_agent/
    +
      +
    1. 修改ipa服务的服务启动文件,添加配置文件选项
    2. +
    +

    vim /usr/lib/systemd/system/ironic-python-agent.service

    +
    [Unit]
    +Description=Ironic Python Agent
    +After=network-online.target
    +
    +[Service]
    +ExecStartPre=/sbin/modprobe vfat
    +ExecStart=/usr/local/bin/ironic-python-agent --config-file /etc/ironic_python_agent/ironic_python_agent.conf
    +Restart=always
    +RestartSec=30s
    +
    +[Install]
    +WantedBy=multi-user.target
    +
  10. +
+

在Train中,我们还提供了ironic-inspector等服务,用户可根据自身需求安装。

+

Kolla 安装

+

Kolla为OpenStack服务提供生产环境可用的容器化部署的功能。

+

Kolla的安装十分简单,只需要安装对应的RPM包即可

+
yum install openstack-kolla openstack-kolla-ansible
+

安装完后,就可以使用kolla-ansible, kolla-build, kolla-genpwd, kolla-mergepwd等命令进行相关的镜像制作和容器环境部署了。

+
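下面给出一个使用kolla-ansible进行All-in-One部署的示例流程草稿(假设已按需准备好/etc/kolla/globals.yml,all-in-one为kolla-ansible自带的示例inventory文件,需先拷贝到当前目录):

+
# 为各服务生成随机密码,写入/etc/kolla/passwords.yml
+kolla-genpwd
+
+# 部署前准备、预检查与部署
+kolla-ansible -i all-in-one bootstrap-servers
+kolla-ansible -i all-in-one prechecks
+kolla-ansible -i all-in-one deploy
+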

Trove 安装

+

Trove是OpenStack的数据库服务,如果用户使用OpenStack提供的数据库服务则推荐使用该组件。否则,可以不用安装。

+
    +
  1. 设置数据库
  2. +
+

数据库服务在数据库中存储信息,创建一个trove用户可以访问的trove数据库,替换TROVE_DBPASSWORD为合适的密码

+
mysql -u root -p
+
+MariaDB [(none)]> CREATE DATABASE trove CHARACTER SET utf8;
+MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'localhost' \
+IDENTIFIED BY 'TROVE_DBPASSWORD';
+MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'%' \
+IDENTIFIED BY 'TROVE_DBPASSWORD';
+
    +
  1. 创建服务用户认证
  2. +
+

1、创建Trove服务用户

+

openstack user create --domain default --password-prompt trove
+openstack role add --project service --user trove admin
+openstack service create --name trove --description "Database" database
+
解释: TROVE_PASSWORD 替换为trove用户的密码(即上面命令交互提示中设置的密码)

+

2、创建Database服务访问入口

+
openstack endpoint create --region RegionOne database public http://controller:8779/v1.0/%\(tenant_id\)s
+openstack endpoint create --region RegionOne database internal http://controller:8779/v1.0/%\(tenant_id\)s
+openstack endpoint create --region RegionOne database admin http://controller:8779/v1.0/%\(tenant_id\)s
+
    +
  1. 安装和配置Trove各组件
  2. +
+

1、安装Trove包

yum install openstack-trove python3-troveclient

2、配置trove.conf

vim /etc/trove/trove.conf
+
+ [DEFAULT]
+ log_dir = /var/log/trove
+ trove_auth_url = http://controller:5000/
+ nova_compute_url = http://controller:8774/v2
+ cinder_url = http://controller:8776/v1
+ swift_url = http://controller:8080/v1/AUTH_
+ rpc_backend = rabbit
+ transport_url = rabbit://openstack:RABBIT_PASS@controller:5672
+ auth_strategy = keystone
+ add_addresses = True
+ api_paste_config = /etc/trove/api-paste.ini
+ nova_proxy_admin_user = admin
+ nova_proxy_admin_pass = ADMIN_PASSWORD
+ nova_proxy_admin_tenant_name = service
+ taskmanager_manager = trove.taskmanager.manager.Manager
+ use_nova_server_config_drive = True
+ # Set these if using Neutron Networking
+ network_driver = trove.network.neutron.NeutronDriver
+ network_label_regex = .*
+
+ [database]
+ connection = mysql+pymysql://trove:TROVE_DBPASSWORD@controller/trove
+
+ [keystone_authtoken]
+ www_authenticate_uri = http://controller:5000/
+ auth_url = http://controller:5000/
+ auth_type = password
+ project_domain_name = default
+ user_domain_name = default
+ project_name = service
+ username = trove
+ password = TROVE_PASSWORD
+
+**解释:**
+- [DEFAULT]分组中nova_compute_url和cinder_url为Nova和Cinder在Keystone中创建的endpoint
+- nova_proxy_XXX为一个能访问Nova服务的用户信息,上例中以admin用户为例
+- transport_url为RabbitMQ连接信息,RABBIT_PASS替换为RabbitMQ的密码
+- [database]分组中的connection为前面在mysql中为Trove创建的数据库信息
+- Trove的用户信息中TROVE_PASSWORD替换为实际trove用户的密码

+
    +
  1. 配置trove-guestagent.conf

     vim /etc/trove/trove-guestagent.conf
  2. +
+

rabbit_host = controller
+rabbit_password = RABBIT_PASS
+trove_auth_url = http://controller:5000/

**解释:** `guestagent`是trove中一个独立组件,需要预先内置到Trove通过Nova创建的虚拟
+机镜像中,在创建好数据库实例后,会起guestagent进程,负责通过消息队列(RabbitMQ)向Trove上
+报心跳,因此需要配置RabbitMQ的用户和密码信息。
+**从Victoria版开始,Trove使用一个统一的镜像来跑不同类型的数据库,数据库服务运行在Guest虚拟机的Docker容器中。**
+- `RABBIT_PASS`替换为RabbitMQ的密码  
+
4. 生成`Trove`数据库表

su -s /bin/sh -c "trove-manage db_sync" trove

+
    +
  1. 完成安装配置
  2. +
  3. 配置Trove服务自启动

     systemctl enable openstack-trove-api.service \
     openstack-trove-taskmanager.service \
     openstack-trove-conductor.service

     启动服务

     systemctl start openstack-trove-api.service \
     openstack-trove-taskmanager.service \
     openstack-trove-conductor.service
  4. +
+

Swift 安装

+

Swift 提供了弹性可伸缩、高可用的分布式对象存储服务,适合存储大规模非结构化数据。

+
    +
  1. +

    创建服务凭证、API端点。

    +

    创建服务凭证

    +
    #创建swift用户:
    +openstack user create --domain default --password-prompt swift                 
    +#admin为swift用户添加角色:
    +openstack role add --project service --user swift admin                        
    +#创建swift服务实体:
    +openstack service create --name swift --description "OpenStack Object Storage" object-store                                                                   
    +

    创建swift API 端点:

    +
    openstack endpoint create --region RegionOne object-store public http://controller:8080/v1/AUTH_%\(project_id\)s                            
    +openstack endpoint create --region RegionOne object-store internal http://controller:8080/v1/AUTH_%\(project_id\)s                            
    +openstack endpoint create --region RegionOne object-store admin http://controller:8080/v1                                                  
    +
  2. +
  3. +

    安装软件包:

    +
    yum install openstack-swift-proxy python3-swiftclient python3-keystoneclient python3-keystonemiddleware memcached (CTL)
    +
  4. +
  5. +

    配置proxy-server相关配置

    +
  6. +
+

Swift RPM包里已经包含了一个基本可用的proxy-server.conf,只需要手动修改其中的ip和swift password即可。

+
***注意***
+
+**注意替换password为您在身份服务中为swift用户选择的密码**
+
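需要修改的内容大致如下(示例草稿,选项名以RPM包自带的proxy-server.conf为准,SWIFT_PASS为假设的占位符,替换为swift用户的密码):

+
vim /etc/swift/proxy-server.conf
+
+[filter:authtoken]
+# 身份认证服务入口与swift用户凭证
+www_authenticate_uri = http://controller:5000
+auth_url = http://controller:5000
+memcached_servers = controller:11211
+auth_type = password
+project_domain_id = default
+user_domain_id = default
+project_name = service
+username = swift
+password = SWIFT_PASS
+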
    +
  1. +

    安装和配置存储节点 (STG)

    +

    安装支持的程序包: +

    yum install xfsprogs rsync

    +

    将/dev/vdb和/dev/vdc设备格式化为 XFS

    +
    mkfs.xfs /dev/vdb
    +mkfs.xfs /dev/vdc
    +

    创建挂载点目录结构:

    +
    mkdir -p /srv/node/vdb
    +mkdir -p /srv/node/vdc
    +

    找到新分区的 UUID:

    +
    blkid
    +

    编辑/etc/fstab文件并将以下内容添加到其中:

    +
    UUID="<UUID-from-output-above>" /srv/node/vdb xfs noatime 0 2
    +UUID="<UUID-from-output-above>" /srv/node/vdc xfs noatime 0 2
    +

    挂载设备:

    +

    mount /srv/node/vdb
    +mount /srv/node/vdc
    +

    注意

    +

    如果用户不需要容灾功能,以上步骤只需要创建一个设备即可,同时可以跳过下面的rsync配置

    +

    (可选)创建或编辑/etc/rsyncd.conf文件以包含以下内容:

    +

    [DEFAULT]
    +uid = swift
    +gid = swift
    +log file = /var/log/rsyncd.log
    +pid file = /var/run/rsyncd.pid
    +address = MANAGEMENT_INTERFACE_IP_ADDRESS
    +
    +[account]
    +max connections = 2
    +path = /srv/node/
    +read only = False
    +lock file = /var/lock/account.lock
    +
    +[container]
    +max connections = 2
    +path = /srv/node/
    +read only = False
    +lock file = /var/lock/container.lock
    +
    +[object]
    +max connections = 2
    +path = /srv/node/
    +read only = False
    +lock file = /var/lock/object.lock
    +

    替换MANAGEMENT_INTERFACE_IP_ADDRESS为存储节点上管理网络的IP地址

    +

    启动rsyncd服务并配置它在系统启动时启动:

    +
    systemctl enable rsyncd.service
    +systemctl start rsyncd.service
    +
  2. +
  3. +

    在存储节点安装和配置组件 (STG)

    +

    安装软件包:

    +
    yum install openstack-swift-account openstack-swift-container openstack-swift-object
    +

    编辑/etc/swift目录的account-server.conf、container-server.conf和object-server.conf文件,替换bind_ip为存储节点上管理网络的IP地址。

    +
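    例如account-server.conf中对应的配置如下(示例草稿,container-server.conf、object-server.conf同理,端口分别为6201、6200):

    +
    vim /etc/swift/account-server.conf
    +
    +[DEFAULT]
    +# 替换为存储节点管理网络IP,端口与上文创建ring时使用的端口一致
    +bind_ip = STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS
    +bind_port = 6202
    +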

    确保挂载点目录结构的正确所有权:

    +
    chown -R swift:swift /srv/node
    +

    创建recon目录并确保其拥有正确的所有权:

    +
    mkdir -p /var/cache/swift
    +chown -R root:swift /var/cache/swift
    +chmod -R 775 /var/cache/swift
    +
  4. +
  5. +

    创建账号环 (CTL)

    +

    切换到/etc/swift目录。

    +
    cd /etc/swift
    +

    创建基础account.builder文件:

    +
    swift-ring-builder account.builder create 10 1 1
    +

    将每个存储节点添加到环中:

    +
    swift-ring-builder account.builder add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6202  --device DEVICE_NAME --weight DEVICE_WEIGHT
    +

    替换STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS为存储节点上管理网络的IP地址。替换DEVICE_NAME为同一存储节点上的存储设备名称

    +

    注意:对每个存储节点上的每个存储设备重复此命令

    +

    验证ring的内容:

    +
    swift-ring-builder account.builder
    +

    重新平衡ring:

    +
    swift-ring-builder account.builder rebalance
    +
  6. +
  7. +

    创建容器环 (CTL)

    +

    切换到/etc/swift目录。

    +

    创建基础container.builder文件:

    +
       swift-ring-builder container.builder create 10 1 1
    +

    将每个存储节点添加到环中:

    +
    swift-ring-builder container.builder \
    +  add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6201 \
    +  --device DEVICE_NAME --weight 100
    +
    +

    替换STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS为存储节点上管理网络的IP地址。替换DEVICE_NAME为同一存储节点上的存储设备名称

    +

    注意:对每个存储节点上的每个存储设备重复此命令

    +

    验证ring的内容:

    +
    swift-ring-builder container.builder
    +

    重新平衡ring:

    +
    swift-ring-builder container.builder rebalance
    +
  8. +
  9. +

    创建对象环 (CTL)

    +

    切换到/etc/swift目录。

    +

    创建基础object.builder文件:

    +
    swift-ring-builder object.builder create 10 1 1
    +

    将每个存储节点添加到环中

    +
     swift-ring-builder object.builder \
    +  add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6200 \
    +  --device DEVICE_NAME --weight 100
    +

    替换STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS为存储节点上管理网络的IP地址。替换DEVICE_NAME为同一存储节点上的存储设备名称

    +

    注意:对每个存储节点上的每个存储设备重复此命令

    +

    验证ring的内容:

    +
    swift-ring-builder object.builder
    +

    重新平衡ring:

    +
    swift-ring-builder object.builder rebalance
    +

    分发环配置文件:

    +

    将account.ring.gz、container.ring.gz以及object.ring.gz文件复制到每个存储节点以及其他运行代理服务的节点的/etc/swift目录下。

    +
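    例如,可在控制节点上使用scp进行分发(示例命令,STORAGE_NODE为假设的存储节点主机名或IP):

    +
    scp /etc/swift/account.ring.gz /etc/swift/container.ring.gz /etc/swift/object.ring.gz \
    +    root@STORAGE_NODE:/etc/swift/
    +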
  10. +
  11. +

    完成安装

    +

    编辑/etc/swift/swift.conf文件

    +
    [swift-hash]
    +swift_hash_path_suffix = test-hash
    +swift_hash_path_prefix = test-hash
    +
    +[storage-policy:0]
    +name = Policy-0
    +default = yes
    +

    用唯一值替换 test-hash

    +

    将swift.conf文件复制到每个存储节点以及其他运行代理服务的节点的/etc/swift目录下。

    +

    在所有节点上,确保配置目录的正确所有权:

    +
    chown -R root:swift /etc/swift
    +

    在控制器节点和运行代理服务的任何其他节点上,启动对象存储代理服务及其依赖项,并将它们配置为在系统启动时启动:

    +
    systemctl enable openstack-swift-proxy.service memcached.service
    +systemctl start openstack-swift-proxy.service memcached.service
    +

    在存储节点上,启动对象存储服务并将它们配置为在系统启动时启动:

    +
    systemctl enable openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service
    +
    +systemctl start openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service
    +
    +systemctl enable openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service
    +
    +systemctl start openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service
    +
    +systemctl enable openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service
    +
    +systemctl start openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service
    +
  12. +
+

Cyborg 安装

+

Cyborg为OpenStack提供加速器设备的支持,包括 GPU, FPGA, ASIC, NP, SoCs, NVMe/NOF SSDs, ODP, DPDK/SPDK等等。

+
    +
  1. 初始化对应数据库
  2. +
+
CREATE DATABASE cyborg;
+GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'localhost' IDENTIFIED BY 'CYBORG_DBPASS';
+GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'%' IDENTIFIED BY 'CYBORG_DBPASS';
+
    +
  1. 创建对应Keystone资源对象
  2. +
+
$ openstack user create --domain default --password-prompt cyborg
+$ openstack role add --project service --user cyborg admin
+$ openstack service create --name cyborg --description "Acceleration Service" accelerator
+
+$ openstack endpoint create --region RegionOne \
+  accelerator public http://<cyborg-ip>:6666/v1
+$ openstack endpoint create --region RegionOne \
+  accelerator internal http://<cyborg-ip>:6666/v1
+$ openstack endpoint create --region RegionOne \
+  accelerator admin http://<cyborg-ip>:6666/v1
+
    +
  1. 安装Cyborg
  2. +
+
yum install openstack-cyborg
+
    +
  1. 配置Cyborg
  2. +
+

修改/etc/cyborg/cyborg.conf

+
[DEFAULT]
+transport_url = rabbit://%RABBITMQ_USER%:%RABBITMQ_PASSWORD%@%OPENSTACK_HOST_IP%:5672/
+use_syslog = False
+state_path = /var/lib/cyborg
+debug = True
+
+[database]
+connection = mysql+pymysql://%DATABASE_USER%:%DATABASE_PASSWORD%@%OPENSTACK_HOST_IP%/cyborg
+
+[service_catalog]
+project_domain_id = default
+user_domain_id = default
+project_name = service
+password = PASSWORD
+username = cyborg
+auth_url = http://%OPENSTACK_HOST_IP%/identity
+auth_type = password
+
+[placement]
+project_domain_name = Default
+project_name = service
+user_domain_name = Default
+password = PASSWORD
+username = placement
+auth_url = http://%OPENSTACK_HOST_IP%/identity
+auth_type = password
+
+[keystone_authtoken]
+memcached_servers = localhost:11211
+project_domain_name = Default
+project_name = service
+user_domain_name = Default
+password = PASSWORD
+username = cyborg
+auth_url = http://%OPENSTACK_HOST_IP%/identity
+auth_type = password
+

自行修改对应的用户名、密码、IP等信息

+
    +
  1. 同步数据库表格
  2. +
+
cyborg-dbsync --config-file /etc/cyborg/cyborg.conf upgrade
+
    +
  1. 启动Cyborg服务
  2. +
+
systemctl enable openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent
+systemctl start openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent
+

Aodh 安装

+
    +
  1. 创建数据库
  2. +
+
CREATE DATABASE aodh;
+
+GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'localhost' IDENTIFIED BY 'AODH_DBPASS';
+
+GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'%' IDENTIFIED BY 'AODH_DBPASS';
+
    +
  1. 创建对应Keystone资源对象
  2. +
+
openstack user create --domain default --password-prompt aodh
+
+openstack role add --project service --user aodh admin
+
+openstack service create --name aodh --description "Telemetry" alarming
+
+openstack endpoint create --region RegionOne alarming public http://controller:8042
+
+openstack endpoint create --region RegionOne alarming internal http://controller:8042
+
+openstack endpoint create --region RegionOne alarming admin http://controller:8042
+
    +
  1. 安装Aodh
  2. +
+
yum install openstack-aodh-api openstack-aodh-evaluator openstack-aodh-notifier openstack-aodh-listener openstack-aodh-expirer python3-aodhclient
+
    +
  1. 修改配置文件
  2. +
+
[database]
+connection = mysql+pymysql://aodh:AODH_DBPASS@controller/aodh
+
+[DEFAULT]
+transport_url = rabbit://openstack:RABBIT_PASS@controller
+auth_strategy = keystone
+
+[keystone_authtoken]
+www_authenticate_uri = http://controller:5000
+auth_url = http://controller:5000
+memcached_servers = controller:11211
+auth_type = password
+project_domain_id = default
+user_domain_id = default
+project_name = service
+username = aodh
+password = AODH_PASS
+
+[service_credentials]
+auth_type = password
+auth_url = http://controller:5000/v3
+project_domain_id = default
+user_domain_id = default
+project_name = service
+username = aodh
+password = AODH_PASS
+interface = internalURL
+region_name = RegionOne
+
    +
  1. 初始化数据库
  2. +
+
aodh-dbsync
+
    +
  1. 启动Aodh服务
  2. +
+
systemctl enable openstack-aodh-api.service openstack-aodh-evaluator.service openstack-aodh-notifier.service openstack-aodh-listener.service
+
+systemctl start openstack-aodh-api.service openstack-aodh-evaluator.service openstack-aodh-notifier.service openstack-aodh-listener.service
+

Gnocchi 安装

+
    +
  1. 创建数据库
  2. +
+
CREATE DATABASE gnocchi;
+
+GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'localhost' IDENTIFIED BY 'GNOCCHI_DBPASS';
+
+GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'%' IDENTIFIED BY 'GNOCCHI_DBPASS';
+
    +
  1. 创建对应Keystone资源对象
  2. +
+
openstack user create --domain default --password-prompt gnocchi
+
+openstack role add --project service --user gnocchi admin
+
+openstack service create --name gnocchi --description "Metric Service" metric
+
+openstack endpoint create --region RegionOne metric public http://controller:8041
+
+openstack endpoint create --region RegionOne metric internal http://controller:8041
+
+openstack endpoint create --region RegionOne metric admin http://controller:8041
+
    +
  1. 安装Gnocchi
  2. +
+
yum install openstack-gnocchi-api openstack-gnocchi-metricd python3-gnocchiclient
+
    +
  1. 修改配置文件/etc/gnocchi/gnocchi.conf
  2. +
+
[api]
+auth_mode = keystone
+port = 8041
+uwsgi_mode = http-socket
+
+[keystone_authtoken]
+auth_type = password
+auth_url = http://controller:5000/v3
+project_domain_name = Default
+user_domain_name = Default
+project_name = service
+username = gnocchi
+password = GNOCCHI_PASS
+interface = internalURL
+region_name = RegionOne
+
+[indexer]
+url = mysql+pymysql://gnocchi:GNOCCHI_DBPASS@controller/gnocchi
+
+[storage]
+# coordination_url is not required but specifying one will improve
+# performance with better workload division across workers.
+coordination_url = redis://controller:6379
+file_basepath = /var/lib/gnocchi
+driver = file
+
    +
  1. 初始化数据库
  2. +
+
gnocchi-upgrade
+
    +
  1. 启动Gnocchi服务
  2. +
+
systemctl enable openstack-gnocchi-api.service openstack-gnocchi-metricd.service
+
+systemctl start openstack-gnocchi-api.service openstack-gnocchi-metricd.service
+

Ceilometer 安装

+
    +
  1. 创建对应Keystone资源对象
  2. +
+
openstack user create --domain default --password-prompt ceilometer
+
+openstack role add --project service --user ceilometer admin
+
+openstack service create --name ceilometer --description "Telemetry" metering
+
    +
  1. 安装Ceilometer
  2. +
+
yum install openstack-ceilometer-notification openstack-ceilometer-central
+
    +
  1. 修改配置文件/etc/ceilometer/pipeline.yaml
  2. +
+
publishers:
+    # set address of Gnocchi
+    # + filter out Gnocchi-related activity meters (Swift driver)
+    # + set default archive policy
+    - gnocchi://?filter_project=service&archive_policy=low
+
    +
  1. 修改配置文件/etc/ceilometer/ceilometer.conf
  2. +
+
[DEFAULT]
+transport_url = rabbit://openstack:RABBIT_PASS@controller
+
+[service_credentials]
+auth_type = password
+auth_url = http://controller:5000/v3
+project_domain_id = default
+user_domain_id = default
+project_name = service
+username = ceilometer
+password = CEILOMETER_PASS
+interface = internalURL
+region_name = RegionOne
+
    +
  1. 初始化数据库
  2. +
+
ceilometer-upgrade
+
    +
  1. 启动Ceilometer服务
  2. +
+
systemctl enable openstack-ceilometer-notification.service openstack-ceilometer-central.service
+
+systemctl start openstack-ceilometer-notification.service openstack-ceilometer-central.service
+

Heat 安装

+
    +
  1. 创建heat数据库,并授予heat数据库正确的访问权限,替换HEAT_DBPASS为合适的密码
  2. +
+
CREATE DATABASE heat;
+GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'localhost' IDENTIFIED BY 'HEAT_DBPASS';
+GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'%' IDENTIFIED BY 'HEAT_DBPASS';
+
    +
  1. 创建服务凭证,创建heat用户,并为其增加admin角色
  2. +
+
openstack user create --domain default --password-prompt heat
+openstack role add --project service --user heat admin
+
    +
  1. 创建heat和heat-cfn服务及其对应的API端点
  2. +
+
openstack service create --name heat --description "Orchestration" orchestration
+openstack service create --name heat-cfn --description "Orchestration"  cloudformation
+openstack endpoint create --region RegionOne orchestration public http://controller:8004/v1/%\(tenant_id\)s
+openstack endpoint create --region RegionOne orchestration internal http://controller:8004/v1/%\(tenant_id\)s
+openstack endpoint create --region RegionOne orchestration admin http://controller:8004/v1/%\(tenant_id\)s
+openstack endpoint create --region RegionOne cloudformation public http://controller:8000/v1
+openstack endpoint create --region RegionOne cloudformation internal http://controller:8000/v1
+openstack endpoint create --region RegionOne cloudformation admin http://controller:8000/v1
+
    +
  1. 创建stack管理所需的额外信息,包括heat domain及该domain下的admin用户heat_domain_admin,以及heat_stack_owner角色、heat_stack_user角色
  2. +
+
openstack user create --domain heat --password-prompt heat_domain_admin
+openstack role add --domain heat --user-domain heat --user heat_domain_admin admin
+openstack role create heat_stack_owner
+openstack role create heat_stack_user
+
    +
  1. 安装软件包
  2. +
+
yum install openstack-heat-api openstack-heat-api-cfn openstack-heat-engine
+
    +
  1. 修改配置文件/etc/heat/heat.conf
  2. +
+
[DEFAULT]
+transport_url = rabbit://openstack:RABBIT_PASS@controller
+heat_metadata_server_url = http://controller:8000
+heat_waitcondition_server_url = http://controller:8000/v1/waitcondition
+stack_domain_admin = heat_domain_admin
+stack_domain_admin_password = HEAT_DOMAIN_PASS
+stack_user_domain_name = heat
+
+[database]
+connection = mysql+pymysql://heat:HEAT_DBPASS@controller/heat
+
+[keystone_authtoken]
+www_authenticate_uri = http://controller:5000
+auth_url = http://controller:5000
+memcached_servers = controller:11211
+auth_type = password
+project_domain_name = default
+user_domain_name = default
+project_name = service
+username = heat
+password = HEAT_PASS
+
+[trustee]
+auth_type = password
+auth_url = http://controller:5000
+username = heat
+password = HEAT_PASS
+user_domain_name = default
+
+[clients_keystone]
+auth_uri = http://controller:5000
+
    +
  1. 初始化heat数据库表
  2. +
+
su -s /bin/sh -c "heat-manage db_sync" heat
+
    +
  1. 启动服务
  2. +
+
systemctl enable openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service
+systemctl start openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service
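+

服务启动后,可以用如下命令做简单验证(示例,需已安装python3-heatclient以提供orchestration相关命令,正常情况下会列出heat-engine的服务状态):

+
source ~/.admin-openrc
+openstack orchestration service list
+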
diff --git a/site/install/openEuler-20.03-LTS-SP4/OpenStack-train/index.html b/site/install/openEuler-20.03-LTS-SP4/OpenStack-train/index.html
new file mode 100644
OpenStack-Train 部署指南

+ +

OpenStack 简介

+

OpenStack 是一个社区,也是一个项目。它提供了一个部署云的操作平台或工具集,为组织提供可扩展的、灵活的云计算。

+

作为一个开源的云计算管理平台,OpenStack 由nova、cinder、neutron、glance、keystone、horizon等几个主要的组件组合起来完成具体工作。OpenStack 支持几乎所有类型的云环境,项目目标是提供实施简单、可大规模扩展、丰富、标准统一的云计算管理平台。OpenStack 通过各种互补的服务提供了基础设施即服务(IaaS)的解决方案,每个服务提供 API 进行集成。

+

openEuler 20.03-LTS-SP4 版本官方源已经支持 OpenStack-Train 版本,用户可以配置好 yum 源后根据此文档进行 OpenStack 部署。

+

约定

+

OpenStack 支持多种形态部署,此文档支持ALL in One以及Distributed两种部署方式,按照如下方式约定:

+

ALL in One模式:

+
忽略所有可能的后缀
+

Distributed模式:

+
以 `(CTL)` 为后缀表示此条配置或者命令仅适用`控制节点`
+以 `(CPT)` 为后缀表示此条配置或者命令仅适用`计算节点`
+以 `(STG)` 为后缀表示此条配置或者命令仅适用`存储节点`
+除此之外表示此条配置或者命令同时适用`控制节点`和`计算节点`
+

注意

+

涉及到以上约定的服务如下:

+
    +
  • Cinder
  • +
  • Nova
  • +
  • Neutron
  • +
+

准备环境

+

环境配置

+
    +
  1. +

    配置 20.03-LTS-SP4 官方yum源,需要启用EPOL软件仓以支持OpenStack

    +
    cat << 'EOF' >> /etc/yum.repos.d/20.03-LTS-SP4-OpenStack_Train.repo
    +[OS]
    +name=OS
    +baseurl=http://repo.openeuler.org/openEuler-20.03-LTS-SP4/OS/$basearch/
    +enabled=1
    +gpgcheck=1
    +gpgkey=http://repo.openeuler.org/openEuler-20.03-LTS-SP4/OS/$basearch/RPM-GPG-KEY-openEuler
    +
    +[everything]
    +name=everything
    +baseurl=http://repo.openeuler.org/openEuler-20.03-LTS-SP4/everything/$basearch/
    +enabled=1
    +gpgcheck=1
    +gpgkey=http://repo.openeuler.org/openEuler-20.03-LTS-SP4/everything/$basearch/RPM-GPG-KEY-openEuler
    +
    +[EPOL]
    +name=EPOL
    +baseurl=http://repo.openeuler.org/openEuler-20.03-LTS-SP4/EPOL/main/$basearch/
    +enabled=1
    +gpgcheck=1
    +gpgkey=http://repo.openeuler.org/openEuler-20.03-LTS-SP4/OS/$basearch/RPM-GPG-KEY-openEuler
    +EOF
    +
    +yum clean all && yum makecache
    +
  2. +
  3. +

    修改主机名以及映射

    +

    设置各个节点的主机名

    +
    hostnamectl set-hostname controller                                                            (CTL)
    +hostnamectl set-hostname compute                                                               (CPT)
    +

    假设controller节点的IP是10.0.0.11,compute节点的IP是10.0.0.12(如果存在的话),则于/etc/hosts新增如下:

    +
    10.0.0.11   controller
    +10.0.0.12   compute
    +
  4. +
+

安装 SQL DataBase

+
    +
  1. +

    执行如下命令,安装软件包。

    +
    yum install mariadb mariadb-server python3-PyMySQL
    +
  2. +
  3. +

    执行如下命令,创建并编辑 /etc/my.cnf.d/openstack.cnf 文件。

    +
    vi /etc/my.cnf.d/openstack.cnf
    +
    +[mysqld]
    +bind-address = 10.0.0.11
    +default-storage-engine = innodb
    +innodb_file_per_table = on
    +max_connections = 4096
    +collation-server = utf8_general_ci
    +character-set-server = utf8
    +

    注意

    +

    其中 bind-address 设置为控制节点的管理IP地址。

    +
  4. +
  5. +

    启动 DataBase 服务,并为其配置开机自启动:

    +
    systemctl enable mariadb.service
    +systemctl start mariadb.service
    +
  6. +
  7. +

    配置DataBase的默认密码(可选)

    +
    mysql_secure_installation
    +

    注意

    +

    根据提示进行即可

    +
  8. +
+

安装 RabbitMQ

+
    +
  1. +

    执行如下命令,安装软件包。

    +
    yum install rabbitmq-server
    +
  2. +
  3. +

    启动 RabbitMQ 服务,并为其配置开机自启动。

    +
    systemctl enable rabbitmq-server.service
    +systemctl start rabbitmq-server.service
    +
  4. +
  5. +

    添加 OpenStack用户。

    +
    rabbitmqctl add_user openstack RABBIT_PASS
    +

    注意

    +

    替换 RABBIT_PASS,为 OpenStack 用户设置密码

    +
  6. +
  7. +

    设置openstack用户权限,允许进行配置、写、读:

    +
    rabbitmqctl set_permissions openstack ".*" ".*" ".*"
    +
  8. +
+

安装 Memcached

+
    +
  1. +

    执行如下命令,安装依赖软件包。

    +
    yum install memcached python3-memcached
    +
  2. +
  3. +

    编辑 /etc/sysconfig/memcached 文件。

    +
    vi /etc/sysconfig/memcached
    +
    +OPTIONS="-l 127.0.0.1,::1,controller"
    +
  4. +
  5. +

    执行如下命令,启动 Memcached 服务,并为其配置开机启动。

    +
    systemctl enable memcached.service
    +systemctl start memcached.service
    +

    注意

    +

    服务启动后,可以通过命令memcached-tool controller stats确保启动正常,服务可用,其中可以将controller替换为控制节点的管理IP地址。

    +
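    例如,执行上文提到的命令查看memcached统计信息,能正常输出即表示服务可用:

    +
    memcached-tool controller stats
    +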
  6. +
+

安装 OpenStack

+

Keystone 安装

+
    +
  1. +

    创建 keystone 数据库并授权。

    +
    mysql -u root -p
    +
    +MariaDB [(none)]> CREATE DATABASE keystone;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
    +IDENTIFIED BY 'KEYSTONE_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
    +IDENTIFIED BY 'KEYSTONE_DBPASS';
    +MariaDB [(none)]> exit
    +

    注意

    +

    替换 KEYSTONE_DBPASS,为 Keystone 数据库设置密码

    +
  2. +
  3. +

    安装软件包。

    +
    yum install openstack-keystone httpd mod_wsgi
    +
  4. +
  5. +

    配置keystone相关配置

    +
    vi /etc/keystone/keystone.conf
    +
    +[database]
    +connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone
    +
    +[token]
    +provider = fernet
    +

    解释

    +

    [database]部分,配置数据库入口

    +

    [token]部分,配置token provider

    +

    注意:

    +

    替换 KEYSTONE_DBPASS 为 Keystone 数据库的密码

    +
  6. +
  7. +

    同步数据库。

    +
    su -s /bin/sh -c "keystone-manage db_sync" keystone
    +
  8. +
  9. +

    初始化Fernet密钥仓库。

    +
    keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
    +keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
    +
  10. +
  11. +

    启动服务。

    +
    keystone-manage bootstrap --bootstrap-password ADMIN_PASS \
    +--bootstrap-admin-url http://controller:5000/v3/ \
    +--bootstrap-internal-url http://controller:5000/v3/ \
    +--bootstrap-public-url http://controller:5000/v3/ \
    +--bootstrap-region-id RegionOne
    +

    注意

    +

    替换 ADMIN_PASS,为 admin 用户设置密码

    +
  12. +
  13. +

    配置Apache HTTP server

    +
    vi /etc/httpd/conf/httpd.conf
    +
    +ServerName controller
    +
    ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
    +

    解释

    +

    配置 ServerName 项引用控制节点

    +

    注意 +如果 ServerName 项不存在则需要创建

    +
  14. +
  15. +

    启动Apache HTTP服务。

    +
    systemctl enable httpd.service
    +systemctl start httpd.service
    +
  16. +
  17. +

    创建环境变量配置。

    +
    cat << EOF >> ~/.admin-openrc
    +export OS_PROJECT_DOMAIN_NAME=Default
    +export OS_USER_DOMAIN_NAME=Default
    +export OS_PROJECT_NAME=admin
    +export OS_USERNAME=admin
    +export OS_PASSWORD=ADMIN_PASS
    +export OS_AUTH_URL=http://controller:5000/v3
    +export OS_IDENTITY_API_VERSION=3
    +export OS_IMAGE_API_VERSION=2
    +EOF
    +

    注意

    +

    替换 ADMIN_PASS 为 admin 用户的密码

    +
  18. +
  19. +

    依次创建domain, projects, users, roles,需要先安装好python3-openstackclient:

    +
    yum install python3-openstackclient
    +

    导入环境变量

    +
    source ~/.admin-openrc
    +

    创建project service,其中 domain default 在 keystone-manage bootstrap 时已创建

    +
    openstack domain create --description "An Example Domain" example
    +
    openstack project create --domain default --description "Service Project" service
    +

    创建(non-admin)project myproject,user myuser 和 role myrole,为 myprojectmyuser 添加角色myrole

    +
    openstack project create --domain default --description "Demo Project" myproject
    +openstack user create --domain default --password-prompt myuser
    +openstack role create myrole
    +openstack role add --project myproject --user myuser myrole
    +
  20. +
  21. +

    验证

    +

    取消临时环境变量OS_AUTH_URL和OS_PASSWORD:

    +
    source ~/.admin-openrc
    +unset OS_AUTH_URL OS_PASSWORD
    +

    为admin用户请求token:

    +
    openstack --os-auth-url http://controller:5000/v3 \
    +--os-project-domain-name Default --os-user-domain-name Default \
    +--os-project-name admin --os-username admin token issue
    +

    为myuser用户请求token:

    +
    openstack --os-auth-url http://controller:5000/v3 \
    +--os-project-domain-name Default --os-user-domain-name Default \
    +--os-project-name myproject --os-username myuser token issue
    +
  22. +
+

Glance 安装

+
    +
  1. +

    创建数据库、服务凭证和 API 端点

    +

    创建数据库:

    +
    mysql -u root -p
    +
    +MariaDB [(none)]> CREATE DATABASE glance;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
    +IDENTIFIED BY 'GLANCE_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
    +IDENTIFIED BY 'GLANCE_DBPASS';
    +MariaDB [(none)]> exit
    +

    注意:

    +

    替换 GLANCE_DBPASS,为 glance 数据库设置密码

    +

    创建服务凭证

    +
    source ~/.admin-openrc
    +
    +openstack user create --domain default --password-prompt glance
    +openstack role add --project service --user glance admin
    +openstack service create --name glance --description "OpenStack Image" image
    +

    创建镜像服务API端点:

    +
    openstack endpoint create --region RegionOne image public http://controller:9292
    +openstack endpoint create --region RegionOne image internal http://controller:9292
    +openstack endpoint create --region RegionOne image admin http://controller:9292
    +
  2. +
  3. +

    安装软件包

    +
    yum install openstack-glance
    +
  4. +
  5. +

    配置glance相关配置:

    +
    vi /etc/glance/glance-api.conf
    +
    +[database]
    +connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
    +
    +[keystone_authtoken]
    +www_authenticate_uri  = http://controller:5000
    +auth_url = http://controller:5000
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +project_name = service
    +username = glance
    +password = GLANCE_PASS
    +
    +[paste_deploy]
    +flavor = keystone
    +
    +[glance_store]
    +stores = file,http
    +default_store = file
    +filesystem_store_datadir = /var/lib/glance/images/
    +

    解释:

    +

    [database]部分,配置数据库入口

    +

    [keystone_authtoken] [paste_deploy]部分,配置身份认证服务入口

    +

    [glance_store]部分,配置本地文件系统存储和镜像文件的位置

    +

    注意

    +

    替换 GLANCE_DBPASS 为 glance 数据库的密码

    +

    替换 GLANCE_PASS 为 glance 用户的密码

    +
  6. +
  7. +

    同步数据库:

    +
    su -s /bin/sh -c "glance-manage db_sync" glance
    +
  8. +
  9. +

    启动服务:

    +
    systemctl enable openstack-glance-api.service
    +systemctl start openstack-glance-api.service
    +
  10. +
  11. +

    验证

    +

    下载镜像

    +
    source ~/.admin-openrc
    +
    +wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
    +

    注意

    +

    如果您使用的环境是鲲鹏架构,请下载aarch64版本的镜像;已对镜像cirros-0.5.2-aarch64-disk.img进行测试。

    +
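    On aarch64 (Kunpeng) hosts the download target differs; a minimal sketch, assuming the aarch64 image of the cirros 0.5.2 release published on download.cirros-cloud.net:

    ```shell
    # Assumed path for the aarch64 image on the upstream cirros mirror
    wget http://download.cirros-cloud.net/0.5.2/cirros-0.5.2-aarch64-disk.img
    ```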

    向Image服务上传镜像:

    +
    openstack image create --disk-format qcow2 --container-format bare \
    +                       --file cirros-0.4.0-x86_64-disk.img --public cirros
    +

    确认镜像上传并验证属性:

    +
    openstack image list
    +
  12. +
+

Placement安装

+
    +
  1. +

    创建数据库、服务凭证和 API 端点

    +

    创建数据库:

    +

    作为 root 用户访问数据库,创建 placement 数据库并授权。

    +
    mysql -u root -p
    +MariaDB [(none)]> CREATE DATABASE placement;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' \
    +IDENTIFIED BY 'PLACEMENT_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' \
    +IDENTIFIED BY 'PLACEMENT_DBPASS';
    +MariaDB [(none)]> exit
    +

    注意

    +

    替换 PLACEMENT_DBPASS 为 placement 数据库设置密码

    +
    source ~/.admin-openrc
    +

    执行如下命令,创建 placement 服务凭证、创建 placement 用户以及添加‘admin’角色到用户‘placement’。

    +

    创建Placement API服务

    +
    openstack user create --domain default --password-prompt placement
    +openstack role add --project service --user placement admin
    +openstack service create --name placement --description "Placement API" placement
    +

    创建placement服务API端点:

    +
    openstack endpoint create --region RegionOne placement public http://controller:8778
    +openstack endpoint create --region RegionOne placement internal http://controller:8778
    +openstack endpoint create --region RegionOne placement admin http://controller:8778
    +
  2. +
  3. +

    安装和配置

    +

    安装软件包:

    +
    yum install openstack-placement-api
    +

    配置placement:

    +

    编辑 /etc/placement/placement.conf 文件:

    +

    在[placement_database]部分,配置数据库入口

    +

    在[api] [keystone_authtoken]部分,配置身份认证服务入口

    +
    # vi /etc/placement/placement.conf
    +[placement_database]
    +# ...
    +connection = mysql+pymysql://placement:PLACEMENT_DBPASS@controller/placement
    +[api]
    +# ...
    +auth_strategy = keystone
    +[keystone_authtoken]
    +# ...
    +auth_url = http://controller:5000/v3
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +project_name = service
    +username = placement
    +password = PLACEMENT_PASS
    +

    其中,替换 PLACEMENT_DBPASS 为 placement 数据库的密码,替换 PLACEMENT_PASS 为 placement 用户的密码。

    +

    同步数据库:

    +
    su -s /bin/sh -c "placement-manage db sync" placement
    +

    启动httpd服务:

    +
    systemctl restart httpd
    +
  4. +
  5. +

    验证

    +

    执行如下命令,执行状态检查:

    +
    . ~/.admin-openrc
    +placement-status upgrade check
    +

    安装osc-placement,列出可用的资源类别及特性:

    +
    yum install python3-osc-placement
    +openstack --os-placement-api-version 1.2 resource class list --sort-column name
    +openstack --os-placement-api-version 1.6 trait list --sort-column name
    +
  6. +
+

Nova 安装

+
    +
  1. +

    创建数据库、服务凭证和 API 端点

    +

    创建数据库:

    +
    mysql -u root -p                                                                               (CTL)
    +
    +MariaDB [(none)]> CREATE DATABASE nova_api;
    +MariaDB [(none)]> CREATE DATABASE nova;
    +MariaDB [(none)]> CREATE DATABASE nova_cell0;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> exit
    +

    注意

    +

    替换NOVA_DBPASS,为nova数据库设置密码

    +
    source ~/.admin-openrc                                                                         (CTL)
    +

    创建nova服务凭证:

    +
    openstack user create --domain default --password-prompt nova                                  (CTL)
    +openstack role add --project service --user nova admin                                         (CTL)
    +openstack service create --name nova --description "OpenStack Compute" compute                 (CTL)
    +

    创建nova API端点:

    +
    openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1        (CTL)
    +openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1      (CTL)
    +openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1         (CTL)
    +
  2. +
  3. +

    安装软件包

    +
    yum install openstack-nova-api openstack-nova-conductor \                                      (CTL)
    +openstack-nova-novncproxy openstack-nova-scheduler 
    +
    +yum install openstack-nova-compute                                                             (CPT)
    +

    注意

    +

    On arm64 hosts, the following package also needs to be installed:

    +
    yum install edk2-aarch64                                                                       (CPT)
    +
  4. +
  5. +

    配置nova相关配置

    +
    vi /etc/nova/nova.conf
    +
    +[DEFAULT]
    +enabled_apis = osapi_compute,metadata
    +transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
    +my_ip = 10.0.0.1
    +use_neutron = true
    +firewall_driver = nova.virt.firewall.NoopFirewallDriver
    +compute_driver=libvirt.LibvirtDriver                                                           (CPT)
    +instances_path = /var/lib/nova/instances/                                                      (CPT)
    +lock_path = /var/lib/nova/tmp                                                                  (CPT)
    +logdir = /var/log/nova/
    +
    +[api_database]
    +connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api                              (CTL)
    +
    +[database]
    +connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova                                  (CTL)
    +
    +[api]
    +auth_strategy = keystone
    +
    +[keystone_authtoken]
    +www_authenticate_uri = http://controller:5000/
    +auth_url = http://controller:5000/
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +project_name = service
    +username = nova
    +password = NOVA_PASS
    +
    +[vnc]
    +enabled = true
    +server_listen = $my_ip
    +server_proxyclient_address = $my_ip
    +novncproxy_base_url = http://controller:6080/vnc_auto.html                                     (CPT)
    +
    +[glance]
    +api_servers = http://controller:9292
    +
    +[oslo_concurrency]
    +lock_path = /var/lib/nova/tmp                                                                  (CTL)
    +
    +[placement]
    +region_name = RegionOne
    +project_domain_name = Default
    +project_name = service
    +auth_type = password
    +user_domain_name = Default
    +auth_url = http://controller:5000/v3
    +username = placement
    +password = PLACEMENT_PASS
    +
    +[neutron]
    +auth_url = http://controller:5000
    +auth_type = password
    +project_domain_name = default
    +user_domain_name = default
    +region_name = RegionOne
    +project_name = service
    +username = neutron
    +password = NEUTRON_PASS
    +service_metadata_proxy = true                                                                  (CTL)
    +metadata_proxy_shared_secret = METADATA_SECRET                                                 (CTL)
    +

    解释

    +

    [default]部分,启用计算和元数据的API,配置RabbitMQ消息队列入口,配置my_ip,启用网络服务neutron;

    +

    [api_database] [database]部分,配置数据库入口;

    +

    [api] [keystone_authtoken]部分,配置身份认证服务入口;

    +

    [vnc]部分,启用并配置远程控制台入口;

    +

    [glance]部分,配置镜像服务API的地址;

    +

    [oslo_concurrency]部分,配置lock path;

    +

    [placement]部分,配置placement服务的入口。

    +

    注意

    +

    替换 RABBIT_PASS 为 RabbitMQ 中 openstack 账户的密码;

    +

    配置 my_ip 为控制节点的管理IP地址;

    +

    替换 NOVA_DBPASS 为nova数据库的密码;

    +

    替换 NOVA_PASS 为nova用户的密码;

    +

    替换 PLACEMENT_PASS 为placement用户的密码;

    +

    替换 NEUTRON_PASS 为neutron用户的密码;

    +

    替换METADATA_SECRET为合适的元数据代理secret。

    +

    额外

    +

    确定是否支持虚拟机硬件加速(x86架构):

    +
    egrep -c '(vmx|svm)' /proc/cpuinfo                                                             (CPT)
    +

    如果返回值为0则不支持硬件加速,需要配置libvirt使用QEMU而不是KVM:

    +
    vi /etc/nova/nova.conf                                                                        (CPT)
    +
    +[libvirt]
    +virt_type = qemu
    +

    If the value is 1 or greater, hardware acceleration is supported and virt_type can be set to kvm.

    +

    注意

    +

    On arm64 hosts, the following commands also need to be run on the compute nodes:

    +
    
    +mkdir -p /usr/share/AAVMF
    +chown nova:nova /usr/share/AAVMF
    +
    +ln -s /usr/share/edk2/aarch64/QEMU_EFI-pflash.raw \
    +      /usr/share/AAVMF/AAVMF_CODE.fd
    +ln -s /usr/share/edk2/aarch64/vars-template-pflash.raw \
    +      /usr/share/AAVMF/AAVMF_VARS.fd
    +
    +vi /etc/libvirt/qemu.conf
    +
    +nvram = ["/usr/share/AAVMF/AAVMF_CODE.fd: \
    +         /usr/share/AAVMF/AAVMF_VARS.fd", \
    +         "/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw: \
    +         /usr/share/edk2/aarch64/vars-template-pflash.raw"]
    +

    并且当ARM架构下的部署环境为嵌套虚拟化时,libvirt配置如下:

    +
    [libvirt]
    +virt_type = qemu
    +cpu_mode = custom
    +cpu_model = cortex-a72
    +
  6. +
  7. +

    同步数据库

    +

    同步nova-api数据库:

    +
    su -s /bin/sh -c "nova-manage api_db sync" nova                                                (CTL)
    +

    注册cell0数据库:

    +
    su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova                                          (CTL)
    +

    创建cell1 cell:

    +
    su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova                 (CTL)
    +

    同步nova数据库:

    +
    su -s /bin/sh -c "nova-manage db sync" nova                                                    (CTL)
    +

    验证cell0和cell1注册正确:

    +
    su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova                                         (CTL)
    +

    添加计算节点到openstack集群

    +
    su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova                           (CTL)
    +
  8. +
  9. +

    启动服务

    +
    systemctl enable \                                                                             (CTL)
    +openstack-nova-api.service \
    +openstack-nova-scheduler.service \
    +openstack-nova-conductor.service \
    +openstack-nova-novncproxy.service
    +
    +systemctl start \                                                                              (CTL)
    +openstack-nova-api.service \
    +openstack-nova-scheduler.service \
    +openstack-nova-conductor.service \
    +openstack-nova-novncproxy.service
    +
    systemctl enable libvirtd.service openstack-nova-compute.service                               (CPT)
    +systemctl start libvirtd.service openstack-nova-compute.service                                (CPT)
    +
  10. +
  11. +

    验证

    +
    source ~/.admin-openrc                                                                         (CTL)
    +

    列出服务组件,验证每个流程都成功启动和注册:

    +
    openstack compute service list                                                                 (CTL)
    +

    列出身份服务中的API端点,验证与身份服务的连接:

    +
    openstack catalog list                                                                         (CTL)
    +

    列出镜像服务中的镜像,验证与镜像服务的连接:

    +
    openstack image list                                                                           (CTL)
    +

    检查cells是否运作成功,以及其他必要条件是否已具备。

    +
    nova-status upgrade check                                                                      (CTL)
    +
  12. +
+

Neutron 安装

+
    +
  1. +

    创建数据库、服务凭证和 API 端点

    +

    创建数据库:

    +
    mysql -u root -p                                                                               (CTL)
    +
    +MariaDB [(none)]> CREATE DATABASE neutron;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
    +IDENTIFIED BY 'NEUTRON_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
    +IDENTIFIED BY 'NEUTRON_DBPASS';
    +MariaDB [(none)]> exit
    +

    注意

    +

    替换 NEUTRON_DBPASS 为 neutron 数据库设置密码。

    +
    source ~/.admin-openrc                                                                         (CTL)
    +

    创建neutron服务凭证

    +
    openstack user create --domain default --password-prompt neutron                               (CTL)
    +openstack role add --project service --user neutron admin                                      (CTL)
    +openstack service create --name neutron --description "OpenStack Networking" network           (CTL)
    +

    创建Neutron服务API端点:

    +
    openstack endpoint create --region RegionOne network public http://controller:9696             (CTL)
    +openstack endpoint create --region RegionOne network internal http://controller:9696           (CTL)
    +openstack endpoint create --region RegionOne network admin http://controller:9696              (CTL)
    +
  2. +
  3. +

    安装软件包:

    +
    yum install openstack-neutron openstack-neutron-linuxbridge ebtables ipset \                   (CTL)
    +openstack-neutron-ml2
    +
    yum install openstack-neutron-linuxbridge ebtables ipset                                       (CPT)
    +
  4. +
  5. +

    配置neutron相关配置:

    +

    配置主体配置

    +
    vi /etc/neutron/neutron.conf
    +
    +[database]
    +connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron                         (CTL)
    +
    +[DEFAULT]
    +core_plugin = ml2                                                                              (CTL)
    +service_plugins = router                                                                       (CTL)
    +allow_overlapping_ips = true                                                                   (CTL)
    +transport_url = rabbit://openstack:RABBIT_PASS@controller
    +auth_strategy = keystone
    +notify_nova_on_port_status_changes = true                                                      (CTL)
    +notify_nova_on_port_data_changes = true                                                        (CTL)
    +api_workers = 3                                                                                (CTL)
    +
    +[keystone_authtoken]
    +www_authenticate_uri = http://controller:5000
    +auth_url = http://controller:5000
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +project_name = service
    +username = neutron
    +password = NEUTRON_PASS
    +
    +[nova]
    +auth_url = http://controller:5000                                                              (CTL)
    +auth_type = password                                                                           (CTL)
    +project_domain_name = Default                                                                  (CTL)
    +user_domain_name = Default                                                                     (CTL)
    +region_name = RegionOne                                                                        (CTL)
    +project_name = service                                                                         (CTL)
    +username = nova                                                                                (CTL)
    +password = NOVA_PASS                                                                           (CTL)
    +
    +[oslo_concurrency]
    +lock_path = /var/lib/neutron/tmp
    +

    解释

    +

    [database]部分,配置数据库入口;

    +

    [default]部分,启用ml2插件和router插件,允许ip地址重叠,配置RabbitMQ消息队列入口;

    +

    [default] [keystone]部分,配置身份认证服务入口;

    +

    [default] [nova]部分,配置网络来通知计算网络拓扑的变化;

    +

    [oslo_concurrency]部分,配置lock path。

    +

    注意

    +

    替换NEUTRON_DBPASS为 neutron 数据库的密码;

    +

    替换RABBIT_PASS为 RabbitMQ中openstack 账户的密码;

    +

    替换NEUTRON_PASS为 neutron 用户的密码;

    +

    替换NOVA_PASS为 nova 用户的密码。

    +

    配置ML2插件:

    +
    vi /etc/neutron/plugins/ml2/ml2_conf.ini                                                      (CTL)
    +
    +[ml2]
    +type_drivers = flat,vlan,vxlan
    +tenant_network_types = vxlan
    +mechanism_drivers = linuxbridge,l2population
    +extension_drivers = port_security
    +
    +[ml2_type_flat]
    +flat_networks = provider
    +
    +[ml2_type_vxlan]
    +vni_ranges = 1:1000
    +
    +[securitygroup]
    +enable_ipset = true
    +

    创建/etc/neutron/plugin.ini的符号链接

    +
    ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
    +

    注意

    +

    [ml2]部分,启用 flat、vlan、vxlan 网络,启用 linuxbridge 及 l2population 机制,启用端口安全扩展驱动;

    +

    [ml2_type_flat]部分,配置 flat 网络为 provider 虚拟网络;

    +

    [ml2_type_vxlan]部分,配置 VXLAN 网络标识符范围;

    +

    [securitygroup]部分,配置允许 ipset。

    +

    补充

    +

    l2 的具体配置可以根据用户需求自行修改,本文使用的是provider network + linuxbridge

    +

    配置 Linux bridge 代理:

    +
    vi /etc/neutron/plugins/ml2/linuxbridge_agent.ini
    +
    +[linux_bridge]
    +physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME
    +
    +[vxlan]
    +enable_vxlan = true
    +local_ip = OVERLAY_INTERFACE_IP_ADDRESS
    +l2_population = true
    +
    +[securitygroup]
    +enable_security_group = true
    +firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
    +

    解释

    +

    [linux_bridge]部分,映射 provider 虚拟网络到物理网络接口;

    +

    [vxlan]部分,启用 vxlan 覆盖网络,配置处理覆盖网络的物理网络接口 IP 地址,启用 layer-2 population;

    +

    [securitygroup]部分,允许安全组,配置 linux bridge iptables 防火墙驱动。

    +

    注意

    +

    替换PROVIDER_INTERFACE_NAME为物理网络接口;

    +

    替换OVERLAY_INTERFACE_IP_ADDRESS为控制节点的管理IP地址。

    +
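    For illustration, assuming the provider physical NIC on this node is eth1 and the overlay address is 10.0.0.1 (both hypothetical values), the two settings above would read:

    ```shell
    [linux_bridge]
    physical_interface_mappings = provider:eth1

    [vxlan]
    local_ip = 10.0.0.1
    ```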

    配置Layer-3代理:

    +
    vi /etc/neutron/l3_agent.ini                                                                  (CTL)
    +
    +[DEFAULT]
    +interface_driver = linuxbridge
    +

    解释

    +

    在[default]部分,配置接口驱动为linuxbridge

    +

    配置DHCP代理:

    +
    vi /etc/neutron/dhcp_agent.ini                                                                (CTL)
    +
    +[DEFAULT]
    +interface_driver = linuxbridge
    +dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
    +enable_isolated_metadata = true
    +

    解释

    +

    [default]部分,配置linuxbridge接口驱动、Dnsmasq DHCP驱动,启用隔离的元数据。

    +

    配置metadata代理:

    +
    vi /etc/neutron/metadata_agent.ini                                                            (CTL)
    +
    +[DEFAULT]
    +nova_metadata_host = controller
    +metadata_proxy_shared_secret = METADATA_SECRET
    +

    解释

    +

    [default]部分,配置元数据主机和shared secret。

    +

    注意

    +

    替换METADATA_SECRET为合适的元数据代理secret。

    +
  6. +
  7. +

    配置nova相关配置

    +
    vi /etc/nova/nova.conf
    +
    +[neutron]
    +auth_url = http://controller:5000
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +region_name = RegionOne
    +project_name = service
    +username = neutron
    +password = NEUTRON_PASS
    +service_metadata_proxy = true                                                                  (CTL)
    +metadata_proxy_shared_secret = METADATA_SECRET                                                 (CTL)
    +

    解释

    +

    [neutron]部分,配置访问参数,启用元数据代理,配置secret。

    +

    注意

    +

    替换NEUTRON_PASS为 neutron 用户的密码;

    +

    替换METADATA_SECRET为合适的元数据代理secret。

    +
  8. +
  9. +

    同步数据库:

    +
    su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \                  (CTL)
    +--config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
    +
  10. +
  11. +

    重启计算API服务:

    +
    systemctl restart openstack-nova-api.service
    +
  12. +
  13. +

    启动网络服务

    +
    systemctl enable neutron-server.service neutron-linuxbridge-agent.service \                    (CTL)
    +neutron-dhcp-agent.service neutron-metadata-agent.service \
    +neutron-l3-agent.service
    +
    +systemctl restart neutron-server.service neutron-linuxbridge-agent.service \                   (CTL)
    +neutron-dhcp-agent.service neutron-metadata-agent.service \
    +neutron-l3-agent.service
    +
    +systemctl enable neutron-linuxbridge-agent.service                                             (CPT)
    +systemctl restart neutron-linuxbridge-agent.service openstack-nova-compute.service             (CPT)
    +
  14. +
  15. +

    验证

    +

    验证 neutron 代理启动成功:

    +
    openstack network agent list
    +
  16. +
+

Cinder 安装

+
    +
  1. +

    创建数据库、服务凭证和 API 端点

    +

    创建数据库:

    +
    mysql -u root -p
    +
    +MariaDB [(none)]> CREATE DATABASE cinder;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \
    +IDENTIFIED BY 'CINDER_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \
    +IDENTIFIED BY 'CINDER_DBPASS';
    +MariaDB [(none)]> exit
    +

    注意

    +

    替换 CINDER_DBPASS 为cinder数据库设置密码。

    +
    source ~/.admin-openrc
    +

    创建cinder服务凭证:

    +
    openstack user create --domain default --password-prompt cinder
    +openstack role add --project service --user cinder admin
    +openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
    +openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
    +

    创建块存储服务API端点:

    +
    openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(project_id\)s
    +openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(project_id\)s
    +openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(project_id\)s
    +openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s
    +openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s
    +openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s
    +
  2. +
  3. +

    安装软件包:

    +
    yum install openstack-cinder-api openstack-cinder-scheduler                                    (CTL)
    +
    yum install lvm2 device-mapper-persistent-data scsi-target-utils rpcbind nfs-utils \           (STG)
    +            openstack-cinder-volume openstack-cinder-backup
    +
  4. +
  5. +

    准备存储设备,以下仅为示例:

    +
    pvcreate /dev/vdb
    +vgcreate cinder-volumes /dev/vdb
    +
    +vi /etc/lvm/lvm.conf
    +
    +
    +devices {
    +...
    +filter = [ "a/vdb/", "r/.*/"]
    +

    解释

    +

    在devices部分,添加过滤以接受/dev/vdb设备拒绝其他设备。

    +
  6. +
  7. +

    准备NFS

    +
    mkdir -p /root/cinder/backup
    +
    +cat << EOF >> /etc/exports
    +/root/cinder/backup 192.168.1.0/24(rw,sync,no_root_squash,no_all_squash)
    +EOF
    +
    +
  8. +
  9. +

    配置cinder相关配置:

    +
    vi /etc/cinder/cinder.conf
    +
    +[DEFAULT]
    +transport_url = rabbit://openstack:RABBIT_PASS@controller
    +auth_strategy = keystone
    +my_ip = 10.0.0.11
    +enabled_backends = lvm                                                                         (STG)
    +backup_driver=cinder.backup.drivers.nfs.NFSBackupDriver                                        (STG)
    +backup_share=HOST:PATH                                                                         (STG)
    +
    +[database]
    +connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder
    +
    +[keystone_authtoken]
    +www_authenticate_uri = http://controller:5000
    +auth_url = http://controller:5000
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +project_name = service
    +username = cinder
    +password = CINDER_PASS
    +
    +[oslo_concurrency]
    +lock_path = /var/lib/cinder/tmp
    +
    +[lvm]
    +volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver                                      (STG)
    +volume_group = cinder-volumes                                                                  (STG)
    +iscsi_protocol = iscsi                                                                         (STG)
    +iscsi_helper = tgtadm                                                                          (STG)
    +

    解释

    +

    [database]部分,配置数据库入口;

    +

    [DEFAULT]部分,配置RabbitMQ消息队列入口,配置my_ip;

    +

    [DEFAULT] [keystone_authtoken]部分,配置身份认证服务入口;

    +

    [oslo_concurrency]部分,配置lock path。

    +

    注意

    +

    替换CINDER_DBPASS为 cinder 数据库的密码;

    +

    替换RABBIT_PASS为 RabbitMQ 中 openstack 账户的密码;

    +

    配置my_ip为控制节点的管理 IP 地址;

    +

    替换CINDER_PASS为 cinder 用户的密码;

    +

    Replace HOST:PATH with the NFS host IP and the shared path.

    +
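    For example, with the NFS export prepared earlier and a storage node reachable at 192.168.1.100 (an assumed address), the backup settings would look like:

    ```shell
    backup_driver = cinder.backup.drivers.nfs.NFSBackupDriver
    backup_share = 192.168.1.100:/root/cinder/backup
    ```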
  10. +
  11. +

    同步数据库:

    +
    su -s /bin/sh -c "cinder-manage db sync" cinder                                                (CTL)
    +
  12. +
  13. +

    配置nova:

    +
    vi /etc/nova/nova.conf                                                                        (CTL)
    +
    +[cinder]
    +os_region_name = RegionOne
    +
  14. +
  15. +

    重启计算API服务

    +
    systemctl restart openstack-nova-api.service
    +
  16. +
  17. +

    启动cinder服务

    +
    systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service               (CTL)
    +systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service                (CTL)
    +
    systemctl enable rpcbind.service nfs-server.service tgtd.service iscsid.service \              (STG)
    +                 openstack-cinder-volume.service \
    +                 openstack-cinder-backup.service
    +systemctl start rpcbind.service nfs-server.service tgtd.service iscsid.service \               (STG)
    +                openstack-cinder-volume.service \
    +                openstack-cinder-backup.service
    +

    注意

    +

    当cinder使用tgtadm的方式挂卷的时候,要修改/etc/tgt/tgtd.conf,内容如下,保证tgtd可以发现cinder-volume的iscsi target。

    +
    include /var/lib/cinder/volumes/*
    +
  18. +
  19. +

    验证

    +
    source ~/.admin-openrc
    +openstack volume service list
    +
  20. +
+

horizon 安装

+
    +
  1. +

    安装软件包

    +
    yum install openstack-dashboard
    +
  2. +
  3. +

    修改文件

    +

    修改变量

    +
    vi /etc/openstack-dashboard/local_settings
    +
    +OPENSTACK_HOST = "controller"
    +ALLOWED_HOSTS = ['*', ]
    +
    +SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
    +
    +CACHES = {
    +'default': {
    +     'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
    +     'LOCATION': 'controller:11211',
    +    }
    +}
    +
    +OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
    +OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
    +OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
    +OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
    +
    +OPENSTACK_API_VERSIONS = {
    +    "identity": 3,
    +    "image": 2,
    +    "volume": 3,
    +}
    +
  4. +
  5. +

    重启 httpd 服务

    +
    systemctl restart httpd.service memcached.service
    +
  6. +
  7. +

    验证

    +

    打开浏览器,输入网址http://HOSTIP/dashboard/,登录 horizon。

    +

    注意

    +

    替换HOSTIP为控制节点管理平面IP地址

    +
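    If no browser is available on the management network, a quick reachability check can be done from the command line (a sketch; HOSTIP is the controller's management IP, and any 2xx/3xx response indicates the dashboard is being served):

    ```shell
    curl -I http://HOSTIP/dashboard/
    ```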
  8. +
+

Tempest 安装

+

Tempest是OpenStack的集成测试服务,如果用户需要全面自动化测试已安装的OpenStack环境的功能,则推荐使用该组件。否则,可以不用安装。

+
    +
  1. +

    安装Tempest

    +
    yum install openstack-tempest
    +
  2. +
  3. +

    初始化目录

    +
    tempest init mytest
    +
  4. +
  5. +

    修改配置文件。

    +
    cd mytest
    +vi etc/tempest.conf
    +

    tempest.conf中需要配置当前OpenStack环境的信息,具体内容可以参考官方示例

    +
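    A minimal sketch of etc/tempest.conf for the environment built above; the admin credentials follow the earlier Keystone steps, while the image and flavor IDs are placeholders that must be taken from your own deployment:

    ```shell
    [auth]
    admin_username = admin
    admin_password = ADMIN_PASS
    admin_project_name = admin
    admin_domain_name = Default

    [identity]
    uri_v3 = http://controller:5000/v3

    [compute]
    image_ref = <IMAGE_UUID>
    flavor_ref = <FLAVOR_ID>
    ```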
  6. +
  7. +

    执行测试

    +
    tempest run
    +
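    To limit a first run to a subset of tests, the --regex option of tempest run can be used; the pattern below is only an example:

    ```shell
    tempest run --regex '^tempest\.api\.identity'
    ```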
  8. +
  9. +

    安装tempest扩展(可选)

    +

    OpenStack各个服务本身也提供了一些tempest测试包,用户可以安装这些包来丰富tempest的测试内容。在Train中,我们提供了Cinder、Glance、Keystone、Ironic、Trove的扩展测试,用户可以执行如下命令进行安装使用:

    +

    yum install python3-cinder-tempest-plugin python3-glance-tempest-plugin python3-ironic-tempest-plugin python3-keystone-tempest-plugin python3-trove-tempest-plugin

    +
  10. +
+

Ironic 安装

+

Ironic是OpenStack的裸金属服务,如果用户需要进行裸机部署则推荐使用该组件。否则,可以不用安装。

+
    +
  1. 设置数据库
  2. +
+

裸金属服务在数据库中存储信息,创建一个ironic用户可以访问的ironic数据库,替换IRONIC_DBPASSWORD为合适的密码

+

mysql -u root -p
+
+MariaDB [(none)]> CREATE DATABASE ironic CHARACTER SET utf8;
+MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'localhost' \
+IDENTIFIED BY 'IRONIC_DBPASSWORD';
+MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'%' \
+IDENTIFIED BY 'IRONIC_DBPASSWORD';
+2. 安装软件包

+
yum install openstack-ironic-api openstack-ironic-conductor python3-ironicclient
+

启动服务

+
systemctl enable openstack-ironic-api openstack-ironic-conductor
+systemctl start openstack-ironic-api openstack-ironic-conductor
+
    +
  1. 创建服务用户认证
  2. +
+

1、创建Bare Metal服务用户

+
openstack user create --password IRONIC_PASSWORD \
+                      --email ironic@example.com ironic
+openstack role add --project service --user ironic admin
+openstack service create --name ironic \
+                         --description "Ironic baremetal provisioning service" baremetal
+

2、创建Bare Metal服务访问入口

+
openstack endpoint create --region RegionOne baremetal admin http://$IRONIC_NODE:6385
+openstack endpoint create --region RegionOne baremetal public http://$IRONIC_NODE:6385
+openstack endpoint create --region RegionOne baremetal internal http://$IRONIC_NODE:6385
+
    +
  1. 配置ironic-api服务
  2. +
+

配置文件路径/etc/ironic/ironic.conf

+

1、Configure the database location via the connection option, as shown below. Replace IRONIC_DBPASSWORD with the password of the ironic user and DB_IP with the IP address of the database server:

+
[database]
+
+# The SQLAlchemy connection string used to connect to the
+# database (string value)
+
+connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic
+

2、通过以下选项配置ironic-api服务使用RabbitMQ消息代理,替换RPC_*为RabbitMQ的详细地址和凭证

+
[DEFAULT]
+
+# A URL representing the messaging driver to use and its full
+# configuration. (string value)
+
+transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
+

用户也可自行使用json-rpc方式替换rabbitmq

+

3、配置ironic-api服务使用身份认证服务的凭证,替换PUBLIC_IDENTITY_IP为身份认证服务器的公共IP,替换PRIVATE_IDENTITY_IP为身份认证服务器的私有IP,替换IRONIC_PASSWORD为身份认证服务中ironic用户的密码:

+
[DEFAULT]
+
+# Authentication strategy used by ironic-api: one of
+# "keystone" or "noauth". "noauth" should not be used in a
+# production environment because all authentication will be
+# disabled. (string value)
+
+auth_strategy=keystone
+
+[keystone_authtoken]
+# Authentication type to load (string value)
+auth_type=password
+# Complete public Identity API endpoint (string value)
+www_authenticate_uri=http://PUBLIC_IDENTITY_IP:5000
+# Complete admin Identity API endpoint. (string value)
+auth_url=http://PRIVATE_IDENTITY_IP:5000
+# Service username. (string value)
+username=ironic
+# Service account password. (string value)
+password=IRONIC_PASSWORD
+# Service tenant name. (string value)
+project_name=service
+# Domain name containing project (string value)
+project_domain_name=Default
+# User's domain name (string value)
+user_domain_name=Default
+
+

4、创建裸金属服务数据库表

+
ironic-dbsync --config-file /etc/ironic/ironic.conf create_schema
+

5、重启ironic-api服务

+
sudo systemctl restart openstack-ironic-api
+
    +
  1. 配置ironic-conductor服务
  2. +
+

1、替换HOST_IP为conductor host的IP

+
[DEFAULT]
+
+# IP address of this host. If unset, will determine the IP
+# programmatically. If unable to do so, will use "127.0.0.1".
+# (string value)
+
+my_ip=HOST_IP
+

2、Configure the database location; ironic-conductor should use the same settings as ironic-api. Replace IRONIC_DBPASSWORD with the password of the ironic user and DB_IP with the IP address of the database server:

+
[database]
+
+# The SQLAlchemy connection string to use to connect to the
+# database. (string value)
+
+connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic
+

3、通过以下选项配置ironic-api服务使用RabbitMQ消息代理,ironic-conductor应该使用和ironic-api相同的配置,替换RPC_*为RabbitMQ的详细地址和凭证

+
[DEFAULT]
+
+# A URL representing the messaging driver to use and its full
+# configuration. (string value)
+
+transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
+

用户也可自行使用json-rpc方式替换rabbitmq

+

4、配置凭证访问其他OpenStack服务

+

为了与其他OpenStack服务进行通信,裸金属服务在请求其他服务时需要使用服务用户与OpenStack Identity服务进行认证。这些用户的凭据必须在与相应服务相关的每个配置文件中进行配置。

+
[neutron] - 访问OpenStack网络服务
+[glance] - 访问OpenStack镜像服务
+[swift] - 访问OpenStack对象存储服务
+[cinder] - 访问OpenStack块存储服务
+[inspector] - 访问OpenStack裸金属introspection服务
+[service_catalog] - 一个特殊项用于保存裸金属服务使用的凭证,该凭证用于发现注册在OpenStack身份认证服务目录中的自己的API URL端点
+

简单起见,可以对所有服务使用同一个服务用户。为了向后兼容,该用户应该和ironic-api服务的[keystone_authtoken]所配置的为同一个用户。但这不是必须的,也可以为每个服务创建并配置不同的服务用户。

+

在下面的示例中,用户访问OpenStack网络服务的身份验证信息配置为:

+
网络服务部署在名为RegionOne的身份认证服务域中,仅在服务目录中注册公共端点接口
+
+请求时使用特定的CA SSL证书进行HTTPS连接
+
+与ironic-api服务配置相同的服务用户
+
+动态密码认证插件基于其他选项发现合适的身份认证服务API版本
+
[neutron]
+
+# Authentication type to load (string value)
+auth_type = password
+# Authentication URL (string value)
+auth_url=https://IDENTITY_IP:5000/
+# Username (string value)
+username=ironic
+# User's password (string value)
+password=IRONIC_PASSWORD
+# Project name to scope to (string value)
+project_name=service
+# Domain ID containing project (string value)
+project_domain_id=default
+# User's domain id (string value)
+user_domain_id=default
+# PEM encoded Certificate Authority to use when verifying
+# HTTPs connections. (string value)
+cafile=/opt/stack/data/ca-bundle.pem
+# The default region_name for endpoint URL discovery. (string
+# value)
+region_name = RegionOne
+# List of interfaces, in order of preference, for endpoint
+# URL. (list value)
+valid_interfaces=public
+

默认情况下,为了与其他服务进行通信,裸金属服务会尝试通过身份认证服务的服务目录发现该服务合适的端点。如果希望对一个特定服务使用一个不同的端点,则在裸金属服务的配置文件中通过endpoint_override选项进行指定:

+
[neutron]
+# ...
+endpoint_override = <NEUTRON_API_ADDRESS>
+

5、配置允许的驱动程序和硬件类型

+

通过设置enabled_hardware_types设置ironic-conductor服务允许使用的硬件类型:

+
[DEFAULT]
+enabled_hardware_types = ipmi
+

配置硬件接口:

+
enabled_boot_interfaces = pxe
+enabled_deploy_interfaces = direct,iscsi
+enabled_inspect_interfaces = inspector
+enabled_management_interfaces = ipmitool
+enabled_power_interfaces = ipmitool
+

配置接口默认值:

+
[DEFAULT]
+default_deploy_interface = direct
+default_network_interface = neutron
+

如果启用了任何使用Direct deploy的驱动,必须安装和配置镜像服务的Swift后端。Ceph对象网关(RADOS网关)也支持作为镜像服务的后端。

+

6、重启ironic-conductor服务

+
sudo systemctl restart openstack-ironic-conductor
+
    +
  1. +

    配置httpd服务

    +
  2. +
  3. +

    Create the root directory that ironic's httpd service will use and set its owner and group; the path must match the http_root option in the [deploy] section of /etc/ironic/ironic.conf.

    +
    mkdir -p /var/lib/ironic/httproot
    +chown ironic.ironic /var/lib/ironic/httproot
    +
  4. +
  5. +

    安装和配置httpd服务

    +
      +
    1. +

      安装httpd服务,已有请忽略

      +

      yum install httpd -y
      +

    2. 创建/etc/httpd/conf.d/openstack-ironic-httpd.conf文件,内容如下:

      +
      Listen 8080
      +
      +<VirtualHost *:8080>
      +    ServerName ironic.openeuler.com
      +
      +    ErrorLog "/var/log/httpd/openstack-ironic-httpd-error_log"
      +    CustomLog "/var/log/httpd/openstack-ironic-httpd-access_log" "%h %l %u %t \"%r\" %>s %b"
      +
      +    DocumentRoot "/var/lib/ironic/httproot"
      +    <Directory "/var/lib/ironic/httproot">
      +        Options Indexes FollowSymLinks
      +        Require all granted
      +    </Directory>
      +    LogLevel warn
      +    AddDefaultCharset UTF-8
      +    EnableSendfile on
      +</VirtualHost>
      +
      +

      注意监听的端口要和/etc/ironic/ironic.conf里[deploy]选项中http_url配置项中指定的端口一致。

      +
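      A sketch of the matching [deploy] options in /etc/ironic/ironic.conf, assuming the directory and port chosen above (replace the IP with the conductor host's address):

      ```shell
      [deploy]
      http_root = /var/lib/ironic/httproot
      http_url = http://<conductor-host-ip>:8080
      ```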
    2. +
    3. +

      重启httpd服务。

      +

      systemctl restart httpd
      +

  7. deploy ramdisk镜像制作

      +
    4. +
    +
  6. +
+

The Train ramdisk image can be built with the ironic-python-agent service or the disk-image-builder tool, or with the community's latest ironic-python-agent-builder; users may also choose other tools. If the native Train tools are used, install the corresponding packages.

+
yum install openstack-ironic-python-agent
+或者
+yum install diskimage-builder
+

具体的使用方法可以参考官方文档

+

这里介绍下使用ironic-python-agent-builder构建ironic使用的deploy镜像的完整过程。

+
    +
  1. +

    安装 ironic-python-agent-builder

    +
    1. 安装工具:
    +
    +    ```shell
    +    pip install ironic-python-agent-builder
    +    ```
    +
    +2. 修改以下文件中的python解释器:
    +
    +    ```shell
    +    /usr/bin/yum /usr/libexec/urlgrabber-ext-down
    +    ```
    +
    +3. 安装其它必须的工具:
    +
    +    ```shell
    +    yum install git
    +    ```
    +
    +    由于`DIB`依赖`semanage`命令,所以在制作镜像之前确定该命令是否可用:`semanage --help`,如果提示无此命令,安装即可:
    +
    +    ```shell
    +    # 先查询需要安装哪个包
    +    [root@localhost ~]# yum provides /usr/sbin/semanage
    +    已加载插件:fastestmirror
    +    Loading mirror speeds from cached hostfile
    +    * base: mirror.vcu.edu
    +    * extras: mirror.vcu.edu
    +    * updates: mirror.math.princeton.edu
    +    policycoreutils-python-2.5-34.el7.aarch64 : SELinux policy core python utilities
    +    源    :base
    +    匹配来源:
    +    文件名    :/usr/sbin/semanage
    +    # 安装
    +    [root@localhost ~]# yum install policycoreutils-python
    +    ```
    +
  2. +
  3. +

    制作镜像

    +
    如果是`arm`架构,需要添加:
    +```shell
    +export ARCH=aarch64
    +```
    +
    +基本用法:
    +
    +```shell
    +usage: ironic-python-agent-builder [-h] [-r RELEASE] [-o OUTPUT] [-e ELEMENT]
    +                                    [-b BRANCH] [-v] [--extra-args EXTRA_ARGS]
    +                                    distribution
    +
    +positional arguments:
    +    distribution          Distribution to use
    +
    +optional arguments:
    +    -h, --help            show this help message and exit
    +    -r RELEASE, --release RELEASE
    +                        Distribution release to use
    +    -o OUTPUT, --output OUTPUT
    +                        Output base file name
    +    -e ELEMENT, --element ELEMENT
    +                        Additional DIB element to use
    +    -b BRANCH, --branch BRANCH
    +                        If set, override the branch that is used for ironic-
    +                        python-agent and requirements
    +    -v, --verbose         Enable verbose logging in diskimage-builder
    +    --extra-args EXTRA_ARGS
    +                        Extra arguments to pass to diskimage-builder
    +```
    +
    +举例说明:
    +
    +```shell
    +ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky
    +```
    +
  4. +
  5. +

    允许ssh登录

    +
    初始化环境变量,然后制作镜像:
    +
    +```shell
    +export DIB_DEV_USER_USERNAME=ipa \
    +export DIB_DEV_USER_PWDLESS_SUDO=yes \
    +export DIB_DEV_USER_PASSWORD='123'
    +ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky -e selinux-permissive -e devuser
    +```
    +
  6. +
  7. +

    指定代码仓库

    +
    初始化对应的环境变量,然后制作镜像:
    +
    +```shell
    +# 指定仓库地址以及版本
    +DIB_REPOLOCATION_ironic_python_agent=git@172.20.2.149:liuzz/ironic-python-agent.git
    +DIB_REPOREF_ironic_python_agent=origin/develop
    +
    +# 直接从gerrit上clone代码
    +DIB_REPOLOCATION_ironic_python_agent=https://review.opendev.org/openstack/ironic-python-agent
    +DIB_REPOREF_ironic_python_agent=refs/changes/43/701043/1
    +```
    +
    +参考:[source-repositories](https://docs.openstack.org/diskimage-builder/latest/elements/source-repositories/README.html)。
    +
    +指定仓库地址及版本验证成功。
    +
  8. +
  9. +

    Note: the PXE configuration file templates in native OpenStack do not support the arm64 architecture, so users need to modify the upstream OpenStack code themselves:

    +

    在T版中,社区的ironic仍然不支持arm64位的uefi pxe启动,表现为生成的grub.cfg文件(一般位于/tftpboot/下)格式不对而导致pxe启动失败

    +

    需要用户对生成grub.cfg的代码逻辑自行修改。

    +

    ironic向ipa发送查询命令执行状态请求的tls报错:

    +

    In Train, both IPA and ironic enable TLS by default for the requests they send to each other; it can be disabled following the upstream documentation, as shown below.

    +
      +
    1. 修改ironic配置文件(/etc/ironic/ironic.conf)下面的配置中添加ipa-insecure=1:
    2. +
    +
    [agent]
    +verify_ca = False
    +
    +[pxe]
    +pxe_append_params = nofb nomodeset vga=normal coreos.autologin ipa-insecure=1
    +
      +
    1. ramdisk镜像中添加ipa配置文件/etc/ironic_python_agent/ironic_python_agent.conf并配置tls的配置如下:
    2. +
    +

    /etc/ironic_python_agent/ironic_python_agent.conf (需要提前创建/etc/ironic_python_agent目录)

    +
    [DEFAULT]
    +enable_auto_tls = False
    +

    设置权限:

    +
    chown -R ipa.ipa /etc/ironic_python_agent/
    +
      +
    1. 修改ipa服务的服务启动文件,添加配置文件选项
    2. +
    +

    vi /usr/lib/systemd/system/ironic-python-agent.service

    +
    [Unit]
    +Description=Ironic Python Agent
    +After=network-online.target
    +
    +[Service]
    +ExecStartPre=/sbin/modprobe vfat
    +ExecStart=/usr/local/bin/ironic-python-agent --config-file /etc/ironic_python_agent/ironic_python_agent.conf
    +Restart=always
    +RestartSec=30s
    +
    +[Install]
    +WantedBy=multi-user.target
    +
  10. +
+

在Train中,我们还提供了ironic-inspector等服务,用户可根据自身需求安装。

+

Kolla 安装

+

Kolla为OpenStack服务提供生产环境可用的容器化部署的功能。

+

Kolla的安装十分简单,只需要安装对应的RPM包即可

+
yum install openstack-kolla openstack-kolla-ansible
+

安装完后,就可以使用kolla-ansible, kolla-build, kolla-genpwd, kolla-mergepwd等命令进行相关的镜像制作和容器环境部署了。

+
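A minimal sketch of a first run after installation; the inventory path is the default shipped with kolla-ansible and may differ on your system:

```shell
# Fill in random passwords (assumes the sample passwords.yml installed under /etc/kolla)
kolla-genpwd -p /etc/kolla/passwords.yml

# Run the pre-deployment checks against the bundled all-in-one inventory
kolla-ansible -i /usr/share/kolla-ansible/ansible/inventory/all-in-one prechecks
```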

Trove 安装

+

Trove是OpenStack的数据库服务,如果用户使用OpenStack提供的数据库服务则推荐使用该组件。否则,可以不用安装。

+
    +
  1. 设置数据库
  2. +
+

数据库服务在数据库中存储信息,创建一个trove用户可以访问的trove数据库,替换TROVE_DBPASSWORD为合适的密码

+
mysql -u root -p
+
+MariaDB [(none)]> CREATE DATABASE trove CHARACTER SET utf8;
+MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'localhost' \
+IDENTIFIED BY 'TROVE_DBPASSWORD';
+MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'%' \
+IDENTIFIED BY 'TROVE_DBPASSWORD';
+
    +
  1. 创建服务用户认证
  2. +
+

1、创建Trove服务用户

+

openstack user create --domain default --password-prompt trove
+openstack role add --project service --user trove admin
+openstack service create --name trove --description "Database" database
+ Note: enter the password for the trove user when prompted; it is referred to as TROVE_PASSWORD below.

+

2、创建Database服务访问入口

+
openstack endpoint create --region RegionOne database public http://controller:8779/v1.0/%\(tenant_id\)s
+openstack endpoint create --region RegionOne database internal http://controller:8779/v1.0/%\(tenant_id\)s
+openstack endpoint create --region RegionOne database admin http://controller:8779/v1.0/%\(tenant_id\)s
+
    +
  1. 安装和配置Trove各组件
  2. +
+

1、Install the Trove packages

```shell script
yum install openstack-trove python3-troveclient
```


+2. 配置`trove.conf`
+```shell script
+vi /etc/trove/trove.conf
+
+ [DEFAULT]
+ log_dir = /var/log/trove
+ trove_auth_url = http://controller:5000/
+ nova_compute_url = http://controller:8774/v2
+ cinder_url = http://controller:8776/v1
+ swift_url = http://controller:8080/v1/AUTH_
+ rpc_backend = rabbit
+ transport_url = rabbit://openstack:RABBIT_PASS@controller:5672
+ auth_strategy = keystone
+ add_addresses = True
+ api_paste_config = /etc/trove/api-paste.ini
+ nova_proxy_admin_user = admin
+ nova_proxy_admin_pass = ADMIN_PASSWORD
+ nova_proxy_admin_tenant_name = service
+ taskmanager_manager = trove.taskmanager.manager.Manager
+ use_nova_server_config_drive = True
+ # Set these if using Neutron Networking
+ network_driver = trove.network.neutron.NeutronDriver
+ network_label_regex = .*
+
+ [database]
+ connection = mysql+pymysql://trove:TROVE_DBPASSWORD@controller/trove
+
+ [keystone_authtoken]
+ www_authenticate_uri = http://controller:5000/
+ auth_url = http://controller:5000/
+ auth_type = password
+ project_domain_name = default
+ user_domain_name = default
+ project_name = service
+ username = trove
+ password = TROVE_PASSWORD
+ **Explanation:**
+ - In the [DEFAULT] section, nova_compute_url and cinder_url are the endpoints created in Keystone for Nova and Cinder.
+ - The nova_proxy_* options describe a user that can access the Nova service; the example above uses the admin user.
+ - transport_url is the RabbitMQ connection information; replace RABBIT_PASS with the RabbitMQ password.
+ - connection in the [database] section points to the trove database created in MySQL earlier.
+ - Replace TROVE_PASSWORD with the actual password of the trove user.

+
    +
  1. Configure trove-guestagent.conf
  2. +
+

```shell script
vi /etc/trove/trove-guestagent.conf

rabbit_host = controller
rabbit_password = RABBIT_PASS
trove_auth_url = http://controller:5000/
```

**解释:** `guestagent`是trove中一个独立组件,需要预先内置到Trove通过Nova创建的虚拟
+机镜像中,在创建好数据库实例后,会起guestagent进程,负责通过消息队列(RabbitMQ)向Trove上
+报心跳,因此需要配置RabbitMQ的用户和密码信息。
+**从Victoria版开始,Trove使用一个统一的镜像来跑不同类型的数据库,数据库服务运行在Guest虚拟机的Docker容器中。**
+- `RABBIT_PASS`替换为RabbitMQ的密码  
+
+4. Generate the `Trove` database tables
+```shell script
+su -s /bin/sh -c "trove-manage db_sync" trove

+
    +
  1. 完成安装配置
  2. +
  3. Enable the Trove services to start automatically
    +```shell script
    +systemctl enable openstack-trove-api.service \
    +openstack-trove-taskmanager.service \
    +openstack-trove-conductor.service
    2. 启动服务
    +```shell script
    +systemctl start openstack-trove-api.service \
    +openstack-trove-taskmanager.service \
    +openstack-trove-conductor.service
  4. +
+

Swift 安装

+

Swift 提供了弹性可伸缩、高可用的分布式对象存储服务,适合存储大规模非结构化数据。

+
    +
  1. +

    创建服务凭证、API端点。

    +

    创建服务凭证

    +
    #创建swift用户:
    +openstack user create --domain default --password-prompt swift                 
    +#admin为swift用户添加角色:
    +openstack role add --project service --user swift admin                        
    +#创建swift服务实体:
    +openstack service create --name swift --description "OpenStack Object Storage" object-store                                                                   
    +

    创建swift API 端点:

    +
    openstack endpoint create --region RegionOne object-store public http://controller:8080/v1/AUTH_%\(project_id\)s                            
    +openstack endpoint create --region RegionOne object-store internal http://controller:8080/v1/AUTH_%\(project_id\)s                            
    +openstack endpoint create --region RegionOne object-store admin http://controller:8080/v1                                                  
    +
  2. +
  3. +

    安装软件包:

    +
    yum install openstack-swift-proxy python3-swiftclient python3-keystoneclient python3-keystonemiddleware memcached (CTL)
    +
  4. +
  5. +

    配置proxy-server相关配置

    +
  6. +
+

Swift RPM包里已经包含了一个基本可用的proxy-server.conf,只需要手动修改其中的ip和swift password即可。

+
***注意***
+
+**注意替换password为您swift在身份服务中为用户选择的密码**
+
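A sketch of the fields that typically need adjusting, following the section layout used by the upstream Swift install guide (SWIFT_PASS is the password chosen for the swift user):

```shell
[filter:authtoken]
# ...
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = swift
password = SWIFT_PASS
```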
    +
  1. +

    安装和配置存储节点 (STG)

    +

    安装支持的程序包: +

    yum install xfsprogs rsync

    +

    将/dev/vdb和/dev/vdc设备格式化为 XFS

    +
    mkfs.xfs /dev/vdb
    +mkfs.xfs /dev/vdc
    +

    创建挂载点目录结构:

    +
    mkdir -p /srv/node/vdb
    +mkdir -p /srv/node/vdc
    +

    找到新分区的 UUID:

    +
    blkid
    +

    编辑/etc/fstab文件并将以下内容添加到其中:

    +
    UUID="<UUID-from-output-above>" /srv/node/vdb xfs noatime 0 2
    +UUID="<UUID-from-output-above>" /srv/node/vdc xfs noatime 0 2
    +

    挂载设备:

    +

    mount /srv/node/vdb
    +mount /srv/node/vdc
    +

    注意

    +

    如果用户不需要容灾功能,以上步骤只需要创建一个设备即可,同时可以跳过下面的rsync配置

    +

    (可选)创建或编辑/etc/rsyncd.conf文件以包含以下内容:

    +

    uid = swift
    +gid = swift
    +log file = /var/log/rsyncd.log
    +pid file = /var/run/rsyncd.pid
    +address = MANAGEMENT_INTERFACE_IP_ADDRESS
    +
    +[account]
    +max connections = 2
    +path = /srv/node/
    +read only = False
    +lock file = /var/lock/account.lock
    +
    +[container]
    +max connections = 2
    +path = /srv/node/
    +read only = False
    +lock file = /var/lock/container.lock
    +
    +[object]
    +max connections = 2
    +path = /srv/node/
    +read only = False
    +lock file = /var/lock/object.lock
    +

    替换MANAGEMENT_INTERFACE_IP_ADDRESS为存储节点上管理网络的IP地址

    +

    启动rsyncd服务并配置它在系统启动时启动:

    +
    systemctl enable rsyncd.service
    +systemctl start rsyncd.service
    +
  2. +
  3. +

    在存储节点安装和配置组件 (STG)

    +

    安装软件包:

    +
    yum install openstack-swift-account openstack-swift-container openstack-swift-object
    +

    编辑/etc/swift目录的account-server.conf、container-server.conf和object-server.conf文件,替换bind_ip为存储节点上管理网络的IP地址。

    +
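    A sketch of the [DEFAULT] section to adjust in each of the three files; the ports shown are the defaults that match the ring-builder commands below:

    ```shell
    # account-server.conf / container-server.conf / object-server.conf
    [DEFAULT]
    bind_ip = MANAGEMENT_INTERFACE_IP_ADDRESS
    bind_port = 6202   # 6201 for container-server.conf, 6200 for object-server.conf
    ```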

    确保挂载点目录结构的正确所有权:

    +
    chown -R swift:swift /srv/node
    +

    创建recon目录并确保其拥有正确的所有权:

    +
    mkdir -p /var/cache/swift
    +chown -R root:swift /var/cache/swift
    +chmod -R 775 /var/cache/swift
    +
  4. +
  5. +

    创建账号环 (CTL)

    +

    切换到/etc/swift目录。

    +
    cd /etc/swift
    +

    创建基础account.builder文件:

    +
    swift-ring-builder account.builder create 10 1 1
    +

    将每个存储节点添加到环中:

    +
    swift-ring-builder account.builder add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6202  --device DEVICE_NAME --weight DEVICE_WEIGHT
    +

    替换STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS为存储节点上管理网络的IP地址。替换DEVICE_NAME为同一存储节点上的存储设备名称

    +

    Note: repeat this command for every storage device on every storage node.

    +
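    For example, with the two devices vdb and vdc prepared above on a single storage node at 10.0.0.51 (an assumed address), the account ring would be populated as:

    ```shell
    swift-ring-builder account.builder add --region 1 --zone 1 --ip 10.0.0.51 --port 6202 --device vdb --weight 100
    swift-ring-builder account.builder add --region 1 --zone 1 --ip 10.0.0.51 --port 6202 --device vdc --weight 100
    ```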

    Verify the ring contents:

    +
    swift-ring-builder account.builder
    +

    Rebalance the ring:

    +
    swift-ring-builder account.builder rebalance
    +
  6. +
  7. +

    创建容器环 (CTL)

    +

    切换到/etc/swift目录。

    +

    创建基础container.builder文件:

    +
       swift-ring-builder container.builder create 10 1 1
    +

    将每个存储节点添加到环中:

    +
    swift-ring-builder container.builder \
    +  add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6201 \
    +  --device DEVICE_NAME --weight 100
    +
    +

    替换STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS为存储节点上管理网络的IP地址。替换DEVICE_NAME为同一存储节点上的存储设备名称

    +

    Note: repeat this command for every storage device on every storage node.

    +

    Verify the ring contents:

    +
    swift-ring-builder container.builder
    +

    Rebalance the ring:

    +
    swift-ring-builder container.builder rebalance
    +
  8. +
  9. +

    创建对象环 (CTL)

    +

    切换到/etc/swift目录。

    +

    创建基础object.builder文件:

    +
    swift-ring-builder object.builder create 10 1 1
    +

    将每个存储节点添加到环中

    +
     swift-ring-builder object.builder \
    +  add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6200 \
    +  --device DEVICE_NAME --weight 100
    +

    替换STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS为存储节点上管理网络的IP地址。替换DEVICE_NAME为同一存储节点上的存储设备名称

    +

    Note: repeat this command for every storage device on every storage node.

    +

    Verify the ring contents:

    +
    swift-ring-builder object.builder
    +

    Rebalance the ring:

    +
    swift-ring-builder object.builder rebalance
    +

    分发环配置文件:

    +

    Copy the account.ring.gz, container.ring.gz, and object.ring.gz files to the /etc/swift directory on every storage node and on any other node running the proxy service.

    +
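    A sketch of distributing the ring files, assuming a storage node reachable over SSH as STORAGE_NODE:

    ```shell
    scp /etc/swift/account.ring.gz /etc/swift/container.ring.gz /etc/swift/object.ring.gz \
        root@STORAGE_NODE:/etc/swift/
    ```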
  10. +
  11. +

    完成安装

    +

    编辑/etc/swift/swift.conf文件

    +
    [swift-hash]
    +swift_hash_path_suffix = test-hash
    +swift_hash_path_prefix = test-hash
    +
    +[storage-policy:0]
    +name = Policy-0
    +default = yes
    +

    用唯一值替换 test-hash

    +

    将swift.conf文件复制到/etc/swift每个存储节点和运行代理服务的任何其他节点上的目录。

    +

    在所有节点上,确保配置目录的正确所有权:

    +
    chown -R root:swift /etc/swift
    +

    在控制器节点和运行代理服务的任何其他节点上,启动对象存储代理服务及其依赖项,并将它们配置为在系统启动时启动:

    +
    systemctl enable openstack-swift-proxy.service memcached.service
    +systemctl start openstack-swift-proxy.service memcached.service
    +

    在存储节点上,启动对象存储服务并将它们配置为在系统启动时启动:

    +
    systemctl enable openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service
    +
    +systemctl start openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service
    +
    +systemctl enable openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service
    +
    +systemctl start openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service
    +
    +systemctl enable openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service
    +
    +systemctl start openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service
    +
  12. +
+

Cyborg 安装

+

Cyborg为OpenStack提供加速器设备的支持,包括 GPU, FPGA, ASIC, NP, SoCs, NVMe/NOF SSDs, ODP, DPDK/SPDK等等。

+
    +
  1. 初始化对应数据库
  2. +
+
CREATE DATABASE cyborg;
+GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'localhost' IDENTIFIED BY 'CYBORG_DBPASS';
+GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'%' IDENTIFIED BY 'CYBORG_DBPASS';
+
    +
  1. 创建对应Keystone资源对象
  2. +
+
$ openstack user create --domain default --password-prompt cyborg
+$ openstack role add --project service --user cyborg admin
+$ openstack service create --name cyborg --description "Acceleration Service" accelerator
+
+$ openstack endpoint create --region RegionOne \
+  accelerator public http://<cyborg-ip>:6666/v1
+$ openstack endpoint create --region RegionOne \
+  accelerator internal http://<cyborg-ip>:6666/v1
+$ openstack endpoint create --region RegionOne \
+  accelerator admin http://<cyborg-ip>:6666/v1
+
    +
  1. 安装Cyborg
  2. +
+
yum install openstack-cyborg
+
    +
  1. 配置Cyborg
  2. +
+

修改/etc/cyborg/cyborg.conf

+
[DEFAULT]
+transport_url = rabbit://%RABBITMQ_USER%:%RABBITMQ_PASSWORD%@%OPENSTACK_HOST_IP%:5672/
+use_syslog = False
+state_path = /var/lib/cyborg
+debug = True
+
+[database]
+connection = mysql+pymysql://%DATABASE_USER%:%DATABASE_PASSWORD%@%OPENSTACK_HOST_IP%/cyborg
+
+[service_catalog]
+project_domain_id = default
+user_domain_id = default
+project_name = service
+password = PASSWORD
+username = cyborg
+auth_url = http://%OPENSTACK_HOST_IP%/identity
+auth_type = password
+
+[placement]
+project_domain_name = Default
+project_name = service
+user_domain_name = Default
+password = PASSWORD
+username = placement
+auth_url = http://%OPENSTACK_HOST_IP%/identity
+auth_type = password
+
+[keystone_authtoken]
+memcached_servers = localhost:11211
+project_domain_name = Default
+project_name = service
+user_domain_name = Default
+password = PASSWORD
+username = cyborg
+auth_url = http://%OPENSTACK_HOST_IP%/identity
+auth_type = password
+

自行修改对应的用户名、密码、IP等信息

+
    +
  1. 同步数据库表格
  2. +
+
cyborg-dbsync --config-file /etc/cyborg/cyborg.conf upgrade
+
    +
  1. 启动Cyborg服务
  2. +
+
systemctl enable openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent
+systemctl start openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent
+

Aodh 安装

+
    +
  1. 创建数据库
  2. +
+
CREATE DATABASE aodh;
+
+GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'localhost' IDENTIFIED BY 'AODH_DBPASS';
+
+GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'%' IDENTIFIED BY 'AODH_DBPASS';
+
    +
  1. 创建对应Keystone资源对象
  2. +
+
openstack user create --domain default --password-prompt aodh
+
+openstack role add --project service --user aodh admin
+
+openstack service create --name aodh --description "Telemetry" alarming
+
+openstack endpoint create --region RegionOne alarming public http://controller:8042
+
+openstack endpoint create --region RegionOne alarming internal http://controller:8042
+
+openstack endpoint create --region RegionOne alarming admin http://controller:8042
+
    +
  1. 安装Aodh
  2. +
+
yum install openstack-aodh-api openstack-aodh-evaluator openstack-aodh-notifier openstack-aodh-listener openstack-aodh-expirer python3-aodhclient
+
    +
  1. 修改配置文件
  2. +
+
[database]
+connection = mysql+pymysql://aodh:AODH_DBPASS@controller/aodh
+
+[DEFAULT]
+transport_url = rabbit://openstack:RABBIT_PASS@controller
+auth_strategy = keystone
+
+[keystone_authtoken]
+www_authenticate_uri = http://controller:5000
+auth_url = http://controller:5000
+memcached_servers = controller:11211
+auth_type = password
+project_domain_id = default
+user_domain_id = default
+project_name = service
+username = aodh
+password = AODH_PASS
+
+[service_credentials]
+auth_type = password
+auth_url = http://controller:5000/v3
+project_domain_id = default
+user_domain_id = default
+project_name = service
+username = aodh
+password = AODH_PASS
+interface = internalURL
+region_name = RegionOne
+
    +
  1. 初始化数据库
  2. +
+
aodh-dbsync
+
    +
  1. 启动Aodh服务
  2. +
+
systemctl enable openstack-aodh-api.service openstack-aodh-evaluator.service openstack-aodh-notifier.service openstack-aodh-listener.service
+
+systemctl start openstack-aodh-api.service openstack-aodh-evaluator.service openstack-aodh-notifier.service openstack-aodh-listener.service
+

Gnocchi 安装

+
    +
  1. 创建数据库
  2. +
+
CREATE DATABASE gnocchi;
+
+GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'localhost' IDENTIFIED BY 'GNOCCHI_DBPASS';
+
+GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'%' IDENTIFIED BY 'GNOCCHI_DBPASS';
+
    +
  1. 创建对应Keystone资源对象
  2. +
+
openstack user create --domain default --password-prompt gnocchi
+
+openstack role add --project service --user gnocchi admin
+
+openstack service create --name gnocchi --description "Metric Service" metric
+
+openstack endpoint create --region RegionOne metric public http://controller:8041
+
+openstack endpoint create --region RegionOne metric internal http://controller:8041
+
+openstack endpoint create --region RegionOne metric admin http://controller:8041
+
    +
  1. 安装Gnocchi
  2. +
+
yum install openstack-gnocchi-api openstack-gnocchi-metricd python3-gnocchiclient
+
    +
  1. 修改配置文件/etc/gnocchi/gnocchi.conf
  2. +
+
[api]
+auth_mode = keystone
+port = 8041
+uwsgi_mode = http-socket
+
+[keystone_authtoken]
+auth_type = password
+auth_url = http://controller:5000/v3
+project_domain_name = Default
+user_domain_name = Default
+project_name = service
+username = gnocchi
+password = GNOCCHI_PASS
+interface = internalURL
+region_name = RegionOne
+
+[indexer]
+url = mysql+pymysql://gnocchi:GNOCCHI_DBPASS@controller/gnocchi
+
+[storage]
+# coordination_url is not required but specifying one will improve
+# performance with better workload division across workers.
+coordination_url = redis://controller:6379
+file_basepath = /var/lib/gnocchi
+driver = file
+
    +
  1. 初始化数据库
  2. +
+
gnocchi-upgrade
+
    +
  1. 启动Gnocchi服务
  2. +
+
systemctl enable openstack-gnocchi-api.service openstack-gnocchi-metricd.service
+
+systemctl start openstack-gnocchi-api.service openstack-gnocchi-metricd.service
+
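Optionally, python3-gnocchiclient (installed above) can be used for a quick check. A hedged sketch:

    source ~/.admin-openrc
    gnocchi status         # reports the metricd backlog; it should stay near 0 on an idle system
    gnocchi metric list    # empty until Ceilometer (next section) starts publishing measures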

Ceilometer 安装

+
    +
  1. 创建对应Keystone资源对象
  2. +
+
openstack user create --domain default --password-prompt ceilometer
+
+openstack role add --project service --user ceilometer admin
+
+openstack service create --name ceilometer --description "Telemetry" metering
+
    +
  1. 安装Ceilometer
  2. +
+
yum install openstack-ceilometer-notification openstack-ceilometer-central
+
    +
  1. 修改配置文件/etc/ceilometer/pipeline.yaml
  2. +
+
publishers:
+    # set address of Gnocchi
+    # + filter out Gnocchi-related activity meters (Swift driver)
+    # + set default archive policy
+    - gnocchi://?filter_project=service&archive_policy=low
+
    +
  1. 修改配置文件/etc/ceilometer/ceilometer.conf
  2. +
+
[DEFAULT]
+transport_url = rabbit://openstack:RABBIT_PASS@controller
+
+[service_credentials]
+auth_type = password
+auth_url = http://controller:5000/v3
+project_domain_id = default
+user_domain_id = default
+project_name = service
+username = ceilometer
+password = CEILOMETER_PASS
+interface = internalURL
+region_name = RegionOne
+
    +
  1. 初始化数据库
  2. +
+
ceilometer-upgrade
+
    +
  1. 启动Ceilometer服务
  2. +
+
systemctl enable openstack-ceilometer-notification.service openstack-ceilometer-central.service
+
+systemctl start openstack-ceilometer-notification.service openstack-ceilometer-central.service
+
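Ceilometer no longer exposes its own API, so the simplest check is to confirm that measures reach Gnocchi. A hedged example:

    source ~/.admin-openrc
    # Wait at least one polling interval, then resources and metrics discovered by
    # ceilometer-central should appear in Gnocchi:
    gnocchi resource list
    gnocchi metric list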

Heat 安装

+
    +
  1. 创建heat数据库,并授予heat数据库正确的访问权限,替换HEAT_DBPASS为合适的密码
  2. +
+
CREATE DATABASE heat;
+GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'localhost' IDENTIFIED BY 'HEAT_DBPASS';
+GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'%' IDENTIFIED BY 'HEAT_DBPASS';
+
    +
  1. 创建服务凭证,创建heat用户,并为其增加admin角色
  2. +
+
openstack user create --domain default --password-prompt heat
+openstack role add --project service --user heat admin
+
    +
  1. 创建heatheat-cfn服务及其对应的API端点
  2. +
+
openstack service create --name heat --description "Orchestration" orchestration
+openstack service create --name heat-cfn --description "Orchestration"  cloudformation
+openstack endpoint create --region RegionOne orchestration public http://controller:8004/v1/%\(tenant_id\)s
+openstack endpoint create --region RegionOne orchestration internal http://controller:8004/v1/%\(tenant_id\)s
+openstack endpoint create --region RegionOne orchestration admin http://controller:8004/v1/%\(tenant_id\)s
+openstack endpoint create --region RegionOne cloudformation public http://controller:8000/v1
+openstack endpoint create --region RegionOne cloudformation internal http://controller:8000/v1
+openstack endpoint create --region RegionOne cloudformation admin http://controller:8000/v1
+
    +
  1. 创建stack管理的额外信息,包括heatdomain及其对应domain的admin用户heat_domain_admin, +heat_stack_owner角色,heat_stack_user角色
  2. +
+
openstack user create --domain heat --password-prompt heat_domain_admin
+openstack role add --domain heat --user-domain heat --user heat_domain_admin admin
+openstack role create heat_stack_owner
+openstack role create heat_stack_user
+
    +
  1. 安装软件包
  2. +
+
yum install openstack-heat-api openstack-heat-api-cfn openstack-heat-engine
+
    +
  1. 修改配置文件/etc/heat/heat.conf
  2. +
+
[DEFAULT]
+transport_url = rabbit://openstack:RABBIT_PASS@controller
+heat_metadata_server_url = http://controller:8000
+heat_waitcondition_server_url = http://controller:8000/v1/waitcondition
+stack_domain_admin = heat_domain_admin
+stack_domain_admin_password = HEAT_DOMAIN_PASS
+stack_user_domain_name = heat
+
+[database]
+connection = mysql+pymysql://heat:HEAT_DBPASS@controller/heat
+
+[keystone_authtoken]
+www_authenticate_uri = http://controller:5000
+auth_url = http://controller:5000
+memcached_servers = controller:11211
+auth_type = password
+project_domain_name = default
+user_domain_name = default
+project_name = service
+username = heat
+password = HEAT_PASS
+
+[trustee]
+auth_type = password
+auth_url = http://controller:5000
+username = heat
+password = HEAT_PASS
+user_domain_name = default
+
+[clients_keystone]
+auth_uri = http://controller:5000
+
    +
  1. 初始化heat数据库表
  2. +
+
su -s /bin/sh -c "heat-manage db_sync" heat
+
    +
  1. 启动服务
  2. +
+
systemctl enable openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service
+systemctl start openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service
+
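Optionally, the Heat deployment can be smoke-tested with a minimal stack. The template below only uses OS::Heat::TestResource, so it needs no Nova or Neutron resources; this is a hedged example, not part of the original guide.

    source ~/.admin-openrc
    openstack orchestration service list       # heat-engine workers should be reported as "up"

    cat > /tmp/heat-smoke.yaml << 'EOF'
    heat_template_version: 2018-08-31
    description: Minimal smoke-test stack
    resources:
      hello:
        type: OS::Heat::TestResource
    EOF

    openstack stack create -t /tmp/heat-smoke.yaml smoke
    openstack stack list                       # the stack should reach CREATE_COMPLETE
    openstack stack delete --yes smoke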

新特性的安装

+

Neutron流量分散特性

+

流量分散特性是OpenStack SIG在openEuler 20.03中基于OpenStack +Train开发的Neutron新特性,该特性允许用户指定路由器所在的网络节点,同时还提供基于路由器外部网关的端口转发的功能。该特性支持Neutron的L3 HA和DVR,具体细节可以参考特性文档。本文档主要描述安装步骤。

+
    +
  1. +

    按照前面章节部署好一套OpenStack环境(非容器),然后先安装plugin。

    +
    dnf install -y openstack-neutron-distributed-traffic python3-neutron-lib-distributed-traffic
    +
  2. +
  3. +

    配置数据库

    +

    本特性对Neutron的数据表进行了扩充,因此需要同步数据库

    +
    su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
    +--config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron (CTL)
    +
  4. +
  5. +

    编辑配置文件。

    +
  6. +
+

vim /etc/neutron/neutron.conf

+
[DEFAULT]
+enable_set_route_for_single_port = True
+network_nodes = network-1,network-2,network-3
+router_scheduler_driver = neutron.scheduler.l3_agent_scheduler.PreferredL3AgentRoutersScheduler
+
+[network-1]
+compute_nodes = compute-1
+[network-2]
+compute_nodes = compute-2
+[network-3]
+compute_nodes = compute-3
+

其中network-1、network-2和network-3是网络节点的hostname,compute-1、compute-2和compute-3是计算节点的hostname。按照上面设置用户在创建多个路由器连接到同一子网时,位于不同计算节点的虚拟机的流量就按照配置文件找到对应的网络节点的路由器。

+

打开基于路由器外部网关的端口转发(可选)。基于外部网关的端口转发与基于浮动IP的端口转发不能同时使用。

+

vim /etc/neutron/neutron.conf

+
[DEFAULT]
+service_plugins = router,rg_port_forwarding
+

vim /etc/neutron/l3_agent.ini

+
[agent]
+extensions = rg_port_forwarding
+
    +
  1. 重启相关服务。
  2. +
+
systemctl restart neutron-server.service neutron-dhcp-agent.service neutron-l3-agent.service  (CTL)
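After the restart it is worth confirming that the L3 agents on the configured network nodes are alive and that new routers are scheduled onto the intended node. A hedged sketch using the standard client on the controller; the --agent-type and --router filters assume a reasonably recent python3-openstackclient.

    openstack network agent list --agent-type l3                                               (CTL)
    # After creating a router, check which network node is hosting it:
    openstack network agent list --router <router-id>                                          (CTL)
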
diff --git a/site/install/openEuler-21.09/OpenStack-wallaby/index.html b/site/install/openEuler-21.09/OpenStack-wallaby/index.html
new file mode 100644
index 0000000000000000000000000000000000000000..b2bf55b660e27da35005a0af557cbe9fe27879c2
--- /dev/null
+++ b/site/install/openEuler-21.09/OpenStack-wallaby/index.html
@@ -0,0 +1,2179 @@
openEuler-21.09_Wallaby - OpenStack SIG Doc

OpenStack-Wallaby 部署指南

+ +

OpenStack 简介

+

OpenStack 是一个社区,也是一个项目。它提供了一个部署云的操作平台或工具集,为组织提供可扩展的、灵活的云计算。

+

作为一个开源的云计算管理平台,OpenStack 由nova、cinder、neutron、glance、keystone、horizon等几个主要的组件组合起来完成具体工作。OpenStack 支持几乎所有类型的云环境,项目目标是提供实施简单、可大规模扩展、丰富、标准统一的云计算管理平台。OpenStack 通过各种互补的服务提供了基础设施即服务(IaaS)的解决方案,每个服务提供 API 进行集成。

+

openEuler 21.09 版本官方源已经支持 OpenStack-Wallaby 版本,用户可以配置好 yum 源后根据此文档进行 OpenStack 部署。

+

约定

+

OpenStack 支持多种形态部署,此文档支持ALL in One以及Distributed两种部署方式,按照如下方式约定:

+

ALL in One模式:

+
忽略所有可能的后缀
+

Distributed模式:

+
以 `(CTL)` 为后缀表示此条配置或者命令仅适用`控制节点`
+以 `(CPT)` 为后缀表示此条配置或者命令仅适用`计算节点`
+以 `(STG)` 为后缀表示此条配置或者命令仅适用`存储节点`
+除此之外表示此条配置或者命令同时适用`控制节点`和`计算节点`
+

注意

+

涉及到以上约定的服务如下:

+
    +
  • Cinder
  • +
  • Nova
  • +
  • Neutron
  • +
+

准备环境

+

环境配置

+
    +
  1. +

    配置 21.09 官方yum源,需要启用EPOL软件仓以支持OpenStack

    +
    cat << 'EOF' >> /etc/yum.repos.d/21.09-OpenStack_Wallaby.repo
    +[OS]
    +name=OS
    +baseurl=http://repo.openeuler.org/openEuler-21.09/OS/$basearch/
    +enabled=1
    +gpgcheck=1
    +gpgkey=http://repo.openeuler.org/openEuler-21.09/OS/$basearch/RPM-GPG-KEY-openEuler
    +
    +[everything]
    +name=everything
    +baseurl=http://repo.openeuler.org/openEuler-21.09/everything/$basearch/
    +enabled=1
    +gpgcheck=1
    +gpgkey=http://repo.openeuler.org/openEuler-21.09/everything/$basearch/RPM-GPG-KEY-openEuler
    +
    +[EPOL]
    +name=EPOL
    +baseurl=http://repo.openeuler.org/openEuler-21.09/EPOL/$basearch/
    +enabled=1
    +gpgcheck=1
    +gpgkey=http://repo.openeuler.org/openEuler-21.09/OS/$basearch/RPM-GPG-KEY-openEuler
    +EOF
    +
    +yum clean all && yum makecache
    +
  2. +
  3. +

    修改主机名以及映射

    +

    设置各个节点的主机名

    +
    hostnamectl set-hostname controller                                                            (CTL)
    +hostnamectl set-hostname compute                                                               (CPT)
    +

    假设controller节点的IP是10.0.0.11,compute节点的IP是10.0.0.12(如果存在的话),则于/etc/hosts新增如下:

    +
    10.0.0.11   controller
    +10.0.0.12   compute
    +
  4. +
+

安装 SQL DataBase

+
    +
  1. +

    执行如下命令,安装软件包。

    +
    yum install mariadb mariadb-server python3-PyMySQL
    +
  2. +
  3. +

    执行如下命令,创建并编辑 /etc/my.cnf.d/openstack.cnf 文件。

    +
    vim /etc/my.cnf.d/openstack.cnf
    +
    +[mysqld]
    +bind-address = 10.0.0.11
    +default-storage-engine = innodb
    +innodb_file_per_table = on
    +max_connections = 4096
    +collation-server = utf8_general_ci
    +character-set-server = utf8
    +

    注意

    +

    其中 bind-address 设置为控制节点的管理IP地址。

    +
  4. +
  5. +

    启动 DataBase 服务,并为其配置开机自启动:

    +
    systemctl enable mariadb.service
    +systemctl start mariadb.service
    +
  6. +
  7. +

    配置DataBase的默认密码(可选)

    +
    mysql_secure_installation
    +

    注意

    +

    根据提示进行即可

    +
  8. +
+

安装 RabbitMQ

+
    +
  1. +

    执行如下命令,安装软件包。

    +
    yum install rabbitmq-server
    +
  2. +
  3. +

    启动 RabbitMQ 服务,并为其配置开机自启动。

    +
    systemctl enable rabbitmq-server.service
    +systemctl start rabbitmq-server.service
    +
  4. +
  5. +

    添加 OpenStack用户。

    +
    rabbitmqctl add_user openstack RABBIT_PASS
    +

    注意

    +

    替换 RABBIT_PASS,为 OpenStack 用户设置密码

    +
  6. +
  7. +

    设置openstack用户权限,允许进行配置、写、读:

    +
    rabbitmqctl set_permissions openstack ".*" ".*" ".*"
    +
  8. +
+

安装 Memcached

+
    +
  1. +

    执行如下命令,安装依赖软件包。

    +
    yum install memcached python3-memcached
    +
  2. +
  3. +

    编辑 /etc/sysconfig/memcached 文件。

    +
    vim /etc/sysconfig/memcached
    +
    +OPTIONS="-l 127.0.0.1,::1,controller"
    +
  4. +
  5. +

    执行如下命令,启动 Memcached 服务,并为其配置开机启动。

    +
    systemctl enable memcached.service
    +systemctl start memcached.service
    +

    注意

    +

    服务启动后,可以通过命令memcached-tool controller stats确保启动正常,服务可用,其中可以将controller替换为控制节点的管理IP地址。

    +
  6. +
+

安装 OpenStack

+

Keystone 安装

+
    +
  1. +

    创建 keystone 数据库并授权。

    +
    mysql -u root -p
    +
    +MariaDB [(none)]> CREATE DATABASE keystone;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
    +IDENTIFIED BY 'KEYSTONE_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
    +IDENTIFIED BY 'KEYSTONE_DBPASS';
    +MariaDB [(none)]> exit
    +

    注意

    +

    替换 KEYSTONE_DBPASS,为 Keystone 数据库设置密码

    +
  2. +
  3. +

    安装软件包。

    +
    yum install openstack-keystone httpd mod_wsgi
    +
  4. +
  5. +

    配置keystone相关配置

    +
    vim /etc/keystone/keystone.conf
    +
    +[database]
    +connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone
    +
    +[token]
    +provider = fernet
    +

    解释

    +

    [database]部分,配置数据库入口

    +

    [token]部分,配置token provider

    +

    注意:

    +

    替换 KEYSTONE_DBPASS 为 Keystone 数据库的密码

    +
  6. +
  7. +

    同步数据库。

    +
    su -s /bin/sh -c "keystone-manage db_sync" keystone
    +
  8. +
  9. +

    初始化Fernet密钥仓库。

    +
    keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
    +keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
    +
  10. +
  11. +

    启动服务。

    +
    keystone-manage bootstrap --bootstrap-password ADMIN_PASS \
    +--bootstrap-admin-url http://controller:5000/v3/ \
    +--bootstrap-internal-url http://controller:5000/v3/ \
    +--bootstrap-public-url http://controller:5000/v3/ \
    +--bootstrap-region-id RegionOne
    +

    注意

    +

    替换 ADMIN_PASS,为 admin 用户设置密码

    +
  12. +
  13. +

    配置Apache HTTP server

    +
    vim /etc/httpd/conf/httpd.conf
    +
    +ServerName controller
    +
    ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
    +

    解释

    +

    配置 ServerName 项引用控制节点

    +

    注意 +如果 ServerName 项不存在则需要创建

    +
  14. +
  15. +

    启动Apache HTTP服务。

    +
    systemctl enable httpd.service
    +systemctl start httpd.service
    +
  16. +
  17. +

    创建环境变量配置。

    +
    cat << EOF >> ~/.admin-openrc
    +export OS_PROJECT_DOMAIN_NAME=Default
    +export OS_USER_DOMAIN_NAME=Default
    +export OS_PROJECT_NAME=admin
    +export OS_USERNAME=admin
    +export OS_PASSWORD=ADMIN_PASS
    +export OS_AUTH_URL=http://controller:5000/v3
    +export OS_IDENTITY_API_VERSION=3
    +export OS_IMAGE_API_VERSION=2
    +EOF
    +

    注意

    +

    替换 ADMIN_PASS 为 admin 用户的密码

    +
  18. +
  19. +

    依次创建domain, projects, users, roles,需要先安装好python3-openstackclient:

    +
    yum install python3-openstackclient
    +

    导入环境变量

    +
    source ~/.admin-openrc
    +

    创建project service,其中 domain default 在 keystone-manage bootstrap 时已创建

    +
    openstack domain create --description "An Example Domain" example
    +
    openstack project create --domain default --description "Service Project" service
    +

    创建(non-admin)project myproject,user myuser 和 role myrole,为 myprojectmyuser 添加角色myrole

    +
    openstack project create --domain default --description "Demo Project" myproject
    +openstack user create --domain default --password-prompt myuser
    +openstack role create myrole
    +openstack role add --project myproject --user myuser myrole
    +
  20. +
  21. +

    验证

    +

    取消临时环境变量OS_AUTH_URL和OS_PASSWORD:

    +
    source ~/.admin-openrc
    +unset OS_AUTH_URL OS_PASSWORD
    +

    为admin用户请求token:

    +
    openstack --os-auth-url http://controller:5000/v3 \
    +--os-project-domain-name Default --os-user-domain-name Default \
    +--os-project-name admin --os-username admin token issue
    +

    为myuser用户请求token:

    +
    openstack --os-auth-url http://controller:5000/v3 \
    +--os-project-domain-name Default --os-user-domain-name Default \
    +--os-project-name myproject --os-username myuser token issue
    +
  22. +
+

Glance 安装

+
    +
  1. +

    创建数据库、服务凭证和 API 端点

    +

    创建数据库:

    +
    mysql -u root -p
    +
    +MariaDB [(none)]> CREATE DATABASE glance;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
    +IDENTIFIED BY 'GLANCE_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
    +IDENTIFIED BY 'GLANCE_DBPASS';
    +MariaDB [(none)]> exit
    +

    注意:

    +

    替换 GLANCE_DBPASS,为 glance 数据库设置密码

    +

    创建服务凭证

    +
    source ~/.admin-openrc
    +
    +openstack user create --domain default --password-prompt glance
    +openstack role add --project service --user glance admin
    +openstack service create --name glance --description "OpenStack Image" image
    +

    创建镜像服务API端点:

    +
    openstack endpoint create --region RegionOne image public http://controller:9292
    +openstack endpoint create --region RegionOne image internal http://controller:9292
    +openstack endpoint create --region RegionOne image admin http://controller:9292
    +
  2. +
  3. +

    安装软件包

    +
    yum install openstack-glance
    +
  4. +
  5. +

    配置glance相关配置:

    +
    vim /etc/glance/glance-api.conf
    +
    +[database]
    +connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
    +
    +[keystone_authtoken]
    +www_authenticate_uri  = http://controller:5000
    +auth_url = http://controller:5000
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +project_name = service
    +username = glance
    +password = GLANCE_PASS
    +
    +[paste_deploy]
    +flavor = keystone
    +
    +[glance_store]
    +stores = file,http
    +default_store = file
    +filesystem_store_datadir = /var/lib/glance/images/
    +

    解释:

    +

    [database]部分,配置数据库入口

    +

    [keystone_authtoken] [paste_deploy]部分,配置身份认证服务入口

    +

    [glance_store]部分,配置本地文件系统存储和镜像文件的位置

    +

    注意

    +

    替换 GLANCE_DBPASS 为 glance 数据库的密码

    +

    替换 GLANCE_PASS 为 glance 用户的密码

    +
  6. +
  7. +

    同步数据库:

    +
    su -s /bin/sh -c "glance-manage db_sync" glance
    +
  8. +
  9. +

    启动服务:

    +
    systemctl enable openstack-glance-api.service
    +systemctl start openstack-glance-api.service
    +
  10. +
  11. +

    验证

    +

    下载镜像

    +
    source ~/.admin-openrc
    +
    +wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
    +

    注意

    +

    如果您使用的环境是鲲鹏架构,请下载aarch64版本的镜像;已对镜像cirros-0.5.2-aarch64-disk.img进行测试。

    +

    向Image服务上传镜像:

    +
    openstack image create --disk-format qcow2 --container-format bare \
    +                       --file cirros-0.4.0-x86_64-disk.img --public cirros
    +

    确认镜像上传并验证属性:

    +
    openstack image list
    +
  12. +
+

Placement安装

+
    +
  1. +

    创建数据库、服务凭证和 API 端点

    +

    创建数据库:

    +

    作为 root 用户访问数据库,创建 placement 数据库并授权。

    +
    mysql -u root -p
    +MariaDB [(none)]> CREATE DATABASE placement;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' \
    +IDENTIFIED BY 'PLACEMENT_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' \
    +IDENTIFIED BY 'PLACEMENT_DBPASS';
    +MariaDB [(none)]> exit
    +

    注意

    +

    替换 PLACEMENT_DBPASS 为 placement 数据库设置密码

    +
    source admin-openrc
    +

    执行如下命令,创建 placement 服务凭证、创建 placement 用户以及添加‘admin’角色到用户‘placement’。

    +

    创建Placement API服务

    +
    openstack user create --domain default --password-prompt placement
    +openstack role add --project service --user placement admin
    +openstack service create --name placement --description "Placement API" placement
    +

    创建placement服务API端点:

    +
    openstack endpoint create --region RegionOne placement public http://controller:8778
    +openstack endpoint create --region RegionOne placement internal http://controller:8778
    +openstack endpoint create --region RegionOne placement admin http://controller:8778
    +
  2. +
  3. +

    安装和配置

    +

    安装软件包:

    +
    yum install openstack-placement-api
    +

    配置placement:

    +

    编辑 /etc/placement/placement.conf 文件:

    +

    在[placement_database]部分,配置数据库入口

    +

    在[api] [keystone_authtoken]部分,配置身份认证服务入口

    +
    # vim /etc/placement/placement.conf
    +[placement_database]
    +# ...
    +connection = mysql+pymysql://placement:PLACEMENT_DBPASS@controller/placement
    +[api]
    +# ...
    +auth_strategy = keystone
    +[keystone_authtoken]
    +# ...
    +auth_url = http://controller:5000/v3
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +project_name = service
    +username = placement
    +password = PLACEMENT_PASS
    +

    其中,替换 PLACEMENT_DBPASS 为 placement 数据库的密码,替换 PLACEMENT_PASS 为 placement 用户的密码。

    +

    同步数据库:

    +
    su -s /bin/sh -c "placement-manage db sync" placement
    +

    启动httpd服务:

    +
    systemctl restart httpd
    +
  4. +
  5. +

    验证

    +

    执行如下命令,执行状态检查:

    +
    . admin-openrc
    +placement-status upgrade check
    +

    安装osc-placement,列出可用的资源类别及特性:

    +
    yum install python3-osc-placement
    +openstack --os-placement-api-version 1.2 resource class list --sort-column name
    +openstack --os-placement-api-version 1.6 trait list --sort-column name
    +
  6. +
+

Nova 安装

+
    +
  1. +

    创建数据库、服务凭证和 API 端点

    +

    创建数据库:

    +
    mysql -u root -p                                                                               (CTL)
    +
    +MariaDB [(none)]> CREATE DATABASE nova_api;
    +MariaDB [(none)]> CREATE DATABASE nova;
    +MariaDB [(none)]> CREATE DATABASE nova_cell0;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> exit
    +

    注意

    +

    替换NOVA_DBPASS,为nova数据库设置密码

    +
    source ~/.admin-openrc                                                                         (CTL)
    +

    创建nova服务凭证:

    +
    openstack user create --domain default --password-prompt nova                                  (CTL)
    +openstack role add --project service --user nova admin                                         (CTL)
    +openstack service create --name nova --description "OpenStack Compute" compute                 (CTL)
    +

    创建nova API端点:

    +
    openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1        (CTL)
    +openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1      (CTL)
    +openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1         (CTL)
    +
  2. +
  3. +

    安装软件包

    +
    yum install openstack-nova-api openstack-nova-conductor \                                      (CTL)
    +openstack-nova-novncproxy openstack-nova-scheduler 
    +
    +yum install openstack-nova-compute                                                             (CPT)
    +

    注意

    +

    如果为arm64结构,还需要执行以下命令

    +
    yum install edk2-aarch64                                                                       (CPT)
    +
  4. +
  5. +

    配置nova相关配置

    +
    vim /etc/nova/nova.conf
    +
    +[DEFAULT]
    +enabled_apis = osapi_compute,metadata
    +transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
    +my_ip = 10.0.0.1
    +use_neutron = true
    +firewall_driver = nova.virt.firewall.NoopFirewallDriver
    +compute_driver=libvirt.LibvirtDriver                                                           (CPT)
    +instances_path = /var/lib/nova/instances/                                                      (CPT)
    +lock_path = /var/lib/nova/tmp                                                                  (CPT)
    +
    +[api_database]
    +connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api                              (CTL)
    +
    +[database]
    +connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova                                  (CTL)
    +
    +[api]
    +auth_strategy = keystone
    +
    +[keystone_authtoken]
    +www_authenticate_uri = http://controller:5000/
    +auth_url = http://controller:5000/
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +project_name = service
    +username = nova
    +password = NOVA_PASS
    +
    +[vnc]
    +enabled = true
    +server_listen = $my_ip
    +server_proxyclient_address = $my_ip
    +novncproxy_base_url = http://controller:6080/vnc_auto.html                                     (CPT)
    +
    +[libvirt]
    +virt_type = qemu                                                                               (CPT)
    +cpu_mode = custom                                                                              (CPT)
    +cpu_model = cortex-a72                                                                         (CPT)
    +
    +[glance]
    +api_servers = http://controller:9292
    +
    +[oslo_concurrency]
    +lock_path = /var/lib/nova/tmp                                                                  (CTL)
    +
    +[placement]
    +region_name = RegionOne
    +project_domain_name = Default
    +project_name = service
    +auth_type = password
    +user_domain_name = Default
    +auth_url = http://controller:5000/v3
    +username = placement
    +password = PLACEMENT_PASS
    +
    +[neutron]
    +auth_url = http://controller:5000
    +auth_type = password
    +project_domain_name = default
    +user_domain_name = default
    +region_name = RegionOne
    +project_name = service
    +username = neutron
    +password = NEUTRON_PASS
    +service_metadata_proxy = true                                                                  (CTL)
    +metadata_proxy_shared_secret = METADATA_SECRET                                                 (CTL)
    +

    解释

    +

    [default]部分,启用计算和元数据的API,配置RabbitMQ消息队列入口,配置my_ip,启用网络服务neutron;

    +

    [api_database] [database]部分,配置数据库入口;

    +

    [api] [keystone_authtoken]部分,配置身份认证服务入口;

    +

    [vnc]部分,启用并配置远程控制台入口;

    +

    [glance]部分,配置镜像服务API的地址;

    +

    [oslo_concurrency]部分,配置lock path;

    +

    [placement]部分,配置placement服务的入口。

    +

    注意

    +

    替换 RABBIT_PASS 为 RabbitMQ 中 openstack 账户的密码;

    +

    配置 my_ip 为控制节点的管理IP地址;

    +

    替换 NOVA_DBPASS 为nova数据库的密码;

    +

    替换 NOVA_PASS 为nova用户的密码;

    +

    替换 PLACEMENT_PASS 为placement用户的密码;

    +

    替换 NEUTRON_PASS 为neutron用户的密码;

    +

    替换METADATA_SECRET为合适的元数据代理secret。

    +

    额外

    +

    确定是否支持虚拟机硬件加速(x86架构):

    +
    egrep -c '(vmx|svm)' /proc/cpuinfo                                                             (CPT)
    +

    如果返回值为0则不支持硬件加速,需要配置libvirt使用QEMU而不是KVM:

    +
    vim /etc/nova/nova.conf                                                                        (CPT)
    +
    +[libvirt]
    +virt_type = qemu
    +

    如果返回值为1或更大的值,则支持硬件加速,不需要进行额外的配置

    +

    注意

    +

    如果为arm64结构,还需要执行以下命令

    +
    vim /etc/libvirt/qemu.conf
    +
    +nvram = ["/usr/share/AAVMF/AAVMF_CODE.fd: \
    +         /usr/share/AAVMF/AAVMF_VARS.fd", \
    +         "/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw: \
    +         /usr/share/edk2/aarch64/vars-template-pflash.raw"]
    +
    +vim /etc/qemu/firmware/edk2-aarch64.json
    +
    +{
    +    "description": "UEFI firmware for ARM64 virtual machines",
    +    "interface-types": [
    +        "uefi"
    +    ],
    +    "mapping": {
    +        "device": "flash",
    +        "executable": {
    +            "filename": "/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw",
    +            "format": "raw"
    +        },
    +        "nvram-template": {
    +            "filename": "/usr/share/edk2/aarch64/vars-template-pflash.raw",
    +            "format": "raw"
    +        }
    +    },
    +    "targets": [
    +        {
    +            "architecture": "aarch64",
    +            "machines": [
    +                "virt-*"
    +            ]
    +        }
    +    ],
    +    "features": [
    +
    +    ],
    +    "tags": [
    +
    +    ]
    +}
    +
    +(CPT)
    +
  6. +
  7. +

    同步数据库

    +

    同步nova-api数据库:

    +
    su -s /bin/sh -c "nova-manage api_db sync" nova                                                (CTL)
    +

    注册cell0数据库:

    +
    su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova                                          (CTL)
    +

    创建cell1 cell:

    +
    su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova                 (CTL)
    +

    同步nova数据库:

    +
    su -s /bin/sh -c "nova-manage db sync" nova                                                    (CTL)
    +

    验证cell0和cell1注册正确:

    +
    su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova                                         (CTL)
    +

    添加计算节点到openstack集群

    +
    su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova                           (CPT)
    +
  8. +
  9. +

    启动服务

    +
    systemctl enable \                                                                             (CTL)
    +openstack-nova-api.service \
    +openstack-nova-scheduler.service \
    +openstack-nova-conductor.service \
    +openstack-nova-novncproxy.service
    +
    +systemctl start \                                                                              (CTL)
    +openstack-nova-api.service \
    +openstack-nova-scheduler.service \
    +openstack-nova-conductor.service \
    +openstack-nova-novncproxy.service
    +
    systemctl enable libvirtd.service openstack-nova-compute.service                               (CPT)
    +systemctl start libvirtd.service openstack-nova-compute.service                                (CPT)
    +
  10. +
  11. +

    验证

    +
    source ~/.admin-openrc                                                                         (CTL)
    +

    列出服务组件,验证每个流程都成功启动和注册:

    +
    openstack compute service list                                                                 (CTL)
    +

    列出身份服务中的API端点,验证与身份服务的连接:

    +
    openstack catalog list                                                                         (CTL)
    +

    列出镜像服务中的镜像,验证与镜像服务的连接:

    +
    openstack image list                                                                           (CTL)
    +

    检查cells是否运作成功,以及其他必要条件是否已具备。

    +
    nova-status upgrade check                                                                      (CTL)
    +
  12. +
+

Neutron 安装

+
    +
  1. +

    创建数据库、服务凭证和 API 端点

    +

    创建数据库:

    +
    mysql -u root -p                                                                               (CTL)
    +
    +MariaDB [(none)]> CREATE DATABASE neutron;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
    +IDENTIFIED BY 'NEUTRON_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
    +IDENTIFIED BY 'NEUTRON_DBPASS';
    +MariaDB [(none)]> exit
    +

    注意

    +

    替换 NEUTRON_DBPASS 为 neutron 数据库设置密码。

    +
    source ~/.admin-openrc                                                                         (CTL)
    +

    创建neutron服务凭证

    +
    openstack user create --domain default --password-prompt neutron                               (CTL)
    +openstack role add --project service --user neutron admin                                      (CTL)
    +openstack service create --name neutron --description "OpenStack Networking" network           (CTL)
    +

    创建Neutron服务API端点:

    +
    openstack endpoint create --region RegionOne network public http://controller:9696             (CTL)
    +openstack endpoint create --region RegionOne network internal http://controller:9696           (CTL)
    +openstack endpoint create --region RegionOne network admin http://controller:9696              (CTL)
    +
  2. +
  3. +

    安装软件包:

    +
    yum install openstack-neutron openstack-neutron-linuxbridge ebtables ipset \                   (CTL)
    +openstack-neutron-ml2
    +
    yum install openstack-neutron-linuxbridge ebtables ipset                                       (CPT)
    +
  4. +
  5. +

    配置neutron相关配置:

    +

    配置主体配置

    +
    vim /etc/neutron/neutron.conf
    +
    +[database]
    +connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron                         (CTL)
    +
    +[DEFAULT]
    +core_plugin = ml2                                                                              (CTL)
    +service_plugins = router                                                                       (CTL)
    +allow_overlapping_ips = true                                                                   (CTL)
    +transport_url = rabbit://openstack:RABBIT_PASS@controller
    +auth_strategy = keystone
    +notify_nova_on_port_status_changes = true                                                      (CTL)
    +notify_nova_on_port_data_changes = true                                                        (CTL)
    +api_workers = 3                                                                                (CTL)
    +
    +[keystone_authtoken]
    +www_authenticate_uri = http://controller:5000
    +auth_url = http://controller:5000
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +project_name = service
    +username = neutron
    +password = NEUTRON_PASS
    +
    +[nova]
    +auth_url = http://controller:5000                                                              (CTL)
    +auth_type = password                                                                           (CTL)
    +project_domain_name = Default                                                                  (CTL)
    +user_domain_name = Default                                                                     (CTL)
    +region_name = RegionOne                                                                        (CTL)
    +project_name = service                                                                         (CTL)
    +username = nova                                                                                (CTL)
    +password = NOVA_PASS                                                                           (CTL)
    +
    +[oslo_concurrency]
    +lock_path = /var/lib/neutron/tmp
    +

    解释

    +

    [database]部分,配置数据库入口;

    +

    [default]部分,启用ml2插件和router插件,允许ip地址重叠,配置RabbitMQ消息队列入口;

    +

    [default] [keystone]部分,配置身份认证服务入口;

    +

    [default] [nova]部分,配置网络来通知计算网络拓扑的变化;

    +

    [oslo_concurrency]部分,配置lock path。

    +

    注意

    +

    替换NEUTRON_DBPASS为 neutron 数据库的密码;

    +

    替换RABBIT_PASS为 RabbitMQ中openstack 账户的密码;

    +

    替换NEUTRON_PASS为 neutron 用户的密码;

    +

    替换NOVA_PASS为 nova 用户的密码。

    +

    配置ML2插件:

    +
    vim /etc/neutron/plugins/ml2/ml2_conf.ini
    +
    +[ml2]
    +type_drivers = flat,vlan,vxlan
    +tenant_network_types = vxlan
    +mechanism_drivers = linuxbridge,l2population
    +extension_drivers = port_security
    +
    +[ml2_type_flat]
    +flat_networks = provider
    +
    +[ml2_type_vxlan]
    +vni_ranges = 1:1000
    +
    +[securitygroup]
    +enable_ipset = true
    +

    创建/etc/neutron/plugin.ini的符号链接

    +
    ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
    +

    注意

    +

    [ml2]部分,启用 flat、vlan、vxlan 网络,启用 linuxbridge 及 l2population 机制,启用端口安全扩展驱动;

    +

    [ml2_type_flat]部分,配置 flat 网络为 provider 虚拟网络;

    +

    [ml2_type_vxlan]部分,配置 VXLAN 网络标识符范围;

    +

    [securitygroup]部分,配置允许 ipset。

    +

    补充

    +

    l2 的具体配置可以根据用户需求自行修改,本文使用的是provider network + linuxbridge

    +

    配置 Linux bridge 代理:

    +
    vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
    +
    +[linux_bridge]
    +physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME
    +
    +[vxlan]
    +enable_vxlan = true
    +local_ip = OVERLAY_INTERFACE_IP_ADDRESS
    +l2_population = true
    +
    +[securitygroup]
    +enable_security_group = true
    +firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
    +

    解释

    +

    [linux_bridge]部分,映射 provider 虚拟网络到物理网络接口;

    +

    [vxlan]部分,启用 vxlan 覆盖网络,配置处理覆盖网络的物理网络接口 IP 地址,启用 layer-2 population;

    +

    [securitygroup]部分,允许安全组,配置 linux bridge iptables 防火墙驱动。

    +

    注意

    +

    替换PROVIDER_INTERFACE_NAME为物理网络接口;

    +

    替换OVERLAY_INTERFACE_IP_ADDRESS为控制节点的管理IP地址。

    +

    配置Layer-3代理:

    +
    vim /etc/neutron/l3_agent.ini                                                                  (CTL)
    +
    +[DEFAULT]
    +interface_driver = linuxbridge
    +

    解释

    +

    在[default]部分,配置接口驱动为linuxbridge

    +

    配置DHCP代理:

    +
    vim /etc/neutron/dhcp_agent.ini                                                                (CTL)
    +
    +[DEFAULT]
    +interface_driver = linuxbridge
    +dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
    +enable_isolated_metadata = true
    +

    解释

    +

    [default]部分,配置linuxbridge接口驱动、Dnsmasq DHCP驱动,启用隔离的元数据。

    +

    配置metadata代理:

    +
    vim /etc/neutron/metadata_agent.ini                                                            (CTL)
    +
    +[DEFAULT]
    +nova_metadata_host = controller
    +metadata_proxy_shared_secret = METADATA_SECRET
    +

    解释

    +

    [default]部分,配置元数据主机和shared secret。

    +

    注意

    +

    替换METADATA_SECRET为合适的元数据代理secret。

    +
  6. +
  7. +

    配置nova相关配置

    +
    vim /etc/nova/nova.conf
    +
    +[neutron]
    +auth_url = http://controller:5000
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +region_name = RegionOne
    +project_name = service
    +username = neutron
    +password = NEUTRON_PASS
    +service_metadata_proxy = true                                                                  (CTL)
    +metadata_proxy_shared_secret = METADATA_SECRET                                                 (CTL)
    +

    解释

    +

    [neutron]部分,配置访问参数,启用元数据代理,配置secret。

    +

    注意

    +

    替换NEUTRON_PASS为 neutron 用户的密码;

    +

    替换METADATA_SECRET为合适的元数据代理secret。

    +
  8. +
  9. +

    同步数据库:

    +
    su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
    +--config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
    +
  10. +
  11. +

    重启计算API服务:

    +
    systemctl restart openstack-nova-api.service
    +
  12. +
  13. +

    启动网络服务

    +
    systemctl enable neutron-server.service neutron-linuxbridge-agent.service \                    (CTL)
    +neutron-dhcp-agent.service neutron-metadata-agent.service \
    +neutron-l3-agent.service
    +systemctl restart openstack-nova-api.service neutron-server.service \                         (CTL)
    +neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
    +neutron-metadata-agent.service neutron-l3-agent.service
    +
    +systemctl enable neutron-linuxbridge-agent.service                                             (CPT)
    +systemctl restart neutron-linuxbridge-agent.service openstack-nova-compute.service             (CPT)
    +
  14. +
  15. +

    验证

    +

    验证 neutron 代理启动成功:

    +
    openstack network agent list
    +
  16. +
+

Cinder 安装

+
    +
  1. +

    创建数据库、服务凭证和 API 端点

    +

    创建数据库:

    +
    mysql -u root -p
    +
    +MariaDB [(none)]> CREATE DATABASE cinder;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \
    +IDENTIFIED BY 'CINDER_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \
    +IDENTIFIED BY 'CINDER_DBPASS';
    +MariaDB [(none)]> exit
    +

    注意

    +

    替换 CINDER_DBPASS 为cinder数据库设置密码。

    +
    source ~/.admin-openrc
    +

    创建cinder服务凭证:

    +
    openstack user create --domain default --password-prompt cinder
    +openstack role add --project service --user cinder admin
    +openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
    +openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
    +

    创建块存储服务API端点:

    +
    openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(project_id\)s
    +openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(project_id\)s
    +openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(project_id\)s
    +openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s
    +openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s
    +openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s
    +
  2. +
  3. +

    安装软件包:

    +
    yum install openstack-cinder-api openstack-cinder-scheduler                                    (CTL)
    +
    yum install lvm2 device-mapper-persistent-data scsi-target-utils rpcbind nfs-utils \           (STG)
    +            openstack-cinder-volume openstack-cinder-backup
    +
  4. +
  5. +

    准备存储设备,以下仅为示例:

    +
    pvcreate /dev/vdb
    +vgcreate cinder-volumes /dev/vdb
    +
    +vim /etc/lvm/lvm.conf
    +
    +
    +devices {
    +...
    +filter = [ "a/vdb/", "r/.*/"]
    +

    解释

    +

    在devices部分,添加过滤以接受/dev/vdb设备拒绝其他设备。

    +
  6. +
  7. +

    准备NFS

    +
    mkdir -p /root/cinder/backup
    +
    +cat << EOF >> /etc/exports
    +/root/cinder/backup 192.168.1.0/24(rw,sync,no_root_squash,no_all_squash)
    +EOF
    +
    +
  8. +
  9. +

    配置cinder相关配置:

    +
    vim /etc/cinder/cinder.conf
    +
    +[DEFAULT]
    +transport_url = rabbit://openstack:RABBIT_PASS@controller
    +auth_strategy = keystone
    +my_ip = 10.0.0.11
    +enabled_backends = lvm                                                                         (STG)
    +backup_driver=cinder.backup.drivers.nfs.NFSBackupDriver                                        (STG)
    +backup_share=HOST:PATH                                                                         (STG)
    +
    +[database]
    +connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder
    +
    +[keystone_authtoken]
    +www_authenticate_uri = http://controller:5000
    +auth_url = http://controller:5000
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +project_name = service
    +username = cinder
    +password = CINDER_PASS
    +
    +[oslo_concurrency]
    +lock_path = /var/lib/cinder/tmp
    +
    +[lvm]
    +volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver                                      (STG)
    +volume_group = cinder-volumes                                                                  (STG)
    +iscsi_protocol = iscsi                                                                         (STG)
    +iscsi_helper = tgtadm                                                                          (STG)
    +

    解释

    +

    [database]部分,配置数据库入口;

    +

    [DEFAULT]部分,配置RabbitMQ消息队列入口,配置my_ip;

    +

    [DEFAULT] [keystone_authtoken]部分,配置身份认证服务入口;

    +

    [oslo_concurrency]部分,配置lock path。

    +

    注意

    +

    替换CINDER_DBPASS为 cinder 数据库的密码;

    +

    替换RABBIT_PASS为 RabbitMQ 中 openstack 账户的密码;

    +

    配置my_ip为控制节点的管理 IP 地址;

    +

    替换CINDER_PASS为 cinder 用户的密码;

    +

    替换HOST:PATH为 NFS的HOSTIP和共享路径;

    +
  10. +
  11. +

    同步数据库:

    +
    su -s /bin/sh -c "cinder-manage db sync" cinder                                                (CTL)
    +
  12. +
  13. +

    配置nova:

    +
    vim /etc/nova/nova.conf                                                                        (CTL)
    +
    +[cinder]
    +os_region_name = RegionOne
    +
  14. +
  15. +

    重启计算API服务

    +
    systemctl restart openstack-nova-api.service
    +
  16. +
  17. +

    启动cinder服务

    +
    systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service               (CTL)
    +systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service                (CTL)
    +
    systemctl enable rpcbind.service nfs-server.service tgtd.service iscsid.service \              (STG)
    +                 openstack-cinder-volume.service \
    +                 openstack-cinder-backup.service
    +systemctl start rpcbind.service nfs-server.service tgtd.service iscsid.service \               (STG)
    +                openstack-cinder-volume.service \
    +                openstack-cinder-backup.service
    +

    注意

    +

    当cinder使用tgtadm的方式挂卷的时候,要修改/etc/tgt/tgtd.conf,内容如下,保证tgtd可以发现cinder-volume的iscsi target。

    +
    include /var/lib/cinder/volumes/*
    +
  18. +
  19. +

    验证

    +
    source ~/.admin-openrc
    +openstack volume service list
    +
  20. +
+

horizon 安装

+
    +
  1. +

    安装软件包

    +
    yum install openstack-dashboard
    +
  2. +
  3. +

    修改文件

    +

    修改变量

    +
    vim /etc/openstack-dashboard/local_settings
    +
    +OPENSTACK_HOST = "controller"
    +ALLOWED_HOSTS = ['*', ]
    +
    +SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
    +
    +CACHES = {
    +'default': {
    +     'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
    +     'LOCATION': 'controller:11211',
    +    }
    +}
    +
    +OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
    +OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
    +OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
    +OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
    +
    +OPENSTACK_API_VERSIONS = {
    +    "identity": 3,
    +    "image": 2,
    +    "volume": 3,
    +}
    +
  4. +
  5. +

    重启 httpd 服务

    +
    systemctl restart httpd.service memcached.service
    +
  6. +
  7. +

    Verification: open a browser at http://HOSTIP/dashboard/ and log in to horizon; a quick command-line check is also sketched after this list.

    +

    注意

    +

    替换HOSTIP为控制节点管理平面IP地址

    +
  8. +
+
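As mentioned in the verification step above, horizon can also be checked quickly from the command line before opening a browser (hedged example; HOSTIP as in the note above):

    curl -I http://HOSTIP/dashboard/    # HTTP 200, or a redirect to the login page, means the dashboard is being served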

Tempest 安装

+

Tempest是OpenStack的集成测试服务,如果用户需要全面自动化测试已安装的OpenStack环境的功能,则推荐使用该组件。否则,可以不用安装。

+
    +
  1. +

    安装Tempest

    +
    yum install openstack-tempest
    +
  2. +
  3. +

    初始化目录

    +
    tempest init mytest
    +
  4. +
  5. +

    修改配置文件。

    +
    cd mytest
    +vi etc/tempest.conf
    +

    tempest.conf must be filled in with information about the current OpenStack environment; refer to the official sample for the full option set, and see the minimal sketch after this list.

    +
  6. +
  7. +

    执行测试

    +
    tempest run
    +
  8. +
  9. +

    安装tempest扩展(可选) + OpenStack各个服务本身也提供了一些tempest测试包,用户可以安装这些包来丰富tempest的测试内容。在Wallaby中,我们提供了Cinder、Glance、Keystone、Ironic、Trove的扩展测试,用户可以执行如下命令进行安装使用: +

    yum install python3-cinder-tempest-plugin python3-glance-tempest-plugin python3-ironic-tempest-plugin python3-keystone-tempest-plugin python3-trove-tempest-plugin

    +
  10. +
+
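A minimal tempest.conf sketch for the environment described in this guide, referenced from the configuration step above. The option names are standard Tempest options, but the values are assumptions and the UUIDs must come from your own deployment:

    # mytest/etc/tempest.conf -- minimal sketch, not a complete configuration
    [auth]
    admin_username = admin
    admin_password = ADMIN_PASS
    admin_project_name = admin
    admin_domain_name = Default

    [identity]
    uri_v3 = http://controller:5000/v3

    [compute]
    image_ref = <uuid from `openstack image list`>
    flavor_ref = <id from `openstack flavor list`>

    [network]
    public_network_id = <uuid of the external network, if any>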

Ironic 安装

+

Ironic是OpenStack的裸金属服务,如果用户需要进行裸机部署则推荐使用该组件。否则,可以不用安装。

+
    +
  1. 设置数据库
  2. +
+

裸金属服务在数据库中存储信息,创建一个ironic用户可以访问的ironic数据库,替换IRONIC_DBPASSWORD为合适的密码

+
mysql -u root -p
+
+MariaDB [(none)]> CREATE DATABASE ironic CHARACTER SET utf8;
+MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'localhost' \
+IDENTIFIED BY 'IRONIC_DBPASSWORD';
+MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'%' \
+IDENTIFIED BY 'IRONIC_DBPASSWORD';
+
    +
  1. 创建服务用户认证
  2. +
+

1、创建Bare Metal服务用户

+
openstack user create --password IRONIC_PASSWORD \
+                      --email ironic@example.com ironic
+openstack role add --project service --user ironic admin
+openstack service create --name ironic \
+                         --description "Ironic baremetal provisioning service" baremetal
+
+openstack service create --name ironic-inspector --description     "Ironic inspector baremetal provisioning service" baremetal-introspection
+openstack user create --password IRONIC_INSPECTOR_PASSWORD --email ironic_inspector@example.com ironic_inspector
+openstack role add --project service --user ironic-inspector admin
+

2、创建Bare Metal服务访问入口

+
openstack endpoint create --region RegionOne baremetal admin http://$IRONIC_NODE:6385
+openstack endpoint create --region RegionOne baremetal public http://$IRONIC_NODE:6385
+openstack endpoint create --region RegionOne baremetal internal http://$IRONIC_NODE:6385
+openstack endpoint create --region RegionOne baremetal-introspection internal http://172.20.19.13:5050/v1
+openstack endpoint create --region RegionOne baremetal-introspection public http://172.20.19.13:5050/v1
+openstack endpoint create --region RegionOne baremetal-introspection admin http://172.20.19.13:5050/v1
+
    +
  1. 配置ironic-api服务
  2. +
+

配置文件路径/etc/ironic/ironic.conf

+

1、通过connection选项配置数据库的位置,如下所示,替换IRONIC_DBPASSWORDironic用户的密码,替换DB_IP为DB服务器所在的IP地址:

+
[database]
+
+# The SQLAlchemy connection string used to connect to the
+# database (string value)
+
+connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic
+

2、通过以下选项配置ironic-api服务使用RabbitMQ消息代理,替换RPC_*为RabbitMQ的详细地址和凭证

+
[DEFAULT]
+
+# A URL representing the messaging driver to use and its full
+# configuration. (string value)
+
+transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
+

用户也可自行使用json-rpc方式替换rabbitmq

+

3、配置ironic-api服务使用身份认证服务的凭证,替换PUBLIC_IDENTITY_IP为身份认证服务器的公共IP,替换PRIVATE_IDENTITY_IP为身份认证服务器的私有IP,替换IRONIC_PASSWORD为身份认证服务中ironic用户的密码:

+
[DEFAULT]
+
+# Authentication strategy used by ironic-api: one of
+# "keystone" or "noauth". "noauth" should not be used in a
+# production environment because all authentication will be
+# disabled. (string value)
+
+auth_strategy=keystone
+host = controller
+memcache_servers = controller:11211
+enabled_network_interfaces = flat,noop,neutron
+default_network_interface = noop
+transport_url = rabbit://openstack:RABBITPASSWD@controller:5672/
+enabled_hardware_types = ipmi
+enabled_boot_interfaces = pxe
+enabled_deploy_interfaces = direct
+default_deploy_interface = direct
+enabled_inspect_interfaces = inspector
+enabled_management_interfaces = ipmitool
+enabled_power_interfaces = ipmitool
+enabled_rescue_interfaces = no-rescue,agent
+isolinux_bin = /usr/share/syslinux/isolinux.bin
+logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s
+
+[keystone_authtoken]
+# Authentication type to load (string value)
+auth_type=password
+# Complete public Identity API endpoint (string value)
+www_authenticate_uri=http://PUBLIC_IDENTITY_IP:5000
+# Complete admin Identity API endpoint. (string value)
+auth_url=http://PRIVATE_IDENTITY_IP:5000
+# Service username. (string value)
+username=ironic
+# Service account password. (string value)
+password=IRONIC_PASSWORD
+# Service tenant name. (string value)
+project_name=service
+# Domain name containing project (string value)
+project_domain_name=Default
+# User's domain name (string value)
+user_domain_name=Default
+
+[agent]
+deploy_logs_collect = always
+deploy_logs_local_path = /var/log/ironic/deploy
+deploy_logs_storage_backend = local
+image_download_source = http
+stream_raw_images = false
+force_raw_images = false
+verify_ca = False
+
+[oslo_concurrency]
+
+[oslo_messaging_notifications]
+transport_url = rabbit://openstack:123456@172.20.19.25:5672/
+topics = notifications
+driver = messagingv2
+
+[oslo_messaging_rabbit]
+amqp_durable_queues = True
+rabbit_ha_queues = True
+
+[pxe]
+ipxe_enabled = false
+pxe_append_params = nofb nomodeset vga=normal coreos.autologin ipa-insecure=1
+image_cache_size = 204800
+tftp_root=/var/lib/tftpboot/cephfs/
+tftp_master_path=/var/lib/tftpboot/cephfs/master_images
+
+[dhcp]
+dhcp_provider = none
+

4、创建裸金属服务数据库表

+
ironic-dbsync --config-file /etc/ironic/ironic.conf create_schema
+

5、重启ironic-api服务

+
sudo systemctl restart openstack-ironic-api
+
    +
  1. 配置ironic-conductor服务
  2. +
+

1、替换HOST_IP为conductor host的IP

+
[DEFAULT]
+
+# IP address of this host. If unset, will determine the IP
+# programmatically. If unable to do so, will use "127.0.0.1".
+# (string value)
+
+my_ip=HOST_IP
+

2、配置数据库的位置,ironic-conductor应该使用和ironic-api相同的配置。替换IRONIC_DBPASSWORDironic用户的密码,替换DB_IP为DB服务器所在的IP地址:

+
[database]
+
+# The SQLAlchemy connection string to use to connect to the
+# database. (string value)
+
+connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic
+

3、通过以下选项配置ironic-api服务使用RabbitMQ消息代理,ironic-conductor应该使用和ironic-api相同的配置,替换RPC_*为RabbitMQ的详细地址和凭证

+
[DEFAULT]
+
+# A URL representing the messaging driver to use and its full
+# configuration. (string value)
+
+transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
+

用户也可自行使用json-rpc方式替换rabbitmq

+

4、配置凭证访问其他OpenStack服务

+

为了与其他OpenStack服务进行通信,裸金属服务在请求其他服务时需要使用服务用户与OpenStack Identity服务进行认证。这些用户的凭据必须在与相应服务相关的每个配置文件中进行配置。

+
[neutron] - 访问OpenStack网络服务
+[glance] - 访问OpenStack镜像服务
+[swift] - 访问OpenStack对象存储服务
+[cinder] - 访问OpenStack块存储服务
+[inspector] - 访问OpenStack裸金属introspection服务
+[service_catalog] - 一个特殊项用于保存裸金属服务使用的凭证,该凭证用于发现注册在OpenStack身份认证服务目录中的自己的API URL端点
+

简单起见,可以对所有服务使用同一个服务用户。为了向后兼容,该用户应该和ironic-api服务的[keystone_authtoken]所配置的为同一个用户。但这不是必须的,也可以为每个服务创建并配置不同的服务用户。

+

在下面的示例中,用户访问OpenStack网络服务的身份验证信息配置为:

+
网络服务部署在名为RegionOne的身份认证服务域中,仅在服务目录中注册公共端点接口
+
+请求时使用特定的CA SSL证书进行HTTPS连接
+
+与ironic-api服务配置相同的服务用户
+
+动态密码认证插件基于其他选项发现合适的身份认证服务API版本
+
[neutron]
+
+# Authentication type to load (string value)
+auth_type = password
+# Authentication URL (string value)
+auth_url=https://IDENTITY_IP:5000/
+# Username (string value)
+username=ironic
+# User's password (string value)
+password=IRONIC_PASSWORD
+# Project name to scope to (string value)
+project_name=service
+# Domain ID containing project (string value)
+project_domain_id=default
+# User's domain id (string value)
+user_domain_id=default
+# PEM encoded Certificate Authority to use when verifying
+# HTTPs connections. (string value)
+cafile=/opt/stack/data/ca-bundle.pem
+# The default region_name for endpoint URL discovery. (string
+# value)
+region_name = RegionOne
+# List of interfaces, in order of preference, for endpoint
+# URL. (list value)
+valid_interfaces=public
+

默认情况下,为了与其他服务进行通信,裸金属服务会尝试通过身份认证服务的服务目录发现该服务合适的端点。如果希望对一个特定服务使用一个不同的端点,则在裸金属服务的配置文件中通过endpoint_override选项进行指定:

+
[neutron] ... endpoint_override = <NEUTRON_API_ADDRESS>
+

5、配置允许的驱动程序和硬件类型

+

通过设置enabled_hardware_types设置ironic-conductor服务允许使用的硬件类型:

+
[DEFAULT] enabled_hardware_types = ipmi
+

配置硬件接口:

+
enabled_boot_interfaces = pxe enabled_deploy_interfaces = direct,iscsi enabled_inspect_interfaces = inspector enabled_management_interfaces = ipmitool enabled_power_interfaces = ipmitool
+

配置接口默认值:

+
[DEFAULT] default_deploy_interface = direct default_network_interface = neutron
+

如果启用了任何使用Direct deploy的驱动,必须安装和配置镜像服务的Swift后端。Ceph对象网关(RADOS网关)也支持作为镜像服务的后端。

+

6、重启ironic-conductor服务

+
sudo systemctl restart openstack-ironic-conductor
+
    +
  1. 配置ironic-inspector服务
  2. +
+

配置文件路径/etc/ironic-inspector/inspector.conf

+

1、创建数据库

+
# mysql -u root -p
+
+MariaDB [(none)]> CREATE DATABASE ironic_inspector CHARACTER SET utf8;
+
+MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic_inspector.* TO 'ironic_inspector'@'localhost' \     IDENTIFIED BY 'IRONIC_INSPECTOR_DBPASSWORD';
+MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic_inspector.* TO 'ironic_inspector'@'%' \
+IDENTIFIED BY 'IRONIC_INSPECTOR_DBPASSWORD';
+

2、通过connection选项配置数据库的位置,如下所示,替换IRONIC_INSPECTOR_DBPASSWORDironic_inspector用户的密码,替换DB_IP为DB服务器所在的IP地址:

+
[database]
+backend = sqlalchemy
+connection = mysql+pymysql://ironic_inspector:IRONIC_INSPECTOR_DBPASSWORD@DB_IP/ironic_inspector
+min_pool_size = 100
+max_pool_size = 500
+pool_timeout = 30
+max_retries = 5
+max_overflow = 200
+db_retry_interval = 2
+db_inc_retry_interval = True
+db_max_retry_interval = 2
+db_max_retries = 5
+

3、配置消息度列通信地址

+
[DEFAULT] 
+transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
+
+

4、设置keystone认证

+
[DEFAULT]
+
+auth_strategy = keystone
+timeout = 900
+rootwrap_config = /etc/ironic-inspector/rootwrap.conf
+logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s
+log_dir = /var/log/ironic-inspector
+state_path = /var/lib/ironic-inspector
+use_stderr = False
+
+[ironic]
+api_endpoint = http://IRONIC_API_HOST_ADDRESS:6385
+auth_type = password
+auth_url = http://PUBLIC_IDENTITY_IP:5000
+auth_strategy = keystone
+ironic_url = http://IRONIC_API_HOST_ADDRESS:6385
+os_region = RegionOne
+project_name = service
+project_domain_name = Default
+user_domain_name = Default
+username = IRONIC_SERVICE_USER_NAME
+password = IRONIC_SERVICE_USER_PASSWORD
+
+[keystone_authtoken]
+auth_type = password
+auth_url = http://control:5000
+www_authenticate_uri = http://control:5000
+project_domain_name = default
+user_domain_name = default
+project_name = service
+username = ironic_inspector
+password = IRONICPASSWD
+region_name = RegionOne
+memcache_servers = control:11211
+token_cache_time = 300
+
+[processing]
+add_ports = active
+processing_hooks = $default_processing_hooks,local_link_connection,lldp_basic
+ramdisk_logs_dir = /var/log/ironic-inspector/ramdisk
+always_store_ramdisk_logs = true
+store_data =none
+power_off = false
+
+[pxe_filter]
+driver = iptables
+
+[capabilities]
+boot_mode=True
+

5、配置ironic inspector dnsmasq服务

+
# 配置文件地址:/etc/ironic-inspector/dnsmasq.conf
+port=0
+interface=enp3s0                         #替换为实际监听网络接口
+dhcp-range=172.20.19.100,172.20.19.110   #替换为实际dhcp地址范围
+bind-interfaces
+enable-tftp
+
+dhcp-match=set:efi,option:client-arch,7
+dhcp-match=set:efi,option:client-arch,9
+dhcp-match=aarch64, option:client-arch,11
+dhcp-boot=tag:aarch64,grubaa64.efi
+dhcp-boot=tag:!aarch64,tag:efi,grubx64.efi
+dhcp-boot=tag:!aarch64,tag:!efi,pxelinux.0
+
+tftp-root=/tftpboot                       #替换为实际tftpboot目录
+log-facility=/var/log/dnsmasq.log
+

6、关闭ironic provision网络子网的dhcp

+
openstack subnet set --no-dhcp 72426e89-f552-4dc4-9ac7-c4e131ce7f3c
+
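命令中的子网ID仅为示例,实际环境中可以先用如下命令查询provision网络对应的子网ID,再执行上面的--no-dhcp命令:

```shell
openstack subnet list
```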

7、初始化ironic-inspector服务的数据库

+

在控制节点执行:

+
ironic-inspector-dbsync --config-file /etc/ironic-inspector/inspector.conf upgrade
+

8、启动服务

+
systemctl enable --now openstack-ironic-inspector.service
+systemctl enable --now openstack-ironic-inspector-dnsmasq.service
+
    +
  1. +

    配置httpd服务

    +
  2. +
  3. +

    创建ironic要使用的httpd的root目录并设置属主属组,目录路径要和/etc/ironic/ironic.conf中[deploy]组的http_root配置项指定的路径一致。

    +
    mkdir -p /var/lib/ironic/httproot
    chown ironic.ironic /var/lib/ironic/httproot
    +
  4. +
  5. +

    安装和配置httpd服务

    +
      +
    1. +

      安装httpd服务,已有请忽略

      +
      yum install httpd -y
      +
    2. +
    3. +

      创建/etc/httpd/conf.d/openstack-ironic-httpd.conf文件,内容如下:

      +
      Listen 8080
      +
      +<VirtualHost *:8080>
      +    ServerName ironic.openeuler.com
      +
      +    ErrorLog "/var/log/httpd/openstack-ironic-httpd-error_log"
      +    CustomLog "/var/log/httpd/openstack-ironic-httpd-access_log" "%h %l %u %t \"%r\" %>s %b"
      +
      +    DocumentRoot "/var/lib/ironic/httproot"
      +    <Directory "/var/lib/ironic/httproot">
      +        Options Indexes FollowSymLinks
      +        Require all granted
      +    </Directory>
      +    LogLevel warn
      +    AddDefaultCharset UTF-8
      +    EnableSendfile on
      +</VirtualHost>
      +
      +

      注意监听的端口要和/etc/ironic/ironic.conf里[deploy]选项中http_url配置项中指定的端口一致。

      +
    4. +
    5. +

      重启httpd服务。

      +
      systemctl restart httpd
      +
    6. +
    +
  6. +
  7. +

    deploy ramdisk镜像制作

    +
  8. +
+

W版的ramdisk镜像支持通过ironic-python-agent服务或disk-image-builder工具制作,也可以使用社区最新的ironic-python-agent-builder。用户也可以自行选择其他工具制作。

若使用W版原生工具,则需要安装对应的软件包。

+
yum install openstack-ironic-python-agent
+或者
+yum install diskimage-builder
+

具体的使用方法可以参考官方文档

+

这里介绍下使用ironic-python-agent-builder构建ironic使用的deploy镜像的完整过程。

+
    +
  1. +

    安装 ironic-python-agent-builder

    +
    1. 安装工具:
    +
    +    ```shell
    +    pip install ironic-python-agent-builder
    +    ```
    +
    +2. 修改以下文件中的python解释器:
    +
    +    ```shell
    +    /usr/bin/yum /usr/libexec/urlgrabber-ext-down
    +    ```
    +
    +3. 安装其它必须的工具:
    +
    +    ```shell
    +    yum install git
    +    ```
    +
    +    由于`DIB`依赖`semanage`命令,所以在制作镜像之前确定该命令是否可用:`semanage --help`,如果提示无此命令,安装即可:
    +
    +    ```shell
    +    # 先查询需要安装哪个包
    +    [root@localhost ~]# yum provides /usr/sbin/semanage
    +    已加载插件:fastestmirror
    +    Loading mirror speeds from cached hostfile
    +    * base: mirror.vcu.edu
    +    * extras: mirror.vcu.edu
    +    * updates: mirror.math.princeton.edu
    +    policycoreutils-python-2.5-34.el7.aarch64 : SELinux policy core python utilities
    +    源    :base
    +    匹配来源:
    +    文件名    :/usr/sbin/semanage
    +    # 安装
    +    [root@localhost ~]# yum install policycoreutils-python
    +    ```
    +
  2. +
  3. +

    制作镜像

    +
    如果是`arm`架构,需要添加:
    +```shell
    +export ARCH=aarch64
    +```
    +
    +基本用法:
    +
    +```shell
    +usage: ironic-python-agent-builder [-h] [-r RELEASE] [-o OUTPUT] [-e ELEMENT]
    +                                    [-b BRANCH] [-v] [--extra-args EXTRA_ARGS]
    +                                    distribution
    +
    +positional arguments:
    +    distribution          Distribution to use
    +
    +optional arguments:
    +    -h, --help            show this help message and exit
    +    -r RELEASE, --release RELEASE
    +                        Distribution release to use
    +    -o OUTPUT, --output OUTPUT
    +                        Output base file name
    +    -e ELEMENT, --element ELEMENT
    +                        Additional DIB element to use
    +    -b BRANCH, --branch BRANCH
    +                        If set, override the branch that is used for ironic-
    +                        python-agent and requirements
    +    -v, --verbose         Enable verbose logging in diskimage-builder
    +    --extra-args EXTRA_ARGS
    +                        Extra arguments to pass to diskimage-builder
    +```
    +
    +举例说明:
    +
    +```shell
    +ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky
    +```
    +
  4. +
  5. +

    允许ssh登录

    +
    初始化环境变量,然后制作镜像:
    +
    +```shell
    +export DIB_DEV_USER_USERNAME=ipa \
    +export DIB_DEV_USER_PWDLESS_SUDO=yes \
    +export DIB_DEV_USER_PASSWORD='123'
    +ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky -e selinux-permissive -e devuser
    +```
    +
  6. +
  7. +

    指定代码仓库

    +
    初始化对应的环境变量,然后制作镜像:
    +
    +```shell
    +# 指定仓库地址以及版本
    +DIB_REPOLOCATION_ironic_python_agent=git@172.20.2.149:liuzz/ironic-python-agent.git
    +DIB_REPOREF_ironic_python_agent=origin/develop
    +
    +# 直接从gerrit上clone代码
    +DIB_REPOLOCATION_ironic_python_agent=https://review.opendev.org/openstack/ironic-python-agent
    +DIB_REPOREF_ironic_python_agent=refs/changes/43/701043/1
    +```
    +
    +参考:[source-repositories](https://docs.openstack.org/diskimage-builder/latest/elements/source-repositories/README.html)。
    +
    +指定仓库地址及版本验证成功。
    +
  8. +
  9. +

    注意

    +

    原生的openstack里的pxe配置文件的模版不支持arm64架构,需要自己对原生openstack代码进行修改:

    +

    在W版中,社区的ironic仍然不支持arm64位的uefi pxe启动,表现为生成的grub.cfg文件(一般位于/tftpboot/下)格式不对而导致pxe启动失败,如下:

    +

    生成的错误配置文件:

    +

    (图:生成错误的grub.cfg示例,ironic-err)

    +

    如上图所示,arm架构里寻找vmlinux和ramdisk镜像的命令分别是linux和initrd,上图所示的标红命令是x86架构下的uefi pxe启动。

    +

    需要用户对生成grub.cfg的代码逻辑自行修改。

    +
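    作为参考,下面给出一个手工编写的arm64 UEFI PXE菜单项的示意(内核与ramdisk路径均为假设值,仅用来说明linux/initrd命令的写法,并非ironic自动生成的结果):

    ```
    set default=deploy
    set timeout=5

    menuentry "deploy" {
        # arm64下应使用linux/initrd命令加载内核与ramdisk
        linux deploy_kernel selinux=0 troubleshoot=0 text ip=dhcp ipa-api-url=http://IRONIC_API_IP:6385
        initrd deploy_ramdisk
    }
    ```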

    ironic向ipa发送查询命令执行状态请求的tls报错:

    +

    W版的ipa和ironic默认都会以开启tls认证的方式向对方发送请求,根据官网的说明进行关闭即可。

    +
      +
    1. 修改ironic配置文件(/etc/ironic/ironic.conf)下面的配置中添加ipa-insecure=1:
    2. +
    +
    [agent]
    +verify_ca = False
    +
    +[pxe]
    +pxe_append_params = nofb nomodeset vga=normal coreos.autologin ipa-insecure=1
    +

    2) 在ramdisk镜像中添加ipa配置文件/etc/ironic_python_agent/ironic_python_agent.conf,并添加如下tls相关配置:

    +

    /etc/ironic_python_agent/ironic_python_agent.conf (需要提前创建/etc/ironic_python_agent目录)

    +
    [DEFAULT]
    +enable_auto_tls = False
    +

    设置权限:

    +
    chown -R ipa.ipa /etc/ironic_python_agent/
    +
      +
    1. 修改ipa服务的服务启动文件,添加配置文件选项
    2. +
    +

    vim usr/lib/systemd/system/ironic-python-agent.service

    +
    [Unit]
    +Description=Ironic Python Agent
    +After=network-online.target
    +
    +[Service]
    +ExecStartPre=/sbin/modprobe vfat
    +ExecStart=/usr/local/bin/ironic-python-agent --config-file /etc/ironic_python_agent/ironic_python_agent.conf
    +Restart=always
    +RestartSec=30s
    +
    +[Install]
    +WantedBy=multi-user.target
    +
  10. +
+

Kolla 安装

+

Kolla为OpenStack服务提供生产环境可用的容器化部署的功能。openEuler 21.09中引入了Kolla和Kolla-ansible服务。

+

Kolla的安装十分简单,只需要安装对应的RPM包即可

+
yum install openstack-kolla openstack-kolla-ansible
+

安装完后,就可以使用kolla-ansible, kolla-build, kolla-genpwd, kolla-mergepwd等命令了。

+
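下面给出一个非常简化的使用示意(以all-in-one部署为例;inventory文件路径与具体流程请以kolla-ansible官方文档和实际安装路径为准):

```shell
# 为各服务生成随机密码(写入/etc/kolla/passwords.yml)
kolla-genpwd
# 按照/etc/kolla/globals.yml中的配置进行部署
kolla-ansible -i /usr/share/kolla-ansible/ansible/inventory/all-in-one bootstrap-servers
kolla-ansible -i /usr/share/kolla-ansible/ansible/inventory/all-in-one deploy
```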

Trove 安装

+

Trove是OpenStack的数据库服务,如果用户使用OpenStack提供的数据库服务则推荐使用该组件。否则,可以不用安装。

+
    +
  1. 设置数据库
  2. +
+

数据库服务在数据库中存储信息,创建一个trove用户可以访问的trove数据库,替换TROVE_DBPASSWORD为合适的密码

+
mysql -u root -p
+
+MariaDB [(none)]> CREATE DATABASE trove CHARACTER SET utf8;
+MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'localhost' \
+IDENTIFIED BY 'TROVE_DBPASSWORD';
+MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'%' \
+IDENTIFIED BY 'TROVE_DBPASSWORD';
+
    +
  1. 创建服务用户认证
  2. +
+

1、创建Trove服务用户

+

openstack user create --password TROVE_PASSWORD \
+                      --email trove@example.com trove
+openstack role add --project service --user trove admin
+openstack service create --name trove
+                         --description "Database service" database
+ 解释: TROVE_PASSWORD 替换为trove用户的密码

+

2、创建Database服务访问入口

+
openstack endpoint create --region RegionOne database public http://controller:8779/v1.0/%\(tenant_id\)s
+openstack endpoint create --region RegionOne database internal http://controller:8779/v1.0/%\(tenant_id\)s
+openstack endpoint create --region RegionOne database admin http://controller:8779/v1.0/%\(tenant_id\)s
+
    +
  1. 安装和配置Trove各组件

    1、安装Trove包

    ```shell script
    yum install openstack-trove python-troveclient
    ```

    2、配置`trove.conf`
    +```shell script
    +vim /etc/trove/trove.conf
    +
    +[DEFAULT]
    +bind_host=TROVE_NODE_IP
    +log_dir = /var/log/trove
    +network_driver = trove.network.neutron.NeutronDriver
    +management_security_groups = <manage security group>
    +nova_keypair = trove-mgmt
    +default_datastore = mysql
    +taskmanager_manager = trove.taskmanager.manager.Manager
    +trove_api_workers = 5
    +transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
    +reboot_time_out = 300
    +usage_timeout = 900
    +agent_call_high_timeout = 1200
    +use_syslog = False
    +debug = True
    +
    +# Set these if using Neutron Networking
    +network_driver=trove.network.neutron.NeutronDriver
    +network_label_regex=.*
    +
    +
    +transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
    +
    +[database]
    +connection = mysql+pymysql://trove:TROVE_DBPASS@controller/trove
    +
    +[keystone_authtoken]
    +project_domain_name = Default
    +project_name = service
    +user_domain_name = Default
    +password = trove
    +username = trove
    +auth_url = http://controller:5000/v3/
    +auth_type = password
    +
    +[service_credentials]
    +auth_url = http://controller:5000/v3/
    +region_name = RegionOne
    +project_name = service
    +password = trove
    +project_domain_name = Default
    +user_domain_name = Default
    +username = trove
    +
    +[mariadb]
    +tcp_ports = 3306,4444,4567,4568
    +
    +[mysql]
    +tcp_ports = 3306
    +
    +[postgresql]
    +tcp_ports = 5432
    解释:
  - [DEFAULT]分组中bind_host配置为Trove部署节点的IP
  - nova_compute_url、cinder_url 为Nova和Cinder在Keystone中创建的endpoint
  - nova_proxy_XXX 为一个能访问Nova服务的用户信息,上例中使用admin用户为例
  - transport_url 为RabbitMQ连接信息,RABBIT_PASS替换为RabbitMQ的密码
  - [database]分组中的connection 为前面在mysql中为Trove创建的数据库信息
  - Trove的用户信息中TROVE_PASS替换为实际trove用户的密码

    配置trove-guestagent.conf

    +
    vim /etc/trove/trove-guestagent.conf
    +
[DEFAULT]
+log_file = trove-guestagent.log
+log_dir = /var/log/trove/
+ignore_users = os_admin
+control_exchange = trove
+transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
+rpc_backend = rabbit
+command_process_timeout = 60
+use_syslog = False
+debug = True
+
[service_credentials]
+auth_url = http://controller:5000/v3/
+region_name = RegionOne
+project_name = service
+password = TROVE_PASS
+project_domain_name = Default
+user_domain_name = Default
+username = trove
+
[mysql]
+docker_image = your-registry/your-repo/mysql
+backup_docker_image = your-registry/your-repo/db-backup-mysql:1.1.0
+
**解释:** `guestagent`是trove中一个独立组件,需要预先内置到Trove通过Nova创建的虚拟
+机镜像中,在创建好数据库实例后,会起guestagent进程,负责通过消息队列(RabbitMQ)向Trove上
+报心跳,因此需要配置RabbitMQ的用户和密码信息。
+**从Victoria版开始,Trove使用一个统一的镜像来跑不同类型的数据库,数据库服务运行在Guest虚拟机的Docker容器中。**
+- `transport_url` 为`RabbitMQ`连接信息,`RABBIT_PASS`替换为RabbitMQ的密码
+- Trove的用户信息中`TROVE_PASS`替换为实际trove用户的密码  
+
+6. 生成`Trove`数据库表
+```shell script
+su -s /bin/sh -c "trove-manage db_sync" trove
+4. 完成安装配置
+ 1. 配置Trove服务自启动
+```shell script
+systemctl enable openstack-trove-api.service \
+openstack-trove-taskmanager.service \
+openstack-trove-conductor.service
+```
2. 启动服务
+```shell script
+systemctl start openstack-trove-api.service \
+openstack-trove-taskmanager.service \
+openstack-trove-conductor.service

+

Swift 安装

+

Swift 提供了弹性可伸缩、高可用的分布式对象存储服务,适合存储大规模非结构化数据。

+
    +
  1. +

    创建服务凭证、API端点。

    +

    创建服务凭证

    +
    #创建swift用户:
    +openstack user create --domain default --password-prompt swift                 
    +#admin为swift用户添加角色:
    +openstack role add --project service --user swift admin                        
    +#创建swift服务实体:
    +openstack service create --name swift --description "OpenStack Object Storage" object-store                                                                   
    +

    创建swift API 端点:

    +
    openstack endpoint create --region RegionOne object-store public http://controller:8080/v1/AUTH_%\(project_id\)s                            
    +openstack endpoint create --region RegionOne object-store internal http://controller:8080/v1/AUTH_%\(project_id\)s                            
    +openstack endpoint create --region RegionOne object-store admin http://controller:8080/v1                                                  
    +
  2. +
  3. +

    安装软件包:

    +
    yum install openstack-swift-proxy python3-swiftclient python3-keystoneclient python3-keystonemiddleware memcached (CTL)
    +
  4. +
  5. +

    配置proxy-server相关配置

    +
  6. +
+

Swift RPM包里已经包含了一个基本可用的proxy-server.conf,只需要手动修改其中的ip和swift password即可。

+
***注意***
+
+**注意替换password为您swift在身份服务中为用户选择的密码**
+
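下面是proxy-server.conf中通常需要手动调整的部分的示意(取值均为假设,具体以RPM包自带的配置文件为准):

```
[DEFAULT]
# 替换为proxy节点实际监听的IP
bind_ip = MANAGEMENT_INTERFACE_IP_ADDRESS
bind_port = 8080

[filter:authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = swift
# 替换为swift用户在身份服务中的密码
password = SWIFT_PASS
```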
    +
  1. +

    安装和配置存储节点 (STG)

    +

    安装支持的程序包: +

    yum install xfsprogs rsync

    +

    将/dev/vdb和/dev/vdc设备格式化为 XFS

    +
    mkfs.xfs /dev/vdb
    +mkfs.xfs /dev/vdc
    +

    创建挂载点目录结构:

    +
    mkdir -p /srv/node/vdb
    +mkdir -p /srv/node/vdc
    +

    找到新分区的 UUID:

    +
    blkid
    +

    编辑/etc/fstab文件并将以下内容添加到其中:

    +
    UUID="<UUID-from-output-above>" /srv/node/vdb xfs noatime 0 2
    +UUID="<UUID-from-output-above>" /srv/node/vdc xfs noatime 0 2
    +

    挂载设备:

    +

    mount /srv/node/vdb
    +mount /srv/node/vdc
    +注意

    +

    如果用户不需要容灾功能,以上步骤只需要创建一个设备即可,同时可以跳过下面的rsync配置

    +

    (可选)创建或编辑/etc/rsyncd.conf文件以包含以下内容:

    +

    [DEFAULT]
    +uid = swift
    +gid = swift
    +log file = /var/log/rsyncd.log
    +pid file = /var/run/rsyncd.pid
    +address = MANAGEMENT_INTERFACE_IP_ADDRESS
    +
    +[account]
    +max connections = 2
    +path = /srv/node/
    +read only = False
    +lock file = /var/lock/account.lock
    +
    +[container]
    +max connections = 2
    +path = /srv/node/
    +read only = False
    +lock file = /var/lock/container.lock
    +
    +[object]
    +max connections = 2
    +path = /srv/node/
    +read only = False
    +lock file = /var/lock/object.lock
    +替换MANAGEMENT_INTERFACE_IP_ADDRESS为存储节点上管理网络的IP地址

    +

    启动rsyncd服务并配置它在系统启动时启动:

    +
    systemctl enable rsyncd.service
    +systemctl start rsyncd.service
    +
  2. +
  3. +

    在存储节点安装和配置组件 (STG)

    +

    安装软件包:

    +
    yum install openstack-swift-account openstack-swift-container openstack-swift-object
    +

    编辑/etc/swift目录的account-server.conf、container-server.conf和object-server.conf文件,替换bind_ip为存储节点上管理网络的IP地址。

    +

    确保挂载点目录结构的正确所有权:

    +
    chown -R swift:swift /srv/node
    +

    创建recon目录并确保其拥有正确的所有权:

    +
    mkdir -p /var/cache/swift
    +chown -R root:swift /var/cache/swift
    +chmod -R 775 /var/cache/swift
    +
  4. +
  5. +

    创建账号环 (CTL)

    +

    切换到/etc/swift目录。

    +
    cd /etc/swift
    +

    创建基础account.builder文件:

    +
    swift-ring-builder account.builder create 10 1 1
    +

    将每个存储节点添加到环中:

    +
    swift-ring-builder account.builder add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6202  --device DEVICE_NAME --weight DEVICE_WEIGHT
    +

    替换STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS为存储节点上管理网络的IP地址。替换DEVICE_NAME为同一存储节点上的存储设备名称

    +

    注意:对每个存储节点上的每个存储设备重复此命令

    +

    验证环(ring)的内容:

    +
    swift-ring-builder account.builder
    +

    重新平衡环:

    +
    swift-ring-builder account.builder rebalance
    +
  6. +
  7. +

    创建容器环 (CTL)

    +

    切换到/etc/swift目录。

    +

    创建基础container.builder文件:

    +
       swift-ring-builder container.builder create 10 1 1
    +

    将每个存储节点添加到环中:

    +
    swift-ring-builder container.builder \
    +  add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6201 \
    +  --device DEVICE_NAME --weight 100
    +
    +

    替换STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS为存储节点上管理网络的IP地址。替换DEVICE_NAME为同一存储节点上的存储设备名称

    +

    注意:对每个存储节点上的每个存储设备重复此命令

    +

    验证环(ring)的内容:

    +
    swift-ring-builder container.builder
    +

    重新平衡环:

    +
    swift-ring-builder container.builder rebalance
    +
  8. +
  9. +

    创建对象环 (CTL)

    +

    切换到/etc/swift目录。

    +

    创建基础object.builder文件:

    +
    swift-ring-builder object.builder create 10 1 1
    +

    将每个存储节点添加到环中

    +
     swift-ring-builder object.builder \
    +  add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6200 \
    +  --device DEVICE_NAME --weight 100
    +

    替换STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS为存储节点上管理网络的IP地址。替换DEVICE_NAME为同一存储节点上的存储设备名称

    +

    注意:对每个存储节点上的每个存储设备重复此命令

    +

    验证环(ring)的内容:

    +
    swift-ring-builder object.builder
    +

    重新平衡环:

    +
    swift-ring-builder object.builder rebalance
    +

    分发环配置文件:

    +

    将account.ring.gz、container.ring.gz以及object.ring.gz文件复制到每个存储节点和运行代理服务的任何其他节点的/etc/swift目录下。

    +
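    例如,可以用scp将环文件分发到各个节点(OBJECT_STORAGE_NODE为假设的存储节点主机名):

    ```shell
    scp account.ring.gz container.ring.gz object.ring.gz \
        root@OBJECT_STORAGE_NODE:/etc/swift/
    ```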
  10. +
  11. +

    完成安装

    +

    编辑/etc/swift/swift.conf文件

    +
    [swift-hash]
    +swift_hash_path_suffix = test-hash
    +swift_hash_path_prefix = test-hash
    +
    +[storage-policy:0]
    +name = Policy-0
    +default = yes
    +

    用唯一值替换 test-hash

    +
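    例如,可以用下面的命令各生成一个随机值作为前缀和后缀(示意):

    ```shell
    openssl rand -hex 10
    ```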

    将swift.conf文件复制到每个存储节点和运行代理服务的任何其他节点的/etc/swift目录下。

    +

    在所有节点上,确保配置目录的正确所有权:

    +
    chown -R root:swift /etc/swift
    +

    在控制器节点和运行代理服务的任何其他节点上,启动对象存储代理服务及其依赖项,并将它们配置为在系统启动时启动:

    +
    systemctl enable openstack-swift-proxy.service memcached.service
    +systemctl start openstack-swift-proxy.service memcached.service
    +

    在存储节点上,启动对象存储服务并将它们配置为在系统启动时启动:

    +
    systemctl enable openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service
    +
    +systemctl start openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service
    +
    +systemctl enable openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service
    +
    +systemctl start openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service
    +
    +systemctl enable openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service
    +
    +systemctl start openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service
    +
  12. +
diff --git a/site/install/openEuler-22.03-LTS-SP1/OpenStack-train/index.html b/site/install/openEuler-22.03-LTS-SP1/OpenStack-train/index.html
new file mode 100644
index 0000000000000000000000000000000000000000..d05ab1a58b4b434bc016bf662b046b6641b1453f
--- /dev/null
+++ b/site/install/openEuler-22.03-LTS-SP1/OpenStack-train/index.html
@@ -0,0 +1,2917 @@
+openEuler-22.03-LTS-SP1_Train - OpenStack SIG Doc

OpenStack-Train 部署指南

+ +

OpenStack 简介

+

OpenStack 是一个社区,也是一个项目。它提供了一个部署云的操作平台或工具集,为组织提供可扩展的、灵活的云计算。

+

作为一个开源的云计算管理平台,OpenStack 由nova、cinder、neutron、glance、keystone、horizon等几个主要的组件组合起来完成具体工作。OpenStack 支持几乎所有类型的云环境,项目目标是提供实施简单、可大规模扩展、丰富、标准统一的云计算管理平台。OpenStack 通过各种互补的服务提供了基础设施即服务(IaaS)的解决方案,每个服务提供 API 进行集成。

+

openEuler 22.03-LTS-SP1版本官方源已经支持 OpenStack-Train 版本,用户可以配置好 yum 源后根据此文档进行 OpenStack 部署。

+

约定

+

OpenStack 支持多种形态部署,此文档支持ALL in One以及Distributed两种部署方式,按照如下方式约定:

+

ALL in One模式:

+
忽略所有可能的后缀
+

Distributed模式:

+
以 `(CTL)` 为后缀表示此条配置或者命令仅适用`控制节点`
+以 `(CPT)` 为后缀表示此条配置或者命令仅适用`计算节点`
+以 `(STG)` 为后缀表示此条配置或者命令仅适用`存储节点`
+除此之外表示此条配置或者命令同时适用`控制节点`和`计算节点`
+

注意

+

涉及到以上约定的服务如下:

+
    +
  • Cinder
  • +
  • Nova
  • +
  • Neutron
  • +
+

准备环境

+

环境配置

+
    +
  1. +

    启用OpenStack Train yum源

    +
    yum update
    +yum install openstack-release-train
    +yum clean all && yum makecache
    +

    注意:如果你的环境的YUM源没有启用EPOL,需要同时配置EPOL,确保EPOL已配置,如下所示

    +
    vi /etc/yum.repos.d/openEuler.repo
    +
    +[EPOL]
    +name=EPOL
    +baseurl=http://repo.openeuler.org/openEuler-22.03-LTS-SP1/EPOL/main/$basearch/
    +enabled=1
    +gpgcheck=1
    +gpgkey=http://repo.openeuler.org/openEuler-22.03-LTS-SP1/OS/$basearch/RPM-GPG-KEY-openEuler
    +
  2. +
  3. +

    修改主机名以及映射

    +

    设置各个节点的主机名

    +
    hostnamectl set-hostname controller                                                            (CTL)
    +hostnamectl set-hostname compute                                                               (CPT)
    +

    假设controller节点的IP是10.0.0.11,compute节点的IP是10.0.0.12(如果存在的话),则于/etc/hosts新增如下:

    +
    10.0.0.11   controller
    +10.0.0.12   compute
    +
  4. +
+

安装 SQL DataBase

+
    +
  1. +

    执行如下命令,安装软件包。

    +
    yum install mariadb mariadb-server python3-PyMySQL
    +
  2. +
  3. +

    执行如下命令,创建并编辑 /etc/my.cnf.d/openstack.cnf 文件。

    +
    vim /etc/my.cnf.d/openstack.cnf
    +
    +[mysqld]
    +bind-address = 10.0.0.11
    +default-storage-engine = innodb
    +innodb_file_per_table = on
    +max_connections = 4096
    +collation-server = utf8_general_ci
    +character-set-server = utf8
    +

    注意

    +

    其中 bind-address 设置为控制节点的管理IP地址。

    +
  4. +
  5. +

    启动 DataBase 服务,并为其配置开机自启动:

    +
    systemctl enable mariadb.service
    +systemctl start mariadb.service
    +
  6. +
  7. +

    配置DataBase的默认密码(可选)

    +
    mysql_secure_installation
    +

    注意

    +

    根据提示进行即可

    +
  8. +
+

安装 RabbitMQ

+
    +
  1. +

    执行如下命令,安装软件包。

    +
    yum install rabbitmq-server
    +
  2. +
  3. +

    启动 RabbitMQ 服务,并为其配置开机自启动。

    +
    systemctl enable rabbitmq-server.service
    +systemctl start rabbitmq-server.service
    +
  4. +
  5. +

    添加 OpenStack用户。

    +
    rabbitmqctl add_user openstack RABBIT_PASS
    +

    注意

    +

    替换 RABBIT_PASS,为 OpenStack 用户设置密码

    +
  6. +
  7. +

    设置openstack用户权限,允许进行配置、写、读:

    +
    rabbitmqctl set_permissions openstack ".*" ".*" ".*"
    +
  8. +
+

安装 Memcached

+
    +
  1. +

    执行如下命令,安装依赖软件包。

    +
    yum install memcached python3-memcached
    +
  2. +
  3. +

    编辑 /etc/sysconfig/memcached 文件。

    +
    vim /etc/sysconfig/memcached
    +
    +OPTIONS="-l 127.0.0.1,::1,controller"
    +
  4. +
  5. +

    执行如下命令,启动 Memcached 服务,并为其配置开机启动。

    +
    systemctl enable memcached.service
    +systemctl start memcached.service
    +

    注意

    +

    服务启动后,可以通过命令memcached-tool controller stats确保启动正常,服务可用,其中可以将controller替换为控制节点的管理IP地址。

    +
  6. +
+

安装 OpenStack

+

Keystone 安装

+
    +
  1. +

    创建 keystone 数据库并授权。

    +
    mysql -u root -p
    +
    +MariaDB [(none)]> CREATE DATABASE keystone;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
    +IDENTIFIED BY 'KEYSTONE_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
    +IDENTIFIED BY 'KEYSTONE_DBPASS';
    +MariaDB [(none)]> exit
    +

    注意

    +

    替换 KEYSTONE_DBPASS,为 Keystone 数据库设置密码

    +
  2. +
  3. +

    安装软件包。

    +
    yum install openstack-keystone httpd mod_wsgi
    +
  4. +
  5. +

    配置keystone相关配置

    +
    vim /etc/keystone/keystone.conf
    +
    +[database]
    +connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone
    +
    +[token]
    +provider = fernet
    +

    解释

    +

    [database]部分,配置数据库入口

    +

    [token]部分,配置token provider

    +

    注意:

    +

    替换 KEYSTONE_DBPASS 为 Keystone 数据库的密码

    +
  6. +
  7. +

    同步数据库。

    +
    su -s /bin/sh -c "keystone-manage db_sync" keystone
    +
  8. +
  9. +

    初始化Fernet密钥仓库。

    +
    keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
    +keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
    +
  10. +
  11. +

    启动服务。

    +
    keystone-manage bootstrap --bootstrap-password ADMIN_PASS \
    +--bootstrap-admin-url http://controller:5000/v3/ \
    +--bootstrap-internal-url http://controller:5000/v3/ \
    +--bootstrap-public-url http://controller:5000/v3/ \
    +--bootstrap-region-id RegionOne
    +

    注意

    +

    替换 ADMIN_PASS,为 admin 用户设置密码

    +
  12. +
  13. +

    配置Apache HTTP server

    +
    vim /etc/httpd/conf/httpd.conf
    +
    +ServerName controller
    +
    ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
    +

    解释

    +

    配置 ServerName 项引用控制节点

    +

    注意:如果 ServerName 项不存在则需要创建

    +
  14. +
  15. +

    启动Apache HTTP服务。

    +
    systemctl enable httpd.service
    +systemctl start httpd.service
    +
  16. +
  17. +

    创建环境变量配置。

    +
    cat << EOF >> ~/.admin-openrc
    +export OS_PROJECT_DOMAIN_NAME=Default
    +export OS_USER_DOMAIN_NAME=Default
    +export OS_PROJECT_NAME=admin
    +export OS_USERNAME=admin
    +export OS_PASSWORD=ADMIN_PASS
    +export OS_AUTH_URL=http://controller:5000/v3
    +export OS_IDENTITY_API_VERSION=3
    +export OS_IMAGE_API_VERSION=2
    +EOF
    +

    注意

    +

    替换 ADMIN_PASS 为 admin 用户的密码

    +
  18. +
  19. +

    依次创建domain, projects, users, roles,需要先安装好python3-openstackclient:

    +
    yum install python3-openstackclient
    +

    导入环境变量

    +
    source ~/.admin-openrc
    +

    创建project service,其中 domain default 在 keystone-manage bootstrap 时已创建

    +
    openstack domain create --description "An Example Domain" example
    +
    openstack project create --domain default --description "Service Project" service
    +

    创建(non-admin)project myproject,user myuser 和 role myrole,为 myprojectmyuser 添加角色myrole

    +
    openstack project create --domain default --description "Demo Project" myproject
    +openstack user create --domain default --password-prompt myuser
    +openstack role create myrole
    +openstack role add --project myproject --user myuser myrole
    +
  20. +
  21. +

    验证

    +

    取消临时环境变量OS_AUTH_URL和OS_PASSWORD:

    +
    source ~/.admin-openrc
    +unset OS_AUTH_URL OS_PASSWORD
    +

    为admin用户请求token:

    +
    openstack --os-auth-url http://controller:5000/v3 \
    +--os-project-domain-name Default --os-user-domain-name Default \
    +--os-project-name admin --os-username admin token issue
    +

    为myuser用户请求token:

    +
    openstack --os-auth-url http://controller:5000/v3 \
    +--os-project-domain-name Default --os-user-domain-name Default \
    +--os-project-name myproject --os-username myuser token issue
    +
  22. +
+

Glance 安装

+
    +
  1. +

    创建数据库、服务凭证和 API 端点

    +

    创建数据库:

    +
    mysql -u root -p
    +
    +MariaDB [(none)]> CREATE DATABASE glance;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
    +IDENTIFIED BY 'GLANCE_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
    +IDENTIFIED BY 'GLANCE_DBPASS';
    +MariaDB [(none)]> exit
    +

    注意:

    +

    替换 GLANCE_DBPASS,为 glance 数据库设置密码

    +

    创建服务凭证

    +
    source ~/.admin-openrc
    +
    +openstack user create --domain default --password-prompt glance
    +openstack role add --project service --user glance admin
    +openstack service create --name glance --description "OpenStack Image" image
    +

    创建镜像服务API端点:

    +
    openstack endpoint create --region RegionOne image public http://controller:9292
    +openstack endpoint create --region RegionOne image internal http://controller:9292
    +openstack endpoint create --region RegionOne image admin http://controller:9292
    +
  2. +
  3. +

    安装软件包

    +
    yum install openstack-glance
    +
  4. +
  5. +

    配置glance相关配置:

    +
    vim /etc/glance/glance-api.conf
    +
    +[database]
    +connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
    +
    +[keystone_authtoken]
    +www_authenticate_uri  = http://controller:5000
    +auth_url = http://controller:5000
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +project_name = service
    +username = glance
    +password = GLANCE_PASS
    +
    +[paste_deploy]
    +flavor = keystone
    +
    +[glance_store]
    +stores = file,http
    +default_store = file
    +filesystem_store_datadir = /var/lib/glance/images/
    +

    解释:

    +

    [database]部分,配置数据库入口

    +

    [keystone_authtoken] [paste_deploy]部分,配置身份认证服务入口

    +

    [glance_store]部分,配置本地文件系统存储和镜像文件的位置

    +

    注意

    +

    替换 GLANCE_DBPASS 为 glance 数据库的密码

    +

    替换 GLANCE_PASS 为 glance 用户的密码

    +
  6. +
  7. +

    同步数据库:

    +
    su -s /bin/sh -c "glance-manage db_sync" glance
    +
  8. +
  9. +

    启动服务:

    +
    systemctl enable openstack-glance-api.service
    +systemctl start openstack-glance-api.service
    +
  10. +
  11. +

    验证

    +

    下载镜像

    +
    source ~/.admin-openrc
    +
    +wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
    +

    注意

    +

    如果您使用的环境是鲲鹏架构,请下载aarch64版本的镜像;已对镜像cirros-0.5.2-aarch64-disk.img进行测试。

    +
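    例如(下载地址仅供参考,请以cirros官网实际发布的文件为准):

    ```shell
    wget http://download.cirros-cloud.net/0.5.2/cirros-0.5.2-aarch64-disk.img
    ```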

    向Image服务上传镜像:

    +
    openstack image create --disk-format qcow2 --container-format bare \
    +                       --file cirros-0.4.0-x86_64-disk.img --public cirros
    +

    确认镜像上传并验证属性:

    +
    openstack image list
    +
  12. +
+

Placement安装

+
    +
  1. +

    创建数据库、服务凭证和 API 端点

    +

    创建数据库:

    +

    作为 root 用户访问数据库,创建 placement 数据库并授权。

    +
    mysql -u root -p
    +MariaDB [(none)]> CREATE DATABASE placement;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' \
    +IDENTIFIED BY 'PLACEMENT_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' \
    +IDENTIFIED BY 'PLACEMENT_DBPASS';
    +MariaDB [(none)]> exit
    +

    注意

    +

    替换 PLACEMENT_DBPASS 为 placement 数据库设置密码

    +
    source ~/.admin-openrc
    +

    执行如下命令,创建 placement 服务凭证、创建 placement 用户以及添加‘admin’角色到用户‘placement’。

    +

    创建Placement API服务

    +
    openstack user create --domain default --password-prompt placement
    +openstack role add --project service --user placement admin
    +openstack service create --name placement --description "Placement API" placement
    +

    创建placement服务API端点:

    +
    openstack endpoint create --region RegionOne placement public http://controller:8778
    +openstack endpoint create --region RegionOne placement internal http://controller:8778
    +openstack endpoint create --region RegionOne placement admin http://controller:8778
    +
  2. +
  3. +

    安装和配置

    +

    安装软件包:

    +
    yum install openstack-placement-api
    +

    配置placement:

    +

    编辑 /etc/placement/placement.conf 文件:

    +

    在[placement_database]部分,配置数据库入口

    +

    在[api] [keystone_authtoken]部分,配置身份认证服务入口

    +
    # vim /etc/placement/placement.conf
    +[placement_database]
    +# ...
    +connection = mysql+pymysql://placement:PLACEMENT_DBPASS@controller/placement
    +[api]
    +# ...
    +auth_strategy = keystone
    +[keystone_authtoken]
    +# ...
    +auth_url = http://controller:5000/v3
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +project_name = service
    +username = placement
    +password = PLACEMENT_PASS
    +

    其中,替换 PLACEMENT_DBPASS 为 placement 数据库的密码,替换 PLACEMENT_PASS 为 placement 用户的密码。

    +

    同步数据库:

    +
    su -s /bin/sh -c "placement-manage db sync" placement
    +

    启动httpd服务:

    +
    systemctl restart httpd
    +
  4. +
  5. +

    验证

    +

    执行如下命令,执行状态检查:

    +
    . ~/.admin-openrc
    +placement-status upgrade check
    +

    安装osc-placement,列出可用的资源类别及特性:

    +
    yum install python3-osc-placement
    +openstack --os-placement-api-version 1.2 resource class list --sort-column name
    +openstack --os-placement-api-version 1.6 trait list --sort-column name
    +
  6. +
+

Nova 安装

+
    +
  1. +

    创建数据库、服务凭证和 API 端点

    +

    创建数据库:

    +
    mysql -u root -p                                                                               (CTL)
    +
    +MariaDB [(none)]> CREATE DATABASE nova_api;
    +MariaDB [(none)]> CREATE DATABASE nova;
    +MariaDB [(none)]> CREATE DATABASE nova_cell0;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> exit
    +

    注意

    +

    替换NOVA_DBPASS,为nova数据库设置密码

    +
    source ~/.admin-openrc                                                                         (CTL)
    +

    创建nova服务凭证:

    +
    openstack user create --domain default --password-prompt nova                                  (CTL)
    +openstack role add --project service --user nova admin                                         (CTL)
    +openstack service create --name nova --description "OpenStack Compute" compute                 (CTL)
    +

    创建nova API端点:

    +
    openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1        (CTL)
    +openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1      (CTL)
    +openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1         (CTL)
    +
  2. +
  3. +

    安装软件包

    +
    yum install openstack-nova-api openstack-nova-conductor \                                      (CTL)
    +openstack-nova-novncproxy openstack-nova-scheduler 
    +
    +yum install openstack-nova-compute                                                             (CPT)
    +

    注意

    +

    如果为arm64架构,还需要执行以下命令

    +
    yum install edk2-aarch64                                                                       (CPT)
    +
  4. +
  5. +

    配置nova相关配置

    +
    vim /etc/nova/nova.conf
    +
    +[DEFAULT]
    +enabled_apis = osapi_compute,metadata
    +transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
    +my_ip = 10.0.0.1
    +use_neutron = true
    +firewall_driver = nova.virt.firewall.NoopFirewallDriver
    +compute_driver=libvirt.LibvirtDriver                                                           (CPT)
    +instances_path = /var/lib/nova/instances/                                                      (CPT)
    +lock_path = /var/lib/nova/tmp                                                                  (CPT)
    +
    +[api_database]
    +connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api                              (CTL)
    +
    +[database]
    +connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova                                  (CTL)
    +
    +[api]
    +auth_strategy = keystone
    +
    +[keystone_authtoken]
    +www_authenticate_uri = http://controller:5000/
    +auth_url = http://controller:5000/
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +project_name = service
    +username = nova
    +password = NOVA_PASS
    +
    +[vnc]
    +enabled = true
    +server_listen = $my_ip
    +server_proxyclient_address = $my_ip
    +novncproxy_base_url = http://controller:6080/vnc_auto.html                                     (CPT)
    +
    +[glance]
    +api_servers = http://controller:9292
    +
    +[oslo_concurrency]
    +lock_path = /var/lib/nova/tmp                                                                  (CTL)
    +
    +[placement]
    +region_name = RegionOne
    +project_domain_name = Default
    +project_name = service
    +auth_type = password
    +user_domain_name = Default
    +auth_url = http://controller:5000/v3
    +username = placement
    +password = PLACEMENT_PASS
    +
    +[neutron]
    +auth_url = http://controller:5000
    +auth_type = password
    +project_domain_name = default
    +user_domain_name = default
    +region_name = RegionOne
    +project_name = service
    +username = neutron
    +password = NEUTRON_PASS
    +service_metadata_proxy = true                                                                  (CTL)
    +metadata_proxy_shared_secret = METADATA_SECRET                                                 (CTL)
    +

    解释

    +

    [default]部分,启用计算和元数据的API,配置RabbitMQ消息队列入口,配置my_ip,启用网络服务neutron;

    +

    [api_database] [database]部分,配置数据库入口;

    +

    [api] [keystone_authtoken]部分,配置身份认证服务入口;

    +

    [vnc]部分,启用并配置远程控制台入口;

    +

    [glance]部分,配置镜像服务API的地址;

    +

    [oslo_concurrency]部分,配置lock path;

    +

    [placement]部分,配置placement服务的入口。

    +

    注意

    +

    替换 RABBIT_PASS 为 RabbitMQ 中 openstack 账户的密码;

    +

    配置 my_ip 为控制节点的管理IP地址;

    +

    替换 NOVA_DBPASS 为nova数据库的密码;

    +

    替换 NOVA_PASS 为nova用户的密码;

    +

    替换 PLACEMENT_PASS 为placement用户的密码;

    +

    替换 NEUTRON_PASS 为neutron用户的密码;

    +

    替换METADATA_SECRET为合适的元数据代理secret。

    +

    额外

    +

    确定是否支持虚拟机硬件加速(x86架构):

    +
    egrep -c '(vmx|svm)' /proc/cpuinfo                                                             (CPT)
    +

    如果返回值为0则不支持硬件加速,需要配置libvirt使用QEMU而不是KVM:

    +
    vim /etc/nova/nova.conf                                                                        (CPT)
    +
    +[libvirt]
    +virt_type = qemu
    +

    如果返回值为1或更大的值,则支持硬件加速,则virt_type可以配置为kvm

    +
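    即(示意):

    ```shell
    vim /etc/nova/nova.conf                                                                        (CPT)

    [libvirt]
    virt_type = kvm
    ```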

    注意

    +

    如果为arm64架构,还需要在计算节点执行以下命令

    +
    
    +mkdir -p /usr/share/AAVMF
    +chown nova:nova /usr/share/AAVMF
    +
    +ln -s /usr/share/edk2/aarch64/QEMU_EFI-pflash.raw \
    +      /usr/share/AAVMF/AAVMF_CODE.fd
    +ln -s /usr/share/edk2/aarch64/vars-template-pflash.raw \
    +      /usr/share/AAVMF/AAVMF_VARS.fd
    +
    +vim /etc/libvirt/qemu.conf
    +
    +nvram = ["/usr/share/AAVMF/AAVMF_CODE.fd: \
    +         /usr/share/AAVMF/AAVMF_VARS.fd", \
    +         "/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw: \
    +         /usr/share/edk2/aarch64/vars-template-pflash.raw"]
    +

    并且当ARM架构下的部署环境为嵌套虚拟化时,libvirt配置如下:

    +
    [libvirt]
    +virt_type = qemu
    +cpu_mode = custom
    +cpu_model = cortex-a72
    +
  6. +
  7. +

    同步数据库

    +

    同步nova-api数据库:

    +
    su -s /bin/sh -c "nova-manage api_db sync" nova                                                (CTL)
    +

    注册cell0数据库:

    +
    su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova                                          (CTL)
    +

    创建cell1 cell:

    +
    su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova                 (CTL)
    +

    同步nova数据库:

    +
    su -s /bin/sh -c "nova-manage db sync" nova                                                    (CTL)
    +

    验证cell0和cell1注册正确:

    +
    su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova                                         (CTL)
    +

    添加计算节点到openstack集群

    +
    su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova                           (CTL)
    +
  8. +
  9. +

    启动服务

    +
    systemctl enable \                                                                             (CTL)
    +openstack-nova-api.service \
    +openstack-nova-scheduler.service \
    +openstack-nova-conductor.service \
    +openstack-nova-novncproxy.service
    +
    +systemctl start \                                                                              (CTL)
    +openstack-nova-api.service \
    +openstack-nova-scheduler.service \
    +openstack-nova-conductor.service \
    +openstack-nova-novncproxy.service
    +
    systemctl enable libvirtd.service openstack-nova-compute.service                               (CPT)
    +systemctl start libvirtd.service openstack-nova-compute.service                                (CPT)
    +
  10. +
  11. +

    验证

    +
    source ~/.admin-openrc                                                                         (CTL)
    +

    列出服务组件,验证每个流程都成功启动和注册:

    +
    openstack compute service list                                                                 (CTL)
    +

    列出身份服务中的API端点,验证与身份服务的连接:

    +
    openstack catalog list                                                                         (CTL)
    +

    列出镜像服务中的镜像,验证与镜像服务的连接:

    +
    openstack image list                                                                           (CTL)
    +

    检查cells是否运作成功,以及其他必要条件是否已具备。

    +
    nova-status upgrade check                                                                      (CTL)
    +
  12. +
+

Neutron 安装

+
    +
  1. +

    创建数据库、服务凭证和 API 端点

    +

    创建数据库:

    +
    mysql -u root -p                                                                               (CTL)
    +
    +MariaDB [(none)]> CREATE DATABASE neutron;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
    +IDENTIFIED BY 'NEUTRON_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
    +IDENTIFIED BY 'NEUTRON_DBPASS';
    +MariaDB [(none)]> exit
    +

    注意

    +

    替换 NEUTRON_DBPASS 为 neutron 数据库设置密码。

    +
    source ~/.admin-openrc                                                                         (CTL)
    +

    创建neutron服务凭证

    +
    openstack user create --domain default --password-prompt neutron                               (CTL)
    +openstack role add --project service --user neutron admin                                      (CTL)
    +openstack service create --name neutron --description "OpenStack Networking" network           (CTL)
    +

    创建Neutron服务API端点:

    +
    openstack endpoint create --region RegionOne network public http://controller:9696             (CTL)
    +openstack endpoint create --region RegionOne network internal http://controller:9696           (CTL)
    +openstack endpoint create --region RegionOne network admin http://controller:9696              (CTL)
    +
  2. +
  3. +

    安装软件包:

    +
    yum install openstack-neutron openstack-neutron-linuxbridge ebtables ipset \                   (CTL)
    +openstack-neutron-ml2
    +
    yum install openstack-neutron-linuxbridge ebtables ipset                                       (CPT)
    +
  4. +
  5. +

    配置neutron相关配置:

    +

    配置主体配置

    +
    vim /etc/neutron/neutron.conf
    +
    +[database]
    +connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron                         (CTL)
    +
    +[DEFAULT]
    +core_plugin = ml2                                                                              (CTL)
    +service_plugins = router                                                                       (CTL)
    +allow_overlapping_ips = true                                                                   (CTL)
    +transport_url = rabbit://openstack:RABBIT_PASS@controller
    +auth_strategy = keystone
    +notify_nova_on_port_status_changes = true                                                      (CTL)
    +notify_nova_on_port_data_changes = true                                                        (CTL)
    +api_workers = 3                                                                                (CTL)
    +
    +[keystone_authtoken]
    +www_authenticate_uri = http://controller:5000
    +auth_url = http://controller:5000
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +project_name = service
    +username = neutron
    +password = NEUTRON_PASS
    +
    +[nova]
    +auth_url = http://controller:5000                                                              (CTL)
    +auth_type = password                                                                           (CTL)
    +project_domain_name = Default                                                                  (CTL)
    +user_domain_name = Default                                                                     (CTL)
    +region_name = RegionOne                                                                        (CTL)
    +project_name = service                                                                         (CTL)
    +username = nova                                                                                (CTL)
    +password = NOVA_PASS                                                                           (CTL)
    +
    +[oslo_concurrency]
    +lock_path = /var/lib/neutron/tmp
    +

    解释

    +

    [database]部分,配置数据库入口;

    +

    [default]部分,启用ml2插件和router插件,允许ip地址重叠,配置RabbitMQ消息队列入口;

    +

    [default] [keystone]部分,配置身份认证服务入口;

    +

    [default] [nova]部分,配置网络来通知计算网络拓扑的变化;

    +

    [oslo_concurrency]部分,配置lock path。

    +

    注意

    +

    替换NEUTRON_DBPASS为 neutron 数据库的密码;

    +

    替换RABBIT_PASS为 RabbitMQ中openstack 账户的密码;

    +

    替换NEUTRON_PASS为 neutron 用户的密码;

    +

    替换NOVA_PASS为 nova 用户的密码。

    +

    配置ML2插件:

    +
    vim /etc/neutron/plugins/ml2/ml2_conf.ini
    +
    +[ml2]
    +type_drivers = flat,vlan,vxlan
    +tenant_network_types = vxlan
    +mechanism_drivers = linuxbridge,l2population
    +extension_drivers = port_security
    +
    +[ml2_type_flat]
    +flat_networks = provider
    +
    +[ml2_type_vxlan]
    +vni_ranges = 1:1000
    +
    +[securitygroup]
    +enable_ipset = true
    +

    创建/etc/neutron/plugin.ini的符号链接

    +
    ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
    +

    注意

    +

    [ml2]部分,启用 flat、vlan、vxlan 网络,启用 linuxbridge 及 l2population 机制,启用端口安全扩展驱动;

    +

    [ml2_type_flat]部分,配置 flat 网络为 provider 虚拟网络;

    +

    [ml2_type_vxlan]部分,配置 VXLAN 网络标识符范围;

    +

    [securitygroup]部分,配置允许 ipset。

    +

    补充

    +

    l2 的具体配置可以根据用户需求自行修改,本文使用的是provider network + linuxbridge

    +

    配置 Linux bridge 代理:

    +
    vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
    +
    +[linux_bridge]
    +physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME
    +
    +[vxlan]
    +enable_vxlan = true
    +local_ip = OVERLAY_INTERFACE_IP_ADDRESS
    +l2_population = true
    +
    +[securitygroup]
    +enable_security_group = true
    +firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
    +

    解释

    +

    [linux_bridge]部分,映射 provider 虚拟网络到物理网络接口;

    +

    [vxlan]部分,启用 vxlan 覆盖网络,配置处理覆盖网络的物理网络接口 IP 地址,启用 layer-2 population;

    +

    [securitygroup]部分,允许安全组,配置 linux bridge iptables 防火墙驱动。

    +

    注意

    +

    替换PROVIDER_INTERFACE_NAME为物理网络接口;

    +

    替换OVERLAY_INTERFACE_IP_ADDRESS为控制节点的管理IP地址。

    +

    配置Layer-3代理:

    +
    vim /etc/neutron/l3_agent.ini                                                                  (CTL)
    +
    +[DEFAULT]
    +interface_driver = linuxbridge
    +

    解释

    +

    在[default]部分,配置接口驱动为linuxbridge

    +

    配置DHCP代理:

    +
    vim /etc/neutron/dhcp_agent.ini                                                                (CTL)
    +
    +[DEFAULT]
    +interface_driver = linuxbridge
    +dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
    +enable_isolated_metadata = true
    +

    解释

    +

    [default]部分,配置linuxbridge接口驱动、Dnsmasq DHCP驱动,启用隔离的元数据。

    +

    配置metadata代理:

    +
    vim /etc/neutron/metadata_agent.ini                                                            (CTL)
    +
    +[DEFAULT]
    +nova_metadata_host = controller
    +metadata_proxy_shared_secret = METADATA_SECRET
    +

    解释

    +

    [default]部分,配置元数据主机和shared secret。

    +

    注意

    +

    替换METADATA_SECRET为合适的元数据代理secret。

    +
  6. +
  7. +

    配置nova相关配置

    +
    vim /etc/nova/nova.conf
    +
    +[neutron]
    +auth_url = http://controller:5000
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +region_name = RegionOne
    +project_name = service
    +username = neutron
    +password = NEUTRON_PASS
    +service_metadata_proxy = true                                                                  (CTL)
    +metadata_proxy_shared_secret = METADATA_SECRET                                                 (CTL)
    +

    解释

    +

    [neutron]部分,配置访问参数,启用元数据代理,配置secret。

    +

    注意

    +

    替换NEUTRON_PASS为 neutron 用户的密码;

    +

    替换METADATA_SECRET为合适的元数据代理secret。

    +
  8. +
  9. +

    同步数据库:

    +
    su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
    +--config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
    +
  10. +
  11. +

    重启计算API服务:

    +
    systemctl restart openstack-nova-api.service
    +
  12. +
  13. +

    启动网络服务

    +
    systemctl enable neutron-server.service neutron-linuxbridge-agent.service \                    (CTL)
    +neutron-dhcp-agent.service neutron-metadata-agent.service \
    +neutron-l3-agent.service
    +
    +systemctl restart neutron-server.service neutron-linuxbridge-agent.service \                   (CTL)
    +neutron-dhcp-agent.service neutron-metadata-agent.service \
    +neutron-l3-agent.service
    +
    +systemctl enable neutron-linuxbridge-agent.service                                             (CPT)
    +systemctl restart neutron-linuxbridge-agent.service openstack-nova-compute.service             (CPT)
    +
  14. +
  15. +

    验证

    +

    验证 neutron 代理启动成功:

    +
    openstack network agent list
    +
  16. +
+

Cinder 安装

+
    +
  1. +

    创建数据库、服务凭证和 API 端点

    +

    创建数据库:

    +
    mysql -u root -p
    +
    +MariaDB [(none)]> CREATE DATABASE cinder;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \
    +IDENTIFIED BY 'CINDER_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \
    +IDENTIFIED BY 'CINDER_DBPASS';
    +MariaDB [(none)]> exit
    +

    注意

    +

    替换 CINDER_DBPASS 为cinder数据库设置密码。

    +
    source ~/.admin-openrc
    +

    创建cinder服务凭证:

    +
    openstack user create --domain default --password-prompt cinder
    +openstack role add --project service --user cinder admin
    +openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
    +openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
    +

    创建块存储服务API端点:

    +
    openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(project_id\)s
    +openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(project_id\)s
    +openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(project_id\)s
    +openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s
    +openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s
    +openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s
    +
  2. +
  3. +

    安装软件包:

    +
    yum install openstack-cinder-api openstack-cinder-scheduler                                    (CTL)
    +
    yum install lvm2 device-mapper-persistent-data scsi-target-utils rpcbind nfs-utils \           (STG)
    +            openstack-cinder-volume openstack-cinder-backup
    +
  4. +
  5. +

    准备存储设备,以下仅为示例:

    +
    pvcreate /dev/vdb
    +vgcreate cinder-volumes /dev/vdb
    +
    +vim /etc/lvm/lvm.conf
    +
    +
    +devices {
    +...
    +filter = [ "a/vdb/", "r/.*/"]
    +

    解释

    +

    在devices部分,添加过滤以接受/dev/vdb设备拒绝其他设备。

    +
  6. +
  7. +

    准备NFS

    +
    mkdir -p /root/cinder/backup
    +
    +cat << EOF >> /etc/exports
    +/root/cinder/backup 192.168.1.0/24(rw,sync,no_root_squash,no_all_squash)
    +EOF
    +
    +
  8. +
  9. +

    配置cinder相关配置:

    +
    vim /etc/cinder/cinder.conf
    +
    +[DEFAULT]
    +transport_url = rabbit://openstack:RABBIT_PASS@controller
    +auth_strategy = keystone
    +my_ip = 10.0.0.11
    +enabled_backends = lvm                                                                         (STG)
    +backup_driver=cinder.backup.drivers.nfs.NFSBackupDriver                                        (STG)
    +backup_share=HOST:PATH                                                                         (STG)
    +
    +[database]
    +connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder
    +
    +[keystone_authtoken]
    +www_authenticate_uri = http://controller:5000
    +auth_url = http://controller:5000
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +project_name = service
    +username = cinder
    +password = CINDER_PASS
    +
    +[oslo_concurrency]
    +lock_path = /var/lib/cinder/tmp
    +
    +[lvm]
    +volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver                                      (STG)
    +volume_group = cinder-volumes                                                                  (STG)
    +iscsi_protocol = iscsi                                                                         (STG)
    +iscsi_helper = tgtadm                                                                          (STG)
    +

    解释

    +

    [database]部分,配置数据库入口;

    +

    [DEFAULT]部分,配置RabbitMQ消息队列入口,配置my_ip;

    +

    [DEFAULT] [keystone_authtoken]部分,配置身份认证服务入口;

    +

    [oslo_concurrency]部分,配置lock path。

    +

    注意

    +

    替换CINDER_DBPASS为 cinder 数据库的密码;

    +

    替换RABBIT_PASS为 RabbitMQ 中 openstack 账户的密码;

    +

    配置my_ip为控制节点的管理 IP 地址;

    +

    替换CINDER_PASS为 cinder 用户的密码;

    +

    替换HOST:PATH为 NFS 的HOSTIP和共享路径;

    +
  10. +
  11. +

    同步数据库:

    +
    su -s /bin/sh -c "cinder-manage db sync" cinder                                                (CTL)
    +
  12. +
  13. +

    配置nova:

    +
    vim /etc/nova/nova.conf                                                                        (CTL)
    +
    +[cinder]
    +os_region_name = RegionOne
    +
  14. +
  15. +

    重启计算API服务

    +
    systemctl restart openstack-nova-api.service
    +
  16. +
  17. +

    启动cinder服务

    +
    systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service               (CTL)
    +systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service                (CTL)
    +
    systemctl enable rpcbind.service nfs-server.service tgtd.service iscsid.service \              (STG)
    +                 openstack-cinder-volume.service \
    +                 openstack-cinder-backup.service
    +systemctl start rpcbind.service nfs-server.service tgtd.service iscsid.service \               (STG)
    +                openstack-cinder-volume.service \
    +                openstack-cinder-backup.service
    +

    注意

    +

    当cinder使用tgtadm的方式挂卷的时候,要修改/etc/tgt/tgtd.conf,内容如下,保证tgtd可以发现cinder-volume的iscsi target。

    +
    include /var/lib/cinder/volumes/*
    +
  18. +
  19. +

    验证

    +
    source ~/.admin-openrc
    +openstack volume service list
    +
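    命令执行后可据输出确认cinder部署情况(以下检查要点基于社区通用行为,主机名以实际环境为准):

    ```shell
    # 预期输出中应包含控制节点上的cinder-scheduler,以及存储节点上的
    # cinder-volume(lvm后端)和cinder-backup,且State均为up
    openstack volume service list
    ```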
  20. +
+

horizon 安装

+
    +
  1. +

    安装软件包

    +
    yum install openstack-dashboard
    +
  2. +
  3. +

    修改文件

    +

    修改变量

    +
    vim /etc/openstack-dashboard/local_settings
    +
    +OPENSTACK_HOST = "controller"
    +ALLOWED_HOSTS = ['*', ]
    +
    +SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
    +
    +CACHES = {
    +'default': {
    +     'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
    +     'LOCATION': 'controller:11211',
    +    }
    +}
    +
    +OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
    +OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
    +OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
    +OPENSTACK_KEYSTONE_DEFAULT_ROLE = "member"
    +WEBROOT = '/dashboard'
    +POLICY_FILES_PATH = "/etc/openstack-dashboard"
    +
    +OPENSTACK_API_VERSIONS = {
    +    "identity": 3,
    +    "image": 2,
    +    "volume": 3,
    +}
    +
  4. +
  5. +

    重启 httpd 服务

    +
    systemctl restart httpd.service memcached.service
    +
  6. +
  7. +

    验证

    打开浏览器,输入网址http://HOSTIP/dashboard/,登录 horizon。

    +

    注意

    +

    替换HOSTIP为控制节点管理平面IP地址

    +
  8. +
+

Tempest 安装

+

Tempest是OpenStack的集成测试服务,如果用户需要全面自动化测试已安装的OpenStack环境的功能,则推荐使用该组件。否则,可以不用安装。

+
    +
  1. +

    安装Tempest

    +
    yum install openstack-tempest
    +
  2. +
  3. +

    初始化目录

    +
    tempest init mytest
    +
  4. +
  5. +

    修改配置文件。

    +
    cd mytest
    +vi etc/tempest.conf
    +

    tempest.conf中需要配置当前OpenStack环境的信息,具体内容可以参考官方示例

    +
  6. +
  7. +

    执行测试

    +
    tempest run
    +
  8. +
  9. +

    安装tempest扩展(可选)

    OpenStack各个服务本身也提供了一些tempest测试包,用户可以安装这些包来丰富tempest的测试内容。在Train中,我们提供了Cinder、Glance、Keystone、Ironic、Trove的扩展测试,用户可以执行如下命令进行安装使用:

    yum install python3-cinder-tempest-plugin python3-glance-tempest-plugin python3-ironic-tempest-plugin python3-keystone-tempest-plugin python3-trove-tempest-plugin

    +
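    安装扩展后,可以用tempest run的过滤参数只运行某个插件的用例。下面是一个简单示意(正则表达式仅为示例,请根据实际发现的用例名调整):

    ```shell
    cd mytest
    # 列出当前环境可发现的全部测试用例
    tempest run --list-tests
    # 仅运行cinder tempest插件中的测试用例
    tempest run --regex cinder_tempest_plugin
    ```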
  10. +
+

Ironic 安装

+

Ironic是OpenStack的裸金属服务,如果用户需要进行裸机部署则推荐使用该组件。否则,可以不用安装。

+
    +
  1. 设置数据库
  2. +
+

裸金属服务在数据库中存储信息,创建一个ironic用户可以访问的ironic数据库,替换IRONIC_DBPASSWORD为合适的密码

+

mysql -u root -p
+
+MariaDB [(none)]> CREATE DATABASE ironic CHARACTER SET utf8;
+MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'localhost' \
+IDENTIFIED BY 'IRONIC_DBPASSWORD';
+MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'%' \
+IDENTIFIED BY 'IRONIC_DBPASSWORD';
+2. 安装软件包

+
yum install openstack-ironic-api openstack-ironic-conductor python3-ironicclient
+

启动服务

+
systemctl enable openstack-ironic-api openstack-ironic-conductor
+systemctl start openstack-ironic-api openstack-ironic-conductor
+
    +
  1. 创建服务用户认证
  2. +
+

1、创建Bare Metal服务用户

+
openstack user create --password IRONIC_PASSWORD \
+                      --email ironic@example.com ironic
+openstack role add --project service --user ironic admin
+openstack service create --name ironic \
+                         --description "Ironic baremetal provisioning service" baremetal
+

2、创建Bare Metal服务访问入口

+
openstack endpoint create --region RegionOne baremetal admin http://$IRONIC_NODE:6385
+openstack endpoint create --region RegionOne baremetal public http://$IRONIC_NODE:6385
+openstack endpoint create --region RegionOne baremetal internal http://$IRONIC_NODE:6385
+
    +
  1. 配置ironic-api服务
  2. +
+

配置文件路径/etc/ironic/ironic.conf

+

1、通过connection选项配置数据库的位置,如下所示,替换IRONIC_DBPASSWORD为ironic用户的密码,替换DB_IP为DB服务器所在的IP地址:

+
[database]
+
+# The SQLAlchemy connection string used to connect to the
+# database (string value)
+
+connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic
+

2、通过以下选项配置ironic-api服务使用RabbitMQ消息代理,替换RPC_*为RabbitMQ的详细地址和凭证

+
[DEFAULT]
+
+# A URL representing the messaging driver to use and its full
+# configuration. (string value)
+
+transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
+

用户也可自行使用json-rpc方式替换rabbitmq

+
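若选择json-rpc方式,可参考如下简要配置示意(选项基于Ironic对json-rpc的支持,CONDUCTOR_HOST_IP为需替换的占位符,具体以所用版本的官方文档为准):

```
[DEFAULT]
# 使用json-rpc替代消息队列作为RPC传输方式
rpc_transport = json-rpc

[json_rpc]
# json-rpc服务监听的地址与端口(默认8089),需保证ironic-api与ironic-conductor网络互通
host_ip = CONDUCTOR_HOST_IP
port = 8089
```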

3、配置ironic-api服务使用身份认证服务的凭证,替换PUBLIC_IDENTITY_IP为身份认证服务器的公共IP,替换PRIVATE_IDENTITY_IP为身份认证服务器的私有IP,替换IRONIC_PASSWORD为身份认证服务中ironic用户的密码:

+
[DEFAULT]
+
+# Authentication strategy used by ironic-api: one of
+# "keystone" or "noauth". "noauth" should not be used in a
+# production environment because all authentication will be
+# disabled. (string value)
+
+auth_strategy=keystone
+
+[keystone_authtoken]
+# Authentication type to load (string value)
+auth_type=password
+# Complete public Identity API endpoint (string value)
+www_authenticate_uri=http://PUBLIC_IDENTITY_IP:5000
+# Complete admin Identity API endpoint. (string value)
+auth_url=http://PRIVATE_IDENTITY_IP:5000
+# Service username. (string value)
+username=ironic
+# Service account password. (string value)
+password=IRONIC_PASSWORD
+# Service tenant name. (string value)
+project_name=service
+# Domain name containing project (string value)
+project_domain_name=Default
+# User's domain name (string value)
+user_domain_name=Default
+
+

4、创建裸金属服务数据库表

+
ironic-dbsync --config-file /etc/ironic/ironic.conf create_schema
+

5、重启ironic-api服务

+
sudo systemctl restart openstack-ironic-api
+
    +
  1. 配置ironic-conductor服务
  2. +
+

1、替换HOST_IP为conductor host的IP

+
[DEFAULT]
+
+# IP address of this host. If unset, will determine the IP
+# programmatically. If unable to do so, will use "127.0.0.1".
+# (string value)
+
+my_ip=HOST_IP
+

2、配置数据库的位置,ironic-conductor应该使用和ironic-api相同的配置。替换IRONIC_DBPASSWORD为ironic用户的密码,替换DB_IP为DB服务器所在的IP地址:

+
[database]
+
+# The SQLAlchemy connection string to use to connect to the
+# database. (string value)
+
+connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic
+

3、通过以下选项配置ironic-api服务使用RabbitMQ消息代理,ironic-conductor应该使用和ironic-api相同的配置,替换RPC_*为RabbitMQ的详细地址和凭证

+
[DEFAULT]
+
+# A URL representing the messaging driver to use and its full
+# configuration. (string value)
+
+transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
+

用户也可自行使用json-rpc方式替换rabbitmq

+

4、配置凭证访问其他OpenStack服务

+

为了与其他OpenStack服务进行通信,裸金属服务在请求其他服务时需要使用服务用户与OpenStack Identity服务进行认证。这些用户的凭据必须在与相应服务相关的每个配置文件中进行配置。

+
[neutron] - 访问OpenStack网络服务
+[glance] - 访问OpenStack镜像服务
+[swift] - 访问OpenStack对象存储服务
+[cinder] - 访问OpenStack块存储服务
+[inspector] - 访问OpenStack裸金属introspection服务
+[service_catalog] - 一个特殊项用于保存裸金属服务使用的凭证,该凭证用于发现注册在OpenStack身份认证服务目录中的自己的API URL端点
+

简单起见,可以对所有服务使用同一个服务用户。为了向后兼容,该用户应该和ironic-api服务的[keystone_authtoken]所配置的为同一个用户。但这不是必须的,也可以为每个服务创建并配置不同的服务用户。

+

在下面的示例中,用户访问OpenStack网络服务的身份验证信息配置为:

+
网络服务部署在名为RegionOne的身份认证服务域中,仅在服务目录中注册公共端点接口
+
+请求时使用特定的CA SSL证书进行HTTPS连接
+
+与ironic-api服务配置相同的服务用户
+
+动态密码认证插件基于其他选项发现合适的身份认证服务API版本
+
[neutron]
+
+# Authentication type to load (string value)
+auth_type = password
+# Authentication URL (string value)
+auth_url=https://IDENTITY_IP:5000/
+# Username (string value)
+username=ironic
+# User's password (string value)
+password=IRONIC_PASSWORD
+# Project name to scope to (string value)
+project_name=service
+# Domain ID containing project (string value)
+project_domain_id=default
+# User's domain id (string value)
+user_domain_id=default
+# PEM encoded Certificate Authority to use when verifying
+# HTTPs connections. (string value)
+cafile=/opt/stack/data/ca-bundle.pem
+# The default region_name for endpoint URL discovery. (string
+# value)
+region_name = RegionOne
+# List of interfaces, in order of preference, for endpoint
+# URL. (list value)
+valid_interfaces=public
+

默认情况下,为了与其他服务进行通信,裸金属服务会尝试通过身份认证服务的服务目录发现该服务合适的端点。如果希望对一个特定服务使用一个不同的端点,则在裸金属服务的配置文件中通过endpoint_override选项进行指定:

+
[neutron] ... endpoint_override = <NEUTRON_API_ADDRESS>
+

5、配置允许的驱动程序和硬件类型

+

通过设置enabled_hardware_types设置ironic-conductor服务允许使用的硬件类型:

+
[DEFAULT] enabled_hardware_types = ipmi
+

配置硬件接口:

+
enabled_boot_interfaces = pxe enabled_deploy_interfaces = direct,iscsi enabled_inspect_interfaces = inspector enabled_management_interfaces = ipmitool enabled_power_interfaces = ipmitool
+

配置接口默认值:

+
[DEFAULT] default_deploy_interface = direct default_network_interface = neutron
+

如果启用了任何使用Direct deploy的驱动,必须安装和配置镜像服务的Swift后端。Ceph对象网关(RADOS网关)也支持作为镜像服务的后端。

+

6、重启ironic-conductor服务

+
sudo systemctl restart openstack-ironic-conductor
+
    +
  1. +

    配置httpd服务

    +
  2. +
  3. +

    创建ironic要使用的httpd的root目录并设置属主属组,目录路径要和/etc/ironic/ironic.conf中[deploy]组中http_root 配置项指定的路径要一致。

    +
    mkdir -p /var/lib/ironic/httproot
    chown ironic.ironic /var/lib/ironic/httproot
    +
  4. +
  5. +

    安装和配置httpd服务

    +
      +
    1. +

      安装httpd服务,已有请忽略

      +

      yum install httpd -y

      2. 创建/etc/httpd/conf.d/openstack-ironic-httpd.conf文件,内容如下:

      +
      Listen 8080
      +
      +<VirtualHost *:8080>
      +    ServerName ironic.openeuler.com
      +
      +    ErrorLog "/var/log/httpd/openstack-ironic-httpd-error_log"
      +    CustomLog "/var/log/httpd/openstack-ironic-httpd-access_log" "%h %l %u %t \"%r\" %>s %b"
      +
      +    DocumentRoot "/var/lib/ironic/httproot"
      +    <Directory "/var/lib/ironic/httproot">
      +        Options Indexes FollowSymLinks
      +        Require all granted
      +    </Directory>
      +    LogLevel warn
      +    AddDefaultCharset UTF-8
      +    EnableSendfile on
      +</VirtualHost>
      +
      +

      注意监听的端口要和/etc/ironic/ironic.conf里[deploy]选项中http_url配置项中指定的端口一致。

      +
    2. +
    3. +

      重启httpd服务。

      +

      systemctl restart httpd

      7. deploy ramdisk镜像制作

      +
    4. +
    +
  6. +
+

T版的ramdisk镜像支持通过ironic-python-agent服务或disk-image-builder工具制作,也可以使用社区最新的ironic-python-agent-builder,用户也可以自行选择其他工具制作。

若使用T版原生工具,则需要安装对应的软件包。

+
yum install openstack-ironic-python-agent
+或者
+yum install diskimage-builder
+

具体的使用方法可以参考官方文档

+

这里介绍下使用ironic-python-agent-builder构建ironic使用的deploy镜像的完整过程。

+
    +
  1. +

    安装 ironic-python-agent-builder

    +
    1. 安装工具:
    +
    +    ```shell
    +    pip install ironic-python-agent-builder
    +    ```
    +
    +2. 修改以下文件中的python解释器:
    +
    +    ```shell
    +    /usr/bin/yum /usr/libexec/urlgrabber-ext-down
    +    ```
    +
    +3. 安装其它必须的工具:
    +
    +    ```shell
    +    yum install git
    +    ```
    +
    +    由于`DIB`依赖`semanage`命令,所以在制作镜像之前确定该命令是否可用:`semanage --help`,如果提示无此命令,安装即可:
    +
    +    ```shell
    +    # 先查询需要安装哪个包
    +    [root@localhost ~]# yum provides /usr/sbin/semanage
    +    已加载插件:fastestmirror
    +    Loading mirror speeds from cached hostfile
    +    * base: mirror.vcu.edu
    +    * extras: mirror.vcu.edu
    +    * updates: mirror.math.princeton.edu
    +    policycoreutils-python-2.5-34.el7.aarch64 : SELinux policy core python utilities
    +    源    :base
    +    匹配来源:
    +    文件名    :/usr/sbin/semanage
    +    # 安装
    +    [root@localhost ~]# yum install policycoreutils-python
    +    ```
    +
  2. +
  3. +

    制作镜像

    +
    如果是`arm`架构,需要添加:
    +```shell
    +export ARCH=aarch64
    +```
    +
    +基本用法:
    +
    +```shell
    +usage: ironic-python-agent-builder [-h] [-r RELEASE] [-o OUTPUT] [-e ELEMENT]
    +                                    [-b BRANCH] [-v] [--extra-args EXTRA_ARGS]
    +                                    distribution
    +
    +positional arguments:
    +    distribution          Distribution to use
    +
    +optional arguments:
    +    -h, --help            show this help message and exit
    +    -r RELEASE, --release RELEASE
    +                        Distribution release to use
    +    -o OUTPUT, --output OUTPUT
    +                        Output base file name
    +    -e ELEMENT, --element ELEMENT
    +                        Additional DIB element to use
    +    -b BRANCH, --branch BRANCH
    +                        If set, override the branch that is used for ironic-
    +                        python-agent and requirements
    +    -v, --verbose         Enable verbose logging in diskimage-builder
    +    --extra-args EXTRA_ARGS
    +                        Extra arguments to pass to diskimage-builder
    +```
    +
    +举例说明:
    +
    +```shell
    +ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky
    +```
    +
  4. +
  5. +

    允许ssh登录

    +
    初始化环境变量,然后制作镜像:
    +
    +```shell
    +export DIB_DEV_USER_USERNAME=ipa \
    +export DIB_DEV_USER_PWDLESS_SUDO=yes \
    +export DIB_DEV_USER_PASSWORD='123'
    +ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky -e selinux-permissive -e devuser
    +```
    +
  6. +
  7. +

    指定代码仓库

    +
    初始化对应的环境变量,然后制作镜像:
    +
    +```shell
    +# 指定仓库地址以及版本
    +DIB_REPOLOCATION_ironic_python_agent=git@172.20.2.149:liuzz/ironic-python-agent.git
    +DIB_REPOREF_ironic_python_agent=origin/develop
    +
    +# 直接从gerrit上clone代码
    +DIB_REPOLOCATION_ironic_python_agent=https://review.opendev.org/openstack/ironic-python-agent
    +DIB_REPOREF_ironic_python_agent=refs/changes/43/701043/1
    +```
    +
    +参考:[source-repositories](https://docs.openstack.org/diskimage-builder/latest/elements/source-repositories/README.html)。
    +
    +指定仓库地址及版本验证成功。
    +
  8. +
  9. +

    注意

    原生的openstack里的pxe配置文件的模版不支持arm64架构,需要自己对原生openstack代码进行修改:

    +

    在T版中,社区的ironic仍然不支持arm64位的uefi pxe启动,表现为生成的grub.cfg文件(一般位于/tftpboot/下)格式不对而导致pxe启动失败

    +

    需要用户对生成grub.cfg的代码逻辑自行修改。

    +

    ironic向ipa发送查询命令执行状态请求的tls报错:

    +

    T版的ipa和ironic默认都会开启tls认证的方式向对方发送请求,跟据官网的说明进行关闭即可。

    +
      +
    1. 修改ironic配置文件(/etc/ironic/ironic.conf)下面的配置中添加ipa-insecure=1:
    2. +
    +
    [agent]
    +verify_ca = False
    +
    +[pxe]
    +pxe_append_params = nofb nomodeset vga=normal coreos.autologin ipa-insecure=1
    +
      +
    1. ramdisk镜像中添加ipa配置文件/etc/ironic_python_agent/ironic_python_agent.conf并配置tls的配置如下:
    2. +
    +

    /etc/ironic_python_agent/ironic_python_agent.conf (需要提前创建/etc/ironic_python_agent目录)

    +
    [DEFAULT]
    +enable_auto_tls = False
    +

    设置权限:

    +
    chown -R ipa.ipa /etc/ironic_python_agent/
    +
      +
    1. 修改ipa服务的服务启动文件,添加配置文件选项
    2. +
    +

    vim usr/lib/systemd/system/ironic-python-agent.service

    +
    [Unit]
    +Description=Ironic Python Agent
    +After=network-online.target
    +
    +[Service]
    +ExecStartPre=/sbin/modprobe vfat
    +ExecStart=/usr/local/bin/ironic-python-agent --config-file /etc/ironic_python_agent/ironic_python_agent.conf
    +Restart=always
    +RestartSec=30s
    +
    +[Install]
    +WantedBy=multi-user.target
    +
  10. +
+

在Train中,我们还提供了ironic-inspector等服务,用户可根据自身需求安装。

+

Kolla 安装

+

Kolla为OpenStack服务提供生产环境可用的容器化部署的功能。

+

Kolla的安装十分简单,只需要安装对应的RPM包即可

+
yum install openstack-kolla openstack-kolla-ansible
+

安装完后,就可以使用kolla-ansible, kolla-build, kolla-genpwd, kolla-mergepwd等命令进行相关的镜像制作和容器环境部署了。

+
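下面给出一个常见的使用流程示意(inventory文件路径因安装方式而异,此处路径仅为假设,请以实际环境为准):

```shell
# 生成各服务的随机密码,写入/etc/kolla/passwords.yml
kolla-genpwd
# 基于all-in-one的inventory执行部署前准备、检查与部署
kolla-ansible -i /usr/share/kolla-ansible/ansible/inventory/all-in-one bootstrap-servers
kolla-ansible -i /usr/share/kolla-ansible/ansible/inventory/all-in-one prechecks
kolla-ansible -i /usr/share/kolla-ansible/ansible/inventory/all-in-one deploy
```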

Trove 安装

+

Trove是OpenStack的数据库服务,如果用户使用OpenStack提供的数据库服务则推荐使用该组件。否则,可以不用安装。

+
    +
  1. 设置数据库
  2. +
+

数据库服务在数据库中存储信息,创建一个trove用户可以访问的trove数据库,替换TROVE_DBPASSWORD为合适的密码

+
mysql -u root -p
+
+MariaDB [(none)]> CREATE DATABASE trove CHARACTER SET utf8;
+MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'localhost' \
+IDENTIFIED BY 'TROVE_DBPASSWORD';
+MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'%' \
+IDENTIFIED BY 'TROVE_DBPASSWORD';
+
    +
  1. 创建服务用户认证
  2. +
+

1、创建Trove服务用户

+

openstack user create --domain default --password-prompt trove
+openstack role add --project service --user trove admin
+openstack service create --name trove --description "Database" database
+ 解释: 命令会提示输入trove用户的密码,后文中以TROVE_PASSWORD指代该密码

+

2、创建Database服务访问入口

+
openstack endpoint create --region RegionOne database public http://controller:8779/v1.0/%\(tenant_id\)s
+openstack endpoint create --region RegionOne database internal http://controller:8779/v1.0/%\(tenant_id\)s
+openstack endpoint create --region RegionOne database admin http://controller:8779/v1.0/%\(tenant_id\)s
+
    +
  1. 安装和配置Trove各组件
  2. +
+

1、安装Trove包

yum install openstack-trove python3-troveclient


+2. 配置`trove.conf`
+```shell script
+vim /etc/trove/trove.conf
+
+ [DEFAULT]
+ log_dir = /var/log/trove
+ trove_auth_url = http://controller:5000/
+ nova_compute_url = http://controller:8774/v2
+ cinder_url = http://controller:8776/v1
+ swift_url = http://controller:8080/v1/AUTH_
+ rpc_backend = rabbit
+ transport_url = rabbit://openstack:RABBIT_PASS@controller:5672
+ auth_strategy = keystone
+ add_addresses = True
+ api_paste_config = /etc/trove/api-paste.ini
+ nova_proxy_admin_user = admin
+ nova_proxy_admin_pass = ADMIN_PASSWORD
+ nova_proxy_admin_tenant_name = service
+ taskmanager_manager = trove.taskmanager.manager.Manager
+ use_nova_server_config_drive = True
+ # Set these if using Neutron Networking
+ network_driver = trove.network.neutron.NeutronDriver
+ network_label_regex = .*
+
+ [database]
+ connection = mysql+pymysql://trove:TROVE_DBPASSWORD@controller/trove
+
+ [keystone_authtoken]
+ www_authenticate_uri = http://controller:5000/
+ auth_url = http://controller:5000/
+ auth_type = password
+ project_domain_name = default
+ user_domain_name = default
+ project_name = service
+ username = trove
+ password = TROVE_PASSWORD
+ **解释:**
+ - [Default]分组中nova_compute_url和cinder_url为Nova和Cinder在Keystone中创建的endpoint
+ - nova_proxy_XXX为一个能访问Nova服务的用户信息,上例中使用admin用户为例
+ - transport_url为RabbitMQ连接信息,RABBIT_PASS替换为RabbitMQ的密码
+ - [database]分组中的connection为前面在mysql中为Trove创建的数据库信息
+ - Trove的用户信息中TROVE_PASSWORD替换为实际trove用户的密码

+
    +
  3. 配置trove-guestagent.conf

vim /etc/trove/trove-guestagent.conf

rabbit_host = controller
rabbit_password = RABBIT_PASS
trove_auth_url = http://controller:5000/

**解释:** `guestagent`是trove中一个独立组件,需要预先内置到Trove通过Nova创建的虚拟
+机镜像中,在创建好数据库实例后,会起guestagent进程,负责通过消息队列(RabbitMQ)向Trove上
+报心跳,因此需要配置RabbitMQ的用户和密码信息。
+**从Victoria版开始,Trove使用一个统一的镜像来跑不同类型的数据库,数据库服务运行在Guest虚拟机的Docker容器中。**
+- `RABBIT_PASS`替换为RabbitMQ的密码  
+
+4. 同步`Trove`数据库表
+
+su -s /bin/sh -c "trove-manage db_sync" trove

+
    +
  5. 完成安装配置

     配置Trove服务自启动:

     systemctl enable openstack-trove-api.service \
     openstack-trove-taskmanager.service \
     openstack-trove-conductor.service

     启动服务:

     systemctl start openstack-trove-api.service \
     openstack-trove-taskmanager.service \
     openstack-trove-conductor.service
+

Swift 安装

+

Swift 提供了弹性可伸缩、高可用的分布式对象存储服务,适合存储大规模非结构化数据。

+
    +
  1. +

    创建服务凭证、API端点。

    +

    创建服务凭证

    +
    #创建swift用户:
    +openstack user create --domain default --password-prompt swift                 
    +#为swift用户添加admin角色:
    +openstack role add --project service --user swift admin                        
    +#创建swift服务实体:
    +openstack service create --name swift --description "OpenStack Object Storage" object-store                                                                   
    +

    创建swift API 端点:

    +
    openstack endpoint create --region RegionOne object-store public http://controller:8080/v1/AUTH_%\(project_id\)s                            
    +openstack endpoint create --region RegionOne object-store internal http://controller:8080/v1/AUTH_%\(project_id\)s                            
    +openstack endpoint create --region RegionOne object-store admin http://controller:8080/v1                                                  
    +
  2. +
  3. +

    安装软件包:

    +
    yum install openstack-swift-proxy python3-swiftclient python3-keystoneclient python3-keystonemiddleware memcached (CTL)
    +
  4. +
  5. +

    配置proxy-server相关配置

    +
  6. +
+

Swift RPM包里已经包含了一个基本可用的proxy-server.conf,只需要手动修改其中的ip和swift password即可。

+
***注意***
+
+**注意替换password为您在身份服务中为swift用户选择的密码**
+
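其中需要修改的主要是[filter:authtoken]部分,下面是一个简要示意(字段为社区通用配置,SWIFT_PASS为需替换的占位符):

```
[filter:authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = swift
password = SWIFT_PASS
```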
    +
  1. +

    安装和配置存储节点 (STG)

    +

    安装支持的程序包: +

    yum install xfsprogs rsync

    +

    将/dev/vdb和/dev/vdc设备格式化为 XFS

    +
    mkfs.xfs /dev/vdb
    +mkfs.xfs /dev/vdc
    +

    创建挂载点目录结构:

    +
    mkdir -p /srv/node/vdb
    +mkdir -p /srv/node/vdc
    +

    找到新分区的 UUID:

    +
    blkid
    +

    编辑/etc/fstab文件并将以下内容添加到其中:

    +
    UUID="<UUID-from-output-above>" /srv/node/vdb xfs noatime 0 2
    +UUID="<UUID-from-output-above>" /srv/node/vdc xfs noatime 0 2
    +

    挂载设备:

    +

    mount /srv/node/vdb
    +mount /srv/node/vdc
    +注意

    +

    如果用户不需要容灾功能,以上步骤只需要创建一个设备即可,同时可以跳过下面的rsync配置

    +

    (可选)创建或编辑/etc/rsyncd.conf文件以包含以下内容:

    +

    [DEFAULT]
    +uid = swift
    +gid = swift
    +log file = /var/log/rsyncd.log
    +pid file = /var/run/rsyncd.pid
    +address = MANAGEMENT_INTERFACE_IP_ADDRESS
    +
    +[account]
    +max connections = 2
    +path = /srv/node/
    +read only = False
    +lock file = /var/lock/account.lock
    +
    +[container]
    +max connections = 2
    +path = /srv/node/
    +read only = False
    +lock file = /var/lock/container.lock
    +
    +[object]
    +max connections = 2
    +path = /srv/node/
    +read only = False
    +lock file = /var/lock/object.lock
    +替换MANAGEMENT_INTERFACE_IP_ADDRESS为存储节点上管理网络的IP地址

    +

    启动rsyncd服务并配置它在系统启动时启动:

    +
    systemctl enable rsyncd.service
    +systemctl start rsyncd.service
    +
  2. +
  3. +

    在存储节点安装和配置组件 (STG)

    +

    安装软件包:

    +
    yum install openstack-swift-account openstack-swift-container openstack-swift-object
    +

    编辑/etc/swift目录的account-server.conf、container-server.conf和object-server.conf文件,替换bind_ip为存储节点上管理网络的IP地址。

    +
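    如果软件包自带的配置文件中已存在bind_ip一项,可用类似如下命令批量替换(IP仅为示例;若该项被注释或格式不同,请手动编辑):

    ```shell
    # 假设存储节点管理网络IP为10.0.0.51
    sed -i 's/^bind_ip.*/bind_ip = 10.0.0.51/' \
        /etc/swift/account-server.conf \
        /etc/swift/container-server.conf \
        /etc/swift/object-server.conf
    ```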

    确保挂载点目录结构的正确所有权:

    +
    chown -R swift:swift /srv/node
    +

    创建recon目录并确保其拥有正确的所有权:

    +
    mkdir -p /var/cache/swift
    +chown -R root:swift /var/cache/swift
    +chmod -R 775 /var/cache/swift
    +
  4. +
  5. +

    创建账号环 (CTL)

    +

    切换到/etc/swift目录。

    +
    cd /etc/swift
    +

    创建基础account.builder文件:

    +
    swift-ring-builder account.builder create 10 1 1
    +

    将每个存储节点添加到环中:

    +
    swift-ring-builder account.builder add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6202  --device DEVICE_NAME --weight DEVICE_WEIGHT
    +

    替换STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS为存储节点上管理网络的IP地址。替换DEVICE_NAME为同一存储节点上的存储设备名称

    +

    注意:对每个存储节点上的每个存储设备重复此命令

    +

    验证环的内容:

    +
    swift-ring-builder account.builder
    +

    重新平衡环:

    +
    swift-ring-builder account.builder rebalance
    +
  6. +
  7. +

    创建容器环 (CTL)

    +

    切换到/etc/swift目录。

    +

    创建基础container.builder文件:

    +
       swift-ring-builder container.builder create 10 1 1
    +

    将每个存储节点添加到环中:

    +
    swift-ring-builder container.builder \
    +  add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6201 \
    +  --device DEVICE_NAME --weight 100
    +
    +

    替换STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS为存储节点上管理网络的IP地址。替换DEVICE_NAME为同一存储节点上的存储设备名称

    +

    注意:对每个存储节点上的每个存储设备重复此命令

    +

    验证环的内容:

    +
    swift-ring-builder container.builder
    +

    重新平衡环:

    +
    swift-ring-builder container.builder rebalance
    +
  8. +
  9. +

    创建对象环 (CTL)

    +

    切换到/etc/swift目录。

    +

    创建基础object.builder文件:

    +
    swift-ring-builder object.builder create 10 1 1
    +

    将每个存储节点添加到环中

    +
     swift-ring-builder object.builder \
    +  add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6200 \
    +  --device DEVICE_NAME --weight 100
    +

    替换STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS为存储节点上管理网络的IP地址。替换DEVICE_NAME为同一存储节点上的存储设备名称

    +

    注意:对每个存储节点上的每个存储设备重复此命令

    +

    验证环的内容:

    +
    swift-ring-builder object.builder
    +

    重新平衡环:

    +
    swift-ring-builder object.builder rebalance
    +

    分发环配置文件:

    +

    将account.ring.gz、container.ring.gz以及object.ring.gz文件复制到每个存储节点和运行代理服务的任何其他节点上的/etc/swift目录。

    +
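    复制可以使用scp等方式完成,例如(主机名object1、object2仅为示例):

    ```shell
    scp account.ring.gz container.ring.gz object.ring.gz root@object1:/etc/swift/
    scp account.ring.gz container.ring.gz object.ring.gz root@object2:/etc/swift/
    ```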
  10. +
  11. +

    完成安装

    +

    编辑/etc/swift/swift.conf文件

    +
    [swift-hash]
    +swift_hash_path_suffix = test-hash
    +swift_hash_path_prefix = test-hash
    +
    +[storage-policy:0]
    +name = Policy-0
    +default = yes
    +

    用唯一值替换 test-hash

    +
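    唯一值可以随机生成,例如使用openssl分别为前缀和后缀各生成一个值(集群所有节点必须使用相同的值):

    ```shell
    openssl rand -hex 10
    ```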

    将swift.conf文件复制到每个存储节点和运行代理服务的任何其他节点上的/etc/swift目录。

    +

    在所有节点上,确保配置目录的正确所有权:

    +
    chown -R root:swift /etc/swift
    +

    在控制器节点和运行代理服务的任何其他节点上,启动对象存储代理服务及其依赖项,并将它们配置为在系统启动时启动:

    +
    systemctl enable openstack-swift-proxy.service memcached.service
    +systemctl start openstack-swift-proxy.service memcached.service
    +

    在存储节点上,启动对象存储服务并将它们配置为在系统启动时启动:

    +
    systemctl enable openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service
    +
    +systemctl start openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service
    +
    +systemctl enable openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service
    +
    +systemctl start openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service
    +
    +systemctl enable openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service
    +
    +systemctl start openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service
    +
  12. +
+

Cyborg 安装

+

Cyborg为OpenStack提供加速器设备的支持,包括 GPU, FPGA, ASIC, NP, SoCs, NVMe/NOF SSDs, ODP, DPDK/SPDK等等。

+
    +
  1. 初始化对应数据库
  2. +
+
CREATE DATABASE cyborg;
+GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'localhost' IDENTIFIED BY 'CYBORG_DBPASS';
+GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'%' IDENTIFIED BY 'CYBORG_DBPASS';
+
    +
  1. 创建对应Keystone资源对象
  2. +
+
$ openstack user create --domain default --password-prompt cyborg
+$ openstack role add --project service --user cyborg admin
+$ openstack service create --name cyborg --description "Acceleration Service" accelerator
+
+$ openstack endpoint create --region RegionOne \
+  accelerator public http://<cyborg-ip>:6666/v1
+$ openstack endpoint create --region RegionOne \
+  accelerator internal http://<cyborg-ip>:6666/v1
+$ openstack endpoint create --region RegionOne \
+  accelerator admin http://<cyborg-ip>:6666/v1
+
    +
  1. 安装Cyborg
  2. +
+
yum install openstack-cyborg
+
    +
  1. 配置Cyborg
  2. +
+

修改/etc/cyborg/cyborg.conf

+
[DEFAULT]
+transport_url = rabbit://%RABBITMQ_USER%:%RABBITMQ_PASSWORD%@%OPENSTACK_HOST_IP%:5672/
+use_syslog = False
+state_path = /var/lib/cyborg
+debug = True
+
+[database]
+connection = mysql+pymysql://%DATABASE_USER%:%DATABASE_PASSWORD%@%OPENSTACK_HOST_IP%/cyborg
+
+[service_catalog]
+project_domain_id = default
+user_domain_id = default
+project_name = service
+password = PASSWORD
+username = cyborg
+auth_url = http://%OPENSTACK_HOST_IP%/identity
+auth_type = password
+
+[placement]
+project_domain_name = Default
+project_name = service
+user_domain_name = Default
+password = PASSWORD
+username = placement
+auth_url = http://%OPENSTACK_HOST_IP%/identity
+auth_type = password
+
+[keystone_authtoken]
+memcached_servers = localhost:11211
+project_domain_name = Default
+project_name = service
+user_domain_name = Default
+password = PASSWORD
+username = cyborg
+auth_url = http://%OPENSTACK_HOST_IP%/identity
+auth_type = password
+

自行修改对应的用户名、密码、IP等信息

+
    +
  1. 同步数据库表格
  2. +
+
cyborg-dbsync --config-file /etc/cyborg/cyborg.conf upgrade
+
    +
  1. 启动Cyborg服务
  2. +
+
systemctl enable openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent
+systemctl start openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent
+

Aodh 安装

+
    +
  1. 创建数据库
  2. +
+
CREATE DATABASE aodh;
+
+GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'localhost' IDENTIFIED BY 'AODH_DBPASS';
+
+GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'%' IDENTIFIED BY 'AODH_DBPASS';
+
    +
  1. 创建对应Keystone资源对象
  2. +
+
openstack user create --domain default --password-prompt aodh
+
+openstack role add --project service --user aodh admin
+
+openstack service create --name aodh --description "Telemetry" alarming
+
+openstack endpoint create --region RegionOne alarming public http://controller:8042
+
+openstack endpoint create --region RegionOne alarming internal http://controller:8042
+
+openstack endpoint create --region RegionOne alarming admin http://controller:8042
+
    +
  1. 安装Aodh
  2. +
+
yum install openstack-aodh-api openstack-aodh-evaluator openstack-aodh-notifier openstack-aodh-listener openstack-aodh-expirer python3-aodhclient
+
    +
  1. 修改配置文件
  2. +
+
[database]
+connection = mysql+pymysql://aodh:AODH_DBPASS@controller/aodh
+
+[DEFAULT]
+transport_url = rabbit://openstack:RABBIT_PASS@controller
+auth_strategy = keystone
+
+[keystone_authtoken]
+www_authenticate_uri = http://controller:5000
+auth_url = http://controller:5000
+memcached_servers = controller:11211
+auth_type = password
+project_domain_id = default
+user_domain_id = default
+project_name = service
+username = aodh
+password = AODH_PASS
+
+[service_credentials]
+auth_type = password
+auth_url = http://controller:5000/v3
+project_domain_id = default
+user_domain_id = default
+project_name = service
+username = aodh
+password = AODH_PASS
+interface = internalURL
+region_name = RegionOne
+
    +
  1. 初始化数据库
  2. +
+
aodh-dbsync
+
    +
  1. 启动Aodh服务
  2. +
+
systemctl enable openstack-aodh-api.service openstack-aodh-evaluator.service openstack-aodh-notifier.service openstack-aodh-listener.service
+
+systemctl start openstack-aodh-api.service openstack-aodh-evaluator.service openstack-aodh-notifier.service openstack-aodh-listener.service
+

Gnocchi 安装

+
    +
  1. 创建数据库
  2. +
+
CREATE DATABASE gnocchi;
+
+GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'localhost' IDENTIFIED BY 'GNOCCHI_DBPASS';
+
+GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'%' IDENTIFIED BY 'GNOCCHI_DBPASS';
+
    +
  1. 创建对应Keystone资源对象
  2. +
+
openstack user create --domain default --password-prompt gnocchi
+
+openstack role add --project service --user gnocchi admin
+
+openstack service create --name gnocchi --description "Metric Service" metric
+
+openstack endpoint create --region RegionOne metric public http://controller:8041
+
+openstack endpoint create --region RegionOne metric internal http://controller:8041
+
+openstack endpoint create --region RegionOne metric admin http://controller:8041
+
    +
  1. 安装Gnocchi
  2. +
+
yum install openstack-gnocchi-api openstack-gnocchi-metricd python3-gnocchiclient
+
    +
  1. 修改配置文件/etc/gnocchi/gnocchi.conf
  2. +
+
[api]
+auth_mode = keystone
+port = 8041
+uwsgi_mode = http-socket
+
+[keystone_authtoken]
+auth_type = password
+auth_url = http://controller:5000/v3
+project_domain_name = Default
+user_domain_name = Default
+project_name = service
+username = gnocchi
+password = GNOCCHI_PASS
+interface = internalURL
+region_name = RegionOne
+
+[indexer]
+url = mysql+pymysql://gnocchi:GNOCCHI_DBPASS@controller/gnocchi
+
+[storage]
+# coordination_url is not required but specifying one will improve
+# performance with better workload division across workers.
+coordination_url = redis://controller:6379
+file_basepath = /var/lib/gnocchi
+driver = file
+
    +
  1. 初始化数据库
  2. +
+
gnocchi-upgrade
+
    +
  1. 启动Gnocchi服务
  2. +
+
systemctl enable openstack-gnocchi-api.service openstack-gnocchi-metricd.service
+
+systemctl start openstack-gnocchi-api.service openstack-gnocchi-metricd.service
+

Ceilometer 安装

+
    +
  1. 创建对应Keystone资源对象
  2. +
+
openstack user create --domain default --password-prompt ceilometer
+
+openstack role add --project service --user ceilometer admin
+
+openstack service create --name ceilometer --description "Telemetry" metering
+
    +
  1. 安装Ceilometer
  2. +
+
yum install openstack-ceilometer-notification openstack-ceilometer-central
+
    +
  1. 修改配置文件/etc/ceilometer/pipeline.yaml
  2. +
+
publishers:
+    # set address of Gnocchi
+    # + filter out Gnocchi-related activity meters (Swift driver)
+    # + set default archive policy
+    - gnocchi://?filter_project=service&archive_policy=low
+
    +
  1. 修改配置文件/etc/ceilometer/ceilometer.conf
  2. +
+
[DEFAULT]
+transport_url = rabbit://openstack:RABBIT_PASS@controller
+
+[service_credentials]
+auth_type = password
+auth_url = http://controller:5000/v3
+project_domain_id = default
+user_domain_id = default
+project_name = service
+username = ceilometer
+password = CEILOMETER_PASS
+interface = internalURL
+region_name = RegionOne
+
    +
  1. 初始化数据库
  2. +
+
ceilometer-upgrade
+
    +
  1. 启动Ceilometer服务
  2. +
+
systemctl enable openstack-ceilometer-notification.service openstack-ceilometer-central.service
+
+systemctl start openstack-ceilometer-notification.service openstack-ceilometer-central.service
+

Heat 安装

+
    +
  1. 创建heat数据库,并授予heat数据库正确的访问权限,替换HEAT_DBPASS为合适的密码
  2. +
+
CREATE DATABASE heat;
+GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'localhost' IDENTIFIED BY 'HEAT_DBPASS';
+GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'%' IDENTIFIED BY 'HEAT_DBPASS';
+
    +
  1. 创建服务凭证,创建heat用户,并为其增加admin角色
  2. +
+
openstack user create --domain default --password-prompt heat
+openstack role add --project service --user heat admin
+
    +
  1. 创建heat和heat-cfn服务及其对应的API端点
  2. +
+
openstack service create --name heat --description "Orchestration" orchestration
+openstack service create --name heat-cfn --description "Orchestration"  cloudformation
+openstack endpoint create --region RegionOne orchestration public http://controller:8004/v1/%\(tenant_id\)s
+openstack endpoint create --region RegionOne orchestration internal http://controller:8004/v1/%\(tenant_id\)s
+openstack endpoint create --region RegionOne orchestration admin http://controller:8004/v1/%\(tenant_id\)s
+openstack endpoint create --region RegionOne cloudformation public http://controller:8000/v1
+openstack endpoint create --region RegionOne cloudformation internal http://controller:8000/v1
+openstack endpoint create --region RegionOne cloudformation admin http://controller:8000/v1
+
    +
  1. 创建stack管理的额外信息,包括heat domain及其对应domain的admin用户heat_domain_admin,heat_stack_owner角色,heat_stack_user角色
  2. +
+
openstack user create --domain heat --password-prompt heat_domain_admin
+openstack role add --domain heat --user-domain heat --user heat_domain_admin admin
+openstack role create heat_stack_owner
+openstack role create heat_stack_user
+
    +
  1. 安装软件包
  2. +
+
yum install openstack-heat-api openstack-heat-api-cfn openstack-heat-engine
+
    +
  1. 修改配置文件/etc/heat/heat.conf
  2. +
+
[DEFAULT]
+transport_url = rabbit://openstack:RABBIT_PASS@controller
+heat_metadata_server_url = http://controller:8000
+heat_waitcondition_server_url = http://controller:8000/v1/waitcondition
+stack_domain_admin = heat_domain_admin
+stack_domain_admin_password = HEAT_DOMAIN_PASS
+stack_user_domain_name = heat
+
+[database]
+connection = mysql+pymysql://heat:HEAT_DBPASS@controller/heat
+
+[keystone_authtoken]
+www_authenticate_uri = http://controller:5000
+auth_url = http://controller:5000
+memcached_servers = controller:11211
+auth_type = password
+project_domain_name = default
+user_domain_name = default
+project_name = service
+username = heat
+password = HEAT_PASS
+
+[trustee]
+auth_type = password
+auth_url = http://controller:5000
+username = heat
+password = HEAT_PASS
+user_domain_name = default
+
+[clients_keystone]
+auth_uri = http://controller:5000
+
    +
  1. 初始化heat数据库表
  2. +
+
su -s /bin/sh -c "heat-manage db_sync" heat
+
    +
  1. 启动服务
  2. +
+
systemctl enable openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service
+systemctl start openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service
+

基于OpenStack SIG开发工具oos快速部署

+

oos(openEuler OpenStack SIG)是OpenStack SIG提供的命令行工具。其中oos env系列命令提供了一键部署OpenStack (all in one或三节点cluster)的ansible脚本,用户可以使用该脚本快速部署一套基于 openEuler RPM 的 OpenStack 环境。oos工具支持对接云provider(目前仅支持华为云provider)和主机纳管两种方式来部署 OpenStack 环境,下面以对接华为云部署一套all in one的OpenStack环境为例说明oos工具的使用方法。

+
    +
  1. +

    安装oos工具

    +
    pip install openstack-sig-tool
    +
  2. +
  3. +

    配置对接华为云provider的信息

    +

    打开/usr/local/etc/oos/oos.conf文件,修改配置为您拥有的华为云资源信息:

    +
    [huaweicloud]
    +ak = 
    +sk = 
    +region = ap-southeast-3
    +root_volume_size = 100
    +data_volume_size = 100
    +security_group_name = oos
    +image_format = openEuler-%%(release)s-%%(arch)s
    +vpc_name = oos_vpc
    +subnet1_name = oos_subnet1
    +subnet2_name = oos_subnet2
    +
  4. +
  5. +

    配置 OpenStack 环境信息

    +

    打开/usr/local/etc/oos/oos.conf文件,根据当前机器环境和需求修改配置。内容如下:

    +
    [environment]
    +mysql_root_password = root
    +mysql_project_password = root
    +rabbitmq_password = root
    +project_identity_password = root
    +enabled_service = keystone,neutron,cinder,placement,nova,glance,horizon,aodh,ceilometer,cyborg,gnocchi,kolla,heat,swift,trove,tempest
    +neutron_provider_interface_name = br-ex
    +default_ext_subnet_range = 10.100.100.0/24
    +default_ext_subnet_gateway = 10.100.100.1
    +neutron_dataplane_interface_name = eth1
    +cinder_block_device = vdb
    +swift_storage_devices = vdc
    +swift_hash_path_suffix = ash
    +swift_hash_path_prefix = has
    +glance_api_workers = 2
    +cinder_api_workers = 2
    +nova_api_workers = 2
    +nova_metadata_api_workers = 2
    +nova_conductor_workers = 2
    +nova_scheduler_workers = 2
    +neutron_api_workers = 2
    +horizon_allowed_host = *
    +kolla_openeuler_plugin = false
    +

    关键配置

    | 配置项 | 解释 |
    |:---|:---|
    | enabled_service | 安装服务列表,根据用户需求自行删减 |
    | neutron_provider_interface_name | neutron L3网桥名称 |
    | default_ext_subnet_range | neutron私网IP段 |
    | default_ext_subnet_gateway | neutron私网gateway |
    | neutron_dataplane_interface_name | neutron使用的网卡,推荐使用一张新的网卡,以免和现有网卡冲突,防止all in one主机断连的情况 |
    | cinder_block_device | cinder使用的卷设备名 |
    | swift_storage_devices | swift使用的卷设备名 |
    | kolla_openeuler_plugin | 是否启用kolla plugin。设置为True,kolla将支持部署openEuler容器 |
    +
  6. +
  7. +

    华为云上面创建一台openEuler 22.03-LTS-SP1的x86_64虚拟机,用于部署all in one 的 OpenStack

    +
    # sshpass在`oos env create`过程中被使用,用于配置对目标虚拟机的免密访问
    +dnf install sshpass
    +oos env create -r 22.03-lts-sp1 -f small -a x86 -n test-oos all_in_one
    +

    具体的参数可以使用oos env create --help命令查看

    +
  8. +
  9. +

    部署OpenStack all in one 环境

    +
    oos env setup test-oos -r train
    +

    具体的参数可以使用oos env setup --help命令查看

    +
  10. +
  11. +

    初始化tempest环境

    +

    如果用户想使用该环境运行tempest测试的话,可以执行命令oos env init,会自动把tempest需要的OpenStack资源自动创建好

    +
    oos env init test-oos
    +

    命令执行成功后,在用户的根目录下会生成mytest目录,进入其中就可以执行tempest run命令了。

    +
  12. +
+

如果是以主机纳管的方式部署 OpenStack 环境,总体逻辑与上文对接华为云时一致,1、3、5、6步操作不变,去除第2步对华为云provider信息的配置,第4步由在华为云上创建虚拟机改为纳管主机操作。

+
# sshpass在`oos env create`过程中被使用,用于配置对目标主机的免密访问
+dnf install sshpass
+oos env manage -r 22.03-lts-sp1 -i TARGET_MACHINE_IP -p TARGET_MACHINE_PASSWD -n test-oos
+

替换TARGET_MACHINE_IP为目标机ip、TARGET_MACHINE_PASSWD为目标机密码。具体的参数可以使用oos env manage --help命令查看。

+

基于OpenStack SIG部署工具opensd部署

+

opensd用于批量地脚本化部署openstack各组件服务。

+

部署步骤

+

1. 部署前需要确认的信息

+
    +
  • 装操作系统时,需将selinux设置为disable
  • +
  • 装操作系统时,将/etc/ssh/sshd_config配置文件内的UseDNS设置为no
  • +
  • 操作系统语言必须设置为英文
  • +
  • 部署之前请确保所有计算节点/etc/hosts文件内没有对计算主机的解析
  • +
+

2. ceph pool与认证创建(可选)

+

不使用ceph或已有ceph集群可忽略此步骤

+

在任意一台ceph monitor节点执行:

+

2.1 创建pool:

+
ceph osd pool create volumes 2048
+ceph osd pool create images 2048
+

2.2 初始化pool

+
rbd pool init volumes
+rbd pool init images
+

2.3 创建用户认证

+
ceph auth get-or-create client.glance mon 'profile rbd' osd 'profile rbd pool=images' mgr 'profile rbd pool=images'
+ceph auth get-or-create client.cinder mon 'profile rbd' osd 'profile rbd pool=volumes, profile rbd pool=images' mgr 'profile rbd pool=volumes'
+

3. 配置lvm(可选)

+

根据物理机磁盘配置与闲置情况,为mysql数据目录挂载额外的磁盘空间。示例如下(根据实际情况做配置):

+
fdisk -l
+Disk /dev/sdd: 479.6 GB, 479559942144 bytes, 936640512 sectors
+Units = sectors of 1 * 512 = 512 bytes
+Sector size (logical/physical): 512 bytes / 4096 bytes
+I/O size (minimum/optimal): 4096 bytes / 4096 bytes
+Disk label type: dos
+Disk identifier: 0x000ed242
+创建分区
+parted /dev/sdd
+mkpart primary 0 -1
+创建pv
+partprobe /dev/sdd1
+pvcreate /dev/sdd1
+创建、激活vg
+vgcreate vg_mariadb /dev/sdd1
+vgchange -ay vg_mariadb
+查看vg容量
+vgdisplay
+--- Volume group ---
+VG Name vg_mariadb
+System ID
+Format lvm2
+Metadata Areas 1
+Metadata Sequence No 2
+VG Access read/write
+VG Status resizable
+MAX LV 0
+Cur LV 1
+Open LV 1
+Max PV 0
+Cur PV 1
+Act PV 1
+VG Size 446.62 GiB
+PE Size 4.00 MiB
+Total PE 114335
+Alloc PE / Size 114176 / 446.00 GiB
+Free PE / Size 159 / 636.00 MiB
+VG UUID bVUmDc-VkMu-Vi43-mg27-TEkG-oQfK-TvqdEc
+创建lv
+lvcreate -L 446G -n lv_mariadb vg_mariadb
+格式化磁盘并获取卷的UUID
+mkfs.ext4 /dev/mapper/vg_mariadb-lv_mariadb
+blkid /dev/mapper/vg_mariadb-lv_mariadb
+/dev/mapper/vg_mariadb-lv_mariadb: UUID="98d513eb-5f64-4aa5-810e-dc7143884fa2" TYPE="ext4"
+注:98d513eb-5f64-4aa5-810e-dc7143884fa2为卷的UUID
+挂载磁盘
+mount /dev/mapper/vg_mariadb-lv_mariadb /var/lib/mysql
+rm -rf  /var/lib/mysql/*
+
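如需开机自动挂载,可将挂载信息写入/etc/fstab,示例如下(UUID沿用上文blkid的示例输出,请替换为实际值):

```shell
echo 'UUID=98d513eb-5f64-4aa5-810e-dc7143884fa2 /var/lib/mysql ext4 defaults 0 2' >> /etc/fstab
mount -a
```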

4. 配置yum repo

+

在部署节点执行:

+

4.1 备份yum源

+
mkdir /etc/yum.repos.d/bak/
+mv /etc/yum.repos.d/*.repo /etc/yum.repos.d/bak/
+

4.2 配置yum repo

+
cat > /etc/yum.repos.d/opensd.repo << EOF
+[train]
+name=train
+baseurl=http://119.3.219.20:82/openEuler:/22.03:/LTS:/SP1:/Epol:/Multi-Version:/OpenStack:/Train/standard_$basearch/
+enabled=1
+gpgcheck=0
+
+[epol]
+name=epol
+baseurl=http://119.3.219.20:82/openEuler:/22.03:/LTS:/SP1:/Epol/standard_$basearch/
+enabled=1
+gpgcheck=0
+
+[everything]
+name=everything
+baseurl=http://119.3.219.20:82/openEuler:/22.03:/LTS:/SP1/standard_$basearch/
+enabled=1
+gpgcheck=0
+
+EOF
+

4.3 更新yum缓存

+
yum clean all
+yum makecache
+

5. 安装opensd

+

在部署节点执行:

+

5.1 克隆opensd源码并安装

+
git clone https://gitee.com/openeuler/opensd
+cd opensd
+python3 setup.py install
+

6. 做ssh互信

+

在部署节点执行:

+

6.1 生成密钥对

+

执行如下命令并一路回车

+
ssh-keygen
+

6.2 生成主机IP地址文件

+

在auto_ssh_host_ip中配置所有用到的主机ip, 示例:

+
cd /usr/local/share/opensd/tools/
+vim auto_ssh_host_ip
+
+10.0.0.1
+10.0.0.2
+...
+10.0.0.10
+

6.3 更改密码并执行脚本

+

将免密脚本/usr/local/bin/opensd-auto-ssh内123123替换为主机真实密码

+
# 替换脚本内123123字符串
+vim /usr/local/bin/opensd-auto-ssh
+
## 安装expect后执行脚本
+dnf install expect -y
+opensd-auto-ssh
+

6.4 部署节点与ceph monitor做互信(可选)

+
ssh-copy-id root@x.x.x.x
+

7. 配置opensd

+

在部署节点执行:

+

7.1 生成随机密码

+

安装 python3-pbr, python3-utils, python3-pyyaml, python3-oslo-utils并随机生成密码 +

dnf install python3-pbr python3-utils python3-pyyaml python3-oslo-utils -y
+# 执行命令生成密码
+opensd-genpwd
+# 检查密码是否生成
+cat /usr/local/share/opensd/etc_examples/opensd/passwords.yml

+

7.2 配置inventory文件

+

主机信息包含:主机名、ansible_host IP、availability_zone,三者均需配置缺一不可,示例:

+
vim /usr/local/share/opensd/ansible/inventory/multinode
+# 三台控制节点主机信息
+[control]
+controller1 ansible_host=10.0.0.35 availability_zone=az01.cell01.cn-yogadev-1
+controller2 ansible_host=10.0.0.36 availability_zone=az01.cell01.cn-yogadev-1
+controller3 ansible_host=10.0.0.37 availability_zone=az01.cell01.cn-yogadev-1
+
+# 网络节点信息,与控制节点保持一致
+[network]
+controller1 ansible_host=10.0.0.35 availability_zone=az01.cell01.cn-yogadev-1
+controller2 ansible_host=10.0.0.36 availability_zone=az01.cell01.cn-yogadev-1
+controller3 ansible_host=10.0.0.37 availability_zone=az01.cell01.cn-yogadev-1
+
+# cinder-volume服务节点信息
+[storage]
+storage1 ansible_host=10.0.0.61 availability_zone=az01.cell01.cn-yogadev-1
+storage2 ansible_host=10.0.0.78 availability_zone=az01.cell01.cn-yogadev-1
+storage3 ansible_host=10.0.0.82 availability_zone=az01.cell01.cn-yogadev-1
+
+# Cell1 集群信息
+[cell-control-cell1]
+cell1 ansible_host=10.0.0.24 availability_zone=az01.cell01.cn-yogadev-1
+cell2 ansible_host=10.0.0.25 availability_zone=az01.cell01.cn-yogadev-1
+cell3 ansible_host=10.0.0.26 availability_zone=az01.cell01.cn-yogadev-1
+
+[compute-cell1]
+compute1 ansible_host=10.0.0.27 availability_zone=az01.cell01.cn-yogadev-1
+compute2 ansible_host=10.0.0.28 availability_zone=az01.cell01.cn-yogadev-1
+compute3 ansible_host=10.0.0.29 availability_zone=az01.cell01.cn-yogadev-1
+
+[cell1:children]
+cell-control-cell1
+compute-cell1
+
+# Cell2集群信息
+[cell-control-cell2]
+cell4 ansible_host=10.0.0.36 availability_zone=az03.cell02.cn-yogadev-1
+cell5 ansible_host=10.0.0.37 availability_zone=az03.cell02.cn-yogadev-1
+cell6 ansible_host=10.0.0.38 availability_zone=az03.cell02.cn-yogadev-1
+
+[compute-cell2]
+compute4 ansible_host=10.0.0.39 availability_zone=az03.cell02.cn-yogadev-1
+compute5 ansible_host=10.0.0.40 availability_zone=az03.cell02.cn-yogadev-1
+compute6 ansible_host=10.0.0.41 availability_zone=az03.cell02.cn-yogadev-1
+
+[cell2:children]
+cell-control-cell2
+compute-cell2
+
+[baremetal]
+
+[compute-cell1-ironic]
+
+
+# 填写所有cell集群的control主机组
+[nova-conductor:children]
+cell-control-cell1
+cell-control-cell2
+
+# 填写所有cell集群的compute主机组
+[nova-compute:children]
+compute-added
+compute-cell1
+compute-cell2
+
+# 下面的主机组信息不需变动,保留即可
+[compute-added]
+
+[chrony-server:children]
+control
+
+[pacemaker:children]
+control
+......
+......
+

7.3 配置全局变量

+

注: 文档中提到的有注释配置项需要更改,其他参数不需要更改,若无相关配置则为空

+
vim /usr/local/share/opensd/etc_examples/opensd/globals.yml
+########################
+# Network & Base options
+########################
+network_interface: "eth0" #管理网络的网卡名称
+neutron_external_interface: "eth1" #业务网络的网卡名称
+cidr_netmask: 24 #管理网的掩码
+opensd_vip_address: 10.0.0.33  #控制节点虚拟IP地址
+cell1_vip_address: 10.0.0.34 #cell1集群的虚拟IP地址
+cell2_vip_address: 10.0.0.35 #cell2集群的虚拟IP地址
+external_fqdn: "" #用于vnc访问虚拟机的外网域名地址
+external_ntp_servers: [] #外部ntp服务器地址
+yumrepo_host:  #yum源的IP地址
+yumrepo_port:  #yum源端口号
+environment:   #yum源的类型
+upgrade_all_packages: "yes" #是否升级所有已安装软件包的版本(执行yum upgrade),初始部署时请设置为"yes"
+enable_miner: "no" #是否开启部署miner服务
+
+enable_chrony: "no" #是否开启部署chrony服务
+enable_pri_mariadb: "no" #是否为私有云部署mariadb
+enable_hosts_file_modify: "no" # 扩容计算节点和部署ironic服务的时候,是否将节点信息添加到`/etc/hosts`
+
+########################
+# Available zone options
+########################
+az_cephmon_compose:
+  - availability_zone:  #availability zone的名称,该名称必须与multinode主机文件内的az01的"availability_zone"值保持一致
+    ceph_mon_host:      #az01对应的一台ceph monitor主机地址,部署节点需要与该主机做ssh互信
+    reserve_vcpu_based_on_numa:  
+  - availability_zone:  #availability zone的名称,该名称必须与multinode主机文件内的az02的"availability_zone"值保持一致
+    ceph_mon_host:      #az02对应的一台ceph monitor主机地址,部署节点需要与该主机做ssh互信
+    reserve_vcpu_based_on_numa:  
+  - availability_zone:  #availability zone的名称,该名称必须与multinode主机文件内的az03的"availability_zone"值保持一致
+    ceph_mon_host:      #az03对应的一台ceph monitor主机地址,部署节点需要与该主机做ssh互信
+    reserve_vcpu_based_on_numa:
+
+# `reserve_vcpu_based_on_numa`配置为`yes` or `no`,举例说明:
+NUMA node0 CPU(s): 0-15,32-47
+NUMA node1 CPU(s): 16-31,48-63
+当reserve_vcpu_based_on_numa: "yes", 根据numa node, 平均每个node预留vcpu:
+vcpu_pin_set = 2-15,34-47,18-31,50-63
+当reserve_vcpu_based_on_numa: "no", 从第一个vcpu开始,顺序预留vcpu:
+vcpu_pin_set = 8-64
+
+#######################
+# Nova options
+#######################
+nova_reserved_host_memory_mb: 2048 #计算节点给计算服务预留的内存大小
+enable_cells: "yes" #cell节点是否单独节点部署
+support_gpu: "False" #cell节点是否有GPU服务器,如果有则为True,否则为False
+
+#######################
+# Neutron options
+#######################
+monitor_ip:
+    - 10.0.0.9   #配置监控节点
+    - 10.0.0.10
+enable_meter_full_eip: True   #配置是否允许EIP全量监控,默认为True
+enable_meter_port_forwarding: True   #配置是否允许port forwarding监控,默认为True
+enable_meter_ecs_ipv6: True   #配置是否允许ecs_ipv6监控,默认为True
+enable_meter: True    #配置是否开启监控,默认为True
+is_sdn_arch: False    #配置是否是sdn架构,默认为False
+
+# 默认使能的网络类型是vlan,vlan和vxlan两种类型只能二选一.
+enable_vxlan_network_type: False  # 默认使能的网络类型是vlan,如果使用vxlan网络,配置为True, 如果使用vlan网络,配置为False.
+enable_neutron_fwaas: False       # 环境有使用防火墙, 设置为True, 使能防护墙功能.
+# Neutron provider
+neutron_provider_networks:
+  network_types: "{{ 'vxlan' if enable_vxlan_network_type else 'vlan' }}"
+  network_vlan_ranges: "default:xxx:xxx" #部署之前规划的业务网络vlan范围
+  network_mappings: "default:br-provider"
+  network_interface: "{{ neutron_external_interface }}"
+  network_vxlan_ranges: "" #部署之前规划的业务网络vxlan范围
+
+# 如下这些配置是SND控制器的配置参数, `enable_sdn_controller`设置为True, 使能SND控制器功能.
+# 其他参数请根据部署之前的规划和SDN部署信息确定.
+enable_sdn_controller: False
+sdn_controller_ip_address:  # SDN控制器ip地址
+sdn_controller_username:    # SDN控制器的用户名
+sdn_controller_password:    # SDN控制器的用户密码
+
+#######################
+# Dimsagent options
+#######################
+enable_dimsagent: "no" # 安装镜像服务agent, 需要改为yes
+# Address and domain name for s2
+s3_address_domain_pair:
+  - host_ip:           
+    host_name:         
+
+#######################
+# Trove options
+#######################
+enable_trove: "no" #安装trove 需要改为yes
+#default network
+trove_default_neutron_networks:  #trove 的管理网络id `openstack network list|grep -w trove-mgmt|awk '{print$2}'`
+#s3 setup(如果没有s3,以下值填null)
+s3_endpoint_host_ip:   #s3的ip
+s3_endpoint_host_name: #s3的域名
+s3_endpoint_url:       #s3的url ·一般为http://s3域名
+s3_access_key:         #s3的ak 
+s3_secret_key:         #s3的sk
+
+#######################
+# Ironic options
+#######################
+enable_ironic: "no" #是否开机裸金属部署,默认不开启
+ironic_neutron_provisioning_network_uuid:
+ironic_neutron_cleaning_network_uuid: "{{ ironic_neutron_provisioning_network_uuid }}"
+ironic_dnsmasq_interface:
+ironic_dnsmasq_dhcp_range:
+ironic_tftp_server_address: "{{ hostvars[inventory_hostname]['ansible_' + ironic_dnsmasq_interface]['ipv4']['address'] }}"
+# 交换机设备相关信息
+neutron_ml2_conf_genericswitch:
+  genericswitch:xxxxxxx:
+    device_type:
+    ngs_mac_address:
+    ip:
+    username:
+    password:
+    ngs_port_default_vlan:
+
+# Package state setting
+haproxy_package_state: "present"
+mariadb_package_state: "present"
+rabbitmq_package_state: "present"
+memcached_package_state: "present"
+ceph_client_package_state: "present"
+keystone_package_state: "present"
+glance_package_state: "present"
+cinder_package_state: "present"
+nova_package_state: "present"
+neutron_package_state: "present"
+miner_package_state: "present"
+

7.4 检查所有节点ssh连接状态

+
dnf install ansible -y
+ansible all -i /usr/local/share/opensd/ansible/inventory/multinode -m ping
+
+# 执行结果显示每台主机都是"SUCCESS"即说明连接状态没问题,示例:
+compute1 | SUCCESS => {
+  "ansible_facts": {
+      "discovered_interpreter_python": "/usr/bin/python"
+  },
+  "changed": false,
+  "ping": "pong"
+}
+

8. 执行部署

+

在部署节点执行:

+

8.1 执行bootstrap

+
# 执行部署
+opensd -i /usr/local/share/opensd/ansible/inventory/multinode bootstrap --forks 50
+

8.2 重启服务器

+

注:执行重启的原因是bootstrap可能会升级内核、更改selinux配置或者存在GPU服务器;如果装机时已经是新版内核、selinux已disable或者没有GPU服务器,则不需要执行该步骤。

# 手动重启对应节点,执行命令
+init 6
+# 重启完成后,再次检查连通性
+ansible all -i /usr/local/share/opensd/ansible/inventory/multinode -m ping
+# 操作系统重启完成后,重新启用yum源

+

8.3 执行部署前检查

+
opensd -i /usr/local/share/opensd/ansible/inventory/multinode prechecks --forks 50
+

8.4 执行部署

+
ln -s /usr/bin/python3 /usr/bin/python
+
+全量部署:
+opensd -i /usr/local/share/opensd/ansible/inventory/multinode deploy --forks 50
+
+单服务部署:
+opensd -i /usr/local/share/opensd/ansible/inventory/multinode deploy --forks 50 -t service_name
diff --git a/site/install/openEuler-22.03-LTS-SP1/OpenStack-wallaby/index.html b/site/install/openEuler-22.03-LTS-SP1/OpenStack-wallaby/index.html
new file mode 100644
index 0000000000000000000000000000000000000000..3952d6090c4f227b0c3c6cf5ad1e0586910f1057
--- /dev/null
+++ b/site/install/openEuler-22.03-LTS-SP1/OpenStack-wallaby/index.html
@@ -0,0 +1,2645 @@
openEuler-22.03-LTS-SP1_Wallaby - OpenStack SIG Doc

OpenStack-Wallaby 部署指南

+ +

OpenStack 简介

+

OpenStack 是一个社区,也是一个项目。它提供了一个部署云的操作平台或工具集,为组织提供可扩展的、灵活的云计算。

+

作为一个开源的云计算管理平台,OpenStack 由nova、cinder、neutron、glance、keystone、horizon等几个主要的组件组合起来完成具体工作。OpenStack 支持几乎所有类型的云环境,项目目标是提供实施简单、可大规模扩展、丰富、标准统一的云计算管理平台。OpenStack 通过各种互补的服务提供了基础设施即服务(IaaS)的解决方案,每个服务提供 API 进行集成。

+

openEuler 22.03-LTS-SP1版本官方源已经支持 OpenStack-Wallaby 版本,用户可以配置好 yum 源后根据此文档进行 OpenStack 部署。

+

约定

+

OpenStack 支持多种形态部署,此文档支持ALL in One以及Distributed两种部署方式,按照如下方式约定:

+

ALL in One模式:

+
忽略所有可能的后缀
+

Distributed模式:

+
以 `(CTL)` 为后缀表示此条配置或者命令仅适用`控制节点`
+以 `(CPT)` 为后缀表示此条配置或者命令仅适用`计算节点`
+以 `(STG)` 为后缀表示此条配置或者命令仅适用`存储节点`
+除此之外表示此条配置或者命令同时适用`控制节点`和`计算节点`
+

注意

+

涉及到以上约定的服务如下:

+
    +
  • Cinder
  • +
  • Nova
  • +
  • Neutron
  • +
+

准备环境

+

环境配置

+
    +
  1. +

    配置 22.03 LTS 官方yum源,需要启用EPOL软件仓以支持OpenStack

    +
    yum update
    +yum install openstack-release-wallaby
    +yum clean all && yum makecache
    +

    注意:如果你的环境的YUM源没有启用EPOL,需要同时配置EPOL,确保EPOL已配置,如下所示。

    +
    vi /etc/yum.repos.d/openEuler.repo
    +
    +[EPOL]
    +name=EPOL
    +baseurl=http://repo.openeuler.org/openEuler-22.03-LTS-SP1/EPOL/main/$basearch/
    +enabled=1
    +gpgcheck=1
    +gpgkey=http://repo.openeuler.org/openEuler-22.03-LTS-SP1/OS/$basearch/RPM-GPG-KEY-openEuler
    +EOF
    +
  2. +
  3. +

    修改主机名以及映射

    +

    设置各个节点的主机名

    +
    hostnamectl set-hostname controller                                                            (CTL)
    +hostnamectl set-hostname compute                                                               (CPT)
    +

    假设controller节点的IP是10.0.0.11,compute节点的IP是10.0.0.12(如果存在的话),则于/etc/hosts新增如下:

    +
    10.0.0.11   controller
    +10.0.0.12   compute
    +
  4. +
+

安装 SQL DataBase

+
    +
  1. +

    执行如下命令,安装软件包。

    +
    yum install mariadb mariadb-server python3-PyMySQL
    +
  2. +
  3. +

    执行如下命令,创建并编辑 /etc/my.cnf.d/openstack.cnf 文件。

    +
    vim /etc/my.cnf.d/openstack.cnf
    +
    +[mysqld]
    +bind-address = 10.0.0.11
    +default-storage-engine = innodb
    +innodb_file_per_table = on
    +max_connections = 4096
    +collation-server = utf8_general_ci
    +character-set-server = utf8
    +

    注意

    +

    其中 bind-address 设置为控制节点的管理IP地址。

    +
  4. +
  5. +

    启动 DataBase 服务,并为其配置开机自启动:

    +
    systemctl enable mariadb.service
    +systemctl start mariadb.service
    +
  6. +
  7. +

    配置DataBase的默认密码(可选)

    +
    mysql_secure_installation
    +

    注意

    +

    根据提示进行即可

    +
  8. +
+

安装 RabbitMQ

+
    +
  1. +

    执行如下命令,安装软件包。

    +
    yum install rabbitmq-server
    +
  2. +
  3. +

    启动 RabbitMQ 服务,并为其配置开机自启动。

    +
    systemctl enable rabbitmq-server.service
    +systemctl start rabbitmq-server.service
    +
  4. +
  5. +

    添加 OpenStack用户。

    +
    rabbitmqctl add_user openstack RABBIT_PASS
    +

    注意

    +

    替换 RABBIT_PASS,为 OpenStack 用户设置密码

    +
  6. +
  7. +

    设置openstack用户权限,允许进行配置、写、读:

    +
    rabbitmqctl set_permissions openstack ".*" ".*" ".*"
    +
  8. +
+

安装 Memcached

+
    +
  1. +

    执行如下命令,安装依赖软件包。

    +
    yum install memcached python3-memcached
    +
  2. +
  3. +

    编辑 /etc/sysconfig/memcached 文件。

    +
    vim /etc/sysconfig/memcached
    +
    +OPTIONS="-l 127.0.0.1,::1,controller"
    +
  4. +
  5. +

    执行如下命令,启动 Memcached 服务,并为其配置开机启动。

    +
    systemctl enable memcached.service
    +systemctl start memcached.service
    +

    注意

    +

    服务启动后,可以通过命令memcached-tool controller stats确保启动正常,服务可用,其中可以将controller替换为控制节点的管理IP地址。

    +
  6. +
+

安装 OpenStack

+

Keystone 安装

+
    +
  1. +

    创建 keystone 数据库并授权。

    +
    mysql -u root -p
    +
    +MariaDB [(none)]> CREATE DATABASE keystone;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
    +IDENTIFIED BY 'KEYSTONE_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
    +IDENTIFIED BY 'KEYSTONE_DBPASS';
    +MariaDB [(none)]> exit
    +

    注意

    +

    替换 KEYSTONE_DBPASS,为 Keystone 数据库设置密码

    +
  2. +
  3. +

    安装软件包。

    +
    yum install openstack-keystone httpd mod_wsgi
    +
  4. +
  5. +

    配置keystone相关配置

    +
    vim /etc/keystone/keystone.conf
    +
    +[database]
    +connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone
    +
    +[token]
    +provider = fernet
    +

    解释

    +

    [database]部分,配置数据库入口

    +

    [token]部分,配置token provider

    +

    注意:

    +

    替换 KEYSTONE_DBPASS 为 Keystone 数据库的密码

    +
  6. +
  7. +

    同步数据库。

    +
    su -s /bin/sh -c "keystone-manage db_sync" keystone
    +
  8. +
  9. +

    初始化Fernet密钥仓库。

    +
    keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
    +keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
    +
  10. +
  11. +

    启动服务。

    +
    keystone-manage bootstrap --bootstrap-password ADMIN_PASS \
    +--bootstrap-admin-url http://controller:5000/v3/ \
    +--bootstrap-internal-url http://controller:5000/v3/ \
    +--bootstrap-public-url http://controller:5000/v3/ \
    +--bootstrap-region-id RegionOne
    +

    注意

    +

    替换 ADMIN_PASS,为 admin 用户设置密码

    +
  12. +
  13. +

    配置Apache HTTP server

    +
    vim /etc/httpd/conf/httpd.conf
    +
    +ServerName controller
    +
    ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
    +

    解释

    +

    配置 ServerName 项引用控制节点

    +

    Note: if the ServerName entry does not exist, it needs to be created.

    +
  14. +
  15. +

    启动Apache HTTP服务。

    +
    systemctl enable httpd.service
    +systemctl start httpd.service
    +
  16. +
  17. +

    创建环境变量配置。

    +
    cat << EOF >> ~/.admin-openrc
    +export OS_PROJECT_DOMAIN_NAME=Default
    +export OS_USER_DOMAIN_NAME=Default
    +export OS_PROJECT_NAME=admin
    +export OS_USERNAME=admin
    +export OS_PASSWORD=ADMIN_PASS
    +export OS_AUTH_URL=http://controller:5000/v3
    +export OS_IDENTITY_API_VERSION=3
    +export OS_IMAGE_API_VERSION=2
    +EOF
    +

    注意

    +

    替换 ADMIN_PASS 为 admin 用户的密码

    +
  18. +
  19. +

    依次创建domain, projects, users, roles,需要先安装好python3-openstackclient:

    +
    yum install python3-openstackclient
    +

    导入环境变量

    +
    source ~/.admin-openrc
    +

    创建project service,其中 domain default 在 keystone-manage bootstrap 时已创建

    +
    openstack domain create --description "An Example Domain" example
    +
    openstack project create --domain default --description "Service Project" service
    +

    Create a (non-admin) project myproject, user myuser, and role myrole, then add the role myrole to the user myuser in the project myproject:

    +
    openstack project create --domain default --description "Demo Project" myproject
    +openstack user create --domain default --password-prompt myuser
    +openstack role create myrole
    +openstack role add --project myproject --user myuser myrole
    +
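Optionally, a second credentials file for myuser makes the verification step below easier. This is only a sketch: the file name ~/.myuser-openrc is arbitrary and MYUSER_PASS stands for the password entered for myuser above.

```shell
cat << EOF >> ~/.myuser-openrc
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=myproject
export OS_USERNAME=myuser
export OS_PASSWORD=MYUSER_PASS
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
EOF
```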
  20. +
  21. +

    验证

    +

    取消临时环境变量OS_AUTH_URL和OS_PASSWORD:

    +
    source ~/.admin-openrc
    +unset OS_AUTH_URL OS_PASSWORD
    +

    为admin用户请求token:

    +
    openstack --os-auth-url http://controller:5000/v3 \
    +--os-project-domain-name Default --os-user-domain-name Default \
    +--os-project-name admin --os-username admin token issue
    +

    为myuser用户请求token:

    +
    openstack --os-auth-url http://controller:5000/v3 \
    +--os-project-domain-name Default --os-user-domain-name Default \
    +--os-project-name myproject --os-username myuser token issue
    +
  22. +
+

Glance 安装

+
    +
  1. +

    创建数据库、服务凭证和 API 端点

    +

    创建数据库:

    +
    mysql -u root -p
    +
    +MariaDB [(none)]> CREATE DATABASE glance;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
    +IDENTIFIED BY 'GLANCE_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
    +IDENTIFIED BY 'GLANCE_DBPASS';
    +MariaDB [(none)]> exit
    +

    注意:

    +

    替换 GLANCE_DBPASS,为 glance 数据库设置密码

    +

    创建服务凭证

    +
    source ~/.admin-openrc
    +
    +openstack user create --domain default --password-prompt glance
    +openstack role add --project service --user glance admin
    +openstack service create --name glance --description "OpenStack Image" image
    +

    创建镜像服务API端点:

    +
    openstack endpoint create --region RegionOne image public http://controller:9292
    +openstack endpoint create --region RegionOne image internal http://controller:9292
    +openstack endpoint create --region RegionOne image admin http://controller:9292
    +
  2. +
  3. +

    安装软件包

    +
    yum install openstack-glance
    +
  4. +
  5. +

    配置glance相关配置:

    +
    vim /etc/glance/glance-api.conf
    +
    +[database]
    +connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
    +
    +[keystone_authtoken]
    +www_authenticate_uri  = http://controller:5000
    +auth_url = http://controller:5000
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +project_name = service
    +username = glance
    +password = GLANCE_PASS
    +
    +[paste_deploy]
    +flavor = keystone
    +
    +[glance_store]
    +stores = file,http
    +default_store = file
    +filesystem_store_datadir = /var/lib/glance/images/
    +

    解释:

    +

    [database]部分,配置数据库入口

    +

    [keystone_authtoken] [paste_deploy]部分,配置身份认证服务入口

    +

    [glance_store]部分,配置本地文件系统存储和镜像文件的位置

    +

    注意

    +

    替换 GLANCE_DBPASS 为 glance 数据库的密码

    +

    替换 GLANCE_PASS 为 glance 用户的密码

    +
  6. +
  7. +

    同步数据库:

    +
    su -s /bin/sh -c "glance-manage db_sync" glance
    +
  8. +
  9. +

    启动服务:

    +
    systemctl enable openstack-glance-api.service
    +systemctl start openstack-glance-api.service
    +
  10. +
  11. +

    验证

    +

    下载镜像

    +
    source ~/.admin-openrc
    +
    +wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
    +

    注意

    +

    如果您使用的环境是鲲鹏架构,请下载aarch64版本的镜像;已对镜像cirros-0.5.2-aarch64-disk.img进行测试。

    +

    向Image服务上传镜像:

    +
    openstack image create --disk-format qcow2 --container-format bare \
    +                       --file cirros-0.4.0-x86_64-disk.img --public cirros
    +

    确认镜像上传并验证属性:

    +
    openstack image list
    +
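If the image appears in the list, its properties can also be inspected; a small optional check (cirros is the image name used above):

```shell
openstack image show cirros
```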
  12. +
+

Placement安装

+
    +
  1. +

    创建数据库、服务凭证和 API 端点

    +

    创建数据库:

    +

    作为 root 用户访问数据库,创建 placement 数据库并授权。

    +
    mysql -u root -p
    +MariaDB [(none)]> CREATE DATABASE placement;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' \
    +IDENTIFIED BY 'PLACEMENT_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' \
    +IDENTIFIED BY 'PLACEMENT_DBPASS';
    +MariaDB [(none)]> exit
    +

    注意

    +

    替换 PLACEMENT_DBPASS 为 placement 数据库设置密码

    +
    source ~/.admin-openrc
    +

    执行如下命令,创建 placement 服务凭证、创建 placement 用户以及添加‘admin’角色到用户‘placement’。

    +

    创建Placement API服务

    +
    openstack user create --domain default --password-prompt placement
    +openstack role add --project service --user placement admin
    +openstack service create --name placement --description "Placement API" placement
    +

    创建placement服务API端点:

    +
    openstack endpoint create --region RegionOne placement public http://controller:8778
    +openstack endpoint create --region RegionOne placement internal http://controller:8778
    +openstack endpoint create --region RegionOne placement admin http://controller:8778
    +
  2. +
  3. +

    安装和配置

    +

    安装软件包:

    +
    yum install openstack-placement-api
    +

    配置placement:

    +

    编辑 /etc/placement/placement.conf 文件:

    +

    在[placement_database]部分,配置数据库入口

    +

    在[api] [keystone_authtoken]部分,配置身份认证服务入口

    +
    # vim /etc/placement/placement.conf
    +[placement_database]
    +# ...
    +connection = mysql+pymysql://placement:PLACEMENT_DBPASS@controller/placement
    +[api]
    +# ...
    +auth_strategy = keystone
    +[keystone_authtoken]
    +# ...
    +auth_url = http://controller:5000/v3
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +project_name = service
    +username = placement
    +password = PLACEMENT_PASS
    +

    其中,替换 PLACEMENT_DBPASS 为 placement 数据库的密码,替换 PLACEMENT_PASS 为 placement 用户的密码。

    +

    同步数据库:

    +
    su -s /bin/sh -c "placement-manage db sync" placement
    +

    启动httpd服务:

    +
    systemctl restart httpd
    +
  4. +
  5. +

    验证

    +

    执行如下命令,执行状态检查:

    +
    source ~/.admin-openrc
    +placement-status upgrade check
    +

    安装osc-placement,列出可用的资源类别及特性:

    +
    yum install python3-osc-placement
    +openstack --os-placement-api-version 1.2 resource class list --sort-column name
    +openstack --os-placement-api-version 1.6 trait list --sort-column name
    +
  6. +
+

Nova 安装

+
    +
  1. +

    创建数据库、服务凭证和 API 端点

    +

    创建数据库:

    +
    mysql -u root -p                                                                               (CTL)
    +
    +MariaDB [(none)]> CREATE DATABASE nova_api;
    +MariaDB [(none)]> CREATE DATABASE nova;
    +MariaDB [(none)]> CREATE DATABASE nova_cell0;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> exit
    +

    注意

    +

    替换NOVA_DBPASS,为nova数据库设置密码

    +
    source ~/.admin-openrc                                                                         (CTL)
    +

    创建nova服务凭证:

    +
    openstack user create --domain default --password-prompt nova                                  (CTL)
    +openstack role add --project service --user nova admin                                         (CTL)
    +openstack service create --name nova --description "OpenStack Compute" compute                 (CTL)
    +

    创建nova API端点:

    +
    openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1        (CTL)
    +openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1      (CTL)
    +openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1         (CTL)
    +
  2. +
  3. +

    安装软件包

    +
    yum install openstack-nova-api openstack-nova-conductor \                                      (CTL)
    +openstack-nova-novncproxy openstack-nova-scheduler 
    +
    +yum install openstack-nova-compute                                                             (CPT)
    +

    注意

    +

    If the architecture is arm64, the following command also needs to be run:

    +
    yum install edk2-aarch64                                                                       (CPT)
    +
  4. +
  5. +

    配置nova相关配置

    +
    vim /etc/nova/nova.conf
    +
    +[DEFAULT]
    +enabled_apis = osapi_compute,metadata
    +transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
    +my_ip = 10.0.0.11
    +use_neutron = true
    +firewall_driver = nova.virt.firewall.NoopFirewallDriver
    +compute_driver=libvirt.LibvirtDriver                                                           (CPT)
    +instances_path = /var/lib/nova/instances/                                                      (CPT)
    +lock_path = /var/lib/nova/tmp                                                                  (CPT)
    +logdir = /var/log/nova/                                                                        (CPT)
    +
    +[api_database]
    +connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api                              (CTL)
    +
    +[database]
    +connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova                                  (CTL)
    +
    +[api]
    +auth_strategy = keystone
    +
    +[keystone_authtoken]
    +www_authenticate_uri = http://controller:5000/
    +auth_url = http://controller:5000/
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +project_name = service
    +username = nova
    +password = NOVA_PASS
    +
    +[vnc]
    +enabled = true
    +server_listen = $my_ip
    +server_proxyclient_address = $my_ip
    +novncproxy_base_url = http://controller:6080/vnc_auto.html                                     (CPT)
    +
    +[libvirt]
    +virt_type = qemu                                                                               (CPT)
    +cpu_mode = custom                                                                              (CPT)
    +cpu_model = cortex-a72                                                                         (CPT)
    +
    +[glance]
    +api_servers = http://controller:9292
    +
    +[oslo_concurrency]
    +lock_path = /var/lib/nova/tmp                                                                  (CTL)
    +
    +[placement]
    +region_name = RegionOne
    +project_domain_name = Default
    +project_name = service
    +auth_type = password
    +user_domain_name = Default
    +auth_url = http://controller:5000/v3
    +username = placement
    +password = PLACEMENT_PASS
    +
    +[neutron]
    +auth_url = http://controller:5000
    +auth_type = password
    +project_domain_name = default
    +user_domain_name = default
    +region_name = RegionOne
    +project_name = service
    +username = neutron
    +password = NEUTRON_PASS
    +service_metadata_proxy = true                                                                  (CTL)
    +metadata_proxy_shared_secret = METADATA_SECRET                                                 (CTL)
    +

    解释

    +

    [default]部分,启用计算和元数据的API,配置RabbitMQ消息队列入口,配置my_ip,启用网络服务neutron;

    +

    [api_database] [database]部分,配置数据库入口;

    +

    [api] [keystone_authtoken]部分,配置身份认证服务入口;

    +

    [vnc]部分,启用并配置远程控制台入口;

    +

    [glance]部分,配置镜像服务API的地址;

    +

    [oslo_concurrency]部分,配置lock path;

    +

    [placement]部分,配置placement服务的入口。

    +

    注意

    +

    替换 RABBIT_PASS 为 RabbitMQ 中 openstack 账户的密码;

    +

    配置 my_ip 为控制节点的管理IP地址;

    +

    替换 NOVA_DBPASS 为nova数据库的密码;

    +

    替换 NOVA_PASS 为nova用户的密码;

    +

    替换 PLACEMENT_PASS 为placement用户的密码;

    +

    替换 NEUTRON_PASS 为neutron用户的密码;

    +

    替换METADATA_SECRET为合适的元数据代理secret。

    +

    额外

    +

    确定是否支持虚拟机硬件加速(x86架构):

    +
    egrep -c '(vmx|svm)' /proc/cpuinfo                                                             (CPT)
    +

    如果返回值为0则不支持硬件加速,需要配置libvirt使用QEMU而不是KVM:

    +
    vim /etc/nova/nova.conf                                                                        (CPT)
    +
    +[libvirt]
    +virt_type = qemu
    +

    如果返回值为1或更大的值,则支持硬件加速,不需要进行额外的配置

    +

    注意

    +

    If the architecture is arm64, the following additional configuration is required:

    +
    vim /etc/libvirt/qemu.conf
    +
    +nvram = ["/usr/share/AAVMF/AAVMF_CODE.fd:/usr/share/AAVMF/AAVMF_VARS.fd", "/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw:/usr/share/edk2/aarch64/vars-template-pflash.raw"]
    +
    +vim /etc/qemu/firmware/edk2-aarch64.json
    +
    +{
    +    "description": "UEFI firmware for ARM64 virtual machines",
    +    "interface-types": [
    +        "uefi"
    +    ],
    +    "mapping": {
    +        "device": "flash",
    +        "executable": {
    +            "filename": "/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw",
    +            "format": "raw"
    +        },
    +        "nvram-template": {
    +            "filename": "/usr/share/edk2/aarch64/vars-template-pflash.raw",
    +            "format": "raw"
    +        }
    +    },
    +    "targets": [
    +        {
    +            "architecture": "aarch64",
    +            "machines": [
    +                "virt-*"
    +            ]
    +        }
    +    ],
    +    "features": [
    +
    +    ],
    +    "tags": [
    +
    +    ]
    +}
    +
    +(CPT)
    +
  6. +
  7. +

    同步数据库

    +

    同步nova-api数据库:

    +
    su -s /bin/sh -c "nova-manage api_db sync" nova                                                (CTL)
    +

    注册cell0数据库:

    +
    su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova                                          (CTL)
    +

    创建cell1 cell:

    +
    su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova                 (CTL)
    +

    同步nova数据库:

    +
    su -s /bin/sh -c "nova-manage db sync" nova                                                    (CTL)
    +

    验证cell0和cell1注册正确:

    +
    su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova                                         (CTL)
    +

    添加计算节点到openstack集群

    +
    su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova                           (CPT)
    +
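When more compute nodes are added later, the discover command above has to be re-run, or nova-scheduler can be told to discover hosts periodically instead. A hedged example of the latter, set on the controller:

```shell
vim /etc/nova/nova.conf    # (CTL)

# Discover newly added compute hosts every 300 seconds
[scheduler]
discover_hosts_in_cells_interval = 300
```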
  8. +
  9. +

    启动服务

    +
    systemctl enable \                                                                             (CTL)
    +openstack-nova-api.service \
    +openstack-nova-scheduler.service \
    +openstack-nova-conductor.service \
    +openstack-nova-novncproxy.service
    +
    +systemctl start \                                                                              (CTL)
    +openstack-nova-api.service \
    +openstack-nova-scheduler.service \
    +openstack-nova-conductor.service \
    +openstack-nova-novncproxy.service
    +
    systemctl enable libvirtd.service openstack-nova-compute.service                               (CPT)
    +systemctl start libvirtd.service openstack-nova-compute.service                                (CPT)
    +
  10. +
  11. +

    验证

    +
    source ~/.admin-openrc                                                                         (CTL)
    +

    列出服务组件,验证每个流程都成功启动和注册:

    +
    openstack compute service list                                                                 (CTL)
    +

    列出身份服务中的API端点,验证与身份服务的连接:

    +
    openstack catalog list                                                                         (CTL)
    +

    列出镜像服务中的镜像,验证与镜像服务的连接:

    +
    openstack image list                                                                           (CTL)
    +

    检查cells是否运作成功,以及其他必要条件是否已具备。

    +
    nova-status upgrade check                                                                      (CTL)
    +
  12. +
+

Neutron 安装

+
    +
  1. +

    创建数据库、服务凭证和 API 端点

    +

    创建数据库:

    +
    mysql -u root -p                                                                               (CTL)
    +
    +MariaDB [(none)]> CREATE DATABASE neutron;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
    +IDENTIFIED BY 'NEUTRON_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
    +IDENTIFIED BY 'NEUTRON_DBPASS';
    +MariaDB [(none)]> exit
    +

    注意

    +

    替换 NEUTRON_DBPASS 为 neutron 数据库设置密码。

    +
    source ~/.admin-openrc                                                                         (CTL)
    +

    创建neutron服务凭证

    +
    openstack user create --domain default --password-prompt neutron                               (CTL)
    +openstack role add --project service --user neutron admin                                      (CTL)
    +openstack service create --name neutron --description "OpenStack Networking" network           (CTL)
    +

    创建Neutron服务API端点:

    +
    openstack endpoint create --region RegionOne network public http://controller:9696             (CTL)
    +openstack endpoint create --region RegionOne network internal http://controller:9696           (CTL)
    +openstack endpoint create --region RegionOne network admin http://controller:9696              (CTL)
    +
  2. +
  3. +

    安装软件包:

    +
    yum install openstack-neutron openstack-neutron-linuxbridge ebtables ipset \                   (CTL)
    +openstack-neutron-ml2
    +
    yum install openstack-neutron-linuxbridge ebtables ipset                                       (CPT)
    +
  4. +
  5. +

    配置neutron相关配置:

    +

    配置主体配置

    +
    vim /etc/neutron/neutron.conf
    +
    +[database]
    +connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron                         (CTL)
    +
    +[DEFAULT]
    +core_plugin = ml2                                                                              (CTL)
    +service_plugins = router                                                                       (CTL)
    +allow_overlapping_ips = true                                                                   (CTL)
    +transport_url = rabbit://openstack:RABBIT_PASS@controller
    +auth_strategy = keystone
    +notify_nova_on_port_status_changes = true                                                      (CTL)
    +notify_nova_on_port_data_changes = true                                                        (CTL)
    +api_workers = 3                                                                                (CTL)
    +
    +[keystone_authtoken]
    +www_authenticate_uri = http://controller:5000
    +auth_url = http://controller:5000
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +project_name = service
    +username = neutron
    +password = NEUTRON_PASS
    +
    +[nova]
    +auth_url = http://controller:5000                                                              (CTL)
    +auth_type = password                                                                           (CTL)
    +project_domain_name = Default                                                                  (CTL)
    +user_domain_name = Default                                                                     (CTL)
    +region_name = RegionOne                                                                        (CTL)
    +project_name = service                                                                         (CTL)
    +username = nova                                                                                (CTL)
    +password = NOVA_PASS                                                                           (CTL)
    +
    +[oslo_concurrency]
    +lock_path = /var/lib/neutron/tmp
    +

    解释

    +

    [database]部分,配置数据库入口;

    +

    [default]部分,启用ml2插件和router插件,允许ip地址重叠,配置RabbitMQ消息队列入口;

    +

    [default] [keystone]部分,配置身份认证服务入口;

    +

    [default] [nova]部分,配置网络来通知计算网络拓扑的变化;

    +

    [oslo_concurrency]部分,配置lock path。

    +

    注意

    +

    替换NEUTRON_DBPASS为 neutron 数据库的密码;

    +

    替换RABBIT_PASS为 RabbitMQ中openstack 账户的密码;

    +

    替换NEUTRON_PASS为 neutron 用户的密码;

    +

    替换NOVA_PASS为 nova 用户的密码。

    +

    配置ML2插件:

    +
    vim /etc/neutron/plugins/ml2/ml2_conf.ini
    +
    +[ml2]
    +type_drivers = flat,vlan,vxlan
    +tenant_network_types = vxlan
    +mechanism_drivers = linuxbridge,l2population
    +extension_drivers = port_security
    +
    +[ml2_type_flat]
    +flat_networks = provider
    +
    +[ml2_type_vxlan]
    +vni_ranges = 1:1000
    +
    +[securitygroup]
    +enable_ipset = true
    +

    创建/etc/neutron/plugin.ini的符号链接

    +
    ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
    +

    注意

    +

    [ml2]部分,启用 flat、vlan、vxlan 网络,启用 linuxbridge 及 l2population 机制,启用端口安全扩展驱动;

    +

    [ml2_type_flat]部分,配置 flat 网络为 provider 虚拟网络;

    +

    [ml2_type_vxlan]部分,配置 VXLAN 网络标识符范围;

    +

    [securitygroup]部分,配置允许 ipset。

    +

    补充

    +

    l2 的具体配置可以根据用户需求自行修改,本文使用的是provider network + linuxbridge

    +

    配置 Linux bridge 代理:

    +
    vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
    +
    +[linux_bridge]
    +physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME
    +
    +[vxlan]
    +enable_vxlan = true
    +local_ip = OVERLAY_INTERFACE_IP_ADDRESS
    +l2_population = true
    +
    +[securitygroup]
    +enable_security_group = true
    +firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
    +

    解释

    +

    [linux_bridge]部分,映射 provider 虚拟网络到物理网络接口;

    +

    [vxlan]部分,启用 vxlan 覆盖网络,配置处理覆盖网络的物理网络接口 IP 地址,启用 layer-2 population;

    +

    [securitygroup]部分,允许安全组,配置 linux bridge iptables 防火墙驱动。

    +

    注意

    +

    替换PROVIDER_INTERFACE_NAME为物理网络接口;

    +

    替换OVERLAY_INTERFACE_IP_ADDRESS为控制节点的管理IP地址。

    +
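For reference, a filled-in sketch of the two values above, assuming the provider physical NIC is named eth1 and the node's management IP is 10.0.0.11 (both are example values, not taken from this guide):

```shell
# /etc/neutron/plugins/ml2/linuxbridge_agent.ini -- example values only
[linux_bridge]
physical_interface_mappings = provider:eth1

[vxlan]
enable_vxlan = true
local_ip = 10.0.0.11
l2_population = true
```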

    配置Layer-3代理:

    +
    vim /etc/neutron/l3_agent.ini                                                                  (CTL)
    +
    +[DEFAULT]
    +interface_driver = linuxbridge
    +

    解释

    +

    在[default]部分,配置接口驱动为linuxbridge

    +

    配置DHCP代理:

    +
    vim /etc/neutron/dhcp_agent.ini                                                                (CTL)
    +
    +[DEFAULT]
    +interface_driver = linuxbridge
    +dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
    +enable_isolated_metadata = true
    +

    解释

    +

    [default]部分,配置linuxbridge接口驱动、Dnsmasq DHCP驱动,启用隔离的元数据。

    +

    配置metadata代理:

    +
    vim /etc/neutron/metadata_agent.ini                                                            (CTL)
    +
    +[DEFAULT]
    +nova_metadata_host = controller
    +metadata_proxy_shared_secret = METADATA_SECRET
    +

    解释

    +

    [default]部分,配置元数据主机和shared secret。

    +

    注意

    +

    替换METADATA_SECRET为合适的元数据代理secret。

    +
  6. +
  7. +

    配置nova相关配置

    +
    vim /etc/nova/nova.conf
    +
    +[neutron]
    +auth_url = http://controller:5000
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +region_name = RegionOne
    +project_name = service
    +username = neutron
    +password = NEUTRON_PASS
    +service_metadata_proxy = true                                                                  (CTL)
    +metadata_proxy_shared_secret = METADATA_SECRET                                                 (CTL)
    +

    解释

    +

    [neutron]部分,配置访问参数,启用元数据代理,配置secret。

    +

    注意

    +

    替换NEUTRON_PASS为 neutron 用户的密码;

    +

    替换METADATA_SECRET为合适的元数据代理secret。

    +
  8. +
  9. +

    同步数据库:

    +
    su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
    +--config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
    +
  10. +
  11. +

    重启计算API服务:

    +
    systemctl restart openstack-nova-api.service
    +
  12. +
  13. +

    启动网络服务

    +
    systemctl enable neutron-server.service neutron-linuxbridge-agent.service \                    (CTL)
    +neutron-dhcp-agent.service neutron-metadata-agent.service \
    +neutron-l3-agent.service
    +
    +systemctl restart openstack-nova-api.service neutron-server.service \                          (CTL)
    +neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
    +neutron-metadata-agent.service neutron-l3-agent.service
    +
    +systemctl enable neutron-linuxbridge-agent.service                                             (CPT)
    +systemctl restart neutron-linuxbridge-agent.service openstack-nova-compute.service             (CPT)
    +
  14. +
  15. +

    验证

    +

    验证 neutron 代理启动成功:

    +
    openstack network agent list
    +
  16. +
+

Cinder 安装

+
    +
  1. +

    创建数据库、服务凭证和 API 端点

    +

    创建数据库:

    +
    mysql -u root -p
    +
    +MariaDB [(none)]> CREATE DATABASE cinder;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \
    +IDENTIFIED BY 'CINDER_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \
    +IDENTIFIED BY 'CINDER_DBPASS';
    +MariaDB [(none)]> exit
    +

    注意

    +

    替换 CINDER_DBPASS 为cinder数据库设置密码。

    +
    source ~/.admin-openrc
    +

    创建cinder服务凭证:

    +
    openstack user create --domain default --password-prompt cinder
    +openstack role add --project service --user cinder admin
    +openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
    +openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
    +

    创建块存储服务API端点:

    +
    openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(project_id\)s
    +openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(project_id\)s
    +openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(project_id\)s
    +openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s
    +openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s
    +openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s
    +
  2. +
  3. +

    安装软件包:

    +
    yum install openstack-cinder-api openstack-cinder-scheduler                                    (CTL)
    +
    yum install lvm2 device-mapper-persistent-data scsi-target-utils rpcbind nfs-utils \           (STG)
    +            openstack-cinder-volume openstack-cinder-backup
    +
  4. +
  5. +

    准备存储设备,以下仅为示例:

    +
    pvcreate /dev/vdb
    +vgcreate cinder-volumes /dev/vdb
    +
    +vim /etc/lvm/lvm.conf
    +
    +
    +devices {
    +...
    +filter = [ "a/vdb/", "r/.*/"]
    +

    解释

    +

    在devices部分,添加过滤以接受/dev/vdb设备拒绝其他设备。

    +
  6. +
  7. +

    准备NFS

    +
    mkdir -p /root/cinder/backup
    +
    +cat << EOF >> /etc/exports
    +/root/cinder/backup 192.168.1.0/24(rw,sync,no_root_squash,no_all_squash)
    +EOF
    +
    +
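Once the nfs-server service has been started (done in a later step), the export can be refreshed and verified. A minimal sketch; the backup_share option configured below should then point at this host and path (for example 192.168.1.x:/root/cinder/backup, where the IP is an example value):

```shell
exportfs -ar               # re-read /etc/exports and refresh all exports
showmount -e 127.0.0.1     # confirm /root/cinder/backup is exported
```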
  8. +
  9. +

    配置cinder相关配置:

    +
    vim /etc/cinder/cinder.conf
    +
    +[DEFAULT]
    +transport_url = rabbit://openstack:RABBIT_PASS@controller
    +auth_strategy = keystone
    +my_ip = 10.0.0.11
    +enabled_backends = lvm                                                                         (STG)
    +backup_driver=cinder.backup.drivers.nfs.NFSBackupDriver                                        (STG)
    +backup_share=HOST:PATH                                                                         (STG)
    +
    +[database]
    +connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder
    +
    +[keystone_authtoken]
    +www_authenticate_uri = http://controller:5000
    +auth_url = http://controller:5000
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +project_name = service
    +username = cinder
    +password = CINDER_PASS
    +
    +[oslo_concurrency]
    +lock_path = /var/lib/cinder/tmp
    +
    +[lvm]
    +volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver                                      (STG)
    +volume_group = cinder-volumes                                                                  (STG)
    +iscsi_protocol = iscsi                                                                         (STG)
    +iscsi_helper = tgtadm                                                                          (STG)
    +

    解释

    +

    [database]部分,配置数据库入口;

    +

    [DEFAULT]部分,配置RabbitMQ消息队列入口,配置my_ip;

    +

    [DEFAULT] [keystone_authtoken]部分,配置身份认证服务入口;

    +

    [oslo_concurrency]部分,配置lock path。

    +

    注意

    +

    替换CINDER_DBPASS为 cinder 数据库的密码;

    +

    替换RABBIT_PASS为 RabbitMQ 中 openstack 账户的密码;

    +

    配置my_ip为控制节点的管理 IP 地址;

    +

    替换CINDER_PASS为 cinder 用户的密码;

    +

    替换HOST:PATH为 NFS 的HOSTIP和共享路径;

    +
  10. +
  11. +

    同步数据库:

    +
    su -s /bin/sh -c "cinder-manage db sync" cinder                                                (CTL)
    +
  12. +
  13. +

    配置nova:

    +
    vim /etc/nova/nova.conf                                                                        (CTL)
    +
    +[cinder]
    +os_region_name = RegionOne
    +
  14. +
  15. +

    重启计算API服务

    +
    systemctl restart openstack-nova-api.service
    +
  16. +
  17. +

    启动cinder服务

    +
    systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service               (CTL)
    +systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service                (CTL)
    +
    systemctl enable rpcbind.service nfs-server.service tgtd.service iscsid.service \              (STG)
    +                 openstack-cinder-volume.service \
    +                 openstack-cinder-backup.service
    +systemctl start rpcbind.service nfs-server.service tgtd.service iscsid.service \               (STG)
    +                openstack-cinder-volume.service \
    +                openstack-cinder-backup.service
    +

    注意

    +

    当cinder使用tgtadm的方式挂卷的时候,要修改/etc/tgt/tgtd.conf,内容如下,保证tgtd可以发现cinder-volume的iscsi target。

    +
    include /var/lib/cinder/volumes/*
    +
  18. +
  19. +

    验证

    +
    source ~/.admin-openrc
    +openstack volume service list
    +
  20. +
+

horizon 安装

+
    +
  1. +

    安装软件包

    +
    yum install openstack-dashboard
    +
  2. +
  3. +

    修改文件

    +

    修改变量

    +
    vim /etc/openstack-dashboard/local_settings
    +
    +OPENSTACK_HOST = "controller"
    +ALLOWED_HOSTS = ['*', ]
    +
    +SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
    +
    +CACHES = {
    +'default': {
    +     'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
    +     'LOCATION': 'controller:11211',
    +    }
    +}
    +
    +OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
    +OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
    +OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
    +OPENSTACK_KEYSTONE_DEFAULT_ROLE = "member"
    +WEBROOT = '/dashboard'
    +POLICY_FILES_PATH = "/etc/openstack-dashboard"
    +
    +OPENSTACK_API_VERSIONS = {
    +    "identity": 3,
    +    "image": 2,
    +    "volume": 3,
    +}
    +
  4. +
  5. +

    重启 httpd 服务

    +
    systemctl restart httpd.service memcached.service
    +
  6. +
  7. +

    Verification: open a browser, go to http://HOSTIP/dashboard/, and log in to horizon.

    +

    注意

    +

    替换HOSTIP为控制节点管理平面IP地址

    +
  8. +
+

Tempest 安装

+

Tempest是OpenStack的集成测试服务,如果用户需要全面自动化测试已安装的OpenStack环境的功能,则推荐使用该组件。否则,可以不用安装。

+
    +
  1. +

    安装Tempest

    +
    yum install openstack-tempest
    +
  2. +
  3. +

    初始化目录

    +
    tempest init mytest
    +
  4. +
  5. +

    修改配置文件。

    +
    cd mytest
    +vi etc/tempest.conf
    +

    tempest.conf must be filled in with the details of the current OpenStack environment; see the official sample for the full set of options, and the minimal sketch below for the options that are typically required.

    +
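A minimal sketch of the settings usually needed, assuming the Keystone endpoint and admin credentials created earlier in this guide; option names follow upstream Tempest, and IMAGE_UUID / FLAVOR_ID are placeholders to fill in from your own environment:

```shell
vim etc/tempest.conf

[auth]
admin_username = admin
admin_password = ADMIN_PASS
admin_project_name = admin
admin_domain_name = Default

[identity]
uri_v3 = http://controller:5000/v3

[compute]
image_ref = IMAGE_UUID     # an image UUID from `openstack image list`
flavor_ref = FLAVOR_ID     # a flavor ID from `openstack flavor list`
```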
  6. +
  7. +

    执行测试

    +
    tempest run
    +
  8. +
  9. +

    Install tempest plugins (optional). The individual OpenStack services also provide their own tempest test packages, which can be installed to extend the test coverage. In Wallaby, extension tests are provided for Cinder, Glance, Keystone, Ironic, and Trove; install them with the following command:

    yum install python3-cinder-tempest-plugin python3-glance-tempest-plugin python3-ironic-tempest-plugin python3-keystone-tempest-plugin python3-trove-tempest-plugin

    +
  10. +
+

Ironic 安装

+

Ironic是OpenStack的裸金属服务,如果用户需要进行裸机部署则推荐使用该组件。否则,可以不用安装。

+
    +
  1. 设置数据库
  2. +
+

裸金属服务在数据库中存储信息,创建一个ironic用户可以访问的ironic数据库,替换IRONIC_DBPASSWORD为合适的密码

+
mysql -u root -p
+
+MariaDB [(none)]> CREATE DATABASE ironic CHARACTER SET utf8;
+MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'localhost' \
+IDENTIFIED BY 'IRONIC_DBPASSWORD';
+MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'%' \
+IDENTIFIED BY 'IRONIC_DBPASSWORD';
+
    +
  1. 创建服务用户认证
  2. +
+

1、创建Bare Metal服务用户

+
openstack user create --password IRONIC_PASSWORD \
+                      --email ironic@example.com ironic
+openstack role add --project service --user ironic admin
+openstack service create --name ironic \
+                         --description "Ironic baremetal provisioning service" baremetal
+
+openstack service create --name ironic-inspector --description     "Ironic inspector baremetal provisioning service" baremetal-introspection
+openstack user create --password IRONIC_INSPECTOR_PASSWORD --email ironic_inspector@example.com ironic_inspector
+openstack role add --project service --user ironic_inspector admin
+

2、创建Bare Metal服务访问入口

+
openstack endpoint create --region RegionOne baremetal admin http://$IRONIC_NODE:6385
+openstack endpoint create --region RegionOne baremetal public http://$IRONIC_NODE:6385
+openstack endpoint create --region RegionOne baremetal internal http://$IRONIC_NODE:6385
+openstack endpoint create --region RegionOne baremetal-introspection internal http://172.20.19.13:5050/v1
+openstack endpoint create --region RegionOne baremetal-introspection public http://172.20.19.13:5050/v1
+openstack endpoint create --region RegionOne baremetal-introspection admin http://172.20.19.13:5050/v1
+
    +
  1. 配置ironic-api服务
  2. +
+

配置文件路径/etc/ironic/ironic.conf

+

1. Configure the location of the database via the connection option, as shown below. Replace IRONIC_DBPASSWORD with the password of the ironic user and DB_IP with the IP address of the DB server:

+
[database]
+
+# The SQLAlchemy connection string used to connect to the
+# database (string value)
+
+connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic
+

2、通过以下选项配置ironic-api服务使用RabbitMQ消息代理,替换RPC_*为RabbitMQ的详细地址和凭证

+
[DEFAULT]
+
+# A URL representing the messaging driver to use and its full
+# configuration. (string value)
+
+transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
+

用户也可自行使用json-rpc方式替换rabbitmq

+

3、配置ironic-api服务使用身份认证服务的凭证,替换PUBLIC_IDENTITY_IP为身份认证服务器的公共IP,替换PRIVATE_IDENTITY_IP为身份认证服务器的私有IP,替换IRONIC_PASSWORD为身份认证服务中ironic用户的密码:

+
[DEFAULT]
+
+# Authentication strategy used by ironic-api: one of
+# "keystone" or "noauth". "noauth" should not be used in a
+# production environment because all authentication will be
+# disabled. (string value)
+
+auth_strategy=keystone
+host = controller
+memcache_servers = controller:11211
+enabled_network_interfaces = flat,noop,neutron
+default_network_interface = noop
+transport_url = rabbit://openstack:RABBITPASSWD@controller:5672/
+enabled_hardware_types = ipmi
+enabled_boot_interfaces = pxe
+enabled_deploy_interfaces = direct
+default_deploy_interface = direct
+enabled_inspect_interfaces = inspector
+enabled_management_interfaces = ipmitool
+enabled_power_interfaces = ipmitool
+enabled_rescue_interfaces = no-rescue,agent
+isolinux_bin = /usr/share/syslinux/isolinux.bin
+logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s
+
+[keystone_authtoken]
+# Authentication type to load (string value)
+auth_type=password
+# Complete public Identity API endpoint (string value)
+www_authenticate_uri=http://PUBLIC_IDENTITY_IP:5000
+# Complete admin Identity API endpoint. (string value)
+auth_url=http://PRIVATE_IDENTITY_IP:5000
+# Service username. (string value)
+username=ironic
+# Service account password. (string value)
+password=IRONIC_PASSWORD
+# Service tenant name. (string value)
+project_name=service
+# Domain name containing project (string value)
+project_domain_name=Default
+# User's domain name (string value)
+user_domain_name=Default
+
+[agent]
+deploy_logs_collect = always
+deploy_logs_local_path = /var/log/ironic/deploy
+deploy_logs_storage_backend = local
+image_download_source = http
+stream_raw_images = false
+force_raw_images = false
+verify_ca = False
+
+[oslo_concurrency]
+
+[oslo_messaging_notifications]
+transport_url = rabbit://openstack:123456@172.20.19.25:5672/
+topics = notifications
+driver = messagingv2
+
+[oslo_messaging_rabbit]
+amqp_durable_queues = True
+rabbit_ha_queues = True
+
+[pxe]
+ipxe_enabled = false
+pxe_append_params = nofb nomodeset vga=normal coreos.autologin ipa-insecure=1
+image_cache_size = 204800
+tftp_root=/var/lib/tftpboot/cephfs/
+tftp_master_path=/var/lib/tftpboot/cephfs/master_images
+
+[dhcp]
+dhcp_provider = none
+

4、创建裸金属服务数据库表

+
ironic-dbsync --config-file /etc/ironic/ironic.conf create_schema
+

5、重启ironic-api服务

+
sudo systemctl restart openstack-ironic-api
+
    +
  1. 配置ironic-conductor服务
  2. +
+

1、替换HOST_IP为conductor host的IP

+
[DEFAULT]
+
+# IP address of this host. If unset, will determine the IP
+# programmatically. If unable to do so, will use "127.0.0.1".
+# (string value)
+
+my_ip=HOST_IP
+

2. Configure the location of the database; ironic-conductor should use the same configuration as ironic-api. Replace IRONIC_DBPASSWORD with the password of the ironic user and DB_IP with the IP address of the DB server:

+
[database]
+
+# The SQLAlchemy connection string to use to connect to the
+# database. (string value)
+
+connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic
+

3、通过以下选项配置ironic-api服务使用RabbitMQ消息代理,ironic-conductor应该使用和ironic-api相同的配置,替换RPC_*为RabbitMQ的详细地址和凭证

+
[DEFAULT]
+
+# A URL representing the messaging driver to use and its full
+# configuration. (string value)
+
+transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
+

用户也可自行使用json-rpc方式替换rabbitmq

+

4、配置凭证访问其他OpenStack服务

+

为了与其他OpenStack服务进行通信,裸金属服务在请求其他服务时需要使用服务用户与OpenStack Identity服务进行认证。这些用户的凭据必须在与相应服务相关的每个配置文件中进行配置。

+
[neutron] - 访问OpenStack网络服务
+[glance] - 访问OpenStack镜像服务
+[swift] - 访问OpenStack对象存储服务
+[cinder] - 访问OpenStack块存储服务
+[inspector] - 访问OpenStack裸金属introspection服务
+[service_catalog] - 一个特殊项用于保存裸金属服务使用的凭证,该凭证用于发现注册在OpenStack身份认证服务目录中的自己的API URL端点
+

简单起见,可以对所有服务使用同一个服务用户。为了向后兼容,该用户应该和ironic-api服务的[keystone_authtoken]所配置的为同一个用户。但这不是必须的,也可以为每个服务创建并配置不同的服务用户。

+

在下面的示例中,用户访问OpenStack网络服务的身份验证信息配置为:

+
网络服务部署在名为RegionOne的身份认证服务域中,仅在服务目录中注册公共端点接口
+
+请求时使用特定的CA SSL证书进行HTTPS连接
+
+与ironic-api服务配置相同的服务用户
+
+动态密码认证插件基于其他选项发现合适的身份认证服务API版本
+
[neutron]
+
+# Authentication type to load (string value)
+auth_type = password
+# Authentication URL (string value)
+auth_url=https://IDENTITY_IP:5000/
+# Username (string value)
+username=ironic
+# User's password (string value)
+password=IRONIC_PASSWORD
+# Project name to scope to (string value)
+project_name=service
+# Domain ID containing project (string value)
+project_domain_id=default
+# User's domain id (string value)
+user_domain_id=default
+# PEM encoded Certificate Authority to use when verifying
+# HTTPs connections. (string value)
+cafile=/opt/stack/data/ca-bundle.pem
+# The default region_name for endpoint URL discovery. (string
+# value)
+region_name = RegionOne
+# List of interfaces, in order of preference, for endpoint
+# URL. (list value)
+valid_interfaces=public
+

默认情况下,为了与其他服务进行通信,裸金属服务会尝试通过身份认证服务的服务目录发现该服务合适的端点。如果希望对一个特定服务使用一个不同的端点,则在裸金属服务的配置文件中通过endpoint_override选项进行指定:

+
[neutron] ... endpoint_override = <NEUTRON_API_ADDRESS>
+

5、配置允许的驱动程序和硬件类型

+

通过设置enabled_hardware_types设置ironic-conductor服务允许使用的硬件类型:

+
[DEFAULT]
+enabled_hardware_types = ipmi
+

配置硬件接口:

+
enabled_boot_interfaces = pxe
+enabled_deploy_interfaces = direct,iscsi
+enabled_inspect_interfaces = inspector
+enabled_management_interfaces = ipmitool
+enabled_power_interfaces = ipmitool
+

配置接口默认值:

+
[DEFAULT]
+default_deploy_interface = direct
+default_network_interface = neutron
+

如果启用了任何使用Direct deploy的驱动,必须安装和配置镜像服务的Swift后端。Ceph对象网关(RADOS网关)也支持作为镜像服务的后端。

+

6、重启ironic-conductor服务

+
sudo systemctl restart openstack-ironic-conductor
+
    +
  1. 配置ironic-inspector服务
  2. +
+

配置文件路径/etc/ironic-inspector/inspector.conf

+

1、创建数据库

+
# mysql -u root -p
+
+MariaDB [(none)]> CREATE DATABASE ironic_inspector CHARACTER SET utf8;
+
+MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic_inspector.* TO 'ironic_inspector'@'localhost' \     IDENTIFIED BY 'IRONIC_INSPECTOR_DBPASSWORD';
+MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic_inspector.* TO 'ironic_inspector'@'%' \
+IDENTIFIED BY 'IRONIC_INSPECTOR_DBPASSWORD';
+

2. Configure the location of the database via the connection option, as shown below. Replace IRONIC_INSPECTOR_DBPASSWORD with the password of the ironic_inspector user and DB_IP with the IP address of the DB server:

+
[database]
+backend = sqlalchemy
+connection = mysql+pymysql://ironic_inspector:IRONIC_INSPECTOR_DBPASSWORD@DB_IP/ironic_inspector
+min_pool_size = 100
+max_pool_size = 500
+pool_timeout = 30
+max_retries = 5
+max_overflow = 200
+db_retry_interval = 2
+db_inc_retry_interval = True
+db_max_retry_interval = 2
+db_max_retries = 5
+

3. Configure the message queue transport URL

+
[DEFAULT] 
+transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
+
+

4、设置keystone认证

+
[DEFAULT]
+
+auth_strategy = keystone
+timeout = 900
+rootwrap_config = /etc/ironic-inspector/rootwrap.conf
+logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s
+log_dir = /var/log/ironic-inspector
+state_path = /var/lib/ironic-inspector
+use_stderr = False
+
+[ironic]
+api_endpoint = http://IRONIC_API_HOST_ADDRESS:6385
+auth_type = password
+auth_url = http://PUBLIC_IDENTITY_IP:5000
+auth_strategy = keystone
+ironic_url = http://IRONIC_API_HOST_ADDRESS:6385
+os_region = RegionOne
+project_name = service
+project_domain_name = Default
+user_domain_name = Default
+username = IRONIC_SERVICE_USER_NAME
+password = IRONIC_SERVICE_USER_PASSWORD
+
+[keystone_authtoken]
+auth_type = password
+auth_url = http://control:5000
+www_authenticate_uri = http://control:5000
+project_domain_name = default
+user_domain_name = default
+project_name = service
+username = ironic_inspector
+password = IRONICPASSWD
+region_name = RegionOne
+memcache_servers = control:11211
+token_cache_time = 300
+
+[processing]
+add_ports = active
+processing_hooks = $default_processing_hooks,local_link_connection,lldp_basic
+ramdisk_logs_dir = /var/log/ironic-inspector/ramdisk
+always_store_ramdisk_logs = true
+store_data =none
+power_off = false
+
+[pxe_filter]
+driver = iptables
+
+[capabilities]
+boot_mode=True
+

5、配置ironic inspector dnsmasq服务

+
# 配置文件地址:/etc/ironic-inspector/dnsmasq.conf
+port=0
+interface=enp3s0                         #替换为实际监听网络接口
+dhcp-range=172.20.19.100,172.20.19.110   #替换为实际dhcp地址范围
+bind-interfaces
+enable-tftp
+
+dhcp-match=set:efi,option:client-arch,7
+dhcp-match=set:efi,option:client-arch,9
+dhcp-match=aarch64, option:client-arch,11
+dhcp-boot=tag:aarch64,grubaa64.efi
+dhcp-boot=tag:!aarch64,tag:efi,grubx64.efi
+dhcp-boot=tag:!aarch64,tag:!efi,pxelinux.0
+
+tftp-root=/tftpboot                       #替换为实际tftpboot目录
+log-facility=/var/log/dnsmasq.log
+

6、关闭ironic provision网络子网的dhcp

+
openstack subnet set --no-dhcp 72426e89-f552-4dc4-9ac7-c4e131ce7f3c
+

7、初始化ironic-inspector服务的数据库

+

在控制节点执行:

+
ironic-inspector-dbsync --config-file /etc/ironic-inspector/inspector.conf upgrade
+

8、启动服务

+
systemctl enable --now openstack-ironic-inspector.service
+systemctl enable --now openstack-ironic-inspector-dnsmasq.service
+
    +
  1. +

    配置httpd服务

    +
  2. +
  3. +

    创建ironic要使用的httpd的root目录并设置属主属组,目录路径要和/etc/ironic/ironic.conf中[deploy]组中http_root 配置项指定的路径要一致。

    +
    mkdir -p /var/lib/ironic/httproot
    +chown ironic.ironic /var/lib/ironic/httproot
    +
  4. +
  5. +

    安装和配置httpd服务

    +
      +
    1. +

      安装httpd服务,已有请忽略

      +
      yum install httpd -y
      +
    2. +
    3. +

      创建/etc/httpd/conf.d/openstack-ironic-httpd.conf文件,内容如下:

      +
      Listen 8080
      +
      +<VirtualHost *:8080>
      +    ServerName ironic.openeuler.com
      +
      +    ErrorLog "/var/log/httpd/openstack-ironic-httpd-error_log"
      +    CustomLog "/var/log/httpd/openstack-ironic-httpd-access_log" "%h %l %u %t \"%r\" %>s %b"
      +
      +    DocumentRoot "/var/lib/ironic/httproot"
      +    <Directory "/var/lib/ironic/httproot">
      +        Options Indexes FollowSymLinks
      +        Require all granted
      +    </Directory>
      +    LogLevel warn
      +    AddDefaultCharset UTF-8
      +    EnableSendfile on
      +</VirtualHost>
      +
      +

      注意监听的端口要和/etc/ironic/ironic.conf里[deploy]选项中http_url配置项中指定的端口一致。

      +
    4. +
    5. +

      重启httpd服务。

      +
      systemctl restart httpd
      +
    6. +
    +
  6. +
  7. +

    deploy ramdisk镜像制作

    +
  8. +
+

The Wallaby ramdisk image can be built with the ironic-python-agent service or the disk-image-builder tool, or with the community's latest ironic-python-agent-builder; users may also choose other tools. If the native Wallaby tools are used, the corresponding packages need to be installed.

+
yum install openstack-ironic-python-agent
+或者
+yum install diskimage-builder
+

具体的使用方法可以参考官方文档

+

这里介绍下使用ironic-python-agent-builder构建ironic使用的deploy镜像的完整过程。

+
    +
  1. +

    安装 ironic-python-agent-builder

    +
    1. 安装工具:
    +
    +    ```shell
    +    pip install ironic-python-agent-builder
    +    ```
    +
    +2. 修改以下文件中的python解释器:
    +
    +    ```shell
    +    /usr/bin/yum /usr/libexec/urlgrabber-ext-down
    +    ```
    +
    +3. 安装其它必须的工具:
    +
    +    ```shell
    +    yum install git
    +    ```
    +
    +    由于`DIB`依赖`semanage`命令,所以在制作镜像之前确定该命令是否可用:`semanage --help`,如果提示无此命令,安装即可:
    +
    +    ```shell
    +    # 先查询需要安装哪个包
    +    [root@localhost ~]# yum provides /usr/sbin/semanage
    +    已加载插件:fastestmirror
    +    Loading mirror speeds from cached hostfile
    +    * base: mirror.vcu.edu
    +    * extras: mirror.vcu.edu
    +    * updates: mirror.math.princeton.edu
    +    policycoreutils-python-2.5-34.el7.aarch64 : SELinux policy core python utilities
    +    源    :base
    +    匹配来源:
    +    文件名    :/usr/sbin/semanage
    +    # 安装
    +    [root@localhost ~]# yum install policycoreutils-python
    +    ```
    +
  2. +
  3. +

    制作镜像

    +
    如果是`arm`架构,需要添加:
    +```shell
    +export ARCH=aarch64
    +```
    +
    +基本用法:
    +
    +```shell
    +usage: ironic-python-agent-builder [-h] [-r RELEASE] [-o OUTPUT] [-e ELEMENT]
    +                                    [-b BRANCH] [-v] [--extra-args EXTRA_ARGS]
    +                                    distribution
    +
    +positional arguments:
    +    distribution          Distribution to use
    +
    +optional arguments:
    +    -h, --help            show this help message and exit
    +    -r RELEASE, --release RELEASE
    +                        Distribution release to use
    +    -o OUTPUT, --output OUTPUT
    +                        Output base file name
    +    -e ELEMENT, --element ELEMENT
    +                        Additional DIB element to use
    +    -b BRANCH, --branch BRANCH
    +                        If set, override the branch that is used for ironic-
    +                        python-agent and requirements
    +    -v, --verbose         Enable verbose logging in diskimage-builder
    +    --extra-args EXTRA_ARGS
    +                        Extra arguments to pass to diskimage-builder
    +```
    +
    +举例说明:
    +
    +```shell
    +ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky
    +```
    +
  4. +
  5. +

    允许ssh登录

    +
    初始化环境变量,然后制作镜像:
    +
    +```shell
    +export DIB_DEV_USER_USERNAME=ipa \
    +export DIB_DEV_USER_PWDLESS_SUDO=yes \
    +export DIB_DEV_USER_PASSWORD='123'
    +ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky -e selinux-permissive -e devuser
    +```
    +
  6. +
  7. +

    指定代码仓库

    +
    初始化对应的环境变量,然后制作镜像:
    +
    +```shell
    +# 指定仓库地址以及版本
    +DIB_REPOLOCATION_ironic_python_agent=git@172.20.2.149:liuzz/ironic-python-agent.git
    +DIB_REPOREF_ironic_python_agent=origin/develop
    +
    +# 直接从gerrit上clone代码
    +DIB_REPOLOCATION_ironic_python_agent=https://review.opendev.org/openstack/ironic-python-agent
    +DIB_REPOREF_ironic_python_agent=refs/changes/43/701043/1
    +```
    +
    +参考:[source-repositories](https://docs.openstack.org/diskimage-builder/latest/elements/source-repositories/README.html)。
    +
    +指定仓库地址及版本验证成功。
    +
  8. +
  9. +

    注意

    +
    原生的openstack里的pxe配置文件的模版不支持arm64架构,需要自己对原生openstack代码进行修改:
    +
    +在W版中,社区的ironic仍然不支持arm64位的uefi pxe启动,表现为生成的grub.cfg文件(一般位于/tftpboot/下)格式不对而导致pxe启动失败,如下:
    +
    +生成的错误配置文件:
    +

    ironic-err

    +
    如上图所示,arm架构里寻找vmlinux和ramdisk镜像的命令分别是linux和initrd,上图所示的标红命令是x86架构下的uefi pxe启动。
    +
    +需要用户对生成grub.cfg的代码逻辑自行修改。
    +
    +ironic向ipa发送查询命令执行状态请求的tls报错:
    +
    +w版的ipa和ironic默认都会开启tls认证的方式向对方发送请求,跟据官网的说明进行关闭即可。
    +
    +1. 修改ironic配置文件(/etc/ironic/ironic.conf)下面的配置中添加ipa-insecure=1:
    +
    +```
    +[agent]
    +verify_ca = False
    +
    +[pxe]
    +pxe_append_params = nofb nomodeset vga=normal coreos.autologin ipa-insecure=1
    +```
    +
    +2) ramdisk镜像中添加ipa配置文件/etc/ironic_python_agent/ironic_python_agent.conf并配置tls的配置如下:
    +
    +/etc/ironic_python_agent/ironic_python_agent.conf (需要提前创建/etc/ironic_python_agent目录)
    +
    +```
    +[DEFAULT]
    +enable_auto_tls = False
    +```
    +
    +设置权限:
    +
    +```
    +chown -R ipa.ipa /etc/ironic_python_agent/
    +```
    +
    +3. 修改ipa服务的服务启动文件,添加配置文件选项
    +
    +vim usr/lib/systemd/system/ironic-python-agent.service
    +
    +```
    +[Unit]
    +Description=Ironic Python Agent
    +After=network-online.target
    +
    +[Service]
    +ExecStartPre=/sbin/modprobe vfat
    +ExecStart=/usr/local/bin/ironic-python-agent --config-file /etc/ironic_python_agent/ironic_python_agent.conf
    +Restart=always
    +RestartSec=30s
    +
    +[Install]
    +WantedBy=multi-user.target
    +```
    +
  10. +
+

Kolla 安装

+

Kolla为OpenStack服务提供生产环境可用的容器化部署的功能。openEuler 22.03 LTS中引入了Kolla和Kolla-ansible服务。

+

Kolla的安装十分简单,只需要安装对应的RPM包即可

+
yum install openstack-kolla openstack-kolla-ansible
+

安装完后,就可以使用kolla-ansible, kolla-build, kolla-genpwd, kolla-mergepwd等命令了。

+
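A typical first step after installation is to generate the passwords used by a Kolla deployment. This is a hedged example; it assumes /etc/kolla/passwords.yml was installed by the kolla-ansible package (copy it from the package's example files otherwise):

```shell
# Fill /etc/kolla/passwords.yml with randomly generated passwords
kolla-genpwd
```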

Trove 安装

+

Trove是OpenStack的数据库服务,如果用户使用OpenStack提供的数据库服务则推荐使用该组件。否则,可以不用安装。

+

1.设置数据库

+

数据库服务在数据库中存储信息,创建一个trove用户可以访问的trove数据库,替换TROVE_DBPASSWORD为合适的密码

+
mysql -u root -p
+
+MariaDB [(none)]> CREATE DATABASE trove CHARACTER SET utf8;
+MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'localhost' \
+IDENTIFIED BY 'TROVE_DBPASSWORD';
+MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'%' \
+IDENTIFIED BY 'TROVE_DBPASSWORD';
+

2.创建服务用户认证

+

1、创建Trove服务用户

+

openstack user create --password TROVE_PASSWORD \
+                      --email trove@example.com trove
+openstack role add --project service --user trove admin
+openstack service create --name trove \
+                         --description "Database service" database
+

Explanation: replace TROVE_PASSWORD with the password of the trove user.

+

2、创建Database服务访问入口

+
openstack endpoint create --region RegionOne database public http://controller:8779/v1.0/%\(tenant_id\)s
+openstack endpoint create --region RegionOne database internal http://controller:8779/v1.0/%\(tenant_id\)s
+openstack endpoint create --region RegionOne database admin http://controller:8779/v1.0/%\(tenant_id\)s
+

3.安装和配置Trove各组件

+

1、安装Trove

+

yum install openstack-trove python-troveclient
+

2. Configure trove.conf

+
vim /etc/trove/trove.conf
+
+[DEFAULT]
+bind_host=TROVE_NODE_IP
+log_dir = /var/log/trove
+network_driver = trove.network.neutron.NeutronDriver
+management_security_groups = <manage security group>
+nova_keypair = trove-mgmt
+default_datastore = mysql
+taskmanager_manager = trove.taskmanager.manager.Manager
+trove_api_workers = 5
+transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
+reboot_time_out = 300
+usage_timeout = 900
+agent_call_high_timeout = 1200
+use_syslog = False
+debug = True
+
+# Set these if using Neutron Networking
+network_driver=trove.network.neutron.NeutronDriver
+network_label_regex=.*
+
+
+transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
+
+[database]
+connection = mysql+pymysql://trove:TROVE_DBPASS@controller/trove
+
+[keystone_authtoken]
+project_domain_name = Default
+project_name = service
+user_domain_name = Default
+password = trove
+username = trove
+auth_url = http://controller:5000/v3/
+auth_type = password
+
+[service_credentials]
+auth_url = http://controller:5000/v3/
+region_name = RegionOne
+project_name = service
+password = trove
+project_domain_name = Default
+user_domain_name = Default
+username = trove
+
+[mariadb]
+tcp_ports = 3306,4444,4567,4568
+
+[mysql]
+tcp_ports = 3306
+
+[postgresql]
+tcp_ports = 5432
+

解释:

+
- `bind_host` in the `[DEFAULT]` group is the IP address of the node where Trove is deployed
+- `nova_compute_url` and `cinder_url` are the endpoints created for Nova and Cinder in Keystone
+- `nova_proxy_XXX` is a user able to access the Nova service; the example above uses the `admin` user
+- `transport_url` is the `RabbitMQ` connection information; replace `RABBIT_PASS` with the RabbitMQ password
+- `connection` in the `[database]` group is the database created for Trove in mysql earlier
+- In the Trove user credentials, replace `TROVE_PASS` with the actual password of the trove user
+
+

3、配置trove-guestagent.conf +

vim /etc/trove/trove-guestagent.conf
+
+[DEFAULT]
+log_file = trove-guestagent.log
+log_dir = /var/log/trove/
+ignore_users = os_admin
+control_exchange = trove
+transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
+rpc_backend = rabbit
+command_process_timeout = 60
+use_syslog = False
+debug = True
+
+[service_credentials]
+auth_url = http://controller:5000/v3/
+region_name = RegionOne
+project_name = service
+password = TROVE_PASS
+project_domain_name = Default
+user_domain_name = Default
+username = trove
+
+[mysql]
+docker_image = your-registry/your-repo/mysql
+backup_docker_image = your-registry/your-repo/db-backup-mysql:1.1.0
+
Explanation: guestagent is a standalone Trove component that must be built in advance into the guest VM image Trove launches through Nova. After a database instance is created, the guestagent process reports heartbeats to Trove over the message queue (RabbitMQ), so the RabbitMQ user and password must be configured here.
Starting from the Victoria release, Trove uses a single unified image for the different database types, and the database service runs inside a Docker container in the guest VM.
- `transport_url` is the `RabbitMQ` connection information; replace `RABBIT_PASS` with the RabbitMQ password
- In the Trove user credentials, replace `TROVE_PASS` with the actual password of the trove user

+

4. Generate the Trove database tables

su -s /bin/sh -c "trove-manage db_sync" trove

+

4.完成安装配置

+
    +
  1. 配置Trove服务自启动 +
    systemctl enable openstack-trove-api.service \
    +openstack-trove-taskmanager.service \
    +openstack-trove-conductor.service 
  2. +
  3. 启动服务 +
    systemctl start openstack-trove-api.service \
    +openstack-trove-taskmanager.service \
    +openstack-trove-conductor.service
  4. +
+

Swift 安装

+

Swift 提供了弹性可伸缩、高可用的分布式对象存储服务,适合存储大规模非结构化数据。

+
    +
  1. +

    创建服务凭证、API端点。

    +

    创建服务凭证

    +
    #创建swift用户:
    +openstack user create --domain default --password-prompt swift                 
    +#为swift用户添加admin角色:
    +openstack role add --project service --user swift admin                        
    +#创建swift服务实体:
    +openstack service create --name swift --description "OpenStack Object Storage" object-store                                                                   
    +

    创建swift API 端点:

    +
    openstack endpoint create --region RegionOne object-store public http://controller:8080/v1/AUTH_%\(project_id\)s                            
    +openstack endpoint create --region RegionOne object-store internal http://controller:8080/v1/AUTH_%\(project_id\)s                            
    +openstack endpoint create --region RegionOne object-store admin http://controller:8080/v1                                                  
    +
  2. +
  3. +

    安装软件包:

    +
    yum install openstack-swift-proxy python3-swiftclient python3-keystoneclient python3-keystonemiddleware memcached (CTL)
    +
  4. +
  5. +

    配置proxy-server相关配置

    +

    Swift RPM包里已经包含了一个基本可用的proxy-server.conf,只需要手动修改其中的ip和swift password即可。

    +

    注意

    +

    注意替换password为您在身份服务中为swift用户选择的密码

    +
  6. +
  7. +

    安装和配置存储节点 (STG)

    +

    安装支持的程序包: +

    yum install xfsprogs rsync

    +

    将/dev/vdb和/dev/vdc设备格式化为 XFS

    +
    mkfs.xfs /dev/vdb
    +mkfs.xfs /dev/vdc
    +

    创建挂载点目录结构:

    +
    mkdir -p /srv/node/vdb
    +mkdir -p /srv/node/vdc
    +

    找到新分区的 UUID:

    +
    blkid
    +

    编辑/etc/fstab文件并将以下内容添加到其中:

    +
    UUID="<UUID-from-output-above>" /srv/node/vdb xfs noatime 0 2
    +UUID="<UUID-from-output-above>" /srv/node/vdc xfs noatime 0 2
    +

    挂载设备:

    +

    mount /srv/node/vdb
    +mount /srv/node/vdc
    +注意

    +

    如果用户不需要容灾功能,以上步骤只需要创建一个设备即可,同时可以跳过下面的rsync配置

    +

    (可选)创建或编辑/etc/rsyncd.conf文件以包含以下内容:

    +

    +uid = swift
    +gid = swift
    +log file = /var/log/rsyncd.log
    +pid file = /var/run/rsyncd.pid
    +address = MANAGEMENT_INTERFACE_IP_ADDRESS
    +
    +[account]
    +max connections = 2
    +path = /srv/node/
    +read only = False
    +lock file = /var/lock/account.lock
    +
    +[container]
    +max connections = 2
    +path = /srv/node/
    +read only = False
    +lock file = /var/lock/container.lock
    +
    +[object]
    +max connections = 2
    +path = /srv/node/
    +read only = False
    +lock file = /var/lock/object.lock
    +替换MANAGEMENT_INTERFACE_IP_ADDRESS为存储节点上管理网络的IP地址

    +

    启动rsyncd服务并配置它在系统启动时启动:

    +
    systemctl enable rsyncd.service
    +systemctl start rsyncd.service
    +
  8. +
  9. +

    在存储节点安装和配置组件 (STG)

    +

    安装软件包:

    +
    yum install openstack-swift-account openstack-swift-container openstack-swift-object
    +

    编辑/etc/swift目录的account-server.conf、container-server.conf和object-server.conf文件,替换bind_ip为存储节点上管理网络的IP地址。

    +

    确保挂载点目录结构的正确所有权:

    +
    chown -R swift:swift /srv/node
    +

    创建recon目录并确保其拥有正确的所有权:

    +
    mkdir -p /var/cache/swift
    +chown -R root:swift /var/cache/swift
    +chmod -R 775 /var/cache/swift
    +
  10. +
  11. +

    创建账号环 (CTL)

    +

    切换到/etc/swift目录。

    +
    cd /etc/swift
    +

    创建基础account.builder文件:

    +
    swift-ring-builder account.builder create 10 1 1
    +

    将每个存储节点添加到环中:

    +
    swift-ring-builder account.builder add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6202  --device DEVICE_NAME --weight DEVICE_WEIGHT
    +

    替换STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS为存储节点上管理网络的IP地址。替换DEVICE_NAME为同一存储节点上的存储设备名称

    +

    注意:对每个存储节点上的每个存储设备重复此命令

    +

    验证环的内容:

    +
    swift-ring-builder account.builder
    +

    重新平衡环:

    +
    swift-ring-builder account.builder rebalance
    +
  12. +
  13. +

    创建容器环 (CTL)

    +

    切换到/etc/swift目录。

    +

    创建基础container.builder文件:

    +
       swift-ring-builder container.builder create 10 1 1
    +

    将每个存储节点添加到环中:

    +
    swift-ring-builder container.builder \
    +  add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6201 \
    +  --device DEVICE_NAME --weight 100
    +
    +

    替换STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS为存储节点上管理网络的IP地址。替换DEVICE_NAME为同一存储节点上的存储设备名称

    +

    注意:对每个存储节点上的每个存储设备重复此命令

    +

    验证环的内容:

    +
    swift-ring-builder container.builder
    +

    重新平衡环:

    +
    swift-ring-builder container.builder rebalance
    +
  14. +
  15. +

    创建对象环 (CTL)

    +

    切换到/etc/swift目录。

    +

    创建基础object.builder文件:

    +
    swift-ring-builder object.builder create 10 1 1
    +

    将每个存储节点添加到环中

    +
     swift-ring-builder object.builder \
    +  add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6200 \
    +  --device DEVICE_NAME --weight 100
    +

    替换STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS为存储节点上管理网络的IP地址。替换DEVICE_NAME为同一存储节点上的存储设备名称

    +

    注意:对每个存储节点上的每个存储设备重复此命令

    +

    验证环的内容:

    +
    swift-ring-builder object.builder
    +

    重新平衡环:

    +
    swift-ring-builder object.builder rebalance
    +

    分发环配置文件:

    +

    将account.ring.gz、container.ring.gz以及object.ring.gz文件复制到每个存储节点和运行代理服务的任何其他节点上的/etc/swift目录。

    +
  16. +
  17. +

    完成安装

    +

    编辑/etc/swift/swift.conf文件

    +
    [swift-hash]
    +swift_hash_path_suffix = test-hash
    +swift_hash_path_prefix = test-hash
    +
    +[storage-policy:0]
    +name = Policy-0
    +default = yes
    +

    用唯一值替换 test-hash

    +

    将swift.conf文件复制到每个存储节点和运行代理服务的任何其他节点上的/etc/swift目录。

    +

    在所有节点上,确保配置目录的正确所有权:

    +
    chown -R root:swift /etc/swift
    +

    在控制器节点和运行代理服务的任何其他节点上,启动对象存储代理服务及其依赖项,并将它们配置为在系统启动时启动:

    +
    systemctl enable openstack-swift-proxy.service memcached.service
    +systemctl start openstack-swift-proxy.service memcached.service
    +

    在存储节点上,启动对象存储服务并将它们配置为在系统启动时启动:

    +
    systemctl enable openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service
    +
    +systemctl start openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service
    +
    +systemctl enable openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service
    +
    +systemctl start openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service
    +
    +systemctl enable openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service
    +
    +systemctl start openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service
    +
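
各服务启动后,可以参考如下示例做一个简单的端到端验证(假设已加载admin环境变量,容器名demo-container仅为示例):

    source ~/.admin-openrc
    openstack object store account show         # 查看账户状态,作用等同于swift stat
    openstack container create demo-container   # 创建一个测试容器
    openstack container list
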

Cyborg 安装

+

Cyborg为OpenStack提供加速器设备的支持,包括 GPU, FPGA, ASIC, NP, SoCs, NVMe/NOF SSDs, ODP, DPDK/SPDK等等。

+
    +
  1. 初始化对应数据库
  2. +
+
CREATE DATABASE cyborg;
+GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'localhost' IDENTIFIED BY 'CYBORG_DBPASS';
+GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'%' IDENTIFIED BY 'CYBORG_DBPASS';
+
    +
  1. 创建对应Keystone资源对象
  2. +
+
$ openstack user create --domain default --password-prompt cyborg
+$ openstack role add --project service --user cyborg admin
+$ openstack service create --name cyborg --description "Acceleration Service" accelerator
+
+$ openstack endpoint create --region RegionOne \
+  accelerator public http://<cyborg-ip>:6666/v1
+$ openstack endpoint create --region RegionOne \
+  accelerator internal http://<cyborg-ip>:6666/v1
+$ openstack endpoint create --region RegionOne \
+  accelerator admin http://<cyborg-ip>:6666/v1
+
    +
  1. 安装Cyborg
  2. +
+
yum install openstack-cyborg
+
    +
  1. 配置Cyborg
  2. +
+

修改/etc/cyborg/cyborg.conf

+
[DEFAULT]
+transport_url = rabbit://%RABBITMQ_USER%:%RABBITMQ_PASSWORD%@%OPENSTACK_HOST_IP%:5672/
+use_syslog = False
+state_path = /var/lib/cyborg
+debug = True
+
+[database]
+connection = mysql+pymysql://%DATABASE_USER%:%DATABASE_PASSWORD%@%OPENSTACK_HOST_IP%/cyborg
+
+[service_catalog]
+project_domain_id = default
+user_domain_id = default
+project_name = service
+password = PASSWORD
+username = cyborg
+auth_url = http://%OPENSTACK_HOST_IP%/identity
+auth_type = password
+
+[placement]
+project_domain_name = Default
+project_name = service
+user_domain_name = Default
+password = PASSWORD
+username = placement
+auth_url = http://%OPENSTACK_HOST_IP%/identity
+auth_type = password
+
+[keystone_authtoken]
+memcached_servers = localhost:11211
+project_domain_name = Default
+project_name = service
+user_domain_name = Default
+password = PASSWORD
+username = cyborg
+auth_url = http://%OPENSTACK_HOST_IP%/identity
+auth_type = password
+

自行修改对应的用户名、密码、IP等信息

+
    +
  1. 同步数据库表格
  2. +
+
cyborg-dbsync --config-file /etc/cyborg/cyborg.conf upgrade
+
    +
  1. 启动Cyborg服务
  2. +
+
systemctl enable openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent
+systemctl start openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent
+
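
服务启动后,可以参考如下示例确认Cyborg API工作正常(假设已加载admin环境变量,命令来自python-cyborgclient的OSC插件,仅为示意):

    source ~/.admin-openrc
    openstack accelerator device list    # 列出Cyborg发现的加速器设备,无加速卡时为空列表
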

Aodh 安装

+
    +
  1. 创建数据库
  2. +
+
CREATE DATABASE aodh;
+
+GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'localhost' IDENTIFIED BY 'AODH_DBPASS';
+
+GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'%' IDENTIFIED BY 'AODH_DBPASS';
+
    +
  1. 创建对应Keystone资源对象
  2. +
+
openstack user create --domain default --password-prompt aodh
+
+openstack role add --project service --user aodh admin
+
+openstack service create --name aodh --description "Telemetry" alarming
+
+openstack endpoint create --region RegionOne alarming public http://controller:8042
+
+openstack endpoint create --region RegionOne alarming internal http://controller:8042
+
+openstack endpoint create --region RegionOne alarming admin http://controller:8042
+
    +
  1. 安装Aodh
  2. +
+
yum install openstack-aodh-api openstack-aodh-evaluator openstack-aodh-notifier openstack-aodh-listener openstack-aodh-expirer python3-aodhclient
+

注意

+

aodh依赖的软件包python3-pyparsing在openEuler的OS仓中版本不适配,需要覆盖安装OpenStack对应的版本。可以使用yum list | grep pyparsing | grep OpenStack | awk '{print $2}'获取对应的版本VERSION,然后执行yum install -y python3-pyparsing-VERSION覆盖安装适配的pyparsing。

+
    +
  1. 修改配置文件
  2. +
+
[database]
+connection = mysql+pymysql://aodh:AODH_DBPASS@controller/aodh
+
+[DEFAULT]
+transport_url = rabbit://openstack:RABBIT_PASS@controller
+auth_strategy = keystone
+
+[keystone_authtoken]
+www_authenticate_uri = http://controller:5000
+auth_url = http://controller:5000
+memcached_servers = controller:11211
+auth_type = password
+project_domain_id = default
+user_domain_id = default
+project_name = service
+username = aodh
+password = AODH_PASS
+
+[service_credentials]
+auth_type = password
+auth_url = http://controller:5000/v3
+project_domain_id = default
+user_domain_id = default
+project_name = service
+username = aodh
+password = AODH_PASS
+interface = internalURL
+region_name = RegionOne
+
    +
  1. 初始化数据库
  2. +
+
aodh-dbsync
+
    +
  1. 启动Aodh服务
  2. +
+
systemctl enable openstack-aodh-api.service openstack-aodh-evaluator.service openstack-aodh-notifier.service openstack-aodh-listener.service
+
+systemctl start openstack-aodh-api.service openstack-aodh-evaluator.service openstack-aodh-notifier.service openstack-aodh-listener.service
+
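
服务启动后,可以参考如下示例确认Aodh告警服务可用(假设已加载admin环境变量,仅为示意):

    source ~/.admin-openrc
    openstack alarm list    # 新部署的环境返回空列表即为正常
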

Gnocchi 安装

+
    +
  1. 创建数据库
  2. +
+
CREATE DATABASE gnocchi;
+
+GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'localhost' IDENTIFIED BY 'GNOCCHI_DBPASS';
+
+GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'%' IDENTIFIED BY 'GNOCCHI_DBPASS';
+
    +
  1. 创建对应Keystone资源对象
  2. +
+
openstack user create --domain default --password-prompt gnocchi
+
+openstack role add --project service --user gnocchi admin
+
+openstack service create --name gnocchi --description "Metric Service" metric
+
+openstack endpoint create --region RegionOne metric public http://controller:8041
+
+openstack endpoint create --region RegionOne metric internal http://controller:8041
+
+openstack endpoint create --region RegionOne metric admin http://controller:8041
+
    +
  1. 安装Gnocchi
  2. +
+
yum install openstack-gnocchi-api openstack-gnocchi-metricd python3-gnocchiclient
+
    +
  1. 修改配置文件/etc/gnocchi/gnocchi.conf
  2. +
+
[api]
+auth_mode = keystone
+port = 8041
+uwsgi_mode = http-socket
+
+[keystone_authtoken]
+auth_type = password
+auth_url = http://controller:5000/v3
+project_domain_name = Default
+user_domain_name = Default
+project_name = service
+username = gnocchi
+password = GNOCCHI_PASS
+interface = internalURL
+region_name = RegionOne
+
+[indexer]
+url = mysql+pymysql://gnocchi:GNOCCHI_DBPASS@controller/gnocchi
+
+[storage]
+# coordination_url is not required but specifying one will improve
+# performance with better workload division across workers.
+coordination_url = redis://controller:6379
+file_basepath = /var/lib/gnocchi
+driver = file
+
    +
  1. 初始化数据库
  2. +
+
gnocchi-upgrade
+
    +
  1. 启动Gnocchi服务
  2. +
+
systemctl enable openstack-gnocchi-api.service openstack-gnocchi-metricd.service
+
+systemctl start openstack-gnocchi-api.service openstack-gnocchi-metricd.service
+
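
服务启动后,可以参考如下示例确认Gnocchi API可用(假设已加载admin环境变量,命令来自python3-gnocchiclient,仅为示意):

    source ~/.admin-openrc
    openstack metric list             # 列出已有的metric,新环境为空
    openstack metric resource list    # 列出已有的资源
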

Ceilometer 安装

+
    +
  1. 创建对应Keystone资源对象
  2. +
+
openstack user create --domain default --password-prompt ceilometer
+
+openstack role add --project service --user ceilometer admin
+
+openstack service create --name ceilometer --description "Telemetry" metering
+
    +
  1. 安装Ceilometer
  2. +
+
yum install openstack-ceilometer-notification openstack-ceilometer-central
+
    +
  1. 修改配置文件/etc/ceilometer/pipeline.yaml
  2. +
+
publishers:
+    # set address of Gnocchi
+    # + filter out Gnocchi-related activity meters (Swift driver)
+    # + set default archive policy
+    - gnocchi://?filter_project=service&archive_policy=low
+
    +
  1. 修改配置文件/etc/ceilometer/ceilometer.conf
  2. +
+
[DEFAULT]
+transport_url = rabbit://openstack:RABBIT_PASS@controller
+
+[service_credentials]
+auth_type = password
+auth_url = http://controller:5000/v3
+project_domain_id = default
+user_domain_id = default
+project_name = service
+username = ceilometer
+password = CEILOMETER_PASS
+interface = internalURL
+region_name = RegionOne
+
    +
  1. 初始化数据库
  2. +
+
ceilometer-upgrade
+
    +
  1. 启动Ceilometer服务
  2. +
+
systemctl enable openstack-ceilometer-notification.service openstack-ceilometer-central.service
+
+systemctl start openstack-ceilometer-notification.service openstack-ceilometer-central.service
+
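
Ceilometer自身不提供API服务,可以通过确认Gnocchi中是否产生了资源和指标来验证采集链路是否正常(假设Gnocchi已按上文部署,仅为示意):

    source ~/.admin-openrc
    openstack metric resource list    # 创建虚拟机等资源后,此处应能看到对应条目
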

Heat 安装

+
    +
  1. 创建heat数据库,并授予heat数据库正确的访问权限,替换HEAT_DBPASS为合适的密码
  2. +
+
CREATE DATABASE heat;
+GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'localhost' IDENTIFIED BY 'HEAT_DBPASS';
+GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'%' IDENTIFIED BY 'HEAT_DBPASS';
+
    +
  1. 创建服务凭证,创建heat用户,并为其增加admin角色
  2. +
+
openstack user create --domain default --password-prompt heat
+openstack role add --project service --user heat admin
+
    +
  1. 创建heat和heat-cfn服务及其对应的API端点
  2. +
+
openstack service create --name heat --description "Orchestration" orchestration
+openstack service create --name heat-cfn --description "Orchestration"  cloudformation
+openstack endpoint create --region RegionOne orchestration public http://controller:8004/v1/%\(tenant_id\)s
+openstack endpoint create --region RegionOne orchestration internal http://controller:8004/v1/%\(tenant_id\)s
+openstack endpoint create --region RegionOne orchestration admin http://controller:8004/v1/%\(tenant_id\)s
+openstack endpoint create --region RegionOne cloudformation public http://controller:8000/v1
+openstack endpoint create --region RegionOne cloudformation internal http://controller:8000/v1
+openstack endpoint create --region RegionOne cloudformation admin http://controller:8000/v1
+
    +
  1. 创建stack管理所需的额外信息,包括heat domain及其对应的domain admin用户heat_domain_admin,以及heat_stack_owner和heat_stack_user两个角色
  2. +
+
openstack user create --domain heat --password-prompt heat_domain_admin
+openstack role add --domain heat --user-domain heat --user heat_domain_admin admin
+openstack role create heat_stack_owner
+openstack role create heat_stack_user
+
    +
  1. 安装软件包
  2. +
+
yum install openstack-heat-api openstack-heat-api-cfn openstack-heat-engine
+
    +
  1. 修改配置文件/etc/heat/heat.conf
  2. +
+
[DEFAULT]
+transport_url = rabbit://openstack:RABBIT_PASS@controller
+heat_metadata_server_url = http://controller:8000
+heat_waitcondition_server_url = http://controller:8000/v1/waitcondition
+stack_domain_admin = heat_domain_admin
+stack_domain_admin_password = HEAT_DOMAIN_PASS
+stack_user_domain_name = heat
+
+[database]
+connection = mysql+pymysql://heat:HEAT_DBPASS@controller/heat
+
+[keystone_authtoken]
+www_authenticate_uri = http://controller:5000
+auth_url = http://controller:5000
+memcached_servers = controller:11211
+auth_type = password
+project_domain_name = default
+user_domain_name = default
+project_name = service
+username = heat
+password = HEAT_PASS
+
+[trustee]
+auth_type = password
+auth_url = http://controller:5000
+username = heat
+password = HEAT_PASS
+user_domain_name = default
+
+[clients_keystone]
+auth_uri = http://controller:5000
+
    +
  1. 初始化heat数据库表
  2. +
+
su -s /bin/sh -c "heat-manage db_sync" heat
+
    +
  1. 启动服务
  2. +
+
systemctl enable openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service
+systemctl start openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service
+
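
服务启动后,可以参考如下示例验证Heat(假设已加载admin环境变量;模板内容和资源名仅为示例):

    source ~/.admin-openrc
    openstack orchestration service list    # 确认heat-engine已注册

    vim test-stack.yaml

    heat_template_version: 2018-08-31
    resources:
      test_net:
        type: OS::Neutron::Net
        properties:
          name: heat-test-net

    openstack stack create -t test-stack.yaml test-stack
    openstack stack list
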

基于OpenStack SIG开发工具oos快速部署

+

oos(openEuler OpenStack SIG)是OpenStack SIG提供的命令行工具。其中oos env系列命令提供了一键部署OpenStack (all in one或三节点cluster)的ansible脚本,用户可以使用该脚本快速部署一套基于 openEuler RPM 的 OpenStack 环境。oos工具支持对接云provider(目前仅支持华为云provider)和主机纳管两种方式来部署 OpenStack 环境,下面以对接华为云部署一套all in one的OpenStack环境为例说明oos工具的使用方法。

+
    +
  1. +

    安装oos工具

    +
    pip install openstack-sig-tool
    +
  2. +
  3. +

    配置对接华为云provider的信息

    +

    打开/usr/local/etc/oos/oos.conf文件,修改配置为您拥有的华为云资源信息:

    +
    [huaweicloud]
    +ak = 
    +sk = 
    +region = ap-southeast-3
    +root_volume_size = 100
    +data_volume_size = 100
    +security_group_name = oos
    +image_format = openEuler-%%(release)s-%%(arch)s
    +vpc_name = oos_vpc
    +subnet1_name = oos_subnet1
    +subnet2_name = oos_subnet2
    +
  4. +
  5. +

    配置 OpenStack 环境信息

    +

    打开/usr/local/etc/oos/oos.conf文件,根据当前机器环境和需求修改配置。内容如下:

    +
    [environment]
    +mysql_root_password = root
    +mysql_project_password = root
    +rabbitmq_password = root
    +project_identity_password = root
    +enabled_service = keystone,neutron,cinder,placement,nova,glance,horizon,aodh,ceilometer,cyborg,gnocchi,kolla,heat,swift,trove,tempest
    +neutron_provider_interface_name = br-ex
    +default_ext_subnet_range = 10.100.100.0/24
    +default_ext_subnet_gateway = 10.100.100.1
    +neutron_dataplane_interface_name = eth1
    +cinder_block_device = vdb
    +swift_storage_devices = vdc
    +swift_hash_path_suffix = ash
    +swift_hash_path_prefix = has
    +glance_api_workers = 2
    +cinder_api_workers = 2
    +nova_api_workers = 2
    +nova_metadata_api_workers = 2
    +nova_conductor_workers = 2
    +nova_scheduler_workers = 2
    +neutron_api_workers = 2
    +horizon_allowed_host = *
    +kolla_openeuler_plugin = false
    +

    关键配置

    | 配置项 | 解释 |
    |:------|:-----|
    | enabled_service | 安装服务列表,根据用户需求自行删减 |
    | neutron_provider_interface_name | neutron L3网桥名称 |
    | default_ext_subnet_range | neutron私网IP段 |
    | default_ext_subnet_gateway | neutron私网gateway |
    | neutron_dataplane_interface_name | neutron使用的网卡,推荐使用一张新的网卡,以免和现有网卡冲突,防止all in one主机断连的情况 |
    | cinder_block_device | cinder使用的卷设备名 |
    | swift_storage_devices | swift使用的卷设备名 |
    | kolla_openeuler_plugin | 是否启用kolla plugin。设置为True,kolla将支持部署openEuler容器 |
    +
  6. +
  7. +

    华为云上面创建一台openEuler 22.03-LTS-SP1的x86_64虚拟机,用于部署all in one 的 OpenStack

    +
    # sshpass在`oos env create`过程中被使用,用于配置对目标虚拟机的免密访问
    +dnf install sshpass
    +oos env create -r 22.03-lts-sp1 -f small -a x86 -n test-oos all_in_one
    +

    具体的参数可以使用oos env create --help命令查看

    +
  8. +
  9. +

    部署OpenStack all in one 环境

    +

    oos env setup test-oos -r wallaby
    +

    具体的参数可以使用oos env setup --help命令查看

    +
  10. +
  11. +

    初始化tempest环境

    +

    如果用户想使用该环境运行tempest测试的话,可以执行命令oos env init,会自动把tempest需要的OpenStack资源自动创建好

    +
    oos env init test-oos
    +

    命令执行成功后,在用户的根目录下会生成mytest目录,进入其中就可以执行tempest run命令了。

    +
  12. +
+

如果是以主机纳管的方式部署 OpenStack 环境,总体逻辑与上文对接华为云时一致,1、3、5、6步操作不变,去除第2步对华为云provider信息的配置,第4步由在华为云上创建虚拟机改为纳管主机操作。

+
# sshpass在`oos env create`过程中被使用,用于配置对目标主机的免密访问
+dnf install sshpass
+oos env manage -r 22.03-lts-sp1 -i TARGET_MACHINE_IP -p TARGET_MACHINE_PASSWD -n test-oos
+

替换TARGET_MACHINE_IP为目标机ip、TARGET_MACHINE_PASSWD为目标机密码。具体的参数可以使用oos env manage --help命令查看。

diff --git a/site/install/openEuler-22.03-LTS-SP2/OpenStack-train/index.html b/site/install/openEuler-22.03-LTS-SP2/OpenStack-train/index.html
new file mode 100644
index 0000000000000000000000000000000000000000..f971b689fe9b7c083b2dcac3e455256b06bb97fd
--- /dev/null
+++ b/site/install/openEuler-22.03-LTS-SP2/OpenStack-train/index.html
@@ -0,0 +1,2917 @@
+openEuler-22.03-LTS-SP2_Train - OpenStack SIG Doc

OpenStack-Train 部署指南

+ +

OpenStack 简介

+

OpenStack 是一个社区,也是一个项目。它提供了一个部署云的操作平台或工具集,为组织提供可扩展的、灵活的云计算。

+

作为一个开源的云计算管理平台,OpenStack 由nova、cinder、neutron、glance、keystone、horizon等几个主要的组件组合起来完成具体工作。OpenStack 支持几乎所有类型的云环境,项目目标是提供实施简单、可大规模扩展、丰富、标准统一的云计算管理平台。OpenStack 通过各种互补的服务提供了基础设施即服务(IaaS)的解决方案,每个服务提供 API 进行集成。

+

openEuler 22.03-LTS-SP2版本官方源已经支持 OpenStack-Train 版本,用户可以配置好 yum 源后根据此文档进行 OpenStack 部署。

+

约定

+

OpenStack 支持多种形态部署,此文档支持ALL in One以及Distributed两种部署方式,按照如下方式约定:

+

ALL in One模式:

+
忽略所有可能的后缀
+

Distributed模式:

+
以 `(CTL)` 为后缀表示此条配置或者命令仅适用`控制节点`
+以 `(CPT)` 为后缀表示此条配置或者命令仅适用`计算节点`
+以 `(STG)` 为后缀表示此条配置或者命令仅适用`存储节点`
+除此之外表示此条配置或者命令同时适用`控制节点`和`计算节点`
+

注意

+

涉及到以上约定的服务如下:

+
    +
  • Cinder
  • +
  • Nova
  • +
  • Neutron
  • +
+

准备环境

+

环境配置

+
    +
  1. +

    启动OpenStack Train yum源

    +
    yum update
    +yum install openstack-release-train
    +yum clean all && yum makecache
    +

    注意:如果你的环境的YUM源没有启用EPOL,需要同时配置EPOL,确保EPOL已配置,如下所示

    +
    vi /etc/yum.repos.d/openEuler.repo
    +
    +[EPOL]
    +name=EPOL
    +baseurl=http://repo.openeuler.org/openEuler-22.03-LTS-SP2/EPOL/main/$basearch/
    +enabled=1
    +gpgcheck=1
    +gpgkey=http://repo.openeuler.org/openEuler-22.03-LTS-SP2/OS/$basearch/RPM-GPG-KEY-openEuler
    +
  2. +
  3. +

    修改主机名以及映射

    +

    设置各个节点的主机名

    +
    hostnamectl set-hostname controller                                                            (CTL)
    +hostnamectl set-hostname compute                                                               (CPT)
    +

    假设controller节点的IP是10.0.0.11,compute节点的IP是10.0.0.12(如果存在的话),则于/etc/hosts新增如下:

    +
    10.0.0.11   controller
    +10.0.0.12   compute
    +
  4. +
+

安装 SQL DataBase

+
    +
  1. +

    执行如下命令,安装软件包。

    +
    yum install mariadb mariadb-server python3-PyMySQL
    +
  2. +
  3. +

    执行如下命令,创建并编辑 /etc/my.cnf.d/openstack.cnf 文件。

    +
    vim /etc/my.cnf.d/openstack.cnf
    +
    +[mysqld]
    +bind-address = 10.0.0.11
    +default-storage-engine = innodb
    +innodb_file_per_table = on
    +max_connections = 4096
    +collation-server = utf8_general_ci
    +character-set-server = utf8
    +

    注意

    +

    其中 bind-address 设置为控制节点的管理IP地址。

    +
  4. +
  5. +

    启动 DataBase 服务,并为其配置开机自启动:

    +
    systemctl enable mariadb.service
    +systemctl start mariadb.service
    +
  6. +
  7. +

    配置DataBase的默认密码(可选)

    +
    mysql_secure_installation
    +

    注意

    +

    根据提示进行即可

    +
  8. +
+

安装 RabbitMQ

+
    +
  1. +

    执行如下命令,安装软件包。

    +
    yum install rabbitmq-server
    +
  2. +
  3. +

    启动 RabbitMQ 服务,并为其配置开机自启动。

    +
    systemctl enable rabbitmq-server.service
    +systemctl start rabbitmq-server.service
    +
  4. +
  5. +

    添加 OpenStack用户。

    +
    rabbitmqctl add_user openstack RABBIT_PASS
    +

    注意

    +

    替换 RABBIT_PASS,为 OpenStack 用户设置密码

    +
  6. +
  7. +

    设置openstack用户权限,允许进行配置、写、读:

    +
    rabbitmqctl set_permissions openstack ".*" ".*" ".*"
    +
  8. +
+

安装 Memcached

+
    +
  1. +

    执行如下命令,安装依赖软件包。

    +
    yum install memcached python3-memcached
    +
  2. +
  3. +

    编辑 /etc/sysconfig/memcached 文件。

    +
    vim /etc/sysconfig/memcached
    +
    +OPTIONS="-l 127.0.0.1,::1,controller"
    +
  4. +
  5. +

    执行如下命令,启动 Memcached 服务,并为其配置开机启动。

    +
    systemctl enable memcached.service
    +systemctl start memcached.service
    +

    注意

    +

    服务启动后,可以通过命令memcached-tool controller stats确保启动正常,服务可用,其中可以将controller替换为控制节点的管理IP地址。

    +
  6. +
+

安装 OpenStack

+

Keystone 安装

+
    +
  1. +

    创建 keystone 数据库并授权。

    +
    mysql -u root -p
    +
    +MariaDB [(none)]> CREATE DATABASE keystone;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
    +IDENTIFIED BY 'KEYSTONE_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
    +IDENTIFIED BY 'KEYSTONE_DBPASS';
    +MariaDB [(none)]> exit
    +

    注意

    +

    替换 KEYSTONE_DBPASS,为 Keystone 数据库设置密码

    +
  2. +
  3. +

    安装软件包。

    +
    yum install openstack-keystone httpd mod_wsgi
    +
  4. +
  5. +

    配置keystone相关配置

    +
    vim /etc/keystone/keystone.conf
    +
    +[database]
    +connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone
    +
    +[token]
    +provider = fernet
    +

    解释

    +

    [database]部分,配置数据库入口

    +

    [token]部分,配置token provider

    +

    注意:

    +

    替换 KEYSTONE_DBPASS 为 Keystone 数据库的密码

    +
  6. +
  7. +

    同步数据库。

    +
    su -s /bin/sh -c "keystone-manage db_sync" keystone
    +
  8. +
  9. +

    初始化Fernet密钥仓库。

    +
    keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
    +keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
    +
  10. +
  11. +

    启动服务。

    +
    keystone-manage bootstrap --bootstrap-password ADMIN_PASS \
    +--bootstrap-admin-url http://controller:5000/v3/ \
    +--bootstrap-internal-url http://controller:5000/v3/ \
    +--bootstrap-public-url http://controller:5000/v3/ \
    +--bootstrap-region-id RegionOne
    +

    注意

    +

    替换 ADMIN_PASS,为 admin 用户设置密码

    +
  12. +
  13. +

    配置Apache HTTP server

    +
    vim /etc/httpd/conf/httpd.conf
    +
    +ServerName controller
    +
    ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
    +

    解释

    +

    配置 ServerName 项引用控制节点

    +

    注意:如果 ServerName 项不存在则需要创建

    +
  14. +
  15. +

    启动Apache HTTP服务。

    +
    systemctl enable httpd.service
    +systemctl start httpd.service
    +
  16. +
  17. +

    创建环境变量配置。

    +
    cat << EOF >> ~/.admin-openrc
    +export OS_PROJECT_DOMAIN_NAME=Default
    +export OS_USER_DOMAIN_NAME=Default
    +export OS_PROJECT_NAME=admin
    +export OS_USERNAME=admin
    +export OS_PASSWORD=ADMIN_PASS
    +export OS_AUTH_URL=http://controller:5000/v3
    +export OS_IDENTITY_API_VERSION=3
    +export OS_IMAGE_API_VERSION=2
    +EOF
    +

    注意

    +

    替换 ADMIN_PASS 为 admin 用户的密码

    +
  18. +
  19. +

    依次创建domain, projects, users, roles,需要先安装好python3-openstackclient:

    +
    yum install python3-openstackclient
    +

    导入环境变量

    +
    source ~/.admin-openrc
    +

    创建project service,其中 domain default 在 keystone-manage bootstrap 时已创建

    +
    openstack domain create --description "An Example Domain" example
    +
    openstack project create --domain default --description "Service Project" service
    +

    创建(non-admin)project myproject,user myuser 和 role myrole,为 myproject 中的 myuser 添加角色 myrole

    +
    openstack project create --domain default --description "Demo Project" myproject
    +openstack user create --domain default --password-prompt myuser
    +openstack role create myrole
    +openstack role add --project myproject --user myuser myrole
    +
  20. +
  21. +

    验证

    +

    取消临时环境变量OS_AUTH_URL和OS_PASSWORD:

    +
    source ~/.admin-openrc
    +unset OS_AUTH_URL OS_PASSWORD
    +

    为admin用户请求token:

    +
    openstack --os-auth-url http://controller:5000/v3 \
    +--os-project-domain-name Default --os-user-domain-name Default \
    +--os-project-name admin --os-username admin token issue
    +

    为myuser用户请求token:

    +
    openstack --os-auth-url http://controller:5000/v3 \
    +--os-project-domain-name Default --os-user-domain-name Default \
    +--os-project-name myproject --os-username myuser token issue
    +
  22. +
+
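
除请求token外,也可以使用如下命令进一步确认Keystone工作正常(假设已加载admin环境变量):

    source ~/.admin-openrc
    openstack project list
    openstack user list
    openstack role list
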

Glance 安装

+
    +
  1. +

    创建数据库、服务凭证和 API 端点

    +

    创建数据库:

    +
    mysql -u root -p
    +
    +MariaDB [(none)]> CREATE DATABASE glance;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
    +IDENTIFIED BY 'GLANCE_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
    +IDENTIFIED BY 'GLANCE_DBPASS';
    +MariaDB [(none)]> exit
    +

    注意:

    +

    替换 GLANCE_DBPASS,为 glance 数据库设置密码

    +

    创建服务凭证

    +
    source ~/.admin-openrc
    +
    +openstack user create --domain default --password-prompt glance
    +openstack role add --project service --user glance admin
    +openstack service create --name glance --description "OpenStack Image" image
    +

    创建镜像服务API端点:

    +
    openstack endpoint create --region RegionOne image public http://controller:9292
    +openstack endpoint create --region RegionOne image internal http://controller:9292
    +openstack endpoint create --region RegionOne image admin http://controller:9292
    +
  2. +
  3. +

    安装软件包

    +
    yum install openstack-glance
    +
  4. +
  5. +

    配置glance相关配置:

    +
    vim /etc/glance/glance-api.conf
    +
    +[database]
    +connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
    +
    +[keystone_authtoken]
    +www_authenticate_uri  = http://controller:5000
    +auth_url = http://controller:5000
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +project_name = service
    +username = glance
    +password = GLANCE_PASS
    +
    +[paste_deploy]
    +flavor = keystone
    +
    +[glance_store]
    +stores = file,http
    +default_store = file
    +filesystem_store_datadir = /var/lib/glance/images/
    +

    解释:

    +

    [database]部分,配置数据库入口

    +

    [keystone_authtoken] [paste_deploy]部分,配置身份认证服务入口

    +

    [glance_store]部分,配置本地文件系统存储和镜像文件的位置

    +

    注意

    +

    替换 GLANCE_DBPASS 为 glance 数据库的密码

    +

    替换 GLANCE_PASS 为 glance 用户的密码

    +
  6. +
  7. +

    同步数据库:

    +
    su -s /bin/sh -c "glance-manage db_sync" glance
    +
  8. +
  9. +

    启动服务:

    +
    systemctl enable openstack-glance-api.service
    +systemctl start openstack-glance-api.service
    +
  10. +
  11. +

    验证

    +

    下载镜像

    +
    source ~/.admin-openrc
    +
    +wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
    +

    注意

    +

    如果您使用的环境是鲲鹏架构,请下载aarch64版本的镜像;已对镜像cirros-0.5.2-aarch64-disk.img进行测试。

    +

    向Image服务上传镜像:

    +
    openstack image create --disk-format qcow2 --container-format bare \
    +                       --file cirros-0.4.0-x86_64-disk.img --public cirros
    +

    确认镜像上传并验证属性:

    +
    openstack image list
    +
  12. +
+
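
如果是鲲鹏(aarch64)环境,可以参考如下示例上传aarch64镜像(镜像版本与下载地址以cirros官方发布为准,镜像名cirros-arm64仅为示例):

    wget http://download.cirros-cloud.net/0.5.2/cirros-0.5.2-aarch64-disk.img
    openstack image create --disk-format qcow2 --container-format bare \
                           --file cirros-0.5.2-aarch64-disk.img --public cirros-arm64
    openstack image show cirros-arm64
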

Placement安装

+
    +
  1. +

    创建数据库、服务凭证和 API 端点

    +

    创建数据库:

    +

    作为 root 用户访问数据库,创建 placement 数据库并授权。

    +
    mysql -u root -p
    +MariaDB [(none)]> CREATE DATABASE placement;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' \
    +IDENTIFIED BY 'PLACEMENT_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' \
    +IDENTIFIED BY 'PLACEMENT_DBPASS';
    +MariaDB [(none)]> exit
    +

    注意

    +

    替换 PLACEMENT_DBPASS 为 placement 数据库设置密码

    +
    source ~/.admin-openrc
    +

    执行如下命令,创建 placement 服务凭证、创建 placement 用户以及添加‘admin’角色到用户‘placement’。

    +

    创建Placement API服务

    +
    openstack user create --domain default --password-prompt placement
    +openstack role add --project service --user placement admin
    +openstack service create --name placement --description "Placement API" placement
    +

    创建placement服务API端点:

    +
    openstack endpoint create --region RegionOne placement public http://controller:8778
    +openstack endpoint create --region RegionOne placement internal http://controller:8778
    +openstack endpoint create --region RegionOne placement admin http://controller:8778
    +
  2. +
  3. +

    安装和配置

    +

    安装软件包:

    +
    yum install openstack-placement-api
    +

    配置placement:

    +

    编辑 /etc/placement/placement.conf 文件:

    +

    在[placement_database]部分,配置数据库入口

    +

    在[api] [keystone_authtoken]部分,配置身份认证服务入口

    +
    # vim /etc/placement/placement.conf
    +[placement_database]
    +# ...
    +connection = mysql+pymysql://placement:PLACEMENT_DBPASS@controller/placement
    +[api]
    +# ...
    +auth_strategy = keystone
    +[keystone_authtoken]
    +# ...
    +auth_url = http://controller:5000/v3
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +project_name = service
    +username = placement
    +password = PLACEMENT_PASS
    +

    其中,替换 PLACEMENT_DBPASS 为 placement 数据库的密码,替换 PLACEMENT_PASS 为 placement 用户的密码。

    +

    同步数据库:

    +
    su -s /bin/sh -c "placement-manage db sync" placement
    +

    启动httpd服务:

    +
    systemctl restart httpd
    +
  4. +
  5. +

    验证

    +

    执行如下命令,执行状态检查:

    +
    . admin-openrc
    +placement-status upgrade check
    +

    安装osc-placement,列出可用的资源类别及特性:

    +
    yum install python3-osc-placement
    +openstack --os-placement-api-version 1.2 resource class list --sort-column name
    +openstack --os-placement-api-version 1.6 trait list --sort-column name
    +
  6. +
+

Nova 安装

+
    +
  1. +

    创建数据库、服务凭证和 API 端点

    +

    创建数据库:

    +
    mysql -u root -p                                                                               (CTL)
    +
    +MariaDB [(none)]> CREATE DATABASE nova_api;
    +MariaDB [(none)]> CREATE DATABASE nova;
    +MariaDB [(none)]> CREATE DATABASE nova_cell0;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> exit
    +

    注意

    +

    替换NOVA_DBPASS,为nova数据库设置密码

    +
    source ~/.admin-openrc                                                                         (CTL)
    +

    创建nova服务凭证:

    +
    openstack user create --domain default --password-prompt nova                                  (CTL)
    +openstack role add --project service --user nova admin                                         (CTL)
    +openstack service create --name nova --description "OpenStack Compute" compute                 (CTL)
    +

    创建nova API端点:

    +
    openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1        (CTL)
    +openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1      (CTL)
    +openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1         (CTL)
    +
  2. +
  3. +

    安装软件包

    +
    yum install openstack-nova-api openstack-nova-conductor \                                      (CTL)
    +openstack-nova-novncproxy openstack-nova-scheduler 
    +
    +yum install openstack-nova-compute                                                             (CPT)
    +

    注意

    +

    如果为arm64架构,还需要执行以下命令

    +
    yum install edk2-aarch64                                                                       (CPT)
    +
  4. +
  5. +

    配置nova相关配置

    +
    vim /etc/nova/nova.conf
    +
    +[DEFAULT]
    +enabled_apis = osapi_compute,metadata
    +transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
    +my_ip = 10.0.0.1
    +use_neutron = true
    +firewall_driver = nova.virt.firewall.NoopFirewallDriver
    +compute_driver=libvirt.LibvirtDriver                                                           (CPT)
    +instances_path = /var/lib/nova/instances/                                                      (CPT)
    +lock_path = /var/lib/nova/tmp                                                                  (CPT)
    +
    +[api_database]
    +connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api                              (CTL)
    +
    +[database]
    +connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova                                  (CTL)
    +
    +[api]
    +auth_strategy = keystone
    +
    +[keystone_authtoken]
    +www_authenticate_uri = http://controller:5000/
    +auth_url = http://controller:5000/
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +project_name = service
    +username = nova
    +password = NOVA_PASS
    +
    +[vnc]
    +enabled = true
    +server_listen = $my_ip
    +server_proxyclient_address = $my_ip
    +novncproxy_base_url = http://controller:6080/vnc_auto.html                                     (CPT)
    +
    +[glance]
    +api_servers = http://controller:9292
    +
    +[oslo_concurrency]
    +lock_path = /var/lib/nova/tmp                                                                  (CTL)
    +
    +[placement]
    +region_name = RegionOne
    +project_domain_name = Default
    +project_name = service
    +auth_type = password
    +user_domain_name = Default
    +auth_url = http://controller:5000/v3
    +username = placement
    +password = PLACEMENT_PASS
    +
    +[neutron]
    +auth_url = http://controller:5000
    +auth_type = password
    +project_domain_name = default
    +user_domain_name = default
    +region_name = RegionOne
    +project_name = service
    +username = neutron
    +password = NEUTRON_PASS
    +service_metadata_proxy = true                                                                  (CTL)
    +metadata_proxy_shared_secret = METADATA_SECRET                                                 (CTL)
    +

    解释

    +

    [default]部分,启用计算和元数据的API,配置RabbitMQ消息队列入口,配置my_ip,启用网络服务neutron;

    +

    [api_database] [database]部分,配置数据库入口;

    +

    [api] [keystone_authtoken]部分,配置身份认证服务入口;

    +

    [vnc]部分,启用并配置远程控制台入口;

    +

    [glance]部分,配置镜像服务API的地址;

    +

    [oslo_concurrency]部分,配置lock path;

    +

    [placement]部分,配置placement服务的入口。

    +

    注意

    +

    替换 RABBIT_PASS 为 RabbitMQ 中 openstack 账户的密码;

    +

    配置 my_ip 为控制节点的管理IP地址;

    +

    替换 NOVA_DBPASS 为nova数据库的密码;

    +

    替换 NOVA_PASS 为nova用户的密码;

    +

    替换 PLACEMENT_PASS 为placement用户的密码;

    +

    替换 NEUTRON_PASS 为neutron用户的密码;

    +

    替换METADATA_SECRET为合适的元数据代理secret。

    +

    额外

    +

    确定是否支持虚拟机硬件加速(x86架构):

    +
    egrep -c '(vmx|svm)' /proc/cpuinfo                                                             (CPT)
    +

    如果返回值为0则不支持硬件加速,需要配置libvirt使用QEMU而不是KVM:

    +
    vim /etc/nova/nova.conf                                                                        (CPT)
    +
    +[libvirt]
    +virt_type = qemu
    +

    如果返回值为1或更大的值,则支持硬件加速,则virt_type可以配置为kvm

    +

    注意

    +

    如果为arm64架构,还需要在计算节点执行以下命令

    +
    
    +mkdir -p /usr/share/AAVMF
    +chown nova:nova /usr/share/AAVMF
    +
    +ln -s /usr/share/edk2/aarch64/QEMU_EFI-pflash.raw \
    +      /usr/share/AAVMF/AAVMF_CODE.fd
    +ln -s /usr/share/edk2/aarch64/vars-template-pflash.raw \
    +      /usr/share/AAVMF/AAVMF_VARS.fd
    +
    +vim /etc/libvirt/qemu.conf
    +
    +nvram = ["/usr/share/AAVMF/AAVMF_CODE.fd: \
    +         /usr/share/AAVMF/AAVMF_VARS.fd", \
    +         "/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw: \
    +         /usr/share/edk2/aarch64/vars-template-pflash.raw"]
    +

    并且当ARM架构下的部署环境为嵌套虚拟化时,libvirt配置如下:

    +
    [libvirt]
    +virt_type = qemu
    +cpu_mode = custom
    +cpu_model = cortex-a72
    +
  6. +
  7. +

    同步数据库

    +

    同步nova-api数据库:

    +
    su -s /bin/sh -c "nova-manage api_db sync" nova                                                (CTL)
    +

    注册cell0数据库:

    +
    su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova                                          (CTL)
    +

    创建cell1 cell:

    +
    su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova                 (CTL)
    +

    同步nova数据库:

    +
    su -s /bin/sh -c "nova-manage db sync" nova                                                    (CTL)
    +

    验证cell0和cell1注册正确:

    +
    su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova                                         (CTL)
    +

    添加计算节点到openstack集群

    +
    su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova                           (CTL)
    +
  8. +
  9. +

    启动服务

    +
    systemctl enable \                                                                             (CTL)
    +openstack-nova-api.service \
    +openstack-nova-scheduler.service \
    +openstack-nova-conductor.service \
    +openstack-nova-novncproxy.service
    +
    +systemctl start \                                                                              (CTL)
    +openstack-nova-api.service \
    +openstack-nova-scheduler.service \
    +openstack-nova-conductor.service \
    +openstack-nova-novncproxy.service
    +
    systemctl enable libvirtd.service openstack-nova-compute.service                               (CPT)
    +systemctl start libvirtd.service openstack-nova-compute.service                                (CPT)
    +
  10. +
  11. +

    验证

    +
    source ~/.admin-openrc                                                                         (CTL)
    +

    列出服务组件,验证每个流程都成功启动和注册:

    +
    openstack compute service list                                                                 (CTL)
    +

    列出身份服务中的API端点,验证与身份服务的连接:

    +
    openstack catalog list                                                                         (CTL)
    +

    列出镜像服务中的镜像,验证与镜像服务的连接:

    +
    openstack image list                                                                           (CTL)
    +

    检查cells是否运作成功,以及其他必要条件是否已具备。

    +
    nova-status upgrade check                                                                      (CTL)
    +
  12. +
+
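
补充:如果后续会持续扩容计算节点,除手动执行discover_hosts外,也可以选择在控制节点的nova.conf中配置周期性自动发现,示例如下(300秒仅为示例值):

    vim /etc/nova/nova.conf                                                                        (CTL)

    [scheduler]
    discover_hosts_in_cells_interval = 300
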

Neutron 安装

+
    +
  1. +

    创建数据库、服务凭证和 API 端点

    +

    创建数据库:

    +
    mysql -u root -p                                                                               (CTL)
    +
    +MariaDB [(none)]> CREATE DATABASE neutron;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
    +IDENTIFIED BY 'NEUTRON_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
    +IDENTIFIED BY 'NEUTRON_DBPASS';
    +MariaDB [(none)]> exit
    +

    注意

    +

    替换 NEUTRON_DBPASS 为 neutron 数据库设置密码。

    +
    source ~/.admin-openrc                                                                         (CTL)
    +

    创建neutron服务凭证

    +
    openstack user create --domain default --password-prompt neutron                               (CTL)
    +openstack role add --project service --user neutron admin                                      (CTL)
    +openstack service create --name neutron --description "OpenStack Networking" network           (CTL)
    +

    创建Neutron服务API端点:

    +
    openstack endpoint create --region RegionOne network public http://controller:9696             (CTL)
    +openstack endpoint create --region RegionOne network internal http://controller:9696           (CTL)
    +openstack endpoint create --region RegionOne network admin http://controller:9696              (CTL)
    +
  2. +
  3. +

    安装软件包:

    +
    yum install openstack-neutron openstack-neutron-linuxbridge ebtables ipset \                   (CTL)
    +openstack-neutron-ml2
    +
    yum install openstack-neutron-linuxbridge ebtables ipset                                       (CPT)
    +
  4. +
  5. +

    配置neutron相关配置:

    +

    配置主体配置

    +
    vim /etc/neutron/neutron.conf
    +
    +[database]
    +connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron                         (CTL)
    +
    +[DEFAULT]
    +core_plugin = ml2                                                                              (CTL)
    +service_plugins = router                                                                       (CTL)
    +allow_overlapping_ips = true                                                                   (CTL)
    +transport_url = rabbit://openstack:RABBIT_PASS@controller
    +auth_strategy = keystone
    +notify_nova_on_port_status_changes = true                                                      (CTL)
    +notify_nova_on_port_data_changes = true                                                        (CTL)
    +api_workers = 3                                                                                (CTL)
    +
    +[keystone_authtoken]
    +www_authenticate_uri = http://controller:5000
    +auth_url = http://controller:5000
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +project_name = service
    +username = neutron
    +password = NEUTRON_PASS
    +
    +[nova]
    +auth_url = http://controller:5000                                                              (CTL)
    +auth_type = password                                                                           (CTL)
    +project_domain_name = Default                                                                  (CTL)
    +user_domain_name = Default                                                                     (CTL)
    +region_name = RegionOne                                                                        (CTL)
    +project_name = service                                                                         (CTL)
    +username = nova                                                                                (CTL)
    +password = NOVA_PASS                                                                           (CTL)
    +
    +[oslo_concurrency]
    +lock_path = /var/lib/neutron/tmp
    +

    解释

    +

    [database]部分,配置数据库入口;

    +

    [default]部分,启用ml2插件和router插件,允许ip地址重叠,配置RabbitMQ消息队列入口;

    +

    [default] [keystone]部分,配置身份认证服务入口;

    +

    [default] [nova]部分,配置网络来通知计算网络拓扑的变化;

    +

    [oslo_concurrency]部分,配置lock path。

    +

    注意

    +

    替换NEUTRON_DBPASS为 neutron 数据库的密码;

    +

    替换RABBIT_PASS为 RabbitMQ中openstack 账户的密码;

    +

    替换NEUTRON_PASS为 neutron 用户的密码;

    +

    替换NOVA_PASS为 nova 用户的密码。

    +

    配置ML2插件:

    +
    vim /etc/neutron/plugins/ml2/ml2_conf.ini
    +
    +[ml2]
    +type_drivers = flat,vlan,vxlan
    +tenant_network_types = vxlan
    +mechanism_drivers = linuxbridge,l2population
    +extension_drivers = port_security
    +
    +[ml2_type_flat]
    +flat_networks = provider
    +
    +[ml2_type_vxlan]
    +vni_ranges = 1:1000
    +
    +[securitygroup]
    +enable_ipset = true
    +

    创建/etc/neutron/plugin.ini的符号链接

    +
    ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
    +

    注意

    +

    [ml2]部分,启用 flat、vlan、vxlan 网络,启用 linuxbridge 及 l2population 机制,启用端口安全扩展驱动;

    +

    [ml2_type_flat]部分,配置 flat 网络为 provider 虚拟网络;

    +

    [ml2_type_vxlan]部分,配置 VXLAN 网络标识符范围;

    +

    [securitygroup]部分,配置允许 ipset。

    +

    补充

    +

    l2 的具体配置可以根据用户需求自行修改,本文使用的是provider network + linuxbridge

    +

    配置 Linux bridge 代理:

    +
    vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
    +
    +[linux_bridge]
    +physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME
    +
    +[vxlan]
    +enable_vxlan = true
    +local_ip = OVERLAY_INTERFACE_IP_ADDRESS
    +l2_population = true
    +
    +[securitygroup]
    +enable_security_group = true
    +firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
    +

    解释

    +

    [linux_bridge]部分,映射 provider 虚拟网络到物理网络接口;

    +

    [vxlan]部分,启用 vxlan 覆盖网络,配置处理覆盖网络的物理网络接口 IP 地址,启用 layer-2 population;

    +

    [securitygroup]部分,允许安全组,配置 linux bridge iptables 防火墙驱动。

    +

    注意

    +

    替换PROVIDER_INTERFACE_NAME为物理网络接口;

    +

    替换OVERLAY_INTERFACE_IP_ADDRESS为控制节点的管理IP地址。

    +

    配置Layer-3代理:

    +
    vim /etc/neutron/l3_agent.ini                                                                  (CTL)
    +
    +[DEFAULT]
    +interface_driver = linuxbridge
    +

    解释

    +

    在[default]部分,配置接口驱动为linuxbridge

    +

    配置DHCP代理:

    +
    vim /etc/neutron/dhcp_agent.ini                                                                (CTL)
    +
    +[DEFAULT]
    +interface_driver = linuxbridge
    +dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
    +enable_isolated_metadata = true
    +

    解释

    +

    [default]部分,配置linuxbridge接口驱动、Dnsmasq DHCP驱动,启用隔离的元数据。

    +

    配置metadata代理:

    +
    vim /etc/neutron/metadata_agent.ini                                                            (CTL)
    +
    +[DEFAULT]
    +nova_metadata_host = controller
    +metadata_proxy_shared_secret = METADATA_SECRET
    +

    解释

    +

    [default]部分,配置元数据主机和shared secret。

    +

    注意

    +

    替换METADATA_SECRET为合适的元数据代理secret。

    +
  6. +
  7. +

    配置nova相关配置

    +
    vim /etc/nova/nova.conf
    +
    +[neutron]
    +auth_url = http://controller:5000
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +region_name = RegionOne
    +project_name = service
    +username = neutron
    +password = NEUTRON_PASS
    +service_metadata_proxy = true                                                                  (CTL)
    +metadata_proxy_shared_secret = METADATA_SECRET                                                 (CTL)
    +

    解释

    +

    [neutron]部分,配置访问参数,启用元数据代理,配置secret。

    +

    注意

    +

    替换NEUTRON_PASS为 neutron 用户的密码;

    +

    替换METADATA_SECRET为合适的元数据代理secret。

    +
  8. +
  9. +

    同步数据库:

    +
    su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
    +--config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
    +
  10. +
  11. +

    重启计算API服务:

    +
    systemctl restart openstack-nova-api.service
    +
  12. +
  13. +

    启动网络服务

    +
    systemctl enable neutron-server.service neutron-linuxbridge-agent.service \                    (CTL)
    +neutron-dhcp-agent.service neutron-metadata-agent.service \
    +neutron-l3-agent.service
    +
    +systemctl restart neutron-server.service neutron-linuxbridge-agent.service \                   (CTL)
    +neutron-dhcp-agent.service neutron-metadata-agent.service \
    +neutron-l3-agent.service
    +
    +systemctl enable neutron-linuxbridge-agent.service                                             (CPT)
    +systemctl restart neutron-linuxbridge-agent.service openstack-nova-compute.service             (CPT)
    +
  14. +
  15. +

    验证

    +

    验证 neutron 代理启动成功:

    +
    openstack network agent list
    +
  16. +
+
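
在此基础上,还可以参考如下示例创建一个provider网络和子网,进一步验证网络功能(网段、网关等均为示例值,请按实际环境替换):

    source ~/.admin-openrc
    openstack network create --share --external \
      --provider-physical-network provider \
      --provider-network-type flat provider
    openstack subnet create --network provider \
      --allocation-pool start=203.0.113.101,end=203.0.113.250 \
      --gateway 203.0.113.1 --subnet-range 203.0.113.0/24 provider-subnet
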

Cinder 安装

+
    +
  1. +

    创建数据库、服务凭证和 API 端点

    +

    创建数据库:

    +
    mysql -u root -p
    +
    +MariaDB [(none)]> CREATE DATABASE cinder;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \
    +IDENTIFIED BY 'CINDER_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \
    +IDENTIFIED BY 'CINDER_DBPASS';
    +MariaDB [(none)]> exit
    +

    注意

    +

    替换 CINDER_DBPASS 为cinder数据库设置密码。

    +
    source ~/.admin-openrc
    +

    创建cinder服务凭证:

    +
    openstack user create --domain default --password-prompt cinder
    +openstack role add --project service --user cinder admin
    +openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
    +openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
    +

    创建块存储服务API端点:

    +
    openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(project_id\)s
    +openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(project_id\)s
    +openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(project_id\)s
    +openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s
    +openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s
    +openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s
    +
  2. +
  3. +

    安装软件包:

    +
    yum install openstack-cinder-api openstack-cinder-scheduler                                    (CTL)
    +
    yum install lvm2 device-mapper-persistent-data scsi-target-utils rpcbind nfs-utils \           (STG)
    +            openstack-cinder-volume openstack-cinder-backup
    +
  4. +
  5. +

    准备存储设备,以下仅为示例:

    +
    pvcreate /dev/vdb
    +vgcreate cinder-volumes /dev/vdb
    +
    +vim /etc/lvm/lvm.conf
    +
    +
    +devices {
    +...
    +filter = [ "a/vdb/", "r/.*/"]
    +

    解释

    +

    在devices部分,添加过滤器以接受/dev/vdb设备并拒绝其他设备。

    +
  6. +
  7. +

    准备NFS

    +
    mkdir -p /root/cinder/backup
    +
    +cat << EOF >> /etc/exports
    +/root/cinder/backup 192.168.1.0/24(rw,sync,no_root_squash,no_all_squash)
    +EOF
    +
    +
  8. +
  9. +

    配置cinder相关配置:

    +
    vim /etc/cinder/cinder.conf
    +
    +[DEFAULT]
    +transport_url = rabbit://openstack:RABBIT_PASS@controller
    +auth_strategy = keystone
    +my_ip = 10.0.0.11
    +enabled_backends = lvm                                                                         (STG)
    +backup_driver=cinder.backup.drivers.nfs.NFSBackupDriver                                        (STG)
    +backup_share=HOST:PATH                                                                         (STG)
    +
    +[database]
    +connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder
    +
    +[keystone_authtoken]
    +www_authenticate_uri = http://controller:5000
    +auth_url = http://controller:5000
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +project_name = service
    +username = cinder
    +password = CINDER_PASS
    +
    +[oslo_concurrency]
    +lock_path = /var/lib/cinder/tmp
    +
    +[lvm]
    +volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver                                      (STG)
    +volume_group = cinder-volumes                                                                  (STG)
    +iscsi_protocol = iscsi                                                                         (STG)
    +iscsi_helper = tgtadm                                                                          (STG)
    +

    解释

    +

    [database]部分,配置数据库入口;

    +

    [DEFAULT]部分,配置RabbitMQ消息队列入口,配置my_ip;

    +

    [DEFAULT] [keystone_authtoken]部分,配置身份认证服务入口;

    +

    [oslo_concurrency]部分,配置lock path。

    +

    注意

    +

    替换CINDER_DBPASS为 cinder 数据库的密码;

    +

    替换RABBIT_PASS为 RabbitMQ 中 openstack 账户的密码;

    +

    配置my_ip为控制节点的管理 IP 地址;

    +

    替换CINDER_PASS为 cinder 用户的密码;

    +

    替换HOST:PATH为 NFS 的HOSTIP和共享路径;

    +
  10. +
  11. +

    同步数据库:

    +
    su -s /bin/sh -c "cinder-manage db sync" cinder                                                (CTL)
    +
  12. +
  13. +

    配置nova:

    +
    vim /etc/nova/nova.conf                                                                        (CTL)
    +
    +[cinder]
    +os_region_name = RegionOne
    +
  14. +
  15. +

    重启计算API服务

    +
    systemctl restart openstack-nova-api.service
    +
  16. +
  17. +

    启动cinder服务

    +
    systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service               (CTL)
    +systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service                (CTL)
    +
    systemctl enable rpcbind.service nfs-server.service tgtd.service iscsid.service \              (STG)
    +                 openstack-cinder-volume.service \
    +                 openstack-cinder-backup.service
    +systemctl start rpcbind.service nfs-server.service tgtd.service iscsid.service \               (STG)
    +                openstack-cinder-volume.service \
    +                openstack-cinder-backup.service
    +

    注意

    +

    当cinder使用tgtadm的方式挂卷的时候,要修改/etc/tgt/tgtd.conf,内容如下,保证tgtd可以发现cinder-volume的iscsi target。

    +
    include /var/lib/cinder/volumes/*
    +
  18. +
  19. +

    验证

    +
    source ~/.admin-openrc
    +openstack volume service list
    +
  20. +
+

horizon 安装

+
    +
  1. +

    安装软件包

    +
    yum install openstack-dashboard
    +
  2. +
  3. +

    修改文件

    +

    修改变量

    +
    vim /etc/openstack-dashboard/local_settings
    +
    +OPENSTACK_HOST = "controller"
    +ALLOWED_HOSTS = ['*', ]
    +
    +SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
    +
    +CACHES = {
    +'default': {
    +     'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
    +     'LOCATION': 'controller:11211',
    +    }
    +}
    +
    +OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
    +OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
    +OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
    +OPENSTACK_KEYSTONE_DEFAULT_ROLE = "member"
    +WEBROOT = '/dashboard'
    +POLICY_FILES_PATH = "/etc/openstack-dashboard"
    +
    +OPENSTACK_API_VERSIONS = {
    +    "identity": 3,
    +    "image": 2,
    +    "volume": 3,
    +}
    +
  4. +
  5. +

    重启 httpd 服务

    +
    systemctl restart httpd.service memcached.service
    +
  6. +
  7. +

    验证

    打开浏览器,输入网址http://HOSTIP/dashboard/,登录 horizon。

    +

    注意

    +

    替换HOSTIP为控制节点管理平面IP地址

    +
  8. +
+

Tempest 安装

+

Tempest是OpenStack的集成测试服务,如果用户需要全面自动化测试已安装的OpenStack环境的功能,则推荐使用该组件。否则,可以不用安装。

+
    +
  1. +

    安装Tempest

    +
    yum install openstack-tempest
    +
  2. +
  3. +

    初始化目录

    +
    tempest init mytest
    +
  4. +
  5. +

    修改配置文件。

    +
    cd mytest
    +vi etc/tempest.conf
    +

    tempest.conf中需要配置当前OpenStack环境的信息,具体内容可以参考官方示例

    +
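    下面是一个最简的tempest.conf配置片段示意(地址、密码与镜像/规格ID均为假设值,需按实际环境替换,完整配置请以官方示例为准):

    ```shell
    # etc/tempest.conf(示意)
    [auth]
    admin_username = admin
    admin_password = ADMIN_PASS
    admin_project_name = admin
    admin_domain_name = Default

    [identity]
    uri_v3 = http://controller:5000/v3

    [compute]
    image_ref = <glance镜像ID>
    flavor_ref = <nova规格ID>
    ```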
  6. +
  7. +

    执行测试

    +
    tempest run
    +
  8. +
  9. +

    安装tempest扩展(可选)

    OpenStack各个服务本身也提供了一些tempest测试包,用户可以安装这些包来丰富tempest的测试内容。在Train中,我们提供了Cinder、Glance、Keystone、Ironic、Trove的扩展测试,用户可以执行如下命令进行安装使用:

    yum install python3-cinder-tempest-plugin python3-glance-tempest-plugin python3-ironic-tempest-plugin python3-keystone-tempest-plugin python3-trove-tempest-plugin

    +
  10. +
+

Ironic 安装

+

Ironic是OpenStack的裸金属服务,如果用户需要进行裸机部署则推荐使用该组件。否则,可以不用安装。

+
    +
  1. 设置数据库
  2. +
+

裸金属服务在数据库中存储信息,创建一个ironic用户可以访问的ironic数据库,替换IRONIC_DBPASSWORD为合适的密码

+

mysql -u root -p
+
+MariaDB [(none)]> CREATE DATABASE ironic CHARACTER SET utf8;
+MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'localhost' \
+IDENTIFIED BY 'IRONIC_DBPASSWORD';
+MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'%' \
+IDENTIFIED BY 'IRONIC_DBPASSWORD';
+2. 安装软件包

+
yum install openstack-ironic-api openstack-ironic-conductor python3-ironicclient
+

启动服务

+
systemctl enable openstack-ironic-api openstack-ironic-conductor
+systemctl start openstack-ironic-api openstack-ironic-conductor
+
    +
  1. 创建服务用户认证
  2. +
+

1、创建Bare Metal服务用户

+
openstack user create --password IRONIC_PASSWORD \
+                      --email ironic@example.com ironic
+openstack role add --project service --user ironic admin
+openstack service create --name ironic \
+                         --description "Ironic baremetal provisioning service" baremetal
+

2、创建Bare Metal服务访问入口

+
openstack endpoint create --region RegionOne baremetal admin http://$IRONIC_NODE:6385
+openstack endpoint create --region RegionOne baremetal public http://$IRONIC_NODE:6385
+openstack endpoint create --region RegionOne baremetal internal http://$IRONIC_NODE:6385
+
    +
  1. 配置ironic-api服务
  2. +
+

配置文件路径/etc/ironic/ironic.conf

+

1、通过connection选项配置数据库的位置,如下所示,替换IRONIC_DBPASSWORD为ironic用户的密码,替换DB_IP为DB服务器所在的IP地址:

+
[database]
+
+# The SQLAlchemy connection string used to connect to the
+# database (string value)
+
+connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic
+

2、通过以下选项配置ironic-api服务使用RabbitMQ消息代理,替换RPC_*为RabbitMQ的详细地址和凭证

+
[DEFAULT]
+
+# A URL representing the messaging driver to use and its full
+# configuration. (string value)
+
+transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
+

用户也可自行使用json-rpc方式替换rabbitmq

+
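若选择json-rpc方式,可参考如下配置示意(基于ironic的rpc_transport选项,IP为假设值,具体取值请以所装版本的配置说明为准):

```shell
# /etc/ironic/ironic.conf(示意)
[DEFAULT]
rpc_transport = json-rpc

[json_rpc]
# 假设使用conductor所在主机的管理IP
host_ip = 10.0.0.11
```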

3、配置ironic-api服务使用身份认证服务的凭证,替换PUBLIC_IDENTITY_IP为身份认证服务器的公共IP,替换PRIVATE_IDENTITY_IP为身份认证服务器的私有IP,替换IRONIC_PASSWORD为身份认证服务中ironic用户的密码:

+
[DEFAULT]
+
+# Authentication strategy used by ironic-api: one of
+# "keystone" or "noauth". "noauth" should not be used in a
+# production environment because all authentication will be
+# disabled. (string value)
+
+auth_strategy=keystone
+
+[keystone_authtoken]
+# Authentication type to load (string value)
+auth_type=password
+# Complete public Identity API endpoint (string value)
+www_authenticate_uri=http://PUBLIC_IDENTITY_IP:5000
+# Complete admin Identity API endpoint. (string value)
+auth_url=http://PRIVATE_IDENTITY_IP:5000
+# Service username. (string value)
+username=ironic
+# Service account password. (string value)
+password=IRONIC_PASSWORD
+# Service tenant name. (string value)
+project_name=service
+# Domain name containing project (string value)
+project_domain_name=Default
+# User's domain name (string value)
+user_domain_name=Default
+
+

4、创建裸金属服务数据库表

+
ironic-dbsync --config-file /etc/ironic/ironic.conf create_schema
+

5、重启ironic-api服务

+
sudo systemctl restart openstack-ironic-api
+
    +
  1. 配置ironic-conductor服务
  2. +
+

1、替换HOST_IP为conductor host的IP

+
[DEFAULT]
+
+# IP address of this host. If unset, will determine the IP
+# programmatically. If unable to do so, will use "127.0.0.1".
+# (string value)
+
+my_ip=HOST_IP
+

2、配置数据库的位置,ironic-conductor应该使用和ironic-api相同的配置。替换IRONIC_DBPASSWORD为ironic用户的密码,替换DB_IP为DB服务器所在的IP地址:

+
[database]
+
+# The SQLAlchemy connection string to use to connect to the
+# database. (string value)
+
+connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic
+

3、通过以下选项配置ironic-api服务使用RabbitMQ消息代理,ironic-conductor应该使用和ironic-api相同的配置,替换RPC_*为RabbitMQ的详细地址和凭证

+
[DEFAULT]
+
+# A URL representing the messaging driver to use and its full
+# configuration. (string value)
+
+transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
+

用户也可自行使用json-rpc方式替换rabbitmq

+

4、配置凭证访问其他OpenStack服务

+

为了与其他OpenStack服务进行通信,裸金属服务在请求其他服务时需要使用服务用户与OpenStack Identity服务进行认证。这些用户的凭据必须在与相应服务相关的每个配置文件中进行配置。

+
[neutron] - 访问OpenStack网络服务
+[glance] - 访问OpenStack镜像服务
+[swift] - 访问OpenStack对象存储服务
+[cinder] - 访问OpenStack块存储服务
+[inspector] - 访问OpenStack裸金属introspection服务
+[service_catalog] - 一个特殊项用于保存裸金属服务使用的凭证,该凭证用于发现注册在OpenStack身份认证服务目录中的自己的API URL端点
+

简单起见,可以对所有服务使用同一个服务用户。为了向后兼容,该用户应该和ironic-api服务的[keystone_authtoken]所配置的为同一个用户。但这不是必须的,也可以为每个服务创建并配置不同的服务用户。

+

在下面的示例中,用户访问OpenStack网络服务的身份验证信息配置为:

+
网络服务部署在名为RegionOne的身份认证服务域中,仅在服务目录中注册公共端点接口
+
+请求时使用特定的CA SSL证书进行HTTPS连接
+
+与ironic-api服务配置相同的服务用户
+
+动态密码认证插件基于其他选项发现合适的身份认证服务API版本
+
[neutron]
+
+# Authentication type to load (string value)
+auth_type = password
+# Authentication URL (string value)
+auth_url=https://IDENTITY_IP:5000/
+# Username (string value)
+username=ironic
+# User's password (string value)
+password=IRONIC_PASSWORD
+# Project name to scope to (string value)
+project_name=service
+# Domain ID containing project (string value)
+project_domain_id=default
+# User's domain id (string value)
+user_domain_id=default
+# PEM encoded Certificate Authority to use when verifying
+# HTTPs connections. (string value)
+cafile=/opt/stack/data/ca-bundle.pem
+# The default region_name for endpoint URL discovery. (string
+# value)
+region_name = RegionOne
+# List of interfaces, in order of preference, for endpoint
+# URL. (list value)
+valid_interfaces=public
+

默认情况下,为了与其他服务进行通信,裸金属服务会尝试通过身份认证服务的服务目录发现该服务合适的端点。如果希望对一个特定服务使用一个不同的端点,则在裸金属服务的配置文件中通过endpoint_override选项进行指定:

+
[neutron]
+...
+endpoint_override = <NEUTRON_API_ADDRESS>
+

5、配置允许的驱动程序和硬件类型

+

通过设置enabled_hardware_types设置ironic-conductor服务允许使用的硬件类型:

+
[DEFAULT]
+enabled_hardware_types = ipmi
+

配置硬件接口:

+
enabled_boot_interfaces = pxe
+enabled_deploy_interfaces = direct,iscsi
+enabled_inspect_interfaces = inspector
+enabled_management_interfaces = ipmitool
+enabled_power_interfaces = ipmitool
+

配置接口默认值:

+
[DEFAULT]
+default_deploy_interface = direct
+default_network_interface = neutron
+

如果启用了任何使用Direct deploy的驱动,必须安装和配置镜像服务的Swift后端。Ceph对象网关(RADOS网关)也支持作为镜像服务的后端。

+

6、重启ironic-conductor服务

+
sudo systemctl restart openstack-ironic-conductor
+
    +
  1. +

    配置httpd服务

    +
  2. +
  3. +

    创建ironic要使用的httpd的root目录并设置属主属组,目录路径要和/etc/ironic/ironic.conf中[deploy]组的http_root配置项指定的路径一致。

    +
    mkdir -p /var/lib/ironic/httproot
    +chown ironic:ironic /var/lib/ironic/httproot
    +
  4. +
  5. +

    安装和配置httpd服务

    +
      +
    1. +

      安装httpd服务,已有请忽略

      +

      yum install httpd -y
      + 2. 创建/etc/httpd/conf.d/openstack-ironic-httpd.conf文件,内容如下:

      +
      Listen 8080
      +
      +<VirtualHost *:8080>
      +    ServerName ironic.openeuler.com
      +
      +    ErrorLog "/var/log/httpd/openstack-ironic-httpd-error_log"
      +    CustomLog "/var/log/httpd/openstack-ironic-httpd-access_log" "%h %l %u %t \"%r\" %>s %b"
      +
      +    DocumentRoot "/var/lib/ironic/httproot"
      +    <Directory "/var/lib/ironic/httproot">
      +        Options Indexes FollowSymLinks
      +        Require all granted
      +    </Directory>
      +    LogLevel warn
      +    AddDefaultCharset UTF-8
      +    EnableSendfile on
      +</VirtualHost>
      +
      +

      注意监听的端口要和/etc/ironic/ironic.conf里[deploy]选项中http_url配置项中指定的端口一致。

      +
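      下面给出[deploy]组与httpd监听端口对应关系的配置示意(IP与端口为假设值,需与上文创建的目录及VirtualHost保持一致):

      ```shell
      # /etc/ironic/ironic.conf(示意)
      [deploy]
      http_root = /var/lib/ironic/httproot
      # 端口需与openstack-ironic-httpd.conf中Listen的端口一致
      http_url = http://10.0.0.11:8080
      ```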
    2. +
    3. +

      重启httpd服务。

      +

      systemctl restart httpd
      +7. deploy ramdisk镜像制作

      +
    4. +
    +
  6. +
+

T版的ramdisk镜像支持通过ironic-python-agent服务或disk-image-builder工具制作,也可以使用社区最新的ironic-python-agent-builder。用户也可以自行选择其他工具制作。

若使用T版原生工具,则需要安装对应的软件包。

+
yum install openstack-ironic-python-agent
+或者
+yum install diskimage-builder
+

具体的使用方法可以参考官方文档

+

这里介绍下使用ironic-python-agent-builder构建ironic使用的deploy镜像的完整过程。

+
    +
  1. +

    安装 ironic-python-agent-builder

    +
    1. 安装工具:
    +
    +    ```shell
    +    pip install ironic-python-agent-builder
    +    ```
    +
    +2. 修改以下文件中的python解释器:
    +
    +    ```shell
    +    /usr/bin/yum /usr/libexec/urlgrabber-ext-down
    +    ```
    +
    +3. 安装其它必须的工具:
    +
    +    ```shell
    +    yum install git
    +    ```
    +
    +    由于`DIB`依赖`semanage`命令,所以在制作镜像之前确定该命令是否可用:`semanage --help`,如果提示无此命令,安装即可:
    +
    +    ```shell
    +    # 先查询需要安装哪个包
    +    [root@localhost ~]# yum provides /usr/sbin/semanage
    +    已加载插件:fastestmirror
    +    Loading mirror speeds from cached hostfile
    +    * base: mirror.vcu.edu
    +    * extras: mirror.vcu.edu
    +    * updates: mirror.math.princeton.edu
    +    policycoreutils-python-2.5-34.el7.aarch64 : SELinux policy core python utilities
    +    源    :base
    +    匹配来源:
    +    文件名    :/usr/sbin/semanage
    +    # 安装
    +    [root@localhost ~]# yum install policycoreutils-python
    +    ```
    +
  2. +
  3. +

    制作镜像

    +
    如果是`arm`架构,需要添加:
    +```shell
    +export ARCH=aarch64
    +```
    +
    +基本用法:
    +
    +```shell
    +usage: ironic-python-agent-builder [-h] [-r RELEASE] [-o OUTPUT] [-e ELEMENT]
    +                                    [-b BRANCH] [-v] [--extra-args EXTRA_ARGS]
    +                                    distribution
    +
    +positional arguments:
    +    distribution          Distribution to use
    +
    +optional arguments:
    +    -h, --help            show this help message and exit
    +    -r RELEASE, --release RELEASE
    +                        Distribution release to use
    +    -o OUTPUT, --output OUTPUT
    +                        Output base file name
    +    -e ELEMENT, --element ELEMENT
    +                        Additional DIB element to use
    +    -b BRANCH, --branch BRANCH
    +                        If set, override the branch that is used for ironic-
    +                        python-agent and requirements
    +    -v, --verbose         Enable verbose logging in diskimage-builder
    +    --extra-args EXTRA_ARGS
    +                        Extra arguments to pass to diskimage-builder
    +```
    +
    +举例说明:
    +
    +```shell
    +ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky
    +```
    +
  4. +
  5. +

    允许ssh登录

    +
    初始化环境变量,然后制作镜像:
    +
    +```shell
    +export DIB_DEV_USER_USERNAME=ipa \
    +export DIB_DEV_USER_PWDLESS_SUDO=yes \
    +export DIB_DEV_USER_PASSWORD='123'
    +ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky -e selinux-permissive -e devuser
    +```
    +
  6. +
  7. +

    指定代码仓库

    +
    初始化对应的环境变量,然后制作镜像:
    +
    +```shell
    +# 指定仓库地址以及版本
    +DIB_REPOLOCATION_ironic_python_agent=git@172.20.2.149:liuzz/ironic-python-agent.git
    +DIB_REPOREF_ironic_python_agent=origin/develop
    +
    +# 直接从gerrit上clone代码
    +DIB_REPOLOCATION_ironic_python_agent=https://review.opendev.org/openstack/ironic-python-agent
    +DIB_REPOREF_ironic_python_agent=refs/changes/43/701043/1
    +```
    +
    +参考:[source-repositories](https://docs.openstack.org/diskimage-builder/latest/elements/source-repositories/README.html)。
    +
    +指定仓库地址及版本验证成功。
    +
  8. +
  9. +

    注意

    原生的openstack里的pxe配置文件的模版不支持arm64架构,需要自己对原生openstack代码进行修改:

    +

    在T版中,社区的ironic仍然不支持arm64位的uefi pxe启动,表现为生成的grub.cfg文件(一般位于/tftpboot/下)格式不对而导致pxe启动失败

    +

    需要用户对生成grub.cfg的代码逻辑自行修改。

    +

    ironic向ipa发送查询命令执行状态请求的tls报错:

    +

    T版的ipa和ironic默认都会开启tls认证的方式向对方发送请求,跟据官网的说明进行关闭即可。

    +
      +
    1. 修改ironic配置文件(/etc/ironic/ironic.conf)下面的配置中添加ipa-insecure=1:
    2. +
    +
    [agent]
    +verify_ca = False
    +
    +[pxe]
    +pxe_append_params = nofb nomodeset vga=normal coreos.autologin ipa-insecure=1
    +
      +
    1. ramdisk镜像中添加ipa配置文件/etc/ironic_python_agent/ironic_python_agent.conf并配置tls的配置如下:
    2. +
    +

    /etc/ironic_python_agent/ironic_python_agent.conf (需要提前创建/etc/ironic_python_agent目录)

    +
    [DEFAULT]
    +enable_auto_tls = False
    +

    设置权限:

    +
    chown -R ipa.ipa /etc/ironic_python_agent/
    +
      +
    1. 修改ipa服务的服务启动文件,添加配置文件选项
    2. +
    +

    vim /usr/lib/systemd/system/ironic-python-agent.service

    +
    [Unit]
    +Description=Ironic Python Agent
    +After=network-online.target
    +
    +[Service]
    +ExecStartPre=/sbin/modprobe vfat
    +ExecStart=/usr/local/bin/ironic-python-agent --config-file /etc/ironic_python_agent/ironic_python_agent.conf
    +Restart=always
    +RestartSec=30s
    +
    +[Install]
    +WantedBy=multi-user.target
    +
  10. +
+

在Train中,我们还提供了ironic-inspector等服务,用户可根据自身需求安装。

+
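例如,若需要裸机自省功能,可以尝试安装对应软件包(下述包名为假设,请以实际源中提供的名称为准):

```shell
yum install openstack-ironic-inspector
```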

Kolla 安装

+

Kolla为OpenStack服务提供生产环境可用的容器化部署的功能。

+

Kolla的安装十分简单,只需要安装对应的RPM包即可

+
yum install openstack-kolla openstack-kolla-ansible
+

安装完后,就可以使用kolla-ansible, kolla-build, kolla-genpwd, kolla-mergepwd等命令进行相关的镜像制作和容器环境部署了。

+
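以下为这些命令的一个简单使用示意(inventory路径与部署流程仅为示例,详细用法请参考Kolla官方文档):

```shell
# 生成各服务所需的随机密码(默认写入/etc/kolla/passwords.yml)
kolla-genpwd
# 以all-in-one方式进行部署前检查与部署(inventory路径为示例)
kolla-ansible -i /usr/share/kolla-ansible/ansible/inventory/all-in-one prechecks
kolla-ansible -i /usr/share/kolla-ansible/ansible/inventory/all-in-one deploy
```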

Trove 安装

+

Trove是OpenStack的数据库服务,如果用户使用OpenStack提供的数据库服务则推荐使用该组件。否则,可以不用安装。

+
    +
  1. 设置数据库
  2. +
+

数据库服务在数据库中存储信息,创建一个trove用户可以访问的trove数据库,替换TROVE_DBPASSWORD为合适的密码

+
mysql -u root -p
+
+MariaDB [(none)]> CREATE DATABASE trove CHARACTER SET utf8;
+MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'localhost' \
+IDENTIFIED BY 'TROVE_DBPASSWORD';
+MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'%' \
+IDENTIFIED BY 'TROVE_DBPASSWORD';
+
    +
  1. 创建服务用户认证
  2. +
+

1、创建Trove服务用户

+

openstack user create --domain default --password-prompt trove
+openstack role add --project service --user trove admin
+openstack service create --name trove --description "Database" database
+ 解释: TROVE_PASSWORD 替换为trove用户的密码

+

2、创建Database服务访问入口

+
openstack endpoint create --region RegionOne database public http://controller:8779/v1.0/%\(tenant_id\)s
+openstack endpoint create --region RegionOne database internal http://controller:8779/v1.0/%\(tenant_id\)s
+openstack endpoint create --region RegionOne database admin http://controller:8779/v1.0/%\(tenant_id\)s
+
    +
  1. 安装和配置Trove各组件
  2. +
+

1、安装Trove包

yum install openstack-trove python3-troveclient
+

2、配置trove.conf

vim /etc/trove/trove.conf
+ [DEFAULT]
+ log_dir = /var/log/trove
+ trove_auth_url = http://controller:5000/
+ nova_compute_url = http://controller:8774/v2
+ cinder_url = http://controller:8776/v1
+ swift_url = http://controller:8080/v1/AUTH_
+ rpc_backend = rabbit
+ transport_url = rabbit://openstack:RABBIT_PASS@controller:5672
+ auth_strategy = keystone
+ add_addresses = True
+ api_paste_config = /etc/trove/api-paste.ini
+ nova_proxy_admin_user = admin
+ nova_proxy_admin_pass = ADMIN_PASSWORD
+ nova_proxy_admin_tenant_name = service
+ taskmanager_manager = trove.taskmanager.manager.Manager
+ use_nova_server_config_drive = True
+ # Set these if using Neutron Networking
+ network_driver = trove.network.neutron.NeutronDriver
+ network_label_regex = .*
+
+ [database]
+ connection = mysql+pymysql://trove:TROVE_DBPASSWORD@controller/trove
+
+ [keystone_authtoken]
+ www_authenticate_uri = http://controller:5000/
+ auth_url = http://controller:5000/
+ auth_type = password
+ project_domain_name = default
+ user_domain_name = default
+ project_name = service
+ username = trove
+ password = TROVE_PASSWORD
+
+**解释:**
+- [Default]分组中nova_compute_url、cinder_url为Nova和Cinder在Keystone中创建的endpoint
+- nova_proxy_XXX为一个能访问Nova服务的用户信息,上例中以admin用户为例
+- transport_url为RabbitMQ连接信息,RABBIT_PASS替换为RabbitMQ的密码
+- [database]分组中的connection为前面在mysql中为Trove创建的数据库信息
+- Trove的用户信息中TROVE_PASSWORD替换为实际trove用户的密码

+
    +
  3. 配置trove-guestagent.conf

vim /etc/trove/trove-guestagent.conf
+
+rabbit_host = controller
+rabbit_password = RABBIT_PASS
+trove_auth_url = http://controller:5000/

**解释:** `guestagent`是trove中一个独立组件,需要预先内置到Trove通过Nova创建的虚拟
+机镜像中,在创建好数据库实例后,会起guestagent进程,负责通过消息队列(RabbitMQ)向Trove上
+报心跳,因此需要配置RabbitMQ的用户和密码信息。
+**从Victoria版开始,Trove使用一个统一的镜像来跑不同类型的数据库,数据库服务运行在Guest虚拟机的Docker容器中。**
+- `RABBIT_PASS`替换为RabbitMQ的密码  
+
+4. 生成`Trove`数据库表
+```shell script
+su -s /bin/sh -c "trove-manage db_sync" trove

+
    +
  1. 完成安装配置
  2. +
  3. 配置Trove服务自启动

    systemctl enable openstack-trove-api.service \
    +                openstack-trove-taskmanager.service \
    +                openstack-trove-conductor.service
    +

  4. 启动服务

    systemctl start openstack-trove-api.service \
    +                openstack-trove-taskmanager.service \
    +                openstack-trove-conductor.service
  4. +
+

Swift 安装

+

Swift 提供了弹性可伸缩、高可用的分布式对象存储服务,适合存储大规模非结构化数据。

+
    +
  1. +

    创建服务凭证、API端点。

    +

    创建服务凭证

    +
    #创建swift用户:
    +openstack user create --domain default --password-prompt swift                 
    +#为swift用户添加admin角色:
    +openstack role add --project service --user swift admin                        
    +#创建swift服务实体:
    +openstack service create --name swift --description "OpenStack Object Storage" object-store                                                                   
    +

    创建swift API 端点:

    +
    openstack endpoint create --region RegionOne object-store public http://controller:8080/v1/AUTH_%\(project_id\)s                            
    +openstack endpoint create --region RegionOne object-store internal http://controller:8080/v1/AUTH_%\(project_id\)s                            
    +openstack endpoint create --region RegionOne object-store admin http://controller:8080/v1                                                  
    +
  2. +
  3. +

    安装软件包:

    +
    yum install openstack-swift-proxy python3-swiftclient python3-keystoneclient python3-keystonemiddleware memcached (CTL)
    +
  4. +
  5. +

    配置proxy-server相关配置

    +
  6. +
+

Swift RPM包里已经包含了一个基本可用的proxy-server.conf,只需要手动修改其中的ip和swift password即可。

+
***注意***
+
+**注意替换password为您在身份服务中为swift用户选择的密码**
+
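通常需要修改的位置如下(示意,节点名与密码请按实际环境替换,具体以RPM自带的proxy-server.conf为准):

```shell
# /etc/swift/proxy-server.conf(示意)
[filter:authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = swift
password = SWIFT_PASS
```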
    +
  1. +

    安装和配置存储节点 (STG)

    +

    安装支持的程序包: +

    yum install xfsprogs rsync

    +

    将/dev/vdb和/dev/vdc设备格式化为 XFS

    +
    mkfs.xfs /dev/vdb
    +mkfs.xfs /dev/vdc
    +

    创建挂载点目录结构:

    +
    mkdir -p /srv/node/vdb
    +mkdir -p /srv/node/vdc
    +

    找到新分区的 UUID:

    +
    blkid
    +

    编辑/etc/fstab文件并将以下内容添加到其中:

    +
    UUID="<UUID-from-output-above>" /srv/node/vdb xfs noatime 0 2
    +UUID="<UUID-from-output-above>" /srv/node/vdc xfs noatime 0 2
    +

    挂载设备:

    +

    mount /srv/node/vdb
    +mount /srv/node/vdc
    +注意

    +

    如果用户不需要容灾功能,以上步骤只需要创建一个设备即可,同时可以跳过下面的rsync配置

    +

    (可选)创建或编辑/etc/rsyncd.conf文件以包含以下内容:

    +

    [DEFAULT]
    +uid = swift
    +gid = swift
    +log file = /var/log/rsyncd.log
    +pid file = /var/run/rsyncd.pid
    +address = MANAGEMENT_INTERFACE_IP_ADDRESS
    +
    +[account]
    +max connections = 2
    +path = /srv/node/
    +read only = False
    +lock file = /var/lock/account.lock
    +
    +[container]
    +max connections = 2
    +path = /srv/node/
    +read only = False
    +lock file = /var/lock/container.lock
    +
    +[object]
    +max connections = 2
    +path = /srv/node/
    +read only = False
    +lock file = /var/lock/object.lock
    +替换MANAGEMENT_INTERFACE_IP_ADDRESS为存储节点上管理网络的IP地址

    +

    启动rsyncd服务并配置它在系统启动时启动:

    +
    systemctl enable rsyncd.service
    +systemctl start rsyncd.service
    +
  2. +
  3. +

    在存储节点安装和配置组件 (STG)

    +

    安装软件包:

    +
    yum install openstack-swift-account openstack-swift-container openstack-swift-object
    +

    编辑/etc/swift目录的account-server.conf、container-server.conf和object-server.conf文件,替换bind_ip为存储节点上管理网络的IP地址。

    +
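    以account-server.conf为例,需要修改的配置示意如下(container-server.conf、object-server.conf同理,IP为假设值):

    ```shell
    # /etc/swift/account-server.conf(示意)
    [DEFAULT]
    bind_ip = 10.0.0.51
    bind_port = 6202
    ```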

    确保挂载点目录结构的正确所有权:

    +
    chown -R swift:swift /srv/node
    +

    创建recon目录并确保其拥有正确的所有权:

    +
    mkdir -p /var/cache/swift
    +chown -R root:swift /var/cache/swift
    +chmod -R 775 /var/cache/swift
    +
  4. +
  5. +

    创建账号环 (CTL)

    +

    切换到/etc/swift目录。

    +
    cd /etc/swift
    +

    创建基础account.builder文件:

    +
    swift-ring-builder account.builder create 10 1 1
    +

    将每个存储节点添加到环中:

    +
    swift-ring-builder account.builder add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6202  --device DEVICE_NAME --weight DEVICE_WEIGHT
    +

    替换STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS为存储节点上管理网络的IP地址。替换DEVICE_NAME为同一存储节点上的存储设备名称

    +

    注意

    对每个存储节点上的每个存储设备重复此命令

    +

    验证环的内容:

    +
    swift-ring-builder account.builder
    +

    重新平衡环:

    +
    swift-ring-builder account.builder rebalance
    +
  6. +
  7. +

    创建容器环 (CTL)

    +

    切换到/etc/swift目录。

    +

    创建基础container.builder文件:

    +
       swift-ring-builder container.builder create 10 1 1
    +

    将每个存储节点添加到环中:

    +
    swift-ring-builder container.builder \
    +  add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6201 \
    +  --device DEVICE_NAME --weight 100
    +
    +

    替换STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS为存储节点上管理网络的IP地址。替换DEVICE_NAME为同一存储节点上的存储设备名称

    +

    注意 +对每个存储节点上的每个存储设备重复此命令

    +

    验证环的内容:

    +
    swift-ring-builder container.builder
    +

    重新平衡环:

    +
    swift-ring-builder container.builder rebalance
    +
  8. +
  9. +

    创建对象环 (CTL)

    +

    切换到/etc/swift目录。

    +

    创建基础object.builder文件:

    +
    swift-ring-builder object.builder create 10 1 1
    +

    将每个存储节点添加到环中

    +
     swift-ring-builder object.builder \
    +  add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6200 \
    +  --device DEVICE_NAME --weight 100
    +

    替换STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS为存储节点上管理网络的IP地址。替换DEVICE_NAME为同一存储节点上的存储设备名称

    +

    注意

    对每个存储节点上的每个存储设备重复此命令

    +

    验证环的内容:

    +
    swift-ring-builder object.builder
    +

    重新平衡环:

    +
    swift-ring-builder object.builder rebalance
    +

    分发环配置文件:

    +

    将account.ring.gz、container.ring.gz以及object.ring.gz文件复制到每个存储节点和运行代理服务的任何其他节点上的/etc/swift目录。

    +
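    可以使用scp分发环文件,示例如下(object1、object2为假设的存储节点主机名):

    ```shell
    # 在控制节点的/etc/swift目录下执行
    scp account.ring.gz container.ring.gz object.ring.gz root@object1:/etc/swift/
    scp account.ring.gz container.ring.gz object.ring.gz root@object2:/etc/swift/
    ```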
  10. +
  11. +

    完成安装

    +

    编辑/etc/swift/swift.conf文件

    +
    [swift-hash]
    +swift_hash_path_suffix = test-hash
    +swift_hash_path_prefix = test-hash
    +
    +[storage-policy:0]
    +name = Policy-0
    +default = yes
    +

    用唯一值替换 test-hash

    +
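    可以使用openssl生成随机字符串作为唯一值,例如:

    ```shell
    openssl rand -hex 10
    ```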

    将swift.conf文件复制到每个存储节点和运行代理服务的任何其他节点上的/etc/swift目录。

    +

    在所有节点上,确保配置目录的正确所有权:

    +
    chown -R root:swift /etc/swift
    +

    在控制器节点和运行代理服务的任何其他节点上,启动对象存储代理服务及其依赖项,并将它们配置为在系统启动时启动:

    +
    systemctl enable openstack-swift-proxy.service memcached.service
    +systemctl start openstack-swift-proxy.service memcached.service
    +

    在存储节点上,启动对象存储服务并将它们配置为在系统启动时启动:

    +
    systemctl enable openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service
    +
    +systemctl start openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service
    +
    +systemctl enable openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service
    +
    +systemctl start openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service
    +
    +systemctl enable openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service
    +
    +systemctl start openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service
    +
  12. +
+

Cyborg 安装

+

Cyborg为OpenStack提供加速器设备的支持,包括 GPU, FPGA, ASIC, NP, SoCs, NVMe/NOF SSDs, ODP, DPDK/SPDK等等。

+
    +
  1. 初始化对应数据库
  2. +
+
CREATE DATABASE cyborg;
+GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'localhost' IDENTIFIED BY 'CYBORG_DBPASS';
+GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'%' IDENTIFIED BY 'CYBORG_DBPASS';
+
    +
  1. 创建对应Keystone资源对象
  2. +
+
$ openstack user create --domain default --password-prompt cyborg
+$ openstack role add --project service --user cyborg admin
+$ openstack service create --name cyborg --description "Acceleration Service" accelerator
+
+$ openstack endpoint create --region RegionOne \
+  accelerator public http://<cyborg-ip>:6666/v1
+$ openstack endpoint create --region RegionOne \
+  accelerator internal http://<cyborg-ip>:6666/v1
+$ openstack endpoint create --region RegionOne \
+  accelerator admin http://<cyborg-ip>:6666/v1
+
    +
  1. 安装Cyborg
  2. +
+
yum install openstack-cyborg
+
    +
  1. 配置Cyborg
  2. +
+

修改/etc/cyborg/cyborg.conf

+
[DEFAULT]
+transport_url = rabbit://%RABBITMQ_USER%:%RABBITMQ_PASSWORD%@%OPENSTACK_HOST_IP%:5672/
+use_syslog = False
+state_path = /var/lib/cyborg
+debug = True
+
+[database]
+connection = mysql+pymysql://%DATABASE_USER%:%DATABASE_PASSWORD%@%OPENSTACK_HOST_IP%/cyborg
+
+[service_catalog]
+project_domain_id = default
+user_domain_id = default
+project_name = service
+password = PASSWORD
+username = cyborg
+auth_url = http://%OPENSTACK_HOST_IP%/identity
+auth_type = password
+
+[placement]
+project_domain_name = Default
+project_name = service
+user_domain_name = Default
+password = PASSWORD
+username = placement
+auth_url = http://%OPENSTACK_HOST_IP%/identity
+auth_type = password
+
+[keystone_authtoken]
+memcached_servers = localhost:11211
+project_domain_name = Default
+project_name = service
+user_domain_name = Default
+password = PASSWORD
+username = cyborg
+auth_url = http://%OPENSTACK_HOST_IP%/identity
+auth_type = password
+

自行修改对应的用户名、密码、IP等信息

+
    +
  1. 同步数据库表格
  2. +
+
cyborg-dbsync --config-file /etc/cyborg/cyborg.conf upgrade
+
    +
  1. 启动Cyborg服务
  2. +
+
systemctl enable openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent
+systemctl start openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent
+

Aodh 安装

+
    +
  1. 创建数据库
  2. +
+
CREATE DATABASE aodh;
+
+GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'localhost' IDENTIFIED BY 'AODH_DBPASS';
+
+GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'%' IDENTIFIED BY 'AODH_DBPASS';
+
    +
  1. 创建对应Keystone资源对象
  2. +
+
openstack user create --domain default --password-prompt aodh
+
+openstack role add --project service --user aodh admin
+
+openstack service create --name aodh --description "Telemetry" alarming
+
+openstack endpoint create --region RegionOne alarming public http://controller:8042
+
+openstack endpoint create --region RegionOne alarming internal http://controller:8042
+
+openstack endpoint create --region RegionOne alarming admin http://controller:8042
+
    +
  1. 安装Aodh
  2. +
+
yum install openstack-aodh-api openstack-aodh-evaluator openstack-aodh-notifier openstack-aodh-listener openstack-aodh-expirer python3-aodhclient
+
    +
  1. 修改配置文件
  2. +
+
[database]
+connection = mysql+pymysql://aodh:AODH_DBPASS@controller/aodh
+
+[DEFAULT]
+transport_url = rabbit://openstack:RABBIT_PASS@controller
+auth_strategy = keystone
+
+[keystone_authtoken]
+www_authenticate_uri = http://controller:5000
+auth_url = http://controller:5000
+memcached_servers = controller:11211
+auth_type = password
+project_domain_id = default
+user_domain_id = default
+project_name = service
+username = aodh
+password = AODH_PASS
+
+[service_credentials]
+auth_type = password
+auth_url = http://controller:5000/v3
+project_domain_id = default
+user_domain_id = default
+project_name = service
+username = aodh
+password = AODH_PASS
+interface = internalURL
+region_name = RegionOne
+
    +
  1. 初始化数据库
  2. +
+
aodh-dbsync
+
    +
  1. 启动Aodh服务
  2. +
+
systemctl enable openstack-aodh-api.service openstack-aodh-evaluator.service openstack-aodh-notifier.service openstack-aodh-listener.service
+
+systemctl start openstack-aodh-api.service openstack-aodh-evaluator.service openstack-aodh-notifier.service openstack-aodh-listener.service
+

Gnocchi 安装

+
    +
  1. 创建数据库
  2. +
+
CREATE DATABASE gnocchi;
+
+GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'localhost' IDENTIFIED BY 'GNOCCHI_DBPASS';
+
+GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'%' IDENTIFIED BY 'GNOCCHI_DBPASS';
+
    +
  1. 创建对应Keystone资源对象
  2. +
+
openstack user create --domain default --password-prompt gnocchi
+
+openstack role add --project service --user gnocchi admin
+
+openstack service create --name gnocchi --description "Metric Service" metric
+
+openstack endpoint create --region RegionOne metric public http://controller:8041
+
+openstack endpoint create --region RegionOne metric internal http://controller:8041
+
+openstack endpoint create --region RegionOne metric admin http://controller:8041
+
    +
  1. 安装Gnocchi
  2. +
+
yum install openstack-gnocchi-api openstack-gnocchi-metricd python3-gnocchiclient
+
    +
  1. 修改配置文件/etc/gnocchi/gnocchi.conf
  2. +
+
[api]
+auth_mode = keystone
+port = 8041
+uwsgi_mode = http-socket
+
+[keystone_authtoken]
+auth_type = password
+auth_url = http://controller:5000/v3
+project_domain_name = Default
+user_domain_name = Default
+project_name = service
+username = gnocchi
+password = GNOCCHI_PASS
+interface = internalURL
+region_name = RegionOne
+
+[indexer]
+url = mysql+pymysql://gnocchi:GNOCCHI_DBPASS@controller/gnocchi
+
+[storage]
+# coordination_url is not required but specifying one will improve
+# performance with better workload division across workers.
+coordination_url = redis://controller:6379
+file_basepath = /var/lib/gnocchi
+driver = file
+
    +
  1. 初始化数据库
  2. +
+
gnocchi-upgrade
+
    +
  1. 启动Gnocchi服务
  2. +
+
systemctl enable openstack-gnocchi-api.service openstack-gnocchi-metricd.service
+
+systemctl start openstack-gnocchi-api.service openstack-gnocchi-metricd.service
+

Ceilometer 安装

+
    +
  1. 创建对应Keystone资源对象
  2. +
+
openstack user create --domain default --password-prompt ceilometer
+
+openstack role add --project service --user ceilometer admin
+
+openstack service create --name ceilometer --description "Telemetry" metering
+
    +
  1. 安装Ceilometer
  2. +
+
yum install openstack-ceilometer-notification openstack-ceilometer-central
+
    +
  1. 修改配置文件/etc/ceilometer/pipeline.yaml
  2. +
+
publishers:
+    # set address of Gnocchi
+    # + filter out Gnocchi-related activity meters (Swift driver)
+    # + set default archive policy
+    - gnocchi://?filter_project=service&archive_policy=low
+
    +
  1. 修改配置文件/etc/ceilometer/ceilometer.conf
  2. +
+
[DEFAULT]
+transport_url = rabbit://openstack:RABBIT_PASS@controller
+
+[service_credentials]
+auth_type = password
+auth_url = http://controller:5000/v3
+project_domain_id = default
+user_domain_id = default
+project_name = service
+username = ceilometer
+password = CEILOMETER_PASS
+interface = internalURL
+region_name = RegionOne
+
    +
  1. 初始化数据库
  2. +
+
ceilometer-upgrade
+
    +
  1. 启动Ceilometer服务
  2. +
+
systemctl enable openstack-ceilometer-notification.service openstack-ceilometer-central.service
+
+systemctl start openstack-ceilometer-notification.service openstack-ceilometer-central.service
+

Heat 安装

+
    +
  1. 创建heat数据库,并授予heat数据库正确的访问权限,替换HEAT_DBPASS为合适的密码
  2. +
+
CREATE DATABASE heat;
+GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'localhost' IDENTIFIED BY 'HEAT_DBPASS';
+GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'%' IDENTIFIED BY 'HEAT_DBPASS';
+
    +
  1. 创建服务凭证,创建heat用户,并为其增加admin角色
  2. +
+
openstack user create --domain default --password-prompt heat
+openstack role add --project service --user heat admin
+
    +
  1. 创建heat和heat-cfn服务及其对应的API端点
  2. +
+
openstack service create --name heat --description "Orchestration" orchestration
+openstack service create --name heat-cfn --description "Orchestration"  cloudformation
+openstack endpoint create --region RegionOne orchestration public http://controller:8004/v1/%\(tenant_id\)s
+openstack endpoint create --region RegionOne orchestration internal http://controller:8004/v1/%\(tenant_id\)s
+openstack endpoint create --region RegionOne orchestration admin http://controller:8004/v1/%\(tenant_id\)s
+openstack endpoint create --region RegionOne cloudformation public http://controller:8000/v1
+openstack endpoint create --region RegionOne cloudformation internal http://controller:8000/v1
+openstack endpoint create --region RegionOne cloudformation admin http://controller:8000/v1
+
    +
  1. 创建stack管理的额外信息,包括heat domain及其对应domain的admin用户heat_domain_admin、heat_stack_owner角色、heat_stack_user角色
  2. +
+
openstack user create --domain heat --password-prompt heat_domain_admin
+openstack role add --domain heat --user-domain heat --user heat_domain_admin admin
+openstack role create heat_stack_owner
+openstack role create heat_stack_user
+
    +
  1. 安装软件包
  2. +
+
yum install openstack-heat-api openstack-heat-api-cfn openstack-heat-engine
+
    +
  1. 修改配置文件/etc/heat/heat.conf
  2. +
+
[DEFAULT]
+transport_url = rabbit://openstack:RABBIT_PASS@controller
+heat_metadata_server_url = http://controller:8000
+heat_waitcondition_server_url = http://controller:8000/v1/waitcondition
+stack_domain_admin = heat_domain_admin
+stack_domain_admin_password = HEAT_DOMAIN_PASS
+stack_user_domain_name = heat
+
+[database]
+connection = mysql+pymysql://heat:HEAT_DBPASS@controller/heat
+
+[keystone_authtoken]
+www_authenticate_uri = http://controller:5000
+auth_url = http://controller:5000
+memcached_servers = controller:11211
+auth_type = password
+project_domain_name = default
+user_domain_name = default
+project_name = service
+username = heat
+password = HEAT_PASS
+
+[trustee]
+auth_type = password
+auth_url = http://controller:5000
+username = heat
+password = HEAT_PASS
+user_domain_name = default
+
+[clients_keystone]
+auth_uri = http://controller:5000
+
    +
  1. 初始化heat数据库表
  2. +
+
su -s /bin/sh -c "heat-manage db_sync" heat
+
    +
  1. 启动服务
  2. +
+
systemctl enable openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service
+systemctl start openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service
+

基于OpenStack SIG开发工具oos快速部署

+

oos(openEuler OpenStack SIG)是OpenStack SIG提供的命令行工具。其中oos env系列命令提供了一键部署OpenStack (all in one或三节点cluster)的ansible脚本,用户可以使用该脚本快速部署一套基于 openEuler RPM 的 OpenStack 环境。oos工具支持对接云provider(目前仅支持华为云provider)和主机纳管两种方式来部署 OpenStack 环境,下面以对接华为云部署一套all in one的OpenStack环境为例说明oos工具的使用方法。

+
    +
  1. +

    安装oos工具

    +
    pip install openstack-sig-tool
    +
  2. +
  3. +

    配置对接华为云provider的信息

    +

    打开/usr/local/etc/oos/oos.conf文件,修改配置为您拥有的华为云资源信息:

    +
    [huaweicloud]
    +ak = 
    +sk = 
    +region = ap-southeast-3
    +root_volume_size = 100
    +data_volume_size = 100
    +security_group_name = oos
    +image_format = openEuler-%%(release)s-%%(arch)s
    +vpc_name = oos_vpc
    +subnet1_name = oos_subnet1
    +subnet2_name = oos_subnet2
    +
  4. +
  5. +

    配置 OpenStack 环境信息

    +

    打开/usr/local/etc/oos/oos.conf文件,根据当前机器环境和需求修改配置。内容如下:

    +
    [environment]
    +mysql_root_password = root
    +mysql_project_password = root
    +rabbitmq_password = root
    +project_identity_password = root
    +enabled_service = keystone,neutron,cinder,placement,nova,glance,horizon,aodh,ceilometer,cyborg,gnocchi,kolla,heat,swift,trove,tempest
    +neutron_provider_interface_name = br-ex
    +default_ext_subnet_range = 10.100.100.0/24
    +default_ext_subnet_gateway = 10.100.100.1
    +neutron_dataplane_interface_name = eth1
    +cinder_block_device = vdb
    +swift_storage_devices = vdc
    +swift_hash_path_suffix = ash
    +swift_hash_path_prefix = has
    +glance_api_workers = 2
    +cinder_api_workers = 2
    +nova_api_workers = 2
    +nova_metadata_api_workers = 2
    +nova_conductor_workers = 2
    +nova_scheduler_workers = 2
    +neutron_api_workers = 2
    +horizon_allowed_host = *
    +kolla_openeuler_plugin = false
    +

    关键配置

    | 配置项 | 解释 |
    |:------|:-----|
    | enabled_service | 安装服务列表,根据用户需求自行删减 |
    | neutron_provider_interface_name | neutron L3网桥名称 |
    | default_ext_subnet_range | neutron私网IP段 |
    | default_ext_subnet_gateway | neutron私网gateway |
    | neutron_dataplane_interface_name | neutron使用的网卡,推荐使用一张新的网卡,以免和现有网卡冲突,防止all in one主机断连的情况 |
    | cinder_block_device | cinder使用的卷设备名 |
    | swift_storage_devices | swift使用的卷设备名 |
    | kolla_openeuler_plugin | 是否启用kolla plugin。设置为True,kolla将支持部署openEuler容器 |
    +
  6. +
  7. +

    华为云上面创建一台openEuler 22.03-LTS-SP2的x86_64虚拟机,用于部署all in one 的 OpenStack

    +
    # sshpass在`oos env create`过程中被使用,用于配置对目标虚拟机的免密访问
    +dnf install sshpass
    +oos env create -r 22.03-lts-sp2 -f small -a x86 -n test-oos all_in_one
    +

    具体的参数可以使用oos env create --help命令查看

    +
  8. +
  9. +

    部署OpenStack all in one 环境

    +
    oos env setup test-oos -r train
    +

    具体的参数可以使用oos env setup --help命令查看

    +
  10. +
  11. +

    初始化tempest环境

    +

    如果用户想使用该环境运行tempest测试的话,可以执行命令oos env init,会自动把tempest需要的OpenStack资源自动创建好

    +
    oos env init test-oos
    +

    命令执行成功后,在用户的根目录下会生成mytest目录,进入其中就可以执行tempest run命令了。

    +
  12. +
+

如果是以主机纳管的方式部署 OpenStack 环境,总体逻辑与上文对接华为云时一致,1、3、5、6步操作不变,去除第2步对华为云provider信息的配置,第4步由在华为云上创建虚拟机改为纳管主机操作。

+
# sshpass在`oos env create`过程中被使用,用于配置对目标主机的免密访问
+dnf install sshpass
+oos env manage -r 22.03-lts-sp2 -i TARGET_MACHINE_IP -p TARGET_MACHINE_PASSWD -n test-oos
+

替换TARGET_MACHINE_IP为目标机ip、TARGET_MACHINE_PASSWD为目标机密码。具体的参数可以使用oos env manage --help命令查看。

+

基于OpenStack SIG部署工具opensd部署

+

opensd用于批量地脚本化部署openstack各组件服务。

+

部署步骤

+

1. 部署前需要确认的信息

+
    +
  • 装操作系统时,需将selinux设置为disable
  • +
  • 装操作系统时,将/etc/ssh/sshd_config配置文件内的UseDNS设置为no
  • +
  • 操作系统语言必须设置为英文
  • +
  • 部署之前请确保所有计算节点/etc/hosts文件内没有对计算主机的解析
  • +
+
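针对前两项,可用如下命令快速检查和设置(示意):

```shell
# 查看selinux状态(应为Disabled)
getenforce
# 永久关闭selinux(修改配置文件,重启后生效)
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
# 将sshd的UseDNS设置为no并重启sshd
sed -i 's/^#\?UseDNS.*/UseDNS no/' /etc/ssh/sshd_config
systemctl restart sshd
```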

2. ceph pool与认证创建(可选)

+

不使用ceph或已有ceph集群可忽略此步骤

+

在任意一台ceph monitor节点执行:

+

2.1 创建pool:

+
ceph osd pool create volumes 2048
+ceph osd pool create images 2048
+

2.2 初始化pool

+
rbd pool init volumes
+rbd pool init images
+

2.3 创建用户认证

+
ceph auth get-or-create client.glance mon 'profile rbd' osd 'profile rbd pool=images' mgr 'profile rbd pool=images'
+ceph auth get-or-create client.cinder mon 'profile rbd' osd 'profile rbd pool=volumes, profile rbd pool=images' mgr 'profile rbd pool=volumes'
+

3. 配置lvm(可选)

+

根据物理机磁盘配置与闲置情况,为mysql数据目录挂载额外的磁盘空间。示例如下(根据实际情况做配置):

+
fdisk -l
+Disk /dev/sdd: 479.6 GB, 479559942144 bytes, 936640512 sectors
+Units = sectors of 1 * 512 = 512 bytes
+Sector size (logical/physical): 512 bytes / 4096 bytes
+I/O size (minimum/optimal): 4096 bytes / 4096 bytes
+Disk label type: dos
+Disk identifier: 0x000ed242
+创建分区
+parted /dev/sdd
+mkparted 0 -1
+创建pv
+partprobe /dev/sdd1
+pvcreate /dev/sdd1
+创建、激活vg
+vgcreate vg_mariadb /dev/sdd1
+vgchange -ay vg_mariadb
+查看vg容量
+vgdisplay
+--- Volume group ---
+VG Name vg_mariadb
+System ID
+Format lvm2
+Metadata Areas 1
+Metadata Sequence No 2
+VG Access read/write
+VG Status resizable
+MAX LV 0
+Cur LV 1
+Open LV 1
+Max PV 0
+Cur PV 1
+Act PV 1
+VG Size 446.62 GiB
+PE Size 4.00 MiB
+Total PE 114335
+Alloc PE / Size 114176 / 446.00 GiB
+Free PE / Size 159 / 636.00 MiB
+VG UUID bVUmDc-VkMu-Vi43-mg27-TEkG-oQfK-TvqdEc
+创建lv
+lvcreate -L 446G -n lv_mariadb vg_mariadb
+格式化磁盘并获取卷的UUID
+mkfs.ext4 /dev/mapper/vg_mariadb-lv_mariadb
+blkid /dev/mapper/vg_mariadb-lv_mariadb
+/dev/mapper/vg_mariadb-lv_mariadb: UUID="98d513eb-5f64-4aa5-810e-dc7143884fa2" TYPE="ext4"
+注:98d513eb-5f64-4aa5-810e-dc7143884fa2为卷的UUID
+挂载磁盘
+mount /dev/mapper/vg_mariadb-lv_mariadb /var/lib/mysql
+rm -rf  /var/lib/mysql/*
+

4. 配置yum repo

+

在部署节点执行:

+

4.1 备份yum源

+
mkdir /etc/yum.repos.d/bak/
+mv /etc/yum.repos.d/*.repo /etc/yum.repos.d/bak/
+

4.2 配置yum repo

+
cat > /etc/yum.repos.d/opensd.repo << EOF
+[train]
+name=train
+baseurl=http://119.3.219.20:82/openEuler:/22.03:/LTS:/SP2:/Epol:/Multi-Version:/OpenStack:/Train/standard_$basearch/
+enabled=1
+gpgcheck=0
+
+[epol]
+name=epol
+baseurl=http://119.3.219.20:82/openEuler:/22.03:/LTS:/SP2:/Epol/standard_$basearch/
+enabled=1
+gpgcheck=0
+
+[everything]
+name=everything
+baseurl=http://119.3.219.20:82/openEuler:/22.03:/LTS:/SP2/standard_$basearch/
+enabled=1
+gpgcheck=0
+
+EOF
+

4.3 更新yum缓存

+
yum clean all
+yum makecache
+

5. 安装opensd

+

在部署节点执行:

+

5.1 克隆opensd源码并安装

+
git clone https://gitee.com/openeuler/opensd
+cd opensd
+python3 setup.py install
+

6. 做ssh互信

+

在部署节点执行:

+

6.1 生成密钥对

+

执行如下命令并一路回车

+
ssh-keygen
+

6.2 生成主机IP地址文件

+

在auto_ssh_host_ip中配置所有用到的主机ip, 示例:

+
cd /usr/local/share/opensd/tools/
+vim auto_ssh_host_ip
+
+10.0.0.1
+10.0.0.2
+...
+10.0.0.10
+

6.3 更改密码并执行脚本

+

将免密脚本/usr/local/bin/opensd-auto-ssh内123123替换为主机真实密码

+
# 替换脚本内123123字符串
+vim /usr/local/bin/opensd-auto-ssh
+
## 安装expect后执行脚本
+dnf install expect -y
+opensd-auto-ssh
+

6.4 部署节点与ceph monitor做互信(可选)

+
ssh-copy-id root@x.x.x.x
+

7. 配置opensd

+

在部署节点执行:

+

7.1 生成随机密码

+

安装 python3-pbr, python3-utils, python3-pyyaml, python3-oslo-utils并随机生成密码 +

dnf install python3-pbr python3-utils python3-pyyaml python3-oslo-utils -y
+# 执行命令生成密码
+opensd-genpwd
+# 检查密码是否生成
+cat /usr/local/share/opensd/etc_examples/opensd/passwords.yml

+

7.2 配置inventory文件

+

主机信息包含:主机名、ansible_host IP、availability_zone,三者均需配置缺一不可,示例:

+
vim /usr/local/share/opensd/ansible/inventory/multinode
+# 三台控制节点主机信息
+[control]
+controller1 ansible_host=10.0.0.35 availability_zone=az01.cell01.cn-yogadev-1
+controller2 ansible_host=10.0.0.36 availability_zone=az01.cell01.cn-yogadev-1
+controller3 ansible_host=10.0.0.37 availability_zone=az01.cell01.cn-yogadev-1
+
+# 网络节点信息,与控制节点保持一致
+[network]
+controller1 ansible_host=10.0.0.35 availability_zone=az01.cell01.cn-yogadev-1
+controller2 ansible_host=10.0.0.36 availability_zone=az01.cell01.cn-yogadev-1
+controller3 ansible_host=10.0.0.37 availability_zone=az01.cell01.cn-yogadev-1
+
+# cinder-volume服务节点信息
+[storage]
+storage1 ansible_host=10.0.0.61 availability_zone=az01.cell01.cn-yogadev-1
+storage2 ansible_host=10.0.0.78 availability_zone=az01.cell01.cn-yogadev-1
+storage3 ansible_host=10.0.0.82 availability_zone=az01.cell01.cn-yogadev-1
+
+# Cell1 集群信息
+[cell-control-cell1]
+cell1 ansible_host=10.0.0.24 availability_zone=az01.cell01.cn-yogadev-1
+cell2 ansible_host=10.0.0.25 availability_zone=az01.cell01.cn-yogadev-1
+cell3 ansible_host=10.0.0.26 availability_zone=az01.cell01.cn-yogadev-1
+
+[compute-cell1]
+compute1 ansible_host=10.0.0.27 availability_zone=az01.cell01.cn-yogadev-1
+compute2 ansible_host=10.0.0.28 availability_zone=az01.cell01.cn-yogadev-1
+compute3 ansible_host=10.0.0.29 availability_zone=az01.cell01.cn-yogadev-1
+
+[cell1:children]
+cell-control-cell1
+compute-cell1
+
+# Cell2集群信息
+[cell-control-cell2]
+cell4 ansible_host=10.0.0.36 availability_zone=az03.cell02.cn-yogadev-1
+cell5 ansible_host=10.0.0.37 availability_zone=az03.cell02.cn-yogadev-1
+cell6 ansible_host=10.0.0.38 availability_zone=az03.cell02.cn-yogadev-1
+
+[compute-cell2]
+compute4 ansible_host=10.0.0.39 availability_zone=az03.cell02.cn-yogadev-1
+compute5 ansible_host=10.0.0.40 availability_zone=az03.cell02.cn-yogadev-1
+compute6 ansible_host=10.0.0.41 availability_zone=az03.cell02.cn-yogadev-1
+
+[cell2:children]
+cell-control-cell2
+compute-cell2
+
+[baremetal]
+
+[compute-cell1-ironic]
+
+
+# 填写所有cell集群的control主机组
+[nova-conductor:children]
+cell-control-cell1
+cell-control-cell2
+
+# 填写所有cell集群的compute主机组
+[nova-compute:children]
+compute-added
+compute-cell1
+compute-cell2
+
+# 下面的主机组信息不需变动,保留即可
+[compute-added]
+
+[chrony-server:children]
+control
+
+[pacemaker:children]
+control
+......
+......
+

7.3 配置全局变量

+

注: 文档中带注释的配置项需要更改,其他参数不需要更改,若无相关配置则留空

+
vim /usr/local/share/opensd/etc_examples/opensd/globals.yml
+########################
+# Network & Base options
+########################
+network_interface: "eth0" #管理网络的网卡名称
+neutron_external_interface: "eth1" #业务网络的网卡名称
+cidr_netmask: 24 #管理网的掩码
+opensd_vip_address: 10.0.0.33  #控制节点虚拟IP地址
+cell1_vip_address: 10.0.0.34 #cell1集群的虚拟IP地址
+cell2_vip_address: 10.0.0.35 #cell2集群的虚拟IP地址
+external_fqdn: "" #用于vnc访问虚拟机的外网域名地址
+external_ntp_servers: [] #外部ntp服务器地址
+yumrepo_host:  #yum源的IP地址
+yumrepo_port:  #yum源端口号
+environment:   #yum源的类型
+upgrade_all_packages: "yes" #是否升级所有安装版的版本(执行yum upgrade),初始部署资源请设置为"yes"
+enable_miner: "no" #是否开启部署miner服务
+
+enable_chrony: "no" #是否开启部署chrony服务
+enable_pri_mariadb: "no" #是否为私有云部署mariadb
+enable_hosts_file_modify: "no" # 扩容计算节点和部署ironic服务的时候,是否将节点信息添加到`/etc/hosts`
+
+########################
+# Available zone options
+########################
+az_cephmon_compose:
+  - availability_zone:  #availability zone的名称,该名称必须与multinode主机文件内的az01的"availability_zone"值保持一致
+    ceph_mon_host:      #az01对应的一台ceph monitor主机地址,部署节点需要与该主机做ssh互信
+    reserve_vcpu_based_on_numa:  
+  - availability_zone:  #availability zone的名称,该名称必须与multinode主机文件内的az02的"availability_zone"值保持一致
+    ceph_mon_host:      #az02对应的一台ceph monitor主机地址,部署节点需要与该主机做ssh互信
+    reserve_vcpu_based_on_numa:  
+  - availability_zone:  #availability zone的名称,该名称必须与multinode主机文件内的az03的"availability_zone"值保持一致
+    ceph_mon_host:      #az03对应的一台ceph monitor主机地址,部署节点需要与该主机做ssh互信
+    reserve_vcpu_based_on_numa:
+
+# `reserve_vcpu_based_on_numa`配置为`yes` or `no`,举例说明:
+NUMA node0 CPU(s): 0-15,32-47
+NUMA node1 CPU(s): 16-31,48-63
+当reserve_vcpu_based_on_numa: "yes", 根据numa node, 平均每个node预留vcpu:
+vcpu_pin_set = 2-15,34-47,18-31,50-63
+当reserve_vcpu_based_on_numa: "no", 从第一个vcpu开始,顺序预留vcpu:
+vcpu_pin_set = 8-64
+
+#######################
+# Nova options
+#######################
+nova_reserved_host_memory_mb: 2048 #计算节点给计算服务预留的内存大小
+enable_cells: "yes" #cell节点是否单独节点部署
+support_gpu: "False" #cell节点是否有GPU服务器,如果有则为True,否则为False
+
+#######################
+# Neutron options
+#######################
+monitor_ip:
+    - 10.0.0.9   #配置监控节点
+    - 10.0.0.10
+enable_meter_full_eip: True   #配置是否允许EIP全量监控,默认为True
+enable_meter_port_forwarding: True   #配置是否允许port forwarding监控,默认为True
+enable_meter_ecs_ipv6: True   #配置是否允许ecs_ipv6监控,默认为True
+enable_meter: True    #配置是否开启监控,默认为True
+is_sdn_arch: False    #配置是否是sdn架构,默认为False
+
+# 默认使能的网络类型是vlan,vlan和vxlan两种类型只能二选一.
+enable_vxlan_network_type: False  # 默认使能的网络类型是vlan,如果使用vxlan网络,配置为True, 如果使用vlan网络,配置为False.
+enable_neutron_fwaas: False       # 环境有使用防火墙, 设置为True, 使能防护墙功能.
+# Neutron provider
+neutron_provider_networks:
+  network_types: "{{ 'vxlan' if enable_vxlan_network_type else 'vlan' }}"
+  network_vlan_ranges: "default:xxx:xxx" #部署之前规划的业务网络vlan范围
+  network_mappings: "default:br-provider"
+  network_interface: "{{ neutron_external_interface }}"
+  network_vxlan_ranges: "" #部署之前规划的业务网络vxlan范围
+
+# 如下这些配置是SND控制器的配置参数, `enable_sdn_controller`设置为True, 使能SND控制器功能.
+# 其他参数请根据部署之前的规划和SDN部署信息确定.
+enable_sdn_controller: False
+sdn_controller_ip_address:  # SDN控制器ip地址
+sdn_controller_username:    # SDN控制器的用户名
+sdn_controller_password:    # SDN控制器的用户密码
+
+#######################
+# Dimsagent options
+#######################
+enable_dimsagent: "no" # 安装镜像服务agent, 需要改为yes
+# Address and domain name for s2
+s3_address_domain_pair:
+  - host_ip:           
+    host_name:         
+
+#######################
+# Trove options
+#######################
+enable_trove: "no" #安装trove 需要改为yes
+#default network
+trove_default_neutron_networks:  #trove 的管理网络id `openstack network list|grep -w trove-mgmt|awk '{print$2}'`
+#s3 setup(如果没有s3,以下值填null)
+s3_endpoint_host_ip:   #s3的ip
+s3_endpoint_host_name: #s3的域名
+s3_endpoint_url:       #s3的url ·一般为http://s3域名
+s3_access_key:         #s3的ak 
+s3_secret_key:         #s3的sk
+
+#######################
+# Ironic options
+#######################
+enable_ironic: "no" #是否开机裸金属部署,默认不开启
+ironic_neutron_provisioning_network_uuid:
+ironic_neutron_cleaning_network_uuid: "{{ ironic_neutron_provisioning_network_uuid }}"
+ironic_dnsmasq_interface:
+ironic_dnsmasq_dhcp_range:
+ironic_tftp_server_address: "{{ hostvars[inventory_hostname]['ansible_' + ironic_dnsmasq_interface]['ipv4']['address'] }}"
+# 交换机设备相关信息
+neutron_ml2_conf_genericswitch:
+  genericswitch:xxxxxxx:
+    device_type:
+    ngs_mac_address:
+    ip:
+    username:
+    password:
+    ngs_port_default_vlan:
+
+# Package state setting
+haproxy_package_state: "present"
+mariadb_package_state: "present"
+rabbitmq_package_state: "present"
+memcached_package_state: "present"
+ceph_client_package_state: "present"
+keystone_package_state: "present"
+glance_package_state: "present"
+cinder_package_state: "present"
+nova_package_state: "present"
+neutron_package_state: "present"
+miner_package_state: "present"
+

7.4 检查所有节点ssh连接状态

+
dnf install ansible -y
+ansible all -i /usr/local/share/opensd/ansible/inventory/multinode -m ping
+
+# 执行结果显示每台主机都是"SUCCESS"即说明连接状态没问题,示例:
+compute1 | SUCCESS => {
+  "ansible_facts": {
+      "discovered_interpreter_python": "/usr/bin/python"
+  },
+  "changed": false,
+  "ping": "pong"
+}
+

8. 执行部署

+

在部署节点执行:

+

8.1 执行bootstrap

+
# 执行部署
+opensd -i /usr/local/share/opensd/ansible/inventory/multinode bootstrap --forks 50
+

8.2 重启服务器

+

注:执行重启的原因是bootstrap可能会升级内核、更改selinux配置或者存在GPU服务器;如果装机时已经是新版内核、selinux已disable或者没有GPU服务器,则不需要执行该步骤。

# 手动重启对应节点,执行命令
+init 6
+# 重启完成后,再次检查连通性
+ansible all -i /usr/local/share/opensd/ansible/inventory/multinode -m ping
+# 重启完操作系统后,再次启用yum源

+

8.3 执行部署前检查

+
opensd -i /usr/local/share/opensd/ansible/inventory/multinode prechecks --forks 50
+

8.4 执行部署

+
ln -s /usr/bin/python3 /usr/bin/python
+
+全量部署:
+opensd -i /usr/local/share/opensd/ansible/inventory/multinode deploy --forks 50
+
+单服务部署:
+opensd -i /usr/local/share/opensd/ansible/inventory/multinode deploy --forks 50 -t service_name
diff --git a/site/install/openEuler-22.03-LTS-SP2/OpenStack-wallaby/index.html b/site/install/openEuler-22.03-LTS-SP2/OpenStack-wallaby/index.html
new file mode 100644
index 0000000000000000000000000000000000000000..53c23d5b9b0ba69dac889b3d42d337f9a44d2297
--- /dev/null
+++ b/site/install/openEuler-22.03-LTS-SP2/OpenStack-wallaby/index.html
@@ -0,0 +1,2659 @@
+openEuler-22.03-LTS-SP2_Wallaby - OpenStack SIG Doc

OpenStack-Wallaby 部署指南

+ +

OpenStack 简介

+

OpenStack 是一个社区,也是一个项目。它提供了一个部署云的操作平台或工具集,为组织提供可扩展的、灵活的云计算。

+

作为一个开源的云计算管理平台,OpenStack 由nova、cinder、neutron、glance、keystone、horizon等几个主要的组件组合起来完成具体工作。OpenStack 支持几乎所有类型的云环境,项目目标是提供实施简单、可大规模扩展、丰富、标准统一的云计算管理平台。OpenStack 通过各种互补的服务提供了基础设施即服务(IaaS)的解决方案,每个服务提供 API 进行集成。

+

openEuler 22.03-LTS-SP2版本官方源已经支持 OpenStack-Wallaby 版本,用户可以配置好 yum 源后根据此文档进行 OpenStack 部署。

+

约定

+

OpenStack 支持多种形态部署,此文档支持ALL in One以及Distributed两种部署方式,按照如下方式约定:

+

ALL in One模式:

+
忽略所有可能的后缀
+

Distributed模式:

+
以 `(CTL)` 为后缀表示此条配置或者命令仅适用`控制节点`
+以 `(CPT)` 为后缀表示此条配置或者命令仅适用`计算节点`
+以 `(STG)` 为后缀表示此条配置或者命令仅适用`存储节点`
+除此之外表示此条配置或者命令同时适用`控制节点`和`计算节点`
+

注意

+

涉及到以上约定的服务如下:

+
    +
  • Cinder
  • +
  • Nova
  • +
  • Neutron
  • +
+

准备环境

+

环境配置

+
    +
  1. +

    配置 22.03 LTS 官方yum源,需要启用EPOL软件仓以支持OpenStack

    +
    yum update
    +yum install openstack-release-wallaby
    +yum clean all && yum makecache
    +

    注意:如果你的环境的YUM源没有启用EPOL,需要同时配置EPOL,确保EPOL已配置,如下所示。

    +
    vi /etc/yum.repos.d/openEuler.repo
    +
    +[EPOL]
    +name=EPOL
    +baseurl=http://repo.openeuler.org/openEuler-22.03-LTS-SP2/EPOL/main/$basearch/
    +enabled=1
    +gpgcheck=1
    +gpgkey=http://repo.openeuler.org/openEuler-22.03-LTS-SP2/OS/$basearch/RPM-GPG-KEY-openEuler
    +
  2. +
  3. +

    修改主机名以及映射

    +

    设置各个节点的主机名

    +
    hostnamectl set-hostname controller                                                            (CTL)
    +hostnamectl set-hostname compute                                                               (CPT)
    +

    假设controller节点的IP是10.0.0.11,compute节点的IP是10.0.0.12(如果存在的话),则于/etc/hosts新增如下:

    +
    10.0.0.11   controller
    +10.0.0.12   compute
    +
  4. +
+

安装 SQL DataBase

+
    +
  1. +

    执行如下命令,安装软件包。

    +
    yum install mariadb mariadb-server python3-PyMySQL
    +
  2. +
  3. +

    执行如下命令,创建并编辑 /etc/my.cnf.d/openstack.cnf 文件。

    +
    vim /etc/my.cnf.d/openstack.cnf
    +
    +[mysqld]
    +bind-address = 10.0.0.11
    +default-storage-engine = innodb
    +innodb_file_per_table = on
    +max_connections = 4096
    +collation-server = utf8_general_ci
    +character-set-server = utf8
    +

    注意

    +

    其中 bind-address 设置为控制节点的管理IP地址。

    +
  4. +
  5. +

    启动 DataBase 服务,并为其配置开机自启动:

    +
    systemctl enable mariadb.service
    +systemctl start mariadb.service
    +
  6. +
  7. +

    配置DataBase的默认密码(可选)

    +
    mysql_secure_installation
    +

    注意

    +

    根据提示进行即可

    +
  8. +
+

安装 RabbitMQ

+
    +
  1. +

    执行如下命令,安装软件包。

    +
    yum install rabbitmq-server
    +
  2. +
  3. +

    启动 RabbitMQ 服务,并为其配置开机自启动。

    +
    systemctl enable rabbitmq-server.service
    +systemctl start rabbitmq-server.service
    +
  4. +
  5. +

    添加 OpenStack用户。

    +
    rabbitmqctl add_user openstack RABBIT_PASS
    +

    注意

    +

    替换 RABBIT_PASS,为 OpenStack 用户设置密码

    +
  6. +
  7. +

    设置openstack用户权限,允许进行配置、写、读:

    +
    rabbitmqctl set_permissions openstack ".*" ".*" ".*"
    +
  8. +
+

安装 Memcached

+
    +
  1. +

    执行如下命令,安装依赖软件包。

    +
    yum install memcached python3-memcached
    +
  2. +
  3. +

    编辑 /etc/sysconfig/memcached 文件。

    +
    vim /etc/sysconfig/memcached
    +
    +OPTIONS="-l 127.0.0.1,::1,controller"
    +
  4. +
  5. +

    执行如下命令,启动 Memcached 服务,并为其配置开机启动。

    +
    systemctl enable memcached.service
    +systemctl start memcached.service
    +

    注意

    +

    服务启动后,可以通过命令memcached-tool controller stats确保启动正常,服务可用,其中可以将controller替换为控制节点的管理IP地址。

    +
  6. +
+

安装 OpenStack

+

Keystone 安装

+
    +
  1. +

    创建 keystone 数据库并授权。

    +
    mysql -u root -p
    +
    +MariaDB [(none)]> CREATE DATABASE keystone;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
    +IDENTIFIED BY 'KEYSTONE_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
    +IDENTIFIED BY 'KEYSTONE_DBPASS';
    +MariaDB [(none)]> exit
    +

    注意

    +

    替换 KEYSTONE_DBPASS,为 Keystone 数据库设置密码

    +
  2. +
  3. +

    安装软件包。

    +
    yum install openstack-keystone httpd mod_wsgi
    +
  4. +
  5. +

    配置keystone相关配置

    +
    vim /etc/keystone/keystone.conf
    +
    +[database]
    +connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone
    +
    +[token]
    +provider = fernet
    +

    解释

    +

    [database]部分,配置数据库入口

    +

    [token]部分,配置token provider

    +

    注意:

    +

    替换 KEYSTONE_DBPASS 为 Keystone 数据库的密码

    +
  6. +
  7. +

  4. Synchronize the database:

     su -s /bin/sh -c "keystone-manage db_sync" keystone

  5. Initialize the Fernet key repositories:

     keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
     keystone-manage credential_setup --keystone-user keystone --keystone-group keystone

  6. Bootstrap the identity service:

     keystone-manage bootstrap --bootstrap-password ADMIN_PASS \
     --bootstrap-admin-url http://controller:5000/v3/ \
     --bootstrap-internal-url http://controller:5000/v3/ \
     --bootstrap-public-url http://controller:5000/v3/ \
     --bootstrap-region-id RegionOne

     Note: replace ADMIN_PASS with the password you want to set for the admin user.

  7. Configure the Apache HTTP server:

     vim /etc/httpd/conf/httpd.conf

     ServerName controller

     ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/

     Explanation: set the ServerName directive to refer to the controller node.

     Note: create the ServerName entry if it does not already exist.

  8. Start the Apache HTTP service:

     systemctl enable httpd.service
     systemctl start httpd.service

  9. Create the client environment script:

    cat << EOF >> ~/.admin-openrc
    +export OS_PROJECT_DOMAIN_NAME=Default
    +export OS_USER_DOMAIN_NAME=Default
    +export OS_PROJECT_NAME=admin
    +export OS_USERNAME=admin
    +export OS_PASSWORD=ADMIN_PASS
    +export OS_AUTH_URL=http://controller:5000/v3
    +export OS_IDENTITY_API_VERSION=3
    +export OS_IMAGE_API_VERSION=2
    +EOF
    +

     Note: replace ADMIN_PASS with the password of the admin user.

  10. Create the domain, projects, users, and roles. The python3-openstackclient package must be installed first:

      yum install python3-openstackclient

      Load the environment variables:

      source ~/.admin-openrc

      Create the project "service"; the domain "default" was already created by keystone-manage bootstrap:

      openstack domain create --description "An Example Domain" example

      openstack project create --domain default --description "Service Project" service

      Create the (non-admin) project "myproject", the user "myuser", and the role "myrole", then add the role "myrole" to the "myproject"/"myuser" pair:

      openstack project create --domain default --description "Demo Project" myproject
      openstack user create --domain default --password-prompt myuser
      openstack role create myrole
      openstack role add --project myproject --user myuser myrole

  11. Verification

      Unset the temporary environment variables OS_AUTH_URL and OS_PASSWORD:

      source ~/.admin-openrc
      unset OS_AUTH_URL OS_PASSWORD

      Request a token for the admin user:

      openstack --os-auth-url http://controller:5000/v3 \
      --os-project-domain-name Default --os-user-domain-name Default \
      --os-project-name admin --os-username admin token issue

      Request a token for the myuser user:

      openstack --os-auth-url http://controller:5000/v3 \
      --os-project-domain-name Default --os-user-domain-name Default \
      --os-project-name myproject --os-username myuser token issue
+
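
For convenience when testing as the non-admin user created above, a client environment file for myuser can be kept alongside the admin one. This is a minimal sketch; the file name ~/.myuser-openrc and the MYUSER_PASS placeholder are illustrative, not something this guide defines:

```shell
# Hypothetical companion to ~/.admin-openrc; replace MYUSER_PASS with the password set for myuser
cat << EOF >> ~/.myuser-openrc
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=myproject
export OS_USERNAME=myuser
export OS_PASSWORD=MYUSER_PASS
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
EOF
```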

Glance Installation

+
    +
  1. Create the database, service credentials, and API endpoints.

     Create the database:

    mysql -u root -p
    +
    +MariaDB [(none)]> CREATE DATABASE glance;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
    +IDENTIFIED BY 'GLANCE_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
    +IDENTIFIED BY 'GLANCE_DBPASS';
    +MariaDB [(none)]> exit
    +

     Note: replace GLANCE_DBPASS with the password you want to set for the glance database.

     Create the service credentials:

    +
    source ~/.admin-openrc
    +
    +openstack user create --domain default --password-prompt glance
    +openstack role add --project service --user glance admin
    +openstack service create --name glance --description "OpenStack Image" image
    +

     Create the Image service API endpoints:

    +
    openstack endpoint create --region RegionOne image public http://controller:9292
    +openstack endpoint create --region RegionOne image internal http://controller:9292
    +openstack endpoint create --region RegionOne image admin http://controller:9292
    +
  2. Install the packages:

     yum install openstack-glance

  3. Configure Glance:

     vim /etc/glance/glance-api.conf
    +
    +[database]
    +connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
    +
    +[keystone_authtoken]
    +www_authenticate_uri  = http://controller:5000
    +auth_url = http://controller:5000
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +project_name = service
    +username = glance
    +password = GLANCE_PASS
    +
    +[paste_deploy]
    +flavor = keystone
    +
    +[glance_store]
    +stores = file,http
    +default_store = file
    +filesystem_store_datadir = /var/lib/glance/images/
    +

     Explanation:

     The [database] section configures the database connection.

     The [keystone_authtoken] and [paste_deploy] sections configure access to the identity service.

     The [glance_store] section configures the local file system store and the location of image files.

     Note:

     Replace GLANCE_DBPASS with the password of the glance database.

     Replace GLANCE_PASS with the password of the glance user.

  4. Synchronize the database:

     su -s /bin/sh -c "glance-manage db_sync" glance

  5. Start the service:

     systemctl enable openstack-glance-api.service
     systemctl start openstack-glance-api.service

  6. Verification

     Download an image:

     source ~/.admin-openrc

     wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img

     Note: if your environment is on the Kunpeng (aarch64) architecture, download the aarch64 image instead; the cirros-0.5.2-aarch64-disk.img image has been tested.

     Upload the image to the Image service:

     openstack image create --disk-format qcow2 --container-format bare \
                            --file cirros-0.4.0-x86_64-disk.img --public cirros

     Confirm the upload and verify the image attributes:

     openstack image list
+
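
As an additional check (names as used above), you can confirm that the uploaded image reached the active state:

```shell
# The image should report "active" once the upload completes
openstack image show cirros -c status -f value
```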

Placement Installation

+
    +
  1. Create the database, service credentials, and API endpoints.

     Create the database: access the database as the root user, create the placement database, and grant privileges.

    mysql -u root -p
    +MariaDB [(none)]> CREATE DATABASE placement;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' \
    +IDENTIFIED BY 'PLACEMENT_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' \
    +IDENTIFIED BY 'PLACEMENT_DBPASS';
    +MariaDB [(none)]> exit
    +

     Note: replace PLACEMENT_DBPASS with the password you want to set for the placement database.

     source ~/.admin-openrc

     Create the placement service credentials: create the placement user, add the admin role to it, and create the Placement API service:

    +
    openstack user create --domain default --password-prompt placement
    +openstack role add --project service --user placement admin
    +openstack service create --name placement --description "Placement API" placement
    +

     Create the Placement service API endpoints:

    +
    openstack endpoint create --region RegionOne placement public http://controller:8778
    +openstack endpoint create --region RegionOne placement internal http://controller:8778
    +openstack endpoint create --region RegionOne placement admin http://controller:8778
    +
  2. Install and configure the components.

     Install the packages:

     yum install openstack-placement-api

     Configure Placement by editing the /etc/placement/placement.conf file:

     In the [placement_database] section, configure the database connection.

     In the [api] and [keystone_authtoken] sections, configure access to the identity service.

    # vim /etc/placement/placement.conf
    +[placement_database]
    +# ...
    +connection = mysql+pymysql://placement:PLACEMENT_DBPASS@controller/placement
    +[api]
    +# ...
    +auth_strategy = keystone
    +[keystone_authtoken]
    +# ...
    +auth_url = http://controller:5000/v3
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +project_name = service
    +username = placement
    +password = PLACEMENT_PASS
    +

     Replace PLACEMENT_DBPASS with the password of the placement database and PLACEMENT_PASS with the password of the placement user.

     Synchronize the database:

     su -s /bin/sh -c "placement-manage db sync" placement

     Restart the httpd service:

     systemctl restart httpd

  3. Verification

     Run the status check:

     source ~/.admin-openrc
     placement-status upgrade check

     Install osc-placement and list the available resource classes and traits:

     yum install python3-osc-placement
     openstack --os-placement-api-version 1.2 resource class list --sort-column name
     openstack --os-placement-api-version 1.6 trait list --sort-column name
+
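
A direct way to confirm the Placement API endpoint is reachable (URL as configured above) is to query its root document; even an unauthenticated request should return a small JSON version listing:

```shell
# A JSON "versions" document indicates the Placement WSGI application is wired up in httpd
curl http://controller:8778
```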

Nova Installation

+
    +
  1. Create the databases, service credentials, and API endpoints.

     Create the databases:

    mysql -u root -p                                                                               (CTL)
    +
    +MariaDB [(none)]> CREATE DATABASE nova_api;
    +MariaDB [(none)]> CREATE DATABASE nova;
    +MariaDB [(none)]> CREATE DATABASE nova_cell0;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> exit
    +

     Note: replace NOVA_DBPASS with the password you want to set for the nova databases.

     source ~/.admin-openrc                                                                         (CTL)

     Create the nova service credentials:

    +
    openstack user create --domain default --password-prompt nova                                  (CTL)
    +openstack role add --project service --user nova admin                                         (CTL)
    +openstack service create --name nova --description "OpenStack Compute" compute                 (CTL)
    +

     Create the nova API endpoints:

    +
    openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1        (CTL)
    +openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1      (CTL)
    +openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1         (CTL)
    +
  2. Install the packages:

     yum install openstack-nova-api openstack-nova-conductor \                                      (CTL)
     openstack-nova-novncproxy openstack-nova-scheduler

     yum install openstack-nova-compute                                                             (CPT)

     Note: on arm64, also run the following command:

     yum install edk2-aarch64                                                                       (CPT)

  3. Configure Nova:

     vim /etc/nova/nova.conf
    +
    +[DEFAULT]
    +enabled_apis = osapi_compute,metadata
    +transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
    +my_ip = 10.0.0.1
    +use_neutron = true
    +firewall_driver = nova.virt.firewall.NoopFirewallDriver
    +compute_driver=libvirt.LibvirtDriver                                                           (CPT)
    +instances_path = /var/lib/nova/instances/                                                      (CPT)
    +lock_path = /var/lib/nova/tmp                                                                  (CPT)
    +
    +[api_database]
    +connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api                              (CTL)
    +
    +[database]
    +connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova                                  (CTL)
    +
    +[api]
    +auth_strategy = keystone
    +
    +[keystone_authtoken]
    +www_authenticate_uri = http://controller:5000/
    +auth_url = http://controller:5000/
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +project_name = service
    +username = nova
    +password = NOVA_PASS
    +
    +[vnc]
    +enabled = true
    +server_listen = $my_ip
    +server_proxyclient_address = $my_ip
    +novncproxy_base_url = http://controller:6080/vnc_auto.html                                     (CPT)
    +
    +[libvirt]
    +virt_type = qemu                                                                               (CPT)
    +cpu_mode = custom                                                                              (CPT)
    +cpu_model = cortex-a72                                                                         (CPT)
    +
    +[glance]
    +api_servers = http://controller:9292
    +
    +[oslo_concurrency]
    +lock_path = /var/lib/nova/tmp                                                                  (CTL)
    +
    +[placement]
    +region_name = RegionOne
    +project_domain_name = Default
    +project_name = service
    +auth_type = password
    +user_domain_name = Default
    +auth_url = http://controller:5000/v3
    +username = placement
    +password = PLACEMENT_PASS
    +
    +[neutron]
    +auth_url = http://controller:5000
    +auth_type = password
    +project_domain_name = default
    +user_domain_name = default
    +region_name = RegionOne
    +project_name = service
    +username = neutron
    +password = NEUTRON_PASS
    +service_metadata_proxy = true                                                                  (CTL)
    +metadata_proxy_shared_secret = METADATA_SECRET                                                 (CTL)
    +

     Explanation:

     The [DEFAULT] section enables the compute and metadata APIs, configures the RabbitMQ message queue entry, sets my_ip, and enables the Neutron network service.

     The [api_database] and [database] sections configure the database connections.

     The [api] and [keystone_authtoken] sections configure access to the identity service.

     The [vnc] section enables and configures the remote console.

     The [glance] section configures the Image service API address.

     The [oslo_concurrency] section configures the lock path.

     The [placement] section configures access to the Placement service.

     Note:

     Replace RABBIT_PASS with the password of the openstack account in RabbitMQ.

     Set my_ip to the management IP address of the controller node.

     Replace NOVA_DBPASS with the password of the nova databases.

     Replace NOVA_PASS with the password of the nova user.

     Replace PLACEMENT_PASS with the password of the placement user.

     Replace NEUTRON_PASS with the password of the neutron user.

     Replace METADATA_SECRET with a suitable metadata proxy secret.

     Additional steps

     Determine whether the host supports hardware acceleration for virtual machines (x86 architecture):

     egrep -c '(vmx|svm)' /proc/cpuinfo                                                             (CPT)
    +

     If the command returns 0, the host does not support hardware acceleration, and libvirt must be configured to use QEMU instead of KVM:

     vim /etc/nova/nova.conf                                                                        (CPT)

     [libvirt]
     virt_type = qemu

     If the command returns 1 or greater, the host supports hardware acceleration and no extra configuration is required.

     Note: on arm64, the following files also need to be edited:

     vim /etc/libvirt/qemu.conf
    +
    +nvram = ["/usr/share/AAVMF/AAVMF_CODE.fd: \
    +         /usr/share/AAVMF/AAVMF_VARS.fd", \
    +         "/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw: \
    +         /usr/share/edk2/aarch64/vars-template-pflash.raw"]
    +
    +vim /etc/qemu/firmware/edk2-aarch64.json
    +
    +{
    +    "description": "UEFI firmware for ARM64 virtual machines",
    +    "interface-types": [
    +        "uefi"
    +    ],
    +    "mapping": {
    +        "device": "flash",
    +        "executable": {
    +            "filename": "/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw",
    +            "format": "raw"
    +        },
    +        "nvram-template": {
    +            "filename": "/usr/share/edk2/aarch64/vars-template-pflash.raw",
    +            "format": "raw"
    +        }
    +    },
    +    "targets": [
    +        {
    +            "architecture": "aarch64",
    +            "machines": [
    +                "virt-*"
    +            ]
    +        }
    +    ],
    +    "features": [
    +
    +    ],
    +    "tags": [
    +
    +    ]
    +}
    +
    +(CPT)
    +
  4. Synchronize the databases.

     Synchronize the nova-api database:

     su -s /bin/sh -c "nova-manage api_db sync" nova                                                (CTL)

     Register the cell0 database:

     su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova                                          (CTL)

     Create the cell1 cell:

     su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova                 (CTL)

     Synchronize the nova database:

     su -s /bin/sh -c "nova-manage db sync" nova                                                    (CTL)

     Verify that cell0 and cell1 are registered correctly:

     su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova                                         (CTL)

     Add the compute node to the OpenStack cluster:

     su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova                           (CPT)

  5. Start the services:

     systemctl enable \                                                                             (CTL)
     openstack-nova-api.service \
     openstack-nova-scheduler.service \
     openstack-nova-conductor.service \
     openstack-nova-novncproxy.service

     systemctl start \                                                                              (CTL)
     openstack-nova-api.service \
     openstack-nova-scheduler.service \
     openstack-nova-conductor.service \
     openstack-nova-novncproxy.service

     systemctl enable libvirtd.service openstack-nova-compute.service                               (CPT)
     systemctl start libvirtd.service openstack-nova-compute.service                                (CPT)

  6. Verification

     source ~/.admin-openrc                                                                         (CTL)

     List the service components to verify that each process started and registered successfully:

     openstack compute service list                                                                 (CTL)

     List the API endpoints in the identity service to verify connectivity to it:

     openstack catalog list                                                                         (CTL)

     List images in the Image service to verify connectivity to it:

     openstack image list                                                                           (CTL)

     Check that the cells are working and that the other prerequisites are in place:

     nova-status upgrade check                                                                      (CTL)
+
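
When additional compute nodes are added later, they must be discovered again. Instead of rerunning discover_hosts by hand each time, the scheduler can do it periodically; this is a minimal, optional sketch of that setting (interval in seconds) for nova.conf on the controller:

```shell
# Optional: have nova-scheduler discover new compute hosts every 5 minutes
vim /etc/nova/nova.conf                                                                        # (CTL)

# [scheduler]
# discover_hosts_in_cells_interval = 300

systemctl restart openstack-nova-scheduler.service
```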

Neutron Installation

+
    +
  1. Create the database, service credentials, and API endpoints.

     Create the database:

    mysql -u root -p                                                                               (CTL)
    +
    +MariaDB [(none)]> CREATE DATABASE neutron;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
    +IDENTIFIED BY 'NEUTRON_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
    +IDENTIFIED BY 'NEUTRON_DBPASS';
    +MariaDB [(none)]> exit
    +

     Note: replace NEUTRON_DBPASS with the password you want to set for the neutron database.

     source ~/.admin-openrc                                                                         (CTL)

     Create the neutron service credentials:

    +
    openstack user create --domain default --password-prompt neutron                               (CTL)
    +openstack role add --project service --user neutron admin                                      (CTL)
    +openstack service create --name neutron --description "OpenStack Networking" network           (CTL)
    +

     Create the Neutron service API endpoints:

    +
    openstack endpoint create --region RegionOne network public http://controller:9696             (CTL)
    +openstack endpoint create --region RegionOne network internal http://controller:9696           (CTL)
    +openstack endpoint create --region RegionOne network admin http://controller:9696              (CTL)
    +
  2. Install the packages:

     yum install openstack-neutron openstack-neutron-linuxbridge ebtables ipset \                   (CTL)
     openstack-neutron-ml2

     yum install openstack-neutron-linuxbridge ebtables ipset                                       (CPT)

  3. Configure Neutron.

     Main configuration:

     vim /etc/neutron/neutron.conf
    +
    +[database]
    +connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron                         (CTL)
    +
    +[DEFAULT]
    +core_plugin = ml2                                                                              (CTL)
    +service_plugins = router                                                                       (CTL)
    +allow_overlapping_ips = true                                                                   (CTL)
    +transport_url = rabbit://openstack:RABBIT_PASS@controller
    +auth_strategy = keystone
    +notify_nova_on_port_status_changes = true                                                      (CTL)
    +notify_nova_on_port_data_changes = true                                                        (CTL)
    +api_workers = 3                                                                                (CTL)
    +
    +[keystone_authtoken]
    +www_authenticate_uri = http://controller:5000
    +auth_url = http://controller:5000
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +project_name = service
    +username = neutron
    +password = NEUTRON_PASS
    +
    +[nova]
    +auth_url = http://controller:5000                                                              (CTL)
    +auth_type = password                                                                           (CTL)
    +project_domain_name = Default                                                                  (CTL)
    +user_domain_name = Default                                                                     (CTL)
    +region_name = RegionOne                                                                        (CTL)
    +project_name = service                                                                         (CTL)
    +username = nova                                                                                (CTL)
    +password = NOVA_PASS                                                                           (CTL)
    +
    +[oslo_concurrency]
    +lock_path = /var/lib/neutron/tmp
    +

     Explanation:

     The [database] section configures the database connection.

     The [DEFAULT] section enables the ML2 plugin and the router plugin, allows overlapping IP addresses, and configures the RabbitMQ message queue entry.

     The [DEFAULT] and [keystone_authtoken] sections configure access to the identity service.

     The [DEFAULT] and [nova] sections configure networking to notify compute of network topology changes.

     The [oslo_concurrency] section configures the lock path.

     Note:

     Replace NEUTRON_DBPASS with the password of the neutron database.

     Replace RABBIT_PASS with the password of the openstack account in RabbitMQ.

     Replace NEUTRON_PASS with the password of the neutron user.

     Replace NOVA_PASS with the password of the nova user.

     Configure the ML2 plugin:

     vim /etc/neutron/plugins/ml2/ml2_conf.ini
    +
    +[ml2]
    +type_drivers = flat,vlan,vxlan
    +tenant_network_types = vxlan
    +mechanism_drivers = linuxbridge,l2population
    +extension_drivers = port_security
    +
    +[ml2_type_flat]
    +flat_networks = provider
    +
    +[ml2_type_vxlan]
    +vni_ranges = 1:1000
    +
    +[securitygroup]
    +enable_ipset = true
    +

     Create a symbolic link to /etc/neutron/plugin.ini:

     ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

     Note:

     The [ml2] section enables flat, vlan, and vxlan networks, enables the linuxbridge and l2population mechanisms, and enables the port security extension driver.

     The [ml2_type_flat] section configures the flat network as the provider virtual network.

     The [ml2_type_vxlan] section configures the VXLAN network identifier range.

     The [securitygroup] section enables ipset.

     Supplement: the L2 configuration can be adapted to your needs; this document uses a provider network with linuxbridge.

     Configure the Linux bridge agent:

     vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
    +
    +[linux_bridge]
    +physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME
    +
    +[vxlan]
    +enable_vxlan = true
    +local_ip = OVERLAY_INTERFACE_IP_ADDRESS
    +l2_population = true
    +
    +[securitygroup]
    +enable_security_group = true
    +firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
    +

     Explanation:

     The [linux_bridge] section maps the provider virtual network to the physical network interface.

     The [vxlan] section enables the VXLAN overlay network, configures the IP address of the physical interface that handles the overlay, and enables layer-2 population.

     The [securitygroup] section enables security groups and configures the Linux bridge iptables firewall driver.

     Note:

     Replace PROVIDER_INTERFACE_NAME with the name of the physical network interface.

     Replace OVERLAY_INTERFACE_IP_ADDRESS with the management IP address of the controller node.

     Configure the Layer-3 agent:

     vim /etc/neutron/l3_agent.ini                                                                  (CTL)
    +
    +[DEFAULT]
    +interface_driver = linuxbridge
    +

     Explanation: in the [DEFAULT] section, configure the interface driver as linuxbridge.

     Configure the DHCP agent:

     vim /etc/neutron/dhcp_agent.ini                                                                (CTL)
    +
    +[DEFAULT]
    +interface_driver = linuxbridge
    +dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
    +enable_isolated_metadata = true
    +

     Explanation: the [DEFAULT] section configures the linuxbridge interface driver and the Dnsmasq DHCP driver, and enables isolated metadata.

     Configure the metadata agent:

     vim /etc/neutron/metadata_agent.ini                                                            (CTL)
    +
    +[DEFAULT]
    +nova_metadata_host = controller
    +metadata_proxy_shared_secret = METADATA_SECRET
    +

     Explanation: the [DEFAULT] section configures the metadata host and the shared secret.

     Note: replace METADATA_SECRET with a suitable metadata proxy secret.

  4. Configure Nova to use Neutron:

     vim /etc/nova/nova.conf
    +
    +[neutron]
    +auth_url = http://controller:5000
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +region_name = RegionOne
    +project_name = service
    +username = neutron
    +password = NEUTRON_PASS
    +service_metadata_proxy = true                                                                  (CTL)
    +metadata_proxy_shared_secret = METADATA_SECRET                                                 (CTL)
    +

     Explanation: the [neutron] section configures the access parameters, enables the metadata proxy, and configures the secret.

     Note: replace NEUTRON_PASS with the password of the neutron user, and METADATA_SECRET with a suitable metadata proxy secret.
  5. Synchronize the database:

     su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
     --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

  6. Restart the Compute API service:

     systemctl restart openstack-nova-api.service

  7. Start the network services:

     systemctl enable neutron-server.service neutron-linuxbridge-agent.service \                    (CTL)
     neutron-dhcp-agent.service neutron-metadata-agent.service
     systemctl enable neutron-l3-agent.service
     systemctl restart openstack-nova-api.service neutron-server.service \                          (CTL)
     neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
     neutron-metadata-agent.service neutron-l3-agent.service

     systemctl enable neutron-linuxbridge-agent.service                                             (CPT)
     systemctl restart neutron-linuxbridge-agent.service openstack-nova-compute.service             (CPT)

  8. Verification

     Verify that the neutron agents started successfully:

     openstack network agent list
+
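
As a quick functional test once the agents are up, a provider network and subnet can be created. This is a minimal sketch, assuming the flat physical network is named provider (as in ml2_conf.ini above); the 203.0.113.0/24 addresses are placeholders for your actual provider segment:

```shell
# Create a flat provider network and a subnet on it (addresses are placeholders)
openstack network create --share --external \
  --provider-physical-network provider \
  --provider-network-type flat provider
openstack subnet create --network provider \
  --allocation-pool start=203.0.113.101,end=203.0.113.200 \
  --dns-nameserver 8.8.8.8 --gateway 203.0.113.1 \
  --subnet-range 203.0.113.0/24 provider
```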

Cinder Installation

+
    +
  1. Create the database, service credentials, and API endpoints.

     Create the database:

    mysql -u root -p
    +
    +MariaDB [(none)]> CREATE DATABASE cinder;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \
    +IDENTIFIED BY 'CINDER_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \
    +IDENTIFIED BY 'CINDER_DBPASS';
    +MariaDB [(none)]> exit
    +

     Note: replace CINDER_DBPASS with the password you want to set for the cinder database.

     source ~/.admin-openrc

     Create the cinder service credentials:

    +
    openstack user create --domain default --password-prompt cinder
    +openstack role add --project service --user cinder admin
    +openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
    +openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
    +

     Create the Block Storage service API endpoints:

    +
    openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(project_id\)s
    +openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(project_id\)s
    +openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(project_id\)s
    +openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s
    +openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s
    +openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s
    +
  2. Install the packages:

     yum install openstack-cinder-api openstack-cinder-scheduler                                    (CTL)

     yum install lvm2 device-mapper-persistent-data scsi-target-utils rpcbind nfs-utils \           (STG)
                 openstack-cinder-volume openstack-cinder-backup

  3. Prepare the storage devices. The following is only an example:

     pvcreate /dev/vdb
     vgcreate cinder-volumes /dev/vdb

     vim /etc/lvm/lvm.conf

     devices {
     ...
     filter = [ "a/vdb/", "r/.*/"]

     Explanation: in the devices section, add a filter that accepts the /dev/vdb device and rejects all other devices.

  4. Prepare NFS:

     mkdir -p /root/cinder/backup

     cat << EOF >> /etc/exports
     /root/cinder/backup 192.168.1.0/24(rw,sync,no_root_squash,no_all_squash)
     EOF

  5. Configure Cinder:

     vim /etc/cinder/cinder.conf
    +
    +[DEFAULT]
    +transport_url = rabbit://openstack:RABBIT_PASS@controller
    +auth_strategy = keystone
    +my_ip = 10.0.0.11
    +enabled_backends = lvm                                                                         (STG)
    +backup_driver=cinder.backup.drivers.nfs.NFSBackupDriver                                        (STG)
    +backup_share=HOST:PATH                                                                         (STG)
    +
    +[database]
    +connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder
    +
    +[keystone_authtoken]
    +www_authenticate_uri = http://controller:5000
    +auth_url = http://controller:5000
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +project_name = service
    +username = cinder
    +password = CINDER_PASS
    +
    +[oslo_concurrency]
    +lock_path = /var/lib/cinder/tmp
    +
    +[lvm]
    +volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver                                      (STG)
    +volume_group = cinder-volumes                                                                  (STG)
    +iscsi_protocol = iscsi                                                                         (STG)
    +iscsi_helper = tgtadm                                                                          (STG)
    +

     Explanation:

     The [database] section configures the database connection.

     The [DEFAULT] section configures the RabbitMQ message queue entry and my_ip.

     The [DEFAULT] and [keystone_authtoken] sections configure access to the identity service.

     The [oslo_concurrency] section configures the lock path.

     Note:

     Replace CINDER_DBPASS with the password of the cinder database.

     Replace RABBIT_PASS with the password of the openstack account in RabbitMQ.

     Set my_ip to the management IP address of the controller node.

     Replace CINDER_PASS with the password of the cinder user.

     Replace HOST:PATH with the NFS host IP and shared path.

  6. Synchronize the database:

     su -s /bin/sh -c "cinder-manage db sync" cinder                                                (CTL)

  7. Configure Nova to use Cinder:

     vim /etc/nova/nova.conf                                                                        (CTL)

     [cinder]
     os_region_name = RegionOne

  8. Restart the Compute API service:

     systemctl restart openstack-nova-api.service

  9. Start the Cinder services:

     systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service               (CTL)
     systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service                (CTL)

     systemctl enable rpcbind.service nfs-server.service tgtd.service iscsid.service \              (STG)
                      openstack-cinder-volume.service \
                      openstack-cinder-backup.service
     systemctl start rpcbind.service nfs-server.service tgtd.service iscsid.service \               (STG)
                     openstack-cinder-volume.service \
                     openstack-cinder-backup.service

     Note: when Cinder attaches volumes with tgtadm, edit /etc/tgt/tgtd.conf with the content below so that tgtd can discover the iscsi targets of cinder-volume:

     include /var/lib/cinder/volumes/*

  10. Verification

      source ~/.admin-openrc
      openstack volume service list
+
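
A simple end-to-end check once the volume service reports as up (the volume name is illustrative):

```shell
# Create a 1 GiB test volume on the LVM backend and confirm it reaches the "available" state
openstack volume create --size 1 test-volume
openstack volume list
```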

Horizon Installation

+
    +
  1. Install the package:

     yum install openstack-dashboard

  2. Edit the /etc/openstack-dashboard/local_settings file and modify the following variables:

     vim /etc/openstack-dashboard/local_settings
    +
    +OPENSTACK_HOST = "controller"
    +ALLOWED_HOSTS = ['*', ]
    +
    +SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
    +
    +CACHES = {
    +'default': {
    +     'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
    +     'LOCATION': 'controller:11211',
    +    }
    +}
    +
    +OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
    +OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
    +OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
    +OPENSTACK_KEYSTONE_DEFAULT_ROLE = "member"
    +WEBROOT = '/dashboard'
    +POLICY_FILES_PATH = "/etc/openstack-dashboard"
    +
    +OPENSTACK_API_VERSIONS = {
    +    "identity": 3,
    +    "image": 2,
    +    "volume": 3,
    +}
    +
  3. Restart the httpd and memcached services:

     systemctl restart httpd.service memcached.service

  4. Verification: open a browser, go to http://HOSTIP/dashboard/, and log in to Horizon.

     Note: replace HOSTIP with the management IP address of the controller node.
+
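
From the command line, you can also confirm that the dashboard responds before logging in from a browser (replace HOSTIP as above):

```shell
# Expect an HTTP 200 or a redirect to the login page
curl -I http://HOSTIP/dashboard/
```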

Tempest Installation

Tempest is the OpenStack integration test service. It is recommended if you need comprehensive automated functional testing of an installed OpenStack environment; otherwise it can be skipped.

  1. Install Tempest:

     yum install openstack-tempest

  2. Initialize a workspace directory:

     tempest init mytest

  3. Edit the configuration file:

     cd mytest
     vi etc/tempest.conf

     tempest.conf must be filled in with the details of the current OpenStack environment; refer to the official sample for the available options.

  4. Run the tests:

     tempest run

  5. Install Tempest plugins (optional). The OpenStack services also provide their own Tempest test packages, which can be installed to extend the test coverage. In Wallaby we provide extension tests for Cinder, Glance, Keystone, Ironic, and Trove, which can be installed as follows:

     yum install python3-cinder-tempest-plugin python3-glance-tempest-plugin python3-ironic-tempest-plugin python3-keystone-tempest-plugin python3-trove-tempest-plugin
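
Running the full suite can take a long time; tempest run also accepts a regular expression to select a subset, which is handy for a first smoke test:

```shell
# Run only the identity API tests as a quick sanity check
tempest run --regex '^tempest\.api\.identity'
```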

Ironic Installation

Ironic is the OpenStack bare metal service. It is recommended if you need to provision bare metal machines; otherwise it can be skipped.

  1. Set up the database.

     The Bare Metal service stores its information in a database. Create an ironic database that the ironic user can access, replacing IRONIC_DBPASSWORD with a suitable password:

+
mysql -u root -p
+
+MariaDB [(none)]> CREATE DATABASE ironic CHARACTER SET utf8;
+MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'localhost' \
+IDENTIFIED BY 'IRONIC_DBPASSWORD';
+MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'%' \
+IDENTIFIED BY 'IRONIC_DBPASSWORD';
+
  2. Create the service user and credentials.

     1. Create the Bare Metal service users:

+
openstack user create --password IRONIC_PASSWORD \
+                      --email ironic@example.com ironic
+openstack role add --project service --user ironic admin
+openstack service create --name ironic \
+                         --description "Ironic baremetal provisioning service" baremetal
+
+openstack service create --name ironic-inspector --description     "Ironic inspector baremetal provisioning service" baremetal-introspection
+openstack user create --password IRONIC_INSPECTOR_PASSWORD --email ironic_inspector@example.com ironic_inspector
+openstack role add --project service --user ironic-inspector admin
+

     2. Create the Bare Metal service endpoints:

+
openstack endpoint create --region RegionOne baremetal admin http://$IRONIC_NODE:6385
+openstack endpoint create --region RegionOne baremetal public http://$IRONIC_NODE:6385
+openstack endpoint create --region RegionOne baremetal internal http://$IRONIC_NODE:6385
+openstack endpoint create --region RegionOne baremetal-introspection internal http://172.20.19.13:5050/v1
+openstack endpoint create --region RegionOne baremetal-introspection public http://172.20.19.13:5050/v1
+openstack endpoint create --region RegionOne baremetal-introspection admin http://172.20.19.13:5050/v1
+
  3. Configure the ironic-api service.

     The configuration file is /etc/ironic/ironic.conf.

     1. Configure the database location through the connection option as shown below, replacing IRONIC_DBPASSWORD with the password of the ironic user and DB_IP with the IP address of the database server:

+
[database]
+
+# The SQLAlchemy connection string used to connect to the
+# database (string value)
+
+connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic
+

     2. Configure the ironic-api service to use the RabbitMQ message broker through the following options, replacing RPC_* with the RabbitMQ address details and credentials:

+
[DEFAULT]
+
+# A URL representing the messaging driver to use and its full
+# configuration. (string value)
+
+transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
+

     You may also use json-rpc instead of RabbitMQ.

+

     3. Configure the ironic-api service to use the identity service credentials, replacing PUBLIC_IDENTITY_IP with the public IP of the identity server, PRIVATE_IDENTITY_IP with its private IP, and IRONIC_PASSWORD with the password of the ironic user in the identity service:

+
[DEFAULT]
+
+# Authentication strategy used by ironic-api: one of
+# "keystone" or "noauth". "noauth" should not be used in a
+# production environment because all authentication will be
+# disabled. (string value)
+
+auth_strategy=keystone
+host = controller
+memcache_servers = controller:11211
+enabled_network_interfaces = flat,noop,neutron
+default_network_interface = noop
+transport_url = rabbit://openstack:RABBITPASSWD@controller:5672/
+enabled_hardware_types = ipmi
+enabled_boot_interfaces = pxe
+enabled_deploy_interfaces = direct
+default_deploy_interface = direct
+enabled_inspect_interfaces = inspector
+enabled_management_interfaces = ipmitool
+enabled_power_interfaces = ipmitool
+enabled_rescue_interfaces = no-rescue,agent
+isolinux_bin = /usr/share/syslinux/isolinux.bin
+logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s
+
+[keystone_authtoken]
+# Authentication type to load (string value)
+auth_type=password
+# Complete public Identity API endpoint (string value)
+www_authenticate_uri=http://PUBLIC_IDENTITY_IP:5000
+# Complete admin Identity API endpoint. (string value)
+auth_url=http://PRIVATE_IDENTITY_IP:5000
+# Service username. (string value)
+username=ironic
+# Service account password. (string value)
+password=IRONIC_PASSWORD
+# Service tenant name. (string value)
+project_name=service
+# Domain name containing project (string value)
+project_domain_name=Default
+# User's domain name (string value)
+user_domain_name=Default
+
+[agent]
+deploy_logs_collect = always
+deploy_logs_local_path = /var/log/ironic/deploy
+deploy_logs_storage_backend = local
+image_download_source = http
+stream_raw_images = false
+force_raw_images = false
+verify_ca = False
+
+[oslo_concurrency]
+
+[oslo_messaging_notifications]
+transport_url = rabbit://openstack:123456@172.20.19.25:5672/
+topics = notifications
+driver = messagingv2
+
+[oslo_messaging_rabbit]
+amqp_durable_queues = True
+rabbit_ha_queues = True
+
+[pxe]
+ipxe_enabled = false
+pxe_append_params = nofb nomodeset vga=normal coreos.autologin ipa-insecure=1
+image_cache_size = 204800
+tftp_root=/var/lib/tftpboot/cephfs/
+tftp_master_path=/var/lib/tftpboot/cephfs/master_images
+
+[dhcp]
+dhcp_provider = none
+

     4. Create the Bare Metal service database tables:

+
ironic-dbsync --config-file /etc/ironic/ironic.conf create_schema
+

     5. Restart the ironic-api service:

+
sudo systemctl restart openstack-ironic-api
+
  4. Configure the ironic-conductor service.

     1. Replace HOST_IP with the IP of the conductor host:

+
[DEFAULT]
+
+# IP address of this host. If unset, will determine the IP
+# programmatically. If unable to do so, will use "127.0.0.1".
+# (string value)
+
+my_ip=HOST_IP
+

     2. Configure the database location; ironic-conductor should use the same setting as ironic-api. Replace IRONIC_DBPASSWORD with the password of the ironic user and DB_IP with the IP address of the database server:

+
[database]
+
+# The SQLAlchemy connection string to use to connect to the
+# database. (string value)
+
+connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic
+

     3. Configure the service to use the RabbitMQ message broker through the following options; ironic-conductor should use the same setting as ironic-api. Replace RPC_* with the RabbitMQ address details and credentials:

+
[DEFAULT]
+
+# A URL representing the messaging driver to use and its full
+# configuration. (string value)
+
+transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
+

     You may also use json-rpc instead of RabbitMQ.

+

     4. Configure credentials for accessing other OpenStack services.

     To communicate with other OpenStack services, the Bare Metal service needs to authenticate with the Identity service using service user credentials. These credentials must be configured in each configuration section that relates to the corresponding service:

     [neutron] - access to the OpenStack Networking service
     [glance] - access to the OpenStack Image service
     [swift] - access to the OpenStack Object Storage service
     [cinder] - access to the OpenStack Block Storage service
     [inspector] - access to the OpenStack bare metal introspection service
     [service_catalog] - a special entry holding the credentials the Bare Metal service uses to discover its own API URL endpoint as registered in the Identity service catalog

     For simplicity, the same service user can be used for all services. For backward compatibility this should be the same user configured in the [keystone_authtoken] section of the ironic-api service, but this is not mandatory; a separate service user can be created and configured for each service.

+

     In the following example, the authentication information for accessing the OpenStack Networking service is configured so that:

     the Networking service is deployed in the identity service region named RegionOne, with only the public endpoint interface registered in the service catalog;

     requests use a specific CA SSL certificate for HTTPS connections;

     the same service user as for the ironic-api service is used;

     the dynamic password authentication plugin discovers a suitable identity service API version based on the other options.
+
[neutron]
+
+# Authentication type to load (string value)
+auth_type = password
+# Authentication URL (string value)
+auth_url=https://IDENTITY_IP:5000/
+# Username (string value)
+username=ironic
+# User's password (string value)
+password=IRONIC_PASSWORD
+# Project name to scope to (string value)
+project_name=service
+# Domain ID containing project (string value)
+project_domain_id=default
+# User's domain id (string value)
+user_domain_id=default
+# PEM encoded Certificate Authority to use when verifying
+# HTTPs connections. (string value)
+cafile=/opt/stack/data/ca-bundle.pem
+# The default region_name for endpoint URL discovery. (string
+# value)
+region_name = RegionOne
+# List of interfaces, in order of preference, for endpoint
+# URL. (list value)
+valid_interfaces=public
+

     By default, in order to communicate with other services, the Bare Metal service tries to discover a suitable endpoint for each service through the Identity service catalog. To use a different endpoint for a particular service, specify it with the endpoint_override option in the Bare Metal service configuration file:

     [neutron] ... endpoint_override = <NEUTRON_API_ADDRESS>
+

     5. Configure the allowed drivers and hardware types.

     Set the hardware types allowed by the ironic-conductor service with enabled_hardware_types:

     [DEFAULT]
     enabled_hardware_types = ipmi

     Configure the hardware interfaces:

     enabled_boot_interfaces = pxe
     enabled_deploy_interfaces = direct,iscsi
     enabled_inspect_interfaces = inspector
     enabled_management_interfaces = ipmitool
     enabled_power_interfaces = ipmitool

     Configure the interface defaults:

     [DEFAULT]
     default_deploy_interface = direct
     default_network_interface = neutron
+

     If any driver that uses direct deploy is enabled, the Swift backend for the Image service must be installed and configured. The Ceph Object Gateway (RADOS Gateway) is also supported as an Image service backend.

+

     6. Restart the ironic-conductor service:

+
sudo systemctl restart openstack-ironic-conductor
+
  5. Configure the ironic-inspector service.

     The configuration file is /etc/ironic-inspector/inspector.conf.

     1. Create the database:

+
# mysql -u root -p
+
+MariaDB [(none)]> CREATE DATABASE ironic_inspector CHARACTER SET utf8;
+
+MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic_inspector.* TO 'ironic_inspector'@'localhost' \     IDENTIFIED BY 'IRONIC_INSPECTOR_DBPASSWORD';
+MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic_inspector.* TO 'ironic_inspector'@'%' \
+IDENTIFIED BY 'IRONIC_INSPECTOR_DBPASSWORD';
+

     2. Configure the database location through the connection option as shown below, replacing IRONIC_INSPECTOR_DBPASSWORD with the password of the ironic_inspector user and DB_IP with the IP address of the database server:

+
[database]
+backend = sqlalchemy
+connection = mysql+pymysql://ironic_inspector:IRONIC_INSPECTOR_DBPASSWORD@DB_IP/ironic_inspector
+min_pool_size = 100
+max_pool_size = 500
+pool_timeout = 30
+max_retries = 5
+max_overflow = 200
+db_retry_interval = 2
+db_inc_retry_interval = True
+db_max_retry_interval = 2
+db_max_retries = 5
+

     3. Configure the message queue transport address:

+
[DEFAULT] 
+transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
+
+

     4. Configure Keystone authentication:

+
[DEFAULT]
+
+auth_strategy = keystone
+timeout = 900
+rootwrap_config = /etc/ironic-inspector/rootwrap.conf
+logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s
+log_dir = /var/log/ironic-inspector
+state_path = /var/lib/ironic-inspector
+use_stderr = False
+
+[ironic]
+api_endpoint = http://IRONIC_API_HOST_ADDRRESS:6385
+auth_type = password
+auth_url = http://PUBLIC_IDENTITY_IP:5000
+auth_strategy = keystone
+ironic_url = http://IRONIC_API_HOST_ADDRRESS:6385
+os_region = RegionOne
+project_name = service
+project_domain_name = Default
+user_domain_name = Default
+username = IRONIC_SERVICE_USER_NAME
+password = IRONIC_SERVICE_USER_PASSWORD
+
+[keystone_authtoken]
+auth_type = password
+auth_url = http://control:5000
+www_authenticate_uri = http://control:5000
+project_domain_name = default
+user_domain_name = default
+project_name = service
+username = ironic_inspector
+password = IRONICPASSWD
+region_name = RegionOne
+memcache_servers = control:11211
+token_cache_time = 300
+
+[processing]
+add_ports = active
+processing_hooks = $default_processing_hooks,local_link_connection,lldp_basic
+ramdisk_logs_dir = /var/log/ironic-inspector/ramdisk
+always_store_ramdisk_logs = true
+store_data =none
+power_off = false
+
+[pxe_filter]
+driver = iptables
+
+[capabilities]
+boot_mode=True
+

     5. Configure the ironic-inspector dnsmasq service:

+
# Configuration file: /etc/ironic-inspector/dnsmasq.conf
+port=0
+interface=enp3s0                         # replace with the actual listening network interface
+dhcp-range=172.20.19.100,172.20.19.110   # replace with the actual DHCP address range
+bind-interfaces
+enable-tftp
+
+dhcp-match=set:efi,option:client-arch,7
+dhcp-match=set:efi,option:client-arch,9
+dhcp-match=aarch64, option:client-arch,11
+dhcp-boot=tag:aarch64,grubaa64.efi
+dhcp-boot=tag:!aarch64,tag:efi,grubx64.efi
+dhcp-boot=tag:!aarch64,tag:!efi,pxelinux.0
+
+tftp-root=/tftpboot                       # replace with the actual tftpboot directory
+log-facility=/var/log/dnsmasq.log
+

     6. Disable DHCP on the ironic provisioning network subnet:

+
openstack subnet set --no-dhcp 72426e89-f552-4dc4-9ac7-c4e131ce7f3c
+

     7. Initialize the ironic-inspector service database.

     Run on the controller node:

+
ironic-inspector-dbsync --config-file /etc/ironic-inspector/inspector.conf upgrade
+

     8. Start the services:

+
systemctl enable --now openstack-ironic-inspector.service
+systemctl enable --now openstack-ironic-inspector-dnsmasq.service
+
  6. Configure the httpd service.

     1. Create the httpd root directory that Ironic will use and set its owner and group. The path must match the http_root option in the [deploy] section of /etc/ironic/ironic.conf:

        mkdir -p /var/lib/ironic/httproot
        chown ironic.ironic /var/lib/ironic/httproot
     2. Install and configure the httpd service.

        1. Install the httpd service (skip if already installed):

           yum install httpd -y
      +
        2. Create the /etc/httpd/conf.d/openstack-ironic-httpd.conf file with the following content:

      Listen 8080
      +
      +<VirtualHost *:8080>
      +    ServerName ironic.openeuler.com
      +
      +    ErrorLog "/var/log/httpd/openstack-ironic-httpd-error_log"
      +    CustomLog "/var/log/httpd/openstack-ironic-httpd-access_log" "%h %l %u %t \"%r\" %>s %b"
      +
      +    DocumentRoot "/var/lib/ironic/httproot"
      +    <Directory "/var/lib/ironic/httproot">
      +        Options Indexes FollowSymLinks
      +        Require all granted
      +    </Directory>
      +    LogLevel warn
      +    AddDefaultCharset UTF-8
      +    EnableSendfile on
      +</VirtualHost>
      +
      +

      Note: the listening port must match the port specified in the http_url option of the [deploy] section in /etc/ironic/ironic.conf.

      +
        3. Restart the httpd service:

           systemctl restart httpd
  7. Build the deploy ramdisk image.

     The Wallaby deploy ramdisk image can be built with the ironic-python-agent service or the disk-image-builder tool, or with the latest community ironic-python-agent-builder. You may also use any other tool of your choice. If you use the Wallaby native tools, install the corresponding packages:

     yum install openstack-ironic-python-agent
     or
     yum install diskimage-builder

     Refer to the official documentation for detailed usage.

     The following describes the complete process of building the deploy image for Ironic with ironic-python-agent-builder.

+
    +
  1. Install ironic-python-agent-builder.

     1. Install the tool:
    +
    +    ```shell
    +    pip install ironic-python-agent-builder
    +    ```
    +
    2. Modify the python interpreter in the following files:
    +
    +    ```shell
    +    /usr/bin/yum /usr/libexec/urlgrabber-ext-down
    +    ```
    +
    3. Install the other required tools:
    +
    +    ```shell
    +    yum install git
    +    ```
    +
        Because DIB depends on the semanage command, confirm that it is available before building the image (`semanage --help`); if the command is missing, install it:

        ```shell
        # First find out which package provides it
        [root@localhost ~]# yum provides /usr/sbin/semanage
        Loaded plugins: fastestmirror
        Loading mirror speeds from cached hostfile
        * base: mirror.vcu.edu
        * extras: mirror.vcu.edu
        * updates: mirror.math.princeton.edu
        policycoreutils-python-2.5-34.el7.aarch64 : SELinux policy core python utilities
        Repo        : base
        Matched from:
        Filename    : /usr/sbin/semanage
        # Install it
        [root@localhost ~]# yum install policycoreutils-python
        ```
    +
  2. Build the image.

     On arm architectures, first add:
    +```shell
    +export ARCH=aarch64
    +```
    +
     Basic usage:
    +
    +```shell
    +usage: ironic-python-agent-builder [-h] [-r RELEASE] [-o OUTPUT] [-e ELEMENT]
    +                                    [-b BRANCH] [-v] [--extra-args EXTRA_ARGS]
    +                                    distribution
    +
    +positional arguments:
    +    distribution          Distribution to use
    +
    +optional arguments:
    +    -h, --help            show this help message and exit
    +    -r RELEASE, --release RELEASE
    +                        Distribution release to use
    +    -o OUTPUT, --output OUTPUT
    +                        Output base file name
    +    -e ELEMENT, --element ELEMENT
    +                        Additional DIB element to use
    +    -b BRANCH, --branch BRANCH
    +                        If set, override the branch that is used for ironic-
    +                        python-agent and requirements
    +    -v, --verbose         Enable verbose logging in diskimage-builder
    +    --extra-args EXTRA_ARGS
    +                        Extra arguments to pass to diskimage-builder
    +```
    +
     For example:
    +
    +```shell
    +ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky
    +```
    +
  3. Allow SSH login.

     Initialize the environment variables, then build the image:
    +
    +```shell
    +export DIB_DEV_USER_USERNAME=ipa \
    +export DIB_DEV_USER_PWDLESS_SUDO=yes \
    +export DIB_DEV_USER_PASSWORD='123'
    +ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky -e selinux-permissive -e devuser
    +```
    +
  4. Specify the code repository.

     Initialize the corresponding environment variables, then build the image:
    +
    +```shell
    +# Specify the repository location and version
    +DIB_REPOLOCATION_ironic_python_agent=git@172.20.2.149:liuzz/ironic-python-agent.git
    +DIB_REPOREF_ironic_python_agent=origin/develop
    +
    +# Clone the code directly from gerrit
    +DIB_REPOLOCATION_ironic_python_agent=https://review.opendev.org/openstack/ironic-python-agent
    +DIB_REPOREF_ironic_python_agent=refs/changes/43/701043/1
    +```
    +
     Reference: [source-repositories](https://docs.openstack.org/diskimage-builder/latest/elements/source-repositories/README.html).

     Specifying the repository location and version has been verified to work.
    +
  5. Notes.

     The PXE configuration file templates in native OpenStack do not support the arm64 architecture, so the native OpenStack code needs to be modified by the user:
    +
    In Wallaby, the community Ironic still does not support arm64 UEFI PXE boot: the generated grub.cfg file (usually under /tftpboot/) has the wrong format, so PXE boot fails, as shown below.
    +
    The erroneous generated configuration file:
    +
    +![ironic-err](../../img/install/ironic-err.png)
    +
    As shown in the image above, on the arm architecture the commands that load the vmlinux and ramdisk images are linux and initrd respectively; the highlighted commands in the image are for x86 UEFI PXE boot.
    +
    You need to modify the code that generates grub.cfg yourself.
    +
    TLS errors when Ironic sends status query requests to IPA:
    +
    In Wallaby, IPA and Ironic both enable TLS authentication by default when sending requests to each other; disable it as described in the official documentation.
    +
    1. Modify the Ironic configuration file (/etc/ironic/ironic.conf), adding ipa-insecure=1 to the following settings:
    +
    +```
    +[agent]
    +verify_ca = False
    +
    +[pxe]
    +pxe_append_params = nofb nomodeset vga=normal coreos.autologin ipa-insecure=1
    +```
    +
    2. In the ramdisk image, add the IPA configuration file /etc/ironic_python_agent/ironic_python_agent.conf with the TLS setting below (the /etc/ironic_python_agent directory must be created first):
    +
    +```
    +[DEFAULT]
    +enable_auto_tls = False
    +```
    +
    Set the permissions:
    +
    +```
    +chown -R ipa.ipa /etc/ironic_python_agent/
    +```
    +
    3. Modify the service file of the IPA service to add the configuration file option:

    vim /usr/lib/systemd/system/ironic-python-agent.service
    +
    +```
    +[Unit]
    +Description=Ironic Python Agent
    +After=network-online.target
    +
    +[Service]
    +ExecStartPre=/sbin/modprobe vfat
    +ExecStart=/usr/local/bin/ironic-python-agent --config-file /etc/ironic_python_agent/ironic_python_agent.conf
    +Restart=always
    +RestartSec=30s
    +
    +[Install]
    +WantedBy=multi-user.target
    +```
    +
  10. +
+
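
Once the services are running and the deploy ramdisk has been built, a node can be enrolled and made available. This is a minimal sketch only: the IPMI address, credentials, MAC address, and the deploy kernel/ramdisk image UUIDs are placeholders for values from your own environment:

```shell
# Enroll a bare metal node managed over IPMI (all values are placeholders)
openstack baremetal node create --driver ipmi \
  --driver-info ipmi_address=192.0.2.10 \
  --driver-info ipmi_username=admin \
  --driver-info ipmi_password=IPMI_PASS \
  --driver-info deploy_kernel=DEPLOY_VMLINUZ_UUID \
  --driver-info deploy_ramdisk=DEPLOY_INITRD_UUID

# Register its PXE NIC, then move it to the manageable and available states
openstack baremetal port create 52:54:00:12:34:56 --node <node-uuid>
openstack baremetal node manage <node-uuid>
openstack baremetal node provide <node-uuid>
```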

Kolla Installation

Kolla provides production-ready containerized deployment of OpenStack services. The Kolla and Kolla-ansible services were introduced in openEuler 22.03 LTS.

Installing Kolla is straightforward: just install the corresponding RPM packages:

yum install openstack-kolla openstack-kolla-ansible

After installation, the kolla-ansible, kolla-build, kolla-genpwd, kolla-mergepwd, and related commands are available.

+
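
As an illustration of how these commands are typically used together (the inventory path below is the kolla-ansible default sample and is an assumption here, not something this guide configures):

```shell
# Generate service passwords, then bootstrap and deploy an all-in-one host
kolla-genpwd
kolla-ansible -i /usr/share/kolla-ansible/ansible/inventory/all-in-one bootstrap-servers
kolla-ansible -i /usr/share/kolla-ansible/ansible/inventory/all-in-one deploy
```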

Trove Installation

Trove is the OpenStack Database service. It is recommended if you want OpenStack to provide database-as-a-service; otherwise it can be skipped.

  1. Set up the database.

     The Database service stores its information in a database. Create a trove database that the trove user can access, replacing TROVE_DBPASSWORD with a suitable password:

+
mysql -u root -p
+
+MariaDB [(none)]> CREATE DATABASE trove CHARACTER SET utf8;
+MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'localhost' \
+IDENTIFIED BY 'TROVE_DBPASSWORD';
+MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'%' \
+IDENTIFIED BY 'TROVE_DBPASSWORD';
+
  2. Create the service user and credentials.

     1. Create the Trove service user:

     openstack user create --password TROVE_PASSWORD \
                           --email trove@example.com trove
     openstack role add --project service --user trove admin
     openstack service create --name trove \
                              --description "Database service" database

     Explanation: replace TROVE_PASSWORD with the password of the trove user.

+

     2. Create the Database service endpoints:

+
openstack endpoint create --region RegionOne database public http://controller:8779/v1.0/%\(tenant_id\)s
+openstack endpoint create --region RegionOne database internal http://controller:8779/v1.0/%\(tenant_id\)s
+openstack endpoint create --region RegionOne database admin http://controller:8779/v1.0/%\(tenant_id\)s
+
  3. Install and configure the Trove components.

     1. Install the Trove packages:

     yum install openstack-trove python-troveclient

     2. Configure trove.conf:

     vim /etc/trove/trove.conf
+[DEFAULT]
+bind_host=TROVE_NODE_IP
+log_dir = /var/log/trove
+network_driver = trove.network.neutron.NeutronDriver
+management_security_groups = <manage security group>
+nova_keypair = trove-mgmt
+default_datastore = mysql
+taskmanager_manager = trove.taskmanager.manager.Manager
+trove_api_workers = 5
+transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
+reboot_time_out = 300
+usage_timeout = 900
+agent_call_high_timeout = 1200
+use_syslog = False
+debug = True
+
+# Set these if using Neutron Networking
+network_driver=trove.network.neutron.NeutronDriver
+network_label_regex=.*
+
+
+transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
+
+[database]
+connection = mysql+pymysql://trove:TROVE_DBPASS@controller/trove
+
+[keystone_authtoken]
+project_domain_name = Default
+project_name = service
+user_domain_name = Default
+password = trove
+username = trove
+auth_url = http://controller:5000/v3/
+auth_type = password
+
+[service_credentials]
+auth_url = http://controller:5000/v3/
+region_name = RegionOne
+project_name = service
+password = trove
+project_domain_name = Default
+user_domain_name = Default
+username = trove
+
+[mariadb]
+tcp_ports = 3306,4444,4567,4568
+
+[mysql]
+tcp_ports = 3306
+
+[postgresql]
+tcp_ports = 5432
     Explanation:
     - bind_host in the [DEFAULT] section is the IP of the node where Trove is deployed.
     - nova_compute_url and cinder_url are the endpoints created for Nova and Cinder in Keystone.
     - nova_proxy_XXX is the information of a user that can access the Nova service; the example above uses the admin user.
     - transport_url is the RabbitMQ connection information; replace RABBIT_PASS with the RabbitMQ password.
     - connection in the [database] section points to the database created for Trove in MySQL earlier.
     - In the Trove user information, replace TROVE_PASS with the actual password of the trove user.

+
     3. Configure trove-guestagent.conf:

     vim /etc/trove/trove-guestagent.conf

     [DEFAULT]
     log_file = trove-guestagent.log
     log_dir = /var/log/trove/
     ignore_users = os_admin
     control_exchange = trove
     transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
     rpc_backend = rabbit
     command_process_timeout = 60
     use_syslog = False
     debug = True

+

     [service_credentials]
     auth_url = http://controller:5000/v3/
     region_name = RegionOne
     project_name = service
     password = TROVE_PASS
     project_domain_name = Default
     user_domain_name = Default
     username = trove

+

     [mysql]
     docker_image = your-registry/your-repo/mysql
     backup_docker_image = your-registry/your-repo/db-backup-mysql:1.1.0

     Explanation: the guestagent is a standalone Trove component that must be pre-installed in the VM image Trove boots through Nova. After a database instance is created, the guestagent process starts and reports heartbeats to Trove through the message queue (RabbitMQ), so the RabbitMQ user and password must be configured here.

     Starting from the Victoria release, Trove uses a single unified image to run different types of databases; the database service runs in a Docker container inside the guest VM.

     - transport_url is the RabbitMQ connection information; replace RABBIT_PASS with the RabbitMQ password.
     - In the Trove user information, replace TROVE_PASS with the actual password of the trove user.

     4. Generate the Trove database tables:

     su -s /bin/sh -c "trove-manage db_sync" trove

  4. Complete the installation.

     1. Enable the Trove services to start at boot:

     systemctl enable openstack-trove-api.service \
     openstack-trove-taskmanager.service \
     openstack-trove-conductor.service

     2. Start the services:

     systemctl start openstack-trove-api.service \
     openstack-trove-taskmanager.service \
     openstack-trove-conductor.service

+

Swift 安装

+

Swift 提供了弹性可伸缩、高可用的分布式对象存储服务,适合存储大规模非结构化数据。

+
    +
  1. +

    创建服务凭证、API端点。

    +

    创建服务凭证

    +
    #创建swift用户:
    +openstack user create --domain default --password-prompt swift                 
    +#为swift用户添加admin角色:
    +openstack role add --project service --user swift admin                        
    +#创建swift服务实体:
    +openstack service create --name swift --description "OpenStack Object Storage" object-store                                                                   
    +

    创建swift API 端点:

    +
    openstack endpoint create --region RegionOne object-store public http://controller:8080/v1/AUTH_%\(project_id\)s                            
    +openstack endpoint create --region RegionOne object-store internal http://controller:8080/v1/AUTH_%\(project_id\)s                            
    +openstack endpoint create --region RegionOne object-store admin http://controller:8080/v1                                                  
    +
  2. +
  3. +

    安装软件包:

    +
    yum install openstack-swift-proxy python3-swiftclient python3-keystoneclient python3-keystonemiddleware memcached (CTL)
    +
  4. +
  5. +

    配置proxy-server相关配置

    +
  6. +
+

Swift RPM包里已经包含了一个基本可用的proxy-server.conf,只需要手动修改其中的ip和swift password即可。

+
***注意***
+
+**注意替换password为您在身份服务中为swift用户选择的密码**
+
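下面给出proxy-server.conf中与认证相关的`[filter:authtoken]`片段示例(仅为参考草稿,字段取值请以实际环境为准,`SWIFT_PASS`为假设的占位密码):

```ini
# /etc/swift/proxy-server.conf 中需要重点核对/修改的认证片段(示例)
[filter:authtoken]
paste.filter_factory = keystonemiddleware.auth_token:filter_factory
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = swift
password = SWIFT_PASS
```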
    +
  1. +

    安装和配置存储节点 (STG)

    +

    安装支持的程序包:

    yum install xfsprogs rsync

    +

    将/dev/vdb和/dev/vdc设备格式化为 XFS

    +
    mkfs.xfs /dev/vdb
    +mkfs.xfs /dev/vdc
    +

    创建挂载点目录结构:

    +
    mkdir -p /srv/node/vdb
    +mkdir -p /srv/node/vdc
    +

    找到新分区的 UUID:

    +
    blkid
    +

    编辑/etc/fstab文件并将以下内容添加到其中:

    +
    UUID="<UUID-from-output-above>" /srv/node/vdb xfs noatime 0 2
    +UUID="<UUID-from-output-above>" /srv/node/vdc xfs noatime 0 2
    +
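    也可以用如下方式直接生成对应条目(仅为一种写法示例,UUID以`blkid`实际输出为准):

    ```shell script
    # 从 blkid 读取 UUID 并追加 fstab 条目(示例)
    UUID_VDB=$(blkid -s UUID -o value /dev/vdb)
    UUID_VDC=$(blkid -s UUID -o value /dev/vdc)
    echo "UUID=\"${UUID_VDB}\" /srv/node/vdb xfs noatime 0 2" >> /etc/fstab
    echo "UUID=\"${UUID_VDC}\" /srv/node/vdc xfs noatime 0 2" >> /etc/fstab
    ```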

    挂载设备:

    +

    mount /srv/node/vdb
    +mount /srv/node/vdc
    **注意**

    +

    如果用户不需要容灾功能,以上步骤只需要创建一个设备即可,同时可以跳过下面的rsync配置

    +

    (可选)创建或编辑/etc/rsyncd.conf文件以包含以下内容:

    +

    [DEFAULT]
    +uid = swift
    +gid = swift
    +log file = /var/log/rsyncd.log
    +pid file = /var/run/rsyncd.pid
    +address = MANAGEMENT_INTERFACE_IP_ADDRESS
    +
    +[account]
    +max connections = 2
    +path = /srv/node/
    +read only = False
    +lock file = /var/lock/account.lock
    +
    +[container]
    +max connections = 2
    +path = /srv/node/
    +read only = False
    +lock file = /var/lock/container.lock
    +
    +[object]
    +max connections = 2
    +path = /srv/node/
    +read only = False
    +lock file = /var/lock/object.lock
    +替换MANAGEMENT_INTERFACE_IP_ADDRESS为存储节点上管理网络的IP地址

    +

    启动rsyncd服务并配置它在系统启动时启动:

    +
    systemctl enable rsyncd.service
    +systemctl start rsyncd.service
    +
  2. +
  3. +

    在存储节点安装和配置组件 (STG)

    +

    安装软件包:

    +
    yum install openstack-swift-account openstack-swift-container openstack-swift-object
    +

    编辑/etc/swift目录的account-server.conf、container-server.conf和object-server.conf文件,替换bind_ip为存储节点上管理网络的IP地址。

    +
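    例如,可以用`sed`批量修改(示例假设存储节点管理IP为10.0.0.51,且配置文件中已存在未注释的bind_ip配置项,请按实际环境替换):

    ```shell script
    # 将三个配置文件中的 bind_ip 统一改为存储节点管理 IP(示例)
    sed -i 's/^bind_ip *=.*/bind_ip = 10.0.0.51/' \
        /etc/swift/account-server.conf \
        /etc/swift/container-server.conf \
        /etc/swift/object-server.conf
    ```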

    确保挂载点目录结构的正确所有权:

    +
    chown -R swift:swift /srv/node
    +

    创建recon目录并确保其拥有正确的所有权:

    +
    mkdir -p /var/cache/swift
    +chown -R root:swift /var/cache/swift
    +chmod -R 775 /var/cache/swift
    +
  4. +
  5. +

    创建账号环 (CTL)

    +

    切换到/etc/swift目录。

    +
    cd /etc/swift
    +

    创建基础account.builder文件:

    +
    swift-ring-builder account.builder create 10 1 1
    +

    将每个存储节点添加到环中:

    +
    swift-ring-builder account.builder add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6202  --device DEVICE_NAME --weight DEVICE_WEIGHT
    +

    替换STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS为存储节点上管理网络的IP地址。替换DEVICE_NAME为同一存储节点上的存储设备名称

    +

    **注意**:对每个存储节点上的每个存储设备重复此命令

    +
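    以下是一个具体的示例(假设存储节点管理IP为10.0.0.51,有vdb、vdc两块盘,权重均取100,仅作示意):

    ```shell script
    swift-ring-builder account.builder add --region 1 --zone 1 --ip 10.0.0.51 --port 6202 --device vdb --weight 100
    swift-ring-builder account.builder add --region 1 --zone 1 --ip 10.0.0.51 --port 6202 --device vdc --weight 100
    ```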

    验证环的内容:

    +
    swift-ring-builder account.builder
    +

    重新平衡环:

    +
    swift-ring-builder account.builder rebalance
    +
  6. +
  7. +

    创建容器环 (CTL)

    +

    切换到/etc/swift目录。

    +

    创建基础container.builder文件:

    +
       swift-ring-builder container.builder create 10 1 1
    +

    将每个存储节点添加到环中:

    +
    swift-ring-builder container.builder \
    +  add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6201 \
    +  --device DEVICE_NAME --weight 100
    +
    +

    替换STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS为存储节点上管理网络的IP地址。替换DEVICE_NAME为同一存储节点上的存储设备名称

    +

    **注意**:对每个存储节点上的每个存储设备重复此命令

    +

    验证环的内容:

    +
    swift-ring-builder container.builder
    +

    重新平衡环:

    +
    swift-ring-builder container.builder rebalance
    +
  8. +
  9. +

    创建对象环 (CTL)

    +

    切换到/etc/swift目录。

    +

    创建基础object.builder文件:

    +
    swift-ring-builder object.builder create 10 1 1
    +

    将每个存储节点添加到环中

    +
     swift-ring-builder object.builder \
    +  add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6200 \
    +  --device DEVICE_NAME --weight 100
    +

    替换STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS为存储节点上管理网络的IP地址。替换DEVICE_NAME为同一存储节点上的存储设备名称

    +

    **注意**:对每个存储节点上的每个存储设备重复此命令

    +

    验证环的内容:

    +
    swift-ring-builder object.builder
    +

    重新平衡环:

    +
    swift-ring-builder object.builder rebalance
    +

    分发环配置文件:

    +

    将`account.ring.gz`、`container.ring.gz`以及`object.ring.gz`文件复制到每个存储节点和运行代理服务的任何其他节点上的`/etc/swift`目录。

    +
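    例如,可以通过`scp`分发(示例假设存储节点主机名为object1、object2,请替换为实际节点):

    ```shell script
    cd /etc/swift
    scp account.ring.gz container.ring.gz object.ring.gz root@object1:/etc/swift/
    scp account.ring.gz container.ring.gz object.ring.gz root@object2:/etc/swift/
    ```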
  10. +
  11. +

    完成安装

    +

    编辑/etc/swift/swift.conf文件

    +
    [swift-hash]
    +swift_hash_path_suffix = test-hash
    +swift_hash_path_prefix = test-hash
    +
    +[storage-policy:0]
    +name = Policy-0
    +default = yes
    +

    用唯一值替换 test-hash

    +
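    可以用如下方式生成随机字符串作为前缀和后缀(仅为一种做法示例):

    ```shell script
    # 生成两个随机十六进制串,分别用于 swift_hash_path_prefix / swift_hash_path_suffix
    openssl rand -hex 16
    openssl rand -hex 16
    ```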

    将`swift.conf`文件复制到每个存储节点和运行代理服务的任何其他节点上的`/etc/swift`目录。

    +

    在所有节点上,确保配置目录的正确所有权:

    +
    chown -R root:swift /etc/swift
    +

    在控制器节点和运行代理服务的任何其他节点上,启动对象存储代理服务及其依赖项,并将它们配置为在系统启动时启动:

    +
    systemctl enable openstack-swift-proxy.service memcached.service
    +systemctl start openstack-swift-proxy.service memcached.service
    +

    在存储节点上,启动对象存储服务并将它们配置为在系统启动时启动:

    +
    systemctl enable openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service
    +
    +systemctl start openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service
    +
    +systemctl enable openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service
    +
    +systemctl start openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service
    +
    +systemctl enable openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service
    +
    +systemctl start openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service
    +

    Cyborg 安装

    +
  12. +
+

Cyborg为OpenStack提供加速器设备的支持,包括 GPU, FPGA, ASIC, NP, SoCs, NVMe/NOF SSDs, ODP, DPDK/SPDK等等。

+
    +
  1. 初始化对应数据库
  2. +
+
CREATE DATABASE cyborg;
+GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'localhost' IDENTIFIED BY 'CYBORG_DBPASS';
+GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'%' IDENTIFIED BY 'CYBORG_DBPASS';
+
    +
  1. 创建对应Keystone资源对象
  2. +
+
$ openstack user create --domain default --password-prompt cyborg
+$ openstack role add --project service --user cyborg admin
+$ openstack service create --name cyborg --description "Acceleration Service" accelerator
+
+$ openstack endpoint create --region RegionOne \
+  accelerator public http://<cyborg-ip>:6666/v1
+$ openstack endpoint create --region RegionOne \
+  accelerator internal http://<cyborg-ip>:6666/v1
+$ openstack endpoint create --region RegionOne \
+  accelerator admin http://<cyborg-ip>:6666/v1
+
    +
  1. 安装Cyborg
  2. +
+
yum install openstack-cyborg
+
    +
  1. 配置Cyborg
  2. +
+

修改/etc/cyborg/cyborg.conf

+
[DEFAULT]
+transport_url = rabbit://%RABBITMQ_USER%:%RABBITMQ_PASSWORD%@%OPENSTACK_HOST_IP%:5672/
+use_syslog = False
+state_path = /var/lib/cyborg
+debug = True
+
+[database]
+connection = mysql+pymysql://%DATABASE_USER%:%DATABASE_PASSWORD%@%OPENSTACK_HOST_IP%/cyborg
+
+[service_catalog]
+project_domain_id = default
+user_domain_id = default
+project_name = service
+password = PASSWORD
+username = cyborg
+auth_url = http://%OPENSTACK_HOST_IP%/identity
+auth_type = password
+
+[placement]
+project_domain_name = Default
+project_name = service
+user_domain_name = Default
+password = PASSWORD
+username = placement
+auth_url = http://%OPENSTACK_HOST_IP%/identity
+auth_type = password
+
+[keystone_authtoken]
+memcached_servers = localhost:11211
+project_domain_name = Default
+project_name = service
+user_domain_name = Default
+password = PASSWORD
+username = cyborg
+auth_url = http://%OPENSTACK_HOST_IP%/identity
+auth_type = password
+

自行修改对应的用户名、密码、IP等信息

+
    +
  1. 同步数据库表格
  2. +
+
cyborg-dbsync --config-file /etc/cyborg/cyborg.conf upgrade
+
    +
  1. 启动Cyborg服务
  2. +
+
systemctl enable openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent
+systemctl start openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent
+

Aodh 安装

+
    +
  1. 创建数据库
  2. +
+
CREATE DATABASE aodh;
+
+GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'localhost' IDENTIFIED BY 'AODH_DBPASS';
+
+GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'%' IDENTIFIED BY 'AODH_DBPASS';
+
    +
  1. 创建对应Keystone资源对象
  2. +
+
openstack user create --domain default --password-prompt aodh
+
+openstack role add --project service --user aodh admin
+
+openstack service create --name aodh --description "Telemetry" alarming
+
+openstack endpoint create --region RegionOne alarming public http://controller:8042
+
+openstack endpoint create --region RegionOne alarming internal http://controller:8042
+
+openstack endpoint create --region RegionOne alarming admin http://controller:8042
+
    +
  1. 安装Aodh
  2. +
+
yum install openstack-aodh-api openstack-aodh-evaluator openstack-aodh-notifier openstack-aodh-listener openstack-aodh-expirer python3-aodhclient
+

注意

+

aodh依赖的软件包python3-pyparsing在openEuler的OS仓中的版本不适配,需要覆盖安装OpenStack对应的版本。可以使用`yum list | grep pyparsing | grep OpenStack | awk '{print $2}'`获取对应的版本号VERSION,然后执行`yum install -y python3-pyparsing-VERSION`覆盖安装适配的pyparsing。

+
    +
  1. 修改配置文件
  2. +
+
[database]
+connection = mysql+pymysql://aodh:AODH_DBPASS@controller/aodh
+
+[DEFAULT]
+transport_url = rabbit://openstack:RABBIT_PASS@controller
+auth_strategy = keystone
+
+[keystone_authtoken]
+www_authenticate_uri = http://controller:5000
+auth_url = http://controller:5000
+memcached_servers = controller:11211
+auth_type = password
+project_domain_id = default
+user_domain_id = default
+project_name = service
+username = aodh
+password = AODH_PASS
+
+[service_credentials]
+auth_type = password
+auth_url = http://controller:5000/v3
+project_domain_id = default
+user_domain_id = default
+project_name = service
+username = aodh
+password = AODH_PASS
+interface = internalURL
+region_name = RegionOne
+
    +
  1. 初始化数据库
  2. +
+
aodh-dbsync
+
    +
  1. 启动Aodh服务
  2. +
+
systemctl enable openstack-aodh-api.service openstack-aodh-evaluator.service openstack-aodh-notifier.service openstack-aodh-listener.service
+
+systemctl start openstack-aodh-api.service openstack-aodh-evaluator.service openstack-aodh-notifier.service openstack-aodh-listener.service
+

Gnocchi 安装

+
    +
  1. 创建数据库
  2. +
+
CREATE DATABASE gnocchi;
+
+GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'localhost' IDENTIFIED BY 'GNOCCHI_DBPASS';
+
+GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'%' IDENTIFIED BY 'GNOCCHI_DBPASS';
+
    +
  1. 创建对应Keystone资源对象
  2. +
+
openstack user create --domain default --password-prompt gnocchi
+
+openstack role add --project service --user gnocchi admin
+
+openstack service create --name gnocchi --description "Metric Service" metric
+
+openstack endpoint create --region RegionOne metric public http://controller:8041
+
+openstack endpoint create --region RegionOne metric internal http://controller:8041
+
+openstack endpoint create --region RegionOne metric admin http://controller:8041
+
    +
  1. 安装Gnocchi
  2. +
+
yum install openstack-gnocchi-api openstack-gnocchi-metricd python3-gnocchiclient
+
    +
  1. 修改配置文件/etc/gnocchi/gnocchi.conf
  2. +
+
[api]
+auth_mode = keystone
+port = 8041
+uwsgi_mode = http-socket
+
+[keystone_authtoken]
+auth_type = password
+auth_url = http://controller:5000/v3
+project_domain_name = Default
+user_domain_name = Default
+project_name = service
+username = gnocchi
+password = GNOCCHI_PASS
+interface = internalURL
+region_name = RegionOne
+
+[indexer]
+url = mysql+pymysql://gnocchi:GNOCCHI_DBPASS@controller/gnocchi
+
+[storage]
+# coordination_url is not required but specifying one will improve
+# performance with better workload division across workers.
+coordination_url = redis://controller:6379
+file_basepath = /var/lib/gnocchi
+driver = file
+
    +
  1. 初始化数据库
  2. +
+
gnocchi-upgrade
+
    +
  1. 启动Gnocchi服务
  2. +
+
systemctl enable openstack-gnocchi-api.service openstack-gnocchi-metricd.service
+
+systemctl start openstack-gnocchi-api.service openstack-gnocchi-metricd.service
+

Ceilometer 安装

+
    +
  1. 创建对应Keystone资源对象
  2. +
+
openstack user create --domain default --password-prompt ceilometer
+
+openstack role add --project service --user ceilometer admin
+
+openstack service create --name ceilometer --description "Telemetry" metering
+
    +
  1. 安装Ceilometer
  2. +
+
yum install openstack-ceilometer-notification openstack-ceilometer-central
+
    +
  1. 修改配置文件/etc/ceilometer/pipeline.yaml
  2. +
+
publishers:
+    # set address of Gnocchi
+    # + filter out Gnocchi-related activity meters (Swift driver)
+    # + set default archive policy
+    - gnocchi://?filter_project=service&archive_policy=low
+
    +
  1. 修改配置文件/etc/ceilometer/ceilometer.conf
  2. +
+
[DEFAULT]
+transport_url = rabbit://openstack:RABBIT_PASS@controller
+
+[service_credentials]
+auth_type = password
+auth_url = http://controller:5000/v3
+project_domain_id = default
+user_domain_id = default
+project_name = service
+username = ceilometer
+password = CEILOMETER_PASS
+interface = internalURL
+region_name = RegionOne
+
    +
  1. 初始化数据库
  2. +
+
ceilometer-upgrade
+
    +
  1. 启动Ceilometer服务
  2. +
+
systemctl enable openstack-ceilometer-notification.service openstack-ceilometer-central.service
+
+systemctl start openstack-ceilometer-notification.service openstack-ceilometer-central.service
+

Heat 安装

+
    +
  1. 创建heat数据库,并授予heat数据库正确的访问权限,替换HEAT_DBPASS为合适的密码
  2. +
+
CREATE DATABASE heat;
+GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'localhost' IDENTIFIED BY 'HEAT_DBPASS';
+GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'%' IDENTIFIED BY 'HEAT_DBPASS';
+
    +
  1. 创建服务凭证,创建heat用户,并为其增加admin角色
  2. +
+
openstack user create --domain default --password-prompt heat
+openstack role add --project service --user heat admin
+
    +
  1. 创建heat和heat-cfn服务及其对应的API端点
  2. +
+
openstack service create --name heat --description "Orchestration" orchestration
+openstack service create --name heat-cfn --description "Orchestration"  cloudformation
+openstack endpoint create --region RegionOne orchestration public http://controller:8004/v1/%\(tenant_id\)s
+openstack endpoint create --region RegionOne orchestration internal http://controller:8004/v1/%\(tenant_id\)s
+openstack endpoint create --region RegionOne orchestration admin http://controller:8004/v1/%\(tenant_id\)s
+openstack endpoint create --region RegionOne cloudformation public http://controller:8000/v1
+openstack endpoint create --region RegionOne cloudformation internal http://controller:8000/v1
+openstack endpoint create --region RegionOne cloudformation admin http://controller:8000/v1
+
    +
  1. 创建stack管理所需的额外信息,包括heat domain及其对应domain的admin用户heat_domain_admin、heat_stack_owner角色和heat_stack_user角色
  2. +
+
openstack user create --domain heat --password-prompt heat_domain_admin
+openstack role add --domain heat --user-domain heat --user heat_domain_admin admin
+openstack role create heat_stack_owner
+openstack role create heat_stack_user
+
    +
  1. 安装软件包
  2. +
+
yum install openstack-heat-api openstack-heat-api-cfn openstack-heat-engine
+
    +
  1. 修改配置文件/etc/heat/heat.conf
  2. +
+
[DEFAULT]
+transport_url = rabbit://openstack:RABBIT_PASS@controller
+heat_metadata_server_url = http://controller:8000
+heat_waitcondition_server_url = http://controller:8000/v1/waitcondition
+stack_domain_admin = heat_domain_admin
+stack_domain_admin_password = HEAT_DOMAIN_PASS
+stack_user_domain_name = heat
+
+[database]
+connection = mysql+pymysql://heat:HEAT_DBPASS@controller/heat
+
+[keystone_authtoken]
+www_authenticate_uri = http://controller:5000
+auth_url = http://controller:5000
+memcached_servers = controller:11211
+auth_type = password
+project_domain_name = default
+user_domain_name = default
+project_name = service
+username = heat
+password = HEAT_PASS
+
+[trustee]
+auth_type = password
+auth_url = http://controller:5000
+username = heat
+password = HEAT_PASS
+user_domain_name = default
+
+[clients_keystone]
+auth_uri = http://controller:5000
+
    +
  1. 初始化heat数据库表
  2. +
+
su -s /bin/sh -c "heat-manage db_sync" heat
+
    +
  1. 启动服务
  2. +
+
systemctl enable openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service
+systemctl start openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service
+

基于OpenStack SIG开发工具oos快速部署

+

oos(openEuler OpenStack SIG)是OpenStack SIG提供的命令行工具。其中oos env系列命令提供了一键部署OpenStack (all in one或三节点cluster)的ansible脚本,用户可以使用该脚本快速部署一套基于 openEuler RPM 的 OpenStack 环境。oos工具支持对接云provider(目前仅支持华为云provider)和主机纳管两种方式来部署 OpenStack 环境,下面以对接华为云部署一套all in one的OpenStack环境为例说明oos工具的使用方法。

+
    +
  1. +

    安装oos工具

    +
    pip install openstack-sig-tool
    +
  2. +
  3. +

    配置对接华为云provider的信息

    +

    打开/usr/local/etc/oos/oos.conf文件,修改配置为您拥有的华为云资源信息:

    +
    [huaweicloud]
    +ak = 
    +sk = 
    +region = ap-southeast-3
    +root_volume_size = 100
    +data_volume_size = 100
    +security_group_name = oos
    +image_format = openEuler-%%(release)s-%%(arch)s
    +vpc_name = oos_vpc
    +subnet1_name = oos_subnet1
    +subnet2_name = oos_subnet2
    +
  4. +
  5. +

    配置 OpenStack 环境信息

    +

    打开/usr/local/etc/oos/oos.conf文件,根据当前机器环境和需求修改配置。内容如下:

    +
    [environment]
    +mysql_root_password = root
    +mysql_project_password = root
    +rabbitmq_password = root
    +project_identity_password = root
    +enabled_service = keystone,neutron,cinder,placement,nova,glance,horizon,aodh,ceilometer,cyborg,gnocchi,kolla,heat,swift,trove,tempest
    +neutron_provider_interface_name = br-ex
    +default_ext_subnet_range = 10.100.100.0/24
    +default_ext_subnet_gateway = 10.100.100.1
    +neutron_dataplane_interface_name = eth1
    +cinder_block_device = vdb
    +swift_storage_devices = vdc
    +swift_hash_path_suffix = ash
    +swift_hash_path_prefix = has
    +glance_api_workers = 2
    +cinder_api_workers = 2
    +nova_api_workers = 2
    +nova_metadata_api_workers = 2
    +nova_conductor_workers = 2
    +nova_scheduler_workers = 2
    +neutron_api_workers = 2
    +horizon_allowed_host = *
    +kolla_openeuler_plugin = false
    +

    关键配置

    | 配置项 | 解释 |
    |--------|------|
    | enabled_service | 安装服务列表,根据用户需求自行删减 |
    | neutron_provider_interface_name | neutron L3网桥名称 |
    | default_ext_subnet_range | neutron私网IP段 |
    | default_ext_subnet_gateway | neutron私网gateway |
    | neutron_dataplane_interface_name | neutron使用的网卡,推荐使用一张新的网卡,以免和现有网卡冲突,防止all in one主机断连的情况 |
    | cinder_block_device | cinder使用的卷设备名 |
    | swift_storage_devices | swift使用的卷设备名 |
    | kolla_openeuler_plugin | 是否启用kolla plugin。设置为True,kolla将支持部署openEuler容器 |
    +
  6. +
  7. +

    华为云上面创建一台openEuler 22.03-LTS-SP2的x86_64虚拟机,用于部署all in one 的 OpenStack

    +
    # sshpass在`oos env create`过程中被使用,用于配置对目标虚拟机的免密访问
    +dnf install sshpass
    +oos env create -r 22.03-lts-sp2 -f small -a x86 -n test-oos all_in_one
    +

    具体的参数可以使用oos env create --help命令查看

    +
  8. +
  9. +

    部署OpenStack all in one 环境

    +

    oos env setup test-oos -r wallaby

    具体的参数可以使用oos env setup --help命令查看

    +
  10. +
  11. +

    初始化tempest环境

    +

    如果用户想使用该环境运行tempest测试的话,可以执行命令oos env init,会自动把tempest需要的OpenStack资源自动创建好

    +
    oos env init test-oos
    +

    命令执行成功后,在用户的根目录下会生成mytest目录,进入其中就可以执行tempest run命令了。

    +
  12. +
+

如果是以主机纳管的方式部署 OpenStack 环境,总体逻辑与上文对接华为云时一致,1、3、5、6步操作不变,去除第2步对华为云provider信息的配置,第4步由在华为云上创建虚拟机改为纳管主机操作。

+
# sshpass在`oos env create`过程中被使用,用于配置对目标主机的免密访问
+dnf install sshpass
+oos env manage -r 22.03-lts-sp2 -i TARGET_MACHINE_IP -p TARGET_MACHINE_PASSWD -n test-oos
+

替换TARGET_MACHINE_IP为目标机ip、TARGET_MACHINE_PASSWD为目标机密码。具体的参数可以使用oos env manage --help命令查看。

diff --git a/site/install/openEuler-22.03-LTS-SP3/OpenStack-train/index.html b/site/install/openEuler-22.03-LTS-SP3/OpenStack-train/index.html
new file mode 100644
index 0000000000000000000000000000000000000000..dba4d13afd0466db651729020905d613bb42f73c
--- /dev/null
+++ b/site/install/openEuler-22.03-LTS-SP3/OpenStack-train/index.html
@@ -0,0 +1,2843 @@
+openEuler-22.03-LTS-SP3_Train - OpenStack SIG Doc

OpenStack-Train 部署指南

+ +

OpenStack 简介

+

OpenStack 是一个社区,也是一个项目。它提供了一个部署云的操作平台或工具集,为组织提供可扩展的、灵活的云计算。

+

作为一个开源的云计算管理平台,OpenStack 由nova、cinder、neutron、glance、keystone、horizon等几个主要的组件组合起来完成具体工作。OpenStack 支持几乎所有类型的云环境,项目目标是提供实施简单、可大规模扩展、丰富、标准统一的云计算管理平台。OpenStack 通过各种互补的服务提供了基础设施即服务(IaaS)的解决方案,每个服务提供 API 进行集成。

+

openEuler 22.03-LTS-SP3版本官方源已经支持 OpenStack-Train 版本,用户可以配置好 yum 源后根据此文档进行 OpenStack 部署。

+

约定

+

OpenStack 支持多种形态部署,此文档支持ALL in One以及Distributed两种部署方式,按照如下方式约定:

+

ALL in One模式:

+
忽略所有可能的后缀
+

Distributed模式:

+
以 `(CTL)` 为后缀表示此条配置或者命令仅适用`控制节点`
+以 `(CPT)` 为后缀表示此条配置或者命令仅适用`计算节点`
+以 `(STG)` 为后缀表示此条配置或者命令仅适用`存储节点`
+除此之外表示此条配置或者命令同时适用`控制节点`和`计算节点`
+

注意

+

涉及到以上约定的服务如下:

+
    +
  • Cinder
  • +
  • Nova
  • +
  • Neutron
  • +
+

准备环境

+

环境配置

+
    +
  1. +

    启动OpenStack Train yum源

    +
    yum update
    +yum install openstack-release-train
    +yum clean all && yum makecache
    +

    注意:如果你的环境的YUM源没有启用EPOL,需要同时配置EPOL,确保EPOL已配置,如下所示

    +
    vi /etc/yum.repos.d/openEuler.repo
    +
    +[EPOL]
    +name=EPOL
    +baseurl=http://repo.openeuler.org/openEuler-22.03-LTS-SP3/EPOL/main/$basearch/
    +enabled=1
    +gpgcheck=1
    +gpgkey=http://repo.openeuler.org/openEuler-22.03-LTS-SP3/OS/$basearch/RPM-GPG-KEY-openEuler
    +
  2. +
  3. +

    修改主机名以及映射

    +

    设置各个节点的主机名

    +
    hostnamectl set-hostname controller                                                            (CTL)
    +hostnamectl set-hostname compute                                                               (CPT)
    +

    假设controller节点的IP是10.0.0.11,compute节点的IP是10.0.0.12(如果存在的话),则于/etc/hosts新增如下:

    +
    10.0.0.11   controller
    +10.0.0.12   compute
    +
  4. +
+

安装 SQL DataBase

+
    +
  1. +

    执行如下命令,安装软件包。

    +
    yum install mariadb mariadb-server python3-PyMySQL
    +
  2. +
  3. +

    执行如下命令,创建并编辑 /etc/my.cnf.d/openstack.cnf 文件。

    +
    vim /etc/my.cnf.d/openstack.cnf
    +
    +[mysqld]
    +bind-address = 10.0.0.11
    +default-storage-engine = innodb
    +innodb_file_per_table = on
    +max_connections = 4096
    +collation-server = utf8_general_ci
    +character-set-server = utf8
    +

    注意

    +

    其中 bind-address 设置为控制节点的管理IP地址。

    +
  4. +
  5. +

    启动 DataBase 服务,并为其配置开机自启动:

    +
    systemctl enable mariadb.service
    +systemctl start mariadb.service
    +
  6. +
  7. +

    配置DataBase的默认密码(可选)

    +
    mysql_secure_installation
    +

    注意

    +

    根据提示进行即可

    +
  8. +
+

安装 RabbitMQ

+
    +
  1. +

    执行如下命令,安装软件包。

    +
    yum install rabbitmq-server
    +
  2. +
  3. +

    启动 RabbitMQ 服务,并为其配置开机自启动。

    +
    systemctl enable rabbitmq-server.service
    +systemctl start rabbitmq-server.service
    +
  4. +
  5. +

    添加 OpenStack用户。

    +
    rabbitmqctl add_user openstack RABBIT_PASS
    +

    注意

    +

    替换 RABBIT_PASS,为 OpenStack 用户设置密码

    +
  6. +
  7. +

    设置openstack用户权限,允许进行配置、写、读:

    +
    rabbitmqctl set_permissions openstack ".*" ".*" ".*"
    +
  8. +
+

安装 Memcached

+
    +
  1. +

    执行如下命令,安装依赖软件包。

    +
    yum install memcached python3-memcached
    +
  2. +
  3. +

    编辑 /etc/sysconfig/memcached 文件。

    +
    vim /etc/sysconfig/memcached
    +
    +OPTIONS="-l 127.0.0.1,::1,controller"
    +
  4. +
  5. +

    执行如下命令,启动 Memcached 服务,并为其配置开机启动。

    +
    systemctl enable memcached.service
    +systemctl start memcached.service
    +

    注意

    +

    服务启动后,可以通过命令memcached-tool controller stats确保启动正常,服务可用,其中可以将controller替换为控制节点的管理IP地址。

    +
  6. +
+

安装 OpenStack

+

Keystone 安装

+
    +
  1. +

    创建 keystone 数据库并授权。

    +
    mysql -u root -p
    +
    +MariaDB [(none)]> CREATE DATABASE keystone;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
    +IDENTIFIED BY 'KEYSTONE_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
    +IDENTIFIED BY 'KEYSTONE_DBPASS';
    +MariaDB [(none)]> exit
    +

    注意

    +

    替换 KEYSTONE_DBPASS,为 Keystone 数据库设置密码

    +
  2. +
  3. +

    安装软件包。

    +
    yum install openstack-keystone httpd mod_wsgi
    +
  4. +
  5. +

    配置keystone相关配置

    +
    vim /etc/keystone/keystone.conf
    +
    +[database]
    +connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone
    +
    +[token]
    +provider = fernet
    +

    解释

    +

    [database]部分,配置数据库入口

    +

    [token]部分,配置token provider

    +

    注意:

    +

    替换 KEYSTONE_DBPASS 为 Keystone 数据库的密码

    +
  6. +
  7. +

    同步数据库。

    +
    su -s /bin/sh -c "keystone-manage db_sync" keystone
    +
  8. +
  9. +

    初始化Fernet密钥仓库。

    +
    keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
    +keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
    +
  10. +
  11. +

    启动服务。

    +
    keystone-manage bootstrap --bootstrap-password ADMIN_PASS \
    +--bootstrap-admin-url http://controller:5000/v3/ \
    +--bootstrap-internal-url http://controller:5000/v3/ \
    +--bootstrap-public-url http://controller:5000/v3/ \
    +--bootstrap-region-id RegionOne
    +

    注意

    +

    替换 ADMIN_PASS,为 admin 用户设置密码

    +
  12. +
  13. +

    配置Apache HTTP server

    +
    vim /etc/httpd/conf/httpd.conf
    +
    +ServerName controller
    +
    ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
    +

    解释

    +

    配置 ServerName 项引用控制节点

    +

    **注意**:如果 ServerName 项不存在则需要创建

    +
  14. +
  15. +

    启动Apache HTTP服务。

    +
    systemctl enable httpd.service
    +systemctl start httpd.service
    +
  16. +
  17. +

    创建环境变量配置。

    +
    cat << EOF >> ~/.admin-openrc
    +export OS_PROJECT_DOMAIN_NAME=Default
    +export OS_USER_DOMAIN_NAME=Default
    +export OS_PROJECT_NAME=admin
    +export OS_USERNAME=admin
    +export OS_PASSWORD=ADMIN_PASS
    +export OS_AUTH_URL=http://controller:5000/v3
    +export OS_IDENTITY_API_VERSION=3
    +export OS_IMAGE_API_VERSION=2
    +EOF
    +

    注意

    +

    替换 ADMIN_PASS 为 admin 用户的密码

    +
  18. +
  19. +

    依次创建domain, projects, users, roles,需要先安装好python3-openstackclient:

    +
    yum install python3-openstackclient==4.0.2
    +

    导入环境变量

    +
    source ~/.admin-openrc
    +

    创建project service,其中 domain default 在 keystone-manage bootstrap 时已创建

    +
    openstack domain create --description "An Example Domain" example
    +
    openstack project create --domain default --description "Service Project" service
    +

    创建(non-admin)project myproject,user myuser 和 role myrole,为 myprojectmyuser 添加角色myrole

    +
    openstack project create --domain default --description "Demo Project" myproject
    +openstack user create --domain default --password-prompt myuser
    +openstack role create myrole
    +openstack role add --project myproject --user myuser myrole
    +
  20. +
  21. +

    验证

    +

    取消临时环境变量OS_AUTH_URL和OS_PASSWORD:

    +
    source ~/.admin-openrc
    +unset OS_AUTH_URL OS_PASSWORD
    +

    为admin用户请求token:

    +
    openstack --os-auth-url http://controller:5000/v3 \
    +--os-project-domain-name Default --os-user-domain-name Default \
    +--os-project-name admin --os-username admin token issue
    +

    为myuser用户请求token:

    +
    openstack --os-auth-url http://controller:5000/v3 \
    +--os-project-domain-name Default --os-user-domain-name Default \
    +--os-project-name myproject --os-username myuser token issue
    +
  22. +
+

Glance 安装

+
    +
  1. +

    创建数据库、服务凭证和 API 端点

    +

    创建数据库:

    +
    mysql -u root -p
    +
    +MariaDB [(none)]> CREATE DATABASE glance;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
    +IDENTIFIED BY 'GLANCE_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
    +IDENTIFIED BY 'GLANCE_DBPASS';
    +MariaDB [(none)]> exit
    +

    注意:

    +

    替换 GLANCE_DBPASS,为 glance 数据库设置密码

    +

    创建服务凭证

    +
    source ~/.admin-openrc
    +
    +openstack user create --domain default --password-prompt glance
    +openstack role add --project service --user glance admin
    +openstack service create --name glance --description "OpenStack Image" image
    +

    创建镜像服务API端点:

    +
    openstack endpoint create --region RegionOne image public http://controller:9292
    +openstack endpoint create --region RegionOne image internal http://controller:9292
    +openstack endpoint create --region RegionOne image admin http://controller:9292
    +
  2. +
  3. +

    安装软件包

    +
    yum install openstack-glance
    +
  4. +
  5. +

    配置glance相关配置:

    +
    vim /etc/glance/glance-api.conf
    +
    +[database]
    +connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
    +
    +[keystone_authtoken]
    +www_authenticate_uri  = http://controller:5000
    +auth_url = http://controller:5000
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +project_name = service
    +username = glance
    +password = GLANCE_PASS
    +
    +[paste_deploy]
    +flavor = keystone
    +
    +[glance_store]
    +stores = file,http
    +default_store = file
    +filesystem_store_datadir = /var/lib/glance/images/
    +

    解释:

    +

    [database]部分,配置数据库入口

    +

    [keystone_authtoken] [paste_deploy]部分,配置身份认证服务入口

    +

    [glance_store]部分,配置本地文件系统存储和镜像文件的位置

    +

    注意

    +

    替换 GLANCE_DBPASS 为 glance 数据库的密码

    +

    替换 GLANCE_PASS 为 glance 用户的密码

    +
  6. +
  7. +

    同步数据库:

    +
    su -s /bin/sh -c "glance-manage db_sync" glance
    +
  8. +
  9. +

    启动服务:

    +
    systemctl enable openstack-glance-api.service
    +systemctl start openstack-glance-api.service
    +
  10. +
  11. +

    验证

    +

    下载镜像

    +
    source ~/.admin-openrc
    +
    +wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
    +

    注意

    +

    如果您使用的环境是鲲鹏架构,请下载aarch64版本的镜像;已对镜像cirros-0.5.2-aarch64-disk.img进行测试。

    +

    向Image服务上传镜像:

    +
    openstack image create --disk-format qcow2 --container-format bare \
    +                       --file cirros-0.4.0-x86_64-disk.img --public cirros
    +

    确认镜像上传并验证属性:

    +
    openstack image list
    +
  12. +
+

Placement安装

+
    +
  1. +

    创建数据库、服务凭证和 API 端点

    +

    创建数据库:

    +

    作为 root 用户访问数据库,创建 placement 数据库并授权。

    +
    mysql -u root -p
    +MariaDB [(none)]> CREATE DATABASE placement;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' \
    +IDENTIFIED BY 'PLACEMENT_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' \
    +IDENTIFIED BY 'PLACEMENT_DBPASS';
    +MariaDB [(none)]> exit
    +

    注意

    +

    替换 PLACEMENT_DBPASS 为 placement 数据库设置密码

    +
    source admin-openrc
    +

    执行如下命令,创建 placement 服务凭证、创建 placement 用户以及添加‘admin’角色到用户‘placement’。

    +

    创建Placement API服务

    +
    openstack user create --domain default --password-prompt placement
    +openstack role add --project service --user placement admin
    +openstack service create --name placement --description "Placement API" placement
    +

    创建placement服务API端点:

    +
    openstack endpoint create --region RegionOne placement public http://controller:8778
    +openstack endpoint create --region RegionOne placement internal http://controller:8778
    +openstack endpoint create --region RegionOne placement admin http://controller:8778
    +
  2. +
  3. +

    安装和配置

    +

    安装软件包:

    +
    yum install openstack-placement-api
    +

    配置placement:

    +

    编辑 /etc/placement/placement.conf 文件:

    +

    在[placement_database]部分,配置数据库入口

    +

    在[api] [keystone_authtoken]部分,配置身份认证服务入口

    +
    # vim /etc/placement/placement.conf
    +[placement_database]
    +# ...
    +connection = mysql+pymysql://placement:PLACEMENT_DBPASS@controller/placement
    +[api]
    +# ...
    +auth_strategy = keystone
    +[keystone_authtoken]
    +# ...
    +auth_url = http://controller:5000/v3
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +project_name = service
    +username = placement
    +password = PLACEMENT_PASS
    +

    其中,替换 PLACEMENT_DBPASS 为 placement 数据库的密码,替换 PLACEMENT_PASS 为 placement 用户的密码。

    +

    同步数据库:

    +
    su -s /bin/sh -c "placement-manage db sync" placement
    +

    启动httpd服务:

    +
    systemctl restart httpd
    +
  4. +
  5. +

    验证

    +

    执行如下命令,执行状态检查:

    +
    . admin-openrc
    +placement-status upgrade check
    +

    安装osc-placement,列出可用的资源类别及特性:

    +
    yum install python3-osc-placement
    +openstack --os-placement-api-version 1.2 resource class list --sort-column name
    +openstack --os-placement-api-version 1.6 trait list --sort-column name
    +
  6. +
+

Nova 安装

+
    +
  1. +

    创建数据库、服务凭证和 API 端点

    +

    创建数据库:

    +
    mysql -u root -p                                                                               (CTL)
    +
    +MariaDB [(none)]> CREATE DATABASE nova_api;
    +MariaDB [(none)]> CREATE DATABASE nova;
    +MariaDB [(none)]> CREATE DATABASE nova_cell0;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> exit
    +

    注意

    +

    替换NOVA_DBPASS,为nova数据库设置密码

    +
    source ~/.admin-openrc                                                                         (CTL)
    +

    创建nova服务凭证:

    +
    openstack user create --domain default --password-prompt nova                                  (CTL)
    +openstack role add --project service --user nova admin                                         (CTL)
    +openstack service create --name nova --description "OpenStack Compute" compute                 (CTL)
    +

    创建nova API端点:

    +
    openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1        (CTL)
    +openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1      (CTL)
    +openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1         (CTL)
    +
  2. +
  3. +

    安装软件包

    +
    yum install openstack-nova-api openstack-nova-conductor \                                      (CTL)
    +openstack-nova-novncproxy openstack-nova-scheduler 
    +
    +yum install openstack-nova-compute                                                             (CPT)
    +

    注意

    +

    如果为arm64结构,还需要执行以下命令

    +
    yum install edk2-aarch64                                                                       (CPT)
    +
  4. +
  5. +

    配置nova相关配置

    +
    vim /etc/nova/nova.conf
    +
    +[DEFAULT]
    +enabled_apis = osapi_compute,metadata
    +transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
+my_ip = 10.0.0.11
    +use_neutron = true
    +firewall_driver = nova.virt.firewall.NoopFirewallDriver
    +compute_driver=libvirt.LibvirtDriver                                                           (CPT)
    +instances_path = /var/lib/nova/instances/                                                      (CPT)
    +lock_path = /var/lib/nova/tmp                                                                  (CPT)
    +
    +[api_database]
    +connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api                              (CTL)
    +
    +[database]
    +connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova                                  (CTL)
    +
    +[api]
    +auth_strategy = keystone
    +
    +[keystone_authtoken]
    +www_authenticate_uri = http://controller:5000/
    +auth_url = http://controller:5000/
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +project_name = service
    +username = nova
    +password = NOVA_PASS
    +
    +[vnc]
    +enabled = true
    +server_listen = $my_ip
    +server_proxyclient_address = $my_ip
    +novncproxy_base_url = http://controller:6080/vnc_auto.html                                     (CPT)
    +
    +[glance]
    +api_servers = http://controller:9292
    +
    +[oslo_concurrency]
    +lock_path = /var/lib/nova/tmp                                                                  (CTL)
    +
    +[placement]
    +region_name = RegionOne
    +project_domain_name = Default
    +project_name = service
    +auth_type = password
    +user_domain_name = Default
    +auth_url = http://controller:5000/v3
    +username = placement
    +password = PLACEMENT_PASS
    +
    +[neutron]
    +auth_url = http://controller:5000
    +auth_type = password
    +project_domain_name = default
    +user_domain_name = default
    +region_name = RegionOne
    +project_name = service
    +username = neutron
    +password = NEUTRON_PASS
    +service_metadata_proxy = true                                                                  (CTL)
    +metadata_proxy_shared_secret = METADATA_SECRET                                                 (CTL)
    +

    解释

    +

    [default]部分,启用计算和元数据的API,配置RabbitMQ消息队列入口,配置my_ip,启用网络服务neutron;

    +

    [api_database] [database]部分,配置数据库入口;

    +

    [api] [keystone_authtoken]部分,配置身份认证服务入口;

    +

    [vnc]部分,启用并配置远程控制台入口;

    +

    [glance]部分,配置镜像服务API的地址;

    +

    [oslo_concurrency]部分,配置lock path;

    +

    [placement]部分,配置placement服务的入口。

    +

    注意

    +

    替换 RABBIT_PASS 为 RabbitMQ 中 openstack 账户的密码;

    +

    配置 my_ip 为控制节点的管理IP地址;

    +

    替换 NOVA_DBPASS 为nova数据库的密码;

    +

    替换 NOVA_PASS 为nova用户的密码;

    +

    替换 PLACEMENT_PASS 为placement用户的密码;

    +

    替换 NEUTRON_PASS 为neutron用户的密码;

    +

    替换METADATA_SECRET为合适的元数据代理secret。

    +

    额外

    +

    确定是否支持虚拟机硬件加速(x86架构):

    +
    egrep -c '(vmx|svm)' /proc/cpuinfo                                                             (CPT)
    +

    如果返回值为0则不支持硬件加速,需要配置libvirt使用QEMU而不是KVM:

    +
    vim /etc/nova/nova.conf                                                                        (CPT)
    +
    +[libvirt]
    +virt_type = qemu
    +

    如果返回值为1或更大的值,则支持硬件加速,则virt_type可以配置为kvm

    +

    注意

    +

    如果为arm64结构,还需要在计算节点执行以下命令

    +
    
    +mkdir -p /usr/share/AAVMF
    +chown nova:nova /usr/share/AAVMF
    +
    +ln -s /usr/share/edk2/aarch64/QEMU_EFI-pflash.raw \
    +      /usr/share/AAVMF/AAVMF_CODE.fd
    +ln -s /usr/share/edk2/aarch64/vars-template-pflash.raw \
    +      /usr/share/AAVMF/AAVMF_VARS.fd
    +
    +vim /etc/libvirt/qemu.conf
    +
    +nvram = ["/usr/share/AAVMF/AAVMF_CODE.fd: \
    +         /usr/share/AAVMF/AAVMF_VARS.fd", \
    +         "/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw: \
    +         /usr/share/edk2/aarch64/vars-template-pflash.raw"]
    +

    并且当ARM架构下的部署环境为嵌套虚拟化时,libvirt配置如下:

    +
    [libvirt]
    +virt_type = qemu
    +cpu_mode = custom
    +cpu_model = cortex-a72
    +
  6. +
  7. +

    同步数据库

    +

    同步nova-api数据库:

    +
    su -s /bin/sh -c "nova-manage api_db sync" nova                                                (CTL)
    +

    注册cell0数据库:

    +
    su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova                                          (CTL)
    +

    创建cell1 cell:

    +
    su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova                 (CTL)
    +

    同步nova数据库:

    +
    su -s /bin/sh -c "nova-manage db sync" nova                                                    (CTL)
    +

    验证cell0和cell1注册正确:

    +
    su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova                                         (CTL)
    +

    添加计算节点到openstack集群

    +
    su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova                           (CTL)
    +
  8. +
  9. +

    启动服务

    +
    systemctl enable \                                                                             (CTL)
    +openstack-nova-api.service \
    +openstack-nova-scheduler.service \
    +openstack-nova-conductor.service \
    +openstack-nova-novncproxy.service
    +
    +systemctl start \                                                                              (CTL)
    +openstack-nova-api.service \
    +openstack-nova-scheduler.service \
    +openstack-nova-conductor.service \
    +openstack-nova-novncproxy.service
    +
    systemctl enable libvirtd.service openstack-nova-compute.service                               (CPT)
    +systemctl start libvirtd.service openstack-nova-compute.service                                (CPT)
    +
  10. +
  11. +

    验证

    +
    source ~/.admin-openrc                                                                         (CTL)
    +

    列出服务组件,验证每个流程都成功启动和注册:

    +
    openstack compute service list                                                                 (CTL)
    +

    列出身份服务中的API端点,验证与身份服务的连接:

    +
    openstack catalog list                                                                         (CTL)
    +

    列出镜像服务中的镜像,验证与镜像服务的连接:

    +
    openstack image list                                                                           (CTL)
    +

    检查cells是否运作成功,以及其他必要条件是否已具备。

    +
    nova-status upgrade check                                                                      (CTL)
    +
  12. +
+

Neutron 安装

+
    +
  1. +

    创建数据库、服务凭证和 API 端点

    +

    创建数据库:

    +
    mysql -u root -p                                                                               (CTL)
    +
    +MariaDB [(none)]> CREATE DATABASE neutron;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
    +IDENTIFIED BY 'NEUTRON_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
    +IDENTIFIED BY 'NEUTRON_DBPASS';
    +MariaDB [(none)]> exit
    +

    注意

    +

    替换 NEUTRON_DBPASS 为 neutron 数据库设置密码。

    +
    source ~/.admin-openrc                                                                         (CTL)
    +

    创建neutron服务凭证

    +
    openstack user create --domain default --password-prompt neutron                               (CTL)
    +openstack role add --project service --user neutron admin                                      (CTL)
    +openstack service create --name neutron --description "OpenStack Networking" network           (CTL)
    +

    创建Neutron服务API端点:

    +
    openstack endpoint create --region RegionOne network public http://controller:9696             (CTL)
    +openstack endpoint create --region RegionOne network internal http://controller:9696           (CTL)
    +openstack endpoint create --region RegionOne network admin http://controller:9696              (CTL)
    +
  2. +
  3. +

    安装软件包:

    +
    yum install openstack-neutron openstack-neutron-linuxbridge ebtables ipset \                   (CTL)
    +openstack-neutron-ml2
    +
    yum install openstack-neutron-linuxbridge ebtables ipset                                       (CPT)
    +
  4. +
  5. +

    配置neutron相关配置:

    +

    配置主体配置

    +
    vim /etc/neutron/neutron.conf
    +
    +[database]
    +connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron                         (CTL)
    +
    +[DEFAULT]
    +core_plugin = ml2                                                                              (CTL)
    +service_plugins = router                                                                       (CTL)
    +allow_overlapping_ips = true                                                                   (CTL)
    +transport_url = rabbit://openstack:RABBIT_PASS@controller
    +auth_strategy = keystone
    +notify_nova_on_port_status_changes = true                                                      (CTL)
    +notify_nova_on_port_data_changes = true                                                        (CTL)
    +api_workers = 3                                                                                (CTL)
    +
    +[keystone_authtoken]
    +www_authenticate_uri = http://controller:5000
    +auth_url = http://controller:5000
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +project_name = service
    +username = neutron
    +password = NEUTRON_PASS
    +
    +[nova]
    +auth_url = http://controller:5000                                                              (CTL)
    +auth_type = password                                                                           (CTL)
    +project_domain_name = Default                                                                  (CTL)
    +user_domain_name = Default                                                                     (CTL)
    +region_name = RegionOne                                                                        (CTL)
    +project_name = service                                                                         (CTL)
    +username = nova                                                                                (CTL)
    +password = NOVA_PASS                                                                           (CTL)
    +
    +[oslo_concurrency]
    +lock_path = /var/lib/neutron/tmp
    +

    解释

    +

    [database]部分,配置数据库入口;

    +

    [default]部分,启用ml2插件和router插件,允许ip地址重叠,配置RabbitMQ消息队列入口;

    +

    [default] [keystone]部分,配置身份认证服务入口;

    +

    [default] [nova]部分,配置网络来通知计算网络拓扑的变化;

    +

    [oslo_concurrency]部分,配置lock path。

    +

    注意

    +

    替换NEUTRON_DBPASS为 neutron 数据库的密码;

    +

    替换RABBIT_PASS为 RabbitMQ中openstack 账户的密码;

    +

    替换NEUTRON_PASS为 neutron 用户的密码;

    +

    替换NOVA_PASS为 nova 用户的密码。

    +

    配置ML2插件:

    +
    vim /etc/neutron/plugins/ml2/ml2_conf.ini
    +
    +[ml2]
    +type_drivers = flat,vlan,vxlan
    +tenant_network_types = vxlan
    +mechanism_drivers = linuxbridge,l2population
    +extension_drivers = port_security
    +
    +[ml2_type_flat]
    +flat_networks = provider
    +
    +[ml2_type_vxlan]
    +vni_ranges = 1:1000
    +
    +[securitygroup]
    +enable_ipset = true
    +

    创建/etc/neutron/plugin.ini的符号链接

    +
    ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
    +

    注意

    +

    [ml2]部分,启用 flat、vlan、vxlan 网络,启用 linuxbridge 及 l2population 机制,启用端口安全扩展驱动;

    +

    [ml2_type_flat]部分,配置 flat 网络为 provider 虚拟网络;

    +

    [ml2_type_vxlan]部分,配置 VXLAN 网络标识符范围;

    +

    [securitygroup]部分,配置允许 ipset。

    +

    补充

    +

    l2 的具体配置可以根据用户需求自行修改,本文使用的是provider network + linuxbridge

    +

    配置 Linux bridge 代理:

    +
    vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
    +
    +[linux_bridge]
    +physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME
    +
    +[vxlan]
    +enable_vxlan = true
    +local_ip = OVERLAY_INTERFACE_IP_ADDRESS
    +l2_population = true
    +
    +[securitygroup]
    +enable_security_group = true
    +firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
    +

    解释

    +

    [linux_bridge]部分,映射 provider 虚拟网络到物理网络接口;

    +

    [vxlan]部分,启用 vxlan 覆盖网络,配置处理覆盖网络的物理网络接口 IP 地址,启用 layer-2 population;

    +

    [securitygroup]部分,允许安全组,配置 linux bridge iptables 防火墙驱动。

    +

    注意

    +

    替换PROVIDER_INTERFACE_NAME为物理网络接口;

    +

    替换OVERLAY_INTERFACE_IP_ADDRESS为控制节点的管理IP地址。

    +

    配置Layer-3代理:

    +
    vim /etc/neutron/l3_agent.ini                                                                  (CTL)
    +
    +[DEFAULT]
    +interface_driver = linuxbridge
    +

    解释

    +

    在[default]部分,配置接口驱动为linuxbridge

    +

    配置DHCP代理:

    +
    vim /etc/neutron/dhcp_agent.ini                                                                (CTL)
    +
    +[DEFAULT]
    +interface_driver = linuxbridge
    +dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
    +enable_isolated_metadata = true
    +

    解释

    +

    [default]部分,配置linuxbridge接口驱动、Dnsmasq DHCP驱动,启用隔离的元数据。

    +

    配置metadata代理:

    +
    vim /etc/neutron/metadata_agent.ini                                                            (CTL)
    +
    +[DEFAULT]
    +nova_metadata_host = controller
    +metadata_proxy_shared_secret = METADATA_SECRET
    +

    解释

    +

    [default]部分,配置元数据主机和shared secret。

    +

    注意

    +

    替换METADATA_SECRET为合适的元数据代理secret。

    +
  6. +
  7. +

    配置nova相关配置

    +
    vim /etc/nova/nova.conf
    +
    +[neutron]
    +auth_url = http://controller:5000
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +region_name = RegionOne
    +project_name = service
    +username = neutron
    +password = NEUTRON_PASS
    +service_metadata_proxy = true                                                                  (CTL)
    +metadata_proxy_shared_secret = METADATA_SECRET                                                 (CTL)
    +

    解释

    +

    [neutron]部分,配置访问参数,启用元数据代理,配置secret。

    +

    注意

    +

    替换NEUTRON_PASS为 neutron 用户的密码;

    +

    替换METADATA_SECRET为合适的元数据代理secret。

    +
  8. +
  9. +

    同步数据库:

    +
    su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
    +--config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
    +
  10. +
  11. +

    重启计算API服务:

    +
    systemctl restart openstack-nova-api.service
    +
  12. +
  13. +

    启动网络服务

    +
    systemctl enable neutron-server.service neutron-linuxbridge-agent.service \                    (CTL)
    +neutron-dhcp-agent.service neutron-metadata-agent.service \
    +neutron-l3-agent.service
    +
    +systemctl restart neutron-server.service neutron-linuxbridge-agent.service \                   (CTL)
    +neutron-dhcp-agent.service neutron-metadata-agent.service \
    +neutron-l3-agent.service
    +
    +systemctl enable neutron-linuxbridge-agent.service                                             (CPT)
    +systemctl restart neutron-linuxbridge-agent.service openstack-nova-compute.service             (CPT)
    +
  14. +
  15. +

    验证

    +

    验证 neutron 代理启动成功:

    +
    openstack network agent list
    +
  16. +
+

Cinder 安装

+
    +
  1. +

    创建数据库、服务凭证和 API 端点

    +

    创建数据库:

    +
    mysql -u root -p
    +
    +MariaDB [(none)]> CREATE DATABASE cinder;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \
    +IDENTIFIED BY 'CINDER_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \
    +IDENTIFIED BY 'CINDER_DBPASS';
    +MariaDB [(none)]> exit
    +

    注意

    +

    替换 CINDER_DBPASS 为cinder数据库设置密码。

    +
    source ~/.admin-openrc
    +

    创建cinder服务凭证:

    +
    openstack user create --domain default --password-prompt cinder
    +openstack role add --project service --user cinder admin
    +openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
    +openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
    +

    创建块存储服务API端点:

    +
    openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(project_id\)s
    +openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(project_id\)s
    +openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(project_id\)s
    +openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s
    +openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s
    +openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s
    +
  2. +
  3. +

    安装软件包:

    +
    yum install openstack-cinder-api openstack-cinder-scheduler                                    (CTL)
    +
    yum install lvm2 device-mapper-persistent-data scsi-target-utils rpcbind nfs-utils \           (STG)
    +            openstack-cinder-volume openstack-cinder-backup
    +
  4. +
  5. +

    准备存储设备,以下仅为示例:

    +
    pvcreate /dev/vdb
    +vgcreate cinder-volumes /dev/vdb
    +
    +vim /etc/lvm/lvm.conf
    +
    +
    +devices {
    +...
    +filter = [ "a/vdb/", "r/.*/"]
    +

    解释

    +

    在devices部分,添加过滤以接受/dev/vdb设备拒绝其他设备。

    +
  6. +
  7. +

    准备NFS

    +
    mkdir -p /root/cinder/backup
    +
+cat << EOF >> /etc/exports
    +/root/cinder/backup 192.168.1.0/24(rw,sync,no_root_squash,no_all_squash)
    +EOF
    +
    +
  8. +
  9. +

    配置cinder相关配置:

    +
    vim /etc/cinder/cinder.conf
    +
    +[DEFAULT]
    +transport_url = rabbit://openstack:RABBIT_PASS@controller
    +auth_strategy = keystone
    +my_ip = 10.0.0.11
    +enabled_backends = lvm                                                                         (STG)
    +backup_driver=cinder.backup.drivers.nfs.NFSBackupDriver                                        (STG)
    +backup_share=HOST:PATH                                                                         (STG)
    +
    +[database]
    +connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder
    +
    +[keystone_authtoken]
    +www_authenticate_uri = http://controller:5000
    +auth_url = http://controller:5000
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +project_name = service
    +username = cinder
    +password = CINDER_PASS
    +
    +[oslo_concurrency]
    +lock_path = /var/lib/cinder/tmp
    +
    +[lvm]
    +volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver                                      (STG)
    +volume_group = cinder-volumes                                                                  (STG)
    +iscsi_protocol = iscsi                                                                         (STG)
    +iscsi_helper = tgtadm                                                                          (STG)
    +

    解释

    +

    [database]部分,配置数据库入口;

    +

    [DEFAULT]部分,配置RabbitMQ消息队列入口,配置my_ip;

    +

    [DEFAULT] [keystone_authtoken]部分,配置身份认证服务入口;

    +

    [oslo_concurrency]部分,配置lock path。

    +

    注意

    +

    替换CINDER_DBPASS为 cinder 数据库的密码;

    +

    替换RABBIT_PASS为 RabbitMQ 中 openstack 账户的密码;

    +

    配置my_ip为控制节点的管理 IP 地址;

    +

    替换CINDER_PASS为 cinder 用户的密码;

    +

    替换HOST:PATH为 NFS 的HOSTIP和共享路径;

    +
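    例如,结合前面NFS一节导出的/root/cinder/backup目录,假设NFS服务端IP为192.168.1.10,则可写成(仅为示例,IP请按实际环境替换):

    ```
    backup_share = 192.168.1.10:/root/cinder/backup
    ```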
  10. +
  11. +

    同步数据库:

    +
    su -s /bin/sh -c "cinder-manage db sync" cinder                                                (CTL)
    +
  12. +
  13. +

    配置nova:

    +
    vim /etc/nova/nova.conf                                                                        (CTL)
    +
    +[cinder]
    +os_region_name = RegionOne
    +
  14. +
  15. +

    重启计算API服务

    +
    systemctl restart openstack-nova-api.service
    +
  16. +
  17. +

    启动cinder服务

    +
    systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service               (CTL)
    +systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service                (CTL)
    +
    systemctl enable rpcbind.service nfs-server.service tgtd.service iscsid.service \              (STG)
    +                 openstack-cinder-volume.service \
    +                 openstack-cinder-backup.service
    +systemctl start rpcbind.service nfs-server.service tgtd.service iscsid.service \               (STG)
    +                openstack-cinder-volume.service \
    +                openstack-cinder-backup.service
    +

    注意

    +

    当cinder使用tgtadm的方式挂卷的时候,要修改/etc/tgt/tgtd.conf,内容如下,保证tgtd可以发现cinder-volume的iscsi target。

    +
    include /var/lib/cinder/volumes/*
    +
  18. +
  19. +

    验证

    +
    source ~/.admin-openrc
    +openstack volume service list
    +
  20. +
+

horizon 安装

+
    +
  1. +

    安装软件包

    +
    yum install openstack-dashboard
    +
  2. +
  3. +

    修改文件

    +

    修改变量

    +
    vim /etc/openstack-dashboard/local_settings
    +
    +OPENSTACK_HOST = "controller"
    +ALLOWED_HOSTS = ['*', ]
    +
    +SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
    +
    +CACHES = {
    +'default': {
    +     'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
    +     'LOCATION': 'controller:11211',
    +    }
    +}
    +
    +OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
    +OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
    +OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
    +OPENSTACK_KEYSTONE_DEFAULT_ROLE = "member"
    +WEBROOT = '/dashboard'
    +POLICY_FILES_PATH = "/etc/openstack-dashboard"
    +
    +OPENSTACK_API_VERSIONS = {
    +    "identity": 3,
    +    "image": 2,
    +    "volume": 3,
    +}
    +
  4. +
  5. +

    重启 httpd 服务

    +
    systemctl restart httpd.service memcached.service
    +
  6. +
  7. +

    验证:打开浏览器,输入网址http://HOSTIP/dashboard/,登录 horizon。

    +

    注意

    +

    替换HOSTIP为控制节点管理平面IP地址

    +
  8. +
+

Tempest 安装

+

Tempest是OpenStack的集成测试服务,如果用户需要全面自动化测试已安装的OpenStack环境的功能,则推荐使用该组件。否则,可以不用安装。

+
    +
  1. +

    安装Tempest

    +
    yum install openstack-tempest
    +
  2. +
  3. +

    初始化目录

    +
    tempest init mytest
    +
  4. +
  5. +

    修改配置文件。

    +
    cd mytest
    +vi etc/tempest.conf
    +

    tempest.conf中需要配置当前OpenStack环境的信息,具体内容可以参考官方示例

    +
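    下面给出一个最小化的tempest.conf配置示意,仅覆盖认证和基础资源相关的常用选项;其中ADMIN_PASS、镜像ID、规格ID等均为需按实际环境替换的占位符,完整选项请以官方示例为准:

    ```
    [auth]
    admin_username = admin
    admin_password = ADMIN_PASS
    admin_project_name = admin
    admin_domain_name = Default

    [identity]
    uri_v3 = http://controller:5000/v3

    [compute]
    image_ref = <glance镜像ID>
    flavor_ref = <nova规格ID>
    ```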
  6. +
  7. +

    执行测试

    +
    tempest run
    +
  8. +
  9. +

    安装tempest扩展(可选)。OpenStack各个服务本身也提供了一些tempest测试包,用户可以安装这些包来丰富tempest的测试内容。在Train中,我们提供了Cinder、Glance、Keystone、Ironic、Trove的扩展测试,用户可以执行如下命令进行安装使用:

    yum install python3-cinder-tempest-plugin python3-glance-tempest-plugin python3-ironic-tempest-plugin python3-keystone-tempest-plugin python3-trove-tempest-plugin

    +
  10. +
+

Ironic 安装

+

Ironic是OpenStack的裸金属服务,如果用户需要进行裸机部署则推荐使用该组件。否则,可以不用安装。

+
    +
  1. 设置数据库
  2. +
+

裸金属服务在数据库中存储信息,创建一个ironic用户可以访问的ironic数据库,替换IRONIC_DBPASSWORD为合适的密码

+

mysql -u root -p
+
+MariaDB [(none)]> CREATE DATABASE ironic CHARACTER SET utf8;
+MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'localhost' \
+IDENTIFIED BY 'IRONIC_DBPASSWORD';
+MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'%' \
+IDENTIFIED BY 'IRONIC_DBPASSWORD';
+2. 安装软件包

+
yum install openstack-ironic-api openstack-ironic-conductor python3-ironicclient
+

启动服务

+
systemctl enable openstack-ironic-api openstack-ironic-conductor
+systemctl start openstack-ironic-api openstack-ironic-conductor
+
    +
  1. 创建服务用户认证
  2. +
+

1、创建Bare Metal服务用户

+
openstack user create --password IRONIC_PASSWORD \
+                      --email ironic@example.com ironic
+openstack role add --project service --user ironic admin
+openstack service create --name ironic \
+                         --description "Ironic baremetal provisioning service" baremetal
+

2、创建Bare Metal服务访问入口

+
openstack endpoint create --region RegionOne baremetal admin http://$IRONIC_NODE:6385
+openstack endpoint create --region RegionOne baremetal public http://$IRONIC_NODE:6385
+openstack endpoint create --region RegionOne baremetal internal http://$IRONIC_NODE:6385
+
    +
  1. 配置ironic-api服务
  2. +
+

配置文件路径/etc/ironic/ironic.conf

+

1、通过connection选项配置数据库的位置,如下所示,替换IRONIC_DBPASSWORD为ironic用户的密码,替换DB_IP为DB服务器所在的IP地址:

+
[database]
+
+# The SQLAlchemy connection string used to connect to the
+# database (string value)
+
+connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic
+

2、通过以下选项配置ironic-api服务使用RabbitMQ消息代理,替换RPC_*为RabbitMQ的详细地址和凭证

+
[DEFAULT]
+
+# A URL representing the messaging driver to use and its full
+# configuration. (string value)
+
+transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
+

用户也可自行使用json-rpc方式替换rabbitmq

+
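若选择json-rpc方式,以下为一个配置示意(选项名称参考社区ironic文档,其中8089端口与10.0.0.11地址均为假设值,请以实际版本的配置说明为准):

```
[DEFAULT]
rpc_transport = json-rpc

[json_rpc]
auth_strategy = keystone
host_ip = 10.0.0.11
port = 8089
```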

3、配置ironic-api服务使用身份认证服务的凭证,替换PUBLIC_IDENTITY_IP为身份认证服务器的公共IP,替换PRIVATE_IDENTITY_IP为身份认证服务器的私有IP,替换IRONIC_PASSWORD为身份认证服务中ironic用户的密码:

+
[DEFAULT]
+
+# Authentication strategy used by ironic-api: one of
+# "keystone" or "noauth". "noauth" should not be used in a
+# production environment because all authentication will be
+# disabled. (string value)
+
+auth_strategy=keystone
+
+[keystone_authtoken]
+# Authentication type to load (string value)
+auth_type=password
+# Complete public Identity API endpoint (string value)
+www_authenticate_uri=http://PUBLIC_IDENTITY_IP:5000
+# Complete admin Identity API endpoint. (string value)
+auth_url=http://PRIVATE_IDENTITY_IP:5000
+# Service username. (string value)
+username=ironic
+# Service account password. (string value)
+password=IRONIC_PASSWORD
+# Service tenant name. (string value)
+project_name=service
+# Domain name containing project (string value)
+project_domain_name=Default
+# User's domain name (string value)
+user_domain_name=Default
+
+

4、创建裸金属服务数据库表

+
ironic-dbsync --config-file /etc/ironic/ironic.conf create_schema
+

5、重启ironic-api服务

+
sudo systemctl restart openstack-ironic-api
+
    +
  1. 配置ironic-conductor服务
  2. +
+

1、替换HOST_IP为conductor host的IP

+
[DEFAULT]
+
+# IP address of this host. If unset, will determine the IP
+# programmatically. If unable to do so, will use "127.0.0.1".
+# (string value)
+
+my_ip=HOST_IP
+

2、配置数据库的位置,ironic-conductor应该使用和ironic-api相同的配置。替换IRONIC_DBPASSWORD为ironic用户的密码,替换DB_IP为DB服务器所在的IP地址:

+
[database]
+
+# The SQLAlchemy connection string to use to connect to the
+# database. (string value)
+
+connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic
+

3、通过以下选项配置ironic-conductor服务使用RabbitMQ消息代理,ironic-conductor应该使用和ironic-api相同的配置,替换RPC_*为RabbitMQ的详细地址和凭证

+
[DEFAULT]
+
+# A URL representing the messaging driver to use and its full
+# configuration. (string value)
+
+transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
+

用户也可自行使用json-rpc方式替换rabbitmq

+

4、配置凭证访问其他OpenStack服务

+

为了与其他OpenStack服务进行通信,裸金属服务在请求其他服务时需要使用服务用户与OpenStack Identity服务进行认证。这些用户的凭据必须在与相应服务相关的每个配置文件中进行配置。

+
[neutron] - 访问OpenStack网络服务
+[glance] - 访问OpenStack镜像服务
+[swift] - 访问OpenStack对象存储服务
+[cinder] - 访问OpenStack块存储服务
+[inspector] - 访问OpenStack裸金属introspection服务
+[service_catalog] - 一个特殊项用于保存裸金属服务使用的凭证,该凭证用于发现注册在OpenStack身份认证服务目录中的自己的API URL端点
+

简单起见,可以对所有服务使用同一个服务用户。为了向后兼容,该用户应该和ironic-api服务的[keystone_authtoken]所配置的为同一个用户。但这不是必须的,也可以为每个服务创建并配置不同的服务用户。

+

在下面的示例中,用户访问OpenStack网络服务的身份验证信息配置为:

+
网络服务部署在名为RegionOne的身份认证服务域中,仅在服务目录中注册公共端点接口
+
+请求时使用特定的CA SSL证书进行HTTPS连接
+
+与ironic-api服务配置相同的服务用户
+
+动态密码认证插件基于其他选项发现合适的身份认证服务API版本
+
[neutron]
+
+# Authentication type to load (string value)
+auth_type = password
+# Authentication URL (string value)
+auth_url=https://IDENTITY_IP:5000/
+# Username (string value)
+username=ironic
+# User's password (string value)
+password=IRONIC_PASSWORD
+# Project name to scope to (string value)
+project_name=service
+# Domain ID containing project (string value)
+project_domain_id=default
+# User's domain id (string value)
+user_domain_id=default
+# PEM encoded Certificate Authority to use when verifying
+# HTTPs connections. (string value)
+cafile=/opt/stack/data/ca-bundle.pem
+# The default region_name for endpoint URL discovery. (string
+# value)
+region_name = RegionOne
+# List of interfaces, in order of preference, for endpoint
+# URL. (list value)
+valid_interfaces=public
+

默认情况下,为了与其他服务进行通信,裸金属服务会尝试通过身份认证服务的服务目录发现该服务合适的端点。如果希望对一个特定服务使用一个不同的端点,则在裸金属服务的配置文件中通过endpoint_override选项进行指定:

+
[neutron] ... endpoint_override = <NEUTRON_API_ADDRESS>
+

5、配置允许的驱动程序和硬件类型

+

通过设置enabled_hardware_types设置ironic-conductor服务允许使用的硬件类型:

+
[DEFAULT]
+enabled_hardware_types = ipmi
+

配置硬件接口:

+
enabled_boot_interfaces = pxe
+enabled_deploy_interfaces = direct,iscsi
+enabled_inspect_interfaces = inspector
+enabled_management_interfaces = ipmitool
+enabled_power_interfaces = ipmitool
+

配置接口默认值:

+
[DEFAULT]
+default_deploy_interface = direct
+default_network_interface = neutron
+

如果启用了任何使用Direct deploy的驱动,必须安装和配置镜像服务的Swift后端。Ceph对象网关(RADOS网关)也支持作为镜像服务的后端。

+

6、重启ironic-conductor服务

+
sudo systemctl restart openstack-ironic-conductor
+
    +
  1. +

    配置httpd服务

    +
  2. +
  3. +

    创建ironic要使用的httpd的root目录并设置属主属组,目录路径要和/etc/ironic/ironic.conf中[deploy]组中http_root 配置项指定的路径要一致。

    +
    mkdir -p /var/lib/ironic/httproot
    +chown ironic:ironic /var/lib/ironic/httproot
    +
  4. +
  5. +

    安装和配置httpd服务

    +
      +
    1. +

      安装httpd服务,已有请忽略

      +

      yum install httpd -y
      + 2. 创建/etc/httpd/conf.d/openstack-ironic-httpd.conf文件,内容如下:

      +
      Listen 8080
      +
      +<VirtualHost *:8080>
      +    ServerName ironic.openeuler.com
      +
      +    ErrorLog "/var/log/httpd/openstack-ironic-httpd-error_log"
      +    CustomLog "/var/log/httpd/openstack-ironic-httpd-access_log" "%h %l %u %t \"%r\" %>s %b"
      +
      +    DocumentRoot "/var/lib/ironic/httproot"
      +    <Directory "/var/lib/ironic/httproot">
      +        Options Indexes FollowSymLinks
      +        Require all granted
      +    </Directory>
      +    LogLevel warn
      +    AddDefaultCharset UTF-8
      +    EnableSendfile on
      +</VirtualHost>
      +
      +

      注意监听的端口要和/etc/ironic/ironic.conf里[deploy]段中http_url配置项指定的端口一致,可参考下面的配置示意。

      +
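      以下是/etc/ironic/ironic.conf中[deploy]段的一个配置示意(10.0.0.11为假设的本机管理IP,端口与上文httpd监听的8080保持一致,http_root与前面创建的目录一致):

      ```
      [deploy]
      http_url = http://10.0.0.11:8080
      http_root = /var/lib/ironic/httproot
      ```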
    2. +
    3. +

      重启httpd服务。

      +
      systemctl restart httpd
      +
    4. +
    +
  6. +
+

7.deploy ramdisk镜像制作

+

T版的ramdisk镜像支持通过ironic-python-agent服务或disk-image-builder工具制作,也可以使用社区最新的ironic-python-agent-builder。用户也可以自行选择其他工具制作。若使用T版原生工具,则需要安装对应的软件包。

+
yum install openstack-ironic-python-agent
+或者
+yum install diskimage-builder
+

具体的使用方法可以参考官方文档

+

这里介绍下使用ironic-python-agent-builder构建ironic使用的deploy镜像的完整过程。

+
    +
  1. +

    安装 ironic-python-agent-builder

    +
    1. 安装工具:
    +
    +    ```shell
    +    pip install ironic-python-agent-builder
    +    ```
    +
    +2. 修改以下文件中的python解释器:
    +
    +    ```shell
    +    /usr/bin/yum /usr/libexec/urlgrabber-ext-down
    +    ```
    +
    +3. 安装其它必须的工具:
    +
    +    ```shell
    +    yum install git
    +    ```
    +
    +    由于`DIB`依赖`semanage`命令,所以在制作镜像之前确定该命令是否可用:`semanage --help`,如果提示无此命令,安装即可:
    +
    +    ```shell
    +    # 先查询需要安装哪个包
    +    [root@localhost ~]# yum provides /usr/sbin/semanage
    +    已加载插件:fastestmirror
    +    Loading mirror speeds from cached hostfile
    +    * base: mirror.vcu.edu
    +    * extras: mirror.vcu.edu
    +    * updates: mirror.math.princeton.edu
    +    policycoreutils-python-2.5-34.el7.aarch64 : SELinux policy core python utilities
    +    源    :base
    +    匹配来源:
    +    文件名    :/usr/sbin/semanage
    +    # 安装
    +    [root@localhost ~]# yum install policycoreutils-python
    +    ```
    +
  2. +
  3. +

    制作镜像

    +
    如果是`arm`架构,需要添加:
    +```shell
    +export ARCH=aarch64
    +```
    +
    +基本用法:
    +
    +```shell
    +usage: ironic-python-agent-builder [-h] [-r RELEASE] [-o OUTPUT] [-e ELEMENT]
    +                                    [-b BRANCH] [-v] [--extra-args EXTRA_ARGS]
    +                                    distribution
    +
    +positional arguments:
    +    distribution          Distribution to use
    +
    +optional arguments:
    +    -h, --help            show this help message and exit
    +    -r RELEASE, --release RELEASE
    +                        Distribution release to use
    +    -o OUTPUT, --output OUTPUT
    +                        Output base file name
    +    -e ELEMENT, --element ELEMENT
    +                        Additional DIB element to use
    +    -b BRANCH, --branch BRANCH
    +                        If set, override the branch that is used for ironic-
    +                        python-agent and requirements
    +    -v, --verbose         Enable verbose logging in diskimage-builder
    +    --extra-args EXTRA_ARGS
    +                        Extra arguments to pass to diskimage-builder
    +```
    +
    +举例说明:
    +
    +```shell
    +ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky
    +```
    +
  4. +
  5. +

    允许ssh登录

    +
    初始化环境变量,然后制作镜像:
    +
    +```shell
    +export DIB_DEV_USER_USERNAME=ipa \
    +export DIB_DEV_USER_PWDLESS_SUDO=yes \
    +export DIB_DEV_USER_PASSWORD='123'
    +ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky -e selinux-permissive -e devuser
    +```
    +
  6. +
  7. +

    指定代码仓库

    +
    初始化对应的环境变量,然后制作镜像:
    +
    +```shell
    +# 指定仓库地址以及版本
    +DIB_REPOLOCATION_ironic_python_agent=git@172.20.2.149:liuzz/ironic-python-agent.git
    +DIB_REPOREF_ironic_python_agent=origin/develop
    +
    +# 直接从gerrit上clone代码
    +DIB_REPOLOCATION_ironic_python_agent=https://review.opendev.org/openstack/ironic-python-agent
    +DIB_REPOREF_ironic_python_agent=refs/changes/43/701043/1
    +```
    +
    +参考:[source-repositories](https://docs.openstack.org/diskimage-builder/latest/elements/source-repositories/README.html)。
    +
    +指定仓库地址及版本验证成功。
    +
  8. +
  9. +

    注意:原生的openstack里的pxe配置文件模板不支持arm64架构,需要自己对原生openstack代码进行修改:

    +

    在T版中,社区的ironic仍然不支持arm64位的uefi pxe启动,表现为生成的grub.cfg文件(一般位于/tftpboot/下)格式不对而导致pxe启动失败

    +

    需要用户对生成grub.cfg的代码逻辑自行修改。

    +

    ironic向ipa发送查询命令执行状态请求的tls报错:

    +

    T版的ipa和ironic默认都会开启tls认证的方式向对方发送请求,跟据官网的说明进行关闭即可。

    +
      +
    1. 修改ironic配置文件(/etc/ironic/ironic.conf)下面的配置中添加ipa-insecure=1:
    2. +
    +
    [agent]
    +verify_ca = False
    +
    +[pxe]
    +pxe_append_params = nofb nomodeset vga=normal coreos.autologin ipa-insecure=1
    +
      +
    1. ramdisk镜像中添加ipa配置文件/etc/ironic_python_agent/ironic_python_agent.conf并配置tls的配置如下:
    2. +
    +

    /etc/ironic_python_agent/ironic_python_agent.conf (需要提前创建/etc/ironic_python_agent目录)

    +
    [DEFAULT]
    +enable_auto_tls = False
    +

    设置权限:

    +
    chown -R ipa.ipa /etc/ironic_python_agent/
    +
      +
    1. 修改ipa服务的服务启动文件,添加配置文件选项
    2. +
    +

    vim /usr/lib/systemd/system/ironic-python-agent.service

    +
    [Unit]
    +Description=Ironic Python Agent
    +After=network-online.target
    +
    +[Service]
    +ExecStartPre=/sbin/modprobe vfat
    +ExecStart=/usr/local/bin/ironic-python-agent --config-file /etc/ironic_python_agent/ironic_python_agent.conf
    +Restart=always
    +RestartSec=30s
    +
    +[Install]
    +WantedBy=multi-user.target
    +
  10. +
+

在Train中,我们还提供了ironic-inspector等服务,用户可根据自身需求安装。

+
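例如,安装ironic-inspector(以下包名仅为假设,请以实际仓库中提供的包名为准):

```shell
yum install openstack-ironic-inspector
```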

Kolla 安装

+

Kolla为OpenStack服务提供生产环境可用的容器化部署的功能。

+

Kolla的安装十分简单,只需要安装对应的RPM包即可

+
yum install openstack-kolla openstack-kolla-ansible
+

安装完后,就可以使用kolla-ansible, kolla-build, kolla-genpwd, kolla-mergepwd等命令进行相关的镜像制作和容器环境部署了。

+
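以下是一个kolla-ansible的典型使用流程示意(以all-in-one部署为例,inventory文件路径以实际安装位置为准,仅供参考):

```shell
# 生成各服务的随机密码
kolla-genpwd
# 依次执行主机初始化、部署前检查和部署
kolla-ansible -i all-in-one bootstrap-servers
kolla-ansible -i all-in-one prechecks
kolla-ansible -i all-in-one deploy
```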

Trove 安装

+

Trove是OpenStack的数据库服务,如果用户使用OpenStack提供的数据库服务则推荐使用该组件。否则,可以不用安装。

+

1.设置数据库

+

数据库服务在数据库中存储信息,创建一个trove用户可以访问的trove数据库,替换TROVE_DBPASSWORD为合适的密码

+
mysql -u root -p
+
+MariaDB [(none)]> CREATE DATABASE trove CHARACTER SET utf8;
+MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'localhost' \
+IDENTIFIED BY 'TROVE_DBPASSWORD';
+MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'%' \
+IDENTIFIED BY 'TROVE_DBPASSWORD';
+

2.创建服务用户认证

+

1、创建Trove服务用户

+

openstack user create --domain default --password-prompt trove
+openstack role add --project service --user trove admin
+openstack service create --name trove --description "Database" database
+ 解释: 在提示输入密码时,输入为trove用户设置的密码(下文以TROVE_PASSWORD指代)

+

2、创建Database服务访问入口

+
openstack endpoint create --region RegionOne database public http://controller:8779/v1.0/%\(tenant_id\)s
+openstack endpoint create --region RegionOne database internal http://controller:8779/v1.0/%\(tenant_id\)s
+openstack endpoint create --region RegionOne database admin http://controller:8779/v1.0/%\(tenant_id\)s
+

3.安装和配置Trove各组件

+

1.安装Trove

+
yum install openstack-trove python3-troveclient
+

2.配置trove.conf +

vim /etc/trove/trove.conf
+
+ [DEFAULT]
+ log_dir = /var/log/trove
+ trove_auth_url = http://controller:5000/
+ nova_compute_url = http://controller:8774/v2
+ cinder_url = http://controller:8776/v1
+ swift_url = http://controller:8080/v1/AUTH_
+ rpc_backend = rabbit
+ transport_url = rabbit://openstack:RABBIT_PASS@controller:5672
+ auth_strategy = keystone
+ add_addresses = True
+ api_paste_config = /etc/trove/api-paste.ini
+ nova_proxy_admin_user = admin
+ nova_proxy_admin_pass = ADMIN_PASSWORD
+ nova_proxy_admin_tenant_name = service
+ taskmanager_manager = trove.taskmanager.manager.Manager
+ use_nova_server_config_drive = True
+ # Set these if using Neutron Networking
+ network_driver = trove.network.neutron.NeutronDriver
+ network_label_regex = .*
+
+ [database]
+ connection = mysql+pymysql://trove:TROVE_DBPASSWORD@controller/trove
+
+ [keystone_authtoken]
+ www_authenticate_uri = http://controller:5000/
+ auth_url = http://controller:5000/
+ auth_type = password
+ project_domain_name = default
+ user_domain_name = default
+ project_name = service
+ username = trove
+ password = TROVE_PASSWORD

+

解释:

+
    +
  • [DEFAULT]分组中的nova_compute_url和cinder_url为Nova和Cinder在Keystone中创建的endpoint
  • +
  • nova_proxy_XXX 为一个能访问Nova服务的用户信息,上例中使用admin用户为例
  • +
  • transport_url为RabbitMQ连接信息,RABBIT_PASS替换为RabbitMQ的密码
  • +
  • [database]分组中的connection 为前面在mysql中为Trove创建的数据库信息
  • +
  • Trove的用户信息中TROVE_PASSWORD替换为实际trove用户的密码
  • +
+

3.配置trove-guestagent.conf

+
vim /etc/trove/trove-guestagent.conf
+
+rabbit_host = controller
+rabbit_password = RABBIT_PASS
+trove_auth_url = http://controller:5000/
+

解释: guestagent是trove中一个独立组件,需要预先内置到Trove通过Nova创建的虚拟机镜像中。在创建好数据库实例后,会启动guestagent进程,负责通过消息队列(RabbitMQ)向Trove上报心跳,因此需要配置RabbitMQ的用户和密码信息。从Victoria版开始,Trove使用一个统一的镜像来运行不同类型的数据库,数据库服务运行在Guest虚拟机的Docker容器中。

+
    +
  • RABBIT_PASS替换为RabbitMQ的密码
  • +
+

4.生成Trove数据库表

+
su -s /bin/sh -c "trove-manage db_sync" trove
+

4.完成安装配置

+
    +
  1. 配置Trove服务自启动 +
    systemctl enable openstack-trove-api.service \
    +openstack-trove-taskmanager.service \
    +openstack-trove-conductor.service 
  2. +
  3. 启动服务 +
    systemctl start openstack-trove-api.service \
    +openstack-trove-taskmanager.service \
    +openstack-trove-conductor.service
  4. +
+

Swift 安装

+

Swift 提供了弹性可伸缩、高可用的分布式对象存储服务,适合存储大规模非结构化数据。

+
    +
  1. +

    创建服务凭证、API端点。

    +

    创建服务凭证

    +
    #创建swift用户:
    +openstack user create --domain default --password-prompt swift                 
    +#为swift用户添加admin角色:
    +openstack role add --project service --user swift admin                        
    +#创建swift服务实体:
    +openstack service create --name swift --description "OpenStack Object Storage" object-store                                                                   
    +

    创建swift API 端点:

    +
    openstack endpoint create --region RegionOne object-store public http://controller:8080/v1/AUTH_%\(project_id\)s                            
    +openstack endpoint create --region RegionOne object-store internal http://controller:8080/v1/AUTH_%\(project_id\)s                            
    +openstack endpoint create --region RegionOne object-store admin http://controller:8080/v1                                                  
    +
  2. +
  3. +

    安装软件包:

    +
    yum install openstack-swift-proxy python3-swiftclient python3-keystoneclient python3-keystonemiddleware memcached (CTL)
    +
  4. +
  5. +

    配置proxy-server相关配置

    +
  6. +
+

Swift RPM包里已经包含了一个基本可用的proxy-server.conf,只需要手动修改其中的ip和swift password即可。

+
***注意***
+
+**注意替换password为您在身份服务中为swift用户选择的密码**
+
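下面给出proxy-server.conf中通常需要关注的配置片段示意(其中10.0.0.11为假设的控制节点管理IP,SWIFT_PASS为swift用户的密码,实际选项请以RPM包内文件为准):

```
[DEFAULT]
bind_ip = 10.0.0.11
bind_port = 8080

[filter:authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = swift
password = SWIFT_PASS
```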
    +
  1. +

    安装和配置存储节点 (STG)

    +

    安装支持的程序包: +

    yum install xfsprogs rsync

    +

    将/dev/vdb和/dev/vdc设备格式化为 XFS

    +
    mkfs.xfs /dev/vdb
    +mkfs.xfs /dev/vdc
    +

    创建挂载点目录结构:

    +
    mkdir -p /srv/node/vdb
    +mkdir -p /srv/node/vdc
    +

    找到新分区的 UUID:

    +
    blkid
    +

    编辑/etc/fstab文件并将以下内容添加到其中:

    +
    UUID="<UUID-from-output-above>" /srv/node/vdb xfs noatime 0 2
    +UUID="<UUID-from-output-above>" /srv/node/vdc xfs noatime 0 2
    +

    挂载设备:

    +

    mount /srv/node/vdb
    +mount /srv/node/vdc
    +注意

    +

    如果用户不需要容灾功能,以上步骤只需要创建一个设备即可,同时可以跳过下面的rsync配置

    +

    (可选)创建或编辑/etc/rsyncd.conf文件以包含以下内容:

    +

    [DEFAULT]
    +uid = swift
    +gid = swift
    +log file = /var/log/rsyncd.log
    +pid file = /var/run/rsyncd.pid
    +address = MANAGEMENT_INTERFACE_IP_ADDRESS
    +
    +[account]
    +max connections = 2
    +path = /srv/node/
    +read only = False
    +lock file = /var/lock/account.lock
    +
    +[container]
    +max connections = 2
    +path = /srv/node/
    +read only = False
    +lock file = /var/lock/container.lock
    +
    +[object]
    +max connections = 2
    +path = /srv/node/
    +read only = False
    +lock file = /var/lock/object.lock
    +替换MANAGEMENT_INTERFACE_IP_ADDRESS为存储节点上管理网络的IP地址

    +

    启动rsyncd服务并配置它在系统启动时启动:

    +
    systemctl enable rsyncd.service
    +systemctl start rsyncd.service
    +
  2. +
  3. +

    在存储节点安装和配置组件 (STG)

    +

    安装软件包:

    +
    yum install openstack-swift-account openstack-swift-container openstack-swift-object
    +

    编辑/etc/swift目录的account-server.conf、container-server.conf和object-server.conf文件,替换bind_ip为存储节点上管理网络的IP地址。

    +
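    也可以使用sed批量替换三个配置文件中的bind_ip(以下以假设的存储节点管理IP 10.0.0.51为例;若文件中该项被注释,请先取消注释或手动编辑):

    ```shell
    sed -i 's/^bind_ip.*/bind_ip = 10.0.0.51/' \
        /etc/swift/account-server.conf \
        /etc/swift/container-server.conf \
        /etc/swift/object-server.conf
    ```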

    确保挂载点目录结构的正确所有权:

    +
    chown -R swift:swift /srv/node
    +

    创建recon目录并确保其拥有正确的所有权:

    +
    mkdir -p /var/cache/swift
    +chown -R root:swift /var/cache/swift
    +chmod -R 775 /var/cache/swift
    +
  4. +
  5. +

    创建账号环 (CTL)

    +

    切换到/etc/swift目录。

    +
    cd /etc/swift
    +

    创建基础account.builder文件:

    +
    swift-ring-builder account.builder create 10 1 1
    +

    将每个存储节点添加到环中:

    +
    swift-ring-builder account.builder add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6202  --device DEVICE_NAME --weight DEVICE_WEIGHT
    +

    替换STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS为存储节点上管理网络的IP地址。替换DEVICE_NAME为同一存储节点上的存储设备名称

    +
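    例如,假设存储节点管理IP为10.0.0.51、存储设备为vdb、权重取100,则命令为:

    ```shell
    swift-ring-builder account.builder add --region 1 --zone 1 --ip 10.0.0.51 --port 6202 --device vdb --weight 100
    ```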

    注意:对每个存储节点上的每个存储设备重复此命令

    +

    验证环(ring)的内容:

    +
    swift-ring-builder account.builder
    +

    重新平衡环:

    +
    swift-ring-builder account.builder rebalance
    +
  6. +
  7. +

    创建容器环 (CTL)

    +

    切换到/etc/swift目录。

    +

    创建基础container.builder文件:

    +
       swift-ring-builder container.builder create 10 1 1
    +

    将每个存储节点添加到环中:

    +
    swift-ring-builder container.builder \
    +  add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6201 \
    +  --device DEVICE_NAME --weight 100
    +
    +

    替换STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS为存储节点上管理网络的IP地址。替换DEVICE_NAME为同一存储节点上的存储设备名称

    +

    注意:对每个存储节点上的每个存储设备重复此命令

    +

    验证环(ring)的内容:

    +
    swift-ring-builder container.builder
    +

    重新平衡环:

    +
    swift-ring-builder container.builder rebalance
    +
  8. +
  9. +

    创建对象环 (CTL)

    +

    切换到/etc/swift目录。

    +

    创建基础object.builder文件:

    +
    swift-ring-builder object.builder create 10 1 1
    +

    将每个存储节点添加到环中

    +
     swift-ring-builder object.builder \
    +  add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6200 \
    +  --device DEVICE_NAME --weight 100
    +

    替换STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS为存储节点上管理网络的IP地址。替换DEVICE_NAME为同一存储节点上的存储设备名称

    +

    注意:对每个存储节点上的每个存储设备重复此命令

    +

    验证环(ring)的内容:

    +
    swift-ring-builder object.builder
    +

    重新平衡环:

    +
    swift-ring-builder object.builder rebalance
    +

    分发环配置文件:

    +

    将account.ring.gz、container.ring.gz以及object.ring.gz文件复制到每个存储节点和运行代理服务的任何其他节点上的/etc/swift目录。

    +
  10. +
  11. +

    完成安装

    +

    编辑/etc/swift/swift.conf文件

    +
    [swift-hash]
    +swift_hash_path_suffix = test-hash
    +swift_hash_path_prefix = test-hash
    +
    +[storage-policy:0]
    +name = Policy-0
    +default = yes
    +

    用唯一值替换 test-hash

    +
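    前后缀可以使用随机字符串,例如:

    ```shell
    openssl rand -hex 10
    ```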

    将swift.conf文件复制到每个存储节点和运行代理服务的任何其他节点上的/etc/swift目录。

    +

    在所有节点上,确保配置目录的正确所有权:

    +
    chown -R root:swift /etc/swift
    +

    在控制器节点和运行代理服务的任何其他节点上,启动对象存储代理服务及其依赖项,并将它们配置为在系统启动时启动:

    +
    systemctl enable openstack-swift-proxy.service memcached.service
    +systemctl start openstack-swift-proxy.service memcached.service
    +

    在存储节点上,启动对象存储服务并将它们配置为在系统启动时启动:

    +
    systemctl enable openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service
    +
    +systemctl start openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service
    +
    +systemctl enable openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service
    +
    +systemctl start openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service
    +
    +systemctl enable openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service
    +
    +systemctl start openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service
    +
  12. +
+

Cyborg 安装

+

Cyborg为OpenStack提供加速器设备的支持,包括 GPU, FPGA, ASIC, NP, SoCs, NVMe/NOF SSDs, ODP, DPDK/SPDK等等。

+

1.初始化对应数据库

+
CREATE DATABASE cyborg;
+GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'localhost' IDENTIFIED BY 'CYBORG_DBPASS';
+GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'%' IDENTIFIED BY 'CYBORG_DBPASS';
+

2.创建对应Keystone资源对象

+
$ openstack user create --domain default --password-prompt cyborg
+$ openstack role add --project service --user cyborg admin
+$ openstack service create --name cyborg --description "Acceleration Service" accelerator
+
+$ openstack endpoint create --region RegionOne \
+  accelerator public http://<cyborg-ip>:6666/v1
+$ openstack endpoint create --region RegionOne \
+  accelerator internal http://<cyborg-ip>:6666/v1
+$ openstack endpoint create --region RegionOne \
+  accelerator admin http://<cyborg-ip>:6666/v1
+

3.安装Cyborg

+
yum install openstack-cyborg
+

4.配置Cyborg

+

修改/etc/cyborg/cyborg.conf

+
[DEFAULT]
+transport_url = rabbit://%RABBITMQ_USER%:%RABBITMQ_PASSWORD%@%OPENSTACK_HOST_IP%:5672/
+use_syslog = False
+state_path = /var/lib/cyborg
+debug = True
+
+[database]
+connection = mysql+pymysql://%DATABASE_USER%:%DATABASE_PASSWORD%@%OPENSTACK_HOST_IP%/cyborg
+
+[service_catalog]
+project_domain_id = default
+user_domain_id = default
+project_name = service
+password = PASSWORD
+username = cyborg
+auth_url = http://%OPENSTACK_HOST_IP%/identity
+auth_type = password
+
+[placement]
+project_domain_name = Default
+project_name = service
+user_domain_name = Default
+password = PASSWORD
+username = placement
+auth_url = http://%OPENSTACK_HOST_IP%/identity
+auth_type = password
+
+[keystone_authtoken]
+memcached_servers = localhost:11211
+project_domain_name = Default
+project_name = service
+user_domain_name = Default
+password = PASSWORD
+username = cyborg
+auth_url = http://%OPENSTACK_HOST_IP%/identity
+auth_type = password
+

自行修改对应的用户名、密码、IP等信息

+

5.同步数据库表格

+
cyborg-dbsync --config-file /etc/cyborg/cyborg.conf upgrade
+

6.启动Cyborg服务

+
systemctl enable openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent
+systemctl start openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent
+

Aodh 安装

+

1.创建数据库

+
CREATE DATABASE aodh;
+
+GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'localhost' IDENTIFIED BY 'AODH_DBPASS';
+
+GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'%' IDENTIFIED BY 'AODH_DBPASS';
+

2.创建对应Keystone资源对象

+
openstack user create --domain default --password-prompt aodh
+
+openstack role add --project service --user aodh admin
+
+openstack service create --name aodh --description "Telemetry" alarming
+
+openstack endpoint create --region RegionOne alarming public http://controller:8042
+
+openstack endpoint create --region RegionOne alarming internal http://controller:8042
+
+openstack endpoint create --region RegionOne alarming admin http://controller:8042
+

3.安装Aodh

+
yum install openstack-aodh-api openstack-aodh-evaluator openstack-aodh-notifier openstack-aodh-listener openstack-aodh-expirer python3-aodhclient
+

4.修改配置文件

+
[database]
+connection = mysql+pymysql://aodh:AODH_DBPASS@controller/aodh
+
+[DEFAULT]
+transport_url = rabbit://openstack:RABBIT_PASS@controller
+auth_strategy = keystone
+
+[keystone_authtoken]
+www_authenticate_uri = http://controller:5000
+auth_url = http://controller:5000
+memcached_servers = controller:11211
+auth_type = password
+project_domain_id = default
+user_domain_id = default
+project_name = service
+username = aodh
+password = AODH_PASS
+
+[service_credentials]
+auth_type = password
+auth_url = http://controller:5000/v3
+project_domain_id = default
+user_domain_id = default
+project_name = service
+username = aodh
+password = AODH_PASS
+interface = internalURL
+region_name = RegionOne
+

5.初始化数据库

+
aodh-dbsync
+

6.启动Aodh服务

+
systemctl enable openstack-aodh-api.service openstack-aodh-evaluator.service openstack-aodh-notifier.service openstack-aodh-listener.service
+
+systemctl start openstack-aodh-api.service openstack-aodh-evaluator.service openstack-aodh-notifier.service openstack-aodh-listener.service
+

Gnocchi 安装

+

1.创建数据库

+
CREATE DATABASE gnocchi;
+
+GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'localhost' IDENTIFIED BY 'GNOCCHI_DBPASS';
+
+GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'%' IDENTIFIED BY 'GNOCCHI_DBPASS';
+

2.创建对应Keystone资源对象

+
openstack user create --domain default --password-prompt gnocchi
+
+openstack role add --project service --user gnocchi admin
+
+openstack service create --name gnocchi --description "Metric Service" metric
+
+openstack endpoint create --region RegionOne metric public http://controller:8041
+
+openstack endpoint create --region RegionOne metric internal http://controller:8041
+
+openstack endpoint create --region RegionOne metric admin http://controller:8041
+

3.安装Gnocchi

+
yum install openstack-gnocchi-api openstack-gnocchi-metricd python3-gnocchiclient
+

4.修改配置文件/etc/gnocchi/gnocchi.conf

+
[api]
+auth_mode = keystone
+port = 8041
+uwsgi_mode = http-socket
+
+[keystone_authtoken]
+auth_type = password
+auth_url = http://controller:5000/v3
+project_domain_name = Default
+user_domain_name = Default
+project_name = service
+username = gnocchi
+password = GNOCCHI_PASS
+interface = internalURL
+region_name = RegionOne
+
+[indexer]
+url = mysql+pymysql://gnocchi:GNOCCHI_DBPASS@controller/gnocchi
+
+[storage]
+# coordination_url is not required but specifying one will improve
+# performance with better workload division across workers.
+coordination_url = redis://controller:6379
+file_basepath = /var/lib/gnocchi
+driver = file
+

5.初始化数据库

+
gnocchi-upgrade
+

6.启动Gnocchi服务

+
systemctl enable openstack-gnocchi-api.service openstack-gnocchi-metricd.service
+
+systemctl start openstack-gnocchi-api.service openstack-gnocchi-metricd.service
+

Ceilometer 安装

+

1.创建对应Keystone资源对象

+
openstack user create --domain default --password-prompt ceilometer
+
+openstack role add --project service --user ceilometer admin
+
+openstack service create --name ceilometer --description "Telemetry" metering
+

2.安装Ceilometer

+
yum install openstack-ceilometer-notification openstack-ceilometer-central
+

3.修改配置文件/etc/ceilometer/pipeline.yaml

+
publishers:
+    # set address of Gnocchi
+    # + filter out Gnocchi-related activity meters (Swift driver)
+    # + set default archive policy
+    - gnocchi://?filter_project=service&archive_policy=low
+

4.修改配置文件/etc/ceilometer/ceilometer.conf

+
[DEFAULT]
+transport_url = rabbit://openstack:RABBIT_PASS@controller
+
+[service_credentials]
+auth_type = password
+auth_url = http://controller:5000/v3
+project_domain_id = default
+user_domain_id = default
+project_name = service
+username = ceilometer
+password = CEILOMETER_PASS
+interface = internalURL
+region_name = RegionOne
+

5.初始化数据库

+
ceilometer-upgrade
+

6.启动Ceilometer服务

+
systemctl enable openstack-ceilometer-notification.service openstack-ceilometer-central.service
+
+systemctl start openstack-ceilometer-notification.service openstack-ceilometer-central.service
+

Heat 安装

+

1.创建heat数据库,并授予heat数据库正确的访问权限,替换HEAT_DBPASS为合适的密码

+
CREATE DATABASE heat;
+GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'localhost' IDENTIFIED BY 'HEAT_DBPASS';
+GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'%' IDENTIFIED BY 'HEAT_DBPASS';
+

2.创建服务凭证,创建heat用户,并为其增加admin角色

+
openstack user create --domain default --password-prompt heat
+openstack role add --project service --user heat admin
+

3.创建heat和heat-cfn服务及其对应的API端点

+
openstack service create --name heat --description "Orchestration" orchestration
+openstack service create --name heat-cfn --description "Orchestration"  cloudformation
+openstack endpoint create --region RegionOne orchestration public http://controller:8004/v1/%\(tenant_id\)s
+openstack endpoint create --region RegionOne orchestration internal http://controller:8004/v1/%\(tenant_id\)s
+openstack endpoint create --region RegionOne orchestration admin http://controller:8004/v1/%\(tenant_id\)s
+openstack endpoint create --region RegionOne cloudformation public http://controller:8000/v1
+openstack endpoint create --region RegionOne cloudformation internal http://controller:8000/v1
+openstack endpoint create --region RegionOne cloudformation admin http://controller:8000/v1
+

4.创建stack管理所需的额外信息,包括heat domain及其对应domain的admin用户heat_domain_admin、heat_stack_owner角色、heat_stack_user角色

+
openstack user create --domain heat --password-prompt heat_domain_admin
+openstack role add --domain heat --user-domain heat --user heat_domain_admin admin
+openstack role create heat_stack_owner
+openstack role create heat_stack_user
+

5.安装软件包

+
yum install openstack-heat-api openstack-heat-api-cfn openstack-heat-engine
+

6.修改配置文件/etc/heat/heat.conf

+
[DEFAULT]
+transport_url = rabbit://openstack:RABBIT_PASS@controller
+heat_metadata_server_url = http://controller:8000
+heat_waitcondition_server_url = http://controller:8000/v1/waitcondition
+stack_domain_admin = heat_domain_admin
+stack_domain_admin_password = HEAT_DOMAIN_PASS
+stack_user_domain_name = heat
+
+[database]
+connection = mysql+pymysql://heat:HEAT_DBPASS@controller/heat
+
+[keystone_authtoken]
+www_authenticate_uri = http://controller:5000
+auth_url = http://controller:5000
+memcached_servers = controller:11211
+auth_type = password
+project_domain_name = default
+user_domain_name = default
+project_name = service
+username = heat
+password = HEAT_PASS
+
+[trustee]
+auth_type = password
+auth_url = http://controller:5000
+username = heat
+password = HEAT_PASS
+user_domain_name = default
+
+[clients_keystone]
+auth_uri = http://controller:5000
+

7.初始化heat数据库表

+
su -s /bin/sh -c "heat-manage db_sync" heat
+

8.启动服务

+
systemctl enable openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service
+systemctl start openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service
+

基于OpenStack SIG开发工具oos快速部署

+

oos(openEuler OpenStack SIG)是OpenStack SIG提供的命令行工具。其中oos env系列命令提供了一键部署OpenStack (all in one或三节点cluster)的ansible脚本,用户可以使用该脚本快速部署一套基于 openEuler RPM 的 OpenStack 环境。oos工具支持对接云provider(目前仅支持华为云provider)和主机纳管两种方式来部署 OpenStack 环境,下面以对接华为云部署一套all in one的OpenStack环境为例说明oos工具的使用方法。

+
    +
  1. +

    安装oos工具

    +
    pip install openstack-sig-tool
    +
  2. +
  3. +

    配置对接华为云provider的信息

    +

    打开/usr/local/etc/oos/oos.conf文件,修改配置为您拥有的华为云资源信息:

    +
    [huaweicloud]
    +ak = 
    +sk = 
    +region = ap-southeast-3
    +root_volume_size = 100
    +data_volume_size = 100
    +security_group_name = oos
    +image_format = openEuler-%%(release)s-%%(arch)s
    +vpc_name = oos_vpc
    +subnet1_name = oos_subnet1
    +subnet2_name = oos_subnet2
    +
  4. +
  5. +

    配置 OpenStack 环境信息

    +

    打开/usr/local/etc/oos/oos.conf文件,根据当前机器环境和需求修改配置。内容如下:

    +
    [environment]
    +mysql_root_password = root
    +mysql_project_password = root
    +rabbitmq_password = root
    +project_identity_password = root
    +enabled_service = keystone,neutron,cinder,placement,nova,glance,horizon,aodh,ceilometer,cyborg,gnocchi,kolla,heat,swift,trove,tempest
    +neutron_provider_interface_name = br-ex
    +default_ext_subnet_range = 10.100.100.0/24
    +default_ext_subnet_gateway = 10.100.100.1
    +neutron_dataplane_interface_name = eth1
    +cinder_block_device = vdb
    +swift_storage_devices = vdc
    +swift_hash_path_suffix = ash
    +swift_hash_path_prefix = has
    +glance_api_workers = 2
    +cinder_api_workers = 2
    +nova_api_workers = 2
    +nova_metadata_api_workers = 2
    +nova_conductor_workers = 2
    +nova_scheduler_workers = 2
    +neutron_api_workers = 2
    +horizon_allowed_host = *
    +kolla_openeuler_plugin = false
    +

    关键配置

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    配置项解释
    enabled_service安装服务列表,根据用户需求自行删减
    neutron_provider_interface_nameneutron L3网桥名称
    default_ext_subnet_rangeneutron私网IP段
    default_ext_subnet_gatewayneutron私网gateway
    neutron_dataplane_interface_nameneutron使用的网卡,推荐使用一张新的网卡,以免和现有网卡冲突,防止all in one主机断连的情况
    cinder_block_devicecinder使用的卷设备名
    swift_storage_devicesswift使用的卷设备名
    kolla_openeuler_plugin是否启用kolla plugin。设置为True,kolla将支持部署openEuler容器
    +
  6. +
  7. +

    华为云上面创建一台openEuler 22.03-LTS-SP3的x86_64虚拟机,用于部署all in one 的 OpenStack

    +
    # sshpass在`oos env create`过程中被使用,用于配置对目标虚拟机的免密访问
    +dnf install sshpass
    +oos env create -r 22.03-lts-SP3 -f small -a x86 -n test-oos all_in_one
    +

    具体的参数可以使用oos env create --help命令查看

    +
  8. +
  9. +

    部署OpenStack all in one 环境

    +
    oos env setup test-oos -r train
    +

    具体的参数可以使用oos env setup --help命令查看

    +
  10. +
  11. +

    初始化tempest环境

    +

    如果用户想使用该环境运行tempest测试的话,可以执行命令oos env init,会自动把tempest需要的OpenStack资源自动创建好

    +
    oos env init test-oos
    +

    命令执行成功后,在用户的根目录下会生成mytest目录,进入其中就可以执行tempest run命令了。

    +
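    例如,进入该目录后可以先执行冒烟测试:

    ```shell
    cd ~/mytest
    tempest run --smoke
    ```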
  12. +
+

如果是以主机纳管的方式部署 OpenStack 环境,总体逻辑与上文对接华为云时一致,1、3、5、6步操作不变,去除第2步对华为云provider信息的配置,第4步由在华为云上创建虚拟机改为纳管主机操作。

+
# sshpass在`oos env create`过程中被使用,用于配置对目标主机的免密访问
+dnf install sshpass
+oos env manage -r 22.03-lts-SP3 -i TARGET_MACHINE_IP -p TARGET_MACHINE_PASSWD -n test-oos
+

替换TARGET_MACHINE_IP为目标机ip、TARGET_MACHINE_PASSWD为目标机密码。具体的参数可以使用oos env manage --help命令查看。

+

基于OpenStack SIG部署工具opensd部署

+

opensd用于批量地脚本化部署openstack各组件服务。

+

部署步骤

+

1. 部署前需要确认的信息

+
    +
  • 装操作系统时,需将selinux设置为disable
  • +
  • 装操作系统时,将/etc/ssh/sshd_config配置文件内的UseDNS设置为no
  • +
  • 操作系统语言必须设置为英文
  • +
  • 部署之前请确保所有计算节点/etc/hosts文件内没有对计算主机的解析
  • +
+

2. ceph pool与认证创建(可选)

+

不使用ceph或已有ceph集群可忽略此步骤

+

在任意一台ceph monitor节点执行:

+

2.1 创建pool:

+
ceph osd pool create volumes 2048
+ceph osd pool create images 2048
+

2.2 初始化pool

+
rbd pool init volumes
+rbd pool init images
+

2.3 创建用户认证

+
ceph auth get-or-create client.glance mon 'profile rbd' osd 'profile rbd pool=images' mgr 'profile rbd pool=images'
+ceph auth get-or-create client.cinder mon 'profile rbd' osd 'profile rbd pool=volumes, profile rbd pool=images' mgr 'profile rbd pool=volumes'
+

3. 配置lvm(可选)

+

根据物理机磁盘配置与闲置情况,为mysql数据目录挂载额外的磁盘空间。示例如下(根据实际情况做配置):

+
fdisk -l
+Disk /dev/sdd: 479.6 GB, 479559942144 bytes, 936640512 sectors
+Units = sectors of 1 * 512 = 512 bytes
+Sector size (logical/physical): 512 bytes / 4096 bytes
+I/O size (minimum/optimal): 4096 bytes / 4096 bytes
+Disk label type: dos
+Disk identifier: 0x000ed242
+创建分区
+parted /dev/sdd
+mkpart primary 0 -1
+创建pv
+partprobe /dev/sdd1
+pvcreate /dev/sdd1
+创建、激活vg
+vgcreate vg_mariadb /dev/sdd1
+vgchange -ay vg_mariadb
+查看vg容量
+vgdisplay
+--- Volume group ---
+VG Name vg_mariadb
+System ID
+Format lvm2
+Metadata Areas 1
+Metadata Sequence No 2
+VG Access read/write
+VG Status resizable
+MAX LV 0
+Cur LV 1
+Open LV 1
+Max PV 0
+Cur PV 1
+Act PV 1
+VG Size 446.62 GiB
+PE Size 4.00 MiB
+Total PE 114335
+Alloc PE / Size 114176 / 446.00 GiB
+Free PE / Size 159 / 636.00 MiB
+VG UUID bVUmDc-VkMu-Vi43-mg27-TEkG-oQfK-TvqdEc
+创建lv
+lvcreate -L 446G -n lv_mariadb vg_mariadb
+格式化磁盘并获取卷的UUID
+mkfs.ext4 /dev/mapper/vg_mariadb-lv_mariadb
+blkid /dev/mapper/vg_mariadb-lv_mariadb
+/dev/mapper/vg_mariadb-lv_mariadb: UUID="98d513eb-5f64-4aa5-810e-dc7143884fa2" TYPE="ext4"
+注:98d513eb-5f64-4aa5-810e-dc7143884fa2为卷的UUID
+挂载磁盘
+mount /dev/mapper/vg_mariadb-lv_mariadb /var/lib/mysql
+rm -rf  /var/lib/mysql/*
+

4. 配置yum repo

+

在部署节点执行:

+

4.1 备份yum源

+
mkdir /etc/yum.repos.d/bak/
+mv /etc/yum.repos.d/*.repo /etc/yum.repos.d/bak/
+

4.2 配置yum repo

+
cat > /etc/yum.repos.d/opensd.repo << 'EOF'
+[train]
+name=train
+baseurl=http://119.3.219.20:82/openEuler:/22.03:/LTS:/SP3:/Epol:/Multi-Version:/OpenStack:/Train/standard_$basearch/
+enabled=1
+gpgcheck=0
+
+[epol]
+name=epol
+baseurl=http://119.3.219.20:82/openEuler:/22.03:/LTS:/SP3:/Epol/standard_$basearch/
+enabled=1
+gpgcheck=0
+
+[everything]
+name=everything
+baseurl=http://119.3.219.20:82/openEuler:/22.03:/LTS:/SP3/standard_$basearch/
+enabled=1
+gpgcheck=0
+
+EOF
+

4.3 更新yum缓存

+
yum clean all
+yum makecache
+

5. 安装opensd

+

在部署节点执行:

+

5.1 克隆opensd源码并安装

+
git clone https://gitee.com/openeuler/opensd
+cd opensd
+python3 setup.py install
+

6. 做ssh互信

+

在部署节点执行:

+

6.1 生成密钥对

+

执行如下命令并一路回车

+
ssh-keygen
+

6.2 生成主机IP地址文件

+

在auto_ssh_host_ip中配置所有用到的主机ip, 示例:

+
cd /usr/local/share/opensd/tools/
+vim auto_ssh_host_ip
+
+10.0.0.1
+10.0.0.2
+...
+10.0.0.10
+

6.3 更改密码并执行脚本

+

将免密脚本/usr/local/bin/opensd-auto-ssh内123123替换为主机真实密码

+
# 替换脚本内123123字符串
+vim /usr/local/bin/opensd-auto-ssh
+
## 安装expect后执行脚本
+dnf install expect -y
+opensd-auto-ssh
+

6.4 部署节点与ceph monitor做互信(可选)

+
ssh-copy-id root@x.x.x.x
+

7. 配置opensd

+

在部署节点执行:

+

7.1 生成随机密码

+

安装 python3-pbr, python3-utils, python3-pyyaml, python3-oslo-utils并随机生成密码 +

dnf install python3-pbr python3-utils python3-pyyaml python3-oslo-utils -y
+# 执行命令生成密码
+opensd-genpwd
+# 检查密码是否生成
+cat /usr/local/share/opensd/etc_examples/opensd/passwords.yml

+

7.2 配置inventory文件

+

主机信息包含:主机名、ansible_host IP、availability_zone,三者均需配置缺一不可,示例:

+
vim /usr/local/share/opensd/ansible/inventory/multinode
+# 三台控制节点主机信息
+[control]
+controller1 ansible_host=10.0.0.35 availability_zone=az01.cell01.cn-yogadev-1
+controller2 ansible_host=10.0.0.36 availability_zone=az01.cell01.cn-yogadev-1
+controller3 ansible_host=10.0.0.37 availability_zone=az01.cell01.cn-yogadev-1
+
+# 网络节点信息,与控制节点保持一致
+[network]
+controller1 ansible_host=10.0.0.35 availability_zone=az01.cell01.cn-yogadev-1
+controller2 ansible_host=10.0.0.36 availability_zone=az01.cell01.cn-yogadev-1
+controller3 ansible_host=10.0.0.37 availability_zone=az01.cell01.cn-yogadev-1
+
+# cinder-volume服务节点信息
+[storage]
+storage1 ansible_host=10.0.0.61 availability_zone=az01.cell01.cn-yogadev-1
+storage2 ansible_host=10.0.0.78 availability_zone=az01.cell01.cn-yogadev-1
+storage3 ansible_host=10.0.0.82 availability_zone=az01.cell01.cn-yogadev-1
+
+# Cell1 集群信息
+[cell-control-cell1]
+cell1 ansible_host=10.0.0.24 availability_zone=az01.cell01.cn-yogadev-1
+cell2 ansible_host=10.0.0.25 availability_zone=az01.cell01.cn-yogadev-1
+cell3 ansible_host=10.0.0.26 availability_zone=az01.cell01.cn-yogadev-1
+
+[compute-cell1]
+compute1 ansible_host=10.0.0.27 availability_zone=az01.cell01.cn-yogadev-1
+compute2 ansible_host=10.0.0.28 availability_zone=az01.cell01.cn-yogadev-1
+compute3 ansible_host=10.0.0.29 availability_zone=az01.cell01.cn-yogadev-1
+
+[cell1:children]
+cell-control-cell1
+compute-cell1
+
+# Cell2集群信息
+[cell-control-cell2]
+cell4 ansible_host=10.0.0.36 availability_zone=az03.cell02.cn-yogadev-1
+cell5 ansible_host=10.0.0.37 availability_zone=az03.cell02.cn-yogadev-1
+cell6 ansible_host=10.0.0.38 availability_zone=az03.cell02.cn-yogadev-1
+
+[compute-cell2]
+compute4 ansible_host=10.0.0.39 availability_zone=az03.cell02.cn-yogadev-1
+compute5 ansible_host=10.0.0.40 availability_zone=az03.cell02.cn-yogadev-1
+compute6 ansible_host=10.0.0.41 availability_zone=az03.cell02.cn-yogadev-1
+
+[cell2:children]
+cell-control-cell2
+compute-cell2
+
+[baremetal]
+
+[compute-cell1-ironic]
+
+
+# 填写所有cell集群的control主机组
+[nova-conductor:children]
+cell-control-cell1
+cell-control-cell2
+
+# 填写所有cell集群的compute主机组
+[nova-compute:children]
+compute-added
+compute-cell1
+compute-cell2
+
+# 下面的主机组信息不需变动,保留即可
+[compute-added]
+
+[chrony-server:children]
+control
+
+[pacemaker:children]
+control
+......
+......
+

7.3 配置全局变量

+

注:文档中带注释说明的配置项需要更改,其他参数不需要更改;若无相关配置则留空

+
vim /usr/local/share/opensd/etc_examples/opensd/globals.yml
+########################
+# Network & Base options
+########################
+network_interface: "eth0" #管理网络的网卡名称
+neutron_external_interface: "eth1" #业务网络的网卡名称
+cidr_netmask: 24 #管理网的掩码
+opensd_vip_address: 10.0.0.33  #控制节点虚拟IP地址
+cell1_vip_address: 10.0.0.34 #cell1集群的虚拟IP地址
+cell2_vip_address: 10.0.0.35 #cell2集群的虚拟IP地址
+external_fqdn: "" #用于vnc访问虚拟机的外网域名地址
+external_ntp_servers: [] #外部ntp服务器地址
+yumrepo_host:  #yum源的IP地址
+yumrepo_port:  #yum源端口号
+environment:   #yum源的类型
+upgrade_all_packages: "yes" #是否升级所有安装版的版本(执行yum upgrade),初始部署资源请设置为"yes"
+enable_miner: "no" #是否开启部署miner服务
+
+enable_chrony: "no" #是否开启部署chrony服务
+enable_pri_mariadb: "no" #是否为私有云部署mariadb
+enable_hosts_file_modify: "no" # 扩容计算节点和部署ironic服务的时候,是否将节点信息添加到`/etc/hosts`
+
+########################
+# Available zone options
+########################
+az_cephmon_compose:
+  - availability_zone:  #availability zone的名称,该名称必须与multinode主机文件内的az01的"availability_zone"值保持一致
+    ceph_mon_host:      #az01对应的一台ceph monitor主机地址,部署节点需要与该主机做ssh互信
+    reserve_vcpu_based_on_numa:  
+  - availability_zone:  #availability zone的名称,该名称必须与multinode主机文件内的az02的"availability_zone"值保持一致
+    ceph_mon_host:      #az02对应的一台ceph monitor主机地址,部署节点需要与该主机做ssh互信
+    reserve_vcpu_based_on_numa:  
+  - availability_zone:  #availability zone的名称,该名称必须与multinode主机文件内的az03的"availability_zone"值保持一致
+    ceph_mon_host:      #az03对应的一台ceph monitor主机地址,部署节点需要与该主机做ssh互信
+    reserve_vcpu_based_on_numa:
+
+# `reserve_vcpu_based_on_numa`配置为`yes` or `no`,举例说明:
+NUMA node0 CPU(s): 0-15,32-47
+NUMA node1 CPU(s): 16-31,48-63
+当reserve_vcpu_based_on_numa: "yes", 根据numa node, 平均每个node预留vcpu:
+vcpu_pin_set = 2-15,34-47,18-31,50-63
+当reserve_vcpu_based_on_numa: "no", 从第一个vcpu开始,顺序预留vcpu:
+vcpu_pin_set = 8-64
+
+#######################
+# Nova options
+#######################
+nova_reserved_host_memory_mb: 2048 #计算节点给计算服务预留的内存大小
+enable_cells: "yes" #cell节点是否单独节点部署
+support_gpu: "False" #cell节点是否有GPU服务器,如果有则为True,否则为False
+
+#######################
+# Neutron options
+#######################
+monitor_ip:
+    - 10.0.0.9   #配置监控节点
+    - 10.0.0.10
+enable_meter_full_eip: True   #配置是否允许EIP全量监控,默认为True
+enable_meter_port_forwarding: True   #配置是否允许port forwarding监控,默认为True
+enable_meter_ecs_ipv6: True   #配置是否允许ecs_ipv6监控,默认为True
+enable_meter: True    #配置是否开启监控,默认为True
+is_sdn_arch: False    #配置是否是sdn架构,默认为False
+
+# 默认使能的网络类型是vlan,vlan和vxlan两种类型只能二选一.
+enable_vxlan_network_type: False  # 默认使能的网络类型是vlan,如果使用vxlan网络,配置为True, 如果使用vlan网络,配置为False.
+enable_neutron_fwaas: False       # 环境有使用防火墙, 设置为True, 使能防护墙功能.
+# Neutron provider
+neutron_provider_networks:
+  network_types: "{{ 'vxlan' if enable_vxlan_network_type else 'vlan' }}"
+  network_vlan_ranges: "default:xxx:xxx" #部署之前规划的业务网络vlan范围
+  network_mappings: "default:br-provider"
+  network_interface: "{{ neutron_external_interface }}"
+  network_vxlan_ranges: "" #部署之前规划的业务网络vxlan范围
+
+# 如下这些配置是SND控制器的配置参数, `enable_sdn_controller`设置为True, 使能SND控制器功能.
+# 其他参数请根据部署之前的规划和SDN部署信息确定.
+enable_sdn_controller: False
+sdn_controller_ip_address:  # SDN控制器ip地址
+sdn_controller_username:    # SDN控制器的用户名
+sdn_controller_password:    # SDN控制器的用户密码
+
+#######################
+# Dimsagent options
+#######################
+enable_dimsagent: "no" # 安装镜像服务agent, 需要改为yes
+# Address and domain name for s3
+s3_address_domain_pair:
+  - host_ip:           
+    host_name:         
+
+#######################
+# Trove options
+#######################
+enable_trove: "no" #安装trove 需要改为yes
+#default network
+trove_default_neutron_networks:  #trove 的管理网络id `openstack network list|grep -w trove-mgmt|awk '{print$2}'`
+#s3 setup(如果没有s3,以下值填null)
+s3_endpoint_host_ip:   #s3的ip
+s3_endpoint_host_name: #s3的域名
+s3_endpoint_url:       #s3的url ·一般为http://s3域名
+s3_access_key:         #s3的ak 
+s3_secret_key:         #s3的sk
+
+#######################
+# Ironic options
+#######################
+enable_ironic: "no" #是否开机裸金属部署,默认不开启
+ironic_neutron_provisioning_network_uuid:
+ironic_neutron_cleaning_network_uuid: "{{ ironic_neutron_provisioning_network_uuid }}"
+ironic_dnsmasq_interface:
+ironic_dnsmasq_dhcp_range:
+ironic_tftp_server_address: "{{ hostvars[inventory_hostname]['ansible_' + ironic_dnsmasq_interface]['ipv4']['address'] }}"
+# 交换机设备相关信息
+neutron_ml2_conf_genericswitch:
+  genericswitch:xxxxxxx:
+    device_type:
+    ngs_mac_address:
+    ip:
+    username:
+    password:
+    ngs_port_default_vlan:
+
+# Package state setting
+haproxy_package_state: "present"
+mariadb_package_state: "present"
+rabbitmq_package_state: "present"
+memcached_package_state: "present"
+ceph_client_package_state: "present"
+keystone_package_state: "present"
+glance_package_state: "present"
+cinder_package_state: "present"
+nova_package_state: "present"
+neutron_package_state: "present"
+miner_package_state: "present"
+

7.4 检查所有节点ssh连接状态

+
dnf install ansible -y
+ansible all -i /usr/local/share/opensd/ansible/inventory/multinode -m ping
+
+# 执行结果显示每台主机都是"SUCCESS"即说明连接状态没问题,示例:
+compute1 | SUCCESS => {
+  "ansible_facts": {
+      "discovered_interpreter_python": "/usr/bin/python"
+  },
+  "changed": false,
+  "ping": "pong"
+}
+

8. 执行部署

+

在部署节点执行:

+

8.1 执行bootstrap

+
# 执行部署
+opensd -i /usr/local/share/opensd/ansible/inventory/multinode bootstrap --forks 50
+

8.2 重启服务器

+

注:执行重启的原因是bootstrap可能会升级内核、更改selinux配置或者存在GPU服务器;如果装机时已经是新版内核、selinux已disable或者没有GPU服务器,则不需要执行该步骤。

# 手动重启对应节点,执行命令
+init 6
+# 重启完成后,再次检查连通性
+ansible all -i /usr/local/share/opensd/ansible/inventory/multinode -m ping
+# 重启完操作系统后,再次启用yum源

+

8.3 执行部署前检查

+
opensd -i /usr/local/share/opensd/ansible/inventory/multinode prechecks --forks 50
+

8.4 执行部署

+
ln -s /usr/bin/python3 /usr/bin/python
+
+全量部署:
+opensd -i /usr/local/share/opensd/ansible/inventory/multinode deploy --forks 50
+
+单服务部署:
+opensd -i /usr/local/share/opensd/ansible/inventory/multinode deploy --forks 50 -t service_name
diff --git a/site/install/openEuler-22.03-LTS-SP3/OpenStack-wallaby/index.html b/site/install/openEuler-22.03-LTS-SP3/OpenStack-wallaby/index.html
new file mode 100644
index 0000000000000000000000000000000000000000..cb6d22ded026fb4d838a442cb706799cc87b306d
--- /dev/null
+++ b/site/install/openEuler-22.03-LTS-SP3/OpenStack-wallaby/index.html
@@ -0,0 +1,2673 @@

openEuler-22.03-LTS-SP3_Wallaby - OpenStack SIG Doc

OpenStack-Wallaby 部署指南

+ +

OpenStack 简介

+

OpenStack 是一个社区,也是一个项目。它提供了一个部署云的操作平台或工具集,为组织提供可扩展的、灵活的云计算。

+

作为一个开源的云计算管理平台,OpenStack 由nova、cinder、neutron、glance、keystone、horizon等几个主要的组件组合起来完成具体工作。OpenStack 支持几乎所有类型的云环境,项目目标是提供实施简单、可大规模扩展、丰富、标准统一的云计算管理平台。OpenStack 通过各种互补的服务提供了基础设施即服务(IaaS)的解决方案,每个服务提供 API 进行集成。

+

openEuler 22.03-LTS-SP3版本官方源已经支持 OpenStack-Wallaby 版本,用户可以配置好 yum 源后根据此文档进行 OpenStack 部署。

+

约定

+

OpenStack 支持多种形态部署,此文档支持ALL in One以及Distributed两种部署方式,按照如下方式约定:

+

ALL in One模式:

+
忽略所有可能的后缀
+

Distributed模式:

+
以 `(CTL)` 为后缀表示此条配置或者命令仅适用`控制节点`
+以 `(CPT)` 为后缀表示此条配置或者命令仅适用`计算节点`
+以 `(STG)` 为后缀表示此条配置或者命令仅适用`存储节点`
+除此之外表示此条配置或者命令同时适用`控制节点`和`计算节点`
+

注意

+

涉及到以上约定的服务如下:

+
    +
  • Cinder
  • +
  • Nova
  • +
  • Neutron
  • +
+

准备环境

+

环境配置

+
    +
  1. +

    配置 22.03 LTS 官方yum源,需要启用EPOL软件仓以支持OpenStack

    +
    yum update
    +yum install openstack-release-wallaby
    +yum clean all && yum makecache
    +

    注意:如果你的环境的YUM源没有启用EPOL,需要同时配置EPOL,如下所示。

    +
    vi /etc/yum.repos.d/openEuler.repo
    +
    +[EPOL]
    +name=EPOL
    +baseurl=http://repo.openeuler.org/openEuler-22.03-LTS-SP3/EPOL/main/$basearch/
    +enabled=1
    +gpgcheck=1
    +gpgkey=http://repo.openeuler.org/openEuler-22.03-LTS-SP3/OS/$basearch/RPM-GPG-KEY-openEuler
    +
  2. +
  3. +

    修改主机名以及映射

    +

    设置各个节点的主机名

    +
    hostnamectl set-hostname controller                                                            (CTL)
    +hostnamectl set-hostname compute                                                               (CPT)
    +

    假设controller节点的IP是10.0.0.11,compute节点的IP是10.0.0.12(如果存在的话),则于/etc/hosts新增如下:

    +
    10.0.0.11   controller
    +10.0.0.12   compute
    +
  4. +
+

安装 SQL DataBase

+
    +
  1. +

    执行如下命令,安装软件包。

    +
    yum install mariadb mariadb-server python3-PyMySQL
    +
  2. +
  3. +

    执行如下命令,创建并编辑 /etc/my.cnf.d/openstack.cnf 文件。

    +
    vim /etc/my.cnf.d/openstack.cnf
    +
    +[mysqld]
    +bind-address = 10.0.0.11
    +default-storage-engine = innodb
    +innodb_file_per_table = on
    +max_connections = 4096
    +collation-server = utf8_general_ci
    +character-set-server = utf8
    +

    注意

    +

    其中 bind-address 设置为控制节点的管理IP地址。

    +
  4. +
  5. +

    启动 DataBase 服务,并为其配置开机自启动:

    +
    systemctl enable mariadb.service
    +systemctl start mariadb.service
    +
  6. +
  7. +

    配置DataBase的默认密码(可选)

    +
    mysql_secure_installation
    +

    注意

    +

    根据提示进行即可

    +
  8. +
+

安装 RabbitMQ

+
    +
  1. +

    执行如下命令,安装软件包。

    +
    yum install rabbitmq-server
    +
  2. +
  3. +

    启动 RabbitMQ 服务,并为其配置开机自启动。

    +
    systemctl enable rabbitmq-server.service
    +systemctl start rabbitmq-server.service
    +
  4. +
  5. +

    添加 OpenStack用户。

    +
    rabbitmqctl add_user openstack RABBIT_PASS
    +

    注意

    +

    替换 RABBIT_PASS,为 OpenStack 用户设置密码

    +
  6. +
  7. +

    设置openstack用户权限,允许进行配置、写、读:

    +
    rabbitmqctl set_permissions openstack ".*" ".*" ".*"
    +
  8. +
+

安装 Memcached

+
    +
  1. +

    执行如下命令,安装依赖软件包。

    +
    yum install memcached python3-memcached
    +
  2. +
  3. +

    编辑 /etc/sysconfig/memcached 文件。

    +
    vim /etc/sysconfig/memcached
    +
    +OPTIONS="-l 127.0.0.1,::1,controller"
    +
  4. +
  5. +

    执行如下命令,启动 Memcached 服务,并为其配置开机启动。

    +
    systemctl enable memcached.service
    +systemctl start memcached.service
    +

    注意

    +

    服务启动后,可以通过命令memcached-tool controller stats确保启动正常,服务可用,其中可以将controller替换为控制节点的管理IP地址。

    +
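    例如:

    ```shell
    memcached-tool controller stats
    ```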
  6. +
+

安装 OpenStack

+

Keystone 安装

+
    +
  1. +

    创建 keystone 数据库并授权。

    +
    mysql -u root -p
    +
    +MariaDB [(none)]> CREATE DATABASE keystone;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
    +IDENTIFIED BY 'KEYSTONE_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
    +IDENTIFIED BY 'KEYSTONE_DBPASS';
    +MariaDB [(none)]> exit
    +

    注意

    +

    替换 KEYSTONE_DBPASS,为 Keystone 数据库设置密码

    +
  2. +
  3. +

    安装软件包。

    +
    yum install openstack-keystone httpd mod_wsgi
    +
  4. +
  5. +

    配置keystone相关配置

    +
    vim /etc/keystone/keystone.conf
    +
    +[database]
    +connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone
    +
    +[token]
    +provider = fernet
    +

    解释

    +

    [database]部分,配置数据库入口

    +

    [token]部分,配置token provider

    +

    注意:

    +

    替换 KEYSTONE_DBPASS 为 Keystone 数据库的密码

    +
  6. +
  7. +

    同步数据库。

    +
    su -s /bin/sh -c "keystone-manage db_sync" keystone
    +
  8. +
  9. +

    初始化Fernet密钥仓库。

    +
    keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
    +keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
    +
  10. +
  11. +

    启动服务。

    +
    keystone-manage bootstrap --bootstrap-password ADMIN_PASS \
    +--bootstrap-admin-url http://controller:5000/v3/ \
    +--bootstrap-internal-url http://controller:5000/v3/ \
    +--bootstrap-public-url http://controller:5000/v3/ \
    +--bootstrap-region-id RegionOne
    +

    注意

    +

    替换 ADMIN_PASS,为 admin 用户设置密码

    +
  12. +
  13. +

    配置Apache HTTP server

    +
    vim /etc/httpd/conf/httpd.conf
    +
    +ServerName controller
    +
    ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
    +

    解释

    +

    配置 ServerName 项引用控制节点

    +

    注意

    +

    如果 ServerName 项不存在则需要创建

    +
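    以下给出一个检查并补充 ServerName 的命令示例(假设配置文件为 /etc/httpd/conf/httpd.conf,控制节点主机名为 controller):

    ```shell
    # 若 httpd.conf 中不存在 ServerName 配置项,则追加一行
    grep -q '^ServerName' /etc/httpd/conf/httpd.conf || echo "ServerName controller" >> /etc/httpd/conf/httpd.conf
    ```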
  14. +
  15. +

    启动Apache HTTP服务。

    +
    systemctl enable httpd.service
    +systemctl start httpd.service
    +
  16. +
  17. +

    创建环境变量配置。

    +
    cat << EOF >> ~/.admin-openrc
    +export OS_PROJECT_DOMAIN_NAME=Default
    +export OS_USER_DOMAIN_NAME=Default
    +export OS_PROJECT_NAME=admin
    +export OS_USERNAME=admin
    +export OS_PASSWORD=ADMIN_PASS
    +export OS_AUTH_URL=http://controller:5000/v3
    +export OS_IDENTITY_API_VERSION=3
    +export OS_IMAGE_API_VERSION=2
    +EOF
    +

    注意

    +

    替换 ADMIN_PASS 为 admin 用户的密码

    +
  18. +
  19. +

    依次创建domain, projects, users, roles,需要先安装好python3-openstackclient:

    +
    yum install python3-openstackclient
    +

    导入环境变量

    +
    source ~/.admin-openrc
    +

    创建project service,其中 domain default 在 keystone-manage bootstrap 时已创建

    +
    openstack domain create --description "An Example Domain" example
    +
    openstack project create --domain default --description "Service Project" service
    +

    创建(non-admin)project myproject,user myuser 和 role myrole,为 myproject 项目中的 myuser 用户添加角色 myrole

    +
    openstack project create --domain default --description "Demo Project" myproject
    +openstack user create --domain default --password-prompt myuser
    +openstack role create myrole
    +openstack role add --project myproject --user myuser myrole
    +
  20. +
  21. +

    验证

    +

    取消临时环境变量OS_AUTH_URL和OS_PASSWORD:

    +
    source ~/.admin-openrc
    +unset OS_AUTH_URL OS_PASSWORD
    +

    为admin用户请求token:

    +
    openstack --os-auth-url http://controller:5000/v3 \
    +--os-project-domain-name Default --os-user-domain-name Default \
    +--os-project-name admin --os-username admin token issue
    +

    为myuser用户请求token:

    +
    openstack --os-auth-url http://controller:5000/v3 \
    +--os-project-domain-name Default --os-user-domain-name Default \
    +--os-project-name myproject --os-username myuser token issue
    +
  22. +
+

Glance 安装

+
    +
  1. +

    创建数据库、服务凭证和 API 端点

    +

    创建数据库:

    +
    mysql -u root -p
    +
    +MariaDB [(none)]> CREATE DATABASE glance;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
    +IDENTIFIED BY 'GLANCE_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
    +IDENTIFIED BY 'GLANCE_DBPASS';
    +MariaDB [(none)]> exit
    +

    注意:

    +

    替换 GLANCE_DBPASS,为 glance 数据库设置密码

    +

    创建服务凭证

    +
    source ~/.admin-openrc
    +
    +openstack user create --domain default --password-prompt glance
    +openstack role add --project service --user glance admin
    +openstack service create --name glance --description "OpenStack Image" image
    +

    创建镜像服务API端点:

    +
    openstack endpoint create --region RegionOne image public http://controller:9292
    +openstack endpoint create --region RegionOne image internal http://controller:9292
    +openstack endpoint create --region RegionOne image admin http://controller:9292
    +
  2. +
  3. +

    安装软件包

    +
    yum install openstack-glance
    +
  4. +
  5. +

    配置glance相关配置:

    +
    vim /etc/glance/glance-api.conf
    +
    +[database]
    +connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
    +
    +[keystone_authtoken]
    +www_authenticate_uri  = http://controller:5000
    +auth_url = http://controller:5000
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +project_name = service
    +username = glance
    +password = GLANCE_PASS
    +
    +[paste_deploy]
    +flavor = keystone
    +
    +[glance_store]
    +stores = file,http
    +default_store = file
    +filesystem_store_datadir = /var/lib/glance/images/
    +

    解释:

    +

    [database]部分,配置数据库入口

    +

    [keystone_authtoken] [paste_deploy]部分,配置身份认证服务入口

    +

    [glance_store]部分,配置本地文件系统存储和镜像文件的位置

    +

    注意

    +

    替换 GLANCE_DBPASS 为 glance 数据库的密码

    +

    替换 GLANCE_PASS 为 glance 用户的密码

    +
  6. +
  7. +

    同步数据库:

    +
    su -s /bin/sh -c "glance-manage db_sync" glance
    +
  8. +
  9. +

    启动服务:

    +
    systemctl enable openstack-glance-api.service
    +systemctl start openstack-glance-api.service
    +
  10. +
  11. +

    验证

    +

    下载镜像

    +
    source ~/.admin-openrc
    +
    +wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
    +

    注意

    +

    如果您使用的环境是鲲鹏架构,请下载aarch64版本的镜像;已对镜像cirros-0.5.2-aarch64-disk.img进行测试。

    +
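    aarch64 环境下的下载命令示例如下(镜像文件名与下载地址以 cirros 官方发布页为准):

    ```shell
    wget http://download.cirros-cloud.net/0.5.2/cirros-0.5.2-aarch64-disk.img
    ```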

    向Image服务上传镜像:

    +
    openstack image create --disk-format qcow2 --container-format bare \
    +                       --file cirros-0.4.0-x86_64-disk.img --public cirros
    +

    确认镜像上传并验证属性:

    +
    openstack image list
    +
  12. +
+

Placement安装

+
    +
  1. +

    创建数据库、服务凭证和 API 端点

    +

    创建数据库:

    +

    作为 root 用户访问数据库,创建 placement 数据库并授权。

    +
    mysql -u root -p
    +MariaDB [(none)]> CREATE DATABASE placement;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' \
    +IDENTIFIED BY 'PLACEMENT_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' \
    +IDENTIFIED BY 'PLACEMENT_DBPASS';
    +MariaDB [(none)]> exit
    +

    注意

    +

    替换 PLACEMENT_DBPASS,为 placement 数据库设置密码

    +
    source ~/.admin-openrc
    +

    执行如下命令,创建 placement 服务凭证、创建 placement 用户以及添加‘admin’角色到用户‘placement’。

    +

    创建Placement API服务

    +
    openstack user create --domain default --password-prompt placement
    +openstack role add --project service --user placement admin
    +openstack service create --name placement --description "Placement API" placement
    +

    创建placement服务API端点:

    +
    openstack endpoint create --region RegionOne placement public http://controller:8778
    +openstack endpoint create --region RegionOne placement internal http://controller:8778
    +openstack endpoint create --region RegionOne placement admin http://controller:8778
    +
  2. +
  3. +

    安装和配置

    +

    安装软件包:

    +
    yum install openstack-placement-api
    +

    配置placement:

    +

    编辑 /etc/placement/placement.conf 文件:

    +

    在[placement_database]部分,配置数据库入口

    +

    在[api] [keystone_authtoken]部分,配置身份认证服务入口

    +
    # vim /etc/placement/placement.conf
    +[placement_database]
    +# ...
    +connection = mysql+pymysql://placement:PLACEMENT_DBPASS@controller/placement
    +[api]
    +# ...
    +auth_strategy = keystone
    +[keystone_authtoken]
    +# ...
    +auth_url = http://controller:5000/v3
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +project_name = service
    +username = placement
    +password = PLACEMENT_PASS
    +

    其中,替换 PLACEMENT_DBPASS 为 placement 数据库的密码,替换 PLACEMENT_PASS 为 placement 用户的密码。

    +

    同步数据库:

    +
    su -s /bin/sh -c "placement-manage db sync" placement
    +

    启动httpd服务:

    +
    systemctl restart httpd
    +
  4. +
  5. +

    验证

    +

    执行如下命令,执行状态检查:

    +
    source ~/.admin-openrc
    +placement-status upgrade check
    +

    安装osc-placement,列出可用的资源类别及特性:

    +
    yum install python3-osc-placement
    +openstack --os-placement-api-version 1.2 resource class list --sort-column name
    +openstack --os-placement-api-version 1.6 trait list --sort-column name
    +
  6. +
+

Nova 安装

+
    +
  1. +

    创建数据库、服务凭证和 API 端点

    +

    创建数据库:

    +
    mysql -u root -p                                                                               (CTL)
    +
    +MariaDB [(none)]> CREATE DATABASE nova_api;
    +MariaDB [(none)]> CREATE DATABASE nova;
    +MariaDB [(none)]> CREATE DATABASE nova_cell0;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> exit
    +

    注意

    +

    替换NOVA_DBPASS,为nova数据库设置密码

    +
    source ~/.admin-openrc                                                                         (CTL)
    +

    创建nova服务凭证:

    +
    openstack user create --domain default --password-prompt nova                                  (CTL)
    +openstack role add --project service --user nova admin                                         (CTL)
    +openstack service create --name nova --description "OpenStack Compute" compute                 (CTL)
    +

    创建nova API端点:

    +
    openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1        (CTL)
    +openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1      (CTL)
    +openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1         (CTL)
    +
  2. +
  3. +

    安装软件包

    +
    yum install openstack-nova-api openstack-nova-conductor \                                      (CTL)
    +openstack-nova-novncproxy openstack-nova-scheduler 
    +
    +yum install openstack-nova-compute                                                             (CPT)
    +

    注意

    +

    如果为arm64架构,还需要执行以下命令

    +
    yum install edk2-aarch64                                                                       (CPT)
    +
  4. +
  5. +

    配置nova相关配置

    +
    vim /etc/nova/nova.conf
    +
    +[DEFAULT]
    +enabled_apis = osapi_compute,metadata
    +transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
    +my_ip = 10.0.0.11
    +use_neutron = true
    +firewall_driver = nova.virt.firewall.NoopFirewallDriver
    +compute_driver=libvirt.LibvirtDriver                                                           (CPT)
    +instances_path = /var/lib/nova/instances/                                                      (CPT)
    +lock_path = /var/lib/nova/tmp                                                                  (CPT)
    +
    +[api_database]
    +connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api                              (CTL)
    +
    +[database]
    +connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova                                  (CTL)
    +
    +[api]
    +auth_strategy = keystone
    +
    +[keystone_authtoken]
    +www_authenticate_uri = http://controller:5000/
    +auth_url = http://controller:5000/
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +project_name = service
    +username = nova
    +password = NOVA_PASS
    +
    +[vnc]
    +enabled = true
    +server_listen = $my_ip
    +server_proxyclient_address = $my_ip
    +novncproxy_base_url = http://controller:6080/vnc_auto.html                                     (CPT)
    +
    +[libvirt]
    +virt_type = qemu                                                                               (CPT)
    +cpu_mode = custom                                                                              (CPT)
    +cpu_model = cortex-a72                                                                         (CPT)
    +
    +[glance]
    +api_servers = http://controller:9292
    +
    +[oslo_concurrency]
    +lock_path = /var/lib/nova/tmp                                                                  (CTL)
    +
    +[placement]
    +region_name = RegionOne
    +project_domain_name = Default
    +project_name = service
    +auth_type = password
    +user_domain_name = Default
    +auth_url = http://controller:5000/v3
    +username = placement
    +password = PLACEMENT_PASS
    +
    +[neutron]
    +auth_url = http://controller:5000
    +auth_type = password
    +project_domain_name = default
    +user_domain_name = default
    +region_name = RegionOne
    +project_name = service
    +username = neutron
    +password = NEUTRON_PASS
    +service_metadata_proxy = true                                                                  (CTL)
    +metadata_proxy_shared_secret = METADATA_SECRET                                                 (CTL)
    +

    解释

    +

    [default]部分,启用计算和元数据的API,配置RabbitMQ消息队列入口,配置my_ip,启用网络服务neutron;

    +

    [api_database] [database]部分,配置数据库入口;

    +

    [api] [keystone_authtoken]部分,配置身份认证服务入口;

    +

    [vnc]部分,启用并配置远程控制台入口;

    +

    [glance]部分,配置镜像服务API的地址;

    +

    [oslo_concurrency]部分,配置lock path;

    +

    [placement]部分,配置placement服务的入口。

    +

    注意

    +

    替换 RABBIT_PASS 为 RabbitMQ 中 openstack 账户的密码;

    +

    配置 my_ip 为控制节点的管理IP地址;

    +

    替换 NOVA_DBPASS 为nova数据库的密码;

    +

    替换 NOVA_PASS 为nova用户的密码;

    +

    替换 PLACEMENT_PASS 为placement用户的密码;

    +

    替换 NEUTRON_PASS 为neutron用户的密码;

    +

    替换METADATA_SECRET为合适的元数据代理secret。

    +

    额外

    +

    确定是否支持虚拟机硬件加速(x86架构):

    +
    egrep -c '(vmx|svm)' /proc/cpuinfo                                                             (CPT)
    +

    如果返回值为0则不支持硬件加速,需要配置libvirt使用QEMU而不是KVM:

    +
    vim /etc/nova/nova.conf                                                                        (CPT)
    +
    +[libvirt]
    +virt_type = qemu
    +

    如果返回值为1或更大的值,则支持硬件加速,不需要进行额外的配置

    +

    注意

    +

    如果为arm64架构,还需要执行以下命令

    +
    vim /etc/libvirt/qemu.conf
    +
    +nvram = ["/usr/share/AAVMF/AAVMF_CODE.fd: \
    +         /usr/share/AAVMF/AAVMF_VARS.fd", \
    +         "/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw: \
    +         /usr/share/edk2/aarch64/vars-template-pflash.raw"]
    +
    +vim /etc/qemu/firmware/edk2-aarch64.json
    +
    +{
    +    "description": "UEFI firmware for ARM64 virtual machines",
    +    "interface-types": [
    +        "uefi"
    +    ],
    +    "mapping": {
    +        "device": "flash",
    +        "executable": {
    +            "filename": "/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw",
    +            "format": "raw"
    +        },
    +        "nvram-template": {
    +            "filename": "/usr/share/edk2/aarch64/vars-template-pflash.raw",
    +            "format": "raw"
    +        }
    +    },
    +    "targets": [
    +        {
    +            "architecture": "aarch64",
    +            "machines": [
    +                "virt-*"
    +            ]
    +        }
    +    ],
    +    "features": [
    +
    +    ],
    +    "tags": [
    +
    +    ]
    +}
    +
    +(CPT)
    +
  6. +
  7. +

    同步数据库

    +

    同步nova-api数据库:

    +
    su -s /bin/sh -c "nova-manage api_db sync" nova                                                (CTL)
    +

    注册cell0数据库:

    +
    su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova                                          (CTL)
    +

    创建cell1 cell:

    +
    su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova                 (CTL)
    +

    同步nova数据库:

    +
    su -s /bin/sh -c "nova-manage db sync" nova                                                    (CTL)
    +

    验证cell0和cell1注册正确:

    +
    su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova                                         (CTL)
    +

    添加计算节点到openstack集群

    +
    su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova                           (CPT)
    +
  8. +
  9. +

    启动服务

    +
    systemctl enable \                                                                             (CTL)
    +openstack-nova-api.service \
    +openstack-nova-scheduler.service \
    +openstack-nova-conductor.service \
    +openstack-nova-novncproxy.service
    +
    +systemctl start \                                                                              (CTL)
    +openstack-nova-api.service \
    +openstack-nova-scheduler.service \
    +openstack-nova-conductor.service \
    +openstack-nova-novncproxy.service
    +
    systemctl enable libvirtd.service openstack-nova-compute.service                               (CPT)
    +systemctl start libvirtd.service openstack-nova-compute.service                                (CPT)
    +
  10. +
  11. +

    验证

    +
    source ~/.admin-openrc                                                                         (CTL)
    +

    列出服务组件,验证每个流程都成功启动和注册:

    +
    openstack compute service list                                                                 (CTL)
    +

    列出身份服务中的API端点,验证与身份服务的连接:

    +
    openstack catalog list                                                                         (CTL)
    +

    列出镜像服务中的镜像,验证与镜像服务的连接:

    +
    openstack image list                                                                           (CTL)
    +

    检查cells是否运作成功,以及其他必要条件是否已具备。

    +
    nova-status upgrade check                                                                      (CTL)
    +
  12. +
+

Neutron 安装

+
    +
  1. +

    创建数据库、服务凭证和 API 端点

    +

    创建数据库:

    +
    mysql -u root -p                                                                               (CTL)
    +
    +MariaDB [(none)]> CREATE DATABASE neutron;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
    +IDENTIFIED BY 'NEUTRON_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
    +IDENTIFIED BY 'NEUTRON_DBPASS';
    +MariaDB [(none)]> exit
    +

    注意

    +

    替换 NEUTRON_DBPASS 为 neutron 数据库设置密码。

    +
    source ~/.admin-openrc                                                                         (CTL)
    +

    创建neutron服务凭证

    +
    openstack user create --domain default --password-prompt neutron                               (CTL)
    +openstack role add --project service --user neutron admin                                      (CTL)
    +openstack service create --name neutron --description "OpenStack Networking" network           (CTL)
    +

    创建Neutron服务API端点:

    +
    openstack endpoint create --region RegionOne network public http://controller:9696             (CTL)
    +openstack endpoint create --region RegionOne network internal http://controller:9696           (CTL)
    +openstack endpoint create --region RegionOne network admin http://controller:9696              (CTL)
    +
  2. +
  3. +

    安装软件包:

    +
    yum install openstack-neutron openstack-neutron-linuxbridge ebtables ipset \                   (CTL)
    +openstack-neutron-ml2
    +
    yum install openstack-neutron-linuxbridge ebtables ipset                                       (CPT)
    +
  4. +
  5. +

    配置neutron相关配置:

    +

    配置主体配置

    +
    vim /etc/neutron/neutron.conf
    +
    +[database]
    +connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron                         (CTL)
    +
    +[DEFAULT]
    +core_plugin = ml2                                                                              (CTL)
    +service_plugins = router                                                                       (CTL)
    +allow_overlapping_ips = true                                                                   (CTL)
    +transport_url = rabbit://openstack:RABBIT_PASS@controller
    +auth_strategy = keystone
    +notify_nova_on_port_status_changes = true                                                      (CTL)
    +notify_nova_on_port_data_changes = true                                                        (CTL)
    +api_workers = 3                                                                                (CTL)
    +
    +[keystone_authtoken]
    +www_authenticate_uri = http://controller:5000
    +auth_url = http://controller:5000
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +project_name = service
    +username = neutron
    +password = NEUTRON_PASS
    +
    +[nova]
    +auth_url = http://controller:5000                                                              (CTL)
    +auth_type = password                                                                           (CTL)
    +project_domain_name = Default                                                                  (CTL)
    +user_domain_name = Default                                                                     (CTL)
    +region_name = RegionOne                                                                        (CTL)
    +project_name = service                                                                         (CTL)
    +username = nova                                                                                (CTL)
    +password = NOVA_PASS                                                                           (CTL)
    +
    +[oslo_concurrency]
    +lock_path = /var/lib/neutron/tmp
    +

    解释

    +

    [database]部分,配置数据库入口;

    +

    [default]部分,启用ml2插件和router插件,允许ip地址重叠,配置RabbitMQ消息队列入口;

    +

    [default] [keystone]部分,配置身份认证服务入口;

    +

    [default] [nova]部分,配置网络来通知计算网络拓扑的变化;

    +

    [oslo_concurrency]部分,配置lock path。

    +

    注意

    +

    替换NEUTRON_DBPASS为 neutron 数据库的密码;

    +

    替换RABBIT_PASS为 RabbitMQ中openstack 账户的密码;

    +

    替换NEUTRON_PASS为 neutron 用户的密码;

    +

    替换NOVA_PASS为 nova 用户的密码。

    +

    配置ML2插件:

    +
    vim /etc/neutron/plugins/ml2/ml2_conf.ini
    +
    +[ml2]
    +type_drivers = flat,vlan,vxlan
    +tenant_network_types = vxlan
    +mechanism_drivers = linuxbridge,l2population
    +extension_drivers = port_security
    +
    +[ml2_type_flat]
    +flat_networks = provider
    +
    +[ml2_type_vxlan]
    +vni_ranges = 1:1000
    +
    +[securitygroup]
    +enable_ipset = true
    +

    创建/etc/neutron/plugin.ini的符号链接

    +
    ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
    +

    注意

    +

    [ml2]部分,启用 flat、vlan、vxlan 网络,启用 linuxbridge 及 l2population 机制,启用端口安全扩展驱动;

    +

    [ml2_type_flat]部分,配置 flat 网络为 provider 虚拟网络;

    +

    [ml2_type_vxlan]部分,配置 VXLAN 网络标识符范围;

    +

    [securitygroup]部分,配置允许 ipset。

    +

    补充

    +

    l2 的具体配置可以根据用户需求自行修改,本文使用的是provider network + linuxbridge

    +

    配置 Linux bridge 代理:

    +
    vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
    +
    +[linux_bridge]
    +physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME
    +
    +[vxlan]
    +enable_vxlan = true
    +local_ip = OVERLAY_INTERFACE_IP_ADDRESS
    +l2_population = true
    +
    +[securitygroup]
    +enable_security_group = true
    +firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
    +

    解释

    +

    [linux_bridge]部分,映射 provider 虚拟网络到物理网络接口;

    +

    [vxlan]部分,启用 vxlan 覆盖网络,配置处理覆盖网络的物理网络接口 IP 地址,启用 layer-2 population;

    +

    [securitygroup]部分,允许安全组,配置 linux bridge iptables 防火墙驱动。

    +

    注意

    +

    替换PROVIDER_INTERFACE_NAME为物理网络接口;

    +

    替换OVERLAY_INTERFACE_IP_ADDRESS为控制节点的管理IP地址。

    +

    配置Layer-3代理:

    +
    vim /etc/neutron/l3_agent.ini                                                                  (CTL)
    +
    +[DEFAULT]
    +interface_driver = linuxbridge
    +

    解释

    +

    在[default]部分,配置接口驱动为linuxbridge

    +

    配置DHCP代理:

    +
    vim /etc/neutron/dhcp_agent.ini                                                                (CTL)
    +
    +[DEFAULT]
    +interface_driver = linuxbridge
    +dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
    +enable_isolated_metadata = true
    +

    解释

    +

    [default]部分,配置linuxbridge接口驱动、Dnsmasq DHCP驱动,启用隔离的元数据。

    +

    配置metadata代理:

    +
    vim /etc/neutron/metadata_agent.ini                                                            (CTL)
    +
    +[DEFAULT]
    +nova_metadata_host = controller
    +metadata_proxy_shared_secret = METADATA_SECRET
    +

    解释

    +

    [default]部分,配置元数据主机和shared secret。

    +

    注意

    +

    替换METADATA_SECRET为合适的元数据代理secret。

    +
  6. +
  7. +

    配置nova相关配置

    +
    vim /etc/nova/nova.conf
    +
    +[neutron]
    +auth_url = http://controller:5000
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +region_name = RegionOne
    +project_name = service
    +username = neutron
    +password = NEUTRON_PASS
    +service_metadata_proxy = true                                                                  (CTL)
    +metadata_proxy_shared_secret = METADATA_SECRET                                                 (CTL)
    +

    解释

    +

    [neutron]部分,配置访问参数,启用元数据代理,配置secret。

    +

    注意

    +

    替换NEUTRON_PASS为 neutron 用户的密码;

    +

    替换METADATA_SECRET为合适的元数据代理secret。

    +
  8. +
  9. +

    同步数据库:

    +
    su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
    +--config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
    +
  10. +
  11. +

    重启计算API服务:

    +
    systemctl restart openstack-nova-api.service
    +
  12. +
  13. +

    启动网络服务

    +
    systemctl enable neutron-server.service neutron-linuxbridge-agent.service \                    (CTL)
    +neutron-dhcp-agent.service neutron-metadata-agent.service 
    +systemctl enable neutron-l3-agent.service
    +systemctl restart openstack-nova-api.service neutron-server.service \                          (CTL)
    +neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
    +neutron-metadata-agent.service neutron-l3-agent.service
    +
    +systemctl enable neutron-linuxbridge-agent.service                                             (CPT)
    +systemctl restart neutron-linuxbridge-agent.service openstack-nova-compute.service             (CPT)
    +
  14. +
  15. +

    验证

    +

    验证 neutron 代理启动成功:

    +
    openstack network agent list
    +
  16. +
+

Cinder 安装

+
    +
  1. +

    创建数据库、服务凭证和 API 端点

    +

    创建数据库:

    +
    mysql -u root -p
    +
    +MariaDB [(none)]> CREATE DATABASE cinder;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \
    +IDENTIFIED BY 'CINDER_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \
    +IDENTIFIED BY 'CINDER_DBPASS';
    +MariaDB [(none)]> exit
    +

    注意

    +

    替换 CINDER_DBPASS 为cinder数据库设置密码。

    +
    source ~/.admin-openrc
    +

    创建cinder服务凭证:

    +
    openstack user create --domain default --password-prompt cinder
    +openstack role add --project service --user cinder admin
    +openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
    +openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
    +

    创建块存储服务API端点:

    +
    openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(project_id\)s
    +openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(project_id\)s
    +openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(project_id\)s
    +openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s
    +openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s
    +openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s
    +
  2. +
  3. +

    安装软件包:

    +
    yum install openstack-cinder-api openstack-cinder-scheduler                                    (CTL)
    +
    yum install lvm2 device-mapper-persistent-data scsi-target-utils rpcbind nfs-utils \           (STG)
    +            openstack-cinder-volume openstack-cinder-backup
    +
  4. +
  5. +

    准备存储设备,以下仅为示例:

    +
    pvcreate /dev/vdb
    +vgcreate cinder-volumes /dev/vdb
    +
    +vim /etc/lvm/lvm.conf
    +
    +
    +devices {
    +...
    +filter = [ "a/vdb/", "r/.*/"]
    +

    解释

    +

    在devices部分,添加过滤器以接受 /dev/vdb 设备并拒绝其他设备。

    +
  6. +
  7. +

    准备NFS

    +
    mkdir -p /root/cinder/backup
    +
    +cat << EOF >> /etc/exports
    +/root/cinder/backup 192.168.1.0/24(rw,sync,no_root_squash,no_all_squash)
    +EOF
    +
    +
  8. +
  9. +

    配置cinder相关配置:

    +
    vim /etc/cinder/cinder.conf
    +
    +[DEFAULT]
    +transport_url = rabbit://openstack:RABBIT_PASS@controller
    +auth_strategy = keystone
    +my_ip = 10.0.0.11
    +enabled_backends = lvm                                                                         (STG)
    +backup_driver=cinder.backup.drivers.nfs.NFSBackupDriver                                        (STG)
    +backup_share=HOST:PATH                                                                         (STG)
    +
    +[database]
    +connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder
    +
    +[keystone_authtoken]
    +www_authenticate_uri = http://controller:5000
    +auth_url = http://controller:5000
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +project_name = service
    +username = cinder
    +password = CINDER_PASS
    +
    +[oslo_concurrency]
    +lock_path = /var/lib/cinder/tmp
    +
    +[lvm]
    +volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver                                      (STG)
    +volume_group = cinder-volumes                                                                  (STG)
    +iscsi_protocol = iscsi                                                                         (STG)
    +iscsi_helper = tgtadm                                                                          (STG)
    +

    解释

    +

    [database]部分,配置数据库入口;

    +

    [DEFAULT]部分,配置RabbitMQ消息队列入口,配置my_ip;

    +

    [DEFAULT] [keystone_authtoken]部分,配置身份认证服务入口;

    +

    [oslo_concurrency]部分,配置lock path。

    +

    注意

    +

    替换CINDER_DBPASS为 cinder 数据库的密码;

    +

    替换RABBIT_PASS为 RabbitMQ 中 openstack 账户的密码;

    +

    配置my_ip为控制节点的管理 IP 地址;

    +

    替换CINDER_PASS为 cinder 用户的密码;

    +

    替换HOST:PATH为 NFS 的HOSTIP和共享路径;

    +
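    backup_share 的取值形式为 HOST:PATH,以下仅为示例(假设 NFS 服务端 IP 为 10.0.0.11,共享路径为前文创建的 /root/cinder/backup):

    ```
    backup_share = 10.0.0.11:/root/cinder/backup
    ```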
  10. +
  11. +

    同步数据库:

    +
    su -s /bin/sh -c "cinder-manage db sync" cinder                                                (CTL)
    +
  12. +
  13. +

    配置nova:

    +
    vim /etc/nova/nova.conf                                                                        (CTL)
    +
    +[cinder]
    +os_region_name = RegionOne
    +
  14. +
  15. +

    重启计算API服务

    +
    systemctl restart openstack-nova-api.service
    +
  16. +
  17. +

    启动cinder服务

    +
    systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service               (CTL)
    +systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service                (CTL)
    +
    systemctl enable rpcbind.service nfs-server.service tgtd.service iscsid.service \              (STG)
    +                 openstack-cinder-volume.service \
    +                 openstack-cinder-backup.service
    +systemctl start rpcbind.service nfs-server.service tgtd.service iscsid.service \               (STG)
    +                openstack-cinder-volume.service \
    +                openstack-cinder-backup.service
    +

    注意

    +

    当cinder使用tgtadm的方式挂卷的时候,要修改/etc/tgt/tgtd.conf,内容如下,保证tgtd可以发现cinder-volume的iscsi target。

    +
    include /var/lib/cinder/volumes/*
    +
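    例如,可以通过如下命令追加该配置并重启 tgtd 服务(仅为示例):

    ```shell
    echo "include /var/lib/cinder/volumes/*" >> /etc/tgt/tgtd.conf
    systemctl restart tgtd.service
    ```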
  18. +
  19. +

    验证

    +
    source ~/.admin-openrc
    +openstack volume service list
    +
  20. +
+

horizon 安装

+
    +
  1. +

    安装软件包

    +
    yum install openstack-dashboard
    +
  2. +
  3. +

    修改文件

    +

    修改变量

    +
    vim /etc/openstack-dashboard/local_settings
    +
    +OPENSTACK_HOST = "controller"
    +ALLOWED_HOSTS = ['*', ]
    +
    +SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
    +
    +CACHES = {
    +'default': {
    +     'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
    +     'LOCATION': 'controller:11211',
    +    }
    +}
    +
    +OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
    +OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
    +OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
    +OPENSTACK_KEYSTONE_DEFAULT_ROLE = "member"
    +WEBROOT = '/dashboard'
    +POLICY_FILES_PATH = "/etc/openstack-dashboard"
    +
    +OPENSTACK_API_VERSIONS = {
    +    "identity": 3,
    +    "image": 2,
    +    "volume": 3,
    +}
    +
  4. +
  5. +

    重启 httpd 服务

    +
    systemctl restart httpd.service memcached.service
    +
  6. +
  7. +

    验证

    +

    打开浏览器,输入网址http://HOSTIP/dashboard/,登录 horizon。

    +

    注意

    +

    替换HOSTIP为控制节点管理平面IP地址

    +
  8. +
+

Tempest 安装

+

Tempest是OpenStack的集成测试服务,如果用户需要全面自动化测试已安装的OpenStack环境的功能,则推荐使用该组件。否则,可以不用安装。

+
    +
  1. +

    安装Tempest

    +
    yum install openstack-tempest
    +
  2. +
  3. +

    初始化目录

    +
    tempest init mytest
    +
  4. +
  5. +

    修改配置文件。

    +
    cd mytest
    +vi etc/tempest.conf
    +

    tempest.conf中需要配置当前OpenStack环境的信息,具体内容可以参考官方示例

    +
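    下面给出一个最小化的 tempest.conf 配置示意(仅为示例,各选项取值需按实际环境填写,其中镜像 ID 与规格 ID 为占位符):

    ```
    [auth]
    admin_username = admin
    admin_password = ADMIN_PASS
    admin_project_name = admin
    admin_domain_name = Default

    [identity]
    uri_v3 = http://controller:5000/v3
    region = RegionOne

    [compute]
    image_ref = <IMAGE_UUID>
    flavor_ref = <FLAVOR_ID>
    ```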
  6. +
  7. +

    执行测试

    +
    tempest run
    +
  8. +
  9. +

    安装tempest扩展(可选)

    +

    OpenStack各个服务本身也提供了一些tempest测试包,用户可以安装这些包来丰富tempest的测试内容。在Wallaby中,我们提供了Cinder、Glance、Keystone、Ironic、Trove的扩展测试,用户可以执行如下命令进行安装使用:

    +

    yum install python3-cinder-tempest-plugin python3-glance-tempest-plugin python3-ironic-tempest-plugin python3-keystone-tempest-plugin python3-trove-tempest-plugin

    +
  10. +
+

Ironic 安装

+

Ironic是OpenStack的裸金属服务,如果用户需要进行裸机部署则推荐使用该组件。否则,可以不用安装。

+
    +
  1. 设置数据库
  2. +
+

裸金属服务在数据库中存储信息,创建一个ironic用户可以访问的ironic数据库,替换IRONIC_DBPASSWORD为合适的密码

+
mysql -u root -p
+
+MariaDB [(none)]> CREATE DATABASE ironic CHARACTER SET utf8;
+MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'localhost' \
+IDENTIFIED BY 'IRONIC_DBPASSWORD';
+MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'%' \
+IDENTIFIED BY 'IRONIC_DBPASSWORD';
+
    +
  1. 创建服务用户认证
  2. +
+

1、创建Bare Metal服务用户

+
openstack user create --password IRONIC_PASSWORD \
+                      --email ironic@example.com ironic
+openstack role add --project service --user ironic admin
+openstack service create --name ironic \
+                         --description "Ironic baremetal provisioning service" baremetal
+
+openstack service create --name ironic-inspector --description     "Ironic inspector baremetal provisioning service" baremetal-introspection
+openstack user create --password IRONIC_INSPECTOR_PASSWORD --email ironic_inspector@example.com ironic_inspector
+openstack role add --project service --user ironic-inspector admin
+

2、创建Bare Metal服务访问入口

+
openstack endpoint create --region RegionOne baremetal admin http://$IRONIC_NODE:6385
+openstack endpoint create --region RegionOne baremetal public http://$IRONIC_NODE:6385
+openstack endpoint create --region RegionOne baremetal internal http://$IRONIC_NODE:6385
+openstack endpoint create --region RegionOne baremetal-introspection internal http://172.20.19.13:5050/v1
+openstack endpoint create --region RegionOne baremetal-introspection public http://172.20.19.13:5050/v1
+openstack endpoint create --region RegionOne baremetal-introspection admin http://172.20.19.13:5050/v1
+
    +
  1. 配置ironic-api服务
  2. +
+

配置文件路径/etc/ironic/ironic.conf

+

1、通过connection选项配置数据库的位置,如下所示,替换IRONIC_DBPASSWORD为ironic用户的密码,替换DB_IP为DB服务器所在的IP地址:

+
[database]
+
+# The SQLAlchemy connection string used to connect to the
+# database (string value)
+
+connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic
+

2、通过以下选项配置ironic-api服务使用RabbitMQ消息代理,替换RPC_*为RabbitMQ的详细地址和凭证

+
[DEFAULT]
+
+# A URL representing the messaging driver to use and its full
+# configuration. (string value)
+
+transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
+

用户也可自行使用json-rpc方式替换RabbitMQ,配置示意见下。

+
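若使用json-rpc,可参考如下配置示意(仅为示例,选项名称及默认值以所用ironic版本的官方文档为准):

```
[DEFAULT]
# 使用 json-rpc 代替消息队列进行 RPC 通信
rpc_transport = json-rpc

[json_rpc]
# json-rpc 服务监听地址与端口(示例值)
host_ip = 0.0.0.0
port = 8089
```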

3、配置ironic-api服务使用身份认证服务的凭证,替换PUBLIC_IDENTITY_IP为身份认证服务器的公共IP,替换PRIVATE_IDENTITY_IP为身份认证服务器的私有IP,替换IRONIC_PASSWORD为身份认证服务中ironic用户的密码:

+
[DEFAULT]
+
+# Authentication strategy used by ironic-api: one of
+# "keystone" or "noauth". "noauth" should not be used in a
+# production environment because all authentication will be
+# disabled. (string value)
+
+auth_strategy=keystone
+host = controller
+memcache_servers = controller:11211
+enabled_network_interfaces = flat,noop,neutron
+default_network_interface = noop
+transport_url = rabbit://openstack:RABBITPASSWD@controller:5672/
+enabled_hardware_types = ipmi
+enabled_boot_interfaces = pxe
+enabled_deploy_interfaces = direct
+default_deploy_interface = direct
+enabled_inspect_interfaces = inspector
+enabled_management_interfaces = ipmitool
+enabled_power_interfaces = ipmitool
+enabled_rescue_interfaces = no-rescue,agent
+isolinux_bin = /usr/share/syslinux/isolinux.bin
+logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s
+
+[keystone_authtoken]
+# Authentication type to load (string value)
+auth_type=password
+# Complete public Identity API endpoint (string value)
+www_authenticate_uri=http://PUBLIC_IDENTITY_IP:5000
+# Complete admin Identity API endpoint. (string value)
+auth_url=http://PRIVATE_IDENTITY_IP:5000
+# Service username. (string value)
+username=ironic
+# Service account password. (string value)
+password=IRONIC_PASSWORD
+# Service tenant name. (string value)
+project_name=service
+# Domain name containing project (string value)
+project_domain_name=Default
+# User's domain name (string value)
+user_domain_name=Default
+
+[agent]
+deploy_logs_collect = always
+deploy_logs_local_path = /var/log/ironic/deploy
+deploy_logs_storage_backend = local
+image_download_source = http
+stream_raw_images = false
+force_raw_images = false
+verify_ca = False
+
+[oslo_concurrency]
+
+[oslo_messaging_notifications]
+transport_url = rabbit://openstack:123456@172.20.19.25:5672/
+topics = notifications
+driver = messagingv2
+
+[oslo_messaging_rabbit]
+amqp_durable_queues = True
+rabbit_ha_queues = True
+
+[pxe]
+ipxe_enabled = false
+pxe_append_params = nofb nomodeset vga=normal coreos.autologin ipa-insecure=1
+image_cache_size = 204800
+tftp_root=/var/lib/tftpboot/cephfs/
+tftp_master_path=/var/lib/tftpboot/cephfs/master_images
+
+[dhcp]
+dhcp_provider = none
+

4、创建裸金属服务数据库表

+
ironic-dbsync --config-file /etc/ironic/ironic.conf create_schema
+

5、重启ironic-api服务

+
sudo systemctl restart openstack-ironic-api
+
    +
  1. 配置ironic-conductor服务
  2. +
+

1、替换HOST_IP为conductor host的IP

+
[DEFAULT]
+
+# IP address of this host. If unset, will determine the IP
+# programmatically. If unable to do so, will use "127.0.0.1".
+# (string value)
+
+my_ip=HOST_IP
+

2、配置数据库的位置,ironic-conductor应该使用和ironic-api相同的配置。替换IRONIC_DBPASSWORD为ironic用户的密码,替换DB_IP为DB服务器所在的IP地址:

+
[database]
+
+# The SQLAlchemy connection string to use to connect to the
+# database. (string value)
+
+connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic
+

3、通过以下选项配置ironic-api服务使用RabbitMQ消息代理,ironic-conductor应该使用和ironic-api相同的配置,替换RPC_*为RabbitMQ的详细地址和凭证

+
[DEFAULT]
+
+# A URL representing the messaging driver to use and its full
+# configuration. (string value)
+
+transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
+

用户也可自行使用json-rpc方式替换rabbitmq

+

4、配置凭证访问其他OpenStack服务

+

为了与其他OpenStack服务进行通信,裸金属服务在请求其他服务时需要使用服务用户与OpenStack Identity服务进行认证。这些用户的凭据必须在与相应服务相关的每个配置文件中进行配置。

+
[neutron] - 访问OpenStack网络服务
+[glance] - 访问OpenStack镜像服务
+[swift] - 访问OpenStack对象存储服务
+[cinder] - 访问OpenStack块存储服务
+[inspector] - 访问OpenStack裸金属introspection服务
+[service_catalog] - 一个特殊项用于保存裸金属服务使用的凭证,该凭证用于发现注册在OpenStack身份认证服务目录中的自己的API URL端点
+

简单起见,可以对所有服务使用同一个服务用户。为了向后兼容,该用户应该和ironic-api服务的[keystone_authtoken]所配置的为同一个用户。但这不是必须的,也可以为每个服务创建并配置不同的服务用户。

+

在下面的示例中,用户访问OpenStack网络服务的身份验证信息配置为:

+
网络服务部署在名为RegionOne的身份认证服务域中,仅在服务目录中注册公共端点接口
+
+请求时使用特定的CA SSL证书进行HTTPS连接
+
+与ironic-api服务配置相同的服务用户
+
+动态密码认证插件基于其他选项发现合适的身份认证服务API版本
+
[neutron]
+
+# Authentication type to load (string value)
+auth_type = password
+# Authentication URL (string value)
+auth_url=https://IDENTITY_IP:5000/
+# Username (string value)
+username=ironic
+# User's password (string value)
+password=IRONIC_PASSWORD
+# Project name to scope to (string value)
+project_name=service
+# Domain ID containing project (string value)
+project_domain_id=default
+# User's domain id (string value)
+user_domain_id=default
+# PEM encoded Certificate Authority to use when verifying
+# HTTPs connections. (string value)
+cafile=/opt/stack/data/ca-bundle.pem
+# The default region_name for endpoint URL discovery. (string
+# value)
+region_name = RegionOne
+# List of interfaces, in order of preference, for endpoint
+# URL. (list value)
+valid_interfaces=public
+

默认情况下,为了与其他服务进行通信,裸金属服务会尝试通过身份认证服务的服务目录发现该服务合适的端点。如果希望对一个特定服务使用一个不同的端点,则在裸金属服务的配置文件中通过endpoint_override选项进行指定:

+
[neutron]
+# ...
+endpoint_override = <NEUTRON_API_ADDRESS>
+

5、配置允许的驱动程序和硬件类型

+

通过设置enabled_hardware_types设置ironic-conductor服务允许使用的硬件类型:

+
[DEFAULT]
+enabled_hardware_types = ipmi
+

配置硬件接口:

+
enabled_boot_interfaces = pxe
+enabled_deploy_interfaces = direct,iscsi
+enabled_inspect_interfaces = inspector
+enabled_management_interfaces = ipmitool
+enabled_power_interfaces = ipmitool
+

配置接口默认值:

+
[DEFAULT]
+default_deploy_interface = direct
+default_network_interface = neutron
+

如果启用了任何使用Direct deploy的驱动,必须安装和配置镜像服务的Swift后端。Ceph对象网关(RADOS网关)也支持作为镜像服务的后端。

+
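若需将Swift配置为镜像服务的后端,可参考如下 glance-api.conf 配置示意(仅为示例,具体选项以 glance_store 官方文档为准):

```
[glance_store]
stores = file,http,swift
default_store = swift
swift_store_create_container_on_put = True
```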

6、重启ironic-conductor服务

+
sudo systemctl restart openstack-ironic-conductor
+
    +
  1. 配置ironic-inspector服务
  2. +
+

配置文件路径/etc/ironic-inspector/inspector.conf

+

1、创建数据库

+
# mysql -u root -p
+
+MariaDB [(none)]> CREATE DATABASE ironic_inspector CHARACTER SET utf8;
+
+MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic_inspector.* TO 'ironic_inspector'@'localhost' \     IDENTIFIED BY 'IRONIC_INSPECTOR_DBPASSWORD';
+MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic_inspector.* TO 'ironic_inspector'@'%' \
+IDENTIFIED BY 'IRONIC_INSPECTOR_DBPASSWORD';
+

2、通过connection选项配置数据库的位置,如下所示,替换IRONIC_INSPECTOR_DBPASSWORD为ironic_inspector用户的密码,替换DB_IP为DB服务器所在的IP地址:

+
[database]
+backend = sqlalchemy
+connection = mysql+pymysql://ironic_inspector:IRONIC_INSPECTOR_DBPASSWORD@DB_IP/ironic_inspector
+min_pool_size = 100
+max_pool_size = 500
+pool_timeout = 30
+max_retries = 5
+max_overflow = 200
+db_retry_interval = 2
+db_inc_retry_interval = True
+db_max_retry_interval = 2
+db_max_retries = 5
+

3、配置消息队列通信地址

+
[DEFAULT] 
+transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
+
+

4、设置keystone认证

+
[DEFAULT]
+
+auth_strategy = keystone
+timeout = 900
+rootwrap_config = /etc/ironic-inspector/rootwrap.conf
+logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s
+log_dir = /var/log/ironic-inspector
+state_path = /var/lib/ironic-inspector
+use_stderr = False
+
+[ironic]
+api_endpoint = http://IRONIC_API_HOST_ADDRESS:6385
+auth_type = password
+auth_url = http://PUBLIC_IDENTITY_IP:5000
+auth_strategy = keystone
+ironic_url = http://IRONIC_API_HOST_ADDRESS:6385
+os_region = RegionOne
+project_name = service
+project_domain_name = Default
+user_domain_name = Default
+username = IRONIC_SERVICE_USER_NAME
+password = IRONIC_SERVICE_USER_PASSWORD
+
+[keystone_authtoken]
+auth_type = password
+auth_url = http://control:5000
+www_authenticate_uri = http://control:5000
+project_domain_name = default
+user_domain_name = default
+project_name = service
+username = ironic_inspector
+password = IRONICPASSWD
+region_name = RegionOne
+memcache_servers = control:11211
+token_cache_time = 300
+
+[processing]
+add_ports = active
+processing_hooks = $default_processing_hooks,local_link_connection,lldp_basic
+ramdisk_logs_dir = /var/log/ironic-inspector/ramdisk
+always_store_ramdisk_logs = true
+store_data =none
+power_off = false
+
+[pxe_filter]
+driver = iptables
+
+[capabilities]
+boot_mode=True
+

5、配置ironic inspector dnsmasq服务

+
# 配置文件地址:/etc/ironic-inspector/dnsmasq.conf
+port=0
+interface=enp3s0                         #替换为实际监听网络接口
+dhcp-range=172.20.19.100,172.20.19.110   #替换为实际dhcp地址范围
+bind-interfaces
+enable-tftp
+
+dhcp-match=set:efi,option:client-arch,7
+dhcp-match=set:efi,option:client-arch,9
+dhcp-match=aarch64, option:client-arch,11
+dhcp-boot=tag:aarch64,grubaa64.efi
+dhcp-boot=tag:!aarch64,tag:efi,grubx64.efi
+dhcp-boot=tag:!aarch64,tag:!efi,pxelinux.0
+
+tftp-root=/tftpboot                       #替换为实际tftpboot目录
+log-facility=/var/log/dnsmasq.log
+

6、关闭ironic provision网络子网的dhcp

+
openstack subnet set --no-dhcp 72426e89-f552-4dc4-9ac7-c4e131ce7f3c
+

7、初始化ironic-inspector服务的数据库

+

在控制节点执行:

+
ironic-inspector-dbsync --config-file /etc/ironic-inspector/inspector.conf upgrade
+

8、启动服务

+
systemctl enable --now openstack-ironic-inspector.service
+systemctl enable --now openstack-ironic-inspector-dnsmasq.service
+

6.配置httpd服务

+
    +
  1. +

    创建ironic要使用的httpd的root目录并设置属主属组,目录路径要和/etc/ironic/ironic.conf中[deploy]组的http_root配置项指定的路径一致。

    +
    mkdir -p /var/lib/ironic/httproot
    +chown ironic.ironic /var/lib/ironic/httproot
    +
  2. +
  3. +

    安装和配置httpd服务

    +
      +
    1. +

      安装httpd服务,已有请忽略

      +
      yum install httpd -y
      +
    2. +
    3. +

      创建/etc/httpd/conf.d/openstack-ironic-httpd.conf文件,内容如下:

      +
      Listen 8080
      +
      +<VirtualHost *:8080>
      +    ServerName ironic.openeuler.com
      +
      +    ErrorLog "/var/log/httpd/openstack-ironic-httpd-error_log"
      +    CustomLog "/var/log/httpd/openstack-ironic-httpd-access_log" "%h %l %u %t \"%r\" %>s %b"
      +
      +    DocumentRoot "/var/lib/ironic/httproot"
      +    <Directory "/var/lib/ironic/httproot">
      +        Options Indexes FollowSymLinks
      +        Require all granted
      +    </Directory>
      +    LogLevel warn
      +    AddDefaultCharset UTF-8
      +    EnableSendfile on
      +</VirtualHost>
      +
      +

      注意监听的端口要和/etc/ironic/ironic.conf里[deploy]选项中http_url配置项中指定的端口一致。

      +
    4. +
    5. +

      重启httpd服务。

      +
      systemctl restart httpd
      +
    6. +
    +
  4. +
+

7.deploy ramdisk镜像制作

+

W版的ramdisk镜像支持通过ironic-python-agent服务或disk-image-builder工具制作,也可以使用社区最新的ironic-python-agent-builder。用户也可以自行选择其他工具制作。若使用W版原生工具,则需要安装对应的软件包。

+
yum install openstack-ironic-python-agent
+或者
+yum install diskimage-builder
+

具体的使用方法可以参考官方文档

+

这里介绍下使用ironic-python-agent-builder构建ironic使用的deploy镜像的完整过程。

+
    +
  1. +

    安装 ironic-python-agent-builder

    +
    1. 安装工具:
    +
    +    ```shell
    +    pip install ironic-python-agent-builder
    +    ```
    +
    +2. 修改以下文件中的python解释器:
    +
    +    ```shell
    +    /usr/bin/yum /usr/libexec/urlgrabber-ext-down
    +    ```
    +
    +3. 安装其它必须的工具:
    +
    +    ```shell
    +    yum install git
    +    ```
    +
    +    由于`DIB`依赖`semanage`命令,所以在制作镜像之前确定该命令是否可用:`semanage --help`,如果提示无此命令,安装即可:
    +
    +    ```shell
    +    # 先查询需要安装哪个包
    +    [root@localhost ~]# yum provides /usr/sbin/semanage
    +    已加载插件:fastestmirror
    +    Loading mirror speeds from cached hostfile
    +    * base: mirror.vcu.edu
    +    * extras: mirror.vcu.edu
    +    * updates: mirror.math.princeton.edu
    +    policycoreutils-python-2.5-34.el7.aarch64 : SELinux policy core python utilities
    +    源    :base
    +    匹配来源:
    +    文件名    :/usr/sbin/semanage
    +    # 安装
    +    [root@localhost ~]# yum install policycoreutils-python
    +    ```
    +
  2. +
  3. +

    制作镜像

    +
    如果是`arm`架构,需要添加:
    +```shell
    +export ARCH=aarch64
    +```
    +
    +基本用法:
    +
    +```shell
    +usage: ironic-python-agent-builder [-h] [-r RELEASE] [-o OUTPUT] [-e ELEMENT]
    +                                    [-b BRANCH] [-v] [--extra-args EXTRA_ARGS]
    +                                    distribution
    +
    +positional arguments:
    +    distribution          Distribution to use
    +
    +optional arguments:
    +    -h, --help            show this help message and exit
    +    -r RELEASE, --release RELEASE
    +                        Distribution release to use
    +    -o OUTPUT, --output OUTPUT
    +                        Output base file name
    +    -e ELEMENT, --element ELEMENT
    +                        Additional DIB element to use
    +    -b BRANCH, --branch BRANCH
    +                        If set, override the branch that is used for ironic-
    +                        python-agent and requirements
    +    -v, --verbose         Enable verbose logging in diskimage-builder
    +    --extra-args EXTRA_ARGS
    +                        Extra arguments to pass to diskimage-builder
    +```
    +
    +举例说明:
    +
    +```shell
    +ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky
    +```
    +
  4. +
  5. +

    允许ssh登录

    +
    初始化环境变量,然后制作镜像:
    +
    +```shell
    +export DIB_DEV_USER_USERNAME=ipa \
    +export DIB_DEV_USER_PWDLESS_SUDO=yes \
    +export DIB_DEV_USER_PASSWORD='123'
    +ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky -e selinux-permissive -e devuser
    +```
    +
  6. +
  7. +

    指定代码仓库

    +
    初始化对应的环境变量,然后制作镜像:
    +
    +```shell
    +# 指定仓库地址以及版本
    +DIB_REPOLOCATION_ironic_python_agent=git@172.20.2.149:liuzz/ironic-python-agent.git
    +DIB_REPOREF_ironic_python_agent=origin/develop
    +
    +# 直接从gerrit上clone代码
    +DIB_REPOLOCATION_ironic_python_agent=https://review.opendev.org/openstack/ironic-python-agent
    +DIB_REPOREF_ironic_python_agent=refs/changes/43/701043/1
    +```
    +
    +参考:[source-repositories](https://docs.openstack.org/diskimage-builder/latest/elements/source-repositories/README.html)。
    +
    +指定仓库地址及版本验证成功。
    +
  8. +
  9. +

    注意

    +
    原生的openstack里的pxe配置文件的模版不支持arm64架构,需要自己对原生openstack代码进行修改:
    +
    +在W版中,社区的ironic仍然不支持arm64位的uefi pxe启动,表现为生成的grub.cfg文件(一般位于/tftpboot/下)格式不对而导致pxe启动失败,如下:
    +
    +生成的错误配置文件:
    +
    +![ironic-err](../../img/install/ironic-err.png)
    +
    +如上图所示,arm架构里寻找vmlinux和ramdisk镜像的命令分别是linux和initrd,上图所示的标红命令是x86架构下的uefi pxe启动。
    +
    +需要用户对生成grub.cfg的代码逻辑自行修改。
    +
    +ironic向ipa发送查询命令执行状态请求的tls报错:
    +
    +w版的ipa和ironic默认都会开启tls认证的方式向对方发送请求,跟据官网的说明进行关闭即可。
    +
    +1. 修改ironic配置文件(/etc/ironic/ironic.conf)下面的配置中添加ipa-insecure=1:
    +
    +```
    +[agent]
    +verify_ca = False
    +
    +[pxe]
    +pxe_append_params = nofb nomodeset vga=normal coreos.autologin ipa-insecure=1
    +```
    +
    +2) ramdisk镜像中添加ipa配置文件/etc/ironic_python_agent/ironic_python_agent.conf并配置tls的配置如下:
    +
    +/etc/ironic_python_agent/ironic_python_agent.conf (需要提前创建/etc/ironic_python_agent目录)
    +
    +```
    +[DEFAULT]
    +enable_auto_tls = False
    +```
    +
    +设置权限:
    +
    +```
    +chown -R ipa.ipa /etc/ironic_python_agent/
    +```
    +
    +3. 修改ipa服务的服务启动文件,添加配置文件选项
    +
    +vim usr/lib/systemd/system/ironic-python-agent.service
    +
    +```
    +[Unit]
    +Description=Ironic Python Agent
    +After=network-online.target
    +
    +[Service]
    +ExecStartPre=/sbin/modprobe vfat
    +ExecStart=/usr/local/bin/ironic-python-agent --config-file /etc/ironic_python_agent/ironic_python_agent.conf
    +Restart=always
    +RestartSec=30s
    +
    +[Install]
    +WantedBy=multi-user.target
    +```
    +
  10. +
+

Kolla 安装

+

Kolla为OpenStack服务提供生产环境可用的容器化部署的功能。openEuler 22.03 LTS中引入了Kolla和Kolla-ansible服务。

+

Kolla的安装十分简单,只需要安装对应的RPM包即可

+
yum install openstack-kolla openstack-kolla-ansible
+

安装完后,就可以使用kolla-ansible, kolla-build, kolla-genpwd, kolla-mergepwd等命令了。

+
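以下为这些命令的一个典型使用示例(其中 inventory 文件路径仅为示例,部署前还需按实际环境准备 /etc/kolla/globals.yml 等配置):

```shell
# 生成各服务的随机密码,默认写入 /etc/kolla/passwords.yml
kolla-genpwd
# 部署前检查与部署(./multinode 为示例 inventory 路径)
kolla-ansible -i ./multinode bootstrap-servers
kolla-ansible -i ./multinode prechecks
kolla-ansible -i ./multinode deploy
```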

Trove 安装

+

Trove是OpenStack的数据库服务,如果用户使用OpenStack提供的数据库服务则推荐使用该组件。否则,可以不用安装。

+

1.设置数据库

+

数据库服务在数据库中存储信息,创建一个trove用户可以访问的trove数据库,替换TROVE_DBPASSWORD为合适的密码

+
mysql -u root -p
+
+MariaDB [(none)]> CREATE DATABASE trove CHARACTER SET utf8;
+MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'localhost' \
+IDENTIFIED BY 'TROVE_DBPASSWORD';
+MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'%' \
+IDENTIFIED BY 'TROVE_DBPASSWORD';
+

2.创建服务用户认证

+

1、创建Trove服务用户

+

openstack user create --password TROVE_PASSWORD \
+                      --email trove@example.com trove
+openstack role add --project service --user trove admin
+openstack service create --name trove \
+                         --description "Database service" database
+
解释: TROVE_PASSWORD 替换为trove用户的密码

+

2、创建Database服务访问入口

+
openstack endpoint create --region RegionOne database public http://controller:8779/v1.0/%\(tenant_id\)s
+openstack endpoint create --region RegionOne database internal http://controller:8779/v1.0/%\(tenant_id\)s
+openstack endpoint create --region RegionOne database admin http://controller:8779/v1.0/%\(tenant_id\)s
+

3.安装和配置Trove各组件

+

1、安装Trove包 +

yum install openstack-trove python-troveclient
+

2、配置trove.conf +

vim /etc/trove/trove.conf
+
+[DEFAULT]
+bind_host=TROVE_NODE_IP
+log_dir = /var/log/trove
+network_driver = trove.network.neutron.NeutronDriver
+management_security_groups = <manage security group>
+nova_keypair = trove-mgmt
+default_datastore = mysql
+taskmanager_manager = trove.taskmanager.manager.Manager
+trove_api_workers = 5
+transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
+reboot_time_out = 300
+usage_timeout = 900
+agent_call_high_timeout = 1200
+use_syslog = False
+debug = True
+
+# Set these if using Neutron Networking
+network_driver=trove.network.neutron.NeutronDriver
+network_label_regex=.*
+
+
+transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
+
+[database]
+connection = mysql+pymysql://trove:TROVE_DBPASS@controller/trove
+
+[keystone_authtoken]
+project_domain_name = Default
+project_name = service
+user_domain_name = Default
+password = trove
+username = trove
+auth_url = http://controller:5000/v3/
+auth_type = password
+
+[service_credentials]
+auth_url = http://controller:5000/v3/
+region_name = RegionOne
+project_name = service
+password = trove
+project_domain_name = Default
+user_domain_name = Default
+username = trove
+
+[mariadb]
+tcp_ports = 3306,4444,4567,4568
+
+[mysql]
+tcp_ports = 3306
+
+[postgresql]
+tcp_ports = 5432
+ 解释:

+
    +
  • [Default]分组中bind_host配置为Trove部署节点的IP
  • +
  • nova_compute_url 和 cinder_url 为Nova和Cinder在Keystone中创建的endpoint
  • +
  • nova_proxy_XXX 为一个能访问Nova服务的用户信息,上例中使用admin用户为例
  • +
  • transport_url 为RabbitMQ连接信息,RABBIT_PASS替换为RabbitMQ的密码
  • +
  • [database]分组中的connection 为前面在mysql中为Trove创建的数据库信息
  • +
  • Trove的用户信息中TROVE_PASS替换为实际trove用户的密码
  • +
+

3.配置trove-guestagent.conf +

vim /etc/trove/trove-guestagent.conf
+
+[DEFAULT]
+log_file = trove-guestagent.log
+log_dir = /var/log/trove/
+ignore_users = os_admin
+control_exchange = trove
+transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
+rpc_backend = rabbit
+command_process_timeout = 60
+use_syslog = False
+debug = True
+
+[service_credentials]
+auth_url = http://controller:5000/v3/
+region_name = RegionOne
+project_name = service
+password = TROVE_PASS
+project_domain_name = Default
+user_domain_name = Default
+username = trove
+
+[mysql]
+docker_image = your-registry/your-repo/mysql
+backup_docker_image = your-registry/your-repo/db-backup-mysql:1.1.0

+

解释: guestagent是trove中一个独立组件,需要预先内置到Trove通过Nova创建的虚拟机镜像中,在创建好数据库实例后,会起guestagent进程,负责通过消息队列(RabbitMQ)向Trove上报心跳,因此需要配置RabbitMQ的用户和密码信息。从Victoria版开始,Trove使用一个统一的镜像来跑不同类型的数据库,数据库服务运行在Guest虚拟机的Docker容器中。

+
    +
  • transport_url 为RabbitMQ连接信息,RABBIT_PASS替换为RabbitMQ的密码
  • +
  • Trove的用户信息中TROVE_PASS替换为实际trove用户的密码
  • +
+

4.生成Trove数据库表 +

su -s /bin/sh -c "trove-manage db_sync" trove

+

4.完成安装配置

+
    +
  1. 配置Trove服务自启动 +
    systemctl enable openstack-trove-api.service \
    +openstack-trove-taskmanager.service \
    +openstack-trove-conductor.service 
  2. +
  3. 启动服务 +
    systemctl start openstack-trove-api.service \
    +openstack-trove-taskmanager.service \
    +openstack-trove-conductor.service
  4. +
+

Swift 安装

+

Swift 提供了弹性可伸缩、高可用的分布式对象存储服务,适合存储大规模非结构化数据。

+
    +
  1. +

    创建服务凭证、API端点。

    +

    创建服务凭证

    +
    #创建swift用户:
    +openstack user create --domain default --password-prompt swift                 
    +#为swift用户添加admin角色:
    +openstack role add --project service --user swift admin                        
    +#创建swift服务实体:
    +openstack service create --name swift --description "OpenStack Object Storage" object-store                                                                   
    +

    创建swift API 端点:

    +
    openstack endpoint create --region RegionOne object-store public http://controller:8080/v1/AUTH_%\(project_id\)s                            
    +openstack endpoint create --region RegionOne object-store internal http://controller:8080/v1/AUTH_%\(project_id\)s                            
    +openstack endpoint create --region RegionOne object-store admin http://controller:8080/v1                                                  
    +
  2. +
  3. +

    安装软件包:

    +
    yum install openstack-swift-proxy python3-swiftclient python3-keystoneclient python3-keystonemiddleware memcached (CTL)
    +
  4. +
  5. +

    配置proxy-server相关配置

    +
  6. +
+

Swift RPM包里已经包含了一个基本可用的proxy-server.conf,只需要手动修改其中的ip和swift password即可。

+
***注意***
+
+**注意替换password为您在身份服务中为swift用户选择的密码**
+
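下面给出一个通常需要修改的配置片段示例(仅为示意,具体段落与字段以RPM自带的proxy-server.conf为准;假设Keystone部署在controller节点,SWIFT_PASS为swift用户的密码):

```shell
# /etc/swift/proxy-server.conf 片段(示意)
[filter:authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = swift
password = SWIFT_PASS
```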

4.安装和配置存储节点 (STG)

+
安装支持的程序包:
+```shell
+yum install xfsprogs rsync
+```
+
+将/dev/vdb和/dev/vdc设备格式化为 XFS
+
+```shell
+mkfs.xfs /dev/vdb
+mkfs.xfs /dev/vdc
+```
+
+创建挂载点目录结构:
+
+```shell
+mkdir -p /srv/node/vdb
+mkdir -p /srv/node/vdc
+```
+
+找到新分区的 UUID:
+
+```shell
+blkid
+```
+
+编辑/etc/fstab文件并将以下内容添加到其中:
+
+```shell
+UUID="<UUID-from-output-above>" /srv/node/vdb xfs noatime 0 2
+UUID="<UUID-from-output-above>" /srv/node/vdc xfs noatime 0 2
+```
+
+挂载设备:
+
+```shell
+mount /srv/node/vdb
+mount /srv/node/vdc
+```
+***注意***
+
+**如果用户不需要容灾功能,以上步骤只需要创建一个设备即可,同时可以跳过下面的rsync配置**
+
+(可选)创建或编辑/etc/rsyncd.conf文件以包含以下内容:
+
+```shell
+[DEFAULT]
+uid = swift
+gid = swift
+log file = /var/log/rsyncd.log
+pid file = /var/run/rsyncd.pid
+address = MANAGEMENT_INTERFACE_IP_ADDRESS
+
+[account]
+max connections = 2
+path = /srv/node/
+read only = False
+lock file = /var/lock/account.lock
+
+[container]
+max connections = 2
+path = /srv/node/
+read only = False
+lock file = /var/lock/container.lock
+
+[object]
+max connections = 2
+path = /srv/node/
+read only = False
+lock file = /var/lock/object.lock
+```
+**替换MANAGEMENT_INTERFACE_IP_ADDRESS为存储节点上管理网络的IP地址**
+
+启动rsyncd服务并配置它在系统启动时启动:
+
+```shell
+systemctl enable rsyncd.service
+systemctl start rsyncd.service
+```
+

5.在存储节点安装和配置组件 (STG)

+
安装软件包:
+
+```shell
+yum install openstack-swift-account openstack-swift-container openstack-swift-object
+```
+
+编辑/etc/swift目录的account-server.conf、container-server.conf和object-server.conf文件,替换bind_ip为存储节点上管理网络的IP地址。
+
+确保挂载点目录结构的正确所有权:
+
+```shell
+chown -R swift:swift /srv/node
+```
+
+创建recon目录并确保其拥有正确的所有权:
+
+```shell
+mkdir -p /var/cache/swift
+chown -R root:swift /var/cache/swift
+chmod -R 775 /var/cache/swift
+```
+

6.创建账号环 (CTL)

+
切换到/etc/swift目录。
+
+```shell
+cd /etc/swift
+```
+
+创建基础account.builder文件:
+
+```shell
+swift-ring-builder account.builder create 10 1 1
+```
+
+将每个存储节点添加到环中:
+
+```shell
+swift-ring-builder account.builder add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6202  --device DEVICE_NAME --weight DEVICE_WEIGHT
+```
+
+**替换STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS为存储节点上管理网络的IP地址。替换DEVICE_NAME为同一存储节点上的存储设备名称**
+
+***注意***
+**对每个存储节点上的每个存储设备重复此命令**
+
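例如,假设存储节点的管理IP为10.0.0.41(仅为示意),且按前文已将vdb和vdc两个设备格式化并挂载,则需要分别为每个设备执行一次添加命令:

```shell
swift-ring-builder account.builder add --region 1 --zone 1 --ip 10.0.0.41 --port 6202 --device vdb --weight 100
swift-ring-builder account.builder add --region 1 --zone 1 --ip 10.0.0.41 --port 6202 --device vdc --weight 100
```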
+验证环的内容:
+
+```shell
+swift-ring-builder account.builder
+```
+
+重新平衡环:
+
+```shell
+swift-ring-builder account.builder rebalance
+```
+

7.创建容器环 (CTL)

+
切换到`/etc/swift`目录。
+
+创建基础`container.builder`文件:
+
+```shell
+   swift-ring-builder container.builder create 10 1 1
+```
+
+将每个存储节点添加到环中:
+
+```shell
+swift-ring-builder container.builder \
+  add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6201 \
+  --device DEVICE_NAME --weight 100
+
+```
+
+**替换STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS为存储节点上管理网络的IP地址。替换DEVICE_NAME为同一存储节点上的存储设备名称**
+
+***注意***
+**对每个存储节点上的每个存储设备重复此命令**
+
+验证环的内容:
+
+```shell
+swift-ring-builder container.builder
+```
+
+重新平衡环:
+
+```shell
+swift-ring-builder container.builder rebalance
+```
+

8.创建对象环 (CTL)

+
切换到`/etc/swift`目录。
+
+创建基础`object.builder`文件:
+
+   ```shell
+   swift-ring-builder object.builder create 10 1 1
+   ```
+
+将每个存储节点添加到环中
+
+```shell
+ swift-ring-builder object.builder \
+  add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6200 \
+  --device DEVICE_NAME --weight 100
+```
+
+**替换STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS为存储节点上管理网络的IP地址。替换DEVICE_NAME为同一存储节点上的存储设备名称**
+
+***注意***
+**对每个存储节点上的每个存储设备重复此命令**
+
+验证环的内容:
+
+```shell
+swift-ring-builder object.builder
+```
+
+重新平衡环:
+
+```shell
+swift-ring-builder object.builder rebalance
+```
+
+分发环配置文件:
+
+将`account.ring.gz`,`container.ring.gz`以及 `object.ring.gz`文件复制到每个存储节点和运行代理服务的任何其他节点上的`/etc/swift`目录。
+
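例如,可以使用scp进行分发(示意,假设存储节点主机名为storage1):

```shell
scp /etc/swift/account.ring.gz /etc/swift/container.ring.gz /etc/swift/object.ring.gz root@storage1:/etc/swift/
```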

9.完成安装

+

编辑/etc/swift/swift.conf文件

+
[swift-hash]
+swift_hash_path_suffix = test-hash
+swift_hash_path_prefix = test-hash
+
+[storage-policy:0]
+name = Policy-0
+default = yes
+

用唯一值替换 test-hash

+
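例如,可以使用openssl生成随机值作为唯一的hash前后缀(示意):

```shell
openssl rand -hex 10
```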

将swift.conf文件复制到每个存储节点和运行代理服务的任何其他节点上的/etc/swift目录。

+

在所有节点上,确保配置目录的正确所有权:

+
chown -R root:swift /etc/swift
+

在控制器节点和运行代理服务的任何其他节点上,启动对象存储代理服务及其依赖项,并将它们配置为在系统启动时启动:

+
systemctl enable openstack-swift-proxy.service memcached.service
+systemctl start openstack-swift-proxy.service memcached.service
+

在存储节点上,启动对象存储服务并将它们配置为在系统启动时启动:

+
systemctl enable openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service
+
+systemctl start openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service
+
+systemctl enable openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service
+
+systemctl start openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service
+
+systemctl enable openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service
+
+systemctl start openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service
+

Cyborg 安装

+

Cyborg为OpenStack提供加速器设备的支持,包括 GPU, FPGA, ASIC, NP, SoCs, NVMe/NOF SSDs, ODP, DPDK/SPDK等等。

+

1.初始化对应数据库

+
CREATE DATABASE cyborg;
+GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'localhost' IDENTIFIED BY 'CYBORG_DBPASS';
+GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'%' IDENTIFIED BY 'CYBORG_DBPASS';
+

2.创建对应Keystone资源对象

+
$ openstack user create --domain default --password-prompt cyborg
+$ openstack role add --project service --user cyborg admin
+$ openstack service create --name cyborg --description "Acceleration Service" accelerator
+
+$ openstack endpoint create --region RegionOne \
+  accelerator public http://<cyborg-ip>:6666/v1
+$ openstack endpoint create --region RegionOne \
+  accelerator internal http://<cyborg-ip>:6666/v1
+$ openstack endpoint create --region RegionOne \
+  accelerator admin http://<cyborg-ip>:6666/v1
+

3.安装Cyborg

+
yum install openstack-cyborg
+

4.配置Cyborg

+

修改/etc/cyborg/cyborg.conf

+
[DEFAULT]
+transport_url = rabbit://%RABBITMQ_USER%:%RABBITMQ_PASSWORD%@%OPENSTACK_HOST_IP%:5672/
+use_syslog = False
+state_path = /var/lib/cyborg
+debug = True
+
+[database]
+connection = mysql+pymysql://%DATABASE_USER%:%DATABASE_PASSWORD%@%OPENSTACK_HOST_IP%/cyborg
+
+[service_catalog]
+project_domain_id = default
+user_domain_id = default
+project_name = service
+password = PASSWORD
+username = cyborg
+auth_url = http://%OPENSTACK_HOST_IP%/identity
+auth_type = password
+
+[placement]
+project_domain_name = Default
+project_name = service
+user_domain_name = Default
+password = PASSWORD
+username = placement
+auth_url = http://%OPENSTACK_HOST_IP%/identity
+auth_type = password
+
+[keystone_authtoken]
+memcached_servers = localhost:11211
+project_domain_name = Default
+project_name = service
+user_domain_name = Default
+password = PASSWORD
+username = cyborg
+auth_url = http://%OPENSTACK_HOST_IP%/identity
+auth_type = password
+

自行修改对应的用户名、密码、IP等信息

+

5.同步数据库表格

+
cyborg-dbsync --config-file /etc/cyborg/cyborg.conf upgrade
+

6.启动Cyborg服务

+
systemctl enable openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent
+systemctl start openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent
+

Aodh 安装

+

1.创建数据库

+
CREATE DATABASE aodh;
+
+GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'localhost' IDENTIFIED BY 'AODH_DBPASS';
+
+GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'%' IDENTIFIED BY 'AODH_DBPASS';
+

2.创建对应Keystone资源对象

+
openstack user create --domain default --password-prompt aodh
+
+openstack role add --project service --user aodh admin
+
+openstack service create --name aodh --description "Telemetry" alarming
+
+openstack endpoint create --region RegionOne alarming public http://controller:8042
+
+openstack endpoint create --region RegionOne alarming internal http://controller:8042
+
+openstack endpoint create --region RegionOne alarming admin http://controller:8042
+

3.安装Aodh

+
yum install openstack-aodh-api openstack-aodh-evaluator openstack-aodh-notifier openstack-aodh-listener openstack-aodh-expirer python3-aodhclient
+

注意

+

aodh依赖的软件包python3-pyparsing在openEuler的OS仓中不适配,需要覆盖安装OpenStack对应的版本:可以使用yum list | grep pyparsing | grep OpenStack | awk '{print $2}'获取对应的版本VERSION,然后再执行yum install -y python3-pyparsing-VERSION覆盖安装适配的pyparsing,如下例所示。

+
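即(示意):

```shell
# 获取OpenStack适配的pyparsing版本并覆盖安装
VERSION=$(yum list | grep pyparsing | grep OpenStack | awk '{print $2}')
yum install -y python3-pyparsing-$VERSION
```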

4.修改配置文件

+
[database]
+connection = mysql+pymysql://aodh:AODH_DBPASS@controller/aodh
+
+[DEFAULT]
+transport_url = rabbit://openstack:RABBIT_PASS@controller
+auth_strategy = keystone
+
+[keystone_authtoken]
+www_authenticate_uri = http://controller:5000
+auth_url = http://controller:5000
+memcached_servers = controller:11211
+auth_type = password
+project_domain_id = default
+user_domain_id = default
+project_name = service
+username = aodh
+password = AODH_PASS
+
+[service_credentials]
+auth_type = password
+auth_url = http://controller:5000/v3
+project_domain_id = default
+user_domain_id = default
+project_name = service
+username = aodh
+password = AODH_PASS
+interface = internalURL
+region_name = RegionOne
+

5.初始化数据库

+
aodh-dbsync
+

6.启动Aodh服务

+
systemctl enable openstack-aodh-api.service openstack-aodh-evaluator.service openstack-aodh-notifier.service openstack-aodh-listener.service
+
+systemctl start openstack-aodh-api.service openstack-aodh-evaluator.service openstack-aodh-notifier.service openstack-aodh-listener.service
+

Gnocchi 安装

+

1.创建数据库

+
CREATE DATABASE gnocchi;
+
+GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'localhost' IDENTIFIED BY 'GNOCCHI_DBPASS';
+
+GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'%' IDENTIFIED BY 'GNOCCHI_DBPASS';
+

2.创建对应Keystone资源对象

+
openstack user create --domain default --password-prompt gnocchi
+
+openstack role add --project service --user gnocchi admin
+
+openstack service create --name gnocchi --description "Metric Service" metric
+
+openstack endpoint create --region RegionOne metric public http://controller:8041
+
+openstack endpoint create --region RegionOne metric internal http://controller:8041
+
+openstack endpoint create --region RegionOne metric admin http://controller:8041
+

3.安装Gnocchi

+
yum install openstack-gnocchi-api openstack-gnocchi-metricd python3-gnocchiclient
+

4.修改配置文件/etc/gnocchi/gnocchi.conf

+
[api]
+auth_mode = keystone
+port = 8041
+uwsgi_mode = http-socket
+
+[keystone_authtoken]
+auth_type = password
+auth_url = http://controller:5000/v3
+project_domain_name = Default
+user_domain_name = Default
+project_name = service
+username = gnocchi
+password = GNOCCHI_PASS
+interface = internalURL
+region_name = RegionOne
+
+[indexer]
+url = mysql+pymysql://gnocchi:GNOCCHI_DBPASS@controller/gnocchi
+
+[storage]
+# coordination_url is not required but specifying one will improve
+# performance with better workload division across workers.
+coordination_url = redis://controller:6379
+file_basepath = /var/lib/gnocchi
+driver = file
+

5.初始化数据库

+
gnocchi-upgrade
+

6.启动Gnocchi服务

+
systemctl enable openstack-gnocchi-api.service openstack-gnocchi-metricd.service
+
+systemctl start openstack-gnocchi-api.service openstack-gnocchi-metricd.service
+

Ceilometer 安装

+

1.创建对应Keystone资源对象

+
openstack user create --domain default --password-prompt ceilometer
+
+openstack role add --project service --user ceilometer admin
+
+openstack service create --name ceilometer --description "Telemetry" metering
+

2.安装Ceilometer

+
yum install openstack-ceilometer-notification openstack-ceilometer-central
+

3.修改配置文件/etc/ceilometer/pipeline.yaml

+
publishers:
+    # set address of Gnocchi
+    # + filter out Gnocchi-related activity meters (Swift driver)
+    # + set default archive policy
+    - gnocchi://?filter_project=service&archive_policy=low
+

4.修改配置文件/etc/ceilometer/ceilometer.conf

+
[DEFAULT]
+transport_url = rabbit://openstack:RABBIT_PASS@controller
+
+[service_credentials]
+auth_type = password
+auth_url = http://controller:5000/v3
+project_domain_id = default
+user_domain_id = default
+project_name = service
+username = ceilometer
+password = CEILOMETER_PASS
+interface = internalURL
+region_name = RegionOne
+

5.初始化数据库

+
ceilometer-upgrade
+

6.启动Ceilometer服务

+
systemctl enable openstack-ceilometer-notification.service openstack-ceilometer-central.service
+
+systemctl start openstack-ceilometer-notification.service openstack-ceilometer-central.service
+

Heat 安装

+

1.创建heat数据库,并授予heat数据库正确的访问权限,替换HEAT_DBPASS为合适的密码

+
CREATE DATABASE heat;
+GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'localhost' IDENTIFIED BY 'HEAT_DBPASS';
+GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'%' IDENTIFIED BY 'HEAT_DBPASS';
+

2.创建服务凭证,创建heat用户,并为其增加admin角色

+
openstack user create --domain default --password-prompt heat
+openstack role add --project service --user heat admin
+

3.创建heat和heat-cfn服务及其对应的API端点

+
openstack service create --name heat --description "Orchestration" orchestration
+openstack service create --name heat-cfn --description "Orchestration"  cloudformation
+openstack endpoint create --region RegionOne orchestration public http://controller:8004/v1/%\(tenant_id\)s
+openstack endpoint create --region RegionOne orchestration internal http://controller:8004/v1/%\(tenant_id\)s
+openstack endpoint create --region RegionOne orchestration admin http://controller:8004/v1/%\(tenant_id\)s
+openstack endpoint create --region RegionOne cloudformation public http://controller:8000/v1
+openstack endpoint create --region RegionOne cloudformation internal http://controller:8000/v1
+openstack endpoint create --region RegionOne cloudformation admin http://controller:8000/v1
+

4.创建stack管理所需的额外信息,包括heat domain及其admin用户heat_domain_admin,以及heat_stack_owner和heat_stack_user角色

+
openstack user create --domain heat --password-prompt heat_domain_admin
+openstack role add --domain heat --user-domain heat --user heat_domain_admin admin
+openstack role create heat_stack_owner
+openstack role create heat_stack_user
+

5.安装软件包

+
yum install openstack-heat-api openstack-heat-api-cfn openstack-heat-engine
+

6.修改配置文件/etc/heat/heat.conf

+
[DEFAULT]
+transport_url = rabbit://openstack:RABBIT_PASS@controller
+heat_metadata_server_url = http://controller:8000
+heat_waitcondition_server_url = http://controller:8000/v1/waitcondition
+stack_domain_admin = heat_domain_admin
+stack_domain_admin_password = HEAT_DOMAIN_PASS
+stack_user_domain_name = heat
+
+[database]
+connection = mysql+pymysql://heat:HEAT_DBPASS@controller/heat
+
+[keystone_authtoken]
+www_authenticate_uri = http://controller:5000
+auth_url = http://controller:5000
+memcached_servers = controller:11211
+auth_type = password
+project_domain_name = default
+user_domain_name = default
+project_name = service
+username = heat
+password = HEAT_PASS
+
+[trustee]
+auth_type = password
+auth_url = http://controller:5000
+username = heat
+password = HEAT_PASS
+user_domain_name = default
+
+[clients_keystone]
+auth_uri = http://controller:5000
+

7.初始化heat数据库表

+
su -s /bin/sh -c "heat-manage db_sync" heat
+

8.启动服务

+
systemctl enable openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service
+systemctl start openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service
+

基于OpenStack SIG开发工具oos快速部署

+

oos(openEuler OpenStack SIG)是OpenStack SIG提供的命令行工具。其中oos env系列命令提供了一键部署OpenStack (all in one或三节点cluster)的ansible脚本,用户可以使用该脚本快速部署一套基于 openEuler RPM 的 OpenStack 环境。oos工具支持对接云provider(目前仅支持华为云provider)和主机纳管两种方式来部署 OpenStack 环境,下面以对接华为云部署一套all in one的OpenStack环境为例说明oos工具的使用方法。

+
    +
  1. +

    安装oos工具

    +
    pip install openstack-sig-tool
    +
  2. +
  3. +

    配置对接华为云provider的信息

    +

    打开/usr/local/etc/oos/oos.conf文件,修改配置为您拥有的华为云资源信息:

    +
    [huaweicloud]
    +ak = 
    +sk = 
    +region = ap-southeast-3
    +root_volume_size = 100
    +data_volume_size = 100
    +security_group_name = oos
    +image_format = openEuler-%%(release)s-%%(arch)s
    +vpc_name = oos_vpc
    +subnet1_name = oos_subnet1
    +subnet2_name = oos_subnet2
    +
  4. +
  5. +

    配置 OpenStack 环境信息

    +

    打开/usr/local/etc/oos/oos.conf文件,根据当前机器环境和需求修改配置。内容如下:

    +
    [environment]
    +mysql_root_password = root
    +mysql_project_password = root
    +rabbitmq_password = root
    +project_identity_password = root
    +enabled_service = keystone,neutron,cinder,placement,nova,glance,horizon,aodh,ceilometer,cyborg,gnocchi,kolla,heat,swift,trove,tempest
    +neutron_provider_interface_name = br-ex
    +default_ext_subnet_range = 10.100.100.0/24
    +default_ext_subnet_gateway = 10.100.100.1
    +neutron_dataplane_interface_name = eth1
    +cinder_block_device = vdb
    +swift_storage_devices = vdc
    +swift_hash_path_suffix = ash
    +swift_hash_path_prefix = has
    +glance_api_workers = 2
    +cinder_api_workers = 2
    +nova_api_workers = 2
    +nova_metadata_api_workers = 2
    +nova_conductor_workers = 2
    +nova_scheduler_workers = 2
    +neutron_api_workers = 2
    +horizon_allowed_host = *
    +kolla_openeuler_plugin = false
    +

    关键配置

| 配置项 | 解释 |
|:------|:-----|
| enabled_service | 安装服务列表,根据用户需求自行删减 |
| neutron_provider_interface_name | neutron L3网桥名称 |
| default_ext_subnet_range | neutron私网IP段 |
| default_ext_subnet_gateway | neutron私网gateway |
| neutron_dataplane_interface_name | neutron使用的网卡,推荐使用一张新的网卡,以免和现有网卡冲突,防止all in one主机断连的情况 |
| cinder_block_device | cinder使用的卷设备名 |
| swift_storage_devices | swift使用的卷设备名 |
| kolla_openeuler_plugin | 是否启用kolla plugin。设置为True,kolla将支持部署openEuler容器 |
    +
  6. +
  7. +

在华为云上创建一台openEuler 22.03-LTS-SP3的x86_64虚拟机,用于部署all in one的OpenStack

    +
    # sshpass在`oos env create`过程中被使用,用于配置对目标虚拟机的免密访问
    +dnf install sshpass
    +oos env create -r 22.03-lts-SP3 -f small -a x86 -n test-oos all_in_one
    +

    具体的参数可以使用oos env create --help命令查看

    +
  8. +
  9. +

    部署OpenStack all in one 环境

    +

    oos env setup test-oos -r wallaby
    +具体的参数可以使用oos env setup --help命令查看

    +
  10. +
  11. +

    初始化tempest环境

    +

    如果用户想使用该环境运行tempest测试的话,可以执行命令oos env init,会自动把tempest需要的OpenStack资源自动创建好

    +
    oos env init test-oos
    +

    命令执行成功后,在用户的根目录下会生成mytest目录,进入其中就可以执行tempest run命令了。

    +
  12. +
+

如果是以主机纳管的方式部署 OpenStack 环境,总体逻辑与上文对接华为云时一致,1、3、5、6步操作不变,去除第2步对华为云provider信息的配置,第4步由在华为云上创建虚拟机改为纳管主机操作。

+
# sshpass在`oos env create`过程中被使用,用于配置对目标主机的免密访问
+dnf install sshpass
+oos env manage -r 22.03-lts-SP3 -i TARGET_MACHINE_IP -p TARGET_MACHINE_PASSWD -n test-oos
+

替换TARGET_MACHINE_IP为目标机ip、TARGET_MACHINE_PASSWD为目标机密码。具体的参数可以使用oos env manage --help命令查看。

diff --git a/site/install/openEuler-22.03-LTS-SP4/OpenStack-train/index.html b/site/install/openEuler-22.03-LTS-SP4/OpenStack-train/index.html
new file mode 100644
index 0000000000000000000000000000000000000000..4a2848adb0b69dcd17d21624b9e13478d292d34b
--- /dev/null
+++ b/site/install/openEuler-22.03-LTS-SP4/OpenStack-train/index.html
@@ -0,0 +1,2843 @@
+openEuler-22.03-LTS-SP4_Train - OpenStack SIG Doc

OpenStack-Train 部署指南

+ +

OpenStack 简介

+

OpenStack 是一个社区,也是一个项目。它提供了一个部署云的操作平台或工具集,为组织提供可扩展的、灵活的云计算。

+

作为一个开源的云计算管理平台,OpenStack 由nova、cinder、neutron、glance、keystone、horizon等几个主要的组件组合起来完成具体工作。OpenStack 支持几乎所有类型的云环境,项目目标是提供实施简单、可大规模扩展、丰富、标准统一的云计算管理平台。OpenStack 通过各种互补的服务提供了基础设施即服务(IaaS)的解决方案,每个服务提供 API 进行集成。

+

openEuler 22.03-LTS-SP4版本官方源已经支持 OpenStack-Train 版本,用户可以配置好 yum 源后根据此文档进行 OpenStack 部署。

+

约定

+

OpenStack 支持多种形态部署,此文档支持ALL in One以及Distributed两种部署方式,按照如下方式约定:

+

ALL in One模式:

+
忽略所有可能的后缀
+

Distributed模式:

+
以 `(CTL)` 为后缀表示此条配置或者命令仅适用`控制节点`
+以 `(CPT)` 为后缀表示此条配置或者命令仅适用`计算节点`
+以 `(STG)` 为后缀表示此条配置或者命令仅适用`存储节点`
+除此之外表示此条配置或者命令同时适用`控制节点`和`计算节点`
+

注意

+

涉及到以上约定的服务如下:

+
    +
  • Cinder
  • +
  • Nova
  • +
  • Neutron
  • +
+

准备环境

+

环境配置

+
    +
  1. +

    启动OpenStack Train yum源

    +
    yum update
    +yum install openstack-release-train
    +yum clean all && yum makecache
    +

    注意:如果你的环境的YUM源没有启用EPOL,需要同时配置EPOL,确保EPOL已配置,如下所示

    +
    vi /etc/yum.repos.d/openEuler.repo
    +
    +[EPOL]
    +name=EPOL
    +baseurl=http://repo.openeuler.org/openEuler-22.03-LTS-SP4/EPOL/main/$basearch/
    +enabled=1
    +gpgcheck=1
    +gpgkey=http://repo.openeuler.org/openEuler-22.03-LTS-SP4/OS/$basearch/RPM-GPG-KEY-openEuler
    +
  2. +
  3. +

    修改主机名以及映射

    +

    设置各个节点的主机名

    +
    hostnamectl set-hostname controller                                                            (CTL)
    +hostnamectl set-hostname compute                                                               (CPT)
    +

    假设controller节点的IP是10.0.0.11,compute节点的IP是10.0.0.12(如果存在的话),则于/etc/hosts新增如下:

    +
    10.0.0.11   controller
    +10.0.0.12   compute
    +
  4. +
+

安装 SQL DataBase

+
    +
  1. +

    执行如下命令,安装软件包。

    +
    yum install mariadb mariadb-server python3-PyMySQL
    +
  2. +
  3. +

    执行如下命令,创建并编辑 /etc/my.cnf.d/openstack.cnf 文件。

    +
    vim /etc/my.cnf.d/openstack.cnf
    +
    +[mysqld]
    +bind-address = 10.0.0.11
    +default-storage-engine = innodb
    +innodb_file_per_table = on
    +max_connections = 4096
    +collation-server = utf8_general_ci
    +character-set-server = utf8
    +

    注意

    +

    其中 bind-address 设置为控制节点的管理IP地址。

    +
  4. +
  5. +

    启动 DataBase 服务,并为其配置开机自启动:

    +
    systemctl enable mariadb.service
    +systemctl start mariadb.service
    +
  6. +
  7. +

    配置DataBase的默认密码(可选)

    +
    mysql_secure_installation
    +

    注意

    +

    根据提示进行即可

    +
  8. +
+

安装 RabbitMQ

+
    +
  1. +

    执行如下命令,安装软件包。

    +
    yum install rabbitmq-server
    +
  2. +
  3. +

    启动 RabbitMQ 服务,并为其配置开机自启动。

    +
    systemctl enable rabbitmq-server.service
    +systemctl start rabbitmq-server.service
    +
  4. +
  5. +

    添加 OpenStack用户。

    +
    rabbitmqctl add_user openstack RABBIT_PASS
    +

    注意

    +

    替换 RABBIT_PASS,为 OpenStack 用户设置密码

    +
  6. +
  7. +

    设置openstack用户权限,允许进行配置、写、读:

    +
    rabbitmqctl set_permissions openstack ".*" ".*" ".*"
    +
  8. +
+

安装 Memcached

+
    +
  1. +

    执行如下命令,安装依赖软件包。

    +
    yum install memcached python3-memcached
    +
  2. +
  3. +

    编辑 /etc/sysconfig/memcached 文件。

    +
    vim /etc/sysconfig/memcached
    +
    +OPTIONS="-l 127.0.0.1,::1,controller"
    +
  4. +
  5. +

    执行如下命令,启动 Memcached 服务,并为其配置开机启动。

    +
    systemctl enable memcached.service
    +systemctl start memcached.service
    +

    注意

    +

    服务启动后,可以通过命令memcached-tool controller stats确保启动正常,服务可用,其中可以将controller替换为控制节点的管理IP地址。

    +
  6. +
+

安装 OpenStack

+

Keystone 安装

+
    +
  1. +

    创建 keystone 数据库并授权。

    +
    mysql -u root -p
    +
    +MariaDB [(none)]> CREATE DATABASE keystone;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
    +IDENTIFIED BY 'KEYSTONE_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
    +IDENTIFIED BY 'KEYSTONE_DBPASS';
    +MariaDB [(none)]> exit
    +

    注意

    +

    替换 KEYSTONE_DBPASS,为 Keystone 数据库设置密码

    +
  2. +
  3. +

    安装软件包。

    +
    yum install openstack-keystone httpd mod_wsgi
    +
  4. +
  5. +

    配置keystone相关配置

    +
    vim /etc/keystone/keystone.conf
    +
    +[database]
    +connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone
    +
    +[token]
    +provider = fernet
    +

    解释

    +

    [database]部分,配置数据库入口

    +

    [token]部分,配置token provider

    +

    注意:

    +

    替换 KEYSTONE_DBPASS 为 Keystone 数据库的密码

    +
  6. +
  7. +

    同步数据库。

    +
    su -s /bin/sh -c "keystone-manage db_sync" keystone
    +
  8. +
  9. +

    初始化Fernet密钥仓库。

    +
    keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
    +keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
    +
  10. +
  11. +

    启动服务。

    +
    keystone-manage bootstrap --bootstrap-password ADMIN_PASS \
    +--bootstrap-admin-url http://controller:5000/v3/ \
    +--bootstrap-internal-url http://controller:5000/v3/ \
    +--bootstrap-public-url http://controller:5000/v3/ \
    +--bootstrap-region-id RegionOne
    +

    注意

    +

    替换 ADMIN_PASS,为 admin 用户设置密码

    +
  12. +
  13. +

    配置Apache HTTP server

    +
    vim /etc/httpd/conf/httpd.conf
    +
    +ServerName controller
    +
    ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
    +

    解释

    +

    配置 ServerName 项引用控制节点

    +

    注意 +如果 ServerName 项不存在则需要创建

    +
  14. +
  15. +

    启动Apache HTTP服务。

    +
    systemctl enable httpd.service
    +systemctl start httpd.service
    +
  16. +
  17. +

    创建环境变量配置。

    +
    cat << EOF >> ~/.admin-openrc
    +export OS_PROJECT_DOMAIN_NAME=Default
    +export OS_USER_DOMAIN_NAME=Default
    +export OS_PROJECT_NAME=admin
    +export OS_USERNAME=admin
    +export OS_PASSWORD=ADMIN_PASS
    +export OS_AUTH_URL=http://controller:5000/v3
    +export OS_IDENTITY_API_VERSION=3
    +export OS_IMAGE_API_VERSION=2
    +EOF
    +

    注意

    +

    替换 ADMIN_PASS 为 admin 用户的密码

    +
  18. +
  19. +

    依次创建domain, projects, users, roles,需要先安装好python3-openstackclient:

    +
    yum install python3-openstackclient
    +

    导入环境变量

    +
    source ~/.admin-openrc
    +

    创建project service,其中 domain default 在 keystone-manage bootstrap 时已创建

    +
    openstack domain create --description "An Example Domain" example
    +
    openstack project create --domain default --description "Service Project" service
    +

    创建(non-admin)project myproject,user myuser 和 role myrole,为 myprojectmyuser 添加角色myrole

    +
    openstack project create --domain default --description "Demo Project" myproject
    +openstack user create --domain default --password-prompt myuser
    +openstack role create myrole
    +openstack role add --project myproject --user myuser myrole
    +
  20. +
  21. +

    验证

    +

    取消临时环境变量OS_AUTH_URL和OS_PASSWORD:

    +
    source ~/.admin-openrc
    +unset OS_AUTH_URL OS_PASSWORD
    +

    为admin用户请求token:

    +
    openstack --os-auth-url http://controller:5000/v3 \
    +--os-project-domain-name Default --os-user-domain-name Default \
    +--os-project-name admin --os-username admin token issue
    +

    为myuser用户请求token:

    +
    openstack --os-auth-url http://controller:5000/v3 \
    +--os-project-domain-name Default --os-user-domain-name Default \
    +--os-project-name myproject --os-username myuser token issue
    +
  22. +
+

Glance 安装

+
    +
  1. +

    创建数据库、服务凭证和 API 端点

    +

    创建数据库:

    +
    mysql -u root -p
    +
    +MariaDB [(none)]> CREATE DATABASE glance;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
    +IDENTIFIED BY 'GLANCE_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
    +IDENTIFIED BY 'GLANCE_DBPASS';
    +MariaDB [(none)]> exit
    +

    注意:

    +

    替换 GLANCE_DBPASS,为 glance 数据库设置密码

    +

    创建服务凭证

    +
    source ~/.admin-openrc
    +
    +openstack user create --domain default --password-prompt glance
    +openstack role add --project service --user glance admin
    +openstack service create --name glance --description "OpenStack Image" image
    +

    创建镜像服务API端点:

    +
    openstack endpoint create --region RegionOne image public http://controller:9292
    +openstack endpoint create --region RegionOne image internal http://controller:9292
    +openstack endpoint create --region RegionOne image admin http://controller:9292
    +
  2. +
  3. +

    安装软件包

    +
    yum install openstack-glance
    +
  4. +
  5. +

    配置glance相关配置:

    +
    vim /etc/glance/glance-api.conf
    +
    +[database]
    +connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
    +
    +[keystone_authtoken]
    +www_authenticate_uri  = http://controller:5000
    +auth_url = http://controller:5000
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +project_name = service
    +username = glance
    +password = GLANCE_PASS
    +
    +[paste_deploy]
    +flavor = keystone
    +
    +[glance_store]
    +stores = file,http
    +default_store = file
    +filesystem_store_datadir = /var/lib/glance/images/
    +

    解释:

    +

    [database]部分,配置数据库入口

    +

    [keystone_authtoken] [paste_deploy]部分,配置身份认证服务入口

    +

    [glance_store]部分,配置本地文件系统存储和镜像文件的位置

    +

    注意

    +

    替换 GLANCE_DBPASS 为 glance 数据库的密码

    +

    替换 GLANCE_PASS 为 glance 用户的密码

    +
  6. +
  7. +

    同步数据库:

    +
    su -s /bin/sh -c "glance-manage db_sync" glance
    +
  8. +
  9. +

    启动服务:

    +
    systemctl enable openstack-glance-api.service
    +systemctl start openstack-glance-api.service
    +
  10. +
  11. +

    验证

    +

    下载镜像

    +
    source ~/.admin-openrc
    +
    +wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
    +

    注意

    +

    如果您使用的环境是鲲鹏架构,请下载aarch64版本的镜像;已对镜像cirros-0.5.2-aarch64-disk.img进行测试。

    +
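例如,下载文中测试过的aarch64镜像(示意,下载地址按cirros官方路径假设):

```shell
wget http://download.cirros-cloud.net/0.5.2/cirros-0.5.2-aarch64-disk.img
```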

    向Image服务上传镜像:

    +
    openstack image create --disk-format qcow2 --container-format bare \
    +                       --file cirros-0.4.0-x86_64-disk.img --public cirros
    +

    确认镜像上传并验证属性:

    +
    openstack image list
    +
  12. +
+

Placement安装

+
    +
  1. +

    创建数据库、服务凭证和 API 端点

    +

    创建数据库:

    +

    作为 root 用户访问数据库,创建 placement 数据库并授权。

    +
    mysql -u root -p
    +MariaDB [(none)]> CREATE DATABASE placement;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' \
    +IDENTIFIED BY 'PLACEMENT_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' \
    +IDENTIFIED BY 'PLACEMENT_DBPASS';
    +MariaDB [(none)]> exit
    +

    注意

    +

    替换 PLACEMENT_DBPASS 为 placement 数据库设置密码

    +
    source admin-openrc
    +

    执行如下命令,创建 placement 服务凭证、创建 placement 用户以及添加‘admin’角色到用户‘placement’。

    +

    创建Placement API服务

    +
    openstack user create --domain default --password-prompt placement
    +openstack role add --project service --user placement admin
    +openstack service create --name placement --description "Placement API" placement
    +

    创建placement服务API端点:

    +
    openstack endpoint create --region RegionOne placement public http://controller:8778
    +openstack endpoint create --region RegionOne placement internal http://controller:8778
    +openstack endpoint create --region RegionOne placement admin http://controller:8778
    +
  2. +
  3. +

    安装和配置

    +

    安装软件包:

    +
    yum install openstack-placement-api
    +

    配置placement:

    +

    编辑 /etc/placement/placement.conf 文件:

    +

    在[placement_database]部分,配置数据库入口

    +

    在[api] [keystone_authtoken]部分,配置身份认证服务入口

    +
    # vim /etc/placement/placement.conf
    +[placement_database]
    +# ...
    +connection = mysql+pymysql://placement:PLACEMENT_DBPASS@controller/placement
    +[api]
    +# ...
    +auth_strategy = keystone
    +[keystone_authtoken]
    +# ...
    +auth_url = http://controller:5000/v3
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +project_name = service
    +username = placement
    +password = PLACEMENT_PASS
    +

    其中,替换 PLACEMENT_DBPASS 为 placement 数据库的密码,替换 PLACEMENT_PASS 为 placement 用户的密码。

    +

    同步数据库:

    +
    su -s /bin/sh -c "placement-manage db sync" placement
    +

    启动httpd服务:

    +
    systemctl restart httpd
    +
  4. +
  5. +

    验证

    +

    执行如下命令,执行状态检查:

    +
    . admin-openrc
    +placement-status upgrade check
    +

    安装osc-placement,列出可用的资源类别及特性:

    +
    yum install python3-osc-placement
    +openstack --os-placement-api-version 1.2 resource class list --sort-column name
    +openstack --os-placement-api-version 1.6 trait list --sort-column name
    +
  6. +
+

Nova 安装

+
    +
  1. +

    创建数据库、服务凭证和 API 端点

    +

    创建数据库:

    +
    mysql -u root -p                                                                               (CTL)
    +
    +MariaDB [(none)]> CREATE DATABASE nova_api;
    +MariaDB [(none)]> CREATE DATABASE nova;
    +MariaDB [(none)]> CREATE DATABASE nova_cell0;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> exit
    +

    注意

    +

    替换NOVA_DBPASS,为nova数据库设置密码

    +
    source ~/.admin-openrc                                                                         (CTL)
    +

    创建nova服务凭证:

    +
    openstack user create --domain default --password-prompt nova                                  (CTL)
    +openstack role add --project service --user nova admin                                         (CTL)
    +openstack service create --name nova --description "OpenStack Compute" compute                 (CTL)
    +

    创建nova API端点:

    +
    openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1        (CTL)
    +openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1      (CTL)
    +openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1         (CTL)
    +
  2. +
  3. +

    安装软件包

    +
    yum install openstack-nova-api openstack-nova-conductor \                                      (CTL)
    +openstack-nova-novncproxy openstack-nova-scheduler 
    +
    +yum install openstack-nova-compute                                                             (CPT)
    +

    注意

    +

    如果为arm64结构,还需要执行以下命令

    +
    yum install edk2-aarch64                                                                       (CPT)
    +
  4. +
  5. +

    配置nova相关配置

    +
    vim /etc/nova/nova.conf
    +
    +[DEFAULT]
    +enabled_apis = osapi_compute,metadata
    +transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
    +my_ip = 10.0.0.1
    +use_neutron = true
    +firewall_driver = nova.virt.firewall.NoopFirewallDriver
    +compute_driver=libvirt.LibvirtDriver                                                           (CPT)
    +instances_path = /var/lib/nova/instances/                                                      (CPT)
    +lock_path = /var/lib/nova/tmp                                                                  (CPT)
    +
    +[api_database]
    +connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api                              (CTL)
    +
    +[database]
    +connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova                                  (CTL)
    +
    +[api]
    +auth_strategy = keystone
    +
    +[keystone_authtoken]
    +www_authenticate_uri = http://controller:5000/
    +auth_url = http://controller:5000/
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +project_name = service
    +username = nova
    +password = NOVA_PASS
    +
    +[vnc]
    +enabled = true
    +server_listen = $my_ip
    +server_proxyclient_address = $my_ip
    +novncproxy_base_url = http://controller:6080/vnc_auto.html                                     (CPT)
    +
    +[glance]
    +api_servers = http://controller:9292
    +
    +[oslo_concurrency]
    +lock_path = /var/lib/nova/tmp                                                                  (CTL)
    +
    +[placement]
    +region_name = RegionOne
    +project_domain_name = Default
    +project_name = service
    +auth_type = password
    +user_domain_name = Default
    +auth_url = http://controller:5000/v3
    +username = placement
    +password = PLACEMENT_PASS
    +
    +[neutron]
    +auth_url = http://controller:5000
    +auth_type = password
    +project_domain_name = default
    +user_domain_name = default
    +region_name = RegionOne
    +project_name = service
    +username = neutron
    +password = NEUTRON_PASS
    +service_metadata_proxy = true                                                                  (CTL)
    +metadata_proxy_shared_secret = METADATA_SECRET                                                 (CTL)
    +

    解释

    +

    [default]部分,启用计算和元数据的API,配置RabbitMQ消息队列入口,配置my_ip,启用网络服务neutron;

    +

    [api_database] [database]部分,配置数据库入口;

    +

    [api] [keystone_authtoken]部分,配置身份认证服务入口;

    +

    [vnc]部分,启用并配置远程控制台入口;

    +

    [glance]部分,配置镜像服务API的地址;

    +

    [oslo_concurrency]部分,配置lock path;

    +

    [placement]部分,配置placement服务的入口。

    +

    注意

    +

    替换 RABBIT_PASS 为 RabbitMQ 中 openstack 账户的密码;

    +

    配置 my_ip 为控制节点的管理IP地址;

    +

    替换 NOVA_DBPASS 为nova数据库的密码;

    +

    替换 NOVA_PASS 为nova用户的密码;

    +

    替换 PLACEMENT_PASS 为placement用户的密码;

    +

    替换 NEUTRON_PASS 为neutron用户的密码;

    +

    替换METADATA_SECRET为合适的元数据代理secret。

    +

    额外

    +

    确定是否支持虚拟机硬件加速(x86架构):

    +
    egrep -c '(vmx|svm)' /proc/cpuinfo                                                             (CPT)
    +

    如果返回值为0则不支持硬件加速,需要配置libvirt使用QEMU而不是KVM:

    +
    vim /etc/nova/nova.conf                                                                        (CPT)
    +
    +[libvirt]
    +virt_type = qemu
    +

    如果返回值为1或更大的值,则支持硬件加速,则virt_type可以配置为kvm

    +
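若支持硬件加速,可将virt_type配置为kvm,示意如下:

```shell
vim /etc/nova/nova.conf                                                                        (CPT)

[libvirt]
virt_type = kvm
```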

    注意

    +

    如果为arm64结构,还需要在计算节点执行以下命令

    +
    
    +mkdir -p /usr/share/AAVMF
    +chown nova:nova /usr/share/AAVMF
    +
    +ln -s /usr/share/edk2/aarch64/QEMU_EFI-pflash.raw \
    +      /usr/share/AAVMF/AAVMF_CODE.fd
    +ln -s /usr/share/edk2/aarch64/vars-template-pflash.raw \
    +      /usr/share/AAVMF/AAVMF_VARS.fd
    +
    +vim /etc/libvirt/qemu.conf
    +
    +nvram = ["/usr/share/AAVMF/AAVMF_CODE.fd: \
    +         /usr/share/AAVMF/AAVMF_VARS.fd", \
    +         "/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw: \
    +         /usr/share/edk2/aarch64/vars-template-pflash.raw"]
    +

    并且当ARM架构下的部署环境为嵌套虚拟化时,libvirt配置如下:

    +
    [libvirt]
    +virt_type = qemu
    +cpu_mode = custom
    +cpu_model = cortex-a72
    +
  6. +
  7. +

    同步数据库

    +

    同步nova-api数据库:

    +
    su -s /bin/sh -c "nova-manage api_db sync" nova                                                (CTL)
    +

    注册cell0数据库:

    +
    su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova                                          (CTL)
    +

    创建cell1 cell:

    +
    su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova                 (CTL)
    +

    同步nova数据库:

    +
    su -s /bin/sh -c "nova-manage db sync" nova                                                    (CTL)
    +

    验证cell0和cell1注册正确:

    +
    su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova                                         (CTL)
    +

    添加计算节点到openstack集群

    +
    su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova                           (CTL)
    +
  8. +
  9. +

    启动服务

    +
    systemctl enable \                                                                             (CTL)
    +openstack-nova-api.service \
    +openstack-nova-scheduler.service \
    +openstack-nova-conductor.service \
    +openstack-nova-novncproxy.service
    +
    +systemctl start \                                                                              (CTL)
    +openstack-nova-api.service \
    +openstack-nova-scheduler.service \
    +openstack-nova-conductor.service \
    +openstack-nova-novncproxy.service
    +
    systemctl enable libvirtd.service openstack-nova-compute.service                               (CPT)
    +systemctl start libvirtd.service openstack-nova-compute.service                                (CPT)
    +
  10. +
  11. +

    验证

    +
    source ~/.admin-openrc                                                                         (CTL)
    +

    列出服务组件,验证每个流程都成功启动和注册:

    +
    openstack compute service list                                                                 (CTL)
    +

    列出身份服务中的API端点,验证与身份服务的连接:

    +
    openstack catalog list                                                                         (CTL)
    +

    列出镜像服务中的镜像,验证与镜像服务的连接:

    +
    openstack image list                                                                           (CTL)
    +

    检查cells是否运作成功,以及其他必要条件是否已具备。

    +
    nova-status upgrade check                                                                      (CTL)
    +
  12. +
+

Neutron 安装

+
    +
  1. +

    创建数据库、服务凭证和 API 端点

    +

    创建数据库:

    +
    mysql -u root -p                                                                               (CTL)
    +
    +MariaDB [(none)]> CREATE DATABASE neutron;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
    +IDENTIFIED BY 'NEUTRON_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
    +IDENTIFIED BY 'NEUTRON_DBPASS';
    +MariaDB [(none)]> exit
    +

    注意

    +

    替换 NEUTRON_DBPASS 为 neutron 数据库设置密码。

    +
    source ~/.admin-openrc                                                                         (CTL)
    +

    创建neutron服务凭证

    +
    openstack user create --domain default --password-prompt neutron                               (CTL)
    +openstack role add --project service --user neutron admin                                      (CTL)
    +openstack service create --name neutron --description "OpenStack Networking" network           (CTL)
    +

    创建Neutron服务API端点:

    +
    openstack endpoint create --region RegionOne network public http://controller:9696             (CTL)
    +openstack endpoint create --region RegionOne network internal http://controller:9696           (CTL)
    +openstack endpoint create --region RegionOne network admin http://controller:9696              (CTL)
    +
  2. +
  3. +

    安装软件包:

    +
    yum install openstack-neutron openstack-neutron-linuxbridge ebtables ipset \                   (CTL)
    +openstack-neutron-ml2
    +
    yum install openstack-neutron-linuxbridge ebtables ipset                                       (CPT)
    +
  4. +
  5. +

    配置neutron相关配置:

    +

    配置主体配置

    +
    vim /etc/neutron/neutron.conf
    +
    +[database]
    +connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron                         (CTL)
    +
    +[DEFAULT]
    +core_plugin = ml2                                                                              (CTL)
    +service_plugins = router                                                                       (CTL)
    +allow_overlapping_ips = true                                                                   (CTL)
    +transport_url = rabbit://openstack:RABBIT_PASS@controller
    +auth_strategy = keystone
    +notify_nova_on_port_status_changes = true                                                      (CTL)
    +notify_nova_on_port_data_changes = true                                                        (CTL)
    +api_workers = 3                                                                                (CTL)
    +
    +[keystone_authtoken]
    +www_authenticate_uri = http://controller:5000
    +auth_url = http://controller:5000
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +project_name = service
    +username = neutron
    +password = NEUTRON_PASS
    +
    +[nova]
    +auth_url = http://controller:5000                                                              (CTL)
    +auth_type = password                                                                           (CTL)
    +project_domain_name = Default                                                                  (CTL)
    +user_domain_name = Default                                                                     (CTL)
    +region_name = RegionOne                                                                        (CTL)
    +project_name = service                                                                         (CTL)
    +username = nova                                                                                (CTL)
    +password = NOVA_PASS                                                                           (CTL)
    +
    +[oslo_concurrency]
    +lock_path = /var/lib/neutron/tmp
    +

    解释

    +

    [database]部分,配置数据库入口;

    +

    [default]部分,启用ml2插件和router插件,允许ip地址重叠,配置RabbitMQ消息队列入口;

    +

    [default] [keystone]部分,配置身份认证服务入口;

    +

    [default] [nova]部分,配置网络来通知计算网络拓扑的变化;

    +

    [oslo_concurrency]部分,配置lock path。

    +

    注意

    +

    替换NEUTRON_DBPASS为 neutron 数据库的密码;

    +

    替换RABBIT_PASS为 RabbitMQ中openstack 账户的密码;

    +

    替换NEUTRON_PASS为 neutron 用户的密码;

    +

    替换NOVA_PASS为 nova 用户的密码。

    +

    配置ML2插件:

    +
    vim /etc/neutron/plugins/ml2/ml2_conf.ini
    +
    +[ml2]
    +type_drivers = flat,vlan,vxlan
    +tenant_network_types = vxlan
    +mechanism_drivers = linuxbridge,l2population
    +extension_drivers = port_security
    +
    +[ml2_type_flat]
    +flat_networks = provider
    +
    +[ml2_type_vxlan]
    +vni_ranges = 1:1000
    +
    +[securitygroup]
    +enable_ipset = true
    +

    创建/etc/neutron/plugin.ini的符号链接

    +
    ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
    +

    注意

    +

    [ml2]部分,启用 flat、vlan、vxlan 网络,启用 linuxbridge 及 l2population 机制,启用端口安全扩展驱动;

    +

    [ml2_type_flat]部分,配置 flat 网络为 provider 虚拟网络;

    +

    [ml2_type_vxlan]部分,配置 VXLAN 网络标识符范围;

    +

    [securitygroup]部分,配置允许 ipset。

    +

    补充

    +

    l2 的具体配置可以根据用户需求自行修改,本文使用的是provider network + linuxbridge

    +

    配置 Linux bridge 代理:

    +
    vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
    +
    +[linux_bridge]
    +physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME
    +
    +[vxlan]
    +enable_vxlan = true
    +local_ip = OVERLAY_INTERFACE_IP_ADDRESS
    +l2_population = true
    +
    +[securitygroup]
    +enable_security_group = true
    +firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
    +

    解释

    +

    [linux_bridge]部分,映射 provider 虚拟网络到物理网络接口;

    +

    [vxlan]部分,启用 vxlan 覆盖网络,配置处理覆盖网络的物理网络接口 IP 地址,启用 layer-2 population;

    +

    [securitygroup]部分,允许安全组,配置 linux bridge iptables 防火墙驱动。

    +

    注意

    +

    替换PROVIDER_INTERFACE_NAME为物理网络接口;

    +

    替换OVERLAY_INTERFACE_IP_ADDRESS为控制节点的管理IP地址。

    +
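例如,以控制节点为例,假设provider物理网卡为eth1、管理IP为10.0.0.11(仅为示意):

```shell
[linux_bridge]
physical_interface_mappings = provider:eth1

[vxlan]
enable_vxlan = true
local_ip = 10.0.0.11
l2_population = true
```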

    配置Layer-3代理:

    +
    vim /etc/neutron/l3_agent.ini                                                                  (CTL)
    +
    +[DEFAULT]
    +interface_driver = linuxbridge
    +

    解释

    +

    在[default]部分,配置接口驱动为linuxbridge

    +

    配置DHCP代理:

    +
    vim /etc/neutron/dhcp_agent.ini                                                                (CTL)
    +
    +[DEFAULT]
    +interface_driver = linuxbridge
    +dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
    +enable_isolated_metadata = true
    +

    解释

    +

    [default]部分,配置linuxbridge接口驱动、Dnsmasq DHCP驱动,启用隔离的元数据。

    +

    配置metadata代理:

    +
    vim /etc/neutron/metadata_agent.ini                                                            (CTL)
    +
    +[DEFAULT]
    +nova_metadata_host = controller
    +metadata_proxy_shared_secret = METADATA_SECRET
    +

    解释

    +

    [default]部分,配置元数据主机和shared secret。

    +

    注意

    +

    替换METADATA_SECRET为合适的元数据代理secret。

    +
  6. +
  7. +

    配置nova相关配置

    +
    vim /etc/nova/nova.conf
    +
    +[neutron]
    +auth_url = http://controller:5000
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +region_name = RegionOne
    +project_name = service
    +username = neutron
    +password = NEUTRON_PASS
    +service_metadata_proxy = true                                                                  (CTL)
    +metadata_proxy_shared_secret = METADATA_SECRET                                                 (CTL)
    +

    解释

    +

    [neutron]部分,配置访问参数,启用元数据代理,配置secret。

    +

    注意

    +

    替换NEUTRON_PASS为 neutron 用户的密码;

    +

    替换METADATA_SECRET为合适的元数据代理secret。

    +
  8. +
  9. +

    同步数据库:

    +
    su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
    +--config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
    +
  10. +
  11. +

    重启计算API服务:

    +
    systemctl restart openstack-nova-api.service
    +
  12. +
  13. +

    启动网络服务

    +
    systemctl enable neutron-server.service neutron-linuxbridge-agent.service \                    (CTL)
    +neutron-dhcp-agent.service neutron-metadata-agent.service \
    +neutron-l3-agent.service
    +
    +systemctl restart neutron-server.service neutron-linuxbridge-agent.service \                   (CTL)
    +neutron-dhcp-agent.service neutron-metadata-agent.service \
    +neutron-l3-agent.service
    +
    +systemctl enable neutron-linuxbridge-agent.service                                             (CPT)
    +systemctl restart neutron-linuxbridge-agent.service openstack-nova-compute.service             (CPT)
    +
  14. +
  15. +

    验证

    +

    验证 neutron 代理启动成功:

    +
    openstack network agent list
    +
  16. +
+

Cinder 安装

+
    +
  1. +

    创建数据库、服务凭证和 API 端点

    +

    创建数据库:

    +
    mysql -u root -p
    +
    +MariaDB [(none)]> CREATE DATABASE cinder;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \
    +IDENTIFIED BY 'CINDER_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \
    +IDENTIFIED BY 'CINDER_DBPASS';
    +MariaDB [(none)]> exit
    +

    注意

    +

    替换 CINDER_DBPASS 为cinder数据库设置密码。

    +
    source ~/.admin-openrc
    +

    创建cinder服务凭证:

    +
    openstack user create --domain default --password-prompt cinder
    +openstack role add --project service --user cinder admin
    +openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
    +openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
    +

    创建块存储服务API端点:

    +
    openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(project_id\)s
    +openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(project_id\)s
    +openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(project_id\)s
    +openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s
    +openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s
    +openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s
    +
  2. +
  3. +

    安装软件包:

    +
    yum install openstack-cinder-api openstack-cinder-scheduler                                    (CTL)
    +
    yum install lvm2 device-mapper-persistent-data scsi-target-utils rpcbind nfs-utils \           (STG)
    +            openstack-cinder-volume openstack-cinder-backup
    +
  4. +
  5. +

    准备存储设备,以下仅为示例:

    +
    pvcreate /dev/vdb
    +vgcreate cinder-volumes /dev/vdb
    +
    +vim /etc/lvm/lvm.conf
    +
    +
    +devices {
    +...
    +filter = [ "a/vdb/", "r/.*/"]
    +

    解释

    +

    在devices部分,添加过滤以接受/dev/vdb设备拒绝其他设备。

    +
  6. +
  7. +

    准备NFS

    +
    mkdir -p /root/cinder/backup
    +
+cat << EOF >> /etc/exports
    +/root/cinder/backup 192.168.1.0/24(rw,sync,no_root_squash,no_all_squash)
    +EOF
    +
    +
  8. +
  9. +

    配置cinder相关配置:

    +
    vim /etc/cinder/cinder.conf
    +
    +[DEFAULT]
    +transport_url = rabbit://openstack:RABBIT_PASS@controller
    +auth_strategy = keystone
    +my_ip = 10.0.0.11
    +enabled_backends = lvm                                                                         (STG)
    +backup_driver=cinder.backup.drivers.nfs.NFSBackupDriver                                        (STG)
    +backup_share=HOST:PATH                                                                         (STG)
    +
    +[database]
    +connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder
    +
    +[keystone_authtoken]
    +www_authenticate_uri = http://controller:5000
    +auth_url = http://controller:5000
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +project_name = service
    +username = cinder
    +password = CINDER_PASS
    +
    +[oslo_concurrency]
    +lock_path = /var/lib/cinder/tmp
    +
    +[lvm]
    +volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver                                      (STG)
    +volume_group = cinder-volumes                                                                  (STG)
    +iscsi_protocol = iscsi                                                                         (STG)
    +iscsi_helper = tgtadm                                                                          (STG)
    +

    解释

    +

    [database]部分,配置数据库入口;

    +

    [DEFAULT]部分,配置RabbitMQ消息队列入口,配置my_ip;

    +

    [DEFAULT] [keystone_authtoken]部分,配置身份认证服务入口;

    +

    [oslo_concurrency]部分,配置lock path。

    +

    注意

    +

    替换CINDER_DBPASS为 cinder 数据库的密码;

    +

    替换RABBIT_PASS为 RabbitMQ 中 openstack 账户的密码;

    +

    配置my_ip为控制节点的管理 IP 地址;

    +

    替换CINDER_PASS为 cinder 用户的密码;

    +

    替换HOST:PATH为 NFS 的HOSTIP和共享路径;

    +
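例如(示意,假设NFS服务端IP为192.168.1.10,共享路径为前文创建的/root/cinder/backup):

```shell
backup_share = 192.168.1.10:/root/cinder/backup
```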
  10. +
  11. +

    同步数据库:

    +
    su -s /bin/sh -c "cinder-manage db sync" cinder                                                (CTL)
    +
  12. +
  13. +

    配置nova:

    +
    vim /etc/nova/nova.conf                                                                        (CTL)
    +
    +[cinder]
    +os_region_name = RegionOne
    +
  14. +
  15. +

    重启计算API服务

    +
    systemctl restart openstack-nova-api.service
    +
  16. +
  17. +

    启动cinder服务

    +
    systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service               (CTL)
    +systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service                (CTL)
    +
    systemctl enable rpcbind.service nfs-server.service tgtd.service iscsid.service \              (STG)
    +                 openstack-cinder-volume.service \
    +                 openstack-cinder-backup.service
    +systemctl start rpcbind.service nfs-server.service tgtd.service iscsid.service \               (STG)
    +                openstack-cinder-volume.service \
    +                openstack-cinder-backup.service
    +

    注意

    +

    当cinder使用tgtadm的方式挂卷的时候,要修改/etc/tgt/tgtd.conf,内容如下,保证tgtd可以发现cinder-volume的iscsi target。

    +
    include /var/lib/cinder/volumes/*
    +
  18. +
  19. +

    验证

    +
    source ~/.admin-openrc
    +openstack volume service list
    +
  20. +
+

horizon 安装

+
    +
  1. +

    安装软件包

    +
    yum install openstack-dashboard
    +
  2. +
  3. +

    修改文件

    +

    修改变量

    +
    vim /etc/openstack-dashboard/local_settings
    +
    +OPENSTACK_HOST = "controller"
    +ALLOWED_HOSTS = ['*', ]
    +
    +SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
    +
    +CACHES = {
    +'default': {
    +     'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
    +     'LOCATION': 'controller:11211',
    +    }
    +}
    +
    +OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
    +OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
    +OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
    +OPENSTACK_KEYSTONE_DEFAULT_ROLE = "member"
    +WEBROOT = '/dashboard'
    +POLICY_FILES_PATH = "/etc/openstack-dashboard"
    +
    +OPENSTACK_API_VERSIONS = {
    +    "identity": 3,
    +    "image": 2,
    +    "volume": 3,
    +}
    +
  4. +
  5. +

    重启 httpd 服务

    +
    systemctl restart httpd.service memcached.service
    +
  6. +
  7. +

    验证:打开浏览器,输入网址http://HOSTIP/dashboard/,登录 horizon。

    +

    注意

    +

    替换HOSTIP为控制节点管理平面IP地址

    +
  8. +
+

Tempest 安装

+

Tempest是OpenStack的集成测试服务,如果用户需要全面自动化测试已安装的OpenStack环境的功能,则推荐使用该组件。否则,可以不用安装。

+
    +
  1. +

    安装Tempest

    +
    yum install openstack-tempest
    +
  2. +
  3. +

    初始化目录

    +
    tempest init mytest
    +
  4. +
  5. +

    修改配置文件。

    +
    cd mytest
    +vi etc/tempest.conf
    +

    tempest.conf中需要配置当前OpenStack环境的信息,具体内容可以参考官方示例

    +
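    下面是一个最小化的tempest.conf片段示例(其中controller、ADMIN_PASS、IMAGE_ID、FLAVOR_ID、PUBLIC_NET_ID均为占位值,仅作参考,请按实际环境填写):

    +
    [auth]
    +admin_username = admin
    +admin_password = ADMIN_PASS
    +admin_project_name = admin
    +admin_domain_name = Default
    +
    +[identity]
    +uri_v3 = http://controller:5000/v3
    +
    +[compute]
    +image_ref = IMAGE_ID
    +flavor_ref = FLAVOR_ID
    +
    +[network]
    +public_network_id = PUBLIC_NET_ID
    +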
  6. +
  7. +

    执行测试

    +
    tempest run
    +
  8. +
  9. +

    安装tempest扩展(可选):OpenStack各个服务本身也提供了一些tempest测试包,用户可以安装这些包来丰富tempest的测试内容。在Train中,我们提供了Cinder、Glance、Keystone、Ironic、Trove的扩展测试,用户可以执行如下命令进行安装使用:

    yum install python3-cinder-tempest-plugin python3-glance-tempest-plugin python3-ironic-tempest-plugin python3-keystone-tempest-plugin python3-trove-tempest-plugin

    +
  10. +
+

Ironic 安装

+

Ironic是OpenStack的裸金属服务,如果用户需要进行裸机部署则推荐使用该组件。否则,可以不用安装。

+
    +
  1. 设置数据库
  2. +
+

裸金属服务在数据库中存储信息,创建一个ironic用户可以访问的ironic数据库,替换IRONIC_DBPASSWORD为合适的密码

+

mysql -u root -p
+
+MariaDB [(none)]> CREATE DATABASE ironic CHARACTER SET utf8;
+MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'localhost' \
+IDENTIFIED BY 'IRONIC_DBPASSWORD';
+MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'%' \
+IDENTIFIED BY 'IRONIC_DBPASSWORD';
+2. 安装软件包

+
yum install openstack-ironic-api openstack-ironic-conductor python3-ironicclient
+

启动服务

+
systemctl enable openstack-ironic-api openstack-ironic-conductor
+systemctl start openstack-ironic-api openstack-ironic-conductor
+
    +
  1. 创建服务用户认证
  2. +
+

1、创建Bare Metal服务用户

+
openstack user create --password IRONIC_PASSWORD \
+                      --email ironic@example.com ironic
+openstack role add --project service --user ironic admin
+openstack service create --name ironic \
+                         --description "Ironic baremetal provisioning service" baremetal
+

2、创建Bare Metal服务访问入口

+
openstack endpoint create --region RegionOne baremetal admin http://$IRONIC_NODE:6385
+openstack endpoint create --region RegionOne baremetal public http://$IRONIC_NODE:6385
+openstack endpoint create --region RegionOne baremetal internal http://$IRONIC_NODE:6385
+
    +
  1. 配置ironic-api服务
  2. +
+

配置文件路径/etc/ironic/ironic.conf

+

1、通过connection选项配置数据库的位置,如下所示,替换IRONIC_DBPASSWORD为ironic用户的密码,替换DB_IP为数据库服务器所在的IP地址:

+
[database]
+
+# The SQLAlchemy connection string used to connect to the
+# database (string value)
+
+connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic
+

2、通过以下选项配置ironic-api服务使用RabbitMQ消息代理,替换RPC_*为RabbitMQ的详细地址和凭证

+
[DEFAULT]
+
+# A URL representing the messaging driver to use and its full
+# configuration. (string value)
+
+transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
+

用户也可自行使用json-rpc方式替换rabbitmq

+
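例如,一个示意性的最小配置如下(仅供参考,具体选项请以所用ironic版本的配置说明为准):

+
[DEFAULT]
+# 使用json-rpc代替消息队列作为RPC传输方式
+rpc_transport = json-rpc
+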

3、配置ironic-api服务使用身份认证服务的凭证,替换PUBLIC_IDENTITY_IP为身份认证服务器的公共IP,替换PRIVATE_IDENTITY_IP为身份认证服务器的私有IP,替换IRONIC_PASSWORD为身份认证服务中ironic用户的密码:

+
[DEFAULT]
+
+# Authentication strategy used by ironic-api: one of
+# "keystone" or "noauth". "noauth" should not be used in a
+# production environment because all authentication will be
+# disabled. (string value)
+
+auth_strategy=keystone
+
+[keystone_authtoken]
+# Authentication type to load (string value)
+auth_type=password
+# Complete public Identity API endpoint (string value)
+www_authenticate_uri=http://PUBLIC_IDENTITY_IP:5000
+# Complete admin Identity API endpoint. (string value)
+auth_url=http://PRIVATE_IDENTITY_IP:5000
+# Service username. (string value)
+username=ironic
+# Service account password. (string value)
+password=IRONIC_PASSWORD
+# Service tenant name. (string value)
+project_name=service
+# Domain name containing project (string value)
+project_domain_name=Default
+# User's domain name (string value)
+user_domain_name=Default
+
+

4、创建裸金属服务数据库表

+
ironic-dbsync --config-file /etc/ironic/ironic.conf create_schema
+

5、重启ironic-api服务

+
sudo systemctl restart openstack-ironic-api
+
    +
  1. 配置ironic-conductor服务
  2. +
+

1、替换HOST_IP为conductor host的IP

+
[DEFAULT]
+
+# IP address of this host. If unset, will determine the IP
+# programmatically. If unable to do so, will use "127.0.0.1".
+# (string value)
+
+my_ip=HOST_IP
+

2、配置数据库的位置,ironic-conductor应该使用和ironic-api相同的配置。替换IRONIC_DBPASSWORD为ironic用户的密码,替换DB_IP为数据库服务器所在的IP地址:

+
[database]
+
+# The SQLAlchemy connection string to use to connect to the
+# database. (string value)
+
+connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic
+

3、通过以下选项配置ironic-api服务使用RabbitMQ消息代理,ironic-conductor应该使用和ironic-api相同的配置,替换RPC_*为RabbitMQ的详细地址和凭证

+
[DEFAULT]
+
+# A URL representing the messaging driver to use and its full
+# configuration. (string value)
+
+transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
+

用户也可自行使用json-rpc方式替换rabbitmq

+

4、配置凭证访问其他OpenStack服务

+

为了与其他OpenStack服务进行通信,裸金属服务在请求其他服务时需要使用服务用户与OpenStack Identity服务进行认证。这些用户的凭据必须在与相应服务相关的每个配置文件中进行配置。

+
[neutron] - 访问OpenStack网络服务
+[glance] - 访问OpenStack镜像服务
+[swift] - 访问OpenStack对象存储服务
+[cinder] - 访问OpenStack块存储服务
+[inspector] - 访问OpenStack裸金属introspection服务
+[service_catalog] - 一个特殊项用于保存裸金属服务使用的凭证,该凭证用于发现注册在OpenStack身份认证服务目录中的自己的API URL端点
+

简单起见,可以对所有服务使用同一个服务用户。为了向后兼容,该用户应该和ironic-api服务的[keystone_authtoken]所配置的为同一个用户。但这不是必须的,也可以为每个服务创建并配置不同的服务用户。

+

在下面的示例中,用户访问OpenStack网络服务的身份验证信息配置为:

+
网络服务部署在名为RegionOne的身份认证服务域中,仅在服务目录中注册公共端点接口
+
+请求时使用特定的CA SSL证书进行HTTPS连接
+
+与ironic-api服务配置相同的服务用户
+
+动态密码认证插件基于其他选项发现合适的身份认证服务API版本
+
[neutron]
+
+# Authentication type to load (string value)
+auth_type = password
+# Authentication URL (string value)
+auth_url=https://IDENTITY_IP:5000/
+# Username (string value)
+username=ironic
+# User's password (string value)
+password=IRONIC_PASSWORD
+# Project name to scope to (string value)
+project_name=service
+# Domain ID containing project (string value)
+project_domain_id=default
+# User's domain id (string value)
+user_domain_id=default
+# PEM encoded Certificate Authority to use when verifying
+# HTTPs connections. (string value)
+cafile=/opt/stack/data/ca-bundle.pem
+# The default region_name for endpoint URL discovery. (string
+# value)
+region_name = RegionOne
+# List of interfaces, in order of preference, for endpoint
+# URL. (list value)
+valid_interfaces=public
+

默认情况下,为了与其他服务进行通信,裸金属服务会尝试通过身份认证服务的服务目录发现该服务合适的端点。如果希望对一个特定服务使用一个不同的端点,则在裸金属服务的配置文件中通过endpoint_override选项进行指定:

+
[neutron]
+# ...
+endpoint_override = <NEUTRON_API_ADDRESS>
+

5、配置允许的驱动程序和硬件类型

+

通过设置enabled_hardware_types设置ironic-conductor服务允许使用的硬件类型:

+
[DEFAULT]
+enabled_hardware_types = ipmi
+

配置硬件接口:

+
enabled_boot_interfaces = pxe
+enabled_deploy_interfaces = direct,iscsi
+enabled_inspect_interfaces = inspector
+enabled_management_interfaces = ipmitool
+enabled_power_interfaces = ipmitool
+

配置接口默认值:

+
[DEFAULT]
+default_deploy_interface = direct
+default_network_interface = neutron
+

如果启用了任何使用Direct deploy的驱动,必须安装和配置镜像服务的Swift后端。Ceph对象网关(RADOS网关)也支持作为镜像服务的后端。

+

6、重启ironic-conductor服务

+
sudo systemctl restart openstack-ironic-conductor
+
    +
  1. +

    配置httpd服务

    +
  2. +
  3. +

    创建ironic要使用的httpd的root目录并设置属主属组,目录路径要和/etc/ironic/ironic.conf中[deploy]组中http_root 配置项指定的路径要一致。

    +
    mkdir -p /var/lib/ironic/httproot
    +chown ironic.ironic /var/lib/ironic/httproot
    +
  4. +
  5. +

    安装和配置httpd服务

    +
      +
    1. +

      安装httpd服务,已有请忽略

      +

      yum install httpd -y
      + 2. 创建/etc/httpd/conf.d/openstack-ironic-httpd.conf文件,内容如下:

      +
      Listen 8080
      +
      +<VirtualHost *:8080>
      +    ServerName ironic.openeuler.com
      +
      +    ErrorLog "/var/log/httpd/openstack-ironic-httpd-error_log"
      +    CustomLog "/var/log/httpd/openstack-ironic-httpd-access_log" "%h %l %u %t \"%r\" %>s %b"
      +
      +    DocumentRoot "/var/lib/ironic/httproot"
      +    <Directory "/var/lib/ironic/httproot">
      +        Options Indexes FollowSymLinks
      +        Require all granted
      +    </Directory>
      +    LogLevel warn
      +    AddDefaultCharset UTF-8
      +    EnableSendfile on
      +</VirtualHost>
      +
      +

      注意监听的端口要和/etc/ironic/ironic.conf里[deploy]选项中http_url配置项中指定的端口一致。

      +
    2. +
    3. +

      重启httpd服务。

      +
      systemctl restart httpd
      +
    4. +
    +
  6. +
+

7.deploy ramdisk镜像制作

+

T版的ramdisk镜像支持通过ironic-python-agent服务或disk-image-builder工具制作,也可以使用社区最新的ironic-python-agent-builder。用户也可以自行选择其他工具制作。若使用T版原生工具,则需要安装对应的软件包。

+
yum install openstack-ironic-python-agent
+或者
+yum install diskimage-builder
+

具体的使用方法可以参考官方文档

+

这里介绍下使用ironic-python-agent-builder构建ironic使用的deploy镜像的完整过程。

+
    +
  1. +

    安装 ironic-python-agent-builder

    +
    1. 安装工具:
    +
    +    ```shell
    +    pip install ironic-python-agent-builder
    +    ```
    +
    +2. 修改以下文件中的python解释器:
    +
    +    ```shell
    +    /usr/bin/yum /usr/libexec/urlgrabber-ext-down
    +    ```
    +
    +3. 安装其它必须的工具:
    +
    +    ```shell
    +    yum install git
    +    ```
    +
    +    由于`DIB`依赖`semanage`命令,所以在制作镜像之前确定该命令是否可用:`semanage --help`,如果提示无此命令,安装即可:
    +
    +    ```shell
    +    # 先查询需要安装哪个包
    +    [root@localhost ~]# yum provides /usr/sbin/semanage
    +    已加载插件:fastestmirror
    +    Loading mirror speeds from cached hostfile
    +    * base: mirror.vcu.edu
    +    * extras: mirror.vcu.edu
    +    * updates: mirror.math.princeton.edu
    +    policycoreutils-python-2.5-34.el7.aarch64 : SELinux policy core python utilities
    +    源    :base
    +    匹配来源:
    +    文件名    :/usr/sbin/semanage
    +    # 安装
    +    [root@localhost ~]# yum install policycoreutils-python
    +    ```
    +
  2. +
  3. +

    制作镜像

    +
    如果是`arm`架构,需要添加:
    +```shell
    +export ARCH=aarch64
    +```
    +
    +基本用法:
    +
    +```shell
    +usage: ironic-python-agent-builder [-h] [-r RELEASE] [-o OUTPUT] [-e ELEMENT]
    +                                    [-b BRANCH] [-v] [--extra-args EXTRA_ARGS]
    +                                    distribution
    +
    +positional arguments:
    +    distribution          Distribution to use
    +
    +optional arguments:
    +    -h, --help            show this help message and exit
    +    -r RELEASE, --release RELEASE
    +                        Distribution release to use
    +    -o OUTPUT, --output OUTPUT
    +                        Output base file name
    +    -e ELEMENT, --element ELEMENT
    +                        Additional DIB element to use
    +    -b BRANCH, --branch BRANCH
    +                        If set, override the branch that is used for ironic-
    +                        python-agent and requirements
    +    -v, --verbose         Enable verbose logging in diskimage-builder
    +    --extra-args EXTRA_ARGS
    +                        Extra arguments to pass to diskimage-builder
    +```
    +
    +举例说明:
    +
    +```shell
    +ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky
    +```
    +
  4. +
  5. +

    允许ssh登录

    +
    初始化环境变量,然后制作镜像:
    +
    +```shell
    +export DIB_DEV_USER_USERNAME=ipa \
    +export DIB_DEV_USER_PWDLESS_SUDO=yes \
    +export DIB_DEV_USER_PASSWORD='123'
    +ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky -e selinux-permissive -e devuser
    +```
    +
  6. +
  7. +

    指定代码仓库

    +
    初始化对应的环境变量,然后制作镜像:
    +
    +```shell
    +# 指定仓库地址以及版本
    +DIB_REPOLOCATION_ironic_python_agent=git@172.20.2.149:liuzz/ironic-python-agent.git
    +DIB_REPOREF_ironic_python_agent=origin/develop
    +
    +# 直接从gerrit上clone代码
    +DIB_REPOLOCATION_ironic_python_agent=https://review.opendev.org/openstack/ironic-python-agent
    +DIB_REPOREF_ironic_python_agent=refs/changes/43/701043/1
    +```
    +
    +参考:[source-repositories](https://docs.openstack.org/diskimage-builder/latest/elements/source-repositories/README.html)。
    +
    +指定仓库地址及版本验证成功。
    +
  8. +
  9. +

    注意 + 原生的openstack里的pxe配置文件的模版不支持arm64架构,需要自己对原生openstack代码进行修改:

    +

    在T版中,社区的ironic仍然不支持arm64位的uefi pxe启动,表现为生成的grub.cfg文件(一般位于/tftpboot/下)格式不对而导致pxe启动失败

    +

    需要用户对生成grub.cfg的代码逻辑自行修改。

    +

    ironic向ipa发送查询命令执行状态请求的tls报错:

    +

    T版的ipa和ironic默认都会开启tls认证的方式向对方发送请求,根据官网的说明进行关闭即可。

    +
      +
    1. 修改ironic配置文件(/etc/ironic/ironic.conf)下面的配置中添加ipa-insecure=1:
    2. +
    +
    [agent]
    +verify_ca = False
    +
    +[pxe]
    +pxe_append_params = nofb nomodeset vga=normal coreos.autologin ipa-insecure=1
    +
      +
    1. ramdisk镜像中添加ipa配置文件/etc/ironic_python_agent/ironic_python_agent.conf并配置tls的配置如下:
    2. +
    +

    /etc/ironic_python_agent/ironic_python_agent.conf (需要提前创建/etc/ironic_python_agent目录)

    +
    [DEFAULT]
    +enable_auto_tls = False
    +

    设置权限:

    +
    chown -R ipa.ipa /etc/ironic_python_agent/
    +
      +
    1. 修改ipa服务的服务启动文件,添加配置文件选项
    2. +
    +

    vim usr/lib/systemd/system/ironic-python-agent.service

    +
    [Unit]
    +Description=Ironic Python Agent
    +After=network-online.target
    +
    +[Service]
    +ExecStartPre=/sbin/modprobe vfat
    +ExecStart=/usr/local/bin/ironic-python-agent --config-file /etc/ironic_python_agent/ironic_python_agent.conf
    +Restart=always
    +RestartSec=30s
    +
    +[Install]
    +WantedBy=multi-user.target
    +
  10. +
+

在Train中,我们还提供了ironic-inspector等服务,用户可根据自身需求安装。

+
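例如(包名以openEuler实际软件源中提供的为准):

+
yum install openstack-ironic-inspector
+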

Kolla 安装

+

Kolla为OpenStack服务提供生产环境可用的容器化部署的功能。

+

Kolla的安装十分简单,只需要安装对应的RPM包即可

+
yum install openstack-kolla openstack-kolla-ansible
+

安装完后,就可以使用kolla-ansible, kolla-build, kolla-genpwd, kolla-mergepwd等命令进行相关的镜像制作和容器环境部署了。

+
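例如,一个典型的all-in-one容器化部署流程大致如下(仅为示意,inventory文件路径等请以kolla-ansible官方文档和实际安装路径为准):

+
# 生成各服务随机密码
kolla-genpwd
+# 初始化目标节点、部署前检查、执行部署
+kolla-ansible -i /usr/share/kolla-ansible/ansible/inventory/all-in-one bootstrap-servers
+kolla-ansible -i /usr/share/kolla-ansible/ansible/inventory/all-in-one prechecks
+kolla-ansible -i /usr/share/kolla-ansible/ansible/inventory/all-in-one deploy
+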

Trove 安装

+

Trove是OpenStack的数据库服务,如果用户使用OpenStack提供的数据库服务则推荐使用该组件。否则,可以不用安装。

+

1.设置数据库

+

数据库服务在数据库中存储信息,创建一个trove用户可以访问的trove数据库,替换TROVE_DBPASSWORD为合适的密码

+
mysql -u root -p
+
+MariaDB [(none)]> CREATE DATABASE trove CHARACTER SET utf8;
+MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'localhost' \
+IDENTIFIED BY 'TROVE_DBPASSWORD';
+MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'%' \
+IDENTIFIED BY 'TROVE_DBPASSWORD';
+

2.创建服务用户认证

+

1、创建Trove服务用户

+

openstack user create --domain default --password TROVE_PASSWORD trove
+openstack role add --project service --user trove admin
+openstack service create --name trove --description "Database" database
+ 解释: TROVE_PASSWORD 替换为trove用户的密码

+

2、创建Database服务访问入口

+
openstack endpoint create --region RegionOne database public http://controller:8779/v1.0/%\(tenant_id\)s
+openstack endpoint create --region RegionOne database internal http://controller:8779/v1.0/%\(tenant_id\)s
+openstack endpoint create --region RegionOne database admin http://controller:8779/v1.0/%\(tenant_id\)s
+

3.安装和配置Trove各组件

+

1.安装Trove

+
yum install openstack-trove python3-troveclient
+

2.配置trove.conf +

vim /etc/trove/trove.conf
+
+ [DEFAULT]
+ log_dir = /var/log/trove
+ trove_auth_url = http://controller:5000/
+ nova_compute_url = http://controller:8774/v2
+ cinder_url = http://controller:8776/v1
+ swift_url = http://controller:8080/v1/AUTH_
+ rpc_backend = rabbit
+ transport_url = rabbit://openstack:RABBIT_PASS@controller:5672
+ auth_strategy = keystone
+ add_addresses = True
+ api_paste_config = /etc/trove/api-paste.ini
+ nova_proxy_admin_user = admin
+ nova_proxy_admin_pass = ADMIN_PASSWORD
+ nova_proxy_admin_tenant_name = service
+ taskmanager_manager = trove.taskmanager.manager.Manager
+ use_nova_server_config_drive = True
+ # Set these if using Neutron Networking
+ network_driver = trove.network.neutron.NeutronDriver
+ network_label_regex = .*
+
+ [database]
+ connection = mysql+pymysql://trove:TROVE_DBPASSWORD@controller/trove
+
+ [keystone_authtoken]
+ www_authenticate_uri = http://controller:5000/
+ auth_url = http://controller:5000/
+ auth_type = password
+ project_domain_name = default
+ user_domain_name = default
+ project_name = service
+ username = trove
+ password = TROVE_PASSWORD

+

解释:

+
    +
  • [DEFAULT]分组中的nova_compute_url和cinder_url为Nova和Cinder在Keystone中创建的endpoint
  • +
  • nova_proxy_XXX 为一个能访问Nova服务的用户信息,上例中使用admin用户为例
  • +
  • transport_url为RabbitMQ连接信息,RABBIT_PASS替换为RabbitMQ的密码
  • +
  • [database]分组中的connection 为前面在mysql中为Trove创建的数据库信息
  • +
  • Trove的用户信息中TROVE_PASSWORD替换为实际trove用户的密码
  • +
+

3.配置trove-guestagent.conf

+
vim /etc/trove/trove-guestagent.conf
+
+rabbit_host = controller
+rabbit_password = RABBIT_PASS
+trove_auth_url = http://controller:5000/
+

解释:guestagent是trove中的一个独立组件,需要预先内置到Trove通过Nova创建的虚拟机镜像中。创建好数据库实例后,会启动guestagent进程,负责通过消息队列(RabbitMQ)向Trove上报心跳,因此需要配置RabbitMQ的用户和密码信息。从Victoria版开始,Trove使用一个统一的镜像来运行不同类型的数据库,数据库服务运行在Guest虚拟机的Docker容器中。

+
    +
  • RABBIT_PASS替换为RabbitMQ的密码
  • +
+

4.生成数据Trove数据库表

+
su -s /bin/sh -c "trove-manage db_sync" trove
+

4.完成安装配置

+
    +
  1. 配置Trove服务自启动 +
    systemctl enable openstack-trove-api.service \
    +openstack-trove-taskmanager.service \
    +openstack-trove-conductor.service 
  2. +
  3. 启动服务 +
    systemctl start openstack-trove-api.service \
    +openstack-trove-taskmanager.service \
    +openstack-trove-conductor.service
  4. +
+

Swift 安装

+

Swift 提供了弹性可伸缩、高可用的分布式对象存储服务,适合存储大规模非结构化数据。

+
    +
  1. +

    创建服务凭证、API端点。

    +

    创建服务凭证

    +
    #创建swift用户:
    +openstack user create --domain default --password-prompt swift                 
    +#为swift用户添加admin角色:
    +openstack role add --project service --user swift admin                        
    +#创建swift服务实体:
    +openstack service create --name swift --description "OpenStack Object Storage" object-store                                                                   
    +

    创建swift API 端点:

    +
    openstack endpoint create --region RegionOne object-store public http://controller:8080/v1/AUTH_%\(project_id\)s                            
    +openstack endpoint create --region RegionOne object-store internal http://controller:8080/v1/AUTH_%\(project_id\)s                            
    +openstack endpoint create --region RegionOne object-store admin http://controller:8080/v1                                                  
    +
  2. +
  3. +

    安装软件包:

    +
    yum install openstack-swift-proxy python3-swiftclient python3-keystoneclient python3-keystonemiddleware memcached (CTL)
    +
  4. +
  5. +

    配置proxy-server相关配置

    +
  6. +
+

Swift RPM包里已经包含了一个基本可用的proxy-server.conf,只需要手动修改其中的ip和swift password即可。

+
***注意***
+
+**注意替换password为您在身份服务中为swift用户选择的密码**
+
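需要修改的内容通常位于[filter:authtoken]小节,示例如下(SWIFT_PASS为swift用户的密码,controller为控制节点地址,仅供参考):

+
vim /etc/swift/proxy-server.conf                                                                (CTL)
+
+[filter:authtoken]
+www_authenticate_uri = http://controller:5000
+auth_url = http://controller:5000
+memcached_servers = controller:11211
+auth_type = password
+project_domain_name = Default
+user_domain_name = Default
+project_name = service
+username = swift
+password = SWIFT_PASS
+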
    +
  1. +

    安装和配置存储节点 (STG)

    +

    安装支持的程序包: +

    yum install xfsprogs rsync

    +

    将/dev/vdb和/dev/vdc设备格式化为 XFS

    +
    mkfs.xfs /dev/vdb
    +mkfs.xfs /dev/vdc
    +

    创建挂载点目录结构:

    +
    mkdir -p /srv/node/vdb
    +mkdir -p /srv/node/vdc
    +

    找到新分区的 UUID:

    +
    blkid
    +

    编辑/etc/fstab文件并将以下内容添加到其中:

    +
    UUID="<UUID-from-output-above>" /srv/node/vdb xfs noatime 0 2
    +UUID="<UUID-from-output-above>" /srv/node/vdc xfs noatime 0 2
    +

    挂载设备:

    +

    mount /srv/node/vdb
    +mount /srv/node/vdc
    +注意

    +

    如果用户不需要容灾功能,以上步骤只需要创建一个设备即可,同时可以跳过下面的rsync配置

    +

    (可选)创建或编辑/etc/rsyncd.conf文件以包含以下内容:

    +

    uid = swift
    +gid = swift
    +log file = /var/log/rsyncd.log
    +pid file = /var/run/rsyncd.pid
    +address = MANAGEMENT_INTERFACE_IP_ADDRESS
    +
    +[account]
    +max connections = 2
    +path = /srv/node/
    +read only = False
    +lock file = /var/lock/account.lock
    +
    +[container]
    +max connections = 2
    +path = /srv/node/
    +read only = False
    +lock file = /var/lock/container.lock
    +
    +[object]
    +max connections = 2
    +path = /srv/node/
    +read only = False
    +lock file = /var/lock/object.lock
    +替换MANAGEMENT_INTERFACE_IP_ADDRESS为存储节点上管理网络的IP地址

    +

    启动rsyncd服务并配置它在系统启动时启动:

    +
    systemctl enable rsyncd.service
    +systemctl start rsyncd.service
    +
  2. +
  3. +

    在存储节点安装和配置组件 (STG)

    +

    安装软件包:

    +
    yum install openstack-swift-account openstack-swift-container openstack-swift-object
    +

    编辑/etc/swift目录的account-server.conf、container-server.conf和object-server.conf文件,替换bind_ip为存储节点上管理网络的IP地址。

    +
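    例如,在上述三个配置文件的[DEFAULT]小节中设置bind_ip(端口分别为account 6202、container 6201、object 6200,下面仅以account-server.conf为例,仅供参考):

    +
    vim /etc/swift/account-server.conf
    +
    +[DEFAULT]
    +bind_ip = MANAGEMENT_INTERFACE_IP_ADDRESS
    +bind_port = 6202
    +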

    确保挂载点目录结构的正确所有权:

    +
    chown -R swift:swift /srv/node
    +

    创建recon目录并确保其拥有正确的所有权:

    +
    mkdir -p /var/cache/swift
    +chown -R root:swift /var/cache/swift
    +chmod -R 775 /var/cache/swift
    +
  4. +
  5. +

    创建账号环 (CTL)

    +

    切换到/etc/swift目录。

    +
    cd /etc/swift
    +

    创建基础account.builder文件:

    +
    swift-ring-builder account.builder create 10 1 1
    +

    将每个存储节点添加到环中:

    +
    swift-ring-builder account.builder add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6202  --device DEVICE_NAME --weight DEVICE_WEIGHT
    +

    替换STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS为存储节点上管理网络的IP地址。替换DEVICE_NAME为同一存储节点上的存储设备名称

    +

    注意:对每个存储节点上的每个存储设备重复此命令

    +

    验证环的内容:

    +
    swift-ring-builder account.builder
    +

    重新平衡环:

    +
    swift-ring-builder account.builder rebalance
    +
  6. +
  7. +

    创建容器环 (CTL)

    +

    切换到/etc/swift目录。

    +

    创建基础container.builder文件:

    +
       swift-ring-builder container.builder create 10 1 1
    +

    将每个存储节点添加到环中:

    +
    swift-ring-builder container.builder \
    +  add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6201 \
    +  --device DEVICE_NAME --weight 100
    +
    +

    替换STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS为存储节点上管理网络的IP地址。替换DEVICE_NAME为同一存储节点上的存储设备名称

    +

    注意:对每个存储节点上的每个存储设备重复此命令

    +

    验证环的内容:

    +
    swift-ring-builder container.builder
    +

    重新平衡环:

    +
    swift-ring-builder container.builder rebalance
    +
  8. +
  9. +

    创建对象环 (CTL)

    +

    切换到/etc/swift目录。

    +

    创建基础object.builder文件:

    +
    swift-ring-builder object.builder create 10 1 1
    +

    将每个存储节点添加到环中

    +
     swift-ring-builder object.builder \
    +  add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6200 \
    +  --device DEVICE_NAME --weight 100
    +

    替换STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS为存储节点上管理网络的IP地址。替换DEVICE_NAME为同一存储节点上的存储设备名称

    +

    注意:对每个存储节点上的每个存储设备重复此命令

    +

    验证环的内容:

    +
    swift-ring-builder object.builder
    +

    重新平衡环:

    +
    swift-ring-builder object.builder rebalance
    +

    分发环配置文件:

    +

    将account.ring.gz、container.ring.gz以及object.ring.gz文件复制到每个存储节点和运行代理服务的任何其他节点上的/etc/swift目录。

    +
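    例如,可以使用scp进行分发(STORAGE_NODE为目标节点地址,仅为示意):

    +
    scp account.ring.gz container.ring.gz object.ring.gz root@STORAGE_NODE:/etc/swift/
    +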
  10. +
  11. +

    完成安装

    +

    编辑/etc/swift/swift.conf文件

    +
    [swift-hash]
    +swift_hash_path_suffix = test-hash
    +swift_hash_path_prefix = test-hash
    +
    +[storage-policy:0]
    +name = Policy-0
    +default = yes
    +

    用唯一值替换 test-hash

    +
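    例如,可以使用openssl生成随机值(仅为示意):

    +
    openssl rand -hex 10
    +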

    将swift.conf文件复制到/etc/swift每个存储节点和运行代理服务的任何其他节点上的目录。

    +

    在所有节点上,确保配置目录的正确所有权:

    +
    chown -R root:swift /etc/swift
    +

    在控制器节点和运行代理服务的任何其他节点上,启动对象存储代理服务及其依赖项,并将它们配置为在系统启动时启动:

    +
    systemctl enable openstack-swift-proxy.service memcached.service
    +systemctl start openstack-swift-proxy.service memcached.service
    +

    在存储节点上,启动对象存储服务并将它们配置为在系统启动时启动:

    +
    systemctl enable openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service
    +
    +systemctl start openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service
    +
    +systemctl enable openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service
    +
    +systemctl start openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service
    +
    +systemctl enable openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service
    +
    +systemctl start openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service
    +
  12. +
+

Cyborg 安装

+

Cyborg为OpenStack提供加速器设备的支持,包括 GPU, FPGA, ASIC, NP, SoCs, NVMe/NOF SSDs, ODP, DPDK/SPDK等等。

+

1.初始化对应数据库

+
CREATE DATABASE cyborg;
+GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'localhost' IDENTIFIED BY 'CYBORG_DBPASS';
+GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'%' IDENTIFIED BY 'CYBORG_DBPASS';
+

2.创建对应Keystone资源对象

+
$ openstack user create --domain default --password-prompt cyborg
+$ openstack role add --project service --user cyborg admin
+$ openstack service create --name cyborg --description "Acceleration Service" accelerator
+
+$ openstack endpoint create --region RegionOne \
+  accelerator public http://<cyborg-ip>:6666/v1
+$ openstack endpoint create --region RegionOne \
+  accelerator internal http://<cyborg-ip>:6666/v1
+$ openstack endpoint create --region RegionOne \
+  accelerator admin http://<cyborg-ip>:6666/v1
+

3.安装Cyborg

+
yum install openstack-cyborg
+

4.配置Cyborg

+

修改/etc/cyborg/cyborg.conf

+
[DEFAULT]
+transport_url = rabbit://%RABBITMQ_USER%:%RABBITMQ_PASSWORD%@%OPENSTACK_HOST_IP%:5672/
+use_syslog = False
+state_path = /var/lib/cyborg
+debug = True
+
+[database]
+connection = mysql+pymysql://%DATABASE_USER%:%DATABASE_PASSWORD%@%OPENSTACK_HOST_IP%/cyborg
+
+[service_catalog]
+project_domain_id = default
+user_domain_id = default
+project_name = service
+password = PASSWORD
+username = cyborg
+auth_url = http://%OPENSTACK_HOST_IP%/identity
+auth_type = password
+
+[placement]
+project_domain_name = Default
+project_name = service
+user_domain_name = Default
+password = PASSWORD
+username = placement
+auth_url = http://%OPENSTACK_HOST_IP%/identity
+auth_type = password
+
+[keystone_authtoken]
+memcached_servers = localhost:11211
+project_domain_name = Default
+project_name = service
+user_domain_name = Default
+password = PASSWORD
+username = cyborg
+auth_url = http://%OPENSTACK_HOST_IP%/identity
+auth_type = password
+

自行修改对应的用户名、密码、IP等信息

+

5.同步数据库表格

+
cyborg-dbsync --config-file /etc/cyborg/cyborg.conf upgrade
+

6.启动Cyborg服务

+
systemctl enable openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent
+systemctl start openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent
+

Aodh 安装

+

1.创建数据库

+
CREATE DATABASE aodh;
+
+GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'localhost' IDENTIFIED BY 'AODH_DBPASS';
+
+GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'%' IDENTIFIED BY 'AODH_DBPASS';
+

2.创建对应Keystone资源对象

+
openstack user create --domain default --password-prompt aodh
+
+openstack role add --project service --user aodh admin
+
+openstack service create --name aodh --description "Telemetry" alarming
+
+openstack endpoint create --region RegionOne alarming public http://controller:8042
+
+openstack endpoint create --region RegionOne alarming internal http://controller:8042
+
+openstack endpoint create --region RegionOne alarming admin http://controller:8042
+

3.安装Aodh

+
yum install openstack-aodh-api openstack-aodh-evaluator openstack-aodh-notifier openstack-aodh-listener openstack-aodh-expirer python3-aodhclient
+

4.修改配置文件

+
[database]
+connection = mysql+pymysql://aodh:AODH_DBPASS@controller/aodh
+
+[DEFAULT]
+transport_url = rabbit://openstack:RABBIT_PASS@controller
+auth_strategy = keystone
+
+[keystone_authtoken]
+www_authenticate_uri = http://controller:5000
+auth_url = http://controller:5000
+memcached_servers = controller:11211
+auth_type = password
+project_domain_id = default
+user_domain_id = default
+project_name = service
+username = aodh
+password = AODH_PASS
+
+[service_credentials]
+auth_type = password
+auth_url = http://controller:5000/v3
+project_domain_id = default
+user_domain_id = default
+project_name = service
+username = aodh
+password = AODH_PASS
+interface = internalURL
+region_name = RegionOne
+

5.初始化数据库

+
aodh-dbsync
+

6.启动Aodh服务

+
systemctl enable openstack-aodh-api.service openstack-aodh-evaluator.service openstack-aodh-notifier.service openstack-aodh-listener.service
+
+systemctl start openstack-aodh-api.service openstack-aodh-evaluator.service openstack-aodh-notifier.service openstack-aodh-listener.service
+

Gnocchi 安装

+

1.创建数据库

+
CREATE DATABASE gnocchi;
+
+GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'localhost' IDENTIFIED BY 'GNOCCHI_DBPASS';
+
+GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'%' IDENTIFIED BY 'GNOCCHI_DBPASS';
+

2.创建对应Keystone资源对象

+
openstack user create --domain default --password-prompt gnocchi
+
+openstack role add --project service --user gnocchi admin
+
+openstack service create --name gnocchi --description "Metric Service" metric
+
+openstack endpoint create --region RegionOne metric public http://controller:8041
+
+openstack endpoint create --region RegionOne metric internal http://controller:8041
+
+openstack endpoint create --region RegionOne metric admin http://controller:8041
+

3.安装Gnocchi

+
yum install openstack-gnocchi-api openstack-gnocchi-metricd python3-gnocchiclient
+

4.修改配置文件/etc/gnocchi/gnocchi.conf

+
[api]
+auth_mode = keystone
+port = 8041
+uwsgi_mode = http-socket
+
+[keystone_authtoken]
+auth_type = password
+auth_url = http://controller:5000/v3
+project_domain_name = Default
+user_domain_name = Default
+project_name = service
+username = gnocchi
+password = GNOCCHI_PASS
+interface = internalURL
+region_name = RegionOne
+
+[indexer]
+url = mysql+pymysql://gnocchi:GNOCCHI_DBPASS@controller/gnocchi
+
+[storage]
+# coordination_url is not required but specifying one will improve
+# performance with better workload division across workers.
+coordination_url = redis://controller:6379
+file_basepath = /var/lib/gnocchi
+driver = file
+

5.初始化数据库

+
gnocchi-upgrade
+

6.启动Gnocchi服务

+
systemctl enable openstack-gnocchi-api.service openstack-gnocchi-metricd.service
+
+systemctl start openstack-gnocchi-api.service openstack-gnocchi-metricd.service
+

Ceilometer 安装

+

1.创建对应Keystone资源对象

+
openstack user create --domain default --password-prompt ceilometer
+
+openstack role add --project service --user ceilometer admin
+
+openstack service create --name ceilometer --description "Telemetry" metering
+

2.安装Ceilometer

+
yum install openstack-ceilometer-notification openstack-ceilometer-central
+

3.修改配置文件/etc/ceilometer/pipeline.yaml

+
publishers:
+    # set address of Gnocchi
+    # + filter out Gnocchi-related activity meters (Swift driver)
+    # + set default archive policy
+    - gnocchi://?filter_project=service&archive_policy=low
+

4.修改配置文件/etc/ceilometer/ceilometer.conf

+
[DEFAULT]
+transport_url = rabbit://openstack:RABBIT_PASS@controller
+
+[service_credentials]
+auth_type = password
+auth_url = http://controller:5000/v3
+project_domain_id = default
+user_domain_id = default
+project_name = service
+username = ceilometer
+password = CEILOMETER_PASS
+interface = internalURL
+region_name = RegionOne
+

5.初始化数据库

+
ceilometer-upgrade
+

6.启动Ceilometer服务

+
systemctl enable openstack-ceilometer-notification.service openstack-ceilometer-central.service
+
+systemctl start openstack-ceilometer-notification.service openstack-ceilometer-central.service
+

Heat 安装

+

1.创建heat数据库,并授予heat数据库正确的访问权限,替换HEAT_DBPASS为合适的密码

+
CREATE DATABASE heat;
+GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'localhost' IDENTIFIED BY 'HEAT_DBPASS';
+GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'%' IDENTIFIED BY 'HEAT_DBPASS';
+

2.创建服务凭证,创建heat用户,并为其增加admin角色

+
openstack user create --domain default --password-prompt heat
+openstack role add --project service --user heat admin
+

3.创建heatheat-cfn服务及其对应的API端点

+
openstack service create --name heat --description "Orchestration" orchestration
+openstack service create --name heat-cfn --description "Orchestration"  cloudformation
+openstack endpoint create --region RegionOne orchestration public http://controller:8004/v1/%\(tenant_id\)s
+openstack endpoint create --region RegionOne orchestration internal http://controller:8004/v1/%\(tenant_id\)s
+openstack endpoint create --region RegionOne orchestration admin http://controller:8004/v1/%\(tenant_id\)s
+openstack endpoint create --region RegionOne cloudformation public http://controller:8000/v1
+openstack endpoint create --region RegionOne cloudformation internal http://controller:8000/v1
+openstack endpoint create --region RegionOne cloudformation admin http://controller:8000/v1
+

4.创建stack管理所需的额外信息,包括heat domain及其对应domain的admin用户heat_domain_admin,以及heat_stack_owner、heat_stack_user角色

+
openstack user create --domain heat --password-prompt heat_domain_admin
+openstack role add --domain heat --user-domain heat --user heat_domain_admin admin
+openstack role create heat_stack_owner
+openstack role create heat_stack_user
+

5.安装软件包

+
yum install openstack-heat-api openstack-heat-api-cfn openstack-heat-engine
+

6.修改配置文件/etc/heat/heat.conf

+
[DEFAULT]
+transport_url = rabbit://openstack:RABBIT_PASS@controller
+heat_metadata_server_url = http://controller:8000
+heat_waitcondition_server_url = http://controller:8000/v1/waitcondition
+stack_domain_admin = heat_domain_admin
+stack_domain_admin_password = HEAT_DOMAIN_PASS
+stack_user_domain_name = heat
+
+[database]
+connection = mysql+pymysql://heat:HEAT_DBPASS@controller/heat
+
+[keystone_authtoken]
+www_authenticate_uri = http://controller:5000
+auth_url = http://controller:5000
+memcached_servers = controller:11211
+auth_type = password
+project_domain_name = default
+user_domain_name = default
+project_name = service
+username = heat
+password = HEAT_PASS
+
+[trustee]
+auth_type = password
+auth_url = http://controller:5000
+username = heat
+password = HEAT_PASS
+user_domain_name = default
+
+[clients_keystone]
+auth_uri = http://controller:5000
+

7.初始化heat数据库表

+
su -s /bin/sh -c "heat-manage db_sync" heat
+

8.启动服务

+
systemctl enable openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service
+systemctl start openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service
+

基于OpenStack SIG开发工具oos快速部署

+

oos(openEuler OpenStack SIG)是OpenStack SIG提供的命令行工具。其中oos env系列命令提供了一键部署OpenStack (all in one或三节点cluster)的ansible脚本,用户可以使用该脚本快速部署一套基于 openEuler RPM 的 OpenStack 环境。oos工具支持对接云provider(目前仅支持华为云provider)和主机纳管两种方式来部署 OpenStack 环境,下面以对接华为云部署一套all in one的OpenStack环境为例说明oos工具的使用方法。

+
    +
  1. +

    安装oos工具

    +
    pip install openstack-sig-tool
    +
  2. +
  3. +

    配置对接华为云provider的信息

    +

    打开/usr/local/etc/oos/oos.conf文件,修改配置为您拥有的华为云资源信息:

    +
    [huaweicloud]
    +ak = 
    +sk = 
    +region = ap-southeast-3
    +root_volume_size = 100
    +data_volume_size = 100
    +security_group_name = oos
    +image_format = openEuler-%%(release)s-%%(arch)s
    +vpc_name = oos_vpc
    +subnet1_name = oos_subnet1
    +subnet2_name = oos_subnet2
    +
  4. +
  5. +

    配置 OpenStack 环境信息

    +

    打开/usr/local/etc/oos/oos.conf文件,根据当前机器环境和需求修改配置。内容如下:

    +
    [environment]
    +mysql_root_password = root
    +mysql_project_password = root
    +rabbitmq_password = root
    +project_identity_password = root
    +enabled_service = keystone,neutron,cinder,placement,nova,glance,horizon,aodh,ceilometer,cyborg,gnocchi,kolla,heat,swift,trove,tempest
    +neutron_provider_interface_name = br-ex
    +default_ext_subnet_range = 10.100.100.0/24
    +default_ext_subnet_gateway = 10.100.100.1
    +neutron_dataplane_interface_name = eth1
    +cinder_block_device = vdb
    +swift_storage_devices = vdc
    +swift_hash_path_suffix = ash
    +swift_hash_path_prefix = has
    +glance_api_workers = 2
    +cinder_api_workers = 2
    +nova_api_workers = 2
    +nova_metadata_api_workers = 2
    +nova_conductor_workers = 2
    +nova_scheduler_workers = 2
    +neutron_api_workers = 2
    +horizon_allowed_host = *
    +kolla_openeuler_plugin = false
    +

    关键配置

    | 配置项 | 解释 |
    |:---|:---|
    | enabled_service | 安装服务列表,根据用户需求自行删减 |
    | neutron_provider_interface_name | neutron L3网桥名称 |
    | default_ext_subnet_range | neutron私网IP段 |
    | default_ext_subnet_gateway | neutron私网gateway |
    | neutron_dataplane_interface_name | neutron使用的网卡,推荐使用一张新的网卡,以免和现有网卡冲突,防止all in one主机断连的情况 |
    | cinder_block_device | cinder使用的卷设备名 |
    | swift_storage_devices | swift使用的卷设备名 |
    | kolla_openeuler_plugin | 是否启用kolla plugin。设置为True,kolla将支持部署openEuler容器 |
    +
  6. +
  7. +

    在华为云上创建一台openEuler 22.03-LTS-SP4的x86_64虚拟机,用于部署all in one的OpenStack

    +
    # sshpass在`oos env create`过程中被使用,用于配置对目标虚拟机的免密访问
    +dnf install sshpass
    +oos env create -r 22.03-lts-sp4 -f small -a x86 -n test-oos all_in_one
    +

    具体的参数可以使用oos env create --help命令查看

    +
  8. +
  9. +

    部署OpenStack all in one 环境

    +
    oos env setup test-oos -r train
    +

    具体的参数可以使用oos env setup --help命令查看

    +
  10. +
  11. +

    初始化tempest环境

    +

    如果用户想使用该环境运行tempest测试的话,可以执行命令oos env init,会自动把tempest需要的OpenStack资源自动创建好

    +
    oos env init test-oos
    +

    命令执行成功后,在用户的根目录下会生成mytest目录,进入其中就可以执行tempest run命令了。

    +
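    例如(--smoke表示仅运行冒烟测试用例,仅为示意):

    +
    cd ~/mytest
    +tempest run --smoke
    +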
  12. +
+

如果是以主机纳管的方式部署 OpenStack 环境,总体逻辑与上文对接华为云时一致,1、3、5、6步操作不变,去除第2步对华为云provider信息的配置,第4步由在华为云上创建虚拟机改为纳管主机操作。

+
# sshpass在`oos env create`过程中被使用,用于配置对目标主机的免密访问
+dnf install sshpass
+oos env manage -r 22.03-lts-sp4 -i TARGET_MACHINE_IP -p TARGET_MACHINE_PASSWD -n test-oos
+

替换TARGET_MACHINE_IP为目标机ip、TARGET_MACHINE_PASSWD为目标机密码。具体的参数可以使用oos env manage --help命令查看。

+

基于OpenStack SIG部署工具opensd部署

+

opensd用于批量地脚本化部署openstack各组件服务。

+

部署步骤

+

1. 部署前需要确认的信息

+
    +
  • 装操作系统时,需将selinux设置为disable
  • +
  • 装操作系统时,将/etc/ssh/sshd_config配置文件内的UseDNS设置为no
  • +
  • 操作系统语言必须设置为英文
  • +
  • 部署之前请确保所有计算节点/etc/hosts文件内没有对计算主机的解析
  • +
+

2. ceph pool与认证创建(可选)

+

不使用ceph或已有ceph集群可忽略此步骤

+

在任意一台ceph monitor节点执行:

+

2.1 创建pool:

+
ceph osd pool create volumes 2048
+ceph osd pool create images 2048
+

2.2 初始化pool

+
rbd pool init volumes
+rbd pool init images
+

2.3 创建用户认证

+
ceph auth get-or-create client.glance mon 'profile rbd' osd 'profile rbd pool=images' mgr 'profile rbd pool=images'
+ceph auth get-or-create client.cinder mon 'profile rbd' osd 'profile rbd pool=volumes, profile rbd pool=images' mgr 'profile rbd pool=volumes'
+

3. 配置lvm(可选)

+

根据物理机磁盘配置与闲置情况,为mysql数据目录挂载额外的磁盘空间。示例如下(根据实际情况做配置):

+
fdisk -l
+Disk /dev/sdd: 479.6 GB, 479559942144 bytes, 936640512 sectors
+Units = sectors of 1 * 512 = 512 bytes
+Sector size (logical/physical): 512 bytes / 4096 bytes
+I/O size (minimum/optimal): 4096 bytes / 4096 bytes
+Disk label type: dos
+Disk identifier: 0x000ed242
+创建分区
+parted /dev/sdd
+mkpart primary 0 -1
+创建pv
+partprobe /dev/sdd1
+pvcreate /dev/sdd1
+创建、激活vg
+vgcreate vg_mariadb /dev/sdd1
+vgchange -ay vg_mariadb
+查看vg容量
+vgdisplay
+--- Volume group ---
+VG Name vg_mariadb
+System ID
+Format lvm2
+Metadata Areas 1
+Metadata Sequence No 2
+VG Access read/write
+VG Status resizable
+MAX LV 0
+Cur LV 1
+Open LV 1
+Max PV 0
+Cur PV 1
+Act PV 1
+VG Size 446.62 GiB
+PE Size 4.00 MiB
+Total PE 114335
+Alloc PE / Size 114176 / 446.00 GiB
+Free PE / Size 159 / 636.00 MiB
+VG UUID bVUmDc-VkMu-Vi43-mg27-TEkG-oQfK-TvqdEc
+创建lv
+lvcreate -L 446G -n lv_mariadb vg_mariadb
+格式化磁盘并获取卷的UUID
+mkfs.ext4 /dev/mapper/vg_mariadb-lv_mariadb
+blkid /dev/mapper/vg_mariadb-lv_mariadb
+/dev/mapper/vg_mariadb-lv_mariadb: UUID="98d513eb-5f64-4aa5-810e-dc7143884fa2" TYPE="ext4"
+注:98d513eb-5f64-4aa5-810e-dc7143884fa2为卷的UUID
+挂载磁盘
+mount /dev/mapper/vg_mariadb-lv_mariadb /var/lib/mysql
+rm -rf  /var/lib/mysql/*
+

4. 配置yum repo

+

在部署节点执行:

+

4.1 备份yum源

+
mkdir /etc/yum.repos.d/bak/
+mv /etc/yum.repos.d/*.repo /etc/yum.repos.d/bak/
+

4.2 配置yum repo

+
cat > /etc/yum.repos.d/opensd.repo << EOF
+[train]
+name=train
+baseurl=http://119.3.219.20:82/openEuler:/22.03:/LTS:/SP4:/Epol:/Multi-Version:/OpenStack:/Train/standard_$basearch/
+enabled=1
+gpgcheck=0
+
+[epol]
+name=epol
+baseurl=http://119.3.219.20:82/openEuler:/22.03:/LTS:/SP4:/Epol/standard_$basearch/
+enabled=1
+gpgcheck=0
+
+[everything]
+name=everything
+baseurl=http://119.3.219.20:82/openEuler:/22.03:/LTS:/SP4/standard_$basearch/
+enabled=1
+gpgcheck=0
+
+EOF
+

4.3 更新yum缓存

+
yum clean all
+yum makecache
+

5. 安装opensd

+

在部署节点执行:

+

5.1 克隆opensd源码并安装

+
git clone https://gitee.com/openeuler/opensd
+cd opensd
+python3 setup.py install
+

6. 做ssh互信

+

在部署节点执行:

+

6.1 生成密钥对

+

执行如下命令并一路回车

+
ssh-keygen
+

6.2 生成主机IP地址文件

+

在auto_ssh_host_ip中配置所有用到的主机ip, 示例:

+
cd /usr/local/share/opensd/tools/
+vim auto_ssh_host_ip
+
+10.0.0.1
+10.0.0.2
+...
+10.0.0.10
+

6.3 更改密码并执行脚本

+

将免密脚本/usr/local/bin/opensd-auto-ssh内123123替换为主机真实密码

+
# 替换脚本内123123字符串
+vim /usr/local/bin/opensd-auto-ssh
+
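也可以直接使用sed完成替换(REAL_PASSWORD为主机真实密码,仅为示意):

+
sed -i 's/123123/REAL_PASSWORD/g' /usr/local/bin/opensd-auto-ssh
+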
## 安装expect后执行脚本
+dnf install expect -y
+opensd-auto-ssh
+

6.4 部署节点与ceph monitor做互信(可选)

+
ssh-copy-id root@x.x.x.x
+

7. 配置opensd

+

在部署节点执行:

+

7.1 生成随机密码

+

安装 python3-pbr, python3-utils, python3-pyyaml, python3-oslo-utils并随机生成密码 +

dnf install python3-pbr python3-utils python3-pyyaml python3-oslo-utils -y
+# 执行命令生成密码
+opensd-genpwd
+# 检查密码是否生成
+cat /usr/local/share/opensd/etc_examples/opensd/passwords.yml

+

7.2 配置inventory文件

+

主机信息包含:主机名、ansible_host IP、availability_zone,三者均需配置缺一不可,示例:

+
vim /usr/local/share/opensd/ansible/inventory/multinode
+# 三台控制节点主机信息
+[control]
+controller1 ansible_host=10.0.0.35 availability_zone=az01.cell01.cn-yogadev-1
+controller2 ansible_host=10.0.0.36 availability_zone=az01.cell01.cn-yogadev-1
+controller3 ansible_host=10.0.0.37 availability_zone=az01.cell01.cn-yogadev-1
+
+# 网络节点信息,与控制节点保持一致
+[network]
+controller1 ansible_host=10.0.0.35 availability_zone=az01.cell01.cn-yogadev-1
+controller2 ansible_host=10.0.0.36 availability_zone=az01.cell01.cn-yogadev-1
+controller3 ansible_host=10.0.0.37 availability_zone=az01.cell01.cn-yogadev-1
+
+# cinder-volume服务节点信息
+[storage]
+storage1 ansible_host=10.0.0.61 availability_zone=az01.cell01.cn-yogadev-1
+storage2 ansible_host=10.0.0.78 availability_zone=az01.cell01.cn-yogadev-1
+storage3 ansible_host=10.0.0.82 availability_zone=az01.cell01.cn-yogadev-1
+
+# Cell1 集群信息
+[cell-control-cell1]
+cell1 ansible_host=10.0.0.24 availability_zone=az01.cell01.cn-yogadev-1
+cell2 ansible_host=10.0.0.25 availability_zone=az01.cell01.cn-yogadev-1
+cell3 ansible_host=10.0.0.26 availability_zone=az01.cell01.cn-yogadev-1
+
+[compute-cell1]
+compute1 ansible_host=10.0.0.27 availability_zone=az01.cell01.cn-yogadev-1
+compute2 ansible_host=10.0.0.28 availability_zone=az01.cell01.cn-yogadev-1
+compute3 ansible_host=10.0.0.29 availability_zone=az01.cell01.cn-yogadev-1
+
+[cell1:children]
+cell-control-cell1
+compute-cell1
+
+# Cell2集群信息
+[cell-control-cell2]
+cell4 ansible_host=10.0.0.36 availability_zone=az03.cell02.cn-yogadev-1
+cell5 ansible_host=10.0.0.37 availability_zone=az03.cell02.cn-yogadev-1
+cell6 ansible_host=10.0.0.38 availability_zone=az03.cell02.cn-yogadev-1
+
+[compute-cell2]
+compute4 ansible_host=10.0.0.39 availability_zone=az03.cell02.cn-yogadev-1
+compute5 ansible_host=10.0.0.40 availability_zone=az03.cell02.cn-yogadev-1
+compute6 ansible_host=10.0.0.41 availability_zone=az03.cell02.cn-yogadev-1
+
+[cell2:children]
+cell-control-cell2
+compute-cell2
+
+[baremetal]
+
+[compute-cell1-ironic]
+
+
+# 填写所有cell集群的control主机组
+[nova-conductor:children]
+cell-control-cell1
+cell-control-cell2
+
+# 填写所有cell集群的compute主机组
+[nova-compute:children]
+compute-added
+compute-cell1
+compute-cell2
+
+# 下面的主机组信息不需变动,保留即可
+[compute-added]
+
+[chrony-server:children]
+control
+
+[pacemaker:children]
+control
+......
+......
+

7.3 配置全局变量

+

注: 文档中提到的有注释配置项需要更改,其他参数不需要更改,若无相关配置则为空

+
vim /usr/local/share/opensd/etc_examples/opensd/globals.yml
+########################
+# Network & Base options
+########################
+network_interface: "eth0" #管理网络的网卡名称
+neutron_external_interface: "eth1" #业务网络的网卡名称
+cidr_netmask: 24 #管理网的掩码
+opensd_vip_address: 10.0.0.33  #控制节点虚拟IP地址
+cell1_vip_address: 10.0.0.34 #cell1集群的虚拟IP地址
+cell2_vip_address: 10.0.0.35 #cell2集群的虚拟IP地址
+external_fqdn: "" #用于vnc访问虚拟机的外网域名地址
+external_ntp_servers: [] #外部ntp服务器地址
+yumrepo_host:  #yum源的IP地址
+yumrepo_port:  #yum源端口号
+environment:   #yum源的类型
+upgrade_all_packages: "yes" #是否升级所有已安装软件包的版本(执行yum upgrade),初始部署时请设置为"yes"
+enable_miner: "no" #是否开启部署miner服务
+
+enable_chrony: "no" #是否开启部署chrony服务
+enable_pri_mariadb: "no" #是否为私有云部署mariadb
+enable_hosts_file_modify: "no" # 扩容计算节点和部署ironic服务的时候,是否将节点信息添加到`/etc/hosts`
+
+########################
+# Available zone options
+########################
+az_cephmon_compose:
+  - availability_zone:  #availability zone的名称,该名称必须与multinode主机文件内的az01的"availability_zone"值保持一致
+    ceph_mon_host:      #az01对应的一台ceph monitor主机地址,部署节点需要与该主机做ssh互信
+    reserve_vcpu_based_on_numa:  
+  - availability_zone:  #availability zone的名称,该名称必须与multinode主机文件内的az02的"availability_zone"值保持一致
+    ceph_mon_host:      #az02对应的一台ceph monitor主机地址,部署节点需要与该主机做ssh互信
+    reserve_vcpu_based_on_numa:  
+  - availability_zone:  #availability zone的名称,该名称必须与multinode主机文件内的az03的"availability_zone"值保持一致
+    ceph_mon_host:      #az03对应的一台ceph monitor主机地址,部署节点需要与该主机做ssh互信
+    reserve_vcpu_based_on_numa:
+
+# `reserve_vcpu_based_on_numa`配置为`yes` or `no`,举例说明:
+NUMA node0 CPU(s): 0-15,32-47
+NUMA node1 CPU(s): 16-31,48-63
+当reserve_vcpu_based_on_numa: "yes", 根据numa node, 平均每个node预留vcpu:
+vcpu_pin_set = 2-15,34-47,18-31,50-63
+当reserve_vcpu_based_on_numa: "no", 从第一个vcpu开始,顺序预留vcpu:
+vcpu_pin_set = 8-64
+
+#######################
+# Nova options
+#######################
+nova_reserved_host_memory_mb: 2048 #计算节点给计算服务预留的内存大小
+enable_cells: "yes" #cell节点是否单独节点部署
+support_gpu: "False" #cell节点是否有GPU服务器,如果有则为True,否则为False
+
+#######################
+# Neutron options
+#######################
+monitor_ip:
+    - 10.0.0.9   #配置监控节点
+    - 10.0.0.10
+enable_meter_full_eip: True   #配置是否允许EIP全量监控,默认为True
+enable_meter_port_forwarding: True   #配置是否允许port forwarding监控,默认为True
+enable_meter_ecs_ipv6: True   #配置是否允许ecs_ipv6监控,默认为True
+enable_meter: True    #配置是否开启监控,默认为True
+is_sdn_arch: False    #配置是否是sdn架构,默认为False
+
+# 默认使能的网络类型是vlan,vlan和vxlan两种类型只能二选一.
+enable_vxlan_network_type: False  # 默认使能的网络类型是vlan,如果使用vxlan网络,配置为True, 如果使用vlan网络,配置为False.
+enable_neutron_fwaas: False       # 环境有使用防火墙, 设置为True, 使能防火墙功能.
+# Neutron provider
+neutron_provider_networks:
+  network_types: "{{ 'vxlan' if enable_vxlan_network_type else 'vlan' }}"
+  network_vlan_ranges: "default:xxx:xxx" #部署之前规划的业务网络vlan范围
+  network_mappings: "default:br-provider"
+  network_interface: "{{ neutron_external_interface }}"
+  network_vxlan_ranges: "" #部署之前规划的业务网络vxlan范围
+
+# 如下这些配置是SDN控制器的配置参数, `enable_sdn_controller`设置为True, 使能SDN控制器功能.
+# 其他参数请根据部署之前的规划和SDN部署信息确定.
+enable_sdn_controller: False
+sdn_controller_ip_address:  # SDN控制器ip地址
+sdn_controller_username:    # SDN控制器的用户名
+sdn_controller_password:    # SDN控制器的用户密码
+
+#######################
+# Dimsagent options
+#######################
+enable_dimsagent: "no" # 安装镜像服务agent, 需要改为yes
+# Address and domain name for s2
+s3_address_domain_pair:
+  - host_ip:           
+    host_name:         
+
+#######################
+# Trove options
+#######################
+enable_trove: "no" #安装trove 需要改为yes
+#default network
+trove_default_neutron_networks:  #trove 的管理网络id `openstack network list|grep -w trove-mgmt|awk '{print$2}'`
+#s3 setup(如果没有s3,以下值填null)
+s3_endpoint_host_ip:   #s3的ip
+s3_endpoint_host_name: #s3的域名
+s3_endpoint_url:       #s3的url ·一般为http://s3域名
+s3_access_key:         #s3的ak 
+s3_secret_key:         #s3的sk
+
+#######################
+# Ironic options
+#######################
+enable_ironic: "no" #是否开启裸金属部署,默认不开启
+ironic_neutron_provisioning_network_uuid:
+ironic_neutron_cleaning_network_uuid: "{{ ironic_neutron_provisioning_network_uuid }}"
+ironic_dnsmasq_interface:
+ironic_dnsmasq_dhcp_range:
+ironic_tftp_server_address: "{{ hostvars[inventory_hostname]['ansible_' + ironic_dnsmasq_interface]['ipv4']['address'] }}"
+# 交换机设备相关信息
+neutron_ml2_conf_genericswitch:
+  genericswitch:xxxxxxx:
+    device_type:
+    ngs_mac_address:
+    ip:
+    username:
+    password:
+    ngs_port_default_vlan:
+
+# Package state setting
+haproxy_package_state: "present"
+mariadb_package_state: "present"
+rabbitmq_package_state: "present"
+memcached_package_state: "present"
+ceph_client_package_state: "present"
+keystone_package_state: "present"
+glance_package_state: "present"
+cinder_package_state: "present"
+nova_package_state: "present"
+neutron_package_state: "present"
+miner_package_state: "present"
+

7.4 检查所有节点ssh连接状态

+
dnf install ansible -y
+ansible all -i /usr/local/share/opensd/ansible/inventory/multinode -m ping
+
+# 执行结果显示每台主机都是"SUCCESS"即说明连接状态没问题,示例:
+compute1 | SUCCESS => {
+  "ansible_facts": {
+      "discovered_interpreter_python": "/usr/bin/python"
+  },
+  "changed": false,
+  "ping": "pong"
+}
+

8. 执行部署

+

在部署节点执行:

+

8.1 执行bootstrap

+
# 执行部署
+opensd -i /usr/local/share/opensd/ansible/inventory/multinode bootstrap --forks 50
+

8.2 重启服务器

+

注:执行重启的原因是bootstrap可能会升级内核、更改selinux配置,或者存在GPU服务器;如果装机时已经是新版内核、selinux已disable且没有GPU服务器,则不需要执行该步骤 +

# 手动重启对应节点,执行命令
+init 6
+# 重启完成后,再次检查连通性
+ansible all -i /usr/local/share/opensd/ansible/inventory/multinode -m ping
+# 重启完操作系统后,再次启动yum源

+

8.3 执行部署前检查

+
opensd -i /usr/local/share/opensd/ansible/inventory/multinode prechecks --forks 50
+

8.4 执行部署

+
ln -s /usr/bin/python3 /usr/bin/python
+
+全量部署:
+opensd -i /usr/local/share/opensd/ansible/inventory/multinode deploy --forks 50
+
+单服务部署:
+opensd -i /usr/local/share/opensd/ansible/inventory/multinode deploy --forks 50 -t service_name
+

diff --git a/site/install/openEuler-22.03-LTS-SP4/OpenStack-wallaby/index.html b/site/install/openEuler-22.03-LTS-SP4/OpenStack-wallaby/index.html
new file mode 100644
index 0000000000000000000000000000000000000000..7d789a248e9f63f1624df2b738bf6f5bc221c90c
--- /dev/null
+++ b/site/install/openEuler-22.03-LTS-SP4/OpenStack-wallaby/index.html
@@ -0,0 +1,2673 @@
+openEuler-22.03-LTS-SP4_Wallaby - OpenStack SIG Doc
+

OpenStack-Wallaby 部署指南

+ +

OpenStack 简介

+

OpenStack 是一个社区,也是一个项目。它提供了一个部署云的操作平台或工具集,为组织提供可扩展的、灵活的云计算。

+

作为一个开源的云计算管理平台,OpenStack 由nova、cinder、neutron、glance、keystone、horizon等几个主要的组件组合起来完成具体工作。OpenStack 支持几乎所有类型的云环境,项目目标是提供实施简单、可大规模扩展、丰富、标准统一的云计算管理平台。OpenStack 通过各种互补的服务提供了基础设施即服务(IaaS)的解决方案,每个服务提供 API 进行集成。

+

openEuler 22.03-LTS-SP4版本官方源已经支持 OpenStack-Wallaby 版本,用户可以配置好 yum 源后根据此文档进行 OpenStack 部署。

+

约定

+

OpenStack 支持多种形态部署,此文档支持ALL in One以及Distributed两种部署方式,按照如下方式约定:

+

ALL in One模式:

+
忽略所有可能的后缀
+

Distributed模式:

+
以 `(CTL)` 为后缀表示此条配置或者命令仅适用`控制节点`
+以 `(CPT)` 为后缀表示此条配置或者命令仅适用`计算节点`
+以 `(STG)` 为后缀表示此条配置或者命令仅适用`存储节点`
+除此之外表示此条配置或者命令同时适用`控制节点`和`计算节点`
+

注意

+

涉及到以上约定的服务如下:

+
    +
  • Cinder
  • +
  • Nova
  • +
  • Neutron
  • +
+

准备环境

+

环境配置

+
    +
  1. +

    配置 22.03 LTS 官方yum源,需要启用EPOL软件仓以支持OpenStack

    +
    yum update
    +yum install openstack-release-wallaby
    +yum clean all && yum makecache
    +

    注意:如果你的环境的YUM源没有启用EPOL,需要同时配置EPOL,确保EPOL已配置,如下所示。

    +
    vi /etc/yum.repos.d/openEuler.repo
    +
    +[EPOL]
    +name=EPOL
    +baseurl=http://repo.openeuler.org/openEuler-22.03-LTS-SP4/EPOL/main/$basearch/
    +enabled=1
    +gpgcheck=1
    +gpgkey=http://repo.openeuler.org/openEuler-22.03-LTS-SP4/OS/$basearch/RPM-GPG-KEY-openEuler
    +
  2. +
  3. +

    修改主机名以及映射

    +

    设置各个节点的主机名

    +
    hostnamectl set-hostname controller                                                            (CTL)
    +hostnamectl set-hostname compute                                                               (CPT)
    +

    假设controller节点的IP是10.0.0.11,compute节点的IP是10.0.0.12(如果存在的话),则于/etc/hosts新增如下:

    +
    10.0.0.11   controller
    +10.0.0.12   compute
    +
  4. +
+

安装 SQL DataBase

+
    +
  1. +

    执行如下命令,安装软件包。

    +
    yum install mariadb mariadb-server python3-PyMySQL
    +
  2. +
  3. +

    执行如下命令,创建并编辑 /etc/my.cnf.d/openstack.cnf 文件。

    +
    vim /etc/my.cnf.d/openstack.cnf
    +
    +[mysqld]
    +bind-address = 10.0.0.11
    +default-storage-engine = innodb
    +innodb_file_per_table = on
    +max_connections = 4096
    +collation-server = utf8_general_ci
    +character-set-server = utf8
    +

    注意

    +

    其中 bind-address 设置为控制节点的管理IP地址。

    +
  4. +
  5. +

    启动 DataBase 服务,并为其配置开机自启动:

    +
    systemctl enable mariadb.service
    +systemctl start mariadb.service
    +
  6. +
  7. +

    配置DataBase的默认密码(可选)

    +
    mysql_secure_installation
    +

    注意

    +

    根据提示进行即可

    +
  8. +
+

安装 RabbitMQ

+
    +
  1. +

    执行如下命令,安装软件包。

    +
    yum install rabbitmq-server
    +
  2. +
  3. +

    启动 RabbitMQ 服务,并为其配置开机自启动。

    +
    systemctl enable rabbitmq-server.service
    +systemctl start rabbitmq-server.service
    +
  4. +
  5. +

    添加 OpenStack用户。

    +
    rabbitmqctl add_user openstack RABBIT_PASS
    +

    注意

    +

    替换 RABBIT_PASS,为 OpenStack 用户设置密码

    +
  6. +
  7. +

    设置openstack用户权限,允许进行配置、写、读:

    +
    rabbitmqctl set_permissions openstack ".*" ".*" ".*"
    +
  8. +
+

安装 Memcached

+
    +
  1. +

    执行如下命令,安装依赖软件包。

    +
    yum install memcached python3-memcached
    +
  2. +
  3. +

    编辑 /etc/sysconfig/memcached 文件。

    +
    vim /etc/sysconfig/memcached
    +
    +OPTIONS="-l 127.0.0.1,::1,controller"
    +
  4. +
  5. +

    执行如下命令,启动 Memcached 服务,并为其配置开机启动。

    +
    systemctl enable memcached.service
    +systemctl start memcached.service
    +

    注意

    +

    服务启动后,可以通过命令memcached-tool controller stats确保启动正常,服务可用,其中可以将controller替换为控制节点的管理IP地址。

    +
  6. +
+

安装 OpenStack

+

Keystone 安装

+
    +
  1. +

    创建 keystone 数据库并授权。

    +
    mysql -u root -p
    +
    +MariaDB [(none)]> CREATE DATABASE keystone;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
    +IDENTIFIED BY 'KEYSTONE_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
    +IDENTIFIED BY 'KEYSTONE_DBPASS';
    +MariaDB [(none)]> exit
    +

    注意

    +

    替换 KEYSTONE_DBPASS,为 Keystone 数据库设置密码

    +
  2. +
  3. +

    安装软件包。

    +
    yum install openstack-keystone httpd mod_wsgi
    +
  4. +
  5. +

    配置keystone相关配置

    +
    vim /etc/keystone/keystone.conf
    +
    +[database]
    +connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone
    +
    +[token]
    +provider = fernet
    +

    解释

    +

    [database]部分,配置数据库入口

    +

    [token]部分,配置token provider

    +

    注意:

    +

    替换 KEYSTONE_DBPASS 为 Keystone 数据库的密码

    +
  6. +
  7. +

    同步数据库。

    +
    su -s /bin/sh -c "keystone-manage db_sync" keystone
    +
  8. +
  9. +

    初始化Fernet密钥仓库。

    +
    keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
    +keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
    +
  10. +
  11. +

    启动服务。

    +
    keystone-manage bootstrap --bootstrap-password ADMIN_PASS \
    +--bootstrap-admin-url http://controller:5000/v3/ \
    +--bootstrap-internal-url http://controller:5000/v3/ \
    +--bootstrap-public-url http://controller:5000/v3/ \
    +--bootstrap-region-id RegionOne
    +

    注意

    +

    替换 ADMIN_PASS,为 admin 用户设置密码

    +
  12. +
  13. +

    配置Apache HTTP server

    +
    vim /etc/httpd/conf/httpd.conf
    +
    +ServerName controller
    +
    ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
    +

    解释

    +

    配置 ServerName 项引用控制节点

    +

    注意 +如果 ServerName 项不存在则需要创建

    +
  14. +
  15. +

    启动Apache HTTP服务。

    +
    systemctl enable httpd.service
    +systemctl start httpd.service
    +
  16. +
  17. +

    创建环境变量配置。

    +
    cat << EOF >> ~/.admin-openrc
    +export OS_PROJECT_DOMAIN_NAME=Default
    +export OS_USER_DOMAIN_NAME=Default
    +export OS_PROJECT_NAME=admin
    +export OS_USERNAME=admin
    +export OS_PASSWORD=ADMIN_PASS
    +export OS_AUTH_URL=http://controller:5000/v3
    +export OS_IDENTITY_API_VERSION=3
    +export OS_IMAGE_API_VERSION=2
    +EOF
    +

    注意

    +

    替换 ADMIN_PASS 为 admin 用户的密码

    +
  18. +
  19. +

    依次创建domain, projects, users, roles,需要先安装好python3-openstackclient:

    +
    yum install python3-openstackclient
    +

    导入环境变量

    +
    source ~/.admin-openrc
    +

    创建project service,其中 domain default 在 keystone-manage bootstrap 时已创建

    +
    openstack domain create --description "An Example Domain" example
    +
    openstack project create --domain default --description "Service Project" service
    +

    创建(non-admin)project myproject、user myuser 和 role myrole,并为 myproject 中的 myuser 添加角色 myrole

    +
    openstack project create --domain default --description "Demo Project" myproject
    +openstack user create --domain default --password-prompt myuser
    +openstack role create myrole
    +openstack role add --project myproject --user myuser myrole
    +
  20. +
  21. +

    验证

    +

    取消临时环境变量OS_AUTH_URL和OS_PASSWORD:

    +
    source ~/.admin-openrc
    +unset OS_AUTH_URL OS_PASSWORD
    +

    为admin用户请求token:

    +
    openstack --os-auth-url http://controller:5000/v3 \
    +--os-project-domain-name Default --os-user-domain-name Default \
    +--os-project-name admin --os-username admin token issue
    +

    为myuser用户请求token:

    +
    openstack --os-auth-url http://controller:5000/v3 \
    +--os-project-domain-name Default --os-user-domain-name Default \
    +--os-project-name myproject --os-username myuser token issue
    +
  22. +
+

Glance 安装

+
    +
  1. +

    创建数据库、服务凭证和 API 端点

    +

    创建数据库:

    +
    mysql -u root -p
    +
    +MariaDB [(none)]> CREATE DATABASE glance;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
    +IDENTIFIED BY 'GLANCE_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
    +IDENTIFIED BY 'GLANCE_DBPASS';
    +MariaDB [(none)]> exit
    +

    注意:

    +

    替换 GLANCE_DBPASS,为 glance 数据库设置密码

    +

    创建服务凭证

    +
    source ~/.admin-openrc
    +
    +openstack user create --domain default --password-prompt glance
    +openstack role add --project service --user glance admin
    +openstack service create --name glance --description "OpenStack Image" image
    +

    创建镜像服务API端点:

    +
    openstack endpoint create --region RegionOne image public http://controller:9292
    +openstack endpoint create --region RegionOne image internal http://controller:9292
    +openstack endpoint create --region RegionOne image admin http://controller:9292
    +
  2. +
  3. +

    安装软件包

    +
    yum install openstack-glance
    +
  4. +
  5. +

    配置glance相关配置:

    +
    vim /etc/glance/glance-api.conf
    +
    +[database]
    +connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
    +
    +[keystone_authtoken]
    +www_authenticate_uri  = http://controller:5000
    +auth_url = http://controller:5000
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +project_name = service
    +username = glance
    +password = GLANCE_PASS
    +
    +[paste_deploy]
    +flavor = keystone
    +
    +[glance_store]
    +stores = file,http
    +default_store = file
    +filesystem_store_datadir = /var/lib/glance/images/
    +

    解释:

    +

    [database]部分,配置数据库入口

    +

    [keystone_authtoken] [paste_deploy]部分,配置身份认证服务入口

    +

    [glance_store]部分,配置本地文件系统存储和镜像文件的位置

    +

    注意

    +

    替换 GLANCE_DBPASS 为 glance 数据库的密码

    +

    替换 GLANCE_PASS 为 glance 用户的密码

    +
  6. +
  7. +

    同步数据库:

    +
    su -s /bin/sh -c "glance-manage db_sync" glance
    +
  8. +
  9. +

    启动服务:

    +
    systemctl enable openstack-glance-api.service
    +systemctl start openstack-glance-api.service
    +
  10. +
  11. +

    验证

    +

    下载镜像

    +
    source ~/.admin-openrc
    +
    +wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
    +

    注意

    +

    如果您使用的环境是鲲鹏架构,请下载 aarch64 版本的镜像(示例见下);已对镜像cirros-0.5.2-aarch64-disk.img进行测试。

    +
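    例如,下载并上传 aarch64 镜像(下载地址以 cirros 官方实际发布为准,镜像名称 cirros-arm 仅为示例):

    ```shell
    wget http://download.cirros-cloud.net/0.5.2/cirros-0.5.2-aarch64-disk.img
    openstack image create --disk-format qcow2 --container-format bare \
                           --file cirros-0.5.2-aarch64-disk.img --public cirros-arm
    ```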

    向Image服务上传镜像:

    +
    openstack image create --disk-format qcow2 --container-format bare \
    +                       --file cirros-0.4.0-x86_64-disk.img --public cirros
    +

    确认镜像上传并验证属性:

    +
    openstack image list
    +
  12. +
+

Placement安装

+
    +
  1. +

    创建数据库、服务凭证和 API 端点

    +

    创建数据库:

    +

    作为 root 用户访问数据库,创建 placement 数据库并授权。

    +
    mysql -u root -p
    +MariaDB [(none)]> CREATE DATABASE placement;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' \
    +IDENTIFIED BY 'PLACEMENT_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' \
    +IDENTIFIED BY 'PLACEMENT_DBPASS';
    +MariaDB [(none)]> exit
    +

    注意

    +

    替换 PLACEMENT_DBPASS 为 placement 数据库设置密码

    +
    source ~/.admin-openrc
    +

    执行如下命令,创建 placement 服务凭证、创建 placement 用户以及添加‘admin’角色到用户‘placement’。

    +

    创建Placement API服务

    +
    openstack user create --domain default --password-prompt placement
    +openstack role add --project service --user placement admin
    +openstack service create --name placement --description "Placement API" placement
    +

    创建placement服务API端点:

    +
    openstack endpoint create --region RegionOne placement public http://controller:8778
    +openstack endpoint create --region RegionOne placement internal http://controller:8778
    +openstack endpoint create --region RegionOne placement admin http://controller:8778
    +
  2. +
  3. +

    安装和配置

    +

    安装软件包:

    +
    yum install openstack-placement-api
    +

    配置placement:

    +

    编辑 /etc/placement/placement.conf 文件:

    +

    在[placement_database]部分,配置数据库入口

    +

    在[api] [keystone_authtoken]部分,配置身份认证服务入口

    +
    # vim /etc/placement/placement.conf
    +[placement_database]
    +# ...
    +connection = mysql+pymysql://placement:PLACEMENT_DBPASS@controller/placement
    +[api]
    +# ...
    +auth_strategy = keystone
    +[keystone_authtoken]
    +# ...
    +auth_url = http://controller:5000/v3
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +project_name = service
    +username = placement
    +password = PLACEMENT_PASS
    +

    其中,替换 PLACEMENT_DBPASS 为 placement 数据库的密码,替换 PLACEMENT_PASS 为 placement 用户的密码。

    +

    同步数据库:

    +
    su -s /bin/sh -c "placement-manage db sync" placement
    +

    启动httpd服务:

    +
    systemctl restart httpd
    +
  4. +
  5. +

    验证

    +

    执行如下命令,执行状态检查:

    +
    source ~/.admin-openrc
    +placement-status upgrade check
    +

    安装osc-placement,列出可用的资源类别及特性:

    +
    yum install python3-osc-placement
    +openstack --os-placement-api-version 1.2 resource class list --sort-column name
    +openstack --os-placement-api-version 1.6 trait list --sort-column name
    +
  6. +
+

Nova 安装

+
    +
  1. +

    创建数据库、服务凭证和 API 端点

    +

    创建数据库:

    +
    mysql -u root -p                                                                               (CTL)
    +
    +MariaDB [(none)]> CREATE DATABASE nova_api;
    +MariaDB [(none)]> CREATE DATABASE nova;
    +MariaDB [(none)]> CREATE DATABASE nova_cell0;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> exit
    +

    注意

    +

    替换NOVA_DBPASS,为nova数据库设置密码

    +
    source ~/.admin-openrc                                                                         (CTL)
    +

    创建nova服务凭证:

    +
    openstack user create --domain default --password-prompt nova                                  (CTL)
    +openstack role add --project service --user nova admin                                         (CTL)
    +openstack service create --name nova --description "OpenStack Compute" compute                 (CTL)
    +

    创建nova API端点:

    +
    openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1        (CTL)
    +openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1      (CTL)
    +openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1         (CTL)
    +
  2. +
  3. +

    安装软件包

    +
    yum install openstack-nova-api openstack-nova-conductor \                                      (CTL)
    +openstack-nova-novncproxy openstack-nova-scheduler 
    +
    +yum install openstack-nova-compute                                                             (CPT)
    +

    注意

    +

    如果为arm64架构,还需要执行以下命令

    +
    yum install edk2-aarch64                                                                       (CPT)
    +
  4. +
  5. +

    配置nova相关配置

    +
    vim /etc/nova/nova.conf
    +
    +[DEFAULT]
    +enabled_apis = osapi_compute,metadata
    +transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
    +my_ip = 10.0.0.1
    +use_neutron = true
    +firewall_driver = nova.virt.firewall.NoopFirewallDriver
    +compute_driver=libvirt.LibvirtDriver                                                           (CPT)
    +instances_path = /var/lib/nova/instances/                                                      (CPT)
    +lock_path = /var/lib/nova/tmp                                                                  (CPT)
    +
    +[api_database]
    +connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api                              (CTL)
    +
    +[database]
    +connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova                                  (CTL)
    +
    +[api]
    +auth_strategy = keystone
    +
    +[keystone_authtoken]
    +www_authenticate_uri = http://controller:5000/
    +auth_url = http://controller:5000/
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +project_name = service
    +username = nova
    +password = NOVA_PASS
    +
    +[vnc]
    +enabled = true
    +server_listen = $my_ip
    +server_proxyclient_address = $my_ip
    +novncproxy_base_url = http://controller:6080/vnc_auto.html                                     (CPT)
    +
    +[libvirt]
    +virt_type = qemu                                                                               (CPT)
    +cpu_mode = custom                                                                              (CPT)
    +cpu_model = cortex-a72                                                                         (CPT)
    +
    +[glance]
    +api_servers = http://controller:9292
    +
    +[oslo_concurrency]
    +lock_path = /var/lib/nova/tmp                                                                  (CTL)
    +
    +[placement]
    +region_name = RegionOne
    +project_domain_name = Default
    +project_name = service
    +auth_type = password
    +user_domain_name = Default
    +auth_url = http://controller:5000/v3
    +username = placement
    +password = PLACEMENT_PASS
    +
    +[neutron]
    +auth_url = http://controller:5000
    +auth_type = password
    +project_domain_name = default
    +user_domain_name = default
    +region_name = RegionOne
    +project_name = service
    +username = neutron
    +password = NEUTRON_PASS
    +service_metadata_proxy = true                                                                  (CTL)
    +metadata_proxy_shared_secret = METADATA_SECRET                                                 (CTL)
    +

    解释

    +

    [default]部分,启用计算和元数据的API,配置RabbitMQ消息队列入口,配置my_ip,启用网络服务neutron;

    +

    [api_database] [database]部分,配置数据库入口;

    +

    [api] [keystone_authtoken]部分,配置身份认证服务入口;

    +

    [vnc]部分,启用并配置远程控制台入口;

    +

    [glance]部分,配置镜像服务API的地址;

    +

    [oslo_concurrency]部分,配置lock path;

    +

    [placement]部分,配置placement服务的入口。

    +

    注意

    +

    替换 RABBIT_PASS 为 RabbitMQ 中 openstack 账户的密码;

    +

    配置 my_ip 为控制节点的管理IP地址;

    +

    替换 NOVA_DBPASS 为nova数据库的密码;

    +

    替换 NOVA_PASS 为nova用户的密码;

    +

    替换 PLACEMENT_PASS 为placement用户的密码;

    +

    替换 NEUTRON_PASS 为neutron用户的密码;

    +

    替换METADATA_SECRET为合适的元数据代理secret。

    +

    额外

    +

    确定是否支持虚拟机硬件加速(x86架构):

    +
    egrep -c '(vmx|svm)' /proc/cpuinfo                                                             (CPT)
    +

    如果返回值为0则不支持硬件加速,需要配置libvirt使用QEMU而不是KVM:

    +
    vim /etc/nova/nova.conf                                                                        (CPT)
    +
    +[libvirt]
    +virt_type = qemu
    +

    如果返回值为1或更大的值,则支持硬件加速,不需要进行额外的配置

    +

    注意

    +

    如果为arm64架构,还需要执行以下命令

    +
    vim /etc/libvirt/qemu.conf
    +
    +nvram = ["/usr/share/AAVMF/AAVMF_CODE.fd: \
    +         /usr/share/AAVMF/AAVMF_VARS.fd", \
    +         "/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw: \
    +         /usr/share/edk2/aarch64/vars-template-pflash.raw"]
    +
    +vim /etc/qemu/firmware/edk2-aarch64.json
    +
    +{
    +    "description": "UEFI firmware for ARM64 virtual machines",
    +    "interface-types": [
    +        "uefi"
    +    ],
    +    "mapping": {
    +        "device": "flash",
    +        "executable": {
    +            "filename": "/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw",
    +            "format": "raw"
    +        },
    +        "nvram-template": {
    +            "filename": "/usr/share/edk2/aarch64/vars-template-pflash.raw",
    +            "format": "raw"
    +        }
    +    },
    +    "targets": [
    +        {
    +            "architecture": "aarch64",
    +            "machines": [
    +                "virt-*"
    +            ]
    +        }
    +    ],
    +    "features": [
    +
    +    ],
    +    "tags": [
    +
    +    ]
    +}
    +
    +(CPT)
    +
  6. +
  7. +

    同步数据库

    +

    同步nova-api数据库:

    +
    su -s /bin/sh -c "nova-manage api_db sync" nova                                                (CTL)
    +

    注册cell0数据库:

    +
    su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova                                          (CTL)
    +

    创建cell1 cell:

    +
    su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova                 (CTL)
    +

    同步nova数据库:

    +
    su -s /bin/sh -c "nova-manage db sync" nova                                                    (CTL)
    +

    验证cell0和cell1注册正确:

    +
    su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova                                         (CTL)
    +

    添加计算节点到openstack集群

    +
    su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova                           (CPT)
    +
  8. +
  9. +

    启动服务

    +
    systemctl enable \                                                                             (CTL)
    +openstack-nova-api.service \
    +openstack-nova-scheduler.service \
    +openstack-nova-conductor.service \
    +openstack-nova-novncproxy.service
    +
    +systemctl start \                                                                              (CTL)
    +openstack-nova-api.service \
    +openstack-nova-scheduler.service \
    +openstack-nova-conductor.service \
    +openstack-nova-novncproxy.service
    +
    systemctl enable libvirtd.service openstack-nova-compute.service                               (CPT)
    +systemctl start libvirtd.service openstack-nova-compute.service                                (CPT)
    +
  10. +
  11. +

    验证

    +
    source ~/.admin-openrc                                                                         (CTL)
    +

    列出服务组件,验证每个流程都成功启动和注册:

    +
    openstack compute service list                                                                 (CTL)
    +

    列出身份服务中的API端点,验证与身份服务的连接:

    +
    openstack catalog list                                                                         (CTL)
    +

    列出镜像服务中的镜像,验证与镜像服务的连接:

    +
    openstack image list                                                                           (CTL)
    +

    检查cells是否运作成功,以及其他必要条件是否已具备。

    +
    nova-status upgrade check                                                                      (CTL)
    +
  12. +
+

Neutron 安装

+
    +
  1. +

    创建数据库、服务凭证和 API 端点

    +

    创建数据库:

    +
    mysql -u root -p                                                                               (CTL)
    +
    +MariaDB [(none)]> CREATE DATABASE neutron;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
    +IDENTIFIED BY 'NEUTRON_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
    +IDENTIFIED BY 'NEUTRON_DBPASS';
    +MariaDB [(none)]> exit
    +

    注意

    +

    替换 NEUTRON_DBPASS 为 neutron 数据库设置密码。

    +
    source ~/.admin-openrc                                                                         (CTL)
    +

    创建neutron服务凭证

    +
    openstack user create --domain default --password-prompt neutron                               (CTL)
    +openstack role add --project service --user neutron admin                                      (CTL)
    +openstack service create --name neutron --description "OpenStack Networking" network           (CTL)
    +

    创建Neutron服务API端点:

    +
    openstack endpoint create --region RegionOne network public http://controller:9696             (CTL)
    +openstack endpoint create --region RegionOne network internal http://controller:9696           (CTL)
    +openstack endpoint create --region RegionOne network admin http://controller:9696              (CTL)
    +
  2. +
  3. +

    安装软件包:

    +
    yum install openstack-neutron openstack-neutron-linuxbridge ebtables ipset \                   (CTL)
    +openstack-neutron-ml2
    +
    yum install openstack-neutron-linuxbridge ebtables ipset                                       (CPT)
    +
  4. +
  5. +

    配置neutron相关配置:

    +

    配置主体配置

    +
    vim /etc/neutron/neutron.conf
    +
    +[database]
    +connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron                         (CTL)
    +
    +[DEFAULT]
    +core_plugin = ml2                                                                              (CTL)
    +service_plugins = router                                                                       (CTL)
    +allow_overlapping_ips = true                                                                   (CTL)
    +transport_url = rabbit://openstack:RABBIT_PASS@controller
    +auth_strategy = keystone
    +notify_nova_on_port_status_changes = true                                                      (CTL)
    +notify_nova_on_port_data_changes = true                                                        (CTL)
    +api_workers = 3                                                                                (CTL)
    +
    +[keystone_authtoken]
    +www_authenticate_uri = http://controller:5000
    +auth_url = http://controller:5000
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +project_name = service
    +username = neutron
    +password = NEUTRON_PASS
    +
    +[nova]
    +auth_url = http://controller:5000                                                              (CTL)
    +auth_type = password                                                                           (CTL)
    +project_domain_name = Default                                                                  (CTL)
    +user_domain_name = Default                                                                     (CTL)
    +region_name = RegionOne                                                                        (CTL)
    +project_name = service                                                                         (CTL)
    +username = nova                                                                                (CTL)
    +password = NOVA_PASS                                                                           (CTL)
    +
    +[oslo_concurrency]
    +lock_path = /var/lib/neutron/tmp
    +

    解释

    +

    [database]部分,配置数据库入口;

    +

    [default]部分,启用ml2插件和router插件,允许ip地址重叠,配置RabbitMQ消息队列入口;

    +

    [default] [keystone]部分,配置身份认证服务入口;

    +

    [default] [nova]部分,配置网络来通知计算网络拓扑的变化;

    +

    [oslo_concurrency]部分,配置lock path。

    +

    注意

    +

    替换NEUTRON_DBPASS为 neutron 数据库的密码;

    +

    替换RABBIT_PASS为 RabbitMQ中openstack 账户的密码;

    +

    替换NEUTRON_PASS为 neutron 用户的密码;

    +

    替换NOVA_PASS为 nova 用户的密码。

    +

    配置ML2插件:

    +
    vim /etc/neutron/plugins/ml2/ml2_conf.ini
    +
    +[ml2]
    +type_drivers = flat,vlan,vxlan
    +tenant_network_types = vxlan
    +mechanism_drivers = linuxbridge,l2population
    +extension_drivers = port_security
    +
    +[ml2_type_flat]
    +flat_networks = provider
    +
    +[ml2_type_vxlan]
    +vni_ranges = 1:1000
    +
    +[securitygroup]
    +enable_ipset = true
    +

    创建/etc/neutron/plugin.ini的符号链接

    +
    ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
    +

    注意

    +

    [ml2]部分,启用 flat、vlan、vxlan 网络,启用 linuxbridge 及 l2population 机制,启用端口安全扩展驱动;

    +

    [ml2_type_flat]部分,配置 flat 网络为 provider 虚拟网络;

    +

    [ml2_type_vxlan]部分,配置 VXLAN 网络标识符范围;

    +

    [securitygroup]部分,配置允许 ipset。

    +

    补充

    +

    l2 的具体配置可以根据用户需求自行修改,本文使用的是provider network + linuxbridge

    +

    配置 Linux bridge 代理:

    +
    vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
    +
    +[linux_bridge]
    +physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME
    +
    +[vxlan]
    +enable_vxlan = true
    +local_ip = OVERLAY_INTERFACE_IP_ADDRESS
    +l2_population = true
    +
    +[securitygroup]
    +enable_security_group = true
    +firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
    +

    解释

    +

    [linux_bridge]部分,映射 provider 虚拟网络到物理网络接口;

    +

    [vxlan]部分,启用 vxlan 覆盖网络,配置处理覆盖网络的物理网络接口 IP 地址,启用 layer-2 population;

    +

    [securitygroup]部分,允许安全组,配置 linux bridge iptables 防火墙驱动。

    +

    注意

    +

    替换PROVIDER_INTERFACE_NAME为物理网络接口;

    +

    替换OVERLAY_INTERFACE_IP_ADDRESS为控制节点的管理IP地址。

    +

    配置Layer-3代理:

    +
    vim /etc/neutron/l3_agent.ini                                                                  (CTL)
    +
    +[DEFAULT]
    +interface_driver = linuxbridge
    +

    解释

    +

    在[default]部分,配置接口驱动为linuxbridge

    +

    配置DHCP代理:

    +
    vim /etc/neutron/dhcp_agent.ini                                                                (CTL)
    +
    +[DEFAULT]
    +interface_driver = linuxbridge
    +dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
    +enable_isolated_metadata = true
    +

    解释

    +

    [default]部分,配置linuxbridge接口驱动、Dnsmasq DHCP驱动,启用隔离的元数据。

    +

    配置metadata代理:

    +
    vim /etc/neutron/metadata_agent.ini                                                            (CTL)
    +
    +[DEFAULT]
    +nova_metadata_host = controller
    +metadata_proxy_shared_secret = METADATA_SECRET
    +

    解释

    +

    [default]部分,配置元数据主机和shared secret。

    +

    注意

    +

    替换METADATA_SECRET为合适的元数据代理secret。

    +
  6. +
  7. +

    配置nova相关配置

    +
    vim /etc/nova/nova.conf
    +
    +[neutron]
    +auth_url = http://controller:5000
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +region_name = RegionOne
    +project_name = service
    +username = neutron
    +password = NEUTRON_PASS
    +service_metadata_proxy = true                                                                  (CTL)
    +metadata_proxy_shared_secret = METADATA_SECRET                                                 (CTL)
    +

    解释

    +

    [neutron]部分,配置访问参数,启用元数据代理,配置secret。

    +

    注意

    +

    替换NEUTRON_PASS为 neutron 用户的密码;

    +

    替换METADATA_SECRET为合适的元数据代理secret。

    +
  8. +
  9. +

    同步数据库:

    +
    su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
    +--config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
    +
  10. +
  11. +

    重启计算API服务:

    +
    systemctl restart openstack-nova-api.service
    +
  12. +
  13. +

    启动网络服务

    +
    systemctl enable neutron-server.service neutron-linuxbridge-agent.service \                    (CTL)
    +neutron-dhcp-agent.service neutron-metadata-agent.service 
    +systemctl enable neutron-l3-agent.service
    +systemctl restart openstack-nova-api.service neutron-server.service \                          (CTL)
    +neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
    +neutron-metadata-agent.service neutron-l3-agent.service
    +
    +systemctl enable neutron-linuxbridge-agent.service                                             (CPT)
    +systemctl restart neutron-linuxbridge-agent.service openstack-nova-compute.service             (CPT)
    +
  14. +
  15. +

    验证

    +

    验证 neutron 代理启动成功:

    +
    openstack network agent list
    +
  16. +
+

Cinder 安装

+
    +
  1. +

    创建数据库、服务凭证和 API 端点

    +

    创建数据库:

    +
    mysql -u root -p
    +
    +MariaDB [(none)]> CREATE DATABASE cinder;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \
    +IDENTIFIED BY 'CINDER_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \
    +IDENTIFIED BY 'CINDER_DBPASS';
    +MariaDB [(none)]> exit
    +

    注意

    +

    替换 CINDER_DBPASS 为cinder数据库设置密码。

    +
    source ~/.admin-openrc
    +

    创建cinder服务凭证:

    +
    openstack user create --domain default --password-prompt cinder
    +openstack role add --project service --user cinder admin
    +openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
    +openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
    +

    创建块存储服务API端点:

    +
    openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(project_id\)s
    +openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(project_id\)s
    +openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(project_id\)s
    +openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s
    +openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s
    +openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s
    +
  2. +
  3. +

    安装软件包:

    +
    yum install openstack-cinder-api openstack-cinder-scheduler                                    (CTL)
    +
    yum install lvm2 device-mapper-persistent-data scsi-target-utils rpcbind nfs-utils \           (STG)
    +            openstack-cinder-volume openstack-cinder-backup
    +
  4. +
  5. +

    准备存储设备,以下仅为示例:

    +
    pvcreate /dev/vdb
    +vgcreate cinder-volumes /dev/vdb
    +
    +vim /etc/lvm/lvm.conf
    +
    +
    +devices {
    +...
    +filter = [ "a/vdb/", "r/.*/"]
    +

    解释

    +

    在devices部分,添加过滤以接受/dev/vdb设备拒绝其他设备。

    +
  6. +
  7. +

    准备NFS

    +
    mkdir -p /root/cinder/backup
    +
    +cat << EOF >> /etc/exports
    +/root/cinder/backup 192.168.1.0/24(rw,sync,no_root_squash,no_all_squash)
    +EOF
    +
    +
  8. +
  9. +

    配置cinder相关配置:

    +
    vim /etc/cinder/cinder.conf
    +
    +[DEFAULT]
    +transport_url = rabbit://openstack:RABBIT_PASS@controller
    +auth_strategy = keystone
    +my_ip = 10.0.0.11
    +enabled_backends = lvm                                                                         (STG)
    +backup_driver=cinder.backup.drivers.nfs.NFSBackupDriver                                        (STG)
    +backup_share=HOST:PATH                                                                         (STG)
    +
    +[database]
    +connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder
    +
    +[keystone_authtoken]
    +www_authenticate_uri = http://controller:5000
    +auth_url = http://controller:5000
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +project_name = service
    +username = cinder
    +password = CINDER_PASS
    +
    +[oslo_concurrency]
    +lock_path = /var/lib/cinder/tmp
    +
    +[lvm]
    +volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver                                      (STG)
    +volume_group = cinder-volumes                                                                  (STG)
    +iscsi_protocol = iscsi                                                                         (STG)
    +iscsi_helper = tgtadm                                                                          (STG)
    +

    解释

    +

    [database]部分,配置数据库入口;

    +

    [DEFAULT]部分,配置RabbitMQ消息队列入口,配置my_ip;

    +

    [DEFAULT] [keystone_authtoken]部分,配置身份认证服务入口;

    +

    [oslo_concurrency]部分,配置lock path。

    +

    注意

    +

    替换CINDER_DBPASS为 cinder 数据库的密码;

    +

    替换RABBIT_PASS为 RabbitMQ 中 openstack 账户的密码;

    +

    配置my_ip为控制节点的管理 IP 地址;

    +

    替换CINDER_PASS为 cinder 用户的密码;

    +

    替换HOST:PATH为 NFS 服务端的 IP 和共享路径(示例见下);

    +
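    例如,假设 NFS 服务端地址为 192.168.1.10(示例地址,请按实际环境替换),共享路径为前面导出的 /root/cinder/backup,则:

    ```shell
    backup_share = 192.168.1.10:/root/cinder/backup
    ```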
  10. +
  11. +

    同步数据库:

    +
    su -s /bin/sh -c "cinder-manage db sync" cinder                                                (CTL)
    +
  12. +
  13. +

    配置nova:

    +
    vim /etc/nova/nova.conf                                                                        (CTL)
    +
    +[cinder]
    +os_region_name = RegionOne
    +
  14. +
  15. +

    重启计算API服务

    +
    systemctl restart openstack-nova-api.service
    +
  16. +
  17. +

    启动cinder服务

    +
    systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service               (CTL)
    +systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service                (CTL)
    +
    systemctl enable rpcbind.service nfs-server.service tgtd.service iscsid.service \              (STG)
    +                 openstack-cinder-volume.service \
    +                 openstack-cinder-backup.service
    +systemctl start rpcbind.service nfs-server.service tgtd.service iscsid.service \               (STG)
    +                openstack-cinder-volume.service \
    +                openstack-cinder-backup.service
    +

    注意

    +

    当cinder使用tgtadm的方式挂卷的时候,要修改/etc/tgt/tgtd.conf,内容如下,保证tgtd可以发现cinder-volume的iscsi target。

    +
    include /var/lib/cinder/volumes/*
    +
  18. +
  19. +

    验证

    +
    source ~/.admin-openrc
    +openstack volume service list
    +
  20. +
+

horizon 安装

+
    +
  1. +

    安装软件包

    +
    yum install openstack-dashboard
    +
  2. +
  3. +

    修改文件

    +

    修改变量

    +
    vim /etc/openstack-dashboard/local_settings
    +
    +OPENSTACK_HOST = "controller"
    +ALLOWED_HOSTS = ['*', ]
    +
    +SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
    +
    +CACHES = {
    +'default': {
    +     'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
    +     'LOCATION': 'controller:11211',
    +    }
    +}
    +
    +OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
    +OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
    +OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
    +OPENSTACK_KEYSTONE_DEFAULT_ROLE = "member"
    +WEBROOT = '/dashboard'
    +POLICY_FILES_PATH = "/etc/openstack-dashboard"
    +
    +OPENSTACK_API_VERSIONS = {
    +    "identity": 3,
    +    "image": 2,
    +    "volume": 3,
    +}
    +
  4. +
  5. +

    重启 httpd 服务

    +
    systemctl restart httpd.service memcached.service
    +
  6. +
  7. +

    验证:打开浏览器,输入网址http://HOSTIP/dashboard/,登录 horizon。

    +

    注意

    +

    替换HOSTIP为控制节点管理平面IP地址

    +
  8. +
+

Tempest 安装

+

Tempest是OpenStack的集成测试服务,如果用户需要全面自动化测试已安装的OpenStack环境的功能,则推荐使用该组件。否则,可以不用安装。

+
    +
  1. +

    安装Tempest

    +
    yum install openstack-tempest
    +
  2. +
  3. +

    初始化目录

    +
    tempest init mytest
    +
  4. +
  5. +

    修改配置文件。

    +
    cd mytest
    +vi etc/tempest.conf
    +

    tempest.conf中需要配置当前OpenStack环境的信息,具体内容可以参考官方示例,也可参考下面的最小配置示意。

    +
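    下面是一个最小化的 tempest.conf 配置示意(仅为示例,各取值需按实际环境填写,ADMIN_PASS、IMAGE_ID、FLAVOR_ID 均为占位符):

    ```shell
    [auth]
    admin_username = admin
    admin_password = ADMIN_PASS
    admin_project_name = admin
    admin_domain_name = Default

    [identity]
    uri_v3 = http://controller:5000/v3

    [compute]
    image_ref = IMAGE_ID
    flavor_ref = FLAVOR_ID
    ```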
  6. +
  7. +

    执行测试

    +
    tempest run
    +
  8. +
  9. +

    安装tempest扩展(可选)

    OpenStack各个服务本身也提供了一些tempest测试包,用户可以安装这些包来丰富tempest的测试内容。在Wallaby中,我们提供了Cinder、Glance、Keystone、Ironic、Trove的扩展测试,用户可以执行如下命令进行安装使用:

    yum install python3-cinder-tempest-plugin python3-glance-tempest-plugin python3-ironic-tempest-plugin python3-keystone-tempest-plugin python3-trove-tempest-plugin

    +
  10. +
+

Ironic 安装

+

Ironic是OpenStack的裸金属服务,如果用户需要进行裸机部署则推荐使用该组件。否则,可以不用安装。

+
    +
  1. 设置数据库
  2. +
+

裸金属服务在数据库中存储信息,创建一个ironic用户可以访问的ironic数据库,替换IRONIC_DBPASSWORD为合适的密码

+
mysql -u root -p
+
+MariaDB [(none)]> CREATE DATABASE ironic CHARACTER SET utf8;
+MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'localhost' \
+IDENTIFIED BY 'IRONIC_DBPASSWORD';
+MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'%' \
+IDENTIFIED BY 'IRONIC_DBPASSWORD';
+
    +
  1. 创建服务用户认证
  2. +
+

1、创建Bare Metal服务用户

+
openstack user create --password IRONIC_PASSWORD \
+                      --email ironic@example.com ironic
+openstack role add --project service --user ironic admin
+openstack service create --name ironic \
+                         --description "Ironic baremetal provisioning service" baremetal
+
+openstack service create --name ironic-inspector --description "Ironic inspector baremetal provisioning service" baremetal-introspection
+openstack user create --password IRONIC_INSPECTOR_PASSWORD --email ironic_inspector@example.com ironic_inspector
+openstack role add --project service --user ironic_inspector admin
+

2、创建Bare Metal服务访问入口

+
openstack endpoint create --region RegionOne baremetal admin http://$IRONIC_NODE:6385
+openstack endpoint create --region RegionOne baremetal public http://$IRONIC_NODE:6385
+openstack endpoint create --region RegionOne baremetal internal http://$IRONIC_NODE:6385
+openstack endpoint create --region RegionOne baremetal-introspection internal http://172.20.19.13:5050/v1
+openstack endpoint create --region RegionOne baremetal-introspection public http://172.20.19.13:5050/v1
+openstack endpoint create --region RegionOne baremetal-introspection admin http://172.20.19.13:5050/v1
+
    +
  1. 配置ironic-api服务
  2. +
+

配置文件路径/etc/ironic/ironic.conf

+

1、通过connection选项配置数据库的位置,如下所示,替换IRONIC_DBPASSWORD为ironic用户的密码,替换DB_IP为DB服务器所在的IP地址:

+
[database]
+
+# The SQLAlchemy connection string used to connect to the
+# database (string value)
+
+connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic
+

2、通过以下选项配置ironic-api服务使用RabbitMQ消息代理,替换RPC_*为RabbitMQ的详细地址和凭证

+
[DEFAULT]
+
+# A URL representing the messaging driver to use and its full
+# configuration. (string value)
+
+transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
+

用户也可自行使用json-rpc方式替换rabbitmq(示例片段见下)

+
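例如,若不使用 RabbitMQ,可在 ironic.conf 中切换为 json-rpc(以下为示意配置,端口等取值请以 ironic 官方文档为准):

```shell
[DEFAULT]
rpc_transport = json-rpc

[json_rpc]
# 替换为 ironic 节点的实际 IP,8089 为常见默认端口
host_ip = IRONIC_NODE_IP
port = 8089
```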

3、配置ironic-api服务使用身份认证服务的凭证,替换PUBLIC_IDENTITY_IP为身份认证服务器的公共IP,替换PRIVATE_IDENTITY_IP为身份认证服务器的私有IP,替换IRONIC_PASSWORD为身份认证服务中ironic用户的密码:

+
[DEFAULT]
+
+# Authentication strategy used by ironic-api: one of
+# "keystone" or "noauth". "noauth" should not be used in a
+# production environment because all authentication will be
+# disabled. (string value)
+
+auth_strategy=keystone
+host = controller
+memcache_servers = controller:11211
+enabled_network_interfaces = flat,noop,neutron
+default_network_interface = noop
+transport_url = rabbit://openstack:RABBITPASSWD@controller:5672/
+enabled_hardware_types = ipmi
+enabled_boot_interfaces = pxe
+enabled_deploy_interfaces = direct
+default_deploy_interface = direct
+enabled_inspect_interfaces = inspector
+enabled_management_interfaces = ipmitool
+enabled_power_interfaces = ipmitool
+enabled_rescue_interfaces = no-rescue,agent
+isolinux_bin = /usr/share/syslinux/isolinux.bin
+logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s
+
+[keystone_authtoken]
+# Authentication type to load (string value)
+auth_type=password
+# Complete public Identity API endpoint (string value)
+www_authenticate_uri=http://PUBLIC_IDENTITY_IP:5000
+# Complete admin Identity API endpoint. (string value)
+auth_url=http://PRIVATE_IDENTITY_IP:5000
+# Service username. (string value)
+username=ironic
+# Service account password. (string value)
+password=IRONIC_PASSWORD
+# Service tenant name. (string value)
+project_name=service
+# Domain name containing project (string value)
+project_domain_name=Default
+# User's domain name (string value)
+user_domain_name=Default
+
+[agent]
+deploy_logs_collect = always
+deploy_logs_local_path = /var/log/ironic/deploy
+deploy_logs_storage_backend = local
+image_download_source = http
+stream_raw_images = false
+force_raw_images = false
+verify_ca = False
+
+[oslo_concurrency]
+
+[oslo_messaging_notifications]
+transport_url = rabbit://openstack:123456@172.20.19.25:5672/
+topics = notifications
+driver = messagingv2
+
+[oslo_messaging_rabbit]
+amqp_durable_queues = True
+rabbit_ha_queues = True
+
+[pxe]
+ipxe_enabled = false
+pxe_append_params = nofb nomodeset vga=normal coreos.autologin ipa-insecure=1
+image_cache_size = 204800
+tftp_root=/var/lib/tftpboot/cephfs/
+tftp_master_path=/var/lib/tftpboot/cephfs/master_images
+
+[dhcp]
+dhcp_provider = none
+

4、创建裸金属服务数据库表

+
ironic-dbsync --config-file /etc/ironic/ironic.conf create_schema
+

5、重启ironic-api服务

+
sudo systemctl restart openstack-ironic-api
+
    +
  1. 配置ironic-conductor服务
  2. +
+

1、替换HOST_IP为conductor host的IP

+
[DEFAULT]
+
+# IP address of this host. If unset, will determine the IP
+# programmatically. If unable to do so, will use "127.0.0.1".
+# (string value)
+
+my_ip=HOST_IP
+

2、配置数据库的位置,ironic-conductor应该使用和ironic-api相同的配置。替换IRONIC_DBPASSWORD为ironic用户的密码,替换DB_IP为DB服务器所在的IP地址:

+
[database]
+
+# The SQLAlchemy connection string to use to connect to the
+# database. (string value)
+
+connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic
+

3、通过以下选项配置ironic-conductor服务使用RabbitMQ消息代理,ironic-conductor应该使用和ironic-api相同的配置,替换RPC_*为RabbitMQ的详细地址和凭证

+
[DEFAULT]
+
+# A URL representing the messaging driver to use and its full
+# configuration. (string value)
+
+transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
+

用户也可自行使用json-rpc方式替换rabbitmq

+

4、配置凭证访问其他OpenStack服务

+

为了与其他OpenStack服务进行通信,裸金属服务在请求其他服务时需要使用服务用户与OpenStack Identity服务进行认证。这些用户的凭据必须在与相应服务相关的每个配置文件中进行配置。

+
[neutron] - 访问OpenStack网络服务
+[glance] - 访问OpenStack镜像服务
+[swift] - 访问OpenStack对象存储服务
+[cinder] - 访问OpenStack块存储服务
+[inspector] - 访问OpenStack裸金属introspection服务
+[service_catalog] - 一个特殊项用于保存裸金属服务使用的凭证,该凭证用于发现注册在OpenStack身份认证服务目录中的自己的API URL端点
+

简单起见,可以对所有服务使用同一个服务用户。为了向后兼容,该用户应该和ironic-api服务的[keystone_authtoken]所配置的为同一个用户。但这不是必须的,也可以为每个服务创建并配置不同的服务用户。

+

在下面的示例中,用户访问OpenStack网络服务的身份验证信息配置为:

+
网络服务部署在名为RegionOne的身份认证服务域中,仅在服务目录中注册公共端点接口
+
+请求时使用特定的CA SSL证书进行HTTPS连接
+
+与ironic-api服务配置相同的服务用户
+
+动态密码认证插件基于其他选项发现合适的身份认证服务API版本
+
[neutron]
+
+# Authentication type to load (string value)
+auth_type = password
+# Authentication URL (string value)
+auth_url=https://IDENTITY_IP:5000/
+# Username (string value)
+username=ironic
+# User's password (string value)
+password=IRONIC_PASSWORD
+# Project name to scope to (string value)
+project_name=service
+# Domain ID containing project (string value)
+project_domain_id=default
+# User's domain id (string value)
+user_domain_id=default
+# PEM encoded Certificate Authority to use when verifying
+# HTTPs connections. (string value)
+cafile=/opt/stack/data/ca-bundle.pem
+# The default region_name for endpoint URL discovery. (string
+# value)
+region_name = RegionOne
+# List of interfaces, in order of preference, for endpoint
+# URL. (list value)
+valid_interfaces=public
+

默认情况下,为了与其他服务进行通信,裸金属服务会尝试通过身份认证服务的服务目录发现该服务合适的端点。如果希望对一个特定服务使用一个不同的端点,则在裸金属服务的配置文件中通过endpoint_override选项进行指定:

+
[neutron]
+...
+endpoint_override = <NEUTRON_API_ADDRESS>
+

5、配置允许的驱动程序和硬件类型

+

通过设置enabled_hardware_types设置ironic-conductor服务允许使用的硬件类型:

+
[DEFAULT]
+enabled_hardware_types = ipmi
+

配置硬件接口:

+
enabled_boot_interfaces = pxe
+enabled_deploy_interfaces = direct,iscsi
+enabled_inspect_interfaces = inspector
+enabled_management_interfaces = ipmitool
+enabled_power_interfaces = ipmitool
+

配置接口默认值:

+
[DEFAULT]
+default_deploy_interface = direct
+default_network_interface = neutron
+

如果启用了任何使用Direct deploy的驱动,必须安装和配置镜像服务的Swift后端。Ceph对象网关(RADOS网关)也支持作为镜像服务的后端。

+

6、重启ironic-conductor服务

+
sudo systemctl restart openstack-ironic-conductor
+
    +
  1. 配置ironic-inspector服务
  2. +
+

配置文件路径/etc/ironic-inspector/inspector.conf

+

1、创建数据库

+
# mysql -u root -p
+
+MariaDB [(none)]> CREATE DATABASE ironic_inspector CHARACTER SET utf8;
+
+MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic_inspector.* TO 'ironic_inspector'@'localhost' \     IDENTIFIED BY 'IRONIC_INSPECTOR_DBPASSWORD';
+MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic_inspector.* TO 'ironic_inspector'@'%' \
+IDENTIFIED BY 'IRONIC_INSPECTOR_DBPASSWORD';
+

2、通过connection选项配置数据库的位置,如下所示,替换IRONIC_INSPECTOR_DBPASSWORD为ironic_inspector用户的密码,替换DB_IP为DB服务器所在的IP地址:

+
[database]
+backend = sqlalchemy
+connection = mysql+pymysql://ironic_inspector:IRONIC_INSPECTOR_DBPASSWORD@DB_IP/ironic_inspector
+min_pool_size = 100
+max_pool_size = 500
+pool_timeout = 30
+max_retries = 5
+max_overflow = 200
+db_retry_interval = 2
+db_inc_retry_interval = True
+db_max_retry_interval = 2
+db_max_retries = 5
+

3、配置消息队列通信地址

+
[DEFAULT] 
+transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
+
+

4、设置keystone认证

+
[DEFAULT]
+
+auth_strategy = keystone
+timeout = 900
+rootwrap_config = /etc/ironic-inspector/rootwrap.conf
+logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s
+log_dir = /var/log/ironic-inspector
+state_path = /var/lib/ironic-inspector
+use_stderr = False
+
+[ironic]
+api_endpoint = http://IRONIC_API_HOST_ADDRESS:6385
+auth_type = password
+auth_url = http://PUBLIC_IDENTITY_IP:5000
+auth_strategy = keystone
+ironic_url = http://IRONIC_API_HOST_ADDRESS:6385
+os_region = RegionOne
+project_name = service
+project_domain_name = Default
+user_domain_name = Default
+username = IRONIC_SERVICE_USER_NAME
+password = IRONIC_SERVICE_USER_PASSWORD
+
+[keystone_authtoken]
+auth_type = password
+auth_url = http://control:5000
+www_authenticate_uri = http://control:5000
+project_domain_name = default
+user_domain_name = default
+project_name = service
+username = ironic_inspector
+password = IRONICPASSWD
+region_name = RegionOne
+memcache_servers = control:11211
+token_cache_time = 300
+
+[processing]
+add_ports = active
+processing_hooks = $default_processing_hooks,local_link_connection,lldp_basic
+ramdisk_logs_dir = /var/log/ironic-inspector/ramdisk
+always_store_ramdisk_logs = true
+store_data =none
+power_off = false
+
+[pxe_filter]
+driver = iptables
+
+[capabilities]
+boot_mode=True
+

5、配置ironic inspector dnsmasq服务

+
# 配置文件地址:/etc/ironic-inspector/dnsmasq.conf
+port=0
+interface=enp3s0                         #替换为实际监听网络接口
+dhcp-range=172.20.19.100,172.20.19.110   #替换为实际dhcp地址范围
+bind-interfaces
+enable-tftp
+
+dhcp-match=set:efi,option:client-arch,7
+dhcp-match=set:efi,option:client-arch,9
+dhcp-match=aarch64, option:client-arch,11
+dhcp-boot=tag:aarch64,grubaa64.efi
+dhcp-boot=tag:!aarch64,tag:efi,grubx64.efi
+dhcp-boot=tag:!aarch64,tag:!efi,pxelinux.0
+
+tftp-root=/tftpboot                       #替换为实际tftpboot目录
+log-facility=/var/log/dnsmasq.log
+

6、关闭ironic provision网络子网的dhcp

+
openstack subnet set --no-dhcp 72426e89-f552-4dc4-9ac7-c4e131ce7f3c
+

7、初始化ironic-inspector服务的数据库

+

在控制节点执行:

+
ironic-inspector-dbsync --config-file /etc/ironic-inspector/inspector.conf upgrade
+

8、启动服务

+
systemctl enable --now openstack-ironic-inspector.service
+systemctl enable --now openstack-ironic-inspector-dnsmasq.service
+

6.配置httpd服务

+
    +
  1. +

    创建ironic要使用的httpd的root目录并设置属主属组,目录路径要和/etc/ironic/ironic.conf中[deploy]组的http_root配置项指定的路径一致。

    +
    mkdir -p /var/lib/ironic/httproot
    +chown ironic.ironic /var/lib/ironic/httproot
    +
  2. +
  3. +

    安装和配置httpd服务

    +
      +
    1. +

      安装httpd服务,已有请忽略

      +
      yum install httpd -y
      +
    2. +
    3. +

      创建/etc/httpd/conf.d/openstack-ironic-httpd.conf文件,内容如下:

      +
      Listen 8080
      +
      +<VirtualHost *:8080>
      +    ServerName ironic.openeuler.com
      +
      +    ErrorLog "/var/log/httpd/openstack-ironic-httpd-error_log"
      +    CustomLog "/var/log/httpd/openstack-ironic-httpd-access_log" "%h %l %u %t \"%r\" %>s %b"
      +
      +    DocumentRoot "/var/lib/ironic/httproot"
      +    <Directory "/var/lib/ironic/httproot">
      +        Options Indexes FollowSymLinks
      +        Require all granted
      +    </Directory>
      +    LogLevel warn
      +    AddDefaultCharset UTF-8
      +    EnableSendfile on
      +</VirtualHost>
      +
      +

      注意监听的端口要和/etc/ironic/ironic.conf里[deploy]选项中http_url配置项中指定的端口一致(对应的[deploy]配置可参考下面的示意片段)。

      +
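      例如,假设 httpd 监听 8080 端口、根目录为 /var/lib/ironic/httproot,则 /etc/ironic/ironic.conf 中对应的 [deploy] 配置示意如下(IRONIC_NODE_IP 为占位符,请替换为 ironic 节点实际地址):

      ```shell
      [deploy]
      http_url = http://IRONIC_NODE_IP:8080
      http_root = /var/lib/ironic/httproot
      ```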
    4. +
    5. +

      重启httpd服务。

      +
      systemctl restart httpd
      +
    6. +
    +
  4. +
+

7.deploy ramdisk镜像制作

+

W版的ramdisk镜像支持通过ironic-python-agent服务或disk-image-builder工具制作,也可以使用社区最新的ironic-python-agent-builder。用户也可以自行选择其他工具制作。若使用W版原生工具,则需要安装对应的软件包。

+
yum install openstack-ironic-python-agent
+或者
+yum install diskimage-builder
+

具体的使用方法可以参考官方文档

+

这里介绍下使用ironic-python-agent-builder构建ironic使用的deploy镜像的完整过程。

+
    +
  1. +

    安装 ironic-python-agent-builder

    +
    1. 安装工具:
    +
    +    ```shell
    +    pip install ironic-python-agent-builder
    +    ```
    +
    +2. 修改以下文件中的python解释器:
    +
    +    ```shell
    +    /usr/bin/yum /usr/libexec/urlgrabber-ext-down
    +    ```
    +
    +3. 安装其它必须的工具:
    +
    +    ```shell
    +    yum install git
    +    ```
    +
    +    由于`DIB`依赖`semanage`命令,所以在制作镜像之前确定该命令是否可用:`semanage --help`,如果提示无此命令,安装即可:
    +
    +    ```shell
    +    # 先查询需要安装哪个包
    +    [root@localhost ~]# yum provides /usr/sbin/semanage
    +    已加载插件:fastestmirror
    +    Loading mirror speeds from cached hostfile
    +    * base: mirror.vcu.edu
    +    * extras: mirror.vcu.edu
    +    * updates: mirror.math.princeton.edu
    +    policycoreutils-python-2.5-34.el7.aarch64 : SELinux policy core python utilities
    +    源    :base
    +    匹配来源:
    +    文件名    :/usr/sbin/semanage
    +    # 安装
    +    [root@localhost ~]# yum install policycoreutils-python
    +    ```
    +
  2. +
  3. +

    制作镜像

    +
    如果是`arm`架构,需要添加:
    +```shell
    +export ARCH=aarch64
    +```
    +
    +基本用法:
    +
    +```shell
    +usage: ironic-python-agent-builder [-h] [-r RELEASE] [-o OUTPUT] [-e ELEMENT]
    +                                    [-b BRANCH] [-v] [--extra-args EXTRA_ARGS]
    +                                    distribution
    +
    +positional arguments:
    +    distribution          Distribution to use
    +
    +optional arguments:
    +    -h, --help            show this help message and exit
    +    -r RELEASE, --release RELEASE
    +                        Distribution release to use
    +    -o OUTPUT, --output OUTPUT
    +                        Output base file name
    +    -e ELEMENT, --element ELEMENT
    +                        Additional DIB element to use
    +    -b BRANCH, --branch BRANCH
    +                        If set, override the branch that is used for ironic-
    +                        python-agent and requirements
    +    -v, --verbose         Enable verbose logging in diskimage-builder
    +    --extra-args EXTRA_ARGS
    +                        Extra arguments to pass to diskimage-builder
    +```
    +
    +举例说明:
    +
    +```shell
    +ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky
    +```
    +
  4. +
  5. +

    允许ssh登录

    +
    初始化环境变量,然后制作镜像:
    +
    +```shell
    +export DIB_DEV_USER_USERNAME=ipa \
    +export DIB_DEV_USER_PWDLESS_SUDO=yes \
    +export DIB_DEV_USER_PASSWORD='123'
    +ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky -e selinux-permissive -e devuser
    +```
    +
  6. +
  7. +

    指定代码仓库

    +
    初始化对应的环境变量,然后制作镜像:
    +
    +```shell
    +# 指定仓库地址以及版本
    +DIB_REPOLOCATION_ironic_python_agent=git@172.20.2.149:liuzz/ironic-python-agent.git
    +DIB_REPOREF_ironic_python_agent=origin/develop
    +
    +# 直接从gerrit上clone代码
    +DIB_REPOLOCATION_ironic_python_agent=https://review.opendev.org/openstack/ironic-python-agent
    +DIB_REPOREF_ironic_python_agent=refs/changes/43/701043/1
    +```
    +
    +参考:[source-repositories](https://docs.openstack.org/diskimage-builder/latest/elements/source-repositories/README.html)。
    +
    +指定仓库地址及版本验证成功。
    +
  8. +
  9. +

    注意

    +
    原生的openstack里的pxe配置文件的模版不支持arm64架构,需要自己对原生openstack代码进行修改:
    +
    +在W版中,社区的ironic仍然不支持arm64位的uefi pxe启动,表现为生成的grub.cfg文件(一般位于/tftpboot/下)格式不对而导致pxe启动失败,如下:
    +
    +生成的错误配置文件:
    +
    +![ironic-err](../../img/install/ironic-err.png)
    +
    +如上图所示,arm架构里寻找vmlinux和ramdisk镜像的命令分别是linux和initrd,上图所示的标红命令是x86架构下的uefi pxe启动。
    +
+需要用户对生成grub.cfg的代码逻辑自行修改,使aarch64下生成的配置使用linux/initrd命令(示意见下)。
    +
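下面给出一个 aarch64 下期望的 grub.cfg 示意片段(仅为示意,并非社区模板的原样输出,路径与内核参数需按实际部署环境调整):

```shell
menuentry "deploy ramdisk" {
    # aarch64 的 UEFI GRUB 使用 linux/initrd 命令加载内核与 ramdisk
    linux  /tftpboot/<node-uuid>/deploy_kernel ipa-api-url=http://IRONIC_API_IP:6385 ipa-insecure=1
    initrd /tftpboot/<node-uuid>/deploy_ramdisk
}
```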
    +ironic向ipa发送查询命令执行状态请求的tls报错:
    +
    +w版的ipa和ironic默认都会开启tls认证的方式向对方发送请求,跟据官网的说明进行关闭即可。
    +
    +1. 修改ironic配置文件(/etc/ironic/ironic.conf)下面的配置中添加ipa-insecure=1:
    +
    +```
    +[agent]
    +verify_ca = False
    +
    +[pxe]
    +pxe_append_params = nofb nomodeset vga=normal coreos.autologin ipa-insecure=1
    +```
    +
    +2) ramdisk镜像中添加ipa配置文件/etc/ironic_python_agent/ironic_python_agent.conf并配置tls的配置如下:
    +
    +/etc/ironic_python_agent/ironic_python_agent.conf (需要提前创建/etc/ironic_python_agent目录)
    +
    +```
    +[DEFAULT]
    +enable_auto_tls = False
    +```
    +
    +设置权限:
    +
    +```
    +chown -R ipa.ipa /etc/ironic_python_agent/
    +```
    +
    +3. 修改ipa服务的服务启动文件,添加配置文件选项
    +
+vim /usr/lib/systemd/system/ironic-python-agent.service
    +
    +```
    +[Unit]
    +Description=Ironic Python Agent
    +After=network-online.target
    +
    +[Service]
    +ExecStartPre=/sbin/modprobe vfat
    +ExecStart=/usr/local/bin/ironic-python-agent --config-file /etc/ironic_python_agent/ironic_python_agent.conf
    +Restart=always
    +RestartSec=30s
    +
    +[Install]
    +WantedBy=multi-user.target
    +```
    +
  10. +
+

Kolla 安装

+

Kolla为OpenStack服务提供生产环境可用的容器化部署的功能。openEuler 22.03 LTS中引入了Kolla和Kolla-ansible服务。

+

Kolla的安装十分简单,只需要安装对应的RPM包即可

+
yum install openstack-kolla openstack-kolla-ansible
+

安装完后,就可以使用kolla-ansible、kolla-build、kolla-genpwd、kolla-mergepwd等命令了,典型的使用流程可参考下面的示例。

+
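一个典型的 kolla-ansible 全新部署流程示意如下(inventory 路径与 /etc/kolla/globals.yml 需按实际环境准备,以下路径仅为示例):

```shell
# 生成 /etc/kolla/passwords.yml 中的各项随机密码
kolla-genpwd
# 编辑好 /etc/kolla/globals.yml 与 inventory 文件后依次执行
kolla-ansible -i /etc/kolla/all-in-one bootstrap-servers
kolla-ansible -i /etc/kolla/all-in-one prechecks
kolla-ansible -i /etc/kolla/all-in-one deploy
```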

Trove 安装

+

Trove是OpenStack的数据库服务,如果用户使用OpenStack提供的数据库服务则推荐使用该组件。否则,可以不用安装。

+

1.设置数据库

+

数据库服务在数据库中存储信息,创建一个trove用户可以访问的trove数据库,替换TROVE_DBPASSWORD为合适的密码

+
mysql -u root -p
+
+MariaDB [(none)]> CREATE DATABASE trove CHARACTER SET utf8;
+MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'localhost' \
+IDENTIFIED BY 'TROVE_DBPASSWORD';
+MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'%' \
+IDENTIFIED BY 'TROVE_DBPASSWORD';
+

2.创建服务用户认证

+

1、创建Trove服务用户

+

openstack user create --password TROVE_PASSWORD \
+                      --email trove@example.com trove
+openstack role add --project service --user trove admin
+openstack service create --name trove \
+                         --description "Database service" database
+
+解释: TROVE_PASSWORD 替换为trove用户的密码

+

2、创建Database服务访问入口

+
openstack endpoint create --region RegionOne database public http://controller:8779/v1.0/%\(tenant_id\)s
+openstack endpoint create --region RegionOne database internal http://controller:8779/v1.0/%\(tenant_id\)s
+openstack endpoint create --region RegionOne database admin http://controller:8779/v1.0/%\(tenant_id\)s
+

3.安装和配置Trove各组件

+

1、安装Trove包

yum install openstack-trove python-troveclient
2、配置trove.conf
vim /etc/trove/trove.conf
+
+[DEFAULT]
+bind_host=TROVE_NODE_IP
+log_dir = /var/log/trove
+network_driver = trove.network.neutron.NeutronDriver
+management_security_groups = <manage security group>
+nova_keypair = trove-mgmt
+default_datastore = mysql
+taskmanager_manager = trove.taskmanager.manager.Manager
+trove_api_workers = 5
+transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
+reboot_time_out = 300
+usage_timeout = 900
+agent_call_high_timeout = 1200
+use_syslog = False
+debug = True
+
+# Set these if using Neutron Networking
+network_driver=trove.network.neutron.NeutronDriver
+network_label_regex=.*
+
+
+transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
+
+[database]
+connection = mysql+pymysql://trove:TROVE_DBPASS@controller/trove
+
+[keystone_authtoken]
+project_domain_name = Default
+project_name = service
+user_domain_name = Default
+password = trove
+username = trove
+auth_url = http://controller:5000/v3/
+auth_type = password
+
+[service_credentials]
+auth_url = http://controller:5000/v3/
+region_name = RegionOne
+project_name = service
+password = trove
+project_domain_name = Default
+user_domain_name = Default
+username = trove
+
+[mariadb]
+tcp_ports = 3306,4444,4567,4568
+
+[mysql]
+tcp_ports = 3306
+
+[postgresql]
+tcp_ports = 5432
+ 解释:

+
    +
  • [Default]分组中bind_host配置为Trove部署节点的IP
  • +
  • nova_compute_url 和 cinder_url 为Nova和Cinder在Keystone中创建的endpoint
  • +
  • nova_proxy_XXX 为一个能访问Nova服务的用户信息,上例中使用admin用户为例
  • +
  • transport_url 为RabbitMQ连接信息,RABBIT_PASS替换为RabbitMQ的密码
  • +
  • [database]分组中的connection 为前面在mysql中为Trove创建的数据库信息
  • +
  • Trove的用户信息中TROVE_PASS替换为实际trove用户的密码
  • +
+

3、配置trove-guestagent.conf

vim /etc/trove/trove-guestagent.conf
+
+[DEFAULT]
+log_file = trove-guestagent.log
+log_dir = /var/log/trove/
+ignore_users = os_admin
+control_exchange = trove
+transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
+rpc_backend = rabbit
+command_process_timeout = 60
+use_syslog = False
+debug = True
+
+[service_credentials]
+auth_url = http://controller:5000/v3/
+region_name = RegionOne
+project_name = service
+password = TROVE_PASS
+project_domain_name = Default
+user_domain_name = Default
+username = trove
+
+[mysql]
+docker_image = your-registry/your-repo/mysql
+backup_docker_image = your-registry/your-repo/db-backup-mysql:1.1.0

+

解释: guestagent是trove中一个独立组件,需要预先内置到Trove通过Nova创建的虚拟机镜像中,在创建好数据库实例后,会起guestagent进程,负责通过消息队列(RabbitMQ)向Trove上报心跳,因此需要配置RabbitMQ的用户和密码信息。从Victoria版开始,Trove使用一个统一的镜像来跑不同类型的数据库,数据库服务运行在Guest虚拟机的Docker容器中。

+
    +
  • transport_url 为RabbitMQ连接信息,RABBIT_PASS替换为RabbitMQ的密码
  • +
  • Trove的用户信息中TROVE_PASS替换为实际trove用户的密码
  • +
+

4、生成Trove数据库表

su -s /bin/sh -c "trove-manage db_sync" trove

+

4.完成安装配置

+
    +
  1. 配置Trove服务自启动 +
    systemctl enable openstack-trove-api.service \
    +openstack-trove-taskmanager.service \
    +openstack-trove-conductor.service 
  2. +
  3. 启动服务 +
    systemctl start openstack-trove-api.service \
    +openstack-trove-taskmanager.service \
    +openstack-trove-conductor.service
  4. +
+

Swift 安装

+

Swift 提供了弹性可伸缩、高可用的分布式对象存储服务,适合存储大规模非结构化数据。

+
    +
  1. +

    创建服务凭证、API端点。

    +

    创建服务凭证

    +
    #创建swift用户:
    +openstack user create --domain default --password-prompt swift                 
    +#为swift用户添加admin角色:
    +openstack role add --project service --user swift admin                        
    +#创建swift服务实体:
    +openstack service create --name swift --description "OpenStack Object Storage" object-store                                                                   
    +

    创建swift API 端点:

    +
    openstack endpoint create --region RegionOne object-store public http://controller:8080/v1/AUTH_%\(project_id\)s                            
    +openstack endpoint create --region RegionOne object-store internal http://controller:8080/v1/AUTH_%\(project_id\)s                            
    +openstack endpoint create --region RegionOne object-store admin http://controller:8080/v1                                                  
    +
  2. +
  3. +

    安装软件包:

    +
    yum install openstack-swift-proxy python3-swiftclient python3-keystoneclient python3-keystonemiddleware memcached (CTL)
    +
  4. +
  5. +

    配置proxy-server相关配置

    +
  6. +
+

Swift RPM包里已经包含了一个基本可用的proxy-server.conf,只需要手动修改其中的ip和swift password即可。

+
***注意***
+
+**注意替换password为您在身份服务中为swift用户选择的密码**
+
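+一个示意性的修改片段如下(其中的SWIFT_PASS与IP均为假设值,实际字段以RPM自带的/etc/swift/proxy-server.conf为准):
+
+```shell
+# /etc/swift/proxy-server.conf 中通常需要修改的部分(示意)
+[DEFAULT]
+bind_ip = 10.0.0.11            # 替换为运行代理服务节点的管理IP
+bind_port = 8080
+
+[filter:authtoken]
+# ...其余选项保持RPM默认...
+username = swift
+password = SWIFT_PASS          # 替换为身份服务中swift用户的密码
+```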

4.安装和配置存储节点 (STG)

+
安装支持的程序包:
+```shell
+yum install xfsprogs rsync
+```
+
+将/dev/vdb和/dev/vdc设备格式化为 XFS
+
+```shell
+mkfs.xfs /dev/vdb
+mkfs.xfs /dev/vdc
+```
+
+创建挂载点目录结构:
+
+```shell
+mkdir -p /srv/node/vdb
+mkdir -p /srv/node/vdc
+```
+
+找到新分区的 UUID:
+
+```shell
+blkid
+```
+
+编辑/etc/fstab文件并将以下内容添加到其中:
+
+```shell
+UUID="<UUID-from-output-above>" /srv/node/vdb xfs noatime 0 2
+UUID="<UUID-from-output-above>" /srv/node/vdc xfs noatime 0 2
+```
+
+挂载设备:
+
+```shell
+mount /srv/node/vdb
+mount /srv/node/vdc
+```
+***注意***
+
+**如果用户不需要容灾功能,以上步骤只需要创建一个设备即可,同时可以跳过下面的rsync配置**
+
+(可选)创建或编辑/etc/rsyncd.conf文件以包含以下内容:
+
+```shell
+[DEFAULT]
+uid = swift
+gid = swift
+log file = /var/log/rsyncd.log
+pid file = /var/run/rsyncd.pid
+address = MANAGEMENT_INTERFACE_IP_ADDRESS
+
+[account]
+max connections = 2
+path = /srv/node/
+read only = False
+lock file = /var/lock/account.lock
+
+[container]
+max connections = 2
+path = /srv/node/
+read only = False
+lock file = /var/lock/container.lock
+
+[object]
+max connections = 2
+path = /srv/node/
+read only = False
+lock file = /var/lock/object.lock
+```
+**替换MANAGEMENT_INTERFACE_IP_ADDRESS为存储节点上管理网络的IP地址**
+
+启动rsyncd服务并配置它在系统启动时启动:
+
+```shell
+systemctl enable rsyncd.service
+systemctl start rsyncd.service
+```
+

5.在存储节点安装和配置组件 (STG)

+
安装软件包:
+
+```shell
+yum install openstack-swift-account openstack-swift-container openstack-swift-object
+```
+
+编辑/etc/swift目录的account-server.conf、container-server.conf和object-server.conf文件,替换bind_ip为存储节点上管理网络的IP地址。
+
+确保挂载点目录结构的正确所有权:
+
+```shell
+chown -R swift:swift /srv/node
+```
+
+创建recon目录并确保其拥有正确的所有权:
+
+```shell
+mkdir -p /var/cache/swift
+chown -R root:swift /var/cache/swift
+chmod -R 775 /var/cache/swift
+```
+

6.创建账号环 (CTL)

+
切换到/etc/swift目录。
+
+```shell
+cd /etc/swift
+```
+
+创建基础account.builder文件:
+
+```shell
+swift-ring-builder account.builder create 10 1 1
+```
+
+将每个存储节点添加到环中:
+
+```shell
+swift-ring-builder account.builder add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6202  --device DEVICE_NAME --weight DEVICE_WEIGHT
+```
+
+**替换STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS为存储节点上管理网络的IP地址。替换DEVICE_NAME为同一存储节点上的存储设备名称**
+
+***注意***
+**对每个存储节点上的每个存储设备重复此命令**
+
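+下面给出一个示意性示例(假设存储节点管理IP为10.0.0.51,存储设备为vdb和vdc,权重均取100):
+
+```shell
+# 为同一存储节点的两个设备分别执行add命令
+swift-ring-builder account.builder add --region 1 --zone 1 --ip 10.0.0.51 --port 6202 --device vdb --weight 100
+swift-ring-builder account.builder add --region 1 --zone 1 --ip 10.0.0.51 --port 6202 --device vdc --weight 100
+```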
+验证环的内容:
+
+```shell
+swift-ring-builder account.builder
+```
+
+重新平衡环:
+
+```shell
+swift-ring-builder account.builder rebalance
+```
+

7.创建容器环 (CTL)

+
切换到`/etc/swift`目录。
+
+创建基础`container.builder`文件:
+
+```shell
+   swift-ring-builder container.builder create 10 1 1
+```
+
+将每个存储节点添加到环中:
+
+```shell
+swift-ring-builder container.builder \
+  add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6201 \
+  --device DEVICE_NAME --weight 100
+
+```
+
+**替换STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS为存储节点上管理网络的IP地址。替换DEVICE_NAME为同一存储节点上的存储设备名称**
+
+***注意***
+**对每个存储节点上的每个存储设备重复此命令**
+
+验证环的内容:
+
+```shell
+swift-ring-builder container.builder
+```
+
+重新平衡环:
+
+```shell
+swift-ring-builder container.builder rebalance
+```
+

8.创建对象环 (CTL)

+
切换到`/etc/swift`目录。
+
+创建基础`object.builder`文件:
+
+   ```shell
+   swift-ring-builder object.builder create 10 1 1
+   ```
+
+将每个存储节点添加到环中
+
+```shell
+ swift-ring-builder object.builder \
+  add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6200 \
+  --device DEVICE_NAME --weight 100
+```
+
+**替换STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS为存储节点上管理网络的IP地址。替换DEVICE_NAME为同一存储节点上的存储设备名称**
+
+***注意***
+**对每个存储节点上的每个存储设备重复此命令**
+
+验证环的内容:
+
+```shell
+swift-ring-builder object.builder
+```
+
+重新平衡环:
+
+```shell
+swift-ring-builder object.builder rebalance
+```
+
+分发环配置文件:
+
+将`account.ring.gz`,`container.ring.gz`以及 `object.ring.gz`文件复制到每个存储节点和运行代理服务的任何其他节点上的`/etc/swift`目录。
+
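+例如,可以在控制节点的/etc/swift目录下用scp分发(示意,STORAGE_NODE_IP为假设的存储节点管理IP):
+
+```shell
+# 将三个环文件复制到每个存储节点和运行代理服务节点的/etc/swift目录
+scp account.ring.gz container.ring.gz object.ring.gz root@STORAGE_NODE_IP:/etc/swift/
+```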

9.完成安装

+

编辑/etc/swift/swift.conf文件

+
[swift-hash]
+swift_hash_path_suffix = test-hash
+swift_hash_path_prefix = test-hash
+
+[storage-policy:0]
+name = Policy-0
+default = yes
+

用唯一值替换 test-hash

+
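+可以使用openssl生成随机字符串作为前后缀,例如(示意):
+
+```shell
+# 分别为swift_hash_path_suffix和swift_hash_path_prefix生成唯一值
+openssl rand -hex 16
+openssl rand -hex 16
+```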

将swift.conf文件复制到/etc/swift每个存储节点和运行代理服务的任何其他节点上的目录。

+

在所有节点上,确保配置目录的正确所有权:

+
chown -R root:swift /etc/swift
+

在控制器节点和运行代理服务的任何其他节点上,启动对象存储代理服务及其依赖项,并将它们配置为在系统启动时启动:

+
systemctl enable openstack-swift-proxy.service memcached.service
+systemctl start openstack-swift-proxy.service memcached.service
+

在存储节点上,启动对象存储服务并将它们配置为在系统启动时启动:

+
systemctl enable openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service
+
+systemctl start openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service
+
+systemctl enable openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service
+
+systemctl start openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service
+
+systemctl enable openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service
+
+systemctl start openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service
+
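+(可选)完成以上步骤后,可以在控制节点做一次简单验证(示意,假设已导入admin环境变量且代理服务工作正常):
+
+```shell
+source ~/.admin-openrc
+# 查看对象存储账户状态,确认代理服务与存储节点之间通信正常
+openstack object store account show
+```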

Cyborg 安装

+

Cyborg为OpenStack提供加速器设备的支持,包括 GPU, FPGA, ASIC, NP, SoCs, NVMe/NOF SSDs, ODP, DPDK/SPDK等等。

+

1.初始化对应数据库

+
CREATE DATABASE cyborg;
+GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'localhost' IDENTIFIED BY 'CYBORG_DBPASS';
+GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'%' IDENTIFIED BY 'CYBORG_DBPASS';
+

2.创建对应Keystone资源对象

+
$ openstack user create --domain default --password-prompt cyborg
+$ openstack role add --project service --user cyborg admin
+$ openstack service create --name cyborg --description "Acceleration Service" accelerator
+
+$ openstack endpoint create --region RegionOne \
+  accelerator public http://<cyborg-ip>:6666/v1
+$ openstack endpoint create --region RegionOne \
+  accelerator internal http://<cyborg-ip>:6666/v1
+$ openstack endpoint create --region RegionOne \
+  accelerator admin http://<cyborg-ip>:6666/v1
+

3.安装Cyborg

+
yum install openstack-cyborg
+

4.配置Cyborg

+

修改/etc/cyborg/cyborg.conf

+
[DEFAULT]
+transport_url = rabbit://%RABBITMQ_USER%:%RABBITMQ_PASSWORD%@%OPENSTACK_HOST_IP%:5672/
+use_syslog = False
+state_path = /var/lib/cyborg
+debug = True
+
+[database]
+connection = mysql+pymysql://%DATABASE_USER%:%DATABASE_PASSWORD%@%OPENSTACK_HOST_IP%/cyborg
+
+[service_catalog]
+project_domain_id = default
+user_domain_id = default
+project_name = service
+password = PASSWORD
+username = cyborg
+auth_url = http://%OPENSTACK_HOST_IP%/identity
+auth_type = password
+
+[placement]
+project_domain_name = Default
+project_name = service
+user_domain_name = Default
+password = PASSWORD
+username = placement
+auth_url = http://%OPENSTACK_HOST_IP%/identity
+auth_type = password
+
+[keystone_authtoken]
+memcached_servers = localhost:11211
+project_domain_name = Default
+project_name = service
+user_domain_name = Default
+password = PASSWORD
+username = cyborg
+auth_url = http://%OPENSTACK_HOST_IP%/identity
+auth_type = password
+

自行修改对应的用户名、密码、IP等信息

+

5.同步数据库表格

+
cyborg-dbsync --config-file /etc/cyborg/cyborg.conf upgrade
+

6.启动Cyborg服务

+
systemctl enable openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent
+systemctl start openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent
+

Aodh 安装

+

1.创建数据库

+
CREATE DATABASE aodh;
+
+GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'localhost' IDENTIFIED BY 'AODH_DBPASS';
+
+GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'%' IDENTIFIED BY 'AODH_DBPASS';
+

2.创建对应Keystone资源对象

+
openstack user create --domain default --password-prompt aodh
+
+openstack role add --project service --user aodh admin
+
+openstack service create --name aodh --description "Telemetry" alarming
+
+openstack endpoint create --region RegionOne alarming public http://controller:8042
+
+openstack endpoint create --region RegionOne alarming internal http://controller:8042
+
+openstack endpoint create --region RegionOne alarming admin http://controller:8042
+

3.安装Aodh

+
yum install openstack-aodh-api openstack-aodh-evaluator openstack-aodh-notifier openstack-aodh-listener openstack-aodh-expirer python3-aodhclient
+

注意

+

aodh依赖的软件包python3-pyparsing在openEuler的OS仓不适配,需要覆盖安装OpenStack对应版本。可以使用yum list | grep pyparsing | grep OpenStack | awk '{print $2}'获取对应的版本号VERSION,然后再执行yum install -y python3-pyparsing-VERSION覆盖安装适配的pyparsing。
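
按上述说明,可参考如下操作(示意,实际版本号以yum查询结果为准):

```shell
# 获取OpenStack仓库中python3-pyparsing的版本号
VERSION=$(yum list | grep pyparsing | grep OpenStack | awk '{print $2}')
# 覆盖安装该版本
yum install -y python3-pyparsing-$VERSION
```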

+

4.修改配置文件

+
[database]
+connection = mysql+pymysql://aodh:AODH_DBPASS@controller/aodh
+
+[DEFAULT]
+transport_url = rabbit://openstack:RABBIT_PASS@controller
+auth_strategy = keystone
+
+[keystone_authtoken]
+www_authenticate_uri = http://controller:5000
+auth_url = http://controller:5000
+memcached_servers = controller:11211
+auth_type = password
+project_domain_id = default
+user_domain_id = default
+project_name = service
+username = aodh
+password = AODH_PASS
+
+[service_credentials]
+auth_type = password
+auth_url = http://controller:5000/v3
+project_domain_id = default
+user_domain_id = default
+project_name = service
+username = aodh
+password = AODH_PASS
+interface = internalURL
+region_name = RegionOne
+

5.初始化数据库

+
aodh-dbsync
+

6.启动Aodh服务

+
systemctl enable openstack-aodh-api.service openstack-aodh-evaluator.service openstack-aodh-notifier.service openstack-aodh-listener.service
+
+systemctl start openstack-aodh-api.service openstack-aodh-evaluator.service openstack-aodh-notifier.service openstack-aodh-listener.service
+

Gnocchi 安装

+

1.创建数据库

+
CREATE DATABASE gnocchi;
+
+GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'localhost' IDENTIFIED BY 'GNOCCHI_DBPASS';
+
+GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'%' IDENTIFIED BY 'GNOCCHI_DBPASS';
+

2.创建对应Keystone资源对象

+
openstack user create --domain default --password-prompt gnocchi
+
+openstack role add --project service --user gnocchi admin
+
+openstack service create --name gnocchi --description "Metric Service" metric
+
+openstack endpoint create --region RegionOne metric public http://controller:8041
+
+openstack endpoint create --region RegionOne metric internal http://controller:8041
+
+openstack endpoint create --region RegionOne metric admin http://controller:8041
+

3.安装Gnocchi

+
yum install openstack-gnocchi-api openstack-gnocchi-metricd python3-gnocchiclient
+

4.修改配置文件/etc/gnocchi/gnocchi.conf

+
[api]
+auth_mode = keystone
+port = 8041
+uwsgi_mode = http-socket
+
+[keystone_authtoken]
+auth_type = password
+auth_url = http://controller:5000/v3
+project_domain_name = Default
+user_domain_name = Default
+project_name = service
+username = gnocchi
+password = GNOCCHI_PASS
+interface = internalURL
+region_name = RegionOne
+
+[indexer]
+url = mysql+pymysql://gnocchi:GNOCCHI_DBPASS@controller/gnocchi
+
+[storage]
+# coordination_url is not required but specifying one will improve
+# performance with better workload division across workers.
+coordination_url = redis://controller:6379
+file_basepath = /var/lib/gnocchi
+driver = file
+

5.初始化数据库

+
gnocchi-upgrade
+

6.启动Gnocchi服务

+
systemctl enable openstack-gnocchi-api.service openstack-gnocchi-metricd.service
+
+systemctl start openstack-gnocchi-api.service openstack-gnocchi-metricd.service
+

Ceilometer 安装

+

1.创建对应Keystone资源对象

+
openstack user create --domain default --password-prompt ceilometer
+
+openstack role add --project service --user ceilometer admin
+
+openstack service create --name ceilometer --description "Telemetry" metering
+

2.安装Ceilometer

+
yum install openstack-ceilometer-notification openstack-ceilometer-central
+

3.修改配置文件/etc/ceilometer/pipeline.yaml

+
publishers:
+    # set address of Gnocchi
+    # + filter out Gnocchi-related activity meters (Swift driver)
+    # + set default archive policy
+    - gnocchi://?filter_project=service&archive_policy=low
+

4.修改配置文件/etc/ceilometer/ceilometer.conf

+
[DEFAULT]
+transport_url = rabbit://openstack:RABBIT_PASS@controller
+
+[service_credentials]
+auth_type = password
+auth_url = http://controller:5000/v3
+project_domain_id = default
+user_domain_id = default
+project_name = service
+username = ceilometer
+password = CEILOMETER_PASS
+interface = internalURL
+region_name = RegionOne
+

5.初始化数据库

+
ceilometer-upgrade
+

6.启动Ceilometer服务

+
systemctl enable openstack-ceilometer-notification.service openstack-ceilometer-central.service
+
+systemctl start openstack-ceilometer-notification.service openstack-ceilometer-central.service
+

Heat 安装

+

1.创建heat数据库,并授予heat数据库正确的访问权限,替换HEAT_DBPASS为合适的密码

+
CREATE DATABASE heat;
+GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'localhost' IDENTIFIED BY 'HEAT_DBPASS';
+GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'%' IDENTIFIED BY 'HEAT_DBPASS';
+

2.创建服务凭证,创建heat用户,并为其增加admin角色

+
openstack user create --domain default --password-prompt heat
+openstack role add --project service --user heat admin
+

3.创建heat和heat-cfn服务及其对应的API端点

+
openstack service create --name heat --description "Orchestration" orchestration
+openstack service create --name heat-cfn --description "Orchestration"  cloudformation
+openstack endpoint create --region RegionOne orchestration public http://controller:8004/v1/%\(tenant_id\)s
+openstack endpoint create --region RegionOne orchestration internal http://controller:8004/v1/%\(tenant_id\)s
+openstack endpoint create --region RegionOne orchestration admin http://controller:8004/v1/%\(tenant_id\)s
+openstack endpoint create --region RegionOne cloudformation public http://controller:8000/v1
+openstack endpoint create --region RegionOne cloudformation internal http://controller:8000/v1
+openstack endpoint create --region RegionOne cloudformation admin http://controller:8000/v1
+

4.创建stack管理所需的额外信息,包括heat domain及其对应domain的admin用户heat_domain_admin、heat_stack_owner角色和heat_stack_user角色

+
openstack user create --domain heat --password-prompt heat_domain_admin
+openstack role add --domain heat --user-domain heat --user heat_domain_admin admin
+openstack role create heat_stack_owner
+openstack role create heat_stack_user
+

5.安装软件包

+
yum install openstack-heat-api openstack-heat-api-cfn openstack-heat-engine
+

6.修改配置文件/etc/heat/heat.conf

+
[DEFAULT]
+transport_url = rabbit://openstack:RABBIT_PASS@controller
+heat_metadata_server_url = http://controller:8000
+heat_waitcondition_server_url = http://controller:8000/v1/waitcondition
+stack_domain_admin = heat_domain_admin
+stack_domain_admin_password = HEAT_DOMAIN_PASS
+stack_user_domain_name = heat
+
+[database]
+connection = mysql+pymysql://heat:HEAT_DBPASS@controller/heat
+
+[keystone_authtoken]
+www_authenticate_uri = http://controller:5000
+auth_url = http://controller:5000
+memcached_servers = controller:11211
+auth_type = password
+project_domain_name = default
+user_domain_name = default
+project_name = service
+username = heat
+password = HEAT_PASS
+
+[trustee]
+auth_type = password
+auth_url = http://controller:5000
+username = heat
+password = HEAT_PASS
+user_domain_name = default
+
+[clients_keystone]
+auth_uri = http://controller:5000
+

7.初始化heat数据库表

+
su -s /bin/sh -c "heat-manage db_sync" heat
+

8.启动服务

+
systemctl enable openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service
+systemctl start openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service
+

基于OpenStack SIG开发工具oos快速部署

+

oos(openEuler OpenStack SIG)是OpenStack SIG提供的命令行工具。其中oos env系列命令提供了一键部署OpenStack (all in one或三节点cluster)的ansible脚本,用户可以使用该脚本快速部署一套基于 openEuler RPM 的 OpenStack 环境。oos工具支持对接云provider(目前仅支持华为云provider)和主机纳管两种方式来部署 OpenStack 环境,下面以对接华为云部署一套all in one的OpenStack环境为例说明oos工具的使用方法。

+
    +
  1. +

    安装oos工具

    +
    pip install openstack-sig-tool
    +
  2. +
  3. +

    配置对接华为云provider的信息

    +

    打开/usr/local/etc/oos/oos.conf文件,修改配置为您拥有的华为云资源信息:

    +
    [huaweicloud]
    +ak = 
    +sk = 
    +region = ap-southeast-3
    +root_volume_size = 100
    +data_volume_size = 100
    +security_group_name = oos
    +image_format = openEuler-%%(release)s-%%(arch)s
    +vpc_name = oos_vpc
    +subnet1_name = oos_subnet1
    +subnet2_name = oos_subnet2
    +
  4. +
  5. +

    配置 OpenStack 环境信息

    +

    打开/usr/local/etc/oos/oos.conf文件,根据当前机器环境和需求修改配置。内容如下:

    +
    [environment]
    +mysql_root_password = root
    +mysql_project_password = root
    +rabbitmq_password = root
    +project_identity_password = root
    +enabled_service = keystone,neutron,cinder,placement,nova,glance,horizon,aodh,ceilometer,cyborg,gnocchi,kolla,heat,swift,trove,tempest
    +neutron_provider_interface_name = br-ex
    +default_ext_subnet_range = 10.100.100.0/24
    +default_ext_subnet_gateway = 10.100.100.1
    +neutron_dataplane_interface_name = eth1
    +cinder_block_device = vdb
    +swift_storage_devices = vdc
    +swift_hash_path_suffix = ash
    +swift_hash_path_prefix = has
    +glance_api_workers = 2
    +cinder_api_workers = 2
    +nova_api_workers = 2
    +nova_metadata_api_workers = 2
    +nova_conductor_workers = 2
    +nova_scheduler_workers = 2
    +neutron_api_workers = 2
    +horizon_allowed_host = *
    +kolla_openeuler_plugin = false
    +

    关键配置

    | 配置项 | 解释 |
    |:---|:---|
    | enabled_service | 安装服务列表,根据用户需求自行删减 |
    | neutron_provider_interface_name | neutron L3网桥名称 |
    | default_ext_subnet_range | neutron私网IP段 |
    | default_ext_subnet_gateway | neutron私网gateway |
    | neutron_dataplane_interface_name | neutron使用的网卡,推荐使用一张新的网卡,以免和现有网卡冲突,防止all in one主机断连的情况 |
    | cinder_block_device | cinder使用的卷设备名 |
    | swift_storage_devices | swift使用的卷设备名 |
    | kolla_openeuler_plugin | 是否启用kolla plugin。设置为True,kolla将支持部署openEuler容器 |
  6. +
  7. +

    在华为云上创建一台openEuler 22.03-LTS-SP4的x86_64虚拟机,用于部署all in one的OpenStack

    +
    # sshpass在`oos env create`过程中被使用,用于配置对目标虚拟机的免密访问
    +dnf install sshpass
    +oos env create -r 22.03-lts-sp4 -f small -a x86 -n test-oos all_in_one
    +

    具体的参数可以使用oos env create --help命令查看

    +
  8. +
  9. +

    部署OpenStack all in one 环境

    +

    oos env setup test-oos -r wallaby
    +具体的参数可以使用oos env setup --help命令查看

    +
  10. +
  11. +

    初始化tempest环境

    +

    如果用户想使用该环境运行tempest测试,可以执行命令oos env init,该命令会自动创建好tempest所需的OpenStack资源

    +
    oos env init test-oos
    +

    命令执行成功后,在用户的根目录下会生成mytest目录,进入其中就可以执行tempest run命令了。

    +
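    例如(示意,只运行冒烟用例):

    ```shell
    cd ~/mytest
    tempest run --smoke
    ```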
  12. +
+

如果是以主机纳管的方式部署 OpenStack 环境,总体逻辑与上文对接华为云时一致,1、3、5、6步操作不变,去除第2步对华为云provider信息的配置,第4步由在华为云上创建虚拟机改为纳管主机操作。

+
# sshpass在`oos env create`过程中被使用,用于配置对目标主机的免密访问
+dnf install sshpass
+oos env manage -r 22.03-lts-sp4 -i TARGET_MACHINE_IP -p TARGET_MACHINE_PASSWD -n test-oos
+

替换TARGET_MACHINE_IP为目标机ip、TARGET_MACHINE_PASSWD为目标机密码。具体的参数可以使用oos env manage --help命令查看。

diff --git a/site/install/openEuler-22.03-LTS/OpenStack-train/index.html b/site/install/openEuler-22.03-LTS/OpenStack-train/index.html
new file mode 100644
index 0000000000000000000000000000000000000000..be33084ab028912174ed28af0500cc8661bd11b0
--- /dev/null
+++ b/site/install/openEuler-22.03-LTS/OpenStack-train/index.html
@@ -0,0 +1,2417 @@
+openEuler-22.03-LTS_Train - OpenStack SIG Doc

OpenStack-Train 部署指南

+ +

OpenStack 简介

+

OpenStack 是一个社区,也是一个项目。它提供了一个部署云的操作平台或工具集,为组织提供可扩展的、灵活的云计算。

+

作为一个开源的云计算管理平台,OpenStack 由nova、cinder、neutron、glance、keystone、horizon等几个主要的组件组合起来完成具体工作。OpenStack 支持几乎所有类型的云环境,项目目标是提供实施简单、可大规模扩展、丰富、标准统一的云计算管理平台。OpenStack 通过各种互补的服务提供了基础设施即服务(IaaS)的解决方案,每个服务提供 API 进行集成。

+

openEuler 22.03-LTS版本官方源已经支持 OpenStack-Train 版本,用户可以配置好 yum 源后根据此文档进行 OpenStack 部署。

+

约定

+

OpenStack 支持多种形态部署,此文档支持ALL in One以及Distributed两种部署方式,按照如下方式约定:

+

ALL in One模式:

+
忽略所有可能的后缀
+

Distributed模式:

+
以 `(CTL)` 为后缀表示此条配置或者命令仅适用`控制节点`
+以 `(CPT)` 为后缀表示此条配置或者命令仅适用`计算节点`
+以 `(STG)` 为后缀表示此条配置或者命令仅适用`存储节点`
+除此之外表示此条配置或者命令同时适用`控制节点`和`计算节点`
+

注意

+

涉及到以上约定的服务如下:

+
    +
  • Cinder
  • +
  • Nova
  • +
  • Neutron
  • +
+

准备环境

+

环境配置

+
    +
  1. +

    启动OpenStack Train yum源

    +
    yum update
    +yum install openstack-release-train
    +yum clean all && yum makecache
    +

    注意:如果你的环境的YUM源没有启用EPOL,需要同时配置EPOL

    +
    vi /etc/yum.repos.d/openEuler.repo
    +
    +[EPOL]
    +name=EPOL
    +baseurl=http://repo.openeuler.org/openEuler-22.03-LTS/EPOL/main/$basearch/
    +enabled=1
    +gpgcheck=1
    +gpgkey=http://repo.openeuler.org/openEuler-22.03-LTS/OS/$basearch/RPM-GPG-KEY-openEuler
    +
  2. +
  3. +

    修改主机名以及映射

    +

    设置各个节点的主机名

    +
    hostnamectl set-hostname controller                                                            (CTL)
    +hostnamectl set-hostname compute                                                               (CPT)
    +

    假设controller节点的IP是10.0.0.11,compute节点的IP是10.0.0.12(如果存在的话),则于/etc/hosts新增如下:

    +
    10.0.0.11   controller
    +10.0.0.12   compute
    +
  4. +
+

安装 SQL DataBase

+
    +
  1. +

    执行如下命令,安装软件包。

    +
    yum install mariadb mariadb-server python3-PyMySQL
    +
  2. +
  3. +

    执行如下命令,创建并编辑 /etc/my.cnf.d/openstack.cnf 文件。

    +
    vim /etc/my.cnf.d/openstack.cnf
    +
    +[mysqld]
    +bind-address = 10.0.0.11
    +default-storage-engine = innodb
    +innodb_file_per_table = on
    +max_connections = 4096
    +collation-server = utf8_general_ci
    +character-set-server = utf8
    +

    注意

    +

    其中 bind-address 设置为控制节点的管理IP地址。

    +
  4. +
  5. +

    启动 DataBase 服务,并为其配置开机自启动:

    +
    systemctl enable mariadb.service
    +systemctl start mariadb.service
    +
  6. +
  7. +

    配置DataBase的默认密码(可选)

    +
    mysql_secure_installation
    +

    注意

    +

    根据提示进行即可

    +
  8. +
+

安装 RabbitMQ

+
    +
  1. +

    执行如下命令,安装软件包。

    +
    yum install rabbitmq-server
    +
  2. +
  3. +

    启动 RabbitMQ 服务,并为其配置开机自启动。

    +
    systemctl enable rabbitmq-server.service
    +systemctl start rabbitmq-server.service
    +
  4. +
  5. +

    添加 OpenStack用户。

    +
    rabbitmqctl add_user openstack RABBIT_PASS
    +

    注意

    +

    替换 RABBIT_PASS,为 OpenStack 用户设置密码

    +
  6. +
  7. +

    设置openstack用户权限,允许进行配置、写、读:

    +
    rabbitmqctl set_permissions openstack ".*" ".*" ".*"
    +
  8. +
+

安装 Memcached

+
    +
  1. +

    执行如下命令,安装依赖软件包。

    +
    yum install memcached python3-memcached
    +
  2. +
  3. +

    编辑 /etc/sysconfig/memcached 文件。

    +
    vim /etc/sysconfig/memcached
    +
    +OPTIONS="-l 127.0.0.1,::1,controller"
    +
  4. +
  5. +

    执行如下命令,启动 Memcached 服务,并为其配置开机启动。

    +
    systemctl enable memcached.service
    +systemctl start memcached.service
    +

    注意

    +

    服务启动后,可以通过命令memcached-tool controller stats确保启动正常,服务可用,其中可以将controller替换为控制节点的管理IP地址。

    +
  6. +
+

安装 OpenStack

+

Keystone 安装

+
    +
  1. +

    创建 keystone 数据库并授权。

    +
    mysql -u root -p
    +
    +MariaDB [(none)]> CREATE DATABASE keystone;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
    +IDENTIFIED BY 'KEYSTONE_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
    +IDENTIFIED BY 'KEYSTONE_DBPASS';
    +MariaDB [(none)]> exit
    +

    注意

    +

    替换 KEYSTONE_DBPASS,为 Keystone 数据库设置密码

    +
  2. +
  3. +

    安装软件包。

    +
    yum install openstack-keystone httpd mod_wsgi
    +
  4. +
  5. +

    配置keystone相关配置

    +
    vim /etc/keystone/keystone.conf
    +
    +[database]
    +connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone
    +
    +[token]
    +provider = fernet
    +

    解释

    +

    [database]部分,配置数据库入口

    +

    [token]部分,配置token provider

    +

    注意:

    +

    替换 KEYSTONE_DBPASS 为 Keystone 数据库的密码

    +
  6. +
  7. +

    同步数据库。

    +
    su -s /bin/sh -c "keystone-manage db_sync" keystone
    +
  8. +
  9. +

    初始化Fernet密钥仓库。

    +
    keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
    +keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
    +
  10. +
  11. +

    启动服务。

    +
    keystone-manage bootstrap --bootstrap-password ADMIN_PASS \
    +--bootstrap-admin-url http://controller:5000/v3/ \
    +--bootstrap-internal-url http://controller:5000/v3/ \
    +--bootstrap-public-url http://controller:5000/v3/ \
    +--bootstrap-region-id RegionOne
    +

    注意

    +

    替换 ADMIN_PASS,为 admin 用户设置密码

    +
  12. +
  13. +

    配置Apache HTTP server

    +
    vim /etc/httpd/conf/httpd.conf
    +
    +ServerName controller
    +
    ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
    +

    解释

    +

    配置 ServerName 项引用控制节点

    +

    注意:如果 ServerName 项不存在则需要创建

    +
  14. +
  15. +

    启动Apache HTTP服务。

    +
    systemctl enable httpd.service
    +systemctl start httpd.service
    +
  16. +
  17. +

    创建环境变量配置。

    +
    cat << EOF >> ~/.admin-openrc
    +export OS_PROJECT_DOMAIN_NAME=Default
    +export OS_USER_DOMAIN_NAME=Default
    +export OS_PROJECT_NAME=admin
    +export OS_USERNAME=admin
    +export OS_PASSWORD=ADMIN_PASS
    +export OS_AUTH_URL=http://controller:5000/v3
    +export OS_IDENTITY_API_VERSION=3
    +export OS_IMAGE_API_VERSION=2
    +EOF
    +

    注意

    +

    替换 ADMIN_PASS 为 admin 用户的密码

    +
  18. +
  19. +

    依次创建domain, projects, users, roles,需要先安装好python3-openstackclient:

    +
    yum install python3-openstackclient
    +

    导入环境变量

    +
    source ~/.admin-openrc
    +

    创建project service,其中 domain default 在 keystone-manage bootstrap 时已创建

    +
    openstack domain create --description "An Example Domain" example
    +
    openstack project create --domain default --description "Service Project" service
    +

    创建(non-admin)project myproject,user myuser 和 role myrole,为 myprojectmyuser 添加角色myrole

    +
    openstack project create --domain default --description "Demo Project" myproject
    +openstack user create --domain default --password-prompt myuser
    +openstack role create myrole
    +openstack role add --project myproject --user myuser myrole
    +
  20. +
  21. +

    验证

    +

    取消临时环境变量OS_AUTH_URL和OS_PASSWORD:

    +
    source ~/.admin-openrc
    +unset OS_AUTH_URL OS_PASSWORD
    +

    为admin用户请求token:

    +
    openstack --os-auth-url http://controller:5000/v3 \
    +--os-project-domain-name Default --os-user-domain-name Default \
    +--os-project-name admin --os-username admin token issue
    +

    为myuser用户请求token:

    +
    openstack --os-auth-url http://controller:5000/v3 \
    +--os-project-domain-name Default --os-user-domain-name Default \
    +--os-project-name myproject --os-username myuser token issue
    +
  22. +
+

Glance 安装

+
    +
  1. +

    创建数据库、服务凭证和 API 端点

    +

    创建数据库:

    +
    mysql -u root -p
    +
    +MariaDB [(none)]> CREATE DATABASE glance;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
    +IDENTIFIED BY 'GLANCE_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
    +IDENTIFIED BY 'GLANCE_DBPASS';
    +MariaDB [(none)]> exit
    +

    注意:

    +

    替换 GLANCE_DBPASS,为 glance 数据库设置密码

    +

    创建服务凭证

    +
    source ~/.admin-openrc
    +
    +openstack user create --domain default --password-prompt glance
    +openstack role add --project service --user glance admin
    +openstack service create --name glance --description "OpenStack Image" image
    +

    创建镜像服务API端点:

    +
    openstack endpoint create --region RegionOne image public http://controller:9292
    +openstack endpoint create --region RegionOne image internal http://controller:9292
    +openstack endpoint create --region RegionOne image admin http://controller:9292
    +
  2. +
  3. +

    安装软件包

    +
    yum install openstack-glance
    +
  4. +
  5. +

    配置glance相关配置:

    +
    vim /etc/glance/glance-api.conf
    +
    +[database]
    +connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
    +
    +[keystone_authtoken]
    +www_authenticate_uri  = http://controller:5000
    +auth_url = http://controller:5000
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +project_name = service
    +username = glance
    +password = GLANCE_PASS
    +
    +[paste_deploy]
    +flavor = keystone
    +
    +[glance_store]
    +stores = file,http
    +default_store = file
    +filesystem_store_datadir = /var/lib/glance/images/
    +

    解释:

    +

    [database]部分,配置数据库入口

    +

    [keystone_authtoken] [paste_deploy]部分,配置身份认证服务入口

    +

    [glance_store]部分,配置本地文件系统存储和镜像文件的位置

    +

    注意

    +

    替换 GLANCE_DBPASS 为 glance 数据库的密码

    +

    替换 GLANCE_PASS 为 glance 用户的密码

    +
  6. +
  7. +

    同步数据库:

    +
    su -s /bin/sh -c "glance-manage db_sync" glance
    +
  8. +
  9. +

    启动服务:

    +
    systemctl enable openstack-glance-api.service
    +systemctl start openstack-glance-api.service
    +
  10. +
  11. +

    验证

    +

    下载镜像

    +
    source ~/.admin-openrc
    +
    +wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
    +

    注意

    +

    如果您使用的环境是鲲鹏架构,请下载aarch64版本的镜像;已对镜像cirros-0.5.2-aarch64-disk.img进行测试。

    +

    向Image服务上传镜像:

    +
    openstack image create --disk-format qcow2 --container-format bare \
    +                       --file cirros-0.4.0-x86_64-disk.img --public cirros
    +

    确认镜像上传并验证属性:

    +
    openstack image list
    +
  12. +
+

Placement安装

+
    +
  1. +

    创建数据库、服务凭证和 API 端点

    +

    创建数据库:

    +

    作为 root 用户访问数据库,创建 placement 数据库并授权。

    +
    mysql -u root -p
    +MariaDB [(none)]> CREATE DATABASE placement;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' \
    +IDENTIFIED BY 'PLACEMENT_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' \
    +IDENTIFIED BY 'PLACEMENT_DBPASS';
    +MariaDB [(none)]> exit
    +

    注意

    +

    替换 PLACEMENT_DBPASS 为 placement 数据库设置密码

    +
    source admin-openrc
    +

    执行如下命令,创建 placement 服务凭证、创建 placement 用户以及添加‘admin’角色到用户‘placement’。

    +

    创建Placement API服务

    +
    openstack user create --domain default --password-prompt placement
    +openstack role add --project service --user placement admin
    +openstack service create --name placement --description "Placement API" placement
    +

    创建placement服务API端点:

    +
    openstack endpoint create --region RegionOne placement public http://controller:8778
    +openstack endpoint create --region RegionOne placement internal http://controller:8778
    +openstack endpoint create --region RegionOne placement admin http://controller:8778
    +
  2. +
  3. +

    安装和配置

    +

    安装软件包:

    +
    yum install openstack-placement-api
    +

    配置placement:

    +

    编辑 /etc/placement/placement.conf 文件:

    +

    在[placement_database]部分,配置数据库入口

    +

    在[api] [keystone_authtoken]部分,配置身份认证服务入口

    +
    # vim /etc/placement/placement.conf
    +[placement_database]
    +# ...
    +connection = mysql+pymysql://placement:PLACEMENT_DBPASS@controller/placement
    +[api]
    +# ...
    +auth_strategy = keystone
    +[keystone_authtoken]
    +# ...
    +auth_url = http://controller:5000/v3
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +project_name = service
    +username = placement
    +password = PLACEMENT_PASS
    +

    其中,替换 PLACEMENT_DBPASS 为 placement 数据库的密码,替换 PLACEMENT_PASS 为 placement 用户的密码。

    +

    同步数据库:

    +
    su -s /bin/sh -c "placement-manage db sync" placement
    +

    启动httpd服务:

    +
    systemctl restart httpd
    +
  4. +
  5. +

    验证

    +

    执行如下命令,执行状态检查:

    +
    . admin-openrc
    +placement-status upgrade check
    +

    安装osc-placement,列出可用的资源类别及特性:

    +
    yum install python3-osc-placement
    +openstack --os-placement-api-version 1.2 resource class list --sort-column name
    +openstack --os-placement-api-version 1.6 trait list --sort-column name
    +
  6. +
+

Nova 安装

+
    +
  1. +

    创建数据库、服务凭证和 API 端点

    +

    创建数据库:

    +
    mysql -u root -p                                                                               (CTL)
    +
    +MariaDB [(none)]> CREATE DATABASE nova_api;
    +MariaDB [(none)]> CREATE DATABASE nova;
    +MariaDB [(none)]> CREATE DATABASE nova_cell0;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> exit
    +

    注意

    +

    替换NOVA_DBPASS,为nova数据库设置密码

    +
    source ~/.admin-openrc                                                                         (CTL)
    +

    创建nova服务凭证:

    +
    openstack user create --domain default --password-prompt nova                                  (CTL)
    +openstack role add --project service --user nova admin                                         (CTL)
    +openstack service create --name nova --description "OpenStack Compute" compute                 (CTL)
    +

    创建nova API端点:

    +
    openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1        (CTL)
    +openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1      (CTL)
    +openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1         (CTL)
    +
  2. +
  3. +

    安装软件包

    +
    yum install openstack-nova-api openstack-nova-conductor \                                      (CTL)
    +openstack-nova-novncproxy openstack-nova-scheduler 
    +
    +yum install openstack-nova-compute                                                             (CPT)
    +

    注意

    +

    如果为arm64结构,还需要执行以下命令

    +
    yum install edk2-aarch64                                                                       (CPT)
    +
  4. +
  5. +

    配置nova相关配置

    +
    vim /etc/nova/nova.conf
    +
    +[DEFAULT]
    +enabled_apis = osapi_compute,metadata
    +transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
    +my_ip = 10.0.0.1
    +use_neutron = true
    +firewall_driver = nova.virt.firewall.NoopFirewallDriver
    +compute_driver=libvirt.LibvirtDriver                                                           (CPT)
    +instances_path = /var/lib/nova/instances/                                                      (CPT)
    +lock_path = /var/lib/nova/tmp                                                                  (CPT)
    +logdir = /var/log/nova/                                                                        (CPT)
    +
    +[api_database]
    +connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api                              (CTL)
    +
    +[database]
    +connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova                                  (CTL)
    +
    +[api]
    +auth_strategy = keystone
    +
    +[keystone_authtoken]
    +www_authenticate_uri = http://controller:5000/
    +auth_url = http://controller:5000/
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +project_name = service
    +username = nova
    +password = NOVA_PASS
    +
    +[vnc]
    +enabled = true
    +server_listen = $my_ip
    +server_proxyclient_address = $my_ip
    +novncproxy_base_url = http://controller:6080/vnc_auto.html                                     (CPT)
    +
    +[glance]
    +api_servers = http://controller:9292
    +
    +[oslo_concurrency]
    +lock_path = /var/lib/nova/tmp                                                                  (CTL)
    +
    +[placement]
    +region_name = RegionOne
    +project_domain_name = Default
    +project_name = service
    +auth_type = password
    +user_domain_name = Default
    +auth_url = http://controller:5000/v3
    +username = placement
    +password = PLACEMENT_PASS
    +
    +[neutron]
    +auth_url = http://controller:5000
    +auth_type = password
    +project_domain_name = default
    +user_domain_name = default
    +region_name = RegionOne
    +project_name = service
    +username = neutron
    +password = NEUTRON_PASS
    +service_metadata_proxy = true                                                                  (CTL)
    +metadata_proxy_shared_secret = METADATA_SECRET                                                 (CTL)
    +

    解释

    +

    [default]部分,启用计算和元数据的API,配置RabbitMQ消息队列入口,配置my_ip,启用网络服务neutron;

    +

    [api_database] [database]部分,配置数据库入口;

    +

    [api] [keystone_authtoken]部分,配置身份认证服务入口;

    +

    [vnc]部分,启用并配置远程控制台入口;

    +

    [glance]部分,配置镜像服务API的地址;

    +

    [oslo_concurrency]部分,配置lock path;

    +

    [placement]部分,配置placement服务的入口。

    +

    注意

    +

    替换 RABBIT_PASS 为 RabbitMQ 中 openstack 账户的密码;

    +

    配置 my_ip 为控制节点的管理IP地址;

    +

    替换 NOVA_DBPASS 为nova数据库的密码;

    +

    替换 NOVA_PASS 为nova用户的密码;

    +

    替换 PLACEMENT_PASS 为placement用户的密码;

    +

    替换 NEUTRON_PASS 为neutron用户的密码;

    +

    替换METADATA_SECRET为合适的元数据代理secret。

    +

    额外

    +

    确定是否支持虚拟机硬件加速(x86架构):

    +
    egrep -c '(vmx|svm)' /proc/cpuinfo                                                             (CPT)
    +

    如果返回值为0则不支持硬件加速,需要配置libvirt使用QEMU而不是KVM:

    +
    vim /etc/nova/nova.conf                                                                        (CPT)
    +
    +[libvirt]
    +virt_type = qemu
    +

    如果返回值为1或更大的值,则支持硬件加速,则virt_type可以配置为kvm

    +

    注意

    +

    如果为arm64结构,还需要在计算节点执行以下命令

    +
    
    +mkdir -p /usr/share/AAVMF
    +chown nova:nova /usr/share/AAVMF
    +
    +ln -s /usr/share/edk2/aarch64/QEMU_EFI-pflash.raw \
    +      /usr/share/AAVMF/AAVMF_CODE.fd
    +ln -s /usr/share/edk2/aarch64/vars-template-pflash.raw \
    +      /usr/share/AAVMF/AAVMF_VARS.fd
    +
    +vim /etc/libvirt/qemu.conf
    +
    +nvram = ["/usr/share/AAVMF/AAVMF_CODE.fd: \
    +         /usr/share/AAVMF/AAVMF_VARS.fd", \
    +         "/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw: \
    +         /usr/share/edk2/aarch64/vars-template-pflash.raw"]
    +

    并且当ARM架构下的部署环境为嵌套虚拟化时,libvirt配置如下:

    +
    [libvirt]
    +virt_type = qemu
    +cpu_mode = custom
    +cpu_model = cortex-a72
    +
  6. +
  7. +

    同步数据库

    +

    同步nova-api数据库:

    +
    su -s /bin/sh -c "nova-manage api_db sync" nova                                                (CTL)
    +

    注册cell0数据库:

    +
    su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova                                          (CTL)
    +

    创建cell1 cell:

    +
    su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova                 (CTL)
    +

    同步nova数据库:

    +
    su -s /bin/sh -c "nova-manage db sync" nova                                                    (CTL)
    +

    验证cell0和cell1注册正确:

    +
    su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova                                         (CTL)
    +

    添加计算节点到openstack集群

    +
    su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova                           (CTL)
    +
  8. +
  9. +

    启动服务

    +
    systemctl enable \                                                                             (CTL)
    +openstack-nova-api.service \
    +openstack-nova-scheduler.service \
    +openstack-nova-conductor.service \
    +openstack-nova-novncproxy.service
    +
    +systemctl start \                                                                              (CTL)
    +openstack-nova-api.service \
    +openstack-nova-scheduler.service \
    +openstack-nova-conductor.service \
    +openstack-nova-novncproxy.service
    +
    systemctl enable libvirtd.service openstack-nova-compute.service                               (CPT)
    +systemctl start libvirtd.service openstack-nova-compute.service                                (CPT)
    +
  10. +
  11. +

    验证

    +
    source ~/.admin-openrc                                                                         (CTL)
    +

    列出服务组件,验证每个流程都成功启动和注册:

    +
    openstack compute service list                                                                 (CTL)
    +

    列出身份服务中的API端点,验证与身份服务的连接:

    +
    openstack catalog list                                                                         (CTL)
    +

    列出镜像服务中的镜像,验证与镜像服务的连接:

    +
    openstack image list                                                                           (CTL)
    +

    检查cells是否运作成功,以及其他必要条件是否已具备。

    +
    nova-status upgrade check                                                                      (CTL)
    +
  12. +
+

Neutron 安装

+
    +
  1. +

    创建数据库、服务凭证和 API 端点

    +

    创建数据库:

    +
    mysql -u root -p                                                                               (CTL)
    +
    +MariaDB [(none)]> CREATE DATABASE neutron;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
    +IDENTIFIED BY 'NEUTRON_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
    +IDENTIFIED BY 'NEUTRON_DBPASS';
    +MariaDB [(none)]> exit
    +

    注意

    +

    替换 NEUTRON_DBPASS 为 neutron 数据库设置密码。

    +
    source ~/.admin-openrc                                                                         (CTL)
    +

    创建neutron服务凭证

    +
    openstack user create --domain default --password-prompt neutron                               (CTL)
    +openstack role add --project service --user neutron admin                                      (CTL)
    +openstack service create --name neutron --description "OpenStack Networking" network           (CTL)
    +

    创建Neutron服务API端点:

    +
    openstack endpoint create --region RegionOne network public http://controller:9696             (CTL)
    +openstack endpoint create --region RegionOne network internal http://controller:9696           (CTL)
    +openstack endpoint create --region RegionOne network admin http://controller:9696              (CTL)
    +
  2. +
  3. +

    安装软件包:

    +
    yum install openstack-neutron openstack-neutron-linuxbridge ebtables ipset \                   (CTL)
    +openstack-neutron-ml2
    +
    yum install openstack-neutron-linuxbridge ebtables ipset                                       (CPT)
    +
  4. +
  5. +

    配置neutron相关配置:

    +

    配置主体配置

    +
    vim /etc/neutron/neutron.conf
    +
    +[database]
    +connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron                         (CTL)
    +
    +[DEFAULT]
    +core_plugin = ml2                                                                              (CTL)
    +service_plugins = router                                                                       (CTL)
    +allow_overlapping_ips = true                                                                   (CTL)
    +transport_url = rabbit://openstack:RABBIT_PASS@controller
    +auth_strategy = keystone
    +notify_nova_on_port_status_changes = true                                                      (CTL)
    +notify_nova_on_port_data_changes = true                                                        (CTL)
    +api_workers = 3                                                                                (CTL)
    +
    +[keystone_authtoken]
    +www_authenticate_uri = http://controller:5000
    +auth_url = http://controller:5000
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +project_name = service
    +username = neutron
    +password = NEUTRON_PASS
    +
    +[nova]
    +auth_url = http://controller:5000                                                              (CTL)
    +auth_type = password                                                                           (CTL)
    +project_domain_name = Default                                                                  (CTL)
    +user_domain_name = Default                                                                     (CTL)
    +region_name = RegionOne                                                                        (CTL)
    +project_name = service                                                                         (CTL)
    +username = nova                                                                                (CTL)
    +password = NOVA_PASS                                                                           (CTL)
    +
    +[oslo_concurrency]
    +lock_path = /var/lib/neutron/tmp
    +

    解释

    +

    [database]部分,配置数据库入口;

    +

    [default]部分,启用ml2插件和router插件,允许ip地址重叠,配置RabbitMQ消息队列入口;

    +

    [default] [keystone]部分,配置身份认证服务入口;

    +

    [default] [nova]部分,配置网络来通知计算网络拓扑的变化;

    +

    [oslo_concurrency]部分,配置lock path。

    +

    注意

    +

    替换NEUTRON_DBPASS为 neutron 数据库的密码;

    +

    替换RABBIT_PASS为 RabbitMQ中openstack 账户的密码;

    +

    替换NEUTRON_PASS为 neutron 用户的密码;

    +

    替换NOVA_PASS为 nova 用户的密码。

    +

    配置ML2插件:

    +
    vim /etc/neutron/plugins/ml2/ml2_conf.ini
    +
    +[ml2]
    +type_drivers = flat,vlan,vxlan
    +tenant_network_types = vxlan
    +mechanism_drivers = linuxbridge,l2population
    +extension_drivers = port_security
    +
    +[ml2_type_flat]
    +flat_networks = provider
    +
    +[ml2_type_vxlan]
    +vni_ranges = 1:1000
    +
    +[securitygroup]
    +enable_ipset = true
    +

    创建/etc/neutron/plugin.ini的符号链接

    +
    ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
    +

    注意

    +

    [ml2]部分,启用 flat、vlan、vxlan 网络,启用 linuxbridge 及 l2population 机制,启用端口安全扩展驱动;

    +

    [ml2_type_flat]部分,配置 flat 网络为 provider 虚拟网络;

    +

    [ml2_type_vxlan]部分,配置 VXLAN 网络标识符范围;

    +

    [securitygroup]部分,配置允许 ipset。

    +

    补充

    +

    l2 的具体配置可以根据用户需求自行修改,本文使用的是provider network + linuxbridge

    +

    配置 Linux bridge 代理:

    +
    vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
    +
    +[linux_bridge]
    +physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME
    +
    +[vxlan]
    +enable_vxlan = true
    +local_ip = OVERLAY_INTERFACE_IP_ADDRESS
    +l2_population = true
    +
    +[securitygroup]
    +enable_security_group = true
    +firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
    +

    解释

    +

    [linux_bridge]部分,映射 provider 虚拟网络到物理网络接口;

    +

    [vxlan]部分,启用 vxlan 覆盖网络,配置处理覆盖网络的物理网络接口 IP 地址,启用 layer-2 population;

    +

    [securitygroup]部分,允许安全组,配置 linux bridge iptables 防火墙驱动。

    +

    注意

    +

    替换PROVIDER_INTERFACE_NAME为物理网络接口;

    +

    替换OVERLAY_INTERFACE_IP_ADDRESS为控制节点的管理IP地址。

    +

    配置Layer-3代理:

    +
    vim /etc/neutron/l3_agent.ini                                                                  (CTL)
    +
    +[DEFAULT]
    +interface_driver = linuxbridge
    +

    解释

    +

    在[default]部分,配置接口驱动为linuxbridge

    +

    配置DHCP代理:

    +
    vim /etc/neutron/dhcp_agent.ini                                                                (CTL)
    +
    +[DEFAULT]
    +interface_driver = linuxbridge
    +dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
    +enable_isolated_metadata = true
    +

    解释

    +

    [default]部分,配置linuxbridge接口驱动、Dnsmasq DHCP驱动,启用隔离的元数据。

    +

    配置metadata代理:

    +
    vim /etc/neutron/metadata_agent.ini                                                            (CTL)
    +
    +[DEFAULT]
    +nova_metadata_host = controller
    +metadata_proxy_shared_secret = METADATA_SECRET
    +

    解释

    +

    [default]部分,配置元数据主机和shared secret。

    +

    注意

    +

    替换METADATA_SECRET为合适的元数据代理secret。

    +
  6. +
  7. +

    配置nova相关配置

    +
    vim /etc/nova/nova.conf
    +
    +[neutron]
    +auth_url = http://controller:5000
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +region_name = RegionOne
    +project_name = service
    +username = neutron
    +password = NEUTRON_PASS
    +service_metadata_proxy = true                                                                  (CTL)
    +metadata_proxy_shared_secret = METADATA_SECRET                                                 (CTL)
    +

    解释

    +

    [neutron]部分,配置访问参数,启用元数据代理,配置secret。

    +

    注意

    +

    替换NEUTRON_PASS为 neutron 用户的密码;

    +

    替换METADATA_SECRET为合适的元数据代理secret。

    +
  8. +
  9. +

    同步数据库:

    +
    su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
    +--config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
    +
  10. +
  11. +

    重启计算API服务:

    +
    systemctl restart openstack-nova-api.service
    +
  12. +
  13. +

    启动网络服务

    +
    systemctl enable neutron-server.service neutron-linuxbridge-agent.service \                    (CTL)
    +neutron-dhcp-agent.service neutron-metadata-agent.service \
    +neutron-l3-agent.service
    +
    +systemctl restart neutron-server.service neutron-linuxbridge-agent.service \                   (CTL)
    +neutron-dhcp-agent.service neutron-metadata-agent.service \
    +neutron-l3-agent.service
    +
    +systemctl enable neutron-linuxbridge-agent.service                                             (CPT)
    +systemctl restart neutron-linuxbridge-agent.service openstack-nova-compute.service             (CPT)
    +
  14. +
  15. +

    验证

    +

    验证 neutron 代理启动成功:

    +
    openstack network agent list
    +
  16. +
+

Cinder 安装

+
    +
  1. +

    创建数据库、服务凭证和 API 端点

    +

    创建数据库:

    +
    mysql -u root -p
    +
    +MariaDB [(none)]> CREATE DATABASE cinder;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \
    +IDENTIFIED BY 'CINDER_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \
    +IDENTIFIED BY 'CINDER_DBPASS';
    +MariaDB [(none)]> exit
    +

    注意

    +

    替换 CINDER_DBPASS 为cinder数据库设置密码。

    +
    source ~/.admin-openrc
    +

    创建cinder服务凭证:

    +
    openstack user create --domain default --password-prompt cinder
    +openstack role add --project service --user cinder admin
    +openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
    +openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
    +

    创建块存储服务API端点:

    +
    openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(project_id\)s
    +openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(project_id\)s
    +openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(project_id\)s
    +openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s
    +openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s
    +openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s
    +
  2. +
  3. +

    安装软件包:

    +
    yum install openstack-cinder-api openstack-cinder-scheduler                                    (CTL)
    +
    yum install lvm2 device-mapper-persistent-data scsi-target-utils rpcbind nfs-utils \           (STG)
    +            openstack-cinder-volume openstack-cinder-backup
    +
  4. +
  5. +

    准备存储设备,以下仅为示例:

    +
    pvcreate /dev/vdb
    +vgcreate cinder-volumes /dev/vdb
    +
    +vim /etc/lvm/lvm.conf
    +
    +
    +devices {
    +...
    +filter = [ "a/vdb/", "r/.*/"]
    +

    解释

    +

    在devices部分,添加过滤以接受/dev/vdb设备拒绝其他设备。

    +
  6. +
  7. +

    准备NFS

    +
    mkdir -p /root/cinder/backup
    +
    +cat << EOF >> /etc/exports
    +/root/cinder/backup 192.168.1.0/24(rw,sync,no_root_squash,no_all_squash)
    +EOF
    +
    +
  8. +
  9. +

    配置cinder相关配置:

    +
    vim /etc/cinder/cinder.conf
    +
    +[DEFAULT]
    +transport_url = rabbit://openstack:RABBIT_PASS@controller
    +auth_strategy = keystone
    +my_ip = 10.0.0.11
    +enabled_backends = lvm                                                                         (STG)
    +backup_driver=cinder.backup.drivers.nfs.NFSBackupDriver                                        (STG)
    +backup_share=HOST:PATH                                                                         (STG)
    +
    +[database]
    +connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder
    +
    +[keystone_authtoken]
    +www_authenticate_uri = http://controller:5000
    +auth_url = http://controller:5000
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +project_name = service
    +username = cinder
    +password = CINDER_PASS
    +
    +[oslo_concurrency]
    +lock_path = /var/lib/cinder/tmp
    +
    +[lvm]
    +volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver                                      (STG)
    +volume_group = cinder-volumes                                                                  (STG)
    +iscsi_protocol = iscsi                                                                         (STG)
    +iscsi_helper = tgtadm                                                                          (STG)
    +

    解释

    +

    [database]部分,配置数据库入口;

    +

    [DEFAULT]部分,配置RabbitMQ消息队列入口,配置my_ip;

    +

    [DEFAULT] [keystone_authtoken]部分,配置身份认证服务入口;

    +

    [oslo_concurrency]部分,配置lock path。

    +

    注意

    +

    替换CINDER_DBPASS为 cinder 数据库的密码;

    +

    替换RABBIT_PASS为 RabbitMQ 中 openstack 账户的密码;

    +

    配置my_ip为控制节点的管理 IP 地址;

    +

    替换CINDER_PASS为 cinder 用户的密码;

    +

    替换HOST:PATH为 NFS 的HOSTIP和共享路径;

    +
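    例如(示意,假设NFS服务端为控制节点10.0.0.11,共享目录为前面创建的/root/cinder/backup):

    ```shell
    backup_share = 10.0.0.11:/root/cinder/backup
    ```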
  10. +
  11. +

    同步数据库:

    +
    su -s /bin/sh -c "cinder-manage db sync" cinder                                                (CTL)
    +
  12. +
  13. +

    配置nova:

    +
    vim /etc/nova/nova.conf                                                                        (CTL)
    +
    +[cinder]
    +os_region_name = RegionOne
    +
  14. +
  15. +

    重启计算API服务

    +
    systemctl restart openstack-nova-api.service
    +
  16. +
  17. +

    启动cinder服务

    +
    systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service               (CTL)
    +systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service                (CTL)
    +
    systemctl enable rpcbind.service nfs-server.service tgtd.service iscsid.service \              (STG)
    +                 openstack-cinder-volume.service \
    +                 openstack-cinder-backup.service
    +systemctl start rpcbind.service nfs-server.service tgtd.service iscsid.service \               (STG)
    +                openstack-cinder-volume.service \
    +                openstack-cinder-backup.service
    +

    注意

    +

    当cinder使用tgtadm的方式挂卷的时候,要修改/etc/tgt/tgtd.conf,内容如下,保证tgtd可以发现cinder-volume的iscsi target。

    +
    include /var/lib/cinder/volumes/*
    +
  18. +
  19. +

    验证

    +
    source ~/.admin-openrc
    +openstack volume service list
    +
  20. +
+

horizon 安装

+
    +
  1. +

    安装软件包

    +
    yum install openstack-dashboard
    +
  2. +
  3. +

    修改文件

    +

    修改变量

    +
    vim /etc/openstack-dashboard/local_settings
    +
    +OPENSTACK_HOST = "controller"
    +ALLOWED_HOSTS = ['*', ]
    +
    +SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
    +
    +CACHES = {
    +'default': {
    +     'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
    +     'LOCATION': 'controller:11211',
    +    }
    +}
    +
    +OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
    +OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
    +OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
    +OPENSTACK_KEYSTONE_DEFAULT_ROLE = "member"
    +WEBROOT = '/dashboard'
    +POLICY_FILES_PATH = "/etc/openstack-dashboard"
    +
    +OPENSTACK_API_VERSIONS = {
    +    "identity": 3,
    +    "image": 2,
    +    "volume": 3,
    +}
    +
  4. +
  5. +

    重启 httpd 服务

    +
    systemctl restart httpd.service memcached.service
    +
  6. +
  7. +

    验证:打开浏览器,输入网址http://HOSTIP/dashboard/,登录 horizon。

    +

    注意

    +

    替换HOSTIP为控制节点管理平面IP地址

    +
  8. +
+
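As an optional quick check of the verification step above (a sketch, not part of the original guide), an HTTP request to the dashboard should return 200 once horizon is up; HOSTIP is the same placeholder as above.

```shell
# optional: verify that horizon answers over HTTP (expects 200)
curl -sL -o /dev/null -w "%{http_code}\n" http://HOSTIP/dashboard/
```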

Tempest 安装

+

Tempest是OpenStack的集成测试服务,如果用户需要全面自动化测试已安装的OpenStack环境的功能,则推荐使用该组件。否则,可以不用安装。

+
    +
  1. +

    安装Tempest

    +
    yum install openstack-tempest
    +
  2. +
  3. +

    初始化目录

    +
    tempest init mytest
    +
  4. +
  5. +

    修改配置文件。

    +
    cd mytest
    +vi etc/tempest.conf
    +

tempest.conf must be filled in with the details of the current OpenStack environment; refer to the official sample for the full set of options (a minimal sketch is also given after this list).

    +
  6. +
  7. +

    执行测试

    +
    tempest run
    +
  8. +
  9. +

Install tempest extensions (optional). The individual OpenStack services also provide their own tempest test packages, which can be installed to enrich the tempest test set. In Train, extension tests are provided for Cinder, Glance, Keystone, Ironic, and Trove; they can be installed with the following command:

    yum install python3-cinder-tempest-plugin python3-glance-tempest-plugin python3-ironic-tempest-plugin python3-keystone-tempest-plugin python3-trove-tempest-plugin

    +
  10. +
+
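As referenced in the configuration step above, here is a minimal, hedged sketch of what tempest.conf typically contains. All values (URLs, UUIDs, passwords) are placeholders and assumptions that must be adapted to the actual environment; the official sample remains the authoritative reference.

```shell
vi etc/tempest.conf

[auth]
admin_username = admin
admin_password = ADMIN_PASS
admin_project_name = admin
admin_domain_name = Default

[identity]
uri_v3 = http://controller:5000/v3

[compute]
image_ref = <IMAGE_UUID>
flavor_ref = <FLAVOR_ID>

[network]
public_network_id = <PUBLIC_NETWORK_UUID>
```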

Ironic 安装

+

Ironic是OpenStack的裸金属服务,如果用户需要进行裸机部署则推荐使用该组件。否则,可以不用安装。

+
    +
  1. 设置数据库
  2. +
+

裸金属服务在数据库中存储信息,创建一个ironic用户可以访问的ironic数据库,替换IRONIC_DBPASSWORD为合适的密码

+

mysql -u root -p
+
+MariaDB [(none)]> CREATE DATABASE ironic CHARACTER SET utf8;
+MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'localhost' \
+IDENTIFIED BY 'IRONIC_DBPASSWORD';
+MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'%' \
+IDENTIFIED BY 'IRONIC_DBPASSWORD';
+2. 安装软件包

+
yum install openstack-ironic-api openstack-ironic-conductor python3-ironicclient
+

启动服务

+
systemctl enable openstack-ironic-api openstack-ironic-conductor
+systemctl start openstack-ironic-api openstack-ironic-conductor
+
    +
  1. 创建服务用户认证
  2. +
+

1、创建Bare Metal服务用户

+
openstack user create --password IRONIC_PASSWORD \
+                      --email ironic@example.com ironic
+openstack role add --project service --user ironic admin
+openstack service create --name ironic \
+                         --description "Ironic baremetal provisioning service" baremetal
+

2、创建Bare Metal服务访问入口

+
openstack endpoint create --region RegionOne baremetal admin http://$IRONIC_NODE:6385
+openstack endpoint create --region RegionOne baremetal public http://$IRONIC_NODE:6385
+openstack endpoint create --region RegionOne baremetal internal http://$IRONIC_NODE:6385
+
    +
  1. 配置ironic-api服务
  2. +
+

配置文件路径/etc/ironic/ironic.conf

+

1. Configure the location of the database via the connection option, as shown below. Replace IRONIC_DBPASSWORD with the password of the ironic user and DB_IP with the IP address of the DB server:

+
[database]
+
+# The SQLAlchemy connection string used to connect to the
+# database (string value)
+
+connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic
+

2、通过以下选项配置ironic-api服务使用RabbitMQ消息代理,替换RPC_*为RabbitMQ的详细地址和凭证

+
[DEFAULT]
+
+# A URL representing the messaging driver to use and its full
+# configuration. (string value)
+
+transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
+

用户也可自行使用json-rpc方式替换rabbitmq

+
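If JSON-RPC is used instead of RabbitMQ, the switch is made in ironic.conf. The snippet below is only a hedged sketch based on ironic's json-rpc support; option names and defaults should be verified against the installed ironic version.

```shell
[DEFAULT]
rpc_transport = json-rpc

[json_rpc]
# address the JSON-RPC server listens on (assumption: the conductor host IP)
host_ip = CONDUCTOR_HOST_IP
port = 8089
```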

3、配置ironic-api服务使用身份认证服务的凭证,替换PUBLIC_IDENTITY_IP为身份认证服务器的公共IP,替换PRIVATE_IDENTITY_IP为身份认证服务器的私有IP,替换IRONIC_PASSWORD为身份认证服务中ironic用户的密码:

+
[DEFAULT]
+
+# Authentication strategy used by ironic-api: one of
+# "keystone" or "noauth". "noauth" should not be used in a
+# production environment because all authentication will be
+# disabled. (string value)
+
+auth_strategy=keystone
+
+[keystone_authtoken]
+# Authentication type to load (string value)
+auth_type=password
+# Complete public Identity API endpoint (string value)
+www_authenticate_uri=http://PUBLIC_IDENTITY_IP:5000
+# Complete admin Identity API endpoint. (string value)
+auth_url=http://PRIVATE_IDENTITY_IP:5000
+# Service username. (string value)
+username=ironic
+# Service account password. (string value)
+password=IRONIC_PASSWORD
+# Service tenant name. (string value)
+project_name=service
+# Domain name containing project (string value)
+project_domain_name=Default
+# User's domain name (string value)
+user_domain_name=Default
+
+

4、创建裸金属服务数据库表

+
ironic-dbsync --config-file /etc/ironic/ironic.conf create_schema
+

5、重启ironic-api服务

+
sudo systemctl restart openstack-ironic-api
+
    +
  1. 配置ironic-conductor服务
  2. +
+

1、替换HOST_IP为conductor host的IP

+
[DEFAULT]
+
+# IP address of this host. If unset, will determine the IP
+# programmatically. If unable to do so, will use "127.0.0.1".
+# (string value)
+
+my_ip=HOST_IP
+

2. Configure the location of the database; ironic-conductor should use the same configuration as ironic-api. Replace IRONIC_DBPASSWORD with the password of the ironic user and DB_IP with the IP address of the DB server:

+
[database]
+
+# The SQLAlchemy connection string to use to connect to the
+# database. (string value)
+
+connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic
+

3. Configure the ironic-conductor service to use the RabbitMQ message broker via the options below; ironic-conductor should use the same configuration as ironic-api. Replace RPC_* with the RabbitMQ address details and credentials:

+
[DEFAULT]
+
+# A URL representing the messaging driver to use and its full
+# configuration. (string value)
+
+transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
+

用户也可自行使用json-rpc方式替换rabbitmq

+

4、配置凭证访问其他OpenStack服务

+

为了与其他OpenStack服务进行通信,裸金属服务在请求其他服务时需要使用服务用户与OpenStack Identity服务进行认证。这些用户的凭据必须在与相应服务相关的每个配置文件中进行配置。

+
[neutron] - 访问OpenStack网络服务
+[glance] - 访问OpenStack镜像服务
+[swift] - 访问OpenStack对象存储服务
+[cinder] - 访问OpenStack块存储服务
+[inspector] - 访问OpenStack裸金属introspection服务
+[service_catalog] - 一个特殊项用于保存裸金属服务使用的凭证,该凭证用于发现注册在OpenStack身份认证服务目录中的自己的API URL端点
+

简单起见,可以对所有服务使用同一个服务用户。为了向后兼容,该用户应该和ironic-api服务的[keystone_authtoken]所配置的为同一个用户。但这不是必须的,也可以为每个服务创建并配置不同的服务用户。

+

在下面的示例中,用户访问OpenStack网络服务的身份验证信息配置为:

+
网络服务部署在名为RegionOne的身份认证服务域中,仅在服务目录中注册公共端点接口
+
+请求时使用特定的CA SSL证书进行HTTPS连接
+
+与ironic-api服务配置相同的服务用户
+
+动态密码认证插件基于其他选项发现合适的身份认证服务API版本
+
[neutron]
+
+# Authentication type to load (string value)
+auth_type = password
+# Authentication URL (string value)
+auth_url=https://IDENTITY_IP:5000/
+# Username (string value)
+username=ironic
+# User's password (string value)
+password=IRONIC_PASSWORD
+# Project name to scope to (string value)
+project_name=service
+# Domain ID containing project (string value)
+project_domain_id=default
+# User's domain id (string value)
+user_domain_id=default
+# PEM encoded Certificate Authority to use when verifying
+# HTTPs connections. (string value)
+cafile=/opt/stack/data/ca-bundle.pem
+# The default region_name for endpoint URL discovery. (string
+# value)
+region_name = RegionOne
+# List of interfaces, in order of preference, for endpoint
+# URL. (list value)
+valid_interfaces=public
+

默认情况下,为了与其他服务进行通信,裸金属服务会尝试通过身份认证服务的服务目录发现该服务合适的端点。如果希望对一个特定服务使用一个不同的端点,则在裸金属服务的配置文件中通过endpoint_override选项进行指定:

+
[neutron] ... endpoint_override = <NEUTRON_API_ADDRESS>
+

5、配置允许的驱动程序和硬件类型

+

通过设置enabled_hardware_types设置ironic-conductor服务允许使用的硬件类型:

+
[DEFAULT] enabled_hardware_types = ipmi
+

配置硬件接口:

+
enabled_boot_interfaces = pxe enabled_deploy_interfaces = direct,iscsi enabled_inspect_interfaces = inspector enabled_management_interfaces = ipmitool enabled_power_interfaces = ipmitool
+

配置接口默认值:

+
[DEFAULT] default_deploy_interface = direct default_network_interface = neutron
+

如果启用了任何使用Direct deploy的驱动,必须安装和配置镜像服务的Swift后端。Ceph对象网关(RADOS网关)也支持作为镜像服务的后端。

+

6、重启ironic-conductor服务

+
sudo systemctl restart openstack-ironic-conductor
+
    +
  1. +

    配置httpd服务

    +
  2. +
  3. +

Create the httpd root directory to be used by ironic and set its owner and group. The directory path must match the path specified by the http_root option in the [deploy] section of /etc/ironic/ironic.conf.

    +
mkdir -p /var/lib/ironic/httproot
chown ironic.ironic /var/lib/ironic/httproot
    +
  4. +
  5. +

    安装和配置httpd服务

    +
      +
    1. +

      安装httpd服务,已有请忽略

      +

      yum install httpd -y
2. Create the /etc/httpd/conf.d/openstack-ironic-httpd.conf file with the following content:

      +
      Listen 8080
      +
      +<VirtualHost *:8080>
      +    ServerName ironic.openeuler.com
      +
      +    ErrorLog "/var/log/httpd/openstack-ironic-httpd-error_log"
      +    CustomLog "/var/log/httpd/openstack-ironic-httpd-access_log" "%h %l %u %t \"%r\" %>s %b"
      +
      +    DocumentRoot "/var/lib/ironic/httproot"
      +    <Directory "/var/lib/ironic/httproot">
      +        Options Indexes FollowSymLinks
      +        Require all granted
      +    </Directory>
      +    LogLevel warn
      +    AddDefaultCharset UTF-8
      +    EnableSendfile on
      +</VirtualHost>
      +
      +

      注意监听的端口要和/etc/ironic/ironic.conf里[deploy]选项中http_url配置项中指定的端口一致。

      +
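For reference, a hedged sketch of the related [deploy] options in /etc/ironic/ironic.conf; the port and path are assumptions and must match the httpd configuration above (8080 and /var/lib/ironic/httproot in this guide).

```shell
[deploy]
# must match the port the ironic httpd vhost listens on
http_url = http://CONDUCTOR_HOST_IP:8080
# must match the DocumentRoot created above
http_root = /var/lib/ironic/httproot
```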
    2. +
    3. +

      重启httpd服务。

      +

      systemctl restart httpd
7. Build the deploy ramdisk image

      +
    4. +
    +
  6. +
+

The Train ramdisk image can be built with the ironic-python-agent service or the disk-image-builder tool, or with the community's latest ironic-python-agent-builder; users may also choose another tool of their own. If the native Train tools are used, install the corresponding packages:

+
yum install openstack-ironic-python-agent
+或者
+yum install diskimage-builder
+

具体的使用方法可以参考官方文档

+

这里介绍下使用ironic-python-agent-builder构建ironic使用的deploy镜像的完整过程。

+
    +
  1. +

    安装 ironic-python-agent-builder

    +
    1. 安装工具:
    +
    +    ```shell
    +    pip install ironic-python-agent-builder
    +    ```
    +
    +2. 修改以下文件中的python解释器:
    +
    +    ```shell
    +    /usr/bin/yum /usr/libexec/urlgrabber-ext-down
    +    ```
    +
    +3. 安装其它必须的工具:
    +
    +    ```shell
    +    yum install git
    +    ```
    +
    +    由于`DIB`依赖`semanage`命令,所以在制作镜像之前确定该命令是否可用:`semanage --help`,如果提示无此命令,安装即可:
    +
    +    ```shell
    +    # 先查询需要安装哪个包
    +    [root@localhost ~]# yum provides /usr/sbin/semanage
    +    已加载插件:fastestmirror
    +    Loading mirror speeds from cached hostfile
    +    * base: mirror.vcu.edu
    +    * extras: mirror.vcu.edu
    +    * updates: mirror.math.princeton.edu
    +    policycoreutils-python-2.5-34.el7.aarch64 : SELinux policy core python utilities
    +    源    :base
    +    匹配来源:
    +    文件名    :/usr/sbin/semanage
    +    # 安装
    +    [root@localhost ~]# yum install policycoreutils-python
    +    ```
    +
  2. +
  3. +

    制作镜像

    +
    如果是`arm`架构,需要添加:
    +```shell
    +export ARCH=aarch64
    +```
    +
    +基本用法:
    +
    +```shell
    +usage: ironic-python-agent-builder [-h] [-r RELEASE] [-o OUTPUT] [-e ELEMENT]
    +                                    [-b BRANCH] [-v] [--extra-args EXTRA_ARGS]
    +                                    distribution
    +
    +positional arguments:
    +    distribution          Distribution to use
    +
    +optional arguments:
    +    -h, --help            show this help message and exit
    +    -r RELEASE, --release RELEASE
    +                        Distribution release to use
    +    -o OUTPUT, --output OUTPUT
    +                        Output base file name
    +    -e ELEMENT, --element ELEMENT
    +                        Additional DIB element to use
    +    -b BRANCH, --branch BRANCH
    +                        If set, override the branch that is used for ironic-
    +                        python-agent and requirements
    +    -v, --verbose         Enable verbose logging in diskimage-builder
    +    --extra-args EXTRA_ARGS
    +                        Extra arguments to pass to diskimage-builder
    +```
    +
    +举例说明:
    +
    +```shell
    +ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky
    +```
    +
  4. +
  5. +

    允许ssh登录

    +
    初始化环境变量,然后制作镜像:
    +
    +```shell
    +export DIB_DEV_USER_USERNAME=ipa \
    +export DIB_DEV_USER_PWDLESS_SUDO=yes \
    +export DIB_DEV_USER_PASSWORD='123'
    +ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky -e selinux-permissive -e devuser
    +```
    +
  6. +
  7. +

    指定代码仓库

    +
    初始化对应的环境变量,然后制作镜像:
    +
    +```shell
    +# 指定仓库地址以及版本
    +DIB_REPOLOCATION_ironic_python_agent=git@172.20.2.149:liuzz/ironic-python-agent.git
    +DIB_REPOREF_ironic_python_agent=origin/develop
    +
    +# 直接从gerrit上clone代码
    +DIB_REPOLOCATION_ironic_python_agent=https://review.opendev.org/openstack/ironic-python-agent
    +DIB_REPOREF_ironic_python_agent=refs/changes/43/701043/1
    +```
    +
    +参考:[source-repositories](https://docs.openstack.org/diskimage-builder/latest/elements/source-repositories/README.html)。
    +
    +指定仓库地址及版本验证成功。
    +
  8. +
  9. +

Note: the PXE configuration file templates in native OpenStack do not support the arm64 architecture, so the user needs to modify the native OpenStack code:

    +

    在T版中,社区的ironic仍然不支持arm64位的uefi pxe启动,表现为生成的grub.cfg文件(一般位于/tftpboot/下)格式不对而导致pxe启动失败

    +

    需要用户对生成grub.cfg的代码逻辑自行修改。

    +

    ironic向ipa发送查询命令执行状态请求的tls报错:

    +

In Train, both ipa and ironic enable TLS authentication by default when sending requests to each other; disable it as described in the official documentation.

    +
      +
    1. 修改ironic配置文件(/etc/ironic/ironic.conf)下面的配置中添加ipa-insecure=1:
    2. +
    +
    [agent]
    +verify_ca = False
    +
    +[pxe]
    +pxe_append_params = nofb nomodeset vga=normal coreos.autologin ipa-insecure=1
    +
      +
    1. ramdisk镜像中添加ipa配置文件/etc/ironic_python_agent/ironic_python_agent.conf并配置tls的配置如下:
    2. +
    +

    /etc/ironic_python_agent/ironic_python_agent.conf (需要提前创建/etc/ironic_python_agent目录)

    +
    [DEFAULT]
    +enable_auto_tls = False
    +

    设置权限:

    +
    chown -R ipa.ipa /etc/ironic_python_agent/
    +
      +
    1. 修改ipa服务的服务启动文件,添加配置文件选项
    2. +
    +

    vim usr/lib/systemd/system/ironic-python-agent.service

    +
    [Unit]
    +Description=Ironic Python Agent
    +After=network-online.target
    +
    +[Service]
    +ExecStartPre=/sbin/modprobe vfat
    +ExecStart=/usr/local/bin/ironic-python-agent --config-file /etc/ironic_python_agent/ironic_python_agent.conf
    +Restart=always
    +RestartSec=30s
    +
    +[Install]
    +WantedBy=multi-user.target
    +
  10. +
+

在Train中,我们还提供了ironic-inspector等服务,用户可根据自身需求安装。

+

Kolla 安装

+

Kolla为OpenStack服务提供生产环境可用的容器化部署的功能。

+

Kolla的安装十分简单,只需要安装对应的RPM包即可

+
yum install openstack-kolla openstack-kolla-ansible
+

安装完后,就可以使用kolla-ansible, kolla-build, kolla-genpwd, kolla-mergepwd等命令进行相关的镜像制作和容器环境部署了。

+
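A hedged example of a typical workflow with these commands (not part of the original guide); the inventory path and a prepared /etc/kolla/globals.yml are assumptions.

```shell
# generate random passwords into /etc/kolla/passwords.yml
kolla-genpwd
# run prechecks and deploy using the all-in-one inventory shipped with kolla-ansible (path is an assumption)
kolla-ansible -i /usr/share/kolla-ansible/ansible/inventory/all-in-one prechecks
kolla-ansible -i /usr/share/kolla-ansible/ansible/inventory/all-in-one deploy
```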

Trove 安装

+

Trove是OpenStack的数据库服务,如果用户使用OpenStack提供的数据库服务则推荐使用该组件。否则,可以不用安装。

+
    +
  1. 设置数据库
  2. +
+

数据库服务在数据库中存储信息,创建一个trove用户可以访问的trove数据库,替换TROVE_DBPASSWORD为合适的密码

+
mysql -u root -p
+
+MariaDB [(none)]> CREATE DATABASE trove CHARACTER SET utf8;
+MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'localhost' \
+IDENTIFIED BY 'TROVE_DBPASSWORD';
+MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'%' \
+IDENTIFIED BY 'TROVE_DBPASSWORD';
+
    +
  1. 创建服务用户认证
  2. +
+

1、创建Trove服务用户

+

openstack user create --domain default --password-prompt trove
+openstack role add --project service --user trove admin
+openstack service create --name trove --description "Database" database
Explanation: the password entered at the --password-prompt prompt is the trove user's password, referred to below as TROVE_PASSWORD.

+

2、创建Database服务访问入口

+
openstack endpoint create --region RegionOne database public http://controller:8779/v1.0/%\(tenant_id\)s
+openstack endpoint create --region RegionOne database internal http://controller:8779/v1.0/%\(tenant_id\)s
+openstack endpoint create --region RegionOne database admin http://controller:8779/v1.0/%\(tenant_id\)s
+
    +
  1. 安装和配置Trove各组件
  2. +
+

1. Install the Trove packages:

    yum install openstack-trove python3-troveclient

2. Configure trove.conf:

    vim /etc/trove/trove.conf
+
+ [DEFAULT]
+ log_dir = /var/log/trove
+ trove_auth_url = http://controller:5000/
+ nova_compute_url = http://controller:8774/v2
+ cinder_url = http://controller:8776/v1
+ swift_url = http://controller:8080/v1/AUTH_
+ rpc_backend = rabbit
+ transport_url = rabbit://openstack:RABBIT_PASS@controller:5672
+ auth_strategy = keystone
+ add_addresses = True
+ api_paste_config = /etc/trove/api-paste.ini
+ nova_proxy_admin_user = admin
+ nova_proxy_admin_pass = ADMIN_PASSWORD
+ nova_proxy_admin_tenant_name = service
+ taskmanager_manager = trove.taskmanager.manager.Manager
+ use_nova_server_config_drive = True
+ # Set these if using Neutron Networking
+ network_driver = trove.network.neutron.NeutronDriver
+ network_label_regex = .*
+
+ [database]
+ connection = mysql+pymysql://trove:TROVE_DBPASSWORD@controller/trove
+
+ [keystone_authtoken]
+ www_authenticate_uri = http://controller:5000/
+ auth_url = http://controller:5000/
+ auth_type = password
+ project_domain_name = default
+ user_domain_name = default
+ project_name = service
+ username = trove
+ password = TROVE_PASSWORD
+ **解释:** + -[Default]分组中nova_compute_urlcinder_url为Nova和Cinder在Keystone中创建的endpoint + -nova_proxy_XXX为一个能访问Nova服务的用户信息,上例中使用admin用户为例 + -transport_urlRabbitMQ连接信息,RABBIT_PASS替换为RabbitMQ的密码 + -[database]分组中的connection为前面在mysql中为Trove创建的数据库信息 + - Trove的用户信息中TROVE_PASSWORD`替换为实际trove用户的密码

+
    +
3. Configure trove-guestagent.conf:

    vim /etc/trove/trove-guestagent.conf
  2. +
+

rabbit_host = controller
rabbit_password = RABBIT_PASS
trove_auth_url = http://controller:5000/

**解释:** `guestagent`是trove中一个独立组件,需要预先内置到Trove通过Nova创建的虚拟
+机镜像中,在创建好数据库实例后,会起guestagent进程,负责通过消息队列(RabbitMQ)向Trove上
+报心跳,因此需要配置RabbitMQ的用户和密码信息。
+**从Victoria版开始,Trove使用一个统一的镜像来跑不同类型的数据库,数据库服务运行在Guest虚拟机的Docker容器中。**
+- `RABBIT_PASS`替换为RabbitMQ的密码  
+
4. Populate the Trove database:

    su -s /bin/sh -c "trove-manage db_sync" trove

+
    +
5. Complete the installation:

    Configure the Trove services to start on boot:

    systemctl enable openstack-trove-api.service \
                     openstack-trove-taskmanager.service \
                     openstack-trove-conductor.service

    Start the services:

    systemctl start openstack-trove-api.service \
                    openstack-trove-taskmanager.service \
                    openstack-trove-conductor.service
  4. +
+

Swift 安装

+

Swift 提供了弹性可伸缩、高可用的分布式对象存储服务,适合存储大规模非结构化数据。

+
    +
  1. +

    创建服务凭证、API端点。

    +

    创建服务凭证

    +
    #创建swift用户:
    +openstack user create --domain default --password-prompt swift                 
    +#为swift用户添加admin角色:
    +openstack role add --project service --user swift admin                        
    +#创建swift服务实体:
    +openstack service create --name swift --description "OpenStack Object Storage" object-store                                                                   
    +

    创建swift API 端点:

    +
    openstack endpoint create --region RegionOne object-store public http://controller:8080/v1/AUTH_%\(project_id\)s                            
    +openstack endpoint create --region RegionOne object-store internal http://controller:8080/v1/AUTH_%\(project_id\)s                            
    +openstack endpoint create --region RegionOne object-store admin http://controller:8080/v1                                                  
    +
  2. +
  3. +

    安装软件包:

    +
    yum install openstack-swift-proxy python3-swiftclient python3-keystoneclient python3-keystonemiddleware memcached (CTL)
    +
  4. +
  5. +

    配置proxy-server相关配置

    +
  6. +
+

Swift RPM包里已经包含了一个基本可用的proxy-server.conf,只需要手动修改其中的ip和swift password即可。

+
***注意***
+
+**注意替换password为您在身份服务中为swift用户选择的密码**
+
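A hedged sketch of the lines that usually need to be adjusted in /etc/swift/proxy-server.conf; the option names follow the standard swift/keystonemiddleware layout, but the values are placeholders and the file shipped in the RPM remains the authoritative template.

```shell
[DEFAULT]
bind_port = 8080

[filter:authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = swift
password = SWIFT_PASS
```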
    +
  1. +

    安装和配置存储节点 (STG)

    +

    安装支持的程序包: +

    yum install xfsprogs rsync

    +

    将/dev/vdb和/dev/vdc设备格式化为 XFS

    +
    mkfs.xfs /dev/vdb
    +mkfs.xfs /dev/vdc
    +

    创建挂载点目录结构:

    +
    mkdir -p /srv/node/vdb
    +mkdir -p /srv/node/vdc
    +

    找到新分区的 UUID:

    +
    blkid
    +

    编辑/etc/fstab文件并将以下内容添加到其中:

    +
    UUID="<UUID-from-output-above>" /srv/node/vdb xfs noatime 0 2
    +UUID="<UUID-from-output-above>" /srv/node/vdc xfs noatime 0 2
    +

    挂载设备:

    +

    mount /srv/node/vdb
    +mount /srv/node/vdc
    +注意

    +

    如果用户不需要容灾功能,以上步骤只需要创建一个设备即可,同时可以跳过下面的rsync配置

    +

    (可选)创建或编辑/etc/rsyncd.conf文件以包含以下内容:

    +

    [DEFAULT]
    +uid = swift
    +gid = swift
    +log file = /var/log/rsyncd.log
    +pid file = /var/run/rsyncd.pid
    +address = MANAGEMENT_INTERFACE_IP_ADDRESS
    +
    +[account]
    +max connections = 2
    +path = /srv/node/
    +read only = False
    +lock file = /var/lock/account.lock
    +
    +[container]
    +max connections = 2
    +path = /srv/node/
    +read only = False
    +lock file = /var/lock/container.lock
    +
    +[object]
    +max connections = 2
    +path = /srv/node/
    +read only = False
    +lock file = /var/lock/object.lock
    +替换MANAGEMENT_INTERFACE_IP_ADDRESS为存储节点上管理网络的IP地址

    +

    启动rsyncd服务并配置它在系统启动时启动:

    +
    systemctl enable rsyncd.service
    +systemctl start rsyncd.service
    +
  2. +
  3. +

    在存储节点安装和配置组件 (STG)

    +

    安装软件包:

    +
    yum install openstack-swift-account openstack-swift-container openstack-swift-object
    +

    编辑/etc/swift目录的account-server.conf、container-server.conf和object-server.conf文件,替换bind_ip为存储节点上管理网络的IP地址。

    +

    确保挂载点目录结构的正确所有权:

    +
    chown -R swift:swift /srv/node
    +

    创建recon目录并确保其拥有正确的所有权:

    +
    mkdir -p /var/cache/swift
    +chown -R root:swift /var/cache/swift
    +chmod -R 775 /var/cache/swift
    +
  4. +
  5. +

    创建账号环 (CTL)

    +

    切换到/etc/swift目录。

    +
    cd /etc/swift
    +

    创建基础account.builder文件:

    +
    swift-ring-builder account.builder create 10 1 1
    +

    将每个存储节点添加到环中:

    +
    swift-ring-builder account.builder add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6202  --device DEVICE_NAME --weight DEVICE_WEIGHT
    +

    替换STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS为存储节点上管理网络的IP地址。替换DEVICE_NAME为同一存储节点上的存储设备名称

    +

Note: repeat this command for every storage device on every storage node.

    +

Verify the ring contents:

    +
    swift-ring-builder account.builder
    +

Rebalance the ring:

    +
    swift-ring-builder account.builder rebalance
    +
  6. +
  7. +

    创建容器环 (CTL)

    +

    切换到/etc/swift目录。

    +

    创建基础container.builder文件:

    +
       swift-ring-builder container.builder create 10 1 1
    +

    将每个存储节点添加到环中:

    +
    swift-ring-builder container.builder \
    +  add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6201 \
    +  --device DEVICE_NAME --weight 100
    +
    +

    替换STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS为存储节点上管理网络的IP地址。替换DEVICE_NAME为同一存储节点上的存储设备名称

    +

Note: repeat this command for every storage device on every storage node.

    +

Verify the ring contents:

    +
    swift-ring-builder container.builder
    +

Rebalance the ring:

    +
    swift-ring-builder container.builder rebalance
    +
  8. +
  9. +

    创建对象环 (CTL)

    +

    切换到/etc/swift目录。

    +

    创建基础object.builder文件:

    +
    swift-ring-builder object.builder create 10 1 1
    +

    将每个存储节点添加到环中

    +
     swift-ring-builder object.builder \
    +  add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6200 \
    +  --device DEVICE_NAME --weight 100
    +

    替换STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS为存储节点上管理网络的IP地址。替换DEVICE_NAME为同一存储节点上的存储设备名称

    +

Note: repeat this command for every storage device on every storage node.

    +

Verify the ring contents:

    +
    swift-ring-builder object.builder
    +

Rebalance the ring:

    +
    swift-ring-builder object.builder rebalance
    +

    分发环配置文件:

    +

Copy the account.ring.gz, container.ring.gz, and object.ring.gz files to the /etc/swift directory on every storage node and on any other node running the proxy service.

    +
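For example, a hedged sketch of distributing the ring files from the node where they were built (STORAGE_NODE_IP is a placeholder for each storage node's management IP):

```shell
# run on the node where the rings were built (CTL)
scp /etc/swift/account.ring.gz /etc/swift/container.ring.gz /etc/swift/object.ring.gz \
    root@STORAGE_NODE_IP:/etc/swift/
```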
  10. +
  11. +

    完成安装

    +

    编辑/etc/swift/swift.conf文件

    +
    [swift-hash]
    +swift_hash_path_suffix = test-hash
    +swift_hash_path_prefix = test-hash
    +
    +[storage-policy:0]
    +name = Policy-0
    +default = yes
    +

    用唯一值替换 test-hash

    +
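One possible way (an assumption, not mandated by the guide) to generate unique values for swift_hash_path_suffix and swift_hash_path_prefix:

```shell
openssl rand -hex 10
```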

Copy the swift.conf file to the /etc/swift directory on every storage node and on any other node running the proxy service.

    +

    在所有节点上,确保配置目录的正确所有权:

    +
    chown -R root:swift /etc/swift
    +

    在控制器节点和运行代理服务的任何其他节点上,启动对象存储代理服务及其依赖项,并将它们配置为在系统启动时启动:

    +
    systemctl enable openstack-swift-proxy.service memcached.service
    +systemctl start openstack-swift-proxy.service memcached.service
    +

    在存储节点上,启动对象存储服务并将它们配置为在系统启动时启动:

    +
    systemctl enable openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service
    +
    +systemctl start openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service
    +
    +systemctl enable openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service
    +
    +systemctl start openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service
    +
    +systemctl enable openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service
    +
    +systemctl start openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service
    +
  12. +
+

Cyborg 安装

+

Cyborg为OpenStack提供加速器设备的支持,包括 GPU, FPGA, ASIC, NP, SoCs, NVMe/NOF SSDs, ODP, DPDK/SPDK等等。

+
    +
  1. 初始化对应数据库
  2. +
+
CREATE DATABASE cyborg;
+GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'localhost' IDENTIFIED BY 'CYBORG_DBPASS';
+GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'%' IDENTIFIED BY 'CYBORG_DBPASS';
+
    +
  1. 创建对应Keystone资源对象
  2. +
+
$ openstack user create --domain default --password-prompt cyborg
+$ openstack role add --project service --user cyborg admin
+$ openstack service create --name cyborg --description "Acceleration Service" accelerator
+
+$ openstack endpoint create --region RegionOne \
+  accelerator public http://<cyborg-ip>:6666/v1
+$ openstack endpoint create --region RegionOne \
+  accelerator internal http://<cyborg-ip>:6666/v1
+$ openstack endpoint create --region RegionOne \
+  accelerator admin http://<cyborg-ip>:6666/v1
+
    +
  1. 安装Cyborg
  2. +
+
yum install openstack-cyborg
+
    +
  1. 配置Cyborg
  2. +
+

修改/etc/cyborg/cyborg.conf

+
[DEFAULT]
+transport_url = rabbit://%RABBITMQ_USER%:%RABBITMQ_PASSWORD%@%OPENSTACK_HOST_IP%:5672/
+use_syslog = False
+state_path = /var/lib/cyborg
+debug = True
+
+[database]
+connection = mysql+pymysql://%DATABASE_USER%:%DATABASE_PASSWORD%@%OPENSTACK_HOST_IP%/cyborg
+
+[service_catalog]
+project_domain_id = default
+user_domain_id = default
+project_name = service
+password = PASSWORD
+username = cyborg
+auth_url = http://%OPENSTACK_HOST_IP%/identity
+auth_type = password
+
+[placement]
+project_domain_name = Default
+project_name = service
+user_domain_name = Default
+password = PASSWORD
+username = placement
+auth_url = http://%OPENSTACK_HOST_IP%/identity
+auth_type = password
+
+[keystone_authtoken]
+memcached_servers = localhost:11211
+project_domain_name = Default
+project_name = service
+user_domain_name = Default
+password = PASSWORD
+username = cyborg
+auth_url = http://%OPENSTACK_HOST_IP%/identity
+auth_type = password
+

自行修改对应的用户名、密码、IP等信息

+
    +
  1. 同步数据库表格
  2. +
+
cyborg-dbsync --config-file /etc/cyborg/cyborg.conf upgrade
+
    +
  1. 启动Cyborg服务
  2. +
+
systemctl enable openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent
+systemctl start openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent
+

Aodh 安装

+
    +
  1. 创建数据库
  2. +
+
CREATE DATABASE aodh;
+
+GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'localhost' IDENTIFIED BY 'AODH_DBPASS';
+
+GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'%' IDENTIFIED BY 'AODH_DBPASS';
+
    +
  1. 创建对应Keystone资源对象
  2. +
+
openstack user create --domain default --password-prompt aodh
+
+openstack role add --project service --user aodh admin
+
+openstack service create --name aodh --description "Telemetry" alarming
+
+openstack endpoint create --region RegionOne alarming public http://controller:8042
+
+openstack endpoint create --region RegionOne alarming internal http://controller:8042
+
+openstack endpoint create --region RegionOne alarming admin http://controller:8042
+
    +
  1. 安装Aodh
  2. +
+
yum install openstack-aodh-api openstack-aodh-evaluator openstack-aodh-notifier openstack-aodh-listener openstack-aodh-expirer python3-aodhclient
+
    +
  1. 修改配置文件
  2. +
+
[database]
+connection = mysql+pymysql://aodh:AODH_DBPASS@controller/aodh
+
+[DEFAULT]
+transport_url = rabbit://openstack:RABBIT_PASS@controller
+auth_strategy = keystone
+
+[keystone_authtoken]
+www_authenticate_uri = http://controller:5000
+auth_url = http://controller:5000
+memcached_servers = controller:11211
+auth_type = password
+project_domain_id = default
+user_domain_id = default
+project_name = service
+username = aodh
+password = AODH_PASS
+
+[service_credentials]
+auth_type = password
+auth_url = http://controller:5000/v3
+project_domain_id = default
+user_domain_id = default
+project_name = service
+username = aodh
+password = AODH_PASS
+interface = internalURL
+region_name = RegionOne
+
    +
  1. 初始化数据库
  2. +
+
aodh-dbsync
+
    +
  1. 启动Aodh服务
  2. +
+
systemctl enable openstack-aodh-api.service openstack-aodh-evaluator.service openstack-aodh-notifier.service openstack-aodh-listener.service
+
+systemctl start openstack-aodh-api.service openstack-aodh-evaluator.service openstack-aodh-notifier.service openstack-aodh-listener.service
+

Gnocchi 安装

+
    +
  1. 创建数据库
  2. +
+
CREATE DATABASE gnocchi;
+
+GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'localhost' IDENTIFIED BY 'GNOCCHI_DBPASS';
+
+GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'%' IDENTIFIED BY 'GNOCCHI_DBPASS';
+
    +
  1. 创建对应Keystone资源对象
  2. +
+
openstack user create --domain default --password-prompt gnocchi
+
+openstack role add --project service --user gnocchi admin
+
+openstack service create --name gnocchi --description "Metric Service" metric
+
+openstack endpoint create --region RegionOne metric public http://controller:8041
+
+openstack endpoint create --region RegionOne metric internal http://controller:8041
+
+openstack endpoint create --region RegionOne metric admin http://controller:8041
+
    +
  1. 安装Gnocchi
  2. +
+
yum install openstack-gnocchi-api openstack-gnocchi-metricd python3-gnocchiclient
+
    +
  1. 修改配置文件/etc/gnocchi/gnocchi.conf
  2. +
+
[api]
+auth_mode = keystone
+port = 8041
+uwsgi_mode = http-socket
+
+[keystone_authtoken]
+auth_type = password
+auth_url = http://controller:5000/v3
+project_domain_name = Default
+user_domain_name = Default
+project_name = service
+username = gnocchi
+password = GNOCCHI_PASS
+interface = internalURL
+region_name = RegionOne
+
+[indexer]
+url = mysql+pymysql://gnocchi:GNOCCHI_DBPASS@controller/gnocchi
+
+[storage]
+# coordination_url is not required but specifying one will improve
+# performance with better workload division across workers.
+coordination_url = redis://controller:6379
+file_basepath = /var/lib/gnocchi
+driver = file
+
    +
  1. 初始化数据库
  2. +
+
gnocchi-upgrade
+
    +
  1. 启动Gnocchi服务
  2. +
+
systemctl enable openstack-gnocchi-api.service openstack-gnocchi-metricd.service
+
+systemctl start openstack-gnocchi-api.service openstack-gnocchi-metricd.service
+

Ceilometer 安装

+
    +
  1. 创建对应Keystone资源对象
  2. +
+
openstack user create --domain default --password-prompt ceilometer
+
+openstack role add --project service --user ceilometer admin
+
+openstack service create --name ceilometer --description "Telemetry" metering
+
    +
  1. 安装Ceilometer
  2. +
+
yum install openstack-ceilometer-notification openstack-ceilometer-central
+
    +
  1. 修改配置文件/etc/ceilometer/pipeline.yaml
  2. +
+
publishers:
+    # set address of Gnocchi
+    # + filter out Gnocchi-related activity meters (Swift driver)
+    # + set default archive policy
+    - gnocchi://?filter_project=service&archive_policy=low
+
    +
  1. 修改配置文件/etc/ceilometer/ceilometer.conf
  2. +
+
[DEFAULT]
+transport_url = rabbit://openstack:RABBIT_PASS@controller
+
+[service_credentials]
+auth_type = password
+auth_url = http://controller:5000/v3
+project_domain_id = default
+user_domain_id = default
+project_name = service
+username = ceilometer
+password = CEILOMETER_PASS
+interface = internalURL
+region_name = RegionOne
+
    +
  1. 初始化数据库
  2. +
+
ceilometer-upgrade
+
    +
  1. 启动Ceilometer服务
  2. +
+
systemctl enable openstack-ceilometer-notification.service openstack-ceilometer-central.service
+
+systemctl start openstack-ceilometer-notification.service openstack-ceilometer-central.service
+

Heat 安装

+
    +
  1. 创建heat数据库,并授予heat数据库正确的访问权限,替换HEAT_DBPASS为合适的密码
  2. +
+
CREATE DATABASE heat;
+GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'localhost' IDENTIFIED BY 'HEAT_DBPASS';
+GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'%' IDENTIFIED BY 'HEAT_DBPASS';
+
    +
  1. 创建服务凭证,创建heat用户,并为其增加admin角色
  2. +
+
openstack user create --domain default --password-prompt heat
+openstack role add --project service --user heat admin
+
    +
  1. 创建heatheat-cfn服务及其对应的API端点
  2. +
+
openstack service create --name heat --description "Orchestration" orchestration
+openstack service create --name heat-cfn --description "Orchestration"  cloudformation
+openstack endpoint create --region RegionOne orchestration public http://controller:8004/v1/%\(tenant_id\)s
+openstack endpoint create --region RegionOne orchestration internal http://controller:8004/v1/%\(tenant_id\)s
+openstack endpoint create --region RegionOne orchestration admin http://controller:8004/v1/%\(tenant_id\)s
+openstack endpoint create --region RegionOne cloudformation public http://controller:8000/v1
+openstack endpoint create --region RegionOne cloudformation internal http://controller:8000/v1
+openstack endpoint create --region RegionOne cloudformation admin http://controller:8000/v1
+
    +
1. Create the additional information needed for stack management, including the heat domain, the heat_domain_admin admin user of that domain, and the heat_stack_owner and heat_stack_user roles
  2. +
+
openstack user create --domain heat --password-prompt heat_domain_admin
+openstack role add --domain heat --user-domain heat --user heat_domain_admin admin
+openstack role create heat_stack_owner
+openstack role create heat_stack_user
+
    +
  1. 安装软件包
  2. +
+
yum install openstack-heat-api openstack-heat-api-cfn openstack-heat-engine
+
    +
  1. 修改配置文件/etc/heat/heat.conf
  2. +
+
[DEFAULT]
+transport_url = rabbit://openstack:RABBIT_PASS@controller
+heat_metadata_server_url = http://controller:8000
+heat_waitcondition_server_url = http://controller:8000/v1/waitcondition
+stack_domain_admin = heat_domain_admin
+stack_domain_admin_password = HEAT_DOMAIN_PASS
+stack_user_domain_name = heat
+
+[database]
+connection = mysql+pymysql://heat:HEAT_DBPASS@controller/heat
+
+[keystone_authtoken]
+www_authenticate_uri = http://controller:5000
+auth_url = http://controller:5000
+memcached_servers = controller:11211
+auth_type = password
+project_domain_name = default
+user_domain_name = default
+project_name = service
+username = heat
+password = HEAT_PASS
+
+[trustee]
+auth_type = password
+auth_url = http://controller:5000
+username = heat
+password = HEAT_PASS
+user_domain_name = default
+
+[clients_keystone]
+auth_uri = http://controller:5000
+
    +
  1. 初始化heat数据库表
  2. +
+
su -s /bin/sh -c "heat-manage db_sync" heat
+
    +
  1. 启动服务
  2. +
+
systemctl enable openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service
+systemctl start openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service
+

基于OpenStack SIG开发工具oos快速部署

+

oos(openEuler OpenStack SIG)是OpenStack SIG提供的命令行工具。其中oos env系列命令提供了一键部署OpenStack (all in one或三节点cluster)的ansible脚本,用户可以使用该脚本快速部署一套基于 openEuler RPM 的 OpenStack 环境。oos工具支持对接云provider(目前仅支持华为云provider)和主机纳管两种方式来部署 OpenStack 环境,下面以对接华为云部署一套all in one的OpenStack环境为例说明oos工具的使用方法。

+
    +
  1. +

    安装oos工具

    +
    pip install openstack-sig-tool
    +
  2. +
  3. +

    配置对接华为云provider的信息

    +

    打开/usr/local/etc/oos/oos.conf文件,修改配置为您拥有的华为云资源信息:

    +
    [huaweicloud]
    +ak = 
    +sk = 
    +region = ap-southeast-3
    +root_volume_size = 100
    +data_volume_size = 100
    +security_group_name = oos
    +image_format = openEuler-%%(release)s-%%(arch)s
    +vpc_name = oos_vpc
    +subnet1_name = oos_subnet1
    +subnet2_name = oos_subnet2
    +
  4. +
  5. +

    配置 OpenStack 环境信息

    +

    打开/usr/local/etc/oos/oos.conf文件,根据当前机器环境和需求修改配置。内容如下:

    +
    [environment]
    +mysql_root_password = root
    +mysql_project_password = root
    +rabbitmq_password = root
    +project_identity_password = root
    +enabled_service = keystone,neutron,cinder,placement,nova,glance,horizon,aodh,ceilometer,cyborg,gnocchi,kolla,heat,swift,trove,tempest
    +neutron_provider_interface_name = br-ex
    +default_ext_subnet_range = 10.100.100.0/24
    +default_ext_subnet_gateway = 10.100.100.1
    +neutron_dataplane_interface_name = eth1
    +cinder_block_device = vdb
    +swift_storage_devices = vdc
    +swift_hash_path_suffix = ash
    +swift_hash_path_prefix = has
    +glance_api_workers = 2
    +cinder_api_workers = 2
    +nova_api_workers = 2
    +nova_metadata_api_workers = 2
    +nova_conductor_workers = 2
    +nova_scheduler_workers = 2
    +neutron_api_workers = 2
    +horizon_allowed_host = *
    +kolla_openeuler_plugin = false
    +

    关键配置

| Option | Description |
|:--|:--|
| enabled_service | List of services to install; trim it according to your needs |
| neutron_provider_interface_name | Name of the neutron L3 bridge |
| default_ext_subnet_range | IP range of the neutron private network |
| default_ext_subnet_gateway | Gateway of the neutron private network |
| neutron_dataplane_interface_name | NIC used by neutron; a new, dedicated NIC is recommended to avoid conflicts with the existing NIC and losing the connection to the all-in-one host |
| cinder_block_device | Block device name used by cinder |
| swift_storage_devices | Block device name used by swift |
| kolla_openeuler_plugin | Whether to enable the kolla plugin; when set to True, kolla can deploy openEuler containers |
  6. +
  7. +

    华为云上面创建一台openEuler 22.03-LTS的x86_64虚拟机,用于部署all in one 的 OpenStack

    +
    # sshpass在`oos env create`过程中被使用,用于配置对目标虚拟机的免密访问
    +dnf install sshpass
    +oos env create -r 22.03-lts -f small -a x86 -n test-oos all_in_one
    +

    具体的参数可以使用oos env create --help命令查看

    +
  8. +
  9. +

    部署OpenStack all in one 环境

    +
    oos env setup test-oos -r train
    +

    具体的参数可以使用oos env setup --help命令查看

    +
  10. +
  11. +

    初始化tempest环境

    +

    如果用户想使用该环境运行tempest测试的话,可以执行命令oos env init,会自动把tempest需要的OpenStack资源自动创建好

    +
    oos env init test-oos
    +

    命令执行成功后,在用户的根目录下会生成mytest目录,进入其中就可以执行tempest run命令了。

    +
  12. +
+

如果是以主机纳管的方式部署 OpenStack 环境,总体逻辑与上文对接华为云时一致,1、3、5、6步操作不变,去除第2步对华为云provider信息的配置,第4步由在华为云上创建虚拟机改为纳管主机操作。

+
# sshpass在`oos env create`过程中被使用,用于配置对目标主机的免密访问
+dnf install sshpass
+oos env manage -r 22.03-lts -i TARGET_MACHINE_IP -p TARGET_MACHINE_PASSWD -n test-oos
+

替换TARGET_MACHINE_IP为目标机ip、TARGET_MACHINE_PASSWD为目标机密码。具体的参数可以使用oos env manage --help命令查看。

diff --git a/site/install/openEuler-22.03-LTS/OpenStack-wallaby/index.html b/site/install/openEuler-22.03-LTS/OpenStack-wallaby/index.html
new file mode 100644
index 0000000000000000000000000000000000000000..11fe8f26977eb1acc9441ab2b54fb3c219c0f677
--- /dev/null
+++ b/site/install/openEuler-22.03-LTS/OpenStack-wallaby/index.html
@@ -0,0 +1,2659 @@
+openEuler-22.03-LTS_Wallaby - OpenStack SIG Doc

OpenStack-Wallaby 部署指南

+ +

OpenStack 简介

+

OpenStack 是一个社区,也是一个项目。它提供了一个部署云的操作平台或工具集,为组织提供可扩展的、灵活的云计算。

+

作为一个开源的云计算管理平台,OpenStack 由nova、cinder、neutron、glance、keystone、horizon等几个主要的组件组合起来完成具体工作。OpenStack 支持几乎所有类型的云环境,项目目标是提供实施简单、可大规模扩展、丰富、标准统一的云计算管理平台。OpenStack 通过各种互补的服务提供了基础设施即服务(IaaS)的解决方案,每个服务提供 API 进行集成。

+

openEuler 22.03 LTS 版本官方源已经支持 OpenStack-Wallaby 版本,用户可以配置好 yum 源后根据此文档进行 OpenStack 部署。

+

约定

+

OpenStack 支持多种形态部署,此文档支持ALL in One以及Distributed两种部署方式,按照如下方式约定:

+

ALL in One模式:

+
忽略所有可能的后缀
+

Distributed模式:

+
以 `(CTL)` 为后缀表示此条配置或者命令仅适用`控制节点`
+以 `(CPT)` 为后缀表示此条配置或者命令仅适用`计算节点`
+以 `(STG)` 为后缀表示此条配置或者命令仅适用`存储节点`
+除此之外表示此条配置或者命令同时适用`控制节点`和`计算节点`
+

注意

+

涉及到以上约定的服务如下:

+
    +
  • Cinder
  • +
  • Nova
  • +
  • Neutron
  • +
+

准备环境

+

环境配置

+
    +
  1. +

    配置 22.03 LTS 官方yum源,需要启用EPOL软件仓以支持OpenStack

    +
    yum update
    +yum install openstack-release-wallaby
    +yum clean all && yum makecache
    +

    注意:如果你的环境的YUM源没有启用EPOL,需要同时配置EPOL

    +
cat << EOF >> /etc/yum.repos.d/openEuler.repo
    +
    +[EPOL]
    +name=EPOL
    +baseurl=http://repo.openeuler.org/openEuler-22.03-LTS/EPOL/main/$basearch/
    +enabled=1
    +gpgcheck=1
    +gpgkey=http://repo.openeuler.org/openEuler-22.03-LTS/OS/$basearch/RPM-GPG-KEY-openEuler
    +EOF
    +
  2. +
  3. +

    修改主机名以及映射

    +

    设置各个节点的主机名

    +
    hostnamectl set-hostname controller                                                            (CTL)
    +hostnamectl set-hostname compute                                                               (CPT)
    +

    假设controller节点的IP是10.0.0.11,compute节点的IP是10.0.0.12(如果存在的话),则于/etc/hosts新增如下:

    +
    10.0.0.11   controller
    +10.0.0.12   compute
    +
  4. +
+

安装 SQL DataBase

+
    +
  1. +

    执行如下命令,安装软件包。

    +
    yum install mariadb mariadb-server python3-PyMySQL
    +
  2. +
  3. +

    执行如下命令,创建并编辑 /etc/my.cnf.d/openstack.cnf 文件。

    +
    vim /etc/my.cnf.d/openstack.cnf
    +
    +[mysqld]
    +bind-address = 10.0.0.11
    +default-storage-engine = innodb
    +innodb_file_per_table = on
    +max_connections = 4096
    +collation-server = utf8_general_ci
    +character-set-server = utf8
    +

    注意

    +

    其中 bind-address 设置为控制节点的管理IP地址。

    +
  4. +
  5. +

    启动 DataBase 服务,并为其配置开机自启动:

    +
    systemctl enable mariadb.service
    +systemctl start mariadb.service
    +
  6. +
  7. +

    配置DataBase的默认密码(可选)

    +
    mysql_secure_installation
    +

    注意

    +

    根据提示进行即可

    +
  8. +
+

安装 RabbitMQ

+
    +
  1. +

    执行如下命令,安装软件包。

    +
    yum install rabbitmq-server
    +
  2. +
  3. +

    启动 RabbitMQ 服务,并为其配置开机自启动。

    +
    systemctl enable rabbitmq-server.service
    +systemctl start rabbitmq-server.service
    +
  4. +
  5. +

    添加 OpenStack用户。

    +
    rabbitmqctl add_user openstack RABBIT_PASS
    +

    注意

    +

    替换 RABBIT_PASS,为 OpenStack 用户设置密码

    +
  6. +
  7. +

    设置openstack用户权限,允许进行配置、写、读:

    +
    rabbitmqctl set_permissions openstack ".*" ".*" ".*"
    +
  8. +
+

安装 Memcached

+
    +
  1. +

    执行如下命令,安装依赖软件包。

    +
    yum install memcached python3-memcached
    +
  2. +
  3. +

    编辑 /etc/sysconfig/memcached 文件。

    +
    vim /etc/sysconfig/memcached
    +
    +OPTIONS="-l 127.0.0.1,::1,controller"
    +
  4. +
  5. +

    执行如下命令,启动 Memcached 服务,并为其配置开机启动。

    +
    systemctl enable memcached.service
    +systemctl start memcached.service
    +

    注意

    +

    服务启动后,可以通过命令memcached-tool controller stats确保启动正常,服务可用,其中可以将controller替换为控制节点的管理IP地址。

    +
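For instance, the check mentioned above can be run as follows (the hostname form comes from the guide; the host:port form using the controller's management IP is an equivalent alternative):

```shell
memcached-tool controller stats
# or, using the management IP and port directly
memcached-tool 10.0.0.11:11211 stats
```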
  6. +
+

安装 OpenStack

+

Keystone 安装

+
    +
  1. +

    创建 keystone 数据库并授权。

    +
    mysql -u root -p
    +
    +MariaDB [(none)]> CREATE DATABASE keystone;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
    +IDENTIFIED BY 'KEYSTONE_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
    +IDENTIFIED BY 'KEYSTONE_DBPASS';
    +MariaDB [(none)]> exit
    +

    注意

    +

    替换 KEYSTONE_DBPASS,为 Keystone 数据库设置密码

    +
  2. +
  3. +

    安装软件包。

    +
    yum install openstack-keystone httpd mod_wsgi
    +
  4. +
  5. +

    配置keystone相关配置

    +
    vim /etc/keystone/keystone.conf
    +
    +[database]
    +connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone
    +
    +[token]
    +provider = fernet
    +

    解释

    +

    [database]部分,配置数据库入口

    +

    [token]部分,配置token provider

    +

    注意:

    +

    替换 KEYSTONE_DBPASS 为 Keystone 数据库的密码

    +
  6. +
  7. +

    同步数据库。

    +
    su -s /bin/sh -c "keystone-manage db_sync" keystone
    +
  8. +
  9. +

    初始化Fernet密钥仓库。

    +
    keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
    +keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
    +
  10. +
  11. +

    启动服务。

    +
    keystone-manage bootstrap --bootstrap-password ADMIN_PASS \
    +--bootstrap-admin-url http://controller:5000/v3/ \
    +--bootstrap-internal-url http://controller:5000/v3/ \
    +--bootstrap-public-url http://controller:5000/v3/ \
    +--bootstrap-region-id RegionOne
    +

    注意

    +

    替换 ADMIN_PASS,为 admin 用户设置密码

    +
  12. +
  13. +

    配置Apache HTTP server

    +
    vim /etc/httpd/conf/httpd.conf
    +
    +ServerName controller
    +
    ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
    +

    解释

    +

    配置 ServerName 项引用控制节点

    +

Note: if the ServerName entry does not exist, it needs to be created.

    +
  14. +
  15. +

    启动Apache HTTP服务。

    +
    systemctl enable httpd.service
    +systemctl start httpd.service
    +
  16. +
  17. +

    创建环境变量配置。

    +
    cat << EOF >> ~/.admin-openrc
    +export OS_PROJECT_DOMAIN_NAME=Default
    +export OS_USER_DOMAIN_NAME=Default
    +export OS_PROJECT_NAME=admin
    +export OS_USERNAME=admin
    +export OS_PASSWORD=ADMIN_PASS
    +export OS_AUTH_URL=http://controller:5000/v3
    +export OS_IDENTITY_API_VERSION=3
    +export OS_IMAGE_API_VERSION=2
    +EOF
    +

    注意

    +

    替换 ADMIN_PASS 为 admin 用户的密码

    +
  18. +
  19. +

    依次创建domain, projects, users, roles,需要先安装好python3-openstackclient:

    +
    yum install python3-openstackclient
    +

    导入环境变量

    +
    source ~/.admin-openrc
    +

    创建project service,其中 domain default 在 keystone-manage bootstrap 时已创建

    +
    openstack domain create --description "An Example Domain" example
    +
    openstack project create --domain default --description "Service Project" service
    +

Create a (non-admin) project myproject, a user myuser, and a role myrole, then add the role myrole to the project myproject and user myuser:

    +
    openstack project create --domain default --description "Demo Project" myproject
    +openstack user create --domain default --password-prompt myuser
    +openstack role create myrole
    +openstack role add --project myproject --user myuser myrole
    +
  20. +
  21. +

    验证

    +

    取消临时环境变量OS_AUTH_URL和OS_PASSWORD:

    +
    source ~/.admin-openrc
    +unset OS_AUTH_URL OS_PASSWORD
    +

    为admin用户请求token:

    +
    openstack --os-auth-url http://controller:5000/v3 \
    +--os-project-domain-name Default --os-user-domain-name Default \
    +--os-project-name admin --os-username admin token issue
    +

    为myuser用户请求token:

    +
    openstack --os-auth-url http://controller:5000/v3 \
    +--os-project-domain-name Default --os-user-domain-name Default \
    +--os-project-name myproject --os-username myuser token issue
    +
  22. +
+

Glance 安装

+
    +
  1. +

    创建数据库、服务凭证和 API 端点

    +

    创建数据库:

    +
    mysql -u root -p
    +
    +MariaDB [(none)]> CREATE DATABASE glance;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
    +IDENTIFIED BY 'GLANCE_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
    +IDENTIFIED BY 'GLANCE_DBPASS';
    +MariaDB [(none)]> exit
    +

    注意:

    +

    替换 GLANCE_DBPASS,为 glance 数据库设置密码

    +

    创建服务凭证

    +
    source ~/.admin-openrc
    +
    +openstack user create --domain default --password-prompt glance
    +openstack role add --project service --user glance admin
    +openstack service create --name glance --description "OpenStack Image" image
    +

    创建镜像服务API端点:

    +
    openstack endpoint create --region RegionOne image public http://controller:9292
    +openstack endpoint create --region RegionOne image internal http://controller:9292
    +openstack endpoint create --region RegionOne image admin http://controller:9292
    +
  2. +
  3. +

    安装软件包

    +
    yum install openstack-glance
    +
  4. +
  5. +

    配置glance相关配置:

    +
    vim /etc/glance/glance-api.conf
    +
    +[database]
    +connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
    +
    +[keystone_authtoken]
    +www_authenticate_uri  = http://controller:5000
    +auth_url = http://controller:5000
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +project_name = service
    +username = glance
    +password = GLANCE_PASS
    +
    +[paste_deploy]
    +flavor = keystone
    +
    +[glance_store]
    +stores = file,http
    +default_store = file
    +filesystem_store_datadir = /var/lib/glance/images/
    +

    解释:

    +

    [database]部分,配置数据库入口

    +

    [keystone_authtoken] [paste_deploy]部分,配置身份认证服务入口

    +

    [glance_store]部分,配置本地文件系统存储和镜像文件的位置

    +

    注意

    +

    替换 GLANCE_DBPASS 为 glance 数据库的密码

    +

    替换 GLANCE_PASS 为 glance 用户的密码

    +
  6. +
  7. +

    同步数据库:

    +
    su -s /bin/sh -c "glance-manage db_sync" glance
    +
  8. +
  9. +

    启动服务:

    +
    systemctl enable openstack-glance-api.service
    +systemctl start openstack-glance-api.service
    +
  10. +
  11. +

    验证

    +

    下载镜像

    +
    source ~/.admin-openrc
    +
    +wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
    +

    注意

    +

    如果您使用的环境是鲲鹏架构,请下载aarch64版本的镜像;已对镜像cirros-0.5.2-aarch64-disk.img进行测试。

    +
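If the aarch64 image mentioned above is needed, it can be fetched in the same way (assuming the same upstream cirros mirror layout as the x86_64 example):

```shell
wget http://download.cirros-cloud.net/0.5.2/cirros-0.5.2-aarch64-disk.img
```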

    向Image服务上传镜像:

    +
    openstack image create --disk-format qcow2 --container-format bare \
    +                       --file cirros-0.4.0-x86_64-disk.img --public cirros
    +

    确认镜像上传并验证属性:

    +
    openstack image list
    +
  12. +
+

Placement安装

+
    +
  1. +

    创建数据库、服务凭证和 API 端点

    +

    创建数据库:

    +

    作为 root 用户访问数据库,创建 placement 数据库并授权。

    +
    mysql -u root -p
    +MariaDB [(none)]> CREATE DATABASE placement;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' \
    +IDENTIFIED BY 'PLACEMENT_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' \
    +IDENTIFIED BY 'PLACEMENT_DBPASS';
    +MariaDB [(none)]> exit
    +

    注意

    +

    替换 PLACEMENT_DBPASS 为 placement 数据库设置密码

    +
    source admin-openrc
    +

    执行如下命令,创建 placement 服务凭证、创建 placement 用户以及添加‘admin’角色到用户‘placement’。

    +

    创建Placement API服务

    +
    openstack user create --domain default --password-prompt placement
    +openstack role add --project service --user placement admin
    +openstack service create --name placement --description "Placement API" placement
    +

    创建placement服务API端点:

    +
    openstack endpoint create --region RegionOne placement public http://controller:8778
    +openstack endpoint create --region RegionOne placement internal http://controller:8778
    +openstack endpoint create --region RegionOne placement admin http://controller:8778
    +
  2. +
  3. +

    安装和配置

    +

    安装软件包:

    +
    yum install openstack-placement-api
    +

    配置placement:

    +

    编辑 /etc/placement/placement.conf 文件:

    +

    在[placement_database]部分,配置数据库入口

    +

    在[api] [keystone_authtoken]部分,配置身份认证服务入口

    +
    # vim /etc/placement/placement.conf
    +[placement_database]
    +# ...
    +connection = mysql+pymysql://placement:PLACEMENT_DBPASS@controller/placement
    +[api]
    +# ...
    +auth_strategy = keystone
    +[keystone_authtoken]
    +# ...
    +auth_url = http://controller:5000/v3
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +project_name = service
    +username = placement
    +password = PLACEMENT_PASS
    +

    其中,替换 PLACEMENT_DBPASS 为 placement 数据库的密码,替换 PLACEMENT_PASS 为 placement 用户的密码。

    +

    同步数据库:

    +
    su -s /bin/sh -c "placement-manage db sync" placement
    +

    启动httpd服务:

    +
    systemctl restart httpd
    +
  4. +
  5. +

    验证

    +

    执行如下命令,执行状态检查:

    +
    . admin-openrc
    +placement-status upgrade check
    +

    安装osc-placement,列出可用的资源类别及特性:

    +
    yum install python3-osc-placement
    +openstack --os-placement-api-version 1.2 resource class list --sort-column name
    +openstack --os-placement-api-version 1.6 trait list --sort-column name
    +
  6. +
+

Nova 安装

+
    +
  1. +

    创建数据库、服务凭证和 API 端点

    +

    创建数据库:

    +
    mysql -u root -p                                                                               (CTL)
    +
    +MariaDB [(none)]> CREATE DATABASE nova_api;
    +MariaDB [(none)]> CREATE DATABASE nova;
    +MariaDB [(none)]> CREATE DATABASE nova_cell0;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> exit
    +

    注意

    +

    替换NOVA_DBPASS,为nova数据库设置密码

    +
    source ~/.admin-openrc                                                                         (CTL)
    +

    创建nova服务凭证:

    +
    openstack user create --domain default --password-prompt nova                                  (CTL)
    +openstack role add --project service --user nova admin                                         (CTL)
    +openstack service create --name nova --description "OpenStack Compute" compute                 (CTL)
    +

    创建nova API端点:

    +
    openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1        (CTL)
    +openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1      (CTL)
    +openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1         (CTL)
    +
  2. +
  3. +

    安装软件包

    +
    yum install openstack-nova-api openstack-nova-conductor \                                      (CTL)
    +openstack-nova-novncproxy openstack-nova-scheduler 
    +
    +yum install openstack-nova-compute                                                             (CPT)
    +

    注意

    +

If the architecture is arm64, the following command also needs to be executed

    +
    yum install edk2-aarch64                                                                       (CPT)
    +
  4. +
  5. +

    配置nova相关配置

    +
    vim /etc/nova/nova.conf
    +
    +[DEFAULT]
    +enabled_apis = osapi_compute,metadata
    +transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
    +my_ip = 10.0.0.1
    +use_neutron = true
    +firewall_driver = nova.virt.firewall.NoopFirewallDriver
    +compute_driver=libvirt.LibvirtDriver                                                           (CPT)
    +instances_path = /var/lib/nova/instances/                                                      (CPT)
    +lock_path = /var/lib/nova/tmp                                                                  (CPT)
    +
    +[api_database]
    +connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api                              (CTL)
    +
    +[database]
    +connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova                                  (CTL)
    +
    +[api]
    +auth_strategy = keystone
    +
    +[keystone_authtoken]
    +www_authenticate_uri = http://controller:5000/
    +auth_url = http://controller:5000/
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +project_name = service
    +username = nova
    +password = NOVA_PASS
    +
    +[vnc]
    +enabled = true
    +server_listen = $my_ip
    +server_proxyclient_address = $my_ip
    +novncproxy_base_url = http://controller:6080/vnc_auto.html                                     (CPT)
    +
    +[libvirt]
    +virt_type = qemu                                                                               (CPT)
    +cpu_mode = custom                                                                              (CPT)
    +cpu_model = cortex-a72                                                                         (CPT)
    +
    +[glance]
    +api_servers = http://controller:9292
    +
    +[oslo_concurrency]
    +lock_path = /var/lib/nova/tmp                                                                  (CTL)
    +
    +[placement]
    +region_name = RegionOne
    +project_domain_name = Default
    +project_name = service
    +auth_type = password
    +user_domain_name = Default
    +auth_url = http://controller:5000/v3
    +username = placement
    +password = PLACEMENT_PASS
    +
    +[neutron]
    +auth_url = http://controller:5000
    +auth_type = password
    +project_domain_name = default
    +user_domain_name = default
    +region_name = RegionOne
    +project_name = service
    +username = neutron
    +password = NEUTRON_PASS
    +service_metadata_proxy = true                                                                  (CTL)
    +metadata_proxy_shared_secret = METADATA_SECRET                                                 (CTL)
    +

    解释

    +

    [default]部分,启用计算和元数据的API,配置RabbitMQ消息队列入口,配置my_ip,启用网络服务neutron;

    +

    [api_database] [database]部分,配置数据库入口;

    +

    [api] [keystone_authtoken]部分,配置身份认证服务入口;

    +

    [vnc]部分,启用并配置远程控制台入口;

    +

    [glance]部分,配置镜像服务API的地址;

    +

    [oslo_concurrency]部分,配置lock path;

    +

    [placement]部分,配置placement服务的入口。

    +

    注意

    +

    替换 RABBIT_PASS 为 RabbitMQ 中 openstack 账户的密码;

    +

    配置 my_ip 为控制节点的管理IP地址;

    +

    替换 NOVA_DBPASS 为nova数据库的密码;

    +

    替换 NOVA_PASS 为nova用户的密码;

    +

    替换 PLACEMENT_PASS 为placement用户的密码;

    +

    替换 NEUTRON_PASS 为neutron用户的密码;

    +

    替换METADATA_SECRET为合适的元数据代理secret。

    +

    额外

    +

    确定是否支持虚拟机硬件加速(x86架构):

    +
    egrep -c '(vmx|svm)' /proc/cpuinfo                                                             (CPT)
    +

    如果返回值为0则不支持硬件加速,需要配置libvirt使用QEMU而不是KVM:

    +
    vim /etc/nova/nova.conf                                                                        (CPT)
    +
    +[libvirt]
    +virt_type = qemu
    +

    如果返回值为1或更大的值,则支持硬件加速,不需要进行额外的配置

    +

    注意

    +

    如果为arm64架构,还需要执行以下命令

    +
    vim /etc/libvirt/qemu.conf
    +
    +nvram = ["/usr/share/AAVMF/AAVMF_CODE.fd: \
    +         /usr/share/AAVMF/AAVMF_VARS.fd", \
    +         "/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw: \
    +         /usr/share/edk2/aarch64/vars-template-pflash.raw"]
    +
    +vim /etc/qemu/firmware/edk2-aarch64.json
    +
    +{
    +    "description": "UEFI firmware for ARM64 virtual machines",
    +    "interface-types": [
    +        "uefi"
    +    ],
    +    "mapping": {
    +        "device": "flash",
    +        "executable": {
    +            "filename": "/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw",
    +            "format": "raw"
    +        },
    +        "nvram-template": {
    +            "filename": "/usr/share/edk2/aarch64/vars-template-pflash.raw",
    +            "format": "raw"
    +        }
    +    },
    +    "targets": [
    +        {
    +            "architecture": "aarch64",
    +            "machines": [
    +                "virt-*"
    +            ]
    +        }
    +    ],
    +    "features": [
    +
    +    ],
    +    "tags": [
    +
    +    ]
    +}
    +
    +(CPT)
    +
  6. +
  7. +

    同步数据库

    +

    同步nova-api数据库:

    +
    su -s /bin/sh -c "nova-manage api_db sync" nova                                                (CTL)
    +

    注册cell0数据库:

    +
    su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova                                          (CTL)
    +

    创建cell1 cell:

    +
    su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova                 (CTL)
    +

    同步nova数据库:

    +
    su -s /bin/sh -c "nova-manage db sync" nova                                                    (CTL)
    +

    验证cell0和cell1注册正确:

    +
    su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova                                         (CTL)
    +

    添加计算节点到openstack集群

    +
    su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova                           (CPT)
    +
  8. +
  9. +

    启动服务

    +
    systemctl enable \                                                                             (CTL)
    +openstack-nova-api.service \
    +openstack-nova-scheduler.service \
    +openstack-nova-conductor.service \
    +openstack-nova-novncproxy.service
    +
    +systemctl start \                                                                              (CTL)
    +openstack-nova-api.service \
    +openstack-nova-scheduler.service \
    +openstack-nova-conductor.service \
    +openstack-nova-novncproxy.service
    +
    systemctl enable libvirtd.service openstack-nova-compute.service                               (CPT)
    +systemctl start libvirtd.service openstack-nova-compute.service                                (CPT)
    +
  10. +
  11. +

    验证

    +
    source ~/.admin-openrc                                                                         (CTL)
    +

    列出服务组件,验证每个流程都成功启动和注册:

    +
    openstack compute service list                                                                 (CTL)
    +

    列出身份服务中的API端点,验证与身份服务的连接:

    +
    openstack catalog list                                                                         (CTL)
    +

    列出镜像服务中的镜像,验证与镜像服务的连接:

    +
    openstack image list                                                                           (CTL)
    +

    检查cells是否运作成功,以及其他必要条件是否已具备。

    +
    nova-status upgrade check                                                                      (CTL)
    +
  12. +
+

Neutron 安装

+
    +
  1. +

    创建数据库、服务凭证和 API 端点

    +

    创建数据库:

    +
    mysql -u root -p                                                                               (CTL)
    +
    +MariaDB [(none)]> CREATE DATABASE neutron;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
    +IDENTIFIED BY 'NEUTRON_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
    +IDENTIFIED BY 'NEUTRON_DBPASS';
    +MariaDB [(none)]> exit
    +

    注意

    +

    替换 NEUTRON_DBPASS 为 neutron 数据库设置密码。

    +
    source ~/.admin-openrc                                                                         (CTL)
    +

    创建neutron服务凭证

    +
    openstack user create --domain default --password-prompt neutron                               (CTL)
    +openstack role add --project service --user neutron admin                                      (CTL)
    +openstack service create --name neutron --description "OpenStack Networking" network           (CTL)
    +

    创建Neutron服务API端点:

    +
    openstack endpoint create --region RegionOne network public http://controller:9696             (CTL)
    +openstack endpoint create --region RegionOne network internal http://controller:9696           (CTL)
    +openstack endpoint create --region RegionOne network admin http://controller:9696              (CTL)
    +
  2. +
  3. +

    安装软件包:

    +
    yum install openstack-neutron openstack-neutron-linuxbridge ebtables ipset \                   (CTL)
    +openstack-neutron-ml2
    +
    yum install openstack-neutron-linuxbridge ebtables ipset                                       (CPT)
    +
  4. +
  5. +

    配置neutron相关配置:

    +

    配置主体配置

    +
    vim /etc/neutron/neutron.conf
    +
    +[database]
    +connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron                         (CTL)
    +
    +[DEFAULT]
    +core_plugin = ml2                                                                              (CTL)
    +service_plugins = router                                                                       (CTL)
    +allow_overlapping_ips = true                                                                   (CTL)
    +transport_url = rabbit://openstack:RABBIT_PASS@controller
    +auth_strategy = keystone
    +notify_nova_on_port_status_changes = true                                                      (CTL)
    +notify_nova_on_port_data_changes = true                                                        (CTL)
    +api_workers = 3                                                                                (CTL)
    +
    +[keystone_authtoken]
    +www_authenticate_uri = http://controller:5000
    +auth_url = http://controller:5000
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +project_name = service
    +username = neutron
    +password = NEUTRON_PASS
    +
    +[nova]
    +auth_url = http://controller:5000                                                              (CTL)
    +auth_type = password                                                                           (CTL)
    +project_domain_name = Default                                                                  (CTL)
    +user_domain_name = Default                                                                     (CTL)
    +region_name = RegionOne                                                                        (CTL)
    +project_name = service                                                                         (CTL)
    +username = nova                                                                                (CTL)
    +password = NOVA_PASS                                                                           (CTL)
    +
    +[oslo_concurrency]
    +lock_path = /var/lib/neutron/tmp
    +

    解释

    +

    [database]部分,配置数据库入口;

    +

    [default]部分,启用ml2插件和router插件,允许ip地址重叠,配置RabbitMQ消息队列入口;

    +

    [default] [keystone_authtoken]部分,配置身份认证服务入口;

    +

    [default] [nova]部分,配置网络来通知计算网络拓扑的变化;

    +

    [oslo_concurrency]部分,配置lock path。

    +

    注意

    +

    替换NEUTRON_DBPASS为 neutron 数据库的密码;

    +

    替换RABBIT_PASS为 RabbitMQ中openstack 账户的密码;

    +

    替换NEUTRON_PASS为 neutron 用户的密码;

    +

    替换NOVA_PASS为 nova 用户的密码。

    +

    配置ML2插件:

    +
    vim /etc/neutron/plugins/ml2/ml2_conf.ini
    +
    +[ml2]
    +type_drivers = flat,vlan,vxlan
    +tenant_network_types = vxlan
    +mechanism_drivers = linuxbridge,l2population
    +extension_drivers = port_security
    +
    +[ml2_type_flat]
    +flat_networks = provider
    +
    +[ml2_type_vxlan]
    +vni_ranges = 1:1000
    +
    +[securitygroup]
    +enable_ipset = true
    +

    创建/etc/neutron/plugin.ini的符号链接

    +
    ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
    +

    注意

    +

    [ml2]部分,启用 flat、vlan、vxlan 网络,启用 linuxbridge 及 l2population 机制,启用端口安全扩展驱动;

    +

    [ml2_type_flat]部分,配置 flat 网络为 provider 虚拟网络;

    +

    [ml2_type_vxlan]部分,配置 VXLAN 网络标识符范围;

    +

    [securitygroup]部分,配置允许 ipset。

    +

    补充

    +

    l2 的具体配置可以根据用户需求自行修改,本文使用的是provider network + linuxbridge

    +

    配置 Linux bridge 代理:

    +
    vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
    +
    +[linux_bridge]
    +physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME
    +
    +[vxlan]
    +enable_vxlan = true
    +local_ip = OVERLAY_INTERFACE_IP_ADDRESS
    +l2_population = true
    +
    +[securitygroup]
    +enable_security_group = true
    +firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
    +

    解释

    +

    [linux_bridge]部分,映射 provider 虚拟网络到物理网络接口;

    +

    [vxlan]部分,启用 vxlan 覆盖网络,配置处理覆盖网络的物理网络接口 IP 地址,启用 layer-2 population;

    +

    [securitygroup]部分,允许安全组,配置 linux bridge iptables 防火墙驱动。

    +

    注意

    +

    替换PROVIDER_INTERFACE_NAME为物理网络接口;

    +

    替换OVERLAY_INTERFACE_IP_ADDRESS为控制节点的管理IP地址。

    +

    配置Layer-3代理:

    +
    vim /etc/neutron/l3_agent.ini                                                                  (CTL)
    +
    +[DEFAULT]
    +interface_driver = linuxbridge
    +

    解释

    +

    在[default]部分,配置接口驱动为linuxbridge

    +

    配置DHCP代理:

    +
    vim /etc/neutron/dhcp_agent.ini                                                                (CTL)
    +
    +[DEFAULT]
    +interface_driver = linuxbridge
    +dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
    +enable_isolated_metadata = true
    +

    解释

    +

    [default]部分,配置linuxbridge接口驱动、Dnsmasq DHCP驱动,启用隔离的元数据。

    +

    配置metadata代理:

    +
    vim /etc/neutron/metadata_agent.ini                                                            (CTL)
    +
    +[DEFAULT]
    +nova_metadata_host = controller
    +metadata_proxy_shared_secret = METADATA_SECRET
    +

    解释

    +

    [default]部分,配置元数据主机和shared secret。

    +

    注意

    +

    替换METADATA_SECRET为合适的元数据代理secret。

    +
  6. +
  7. +

    配置nova相关配置

    +
    vim /etc/nova/nova.conf
    +
    +[neutron]
    +auth_url = http://controller:5000
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +region_name = RegionOne
    +project_name = service
    +username = neutron
    +password = NEUTRON_PASS
    +service_metadata_proxy = true                                                                  (CTL)
    +metadata_proxy_shared_secret = METADATA_SECRET                                                 (CTL)
    +

    解释

    +

    [neutron]部分,配置访问参数,启用元数据代理,配置secret。

    +

    注意

    +

    替换NEUTRON_PASS为 neutron 用户的密码;

    +

    替换METADATA_SECRET为合适的元数据代理secret。

    +
  8. +
  9. +

    同步数据库:

    +
    su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
    +--config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
    +
  10. +
  11. +

    重启计算API服务:

    +
    systemctl restart openstack-nova-api.service
    +
  12. +
  13. +

    启动网络服务

    +
    systemctl enable neutron-server.service neutron-linuxbridge-agent.service \                    (CTL)
    +neutron-dhcp-agent.service neutron-metadata-agent.service \
    +neutron-l3-agent.service
    +
    +systemctl restart openstack-nova-api.service neutron-server.service \                         (CTL)
    +neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
    +neutron-metadata-agent.service neutron-l3-agent.service
    +
    +systemctl enable neutron-linuxbridge-agent.service                                             (CPT)
    +systemctl restart neutron-linuxbridge-agent.service openstack-nova-compute.service             (CPT)
    +
  14. +
  15. +

    验证

    +

    验证 neutron 代理启动成功:

    +
    openstack network agent list
    +
  16. +
+

Cinder 安装

+
    +
  1. +

    创建数据库、服务凭证和 API 端点

    +

    创建数据库:

    +
    mysql -u root -p
    +
    +MariaDB [(none)]> CREATE DATABASE cinder;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \
    +IDENTIFIED BY 'CINDER_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \
    +IDENTIFIED BY 'CINDER_DBPASS';
    +MariaDB [(none)]> exit
    +

    注意

    +

    替换 CINDER_DBPASS 为cinder数据库设置密码。

    +
    source ~/.admin-openrc
    +

    创建cinder服务凭证:

    +
    openstack user create --domain default --password-prompt cinder
    +openstack role add --project service --user cinder admin
    +openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
    +openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
    +

    创建块存储服务API端点:

    +
    openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(project_id\)s
    +openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(project_id\)s
    +openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(project_id\)s
    +openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s
    +openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s
    +openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s
    +
  2. +
  3. +

    安装软件包:

    +
    yum install openstack-cinder-api openstack-cinder-scheduler                                    (CTL)
    +
    yum install lvm2 device-mapper-persistent-data scsi-target-utils rpcbind nfs-utils \           (STG)
    +            openstack-cinder-volume openstack-cinder-backup
    +
  4. +
  5. +

    准备存储设备,以下仅为示例:

    +
    pvcreate /dev/vdb
    +vgcreate cinder-volumes /dev/vdb
    +
    +vim /etc/lvm/lvm.conf
    +
    +
    +devices {
    +...
    +filter = [ "a/vdb/", "r/.*/"]
    +

    解释

    +

    在devices部分,添加过滤以接受/dev/vdb设备拒绝其他设备。

    +
  6. +
  7. +

    准备NFS

    +
    mkdir -p /root/cinder/backup
    +
    +cat << EOF >> /etc/exports
    +/root/cinder/backup 192.168.1.0/24(rw,sync,no_root_squash,no_all_squash)
    +EOF
    +
    +
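    补充:后续启动 nfs-server 服务后(见下文“启动cinder服务”步骤),可通过如下命令重新导出并确认共享目录已生效(仅为示例):

    ```shell
    # 重新导出 /etc/exports 中定义的共享目录
    exportfs -r
    # 查看当前导出结果,确认 /root/cinder/backup 已导出
    exportfs -v
    ```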
  8. +
  9. +

    配置cinder相关配置:

    +
    vim /etc/cinder/cinder.conf
    +
    +[DEFAULT]
    +transport_url = rabbit://openstack:RABBIT_PASS@controller
    +auth_strategy = keystone
    +my_ip = 10.0.0.11
    +enabled_backends = lvm                                                                         (STG)
    +backup_driver=cinder.backup.drivers.nfs.NFSBackupDriver                                        (STG)
    +backup_share=HOST:PATH                                                                         (STG)
    +
    +[database]
    +connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder
    +
    +[keystone_authtoken]
    +www_authenticate_uri = http://controller:5000
    +auth_url = http://controller:5000
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +project_name = service
    +username = cinder
    +password = CINDER_PASS
    +
    +[oslo_concurrency]
    +lock_path = /var/lib/cinder/tmp
    +
    +[lvm]
    +volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver                                      (STG)
    +volume_group = cinder-volumes                                                                  (STG)
    +iscsi_protocol = iscsi                                                                         (STG)
    +iscsi_helper = tgtadm                                                                          (STG)
    +

    解释

    +

    [database]部分,配置数据库入口;

    +

    [DEFAULT]部分,配置RabbitMQ消息队列入口,配置my_ip;

    +

    [DEFAULT] [keystone_authtoken]部分,配置身份认证服务入口;

    +

    [oslo_concurrency]部分,配置lock path。

    +

    注意

    +

    替换CINDER_DBPASS为 cinder 数据库的密码;

    +

    替换RABBIT_PASS为 RabbitMQ 中 openstack 账户的密码;

    +

    配置my_ip为控制节点的管理 IP 地址;

    +

    替换CINDER_PASS为 cinder 用户的密码;

    +

    替换HOST:PATH为 NFS 的HOSTIP和共享路径;

    +
  10. +
  11. +

    同步数据库:

    +
    su -s /bin/sh -c "cinder-manage db sync" cinder                                                (CTL)
    +
  12. +
  13. +

    配置nova:

    +
    vim /etc/nova/nova.conf                                                                        (CTL)
    +
    +[cinder]
    +os_region_name = RegionOne
    +
  14. +
  15. +

    重启计算API服务

    +
    systemctl restart openstack-nova-api.service
    +
  16. +
  17. +

    启动cinder服务

    +
    systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service               (CTL)
    +systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service                (CTL)
    +
    systemctl enable rpcbind.service nfs-server.service tgtd.service iscsid.service \              (STG)
    +                 openstack-cinder-volume.service \
    +                 openstack-cinder-backup.service
    +systemctl start rpcbind.service nfs-server.service tgtd.service iscsid.service \               (STG)
    +                openstack-cinder-volume.service \
    +                openstack-cinder-backup.service
    +

    注意

    +

    当cinder使用tgtadm的方式挂卷的时候,要修改/etc/tgt/tgtd.conf,内容如下,保证tgtd可以发现cinder-volume的iscsi target。

    +
    include /var/lib/cinder/volumes/*
    +
  18. +
  19. +

    验证

    +
    source ~/.admin-openrc
    +openstack volume service list
    +
  20. +
+

horizon 安装

+
    +
  1. +

    安装软件包

    +
    yum install openstack-dashboard
    +
  2. +
  3. +

    修改文件

    +

    修改变量

    +
    vim /etc/openstack-dashboard/local_settings
    +
    +OPENSTACK_HOST = "controller"
    +ALLOWED_HOSTS = ['*', ]
    +
    +SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
    +
    +CACHES = {
    +'default': {
    +     'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
    +     'LOCATION': 'controller:11211',
    +    }
    +}
    +
    +OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
    +OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
    +OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
    +OPENSTACK_KEYSTONE_DEFAULT_ROLE = "member"
    +WEBROOT = '/dashboard'
    +POLICY_FILES_PATH = "/etc/openstack-dashboard"
    +
    +OPENSTACK_API_VERSIONS = {
    +    "identity": 3,
    +    "image": 2,
    +    "volume": 3,
    +}
    +
  4. +
  5. +

    重启 httpd 服务

    +
    systemctl restart httpd.service memcached.service
    +
  6. +
  7. +

    验证:打开浏览器,输入网址http://HOSTIP/dashboard/,登录 horizon。

    +

    注意

    +

    替换HOSTIP为控制节点管理平面IP地址

    +
  8. +
+

Tempest 安装

+

Tempest是OpenStack的集成测试服务,如果用户需要全面自动化测试已安装的OpenStack环境的功能,则推荐使用该组件。否则,可以不用安装。

+
    +
  1. +

    安装Tempest

    +
    yum install openstack-tempest
    +
  2. +
  3. +

    初始化目录

    +
    tempest init mytest
    +
  4. +
  5. +

    修改配置文件。

    +
    cd mytest
    +vi etc/tempest.conf
    +

    tempest.conf中需要配置当前OpenStack环境的信息,具体内容可以参考官方示例

    +
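    下面给出一个最小化的 tempest.conf 配置片段作为参考(仅为示例,其中地址、密码以及 IMAGE_UUID、FLAVOR_ID、PUBLIC_NET_UUID 等均为假设的占位符,需按实际环境替换):

    ```ini
    [auth]
    admin_username = admin
    admin_password = ADMIN_PASS
    admin_project_name = admin
    admin_domain_name = Default

    [identity]
    uri_v3 = http://controller:5000/v3
    auth_version = v3

    [compute]
    image_ref = IMAGE_UUID
    flavor_ref = FLAVOR_ID

    [network]
    public_network_id = PUBLIC_NET_UUID
    ```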
  6. +
  7. +

    执行测试

    +
    tempest run
    +
  8. +
  9. +

    安装tempest扩展(可选)

    OpenStack各个服务本身也提供了一些tempest测试包,用户可以安装这些包来丰富tempest的测试内容。在Wallaby中,我们提供了Cinder、Glance、Keystone、Ironic、Trove的扩展测试,用户可以执行如下命令进行安装使用:

    yum install python3-cinder-tempest-plugin python3-glance-tempest-plugin python3-ironic-tempest-plugin python3-keystone-tempest-plugin python3-trove-tempest-plugin

    +
  10. +
+

Ironic 安装

+

Ironic是OpenStack的裸金属服务,如果用户需要进行裸机部署则推荐使用该组件。否则,可以不用安装。

+
    +
  1. 设置数据库
  2. +
+

裸金属服务在数据库中存储信息,创建一个ironic用户可以访问的ironic数据库,替换IRONIC_DBPASSWORD为合适的密码

+
mysql -u root -p
+
+MariaDB [(none)]> CREATE DATABASE ironic CHARACTER SET utf8;
+MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'localhost' \
+IDENTIFIED BY 'IRONIC_DBPASSWORD';
+MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'%' \
+IDENTIFIED BY 'IRONIC_DBPASSWORD';
+
    +
  1. 创建服务用户认证
  2. +
+

1、创建Bare Metal服务用户

+
openstack user create --password IRONIC_PASSWORD \
+                      --email ironic@example.com ironic
+openstack role add --project service --user ironic admin
+openstack service create --name ironic \
+                         --description "Ironic baremetal provisioning service" baremetal
+
+openstack service create --name ironic-inspector --description     "Ironic inspector baremetal provisioning service" baremetal-introspection
+openstack user create --password IRONIC_INSPECTOR_PASSWORD --email ironic_inspector@example.com ironic_inspector
+openstack role add --project service --user ironic-inspector admin
+

2、创建Bare Metal服务访问入口

+
openstack endpoint create --region RegionOne baremetal admin http://$IRONIC_NODE:6385
+openstack endpoint create --region RegionOne baremetal public http://$IRONIC_NODE:6385
+openstack endpoint create --region RegionOne baremetal internal http://$IRONIC_NODE:6385
+openstack endpoint create --region RegionOne baremetal-introspection internal http://172.20.19.13:5050/v1
+openstack endpoint create --region RegionOne baremetal-introspection public http://172.20.19.13:5050/v1
+openstack endpoint create --region RegionOne baremetal-introspection admin http://172.20.19.13:5050/v1
+
    +
  1. 配置ironic-api服务
  2. +
+

配置文件路径/etc/ironic/ironic.conf

+

1、通过connection选项配置数据库的位置,如下所示,替换IRONIC_DBPASSWORD为ironic用户的密码,替换DB_IP为DB服务器所在的IP地址:

+
[database]
+
+# The SQLAlchemy connection string used to connect to the
+# database (string value)
+
+connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic
+

2、通过以下选项配置ironic-api服务使用RabbitMQ消息代理,替换RPC_*为RabbitMQ的详细地址和凭证

+
[DEFAULT]
+
+# A URL representing the messaging driver to use and its full
+# configuration. (string value)
+
+transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
+

用户也可自行使用json-rpc方式替换rabbitmq

+
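若选择 json-rpc,可参考如下配置片段(仅为示意,具体选项以所用 Ironic 版本的配置说明为准,CONDUCTOR_HOST_IP 为假设的占位符):

```ini
[DEFAULT]
# 使用 json-rpc 代替消息队列作为 ironic-api 与 ironic-conductor 间的 RPC 通道
rpc_transport = json-rpc

[json_rpc]
auth_strategy = keystone
# conductor 节点上 json-rpc 服务的监听地址与端口
host_ip = CONDUCTOR_HOST_IP
port = 8089
```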

3、配置ironic-api服务使用身份认证服务的凭证,替换PUBLIC_IDENTITY_IP为身份认证服务器的公共IP,替换PRIVATE_IDENTITY_IP为身份认证服务器的私有IP,替换IRONIC_PASSWORD为身份认证服务中ironic用户的密码:

+
[DEFAULT]
+
+# Authentication strategy used by ironic-api: one of
+# "keystone" or "noauth". "noauth" should not be used in a
+# production environment because all authentication will be
+# disabled. (string value)
+
+auth_strategy=keystone
+host = controller
+memcache_servers = controller:11211
+enabled_network_interfaces = flat,noop,neutron
+default_network_interface = noop
+transport_url = rabbit://openstack:RABBITPASSWD@controller:5672/
+enabled_hardware_types = ipmi
+enabled_boot_interfaces = pxe
+enabled_deploy_interfaces = direct
+default_deploy_interface = direct
+enabled_inspect_interfaces = inspector
+enabled_management_interfaces = ipmitool
+enabled_power_interfaces = ipmitool
+enabled_rescue_interfaces = no-rescue,agent
+isolinux_bin = /usr/share/syslinux/isolinux.bin
+logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s
+
+[keystone_authtoken]
+# Authentication type to load (string value)
+auth_type=password
+# Complete public Identity API endpoint (string value)
+www_authenticate_uri=http://PUBLIC_IDENTITY_IP:5000
+# Complete admin Identity API endpoint. (string value)
+auth_url=http://PRIVATE_IDENTITY_IP:5000
+# Service username. (string value)
+username=ironic
+# Service account password. (string value)
+password=IRONIC_PASSWORD
+# Service tenant name. (string value)
+project_name=service
+# Domain name containing project (string value)
+project_domain_name=Default
+# User's domain name (string value)
+user_domain_name=Default
+
+[agent]
+deploy_logs_collect = always
+deploy_logs_local_path = /var/log/ironic/deploy
+deploy_logs_storage_backend = local
+image_download_source = http
+stream_raw_images = false
+force_raw_images = false
+verify_ca = False
+
+[oslo_concurrency]
+
+[oslo_messaging_notifications]
+transport_url = rabbit://openstack:123456@172.20.19.25:5672/
+topics = notifications
+driver = messagingv2
+
+[oslo_messaging_rabbit]
+amqp_durable_queues = True
+rabbit_ha_queues = True
+
+[pxe]
+ipxe_enabled = false
+pxe_append_params = nofb nomodeset vga=normal coreos.autologin ipa-insecure=1
+image_cache_size = 204800
+tftp_root=/var/lib/tftpboot/cephfs/
+tftp_master_path=/var/lib/tftpboot/cephfs/master_images
+
+[dhcp]
+dhcp_provider = none
+

4、创建裸金属服务数据库表

+
ironic-dbsync --config-file /etc/ironic/ironic.conf create_schema
+

5、重启ironic-api服务

+
sudo systemctl restart openstack-ironic-api
+
    +
  1. 配置ironic-conductor服务
  2. +
+

1、替换HOST_IP为conductor host的IP

+
[DEFAULT]
+
+# IP address of this host. If unset, will determine the IP
+# programmatically. If unable to do so, will use "127.0.0.1".
+# (string value)
+
+my_ip=HOST_IP
+

2、配置数据库的位置,ironic-conductor应该使用和ironic-api相同的配置。替换IRONIC_DBPASSWORD为ironic用户的密码,替换DB_IP为DB服务器所在的IP地址:

+
[database]
+
+# The SQLAlchemy connection string to use to connect to the
+# database. (string value)
+
+connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic
+

3、通过以下选项配置ironic-api服务使用RabbitMQ消息代理,ironic-conductor应该使用和ironic-api相同的配置,替换RPC_*为RabbitMQ的详细地址和凭证

+
[DEFAULT]
+
+# A URL representing the messaging driver to use and its full
+# configuration. (string value)
+
+transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
+

用户也可自行使用json-rpc方式替换rabbitmq

+

4、配置凭证访问其他OpenStack服务

+

为了与其他OpenStack服务进行通信,裸金属服务在请求其他服务时需要使用服务用户与OpenStack Identity服务进行认证。这些用户的凭据必须在与相应服务相关的每个配置文件中进行配置。

+
[neutron] - 访问OpenStack网络服务
+[glance] - 访问OpenStack镜像服务
+[swift] - 访问OpenStack对象存储服务
+[cinder] - 访问OpenStack块存储服务
+[inspector] - 访问OpenStack裸金属introspection服务
+[service_catalog] - 一个特殊项用于保存裸金属服务使用的凭证,该凭证用于发现注册在OpenStack身份认证服务目录中的自己的API URL端点
+

简单起见,可以对所有服务使用同一个服务用户。为了向后兼容,该用户应该和ironic-api服务的[keystone_authtoken]所配置的为同一个用户。但这不是必须的,也可以为每个服务创建并配置不同的服务用户。

+

在下面的示例中,用户访问OpenStack网络服务的身份验证信息配置为:

+
网络服务部署在名为RegionOne的身份认证服务域中,仅在服务目录中注册公共端点接口
+
+请求时使用特定的CA SSL证书进行HTTPS连接
+
+与ironic-api服务配置相同的服务用户
+
+动态密码认证插件基于其他选项发现合适的身份认证服务API版本
+
[neutron]
+
+# Authentication type to load (string value)
+auth_type = password
+# Authentication URL (string value)
+auth_url=https://IDENTITY_IP:5000/
+# Username (string value)
+username=ironic
+# User's password (string value)
+password=IRONIC_PASSWORD
+# Project name to scope to (string value)
+project_name=service
+# Domain ID containing project (string value)
+project_domain_id=default
+# User's domain id (string value)
+user_domain_id=default
+# PEM encoded Certificate Authority to use when verifying
+# HTTPs connections. (string value)
+cafile=/opt/stack/data/ca-bundle.pem
+# The default region_name for endpoint URL discovery. (string
+# value)
+region_name = RegionOne
+# List of interfaces, in order of preference, for endpoint
+# URL. (list value)
+valid_interfaces=public
+

默认情况下,为了与其他服务进行通信,裸金属服务会尝试通过身份认证服务的服务目录发现该服务合适的端点。如果希望对一个特定服务使用一个不同的端点,则在裸金属服务的配置文件中通过endpoint_override选项进行指定:

+
[neutron]
+...
+endpoint_override = <NEUTRON_API_ADDRESS>
+

5、配置允许的驱动程序和硬件类型

+

通过设置enabled_hardware_types设置ironic-conductor服务允许使用的硬件类型:

+
[DEFAULT]
+enabled_hardware_types = ipmi
+

配置硬件接口:

+
enabled_boot_interfaces = pxe
+enabled_deploy_interfaces = direct,iscsi
+enabled_inspect_interfaces = inspector
+enabled_management_interfaces = ipmitool
+enabled_power_interfaces = ipmitool
+

配置接口默认值:

+
[DEFAULT]
+default_deploy_interface = direct
+default_network_interface = neutron
+

如果启用了任何使用Direct deploy的驱动,必须安装和配置镜像服务的Swift后端。Ceph对象网关(RADOS网关)也支持作为镜像服务的后端。

+
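作为参考,下面给出一个将 Swift 配置为 Glance 后端的 glance-api.conf 配置示意(仅为示例,选项名称以实际使用的 glance_store 版本文档为准,`ref1` 等引用名为假设值):

```ini
[glance_store]
# 启用并默认使用 swift 后端存储镜像
stores = file,http,swift
default_store = swift
# swift 认证信息的引用名,具体凭证在 swift_store_config_file 中以 [ref1] 段定义
default_swift_reference = ref1
swift_store_config_file = /etc/glance/glance-swift.conf
swift_store_create_container_on_put = True
swift_store_container = glance
```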

6、重启ironic-conductor服务

+
sudo systemctl restart openstack-ironic-conductor
+
    +
  1. 配置ironic-inspector服务
  2. +
+

配置文件路径/etc/ironic-inspector/inspector.conf

+

1、创建数据库

+
# mysql -u root -p
+
+MariaDB [(none)]> CREATE DATABASE ironic_inspector CHARACTER SET utf8;
+
+MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic_inspector.* TO 'ironic_inspector'@'localhost' \     IDENTIFIED BY 'IRONIC_INSPECTOR_DBPASSWORD';
+MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic_inspector.* TO 'ironic_inspector'@'%' \
+IDENTIFIED BY 'IRONIC_INSPECTOR_DBPASSWORD';
+

2、通过connection选项配置数据库的位置,如下所示,替换IRONIC_INSPECTOR_DBPASSWORD为ironic_inspector用户的密码,替换DB_IP为DB服务器所在的IP地址:

+
[database]
+backend = sqlalchemy
+connection = mysql+pymysql://ironic_inspector:IRONIC_INSPECTOR_DBPASSWORD@DB_IP/ironic_inspector
+min_pool_size = 100
+max_pool_size = 500
+pool_timeout = 30
+max_retries = 5
+max_overflow = 200
+db_retry_interval = 2
+db_inc_retry_interval = True
+db_max_retry_interval = 2
+db_max_retries = 5
+

3、配置消息队列通信地址

+
[DEFAULT] 
+transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
+
+

4、设置keystone认证

+
[DEFAULT]
+
+auth_strategy = keystone
+timeout = 900
+rootwrap_config = /etc/ironic-inspector/rootwrap.conf
+logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s
+log_dir = /var/log/ironic-inspector
+state_path = /var/lib/ironic-inspector
+use_stderr = False
+
+[ironic]
+api_endpoint = http://IRONIC_API_HOST_ADDRESS:6385
+auth_type = password
+auth_url = http://PUBLIC_IDENTITY_IP:5000
+auth_strategy = keystone
+ironic_url = http://IRONIC_API_HOST_ADDRESS:6385
+os_region = RegionOne
+project_name = service
+project_domain_name = Default
+user_domain_name = Default
+username = IRONIC_SERVICE_USER_NAME
+password = IRONIC_SERVICE_USER_PASSWORD
+
+[keystone_authtoken]
+auth_type = password
+auth_url = http://control:5000
+www_authenticate_uri = http://control:5000
+project_domain_name = default
+user_domain_name = default
+project_name = service
+username = ironic_inspector
+password = IRONICPASSWD
+region_name = RegionOne
+memcache_servers = control:11211
+token_cache_time = 300
+
+[processing]
+add_ports = active
+processing_hooks = $default_processing_hooks,local_link_connection,lldp_basic
+ramdisk_logs_dir = /var/log/ironic-inspector/ramdisk
+always_store_ramdisk_logs = true
+store_data =none
+power_off = false
+
+[pxe_filter]
+driver = iptables
+
+[capabilities]
+boot_mode=True
+

5、配置ironic inspector dnsmasq服务

+
# 配置文件地址:/etc/ironic-inspector/dnsmasq.conf
+port=0
+interface=enp3s0                         #替换为实际监听网络接口
+dhcp-range=172.20.19.100,172.20.19.110   #替换为实际dhcp地址范围
+bind-interfaces
+enable-tftp
+
+dhcp-match=set:efi,option:client-arch,7
+dhcp-match=set:efi,option:client-arch,9
+dhcp-match=aarch64, option:client-arch,11
+dhcp-boot=tag:aarch64,grubaa64.efi
+dhcp-boot=tag:!aarch64,tag:efi,grubx64.efi
+dhcp-boot=tag:!aarch64,tag:!efi,pxelinux.0
+
+tftp-root=/tftpboot                       #替换为实际tftpboot目录
+log-facility=/var/log/dnsmasq.log
+

6、关闭ironic provision网络子网的dhcp

+
openstack subnet set --no-dhcp 72426e89-f552-4dc4-9ac7-c4e131ce7f3c
+

7、初始化ironic-inspector服务的数据库

+

在控制节点执行:

+
ironic-inspector-dbsync --config-file /etc/ironic-inspector/inspector.conf upgrade
+

8、启动服务

+
systemctl enable --now openstack-ironic-inspector.service
+systemctl enable --now openstack-ironic-inspector-dnsmasq.service
+
    +
  1. +

    配置httpd服务

    +
  2. +
  3. +

    创建ironic要使用的httpd的root目录并设置属主属组,目录路径要和/etc/ironic/ironic.conf中[deploy]组里http_root配置项指定的路径一致。

    +
    mkdir -p /var/lib/ironic/httproot
    +chown ironic.ironic /var/lib/ironic/httproot
    +
  4. +
  5. +

    安装和配置httpd服务

    +
      +
    1. +

      安装httpd服务,已有请忽略

      +
      yum install httpd -y
      +
    2. +
    3. +

      创建/etc/httpd/conf.d/openstack-ironic-httpd.conf文件,内容如下:

      +
      Listen 8080
      +
      +<VirtualHost *:8080>
      +    ServerName ironic.openeuler.com
      +
      +    ErrorLog "/var/log/httpd/openstack-ironic-httpd-error_log"
      +    CustomLog "/var/log/httpd/openstack-ironic-httpd-access_log" "%h %l %u %t \"%r\" %>s %b"
      +
      +    DocumentRoot "/var/lib/ironic/httproot"
      +    <Directory "/var/lib/ironic/httproot">
      +        Options Indexes FollowSymLinks
      +        Require all granted
      +    </Directory>
      +    LogLevel warn
      +    AddDefaultCharset UTF-8
      +    EnableSendfile on
      +</VirtualHost>
      +
      +

      注意监听的端口要和/etc/ironic/ironic.conf里[deploy]选项中http_url配置项中指定的端口一致。

      +
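      作为参考,/etc/ironic/ironic.conf 中 [deploy] 部分的相关配置大致如下(IRONIC_NODE_IP 为假设的占位符,需替换为 ironic 节点的实际地址):

      ```ini
      [deploy]
      # 与上面 httpd 的 DocumentRoot 保持一致
      http_root = /var/lib/ironic/httproot
      # 端口需与 httpd 监听的端口(本例为 8080)保持一致
      http_url = http://IRONIC_NODE_IP:8080
      ```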
    4. +
    5. +

      重启httpd服务。

      +
      systemctl restart httpd
      +
    6. +
    +
  6. +
  7. +

    deploy ramdisk镜像制作

    +
  8. +
+

W版的ramdisk镜像支持通过ironic-python-agent服务或disk-image-builder工具制作,也可以使用社区最新的ironic-python-agent-builder,用户也可以自行选择其他工具制作。若使用W版原生工具,则需要安装对应的软件包。

+
yum install openstack-ironic-python-agent
+或者
+yum install diskimage-builder
+

具体的使用方法可以参考官方文档

+

这里介绍下使用ironic-python-agent-builder构建ironic使用的deploy镜像的完整过程。

+
    +
  1. +

    安装 ironic-python-agent-builder

    +
    1. 安装工具:
    +
    +    ```shell
    +    pip install ironic-python-agent-builder
    +    ```
    +
    +2. 修改以下文件中的python解释器:
    +
    +    ```shell
    +    /usr/bin/yum /usr/libexec/urlgrabber-ext-down
    +    ```
    +
    +3. 安装其它必须的工具:
    +
    +    ```shell
    +    yum install git
    +    ```
    +
    +    由于`DIB`依赖`semanage`命令,所以在制作镜像之前确定该命令是否可用:`semanage --help`,如果提示无此命令,安装即可:
    +
    +    ```shell
    +    # 先查询需要安装哪个包
    +    [root@localhost ~]# yum provides /usr/sbin/semanage
    +    已加载插件:fastestmirror
    +    Loading mirror speeds from cached hostfile
    +    * base: mirror.vcu.edu
    +    * extras: mirror.vcu.edu
    +    * updates: mirror.math.princeton.edu
    +    policycoreutils-python-2.5-34.el7.aarch64 : SELinux policy core python utilities
    +    源    :base
    +    匹配来源:
    +    文件名    :/usr/sbin/semanage
    +    # 安装
    +    [root@localhost ~]# yum install policycoreutils-python
    +    ```
    +
  2. +
  3. +

    制作镜像

    +
    如果是`arm`架构,需要添加:
    +```shell
    +export ARCH=aarch64
    +```
    +
    +基本用法:
    +
    +```shell
    +usage: ironic-python-agent-builder [-h] [-r RELEASE] [-o OUTPUT] [-e ELEMENT]
    +                                    [-b BRANCH] [-v] [--extra-args EXTRA_ARGS]
    +                                    distribution
    +
    +positional arguments:
    +    distribution          Distribution to use
    +
    +optional arguments:
    +    -h, --help            show this help message and exit
    +    -r RELEASE, --release RELEASE
    +                        Distribution release to use
    +    -o OUTPUT, --output OUTPUT
    +                        Output base file name
    +    -e ELEMENT, --element ELEMENT
    +                        Additional DIB element to use
    +    -b BRANCH, --branch BRANCH
    +                        If set, override the branch that is used for ironic-
    +                        python-agent and requirements
    +    -v, --verbose         Enable verbose logging in diskimage-builder
    +    --extra-args EXTRA_ARGS
    +                        Extra arguments to pass to diskimage-builder
    +```
    +
    +举例说明:
    +
    +```shell
    +ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky
    +```
    +
  4. +
  5. +

    允许ssh登录

    +
    初始化环境变量,然后制作镜像:
    +
    +```shell
    +export DIB_DEV_USER_USERNAME=ipa \
    +export DIB_DEV_USER_PWDLESS_SUDO=yes \
    +export DIB_DEV_USER_PASSWORD='123'
    +ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky -e selinux-permissive -e devuser
    +```
    +
  6. +
  7. +

    指定代码仓库

    +
    初始化对应的环境变量,然后制作镜像:
    +
    +```shell
    +# 指定仓库地址以及版本
    +DIB_REPOLOCATION_ironic_python_agent=git@172.20.2.149:liuzz/ironic-python-agent.git
    +DIB_REPOREF_ironic_python_agent=origin/develop
    +
    +# 直接从gerrit上clone代码
    +DIB_REPOLOCATION_ironic_python_agent=https://review.opendev.org/openstack/ironic-python-agent
    +DIB_REPOREF_ironic_python_agent=refs/changes/43/701043/1
    +```
    +
    +参考:[source-repositories](https://docs.openstack.org/diskimage-builder/latest/elements/source-repositories/README.html)。
    +
    +指定仓库地址及版本验证成功。
    +
  8. +
  9. +

    注意

    +
    原生的openstack里的pxe配置文件的模版不支持arm64架构,需要自己对原生openstack代码进行修改:
    +
    +在W版中,社区的ironic仍然不支持arm64位的uefi pxe启动,表现为生成的grub.cfg文件(一般位于/tftpboot/下)格式不对而导致pxe启动失败,如下:
    +
    +生成的错误配置文件:
    +
    +![ironic-err](../../img/install/ironic-err.png)
    +
    +如上图所示,arm架构里寻找vmlinux和ramdisk镜像的命令分别是linux和initrd,上图所示的标红命令是x86架构下的uefi pxe启动。
    +
    +需要用户对生成grub.cfg的代码逻辑自行修改。
    +
    +ironic向ipa发送查询命令执行状态请求的tls报错:
    +
    +w版的ipa和ironic默认都会开启tls认证的方式向对方发送请求,跟据官网的说明进行关闭即可。
    +
    +1. 修改ironic配置文件(/etc/ironic/ironic.conf)下面的配置中添加ipa-insecure=1:
    +
    +```
    +[agent]
    +verify_ca = False
    +
    +[pxe]
    +pxe_append_params = nofb nomodeset vga=normal coreos.autologin ipa-insecure=1
    +```
    +
    +2. 在ramdisk镜像中添加ipa配置文件/etc/ironic_python_agent/ironic_python_agent.conf,并添加如下tls相关配置:
    +
    +/etc/ironic_python_agent/ironic_python_agent.conf (需要提前创建/etc/ironic_python_agent目录)
    +
    +```
    +[DEFAULT]
    +enable_auto_tls = False
    +```
    +
    +设置权限:
    +
    +```
    +chown -R ipa.ipa /etc/ironic_python_agent/
    +```
    +
    +3. 修改ipa服务的服务启动文件,添加配置文件选项
    +
    +vim /usr/lib/systemd/system/ironic-python-agent.service
    +
    +```
    +[Unit]
    +Description=Ironic Python Agent
    +After=network-online.target
    +
    +[Service]
    +ExecStartPre=/sbin/modprobe vfat
    +ExecStart=/usr/local/bin/ironic-python-agent --config-file /etc/ironic_python_agent/ironic_python_agent.conf
    +Restart=always
    +RestartSec=30s
    +
    +[Install]
    +WantedBy=multi-user.target
    +```
    +
  10. +
+

Kolla 安装

+

Kolla为OpenStack服务提供生产环境可用的容器化部署的功能。openEuler 22.03 LTS中引入了Kolla和Kolla-ansible服务。

+

Kolla的安装十分简单,只需要安装对应的RPM包即可

+
yum install openstack-kolla openstack-kolla-ansible
+

安装完后,就可以使用kolla-ansible, kolla-build, kolla-genpwd, kolla-mergepwd等命令了。

+
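例如,可以按如下方式生成密码文件并执行部署(仅为示意,假设已按 Kolla-ansible 官方文档准备好 /etc/kolla/globals.yml 及 inventory 文件 multinode):

```shell
# 为 /etc/kolla/passwords.yml 生成随机密码
kolla-genpwd
# 部署前环境检查
kolla-ansible -i ./multinode prechecks
# 执行部署
kolla-ansible -i ./multinode deploy
```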

Trove 安装

+

Trove是OpenStack的数据库服务,如果用户使用OpenStack提供的数据库服务则推荐使用该组件。否则,可以不用安装。

+
    +
  1. 设置数据库
  2. +
+

数据库服务在数据库中存储信息,创建一个trove用户可以访问的trove数据库,替换TROVE_DBPASSWORD为合适的密码

+
mysql -u root -p
+
+MariaDB [(none)]> CREATE DATABASE trove CHARACTER SET utf8;
+MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'localhost' \
+IDENTIFIED BY 'TROVE_DBPASSWORD';
+MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'%' \
+IDENTIFIED BY 'TROVE_DBPASSWORD';
+
    +
  1. 创建服务用户认证
  2. +
+

1、创建Trove服务用户

+

openstack user create --password TROVE_PASSWORD \
+                      --email trove@example.com trove
+openstack role add --project service --user trove admin
+openstack service create --name trove \
+                         --description "Database service" database
+
解释: TROVE_PASSWORD 替换为trove用户的密码

+

2、创建Database服务访问入口

+
openstack endpoint create --region RegionOne database public http://controller:8779/v1.0/%\(tenant_id\)s
+openstack endpoint create --region RegionOne database internal http://controller:8779/v1.0/%\(tenant_id\)s
+openstack endpoint create --region RegionOne database admin http://controller:8779/v1.0/%\(tenant_id\)s
+
    +
  1. 安装和配置Trove各组件
  2. +
+

1、安装Trove包

+```shell script
+yum install openstack-trove python-troveclient
+```

2. 配置`trove.conf`
+```shell script
+vim /etc/trove/trove.conf
+
+[DEFAULT]
+bind_host=TROVE_NODE_IP
+log_dir = /var/log/trove
+network_driver = trove.network.neutron.NeutronDriver
+management_security_groups = <manage security group>
+nova_keypair = trove-mgmt
+default_datastore = mysql
+taskmanager_manager = trove.taskmanager.manager.Manager
+trove_api_workers = 5
+transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
+reboot_time_out = 300
+usage_timeout = 900
+agent_call_high_timeout = 1200
+use_syslog = False
+debug = True
+
+# Set these if using Neutron Networking
+network_driver=trove.network.neutron.NeutronDriver
+network_label_regex=.*
+
+
+transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
+
+[database]
+connection = mysql+pymysql://trove:TROVE_DBPASS@controller/trove
+
+[keystone_authtoken]
+project_domain_name = Default
+project_name = service
+user_domain_name = Default
+password = trove
+username = trove
+auth_url = http://controller:5000/v3/
+auth_type = password
+
+[service_credentials]
+auth_url = http://controller:5000/v3/
+region_name = RegionOne
+project_name = service
+password = trove
+project_domain_name = Default
+user_domain_name = Default
+username = trove
+
+[mariadb]
+tcp_ports = 3306,4444,4567,4568
+
+[mysql]
+tcp_ports = 3306
+
+[postgresql]
+tcp_ports = 5432
+```
+
+**解释:**
+- [DEFAULT]分组中bind_host配置为Trove部署节点的IP
+- nova_compute_url、cinder_url为Nova和Cinder在Keystone中创建的endpoint
+- nova_proxy_XXX为一个能访问Nova服务的用户信息,上例中使用admin用户为例
+- transport_url为RabbitMQ连接信息,RABBIT_PASS替换为RabbitMQ的密码
+- [database]分组中的connection为前面在mysql中为Trove创建的数据库信息
+- Trove的用户信息中TROVE_PASS替换为实际trove用户的密码

+
    +
  1. 配置trove-guestagent.conf

     vim /etc/trove/trove-guestagent.conf
  2. +
+

[DEFAULT]
+log_file = trove-guestagent.log
+log_dir = /var/log/trove/
+ignore_users = os_admin
+control_exchange = trove
+transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
+rpc_backend = rabbit
+command_process_timeout = 60
+use_syslog = False
+debug = True

+

[service_credentials]
+auth_url = http://controller:5000/v3/
+region_name = RegionOne
+project_name = service
+password = TROVE_PASS
+project_domain_name = Default
+user_domain_name = Default
+username = trove

+

[mysql]
+docker_image = your-registry/your-repo/mysql
+backup_docker_image = your-registry/your-repo/db-backup-mysql:1.1.0

**解释:** `guestagent`是trove中一个独立组件,需要预先内置到Trove通过Nova创建的虚拟
+机镜像中,在创建好数据库实例后,会起guestagent进程,负责通过消息队列(RabbitMQ)向Trove上
+报心跳,因此需要配置RabbitMQ的用户和密码信息。
+**从Victoria版开始,Trove使用一个统一的镜像来跑不同类型的数据库,数据库服务运行在Guest虚拟机的Docker容器中。**
+- `transport_url` 为`RabbitMQ`连接信息,`RABBIT_PASS`替换为RabbitMQ的密码
+- Trove的用户信息中`TROVE_PASS`替换为实际trove用户的密码  
+
+6. 生成`Trove`数据库表
+```shell script
+su -s /bin/sh -c "trove-manage db_sync" trove
+```
+7. 完成安装配置
+1. 配置Trove服务自启动
+```shell script
+systemctl enable openstack-trove-api.service \
+openstack-trove-taskmanager.service \
+openstack-trove-conductor.service
+```
2. 启动服务
+```shell script
+systemctl start openstack-trove-api.service \
+openstack-trove-taskmanager.service \
+openstack-trove-conductor.service
+```

+

Swift 安装

+

Swift 提供了弹性可伸缩、高可用的分布式对象存储服务,适合存储大规模非结构化数据。

+
    +
  1. +

    创建服务凭证、API端点。

    +

    创建服务凭证

    +
    #创建swift用户:
    +openstack user create --domain default --password-prompt swift                 
    +#为swift用户添加admin角色:
    +openstack role add --project service --user swift admin                        
    +#创建swift服务实体:
    +openstack service create --name swift --description "OpenStack Object Storage" object-store                                                                   
    +

    创建swift API 端点:

    +
    openstack endpoint create --region RegionOne object-store public http://controller:8080/v1/AUTH_%\(project_id\)s                            
    +openstack endpoint create --region RegionOne object-store internal http://controller:8080/v1/AUTH_%\(project_id\)s                            
    +openstack endpoint create --region RegionOne object-store admin http://controller:8080/v1                                                  
    +
  2. +
  3. +

    安装软件包:

    +
    yum install openstack-swift-proxy python3-swiftclient python3-keystoneclient python3-keystonemiddleware memcached (CTL)
    +
  4. +
  5. +

    配置proxy-server相关配置

    +
  6. +
+

Swift RPM包里已经包含了一个基本可用的proxy-server.conf,只需要手动修改其中的ip和swift password即可。

+
***注意***
+
+**注意替换password为您在身份服务中为swift用户选择的密码**
+
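通常需要修改的主要是 proxy-server.conf 中的 [filter:authtoken] 部分,示例如下(仅为示意,地址与密码按实际环境替换):

```ini
# /etc/swift/proxy-server.conf
[filter:authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = swift
password = SWIFT_PASS
```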
    +
  1. +

    安装和配置存储节点 (STG)

    +

    安装支持的程序包:

    yum install xfsprogs rsync

    +

    将/dev/vdb和/dev/vdc设备格式化为 XFS

    +
    mkfs.xfs /dev/vdb
    +mkfs.xfs /dev/vdc
    +

    创建挂载点目录结构:

    +
    mkdir -p /srv/node/vdb
    +mkdir -p /srv/node/vdc
    +

    找到新分区的 UUID:

    +
    blkid
    +

    编辑/etc/fstab文件并将以下内容添加到其中:

    +
    UUID="<UUID-from-output-above>" /srv/node/vdb xfs noatime 0 2
    +UUID="<UUID-from-output-above>" /srv/node/vdc xfs noatime 0 2
    +

    挂载设备:

    +

    mount /srv/node/vdb
    +mount /srv/node/vdc
    +

    注意

    +

    如果用户不需要容灾功能,以上步骤只需要创建一个设备即可,同时可以跳过下面的rsync配置

    +

    (可选)创建或编辑/etc/rsyncd.conf文件以包含以下内容:

    +

    uid = swift
    +gid = swift
    +log file = /var/log/rsyncd.log
    +pid file = /var/run/rsyncd.pid
    +address = MANAGEMENT_INTERFACE_IP_ADDRESS
    +
    +[account]
    +max connections = 2
    +path = /srv/node/
    +read only = False
    +lock file = /var/lock/account.lock
    +
    +[container]
    +max connections = 2
    +path = /srv/node/
    +read only = False
    +lock file = /var/lock/container.lock
    +
    +[object]
    +max connections = 2
    +path = /srv/node/
    +read only = False
    +lock file = /var/lock/object.lock
    +

    替换MANAGEMENT_INTERFACE_IP_ADDRESS为存储节点上管理网络的IP地址

    +

    启动rsyncd服务并配置它在系统启动时启动:

    +
    systemctl enable rsyncd.service
    +systemctl start rsyncd.service
    +
  2. +
  3. +

    在存储节点安装和配置组件 (STG)

    +

    安装软件包:

    +
    yum install openstack-swift-account openstack-swift-container openstack-swift-object
    +

    编辑/etc/swift目录的account-server.conf、container-server.conf和object-server.conf文件,替换bind_ip为存储节点上管理网络的IP地址。

    +
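    以 account-server.conf 为例,需要修改的内容大致如下(container-server.conf、object-server.conf 的端口分别为 6201、6200):

    ```ini
    [DEFAULT]
    bind_ip = STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS
    bind_port = 6202
    ```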

    确保挂载点目录结构的正确所有权:

    +
    chown -R swift:swift /srv/node
    +

    创建recon目录并确保其拥有正确的所有权:

    +
    mkdir -p /var/cache/swift
    +chown -R root:swift /var/cache/swift
    +chmod -R 775 /var/cache/swift
    +
  4. +
  5. +

    创建账号环 (CTL)

    +

    切换到/etc/swift目录。

    +
    cd /etc/swift
    +

    创建基础account.builder文件:

    +
    swift-ring-builder account.builder create 10 1 1
    +

    将每个存储节点添加到环中:

    +
    swift-ring-builder account.builder add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6202  --device DEVICE_NAME --weight DEVICE_WEIGHT
    +

    替换STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS为存储节点上管理网络的IP地址。替换DEVICE_NAME为同一存储节点上的存储设备名称

    +

    注意:对每个存储节点上的每个存储设备重复此命令

    +

    验证环的内容:

    +
    swift-ring-builder account.builder
    +

    重新平衡环:

    +
    swift-ring-builder account.builder rebalance
    +
  6. +
  7. +

    创建容器环 (CTL)

    +

    切换到/etc/swift目录。

    +

    创建基础container.builder文件:

    +
       swift-ring-builder container.builder create 10 1 1
    +

    将每个存储节点添加到环中:

    +
    swift-ring-builder container.builder \
    +  add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6201 \
    +  --device DEVICE_NAME --weight 100
    +
    +

    替换STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS为存储节点上管理网络的IP地址。替换DEVICE_NAME为同一存储节点上的存储设备名称

    +

    注意:对每个存储节点上的每个存储设备重复此命令

    +

    验证环的内容:

    +
    swift-ring-builder container.builder
    +

    重新平衡环:

    +
    swift-ring-builder container.builder rebalance
    +
  8. +
  9. +

    创建对象环 (CTL)

    +

    切换到/etc/swift目录。

    +

    创建基础object.builder文件:

    +
    swift-ring-builder object.builder create 10 1 1
    +

    将每个存储节点添加到环中

    +
     swift-ring-builder object.builder \
    +  add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6200 \
    +  --device DEVICE_NAME --weight 100
    +

    替换STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS为存储节点上管理网络的IP地址。替换DEVICE_NAME为同一存储节点上的存储设备名称

    +

    注意:对每个存储节点上的每个存储设备重复此命令

    +

    验证环的内容:

    +
    swift-ring-builder object.builder
    +

    重新平衡环:

    +
    swift-ring-builder object.builder rebalance
    +

    分发环配置文件:

    +

    将account.ring.gz、container.ring.gz以及object.ring.gz文件复制到每个存储节点和运行代理服务的任何其他节点上的/etc/swift目录。

    +
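    例如,可以使用 scp 进行分发(STORAGE_NODE 为假设的存储节点地址,按实际环境替换并对每个节点重复执行):

    ```shell
    scp account.ring.gz container.ring.gz object.ring.gz root@STORAGE_NODE:/etc/swift/
    ```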
  10. +
  11. +

    完成安装

    +

    编辑/etc/swift/swift.conf文件

    +
    [swift-hash]
    +swift_hash_path_suffix = test-hash
    +swift_hash_path_prefix = test-hash
    +
    +[storage-policy:0]
    +name = Policy-0
    +default = yes
    +

    用唯一值替换 test-hash

    +

    将swift.conf文件复制到每个存储节点和运行代理服务的任何其他节点上的/etc/swift目录。

    +

    在所有节点上,确保配置目录的正确所有权:

    +
    chown -R root:swift /etc/swift
    +

    在控制器节点和运行代理服务的任何其他节点上,启动对象存储代理服务及其依赖项,并将它们配置为在系统启动时启动:

    +
    systemctl enable openstack-swift-proxy.service memcached.service
    +systemctl start openstack-swift-proxy.service memcached.service
    +

    在存储节点上,启动对象存储服务并将它们配置为在系统启动时启动:

    +
    systemctl enable openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service
    +
    +systemctl start openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service
    +
    +systemctl enable openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service
    +
    +systemctl start openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service
    +
    +systemctl enable openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service
    +
    +systemctl start openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service
    +

    Cyborg 安装

    +
  12. +
+

Cyborg为OpenStack提供加速器设备的支持,包括 GPU, FPGA, ASIC, NP, SoCs, NVMe/NOF SSDs, ODP, DPDK/SPDK等等。

+
    +
  1. 初始化对应数据库
  2. +
+
CREATE DATABASE cyborg;
+GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'localhost' IDENTIFIED BY 'CYBORG_DBPASS';
+GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'%' IDENTIFIED BY 'CYBORG_DBPASS';
+
    +
  1. 创建对应Keystone资源对象
  2. +
+
$ openstack user create --domain default --password-prompt cyborg
+$ openstack role add --project service --user cyborg admin
+$ openstack service create --name cyborg --description "Acceleration Service" accelerator
+
+$ openstack endpoint create --region RegionOne \
+  accelerator public http://<cyborg-ip>:6666/v1
+$ openstack endpoint create --region RegionOne \
+  accelerator internal http://<cyborg-ip>:6666/v1
+$ openstack endpoint create --region RegionOne \
+  accelerator admin http://<cyborg-ip>:6666/v1
+
    +
  1. 安装Cyborg
  2. +
+
yum install openstack-cyborg
+
    +
  1. 配置Cyborg
  2. +
+

修改/etc/cyborg/cyborg.conf

+
[DEFAULT]
+transport_url = rabbit://%RABBITMQ_USER%:%RABBITMQ_PASSWORD%@%OPENSTACK_HOST_IP%:5672/
+use_syslog = False
+state_path = /var/lib/cyborg
+debug = True
+
+[database]
+connection = mysql+pymysql://%DATABASE_USER%:%DATABASE_PASSWORD%@%OPENSTACK_HOST_IP%/cyborg
+
+[service_catalog]
+project_domain_id = default
+user_domain_id = default
+project_name = service
+password = PASSWORD
+username = cyborg
+auth_url = http://%OPENSTACK_HOST_IP%/identity
+auth_type = password
+
+[placement]
+project_domain_name = Default
+project_name = service
+user_domain_name = Default
+password = PASSWORD
+username = placement
+auth_url = http://%OPENSTACK_HOST_IP%/identity
+auth_type = password
+
+[keystone_authtoken]
+memcached_servers = localhost:11211
+project_domain_name = Default
+project_name = service
+user_domain_name = Default
+password = PASSWORD
+username = cyborg
+auth_url = http://%OPENSTACK_HOST_IP%/identity
+auth_type = password
+

自行修改对应的用户名、密码、IP等信息

+
    +
  1. 同步数据库表格
  2. +
+
cyborg-dbsync --config-file /etc/cyborg/cyborg.conf upgrade
+
    +
  1. 启动Cyborg服务
  2. +
+
systemctl enable openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent
+systemctl start openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent
+

Aodh 安装

+
    +
  1. 创建数据库
  2. +
+
CREATE DATABASE aodh;
+
+GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'localhost' IDENTIFIED BY 'AODH_DBPASS';
+
+GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'%' IDENTIFIED BY 'AODH_DBPASS';
+
    +
  1. 创建对应Keystone资源对象
  2. +
+
openstack user create --domain default --password-prompt aodh
+
+openstack role add --project service --user aodh admin
+
+openstack service create --name aodh --description "Telemetry" alarming
+
+openstack endpoint create --region RegionOne alarming public http://controller:8042
+
+openstack endpoint create --region RegionOne alarming internal http://controller:8042
+
+openstack endpoint create --region RegionOne alarming admin http://controller:8042
+
    +
  1. 安装Aodh
  2. +
+
yum install openstack-aodh-api openstack-aodh-evaluator openstack-aodh-notifier openstack-aodh-listener openstack-aodh-expirer python3-aodhclient
+

注意

+

aodh依赖的软件包python3-pyparsing在openEuler的OS仓不适配,需要覆盖安装OpenStack对应版本。可以使用yum list | grep pyparsing | grep OpenStack | awk '{print $2}'获取对应的版本VERSION,然后再执行yum install -y python3-pyparsing-VERSION覆盖安装适配的pyparsing。

+
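即,可参考执行如下命令(示例):

```shell
# 查询 OpenStack 仓库中适配的 python3-pyparsing 版本号
yum list | grep pyparsing | grep OpenStack | awk '{print $2}'
# 将 VERSION 替换为上一步查询到的版本号后覆盖安装
yum install -y python3-pyparsing-VERSION
```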
    +
  1. 修改配置文件
  2. +
+
[database]
+connection = mysql+pymysql://aodh:AODH_DBPASS@controller/aodh
+
+[DEFAULT]
+transport_url = rabbit://openstack:RABBIT_PASS@controller
+auth_strategy = keystone
+
+[keystone_authtoken]
+www_authenticate_uri = http://controller:5000
+auth_url = http://controller:5000
+memcached_servers = controller:11211
+auth_type = password
+project_domain_id = default
+user_domain_id = default
+project_name = service
+username = aodh
+password = AODH_PASS
+
+[service_credentials]
+auth_type = password
+auth_url = http://controller:5000/v3
+project_domain_id = default
+user_domain_id = default
+project_name = service
+username = aodh
+password = AODH_PASS
+interface = internalURL
+region_name = RegionOne
+
    +
  1. 初始化数据库
  2. +
+
aodh-dbsync
+
    +
  1. 启动Aodh服务
  2. +
+
systemctl enable openstack-aodh-api.service openstack-aodh-evaluator.service openstack-aodh-notifier.service openstack-aodh-listener.service
+
+systemctl start openstack-aodh-api.service openstack-aodh-evaluator.service openstack-aodh-notifier.service openstack-aodh-listener.service
+

Gnocchi 安装

+
    +
  1. 创建数据库
  2. +
+
CREATE DATABASE gnocchi;
+
+GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'localhost' IDENTIFIED BY 'GNOCCHI_DBPASS';
+
+GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'%' IDENTIFIED BY 'GNOCCHI_DBPASS';
+
    +
  1. 创建对应Keystone资源对象
  2. +
+
openstack user create --domain default --password-prompt gnocchi
+
+openstack role add --project service --user gnocchi admin
+
+openstack service create --name gnocchi --description "Metric Service" metric
+
+openstack endpoint create --region RegionOne metric public http://controller:8041
+
+openstack endpoint create --region RegionOne metric internal http://controller:8041
+
+openstack endpoint create --region RegionOne metric admin http://controller:8041
+
    +
  1. 安装Gnocchi
  2. +
+
yum install openstack-gnocchi-api openstack-gnocchi-metricd python3-gnocchiclient
+
    +
  1. 修改配置文件/etc/gnocchi/gnocchi.conf
  2. +
+
[api]
+auth_mode = keystone
+port = 8041
+uwsgi_mode = http-socket
+
+[keystone_authtoken]
+auth_type = password
+auth_url = http://controller:5000/v3
+project_domain_name = Default
+user_domain_name = Default
+project_name = service
+username = gnocchi
+password = GNOCCHI_PASS
+interface = internalURL
+region_name = RegionOne
+
+[indexer]
+url = mysql+pymysql://gnocchi:GNOCCHI_DBPASS@controller/gnocchi
+
+[storage]
+# coordination_url is not required but specifying one will improve
+# performance with better workload division across workers.
+coordination_url = redis://controller:6379
+file_basepath = /var/lib/gnocchi
+driver = file
+
    +
  1. 初始化数据库
  2. +
+
gnocchi-upgrade
+
    +
  1. 启动Gnocchi服务
  2. +
+
systemctl enable openstack-gnocchi-api.service openstack-gnocchi-metricd.service
+
+systemctl start openstack-gnocchi-api.service openstack-gnocchi-metricd.service
+

Ceilometer 安装

+
    +
  1. 创建对应Keystone资源对象
  2. +
+
openstack user create --domain default --password-prompt ceilometer
+
+openstack role add --project service --user ceilometer admin
+
+openstack service create --name ceilometer --description "Telemetry" metering
+
    +
  1. 安装Ceilometer
  2. +
+
yum install openstack-ceilometer-notification openstack-ceilometer-central
+
    +
  1. 修改配置文件/etc/ceilometer/pipeline.yaml
  2. +
+
publishers:
+    # set address of Gnocchi
+    # + filter out Gnocchi-related activity meters (Swift driver)
+    # + set default archive policy
+    - gnocchi://?filter_project=service&archive_policy=low
+
    +
  1. 修改配置文件/etc/ceilometer/ceilometer.conf
  2. +
+
[DEFAULT]
+transport_url = rabbit://openstack:RABBIT_PASS@controller
+
+[service_credentials]
+auth_type = password
+auth_url = http://controller:5000/v3
+project_domain_id = default
+user_domain_id = default
+project_name = service
+username = ceilometer
+password = CEILOMETER_PASS
+interface = internalURL
+region_name = RegionOne
+
    +
  1. 初始化数据库
  2. +
+
ceilometer-upgrade
+
    +
  1. 启动Ceilometer服务
  2. +
+
systemctl enable openstack-ceilometer-notification.service openstack-ceilometer-central.service
+
+systemctl start openstack-ceilometer-notification.service openstack-ceilometer-central.service
+

Heat 安装

+
    +
  1. 创建heat数据库,并授予heat数据库正确的访问权限,替换HEAT_DBPASS为合适的密码
  2. +
+
CREATE DATABASE heat;
+GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'localhost' IDENTIFIED BY 'HEAT_DBPASS';
+GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'%' IDENTIFIED BY 'HEAT_DBPASS';
+
    +
  1. 创建服务凭证,创建heat用户,并为其增加admin角色
  2. +
+
openstack user create --domain default --password-prompt heat
+openstack role add --project service --user heat admin
+
    +
  1. Create the heat and heat-cfn services and their corresponding API endpoints
  2. +
+
openstack service create --name heat --description "Orchestration" orchestration
+openstack service create --name heat-cfn --description "Orchestration"  cloudformation
+openstack endpoint create --region RegionOne orchestration public http://controller:8004/v1/%\(tenant_id\)s
+openstack endpoint create --region RegionOne orchestration internal http://controller:8004/v1/%\(tenant_id\)s
+openstack endpoint create --region RegionOne orchestration admin http://controller:8004/v1/%\(tenant_id\)s
+openstack endpoint create --region RegionOne cloudformation public http://controller:8000/v1
+openstack endpoint create --region RegionOne cloudformation internal http://controller:8000/v1
+openstack endpoint create --region RegionOne cloudformation admin http://controller:8000/v1
+
    +
  1. Create the additional resources needed for stack management: the heat domain, its domain admin user heat_domain_admin, and the heat_stack_owner and heat_stack_user roles
  2. +
+
openstack user create --domain heat --password-prompt heat_domain_admin
+openstack role add --domain heat --user-domain heat --user heat_domain_admin admin
+openstack role create heat_stack_owner
+openstack role create heat_stack_user
+
    +
  1. 安装软件包
  2. +
+
yum install openstack-heat-api openstack-heat-api-cfn openstack-heat-engine
+
    +
  1. 修改配置文件/etc/heat/heat.conf
  2. +
+
[DEFAULT]
+transport_url = rabbit://openstack:RABBIT_PASS@controller
+heat_metadata_server_url = http://controller:8000
+heat_waitcondition_server_url = http://controller:8000/v1/waitcondition
+stack_domain_admin = heat_domain_admin
+stack_domain_admin_password = HEAT_DOMAIN_PASS
+stack_user_domain_name = heat
+
+[database]
+connection = mysql+pymysql://heat:HEAT_DBPASS@controller/heat
+
+[keystone_authtoken]
+www_authenticate_uri = http://controller:5000
+auth_url = http://controller:5000
+memcached_servers = controller:11211
+auth_type = password
+project_domain_name = default
+user_domain_name = default
+project_name = service
+username = heat
+password = HEAT_PASS
+
+[trustee]
+auth_type = password
+auth_url = http://controller:5000
+username = heat
+password = HEAT_PASS
+user_domain_name = default
+
+[clients_keystone]
+auth_uri = http://controller:5000
+
    +
  1. 初始化heat数据库表
  2. +
+
su -s /bin/sh -c "heat-manage db_sync" heat
+
    +
  1. 启动服务
  2. +
+
systemctl enable openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service
+systemctl start openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service
+

基于OpenStack SIG开发工具oos快速部署

+

oos(openEuler OpenStack SIG)是OpenStack SIG提供的命令行工具。其中oos env系列命令提供了一键部署OpenStack (all in one或三节点cluster)的ansible脚本,用户可以使用该脚本快速部署一套基于 openEuler RPM 的 OpenStack 环境。oos工具支持对接云provider(目前仅支持华为云provider)和主机纳管两种方式来部署 OpenStack 环境,下面以对接华为云部署一套all in one的OpenStack环境为例说明oos工具的使用方法。

+
    +
  1. +

    安装oos工具

    +
    pip install openstack-sig-tool
    +
  2. +
  3. +

    配置对接华为云provider的信息

    +

    打开/usr/local/etc/oos/oos.conf文件,修改配置为您拥有的华为云资源信息:

    +
    [huaweicloud]
    +ak = 
    +sk = 
    +region = ap-southeast-3
    +root_volume_size = 100
    +data_volume_size = 100
    +security_group_name = oos
    +image_format = openEuler-%%(release)s-%%(arch)s
    +vpc_name = oos_vpc
    +subnet1_name = oos_subnet1
    +subnet2_name = oos_subnet2
    +
  4. +
  5. +

    配置 OpenStack 环境信息

    +

    打开/usr/local/etc/oos/oos.conf文件,根据当前机器环境和需求修改配置。内容如下:

    +
    [environment]
    +mysql_root_password = root
    +mysql_project_password = root
    +rabbitmq_password = root
    +project_identity_password = root
    +enabled_service = keystone,neutron,cinder,placement,nova,glance,horizon,aodh,ceilometer,cyborg,gnocchi,kolla,heat,swift,trove,tempest
    +neutron_provider_interface_name = br-ex
    +default_ext_subnet_range = 10.100.100.0/24
    +default_ext_subnet_gateway = 10.100.100.1
    +neutron_dataplane_interface_name = eth1
    +cinder_block_device = vdb
    +swift_storage_devices = vdc
    +swift_hash_path_suffix = ash
    +swift_hash_path_prefix = has
    +glance_api_workers = 2
    +cinder_api_workers = 2
    +nova_api_workers = 2
    +nova_metadata_api_workers = 2
    +nova_conductor_workers = 2
    +nova_scheduler_workers = 2
    +neutron_api_workers = 2
    +horizon_allowed_host = *
    +kolla_openeuler_plugin = false
    +

    Key configuration

    | Option | Description |
    |:---|:---|
    | enabled_service | List of services to install; trim it to your needs |
    | neutron_provider_interface_name | Name of the Neutron L3 bridge |
    | default_ext_subnet_range | IP range of the Neutron private network |
    | default_ext_subnet_gateway | Gateway of the Neutron private network |
    | neutron_dataplane_interface_name | NIC used by Neutron; a new, dedicated NIC is recommended to avoid conflicts with the existing NIC and to keep the all-in-one host reachable |
    | cinder_block_device | Name of the block device used by Cinder |
    | swift_storage_devices | Name of the block device used by Swift |
    | kolla_openeuler_plugin | Whether to enable the Kolla plugin; when set to True, Kolla can deploy openEuler containers |
  6. +
  7. +

    华为云上面创建一台openEuler 22.03-LTS的x86_64虚拟机,用于部署all in one 的 OpenStack

    +
    # sshpass在`oos env create`过程中被使用,用于配置对目标虚拟机的免密访问
    +dnf install sshpass
    +oos env create -r 22.03-lts -f small -a x86 -n test-oos all_in_one
    +

    具体的参数可以使用oos env create --help命令查看

    +
  8. +
  9. +

    部署OpenStack all in one 环境

    +

    oos env setup test-oos -r wallaby
    +具体的参数可以使用oos env setup --help命令查看

    +
  10. +
  11. +

    初始化tempest环境

    +

    如果用户想使用该环境运行tempest测试的话,可以执行命令oos env init,会自动把tempest需要的OpenStack资源自动创建好

    +
    oos env init test-oos
    +

    After the command completes, a mytest directory is created in the user's home directory; change into it and run the tempest run command (see the sketch after this list).

    +
  12. +
+
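
A minimal sketch of running the smoke tests from the generated directory, assuming oos env init completed successfully; the --smoke selection is only an example, pick the test set you need:

    cd ~/mytest
    # Run the smoke subset; adjust the tempest selection to your needs.
    tempest run --smoke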

如果是以主机纳管的方式部署 OpenStack 环境,总体逻辑与上文对接华为云时一致,1、3、5、6步操作不变,去除第2步对华为云provider信息的配置,第4步由在华为云上创建虚拟机改为纳管主机操作。

+
# sshpass is used here to set up password-free access to the target host
+dnf install sshpass
+oos env manage -r 22.03-lts -i TARGET_MACHINE_IP -p TARGET_MACHINE_PASSWD -n test-oos
+

替换TARGET_MACHINE_IP为目标机ip、TARGET_MACHINE_PASSWD为目标机密码。具体的参数可以使用oos env manage --help命令查看。

diff --git a/site/install/openEuler-22.09/OpenStack-yoga/index.html b/site/install/openEuler-22.09/OpenStack-yoga/index.html
new file mode 100644
index 0000000000000000000000000000000000000000..7812288970ae678c3c278f340f39ec9ac342af8b
--- /dev/null
+++ b/site/install/openEuler-22.09/OpenStack-yoga/index.html
@@ -0,0 +1,4036 @@

OpenStack Yoga 部署指南

+ +

本文档是openEuler OpenStack SIG编写的基于openEuler 22.09的OpenStack部署指南,内容由SIG贡献者提供。在阅读过程中,如果您有任何疑问或者发现任何问题,请联系SIG维护人员,或者直接提交issue

+

约定

+

本章节描述文档中的一些通用约定。

| Name | Definition |
|:---|:---|
| RABBIT_PASS | RabbitMQ password, set by the user; used in the configuration of every OpenStack service |
| CINDER_PASS | Password of the cinder Keystone user; used in the Cinder configuration |
| CINDER_DBPASS | Cinder database password; used in the Cinder configuration |
| KEYSTONE_DBPASS | Keystone database password; used in the Keystone configuration |
| GLANCE_PASS | Password of the glance Keystone user; used in the Glance configuration |
| GLANCE_DBPASS | Glance database password; used in the Glance configuration |
| HEAT_PASS | Password of the heat user registered in Keystone; used in the Heat configuration |
| HEAT_DBPASS | Heat database password; used in the Heat configuration |
| CYBORG_PASS | Password of the cyborg user registered in Keystone; used in the Cyborg configuration |
| CYBORG_DBPASS | Cyborg database password; used in the Cyborg configuration |
| NEUTRON_PASS | Password of the neutron user registered in Keystone; used in the Neutron configuration |
| NEUTRON_DBPASS | Neutron database password; used in the Neutron configuration |
| PROVIDER_INTERFACE_NAME | Name of the physical network interface; used in the Neutron configuration |
| OVERLAY_INTERFACE_IP_ADDRESS | Management IP address of the controller node; used in the Neutron configuration |
| METADATA_SECRET | Secret for the metadata proxy; used in the Nova and Neutron configurations |
| PLACEMENT_DBPASS | Placement database password; used in the Placement configuration |
| PLACEMENT_PASS | Password of the placement user registered in Keystone; used in the Placement configuration |
| NOVA_DBPASS | Nova database password; used in the Nova configuration |
| NOVA_PASS | Password of the nova user registered in Keystone; used in the Nova, Cyborg, Neutron, and other configurations |
| IRONIC_DBPASS | Ironic database password; used in the Ironic configuration |
| IRONIC_PASS | Password of the ironic user registered in Keystone; used in the Ironic configuration |
| IRONIC_INSPECTOR_DBPASS | ironic-inspector database password; used in the ironic-inspector configuration |
| IRONIC_INSPECTOR_PASS | Password of the ironic-inspector user registered in Keystone; used in the ironic-inspector configuration |

OpenStack SIG提供了多种基于openEuler部署OpenStack的方法,以满足不同的用户场景,请按需选择。

+

基于RPM部署

+

环境准备

+

本文档基于OpenStack经典的三节点环境进行部署,三个节点分别是控制节点(Controller)、计算节点(Compute)、存储节点(Storage),其中存储节点一般只部署存储服务,在资源有限的情况下,可以不单独部署该节点,把存储节点上的服务部署到计算节点即可。

+

First prepare three openEuler 22.09 environments: download the image that matches your environment (ISO image or qcow2 image) and install it.

+

下面的安装按照如下拓扑进行: +

controller:192.168.0.2
+compute:   192.168.0.3
+storage:   192.168.0.4
+如果您的环境IP不同,请按照您的环境IP修改相应的配置文件。

+

本文档的三节点服务拓扑如下图所示(只包含Keystone、Glance、Nova、Cinder、Neutron这几个核心服务,其他服务请参考具体部署章节):

+

(Topology diagrams: topology1, topology2, topology3)

+

在正式部署之前,需要对每个节点做如下配置和检查:

+
    +
  1. +

    保证EPOL yum源已配置

    +

    打开/etc/yum.repos.d/openEuler.repo文件,检查[EPOL]源是否存在,若不存在,则添加如下内容: +

    [EPOL]
    +name=EPOL
    +baseurl=http://repo.openeuler.org/openEuler-22.09/EPOL/main/$basearch/
    +enabled=1
    +gpgcheck=1
    +gpgkey=http://repo.openeuler.org/openEuler-22.09/OS/$basearch/RPM-GPG-KEY-openEuler
    +不论改不改这个文件,新机器的第一步都要更新一下yum源,执行yum update

    +
  2. +
  3. +

    修改主机名以及映射

    +

    每个节点分别修改主机名,以controller为例:

    +
    hostnamectl set-hostname controller
    +
    +vi /etc/hostname
    +内容修改为controller
    +

    然后修改每个节点的/etc/hosts文件,新增如下内容:

    +
    192.168.0.2   controller
    +192.168.0.3   compute
    +192.168.0.4   storage
    +
  4. +
+

时钟同步

+

集群环境时刻要求每个节点的时间一致,一般由时钟同步软件保证。本文使用chrony软件。步骤如下:

+

Controller节点

+
    +
  1. 安装服务 +
    dnf install chrony
  2. +
  3. 修改/etc/chrony.conf配置文件,新增一行 +
    # 表示允许哪些IP从本节点同步时钟
    +allow 192.168.0.0/24
  4. +
  5. 重启服务 +
    systemctl restart chronyd
  6. +
+

其他节点

+
    +
  1. +

    安装服务 +

    dnf install chrony

    +
  2. +
  3. +

    修改/etc/chrony.conf配置文件,新增一行

    +
    # NTP_SERVER是controller IP,表示从这个机器获取时间,这里我们填192.168.0.2,或者在`/etc/hosts`里配置好的controller名字即可。
    +server NTP_SERVER iburst
    +

    Also comment out the pool pool.ntp.org iburst line so that the node does not sync time from the public pool (see the snippet after this list).

    +
  4. +
  5. +

    重启服务

    +
    systemctl restart chronyd
    +
  6. +
+
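
A minimal sketch of the relevant lines in /etc/chrony.conf on a non-controller node, using the controller IP from the topology above:

    # Sync time from the controller instead of the public NTP pool.
    server 192.168.0.2 iburst
    #pool pool.ntp.org iburst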

配置完成后,检查一下结果,在其他非controller节点执行chronyc sources,返回结果类似如下内容,表示成功从controller同步时钟。

+
MS Name/IP address         Stratum Poll Reach LastRx Last sample
+===============================================================================
+^* 192.168.0.2                 4   6     7     0  -1406ns[  +55us] +/-   16ms
+

安装数据库

+

数据库安装在控制节点,这里推荐使用mariadb。

+
    +
  1. +

    安装软件包

    +
    dnf install mysql-config mariadb mariadb-server python3-PyMySQL
    +
  2. +
  3. +

    新增配置文件/etc/my.cnf.d/openstack.cnf,内容如下

    +
    [mysqld]
    +bind-address = 192.168.0.2
    +default-storage-engine = innodb
    +innodb_file_per_table = on
    +max_connections = 4096
    +collation-server = utf8_general_ci
    +character-set-server = utf8
    +
  4. +
  5. +

    启动服务器

    +
    systemctl start mariadb
    +
  6. +
  7. +

    初始化数据库,根据提示进行即可

    +
    mysql_secure_installation
    +

    示例如下:

    +
    NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MariaDB
    +    SERVERS IN PRODUCTION USE!  PLEASE READ EACH STEP CAREFULLY!
    +
    +In order to log into MariaDB to secure it, we'll need the current
    +password for the root user. If you've just installed MariaDB, and
    +haven't set the root password yet, you should just press enter here.
    +
    +Enter current password for root (enter for none): 
    +
    +#这里输入密码,由于我们是初始化DB,直接回车就行
    +
    +OK, successfully used password, moving on...
    +
    +Setting the root password or using the unix_socket ensures that nobody
    +can log into the MariaDB root user without the proper authorisation.
    +
    +You already have your root account protected, so you can safely answer 'n'.
    +
    +# 这里根据提示输入N
    +
    +Switch to unix_socket authentication [Y/n] N
    +
    +Enabled successfully!
    +Reloading privilege tables..
    +... Success!
    +
    +
    +You already have your root account protected, so you can safely answer 'n'.
    +
    +# 输入Y,修改密码
    +
    +Change the root password? [Y/n] Y
    +
    +New password: 
    +Re-enter new password: 
    +Password updated successfully!
    +Reloading privilege tables..
    +... Success!
    +
    +
    +By default, a MariaDB installation has an anonymous user, allowing anyone
    +to log into MariaDB without having to have a user account created for
    +them.  This is intended only for testing, and to make the installation
    +go a bit smoother.  You should remove them before moving into a
    +production environment.
    +
    +# 输入Y,删除匿名用户
    +
    +Remove anonymous users? [Y/n] Y
    +... Success!
    +
    +Normally, root should only be allowed to connect from 'localhost'.  This
    +ensures that someone cannot guess at the root password from the network.
    +
    +# 输入Y,关闭root远程登录权限
    +
    +Disallow root login remotely? [Y/n] Y
    +... Success!
    +
    +By default, MariaDB comes with a database named 'test' that anyone can
    +access.  This is also intended only for testing, and should be removed
    +before moving into a production environment.
    +
    +# 输入Y,删除test数据库
    +
    +Remove test database and access to it? [Y/n] Y
    +- Dropping test database...
    +... Success!
    +- Removing privileges on test database...
    +... Success!
    +
    +Reloading the privilege tables will ensure that all changes made so far
    +will take effect immediately.
    +
    +# 输入Y,重载配置
    +
    +Reload privilege tables now? [Y/n] Y
    +... Success!
    +
    +Cleaning up...
    +
    +All done!  If you've completed all of the above steps, your MariaDB
    +installation should now be secure.
    +
  8. +
  9. +

    验证,根据第四步设置的密码,检查是否能登录mariadb

    +
    mysql -uroot -p
    +
  10. +
+

安装消息队列

+

消息队列安装在控制节点,这里推荐使用rabbitmq。

+
    +
  1. 安装软件包 +
    dnf install rabbitmq-server
  2. +
  3. 启动服务 +
    systemctl start rabbitmq-server
  4. +
  5. Configure the openstack user. RABBIT_PASS is the password the OpenStack services use to log in to the message queue; it must match the value used later in each service's configuration. +
    rabbitmqctl add_user openstack RABBIT_PASS
    +rabbitmqctl set_permissions openstack ".*" ".*" ".*"
  6. +
+

安装缓存服务

+

缓存服务安装在控制节点,这里推荐使用Memcached。(The caching service is installed on the controller node; Memcached is recommended.)

+
    +
  1. 安装软件包 +
    dnf install memcached python3-memcached
  2. +
  3. 修改配置文件/etc/sysconfig/memcached +
    OPTIONS="-l 127.0.0.1,::1,controller"
  4. +
  5. 启动服务 +
    systemctl start memcached
  6. +
+

部署服务

+

Keystone

+

Keystone是OpenStack提供的鉴权服务,是整个OpenStack的入口,提供了租户隔离、用户认证、服务发现等功能,必须安装。

+
    +
  1. +

    创建 keystone 数据库并授权

    +
    mysql -u root -p
    +
    +MariaDB [(none)]> CREATE DATABASE keystone;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
    +IDENTIFIED BY 'KEYSTONE_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
    +IDENTIFIED BY 'KEYSTONE_DBPASS';
    +MariaDB [(none)]> exit
    +

    注意

    +

    替换 KEYSTONE_DBPASS,为 Keystone 数据库设置密码

    +
  2. +
  3. +

    安装软件包

    +
    dnf install openstack-keystone httpd mod_wsgi
    +
  4. +
  5. +

    配置keystone相关配置

    +
    vim /etc/keystone/keystone.conf
    +
    +[database]
    +connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone
    +
    +[token]
    +provider = fernet
    +

    解释

    +

    [database]部分,配置数据库入口

    +

    [token]部分,配置token provider

    +
  6. +
  7. +

    同步数据库

    +
    su -s /bin/sh -c "keystone-manage db_sync" keystone
    +
  8. +
  9. +

    初始化Fernet密钥仓库

    +
    keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
    +keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
    +
  10. +
  11. +

    启动服务

    +
    keystone-manage bootstrap --bootstrap-password ADMIN_PASS \
    +--bootstrap-admin-url http://controller:5000/v3/ \
    +--bootstrap-internal-url http://controller:5000/v3/ \
    +--bootstrap-public-url http://controller:5000/v3/ \
    +--bootstrap-region-id RegionOne
    +

    注意

    +

    替换 ADMIN_PASS,为 admin 用户设置密码

    +
  12. +
  13. +

    配置Apache HTTP server

    +
  14. +
  15. +

    打开httpd.conf并配置

    +
    #需要修改的配置文件路径
    +vim /etc/httpd/conf/httpd.conf
    +
    +#修改以下项,如果没有则新添加
    +ServerName controller
    +
  16. +
  17. +

    创建软链接

    +
    ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
    +

    解释

    +

    配置 ServerName 项引用控制节点

    +

    Note: if the ServerName entry does not exist, it needs to be created.

    +
  18. +
  19. +

    启动Apache HTTP服务

    +
    systemctl enable httpd.service
    +systemctl start httpd.service
    +
  20. +
  21. +

    创建环境变量配置

    +
    cat << EOF >> ~/.admin-openrc
    +export OS_PROJECT_DOMAIN_NAME=Default
    +export OS_USER_DOMAIN_NAME=Default
    +export OS_PROJECT_NAME=admin
    +export OS_USERNAME=admin
    +export OS_PASSWORD=ADMIN_PASS
    +export OS_AUTH_URL=http://controller:5000/v3
    +export OS_IDENTITY_API_VERSION=3
    +export OS_IMAGE_API_VERSION=2
    +EOF
    +

    注意

    +

    替换 ADMIN_PASS 为 admin 用户的密码

    +
  22. +
  23. +

    依次创建domain, projects, users, roles

    +
      +
    • +

      需要先安装python3-openstackclient

      +
      dnf install python3-openstackclient
      +
    • +
    • +

      导入环境变量

      +
      source ~/.admin-openrc
      +
    • +
    • +

      创建project service,其中 domain default 在 keystone-manage bootstrap 时已创建

      +
      openstack domain create --description "An Example Domain" example
      +
      openstack project create --domain default --description "Service Project" service
      +
    • +
    • +

      创建(non-admin)project myproject,user myuser 和 role myrole,为 myprojectmyuser 添加角色myrole

      +
      openstack project create --domain default --description "Demo Project" myproject
      +openstack user create --domain default --password-prompt myuser
      +openstack role create myrole
      +openstack role add --project myproject --user myuser myrole
      +
    • +
    +
  24. +
  25. +

    验证

    +
      +
    • +

      取消临时环境变量OS_AUTH_URL和OS_PASSWORD:

      +
      source ~/.admin-openrc
      +unset OS_AUTH_URL OS_PASSWORD
      +
    • +
    • +

      为admin用户请求token:

      +
      openstack --os-auth-url http://controller:5000/v3 \
      +--os-project-domain-name Default --os-user-domain-name Default \
      +--os-project-name admin --os-username admin token issue
      +
    • +
    • +

      为myuser用户请求token:

      +
      openstack --os-auth-url http://controller:5000/v3 \
      +--os-project-domain-name Default --os-user-domain-name Default \
      +--os-project-name myproject --os-username myuser token issue
      +
    • +
    +
  26. +
+

Glance

+

Glance是OpenStack提供的镜像服务,负责虚拟机、裸机镜像的上传与下载,必须安装。

+

Controller节点

+
    +
  1. +

    创建 glance 数据库并授权

    +
    mysql -u root -p
    +
    +MariaDB [(none)]> CREATE DATABASE glance;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
    +IDENTIFIED BY 'GLANCE_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
    +IDENTIFIED BY 'GLANCE_DBPASS';
    +MariaDB [(none)]> exit
    +

    注意:

    +

    替换 GLANCE_DBPASS,为 glance 数据库设置密码

    +
  2. +
  3. +

    初始化 glance 资源对象

    +
  4. +
  5. +

    导入环境变量

    +
    source ~/.admin-openrc
    +
  6. +
  7. +

    创建用户时,命令行会提示输入密码,请输入自定义的密码,下文涉及到GLANCE_PASS的地方替换成该密码即可。

    +
    openstack user create --domain default --password-prompt glance
    +User Password:
    +Repeat User Password:
    +
  8. +
  9. +

    添加glance用户到service project并指定admin角色:

    +
    openstack role add --project service --user glance admin
    +
  10. +
  11. +

    创建glance服务实体:

    +
    openstack service create --name glance --description "OpenStack Image" image
    +
  12. +
  13. +

    创建glance API服务:

    +
    openstack endpoint create --region RegionOne image public http://controller:9292
    +openstack endpoint create --region RegionOne image internal http://controller:9292
    +openstack endpoint create --region RegionOne image admin http://controller:9292
    +
  14. +
  15. +

    安装软件包

    +
    dnf install openstack-glance
    +
  16. +
  17. +

    修改 glance 配置文件

    +
    vim /etc/glance/glance-api.conf
    +
    +[database]
    +connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
    +
    +[keystone_authtoken]
    +www_authenticate_uri  = http://controller:5000
    +auth_url = http://controller:5000
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +project_name = service
    +username = glance
    +password = GLANCE_PASS
    +
    +[paste_deploy]
    +flavor = keystone
    +
    +[glance_store]
    +stores = file,http
    +default_store = file
    +filesystem_store_datadir = /var/lib/glance/images/
    +

    解释:

    +

    [database]部分,配置数据库入口

    +

    [keystone_authtoken] [paste_deploy]部分,配置身份认证服务入口

    +

    [glance_store]部分,配置本地文件系统存储和镜像文件的位置

    +
  18. +
  19. +

    同步数据库

    +
    su -s /bin/sh -c "glance-manage db_sync" glance
    +
  20. +
  21. +

    启动服务:

    +
    systemctl enable openstack-glance-api.service
    +systemctl start openstack-glance-api.service
    +
  22. +
  23. +

    验证

    +
      +
    • +

      导入环境变量 +

      source ~/.admin-openrc

      +
    • +
    • +

      下载镜像

      +
      x86镜像下载:
      +wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
      +
      +arm镜像下载:
      +wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-aarch64-disk.img
      +

      注意

      +

      如果您使用的环境是鲲鹏架构,请下载aarch64版本的镜像;已对镜像cirros-0.5.2-aarch64-disk.img进行测试。

      +
    • +
    • +

      向Image服务上传镜像:

      +
      openstack image create --disk-format qcow2 --container-format bare \
      +                    --file cirros-0.4.0-x86_64-disk.img --public cirros
      +
    • +
    • +

      确认镜像上传并验证属性:

      +
      openstack image list
      +
    • +
    +
  24. +
+

Placement

+

Placement是OpenStack提供的资源调度组件,一般不面向用户,由Nova等组件调用,安装在控制节点。

+

安装、配置Placement服务前,需要先创建相应的数据库、服务凭证和API endpoints。

+
    +
  1. +

    创建数据库

    +
      +
    • +

      使用root用户访问数据库服务:

      +
      mysql -u root -p
      +
    • +
    • +

      创建placement数据库:

      +
      MariaDB [(none)]> CREATE DATABASE placement;
      +
    • +
    • +

      授权数据库访问:

      +
      MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' \
      +  IDENTIFIED BY 'PLACEMENT_DBPASS';
      +MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' \
      +  IDENTIFIED BY 'PLACEMENT_DBPASS';
      +

      替换PLACEMENT_DBPASS为placement数据库访问密码。

      +
    • +
    • +

      退出数据库访问客户端:

      +
      exit
      +
    • +
    +
  2. +
  3. +

    配置用户和Endpoints

    +
      +
    • +

      source admin凭证,以获取admin命令行权限:

      +
      source ~/.admin-openrc
      +
    • +
    • +

      创建placement用户并设置用户密码:

      +
      openstack user create --domain default --password-prompt placement
      +
      +User Password:
      +Repeat User Password:
      +
    • +
    • +

      添加placement用户到service project并指定admin角色:

      +
      openstack role add --project service --user placement admin
      +
    • +
    • +

      创建placement服务实体:

      +
      openstack service create --name placement \
      +  --description "Placement API" placement
      +
    • +
    • +

      创建Placement API服务endpoints:

      +
      openstack endpoint create --region RegionOne \
      +  placement public http://controller:8778
      +openstack endpoint create --region RegionOne \
      +  placement internal http://controller:8778
      +openstack endpoint create --region RegionOne \
      +  placement admin http://controller:8778
      +
    • +
    +
  4. +
  5. +

    安装及配置组件

    +
      +
    • +

      安装软件包:

      +
      dnf install openstack-placement-api
      +
    • +
    • +

      编辑/etc/placement/placement.conf配置文件,完成如下操作:

      +
        +
      • +

        [placement_database]部分,配置数据库入口:

        +
        [placement_database]
        +connection = mysql+pymysql://placement:PLACEMENT_DBPASS@controller/placement
        +

        替换PLACEMENT_DBPASS为placement数据库的密码。

        +
      • +
      • +

        [api][keystone_authtoken]部分,配置身份认证服务入口:

        +
        [api]
        +auth_strategy = keystone
        +
        +[keystone_authtoken]
        +auth_url = http://controller:5000/v3
        +memcached_servers = controller:11211
        +auth_type = password
        +project_domain_name = Default
        +user_domain_name = Default
        +project_name = service
        +username = placement
        +password = PLACEMENT_PASS
        +

        替换PLACEMENT_PASS为placement用户的密码。

        +
      • +
      +
    • +
    • +

      数据库同步,填充Placement数据库:

      +
      su -s /bin/sh -c "placement-manage db sync" placement
      +
    • +
    +
  6. +
  7. +

    启动服务

    +

    重启httpd服务:

    +
    systemctl restart httpd
    +
  8. +
  9. +

    验证

    +
      +
    • +

      source admin凭证,以获取admin命令行权限

      +
      source ~/.admin-openrc
      +
    • +
    • +

      执行状态检查:

      +
      placement-status upgrade check
      +
      +----------------------------------------------------------------------+
      +| Upgrade Check Results                                                |
      ++----------------------------------------------------------------------+
      +| Check: Missing Root Provider IDs                                     |
      +| Result: Success                                                      |
      +| Details: None                                                        |
      ++----------------------------------------------------------------------+
      +| Check: Incomplete Consumers                                          |
      +| Result: Success                                                      |
      +| Details: None                                                        |
      ++----------------------------------------------------------------------+
      +| Check: Policy File JSON to YAML Migration                            |
      +| Result: Failure                                                      |
      +| Details: Your policy file is JSON-formatted which is deprecated. You |
      +|   need to switch to YAML-formatted file. Use the                     |
      +|   ``oslopolicy-convert-json-to-yaml`` tool to convert the            |
      +|   existing JSON-formatted files to YAML in a backwards-              |
      +|   compatible manner: https://docs.openstack.org/oslo.policy/         |
      +|   latest/cli/oslopolicy-convert-json-to-yaml.html.                   |
      ++----------------------------------------------------------------------+
      +

      这里可以看到Policy File JSON to YAML Migration的结果为Failure。这是因为在Placement中,JSON格式的policy文件从Wallaby版本开始已处于deprecated状态。可以参考提示,使用oslopolicy-convert-json-to-yaml工具 将现有的JSON格式policy文件转化为YAML格式。

      +
      oslopolicy-convert-json-to-yaml  --namespace placement \
      +  --policy-file /etc/placement/policy.json \
      +  --output-file /etc/placement/policy.yaml
      +mv /etc/placement/policy.json{,.bak}
      +

      注:当前环境中此问题可忽略,不影响运行。

      +
    • +
    • +

      针对placement API运行命令:

      +
        +
      • +

        安装osc-placement插件:

        +
        dnf install python3-osc-placement
        +
      • +
      • +

        列出可用的资源类别及特性:

        +
        openstack --os-placement-api-version 1.2 resource class list --sort-column name
        ++----------------------------+
        +| name                       |
        ++----------------------------+
        +| DISK_GB                    |
        +| FPGA                       |
        +| ...                        |
        +
        +openstack --os-placement-api-version 1.6 trait list --sort-column name
        ++---------------------------------------+
        +| name                                  |
        ++---------------------------------------+
        +| COMPUTE_ACCELERATORS                  |
        +| COMPUTE_ARCH_AARCH64                  |
        +| ...                                   |
        +
      • +
      +
    • +
    +
  10. +
+

Nova

+

Nova是OpenStack的计算服务,负责虚拟机的创建、发放等功能。

+

Controller节点

+

在控制节点执行以下操作。

+
    +
  1. +

    创建数据库

    +
      +
    • +

      使用root用户访问数据库服务:

      +
      mysql -u root -p
      +
    • +
    • +

      创建nova_apinovanova_cell0数据库:

      +
      MariaDB [(none)]> CREATE DATABASE nova_api;
      +MariaDB [(none)]> CREATE DATABASE nova;
      +MariaDB [(none)]> CREATE DATABASE nova_cell0;
      +
    • +
    • +

      授权数据库访问:

      +
      MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \
      +  IDENTIFIED BY 'NOVA_DBPASS';
      +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \
      +  IDENTIFIED BY 'NOVA_DBPASS';
      +
      +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
      +  IDENTIFIED BY 'NOVA_DBPASS';
      +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
      +  IDENTIFIED BY 'NOVA_DBPASS';
      +
      +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \
      +  IDENTIFIED BY 'NOVA_DBPASS';
      +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \
      +  IDENTIFIED BY 'NOVA_DBPASS';
      +

      替换NOVA_DBPASS为nova相关数据库访问密码。

      +
    • +
    • +

      退出数据库访问客户端:

      +
      exit
      +
    • +
    +
  2. +
  3. +

    配置用户和Endpoints

    +
      +
    • +

      source admin凭证,以获取admin命令行权限:

      +
      source ~/.admin-openrc
      +
    • +
    • +

      创建nova用户并设置用户密码:

      +
      openstack user create --domain default --password-prompt nova
      +
      +User Password:
      +Repeat User Password:
      +
    • +
    • +

      添加nova用户到service project并指定admin角色:

      +
      openstack role add --project service --user nova admin
      +
    • +
    • +

      创建nova服务实体:

      +
      openstack service create --name nova \
      +  --description "OpenStack Compute" compute
      +
    • +
    • +

      创建Nova API服务endpoints:

      +
      openstack endpoint create --region RegionOne \
      +  compute public http://controller:8774/v2.1
      +openstack endpoint create --region RegionOne \
      +  compute internal http://controller:8774/v2.1
      +openstack endpoint create --region RegionOne \
      +  compute admin http://controller:8774/v2.1
      +
    • +
    +
  4. +
  5. +

    安装及配置组件

    +
      +
    • +

      安装软件包:

      +
      dnf install openstack-nova-api openstack-nova-conductor \
      +  openstack-nova-novncproxy openstack-nova-scheduler
      +
    • +
    • +

      编辑/etc/nova/nova.conf配置文件,完成如下操作:

      +
        +
      • +

        [default]部分,启用计算和元数据的API,配置RabbitMQ消息队列入口,使用controller节点管理IP配置my_ip,显式定义log_dir:

        +
        [DEFAULT]
        +enabled_apis = osapi_compute,metadata
        +transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
        +my_ip = 192.168.0.2
        +log_dir = /var/log/nova
        +

        替换RABBIT_PASS为RabbitMQ中openstack账户的密码。

        +
      • +
      • +

        [api_database][database]部分,配置数据库入口:

        +
        [api_database]
        +connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api
        +
        +[database]
        +connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova
        +

        替换NOVA_DBPASS为nova相关数据库的密码。

        +
      • +
      • +

        [api][keystone_authtoken]部分,配置身份认证服务入口:

        +
        [api]
        +auth_strategy = keystone
        +
        +[keystone_authtoken]
        +auth_url = http://controller:5000/v3
        +memcached_servers = controller:11211
        +auth_type = password
        +project_domain_name = Default
        +user_domain_name = Default
        +project_name = service
        +username = nova
        +password = NOVA_PASS
        +

        替换NOVA_PASS为nova用户的密码。

        +
      • +
      • +

        [vnc]部分,启用并配置远程控制台入口:

        +
        [vnc]
        +enabled = true
        +server_listen = $my_ip
        +server_proxyclient_address = $my_ip
        +
      • +
      • +

        [glance]部分,配置镜像服务API的地址:

        +
        [glance]
        +api_servers = http://controller:9292
        +
      • +
      • +

        [oslo_concurrency]部分,配置lock path:

        +
        [oslo_concurrency]
        +lock_path = /var/lib/nova/tmp
        +
      • +
      • +

        [placement]部分,配置placement服务的入口:

        +
        [placement]
        +region_name = RegionOne
        +project_domain_name = Default
        +project_name = service
        +auth_type = password
        +user_domain_name = Default
        +auth_url = http://controller:5000/v3
        +username = placement
        +password = PLACEMENT_PASS
        +

        替换PLACEMENT_PASS为placement用户的密码。

        +
      • +
      +
    • +
    • +

      数据库同步:

      +
        +
      • +

        同步nova-api数据库:

        +
        su -s /bin/sh -c "nova-manage api_db sync" nova
        +
      • +
      • +

        注册cell0数据库:

        +
        su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
        +
      • +
      • +

        创建cell1 cell:

        +
        su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
        +
      • +
      • +

        同步nova数据库:

        +
        su -s /bin/sh -c "nova-manage db sync" nova
        +
      • +
      • +

        验证cell0和cell1注册正确:

        +
        su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova
        +
      • +
      +
    • +
    +
  6. +
  7. +

    启动服务

    +
    systemctl enable \
    +  openstack-nova-api.service \
    +  openstack-nova-scheduler.service \
    +  openstack-nova-conductor.service \
    +  openstack-nova-novncproxy.service
    +
    +systemctl start \
    +  openstack-nova-api.service \
    +  openstack-nova-scheduler.service \
    +  openstack-nova-conductor.service \
    +  openstack-nova-novncproxy.service
    +
  8. +
+

Compute节点

+

在计算节点执行以下操作。

+
    +
  1. +

    安装软件包

    +
    dnf install openstack-nova-compute
    +
  2. +
  3. +

    编辑/etc/nova/nova.conf配置文件

    +
      +
    • +

      [default]部分,启用计算和元数据的API,配置RabbitMQ消息队列入口,使用Compute节点管理IP配置my_ip,显式定义compute_driver、instances_path、log_dir:

      +
      [DEFAULT]
      +enabled_apis = osapi_compute,metadata
      +transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
      +my_ip = 192.168.0.3
      +compute_driver = libvirt.LibvirtDriver
      +instances_path = /var/lib/nova/instances
      +log_dir = /var/log/nova
      +

      替换RABBIT_PASS为RabbitMQ中openstack账户的密码。

      +
    • +
    • +

      [api][keystone_authtoken]部分,配置身份认证服务入口:

      +
      [api]
      +auth_strategy = keystone
      +
      +[keystone_authtoken]
      +auth_url = http://controller:5000/v3
      +memcached_servers = controller:11211
      +auth_type = password
      +project_domain_name = Default
      +user_domain_name = Default
      +project_name = service
      +username = nova
      +password = NOVA_PASS
      +

      替换NOVA_PASS为nova用户的密码。

      +
    • +
    • +

      [vnc]部分,启用并配置远程控制台入口:

      +
      [vnc]
      +enabled = true
      +server_listen = $my_ip
      +server_proxyclient_address = $my_ip
      +novncproxy_base_url = http://controller:6080/vnc_auto.html
      +
    • +
    • +

      [glance]部分,配置镜像服务API的地址:

      +
      [glance]
      +api_servers = http://controller:9292
      +
    • +
    • +

      [oslo_concurrency]部分,配置lock path:

      +
      [oslo_concurrency]
      +lock_path = /var/lib/nova/tmp
      +
    • +
    • +

      [placement]部分,配置placement服务的入口:

      +
      [placement]
      +region_name = RegionOne
      +project_domain_name = Default
      +project_name = service
      +auth_type = password
      +user_domain_name = Default
      +auth_url = http://controller:5000/v3
      +username = placement
      +password = PLACEMENT_PASS
      +

      替换PLACEMENT_PASS为placement用户的密码。

      +
    • +
    +
  4. +
  5. +

    确认计算节点是否支持虚拟机硬件加速(x86_64)

    +

    处理器为x86_64架构时,可通过运行如下命令确认是否支持硬件加速:

    +
    egrep -c '(vmx|svm)' /proc/cpuinfo
    +

    如果返回值为0则不支持硬件加速,需要配置libvirt使用QEMU而不是默认的KVM。编辑/etc/nova/nova.conf[libvirt]部分:

    +
    [libvirt]
    +virt_type = qemu
    +

    如果返回值为1或更大的值,则支持硬件加速,不需要进行额外的配置。

    +
  6. +
  7. +

    确认计算节点是否支持虚拟机硬件加速(arm64)

    +

    处理器为arm64架构时,可通过运行如下命令确认是否支持硬件加速:

    +
    virt-host-validate
    +# 该命令由libvirt提供,此时libvirt应已作为openstack-nova-compute依赖被安装,环境中已有此命令
    +

    显示FAIL时,表示不支持硬件加速,需要配置libvirt使用QEMU而不是默认的KVM。

    +
    QEMU: Checking if device /dev/kvm exists: FAIL (Check that CPU and firmware supports virtualization and kvm module is loaded)
    +

    编辑/etc/nova/nova.conf[libvirt]部分:

    +
    [libvirt]
    +virt_type = qemu
    +

    显示PASS时,表示支持硬件加速,不需要进行额外的配置。

    +
    QEMU: Checking if device /dev/kvm exists: PASS
    +
  8. +
  9. +

    配置qemu(仅arm64)

    +

    仅当处理器为arm64架构时需要执行此操作。

    +
      +
    • +

      编辑/etc/libvirt/qemu.conf:

      +
      nvram = ["/usr/share/AAVMF/AAVMF_CODE.fd: \
      +         /usr/share/AAVMF/AAVMF_VARS.fd", \
      +         "/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw: \
      +         /usr/share/edk2/aarch64/vars-template-pflash.raw"]
      +
    • +
    • +

      编辑/etc/qemu/firmware/edk2-aarch64.json

      +
      {
      +    "description": "UEFI firmware for ARM64 virtual machines",
      +    "interface-types": [
      +        "uefi"
      +    ],
      +    "mapping": {
      +        "device": "flash",
      +        "executable": {
      +            "filename": "/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw",
      +            "format": "raw"
      +        },
      +        "nvram-template": {
      +            "filename": "/usr/share/edk2/aarch64/vars-template-pflash.raw",
      +            "format": "raw"
      +        }
      +    },
      +    "targets": [
      +        {
      +            "architecture": "aarch64",
      +            "machines": [
      +                "virt-*"
      +            ]
      +        }
      +    ],
      +    "features": [
      +
      +    ],
      +    "tags": [
      +
      +    ]
      +}
      +
    • +
    +
  10. +
  11. +

    启动服务

    +
    systemctl enable libvirtd.service openstack-nova-compute.service
    +systemctl start libvirtd.service openstack-nova-compute.service
    +
  12. +
+

Controller节点

+

在控制节点执行以下操作。

+
    +
  1. +

    添加计算节点到openstack集群

    +
      +
    • +

      source admin凭证,以获取admin命令行权限:

      +
      source ~/.admin-openrc
      +
    • +
    • +

      确认nova-compute服务已识别到数据库中:

      +
      openstack compute service list --service nova-compute
      +
    • +
    • +

      发现计算节点,将计算节点添加到cell数据库:

      +

      su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
      +结果如下:

      +
      Modules with known eventlet monkey patching issues were imported prior to eventlet monkey patching: urllib3. This warning can usually be    ignored if the caller is only importing and not executing nova code.
      +Found 2 cell mappings.
      +Skipping cell0 since it does not contain hosts.
      +Getting computes from cell 'cell1': 6dae034e-b2d9-4a6c-b6f0-60ada6a6ddc2
      +Checking host mapping for compute host 'compute': 6286a86f-09d7-4786-9137-1185654c9e2e
      +Creating host mapping for compute host 'compute': 6286a86f-09d7-4786-9137-1185654c9e2e
      +Found 1 unmapped computes in cell: 6dae034e-b2d9-4a6c-b6f0-60ada6a6ddc2
      +
    • +
    +
  2. +
  3. +

    验证

    +
      +
    • 列出服务组件,验证每个流程都成功启动和注册:
    • +
    +
    openstack compute service list
    +
      +
    • 列出身份服务中的API端点,验证与身份服务的连接:
    • +
    +
    openstack catalog list
    +
      +
    • 列出镜像服务中的镜像,验证与镜像服务的连接:
    • +
    +
    openstack image list
    +
      +
    • 检查cells是否运作成功,以及其他必要条件是否已具备。
    • +
    +
    nova-status upgrade check
    +
  4. +
+

Neutron

+

Neutron是OpenStack的网络服务,提供虚拟交换机、IP路由、DHCP等功能。

+

Controller节点

+
    +
  1. +

    创建数据库、服务凭证和 API 服务端点

    +
      +
    • +

      创建数据库:

      +
      mysql -u root -p
      +
      +MariaDB [(none)]> CREATE DATABASE neutron;
      +MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'NEUTRON_DBPASS';
      +MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'NEUTRON_DBPASS';
      +MariaDB [(none)]> exit;
      +
    • +
    • +

      创建用户和服务,并记住创建neutron用户时输入的密码,用于配置NEUTRON_PASS:

      +
      source ~/.admin-openrc
      +openstack user create --domain default --password-prompt neutron
      +openstack role add --project service --user neutron admin
      +openstack service create --name neutron --description "OpenStack Networking" network
      +
    • +
    • +

      部署 Neutron API 服务:

      +
      openstack endpoint create --region RegionOne network public http://controller:9696
      +openstack endpoint create --region RegionOne network internal http://controller:9696
      +openstack endpoint create --region RegionOne network admin http://controller:9696
      +
    • +
    +
  2. +
  3. +

    安装软件包

    +

    dnf install -y openstack-neutron openstack-neutron-linuxbridge ebtables ipset openstack-neutron-ml2
    +3. 配置Neutron

    +
      +
    • +

      修改/etc/neutron/neutron.conf +

      [database]
      +connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron
      +
      +[DEFAULT]
      +core_plugin = ml2
      +service_plugins = router
      +allow_overlapping_ips = true
      +transport_url = rabbit://openstack:RABBIT_PASS@controller
      +auth_strategy = keystone
      +notify_nova_on_port_status_changes = true
      +notify_nova_on_port_data_changes = true
      +
      +[keystone_authtoken]
      +www_authenticate_uri = http://controller:5000
      +auth_url = http://controller:5000
      +memcached_servers = controller:11211
      +auth_type = password
      +project_domain_name = Default
      +user_domain_name = Default
      +project_name = service
      +username = neutron
      +password = NEUTRON_PASS
      +
      +[nova]
      +auth_url = http://controller:5000
      +auth_type = password
      +project_domain_name = Default
      +user_domain_name = Default
      +region_name = RegionOne
      +project_name = service
      +username = nova
      +password = NOVA_PASS
      +
      +[oslo_concurrency]
      +lock_path = /var/lib/neutron/tmp

      +
    • +
    • +

      Configure ML2. The ML2 settings can be adjusted to your needs; this guide uses a provider network with linuxbridge.

      +
    • +
    • +

      修改/etc/neutron/plugins/ml2/ml2_conf.ini +

      [ml2]
      +type_drivers = flat,vlan,vxlan
      +tenant_network_types = vxlan
      +mechanism_drivers = linuxbridge,l2population
      +extension_drivers = port_security
      +
      +[ml2_type_flat]
      +flat_networks = provider
      +
      +[ml2_type_vxlan]
      +vni_ranges = 1:1000
      +
      +[securitygroup]
      +enable_ipset = true

      +
    • +
    • +

      修改/etc/neutron/plugins/ml2/linuxbridge_agent.ini +

      [linux_bridge]
      +physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME
      +
      +[vxlan]
      +enable_vxlan = true
      +local_ip = OVERLAY_INTERFACE_IP_ADDRESS
      +l2_population = true
      +
      +[securitygroup]
      +enable_security_group = true
      +firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

      +
    • +
    • +

      配置Layer-3代理

      +
    • +
    • +

      修改/etc/neutron/l3_agent.ini

      +
      [DEFAULT]
      +interface_driver = linuxbridge
      +

      配置DHCP代理 +修改/etc/neutron/dhcp_agent.ini +

      [DEFAULT]
      +interface_driver = linuxbridge
      +dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
      +enable_isolated_metadata = true

      +
    • +
    • +

      配置metadata代理

      +
    • +
    • +

      修改/etc/neutron/metadata_agent.ini +

      [DEFAULT]
      +nova_metadata_host = controller
      +metadata_proxy_shared_secret = METADATA_SECRET

      +
    • +
    • 配置nova服务使用neutron,修改/etc/nova/nova.conf +
      [neutron]
      +auth_url = http://controller:5000
      +auth_type = password
      +project_domain_name = default
      +user_domain_name = default
      +region_name = RegionOne
      +project_name = service
      +username = neutron
      +password = NEUTRON_PASS
      +service_metadata_proxy = true
      +metadata_proxy_shared_secret = METADATA_SECRET
    • +
    +
  4. +
  5. +

    创建/etc/neutron/plugin.ini的符号链接

    +
    ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
    +
  6. +
  7. +

    同步数据库 +

    su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

    +
  8. +
  9. 重启nova api服务 +
    systemctl restart openstack-nova-api
  10. +
  11. +

    启动网络服务

    +
    systemctl enable neutron-server.service neutron-linuxbridge-agent.service \
    +neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service
    +systemctl start neutron-server.service neutron-linuxbridge-agent.service \
    +neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service
    +
  12. +
+

Compute节点

+
    +
  1. 安装软件包 +
    dnf install openstack-neutron-linuxbridge ebtables ipset -y
  2. +
  3. +

    配置Neutron

    +
      +
    • +

      修改/etc/neutron/neutron.conf +

      [DEFAULT]
      +transport_url = rabbit://openstack:RABBIT_PASS@controller
      +auth_strategy = keystone
      +
      +[keystone_authtoken]
      +www_authenticate_uri = http://controller:5000
      +auth_url = http://controller:5000
      +memcached_servers = controller:11211
      +auth_type = password
      +project_domain_name = Default
      +user_domain_name = Default
      +project_name = service
      +username = neutron
      +password = NEUTRON_PASS
      +
      +[oslo_concurrency]
      +lock_path = /var/lib/neutron/tmp

      +
    • +
    • +

      修改/etc/neutron/plugins/ml2/linuxbridge_agent.ini +

      [linux_bridge]
      +physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME
      +
      +[vxlan]
      +enable_vxlan = true
      +local_ip = OVERLAY_INTERFACE_IP_ADDRESS
      +l2_population = true
      +
      +[securitygroup]
      +enable_security_group = true
      +firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

      +
    • +
    • +

      配置nova compute服务使用neutron,修改/etc/nova/nova.conf +

      [neutron]
      +auth_url = http://controller:5000
      +auth_type = password
      +project_domain_name = default
      +user_domain_name = default
      +region_name = RegionOne
      +project_name = service
      +username = neutron
      +password = NEUTRON_PASS

      +
    • +
    • 重启nova-compute服务 +
      systemctl restart openstack-nova-compute.service
    • +
    • 启动Neutron linuxbridge agent服务
    • +
    +
    systemctl enable neutron-linuxbridge-agent
    +systemctl start neutron-linuxbridge-agent
    +
  4. +
+

Cinder

+

Cinder是OpenStack的存储服务,提供块设备的创建、发放、备份等功能。

+

Controller节点

+
    +
  1. +

    初始化数据库

    +

    CINDER_DBPASS是用户自定义的cinder数据库密码。 +

    mysql -u root -p
    +
    +MariaDB [(none)]> CREATE DATABASE cinder;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'CINDER_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'CINDER_DBPASS';
    +MariaDB [(none)]> exit

    +
  2. +
  3. +

    初始化Keystone资源对象

    +

    source ~/.admin-openrc
    +
    +#创建用户时,命令行会提示输入密码,请输入自定义的密码,下文涉及到`CINDER_PASS`的地方替换成该密码即可。
    +openstack user create --domain default --password-prompt cinder
    +
    +openstack role add --project service --user cinder admin
    +openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
    +
    +openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s
    +openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s
    +openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s
    +3. 安装软件包

    +
    dnf install openstack-cinder-api openstack-cinder-scheduler
    +
  4. +
  5. +

    修改cinder配置文件/etc/cinder/cinder.conf

    +
    [DEFAULT]
    +transport_url = rabbit://openstack:RABBIT_PASS@controller
    +auth_strategy = keystone
    +my_ip = 192.168.0.2
    +
    +[database]
    +connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder
    +
    +[keystone_authtoken]
    +www_authenticate_uri = http://controller:5000
    +auth_url = http://controller:5000
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +project_name = service
    +username = cinder
    +password = CINDER_PASS
    +
    +[oslo_concurrency]
    +lock_path = /var/lib/cinder/tmp
    +
  6. +
  7. +

    数据库同步

    +
    su -s /bin/sh -c "cinder-manage db sync" cinder
    +
  8. +
  9. +

    修改nova配置/etc/nova/nova.conf

    +
    [cinder]
    +os_region_name = RegionOne
    +
  10. +
  11. +

    启动服务

    +
    systemctl restart openstack-nova-api
    +systemctl start openstack-cinder-api openstack-cinder-scheduler
    +
  12. +
+

Storage节点

+

Storage节点要提前准备至少一块硬盘,作为cinder的存储后端,下文默认storage节点已经存在一块未使用的硬盘,设备名称为/dev/sdb,用户在配置过程中,请按照真实环境信息进行名称替换。

+

Cinder支持很多类型的后端存储,本指导使用最简单的lvm为参考,如果您想使用如ceph等其他后端,请自行配置。

+
    +
  1. +

    安装软件包

    +
    dnf install lvm2 device-mapper-persistent-data scsi-target-utils rpcbind nfs-utils openstack-cinder-volume openstack-cinder-backup
    +
  2. +
  3. +

    配置lvm卷组

    +
    pvcreate /dev/sdb
    +vgcreate cinder-volumes /dev/sdb
    +
  4. +
  5. +

    修改cinder配置/etc/cinder/cinder.conf

    +
    [DEFAULT]
    +transport_url = rabbit://openstack:RABBIT_PASS@controller
    +auth_strategy = keystone
    +my_ip = 192.168.0.4
    +enabled_backends = lvm
    +glance_api_servers = http://controller:9292
    +
    +[keystone_authtoken]
    +www_authenticate_uri = http://controller:5000
    +auth_url = http://controller:5000
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = default
    +user_domain_name = default
    +project_name = service
    +username = cinder
    +password = CINDER_PASS
    +
    +[database]
    +connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder
    +
    +[lvm]
    +volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
    +volume_group = cinder-volumes
    +target_protocol = iscsi
    +target_helper = lioadm
    +
    +[oslo_concurrency]
    +lock_path = /var/lib/cinder/tmp
    +
  6. +
  7. +

    配置cinder backup (可选)

    +

    cinder-backup是可选的备份服务,cinder同样支持很多种备份后端,本文使用swift存储,如果您想使用如NFS等后端,请自行配置,例如可以参考OpenStack官方文档对NFS的配置说明。

    +

    修改/etc/cinder/cinder.conf,在[DEFAULT]中新增 +

    [DEFAULT]
    +backup_driver = cinder.backup.drivers.swift.SwiftBackupDriver
    +backup_swift_url = SWIFT_URL

    +

    Here SWIFT_URL is the URL of the Swift service in this environment; after Swift has been deployed, obtain it with the openstack catalog show object-store command (see the sketch after this list).

    +
  8. +
  9. +

    启动服务

    +
    systemctl start openstack-cinder-volume target
    +systemctl start openstack-cinder-backup (可选)
    +
  10. +
+
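
A minimal sketch of obtaining SWIFT_URL and wiring it into the backup configuration, assuming Swift is already deployed and the admin credentials are loaded:

    source ~/.admin-openrc
    # Look up the object-store endpoint registered in the Keystone catalog.
    openstack catalog show object-store
    # Copy the public URL from the output into /etc/cinder/cinder.conf:
    # [DEFAULT]
    # backup_driver = cinder.backup.drivers.swift.SwiftBackupDriver
    # backup_swift_url = <public URL from the catalog output>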

至此,Cinder服务的部署已全部完成,可以在controller通过以下命令进行简单的验证

+
source ~/.admin-openrc
+openstack storage service list
+openstack volume list
+

Horizon

+

Horizon是OpenStack提供的前端页面,可以让用户通过网页鼠标的操作来控制OpenStack集群,而不用繁琐的CLI命令行。Horizon一般部署在控制节点。

+
    +
  1. +

    安装软件包

    +
    dnf install openstack-dashboard
    +
  2. +
  3. +

    修改配置文件/etc/openstack-dashboard/local_settings

    +
    OPENSTACK_HOST = "controller"
    +ALLOWED_HOSTS = ['*', ]
    +OPENSTACK_KEYSTONE_URL =  "http://controller:5000/v3"
    +SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
    +CACHES = {
    +'default': {
    +    'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
    +    'LOCATION': 'controller:11211',
    +    }
    +}
    +OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
    +OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
    +OPENSTACK_KEYSTONE_DEFAULT_ROLE = "member"
    +WEBROOT = '/dashboard'
    +POLICY_FILES_PATH = "/etc/openstack-dashboard"
    +
    +OPENSTACK_API_VERSIONS = {
    +    "identity": 3,
    +    "image": 2,
    +    "volume": 3,
    +}
    +
  4. +
  5. +

    重启服务

    +
    systemctl restart httpd
    +
  6. +
+

至此,horizon服务的部署已全部完成,打开浏览器,输入http://192.168.0.2/dashboard,打开horizon登录页面。

+

Ironic

+

Ironic是OpenStack的裸金属服务,如果用户需要进行裸机部署则推荐使用该组件。否则,可以不用安装。

+

在控制节点执行以下操作。

+
    +
  1. +

    设置数据库

    +

    裸金属服务在数据库中存储信息,创建一个ironic用户可以访问的ironic数据库,替换IRONIC_DBPASS为合适的密码

    +
    mysql -u root -p
    +
    +MariaDB [(none)]> CREATE DATABASE ironic CHARACTER SET utf8;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'localhost' \
    +IDENTIFIED BY 'IRONIC_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'%' \
    +IDENTIFIED BY 'IRONIC_DBPASS';
    +MariaDB [(none)]> exit
    +Bye
    +
  2. +
  3. +

    创建服务用户认证

    +
      +
    • +

      创建Bare Metal服务用户

      +

      替换IRONIC_PASS为ironic用户密码,IRONIC_INSPECTOR_PASS为ironic_inspector用户密码。

      +
      openstack user create --password IRONIC_PASS \
      +  --email ironic@example.com ironic
      +openstack role add --project service --user ironic admin
      +openstack service create --name ironic \
      +  --description "Ironic baremetal provisioning service" baremetal
      +
      +openstack service create --name ironic-inspector --description     "Ironic inspector baremetal provisioning service" baremetal-introspection
      +openstack user create --password IRONIC_INSPECTOR_PASS --email ironic_inspector@example.com ironic-inspector
      +openstack role add --project service --user ironic-inspector admin
      +
    • +
    • +

      创建Bare Metal服务访问入口

      +
      openstack endpoint create --region RegionOne baremetal admin http://192.168.0.2:6385
      +openstack endpoint create --region RegionOne baremetal public http://192.168.0.2:6385
      +openstack endpoint create --region RegionOne baremetal internal http://192.168.0.2:6385
      +openstack endpoint create --region RegionOne baremetal-introspection internal http://192.168.0.2:5050/v1
      +openstack endpoint create --region RegionOne baremetal-introspection public http://192.168.0.2:5050/v1
      +openstack endpoint create --region RegionOne baremetal-introspection admin http://192.168.0.2:5050/v1
      +
    • +
    +
  4. +
  5. +

    安装组件

    +
    dnf install openstack-ironic-api openstack-ironic-conductor python3-ironicclient
    +
  6. +
  7. +

    配置ironic-api服务

    +

    配置文件路径/etc/ironic/ironic.conf

    +
      +
    • +

      通过connection选项配置数据库的位置,如下所示,替换IRONIC_DBPASSironic用户的密码,替换DB_IP为DB服务器所在的IP地址:

      +
      [database]
      +
+# The SQLAlchemy connection string used to connect to the
      +# database (string value)
      +# connection = mysql+pymysql://ironic:IRONIC_DBPASS@DB_IP/ironic
      +connection = mysql+pymysql://ironic:IRONIC_DBPASS@controller/ironic
      +
    • +
    • +

      通过以下选项配置ironic-api服务使用RabbitMQ消息代理,替换RPC_*为RabbitMQ的详细地址和凭证

      +
      [DEFAULT]
      +
      +# A URL representing the messaging driver to use and its full
      +# configuration. (string value)
      +# transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
      +transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
      +

      用户也可自行使用json-rpc方式替换rabbitmq

      +
    • +
    • +

      配置ironic-api服务使用身份认证服务的凭证,替换PUBLIC_IDENTITY_IP为身份认证服务器的公共IP,替换PRIVATE_IDENTITY_IP为身份认证服务器的私有IP,替换 IRONIC_PASS为身份认证服务中ironic用户的密码,替换RABBIT_PASS为RabbitMQ中openstack账户的密码。:

      +
      [DEFAULT]
      +
      +# Authentication strategy used by ironic-api: one of
      +# "keystone" or "noauth". "noauth" should not be used in a
      +# production environment because all authentication will be
      +# disabled. (string value)
      +
      +auth_strategy=keystone
      +host = controller
      +memcache_servers = controller:11211
      +enabled_network_interfaces = flat,noop,neutron
      +default_network_interface = noop
      +enabled_hardware_types = ipmi
      +enabled_boot_interfaces = pxe
      +enabled_deploy_interfaces = direct
      +default_deploy_interface = direct
      +enabled_inspect_interfaces = inspector
      +enabled_management_interfaces = ipmitool
      +enabled_power_interfaces = ipmitool
      +enabled_rescue_interfaces = no-rescue,agent
      +isolinux_bin = /usr/share/syslinux/isolinux.bin
      +logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s
      +
      +[keystone_authtoken]
      +# Authentication type to load (string value)
      +auth_type=password
      +# Complete public Identity API endpoint (string value)
      +# www_authenticate_uri=http://PUBLIC_IDENTITY_IP:5000
      +www_authenticate_uri=http://controller:5000
      +# Complete admin Identity API endpoint. (string value)
      +# auth_url=http://PRIVATE_IDENTITY_IP:5000
      +auth_url=http://controller:5000
      +# Service username. (string value)
      +username=ironic
      +# Service account password. (string value)
      +password=IRONIC_PASS
      +# Service tenant name. (string value)
      +project_name=service
      +# Domain name containing project (string value)
      +project_domain_name=Default
      +# User's domain name (string value)
      +user_domain_name=Default
      +
      +[agent]
      +deploy_logs_collect = always
      +deploy_logs_local_path = /var/log/ironic/deploy
      +deploy_logs_storage_backend = local
      +image_download_source = http
      +stream_raw_images = false
      +force_raw_images = false
      +verify_ca = False
      +
      +[oslo_concurrency]
      +
      +[oslo_messaging_notifications]
      +transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
      +topics = notifications
      +driver = messagingv2
      +
      +[oslo_messaging_rabbit]
      +amqp_durable_queues = True
      +rabbit_ha_queues = True
      +
      +[pxe]
      +ipxe_enabled = false
      +pxe_append_params = nofb nomodeset vga=normal coreos.autologin ipa-insecure=1
      +image_cache_size = 204800
      +tftp_root=/var/lib/tftpboot/cephfs/
      +tftp_master_path=/var/lib/tftpboot/cephfs/master_images
      +
      +[dhcp]
      +dhcp_provider = none
      +
    • +
    • +

      创建裸金属服务数据库表

      +
      ironic-dbsync --config-file /etc/ironic/ironic.conf create_schema
      +
    • +
    • +

      重启ironic-api服务

      +
      sudo systemctl restart openstack-ironic-api
      +
    • +
    +
  8. +
  9. +

    配置ironic-conductor服务

    +

    如下为ironic-conductor服务自身的标准配置。ironic-conductor服务可以与ironic-api服务部署于不同节点;本指南中均部署于控制节点,所以重复的配置项可跳过。

    +
      +
    • +

      替换使用conductor服务所在host的IP配置my_ip:

      +
      [DEFAULT]
      +
      +# IP address of this host. If unset, will determine the IP
      +# programmatically. If unable to do so, will use "127.0.0.1".
      +# (string value)
      +# my_ip=HOST_IP
      +my_ip = 192.168.0.2
      +
    • +
    • +

      配置数据库的位置,ironic-conductor应该使用和ironic-api相同的配置。替换IRONIC_DBPASS为ironic用户的密码:

      +
      [database]
      +
      +# The SQLAlchemy connection string to use to connect to the
      +# database. (string value)
      +connection = mysql+pymysql://ironic:IRONIC_DBPASS@controller/ironic
      +
    • +
    • +

      通过以下选项配置ironic-conductor服务使用RabbitMQ消息代理,应与ironic-api服务使用相同的配置,替换RABBIT_PASS为RabbitMQ中openstack账户的密码:

      +
      [DEFAULT]
      +
      +# A URL representing the messaging driver to use and its full
      +# configuration. (string value)
      +transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
      +

      用户也可自行使用json-rpc方式替换rabbitmq

      +
    • +
    • +

      配置凭证访问其他OpenStack服务

      +

      为了与其他OpenStack服务进行通信,裸金属服务在请求其他服务时需要使用服务用户与OpenStack Identity服务进行认证。这些用户的凭据必须在与相应服务相关的每个配置文件中进行配置。

      +
      [neutron] - 访问OpenStack网络服务
      +[glance] - 访问OpenStack镜像服务
      +[swift] - 访问OpenStack对象存储服务
      +[cinder] - 访问OpenStack块存储服务
      +[inspector] - 访问OpenStack裸金属introspection服务
      +[service_catalog] - 一个特殊项用于保存裸金属服务使用的凭证,该凭证用于发现注册在OpenStack身份认证服务目录中的自己的API URL端点
      +

      简单起见,可以对所有服务使用同一个服务用户。为了向后兼容,该用户应该和ironic-api服务的[keystone_authtoken]所配置的为同一个用户。但这不是必须的,也可以为每个服务创建并配置不同的服务用户。

      +

      在下面的示例中,用户访问OpenStack网络服务的身份验证信息配置为:

      +
      网络服务部署在名为RegionOne的身份认证服务域中,仅在服务目录中注册公共端点接口
      +
      +请求时使用特定的CA SSL证书进行HTTPS连接
      +
      +与ironic-api服务配置相同的服务用户
      +
      +动态密码认证插件基于其他选项发现合适的身份认证服务API版本
      +

      替换IRONIC_PASS为ironic用户密码。

      +
      [neutron]
      +
      +# Authentication type to load (string value)
      +auth_type = password
      +# Authentication URL (string value)
      +auth_url=https://IDENTITY_IP:5000/
      +# Username (string value)
      +username=ironic
      +# User's password (string value)
      +password=IRONIC_PASS
      +# Project name to scope to (string value)
      +project_name=service
      +# Domain ID containing project (string value)
      +project_domain_id=default
      +# User's domain id (string value)
      +user_domain_id=default
      +# PEM encoded Certificate Authority to use when verifying
      +# HTTPs connections. (string value)
      +cafile=/opt/stack/data/ca-bundle.pem
      +# The default region_name for endpoint URL discovery. (string
      +# value)
      +region_name = RegionOne
      +# List of interfaces, in order of preference, for endpoint
      +# URL. (list value)
      +valid_interfaces=public
      +
      +# 其他参考配置
      +[glance]
      +endpoint_override = http://controller:9292
      +www_authenticate_uri = http://controller:5000
      +auth_url = http://controller:5000
      +auth_type = password
      +username = ironic
      +password = IRONIC_PASS
      +project_domain_name = default
      +user_domain_name = default
      +region_name = RegionOne
      +project_name = service
      +
      +[service_catalog]  
      +region_name = RegionOne
      +project_domain_id = default
      +user_domain_id = default
      +project_name = service
      +password = IRONIC_PASS
      +username = ironic
      +auth_url = http://controller:5000
      +auth_type = password
      +

      默认情况下,为了与其他服务进行通信,裸金属服务会尝试通过身份认证服务的服务目录发现该服务合适的端点。如果希望对一个特定服务使用一个不同的端点,则在裸金属服务的配置文件中通过endpoint_override选项进行指定:

      +
      [neutron]
      +endpoint_override = <NEUTRON_API_ADDRESS>
      +
    • +
    • +

      配置允许的驱动程序和硬件类型

      +

      通过设置enabled_hardware_types设置ironic-conductor服务允许使用的硬件类型:

      +
      [DEFAULT]
      +enabled_hardware_types = ipmi
      +

      配置硬件接口:

      +
      enabled_boot_interfaces = pxe
      +enabled_deploy_interfaces = direct,iscsi
      +enabled_inspect_interfaces = inspector
      +enabled_management_interfaces = ipmitool
      +enabled_power_interfaces = ipmitool
      +

      配置接口默认值:

      +
      [DEFAULT]
      +default_deploy_interface = direct
      +default_network_interface = neutron
      +

      如果启用了任何使用Direct deploy的驱动,必须安装和配置镜像服务的Swift后端。Ceph对象网关(RADOS网关)也支持作为镜像服务的后端。

      +
    • +
    • +

      重启ironic-conductor服务

      +
      sudo systemctl restart openstack-ironic-conductor
      +
    • +
    +
  10. +
  11. +

    配置ironic-inspector服务

    +
      +
    • +

      安装组件

      +
      dnf install openstack-ironic-inspector
      +
    • +
    • +

      创建数据库

      +
      # mysql -u root -p
      +
      +MariaDB [(none)]> CREATE DATABASE ironic_inspector CHARACTER SET utf8;
      +
      +MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic_inspector.* TO 'ironic_inspector'@'localhost' \
      +IDENTIFIED BY 'IRONIC_INSPECTOR_DBPASS';
      +MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic_inspector.* TO 'ironic_inspector'@'%' \
      +IDENTIFIED BY 'IRONIC_INSPECTOR_DBPASS';
      +MariaDB [(none)]> exit
      +Bye
      +
    • +
    • +

      配置/etc/ironic-inspector/inspector.conf

      +

      通过connection选项配置数据库的位置,如下所示,替换IRONIC_INSPECTOR_DBPASS为ironic_inspector用户的密码

      +
      [database]
      +backend = sqlalchemy
      +connection = mysql+pymysql://ironic_inspector:IRONIC_INSPECTOR_DBPASS@controller/ironic_inspector
      +min_pool_size = 100
      +max_pool_size = 500
      +pool_timeout = 30
      +max_retries = 5
      +max_overflow = 200
      +db_retry_interval = 2
      +db_inc_retry_interval = True
      +db_max_retry_interval = 2
      +db_max_retries = 5
      +
    • +
    • +

      配置消息队列通信地址

      +
      [DEFAULT] 
      +transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
      +
    • +
    • +

      设置keystone认证

      +
      [DEFAULT]
      +
      +auth_strategy = keystone
      +timeout = 900
      +rootwrap_config = /etc/ironic-inspector/rootwrap.conf
      +logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s
      +log_dir = /var/log/ironic-inspector
      +state_path = /var/lib/ironic-inspector
      +use_stderr = False
      +
      +[ironic]
      +api_endpoint = http://IRONIC_API_HOST_ADDRESS:6385
      +auth_type = password
      +auth_url = http://PUBLIC_IDENTITY_IP:5000
      +auth_strategy = keystone
      +ironic_url = http://IRONIC_API_HOST_ADDRESS:6385
      +os_region = RegionOne
      +project_name = service
      +project_domain_name = Default
      +user_domain_name = Default
      +username = IRONIC_SERVICE_USER_NAME
      +password = IRONIC_SERVICE_USER_PASSWORD
      +
      +[keystone_authtoken]
      +auth_type = password
      +auth_url = http://controller:5000
      +www_authenticate_uri = http://controller:5000
      +project_domain_name = default
      +user_domain_name = default
      +project_name = service
      +username = ironic_inspector
      +password = IRONICPASSWD
      +region_name = RegionOne
      +memcache_servers = controller:11211
      +token_cache_time = 300
      +
      +[processing]
      +add_ports = active
      +processing_hooks = $default_processing_hooks,local_link_connection,lldp_basic
      +ramdisk_logs_dir = /var/log/ironic-inspector/ramdisk
      +always_store_ramdisk_logs = true
      +store_data =none
      +power_off = false
      +
      +[pxe_filter]
      +driver = iptables
      +
      +[capabilities]
      +boot_mode=True
      +
    • +
    • +

      配置ironic inspector dnsmasq服务

      +
      # 配置文件地址:/etc/ironic-inspector/dnsmasq.conf
      +port=0
      +interface=enp3s0                         #替换为实际监听网络接口
      +dhcp-range=192.168.0.40,192.168.0.50   #替换为实际dhcp地址范围
      +bind-interfaces
      +enable-tftp
      +
      +dhcp-match=set:efi,option:client-arch,7
      +dhcp-match=set:efi,option:client-arch,9
      +dhcp-match=aarch64, option:client-arch,11
      +dhcp-boot=tag:aarch64,grubaa64.efi
      +dhcp-boot=tag:!aarch64,tag:efi,grubx64.efi
      +dhcp-boot=tag:!aarch64,tag:!efi,pxelinux.0
      +
      +tftp-root=/tftpboot                       #替换为实际tftpboot目录
      +log-facility=/var/log/dnsmasq.log
      +
    • +
    • +

      关闭ironic provision网络子网的dhcp

      +
      openstack subnet set --no-dhcp 72426e89-f552-4dc4-9ac7-c4e131ce7f3c
      +
    • +
    • +

      初始化ironic-inspector服务的数据库

      +
      ironic-inspector-dbsync --config-file /etc/ironic-inspector/inspector.conf upgrade
      +
    • +
    • +

      启动服务

      +
      systemctl enable --now openstack-ironic-inspector.service
      +systemctl enable --now openstack-ironic-inspector-dnsmasq.service
      +
    • +
    +
  12. +
  13. +

    配置httpd服务

    +
      +
    • +

      创建ironic要使用的httpd的root目录并设置属主属组,目录路径要与/etc/ironic/ironic.conf中[deploy]组的http_root配置项指定的路径一致。

      +
      mkdir -p /var/lib/ironic/httproot
      +chown ironic.ironic /var/lib/ironic/httproot
      +
    • +
    • +

      安装和配置httpd服务

      +
        +
      • +

        安装httpd服务,已有请忽略

        +
        dnf install httpd -y
        +
      • +
      • +

        创建/etc/httpd/conf.d/openstack-ironic-httpd.conf文件,内容如下:

        +
        Listen 8080
        +
        +<VirtualHost *:8080>
        +    ServerName ironic.openeuler.com
        +
        +    ErrorLog "/var/log/httpd/openstack-ironic-httpd-error_log"
        +    CustomLog "/var/log/httpd/openstack-ironic-httpd-access_log" "%h %l %u %t \"%r\" %>s %b"
        +
        +    DocumentRoot "/var/lib/ironic/httproot"
        +    <Directory "/var/lib/ironic/httproot">
        +        Options Indexes FollowSymLinks
        +        Require all granted
        +    </Directory>
        +    LogLevel warn
        +    AddDefaultCharset UTF-8
        +    EnableSendfile on
        +</VirtualHost>
        +

        注意监听的端口要和/etc/ironic/ironic.conf里[deploy]选项中http_url配置项中指定的端口一致。

        +
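以下为/etc/ironic/ironic.conf中[deploy]组的配置示意(路径与端口为示例值,需与上文创建的目录和httpd监听端口保持一致):

```ini
[deploy]
# http_root需与上文创建的httpd根目录一致(示例值)
http_root = /var/lib/ironic/httproot
# http_url中的端口需与httpd监听的端口(本例为8080)一致
http_url = http://192.168.0.2:8080
```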
      • +
      • +

        重启httpd服务。

        +
        systemctl restart httpd
        +
      • +
      +
    • +
    +
  14. +
  15. +

    deploy ramdisk镜像下载或制作

    +

    部署一个裸机节点总共需要两组镜像:deploy ramdisk images和user images。Deploy ramdisk images上运行有ironic-python-agent(IPA)服务,Ironic通过它进行裸机节点的环境准备。User images是最终被安装到裸机节点上、供用户使用的镜像。

    +

    ramdisk镜像支持通过ironic-python-agent-builder或disk-image-builder工具制作。用户也可以自行选择其他工具制作。若使用原生工具,则需要安装对应的软件包。

    +

    具体的使用方法可以参考官方文档,同时官方也有提供制作好的deploy镜像,可尝试下载。

    +

    下文介绍通过ironic-python-agent-builder构建ironic使用的deploy镜像的完整过程。

    +
      +
    • +

      安装 ironic-python-agent-builder

      +
      dnf install python3-ironic-python-agent-builder python3-ironic-python-agent-builder-doc
      +
      +# 或者使用pip安装:
      +pip3 install ironic-python-agent-builder
      +dnf install qemu-img git
      +

      注:22.09系统中,使用dnf安装时,需要同时安装主包和doc包。doc包内打包到/usr/share目录中的文件为运行所需,后续系统版本将把这些文件合并到python3-ironic-python-agent-builder包中。

      +
    • +
    • +

      制作镜像

      +

      基本用法:

      +
      usage: ironic-python-agent-builder [-h] [-r RELEASE] [-o OUTPUT] [-e ELEMENT] [-b BRANCH]
      +                           [-v] [--lzma] [--extra-args EXTRA_ARGS]
      +                           [--elements-path ELEMENTS_PATH]
      +                           distribution
      +
      +positional arguments:
      +  distribution          Distribution to use
      +
      +options:
      +  -h, --help            show this help message and exit
      +  -r RELEASE, --release RELEASE
      +                        Distribution release to use
      +  -o OUTPUT, --output OUTPUT
      +                        Output base file name
      +  -e ELEMENT, --element ELEMENT
      +                        Additional DIB element to use
      +  -b BRANCH, --branch BRANCH
      +                        If set, override the branch that is used for         ironic-python-agent
      +                        and requirements
      +  -v, --verbose         Enable verbose logging in diskimage-builder
      +  --lzma                Use lzma compression for smaller images
      +  --extra-args EXTRA_ARGS
      +                        Extra arguments to pass to diskimage-builder
      +  --elements-path ELEMENTS_PATH
      +                        Path(s) to custom DIB elements separated by a colon
      +

      操作实例:

      +
      # -o选项指定生成的镜像名
      +# ubuntu指定生成ubuntu系统的镜像
      +ironic-python-agent-builder -o my-ubuntu-ipa ubuntu
      +

      可通过设置ARCH环境变量(默认为amd64)指定所构建镜像的架构。如果是arm架构,需要添加:

      +
      export ARCH=aarch64
      +
    • +
    • +

      允许ssh登录

      +

      初始化环境变量,设置用户名、密码,启用免密sudo权限;并添加-e选项使用相应的DIB元素。制作镜像操作如下:

      +
      export DIB_DEV_USER_USERNAME=ipa
      +export DIB_DEV_USER_PWDLESS_SUDO=yes
      +export DIB_DEV_USER_PASSWORD='123'
      +ironic-python-agent-builder -o my-ssh-ubuntu-ipa -e selinux-permissive -e devuser ubuntu
      +
    • +
    • +

      指定代码仓库

      +

      初始化对应的环境变量,然后制作镜像:

      +
      # 直接从gerrit上clone代码
      +DIB_REPOLOCATION_ironic_python_agent=https://review.opendev.org/openstack/ironic-python-agent
      +DIB_REPOREF_ironic_python_agent=stable/yoga
      +
      +# 指定本地仓库及分支
      +DIB_REPOLOCATION_ironic_python_agent=/home/user/path/to/repo
      +DIB_REPOREF_ironic_python_agent=my-test-branch
      +
      +ironic-python-agent-builder ubuntu
      +

      参考:source-repositories

      +
    • +
    +
  16. +
  17. +

    注意

    +

    原生OpenStack中pxe配置文件的模板不支持arm64架构,需要自行修改相关代码:在W(Wallaby)版中,社区的ironic仍然不支持arm64的UEFI PXE启动,表现为生成的grub.cfg文件(一般位于/tftpboot/下)格式不正确,导致PXE启动失败。

    +

    生成的错误配置文件:

    +

    (图:ironic-err,生成的错误grub.cfg配置文件截图)

    +

    如上图所示,arm架构下加载内核与ramdisk镜像的grub命令分别是linux和initrd,而图中标红的命令是x86架构下UEFI PXE启动所使用的命令。

    +

    需要用户对生成grub.cfg的代码逻辑自行修改。

    +
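作为参考,下面给出一个aarch64 UEFI PXE启动时期望的grub.cfg片段示意(仅为示例,deploy_kernel、deploy_ramdisk等路径和参数需按实际环境替换):

```
set default=deploy
set timeout=5

menuentry "deploy" {
    # aarch64下应使用linux/initrd命令加载部署内核与ramdisk(路径为示例值)
    linux deploy_kernel selinux=0 troubleshoot=0 text nofb nomodeset ipa-insecure=1
    initrd deploy_ramdisk
}
```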

    ironic向ipa发送查询命令执行状态请求的tls报错:

    +

    当前版本的ipa和ironic默认都会以开启tls认证的方式向对方发送请求,根据官方文档的说明将其关闭即可。

    +
      +
    • +

      修改ironic配置文件(/etc/ironic/ironic.conf)下面的配置中添加ipa-insecure=1:

      +
      [agent]
      +verify_ca = False
      +[pxe]
      +pxe_append_params = nofb nomodeset vga=normal coreos.autologin ipa-insecure=1
      +
    • +
    • +

      ramdisk镜像中添加ipa配置文件/etc/ironic_python_agent/ironic_python_agent.conf并配置tls的配置如下:

      +

      /etc/ironic_python_agent/ironic_python_agent.conf(需要提前创建/etc/ironic_python_agent目录)

      +
      [DEFAULT]
      +enable_auto_tls = False
      +

      设置权限:

      +
      chown -R ipa.ipa /etc/ironic_python_agent/
      +
    • +
    • +

      ramdisk镜像中修改ipa服务的服务启动文件,添加配置文件选项

      +

      编辑/usr/lib/systemd/system/ironic-python-agent.service文件

      +
      [Unit]
      +Description=Ironic Python Agent
      +After=network-online.target
      +[Service]
      +ExecStartPre=/sbin/modprobe vfat
      +ExecStart=/usr/local/bin/ironic-python-agent --config-file /etc/ironic_python_agent/ironic_python_agent.conf
      +Restart=always
      +RestartSec=30s
      +[Install]
      +WantedBy=multi-user.target
      +
    • +
    +
  18. +
+

Trove

+

Trove是OpenStack的数据库服务,如果用户使用OpenStack提供的数据库服务则推荐使用该组件。否则,可以不用安装。

+

Controller节点

+
    +
  1. +

    创建数据库。

    +

    数据库服务在数据库中存储信息,创建一个trove用户可以访问的trove数据库,替换TROVE_DBPASS为合适的密码。 +

    CREATE DATABASE trove CHARACTER SET utf8;
    +GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'localhost' IDENTIFIED BY 'TROVE_DBPASS';
    +GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'%' IDENTIFIED BY 'TROVE_DBPASS';

    +
  2. +
  3. +

    创建服务凭证以及API端点。

    +

    创建服务凭证。 +

    # 创建trove用户
    +openstack user create --domain default --password-prompt trove
    +# 添加admin角色
    +openstack role add --project service --user trove admin
    +# 创建database服务
    +openstack service create --name trove --description "Database service" database

    +

    创建API端点。 +

    openstack endpoint create --region RegionOne database public http://controller:8779/v1.0/%\(tenant_id\)s
    +openstack endpoint create --region RegionOne database internal http://controller:8779/v1.0/%\(tenant_id\)s
    +openstack endpoint create --region RegionOne database admin http://controller:8779/v1.0/%\(tenant_id\)s

    +
  4. +
  5. +

    安装Trove。 +

    dnf install openstack-trove python-troveclient

    +
  6. +
  7. +

    修改配置文件。

    +

    编辑/etc/trove/trove.conf。 +

    [DEFAULT]
    +bind_host=192.168.0.2
    +log_dir = /var/log/trove
    +network_driver = trove.network.neutron.NeutronDriver
    +network_label_regex=.*
    +management_security_groups = <manage security group>
    +nova_keypair = trove-mgmt
    +default_datastore = mysql
    +taskmanager_manager = trove.taskmanager.manager.Manager
    +trove_api_workers = 5
    +transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
    +reboot_time_out = 300
    +usage_timeout = 900
    +agent_call_high_timeout = 1200
    +use_syslog = False
    +debug = True
    +
    +[database]
    +connection = mysql+pymysql://trove:TROVE_DBPASS@controller/trove
    +
    +[keystone_authtoken]
    +auth_url = http://controller:5000/v3/
    +auth_type = password
    +project_domain_name = Default
    +project_name = service
    +user_domain_name = Default
    +password = TROVE_PASS
    +username = trove
    +
    +[service_credentials]
    +auth_url = http://controller:5000/v3/
    +region_name = RegionOne
    +project_name = service
    +project_domain_name = Default
    +user_domain_name = Default
    +username = trove
    +password = TROVE_PASS
    +
    +[mariadb]
    +tcp_ports = 3306,4444,4567,4568
    +
    +[mysql]
    +tcp_ports = 3306
    +
    +[postgresql]
    +tcp_ports = 5432

    +

    解释:

    +
    +

    [DEFAULT]分组中bind_host配置为Trove控制节点的IP。
    transport_url为RabbitMQ连接信息,RABBIT_PASS替换为RabbitMQ的密码。
    [database]分组中的connection为前面在mysql中为Trove创建的数据库信息。
    keystone_authtoken中的用户信息里,TROVE_PASS替换为实际trove用户的密码。

    +
    +

    编辑/etc/trove/trove-guestagent.conf。 +

    [DEFAULT]
    +log_file = trove-guestagent.log
    +log_dir = /var/log/trove/
    +ignore_users = os_admin
    +control_exchange = trove
    +transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
    +rpc_backend = rabbit
    +command_process_timeout = 60
    +use_syslog = False
    +debug = True
    +
    +[service_credentials]
    +auth_url = http://controller:5000/v3/
    +region_name = RegionOne
    +project_name = service
    +password = TROVE_PASS
    +project_domain_name = Default
    +user_domain_name = Default
    +username = trove
    +
    +[mysql]
    +docker_image = your-registry/your-repo/mysql
    +backup_docker_image = your-registry/your-repo/db-backup-mysql:1.1.0

    +

    解释:

    +
    +

    guestagent是trove中的一个独立组件,需要预先内置到Trove通过Nova创建的虚拟机镜像中。数据库实例创建好后,guestagent进程启动,负责通过消息队列(RabbitMQ)向Trove上报心跳,因此需要配置RabbitMQ的用户和密码信息。
    transport_url为RabbitMQ连接信息,RABBIT_PASS替换为RabbitMQ的密码。
    Trove的用户信息中TROVE_PASS替换为实际trove用户的密码。
    从Victoria版开始,Trove使用一个统一的镜像来运行不同类型的数据库,数据库服务运行在Guest虚拟机的Docker容器中。

    +
    +
  8. +
  9. +

    数据库同步。 +

    su -s /bin/sh -c "trove-manage db_sync" trove

    +
  10. +
  11. +

    完成安装。 +

    # 配置服务自启
    +systemctl enable openstack-trove-api.service openstack-trove-taskmanager.service \ 
    +openstack-trove-conductor.service
    +
    +# 启动服务
    +systemctl start openstack-trove-api.service openstack-trove-taskmanager.service \ 
    +openstack-trove-conductor.service

    +
  12. +
+

Swift

+

Swift 提供了弹性可伸缩、高可用的分布式对象存储服务,适合存储大规模非结构化数据。

+

Controller节点

+
    +
  1. +

    创建服务凭证以及API端点。

    +

    创建服务凭证。 +

    # 创建swift用户
    +openstack user create --domain default --password-prompt swift
    +# 添加admin角色
    +openstack role add --project service --user swift admin
    +# 创建对象存储服务
    +openstack service create --name swift --description "OpenStack Object Storage" object-store

    +

    创建API端点。 +

    openstack endpoint create --region RegionOne object-store public http://controller:8080/v1/AUTH_%\(project_id\)s
    +openstack endpoint create --region RegionOne object-store internal http://controller:8080/v1/AUTH_%\(project_id\)s
    +openstack endpoint create --region RegionOne object-store admin http://controller:8080/v1 

    +
  2. +
  3. +

    安装Swift。 +

    dnf install openstack-swift-proxy python3-swiftclient python3-keystoneclient \ 
    +python3-keystonemiddleware memcached

    +
  4. +
  5. +

    配置proxy-server。

    +

    Swift RPM包里已经包含了一个基本可用的proxy-server.conf,只需要手动修改其中的ip和SWIFT_PASS即可。 +

    vim /etc/swift/proxy-server.conf
    +
    +[filter:authtoken]
    +paste.filter_factory = keystonemiddleware.auth_token:filter_factory
    +www_authenticate_uri = http://controller:5000
    +auth_url = http://controller:5000
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_id = default
    +user_domain_id = default
    +project_name = service
    +username = swift
    +password = SWIFT_PASS
    +delay_auth_decision = True
    +service_token_roles_required = True

    +
  6. +
+

Storage节点

+
    +
  1. +

    安装支持的程序包。 +

    dnf install openstack-swift-account openstack-swift-container openstack-swift-object
    +dnf install xfsprogs rsync

    +
  2. +
  3. +

    将设备/dev/sdb和/dev/sdc格式化为XFS。 +

    mkfs.xfs /dev/sdb
    +mkfs.xfs /dev/sdc

    +
  4. +
  5. +

    创建挂载点目录结构。 +

    mkdir -p /srv/node/sdb
    +mkdir -p /srv/node/sdc

    +
  6. +
  7. +

    找到新分区的UUID。 +

    blkid

    +
  8. +
  9. +

    编辑/etc/fstab文件并将以下内容添加到其中。 +

    UUID="<UUID-from-output-above>" /srv/node/sdb xfs noatime 0 2
    +UUID="<UUID-from-output-above>" /srv/node/sdc xfs noatime 0 2

    +
  10. +
  11. +

    挂载设备。 +

    mount /srv/node/sdb
    +mount /srv/node/sdc

    +

    注意

    +

    如果用户不需要容灾功能,以上步骤只需要创建一个设备即可,同时可以跳过下面的rsync配置。

    +
  12. +
  13. +

    (可选)创建或编辑/etc/rsyncd.conf文件以包含以下内容: +

    [DEFAULT]
    +uid = swift
    +gid = swift
    +log file = /var/log/rsyncd.log
    +pid file = /var/run/rsyncd.pid
    +address = MANAGEMENT_INTERFACE_IP_ADDRESS
    +
    +[account]
    +max connections = 2
    +path = /srv/node/
    +read only = False
    +lock file = /var/lock/account.lock
    +
    +[container]
    +max connections = 2
    +path = /srv/node/
    +read only = False
    +lock file = /var/lock/container.lock
    +
    +[object]
    +max connections = 2
    +path = /srv/node/
    +read only = False
    +lock file = /var/lock/object.lock

    +

    替换MANAGEMENT_INTERFACE_IP_ADDRESS为存储节点上管理网络的IP地址

    +

    启动rsyncd服务并配置它在系统启动时启动: +

    systemctl enable rsyncd.service
    +systemctl start rsyncd.service

    +
  14. +
  15. +

    配置存储节点。

    +

    编辑/etc/swift目录的account-server.conf、container-server.conf和object-server.conf文件,替换bind_ip为存储节点上管理网络的IP地址。 +

    [DEFAULT]
    +bind_ip = 192.168.0.4

    +

    确保挂载点目录结构的正确所有权。 +

    chown -R swift:swift /srv/node

    +

    创建recon目录并确保其拥有正确的所有权。 +

    mkdir -p /var/cache/swift
    +chown -R root:swift /var/cache/swift
    +chmod -R 775 /var/cache/swift

    +
  16. +
+

Controller节点创建并分发环

+
    +
  1. +

    创建账号环。

    +

    切换到/etc/swift目录。 +

    cd /etc/swift

    +

    创建基础account.builder文件。 +

    swift-ring-builder account.builder create 10 1 1

    +
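create命令的三个参数依次为part_power、replicas和min_part_hours,本文取值10 1 1仅为示例,生产环境请按集群规模调整:

```shell
# swift-ring-builder <builder文件> create <part_power> <replicas> <min_part_hours>
# part_power=10    : 分区数为2^10
# replicas=1       : 每个对象仅保留1个副本(无冗余,仅适合测试环境)
# min_part_hours=1 : 分区两次迁移之间的最小间隔(小时)
swift-ring-builder account.builder create 10 1 1
```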

    将每个存储节点添加到环中。 +

    swift-ring-builder account.builder add --region 1 --zone 1 \
    +--ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS \ 
    +--port 6202  --device DEVICE_NAME \ 
    +--weight 100

    +
    +

    替换STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS为存储节点上管理网络的IP地址。\ +替换DEVICE_NAME为同一存储节点上的存储设备名称。

    +
    +

    注意

    +

    对每个存储节点上的每个存储设备重复此命令

    +

    验证账号环内容。 +

    swift-ring-builder account.builder

    +

    重新平衡账号环。 +

    swift-ring-builder account.builder rebalance

    +
  2. +
  3. +

    创建容器环。

    +

    切换到/etc/swift目录。

    +

    创建基础container.builder文件。 +

    swift-ring-builder container.builder create 10 1 1

    +

    将每个存储节点添加到环中。 +

    swift-ring-builder container.builder add --region 1 --zone 1 \
    +--ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS 
    +--port 6201 --device DEVICE_NAME \
    +--weight 100

    +
    +

    替换STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS为存储节点上管理网络的IP地址。\ +替换DEVICE_NAME为同一存储节点上的存储设备名称。

    +
    +

    注意

    +

    对每个存储节点上的每个存储设备重复此命令

    +

    验证容器环内容。 +

    swift-ring-builder container.builder

    +

    重新平衡容器环。 +

    swift-ring-builder container.builder rebalance

    +
  4. +
  5. +

    创建对象环。

    +

    切换到/etc/swift目录。

    +

    创建基础object.builder文件。 +

    swift-ring-builder object.builder create 10 1 1

    +

    将每个存储节点添加到环中。 +

     swift-ring-builder object.builder add --region 1 --zone 1 \
    + --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS \
    + --port 6200 --device DEVICE_NAME \
    + --weight 100

    +
    +

    替换STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS为存储节点上管理网络的IP地址。\ +替换DEVICE_NAME为同一存储节点上的存储设备名称。

    +
    +

    注意

    +

    对每个存储节点上的每个存储设备重复此命令

    +

    验证对象环内容。 +

    swift-ring-builder object.builder

    +

    重新平衡对象环。 +

    swift-ring-builder object.builder rebalance

    +
  6. +
  7. +

    分发环配置文件。

    +

    将account.ring.gz、container.ring.gz以及object.ring.gz文件复制到每个存储节点和运行代理服务的任何其他节点上的/etc/swift目录。

    +
  8. +
  9. +

    编辑配置文件/etc/swift/swift.conf。 +

    [swift-hash]
    +swift_hash_path_suffix = test-hash
    +swift_hash_path_prefix = test-hash
    +
    +[storage-policy:0]
    +name = Policy-0
    +default = yes

    +

    用唯一的随机值替换test-hash,可参考下面的示例生成。

    +
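如下是一种生成随机值的示例做法(使用openssl,也可使用其他随机源;注意前缀/后缀一旦写入并投入使用后不可再修改):

```shell
# 分别生成随机的hash前缀与后缀(示例)
openssl rand -hex 16
openssl rand -hex 16
```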

    将swift.conf文件复制到每个存储节点和运行代理服务的任何其他节点上的/etc/swift目录。

    +

    在所有节点上,确保配置目录的正确所有权。 +

    chown -R root:swift /etc/swift

    +
  10. +
  11. +

    完成安装

    +
  12. +
+

在控制节点和运行代理服务的任何其他节点上,启动对象存储代理服务及其依赖项,并将它们配置为在系统启动时启动。 +

systemctl enable openstack-swift-proxy.service memcached.service
+systemctl start openstack-swift-proxy.service memcached.service

+

在存储节点上,启动对象存储服务并将它们配置为在系统启动时启动。 +

systemctl enable openstack-swift-account.service \
+openstack-swift-account-auditor.service \
+openstack-swift-account-reaper.service \
+openstack-swift-account-replicator.service \
+openstack-swift-container.service \
+openstack-swift-container-auditor.service \
+openstack-swift-container-replicator.service \
+openstack-swift-container-updater.service \
+openstack-swift-object.service \
+openstack-swift-object-auditor.service \
+openstack-swift-object-replicator.service \
+openstack-swift-object-updater.service
+
+systemctl start openstack-swift-account.service \
+openstack-swift-account-auditor.service \
+openstack-swift-account-reaper.service \
+openstack-swift-account-replicator.service \
+openstack-swift-container.service \
+openstack-swift-container-auditor.service \
+openstack-swift-container-replicator.service \
+openstack-swift-container-updater.service \
+openstack-swift-object.service \
+openstack-swift-object-auditor.service \
+openstack-swift-object-replicator.service \
+openstack-swift-object-updater.service

+

Cyborg

+

Cyborg为OpenStack提供加速器设备的支持,包括 GPU, FPGA, ASIC, NP, SoCs, NVMe/NOF SSDs, ODP, DPDK/SPDK等等。

+

Controller节点

+
    +
  1. +

    初始化对应数据库

    +
    mysql -u root -p
    +
    +MariaDB [(none)]> CREATE DATABASE cyborg;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'localhost' IDENTIFIED BY 'CYBORG_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'%' IDENTIFIED BY 'CYBORG_DBPASS';
    +MariaDB [(none)]> exit;
    +
  2. +
  3. +

    创建用户和服务,并记住创建cyborg用户时输入的密码,用于配置CYBORG_PASS

    +
    source ~/.admin-openrc
    +openstack user create --domain default --password-prompt cyborg
    +openstack role add --project service --user cyborg admin
    +openstack service create --name cyborg --description "Acceleration Service" accelerator
    +
  4. +
  5. +

    使用uwsgi部署Cyborg api服务

    +
    openstack endpoint create --region RegionOne accelerator public http://controller/accelerator/v2
    +openstack endpoint create --region RegionOne accelerator internal http://controller/accelerator/v2
    +openstack endpoint create --region RegionOne accelerator admin http://controller/accelerator/v2
    +
  6. +
  7. +

    安装Cyborg

    +
    dnf install openstack-cyborg
    +
  8. +
  9. +

    配置Cyborg

    +

    修改/etc/cyborg/cyborg.conf

    +
    [DEFAULT]
    +transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
    +use_syslog = False
    +state_path = /var/lib/cyborg
    +debug = True
    +
    +[api]
    +host_ip = 0.0.0.0
    +
    +[database]
    +connection = mysql+pymysql://cyborg:CYBORG_DBPASS@controller/cyborg
    +
    +[service_catalog]
    +cafile = /opt/stack/data/ca-bundle.pem
    +project_domain_id = default
    +user_domain_id = default
    +project_name = service
    +password = CYBORG_PASS
    +username = cyborg
    +auth_url = http://controller:5000/v3/
    +auth_type = password
    +
    +[placement]
    +project_domain_name = Default
    +project_name = service
    +user_domain_name = Default
    +password = PLACEMENT_PASS
    +username = placement
    +auth_url = http://controller:5000/v3/
    +auth_type = password
    +auth_section = keystone_authtoken
    +
    +[nova]
    +project_domain_name = Default
    +project_name = service
    +user_domain_name = Default
    +password = NOVA_PASS
    +username = nova
    +auth_url = http://controller:5000/v3/
    +auth_type = password
    +auth_section = keystone_authtoken
    +
    +[keystone_authtoken]
    +memcached_servers = localhost:11211
    +signing_dir = /var/cache/cyborg/api
    +cafile = /opt/stack/data/ca-bundle.pem
    +project_domain_name = Default
    +project_name = service
    +user_domain_name = Default
    +password = CYBORG_PASS
    +username = cyborg
    +auth_url = http://controller:5000/v3/
    +auth_type = password
    +
  10. +
  11. +

    同步数据库表格

    +
    cyborg-dbsync --config-file /etc/cyborg/cyborg.conf upgrade
    +
  12. +
  13. +

    启动Cyborg服务

    +
    systemctl enable openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent
    +systemctl start openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent
    +
  14. +
+

Aodh

+

Aodh可以根据由Ceilometer或者Gnocchi收集的监控数据创建告警,并设置触发规则。

+

Controller节点

+
    +
  1. +

    创建数据库。

    +
    CREATE DATABASE aodh;
    +GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'localhost' IDENTIFIED BY 'AODH_DBPASS';
    +GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'%' IDENTIFIED BY 'AODH_DBPASS';
    +
  2. +
  3. +

    创建服务凭证以及API端点。

    +

    创建服务凭证。 +

    openstack user create --domain default --password-prompt aodh
    +openstack role add --project service --user aodh admin
    +openstack service create --name aodh --description "Telemetry" alarming

    +

    创建API端点。 +

    openstack endpoint create --region RegionOne alarming public http://controller:8042
    +openstack endpoint create --region RegionOne alarming internal http://controller:8042
    +openstack endpoint create --region RegionOne alarming admin http://controller:8042

    +
  4. +
  5. +

    安装Aodh。 +

    dnf install openstack-aodh-api openstack-aodh-evaluator \
    +openstack-aodh-notifier openstack-aodh-listener \
    +openstack-aodh-expirer python3-aodhclient

    +
  6. +
  7. +

    修改配置文件。 +

    vim /etc/aodh/aodh.conf
    +
    +[database]
    +connection = mysql+pymysql://aodh:AODH_DBPASS@controller/aodh
    +
    +[DEFAULT]
    +transport_url = rabbit://openstack:RABBIT_PASS@controller
    +auth_strategy = keystone
    +
    +[keystone_authtoken]
    +www_authenticate_uri = http://controller:5000
    +auth_url = http://controller:5000
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_id = default
    +user_domain_id = default
    +project_name = service
    +username = aodh
    +password = AODH_PASS
    +
    +[service_credentials]
    +auth_type = password
    +auth_url = http://controller:5000/v3
    +project_domain_id = default
    +user_domain_id = default
    +project_name = service
    +username = aodh
    +password = AODH_PASS
    +interface = internalURL
    +region_name = RegionOne

    +
  8. +
  9. +

    同步数据库。 +

    aodh-dbsync

    +
  10. +
  11. +

    完成安装。 +

    # 配置服务自启
    +systemctl enable openstack-aodh-api.service openstack-aodh-evaluator.service \
    +openstack-aodh-notifier.service openstack-aodh-listener.service
    +
    +# 启动服务
    +systemctl start openstack-aodh-api.service openstack-aodh-evaluator.service \
    +openstack-aodh-notifier.service openstack-aodh-listener.service

    +
  12. +
+

Gnocchi

+

Gnocchi是一个开源的时间序列数据库,可以对接Ceilometer。

+

Controller节点

+
    +
  1. +

    创建数据库。 +

    CREATE DATABASE gnocchi;
    +GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'localhost' IDENTIFIED BY 'GNOCCHI_DBPASS';
    +GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'%' IDENTIFIED BY 'GNOCCHI_DBPASS';

    +
  2. +
  3. +

    创建服务凭证以及API端点。

    +

    创建服务凭证。 +

    openstack user create --domain default --password-prompt gnocchi
    +openstack role add --project service --user gnocchi admin
    +openstack service create --name gnocchi --description "Metric Service" metric

    +

    创建API端点。 +

    openstack endpoint create --region RegionOne metric public http://controller:8041
    +openstack endpoint create --region RegionOne metric internal http://controller:8041
    +openstack endpoint create --region RegionOne metric admin http://controller:8041

    +
  4. +
  5. +

    安装Gnocchi。 +

    dnf install openstack-gnocchi-api openstack-gnocchi-metricd python3-gnocchiclient

    +
  6. +
  7. +

    修改配置文件。 +

    vim /etc/gnocchi/gnocchi.conf
    +[api]
    +auth_mode = keystone
    +port = 8041
    +uwsgi_mode = http-socket
    +
    +[keystone_authtoken]
    +auth_type = password
    +auth_url = http://controller:5000/v3
    +project_domain_name = Default
    +user_domain_name = Default
    +project_name = service
    +username = gnocchi
    +password = GNOCCHI_PASS
    +interface = internalURL
    +region_name = RegionOne
    +
    +[indexer]
    +url = mysql+pymysql://gnocchi:GNOCCHI_DBPASS@controller/gnocchi
    +
    +[storage]
    +# coordination_url is not required but specifying one will improve
    +# performance with better workload division across workers.
    +# coordination_url = redis://controller:6379
    +file_basepath = /var/lib/gnocchi
    +driver = file

    +
  8. +
  9. +

    同步数据库。 +

    gnocchi-upgrade

    +
  10. +
  11. +

    完成安装。 +

    # 配置服务自启
    +systemctl enable openstack-gnocchi-api.service openstack-gnocchi-metricd.service
    +
    +# 启动服务
    +systemctl start openstack-gnocchi-api.service openstack-gnocchi-metricd.service

    +
  12. +
+

Ceilometer

+

Ceilometer是OpenStack中负责数据收集的服务。

+

Controller节点

+
    +
  1. +

    创建服务凭证。 +

    openstack user create --domain default --password-prompt ceilometer
    +openstack role add --project service --user ceilometer admin
    +openstack service create --name ceilometer --description "Telemetry" metering

    +
  2. +
  3. +

    安装Ceilometer软件包。 +

    dnf install openstack-ceilometer-notification openstack-ceilometer-central

    +
  4. +
  5. +

    编辑配置文件/etc/ceilometer/pipeline.yaml。 +

    publishers:
    +    # set address of Gnocchi
    +    # + filter out Gnocchi-related activity meters (Swift driver)
    +    # + set default archive policy
    +    - gnocchi://?filter_project=service&archive_policy=low

    +
  6. +
  7. +

    编辑配置文件/etc/ceilometer/ceilometer.conf。 +

    [DEFAULT]
    +transport_url = rabbit://openstack:RABBIT_PASS@controller
    +
    +[service_credentials]
    +auth_type = password
    +auth_url = http://controller:5000/v3
    +project_domain_id = default
    +user_domain_id = default
    +project_name = service
    +username = ceilometer
    +password = CEILOMETER_PASS
    +interface = internalURL
    +region_name = RegionOne

    +
  8. +
  9. +

    数据库同步。 +

    ceilometer-upgrade

    +
  10. +
  11. +

    完成控制节点Ceilometer安装。 +

    # 配置服务自启
    +systemctl enable openstack-ceilometer-notification.service openstack-ceilometer-central.service
    +# 启动服务
    +systemctl start openstack-ceilometer-notification.service openstack-ceilometer-central.service

    +
  12. +
+

Compute节点

+
    +
  1. +

    安装Ceilometer软件包。 +

    dnf install openstack-ceilometer-compute
    +dnf install openstack-ceilometer-ipmi       # 可选

    +
  2. +
  3. +

    编辑配置文件/etc/ceilometer/ceilometer.conf。 +

    [DEFAULT]
    +transport_url = rabbit://openstack:RABBIT_PASS@controller
    +
    +[service_credentials]
    +auth_url = http://controller:5000
    +project_domain_id = default
    +user_domain_id = default
    +auth_type = password
    +username = ceilometer
    +project_name = service
    +password = CEILOMETER_PASS
    +interface = internalURL
    +region_name = RegionOne

    +
  4. +
  5. +

    编辑配置文件/etc/nova/nova.conf。 +

    [DEFAULT]
    +instance_usage_audit = True
    +instance_usage_audit_period = hour
    +
    +[notifications]
    +notify_on_state_change = vm_and_task_state
    +
    +[oslo_messaging_notifications]
    +driver = messagingv2

    +
  6. +
  7. +

    完成安装。 +

    systemctl enable openstack-ceilometer-compute.service
    +systemctl start openstack-ceilometer-compute.service
    +systemctl enable openstack-ceilometer-ipmi.service         # 可选
    +systemctl start openstack-ceilometer-ipmi.service          # 可选
    +
    +# 重启nova-compute服务
    +systemctl restart openstack-nova-compute.service

    +
  8. +
+

Heat

+

Heat是 OpenStack 自动编排服务,基于描述性的模板来编排复合云应用,也称为Orchestration Service。Heat 的各服务一般安装在Controller节点上。

+

Controller节点

+
    +
  1. +

    创建heat数据库,并授予heat数据库正确的访问权限,替换HEAT_DBPASS为合适的密码

    +
    mysql -u root -p
    +
    +MariaDB [(none)]> CREATE DATABASE heat;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'localhost' IDENTIFIED BY 'HEAT_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'%' IDENTIFIED BY 'HEAT_DBPASS';
    +MariaDB [(none)]> exit;
    +
  2. +
  3. +

    创建服务凭证,创建heat用户,并为其增加admin角色

    +
    source ~/.admin-openrc
    +
    +openstack user create --domain default --password-prompt heat
    +openstack role add --project service --user heat admin
    +
  4. +
  5. +

    创建heat和heat-cfn服务及其对应的API端点

    +
    openstack service create --name heat --description "Orchestration" orchestration
    +openstack service create --name heat-cfn --description "Orchestration"  cloudformation
    +openstack endpoint create --region RegionOne orchestration public http://controller:8004/v1/%\(tenant_id\)s
    +openstack endpoint create --region RegionOne orchestration internal http://controller:8004/v1/%\(tenant_id\)s
    +openstack endpoint create --region RegionOne orchestration admin http://controller:8004/v1/%\(tenant_id\)s
    +openstack endpoint create --region RegionOne cloudformation public http://controller:8000/v1
    +openstack endpoint create --region RegionOne cloudformation internal http://controller:8000/v1
    +openstack endpoint create --region RegionOne cloudformation admin http://controller:8000/v1
    +
  6. +
  7. +

    创建stack管理的额外信息

    +

    创建 heat domain +

    openstack domain create --description "Stack projects and users" heat
    +# 在heat domain下创建heat_domain_admin用户,并记下输入的密码,用于配置下面的HEAT_DOMAIN_PASS
    +openstack user create --domain heat --password-prompt heat_domain_admin
    +# 为heat_domain_admin用户增加admin角色
    +openstack role add --domain heat --user-domain heat --user heat_domain_admin admin
    +# 创建heat_stack_owner角色
    +openstack role create heat_stack_owner
    +# 创建heat_stack_user角色
    +openstack role create heat_stack_user

    +
  8. +
  9. +

    安装软件包

    +
    dnf install openstack-heat-api openstack-heat-api-cfn openstack-heat-engine
    +
  10. +
  11. +

    修改配置文件/etc/heat/heat.conf

    +
    [DEFAULT]
    +transport_url = rabbit://openstack:RABBIT_PASS@controller
    +heat_metadata_server_url = http://controller:8000
    +heat_waitcondition_server_url = http://controller:8000/v1/waitcondition
    +stack_domain_admin = heat_domain_admin
    +stack_domain_admin_password = HEAT_DOMAIN_PASS
    +stack_user_domain_name = heat
    +
    +[database]
    +connection = mysql+pymysql://heat:HEAT_DBPASS@controller/heat
    +
    +[keystone_authtoken]
    +www_authenticate_uri = http://controller:5000
    +auth_url = http://controller:5000
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = default
    +user_domain_name = default
    +project_name = service
    +username = heat
    +password = HEAT_PASS
    +
    +[trustee]
    +auth_type = password
    +auth_url = http://controller:5000
    +username = heat
    +password = HEAT_PASS
    +user_domain_name = default
    +
    +[clients_keystone]
    +auth_uri = http://controller:5000
    +
  12. +
  13. +

    初始化heat数据库表

    +
    su -s /bin/sh -c "heat-manage db_sync" heat
    +
  14. +
  15. +

    启动服务

    +
    systemctl enable openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service
    +systemctl start openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service
    +
  16. +
+

Tempest

+

Tempest是OpenStack的集成测试服务,如果用户需要全面自动化测试已安装的OpenStack环境的功能,则推荐使用该组件。否则,可以不用安装。

+

Controller节点

+
    +
  1. +

    安装Tempest

    +
    dnf install openstack-tempest
    +
  2. +
  3. +

    初始化目录

    +
    tempest init mytest
    +
  4. +
  5. +

    修改配置文件。

    +
    cd mytest
    +vi etc/tempest.conf
    +

    tempest.conf中需要配置当前OpenStack环境的信息,具体内容可以参考官方示例,下文也给出一个最小配置示意。

    +
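下面是一个最小化的tempest.conf配置示意(各项取值均为示例,需替换为实际环境中的admin口令、镜像ID、规格ID与外部网络ID):

```ini
[auth]
admin_username = admin
admin_password = ADMIN_PASS
admin_project_name = admin
admin_domain_name = Default

[identity]
uri_v3 = http://controller:5000/v3

[compute]
# 替换为实际的镜像ID与规格ID
image_ref = <IMAGE_ID>
flavor_ref = <FLAVOR_ID>

[network]
# 替换为实际的外部网络ID
public_network_id = <PUBLIC_NETWORK_ID>
```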
  6. +
  7. +

    执行测试

    +
    tempest run
    +
  8. +
  9. +

    安装tempest扩展(可选)。OpenStack各个服务本身也提供了一些tempest测试包,用户可以安装这些包来丰富tempest的测试内容。在Yoga中,我们提供了Cinder、Glance、Keystone、Ironic、Trove的扩展测试,用户可以执行如下命令进行安装使用:

    dnf install python3-cinder-tempest-plugin python3-glance-tempest-plugin python3-ironic-tempest-plugin python3-keystone-tempest-plugin python3-trove-tempest-plugin

    +
  10. +
+

基于OpenStack SIG开发工具oos部署

+

oos(openEuler OpenStack SIG)是OpenStack SIG提供的命令行工具。其中oos env系列命令提供了一键部署OpenStack (all in one或三节点cluster)的ansible脚本,用户可以使用该脚本快速部署一套基于 openEuler RPM 的 OpenStack 环境。oos工具支持对接云provider(目前仅支持华为云provider)和主机纳管两种方式来部署 OpenStack 环境,下面以对接华为云部署一套all in one的OpenStack环境为例说明oos工具的使用方法。

+
    +
  1. +

    安装oos工具

    +

    oos工具在不断演进,兼容性、可用性不能时刻保证,建议使用已验证的版本,这里选择1.0.6:

    pip install openstack-sig-tool==1.0.6

    +
  2. +
  3. +

    配置对接华为云provider的信息

    +

    打开/usr/local/etc/oos/oos.conf文件,修改配置为您拥有的华为云资源信息,AK/SK是用户的华为云登录密钥,其他配置保持默认即可(默认使用新加坡region),需要提前在云上创建对应的资源,包括:

    +
      +
    • 一个安全组,名字默认是oos
    • +
    • 一个openEuler镜像,名称格式是openEuler-%(release)s-%(arch)s,例如openEuler-22.09-arm64
    • +
    • 一个VPC,名称是oos_vpc
    • +
    • 该VPC下面两个子网,名称是oos_subnet1oos_subnet2
    • +
    +
    [huaweicloud]
    +ak = 
    +sk = 
    +region = ap-southeast-3
    +root_volume_size = 100
    +data_volume_size = 100
    +security_group_name = oos
    +image_format = openEuler-%%(release)s-%%(arch)s
    +vpc_name = oos_vpc
    +subnet1_name = oos_subnet1
    +subnet2_name = oos_subnet2
    +
  4. +
  5. +

    配置 OpenStack 环境信息

    +

    打开/usr/local/etc/oos/oos.conf文件,根据当前机器环境和需求修改配置。内容如下:

    +
    [environment]
    +mysql_root_password = root
    +mysql_project_password = root
    +rabbitmq_password = root
    +project_identity_password = root
    +enabled_service = keystone,neutron,cinder,placement,nova,glance,horizon,aodh,ceilometer,cyborg,gnocchi,kolla,heat,swift,trove,tempest
    +neutron_provider_interface_name = br-ex
    +default_ext_subnet_range = 10.100.100.0/24
    +default_ext_subnet_gateway = 10.100.100.1
    +neutron_dataplane_interface_name = eth1
    +cinder_block_device = vdb
    +swift_storage_devices = vdc
    +swift_hash_path_suffix = ash
    +swift_hash_path_prefix = has
    +glance_api_workers = 2
    +cinder_api_workers = 2
    +nova_api_workers = 2
    +nova_metadata_api_workers = 2
    +nova_conductor_workers = 2
    +nova_scheduler_workers = 2
    +neutron_api_workers = 2
    +horizon_allowed_host = *
    +kolla_openeuler_plugin = false
    +

    关键配置

    | 配置项 | 解释 |
    |:------|:-----|
    | enabled_service | 安装服务列表,根据用户需求自行删减 |
    | neutron_provider_interface_name | neutron L3网桥名称 |
    | default_ext_subnet_range | neutron私网IP段 |
    | default_ext_subnet_gateway | neutron私网gateway |
    | neutron_dataplane_interface_name | neutron使用的网卡,推荐使用一张新的网卡,以免和现有网卡冲突,防止all in one主机断连 |
    | cinder_block_device | cinder使用的卷设备名 |
    | swift_storage_devices | swift使用的卷设备名 |
    | kolla_openeuler_plugin | 是否启用kolla plugin。设置为True,kolla将支持部署openEuler容器(只在openEuler LTS上支持) |
    +
  6. +
  7. +

    华为云上面创建一台openEuler 22.09的x86_64虚拟机,用于部署all in one 的 OpenStack

    +
    # sshpass在`oos env create`过程中被使用,用于配置对目标虚拟机的免密访问
    +dnf install sshpass
    +oos env create -r 22.09 -f small -a x86 -n test-oos all_in_one
    +

    具体的参数可以使用oos env create --help命令查看

    +
  8. +
  9. +

    部署OpenStack all in one 环境

    +
    oos env setup test-oos -r yoga
    +

    具体的参数可以使用oos env setup --help命令查看

    +
  10. +
  11. +

    初始化tempest环境

    +

    如果用户想使用该环境运行tempest测试的话,可以执行命令oos env init,会自动把tempest需要的OpenStack资源自动创建好

    +
    oos env init test-oos
    +
  12. +
  13. +

    执行tempest测试

    +

    用户可以使用oos自动执行:

    +
    oos env test test-oos
    +

    也可以手动登录目标节点,进入根目录下的mytest目录,手动执行tempest run

    +
  14. +
+

如果是以主机纳管的方式部署 OpenStack 环境,总体逻辑与上文对接华为云时一致,1、3、5、6步操作不变,跳过第2步对华为云provider信息的配置,在第4步改为纳管主机操作。

+

被纳管的虚机需要保证:

+
    +
  • 至少有一张给oos使用的网卡,名称与配置保持一致,相关配置neutron_dataplane_interface_name
  • +
  • 至少有一块给oos使用的硬盘,名称与配置保持一致,相关配置cinder_block_device
  • +
  • 如果要部署swift服务,则需要新增一块硬盘,名称与配置保持一致,相关配置swift_storage_devices
  • +
+
# sshpass在`oos env create`过程中被使用,用于配置对目标主机的免密访问
+dnf install sshpass
+oos env manage -r 22.09 -i TARGET_MACHINE_IP -p TARGET_MACHINE_PASSWD -n test-oos
+

替换TARGET_MACHINE_IP为目标机ip、TARGET_MACHINE_PASSWD为目标机密码。具体的参数可以使用oos env manage --help命令查看。

+

基于OpenStack SIG部署工具opensd部署

+

opensd是OpenStack SIG提供的部署工具,用于以脚本化方式批量部署OpenStack各组件服务。

+

部署步骤

+

1. 部署前需要确认的信息

+
    +
  • 安装操作系统时,需将SELinux设置为disabled
  • 安装操作系统时,将/etc/ssh/sshd_config配置文件内的UseDNS设置为no
  • 操作系统语言必须设置为英文
  • 部署之前请确保所有计算节点/etc/hosts文件内没有对计算主机名的解析(上述各项可参考下文的自检命令快速检查)
+
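部署前可用如下命令快速自检上述各项(检查方式仅为示例,主机名关键字等请按实际环境调整):

```shell
# SELinux应为Disabled
getenforce
# sshd应配置UseDNS no
grep -i '^UseDNS' /etc/ssh/sshd_config
# 系统语言应为英文
echo $LANG
# 计算节点/etc/hosts中不应存在对计算主机名的解析(此处以compute关键字为例)
grep -i compute /etc/hosts || echo "no compute host entries"
```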

2. ceph pool与认证创建(可选)

+

不使用ceph或已有ceph集群可忽略此步骤

+

在任意一台ceph monitor节点执行:

+

2.1 创建pool:

+
ceph osd pool create volumes 2048
+ceph osd pool create images 2048
+

2.2 初始化pool

+
rbd pool init volumes
+rbd pool init images
+

2.3 创建用户认证

+
ceph auth get-or-create client.glance mon 'profile rbd' osd 'profile rbd pool=images' mgr 'profile rbd pool=images'
+ceph auth get-or-create client.cinder mon 'profile rbd' osd 'profile rbd pool=volumes, profile rbd pool=images' mgr 'profile rbd pool=volumes'
+

3. 配置lvm(可选)

+

根据物理机磁盘配置与闲置情况,为mysql数据目录挂载额外的磁盘空间。示例如下(根据实际情况做配置):

+
fdisk -l
+Disk /dev/sdd: 479.6 GB, 479559942144 bytes, 936640512 sectors
+Units = sectors of 1 * 512 = 512 bytes
+Sector size (logical/physical): 512 bytes / 4096 bytes
+I/O size (minimum/optimal): 4096 bytes / 4096 bytes
+Disk label type: dos
+Disk identifier: 0x000ed242
+
+# 创建分区(使用整块磁盘)
+parted -s /dev/sdd mkpart primary 0% 100%
+
+# 创建pv
+partprobe /dev/sdd1
+pvcreate /dev/sdd1
+
+# 创建、激活vg
+vgcreate vg_mariadb /dev/sdd1
+vgchange -ay vg_mariadb
+
+# 查看vg容量
+vgdisplay
+--- Volume group ---
+VG Name vg_mariadb
+System ID
+Format lvm2
+Metadata Areas 1
+Metadata Sequence No 2
+VG Access read/write
+VG Status resizable
+MAX LV 0
+Cur LV 1
+Open LV 1
+Max PV 0
+Cur PV 1
+Act PV 1
+VG Size 446.62 GiB
+PE Size 4.00 MiB
+Total PE 114335
+Alloc PE / Size 114176 / 446.00 GiB
+Free PE / Size 159 / 636.00 MiB
+VG UUID bVUmDc-VkMu-Vi43-mg27-TEkG-oQfK-TvqdEc
+
+# 创建lv
+lvcreate -L 446G -n lv_mariadb vg_mariadb
+
+# 格式化磁盘并获取卷的UUID
+mkfs.ext4 /dev/mapper/vg_mariadb-lv_mariadb
+blkid /dev/mapper/vg_mariadb-lv_mariadb
+/dev/mapper/vg_mariadb-lv_mariadb: UUID="98d513eb-5f64-4aa5-810e-dc7143884fa2" TYPE="ext4"
+# 注:98d513eb-5f64-4aa5-810e-dc7143884fa2为卷的UUID
+
+# 挂载磁盘并清空原目录内容
+mount /dev/mapper/vg_mariadb-lv_mariadb /var/lib/mysql
+rm -rf /var/lib/mysql/*
+

4. 配置yum repo

+

在部署节点执行:

+

4.1 备份yum源

+
mkdir /etc/yum.repos.d/bak/
+mv /etc/yum.repos.d/*.repo /etc/yum.repos.d/bak/
+

4.2 配置yum repo

+
cat > /etc/yum.repos.d/opensd.repo << 'EOF'  # 引用EOF,避免$basearch被shell提前展开
+[epol]
+name=epol
+baseurl=http://repo.openeuler.org/openEuler-22.09/EPOL/main/$basearch/
+enabled=1
+gpgcheck=0
+
+[everything]
+name=everything
+baseurl=http://repo.openeuler.org/openEuler-22.09/$basearch/
+enabled=1
+gpgcheck=0
+
+EOF
+

4.3 更新yum缓存

+
yum clean all
+yum makecache
+
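可执行如下命令确认epol与everything两个仓库已生效(示例):

```shell
dnf repolist
```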

5. 安装opensd

+

在部署节点执行:

+

5.1 克隆opensd源码并安装

+
git clone https://gitee.com/openeuler/opensd
+cd opensd
+python3 setup.py install
+

6. 做ssh互信

+

在部署节点执行:

+

6.1 生成密钥对

+

执行如下命令并一路回车

+
ssh-keygen
+

6.2 生成主机IP地址文件

+

在auto_ssh_host_ip中配置所有用到的主机ip, 示例:

+
cd /usr/local/share/opensd/tools/
+vim auto_ssh_host_ip
+
+10.0.0.1
+10.0.0.2
+...
+10.0.0.10
+

6.3 更改密码并执行脚本

+

将免密脚本/usr/local/bin/opensd-auto-ssh内123123替换为主机真实密码

+
# 替换脚本内123123字符串
+vim /usr/local/bin/opensd-auto-ssh
+
## 安装expect后执行脚本
+dnf install expect -y
+opensd-auto-ssh
+

6.4 部署节点与ceph monitor做互信(可选)

+
ssh-copy-id root@x.x.x.x
+

7. 配置opensd

+

在部署节点执行:

+

7.1 生成随机密码

+

安装 python3-pbr, python3-utils, python3-pyyaml, python3-oslo-utils并随机生成密码 +

dnf install python3-pbr python3-utils python3-pyyaml python3-oslo-utils -y
+# 执行命令生成密码
+opensd-genpwd
+# 检查密码是否生成
+cat /usr/local/share/opensd/etc_examples/opensd/passwords.yml

+

7.2 配置inventory文件

+

主机信息包含:主机名、ansible_host IP、availability_zone,三者均需配置缺一不可,示例:

+
vim /usr/local/share/opensd/ansible/inventory/multinode
+# 三台控制节点主机信息
+[control]
+controller1 ansible_host=10.0.0.35 availability_zone=az01.cell01.cn-yogadev-1
+controller2 ansible_host=10.0.0.36 availability_zone=az01.cell01.cn-yogadev-1
+controller3 ansible_host=10.0.0.37 availability_zone=az01.cell01.cn-yogadev-1
+
+# 网络节点信息,与控制节点保持一致
+[network]
+controller1 ansible_host=10.0.0.35 availability_zone=az01.cell01.cn-yogadev-1
+controller2 ansible_host=10.0.0.36 availability_zone=az01.cell01.cn-yogadev-1
+controller3 ansible_host=10.0.0.37 availability_zone=az01.cell01.cn-yogadev-1
+
+# cinder-volume服务节点信息
+[storage]
+storage1 ansible_host=10.0.0.61 availability_zone=az01.cell01.cn-yogadev-1
+storage2 ansible_host=10.0.0.78 availability_zone=az01.cell01.cn-yogadev-1
+storage3 ansible_host=10.0.0.82 availability_zone=az01.cell01.cn-yogadev-1
+
+# Cell1 集群信息
+[cell-control-cell1]
+cell1 ansible_host=10.0.0.24 availability_zone=az01.cell01.cn-yogadev-1
+cell2 ansible_host=10.0.0.25 availability_zone=az01.cell01.cn-yogadev-1
+cell3 ansible_host=10.0.0.26 availability_zone=az01.cell01.cn-yogadev-1
+
+[compute-cell1]
+compute1 ansible_host=10.0.0.27 availability_zone=az01.cell01.cn-yogadev-1
+compute2 ansible_host=10.0.0.28 availability_zone=az01.cell01.cn-yogadev-1
+compute3 ansible_host=10.0.0.29 availability_zone=az01.cell01.cn-yogadev-1
+
+[cell1:children]
+cell-control-cell1
+compute-cell1
+
+# Cell2集群信息
+[cell-control-cell2]
+cell4 ansible_host=10.0.0.36 availability_zone=az03.cell02.cn-yogadev-1
+cell5 ansible_host=10.0.0.37 availability_zone=az03.cell02.cn-yogadev-1
+cell6 ansible_host=10.0.0.38 availability_zone=az03.cell02.cn-yogadev-1
+
+[compute-cell2]
+compute4 ansible_host=10.0.0.39 availability_zone=az03.cell02.cn-yogadev-1
+compute5 ansible_host=10.0.0.40 availability_zone=az03.cell02.cn-yogadev-1
+compute6 ansible_host=10.0.0.41 availability_zone=az03.cell02.cn-yogadev-1
+
+[cell2:children]
+cell-control-cell2
+compute-cell2
+
+[baremetal]
+
+[compute-cell1-ironic]
+
+
+# 填写所有cell集群的control主机组
+[nova-conductor:children]
+cell-control-cell1
+cell-control-cell2
+
+# 填写所有cell集群的compute主机组
+[nova-compute:children]
+compute-added
+compute-cell1
+compute-cell2
+
+# 下面的主机组信息不需变动,保留即可
+[compute-added]
+
+[chrony-server:children]
+control
+
+[pacemaker:children]
+control
+......
+......
+

7.3 配置全局变量

+

注:下文中带注释说明的配置项需要按实际情况修改,其他参数保持默认即可;若无相关配置则留空。

+
vim /usr/local/share/opensd/etc_examples/opensd/globals.yml
+########################
+# Network & Base options
+########################
+network_interface: "eth0" #管理网络的网卡名称
+neutron_external_interface: "eth1" #业务网络的网卡名称
+cidr_netmask: 24 #管理网的掩码
+opensd_vip_address: 10.0.0.33  #控制节点虚拟IP地址
+cell1_vip_address: 10.0.0.34 #cell1集群的虚拟IP地址
+cell2_vip_address: 10.0.0.35 #cell2集群的虚拟IP地址
+external_fqdn: "" #用于vnc访问虚拟机的外网域名地址
+external_ntp_servers: [] #外部ntp服务器地址
+yumrepo_host:  #yum源的IP地址
+yumrepo_port:  #yum源端口号
+environment:   #yum源的类型
+upgrade_all_packages: "yes" #是否升级所有已安装软件包的版本(执行yum upgrade),初始部署请设置为"yes"
+enable_miner: "no" #是否开启部署miner服务
+
+enable_chrony: "no" #是否开启部署chrony服务
+enable_pri_mariadb: "no" #是否为私有云部署mariadb
+enable_hosts_file_modify: "no" # 扩容计算节点和部署ironic服务的时候,是否将节点信息添加到`/etc/hosts`
+
+########################
+# Available zone options
+########################
+az_cephmon_compose:
+  - availability_zone:  #availability zone的名称,该名称必须与multinode主机文件内的az01的"availability_zone"值保持一致
+    ceph_mon_host:      #az01对应的一台ceph monitor主机地址,部署节点需要与该主机做ssh互信
+    reserve_vcpu_based_on_numa:  
+  - availability_zone:  #availability zone的名称,该名称必须与multinode主机文件内的az02的"availability_zone"值保持一致
+    ceph_mon_host:      #az02对应的一台ceph monitor主机地址,部署节点需要与该主机做ssh互信
+    reserve_vcpu_based_on_numa:  
+  - availability_zone:  #availability zone的名称,该名称必须与multinode主机文件内的az03的"availability_zone"值保持一致
+    ceph_mon_host:      #az03对应的一台ceph monitor主机地址,部署节点需要与该主机做ssh互信
+    reserve_vcpu_based_on_numa:
+
+# `reserve_vcpu_based_on_numa`配置为`yes` or `no`,举例说明:
+NUMA node0 CPU(s): 0-15,32-47
+NUMA node1 CPU(s): 16-31,48-63
+当reserve_vcpu_based_on_numa: "yes", 根据numa node, 平均每个node预留vcpu:
+vcpu_pin_set = 2-15,34-47,18-31,50-63
+当reserve_vcpu_based_on_numa: "no", 从第一个vcpu开始,顺序预留vcpu:
+vcpu_pin_set = 8-64
+
+#######################
+# Nova options
+#######################
+nova_reserved_host_memory_mb: 2048 #计算节点给计算服务预留的内存大小
+enable_cells: "yes" #cell控制节点是否单独部署
+support_gpu: "False" #cell节点是否有GPU服务器,如果有则为True,否则为False
+
+#######################
+# Neutron options
+#######################
+monitor_ip:
+    - 10.0.0.9   #配置监控节点
+    - 10.0.0.10
+enable_meter_full_eip: True   #配置是否允许EIP全量监控,默认为True
+enable_meter_port_forwarding: True   #配置是否允许port forwarding监控,默认为True
+enable_meter_ecs_ipv6: True   #配置是否允许ecs_ipv6监控,默认为True
+enable_meter: True    #配置是否开启监控,默认为True
+is_sdn_arch: False    #配置是否是sdn架构,默认为False
+
+# 默认使能的网络类型是vlan,vlan和vxlan两种类型只能二选一.
+enable_vxlan_network_type: False  # 默认使能的网络类型是vlan,如果使用vxlan网络,配置为True, 如果使用vlan网络,配置为False.
+enable_neutron_fwaas: False       # 环境有使用防火墙时设置为True, 使能防火墙功能.
+# Neutron provider
+neutron_provider_networks:
+  network_types: "{{ 'vxlan' if enable_vxlan_network_type else 'vlan' }}"
+  network_vlan_ranges: "default:xxx:xxx" #部署之前规划的业务网络vlan范围
+  network_mappings: "default:br-provider"
+  network_interface: "{{ neutron_external_interface }}"
+  network_vxlan_ranges: "" #部署之前规划的业务网络vxlan范围
+
+# 如下这些配置是SDN控制器的配置参数, `enable_sdn_controller`设置为True, 使能SDN控制器功能.
+# 其他参数请根据部署之前的规划和SDN部署信息确定.
+enable_sdn_controller: False
+sdn_controller_ip_address:  # SDN控制器ip地址
+sdn_controller_username:    # SDN控制器的用户名
+sdn_controller_password:    # SDN控制器的用户密码
+
+#######################
+# Dimsagent options
+#######################
+enable_dimsagent: "no" # 安装镜像服务agent, 需要改为yes
+# Address and domain name for s3
+s3_address_domain_pair:
+  - host_ip:           
+    host_name:         
+
+#######################
+# Trove options
+#######################
+enable_trove: "no" #安装trove 需要改为yes
+#default network
+trove_default_neutron_networks:  #trove 的管理网络id `openstack network list|grep -w trove-mgmt|awk '{print$2}'`
+#s3 setup(如果没有s3,以下值填null)
+s3_endpoint_host_ip:   #s3的ip
+s3_endpoint_host_name: #s3的域名
+s3_endpoint_url:       #s3的url,一般为http://s3域名
+s3_access_key:         #s3的ak 
+s3_secret_key:         #s3的sk
+
+#######################
+# Ironic options
+#######################
+enable_ironic: "no" #是否开启裸金属部署,默认不开启
+ironic_neutron_provisioning_network_uuid:
+ironic_neutron_cleaning_network_uuid: "{{ ironic_neutron_provisioning_network_uuid }}"
+ironic_dnsmasq_interface:
+ironic_dnsmasq_dhcp_range:
+ironic_tftp_server_address: "{{ hostvars[inventory_hostname]['ansible_' + ironic_dnsmasq_interface]['ipv4']['address'] }}"
+# 交换机设备相关信息
+neutron_ml2_conf_genericswitch:
+  genericswitch:xxxxxxx:
+    device_type:
+    ngs_mac_address:
+    ip:
+    username:
+    password:
+    ngs_port_default_vlan:
+
+# Package state setting
+haproxy_package_state: "present"
+mariadb_package_state: "present"
+rabbitmq_package_state: "present"
+memcached_package_state: "present"
+ceph_client_package_state: "present"
+keystone_package_state: "present"
+glance_package_state: "present"
+cinder_package_state: "present"
+nova_package_state: "present"
+neutron_package_state: "present"
+miner_package_state: "present"
+

7.4 检查所有节点ssh连接状态

+
dnf install ansible -y
+ansible all -i /usr/local/share/opensd/ansible/inventory/multinode -m ping
+
+# 执行结果显示每台主机都是"SUCCESS"即说明连接状态没问题,示例:
+compute1 | SUCCESS => {
+  "ansible_facts": {
+      "discovered_interpreter_python": "/usr/bin/python"
+  },
+  "changed": false,
+  "ping": "pong"
+}
+

8. 执行部署

+

在部署节点执行:

+

8.1 执行bootstrap

+
# 执行部署
+opensd -i /usr/local/share/opensd/ansible/inventory/multinode bootstrap --forks 50
+

8.2 重启服务器

+

注:执行重启的原因是,bootstrap可能会升级内核、更改selinux配置,或者环境中存在GPU服务器。如果装机时已经是新版内核、selinux已处于disable状态且没有GPU服务器,则可以跳过该步骤。

# 手动重启对应节点,执行命令
+init 6
+# 重启完成后,再次检查连通性
+ansible all -i /usr/local/share/opensd/ansible/inventory/multinode -m ping
+# 重启操作系统后,再次启动yum源
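下面给出一个判断是否需要重启的参考示例(仅为示意,请结合实际环境判断):

# 查看当前运行的内核版本,并与已安装的内核包版本对比,确认bootstrap是否升级了内核
uname -r
+rpm -q kernel
+# 查看selinux当前状态,若已为Disabled则无需因selinux变更而重启
+getenforce
+# 确认环境中是否有GPU设备(此处以NVIDIA为例,仅为示意)
+lspci | grep -i nvidia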

+

8.3 执行部署前检查

+
opensd -i /usr/local/share/opensd/ansible/inventory/multinode prechecks --forks 50
+

8.4 执行部署

+
ln -s /usr/bin/python3 /usr/bin/python
+
+全量部署:
+opensd -i /usr/local/share/opensd/ansible/inventory/multinode deploy --forks 50
+
+单服务部署:
+opensd -i /usr/local/share/opensd/ansible/inventory/multinode deploy --forks 50 -t service_name
+
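例如,假设只需要重新部署nova服务(这里的nova仅为示例标签,实际可用的service_name请以opensd支持的服务列表为准):

# 仅部署单个服务的示例
opensd -i /usr/local/share/opensd/ansible/inventory/multinode deploy --forks 50 -t nova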

基于OpenStack helm部署

+

简介

+

OpenStack-Helm 是一个用来允许用户在 Kubernetes 上部署 OpenStack 组件的项目。该项目提供了 OpenStack 各个组件的 Helm Chart,并提供了一系列脚本来供用户完成安装流程。

+

OpenStack-Helm 较为复杂,建议在一个新系统上部署。整个部署将占用约 30GB 的磁盘空间。安装时请使用 root 用户。

+

前置设置

+

在开始安装 OpenStack-Helm 前,可能需要对系统进行一些基础设置,包括主机名和时间等。请参考“基于RPM部署”章节的有关信息。

+

openEuler 22.09 中已经包含了 OpenStack-Helm 软件包。首先安装对应的软件包和补丁:

+
dnf install openstack-helm openstack-helm-infra openstack-helm-images loci
+

这里安装的是原生openstack-helm,默认不支持openEuler。因此若要在openEuler上使用openstack-helm,还需要安装对应的plugin,本章节即是对该plugin的使用说明。

+
dnf install openstack-plugin-openstack-helm-openeuler-support
+

自动安装

+

OpenStack-Helm 安装文件将被放置到系统的 /usr/share/openstack-helm 目录。

+

openEuler 提供的软件包中包含一个简易的安装向导程序,位于 /usr/bin/openstack-helm 。执行命令进入向导程序:

+
openstack-helm
+
Welcome to OpenStack-Helm installation program for openEuler. I will guide you through the installation. 
+Please refer to https://docs.openstack.org/openstack-helm/latest/ to get more information about OpenStack-Helm. 
+We recommend doing this on a new bare metal or virtual OS installation. 
+
+
+Now you have the following options: 
+i: Start automated installation
+c: Check if all pods in Kubernetes are working
+e: Exit
+Your choice? [i/c/e]: 
+

输入 i 并按回车进入下一级页面:

+
Welcome to OpenStack-Helm installation program for openEuler. I will guide you through the installation. 
+Please refer to https://docs.openstack.org/openstack-helm/latest/ to get more information about OpenStack-Helm. 
+We recommend doing this on a new bare metal or virtual OS installation. 
+
+
+Now you have the following options: 
+i: Start automated installation
+c: Check if all pods in Kubernetes are working
+e: Exit
+Your choice? [i/c/e]: i
+
+
+There are two storage backends available for OpenStack-Helm: NFS and CEPH. Which storage backend would you like to use? 
+n: NFS storage backend
+c: CEPH storage backend
+b: Go back to parent menu
+Your choice? [n/c/b]: 
+

OpenStack-Helm 提供了两种存储后端:NFS 和 Ceph。用户可根据需要输入 n 来选择 NFS 存储后端,或者输入 c 来选择 Ceph 存储后端。

+

选择完成存储后端后,用户将有机会完成确认。收到提示时,按下回车以开始安装。安装过程中,程序将顺序执行一系列安装脚本以完成部署。这一过程可能需要持续几十分钟,安装过程中请确保磁盘空间充足以及互联网连接畅通。

+

安装过程中执行到的脚本会将一些 Helm Chart 部署到系统上。由于目标系统环境复杂多变,某些特定的 Helm Chart 可能无法顺利被部署。这种情况下,您会注意到输出信息的最后包含等待 Pod 就位但超时的提示。若发生此类现象,您可能需要通过下一节给出的手动安装方法来定位问题所在。
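若出现上述等待 Pod 超时的情况,可以先用下面的命令初步定位(示例命令,其中 NAMESPACE 和 POD_NAME 为占位符,请按实际输出替换):

# 列出未处于Running/Completed状态的Pod
kubectl get pods -A | grep -vE 'Running|Completed'
+# 查看异常Pod的事件信息
+kubectl describe pod -n NAMESPACE POD_NAME
+# 查看对应容器的日志
+kubectl logs -n NAMESPACE POD_NAME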

+

若您未观察到上述的现象,则恭喜您完成了部署。请参考“使用 OpenStack-Helm”一节来开始使用。

+

手动安装

+

若您在自动安装的过程中遇到了错误,或者希望手动安装来控制整个安装流程,您可以参照以下顺序执行安装流程: +

cd /usr/share/openstack-helm/openstack-helm
+
+#基于 NFS
+./tools/deployment/developer/common/010-deploy-k8s.sh
+./tools/deployment/developer/common/020-setup-client.sh
+./tools/deployment/developer/common/030-ingress.sh
+./tools/deployment/developer/nfs/040-nfs-provisioner.sh
+./tools/deployment/developer/nfs/050-mariadb.sh
+./tools/deployment/developer/nfs/060-rabbitmq.sh
+./tools/deployment/developer/nfs/070-memcached.sh
+./tools/deployment/developer/nfs/080-keystone.sh
+./tools/deployment/developer/nfs/090-heat.sh
+./tools/deployment/developer/nfs/100-horizon.sh
+./tools/deployment/developer/nfs/120-glance.sh
+./tools/deployment/developer/nfs/140-openvswitch.sh
+./tools/deployment/developer/nfs/150-libvirt.sh
+./tools/deployment/developer/nfs/160-compute-kit.sh
+./tools/deployment/developer/nfs/170-setup-gateway.sh
+
+#或者基于 Ceph
+./tools/deployment/developer/common/010-deploy-k8s.sh
+./tools/deployment/developer/common/020-setup-client.sh
+./tools/deployment/developer/common/030-ingress.sh
+./tools/deployment/developer/ceph/040-ceph.sh
+./tools/deployment/developer/ceph/050-mariadb.sh
+./tools/deployment/developer/ceph/060-rabbitmq.sh
+./tools/deployment/developer/ceph/070-memcached.sh
+./tools/deployment/developer/ceph/080-keystone.sh
+./tools/deployment/developer/ceph/090-heat.sh
+./tools/deployment/developer/ceph/100-horizon.sh
+./tools/deployment/developer/ceph/120-glance.sh
+./tools/deployment/developer/ceph/140-openvswitch.sh
+./tools/deployment/developer/ceph/150-libvirt.sh
+./tools/deployment/developer/ceph/160-compute-kit.sh
+./tools/deployment/developer/ceph/170-setup-gateway.sh

+

安装完成后,您可以使用 kubectl get pods -A 来查看当前系统上的 Pod 的运行情况。
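例如,可以用下面的方式持续观察 Pod 状态,直至全部进入 Running 或 Completed(示例):

# 一次性查看所有Pod状态
kubectl get pods -A
+# 持续观察状态变化,全部就绪后按Ctrl+C退出
+kubectl get pods -A -w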

+

使用 OpenStack-Helm

+

系统部署完成后,OpenStack CLI 界面将被部署在 /usr/local/bin/openstack。参照下面的例子来使用 OpenStack CLI: +

export OS_CLOUD=openstack_helm
+export OS_USERNAME='admin'
+export OS_PASSWORD='password'
+export OS_PROJECT_NAME='admin'
+export OS_PROJECT_DOMAIN_NAME='default'
+export OS_USER_DOMAIN_NAME='default'
+export OS_AUTH_URL='http://keystone.openstack.svc.cluster.local/v3'
+openstack service list
+openstack stack list
+当然,您也可以通过 Web 界面来访问 OpenStack 的控制面板。Horizon Dashboard 位于 http://localhost:31000,使用以下凭据登录: +
Domain: default
+User Name: admin
+Password: password
+此时,您应当可以看到熟悉的 OpenStack 控制面板了。

+

新特性的安装

+

Kolla支持iSula

+

Kolla是OpenStack基于Docker和ansible的容器化部署方案,包含了Kolla和Kolla-ansible两个项目。Kolla是容器镜像制作工具,Kolla-ansible是容器镜像部署工具。其中Kolla-ansible只支持在openEuler LTS上使用,openEuler创新版暂不支持。使用openEuler 22.09,用户可以基于Kolla制作相应的容器镜像。同时OpenStack SIG在openEuler 22.09中新增了Kolla对iSula运行时的支持,具体步骤如下:

+
    +
  1. +

    安装Kolla

    +
    dnf install openstack-kolla docker
    +

    安装完成后,就可以使用kolla-build命令制作基于Docker的容器镜像了,非常简单。如果用户想尝试基于isula的方式,可以继续以下操作。

    +
  2. +
  3. +

    安装OpenStack iSula插件

    +
    dnf install openstack-plugin-kolla-isula-support
    +
  4. +
  5. +

    启动isula-build服务

    +

    第二步会自动安装iSulad和isula-builder服务,isulad会自动启动,但isula-builder不会自动启动,需要手动拉起:

     systemctl start isula-builder

    +
  6. +
  7. +

    配置kolla,在kolla.conf的[DEFAULT]中新增base_runtime配置项:

    vim /etc/kolla/kolla.conf
    +
    +base_runtime=isula

    +
  8. +
  9. +

    至此安装完成,使用kolla-build即可基于isula制作镜像。构建完成后,执行isula images查看镜像,可参考本列表后的示例。

    +
  10. +
+
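下面是一个简单的构建示例(以keystone镜像为例,镜像名仅为示意):

# 基于isula构建keystone相关镜像
kolla-build keystone
+# 构建完成后查看生成的镜像
+isula images | grep keystone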

Nova支持高低优先级虚拟机特性

+

高低优先级虚拟机特性是OpenStack SIG在openEuler 22.09中基于OpenStack Yoga开发的Nova特性,该特性允许用户指定虚拟机的优先级,基于不同的优先级,OpenStack自动分配不同的绑核策略,配合openEuler自研的skylark QOS服务,实现高低优先级虚拟机对资源的合理使用。具体细节可以参考特性文档。本文档主要描述安装步骤。

+
    +
  1. +

    按照前面章节部署好一套OpenStack环境(非容器),然后先安装plugin。

    +
    dnf install openstack-plugin-priority-vm
    +
  2. +
  3. +

    配置数据库

    +

    本特性对Nova的数据表进行了扩充,因此需要同步数据库

    +
    nova-manage api_db sync
    +nova-manage db sync
    +
  4. +
  5. +

    重启nova服务,在控制节点和计算节点分别执行

    +
    systemctl restart openstack-nova-*
    +
  6. +
diff --git a/site/install/openEuler-24.03-LTS-SP1/OpenStack-antelope/index.html b/site/install/openEuler-24.03-LTS-SP1/OpenStack-antelope/index.html
new file mode 100644
index 0000000000000000000000000000000000000000..22f6a9978bbec730c54954f463a5aaade7886f16
--- /dev/null
+++ b/site/install/openEuler-24.03-LTS-SP1/OpenStack-antelope/index.html
@@ -0,0 +1,3365 @@
+openEuler-24.03-LTS-SP1_Antelope - OpenStack SIG Doc

OpenStack Antelope 部署指南

+ +

本文档是 openEuler OpenStack SIG 编写的基于 openEuler 24.03 LTS SP1 的 OpenStack 部署指南,内容由 SIG 贡献者提供。在阅读过程中,如果您有任何疑问或者发现任何问题,请联系SIG维护人员,或者直接提交issue。

+

约定

+

本章节描述文档中的一些通用约定。

| 名称 | 定义 |
|------|------|
| RABBIT_PASS | rabbitmq的密码,由用户设置,在OpenStack各个服务配置中使用 |
| CINDER_PASS | cinder服务keystone用户的密码,在cinder配置中使用 |
| CINDER_DBPASS | cinder服务数据库密码,在cinder配置中使用 |
| KEYSTONE_DBPASS | keystone服务数据库密码,在keystone配置中使用 |
| GLANCE_PASS | glance服务keystone用户的密码,在glance配置中使用 |
| GLANCE_DBPASS | glance服务数据库密码,在glance配置中使用 |
| HEAT_PASS | 在keystone注册的heat用户密码,在heat配置中使用 |
| HEAT_DBPASS | heat服务数据库密码,在heat配置中使用 |
| CYBORG_PASS | 在keystone注册的cyborg用户密码,在cyborg配置中使用 |
| CYBORG_DBPASS | cyborg服务数据库密码,在cyborg配置中使用 |
| NEUTRON_PASS | 在keystone注册的neutron用户密码,在neutron配置中使用 |
| NEUTRON_DBPASS | neutron服务数据库密码,在neutron配置中使用 |
| PROVIDER_INTERFACE_NAME | 物理网络接口的名称,在neutron配置中使用 |
| OVERLAY_INTERFACE_IP_ADDRESS | Controller控制节点的管理ip地址,在neutron配置中使用 |
| METADATA_SECRET | metadata proxy的secret密码,在nova和neutron配置中使用 |
| PLACEMENT_DBPASS | placement服务数据库密码,在placement配置中使用 |
| PLACEMENT_PASS | 在keystone注册的placement用户密码,在placement配置中使用 |
| NOVA_DBPASS | nova服务数据库密码,在nova配置中使用 |
| NOVA_PASS | 在keystone注册的nova用户密码,在nova,cyborg,neutron等配置中使用 |
| IRONIC_DBPASS | ironic服务数据库密码,在ironic配置中使用 |
| IRONIC_PASS | 在keystone注册的ironic用户密码,在ironic配置中使用 |
| IRONIC_INSPECTOR_DBPASS | ironic-inspector服务数据库密码,在ironic-inspector配置中使用 |
| IRONIC_INSPECTOR_PASS | 在keystone注册的ironic-inspector用户密码,在ironic-inspector配置中使用 |
+

OpenStack SIG 提供了多种基于 openEuler 部署 OpenStack 的方法,以满足不同的用户场景,请按需选择。

+

基于RPM部署

+

环境准备

+

本文档基于OpenStack经典的三节点环境进行部署,三个节点分别是控制节点(Controller)、计算节点(Compute)、存储节点(Storage),其中存储节点一般只部署存储服务,在资源有限的情况下,可以不单独部署该节点,把存储节点上的服务部署到计算节点即可。

+

首先准备三个openEuler 24.03 LTS SP1环境,根据您的环境,下载对应的镜像并安装即可:ISO镜像、qcow2镜像

+

下面的安装按照如下拓扑进行: +

controller:192.168.0.2
+compute:   192.168.0.3
+storage:   192.168.0.4
+如果您的环境IP不同,请按照您的环境IP修改相应的配置文件。

+

本文档的三节点服务拓扑如下图所示(只包含Keystone、Glance、Nova、Cinder、Neutron这几个核心服务,其他服务请参考具体部署章节):

+

(拓扑图:topology1、topology2、topology3)

+

在正式部署之前,需要对每个节点做如下配置和检查:

+
    +
  1. +

    配置 openEuler 24.03 LTS SP1 官方 yum 源,需要启用 EPOL 软件仓以支持 OpenStack

    +
    yum update
    +yum install openstack-release-antelope
    +yum clean all && yum makecache
    +

    注意:如果你的环境的YUM源没有启用EPOL,需要同时配置EPOL,确保EPOL已配置,如下所示。

    +
    vi /etc/yum.repos.d/openEuler.repo
    +
    +[EPOL]
    +name=EPOL
    +baseurl=http://repo.openeuler.org/openEuler-24.03-LTS-SP1/EPOL/main/$basearch/
    +enabled=1
    +gpgcheck=1
    +gpgkey=http://repo.openeuler.org/openEuler-24.03-LTS-SP1/OS/$basearch/RPM-GPG-KEY-openEuler
    +
  2. +
  3. +

    修改主机名以及映射

    +

    每个节点分别修改主机名,以controller为例:

    +
    hostnamectl set-hostname controller
    +
    +vi /etc/hostname
    +内容修改为controller
    +

    然后修改每个节点的/etc/hosts文件,新增如下内容:

    +
    192.168.0.2   controller
    +192.168.0.3   compute
    +192.168.0.4   storage
    +
  4. +
+

时钟同步

+

集群环境时刻要求每个节点的时间一致,一般由时钟同步软件保证。本文使用chrony软件。步骤如下:

+

Controller节点

+
    +
  1. 安装服务 +
    dnf install chrony
  2. +
  3. 修改/etc/chrony.conf配置文件,新增一行 +
    # 表示允许哪些IP从本节点同步时钟
    +allow 192.168.0.0/24
  4. +
  5. 重启服务 +
    systemctl restart chronyd
  6. +
+

其他节点

+
    +
  1. +

    安装服务 +

    dnf install chrony

    +
  2. +
  3. +

    修改/etc/chrony.conf配置文件,新增一行

    +
    # NTP_SERVER是controller IP,表示从这个机器获取时间,这里我们填192.168.0.2,或者在`/etc/hosts`里配置好的controller名字即可。
    +server NTP_SERVER iburst
    +

    同时,要把pool pool.ntp.org iburst这一行注释掉,表示不从公网同步时钟。

    +
  4. +
  5. +

    重启服务

    +
    systemctl restart chronyd
    +
  6. +
+

配置完成后,检查一下结果,在其他非controller节点执行chronyc sources,返回结果类似如下内容,表示成功从controller同步时钟。

+
MS Name/IP address         Stratum Poll Reach LastRx Last sample
+===============================================================================
+^* 192.168.0.2                 4   6     7     0  -1406ns[  +55us] +/-   16ms
+

安装数据库

+

数据库安装在控制节点,这里推荐使用mariadb。

+
    +
  1. +

    安装软件包

    +
    dnf install mariadb-config mariadb mariadb-server python3-PyMySQL
    +
  2. +
  3. +

    新增配置文件/etc/my.cnf.d/openstack.cnf,内容如下

    +
    [mysqld]
    +bind-address = 192.168.0.2
    +default-storage-engine = innodb
    +innodb_file_per_table = on
    +max_connections = 4096
    +collation-server = utf8_general_ci
    +character-set-server = utf8
    +
  4. +
  5. +

    启动服务器

    +
    systemctl start mariadb
    +
  6. +
  7. +

    初始化数据库,根据提示进行即可

    +
    mysql_secure_installation
    +

    示例如下:

    +
    NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MariaDB
    +    SERVERS IN PRODUCTION USE!  PLEASE READ EACH STEP CAREFULLY!
    +
    +In order to log into MariaDB to secure it, we'll need the current
    +password for the root user. If you've just installed MariaDB, and
    +haven't set the root password yet, you should just press enter here.
    +
    +Enter current password for root (enter for none): 
    +
    +#这里输入密码,由于我们是初始化DB,直接回车就行
    +
    +OK, successfully used password, moving on...
    +
    +Setting the root password or using the unix_socket ensures that nobody
    +can log into the MariaDB root user without the proper authorisation.
    +
    +You already have your root account protected, so you can safely answer 'n'.
    +
    +# 这里根据提示输入N
    +
    +Switch to unix_socket authentication [Y/n] N
    +
    +Enabled successfully!
    +Reloading privilege tables..
    +... Success!
    +
    +
    +You already have your root account protected, so you can safely answer 'n'.
    +
    +# 输入Y,修改密码
    +
    +Change the root password? [Y/n] Y
    +
    +New password: 
    +Re-enter new password: 
    +Password updated successfully!
    +Reloading privilege tables..
    +... Success!
    +
    +
    +By default, a MariaDB installation has an anonymous user, allowing anyone
    +to log into MariaDB without having to have a user account created for
    +them.  This is intended only for testing, and to make the installation
    +go a bit smoother.  You should remove them before moving into a
    +production environment.
    +
    +# 输入Y,删除匿名用户
    +
    +Remove anonymous users? [Y/n] Y
    +... Success!
    +
    +Normally, root should only be allowed to connect from 'localhost'.  This
    +ensures that someone cannot guess at the root password from the network.
    +
    +# 输入Y,关闭root远程登录权限
    +
    +Disallow root login remotely? [Y/n] Y
    +... Success!
    +
    +By default, MariaDB comes with a database named 'test' that anyone can
    +access.  This is also intended only for testing, and should be removed
    +before moving into a production environment.
    +
    +# 输入Y,删除test数据库
    +
    +Remove test database and access to it? [Y/n] Y
    +- Dropping test database...
    +... Success!
    +- Removing privileges on test database...
    +... Success!
    +
    +Reloading the privilege tables will ensure that all changes made so far
    +will take effect immediately.
    +
    +# 输入Y,重载配置
    +
    +Reload privilege tables now? [Y/n] Y
    +... Success!
    +
    +Cleaning up...
    +
    +All done!  If you've completed all of the above steps, your MariaDB
    +installation should now be secure.
    +
  8. +
  9. +

    验证,根据第四步设置的密码,检查是否能登录mariadb

    +
    mysql -uroot -p
    +
  10. +
+

安装消息队列

+

消息队列安装在控制节点,这里推荐使用rabbitmq。

+
    +
  1. 安装软件包 +
    dnf install rabbitmq-server
  2. +
  3. 启动服务 +
    systemctl start rabbitmq-server
  4. +
  5. 配置openstack用户,RABBIT_PASS是openstack服务登录消息队里的密码,需要和后面各个服务的配置保持一致。 +
    rabbitmqctl add_user openstack RABBIT_PASS
    +rabbitmqctl set_permissions openstack ".*" ".*" ".*"
  6. +
+
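完成上述配置后,可以用以下命令简单确认openstack用户及其权限已创建成功(示例):

rabbitmqctl list_users
+rabbitmqctl list_permissions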

安装缓存服务

+

缓存服务安装在控制节点,这里推荐使用Memcached。

+
    +
  1. 安装软件包 +
    dnf install memcached python3-memcached
  2. +
  3. 修改配置文件/etc/sysconfig/memcached +
    OPTIONS="-l 127.0.0.1,::1,controller"
  4. +
  5. 启动服务 +
    systemctl start memcached
  6. +
+
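启动后,可以确认memcached已按配置监听(示例,11211为默认端口):

systemctl status memcached
+ss -tlnp | grep 11211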

部署服务

+

Keystone

+

Keystone是OpenStack提供的鉴权服务,是整个OpenStack的入口,提供了租户隔离、用户认证、服务发现等功能,必须安装。

+
    +
  1. +

    创建 keystone 数据库并授权

    +
    mysql -u root -p
    +
    +MariaDB [(none)]> CREATE DATABASE keystone;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
    +IDENTIFIED BY 'KEYSTONE_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
    +IDENTIFIED BY 'KEYSTONE_DBPASS';
    +MariaDB [(none)]> exit
    +

    注意

    +

    替换 KEYSTONE_DBPASS,为 Keystone 数据库设置密码

    +
  2. +
  3. +

    安装软件包

    +
    dnf install openstack-keystone httpd mod_wsgi
    +
  4. +
  5. +

    配置keystone相关配置

    +
    vim /etc/keystone/keystone.conf
    +
    +[database]
    +connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone
    +
    +[token]
    +provider = fernet
    +

    解释

    +

    [database]部分,配置数据库入口

    +

    [token]部分,配置token provider

    +
  6. +
  7. +

    同步数据库

    +
    su -s /bin/sh -c "keystone-manage db_sync" keystone
    +
  8. +
  9. +

    初始化Fernet密钥仓库

    +
    keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
    +keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
    +
  10. +
  11. +

    启动服务

    +
    keystone-manage bootstrap --bootstrap-password ADMIN_PASS \
    +--bootstrap-admin-url http://controller:5000/v3/ \
    +--bootstrap-internal-url http://controller:5000/v3/ \
    +--bootstrap-public-url http://controller:5000/v3/ \
    +--bootstrap-region-id RegionOne
    +

    注意

    +

    替换 ADMIN_PASS,为 admin 用户设置密码

    +
  12. +
  13. +

    配置Apache HTTP server

    +
  14. +
  15. +

    打开httpd.conf并配置

    +
    #需要修改的配置文件路径
    +vim /etc/httpd/conf/httpd.conf
    +
    +#修改以下项,如果没有则新添加
    +ServerName controller
    +
  16. +
  17. +

    创建软链接

    +
    ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
    +

    解释

    +

    配置 ServerName 项引用控制节点

    +

    注意 + 如果 ServerName 项不存在则需要创建

    +
  18. +
  19. +

    启动Apache HTTP服务

    +
    systemctl enable httpd.service
    +systemctl start httpd.service
    +
  20. +
  21. +

    创建环境变量配置

    +
    cat << EOF >> ~/.admin-openrc
    +export OS_PROJECT_DOMAIN_NAME=Default
    +export OS_USER_DOMAIN_NAME=Default
    +export OS_PROJECT_NAME=admin
    +export OS_USERNAME=admin
    +export OS_PASSWORD=ADMIN_PASS
    +export OS_AUTH_URL=http://controller:5000/v3
    +export OS_IDENTITY_API_VERSION=3
    +export OS_IMAGE_API_VERSION=2
    +EOF
    +

    注意

    +

    替换 ADMIN_PASS 为 admin 用户的密码

    +
  22. +
  23. +

    依次创建domain, projects, users, roles

    +
      +
    • +

      需要先安装python3-openstackclient

      +
      dnf install python3-openstackclient
      +
    • +
    • +

      导入环境变量

      +
      source ~/.admin-openrc
      +
    • +
    • +

      创建project service,其中 domain default 在 keystone-manage bootstrap 时已创建

      +
      openstack domain create --description "An Example Domain" example
      +
      openstack project create --domain default --description "Service Project" service
      +
    • +
    • +

      创建(non-admin)project myproject,user myuser 和 role myrole,为 myprojectmyuser 添加角色myrole

      +
      openstack project create --domain default --description "Demo Project" myproject
      +openstack user create --domain default --password-prompt myuser
      +openstack role create myrole
      +openstack role add --project myproject --user myuser myrole
      +
    • +
    +
  24. +
  25. +

    验证

    +
      +
    • +

      取消临时环境变量OS_AUTH_URL和OS_PASSWORD:

      +
      source ~/.admin-openrc
      +unset OS_AUTH_URL OS_PASSWORD
      +
    • +
    • +

      为admin用户请求token:

      +
      openstack --os-auth-url http://controller:5000/v3 \
      +--os-project-domain-name Default --os-user-domain-name Default \
      +--os-project-name admin --os-username admin token issue
      +
    • +
    • +

      为myuser用户请求token:

      +
      openstack --os-auth-url http://controller:5000/v3 \
      +--os-project-domain-name Default --os-user-domain-name Default \
      +--os-project-name myproject --os-username myuser token issue
      +
    • +
    +
  26. +
+
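除请求token外,也可以简单列出已创建的project和user,进一步确认Keystone工作正常(示例):

source ~/.admin-openrc
+openstack project list
+openstack user list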

Glance

+

Glance是OpenStack提供的镜像服务,负责虚拟机、裸机镜像的上传与下载,必须安装。

+

Controller节点

+
    +
  1. +

    创建 glance 数据库并授权

    +
    mysql -u root -p
    +
    +MariaDB [(none)]> CREATE DATABASE glance;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
    +IDENTIFIED BY 'GLANCE_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
    +IDENTIFIED BY 'GLANCE_DBPASS';
    +MariaDB [(none)]> exit
    +

    注意:

    +

    替换 GLANCE_DBPASS,为 glance 数据库设置密码

    +
  2. +
  3. +

    初始化 glance 资源对象

    +
  4. +
  5. +

    导入环境变量

    +
    source ~/.admin-openrc
    +
  6. +
  7. +

    创建用户时,命令行会提示输入密码,请输入自定义的密码,下文涉及到GLANCE_PASS的地方替换成该密码即可。

    +
    openstack user create --domain default --password-prompt glance
    +User Password:
    +Repeat User Password:
    +
  8. +
  9. +

    添加glance用户到service project并指定admin角色:

    +
    openstack role add --project service --user glance admin
    +
  10. +
  11. +

    创建glance服务实体:

    +
    openstack service create --name glance --description "OpenStack Image" image
    +
  12. +
  13. +

    创建glance API服务:

    +
    openstack endpoint create --region RegionOne image public http://controller:9292
    +openstack endpoint create --region RegionOne image internal http://controller:9292
    +openstack endpoint create --region RegionOne image admin http://controller:9292
    +
  14. +
  15. +

    安装软件包

    +
    dnf install openstack-glance
    +
  16. +
  17. +

    修改 glance 配置文件

    +
    vim /etc/glance/glance-api.conf
    +
    +[database]
    +connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
    +
    +[keystone_authtoken]
    +www_authenticate_uri  = http://controller:5000
    +auth_url = http://controller:5000
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +project_name = service
    +username = glance
    +password = GLANCE_PASS
    +
    +[paste_deploy]
    +flavor = keystone
    +
    +[glance_store]
    +stores = file,http
    +default_store = file
    +filesystem_store_datadir = /var/lib/glance/images/
    +

    解释:

    +

    [database]部分,配置数据库入口

    +

    [keystone_authtoken] [paste_deploy]部分,配置身份认证服务入口

    +

    [glance_store]部分,配置本地文件系统存储和镜像文件的位置

    +
  18. +
  19. +

    同步数据库

    +
    su -s /bin/sh -c "glance-manage db_sync" glance
    +
  20. +
  21. +

    启动服务:

    +
    systemctl enable openstack-glance-api.service
    +systemctl start openstack-glance-api.service
    +
  22. +
  23. +

    验证

    +
      +
    • +

      导入环境变量 +

      source ~/.admin-openrc

      +
    • +
    • +

      下载镜像

      +
      x86镜像下载:
      +wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
      +
      +arm镜像下载:
      +wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-aarch64-disk.img
      +

      注意

      +

      如果您使用的环境是鲲鹏架构,请下载aarch64版本的镜像;已对镜像cirros-0.5.2-aarch64-disk.img进行测试。

      +
    • +
    • +

      向Image服务上传镜像:

      +
      openstack image create --disk-format qcow2 --container-format bare \
      +                    --file cirros-0.4.0-x86_64-disk.img --public cirros
      +
    • +
    • +

      确认镜像上传并验证属性:

      +
      openstack image list
      +
    • +
    +
  24. +
+

Placement

+

Placement是OpenStack提供的资源调度组件,一般不面向用户,由Nova等组件调用,安装在控制节点。

+

安装、配置Placement服务前,需要先创建相应的数据库、服务凭证和API endpoints。

+
    +
  1. +

    创建数据库

    +
      +
    • +

      使用root用户访问数据库服务:

      +
      mysql -u root -p
      +
    • +
    • +

      创建placement数据库:

      +
      MariaDB [(none)]> CREATE DATABASE placement;
      +
    • +
    • +

      授权数据库访问:

      +
      MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' \
      +  IDENTIFIED BY 'PLACEMENT_DBPASS';
      +MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' \
      +  IDENTIFIED BY 'PLACEMENT_DBPASS';
      +

      替换PLACEMENT_DBPASS为placement数据库访问密码。

      +
    • +
    • +

      退出数据库访问客户端:

      +
      exit
      +
    • +
    +
  2. +
  3. +

    配置用户和Endpoints

    +
      +
    • +

      source admin凭证,以获取admin命令行权限:

      +
      source ~/.admin-openrc
      +
    • +
    • +

      创建placement用户并设置用户密码:

      +
      openstack user create --domain default --password-prompt placement
      +
      +User Password:
      +Repeat User Password:
      +
    • +
    • +

      添加placement用户到service project并指定admin角色:

      +
      openstack role add --project service --user placement admin
      +
    • +
    • +

      创建placement服务实体:

      +
      openstack service create --name placement \
      +  --description "Placement API" placement
      +
    • +
    • +

      创建Placement API服务endpoints:

      +
      openstack endpoint create --region RegionOne \
      +  placement public http://controller:8778
      +openstack endpoint create --region RegionOne \
      +  placement internal http://controller:8778
      +openstack endpoint create --region RegionOne \
      +  placement admin http://controller:8778
      +
    • +
    +
  4. +
  5. +

    安装及配置组件

    +
      +
    • +

      安装软件包:

      +
      dnf install openstack-placement-api
      +
    • +
    • +

      编辑/etc/placement/placement.conf配置文件,完成如下操作:

      +
        +
      • +

        [placement_database]部分,配置数据库入口:

        +
        [placement_database]
        +connection = mysql+pymysql://placement:PLACEMENT_DBPASS@controller/placement
        +

        替换PLACEMENT_DBPASS为placement数据库的密码。

        +
      • +
      • +

        [api][keystone_authtoken]部分,配置身份认证服务入口:

        +
        [api]
        +auth_strategy = keystone
        +
        +[keystone_authtoken]
        +auth_url = http://controller:5000/v3
        +memcached_servers = controller:11211
        +auth_type = password
        +project_domain_name = Default
        +user_domain_name = Default
        +project_name = service
        +username = placement
        +password = PLACEMENT_PASS
        +

        替换PLACEMENT_PASS为placement用户的密码。

        +
      • +
      +
    • +
    • +

      数据库同步,填充Placement数据库:

      +
      su -s /bin/sh -c "placement-manage db sync" placement
      +
    • +
    +
  6. +
  7. +

    启动服务

    +

    重启httpd服务:

    +
    systemctl restart httpd
    +
  8. +
  9. +

    验证

    +
      +
    • +

      source admin凭证,以获取admin命令行权限

      +
      source ~/.admin-openrc
      +
    • +
    • +

      执行状态检查:

      +
      placement-status upgrade check
      +
      +----------------------------------------------------------------------+
      +| Upgrade Check Results                                                |
      ++----------------------------------------------------------------------+
      +| Check: Missing Root Provider IDs                                     |
      +| Result: Success                                                      |
      +| Details: None                                                        |
      ++----------------------------------------------------------------------+
      +| Check: Incomplete Consumers                                          |
      +| Result: Success                                                      |
      +| Details: None                                                        |
      ++----------------------------------------------------------------------+
      +| Check: Policy File JSON to YAML Migration                            |
      +| Result: Failure                                                      |
      +| Details: Your policy file is JSON-formatted which is deprecated. You |
      +|   need to switch to YAML-formatted file. Use the                     |
      +|   ``oslopolicy-convert-json-to-yaml`` tool to convert the            |
      +|   existing JSON-formatted files to YAML in a backwards-              |
      +|   compatible manner: https://docs.openstack.org/oslo.policy/         |
      +|   latest/cli/oslopolicy-convert-json-to-yaml.html.                   |
      ++----------------------------------------------------------------------+
      +

      这里可以看到Policy File JSON to YAML Migration的结果为Failure。这是因为在Placement中,JSON格式的policy文件从Wallaby版本开始已处于deprecated状态。可以参考提示,使用oslopolicy-convert-json-to-yaml工具 将现有的JSON格式policy文件转化为YAML格式。

      +
      oslopolicy-convert-json-to-yaml  --namespace placement \
      +  --policy-file /etc/placement/policy.json \
      +  --output-file /etc/placement/policy.yaml
      +mv /etc/placement/policy.json{,.bak}
      +

      注:当前环境中此问题可忽略,不影响运行。

      +
    • +
    • +

      针对placement API运行命令:

      +
        +
      • +

        安装osc-placement插件:

        +
        dnf install python3-osc-placement
        +
      • +
      • +

        列出可用的资源类别及特性:

        +
        openstack --os-placement-api-version 1.2 resource class list --sort-column name
        ++----------------------------+
        +| name                       |
        ++----------------------------+
        +| DISK_GB                    |
        +| FPGA                       |
        +| ...                        |
        +
        +openstack --os-placement-api-version 1.6 trait list --sort-column name
        ++---------------------------------------+
        +| name                                  |
        ++---------------------------------------+
        +| COMPUTE_ACCELERATORS                  |
        +| COMPUTE_ARCH_AARCH64                  |
        +| ...                                   |
        +
      • +
      +
    • +
    +
  10. +
+

Nova

+

Nova是OpenStack的计算服务,负责虚拟机的创建、发放等功能。

+

Controller节点

+

在控制节点执行以下操作。

+
    +
  1. +

    创建数据库

    +
      +
    • +

      使用root用户访问数据库服务:

      +
      mysql -u root -p
      +
    • +
    • +

      创建nova_apinovanova_cell0数据库:

      +
      MariaDB [(none)]> CREATE DATABASE nova_api;
      +MariaDB [(none)]> CREATE DATABASE nova;
      +MariaDB [(none)]> CREATE DATABASE nova_cell0;
      +
    • +
    • +

      授权数据库访问:

      +
      MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \
      +  IDENTIFIED BY 'NOVA_DBPASS';
      +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \
      +  IDENTIFIED BY 'NOVA_DBPASS';
      +
      +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
      +  IDENTIFIED BY 'NOVA_DBPASS';
      +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
      +  IDENTIFIED BY 'NOVA_DBPASS';
      +
      +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \
      +  IDENTIFIED BY 'NOVA_DBPASS';
      +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \
      +  IDENTIFIED BY 'NOVA_DBPASS';
      +

      替换NOVA_DBPASS为nova相关数据库访问密码。

      +
    • +
    • +

      退出数据库访问客户端:

      +
      exit
      +
    • +
    +
  2. +
  3. +

    配置用户和Endpoints

    +
      +
    • +

      source admin凭证,以获取admin命令行权限:

      +
      source ~/.admin-openrc
      +
    • +
    • +

      创建nova用户并设置用户密码:

      +
      openstack user create --domain default --password-prompt nova
      +
      +User Password:
      +Repeat User Password:
      +
    • +
    • +

      添加nova用户到service project并指定admin角色:

      +
      openstack role add --project service --user nova admin
      +
    • +
    • +

      创建nova服务实体:

      +
      openstack service create --name nova \
      +  --description "OpenStack Compute" compute
      +
    • +
    • +

      创建Nova API服务endpoints:

      +
      openstack endpoint create --region RegionOne \
      +  compute public http://controller:8774/v2.1
      +openstack endpoint create --region RegionOne \
      +  compute internal http://controller:8774/v2.1
      +openstack endpoint create --region RegionOne \
      +  compute admin http://controller:8774/v2.1
      +
    • +
    +
  4. +
  5. +

    安装及配置组件

    +
      +
    • +

      安装软件包:

      +
      dnf install openstack-nova-api openstack-nova-conductor \
      +  openstack-nova-novncproxy openstack-nova-scheduler
      +
    • +
    • +

      编辑/etc/nova/nova.conf配置文件,完成如下操作:

      +
        +
      • +

        [default]部分,启用计算和元数据的API,配置RabbitMQ消息队列入口,使用controller节点管理IP配置my_ip,显式定义log_dir:

        +
        [DEFAULT]
        +enabled_apis = osapi_compute,metadata
        +transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
        +my_ip = 192.168.0.2
        +log_dir = /var/log/nova
        +state_path = /var/lib/nova
        +

        替换RABBIT_PASS为RabbitMQ中openstack账户的密码。

        +
      • +
      • +

        [api_database][database]部分,配置数据库入口:

        +
        [api_database]
        +connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api
        +
        +[database]
        +connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova
        +

        替换NOVA_DBPASS为nova相关数据库的密码。

        +
      • +
      • +

        [api][keystone_authtoken]部分,配置身份认证服务入口:

        +
        [api]
        +auth_strategy = keystone
        +
        +[keystone_authtoken]
        +auth_url = http://controller:5000/v3
        +memcached_servers = controller:11211
        +auth_type = password
        +project_domain_name = Default
        +user_domain_name = Default
        +project_name = service
        +username = nova
        +password = NOVA_PASS
        +

        替换NOVA_PASS为nova用户的密码。

        +
      • +
      • +

        [vnc]部分,启用并配置远程控制台入口:

        +
        [vnc]
        +enabled = true
        +server_listen = $my_ip
        +server_proxyclient_address = $my_ip
        +
      • +
      • +

        [glance]部分,配置镜像服务API的地址:

        +
        [glance]
        +api_servers = http://controller:9292
        +
      • +
      • +

        [oslo_concurrency]部分,配置lock path:

        +
        [oslo_concurrency]
        +lock_path = /var/lib/nova/tmp
        +
      • +
      • +

        [placement]部分,配置placement服务的入口:

        +
        [placement]
        +region_name = RegionOne
        +project_domain_name = Default
        +project_name = service
        +auth_type = password
        +user_domain_name = Default
        +auth_url = http://controller:5000/v3
        +username = placement
        +password = PLACEMENT_PASS
        +

        替换PLACEMENT_PASS为placement用户的密码。

        +
      • +
      +
    • +
    • +

      数据库同步:

      +
        +
      • +

        同步nova-api数据库:

        +
        su -s /bin/sh -c "nova-manage api_db sync" nova
        +
      • +
      • +

        注册cell0数据库:

        +
        su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
        +
      • +
      • +

        创建cell1 cell:

        +
        su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
        +
      • +
      • +

        同步nova数据库:

        +
        su -s /bin/sh -c "nova-manage db sync" nova
        +
      • +
      • +

        验证cell0和cell1注册正确:

        +
        su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova
        +
      • +
      +
    • +
    +
  6. +
  7. +

    启动服务

    +
    systemctl enable \
    +  openstack-nova-api.service \
    +  openstack-nova-scheduler.service \
    +  openstack-nova-conductor.service \
    +  openstack-nova-novncproxy.service
    +
    +systemctl start \
    +  openstack-nova-api.service \
    +  openstack-nova-scheduler.service \
    +  openstack-nova-conductor.service \
    +  openstack-nova-novncproxy.service
    +
  8. +
+

Compute节点

+

在计算节点执行以下操作。

+
    +
  1. +

    安装软件包

    +
    dnf install openstack-nova-compute
    +
  2. +
  3. +

    编辑/etc/nova/nova.conf配置文件

    +
      +
    • +

      [default]部分,启用计算和元数据的API,配置RabbitMQ消息队列入口,使用Compute节点管理IP配置my_ip,显式定义compute_driver、instances_path、log_dir:

      +
      [DEFAULT]
      +enabled_apis = osapi_compute,metadata
      +transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
      +my_ip = 192.168.0.3
      +compute_driver = libvirt.LibvirtDriver
      +instances_path = /var/lib/nova/instances
      +log_dir = /var/log/nova
      +

      替换RABBIT_PASS为RabbitMQ中openstack账户的密码。

      +
    • +
    • +

      [api][keystone_authtoken]部分,配置身份认证服务入口:

      +
      [api]
      +auth_strategy = keystone
      +
      +[keystone_authtoken]
      +auth_url = http://controller:5000/v3
      +memcached_servers = controller:11211
      +auth_type = password
      +project_domain_name = Default
      +user_domain_name = Default
      +project_name = service
      +username = nova
      +password = NOVA_PASS
      +

      替换NOVA_PASS为nova用户的密码。

      +
    • +
    • +

      [vnc]部分,启用并配置远程控制台入口:

      +
      [vnc]
      +enabled = true
      +server_listen = $my_ip
      +server_proxyclient_address = $my_ip
      +novncproxy_base_url = http://controller:6080/vnc_auto.html
      +
    • +
    • +

      [glance]部分,配置镜像服务API的地址:

      +
      [glance]
      +api_servers = http://controller:9292
      +
    • +
    • +

      [oslo_concurrency]部分,配置lock path:

      +
      [oslo_concurrency]
      +lock_path = /var/lib/nova/tmp
      +
    • +
    • +

      [placement]部分,配置placement服务的入口:

      +
      [placement]
      +region_name = RegionOne
      +project_domain_name = Default
      +project_name = service
      +auth_type = password
      +user_domain_name = Default
      +auth_url = http://controller:5000/v3
      +username = placement
      +password = PLACEMENT_PASS
      +

      替换PLACEMENT_PASS为placement用户的密码。

      +
    • +
    +
  4. +
  5. +

    确认计算节点是否支持虚拟机硬件加速(x86_64)

    +

    处理器为x86_64架构时,可通过运行如下命令确认是否支持硬件加速:

    +
    egrep -c '(vmx|svm)' /proc/cpuinfo
    +

    如果返回值为0则不支持硬件加速,需要配置libvirt使用QEMU而不是默认的KVM。编辑/etc/nova/nova.conf[libvirt]部分:

    +
    [libvirt]
    +virt_type = qemu
    +

    如果返回值为1或更大的值,则支持硬件加速,不需要进行额外的配置。

    +
  6. +
  7. +

    确认计算节点是否支持虚拟机硬件加速(arm64)

    +

    处理器为arm64架构时,可通过运行如下命令确认是否支持硬件加速:

    +
    virt-host-validate
    +# 该命令由libvirt提供,此时libvirt应已作为openstack-nova-compute依赖被安装,环境中已有此命令
    +

    显示FAIL时,表示不支持硬件加速,需要配置libvirt使用QEMU而不是默认的KVM。

    +
    QEMU: Checking if device /dev/kvm exists: FAIL (Check that CPU and firmware supports virtualization and kvm module is loaded)
    +

    编辑/etc/nova/nova.conf[libvirt]部分:

    +
    [libvirt]
    +virt_type = qemu
    +

    显示PASS时,表示支持硬件加速,不需要进行额外的配置。

    +
    QEMU: Checking if device /dev/kvm exists: PASS
    +
  8. +
  9. +

    配置qemu(仅arm64)

    +

    仅当处理器为arm64架构时需要执行此操作。

    +
      +
    • +

      编辑/etc/libvirt/qemu.conf:

      +
      nvram = ["/usr/share/AAVMF/AAVMF_CODE.fd: \
      +         /usr/share/AAVMF/AAVMF_VARS.fd", \
      +         "/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw: \
      +         /usr/share/edk2/aarch64/vars-template-pflash.raw"]
      +
    • +
    • +

      编辑/etc/qemu/firmware/edk2-aarch64.json

      +
      {
      +    "description": "UEFI firmware for ARM64 virtual machines",
      +    "interface-types": [
      +        "uefi"
      +    ],
      +    "mapping": {
      +        "device": "flash",
      +        "executable": {
      +            "filename": "/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw",
      +            "format": "raw"
      +        },
      +        "nvram-template": {
      +            "filename": "/usr/share/edk2/aarch64/vars-template-pflash.raw",
      +            "format": "raw"
      +        }
      +    },
      +    "targets": [
      +        {
      +            "architecture": "aarch64",
      +            "machines": [
      +                "virt-*"
      +            ]
      +        }
      +    ],
      +    "features": [
      +
      +    ],
      +    "tags": [
      +
      +    ]
      +}
      +
    • +
    +
  10. +
  11. +

    启动服务

    +
    systemctl enable libvirtd.service openstack-nova-compute.service
    +systemctl start libvirtd.service openstack-nova-compute.service
    +
  12. +
+

Controller节点

+

在控制节点执行以下操作。

+
    +
  1. +

    添加计算节点到openstack集群

    +
      +
    • +

      source admin凭证,以获取admin命令行权限:

      +
      source ~/.admin-openrc
      +
    • +
    • +

      确认nova-compute服务已识别到数据库中:

      +
      openstack compute service list --service nova-compute
      +
    • +
    • +

      发现计算节点,将计算节点添加到cell数据库:

      +

      su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
      +结果如下:

      +
      Modules with known eventlet monkey patching issues were imported prior to eventlet monkey patching: urllib3. This warning can usually be    ignored if the caller is only importing and not executing nova code.
      +Found 2 cell mappings.
      +Skipping cell0 since it does not contain hosts.
      +Getting computes from cell 'cell1': 6dae034e-b2d9-4a6c-b6f0-60ada6a6ddc2
      +Checking host mapping for compute host 'compute': 6286a86f-09d7-4786-9137-1185654c9e2e
      +Creating host mapping for compute host 'compute': 6286a86f-09d7-4786-9137-1185654c9e2e
      +Found 1 unmapped computes in cell: 6dae034e-b2d9-4a6c-b6f0-60ada6a6ddc2
      +
    • +
    +
  2. +
  3. +

    验证

    +
      +
    • 列出服务组件,验证每个流程都成功启动和注册:
    • +
    +
    openstack compute service list
    +
      +
    • 列出身份服务中的API端点,验证与身份服务的连接:
    • +
    +
    openstack catalog list
    +
      +
    • 列出镜像服务中的镜像,验证与镜像服务的连接:
    • +
    +
    openstack image list
    +
      +
    • 检查cells是否运作成功,以及其他必要条件是否已具备。
    • +
    +
    nova-status upgrade check
    +
  4. +
+

Neutron

+

Neutron是OpenStack的网络服务,提供虚拟交换机、IP路由、DHCP等功能。

+

Controller节点

+
    +
  1. +

    创建数据库、服务凭证和 API 服务端点

    +
      +
    • +

      创建数据库:

      +
      mysql -u root -p
      +
      +MariaDB [(none)]> CREATE DATABASE neutron;
      +MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'NEUTRON_DBPASS';
      +MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'NEUTRON_DBPASS';
      +MariaDB [(none)]> exit;
      +
    • +
    • +

      创建用户和服务,并记住创建neutron用户时输入的密码,用于配置NEUTRON_PASS:

      +
      source ~/.admin-openrc
      +openstack user create --domain default --password-prompt neutron
      +openstack role add --project service --user neutron admin
      +openstack service create --name neutron --description "OpenStack Networking" network
      +
    • +
    • +

      部署 Neutron API 服务:

      +
      openstack endpoint create --region RegionOne network public http://controller:9696
      +openstack endpoint create --region RegionOne network internal http://controller:9696
      +openstack endpoint create --region RegionOne network admin http://controller:9696
      +
    • +
    +
  2. +
  3. +

    安装软件包

    +

    dnf install -y openstack-neutron openstack-neutron-linuxbridge ebtables ipset openstack-neutron-ml2

  3. 配置Neutron

    +
      +
    • +

      修改/etc/neutron/neutron.conf +

      [database]
      +connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron
      +
      +[DEFAULT]
      +core_plugin = ml2
      +service_plugins = router
      +allow_overlapping_ips = true
      +transport_url = rabbit://openstack:RABBIT_PASS@controller
      +auth_strategy = keystone
      +notify_nova_on_port_status_changes = true
      +notify_nova_on_port_data_changes = true
      +
      +[keystone_authtoken]
      +www_authenticate_uri = http://controller:5000
      +auth_url = http://controller:5000
      +memcached_servers = controller:11211
      +auth_type = password
      +project_domain_name = Default
      +user_domain_name = Default
      +project_name = service
      +username = neutron
      +password = NEUTRON_PASS
      +
      +[nova]
      +auth_url = http://controller:5000
      +auth_type = password
      +project_domain_name = Default
      +user_domain_name = Default
      +region_name = RegionOne
      +project_name = service
      +username = nova
      +password = NOVA_PASS
      +
      +[oslo_concurrency]
      +lock_path = /var/lib/neutron/tmp
      +
      +[experimental]
      +linuxbridge = true

      +
    • +
    • +

      配置ML2。ML2具体配置可以根据用户需求自行修改,本文使用的是provider network + linuxbridge

      +
    • +
    • +

      修改/etc/neutron/plugins/ml2/ml2_conf.ini +

      [ml2]
      +type_drivers = flat,vlan,vxlan
      +tenant_network_types = vxlan
      +mechanism_drivers = linuxbridge,l2population
      +extension_drivers = port_security
      +
      +[ml2_type_flat]
      +flat_networks = provider
      +
      +[ml2_type_vxlan]
      +vni_ranges = 1:1000
      +
      +[securitygroup]
      +enable_ipset = true

      +
    • +
    • +

      修改/etc/neutron/plugins/ml2/linuxbridge_agent.ini +

      [linux_bridge]
      +physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME
      +
      +[vxlan]
      +enable_vxlan = true
      +local_ip = OVERLAY_INTERFACE_IP_ADDRESS
      +l2_population = true
      +
      +[securitygroup]
      +enable_security_group = true
      +firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

      +
    • +
    • +

      配置Layer-3代理

      +
    • +
    • +

      修改/etc/neutron/l3_agent.ini

      +
      [DEFAULT]
      +interface_driver = linuxbridge
      +

      配置DHCP代理 +修改/etc/neutron/dhcp_agent.ini +

      [DEFAULT]
      +interface_driver = linuxbridge
      +dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
      +enable_isolated_metadata = true

      +
    • +
    • +

      配置metadata代理

      +
    • +
    • +

      修改/etc/neutron/metadata_agent.ini +

      [DEFAULT]
      +nova_metadata_host = controller
      +metadata_proxy_shared_secret = METADATA_SECRET

      +
    • +
    • 配置nova服务使用neutron,修改/etc/nova/nova.conf +
      [neutron]
      +auth_url = http://controller:5000
      +auth_type = password
      +project_domain_name = default
      +user_domain_name = default
      +region_name = RegionOne
      +project_name = service
      +username = neutron
      +password = NEUTRON_PASS
      +service_metadata_proxy = true
      +metadata_proxy_shared_secret = METADATA_SECRET
    • +
    +
  4. +
  5. +

    创建/etc/neutron/plugin.ini的符号链接

    +
    ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
    +
  6. +
  7. +

    同步数据库 +

    su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

    +
  8. +
  9. 重启nova api服务 +
    systemctl restart openstack-nova-api
  10. +
  11. +

    启动网络服务

    +
    systemctl enable neutron-server.service neutron-linuxbridge-agent.service \
    +neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service
    +systemctl start neutron-server.service neutron-linuxbridge-agent.service \
    +neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service
    +
  12. +
+

Compute节点

+
    +
  1. 安装软件包 +
    dnf install openstack-neutron-linuxbridge ebtables ipset -y
  2. +
  3. +

    配置Neutron

    +
      +
    • +

      修改/etc/neutron/neutron.conf +

      [DEFAULT]
      +transport_url = rabbit://openstack:RABBIT_PASS@controller
      +auth_strategy = keystone
      +
      +[keystone_authtoken]
      +www_authenticate_uri = http://controller:5000
      +auth_url = http://controller:5000
      +memcached_servers = controller:11211
      +auth_type = password
      +project_domain_name = Default
      +user_domain_name = Default
      +project_name = service
      +username = neutron
      +password = NEUTRON_PASS
      +
      +[oslo_concurrency]
      +lock_path = /var/lib/neutron/tmp

      +
    • +
    • +

      修改/etc/neutron/plugins/ml2/linuxbridge_agent.ini +

      [linux_bridge]
      +physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME
      +
      +[vxlan]
      +enable_vxlan = true
      +local_ip = OVERLAY_INTERFACE_IP_ADDRESS
      +l2_population = true
      +
      +[securitygroup]
      +enable_security_group = true
      +firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

      +
    • +
    • +

      配置nova compute服务使用neutron,修改/etc/nova/nova.conf +

      [neutron]
      +auth_url = http://controller:5000
      +auth_type = password
      +project_domain_name = default
      +user_domain_name = default
      +region_name = RegionOne
      +project_name = service
      +username = neutron
      +password = NEUTRON_PASS

      +
    • +
    • 重启nova-compute服务 +
      systemctl restart openstack-nova-compute.service
    • +
    • 启动Neutron linuxbridge agent服务
    • +
    +
    systemctl enable neutron-linuxbridge-agent
    +systemctl start neutron-linuxbridge-agent
    +
  4. +
+
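控制节点和计算节点都部署完成后,可以在controller节点确认各neutron agent已注册成功(示例):

source ~/.admin-openrc
+openstack network agent list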

Cinder

+

Cinder是OpenStack的存储服务,提供块设备的创建、发放、备份等功能。

+

Controller节点

+
    +
  1. +

    初始化数据库

    +

    CINDER_DBPASS是用户自定义的cinder数据库密码。 +

    mysql -u root -p
    +
    +MariaDB [(none)]> CREATE DATABASE cinder;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'CINDER_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'CINDER_DBPASS';
    +MariaDB [(none)]> exit

    +
  2. +
  3. +

    初始化Keystone资源对象

    +

    source ~/.admin-openrc
    +
    +#创建用户时,命令行会提示输入密码,请输入自定义的密码,下文涉及到`CINDER_PASS`的地方替换成该密码即可。
    +openstack user create --domain default --password-prompt cinder
    +
    +openstack role add --project service --user cinder admin
    +openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
    +
    +openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s
    +openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s
    +openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s

  3. 安装软件包

    +
    dnf install openstack-cinder-api openstack-cinder-scheduler
    +
  4. +
  5. +

    修改cinder配置文件/etc/cinder/cinder.conf

    +
    [DEFAULT]
    +transport_url = rabbit://openstack:RABBIT_PASS@controller
    +auth_strategy = keystone
    +my_ip = 192.168.0.2
    +
    +[database]
    +connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder
    +
    +[keystone_authtoken]
    +www_authenticate_uri = http://controller:5000
    +auth_url = http://controller:5000
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +project_name = service
    +username = cinder
    +password = CINDER_PASS
    +
    +[oslo_concurrency]
    +lock_path = /var/lib/cinder/tmp
    +
  6. +
  7. +

    数据库同步

    +
    su -s /bin/sh -c "cinder-manage db sync" cinder
    +
  8. +
  9. +

    修改nova配置/etc/nova/nova.conf

    +
    [cinder]
    +os_region_name = RegionOne
    +
  10. +
  11. +

    启动服务

    +
    systemctl restart openstack-nova-api
    +systemctl start openstack-cinder-api openstack-cinder-scheduler
    +
  12. +
+

Storage节点

+

Storage节点要提前准备至少一块硬盘,作为cinder的存储后端,下文默认storage节点已经存在一块未使用的硬盘,设备名称为/dev/sdb,用户在配置过程中,请按照真实环境信息进行名称替换。

+

Cinder支持很多类型的后端存储,本指导使用最简单的lvm为参考,如果您想使用如ceph等其他后端,请自行配置。
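配置前,可以先确认待使用的磁盘状态(示例,设备名请以实际环境为准):

# 确认存在未使用的磁盘/dev/sdb
lsblk
+# 查看该磁盘上是否已有文件系统或分区签名(无输出说明磁盘为空)
+blkid /dev/sdb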

+
    +
  1. +

    安装软件包

    +
    dnf install lvm2 device-mapper-persistent-data scsi-target-utils rpcbind nfs-utils openstack-cinder-volume openstack-cinder-backup
    +
  2. +
  3. +

    配置lvm卷组

    +
    pvcreate /dev/sdb
    +vgcreate cinder-volumes /dev/sdb
    +
  4. +
  5. +

    修改cinder配置/etc/cinder/cinder.conf

    +
    [DEFAULT]
    +transport_url = rabbit://openstack:RABBIT_PASS@controller
    +auth_strategy = keystone
    +my_ip = 192.168.0.4
    +enabled_backends = lvm
    +glance_api_servers = http://controller:9292
    +
    +[keystone_authtoken]
    +www_authenticate_uri = http://controller:5000
    +auth_url = http://controller:5000
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = default
    +user_domain_name = default
    +project_name = service
    +username = cinder
    +password = CINDER_PASS
    +
    +[database]
    +connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder
    +
    +[lvm]
    +volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
    +volume_group = cinder-volumes
    +target_protocol = iscsi
    +target_helper = lioadm
    +
    +[oslo_concurrency]
    +lock_path = /var/lib/cinder/tmp
    +
  6. +
  7. +

    配置cinder backup (可选)

    +

    cinder-backup是可选的备份服务,cinder同样支持很多种备份后端,本文使用swift存储,如果您想使用如NFS等后端,请自行配置,例如可以参考OpenStack官方文档对NFS的配置说明。

    +

    修改/etc/cinder/cinder.conf,在[DEFAULT]中新增 +

    [DEFAULT]
    +backup_driver = cinder.backup.drivers.swift.SwiftBackupDriver
    +backup_swift_url = SWIFT_URL

    +

    这里的SWIFT_URL是指环境中swift服务的URL,在部署完swift服务后,执行openstack catalog show object-store命令获取。

    +
  8. +
  9. +

    启动服务

    +
    systemctl start openstack-cinder-volume target
    +systemctl start openstack-cinder-backup  # 可选
    +
  10. +
+

至此,Cinder服务的部署已全部完成,可以在controller通过以下命令进行简单的验证

+
source ~/.admin-openrc
+openstack volume service list
+openstack volume list
+

Horizon

+

Horizon是OpenStack提供的前端页面,可以让用户通过网页鼠标的操作来控制OpenStack集群,而不用繁琐的CLI命令行。Horizon一般部署在控制节点。

+
    +
  1. +

    安装软件包

    +
    dnf install openstack-dashboard
    +
  2. +
  3. +

    修改配置文件/etc/openstack-dashboard/local_settings

    +
    OPENSTACK_HOST = "controller"
    +ALLOWED_HOSTS = ['*', ]
    +OPENSTACK_KEYSTONE_URL =  "http://controller:5000/v3"
    +SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
    +CACHES = {
    +'default': {
    +    'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
    +    'LOCATION': 'controller:11211',
    +    }
    +}
    +OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
    +OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
    +OPENSTACK_KEYSTONE_DEFAULT_ROLE = "member"
    +WEBROOT = '/dashboard'
    +POLICY_FILES_PATH = "/etc/openstack-dashboard"
    +
    +OPENSTACK_API_VERSIONS = {
    +    "identity": 3,
    +    "image": 2,
    +    "volume": 3,
    +}
    +
  4. +
  5. +

    重启服务

    +
    systemctl restart httpd
    +
  6. +
+

至此,horizon服务的部署已全部完成,打开浏览器,输入http://192.168.0.2/dashboard,打开horizon登录页面。

+
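
作为一个简单的连通性检查,也可以在命令行用curl确认dashboard已经可以访问(下面的IP沿用本文示例中controller的192.168.0.2,请按实际环境替换):

# 预期返回HTTP 200或跳转到登录页面
curl -I http://192.168.0.2/dashboard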

Ironic

+

Ironic是OpenStack的裸金属服务,如果用户需要进行裸机部署则推荐使用该组件。否则,可以不用安装。

+

在控制节点执行以下操作。

+
    +
  1. +

    设置数据库

    +

    裸金属服务在数据库中存储信息,创建一个ironic用户可以访问的ironic数据库,替换IRONIC_DBPASS为合适的密码

    +
    mysql -u root -p
    +
    +MariaDB [(none)]> CREATE DATABASE ironic CHARACTER SET utf8;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'localhost' \
    +IDENTIFIED BY 'IRONIC_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'%' \
    +IDENTIFIED BY 'IRONIC_DBPASS';
    +MariaDB [(none)]> exit
    +Bye
    +
  2. +
  3. +

    创建服务用户认证

    +
      +
    • +

      创建Bare Metal服务用户

      +

      替换IRONIC_PASS为ironic用户密码,IRONIC_INSPECTOR_PASS为ironic_inspector用户密码。

      +
      openstack user create --password IRONIC_PASS \
      +  --email ironic@example.com ironic
      +openstack role add --project service --user ironic admin
      +openstack service create --name ironic \
      +  --description "Ironic baremetal provisioning service" baremetal
      +
      +openstack service create --name ironic-inspector --description     "Ironic inspector baremetal provisioning service" baremetal-introspection
      +openstack user create --password IRONIC_INSPECTOR_PASS --email ironic_inspector@example.com ironic-inspector
      +openstack role add --project service --user ironic-inspector admin
      +
    • +
    • +

      创建Bare Metal服务访问入口

      +
      openstack endpoint create --region RegionOne baremetal admin http://192.168.0.2:6385
      +openstack endpoint create --region RegionOne baremetal public http://192.168.0.2:6385
      +openstack endpoint create --region RegionOne baremetal internal http://192.168.0.2:6385
      +openstack endpoint create --region RegionOne baremetal-introspection internal http://192.168.0.2:5050/v1
      +openstack endpoint create --region RegionOne baremetal-introspection public http://192.168.0.2:5050/v1
      +openstack endpoint create --region RegionOne baremetal-introspection admin http://192.168.0.2:5050/v1
      +
    • +
    +
  4. +
  5. +

    安装组件

    +
    dnf install openstack-ironic-api openstack-ironic-conductor python3-ironicclient
    +
  6. +
  7. +

    配置ironic-api服务

    +

    配置文件路径/etc/ironic/ironic.conf

    +
      +
    • +

      通过connection选项配置数据库的位置,如下所示。替换IRONIC_DBPASS为ironic用户的密码,替换DB_IP为DB服务器所在的IP地址:

      +
      [database]
      +
      +# The SQLAlchemy connection string used to connect to the
      +# database (string value)
      +# connection = mysql+pymysql://ironic:IRONIC_DBPASS@DB_IP/ironic
      +connection = mysql+pymysql://ironic:IRONIC_DBPASS@controller/ironic
      +
    • +
    • +

      通过以下选项配置ironic-api服务使用RabbitMQ消息代理,替换RPC_*为RabbitMQ的详细地址和凭证

      +
      [DEFAULT]
      +
      +# A URL representing the messaging driver to use and its full
      +# configuration. (string value)
      +# transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
      +transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
      +

      用户也可自行使用json-rpc方式替换rabbitmq

      +
    • +
    • +

      配置ironic-api服务使用身份认证服务的凭证,替换PUBLIC_IDENTITY_IP为身份认证服务器的公共IP,替换PRIVATE_IDENTITY_IP为身份认证服务器的私有IP,替换 IRONIC_PASS为身份认证服务中ironic用户的密码,替换RABBIT_PASS为RabbitMQ中openstack账户的密码。:

      +
      [DEFAULT]
      +
      +# Authentication strategy used by ironic-api: one of
      +# "keystone" or "noauth". "noauth" should not be used in a
      +# production environment because all authentication will be
      +# disabled. (string value)
      +
      +auth_strategy=keystone
      +host = controller
      +memcache_servers = controller:11211
      +enabled_network_interfaces = flat,noop,neutron
      +default_network_interface = noop
      +enabled_hardware_types = ipmi
      +enabled_boot_interfaces = pxe
      +enabled_deploy_interfaces = direct
      +default_deploy_interface = direct
      +enabled_inspect_interfaces = inspector
      +enabled_management_interfaces = ipmitool
      +enabled_power_interfaces = ipmitool
      +enabled_rescue_interfaces = no-rescue,agent
      +isolinux_bin = /usr/share/syslinux/isolinux.bin
      +logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s
      +
      +[keystone_authtoken]
      +# Authentication type to load (string value)
      +auth_type=password
      +# Complete public Identity API endpoint (string value)
      +# www_authenticate_uri=http://PUBLIC_IDENTITY_IP:5000
      +www_authenticate_uri=http://controller:5000
      +# Complete admin Identity API endpoint. (string value)
      +# auth_url=http://PRIVATE_IDENTITY_IP:5000
      +auth_url=http://controller:5000
      +# Service username. (string value)
      +username=ironic
      +# Service account password. (string value)
      +password=IRONIC_PASS
      +# Service tenant name. (string value)
      +project_name=service
      +# Domain name containing project (string value)
      +project_domain_name=Default
      +# User's domain name (string value)
      +user_domain_name=Default
      +
      +[agent]
      +deploy_logs_collect = always
      +deploy_logs_local_path = /var/log/ironic/deploy
      +deploy_logs_storage_backend = local
      +image_download_source = http
      +stream_raw_images = false
      +force_raw_images = false
      +verify_ca = False
      +
      +[oslo_concurrency]
      +
      +[oslo_messaging_notifications]
      +transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
      +topics = notifications
      +driver = messagingv2
      +
      +[oslo_messaging_rabbit]
      +amqp_durable_queues = True
      +rabbit_ha_queues = True
      +
      +[pxe]
      +ipxe_enabled = false
      +pxe_append_params = nofb nomodeset vga=normal coreos.autologin ipa-insecure=1
      +image_cache_size = 204800
      +tftp_root=/var/lib/tftpboot/cephfs/
      +tftp_master_path=/var/lib/tftpboot/cephfs/master_images
      +
      +[dhcp]
      +dhcp_provider = none
      +
    • +
    • +

      创建裸金属服务数据库表

      +
      ironic-dbsync --config-file /etc/ironic/ironic.conf create_schema
      +
    • +
    • +

      重启ironic-api服务

      +
      sudo systemctl restart openstack-ironic-api
      +
    • +
    +
  8. +
  9. +

    配置ironic-conductor服务

    +

    如下为ironic-conductor服务自身的标准配置。ironic-conductor服务可以与ironic-api服务分布于不同节点,本指南中均部署于控制节点,重复的配置项可跳过。

    +
      +
    • +

      替换使用conductor服务所在host的IP配置my_ip:

      +
      [DEFAULT]
      +
      +# IP address of this host. If unset, will determine the IP
      +# programmatically. If unable to do so, will use "127.0.0.1".
      +# (string value)
      +# my_ip=HOST_IP
      +my_ip = 192.168.0.2
      +
    • +
    • +

      配置数据库的位置,ironic-conductor应该使用和ironic-api相同的配置。替换IRONIC_DBPASS为ironic用户的密码:

      +
      [database]
      +
      +# The SQLAlchemy connection string to use to connect to the
      +# database. (string value)
      +connection = mysql+pymysql://ironic:IRONIC_DBPASS@controller/ironic
      +
    • +
    • +

      通过以下选项配置ironic-conductor服务使用RabbitMQ消息代理,ironic-conductor应该使用和ironic-api相同的配置,替换RABBIT_PASS为RabbitMQ中openstack账户的密码:

      +
      [DEFAULT]
      +
      +# A URL representing the messaging driver to use and its full
      +# configuration. (string value)
      +transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
      +

      用户也可自行使用json-rpc方式替换rabbitmq

      +
    • +
    • +

      配置凭证访问其他OpenStack服务

      +

      为了与其他OpenStack服务进行通信,裸金属服务在请求其他服务时需要使用服务用户与OpenStack Identity服务进行认证。这些用户的凭据必须在与相应服务相关的每个配置文件中进行配置。

      +
      [neutron] - 访问OpenStack网络服务
      +[glance] - 访问OpenStack镜像服务
      +[swift] - 访问OpenStack对象存储服务
      +[cinder] - 访问OpenStack块存储服务
      +[inspector] - 访问OpenStack裸金属introspection服务
      +[service_catalog] - 一个特殊项用于保存裸金属服务使用的凭证,该凭证用于发现注册在OpenStack身份认证服务目录中的自己的API URL端点
      +

      简单起见,可以对所有服务使用同一个服务用户。为了向后兼容,该用户应该和ironic-api服务的[keystone_authtoken]所配置的为同一个用户。但这不是必须的,也可以为每个服务创建并配置不同的服务用户。

      +

      在下面的示例中,用户访问OpenStack网络服务的身份验证信息配置为:

      +
      网络服务部署在名为RegionOne的身份认证服务域中,仅在服务目录中注册公共端点接口
      +
      +请求时使用特定的CA SSL证书进行HTTPS连接
      +
      +与ironic-api服务配置相同的服务用户
      +
      +动态密码认证插件基于其他选项发现合适的身份认证服务API版本
      +

      替换IRONIC_PASS为ironic用户密码。

      +
      [neutron]
      +
      +# Authentication type to load (string value)
      +auth_type = password
      +# Authentication URL (string value)
      +auth_url=https://IDENTITY_IP:5000/
      +# Username (string value)
      +username=ironic
      +# User's password (string value)
      +password=IRONIC_PASS
      +# Project name to scope to (string value)
      +project_name=service
      +# Domain ID containing project (string value)
      +project_domain_id=default
      +# User's domain id (string value)
      +user_domain_id=default
      +# PEM encoded Certificate Authority to use when verifying
      +# HTTPs connections. (string value)
      +cafile=/opt/stack/data/ca-bundle.pem
      +# The default region_name for endpoint URL discovery. (string
      +# value)
      +region_name = RegionOne
      +# List of interfaces, in order of preference, for endpoint
      +# URL. (list value)
      +valid_interfaces=public
      +
      +# 其他参考配置
      +[glance]
      +endpoint_override = http://controller:9292
      +www_authenticate_uri = http://controller:5000
      +auth_url = http://controller:5000
      +auth_type = password
      +username = ironic
      +password = IRONIC_PASS
      +project_domain_name = default
      +user_domain_name = default
      +region_name = RegionOne
      +project_name = service
      +
      +[service_catalog]  
      +region_name = RegionOne
      +project_domain_id = default
      +user_domain_id = default
      +project_name = service
      +password = IRONIC_PASS
      +username = ironic
      +auth_url = http://controller:5000
      +auth_type = password
      +

      默认情况下,为了与其他服务进行通信,裸金属服务会尝试通过身份认证服务的服务目录发现该服务合适的端点。如果希望对一个特定服务使用一个不同的端点,则在裸金属服务的配置文件中通过endpoint_override选项进行指定:

      +
      [neutron]
      +endpoint_override = <NEUTRON_API_ADDRESS>
      +
    • +
    • +

      配置允许的驱动程序和硬件类型

      +

      通过设置enabled_hardware_types设置ironic-conductor服务允许使用的硬件类型:

      +
      [DEFAULT]
      +enabled_hardware_types = ipmi
      +

      配置硬件接口:

      +
      enabled_boot_interfaces = pxe
      +enabled_deploy_interfaces = direct,iscsi
      +enabled_inspect_interfaces = inspector
      +enabled_management_interfaces = ipmitool
      +enabled_power_interfaces = ipmitool
      +

      配置接口默认值:

      +
      [DEFAULT]
      +default_deploy_interface = direct
      +default_network_interface = neutron
      +

      如果启用了任何使用Direct deploy的驱动,必须安装和配置镜像服务的Swift后端。Ceph对象网关(RADOS网关)也支持作为镜像服务的后端。

      +
    • +
    • +

      重启ironic-conductor服务

      +
      sudo systemctl restart openstack-ironic-conductor
      +
    • +
    +
  10. +
  11. +

    配置ironic-inspector服务

    +
      +
    • +

      安装组件

      +
      dnf install openstack-ironic-inspector
      +
    • +
    • +

      创建数据库

      +
      # mysql -u root -p
      +
      +MariaDB [(none)]> CREATE DATABASE ironic_inspector CHARACTER SET utf8;
      +
      +MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic_inspector.* TO 'ironic_inspector'@'localhost' \
      +IDENTIFIED BY 'IRONIC_INSPECTOR_DBPASS';
      +MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic_inspector.* TO 'ironic_inspector'@'%' \
      +IDENTIFIED BY 'IRONIC_INSPECTOR_DBPASS';
      +MariaDB [(none)]> exit
      +Bye
      +
    • +
    • +

      配置/etc/ironic-inspector/inspector.conf

      +

      通过connection选项配置数据库的位置,如下所示,替换IRONIC_INSPECTOR_DBPASS为ironic_inspector用户的密码

      +
      [database]
      +backend = sqlalchemy
      +connection = mysql+pymysql://ironic_inspector:IRONIC_INSPECTOR_DBPASS@controller/ironic_inspector
      +min_pool_size = 100
      +max_pool_size = 500
      +pool_timeout = 30
      +max_retries = 5
      +max_overflow = 200
      +db_retry_interval = 2
      +db_inc_retry_interval = True
      +db_max_retry_interval = 2
      +db_max_retries = 5
      +
    • +
    • +

      配置消息队列通信地址

      +
      [DEFAULT] 
      +transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
      +
    • +
    • +

      设置keystone认证

      +
      [DEFAULT]
      +
      +auth_strategy = keystone
      +timeout = 900
      +rootwrap_config = /etc/ironic-inspector/rootwrap.conf
      +logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s
      +log_dir = /var/log/ironic-inspector
      +state_path = /var/lib/ironic-inspector
      +use_stderr = False
      +
      +[ironic]
      +api_endpoint = http://IRONIC_API_HOST_ADDRESS:6385
      +auth_type = password
      +auth_url = http://PUBLIC_IDENTITY_IP:5000
      +auth_strategy = keystone
      +ironic_url = http://IRONIC_API_HOST_ADDRESS:6385
      +os_region = RegionOne
      +project_name = service
      +project_domain_name = Default
      +user_domain_name = Default
      +username = IRONIC_SERVICE_USER_NAME
      +password = IRONIC_SERVICE_USER_PASSWORD
      +
      +[keystone_authtoken]
      +auth_type = password
      +auth_url = http://controller:5000
      +www_authenticate_uri = http://controller:5000
      +project_domain_name = default
      +user_domain_name = default
      +project_name = service
      +username = ironic_inspector
      +password = IRONICPASSWD
      +region_name = RegionOne
      +memcache_servers = controller:11211
      +token_cache_time = 300
      +
      +[processing]
      +add_ports = active
      +processing_hooks = $default_processing_hooks,local_link_connection,lldp_basic
      +ramdisk_logs_dir = /var/log/ironic-inspector/ramdisk
      +always_store_ramdisk_logs = true
      +store_data =none
      +power_off = false
      +
      +[pxe_filter]
      +driver = iptables
      +
      +[capabilities]
      +boot_mode=True
      +
    • +
    • +

      配置ironic inspector dnsmasq服务

      +
      # 配置文件地址:/etc/ironic-inspector/dnsmasq.conf
      +port=0
      +interface=enp3s0                         #替换为实际监听网络接口
      +dhcp-range=192.168.0.40,192.168.0.50   #替换为实际dhcp地址范围
      +bind-interfaces
      +enable-tftp
      +
      +dhcp-match=set:efi,option:client-arch,7
      +dhcp-match=set:efi,option:client-arch,9
      +dhcp-match=aarch64, option:client-arch,11
      +dhcp-boot=tag:aarch64,grubaa64.efi
      +dhcp-boot=tag:!aarch64,tag:efi,grubx64.efi
      +dhcp-boot=tag:!aarch64,tag:!efi,pxelinux.0
      +
      +tftp-root=/tftpboot                       #替换为实际tftpboot目录
      +log-facility=/var/log/dnsmasq.log
      +
    • +
    • +

      关闭ironic provision网络子网的dhcp

      +
      openstack subnet set --no-dhcp 72426e89-f552-4dc4-9ac7-c4e131ce7f3c
      +
    • +
    • +

      初始化ironic-inspector服务的数据库

      +
      ironic-inspector-dbsync --config-file /etc/ironic-inspector/inspector.conf upgrade
      +
    • +
    • +

      启动服务

      +
      systemctl enable --now openstack-ironic-inspector.service
      +systemctl enable --now openstack-ironic-inspector-dnsmasq.service
      +
    • +
    +
  12. +
  13. +

    配置httpd服务

    +
      +
    • +

      创建ironic要使用的httpd的root目录并设置属主属组,目录路径要和/etc/ironic/ironic.conf中[deploy]组中http_root 配置项指定的路径要一致。

      +
      mkdir -p /var/lib/ironic/httproot
      +chown ironic.ironic /var/lib/ironic/httproot
      +
    • +
    • +

      安装和配置httpd服务

      +
        +
      • +

        安装httpd服务,已有请忽略

        +
        dnf install httpd -y
        +
      • +
      • +

        创建/etc/httpd/conf.d/openstack-ironic-httpd.conf文件,内容如下:

        +
        Listen 8080
        +
        +<VirtualHost *:8080>
        +    ServerName ironic.openeuler.com
        +
        +    ErrorLog "/var/log/httpd/openstack-ironic-httpd-error_log"
        +    CustomLog "/var/log/httpd/openstack-ironic-httpd-access_log" "%h %l %u %t \"%r\" %>s %b"
        +
        +    DocumentRoot "/var/lib/ironic/httproot"
        +    <Directory "/var/lib/ironic/httproot">
        +        Options Indexes FollowSymLinks
        +        Require all granted
        +    </Directory>
        +    LogLevel warn
        +    AddDefaultCharset UTF-8
        +    EnableSendfile on
        +</VirtualHost>
        +

        注意监听的端口要和/etc/ironic/ironic.conf里[deploy]选项中http_url配置项中指定的端口一致。

        +
      • +
      • +

        重启httpd服务。

        +
        systemctl restart httpd
        +
      • +
      +
    • +
    +
  14. +
  15. +

    deploy ramdisk镜像下载或制作

    +

    部署一个裸机节点总共需要两组镜像:deploy ramdisk images和user images。Deploy ramdisk images上运行有ironic-python-agent(IPA)服务,Ironic通过它进行裸机节点的环境准备。User images是最终被安装裸机节点上,供用户使用的镜像。

    +

    ramdisk镜像支持通过ironic-python-agent-builder或disk-image-builder工具制作。用户也可以自行选择其他工具制作。若使用原生工具,则需要安装对应的软件包。

    +

    具体的使用方法可以参考官方文档,同时官方也有提供制作好的deploy镜像,可尝试下载。

    +

    下文介绍通过ironic-python-agent-builder构建ironic使用的deploy镜像的完整过程。

    +
      +
    • +

      安装 ironic-python-agent-builder

      +
      dnf install python3-ironic-python-agent-builder
      +
      +或
      +pip3 install ironic-python-agent-builder
      +dnf install qemu-img git
      +
    • +
    • +

      制作镜像

      +

      基本用法:

      +
      usage: ironic-python-agent-builder [-h] [-r RELEASE] [-o OUTPUT] [-e ELEMENT] [-b BRANCH]
      +                           [-v] [--lzma] [--extra-args EXTRA_ARGS]
      +                           [--elements-path ELEMENTS_PATH]
      +                           distribution
      +
      +positional arguments:
      +  distribution          Distribution to use
      +
      +options:
      +  -h, --help            show this help message and exit
      +  -r RELEASE, --release RELEASE
      +                        Distribution release to use
      +  -o OUTPUT, --output OUTPUT
      +                        Output base file name
      +  -e ELEMENT, --element ELEMENT
      +                        Additional DIB element to use
      +  -b BRANCH, --branch BRANCH
      +                        If set, override the branch that is used for         ironic-python-agent
      +                        and requirements
      +  -v, --verbose         Enable verbose logging in diskimage-builder
      +  --lzma                Use lzma compression for smaller images
      +  --extra-args EXTRA_ARGS
      +                        Extra arguments to pass to diskimage-builder
      +  --elements-path ELEMENTS_PATH
      +                        Path(s) to custom DIB elements separated by a colon
      +

      操作实例:

      +
      # -o选项指定生成的镜像名
      +# ubuntu指定生成ubuntu系统的镜像
      +ironic-python-agent-builder -o my-ubuntu-ipa ubuntu
      +

      可通过设置ARCH环境变量(默认为amd64)指定所构建镜像的架构。如果是arm架构,需要添加:

      +
      export ARCH=aarch64
      +
    • +
    • +

      允许ssh登录

      +

      初始化环境变量,设置用户名、密码,启用sudo免密权限;并添加-e选项使用相应的DIB元素。制作镜像操作如下:

      +
      export DIB_DEV_USER_USERNAME=ipa
      +export DIB_DEV_USER_PWDLESS_SUDO=yes
      +export DIB_DEV_USER_PASSWORD='123'
      +ironic-python-agent-builder -o my-ssh-ubuntu-ipa -e selinux-permissive -e devuser ubuntu
      +
    • +
    • +

      指定代码仓库

      +

      初始化对应的环境变量,然后制作镜像:

      +
      # 直接从gerrit上clone代码
      +DIB_REPOLOCATION_ironic_python_agent=https://opendev.org/openstack/ironic-python-agent
      +DIB_REPOREF_ironic_python_agent=stable/2023.1
      +
      +# 指定本地仓库及分支
      +DIB_REPOLOCATION_ironic_python_agent=/home/user/path/to/repo
      +DIB_REPOREF_ironic_python_agent=my-test-branch
      +
      +ironic-python-agent-builder ubuntu
      +

      参考:source-repositories

      +
    • +
    +
  16. +
  17. +

    注意

    +

    原生OpenStack里的PXE配置文件模板不支持arm64架构,需要自行对原生OpenStack代码进行修改:在W版中,社区的ironic仍然不支持arm64的UEFI PXE启动,表现为生成的grub.cfg文件(一般位于/tftpboot/下)格式不对,从而导致PXE启动失败。

    +

    生成的错误配置文件:

    +

    (图:ironic-err,错误的grub.cfg配置示例)

    +

    如上图所示,arm架构里寻找vmlinux和ramdisk镜像的命令分别是linux和initrd,上图所示的标红命令是x86架构下的uefi pxe启动。

    +

    需要用户对生成grub.cfg的代码逻辑自行修改。

    +

    ironic向ipa发送查询命令执行状态请求的tls报错:

    +

    当前版本的ipa和ironic默认都会以开启tls认证的方式向对方发送请求,根据官方文档的说明将其关闭即可。

    +
      +
    • +

      修改ironic配置文件(/etc/ironic/ironic.conf),在下面的配置中添加ipa-insecure=1:

      +
      [agent]
      +verify_ca = False
      +[pxe]
      +pxe_append_params = nofb nomodeset vga=normal coreos.autologin ipa-insecure=1
      +
    • +
    • +

      在ramdisk镜像中添加ipa配置文件/etc/ironic_python_agent/ironic_python_agent.conf,tls相关配置如下:

      +

      /etc/ironic_python_agent/ironic_python_agent.conf (需要提前创建/etc/ironic_python_agent目录)

      +
      [DEFAULT]
      +enable_auto_tls = False
      +

      设置权限:

      +
      chown -R ipa.ipa /etc/ironic_python_agent/
      +
    • +
    • +

      ramdisk镜像中修改ipa服务的服务启动文件,添加配置文件选项

      +

      编辑/usr/lib/systemd/system/ironic-python-agent.service文件

      +
      [Unit]
      +Description=Ironic Python Agent
      +After=network-online.target
      +[Service]
      +ExecStartPre=/sbin/modprobe vfat
      +ExecStart=/usr/local/bin/ironic-python-agent --config-file /etc/ironic_python_agent/ironic_python_agent.conf
      +Restart=always
      +RestartSec=30s
      +[Install]
      +WantedBy=multi-user.target
      +
    • +
    +
  18. +
+
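
作为补充,按上文使用ironic-python-agent-builder制作好deploy镜像后,通常还需要把生成的kernel和ramdisk文件注册到Glance,供Ironic部署裸机时引用。以下命令仅为参考示例,假设构建输出文件名为my-ubuntu-ipa.kernel和my-ubuntu-ipa.initramfs,请以实际构建输出为准:

source ~/.admin-openrc
# 上传deploy kernel
openstack image create deploy-kernel --public \
  --disk-format aki --container-format aki \
  --file my-ubuntu-ipa.kernel
# 上传deploy ramdisk
openstack image create deploy-initrd --public \
  --disk-format ari --container-format ari \
  --file my-ubuntu-ipa.initramfs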

Trove

+

Trove是OpenStack的数据库服务,如果用户使用OpenStack提供的数据库服务则推荐使用该组件。否则,可以不用安装。

+

Controller节点

+
    +
  1. +

    创建数据库。

    +

    数据库服务在数据库中存储信息,创建一个trove用户可以访问的trove数据库,替换TROVE_DBPASS为合适的密码。 +

    CREATE DATABASE trove CHARACTER SET utf8;
    +GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'localhost' IDENTIFIED BY 'TROVE_DBPASS';
    +GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'%' IDENTIFIED BY 'TROVE_DBPASS';

    +
  2. +
  3. +

    创建服务凭证以及API端点。

    +

    创建服务凭证。 +

    # 创建trove用户
    +openstack user create --domain default --password-prompt trove
    +# 添加admin角色
    +openstack role add --project service --user trove admin
    +# 创建database服务
    +openstack service create --name trove --description "Database service" database

    +

    创建API端点。 +

    openstack endpoint create --region RegionOne database public http://controller:8779/v1.0/%\(tenant_id\)s
    +openstack endpoint create --region RegionOne database internal http://controller:8779/v1.0/%\(tenant_id\)s
    +openstack endpoint create --region RegionOne database admin http://controller:8779/v1.0/%\(tenant_id\)s

    +
  4. +
  5. +

    安装Trove。 +

    dnf install openstack-trove python-troveclient

    +
  6. +
  7. +

    修改配置文件。

    +

    编辑/etc/trove/trove.conf。 +

    [DEFAULT]
    +bind_host=192.168.0.2
    +log_dir = /var/log/trove
    +network_driver = trove.network.neutron.NeutronDriver
    +network_label_regex=.*
    +management_security_groups = <manage security group>
    +nova_keypair = trove-mgmt
    +default_datastore = mysql
    +taskmanager_manager = trove.taskmanager.manager.Manager
    +trove_api_workers = 5
    +transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
    +reboot_time_out = 300
    +usage_timeout = 900
    +agent_call_high_timeout = 1200
    +use_syslog = False
    +debug = True
    +
    +[database]
    +connection = mysql+pymysql://trove:TROVE_DBPASS@controller/trove
    +
    +[keystone_authtoken]
    +auth_url = http://controller:5000/v3/
    +auth_type = password
    +project_domain_name = Default
    +project_name = service
    +user_domain_name = Default
    +username = trove
    +password = TROVE_PASS
    +
    +[service_credentials]
    +auth_url = http://controller:5000/v3/
    +region_name = RegionOne
    +project_name = service
    +project_domain_name = Default
    +user_domain_name = Default
    +username = trove
    +password = TROVE_PASS
    +
    +[mariadb]
    +tcp_ports = 3306,4444,4567,4568
    +
    +[mysql]
    +tcp_ports = 3306
    +
    +[postgresql]
    +tcp_ports = 5432

    +

    解释:

    +
    +

    [DEFAULT]分组中的bind_host配置为Trove控制节点的IP;transport_url为RabbitMQ连接信息,RABBIT_PASS替换为RabbitMQ的密码;[database]分组中的connection为前面在mysql中为Trove创建的数据库信息;Trove的用户信息中TROVE_PASS替换为实际trove用户的密码。

    +
    +

    编辑/etc/trove/trove-guestagent.conf。 +

    [DEFAULT]
    +log_file = trove-guestagent.log
    +log_dir = /var/log/trove/
    +ignore_users = os_admin
    +control_exchange = trove
    +transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
    +rpc_backend = rabbit
    +command_process_timeout = 60
    +use_syslog = False
    +debug = True
    +
    +[service_credentials]
    +auth_url = http://controller:5000/v3/
    +region_name = RegionOne
    +project_name = service
    +password = TROVE_PASS
    +project_domain_name = Default
    +user_domain_name = Default
    +username = trove
    +
    +[mysql]
    +docker_image = your-registry/your-repo/mysql
    +backup_docker_image = your-registry/your-repo/db-backup-mysql:1.1.0

    +

    解释:

    +
    +

    guestagent是trove中的一个独立组件,需要预先内置到Trove通过Nova创建的虚拟机镜像中。创建好数据库实例后,会拉起guestagent进程,负责通过消息队列(RabbitMQ)向Trove上报心跳,因此需要配置RabbitMQ的用户和密码信息:transport_url为RabbitMQ连接信息,RABBIT_PASS替换为RabbitMQ的密码;Trove的用户信息中TROVE_PASS替换为实际trove用户的密码。从Victoria版开始,Trove使用一个统一的镜像来运行不同类型的数据库,数据库服务运行在Guest虚拟机的Docker容器中。

    +
    +
  8. +
  9. +

    数据库同步。 +

    su -s /bin/sh -c "trove-manage db_sync" trove

    +
  10. +
  11. +

    完成安装。 +

    # 配置服务自启
    +systemctl enable openstack-trove-api.service openstack-trove-taskmanager.service \ 
    +openstack-trove-conductor.service
    +
    +# 启动服务
    +systemctl start openstack-trove-api.service openstack-trove-taskmanager.service \ 
    +openstack-trove-conductor.service

    +
  12. +
+
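
服务启动后,一般还需要向Trove注册datastore和对应的datastore version,才能创建数据库实例。下面给出一个假设性的参考示例(其中GLANCE_IMAGE_ID为预先上传的Guest镜像ID,版本号仅为示意),具体以Trove官方文档为准:

# 注册mysql datastore
su -s /bin/sh -c "trove-manage datastore_update mysql ''" trove
# 注册datastore version,GLANCE_IMAGE_ID替换为实际的Guest镜像ID
su -s /bin/sh -c "trove-manage datastore_version_update mysql 5.7.29 mysql GLANCE_IMAGE_ID '' 1" trove
# 将该版本设置为mysql datastore的默认版本
su -s /bin/sh -c "trove-manage datastore_update mysql 5.7.29" trove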

Swift

+

Swift 提供了弹性可伸缩、高可用的分布式对象存储服务,适合存储大规模非结构化数据。

+

Controller节点

+
    +
  1. +

    创建服务凭证以及API端点。

    +

    创建服务凭证。 +

    # 创建swift用户
    +openstack user create --domain default --password-prompt swift
    +# 添加admin角色
    +openstack role add --project service --user swift admin
    +# 创建对象存储服务
    +openstack service create --name swift --description "OpenStack Object Storage" object-store

    +

    创建API端点。 +

    openstack endpoint create --region RegionOne object-store public http://controller:8080/v1/AUTH_%\(project_id\)s
    +openstack endpoint create --region RegionOne object-store internal http://controller:8080/v1/AUTH_%\(project_id\)s
    +openstack endpoint create --region RegionOne object-store admin http://controller:8080/v1 

    +
  2. +
  3. +

    安装Swift。 +

    dnf install openstack-swift-proxy python3-swiftclient python3-keystoneclient \ 
    +python3-keystonemiddleware memcached

    +
  4. +
  5. +

    配置proxy-server。

    +

    Swift RPM包里已经包含了一个基本可用的proxy-server.conf,只需要手动修改其中的ip和SWIFT_PASS即可。 +

    vim /etc/swift/proxy-server.conf
    +
    +[filter:authtoken]
    +paste.filter_factory = keystonemiddleware.auth_token:filter_factory
    +www_authenticate_uri = http://controller:5000
    +auth_url = http://controller:5000
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_id = default
    +user_domain_id = default
    +project_name = service
    +username = swift
    +password = SWIFT_PASS
    +delay_auth_decision = True
    +service_token_roles_required = True

    +
  6. +
+

Storage节点

+
    +
  1. +

    安装支持的程序包。 +

    dnf install openstack-swift-account openstack-swift-container openstack-swift-object
    +dnf install xfsprogs rsync

    +
  2. +
  3. +

    将设备/dev/sdb和/dev/sdc格式化为XFS。 +

    mkfs.xfs /dev/sdb
    +mkfs.xfs /dev/sdc

    +
  4. +
  5. +

    创建挂载点目录结构。 +

    mkdir -p /srv/node/sdb
    +mkdir -p /srv/node/sdc

    +
  6. +
  7. +

    找到新分区的UUID。 +

    blkid

    +
  8. +
  9. +

    编辑/etc/fstab文件并将以下内容添加到其中。 +

    UUID="<UUID-from-output-above>" /srv/node/sdb xfs noatime 0 2
    +UUID="<UUID-from-output-above>" /srv/node/sdc xfs noatime 0 2

    +
  10. +
  11. +

    挂载设备。 +

    mount /srv/node/sdb
    +mount /srv/node/sdc

    +

    注意

    +

    如果用户不需要容灾功能,以上步骤只需要创建一个设备即可,同时可以跳过下面的rsync配置。

    +
  12. +
  13. +

    (可选)创建或编辑/etc/rsyncd.conf文件以包含以下内容: +

    uid = swift
    +gid = swift
    +log file = /var/log/rsyncd.log
    +pid file = /var/run/rsyncd.pid
    +address = MANAGEMENT_INTERFACE_IP_ADDRESS
    +
    +[account]
    +max connections = 2
    +path = /srv/node/
    +read only = False
    +lock file = /var/lock/account.lock
    +
    +[container]
    +max connections = 2
    +path = /srv/node/
    +read only = False
    +lock file = /var/lock/container.lock
    +
    +[object]
    +max connections = 2
    +path = /srv/node/
    +read only = False
    +lock file = /var/lock/object.lock

    +

    替换MANAGEMENT_INTERFACE_IP_ADDRESS为存储节点上管理网络的IP地址

    +

    启动rsyncd服务并配置它在系统启动时启动: +

    systemctl enable rsyncd.service
    +systemctl start rsyncd.service

    +
  14. +
  15. +

    配置存储节点。

    +

    编辑/etc/swift目录的account-server.conf、container-server.conf和object-server.conf文件,替换bind_ip为存储节点上管理网络的IP地址。 +

    [DEFAULT]
    +bind_ip = 192.168.0.4

    +

    确保挂载点目录结构的正确所有权。 +

    chown -R swift:swift /srv/node

    +

    创建recon目录并确保其拥有正确的所有权。 +

    mkdir -p /var/cache/swift
    +chown -R root:swift /var/cache/swift
    +chmod -R 775 /var/cache/swift

    +
  16. +
+

Controller节点创建并分发环

+
    +
  1. +

    创建账号环。

    +

    切换到/etc/swift目录。 +

    cd /etc/swift

    +

    创建基础account.builder文件。 +

    swift-ring-builder account.builder create 10 1 1

    +

    将每个存储节点添加到环中。 +

    swift-ring-builder account.builder add --region 1 --zone 1 \
    +--ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS \ 
    +--port 6202  --device DEVICE_NAME \ 
    +--weight 100

    +
    +

    替换STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS为存储节点上管理网络的IP地址。\ +替换DEVICE_NAME为同一存储节点上的存储设备名称。

    +
    +

    注意

    +

    对每个存储节点上的每个存储设备重复此命令

    +

    验证账号环内容。 +

    swift-ring-builder account.builder

    +

    重新平衡账号环。 +

    swift-ring-builder account.builder rebalance

    +
  2. +
  3. +

    创建容器环。

    +

    切换到/etc/swift目录。

    +

    创建基础container.builder文件。 +

    swift-ring-builder container.builder create 10 1 1

    +

    将每个存储节点添加到环中。 +

    swift-ring-builder container.builder add --region 1 --zone 1 \
    +--ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS \
    +--port 6201 --device DEVICE_NAME \
    +--weight 100

    +
    +

    替换STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS为存储节点上管理网络的IP地址。\ +替换DEVICE_NAME为同一存储节点上的存储设备名称。

    +
    +

    注意

    +

    对每个存储节点上的每个存储设备重复此命令

    +

    验证容器环内容。 +

    swift-ring-builder container.builder

    +

    重新平衡容器环。 +

    swift-ring-builder container.builder rebalance

    +
  4. +
  5. +

    创建对象环。

    +

    切换到/etc/swift目录。

    +

    创建基础object.builder文件。 +

    swift-ring-builder object.builder create 10 1 1

    +

    将每个存储节点添加到环中。 +

     swift-ring-builder object.builder add --region 1 --zone 1 \
    + --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS \
    + --port 6200 --device DEVICE_NAME \
    + --weight 100

    +
    +

    替换STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS为存储节点上管理网络的IP地址。\ +替换DEVICE_NAME为同一存储节点上的存储设备名称。

    +
    +

    注意

    +

    对每个存储节点上的每个存储设备重复此命令

    +

    验证对象环内容。 +

    swift-ring-builder object.builder

    +

    重新平衡对象环。 +

    swift-ring-builder object.builder rebalance

    +
  6. +
  7. +

    分发环配置文件。

    +

    将account.ring.gz、container.ring.gz以及object.ring.gz文件复制到每个存储节点和运行代理服务的任何其他节点上的/etc/swift目录。

    +
  8. +
  9. +

    编辑配置文件/etc/swift/swift.conf。 +

    [swift-hash]
    +swift_hash_path_suffix = test-hash
    +swift_hash_path_prefix = test-hash
    +
    +[storage-policy:0]
    +name = Policy-0
    +default = yes

    +

    用唯一值替换 test-hash

    +

    将swift.conf文件复制到每个存储节点和运行代理服务的任何其他节点上的/etc/swift目录。

    +

    在所有节点上,确保配置目录的正确所有权。 +

    chown -R root:swift /etc/swift

    +
  10. +
  11. +

    完成安装

    +
  12. +
+

在控制节点和运行代理服务的任何其他节点上,启动对象存储代理服务及其依赖项,并将它们配置为在系统启动时启动。 +

systemctl enable openstack-swift-proxy.service memcached.service
+systemctl start openstack-swift-proxy.service memcached.service

+

在存储节点上,启动对象存储服务并将它们配置为在系统启动时启动。 +

systemctl enable openstack-swift-account.service \
+openstack-swift-account-auditor.service \
+openstack-swift-account-reaper.service \
+openstack-swift-account-replicator.service \
+openstack-swift-container.service \
+openstack-swift-container-auditor.service \
+openstack-swift-container-replicator.service \
+openstack-swift-container-updater.service \
+openstack-swift-object.service \
+openstack-swift-object-auditor.service \
+openstack-swift-object-replicator.service \
+openstack-swift-object-updater.service
+
+systemctl start openstack-swift-account.service \
+openstack-swift-account-auditor.service \
+openstack-swift-account-reaper.service \
+openstack-swift-account-replicator.service \
+openstack-swift-container.service \
+openstack-swift-container-auditor.service \
+openstack-swift-container-replicator.service \
+openstack-swift-container-updater.service \
+openstack-swift-object.service \
+openstack-swift-object-auditor.service \
+openstack-swift-object-replicator.service \
+openstack-swift-object-updater.service

+
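
全部服务启动后,可以在controller上用如下命令做一个简单的对象存储功能验证(容器名test-container为示例假设):

source ~/.admin-openrc
# 查看账户状态
openstack object store account show
# 创建并列出容器
openstack container create test-container
openstack container list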

Cyborg

+

Cyborg为OpenStack提供加速器设备的支持,包括 GPU, FPGA, ASIC, NP, SoCs, NVMe/NOF SSDs, ODP, DPDK/SPDK等等。

+

Controller节点

+
    +
  1. +

    初始化对应数据库

    +
    mysql -u root -p
    +
    +MariaDB [(none)]> CREATE DATABASE cyborg;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'localhost' IDENTIFIED BY 'CYBORG_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'%' IDENTIFIED BY 'CYBORG_DBPASS';
    +MariaDB [(none)]> exit;
    +
  2. +
  3. +

    创建用户和服务,并记住创建cyborg用户时输入的密码,用于配置CYBORG_PASS

    +
    source ~/.admin-openrc
    +openstack user create --domain default --password-prompt cyborg
    +openstack role add --project service --user cyborg admin
    +openstack service create --name cyborg --description "Acceleration Service" accelerator
    +
  4. +
  5. +

    使用uwsgi部署Cyborg api服务

    +
    openstack endpoint create --region RegionOne accelerator public http://controller/accelerator/v2
    +openstack endpoint create --region RegionOne accelerator internal http://controller/accelerator/v2
    +openstack endpoint create --region RegionOne accelerator admin http://controller/accelerator/v2
    +
  6. +
  7. +

    安装Cyborg

    +
    dnf install openstack-cyborg
    +
  8. +
  9. +

    配置Cyborg

    +

    修改/etc/cyborg/cyborg.conf

    +
    [DEFAULT]
    +transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
    +use_syslog = False
    +state_path = /var/lib/cyborg
    +debug = True
    +
    +[api]
    +host_ip = 0.0.0.0
    +
    +[database]
    +connection = mysql+pymysql://cyborg:CYBORG_DBPASS@controller/cyborg
    +
    +[service_catalog]
    +cafile = /opt/stack/data/ca-bundle.pem
    +project_domain_id = default
    +user_domain_id = default
    +project_name = service
    +password = CYBORG_PASS
    +username = cyborg
    +auth_url = http://controller:5000/v3/
    +auth_type = password
    +
    +[placement]
    +project_domain_name = Default
    +project_name = service
    +user_domain_name = Default
    +username = placement
    +password = PLACEMENT_PASS
    +auth_url = http://controller:5000/v3/
    +auth_type = password
    +auth_section = keystone_authtoken
    +
    +[nova]
    +project_domain_name = Default
    +project_name = service
    +user_domain_name = Default
    +password = NOVA_PASS
    +username = nova
    +auth_url = http://controller:5000/v3/
    +auth_type = password
    +auth_section = keystone_authtoken
    +
    +[keystone_authtoken]
    +memcached_servers = localhost:11211
    +signing_dir = /var/cache/cyborg/api
    +cafile = /opt/stack/data/ca-bundle.pem
    +project_domain_name = Default
    +project_name = service
    +user_domain_name = Default
    +password = CYBORG_PASS
    +username = cyborg
    +auth_url = http://controller:5000/v3/
    +auth_type = password
    +
  10. +
  11. +

    同步数据库表格

    +
    cyborg-dbsync --config-file /etc/cyborg/cyborg.conf upgrade
    +
  12. +
  13. +

    启动Cyborg服务

    +
    systemctl enable openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent
    +systemctl start openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent
    +
  14. +
+
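
服务启动后,可以通过cyborg的OpenStack CLI插件确认API是否可用(若环境中没有加速器设备,列表为空属正常现象):

source ~/.admin-openrc
openstack accelerator device list
openstack accelerator device profile list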

Aodh

+

Aodh可以根据由Ceilometer或者Gnocchi收集的监控数据创建告警,并设置触发规则。

+

Controller节点

+
    +
  1. +

    创建数据库。

    +
    CREATE DATABASE aodh;
    +GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'localhost' IDENTIFIED BY 'AODH_DBPASS';
    +GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'%' IDENTIFIED BY 'AODH_DBPASS';
    +
  2. +
  3. +

    创建服务凭证以及API端点。

    +

    创建服务凭证。 +

    openstack user create --domain default --password-prompt aodh
    +openstack role add --project service --user aodh admin
    +openstack service create --name aodh --description "Telemetry" alarming

    +

    创建API端点。 +

    openstack endpoint create --region RegionOne alarming public http://controller:8042
    +openstack endpoint create --region RegionOne alarming internal http://controller:8042
    +openstack endpoint create --region RegionOne alarming admin http://controller:8042

    +
  4. +
  5. +

    安装Aodh。 +

    dnf install openstack-aodh-api openstack-aodh-evaluator \
    +openstack-aodh-notifier openstack-aodh-listener \
    +openstack-aodh-expirer python3-aodhclient

    +
  6. +
  7. +

    修改配置文件。 +

    vim /etc/aodh/aodh.conf
    +
    +[database]
    +connection = mysql+pymysql://aodh:AODH_DBPASS@controller/aodh
    +
    +[DEFAULT]
    +transport_url = rabbit://openstack:RABBIT_PASS@controller
    +auth_strategy = keystone
    +
    +[keystone_authtoken]
    +www_authenticate_uri = http://controller:5000
    +auth_url = http://controller:5000
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_id = default
    +user_domain_id = default
    +project_name = service
    +username = aodh
    +password = AODH_PASS
    +
    +[service_credentials]
    +auth_type = password
    +auth_url = http://controller:5000/v3
    +project_domain_id = default
    +user_domain_id = default
    +project_name = service
    +username = aodh
    +password = AODH_PASS
    +interface = internalURL
    +region_name = RegionOne

    +
  8. +
  9. +

    同步数据库。 +

    aodh-dbsync

    +
  10. +
  11. +

    完成安装。 +

    # 配置服务自启
    +systemctl enable openstack-aodh-api.service openstack-aodh-evaluator.service \
    +openstack-aodh-notifier.service openstack-aodh-listener.service
    +
    +# 启动服务
    +systemctl start openstack-aodh-api.service openstack-aodh-evaluator.service \
    +openstack-aodh-notifier.service openstack-aodh-listener.service

    +
  12. +
+
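
部署完成后,可以用aodh客户端(上文已安装python3-aodhclient)简单验证告警服务是否可用,新环境下告警列表为空属正常现象:

source ~/.admin-openrc
aodh alarm list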

Gnocchi

+

Gnocchi是一个开源的时间序列数据库,可以对接Ceilometer。

+

Controller节点

+
    +
  1. +

    创建数据库。 +

    CREATE DATABASE gnocchi;
    +GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'localhost' IDENTIFIED BY 'GNOCCHI_DBPASS';
    +GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'%' IDENTIFIED BY 'GNOCCHI_DBPASS';

    +
  2. +
  3. +

    创建服务凭证以及API端点。

    +

    创建服务凭证。 +

    openstack user create --domain default --password-prompt gnocchi
    +openstack role add --project service --user gnocchi admin
    +openstack service create --name gnocchi --description "Metric Service" metric

    +

    创建API端点。 +

    openstack endpoint create --region RegionOne metric public http://controller:8041
    +openstack endpoint create --region RegionOne metric internal http://controller:8041
    +openstack endpoint create --region RegionOne metric admin http://controller:8041

    +
  4. +
  5. +

    安装Gnocchi。 +

    dnf install openstack-gnocchi-api openstack-gnocchi-metricd python3-gnocchiclient

    +
  6. +
  7. +

    修改配置文件。 +

    vim /etc/gnocchi/gnocchi.conf
    +[api]
    +auth_mode = keystone
    +port = 8041
    +uwsgi_mode = http-socket
    +
    +[keystone_authtoken]
    +auth_type = password
    +auth_url = http://controller:5000/v3
    +project_domain_name = Default
    +user_domain_name = Default
    +project_name = service
    +username = gnocchi
    +password = GNOCCHI_PASS
    +interface = internalURL
    +region_name = RegionOne
    +
    +[indexer]
    +url = mysql+pymysql://gnocchi:GNOCCHI_DBPASS@controller/gnocchi
    +
    +[storage]
    +# coordination_url is not required but specifying one will improve
    +# performance with better workload division across workers.
    +# coordination_url = redis://controller:6379
    +file_basepath = /var/lib/gnocchi
    +driver = file

    +
  8. +
  9. +

    同步数据库。 +

    gnocchi-upgrade

    +
  10. +
  11. +

    完成安装。 +

    # 配置服务自启
    +systemctl enable openstack-gnocchi-api.service openstack-gnocchi-metricd.service
    +
    +# 启动服务
    +systemctl start openstack-gnocchi-api.service openstack-gnocchi-metricd.service

    +
  12. +
+
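
部署完成后,可以用gnocchi客户端(上文已安装python3-gnocchiclient)确认服务状态:

source ~/.admin-openrc
# 查看metricd处理状态
gnocchi status
# 列出已有metric,新环境下为空属正常现象
gnocchi metric list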

Ceilometer

+

Ceilometer是OpenStack中负责数据收集的服务。

+

Controller节点

+
    +
  1. +

    创建服务凭证。 +

    openstack user create --domain default --password-prompt ceilometer
    +openstack role add --project service --user ceilometer admin
    +openstack service create --name ceilometer --description "Telemetry" metering

    +
  2. +
  3. +

    安装Ceilometer软件包。 +

    dnf install openstack-ceilometer-notification openstack-ceilometer-central

    +
  4. +
  5. +

    编辑配置文件/etc/ceilometer/pipeline.yaml。 +

    publishers:
    +    # set address of Gnocchi
    +    # + filter out Gnocchi-related activity meters (Swift driver)
    +    # + set default archive policy
    +    - gnocchi://?filter_project=service&archive_policy=low

    +
  6. +
  7. +

    编辑配置文件/etc/ceilometer/ceilometer.conf。 +

    [DEFAULT]
    +transport_url = rabbit://openstack:RABBIT_PASS@controller
    +
    +[service_credentials]
    +auth_type = password
    +auth_url = http://controller:5000/v3
    +project_domain_id = default
    +user_domain_id = default
    +project_name = service
    +username = ceilometer
    +password = CEILOMETER_PASS
    +interface = internalURL
    +region_name = RegionOne

    +
  8. +
  9. +

    数据库同步。 +

    ceilometer-upgrade

    +
  10. +
  11. +

    完成控制节点Ceilometer安装。 +

    # 配置服务自启
    +systemctl enable openstack-ceilometer-notification.service openstack-ceilometer-central.service
    +# 启动服务
    +systemctl start openstack-ceilometer-notification.service openstack-ceilometer-central.service

    +
  12. +
+

Compute节点

+
    +
  1. +

    安装Ceilometer软件包。 +

    dnf install openstack-ceilometer-compute
    +dnf install openstack-ceilometer-ipmi       # 可选

    +
  2. +
  3. +

    编辑配置文件/etc/ceilometer/ceilometer.conf。 +

    [DEFAULT]
    +transport_url = rabbit://openstack:RABBIT_PASS@controller
    +
    +[service_credentials]
    +auth_url = http://controller:5000
    +project_domain_id = default
    +user_domain_id = default
    +auth_type = password
    +username = ceilometer
    +project_name = service
    +password = CEILOMETER_PASS
    +interface = internalURL
    +region_name = RegionOne

    +
  4. +
  5. +

    编辑配置文件/etc/nova/nova.conf。 +

    [DEFAULT]
    +instance_usage_audit = True
    +instance_usage_audit_period = hour
    +
    +[notifications]
    +notify_on_state_change = vm_and_task_state
    +
    +[oslo_messaging_notifications]
    +driver = messagingv2

    +
  6. +
  7. +

    完成安装。 +

    systemctl enable openstack-ceilometer-compute.service
    +systemctl start openstack-ceilometer-compute.service
    +systemctl enable openstack-ceilometer-ipmi.service         # 可选
    +systemctl start openstack-ceilometer-ipmi.service          # 可选
    +
    +# 重启nova-compute服务
    +systemctl restart openstack-nova-compute.service

    +
  8. +
+
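
如果已经按上文部署了Gnocchi,可以在controller上确认Ceilometer采集的数据是否入库(虚拟机创建并运行一段时间后才会出现instance资源与相应指标):

source ~/.admin-openrc
gnocchi resource list --type instance
gnocchi metric list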

Heat

+

Heat是 OpenStack 自动编排服务,基于描述性的模板来编排复合云应用,也称为Orchestration Service。Heat 的各服务一般安装在Controller节点上。

+

Controller节点

+
    +
  1. +

    创建heat数据库,并授予heat数据库正确的访问权限,替换HEAT_DBPASS为合适的密码

    +
    mysql -u root -p
    +
    +MariaDB [(none)]> CREATE DATABASE heat;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'localhost' IDENTIFIED BY 'HEAT_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'%' IDENTIFIED BY 'HEAT_DBPASS';
    +MariaDB [(none)]> exit;
    +
  2. +
  3. +

    创建服务凭证,创建heat用户,并为其增加admin角色

    +
    source ~/.admin-openrc
    +
    +openstack user create --domain default --password-prompt heat
    +openstack role add --project service --user heat admin
    +
  4. +
  5. +

    创建heatheat-cfn服务及其对应的API端点

    +
    openstack service create --name heat --description "Orchestration" orchestration
    +openstack service create --name heat-cfn --description "Orchestration"  cloudformation
    +openstack endpoint create --region RegionOne orchestration public http://controller:8004/v1/%\(tenant_id\)s
    +openstack endpoint create --region RegionOne orchestration internal http://controller:8004/v1/%\(tenant_id\)s
    +openstack endpoint create --region RegionOne orchestration admin http://controller:8004/v1/%\(tenant_id\)s
    +openstack endpoint create --region RegionOne cloudformation public http://controller:8000/v1
    +openstack endpoint create --region RegionOne cloudformation internal http://controller:8000/v1
    +openstack endpoint create --region RegionOne cloudformation admin http://controller:8000/v1
    +
  6. +
  7. +

    创建stack管理的额外信息

    +

    创建 heat domain +

    openstack domain create --description "Stack projects and users" heat
    +# 在 heat domain 下创建 heat_domain_admin 用户,并记下输入的密码,用于配置下面的 HEAT_DOMAIN_PASS
    +openstack user create --domain heat --password-prompt heat_domain_admin
    +# 为 heat_domain_admin 用户增加 admin 角色
    +openstack role add --domain heat --user-domain heat --user heat_domain_admin admin
    +# 创建 heat_stack_owner 角色
    +openstack role create heat_stack_owner
    +# 创建 heat_stack_user 角色
    +openstack role create heat_stack_user

    +
  8. +
  9. +

    安装软件包

    +
    dnf install openstack-heat-api openstack-heat-api-cfn openstack-heat-engine
    +
  10. +
  11. +

    修改配置文件/etc/heat/heat.conf

    +
    [DEFAULT]
    +transport_url = rabbit://openstack:RABBIT_PASS@controller
    +heat_metadata_server_url = http://controller:8000
    +heat_waitcondition_server_url = http://controller:8000/v1/waitcondition
    +stack_domain_admin = heat_domain_admin
    +stack_domain_admin_password = HEAT_DOMAIN_PASS
    +stack_user_domain_name = heat
    +
    +[database]
    +connection = mysql+pymysql://heat:HEAT_DBPASS@controller/heat
    +
    +[keystone_authtoken]
    +www_authenticate_uri = http://controller:5000
    +auth_url = http://controller:5000
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = default
    +user_domain_name = default
    +project_name = service
    +username = heat
    +password = HEAT_PASS
    +
    +[trustee]
    +auth_type = password
    +auth_url = http://controller:5000
    +username = heat
    +password = HEAT_PASS
    +user_domain_name = default
    +
    +[clients_keystone]
    +auth_uri = http://controller:5000
    +
  12. +
  13. +

    初始化heat数据库表

    +
    su -s /bin/sh -c "heat-manage db_sync" heat
    +
  14. +
  15. +

    启动服务

    +
    systemctl enable openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service
    +systemctl start openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service
    +
  16. +
+
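
服务启动后,可以用一个最小的HOT模板验证Heat编排功能。下面的模板仅创建一个Neutron网络,模板文件名与资源名均为示例假设:

cat > test-stack.yaml << EOF
heat_template_version: 2018-08-31
resources:
  test_net:
    type: OS::Neutron::Net
    properties:
      name: heat-test-net
EOF

source ~/.admin-openrc
openstack stack create -t test-stack.yaml test-stack
openstack stack list
# 验证完成后删除测试stack
openstack stack delete test-stack --yes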

Tempest

+

Tempest是OpenStack的集成测试服务,如果用户需要全面自动化测试已安装的OpenStack环境的功能,则推荐使用该组件。否则,可以不用安装。

+

Controller节点

+
    +
  1. +

    安装Tempest

    +
    dnf install openstack-tempest
    +
  2. +
  3. +

    初始化目录

    +
    tempest init mytest
    +
  4. +
  5. +

    修改配置文件。

    +
    cd mytest
    +vi etc/tempest.conf
    +

    tempest.conf中需要配置当前OpenStack环境的信息,具体内容可以参考官方示例

    +
  6. +
  7. +

    执行测试

    +
    tempest run
    +
  8. +
  9. +

    安装tempest扩展(可选) + OpenStack各个服务本身也提供了一些tempest测试包,用户可以安装这些包来丰富tempest的测试内容。在Antelope中,我们提供了Cinder、Glance、Keystone、Ironic、Trove的扩展测试,用户可以执行如下命令进行安装使用: +

    dnf install python3-cinder-tempest-plugin python3-glance-tempest-plugin python3-ironic-tempest-plugin python3-keystone-tempest-plugin python3-trove-tempest-plugin

    +
  10. +
+
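
完整的tempest run耗时较长,调试阶段可以先用正则只运行部分用例,或者并发执行,例如:

cd mytest
# 只运行keystone相关的API用例
tempest run --regex tempest.api.identity
# 以4个并发worker运行
tempest run --concurrency 4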

基于OpenStack SIG开发工具oos部署

+

oos(openEuler OpenStack SIG)是OpenStack SIG提供的命令行工具。其中oos env系列命令提供了一键部署OpenStack (all in one或三节点cluster)的ansible脚本,用户可以使用该脚本快速部署一套基于 openEuler RPM 的 OpenStack 环境。oos工具支持对接云provider(目前仅支持华为云provider)和主机纳管两种方式来部署 OpenStack 环境,下面以对接华为云部署一套all in one的OpenStack环境为例说明oos工具的使用方法。

+
    +
  1. +

    安装oos工具

    +
    yum install openstack-sig-tool
    +
  2. +
  3. +

    配置对接华为云provider的信息

    +

    打开/usr/local/etc/oos/oos.conf文件,修改配置为您拥有的华为云资源信息,AK/SK是用户的华为云登录密钥,其他配置保持默认即可(默认使用新加坡region),需要提前在云上创建对应的资源,包括:

    +
      +
    • 一个安全组,名字默认是oos
    • +
    • 一个openEuler镜像,名称格式是openEuler-%(release)s-%(arch)s,例如openEuler-24.03-sp1-arm64
    • +
    • 一个VPC,名称是oos_vpc
    • +
    • 该VPC下面两个子网,名称是oos_subnet1oos_subnet2
    • +
    +
    [huaweicloud]
    +ak = 
    +sk = 
    +region = ap-southeast-3
    +root_volume_size = 100
    +data_volume_size = 100
    +security_group_name = oos
    +image_format = openEuler-%%(release)s-%%(arch)s
    +vpc_name = oos_vpc
    +subnet1_name = oos_subnet1
    +subnet2_name = oos_subnet2
    +
  4. +
  5. +

    配置 OpenStack 环境信息

    +

    打开/usr/local/etc/oos/oos.conf文件,根据当前机器环境和需求修改配置。内容如下:

    +
    [environment]
    +mysql_root_password = root
    +mysql_project_password = root
    +rabbitmq_password = root
    +project_identity_password = root
    +enabled_service = keystone,neutron,cinder,placement,nova,glance,horizon,aodh,ceilometer,cyborg,gnocchi,kolla,heat,swift,trove,tempest
    +neutron_provider_interface_name = br-ex
    +default_ext_subnet_range = 10.100.100.0/24
    +default_ext_subnet_gateway = 10.100.100.1
    +neutron_dataplane_interface_name = eth1
    +cinder_block_device = vdb
    +swift_storage_devices = vdc
    +swift_hash_path_suffix = ash
    +swift_hash_path_prefix = has
    +glance_api_workers = 2
    +cinder_api_workers = 2
    +nova_api_workers = 2
    +nova_metadata_api_workers = 2
    +nova_conductor_workers = 2
    +nova_scheduler_workers = 2
    +neutron_api_workers = 2
    +horizon_allowed_host = *
    +kolla_openeuler_plugin = false
    +

    关键配置

    配置项说明:
    enabled_service:安装服务列表,根据用户需求自行删减
    neutron_provider_interface_name:neutron L3网桥名称
    default_ext_subnet_range:neutron私网IP段
    default_ext_subnet_gateway:neutron私网gateway
    neutron_dataplane_interface_name:neutron使用的网卡,推荐使用一张新的网卡,以免和现有网卡冲突,防止all in one主机断连的情况
    cinder_block_device:cinder使用的卷设备名
    swift_storage_devices:swift使用的卷设备名
    kolla_openeuler_plugin:是否启用kolla plugin。设置为True时,kolla将支持部署openEuler容器(只在openEuler LTS上支持)
    +
  6. +
  7. +

    在华为云上创建一台openEuler 24.03 LTS SP1的x86_64虚拟机,用于部署all in one 的 OpenStack

    +
    # sshpass在`oos env create`过程中被使用,用于配置对目标虚拟机的免密访问
    +dnf install sshpass
    +oos env create -r 24.03-lts-sp1 -f small -a x86 -n test-oos all_in_one
    +

    具体的参数可以使用oos env create --help命令查看

    +
  8. +
  9. +

    部署OpenStack all in one 环境

    +
    oos env setup test-oos -r antelope
    +

    具体的参数可以使用oos env setup --help命令查看

    +
  10. +
  11. +

    初始化tempest环境

    +

    如果用户想使用该环境运行tempest测试的话,可以执行命令oos env init,会自动把tempest需要的OpenStack资源自动创建好

    +
    oos env init test-oos
    +
  12. +
  13. +

    执行tempest测试

    +

    用户可以使用oos自动执行:

    +
    oos env test test-oos
    +

    也可以手动登录目标节点,进入根目录下的mytest目录,手动执行tempest run

    +
  14. +
+

如果是以主机纳管的方式部署 OpenStack 环境,总体逻辑与上文对接华为云时一致,1、3、5、6步操作不变,跳过第2步对华为云provider信息的配置,在第4步改为纳管主机操作。

+

被纳管的虚机需要保证:

+
    +
  • 至少有一张给oos使用的网卡,名称与配置保持一致,相关配置neutron_dataplane_interface_name
  • +
  • 至少有一块给oos使用的硬盘,名称与配置保持一致,相关配置cinder_block_device
  • +
  • 如果要部署swift服务,则需要新增一块硬盘,名称与配置保持一致,相关配置swift_storage_devices
  • +
+
# sshpass在`oos env create`过程中被使用,用于配置对目标主机的免密访问
+dnf install sshpass
+oos env manage -r 24.03-lts-sp1 -i TARGET_MACHINE_IP -p TARGET_MACHINE_PASSWD -n test-oos
+

替换TARGET_MACHINE_IP为目标机ip、TARGET_MACHINE_PASSWD为目标机密码。具体的参数可以使用oos env manage --help命令查看。

diff --git a/site/install/openEuler-24.03-LTS-SP1/OpenStack-wallaby/index.html b/site/install/openEuler-24.03-LTS-SP1/OpenStack-wallaby/index.html
new file mode 100644
index 0000000000000000000000000000000000000000..88e843c0040d6021f33dcce0d8be2a6bc4d741f8
--- /dev/null
+++ b/site/install/openEuler-24.03-LTS-SP1/OpenStack-wallaby/index.html
@@ -0,0 +1,2673 @@

openEuler-24.03-LTS-SP1_Wallaby - OpenStack SIG Doc

OpenStack-Wallaby 部署指南

+ +

OpenStack 简介

+

OpenStack 是一个社区,也是一个项目。它提供了一个部署云的操作平台或工具集,为组织提供可扩展的、灵活的云计算。

+

作为一个开源的云计算管理平台,OpenStack 由nova、cinder、neutron、glance、keystone、horizon等几个主要的组件组合起来完成具体工作。OpenStack 支持几乎所有类型的云环境,项目目标是提供实施简单、可大规模扩展、丰富、标准统一的云计算管理平台。OpenStack 通过各种互补的服务提供了基础设施即服务(IaaS)的解决方案,每个服务提供 API 进行集成。

+

openEuler 24.03-LTS-SP1 版本官方源已经支持 OpenStack-Wallaby 版本,用户可以配置好 yum 源后根据此文档进行 OpenStack 部署。

+

约定

+

OpenStack 支持多种形态部署,此文档支持ALL in One以及Distributed两种部署方式,按照如下方式约定:

+

ALL in One模式:

+
忽略所有可能的后缀
+

Distributed模式:

+
以 `(CTL)` 为后缀表示此条配置或者命令仅适用`控制节点`
+以 `(CPT)` 为后缀表示此条配置或者命令仅适用`计算节点`
+以 `(STG)` 为后缀表示此条配置或者命令仅适用`存储节点`
+除此之外表示此条配置或者命令同时适用`控制节点`和`计算节点`
+

注意

+

涉及到以上约定的服务如下:

+
    +
  • Cinder
  • +
  • Nova
  • +
  • Neutron
  • +
+

准备环境

+

环境配置

+
    +
  1. +

    配置 24.03 LTS SP1 官方 yum 源,需要启用 EPOL 软件仓以支持 OpenStack

    +
    yum update
    +yum install openstack-release-wallaby
    +yum clean all && yum makecache
    +

    注意:如果你的环境的YUM源没有启用EPOL,需要同时配置EPOL,确保EPOL已配置,如下所示。

    +
    vi /etc/yum.repos.d/openEuler.repo
    +
    +[EPOL]
    +name=EPOL
    +baseurl=http://repo.openeuler.org/openEuler-24.03-LTS-SP1/EPOL/main/$basearch/
    +enabled=1
    +gpgcheck=1
    +gpgkey=http://repo.openeuler.org/openEuler-24.03-LTS-SP1/OS/$basearch/RPM-GPG-KEY-openEuler
    +
  2. +
  3. +

    修改主机名以及映射

    +

    设置各个节点的主机名

    +
    hostnamectl set-hostname controller                                                            (CTL)
    +hostnamectl set-hostname compute                                                               (CPT)
    +

    假设controller节点的IP是10.0.0.11,compute节点的IP是10.0.0.12(如果存在的话),则于/etc/hosts新增如下:

    +
    10.0.0.11   controller
    +10.0.0.12   compute
    +
  4. +
+

安装 SQL DataBase

+
    +
  1. +

    执行如下命令,安装软件包。

    +
    yum install mariadb mariadb-server python3-PyMySQL
    +
  2. +
  3. +

    执行如下命令,创建并编辑 /etc/my.cnf.d/openstack.cnf 文件。

    +
    vim /etc/my.cnf.d/openstack.cnf
    +
    +[mysqld]
    +bind-address = 10.0.0.11
    +default-storage-engine = innodb
    +innodb_file_per_table = on
    +max_connections = 4096
    +collation-server = utf8_general_ci
    +character-set-server = utf8
    +

    注意

    +

    其中 bind-address 设置为控制节点的管理IP地址。

    +
  4. +
  5. +

    启动 DataBase 服务,并为其配置开机自启动:

    +
    systemctl enable mariadb.service
    +systemctl start mariadb.service
    +
  6. +
  7. +

    配置DataBase的默认密码(可选)

    +
    mysql_secure_installation
    +

    注意

    +

    根据提示进行即可

    +
  8. +
+

安装 RabbitMQ

+
    +
  1. +

    执行如下命令,安装软件包。

    +
    yum install rabbitmq-server
    +
  2. +
  3. +

    启动 RabbitMQ 服务,并为其配置开机自启动。

    +
    systemctl enable rabbitmq-server.service
    +systemctl start rabbitmq-server.service
    +
  4. +
  5. +

    添加 OpenStack用户。

    +
    rabbitmqctl add_user openstack RABBIT_PASS
    +

    注意

    +

    替换 RABBIT_PASS,为 OpenStack 用户设置密码

    +
  6. +
  7. +

    设置openstack用户权限,允许进行配置、写、读:

    +
    rabbitmqctl set_permissions openstack ".*" ".*" ".*"
    +
  8. +
+

安装 Memcached

+
    +
  1. +

    执行如下命令,安装依赖软件包。

    +
    yum install memcached python3-memcached
    +
  2. +
  3. +

    编辑 /etc/sysconfig/memcached 文件。

    +
    vim /etc/sysconfig/memcached
    +
    +OPTIONS="-l 127.0.0.1,::1,controller"
    +
  4. +
  5. +

    执行如下命令,启动 Memcached 服务,并为其配置开机启动。

    +
    systemctl enable memcached.service
    +systemctl start memcached.service
    +

    注意

    +

    服务启动后,可以通过命令memcached-tool controller stats确保启动正常,服务可用,其中可以将controller替换为控制节点的管理IP地址。

    +
  6. +
+

安装 OpenStack

+

Keystone 安装

+
    +
  1. +

    创建 keystone 数据库并授权。

    +
    mysql -u root -p
    +
    +MariaDB [(none)]> CREATE DATABASE keystone;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
    +IDENTIFIED BY 'KEYSTONE_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
    +IDENTIFIED BY 'KEYSTONE_DBPASS';
    +MariaDB [(none)]> exit
    +

    注意

    +

    替换 KEYSTONE_DBPASS,为 Keystone 数据库设置密码

    +
  2. +
  3. +

    安装软件包。

    +
    yum install openstack-keystone httpd mod_wsgi
    +
  4. +
  5. +

    配置keystone相关配置

    +
    vim /etc/keystone/keystone.conf
    +
    +[database]
    +connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone
    +
    +[token]
    +provider = fernet
    +

    解释

    +

    [database]部分,配置数据库入口

    +

    [token]部分,配置token provider

    +

    注意:

    +

    替换 KEYSTONE_DBPASS 为 Keystone 数据库的密码

    +
  6. +
  7. +

    同步数据库。

    +
    su -s /bin/sh -c "keystone-manage db_sync" keystone
    +
  8. +
  9. +

    初始化Fernet密钥仓库。

    +
    keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
    +keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
    +
  10. +
  11. +

    启动服务。

    +
    keystone-manage bootstrap --bootstrap-password ADMIN_PASS \
    +--bootstrap-admin-url http://controller:5000/v3/ \
    +--bootstrap-internal-url http://controller:5000/v3/ \
    +--bootstrap-public-url http://controller:5000/v3/ \
    +--bootstrap-region-id RegionOne
    +

    注意

    +

    替换 ADMIN_PASS,为 admin 用户设置密码

    +
  12. +
  13. +

    配置Apache HTTP server

    +
    vim /etc/httpd/conf/httpd.conf
    +
    +ServerName controller
    +
    ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
    +

    解释

    +

    配置 ServerName 项引用控制节点

    +

    Note: if the ServerName entry does not exist, add it.

    +
  14. +
  15. +

    启动Apache HTTP服务。

    +
    systemctl enable httpd.service
    +systemctl start httpd.service
    +
  16. +
  17. +

    创建环境变量配置。

    +
    cat << EOF >> ~/.admin-openrc
    +export OS_PROJECT_DOMAIN_NAME=Default
    +export OS_USER_DOMAIN_NAME=Default
    +export OS_PROJECT_NAME=admin
    +export OS_USERNAME=admin
    +export OS_PASSWORD=ADMIN_PASS
    +export OS_AUTH_URL=http://controller:5000/v3
    +export OS_IDENTITY_API_VERSION=3
    +export OS_IMAGE_API_VERSION=2
    +EOF
    +

    注意

    +

    替换 ADMIN_PASS 为 admin 用户的密码

    +
  18. +
  19. +

    依次创建domain, projects, users, roles,需要先安装好python3-openstackclient:

    +
    yum install python3-openstackclient
    +

    导入环境变量

    +
    source ~/.admin-openrc
    +

    创建project service,其中 domain default 在 keystone-manage bootstrap 时已创建

    +
    openstack domain create --description "An Example Domain" example
    +
    openstack project create --domain default --description "Service Project" service
    +

    Create the (non-admin) project myproject, the user myuser, and the role myrole, then add the role myrole to the user myuser in the project myproject:

    +
    openstack project create --domain default --description "Demo Project" myproject
    +openstack user create --domain default --password-prompt myuser
    +openstack role create myrole
    +openstack role add --project myproject --user myuser myrole
    +
  20. +
  21. +

    验证

    +

    取消临时环境变量OS_AUTH_URL和OS_PASSWORD:

    +
    source ~/.admin-openrc
    +unset OS_AUTH_URL OS_PASSWORD
    +

    为admin用户请求token:

    +
    openstack --os-auth-url http://controller:5000/v3 \
    +--os-project-domain-name Default --os-user-domain-name Default \
    +--os-project-name admin --os-username admin token issue
    +

    为myuser用户请求token:

    +
    openstack --os-auth-url http://controller:5000/v3 \
    +--os-project-domain-name Default --os-user-domain-name Default \
    +--os-project-name myproject --os-username myuser token issue
    +
  22. +
+
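Optionally, a similar environment file can be kept for the non-admin user created above, mirroring ~/.admin-openrc; a minimal sketch, where the file name ~/.demo-openrc is arbitrary and MYUSER_PASS stands for the password chosen at openstack user create time:

```shell
cat << EOF >> ~/.demo-openrc
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=myproject
export OS_USERNAME=myuser
export OS_PASSWORD=MYUSER_PASS
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
EOF
```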

Glance 安装

+
    +
  1. +

    创建数据库、服务凭证和 API 端点

    +

    创建数据库:

    +
    mysql -u root -p
    +
    +MariaDB [(none)]> CREATE DATABASE glance;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
    +IDENTIFIED BY 'GLANCE_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
    +IDENTIFIED BY 'GLANCE_DBPASS';
    +MariaDB [(none)]> exit
    +

    注意:

    +

    替换 GLANCE_DBPASS,为 glance 数据库设置密码

    +

    创建服务凭证

    +
    source ~/.admin-openrc
    +
    +openstack user create --domain default --password-prompt glance
    +openstack role add --project service --user glance admin
    +openstack service create --name glance --description "OpenStack Image" image
    +

    创建镜像服务API端点:

    +
    openstack endpoint create --region RegionOne image public http://controller:9292
    +openstack endpoint create --region RegionOne image internal http://controller:9292
    +openstack endpoint create --region RegionOne image admin http://controller:9292
    +
  2. +
  3. +

    安装软件包

    +
    yum install openstack-glance
    +
  4. +
  5. +

    配置glance相关配置:

    +
    vim /etc/glance/glance-api.conf
    +
    +[database]
    +connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
    +
    +[keystone_authtoken]
    +www_authenticate_uri  = http://controller:5000
    +auth_url = http://controller:5000
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +project_name = service
    +username = glance
    +password = GLANCE_PASS
    +
    +[paste_deploy]
    +flavor = keystone
    +
    +[glance_store]
    +stores = file,http
    +default_store = file
    +filesystem_store_datadir = /var/lib/glance/images/
    +

    解释:

    +

    [database]部分,配置数据库入口

    +

    [keystone_authtoken] [paste_deploy]部分,配置身份认证服务入口

    +

    [glance_store]部分,配置本地文件系统存储和镜像文件的位置

    +

    注意

    +

    替换 GLANCE_DBPASS 为 glance 数据库的密码

    +

    替换 GLANCE_PASS 为 glance 用户的密码

    +
  6. +
  7. +

    同步数据库:

    +
    su -s /bin/sh -c "glance-manage db_sync" glance
    +
  8. +
  9. +

    启动服务:

    +
    systemctl enable openstack-glance-api.service
    +systemctl start openstack-glance-api.service
    +
  10. +
  11. +

    验证

    +

    下载镜像

    +
    source ~/.admin-openrc
    +
    +wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
    +

    注意

    +

    如果您使用的环境是鲲鹏架构,请下载aarch64版本的镜像;已对镜像cirros-0.5.2-aarch64-disk.img进行测试。

    +

    向Image服务上传镜像:

    +
    openstack image create --disk-format qcow2 --container-format bare \
    +                       --file cirros-0.4.0-x86_64-disk.img --public cirros
    +

    确认镜像上传并验证属性:

    +
    openstack image list
    +
  12. +
+
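Beyond openstack image list, the image's status and checksum can be inspected directly; the name cirros comes from the upload step above:

```shell
# The status field should be "active" once the upload has completed
openstack image show cirros
```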

Placement安装

+
    +
  1. +

    创建数据库、服务凭证和 API 端点

    +

    创建数据库:

    +

    作为 root 用户访问数据库,创建 placement 数据库并授权。

    +
    mysql -u root -p
    +MariaDB [(none)]> CREATE DATABASE placement;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' \
    +IDENTIFIED BY 'PLACEMENT_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' \
    +IDENTIFIED BY 'PLACEMENT_DBPASS';
    +MariaDB [(none)]> exit
    +

    注意

    +

    替换 PLACEMENT_DBPASS 为 placement 数据库设置密码

    +
    source ~/.admin-openrc
    +

    执行如下命令,创建 placement 服务凭证、创建 placement 用户以及添加‘admin’角色到用户‘placement’。

    +

    创建Placement API服务

    +
    openstack user create --domain default --password-prompt placement
    +openstack role add --project service --user placement admin
    +openstack service create --name placement --description "Placement API" placement
    +

    创建placement服务API端点:

    +
    openstack endpoint create --region RegionOne placement public http://controller:8778
    +openstack endpoint create --region RegionOne placement internal http://controller:8778
    +openstack endpoint create --region RegionOne placement admin http://controller:8778
    +
  2. +
  3. +

    安装和配置

    +

    安装软件包:

    +
    yum install openstack-placement-api
    +

    配置placement:

    +

    编辑 /etc/placement/placement.conf 文件:

    +

    在[placement_database]部分,配置数据库入口

    +

    在[api] [keystone_authtoken]部分,配置身份认证服务入口

    +
    # vim /etc/placement/placement.conf
    +[placement_database]
    +# ...
    +connection = mysql+pymysql://placement:PLACEMENT_DBPASS@controller/placement
    +[api]
    +# ...
    +auth_strategy = keystone
    +[keystone_authtoken]
    +# ...
    +auth_url = http://controller:5000/v3
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +project_name = service
    +username = placement
    +password = PLACEMENT_PASS
    +

    其中,替换 PLACEMENT_DBPASS 为 placement 数据库的密码,替换 PLACEMENT_PASS 为 placement 用户的密码。

    +

    同步数据库:

    +
    su -s /bin/sh -c "placement-manage db sync" placement
    +

    启动httpd服务:

    +
    systemctl restart httpd
    +
  4. +
  5. +

    验证

    +

    执行如下命令,执行状态检查:

    +
    source ~/.admin-openrc
    +placement-status upgrade check
    +

    安装osc-placement,列出可用的资源类别及特性:

    +
    yum install python3-osc-placement
    +openstack --os-placement-api-version 1.2 resource class list --sort-column name
    +openstack --os-placement-api-version 1.6 trait list --sort-column name
    +
  6. +
+
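As an additional optional check, the Placement endpoint created above should answer plain HTTP requests with a small JSON version document:

```shell
# A 200 response with a "versions" document shows the WSGI app is wired into httpd
curl http://controller:8778
```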

Nova 安装

+
    +
  1. +

    创建数据库、服务凭证和 API 端点

    +

    创建数据库:

    +
    mysql -u root -p                                                                               (CTL)
    +
    +MariaDB [(none)]> CREATE DATABASE nova_api;
    +MariaDB [(none)]> CREATE DATABASE nova;
    +MariaDB [(none)]> CREATE DATABASE nova_cell0;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> exit
    +

    注意

    +

    替换NOVA_DBPASS,为nova数据库设置密码

    +
    source ~/.admin-openrc                                                                         (CTL)
    +

    创建nova服务凭证:

    +
    openstack user create --domain default --password-prompt nova                                  (CTL)
    +openstack role add --project service --user nova admin                                         (CTL)
    +openstack service create --name nova --description "OpenStack Compute" compute                 (CTL)
    +

    创建nova API端点:

    +
    openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1        (CTL)
    +openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1      (CTL)
    +openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1         (CTL)
    +
  2. +
  3. +

    安装软件包

    +
    yum install openstack-nova-api openstack-nova-conductor \                                      (CTL)
    +openstack-nova-novncproxy openstack-nova-scheduler 
    +
    +yum install openstack-nova-compute                                                             (CPT)
    +

    注意

    +

    如果为arm64结构,还需要执行以下命令

    +
    yum install edk2-aarch64                                                                       (CPT)
    +
  4. +
  5. +

    配置nova相关配置

    +
    vim /etc/nova/nova.conf
    +
    +[DEFAULT]
    +enabled_apis = osapi_compute,metadata
    +transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
    +my_ip = 10.0.0.1
    +use_neutron = true
    +firewall_driver = nova.virt.firewall.NoopFirewallDriver
    +compute_driver=libvirt.LibvirtDriver                                                           (CPT)
    +instances_path = /var/lib/nova/instances/                                                      (CPT)
    +lock_path = /var/lib/nova/tmp                                                                  (CPT)
    +
    +[api_database]
    +connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api                              (CTL)
    +
    +[database]
    +connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova                                  (CTL)
    +
    +[api]
    +auth_strategy = keystone
    +
    +[keystone_authtoken]
    +www_authenticate_uri = http://controller:5000/
    +auth_url = http://controller:5000/
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +project_name = service
    +username = nova
    +password = NOVA_PASS
    +
    +[vnc]
    +enabled = true
    +server_listen = $my_ip
    +server_proxyclient_address = $my_ip
    +novncproxy_base_url = http://controller:6080/vnc_auto.html                                     (CPT)
    +
    +[libvirt]
    +virt_type = qemu                                                                               (CPT)
    +cpu_mode = custom                                                                              (CPT)
    +cpu_model = cortex-a72                                                                         (CPT)
    +
    +[glance]
    +api_servers = http://controller:9292
    +
    +[oslo_concurrency]
    +lock_path = /var/lib/nova/tmp                                                                  (CTL)
    +
    +[placement]
    +region_name = RegionOne
    +project_domain_name = Default
    +project_name = service
    +auth_type = password
    +user_domain_name = Default
    +auth_url = http://controller:5000/v3
    +username = placement
    +password = PLACEMENT_PASS
    +
    +[neutron]
    +auth_url = http://controller:5000
    +auth_type = password
    +project_domain_name = default
    +user_domain_name = default
    +region_name = RegionOne
    +project_name = service
    +username = neutron
    +password = NEUTRON_PASS
    +service_metadata_proxy = true                                                                  (CTL)
    +metadata_proxy_shared_secret = METADATA_SECRET                                                 (CTL)
    +

    解释

    +

    [default]部分,启用计算和元数据的API,配置RabbitMQ消息队列入口,配置my_ip,启用网络服务neutron;

    +

    [api_database] [database]部分,配置数据库入口;

    +

    [api] [keystone_authtoken]部分,配置身份认证服务入口;

    +

    [vnc]部分,启用并配置远程控制台入口;

    +

    [glance]部分,配置镜像服务API的地址;

    +

    [oslo_concurrency]部分,配置lock path;

    +

    [placement]部分,配置placement服务的入口。

    +

    注意

    +

    替换 RABBIT_PASS 为 RabbitMQ 中 openstack 账户的密码;

    +

    配置 my_ip 为控制节点的管理IP地址;

    +

    替换 NOVA_DBPASS 为nova数据库的密码;

    +

    替换 NOVA_PASS 为nova用户的密码;

    +

    替换 PLACEMENT_PASS 为placement用户的密码;

    +

    替换 NEUTRON_PASS 为neutron用户的密码;

    +

    替换METADATA_SECRET为合适的元数据代理secret。

    +

    额外

    +

    确定是否支持虚拟机硬件加速(x86架构):

    +
    egrep -c '(vmx|svm)' /proc/cpuinfo                                                             (CPT)
    +

    如果返回值为0则不支持硬件加速,需要配置libvirt使用QEMU而不是KVM:

    +
    vim /etc/nova/nova.conf                                                                        (CPT)
    +
    +[libvirt]
    +virt_type = qemu
    +

    如果返回值为1或更大的值,则支持硬件加速,不需要进行额外的配置

    +

    注意

    +

    如果为arm64结构,还需要执行以下命令

    +
    vim /etc/libvirt/qemu.conf
    +
    +nvram = ["/usr/share/AAVMF/AAVMF_CODE.fd: \
    +         /usr/share/AAVMF/AAVMF_VARS.fd", \
    +         "/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw: \
    +         /usr/share/edk2/aarch64/vars-template-pflash.raw"]
    +
    +vim /etc/qemu/firmware/edk2-aarch64.json
    +
    +{
    +    "description": "UEFI firmware for ARM64 virtual machines",
    +    "interface-types": [
    +        "uefi"
    +    ],
    +    "mapping": {
    +        "device": "flash",
    +        "executable": {
    +            "filename": "/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw",
    +            "format": "raw"
    +        },
    +        "nvram-template": {
    +            "filename": "/usr/share/edk2/aarch64/vars-template-pflash.raw",
    +            "format": "raw"
    +        }
    +    },
    +    "targets": [
    +        {
    +            "architecture": "aarch64",
    +            "machines": [
    +                "virt-*"
    +            ]
    +        }
    +    ],
    +    "features": [
    +
    +    ],
    +    "tags": [
    +
    +    ]
    +}
    +
    +(CPT)
    +
  6. +
  7. +

    同步数据库

    +

    同步nova-api数据库:

    +
    su -s /bin/sh -c "nova-manage api_db sync" nova                                                (CTL)
    +

    注册cell0数据库:

    +
    su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova                                          (CTL)
    +

    创建cell1 cell:

    +
    su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova                 (CTL)
    +

    同步nova数据库:

    +
    su -s /bin/sh -c "nova-manage db sync" nova                                                    (CTL)
    +

    验证cell0和cell1注册正确:

    +
    su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova                                         (CTL)
    +

    添加计算节点到openstack集群

    +
    su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova                           (CPT)
    +
  8. +
  9. +

    启动服务

    +
    systemctl enable \                                                                             (CTL)
    +openstack-nova-api.service \
    +openstack-nova-scheduler.service \
    +openstack-nova-conductor.service \
    +openstack-nova-novncproxy.service
    +
    +systemctl start \                                                                              (CTL)
    +openstack-nova-api.service \
    +openstack-nova-scheduler.service \
    +openstack-nova-conductor.service \
    +openstack-nova-novncproxy.service
    +
    systemctl enable libvirtd.service openstack-nova-compute.service                               (CPT)
    +systemctl start libvirtd.service openstack-nova-compute.service                                (CPT)
    +
  10. +
  11. +

    验证

    +
    source ~/.admin-openrc                                                                         (CTL)
    +

    列出服务组件,验证每个流程都成功启动和注册:

    +
    openstack compute service list                                                                 (CTL)
    +

    列出身份服务中的API端点,验证与身份服务的连接:

    +
    openstack catalog list                                                                         (CTL)
    +

    列出镜像服务中的镜像,验证与镜像服务的连接:

    +
    openstack image list                                                                           (CTL)
    +

    检查cells是否运作成功,以及其他必要条件是否已具备。

    +
    nova-status upgrade check                                                                      (CTL)
    +
  12. +
+
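After discover_hosts has mapped the compute node, it should also appear in the hypervisor list; this optional check, run on the control node, is not part of the original steps:

```shell
source ~/.admin-openrc
# The compute host registered above should be listed here
openstack hypervisor list
```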

Neutron 安装

+
    +
  1. +

    创建数据库、服务凭证和 API 端点

    +

    创建数据库:

    +
    mysql -u root -p                                                                               (CTL)
    +
    +MariaDB [(none)]> CREATE DATABASE neutron;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
    +IDENTIFIED BY 'NEUTRON_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
    +IDENTIFIED BY 'NEUTRON_DBPASS';
    +MariaDB [(none)]> exit
    +

    注意

    +

    替换 NEUTRON_DBPASS 为 neutron 数据库设置密码。

    +
    source ~/.admin-openrc                                                                         (CTL)
    +

    创建neutron服务凭证

    +
    openstack user create --domain default --password-prompt neutron                               (CTL)
    +openstack role add --project service --user neutron admin                                      (CTL)
    +openstack service create --name neutron --description "OpenStack Networking" network           (CTL)
    +

    创建Neutron服务API端点:

    +
    openstack endpoint create --region RegionOne network public http://controller:9696             (CTL)
    +openstack endpoint create --region RegionOne network internal http://controller:9696           (CTL)
    +openstack endpoint create --region RegionOne network admin http://controller:9696              (CTL)
    +
  2. +
  3. +

    安装软件包:

    +
    yum install openstack-neutron openstack-neutron-linuxbridge ebtables ipset \                   (CTL)
    +openstack-neutron-ml2
    +
    yum install openstack-neutron-linuxbridge ebtables ipset                                       (CPT)
    +
  4. +
  5. +

    配置neutron相关配置:

    +

    配置主体配置

    +
    vim /etc/neutron/neutron.conf
    +
    +[database]
    +connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron                         (CTL)
    +
    +[DEFAULT]
    +core_plugin = ml2                                                                              (CTL)
    +service_plugins = router                                                                       (CTL)
    +allow_overlapping_ips = true                                                                   (CTL)
    +transport_url = rabbit://openstack:RABBIT_PASS@controller
    +auth_strategy = keystone
    +notify_nova_on_port_status_changes = true                                                      (CTL)
    +notify_nova_on_port_data_changes = true                                                        (CTL)
    +api_workers = 3                                                                                (CTL)
    +
    +[keystone_authtoken]
    +www_authenticate_uri = http://controller:5000
    +auth_url = http://controller:5000
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +project_name = service
    +username = neutron
    +password = NEUTRON_PASS
    +
    +[nova]
    +auth_url = http://controller:5000                                                              (CTL)
    +auth_type = password                                                                           (CTL)
    +project_domain_name = Default                                                                  (CTL)
    +user_domain_name = Default                                                                     (CTL)
    +region_name = RegionOne                                                                        (CTL)
    +project_name = service                                                                         (CTL)
    +username = nova                                                                                (CTL)
    +password = NOVA_PASS                                                                           (CTL)
    +
    +[oslo_concurrency]
    +lock_path = /var/lib/neutron/tmp
    +

    解释

    +

    [database]部分,配置数据库入口;

    +

    [default]部分,启用ml2插件和router插件,允许ip地址重叠,配置RabbitMQ消息队列入口;

    +

    [default] [keystone]部分,配置身份认证服务入口;

    +

    [default] [nova]部分,配置网络来通知计算网络拓扑的变化;

    +

    [oslo_concurrency]部分,配置lock path。

    +

    注意

    +

    替换NEUTRON_DBPASS为 neutron 数据库的密码;

    +

    替换RABBIT_PASS为 RabbitMQ中openstack 账户的密码;

    +

    替换NEUTRON_PASS为 neutron 用户的密码;

    +

    替换NOVA_PASS为 nova 用户的密码。

    +

    配置ML2插件:

    +
    vim /etc/neutron/plugins/ml2/ml2_conf.ini
    +
    +[ml2]
    +type_drivers = flat,vlan,vxlan
    +tenant_network_types = vxlan
    +mechanism_drivers = linuxbridge,l2population
    +extension_drivers = port_security
    +
    +[ml2_type_flat]
    +flat_networks = provider
    +
    +[ml2_type_vxlan]
    +vni_ranges = 1:1000
    +
    +[securitygroup]
    +enable_ipset = true
    +

    创建/etc/neutron/plugin.ini的符号链接

    +
    ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
    +

    注意

    +

    [ml2]部分,启用 flat、vlan、vxlan 网络,启用 linuxbridge 及 l2population 机制,启用端口安全扩展驱动;

    +

    [ml2_type_flat]部分,配置 flat 网络为 provider 虚拟网络;

    +

    [ml2_type_vxlan]部分,配置 VXLAN 网络标识符范围;

    +

    [securitygroup]部分,配置允许 ipset。

    +

    补充

    +

    l2 的具体配置可以根据用户需求自行修改,本文使用的是provider network + linuxbridge

    +

    配置 Linux bridge 代理:

    +
    vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
    +
    +[linux_bridge]
    +physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME
    +
    +[vxlan]
    +enable_vxlan = true
    +local_ip = OVERLAY_INTERFACE_IP_ADDRESS
    +l2_population = true
    +
    +[securitygroup]
    +enable_security_group = true
    +firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
    +

    解释

    +

    [linux_bridge]部分,映射 provider 虚拟网络到物理网络接口;

    +

    [vxlan]部分,启用 vxlan 覆盖网络,配置处理覆盖网络的物理网络接口 IP 地址,启用 layer-2 population;

    +

    [securitygroup]部分,允许安全组,配置 linux bridge iptables 防火墙驱动。

    +

    注意

    +

    替换PROVIDER_INTERFACE_NAME为物理网络接口;

    +

    替换OVERLAY_INTERFACE_IP_ADDRESS为控制节点的管理IP地址。

    +

    配置Layer-3代理:

    +
    vim /etc/neutron/l3_agent.ini                                                                  (CTL)
    +
    +[DEFAULT]
    +interface_driver = linuxbridge
    +

    解释

    +

    在[default]部分,配置接口驱动为linuxbridge

    +

    配置DHCP代理:

    +
    vim /etc/neutron/dhcp_agent.ini                                                                (CTL)
    +
    +[DEFAULT]
    +interface_driver = linuxbridge
    +dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
    +enable_isolated_metadata = true
    +

    解释

    +

    [default]部分,配置linuxbridge接口驱动、Dnsmasq DHCP驱动,启用隔离的元数据。

    +

    配置metadata代理:

    +
    vim /etc/neutron/metadata_agent.ini                                                            (CTL)
    +
    +[DEFAULT]
    +nova_metadata_host = controller
    +metadata_proxy_shared_secret = METADATA_SECRET
    +

    解释

    +

    [default]部分,配置元数据主机和shared secret。

    +

    注意

    +

    替换METADATA_SECRET为合适的元数据代理secret。

    +
  6. +
  7. +

    配置nova相关配置

    +
    vim /etc/nova/nova.conf
    +
    +[neutron]
    +auth_url = http://controller:5000
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +region_name = RegionOne
    +project_name = service
    +username = neutron
    +password = NEUTRON_PASS
    +service_metadata_proxy = true                                                                  (CTL)
    +metadata_proxy_shared_secret = METADATA_SECRET                                                 (CTL)
    +

    解释

    +

    [neutron]部分,配置访问参数,启用元数据代理,配置secret。

    +

    注意

    +

    替换NEUTRON_PASS为 neutron 用户的密码;

    +

    替换METADATA_SECRET为合适的元数据代理secret。

    +
  8. +
  9. +

    同步数据库:

    +
    su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
    +--config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
    +
  10. +
  11. +

    重启计算API服务:

    +
    systemctl restart openstack-nova-api.service
    +
  12. +
  13. +

    启动网络服务

    +
    systemctl enable neutron-server.service neutron-linuxbridge-agent.service \                    (CTL)
    neutron-dhcp-agent.service neutron-metadata-agent.service
    systemctl enable neutron-l3-agent.service
    systemctl restart openstack-nova-api.service neutron-server.service \                          (CTL)
    neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
    neutron-metadata-agent.service neutron-l3-agent.service

    systemctl enable neutron-linuxbridge-agent.service                                             (CPT)
    systemctl restart neutron-linuxbridge-agent.service openstack-nova-compute.service             (CPT)
    +
  14. +
  15. +

    验证

    +

    验证 neutron 代理启动成功:

    +
    openstack network agent list
    +
  16. +
+
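The ML2 configuration above maps the flat network provider to a physical interface; once the agents report alive, creating that network and a subnet on it typically looks like the following sketch (all addresses below are placeholders, not values from this guide):

```shell
source ~/.admin-openrc

# Provider (external, flat) network matching flat_networks = provider above
openstack network create --share --external \
    --provider-physical-network provider \
    --provider-network-type flat provider

# Subnet on the provider network; replace the ranges with your own addressing
openstack subnet create --network provider \
    --allocation-pool start=203.0.113.101,end=203.0.113.250 \
    --dns-nameserver 8.8.8.8 --gateway 203.0.113.1 \
    --subnet-range 203.0.113.0/24 provider-subnet
```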

Cinder 安装

+
    +
  1. +

    创建数据库、服务凭证和 API 端点

    +

    创建数据库:

    +
    mysql -u root -p
    +
    +MariaDB [(none)]> CREATE DATABASE cinder;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \
    +IDENTIFIED BY 'CINDER_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \
    +IDENTIFIED BY 'CINDER_DBPASS';
    +MariaDB [(none)]> exit
    +

    注意

    +

    替换 CINDER_DBPASS 为cinder数据库设置密码。

    +
    source ~/.admin-openrc
    +

    创建cinder服务凭证:

    +
    openstack user create --domain default --password-prompt cinder
    +openstack role add --project service --user cinder admin
    +openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
    +openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
    +

    创建块存储服务API端点:

    +
    openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(project_id\)s
    +openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(project_id\)s
    +openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(project_id\)s
    +openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s
    +openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s
    +openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s
    +
  2. +
  3. +

    安装软件包:

    +
    yum install openstack-cinder-api openstack-cinder-scheduler                                    (CTL)
    +
    yum install lvm2 device-mapper-persistent-data scsi-target-utils rpcbind nfs-utils \           (STG)
    +            openstack-cinder-volume openstack-cinder-backup
    +
  4. +
  5. +

    准备存储设备,以下仅为示例:

    +
    pvcreate /dev/vdb
    +vgcreate cinder-volumes /dev/vdb
    +
    +vim /etc/lvm/lvm.conf
    +
    +
    +devices {
    +...
    +filter = [ "a/vdb/", "r/.*/"]
    +

    解释

    +

    在devices部分,添加过滤以接受/dev/vdb设备拒绝其他设备。

    +
  6. +
  7. +

    Prepare NFS

    mkdir -p /root/cinder/backup

    cat << EOF >> /etc/exports
    /root/cinder/backup 192.168.1.0/24(rw,sync,no_root_squash,no_all_squash)
    EOF
  8. +
  9. +

    配置cinder相关配置:

    +
    vim /etc/cinder/cinder.conf
    +
    +[DEFAULT]
    +transport_url = rabbit://openstack:RABBIT_PASS@controller
    +auth_strategy = keystone
    +my_ip = 10.0.0.11
    +enabled_backends = lvm                                                                         (STG)
    +backup_driver=cinder.backup.drivers.nfs.NFSBackupDriver                                        (STG)
    +backup_share=HOST:PATH                                                                         (STG)
    +
    +[database]
    +connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder
    +
    +[keystone_authtoken]
    +www_authenticate_uri = http://controller:5000
    +auth_url = http://controller:5000
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +project_name = service
    +username = cinder
    +password = CINDER_PASS
    +
    +[oslo_concurrency]
    +lock_path = /var/lib/cinder/tmp
    +
    +[lvm]
    +volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver                                      (STG)
    +volume_group = cinder-volumes                                                                  (STG)
    +iscsi_protocol = iscsi                                                                         (STG)
    +iscsi_helper = tgtadm                                                                          (STG)
    +

    解释

    +

    [database]部分,配置数据库入口;

    +

    [DEFAULT]部分,配置RabbitMQ消息队列入口,配置my_ip;

    +

    [DEFAULT] [keystone_authtoken]部分,配置身份认证服务入口;

    +

    [oslo_concurrency]部分,配置lock path。

    +

    注意

    +

    替换CINDER_DBPASS为 cinder 数据库的密码;

    +

    替换RABBIT_PASS为 RabbitMQ 中 openstack 账户的密码;

    +

    配置my_ip为控制节点的管理 IP 地址;

    +

    替换CINDER_PASS为 cinder 用户的密码;

    +

    替换HOST:PATH为 NFS 的HOSTIP和共享路径;

    +
  10. +
  11. +

    同步数据库:

    +
    su -s /bin/sh -c "cinder-manage db sync" cinder                                                (CTL)
    +
  12. +
  13. +

    配置nova:

    +
    vim /etc/nova/nova.conf                                                                        (CTL)
    +
    +[cinder]
    +os_region_name = RegionOne
    +
  14. +
  15. +

    重启计算API服务

    +
    systemctl restart openstack-nova-api.service
    +
  16. +
  17. +

    启动cinder服务

    +
    systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service               (CTL)
    +systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service                (CTL)
    +
    systemctl enable rpcbind.service nfs-server.service tgtd.service iscsid.service \              (STG)
    +                 openstack-cinder-volume.service \
    +                 openstack-cinder-backup.service
    +systemctl start rpcbind.service nfs-server.service tgtd.service iscsid.service \               (STG)
    +                openstack-cinder-volume.service \
    +                openstack-cinder-backup.service
    +

    注意

    +

    当cinder使用tgtadm的方式挂卷的时候,要修改/etc/tgt/tgtd.conf,内容如下,保证tgtd可以发现cinder-volume的iscsi target。

    +
    include /var/lib/cinder/volumes/*
    +
  18. +
  19. +

    验证

    +
    source ~/.admin-openrc
    +openstack volume service list
    +
  20. +
+
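Beyond openstack volume service list, an optional end-to-end check is to create and remove a small test volume (the name test-volume is arbitrary):

```shell
source ~/.admin-openrc
openstack volume create --size 1 test-volume
openstack volume list            # the volume should reach the "available" status
openstack volume delete test-volume
```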

horizon 安装

+
    +
  1. +

    安装软件包

    +
    yum install openstack-dashboard
    +
  2. +
  3. +

    修改文件

    +

    修改变量

    +
    vim /etc/openstack-dashboard/local_settings
    +
    +OPENSTACK_HOST = "controller"
    +ALLOWED_HOSTS = ['*', ]
    +
    +SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
    +
    +CACHES = {
    +'default': {
    +     'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
    +     'LOCATION': 'controller:11211',
    +    }
    +}
    +
    +OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
    +OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
    +OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
    +OPENSTACK_KEYSTONE_DEFAULT_ROLE = "member"
    +WEBROOT = '/dashboard'
    +POLICY_FILES_PATH = "/etc/openstack-dashboard"
    +
    +OPENSTACK_API_VERSIONS = {
    +    "identity": 3,
    +    "image": 2,
    +    "volume": 3,
    +}
    +
  4. +
  5. +

    重启 httpd 服务

    +
    systemctl restart httpd.service memcached.service
    +
  6. +
  7. +

    Verification: open a browser, go to http://HOSTIP/dashboard/, and log in to Horizon.

    +

    注意

    +

    替换HOSTIP为控制节点管理平面IP地址

    +
  8. +
+

Tempest 安装

+

Tempest是OpenStack的集成测试服务,如果用户需要全面自动化测试已安装的OpenStack环境的功能,则推荐使用该组件。否则,可以不用安装。

+
    +
  1. +

    安装Tempest

    +
    yum install openstack-tempest
    +
  2. +
  3. +

    初始化目录

    +
    tempest init mytest
    +
  4. +
  5. +

    修改配置文件。

    +
    cd mytest
    +vi etc/tempest.conf
    +

    tempest.conf must be filled in with the details of the current OpenStack environment; refer to the official sample for the full option set (a minimal illustrative fragment is shown after this list).

    +
  6. +
  7. +

    执行测试

    +
    tempest run
    +
  8. +
  9. +

    Install Tempest plugins (optional). Each OpenStack service also provides its own Tempest test package, which can be installed to extend the Tempest test coverage. In Wallaby, extension tests are provided for Cinder, Glance, Keystone, Ironic, and Trove; install them as follows:

    yum install python3-cinder-tempest-plugin python3-glance-tempest-plugin python3-ironic-tempest-plugin python3-keystone-tempest-plugin python3-trove-tempest-plugin

    +
  10. +
+
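As referenced in the configuration step above, a minimal illustrative fragment of mytest/etc/tempest.conf might look like the following; the values are placeholders, the fragment is not complete, and the official sample remains the authoritative reference:

```shell
# Fragment to merge into mytest/etc/tempest.conf
[auth]
admin_username = admin
admin_password = ADMIN_PASS
admin_project_name = admin
admin_domain_name = Default

[identity]
uri_v3 = http://controller:5000/v3
```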

Ironic 安装

+

Ironic是OpenStack的裸金属服务,如果用户需要进行裸机部署则推荐使用该组件。否则,可以不用安装。

+
    +
  1. 设置数据库
  2. +
+

裸金属服务在数据库中存储信息,创建一个ironic用户可以访问的ironic数据库,替换IRONIC_DBPASSWORD为合适的密码

+
mysql -u root -p
+
+MariaDB [(none)]> CREATE DATABASE ironic CHARACTER SET utf8;
+MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'localhost' \
+IDENTIFIED BY 'IRONIC_DBPASSWORD';
+MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'%' \
+IDENTIFIED BY 'IRONIC_DBPASSWORD';
+
    +
  1. 创建服务用户认证
  2. +
+

1、创建Bare Metal服务用户

+
openstack user create --password IRONIC_PASSWORD \
                      --email ironic@example.com ironic
openstack role add --project service --user ironic admin
openstack service create --name ironic \
                         --description "Ironic baremetal provisioning service" baremetal

openstack service create --name ironic-inspector --description "Ironic inspector baremetal provisioning service" baremetal-introspection
openstack user create --password IRONIC_INSPECTOR_PASSWORD --email ironic_inspector@example.com ironic_inspector
openstack role add --project service --user ironic_inspector admin
+

2、创建Bare Metal服务访问入口

+
openstack endpoint create --region RegionOne baremetal admin http://$IRONIC_NODE:6385
+openstack endpoint create --region RegionOne baremetal public http://$IRONIC_NODE:6385
+openstack endpoint create --region RegionOne baremetal internal http://$IRONIC_NODE:6385
+openstack endpoint create --region RegionOne baremetal-introspection internal http://172.20.19.13:5050/v1
+openstack endpoint create --region RegionOne baremetal-introspection public http://172.20.19.13:5050/v1
+openstack endpoint create --region RegionOne baremetal-introspection admin http://172.20.19.13:5050/v1
+
    +
  1. 配置ironic-api服务
  2. +
+

配置文件路径/etc/ironic/ironic.conf

+

1. Configure the location of the database via the connection option, as shown below. Replace IRONIC_DBPASSWORD with the password of the ironic user and DB_IP with the IP address of the database server:

+
[database]
+
+# The SQLAlchemy connection string used to connect to the
+# database (string value)
+
+connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic
+

2、通过以下选项配置ironic-api服务使用RabbitMQ消息代理,替换RPC_*为RabbitMQ的详细地址和凭证

+
[DEFAULT]
+
+# A URL representing the messaging driver to use and its full
+# configuration. (string value)
+
+transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
+

用户也可自行使用json-rpc方式替换rabbitmq

+

3、配置ironic-api服务使用身份认证服务的凭证,替换PUBLIC_IDENTITY_IP为身份认证服务器的公共IP,替换PRIVATE_IDENTITY_IP为身份认证服务器的私有IP,替换IRONIC_PASSWORD为身份认证服务中ironic用户的密码:

+
[DEFAULT]
+
+# Authentication strategy used by ironic-api: one of
+# "keystone" or "noauth". "noauth" should not be used in a
+# production environment because all authentication will be
+# disabled. (string value)
+
+auth_strategy=keystone
+host = controller
+memcache_servers = controller:11211
+enabled_network_interfaces = flat,noop,neutron
+default_network_interface = noop
+transport_url = rabbit://openstack:RABBITPASSWD@controller:5672/
+enabled_hardware_types = ipmi
+enabled_boot_interfaces = pxe
+enabled_deploy_interfaces = direct
+default_deploy_interface = direct
+enabled_inspect_interfaces = inspector
+enabled_management_interfaces = ipmitool
+enabled_power_interfaces = ipmitool
+enabled_rescue_interfaces = no-rescue,agent
+isolinux_bin = /usr/share/syslinux/isolinux.bin
+logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s
+
+[keystone_authtoken]
+# Authentication type to load (string value)
+auth_type=password
+# Complete public Identity API endpoint (string value)
+www_authenticate_uri=http://PUBLIC_IDENTITY_IP:5000
+# Complete admin Identity API endpoint. (string value)
+auth_url=http://PRIVATE_IDENTITY_IP:5000
+# Service username. (string value)
+username=ironic
+# Service account password. (string value)
+password=IRONIC_PASSWORD
+# Service tenant name. (string value)
+project_name=service
+# Domain name containing project (string value)
+project_domain_name=Default
+# User's domain name (string value)
+user_domain_name=Default
+
+[agent]
+deploy_logs_collect = always
+deploy_logs_local_path = /var/log/ironic/deploy
+deploy_logs_storage_backend = local
+image_download_source = http
+stream_raw_images = false
+force_raw_images = false
+verify_ca = False
+
+[oslo_concurrency]
+
+[oslo_messaging_notifications]
+transport_url = rabbit://openstack:123456@172.20.19.25:5672/
+topics = notifications
+driver = messagingv2
+
+[oslo_messaging_rabbit]
+amqp_durable_queues = True
+rabbit_ha_queues = True
+
+[pxe]
+ipxe_enabled = false
+pxe_append_params = nofb nomodeset vga=normal coreos.autologin ipa-insecure=1
+image_cache_size = 204800
+tftp_root=/var/lib/tftpboot/cephfs/
+tftp_master_path=/var/lib/tftpboot/cephfs/master_images
+
+[dhcp]
+dhcp_provider = none
+

4、创建裸金属服务数据库表

+
ironic-dbsync --config-file /etc/ironic/ironic.conf create_schema
+

5、重启ironic-api服务

+
sudo systemctl restart openstack-ironic-api
+
    +
  1. 配置ironic-conductor服务
  2. +
+

1、替换HOST_IP为conductor host的IP

+
[DEFAULT]
+
+# IP address of this host. If unset, will determine the IP
+# programmatically. If unable to do so, will use "127.0.0.1".
+# (string value)
+
+my_ip=HOST_IP
+

2. Configure the location of the database; ironic-conductor should use the same settings as ironic-api. Replace IRONIC_DBPASSWORD with the password of the ironic user and DB_IP with the IP address of the database server:

+
[database]
+
+# The SQLAlchemy connection string to use to connect to the
+# database. (string value)
+
+connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic
+

3、通过以下选项配置ironic-api服务使用RabbitMQ消息代理,ironic-conductor应该使用和ironic-api相同的配置,替换RPC_*为RabbitMQ的详细地址和凭证

+
[DEFAULT]
+
+# A URL representing the messaging driver to use and its full
+# configuration. (string value)
+
+transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
+

用户也可自行使用json-rpc方式替换rabbitmq

+

4、配置凭证访问其他OpenStack服务

+

为了与其他OpenStack服务进行通信,裸金属服务在请求其他服务时需要使用服务用户与OpenStack Identity服务进行认证。这些用户的凭据必须在与相应服务相关的每个配置文件中进行配置。

+
[neutron] - 访问OpenStack网络服务
+[glance] - 访问OpenStack镜像服务
+[swift] - 访问OpenStack对象存储服务
+[cinder] - 访问OpenStack块存储服务
+[inspector] - 访问OpenStack裸金属introspection服务
+[service_catalog] - 一个特殊项用于保存裸金属服务使用的凭证,该凭证用于发现注册在OpenStack身份认证服务目录中的自己的API URL端点
+

简单起见,可以对所有服务使用同一个服务用户。为了向后兼容,该用户应该和ironic-api服务的[keystone_authtoken]所配置的为同一个用户。但这不是必须的,也可以为每个服务创建并配置不同的服务用户。

+

在下面的示例中,用户访问OpenStack网络服务的身份验证信息配置为:

+
网络服务部署在名为RegionOne的身份认证服务域中,仅在服务目录中注册公共端点接口
+
+请求时使用特定的CA SSL证书进行HTTPS连接
+
+与ironic-api服务配置相同的服务用户
+
+动态密码认证插件基于其他选项发现合适的身份认证服务API版本
+
[neutron]
+
+# Authentication type to load (string value)
+auth_type = password
+# Authentication URL (string value)
+auth_url=https://IDENTITY_IP:5000/
+# Username (string value)
+username=ironic
+# User's password (string value)
+password=IRONIC_PASSWORD
+# Project name to scope to (string value)
+project_name=service
+# Domain ID containing project (string value)
+project_domain_id=default
+# User's domain id (string value)
+user_domain_id=default
+# PEM encoded Certificate Authority to use when verifying
+# HTTPs connections. (string value)
+cafile=/opt/stack/data/ca-bundle.pem
+# The default region_name for endpoint URL discovery. (string
+# value)
+region_name = RegionOne
+# List of interfaces, in order of preference, for endpoint
+# URL. (list value)
+valid_interfaces=public
+

默认情况下,为了与其他服务进行通信,裸金属服务会尝试通过身份认证服务的服务目录发现该服务合适的端点。如果希望对一个特定服务使用一个不同的端点,则在裸金属服务的配置文件中通过endpoint_override选项进行指定:

+
[neutron]
...
endpoint_override = <NEUTRON_API_ADDRESS>
+

5、配置允许的驱动程序和硬件类型

+

通过设置enabled_hardware_types设置ironic-conductor服务允许使用的硬件类型:

+
[DEFAULT]
enabled_hardware_types = ipmi
+

配置硬件接口:

+
enabled_boot_interfaces = pxe
enabled_deploy_interfaces = direct,iscsi
enabled_inspect_interfaces = inspector
enabled_management_interfaces = ipmitool
enabled_power_interfaces = ipmitool
+

配置接口默认值:

+
[DEFAULT]
default_deploy_interface = direct
default_network_interface = neutron
+

如果启用了任何使用Direct deploy的驱动,必须安装和配置镜像服务的Swift后端。Ceph对象网关(RADOS网关)也支持作为镜像服务的后端。

+

6、重启ironic-conductor服务

+
sudo systemctl restart openstack-ironic-conductor
+
    +
  1. 配置ironic-inspector服务
  2. +
+

配置文件路径/etc/ironic-inspector/inspector.conf

+

1、创建数据库

+
# mysql -u root -p
+
+MariaDB [(none)]> CREATE DATABASE ironic_inspector CHARACTER SET utf8;
+
+MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic_inspector.* TO 'ironic_inspector'@'localhost' \     IDENTIFIED BY 'IRONIC_INSPECTOR_DBPASSWORD';
+MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic_inspector.* TO 'ironic_inspector'@'%' \
+IDENTIFIED BY 'IRONIC_INSPECTOR_DBPASSWORD';
+

2. Configure the location of the database via the connection option, as shown below. Replace IRONIC_INSPECTOR_DBPASSWORD with the password of the ironic_inspector user and DB_IP with the IP address of the database server:

+
[database]
+backend = sqlalchemy
+connection = mysql+pymysql://ironic_inspector:IRONIC_INSPECTOR_DBPASSWORD@DB_IP/ironic_inspector
+min_pool_size = 100
+max_pool_size = 500
+pool_timeout = 30
+max_retries = 5
+max_overflow = 200
+db_retry_interval = 2
+db_inc_retry_interval = True
+db_max_retry_interval = 2
+db_max_retries = 5
+

3. Configure the message queue transport URL

+
[DEFAULT] 
+transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
+
+

4、设置keystone认证

+
[DEFAULT]
+
+auth_strategy = keystone
+timeout = 900
+rootwrap_config = /etc/ironic-inspector/rootwrap.conf
+logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s
+log_dir = /var/log/ironic-inspector
+state_path = /var/lib/ironic-inspector
+use_stderr = False
+
+[ironic]
+api_endpoint = http://IRONIC_API_HOST_ADDRRESS:6385
+auth_type = password
+auth_url = http://PUBLIC_IDENTITY_IP:5000
+auth_strategy = keystone
+ironic_url = http://IRONIC_API_HOST_ADDRRESS:6385
+os_region = RegionOne
+project_name = service
+project_domain_name = Default
+user_domain_name = Default
+username = IRONIC_SERVICE_USER_NAME
+password = IRONIC_SERVICE_USER_PASSWORD
+
+[keystone_authtoken]
+auth_type = password
+auth_url = http://control:5000
+www_authenticate_uri = http://control:5000
+project_domain_name = default
+user_domain_name = default
+project_name = service
+username = ironic_inspector
+password = IRONICPASSWD
+region_name = RegionOne
+memcache_servers = control:11211
+token_cache_time = 300
+
+[processing]
+add_ports = active
+processing_hooks = $default_processing_hooks,local_link_connection,lldp_basic
+ramdisk_logs_dir = /var/log/ironic-inspector/ramdisk
+always_store_ramdisk_logs = true
+store_data =none
+power_off = false
+
+[pxe_filter]
+driver = iptables
+
+[capabilities]
+boot_mode=True
+

5、配置ironic inspector dnsmasq服务

+
# 配置文件地址:/etc/ironic-inspector/dnsmasq.conf
+port=0
+interface=enp3s0                         #替换为实际监听网络接口
+dhcp-range=172.20.19.100,172.20.19.110   #替换为实际dhcp地址范围
+bind-interfaces
+enable-tftp
+
+dhcp-match=set:efi,option:client-arch,7
+dhcp-match=set:efi,option:client-arch,9
+dhcp-match=aarch64, option:client-arch,11
+dhcp-boot=tag:aarch64,grubaa64.efi
+dhcp-boot=tag:!aarch64,tag:efi,grubx64.efi
+dhcp-boot=tag:!aarch64,tag:!efi,pxelinux.0
+
+tftp-root=/tftpboot                       #替换为实际tftpboot目录
+log-facility=/var/log/dnsmasq.log
+

6、关闭ironic provision网络子网的dhcp

+
openstack subnet set --no-dhcp 72426e89-f552-4dc4-9ac7-c4e131ce7f3c
+

7、初始化ironic-inspector服务的数据库

+

在控制节点执行:

+
ironic-inspector-dbsync --config-file /etc/ironic-inspector/inspector.conf upgrade
+

8、启动服务

+
systemctl enable --now openstack-ironic-inspector.service
+systemctl enable --now openstack-ironic-inspector-dnsmasq.service
+

6.配置httpd服务

+
    +
  1. +

    Create the httpd root directory that ironic will use and set its owner and group. The path must be the same as the one set by the http_root option in the [deploy] section of /etc/ironic/ironic.conf.

    mkdir -p /var/lib/ironic/httproot
    chown ironic.ironic /var/lib/ironic/httproot
  2. +
  3. +

    安装和配置httpd服务

    +
      +
    1. +

      安装httpd服务,已有请忽略

      +
      yum install httpd -y
      +
    2. +
    3. +

      创建/etc/httpd/conf.d/openstack-ironic-httpd.conf文件,内容如下:

      +
      Listen 8080
      +
      +<VirtualHost *:8080>
      +    ServerName ironic.openeuler.com
      +
      +    ErrorLog "/var/log/httpd/openstack-ironic-httpd-error_log"
      +    CustomLog "/var/log/httpd/openstack-ironic-httpd-access_log" "%h %l %u %t \"%r\" %>s %b"
      +
      +    DocumentRoot "/var/lib/ironic/httproot"
      +    <Directory "/var/lib/ironic/httproot">
      +        Options Indexes FollowSymLinks
      +        Require all granted
      +    </Directory>
      +    LogLevel warn
      +    AddDefaultCharset UTF-8
      +    EnableSendfile on
      +</VirtualHost>
      +
      +

      注意监听的端口要和/etc/ironic/ironic.conf里[deploy]选项中http_url配置项中指定的端口一致。

      +
    4. +
    5. +

      重启httpd服务。

      +
      systemctl restart httpd
      +
    6. +
    +
  4. +
+
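An optional check that the new virtual host is serving the ironic HTTP root, assuming the service runs on the node named controller and the port 8080 configured above:

```shell
# An HTTP response (200, or 403/404 on an empty DocumentRoot) confirms the vhost is listening
curl -I http://controller:8080/
```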

7.deploy ramdisk镜像制作

+

The Wallaby ramdisk image can be built with the ironic-python-agent service or the disk-image-builder tool; the community's latest ironic-python-agent-builder can also be used, and users are free to choose other tools. If the native Wallaby tools are used, install the corresponding packages first.

+
yum install openstack-ironic-python-agent
+或者
+yum install diskimage-builder
+

具体的使用方法可以参考官方文档

+

这里介绍下使用ironic-python-agent-builder构建ironic使用的deploy镜像的完整过程。

+
    +
  1. +

    安装 ironic-python-agent-builder

    +
    1. 安装工具:
    +
    +    ```shell
    +    pip install ironic-python-agent-builder
    +    ```
    +
    +2. 修改以下文件中的python解释器:
    +
    +    ```shell
    +    /usr/bin/yum /usr/libexec/urlgrabber-ext-down
    +    ```
    +
    +3. 安装其它必须的工具:
    +
    +    ```shell
    +    yum install git
    +    ```
    +
    +    由于`DIB`依赖`semanage`命令,所以在制作镜像之前确定该命令是否可用:`semanage --help`,如果提示无此命令,安装即可:
    +
    +    ```shell
    +    # 先查询需要安装哪个包
    +    [root@localhost ~]# yum provides /usr/sbin/semanage
    +    已加载插件:fastestmirror
    +    Loading mirror speeds from cached hostfile
    +    * base: mirror.vcu.edu
    +    * extras: mirror.vcu.edu
    +    * updates: mirror.math.princeton.edu
    +    policycoreutils-python-2.5-34.el7.aarch64 : SELinux policy core python utilities
    +    源    :base
    +    匹配来源:
    +    文件名    :/usr/sbin/semanage
    +    # 安装
    +    [root@localhost ~]# yum install policycoreutils-python
    +    ```
    +
  2. +
  3. +

    制作镜像

    +
    如果是`arm`架构,需要添加:
    +```shell
    +export ARCH=aarch64
    +```
    +
    +基本用法:
    +
    +```shell
    +usage: ironic-python-agent-builder [-h] [-r RELEASE] [-o OUTPUT] [-e ELEMENT]
    +                                    [-b BRANCH] [-v] [--extra-args EXTRA_ARGS]
    +                                    distribution
    +
    +positional arguments:
    +    distribution          Distribution to use
    +
    +optional arguments:
    +    -h, --help            show this help message and exit
    +    -r RELEASE, --release RELEASE
    +                        Distribution release to use
    +    -o OUTPUT, --output OUTPUT
    +                        Output base file name
    +    -e ELEMENT, --element ELEMENT
    +                        Additional DIB element to use
    +    -b BRANCH, --branch BRANCH
    +                        If set, override the branch that is used for ironic-
    +                        python-agent and requirements
    +    -v, --verbose         Enable verbose logging in diskimage-builder
    +    --extra-args EXTRA_ARGS
    +                        Extra arguments to pass to diskimage-builder
    +```
    +
    +举例说明:
    +
    +```shell
    +ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky
    +```
    +
  4. +
  5. +

    允许ssh登录

    +
    初始化环境变量,然后制作镜像:
    +
    +```shell
    +export DIB_DEV_USER_USERNAME=ipa \
    +export DIB_DEV_USER_PWDLESS_SUDO=yes \
    +export DIB_DEV_USER_PASSWORD='123'
    +ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky -e selinux-permissive -e devuser
    +```
    +
  6. +
  7. +

    指定代码仓库

    +
    初始化对应的环境变量,然后制作镜像:
    +
    +```shell
    +# 指定仓库地址以及版本
    +DIB_REPOLOCATION_ironic_python_agent=git@172.20.2.149:liuzz/ironic-python-agent.git
    +DIB_REPOREF_ironic_python_agent=origin/develop
    +
    +# 直接从gerrit上clone代码
    +DIB_REPOLOCATION_ironic_python_agent=https://review.opendev.org/openstack/ironic-python-agent
    +DIB_REPOREF_ironic_python_agent=refs/changes/43/701043/1
    +```
    +
    +参考:[source-repositories](https://docs.openstack.org/diskimage-builder/latest/elements/source-repositories/README.html)。
    +
    +指定仓库地址及版本验证成功。
    +
  8. +
  9. +

    注意

    +
    原生的openstack里的pxe配置文件的模版不支持arm64架构,需要自己对原生openstack代码进行修改:
    +
    +在W版中,社区的ironic仍然不支持arm64位的uefi pxe启动,表现为生成的grub.cfg文件(一般位于/tftpboot/下)格式不对而导致pxe启动失败,如下:
    +
    +生成的错误配置文件:
    +
    +![ironic-err](../../img/install/ironic-err.png)
    +
    +如上图所示,arm架构里寻找vmlinux和ramdisk镜像的命令分别是linux和initrd,上图所示的标红命令是x86架构下的uefi pxe启动。
    +
    +需要用户对生成grub.cfg的代码逻辑自行修改。
    +
    +ironic向ipa发送查询命令执行状态请求的tls报错:
    +
    +w版的ipa和ironic默认都会开启tls认证的方式向对方发送请求,跟据官网的说明进行关闭即可。
    +
    +1. 修改ironic配置文件(/etc/ironic/ironic.conf)下面的配置中添加ipa-insecure=1:
    +
    +```
    +[agent]
    +verify_ca = False
    +
    +[pxe]
    +pxe_append_params = nofb nomodeset vga=normal coreos.autologin ipa-insecure=1
    +```
    +
    +2) ramdisk镜像中添加ipa配置文件/etc/ironic_python_agent/ironic_python_agent.conf并配置tls的配置如下:
    +
    +/etc/ironic_python_agent/ironic_python_agent.conf (需要提前创建/etc/ironic_python_agent目录)
    +
    +```
    +[DEFAULT]
    +enable_auto_tls = False
    +```
    +
    +设置权限:
    +
    +```
    +chown -R ipa.ipa /etc/ironic_python_agent/
    +```
    +
    +3. 修改ipa服务的服务启动文件,添加配置文件选项
    +
    +vim usr/lib/systemd/system/ironic-python-agent.service
    +
    +```
    +[Unit]
    +Description=Ironic Python Agent
    +After=network-online.target
    +
    +[Service]
    +ExecStartPre=/sbin/modprobe vfat
    +ExecStart=/usr/local/bin/ironic-python-agent --config-file /etc/ironic_python_agent/ironic_python_agent.conf
    +Restart=always
    +RestartSec=30s
    +
    +[Install]
    +WantedBy=multi-user.target
    +```
    +
  10. +
+

Kolla 安装

+

Kolla为OpenStack服务提供生产环境可用的容器化部署的功能。openEuler 24.03 LTS SP1中引入了Kolla和Kolla-ansible服务。

+

Kolla的安装十分简单,只需要安装对应的RPM包即可

+
yum install openstack-kolla openstack-kolla-ansible
+

安装完后,就可以使用kolla-ansible, kolla-build, kolla-genpwd, kolla-mergepwd等命令了。

+
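For orientation only, a typical first contact with the freshly installed tools follows the upstream kolla-ansible flow; the inventory path below is the upstream default and may differ in the openEuler packaging, and /etc/kolla/passwords.yml is assumed to be present (shipped with the package or copied from the kolla-ansible examples):

```shell
# Fill in random passwords in /etc/kolla/passwords.yml
kolla-genpwd

# Prepare the target hosts, run prechecks, then deploy (all-in-one inventory)
kolla-ansible -i /usr/share/kolla-ansible/ansible/inventory/all-in-one bootstrap-servers
kolla-ansible -i /usr/share/kolla-ansible/ansible/inventory/all-in-one prechecks
kolla-ansible -i /usr/share/kolla-ansible/ansible/inventory/all-in-one deploy
```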

Trove 安装

+

Trove是OpenStack的数据库服务,如果用户使用OpenStack提供的数据库服务则推荐使用该组件。否则,可以不用安装。

+

1.设置数据库

+

数据库服务在数据库中存储信息,创建一个trove用户可以访问的trove数据库,替换TROVE_DBPASSWORD为合适的密码

+
mysql -u root -p
+
+MariaDB [(none)]> CREATE DATABASE trove CHARACTER SET utf8;
+MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'localhost' \
+IDENTIFIED BY 'TROVE_DBPASSWORD';
+MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'%' \
+IDENTIFIED BY 'TROVE_DBPASSWORD';
+

2.创建服务用户认证

+

1、创建Trove服务用户

+

openstack user create --password TROVE_PASSWORD \
                      --email trove@example.com trove
openstack role add --project service --user trove admin
openstack service create --name trove \
                         --description "Database service" database

Explanation: replace TROVE_PASSWORD with the password of the trove user.

+

2、创建Database服务访问入口

+
openstack endpoint create --region RegionOne database public http://controller:8779/v1.0/%\(tenant_id\)s
+openstack endpoint create --region RegionOne database internal http://controller:8779/v1.0/%\(tenant_id\)s
+openstack endpoint create --region RegionOne database admin http://controller:8779/v1.0/%\(tenant_id\)s
+

3.安装和配置Trove各组件

+

1. Install the Trove packages

    yum install openstack-trove python-troveclient

2. Configure trove.conf

vim /etc/trove/trove.conf
+
+[DEFAULT]
+bind_host=TROVE_NODE_IP
+log_dir = /var/log/trove
+network_driver = trove.network.neutron.NeutronDriver
+management_security_groups = <manage security group>
+nova_keypair = trove-mgmt
+default_datastore = mysql
+taskmanager_manager = trove.taskmanager.manager.Manager
+trove_api_workers = 5
+transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
+reboot_time_out = 300
+usage_timeout = 900
+agent_call_high_timeout = 1200
+use_syslog = False
+debug = True
+
+# Set these if using Neutron Networking
+network_driver=trove.network.neutron.NeutronDriver
+network_label_regex=.*
+
+
+transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
+
+[database]
+connection = mysql+pymysql://trove:TROVE_DBPASS@controller/trove
+
+[keystone_authtoken]
+project_domain_name = Default
+project_name = service
+user_domain_name = Default
+password = trove
+username = trove
+auth_url = http://controller:5000/v3/
+auth_type = password
+
+[service_credentials]
+auth_url = http://controller:5000/v3/
+region_name = RegionOne
+project_name = service
+password = trove
+project_domain_name = Default
+user_domain_name = Default
+username = trove
+
+[mariadb]
+tcp_ports = 3306,4444,4567,4568
+
+[mysql]
+tcp_ports = 3306
+
+[postgresql]
+tcp_ports = 5432
Explanation:

- bind_host in the [DEFAULT] section is the IP of the node where Trove is deployed.
- nova_compute_url and cinder_url are the endpoints created for Nova and Cinder in Keystone.
- nova_proxy_XXX is the credential of a user that can access the Nova service; the example above uses the admin user.
- transport_url is the RabbitMQ connection information; replace RABBIT_PASS with the RabbitMQ password.
- connection in the [database] section points to the database created for Trove in MySQL earlier.
- In the Trove user credentials, replace TROVE_PASS with the actual password of the trove user.

3.配置trove-guestagent.conf +

vim /etc/trove/trove-guestagent.conf
+
+[DEFAULT]
+log_file = trove-guestagent.log
+log_dir = /var/log/trove/
+ignore_users = os_admin
+control_exchange = trove
+transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
+rpc_backend = rabbit
+command_process_timeout = 60
+use_syslog = False
+debug = True
+
+[service_credentials]
+auth_url = http://controller:5000/v3/
+region_name = RegionOne
+project_name = service
+password = TROVE_PASS
+project_domain_name = Default
+user_domain_name = Default
+username = trove
+
+[mysql]
+docker_image = your-registry/your-repo/mysql
+backup_docker_image = your-registry/your-repo/db-backup-mysql:1.1.0

+

解释: guestagent是trove中一个独立组件,需要预先内置到Trove通过Nova创建的虚拟 + 机镜像中,在创建好数据库实例后,会起guestagent进程,负责通过消息队列(RabbitMQ)向Trove上 + 报心跳,因此需要配置RabbitMQ的用户和密码信息。 + 从Victoria版开始,Trove使用一个统一的镜像来跑不同类型的数据库,数据库服务运行在Guest虚拟机的Docker容器中。

+
    +
  • transport_url为RabbitMQ连接信息,RABBIT_PASS替换为RabbitMQ的密码
  • Trove的用户信息中TROVE_PASS替换为实际trove用户的密码
+

4.生成Trove数据库表

su -s /bin/sh -c "trove-manage db_sync" trove

+

4.完成安装配置

+
    +
  1. 配置Trove服务自启动 +
    systemctl enable openstack-trove-api.service \
    +openstack-trove-taskmanager.service \
    +openstack-trove-conductor.service 
  2. +
  3. 启动服务 +
    systemctl start openstack-trove-api.service \
    +openstack-trove-taskmanager.service \
    +openstack-trove-conductor.service
  4. +
+
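(可选)服务启动后,通常还需要注册datastore及其版本,并关联预先上传到Glance的Trove实例镜像。下面是一个示意性的流程,其中IMAGE_ID为占位符,具体参数请以 trove-manage --help 的输出为准:

```shell
# 注册mysql datastore(默认版本先置空)
su -s /bin/sh -c "trove-manage datastore_update mysql ''" trove
# 注册5.7版本并关联Glance镜像IMAGE_ID(占位符),最后一个参数1表示该版本可用
su -s /bin/sh -c "trove-manage datastore_version_update mysql 5.7 mysql IMAGE_ID '' 1" trove
# 将5.7设置为mysql datastore的默认版本
su -s /bin/sh -c "trove-manage datastore_update mysql 5.7" trove
```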

Swift 安装

+

Swift 提供了弹性可伸缩、高可用的分布式对象存储服务,适合存储大规模非结构化数据。

+
    +
  1. +

    创建服务凭证、API端点。

    +

    创建服务凭证

    +
    #创建swift用户:
    +openstack user create --domain default --password-prompt swift                 
    +#为swift用户添加admin角色:
    +openstack role add --project service --user swift admin                        
    +#创建swift服务实体:
    +openstack service create --name swift --description "OpenStack Object Storage" object-store                                                                   
    +

    创建swift API 端点:

    +
    openstack endpoint create --region RegionOne object-store public http://controller:8080/v1/AUTH_%\(project_id\)s                            
    +openstack endpoint create --region RegionOne object-store internal http://controller:8080/v1/AUTH_%\(project_id\)s                            
    +openstack endpoint create --region RegionOne object-store admin http://controller:8080/v1                                                  
    +
  2. +
  3. +

    安装软件包:

    +
    yum install openstack-swift-proxy python3-swiftclient python3-keystoneclient python3-keystonemiddleware memcached (CTL)
    +
  4. +
  5. +

    配置proxy-server相关配置

    +
  6. +
+

Swift RPM包里已经包含了一个基本可用的proxy-server.conf,只需要手动修改其中的ip和swift password即可。

+
***注意***
+
+**注意替换password为您在身份服务中为swift用户选择的密码**
+
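下面给出需要修改的典型字段示意(以RPM包自带的配置结构为参考,节名与字段可能随版本略有差异,请以实际文件为准;SWIFT_PASS为占位符):

```shell
[DEFAULT]
# bind_ip 改为 controller 的管理 IP
bind_ip = 192.168.0.2
bind_port = 8080

[filter:authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
project_name = service
username = swift
# 替换为在Keystone中为swift用户设置的密码
password = SWIFT_PASS
```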

4.安装和配置存储节点 (STG)

+
安装支持的程序包:
+```shell
+yum install xfsprogs rsync
+```
+
+将/dev/vdb和/dev/vdc设备格式化为 XFS
+
+```shell
+mkfs.xfs /dev/vdb
+mkfs.xfs /dev/vdc
+```
+
+创建挂载点目录结构:
+
+```shell
+mkdir -p /srv/node/vdb
+mkdir -p /srv/node/vdc
+```
+
+找到新分区的 UUID:
+
+```shell
+blkid
+```
+
+编辑/etc/fstab文件并将以下内容添加到其中:
+
+```shell
+UUID="<UUID-from-output-above>" /srv/node/vdb xfs noatime 0 2
+UUID="<UUID-from-output-above>" /srv/node/vdc xfs noatime 0 2
+```
+
+挂载设备:
+
+```shell
+mount /srv/node/vdb
+mount /srv/node/vdc
+```
+***注意***
+
+**如果用户不需要容灾功能,以上步骤只需要创建一个设备即可,同时可以跳过下面的rsync配置**
+
+(可选)创建或编辑/etc/rsyncd.conf文件以包含以下内容:
+
+```shell
+uid = swift
+gid = swift
+log file = /var/log/rsyncd.log
+pid file = /var/run/rsyncd.pid
+address = MANAGEMENT_INTERFACE_IP_ADDRESS
+
+[account]
+max connections = 2
+path = /srv/node/
+read only = False
+lock file = /var/lock/account.lock
+
+[container]
+max connections = 2
+path = /srv/node/
+read only = False
+lock file = /var/lock/container.lock
+
+[object]
+max connections = 2
+path = /srv/node/
+read only = False
+lock file = /var/lock/object.lock
+```
+**替换MANAGEMENT_INTERFACE_IP_ADDRESS为存储节点上管理网络的IP地址**
+
+启动rsyncd服务并配置它在系统启动时启动:
+
+```shell
+systemctl enable rsyncd.service
+systemctl start rsyncd.service
+```
+

5.在存储节点安装和配置组件 (STG)

+
安装软件包:
+
+```shell
+yum install openstack-swift-account openstack-swift-container openstack-swift-object
+```
+
+编辑/etc/swift目录的account-server.conf、container-server.conf和object-server.conf文件,替换bind_ip为存储节点上管理网络的IP地址。
+
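下面是一种示意性的批量修改方式(假设存储节点的管理网络IP为192.168.0.4,且配置文件中已存在未注释的bind_ip行,请按实际环境调整):

```shell
for f in account-server.conf container-server.conf object-server.conf; do
    sed -i 's/^bind_ip.*/bind_ip = 192.168.0.4/' /etc/swift/$f
done
```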
+确保挂载点目录结构的正确所有权:
+
+```shell
+chown -R swift:swift /srv/node
+```
+
+创建recon目录并确保其拥有正确的所有权:
+
+```shell
+mkdir -p /var/cache/swift
+chown -R root:swift /var/cache/swift
+chmod -R 775 /var/cache/swift
+```
+

6.创建账号环 (CTL)

+
切换到/etc/swift目录。
+
+```shell
+cd /etc/swift
+```
+
+创建基础account.builder文件:
+
+```shell
+swift-ring-builder account.builder create 10 1 1
+```
+
+将每个存储节点添加到环中:
+
+```shell
+swift-ring-builder account.builder add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6202  --device DEVICE_NAME --weight DEVICE_WEIGHT
+```
+
+**替换STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS为存储节点上管理网络的IP地址。替换DEVICE_NAME为同一存储节点上的存储设备名称**
+
+***注意 ***
+**对每个存储节点上的每个存储设备重复此命令**
+
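例如,按照本文的拓扑(存储节点管理IP为192.168.0.4,设备为vdb和vdc,权重以100为例),需要执行两次add命令:

```shell
swift-ring-builder account.builder add --region 1 --zone 1 --ip 192.168.0.4 --port 6202 --device vdb --weight 100
swift-ring-builder account.builder add --region 1 --zone 1 --ip 192.168.0.4 --port 6202 --device vdc --weight 100
```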
+验证环的内容:
+
+```shell
+swift-ring-builder account.builder
+```
+
+重新平衡环:
+
+```shell
+swift-ring-builder account.builder rebalance
+```
+

7.创建容器环 (CTL)

+
切换到`/etc/swift`目录。
+
+创建基础`container.builder`文件:
+
+```shell
+   swift-ring-builder container.builder create 10 1 1
+```
+
+将每个存储节点添加到环中:
+
+```shell
+swift-ring-builder container.builder \
+  add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6201 \
+  --device DEVICE_NAME --weight 100
+
+```
+
+**替换STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS为存储节点上管理网络的IP地址。替换DEVICE_NAME为同一存储节点上的存储设备名称**
+
+***注意***
+**对每个存储节点上的每个存储设备重复此命令**
+
+验证环的内容:
+
+```shell
+swift-ring-builder container.builder
+```
+
+重新平衡环:
+
+```shell
+swift-ring-builder container.builder rebalance
+```
+

8.创建对象环 (CTL)

+
切换到`/etc/swift`目录。
+
+创建基础`object.builder`文件:
+
+   ```shell
+   swift-ring-builder object.builder create 10 1 1
+   ```
+
+将每个存储节点添加到环中
+
+```shell
+ swift-ring-builder object.builder \
+  add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6200 \
+  --device DEVICE_NAME --weight 100
+```
+
+**替换STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS为存储节点上管理网络的IP地址。替换DEVICE_NAME为同一存储节点上的存储设备名称**
+
+***注意 ***
+**对每个存储节点上的每个存储设备重复此命令**
+
+验证环的内容:
+
+```shell
+swift-ring-builder object.builder
+```
+
+重新平衡环:
+
+```shell
+swift-ring-builder object.builder rebalance
+```
+
+分发环配置文件:
+
+将`account.ring.gz`,`container.ring.gz`以及 `object.ring.gz`文件复制到每个存储节点和运行代理服务的任何其他节点上的`/etc/swift`目录。
+
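例如,可以使用scp把环文件分发到存储节点(主机名storage已在前文的/etc/hosts中配置):

```shell
scp /etc/swift/account.ring.gz /etc/swift/container.ring.gz /etc/swift/object.ring.gz storage:/etc/swift/
```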

9.完成安装

+

编辑/etc/swift/swift.conf文件

+
[swift-hash]
+swift_hash_path_suffix = test-hash
+swift_hash_path_prefix = test-hash
+
+[storage-policy:0]
+name = Policy-0
+default = yes
+

用唯一值替换 test-hash

+
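示例:可以用openssl生成随机字符串作为前缀和后缀(仅为一种做法,生成后请妥善保存,且所有节点上必须保持一致):

```shell
openssl rand -hex 10
```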

将swift.conf文件复制到/etc/swift每个存储节点和运行代理服务的任何其他节点上的目录。

+

在所有节点上,确保配置目录的正确所有权:

+
chown -R root:swift /etc/swift
+

在控制器节点和运行代理服务的任何其他节点上,启动对象存储代理服务及其依赖项,并将它们配置为在系统启动时启动:

+
systemctl enable openstack-swift-proxy.service memcached.service
+systemctl start openstack-swift-proxy.service memcached.service
+

在存储节点上,启动对象存储服务并将它们配置为在系统启动时启动:

+
systemctl enable openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service
+
+systemctl start openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service
+
+systemctl enable openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service
+
+systemctl start openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service
+
+systemctl enable openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service
+
+systemctl start openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service
+

Cyborg 安装

+

Cyborg为OpenStack提供加速器设备的支持,包括 GPU, FPGA, ASIC, NP, SoCs, NVMe/NOF SSDs, ODP, DPDK/SPDK等等。

+

1.初始化对应数据库

+
CREATE DATABASE cyborg;
+GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'localhost' IDENTIFIED BY 'CYBORG_DBPASS';
+GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'%' IDENTIFIED BY 'CYBORG_DBPASS';
+

2.创建对应Keystone资源对象

+
$ openstack user create --domain default --password-prompt cyborg
+$ openstack role add --project service --user cyborg admin
+$ openstack service create --name cyborg --description "Acceleration Service" accelerator
+
+$ openstack endpoint create --region RegionOne \
+  accelerator public http://<cyborg-ip>:6666/v1
+$ openstack endpoint create --region RegionOne \
+  accelerator internal http://<cyborg-ip>:6666/v1
+$ openstack endpoint create --region RegionOne \
+  accelerator admin http://<cyborg-ip>:6666/v1
+

3.安装Cyborg

+
yum install openstack-cyborg
+

4.配置Cyborg

+

修改/etc/cyborg/cyborg.conf

+
[DEFAULT]
+transport_url = rabbit://%RABBITMQ_USER%:%RABBITMQ_PASSWORD%@%OPENSTACK_HOST_IP%:5672/
+use_syslog = False
+state_path = /var/lib/cyborg
+debug = True
+
+[database]
+connection = mysql+pymysql://%DATABASE_USER%:%DATABASE_PASSWORD%@%OPENSTACK_HOST_IP%/cyborg
+
+[service_catalog]
+project_domain_id = default
+user_domain_id = default
+project_name = service
+password = PASSWORD
+username = cyborg
+auth_url = http://%OPENSTACK_HOST_IP%/identity
+auth_type = password
+
+[placement]
+project_domain_name = Default
+project_name = service
+user_domain_name = Default
+password = PASSWORD
+username = placement
+auth_url = http://%OPENSTACK_HOST_IP%/identity
+auth_type = password
+
+[keystone_authtoken]
+memcached_servers = localhost:11211
+project_domain_name = Default
+project_name = service
+user_domain_name = Default
+password = PASSWORD
+username = cyborg
+auth_url = http://%OPENSTACK_HOST_IP%/identity
+auth_type = password
+

自行修改对应的用户名、密码、IP等信息

+

5.同步数据库表格

+
cyborg-dbsync --config-file /etc/cyborg/cyborg.conf upgrade
+

6.启动Cyborg服务

+
systemctl enable openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent
+systemctl start openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent
+
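(可选)服务启动后,可以简单确认API是否可访问。OpenStack各服务的API根路径一般会返回版本信息,例如(将<cyborg-ip>替换为实际部署IP,仅为示意):

```shell
curl http://<cyborg-ip>:6666/
```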

Aodh 安装

+

1.创建数据库

+
CREATE DATABASE aodh;
+
+GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'localhost' IDENTIFIED BY 'AODH_DBPASS';
+
+GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'%' IDENTIFIED BY 'AODH_DBPASS';
+

2.创建对应Keystone资源对象

+
openstack user create --domain default --password-prompt aodh
+
+openstack role add --project service --user aodh admin
+
+openstack service create --name aodh --description "Telemetry" alarming
+
+openstack endpoint create --region RegionOne alarming public http://controller:8042
+
+openstack endpoint create --region RegionOne alarming internal http://controller:8042
+
+openstack endpoint create --region RegionOne alarming admin http://controller:8042
+

3.安装Aodh

+
yum install openstack-aodh-api openstack-aodh-evaluator openstack-aodh-notifier openstack-aodh-listener openstack-aodh-expirer python3-aodhclient
+

注意

+

aodh依赖的软件包python3-pyparsing在openEuler的OS仓不适配,需要覆盖安装OpenStack对应版本。可以使用 yum list | grep pyparsing | grep OpenStack | awk '{print $2}' 获取对应的版本 VERSION,然后执行 yum install -y python3-pyparsing-VERSION 覆盖安装适配的pyparsing。
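例如,可以把上面两步合并执行(示意,版本号以实际仓库查询结果为准):

```shell
VERSION=$(yum list | grep pyparsing | grep OpenStack | awk '{print $2}' | head -n 1)
yum install -y python3-pyparsing-$VERSION
```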

+

4.修改配置文件

+
[database]
+connection = mysql+pymysql://aodh:AODH_DBPASS@controller/aodh
+
+[DEFAULT]
+transport_url = rabbit://openstack:RABBIT_PASS@controller
+auth_strategy = keystone
+
+[keystone_authtoken]
+www_authenticate_uri = http://controller:5000
+auth_url = http://controller:5000
+memcached_servers = controller:11211
+auth_type = password
+project_domain_id = default
+user_domain_id = default
+project_name = service
+username = aodh
+password = AODH_PASS
+
+[service_credentials]
+auth_type = password
+auth_url = http://controller:5000/v3
+project_domain_id = default
+user_domain_id = default
+project_name = service
+username = aodh
+password = AODH_PASS
+interface = internalURL
+region_name = RegionOne
+

5.初始化数据库

+
aodh-dbsync
+

6.启动Aodh服务

+
systemctl enable openstack-aodh-api.service openstack-aodh-evaluator.service openstack-aodh-notifier.service openstack-aodh-listener.service
+
+systemctl start openstack-aodh-api.service openstack-aodh-evaluator.service openstack-aodh-notifier.service openstack-aodh-listener.service
+
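(可选)服务启动后,可以用前面安装的python3-aodhclient做一个简单验证,新部署环境正常时返回空的告警列表:

```shell
# 需要已导入admin凭证(环境变量)
openstack alarm list
```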

Gnocchi 安装

+

1.创建数据库

+
CREATE DATABASE gnocchi;
+
+GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'localhost' IDENTIFIED BY 'GNOCCHI_DBPASS';
+
+GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'%' IDENTIFIED BY 'GNOCCHI_DBPASS';
+

2.创建对应Keystone资源对象

+
openstack user create --domain default --password-prompt gnocchi
+
+openstack role add --project service --user gnocchi admin
+
+openstack service create --name gnocchi --description "Metric Service" metric
+
+openstack endpoint create --region RegionOne metric public http://controller:8041
+
+openstack endpoint create --region RegionOne metric internal http://controller:8041
+
+openstack endpoint create --region RegionOne metric admin http://controller:8041
+

3.安装Gnocchi

+
yum install openstack-gnocchi-api openstack-gnocchi-metricd python3-gnocchiclient
+

4.修改配置文件/etc/gnocchi/gnocchi.conf

+
[api]
+auth_mode = keystone
+port = 8041
+uwsgi_mode = http-socket
+
+[keystone_authtoken]
+auth_type = password
+auth_url = http://controller:5000/v3
+project_domain_name = Default
+user_domain_name = Default
+project_name = service
+username = gnocchi
+password = GNOCCHI_PASS
+interface = internalURL
+region_name = RegionOne
+
+[indexer]
+url = mysql+pymysql://gnocchi:GNOCCHI_DBPASS@controller/gnocchi
+
+[storage]
+# coordination_url is not required but specifying one will improve
+# performance with better workload division across workers.
+coordination_url = redis://controller:6379
+file_basepath = /var/lib/gnocchi
+driver = file
+

5.初始化数据库

+
gnocchi-upgrade
+

6.启动Gnocchi服务

+
systemctl enable openstack-gnocchi-api.service openstack-gnocchi-metricd.service
+
+systemctl start openstack-gnocchi-api.service openstack-gnocchi-metricd.service
+
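(可选)服务启动后,若认证环境变量已就绪,可以用前面安装的python3-gnocchiclient简单检查服务状态(示例命令,输出以实际版本为准):

```shell
# 需要已导入admin凭证(环境变量)
gnocchi status
```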

Ceilometer 安装

+

1.创建对应Keystone资源对象

+
openstack user create --domain default --password-prompt ceilometer
+
+openstack role add --project service --user ceilometer admin
+
+openstack service create --name ceilometer --description "Telemetry" metering
+

2.安装Ceilometer

+
yum install openstack-ceilometer-notification openstack-ceilometer-central
+

3.修改配置文件/etc/ceilometer/pipeline.yaml

+
publishers:
+    # set address of Gnocchi
+    # + filter out Gnocchi-related activity meters (Swift driver)
+    # + set default archive policy
+    - gnocchi://?filter_project=service&archive_policy=low
+

4.修改配置文件/etc/ceilometer/ceilometer.conf

+
[DEFAULT]
+transport_url = rabbit://openstack:RABBIT_PASS@controller
+
+[service_credentials]
+auth_type = password
+auth_url = http://controller:5000/v3
+project_domain_id = default
+user_domain_id = default
+project_name = service
+username = ceilometer
+password = CEILOMETER_PASS
+interface = internalURL
+region_name = RegionOne
+

5.初始化数据库

+
ceilometer-upgrade
+

6.启动Ceilometer服务

+
systemctl enable openstack-ceilometer-notification.service openstack-ceilometer-central.service
+
+systemctl start openstack-ceilometer-notification.service openstack-ceilometer-central.service
+

Heat 安装

+

1.创建heat数据库,并授予heat数据库正确的访问权限,替换HEAT_DBPASS为合适的密码

+
CREATE DATABASE heat;
+GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'localhost' IDENTIFIED BY 'HEAT_DBPASS';
+GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'%' IDENTIFIED BY 'HEAT_DBPASS';
+

2.创建服务凭证,创建heat用户,并为其增加admin角色

+
openstack user create --domain default --password-prompt heat
+openstack role add --project service --user heat admin
+

3.创建heat和heat-cfn服务及其对应的API端点

+
openstack service create --name heat --description "Orchestration" orchestration
+openstack service create --name heat-cfn --description "Orchestration"  cloudformation
+openstack endpoint create --region RegionOne orchestration public http://controller:8004/v1/%\(tenant_id\)s
+openstack endpoint create --region RegionOne orchestration internal http://controller:8004/v1/%\(tenant_id\)s
+openstack endpoint create --region RegionOne orchestration admin http://controller:8004/v1/%\(tenant_id\)s
+openstack endpoint create --region RegionOne cloudformation public http://controller:8000/v1
+openstack endpoint create --region RegionOne cloudformation internal http://controller:8000/v1
+openstack endpoint create --region RegionOne cloudformation admin http://controller:8000/v1
+

4.创建stack管理所需的额外信息,包括heat domain及其admin用户heat_domain_admin,以及heat_stack_owner和heat_stack_user角色

+
openstack user create --domain heat --password-prompt heat_domain_admin
+openstack role add --domain heat --user-domain heat --user heat_domain_admin admin
+openstack role create heat_stack_owner
+openstack role create heat_stack_user
+

5.安装软件包

+
yum install openstack-heat-api openstack-heat-api-cfn openstack-heat-engine
+

6.修改配置文件/etc/heat/heat.conf

+
[DEFAULT]
+transport_url = rabbit://openstack:RABBIT_PASS@controller
+heat_metadata_server_url = http://controller:8000
+heat_waitcondition_server_url = http://controller:8000/v1/waitcondition
+stack_domain_admin = heat_domain_admin
+stack_domain_admin_password = HEAT_DOMAIN_PASS
+stack_user_domain_name = heat
+
+[database]
+connection = mysql+pymysql://heat:HEAT_DBPASS@controller/heat
+
+[keystone_authtoken]
+www_authenticate_uri = http://controller:5000
+auth_url = http://controller:5000
+memcached_servers = controller:11211
+auth_type = password
+project_domain_name = default
+user_domain_name = default
+project_name = service
+username = heat
+password = HEAT_PASS
+
+[trustee]
+auth_type = password
+auth_url = http://controller:5000
+username = heat
+password = HEAT_PASS
+user_domain_name = default
+
+[clients_keystone]
+auth_uri = http://controller:5000
+

7.初始化heat数据库表

+
su -s /bin/sh -c "heat-manage db_sync" heat
+

8.启动服务

+
systemctl enable openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service
+systemctl start openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service
+
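(可选)如环境中安装了python3-heatclient,可以确认heat-engine是否已成功注册(仅为验证示例):

```shell
# 需要已导入admin凭证(环境变量)
openstack orchestration service list
```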

基于OpenStack SIG开发工具oos快速部署

+

oos(openEuler OpenStack SIG)是OpenStack SIG提供的命令行工具。其中oos env系列命令提供了一键部署OpenStack (all in one或三节点cluster)的ansible脚本,用户可以使用该脚本快速部署一套基于 openEuler RPM 的 OpenStack 环境。oos工具支持对接云provider(目前仅支持华为云provider)和主机纳管两种方式来部署 OpenStack 环境,下面以对接华为云部署一套all in one的OpenStack环境为例说明oos工具的使用方法。

+
    +
  1. +

    安装oos工具

    +
    yum install openstack-sig-tool
    +
  2. +
  3. +

    配置对接华为云provider的信息

    +

    打开/usr/local/etc/oos/oos.conf文件,修改配置为您拥有的华为云资源信息:

    +
    [huaweicloud]
    +ak = 
    +sk = 
    +region = ap-southeast-3
    +root_volume_size = 100
    +data_volume_size = 100
    +security_group_name = oos
    +image_format = openEuler-%%(release)s-%%(arch)s
    +vpc_name = oos_vpc
    +subnet1_name = oos_subnet1
    +subnet2_name = oos_subnet2
    +
  4. +
  5. +

    配置 OpenStack 环境信息

    +

    打开/usr/local/etc/oos/oos.conf文件,根据当前机器环境和需求修改配置。内容如下:

    +
    [environment]
    +mysql_root_password = root
    +mysql_project_password = root
    +rabbitmq_password = root
    +project_identity_password = root
    +enabled_service = keystone,neutron,cinder,placement,nova,glance,horizon,aodh,ceilometer,cyborg,gnocchi,kolla,heat,swift,trove,tempest
    +neutron_provider_interface_name = br-ex
    +default_ext_subnet_range = 10.100.100.0/24
    +default_ext_subnet_gateway = 10.100.100.1
    +neutron_dataplane_interface_name = eth1
    +cinder_block_device = vdb
    +swift_storage_devices = vdc
    +swift_hash_path_suffix = ash
    +swift_hash_path_prefix = has
    +glance_api_workers = 2
    +cinder_api_workers = 2
    +nova_api_workers = 2
    +nova_metadata_api_workers = 2
    +nova_conductor_workers = 2
    +nova_scheduler_workers = 2
    +neutron_api_workers = 2
    +horizon_allowed_host = *
    +kolla_openeuler_plugin = false
    +

    关键配置

    | 配置项 | 解释 |
    |:--|:--|
    | enabled_service | 安装服务列表,根据用户需求自行删减 |
    | neutron_provider_interface_name | neutron L3网桥名称 |
    | default_ext_subnet_range | neutron私网IP段 |
    | default_ext_subnet_gateway | neutron私网gateway |
    | neutron_dataplane_interface_name | neutron使用的网卡,推荐使用一张新的网卡,以免和现有网卡冲突,防止all in one主机断连的情况 |
    | cinder_block_device | cinder使用的卷设备名 |
    | swift_storage_devices | swift使用的卷设备名 |
    | kolla_openeuler_plugin | 是否启用kolla plugin。设置为True,kolla将支持部署openEuler容器 |
    +
  6. +
  7. +

    华为云上面创建一台openEuler 24.03-LTS-SP1的x86_64虚拟机,用于部署all in one 的 OpenStack

    +
    # sshpass在`oos env create`过程中被使用,用于配置对目标虚拟机的免密访问
    +dnf install sshpass
    +oos env create -r 24.03-lts-sp1 -f small -a x86 -n test-oos all_in_one
    +

    具体的参数可以使用oos env create --help命令查看

    +
  8. +
  9. +

    部署OpenStack all in one 环境

    +

    oos env setup test-oos -r wallaby
    +具体的参数可以使用oos env setup --help命令查看

    +
  10. +
  11. +

    初始化tempest环境

    +

    如果用户想使用该环境运行tempest测试的话,可以执行命令oos env init,会自动把tempest需要的OpenStack资源自动创建好

    +
    oos env init test-oos
    +

    命令执行成功后,在用户的根目录下会生成mytest目录,进入其中就可以执行tempest run命令了。

    +
  12. +
+

如果是以主机纳管的方式部署 OpenStack 环境,总体逻辑与上文对接华为云时一致,1、3、5、6步操作不变,去除第2步对华为云provider信息的配置,第4步由在华为云上创建虚拟机改为纳管主机操作。

+
# sshpass在`oos env create`过程中被使用,用于配置对目标主机的免密访问
+dnf install sshpass
+oos env manage -r 24.03-lts-sp1 -i TARGET_MACHINE_IP -p TARGET_MACHINE_PASSWD -n test-oos
+

替换TARGET_MACHINE_IP为目标机ip、TARGET_MACHINE_PASSWD为目标机密码。具体的参数可以使用oos env manage --help命令查看。

diff --git a/site/install/openEuler-24.03-LTS-SP2/OpenStack-antelope/index.html b/site/install/openEuler-24.03-LTS-SP2/OpenStack-antelope/index.html new file mode 100644 index 0000000000000000000000000000000000000000..0b35f5ef8a1551e4dbdeeabcf644b60738011a3d --- /dev/null +++ b/site/install/openEuler-24.03-LTS-SP2/OpenStack-antelope/index.html @@ -0,0 +1,3365 @@
+openEuler-24.03-LTS-SP2_Antelope - OpenStack SIG Doc

OpenStack Antelope 部署指南

+ +

本文档是 openEuler OpenStack SIG 编写的基于 openEuler 24.03 LTS SP2 的 OpenStack 部署指南,内容由 SIG 贡献者提供。在阅读过程中,如果您有任何疑问或者发现任何问题,请联系SIG维护人员,或者直接提交issue。

+

约定

+

本章节描述文档中的一些通用约定。

| 名称 | 定义 |
|:--|:--|
| RABBIT_PASS | rabbitmq的密码,由用户设置,在OpenStack各个服务配置中使用 |
| CINDER_PASS | cinder服务keystone用户的密码,在cinder配置中使用 |
| CINDER_DBPASS | cinder服务数据库密码,在cinder配置中使用 |
| KEYSTONE_DBPASS | keystone服务数据库密码,在keystone配置中使用 |
| GLANCE_PASS | glance服务keystone用户的密码,在glance配置中使用 |
| GLANCE_DBPASS | glance服务数据库密码,在glance配置中使用 |
| HEAT_PASS | 在keystone注册的heat用户密码,在heat配置中使用 |
| HEAT_DBPASS | heat服务数据库密码,在heat配置中使用 |
| CYBORG_PASS | 在keystone注册的cyborg用户密码,在cyborg配置中使用 |
| CYBORG_DBPASS | cyborg服务数据库密码,在cyborg配置中使用 |
| NEUTRON_PASS | 在keystone注册的neutron用户密码,在neutron配置中使用 |
| NEUTRON_DBPASS | neutron服务数据库密码,在neutron配置中使用 |
| PROVIDER_INTERFACE_NAME | 物理网络接口的名称,在neutron配置中使用 |
| OVERLAY_INTERFACE_IP_ADDRESS | Controller控制节点的管理ip地址,在neutron配置中使用 |
| METADATA_SECRET | metadata proxy的secret密码,在nova和neutron配置中使用 |
| PLACEMENT_DBPASS | placement服务数据库密码,在placement配置中使用 |
| PLACEMENT_PASS | 在keystone注册的placement用户密码,在placement配置中使用 |
| NOVA_DBPASS | nova服务数据库密码,在nova配置中使用 |
| NOVA_PASS | 在keystone注册的nova用户密码,在nova、cyborg、neutron等配置中使用 |
| IRONIC_DBPASS | ironic服务数据库密码,在ironic配置中使用 |
| IRONIC_PASS | 在keystone注册的ironic用户密码,在ironic配置中使用 |
| IRONIC_INSPECTOR_DBPASS | ironic-inspector服务数据库密码,在ironic-inspector配置中使用 |
| IRONIC_INSPECTOR_PASS | 在keystone注册的ironic-inspector用户密码,在ironic-inspector配置中使用 |
+

OpenStack SIG 提供了多种基于 openEuler 部署 OpenStack 的方法,以满足不同的用户场景,请按需选择。

+

基于RPM部署

+

环境准备

+

本文档基于OpenStack经典的三节点环境进行部署,三个节点分别是控制节点(Controller)、计算节点(Compute)、存储节点(Storage),其中存储节点一般只部署存储服务,在资源有限的情况下,可以不单独部署该节点,把存储节点上的服务部署到计算节点即可。

+

首先准备三个openEuler 24.03 LTS SP2环境,根据您的环境,下载对应的镜像并安装即可:ISO镜像、qcow2镜像。

+

下面的安装按照如下拓扑进行: +

controller:192.168.0.2
+compute:   192.168.0.3
+storage:   192.168.0.4
+如果您的环境IP不同,请按照您的环境IP修改相应的配置文件。

+

本文档的三节点服务拓扑如下图所示(只包含Keystone、Glance、Nova、Cinder、Neutron这几个核心服务,其他服务请参考具体部署章节):

+

(三节点服务拓扑图:topology1、topology2、topology3)

+

在正式部署之前,需要对每个节点做如下配置和检查:

+
    +
  1. +

    配置 openEuler 24.03 LTS SP2 官方 yum 源,需要启用 EPOL 软件仓以支持 OpenStack

    +
    yum update
    +yum install openstack-release-antelope
    +yum clean all && yum makecache
    +

    注意:如果你的环境的YUM源没有启用EPOL,需要同时配置EPOL,确保EPOL已配置,如下所示。

    +
    cat << EOF >> /etc/yum.repos.d/openEuler.repo
    +
    +[EPOL]
    +name=EPOL
    +baseurl=http://repo.openeuler.org/openEuler-24.03-LTS-SP2/EPOL/main/$basearch/
    +enabled=1
    +gpgcheck=1
    +gpgkey=http://repo.openeuler.org/openEuler-24.03-LTS-SP2/OS/$basearch/RPM-GPG-KEY-openEuler
    +EOF
    +
  2. +
  3. +

    修改主机名以及映射

    +

    每个节点分别修改主机名,以controller为例:

    +
    hostnamectl set-hostname controller
    +
    +vi /etc/hostname
    +内容修改为controller
    +

    然后修改每个节点的/etc/hosts文件,新增如下内容:

    +
    192.168.0.2   controller
    +192.168.0.3   compute
    +192.168.0.4   storage
    +
  4. +
+

时钟同步

+

集群环境时刻要求每个节点的时间一致,一般由时钟同步软件保证。本文使用chrony软件。步骤如下:

+

Controller节点

+
    +
  1. 安装服务 +
    dnf install chrony
  2. +
  3. 修改/etc/chrony.conf配置文件,新增一行 +
    # 表示允许哪些IP从本节点同步时钟
    +allow 192.168.0.0/24
  4. +
  5. 重启服务 +
    systemctl restart chronyd
  6. +
+

其他节点

+
    +
  1. +

    安装服务 +

    dnf install chrony

    +
  2. +
  3. +

    修改/etc/chrony.conf配置文件,新增一行

    +
    # NTP_SERVER是controller IP,表示从这个机器获取时间,这里我们填192.168.0.2,或者在`/etc/hosts`里配置好的controller名字即可。
    +server NTP_SERVER iburst
    +

    同时,要把pool pool.ntp.org iburst这一行注释掉,表示不从公网同步时钟。

    +
  4. +
  5. +

    重启服务

    +
    systemctl restart chronyd
    +
  6. +
+

配置完成后,检查一下结果,在其他非controller节点执行chronyc sources,返回结果类似如下内容,表示成功从controller同步时钟。

+
MS Name/IP address         Stratum Poll Reach LastRx Last sample
+===============================================================================
+^* 192.168.0.2                 4   6     7     0  -1406ns[  +55us] +/-   16ms
+

安装数据库

+

数据库安装在控制节点,这里推荐使用mariadb。

+
    +
  1. +

    安装软件包

    +
    dnf install mariadb-config mariadb mariadb-server python3-PyMySQL
    +
  2. +
  3. +

    新增配置文件/etc/my.cnf.d/openstack.cnf,内容如下

    +
    [mysqld]
    +bind-address = 192.168.0.2
    +default-storage-engine = innodb
    +innodb_file_per_table = on
    +max_connections = 4096
    +collation-server = utf8_general_ci
    +character-set-server = utf8
    +
  4. +
  5. +

    启动服务器

    +
    systemctl start mariadb
    +
  6. +
  7. +

    初始化数据库,根据提示进行即可

    +
    mysql_secure_installation
    +

    示例如下:

    +
    NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MariaDB
    +    SERVERS IN PRODUCTION USE!  PLEASE READ EACH STEP CAREFULLY!
    +
    +In order to log into MariaDB to secure it, we'll need the current
    +password for the root user. If you've just installed MariaDB, and
    +haven't set the root password yet, you should just press enter here.
    +
    +Enter current password for root (enter for none): 
    +
    +#这里输入密码,由于我们是初始化DB,直接回车就行
    +
    +OK, successfully used password, moving on...
    +
    +Setting the root password or using the unix_socket ensures that nobody
    +can log into the MariaDB root user without the proper authorisation.
    +
    +You already have your root account protected, so you can safely answer 'n'.
    +
    +# 这里根据提示输入N
    +
    +Switch to unix_socket authentication [Y/n] N
    +
    +Enabled successfully!
    +Reloading privilege tables..
    +... Success!
    +
    +
    +You already have your root account protected, so you can safely answer 'n'.
    +
    +# 输入Y,修改密码
    +
    +Change the root password? [Y/n] Y
    +
    +New password: 
    +Re-enter new password: 
    +Password updated successfully!
    +Reloading privilege tables..
    +... Success!
    +
    +
    +By default, a MariaDB installation has an anonymous user, allowing anyone
    +to log into MariaDB without having to have a user account created for
    +them.  This is intended only for testing, and to make the installation
    +go a bit smoother.  You should remove them before moving into a
    +production environment.
    +
    +# 输入Y,删除匿名用户
    +
    +Remove anonymous users? [Y/n] Y
    +... Success!
    +
    +Normally, root should only be allowed to connect from 'localhost'.  This
    +ensures that someone cannot guess at the root password from the network.
    +
    +# 输入Y,关闭root远程登录权限
    +
    +Disallow root login remotely? [Y/n] Y
    +... Success!
    +
    +By default, MariaDB comes with a database named 'test' that anyone can
    +access.  This is also intended only for testing, and should be removed
    +before moving into a production environment.
    +
    +# 输入Y,删除test数据库
    +
    +Remove test database and access to it? [Y/n] Y
    +- Dropping test database...
    +... Success!
    +- Removing privileges on test database...
    +... Success!
    +
    +Reloading the privilege tables will ensure that all changes made so far
    +will take effect immediately.
    +
    +# 输入Y,重载配置
    +
    +Reload privilege tables now? [Y/n] Y
    +... Success!
    +
    +Cleaning up...
    +
    +All done!  If you've completed all of the above steps, your MariaDB
    +installation should now be secure.
    +
  8. +
  9. +

    验证,根据第四步设置的密码,检查是否能登录mariadb

    +
    mysql -uroot -p
    +
  10. +
+

安装消息队列

+

消息队列安装在控制节点,这里推荐使用rabbitmq。

+
    +
  1. 安装软件包 +
    dnf install rabbitmq-server
  2. +
  3. 启动服务 +
    systemctl start rabbitmq-server
  4. +
  5. 配置openstack用户,RABBIT_PASS是openstack服务登录消息队里的密码,需要和后面各个服务的配置保持一致。 +
    rabbitmqctl add_user openstack RABBIT_PASS
    +rabbitmqctl set_permissions openstack ".*" ".*" ".*"
  6. +
+

安装缓存服务

+

消息队列安装在控制节点,这里推荐使用Memcached。

+
    +
  1. 安装软件包 +
    dnf install memcached python3-memcached
  2. +
  3. 修改配置文件/etc/sysconfig/memcached +
    OPTIONS="-l 127.0.0.1,::1,controller"
  4. +
  5. 启动服务 +
    systemctl start memcached
  6. +
+

部署服务

+

Keystone

+

Keystone是OpenStack提供的鉴权服务,是整个OpenStack的入口,提供了租户隔离、用户认证、服务发现等功能,必须安装。

+
    +
  1. +

    创建 keystone 数据库并授权

    +
    mysql -u root -p
    +
    +MariaDB [(none)]> CREATE DATABASE keystone;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
    +IDENTIFIED BY 'KEYSTONE_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
    +IDENTIFIED BY 'KEYSTONE_DBPASS';
    +MariaDB [(none)]> exit
    +

    注意

    +

    替换 KEYSTONE_DBPASS,为 Keystone 数据库设置密码

    +
  2. +
  3. +

    安装软件包

    +
    dnf install openstack-keystone httpd mod_wsgi
    +
  4. +
  5. +

    配置keystone相关配置

    +
    vim /etc/keystone/keystone.conf
    +
    +[database]
    +connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone
    +
    +[token]
    +provider = fernet
    +

    解释

    +

    [database]部分,配置数据库入口

    +

    [token]部分,配置token provider

    +
  6. +
  7. +

    同步数据库

    +
    su -s /bin/sh -c "keystone-manage db_sync" keystone
    +
  8. +
  9. +

    初始化Fernet密钥仓库

    +
    keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
    +keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
    +
  10. +
  11. +

    启动服务

    +
    keystone-manage bootstrap --bootstrap-password ADMIN_PASS \
    +--bootstrap-admin-url http://controller:5000/v3/ \
    +--bootstrap-internal-url http://controller:5000/v3/ \
    +--bootstrap-public-url http://controller:5000/v3/ \
    +--bootstrap-region-id RegionOne
    +

    注意

    +

    替换 ADMIN_PASS,为 admin 用户设置密码

    +
  12. +
  13. +

    配置Apache HTTP server

    +
  14. +
  15. +

    打开httpd.conf并配置

    +
    #需要修改的配置文件路径
    +vim /etc/httpd/conf/httpd.conf
    +
    +#修改以下项,如果没有则新添加
    +ServerName controller
    +
  16. +
  17. +

    创建软链接

    +
    ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
    +

    解释

    +

    配置 ServerName 项引用控制节点

    +

    注意:如果 ServerName 项不存在则需要创建

    +
  18. +
  19. +

    启动Apache HTTP服务

    +
    systemctl enable httpd.service
    +systemctl start httpd.service
    +
  20. +
  21. +

    创建环境变量配置

    +
    cat << EOF >> ~/.admin-openrc
    +export OS_PROJECT_DOMAIN_NAME=Default
    +export OS_USER_DOMAIN_NAME=Default
    +export OS_PROJECT_NAME=admin
    +export OS_USERNAME=admin
    +export OS_PASSWORD=ADMIN_PASS
    +export OS_AUTH_URL=http://controller:5000/v3
    +export OS_IDENTITY_API_VERSION=3
    +export OS_IMAGE_API_VERSION=2
    +EOF
    +

    注意

    +

    替换 ADMIN_PASS 为 admin 用户的密码

    +
  22. +
  23. +

    依次创建domain, projects, users, roles

    +
      +
    • +

      需要先安装python3-openstackclient

      +
      dnf install python3-openstackclient
      +
    • +
    • +

      导入环境变量

      +
      source ~/.admin-openrc
      +
    • +
    • +

      创建project service,其中 domain default 在 keystone-manage bootstrap 时已创建

      +
      openstack domain create --description "An Example Domain" example
      +
      openstack project create --domain default --description "Service Project" service
      +
    • +
    • +

      创建(non-admin)project myproject,user myuser 和 role myrole,为 myprojectmyuser 添加角色myrole

      +
      openstack project create --domain default --description "Demo Project" myproject
      +openstack user create --domain default --password-prompt myuser
      +openstack role create myrole
      +openstack role add --project myproject --user myuser myrole
      +
    • +
    +
  24. +
  25. +

    验证

    +
      +
    • +

      取消临时环境变量OS_AUTH_URL和OS_PASSWORD:

      +
      source ~/.admin-openrc
      +unset OS_AUTH_URL OS_PASSWORD
      +
    • +
    • +

      为admin用户请求token:

      +
      openstack --os-auth-url http://controller:5000/v3 \
      +--os-project-domain-name Default --os-user-domain-name Default \
      +--os-project-name admin --os-username admin token issue
      +
    • +
    • +

      为myuser用户请求token:

      +
      openstack --os-auth-url http://controller:5000/v3 \
      +--os-project-domain-name Default --os-user-domain-name Default \
      +--os-project-name myproject --os-username myuser token issue
      +
    • +
    +
  26. +
+

Glance

+

Glance是OpenStack提供的镜像服务,负责虚拟机、裸机镜像的上传与下载,必须安装。

+

Controller节点

+
    +
  1. +

    创建 glance 数据库并授权

    +
    mysql -u root -p
    +
    +MariaDB [(none)]> CREATE DATABASE glance;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
    +IDENTIFIED BY 'GLANCE_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
    +IDENTIFIED BY 'GLANCE_DBPASS';
    +MariaDB [(none)]> exit
    +

    注意:

    +

    替换 GLANCE_DBPASS,为 glance 数据库设置密码

    +
  2. +
  3. +

    初始化 glance 资源对象

    +
  4. +
  5. +

    导入环境变量

    +
    source ~/.admin-openrc
    +
  6. +
  7. +

    创建用户时,命令行会提示输入密码,请输入自定义的密码,下文涉及到GLANCE_PASS的地方替换成该密码即可。

    +
    openstack user create --domain default --password-prompt glance
    +User Password:
    +Repeat User Password:
    +
  8. +
  9. +

    添加glance用户到service project并指定admin角色:

    +
    openstack role add --project service --user glance admin
    +
  10. +
  11. +

    创建glance服务实体:

    +
    openstack service create --name glance --description "OpenStack Image" image
    +
  12. +
  13. +

    创建glance API服务:

    +
    openstack endpoint create --region RegionOne image public http://controller:9292
    +openstack endpoint create --region RegionOne image internal http://controller:9292
    +openstack endpoint create --region RegionOne image admin http://controller:9292
    +
  14. +
  15. +

    安装软件包

    +
    dnf install openstack-glance
    +
  16. +
  17. +

    修改 glance 配置文件

    +
    vim /etc/glance/glance-api.conf
    +
    +[database]
    +connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
    +
    +[keystone_authtoken]
    +www_authenticate_uri  = http://controller:5000
    +auth_url = http://controller:5000
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +project_name = service
    +username = glance
    +password = GLANCE_PASS
    +
    +[paste_deploy]
    +flavor = keystone
    +
    +[glance_store]
    +stores = file,http
    +default_store = file
    +filesystem_store_datadir = /var/lib/glance/images/
    +

    解释:

    +

    [database]部分,配置数据库入口

    +

    [keystone_authtoken] [paste_deploy]部分,配置身份认证服务入口

    +

    [glance_store]部分,配置本地文件系统存储和镜像文件的位置

    +
  18. +
  19. +

    同步数据库

    +
    su -s /bin/sh -c "glance-manage db_sync" glance
    +
  20. +
  21. +

    启动服务:

    +
    systemctl enable openstack-glance-api.service
    +systemctl start openstack-glance-api.service
    +
  22. +
  23. +

    验证

    +
      +
    • +

      导入环境变量 +

      source ~/.admin-openrc

      +
    • +
    • +

      下载镜像

      +
      x86镜像下载:
      +wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
      +
      +arm镜像下载:
      +wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-aarch64-disk.img
      +

      注意

      +

      如果您使用的环境是鲲鹏架构,请下载aarch64版本的镜像;已对镜像cirros-0.5.2-aarch64-disk.img进行测试。

      +
    • +
    • +

      向Image服务上传镜像:

      +
      openstack image create --disk-format qcow2 --container-format bare \
      +                    --file cirros-0.4.0-x86_64-disk.img --public cirros
      +
    • +
    • +

      确认镜像上传并验证属性:

      +
      openstack image list
      +
    • +
    +
  24. +
+

Placement

+

Placement是OpenStack提供的资源调度组件,一般不面向用户,由Nova等组件调用,安装在控制节点。

+

安装、配置Placement服务前,需要先创建相应的数据库、服务凭证和API endpoints。

+
    +
  1. +

    创建数据库

    +
      +
    • +

      使用root用户访问数据库服务:

      +
      mysql -u root -p
      +
    • +
    • +

      创建placement数据库:

      +
      MariaDB [(none)]> CREATE DATABASE placement;
      +
    • +
    • +

      授权数据库访问:

      +
      MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' \
      +  IDENTIFIED BY 'PLACEMENT_DBPASS';
      +MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' \
      +  IDENTIFIED BY 'PLACEMENT_DBPASS';
      +

      替换PLACEMENT_DBPASS为placement数据库访问密码。

      +
    • +
    • +

      退出数据库访问客户端:

      +
      exit
      +
    • +
    +
  2. +
  3. +

    配置用户和Endpoints

    +
      +
    • +

      source admin凭证,以获取admin命令行权限:

      +
      source ~/.admin-openrc
      +
    • +
    • +

      创建placement用户并设置用户密码:

      +
      openstack user create --domain default --password-prompt placement
      +
      +User Password:
      +Repeat User Password:
      +
    • +
    • +

      添加placement用户到service project并指定admin角色:

      +
      openstack role add --project service --user placement admin
      +
    • +
    • +

      创建placement服务实体:

      +
      openstack service create --name placement \
      +  --description "Placement API" placement
      +
    • +
    • +

      创建Placement API服务endpoints:

      +
      openstack endpoint create --region RegionOne \
      +  placement public http://controller:8778
      +openstack endpoint create --region RegionOne \
      +  placement internal http://controller:8778
      +openstack endpoint create --region RegionOne \
      +  placement admin http://controller:8778
      +
    • +
    +
  4. +
  5. +

    安装及配置组件

    +
      +
    • +

      安装软件包:

      +
      dnf install openstack-placement-api
      +
    • +
    • +

      编辑/etc/placement/placement.conf配置文件,完成如下操作:

      +
        +
      • +

        [placement_database]部分,配置数据库入口:

        +
        [placement_database]
        +connection = mysql+pymysql://placement:PLACEMENT_DBPASS@controller/placement
        +

        替换PLACEMENT_DBPASS为placement数据库的密码。

        +
      • +
      • +

        [api][keystone_authtoken]部分,配置身份认证服务入口:

        +
        [api]
        +auth_strategy = keystone
        +
        +[keystone_authtoken]
        +auth_url = http://controller:5000/v3
        +memcached_servers = controller:11211
        +auth_type = password
        +project_domain_name = Default
        +user_domain_name = Default
        +project_name = service
        +username = placement
        +password = PLACEMENT_PASS
        +

        替换PLACEMENT_PASS为placement用户的密码。

        +
      • +
      +
    • +
    • +

      数据库同步,填充Placement数据库:

      +
      su -s /bin/sh -c "placement-manage db sync" placement
      +
    • +
    +
  6. +
  7. +

    启动服务

    +

    重启httpd服务:

    +
    systemctl restart httpd
    +
  8. +
  9. +

    验证

    +
      +
    • +

      source admin凭证,以获取admin命令行权限

      +
      source ~/.admin-openrc
      +
    • +
    • +

      执行状态检查:

      +
      placement-status upgrade check
      +
      +----------------------------------------------------------------------+
      +| Upgrade Check Results                                                |
      ++----------------------------------------------------------------------+
      +| Check: Missing Root Provider IDs                                     |
      +| Result: Success                                                      |
      +| Details: None                                                        |
      ++----------------------------------------------------------------------+
      +| Check: Incomplete Consumers                                          |
      +| Result: Success                                                      |
      +| Details: None                                                        |
      ++----------------------------------------------------------------------+
      +| Check: Policy File JSON to YAML Migration                            |
      +| Result: Failure                                                      |
      +| Details: Your policy file is JSON-formatted which is deprecated. You |
      +|   need to switch to YAML-formatted file. Use the                     |
      +|   ``oslopolicy-convert-json-to-yaml`` tool to convert the            |
      +|   existing JSON-formatted files to YAML in a backwards-              |
      +|   compatible manner: https://docs.openstack.org/oslo.policy/         |
      +|   latest/cli/oslopolicy-convert-json-to-yaml.html.                   |
      ++----------------------------------------------------------------------+
      +

      这里可以看到Policy File JSON to YAML Migration的结果为Failure。这是因为在Placement中,JSON格式的policy文件从Wallaby版本开始已处于deprecated状态。可以参考提示,使用oslopolicy-convert-json-to-yaml工具 将现有的JSON格式policy文件转化为YAML格式。

      +
      oslopolicy-convert-json-to-yaml  --namespace placement \
      +  --policy-file /etc/placement/policy.json \
      +  --output-file /etc/placement/policy.yaml
      +mv /etc/placement/policy.json{,.bak}
      +

      注:当前环境中此问题可忽略,不影响运行。

      +
    • +
    • +

      针对placement API运行命令:

      +
        +
      • +

        安装osc-placement插件:

        +
        dnf install python3-osc-placement
        +
      • +
      • +

        列出可用的资源类别及特性:

        +
        openstack --os-placement-api-version 1.2 resource class list --sort-column name
        ++----------------------------+
        +| name                       |
        ++----------------------------+
        +| DISK_GB                    |
        +| FPGA                       |
        +| ...                        |
        +
        +openstack --os-placement-api-version 1.6 trait list --sort-column name
        ++---------------------------------------+
        +| name                                  |
        ++---------------------------------------+
        +| COMPUTE_ACCELERATORS                  |
        +| COMPUTE_ARCH_AARCH64                  |
        +| ...                                   |
        +
      • +
      +
    • +
    +
  10. +
+

Nova

+

Nova是OpenStack的计算服务,负责虚拟机的创建、发放等功能。

+

Controller节点

+

在控制节点执行以下操作。

+
    +
  1. +

    创建数据库

    +
      +
    • +

      使用root用户访问数据库服务:

      +
      mysql -u root -p
      +
    • +
    • +

      创建nova_apinovanova_cell0数据库:

      +
      MariaDB [(none)]> CREATE DATABASE nova_api;
      +MariaDB [(none)]> CREATE DATABASE nova;
      +MariaDB [(none)]> CREATE DATABASE nova_cell0;
      +
    • +
    • +

      授权数据库访问:

      +
      MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \
      +  IDENTIFIED BY 'NOVA_DBPASS';
      +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \
      +  IDENTIFIED BY 'NOVA_DBPASS';
      +
      +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
      +  IDENTIFIED BY 'NOVA_DBPASS';
      +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
      +  IDENTIFIED BY 'NOVA_DBPASS';
      +
      +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \
      +  IDENTIFIED BY 'NOVA_DBPASS';
      +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \
      +  IDENTIFIED BY 'NOVA_DBPASS';
      +

      替换NOVA_DBPASS为nova相关数据库访问密码。

      +
    • +
    • +

      退出数据库访问客户端:

      +
      exit
      +
    • +
    +
  2. +
  3. +

    配置用户和Endpoints

    +
      +
    • +

      source admin凭证,以获取admin命令行权限:

      +
      source ~/.admin-openrc
      +
    • +
    • +

      创建nova用户并设置用户密码:

      +
      openstack user create --domain default --password-prompt nova
      +
      +User Password:
      +Repeat User Password:
      +
    • +
    • +

      添加nova用户到service project并指定admin角色:

      +
      openstack role add --project service --user nova admin
      +
    • +
    • +

      创建nova服务实体:

      +
      openstack service create --name nova \
      +  --description "OpenStack Compute" compute
      +
    • +
    • +

      创建Nova API服务endpoints:

      +
      openstack endpoint create --region RegionOne \
      +  compute public http://controller:8774/v2.1
      +openstack endpoint create --region RegionOne \
      +  compute internal http://controller:8774/v2.1
      +openstack endpoint create --region RegionOne \
      +  compute admin http://controller:8774/v2.1
      +
    • +
    +
  4. +
  5. +

    安装及配置组件

    +
      +
    • +

      安装软件包:

      +
      dnf install openstack-nova-api openstack-nova-conductor \
      +  openstack-nova-novncproxy openstack-nova-scheduler
      +
    • +
    • +

      编辑/etc/nova/nova.conf配置文件,完成如下操作:

      +
        +
      • +

        [default]部分,启用计算和元数据的API,配置RabbitMQ消息队列入口,使用controller节点管理IP配置my_ip,显式定义log_dir:

        +
        [DEFAULT]
        +enabled_apis = osapi_compute,metadata
        +transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
        +my_ip = 192.168.0.2
        +log_dir = /var/log/nova
        +state_path = /var/lib/nova
        +

        替换RABBIT_PASS为RabbitMQ中openstack账户的密码。

        +
      • +
      • +

        [api_database][database]部分,配置数据库入口:

        +
        [api_database]
        +connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api
        +
        +[database]
        +connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova
        +

        替换NOVA_DBPASS为nova相关数据库的密码。

        +
      • +
      • +

        [api][keystone_authtoken]部分,配置身份认证服务入口:

        +
        [api]
        +auth_strategy = keystone
        +
        +[keystone_authtoken]
        +auth_url = http://controller:5000/v3
        +memcached_servers = controller:11211
        +auth_type = password
        +project_domain_name = Default
        +user_domain_name = Default
        +project_name = service
        +username = nova
        +password = NOVA_PASS
        +

        替换NOVA_PASS为nova用户的密码。

        +
      • +
      • +

        [vnc]部分,启用并配置远程控制台入口:

        +
        [vnc]
        +enabled = true
        +server_listen = $my_ip
        +server_proxyclient_address = $my_ip
        +
      • +
      • +

        [glance]部分,配置镜像服务API的地址:

        +
        [glance]
        +api_servers = http://controller:9292
        +
      • +
      • +

        [oslo_concurrency]部分,配置lock path:

        +
        [oslo_concurrency]
        +lock_path = /var/lib/nova/tmp
        +
      • +
      • +

        [placement]部分,配置placement服务的入口:

        +
        [placement]
        +region_name = RegionOne
        +project_domain_name = Default
        +project_name = service
        +auth_type = password
        +user_domain_name = Default
        +auth_url = http://controller:5000/v3
        +username = placement
        +password = PLACEMENT_PASS
        +

        替换PLACEMENT_PASS为placement用户的密码。

        +
      • +
      +
    • +
    • +

      数据库同步:

      +
        +
      • +

        同步nova-api数据库:

        +
        su -s /bin/sh -c "nova-manage api_db sync" nova
        +
      • +
      • +

        注册cell0数据库:

        +
        su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
        +
      • +
      • +

        创建cell1 cell:

        +
        su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
        +
      • +
      • +

        同步nova数据库:

        +
        su -s /bin/sh -c "nova-manage db sync" nova
        +
      • +
      • +

        验证cell0和cell1注册正确:

        +
        su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova
        +
      • +
      +
    • +
    +
  6. +
  7. +

    启动服务

    +
    systemctl enable \
    +  openstack-nova-api.service \
    +  openstack-nova-scheduler.service \
    +  openstack-nova-conductor.service \
    +  openstack-nova-novncproxy.service
    +
    +systemctl start \
    +  openstack-nova-api.service \
    +  openstack-nova-scheduler.service \
    +  openstack-nova-conductor.service \
    +  openstack-nova-novncproxy.service
    +
  8. +
+

Compute节点

+

在计算节点执行以下操作。

+
    +
  1. +

    安装软件包

    +
    dnf install openstack-nova-compute
    +
  2. +
  3. +

    编辑/etc/nova/nova.conf配置文件

    +
      +
    • +

      [default]部分,启用计算和元数据的API,配置RabbitMQ消息队列入口,使用Compute节点管理IP配置my_ip,显式定义compute_driver、instances_path、log_dir:

      +
      [DEFAULT]
      +enabled_apis = osapi_compute,metadata
      +transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
      +my_ip = 192.168.0.3
      +compute_driver = libvirt.LibvirtDriver
      +instances_path = /var/lib/nova/instances
      +log_dir = /var/log/nova
      +

      替换RABBIT_PASS为RabbitMQ中openstack账户的密码。

      +
    • +
    • +

      [api][keystone_authtoken]部分,配置身份认证服务入口:

      +
      [api]
      +auth_strategy = keystone
      +
      +[keystone_authtoken]
      +auth_url = http://controller:5000/v3
      +memcached_servers = controller:11211
      +auth_type = password
      +project_domain_name = Default
      +user_domain_name = Default
      +project_name = service
      +username = nova
      +password = NOVA_PASS
      +

      替换NOVA_PASS为nova用户的密码。

      +
    • +
    • +

      [vnc]部分,启用并配置远程控制台入口:

      +
      [vnc]
      +enabled = true
      +server_listen = $my_ip
      +server_proxyclient_address = $my_ip
      +novncproxy_base_url = http://controller:6080/vnc_auto.html
      +
    • +
    • +

      [glance]部分,配置镜像服务API的地址:

      +
      [glance]
      +api_servers = http://controller:9292
      +
    • +
    • +

      [oslo_concurrency]部分,配置lock path:

      +
      [oslo_concurrency]
      +lock_path = /var/lib/nova/tmp
      +
    • +
    • +

      [placement]部分,配置placement服务的入口:

      +
      [placement]
      +region_name = RegionOne
      +project_domain_name = Default
      +project_name = service
      +auth_type = password
      +user_domain_name = Default
      +auth_url = http://controller:5000/v3
      +username = placement
      +password = PLACEMENT_PASS
      +

      替换PLACEMENT_PASS为placement用户的密码。

      +
    • +
    +
  4. +
  5. +

    确认计算节点是否支持虚拟机硬件加速(x86_64)

    +

    处理器为x86_64架构时,可通过运行如下命令确认是否支持硬件加速:

    +
    egrep -c '(vmx|svm)' /proc/cpuinfo
    +

    如果返回值为0则不支持硬件加速,需要配置libvirt使用QEMU而不是默认的KVM。编辑/etc/nova/nova.conf[libvirt]部分:

    +
    [libvirt]
    +virt_type = qemu
    +

    如果返回值为1或更大的值,则支持硬件加速,不需要进行额外的配置。

    +
  6. +
  7. +

    确认计算节点是否支持虚拟机硬件加速(arm64)

    +

    处理器为arm64架构时,可通过运行如下命令确认是否支持硬件加速:

    +
    virt-host-validate
    +# 该命令由libvirt提供,此时libvirt应已作为openstack-nova-compute依赖被安装,环境中已有此命令
    +

    显示FAIL时,表示不支持硬件加速,需要配置libvirt使用QEMU而不是默认的KVM。

    +
    QEMU: Checking if device /dev/kvm exists: FAIL (Check that CPU and firmware supports virtualization and kvm module is loaded)
    +

    编辑/etc/nova/nova.conf[libvirt]部分:

    +
    [libvirt]
    +virt_type = qemu
    +

    显示PASS时,表示支持硬件加速,不需要进行额外的配置。

    +
    QEMU: Checking if device /dev/kvm exists: PASS
    +
  8. +
  9. +

    配置qemu(仅arm64)

    +

    仅当处理器为arm64架构时需要执行此操作。

    +
      +
    • +

      编辑/etc/libvirt/qemu.conf:

      +
      nvram = ["/usr/share/AAVMF/AAVMF_CODE.fd: \
      +         /usr/share/AAVMF/AAVMF_VARS.fd", \
      +         "/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw: \
      +         /usr/share/edk2/aarch64/vars-template-pflash.raw"]
      +
    • +
    • +

      编辑/etc/qemu/firmware/edk2-aarch64.json

      +
      {
      +    "description": "UEFI firmware for ARM64 virtual machines",
      +    "interface-types": [
      +        "uefi"
      +    ],
      +    "mapping": {
      +        "device": "flash",
      +        "executable": {
      +            "filename": "/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw",
      +            "format": "raw"
      +        },
      +        "nvram-template": {
      +            "filename": "/usr/share/edk2/aarch64/vars-template-pflash.raw",
      +            "format": "raw"
      +        }
      +    },
      +    "targets": [
      +        {
      +            "architecture": "aarch64",
      +            "machines": [
      +                "virt-*"
      +            ]
      +        }
      +    ],
      +    "features": [
      +
      +    ],
      +    "tags": [
      +
      +    ]
      +}
      +
    • +
    +
  10. +
  11. +

    启动服务

    +
    systemctl enable libvirtd.service openstack-nova-compute.service
    +systemctl start libvirtd.service openstack-nova-compute.service
    +
  12. +
+

Controller节点

+

在控制节点执行以下操作。

+
    +
  1. +

    添加计算节点到openstack集群

    +
      +
    • +

      source admin凭证,以获取admin命令行权限:

      +
      source ~/.admin-openrc
      +
    • +
    • +

      确认nova-compute服务已识别到数据库中:

      +
      openstack compute service list --service nova-compute
      +
    • +
    • +

      发现计算节点,将计算节点添加到cell数据库:

      +

      su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
      +结果如下:

      +
      Modules with known eventlet monkey patching issues were imported prior to eventlet monkey patching: urllib3. This warning can usually be    ignored if the caller is only importing and not executing nova code.
      +Found 2 cell mappings.
      +Skipping cell0 since it does not contain hosts.
      +Getting computes from cell 'cell1': 6dae034e-b2d9-4a6c-b6f0-60ada6a6ddc2
      +Checking host mapping for compute host 'compute': 6286a86f-09d7-4786-9137-1185654c9e2e
      +Creating host mapping for compute host 'compute': 6286a86f-09d7-4786-9137-1185654c9e2e
      +Found 1 unmapped computes in cell: 6dae034e-b2d9-4a6c-b6f0-60ada6a6ddc2
      +
    • +
    +
  2. +
  3. +

    验证

    +
      +
    • 列出服务组件,验证每个流程都成功启动和注册:
    • +
    +
    openstack compute service list
    +
      +
    • 列出身份服务中的API端点,验证与身份服务的连接:
    • +
    +
    openstack catalog list
    +
      +
    • 列出镜像服务中的镜像,验证与镜像服务的连接:
    • +
    +
    openstack image list
    +
      +
    • 检查cells是否运作成功,以及其他必要条件是否已具备。
    • +
    +
    nova-status upgrade check
    +
  4. +
+

Neutron

+

Neutron是OpenStack的网络服务,提供虚拟交换机、IP路由、DHCP等功能。

+

Controller节点

+
    +
  1. +

    创建数据库、服务凭证和 API 服务端点

    +
      +
    • +

      创建数据库:

      +
      mysql -u root -p
      +
      +MariaDB [(none)]> CREATE DATABASE neutron;
      +MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'NEUTRON_DBPASS';
      +MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'NEUTRON_DBPASS';
      +MariaDB [(none)]> exit;
      +
    • +
    • +

      创建用户和服务,并记住创建neutron用户时输入的密码,用于配置NEUTRON_PASS:

      +
      source ~/.admin-openrc
      +openstack user create --domain default --password-prompt neutron
      +openstack role add --project service --user neutron admin
      +openstack service create --name neutron --description "OpenStack Networking" network
      +
    • +
    • +

      部署 Neutron API 服务:

      +
      openstack endpoint create --region RegionOne network public http://controller:9696
      +openstack endpoint create --region RegionOne network internal http://controller:9696
      +openstack endpoint create --region RegionOne network admin http://controller:9696
      +
    • +
    +
  2. +
  3. +

    安装软件包

    +

    dnf install -y openstack-neutron openstack-neutron-linuxbridge ebtables ipset openstack-neutron-ml2
    +3. 配置Neutron

    +
      +
    • +

      修改/etc/neutron/neutron.conf +

      [database]
      +connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron
      +
      +[DEFAULT]
      +core_plugin = ml2
      +service_plugins = router
      +allow_overlapping_ips = true
      +transport_url = rabbit://openstack:RABBIT_PASS@controller
      +auth_strategy = keystone
      +notify_nova_on_port_status_changes = true
      +notify_nova_on_port_data_changes = true
      +
      +[keystone_authtoken]
      +www_authenticate_uri = http://controller:5000
      +auth_url = http://controller:5000
      +memcached_servers = controller:11211
      +auth_type = password
      +project_domain_name = Default
      +user_domain_name = Default
      +project_name = service
      +username = neutron
      +password = NEUTRON_PASS
      +
      +[nova]
      +auth_url = http://controller:5000
      +auth_type = password
      +project_domain_name = Default
      +user_domain_name = Default
      +region_name = RegionOne
      +project_name = service
      +username = nova
      +password = NOVA_PASS
      +
      +[oslo_concurrency]
      +lock_path = /var/lib/neutron/tmp
      +
      +[experimental]
      +linuxbridge = true

      +
    • +
    • +

      配置ML2,ML2具体配置可以根据用户需求自行修改,本文使用的是provider network + linuxbridge

      +
    • +
    • +

      修改/etc/neutron/plugins/ml2/ml2_conf.ini +

      [ml2]
      +type_drivers = flat,vlan,vxlan
      +tenant_network_types = vxlan
      +mechanism_drivers = linuxbridge,l2population
      +extension_drivers = port_security
      +
      +[ml2_type_flat]
      +flat_networks = provider
      +
      +[ml2_type_vxlan]
      +vni_ranges = 1:1000
      +
      +[securitygroup]
      +enable_ipset = true

      +
    • +
    • +

      修改/etc/neutron/plugins/ml2/linuxbridge_agent.ini +

      [linux_bridge]
      +physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME
      +
      +[vxlan]
      +enable_vxlan = true
      +local_ip = OVERLAY_INTERFACE_IP_ADDRESS
      +l2_population = true
      +
      +[securitygroup]
      +enable_security_group = true
      +firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

      +
    • +
    • +

      配置Layer-3代理

      +
    • +
    • +

      修改/etc/neutron/l3_agent.ini

      +
      [DEFAULT]
      +interface_driver = linuxbridge
      +

      配置DHCP代理 +修改/etc/neutron/dhcp_agent.ini +

      [DEFAULT]
      +interface_driver = linuxbridge
      +dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
      +enable_isolated_metadata = true

      +
    • +
    • +

      配置metadata代理

      +
    • +
    • +

      修改/etc/neutron/metadata_agent.ini +

      [DEFAULT]
      +nova_metadata_host = controller
      +metadata_proxy_shared_secret = METADATA_SECRET

      +
    • +
    • 配置nova服务使用neutron,修改/etc/nova/nova.conf +
      [neutron]
      +auth_url = http://controller:5000
      +auth_type = password
      +project_domain_name = default
      +user_domain_name = default
      +region_name = RegionOne
      +project_name = service
      +username = neutron
      +password = NEUTRON_PASS
      +service_metadata_proxy = true
      +metadata_proxy_shared_secret = METADATA_SECRET
    • +
    +
  4. +
  5. +

    创建/etc/neutron/plugin.ini的符号链接

    +
    ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
    +
  6. +
  7. +

    同步数据库 +

    su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

    +
  8. +
  9. 重启nova api服务 +
    systemctl restart openstack-nova-api
  10. +
  11. +

    启动网络服务

    +
    systemctl enable neutron-server.service neutron-linuxbridge-agent.service \
    +neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service
    +systemctl start neutron-server.service neutron-linuxbridge-agent.service \
    +neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service
    +
  12. +
+
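
With the Neutron services running on the controller, an optional sanity check (not part of the original steps) is to list the agents and confirm they report as alive:

    source ~/.admin-openrc
    openstack network agent list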

Compute节点

+
    +
  1. 安装软件包 +
    dnf install openstack-neutron-linuxbridge ebtables ipset -y
  2. +
  3. +

    配置Neutron

    +
      +
    • +

      修改/etc/neutron/neutron.conf +

      [DEFAULT]
      +transport_url = rabbit://openstack:RABBIT_PASS@controller
      +auth_strategy = keystone
      +
      +[keystone_authtoken]
      +www_authenticate_uri = http://controller:5000
      +auth_url = http://controller:5000
      +memcached_servers = controller:11211
      +auth_type = password
      +project_domain_name = Default
      +user_domain_name = Default
      +project_name = service
      +username = neutron
      +password = NEUTRON_PASS
      +
      +[oslo_concurrency]
      +lock_path = /var/lib/neutron/tmp

      +
    • +
    • +

      修改/etc/neutron/plugins/ml2/linuxbridge_agent.ini +

      [linux_bridge]
      +physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME
      +
      +[vxlan]
      +enable_vxlan = true
      +local_ip = OVERLAY_INTERFACE_IP_ADDRESS
      +l2_population = true
      +
      +[securitygroup]
      +enable_security_group = true
      +firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

      +
    • +
    • +

      配置nova compute服务使用neutron,修改/etc/nova/nova.conf +

      [neutron]
      +auth_url = http://controller:5000
      +auth_type = password
      +project_domain_name = default
      +user_domain_name = default
      +region_name = RegionOne
      +project_name = service
      +username = neutron
      +password = NEUTRON_PASS

      +
    • +
    • 重启nova-compute服务 +
      systemctl restart openstack-nova-compute.service
    • +
    • 启动Neutron linuxbridge agent服务
    • +
    +
    systemctl enable neutron-linuxbridge-agent
    +systemctl start neutron-linuxbridge-agent
    +
  4. +
+
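
With both the controller and compute nodes configured, the setup can be exercised end to end by creating a provider network. The following is only an illustrative example: the network and subnet names and the address range are placeholders, and the physical network name "provider" must match the flat_networks and physical_interface_mappings values configured above.

    source ~/.admin-openrc
    openstack network create --share --external \
      --provider-physical-network provider \
      --provider-network-type flat provider-net
    openstack subnet create --network provider-net \
      --allocation-pool start=192.168.0.100,end=192.168.0.200 \
      --dns-nameserver 8.8.8.8 --gateway 192.168.0.1 \
      --subnet-range 192.168.0.0/24 provider-subnet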

Cinder

+

Cinder是OpenStack的存储服务,提供块设备的创建、发放、备份等功能。

+

Controller节点

+
    +
  1. +

    初始化数据库

    +

    CINDER_DBPASS是用户自定义的cinder数据库密码。 +

    mysql -u root -p
    +
    +MariaDB [(none)]> CREATE DATABASE cinder;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'CINDER_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'CINDER_DBPASS';
    +MariaDB [(none)]> exit

    +
  2. +
  3. +

    初始化Keystone资源对象

    +

    source ~/.admin-openrc
    +
    +#创建用户时,命令行会提示输入密码,请输入自定义的密码,下文涉及到`CINDER_PASS`的地方替换成该密码即可。
    +openstack user create --domain default --password-prompt cinder
    +
    +openstack role add --project service --user cinder admin
    +openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
    +
    +openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s
    +openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s
    +openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s
    +

    安装软件包

    +
    dnf install openstack-cinder-api openstack-cinder-scheduler
    +
  4. +
  5. +

    修改cinder配置文件/etc/cinder/cinder.conf

    +
    [DEFAULT]
    +transport_url = rabbit://openstack:RABBIT_PASS@controller
    +auth_strategy = keystone
    +my_ip = 192.168.0.2
    +
    +[database]
    +connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder
    +
    +[keystone_authtoken]
    +www_authenticate_uri = http://controller:5000
    +auth_url = http://controller:5000
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +project_name = service
    +username = cinder
    +password = CINDER_PASS
    +
    +[oslo_concurrency]
    +lock_path = /var/lib/cinder/tmp
    +
  6. +
  7. +

    数据库同步

    +
    su -s /bin/sh -c "cinder-manage db sync" cinder
    +
  8. +
  9. +

    修改nova配置/etc/nova/nova.conf

    +
    [cinder]
    +os_region_name = RegionOne
    +
  10. +
  11. +

    启动服务

    +
    systemctl restart openstack-nova-api
    +systemctl start openstack-cinder-api openstack-cinder-scheduler
    +
  12. +
+

Storage节点

+

Storage节点要提前准备至少一块硬盘,作为cinder的存储后端,下文默认storage节点已经存在一块未使用的硬盘,设备名称为/dev/sdb,用户在配置过程中,请按照真实环境信息进行名称替换。

+

Cinder支持很多类型的后端存储,本指导使用最简单的lvm为参考,如果您想使用如ceph等其他后端,请自行配置。

+
    +
  1. +

    安装软件包

    +
    dnf install lvm2 device-mapper-persistent-data scsi-target-utils rpcbind nfs-utils openstack-cinder-volume openstack-cinder-backup
    +
  2. +
  3. +

    配置lvm卷组

    +
    pvcreate /dev/sdb
    +vgcreate cinder-volumes /dev/sdb
    +
  4. +
  5. +

    修改cinder配置/etc/cinder/cinder.conf

    +
    [DEFAULT]
    +transport_url = rabbit://openstack:RABBIT_PASS@controller
    +auth_strategy = keystone
    +my_ip = 192.168.0.4
    +enabled_backends = lvm
    +glance_api_servers = http://controller:9292
    +
    +[keystone_authtoken]
    +www_authenticate_uri = http://controller:5000
    +auth_url = http://controller:5000
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = default
    +user_domain_name = default
    +project_name = service
    +username = cinder
    +password = CINDER_PASS
    +
    +[database]
    +connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder
    +
    +[lvm]
    +volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
    +volume_group = cinder-volumes
    +target_protocol = iscsi
    +target_helper = lioadm
    +
    +[oslo_concurrency]
    +lock_path = /var/lib/cinder/tmp
    +
  6. +
  7. +

    配置cinder backup (可选)

    +

    cinder-backup是可选的备份服务,cinder同样支持很多种备份后端,本文使用swift存储,如果您想使用如NFS等后端,请自行配置,例如可以参考OpenStack官方文档对NFS的配置说明。

    +

    修改/etc/cinder/cinder.conf,在[DEFAULT]中新增 +

    [DEFAULT]
    +backup_driver = cinder.backup.drivers.swift.SwiftBackupDriver
    +backup_swift_url = SWIFT_URL

    +

    这里的SWIFT_URL是指环境中swift服务的URL,在部署完swift服务后,执行openstack catalog show object-store命令获取。

    +
  8. +
  9. +

    启动服务

    +
    systemctl start openstack-cinder-volume target
    +systemctl start openstack-cinder-backup    # 可选
    +
  10. +
+

至此,Cinder服务的部署已全部完成,可以在controller通过以下命令进行简单的验证

+
source ~/.admin-openrc
+openstack volume service list
+openstack volume list
+
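
As a further optional check (illustrative only, the volume name is arbitrary), create a small test volume and confirm it becomes available:

    source ~/.admin-openrc
    openstack volume create --size 1 test-volume
    openstack volume list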

Horizon

+

Horizon是OpenStack提供的前端页面,可以让用户通过网页鼠标的操作来控制OpenStack集群,而不用繁琐的CLI命令行。Horizon一般部署在控制节点。

+
    +
  1. +

    安装软件包

    +
    dnf install openstack-dashboard
    +
  2. +
  3. +

    修改配置文件/etc/openstack-dashboard/local_settings

    +
    OPENSTACK_HOST = "controller"
    +ALLOWED_HOSTS = ['*', ]
    +OPENSTACK_KEYSTONE_URL =  "http://controller:5000/v3"
    +SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
    +CACHES = {
    +'default': {
    +    'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
    +    'LOCATION': 'controller:11211',
    +    }
    +}
    +OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
    +OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
    +OPENSTACK_KEYSTONE_DEFAULT_ROLE = "member"
    +WEBROOT = '/dashboard'
    +POLICY_FILES_PATH = "/etc/openstack-dashboard"
    +
    +OPENSTACK_API_VERSIONS = {
    +    "identity": 3,
    +    "image": 2,
    +    "volume": 3,
    +}
    +
  4. +
  5. +

    重启服务

    +
    systemctl restart httpd
    +
  6. +
+

至此,horizon服务的部署已全部完成,打开浏览器,输入http://192.168.0.2/dashboard,打开horizon登录页面。

+

Ironic

+

Ironic是OpenStack的裸金属服务,如果用户需要进行裸机部署则推荐使用该组件。否则,可以不用安装。

+

在控制节点执行以下操作。

+
    +
  1. +

    设置数据库

    +

    裸金属服务在数据库中存储信息,创建一个ironic用户可以访问的ironic数据库,替换IRONIC_DBPASS为合适的密码

    +
    mysql -u root -p
    +
    +MariaDB [(none)]> CREATE DATABASE ironic CHARACTER SET utf8;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'localhost' \
    +IDENTIFIED BY 'IRONIC_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'%' \
    +IDENTIFIED BY 'IRONIC_DBPASS';
    +MariaDB [(none)]> exit
    +Bye
    +
  2. +
  3. +

    创建服务用户认证

    +
      +
    • +

      创建Bare Metal服务用户

      +

      替换IRONIC_PASS为ironic用户密码,IRONIC_INSPECTOR_PASS为ironic_inspector用户密码。

      +
      openstack user create --password IRONIC_PASS \
      +  --email ironic@example.com ironic
      +openstack role add --project service --user ironic admin
      +openstack service create --name ironic \
      +  --description "Ironic baremetal provisioning service" baremetal
      +
      +openstack service create --name ironic-inspector --description "Ironic inspector baremetal provisioning service" baremetal-introspection
      +openstack user create --password IRONIC_INSPECTOR_PASS --email ironic_inspector@example.com ironic-inspector
      +openstack role add --project service --user ironic-inspector admin
      +
    • +
    • +

      创建Bare Metal服务访问入口

      +
      openstack endpoint create --region RegionOne baremetal admin http://192.168.0.2:6385
      +openstack endpoint create --region RegionOne baremetal public http://192.168.0.2:6385
      +openstack endpoint create --region RegionOne baremetal internal http://192.168.0.2:6385
      +openstack endpoint create --region RegionOne baremetal-introspection internal http://192.168.0.2:5050/v1
      +openstack endpoint create --region RegionOne baremetal-introspection public http://192.168.0.2:5050/v1
      +openstack endpoint create --region RegionOne baremetal-introspection admin http://192.168.0.2:5050/v1
      +
    • +
    +
  4. +
  5. +

    安装组件

    +
    dnf install openstack-ironic-api openstack-ironic-conductor python3-ironicclient
    +
  6. +
  7. +

    配置ironic-api服务

    +

    配置文件路径/etc/ironic/ironic.conf

    +
      +
    • +

      通过connection选项配置数据库的位置,如下所示,替换IRONIC_DBPASS为ironic用户的密码,替换DB_IP为DB服务器所在的IP地址:

      +
      [database]
      +
      +# The SQLAlchemy connection string used to connect to the
      +# database (string value)
      +# connection = mysql+pymysql://ironic:IRONIC_DBPASS@DB_IP/ironic
      +connection = mysql+pymysql://ironic:IRONIC_DBPASS@controller/ironic
      +
    • +
    • +

      通过以下选项配置ironic-api服务使用RabbitMQ消息代理,替换RPC_*为RabbitMQ的详细地址和凭证

      +
      [DEFAULT]
      +
      +# A URL representing the messaging driver to use and its full
      +# configuration. (string value)
      +# transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
      +transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
      +

      用户也可自行使用json-rpc方式替换rabbitmq

      +
    • +
    • +

      配置ironic-api服务使用身份认证服务的凭证,替换PUBLIC_IDENTITY_IP为身份认证服务器的公共IP,替换PRIVATE_IDENTITY_IP为身份认证服务器的私有IP,替换 IRONIC_PASS为身份认证服务中ironic用户的密码,替换RABBIT_PASS为RabbitMQ中openstack账户的密码。:

      +
      [DEFAULT]
      +
      +# Authentication strategy used by ironic-api: one of
      +# "keystone" or "noauth". "noauth" should not be used in a
      +# production environment because all authentication will be
      +# disabled. (string value)
      +
      +auth_strategy=keystone
      +host = controller
      +memcache_servers = controller:11211
      +enabled_network_interfaces = flat,noop,neutron
      +default_network_interface = noop
      +enabled_hardware_types = ipmi
      +enabled_boot_interfaces = pxe
      +enabled_deploy_interfaces = direct
      +default_deploy_interface = direct
      +enabled_inspect_interfaces = inspector
      +enabled_management_interfaces = ipmitool
      +enabled_power_interfaces = ipmitool
      +enabled_rescue_interfaces = no-rescue,agent
      +isolinux_bin = /usr/share/syslinux/isolinux.bin
      +logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s
      +
      +[keystone_authtoken]
      +# Authentication type to load (string value)
      +auth_type=password
      +# Complete public Identity API endpoint (string value)
      +# www_authenticate_uri=http://PUBLIC_IDENTITY_IP:5000
      +www_authenticate_uri=http://controller:5000
      +# Complete admin Identity API endpoint. (string value)
      +# auth_url=http://PRIVATE_IDENTITY_IP:5000
      +auth_url=http://controller:5000
      +# Service username. (string value)
      +username=ironic
      +# Service account password. (string value)
      +password=IRONIC_PASS
      +# Service tenant name. (string value)
      +project_name=service
      +# Domain name containing project (string value)
      +project_domain_name=Default
      +# User's domain name (string value)
      +user_domain_name=Default
      +
      +[agent]
      +deploy_logs_collect = always
      +deploy_logs_local_path = /var/log/ironic/deploy
      +deploy_logs_storage_backend = local
      +image_download_source = http
      +stream_raw_images = false
      +force_raw_images = false
      +verify_ca = False
      +
      +[oslo_concurrency]
      +
      +[oslo_messaging_notifications]
      +transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
      +topics = notifications
      +driver = messagingv2
      +
      +[oslo_messaging_rabbit]
      +amqp_durable_queues = True
      +rabbit_ha_queues = True
      +
      +[pxe]
      +ipxe_enabled = false
      +pxe_append_params = nofb nomodeset vga=normal coreos.autologin ipa-insecure=1
      +image_cache_size = 204800
      +tftp_root=/var/lib/tftpboot/cephfs/
      +tftp_master_path=/var/lib/tftpboot/cephfs/master_images
      +
      +[dhcp]
      +dhcp_provider = none
      +
    • +
    • +

      创建裸金属服务数据库表

      +
      ironic-dbsync --config-file /etc/ironic/ironic.conf create_schema
      +
    • +
    • +

      重启ironic-api服务

      +
      sudo systemctl restart openstack-ironic-api
      +
    • +
    +
  8. +
  9. +

    配置ironic-conductor服务

    +

    如下为ironic-conductor服务自身的标准配置,ironic-conductor服务可以与ironic-api服务分布于不同节点,本指南中均部署于控制节点,所以重复的配置项可跳过。

    +
      +
    • +

      使用conductor服务所在host的IP配置my_ip:

      +
      [DEFAULT]
      +
      +# IP address of this host. If unset, will determine the IP
      +# programmatically. If unable to do so, will use "127.0.0.1".
      +# (string value)
      +# my_ip=HOST_IP
      +my_ip = 192.168.0.2
      +
    • +
    • +

      配置数据库的位置,ironic-conductor应该使用和ironic-api相同的配置。替换IRONIC_DBPASS为ironic用户的密码:

      +
      [database]
      +
      +# The SQLAlchemy connection string to use to connect to the
      +# database. (string value)
      +connection = mysql+pymysql://ironic:IRONIC_DBPASS@controller/ironic
      +
    • +
    • +

      通过以下选项配置ironic-api服务使用RabbitMQ消息代理,ironic-conductor应该使用和ironic-api相同的配置,替换RABBIT_PASS为RabbitMQ中openstack账户的密码:

      +
      [DEFAULT]
      +
      +# A URL representing the messaging driver to use and its full
      +# configuration. (string value)
      +transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
      +

      用户也可自行使用json-rpc方式替换rabbitmq

      +
    • +
    • +

      配置凭证访问其他OpenStack服务

      +

      为了与其他OpenStack服务进行通信,裸金属服务在请求其他服务时需要使用服务用户与OpenStack Identity服务进行认证。这些用户的凭据必须在与相应服务相关的每个配置文件中进行配置。

      +
      [neutron] - 访问OpenStack网络服务
      +[glance] - 访问OpenStack镜像服务
      +[swift] - 访问OpenStack对象存储服务
      +[cinder] - 访问OpenStack块存储服务
      +[inspector] - 访问OpenStack裸金属introspection服务
      +[service_catalog] - 一个特殊项用于保存裸金属服务使用的凭证,该凭证用于发现注册在OpenStack身份认证服务目录中的自己的API URL端点
      +

      简单起见,可以对所有服务使用同一个服务用户。为了向后兼容,该用户应该和ironic-api服务的[keystone_authtoken]所配置的为同一个用户。但这不是必须的,也可以为每个服务创建并配置不同的服务用户。

      +

      在下面的示例中,用户访问OpenStack网络服务的身份验证信息配置为:

      +
      网络服务部署在名为RegionOne的身份认证服务域中,仅在服务目录中注册公共端点接口
      +
      +请求时使用特定的CA SSL证书进行HTTPS连接
      +
      +与ironic-api服务配置相同的服务用户
      +
      +动态密码认证插件基于其他选项发现合适的身份认证服务API版本
      +

      替换IRONIC_PASS为ironic用户密码。

      +
      [neutron]
      +
      +# Authentication type to load (string value)
      +auth_type = password
      +# Authentication URL (string value)
      +auth_url=https://IDENTITY_IP:5000/
      +# Username (string value)
      +username=ironic
      +# User's password (string value)
      +password=IRONIC_PASS
      +# Project name to scope to (string value)
      +project_name=service
      +# Domain ID containing project (string value)
      +project_domain_id=default
      +# User's domain id (string value)
      +user_domain_id=default
      +# PEM encoded Certificate Authority to use when verifying
      +# HTTPs connections. (string value)
      +cafile=/opt/stack/data/ca-bundle.pem
      +# The default region_name for endpoint URL discovery. (string
      +# value)
      +region_name = RegionOne
      +# List of interfaces, in order of preference, for endpoint
      +# URL. (list value)
      +valid_interfaces=public
      +
      +# 其他参考配置
      +[glance]
      +endpoint_override = http://controller:9292
      +www_authenticate_uri = http://controller:5000
      +auth_url = http://controller:5000
      +auth_type = password
      +username = ironic
      +password = IRONIC_PASS
      +project_domain_name = default
      +user_domain_name = default
      +region_name = RegionOne
      +project_name = service
      +
      +[service_catalog]  
      +region_name = RegionOne
      +project_domain_id = default
      +user_domain_id = default
      +project_name = service
      +password = IRONIC_PASS
      +username = ironic
      +auth_url = http://controller:5000
      +auth_type = password
      +

      默认情况下,为了与其他服务进行通信,裸金属服务会尝试通过身份认证服务的服务目录发现该服务合适的端点。如果希望对一个特定服务使用一个不同的端点,则在裸金属服务的配置文件中通过endpoint_override选项进行指定:

      +
      [neutron]
      +endpoint_override = <NEUTRON_API_ADDRESS>
      +
    • +
    • +

      配置允许的驱动程序和硬件类型

      +

      通过设置enabled_hardware_types设置ironic-conductor服务允许使用的硬件类型:

      +
      [DEFAULT]
      +enabled_hardware_types = ipmi
      +

      配置硬件接口:

      +
      enabled_boot_interfaces = pxe
      +enabled_deploy_interfaces = direct,iscsi
      +enabled_inspect_interfaces = inspector
      +enabled_management_interfaces = ipmitool
      +enabled_power_interfaces = ipmitool
      +

      配置接口默认值:

      +
      [DEFAULT]
      +default_deploy_interface = direct
      +default_network_interface = neutron
      +

      如果启用了任何使用Direct deploy的驱动,必须安装和配置镜像服务的Swift后端。Ceph对象网关(RADOS网关)也支持作为镜像服务的后端。

      +
    • +
    • +

      重启ironic-conductor服务

      +
      sudo systemctl restart openstack-ironic-conductor
      +
    • +
    +
  10. +
  11. +

    配置ironic-inspector服务

    +
      +
    • +

      安装组件

      +
      dnf install openstack-ironic-inspector
      +
    • +
    • +

      创建数据库

      +
      # mysql -u root -p
      +
      +MariaDB [(none)]> CREATE DATABASE ironic_inspector CHARACTER SET utf8;
      +
      +MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic_inspector.* TO 'ironic_inspector'@'localhost' \
      +IDENTIFIED BY 'IRONIC_INSPECTOR_DBPASS';
      +MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic_inspector.* TO 'ironic_inspector'@'%' \
      +IDENTIFIED BY 'IRONIC_INSPECTOR_DBPASS';
      +MariaDB [(none)]> exit
      +Bye
      +
    • +
    • +

      配置/etc/ironic-inspector/inspector.conf

      +

      通过connection选项配置数据库的位置,如下所示,替换IRONIC_INSPECTOR_DBPASS为ironic_inspector用户的密码

      +
      [database]
      +backend = sqlalchemy
      +connection = mysql+pymysql://ironic_inspector:IRONIC_INSPECTOR_DBPASS@controller/ironic_inspector
      +min_pool_size = 100
      +max_pool_size = 500
      +pool_timeout = 30
      +max_retries = 5
      +max_overflow = 200
      +db_retry_interval = 2
      +db_inc_retry_interval = True
      +db_max_retry_interval = 2
      +db_max_retries = 5
      +
    • +
    • +

      配置消息队列通信地址

      +
      [DEFAULT] 
      +transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
      +
    • +
    • +

      设置keystone认证

      +
      [DEFAULT]
      +
      +auth_strategy = keystone
      +timeout = 900
      +rootwrap_config = /etc/ironic-inspector/rootwrap.conf
      +logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s
      +log_dir = /var/log/ironic-inspector
      +state_path = /var/lib/ironic-inspector
      +use_stderr = False
      +
      +[ironic]
      +api_endpoint = http://IRONIC_API_HOST_ADDRESS:6385
      +auth_type = password
      +auth_url = http://PUBLIC_IDENTITY_IP:5000
      +auth_strategy = keystone
      +ironic_url = http://IRONIC_API_HOST_ADDRESS:6385
      +os_region = RegionOne
      +project_name = service
      +project_domain_name = Default
      +user_domain_name = Default
      +username = IRONIC_SERVICE_USER_NAME
      +password = IRONIC_SERVICE_USER_PASSWORD
      +
      +[keystone_authtoken]
      +auth_type = password
      +auth_url = http://controller:5000
      +www_authenticate_uri = http://controller:5000
      +project_domain_name = default
      +user_domain_name = default
      +project_name = service
      +username = ironic_inspector
      +password = IRONICPASSWD
      +region_name = RegionOne
      +memcache_servers = controller:11211
      +token_cache_time = 300
      +
      +[processing]
      +add_ports = active
      +processing_hooks = $default_processing_hooks,local_link_connection,lldp_basic
      +ramdisk_logs_dir = /var/log/ironic-inspector/ramdisk
      +always_store_ramdisk_logs = true
      +store_data = none
      +power_off = false
      +
      +[pxe_filter]
      +driver = iptables
      +
      +[capabilities]
      +boot_mode=True
      +
    • +
    • +

      配置ironic inspector dnsmasq服务

      +
      # 配置文件地址:/etc/ironic-inspector/dnsmasq.conf
      +port=0
      +interface=enp3s0                         #替换为实际监听网络接口
      +dhcp-range=192.168.0.40,192.168.0.50   #替换为实际dhcp地址范围
      +bind-interfaces
      +enable-tftp
      +
      +dhcp-match=set:efi,option:client-arch,7
      +dhcp-match=set:efi,option:client-arch,9
      +dhcp-match=aarch64, option:client-arch,11
      +dhcp-boot=tag:aarch64,grubaa64.efi
      +dhcp-boot=tag:!aarch64,tag:efi,grubx64.efi
      +dhcp-boot=tag:!aarch64,tag:!efi,pxelinux.0
      +
      +tftp-root=/tftpboot                       #替换为实际tftpboot目录
      +log-facility=/var/log/dnsmasq.log
      +
    • +
    • +

      关闭ironic provision网络子网的dhcp

      +
      openstack subnet set --no-dhcp 72426e89-f552-4dc4-9ac7-c4e131ce7f3c
      +
    • +
    • +

      初始化ironic-inspector服务的数据库

      +
      ironic-inspector-dbsync --config-file /etc/ironic-inspector/inspector.conf upgrade
      +
    • +
    • +

      启动服务

      +
      systemctl enable --now openstack-ironic-inspector.service
      +systemctl enable --now openstack-ironic-inspector-dnsmasq.service
      +
    • +
    +
  12. +
  13. +

    配置httpd服务

    +
      +
    • +

      创建ironic要使用的httpd的root目录并设置属主属组,目录路径要和/etc/ironic/ironic.conf中[deploy]组中http_root 配置项指定的路径要一致。

      +
      mkdir -p /var/lib/ironic/httproot
      +chown ironic.ironic /var/lib/ironic/httproot
      +
    • +
    • +

      安装和配置httpd服务

      +
        +
      • +

        安装httpd服务,已有请忽略

        +
        dnf install httpd -y
        +
      • +
      • +

        创建/etc/httpd/conf.d/openstack-ironic-httpd.conf文件,内容如下:

        +
        Listen 8080
        +
        +<VirtualHost *:8080>
        +    ServerName ironic.openeuler.com
        +
        +    ErrorLog "/var/log/httpd/openstack-ironic-httpd-error_log"
        +    CustomLog "/var/log/httpd/openstack-ironic-httpd-access_log" "%h %l %u %t \"%r\" %>s %b"
        +
        +    DocumentRoot "/var/lib/ironic/httproot"
        +    <Directory "/var/lib/ironic/httproot">
        +        Options Indexes FollowSymLinks
        +        Require all granted
        +    </Directory>
        +    LogLevel warn
        +    AddDefaultCharset UTF-8
        +    EnableSendfile on
        +</VirtualHost>
        +

        注意监听的端口要和/etc/ironic/ironic.conf里[deploy]选项中http_url配置项中指定的端口一致。

        +
      • +
      • +

        重启httpd服务。

        +
        systemctl restart httpd
        +
      • +
      +
    • +
    +
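
      The directory created above and the port that httpd listens on must agree with the [deploy] section of /etc/ironic/ironic.conf. A minimal sketch is shown below; the address 192.168.0.2 is an assumption based on the controller IP used elsewhere in this guide.

        [deploy]
        # must match the port in the Listen/VirtualHost directives above
        http_url = http://192.168.0.2:8080
        # must match the directory created and chown'ed above
        http_root = /var/lib/ironic/httproot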
  14. +
  15. +

    deploy ramdisk镜像下载或制作

    +

    部署一个裸机节点总共需要两组镜像:deploy ramdisk images和user images。Deploy ramdisk images上运行有ironic-python-agent(IPA)服务,Ironic通过它进行裸机节点的环境准备。User images是最终被安装裸机节点上,供用户使用的镜像。

    +

    ramdisk镜像支持通过ironic-python-agent-builder或disk-image-builder工具制作。用户也可以自行选择其他工具制作。若使用原生工具,则需要安装对应的软件包。

    +

    具体的使用方法可以参考官方文档,同时官方也有提供制作好的deploy镜像,可尝试下载。

    +

    下文介绍通过ironic-python-agent-builder构建ironic使用的deploy镜像的完整过程。

    +
      +
    • +

      安装 ironic-python-agent-builder

      +
      dnf install python3-ironic-python-agent-builder
      +
      +或
      +pip3 install ironic-python-agent-builder
      +dnf install qemu-img git
      +
    • +
    • +

      制作镜像

      +

      基本用法:

      +
      usage: ironic-python-agent-builder [-h] [-r RELEASE] [-o OUTPUT] [-e ELEMENT] [-b BRANCH]
      +                           [-v] [--lzma] [--extra-args EXTRA_ARGS]
      +                           [--elements-path ELEMENTS_PATH]
      +                           distribution
      +
      +positional arguments:
      +  distribution          Distribution to use
      +
      +options:
      +  -h, --help            show this help message and exit
      +  -r RELEASE, --release RELEASE
      +                        Distribution release to use
      +  -o OUTPUT, --output OUTPUT
      +                        Output base file name
      +  -e ELEMENT, --element ELEMENT
      +                        Additional DIB element to use
      +  -b BRANCH, --branch BRANCH
      +                        If set, override the branch that is used for         ironic-python-agent
      +                        and requirements
      +  -v, --verbose         Enable verbose logging in diskimage-builder
      +  --lzma                Use lzma compression for smaller images
      +  --extra-args EXTRA_ARGS
      +                        Extra arguments to pass to diskimage-builder
      +  --elements-path ELEMENTS_PATH
      +                        Path(s) to custom DIB elements separated by a colon
      +

      操作实例:

      +
      # -o选项指定生成的镜像名
      +# ubuntu指定生成ubuntu系统的镜像
      +ironic-python-agent-builder -o my-ubuntu-ipa ubuntu
      +

      可通过设置ARCH环境变量(默认为amd64)指定所构建镜像的架构。如果是arm架构,需要添加:

      +
      export ARCH=aarch64
      +
    • +
    • +

      允许ssh登录

      +

      初始化环境变量,设置用户名、密码,启用sudo权限;并添加-e选项使用相应的DIB元素。制作镜像操作如下:

      +
      export DIB_DEV_USER_USERNAME=ipa
      +export DIB_DEV_USER_PWDLESS_SUDO=yes
      +export DIB_DEV_USER_PASSWORD='123'
      +ironic-python-agent-builder -o my-ssh-ubuntu-ipa -e selinux-permissive -e devuser ubuntu
      +
    • +
    • +

      指定代码仓库

      +

      初始化对应的环境变量,然后制作镜像:

      +
      # 直接从gerrit上clone代码
      +DIB_REPOLOCATION_ironic_python_agent=https://opendev.org/openstack/ironic-python-agent
      +DIB_REPOREF_ironic_python_agent=stable/2023.1
      +
      +# 指定本地仓库及分支
      +DIB_REPOLOCATION_ironic_python_agent=/home/user/path/to/repo
      +DIB_REPOREF_ironic_python_agent=my-test-branch
      +
      +ironic-python-agent-builder ubuntu
      +

      参考:source-repositories

      +
    • +
    +
  16. +
  17. +

    注意

    +

    原生的openstack里的pxe配置文件的模版不支持arm64架构,需要自己对原生openstack代码进行修改: +在W版中,社区的ironic仍然不支持arm64位的uefi pxe启动,表现为生成的grub.cfg文件(一般位于/tftpboot/下)格式不对而导致pxe启动失败。

    +

    生成的错误配置文件:

    +

    ironic-err

    +

    如上图所示,arm架构里寻找vmlinux和ramdisk镜像的命令分别是linux和initrd,上图所示的标红命令是x86架构下的uefi pxe启动。

    +

    需要用户对生成grub.cfg的代码逻辑自行修改。

    +

    ironic向ipa发送查询命令执行状态请求的tls报错:

    +

    当前版本的ipa和ironic默认都会开启tls认证的方式向对方发送请求,跟据官网的说明进行关闭即可。

    +
      +
    • +

      修改ironic配置文件(/etc/ironic/ironic.conf)下面的配置中添加ipa-insecure=1:

      +
      [agent]
      +verify_ca = False
      +[pxe]
      +pxe_append_params = nofb nomodeset vga=normal coreos.autologin ipa-insecure=1
      +
    • +
    • +

      ramdisk镜像中添加ipa配置文件/etc/ironic_python_agent/ironic_python_agent.conf并配置tls的配置如下:

      +

      /etc/ironic_python_agent/ironic_python_agent.conf (需要提前创建/etc/ironic_python_agent目录)

      +
      [DEFAULT]
      +enable_auto_tls = False
      +

      设置权限:

      +
      chown -R ipa.ipa /etc/ironic_python_agent/
      +
    • +
    • +

      ramdisk镜像中修改ipa服务的服务启动文件,添加配置文件选项

      +

      编辑/usr/lib/systemd/system/ironic-python-agent.service文件

      +
      [Unit]
      +Description=Ironic Python Agent
      +After=network-online.target
      +[Service]
      +ExecStartPre=/sbin/modprobe vfat
      +ExecStart=/usr/local/bin/ironic-python-agent --config-file /etc/ironic_python_agent/ironic_python_agent.conf
      +Restart=always
      +RestartSec=30s
      +[Install]
      +WantedBy=multi-user.target
      +
    • +
    +
  18. +
+
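
Once the services are running and a deploy ramdisk has been uploaded to Glance, enrolling a bare metal node typically looks like the sketch below. This is only an illustrative example: the BMC address and credentials, image UUIDs, MAC address and node UUID are placeholders that must be replaced with real values.

    source ~/.admin-openrc
    # enroll a node using the ipmi hardware type enabled above
    openstack baremetal node create --driver ipmi \
      --driver-info ipmi_address=BMC_IP \
      --driver-info ipmi_username=BMC_USER \
      --driver-info ipmi_password=BMC_PASSWORD \
      --driver-info deploy_kernel=DEPLOY_KERNEL_UUID \
      --driver-info deploy_ramdisk=DEPLOY_RAMDISK_UUID
    # register the MAC address of the node's provisioning NIC
    openstack baremetal port create NODE_MAC_ADDRESS --node NODE_UUID
    # check the node state
    openstack baremetal node list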

Trove

+

Trove是OpenStack的数据库服务,如果用户使用OpenStack提供的数据库服务则推荐使用该组件。否则,可以不用安装。

+

Controller节点

+
    +
  1. +

    创建数据库。

    +

    数据库服务在数据库中存储信息,创建一个trove用户可以访问的trove数据库,替换TROVE_DBPASS为合适的密码。 +

    CREATE DATABASE trove CHARACTER SET utf8;
    +GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'localhost' IDENTIFIED BY 'TROVE_DBPASS';
    +GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'%' IDENTIFIED BY 'TROVE_DBPASS';

    +
  2. +
  3. +

    创建服务凭证以及API端点。

    +

    创建服务凭证。 +

    # 创建trove用户
    +openstack user create --domain default --password-prompt trove
    +# 添加admin角色
    +openstack role add --project service --user trove admin
    +# 创建database服务
    +openstack service create --name trove --description "Database service" database

    +

    创建API端点。 +

    openstack endpoint create --region RegionOne database public http://controller:8779/v1.0/%\(tenant_id\)s
    +openstack endpoint create --region RegionOne database internal http://controller:8779/v1.0/%\(tenant_id\)s
    +openstack endpoint create --region RegionOne database admin http://controller:8779/v1.0/%\(tenant_id\)s

    +
  4. +
  5. +

    安装Trove。 +

    dnf install openstack-trove python-troveclient

    +
  6. +
  7. +

    修改配置文件。

    +

    编辑/etc/trove/trove.conf。 +

    [DEFAULT]
    +bind_host=192.168.0.2
    +log_dir = /var/log/trove
    +network_driver = trove.network.neutron.NeutronDriver
    +network_label_regex=.*
    +management_security_groups = <manage security group>
    +nova_keypair = trove-mgmt
    +default_datastore = mysql
    +taskmanager_manager = trove.taskmanager.manager.Manager
    +trove_api_workers = 5
    +transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
    +reboot_time_out = 300
    +usage_timeout = 900
    +agent_call_high_timeout = 1200
    +use_syslog = False
    +debug = True
    +
    +[database]
    +connection = mysql+pymysql://trove:TROVE_DBPASS@controller/trove
    +
    +[keystone_authtoken]
    +auth_url = http://controller:5000/v3/
    +auth_type = password
    +project_domain_name = Default
    +project_name = service
    +user_domain_name = Default
    +password = TROVE_PASS
    +username = trove
    +
    +[service_credentials]
    +auth_url = http://controller:5000/v3/
    +region_name = RegionOne
    +project_name = service
    +project_domain_name = Default
    +user_domain_name = Default
    +username = trove
    +password = TROVE_PASS
    +
    +[mariadb]
    +tcp_ports = 3306,4444,4567,4568
    +
    +[mysql]
    +tcp_ports = 3306
    +
    +[postgresql]
    +tcp_ports = 5432

    +

    解释:

    +
    +

    [DEFAULT]分组中bind_host配置为Trove控制节点的IP;transport_url为RabbitMQ连接信息,RABBIT_PASS替换为RabbitMQ的密码;[database]分组中的connection为前面在mysql中为Trove创建的数据库信息;TROVE_PASS替换为实际trove用户的密码。

    +
    +

    编辑/etc/trove/trove-guestagent.conf。 +

    [DEFAULT]
    +log_file = trove-guestagent.log
    +log_dir = /var/log/trove/
    +ignore_users = os_admin
    +control_exchange = trove
    +transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
    +rpc_backend = rabbit
    +command_process_timeout = 60
    +use_syslog = False
    +debug = True
    +
    +[service_credentials]
    +auth_url = http://controller:5000/v3/
    +region_name = RegionOne
    +project_name = service
    +password = TROVE_PASS
    +project_domain_name = Default
    +user_domain_name = Default
    +username = trove
    +
    +[mysql]
    +docker_image = your-registry/your-repo/mysql
    +backup_docker_image = your-registry/your-repo/db-backup-mysql:1.1.0

    +

    解释:

    +
    +

    guestagent是trove中一个独立组件,需要预先内置到Trove通过Nova创建的虚拟机镜像中,在创建好数据库实例后,会起guestagent进程,负责通过消息队列(RabbitMQ)向Trove上报心跳,因此需要配置RabbitMQ的用户和密码信息。transport_url为RabbitMQ连接信息,RABBIT_PASS替换为RabbitMQ的密码;TROVE_PASS替换为实际trove用户的密码。从Victoria版开始,Trove使用一个统一的镜像来跑不同类型的数据库,数据库服务运行在Guest虚拟机的Docker容器中。

    +
    +
  8. +
  9. +

    数据库同步。 +

    su -s /bin/sh -c "trove-manage db_sync" trove

    +
  10. +
  11. +

    完成安装。 +

    # 配置服务自启
    +systemctl enable openstack-trove-api.service openstack-trove-taskmanager.service \ 
    +openstack-trove-conductor.service
    +
    +# 启动服务
    +systemctl start openstack-trove-api.service openstack-trove-taskmanager.service \ 
    +openstack-trove-conductor.service

    +
  12. +
+
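
After the services start, an optional way to confirm that the Trove API and its Keystone registration work (using the python-troveclient installed above) is:

    source ~/.admin-openrc
    openstack datastore list
    openstack database instance list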

Swift

+

Swift 提供了弹性可伸缩、高可用的分布式对象存储服务,适合存储大规模非结构化数据。

+

Controller节点

+
    +
  1. +

    创建服务凭证以及API端点。

    +

    创建服务凭证。 +

    # 创建swift用户
    +openstack user create --domain default --password-prompt swift
    +# 添加admin角色
    +openstack role add --project service --user swift admin
    +# 创建对象存储服务
    +openstack service create --name swift --description "OpenStack Object Storage" object-store

    +

    创建API端点。 +

    openstack endpoint create --region RegionOne object-store public http://controller:8080/v1/AUTH_%\(project_id\)s
    +openstack endpoint create --region RegionOne object-store internal http://controller:8080/v1/AUTH_%\(project_id\)s
    +openstack endpoint create --region RegionOne object-store admin http://controller:8080/v1 

    +
  2. +
  3. +

    安装Swift。 +

    dnf install openstack-swift-proxy python3-swiftclient python3-keystoneclient \ 
    +python3-keystonemiddleware memcached

    +
  4. +
  5. +

    配置proxy-server。

    +

    Swift RPM包里已经包含了一个基本可用的proxy-server.conf,只需要手动修改其中的ip和SWIFT_PASS即可。 +

    vim /etc/swift/proxy-server.conf
    +
    +[filter:authtoken]
    +paste.filter_factory = keystonemiddleware.auth_token:filter_factory
    +www_authenticate_uri = http://controller:5000
    +auth_url = http://controller:5000
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_id = default
    +user_domain_id = default
    +project_name = service
    +username = swift
    +password = SWIFT_PASS
    +delay_auth_decision = True
    +service_token_roles_required = True

    +
  6. +
+

Storage节点

+
    +
  1. +

    安装支持的程序包。 +

    dnf install openstack-swift-account openstack-swift-container openstack-swift-object
    +dnf install xfsprogs rsync

    +
  2. +
  3. +

    将设备/dev/sdb和/dev/sdc格式化为XFS。 +

    mkfs.xfs /dev/sdb
    +mkfs.xfs /dev/sdc

    +
  4. +
  5. +

    创建挂载点目录结构。 +

    mkdir -p /srv/node/sdb
    +mkdir -p /srv/node/sdc

    +
  6. +
  7. +

    找到新分区的UUID。 +

    blkid

    +
  8. +
  9. +

    编辑/etc/fstab文件并将以下内容添加到其中。 +

    UUID="<UUID-from-output-above>" /srv/node/sdb xfs noatime 0 2
    +UUID="<UUID-from-output-above>" /srv/node/sdc xfs noatime 0 2

    +
  10. +
  11. +

    挂载设备。 +

    mount /srv/node/sdb
    +mount /srv/node/sdc

    +

    注意

    +

    如果用户不需要容灾功能,以上步骤只需要创建一个设备即可,同时可以跳过下面的rsync配置。

    +
  12. +
  13. +

    (可选)创建或编辑/etc/rsyncd.conf文件以包含以下内容: +

    [DEFAULT]
    +uid = swift
    +gid = swift
    +log file = /var/log/rsyncd.log
    +pid file = /var/run/rsyncd.pid
    +address = MANAGEMENT_INTERFACE_IP_ADDRESS
    +
    +[account]
    +max connections = 2
    +path = /srv/node/
    +read only = False
    +lock file = /var/lock/account.lock
    +
    +[container]
    +max connections = 2
    +path = /srv/node/
    +read only = False
    +lock file = /var/lock/container.lock
    +
    +[object]
    +max connections = 2
    +path = /srv/node/
    +read only = False
    +lock file = /var/lock/object.lock

    +

    替换MANAGEMENT_INTERFACE_IP_ADDRESS为存储节点上管理网络的IP地址

    +

    启动rsyncd服务并配置它在系统启动时启动: +

    systemctl enable rsyncd.service
    +systemctl start rsyncd.service

    +
  14. +
  15. +

    配置存储节点。

    +

    编辑/etc/swift目录的account-server.conf、container-server.conf和object-server.conf文件,替换bind_ip为存储节点上管理网络的IP地址。 +

    [DEFAULT]
    +bind_ip = 192.168.0.4

    +

    确保挂载点目录结构的正确所有权。 +

    chown -R swift:swift /srv/node

    +

    创建recon目录并确保其拥有正确的所有权。 +

    mkdir -p /var/cache/swift
    +chown -R root:swift /var/cache/swift
    +chmod -R 775 /var/cache/swift

    +
  16. +
+

Controller节点创建并分发环

+
    +
  1. +

    创建账号环。

    +

    切换到/etc/swift目录。 +

    cd /etc/swift

    +

    创建基础account.builder文件。 +

    swift-ring-builder account.builder create 10 1 1

    +

    将每个存储节点添加到环中。 +

    swift-ring-builder account.builder add --region 1 --zone 1 \
    +--ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS \ 
    +--port 6202  --device DEVICE_NAME \ 
    +--weight 100

    +
    +

    替换STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS为存储节点上管理网络的IP地址。\ +替换DEVICE_NAME为同一存储节点上的存储设备名称。

    +
    +

    注意

    +

    对每个存储节点上的每个存储设备重复此命令

    +

    验证账号环内容。 +

    swift-ring-builder account.builder

    +

    重新平衡账号环。 +

    swift-ring-builder account.builder rebalance

    +
  2. +
  3. +

    创建容器环。

    +

    切换到/etc/swift目录。

    +

    创建基础container.builder文件。 +

    swift-ring-builder container.builder create 10 1 1

    +

    将每个存储节点添加到环中。 +

    swift-ring-builder container.builder add --region 1 --zone 1 \
    +--ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS 
    +--port 6201 --device DEVICE_NAME \
    +--weight 100

    +
    +

    替换STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS为存储节点上管理网络的IP地址。\ +替换DEVICE_NAME为同一存储节点上的存储设备名称。

    +
    +

    注意

    +

    对每个存储节点上的每个存储设备重复此命令

    +

    验证容器环内容。 +

    swift-ring-builder container.builder

    +

    重新平衡容器环。 +

    swift-ring-builder container.builder rebalance

    +
  4. +
  5. +

    创建对象环。

    +

    切换到/etc/swift目录。

    +

    创建基础object.builder文件。 +

    swift-ring-builder object.builder create 10 1 1

    +

    将每个存储节点添加到环中。 +

     swift-ring-builder object.builder add --region 1 --zone 1 \
    + --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS \
    + --port 6200 --device DEVICE_NAME \
    + --weight 100

    +
    +

    替换STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS为存储节点上管理网络的IP地址。\ +替换DEVICE_NAME为同一存储节点上的存储设备名称。

    +
    +

    注意

    +

    对每个存储节点上的每个存储设备重复此命令

    +

    验证对象环内容。 +

    swift-ring-builder object.builder

    +

    重新平衡对象环。 +

    swift-ring-builder object.builder rebalance

    +
  6. +
  7. +

    分发环配置文件。

    +

    将account.ring.gz、container.ring.gz以及object.ring.gz文件复制到每个存储节点和运行代理服务的任何其他节点上的/etc/swift目录。

    +
  8. +
  9. +

    编辑配置文件/etc/swift/swift.conf。 +

    [swift-hash]
    +swift_hash_path_suffix = test-hash
    +swift_hash_path_prefix = test-hash
    +
    +[storage-policy:0]
    +name = Policy-0
    +default = yes

    +

    用唯一值替换 test-hash

    +

    将swift.conf文件复制到/etc/swift每个存储节点和运行代理服务的任何其他节点上的目录。

    +

    在所有节点上,确保配置目录的正确所有权。 +

    chown -R root:swift /etc/swift

    +
  10. +
  11. +

    完成安装

    +
  12. +
+

在控制节点和运行代理服务的任何其他节点上,启动对象存储代理服务及其依赖项,并将它们配置为在系统启动时启动。 +

systemctl enable openstack-swift-proxy.service memcached.service
+systemctl start openstack-swift-proxy.service memcached.service

+

在存储节点上,启动对象存储服务并将它们配置为在系统启动时启动。 +

systemctl enable openstack-swift-account.service \
+openstack-swift-account-auditor.service \
+openstack-swift-account-reaper.service \
+openstack-swift-account-replicator.service \
+openstack-swift-container.service \
+openstack-swift-container-auditor.service \
+openstack-swift-container-replicator.service \
+openstack-swift-container-updater.service \
+openstack-swift-object.service \
+openstack-swift-object-auditor.service \
+openstack-swift-object-replicator.service \
+openstack-swift-object-updater.service
+
+systemctl start openstack-swift-account.service \
+openstack-swift-account-auditor.service \
+openstack-swift-account-reaper.service \
+openstack-swift-account-replicator.service \
+openstack-swift-container.service \
+openstack-swift-container-auditor.service \
+openstack-swift-container-replicator.service \
+openstack-swift-container-updater.service \
+openstack-swift-object.service \
+openstack-swift-object-auditor.service \
+openstack-swift-object-replicator.service \
+openstack-swift-object-updater.service

+
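
With the proxy and storage services running, object storage can be exercised end to end. The container and object names below are only examples:

    source ~/.admin-openrc
    openstack container create test-container
    openstack object create test-container /etc/hosts
    openstack object list test-container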

Cyborg

+

Cyborg为OpenStack提供加速器设备的支持,包括 GPU, FPGA, ASIC, NP, SoCs, NVMe/NOF SSDs, ODP, DPDK/SPDK等等。

+

Controller节点

+
    +
  1. +

    初始化对应数据库

    +
    mysql -u root -p
    +
    +MariaDB [(none)]> CREATE DATABASE cyborg;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'localhost' IDENTIFIED BY 'CYBORG_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'%' IDENTIFIED BY 'CYBORG_DBPASS';
    +MariaDB [(none)]> exit;
    +
  2. +
  3. +

    创建用户和服务,并记住创建cyborg用户时输入的密码,用于配置CYBORG_PASS

    +
    source ~/.admin-openrc
    +openstack user create --domain default --password-prompt cyborg
    +openstack role add --project service --user cyborg admin
    +openstack service create --name cyborg --description "Acceleration Service" accelerator
    +
  4. +
  5. +

    使用uwsgi部署Cyborg api服务

    +
    openstack endpoint create --region RegionOne accelerator public http://controller/accelerator/v2
    +openstack endpoint create --region RegionOne accelerator internal http://controller/accelerator/v2
    +openstack endpoint create --region RegionOne accelerator admin http://controller/accelerator/v2
    +
  6. +
  7. +

    安装Cyborg

    +
    dnf install openstack-cyborg
    +
  8. +
  9. +

    配置Cyborg

    +

    修改/etc/cyborg/cyborg.conf

    +
    [DEFAULT]
    +transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
    +use_syslog = False
    +state_path = /var/lib/cyborg
    +debug = True
    +
    +[api]
    +host_ip = 0.0.0.0
    +
    +[database]
    +connection = mysql+pymysql://cyborg:CYBORG_DBPASS@controller/cyborg
    +
    +[service_catalog]
    +cafile = /opt/stack/data/ca-bundle.pem
    +project_domain_id = default
    +user_domain_id = default
    +project_name = service
    +password = CYBORG_PASS
    +username = cyborg
    +auth_url = http://controller:5000/v3/
    +auth_type = password
    +
    +[placement]
    +project_domain_name = Default
    +project_name = service
    +user_domain_name = Default
    +password = PLACEMENT_PASS
    +username = placement
    +auth_url = http://controller:5000/v3/
    +auth_type = password
    +auth_section = keystone_authtoken
    +
    +[nova]
    +project_domain_name = Default
    +project_name = service
    +user_domain_name = Default
    +password = NOVA_PASS
    +username = nova
    +auth_url = http://controller:5000/v3/
    +auth_type = password
    +auth_section = keystone_authtoken
    +
    +[keystone_authtoken]
    +memcached_servers = localhost:11211
    +signing_dir = /var/cache/cyborg/api
    +cafile = /opt/stack/data/ca-bundle.pem
    +project_domain_name = Default
    +project_name = service
    +user_domain_name = Default
    +password = CYBORG_PASS
    +username = cyborg
    +auth_url = http://controller:5000/v3/
    +auth_type = password
    +
  10. +
  11. +

    同步数据库表格

    +
    cyborg-dbsync --config-file /etc/cyborg/cyborg.conf upgrade
    +
  12. +
  13. +

    启动Cyborg服务

    +
    systemctl enable openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent
    +systemctl start openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent
    +
  14. +
+
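
An optional quick check that the Cyborg API responds (assuming the python3-cyborgclient OSC plugin is installed):

    source ~/.admin-openrc
    openstack accelerator device list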

Aodh

+

Aodh可以根据由Ceilometer或者Gnocchi收集的监控数据创建告警,并设置触发规则。

+

Controller节点

+
    +
  1. +

    创建数据库。

    +
    CREATE DATABASE aodh;
    +GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'localhost' IDENTIFIED BY 'AODH_DBPASS';
    +GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'%' IDENTIFIED BY 'AODH_DBPASS';
    +
  2. +
  3. +

    创建服务凭证以及API端点。

    +

    创建服务凭证。 +

    openstack user create --domain default --password-prompt aodh
    +openstack role add --project service --user aodh admin
    +openstack service create --name aodh --description "Telemetry" alarming

    +

    创建API端点。 +

    openstack endpoint create --region RegionOne alarming public http://controller:8042
    +openstack endpoint create --region RegionOne alarming internal http://controller:8042
    +openstack endpoint create --region RegionOne alarming admin http://controller:8042

    +
  4. +
  5. +

    安装Aodh。 +

    dnf install openstack-aodh-api openstack-aodh-evaluator \
    +openstack-aodh-notifier openstack-aodh-listener \
    +openstack-aodh-expirer python3-aodhclient

    +
  6. +
  7. +

    修改配置文件。 +

    vim /etc/aodh/aodh.conf
    +
    +[database]
    +connection = mysql+pymysql://aodh:AODH_DBPASS@controller/aodh
    +
    +[DEFAULT]
    +transport_url = rabbit://openstack:RABBIT_PASS@controller
    +auth_strategy = keystone
    +
    +[keystone_authtoken]
    +www_authenticate_uri = http://controller:5000
    +auth_url = http://controller:5000
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_id = default
    +user_domain_id = default
    +project_name = service
    +username = aodh
    +password = AODH_PASS
    +
    +[service_credentials]
    +auth_type = password
    +auth_url = http://controller:5000/v3
    +project_domain_id = default
    +user_domain_id = default
    +project_name = service
    +username = aodh
    +password = AODH_PASS
    +interface = internalURL
    +region_name = RegionOne

    +
  8. +
  9. +

    同步数据库。 +

    aodh-dbsync

    +
  10. +
  11. +

    完成安装。 +

    # 配置服务自启
    +systemctl enable openstack-aodh-api.service openstack-aodh-evaluator.service \
    +openstack-aodh-notifier.service openstack-aodh-listener.service
    +
    +# 启动服务
    +systemctl start openstack-aodh-api.service openstack-aodh-evaluator.service \
    +openstack-aodh-notifier.service openstack-aodh-listener.service

    +
  12. +
+
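
An optional check that the alarming API responds:

    source ~/.admin-openrc
    openstack alarm list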

Gnocchi

+

Gnocchi是一个开源的时间序列数据库,可以对接Ceilometer。

+

Controller节点

+
    +
  1. +

    创建数据库。 +

    CREATE DATABASE gnocchi;
    +GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'localhost' IDENTIFIED BY 'GNOCCHI_DBPASS';
    +GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'%' IDENTIFIED BY 'GNOCCHI_DBPASS';

    +
  2. +
  3. +

    创建服务凭证以及API端点。

    +

    创建服务凭证。 +

    openstack user create --domain default --password-prompt gnocchi
    +openstack role add --project service --user gnocchi admin
    +openstack service create --name gnocchi --description "Metric Service" metric

    +

    创建API端点。 +

    openstack endpoint create --region RegionOne metric public http://controller:8041
    +openstack endpoint create --region RegionOne metric internal http://controller:8041
    +openstack endpoint create --region RegionOne metric admin http://controller:8041

    +
  4. +
  5. +

    安装Gnocchi。 +

    dnf install openstack-gnocchi-api openstack-gnocchi-metricd python3-gnocchiclient

    +
  6. +
  7. +

    修改配置文件。 +

    vim /etc/gnocchi/gnocchi.conf
    +[api]
    +auth_mode = keystone
    +port = 8041
    +uwsgi_mode = http-socket
    +
    +[keystone_authtoken]
    +auth_type = password
    +auth_url = http://controller:5000/v3
    +project_domain_name = Default
    +user_domain_name = Default
    +project_name = service
    +username = gnocchi
    +password = GNOCCHI_PASS
    +interface = internalURL
    +region_name = RegionOne
    +
    +[indexer]
    +url = mysql+pymysql://gnocchi:GNOCCHI_DBPASS@controller/gnocchi
    +
    +[storage]
    +# coordination_url is not required but specifying one will improve
    +# performance with better workload division across workers.
    +# coordination_url = redis://controller:6379
    +file_basepath = /var/lib/gnocchi
    +driver = file

    +
  8. +
  9. +

    同步数据库。 +

    gnocchi-upgrade

    +
  10. +
  11. +

    完成安装。 +

    # 配置服务自启
    +systemctl enable openstack-gnocchi-api.service openstack-gnocchi-metricd.service
    +
    +# 启动服务
    +systemctl start openstack-gnocchi-api.service openstack-gnocchi-metricd.service

    +
  12. +
+
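
After gnocchi-upgrade has run and the services have started, the metric API can be checked with the gnocchi client installed above:

    source ~/.admin-openrc
    gnocchi archive-policy list
    gnocchi metric list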

Ceilometer

+

Ceilometer是OpenStack中负责数据收集的服务。

+

Controller节点

+
    +
  1. +

    创建服务凭证。 +

    openstack user create --domain default --password-prompt ceilometer
    +openstack role add --project service --user ceilometer admin
    +openstack service create --name ceilometer --description "Telemetry" metering

    +
  2. +
  3. +

    安装Ceilometer软件包。 +

    dnf install openstack-ceilometer-notification openstack-ceilometer-central

    +
  4. +
  5. +

    编辑配置文件/etc/ceilometer/pipeline.yaml。 +

    publishers:
    +    # set address of Gnocchi
    +    # + filter out Gnocchi-related activity meters (Swift driver)
    +    # + set default archive policy
    +    - gnocchi://?filter_project=service&archive_policy=low

    +
  6. +
  7. +

    编辑配置文件/etc/ceilometer/ceilometer.conf。 +

    [DEFAULT]
    +transport_url = rabbit://openstack:RABBIT_PASS@controller
    +
    +[service_credentials]
    +auth_type = password
    +auth_url = http://controller:5000/v3
    +project_domain_id = default
    +user_domain_id = default
    +project_name = service
    +username = ceilometer
    +password = CEILOMETER_PASS
    +interface = internalURL
    +region_name = RegionOne

    +
  8. +
  9. +

    数据库同步。 +

    ceilometer-upgrade

    +
  10. +
  11. +

    完成控制节点Ceilometer安装。 +

    # 配置服务自启
    +systemctl enable openstack-ceilometer-notification.service openstack-ceilometer-central.service
    +# 启动服务
    +systemctl start openstack-ceilometer-notification.service openstack-ceilometer-central.service

    +
  12. +
+

Compute节点

+
    +
  1. +

    安装Ceilometer软件包。 +

    dnf install openstack-ceilometer-compute
    +dnf install openstack-ceilometer-ipmi       # 可选

    +
  2. +
  3. +

    编辑配置文件/etc/ceilometer/ceilometer.conf。 +

    [DEFAULT]
    +transport_url = rabbit://openstack:RABBIT_PASS@controller
    +
    +[service_credentials]
    +auth_url = http://controller:5000
    +project_domain_id = default
    +user_domain_id = default
    +auth_type = password
    +username = ceilometer
    +project_name = service
    +password = CEILOMETER_PASS
    +interface = internalURL
    +region_name = RegionOne

    +
  4. +
  5. +

    编辑配置文件/etc/nova/nova.conf。 +

    [DEFAULT]
    +instance_usage_audit = True
    +instance_usage_audit_period = hour
    +
    +[notifications]
    +notify_on_state_change = vm_and_task_state
    +
    +[oslo_messaging_notifications]
    +driver = messagingv2

    +
  6. +
  7. +

    完成安装。 +

    systemctl enable openstack-ceilometer-compute.service
    +systemctl start openstack-ceilometer-compute.service
    +systemctl enable openstack-ceilometer-ipmi.service         # 可选
    +systemctl start openstack-ceilometer-ipmi.service          # 可选
    +
    +# 重启nova-compute服务
    +systemctl restart openstack-nova-compute.service

    +
  8. +
+
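
Ceilometer itself has no query API; the data it collects is written to Gnocchi. After a collection cycle or two, an optional way to confirm the pipeline works is to look for resources and measures on the controller (METRIC_ID is a placeholder taken from the metric list output):

    source ~/.admin-openrc
    gnocchi resource list
    gnocchi metric list
    gnocchi measures show METRIC_ID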

Heat

+

Heat是 OpenStack 自动编排服务,基于描述性的模板来编排复合云应用,也称为Orchestration Service。Heat 的各服务一般安装在Controller节点上。

+

Controller节点

+
    +
  1. +

    创建heat数据库,并授予heat数据库正确的访问权限,替换HEAT_DBPASS为合适的密码

    +
    mysql -u root -p
    +
    +MariaDB [(none)]> CREATE DATABASE heat;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'localhost' IDENTIFIED BY 'HEAT_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'%' IDENTIFIED BY 'HEAT_DBPASS';
    +MariaDB [(none)]> exit;
    +
  2. +
  3. +

    创建服务凭证,创建heat用户,并为其增加admin角色

    +
    source ~/.admin-openrc
    +
    +openstack user create --domain default --password-prompt heat
    +openstack role add --project service --user heat admin
    +
  4. +
  5. +

    创建heat、heat-cfn服务及其对应的API端点

    +
    openstack service create --name heat --description "Orchestration" orchestration
    +openstack service create --name heat-cfn --description "Orchestration"  cloudformation
    +openstack endpoint create --region RegionOne orchestration public http://controller:8004/v1/%\(tenant_id\)s
    +openstack endpoint create --region RegionOne orchestration internal http://controller:8004/v1/%\(tenant_id\)s
    +openstack endpoint create --region RegionOne orchestration admin http://controller:8004/v1/%\(tenant_id\)s
    +openstack endpoint create --region RegionOne cloudformation public http://controller:8000/v1
    +openstack endpoint create --region RegionOne cloudformation internal http://controller:8000/v1
    +openstack endpoint create --region RegionOne cloudformation admin http://controller:8000/v1
    +
  6. +
  7. +

    创建stack管理的额外信息

    +

    创建 heat domain:

    openstack domain create --description "Stack projects and users" heat

    在 heat domain下创建 heat_domain_admin 用户,并记下输入的密码,用于配置下面的HEAT_DOMAIN_PASS:

    openstack user create --domain heat --password-prompt heat_domain_admin

    为 heat_domain_admin 用户增加 admin 角色:

    openstack role add --domain heat --user-domain heat --user heat_domain_admin admin

    创建 heat_stack_owner 角色:

    openstack role create heat_stack_owner

    创建 heat_stack_user 角色:

    openstack role create heat_stack_user

    +
  8. +
  9. +

    安装软件包

    +
    dnf install openstack-heat-api openstack-heat-api-cfn openstack-heat-engine
    +
  10. +
  11. +

    修改配置文件/etc/heat/heat.conf

    +
    [DEFAULT]
    +transport_url = rabbit://openstack:RABBIT_PASS@controller
    +heat_metadata_server_url = http://controller:8000
    +heat_waitcondition_server_url = http://controller:8000/v1/waitcondition
    +stack_domain_admin = heat_domain_admin
    +stack_domain_admin_password = HEAT_DOMAIN_PASS
    +stack_user_domain_name = heat
    +
    +[database]
    +connection = mysql+pymysql://heat:HEAT_DBPASS@controller/heat
    +
    +[keystone_authtoken]
    +www_authenticate_uri = http://controller:5000
    +auth_url = http://controller:5000
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = default
    +user_domain_name = default
    +project_name = service
    +username = heat
    +password = HEAT_PASS
    +
    +[trustee]
    +auth_type = password
    +auth_url = http://controller:5000
    +username = heat
    +password = HEAT_PASS
    +user_domain_name = default
    +
    +[clients_keystone]
    +auth_uri = http://controller:5000
    +
  12. +
  13. +

    初始化heat数据库表

    +
    su -s /bin/sh -c "heat-manage db_sync" heat
    +
  14. +
  15. +

    启动服务

    +
    systemctl enable openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service
    +systemctl start openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service
    +
  16. +
+

Tempest

+

Tempest是OpenStack的集成测试服务,如果用户需要全面自动化测试已安装的OpenStack环境的功能,则推荐使用该组件。否则,可以不用安装。

+

Controller节点

+
    +
  1. +

    安装Tempest

    +
    dnf install openstack-tempest
    +
  2. +
  3. +

    初始化目录

    +
    tempest init mytest
    +
  4. +
  5. +

    修改配置文件。

    +
    cd mytest
    +vi etc/tempest.conf
    +

    tempest.conf中需要配置当前OpenStack环境的信息,具体内容可以参考官方示例

    +
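    tempest.conf 的一个最小化配置示意如下(仅为草稿示例:其中的认证地址、密码、镜像/规格/网络 ID 等均为假设值,需按实际环境替换,完整配置请以官方示例为准):

    [auth]
    # 管理员凭证,与 ~/.admin-openrc 保持一致
    admin_username = admin
    admin_password = ADMIN_PASS
    admin_project_name = admin
    admin_domain_name = Default

    [identity]
    uri_v3 = http://controller:5000/v3

    [compute]
    # 测试用镜像与规格,按实际环境填写
    image_ref = <镜像ID>
    flavor_ref = <规格ID>

    [network]
    public_network_id = <外部网络ID>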
  6. +
  7. +

    执行测试

    +
    tempest run
    +
  8. +
  9. +

    安装tempest扩展(可选)。OpenStack各个服务本身也提供了一些tempest测试包,用户可以安装这些包来丰富tempest的测试内容。在Antelope中,我们提供了Cinder、Glance、Keystone、Ironic、Trove的扩展测试,用户可以执行如下命令进行安装使用:

    dnf install python3-cinder-tempest-plugin python3-glance-tempest-plugin python3-ironic-tempest-plugin python3-keystone-tempest-plugin python3-trove-tempest-plugin

    +
  10. +
+

基于OpenStack SIG开发工具oos部署

+

oos(openEuler OpenStack SIG)是OpenStack SIG提供的命令行工具。其中oos env系列命令提供了一键部署OpenStack (all in one或三节点cluster)的ansible脚本,用户可以使用该脚本快速部署一套基于 openEuler RPM 的 OpenStack 环境。oos工具支持对接云provider(目前仅支持华为云provider)和主机纳管两种方式来部署 OpenStack 环境,下面以对接华为云部署一套all in one的OpenStack环境为例说明oos工具的使用方法。

+
    +
  1. +

    安装oos工具

    +
    yum install openstack-sig-tool
    +
  2. +
  3. +

    配置对接华为云provider的信息

    +

    打开/usr/local/etc/oos/oos.conf文件,修改配置为您拥有的华为云资源信息,AK/SK是用户的华为云登录密钥,其他配置保持默认即可(默认使用新加坡region),需要提前在云上创建对应的资源,包括:

    +
      +
    • 一个安全组,名字默认是oos
    • +
    • 一个openEuler镜像,名称格式是openEuler-%(release)s-%(arch)s,例如openEuler-24.03-SP2-arm64
    • +
    • 一个VPC,名称是oos_vpc
    • +
    • 该VPC下面两个子网,名称分别是oos_subnet1和oos_subnet2
    • +
    +
    [huaweicloud]
    +ak = 
    +sk = 
    +region = ap-southeast-3
    +root_volume_size = 100
    +data_volume_size = 100
    +security_group_name = oos
    +image_format = openEuler-%%(release)s-%%(arch)s
    +vpc_name = oos_vpc
    +subnet1_name = oos_subnet1
    +subnet2_name = oos_subnet2
    +
  4. +
  5. +

    配置 OpenStack 环境信息

    +

    打开/usr/local/etc/oos/oos.conf文件,根据当前机器环境和需求修改配置。内容如下:

    +
    [environment]
    +mysql_root_password = root
    +mysql_project_password = root
    +rabbitmq_password = root
    +project_identity_password = root
    +enabled_service = keystone,neutron,cinder,placement,nova,glance,horizon,aodh,ceilometer,cyborg,gnocchi,kolla,heat,swift,trove,tempest
    +neutron_provider_interface_name = br-ex
    +default_ext_subnet_range = 10.100.100.0/24
    +default_ext_subnet_gateway = 10.100.100.1
    +neutron_dataplane_interface_name = eth1
    +cinder_block_device = vdb
    +swift_storage_devices = vdc
    +swift_hash_path_suffix = ash
    +swift_hash_path_prefix = has
    +glance_api_workers = 2
    +cinder_api_workers = 2
    +nova_api_workers = 2
    +nova_metadata_api_workers = 2
    +nova_conductor_workers = 2
    +nova_scheduler_workers = 2
    +neutron_api_workers = 2
    +horizon_allowed_host = *
    +kolla_openeuler_plugin = false
    +

    关键配置

    | 配置项 | 解释 |
    |:------|:-----|
    | enabled_service | 安装服务列表,根据用户需求自行删减 |
    | neutron_provider_interface_name | neutron L3网桥名称 |
    | default_ext_subnet_range | neutron私网IP段 |
    | default_ext_subnet_gateway | neutron私网gateway |
    | neutron_dataplane_interface_name | neutron使用的网卡,推荐使用一张新的网卡,以免和现有网卡冲突,防止all in one主机断连的情况 |
    | cinder_block_device | cinder使用的卷设备名 |
    | swift_storage_devices | swift使用的卷设备名 |
    | kolla_openeuler_plugin | 是否启用kolla plugin。设置为True,kolla将支持部署openEuler容器(只在openEuler LTS上支持) |
    +
  6. +
  7. +

    在华为云上创建一台openEuler 24.03 LTS SP2的x86_64虚拟机,用于部署all in one的OpenStack

    +
    # sshpass在`oos env create`过程中被使用,用于配置对目标虚拟机的免密访问
    +dnf install sshpass
    +oos env create -r 24.03-lts-SP2 -f small -a x86 -n test-oos all_in_one
    +

    具体的参数可以使用oos env create --help命令查看

    +
  8. +
  9. +

    部署OpenStack all in one 环境

    +
    oos env setup test-oos -r antelope
    +

    具体的参数可以使用oos env setup --help命令查看

    +
  10. +
  11. +

    初始化tempest环境

    +

    如果用户想使用该环境运行tempest测试的话,可以执行命令oos env init,会自动把tempest需要的OpenStack资源自动创建好

    +
    oos env init test-oos
    +
  12. +
  13. +

    执行tempest测试

    +

    用户可以使用oos自动执行:

    +
    oos env test test-oos
    +

    也可以手动登录目标节点,进入根目录下的mytest目录,手动执行tempest run

    +
  14. +
+

如果是以主机纳管的方式部署 OpenStack 环境,总体逻辑与上文对接华为云时一致,1、3、5、6步操作不变,跳过第2步对华为云provider信息的配置,在第4步改为纳管主机操作。

+

被纳管的虚机需要保证:

+
    +
  • 至少有一张给oos使用的网卡,名称与配置保持一致,相关配置neutron_dataplane_interface_name
  • +
  • 至少有一块给oos使用的硬盘,名称与配置保持一致,相关配置cinder_block_device
  • +
  • 如果要部署swift服务,则需要新增一块硬盘,名称与配置保持一致,相关配置swift_storage_devices
  • +
+
# sshpass在`oos env create`过程中被使用,用于配置对目标主机的免密访问
+dnf install sshpass
+oos env manage -r 24.03-lts-SP2 -i TARGET_MACHINE_IP -p TARGET_MACHINE_PASSWD -n test-oos
+

替换TARGET_MACHINE_IP为目标机ip、TARGET_MACHINE_PASSWD为目标机密码。具体的参数可以使用oos env manage --help命令查看。

diff --git a/site/install/openEuler-24.03-LTS-SP2/OpenStack-wallaby/index.html b/site/install/openEuler-24.03-LTS-SP2/OpenStack-wallaby/index.html new file mode 100644 index 0000000000000000000000000000000000000000..de9fbc7c94d59262d0c3de16d7f3b5f5ac0a586b --- /dev/null +++ b/site/install/openEuler-24.03-LTS-SP2/OpenStack-wallaby/index.html @@ -0,0 +1,2673 @@

openEuler-24.03-LTS-SP2_Wallaby - OpenStack SIG Doc

OpenStack-Wallaby 部署指南

+ +

OpenStack 简介

+

OpenStack 是一个社区,也是一个项目。它提供了一个部署云的操作平台或工具集,为组织提供可扩展的、灵活的云计算。

+

作为一个开源的云计算管理平台,OpenStack 由nova、cinder、neutron、glance、keystone、horizon等几个主要的组件组合起来完成具体工作。OpenStack 支持几乎所有类型的云环境,项目目标是提供实施简单、可大规模扩展、丰富、标准统一的云计算管理平台。OpenStack 通过各种互补的服务提供了基础设施即服务(IaaS)的解决方案,每个服务提供 API 进行集成。

+

openEuler 24.03-LTS-SP2 版本官方源已经支持 OpenStack-Wallaby 版本,用户可以配置好 yum 源后根据此文档进行 OpenStack 部署。

+

约定

+

OpenStack 支持多种形态部署,此文档支持ALL in One以及Distributed两种部署方式,按照如下方式约定:

+

ALL in One模式:

+
忽略所有可能的后缀
+

Distributed模式:

+
以 `(CTL)` 为后缀表示此条配置或者命令仅适用`控制节点`
+以 `(CPT)` 为后缀表示此条配置或者命令仅适用`计算节点`
+以 `(STG)` 为后缀表示此条配置或者命令仅适用`存储节点`
+除此之外表示此条配置或者命令同时适用`控制节点`和`计算节点`
+

注意

+

涉及到以上约定的服务如下:

+
    +
  • Cinder
  • +
  • Nova
  • +
  • Neutron
  • +
+

准备环境

+

环境配置

+
    +
  1. +

    配置 24.03 LTS SP2 官方 yum 源,需要启用 EPOL 软件仓以支持 OpenStack

    +
    yum update
    +yum install openstack-release-wallaby
    +yum clean all && yum makecache
    +

    注意:如果你的环境的YUM源尚未启用EPOL软件仓,需要同时配置EPOL,确保其已启用,配置示例如下。

    +
    vi /etc/yum.repos.d/openEuler.repo
    +
    +[EPOL]
    +name=EPOL
    +baseurl=http://repo.openeuler.org/openEuler-24.03-LTS-SP2/EPOL/main/$basearch/
    +enabled=1
    +gpgcheck=1
    +gpgkey=http://repo.openeuler.org/openEuler-24.03-LTS-SP2/OS/$basearch/RPM-GPG-KEY-openEuler
    +
  2. +
  3. +

    修改主机名以及映射

    +

    设置各个节点的主机名

    +
    hostnamectl set-hostname controller                                                            (CTL)
    +hostnamectl set-hostname compute                                                               (CPT)
    +

    假设controller节点的IP是10.0.0.11,compute节点的IP是10.0.0.12(如果存在的话),则于/etc/hosts新增如下:

    +
    10.0.0.11   controller
    +10.0.0.12   compute
    +
  4. +
+

安装 SQL DataBase

+
    +
  1. +

    执行如下命令,安装软件包。

    +
    yum install mariadb mariadb-server python3-PyMySQL
    +
  2. +
  3. +

    执行如下命令,创建并编辑 /etc/my.cnf.d/openstack.cnf 文件。

    +
    vim /etc/my.cnf.d/openstack.cnf
    +
    +[mysqld]
    +bind-address = 10.0.0.11
    +default-storage-engine = innodb
    +innodb_file_per_table = on
    +max_connections = 4096
    +collation-server = utf8_general_ci
    +character-set-server = utf8
    +

    注意

    +

    其中 bind-address 设置为控制节点的管理IP地址。

    +
  4. +
  5. +

    启动 DataBase 服务,并为其配置开机自启动:

    +
    systemctl enable mariadb.service
    +systemctl start mariadb.service
    +
  6. +
  7. +

    配置DataBase的默认密码(可选)

    +
    mysql_secure_installation
    +

    注意

    +

    根据提示进行即可

    +
  8. +
+

安装 RabbitMQ

+
    +
  1. +

    执行如下命令,安装软件包。

    +
    yum install rabbitmq-server
    +
  2. +
  3. +

    启动 RabbitMQ 服务,并为其配置开机自启动。

    +
    systemctl enable rabbitmq-server.service
    +systemctl start rabbitmq-server.service
    +
  4. +
  5. +

    添加 OpenStack用户。

    +
    rabbitmqctl add_user openstack RABBIT_PASS
    +

    注意

    +

    替换 RABBIT_PASS,为 OpenStack 用户设置密码

    +
  6. +
  7. +

    设置openstack用户权限,允许进行配置、写、读:

    +
    rabbitmqctl set_permissions openstack ".*" ".*" ".*"
    +
  8. +
+

安装 Memcached

+
    +
  1. +

    执行如下命令,安装依赖软件包。

    +
    yum install memcached python3-memcached
    +
  2. +
  3. +

    编辑 /etc/sysconfig/memcached 文件。

    +
    vim /etc/sysconfig/memcached
    +
    +OPTIONS="-l 127.0.0.1,::1,controller"
    +
  4. +
  5. +

    执行如下命令,启动 Memcached 服务,并为其配置开机启动。

    +
    systemctl enable memcached.service
    +systemctl start memcached.service
    +

    注意

    +

    服务启动后,可以通过命令memcached-tool controller stats确保启动正常,服务可用,其中可以将controller替换为控制节点的管理IP地址。

    +
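    例如(其中 controller 可替换为控制节点的管理 IP):

    # 输出各项统计信息即说明 memcached 已正常对外提供服务
    memcached-tool controller stats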
  6. +
+

安装 OpenStack

+

Keystone 安装

+
    +
  1. +

    创建 keystone 数据库并授权。

    +
    mysql -u root -p
    +
    +MariaDB [(none)]> CREATE DATABASE keystone;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
    +IDENTIFIED BY 'KEYSTONE_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
    +IDENTIFIED BY 'KEYSTONE_DBPASS';
    +MariaDB [(none)]> exit
    +

    注意

    +

    替换 KEYSTONE_DBPASS,为 Keystone 数据库设置密码

    +
  2. +
  3. +

    安装软件包。

    +
    yum install openstack-keystone httpd mod_wsgi
    +
  4. +
  5. +

    配置keystone相关配置

    +
    vim /etc/keystone/keystone.conf
    +
    +[database]
    +connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone
    +
    +[token]
    +provider = fernet
    +

    解释

    +

    [database]部分,配置数据库入口

    +

    [token]部分,配置token provider

    +

    注意:

    +

    替换 KEYSTONE_DBPASS 为 Keystone 数据库的密码

    +
  6. +
  7. +

    同步数据库。

    +
    su -s /bin/sh -c "keystone-manage db_sync" keystone
    +
  8. +
  9. +

    初始化Fernet密钥仓库。

    +
    keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
    +keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
    +
  10. +
  11. +

    启动服务。

    +
    keystone-manage bootstrap --bootstrap-password ADMIN_PASS \
    +--bootstrap-admin-url http://controller:5000/v3/ \
    +--bootstrap-internal-url http://controller:5000/v3/ \
    +--bootstrap-public-url http://controller:5000/v3/ \
    +--bootstrap-region-id RegionOne
    +

    注意

    +

    替换 ADMIN_PASS,为 admin 用户设置密码

    +
  12. +
  13. +

    配置Apache HTTP server

    +
    vim /etc/httpd/conf/httpd.conf
    +
    +ServerName controller
    +
    ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
    +

    解释

    +

    配置 ServerName 项引用控制节点

    +

    注意:如果 ServerName 项不存在则需要创建

    +
  14. +
  15. +

    启动Apache HTTP服务。

    +
    systemctl enable httpd.service
    +systemctl start httpd.service
    +
  16. +
  17. +

    创建环境变量配置。

    +
    cat << EOF >> ~/.admin-openrc
    +export OS_PROJECT_DOMAIN_NAME=Default
    +export OS_USER_DOMAIN_NAME=Default
    +export OS_PROJECT_NAME=admin
    +export OS_USERNAME=admin
    +export OS_PASSWORD=ADMIN_PASS
    +export OS_AUTH_URL=http://controller:5000/v3
    +export OS_IDENTITY_API_VERSION=3
    +export OS_IMAGE_API_VERSION=2
    +EOF
    +

    注意

    +

    替换 ADMIN_PASS 为 admin 用户的密码

    +
  18. +
  19. +

    依次创建domain, projects, users, roles,需要先安装好python3-openstackclient:

    +
    yum install python3-openstackclient
    +

    导入环境变量

    +
    source ~/.admin-openrc
    +

    创建project service,其中 domain default 在 keystone-manage bootstrap 时已创建

    +
    openstack domain create --description "An Example Domain" example
    +
    openstack project create --domain default --description "Service Project" service
    +

    创建(non-admin)project myproject,user myuser 和 role myrole,为 myprojectmyuser 添加角色myrole

    +
    openstack project create --domain default --description "Demo Project" myproject
    +openstack user create --domain default --password-prompt myuser
    +openstack role create myrole
    +openstack role add --project myproject --user myuser myrole
    +
  20. +
  21. +

    验证

    +

    取消临时环境变量OS_AUTH_URL和OS_PASSWORD:

    +
    source ~/.admin-openrc
    +unset OS_AUTH_URL OS_PASSWORD
    +

    为admin用户请求token:

    +
    openstack --os-auth-url http://controller:5000/v3 \
    +--os-project-domain-name Default --os-user-domain-name Default \
    +--os-project-name admin --os-username admin token issue
    +

    为myuser用户请求token:

    +
    openstack --os-auth-url http://controller:5000/v3 \
    +--os-project-domain-name Default --os-user-domain-name Default \
    +--os-project-name myproject --os-username myuser token issue
    +
  22. +
+

Glance 安装

+
    +
  1. +

    创建数据库、服务凭证和 API 端点

    +

    创建数据库:

    +
    mysql -u root -p
    +
    +MariaDB [(none)]> CREATE DATABASE glance;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
    +IDENTIFIED BY 'GLANCE_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
    +IDENTIFIED BY 'GLANCE_DBPASS';
    +MariaDB [(none)]> exit
    +

    注意:

    +

    替换 GLANCE_DBPASS,为 glance 数据库设置密码

    +

    创建服务凭证

    +
    source ~/.admin-openrc
    +
    +openstack user create --domain default --password-prompt glance
    +openstack role add --project service --user glance admin
    +openstack service create --name glance --description "OpenStack Image" image
    +

    创建镜像服务API端点:

    +
    openstack endpoint create --region RegionOne image public http://controller:9292
    +openstack endpoint create --region RegionOne image internal http://controller:9292
    +openstack endpoint create --region RegionOne image admin http://controller:9292
    +
  2. +
  3. +

    安装软件包

    +
    yum install openstack-glance
    +
  4. +
  5. +

    配置glance相关配置:

    +
    vim /etc/glance/glance-api.conf
    +
    +[database]
    +connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
    +
    +[keystone_authtoken]
    +www_authenticate_uri  = http://controller:5000
    +auth_url = http://controller:5000
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +project_name = service
    +username = glance
    +password = GLANCE_PASS
    +
    +[paste_deploy]
    +flavor = keystone
    +
    +[glance_store]
    +stores = file,http
    +default_store = file
    +filesystem_store_datadir = /var/lib/glance/images/
    +

    解释:

    +

    [database]部分,配置数据库入口

    +

    [keystone_authtoken] [paste_deploy]部分,配置身份认证服务入口

    +

    [glance_store]部分,配置本地文件系统存储和镜像文件的位置

    +

    注意

    +

    替换 GLANCE_DBPASS 为 glance 数据库的密码

    +

    替换 GLANCE_PASS 为 glance 用户的密码

    +
  6. +
  7. +

    同步数据库:

    +
    su -s /bin/sh -c "glance-manage db_sync" glance
    +
  8. +
  9. +

    启动服务:

    +
    systemctl enable openstack-glance-api.service
    +systemctl start openstack-glance-api.service
    +
  10. +
  11. +

    验证

    +

    下载镜像

    +
    source ~/.admin-openrc
    +
    +wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
    +

    注意

    +

    如果您使用的环境是鲲鹏架构,请下载aarch64版本的镜像;已对镜像cirros-0.5.2-aarch64-disk.img进行测试。

    +
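    例如,在鲲鹏(aarch64)环境可以参照如下方式下载(下载地址以 cirros 官方发布页为准,此处 URL 仅为示例):

    wget http://download.cirros-cloud.net/0.5.2/cirros-0.5.2-aarch64-disk.img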

    向Image服务上传镜像:

    +
    openstack image create --disk-format qcow2 --container-format bare \
    +                       --file cirros-0.4.0-x86_64-disk.img --public cirros
    +

    确认镜像上传并验证属性:

    +
    openstack image list
    +
  12. +
+

Placement安装

+
    +
  1. +

    创建数据库、服务凭证和 API 端点

    +

    创建数据库:

    +

    作为 root 用户访问数据库,创建 placement 数据库并授权。

    +
    mysql -u root -p
    +MariaDB [(none)]> CREATE DATABASE placement;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' \
    +IDENTIFIED BY 'PLACEMENT_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' \
    +IDENTIFIED BY 'PLACEMENT_DBPASS';
    +MariaDB [(none)]> exit
    +

    注意

    +

    替换 PLACEMENT_DBPASS 为 placement 数据库设置密码

    +
    source ~/.admin-openrc
    +

    执行如下命令,创建 placement 服务凭证、创建 placement 用户以及添加‘admin’角色到用户‘placement’。

    +

    创建Placement API服务

    +
    openstack user create --domain default --password-prompt placement
    +openstack role add --project service --user placement admin
    +openstack service create --name placement --description "Placement API" placement
    +

    创建placement服务API端点:

    +
    openstack endpoint create --region RegionOne placement public http://controller:8778
    +openstack endpoint create --region RegionOne placement internal http://controller:8778
    +openstack endpoint create --region RegionOne placement admin http://controller:8778
    +
  2. +
  3. +

    安装和配置

    +

    安装软件包:

    +
    yum install openstack-placement-api
    +

    配置placement:

    +

    编辑 /etc/placement/placement.conf 文件:

    +

    在[placement_database]部分,配置数据库入口

    +

    在[api] [keystone_authtoken]部分,配置身份认证服务入口

    +
    # vim /etc/placement/placement.conf
    +[placement_database]
    +# ...
    +connection = mysql+pymysql://placement:PLACEMENT_DBPASS@controller/placement
    +[api]
    +# ...
    +auth_strategy = keystone
    +[keystone_authtoken]
    +# ...
    +auth_url = http://controller:5000/v3
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +project_name = service
    +username = placement
    +password = PLACEMENT_PASS
    +

    其中,替换 PLACEMENT_DBPASS 为 placement 数据库的密码,替换 PLACEMENT_PASS 为 placement 用户的密码。

    +

    同步数据库:

    +
    su -s /bin/sh -c "placement-manage db sync" placement
    +

    启动httpd服务:

    +
    systemctl restart httpd
    +
  4. +
  5. +

    验证

    +

    执行如下命令,执行状态检查:

    +
    source ~/.admin-openrc
    +placement-status upgrade check
    +

    安装osc-placement,列出可用的资源类别及特性:

    +
    yum install python3-osc-placement
    +openstack --os-placement-api-version 1.2 resource class list --sort-column name
    +openstack --os-placement-api-version 1.6 trait list --sort-column name
    +
  6. +
+

Nova 安装

+
    +
  1. +

    创建数据库、服务凭证和 API 端点

    +

    创建数据库:

    +
    mysql -u root -p                                                                               (CTL)
    +
    +MariaDB [(none)]> CREATE DATABASE nova_api;
    +MariaDB [(none)]> CREATE DATABASE nova;
    +MariaDB [(none)]> CREATE DATABASE nova_cell0;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> exit
    +

    注意

    +

    替换NOVA_DBPASS,为nova数据库设置密码

    +
    source ~/.admin-openrc                                                                         (CTL)
    +

    创建nova服务凭证:

    +
    openstack user create --domain default --password-prompt nova                                  (CTL)
    +openstack role add --project service --user nova admin                                         (CTL)
    +openstack service create --name nova --description "OpenStack Compute" compute                 (CTL)
    +

    创建nova API端点:

    +
    openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1        (CTL)
    +openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1      (CTL)
    +openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1         (CTL)
    +
  2. +
  3. +

    安装软件包

    +
    yum install openstack-nova-api openstack-nova-conductor \                                      (CTL)
    +openstack-nova-novncproxy openstack-nova-scheduler 
    +
    +yum install openstack-nova-compute                                                             (CPT)
    +

    注意

    +

    如果为arm64结构,还需要执行以下命令

    +
    yum install edk2-aarch64                                                                       (CPT)
    +
  4. +
  5. +

    配置nova相关配置

    +
    vim /etc/nova/nova.conf
    +
    +[DEFAULT]
    +enabled_apis = osapi_compute,metadata
    +transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
    +my_ip = 10.0.0.11
    +use_neutron = true
    +firewall_driver = nova.virt.firewall.NoopFirewallDriver
    +compute_driver=libvirt.LibvirtDriver                                                           (CPT)
    +instances_path = /var/lib/nova/instances/                                                      (CPT)
    +lock_path = /var/lib/nova/tmp                                                                  (CPT)
    +
    +[api_database]
    +connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api                              (CTL)
    +
    +[database]
    +connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova                                  (CTL)
    +
    +[api]
    +auth_strategy = keystone
    +
    +[keystone_authtoken]
    +www_authenticate_uri = http://controller:5000/
    +auth_url = http://controller:5000/
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +project_name = service
    +username = nova
    +password = NOVA_PASS
    +
    +[vnc]
    +enabled = true
    +server_listen = $my_ip
    +server_proxyclient_address = $my_ip
    +novncproxy_base_url = http://controller:6080/vnc_auto.html                                     (CPT)
    +
    +[libvirt]
    +virt_type = qemu                                                                               (CPT)
    +cpu_mode = custom                                                                              (CPT)
    +cpu_model = cortex-a72                                                                         (CPT)
    +
    +[glance]
    +api_servers = http://controller:9292
    +
    +[oslo_concurrency]
    +lock_path = /var/lib/nova/tmp                                                                  (CTL)
    +
    +[placement]
    +region_name = RegionOne
    +project_domain_name = Default
    +project_name = service
    +auth_type = password
    +user_domain_name = Default
    +auth_url = http://controller:5000/v3
    +username = placement
    +password = PLACEMENT_PASS
    +
    +[neutron]
    +auth_url = http://controller:5000
    +auth_type = password
    +project_domain_name = default
    +user_domain_name = default
    +region_name = RegionOne
    +project_name = service
    +username = neutron
    +password = NEUTRON_PASS
    +service_metadata_proxy = true                                                                  (CTL)
    +metadata_proxy_shared_secret = METADATA_SECRET                                                 (CTL)
    +

    解释

    +

    [default]部分,启用计算和元数据的API,配置RabbitMQ消息队列入口,配置my_ip,启用网络服务neutron;

    +

    [api_database] [database]部分,配置数据库入口;

    +

    [api] [keystone_authtoken]部分,配置身份认证服务入口;

    +

    [vnc]部分,启用并配置远程控制台入口;

    +

    [glance]部分,配置镜像服务API的地址;

    +

    [oslo_concurrency]部分,配置lock path;

    +

    [placement]部分,配置placement服务的入口。

    +

    注意

    +

    替换 RABBIT_PASS 为 RabbitMQ 中 openstack 账户的密码;

    +

    配置 my_ip 为控制节点的管理IP地址;

    +

    替换 NOVA_DBPASS 为nova数据库的密码;

    +

    替换 NOVA_PASS 为nova用户的密码;

    +

    替换 PLACEMENT_PASS 为placement用户的密码;

    +

    替换 NEUTRON_PASS 为neutron用户的密码;

    +

    替换METADATA_SECRET为合适的元数据代理secret。

    +

    额外

    +

    确定是否支持虚拟机硬件加速(x86架构):

    +
    egrep -c '(vmx|svm)' /proc/cpuinfo                                                             (CPT)
    +

    如果返回值为0则不支持硬件加速,需要配置libvirt使用QEMU而不是KVM:

    +
    vim /etc/nova/nova.conf                                                                        (CPT)
    +
    +[libvirt]
    +virt_type = qemu
    +

    如果返回值为1或更大的值,则支持硬件加速,不需要进行额外的配置

    +

    注意

    +

    如果为arm64结构,还需要执行以下命令

    +
    vim /etc/libvirt/qemu.conf
    +
    +nvram = ["/usr/share/AAVMF/AAVMF_CODE.fd: \
    +         /usr/share/AAVMF/AAVMF_VARS.fd", \
    +         "/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw: \
    +         /usr/share/edk2/aarch64/vars-template-pflash.raw"]
    +
    +vim /etc/qemu/firmware/edk2-aarch64.json
    +
    +{
    +    "description": "UEFI firmware for ARM64 virtual machines",
    +    "interface-types": [
    +        "uefi"
    +    ],
    +    "mapping": {
    +        "device": "flash",
    +        "executable": {
    +            "filename": "/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw",
    +            "format": "raw"
    +        },
    +        "nvram-template": {
    +            "filename": "/usr/share/edk2/aarch64/vars-template-pflash.raw",
    +            "format": "raw"
    +        }
    +    },
    +    "targets": [
    +        {
    +            "architecture": "aarch64",
    +            "machines": [
    +                "virt-*"
    +            ]
    +        }
    +    ],
    +    "features": [
    +
    +    ],
    +    "tags": [
    +
    +    ]
    +}
    +
    +(CPT)
    +
  6. +
  7. +

    同步数据库

    +

    同步nova-api数据库:

    +
    su -s /bin/sh -c "nova-manage api_db sync" nova                                                (CTL)
    +

    注册cell0数据库:

    +
    su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova                                          (CTL)
    +

    创建cell1 cell:

    +
    su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova                 (CTL)
    +

    同步nova数据库:

    +
    su -s /bin/sh -c "nova-manage db sync" nova                                                    (CTL)
    +

    验证cell0和cell1注册正确:

    +
    su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova                                         (CTL)
    +

    添加计算节点到openstack集群

    +
    su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova                           (CPT)
    +
  8. +
  9. +

    启动服务

    +
    systemctl enable \                                                                             (CTL)
    +openstack-nova-api.service \
    +openstack-nova-scheduler.service \
    +openstack-nova-conductor.service \
    +openstack-nova-novncproxy.service
    +
    +systemctl start \                                                                              (CTL)
    +openstack-nova-api.service \
    +openstack-nova-scheduler.service \
    +openstack-nova-conductor.service \
    +openstack-nova-novncproxy.service
    +
    systemctl enable libvirtd.service openstack-nova-compute.service                               (CPT)
    +systemctl start libvirtd.service openstack-nova-compute.service                                (CPT)
    +
  10. +
  11. +

    验证

    +
    source ~/.admin-openrc                                                                         (CTL)
    +

    列出服务组件,验证每个流程都成功启动和注册:

    +
    openstack compute service list                                                                 (CTL)
    +

    列出身份服务中的API端点,验证与身份服务的连接:

    +
    openstack catalog list                                                                         (CTL)
    +

    列出镜像服务中的镜像,验证与镜像服务的连接:

    +
    openstack image list                                                                           (CTL)
    +

    检查cells是否运作成功,以及其他必要条件是否已具备。

    +
    nova-status upgrade check                                                                      (CTL)
    +
  12. +
+

Neutron 安装

+
    +
  1. +

    创建数据库、服务凭证和 API 端点

    +

    创建数据库:

    +
    mysql -u root -p                                                                               (CTL)
    +
    +MariaDB [(none)]> CREATE DATABASE neutron;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
    +IDENTIFIED BY 'NEUTRON_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
    +IDENTIFIED BY 'NEUTRON_DBPASS';
    +MariaDB [(none)]> exit
    +

    注意

    +

    替换 NEUTRON_DBPASS 为 neutron 数据库设置密码。

    +
    source ~/.admin-openrc                                                                         (CTL)
    +

    创建neutron服务凭证

    +
    openstack user create --domain default --password-prompt neutron                               (CTL)
    +openstack role add --project service --user neutron admin                                      (CTL)
    +openstack service create --name neutron --description "OpenStack Networking" network           (CTL)
    +

    创建Neutron服务API端点:

    +
    openstack endpoint create --region RegionOne network public http://controller:9696             (CTL)
    +openstack endpoint create --region RegionOne network internal http://controller:9696           (CTL)
    +openstack endpoint create --region RegionOne network admin http://controller:9696              (CTL)
    +
  2. +
  3. +

    安装软件包:

    +
    yum install openstack-neutron openstack-neutron-linuxbridge ebtables ipset \                   (CTL)
    +openstack-neutron-ml2
    +
    yum install openstack-neutron-linuxbridge ebtables ipset                                       (CPT)
    +
  4. +
  5. +

    配置neutron相关配置:

    +

    配置主体配置

    +
    vim /etc/neutron/neutron.conf
    +
    +[database]
    +connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron                         (CTL)
    +
    +[DEFAULT]
    +core_plugin = ml2                                                                              (CTL)
    +service_plugins = router                                                                       (CTL)
    +allow_overlapping_ips = true                                                                   (CTL)
    +transport_url = rabbit://openstack:RABBIT_PASS@controller
    +auth_strategy = keystone
    +notify_nova_on_port_status_changes = true                                                      (CTL)
    +notify_nova_on_port_data_changes = true                                                        (CTL)
    +api_workers = 3                                                                                (CTL)
    +
    +[keystone_authtoken]
    +www_authenticate_uri = http://controller:5000
    +auth_url = http://controller:5000
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +project_name = service
    +username = neutron
    +password = NEUTRON_PASS
    +
    +[nova]
    +auth_url = http://controller:5000                                                              (CTL)
    +auth_type = password                                                                           (CTL)
    +project_domain_name = Default                                                                  (CTL)
    +user_domain_name = Default                                                                     (CTL)
    +region_name = RegionOne                                                                        (CTL)
    +project_name = service                                                                         (CTL)
    +username = nova                                                                                (CTL)
    +password = NOVA_PASS                                                                           (CTL)
    +
    +[oslo_concurrency]
    +lock_path = /var/lib/neutron/tmp
    +

    解释

    +

    [database]部分,配置数据库入口;

    +

    [default]部分,启用ml2插件和router插件,允许ip地址重叠,配置RabbitMQ消息队列入口;

    +

    [default] [keystone]部分,配置身份认证服务入口;

    +

    [default] [nova]部分,配置网络来通知计算网络拓扑的变化;

    +

    [oslo_concurrency]部分,配置lock path。

    +

    注意

    +

    替换NEUTRON_DBPASS为 neutron 数据库的密码;

    +

    替换RABBIT_PASS为 RabbitMQ中openstack 账户的密码;

    +

    替换NEUTRON_PASS为 neutron 用户的密码;

    +

    替换NOVA_PASS为 nova 用户的密码。

    +

    配置ML2插件:

    +
    vim /etc/neutron/plugins/ml2/ml2_conf.ini
    +
    +[ml2]
    +type_drivers = flat,vlan,vxlan
    +tenant_network_types = vxlan
    +mechanism_drivers = linuxbridge,l2population
    +extension_drivers = port_security
    +
    +[ml2_type_flat]
    +flat_networks = provider
    +
    +[ml2_type_vxlan]
    +vni_ranges = 1:1000
    +
    +[securitygroup]
    +enable_ipset = true
    +

    创建/etc/neutron/plugin.ini的符号链接

    +
    ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
    +

    注意

    +

    [ml2]部分,启用 flat、vlan、vxlan 网络,启用 linuxbridge 及 l2population 机制,启用端口安全扩展驱动;

    +

    [ml2_type_flat]部分,配置 flat 网络为 provider 虚拟网络;

    +

    [ml2_type_vxlan]部分,配置 VXLAN 网络标识符范围;

    +

    [securitygroup]部分,配置允许 ipset。

    +

    补充

    +

    l2 的具体配置可以根据用户需求自行修改,本文使用的是provider network + linuxbridge

    +

    配置 Linux bridge 代理:

    +
    vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
    +
    +[linux_bridge]
    +physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME
    +
    +[vxlan]
    +enable_vxlan = true
    +local_ip = OVERLAY_INTERFACE_IP_ADDRESS
    +l2_population = true
    +
    +[securitygroup]
    +enable_security_group = true
    +firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
    +

    解释

    +

    [linux_bridge]部分,映射 provider 虚拟网络到物理网络接口;

    +

    [vxlan]部分,启用 vxlan 覆盖网络,配置处理覆盖网络的物理网络接口 IP 地址,启用 layer-2 population;

    +

    [securitygroup]部分,允许安全组,配置 linux bridge iptables 防火墙驱动。

    +

    注意

    +

    替换PROVIDER_INTERFACE_NAME为物理网络接口;

    +

    替换OVERLAY_INTERFACE_IP_ADDRESS为控制节点的管理IP地址。

    +
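    例如,假设物理网卡为 eth1、控制节点管理 IP 为 10.0.0.11(两者均为示例取值,请按实际环境替换),则对应配置为:

    [linux_bridge]
    physical_interface_mappings = provider:eth1

    [vxlan]
    enable_vxlan = true
    local_ip = 10.0.0.11
    l2_population = true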

    配置Layer-3代理:

    +
    vim /etc/neutron/l3_agent.ini                                                                  (CTL)
    +
    +[DEFAULT]
    +interface_driver = linuxbridge
    +

    解释

    +

    在[default]部分,配置接口驱动为linuxbridge

    +

    配置DHCP代理:

    +
    vim /etc/neutron/dhcp_agent.ini                                                                (CTL)
    +
    +[DEFAULT]
    +interface_driver = linuxbridge
    +dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
    +enable_isolated_metadata = true
    +

    解释

    +

    [default]部分,配置linuxbridge接口驱动、Dnsmasq DHCP驱动,启用隔离的元数据。

    +

    配置metadata代理:

    +
    vim /etc/neutron/metadata_agent.ini                                                            (CTL)
    +
    +[DEFAULT]
    +nova_metadata_host = controller
    +metadata_proxy_shared_secret = METADATA_SECRET
    +

    解释

    +

    [default]部分,配置元数据主机和shared secret。

    +

    注意

    +

    替换METADATA_SECRET为合适的元数据代理secret。

    +
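    METADATA_SECRET 可以是任意字符串,只需保证 neutron 与 nova 两处配置一致。例如可以用 openssl 生成一个随机值(仅为一种示例做法):

    # 生成 20 位十六进制随机串,作为 METADATA_SECRET 使用
    openssl rand -hex 10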
  6. +
  7. +

    配置nova相关配置

    +
    vim /etc/nova/nova.conf
    +
    +[neutron]
    +auth_url = http://controller:5000
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +region_name = RegionOne
    +project_name = service
    +username = neutron
    +password = NEUTRON_PASS
    +service_metadata_proxy = true                                                                  (CTL)
    +metadata_proxy_shared_secret = METADATA_SECRET                                                 (CTL)
    +

    解释

    +

    [neutron]部分,配置访问参数,启用元数据代理,配置secret。

    +

    注意

    +

    替换NEUTRON_PASS为 neutron 用户的密码;

    +

    替换METADATA_SECRET为合适的元数据代理secret。

    +
  8. +
  9. +

    同步数据库:

    +
    su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
    +--config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
    +
  10. +
  11. +

    重启计算API服务:

    +
    systemctl restart openstack-nova-api.service
    +
  12. +
  13. +

    启动网络服务

    +
    systemctl enable neutron-server.service neutron-linuxbridge-agent.service \                    (CTL)
    +neutron-dhcp-agent.service neutron-metadata-agent.service 
    +systemctl enable neutron-l3-agent.service
    +systemctl restart openstack-nova-api.service neutron-server.service \                          (CTL)
    +neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
    +neutron-metadata-agent.service neutron-l3-agent.service
    +
    +systemctl enable neutron-linuxbridge-agent.service                                             (CPT)
    +systemctl restart neutron-linuxbridge-agent.service openstack-nova-compute.service             (CPT)
    +
  14. +
  15. +

    验证

    +

    验证 neutron 代理启动成功:

    +
    openstack network agent list
    +
  16. +
+

Cinder 安装

+
    +
  1. +

    创建数据库、服务凭证和 API 端点

    +

    创建数据库:

    +
    mysql -u root -p
    +
    +MariaDB [(none)]> CREATE DATABASE cinder;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \
    +IDENTIFIED BY 'CINDER_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \
    +IDENTIFIED BY 'CINDER_DBPASS';
    +MariaDB [(none)]> exit
    +

    注意

    +

    替换 CINDER_DBPASS 为cinder数据库设置密码。

    +
    source ~/.admin-openrc
    +

    创建cinder服务凭证:

    +
    openstack user create --domain default --password-prompt cinder
    +openstack role add --project service --user cinder admin
    +openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
    +openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
    +

    创建块存储服务API端点:

    +
    openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(project_id\)s
    +openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(project_id\)s
    +openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(project_id\)s
    +openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s
    +openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s
    +openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s
    +
  2. +
  3. +

    安装软件包:

    +
    yum install openstack-cinder-api openstack-cinder-scheduler                                    (CTL)
    +
    yum install lvm2 device-mapper-persistent-data scsi-target-utils rpcbind nfs-utils \           (STG)
    +            openstack-cinder-volume openstack-cinder-backup
    +
  4. +
  5. +

    准备存储设备,以下仅为示例:

    +
    pvcreate /dev/vdb
    +vgcreate cinder-volumes /dev/vdb
    +
    +vim /etc/lvm/lvm.conf
    +
    +
    +devices {
    +...
    +filter = [ "a/vdb/", "r/.*/"]
    +}
    +

    解释

    +

    在devices部分,添加过滤规则,接受/dev/vdb设备并拒绝其他设备。

    +
  6. +
  7. +

    准备NFS

    +
    mkdir -p /root/cinder/backup
    +
    +cat << EOF >> /etc/exports
    +/root/cinder/backup 192.168.1.0/24(rw,sync,no_root_squash,no_all_squash)
    +EOF
    +
    +
  8. +
  9. +

    配置cinder相关配置:

    +
    vim /etc/cinder/cinder.conf
    +
    +[DEFAULT]
    +transport_url = rabbit://openstack:RABBIT_PASS@controller
    +auth_strategy = keystone
    +my_ip = 10.0.0.11
    +enabled_backends = lvm                                                                         (STG)
    +backup_driver=cinder.backup.drivers.nfs.NFSBackupDriver                                        (STG)
    +backup_share=HOST:PATH                                                                         (STG)
    +
    +[database]
    +connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder
    +
    +[keystone_authtoken]
    +www_authenticate_uri = http://controller:5000
    +auth_url = http://controller:5000
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +project_name = service
    +username = cinder
    +password = CINDER_PASS
    +
    +[oslo_concurrency]
    +lock_path = /var/lib/cinder/tmp
    +
    +[lvm]
    +volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver                                      (STG)
    +volume_group = cinder-volumes                                                                  (STG)
    +iscsi_protocol = iscsi                                                                         (STG)
    +iscsi_helper = tgtadm                                                                          (STG)
    +

    解释

    +

    [database]部分,配置数据库入口;

    +

    [DEFAULT]部分,配置RabbitMQ消息队列入口,配置my_ip;

    +

    [DEFAULT] [keystone_authtoken]部分,配置身份认证服务入口;

    +

    [oslo_concurrency]部分,配置lock path。

    +

    注意

    +

    替换CINDER_DBPASS为 cinder 数据库的密码;

    +

    替换RABBIT_PASS为 RabbitMQ 中 openstack 账户的密码;

    +

    配置my_ip为控制节点的管理 IP 地址;

    +

    替换CINDER_PASS为 cinder 用户的密码;

    +

    替换HOST:PATH为 NFS 的HOSTIP和共享路径;

    +
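    例如,若 NFS 服务端即为本节点(假设其 IP 为 192.168.1.10,IP 仅为示例),共享路径为上文创建的 /root/cinder/backup,则可配置为:

    # [DEFAULT] 中 backup_share 的示例取值
    backup_share = 192.168.1.10:/root/cinder/backup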
  10. +
  11. +

    同步数据库:

    +
    su -s /bin/sh -c "cinder-manage db sync" cinder                                                (CTL)
    +
  12. +
  13. +

    配置nova:

    +
    vim /etc/nova/nova.conf                                                                        (CTL)
    +
    +[cinder]
    +os_region_name = RegionOne
    +
  14. +
  15. +

    重启计算API服务

    +
    systemctl restart openstack-nova-api.service
    +
  16. +
  17. +

    启动cinder服务

    +
    systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service               (CTL)
    +systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service                (CTL)
    +
    systemctl enable rpcbind.service nfs-server.service tgtd.service iscsid.service \              (STG)
    +                 openstack-cinder-volume.service \
    +                 openstack-cinder-backup.service
    +systemctl start rpcbind.service nfs-server.service tgtd.service iscsid.service \               (STG)
    +                openstack-cinder-volume.service \
    +                openstack-cinder-backup.service
    +

    注意

    +

    当cinder使用tgtadm的方式挂卷的时候,要修改/etc/tgt/tgtd.conf,内容如下,保证tgtd可以发现cinder-volume的iscsi target。

    +
    include /var/lib/cinder/volumes/*
    +
  18. +
  19. +

    验证

    +
    source ~/.admin-openrc
    +openstack volume service list
    +
  20. +
+

horizon 安装

+
    +
  1. +

    安装软件包

    +
    yum install openstack-dashboard
    +
  2. +
  3. +

    修改文件

    +

    修改变量

    +
    vim /etc/openstack-dashboard/local_settings
    +
    +OPENSTACK_HOST = "controller"
    +ALLOWED_HOSTS = ['*', ]
    +
    +SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
    +
    +CACHES = {
    +'default': {
    +     'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
    +     'LOCATION': 'controller:11211',
    +    }
    +}
    +
    +OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
    +OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
    +OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
    +OPENSTACK_KEYSTONE_DEFAULT_ROLE = "member"
    +WEBROOT = '/dashboard'
    +POLICY_FILES_PATH = "/etc/openstack-dashboard"
    +
    +OPENSTACK_API_VERSIONS = {
    +    "identity": 3,
    +    "image": 2,
    +    "volume": 3,
    +}
    +
  4. +
  5. +

    重启 httpd 服务

    +
    systemctl restart httpd.service memcached.service
    +
  6. +
  7. +

    验证。打开浏览器,输入网址http://HOSTIP/dashboard/,登录 horizon。

    +

    注意

    +

    替换HOSTIP为控制节点管理平面IP地址

    +
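    也可以先在命令行用 curl 简单确认 dashboard 页面可访问(HOSTIP 含义同上):

    # 通常返回 200 或跳转到登录页即表示 dashboard 已就绪
    curl -I http://HOSTIP/dashboard/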
  8. +
+

Tempest 安装

+

Tempest是OpenStack的集成测试服务,如果用户需要全面自动化测试已安装的OpenStack环境的功能,则推荐使用该组件。否则,可以不用安装。

+
    +
  1. +

    安装Tempest

    +
    yum install openstack-tempest
    +
  2. +
  3. +

    初始化目录

    +
    tempest init mytest
    +
  4. +
  5. +

    修改配置文件。

    +
    cd mytest
    +vi etc/tempest.conf
    +

    tempest.conf中需要配置当前OpenStack环境的信息,具体内容可以参考官方示例

    +
  6. +
  7. +

    执行测试

    +
    tempest run
    +
  8. +
  9. +

    安装tempest扩展(可选)。OpenStack各个服务本身也提供了一些tempest测试包,用户可以安装这些包来丰富tempest的测试内容。在Wallaby中,我们提供了Cinder、Glance、Keystone、Ironic、Trove的扩展测试,用户可以执行如下命令进行安装使用:

    yum install python3-cinder-tempest-plugin python3-glance-tempest-plugin python3-ironic-tempest-plugin python3-keystone-tempest-plugin python3-trove-tempest-plugin

    +
  10. +
+

Ironic 安装

+

Ironic是OpenStack的裸金属服务,如果用户需要进行裸机部署则推荐使用该组件。否则,可以不用安装。

+
    +
  1. 设置数据库
  2. +
+

裸金属服务在数据库中存储信息,创建一个ironic用户可以访问的ironic数据库,替换IRONIC_DBPASSWORD为合适的密码

+
mysql -u root -p
+
+MariaDB [(none)]> CREATE DATABASE ironic CHARACTER SET utf8;
+MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'localhost' \
+IDENTIFIED BY 'IRONIC_DBPASSWORD';
+MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'%' \
+IDENTIFIED BY 'IRONIC_DBPASSWORD';
+
    +
  1. 创建服务用户认证
  2. +
+

1、创建Bare Metal服务用户

+
openstack user create --password IRONIC_PASSWORD \
+                      --email ironic@example.com ironic
+openstack role add --project service --user ironic admin
+openstack service create --name ironic \
+                         --description "Ironic baremetal provisioning service" baremetal
+
+openstack service create --name ironic-inspector --description     "Ironic inspector baremetal provisioning service" baremetal-introspection
+openstack user create --password IRONIC_INSPECTOR_PASSWORD --email ironic_inspector@example.com ironic_inspector
+openstack role add --project service --user ironic-inspector admin
+

2、创建Bare Metal服务访问入口

+
openstack endpoint create --region RegionOne baremetal admin http://$IRONIC_NODE:6385
+openstack endpoint create --region RegionOne baremetal public http://$IRONIC_NODE:6385
+openstack endpoint create --region RegionOne baremetal internal http://$IRONIC_NODE:6385
+openstack endpoint create --region RegionOne baremetal-introspection internal http://172.20.19.13:5050/v1
+openstack endpoint create --region RegionOne baremetal-introspection public http://172.20.19.13:5050/v1
+openstack endpoint create --region RegionOne baremetal-introspection admin http://172.20.19.13:5050/v1
+
    +
  1. 配置ironic-api服务
  2. +
+

配置文件路径/etc/ironic/ironic.conf

+

1、通过connection选项配置数据库的位置,如下所示,替换IRONIC_DBPASSWORDironic用户的密码,替换DB_IP为DB服务器所在的IP地址:

+
[database]
+
+# The SQLAlchemy connection string used to connect to the
+# database (string value)
+
+connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic
+

2、通过以下选项配置ironic-api服务使用RabbitMQ消息代理,替换RPC_*为RabbitMQ的详细地址和凭证

+
[DEFAULT]
+
+# A URL representing the messaging driver to use and its full
+# configuration. (string value)
+
+transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
+

用户也可自行使用json-rpc方式替换rabbitmq

+
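例如,若不使用 RabbitMQ,可以将 RPC 传输方式切换为 json-rpc,一个最简示意如下(按需配置,细节以 ironic 官方配置说明为准):

[DEFAULT]
# 使用 json-rpc 代替消息队列进行 RPC 通信
rpc_transport = json-rpc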

3、配置ironic-api服务使用身份认证服务的凭证,替换PUBLIC_IDENTITY_IP为身份认证服务器的公共IP,替换PRIVATE_IDENTITY_IP为身份认证服务器的私有IP,替换IRONIC_PASSWORD为身份认证服务中ironic用户的密码:

+
[DEFAULT]
+
+# Authentication strategy used by ironic-api: one of
+# "keystone" or "noauth". "noauth" should not be used in a
+# production environment because all authentication will be
+# disabled. (string value)
+
+auth_strategy=keystone
+host = controller
+memcache_servers = controller:11211
+enabled_network_interfaces = flat,noop,neutron
+default_network_interface = noop
+transport_url = rabbit://openstack:RABBITPASSWD@controller:5672/
+enabled_hardware_types = ipmi
+enabled_boot_interfaces = pxe
+enabled_deploy_interfaces = direct
+default_deploy_interface = direct
+enabled_inspect_interfaces = inspector
+enabled_management_interfaces = ipmitool
+enabled_power_interfaces = ipmitool
+enabled_rescue_interfaces = no-rescue,agent
+isolinux_bin = /usr/share/syslinux/isolinux.bin
+logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s
+
+[keystone_authtoken]
+# Authentication type to load (string value)
+auth_type=password
+# Complete public Identity API endpoint (string value)
+www_authenticate_uri=http://PUBLIC_IDENTITY_IP:5000
+# Complete admin Identity API endpoint. (string value)
+auth_url=http://PRIVATE_IDENTITY_IP:5000
+# Service username. (string value)
+username=ironic
+# Service account password. (string value)
+password=IRONIC_PASSWORD
+# Service tenant name. (string value)
+project_name=service
+# Domain name containing project (string value)
+project_domain_name=Default
+# User's domain name (string value)
+user_domain_name=Default
+
+[agent]
+deploy_logs_collect = always
+deploy_logs_local_path = /var/log/ironic/deploy
+deploy_logs_storage_backend = local
+image_download_source = http
+stream_raw_images = false
+force_raw_images = false
+verify_ca = False
+
+[oslo_concurrency]
+
+[oslo_messaging_notifications]
+transport_url = rabbit://openstack:123456@172.20.19.25:5672/
+topics = notifications
+driver = messagingv2
+
+[oslo_messaging_rabbit]
+amqp_durable_queues = True
+rabbit_ha_queues = True
+
+[pxe]
+ipxe_enabled = false
+pxe_append_params = nofb nomodeset vga=normal coreos.autologin ipa-insecure=1
+image_cache_size = 204800
+tftp_root=/var/lib/tftpboot/cephfs/
+tftp_master_path=/var/lib/tftpboot/cephfs/master_images
+
+[dhcp]
+dhcp_provider = none
+

4、创建裸金属服务数据库表

+
ironic-dbsync --config-file /etc/ironic/ironic.conf create_schema
+

5、重启ironic-api服务

+
sudo systemctl restart openstack-ironic-api
+
    +
  1. 配置ironic-conductor服务
  2. +
+

1、替换HOST_IP为conductor host的IP

+
[DEFAULT]
+
+# IP address of this host. If unset, will determine the IP
+# programmatically. If unable to do so, will use "127.0.0.1".
+# (string value)
+
+my_ip=HOST_IP
+

2、配置数据库的位置,ironic-conductor应该使用和ironic-api相同的配置。替换IRONIC_DBPASSWORDironic用户的密码,替换DB_IP为DB服务器所在的IP地址:

+
[database]
+
+# The SQLAlchemy connection string to use to connect to the
+# database. (string value)
+
+connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic
+

3、通过以下选项配置ironic-api服务使用RabbitMQ消息代理,ironic-conductor应该使用和ironic-api相同的配置,替换RPC_*为RabbitMQ的详细地址和凭证

+
[DEFAULT]
+
+# A URL representing the messaging driver to use and its full
+# configuration. (string value)
+
+transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
+

用户也可自行使用json-rpc方式替换rabbitmq

+

4、配置凭证访问其他OpenStack服务

+

为了与其他OpenStack服务进行通信,裸金属服务在请求其他服务时需要使用服务用户与OpenStack Identity服务进行认证。这些用户的凭据必须在与相应服务相关的每个配置文件中进行配置。

+
[neutron] - 访问OpenStack网络服务
+[glance] - 访问OpenStack镜像服务
+[swift] - 访问OpenStack对象存储服务
+[cinder] - 访问OpenStack块存储服务
+[inspector] - 访问OpenStack裸金属introspection服务
+[service_catalog] - 一个特殊项用于保存裸金属服务使用的凭证,该凭证用于发现注册在OpenStack身份认证服务目录中的自己的API URL端点
+

简单起见,可以对所有服务使用同一个服务用户。为了向后兼容,该用户应该和ironic-api服务的[keystone_authtoken]所配置的为同一个用户。但这不是必须的,也可以为每个服务创建并配置不同的服务用户。

+

在下面的示例中,用户访问OpenStack网络服务的身份验证信息配置为:

+
网络服务部署在名为RegionOne的身份认证服务域中,仅在服务目录中注册公共端点接口
+
+请求时使用特定的CA SSL证书进行HTTPS连接
+
+与ironic-api服务配置相同的服务用户
+
+动态密码认证插件基于其他选项发现合适的身份认证服务API版本
+
[neutron]
+
+# Authentication type to load (string value)
+auth_type = password
+# Authentication URL (string value)
+auth_url=https://IDENTITY_IP:5000/
+# Username (string value)
+username=ironic
+# User's password (string value)
+password=IRONIC_PASSWORD
+# Project name to scope to (string value)
+project_name=service
+# Domain ID containing project (string value)
+project_domain_id=default
+# User's domain id (string value)
+user_domain_id=default
+# PEM encoded Certificate Authority to use when verifying
+# HTTPs connections. (string value)
+cafile=/opt/stack/data/ca-bundle.pem
+# The default region_name for endpoint URL discovery. (string
+# value)
+region_name = RegionOne
+# List of interfaces, in order of preference, for endpoint
+# URL. (list value)
+valid_interfaces=public
+

默认情况下,为了与其他服务进行通信,裸金属服务会尝试通过身份认证服务的服务目录发现该服务合适的端点。如果希望对一个特定服务使用一个不同的端点,则在裸金属服务的配置文件中通过endpoint_override选项进行指定:

+
[neutron]
...
endpoint_override = <NEUTRON_API_ADDRESS>
+

5、配置允许的驱动程序和硬件类型

+

通过设置enabled_hardware_types设置ironic-conductor服务允许使用的硬件类型:

+
[DEFAULT]
enabled_hardware_types = ipmi
+

配置硬件接口:

+
enabled_boot_interfaces = pxe
enabled_deploy_interfaces = direct,iscsi
enabled_inspect_interfaces = inspector
enabled_management_interfaces = ipmitool
enabled_power_interfaces = ipmitool
+

配置接口默认值:

+
[DEFAULT]
default_deploy_interface = direct
default_network_interface = neutron
+

如果启用了任何使用Direct deploy的驱动,必须安装和配置镜像服务的Swift后端。Ceph对象网关(RADOS网关)也支持作为镜像服务的后端。

+

6、重启ironic-conductor服务

+
sudo systemctl restart openstack-ironic-conductor
+
    +
  1. 配置ironic-inspector服务
  2. +
+

配置文件路径/etc/ironic-inspector/inspector.conf

+

1、创建数据库

+
# mysql -u root -p
+
+MariaDB [(none)]> CREATE DATABASE ironic_inspector CHARACTER SET utf8;
+
+MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic_inspector.* TO 'ironic_inspector'@'localhost' \     IDENTIFIED BY 'IRONIC_INSPECTOR_DBPASSWORD';
+MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic_inspector.* TO 'ironic_inspector'@'%' \
+IDENTIFIED BY 'IRONIC_INSPECTOR_DBPASSWORD';
+

2、通过connection选项配置数据库的位置,如下所示,替换IRONIC_INSPECTOR_DBPASSWORDironic_inspector用户的密码,替换DB_IP为DB服务器所在的IP地址:

+
[database]
+backend = sqlalchemy
+connection = mysql+pymysql://ironic_inspector:IRONIC_INSPECTOR_DBPASSWORD@DB_IP/ironic_inspector
+min_pool_size = 100
+max_pool_size = 500
+pool_timeout = 30
+max_retries = 5
+max_overflow = 200
+db_retry_interval = 2
+db_inc_retry_interval = True
+db_max_retry_interval = 2
+db_max_retries = 5
+

3、配置消息队列通信地址

+
[DEFAULT] 
+transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
+
+

4、设置keystone认证

+
[DEFAULT]
+
+auth_strategy = keystone
+timeout = 900
+rootwrap_config = /etc/ironic-inspector/rootwrap.conf
+logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s
+log_dir = /var/log/ironic-inspector
+state_path = /var/lib/ironic-inspector
+use_stderr = False
+
+[ironic]
+api_endpoint = http://IRONIC_API_HOST_ADDRRESS:6385
+auth_type = password
+auth_url = http://PUBLIC_IDENTITY_IP:5000
+auth_strategy = keystone
+ironic_url = http://IRONIC_API_HOST_ADDRRESS:6385
+os_region = RegionOne
+project_name = service
+project_domain_name = Default
+user_domain_name = Default
+username = IRONIC_SERVICE_USER_NAME
+password = IRONIC_SERVICE_USER_PASSWORD
+
+[keystone_authtoken]
+auth_type = password
+auth_url = http://control:5000
+www_authenticate_uri = http://control:5000
+project_domain_name = default
+user_domain_name = default
+project_name = service
+username = ironic_inspector
+password = IRONICPASSWD
+region_name = RegionOne
+memcache_servers = control:11211
+token_cache_time = 300
+
+[processing]
+add_ports = active
+processing_hooks = $default_processing_hooks,local_link_connection,lldp_basic
+ramdisk_logs_dir = /var/log/ironic-inspector/ramdisk
+always_store_ramdisk_logs = true
+store_data =none
+power_off = false
+
+[pxe_filter]
+driver = iptables
+
+[capabilities]
+boot_mode=True
+

5、配置ironic inspector dnsmasq服务

+
# 配置文件地址:/etc/ironic-inspector/dnsmasq.conf
+port=0
+interface=enp3s0                         #替换为实际监听网络接口
+dhcp-range=172.20.19.100,172.20.19.110   #替换为实际dhcp地址范围
+bind-interfaces
+enable-tftp
+
+dhcp-match=set:efi,option:client-arch,7
+dhcp-match=set:efi,option:client-arch,9
+dhcp-match=aarch64, option:client-arch,11
+dhcp-boot=tag:aarch64,grubaa64.efi
+dhcp-boot=tag:!aarch64,tag:efi,grubx64.efi
+dhcp-boot=tag:!aarch64,tag:!efi,pxelinux.0
+
+tftp-root=/tftpboot                       #替换为实际tftpboot目录
+log-facility=/var/log/dnsmasq.log
+

6、关闭ironic provision网络子网的dhcp

+
openstack subnet set --no-dhcp 72426e89-f552-4dc4-9ac7-c4e131ce7f3c
+

7、初始化ironic-inspector服务的数据库

+

在控制节点执行:

+
ironic-inspector-dbsync --config-file /etc/ironic-inspector/inspector.conf upgrade
+

8、启动服务

+
systemctl enable --now openstack-ironic-inspector.service
+systemctl enable --now openstack-ironic-inspector-dnsmasq.service
+

6.配置httpd服务

+
    +
  1. +

    创建ironic要使用的httpd的root目录并设置属主属组,目录路径要和/etc/ironic/ironic.conf中[deploy]组中http_root配置项指定的路径一致。

    +
    mkdir -p /var/lib/ironic/httproot
    chown ironic.ironic /var/lib/ironic/httproot
    +
  2. +
  3. +

    安装和配置httpd服务

    +
      +
    1. +

      安装httpd服务,已有请忽略

      +
      yum install httpd -y
      +
    2. +
    3. +

      创建/etc/httpd/conf.d/openstack-ironic-httpd.conf文件,内容如下:

      +
      Listen 8080
      +
      +<VirtualHost *:8080>
      +    ServerName ironic.openeuler.com
      +
      +    ErrorLog "/var/log/httpd/openstack-ironic-httpd-error_log"
      +    CustomLog "/var/log/httpd/openstack-ironic-httpd-access_log" "%h %l %u %t \"%r\" %>s %b"
      +
      +    DocumentRoot "/var/lib/ironic/httproot"
      +    <Directory "/var/lib/ironic/httproot">
      +        Options Indexes FollowSymLinks
      +        Require all granted
      +    </Directory>
      +    LogLevel warn
      +    AddDefaultCharset UTF-8
      +    EnableSendfile on
      +</VirtualHost>
      +
      +

      注意监听的端口要和/etc/ironic/ironic.conf里[deploy]选项中http_url配置项中指定的端口一致。

      +
    4. +
    5. +

      重启httpd服务。

      +
      systemctl restart httpd
      +
    6. +
    +
  4. +
+

7.deploy ramdisk镜像制作

+

W版的ramdisk镜像支持通过ironic-python-agent服务或disk-image-builder工具制作,也可以使用社区最新的ironic-python-agent-builder。用户也可以自行选择其他工具制作。若使用W版原生工具,则需要安装对应的软件包。

+
yum install openstack-ironic-python-agent
+或者
+yum install diskimage-builder
+

具体的使用方法可以参考官方文档

+

这里介绍下使用ironic-python-agent-builder构建ironic使用的deploy镜像的完整过程。

+
    +
  1. +

    安装 ironic-python-agent-builder

    +
    1. 安装工具:
    +
    +    ```shell
    +    pip install ironic-python-agent-builder
    +    ```
    +
    +2. 修改以下文件中的python解释器:
    +
    +    ```shell
    +    /usr/bin/yum /usr/libexec/urlgrabber-ext-down
    +    ```
    +
    +3. 安装其它必须的工具:
    +
    +    ```shell
    +    yum install git
    +    ```
    +
    +    由于`DIB`依赖`semanage`命令,所以在制作镜像之前确定该命令是否可用:`semanage --help`,如果提示无此命令,安装即可:
    +
    +    ```shell
    +    # 先查询需要安装哪个包
    +    [root@localhost ~]# yum provides /usr/sbin/semanage
    +    已加载插件:fastestmirror
    +    Loading mirror speeds from cached hostfile
    +    * base: mirror.vcu.edu
    +    * extras: mirror.vcu.edu
    +    * updates: mirror.math.princeton.edu
    +    policycoreutils-python-2.5-34.el7.aarch64 : SELinux policy core python utilities
    +    源    :base
    +    匹配来源:
    +    文件名    :/usr/sbin/semanage
    +    # 安装
    +    [root@localhost ~]# yum install policycoreutils-python
    +    ```
    +
  2. +
  3. +

    制作镜像

    +
    如果是`arm`架构,需要添加:
    +```shell
    +export ARCH=aarch64
    +```
    +
    +基本用法:
    +
    +```shell
    +usage: ironic-python-agent-builder [-h] [-r RELEASE] [-o OUTPUT] [-e ELEMENT]
    +                                    [-b BRANCH] [-v] [--extra-args EXTRA_ARGS]
    +                                    distribution
    +
    +positional arguments:
    +    distribution          Distribution to use
    +
    +optional arguments:
    +    -h, --help            show this help message and exit
    +    -r RELEASE, --release RELEASE
    +                        Distribution release to use
    +    -o OUTPUT, --output OUTPUT
    +                        Output base file name
    +    -e ELEMENT, --element ELEMENT
    +                        Additional DIB element to use
    +    -b BRANCH, --branch BRANCH
    +                        If set, override the branch that is used for ironic-
    +                        python-agent and requirements
    +    -v, --verbose         Enable verbose logging in diskimage-builder
    +    --extra-args EXTRA_ARGS
    +                        Extra arguments to pass to diskimage-builder
    +```
    +
    +举例说明:
    +
    +```shell
    +ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky
    +```
    +
  4. +
  5. +

    允许ssh登录

    +
    初始化环境变量,然后制作镜像:
    +
    +```shell
    +export DIB_DEV_USER_USERNAME=ipa \
    +export DIB_DEV_USER_PWDLESS_SUDO=yes \
    +export DIB_DEV_USER_PASSWORD='123'
    +ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky -e selinux-permissive -e devuser
    +```
    +
  6. +
  7. +

    指定代码仓库

    +
    初始化对应的环境变量,然后制作镜像:
    +
    +```shell
    +# 指定仓库地址以及版本
    +DIB_REPOLOCATION_ironic_python_agent=git@172.20.2.149:liuzz/ironic-python-agent.git
    +DIB_REPOREF_ironic_python_agent=origin/develop
    +
    +# 直接从gerrit上clone代码
    +DIB_REPOLOCATION_ironic_python_agent=https://review.opendev.org/openstack/ironic-python-agent
    +DIB_REPOREF_ironic_python_agent=refs/changes/43/701043/1
    +```
    +
    +参考:[source-repositories](https://docs.openstack.org/diskimage-builder/latest/elements/source-repositories/README.html)。
    +
    +指定仓库地址及版本验证成功。
    +
  8. +
  9. +

    注意

    +
    原生的openstack里的pxe配置文件的模版不支持arm64架构,需要自己对原生openstack代码进行修改:
    +
    +在W版中,社区的ironic仍然不支持arm64位的uefi pxe启动,表现为生成的grub.cfg文件(一般位于/tftpboot/下)格式不对而导致pxe启动失败,如下:
    +
    +生成的错误配置文件:
    +
    +![ironic-err](../../img/install/ironic-err.png)
    +
    +如上图所示,arm架构里寻找vmlinux和ramdisk镜像的命令分别是linux和initrd,图中标红的命令是x86架构下的uefi pxe启动命令。
    +
    +需要用户对生成grub.cfg的代码逻辑自行修改。
    +
    +ironic向ipa发送查询命令执行状态请求的tls报错:
    +
    +W版的ipa和ironic默认都会开启tls认证的方式向对方发送请求,根据官网的说明进行关闭即可。
    +
    +1. 修改ironic配置文件(/etc/ironic/ironic.conf)下面的配置中添加ipa-insecure=1:
    +
    +```
    +[agent]
    +verify_ca = False
    +
    +[pxe]
    +pxe_append_params = nofb nomodeset vga=normal coreos.autologin ipa-insecure=1
    +```
    +
    +2. ramdisk镜像中添加ipa配置文件/etc/ironic_python_agent/ironic_python_agent.conf,并添加如下tls相关配置:
    +
    +/etc/ironic_python_agent/ironic_python_agent.conf (需要提前创建/etc/ironic_python_agent目录)
    +
    +```
    +[DEFAULT]
    +enable_auto_tls = False
    +```
    +
    +设置权限:
    +
    +```
    +chown -R ipa.ipa /etc/ironic_python_agent/
    +```
    +
    +3. 修改ipa服务的服务启动文件,添加配置文件选项
    +
    +vim /usr/lib/systemd/system/ironic-python-agent.service
    +
    +```
    +[Unit]
    +Description=Ironic Python Agent
    +After=network-online.target
    +
    +[Service]
    +ExecStartPre=/sbin/modprobe vfat
    +ExecStart=/usr/local/bin/ironic-python-agent --config-file /etc/ironic_python_agent/ironic_python_agent.conf
    +Restart=always
    +RestartSec=30s
    +
    +[Install]
    +WantedBy=multi-user.target
    +```
    +
  10. +
+

Kolla 安装

+

Kolla为OpenStack服务提供生产环境可用的容器化部署的功能。openEuler 24.03 LTS SP2中引入了Kolla和Kolla-ansible服务。

+

Kolla的安装十分简单,只需要安装对应的RPM包即可

+
yum install openstack-kolla openstack-kolla-ansible
+

安装完后,就可以使用kolla-ansible, kolla-build, kolla-genpwd, kolla-mergepwd等命令了。

+

Trove 安装

+

Trove是OpenStack的数据库服务,如果用户使用OpenStack提供的数据库服务则推荐使用该组件。否则,可以不用安装。

+

1.设置数据库

+

数据库服务在数据库中存储信息,创建一个trove用户可以访问的trove数据库,替换TROVE_DBPASSWORD为合适的密码

+
mysql -u root -p
+
+MariaDB [(none)]> CREATE DATABASE trove CHARACTER SET utf8;
+MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'localhost' \
+IDENTIFIED BY 'TROVE_DBPASSWORD';
+MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'%' \
+IDENTIFIED BY 'TROVE_DBPASSWORD';
+

2.创建服务用户认证

+

1、创建Trove服务用户

+

openstack user create --password TROVE_PASSWORD \
+                      --email trove@example.com trove
+openstack role add --project service --user trove admin
+openstack service create --name trove \
+                         --description "Database service" database

解释: 替换TROVE_PASSWORD为trove用户的密码

+

2、创建Database服务访问入口

+
openstack endpoint create --region RegionOne database public http://controller:8779/v1.0/%\(tenant_id\)s
+openstack endpoint create --region RegionOne database internal http://controller:8779/v1.0/%\(tenant_id\)s
+openstack endpoint create --region RegionOne database admin http://controller:8779/v1.0/%\(tenant_id\)s
+

3.安装和配置Trove各组件

+

1、安装Trove包

yum install openstack-trove python-troveclient

2、配置trove.conf

vim /etc/trove/trove.conf
+
+[DEFAULT]
+bind_host=TROVE_NODE_IP
+log_dir = /var/log/trove
+network_driver = trove.network.neutron.NeutronDriver
+management_security_groups = <manage security group>
+nova_keypair = trove-mgmt
+default_datastore = mysql
+taskmanager_manager = trove.taskmanager.manager.Manager
+trove_api_workers = 5
+transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
+reboot_time_out = 300
+usage_timeout = 900
+agent_call_high_timeout = 1200
+use_syslog = False
+debug = True
+
+# Set these if using Neutron Networking
+network_driver=trove.network.neutron.NeutronDriver
+network_label_regex=.*
+
+
+transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
+
+[database]
+connection = mysql+pymysql://trove:TROVE_DBPASSWORD@controller/trove
+
+[keystone_authtoken]
+project_domain_name = Default
+project_name = service
+user_domain_name = Default
+password = TROVE_PASS
+username = trove
+auth_url = http://controller:5000/v3/
+auth_type = password
+
+[service_credentials]
+auth_url = http://controller:5000/v3/
+region_name = RegionOne
+project_name = service
+password = TROVE_PASS
+project_domain_name = Default
+user_domain_name = Default
+username = trove
+
+[mariadb]
+tcp_ports = 3306,4444,4567,4568
+
+[mysql]
+tcp_ports = 3306
+
+[postgresql]
+tcp_ports = 5432
+ 解释:

+
    +
  • [DEFAULT]分组中bind_host配置为Trove部署节点的IP
  • nova_compute_url、cinder_url 为Nova和Cinder在Keystone中创建的endpoint
  • nova_proxy_XXX 为一个能访问Nova服务的用户信息,上例中以admin用户为例
  • transport_url 为RabbitMQ连接信息,RABBIT_PASS替换为RabbitMQ的密码
  • [database]分组中的connection 为前面在mysql中为Trove创建的数据库信息
  • Trove的用户信息中TROVE_PASS替换为实际trove用户的密码
+

3、配置trove-guestagent.conf

vim /etc/trove/trove-guestagent.conf
+
+[DEFAULT]
+log_file = trove-guestagent.log
+log_dir = /var/log/trove/
+ignore_users = os_admin
+control_exchange = trove
+transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
+rpc_backend = rabbit
+command_process_timeout = 60
+use_syslog = False
+debug = True
+
+[service_credentials]
+auth_url = http://controller:5000/v3/
+region_name = RegionOne
+project_name = service
+password = TROVE_PASS
+project_domain_name = Default
+user_domain_name = Default
+username = trove
+
+[mysql]
+docker_image = your-registry/your-repo/mysql
+backup_docker_image = your-registry/your-repo/db-backup-mysql:1.1.0

+

解释: guestagent是trove中一个独立组件,需要预先内置到Trove通过Nova创建的虚拟机镜像中。在创建好数据库实例后,会起guestagent进程,负责通过消息队列(RabbitMQ)向Trove上报心跳,因此需要配置RabbitMQ的用户和密码信息。从Victoria版开始,Trove使用一个统一的镜像来跑不同类型的数据库,数据库服务运行在Guest虚拟机的Docker容器中。

+
  • transport_url 为RabbitMQ连接信息,RABBIT_PASS替换为RabbitMQ的密码
  • Trove的用户信息中TROVE_PASS替换为实际trove用户的密码
+

4、生成Trove数据库表

su -s /bin/sh -c "trove-manage db_sync" trove

+

4.完成安装配置

+
    +
  1. 配置Trove服务自启动 +
    systemctl enable openstack-trove-api.service \
    +openstack-trove-taskmanager.service \
    +openstack-trove-conductor.service 
  2. +
  3. 启动服务 +
    systemctl start openstack-trove-api.service \
    +openstack-trove-taskmanager.service \
    +openstack-trove-conductor.service
  4. +
+

Swift 安装

+

Swift 提供了弹性可伸缩、高可用的分布式对象存储服务,适合存储大规模非结构化数据。

+
    +
  1. +

    创建服务凭证、API端点。

    +

    创建服务凭证

    +
    #创建swift用户:
    +openstack user create --domain default --password-prompt swift                 
    +#为swift用户添加admin角色:
    +openstack role add --project service --user swift admin                        
    +#创建swift服务实体:
    +openstack service create --name swift --description "OpenStack Object Storage" object-store                                                                   
    +

    创建swift API 端点:

    +
    openstack endpoint create --region RegionOne object-store public http://controller:8080/v1/AUTH_%\(project_id\)s                            
    +openstack endpoint create --region RegionOne object-store internal http://controller:8080/v1/AUTH_%\(project_id\)s                            
    +openstack endpoint create --region RegionOne object-store admin http://controller:8080/v1                                                  
    +
  2. +
  3. +

    安装软件包:

    +
    yum install openstack-swift-proxy python3-swiftclient python3-keystoneclient python3-keystonemiddleware memcached (CTL)
    +
  4. +
  5. +

    配置proxy-server相关配置

    +
  6. +
+

Swift RPM包里已经包含了一个基本可用的proxy-server.conf,只需要手动修改其中的ip和swift password即可。

+
***注意***
+
+**注意替换password为您在身份服务中为swift用户选择的密码**
+
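以下为需要修改部分的示意(节选,仅供参考;假设代理服务部署在控制节点 192.168.0.2,SWIFT_PASS 为 swift 用户在 keystone 中的密码):

```shell
# /etc/swift/proxy-server.conf(节选,示意)
[DEFAULT]
bind_ip = 192.168.0.2

[filter:authtoken]
# 其余选项保持 RPM 包自带配置即可
password = SWIFT_PASS
```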

4.安装和配置存储节点 (STG)

+
安装支持的程序包:
+```shell
+yum install xfsprogs rsync
+```
+
+将/dev/vdb和/dev/vdc设备格式化为 XFS
+
+```shell
+mkfs.xfs /dev/vdb
+mkfs.xfs /dev/vdc
+```
+
+创建挂载点目录结构:
+
+```shell
+mkdir -p /srv/node/vdb
+mkdir -p /srv/node/vdc
+```
+
+找到新分区的 UUID:
+
+```shell
+blkid
+```
+
+编辑/etc/fstab文件并将以下内容添加到其中:
+
+```shell
+UUID="<UUID-from-output-above>" /srv/node/vdb xfs noatime 0 2
+UUID="<UUID-from-output-above>" /srv/node/vdc xfs noatime 0 2
+```
+
+挂载设备:
+
+```shell
+mount /srv/node/vdb
+mount /srv/node/vdc
+```
+***注意***
+
+**如果用户不需要容灾功能,以上步骤只需要创建一个设备即可,同时可以跳过下面的rsync配置**
+
+(可选)创建或编辑/etc/rsyncd.conf文件以包含以下内容:
+
+```shell
+uid = swift
+gid = swift
+log file = /var/log/rsyncd.log
+pid file = /var/run/rsyncd.pid
+address = MANAGEMENT_INTERFACE_IP_ADDRESS
+
+[account]
+max connections = 2
+path = /srv/node/
+read only = False
+lock file = /var/lock/account.lock
+
+[container]
+max connections = 2
+path = /srv/node/
+read only = False
+lock file = /var/lock/container.lock
+
+[object]
+max connections = 2
+path = /srv/node/
+read only = False
+lock file = /var/lock/object.lock
+```
+**替换MANAGEMENT_INTERFACE_IP_ADDRESS为存储节点上管理网络的IP地址**
+
+启动rsyncd服务并配置它在系统启动时启动:
+
+```shell
+systemctl enable rsyncd.service
+systemctl start rsyncd.service
+```
+

5.在存储节点安装和配置组件 (STG)

+
安装软件包:
+
+```shell
+yum install openstack-swift-account openstack-swift-container openstack-swift-object
+```
+
+编辑/etc/swift目录下的account-server.conf、container-server.conf和object-server.conf文件,替换bind_ip为存储节点上管理网络的IP地址。
+
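以 account-server.conf 为例(示意;假设存储节点管理网络 IP 为 192.168.0.4,其余两个文件做相同修改):

```shell
# /etc/swift/account-server.conf(节选,示意)
[DEFAULT]
bind_ip = 192.168.0.4
```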
+确保挂载点目录结构的正确所有权:
+
+```shell
+chown -R swift:swift /srv/node
+```
+
+创建recon目录并确保其拥有正确的所有权:
+
+```shell
+mkdir -p /var/cache/swift
+chown -R root:swift /var/cache/swift
+chmod -R 775 /var/cache/swift
+```
+

6.创建账号环 (CTL)

+
切换到/etc/swift目录。
+
+```shell
+cd /etc/swift
+```
+
+创建基础account.builder文件:
+
+```shell
+swift-ring-builder account.builder create 10 1 1
+```
+
+将每个存储节点添加到环中:
+
+```shell
+swift-ring-builder account.builder add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6202  --device DEVICE_NAME --weight DEVICE_WEIGHT
+```
+
+**替换STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS为存储节点上管理网络的IP地址。替换DEVICE_NAME为同一存储节点上的存储设备名称**
+
+***注意 ***
+**对每个存储节点上的每个存储设备重复此命令**
+
+验证环的内容:
+
+```shell
+swift-ring-builder account.builder
+```
+
+重新平衡环:
+
+```shell
+swift-ring-builder account.builder rebalance
+```
+

7.创建容器环 (CTL)

+
切换到`/etc/swift`目录。
+
+创建基础`container.builder`文件:
+
+```shell
+   swift-ring-builder container.builder create 10 1 1
+```
+
+将每个存储节点添加到环中:
+
+```shell
+swift-ring-builder container.builder \
+  add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6201 \
+  --device DEVICE_NAME --weight 100
+
+```
+
+**替换STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS为存储节点上管理网络的IP地址。替换DEVICE_NAME为同一存储节点上的存储设备名称**
+
+***注意***
+**对每个存储节点上的每个存储设备重复此命令**
+
+验证环的内容:
+
+```shell
+swift-ring-builder container.builder
+```
+
+重新平衡环:
+
+```shell
+swift-ring-builder container.builder rebalance
+```
+

8.创建对象环 (CTL)

+
切换到`/etc/swift`目录。
+
+创建基础`object.builder`文件:
+
+   ```shell
+   swift-ring-builder object.builder create 10 1 1
+   ```
+
+将每个存储节点添加到环中
+
+```shell
+ swift-ring-builder object.builder \
+  add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6200 \
+  --device DEVICE_NAME --weight 100
+```
+
+**替换STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS为存储节点上管理网络的IP地址。替换DEVICE_NAME为同一存储节点上的存储设备名称**
+
+***注意 ***
+**对每个存储节点上的每个存储设备重复此命令**
+
+验证环的内容:
+
+```shell
+swift-ring-builder object.builder
+```
+
+重新平衡环:
+
+```shell
+swift-ring-builder object.builder rebalance
+```
+
+分发环配置文件:
+
+将`account.ring.gz`,`container.ring.gz`以及 `object.ring.gz`文件复制到每个存储节点和运行代理服务的任何其他节点上的`/etc/swift`目录。
+
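例如可以使用 scp 分发(示意,假设存储节点主机名为 storage,且已配置好主机名解析):

```shell
scp /etc/swift/*.ring.gz storage:/etc/swift/
```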

9.完成安装

+

编辑/etc/swift/swift.conf文件

+
[swift-hash]
+swift_hash_path_suffix = test-hash
+swift_hash_path_prefix = test-hash
+
+[storage-policy:0]
+name = Policy-0
+default = yes
+

用唯一值替换 test-hash

+
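可以使用随机字符串作为前后缀,例如用系统自带的 openssl 生成(示意;该值用于生产环境后不要再修改,并注意保密):

```shell
# 分别生成随机值,填入 swift_hash_path_suffix 和 swift_hash_path_prefix
openssl rand -hex 16
openssl rand -hex 16
```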

将swift.conf文件复制到每个存储节点和运行代理服务的任何其他节点上的/etc/swift目录。

+

在所有节点上,确保配置目录的正确所有权:

+
chown -R root:swift /etc/swift
+

在控制器节点和运行代理服务的任何其他节点上,启动对象存储代理服务及其依赖项,并将它们配置为在系统启动时启动:

+
systemctl enable openstack-swift-proxy.service memcached.service
+systemctl start openstack-swift-proxy.service memcached.service
+

在存储节点上,启动对象存储服务并将它们配置为在系统启动时启动:

+
systemctl enable openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service
+
+systemctl start openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service
+
+systemctl enable openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service
+
+systemctl start openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service
+
+systemctl enable openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service
+
+systemctl start openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service
+

Cyborg 安装

+

Cyborg为OpenStack提供加速器设备的支持,包括 GPU, FPGA, ASIC, NP, SoCs, NVMe/NOF SSDs, ODP, DPDK/SPDK等等。

+

1.初始化对应数据库

+
CREATE DATABASE cyborg;
+GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'localhost' IDENTIFIED BY 'CYBORG_DBPASS';
+GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'%' IDENTIFIED BY 'CYBORG_DBPASS';
+

2.创建对应Keystone资源对象

+
$ openstack user create --domain default --password-prompt cyborg
+$ openstack role add --project service --user cyborg admin
+$ openstack service create --name cyborg --description "Acceleration Service" accelerator
+
+$ openstack endpoint create --region RegionOne \
+  accelerator public http://<cyborg-ip>:6666/v1
+$ openstack endpoint create --region RegionOne \
+  accelerator internal http://<cyborg-ip>:6666/v1
+$ openstack endpoint create --region RegionOne \
+  accelerator admin http://<cyborg-ip>:6666/v1
+

3.安装Cyborg

+
yum install openstack-cyborg
+

4.配置Cyborg

+

修改/etc/cyborg/cyborg.conf

+
[DEFAULT]
+transport_url = rabbit://%RABBITMQ_USER%:%RABBITMQ_PASSWORD%@%OPENSTACK_HOST_IP%:5672/
+use_syslog = False
+state_path = /var/lib/cyborg
+debug = True
+
+[database]
+connection = mysql+pymysql://%DATABASE_USER%:%DATABASE_PASSWORD%@%OPENSTACK_HOST_IP%/cyborg
+
+[service_catalog]
+project_domain_id = default
+user_domain_id = default
+project_name = service
+password = PASSWORD
+username = cyborg
+auth_url = http://%OPENSTACK_HOST_IP%/identity
+auth_type = password
+
+[placement]
+project_domain_name = Default
+project_name = service
+user_domain_name = Default
+password = PASSWORD
+username = placement
+auth_url = http://%OPENSTACK_HOST_IP%/identity
+auth_type = password
+
+[keystone_authtoken]
+memcached_servers = localhost:11211
+project_domain_name = Default
+project_name = service
+user_domain_name = Default
+password = PASSWORD
+username = cyborg
+auth_url = http://%OPENSTACK_HOST_IP%/identity
+auth_type = password
+

自行修改对应的用户名、密码、IP等信息

+

5.同步数据库表格

+
cyborg-dbsync --config-file /etc/cyborg/cyborg.conf upgrade
+

6.启动Cyborg服务

+
systemctl enable openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent
+systemctl start openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent
+

Aodh 安装

+

1.创建数据库

+
CREATE DATABASE aodh;
+
+GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'localhost' IDENTIFIED BY 'AODH_DBPASS';
+
+GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'%' IDENTIFIED BY 'AODH_DBPASS';
+

2.创建对应Keystone资源对象

+
openstack user create --domain default --password-prompt aodh
+
+openstack role add --project service --user aodh admin
+
+openstack service create --name aodh --description "Telemetry" alarming
+
+openstack endpoint create --region RegionOne alarming public http://controller:8042
+
+openstack endpoint create --region RegionOne alarming internal http://controller:8042
+
+openstack endpoint create --region RegionOne alarming admin http://controller:8042
+

3.安装Aodh

+
yum install openstack-aodh-api openstack-aodh-evaluator openstack-aodh-notifier openstack-aodh-listener openstack-aodh-expirer python3-aodhclient
+

注意

+

aodh依赖的软件包python3-pyparsing在openEuler的OS仓中版本不适配,需要覆盖安装OpenStack对应版本。可以使用yum list | grep pyparsing | grep OpenStack | awk '{print $2}'获取对应的版本VERSION,然后执行yum install -y python3-pyparsing-VERSION覆盖安装适配的pyparsing。

+
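完整操作示意如下(版本号以实际查询结果为准):

```shell
# 查询 OpenStack 适配的 python3-pyparsing 版本并覆盖安装
VERSION=$(yum list | grep pyparsing | grep OpenStack | awk '{print $2}')
yum install -y python3-pyparsing-$VERSION
```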

4.修改配置文件

+
[database]
+connection = mysql+pymysql://aodh:AODH_DBPASS@controller/aodh
+
+[DEFAULT]
+transport_url = rabbit://openstack:RABBIT_PASS@controller
+auth_strategy = keystone
+
+[keystone_authtoken]
+www_authenticate_uri = http://controller:5000
+auth_url = http://controller:5000
+memcached_servers = controller:11211
+auth_type = password
+project_domain_id = default
+user_domain_id = default
+project_name = service
+username = aodh
+password = AODH_PASS
+
+[service_credentials]
+auth_type = password
+auth_url = http://controller:5000/v3
+project_domain_id = default
+user_domain_id = default
+project_name = service
+username = aodh
+password = AODH_PASS
+interface = internalURL
+region_name = RegionOne
+

5.初始化数据库

+
aodh-dbsync
+

6.启动Aodh服务

+
systemctl enable openstack-aodh-api.service openstack-aodh-evaluator.service openstack-aodh-notifier.service openstack-aodh-listener.service
+
+systemctl start openstack-aodh-api.service openstack-aodh-evaluator.service openstack-aodh-notifier.service openstack-aodh-listener.service
+

Gnocchi 安装

+

1.创建数据库

+
CREATE DATABASE gnocchi;
+
+GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'localhost' IDENTIFIED BY 'GNOCCHI_DBPASS';
+
+GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'%' IDENTIFIED BY 'GNOCCHI_DBPASS';
+

2.创建对应Keystone资源对象

+
openstack user create --domain default --password-prompt gnocchi
+
+openstack role add --project service --user gnocchi admin
+
+openstack service create --name gnocchi --description "Metric Service" metric
+
+openstack endpoint create --region RegionOne metric public http://controller:8041
+
+openstack endpoint create --region RegionOne metric internal http://controller:8041
+
+openstack endpoint create --region RegionOne metric admin http://controller:8041
+

3.安装Gnocchi

+
yum install openstack-gnocchi-api openstack-gnocchi-metricd python3-gnocchiclient
+

4.修改配置文件/etc/gnocchi/gnocchi.conf

+
[api]
+auth_mode = keystone
+port = 8041
+uwsgi_mode = http-socket
+
+[keystone_authtoken]
+auth_type = password
+auth_url = http://controller:5000/v3
+project_domain_name = Default
+user_domain_name = Default
+project_name = service
+username = gnocchi
+password = GNOCCHI_PASS
+interface = internalURL
+region_name = RegionOne
+
+[indexer]
+url = mysql+pymysql://gnocchi:GNOCCHI_DBPASS@controller/gnocchi
+
+[storage]
+# coordination_url is not required but specifying one will improve
+# performance with better workload division across workers.
+coordination_url = redis://controller:6379
+file_basepath = /var/lib/gnocchi
+driver = file
+

5.初始化数据库

+
gnocchi-upgrade
+

6.启动Gnocchi服务

+
systemctl enable openstack-gnocchi-api.service openstack-gnocchi-metricd.service
+
+systemctl start openstack-gnocchi-api.service openstack-gnocchi-metricd.service
+

Ceilometer 安装

+

1.创建对应Keystone资源对象

+
openstack user create --domain default --password-prompt ceilometer
+
+openstack role add --project service --user ceilometer admin
+
+openstack service create --name ceilometer --description "Telemetry" metering
+

2.安装Ceilometer

+
yum install openstack-ceilometer-notification openstack-ceilometer-central
+

3.修改配置文件/etc/ceilometer/pipeline.yaml

+
publishers:
+    # set address of Gnocchi
+    # + filter out Gnocchi-related activity meters (Swift driver)
+    # + set default archive policy
+    - gnocchi://?filter_project=service&archive_policy=low
+

4.修改配置文件/etc/ceilometer/ceilometer.conf

+
[DEFAULT]
+transport_url = rabbit://openstack:RABBIT_PASS@controller
+
+[service_credentials]
+auth_type = password
+auth_url = http://controller:5000/v3
+project_domain_id = default
+user_domain_id = default
+project_name = service
+username = ceilometer
+password = CEILOMETER_PASS
+interface = internalURL
+region_name = RegionOne
+

5.初始化数据库

+
ceilometer-upgrade
+

6.启动Ceilometer服务

+
systemctl enable openstack-ceilometer-notification.service openstack-ceilometer-central.service
+
+systemctl start openstack-ceilometer-notification.service openstack-ceilometer-central.service
+

Heat 安装

+

1.创建heat数据库,并授予heat数据库正确的访问权限,替换HEAT_DBPASS为合适的密码

+
CREATE DATABASE heat;
+GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'localhost' IDENTIFIED BY 'HEAT_DBPASS';
+GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'%' IDENTIFIED BY 'HEAT_DBPASS';
+

2.创建服务凭证,创建heat用户,并为其增加admin角色

+
openstack user create --domain default --password-prompt heat
+openstack role add --project service --user heat admin
+

3.创建heat、heat-cfn服务及其对应的API端点

+
openstack service create --name heat --description "Orchestration" orchestration
+openstack service create --name heat-cfn --description "Orchestration"  cloudformation
+openstack endpoint create --region RegionOne orchestration public http://controller:8004/v1/%\(tenant_id\)s
+openstack endpoint create --region RegionOne orchestration internal http://controller:8004/v1/%\(tenant_id\)s
+openstack endpoint create --region RegionOne orchestration admin http://controller:8004/v1/%\(tenant_id\)s
+openstack endpoint create --region RegionOne cloudformation public http://controller:8000/v1
+openstack endpoint create --region RegionOne cloudformation internal http://controller:8000/v1
+openstack endpoint create --region RegionOne cloudformation admin http://controller:8000/v1
+

4.创建stack管理的额外信息,包括heat domain及其对应domain的admin用户heat_domain_admin,heat_stack_owner角色,heat_stack_user角色

+
openstack user create --domain heat --password-prompt heat_domain_admin
+openstack role add --domain heat --user-domain heat --user heat_domain_admin admin
+openstack role create heat_stack_owner
+openstack role create heat_stack_user
+

5.安装软件包

+
yum install openstack-heat-api openstack-heat-api-cfn openstack-heat-engine
+

6.修改配置文件/etc/heat/heat.conf

+
[DEFAULT]
+transport_url = rabbit://openstack:RABBIT_PASS@controller
+heat_metadata_server_url = http://controller:8000
+heat_waitcondition_server_url = http://controller:8000/v1/waitcondition
+stack_domain_admin = heat_domain_admin
+stack_domain_admin_password = HEAT_DOMAIN_PASS
+stack_user_domain_name = heat
+
+[database]
+connection = mysql+pymysql://heat:HEAT_DBPASS@controller/heat
+
+[keystone_authtoken]
+www_authenticate_uri = http://controller:5000
+auth_url = http://controller:5000
+memcached_servers = controller:11211
+auth_type = password
+project_domain_name = default
+user_domain_name = default
+project_name = service
+username = heat
+password = HEAT_PASS
+
+[trustee]
+auth_type = password
+auth_url = http://controller:5000
+username = heat
+password = HEAT_PASS
+user_domain_name = default
+
+[clients_keystone]
+auth_uri = http://controller:5000
+

7.初始化heat数据库表

+
su -s /bin/sh -c "heat-manage db_sync" heat
+

8.启动服务

+
systemctl enable openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service
+systemctl start openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service
+

基于OpenStack SIG开发工具oos快速部署

+

oos(openEuler OpenStack SIG)是OpenStack SIG提供的命令行工具。其中oos env系列命令提供了一键部署OpenStack (all in one或三节点cluster)的ansible脚本,用户可以使用该脚本快速部署一套基于 openEuler RPM 的 OpenStack 环境。oos工具支持对接云provider(目前仅支持华为云provider)和主机纳管两种方式来部署 OpenStack 环境,下面以对接华为云部署一套all in one的OpenStack环境为例说明oos工具的使用方法。

+
    +
  1. +

    安装oos工具

    +
    yum install openstack-sig-tool
    +
  2. +
  3. +

    配置对接华为云provider的信息

    +

    打开/usr/local/etc/oos/oos.conf文件,修改配置为您拥有的华为云资源信息:

    +
    [huaweicloud]
    +ak = 
    +sk = 
    +region = ap-southeast-3
    +root_volume_size = 100
    +data_volume_size = 100
    +security_group_name = oos
    +image_format = openEuler-%%(release)s-%%(arch)s
    +vpc_name = oos_vpc
    +subnet1_name = oos_subnet1
    +subnet2_name = oos_subnet2
    +
  4. +
  5. +

    配置 OpenStack 环境信息

    +

    打开/usr/local/etc/oos/oos.conf文件,根据当前机器环境和需求修改配置。内容如下:

    +
    [environment]
    +mysql_root_password = root
    +mysql_project_password = root
    +rabbitmq_password = root
    +project_identity_password = root
    +enabled_service = keystone,neutron,cinder,placement,nova,glance,horizon,aodh,ceilometer,cyborg,gnocchi,kolla,heat,swift,trove,tempest
    +neutron_provider_interface_name = br-ex
    +default_ext_subnet_range = 10.100.100.0/24
    +default_ext_subnet_gateway = 10.100.100.1
    +neutron_dataplane_interface_name = eth1
    +cinder_block_device = vdb
    +swift_storage_devices = vdc
    +swift_hash_path_suffix = ash
    +swift_hash_path_prefix = has
    +glance_api_workers = 2
    +cinder_api_workers = 2
    +nova_api_workers = 2
    +nova_metadata_api_workers = 2
    +nova_conductor_workers = 2
    +nova_scheduler_workers = 2
    +neutron_api_workers = 2
    +horizon_allowed_host = *
    +kolla_openeuler_plugin = false
    +

    关键配置

| 配置项 | 解释 |
|:---|:---|
| enabled_service | 安装服务列表,根据用户需求自行删减 |
| neutron_provider_interface_name | neutron L3网桥名称 |
| default_ext_subnet_range | neutron私网IP段 |
| default_ext_subnet_gateway | neutron私网gateway |
| neutron_dataplane_interface_name | neutron使用的网卡,推荐使用一张新的网卡,以免和现有网卡冲突,防止all in one主机断连的情况 |
| cinder_block_device | cinder使用的卷设备名 |
| swift_storage_devices | swift使用的卷设备名 |
| kolla_openeuler_plugin | 是否启用kolla plugin。设置为True,kolla将支持部署openEuler容器 |
  6. +
  7. +

    华为云上面创建一台openEuler 24.03-LTS-SP2的x86_64虚拟机,用于部署all in one 的 OpenStack

    +
    # sshpass在`oos env create`过程中被使用,用于配置对目标虚拟机的免密访问
    +dnf install sshpass
    +oos env create -r 24.03-lts-SP2 -f small -a x86 -n test-oos all_in_one
    +

    具体的参数可以使用oos env create --help命令查看

    +
  8. +
  9. +

    部署OpenStack all in one 环境

    +

    oos env setup test-oos -r wallaby
    +具体的参数可以使用oos env setup --help命令查看

    +
  10. +
  11. +

    初始化tempest环境

    +

    如果用户想使用该环境运行tempest测试的话,可以执行命令oos env init,会自动把tempest需要的OpenStack资源自动创建好

    +
    oos env init test-oos
    +

    命令执行成功后,在用户的根目录下会生成mytest目录,进入其中就可以执行tempest run命令了(见下方示例)。

    +
  12. +
+
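tempest 的一个简单运行示例如下(示意,--smoke 表示只运行冒烟测试子集):

```shell
cd ~/mytest
tempest run --smoke
```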

如果是以主机纳管的方式部署 OpenStack 环境,总体逻辑与上文对接华为云时一致,1、3、5、6步操作不变,去除第2步对华为云provider信息的配置,第4步由在华为云上创建虚拟机改为纳管主机操作。

+
# sshpass在`oos env create`过程中被使用,用于配置对目标主机的免密访问
+dnf install sshpass
+oos env manage -r 24.03-lts-SP2 -i TARGET_MACHINE_IP -p TARGET_MACHINE_PASSWD -n test-oos
+

替换TARGET_MACHINE_IP为目标机ip、TARGET_MACHINE_PASSWD为目标机密码。具体的参数可以使用oos env manage --help命令查看。


OpenStack Antelope 部署指南

+ +

本文档是 openEuler OpenStack SIG 编写的基于 openEuler 24.03 LTS 的 OpenStack 部署指南,内容由 SIG 贡献者提供。在阅读过程中,如果您有任何疑问或者发现任何问题,请联系SIG维护人员,或者直接提交issue

+

约定

+

本章节描述文档中的一些通用约定。

| 名称 | 定义 |
|:---|:---|
| RABBIT_PASS | rabbitmq的密码,由用户设置,在OpenStack各个服务配置中使用 |
| CINDER_PASS | cinder服务keystone用户的密码,在cinder配置中使用 |
| CINDER_DBPASS | cinder服务数据库密码,在cinder配置中使用 |
| KEYSTONE_DBPASS | keystone服务数据库密码,在keystone配置中使用 |
| GLANCE_PASS | glance服务keystone用户的密码,在glance配置中使用 |
| GLANCE_DBPASS | glance服务数据库密码,在glance配置中使用 |
| HEAT_PASS | 在keystone注册的heat用户密码,在heat配置中使用 |
| HEAT_DBPASS | heat服务数据库密码,在heat配置中使用 |
| CYBORG_PASS | 在keystone注册的cyborg用户密码,在cyborg配置中使用 |
| CYBORG_DBPASS | cyborg服务数据库密码,在cyborg配置中使用 |
| NEUTRON_PASS | 在keystone注册的neutron用户密码,在neutron配置中使用 |
| NEUTRON_DBPASS | neutron服务数据库密码,在neutron配置中使用 |
| PROVIDER_INTERFACE_NAME | 物理网络接口的名称,在neutron配置中使用 |
| OVERLAY_INTERFACE_IP_ADDRESS | Controller控制节点的管理ip地址,在neutron配置中使用 |
| METADATA_SECRET | metadata proxy的secret密码,在nova和neutron配置中使用 |
| PLACEMENT_DBPASS | placement服务数据库密码,在placement配置中使用 |
| PLACEMENT_PASS | 在keystone注册的placement用户密码,在placement配置中使用 |
| NOVA_DBPASS | nova服务数据库密码,在nova配置中使用 |
| NOVA_PASS | 在keystone注册的nova用户密码,在nova、cyborg、neutron等配置中使用 |
| IRONIC_DBPASS | ironic服务数据库密码,在ironic配置中使用 |
| IRONIC_PASS | 在keystone注册的ironic用户密码,在ironic配置中使用 |
| IRONIC_INSPECTOR_DBPASS | ironic-inspector服务数据库密码,在ironic-inspector配置中使用 |
| IRONIC_INSPECTOR_PASS | 在keystone注册的ironic-inspector用户密码,在ironic-inspector配置中使用 |

OpenStack SIG 提供了多种基于 openEuler 部署 OpenStack 的方法,以满足不同的用户场景,请按需选择。

+

基于RPM部署

+

环境准备

+

本文档基于OpenStack经典的三节点环境进行部署,三个节点分别是控制节点(Controller)、计算节点(Compute)、存储节点(Storage),其中存储节点一般只部署存储服务,在资源有限的情况下,可以不单独部署该节点,把存储节点上的服务部署到计算节点即可。

+

首先准备三个openEuler 24.03 LTS环境,根据您的环境,下载对应的镜像并安装即可:ISO镜像qcow2镜像

+

下面的安装按照如下拓扑进行: +

controller:192.168.0.2
+compute:   192.168.0.3
+storage:   192.168.0.4
+如果您的环境IP不同,请按照您的环境IP修改相应的配置文件。

+

本文档的三节点服务拓扑如下图所示(只包含Keystone、Glance、Nova、Cinder、Neutron这几个核心服务,其他服务请参考具体部署章节):

+

(图:三节点部署拓扑 topology1、topology2、topology3)

+

在正式部署之前,需要对每个节点做如下配置和检查:

+
    +
  1. +

    配置 openEuler 24.03 LTS 官方 yum 源,需要启用 EPOL 软件仓以支持 OpenStack

    +
    yum update
    +yum install openstack-release-antelope
    +yum clean all && yum makecache
    +

    注意:如果你的环境的YUM源没有启用EPOL,需要同时配置EPOL,确保EPOL已配置,如下所示。

    +
    vi /etc/yum.repos.d/openEuler.repo
    +
    +[EPOL]
    +name=EPOL
    +baseurl=http://repo.openeuler.org/openEuler-24.03-LTS/EPOL/main/$basearch/
    +enabled=1
    +gpgcheck=1
    +gpgkey=http://repo.openeuler.org/openEuler-24.03-LTS/OS/$basearch/RPM-GPG-KEY-openEuler
    +
  2. +
  3. +

    修改主机名以及映射

    +

    每个节点分别修改主机名,以controller为例:

    +
    hostnamectl set-hostname controller
    +
    +vi /etc/hostname
    +内容修改为controller
    +

    然后修改每个节点的/etc/hosts文件,新增如下内容:

    +
    192.168.0.2   controller
    +192.168.0.3   compute
    +192.168.0.4   storage
    +
  4. +
+

时钟同步

+

集群环境时刻要求每个节点的时间一致,一般由时钟同步软件保证。本文使用chrony软件。步骤如下:

+

Controller节点

+
    +
  1. 安装服务 +
    dnf install chrony
  2. +
  3. 修改/etc/chrony.conf配置文件,新增一行 +
    # 表示允许哪些IP从本节点同步时钟
    +allow 192.168.0.0/24
  4. +
  5. 重启服务 +
    systemctl restart chronyd
  6. +
+

其他节点

+
    +
  1. +

    安装服务 +

    dnf install chrony

    +
  2. +
  3. +

    修改/etc/chrony.conf配置文件,新增一行

    +
    # NTP_SERVER是controller IP,表示从这个机器获取时间,这里我们填192.168.0.2,或者在`/etc/hosts`里配置好的controller名字即可。
    +server NTP_SERVER iburst
    +

    同时,要把pool pool.ntp.org iburst这一行注释掉,表示不从公网同步时钟。

    +
  4. +
  5. +

    重启服务

    +
    systemctl restart chronyd
    +
  6. +
+

配置完成后,检查一下结果,在其他非controller节点执行chronyc sources,返回结果类似如下内容,表示成功从controller同步时钟。

+
MS Name/IP address         Stratum Poll Reach LastRx Last sample
+===============================================================================
+^* 192.168.0.2                 4   6     7     0  -1406ns[  +55us] +/-   16ms
+

安装数据库

+

数据库安装在控制节点,这里推荐使用mariadb。

+
    +
  1. +

    安装软件包

    +
    dnf install mysql-config mariadb mariadb-server python3-PyMySQL
    +
  2. +
  3. +

    新增配置文件/etc/my.cnf.d/openstack.cnf,内容如下

    +
    [mysqld]
    +bind-address = 192.168.0.2
    +default-storage-engine = innodb
    +innodb_file_per_table = on
    +max_connections = 4096
    +collation-server = utf8_general_ci
    +character-set-server = utf8
    +
  4. +
  5. +

    启动服务器

    +
    systemctl start mariadb
    +
  6. +
  7. +

    初始化数据库,根据提示进行即可

    +
    mysql_secure_installation
    +

    示例如下:

    +
    NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MariaDB
    +    SERVERS IN PRODUCTION USE!  PLEASE READ EACH STEP CAREFULLY!
    +
    +In order to log into MariaDB to secure it, we'll need the current
    +password for the root user. If you've just installed MariaDB, and
    +haven't set the root password yet, you should just press enter here.
    +
    +Enter current password for root (enter for none): 
    +
    +#这里输入密码,由于我们是初始化DB,直接回车就行
    +
    +OK, successfully used password, moving on...
    +
    +Setting the root password or using the unix_socket ensures that nobody
    +can log into the MariaDB root user without the proper authorisation.
    +
    +You already have your root account protected, so you can safely answer 'n'.
    +
    +# 这里根据提示输入N
    +
    +Switch to unix_socket authentication [Y/n] N
    +
    +Enabled successfully!
    +Reloading privilege tables..
    +... Success!
    +
    +
    +You already have your root account protected, so you can safely answer 'n'.
    +
    +# 输入Y,修改密码
    +
    +Change the root password? [Y/n] Y
    +
    +New password: 
    +Re-enter new password: 
    +Password updated successfully!
    +Reloading privilege tables..
    +... Success!
    +
    +
    +By default, a MariaDB installation has an anonymous user, allowing anyone
    +to log into MariaDB without having to have a user account created for
    +them.  This is intended only for testing, and to make the installation
    +go a bit smoother.  You should remove them before moving into a
    +production environment.
    +
    +# 输入Y,删除匿名用户
    +
    +Remove anonymous users? [Y/n] Y
    +... Success!
    +
    +Normally, root should only be allowed to connect from 'localhost'.  This
    +ensures that someone cannot guess at the root password from the network.
    +
    +# 输入Y,关闭root远程登录权限
    +
    +Disallow root login remotely? [Y/n] Y
    +... Success!
    +
    +By default, MariaDB comes with a database named 'test' that anyone can
    +access.  This is also intended only for testing, and should be removed
    +before moving into a production environment.
    +
    +# 输入Y,删除test数据库
    +
    +Remove test database and access to it? [Y/n] Y
    +- Dropping test database...
    +... Success!
    +- Removing privileges on test database...
    +... Success!
    +
    +Reloading the privilege tables will ensure that all changes made so far
    +will take effect immediately.
    +
    +# 输入Y,重载配置
    +
    +Reload privilege tables now? [Y/n] Y
    +... Success!
    +
    +Cleaning up...
    +
    +All done!  If you've completed all of the above steps, your MariaDB
    +installation should now be secure.
    +
  8. +
  9. +

    验证,根据第四步设置的密码,检查是否能登录mariadb

    +
    mysql -uroot -p
    +
  10. +
+

安装消息队列

+

消息队列安装在控制节点,这里推荐使用rabbitmq。

+
    +
  1. 安装软件包 +
    dnf install rabbitmq-server
  2. +
  3. 启动服务 +
    systemctl start rabbitmq-server
  4. +
  5. 配置openstack用户,RABBIT_PASS是openstack服务登录消息队里的密码,需要和后面各个服务的配置保持一致。 +
    rabbitmqctl add_user openstack RABBIT_PASS
    +rabbitmqctl set_permissions openstack ".*" ".*" ".*"
  6. +
+

安装缓存服务

+

消息队列安装在控制节点,这里推荐使用Memcached。

+
    +
  1. 安装软件包 +
    dnf install memcached python3-memcached
  2. +
  3. 修改配置文件/etc/sysconfig/memcached +
    OPTIONS="-l 127.0.0.1,::1,controller"
  4. +
  5. 启动服务 +
    systemctl start memcached
  6. +
+

部署服务

+

Keystone

+

Keystone是OpenStack提供的鉴权服务,是整个OpenStack的入口,提供了租户隔离、用户认证、服务发现等功能,必须安装。

+
    +
  1. +

    创建 keystone 数据库并授权

    +
    mysql -u root -p
    +
    +MariaDB [(none)]> CREATE DATABASE keystone;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
    +IDENTIFIED BY 'KEYSTONE_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
    +IDENTIFIED BY 'KEYSTONE_DBPASS';
    +MariaDB [(none)]> exit
    +

    注意

    +

    替换 KEYSTONE_DBPASS,为 Keystone 数据库设置密码

    +
  2. +
  3. +

    安装软件包

    +
    dnf install openstack-keystone httpd mod_wsgi
    +
  4. +
  5. +

    配置keystone相关配置

    +
    vim /etc/keystone/keystone.conf
    +
    +[database]
    +connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone
    +
    +[token]
    +provider = fernet
    +

    解释

    +

    [database]部分,配置数据库入口

    +

    [token]部分,配置token provider

    +
  6. +
  7. +

    同步数据库

    +
    su -s /bin/sh -c "keystone-manage db_sync" keystone
    +
  8. +
  9. +

    初始化Fernet密钥仓库

    +
    keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
    +keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
    +
  10. +
  11. +

    启动服务

    +
    keystone-manage bootstrap --bootstrap-password ADMIN_PASS \
    +--bootstrap-admin-url http://controller:5000/v3/ \
    +--bootstrap-internal-url http://controller:5000/v3/ \
    +--bootstrap-public-url http://controller:5000/v3/ \
    +--bootstrap-region-id RegionOne
    +

    注意

    +

    替换 ADMIN_PASS,为 admin 用户设置密码

    +
  12. +
  13. +

    配置Apache HTTP server

    +
  14. +
  15. +

    打开httpd.conf并配置

    +
    #需要修改的配置文件路径
    +vim /etc/httpd/conf/httpd.conf
    +
    +#修改以下项,如果没有则新添加
    +ServerName controller
    +
  16. +
  17. +

    创建软链接

    +
    ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
    +

    解释

    +

    配置 ServerName 项引用控制节点

    +

    注意:如果 ServerName 项不存在则需要创建

    +
  18. +
  19. +

    启动Apache HTTP服务

    +
    systemctl enable httpd.service
    +systemctl start httpd.service
    +
  20. +
  21. +

    创建环境变量配置

    +
    cat << EOF >> ~/.admin-openrc
    +export OS_PROJECT_DOMAIN_NAME=Default
    +export OS_USER_DOMAIN_NAME=Default
    +export OS_PROJECT_NAME=admin
    +export OS_USERNAME=admin
    +export OS_PASSWORD=ADMIN_PASS
    +export OS_AUTH_URL=http://controller:5000/v3
    +export OS_IDENTITY_API_VERSION=3
    +export OS_IMAGE_API_VERSION=2
    +EOF
    +

    注意

    +

    替换 ADMIN_PASS 为 admin 用户的密码

    +
  22. +
  23. +

    依次创建domain, projects, users, roles

    +
      +
    • +

      需要先安装python3-openstackclient

      +
      dnf install python3-openstackclient
      +
    • +
    • +

      导入环境变量

      +
      source ~/.admin-openrc
      +
    • +
    • +

      创建project service,其中 domain default 在 keystone-manage bootstrap 时已创建

      +
      openstack domain create --description "An Example Domain" example
      +
      openstack project create --domain default --description "Service Project" service
      +
    • +
    • +

      创建(non-admin)project myproject,user myuser 和 role myrole,为 myprojectmyuser 添加角色myrole

      +
      openstack project create --domain default --description "Demo Project" myproject
      +openstack user create --domain default --password-prompt myuser
      +openstack role create myrole
      +openstack role add --project myproject --user myuser myrole
      +
    • +
    +
  24. +
  25. +

    验证

    +
      +
    • +

      取消临时环境变量OS_AUTH_URL和OS_PASSWORD:

      +
      source ~/.admin-openrc
      +unset OS_AUTH_URL OS_PASSWORD
      +
    • +
    • +

      为admin用户请求token:

      +
      openstack --os-auth-url http://controller:5000/v3 \
      +--os-project-domain-name Default --os-user-domain-name Default \
      +--os-project-name admin --os-username admin token issue
      +
    • +
    • +

      为myuser用户请求token:

      +
      openstack --os-auth-url http://controller:5000/v3 \
      +--os-project-domain-name Default --os-user-domain-name Default \
      +--os-project-name myproject --os-username myuser token issue
      +
    • +
    +
  26. +
+

Glance

+

Glance是OpenStack提供的镜像服务,负责虚拟机、裸机镜像的上传与下载,必须安装。

+

Controller节点

+
    +
  1. +

    创建 glance 数据库并授权

    +
    mysql -u root -p
    +
    +MariaDB [(none)]> CREATE DATABASE glance;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
    +IDENTIFIED BY 'GLANCE_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
    +IDENTIFIED BY 'GLANCE_DBPASS';
    +MariaDB [(none)]> exit
    +

    注意:

    +

    替换 GLANCE_DBPASS,为 glance 数据库设置密码

    +
  2. +
  3. +

    初始化 glance 资源对象

    +
  4. +
  5. +

    导入环境变量

    +
    source ~/.admin-openrc
    +
  6. +
  7. +

    创建用户时,命令行会提示输入密码,请输入自定义的密码,下文涉及到GLANCE_PASS的地方替换成该密码即可。

    +
    openstack user create --domain default --password-prompt glance
    +User Password:
    +Repeat User Password:
    +
  8. +
  9. +

    添加glance用户到service project并指定admin角色:

    +
    openstack role add --project service --user glance admin
    +
  10. +
  11. +

    创建glance服务实体:

    +
    openstack service create --name glance --description "OpenStack Image" image
    +
  12. +
  13. +

    创建glance API服务:

    +
    openstack endpoint create --region RegionOne image public http://controller:9292
    +openstack endpoint create --region RegionOne image internal http://controller:9292
    +openstack endpoint create --region RegionOne image admin http://controller:9292
    +
  14. +
  15. +

    安装软件包

    +
    dnf install openstack-glance
    +
  16. +
  17. +

    修改 glance 配置文件

    +
    vim /etc/glance/glance-api.conf
    +
    +[database]
    +connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
    +
    +[keystone_authtoken]
    +www_authenticate_uri  = http://controller:5000
    +auth_url = http://controller:5000
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +project_name = service
    +username = glance
    +password = GLANCE_PASS
    +
    +[paste_deploy]
    +flavor = keystone
    +
    +[glance_store]
    +stores = file,http
    +default_store = file
    +filesystem_store_datadir = /var/lib/glance/images/
    +

    解释:

    +

    [database]部分,配置数据库入口

    +

    [keystone_authtoken] [paste_deploy]部分,配置身份认证服务入口

    +

    [glance_store]部分,配置本地文件系统存储和镜像文件的位置

    +
  18. +
  19. +

    同步数据库

    +
    su -s /bin/sh -c "glance-manage db_sync" glance
    +
  20. +
  21. +

    启动服务:

    +
    systemctl enable openstack-glance-api.service
    +systemctl start openstack-glance-api.service
    +
  22. +
  23. +

    验证

    +
      +
    • +

      导入环境变量

      source ~/.admin-openrc

      +
    • +
    • +

      下载镜像

      +
      x86镜像下载:
      +wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
      +
      +arm镜像下载:
      +wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-aarch64-disk.img
      +

      注意

      +

      如果您使用的环境是鲲鹏架构,请下载aarch64版本的镜像;已对镜像cirros-0.5.2-aarch64-disk.img进行测试。

      +
    • +
    • +

      向Image服务上传镜像:

      +
      openstack image create --disk-format qcow2 --container-format bare \
      +                    --file cirros-0.4.0-x86_64-disk.img --public cirros
      +
    • +
    • +

      确认镜像上传并验证属性:

      +
      openstack image list
      +
    • +
    +
  24. +
+

Placement

+

Placement是OpenStack提供的资源调度组件,一般不面向用户,由Nova等组件调用,安装在控制节点。

+

安装、配置Placement服务前,需要先创建相应的数据库、服务凭证和API endpoints。

+
    +
  1. +

    创建数据库

    +
      +
    • +

      使用root用户访问数据库服务:

      +
      mysql -u root -p
      +
    • +
    • +

      创建placement数据库:

      +
      MariaDB [(none)]> CREATE DATABASE placement;
      +
    • +
    • +

      授权数据库访问:

      +
      MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' \
      +  IDENTIFIED BY 'PLACEMENT_DBPASS';
      +MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' \
      +  IDENTIFIED BY 'PLACEMENT_DBPASS';
      +

      替换PLACEMENT_DBPASS为placement数据库访问密码。

      +
    • +
    • +

      退出数据库访问客户端:

      +
      exit
      +
    • +
    +
  2. +
  3. +

    配置用户和Endpoints

    +
      +
    • +

      source admin凭证,以获取admin命令行权限:

      +
      source ~/.admin-openrc
      +
    • +
    • +

      创建placement用户并设置用户密码:

      +
      openstack user create --domain default --password-prompt placement
      +
      +User Password:
      +Repeat User Password:
      +
    • +
    • +

      添加placement用户到service project并指定admin角色:

      +
      openstack role add --project service --user placement admin
      +
    • +
    • +

      创建placement服务实体:

      +
      openstack service create --name placement \
      +  --description "Placement API" placement
      +
    • +
    • +

      创建Placement API服务endpoints:

      +
      openstack endpoint create --region RegionOne \
      +  placement public http://controller:8778
      +openstack endpoint create --region RegionOne \
      +  placement internal http://controller:8778
      +openstack endpoint create --region RegionOne \
      +  placement admin http://controller:8778
      +
    • +
    +
  4. +
  5. +

    安装及配置组件

    +
      +
    • +

      安装软件包:

      +
      dnf install openstack-placement-api
      +
    • +
    • +

      编辑/etc/placement/placement.conf配置文件,完成如下操作:

      +
        +
      • +

        [placement_database]部分,配置数据库入口:

        +
        [placement_database]
        +connection = mysql+pymysql://placement:PLACEMENT_DBPASS@controller/placement
        +

        替换PLACEMENT_DBPASS为placement数据库的密码。

        +
      • +
      • +

        [api][keystone_authtoken]部分,配置身份认证服务入口:

        +
        [api]
        +auth_strategy = keystone
        +
        +[keystone_authtoken]
        +auth_url = http://controller:5000/v3
        +memcached_servers = controller:11211
        +auth_type = password
        +project_domain_name = Default
        +user_domain_name = Default
        +project_name = service
        +username = placement
        +password = PLACEMENT_PASS
        +

        替换PLACEMENT_PASS为placement用户的密码。

        +
      • +
      +
    • +
    • +

      数据库同步,填充Placement数据库:

      +
      su -s /bin/sh -c "placement-manage db sync" placement
      +
    • +
    +
  6. +
  7. +

    启动服务

    +

    重启httpd服务:

    +
    systemctl restart httpd
    +
  8. +
  9. +

    验证

    +
      +
    • +

      source admin凭证,以获取admin命令行权限

      +
      source ~/.admin-openrc
      +
    • +
    • +

      执行状态检查:

      +
      placement-status upgrade check
      +
      +----------------------------------------------------------------------+
      +| Upgrade Check Results                                                |
      ++----------------------------------------------------------------------+
      +| Check: Missing Root Provider IDs                                     |
      +| Result: Success                                                      |
      +| Details: None                                                        |
      ++----------------------------------------------------------------------+
      +| Check: Incomplete Consumers                                          |
      +| Result: Success                                                      |
      +| Details: None                                                        |
      ++----------------------------------------------------------------------+
      +| Check: Policy File JSON to YAML Migration                            |
      +| Result: Failure                                                      |
      +| Details: Your policy file is JSON-formatted which is deprecated. You |
      +|   need to switch to YAML-formatted file. Use the                     |
      +|   ``oslopolicy-convert-json-to-yaml`` tool to convert the            |
      +|   existing JSON-formatted files to YAML in a backwards-              |
      +|   compatible manner: https://docs.openstack.org/oslo.policy/         |
      +|   latest/cli/oslopolicy-convert-json-to-yaml.html.                   |
      ++----------------------------------------------------------------------+
      +

      这里可以看到Policy File JSON to YAML Migration的结果为Failure。这是因为在Placement中,JSON格式的policy文件从Wallaby版本开始已处于deprecated状态。可以参考提示,使用oslopolicy-convert-json-to-yaml工具 将现有的JSON格式policy文件转化为YAML格式。

      +
      oslopolicy-convert-json-to-yaml  --namespace placement \
      +  --policy-file /etc/placement/policy.json \
      +  --output-file /etc/placement/policy.yaml
      +mv /etc/placement/policy.json{,.bak}
      +

      注:当前环境中此问题可忽略,不影响运行。

      +
    • +
    • +

      针对placement API运行命令:

      +
        +
      • +

        安装osc-placement插件:

        +
        dnf install python3-osc-placement
        +
      • +
      • +

        列出可用的资源类别及特性:

        +
        openstack --os-placement-api-version 1.2 resource class list --sort-column name
        ++----------------------------+
        +| name                       |
        ++----------------------------+
        +| DISK_GB                    |
        +| FPGA                       |
        +| ...                        |
        +
        +openstack --os-placement-api-version 1.6 trait list --sort-column name
        ++---------------------------------------+
        +| name                                  |
        ++---------------------------------------+
        +| COMPUTE_ACCELERATORS                  |
        +| COMPUTE_ARCH_AARCH64                  |
        +| ...                                   |
        +
      • +
      +
    • +
    +
  10. +
+

Nova

+

Nova是OpenStack的计算服务,负责虚拟机的创建、发放等功能。

+

Controller节点

+

在控制节点执行以下操作。

+
    +
  1. +

    创建数据库

    +
      +
    • +

      使用root用户访问数据库服务:

      +
      mysql -u root -p
      +
    • +
    • +

      创建nova_apinovanova_cell0数据库:

      +
      MariaDB [(none)]> CREATE DATABASE nova_api;
      +MariaDB [(none)]> CREATE DATABASE nova;
      +MariaDB [(none)]> CREATE DATABASE nova_cell0;
      +
    • +
    • +

      授权数据库访问:

      +
      MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \
      +  IDENTIFIED BY 'NOVA_DBPASS';
      +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \
      +  IDENTIFIED BY 'NOVA_DBPASS';
      +
      +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
      +  IDENTIFIED BY 'NOVA_DBPASS';
      +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
      +  IDENTIFIED BY 'NOVA_DBPASS';
      +
      +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \
      +  IDENTIFIED BY 'NOVA_DBPASS';
      +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \
      +  IDENTIFIED BY 'NOVA_DBPASS';
      +

      替换NOVA_DBPASS为nova相关数据库访问密码。

      +
    • +
    • +

      退出数据库访问客户端:

      +
      exit
      +
    • +
    +
  2. +
  3. +

    配置用户和Endpoints

    +
      +
    • +

      source admin凭证,以获取admin命令行权限:

      +
      source ~/.admin-openrc
      +
    • +
    • +

      创建nova用户并设置用户密码:

      +
      openstack user create --domain default --password-prompt nova
      +
      +User Password:
      +Repeat User Password:
      +
    • +
    • +

      添加nova用户到service project并指定admin角色:

      +
      openstack role add --project service --user nova admin
      +
    • +
    • +

      创建nova服务实体:

      +
      openstack service create --name nova \
      +  --description "OpenStack Compute" compute
      +
    • +
    • +

      创建Nova API服务endpoints:

      +
      openstack endpoint create --region RegionOne \
      +  compute public http://controller:8774/v2.1
      +openstack endpoint create --region RegionOne \
      +  compute internal http://controller:8774/v2.1
      +openstack endpoint create --region RegionOne \
      +  compute admin http://controller:8774/v2.1
      +
    • +
    +
  4. +
  5. +

    安装及配置组件

    +
      +
    • +

      安装软件包:

      +
      dnf install openstack-nova-api openstack-nova-conductor \
      +  openstack-nova-novncproxy openstack-nova-scheduler
      +
    • +
    • +

      编辑/etc/nova/nova.conf配置文件,完成如下操作:

      +
        +
      • +

        [default]部分,启用计算和元数据的API,配置RabbitMQ消息队列入口,使用controller节点管理IP配置my_ip,显式定义log_dir:

        +
        [DEFAULT]
        +enabled_apis = osapi_compute,metadata
        +transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
        +my_ip = 192.168.0.2
        +log_dir = /var/log/nova
        +state_path = /var/lib/nova
        +

        替换RABBIT_PASS为RabbitMQ中openstack账户的密码。

        +
      • +
      • +

        [api_database][database]部分,配置数据库入口:

        +
        [api_database]
        +connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api
        +
        +[database]
        +connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova
        +

        替换NOVA_DBPASS为nova相关数据库的密码。

        +
      • +
      • +

        [api][keystone_authtoken]部分,配置身份认证服务入口:

        +
        [api]
        +auth_strategy = keystone
        +
        +[keystone_authtoken]
        +auth_url = http://controller:5000/v3
        +memcached_servers = controller:11211
        +auth_type = password
        +project_domain_name = Default
        +user_domain_name = Default
        +project_name = service
        +username = nova
        +password = NOVA_PASS
        +

        替换NOVA_PASS为nova用户的密码。

        +
      • +
      • +

        [vnc]部分,启用并配置远程控制台入口:

        +
        [vnc]
        +enabled = true
        +server_listen = $my_ip
        +server_proxyclient_address = $my_ip
        +
      • +
      • +

        [glance]部分,配置镜像服务API的地址:

        +
        [glance]
        +api_servers = http://controller:9292
        +
      • +
      • +

        [oslo_concurrency]部分,配置lock path:

        +
        [oslo_concurrency]
        +lock_path = /var/lib/nova/tmp
        +
      • +
      • +

        [placement]部分,配置placement服务的入口:

        +
        [placement]
        +region_name = RegionOne
        +project_domain_name = Default
        +project_name = service
        +auth_type = password
        +user_domain_name = Default
        +auth_url = http://controller:5000/v3
        +username = placement
        +password = PLACEMENT_PASS
        +

        替换PLACEMENT_PASS为placement用户的密码。

        +
      • +
      +
    • +
    • +

      数据库同步:

      +
        +
      • +

        同步nova-api数据库:

        +
        su -s /bin/sh -c "nova-manage api_db sync" nova
        +
      • +
      • +

        注册cell0数据库:

        +
        su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
        +
      • +
      • +

        创建cell1 cell:

        +
        su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
        +
      • +
      • +

        同步nova数据库:

        +
        su -s /bin/sh -c "nova-manage db sync" nova
        +
      • +
      • +

        验证cell0和cell1注册正确:

        +
        su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova
        +
      • +
      +
    • +
    +
  6. +
  7. +

    启动服务

    +
    systemctl enable \
    +  openstack-nova-api.service \
    +  openstack-nova-scheduler.service \
    +  openstack-nova-conductor.service \
    +  openstack-nova-novncproxy.service
    +
    +systemctl start \
    +  openstack-nova-api.service \
    +  openstack-nova-scheduler.service \
    +  openstack-nova-conductor.service \
    +  openstack-nova-novncproxy.service
    +
  8. +
+
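A quick sanity check on the controller before moving on (a minimal sketch; the service names and the nova account follow the steps above):

```shell
# All four control-plane services should be active
systemctl status openstack-nova-api openstack-nova-scheduler \
  openstack-nova-conductor openstack-nova-novncproxy --no-pager

# cell0 and cell1 should both be listed
su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova
```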

Compute节点

+

在计算节点执行以下操作。

+
    +
  1. +

    安装软件包

    +
    dnf install openstack-nova-compute
    +
  2. +
  3. +

    编辑/etc/nova/nova.conf配置文件

    +
      +
    • +

      In the [DEFAULT] section, enable the compute and metadata APIs, configure the RabbitMQ message queue entry point, set my_ip to the compute node's management IP, and explicitly define compute_driver, instances_path, and log_dir:

      +
      [DEFAULT]
      +enabled_apis = osapi_compute,metadata
      +transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
      +my_ip = 192.168.0.3
      +compute_driver = libvirt.LibvirtDriver
      +instances_path = /var/lib/nova/instances
      +log_dir = /var/log/nova
      +

      替换RABBIT_PASS为RabbitMQ中openstack账户的密码。

      +
    • +
    • +

      In the [api] and [keystone_authtoken] sections, configure Identity service access:

      +
      [api]
      +auth_strategy = keystone
      +
      +[keystone_authtoken]
      +auth_url = http://controller:5000/v3
      +memcached_servers = controller:11211
      +auth_type = password
      +project_domain_name = Default
      +user_domain_name = Default
      +project_name = service
      +username = nova
      +password = NOVA_PASS
      +

      替换NOVA_PASS为nova用户的密码。

      +
    • +
    • +

      [vnc]部分,启用并配置远程控制台入口:

      +
      [vnc]
      +enabled = true
      +server_listen = $my_ip
      +server_proxyclient_address = $my_ip
      +novncproxy_base_url = http://controller:6080/vnc_auto.html
      +
    • +
    • +

      [glance]部分,配置镜像服务API的地址:

      +
      [glance]
      +api_servers = http://controller:9292
      +
    • +
    • +

      [oslo_concurrency]部分,配置lock path:

      +
      [oslo_concurrency]
      +lock_path = /var/lib/nova/tmp
      +
    • +
    • +

      [placement]部分,配置placement服务的入口:

      +
      [placement]
      +region_name = RegionOne
      +project_domain_name = Default
      +project_name = service
      +auth_type = password
      +user_domain_name = Default
      +auth_url = http://controller:5000/v3
      +username = placement
      +password = PLACEMENT_PASS
      +

      替换PLACEMENT_PASS为placement用户的密码。

      +
    • +
    +
  4. +
  5. +

    确认计算节点是否支持虚拟机硬件加速(x86_64)

    +

    处理器为x86_64架构时,可通过运行如下命令确认是否支持硬件加速:

    +
    egrep -c '(vmx|svm)' /proc/cpuinfo
    +

    如果返回值为0则不支持硬件加速,需要配置libvirt使用QEMU而不是默认的KVM。编辑/etc/nova/nova.conf[libvirt]部分:

    +
    [libvirt]
    +virt_type = qemu
    +

    如果返回值为1或更大的值,则支持硬件加速,不需要进行额外的配置。

    +
  6. +
  7. +

    确认计算节点是否支持虚拟机硬件加速(arm64)

    +

    处理器为arm64架构时,可通过运行如下命令确认是否支持硬件加速:

    +
    virt-host-validate
    +# 该命令由libvirt提供,此时libvirt应已作为openstack-nova-compute依赖被安装,环境中已有此命令
    +

    显示FAIL时,表示不支持硬件加速,需要配置libvirt使用QEMU而不是默认的KVM。

    +
    QEMU: Checking if device /dev/kvm exists: FAIL (Check that CPU and firmware supports virtualization and kvm module is loaded)
    +

    编辑/etc/nova/nova.conf[libvirt]部分:

    +
    [libvirt]
    +virt_type = qemu
    +

    显示PASS时,表示支持硬件加速,不需要进行额外的配置。

    +
    QEMU: Checking if device /dev/kvm exists: PASS
    +
  8. +
  9. +

    配置qemu(仅arm64)

    +

    仅当处理器为arm64架构时需要执行此操作。

    +
      +
    • +

      编辑/etc/libvirt/qemu.conf:

      +
      nvram = ["/usr/share/AAVMF/AAVMF_CODE.fd:/usr/share/AAVMF/AAVMF_VARS.fd", \
      +         "/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw:/usr/share/edk2/aarch64/vars-template-pflash.raw"]
      +
    • +
    • +

      编辑/etc/qemu/firmware/edk2-aarch64.json

      +
      {
      +    "description": "UEFI firmware for ARM64 virtual machines",
      +    "interface-types": [
      +        "uefi"
      +    ],
      +    "mapping": {
      +        "device": "flash",
      +        "executable": {
      +            "filename": "/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw",
      +            "format": "raw"
      +        },
      +        "nvram-template": {
      +            "filename": "/usr/share/edk2/aarch64/vars-template-pflash.raw",
      +            "format": "raw"
      +        }
      +    },
      +    "targets": [
      +        {
      +            "architecture": "aarch64",
      +            "machines": [
      +                "virt-*"
      +            ]
      +        }
      +    ],
      +    "features": [
      +
      +    ],
      +    "tags": [
      +
      +    ]
      +}
      +
    • +
    +
  10. +
  11. +

    启动服务

    +
    systemctl enable libvirtd.service openstack-nova-compute.service
    +systemctl start libvirtd.service openstack-nova-compute.service
    +
  12. +
+
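To confirm the compute service came up cleanly on this node, a minimal check (the log path follows the log_dir configured above; the exact file name may vary):

```shell
systemctl status libvirtd openstack-nova-compute --no-pager
# The nova-compute log should show the service connecting to the message queue
tail -n 20 /var/log/nova/nova-compute.log
```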

Controller节点

+

在控制节点执行以下操作。

+
    +
  1. +

    添加计算节点到openstack集群

    +
      +
    • +

      source admin凭证,以获取admin命令行权限:

      +
      source ~/.admin-openrc
      +
    • +
    • +

      确认nova-compute服务已识别到数据库中:

      +
      openstack compute service list --service nova-compute
      +
    • +
    • +

      发现计算节点,将计算节点添加到cell数据库:

      +

      su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
      +The output looks like the following:

      +
      Modules with known eventlet monkey patching issues were imported prior to eventlet monkey patching: urllib3. This warning can usually be    ignored if the caller is only importing and not executing nova code.
      +Found 2 cell mappings.
      +Skipping cell0 since it does not contain hosts.
      +Getting computes from cell 'cell1': 6dae034e-b2d9-4a6c-b6f0-60ada6a6ddc2
      +Checking host mapping for compute host 'compute': 6286a86f-09d7-4786-9137-1185654c9e2e
      +Creating host mapping for compute host 'compute': 6286a86f-09d7-4786-9137-1185654c9e2e
      +Found 1 unmapped computes in cell: 6dae034e-b2d9-4a6c-b6f0-60ada6a6ddc2
      +
    • +
    +
  2. +
  3. +

    验证

    +
      +
    • 列出服务组件,验证每个流程都成功启动和注册:
    • +
    +
    openstack compute service list
    +
      +
    • 列出身份服务中的API端点,验证与身份服务的连接:
    • +
    +
    openstack catalog list
    +
      +
    • 列出镜像服务中的镜像,验证与镜像服务的连接:
    • +
    +
    openstack image list
    +
      +
    • 检查cells是否运作成功,以及其他必要条件是否已具备。
    • +
    +
    nova-status upgrade check
    +
  4. +
+
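If compute nodes will be added regularly, re-running discover_hosts by hand can be avoided. As a sketch, the scheduler can be told to discover new hosts periodically by setting the following option in /etc/nova/nova.conf on the controller (300 seconds is only an example interval):

```ini
[scheduler]
discover_hosts_in_cells_interval = 300
```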

Neutron

+

Neutron是OpenStack的网络服务,提供虚拟交换机、IP路由、DHCP等功能。

+

Controller节点

+
    +
  1. +

    创建数据库、服务凭证和 API 服务端点

    +
      +
    • +

      创建数据库:

      +
      mysql -u root -p
      +
      +MariaDB [(none)]> CREATE DATABASE neutron;
      +MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'NEUTRON_DBPASS';
      +MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'NEUTRON_DBPASS';
      +MariaDB [(none)]> exit;
      +
    • +
    • +

      创建用户和服务,并记住创建neutron用户时输入的密码,用于配置NEUTRON_PASS:

      +
      source ~/.admin-openrc
      +openstack user create --domain default --password-prompt neutron
      +openstack role add --project service --user neutron admin
      +openstack service create --name neutron --description "OpenStack Networking" network
      +
    • +
    • +

      部署 Neutron API 服务:

      +
      openstack endpoint create --region RegionOne network public http://controller:9696
      +openstack endpoint create --region RegionOne network internal http://controller:9696
      +openstack endpoint create --region RegionOne network admin http://controller:9696
      +
    • +
    +
  2. +
  3. +

    安装软件包

    +

    dnf install -y openstack-neutron openstack-neutron-linuxbridge ebtables ipset openstack-neutron-ml2
    Configure Neutron

    +
      +
    • +

      修改/etc/neutron/neutron.conf +

      [database]
      +connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron
      +
      +[DEFAULT]
      +core_plugin = ml2
      +service_plugins = router
      +allow_overlapping_ips = true
      +transport_url = rabbit://openstack:RABBIT_PASS@controller
      +auth_strategy = keystone
      +notify_nova_on_port_status_changes = true
      +notify_nova_on_port_data_changes = true
      +
      +[keystone_authtoken]
      +www_authenticate_uri = http://controller:5000
      +auth_url = http://controller:5000
      +memcached_servers = controller:11211
      +auth_type = password
      +project_domain_name = Default
      +user_domain_name = Default
      +project_name = service
      +username = neutron
      +password = NEUTRON_PASS
      +
      +[nova]
      +auth_url = http://controller:5000
      +auth_type = password
      +project_domain_name = Default
      +user_domain_name = Default
      +region_name = RegionOne
      +project_name = service
      +username = nova
      +password = NOVA_PASS
      +
      +[oslo_concurrency]
      +lock_path = /var/lib/neutron/tmp
      +
      +[experimental]
      +linuxbridge = true

      +
    • +
    • +

      Configure ML2. The ML2 settings can be adjusted to your needs; this guide uses a provider network with the Linux bridge mechanism driver.

      +
    • +
    • +

      修改/etc/neutron/plugins/ml2/ml2_conf.ini +

      [ml2]
      +type_drivers = flat,vlan,vxlan
      +tenant_network_types = vxlan
      +mechanism_drivers = linuxbridge,l2population
      +extension_drivers = port_security
      +
      +[ml2_type_flat]
      +flat_networks = provider
      +
      +[ml2_type_vxlan]
      +vni_ranges = 1:1000
      +
      +[securitygroup]
      +enable_ipset = true

      +
    • +
    • +

      修改/etc/neutron/plugins/ml2/linuxbridge_agent.ini +

      [linux_bridge]
      +physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME
      +
      +[vxlan]
      +enable_vxlan = true
      +local_ip = OVERLAY_INTERFACE_IP_ADDRESS
      +l2_population = true
      +
      +[securitygroup]
      +enable_security_group = true
      +firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

      +
    • +
    • +

      配置Layer-3代理

      +
    • +
    • +

      修改/etc/neutron/l3_agent.ini

      +
      [DEFAULT]
      +interface_driver = linuxbridge
      +

      配置DHCP代理 +修改/etc/neutron/dhcp_agent.ini +

      [DEFAULT]
      +interface_driver = linuxbridge
      +dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
      +enable_isolated_metadata = true

      +
    • +
    • +

      配置metadata代理

      +
    • +
    • +

      修改/etc/neutron/metadata_agent.ini +

      [DEFAULT]
      +nova_metadata_host = controller
      +metadata_proxy_shared_secret = METADATA_SECRET

      +
    • +
    • 配置nova服务使用neutron,修改/etc/nova/nova.conf +
      [neutron]
      +auth_url = http://controller:5000
      +auth_type = password
      +project_domain_name = default
      +user_domain_name = default
      +region_name = RegionOne
      +project_name = service
      +username = neutron
      +password = NEUTRON_PASS
      +service_metadata_proxy = true
      +metadata_proxy_shared_secret = METADATA_SECRET
    • +
    +
  4. +
  5. +

    创建/etc/neutron/plugin.ini的符号链接

    +
    ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
    +
  6. +
  7. +

    同步数据库 +

    su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

    +
  8. +
  9. 重启nova api服务 +
    systemctl restart openstack-nova-api
  10. +
  11. +

    启动网络服务

    +
    systemctl enable neutron-server.service neutron-linuxbridge-agent.service \
    +neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service
    +systemctl start neutron-server.service neutron-linuxbridge-agent.service \
    +neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service
    +
  12. +
+
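Once the services are up, the Neutron server and the agents on the controller can be checked with the admin credentials (a minimal verification sketch):

```shell
source ~/.admin-openrc
# The server should answer API requests and report its loaded extensions
openstack extension list --network
# The DHCP, metadata, L3, and Linux bridge agents on the controller should report as alive
openstack network agent list
```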

Compute节点

+
    +
  1. 安装软件包 +
    dnf install openstack-neutron-linuxbridge ebtables ipset -y
  2. +
  3. +

    配置Neutron

    +
      +
    • +

      修改/etc/neutron/neutron.conf +

      [DEFAULT]
      +transport_url = rabbit://openstack:RABBIT_PASS@controller
      +auth_strategy = keystone
      +
      +[keystone_authtoken]
      +www_authenticate_uri = http://controller:5000
      +auth_url = http://controller:5000
      +memcached_servers = controller:11211
      +auth_type = password
      +project_domain_name = Default
      +user_domain_name = Default
      +project_name = service
      +username = neutron
      +password = NEUTRON_PASS
      +
      +[oslo_concurrency]
      +lock_path = /var/lib/neutron/tmp

      +
    • +
    • +

      修改/etc/neutron/plugins/ml2/linuxbridge_agent.ini +

      [linux_bridge]
      +physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME
      +
      +[vxlan]
      +enable_vxlan = true
      +local_ip = OVERLAY_INTERFACE_IP_ADDRESS
      +l2_population = true
      +
      +[securitygroup]
      +enable_security_group = true
      +firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

      +
    • +
    • +

      配置nova compute服务使用neutron,修改/etc/nova/nova.conf +

      [neutron]
      +auth_url = http://controller:5000
      +auth_type = password
      +project_domain_name = default
      +user_domain_name = default
      +region_name = RegionOne
      +project_name = service
      +username = neutron
      +password = NEUTRON_PASS

      +
    • +
    • 重启nova-compute服务 +
      systemctl restart openstack-nova-compute.service
    • +
    • 启动Neutron linuxbridge agent服务
    • +
    +
    systemctl enable neutron-linuxbridge-agent
    +systemctl start neutron-linuxbridge-agent
    +
  4. +
+
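After the Linux bridge agent starts on the compute node, it registers itself with the Neutron server; a quick check run from the controller:

```shell
source ~/.admin-openrc
# The list should now also contain a Linux bridge agent entry for the compute host
openstack network agent list
```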

Cinder

+

Cinder是OpenStack的存储服务,提供块设备的创建、发放、备份等功能。

+

Controller节点

+
    +
  1. +

    初始化数据库

    +

    CINDER_DBPASS是用户自定义的cinder数据库密码。 +

    mysql -u root -p
    +
    +MariaDB [(none)]> CREATE DATABASE cinder;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'CINDER_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'CINDER_DBPASS';
    +MariaDB [(none)]> exit

    +
  2. +
  3. +

    初始化Keystone资源对象

    +

    source ~/.admin-openrc
    +
    +#创建用户时,命令行会提示输入密码,请输入自定义的密码,下文涉及到`CINDER_PASS`的地方替换成该密码即可。
    +openstack user create --domain default --password-prompt cinder
    +
    +openstack role add --project service --user cinder admin
    +openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
    +
    +openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s
    +openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s
    +openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s
    Install the packages

    +
    dnf install openstack-cinder-api openstack-cinder-scheduler
    +
  4. +
  5. +

    修改cinder配置文件/etc/cinder/cinder.conf

    +
    [DEFAULT]
    +transport_url = rabbit://openstack:RABBIT_PASS@controller
    +auth_strategy = keystone
    +my_ip = 192.168.0.2
    +
    +[database]
    +connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder
    +
    +[keystone_authtoken]
    +www_authenticate_uri = http://controller:5000
    +auth_url = http://controller:5000
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +project_name = service
    +username = cinder
    +password = CINDER_PASS
    +
    +[oslo_concurrency]
    +lock_path = /var/lib/cinder/tmp
    +
  6. +
  7. +

    数据库同步

    +
    su -s /bin/sh -c "cinder-manage db sync" cinder
    +
  8. +
  9. +

    修改nova配置/etc/nova/nova.conf

    +
    [cinder]
    +os_region_name = RegionOne
    +
  10. +
  11. +

    启动服务

    +
    systemctl restart openstack-nova-api
    +systemctl start openstack-cinder-api openstack-cinder-scheduler
    +
  12. +
+

Storage节点

+

Storage节点要提前准备至少一块硬盘,作为cinder的存储后端,下文默认storage节点已经存在一块未使用的硬盘,设备名称为/dev/sdb,用户在配置过程中,请按照真实环境信息进行名称替换。

+

Cinder支持很多类型的后端存储,本指导使用最简单的lvm为参考,如果您想使用如ceph等其他后端,请自行配置。

+
    +
  1. +

    安装软件包

    +
    dnf install lvm2 device-mapper-persistent-data scsi-target-utils rpcbind nfs-utils openstack-cinder-volume openstack-cinder-backup
    +
  2. +
  3. +

    配置lvm卷组

    +
    pvcreate /dev/sdb
    +vgcreate cinder-volumes /dev/sdb
    +
  4. +
  5. +

    修改cinder配置/etc/cinder/cinder.conf

    +
    [DEFAULT]
    +transport_url = rabbit://openstack:RABBIT_PASS@controller
    +auth_strategy = keystone
    +my_ip = 192.168.0.4
    +enabled_backends = lvm
    +glance_api_servers = http://controller:9292
    +
    +[keystone_authtoken]
    +www_authenticate_uri = http://controller:5000
    +auth_url = http://controller:5000
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = default
    +user_domain_name = default
    +project_name = service
    +username = cinder
    +password = CINDER_PASS
    +
    +[database]
    +connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder
    +
    +[lvm]
    +volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
    +volume_group = cinder-volumes
    +target_protocol = iscsi
    +target_helper = lioadm
    +
    +[oslo_concurrency]
    +lock_path = /var/lib/cinder/tmp
    +
  6. +
  7. +

    配置cinder backup (可选)

    +

    cinder-backup是可选的备份服务,cinder同样支持很多种备份后端,本文使用swift存储,如果您想使用如NFS等后端,请自行配置,例如可以参考OpenStack官方文档对NFS的配置说明。

    +

    修改/etc/cinder/cinder.conf,在[DEFAULT]中新增 +

    [DEFAULT]
    +backup_driver = cinder.backup.drivers.swift.SwiftBackupDriver
    +backup_swift_url = SWIFT_URL

    +

    这里的SWIFT_URL是指环境中swift服务的URL,在部署完swift服务后,执行openstack catalog show object-store命令获取。

    +
  8. +
  9. +

    启动服务

    +
    systemctl start openstack-cinder-volume target
    +systemctl start openstack-cinder-backup    # optional, only if cinder-backup was configured
    +
  10. +
+

至此,Cinder服务的部署已全部完成,可以在controller通过以下命令进行简单的验证

+
source ~/.admin-openrc
+openstack volume service list
+openstack volume list
+
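If both cinder-scheduler and cinder-volume report as up, an end-to-end check is to create and delete a small test volume (an illustrative example; the volume name is arbitrary):

```shell
openstack volume create --size 1 test-volume
openstack volume list
openstack volume delete test-volume
```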

Horizon

+

Horizon是OpenStack提供的前端页面,可以让用户通过网页鼠标的操作来控制OpenStack集群,而不用繁琐的CLI命令行。Horizon一般部署在控制节点。

+
    +
  1. +

    安装软件包

    +
    dnf install openstack-dashboard
    +
  2. +
  3. +

    修改配置文件/etc/openstack-dashboard/local_settings

    +
    OPENSTACK_HOST = "controller"
    +ALLOWED_HOSTS = ['*', ]
    +OPENSTACK_KEYSTONE_URL =  "http://controller:5000/v3"
    +SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
    +CACHES = {
    +'default': {
    +    'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
    +    'LOCATION': 'controller:11211',
    +    }
    +}
    +OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
    +OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
    +OPENSTACK_KEYSTONE_DEFAULT_ROLE = "member"
    +WEBROOT = '/dashboard'
    +POLICY_FILES_PATH = "/etc/openstack-dashboard"
    +
    +OPENSTACK_API_VERSIONS = {
    +    "identity": 3,
    +    "image": 2,
    +    "volume": 3,
    +}
    +
  4. +
  5. +

    重启服务

    +
    systemctl restart httpd
    +
  6. +
+

至此,horizon服务的部署已全部完成,打开浏览器,输入http://192.168.0.2/dashboard,打开horizon登录页面。

+
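If a browser is not at hand, the dashboard can also be probed from the command line (a minimal check against the controller IP used above; an HTTP 200 or a redirect to the login page both indicate the service is answering):

```shell
curl -I http://192.168.0.2/dashboard
```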

Ironic

+

Ironic是OpenStack的裸金属服务,如果用户需要进行裸机部署则推荐使用该组件。否则,可以不用安装。

+

在控制节点执行以下操作。

+
    +
  1. +

    设置数据库

    +

    裸金属服务在数据库中存储信息,创建一个ironic用户可以访问的ironic数据库,替换IRONIC_DBPASS为合适的密码

    +
    mysql -u root -p
    +
    +MariaDB [(none)]> CREATE DATABASE ironic CHARACTER SET utf8;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'localhost' \
    +IDENTIFIED BY 'IRONIC_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'%' \
    +IDENTIFIED BY 'IRONIC_DBPASS';
    +MariaDB [(none)]> exit
    +Bye
    +
  2. +
  3. +

    创建服务用户认证

    +
      +
    • +

      创建Bare Metal服务用户

      +

      替换IRONIC_PASS为ironic用户密码,IRONIC_INSPECTOR_PASS为ironic_inspector用户密码。

      +
      openstack user create --password IRONIC_PASS \
      +  --email ironic@example.com ironic
      +openstack role add --project service --user ironic admin
      +openstack service create --name ironic \
      +  --description "Ironic baremetal provisioning service" baremetal
      +
      +openstack service create --name ironic-inspector --description "Ironic inspector baremetal provisioning service" baremetal-introspection
      +openstack user create --password IRONIC_INSPECTOR_PASS --email ironic_inspector@example.com ironic-inspector
      +openstack role add --project service --user ironic-inspector admin
      +
    • +
    • +

      创建Bare Metal服务访问入口

      +
      openstack endpoint create --region RegionOne baremetal admin http://192.168.0.2:6385
      +openstack endpoint create --region RegionOne baremetal public http://192.168.0.2:6385
      +openstack endpoint create --region RegionOne baremetal internal http://192.168.0.2:6385
      +openstack endpoint create --region RegionOne baremetal-introspection internal http://192.168.0.2:5050/v1
      +openstack endpoint create --region RegionOne baremetal-introspection public http://192.168.0.2:5050/v1
      +openstack endpoint create --region RegionOne baremetal-introspection admin http://192.168.0.2:5050/v1
      +
    • +
    +
  4. +
  5. +

    安装组件

    +
    dnf install openstack-ironic-api openstack-ironic-conductor python3-ironicclient
    +
  6. +
  7. +

    配置ironic-api服务

    +

    配置文件路径/etc/ironic/ironic.conf

    +
      +
    • +

      Configure the database location through the connection option, as shown below. Replace IRONIC_DBPASS with the password of the ironic user and DB_IP with the IP address of the database server:

      +
      [database]
      +
      +# The SQLAlchemy connection string used to connect to the
      +# database (string value)
      +# connection = mysql+pymysql://ironic:IRONIC_DBPASS@DB_IP/ironic
      +connection = mysql+pymysql://ironic:IRONIC_DBPASS@controller/ironic
      +
    • +
    • +

      通过以下选项配置ironic-api服务使用RabbitMQ消息代理,替换RPC_*为RabbitMQ的详细地址和凭证

      +
      [DEFAULT]
      +
      +# A URL representing the messaging driver to use and its full
      +# configuration. (string value)
      +# transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
      +transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
      +

      用户也可自行使用json-rpc方式替换rabbitmq

      +
    • +
    • +

      配置ironic-api服务使用身份认证服务的凭证,替换PUBLIC_IDENTITY_IP为身份认证服务器的公共IP,替换PRIVATE_IDENTITY_IP为身份认证服务器的私有IP,替换 IRONIC_PASS为身份认证服务中ironic用户的密码,替换RABBIT_PASS为RabbitMQ中openstack账户的密码。:

      +
      [DEFAULT]
      +
      +# Authentication strategy used by ironic-api: one of
      +# "keystone" or "noauth". "noauth" should not be used in a
      +# production environment because all authentication will be
      +# disabled. (string value)
      +
      +auth_strategy=keystone
      +host = controller
      +memcache_servers = controller:11211
      +enabled_network_interfaces = flat,noop,neutron
      +default_network_interface = noop
      +enabled_hardware_types = ipmi
      +enabled_boot_interfaces = pxe
      +enabled_deploy_interfaces = direct
      +default_deploy_interface = direct
      +enabled_inspect_interfaces = inspector
      +enabled_management_interfaces = ipmitool
      +enabled_power_interfaces = ipmitool
      +enabled_rescue_interfaces = no-rescue,agent
      +isolinux_bin = /usr/share/syslinux/isolinux.bin
      +logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s
      +
      +[keystone_authtoken]
      +# Authentication type to load (string value)
      +auth_type=password
      +# Complete public Identity API endpoint (string value)
      +# www_authenticate_uri=http://PUBLIC_IDENTITY_IP:5000
      +www_authenticate_uri=http://controller:5000
      +# Complete admin Identity API endpoint. (string value)
      +# auth_url=http://PRIVATE_IDENTITY_IP:5000
      +auth_url=http://controller:5000
      +# Service username. (string value)
      +username=ironic
      +# Service account password. (string value)
      +password=IRONIC_PASS
      +# Service tenant name. (string value)
      +project_name=service
      +# Domain name containing project (string value)
      +project_domain_name=Default
      +# User's domain name (string value)
      +user_domain_name=Default
      +
      +[agent]
      +deploy_logs_collect = always
      +deploy_logs_local_path = /var/log/ironic/deploy
      +deploy_logs_storage_backend = local
      +image_download_source = http
      +stream_raw_images = false
      +force_raw_images = false
      +verify_ca = False
      +
      +[oslo_concurrency]
      +
      +[oslo_messaging_notifications]
      +transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
      +topics = notifications
      +driver = messagingv2
      +
      +[oslo_messaging_rabbit]
      +amqp_durable_queues = True
      +rabbit_ha_queues = True
      +
      +[pxe]
      +ipxe_enabled = false
      +pxe_append_params = nofb nomodeset vga=normal coreos.autologin ipa-insecure=1
      +image_cache_size = 204800
      +tftp_root=/var/lib/tftpboot/cephfs/
      +tftp_master_path=/var/lib/tftpboot/cephfs/master_images
      +
      +[dhcp]
      +dhcp_provider = none
      +
    • +
    • +

      创建裸金属服务数据库表

      +
      ironic-dbsync --config-file /etc/ironic/ironic.conf create_schema
      +
    • +
    • +

      重启ironic-api服务

      +
      sudo systemctl restart openstack-ironic-api
      +
    • +
    +
  8. +
  9. +

    配置ironic-conductor服务

    +

    The following is the standard configuration of the ironic-conductor service. ironic-conductor can run on a node separate from ironic-api; in this guide both are deployed on the controller node, so options that repeat earlier settings can be skipped.

    +
      +
    • +

      替换使用conductor服务所在host的IP配置my_ip:

      +
      [DEFAULT]
      +
      +# IP address of this host. If unset, will determine the IP
      +# programmatically. If unable to do so, will use "127.0.0.1".
      +# (string value)
      +# my_ip=HOST_IP
      +my_ip = 192.168.0.2
      +
    • +
    • +

      Configure the database location. ironic-conductor should use the same setting as ironic-api. Replace IRONIC_DBPASS with the password of the ironic user:

      +
      [database]
      +
      +# The SQLAlchemy connection string to use to connect to the
      +# database. (string value)
      +connection = mysql+pymysql://ironic:IRONIC_DBPASS@controller/ironic
      +
    • +
    • +

      Configure the RabbitMQ message broker with the following option. ironic-conductor should use the same setting as ironic-api. Replace RABBIT_PASS with the password of the openstack account in RabbitMQ:

      +
      [DEFAULT]
      +
      +# A URL representing the messaging driver to use and its full
      +# configuration. (string value)
      +transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
      +

      用户也可自行使用json-rpc方式替换rabbitmq

      +
    • +
    • +

      配置凭证访问其他OpenStack服务

      +

      为了与其他OpenStack服务进行通信,裸金属服务在请求其他服务时需要使用服务用户与OpenStack Identity服务进行认证。这些用户的凭据必须在与相应服务相关的每个配置文件中进行配置。

      +
      [neutron] - 访问OpenStack网络服务
      +[glance] - 访问OpenStack镜像服务
      +[swift] - 访问OpenStack对象存储服务
      +[cinder] - 访问OpenStack块存储服务
      +[inspector] - 访问OpenStack裸金属introspection服务
      +[service_catalog] - 一个特殊项用于保存裸金属服务使用的凭证,该凭证用于发现注册在OpenStack身份认证服务目录中的自己的API URL端点
      +

      简单起见,可以对所有服务使用同一个服务用户。为了向后兼容,该用户应该和ironic-api服务的[keystone_authtoken]所配置的为同一个用户。但这不是必须的,也可以为每个服务创建并配置不同的服务用户。

      +

      在下面的示例中,用户访问OpenStack网络服务的身份验证信息配置为:

      +
      网络服务部署在名为RegionOne的身份认证服务域中,仅在服务目录中注册公共端点接口
      +
      +请求时使用特定的CA SSL证书进行HTTPS连接
      +
      +与ironic-api服务配置相同的服务用户
      +
      +动态密码认证插件基于其他选项发现合适的身份认证服务API版本
      +

      替换IRONIC_PASS为ironic用户密码。

      +
      [neutron]
      +
      +# Authentication type to load (string value)
      +auth_type = password
      +# Authentication URL (string value)
      +auth_url=https://IDENTITY_IP:5000/
      +# Username (string value)
      +username=ironic
      +# User's password (string value)
      +password=IRONIC_PASS
      +# Project name to scope to (string value)
      +project_name=service
      +# Domain ID containing project (string value)
      +project_domain_id=default
      +# User's domain id (string value)
      +user_domain_id=default
      +# PEM encoded Certificate Authority to use when verifying
      +# HTTPs connections. (string value)
      +cafile=/opt/stack/data/ca-bundle.pem
      +# The default region_name for endpoint URL discovery. (string
      +# value)
      +region_name = RegionOne
      +# List of interfaces, in order of preference, for endpoint
      +# URL. (list value)
      +valid_interfaces=public
      +
      +# 其他参考配置
      +[glance]
      +endpoint_override = http://controller:9292
      +www_authenticate_uri = http://controller:5000
      +auth_url = http://controller:5000
      +auth_type = password
      +username = ironic
      +password = IRONIC_PASS
      +project_domain_name = default
      +user_domain_name = default
      +region_name = RegionOne
      +project_name = service
      +
      +[service_catalog]  
      +region_name = RegionOne
      +project_domain_id = default
      +user_domain_id = default
      +project_name = service
      +password = IRONIC_PASS
      +username = ironic
      +auth_url = http://controller:5000
      +auth_type = password
      +

      默认情况下,为了与其他服务进行通信,裸金属服务会尝试通过身份认证服务的服务目录发现该服务合适的端点。如果希望对一个特定服务使用一个不同的端点,则在裸金属服务的配置文件中通过endpoint_override选项进行指定:

      +
      [neutron]
      +endpoint_override = <NEUTRON_API_ADDRESS>
      +
    • +
    • +

      配置允许的驱动程序和硬件类型

      +

      通过设置enabled_hardware_types设置ironic-conductor服务允许使用的硬件类型:

      +
      [DEFAULT]
      +enabled_hardware_types = ipmi
      +

      配置硬件接口:

      +
      enabled_boot_interfaces = pxe
      +enabled_deploy_interfaces = direct,iscsi
      +enabled_inspect_interfaces = inspector
      +enabled_management_interfaces = ipmitool
      +enabled_power_interfaces = ipmitool
      +

      配置接口默认值:

      +
      [DEFAULT]
      +default_deploy_interface = direct
      +default_network_interface = neutron
      +

      如果启用了任何使用Direct deploy的驱动,必须安装和配置镜像服务的Swift后端。Ceph对象网关(RADOS网关)也支持作为镜像服务的后端。

      +
    • +
    • +

      重启ironic-conductor服务

      +
      sudo systemctl restart openstack-ironic-conductor
      +
    • +
    +
  10. +
  11. +

    配置ironic-inspector服务

    +
      +
    • +

      安装组件

      +
      dnf install openstack-ironic-inspector
      +
    • +
    • +

      创建数据库

      +
      # mysql -u root -p
      +
      +MariaDB [(none)]> CREATE DATABASE ironic_inspector CHARACTER SET utf8;
      +
      +MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic_inspector.* TO 'ironic_inspector'@'localhost' \
      +IDENTIFIED BY 'IRONIC_INSPECTOR_DBPASS';
      +MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic_inspector.* TO 'ironic_inspector'@'%' \
      +IDENTIFIED BY 'IRONIC_INSPECTOR_DBPASS';
      +MariaDB [(none)]> exit
      +Bye
      +
    • +
    • +

      配置/etc/ironic-inspector/inspector.conf

      +

      Configure the database location through the connection option, as shown below. Replace IRONIC_INSPECTOR_DBPASS with the password of the ironic_inspector user:

      +
      [database]
      +backend = sqlalchemy
      +connection = mysql+pymysql://ironic_inspector:IRONIC_INSPECTOR_DBPASS@controller/ironic_inspector
      +min_pool_size = 100
      +max_pool_size = 500
      +pool_timeout = 30
      +max_retries = 5
      +max_overflow = 200
      +db_retry_interval = 2
      +db_inc_retry_interval = True
      +db_max_retry_interval = 2
      +db_max_retries = 5
      +
    • +
    • +

      配置消息队列通信地址

      +
      [DEFAULT] 
      +transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
      +
    • +
    • +

      设置keystone认证

      +
      [DEFAULT]
      +
      +auth_strategy = keystone
      +timeout = 900
      +rootwrap_config = /etc/ironic-inspector/rootwrap.conf
      +logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s
      +log_dir = /var/log/ironic-inspector
      +state_path = /var/lib/ironic-inspector
      +use_stderr = False
      +
      +[ironic]
      +api_endpoint = http://IRONIC_API_HOST_ADDRESS:6385
      +auth_type = password
      +auth_url = http://PUBLIC_IDENTITY_IP:5000
      +auth_strategy = keystone
      +ironic_url = http://IRONIC_API_HOST_ADDRESS:6385
      +os_region = RegionOne
      +project_name = service
      +project_domain_name = Default
      +user_domain_name = Default
      +username = IRONIC_SERVICE_USER_NAME
      +password = IRONIC_SERVICE_USER_PASSWORD
      +
      +[keystone_authtoken]
      +auth_type = password
      +auth_url = http://controller:5000
      +www_authenticate_uri = http://controller:5000
      +project_domain_name = default
      +user_domain_name = default
      +project_name = service
      +username = ironic_inspector
      +password = IRONICPASSWD
      +region_name = RegionOne
      +memcache_servers = controller:11211
      +token_cache_time = 300
      +
      +[processing]
      +add_ports = active
      +processing_hooks = $default_processing_hooks,local_link_connection,lldp_basic
      +ramdisk_logs_dir = /var/log/ironic-inspector/ramdisk
      +always_store_ramdisk_logs = true
      +store_data = none
      +power_off = false
      +
      +[pxe_filter]
      +driver = iptables
      +
      +[capabilities]
      +boot_mode=True
      +
    • +
    • +

      配置ironic inspector dnsmasq服务

      +
      # 配置文件地址:/etc/ironic-inspector/dnsmasq.conf
      +port=0
      +interface=enp3s0                         #替换为实际监听网络接口
      +dhcp-range=192.168.0.40,192.168.0.50   #替换为实际dhcp地址范围
      +bind-interfaces
      +enable-tftp
      +
      +dhcp-match=set:efi,option:client-arch,7
      +dhcp-match=set:efi,option:client-arch,9
      +dhcp-match=set:aarch64,option:client-arch,11
      +dhcp-boot=tag:aarch64,grubaa64.efi
      +dhcp-boot=tag:!aarch64,tag:efi,grubx64.efi
      +dhcp-boot=tag:!aarch64,tag:!efi,pxelinux.0
      +
      +tftp-root=/tftpboot                       #替换为实际tftpboot目录
      +log-facility=/var/log/dnsmasq.log
      +
    • +
    • +

      关闭ironic provision网络子网的dhcp

      +
      openstack subnet set --no-dhcp 72426e89-f552-4dc4-9ac7-c4e131ce7f3c
      +
    • +
    • +

      初始化ironic-inspector服务的数据库

      +
      ironic-inspector-dbsync --config-file /etc/ironic-inspector/inspector.conf upgrade
      +
    • +
    • +

      启动服务

      +
      systemctl enable --now openstack-ironic-inspector.service
      +systemctl enable --now openstack-ironic-inspector-dnsmasq.service
      +
    • +
    +
  12. +
  13. +

    配置httpd服务

    +
      +
    • +

      创建ironic要使用的httpd的root目录并设置属主属组,目录路径要和/etc/ironic/ironic.conf中[deploy]组中http_root 配置项指定的路径要一致。

      +
      mkdir -p /var/lib/ironic/httproot
      +chown ironic.ironic /var/lib/ironic/httproot
      +
    • +
    • +

      安装和配置httpd服务

      +
        +
      • +

        安装httpd服务,已有请忽略

        +
        dnf install httpd -y
        +
      • +
      • +

        创建/etc/httpd/conf.d/openstack-ironic-httpd.conf文件,内容如下:

        +
        Listen 8080
        +
        +<VirtualHost *:8080>
        +    ServerName ironic.openeuler.com
        +
        +    ErrorLog "/var/log/httpd/openstack-ironic-httpd-error_log"
        +    CustomLog "/var/log/httpd/openstack-ironic-httpd-access_log" "%h %l %u %t \"%r\" %>s %b"
        +
        +    DocumentRoot "/var/lib/ironic/httproot"
        +    <Directory "/var/lib/ironic/httproot">
        +        Options Indexes FollowSymLinks
        +        Require all granted
        +    </Directory>
        +    LogLevel warn
        +    AddDefaultCharset UTF-8
        +    EnableSendfile on
        +</VirtualHost>
        +

        注意监听的端口要和/etc/ironic/ironic.conf里[deploy]选项中http_url配置项中指定的端口一致。

        +
      • +
      • +

        重启httpd服务。

        +
        systemctl restart httpd
        +
      • +
      +
    • +
    +
  14. +
  15. +

    deploy ramdisk镜像下载或制作

    +

    部署一个裸机节点总共需要两组镜像:deploy ramdisk images和user images。Deploy ramdisk images上运行有ironic-python-agent(IPA)服务,Ironic通过它进行裸机节点的环境准备。User images是最终被安装裸机节点上,供用户使用的镜像。

    +

    ramdisk镜像支持通过ironic-python-agent-builder或disk-image-builder工具制作。用户也可以自行选择其他工具制作。若使用原生工具,则需要安装对应的软件包。

    +

    具体的使用方法可以参考官方文档,同时官方也有提供制作好的deploy镜像,可尝试下载。

    +

    下文介绍通过ironic-python-agent-builder构建ironic使用的deploy镜像的完整过程。

    +
      +
    • +

      安装 ironic-python-agent-builder

      +
      dnf install python3-ironic-python-agent-builder
      +
      +# Or install from PyPI instead:
      +pip3 install ironic-python-agent-builder
      +dnf install qemu-img git
      +
    • +
    • +

      制作镜像

      +

      基本用法:

      +
      usage: ironic-python-agent-builder [-h] [-r RELEASE] [-o OUTPUT] [-e ELEMENT] [-b BRANCH]
      +                           [-v] [--lzma] [--extra-args EXTRA_ARGS]
      +                           [--elements-path ELEMENTS_PATH]
      +                           distribution
      +
      +positional arguments:
      +  distribution          Distribution to use
      +
      +options:
      +  -h, --help            show this help message and exit
      +  -r RELEASE, --release RELEASE
      +                        Distribution release to use
      +  -o OUTPUT, --output OUTPUT
      +                        Output base file name
      +  -e ELEMENT, --element ELEMENT
      +                        Additional DIB element to use
      +  -b BRANCH, --branch BRANCH
      +                        If set, override the branch that is used for ironic-python-agent
      +                        and requirements
      +  -v, --verbose         Enable verbose logging in diskimage-builder
      +  --lzma                Use lzma compression for smaller images
      +  --extra-args EXTRA_ARGS
      +                        Extra arguments to pass to diskimage-builder
      +  --elements-path ELEMENTS_PATH
      +                        Path(s) to custom DIB elements separated by a colon
      +

      操作实例:

      +
      # -o选项指定生成的镜像名
      +# ubuntu指定生成ubuntu系统的镜像
      +ironic-python-agent-builder -o my-ubuntu-ipa ubuntu
      +

      可通过设置ARCH环境变量(默认为amd64)指定所构建镜像的架构。如果是arm架构,需要添加:

      +
      export ARCH=aarch64
      +
    • +
    • +

      允许ssh登录

      +

      Initialize the environment variables to set the username and password and enable passwordless sudo, then pass the corresponding DIB elements with -e. Build the image as follows:

      +
      export DIB_DEV_USER_USERNAME=ipa
      +export DIB_DEV_USER_PWDLESS_SUDO=yes
      +export DIB_DEV_USER_PASSWORD='123'
      +ironic-python-agent-builder -o my-ssh-ubuntu-ipa -e selinux-permissive -e devuser ubuntu
      +
    • +
    • +

      指定代码仓库

      +

      初始化对应的环境变量,然后制作镜像:

      +
      # 直接从gerrit上clone代码
      +DIB_REPOLOCATION_ironic_python_agent=https://opendev.org/openstack/ironic-python-agent
      +DIB_REPOREF_ironic_python_agent=stable/2023.1
      +
      +# 指定本地仓库及分支
      +DIB_REPOLOCATION_ironic_python_agent=/home/user/path/to/repo
      +DIB_REPOREF_ironic_python_agent=my-test-branch
      +
      +ironic-python-agent-builder ubuntu
      +

      参考:source-repositories

      +
    • +
    +
  16. +
  17. +

    注意

    +

    The PXE configuration template in upstream OpenStack does not support the arm64 architecture, so the upstream code has to be modified. As of the Wallaby release, the community Ironic still does not support UEFI PXE boot on arm64: the generated grub.cfg (usually under /tftpboot/) is formatted incorrectly, so PXE boot fails.

    +

    An example of the incorrectly generated configuration file:

    (figure "ironic-err": the generated grub.cfg, with the offending commands highlighted)

    As the figure shows, on arm the kernel and ramdisk images are loaded with the linux and initrd commands; the highlighted commands are the ones used for UEFI PXE boot on x86.

    Users need to modify the code that generates grub.cfg themselves.

    +

    TLS errors when Ironic queries IPA for command status:

    In the current release both IPA and Ironic enable TLS by default when sending requests to each other. Disable it as described in the upstream documentation:

    +
      +
    • +

      修改ironic配置文件(/etc/ironic/ironic.conf)下面的配置中添加ipa-insecure=1:

      +
      [agent]
      +verify_ca = False
      +[pxe]
      +pxe_append_params = nofb nomodeset vga=normal coreos.autologin ipa-insecure=1
      +
    • +
    • +

      ramdisk镜像中添加ipa配置文件/etc/ironic_python_agent/ironic_python_agent.conf并配置tls的配置如下:

      +

      /etc/ironic_python_agent/ironic_python_agent.conf (create the /etc/ironic_python_agent directory first)

      +
      [DEFAULT]
      +enable_auto_tls = False
      +

      设置权限:

      +
      chown -R ipa.ipa /etc/ironic_python_agent/
      +
    • +
    • +

      ramdisk镜像中修改ipa服务的服务启动文件,添加配置文件选项

      +

      编辑/usr/lib/systemd/system/ironic-python-agent.service文件

      +
      [Unit]
      +Description=Ironic Python Agent
      +After=network-online.target
      +[Service]
      +ExecStartPre=/sbin/modprobe vfat
      +ExecStart=/usr/local/bin/ironic-python-agent --config-file /etc/ironic_python_agent/ironic_python_agent.conf
      +Restart=always
      +RestartSec=30s
      +[Install]
      +WantedBy=multi-user.target
      +
    • +
    +
  18. +
+
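After ironic-api, ironic-conductor, and ironic-inspector are running, the control plane can be checked with the baremetal CLI provided by python3-ironicclient (a minimal sketch using the admin credentials):

```shell
source ~/.admin-openrc
# The enabled hardware types configured above (ipmi) should be listed
openstack baremetal driver list
# Empty output is expected until nodes are enrolled
openstack baremetal node list
```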

Trove

+

Trove是OpenStack的数据库服务,如果用户使用OpenStack提供的数据库服务则推荐使用该组件。否则,可以不用安装。

+

Controller节点

+
    +
  1. +

    创建数据库。

    +

    数据库服务在数据库中存储信息,创建一个trove用户可以访问的trove数据库,替换TROVE_DBPASS为合适的密码。 +

    CREATE DATABASE trove CHARACTER SET utf8;
    +GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'localhost' IDENTIFIED BY 'TROVE_DBPASS';
    +GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'%' IDENTIFIED BY 'TROVE_DBPASS';

    +
  2. +
  3. +

    创建服务凭证以及API端点。

    +

    创建服务凭证。 +

    # 创建trove用户
    +openstack user create --domain default --password-prompt trove
    +# 添加admin角色
    +openstack role add --project service --user trove admin
    +# 创建database服务
    +openstack service create --name trove --description "Database service" database

    +

    创建API端点。 +

    openstack endpoint create --region RegionOne database public http://controller:8779/v1.0/%\(tenant_id\)s
    +openstack endpoint create --region RegionOne database internal http://controller:8779/v1.0/%\(tenant_id\)s
    +openstack endpoint create --region RegionOne database admin http://controller:8779/v1.0/%\(tenant_id\)s

    +
  4. +
  5. +

    安装Trove。 +

    dnf install openstack-trove python-troveclient

    +
  6. +
  7. +

    修改配置文件。

    +

    编辑/etc/trove/trove.conf。 +

    [DEFAULT]
    +bind_host=192.168.0.2
    +log_dir = /var/log/trove
    +network_driver = trove.network.neutron.NeutronDriver
    +network_label_regex=.*
    +management_security_groups = <manage security group>
    +nova_keypair = trove-mgmt
    +default_datastore = mysql
    +taskmanager_manager = trove.taskmanager.manager.Manager
    +trove_api_workers = 5
    +transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
    +reboot_time_out = 300
    +usage_timeout = 900
    +agent_call_high_timeout = 1200
    +use_syslog = False
    +debug = True
    +
    +[database]
    +connection = mysql+pymysql://trove:TROVE_DBPASS@controller/trove
    +
    +[keystone_authtoken]
    +auth_url = http://controller:5000/v3/
    +auth_type = password
    +project_domain_name = Default
    +project_name = service
    +user_domain_name = Default
    +password = TROVE_PASS
    +username = trove
    +
    +[service_credentials]
    +auth_url = http://controller:5000/v3/
    +region_name = RegionOne
    +project_name = service
    +project_domain_name = Default
    +user_domain_name = Default
    +username = trove
    +password = TROVE_PASS
    +
    +[mariadb]
    +tcp_ports = 3306,4444,4567,4568
    +
    +[mysql]
    +tcp_ports = 3306
    +
    +[postgresql]
    +tcp_ports = 5432

    +

    解释:

    +
    +

    In the [DEFAULT] group, bind_host is set to the IP of the Trove controller node.
    transport_url is the RabbitMQ connection string; replace RABBIT_PASS with the RabbitMQ password.
    The connection option in the [database] group points to the trove database created in MySQL above.
    In the trove user credentials, replace TROVE_PASS with the actual password of the trove user.

    +
    +

    编辑/etc/trove/trove-guestagent.conf。 +

    [DEFAULT]
    +log_file = trove-guestagent.log
    +log_dir = /var/log/trove/
    +ignore_users = os_admin
    +control_exchange = trove
    +transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
    +rpc_backend = rabbit
    +command_process_timeout = 60
    +use_syslog = False
    +debug = True
    +
    +[service_credentials]
    +auth_url = http://controller:5000/v3/
    +region_name = RegionOne
    +project_name = service
    +password = TROVE_PASS
    +project_domain_name = Default
    +user_domain_name = Default
    +username = trove
    +
    +[mysql]
    +docker_image = your-registry/your-repo/mysql
    +backup_docker_image = your-registry/your-repo/db-backup-mysql:1.1.0

    +

    解释:

    +
    +

    The guest agent is a standalone Trove component that must be built into the guest image Trove boots through Nova. Once a database instance is created, the guest agent process reports heartbeats to Trove over the message queue (RabbitMQ), so the RabbitMQ credentials must be configured here.
    transport_url is the RabbitMQ connection string; replace RABBIT_PASS with the RabbitMQ password.
    In the trove user credentials, replace TROVE_PASS with the actual password of the trove user.
    Since the Victoria release, Trove uses a single unified image for all database types; the database service runs in a Docker container inside the guest VM.

    +
    +
  8. +
  9. +

    数据库同步。 +

    su -s /bin/sh -c "trove-manage db_sync" trove

    +
  10. +
  11. +

    完成安装。 +

    # Enable the services at boot
    +systemctl enable openstack-trove-api.service openstack-trove-taskmanager.service \
    +openstack-trove-conductor.service
    +
    +# Start the services
    +systemctl start openstack-trove-api.service openstack-trove-taskmanager.service \
    +openstack-trove-conductor.service

    +
  12. +
+
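With the services running, the Database service API can be exercised through python-troveclient (a minimal check; datastores still have to be registered before instances can be created):

```shell
source ~/.admin-openrc
openstack datastore list
openstack database instance list
```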

Swift

+

Swift 提供了弹性可伸缩、高可用的分布式对象存储服务,适合存储大规模非结构化数据。

+

Controller节点

+
    +
  1. +

    创建服务凭证以及API端点。

    +

    创建服务凭证。 +

    # 创建swift用户
    +openstack user create --domain default --password-prompt swift
    +# 添加admin角色
    +openstack role add --project service --user swift admin
    +# 创建对象存储服务
    +openstack service create --name swift --description "OpenStack Object Storage" object-store

    +

    创建API端点。 +

    openstack endpoint create --region RegionOne object-store public http://controller:8080/v1/AUTH_%\(project_id\)s
    +openstack endpoint create --region RegionOne object-store internal http://controller:8080/v1/AUTH_%\(project_id\)s
    +openstack endpoint create --region RegionOne object-store admin http://controller:8080/v1 

    +
  2. +
  3. +

    安装Swift。 +

    dnf install openstack-swift-proxy python3-swiftclient python3-keystoneclient \ 
    +python3-keystonemiddleware memcached

    +
  4. +
  5. +

    配置proxy-server。

    +

    Swift RPM包里已经包含了一个基本可用的proxy-server.conf,只需要手动修改其中的ip和SWIFT_PASS即可。 +

    vim /etc/swift/proxy-server.conf
    +
    +[filter:authtoken]
    +paste.filter_factory = keystonemiddleware.auth_token:filter_factory
    +www_authenticate_uri = http://controller:5000
    +auth_url = http://controller:5000
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_id = default
    +user_domain_id = default
    +project_name = service
    +username = swift
    +password = SWIFT_PASS
    +delay_auth_decision = True
    +service_token_roles_required = True

    +
  6. +
+

Storage节点

+
    +
  1. +

    安装支持的程序包。 +

    dnf install openstack-swift-account openstack-swift-container openstack-swift-object
    +dnf install xfsprogs rsync

    +
  2. +
  3. +

    将设备/dev/sdb和/dev/sdc格式化为XFS。 +

    mkfs.xfs /dev/sdb
    +mkfs.xfs /dev/sdc

    +
  4. +
  5. +

    创建挂载点目录结构。 +

    mkdir -p /srv/node/sdb
    +mkdir -p /srv/node/sdc

    +
  6. +
  7. +

    找到新分区的UUID。 +

    blkid

    +
  8. +
  9. +

    编辑/etc/fstab文件并将以下内容添加到其中。 +

    UUID="<UUID-from-output-above>" /srv/node/sdb xfs noatime 0 2
    +UUID="<UUID-from-output-above>" /srv/node/sdc xfs noatime 0 2

    +
  10. +
  11. +

    挂载设备。 +

    mount /srv/node/sdb
    +mount /srv/node/sdc

    +

    注意

    +

    如果用户不需要容灾功能,以上步骤只需要创建一个设备即可,同时可以跳过下面的rsync配置。

    +
  12. +
  13. +

    (可选)创建或编辑/etc/rsyncd.conf文件以包含以下内容: +

    uid = swift
    +gid = swift
    +log file = /var/log/rsyncd.log
    +pid file = /var/run/rsyncd.pid
    +address = MANAGEMENT_INTERFACE_IP_ADDRESS
    +
    +[account]
    +max connections = 2
    +path = /srv/node/
    +read only = False
    +lock file = /var/lock/account.lock
    +
    +[container]
    +max connections = 2
    +path = /srv/node/
    +read only = False
    +lock file = /var/lock/container.lock
    +
    +[object]
    +max connections = 2
    +path = /srv/node/
    +read only = False
    +lock file = /var/lock/object.lock

    +

    替换MANAGEMENT_INTERFACE_IP_ADDRESS为存储节点上管理网络的IP地址

    +

    启动rsyncd服务并配置它在系统启动时启动: +

    systemctl enable rsyncd.service
    +systemctl start rsyncd.service

    +
  14. +
  15. +

    配置存储节点。

    +

    编辑/etc/swift目录的account-server.conf、container-server.conf和object-server.conf文件,替换bind_ip为存储节点上管理网络的IP地址。 +

    [DEFAULT]
    +bind_ip = 192.168.0.4

    +

    确保挂载点目录结构的正确所有权。 +

    chown -R swift:swift /srv/node

    +

    创建recon目录并确保其拥有正确的所有权。 +

    mkdir -p /var/cache/swift
    +chown -R root:swift /var/cache/swift
    +chmod -R 775 /var/cache/swift

    +
  16. +
+

Controller节点创建并分发环

+
    +
  1. +

    创建账号环。

    +

    切换到/etc/swift目录。 +

    cd /etc/swift

    +

    创建基础account.builder文件。 +

    swift-ring-builder account.builder create 10 1 1

    +

    将每个存储节点添加到环中。 +

    swift-ring-builder account.builder add --region 1 --zone 1 \
    +--ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS \
    +--port 6202 --device DEVICE_NAME \
    +--weight 100

    +
    +

    替换STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS为存储节点上管理网络的IP地址。\ +替换DEVICE_NAME为同一存储节点上的存储设备名称。

    +
    +

    注意

    +

    对每个存储节点上的每个存储设备重复此命令

    +

    验证账号环内容。 +

    swift-ring-builder account.builder

    +

    重新平衡账号环。 +

    swift-ring-builder account.builder rebalance

    +
  2. +
  3. +

    创建容器环。

    +

    切换到/etc/swift目录。

    +

    创建基础container.builder文件。 +

    swift-ring-builder container.builder create 10 1 1

    +

    将每个存储节点添加到环中。 +

    swift-ring-builder container.builder add --region 1 --zone 1 \
    +--ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS \
    +--port 6201 --device DEVICE_NAME \
    +--weight 100

    +
    +

    替换STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS为存储节点上管理网络的IP地址。\ +替换DEVICE_NAME为同一存储节点上的存储设备名称。

    +
    +

    注意

    +

    对每个存储节点上的每个存储设备重复此命令

    +

    验证容器环内容。 +

    swift-ring-builder container.builder

    +

    重新平衡容器环。 +

    swift-ring-builder container.builder rebalance

    +
  4. +
  5. +

    创建对象环。

    +

    切换到/etc/swift目录。

    +

    创建基础object.builder文件。 +

    swift-ring-builder object.builder create 10 1 1

    +

    将每个存储节点添加到环中。 +

     swift-ring-builder object.builder add --region 1 --zone 1 \
    + --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS \
    + --port 6200 --device DEVICE_NAME \
    + --weight 100

    +
    +

    替换STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS为存储节点上管理网络的IP地址。\ +替换DEVICE_NAME为同一存储节点上的存储设备名称。

    +
    +

    注意

    +

    对每个存储节点上的每个存储设备重复此命令

    +

    验证对象环内容。 +

    swift-ring-builder object.builder

    +

    重新平衡对象环。 +

    swift-ring-builder object.builder rebalance

    +
  6. +
  7. +

    分发环配置文件。

    +

    Copy the account.ring.gz, container.ring.gz, and object.ring.gz files to the /etc/swift directory on every storage node and on any other node running the proxy service.

    +
  8. +
  9. +

    编辑配置文件/etc/swift/swift.conf。 +

    [swift-hash]
    +swift_hash_path_suffix = test-hash
    +swift_hash_path_prefix = test-hash
    +
    +[storage-policy:0]
    +name = Policy-0
    +default = yes

    +

    用唯一值替换 test-hash

    +

    Copy the swift.conf file to the /etc/swift directory on every storage node and on any other node running the proxy service.

    +

    在所有节点上,确保配置目录的正确所有权。 +

    chown -R root:swift /etc/swift

    +
  10. +
  11. +

    完成安装

    +
  12. +
+

在控制节点和运行代理服务的任何其他节点上,启动对象存储代理服务及其依赖项,并将它们配置为在系统启动时启动。 +

systemctl enable openstack-swift-proxy.service memcached.service
+systemctl start openstack-swift-proxy.service memcached.service

+

在存储节点上,启动对象存储服务并将它们配置为在系统启动时启动。 +

systemctl enable openstack-swift-account.service \
+openstack-swift-account-auditor.service \
+openstack-swift-account-reaper.service \
+openstack-swift-account-replicator.service \
+openstack-swift-container.service \
+openstack-swift-container-auditor.service \
+openstack-swift-container-replicator.service \
+openstack-swift-container-updater.service \
+openstack-swift-object.service \
+openstack-swift-object-auditor.service \
+openstack-swift-object-replicator.service \
+openstack-swift-object-updater.service
+
+systemctl start openstack-swift-account.service \
+openstack-swift-account-auditor.service \
+openstack-swift-account-reaper.service \
+openstack-swift-account-replicator.service \
+openstack-swift-container.service \
+openstack-swift-container-auditor.service \
+openstack-swift-container-replicator.service \
+openstack-swift-container-updater.service \
+openstack-swift-object.service \
+openstack-swift-object-auditor.service \
+openstack-swift-object-replicator.service \
+openstack-swift-object-updater.service

+
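Once the proxy and storage services are running and the rings have been distributed, the object store can be verified end to end (an illustrative example; the container and file names are arbitrary):

```shell
source ~/.admin-openrc
# Show account statistics through the proxy
swift stat
# Upload and list a test object
openstack container create test-container
openstack object create test-container /etc/hosts
openstack object list test-container
```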

Cyborg

+

Cyborg为OpenStack提供加速器设备的支持,包括 GPU, FPGA, ASIC, NP, SoCs, NVMe/NOF SSDs, ODP, DPDK/SPDK等等。

+

Controller节点

+
    +
  1. +

    初始化对应数据库

    +
    mysql -u root -p
    +
    +MariaDB [(none)]> CREATE DATABASE cyborg;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'localhost' IDENTIFIED BY 'CYBORG_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'%' IDENTIFIED BY 'CYBORG_DBPASS';
    +MariaDB [(none)]> exit;
    +
  2. +
  3. +

    Create the user and service, and remember the password entered when creating the cyborg user; it is used as CYBORG_PASS below.

    +
    source ~/.admin-openrc
    +openstack user create --domain default --password-prompt cyborg
    +openstack role add --project service --user cyborg admin
    +openstack service create --name cyborg --description "Acceleration Service" accelerator
    +
  4. +
  5. +

    使用uwsgi部署Cyborg api服务

    +
    openstack endpoint create --region RegionOne accelerator public http://controller/accelerator/v2
    +openstack endpoint create --region RegionOne accelerator internal http://controller/accelerator/v2
    +openstack endpoint create --region RegionOne accelerator admin http://controller/accelerator/v2
    +
  6. +
  7. +

    安装Cyborg

    +
    dnf install openstack-cyborg
    +
  8. +
  9. +

    配置Cyborg

    +

    修改/etc/cyborg/cyborg.conf

    +
    [DEFAULT]
    +transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
    +use_syslog = False
    +state_path = /var/lib/cyborg
    +debug = True
    +
    +[api]
    +host_ip = 0.0.0.0
    +
    +[database]
    +connection = mysql+pymysql://cyborg:CYBORG_DBPASS@controller/cyborg
    +
    +[service_catalog]
    +cafile = /opt/stack/data/ca-bundle.pem
    +project_domain_id = default
    +user_domain_id = default
    +project_name = service
    +password = CYBORG_PASS
    +username = cyborg
    +auth_url = http://controller:5000/v3/
    +auth_type = password
    +
    +[placement]
    +project_domain_name = Default
    +project_name = service
    +user_domain_name = Default
    +password = PLACEMENT_PASS
    +username = placement
    +auth_url = http://controller:5000/v3/
    +auth_type = password
    +auth_section = keystone_authtoken
    +
    +[nova]
    +project_domain_name = Default
    +project_name = service
    +user_domain_name = Default
    +password = NOVA_PASS
    +username = nova
    +auth_url = http://controller:5000/v3/
    +auth_type = password
    +auth_section = keystone_authtoken
    +
    +[keystone_authtoken]
    +memcached_servers = localhost:11211
    +signing_dir = /var/cache/cyborg/api
    +cafile = /opt/stack/data/ca-bundle.pem
    +project_domain_name = Default
    +project_name = service
    +user_domain_name = Default
    +password = CYBORG_PASS
    +username = cyborg
    +auth_url = http://controller:5000/v3/
    +auth_type = password
    +
  10. +
  11. +

    同步数据库表格

    +
    cyborg-dbsync --config-file /etc/cyborg/cyborg.conf upgrade
    +
  12. +
  13. +

    启动Cyborg服务

    +
    systemctl enable openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent
    +systemctl start openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent
    +
  14. +
+

Aodh

+

Aodh可以根据由Ceilometer或者Gnocchi收集的监控数据创建告警,并设置触发规则。

+

Controller节点

+
    +
  1. +

    创建数据库。

    +
    CREATE DATABASE aodh;
    +GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'localhost' IDENTIFIED BY 'AODH_DBPASS';
    +GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'%' IDENTIFIED BY 'AODH_DBPASS';
    +
  2. +
  3. +

    创建服务凭证以及API端点。

    +

    创建服务凭证。 +

    openstack user create --domain default --password-prompt aodh
    +openstack role add --project service --user aodh admin
    +openstack service create --name aodh --description "Telemetry" alarming

    +

    创建API端点。 +

    openstack endpoint create --region RegionOne alarming public http://controller:8042
    +openstack endpoint create --region RegionOne alarming internal http://controller:8042
    +openstack endpoint create --region RegionOne alarming admin http://controller:8042

    +
  4. +
  5. +

    安装Aodh。 +

    dnf install openstack-aodh-api openstack-aodh-evaluator \
    +openstack-aodh-notifier openstack-aodh-listener \
    +openstack-aodh-expirer python3-aodhclient

    +
  6. +
  7. +

    修改配置文件。 +

    vim /etc/aodh/aodh.conf
    +
    +[database]
    +connection = mysql+pymysql://aodh:AODH_DBPASS@controller/aodh
    +
    +[DEFAULT]
    +transport_url = rabbit://openstack:RABBIT_PASS@controller
    +auth_strategy = keystone
    +
    +[keystone_authtoken]
    +www_authenticate_uri = http://controller:5000
    +auth_url = http://controller:5000
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_id = default
    +user_domain_id = default
    +project_name = service
    +username = aodh
    +password = AODH_PASS
    +
    +[service_credentials]
    +auth_type = password
    +auth_url = http://controller:5000/v3
    +project_domain_id = default
    +user_domain_id = default
    +project_name = service
    +username = aodh
    +password = AODH_PASS
    +interface = internalURL
    +region_name = RegionOne

    +
  8. +
  9. +

    同步数据库。 +

    aodh-dbsync

    +
  10. +
  11. +

    完成安装。 +

    # 配置服务自启
    +systemctl enable openstack-aodh-api.service openstack-aodh-evaluator.service \
    +openstack-aodh-notifier.service openstack-aodh-listener.service
    +
    +# 启动服务
    +systemctl start openstack-aodh-api.service openstack-aodh-evaluator.service \
    +openstack-aodh-notifier.service openstack-aodh-listener.service

    +
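    An optional smoke test, assuming ~/.admin-openrc from the Keystone setup and the python3-aodhclient package installed above; a fresh deployment should return an empty alarm list rather than an error:

    source ~/.admin-openrc
    +openstack alarm list
    +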
  12. +
+

Gnocchi

+

Gnocchi是一个开源的时间序列数据库,可以对接Ceilometer。

+

Controller节点

+
    +
  1. +

    创建数据库。 +

    CREATE DATABASE gnocchi;
    +GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'localhost' IDENTIFIED BY 'GNOCCHI_DBPASS';
    +GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'%' IDENTIFIED BY 'GNOCCHI_DBPASS';

    +
  2. +
  3. +

    创建服务凭证以及API端点。

    +

    创建服务凭证。 +

    openstack user create --domain default --password-prompt gnocchi
    +openstack role add --project service --user gnocchi admin
    +openstack service create --name gnocchi --description "Metric Service" metric

    +

    创建API端点。 +

    openstack endpoint create --region RegionOne metric public http://controller:8041
    +openstack endpoint create --region RegionOne metric internal http://controller:8041
    +openstack endpoint create --region RegionOne metric admin http://controller:8041

    +
  4. +
  5. +

    安装Gnocchi。 +

    dnf install openstack-gnocchi-api openstack-gnocchi-metricd python3-gnocchiclient

    +
  6. +
  7. +

    修改配置文件。 +

    vim /etc/gnocchi/gnocchi.conf
    +[api]
    +auth_mode = keystone
    +port = 8041
    +uwsgi_mode = http-socket
    +
    +[keystone_authtoken]
    +auth_type = password
    +auth_url = http://controller:5000/v3
    +project_domain_name = Default
    +user_domain_name = Default
    +project_name = service
    +username = gnocchi
    +password = GNOCCHI_PASS
    +interface = internalURL
    +region_name = RegionOne
    +
    +[indexer]
    +url = mysql+pymysql://gnocchi:GNOCCHI_DBPASS@controller/gnocchi
    +
    +[storage]
    +# coordination_url is not required but specifying one will improve
    +# performance with better workload division across workers.
    +# coordination_url = redis://controller:6379
    +file_basepath = /var/lib/gnocchi
    +driver = file

    +
  8. +
  9. +

    同步数据库。 +

    gnocchi-upgrade

    +
  10. +
  11. +

    完成安装。 +

    # 配置服务自启
    +systemctl enable openstack-gnocchi-api.service openstack-gnocchi-metricd.service
    +
    +# 启动服务
    +systemctl start openstack-gnocchi-api.service openstack-gnocchi-metricd.service

    +
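    As an optional smoke test (assuming ~/.admin-openrc is sourced and the OpenStackClient plugin shipped with python3-gnocchiclient above), the metric API should answer without errors:

    source ~/.admin-openrc
    +openstack metric resource list
    +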
  12. +
+

Ceilometer

+

Ceilometer是OpenStack中负责数据收集的服务。

+

Controller节点

+
    +
  1. +

    创建服务凭证。 +

    openstack user create --domain default --password-prompt ceilometer
    +openstack role add --project service --user ceilometer admin
    +openstack service create --name ceilometer --description "Telemetry" metering

    +
  2. +
  3. +

    安装Ceilometer软件包。 +

    dnf install openstack-ceilometer-notification openstack-ceilometer-central

    +
  4. +
  5. +

    编辑配置文件/etc/ceilometer/pipeline.yaml。 +

    publishers:
    +    # set address of Gnocchi
    +    # + filter out Gnocchi-related activity meters (Swift driver)
    +    # + set default archive policy
    +    - gnocchi://?filter_project=service&archive_policy=low

    +
  6. +
  7. +

    编辑配置文件/etc/ceilometer/ceilometer.conf。 +

    [DEFAULT]
    +transport_url = rabbit://openstack:RABBIT_PASS@controller
    +
    +[service_credentials]
    +auth_type = password
    +auth_url = http://controller:5000/v3
    +project_domain_id = default
    +user_domain_id = default
    +project_name = service
    +username = ceilometer
    +password = CEILOMETER_PASS
    +interface = internalURL
    +region_name = RegionOne

    +
  8. +
  9. +

    数据库同步。 +

    ceilometer-upgrade

    +
  10. +
  11. +

    完成控制节点Ceilometer安装。 +

    # 配置服务自启
    +systemctl enable openstack-ceilometer-notification.service openstack-ceilometer-central.service
    +# 启动服务
    +systemctl start openstack-ceilometer-notification.service openstack-ceilometer-central.service

    +
  12. +
+

Compute节点

+
    +
  1. +

    安装Ceilometer软件包。 +

    dnf install openstack-ceilometer-compute
    +dnf install openstack-ceilometer-ipmi       # 可选

    +
  2. +
  3. +

    编辑配置文件/etc/ceilometer/ceilometer.conf。 +

    [DEFAULT]
    +transport_url = rabbit://openstack:RABBIT_PASS@controller
    +
    +[service_credentials]
    +auth_url = http://controller:5000
    +project_domain_id = default
    +user_domain_id = default
    +auth_type = password
    +username = ceilometer
    +project_name = service
    +password = CEILOMETER_PASS
    +interface = internalURL
    +region_name = RegionOne

    +
  4. +
  5. +

    编辑配置文件/etc/nova/nova.conf。 +

    [DEFAULT]
    +instance_usage_audit = True
    +instance_usage_audit_period = hour
    +
    +[notifications]
    +notify_on_state_change = vm_and_task_state
    +
    +[oslo_messaging_notifications]
    +driver = messagingv2

    +
  6. +
  7. +

    完成安装。 +

    systemctl enable openstack-ceilometer-compute.service
    +systemctl start openstack-ceilometer-compute.service
    +systemctl enable openstack-ceilometer-ipmi.service         # 可选
    +systemctl start openstack-ceilometer-ipmi.service          # 可选
    +
    +# 重启nova-compute服务
    +systemctl restart openstack-nova-compute.service

    +
  8. +
+

Heat

+

Heat是 OpenStack 自动编排服务,基于描述性的模板来编排复合云应用,也称为Orchestration Service。Heat 的各服务一般安装在Controller节点上。

+

Controller节点

+
    +
  1. +

    创建heat数据库,并授予heat数据库正确的访问权限,替换HEAT_DBPASS为合适的密码

    +
    mysql -u root -p
    +
    +MariaDB [(none)]> CREATE DATABASE heat;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'localhost' IDENTIFIED BY 'HEAT_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'%' IDENTIFIED BY 'HEAT_DBPASS';
    +MariaDB [(none)]> exit;
    +
  2. +
  3. +

    创建服务凭证,创建heat用户,并为其增加admin角色

    +
    source ~/.admin-openrc
    +
    +openstack user create --domain default --password-prompt heat
    +openstack role add --project service --user heat admin
    +
  4. +
  5. +

    创建heat和heat-cfn服务及其对应的API端点

    +
    openstack service create --name heat --description "Orchestration" orchestration
    +openstack service create --name heat-cfn --description "Orchestration"  cloudformation
    +openstack endpoint create --region RegionOne orchestration public http://controller:8004/v1/%\(tenant_id\)s
    +openstack endpoint create --region RegionOne orchestration internal http://controller:8004/v1/%\(tenant_id\)s
    +openstack endpoint create --region RegionOne orchestration admin http://controller:8004/v1/%\(tenant_id\)s
    +openstack endpoint create --region RegionOne cloudformation public http://controller:8000/v1
    +openstack endpoint create --region RegionOne cloudformation internal http://controller:8000/v1
    +openstack endpoint create --region RegionOne cloudformation admin http://controller:8000/v1
    +
  6. +
  7. +

    创建stack管理的额外信息

    +

    创建 heat domain

    openstack domain create --description "Stack projects and users" heat

    在 heat domain 下创建 heat_domain_admin 用户,并记下输入的密码,用于配置下面的 HEAT_DOMAIN_PASS

    openstack user create --domain heat --password-prompt heat_domain_admin

    为 heat_domain_admin 用户增加 admin 角色

    openstack role add --domain heat --user-domain heat --user heat_domain_admin admin

    创建 heat_stack_owner 角色

    openstack role create heat_stack_owner

    创建 heat_stack_user 角色

    openstack role create heat_stack_user

    +
  8. +
  9. +

    安装软件包

    +
    dnf install openstack-heat-api openstack-heat-api-cfn openstack-heat-engine
    +
  10. +
  11. +

    修改配置文件/etc/heat/heat.conf

    +
    [DEFAULT]
    +transport_url = rabbit://openstack:RABBIT_PASS@controller
    +heat_metadata_server_url = http://controller:8000
    +heat_waitcondition_server_url = http://controller:8000/v1/waitcondition
    +stack_domain_admin = heat_domain_admin
    +stack_domain_admin_password = HEAT_DOMAIN_PASS
    +stack_user_domain_name = heat
    +
    +[database]
    +connection = mysql+pymysql://heat:HEAT_DBPASS@controller/heat
    +
    +[keystone_authtoken]
    +www_authenticate_uri = http://controller:5000
    +auth_url = http://controller:5000
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = default
    +user_domain_name = default
    +project_name = service
    +username = heat
    +password = HEAT_PASS
    +
    +[trustee]
    +auth_type = password
    +auth_url = http://controller:5000
    +username = heat
    +password = HEAT_PASS
    +user_domain_name = default
    +
    +[clients_keystone]
    +auth_uri = http://controller:5000
    +
  12. +
  13. +

    初始化heat数据库表

    +
    su -s /bin/sh -c "heat-manage db_sync" heat
    +
  14. +
  15. +

    启动服务

    +
    systemctl enable openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service
    +systemctl start openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service
    +
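    An optional check, assuming the python3-heatclient package is available (install it with dnf if it is not); the heat-engine workers should be reported as up:

    source ~/.admin-openrc
    +openstack orchestration service list
    +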
  16. +
+

Tempest

+

Tempest是OpenStack的集成测试服务,如果用户需要全面自动化测试已安装的OpenStack环境的功能,则推荐使用该组件。否则,可以不用安装。

+

Controller节点

+
    +
  1. +

    安装Tempest

    +
    dnf install openstack-tempest
    +
  2. +
  3. +

    初始化目录

    +
    tempest init mytest
    +
  4. +
  5. +

    修改配置文件。

    +
    cd mytest
    +vi etc/tempest.conf
    +

    tempest.conf中需要配置当前OpenStack环境的信息,具体内容可以参考官方示例

    +
  6. +
  7. +

    执行测试

    +
    tempest run
    +
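    A full tempest run can take a long time. To validate the environment on a subset first, tempest supports the smoke tag and regular-expression filters, for example:

    tempest run --smoke
    +tempest run --regex tempest.api.identity
    +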
  8. +
  9. +

    安装tempest扩展(可选)

    OpenStack各个服务本身也提供了一些tempest测试包,用户可以安装这些包来丰富tempest的测试内容。在Antelope中,我们提供了Cinder、Glance、Keystone、Ironic、Trove的扩展测试,用户可以执行如下命令进行安装使用:

    dnf install python3-cinder-tempest-plugin python3-glance-tempest-plugin python3-ironic-tempest-plugin python3-keystone-tempest-plugin python3-trove-tempest-plugin

    +
  10. +
+

基于OpenStack SIG开发工具oos部署

+

oos(openEuler OpenStack SIG)是OpenStack SIG提供的命令行工具。其中oos env系列命令提供了一键部署OpenStack (all in one或三节点cluster)的ansible脚本,用户可以使用该脚本快速部署一套基于 openEuler RPM 的 OpenStack 环境。oos工具支持对接云provider(目前仅支持华为云provider)和主机纳管两种方式来部署 OpenStack 环境,下面以对接华为云部署一套all in one的OpenStack环境为例说明oos工具的使用方法。

+
    +
  1. +

    安装oos工具

    +

    oos工具在不断演进,兼容性、可用性不能时刻保证,建议使用已验证的版本,这里选择1.3.1 +

    pip install openstack-sig-tool==1.3.1

    +
  2. +
  3. +

    配置对接华为云provider的信息

    +

    打开/usr/local/etc/oos/oos.conf文件,修改配置为您拥有的华为云资源信息,AK/SK是用户的华为云登录密钥,其他配置保持默认即可(默认使用新加坡region),需要提前在云上创建对应的资源,包括:

    +
      +
    • 一个安全组,名字默认是oos
    • +
    • 一个openEuler镜像,名称格式是openEuler-%(release)s-%(arch)s,例如openEuler-24.03-arm64
    • +
    • 一个VPC,名称是oos_vpc
    • +
    • 该VPC下面两个子网,名称分别是oos_subnet1和oos_subnet2
    • +
    +
    [huaweicloud]
    +ak = 
    +sk = 
    +region = ap-southeast-3
    +root_volume_size = 100
    +data_volume_size = 100
    +security_group_name = oos
    +image_format = openEuler-%%(release)s-%%(arch)s
    +vpc_name = oos_vpc
    +subnet1_name = oos_subnet1
    +subnet2_name = oos_subnet2
    +
  4. +
  5. +

    配置 OpenStack 环境信息

    +

    打开/usr/local/etc/oos/oos.conf文件,根据当前机器环境和需求修改配置。内容如下:

    +
    [environment]
    +mysql_root_password = root
    +mysql_project_password = root
    +rabbitmq_password = root
    +project_identity_password = root
    +enabled_service = keystone,neutron,cinder,placement,nova,glance,horizon,aodh,ceilometer,cyborg,gnocchi,kolla,heat,swift,trove,tempest
    +neutron_provider_interface_name = br-ex
    +default_ext_subnet_range = 10.100.100.0/24
    +default_ext_subnet_gateway = 10.100.100.1
    +neutron_dataplane_interface_name = eth1
    +cinder_block_device = vdb
    +swift_storage_devices = vdc
    +swift_hash_path_suffix = ash
    +swift_hash_path_prefix = has
    +glance_api_workers = 2
    +cinder_api_workers = 2
    +nova_api_workers = 2
    +nova_metadata_api_workers = 2
    +nova_conductor_workers = 2
    +nova_scheduler_workers = 2
    +neutron_api_workers = 2
    +horizon_allowed_host = *
    +kolla_openeuler_plugin = false
    +

    关键配置

    | 配置项 | 解释 |
    | --- | --- |
    | enabled_service | 安装服务列表,根据用户需求自行删减 |
    | neutron_provider_interface_name | neutron L3网桥名称 |
    | default_ext_subnet_range | neutron私网IP段 |
    | default_ext_subnet_gateway | neutron私网gateway |
    | neutron_dataplane_interface_name | neutron使用的网卡,推荐使用一张新的网卡,以免和现有网卡冲突,防止all in one主机断连的情况 |
    | cinder_block_device | cinder使用的卷设备名 |
    | swift_storage_devices | swift使用的卷设备名 |
    | kolla_openeuler_plugin | 是否启用kolla plugin。设置为True,kolla将支持部署openEuler容器(只在openEuler LTS上支持) |
    +
  6. +
  7. +

    华为云上面创建一台openEuler 24.03 LTS的x86_64虚拟机,用于部署all in one 的 OpenStack

    +
    # sshpass在`oos env create`过程中被使用,用于配置对目标虚拟机的免密访问
    +dnf install sshpass
    +oos env create -r 24.03-lts -f small -a x86 -n test-oos all_in_one
    +

    具体的参数可以使用oos env create --help命令查看

    +
  8. +
  9. +

    部署OpenStack all in one 环境

    +
    oos env setup test-oos -r antelope
    +

    具体的参数可以使用oos env setup --help命令查看

    +
  10. +
  11. +

    初始化tempest环境

    +

    如果用户想使用该环境运行tempest测试的话,可以执行命令oos env init,会自动把tempest需要的OpenStack资源自动创建好

    +
    oos env init test-oos
    +
  12. +
  13. +

    执行tempest测试

    +

    用户可以使用oos自动执行:

    +
    oos env test test-oos
    +

    也可以手动登录目标节点,进入根目录下的mytest目录,手动执行tempest run

    +
  14. +
+

如果是以主机纳管的方式部署 OpenStack 环境,总体逻辑与上文对接华为云时一致,1、3、5、6步操作不变,跳过第2步对华为云provider信息的配置,在第4步改为纳管主机操作。

+

被纳管的虚机需要保证:

+
    +
  • 至少有一张给oos使用的网卡,名称与配置保持一致,相关配置neutron_dataplane_interface_name
  • +
  • 至少有一块给oos使用的硬盘,名称与配置保持一致,相关配置cinder_block_device
  • +
  • 如果要部署swift服务,则需要新增一块硬盘,名称与配置保持一致,相关配置swift_storage_devices
  • +
+
# sshpass在`oos env create`过程中被使用,用于配置对目标主机的免密访问
+dnf install sshpass
+oos env manage -r 24.03-lts -i TARGET_MACHINE_IP -p TARGET_MACHINE_PASSWD -n test-oos
+

替换TARGET_MACHINE_IP为目标机ip、TARGET_MACHINE_PASSWD为目标机密码。具体的参数可以使用oos env manage --help命令查看。

diff --git a/site/install/openEuler-24.03-LTS/OpenStack-wallaby/index.html b/site/install/openEuler-24.03-LTS/OpenStack-wallaby/index.html new file mode 100644 index 0000000000000000000000000000000000000000..49c3ecabf17aea79d4cff679b88ac3322e2d58a2 --- /dev/null +++ b/site/install/openEuler-24.03-LTS/OpenStack-wallaby/index.html @@ -0,0 +1,2674 @@
+openEuler-24.03-LTS_Wallaby - OpenStack SIG Doc

OpenStack-Wallaby 部署指南

+ +

OpenStack 简介

+

OpenStack 是一个社区,也是一个项目。它提供了一个部署云的操作平台或工具集,为组织提供可扩展的、灵活的云计算。

+

作为一个开源的云计算管理平台,OpenStack 由nova、cinder、neutron、glance、keystone、horizon等几个主要的组件组合起来完成具体工作。OpenStack 支持几乎所有类型的云环境,项目目标是提供实施简单、可大规模扩展、丰富、标准统一的云计算管理平台。OpenStack 通过各种互补的服务提供了基础设施即服务(IaaS)的解决方案,每个服务提供 API 进行集成。

+

openEuler 24.03-LTS 版本官方源已经支持 OpenStack-Wallaby 版本,用户可以配置好 yum 源后根据此文档进行 OpenStack 部署。

+

约定

+

OpenStack 支持多种形态部署,此文档支持ALL in One以及Distributed两种部署方式,按照如下方式约定:

+

ALL in One模式:

+
忽略所有可能的后缀
+

Distributed模式:

+
以 `(CTL)` 为后缀表示此条配置或者命令仅适用`控制节点`
+以 `(CPT)` 为后缀表示此条配置或者命令仅适用`计算节点`
+以 `(STG)` 为后缀表示此条配置或者命令仅适用`存储节点`
+除此之外表示此条配置或者命令同时适用`控制节点`和`计算节点`
+

注意

+

涉及到以上约定的服务如下:

+
    +
  • Cinder
  • +
  • Nova
  • +
  • Neutron
  • +
+

准备环境

+

环境配置

+
    +
  1. +

    配置 24.03 LTS 官方 yum 源,需要启用 EPOL 软件仓以支持 OpenStack

    +
    yum update
    +yum install openstack-release-wallaby
    +yum clean all && yum makecache
    +

    注意:如果你的环境的YUM源没有启用EPOL,需要同时配置EPOL,确保EPOL已配置,如下所示。

    +
    vi /etc/yum.repos.d/openEuler.repo
    +
    +[EPOL]
    +name=EPOL
    +baseurl=http://repo.openeuler.org/openEuler-24.03-LTS/EPOL/main/$basearch/
    +enabled=1
    +gpgcheck=1
    +gpgkey=http://repo.openeuler.org/openEuler-24.03-LTS/OS/$basearch/RPM-GPG-KEY-openEuler
    +
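    An optional check that the EPOL repository is really enabled before installing OpenStack packages:

    dnf repolist | grep -i epol
    +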
  2. +
  3. +

    修改主机名以及映射

    +

    设置各个节点的主机名

    +
    hostnamectl set-hostname controller                                                            (CTL)
    +hostnamectl set-hostname compute                                                               (CPT)
    +

    假设controller节点的IP是10.0.0.11,compute节点的IP是10.0.0.12(如果存在的话),则于/etc/hosts新增如下:

    +
    10.0.0.11   controller
    +10.0.0.12   compute
    +
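    An optional connectivity check verifies that the new name resolution works from each node:

    ping -c 3 controller
    +ping -c 3 compute
    +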
  4. +
+

安装 SQL DataBase

+
    +
  1. +

    执行如下命令,安装软件包。

    +
    yum install mariadb mariadb-server python3-PyMySQL
    +
  2. +
  3. +

    执行如下命令,创建并编辑 /etc/my.cnf.d/openstack.cnf 文件。

    +
    vim /etc/my.cnf.d/openstack.cnf
    +
    +[mysqld]
    +bind-address = 10.0.0.11
    +default-storage-engine = innodb
    +innodb_file_per_table = on
    +max_connections = 4096
    +collation-server = utf8_general_ci
    +character-set-server = utf8
    +

    注意

    +

    其中 bind-address 设置为控制节点的管理IP地址。

    +
  4. +
  5. +

    启动 DataBase 服务,并为其配置开机自启动:

    +
    systemctl enable mariadb.service
    +systemctl start mariadb.service
    +
  6. +
  7. +

    配置DataBase的默认密码(可选)

    +
    mysql_secure_installation
    +

    注意

    +

    根据提示进行即可

    +
  8. +
+

安装 RabbitMQ

+
    +
  1. +

    执行如下命令,安装软件包。

    +
    yum install rabbitmq-server
    +
  2. +
  3. +

    启动 RabbitMQ 服务,并为其配置开机自启动。

    +
    systemctl enable rabbitmq-server.service
    +systemctl start rabbitmq-server.service
    +
  4. +
  5. +

    添加 OpenStack用户。

    +
    rabbitmqctl add_user openstack RABBIT_PASS
    +

    注意

    +

    替换 RABBIT_PASS,为 OpenStack 用户设置密码

    +
  6. +
  7. +

    设置openstack用户权限,允许进行配置、写、读:

    +
    rabbitmqctl set_permissions openstack ".*" ".*" ".*"
    +
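    The new account can be verified before any OpenStack service relies on it (optional; substitute the RABBIT_PASS chosen above):

    rabbitmqctl authenticate_user openstack RABBIT_PASS
    +rabbitmqctl list_permissions
    +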
  8. +
+

安装 Memcached

+
    +
  1. +

    执行如下命令,安装依赖软件包。

    +
    yum install memcached python3-memcached
    +
  2. +
  3. +

    编辑 /etc/sysconfig/memcached 文件。

    +
    vim /etc/sysconfig/memcached
    +
    +OPTIONS="-l 127.0.0.1,::1,controller"
    +
  4. +
  5. +

    执行如下命令,启动 Memcached 服务,并为其配置开机启动。

    +
    systemctl enable memcached.service
    +systemctl start memcached.service
    +

    注意

    +

    服务启动后,可以通过命令memcached-tool controller stats确保启动正常,服务可用,其中可以将controller替换为控制节点的管理IP地址。

    +
  6. +
+

安装 OpenStack

+

Keystone 安装

+
    +
  1. +

    创建 keystone 数据库并授权。

    +
    mysql -u root -p
    +
    +MariaDB [(none)]> CREATE DATABASE keystone;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
    +IDENTIFIED BY 'KEYSTONE_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
    +IDENTIFIED BY 'KEYSTONE_DBPASS';
    +MariaDB [(none)]> exit
    +

    注意

    +

    替换 KEYSTONE_DBPASS,为 Keystone 数据库设置密码

    +
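    The grants can be verified by logging in with the new account (optional; enter the same KEYSTONE_DBPASS chosen above when prompted):

    mysql -u keystone -p -e "SHOW DATABASES;"
    +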
  2. +
  3. +

    安装软件包。

    +
    yum install openstack-keystone httpd mod_wsgi
    +
  4. +
  5. +

    配置keystone相关配置

    +
    vim /etc/keystone/keystone.conf
    +
    +[database]
    +connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone
    +
    +[token]
    +provider = fernet
    +

    解释

    +

    [database]部分,配置数据库入口

    +

    [token]部分,配置token provider

    +

    注意:

    +

    替换 KEYSTONE_DBPASS 为 Keystone 数据库的密码

    +
  6. +
  7. +

    同步数据库。

    +
    su -s /bin/sh -c "keystone-manage db_sync" keystone
    +
  8. +
  9. +

    初始化Fernet密钥仓库。

    +
    keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
    +keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
    +
  10. +
  11. +

    启动服务。

    +
    keystone-manage bootstrap --bootstrap-password ADMIN_PASS \
    +--bootstrap-admin-url http://controller:5000/v3/ \
    +--bootstrap-internal-url http://controller:5000/v3/ \
    +--bootstrap-public-url http://controller:5000/v3/ \
    +--bootstrap-region-id RegionOne
    +

    注意

    +

    替换 ADMIN_PASS,为 admin 用户设置密码

    +
  12. +
  13. +

    配置Apache HTTP server

    +
    vim /etc/httpd/conf/httpd.conf
    +
    +ServerName controller
    +
    ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
    +

    解释

    +

    配置 ServerName 项引用控制节点

    +

    注意 +如果 ServerName 项不存在则需要创建

    +
  14. +
  15. +

    启动Apache HTTP服务。

    +
    systemctl enable httpd.service
    +systemctl start httpd.service
    +
  16. +
  17. +

    创建环境变量配置。

    +
    cat << EOF >> ~/.admin-openrc
    +export OS_PROJECT_DOMAIN_NAME=Default
    +export OS_USER_DOMAIN_NAME=Default
    +export OS_PROJECT_NAME=admin
    +export OS_USERNAME=admin
    +export OS_PASSWORD=ADMIN_PASS
    +export OS_AUTH_URL=http://controller:5000/v3
    +export OS_IDENTITY_API_VERSION=3
    +export OS_IMAGE_API_VERSION=2
    +EOF
    +

    注意

    +

    替换 ADMIN_PASS 为 admin 用户的密码

    +
  18. +
  19. +

    依次创建domain, projects, users, roles,需要先安装好python3-openstackclient:

    +
    yum install python3-openstackclient
    +

    导入环境变量

    +
    source ~/.admin-openrc
    +

    创建project service,其中 domain default 在 keystone-manage bootstrap 时已创建

    +
    openstack domain create --description "An Example Domain" example
    +
    openstack project create --domain default --description "Service Project" service
    +

    创建(non-admin)project myproject,user myuser 和 role myrole,为 myprojectmyuser 添加角色myrole

    +
    openstack project create --domain default --description "Demo Project" myproject
    +openstack user create --domain default --password-prompt myuser
    +openstack role create myrole
    +openstack role add --project myproject --user myuser myrole
    +
  20. +
  21. +

    验证

    +

    取消临时环境变量OS_AUTH_URL和OS_PASSWORD:

    +
    source ~/.admin-openrc
    +unset OS_AUTH_URL OS_PASSWORD
    +

    为admin用户请求token:

    +
    openstack --os-auth-url http://controller:5000/v3 \
    +--os-project-domain-name Default --os-user-domain-name Default \
    +--os-project-name admin --os-username admin token issue
    +

    为myuser用户请求token:

    +
    openstack --os-auth-url http://controller:5000/v3 \
    +--os-project-domain-name Default --os-user-domain-name Default \
    +--os-project-name myproject --os-username myuser token issue
    +
  22. +
+

Glance 安装

+
    +
  1. +

    创建数据库、服务凭证和 API 端点

    +

    创建数据库:

    +
    mysql -u root -p
    +
    +MariaDB [(none)]> CREATE DATABASE glance;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
    +IDENTIFIED BY 'GLANCE_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
    +IDENTIFIED BY 'GLANCE_DBPASS';
    +MariaDB [(none)]> exit
    +

    注意:

    +

    替换 GLANCE_DBPASS,为 glance 数据库设置密码

    +

    创建服务凭证

    +
    source ~/.admin-openrc
    +
    +openstack user create --domain default --password-prompt glance
    +openstack role add --project service --user glance admin
    +openstack service create --name glance --description "OpenStack Image" image
    +

    创建镜像服务API端点:

    +
    openstack endpoint create --region RegionOne image public http://controller:9292
    +openstack endpoint create --region RegionOne image internal http://controller:9292
    +openstack endpoint create --region RegionOne image admin http://controller:9292
    +
  2. +
  3. +

    安装软件包

    +
    yum install openstack-glance
    +
  4. +
  5. +

    配置glance相关配置:

    +
    vim /etc/glance/glance-api.conf
    +
    +[database]
    +connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
    +
    +[keystone_authtoken]
    +www_authenticate_uri  = http://controller:5000
    +auth_url = http://controller:5000
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +project_name = service
    +username = glance
    +password = GLANCE_PASS
    +
    +[paste_deploy]
    +flavor = keystone
    +
    +[glance_store]
    +stores = file,http
    +default_store = file
    +filesystem_store_datadir = /var/lib/glance/images/
    +

    解释:

    +

    [database]部分,配置数据库入口

    +

    [keystone_authtoken] [paste_deploy]部分,配置身份认证服务入口

    +

    [glance_store]部分,配置本地文件系统存储和镜像文件的位置

    +

    注意

    +

    替换 GLANCE_DBPASS 为 glance 数据库的密码

    +

    替换 GLANCE_PASS 为 glance 用户的密码

    +
  6. +
  7. +

    同步数据库:

    +
    su -s /bin/sh -c "glance-manage db_sync" glance
    +
  8. +
  9. +

    启动服务:

    +
    systemctl enable openstack-glance-api.service
    +systemctl start openstack-glance-api.service
    +
  10. +
  11. +

    验证

    +

    下载镜像

    +
    source ~/.admin-openrc
    +
    +wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
    +

    注意

    +

    如果您使用的环境是鲲鹏架构,请下载aarch64版本的镜像;已对镜像cirros-0.5.2-aarch64-disk.img进行测试。

    +

    向Image服务上传镜像:

    +
    openstack image create --disk-format qcow2 --container-format bare \
    +                       --file cirros-0.4.0-x86_64-disk.img --public cirros
    +

    确认镜像上传并验证属性:

    +
    openstack image list
    +
  12. +
+

Placement安装

+
    +
  1. +

    创建数据库、服务凭证和 API 端点

    +

    创建数据库:

    +

    作为 root 用户访问数据库,创建 placement 数据库并授权。

    +
    mysql -u root -p
    +MariaDB [(none)]> CREATE DATABASE placement;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' \
    +IDENTIFIED BY 'PLACEMENT_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' \
    +IDENTIFIED BY 'PLACEMENT_DBPASS';
    +MariaDB [(none)]> exit
    +

    注意

    +

    替换 PLACEMENT_DBPASS 为 placement 数据库设置密码

    +
    source ~/.admin-openrc
    +

    执行如下命令,创建 placement 服务凭证、创建 placement 用户以及添加‘admin’角色到用户‘placement’。

    +

    创建Placement API服务

    +
    openstack user create --domain default --password-prompt placement
    +openstack role add --project service --user placement admin
    +openstack service create --name placement --description "Placement API" placement
    +

    创建placement服务API端点:

    +
    openstack endpoint create --region RegionOne placement public http://controller:8778
    +openstack endpoint create --region RegionOne placement internal http://controller:8778
    +openstack endpoint create --region RegionOne placement admin http://controller:8778
    +
  2. +
  3. +

    安装和配置

    +

    安装软件包:

    +
    yum install openstack-placement-api
    +

    配置placement:

    +

    编辑 /etc/placement/placement.conf 文件:

    +

    在[placement_database]部分,配置数据库入口

    +

    在[api] [keystone_authtoken]部分,配置身份认证服务入口

    +
    # vim /etc/placement/placement.conf
    +[placement_database]
    +# ...
    +connection = mysql+pymysql://placement:PLACEMENT_DBPASS@controller/placement
    +[api]
    +# ...
    +auth_strategy = keystone
    +[keystone_authtoken]
    +# ...
    +auth_url = http://controller:5000/v3
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +project_name = service
    +username = placement
    +password = PLACEMENT_PASS
    +

    其中,替换 PLACEMENT_DBPASS 为 placement 数据库的密码,替换 PLACEMENT_PASS 为 placement 用户的密码。

    +

    同步数据库:

    +
    su -s /bin/sh -c "placement-manage db sync" placement
    +

    启动httpd服务:

    +
    systemctl restart httpd
    +
  4. +
  5. +

    验证

    +

    执行如下命令,执行状态检查:

    +
    source ~/.admin-openrc
    +placement-status upgrade check
    +

    安装osc-placement,列出可用的资源类别及特性:

    +
    yum install python3-osc-placement
    +openstack --os-placement-api-version 1.2 resource class list --sort-column name
    +openstack --os-placement-api-version 1.6 trait list --sort-column name
    +
  6. +
+

Nova 安装

+
    +
  1. +

    创建数据库、服务凭证和 API 端点

    +

    创建数据库:

    +
    mysql -u root -p                                                                               (CTL)
    +
    +MariaDB [(none)]> CREATE DATABASE nova_api;
    +MariaDB [(none)]> CREATE DATABASE nova;
    +MariaDB [(none)]> CREATE DATABASE nova_cell0;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \
    +IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> exit
    +

    注意

    +

    替换NOVA_DBPASS,为nova数据库设置密码

    +
    source ~/.admin-openrc                                                                         (CTL)
    +

    创建nova服务凭证:

    +
    openstack user create --domain default --password-prompt nova                                  (CTL)
    +openstack role add --project service --user nova admin                                         (CTL)
    +openstack service create --name nova --description "OpenStack Compute" compute                 (CTL)
    +

    创建nova API端点:

    +
    openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1        (CTL)
    +openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1      (CTL)
    +openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1         (CTL)
    +
  2. +
  3. +

    安装软件包

    +
    yum install openstack-nova-api openstack-nova-conductor \                                      (CTL)
    +openstack-nova-novncproxy openstack-nova-scheduler 
    +
    +yum install openstack-nova-compute                                                             (CPT)
    +

    注意

    +

    如果为arm64结构,还需要执行以下命令

    +
    yum install edk2-aarch64                                                                       (CPT)
    +
  4. +
  5. +

    配置nova相关配置

    +
    vim /etc/nova/nova.conf
    +
    +[DEFAULT]
    +enabled_apis = osapi_compute,metadata
    +transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
    +my_ip = 10.0.0.1
    +use_neutron = true
    +firewall_driver = nova.virt.firewall.NoopFirewallDriver
    +compute_driver=libvirt.LibvirtDriver                                                           (CPT)
    +instances_path = /var/lib/nova/instances/                                                      (CPT)
    +lock_path = /var/lib/nova/tmp                                                                  (CPT)
    +
    +[api_database]
    +connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api                              (CTL)
    +
    +[database]
    +connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova                                  (CTL)
    +
    +[api]
    +auth_strategy = keystone
    +
    +[keystone_authtoken]
    +www_authenticate_uri = http://controller:5000/
    +auth_url = http://controller:5000/
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +project_name = service
    +username = nova
    +password = NOVA_PASS
    +
    +[vnc]
    +enabled = true
    +server_listen = $my_ip
    +server_proxyclient_address = $my_ip
    +novncproxy_base_url = http://controller:6080/vnc_auto.html                                     (CPT)
    +
    +[libvirt]
    +virt_type = qemu                                                                               (CPT)
    +cpu_mode = custom                                                                              (CPT)
    +cpu_model = cortex-a72                                                                         (CPT)
    +
    +[glance]
    +api_servers = http://controller:9292
    +
    +[oslo_concurrency]
    +lock_path = /var/lib/nova/tmp                                                                  (CTL)
    +
    +[placement]
    +region_name = RegionOne
    +project_domain_name = Default
    +project_name = service
    +auth_type = password
    +user_domain_name = Default
    +auth_url = http://controller:5000/v3
    +username = placement
    +password = PLACEMENT_PASS
    +
    +[neutron]
    +auth_url = http://controller:5000
    +auth_type = password
    +project_domain_name = default
    +user_domain_name = default
    +region_name = RegionOne
    +project_name = service
    +username = neutron
    +password = NEUTRON_PASS
    +service_metadata_proxy = true                                                                  (CTL)
    +metadata_proxy_shared_secret = METADATA_SECRET                                                 (CTL)
    +

    解释

    +

    [default]部分,启用计算和元数据的API,配置RabbitMQ消息队列入口,配置my_ip,启用网络服务neutron;

    +

    [api_database] [database]部分,配置数据库入口;

    +

    [api] [keystone_authtoken]部分,配置身份认证服务入口;

    +

    [vnc]部分,启用并配置远程控制台入口;

    +

    [glance]部分,配置镜像服务API的地址;

    +

    [oslo_concurrency]部分,配置lock path;

    +

    [placement]部分,配置placement服务的入口。

    +

    注意

    +

    替换 RABBIT_PASS 为 RabbitMQ 中 openstack 账户的密码;

    +

    配置 my_ip 为控制节点的管理IP地址;

    +

    替换 NOVA_DBPASS 为nova数据库的密码;

    +

    替换 NOVA_PASS 为nova用户的密码;

    +

    替换 PLACEMENT_PASS 为placement用户的密码;

    +

    替换 NEUTRON_PASS 为neutron用户的密码;

    +

    替换METADATA_SECRET为合适的元数据代理secret。

    +

    额外

    +

    确定是否支持虚拟机硬件加速(x86架构):

    +
    egrep -c '(vmx|svm)' /proc/cpuinfo                                                             (CPT)
    +

    如果返回值为0则不支持硬件加速,需要配置libvirt使用QEMU而不是KVM:

    +
    vim /etc/nova/nova.conf                                                                        (CPT)
    +
    +[libvirt]
    +virt_type = qemu
    +

    如果返回值为1或更大的值,则支持硬件加速,不需要进行额外的配置

    +

    注意

    +

    如果为arm64结构,还需要执行以下命令

    +
    vim /etc/libvirt/qemu.conf
    +
    +nvram = ["/usr/share/AAVMF/AAVMF_CODE.fd: \
    +         /usr/share/AAVMF/AAVMF_VARS.fd", \
    +         "/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw: \
    +         /usr/share/edk2/aarch64/vars-template-pflash.raw"]
    +
    +vim /etc/qemu/firmware/edk2-aarch64.json
    +
    +{
    +    "description": "UEFI firmware for ARM64 virtual machines",
    +    "interface-types": [
    +        "uefi"
    +    ],
    +    "mapping": {
    +        "device": "flash",
    +        "executable": {
    +            "filename": "/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw",
    +            "format": "raw"
    +        },
    +        "nvram-template": {
    +            "filename": "/usr/share/edk2/aarch64/vars-template-pflash.raw",
    +            "format": "raw"
    +        }
    +    },
    +    "targets": [
    +        {
    +            "architecture": "aarch64",
    +            "machines": [
    +                "virt-*"
    +            ]
    +        }
    +    ],
    +    "features": [
    +
    +    ],
    +    "tags": [
    +
    +    ]
    +}
    +
    +(CPT)
    +
  6. +
  7. +

    同步数据库

    +

    同步nova-api数据库:

    +
    su -s /bin/sh -c "nova-manage api_db sync" nova                                                (CTL)
    +

    注册cell0数据库:

    +
    su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova                                          (CTL)
    +

    创建cell1 cell:

    +
    su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova                 (CTL)
    +

    同步nova数据库:

    +
    su -s /bin/sh -c "nova-manage db sync" nova                                                    (CTL)
    +

    验证cell0和cell1注册正确:

    +
    su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova                                         (CTL)
    +

    添加计算节点到openstack集群

    +
    su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova                           (CPT)
    +
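    If more compute nodes will be added later, nova-scheduler can also discover them periodically instead of re-running the command above each time; a minimal sketch of the relevant option in /etc/nova/nova.conf on the controller (the 300-second interval is only an example):

    [scheduler]
    +discover_hosts_in_cells_interval = 300
    +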
  8. +
  9. +

    启动服务

    +
    systemctl enable \                                                                             (CTL)
    +openstack-nova-api.service \
    +openstack-nova-scheduler.service \
    +openstack-nova-conductor.service \
    +openstack-nova-novncproxy.service
    +
    +systemctl start \                                                                              (CTL)
    +openstack-nova-api.service \
    +openstack-nova-scheduler.service \
    +openstack-nova-conductor.service \
    +openstack-nova-novncproxy.service
    +
    systemctl enable libvirtd.service openstack-nova-compute.service                               (CPT)
    +systemctl start libvirtd.service openstack-nova-compute.service                                (CPT)
    +
  10. +
  11. +

    验证

    +
    source ~/.admin-openrc                                                                         (CTL)
    +

    列出服务组件,验证每个流程都成功启动和注册:

    +
    openstack compute service list                                                                 (CTL)
    +

    列出身份服务中的API端点,验证与身份服务的连接:

    +
    openstack catalog list                                                                         (CTL)
    +

    列出镜像服务中的镜像,验证与镜像服务的连接:

    +
    openstack image list                                                                           (CTL)
    +

    检查cells是否运作成功,以及其他必要条件是否已具备。

    +
    nova-status upgrade check                                                                      (CTL)
    +
  12. +
+

Neutron 安装

+
    +
  1. +

    创建数据库、服务凭证和 API 端点

    +

    创建数据库:

    +
    mysql -u root -p                                                                               (CTL)
    +
    +MariaDB [(none)]> CREATE DATABASE neutron;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
    +IDENTIFIED BY 'NEUTRON_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
    +IDENTIFIED BY 'NEUTRON_DBPASS';
    +MariaDB [(none)]> exit
    +

    注意

    +

    替换 NEUTRON_DBPASS 为 neutron 数据库设置密码。

    +
    source ~/.admin-openrc                                                                         (CTL)
    +

    创建neutron服务凭证

    +
    openstack user create --domain default --password-prompt neutron                               (CTL)
    +openstack role add --project service --user neutron admin                                      (CTL)
    +openstack service create --name neutron --description "OpenStack Networking" network           (CTL)
    +

    创建Neutron服务API端点:

    +
    openstack endpoint create --region RegionOne network public http://controller:9696             (CTL)
    +openstack endpoint create --region RegionOne network internal http://controller:9696           (CTL)
    +openstack endpoint create --region RegionOne network admin http://controller:9696              (CTL)
    +
  2. +
  3. +

    安装软件包:

    +
    yum install openstack-neutron openstack-neutron-linuxbridge ebtables ipset \                   (CTL)
    +openstack-neutron-ml2
    +
    yum install openstack-neutron-linuxbridge ebtables ipset                                       (CPT)
    +
  4. +
  5. +

    配置neutron相关配置:

    +

    配置主体配置

    +
    vim /etc/neutron/neutron.conf
    +
    +[database]
    +connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron                         (CTL)
    +
    +[DEFAULT]
    +core_plugin = ml2                                                                              (CTL)
    +service_plugins = router                                                                       (CTL)
    +allow_overlapping_ips = true                                                                   (CTL)
    +transport_url = rabbit://openstack:RABBIT_PASS@controller
    +auth_strategy = keystone
    +notify_nova_on_port_status_changes = true                                                      (CTL)
    +notify_nova_on_port_data_changes = true                                                        (CTL)
    +api_workers = 3                                                                                (CTL)
    +
    +[keystone_authtoken]
    +www_authenticate_uri = http://controller:5000
    +auth_url = http://controller:5000
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +project_name = service
    +username = neutron
    +password = NEUTRON_PASS
    +
    +[nova]
    +auth_url = http://controller:5000                                                              (CTL)
    +auth_type = password                                                                           (CTL)
    +project_domain_name = Default                                                                  (CTL)
    +user_domain_name = Default                                                                     (CTL)
    +region_name = RegionOne                                                                        (CTL)
    +project_name = service                                                                         (CTL)
    +username = nova                                                                                (CTL)
    +password = NOVA_PASS                                                                           (CTL)
    +
    +[oslo_concurrency]
    +lock_path = /var/lib/neutron/tmp
    +

    解释

    +

    [database]部分,配置数据库入口;

    +

    [default]部分,启用ml2插件和router插件,允许ip地址重叠,配置RabbitMQ消息队列入口;

    +

    [default] [keystone]部分,配置身份认证服务入口;

    +

    [default] [nova]部分,配置网络来通知计算网络拓扑的变化;

    +

    [oslo_concurrency]部分,配置lock path。

    +

    注意

    +

    替换NEUTRON_DBPASS为 neutron 数据库的密码;

    +

    替换RABBIT_PASS为 RabbitMQ中openstack 账户的密码;

    +

    替换NEUTRON_PASS为 neutron 用户的密码;

    +

    替换NOVA_PASS为 nova 用户的密码。

    +

    配置ML2插件:

    +
    vim /etc/neutron/plugins/ml2/ml2_conf.ini
    +
    +[ml2]
    +type_drivers = flat,vlan,vxlan
    +tenant_network_types = vxlan
    +mechanism_drivers = linuxbridge,l2population
    +extension_drivers = port_security
    +
    +[ml2_type_flat]
    +flat_networks = provider
    +
    +[ml2_type_vxlan]
    +vni_ranges = 1:1000
    +
    +[securitygroup]
    +enable_ipset = true
    +

    创建/etc/neutron/plugin.ini的符号链接

    +
    ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
    +

    注意

    +

    [ml2]部分,启用 flat、vlan、vxlan 网络,启用 linuxbridge 及 l2population 机制,启用端口安全扩展驱动;

    +

    [ml2_type_flat]部分,配置 flat 网络为 provider 虚拟网络;

    +

    [ml2_type_vxlan]部分,配置 VXLAN 网络标识符范围;

    +

    [securitygroup]部分,配置允许 ipset。

    +

    补充

    +

    l2 的具体配置可以根据用户需求自行修改,本文使用的是provider network + linuxbridge

    +

    配置 Linux bridge 代理:

    +
    vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
    +
    +[linux_bridge]
    +physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME
    +
    +[vxlan]
    +enable_vxlan = true
    +local_ip = OVERLAY_INTERFACE_IP_ADDRESS
    +l2_population = true
    +
    +[securitygroup]
    +enable_security_group = true
    +firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
    +

    解释

    +

    [linux_bridge]部分,映射 provider 虚拟网络到物理网络接口;

    +

    [vxlan]部分,启用 vxlan 覆盖网络,配置处理覆盖网络的物理网络接口 IP 地址,启用 layer-2 population;

    +

    [securitygroup]部分,允许安全组,配置 linux bridge iptables 防火墙驱动。

    +

    注意

    +

    替换PROVIDER_INTERFACE_NAME为物理网络接口;

    +

    替换OVERLAY_INTERFACE_IP_ADDRESS为控制节点的管理IP地址。

    +

    配置Layer-3代理:

    +
    vim /etc/neutron/l3_agent.ini                                                                  (CTL)
    +
    +[DEFAULT]
    +interface_driver = linuxbridge
    +

    解释

    +

    在[default]部分,配置接口驱动为linuxbridge

    +

    配置DHCP代理:

    +
    vim /etc/neutron/dhcp_agent.ini                                                                (CTL)
    +
    +[DEFAULT]
    +interface_driver = linuxbridge
    +dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
    +enable_isolated_metadata = true
    +

    解释

    +

    [default]部分,配置linuxbridge接口驱动、Dnsmasq DHCP驱动,启用隔离的元数据。

    +

    配置metadata代理:

    +
    vim /etc/neutron/metadata_agent.ini                                                            (CTL)
    +
    +[DEFAULT]
    +nova_metadata_host = controller
    +metadata_proxy_shared_secret = METADATA_SECRET
    +

    解释

    +

    [default]部分,配置元数据主机和shared secret。

    +

    注意

    +

    替换METADATA_SECRET为合适的元数据代理secret。

    +
  6. +
  7. +

    配置nova相关配置

    +
    vim /etc/nova/nova.conf
    +
    +[neutron]
    +auth_url = http://controller:5000
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +region_name = RegionOne
    +project_name = service
    +username = neutron
    +password = NEUTRON_PASS
    +service_metadata_proxy = true                                                                  (CTL)
    +metadata_proxy_shared_secret = METADATA_SECRET                                                 (CTL)
    +

    解释

    +

    [neutron]部分,配置访问参数,启用元数据代理,配置secret。

    +

    注意

    +

    替换NEUTRON_PASS为 neutron 用户的密码;

    +

    替换METADATA_SECRET为合适的元数据代理secret。

    +
  8. +
  9. +

    同步数据库:

    +
    su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
    +--config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
    +
  10. +
  11. +

    重启计算API服务:

    +
    systemctl restart openstack-nova-api.service
    +
  12. +
  13. +

    启动网络服务

    +
    systemctl enable neutron-server.service neutron-linuxbridge-agent.service \                    (CTL)
    +neutron-dhcp-agent.service neutron-metadata-agent.service 
    +systemctl enable neutron-l3-agent.service
    +systemctl restart openstack-nova-api.service neutron-server.service \                         (CTL)
    +neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
    +neutron-metadata-agent.service neutron-l3-agent.service
    +
    +systemctl enable neutron-linuxbridge-agent.service                                             (CPT)
    +systemctl restart neutron-linuxbridge-agent.service openstack-nova-compute.service             (CPT)
    +
  14. +
  15. +

    验证

    +

    验证 neutron 代理启动成功:

    +
    openstack network agent list
    +
  16. +
+

Cinder 安装

+
    +
  1. +

    创建数据库、服务凭证和 API 端点

    +

    创建数据库:

    +
    mysql -u root -p
    +
    +MariaDB [(none)]> CREATE DATABASE cinder;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \
    +IDENTIFIED BY 'CINDER_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \
    +IDENTIFIED BY 'CINDER_DBPASS';
    +MariaDB [(none)]> exit
    +

    注意

    +

    替换 CINDER_DBPASS 为cinder数据库设置密码。

    +
    source ~/.admin-openrc
    +

    创建cinder服务凭证:

    +
    openstack user create --domain default --password-prompt cinder
    +openstack role add --project service --user cinder admin
    +openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
    +openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
    +

    创建块存储服务API端点:

    +
    openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(project_id\)s
    +openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(project_id\)s
    +openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(project_id\)s
    +openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s
    +openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s
    +openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s
    +
  2. +
  3. +

    安装软件包:

    +
    yum install openstack-cinder-api openstack-cinder-scheduler                                    (CTL)
    +
    yum install lvm2 device-mapper-persistent-data scsi-target-utils rpcbind nfs-utils \           (STG)
    +            openstack-cinder-volume openstack-cinder-backup
    +
  4. +
  5. +

    准备存储设备,以下仅为示例:

    +
    pvcreate /dev/vdb
    +vgcreate cinder-volumes /dev/vdb
    +
    +vim /etc/lvm/lvm.conf
    +
    +
    +devices {
    +...
    +filter = [ "a/vdb/", "r/.*/"]
    +

    解释

    +

    在devices部分,添加过滤器以接受/dev/vdb设备并拒绝其他设备。

    +
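    Whether the volume group is in place can be checked with the LVM tooling (optional):

    vgs cinder-volumes
    +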
  6. +
  7. +

    准备NFS

    +
    mkdir -p /root/cinder/backup
    +
    +cat << EOF >> /etc/exports
    +/root/cinder/backup 192.168.1.0/24(rw,sync,no_root_squash,no_all_squash)
    +EOF
    +
    +
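    Once the nfs-server service started in the later step is running, the new export usually needs to be re-read before cinder-backup can mount it; a minimal sketch:

    exportfs -r
    +showmount -e localhost
    +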
  8. +
  9. +

    配置cinder相关配置:

    +
    vim /etc/cinder/cinder.conf
    +
    +[DEFAULT]
    +transport_url = rabbit://openstack:RABBIT_PASS@controller
    +auth_strategy = keystone
    +my_ip = 10.0.0.11
    +enabled_backends = lvm                                                                         (STG)
    +backup_driver=cinder.backup.drivers.nfs.NFSBackupDriver                                        (STG)
    +backup_share=HOST:PATH                                                                         (STG)
    +
    +[database]
    +connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder
    +
    +[keystone_authtoken]
    +www_authenticate_uri = http://controller:5000
    +auth_url = http://controller:5000
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +project_name = service
    +username = cinder
    +password = CINDER_PASS
    +
    +[oslo_concurrency]
    +lock_path = /var/lib/cinder/tmp
    +
    +[lvm]
    +volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver                                      (STG)
    +volume_group = cinder-volumes                                                                  (STG)
    +iscsi_protocol = iscsi                                                                         (STG)
    +iscsi_helper = tgtadm                                                                          (STG)
    +

    解释

    +

    [database]部分,配置数据库入口;

    +

    [DEFAULT]部分,配置RabbitMQ消息队列入口,配置my_ip;

    +

    [DEFAULT] [keystone_authtoken]部分,配置身份认证服务入口;

    +

    [oslo_concurrency]部分,配置lock path。

    +

    注意

    +

    替换CINDER_DBPASS为 cinder 数据库的密码;

    +

    替换RABBIT_PASS为 RabbitMQ 中 openstack 账户的密码;

    +

    配置my_ip为控制节点的管理 IP 地址;

    +

    替换CINDER_PASS为 cinder 用户的密码;

    +

    替换HOST:PATH为 NFS 的HOSTIP和共享路径;

    +
  10. +
  11. +

    同步数据库:

    +
    su -s /bin/sh -c "cinder-manage db sync" cinder                                                (CTL)
    +
  12. +
  13. +

    配置nova:

    +
    vim /etc/nova/nova.conf                                                                        (CTL)
    +
    +[cinder]
    +os_region_name = RegionOne
    +
  14. +
  15. +

    重启计算API服务

    +
    systemctl restart openstack-nova-api.service
    +
  16. +
  17. +

    启动cinder服务

    +
    systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service               (CTL)
    +systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service                (CTL)
    +
    systemctl enable rpcbind.service nfs-server.service tgtd.service iscsid.service \              (STG)
    +                 openstack-cinder-volume.service \
    +                 openstack-cinder-backup.service
    +systemctl start rpcbind.service nfs-server.service tgtd.service iscsid.service \               (STG)
    +                openstack-cinder-volume.service \
    +                openstack-cinder-backup.service
    +

    注意

    +

    当cinder使用tgtadm的方式挂卷的时候,要修改/etc/tgt/tgtd.conf,内容如下,保证tgtd可以发现cinder-volume的iscsi target。

    +
    include /var/lib/cinder/volumes/*
    +
  18. +
  19. +

    验证

    +
    source ~/.admin-openrc
    +openstack volume service list
    +
  20. +
+

horizon 安装

+
    +
  1. +

    安装软件包

    +
    yum install openstack-dashboard
    +
  2. +
  3. +

    修改文件

    +

    修改变量

    +
    vim /etc/openstack-dashboard/local_settings
    +
    +OPENSTACK_HOST = "controller"
    +ALLOWED_HOSTS = ['*', ]
    +
    +SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
    +
    +CACHES = {
    +'default': {
    +     'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
    +     'LOCATION': 'controller:11211',
    +    }
    +}
    +
    +OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
    +OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
    +OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
    +OPENSTACK_KEYSTONE_DEFAULT_ROLE = "member"
    +WEBROOT = '/dashboard'
    +POLICY_FILES_PATH = "/etc/openstack-dashboard"
    +
    +OPENSTACK_API_VERSIONS = {
    +    "identity": 3,
    +    "image": 2,
    +    "volume": 3,
    +}
    +
  4. +
  5. +

    重启 httpd 服务

    +
    systemctl restart httpd.service memcached.service
    +
  6. +
  7. +

    验证

    打开浏览器,输入网址http://HOSTIP/dashboard/,登录 horizon。

    +

    注意

    +

    替换HOSTIP为控制节点管理平面IP地址

    +
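    A non-interactive alternative to the browser check (optional) is to request the dashboard URL and confirm an HTTP 200 or 302 response:

    curl -sI http://HOSTIP/dashboard/ | head -n 1
    +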
  8. +
+

Tempest 安装

+

Tempest是OpenStack的集成测试服务,如果用户需要全面自动化测试已安装的OpenStack环境的功能,则推荐使用该组件。否则,可以不用安装。

+
    +
  1. +

    安装Tempest

    +
    yum install openstack-tempest
    +
  2. +
  3. +

    初始化目录

    +
    tempest init mytest
    +
  4. +
  5. +

    修改配置文件。

    +
    cd mytest
    +vi etc/tempest.conf
    +

    tempest.conf中需要配置当前OpenStack环境的信息,具体内容可以参考官方示例

    +
  6. +
  7. +

    执行测试

    +
    tempest run
    +
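    Before a full run, the discovered test set can be inspected or narrowed down (optional):

    tempest run --list-tests | head
    +tempest run --smoke
    +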
  8. +
  9. +

    安装tempest扩展(可选)

    OpenStack各个服务本身也提供了一些tempest测试包,用户可以安装这些包来丰富tempest的测试内容。在Wallaby中,我们提供了Cinder、Glance、Keystone、Ironic、Trove的扩展测试,用户可以执行如下命令进行安装使用:

    yum install python3-cinder-tempest-plugin python3-glance-tempest-plugin python3-ironic-tempest-plugin python3-keystone-tempest-plugin python3-trove-tempest-plugin

    +
  10. +
+

Ironic 安装

+

Ironic是OpenStack的裸金属服务,如果用户需要进行裸机部署则推荐使用该组件。否则,可以不用安装。

+
    +
  1. 设置数据库
  2. +
+

裸金属服务在数据库中存储信息,创建一个ironic用户可以访问的ironic数据库,替换IRONIC_DBPASSWORD为合适的密码

+
mysql -u root -p
+
+MariaDB [(none)]> CREATE DATABASE ironic CHARACTER SET utf8;
+MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'localhost' \
+IDENTIFIED BY 'IRONIC_DBPASSWORD';
+MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'%' \
+IDENTIFIED BY 'IRONIC_DBPASSWORD';
+
    +
  1. 创建服务用户认证
  2. +
+

1、创建Bare Metal服务用户

+
openstack user create --password IRONIC_PASSWORD \
+                      --email ironic@example.com ironic
+openstack role add --project service --user ironic admin
+openstack service create --name ironic \
+                         --description "Ironic baremetal provisioning service" baremetal
+
+openstack service create --name ironic-inspector --description     "Ironic inspector baremetal provisioning service" baremetal-introspection
+openstack user create --password IRONIC_INSPECTOR_PASSWORD --email ironic_inspector@example.com ironic_inspector
+openstack role add --project service --user ironic-inspector admin
+

2、创建Bare Metal服务访问入口

+
openstack endpoint create --region RegionOne baremetal admin http://$IRONIC_NODE:6385
+openstack endpoint create --region RegionOne baremetal public http://$IRONIC_NODE:6385
+openstack endpoint create --region RegionOne baremetal internal http://$IRONIC_NODE:6385
+openstack endpoint create --region RegionOne baremetal-introspection internal http://172.20.19.13:5050/v1
+openstack endpoint create --region RegionOne baremetal-introspection public http://172.20.19.13:5050/v1
+openstack endpoint create --region RegionOne baremetal-introspection admin http://172.20.19.13:5050/v1
+
    +
  1. 配置ironic-api服务
  2. +
+

配置文件路径/etc/ironic/ironic.conf

+

1、通过connection选项配置数据库的位置,如下所示,替换IRONIC_DBPASSWORD为ironic用户的密码,替换DB_IP为DB服务器所在的IP地址:

+
[database]
+
+# The SQLAlchemy connection string used to connect to the
+# database (string value)
+
+connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic
+

2、通过以下选项配置ironic-api服务使用RabbitMQ消息代理,替换RPC_*为RabbitMQ的详细地址和凭证

+
[DEFAULT]
+
+# A URL representing the messaging driver to use and its full
+# configuration. (string value)
+
+transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
+

用户也可自行使用json-rpc方式替换rabbitmq

+
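若选择 json-rpc 方式,可参考如下配置示意(ironic 通过 rpc_transport 选项切换 RPC 方式,示例中的监听地址与端口为假设值,具体取值请以所用版本的官方文档为准):

```
[DEFAULT]
rpc_transport = json-rpc

[json_rpc]
# json-rpc 服务监听地址与端口(示例值)
host_ip = 192.168.0.2
port = 8089
auth_strategy = keystone
```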

3、配置ironic-api服务使用身份认证服务的凭证,替换PUBLIC_IDENTITY_IP为身份认证服务器的公共IP,替换PRIVATE_IDENTITY_IP为身份认证服务器的私有IP,替换IRONIC_PASSWORD为身份认证服务中ironic用户的密码:

+
[DEFAULT]
+
+# Authentication strategy used by ironic-api: one of
+# "keystone" or "noauth". "noauth" should not be used in a
+# production environment because all authentication will be
+# disabled. (string value)
+
+auth_strategy=keystone
+host = controller
+memcache_servers = controller:11211
+enabled_network_interfaces = flat,noop,neutron
+default_network_interface = noop
+transport_url = rabbit://openstack:RABBITPASSWD@controller:5672/
+enabled_hardware_types = ipmi
+enabled_boot_interfaces = pxe
+enabled_deploy_interfaces = direct
+default_deploy_interface = direct
+enabled_inspect_interfaces = inspector
+enabled_management_interfaces = ipmitool
+enabled_power_interfaces = ipmitool
+enabled_rescue_interfaces = no-rescue,agent
+isolinux_bin = /usr/share/syslinux/isolinux.bin
+logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s
+
+[keystone_authtoken]
+# Authentication type to load (string value)
+auth_type=password
+# Complete public Identity API endpoint (string value)
+www_authenticate_uri=http://PUBLIC_IDENTITY_IP:5000
+# Complete admin Identity API endpoint. (string value)
+auth_url=http://PRIVATE_IDENTITY_IP:5000
+# Service username. (string value)
+username=ironic
+# Service account password. (string value)
+password=IRONIC_PASSWORD
+# Service tenant name. (string value)
+project_name=service
+# Domain name containing project (string value)
+project_domain_name=Default
+# User's domain name (string value)
+user_domain_name=Default
+
+[agent]
+deploy_logs_collect = always
+deploy_logs_local_path = /var/log/ironic/deploy
+deploy_logs_storage_backend = local
+image_download_source = http
+stream_raw_images = false
+force_raw_images = false
+verify_ca = False
+
+[oslo_concurrency]
+
+[oslo_messaging_notifications]
+transport_url = rabbit://openstack:123456@172.20.19.25:5672/
+topics = notifications
+driver = messagingv2
+
+[oslo_messaging_rabbit]
+amqp_durable_queues = True
+rabbit_ha_queues = True
+
+[pxe]
+ipxe_enabled = false
+pxe_append_params = nofb nomodeset vga=normal coreos.autologin ipa-insecure=1
+image_cache_size = 204800
+tftp_root=/var/lib/tftpboot/cephfs/
+tftp_master_path=/var/lib/tftpboot/cephfs/master_images
+
+[dhcp]
+dhcp_provider = none
+

4、创建裸金属服务数据库表

+
ironic-dbsync --config-file /etc/ironic/ironic.conf create_schema
+

5、重启ironic-api服务

+
sudo systemctl restart openstack-ironic-api
+
    +
  1. 配置ironic-conductor服务
  2. +
+

1、替换HOST_IP为conductor host的IP

+
[DEFAULT]
+
+# IP address of this host. If unset, will determine the IP
+# programmatically. If unable to do so, will use "127.0.0.1".
+# (string value)
+
+my_ip=HOST_IP
+

2、配置数据库的位置,ironic-conductor应该使用和ironic-api相同的配置。替换IRONIC_DBPASSWORD为ironic用户的密码,替换DB_IP为DB服务器所在的IP地址:

+
[database]
+
+# The SQLAlchemy connection string to use to connect to the
+# database. (string value)
+
+connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic
+

3、通过以下选项配置ironic-api服务使用RabbitMQ消息代理,ironic-conductor应该使用和ironic-api相同的配置,替换RPC_*为RabbitMQ的详细地址和凭证

+
[DEFAULT]
+
+# A URL representing the messaging driver to use and its full
+# configuration. (string value)
+
+transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
+

用户也可自行使用json-rpc方式替换rabbitmq

+

4、配置凭证访问其他OpenStack服务

+

为了与其他OpenStack服务进行通信,裸金属服务在请求其他服务时需要使用服务用户与OpenStack Identity服务进行认证。这些用户的凭据必须在与相应服务相关的每个配置文件中进行配置。

+
[neutron] - 访问OpenStack网络服务
+[glance] - 访问OpenStack镜像服务
+[swift] - 访问OpenStack对象存储服务
+[cinder] - 访问OpenStack块存储服务
+[inspector] - 访问OpenStack裸金属introspection服务
+[service_catalog] - 一个特殊项用于保存裸金属服务使用的凭证,该凭证用于发现注册在OpenStack身份认证服务目录中的自己的API URL端点
+

简单起见,可以对所有服务使用同一个服务用户。为了向后兼容,该用户应该和ironic-api服务的[keystone_authtoken]所配置的为同一个用户。但这不是必须的,也可以为每个服务创建并配置不同的服务用户。

+

在下面的示例中,用户访问OpenStack网络服务的身份验证信息配置为:

+
网络服务部署在名为RegionOne的身份认证服务域中,仅在服务目录中注册公共端点接口
+
+请求时使用特定的CA SSL证书进行HTTPS连接
+
+与ironic-api服务配置相同的服务用户
+
+动态密码认证插件基于其他选项发现合适的身份认证服务API版本
+
[neutron]
+
+# Authentication type to load (string value)
+auth_type = password
+# Authentication URL (string value)
+auth_url=https://IDENTITY_IP:5000/
+# Username (string value)
+username=ironic
+# User's password (string value)
+password=IRONIC_PASSWORD
+# Project name to scope to (string value)
+project_name=service
+# Domain ID containing project (string value)
+project_domain_id=default
+# User's domain id (string value)
+user_domain_id=default
+# PEM encoded Certificate Authority to use when verifying
+# HTTPs connections. (string value)
+cafile=/opt/stack/data/ca-bundle.pem
+# The default region_name for endpoint URL discovery. (string
+# value)
+region_name = RegionOne
+# List of interfaces, in order of preference, for endpoint
+# URL. (list value)
+valid_interfaces=public
+

默认情况下,为了与其他服务进行通信,裸金属服务会尝试通过身份认证服务的服务目录发现该服务合适的端点。如果希望对一个特定服务使用一个不同的端点,则在裸金属服务的配置文件中通过endpoint_override选项进行指定:

+
[neutron]
...
endpoint_override = <NEUTRON_API_ADDRESS>
+

5、配置允许的驱动程序和硬件类型

+

通过设置enabled_hardware_types设置ironic-conductor服务允许使用的硬件类型:

+
[DEFAULT]
enabled_hardware_types = ipmi
+

配置硬件接口:

+
enabled_boot_interfaces = pxe
enabled_deploy_interfaces = direct,iscsi
enabled_inspect_interfaces = inspector
enabled_management_interfaces = ipmitool
enabled_power_interfaces = ipmitool
+

配置接口默认值:

+
[DEFAULT]
default_deploy_interface = direct
default_network_interface = neutron
+

如果启用了任何使用Direct deploy的驱动,必须安装和配置镜像服务的Swift后端。Ceph对象网关(RADOS网关)也支持作为镜像服务的后端。

+
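下面给出 glance 使用 Swift 作为后端存储的一个配置示意(修改 /etc/glance/glance-api.conf 的 [glance_store] 段,其中引用名 ref1 与配置文件路径均为示例假设,具体请参考 glance_store 官方文档):

```
[glance_store]
stores = file,http,swift
default_store = swift
swift_store_create_container_on_put = True
swift_store_config_file = /etc/glance/glance-swift.conf
default_swift_reference = ref1
```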

6、重启ironic-conductor服务

+
sudo systemctl restart openstack-ironic-conductor
+
    +
  1. 配置ironic-inspector服务
  2. +
+

配置文件路径/etc/ironic-inspector/inspector.conf

+

1、创建数据库

+
# mysql -u root -p
+
+MariaDB [(none)]> CREATE DATABASE ironic_inspector CHARACTER SET utf8;
+
+MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic_inspector.* TO 'ironic_inspector'@'localhost' \
+IDENTIFIED BY 'IRONIC_INSPECTOR_DBPASSWORD';
+MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic_inspector.* TO 'ironic_inspector'@'%' \
+IDENTIFIED BY 'IRONIC_INSPECTOR_DBPASSWORD';
+

2、通过connection选项配置数据库的位置,如下所示,替换IRONIC_INSPECTOR_DBPASSWORD为ironic_inspector用户的密码,替换DB_IP为DB服务器所在的IP地址:

+
[database]
+backend = sqlalchemy
+connection = mysql+pymysql://ironic_inspector:IRONIC_INSPECTOR_DBPASSWORD@DB_IP/ironic_inspector
+min_pool_size = 100
+max_pool_size = 500
+pool_timeout = 30
+max_retries = 5
+max_overflow = 200
+db_retry_interval = 2
+db_inc_retry_interval = True
+db_max_retry_interval = 2
+db_max_retries = 5
+

3、配置消息队列通信地址

+
[DEFAULT] 
+transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
+
+

4、设置keystone认证

+
[DEFAULT]
+
+auth_strategy = keystone
+timeout = 900
+rootwrap_config = /etc/ironic-inspector/rootwrap.conf
+logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s
+log_dir = /var/log/ironic-inspector
+state_path = /var/lib/ironic-inspector
+use_stderr = False
+
+[ironic]
+api_endpoint = http://IRONIC_API_HOST_ADDRRESS:6385
+auth_type = password
+auth_url = http://PUBLIC_IDENTITY_IP:5000
+auth_strategy = keystone
+ironic_url = http://IRONIC_API_HOST_ADDRRESS:6385
+os_region = RegionOne
+project_name = service
+project_domain_name = Default
+user_domain_name = Default
+username = IRONIC_SERVICE_USER_NAME
+password = IRONIC_SERVICE_USER_PASSWORD
+
+[keystone_authtoken]
+auth_type = password
+auth_url = http://control:5000
+www_authenticate_uri = http://control:5000
+project_domain_name = default
+user_domain_name = default
+project_name = service
+username = ironic_inspector
+password = IRONICPASSWD
+region_name = RegionOne
+memcache_servers = control:11211
+token_cache_time = 300
+
+[processing]
+add_ports = active
+processing_hooks = $default_processing_hooks,local_link_connection,lldp_basic
+ramdisk_logs_dir = /var/log/ironic-inspector/ramdisk
+always_store_ramdisk_logs = true
+store_data =none
+power_off = false
+
+[pxe_filter]
+driver = iptables
+
+[capabilities]
+boot_mode=True
+

5、配置ironic inspector dnsmasq服务

+
# 配置文件地址:/etc/ironic-inspector/dnsmasq.conf
+port=0
+interface=enp3s0                         #替换为实际监听网络接口
+dhcp-range=172.20.19.100,172.20.19.110   #替换为实际dhcp地址范围
+bind-interfaces
+enable-tftp
+
+dhcp-match=set:efi,option:client-arch,7
+dhcp-match=set:efi,option:client-arch,9
+dhcp-match=aarch64, option:client-arch,11
+dhcp-boot=tag:aarch64,grubaa64.efi
+dhcp-boot=tag:!aarch64,tag:efi,grubx64.efi
+dhcp-boot=tag:!aarch64,tag:!efi,pxelinux.0
+
+tftp-root=/tftpboot                       #替换为实际tftpboot目录
+log-facility=/var/log/dnsmasq.log
+

6、关闭ironic provision网络子网的dhcp

+
openstack subnet set --no-dhcp 72426e89-f552-4dc4-9ac7-c4e131ce7f3c
+
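上面的子网ID需替换为实际 provision 网络的子网ID,可先查询确认(下例中的网络名称 provision 为假设值):

```shell
# 查看网络及其子网,找到 provision 网络对应的子网ID
openstack network list
openstack subnet list --network provision
```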

7、初始化ironic-inspector服务的数据库

+

在控制节点执行:

+
ironic-inspector-dbsync --config-file /etc/ironic-inspector/inspector.conf upgrade
+

8、启动服务

+
systemctl enable --now openstack-ironic-inspector.service
+systemctl enable --now openstack-ironic-inspector-dnsmasq.service
+

6.配置httpd服务

+
    +
  1. +

    创建ironic要使用的httpd的root目录并设置属主属组,目录路径要和/etc/ironic/ironic.conf中[deploy]组中http_root 配置项指定的路径要一致。

    +
    mkdir -p /var/lib/ironic/httproot
    chown ironic.ironic /var/lib/ironic/httproot
    +
  2. +
  3. +

    安装和配置httpd服务

    +
      +
    1. +

      安装httpd服务,已有请忽略

      +
      yum install httpd -y
      +
    2. +
    3. +

      创建/etc/httpd/conf.d/openstack-ironic-httpd.conf文件,内容如下:

      +
      Listen 8080
      +
      +<VirtualHost *:8080>
      +    ServerName ironic.openeuler.com
      +
      +    ErrorLog "/var/log/httpd/openstack-ironic-httpd-error_log"
      +    CustomLog "/var/log/httpd/openstack-ironic-httpd-access_log" "%h %l %u %t \"%r\" %>s %b"
      +
      +    DocumentRoot "/var/lib/ironic/httproot"
      +    <Directory "/var/lib/ironic/httproot">
      +        Options Indexes FollowSymLinks
      +        Require all granted
      +    </Directory>
      +    LogLevel warn
      +    AddDefaultCharset UTF-8
      +    EnableSendfile on
      +</VirtualHost>
      +
      +

      注意监听的端口要和/etc/ironic/ironic.conf里[deploy]选项中http_url配置项中指定的端口一致。

      +
    4. +
    5. +

      重启httpd服务。

      +
      systemctl restart httpd
      +
    6. +
    +
  4. +
+
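与上述 httpd 配置对应的 /etc/ironic/ironic.conf 中 [deploy] 段示意如下(端口 8080 与目录 /var/lib/ironic/httproot 需与 httpd 配置保持一致,IP 地址为示例假设):

```
[deploy]
http_root = /var/lib/ironic/httproot
http_url = http://192.168.0.2:8080
default_boot_option = local
```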

7.deploy ramdisk镜像制作

+

W版的ramdisk镜像支持通过ironic-python-agent服务或disk-image-builder工具制作,也可以使用社区最新的ironic-python-agent-builder,用户也可以自行选择其他工具制作。若使用W版原生工具,则需要安装对应的软件包。

+
yum install openstack-ironic-python-agent
+或者
+yum install diskimage-builder
+

具体的使用方法可以参考官方文档

+

这里介绍下使用ironic-python-agent-builder构建ironic使用的deploy镜像的完整过程。

+
    +
  1. +

    安装 ironic-python-agent-builder

    +
    1. 安装工具:
    +
    +    ```shell
    +    pip install ironic-python-agent-builder
    +    ```
    +
    +2. 修改以下文件中的python解释器:
    +
    +    ```shell
    +    /usr/bin/yum /usr/libexec/urlgrabber-ext-down
    +    ```
    +
    +3. 安装其它必须的工具:
    +
    +    ```shell
    +    yum install git
    +    ```
    +
    +    由于`DIB`依赖`semanage`命令,所以在制作镜像之前确定该命令是否可用:`semanage --help`,如果提示无此命令,安装即可:
    +
    +    ```shell
    +    # 先查询需要安装哪个包
    +    [root@localhost ~]# yum provides /usr/sbin/semanage
    +    已加载插件:fastestmirror
    +    Loading mirror speeds from cached hostfile
    +    * base: mirror.vcu.edu
    +    * extras: mirror.vcu.edu
    +    * updates: mirror.math.princeton.edu
    +    policycoreutils-python-2.5-34.el7.aarch64 : SELinux policy core python utilities
    +    源    :base
    +    匹配来源:
    +    文件名    :/usr/sbin/semanage
    +    # 安装
    +    [root@localhost ~]# yum install policycoreutils-python
    +    ```
    +
  2. +
  3. +

    制作镜像

    +
    如果是`arm`架构,需要添加:
    +```shell
    +export ARCH=aarch64
    +```
    +
    +基本用法:
    +
    +```shell
    +usage: ironic-python-agent-builder [-h] [-r RELEASE] [-o OUTPUT] [-e ELEMENT]
    +                                    [-b BRANCH] [-v] [--extra-args EXTRA_ARGS]
    +                                    distribution
    +
    +positional arguments:
    +    distribution          Distribution to use
    +
    +optional arguments:
    +    -h, --help            show this help message and exit
    +    -r RELEASE, --release RELEASE
    +                        Distribution release to use
    +    -o OUTPUT, --output OUTPUT
    +                        Output base file name
    +    -e ELEMENT, --element ELEMENT
    +                        Additional DIB element to use
    +    -b BRANCH, --branch BRANCH
    +                        If set, override the branch that is used for ironic-
    +                        python-agent and requirements
    +    -v, --verbose         Enable verbose logging in diskimage-builder
    +    --extra-args EXTRA_ARGS
    +                        Extra arguments to pass to diskimage-builder
    +```
    +
    +举例说明:
    +
    +```shell
    +ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky
    +```
    +
  4. +
  5. +

    允许ssh登录

    +
    初始化环境变量,然后制作镜像:
    +
    +```shell
    +export DIB_DEV_USER_USERNAME=ipa \
    +export DIB_DEV_USER_PWDLESS_SUDO=yes \
    +export DIB_DEV_USER_PASSWORD='123'
    +ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky -e selinux-permissive -e devuser
    +```
    +
  6. +
  7. +

    指定代码仓库

    +
    初始化对应的环境变量,然后制作镜像:
    +
    +```shell
    +# 指定仓库地址以及版本
    +DIB_REPOLOCATION_ironic_python_agent=git@172.20.2.149:liuzz/ironic-python-agent.git
    +DIB_REPOREF_ironic_python_agent=origin/develop
    +
    +# 直接从gerrit上clone代码
    +DIB_REPOLOCATION_ironic_python_agent=https://review.opendev.org/openstack/ironic-python-agent
    +DIB_REPOREF_ironic_python_agent=refs/changes/43/701043/1
    +```
    +
    +参考:[source-repositories](https://docs.openstack.org/diskimage-builder/latest/elements/source-repositories/README.html)。
    +
    +指定仓库地址及版本验证成功。
    +
  8. +
  9. +

    注意

    +
    原生的openstack里的pxe配置文件的模版不支持arm64架构,需要自己对原生openstack代码进行修改:
    +
    +在W版中,社区的ironic仍然不支持arm64位的uefi pxe启动,表现为生成的grub.cfg文件(一般位于/tftpboot/下)格式不对而导致pxe启动失败,如下:
    +
    +生成的错误配置文件:
    +
    +![ironic-err](../../img/install/ironic-err.png)
    +
    +如上图所示,arm架构里寻找vmlinux和ramdisk镜像的命令分别是linux和initrd,上图所示的标红命令是x86架构下的uefi pxe启动。
    +
    +需要用户对生成grub.cfg的代码逻辑自行修改。
    +
    +ironic向ipa发送查询命令执行状态请求的tls报错:
    +
    W版的ipa和ironic默认都会开启tls认证的方式向对方发送请求,根据官网的说明进行关闭即可。
    +
    +1. 修改ironic配置文件(/etc/ironic/ironic.conf)下面的配置中添加ipa-insecure=1:
    +
    +```
    +[agent]
    +verify_ca = False
    +
    +[pxe]
    +pxe_append_params = nofb nomodeset vga=normal coreos.autologin ipa-insecure=1
    +```
    +
    +2) ramdisk镜像中添加ipa配置文件/etc/ironic_python_agent/ironic_python_agent.conf并配置tls的配置如下:
    +
    +/etc/ironic_python_agent/ironic_python_agent.conf (需要提前创建/etc/ironic_python_agent目录)
    +
    +```
    +[DEFAULT]
    +enable_auto_tls = False
    +```
    +
    +设置权限:
    +
    +```
    +chown -R ipa.ipa /etc/ironic_python_agent/
    +```
    +
    +3. 修改ipa服务的服务启动文件,添加配置文件选项
    +
    +vim usr/lib/systemd/system/ironic-python-agent.service
    +
    +```
    +[Unit]
    +Description=Ironic Python Agent
    +After=network-online.target
    +
    +[Service]
    +ExecStartPre=/sbin/modprobe vfat
    +ExecStart=/usr/local/bin/ironic-python-agent --config-file /etc/ironic_python_agent/ironic_python_agent.conf
    +Restart=always
    +RestartSec=30s
    +
    +[Install]
    +WantedBy=multi-user.target
    +```
    +
  10. +
+

Kolla 安装

+

Kolla为OpenStack服务提供生产环境可用的容器化部署的功能。openEuler 24.03 LTS中引入了Kolla和Kolla-ansible服务。

+

Kolla的安装十分简单,只需要安装对应的RPM包即可

+
yum install openstack-kolla openstack-kolla-ansible
+

安装完后,就可以使用kolla-ansible, kolla-build, kolla-genpwd, kolla-mergepwd等命令了。

+
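下面给出一个最简的使用流程示意(inventory 文件路径与 globals.yml 配置因安装方式和环境而异,此处仅供参考):

```shell
# 生成各服务随机密码,写入 /etc/kolla/passwords.yml
kolla-genpwd

# 以 all-in-one 部署为例,先做部署前检查,再执行部署
kolla-ansible -i /usr/share/kolla-ansible/ansible/inventory/all-in-one prechecks
kolla-ansible -i /usr/share/kolla-ansible/ansible/inventory/all-in-one deploy
```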

Trove 安装

+

Trove是OpenStack的数据库服务,如果用户使用OpenStack提供的数据库服务则推荐使用该组件。否则,可以不用安装。

+

1.设置数据库

+

数据库服务在数据库中存储信息,创建一个trove用户可以访问的trove数据库,替换TROVE_DBPASSWORD为合适的密码

+
mysql -u root -p
+
+MariaDB [(none)]> CREATE DATABASE trove CHARACTER SET utf8;
+MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'localhost' \
+IDENTIFIED BY 'TROVE_DBPASSWORD';
+MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'%' \
+IDENTIFIED BY 'TROVE_DBPASSWORD';
+

2.创建服务用户认证

+

1、创建Trove服务用户

+

openstack user create --password TROVE_PASSWORD \
+                      --email trove@example.com trove
+openstack role add --project service --user trove admin
+openstack service create --name trove \
+                         --description "Database service" database
+
解释: TROVE_PASSWORD 替换为trove用户的密码

+

2、创建Database服务访问入口

+
openstack endpoint create --region RegionOne database public http://controller:8779/v1.0/%\(tenant_id\)s
+openstack endpoint create --region RegionOne database internal http://controller:8779/v1.0/%\(tenant_id\)s
+openstack endpoint create --region RegionOne database admin http://controller:8779/v1.0/%\(tenant_id\)s
+

3.安装和配置Trove各组件

+

1、安装Trove包

yum install openstack-trove python-troveclient

2、配置trove.conf

vim /etc/trove/trove.conf
+
+[DEFAULT]
+bind_host=TROVE_NODE_IP
+log_dir = /var/log/trove
+network_driver = trove.network.neutron.NeutronDriver
+management_security_groups = <manage security group>
+nova_keypair = trove-mgmt
+default_datastore = mysql
+taskmanager_manager = trove.taskmanager.manager.Manager
+trove_api_workers = 5
+transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
+reboot_time_out = 300
+usage_timeout = 900
+agent_call_high_timeout = 1200
+use_syslog = False
+debug = True
+
+# Set these if using Neutron Networking
+network_driver=trove.network.neutron.NeutronDriver
+network_label_regex=.*
+
+
+transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
+
+[database]
+connection = mysql+pymysql://trove:TROVE_DBPASS@controller/trove
+
+[keystone_authtoken]
+project_domain_name = Default
+project_name = service
+user_domain_name = Default
+password = trove
+username = trove
+auth_url = http://controller:5000/v3/
+auth_type = password
+
+[service_credentials]
+auth_url = http://controller:5000/v3/
+region_name = RegionOne
+project_name = service
+password = trove
+project_domain_name = Default
+user_domain_name = Default
+username = trove
+
+[mariadb]
+tcp_ports = 3306,4444,4567,4568
+
+[mysql]
+tcp_ports = 3306
+
+[postgresql]
+tcp_ports = 5432
解释:

+
    +
  • [DEFAULT]分组中bind_host配置为Trove部署节点的IP
  • nova_compute_url和cinder_url 为Nova和Cinder在Keystone中创建的endpoint
  • nova_proxy_XXX 为一个能访问Nova服务的用户信息,上例中以admin用户为例
  • transport_url 为RabbitMQ连接信息,RABBIT_PASS替换为RabbitMQ的密码
  • [database]分组中的connection 为前面在mysql中为Trove创建的数据库信息
  • Trove的用户信息中TROVE_PASS替换为实际trove用户的密码
  • +
+

3.配置trove-guestagent.conf

vim /etc/trove/trove-guestagent.conf
+
+[DEFAULT]
+log_file = trove-guestagent.log
+log_dir = /var/log/trove/
+ignore_users = os_admin
+control_exchange = trove
+transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
+rpc_backend = rabbit
+command_process_timeout = 60
+use_syslog = False
+debug = True
+
+[service_credentials]
+auth_url = http://controller:5000/v3/
+region_name = RegionOne
+project_name = service
+password = TROVE_PASS
+project_domain_name = Default
+user_domain_name = Default
+username = trove
+
+[mysql]
+docker_image = your-registry/your-repo/mysql
+backup_docker_image = your-registry/your-repo/db-backup-mysql:1.1.0

+

解释: guestagent是trove中一个独立组件,需要预先内置到Trove通过Nova创建的虚拟 + 机镜像中,在创建好数据库实例后,会起guestagent进程,负责通过消息队列(RabbitMQ)向Trove上 + 报心跳,因此需要配置RabbitMQ的用户和密码信息。 + 从Victoria版开始,Trove使用一个统一的镜像来跑不同类型的数据库,数据库服务运行在Guest虚拟机的Docker容器中。

+
    +
  • transport_url 为RabbitMQ连接信息,RABBIT_PASS替换为RabbitMQ的密码
  • Trove的用户信息中TROVE_PASS替换为实际trove用户的密码
  • +
+

4.生成Trove数据库表

su -s /bin/sh -c "trove-manage db_sync" trove

+

5.完成安装配置

+
    +
  1. 配置Trove服务自启动 +
    systemctl enable openstack-trove-api.service \
    +openstack-trove-taskmanager.service \
    +openstack-trove-conductor.service 
  2. +
  3. 启动服务 +
    systemctl start openstack-trove-api.service \
    +openstack-trove-taskmanager.service \
    +openstack-trove-conductor.service
  4. +
+

Swift 安装

+

Swift 提供了弹性可伸缩、高可用的分布式对象存储服务,适合存储大规模非结构化数据。

+
    +
  1. +

    创建服务凭证、API端点。

    +

    创建服务凭证

    +
    #创建swift用户:
    +openstack user create --domain default --password-prompt swift                 
    +#为swift用户添加admin角色:
    +openstack role add --project service --user swift admin                        
    +#创建swift服务实体:
    +openstack service create --name swift --description "OpenStack Object Storage" object-store                                                                   
    +

    创建swift API 端点:

    +
    openstack endpoint create --region RegionOne object-store public http://controller:8080/v1/AUTH_%\(project_id\)s                            
    +openstack endpoint create --region RegionOne object-store internal http://controller:8080/v1/AUTH_%\(project_id\)s                            
    +openstack endpoint create --region RegionOne object-store admin http://controller:8080/v1                                                  
    +
  2. +
  3. +

    安装软件包:

    +
    yum install openstack-swift-proxy python3-swiftclient python3-keystoneclient python3-keystonemiddleware memcached (CTL)
    +
  4. +
  5. +

    配置proxy-server相关配置

    +
  6. +
+

Swift RPM包里已经包含了一个基本可用的proxy-server.conf,只需要手动修改其中的ip和swift password即可。

+
***注意***
+
+**注意替换password为您在身份服务中为swift用户选择的密码**
+
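需要修改的部分主要是 /etc/swift/proxy-server.conf 中 [filter:authtoken] 段,一个参考示意如下(SWIFT_PASS 替换为身份服务中 swift 用户的密码,controller 为控制节点地址):

```
[filter:authtoken]
paste.filter_factory = keystonemiddleware.auth_token:filter_factory
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = swift
password = SWIFT_PASS
```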

4.安装和配置存储节点 (STG)

+
安装支持的程序包:
+```shell
+yum install xfsprogs rsync
+```
+
+将/dev/vdb和/dev/vdc设备格式化为 XFS
+
+```shell
+mkfs.xfs /dev/vdb
+mkfs.xfs /dev/vdc
+```
+
+创建挂载点目录结构:
+
+```shell
+mkdir -p /srv/node/vdb
+mkdir -p /srv/node/vdc
+```
+
+找到新分区的 UUID:
+
+```shell
+blkid
+```
+
+编辑/etc/fstab文件并将以下内容添加到其中:
+
+```shell
+UUID="<UUID-from-output-above>" /srv/node/vdb xfs noatime 0 2
+UUID="<UUID-from-output-above>" /srv/node/vdc xfs noatime 0 2
+```
+
+挂载设备:
+
+```shell
+mount /srv/node/vdb
+mount /srv/node/vdc
+```
+***注意***
+
+**如果用户不需要容灾功能,以上步骤只需要创建一个设备即可,同时可以跳过下面的rsync配置**
+
+(可选)创建或编辑/etc/rsyncd.conf文件以包含以下内容:
+
+```shell
+[DEFAULT]
+uid = swift
+gid = swift
+log file = /var/log/rsyncd.log
+pid file = /var/run/rsyncd.pid
+address = MANAGEMENT_INTERFACE_IP_ADDRESS
+
+[account]
+max connections = 2
+path = /srv/node/
+read only = False
+lock file = /var/lock/account.lock
+
+[container]
+max connections = 2
+path = /srv/node/
+read only = False
+lock file = /var/lock/container.lock
+
+[object]
+max connections = 2
+path = /srv/node/
+read only = False
+lock file = /var/lock/object.lock
+```
+**替换MANAGEMENT_INTERFACE_IP_ADDRESS为存储节点上管理网络的IP地址**
+
+启动rsyncd服务并配置它在系统启动时启动:
+
+```shell
+systemctl enable rsyncd.service
+systemctl start rsyncd.service
+```
+

5.在存储节点安装和配置组件 (STG)

+
安装软件包:
+
+```shell
+yum install openstack-swift-account openstack-swift-container openstack-swift-object
+```
+
+编辑/etc/swift目录的account-server.conf、container-server.conf和object-server.conf文件,替换bind_ip为存储节点上管理网络的IP地址。
+
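以 account-server.conf 为例,需要修改的 [DEFAULT] 段示意如下(container-server.conf、object-server.conf 的做法相同,端口分别为 6201、6200;示例中的其余取值为默认值假设):

```
[DEFAULT]
bind_ip = MANAGEMENT_INTERFACE_IP_ADDRESS
bind_port = 6202
user = swift
swift_dir = /etc/swift
devices = /srv/node
```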
+确保挂载点目录结构的正确所有权:
+
+```shell
+chown -R swift:swift /srv/node
+```
+
+创建recon目录并确保其拥有正确的所有权:
+
+```shell
+mkdir -p /var/cache/swift
+chown -R root:swift /var/cache/swift
+chmod -R 775 /var/cache/swift
+```
+

6.创建账号环 (CTL)

+
切换到/etc/swift目录。
+
+```shell
+cd /etc/swift
+```
+
+创建基础account.builder文件:
+
+```shell
+swift-ring-builder account.builder create 10 1 1
+```
+
+将每个存储节点添加到环中:
+
+```shell
+swift-ring-builder account.builder add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6202  --device DEVICE_NAME --weight DEVICE_WEIGHT
+```
+
+**替换STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS为存储节点上管理网络的IP地址。替换DEVICE_NAME为同一存储节点上的存储设备名称**
+
+***注意 ***
+**对每个存储节点上的每个存储设备重复此命令**
+
+验证环的内容:
+
+```shell
+swift-ring-builder account.builder
+```
+
+重新平衡环:
+
+```shell
+swift-ring-builder account.builder rebalance
+```
+

7.创建容器环 (CTL)

+
切换到`/etc/swift`目录。
+
+创建基础`container.builder`文件:
+
+```shell
+   swift-ring-builder container.builder create 10 1 1
+```
+
+将每个存储节点添加到环中:
+
+```shell
+swift-ring-builder container.builder \
+  add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6201 \
+  --device DEVICE_NAME --weight 100
+
+```
+
+**替换STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS为存储节点上管理网络的IP地址。替换DEVICE_NAME为同一存储节点上的存储设备名称**
+
+***注意***
+**对每个存储节点上的每个存储设备重复此命令**
+
+验证环的内容:
+
+```shell
+swift-ring-builder container.builder
+```
+
+重新平衡环:
+
+```shell
+swift-ring-builder container.builder rebalance
+```
+

8.创建对象环 (CTL)

+
切换到`/etc/swift`目录。
+
+创建基础`object.builder`文件:
+
+   ```shell
+   swift-ring-builder object.builder create 10 1 1
+   ```
+
+将每个存储节点添加到环中
+
+```shell
+ swift-ring-builder object.builder \
+  add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6200 \
+  --device DEVICE_NAME --weight 100
+```
+
+**替换STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS为存储节点上管理网络的IP地址。替换DEVICE_NAME为同一存储节点上的存储设备名称**
+
+***注意 ***
+**对每个存储节点上的每个存储设备重复此命令**
+
+验证环的内容:
+
+```shell
+swift-ring-builder object.builder
+```
+
+重新平衡环:
+
+```shell
+swift-ring-builder object.builder rebalance
+```
+
+分发环配置文件:
+
+将`account.ring.gz`,`container.ring.gz`以及 `object.ring.gz`文件复制到每个存储节点和运行代理服务的任何其他节点上的`/etc/swift`目录。
+

9.完成安装

+

编辑/etc/swift/swift.conf文件

+
[swift-hash]
+swift_hash_path_suffix = test-hash
+swift_hash_path_prefix = test-hash
+
+[storage-policy:0]
+name = Policy-0
+default = yes
+

用唯一值替换 test-hash

+
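可以用随机字符串生成唯一值,例如:

```shell
# 分别为 swift_hash_path_suffix 和 swift_hash_path_prefix 各生成一个随机值
openssl rand -hex 10
openssl rand -hex 10
```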

将swift.conf文件复制到/etc/swift每个存储节点和运行代理服务的任何其他节点上的目录。

+

在所有节点上,确保配置目录的正确所有权:

+
chown -R root:swift /etc/swift
+

在控制器节点和运行代理服务的任何其他节点上,启动对象存储代理服务及其依赖项,并将它们配置为在系统启动时启动:

+
systemctl enable openstack-swift-proxy.service memcached.service
+systemctl start openstack-swift-proxy.service memcached.service
+

在存储节点上,启动对象存储服务并将它们配置为在系统启动时启动:

+
systemctl enable openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service
+
+systemctl start openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service
+
+systemctl enable openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service
+
+systemctl start openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service
+
+systemctl enable openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service
+
+systemctl start openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service
+

Cyborg 安装

+

Cyborg为OpenStack提供加速器设备的支持,包括 GPU, FPGA, ASIC, NP, SoCs, NVMe/NOF SSDs, ODP, DPDK/SPDK等等。

+

1.初始化对应数据库

+
CREATE DATABASE cyborg;
+GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'localhost' IDENTIFIED BY 'CYBORG_DBPASS';
+GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'%' IDENTIFIED BY 'CYBORG_DBPASS';
+

2.创建对应Keystone资源对象

+
$ openstack user create --domain default --password-prompt cyborg
+$ openstack role add --project service --user cyborg admin
+$ openstack service create --name cyborg --description "Acceleration Service" accelerator
+
+$ openstack endpoint create --region RegionOne \
+  accelerator public http://<cyborg-ip>:6666/v1
+$ openstack endpoint create --region RegionOne \
+  accelerator internal http://<cyborg-ip>:6666/v1
+$ openstack endpoint create --region RegionOne \
+  accelerator admin http://<cyborg-ip>:6666/v1
+

3.安装Cyborg

+
yum install openstack-cyborg
+

4.配置Cyborg

+

修改/etc/cyborg/cyborg.conf

+
[DEFAULT]
+transport_url = rabbit://%RABBITMQ_USER%:%RABBITMQ_PASSWORD%@%OPENSTACK_HOST_IP%:5672/
+use_syslog = False
+state_path = /var/lib/cyborg
+debug = True
+
+[database]
+connection = mysql+pymysql://%DATABASE_USER%:%DATABASE_PASSWORD%@%OPENSTACK_HOST_IP%/cyborg
+
+[service_catalog]
+project_domain_id = default
+user_domain_id = default
+project_name = service
+password = PASSWORD
+username = cyborg
+auth_url = http://%OPENSTACK_HOST_IP%/identity
+auth_type = password
+
+[placement]
+project_domain_name = Default
+project_name = service
+user_domain_name = Default
+password = PASSWORD
+username = placement
+auth_url = http://%OPENSTACK_HOST_IP%/identity
+auth_type = password
+
+[keystone_authtoken]
+memcached_servers = localhost:11211
+project_domain_name = Default
+project_name = service
+user_domain_name = Default
+password = PASSWORD
+username = cyborg
+auth_url = http://%OPENSTACK_HOST_IP%/identity
+auth_type = password
+

自行修改对应的用户名、密码、IP等信息

+

5.同步数据库表格

+
cyborg-dbsync --config-file /etc/cyborg/cyborg.conf upgrade
+

6.启动Cyborg服务

+
systemctl enable openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent
+systemctl start openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent
+

Aodh 安装

+

1.创建数据库

+
CREATE DATABASE aodh;
+
+GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'localhost' IDENTIFIED BY 'AODH_DBPASS';
+
+GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'%' IDENTIFIED BY 'AODH_DBPASS';
+

2.创建对应Keystone资源对象

+
openstack user create --domain default --password-prompt aodh
+
+openstack role add --project service --user aodh admin
+
+openstack service create --name aodh --description "Telemetry" alarming
+
+openstack endpoint create --region RegionOne alarming public http://controller:8042
+
+openstack endpoint create --region RegionOne alarming internal http://controller:8042
+
+openstack endpoint create --region RegionOne alarming admin http://controller:8042
+

3.安装Aodh

+
yum install openstack-aodh-api openstack-aodh-evaluator openstack-aodh-notifier openstack-aodh-listener openstack-aodh-expirer python3-aodhclient
+

注意

+

aodh依赖的软件包python3-pyparsing在openEuler的OS仓中版本不适配,需要覆盖安装OpenStack对应的版本。可以使用 yum list | grep pyparsing | grep OpenStack | awk '{print $2}' 获取对应的版本号VERSION,然后执行 yum install -y python3-pyparsing-VERSION 覆盖安装适配的pyparsing。

+
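对应的操作示例如下(VERSION 的获取方式即上文描述的命令,示例写法仅供参考):

```shell
# 查询 OpenStack 适配的 pyparsing 版本号
VERSION=$(yum list | grep pyparsing | grep OpenStack | awk '{print $2}')

# 覆盖安装对应版本
yum install -y python3-pyparsing-$VERSION
```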

4.修改配置文件

+
[database]
+connection = mysql+pymysql://aodh:AODH_DBPASS@controller/aodh
+
+[DEFAULT]
+transport_url = rabbit://openstack:RABBIT_PASS@controller
+auth_strategy = keystone
+
+[keystone_authtoken]
+www_authenticate_uri = http://controller:5000
+auth_url = http://controller:5000
+memcached_servers = controller:11211
+auth_type = password
+project_domain_id = default
+user_domain_id = default
+project_name = service
+username = aodh
+password = AODH_PASS
+
+[service_credentials]
+auth_type = password
+auth_url = http://controller:5000/v3
+project_domain_id = default
+user_domain_id = default
+project_name = service
+username = aodh
+password = AODH_PASS
+interface = internalURL
+region_name = RegionOne
+

5.初始化数据库

+
aodh-dbsync
+

6.启动Aodh服务

+
systemctl enable openstack-aodh-api.service openstack-aodh-evaluator.service openstack-aodh-notifier.service openstack-aodh-listener.service
+
+systemctl start openstack-aodh-api.service openstack-aodh-evaluator.service openstack-aodh-notifier.service openstack-aodh-listener.service
+

Gnocchi 安装

+

1.创建数据库

+
CREATE DATABASE gnocchi;
+
+GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'localhost' IDENTIFIED BY 'GNOCCHI_DBPASS';
+
+GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'%' IDENTIFIED BY 'GNOCCHI_DBPASS';
+

2.创建对应Keystone资源对象

+
openstack user create --domain default --password-prompt gnocchi
+
+openstack role add --project service --user gnocchi admin
+
+openstack service create --name gnocchi --description "Metric Service" metric
+
+openstack endpoint create --region RegionOne metric public http://controller:8041
+
+openstack endpoint create --region RegionOne metric internal http://controller:8041
+
+openstack endpoint create --region RegionOne metric admin http://controller:8041
+

3.安装Gnocchi

+
yum install openstack-gnocchi-api openstack-gnocchi-metricd python3-gnocchiclient
+

4.修改配置文件/etc/gnocchi/gnocchi.conf

+
[api]
+auth_mode = keystone
+port = 8041
+uwsgi_mode = http-socket
+
+[keystone_authtoken]
+auth_type = password
+auth_url = http://controller:5000/v3
+project_domain_name = Default
+user_domain_name = Default
+project_name = service
+username = gnocchi
+password = GNOCCHI_PASS
+interface = internalURL
+region_name = RegionOne
+
+[indexer]
+url = mysql+pymysql://gnocchi:GNOCCHI_DBPASS@controller/gnocchi
+
+[storage]
+# coordination_url is not required but specifying one will improve
+# performance with better workload division across workers.
+coordination_url = redis://controller:6379
+file_basepath = /var/lib/gnocchi
+driver = file
+

5.初始化数据库

+
gnocchi-upgrade
+

6.启动Gnocchi服务

+
systemctl enable openstack-gnocchi-api.service openstack-gnocchi-metricd.service
+
+systemctl start openstack-gnocchi-api.service openstack-gnocchi-metricd.service
+

Ceilometer 安装

+

1.创建对应Keystone资源对象

+
openstack user create --domain default --password-prompt ceilometer
+
+openstack role add --project service --user ceilometer admin
+
+openstack service create --name ceilometer --description "Telemetry" metering
+

2.安装Ceilometer

+
yum install openstack-ceilometer-notification openstack-ceilometer-central
+

3.修改配置文件/etc/ceilometer/pipeline.yaml

+
publishers:
+    # set address of Gnocchi
+    # + filter out Gnocchi-related activity meters (Swift driver)
+    # + set default archive policy
+    - gnocchi://?filter_project=service&archive_policy=low
+

4.修改配置文件/etc/ceilometer/ceilometer.conf

+
[DEFAULT]
+transport_url = rabbit://openstack:RABBIT_PASS@controller
+
+[service_credentials]
+auth_type = password
+auth_url = http://controller:5000/v3
+project_domain_id = default
+user_domain_id = default
+project_name = service
+username = ceilometer
+password = CEILOMETER_PASS
+interface = internalURL
+region_name = RegionOne
+

5.初始化数据库

+
ceilometer-upgrade
+

6.启动Ceilometer服务

+
systemctl enable openstack-ceilometer-notification.service openstack-ceilometer-central.service
+
+systemctl start openstack-ceilometer-notification.service openstack-ceilometer-central.service
+

Heat 安装

+

1.创建heat数据库,并授予heat数据库正确的访问权限,替换HEAT_DBPASS为合适的密码

+
CREATE DATABASE heat;
+GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'localhost' IDENTIFIED BY 'HEAT_DBPASS';
+GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'%' IDENTIFIED BY 'HEAT_DBPASS';
+

2.创建服务凭证,创建heat用户,并为其增加admin角色

+
openstack user create --domain default --password-prompt heat
+openstack role add --project service --user heat admin
+

3.创建heat和heat-cfn服务及其对应的API端点

+
openstack service create --name heat --description "Orchestration" orchestration
+openstack service create --name heat-cfn --description "Orchestration"  cloudformation
+openstack endpoint create --region RegionOne orchestration public http://controller:8004/v1/%\(tenant_id\)s
+openstack endpoint create --region RegionOne orchestration internal http://controller:8004/v1/%\(tenant_id\)s
+openstack endpoint create --region RegionOne orchestration admin http://controller:8004/v1/%\(tenant_id\)s
+openstack endpoint create --region RegionOne cloudformation public http://controller:8000/v1
+openstack endpoint create --region RegionOne cloudformation internal http://controller:8000/v1
+openstack endpoint create --region RegionOne cloudformation admin http://controller:8000/v1
+

4.创建stack管理所需的额外信息,包括heat domain及其admin用户heat_domain_admin,以及heat_stack_owner、heat_stack_user两个角色

+
openstack user create --domain heat --password-prompt heat_domain_admin
+openstack role add --domain heat --user-domain heat --user heat_domain_admin admin
+openstack role create heat_stack_owner
+openstack role create heat_stack_user
+

5.安装软件包

+
yum install openstack-heat-api openstack-heat-api-cfn openstack-heat-engine
+

6.修改配置文件/etc/heat/heat.conf

+
[DEFAULT]
+transport_url = rabbit://openstack:RABBIT_PASS@controller
+heat_metadata_server_url = http://controller:8000
+heat_waitcondition_server_url = http://controller:8000/v1/waitcondition
+stack_domain_admin = heat_domain_admin
+stack_domain_admin_password = HEAT_DOMAIN_PASS
+stack_user_domain_name = heat
+
+[database]
+connection = mysql+pymysql://heat:HEAT_DBPASS@controller/heat
+
+[keystone_authtoken]
+www_authenticate_uri = http://controller:5000
+auth_url = http://controller:5000
+memcached_servers = controller:11211
+auth_type = password
+project_domain_name = default
+user_domain_name = default
+project_name = service
+username = heat
+password = HEAT_PASS
+
+[trustee]
+auth_type = password
+auth_url = http://controller:5000
+username = heat
+password = HEAT_PASS
+user_domain_name = default
+
+[clients_keystone]
+auth_uri = http://controller:5000
+

7.初始化heat数据库表

+
su -s /bin/sh -c "heat-manage db_sync" heat
+

8.启动服务

+
systemctl enable openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service
+systemctl start openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service
+

基于OpenStack SIG开发工具oos快速部署

+

oos(openEuler OpenStack SIG)是OpenStack SIG提供的命令行工具。其中oos env系列命令提供了一键部署OpenStack (all in one或三节点cluster)的ansible脚本,用户可以使用该脚本快速部署一套基于 openEuler RPM 的 OpenStack 环境。oos工具支持对接云provider(目前仅支持华为云provider)和主机纳管两种方式来部署 OpenStack 环境,下面以对接华为云部署一套all in one的OpenStack环境为例说明oos工具的使用方法。

+
    +
  1. +

    安装oos工具

    +

    oos工具在不断演进,兼容性、可用性不能时刻保证,建议使用已验证的版本,这里选择1.3.1。

    pip install openstack-sig-tool==1.3.1

    +
  2. +
  3. +

    配置对接华为云provider的信息

    +

    打开/usr/local/etc/oos/oos.conf文件,修改配置为您拥有的华为云资源信息:

    +
    [huaweicloud]
    +ak = 
    +sk = 
    +region = ap-southeast-3
    +root_volume_size = 100
    +data_volume_size = 100
    +security_group_name = oos
    +image_format = openEuler-%%(release)s-%%(arch)s
    +vpc_name = oos_vpc
    +subnet1_name = oos_subnet1
    +subnet2_name = oos_subnet2
    +
  4. +
  5. +

    配置 OpenStack 环境信息

    +

    打开/usr/local/etc/oos/oos.conf文件,根据当前机器环境和需求修改配置。内容如下:

    +
    [environment]
    +mysql_root_password = root
    +mysql_project_password = root
    +rabbitmq_password = root
    +project_identity_password = root
    +enabled_service = keystone,neutron,cinder,placement,nova,glance,horizon,aodh,ceilometer,cyborg,gnocchi,kolla,heat,swift,trove,tempest
    +neutron_provider_interface_name = br-ex
    +default_ext_subnet_range = 10.100.100.0/24
    +default_ext_subnet_gateway = 10.100.100.1
    +neutron_dataplane_interface_name = eth1
    +cinder_block_device = vdb
    +swift_storage_devices = vdc
    +swift_hash_path_suffix = ash
    +swift_hash_path_prefix = has
    +glance_api_workers = 2
    +cinder_api_workers = 2
    +nova_api_workers = 2
    +nova_metadata_api_workers = 2
    +nova_conductor_workers = 2
    +nova_scheduler_workers = 2
    +neutron_api_workers = 2
    +horizon_allowed_host = *
    +kolla_openeuler_plugin = false
    +

    关键配置

    | 配置项 | 解释 |
    |:---|:---|
    | enabled_service | 安装服务列表,根据用户需求自行删减 |
    | neutron_provider_interface_name | neutron L3网桥名称 |
    | default_ext_subnet_range | neutron私网IP段 |
    | default_ext_subnet_gateway | neutron私网gateway |
    | neutron_dataplane_interface_name | neutron使用的网卡,推荐使用一张新的网卡,以免和现有网卡冲突,防止all in one主机断连的情况 |
    | cinder_block_device | cinder使用的卷设备名 |
    | swift_storage_devices | swift使用的卷设备名 |
    | kolla_openeuler_plugin | 是否启用kolla plugin。设置为True,kolla将支持部署openEuler容器 |
    +
  6. +
  7. +

    华为云上面创建一台openEuler 24.03-LTS的x86_64虚拟机,用于部署all in one 的 OpenStack

    +
    # sshpass在`oos env create`过程中被使用,用于配置对目标虚拟机的免密访问
    +dnf install sshpass
    +oos env create -r 24.03-lts -f small -a x86 -n test-oos all_in_one
    +

    具体的参数可以使用oos env create --help命令查看

    +
  8. +
  9. +

    部署OpenStack all in one 环境

    +

    oos env setup test-oos -r wallaby
    +具体的参数可以使用oos env setup --help命令查看

    +
  10. +
  11. +

    初始化tempest环境

    +

    如果用户想使用该环境运行tempest测试的话,可以执行命令oos env init,会自动把tempest需要的OpenStack资源自动创建好

    +
    oos env init test-oos
    +

    命令执行成功后,在用户的根目录下会生成mytest目录,进入其中就可以执行tempest run命令了。

    +
  12. +
+

如果是以主机纳管的方式部署 OpenStack 环境,总体逻辑与上文对接华为云时一致,1、3、5、6步操作不变,去除第2步对华为云provider信息的配置,第4步由在华为云上创建虚拟机改为纳管主机操作。

+
# sshpass在`oos env create`过程中被使用,用于配置对目标主机的免密访问
+dnf install sshpass
+oos env manage -r 24.03-lts -i TARGET_MACHINE_IP -p TARGET_MACHINE_PASSWD -n test-oos
+

替换TARGET_MACHINE_IP为目标机ip、TARGET_MACHINE_PASSWD为目标机密码。具体的参数可以使用oos env manage --help命令查看。

OpenStack Antelope 部署指南

+ +

本文档是 openEuler OpenStack SIG 编写的基于 openEuler 25.03 的 OpenStack 部署指南,内容由 SIG 贡献者提供。在阅读过程中,如果您有任何疑问或者发现任何问题,请联系SIG维护人员,或者直接提交issue。

+

约定

+

本章节描述文档中的一些通用约定。

| 名称 | 定义 |
|:---|:---|
| RABBIT_PASS | rabbitmq的密码,由用户设置,在OpenStack各个服务配置中使用 |
| CINDER_PASS | cinder服务keystone用户的密码,在cinder配置中使用 |
| CINDER_DBPASS | cinder服务数据库密码,在cinder配置中使用 |
| KEYSTONE_DBPASS | keystone服务数据库密码,在keystone配置中使用 |
| GLANCE_PASS | glance服务keystone用户的密码,在glance配置中使用 |
| GLANCE_DBPASS | glance服务数据库密码,在glance配置中使用 |
| HEAT_PASS | 在keystone注册的heat用户密码,在heat配置中使用 |
| HEAT_DBPASS | heat服务数据库密码,在heat配置中使用 |
| CYBORG_PASS | 在keystone注册的cyborg用户密码,在cyborg配置中使用 |
| CYBORG_DBPASS | cyborg服务数据库密码,在cyborg配置中使用 |
| NEUTRON_PASS | 在keystone注册的neutron用户密码,在neutron配置中使用 |
| NEUTRON_DBPASS | neutron服务数据库密码,在neutron配置中使用 |
| PROVIDER_INTERFACE_NAME | 物理网络接口的名称,在neutron配置中使用 |
| OVERLAY_INTERFACE_IP_ADDRESS | Controller控制节点的管理ip地址,在neutron配置中使用 |
| METADATA_SECRET | metadata proxy的secret密码,在nova和neutron配置中使用 |
| PLACEMENT_DBPASS | placement服务数据库密码,在placement配置中使用 |
| PLACEMENT_PASS | 在keystone注册的placement用户密码,在placement配置中使用 |
| NOVA_DBPASS | nova服务数据库密码,在nova配置中使用 |
| NOVA_PASS | 在keystone注册的nova用户密码,在nova,cyborg,neutron等配置中使用 |
| IRONIC_DBPASS | ironic服务数据库密码,在ironic配置中使用 |
| IRONIC_PASS | 在keystone注册的ironic用户密码,在ironic配置中使用 |
| IRONIC_INSPECTOR_DBPASS | ironic-inspector服务数据库密码,在ironic-inspector配置中使用 |
| IRONIC_INSPECTOR_PASS | 在keystone注册的ironic-inspector用户密码,在ironic-inspector配置中使用 |
+

OpenStack SIG 提供了多种基于 openEuler 部署 OpenStack 的方法,以满足不同的用户场景,请按需选择。

+

基于RPM部署

+

环境准备

+

本文档基于OpenStack经典的三节点环境进行部署,三个节点分别是控制节点(Controller)、计算节点(Compute)、存储节点(Storage),其中存储节点一般只部署存储服务,在资源有限的情况下,可以不单独部署该节点,把存储节点上的服务部署到计算节点即可。

+

首先准备三个openEuler 25.03环境,根据您的环境,下载对应的镜像并安装即可:[ISO镜像](https://repo.openeuler.org/openEuler-24.03-LTS-SP1/ISO/)、[qcow2镜像](https://repo.openeuler.org/openEuler-24.03-LTS-SP1/virtual_machine_img/)。

+

下面的安装按照如下拓扑进行:

+
controller:192.168.0.2
+compute:   192.168.0.3
+storage:   192.168.0.4
+

如果您的环境IP不同,请按照您的环境IP修改相应的配置文件。

+

本文档的三节点服务拓扑如下图所示(只包含Keystone、Glance、Nova、Cinder、Neutron这几个核心服务,其他服务请参考具体部署章节):

+

(三节点服务拓扑示意图:topology1、topology2、topology3)

+

在正式部署之前,需要对每个节点做如下配置和检查:

+
    +
  1. +

    配置 openEuler 25.03 官方 yum 源,需要启用 EPOL 软件仓以支持 OpenStack

    +
    yum update
    +yum install openstack-release-antelope
    +yum clean all && yum makecache
    +

    注意:如果你的环境的YUM源没有启用EPOL,需要同时配置EPOL,确保EPOL已配置,如下所示。

    +
    vi /etc/yum.repos.d/openEuler.repo
    +
    +[EPOL]
    +name=EPOL
    +baseurl=http://repo.openeuler.org/openEuler-24.03-LTS-SP1/EPOL/main/$basearch/
    +enabled=1
    +gpgcheck=1
    +gpgkey=http://repo.openeuler.org/openEuler-24.03-LTS-SP1/OS/$basearch/RPM-GPG-KEY-openEuler
    +
  2. +
  3. +

    修改主机名以及映射

    +

    每个节点分别修改主机名,以controller为例:

    +
    hostnamectl set-hostname controller
    +
    +vi /etc/hostname
    +内容修改为controller
    +

    然后修改每个节点的/etc/hosts文件,新增如下内容:

    +
    192.168.0.2   controller
    +192.168.0.3   compute
    +192.168.0.4   storage
    +
  4. +
+

时钟同步

+

集群环境时刻要求每个节点的时间一致,一般由时钟同步软件保证。本文使用chrony软件。步骤如下:

+

Controller节点

+
    +
  1. +

    安装服务

    +
    dnf install chrony
    +
  2. +
  3. +

    修改/etc/chrony.conf配置文件,新增一行

    +
    # 表示允许哪些IP从本节点同步时钟
    +allow 192.168.0.0/24
    +
  4. +
  5. +

    重启服务

    +
    systemctl restart chronyd
    +
  6. +
+

其他节点

+
    +
  1. +

    安装服务

    +
    dnf install chrony
    +
  2. +
  3. +

    修改/etc/chrony.conf配置文件,新增一行

    +
    # NTP_SERVER是controller IP,表示从这个机器获取时间,这里我们填192.168.0.2,或者在`/etc/hosts`里配置好的controller名字即可。
    +server NTP_SERVER iburst
    +

    同时,要把pool pool.ntp.org iburst这一行注释掉,表示不从公网同步时钟。

    +
  4. +
  5. +

    重启服务

    +
    systemctl restart chronyd
    +
  6. +
+

配置完成后,检查一下结果,在其他非controller节点执行chronyc sources,返回结果类似如下内容,表示成功从controller同步时钟。

+
MS Name/IP address         Stratum Poll Reach LastRx Last sample
+===============================================================================
+^* 192.168.0.2                 4   6     7     0  -1406ns[  +55us] +/-   16ms
+

安装数据库

+

数据库安装在控制节点,这里推荐使用mariadb。

+
    +
  1. +

    安装软件包

    +
    dnf install mysql-config mariadb mariadb-server python3-PyMySQL
    +
  2. +
  3. +

    新增配置文件/etc/my.cnf.d/openstack.cnf,内容如下

    +
    [mysqld]
    +bind-address = 192.168.0.2
    +default-storage-engine = innodb
    +innodb_file_per_table = on
    +max_connections = 4096
    +collation-server = utf8_general_ci
    +character-set-server = utf8
    +
  4. +
  5. +

    启动服务器

    +
    systemctl start mariadb
    +
  6. +
  7. +

    初始化数据库,根据提示进行即可

    +
    mysql_secure_installation
    +

    示例如下:

    +
    NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MariaDB
    +    SERVERS IN PRODUCTION USE!  PLEASE READ EACH STEP CAREFULLY!
    +
    +In order to log into MariaDB to secure it, we'll need the current
    +password for the root user. If you've just installed MariaDB, and
    +haven't set the root password yet, you should just press enter here.
    +
    +Enter current password for root (enter for none): 
    +
    +#这里输入密码,由于我们是初始化DB,直接回车就行
    +
    +OK, successfully used password, moving on...
    +
    +Setting the root password or using the unix_socket ensures that nobody
    +can log into the MariaDB root user without the proper authorisation.
    +
    +You already have your root account protected, so you can safely answer 'n'.
    +
    +# 这里根据提示输入N
    +
    +Switch to unix_socket authentication [Y/n] N
    +
    +Enabled successfully!
    +Reloading privilege tables..
    +... Success!
    +
    +
    +You already have your root account protected, so you can safely answer 'n'.
    +
    +# 输入Y,修改密码
    +
    +Change the root password? [Y/n] Y
    +
    +New password: 
    +Re-enter new password: 
    +Password updated successfully!
    +Reloading privilege tables..
    +... Success!
    +
    +
    +By default, a MariaDB installation has an anonymous user, allowing anyone
    +to log into MariaDB without having to have a user account created for
    +them.  This is intended only for testing, and to make the installation
    +go a bit smoother.  You should remove them before moving into a
    +production environment.
    +
    +# 输入Y,删除匿名用户
    +
    +Remove anonymous users? [Y/n] Y
    +... Success!
    +
    +Normally, root should only be allowed to connect from 'localhost'.  This
    +ensures that someone cannot guess at the root password from the network.
    +
    +# 输入Y,关闭root远程登录权限
    +
    +Disallow root login remotely? [Y/n] Y
    +... Success!
    +
    +By default, MariaDB comes with a database named 'test' that anyone can
    +access.  This is also intended only for testing, and should be removed
    +before moving into a production environment.
    +
    +# 输入Y,删除test数据库
    +
    +Remove test database and access to it? [Y/n] Y
    +- Dropping test database...
    +... Success!
    +- Removing privileges on test database...
    +... Success!
    +
    +Reloading the privilege tables will ensure that all changes made so far
    +will take effect immediately.
    +
    +# 输入Y,重载配置
    +
    +Reload privilege tables now? [Y/n] Y
    +... Success!
    +
    +Cleaning up...
    +
    +All done!  If you've completed all of the above steps, your MariaDB
    +installation should now be secure.
    +
  8. +
  9. +

    验证,根据第四步设置的密码,检查是否能登录mariadb

    +
    mysql -uroot -p
    +
  10. +
+

安装消息队列

+

消息队列安装在控制节点,这里推荐使用rabbitmq。

+
    +
  1. +

    安装软件包

    +
    dnf install rabbitmq-server
    +
  2. +
  3. +

    启动服务

    +
    systemctl start rabbitmq-server
    +
  4. +
  5. +

    配置openstack用户,RABBIT_PASS是openstack服务登录消息队里的密码,需要和后面各个服务的配置保持一致。

    +
    rabbitmqctl add_user openstack RABBIT_PASS
    +rabbitmqctl set_permissions openstack ".*" ".*" ".*"
    +
  6. +
+

安装缓存服务

+

缓存服务安装在控制节点,这里推荐使用Memcached。

+
    +
  1. +

    安装软件包

    +
    dnf install memcached python3-memcached
    +
  2. +
  3. +

    修改配置文件/etc/sysconfig/memcached

    +
    OPTIONS="-l 127.0.0.1,::1,controller"
    +
  4. +
  5. +

    启动服务

    +
    systemctl start memcached
    +
  6. +
+

部署服务

+

Keystone

+

Keystone是OpenStack提供的鉴权服务,是整个OpenStack的入口,提供了租户隔离、用户认证、服务发现等功能,必须安装。

+
    +
  1. +

    创建 keystone 数据库并授权

    +
    mysql -u root -p
    +
    +MariaDB [(none)]> CREATE DATABASE keystone;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
    +IDENTIFIED BY 'KEYSTONE_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
    +IDENTIFIED BY 'KEYSTONE_DBPASS';
    +MariaDB [(none)]> exit
    +

    注意

    +

    替换 KEYSTONE_DBPASS,为 Keystone 数据库设置密码

    +
  2. +
  3. +

    安装软件包

    +
    dnf install openstack-keystone httpd mod_wsgi
    +
  4. +
  5. +

    配置keystone相关配置

    +
    vim /etc/keystone/keystone.conf
    +
    +[database]
    +connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone
    +
    +[token]
    +provider = fernet
    +

    解释

    +

    [database]部分,配置数据库入口

    +

    [token]部分,配置token provider

    +
  6. +
  7. +

    同步数据库

    +
    su -s /bin/sh -c "keystone-manage db_sync" keystone
    +
  8. +
  9. +

    初始化Fernet密钥仓库

    +
    keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
    +keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
    +
  10. +
  11. +

    启动服务

    +
    keystone-manage bootstrap --bootstrap-password ADMIN_PASS \
    +--bootstrap-admin-url http://controller:5000/v3/ \
    +--bootstrap-internal-url http://controller:5000/v3/ \
    +--bootstrap-public-url http://controller:5000/v3/ \
    +--bootstrap-region-id RegionOne
    +

    注意

    +

    替换 ADMIN_PASS,为 admin 用户设置密码

    +
  12. +
  13. +

    配置Apache HTTP server

    +
      +
    • 打开httpd.conf并配置
    • +
    +
    #需要修改的配置文件路径
    +vim /etc/httpd/conf/httpd.conf
    +
    +#修改以下项,如果没有则新添加
    +ServerName controller
    +
      +
    • 创建软链接
    • +
    +
    ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
    +

    解释

    +

    配置 ServerName 项引用控制节点

    +

    注意 +如果 ServerName 项不存在则需要创建

    +
  14. +
  15. +

    启动Apache HTTP服务

    +
    systemctl enable httpd.service
    +systemctl start httpd.service
    +
  16. +
  17. +

    创建环境变量配置

    +
    cat << EOF >> ~/.admin-openrc
    +export OS_PROJECT_DOMAIN_NAME=Default
    +export OS_USER_DOMAIN_NAME=Default
    +export OS_PROJECT_NAME=admin
    +export OS_USERNAME=admin
    +export OS_PASSWORD=ADMIN_PASS
    +export OS_AUTH_URL=http://controller:5000/v3
    +export OS_IDENTITY_API_VERSION=3
    +export OS_IMAGE_API_VERSION=2
    +EOF
    +

    注意

    +

    替换 ADMIN_PASS 为 admin 用户的密码

    +
  18. +
  19. +

    依次创建domain, projects, users, roles

    +
      +
    • 需要先安装python3-openstackclient
    • +
    +
    dnf install python3-openstackclient
    +
      +
    • 导入环境变量
    • +
    +
    source ~/.admin-openrc
    +
      +
    • 创建project service,其中 domain default 在 keystone-manage bootstrap 时已创建
    • +
    +
    openstack domain create --description "An Example Domain" example
    +
    openstack project create --domain default --description "Service Project" service
    +
      +
    • 创建(non-admin)project myproject,user myuser 和 role myrole,为 myprojectmyuser 添加角色myrole
    • +
    +
    openstack project create --domain default --description "Demo Project" myproject
    +openstack user create --domain default --password-prompt myuser
    +openstack role create myrole
    +openstack role add --project myproject --user myuser myrole
    +
  20. +
  21. +

    验证

    +
      +
    • 取消临时环境变量OS_AUTH_URL和OS_PASSWORD:
    • +
    +
    source ~/.admin-openrc
    +unset OS_AUTH_URL OS_PASSWORD
    +
      +
    • 为admin用户请求token:
    • +
    +
    openstack --os-auth-url http://controller:5000/v3 \
    +--os-project-domain-name Default --os-user-domain-name Default \
    +--os-project-name admin --os-username admin token issue
    +
      +
    • 为myuser用户请求token:
    • +
    +
    openstack --os-auth-url http://controller:5000/v3 \
    +--os-project-domain-name Default --os-user-domain-name Default \
    +--os-project-name myproject --os-username myuser token issue
    +
  22. +
+

Glance

+

Glance是OpenStack提供的镜像服务,负责虚拟机、裸机镜像的上传与下载,必须安装。

+

Controller节点

+
    +
  1. +

    创建 glance 数据库并授权

    +
    mysql -u root -p
    +
    +MariaDB [(none)]> CREATE DATABASE glance;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
    +IDENTIFIED BY 'GLANCE_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
    +IDENTIFIED BY 'GLANCE_DBPASS';
    +MariaDB [(none)]> exit
    +

    注意:

    +

    替换 GLANCE_DBPASS,为 glance 数据库设置密码

    +
  2. +
  3. +

    初始化 glance 资源对象

    +
      +
    • 导入环境变量
    • +
    +
    source ~/.admin-openrc
    +
      +
    • 创建用户时,命令行会提示输入密码,请输入自定义的密码,下文涉及到GLANCE_PASS的地方替换成该密码即可。
    • +
    +
    openstack user create --domain default --password-prompt glance
    +User Password:
    +Repeat User Password:
    +
      +
    • 添加glance用户到service project并指定admin角色:
    • +
    +
    openstack role add --project service --user glance admin
    +
      +
    • 创建glance服务实体:
    • +
    +
    openstack service create --name glance --description "OpenStack Image" image
    +
      +
    • 创建glance API服务:
    • +
    +
    openstack endpoint create --region RegionOne image public http://controller:9292
    +openstack endpoint create --region RegionOne image internal http://controller:9292
    +openstack endpoint create --region RegionOne image admin http://controller:9292
    +
  4. +
  5. +

    安装软件包

    +
    dnf install openstack-glance
    +
  6. +
  7. +

    修改 glance 配置文件

    +
    vim /etc/glance/glance-api.conf
    +
    +[database]
    +connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
    +
    +[keystone_authtoken]
    +www_authenticate_uri  = http://controller:5000
    +auth_url = http://controller:5000
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +project_name = service
    +username = glance
    +password = GLANCE_PASS
    +
    +[paste_deploy]
    +flavor = keystone
    +
    +[glance_store]
    +stores = file,http
    +default_store = file
    +filesystem_store_datadir = /var/lib/glance/images/
    +

    解释:

    +

    [database]部分,配置数据库入口

    +

    [keystone_authtoken] [paste_deploy]部分,配置身份认证服务入口

    +

    [glance_store]部分,配置本地文件系统存储和镜像文件的位置

    +
  8. +
  9. +

    同步数据库

    +
    su -s /bin/sh -c "glance-manage db_sync" glance
    +
  10. +
  11. +

    启动服务:

    +
    systemctl enable openstack-glance-api.service
    +systemctl start openstack-glance-api.service
    +
  12. +
  13. +

    验证

    +
      +
    • 导入环境变量
    • +
    +
    source ~/.admin-openrc
    +
      +
    • 下载镜像
    • +
    +
    x86镜像下载:
    +wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
    +
    +arm镜像下载:
    +wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-aarch64-disk.img
    +

    注意

    +

    如果您使用的环境是鲲鹏架构,请下载aarch64版本的镜像;本文已对cirros-0.5.2-aarch64-disk.img镜像进行过测试,可将上述下载地址中的版本号相应调整。

    +
      +
    • 向Image服务上传镜像:
    • +
    +
    openstack image create --disk-format qcow2 --container-format bare \
    +                    --file cirros-0.4.0-x86_64-disk.img --public cirros
    +
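
    如果您使用的是鲲鹏(aarch64)环境,可参考如下方式上传aarch64镜像(仅为示意:镜像文件名请与实际下载的文件保持一致;hw_firmware_type属性用于指示使用UEFI固件启动,可按需设置):

    openstack image create --disk-format qcow2 --container-format bare \
                        --file cirros-0.4.0-aarch64-disk.img \
                        --property hw_firmware_type=uefi --public cirros-arm
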
      +
    • 确认镜像上传并验证属性:
    • +
    +
    openstack image list
    +
  14. +
+

Placement

+

Placement是OpenStack提供的资源调度组件,一般不面向用户,由Nova等组件调用,安装在控制节点。

+

安装、配置Placement服务前,需要先创建相应的数据库、服务凭证和API endpoints。

+
    +
  1. +

    创建数据库

    +
      +
    • 使用root用户访问数据库服务:
    • +
    +
    mysql -u root -p
    +
      +
    • 创建placement数据库:
    • +
    +
    MariaDB [(none)]> CREATE DATABASE placement;
    +
      +
    • 授权数据库访问:
    • +
    +
    MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' \
    +    IDENTIFIED BY 'PLACEMENT_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' \
    +    IDENTIFIED BY 'PLACEMENT_DBPASS';
    +

    替换PLACEMENT_DBPASS为placement数据库访问密码。

    +
      +
    • 退出数据库访问客户端:
    • +
    +
    exit
    +
  2. +
  3. +

    配置用户和Endpoints

    +
      +
    • source admin凭证,以获取admin命令行权限:
    • +
    +
    source ~/.admin-openrc
    +
      +
    • 创建placement用户并设置用户密码:
    • +
    +
    openstack user create --domain default --password-prompt placement
    +
    +User Password:
    +Repeat User Password:
    +
      +
    • 添加placement用户到service project并指定admin角色:
    • +
    +
    openstack role add --project service --user placement admin
    +
      +
    • 创建placement服务实体:
    • +
    +
    openstack service create --name placement \
    +    --description "Placement API" placement
    +
      +
    • 创建Placement API服务endpoints:
    • +
    +
    openstack endpoint create --region RegionOne \
    +    placement public http://controller:8778
    +openstack endpoint create --region RegionOne \
    +    placement internal http://controller:8778
    +openstack endpoint create --region RegionOne \
    +    placement admin http://controller:8778
    +
  4. +
  5. +

    安装及配置组件

    +
      +
    • 安装软件包:
    • +
    +
    dnf install openstack-placement-api
    +
      +
    • +

      编辑/etc/placement/placement.conf配置文件,完成如下操作:

      +
        +
      • [placement_database]部分,配置数据库入口:
      • +
      +
      [placement_database]
      +connection = mysql+pymysql://placement:PLACEMENT_DBPASS@controller/placement
      +

      替换PLACEMENT_DBPASS为placement数据库的密码。

      +
        +
      • [api][keystone_authtoken]部分,配置身份认证服务入口:
      • +
      +
      [api]
      +auth_strategy = keystone
      +
      +[keystone_authtoken]
      +auth_url = http://controller:5000/v3
      +memcached_servers = controller:11211
      +auth_type = password
      +project_domain_name = Default
      +user_domain_name = Default
      +project_name = service
      +username = placement
      +password = PLACEMENT_PASS
      +

      替换PLACEMENT_PASS为placement用户的密码。

      +
    • +
    • +

      数据库同步,填充Placement数据库:

      +
    • +
    +
    su -s /bin/sh -c "placement-manage db sync" placement
    +
  6. +
  7. +

    启动服务

    +

    重启httpd服务:

    +
    systemctl restart httpd
    +
  8. +
  9. +

    验证

    +
      +
    • source admin凭证,以获取admin命令行权限
    • +
    +
    source ~/.admin-openrc
    +
      +
    • 执行状态检查:
    • +
    +
    placement-status upgrade check
    +
    +----------------------------------------------------------------------+
    +| Upgrade Check Results                                                |
    ++----------------------------------------------------------------------+
    +| Check: Missing Root Provider IDs                                     |
    +| Result: Success                                                      |
    +| Details: None                                                        |
    ++----------------------------------------------------------------------+
    +| Check: Incomplete Consumers                                          |
    +| Result: Success                                                      |
    +| Details: None                                                        |
    ++----------------------------------------------------------------------+
    +| Check: Policy File JSON to YAML Migration                            |
    +| Result: Failure                                                      |
    +| Details: Your policy file is JSON-formatted which is deprecated. You |
    +|   need to switch to YAML-formatted file. Use the                     |
    +|   ``oslopolicy-convert-json-to-yaml`` tool to convert the            |
    +|   existing JSON-formatted files to YAML in a backwards-              |
    +|   compatible manner: https://docs.openstack.org/oslo.policy/         |
    +|   latest/cli/oslopolicy-convert-json-to-yaml.html.                   |
    ++----------------------------------------------------------------------+
    +

    这里可以看到Policy File JSON to YAML Migration的结果为Failure。这是因为在Placement中,JSON格式的policy文件从Wallaby版本开始已处于deprecated状态。可以参考提示,使用oslopolicy-convert-json-to-yaml工具 将现有的JSON格式policy文件转化为YAML格式。

    +
    oslopolicy-convert-json-to-yaml  --namespace placement \
    +    --policy-file /etc/placement/policy.json \
    +    --output-file /etc/placement/policy.yaml
    +mv /etc/placement/policy.json{,.bak}
    +

    注:当前环境中此问题可忽略,不影响运行。

    +
      +
    • +

      针对placement API运行命令:

      +
        +
      • 安装osc-placement插件:
      • +
      +
      dnf install python3-osc-placement
      +
        +
      • 列出可用的资源类别及特性:
      • +
      +
      openstack --os-placement-api-version 1.2 resource class list --sort-column name
      ++----------------------------+
      +| name                       |
      ++----------------------------+
      +| DISK_GB                    |
      +| FPGA                       |
      +| ...                        |
      +
      +openstack --os-placement-api-version 1.6 trait list --sort-column name
      ++---------------------------------------+
      +| name                                  |
      ++---------------------------------------+
      +| COMPUTE_ACCELERATORS                  |
      +| COMPUTE_ARCH_AARCH64                  |
      +| ...                                   |
      +
    • +
    +
  10. +
+

Nova

+

Nova是OpenStack的计算服务,负责虚拟机的创建、发放等功能。

+

Controller节点

+

在控制节点执行以下操作。

+
    +
  1. +

    创建数据库

    +
      +
    • 使用root用户访问数据库服务:
    • +
    +
    mysql -u root -p
    +
      +
    • 创建 nova_api、nova、nova_cell0 数据库:
    • +
    +
    MariaDB [(none)]> CREATE DATABASE nova_api;
    +MariaDB [(none)]> CREATE DATABASE nova;
    +MariaDB [(none)]> CREATE DATABASE nova_cell0;
    +
      +
    • 授权数据库访问:
    • +
    +
    MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \
    +    IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \
    +    IDENTIFIED BY 'NOVA_DBPASS';
    +
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
    +    IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
    +    IDENTIFIED BY 'NOVA_DBPASS';
    +
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \
    +    IDENTIFIED BY 'NOVA_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \
    +    IDENTIFIED BY 'NOVA_DBPASS';
    +

    替换NOVA_DBPASS为nova相关数据库访问密码。

    +
      +
    • 退出数据库访问客户端:
    • +
    +
    exit
    +
  2. +
  3. +

    配置用户和Endpoints

    +
      +
    • source admin凭证,以获取admin命令行权限:
    • +
    +
    source ~/.admin-openrc
    +
      +
    • 创建nova用户并设置用户密码:
    • +
    +
    openstack user create --domain default --password-prompt nova
    +
    +User Password:
    +Repeat User Password:
    +
      +
    • 添加nova用户到service project并指定admin角色:
    • +
    +
    openstack role add --project service --user nova admin
    +
      +
    • 创建nova服务实体:
    • +
    +
    openstack service create --name nova \
    +    --description "OpenStack Compute" compute
    +
      +
    • 创建Nova API服务endpoints:
    • +
    +
    openstack endpoint create --region RegionOne \
    +    compute public http://controller:8774/v2.1
    +openstack endpoint create --region RegionOne \
    +    compute internal http://controller:8774/v2.1
    +openstack endpoint create --region RegionOne \
    +    compute admin http://controller:8774/v2.1
    +
  4. +
  5. +

    安装及配置组件

    +
      +
    • 安装软件包:
    • +
    +
    dnf install openstack-nova-api openstack-nova-conductor \
    +    openstack-nova-novncproxy openstack-nova-scheduler
    +
      +
    • +

      编辑/etc/nova/nova.conf配置文件,完成如下操作:

      +
        +
      • [default]部分,启用计算和元数据的API,配置RabbitMQ消息队列入口,使用controller节点管理IP配置my_ip,显式定义log_dir:
      • +
      +
      [DEFAULT]
      +enabled_apis = osapi_compute,metadata
      +transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
      +my_ip = 192.168.0.2
      +log_dir = /var/log/nova
      +state_path = /var/lib/nova
      +

      替换RABBIT_PASS为RabbitMQ中openstack账户的密码。

      +
        +
      • [api_database][database]部分,配置数据库入口:
      • +
      +
      [api_database]
      +connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api
      +
      +[database]
      +connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova
      +

      替换NOVA_DBPASS为nova相关数据库的密码。

      +
        +
      • [api][keystone_authtoken]部分,配置身份认证服务入口:
      • +
      +
      [api]
      +auth_strategy = keystone
      +
      +[keystone_authtoken]
      +auth_url = http://controller:5000/v3
      +memcached_servers = controller:11211
      +auth_type = password
      +project_domain_name = Default
      +user_domain_name = Default
      +project_name = service
      +username = nova
      +password = NOVA_PASS
      +

      替换NOVA_PASS为nova用户的密码。

      +
        +
      • [vnc]部分,启用并配置远程控制台入口:
      • +
      +
      [vnc]
      +enabled = true
      +server_listen = $my_ip
      +server_proxyclient_address = $my_ip
      +
        +
      • [glance]部分,配置镜像服务API的地址:
      • +
      +
      [glance]
      +api_servers = http://controller:9292
      +
        +
      • [oslo_concurrency]部分,配置lock path:
      • +
      +
      [oslo_concurrency]
      +lock_path = /var/lib/nova/tmp
      +
        +
      • [placement]部分,配置placement服务的入口:
      • +
      +
      [placement]
      +region_name = RegionOne
      +project_domain_name = Default
      +project_name = service
      +auth_type = password
      +user_domain_name = Default
      +auth_url = http://controller:5000/v3
      +username = placement
      +password = PLACEMENT_PASS
      +

      替换PLACEMENT_PASS为placement用户的密码。

      +
    • +
    • +

      数据库同步:

      +
        +
      • 同步nova-api数据库:
      • +
      +
      su -s /bin/sh -c "nova-manage api_db sync" nova
      +
        +
      • 注册cell0数据库:
      • +
      +
      su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
      +
        +
      • 创建cell1 cell:
      • +
      +
      su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
      +
        +
      • 同步nova数据库:
      • +
      +
      su -s /bin/sh -c "nova-manage db sync" nova
      +
        +
      • 验证cell0和cell1注册正确:
      • +
      +
      su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova
      +
    • +
    +
  6. +
  7. +

    启动服务

    +
    systemctl enable \
    +  openstack-nova-api.service \
    +  openstack-nova-scheduler.service \
    +  openstack-nova-conductor.service \
    +  openstack-nova-novncproxy.service
    +
    +systemctl start \
    +  openstack-nova-api.service \
    +  openstack-nova-scheduler.service \
    +  openstack-nova-conductor.service \
    +  openstack-nova-novncproxy.service
    +
  8. +
+

Compute节点

+

在计算节点执行以下操作。

+
    +
  1. +

    安装软件包

    +
    dnf install openstack-nova-compute
    +
  2. +
  3. +

    编辑/etc/nova/nova.conf配置文件

    +
      +
    • [default]部分,启用计算和元数据的API,配置RabbitMQ消息队列入口,使用Compute节点管理IP配置my_ip,显式定义compute_driver、instances_path、log_dir:
    • +
    +
    [DEFAULT]
    +enabled_apis = osapi_compute,metadata
    +transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
    +my_ip = 192.168.0.3
    +compute_driver = libvirt.LibvirtDriver
    +instances_path = /var/lib/nova/instances
    +log_dir = /var/log/nova
    +

    替换RABBIT_PASS为RabbitMQ中openstack账户的密码。

    +
      +
    • [api][keystone_authtoken]部分,配置身份认证服务入口:
    • +
    +
    [api]
    +auth_strategy = keystone
    +
    +[keystone_authtoken]
    +auth_url = http://controller:5000/v3
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +project_name = service
    +username = nova
    +password = NOVA_PASS
    +

    替换NOVA_PASS为nova用户的密码。

    +
      +
    • [vnc]部分,启用并配置远程控制台入口:
    • +
    +
    [vnc]
    +enabled = true
    +server_listen = $my_ip
    +server_proxyclient_address = $my_ip
    +novncproxy_base_url = http://controller:6080/vnc_auto.html
    +
      +
    • [glance]部分,配置镜像服务API的地址:
    • +
    +
    [glance]
    +api_servers = http://controller:9292
    +
      +
    • [oslo_concurrency]部分,配置lock path:
    • +
    +
    [oslo_concurrency]
    +lock_path = /var/lib/nova/tmp
    +
      +
    • [placement]部分,配置placement服务的入口:
    • +
    +
    [placement]
    +region_name = RegionOne
    +project_domain_name = Default
    +project_name = service
    +auth_type = password
    +user_domain_name = Default
    +auth_url = http://controller:5000/v3
    +username = placement
    +password = PLACEMENT_PASS
    +

    替换PLACEMENT_PASS为placement用户的密码。

    +
  4. +
  5. +

    确认计算节点是否支持虚拟机硬件加速(x86_64)

    +

    处理器为x86_64架构时,可通过运行如下命令确认是否支持硬件加速:

    +
    egrep -c '(vmx|svm)' /proc/cpuinfo
    +

    如果返回值为0则不支持硬件加速,需要配置libvirt使用QEMU而不是默认的KVM。编辑/etc/nova/nova.conf[libvirt]部分:

    +
    [libvirt]
    +virt_type = qemu
    +

    如果返回值为1或更大的值,则支持硬件加速,不需要进行额外的配置。

    +
  6. +
  7. +

    确认计算节点是否支持虚拟机硬件加速(arm64)

    +

    处理器为arm64架构时,可通过运行如下命令确认是否支持硬件加速:

    +
    virt-host-validate
    +# 该命令由libvirt提供,此时libvirt应已作为openstack-nova-compute依赖被安装,环境中已有此命令
    +

    显示FAIL时,表示不支持硬件加速,需要配置libvirt使用QEMU而不是默认的KVM。

    +
    QEMU: Checking if device /dev/kvm exists: FAIL (Check that CPU and firmware supports virtualization and kvm module is loaded)
    +

    编辑/etc/nova/nova.conf[libvirt]部分:

    +
    [libvirt]
    +virt_type = qemu
    +

    显示PASS时,表示支持硬件加速,不需要进行额外的配置。

    +
    QEMU: Checking if device /dev/kvm exists: PASS
    +
  8. +
  9. +

    配置qemu(仅arm64)

    +

    仅当处理器为arm64架构时需要执行此操作。

    +
      +
    • 编辑/etc/libvirt/qemu.conf:
    • +
    +
    nvram = ["/usr/share/AAVMF/AAVMF_CODE.fd: \
    +            /usr/share/AAVMF/AAVMF_VARS.fd", \
    +            "/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw: \
    +            /usr/share/edk2/aarch64/vars-template-pflash.raw"]
    +
      +
    • 编辑/etc/qemu/firmware/edk2-aarch64.json
    • +
    +
    {
    +    "description": "UEFI firmware for ARM64 virtual machines",
    +    "interface-types": [
    +        "uefi"
    +    ],
    +    "mapping": {
    +        "device": "flash",
    +        "executable": {
    +            "filename": "/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw",
    +            "format": "raw"
    +        },
    +        "nvram-template": {
    +            "filename": "/usr/share/edk2/aarch64/vars-template-pflash.raw",
    +            "format": "raw"
    +        }
    +    },
    +    "targets": [
    +        {
    +            "architecture": "aarch64",
    +            "machines": [
    +                "virt-*"
    +            ]
    +        }
    +    ],
    +    "features": [
    +
    +    ],
    +    "tags": [
    +
    +    ]
    +}
    +
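
    配置完成后,建议先确认上述nvram配置引用的固件文件在计算节点上确实存在(以下检查命令仅为示意,文件路径以实际安装的edk2软件包为准):

    ls -l /usr/share/edk2/aarch64/QEMU_EFI-pflash.raw \
          /usr/share/edk2/aarch64/vars-template-pflash.raw
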
  10. +
  11. +

    启动服务

    +
    systemctl enable libvirtd.service openstack-nova-compute.service
    +systemctl start libvirtd.service openstack-nova-compute.service
    +
  12. +
+

Controller节点

+

在控制节点执行以下操作。

+
    +
  1. +

    添加计算节点到openstack集群

    +
      +
    • source admin凭证,以获取admin命令行权限:
    • +
    +
    source ~/.admin-openrc
    +
      +
    • 确认nova-compute服务已识别到数据库中:
    • +
    +
    openstack compute service list --service nova-compute
    +
      +
    • 发现计算节点,将计算节点添加到cell数据库:
    • +
    +
    su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
    +

    结果如下:

    +
    Modules with known eventlet monkey patching issues were imported prior to eventlet monkey patching: urllib3. This warning can usually be    ignored if the caller is only importing and not executing nova code.
    +Found 2 cell mappings.
    +Skipping cell0 since it does not contain hosts.
    +Getting computes from cell 'cell1': 6dae034e-b2d9-4a6c-b6f0-60ada6a6ddc2
    +Checking host mapping for compute host 'compute': 6286a86f-09d7-4786-9137-1185654c9e2e
    +Creating host mapping for compute host 'compute': 6286a86f-09d7-4786-9137-1185654c9e2e
    +Found 1 unmapped computes in cell: 6dae034e-b2d9-4a6c-b6f0-60ada6a6ddc2
    +
  2. +
  3. +

    验证

    +
      +
    • 列出服务组件,验证每个流程都成功启动和注册:
    • +
    +
    openstack compute service list
    +
      +
    • 列出身份服务中的API端点,验证与身份服务的连接:
    • +
    +
    openstack catalog list
    +
      +
    • 列出镜像服务中的镜像,验证与镜像服务的连接:
    • +
    +
    openstack image list
    +
      +
    • 检查cells是否运作成功,以及其他必要条件是否已具备。
    • +
    +
    nova-status upgrade check
    +
  4. +
+

Neutron

+

Neutron是OpenStack的网络服务,提供虚拟交换机、IP路由、DHCP等功能。

+

Controller节点

+
    +
  1. +

    创建数据库、服务凭证和 API 服务端点

    +
      +
    • 创建数据库:
    • +
    +
    mysql -u root -p
    +
    +MariaDB [(none)]> CREATE DATABASE neutron;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'NEUTRON_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'NEUTRON_DBPASS';
    +MariaDB [(none)]> exit;
    +
      +
    • 创建用户和服务,并记住创建neutron用户时输入的密码,用于配置NEUTRON_PASS:
    • +
    +
    source ~/.admin-openrc
    +openstack user create --domain default --password-prompt neutron
    +openstack role add --project service --user neutron admin
    +openstack service create --name neutron --description "OpenStack Networking" network
    +
      +
    • 部署 Neutron API 服务:
    • +
    +
    openstack endpoint create --region RegionOne network public http://controller:9696
    +openstack endpoint create --region RegionOne network internal http://controller:9696
    +openstack endpoint create --region RegionOne network admin http://controller:9696
    +
  2. +
  3. +

    安装软件包

    +
    dnf install -y openstack-neutron openstack-neutron-linuxbridge ebtables ipset openstack-neutron-ml2
    +
  4. +
  5. +

    配置Neutron

    +
      +
    • 修改/etc/neutron/neutron.conf
    • +
    +
    [database]
    +connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron
    +
    +[DEFAULT]
    +core_plugin = ml2
    +service_plugins = router
    +allow_overlapping_ips = true
    +transport_url = rabbit://openstack:RABBIT_PASS@controller
    +auth_strategy = keystone
    +notify_nova_on_port_status_changes = true
    +notify_nova_on_port_data_changes = true
    +
    +[keystone_authtoken]
    +www_authenticate_uri = http://controller:5000
    +auth_url = http://controller:5000
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +project_name = service
    +username = neutron
    +password = NEUTRON_PASS
    +
    +[nova]
    +auth_url = http://controller:5000
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +region_name = RegionOne
    +project_name = service
    +username = nova
    +password = NOVA_PASS
    +
    +[oslo_concurrency]
    +lock_path = /var/lib/neutron/tmp
    +
    +[experimental]
    +linuxbridge = true
    +
      +
    • +

      配置ML2。ML2的具体配置可以根据用户需求自行修改,本文使用的是provider network + linuxbridge。

      +
    • +
    • +

      修改/etc/neutron/plugins/ml2/ml2_conf.ini

      +
    • +
    +
    [ml2]
    +type_drivers = flat,vlan,vxlan
    +tenant_network_types = vxlan
    +mechanism_drivers = linuxbridge,l2population
    +extension_drivers = port_security
    +
    +[ml2_type_flat]
    +flat_networks = provider
    +
    +[ml2_type_vxlan]
    +vni_ranges = 1:1000
    +
    +[securitygroup]
    +enable_ipset = true
    +
      +
    • 修改/etc/neutron/plugins/ml2/linuxbridge_agent.ini
    • +
    +
    [linux_bridge]
    +physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME
    +
    +[vxlan]
    +enable_vxlan = true
    +local_ip = OVERLAY_INTERFACE_IP_ADDRESS
    +l2_population = true
    +
    +[securitygroup]
    +enable_security_group = true
    +firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
    +
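
    其中PROVIDER_INTERFACE_NAME为provider物理网络所对应的网卡名,OVERLAY_INTERFACE_IP_ADDRESS为本节点上承载vxlan隧道流量的IP地址。下面给出一个填写示意(网卡名enp4s0与IP地址均为假设,请按实际环境替换):

    [linux_bridge]
    physical_interface_mappings = provider:enp4s0

    [vxlan]
    enable_vxlan = true
    local_ip = 192.168.0.2
    l2_population = true
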
      +
    • +

      配置Layer-3代理

      +
    • +
    • +

      修改/etc/neutron/l3_agent.ini

      +
    • +
    +
    [DEFAULT]
    +interface_driver = linuxbridge
    +

    配置DHCP代理,修改/etc/neutron/dhcp_agent.ini

    +
    [DEFAULT]
    +interface_driver = linuxbridge
    +dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
    +enable_isolated_metadata = true
    +
      +
    • +

      配置metadata代理

      +
    • +
    • +

      修改/etc/neutron/metadata_agent.ini

      +
    • +
    +
    [DEFAULT]
    +nova_metadata_host = controller
    +metadata_proxy_shared_secret = METADATA_SECRET
    +
  6. +
  7. +

    配置nova服务使用neutron,修改/etc/nova/nova.conf

    +
    [neutron]
    +auth_url = http://controller:5000
    +auth_type = password
    +project_domain_name = default
    +user_domain_name = default
    +region_name = RegionOne
    +project_name = service
    +username = neutron
    +password = NEUTRON_PASS
    +service_metadata_proxy = true
    +metadata_proxy_shared_secret = METADATA_SECRET
    +
  8. +
  9. +

    创建/etc/neutron/plugin.ini的符号链接

    +
    ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
    +
  10. +
  11. +

    同步数据库

    +
    su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
    +
  12. +
  13. +

    重启nova api服务

    +
    systemctl restart openstack-nova-api
    +
  14. +
  15. +

    启动网络服务

    +
    systemctl enable neutron-server.service neutron-linuxbridge-agent.service \
    +neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service
    +systemctl start neutron-server.service neutron-linuxbridge-agent.service \
    +neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service
    +
  16. +
+

Compute节点

+
    +
  1. +

    安装软件包

    +
    dnf install openstack-neutron-linuxbridge ebtables ipset -y
    +
  2. +
  3. +

    配置Neutron

    +
      +
    • 修改/etc/neutron/neutron.conf
    • +
    +
    [DEFAULT]
    +transport_url = rabbit://openstack:RABBIT_PASS@controller
    +auth_strategy = keystone
    +
    +[keystone_authtoken]
    +www_authenticate_uri = http://controller:5000
    +auth_url = http://controller:5000
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +project_name = service
    +username = neutron
    +password = NEUTRON_PASS
    +
    +[oslo_concurrency]
    +lock_path = /var/lib/neutron/tmp
    +
      +
    • 修改/etc/neutron/plugins/ml2/linuxbridge_agent.ini
    • +
    +
    [linux_bridge]
    +physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME
    +
    +[vxlan]
    +enable_vxlan = true
    +local_ip = OVERLAY_INTERFACE_IP_ADDRESS
    +l2_population = true
    +
    +[securitygroup]
    +enable_security_group = true
    +firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
    +
      +
    • 配置nova compute服务使用neutron,修改/etc/nova/nova.conf
    • +
    +
    [neutron]
    +auth_url = http://controller:5000
    +auth_type = password
    +project_domain_name = default
    +user_domain_name = default
    +region_name = RegionOne
    +project_name = service
    +username = neutron
    +password = NEUTRON_PASS
    +
  4. +
  5. +

    重启nova-compute服务

    +
    systemctl restart openstack-nova-compute.service
    +
  6. +
  7. +

    启动Neutron linuxbridge agent服务

    +
    systemctl enable neutron-linuxbridge-agent
    +systemctl start neutron-linuxbridge-agent
    +
  8. +
+

Cinder

+

Cinder是OpenStack的存储服务,提供块设备的创建、发放、备份等功能。

+

Controller节点

+
    +
  1. +

    初始化数据库

    +

    CINDER_DBPASS是用户自定义的cinder数据库密码。

    +
    mysql -u root -p
    +
    +MariaDB [(none)]> CREATE DATABASE cinder;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'CINDER_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'CINDER_DBPASS';
    +MariaDB [(none)]> exit
    +
  2. +
  3. +

    初始化Keystone资源对象

    +
    source ~/.admin-openrc
    +
    +#创建用户时,命令行会提示输入密码,请输入自定义的密码,下文涉及到`CINDER_PASS`的地方替换成该密码即可。
    +openstack user create --domain default --password-prompt cinder
    +
    +openstack role add --project service --user cinder admin
    +openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
    +
    +openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s
    +openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s
    +openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s
    +
  4. +
  5. +

    安装软件包

    +
    dnf install openstack-cinder-api openstack-cinder-scheduler
    +
  6. +
  7. +

    修改cinder配置文件/etc/cinder/cinder.conf

    +
    [DEFAULT]
    +transport_url = rabbit://openstack:RABBIT_PASS@controller
    +auth_strategy = keystone
    +my_ip = 192.168.0.2
    +
    +[database]
    +connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder
    +
    +[keystone_authtoken]
    +www_authenticate_uri = http://controller:5000
    +auth_url = http://controller:5000
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = Default
    +user_domain_name = Default
    +project_name = service
    +username = cinder
    +password = CINDER_PASS
    +
    +[oslo_concurrency]
    +lock_path = /var/lib/cinder/tmp
    +
  8. +
  9. +

    数据库同步

    +
    su -s /bin/sh -c "cinder-manage db sync" cinder
    +
  10. +
  11. +

    修改nova配置/etc/nova/nova.conf

    +
    [cinder]
    +os_region_name = RegionOne
    +
  12. +
  13. +

    启动服务

    +
    systemctl restart openstack-nova-api
    +systemctl start openstack-cinder-api openstack-cinder-scheduler
    +
  14. +
+

Storage节点

+

Storage节点要提前准备至少一块硬盘,作为cinder的存储后端,下文默认storage节点已经存在一块未使用的硬盘,设备名称为/dev/sdb,用户在配置过程中,请按照真实环境信息进行名称替换。

+

Cinder支持很多类型的后端存储,本指导使用最简单的lvm为参考,如果您想使用如ceph等其他后端,请自行配置。

+
    +
  1. +

    安装软件包

    +
    dnf install lvm2 device-mapper-persistent-data scsi-target-utils rpcbind nfs-utils openstack-cinder-volume openstack-cinder-backup
    +
  2. +
  3. +

    配置lvm卷组

    +
    pvcreate /dev/sdb
    +vgcreate cinder-volumes /dev/sdb
    +
  4. +
  5. +

    修改cinder配置/etc/cinder/cinder.conf

    +
    [DEFAULT]
    +transport_url = rabbit://openstack:RABBIT_PASS@controller
    +auth_strategy = keystone
    +my_ip = 192.168.0.4
    +enabled_backends = lvm
    +glance_api_servers = http://controller:9292
    +
    +[keystone_authtoken]
    +www_authenticate_uri = http://controller:5000
    +auth_url = http://controller:5000
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_name = default
    +user_domain_name = default
    +project_name = service
    +username = cinder
    +password = CINDER_PASS
    +
    +[database]
    +connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder
    +
    +[lvm]
    +volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
    +volume_group = cinder-volumes
    +target_protocol = iscsi
    +target_helper = lioadm
    +
    +[oslo_concurrency]
    +lock_path = /var/lib/cinder/tmp
    +
  6. +
  7. +

    配置cinder backup (可选)

    +

    cinder-backup是可选的备份服务,cinder同样支持很多种备份后端,本文使用swift存储,如果您想使用如NFS等后端,请自行配置,例如可以参考OpenStack官方文档对NFS的配置说明。

    +

    修改/etc/cinder/cinder.conf,在[DEFAULT]中新增

    +
    [DEFAULT]
    +backup_driver = cinder.backup.drivers.swift.SwiftBackupDriver
    +backup_swift_url = SWIFT_URL
    +

    这里的SWIFT_URL是指环境中swift服务的URL,在部署完swift服务后,执行openstack catalog show object-store命令获取。

    +
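
    以下为一个获取并填写SWIFT_URL的示意(其中的URL仅为示例格式,请以openstack catalog show object-store的实际输出为准):

    # 部署完swift后,在controller节点查询object-store服务的endpoint
    openstack catalog show object-store

    # 将查询到的URL填入/etc/cinder/cinder.conf,例如:
    [DEFAULT]
    backup_driver = cinder.backup.drivers.swift.SwiftBackupDriver
    backup_swift_url = http://controller:8080/v1/AUTH_
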
  8. +
  9. +

    启动服务

    +
    systemctl start openstack-cinder-volume target
    +systemctl start openstack-cinder-backup    # 可选,仅在配置了cinder backup时启动
    +
  10. +
+

至此,Cinder服务的部署已全部完成,可以在controller通过以下命令进行简单的验证

+
source ~/.admin-openrc
+openstack volume service list
+openstack volume list
+
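
也可以进一步创建一个测试卷,确认lvm后端能够正常分配存储(卷名与大小仅为示例):

openstack volume create --size 1 test-volume
openstack volume list
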

Horizon

+

Horizon是OpenStack提供的前端页面,可以让用户通过网页鼠标的操作来控制OpenStack集群,而不用繁琐的CLI命令行。Horizon一般部署在控制节点。

+
    +
  1. +

    安装软件包

    +
    dnf install openstack-dashboard
    +
  2. +
  3. +

    修改配置文件/etc/openstack-dashboard/local_settings

    +
    OPENSTACK_HOST = "controller"
    +ALLOWED_HOSTS = ['*', ]
    +OPENSTACK_KEYSTONE_URL =  "http://controller:5000/v3"
    +SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
    +CACHES = {
    +'default': {
    +    'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
    +    'LOCATION': 'controller:11211',
    +    }
    +}
    +OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
    +OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
    +OPENSTACK_KEYSTONE_DEFAULT_ROLE = "member"
    +WEBROOT = '/dashboard'
    +POLICY_FILES_PATH = "/etc/openstack-dashboard"
    +
    +OPENSTACK_API_VERSIONS = {
    +    "identity": 3,
    +    "image": 2,
    +    "volume": 3,
    +}
    +
  4. +
  5. +

    重启服务

    +
    systemctl restart httpd
    +
  6. +
+

至此,horizon服务的部署已全部完成,打开浏览器,输入http://192.168.0.2/dashboard,打开horizon登录页面。

+

Ironic

+

Ironic是OpenStack的裸金属服务,如果用户需要进行裸机部署则推荐使用该组件。否则,可以不用安装。

+

在控制节点执行以下操作。

+
    +
  1. +

    设置数据库

    +

    裸金属服务在数据库中存储信息,创建一个ironic用户可以访问的ironic数据库,替换IRONIC_DBPASS为合适的密码

    +
    mysql -u root -p
    +
    +MariaDB [(none)]> CREATE DATABASE ironic CHARACTER SET utf8;
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'localhost' \
    +IDENTIFIED BY 'IRONIC_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'%' \
    +IDENTIFIED BY 'IRONIC_DBPASS';
    +MariaDB [(none)]> exit
    +Bye
    +
  2. +
  3. +

    创建服务用户认证

    +
      +
    • 创建Bare Metal服务用户
    • +
    +

    替换IRONIC_PASS为ironic用户密码,IRONIC_INSPECTOR_PASS为ironic_inspector用户密码。

    +
    openstack user create --password IRONIC_PASS \
    +    --email ironic@example.com ironic
    +openstack role add --project service --user ironic admin
    +openstack service create --name ironic \
    +    --description "Ironic baremetal provisioning service" baremetal
    +
    +openstack service create --name ironic-inspector --description "Ironic inspector baremetal provisioning service" baremetal-introspection
    +openstack user create --password IRONIC_INSPECTOR_PASS --email ironic_inspector@example.com ironic-inspector
    +openstack role add --project service --user ironic-inspector admin
    +
      +
    • 创建Bare Metal服务访问入口
    • +
    +
    openstack endpoint create --region RegionOne baremetal admin http://192.168.0.2:6385
    +openstack endpoint create --region RegionOne baremetal public http://192.168.0.2:6385
    +openstack endpoint create --region RegionOne baremetal internal http://192.168.0.2:6385
    +openstack endpoint create --region RegionOne baremetal-introspection internal http://192.168.0.2:5050/v1
    +openstack endpoint create --region RegionOne baremetal-introspection public http://192.168.0.2:5050/v1
    +openstack endpoint create --region RegionOne baremetal-introspection admin http://192.168.0.2:5050/v1
    +
  4. +
  5. +

    安装组件

    +
    dnf install openstack-ironic-api openstack-ironic-conductor python3-ironicclient
    +
  6. +
  7. +

    配置ironic-api服务

    +

    配置文件路径/etc/ironic/ironic.conf

    +
      +
    • 通过connection选项配置数据库的位置,如下所示,替换IRONIC_DBPASS为ironic用户的密码,替换DB_IP为DB服务器所在的IP地址:
    • +
    +
    [database]
    +
    +# The SQLAlchemy connection string used to connect to the
    +# database (string value)
    +# connection = mysql+pymysql://ironic:IRONIC_DBPASS@DB_IP/ironic
    +connection = mysql+pymysql://ironic:IRONIC_DBPASS@controller/ironic
    +
      +
    • 通过以下选项配置ironic-api服务使用RabbitMQ消息代理,替换RPC_*为RabbitMQ的详细地址和凭证
    • +
    +
    [DEFAULT]
    +
    +# A URL representing the messaging driver to use and its full
    +# configuration. (string value)
    +# transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
    +transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
    +

    用户也可自行使用json-rpc方式替换rabbitmq

    +
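
    若希望不依赖RabbitMQ,可参考如下示意改用json-rpc(仅为示例配置,具体可用选项请以所用ironic版本的配置说明为准):

    [DEFAULT]
    rpc_transport = json-rpc

    [json_rpc]
    auth_strategy = keystone
    host_ip = 192.168.0.2
    port = 8089
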
      +
    • 配置ironic-api服务使用身份认证服务的凭证,替换PUBLIC_IDENTITY_IP为身份认证服务器的公共IP,替换PRIVATE_IDENTITY_IP为身份认证服务器的私有IP,替换 IRONIC_PASS为身份认证服务中ironic用户的密码,替换RABBIT_PASS为RabbitMQ中openstack账户的密码。:
    • +
    +
    [DEFAULT]
    +
    +# Authentication strategy used by ironic-api: one of
    +# "keystone" or "noauth". "noauth" should not be used in a
    +# production environment because all authentication will be
    +# disabled. (string value)
    +
    +auth_strategy=keystone
    +host = controller
    +memcache_servers = controller:11211
    +enabled_network_interfaces = flat,noop,neutron
    +default_network_interface = noop
    +enabled_hardware_types = ipmi
    +enabled_boot_interfaces = pxe
    +enabled_deploy_interfaces = direct
    +default_deploy_interface = direct
    +enabled_inspect_interfaces = inspector
    +enabled_management_interfaces = ipmitool
    +enabled_power_interfaces = ipmitool
    +enabled_rescue_interfaces = no-rescue,agent
    +isolinux_bin = /usr/share/syslinux/isolinux.bin
    +logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s
    +
    +[keystone_authtoken]
    +# Authentication type to load (string value)
    +auth_type=password
    +# Complete public Identity API endpoint (string value)
    +# www_authenticate_uri=http://PUBLIC_IDENTITY_IP:5000
    +www_authenticate_uri=http://controller:5000
    +# Complete admin Identity API endpoint. (string value)
    +# auth_url=http://PRIVATE_IDENTITY_IP:5000
    +auth_url=http://controller:5000
    +# Service username. (string value)
    +username=ironic
    +# Service account password. (string value)
    +password=IRONIC_PASS
    +# Service tenant name. (string value)
    +project_name=service
    +# Domain name containing project (string value)
    +project_domain_name=Default
    +# User's domain name (string value)
    +user_domain_name=Default
    +
    +[agent]
    +deploy_logs_collect = always
    +deploy_logs_local_path = /var/log/ironic/deploy
    +deploy_logs_storage_backend = local
    +image_download_source = http
    +stream_raw_images = false
    +force_raw_images = false
    +verify_ca = False
    +
    +[oslo_concurrency]
    +
    +[oslo_messaging_notifications]
    +transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
    +topics = notifications
    +driver = messagingv2
    +
    +[oslo_messaging_rabbit]
    +amqp_durable_queues = True
    +rabbit_ha_queues = True
    +
    +[pxe]
    +ipxe_enabled = false
    +pxe_append_params = nofb nomodeset vga=normal coreos.autologin ipa-insecure=1
    +image_cache_size = 204800
    +tftp_root=/var/lib/tftpboot/cephfs/
    +tftp_master_path=/var/lib/tftpboot/cephfs/master_images
    +
    +[dhcp]
    +dhcp_provider = none
    +
      +
    • 创建裸金属服务数据库表
    • +
    +
    ironic-dbsync --config-file /etc/ironic/ironic.conf create_schema
    +
      +
    • 重启ironic-api服务
    • +
    +
    sudo systemctl restart openstack-ironic-api
    +
  8. +
  9. +

    配置ironic-conductor服务

    +

    如下为ironic-conductor服务自身的标准配置,ironic-conductor服务可以与ironic-api服务分布于不同节点,本指南中均部署于控制节点,所以重复的配置项可跳过。

    +
      +
    • 替换使用conductor服务所在host的IP配置my_ip:
    • +
    +
    [DEFAULT]
    +
    +# IP address of this host. If unset, will determine the IP
    +# programmatically. If unable to do so, will use "127.0.0.1".
    +# (string value)
    +# my_ip=HOST_IP
    +my_ip = 192.168.0.2
    +
      +
    • 配置数据库的位置,ironic-conductor应该使用和ironic-api相同的配置。替换IRONIC_DBPASSironic用户的密码:
    • +
    +
    [database]
    +
    +# The SQLAlchemy connection string to use to connect to the
    +# database. (string value)
    +connection = mysql+pymysql://ironic:IRONIC_DBPASS@controller/ironic
    +
      +
    • 通过以下选项配置ironic-api服务使用RabbitMQ消息代理,ironic-conductor应该使用和ironic-api相同的配置,替换RABBIT_PASS为RabbitMQ中openstack账户的密码:
    • +
    +
    [DEFAULT]
    +
    +# A URL representing the messaging driver to use and its full
    +# configuration. (string value)
    +transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
    +

    用户也可自行使用json-rpc方式替换rabbitmq

    +
      +
    • 配置凭证访问其他OpenStack服务
    • +
    +

    为了与其他OpenStack服务进行通信,裸金属服务在请求其他服务时需要使用服务用户与OpenStack Identity服务进行认证。这些用户的凭据必须在与相应服务相关的每个配置文件中进行配置。

    +
    [neutron] - 访问OpenStack网络服务
    +[glance] - 访问OpenStack镜像服务
    +[swift] - 访问OpenStack对象存储服务
    +[cinder] - 访问OpenStack块存储服务
    +[inspector] - 访问OpenStack裸金属introspection服务
    +[service_catalog] - 一个特殊项用于保存裸金属服务使用的凭证,该凭证用于发现注册在OpenStack身份认证服务目录中的自己的API URL端点
    +

    简单起见,可以对所有服务使用同一个服务用户。为了向后兼容,该用户应该和ironic-api服务的[keystone_authtoken]所配置的为同一个用户。但这不是必须的,也可以为每个服务创建并配置不同的服务用户。

    +

    在下面的示例中,用户访问OpenStack网络服务的身份验证信息配置为:

    +
    网络服务部署在名为RegionOne的身份认证服务域中,仅在服务目录中注册公共端点接口
    +
    +请求时使用特定的CA SSL证书进行HTTPS连接
    +
    +与ironic-api服务配置相同的服务用户
    +
    +动态密码认证插件基于其他选项发现合适的身份认证服务API版本
    +

    替换IRONIC_PASS为ironic用户密码。

    +
    [neutron]
    +
    +# Authentication type to load (string value)
    +auth_type = password
    +# Authentication URL (string value)
    +auth_url=https://IDENTITY_IP:5000/
    +# Username (string value)
    +username=ironic
    +# User's password (string value)
    +password=IRONIC_PASS
    +# Project name to scope to (string value)
    +project_name=service
    +# Domain ID containing project (string value)
    +project_domain_id=default
    +# User's domain id (string value)
    +user_domain_id=default
    +# PEM encoded Certificate Authority to use when verifying
    +# HTTPs connections. (string value)
    +cafile=/opt/stack/data/ca-bundle.pem
    +# The default region_name for endpoint URL discovery. (string
    +# value)
    +region_name = RegionOne
    +# List of interfaces, in order of preference, for endpoint
    +# URL. (list value)
    +valid_interfaces=public
    +
    +# 其他参考配置
    +[glance]
    +endpoint_override = http://controller:9292
    +www_authenticate_uri = http://controller:5000
    +auth_url = http://controller:5000
    +auth_type = password
    +username = ironic
    +password = IRONIC_PASS
    +project_domain_name = default
    +user_domain_name = default
    +region_name = RegionOne
    +project_name = service
    +
    +[service_catalog]  
    +region_name = RegionOne
    +project_domain_id = default
    +user_domain_id = default
    +project_name = service
    +password = IRONIC_PASS
    +username = ironic
    +auth_url = http://controller:5000
    +auth_type = password
    +

    默认情况下,为了与其他服务进行通信,裸金属服务会尝试通过身份认证服务的服务目录发现该服务合适的端点。如果希望对一个特定服务使用一个不同的端点,则在裸金属服务的配置文件中通过endpoint_override选项进行指定:

    +
    [neutron]
    +endpoint_override = <NEUTRON_API_ADDRESS>
    +
      +
    • 配置允许的驱动程序和硬件类型
    • +
    +

    通过设置enabled_hardware_types设置ironic-conductor服务允许使用的硬件类型:

    +
    [DEFAULT]
    +enabled_hardware_types = ipmi
    +

    配置硬件接口:

    +
    enabled_boot_interfaces = pxe
    +enabled_deploy_interfaces = direct,iscsi
    +enabled_inspect_interfaces = inspector
    +enabled_management_interfaces = ipmitool
    +enabled_power_interfaces = ipmitool
    +

    配置接口默认值:

    +
    [DEFAULT]
    +default_deploy_interface = direct
    +default_network_interface = neutron
    +

    如果启用了任何使用Direct deploy的驱动,必须安装和配置镜像服务的Swift后端。Ceph对象网关(RADOS网关)也支持作为镜像服务的后端。

    +
      +
    • 重启ironic-conductor服务
    • +
    +
    sudo systemctl restart openstack-ironic-conductor
    +
  10. +
  11. +

    配置ironic-inspector服务

    +
      +
    • 安装组件
    • +
    +
    dnf install openstack-ironic-inspector
    +
      +
    • 创建数据库
    • +
    +
    # mysql -u root -p
    +
    +MariaDB [(none)]> CREATE DATABASE ironic_inspector CHARACTER SET utf8;
    +
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic_inspector.* TO 'ironic_inspector'@'localhost' \
    +IDENTIFIED BY 'IRONIC_INSPECTOR_DBPASS';
    +MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic_inspector.* TO 'ironic_inspector'@'%' \
    +IDENTIFIED BY 'IRONIC_INSPECTOR_DBPASS';
    +MariaDB [(none)]> exit
    +Bye
    +
      +
    • 配置/etc/ironic-inspector/inspector.conf
    • +
    +

    通过connection选项配置数据库的位置,如下所示,替换IRONIC_INSPECTOR_DBPASS为ironic_inspector用户的密码

    +
    [database]
    +backend = sqlalchemy
    +connection = mysql+pymysql://ironic_inspector:IRONIC_INSPECTOR_DBPASS@controller/ironic_inspector
    +min_pool_size = 100
    +max_pool_size = 500
    +pool_timeout = 30
    +max_retries = 5
    +max_overflow = 200
    +db_retry_interval = 2
    +db_inc_retry_interval = True
    +db_max_retry_interval = 2
    +db_max_retries = 5
    +
      +
    • 配置消息队列通信地址
    • +
    +
    [DEFAULT] 
    +transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
    +
      +
    • 设置keystone认证
    • +
    +
    [DEFAULT]
    +
    +auth_strategy = keystone
    +timeout = 900
    +rootwrap_config = /etc/ironic-inspector/rootwrap.conf
    +logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s
    +log_dir = /var/log/ironic-inspector
    +state_path = /var/lib/ironic-inspector
    +use_stderr = False
    +
    +[ironic]
    +api_endpoint = http://IRONIC_API_HOST_ADDRESS:6385
    +auth_type = password
    +auth_url = http://PUBLIC_IDENTITY_IP:5000
    +auth_strategy = keystone
    +ironic_url = http://IRONIC_API_HOST_ADDRESS:6385
    +os_region = RegionOne
    +project_name = service
    +project_domain_name = Default
    +user_domain_name = Default
    +username = IRONIC_SERVICE_USER_NAME
    +password = IRONIC_SERVICE_USER_PASSWORD
    +
    +[keystone_authtoken]
    +auth_type = password
    +auth_url = http://controller:5000
    +www_authenticate_uri = http://controller:5000
    +project_domain_name = default
    +user_domain_name = default
    +project_name = service
    +username = ironic_inspector
    +password = IRONICPASSWD
    +region_name = RegionOne
    +memcache_servers = controller:11211
    +token_cache_time = 300
    +
    +[processing]
    +add_ports = active
    +processing_hooks = $default_processing_hooks,local_link_connection,lldp_basic
    +ramdisk_logs_dir = /var/log/ironic-inspector/ramdisk
    +always_store_ramdisk_logs = true
    +store_data =none
    +power_off = false
    +
    +[pxe_filter]
    +driver = iptables
    +
    +[capabilities]
    +boot_mode=True
    +
      +
    • 配置ironic inspector dnsmasq服务
    • +
    +
    # 配置文件地址:/etc/ironic-inspector/dnsmasq.conf
    +port=0
    +interface=enp3s0                         #替换为实际监听网络接口
    +dhcp-range=192.168.0.40,192.168.0.50   #替换为实际dhcp地址范围
    +bind-interfaces
    +enable-tftp
    +
    +dhcp-match=set:efi,option:client-arch,7
    +dhcp-match=set:efi,option:client-arch,9
    +dhcp-match=aarch64, option:client-arch,11
    +dhcp-boot=tag:aarch64,grubaa64.efi
    +dhcp-boot=tag:!aarch64,tag:efi,grubx64.efi
    +dhcp-boot=tag:!aarch64,tag:!efi,pxelinux.0
    +
    +tftp-root=/tftpboot                       #替换为实际tftpboot目录
    +log-facility=/var/log/dnsmasq.log
    +
      +
    • 关闭ironic provision网络子网的dhcp(将下面命令中的子网ID替换为实际provision子网的ID)
    • +
    +
    openstack subnet set --no-dhcp 72426e89-f552-4dc4-9ac7-c4e131ce7f3c
    +
      +
    • 初始化ironic-inspector服务的数据库
    • +
    +
    ironic-inspector-dbsync --config-file /etc/ironic-inspector/inspector.conf upgrade
    +
      +
    • 启动服务
    • +
    +
    systemctl enable --now openstack-ironic-inspector.service
    +systemctl enable --now openstack-ironic-inspector-dnsmasq.service
    +
  12. +
  13. +

    配置httpd服务

    +
      +
    • 创建ironic要使用的httpd的root目录并设置属主属组,目录路径要和/etc/ironic/ironic.conf中[deploy]组中http_root 配置项指定的路径要一致。
    • +
    +
    mkdir -p /var/lib/ironic/httproot
    +chown ironic.ironic /var/lib/ironic/httproot
    +
      +
    • +

      安装和配置httpd服务

      +
        +
      • 安装httpd服务,已有请忽略
      • +
      +
      dnf install httpd -y
      +
        +
      • 创建/etc/httpd/conf.d/openstack-ironic-httpd.conf文件,内容如下:
      • +
      +
      Listen 8080
      +
      +<VirtualHost *:8080>
      +    ServerName ironic.openeuler.com
      +
      +    ErrorLog "/var/log/httpd/openstack-ironic-httpd-error_log"
      +    CustomLog "/var/log/httpd/openstack-ironic-httpd-access_log" "%h %l %u %t \"%r\" %>s %b"
      +
      +    DocumentRoot "/var/lib/ironic/httproot"
      +    <Directory "/var/lib/ironic/httproot">
      +        Options Indexes FollowSymLinks
      +        Require all granted
      +    </Directory>
      +    LogLevel warn
      +    AddDefaultCharset UTF-8
      +    EnableSendfile on
      +</VirtualHost>
      +

      注意监听的端口要和/etc/ironic/ironic.conf里[deploy]选项中http_url配置项中指定的端口一致。

      +
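
      作为参照,/etc/ironic/ironic.conf中[deploy]部分可按如下方式配置http_root与http_url(IP与端口仅为示例,端口需与上面httpd监听的端口一致):

      [deploy]
      http_root = /var/lib/ironic/httproot
      http_url = http://192.168.0.2:8080
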
        +
      • 重启httpd服务。
      • +
      +
      systemctl restart httpd
      +
    • +
    +
  14. +
  15. +

    deploy ramdisk镜像下载或制作

    +

    部署一个裸机节点总共需要两组镜像:deploy ramdisk images和user images。Deploy ramdisk images上运行有ironic-python-agent(IPA)服务,Ironic通过它进行裸机节点的环境准备。User images是最终被安装裸机节点上,供用户使用的镜像。

    +

    ramdisk镜像支持通过ironic-python-agent-builder或disk-image-builder工具制作。用户也可以自行选择其他工具制作。若使用原生工具,则需要安装对应的软件包。

    +

    具体的使用方法可以参考官方文档,同时官方也有提供制作好的deploy镜像,可尝试下载。

    +

    下文介绍通过ironic-python-agent-builder构建ironic使用的deploy镜像的完整过程。

    +
      +
    • 安装 ironic-python-agent-builder
    • +
    +
    dnf install python3-ironic-python-agent-builder
    +
    +# 或者使用pip安装:
    +pip3 install ironic-python-agent-builder
    +dnf install qemu-img git
    +
      +
    • 制作镜像
    • +
    +

    基本用法:

    +
    usage: ironic-python-agent-builder [-h] [-r RELEASE] [-o OUTPUT] [-e ELEMENT] [-b BRANCH]
    +                            [-v] [--lzma] [--extra-args EXTRA_ARGS]
    +                            [--elements-path ELEMENTS_PATH]
    +                            distribution
    +
    +positional arguments:
    +    distribution          Distribution to use
    +
    +options:
    +    -h, --help            show this help message and exit
    +    -r RELEASE, --release RELEASE
    +                        Distribution release to use
    +    -o OUTPUT, --output OUTPUT
    +                        Output base file name
    +    -e ELEMENT, --element ELEMENT
    +                        Additional DIB element to use
    +    -b BRANCH, --branch BRANCH
    +                        If set, override the branch that is used for         ironic-python-agent
    +                        and requirements
    +    -v, --verbose         Enable verbose logging in diskimage-builder
    +    --lzma                Use lzma compression for smaller images
    +    --extra-args EXTRA_ARGS
    +                        Extra arguments to pass to diskimage-builder
    +    --elements-path ELEMENTS_PATH
    +                        Path(s) to custom DIB elements separated by a colon
    +

    操作实例:

    +
    # -o选项指定生成的镜像名
    +# ubuntu指定生成ubuntu系统的镜像
    +ironic-python-agent-builder -o my-ubuntu-ipa ubuntu
    +

    可通过设置ARCH环境变量(默认为amd64)指定所构建镜像的架构。如果是arm架构,需要添加:

    +
    export ARCH=aarch64
    +
      +
    • 允许ssh登录
    • +
    +

    初始化环境变量,设置用户名、密码,启用sudo权限;并添加-e选项使用相应的DIB元素。制作镜像操作如下:

    +
    export DIB_DEV_USER_USERNAME=ipa
    +export DIB_DEV_USER_PWDLESS_SUDO=yes
    +export DIB_DEV_USER_PASSWORD='123'
    +ironic-python-agent-builder -o my-ssh-ubuntu-ipa -e selinux-permissive -e devuser ubuntu
    +
      +
    • 指定代码仓库
    • +
    +

    初始化对应的环境变量,然后制作镜像:

    +
    # 直接从gerrit上clone代码
    +DIB_REPOLOCATION_ironic_python_agent=https://opendev.org/openstack/ironic-python-agent
    +DIB_REPOREF_ironic_python_agent=stable/2023.1
    +
    +# 指定本地仓库及分支
    +DIB_REPOLOCATION_ironic_python_agent=/home/user/path/to/repo
    +DIB_REPOREF_ironic_python_agent=my-test-branch
    +
    +ironic-python-agent-builder ubuntu
    +

    参考:source-repositories

    +
  16. +
  17. +

    注意

    +

    原生OpenStack里的pxe配置文件模板不支持arm64架构,需要自行对原生OpenStack代码进行修改:在W版中,社区的ironic仍然不支持arm64的uefi pxe启动,表现为生成的grub.cfg文件(一般位于/tftpboot/下)格式不正确,导致pxe启动失败。

    +

    生成的错误配置文件:

    +

    ironic-err

    +

    如上图所示,arm架构里寻找vmlinux和ramdisk镜像的命令分别是linux和initrd,上图所示的标红命令是x86架构下的uefi pxe启动。

    +

    需要用户对生成grub.cfg的代码逻辑自行修改。

    +

    ironic向ipa发送查询命令执行状态请求的tls报错:

    +

    当前版本的ipa和ironic默认都会开启tls认证的方式向对方发送请求,根据官网的说明进行关闭即可。

    +
      +
    • 修改ironic配置文件(/etc/ironic/ironic.conf)下面的配置中添加ipa-insecure=1:
    • +
    +
    [agent]
    +verify_ca = False
    +[pxe]
    +pxe_append_params = nofb nomodeset vga=normal coreos.autologin ipa-insecure=1
    +
      +
    • ramdisk镜像中添加ipa配置文件/etc/ironic_python_agent/ironic_python_agent.conf并配置tls的配置如下:
    • +
    +

    /etc/ironic_python_agent/ironic_python_agent.conf (需要提前创建/etc/ironic_python_agent目录)

    +
    [DEFAULT]
    +enable_auto_tls = False
    +

    设置权限:

    +
    chown -R ipa.ipa /etc/ironic_python_agent/
    +
      +
    • ramdisk镜像中修改ipa服务的服务启动文件,添加配置文件选项
    • +
    +

    编辑/usr/lib/systemd/system/ironic-python-agent.service文件

    +
    [Unit]
    +Description=Ironic Python Agent
    +After=network-online.target
    +[Service]
    +ExecStartPre=/sbin/modprobe vfat
    +ExecStart=/usr/local/bin/ironic-python-agent --config-file /etc/ironic_python_agent/ironic_python_agent.conf
    +Restart=always
    +RestartSec=30s
    +[Install]
    +WantedBy=multi-user.target
    +
  18. +
+

Trove

+

Trove是OpenStack的数据库服务,如果用户使用OpenStack提供的数据库服务则推荐使用该组件。否则,可以不用安装。

+

Controller节点

+
    +
  1. +

    创建数据库。

    +

    数据库服务在数据库中存储信息,创建一个trove用户可以访问的trove数据库,替换TROVE_DBPASS为合适的密码。

    +
    CREATE DATABASE trove CHARACTER SET utf8;
    +GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'localhost' IDENTIFIED BY 'TROVE_DBPASS';
    +GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'%' IDENTIFIED BY 'TROVE_DBPASS';
    +
  2. +
  3. +

    创建服务凭证以及API端点。

    +

    创建服务凭证。

    +
    # 创建trove用户
    +openstack user create --domain default --password-prompt trove
    +# 添加admin角色
    +openstack role add --project service --user trove admin
    +# 创建database服务
    +openstack service create --name trove --description "Database service" database
    +

    创建API端点。

    +
    openstack endpoint create --region RegionOne database public http://controller:8779/v1.0/%\(tenant_id\)s
    +openstack endpoint create --region RegionOne database internal http://controller:8779/v1.0/%\(tenant_id\)s
    +openstack endpoint create --region RegionOne database admin http://controller:8779/v1.0/%\(tenant_id\)s
    +
  4. +
  5. +

    安装Trove。

    +
    dnf install openstack-trove python-troveclient
    +
  6. +
  7. +

    修改配置文件。

    +

    编辑/etc/trove/trove.conf。

    +
    [DEFAULT]
    +bind_host=192.168.0.2
    +log_dir = /var/log/trove
    +network_driver = trove.network.neutron.NeutronDriver
    +network_label_regex=.*
    +management_security_groups = <manage security group>
    +nova_keypair = trove-mgmt
    +default_datastore = mysql
    +taskmanager_manager = trove.taskmanager.manager.Manager
    +trove_api_workers = 5
    +transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
    +reboot_time_out = 300
    +usage_timeout = 900
    +agent_call_high_timeout = 1200
    +use_syslog = False
    +debug = True
    +
    +[database]
    +connection = mysql+pymysql://trove:TROVE_DBPASS@controller/trove
    +
    +[keystone_authtoken]
    +auth_url = http://controller:5000/v3/
    +auth_type = password
    +project_domain_name = Default
    +project_name = service
    +user_domain_name = Default
    +password = TROVE_PASS
    +username = trove
    +
    +[service_credentials]
    +auth_url = http://controller:5000/v3/
    +region_name = RegionOne
    +project_name = service
    +project_domain_name = Default
    +user_domain_name = Default
    +username = trove
    +password = TROVE_PASS
    +
    +[mariadb]
    +tcp_ports = 3306,4444,4567,4568
    +
    +[mysql]
    +tcp_ports = 3306
    +
    +[postgresql]
    +tcp_ports = 5432
    +

    解释:

    +
    +

    [DEFAULT]分组中bind_host配置为Trove控制节点的IP。
    transport_url为RabbitMQ连接信息,RABBIT_PASS替换为RabbitMQ的密码。
    [database]分组中的connection为前面在mysql中为Trove创建的数据库信息。
    Trove的用户信息中TROVE_PASS替换为实际trove用户的密码。

    +
    +

    编辑/etc/trove/trove-guestagent.conf。

    +
    [DEFAULT]
    +log_file = trove-guestagent.log
    +log_dir = /var/log/trove/
    +ignore_users = os_admin
    +control_exchange = trove
    +transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
    +rpc_backend = rabbit
    +command_process_timeout = 60
    +use_syslog = False
    +debug = True
    +
    +[service_credentials]
    +auth_url = http://controller:5000/v3/
    +region_name = RegionOne
    +project_name = service
    +password = TROVE_PASS
    +project_domain_name = Default
    +user_domain_name = Default
    +username = trove
    +
    +[mysql]
    +docker_image = your-registry/your-repo/mysql
    +backup_docker_image = your-registry/your-repo/db-backup-mysql:1.1.0
    +

    解释:

    +
    +

    guestagent是trove中一个独立组件,需要预先内置到Trove通过Nova创建的虚拟机镜像中。在创建好数据库实例后,会启动guestagent进程,负责通过消息队列(RabbitMQ)向Trove上报心跳,因此需要配置RabbitMQ的用户和密码信息。
    transport_url为RabbitMQ连接信息,RABBIT_PASS替换为RabbitMQ的密码。
    Trove的用户信息中TROVE_PASS替换为实际trove用户的密码。
    从Victoria版开始,Trove使用一个统一的镜像来跑不同类型的数据库,数据库服务运行在Guest虚拟机的Docker容器中。

    +
    +
  8. +
  9. +

    数据库同步。

    +
    su -s /bin/sh -c "trove-manage db_sync" trove
    +
  10. +
  11. +

    完成安装。

    +
    # 配置服务自启
    +systemctl enable openstack-trove-api.service openstack-trove-taskmanager.service \
    +openstack-trove-conductor.service
    +
    +# 启动服务
    +systemctl start openstack-trove-api.service openstack-trove-taskmanager.service \
    +openstack-trove-conductor.service
    +
  12. +
+

Swift

+

Swift 提供了弹性可伸缩、高可用的分布式对象存储服务,适合存储大规模非结构化数据。

+

Controller节点

+
    +
  1. +

    创建服务凭证以及API端点。

    +

    创建服务凭证。

    +
    # 创建swift用户
    +openstack user create --domain default --password-prompt swift
    +# 添加admin角色
    +openstack role add --project service --user swift admin
    +# 创建对象存储服务
    +openstack service create --name swift --description "OpenStack Object Storage" object-store
    +

    创建API端点。

    +
    openstack endpoint create --region RegionOne object-store public http://controller:8080/v1/AUTH_%\(project_id\)s
    +openstack endpoint create --region RegionOne object-store internal http://controller:8080/v1/AUTH_%\(project_id\)s
    +openstack endpoint create --region RegionOne object-store admin http://controller:8080/v1 
    +
  2. +
  3. +

    安装Swift。

    +
    dnf install openstack-swift-proxy python3-swiftclient python3-keystoneclient \
    +python3-keystonemiddleware memcached
    +
  4. +
  5. +

    配置proxy-server。

    +

    Swift RPM包里已经包含了一个基本可用的proxy-server.conf,只需要手动修改其中的ip和SWIFT_PASS即可。

    +
    vim /etc/swift/proxy-server.conf
    +
    +[filter:authtoken]
    +paste.filter_factory = keystonemiddleware.auth_token:filter_factory
    +www_authenticate_uri = http://controller:5000
    +auth_url = http://controller:5000
    +memcached_servers = controller:11211
    +auth_type = password
    +project_domain_id = default
    +user_domain_id = default
    +project_name = service
    +username = swift
    +password = SWIFT_PASS
    +delay_auth_decision = True
    +service_token_roles_required = True
    +
  6. +
+

#### Storage node

1. Install the supporting packages.

    ```
    dnf install openstack-swift-account openstack-swift-container openstack-swift-object
    dnf install xfsprogs rsync
    ```

2. Format the /dev/sdb and /dev/sdc devices as XFS.

    ```
    mkfs.xfs /dev/sdb
    mkfs.xfs /dev/sdc
    ```

3. Create the mount point directory structure.

    ```
    mkdir -p /srv/node/sdb
    mkdir -p /srv/node/sdc
    ```

4. Find the UUIDs of the new partitions.

    ```
    blkid
    ```

5. Edit the /etc/fstab file and add the following lines to it (one way to automate this is sketched below).

    ```
    UUID="<UUID-from-output-above>" /srv/node/sdb xfs noatime 0 2
    UUID="<UUID-from-output-above>" /srv/node/sdc xfs noatime 0 2
    ```
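    If you prefer not to copy the UUIDs by hand, a small loop can generate the entries. This is only a sketch and assumes the two devices named above:

    ```
    # Append an fstab entry for each Swift data disk, using the UUID reported by blkid
    for dev in sdb sdc; do
        uuid=$(blkid -s UUID -o value /dev/$dev)
        echo "UUID=\"$uuid\" /srv/node/$dev xfs noatime 0 2" >> /etc/fstab
    done
    ```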
6. Mount the devices.

    ```
    mount /srv/node/sdb
    mount /srv/node/sdc
    ```

    Note

    If you do not need the disaster-recovery capability, only one device needs to be created in the steps above, and the rsync configuration below can be skipped.

7. (Optional) Create or edit the /etc/rsyncd.conf file so that it contains the following:

    ```
    [DEFAULT]
    uid = swift
    gid = swift
    log file = /var/log/rsyncd.log
    pid file = /var/run/rsyncd.pid
    address = MANAGEMENT_INTERFACE_IP_ADDRESS

    [account]
    max connections = 2
    path = /srv/node/
    read only = False
    lock file = /var/lock/account.lock

    [container]
    max connections = 2
    path = /srv/node/
    read only = False
    lock file = /var/lock/container.lock

    [object]
    max connections = 2
    path = /srv/node/
    read only = False
    lock file = /var/lock/object.lock
    ```

    Replace MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network on the storage node.

    Start the rsyncd service and configure it to start at boot:

    ```
    systemctl enable rsyncd.service
    systemctl start rsyncd.service
    ```

8. Configure the storage node.

    Edit the account-server.conf, container-server.conf, and object-server.conf files in the /etc/swift directory, replacing bind_ip with the IP address of the management network on the storage node.

    ```
    [DEFAULT]
    bind_ip = 192.168.0.4
    ```

    Ensure proper ownership of the mount point directory structure.

    ```
    chown -R swift:swift /srv/node
    ```

    Create the recon directory and ensure proper ownership of it.

    ```
    mkdir -p /var/cache/swift
    chown -R root:swift /var/cache/swift
    chmod -R 775 /var/cache/swift
    ```

#### Create and distribute the rings on the controller node

1. Create the account ring.

    Change to the /etc/swift directory.

    ```
    cd /etc/swift
    ```

    Create the base account.builder file.

    ```
    swift-ring-builder account.builder create 10 1 1
    ```

    Add each storage node to the ring.

    ```
    swift-ring-builder account.builder add --region 1 --zone 1 \
    --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS \
    --port 6202 --device DEVICE_NAME \
    --weight 100
    ```

    Replace STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network on the storage node, and DEVICE_NAME with the name of a storage device on that node.

    Note

    Repeat this command for every storage device on every storage node (see the sketch at the end of this step).

    Verify the contents of the account ring.

    ```
    swift-ring-builder account.builder
    ```

    Rebalance the account ring.

    ```
    swift-ring-builder account.builder rebalance
    ```
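    For example, with two storage nodes (192.168.0.4 from the example above and a hypothetical 192.168.0.5) that each contribute the sdb and sdc devices, the account ring could be populated with a loop like this sketch:

    ```
    # One "add" call per device per storage node; addresses and device names are examples
    for ip in 192.168.0.4 192.168.0.5; do
        for dev in sdb sdc; do
            swift-ring-builder account.builder add \
                --region 1 --zone 1 --ip $ip --port 6202 \
                --device $dev --weight 100
        done
    done
    ```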
2. Create the container ring.

    Change to the /etc/swift directory.

    Create the base container.builder file.

    ```
    swift-ring-builder container.builder create 10 1 1
    ```

    Add each storage node to the ring.

    ```
    swift-ring-builder container.builder add --region 1 --zone 1 \
    --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS \
    --port 6201 --device DEVICE_NAME \
    --weight 100
    ```

    Replace STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network on the storage node, and DEVICE_NAME with the name of a storage device on that node.

    Note

    Repeat this command for every storage device on every storage node.

    Verify the contents of the container ring.

    ```
    swift-ring-builder container.builder
    ```

    Rebalance the container ring.

    ```
    swift-ring-builder container.builder rebalance
    ```
3. Create the object ring.

    Change to the /etc/swift directory.

    Create the base object.builder file.

    ```
    swift-ring-builder object.builder create 10 1 1
    ```

    Add each storage node to the ring.

    ```
    swift-ring-builder object.builder add --region 1 --zone 1 \
    --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS \
    --port 6200 --device DEVICE_NAME \
    --weight 100
    ```

    Replace STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network on the storage node, and DEVICE_NAME with the name of a storage device on that node.

    Note

    Repeat this command for every storage device on every storage node.

    Verify the contents of the object ring.

    ```
    swift-ring-builder object.builder
    ```

    Rebalance the object ring.

    ```
    swift-ring-builder object.builder rebalance
    ```
4. Distribute the ring configuration files.

    Copy the account.ring.gz, container.ring.gz, and object.ring.gz files to the /etc/swift directory on every storage node and on any other node running the proxy service.

5. Edit the /etc/swift/swift.conf configuration file.

    ```
    [swift-hash]
    swift_hash_path_suffix = test-hash
    swift_hash_path_prefix = test-hash

    [storage-policy:0]
    name = Policy-0
    default = yes
    ```

    Replace test-hash with unique values (one way to generate them is sketched below).
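    The hash path prefix and suffix only need to be unique, kept secret, and identical on every node. As an example (not the only option), random values can be generated with openssl:

    ```
    # Generate one random value for the suffix and one for the prefix,
    # then paste them into /etc/swift/swift.conf on every node
    openssl rand -hex 16
    openssl rand -hex 16
    ```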

    Copy the swift.conf file to the /etc/swift directory on every storage node and on any other node running the proxy service.

    On all nodes, ensure proper ownership of the configuration directory.

    ```
    chown -R root:swift /etc/swift
    ```
6. Finish the installation.

    On the controller node and on any other node running the proxy service, start the object storage proxy service and its dependencies, and configure them to start at boot.

    ```
    systemctl enable openstack-swift-proxy.service memcached.service
    systemctl start openstack-swift-proxy.service memcached.service
    ```

    On the storage nodes, start the object storage services and configure them to start at boot.

    ```
    systemctl enable openstack-swift-account.service \
    openstack-swift-account-auditor.service \
    openstack-swift-account-reaper.service \
    openstack-swift-account-replicator.service \
    openstack-swift-container.service \
    openstack-swift-container-auditor.service \
    openstack-swift-container-replicator.service \
    openstack-swift-container-updater.service \
    openstack-swift-object.service \
    openstack-swift-object-auditor.service \
    openstack-swift-object-replicator.service \
    openstack-swift-object-updater.service

    systemctl start openstack-swift-account.service \
    openstack-swift-account-auditor.service \
    openstack-swift-account-reaper.service \
    openstack-swift-account-replicator.service \
    openstack-swift-container.service \
    openstack-swift-container-auditor.service \
    openstack-swift-container-replicator.service \
    openstack-swift-container-updater.service \
    openstack-swift-object.service \
    openstack-swift-object-auditor.service \
    openstack-swift-object-replicator.service \
    openstack-swift-object-updater.service
    ```
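A quick way to confirm the deployment works end to end is to upload and retrieve a test object with the swift and openstack clients installed above. This is only a sketch; the container and file names are arbitrary examples.

```
source ~/.admin-openrc

# Show account statistics through the proxy (verifies auth and the account ring)
swift stat

# Create a container, upload a file, and list it back
openstack container create test-container
openstack object create test-container /etc/hosts
openstack object list test-container
```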

### Cyborg

Cyborg provides accelerator device support for OpenStack, covering GPUs, FPGAs, ASICs, NPs, SoCs, NVMe/NOF SSDs, ODP, DPDK/SPDK, and more.

#### Controller node

1. Initialize the database.

    ```
    mysql -u root -p

    MariaDB [(none)]> CREATE DATABASE cyborg;
    MariaDB [(none)]> GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'localhost' IDENTIFIED BY 'CYBORG_DBPASS';
    MariaDB [(none)]> GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'%' IDENTIFIED BY 'CYBORG_DBPASS';
    MariaDB [(none)]> exit;
    ```
2. Create the user and service. Remember the password entered when creating the cyborg user; it is used below as CYBORG_PASS.

    ```
    source ~/.admin-openrc
    openstack user create --domain default --password-prompt cyborg
    openstack role add --project service --user cyborg admin
    openstack service create --name cyborg --description "Acceleration Service" accelerator
    ```

3. Create the API endpoints for the Cyborg API service (deployed behind uwsgi).

    ```
    openstack endpoint create --region RegionOne accelerator public http://controller/accelerator/v2
    openstack endpoint create --region RegionOne accelerator internal http://controller/accelerator/v2
    openstack endpoint create --region RegionOne accelerator admin http://controller/accelerator/v2
    ```

4. Install Cyborg.

    ```
    dnf install openstack-cyborg
    ```
5. Configure Cyborg.

    Edit /etc/cyborg/cyborg.conf:

    ```
    [DEFAULT]
    transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
    use_syslog = False
    state_path = /var/lib/cyborg
    debug = True

    [api]
    host_ip = 0.0.0.0

    [database]
    connection = mysql+pymysql://cyborg:CYBORG_DBPASS@controller/cyborg

    [service_catalog]
    cafile = /opt/stack/data/ca-bundle.pem
    project_domain_id = default
    user_domain_id = default
    project_name = service
    password = CYBORG_PASS
    username = cyborg
    auth_url = http://controller:5000/v3/
    auth_type = password

    [placement]
    project_domain_name = Default
    project_name = service
    user_domain_name = Default
    password = PLACEMENT_PASS
    username = placement
    auth_url = http://controller:5000/v3/
    auth_type = password
    auth_section = keystone_authtoken

    [nova]
    project_domain_name = Default
    project_name = service
    user_domain_name = Default
    password = NOVA_PASS
    username = nova
    auth_url = http://controller:5000/v3/
    auth_type = password
    auth_section = keystone_authtoken

    [keystone_authtoken]
    memcached_servers = localhost:11211
    signing_dir = /var/cache/cyborg/api
    cafile = /opt/stack/data/ca-bundle.pem
    project_domain_name = Default
    project_name = service
    user_domain_name = Default
    password = CYBORG_PASS
    username = cyborg
    auth_url = http://controller:5000/v3/
    auth_type = password
    ```

    Replace RABBIT_PASS, CYBORG_DBPASS, CYBORG_PASS, PLACEMENT_PASS, and NOVA_PASS with the passwords of the corresponding services and users.
6. Sync the database tables.

    ```
    cyborg-dbsync --config-file /etc/cyborg/cyborg.conf upgrade
    ```

7. Start the Cyborg services.

    ```
    systemctl enable openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent
    systemctl start openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent
    ```
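To confirm that the services came up and that the endpoints are registered in the catalog, the following checks can be used; this is a minimal sketch and assumes the admin credential file used earlier.

```
# All three Cyborg services should be active
systemctl status openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent

# The accelerator endpoints created above should be listed in the catalog
source ~/.admin-openrc
openstack endpoint list --service accelerator
```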

### Aodh

Aodh creates alarms based on the monitoring data collected by Ceilometer or Gnocchi, with configurable trigger rules.

#### Controller node

1. Create the database.

    ```
    CREATE DATABASE aodh;
    GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'localhost' IDENTIFIED BY 'AODH_DBPASS';
    GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'%' IDENTIFIED BY 'AODH_DBPASS';
    ```

2. Create the service credentials and API endpoints.

    Create the service credentials:

    ```
    openstack user create --domain default --password-prompt aodh
    openstack role add --project service --user aodh admin
    openstack service create --name aodh --description "Telemetry" alarming
    ```

    Create the API endpoints:

    ```
    openstack endpoint create --region RegionOne alarming public http://controller:8042
    openstack endpoint create --region RegionOne alarming internal http://controller:8042
    openstack endpoint create --region RegionOne alarming admin http://controller:8042
    ```
3. Install Aodh.

    ```
    dnf install openstack-aodh-api openstack-aodh-evaluator \
    openstack-aodh-notifier openstack-aodh-listener \
    openstack-aodh-expirer python3-aodhclient
    ```

4. Modify the configuration file.

    ```
    vim /etc/aodh/aodh.conf

    [database]
    connection = mysql+pymysql://aodh:AODH_DBPASS@controller/aodh

    [DEFAULT]
    transport_url = rabbit://openstack:RABBIT_PASS@controller
    auth_strategy = keystone

    [keystone_authtoken]
    www_authenticate_uri = http://controller:5000
    auth_url = http://controller:5000
    memcached_servers = controller:11211
    auth_type = password
    project_domain_id = default
    user_domain_id = default
    project_name = service
    username = aodh
    password = AODH_PASS

    [service_credentials]
    auth_type = password
    auth_url = http://controller:5000/v3
    project_domain_id = default
    user_domain_id = default
    project_name = service
    username = aodh
    password = AODH_PASS
    interface = internalURL
    region_name = RegionOne
    ```
5. Synchronize the database.

    ```
    aodh-dbsync
    ```

6. Finish the installation.

    ```
    # Enable the services at boot
    systemctl enable openstack-aodh-api.service openstack-aodh-evaluator.service \
    openstack-aodh-notifier.service openstack-aodh-listener.service

    # Start the services
    systemctl start openstack-aodh-api.service openstack-aodh-evaluator.service \
    openstack-aodh-notifier.service openstack-aodh-listener.service
    ```
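Once the services are running, the aodh client installed above can be used for a quick check; the call below is a sketch and assumes the admin credential file used earlier. An empty alarm list (rather than an authentication or connection error) indicates that the API, the database, and the Keystone integration are working.

```
source ~/.admin-openrc
aodh alarm list
```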

### Gnocchi

Gnocchi is an open source time series database that can be integrated with Ceilometer as its storage backend.

#### Controller node

1. Create the database.

    ```
    CREATE DATABASE gnocchi;
    GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'localhost' IDENTIFIED BY 'GNOCCHI_DBPASS';
    GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'%' IDENTIFIED BY 'GNOCCHI_DBPASS';
    ```

2. Create the service credentials and API endpoints.

    Create the service credentials:

    ```
    openstack user create --domain default --password-prompt gnocchi
    openstack role add --project service --user gnocchi admin
    openstack service create --name gnocchi --description "Metric Service" metric
    ```

    Create the API endpoints:

    ```
    openstack endpoint create --region RegionOne metric public http://controller:8041
    openstack endpoint create --region RegionOne metric internal http://controller:8041
    openstack endpoint create --region RegionOne metric admin http://controller:8041
    ```
3. Install Gnocchi.

    ```
    dnf install openstack-gnocchi-api openstack-gnocchi-metricd python3-gnocchiclient
    ```

4. Modify the configuration file.

    ```
    vim /etc/gnocchi/gnocchi.conf

    [api]
    auth_mode = keystone
    port = 8041
    uwsgi_mode = http-socket

    [keystone_authtoken]
    auth_type = password
    auth_url = http://controller:5000/v3
    project_domain_name = Default
    user_domain_name = Default
    project_name = service
    username = gnocchi
    password = GNOCCHI_PASS
    interface = internalURL
    region_name = RegionOne

    [indexer]
    url = mysql+pymysql://gnocchi:GNOCCHI_DBPASS@controller/gnocchi

    [storage]
    # coordination_url is not required but specifying one will improve
    # performance with better workload division across workers.
    # coordination_url = redis://controller:6379
    file_basepath = /var/lib/gnocchi
    driver = file
    ```
5. Synchronize the database.

    ```
    gnocchi-upgrade
    ```

6. Finish the installation.

    ```
    # Enable the services at boot
    systemctl enable openstack-gnocchi-api.service openstack-gnocchi-metricd.service

    # Start the services
    systemctl start openstack-gnocchi-api.service openstack-gnocchi-metricd.service
    ```
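The gnocchi client installed above can confirm that the API answers and that the indexer database is reachable. This is only a sketch; both lists will be empty until Ceilometer (configured in the next section) starts publishing measures.

```
source ~/.admin-openrc
gnocchi resource list
gnocchi metric list
```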

### Ceilometer

Ceilometer is the OpenStack service responsible for collecting metering data.

#### Controller node

1. Create the service credentials.

    ```
    openstack user create --domain default --password-prompt ceilometer
    openstack role add --project service --user ceilometer admin
    openstack service create --name ceilometer --description "Telemetry" metering
    ```

2. Install the Ceilometer packages.

    ```
    dnf install openstack-ceilometer-notification openstack-ceilometer-central
    ```

3. Edit the /etc/ceilometer/pipeline.yaml configuration file.

    ```
    publishers:
        # set address of Gnocchi
        # + filter out Gnocchi-related activity meters (Swift driver)
        # + set default archive policy
        - gnocchi://?filter_project=service&archive_policy=low
    ```
4. Edit the /etc/ceilometer/ceilometer.conf configuration file.

    ```
    [DEFAULT]
    transport_url = rabbit://openstack:RABBIT_PASS@controller

    [service_credentials]
    auth_type = password
    auth_url = http://controller:5000/v3
    project_domain_id = default
    user_domain_id = default
    project_name = service
    username = ceilometer
    password = CEILOMETER_PASS
    interface = internalURL
    region_name = RegionOne
    ```

5. Synchronize the database.

    ```
    ceilometer-upgrade
    ```

6. Finish the Ceilometer installation on the controller node.

    ```
    # Enable the services at boot
    systemctl enable openstack-ceilometer-notification.service openstack-ceilometer-central.service
    # Start the services
    systemctl start openstack-ceilometer-notification.service openstack-ceilometer-central.service
    ```

#### Compute node

1. Install the Ceilometer packages.

    ```
    dnf install openstack-ceilometer-compute
    dnf install openstack-ceilometer-ipmi       # optional
    ```

2. Edit the /etc/ceilometer/ceilometer.conf configuration file.

    ```
    [DEFAULT]
    transport_url = rabbit://openstack:RABBIT_PASS@controller

    [service_credentials]
    auth_url = http://controller:5000
    project_domain_id = default
    user_domain_id = default
    auth_type = password
    username = ceilometer
    project_name = service
    password = CEILOMETER_PASS
    interface = internalURL
    region_name = RegionOne
    ```
3. Edit the /etc/nova/nova.conf configuration file.

    ```
    [DEFAULT]
    instance_usage_audit = True
    instance_usage_audit_period = hour

    [notifications]
    notify_on_state_change = vm_and_task_state

    [oslo_messaging_notifications]
    driver = messagingv2
    ```

4. Finish the installation.

    ```
    systemctl enable openstack-ceilometer-compute.service
    systemctl start openstack-ceilometer-compute.service
    systemctl enable openstack-ceilometer-ipmi.service         # optional
    systemctl start openstack-ceilometer-ipmi.service          # optional

    # Restart the nova-compute service
    systemctl restart openstack-nova-compute.service
    ```
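After the compute agent has been running for a few minutes, instance metrics should start appearing in Gnocchi. The check below is a sketch run from the controller node; it assumes at least one instance exists and that the gnocchi client and admin credentials from the earlier sections are available.

```
source ~/.admin-openrc

# Resources of type "instance" are created by Ceilometer as it discovers VMs
gnocchi resource list --type instance

# Each resource carries metrics such as CPU and memory usage
gnocchi metric list
```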

### Heat

Heat is the OpenStack orchestration service (also known as the Orchestration Service). It orchestrates composite cloud applications from declarative templates. The Heat services are usually installed on the controller node.

#### Controller node

1. Create the heat database and grant it the proper access rights, replacing HEAT_DBPASS with a suitable password.

    ```
    mysql -u root -p

    MariaDB [(none)]> CREATE DATABASE heat;
    MariaDB [(none)]> GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'localhost' IDENTIFIED BY 'HEAT_DBPASS';
    MariaDB [(none)]> GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'%' IDENTIFIED BY 'HEAT_DBPASS';
    MariaDB [(none)]> exit;
    ```

2. Create the service credentials: create the heat user and add the admin role to it.

    ```
    source ~/.admin-openrc

    openstack user create --domain default --password-prompt heat
    openstack role add --project service --user heat admin
    ```

3. Create the heat and heat-cfn services and their API endpoints.

    ```
    openstack service create --name heat --description "Orchestration" orchestration
    openstack service create --name heat-cfn --description "Orchestration" cloudformation
    openstack endpoint create --region RegionOne orchestration public http://controller:8004/v1/%\(tenant_id\)s
    openstack endpoint create --region RegionOne orchestration internal http://controller:8004/v1/%\(tenant_id\)s
    openstack endpoint create --region RegionOne orchestration admin http://controller:8004/v1/%\(tenant_id\)s
    openstack endpoint create --region RegionOne cloudformation public http://controller:8000/v1
    openstack endpoint create --region RegionOne cloudformation internal http://controller:8000/v1
    openstack endpoint create --region RegionOne cloudformation admin http://controller:8000/v1
    ```
4. Create the additional information required for stack management.

    Create the heat domain:

    ```
    openstack domain create --description "Stack projects and users" heat
    ```

    Create the heat_domain_admin user in the heat domain, and note the password you enter; it is used below as HEAT_DOMAIN_PASS:

    ```
    openstack user create --domain heat --password-prompt heat_domain_admin
    ```

    Add the admin role to the heat_domain_admin user:

    ```
    openstack role add --domain heat --user-domain heat --user heat_domain_admin admin
    ```

    Create the heat_stack_owner role:

    ```
    openstack role create heat_stack_owner
    ```

    Create the heat_stack_user role:

    ```
    openstack role create heat_stack_user
    ```
5. Install the packages.

    ```
    dnf install openstack-heat-api openstack-heat-api-cfn openstack-heat-engine
    ```

6. Modify the /etc/heat/heat.conf configuration file.

    ```
    [DEFAULT]
    transport_url = rabbit://openstack:RABBIT_PASS@controller
    heat_metadata_server_url = http://controller:8000
    heat_waitcondition_server_url = http://controller:8000/v1/waitcondition
    stack_domain_admin = heat_domain_admin
    stack_domain_admin_password = HEAT_DOMAIN_PASS
    stack_user_domain_name = heat

    [database]
    connection = mysql+pymysql://heat:HEAT_DBPASS@controller/heat

    [keystone_authtoken]
    www_authenticate_uri = http://controller:5000
    auth_url = http://controller:5000
    memcached_servers = controller:11211
    auth_type = password
    project_domain_name = default
    user_domain_name = default
    project_name = service
    username = heat
    password = HEAT_PASS

    [trustee]
    auth_type = password
    auth_url = http://controller:5000
    username = heat
    password = HEAT_PASS
    user_domain_name = default

    [clients_keystone]
    auth_uri = http://controller:5000
    ```
7. Initialize the heat database tables.

    ```
    su -s /bin/sh -c "heat-manage db_sync" heat
    ```

8. Start the services.

    ```
    systemctl enable openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service
    systemctl start openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service
    ```
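A minimal stack makes a convenient smoke test for the whole Heat pipeline. The sketch below creates a stack from a trivial template; it assumes the python3-heatclient package (which provides the `openstack stack` commands) is installed, and the template only defines an output, so no cloud resources are consumed.

```
source ~/.admin-openrc

# A trivial HOT template with no resources, only an output value
cat > /tmp/hello.yaml <<'EOF'
heat_template_version: 2018-08-31
description: Minimal template used only to verify the Heat installation
outputs:
  hello:
    value: "heat is working"
EOF

openstack stack create -t /tmp/hello.yaml hello-stack
openstack stack list
openstack stack delete --yes hello-stack
```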

### Tempest

Tempest is the OpenStack integration test suite. It is recommended if you want comprehensive, automated functional testing of an installed OpenStack environment; otherwise it does not need to be installed.

#### Controller node

1. Install Tempest.

    ```
    dnf install openstack-tempest
    ```

2. Initialize a workspace.

    ```
    tempest init mytest
    ```

3. Modify the configuration file.

    ```
    cd mytest
    vi etc/tempest.conf
    ```

    tempest.conf has to be filled in with the details of the current OpenStack environment; refer to the official sample configuration for the available options.

4. Run the tests.

    ```
    tempest run
    ```

5. (Optional) Install Tempest extensions. The individual OpenStack services also ship their own Tempest test packages, which can be installed to extend the test coverage. In Antelope, we provide the extension tests for Cinder, Glance, Keystone, Ironic, and Trove; install and use them with:

    ```
    dnf install python3-cinder-tempest-plugin python3-glance-tempest-plugin python3-ironic-tempest-plugin python3-keystone-tempest-plugin python3-trove-tempest-plugin
    ```
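A full `tempest run` can take a long time, so it is often more practical to start with a subset. The invocations below are examples only; the regular expression can be adjusted to whichever service you want to exercise.

```
cd mytest

# Run only the smoke-tagged tests
tempest run --smoke

# Run only the Keystone (identity) API tests
tempest run --regex '^tempest\.api\.identity'
```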

### Deployment with the OpenStack SIG development tool oos

oos (openEuler OpenStack SIG) is the command-line tool provided by the OpenStack SIG. Its `oos env` subcommands wrap Ansible playbooks for one-click deployment of OpenStack (all-in-one or a three-node cluster), letting you quickly stand up an OpenStack environment based on the openEuler RPM packages. oos can deploy either onto machines provisioned through a cloud provider (currently only the Huawei Cloud provider is supported) or onto hosts you register yourself. The following walks through deploying an all-in-one OpenStack environment on Huawei Cloud as an example.

1. Install the oos tool.

    ```
    yum install openstack-sig-tool
    ```

2. Configure the Huawei Cloud provider information.

    Open the /usr/local/etc/oos/oos.conf file and change the configuration to match the Huawei Cloud resources you own. AK/SK are your Huawei Cloud access keys; the remaining options can be left at their defaults (the Singapore region is used by default). The corresponding resources must be created on the cloud in advance, including:

    - A security group, named oos by default
    - An openEuler image whose name follows the pattern openEuler-%(release)s-%(arch)s, for example openEuler-25.03-arm64
    - A VPC named oos_vpc
    - Two subnets under that VPC, named oos_subnet1 and oos_subnet2

    ```
    [huaweicloud]
    ak = 
    sk = 
    region = ap-southeast-3
    root_volume_size = 100
    data_volume_size = 100
    security_group_name = oos
    image_format = openEuler-%%(release)s-%%(arch)s
    vpc_name = oos_vpc
    subnet1_name = oos_subnet1
    subnet2_name = oos_subnet2
    ```

3. Configure the OpenStack environment information.

    Open the /usr/local/etc/oos/oos.conf file and adjust the configuration to the target machine and your needs. The content looks like this:

    ```
    [environment]
    mysql_root_password = root
    mysql_project_password = root
    rabbitmq_password = root
    project_identity_password = root
    enabled_service = keystone,neutron,cinder,placement,nova,glance,horizon,aodh,ceilometer,cyborg,gnocchi,kolla,heat,swift,trove,tempest
    neutron_provider_interface_name = br-ex
    default_ext_subnet_range = 10.100.100.0/24
    default_ext_subnet_gateway = 10.100.100.1
    neutron_dataplane_interface_name = eth1
    cinder_block_device = vdb
    swift_storage_devices = vdc
    swift_hash_path_suffix = ash
    swift_hash_path_prefix = has
    glance_api_workers = 2
    cinder_api_workers = 2
    nova_api_workers = 2
    nova_metadata_api_workers = 2
    nova_conductor_workers = 2
    nova_scheduler_workers = 2
    neutron_api_workers = 2
    horizon_allowed_host = *
    kolla_openeuler_plugin = false
    ```

    Key options:

    | Option | Description |
    |:--|:--|
    | enabled_service | List of services to install; trim it to your needs |
    | neutron_provider_interface_name | Name of the Neutron L3 bridge |
    | default_ext_subnet_range | IP range of the Neutron private network |
    | default_ext_subnet_gateway | Gateway of the Neutron private network |
    | neutron_dataplane_interface_name | NIC used by Neutron; a dedicated new NIC is recommended to avoid conflicts with the existing NIC, which could cut off access to the all-in-one host |
    | cinder_block_device | Name of the block device used by Cinder |
    | swift_storage_devices | Name of the block device used by Swift |
    | kolla_openeuler_plugin | Whether to enable the Kolla plugin. If set to True, Kolla can deploy openEuler-based containers (supported only on openEuler LTS) |

4. Create an openEuler 25.03 x86_64 VM on Huawei Cloud for the all-in-one OpenStack deployment.

    ```
    # sshpass is used during `oos env create` to set up password-less access to the target VM
    dnf install sshpass
    oos env create -r 25.03 -f small -a x86 -n test-oos all_in_one
    ```

    Run `oos env create --help` to see the full list of parameters.

5. Deploy the all-in-one OpenStack environment.

    ```
    oos env setup test-oos -r antelope
    ```

    Run `oos env setup --help` to see the full list of parameters.

6. Initialize the Tempest environment.

    If you want to run Tempest tests against this environment, run `oos env init`; it automatically creates the OpenStack resources that Tempest needs.

    ```
    oos env init test-oos
    ```

7. Run the Tempest tests.

    oos can run them for you:

    ```
    oos env test test-oos
    ```

    Alternatively, log in to the target node, change into the mytest directory under the root directory, and run `tempest run` manually.

If you deploy the OpenStack environment onto self-managed hosts instead, the overall flow is the same as the Huawei Cloud case above: steps 1, 3, 5, and 6 stay unchanged, step 2 (configuring the Huawei Cloud provider) is skipped, and step 4 is replaced by registering the hosts with oos.

The managed machines must satisfy the following:

- At least one NIC dedicated to oos, whose name matches the configuration (see neutron_dataplane_interface_name)
- At least one disk dedicated to oos, whose name matches the configuration (see cinder_block_device)
- If Swift is to be deployed, one additional disk, whose name matches the configuration (see swift_storage_devices)

```
# sshpass is used during `oos env create` to set up password-less access to the target host
dnf install sshpass
oos env manage -r 25.03 -i TARGET_MACHINE_IP -p TARGET_MACHINE_PASSWD -n test-oos
```

Replace TARGET_MACHINE_IP with the IP address of the target machine and TARGET_MACHINE_PASSWD with its password. Run `oos env manage --help` to see the full list of parameters.

+ + + + + + + + + diff --git a/site/js/html5shiv.min.js b/site/js/html5shiv.min.js new file mode 100644 index 0000000000000000000000000000000000000000..1a01c94ba47a45a4bb55f994419d608fc4bb6ac5 --- /dev/null +++ b/site/js/html5shiv.min.js @@ -0,0 +1,4 @@ +/** +* @preserve HTML5 Shiv 3.7.3 | @afarkas @jdalton @jon_neal @rem | MIT/GPL2 Licensed +*/ +!function(a,b){function c(a,b){var c=a.createElement("p"),d=a.getElementsByTagName("head")[0]||a.documentElement;return c.innerHTML="x",d.insertBefore(c.lastChild,d.firstChild)}function d(){var a=t.elements;return"string"==typeof a?a.split(" "):a}function e(a,b){var c=t.elements;"string"!=typeof c&&(c=c.join(" ")),"string"!=typeof a&&(a=a.join(" ")),t.elements=c+" "+a,j(b)}function f(a){var b=s[a[q]];return b||(b={},r++,a[q]=r,s[r]=b),b}function g(a,c,d){if(c||(c=b),l)return c.createElement(a);d||(d=f(c));var e;return e=d.cache[a]?d.cache[a].cloneNode():p.test(a)?(d.cache[a]=d.createElem(a)).cloneNode():d.createElem(a),!e.canHaveChildren||o.test(a)||e.tagUrn?e:d.frag.appendChild(e)}function h(a,c){if(a||(a=b),l)return a.createDocumentFragment();c=c||f(a);for(var e=c.frag.cloneNode(),g=0,h=d(),i=h.length;i>g;g++)e.createElement(h[g]);return e}function i(a,b){b.cache||(b.cache={},b.createElem=a.createElement,b.createFrag=a.createDocumentFragment,b.frag=b.createFrag()),a.createElement=function(c){return t.shivMethods?g(c,a,b):b.createElem(c)},a.createDocumentFragment=Function("h,f","return function(){var n=f.cloneNode(),c=n.createElement;h.shivMethods&&("+d().join().replace(/[\w\-:]+/g,function(a){return b.createElem(a),b.frag.createElement(a),'c("'+a+'")'})+");return n}")(t,b.frag)}function j(a){a||(a=b);var d=f(a);return!t.shivCSS||k||d.hasCSS||(d.hasCSS=!!c(a,"article,aside,dialog,figcaption,figure,footer,header,hgroup,main,nav,section{display:block}mark{background:#FF0;color:#000}template{display:none}")),l||i(a,d),a}var k,l,m="3.7.3",n=a.html5||{},o=/^<|^(?:button|map|select|textarea|object|iframe|option|optgroup)$/i,p=/^(?:a|b|code|div|fieldset|h1|h2|h3|h4|h5|h6|i|label|li|ol|p|q|span|strong|style|table|tbody|td|th|tr|ul)$/i,q="_html5shiv",r=0,s={};!function(){try{var a=b.createElement("a");a.innerHTML="",k="hidden"in a,l=1==a.childNodes.length||function(){b.createElement("a");var a=b.createDocumentFragment();return"undefined"==typeof a.cloneNode||"undefined"==typeof a.createDocumentFragment||"undefined"==typeof a.createElement}()}catch(c){k=!0,l=!0}}();var t={elements:n.elements||"abbr article aside audio bdi canvas data datalist details dialog figcaption figure footer header hgroup main mark meter nav output picture progress section summary template time video",version:m,shivCSS:n.shivCSS!==!1,supportsUnknownElements:l,shivMethods:n.shivMethods!==!1,type:"default",shivDocument:j,createElement:g,createDocumentFragment:h,addElements:e};a.html5=t,j(b),"object"==typeof module&&module.exports&&(module.exports=t)}("undefined"!=typeof window?window:this,document); diff --git a/site/js/jquery-3.6.0.min.js b/site/js/jquery-3.6.0.min.js new file mode 100644 index 0000000000000000000000000000000000000000..c4c6022f2982e8dae64cebd6b9a2b59f2547faad --- /dev/null +++ b/site/js/jquery-3.6.0.min.js @@ -0,0 +1,2 @@ +/*! 
jQuery v3.6.0 | (c) OpenJS Foundation and other contributors | jquery.org/license */ +!function(e,t){"use strict";"object"==typeof module&&"object"==typeof module.exports?module.exports=e.document?t(e,!0):function(e){if(!e.document)throw new Error("jQuery requires a window with a document");return t(e)}:t(e)}("undefined"!=typeof window?window:this,function(C,e){"use strict";var t=[],r=Object.getPrototypeOf,s=t.slice,g=t.flat?function(e){return t.flat.call(e)}:function(e){return t.concat.apply([],e)},u=t.push,i=t.indexOf,n={},o=n.toString,v=n.hasOwnProperty,a=v.toString,l=a.call(Object),y={},m=function(e){return"function"==typeof e&&"number"!=typeof e.nodeType&&"function"!=typeof e.item},x=function(e){return null!=e&&e===e.window},E=C.document,c={type:!0,src:!0,nonce:!0,noModule:!0};function b(e,t,n){var r,i,o=(n=n||E).createElement("script");if(o.text=e,t)for(r in c)(i=t[r]||t.getAttribute&&t.getAttribute(r))&&o.setAttribute(r,i);n.head.appendChild(o).parentNode.removeChild(o)}function w(e){return null==e?e+"":"object"==typeof e||"function"==typeof e?n[o.call(e)]||"object":typeof e}var f="3.6.0",S=function(e,t){return new S.fn.init(e,t)};function p(e){var t=!!e&&"length"in e&&e.length,n=w(e);return!m(e)&&!x(e)&&("array"===n||0===t||"number"==typeof t&&0+~]|"+M+")"+M+"*"),U=new RegExp(M+"|>"),X=new RegExp(F),V=new RegExp("^"+I+"$"),G={ID:new RegExp("^#("+I+")"),CLASS:new RegExp("^\\.("+I+")"),TAG:new RegExp("^("+I+"|[*])"),ATTR:new RegExp("^"+W),PSEUDO:new RegExp("^"+F),CHILD:new RegExp("^:(only|first|last|nth|nth-last)-(child|of-type)(?:\\("+M+"*(even|odd|(([+-]|)(\\d*)n|)"+M+"*(?:([+-]|)"+M+"*(\\d+)|))"+M+"*\\)|)","i"),bool:new RegExp("^(?:"+R+")$","i"),needsContext:new RegExp("^"+M+"*[>+~]|:(even|odd|eq|gt|lt|nth|first|last)(?:\\("+M+"*((?:-\\d)?\\d*)"+M+"*\\)|)(?=[^-]|$)","i")},Y=/HTML$/i,Q=/^(?:input|select|textarea|button)$/i,J=/^h\d$/i,K=/^[^{]+\{\s*\[native \w/,Z=/^(?:#([\w-]+)|(\w+)|\.([\w-]+))$/,ee=/[+~]/,te=new RegExp("\\\\[\\da-fA-F]{1,6}"+M+"?|\\\\([^\\r\\n\\f])","g"),ne=function(e,t){var n="0x"+e.slice(1)-65536;return t||(n<0?String.fromCharCode(n+65536):String.fromCharCode(n>>10|55296,1023&n|56320))},re=/([\0-\x1f\x7f]|^-?\d)|^-$|[^\0-\x1f\x7f-\uFFFF\w-]/g,ie=function(e,t){return t?"\0"===e?"\ufffd":e.slice(0,-1)+"\\"+e.charCodeAt(e.length-1).toString(16)+" ":"\\"+e},oe=function(){T()},ae=be(function(e){return!0===e.disabled&&"fieldset"===e.nodeName.toLowerCase()},{dir:"parentNode",next:"legend"});try{H.apply(t=O.call(p.childNodes),p.childNodes),t[p.childNodes.length].nodeType}catch(e){H={apply:t.length?function(e,t){L.apply(e,O.call(t))}:function(e,t){var n=e.length,r=0;while(e[n++]=t[r++]);e.length=n-1}}}function se(t,e,n,r){var i,o,a,s,u,l,c,f=e&&e.ownerDocument,p=e?e.nodeType:9;if(n=n||[],"string"!=typeof t||!t||1!==p&&9!==p&&11!==p)return n;if(!r&&(T(e),e=e||C,E)){if(11!==p&&(u=Z.exec(t)))if(i=u[1]){if(9===p){if(!(a=e.getElementById(i)))return n;if(a.id===i)return n.push(a),n}else if(f&&(a=f.getElementById(i))&&y(e,a)&&a.id===i)return n.push(a),n}else{if(u[2])return H.apply(n,e.getElementsByTagName(t)),n;if((i=u[3])&&d.getElementsByClassName&&e.getElementsByClassName)return H.apply(n,e.getElementsByClassName(i)),n}if(d.qsa&&!N[t+" "]&&(!v||!v.test(t))&&(1!==p||"object"!==e.nodeName.toLowerCase())){if(c=t,f=e,1===p&&(U.test(t)||z.test(t))){(f=ee.test(t)&&ye(e.parentNode)||e)===e&&d.scope||((s=e.getAttribute("id"))?s=s.replace(re,ie):e.setAttribute("id",s=S)),o=(l=h(t)).length;while(o--)l[o]=(s?"#"+s:":scope")+" "+xe(l[o]);c=l.join(",")}try{return 
H.apply(n,f.querySelectorAll(c)),n}catch(e){N(t,!0)}finally{s===S&&e.removeAttribute("id")}}}return g(t.replace($,"$1"),e,n,r)}function ue(){var r=[];return function e(t,n){return r.push(t+" ")>b.cacheLength&&delete e[r.shift()],e[t+" "]=n}}function le(e){return e[S]=!0,e}function ce(e){var t=C.createElement("fieldset");try{return!!e(t)}catch(e){return!1}finally{t.parentNode&&t.parentNode.removeChild(t),t=null}}function fe(e,t){var n=e.split("|"),r=n.length;while(r--)b.attrHandle[n[r]]=t}function pe(e,t){var n=t&&e,r=n&&1===e.nodeType&&1===t.nodeType&&e.sourceIndex-t.sourceIndex;if(r)return r;if(n)while(n=n.nextSibling)if(n===t)return-1;return e?1:-1}function de(t){return function(e){return"input"===e.nodeName.toLowerCase()&&e.type===t}}function he(n){return function(e){var t=e.nodeName.toLowerCase();return("input"===t||"button"===t)&&e.type===n}}function ge(t){return function(e){return"form"in e?e.parentNode&&!1===e.disabled?"label"in e?"label"in e.parentNode?e.parentNode.disabled===t:e.disabled===t:e.isDisabled===t||e.isDisabled!==!t&&ae(e)===t:e.disabled===t:"label"in e&&e.disabled===t}}function ve(a){return le(function(o){return o=+o,le(function(e,t){var n,r=a([],e.length,o),i=r.length;while(i--)e[n=r[i]]&&(e[n]=!(t[n]=e[n]))})})}function ye(e){return e&&"undefined"!=typeof e.getElementsByTagName&&e}for(e in d=se.support={},i=se.isXML=function(e){var t=e&&e.namespaceURI,n=e&&(e.ownerDocument||e).documentElement;return!Y.test(t||n&&n.nodeName||"HTML")},T=se.setDocument=function(e){var t,n,r=e?e.ownerDocument||e:p;return r!=C&&9===r.nodeType&&r.documentElement&&(a=(C=r).documentElement,E=!i(C),p!=C&&(n=C.defaultView)&&n.top!==n&&(n.addEventListener?n.addEventListener("unload",oe,!1):n.attachEvent&&n.attachEvent("onunload",oe)),d.scope=ce(function(e){return a.appendChild(e).appendChild(C.createElement("div")),"undefined"!=typeof e.querySelectorAll&&!e.querySelectorAll(":scope fieldset div").length}),d.attributes=ce(function(e){return e.className="i",!e.getAttribute("className")}),d.getElementsByTagName=ce(function(e){return e.appendChild(C.createComment("")),!e.getElementsByTagName("*").length}),d.getElementsByClassName=K.test(C.getElementsByClassName),d.getById=ce(function(e){return a.appendChild(e).id=S,!C.getElementsByName||!C.getElementsByName(S).length}),d.getById?(b.filter.ID=function(e){var t=e.replace(te,ne);return function(e){return e.getAttribute("id")===t}},b.find.ID=function(e,t){if("undefined"!=typeof t.getElementById&&E){var n=t.getElementById(e);return n?[n]:[]}}):(b.filter.ID=function(e){var n=e.replace(te,ne);return function(e){var t="undefined"!=typeof e.getAttributeNode&&e.getAttributeNode("id");return t&&t.value===n}},b.find.ID=function(e,t){if("undefined"!=typeof t.getElementById&&E){var n,r,i,o=t.getElementById(e);if(o){if((n=o.getAttributeNode("id"))&&n.value===e)return[o];i=t.getElementsByName(e),r=0;while(o=i[r++])if((n=o.getAttributeNode("id"))&&n.value===e)return[o]}return[]}}),b.find.TAG=d.getElementsByTagName?function(e,t){return"undefined"!=typeof t.getElementsByTagName?t.getElementsByTagName(e):d.qsa?t.querySelectorAll(e):void 0}:function(e,t){var n,r=[],i=0,o=t.getElementsByTagName(e);if("*"===e){while(n=o[i++])1===n.nodeType&&r.push(n);return r}return o},b.find.CLASS=d.getElementsByClassName&&function(e,t){if("undefined"!=typeof t.getElementsByClassName&&E)return t.getElementsByClassName(e)},s=[],v=[],(d.qsa=K.test(C.querySelectorAll))&&(ce(function(e){var 
t;a.appendChild(e).innerHTML="",e.querySelectorAll("[msallowcapture^='']").length&&v.push("[*^$]="+M+"*(?:''|\"\")"),e.querySelectorAll("[selected]").length||v.push("\\["+M+"*(?:value|"+R+")"),e.querySelectorAll("[id~="+S+"-]").length||v.push("~="),(t=C.createElement("input")).setAttribute("name",""),e.appendChild(t),e.querySelectorAll("[name='']").length||v.push("\\["+M+"*name"+M+"*="+M+"*(?:''|\"\")"),e.querySelectorAll(":checked").length||v.push(":checked"),e.querySelectorAll("a#"+S+"+*").length||v.push(".#.+[+~]"),e.querySelectorAll("\\\f"),v.push("[\\r\\n\\f]")}),ce(function(e){e.innerHTML="";var t=C.createElement("input");t.setAttribute("type","hidden"),e.appendChild(t).setAttribute("name","D"),e.querySelectorAll("[name=d]").length&&v.push("name"+M+"*[*^$|!~]?="),2!==e.querySelectorAll(":enabled").length&&v.push(":enabled",":disabled"),a.appendChild(e).disabled=!0,2!==e.querySelectorAll(":disabled").length&&v.push(":enabled",":disabled"),e.querySelectorAll("*,:x"),v.push(",.*:")})),(d.matchesSelector=K.test(c=a.matches||a.webkitMatchesSelector||a.mozMatchesSelector||a.oMatchesSelector||a.msMatchesSelector))&&ce(function(e){d.disconnectedMatch=c.call(e,"*"),c.call(e,"[s!='']:x"),s.push("!=",F)}),v=v.length&&new RegExp(v.join("|")),s=s.length&&new RegExp(s.join("|")),t=K.test(a.compareDocumentPosition),y=t||K.test(a.contains)?function(e,t){var n=9===e.nodeType?e.documentElement:e,r=t&&t.parentNode;return e===r||!(!r||1!==r.nodeType||!(n.contains?n.contains(r):e.compareDocumentPosition&&16&e.compareDocumentPosition(r)))}:function(e,t){if(t)while(t=t.parentNode)if(t===e)return!0;return!1},j=t?function(e,t){if(e===t)return l=!0,0;var n=!e.compareDocumentPosition-!t.compareDocumentPosition;return n||(1&(n=(e.ownerDocument||e)==(t.ownerDocument||t)?e.compareDocumentPosition(t):1)||!d.sortDetached&&t.compareDocumentPosition(e)===n?e==C||e.ownerDocument==p&&y(p,e)?-1:t==C||t.ownerDocument==p&&y(p,t)?1:u?P(u,e)-P(u,t):0:4&n?-1:1)}:function(e,t){if(e===t)return l=!0,0;var n,r=0,i=e.parentNode,o=t.parentNode,a=[e],s=[t];if(!i||!o)return e==C?-1:t==C?1:i?-1:o?1:u?P(u,e)-P(u,t):0;if(i===o)return pe(e,t);n=e;while(n=n.parentNode)a.unshift(n);n=t;while(n=n.parentNode)s.unshift(n);while(a[r]===s[r])r++;return r?pe(a[r],s[r]):a[r]==p?-1:s[r]==p?1:0}),C},se.matches=function(e,t){return se(e,null,null,t)},se.matchesSelector=function(e,t){if(T(e),d.matchesSelector&&E&&!N[t+" "]&&(!s||!s.test(t))&&(!v||!v.test(t)))try{var n=c.call(e,t);if(n||d.disconnectedMatch||e.document&&11!==e.document.nodeType)return n}catch(e){N(t,!0)}return 0":{dir:"parentNode",first:!0}," ":{dir:"parentNode"},"+":{dir:"previousSibling",first:!0},"~":{dir:"previousSibling"}},preFilter:{ATTR:function(e){return e[1]=e[1].replace(te,ne),e[3]=(e[3]||e[4]||e[5]||"").replace(te,ne),"~="===e[2]&&(e[3]=" "+e[3]+" "),e.slice(0,4)},CHILD:function(e){return e[1]=e[1].toLowerCase(),"nth"===e[1].slice(0,3)?(e[3]||se.error(e[0]),e[4]=+(e[4]?e[5]+(e[6]||1):2*("even"===e[3]||"odd"===e[3])),e[5]=+(e[7]+e[8]||"odd"===e[3])):e[3]&&se.error(e[0]),e},PSEUDO:function(e){var t,n=!e[6]&&e[2];return G.CHILD.test(e[0])?null:(e[3]?e[2]=e[4]||e[5]||"":n&&X.test(n)&&(t=h(n,!0))&&(t=n.indexOf(")",n.length-t)-n.length)&&(e[0]=e[0].slice(0,t),e[2]=n.slice(0,t)),e.slice(0,3))}},filter:{TAG:function(e){var t=e.replace(te,ne).toLowerCase();return"*"===e?function(){return!0}:function(e){return e.nodeName&&e.nodeName.toLowerCase()===t}},CLASS:function(e){var t=m[e+" "];return t||(t=new RegExp("(^|"+M+")"+e+"("+M+"|$)"))&&m(e,function(e){return t.test("string"==typeof 
e.className&&e.className||"undefined"!=typeof e.getAttribute&&e.getAttribute("class")||"")})},ATTR:function(n,r,i){return function(e){var t=se.attr(e,n);return null==t?"!="===r:!r||(t+="","="===r?t===i:"!="===r?t!==i:"^="===r?i&&0===t.indexOf(i):"*="===r?i&&-1:\x20\t\r\n\f]*)[\x20\t\r\n\f]*\/?>(?:<\/\1>|)$/i;function j(e,n,r){return m(n)?S.grep(e,function(e,t){return!!n.call(e,t,e)!==r}):n.nodeType?S.grep(e,function(e){return e===n!==r}):"string"!=typeof n?S.grep(e,function(e){return-1)[^>]*|#([\w-]+))$/;(S.fn.init=function(e,t,n){var r,i;if(!e)return this;if(n=n||D,"string"==typeof e){if(!(r="<"===e[0]&&">"===e[e.length-1]&&3<=e.length?[null,e,null]:q.exec(e))||!r[1]&&t)return!t||t.jquery?(t||n).find(e):this.constructor(t).find(e);if(r[1]){if(t=t instanceof S?t[0]:t,S.merge(this,S.parseHTML(r[1],t&&t.nodeType?t.ownerDocument||t:E,!0)),N.test(r[1])&&S.isPlainObject(t))for(r in t)m(this[r])?this[r](t[r]):this.attr(r,t[r]);return this}return(i=E.getElementById(r[2]))&&(this[0]=i,this.length=1),this}return e.nodeType?(this[0]=e,this.length=1,this):m(e)?void 0!==n.ready?n.ready(e):e(S):S.makeArray(e,this)}).prototype=S.fn,D=S(E);var L=/^(?:parents|prev(?:Until|All))/,H={children:!0,contents:!0,next:!0,prev:!0};function O(e,t){while((e=e[t])&&1!==e.nodeType);return e}S.fn.extend({has:function(e){var t=S(e,this),n=t.length;return this.filter(function(){for(var e=0;e\x20\t\r\n\f]*)/i,he=/^$|^module$|\/(?:java|ecma)script/i;ce=E.createDocumentFragment().appendChild(E.createElement("div")),(fe=E.createElement("input")).setAttribute("type","radio"),fe.setAttribute("checked","checked"),fe.setAttribute("name","t"),ce.appendChild(fe),y.checkClone=ce.cloneNode(!0).cloneNode(!0).lastChild.checked,ce.innerHTML="",y.noCloneChecked=!!ce.cloneNode(!0).lastChild.defaultValue,ce.innerHTML="",y.option=!!ce.lastChild;var ge={thead:[1,"","
"],col:[2,"","
"],tr:[2,"","
"],td:[3,"","
"],_default:[0,"",""]};function ve(e,t){var n;return n="undefined"!=typeof e.getElementsByTagName?e.getElementsByTagName(t||"*"):"undefined"!=typeof e.querySelectorAll?e.querySelectorAll(t||"*"):[],void 0===t||t&&A(e,t)?S.merge([e],n):n}function ye(e,t){for(var n=0,r=e.length;n",""]);var me=/<|&#?\w+;/;function xe(e,t,n,r,i){for(var o,a,s,u,l,c,f=t.createDocumentFragment(),p=[],d=0,h=e.length;d\s*$/g;function je(e,t){return A(e,"table")&&A(11!==t.nodeType?t:t.firstChild,"tr")&&S(e).children("tbody")[0]||e}function De(e){return e.type=(null!==e.getAttribute("type"))+"/"+e.type,e}function qe(e){return"true/"===(e.type||"").slice(0,5)?e.type=e.type.slice(5):e.removeAttribute("type"),e}function Le(e,t){var n,r,i,o,a,s;if(1===t.nodeType){if(Y.hasData(e)&&(s=Y.get(e).events))for(i in Y.remove(t,"handle events"),s)for(n=0,r=s[i].length;n").attr(n.scriptAttrs||{}).prop({charset:n.scriptCharset,src:n.url}).on("load error",i=function(e){r.remove(),i=null,e&&t("error"===e.type?404:200,e.type)}),E.head.appendChild(r[0])},abort:function(){i&&i()}}});var _t,zt=[],Ut=/(=)\?(?=&|$)|\?\?/;S.ajaxSetup({jsonp:"callback",jsonpCallback:function(){var e=zt.pop()||S.expando+"_"+wt.guid++;return this[e]=!0,e}}),S.ajaxPrefilter("json jsonp",function(e,t,n){var r,i,o,a=!1!==e.jsonp&&(Ut.test(e.url)?"url":"string"==typeof e.data&&0===(e.contentType||"").indexOf("application/x-www-form-urlencoded")&&Ut.test(e.data)&&"data");if(a||"jsonp"===e.dataTypes[0])return r=e.jsonpCallback=m(e.jsonpCallback)?e.jsonpCallback():e.jsonpCallback,a?e[a]=e[a].replace(Ut,"$1"+r):!1!==e.jsonp&&(e.url+=(Tt.test(e.url)?"&":"?")+e.jsonp+"="+r),e.converters["script json"]=function(){return o||S.error(r+" was not called"),o[0]},e.dataTypes[0]="json",i=C[r],C[r]=function(){o=arguments},n.always(function(){void 0===i?S(C).removeProp(r):C[r]=i,e[r]&&(e.jsonpCallback=t.jsonpCallback,zt.push(r)),o&&m(i)&&i(o[0]),o=i=void 0}),"script"}),y.createHTMLDocument=((_t=E.implementation.createHTMLDocument("").body).innerHTML="
",2===_t.childNodes.length),S.parseHTML=function(e,t,n){return"string"!=typeof e?[]:("boolean"==typeof t&&(n=t,t=!1),t||(y.createHTMLDocument?((r=(t=E.implementation.createHTMLDocument("")).createElement("base")).href=E.location.href,t.head.appendChild(r)):t=E),o=!n&&[],(i=N.exec(e))?[t.createElement(i[1])]:(i=xe([e],t,o),o&&o.length&&S(o).remove(),S.merge([],i.childNodes)));var r,i,o},S.fn.load=function(e,t,n){var r,i,o,a=this,s=e.indexOf(" ");return-1").append(S.parseHTML(e)).find(r):e)}).always(n&&function(e,t){a.each(function(){n.apply(this,o||[e.responseText,t,e])})}),this},S.expr.pseudos.animated=function(t){return S.grep(S.timers,function(e){return t===e.elem}).length},S.offset={setOffset:function(e,t,n){var r,i,o,a,s,u,l=S.css(e,"position"),c=S(e),f={};"static"===l&&(e.style.position="relative"),s=c.offset(),o=S.css(e,"top"),u=S.css(e,"left"),("absolute"===l||"fixed"===l)&&-1<(o+u).indexOf("auto")?(a=(r=c.position()).top,i=r.left):(a=parseFloat(o)||0,i=parseFloat(u)||0),m(t)&&(t=t.call(e,n,S.extend({},s))),null!=t.top&&(f.top=t.top-s.top+a),null!=t.left&&(f.left=t.left-s.left+i),"using"in t?t.using.call(e,f):c.css(f)}},S.fn.extend({offset:function(t){if(arguments.length)return void 0===t?this:this.each(function(e){S.offset.setOffset(this,t,e)});var e,n,r=this[0];return r?r.getClientRects().length?(e=r.getBoundingClientRect(),n=r.ownerDocument.defaultView,{top:e.top+n.pageYOffset,left:e.left+n.pageXOffset}):{top:0,left:0}:void 0},position:function(){if(this[0]){var e,t,n,r=this[0],i={top:0,left:0};if("fixed"===S.css(r,"position"))t=r.getBoundingClientRect();else{t=this.offset(),n=r.ownerDocument,e=r.offsetParent||n.documentElement;while(e&&(e===n.body||e===n.documentElement)&&"static"===S.css(e,"position"))e=e.parentNode;e&&e!==r&&1===e.nodeType&&((i=S(e).offset()).top+=S.css(e,"borderTopWidth",!0),i.left+=S.css(e,"borderLeftWidth",!0))}return{top:t.top-i.top-S.css(r,"marginTop",!0),left:t.left-i.left-S.css(r,"marginLeft",!0)}}},offsetParent:function(){return this.map(function(){var e=this.offsetParent;while(e&&"static"===S.css(e,"position"))e=e.offsetParent;return e||re})}}),S.each({scrollLeft:"pageXOffset",scrollTop:"pageYOffset"},function(t,i){var o="pageYOffset"===i;S.fn[t]=function(e){return $(this,function(e,t,n){var r;if(x(e)?r=e:9===e.nodeType&&(r=e.defaultView),void 0===n)return r?r[i]:e[t];r?r.scrollTo(o?r.pageXOffset:n,o?n:r.pageYOffset):e[t]=n},t,e,arguments.length)}}),S.each(["top","left"],function(e,n){S.cssHooks[n]=Fe(y.pixelPosition,function(e,t){if(t)return t=We(e,n),Pe.test(t)?S(e).position()[n]+"px":t})}),S.each({Height:"height",Width:"width"},function(a,s){S.each({padding:"inner"+a,content:s,"":"outer"+a},function(r,o){S.fn[o]=function(e,t){var n=arguments.length&&(r||"boolean"!=typeof e),i=r||(!0===e||!0===t?"margin":"border");return $(this,function(e,t,n){var r;return x(e)?0===o.indexOf("outer")?e["inner"+a]:e.document.documentElement["client"+a]:9===e.nodeType?(r=e.documentElement,Math.max(e.body["scroll"+a],r["scroll"+a],e.body["offset"+a],r["offset"+a],r["client"+a])):void 0===n?S.css(e,t,i):S.style(e,t,n,i)},s,n?e:void 0,n)}})}),S.each(["ajaxStart","ajaxStop","ajaxComplete","ajaxError","ajaxSuccess","ajaxSend"],function(e,t){S.fn[t]=function(e){return this.on(t,e)}}),S.fn.extend({bind:function(e,t,n){return this.on(e,null,t,n)},unbind:function(e,t){return this.off(e,null,t)},delegate:function(e,t,n,r){return this.on(t,e,n,r)},undelegate:function(e,t,n){return 1===arguments.length?this.off(e,"**"):this.off(t,e||"**",n)},hover:function(e,t){return 
this.mouseenter(e).mouseleave(t||e)}}),S.each("blur focus focusin focusout resize scroll click dblclick mousedown mouseup mousemove mouseover mouseout mouseenter mouseleave change select submit keydown keypress keyup contextmenu".split(" "),function(e,n){S.fn[n]=function(e,t){return 0"),n("table.docutils.footnote").wrap("
"),n("table.docutils.citation").wrap("
"),n(".wy-menu-vertical ul").not(".simple").siblings("a").each((function(){var t=n(this);expand=n(''),expand.on("click",(function(n){return e.toggleCurrent(t),n.stopPropagation(),!1})),t.prepend(expand)}))},reset:function(){var n=encodeURI(window.location.hash)||"#";try{var e=$(".wy-menu-vertical"),t=e.find('[href="'+n+'"]');if(0===t.length){var i=$('.document [id="'+n.substring(1)+'"]').closest("div.section");0===(t=e.find('[href="#'+i.attr("id")+'"]')).length&&(t=e.find('[href="#"]'))}if(t.length>0){$(".wy-menu-vertical .current").removeClass("current").attr("aria-expanded","false"),t.addClass("current").attr("aria-expanded","true"),t.closest("li.toctree-l1").parent().addClass("current").attr("aria-expanded","true");for(let n=1;n<=10;n++)t.closest("li.toctree-l"+n).addClass("current").attr("aria-expanded","true");t[0].scrollIntoView()}}catch(n){console.log("Error expanding nav for anchor",n)}},onScroll:function(){this.winScroll=!1;var n=this.win.scrollTop(),e=n+this.winHeight,t=this.navBar.scrollTop()+(n-this.winPosition);n<0||e>this.docHeight||(this.navBar.scrollTop(t),this.winPosition=n)},onResize:function(){this.winResize=!1,this.winHeight=this.win.height(),this.docHeight=$(document).height()},hashChange:function(){this.linkScroll=!0,this.win.one("hashchange",(function(){this.linkScroll=!1}))},toggleCurrent:function(n){var e=n.closest("li");e.siblings("li.current").removeClass("current").attr("aria-expanded","false"),e.siblings().find("li.current").removeClass("current").attr("aria-expanded","false");var t=e.find("> ul li");t.length&&(t.removeClass("current").attr("aria-expanded","false"),e.toggleClass("current").attr("aria-expanded",(function(n,e){return"true"==e?"false":"true"})))}},"undefined"!=typeof window&&(window.SphinxRtdTheme={Navigation:n.exports.ThemeNav,StickyNav:n.exports.ThemeNav}),function(){for(var n=0,e=["ms","moz","webkit","o"],t=0;t + + + + + + + OpenStack SIG Doc + + + + + + + + + + + +
+ + +
+ +
+
+
    +
  • +
  • +
  • +
+
+
+
+
+ + +

搜索结果

+ + + +
+ 搜索中... +
+ + +
+
+ +
+
+ +
+ +
+ +
+ + + + + +
+ + + + + + + + + diff --git a/site/search/lunr.js b/site/search/lunr.js new file mode 100644 index 0000000000000000000000000000000000000000..aca0a167f39f894d2d120b07b6e99265882c049c --- /dev/null +++ b/site/search/lunr.js @@ -0,0 +1,3475 @@ +/** + * lunr - http://lunrjs.com - A bit like Solr, but much smaller and not as bright - 2.3.9 + * Copyright (C) 2020 Oliver Nightingale + * @license MIT + */ + +;(function(){ + +/** + * A convenience function for configuring and constructing + * a new lunr Index. + * + * A lunr.Builder instance is created and the pipeline setup + * with a trimmer, stop word filter and stemmer. + * + * This builder object is yielded to the configuration function + * that is passed as a parameter, allowing the list of fields + * and other builder parameters to be customised. + * + * All documents _must_ be added within the passed config function. + * + * @example + * var idx = lunr(function () { + * this.field('title') + * this.field('body') + * this.ref('id') + * + * documents.forEach(function (doc) { + * this.add(doc) + * }, this) + * }) + * + * @see {@link lunr.Builder} + * @see {@link lunr.Pipeline} + * @see {@link lunr.trimmer} + * @see {@link lunr.stopWordFilter} + * @see {@link lunr.stemmer} + * @namespace {function} lunr + */ +var lunr = function (config) { + var builder = new lunr.Builder + + builder.pipeline.add( + lunr.trimmer, + lunr.stopWordFilter, + lunr.stemmer + ) + + builder.searchPipeline.add( + lunr.stemmer + ) + + config.call(builder, builder) + return builder.build() +} + +lunr.version = "2.3.9" +/*! + * lunr.utils + * Copyright (C) 2020 Oliver Nightingale + */ + +/** + * A namespace containing utils for the rest of the lunr library + * @namespace lunr.utils + */ +lunr.utils = {} + +/** + * Print a warning message to the console. + * + * @param {String} message The message to be printed. + * @memberOf lunr.utils + * @function + */ +lunr.utils.warn = (function (global) { + /* eslint-disable no-console */ + return function (message) { + if (global.console && console.warn) { + console.warn(message) + } + } + /* eslint-enable no-console */ +})(this) + +/** + * Convert an object to a string. + * + * In the case of `null` and `undefined` the function returns + * the empty string, in all other cases the result of calling + * `toString` on the passed object is returned. + * + * @param {Any} obj The object to convert to a string. + * @return {String} string representation of the passed object. + * @memberOf lunr.utils + */ +lunr.utils.asString = function (obj) { + if (obj === void 0 || obj === null) { + return "" + } else { + return obj.toString() + } +} + +/** + * Clones an object. + * + * Will create a copy of an existing object such that any mutations + * on the copy cannot affect the original. + * + * Only shallow objects are supported, passing a nested object to this + * function will cause a TypeError. + * + * Objects with primitives, and arrays of primitives are supported. + * + * @param {Object} obj The object to clone. + * @return {Object} a clone of the passed object. + * @throws {TypeError} when a nested object is passed. 
+ * @memberOf Utils + */ +lunr.utils.clone = function (obj) { + if (obj === null || obj === undefined) { + return obj + } + + var clone = Object.create(null), + keys = Object.keys(obj) + + for (var i = 0; i < keys.length; i++) { + var key = keys[i], + val = obj[key] + + if (Array.isArray(val)) { + clone[key] = val.slice() + continue + } + + if (typeof val === 'string' || + typeof val === 'number' || + typeof val === 'boolean') { + clone[key] = val + continue + } + + throw new TypeError("clone is not deep and does not support nested objects") + } + + return clone +} +lunr.FieldRef = function (docRef, fieldName, stringValue) { + this.docRef = docRef + this.fieldName = fieldName + this._stringValue = stringValue +} + +lunr.FieldRef.joiner = "/" + +lunr.FieldRef.fromString = function (s) { + var n = s.indexOf(lunr.FieldRef.joiner) + + if (n === -1) { + throw "malformed field ref string" + } + + var fieldRef = s.slice(0, n), + docRef = s.slice(n + 1) + + return new lunr.FieldRef (docRef, fieldRef, s) +} + +lunr.FieldRef.prototype.toString = function () { + if (this._stringValue == undefined) { + this._stringValue = this.fieldName + lunr.FieldRef.joiner + this.docRef + } + + return this._stringValue +} +/*! + * lunr.Set + * Copyright (C) 2020 Oliver Nightingale + */ + +/** + * A lunr set. + * + * @constructor + */ +lunr.Set = function (elements) { + this.elements = Object.create(null) + + if (elements) { + this.length = elements.length + + for (var i = 0; i < this.length; i++) { + this.elements[elements[i]] = true + } + } else { + this.length = 0 + } +} + +/** + * A complete set that contains all elements. + * + * @static + * @readonly + * @type {lunr.Set} + */ +lunr.Set.complete = { + intersect: function (other) { + return other + }, + + union: function () { + return this + }, + + contains: function () { + return true + } +} + +/** + * An empty set that contains no elements. + * + * @static + * @readonly + * @type {lunr.Set} + */ +lunr.Set.empty = { + intersect: function () { + return this + }, + + union: function (other) { + return other + }, + + contains: function () { + return false + } +} + +/** + * Returns true if this set contains the specified object. + * + * @param {object} object - Object whose presence in this set is to be tested. + * @returns {boolean} - True if this set contains the specified object. + */ +lunr.Set.prototype.contains = function (object) { + return !!this.elements[object] +} + +/** + * Returns a new set containing only the elements that are present in both + * this set and the specified set. + * + * @param {lunr.Set} other - set to intersect with this set. + * @returns {lunr.Set} a new set that is the intersection of this and the specified set. + */ + +lunr.Set.prototype.intersect = function (other) { + var a, b, elements, intersection = [] + + if (other === lunr.Set.complete) { + return this + } + + if (other === lunr.Set.empty) { + return other + } + + if (this.length < other.length) { + a = this + b = other + } else { + a = other + b = this + } + + elements = Object.keys(a.elements) + + for (var i = 0; i < elements.length; i++) { + var element = elements[i] + if (element in b.elements) { + intersection.push(element) + } + } + + return new lunr.Set (intersection) +} + +/** + * Returns a new set combining the elements of this and the specified set. + * + * @param {lunr.Set} other - set to union with this set. + * @return {lunr.Set} a new set that is the union of this and the specified set. 
+ */ + +lunr.Set.prototype.union = function (other) { + if (other === lunr.Set.complete) { + return lunr.Set.complete + } + + if (other === lunr.Set.empty) { + return this + } + + return new lunr.Set(Object.keys(this.elements).concat(Object.keys(other.elements))) +} +/** + * A function to calculate the inverse document frequency for + * a posting. This is shared between the builder and the index + * + * @private + * @param {object} posting - The posting for a given term + * @param {number} documentCount - The total number of documents. + */ +lunr.idf = function (posting, documentCount) { + var documentsWithTerm = 0 + + for (var fieldName in posting) { + if (fieldName == '_index') continue // Ignore the term index, its not a field + documentsWithTerm += Object.keys(posting[fieldName]).length + } + + var x = (documentCount - documentsWithTerm + 0.5) / (documentsWithTerm + 0.5) + + return Math.log(1 + Math.abs(x)) +} + +/** + * A token wraps a string representation of a token + * as it is passed through the text processing pipeline. + * + * @constructor + * @param {string} [str=''] - The string token being wrapped. + * @param {object} [metadata={}] - Metadata associated with this token. + */ +lunr.Token = function (str, metadata) { + this.str = str || "" + this.metadata = metadata || {} +} + +/** + * Returns the token string that is being wrapped by this object. + * + * @returns {string} + */ +lunr.Token.prototype.toString = function () { + return this.str +} + +/** + * A token update function is used when updating or optionally + * when cloning a token. + * + * @callback lunr.Token~updateFunction + * @param {string} str - The string representation of the token. + * @param {Object} metadata - All metadata associated with this token. + */ + +/** + * Applies the given function to the wrapped string token. + * + * @example + * token.update(function (str, metadata) { + * return str.toUpperCase() + * }) + * + * @param {lunr.Token~updateFunction} fn - A function to apply to the token string. + * @returns {lunr.Token} + */ +lunr.Token.prototype.update = function (fn) { + this.str = fn(this.str, this.metadata) + return this +} + +/** + * Creates a clone of this token. Optionally a function can be + * applied to the cloned token. + * + * @param {lunr.Token~updateFunction} [fn] - An optional function to apply to the cloned token. + * @returns {lunr.Token} + */ +lunr.Token.prototype.clone = function (fn) { + fn = fn || function (s) { return s } + return new lunr.Token (fn(this.str, this.metadata), this.metadata) +} +/*! + * lunr.tokenizer + * Copyright (C) 2020 Oliver Nightingale + */ + +/** + * A function for splitting a string into tokens ready to be inserted into + * the search index. Uses `lunr.tokenizer.separator` to split strings, change + * the value of this property to change how strings are split into tokens. + * + * This tokenizer will convert its parameter to a string by calling `toString` and + * then will split this string on the character in `lunr.tokenizer.separator`. + * Arrays will have their elements converted to strings and wrapped in a lunr.Token. + * + * Optional metadata can be passed to the tokenizer, this metadata will be cloned and + * added as metadata to every token that is created from the object to be tokenized. 
+ * + * @static + * @param {?(string|object|object[])} obj - The object to convert into tokens + * @param {?object} metadata - Optional metadata to associate with every token + * @returns {lunr.Token[]} + * @see {@link lunr.Pipeline} + */ +lunr.tokenizer = function (obj, metadata) { + if (obj == null || obj == undefined) { + return [] + } + + if (Array.isArray(obj)) { + return obj.map(function (t) { + return new lunr.Token( + lunr.utils.asString(t).toLowerCase(), + lunr.utils.clone(metadata) + ) + }) + } + + var str = obj.toString().toLowerCase(), + len = str.length, + tokens = [] + + for (var sliceEnd = 0, sliceStart = 0; sliceEnd <= len; sliceEnd++) { + var char = str.charAt(sliceEnd), + sliceLength = sliceEnd - sliceStart + + if ((char.match(lunr.tokenizer.separator) || sliceEnd == len)) { + + if (sliceLength > 0) { + var tokenMetadata = lunr.utils.clone(metadata) || {} + tokenMetadata["position"] = [sliceStart, sliceLength] + tokenMetadata["index"] = tokens.length + + tokens.push( + new lunr.Token ( + str.slice(sliceStart, sliceEnd), + tokenMetadata + ) + ) + } + + sliceStart = sliceEnd + 1 + } + + } + + return tokens +} + +/** + * The separator used to split a string into tokens. Override this property to change the behaviour of + * `lunr.tokenizer` behaviour when tokenizing strings. By default this splits on whitespace and hyphens. + * + * @static + * @see lunr.tokenizer + */ +lunr.tokenizer.separator = /[\s\-]+/ +/*! + * lunr.Pipeline + * Copyright (C) 2020 Oliver Nightingale + */ + +/** + * lunr.Pipelines maintain an ordered list of functions to be applied to all + * tokens in documents entering the search index and queries being ran against + * the index. + * + * An instance of lunr.Index created with the lunr shortcut will contain a + * pipeline with a stop word filter and an English language stemmer. Extra + * functions can be added before or after either of these functions or these + * default functions can be removed. + * + * When run the pipeline will call each function in turn, passing a token, the + * index of that token in the original list of all tokens and finally a list of + * all the original tokens. + * + * The output of functions in the pipeline will be passed to the next function + * in the pipeline. To exclude a token from entering the index the function + * should return undefined, the rest of the pipeline will not be called with + * this token. + * + * For serialisation of pipelines to work, all functions used in an instance of + * a pipeline should be registered with lunr.Pipeline. Registered functions can + * then be loaded. If trying to load a serialised pipeline that uses functions + * that are not registered an error will be thrown. + * + * If not planning on serialising the pipeline then registering pipeline functions + * is not necessary. + * + * @constructor + */ +lunr.Pipeline = function () { + this._stack = [] +} + +lunr.Pipeline.registeredFunctions = Object.create(null) + +/** + * A pipeline function maps lunr.Token to lunr.Token. A lunr.Token contains the token + * string as well as all known metadata. A pipeline function can mutate the token string + * or mutate (or add) metadata for a given token. + * + * A pipeline function can indicate that the passed token should be discarded by returning + * null, undefined or an empty string. This token will not be passed to any downstream pipeline + * functions and will not be added to the index. + * + * Multiple tokens can be returned by returning an array of tokens. 
Each token will be passed + * to any downstream pipeline functions and all will returned tokens will be added to the index. + * + * Any number of pipeline functions may be chained together using a lunr.Pipeline. + * + * @interface lunr.PipelineFunction + * @param {lunr.Token} token - A token from the document being processed. + * @param {number} i - The index of this token in the complete list of tokens for this document/field. + * @param {lunr.Token[]} tokens - All tokens for this document/field. + * @returns {(?lunr.Token|lunr.Token[])} + */ + +/** + * Register a function with the pipeline. + * + * Functions that are used in the pipeline should be registered if the pipeline + * needs to be serialised, or a serialised pipeline needs to be loaded. + * + * Registering a function does not add it to a pipeline, functions must still be + * added to instances of the pipeline for them to be used when running a pipeline. + * + * @param {lunr.PipelineFunction} fn - The function to check for. + * @param {String} label - The label to register this function with + */ +lunr.Pipeline.registerFunction = function (fn, label) { + if (label in this.registeredFunctions) { + lunr.utils.warn('Overwriting existing registered function: ' + label) + } + + fn.label = label + lunr.Pipeline.registeredFunctions[fn.label] = fn +} + +/** + * Warns if the function is not registered as a Pipeline function. + * + * @param {lunr.PipelineFunction} fn - The function to check for. + * @private + */ +lunr.Pipeline.warnIfFunctionNotRegistered = function (fn) { + var isRegistered = fn.label && (fn.label in this.registeredFunctions) + + if (!isRegistered) { + lunr.utils.warn('Function is not registered with pipeline. This may cause problems when serialising the index.\n', fn) + } +} + +/** + * Loads a previously serialised pipeline. + * + * All functions to be loaded must already be registered with lunr.Pipeline. + * If any function from the serialised data has not been registered then an + * error will be thrown. + * + * @param {Object} serialised - The serialised pipeline to load. + * @returns {lunr.Pipeline} + */ +lunr.Pipeline.load = function (serialised) { + var pipeline = new lunr.Pipeline + + serialised.forEach(function (fnName) { + var fn = lunr.Pipeline.registeredFunctions[fnName] + + if (fn) { + pipeline.add(fn) + } else { + throw new Error('Cannot load unregistered function: ' + fnName) + } + }) + + return pipeline +} + +/** + * Adds new functions to the end of the pipeline. + * + * Logs a warning if the function has not been registered. + * + * @param {lunr.PipelineFunction[]} functions - Any number of functions to add to the pipeline. + */ +lunr.Pipeline.prototype.add = function () { + var fns = Array.prototype.slice.call(arguments) + + fns.forEach(function (fn) { + lunr.Pipeline.warnIfFunctionNotRegistered(fn) + this._stack.push(fn) + }, this) +} + +/** + * Adds a single function after a function that already exists in the + * pipeline. + * + * Logs a warning if the function has not been registered. + * + * @param {lunr.PipelineFunction} existingFn - A function that already exists in the pipeline. + * @param {lunr.PipelineFunction} newFn - The new function to add to the pipeline. 
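+ *
+ * @example
+ * // a hypothetical sketch: "synonymExpander" is an illustrative user-defined
+ * // pipeline function, and the builder's pipeline is assumed to already
+ * // contain lunr.stemmer
+ * builder.pipeline.after(lunr.stemmer, synonymExpander)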
+ */ +lunr.Pipeline.prototype.after = function (existingFn, newFn) { + lunr.Pipeline.warnIfFunctionNotRegistered(newFn) + + var pos = this._stack.indexOf(existingFn) + if (pos == -1) { + throw new Error('Cannot find existingFn') + } + + pos = pos + 1 + this._stack.splice(pos, 0, newFn) +} + +/** + * Adds a single function before a function that already exists in the + * pipeline. + * + * Logs a warning if the function has not been registered. + * + * @param {lunr.PipelineFunction} existingFn - A function that already exists in the pipeline. + * @param {lunr.PipelineFunction} newFn - The new function to add to the pipeline. + */ +lunr.Pipeline.prototype.before = function (existingFn, newFn) { + lunr.Pipeline.warnIfFunctionNotRegistered(newFn) + + var pos = this._stack.indexOf(existingFn) + if (pos == -1) { + throw new Error('Cannot find existingFn') + } + + this._stack.splice(pos, 0, newFn) +} + +/** + * Removes a function from the pipeline. + * + * @param {lunr.PipelineFunction} fn The function to remove from the pipeline. + */ +lunr.Pipeline.prototype.remove = function (fn) { + var pos = this._stack.indexOf(fn) + if (pos == -1) { + return + } + + this._stack.splice(pos, 1) +} + +/** + * Runs the current list of functions that make up the pipeline against the + * passed tokens. + * + * @param {Array} tokens The tokens to run through the pipeline. + * @returns {Array} + */ +lunr.Pipeline.prototype.run = function (tokens) { + var stackLength = this._stack.length + + for (var i = 0; i < stackLength; i++) { + var fn = this._stack[i] + var memo = [] + + for (var j = 0; j < tokens.length; j++) { + var result = fn(tokens[j], j, tokens) + + if (result === null || result === void 0 || result === '') continue + + if (Array.isArray(result)) { + for (var k = 0; k < result.length; k++) { + memo.push(result[k]) + } + } else { + memo.push(result) + } + } + + tokens = memo + } + + return tokens +} + +/** + * Convenience method for passing a string through a pipeline and getting + * strings out. This method takes care of wrapping the passed string in a + * token and mapping the resulting tokens back to strings. + * + * @param {string} str - The string to pass through the pipeline. + * @param {?object} metadata - Optional metadata to associate with the token + * passed to the pipeline. + * @returns {string[]} + */ +lunr.Pipeline.prototype.runString = function (str, metadata) { + var token = new lunr.Token (str, metadata) + + return this.run([token]).map(function (t) { + return t.toString() + }) +} + +/** + * Resets the pipeline by removing any existing processors. + * + */ +lunr.Pipeline.prototype.reset = function () { + this._stack = [] +} + +/** + * Returns a representation of the pipeline ready for serialisation. + * + * Logs a warning if the function has not been registered. + * + * @returns {Array} + */ +lunr.Pipeline.prototype.toJSON = function () { + return this._stack.map(function (fn) { + lunr.Pipeline.warnIfFunctionNotRegistered(fn) + + return fn.label + }) +} +/*! + * lunr.Vector + * Copyright (C) 2020 Oliver Nightingale + */ + +/** + * A vector is used to construct the vector space of documents and queries. These + * vectors support operations to determine the similarity between two documents or + * a document and a query. + * + * Normally no parameters are required for initializing a vector, but in the case of + * loading a previously dumped vector the raw elements can be provided to the constructor. 
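+ *
+ * For example (a sketch with made-up values), `new lunr.Vector([0, 1.2, 3, 0.8])`
+ * holds 1.2 at index 0 and 0.8 at index 3, using the flat layout described below.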
+ * + * For performance reasons vectors are implemented with a flat array, where an elements + * index is immediately followed by its value. E.g. [index, value, index, value]. This + * allows the underlying array to be as sparse as possible and still offer decent + * performance when being used for vector calculations. + * + * @constructor + * @param {Number[]} [elements] - The flat list of element index and element value pairs. + */ +lunr.Vector = function (elements) { + this._magnitude = 0 + this.elements = elements || [] +} + + +/** + * Calculates the position within the vector to insert a given index. + * + * This is used internally by insert and upsert. If there are duplicate indexes then + * the position is returned as if the value for that index were to be updated, but it + * is the callers responsibility to check whether there is a duplicate at that index + * + * @param {Number} insertIdx - The index at which the element should be inserted. + * @returns {Number} + */ +lunr.Vector.prototype.positionForIndex = function (index) { + // For an empty vector the tuple can be inserted at the beginning + if (this.elements.length == 0) { + return 0 + } + + var start = 0, + end = this.elements.length / 2, + sliceLength = end - start, + pivotPoint = Math.floor(sliceLength / 2), + pivotIndex = this.elements[pivotPoint * 2] + + while (sliceLength > 1) { + if (pivotIndex < index) { + start = pivotPoint + } + + if (pivotIndex > index) { + end = pivotPoint + } + + if (pivotIndex == index) { + break + } + + sliceLength = end - start + pivotPoint = start + Math.floor(sliceLength / 2) + pivotIndex = this.elements[pivotPoint * 2] + } + + if (pivotIndex == index) { + return pivotPoint * 2 + } + + if (pivotIndex > index) { + return pivotPoint * 2 + } + + if (pivotIndex < index) { + return (pivotPoint + 1) * 2 + } +} + +/** + * Inserts an element at an index within the vector. + * + * Does not allow duplicates, will throw an error if there is already an entry + * for this index. + * + * @param {Number} insertIdx - The index at which the element should be inserted. + * @param {Number} val - The value to be inserted into the vector. + */ +lunr.Vector.prototype.insert = function (insertIdx, val) { + this.upsert(insertIdx, val, function () { + throw "duplicate index" + }) +} + +/** + * Inserts or updates an existing index within the vector. + * + * @param {Number} insertIdx - The index at which the element should be inserted. + * @param {Number} val - The value to be inserted into the vector. + * @param {function} fn - A function that is called for updates, the existing value and the + * requested value are passed as arguments + */ +lunr.Vector.prototype.upsert = function (insertIdx, val, fn) { + this._magnitude = 0 + var position = this.positionForIndex(insertIdx) + + if (this.elements[position] == insertIdx) { + this.elements[position + 1] = fn(this.elements[position + 1], val) + } else { + this.elements.splice(position, 0, insertIdx, val) + } +} + +/** + * Calculates the magnitude of this vector. + * + * @returns {Number} + */ +lunr.Vector.prototype.magnitude = function () { + if (this._magnitude) return this._magnitude + + var sumOfSquares = 0, + elementsLength = this.elements.length + + for (var i = 1; i < elementsLength; i += 2) { + var val = this.elements[i] + sumOfSquares += val * val + } + + return this._magnitude = Math.sqrt(sumOfSquares) +} + +/** + * Calculates the dot product of this vector and another vector. + * + * @param {lunr.Vector} otherVector - The vector to compute the dot product with. 
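+ *
+ * @example
+ * // a hand-worked sketch: only indexes present in both vectors contribute
+ * new lunr.Vector([0, 2, 5, 3]).dot(new lunr.Vector([5, 4])) // => 12 (3 * 4 at index 5)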
+ * @returns {Number} + */ +lunr.Vector.prototype.dot = function (otherVector) { + var dotProduct = 0, + a = this.elements, b = otherVector.elements, + aLen = a.length, bLen = b.length, + aVal = 0, bVal = 0, + i = 0, j = 0 + + while (i < aLen && j < bLen) { + aVal = a[i], bVal = b[j] + if (aVal < bVal) { + i += 2 + } else if (aVal > bVal) { + j += 2 + } else if (aVal == bVal) { + dotProduct += a[i + 1] * b[j + 1] + i += 2 + j += 2 + } + } + + return dotProduct +} + +/** + * Calculates the similarity between this vector and another vector. + * + * @param {lunr.Vector} otherVector - The other vector to calculate the + * similarity with. + * @returns {Number} + */ +lunr.Vector.prototype.similarity = function (otherVector) { + return this.dot(otherVector) / this.magnitude() || 0 +} + +/** + * Converts the vector to an array of the elements within the vector. + * + * @returns {Number[]} + */ +lunr.Vector.prototype.toArray = function () { + var output = new Array (this.elements.length / 2) + + for (var i = 1, j = 0; i < this.elements.length; i += 2, j++) { + output[j] = this.elements[i] + } + + return output +} + +/** + * A JSON serializable representation of the vector. + * + * @returns {Number[]} + */ +lunr.Vector.prototype.toJSON = function () { + return this.elements +} +/* eslint-disable */ +/*! + * lunr.stemmer + * Copyright (C) 2020 Oliver Nightingale + * Includes code from - http://tartarus.org/~martin/PorterStemmer/js.txt + */ + +/** + * lunr.stemmer is an english language stemmer, this is a JavaScript + * implementation of the PorterStemmer taken from http://tartarus.org/~martin + * + * @static + * @implements {lunr.PipelineFunction} + * @param {lunr.Token} token - The string to stem + * @returns {lunr.Token} + * @see {@link lunr.Pipeline} + * @function + */ +lunr.stemmer = (function(){ + var step2list = { + "ational" : "ate", + "tional" : "tion", + "enci" : "ence", + "anci" : "ance", + "izer" : "ize", + "bli" : "ble", + "alli" : "al", + "entli" : "ent", + "eli" : "e", + "ousli" : "ous", + "ization" : "ize", + "ation" : "ate", + "ator" : "ate", + "alism" : "al", + "iveness" : "ive", + "fulness" : "ful", + "ousness" : "ous", + "aliti" : "al", + "iviti" : "ive", + "biliti" : "ble", + "logi" : "log" + }, + + step3list = { + "icate" : "ic", + "ative" : "", + "alize" : "al", + "iciti" : "ic", + "ical" : "ic", + "ful" : "", + "ness" : "" + }, + + c = "[^aeiou]", // consonant + v = "[aeiouy]", // vowel + C = c + "[^aeiouy]*", // consonant sequence + V = v + "[aeiou]*", // vowel sequence + + mgr0 = "^(" + C + ")?" + V + C, // [C]VC... is m>0 + meq1 = "^(" + C + ")?" + V + C + "(" + V + ")?$", // [C]VC[V] is m=1 + mgr1 = "^(" + C + ")?" + V + C + V + C, // [C]VCVC... is m>1 + s_v = "^(" + C + ")?" 
+ v; // vowel in stem + + var re_mgr0 = new RegExp(mgr0); + var re_mgr1 = new RegExp(mgr1); + var re_meq1 = new RegExp(meq1); + var re_s_v = new RegExp(s_v); + + var re_1a = /^(.+?)(ss|i)es$/; + var re2_1a = /^(.+?)([^s])s$/; + var re_1b = /^(.+?)eed$/; + var re2_1b = /^(.+?)(ed|ing)$/; + var re_1b_2 = /.$/; + var re2_1b_2 = /(at|bl|iz)$/; + var re3_1b_2 = new RegExp("([^aeiouylsz])\\1$"); + var re4_1b_2 = new RegExp("^" + C + v + "[^aeiouwxy]$"); + + var re_1c = /^(.+?[^aeiou])y$/; + var re_2 = /^(.+?)(ational|tional|enci|anci|izer|bli|alli|entli|eli|ousli|ization|ation|ator|alism|iveness|fulness|ousness|aliti|iviti|biliti|logi)$/; + + var re_3 = /^(.+?)(icate|ative|alize|iciti|ical|ful|ness)$/; + + var re_4 = /^(.+?)(al|ance|ence|er|ic|able|ible|ant|ement|ment|ent|ou|ism|ate|iti|ous|ive|ize)$/; + var re2_4 = /^(.+?)(s|t)(ion)$/; + + var re_5 = /^(.+?)e$/; + var re_5_1 = /ll$/; + var re3_5 = new RegExp("^" + C + v + "[^aeiouwxy]$"); + + var porterStemmer = function porterStemmer(w) { + var stem, + suffix, + firstch, + re, + re2, + re3, + re4; + + if (w.length < 3) { return w; } + + firstch = w.substr(0,1); + if (firstch == "y") { + w = firstch.toUpperCase() + w.substr(1); + } + + // Step 1a + re = re_1a + re2 = re2_1a; + + if (re.test(w)) { w = w.replace(re,"$1$2"); } + else if (re2.test(w)) { w = w.replace(re2,"$1$2"); } + + // Step 1b + re = re_1b; + re2 = re2_1b; + if (re.test(w)) { + var fp = re.exec(w); + re = re_mgr0; + if (re.test(fp[1])) { + re = re_1b_2; + w = w.replace(re,""); + } + } else if (re2.test(w)) { + var fp = re2.exec(w); + stem = fp[1]; + re2 = re_s_v; + if (re2.test(stem)) { + w = stem; + re2 = re2_1b_2; + re3 = re3_1b_2; + re4 = re4_1b_2; + if (re2.test(w)) { w = w + "e"; } + else if (re3.test(w)) { re = re_1b_2; w = w.replace(re,""); } + else if (re4.test(w)) { w = w + "e"; } + } + } + + // Step 1c - replace suffix y or Y by i if preceded by a non-vowel which is not the first letter of the word (so cry -> cri, by -> by, say -> say) + re = re_1c; + if (re.test(w)) { + var fp = re.exec(w); + stem = fp[1]; + w = stem + "i"; + } + + // Step 2 + re = re_2; + if (re.test(w)) { + var fp = re.exec(w); + stem = fp[1]; + suffix = fp[2]; + re = re_mgr0; + if (re.test(stem)) { + w = stem + step2list[suffix]; + } + } + + // Step 3 + re = re_3; + if (re.test(w)) { + var fp = re.exec(w); + stem = fp[1]; + suffix = fp[2]; + re = re_mgr0; + if (re.test(stem)) { + w = stem + step3list[suffix]; + } + } + + // Step 4 + re = re_4; + re2 = re2_4; + if (re.test(w)) { + var fp = re.exec(w); + stem = fp[1]; + re = re_mgr1; + if (re.test(stem)) { + w = stem; + } + } else if (re2.test(w)) { + var fp = re2.exec(w); + stem = fp[1] + fp[2]; + re2 = re_mgr1; + if (re2.test(stem)) { + w = stem; + } + } + + // Step 5 + re = re_5; + if (re.test(w)) { + var fp = re.exec(w); + stem = fp[1]; + re = re_mgr1; + re2 = re_meq1; + re3 = re3_5; + if (re.test(stem) || (re2.test(stem) && !(re3.test(stem)))) { + w = stem; + } + } + + re = re_5_1; + re2 = re_mgr1; + if (re.test(w) && re2.test(w)) { + re = re_1b_2; + w = w.replace(re,""); + } + + // and turn initial Y back to y + + if (firstch == "y") { + w = firstch.toLowerCase() + w.substr(1); + } + + return w; + }; + + return function (token) { + return token.update(porterStemmer); + } +})(); + +lunr.Pipeline.registerFunction(lunr.stemmer, 'stemmer') +/*! + * lunr.stopWordFilter + * Copyright (C) 2020 Oliver Nightingale + */ + +/** + * lunr.generateStopWordFilter builds a stopWordFilter function from the provided + * list of stop words. 
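+ *
+ * For example, `lunr.generateStopWordFilter(['und', 'oder', 'aber'])` would
+ * return a filter for a small made-up German stop word list; the returned
+ * function still needs to be added to a pipeline to take effect.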
+ * + * The built in lunr.stopWordFilter is built using this generator and can be used + * to generate custom stopWordFilters for applications or non English languages. + * + * @function + * @param {Array} token The token to pass through the filter + * @returns {lunr.PipelineFunction} + * @see lunr.Pipeline + * @see lunr.stopWordFilter + */ +lunr.generateStopWordFilter = function (stopWords) { + var words = stopWords.reduce(function (memo, stopWord) { + memo[stopWord] = stopWord + return memo + }, {}) + + return function (token) { + if (token && words[token.toString()] !== token.toString()) return token + } +} + +/** + * lunr.stopWordFilter is an English language stop word list filter, any words + * contained in the list will not be passed through the filter. + * + * This is intended to be used in the Pipeline. If the token does not pass the + * filter then undefined will be returned. + * + * @function + * @implements {lunr.PipelineFunction} + * @params {lunr.Token} token - A token to check for being a stop word. + * @returns {lunr.Token} + * @see {@link lunr.Pipeline} + */ +lunr.stopWordFilter = lunr.generateStopWordFilter([ + 'a', + 'able', + 'about', + 'across', + 'after', + 'all', + 'almost', + 'also', + 'am', + 'among', + 'an', + 'and', + 'any', + 'are', + 'as', + 'at', + 'be', + 'because', + 'been', + 'but', + 'by', + 'can', + 'cannot', + 'could', + 'dear', + 'did', + 'do', + 'does', + 'either', + 'else', + 'ever', + 'every', + 'for', + 'from', + 'get', + 'got', + 'had', + 'has', + 'have', + 'he', + 'her', + 'hers', + 'him', + 'his', + 'how', + 'however', + 'i', + 'if', + 'in', + 'into', + 'is', + 'it', + 'its', + 'just', + 'least', + 'let', + 'like', + 'likely', + 'may', + 'me', + 'might', + 'most', + 'must', + 'my', + 'neither', + 'no', + 'nor', + 'not', + 'of', + 'off', + 'often', + 'on', + 'only', + 'or', + 'other', + 'our', + 'own', + 'rather', + 'said', + 'say', + 'says', + 'she', + 'should', + 'since', + 'so', + 'some', + 'than', + 'that', + 'the', + 'their', + 'them', + 'then', + 'there', + 'these', + 'they', + 'this', + 'tis', + 'to', + 'too', + 'twas', + 'us', + 'wants', + 'was', + 'we', + 'were', + 'what', + 'when', + 'where', + 'which', + 'while', + 'who', + 'whom', + 'why', + 'will', + 'with', + 'would', + 'yet', + 'you', + 'your' +]) + +lunr.Pipeline.registerFunction(lunr.stopWordFilter, 'stopWordFilter') +/*! + * lunr.trimmer + * Copyright (C) 2020 Oliver Nightingale + */ + +/** + * lunr.trimmer is a pipeline function for trimming non word + * characters from the beginning and end of tokens before they + * enter the index. + * + * This implementation may not work correctly for non latin + * characters and should either be removed or adapted for use + * with languages with non-latin characters. + * + * @static + * @implements {lunr.PipelineFunction} + * @param {lunr.Token} token The token to pass through the filter + * @returns {lunr.Token} + * @see lunr.Pipeline + */ +lunr.trimmer = function (token) { + return token.update(function (s) { + return s.replace(/^\W+/, '').replace(/\W+$/, '') + }) +} + +lunr.Pipeline.registerFunction(lunr.trimmer, 'trimmer') +/*! + * lunr.TokenSet + * Copyright (C) 2020 Oliver Nightingale + */ + +/** + * A token set is used to store the unique list of all tokens + * within an index. Token sets are also used to represent an + * incoming query to the index, this query token set and index + * token set are then intersected to find which tokens to look + * up in the inverted index. 
+ * + * A token set can hold multiple tokens, as in the case of the + * index token set, or it can hold a single token as in the + * case of a simple query token set. + * + * Additionally token sets are used to perform wildcard matching. + * Leading, contained and trailing wildcards are supported, and + * from this edit distance matching can also be provided. + * + * Token sets are implemented as a minimal finite state automata, + * where both common prefixes and suffixes are shared between tokens. + * This helps to reduce the space used for storing the token set. + * + * @constructor + */ +lunr.TokenSet = function () { + this.final = false + this.edges = {} + this.id = lunr.TokenSet._nextId + lunr.TokenSet._nextId += 1 +} + +/** + * Keeps track of the next, auto increment, identifier to assign + * to a new tokenSet. + * + * TokenSets require a unique identifier to be correctly minimised. + * + * @private + */ +lunr.TokenSet._nextId = 1 + +/** + * Creates a TokenSet instance from the given sorted array of words. + * + * @param {String[]} arr - A sorted array of strings to create the set from. + * @returns {lunr.TokenSet} + * @throws Will throw an error if the input array is not sorted. + */ +lunr.TokenSet.fromArray = function (arr) { + var builder = new lunr.TokenSet.Builder + + for (var i = 0, len = arr.length; i < len; i++) { + builder.insert(arr[i]) + } + + builder.finish() + return builder.root +} + +/** + * Creates a token set from a query clause. + * + * @private + * @param {Object} clause - A single clause from lunr.Query. + * @param {string} clause.term - The query clause term. + * @param {number} [clause.editDistance] - The optional edit distance for the term. + * @returns {lunr.TokenSet} + */ +lunr.TokenSet.fromClause = function (clause) { + if ('editDistance' in clause) { + return lunr.TokenSet.fromFuzzyString(clause.term, clause.editDistance) + } else { + return lunr.TokenSet.fromString(clause.term) + } +} + +/** + * Creates a token set representing a single string with a specified + * edit distance. + * + * Insertions, deletions, substitutions and transpositions are each + * treated as an edit distance of 1. + * + * Increasing the allowed edit distance will have a dramatic impact + * on the performance of both creating and intersecting these TokenSets. + * It is advised to keep the edit distance less than 3. + * + * @param {string} str - The string to create the token set from. + * @param {number} editDistance - The allowed edit distance to match. 
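+ *
+ * @example
+ * // a rough sketch: a set matching "hello" and anything within one edit of it
+ * var fuzzy = lunr.TokenSet.fromFuzzyString("hello", 1)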
+ * @returns {lunr.Vector} + */ +lunr.TokenSet.fromFuzzyString = function (str, editDistance) { + var root = new lunr.TokenSet + + var stack = [{ + node: root, + editsRemaining: editDistance, + str: str + }] + + while (stack.length) { + var frame = stack.pop() + + // no edit + if (frame.str.length > 0) { + var char = frame.str.charAt(0), + noEditNode + + if (char in frame.node.edges) { + noEditNode = frame.node.edges[char] + } else { + noEditNode = new lunr.TokenSet + frame.node.edges[char] = noEditNode + } + + if (frame.str.length == 1) { + noEditNode.final = true + } + + stack.push({ + node: noEditNode, + editsRemaining: frame.editsRemaining, + str: frame.str.slice(1) + }) + } + + if (frame.editsRemaining == 0) { + continue + } + + // insertion + if ("*" in frame.node.edges) { + var insertionNode = frame.node.edges["*"] + } else { + var insertionNode = new lunr.TokenSet + frame.node.edges["*"] = insertionNode + } + + if (frame.str.length == 0) { + insertionNode.final = true + } + + stack.push({ + node: insertionNode, + editsRemaining: frame.editsRemaining - 1, + str: frame.str + }) + + // deletion + // can only do a deletion if we have enough edits remaining + // and if there are characters left to delete in the string + if (frame.str.length > 1) { + stack.push({ + node: frame.node, + editsRemaining: frame.editsRemaining - 1, + str: frame.str.slice(1) + }) + } + + // deletion + // just removing the last character from the str + if (frame.str.length == 1) { + frame.node.final = true + } + + // substitution + // can only do a substitution if we have enough edits remaining + // and if there are characters left to substitute + if (frame.str.length >= 1) { + if ("*" in frame.node.edges) { + var substitutionNode = frame.node.edges["*"] + } else { + var substitutionNode = new lunr.TokenSet + frame.node.edges["*"] = substitutionNode + } + + if (frame.str.length == 1) { + substitutionNode.final = true + } + + stack.push({ + node: substitutionNode, + editsRemaining: frame.editsRemaining - 1, + str: frame.str.slice(1) + }) + } + + // transposition + // can only do a transposition if there are edits remaining + // and there are enough characters to transpose + if (frame.str.length > 1) { + var charA = frame.str.charAt(0), + charB = frame.str.charAt(1), + transposeNode + + if (charB in frame.node.edges) { + transposeNode = frame.node.edges[charB] + } else { + transposeNode = new lunr.TokenSet + frame.node.edges[charB] = transposeNode + } + + if (frame.str.length == 1) { + transposeNode.final = true + } + + stack.push({ + node: transposeNode, + editsRemaining: frame.editsRemaining - 1, + str: charA + frame.str.slice(2) + }) + } + } + + return root +} + +/** + * Creates a TokenSet from a string. + * + * The string may contain one or more wildcard characters (*) + * that will allow wildcard matching when intersecting with + * another TokenSet. + * + * @param {string} str - The string to create a TokenSet from. + * @returns {lunr.TokenSet} + */ +lunr.TokenSet.fromString = function (str) { + var node = new lunr.TokenSet, + root = node + + /* + * Iterates through all characters within the passed string + * appending a node for each character. + * + * When a wildcard character is found then a self + * referencing edge is introduced to continually match + * any number of any characters. 
+ */ + for (var i = 0, len = str.length; i < len; i++) { + var char = str[i], + final = (i == len - 1) + + if (char == "*") { + node.edges[char] = node + node.final = final + + } else { + var next = new lunr.TokenSet + next.final = final + + node.edges[char] = next + node = next + } + } + + return root +} + +/** + * Converts this TokenSet into an array of strings + * contained within the TokenSet. + * + * This is not intended to be used on a TokenSet that + * contains wildcards, in these cases the results are + * undefined and are likely to cause an infinite loop. + * + * @returns {string[]} + */ +lunr.TokenSet.prototype.toArray = function () { + var words = [] + + var stack = [{ + prefix: "", + node: this + }] + + while (stack.length) { + var frame = stack.pop(), + edges = Object.keys(frame.node.edges), + len = edges.length + + if (frame.node.final) { + /* In Safari, at this point the prefix is sometimes corrupted, see: + * https://github.com/olivernn/lunr.js/issues/279 Calling any + * String.prototype method forces Safari to "cast" this string to what + * it's supposed to be, fixing the bug. */ + frame.prefix.charAt(0) + words.push(frame.prefix) + } + + for (var i = 0; i < len; i++) { + var edge = edges[i] + + stack.push({ + prefix: frame.prefix.concat(edge), + node: frame.node.edges[edge] + }) + } + } + + return words +} + +/** + * Generates a string representation of a TokenSet. + * + * This is intended to allow TokenSets to be used as keys + * in objects, largely to aid the construction and minimisation + * of a TokenSet. As such it is not designed to be a human + * friendly representation of the TokenSet. + * + * @returns {string} + */ +lunr.TokenSet.prototype.toString = function () { + // NOTE: Using Object.keys here as this.edges is very likely + // to enter 'hash-mode' with many keys being added + // + // avoiding a for-in loop here as it leads to the function + // being de-optimised (at least in V8). From some simple + // benchmarks the performance is comparable, but allowing + // V8 to optimize may mean easy performance wins in the future. + + if (this._str) { + return this._str + } + + var str = this.final ? '1' : '0', + labels = Object.keys(this.edges).sort(), + len = labels.length + + for (var i = 0; i < len; i++) { + var label = labels[i], + node = this.edges[label] + + str = str + label + node.id + } + + return str +} + +/** + * Returns a new TokenSet that is the intersection of + * this TokenSet and the passed TokenSet. + * + * This intersection will take into account any wildcards + * contained within the TokenSet. + * + * @param {lunr.TokenSet} b - An other TokenSet to intersect with. 
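+ *
+ * @example
+ * // a sketch: an index token set intersected with a wildcard query set
+ * // (fromArray expects a sorted input array)
+ * var index = lunr.TokenSet.fromArray(["hello", "help", "world"])
+ * var query = lunr.TokenSet.fromString("he*")
+ * index.intersect(query).toArray() // => ["hello", "help"], order not guaranteed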
+ * @returns {lunr.TokenSet} + */ +lunr.TokenSet.prototype.intersect = function (b) { + var output = new lunr.TokenSet, + frame = undefined + + var stack = [{ + qNode: b, + output: output, + node: this + }] + + while (stack.length) { + frame = stack.pop() + + // NOTE: As with the #toString method, we are using + // Object.keys and a for loop instead of a for-in loop + // as both of these objects enter 'hash' mode, causing + // the function to be de-optimised in V8 + var qEdges = Object.keys(frame.qNode.edges), + qLen = qEdges.length, + nEdges = Object.keys(frame.node.edges), + nLen = nEdges.length + + for (var q = 0; q < qLen; q++) { + var qEdge = qEdges[q] + + for (var n = 0; n < nLen; n++) { + var nEdge = nEdges[n] + + if (nEdge == qEdge || qEdge == '*') { + var node = frame.node.edges[nEdge], + qNode = frame.qNode.edges[qEdge], + final = node.final && qNode.final, + next = undefined + + if (nEdge in frame.output.edges) { + // an edge already exists for this character + // no need to create a new node, just set the finality + // bit unless this node is already final + next = frame.output.edges[nEdge] + next.final = next.final || final + + } else { + // no edge exists yet, must create one + // set the finality bit and insert it + // into the output + next = new lunr.TokenSet + next.final = final + frame.output.edges[nEdge] = next + } + + stack.push({ + qNode: qNode, + output: next, + node: node + }) + } + } + } + } + + return output +} +lunr.TokenSet.Builder = function () { + this.previousWord = "" + this.root = new lunr.TokenSet + this.uncheckedNodes = [] + this.minimizedNodes = {} +} + +lunr.TokenSet.Builder.prototype.insert = function (word) { + var node, + commonPrefix = 0 + + if (word < this.previousWord) { + throw new Error ("Out of order word insertion") + } + + for (var i = 0; i < word.length && i < this.previousWord.length; i++) { + if (word[i] != this.previousWord[i]) break + commonPrefix++ + } + + this.minimize(commonPrefix) + + if (this.uncheckedNodes.length == 0) { + node = this.root + } else { + node = this.uncheckedNodes[this.uncheckedNodes.length - 1].child + } + + for (var i = commonPrefix; i < word.length; i++) { + var nextNode = new lunr.TokenSet, + char = word[i] + + node.edges[char] = nextNode + + this.uncheckedNodes.push({ + parent: node, + char: char, + child: nextNode + }) + + node = nextNode + } + + node.final = true + this.previousWord = word +} + +lunr.TokenSet.Builder.prototype.finish = function () { + this.minimize(0) +} + +lunr.TokenSet.Builder.prototype.minimize = function (downTo) { + for (var i = this.uncheckedNodes.length - 1; i >= downTo; i--) { + var node = this.uncheckedNodes[i], + childKey = node.child.toString() + + if (childKey in this.minimizedNodes) { + node.parent.edges[node.char] = this.minimizedNodes[childKey] + } else { + // Cache the key for this node since + // we know it can't change anymore + node.child._str = childKey + + this.minimizedNodes[childKey] = node.child + } + + this.uncheckedNodes.pop() + } +} +/*! + * lunr.Index + * Copyright (C) 2020 Oliver Nightingale + */ + +/** + * An index contains the built index of all documents and provides a query interface + * to the index. + * + * Usually instances of lunr.Index will not be created using this constructor, instead + * lunr.Builder should be used to construct new indexes, or lunr.Index.load should be + * used to load previously built and serialized indexes. + * + * @constructor + * @param {Object} attrs - The attributes of the built search index. 
+ * @param {Object} attrs.invertedIndex - An index of term/field to document reference. + * @param {Object} attrs.fieldVectors - Field vectors + * @param {lunr.TokenSet} attrs.tokenSet - An set of all corpus tokens. + * @param {string[]} attrs.fields - The names of indexed document fields. + * @param {lunr.Pipeline} attrs.pipeline - The pipeline to use for search terms. + */ +lunr.Index = function (attrs) { + this.invertedIndex = attrs.invertedIndex + this.fieldVectors = attrs.fieldVectors + this.tokenSet = attrs.tokenSet + this.fields = attrs.fields + this.pipeline = attrs.pipeline +} + +/** + * A result contains details of a document matching a search query. + * @typedef {Object} lunr.Index~Result + * @property {string} ref - The reference of the document this result represents. + * @property {number} score - A number between 0 and 1 representing how similar this document is to the query. + * @property {lunr.MatchData} matchData - Contains metadata about this match including which term(s) caused the match. + */ + +/** + * Although lunr provides the ability to create queries using lunr.Query, it also provides a simple + * query language which itself is parsed into an instance of lunr.Query. + * + * For programmatically building queries it is advised to directly use lunr.Query, the query language + * is best used for human entered text rather than program generated text. + * + * At its simplest queries can just be a single term, e.g. `hello`, multiple terms are also supported + * and will be combined with OR, e.g `hello world` will match documents that contain either 'hello' + * or 'world', though those that contain both will rank higher in the results. + * + * Wildcards can be included in terms to match one or more unspecified characters, these wildcards can + * be inserted anywhere within the term, and more than one wildcard can exist in a single term. Adding + * wildcards will increase the number of documents that will be found but can also have a negative + * impact on query performance, especially with wildcards at the beginning of a term. + * + * Terms can be restricted to specific fields, e.g. `title:hello`, only documents with the term + * hello in the title field will match this query. Using a field not present in the index will lead + * to an error being thrown. + * + * Modifiers can also be added to terms, lunr supports edit distance and boost modifiers on terms. A term + * boost will make documents matching that term score higher, e.g. `foo^5`. Edit distance is also supported + * to provide fuzzy matching, e.g. 'hello~2' will match documents with hello with an edit distance of 2. + * Avoid large values for edit distance to improve query performance. + * + * Each term also supports a presence modifier. By default a term's presence in document is optional, however + * this can be changed to either required or prohibited. For a term's presence to be required in a document the + * term should be prefixed with a '+', e.g. `+foo bar` is a search for documents that must contain 'foo' and + * optionally contain 'bar'. Conversely a leading '-' sets the terms presence to prohibited, i.e. it must not + * appear in a document, e.g. `-foo bar` is a search for documents that do not contain 'foo' but may contain 'bar'. + * + * To escape special characters the backslash character '\' can be used, this allows searches to include + * characters that would normally be considered modifiers, e.g. 
`foo\~2` will search for a term "foo~2" instead + * of attempting to apply a boost of 2 to the search term "foo". + * + * @typedef {string} lunr.Index~QueryString + * @example Simple single term query + * hello + * @example Multiple term query + * hello world + * @example term scoped to a field + * title:hello + * @example term with a boost of 10 + * hello^10 + * @example term with an edit distance of 2 + * hello~2 + * @example terms with presence modifiers + * -foo +bar baz + */ + +/** + * Performs a search against the index using lunr query syntax. + * + * Results will be returned sorted by their score, the most relevant results + * will be returned first. For details on how the score is calculated, please see + * the {@link https://lunrjs.com/guides/searching.html#scoring|guide}. + * + * For more programmatic querying use lunr.Index#query. + * + * @param {lunr.Index~QueryString} queryString - A string containing a lunr query. + * @throws {lunr.QueryParseError} If the passed query string cannot be parsed. + * @returns {lunr.Index~Result[]} + */ +lunr.Index.prototype.search = function (queryString) { + return this.query(function (query) { + var parser = new lunr.QueryParser(queryString, query) + parser.parse() + }) +} + +/** + * A query builder callback provides a query object to be used to express + * the query to perform on the index. + * + * @callback lunr.Index~queryBuilder + * @param {lunr.Query} query - The query object to build up. + * @this lunr.Query + */ + +/** + * Performs a query against the index using the yielded lunr.Query object. + * + * If performing programmatic queries against the index, this method is preferred + * over lunr.Index#search so as to avoid the additional query parsing overhead. + * + * A query object is yielded to the supplied function which should be used to + * express the query to be run against the index. + * + * Note that although this function takes a callback parameter it is _not_ an + * asynchronous operation, the callback is just yielded a query object to be + * customized. + * + * @param {lunr.Index~queryBuilder} fn - A function that is used to build the query. + * @returns {lunr.Index~Result[]} + */ +lunr.Index.prototype.query = function (fn) { + // for each query clause + // * process terms + // * expand terms from token set + // * find matching documents and metadata + // * get document vectors + // * score documents + + var query = new lunr.Query(this.fields), + matchingFields = Object.create(null), + queryVectors = Object.create(null), + termFieldCache = Object.create(null), + requiredMatches = Object.create(null), + prohibitedMatches = Object.create(null) + + /* + * To support field level boosts a query vector is created per + * field. An empty vector is eagerly created to support negated + * queries. + */ + for (var i = 0; i < this.fields.length; i++) { + queryVectors[this.fields[i]] = new lunr.Vector + } + + fn.call(query, query) + + for (var i = 0; i < query.clauses.length; i++) { + /* + * Unless the pipeline has been disabled for this term, which is + * the case for terms with wildcards, we need to pass the clause + * term through the search pipeline. A pipeline returns an array + * of processed terms. Pipeline functions may expand the passed + * term, which means we may end up performing multiple index lookups + * for a single query term. 
+ */ + var clause = query.clauses[i], + terms = null, + clauseMatches = lunr.Set.empty + + if (clause.usePipeline) { + terms = this.pipeline.runString(clause.term, { + fields: clause.fields + }) + } else { + terms = [clause.term] + } + + for (var m = 0; m < terms.length; m++) { + var term = terms[m] + + /* + * Each term returned from the pipeline needs to use the same query + * clause object, e.g. the same boost and or edit distance. The + * simplest way to do this is to re-use the clause object but mutate + * its term property. + */ + clause.term = term + + /* + * From the term in the clause we create a token set which will then + * be used to intersect the indexes token set to get a list of terms + * to lookup in the inverted index + */ + var termTokenSet = lunr.TokenSet.fromClause(clause), + expandedTerms = this.tokenSet.intersect(termTokenSet).toArray() + + /* + * If a term marked as required does not exist in the tokenSet it is + * impossible for the search to return any matches. We set all the field + * scoped required matches set to empty and stop examining any further + * clauses. + */ + if (expandedTerms.length === 0 && clause.presence === lunr.Query.presence.REQUIRED) { + for (var k = 0; k < clause.fields.length; k++) { + var field = clause.fields[k] + requiredMatches[field] = lunr.Set.empty + } + + break + } + + for (var j = 0; j < expandedTerms.length; j++) { + /* + * For each term get the posting and termIndex, this is required for + * building the query vector. + */ + var expandedTerm = expandedTerms[j], + posting = this.invertedIndex[expandedTerm], + termIndex = posting._index + + for (var k = 0; k < clause.fields.length; k++) { + /* + * For each field that this query term is scoped by (by default + * all fields are in scope) we need to get all the document refs + * that have this term in that field. + * + * The posting is the entry in the invertedIndex for the matching + * term from above. + */ + var field = clause.fields[k], + fieldPosting = posting[field], + matchingDocumentRefs = Object.keys(fieldPosting), + termField = expandedTerm + "/" + field, + matchingDocumentsSet = new lunr.Set(matchingDocumentRefs) + + /* + * if the presence of this term is required ensure that the matching + * documents are added to the set of required matches for this clause. + * + */ + if (clause.presence == lunr.Query.presence.REQUIRED) { + clauseMatches = clauseMatches.union(matchingDocumentsSet) + + if (requiredMatches[field] === undefined) { + requiredMatches[field] = lunr.Set.complete + } + } + + /* + * if the presence of this term is prohibited ensure that the matching + * documents are added to the set of prohibited matches for this field, + * creating that set if it does not yet exist. + */ + if (clause.presence == lunr.Query.presence.PROHIBITED) { + if (prohibitedMatches[field] === undefined) { + prohibitedMatches[field] = lunr.Set.empty + } + + prohibitedMatches[field] = prohibitedMatches[field].union(matchingDocumentsSet) + + /* + * Prohibited matches should not be part of the query vector used for + * similarity scoring and no metadata should be extracted so we continue + * to the next field + */ + continue + } + + /* + * The query field vector is populated using the termIndex found for + * the term and a unit value with the appropriate boost applied. + * Using upsert because there could already be an entry in the vector + * for the term we are working with. In that case we just add the scores + * together. 
+ */ + queryVectors[field].upsert(termIndex, clause.boost, function (a, b) { return a + b }) + + /** + * If we've already seen this term, field combo then we've already collected + * the matching documents and metadata, no need to go through all that again + */ + if (termFieldCache[termField]) { + continue + } + + for (var l = 0; l < matchingDocumentRefs.length; l++) { + /* + * All metadata for this term/field/document triple + * are then extracted and collected into an instance + * of lunr.MatchData ready to be returned in the query + * results + */ + var matchingDocumentRef = matchingDocumentRefs[l], + matchingFieldRef = new lunr.FieldRef (matchingDocumentRef, field), + metadata = fieldPosting[matchingDocumentRef], + fieldMatch + + if ((fieldMatch = matchingFields[matchingFieldRef]) === undefined) { + matchingFields[matchingFieldRef] = new lunr.MatchData (expandedTerm, field, metadata) + } else { + fieldMatch.add(expandedTerm, field, metadata) + } + + } + + termFieldCache[termField] = true + } + } + } + + /** + * If the presence was required we need to update the requiredMatches field sets. + * We do this after all fields for the term have collected their matches because + * the clause terms presence is required in _any_ of the fields not _all_ of the + * fields. + */ + if (clause.presence === lunr.Query.presence.REQUIRED) { + for (var k = 0; k < clause.fields.length; k++) { + var field = clause.fields[k] + requiredMatches[field] = requiredMatches[field].intersect(clauseMatches) + } + } + } + + /** + * Need to combine the field scoped required and prohibited + * matching documents into a global set of required and prohibited + * matches + */ + var allRequiredMatches = lunr.Set.complete, + allProhibitedMatches = lunr.Set.empty + + for (var i = 0; i < this.fields.length; i++) { + var field = this.fields[i] + + if (requiredMatches[field]) { + allRequiredMatches = allRequiredMatches.intersect(requiredMatches[field]) + } + + if (prohibitedMatches[field]) { + allProhibitedMatches = allProhibitedMatches.union(prohibitedMatches[field]) + } + } + + var matchingFieldRefs = Object.keys(matchingFields), + results = [], + matches = Object.create(null) + + /* + * If the query is negated (contains only prohibited terms) + * we need to get _all_ fieldRefs currently existing in the + * index. This is only done when we know that the query is + * entirely prohibited terms to avoid any cost of getting all + * fieldRefs unnecessarily. + * + * Additionally, blank MatchData must be created to correctly + * populate the results. + */ + if (query.isNegated()) { + matchingFieldRefs = Object.keys(this.fieldVectors) + + for (var i = 0; i < matchingFieldRefs.length; i++) { + var matchingFieldRef = matchingFieldRefs[i] + var fieldRef = lunr.FieldRef.fromString(matchingFieldRef) + matchingFields[matchingFieldRef] = new lunr.MatchData + } + } + + for (var i = 0; i < matchingFieldRefs.length; i++) { + /* + * Currently we have document fields that match the query, but we + * need to return documents. The matchData and scores are combined + * from multiple fields belonging to the same document. + * + * Scores are calculated by field, using the query vectors created + * above, and combined into a final document score using addition. 
+ */ + var fieldRef = lunr.FieldRef.fromString(matchingFieldRefs[i]), + docRef = fieldRef.docRef + + if (!allRequiredMatches.contains(docRef)) { + continue + } + + if (allProhibitedMatches.contains(docRef)) { + continue + } + + var fieldVector = this.fieldVectors[fieldRef], + score = queryVectors[fieldRef.fieldName].similarity(fieldVector), + docMatch + + if ((docMatch = matches[docRef]) !== undefined) { + docMatch.score += score + docMatch.matchData.combine(matchingFields[fieldRef]) + } else { + var match = { + ref: docRef, + score: score, + matchData: matchingFields[fieldRef] + } + matches[docRef] = match + results.push(match) + } + } + + /* + * Sort the results objects by score, highest first. + */ + return results.sort(function (a, b) { + return b.score - a.score + }) +} + +/** + * Prepares the index for JSON serialization. + * + * The schema for this JSON blob will be described in a + * separate JSON schema file. + * + * @returns {Object} + */ +lunr.Index.prototype.toJSON = function () { + var invertedIndex = Object.keys(this.invertedIndex) + .sort() + .map(function (term) { + return [term, this.invertedIndex[term]] + }, this) + + var fieldVectors = Object.keys(this.fieldVectors) + .map(function (ref) { + return [ref, this.fieldVectors[ref].toJSON()] + }, this) + + return { + version: lunr.version, + fields: this.fields, + fieldVectors: fieldVectors, + invertedIndex: invertedIndex, + pipeline: this.pipeline.toJSON() + } +} + +/** + * Loads a previously serialized lunr.Index + * + * @param {Object} serializedIndex - A previously serialized lunr.Index + * @returns {lunr.Index} + */ +lunr.Index.load = function (serializedIndex) { + var attrs = {}, + fieldVectors = {}, + serializedVectors = serializedIndex.fieldVectors, + invertedIndex = Object.create(null), + serializedInvertedIndex = serializedIndex.invertedIndex, + tokenSetBuilder = new lunr.TokenSet.Builder, + pipeline = lunr.Pipeline.load(serializedIndex.pipeline) + + if (serializedIndex.version != lunr.version) { + lunr.utils.warn("Version mismatch when loading serialised index. Current version of lunr '" + lunr.version + "' does not match serialized index '" + serializedIndex.version + "'") + } + + for (var i = 0; i < serializedVectors.length; i++) { + var tuple = serializedVectors[i], + ref = tuple[0], + elements = tuple[1] + + fieldVectors[ref] = new lunr.Vector(elements) + } + + for (var i = 0; i < serializedInvertedIndex.length; i++) { + var tuple = serializedInvertedIndex[i], + term = tuple[0], + posting = tuple[1] + + tokenSetBuilder.insert(term) + invertedIndex[term] = posting + } + + tokenSetBuilder.finish() + + attrs.fields = serializedIndex.fields + + attrs.fieldVectors = fieldVectors + attrs.invertedIndex = invertedIndex + attrs.tokenSet = tokenSetBuilder.root + attrs.pipeline = pipeline + + return new lunr.Index(attrs) +} +/*! + * lunr.Builder + * Copyright (C) 2020 Oliver Nightingale + */ + +/** + * lunr.Builder performs indexing on a set of documents and + * returns instances of lunr.Index ready for querying. + * + * All configuration of the index is done via the builder, the + * fields to index, the document reference, the text processing + * pipeline and document scoring parameters are all set on the + * builder before indexing. + * + * @constructor + * @property {string} _ref - Internal reference to the document reference field. + * @property {string[]} _fields - Internal reference to the document fields to index. + * @property {object} invertedIndex - The inverted index maps terms to document fields. 
+ * @property {object} documentTermFrequencies - Keeps track of document term frequencies. + * @property {object} documentLengths - Keeps track of the length of documents added to the index. + * @property {lunr.tokenizer} tokenizer - Function for splitting strings into tokens for indexing. + * @property {lunr.Pipeline} pipeline - The pipeline performs text processing on tokens before indexing. + * @property {lunr.Pipeline} searchPipeline - A pipeline for processing search terms before querying the index. + * @property {number} documentCount - Keeps track of the total number of documents indexed. + * @property {number} _b - A parameter to control field length normalization, setting this to 0 disabled normalization, 1 fully normalizes field lengths, the default value is 0.75. + * @property {number} _k1 - A parameter to control how quickly an increase in term frequency results in term frequency saturation, the default value is 1.2. + * @property {number} termIndex - A counter incremented for each unique term, used to identify a terms position in the vector space. + * @property {array} metadataWhitelist - A list of metadata keys that have been whitelisted for entry in the index. + */ +lunr.Builder = function () { + this._ref = "id" + this._fields = Object.create(null) + this._documents = Object.create(null) + this.invertedIndex = Object.create(null) + this.fieldTermFrequencies = {} + this.fieldLengths = {} + this.tokenizer = lunr.tokenizer + this.pipeline = new lunr.Pipeline + this.searchPipeline = new lunr.Pipeline + this.documentCount = 0 + this._b = 0.75 + this._k1 = 1.2 + this.termIndex = 0 + this.metadataWhitelist = [] +} + +/** + * Sets the document field used as the document reference. Every document must have this field. + * The type of this field in the document should be a string, if it is not a string it will be + * coerced into a string by calling toString. + * + * The default ref is 'id'. + * + * The ref should _not_ be changed during indexing, it should be set before any documents are + * added to the index. Changing it during indexing can lead to inconsistent results. + * + * @param {string} ref - The name of the reference field in the document. + */ +lunr.Builder.prototype.ref = function (ref) { + this._ref = ref +} + +/** + * A function that is used to extract a field from a document. + * + * Lunr expects a field to be at the top level of a document, if however the field + * is deeply nested within a document an extractor function can be used to extract + * the right field for indexing. + * + * @callback fieldExtractor + * @param {object} doc - The document being added to the index. + * @returns {?(string|object|object[])} obj - The object that will be indexed for this field. + * @example Extracting a nested field + * function (doc) { return doc.nested.field } + */ + +/** + * Adds a field to the list of document fields that will be indexed. Every document being + * indexed should have this field. Null values for this field in indexed documents will + * not cause errors but will limit the chance of that document being retrieved by searches. + * + * All fields should be added before adding documents to the index. Adding fields after + * a document has been indexed will have no effect on already indexed documents. + * + * Fields can be boosted at build time. This allows terms within that field to have more + * importance when ranking search results. Use a field boost to specify that matches within + * one field are more important than other fields. 
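+ *
+ * @example
+ * // a sketch ("builder" is assumed to be a lunr.Builder instance; the nested
+ * // body field is illustrative only)
+ * builder.field('title', { boost: 10 })
+ * builder.field('body', { extractor: function (doc) { return doc.nested.body } })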
+ * + * @param {string} fieldName - The name of a field to index in all documents. + * @param {object} attributes - Optional attributes associated with this field. + * @param {number} [attributes.boost=1] - Boost applied to all terms within this field. + * @param {fieldExtractor} [attributes.extractor] - Function to extract a field from a document. + * @throws {RangeError} fieldName cannot contain unsupported characters '/' + */ +lunr.Builder.prototype.field = function (fieldName, attributes) { + if (/\//.test(fieldName)) { + throw new RangeError ("Field '" + fieldName + "' contains illegal character '/'") + } + + this._fields[fieldName] = attributes || {} +} + +/** + * A parameter to tune the amount of field length normalisation that is applied when + * calculating relevance scores. A value of 0 will completely disable any normalisation + * and a value of 1 will fully normalise field lengths. The default is 0.75. Values of b + * will be clamped to the range 0 - 1. + * + * @param {number} number - The value to set for this tuning parameter. + */ +lunr.Builder.prototype.b = function (number) { + if (number < 0) { + this._b = 0 + } else if (number > 1) { + this._b = 1 + } else { + this._b = number + } +} + +/** + * A parameter that controls the speed at which a rise in term frequency results in term + * frequency saturation. The default value is 1.2. Setting this to a higher value will give + * slower saturation levels, a lower value will result in quicker saturation. + * + * @param {number} number - The value to set for this tuning parameter. + */ +lunr.Builder.prototype.k1 = function (number) { + this._k1 = number +} + +/** + * Adds a document to the index. + * + * Before adding fields to the index the index should have been fully setup, with the document + * ref and all fields to index already having been specified. + * + * The document must have a field name as specified by the ref (by default this is 'id') and + * it should have all fields defined for indexing, though null or undefined values will not + * cause errors. + * + * Entire documents can be boosted at build time. Applying a boost to a document indicates that + * this document should rank higher in search results than other documents. + * + * @param {object} doc - The document to add to the index. + * @param {object} attributes - Optional attributes associated with this document. + * @param {number} [attributes.boost=1] - Boost applied to all terms within this document. + */ +lunr.Builder.prototype.add = function (doc, attributes) { + var docRef = doc[this._ref], + fields = Object.keys(this._fields) + + this._documents[docRef] = attributes || {} + this.documentCount += 1 + + for (var i = 0; i < fields.length; i++) { + var fieldName = fields[i], + extractor = this._fields[fieldName].extractor, + field = extractor ? 
extractor(doc) : doc[fieldName], + tokens = this.tokenizer(field, { + fields: [fieldName] + }), + terms = this.pipeline.run(tokens), + fieldRef = new lunr.FieldRef (docRef, fieldName), + fieldTerms = Object.create(null) + + this.fieldTermFrequencies[fieldRef] = fieldTerms + this.fieldLengths[fieldRef] = 0 + + // store the length of this field for this document + this.fieldLengths[fieldRef] += terms.length + + // calculate term frequencies for this field + for (var j = 0; j < terms.length; j++) { + var term = terms[j] + + if (fieldTerms[term] == undefined) { + fieldTerms[term] = 0 + } + + fieldTerms[term] += 1 + + // add to inverted index + // create an initial posting if one doesn't exist + if (this.invertedIndex[term] == undefined) { + var posting = Object.create(null) + posting["_index"] = this.termIndex + this.termIndex += 1 + + for (var k = 0; k < fields.length; k++) { + posting[fields[k]] = Object.create(null) + } + + this.invertedIndex[term] = posting + } + + // add an entry for this term/fieldName/docRef to the invertedIndex + if (this.invertedIndex[term][fieldName][docRef] == undefined) { + this.invertedIndex[term][fieldName][docRef] = Object.create(null) + } + + // store all whitelisted metadata about this token in the + // inverted index + for (var l = 0; l < this.metadataWhitelist.length; l++) { + var metadataKey = this.metadataWhitelist[l], + metadata = term.metadata[metadataKey] + + if (this.invertedIndex[term][fieldName][docRef][metadataKey] == undefined) { + this.invertedIndex[term][fieldName][docRef][metadataKey] = [] + } + + this.invertedIndex[term][fieldName][docRef][metadataKey].push(metadata) + } + } + + } +} + +/** + * Calculates the average document length for this index + * + * @private + */ +lunr.Builder.prototype.calculateAverageFieldLengths = function () { + + var fieldRefs = Object.keys(this.fieldLengths), + numberOfFields = fieldRefs.length, + accumulator = {}, + documentsWithField = {} + + for (var i = 0; i < numberOfFields; i++) { + var fieldRef = lunr.FieldRef.fromString(fieldRefs[i]), + field = fieldRef.fieldName + + documentsWithField[field] || (documentsWithField[field] = 0) + documentsWithField[field] += 1 + + accumulator[field] || (accumulator[field] = 0) + accumulator[field] += this.fieldLengths[fieldRef] + } + + var fields = Object.keys(this._fields) + + for (var i = 0; i < fields.length; i++) { + var fieldName = fields[i] + accumulator[fieldName] = accumulator[fieldName] / documentsWithField[fieldName] + } + + this.averageFieldLength = accumulator +} + +/** + * Builds a vector space model of every document using lunr.Vector + * + * @private + */ +lunr.Builder.prototype.createFieldVectors = function () { + var fieldVectors = {}, + fieldRefs = Object.keys(this.fieldTermFrequencies), + fieldRefsLength = fieldRefs.length, + termIdfCache = Object.create(null) + + for (var i = 0; i < fieldRefsLength; i++) { + var fieldRef = lunr.FieldRef.fromString(fieldRefs[i]), + fieldName = fieldRef.fieldName, + fieldLength = this.fieldLengths[fieldRef], + fieldVector = new lunr.Vector, + termFrequencies = this.fieldTermFrequencies[fieldRef], + terms = Object.keys(termFrequencies), + termsLength = terms.length + + + var fieldBoost = this._fields[fieldName].boost || 1, + docBoost = this._documents[fieldRef.docRef].boost || 1 + + for (var j = 0; j < termsLength; j++) { + var term = terms[j], + tf = termFrequencies[term], + termIndex = this.invertedIndex[term]._index, + idf, score, scoreWithPrecision + + if (termIdfCache[term] === undefined) { + idf = 
lunr.idf(this.invertedIndex[term], this.documentCount) + termIdfCache[term] = idf + } else { + idf = termIdfCache[term] + } + + score = idf * ((this._k1 + 1) * tf) / (this._k1 * (1 - this._b + this._b * (fieldLength / this.averageFieldLength[fieldName])) + tf) + score *= fieldBoost + score *= docBoost + scoreWithPrecision = Math.round(score * 1000) / 1000 + // Converts 1.23456789 to 1.234. + // Reducing the precision so that the vectors take up less + // space when serialised. Doing it now so that they behave + // the same before and after serialisation. Also, this is + // the fastest approach to reducing a number's precision in + // JavaScript. + + fieldVector.insert(termIndex, scoreWithPrecision) + } + + fieldVectors[fieldRef] = fieldVector + } + + this.fieldVectors = fieldVectors +} + +/** + * Creates a token set of all tokens in the index using lunr.TokenSet + * + * @private + */ +lunr.Builder.prototype.createTokenSet = function () { + this.tokenSet = lunr.TokenSet.fromArray( + Object.keys(this.invertedIndex).sort() + ) +} + +/** + * Builds the index, creating an instance of lunr.Index. + * + * This completes the indexing process and should only be called + * once all documents have been added to the index. + * + * @returns {lunr.Index} + */ +lunr.Builder.prototype.build = function () { + this.calculateAverageFieldLengths() + this.createFieldVectors() + this.createTokenSet() + + return new lunr.Index({ + invertedIndex: this.invertedIndex, + fieldVectors: this.fieldVectors, + tokenSet: this.tokenSet, + fields: Object.keys(this._fields), + pipeline: this.searchPipeline + }) +} + +/** + * Applies a plugin to the index builder. + * + * A plugin is a function that is called with the index builder as its context. + * Plugins can be used to customise or extend the behaviour of the index + * in some way. A plugin is just a function, that encapsulated the custom + * behaviour that should be applied when building the index. + * + * The plugin function will be called with the index builder as its argument, additional + * arguments can also be passed when calling use. The function will be called + * with the index builder as its context. + * + * @param {Function} plugin The plugin to apply. + */ +lunr.Builder.prototype.use = function (fn) { + var args = Array.prototype.slice.call(arguments, 1) + args.unshift(this) + fn.apply(this, args) +} +/** + * Contains and collects metadata about a matching document. + * A single instance of lunr.MatchData is returned as part of every + * lunr.Index~Result. + * + * @constructor + * @param {string} term - The term this match data is associated with + * @param {string} field - The field in which the term was found + * @param {object} metadata - The metadata recorded about this term in this field + * @property {object} metadata - A cloned collection of metadata associated with this document. + * @see {@link lunr.Index~Result} + */ +lunr.MatchData = function (term, field, metadata) { + var clonedMetadata = Object.create(null), + metadataKeys = Object.keys(metadata || {}) + + // Cloning the metadata to prevent the original + // being mutated during match data combination. 
+ // Metadata is kept in an array within the inverted + // index so cloning the data can be done with + // Array#slice + for (var i = 0; i < metadataKeys.length; i++) { + var key = metadataKeys[i] + clonedMetadata[key] = metadata[key].slice() + } + + this.metadata = Object.create(null) + + if (term !== undefined) { + this.metadata[term] = Object.create(null) + this.metadata[term][field] = clonedMetadata + } +} + +/** + * An instance of lunr.MatchData will be created for every term that matches a + * document. However only one instance is required in a lunr.Index~Result. This + * method combines metadata from another instance of lunr.MatchData with this + * objects metadata. + * + * @param {lunr.MatchData} otherMatchData - Another instance of match data to merge with this one. + * @see {@link lunr.Index~Result} + */ +lunr.MatchData.prototype.combine = function (otherMatchData) { + var terms = Object.keys(otherMatchData.metadata) + + for (var i = 0; i < terms.length; i++) { + var term = terms[i], + fields = Object.keys(otherMatchData.metadata[term]) + + if (this.metadata[term] == undefined) { + this.metadata[term] = Object.create(null) + } + + for (var j = 0; j < fields.length; j++) { + var field = fields[j], + keys = Object.keys(otherMatchData.metadata[term][field]) + + if (this.metadata[term][field] == undefined) { + this.metadata[term][field] = Object.create(null) + } + + for (var k = 0; k < keys.length; k++) { + var key = keys[k] + + if (this.metadata[term][field][key] == undefined) { + this.metadata[term][field][key] = otherMatchData.metadata[term][field][key] + } else { + this.metadata[term][field][key] = this.metadata[term][field][key].concat(otherMatchData.metadata[term][field][key]) + } + + } + } + } +} + +/** + * Add metadata for a term/field pair to this instance of match data. + * + * @param {string} term - The term this match data is associated with + * @param {string} field - The field in which the term was found + * @param {object} metadata - The metadata recorded about this term in this field + */ +lunr.MatchData.prototype.add = function (term, field, metadata) { + if (!(term in this.metadata)) { + this.metadata[term] = Object.create(null) + this.metadata[term][field] = metadata + return + } + + if (!(field in this.metadata[term])) { + this.metadata[term][field] = metadata + return + } + + var metadataKeys = Object.keys(metadata) + + for (var i = 0; i < metadataKeys.length; i++) { + var key = metadataKeys[i] + + if (key in this.metadata[term][field]) { + this.metadata[term][field][key] = this.metadata[term][field][key].concat(metadata[key]) + } else { + this.metadata[term][field][key] = metadata[key] + } + } +} +/** + * A lunr.Query provides a programmatic way of defining queries to be performed + * against a {@link lunr.Index}. + * + * Prefer constructing a lunr.Query using the {@link lunr.Index#query} method + * so the query object is pre-initialized with the right index fields. + * + * @constructor + * @property {lunr.Query~Clause[]} clauses - An array of query clauses. + * @property {string[]} allFields - An array of all available fields in a lunr.Index. + */ +lunr.Query = function (allFields) { + this.clauses = [] + this.allFields = allFields +} + +/** + * Constants for indicating what kind of automatic wildcard insertion will be used when constructing a query clause. + * + * This allows wildcards to be added to the beginning and end of a term without having to manually do any string + * concatenation. 
+ * + * The wildcard constants can be bitwise combined to select both leading and trailing wildcards. + * + * @constant + * @default + * @property {number} wildcard.NONE - The term will have no wildcards inserted, this is the default behaviour + * @property {number} wildcard.LEADING - Prepend the term with a wildcard, unless a leading wildcard already exists + * @property {number} wildcard.TRAILING - Append a wildcard to the term, unless a trailing wildcard already exists + * @see lunr.Query~Clause + * @see lunr.Query#clause + * @see lunr.Query#term + * @example query term with trailing wildcard + * query.term('foo', { wildcard: lunr.Query.wildcard.TRAILING }) + * @example query term with leading and trailing wildcard + * query.term('foo', { + * wildcard: lunr.Query.wildcard.LEADING | lunr.Query.wildcard.TRAILING + * }) + */ + +lunr.Query.wildcard = new String ("*") +lunr.Query.wildcard.NONE = 0 +lunr.Query.wildcard.LEADING = 1 +lunr.Query.wildcard.TRAILING = 2 + +/** + * Constants for indicating what kind of presence a term must have in matching documents. + * + * @constant + * @enum {number} + * @see lunr.Query~Clause + * @see lunr.Query#clause + * @see lunr.Query#term + * @example query term with required presence + * query.term('foo', { presence: lunr.Query.presence.REQUIRED }) + */ +lunr.Query.presence = { + /** + * Term's presence in a document is optional, this is the default value. + */ + OPTIONAL: 1, + + /** + * Term's presence in a document is required, documents that do not contain + * this term will not be returned. + */ + REQUIRED: 2, + + /** + * Term's presence in a document is prohibited, documents that do contain + * this term will not be returned. + */ + PROHIBITED: 3 +} + +/** + * A single clause in a {@link lunr.Query} contains a term and details on how to + * match that term against a {@link lunr.Index}. + * + * @typedef {Object} lunr.Query~Clause + * @property {string[]} fields - The fields in an index this clause should be matched against. + * @property {number} [boost=1] - Any boost that should be applied when matching this clause. + * @property {number} [editDistance] - Whether the term should have fuzzy matching applied, and how fuzzy the match should be. + * @property {boolean} [usePipeline] - Whether the term should be passed through the search pipeline. + * @property {number} [wildcard=lunr.Query.wildcard.NONE] - Whether the term should have wildcards appended or prepended. + * @property {number} [presence=lunr.Query.presence.OPTIONAL] - The terms presence in any matching documents. + */ + +/** + * Adds a {@link lunr.Query~Clause} to this query. + * + * Unless the clause contains the fields to be matched all fields will be matched. In addition + * a default boost of 1 is applied to the clause. + * + * @param {lunr.Query~Clause} clause - The clause to add to this query. 
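+ *
+ * As an illustrative sketch only (the term and field values are hypothetical
+ * and not taken from this file), a clause limited to a single field with a
+ * boost might look like:
+ *
+ * @example
+ * query.clause({ term: 'foo', fields: ['title'], boost: 2 })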
+ * @see lunr.Query~Clause + * @returns {lunr.Query} + */ +lunr.Query.prototype.clause = function (clause) { + if (!('fields' in clause)) { + clause.fields = this.allFields + } + + if (!('boost' in clause)) { + clause.boost = 1 + } + + if (!('usePipeline' in clause)) { + clause.usePipeline = true + } + + if (!('wildcard' in clause)) { + clause.wildcard = lunr.Query.wildcard.NONE + } + + if ((clause.wildcard & lunr.Query.wildcard.LEADING) && (clause.term.charAt(0) != lunr.Query.wildcard)) { + clause.term = "*" + clause.term + } + + if ((clause.wildcard & lunr.Query.wildcard.TRAILING) && (clause.term.slice(-1) != lunr.Query.wildcard)) { + clause.term = "" + clause.term + "*" + } + + if (!('presence' in clause)) { + clause.presence = lunr.Query.presence.OPTIONAL + } + + this.clauses.push(clause) + + return this +} + +/** + * A negated query is one in which every clause has a presence of + * prohibited. These queries require some special processing to return + * the expected results. + * + * @returns boolean + */ +lunr.Query.prototype.isNegated = function () { + for (var i = 0; i < this.clauses.length; i++) { + if (this.clauses[i].presence != lunr.Query.presence.PROHIBITED) { + return false + } + } + + return true +} + +/** + * Adds a term to the current query, under the covers this will create a {@link lunr.Query~Clause} + * to the list of clauses that make up this query. + * + * The term is used as is, i.e. no tokenization will be performed by this method. Instead conversion + * to a token or token-like string should be done before calling this method. + * + * The term will be converted to a string by calling `toString`. Multiple terms can be passed as an + * array, each term in the array will share the same options. + * + * @param {object|object[]} term - The term(s) to add to the query. + * @param {object} [options] - Any additional properties to add to the query clause. 
+ * @returns {lunr.Query} + * @see lunr.Query#clause + * @see lunr.Query~Clause + * @example adding a single term to a query + * query.term("foo") + * @example adding a single term to a query and specifying search fields, term boost and automatic trailing wildcard + * query.term("foo", { + * fields: ["title"], + * boost: 10, + * wildcard: lunr.Query.wildcard.TRAILING + * }) + * @example using lunr.tokenizer to convert a string to tokens before using them as terms + * query.term(lunr.tokenizer("foo bar")) + */ +lunr.Query.prototype.term = function (term, options) { + if (Array.isArray(term)) { + term.forEach(function (t) { this.term(t, lunr.utils.clone(options)) }, this) + return this + } + + var clause = options || {} + clause.term = term.toString() + + this.clause(clause) + + return this +} +lunr.QueryParseError = function (message, start, end) { + this.name = "QueryParseError" + this.message = message + this.start = start + this.end = end +} + +lunr.QueryParseError.prototype = new Error +lunr.QueryLexer = function (str) { + this.lexemes = [] + this.str = str + this.length = str.length + this.pos = 0 + this.start = 0 + this.escapeCharPositions = [] +} + +lunr.QueryLexer.prototype.run = function () { + var state = lunr.QueryLexer.lexText + + while (state) { + state = state(this) + } +} + +lunr.QueryLexer.prototype.sliceString = function () { + var subSlices = [], + sliceStart = this.start, + sliceEnd = this.pos + + for (var i = 0; i < this.escapeCharPositions.length; i++) { + sliceEnd = this.escapeCharPositions[i] + subSlices.push(this.str.slice(sliceStart, sliceEnd)) + sliceStart = sliceEnd + 1 + } + + subSlices.push(this.str.slice(sliceStart, this.pos)) + this.escapeCharPositions.length = 0 + + return subSlices.join('') +} + +lunr.QueryLexer.prototype.emit = function (type) { + this.lexemes.push({ + type: type, + str: this.sliceString(), + start: this.start, + end: this.pos + }) + + this.start = this.pos +} + +lunr.QueryLexer.prototype.escapeCharacter = function () { + this.escapeCharPositions.push(this.pos - 1) + this.pos += 1 +} + +lunr.QueryLexer.prototype.next = function () { + if (this.pos >= this.length) { + return lunr.QueryLexer.EOS + } + + var char = this.str.charAt(this.pos) + this.pos += 1 + return char +} + +lunr.QueryLexer.prototype.width = function () { + return this.pos - this.start +} + +lunr.QueryLexer.prototype.ignore = function () { + if (this.start == this.pos) { + this.pos += 1 + } + + this.start = this.pos +} + +lunr.QueryLexer.prototype.backup = function () { + this.pos -= 1 +} + +lunr.QueryLexer.prototype.acceptDigitRun = function () { + var char, charCode + + do { + char = this.next() + charCode = char.charCodeAt(0) + } while (charCode > 47 && charCode < 58) + + if (char != lunr.QueryLexer.EOS) { + this.backup() + } +} + +lunr.QueryLexer.prototype.more = function () { + return this.pos < this.length +} + +lunr.QueryLexer.EOS = 'EOS' +lunr.QueryLexer.FIELD = 'FIELD' +lunr.QueryLexer.TERM = 'TERM' +lunr.QueryLexer.EDIT_DISTANCE = 'EDIT_DISTANCE' +lunr.QueryLexer.BOOST = 'BOOST' +lunr.QueryLexer.PRESENCE = 'PRESENCE' + +lunr.QueryLexer.lexField = function (lexer) { + lexer.backup() + lexer.emit(lunr.QueryLexer.FIELD) + lexer.ignore() + return lunr.QueryLexer.lexText +} + +lunr.QueryLexer.lexTerm = function (lexer) { + if (lexer.width() > 1) { + lexer.backup() + lexer.emit(lunr.QueryLexer.TERM) + } + + lexer.ignore() + + if (lexer.more()) { + return lunr.QueryLexer.lexText + } +} + +lunr.QueryLexer.lexEditDistance = function (lexer) { + lexer.ignore() + 
lexer.acceptDigitRun() + lexer.emit(lunr.QueryLexer.EDIT_DISTANCE) + return lunr.QueryLexer.lexText +} + +lunr.QueryLexer.lexBoost = function (lexer) { + lexer.ignore() + lexer.acceptDigitRun() + lexer.emit(lunr.QueryLexer.BOOST) + return lunr.QueryLexer.lexText +} + +lunr.QueryLexer.lexEOS = function (lexer) { + if (lexer.width() > 0) { + lexer.emit(lunr.QueryLexer.TERM) + } +} + +// This matches the separator used when tokenising fields +// within a document. These should match otherwise it is +// not possible to search for some tokens within a document. +// +// It is possible for the user to change the separator on the +// tokenizer so it _might_ clash with any other of the special +// characters already used within the search string, e.g. :. +// +// This means that it is possible to change the separator in +// such a way that makes some words unsearchable using a search +// string. +lunr.QueryLexer.termSeparator = lunr.tokenizer.separator + +lunr.QueryLexer.lexText = function (lexer) { + while (true) { + var char = lexer.next() + + if (char == lunr.QueryLexer.EOS) { + return lunr.QueryLexer.lexEOS + } + + // Escape character is '\' + if (char.charCodeAt(0) == 92) { + lexer.escapeCharacter() + continue + } + + if (char == ":") { + return lunr.QueryLexer.lexField + } + + if (char == "~") { + lexer.backup() + if (lexer.width() > 0) { + lexer.emit(lunr.QueryLexer.TERM) + } + return lunr.QueryLexer.lexEditDistance + } + + if (char == "^") { + lexer.backup() + if (lexer.width() > 0) { + lexer.emit(lunr.QueryLexer.TERM) + } + return lunr.QueryLexer.lexBoost + } + + // "+" indicates term presence is required + // checking for length to ensure that only + // leading "+" are considered + if (char == "+" && lexer.width() === 1) { + lexer.emit(lunr.QueryLexer.PRESENCE) + return lunr.QueryLexer.lexText + } + + // "-" indicates term presence is prohibited + // checking for length to ensure that only + // leading "-" are considered + if (char == "-" && lexer.width() === 1) { + lexer.emit(lunr.QueryLexer.PRESENCE) + return lunr.QueryLexer.lexText + } + + if (char.match(lunr.QueryLexer.termSeparator)) { + return lunr.QueryLexer.lexTerm + } + } +} + +lunr.QueryParser = function (str, query) { + this.lexer = new lunr.QueryLexer (str) + this.query = query + this.currentClause = {} + this.lexemeIdx = 0 +} + +lunr.QueryParser.prototype.parse = function () { + this.lexer.run() + this.lexemes = this.lexer.lexemes + + var state = lunr.QueryParser.parseClause + + while (state) { + state = state(this) + } + + return this.query +} + +lunr.QueryParser.prototype.peekLexeme = function () { + return this.lexemes[this.lexemeIdx] +} + +lunr.QueryParser.prototype.consumeLexeme = function () { + var lexeme = this.peekLexeme() + this.lexemeIdx += 1 + return lexeme +} + +lunr.QueryParser.prototype.nextClause = function () { + var completedClause = this.currentClause + this.query.clause(completedClause) + this.currentClause = {} +} + +lunr.QueryParser.parseClause = function (parser) { + var lexeme = parser.peekLexeme() + + if (lexeme == undefined) { + return + } + + switch (lexeme.type) { + case lunr.QueryLexer.PRESENCE: + return lunr.QueryParser.parsePresence + case lunr.QueryLexer.FIELD: + return lunr.QueryParser.parseField + case lunr.QueryLexer.TERM: + return lunr.QueryParser.parseTerm + default: + var errorMessage = "expected either a field or a term, found " + lexeme.type + + if (lexeme.str.length >= 1) { + errorMessage += " with value '" + lexeme.str + "'" + } + + throw new lunr.QueryParseError (errorMessage, 
lexeme.start, lexeme.end) + } +} + +lunr.QueryParser.parsePresence = function (parser) { + var lexeme = parser.consumeLexeme() + + if (lexeme == undefined) { + return + } + + switch (lexeme.str) { + case "-": + parser.currentClause.presence = lunr.Query.presence.PROHIBITED + break + case "+": + parser.currentClause.presence = lunr.Query.presence.REQUIRED + break + default: + var errorMessage = "unrecognised presence operator'" + lexeme.str + "'" + throw new lunr.QueryParseError (errorMessage, lexeme.start, lexeme.end) + } + + var nextLexeme = parser.peekLexeme() + + if (nextLexeme == undefined) { + var errorMessage = "expecting term or field, found nothing" + throw new lunr.QueryParseError (errorMessage, lexeme.start, lexeme.end) + } + + switch (nextLexeme.type) { + case lunr.QueryLexer.FIELD: + return lunr.QueryParser.parseField + case lunr.QueryLexer.TERM: + return lunr.QueryParser.parseTerm + default: + var errorMessage = "expecting term or field, found '" + nextLexeme.type + "'" + throw new lunr.QueryParseError (errorMessage, nextLexeme.start, nextLexeme.end) + } +} + +lunr.QueryParser.parseField = function (parser) { + var lexeme = parser.consumeLexeme() + + if (lexeme == undefined) { + return + } + + if (parser.query.allFields.indexOf(lexeme.str) == -1) { + var possibleFields = parser.query.allFields.map(function (f) { return "'" + f + "'" }).join(', '), + errorMessage = "unrecognised field '" + lexeme.str + "', possible fields: " + possibleFields + + throw new lunr.QueryParseError (errorMessage, lexeme.start, lexeme.end) + } + + parser.currentClause.fields = [lexeme.str] + + var nextLexeme = parser.peekLexeme() + + if (nextLexeme == undefined) { + var errorMessage = "expecting term, found nothing" + throw new lunr.QueryParseError (errorMessage, lexeme.start, lexeme.end) + } + + switch (nextLexeme.type) { + case lunr.QueryLexer.TERM: + return lunr.QueryParser.parseTerm + default: + var errorMessage = "expecting term, found '" + nextLexeme.type + "'" + throw new lunr.QueryParseError (errorMessage, nextLexeme.start, nextLexeme.end) + } +} + +lunr.QueryParser.parseTerm = function (parser) { + var lexeme = parser.consumeLexeme() + + if (lexeme == undefined) { + return + } + + parser.currentClause.term = lexeme.str.toLowerCase() + + if (lexeme.str.indexOf("*") != -1) { + parser.currentClause.usePipeline = false + } + + var nextLexeme = parser.peekLexeme() + + if (nextLexeme == undefined) { + parser.nextClause() + return + } + + switch (nextLexeme.type) { + case lunr.QueryLexer.TERM: + parser.nextClause() + return lunr.QueryParser.parseTerm + case lunr.QueryLexer.FIELD: + parser.nextClause() + return lunr.QueryParser.parseField + case lunr.QueryLexer.EDIT_DISTANCE: + return lunr.QueryParser.parseEditDistance + case lunr.QueryLexer.BOOST: + return lunr.QueryParser.parseBoost + case lunr.QueryLexer.PRESENCE: + parser.nextClause() + return lunr.QueryParser.parsePresence + default: + var errorMessage = "Unexpected lexeme type '" + nextLexeme.type + "'" + throw new lunr.QueryParseError (errorMessage, nextLexeme.start, nextLexeme.end) + } +} + +lunr.QueryParser.parseEditDistance = function (parser) { + var lexeme = parser.consumeLexeme() + + if (lexeme == undefined) { + return + } + + var editDistance = parseInt(lexeme.str, 10) + + if (isNaN(editDistance)) { + var errorMessage = "edit distance must be numeric" + throw new lunr.QueryParseError (errorMessage, lexeme.start, lexeme.end) + } + + parser.currentClause.editDistance = editDistance + + var nextLexeme = parser.peekLexeme() + + if 
(nextLexeme == undefined) { + parser.nextClause() + return + } + + switch (nextLexeme.type) { + case lunr.QueryLexer.TERM: + parser.nextClause() + return lunr.QueryParser.parseTerm + case lunr.QueryLexer.FIELD: + parser.nextClause() + return lunr.QueryParser.parseField + case lunr.QueryLexer.EDIT_DISTANCE: + return lunr.QueryParser.parseEditDistance + case lunr.QueryLexer.BOOST: + return lunr.QueryParser.parseBoost + case lunr.QueryLexer.PRESENCE: + parser.nextClause() + return lunr.QueryParser.parsePresence + default: + var errorMessage = "Unexpected lexeme type '" + nextLexeme.type + "'" + throw new lunr.QueryParseError (errorMessage, nextLexeme.start, nextLexeme.end) + } +} + +lunr.QueryParser.parseBoost = function (parser) { + var lexeme = parser.consumeLexeme() + + if (lexeme == undefined) { + return + } + + var boost = parseInt(lexeme.str, 10) + + if (isNaN(boost)) { + var errorMessage = "boost must be numeric" + throw new lunr.QueryParseError (errorMessage, lexeme.start, lexeme.end) + } + + parser.currentClause.boost = boost + + var nextLexeme = parser.peekLexeme() + + if (nextLexeme == undefined) { + parser.nextClause() + return + } + + switch (nextLexeme.type) { + case lunr.QueryLexer.TERM: + parser.nextClause() + return lunr.QueryParser.parseTerm + case lunr.QueryLexer.FIELD: + parser.nextClause() + return lunr.QueryParser.parseField + case lunr.QueryLexer.EDIT_DISTANCE: + return lunr.QueryParser.parseEditDistance + case lunr.QueryLexer.BOOST: + return lunr.QueryParser.parseBoost + case lunr.QueryLexer.PRESENCE: + parser.nextClause() + return lunr.QueryParser.parsePresence + default: + var errorMessage = "Unexpected lexeme type '" + nextLexeme.type + "'" + throw new lunr.QueryParseError (errorMessage, nextLexeme.start, nextLexeme.end) + } +} + + /** + * export the module via AMD, CommonJS or as a browser global + * Export code from https://github.com/umdjs/umd/blob/master/returnExports.js + */ + ;(function (root, factory) { + if (typeof define === 'function' && define.amd) { + // AMD. Register as an anonymous module. + define(factory) + } else if (typeof exports === 'object') { + /** + * Node. Does not work with strict CommonJS, but + * only CommonJS-like environments that support module.exports, + * like Node. + */ + module.exports = factory() + } else { + // Browser globals (root is window) + root.lunr = factory() + } + }(this, function () { + /** + * Just return a value to define the module export. + * This example returns an object, but the module + * can return a function as the exported value. + */ + return lunr + })) +})(); diff --git a/site/search/lunr.stemmer.support.js b/site/search/lunr.stemmer.support.js new file mode 100644 index 0000000000000000000000000000000000000000..896476a1812214c0d3527ff1764bd6729a80bc04 --- /dev/null +++ b/site/search/lunr.stemmer.support.js @@ -0,0 +1,304 @@ +/*! + * Snowball JavaScript Library v0.3 + * http://code.google.com/p/urim/ + * http://snowball.tartarus.org/ + * + * Copyright 2010, Oleg Mazko + * http://www.mozilla.org/MPL/ + */ + +/** + * export the module via AMD, CommonJS or as a browser global + * Export code from https://github.com/umdjs/umd/blob/master/returnExports.js + */ +;(function (root, factory) { + if (typeof define === 'function' && define.amd) { + // AMD. Register as an anonymous module. + define(factory) + } else if (typeof exports === 'object') { + /** + * Node. Does not work with strict CommonJS, but + * only CommonJS-like environments that support module.exports, + * like Node. 
+ */ + module.exports = factory() + } else { + // Browser globals (root is window) + factory()(root.lunr); + } +}(this, function () { + /** + * Just return a value to define the module export. + * This example returns an object, but the module + * can return a function as the exported value. + */ + return function(lunr) { + /* provides utilities for the included stemmers */ + lunr.stemmerSupport = { + Among: function(s, substring_i, result, method) { + this.toCharArray = function(s) { + var sLength = s.length, charArr = new Array(sLength); + for (var i = 0; i < sLength; i++) + charArr[i] = s.charCodeAt(i); + return charArr; + }; + + if ((!s && s != "") || (!substring_i && (substring_i != 0)) || !result) + throw ("Bad Among initialisation: s:" + s + ", substring_i: " + + substring_i + ", result: " + result); + this.s_size = s.length; + this.s = this.toCharArray(s); + this.substring_i = substring_i; + this.result = result; + this.method = method; + }, + SnowballProgram: function() { + var current; + return { + bra : 0, + ket : 0, + limit : 0, + cursor : 0, + limit_backward : 0, + setCurrent : function(word) { + current = word; + this.cursor = 0; + this.limit = word.length; + this.limit_backward = 0; + this.bra = this.cursor; + this.ket = this.limit; + }, + getCurrent : function() { + var result = current; + current = null; + return result; + }, + in_grouping : function(s, min, max) { + if (this.cursor < this.limit) { + var ch = current.charCodeAt(this.cursor); + if (ch <= max && ch >= min) { + ch -= min; + if (s[ch >> 3] & (0X1 << (ch & 0X7))) { + this.cursor++; + return true; + } + } + } + return false; + }, + in_grouping_b : function(s, min, max) { + if (this.cursor > this.limit_backward) { + var ch = current.charCodeAt(this.cursor - 1); + if (ch <= max && ch >= min) { + ch -= min; + if (s[ch >> 3] & (0X1 << (ch & 0X7))) { + this.cursor--; + return true; + } + } + } + return false; + }, + out_grouping : function(s, min, max) { + if (this.cursor < this.limit) { + var ch = current.charCodeAt(this.cursor); + if (ch > max || ch < min) { + this.cursor++; + return true; + } + ch -= min; + if (!(s[ch >> 3] & (0X1 << (ch & 0X7)))) { + this.cursor++; + return true; + } + } + return false; + }, + out_grouping_b : function(s, min, max) { + if (this.cursor > this.limit_backward) { + var ch = current.charCodeAt(this.cursor - 1); + if (ch > max || ch < min) { + this.cursor--; + return true; + } + ch -= min; + if (!(s[ch >> 3] & (0X1 << (ch & 0X7)))) { + this.cursor--; + return true; + } + } + return false; + }, + eq_s : function(s_size, s) { + if (this.limit - this.cursor < s_size) + return false; + for (var i = 0; i < s_size; i++) + if (current.charCodeAt(this.cursor + i) != s.charCodeAt(i)) + return false; + this.cursor += s_size; + return true; + }, + eq_s_b : function(s_size, s) { + if (this.cursor - this.limit_backward < s_size) + return false; + for (var i = 0; i < s_size; i++) + if (current.charCodeAt(this.cursor - s_size + i) != s + .charCodeAt(i)) + return false; + this.cursor -= s_size; + return true; + }, + find_among : function(v, v_size) { + var i = 0, j = v_size, c = this.cursor, l = this.limit, common_i = 0, common_j = 0, first_key_inspected = false; + while (true) { + var k = i + ((j - i) >> 1), diff = 0, common = common_i < common_j + ? 
common_i + : common_j, w = v[k]; + for (var i2 = common; i2 < w.s_size; i2++) { + if (c + common == l) { + diff = -1; + break; + } + diff = current.charCodeAt(c + common) - w.s[i2]; + if (diff) + break; + common++; + } + if (diff < 0) { + j = k; + common_j = common; + } else { + i = k; + common_i = common; + } + if (j - i <= 1) { + if (i > 0 || j == i || first_key_inspected) + break; + first_key_inspected = true; + } + } + while (true) { + var w = v[i]; + if (common_i >= w.s_size) { + this.cursor = c + w.s_size; + if (!w.method) + return w.result; + var res = w.method(); + this.cursor = c + w.s_size; + if (res) + return w.result; + } + i = w.substring_i; + if (i < 0) + return 0; + } + }, + find_among_b : function(v, v_size) { + var i = 0, j = v_size, c = this.cursor, lb = this.limit_backward, common_i = 0, common_j = 0, first_key_inspected = false; + while (true) { + var k = i + ((j - i) >> 1), diff = 0, common = common_i < common_j + ? common_i + : common_j, w = v[k]; + for (var i2 = w.s_size - 1 - common; i2 >= 0; i2--) { + if (c - common == lb) { + diff = -1; + break; + } + diff = current.charCodeAt(c - 1 - common) - w.s[i2]; + if (diff) + break; + common++; + } + if (diff < 0) { + j = k; + common_j = common; + } else { + i = k; + common_i = common; + } + if (j - i <= 1) { + if (i > 0 || j == i || first_key_inspected) + break; + first_key_inspected = true; + } + } + while (true) { + var w = v[i]; + if (common_i >= w.s_size) { + this.cursor = c - w.s_size; + if (!w.method) + return w.result; + var res = w.method(); + this.cursor = c - w.s_size; + if (res) + return w.result; + } + i = w.substring_i; + if (i < 0) + return 0; + } + }, + replace_s : function(c_bra, c_ket, s) { + var adjustment = s.length - (c_ket - c_bra), left = current + .substring(0, c_bra), right = current.substring(c_ket); + current = left + s + right; + this.limit += adjustment; + if (this.cursor >= c_ket) + this.cursor += adjustment; + else if (this.cursor > c_bra) + this.cursor = c_bra; + return adjustment; + }, + slice_check : function() { + if (this.bra < 0 || this.bra > this.ket || this.ket > this.limit + || this.limit > current.length) + throw ("faulty slice operation"); + }, + slice_from : function(s) { + this.slice_check(); + this.replace_s(this.bra, this.ket, s); + }, + slice_del : function() { + this.slice_from(""); + }, + insert : function(c_bra, c_ket, s) { + var adjustment = this.replace_s(c_bra, c_ket, s); + if (c_bra <= this.bra) + this.bra += adjustment; + if (c_bra <= this.ket) + this.ket += adjustment; + }, + slice_to : function() { + this.slice_check(); + return current.substring(this.bra, this.ket); + }, + eq_v_b : function(s) { + return this.eq_s_b(s.length, s); + } + }; + } + }; + + lunr.trimmerSupport = { + generateTrimmer: function(wordCharacters) { + var startRegex = new RegExp("^[^" + wordCharacters + "]+") + var endRegex = new RegExp("[^" + wordCharacters + "]+$") + + return function(token) { + // for lunr version 2 + if (typeof token.update === "function") { + return token.update(function (s) { + return s + .replace(startRegex, '') + .replace(endRegex, ''); + }) + } else { // for lunr version 1 + return token + .replace(startRegex, '') + .replace(endRegex, ''); + } + }; + } + } + } +})); diff --git a/site/search/lunr.zh.js b/site/search/lunr.zh.js new file mode 100644 index 0000000000000000000000000000000000000000..48f5890d964e4e66500e21b6ed2b099f98527ea2 --- /dev/null +++ b/site/search/lunr.zh.js @@ -0,0 +1,145 @@ +/*! 
+ * Lunr languages, `Chinese` language + * https://github.com/MihaiValentin/lunr-languages + * + * Copyright 2019, Felix Lian (repairearth) + * http://www.mozilla.org/MPL/ + */ +/*! + * based on + * Snowball zhvaScript Library v0.3 + * http://code.google.com/p/urim/ + * http://snowball.tartarus.org/ + * + * Copyright 2010, Oleg Mazko + * http://www.mozilla.org/MPL/ + */ + +/** + * export the module via AMD, CommonJS or as a browser global + * Export code from https://github.com/umdjs/umd/blob/master/returnExports.js + */ +; +(function(root, factory) { + if (typeof define === 'function' && define.amd) { + // AMD. Register as an anonymous module. + define(factory) + } else if (typeof exports === 'object') { + /** + * Node. Does not work with strict CommonJS, but + * only CommonJS-like environments that support module.exports, + * like Node. + */ + module.exports = factory(require('@node-rs/jieba')) + } else { + // Browser globals (root is window) + factory()(root.lunr); + } +}(this, function(nodejieba) { + /** + * Just return a value to define the module export. + * This example returns an object, but the module + * can return a function as the exported value. + */ + return function(lunr, nodejiebaDictJson) { + /* throw error if lunr is not yet included */ + if ('undefined' === typeof lunr) { + throw new Error('Lunr is not present. Please include / require Lunr before this script.'); + } + + /* throw error if lunr stemmer support is not yet included */ + if ('undefined' === typeof lunr.stemmerSupport) { + throw new Error('Lunr stemmer support is not present. Please include / require Lunr stemmer support before this script.'); + } + + /* + Chinese tokenization is trickier, since it does not + take into account spaces. + Since the tokenization function is represented different + internally for each of the Lunr versions, this had to be done + in order to try to try to pick the best way of doing this based + on the Lunr version + */ + var isLunr2 = lunr.version[0] == "2"; + + /* register specific locale function */ + lunr.zh = function() { + this.pipeline.reset(); + this.pipeline.add( + lunr.zh.trimmer, + lunr.zh.stopWordFilter, + lunr.zh.stemmer + ); + + // change the tokenizer for Chinese one + if (isLunr2) { // for lunr version 2.0.0 + this.tokenizer = lunr.zh.tokenizer; + } else { + if (lunr.tokenizer) { // for lunr version 0.6.0 + lunr.tokenizer = lunr.zh.tokenizer; + } + if (this.tokenizerFn) { // for lunr version 0.7.0 -> 1.0.0 + this.tokenizerFn = lunr.zh.tokenizer; + } + } + }; + + lunr.zh.tokenizer = function(obj) { + if (!arguments.length || obj == null || obj == undefined) return [] + if (Array.isArray(obj)) return obj.map(function(t) { + return isLunr2 ? 
new lunr.Token(t.toLowerCase()) : t.toLowerCase() + }) + + nodejiebaDictJson && nodejieba.load(nodejiebaDictJson) + + var str = obj.toString().trim().toLowerCase(); + var tokens = []; + + nodejieba.cut(str, true).forEach(function(seg) { + tokens = tokens.concat(seg.split(' ')) + }) + + tokens = tokens.filter(function(token) { + return !!token; + }); + + var fromIndex = 0 + + return tokens.map(function(token, index) { + if (isLunr2) { + var start = str.indexOf(token, fromIndex) + + var tokenMetadata = {} + tokenMetadata["position"] = [start, token.length] + tokenMetadata["index"] = index + + fromIndex = start + + return new lunr.Token(token, tokenMetadata); + } else { + return token + } + }); + } + + /* lunr trimmer function */ + lunr.zh.wordCharacters = "\\w\u4e00-\u9fa5"; + lunr.zh.trimmer = lunr.trimmerSupport.generateTrimmer(lunr.zh.wordCharacters); + lunr.Pipeline.registerFunction(lunr.zh.trimmer, 'trimmer-zh'); + + /* lunr stemmer function */ + lunr.zh.stemmer = (function() { + + /* TODO Chinese stemmer */ + return function(word) { + return word; + } + })(); + lunr.Pipeline.registerFunction(lunr.zh.stemmer, 'stemmer-zh'); + + /* lunr stop word filter. see https://www.ranks.nl/stopwords/chinese-stopwords */ + lunr.zh.stopWordFilter = lunr.generateStopWordFilter( + '的 一 不 在 人 有 是 为 為 以 于 於 上 他 而 后 後 之 来 來 及 了 因 下 可 到 由 这 這 与 與 也 此 但 并 並 个 個 其 已 无 無 小 我 们 們 起 最 再 今 去 好 只 又 或 很 亦 某 把 那 你 乃 它 吧 被 比 别 趁 当 當 从 從 得 打 凡 儿 兒 尔 爾 该 該 各 给 給 跟 和 何 还 還 即 几 幾 既 看 据 據 距 靠 啦 另 么 麽 每 嘛 拿 哪 您 凭 憑 且 却 卻 让 讓 仍 啥 如 若 使 谁 誰 虽 雖 随 隨 同 所 她 哇 嗡 往 些 向 沿 哟 喲 用 咱 则 則 怎 曾 至 致 着 著 诸 諸 自'.split(' ')); + lunr.Pipeline.registerFunction(lunr.zh.stopWordFilter, 'stopWordFilter-zh'); + }; +})) \ No newline at end of file diff --git a/site/search/main.js b/site/search/main.js new file mode 100644 index 0000000000000000000000000000000000000000..a5e469d7c8d0d5e28fea196c244bc687fa3c9cd2 --- /dev/null +++ b/site/search/main.js @@ -0,0 +1,109 @@ +function getSearchTermFromLocation() { + var sPageURL = window.location.search.substring(1); + var sURLVariables = sPageURL.split('&'); + for (var i = 0; i < sURLVariables.length; i++) { + var sParameterName = sURLVariables[i].split('='); + if (sParameterName[0] == 'q') { + return decodeURIComponent(sParameterName[1].replace(/\+/g, '%20')); + } + } +} + +function joinUrl (base, path) { + if (path.substring(0, 1) === "/") { + // path starts with `/`. Thus it is absolute. + return path; + } + if (base.substring(base.length-1) === "/") { + // base ends with `/` + return base + path; + } + return base + "/" + path; +} + +function escapeHtml (value) { + return value.replace(/&/g, '&') + .replace(/"/g, '"') + .replace(//g, '>'); +} + +function formatResult (location, title, summary) { + return ''; +} + +function displayResults (results) { + var search_results = document.getElementById("mkdocs-search-results"); + while (search_results.firstChild) { + search_results.removeChild(search_results.firstChild); + } + if (results.length > 0){ + for (var i=0; i < results.length; i++){ + var result = results[i]; + var html = formatResult(result.location, result.title, result.summary); + search_results.insertAdjacentHTML('beforeend', html); + } + } else { + var noResultsText = search_results.getAttribute('data-no-results-text'); + if (!noResultsText) { + noResultsText = "No results found"; + } + search_results.insertAdjacentHTML('beforeend', '
<p>' + noResultsText + '</p>
'); + } +} + +function doSearch () { + var query = document.getElementById('mkdocs-search-query').value; + if (query.length > min_search_length) { + if (!window.Worker) { + displayResults(search(query)); + } else { + searchWorker.postMessage({query: query}); + } + } else { + // Clear results for short queries + displayResults([]); + } +} + +function initSearch () { + var search_input = document.getElementById('mkdocs-search-query'); + if (search_input) { + search_input.addEventListener("keyup", doSearch); + } + var term = getSearchTermFromLocation(); + if (term) { + search_input.value = term; + doSearch(); + } +} + +function onWorkerMessage (e) { + if (e.data.allowSearch) { + initSearch(); + } else if (e.data.results) { + var results = e.data.results; + displayResults(results); + } else if (e.data.config) { + min_search_length = e.data.config.min_search_length-1; + } +} + +if (!window.Worker) { + console.log('Web Worker API not supported'); + // load index in main thread + $.getScript(joinUrl(base_url, "search/worker.js")).done(function () { + console.log('Loaded worker'); + init(); + window.postMessage = function (msg) { + onWorkerMessage({data: msg}); + }; + }).fail(function (jqxhr, settings, exception) { + console.error('Could not load worker.js'); + }); +} else { + // Wrap search in a web worker + var searchWorker = new Worker(joinUrl(base_url, "search/worker.js")); + searchWorker.postMessage({init: true}); + searchWorker.onmessage = onWorkerMessage; +} diff --git a/site/search/search_index.json b/site/search/search_index.json new file mode 100644 index 0000000000000000000000000000000000000000..7d1b7958687fc464b16d333b203373bab996822e --- /dev/null +++ b/site/search/search_index.json @@ -0,0 +1 @@ +{"config":{"indexing":"full","lang":["zh"],"min_search_length":3,"prebuild_index":false,"separator":"[\\s\\-]+"},"docs":[{"location":"","text":"openEuler OpenStack SIG \u00b6 SIG \u5de5\u4f5c\u76ee\u6807\u548c\u8303\u56f4 \u00b6 \u5728openEuler\u4e4b\u4e0a\u63d0\u4f9b\u539f\u751f\u7684OpenStack\uff0c\u6784\u5efa\u5f00\u653e\u53ef\u9760\u7684\u4e91\u8ba1\u7b97\u6280\u672f\u6808\u3002 \u5b9a\u671f\u53ec\u5f00\u4f1a\u8bae\uff0c\u6536\u96c6\u5f00\u53d1\u8005\u3001\u5382\u5546\u8bc9\u6c42\uff0c\u8ba8\u8bbaOpenStack\u793e\u533a\u53d1\u5c55\u3002 \u7ec4\u7ec7\u4f1a\u8bae \u00b6 \u516c\u5f00\u7684\u4f1a\u8bae\u65f6\u95f4\uff1a\u6708\u5ea6\u4f8b\u4f1a\uff0c\u6bcf\u6708\u4e2d\u4e0b\u65ec\u7684\u67d0\u4e2a\u5468\u4e09\u4e0b\u53483:00-4:00(\u5317\u4eac\u65f6\u95f4) \u4f1a\u8bae\u94fe\u63a5\uff1a\u901a\u8fc7\u5fae\u4fe1\u7fa4\u6d88\u606f\u548c\u90ae\u4ef6\u5217\u8868\u53d1\u51fa \u4f1a\u8bae\u7eaa\u8981\uff1a https://etherpad.openeuler.org/p/sig-openstack-meetings OpenStack\u7248\u672c\u652f\u6301\u5217\u8868 \u00b6 OpenStack SIG\u901a\u8fc7\u7528\u6237\u53cd\u9988\u7b49\u65b9\u5f0f\u6536\u96c6OpenStack\u7248\u672c\u9700\u6c42\uff0c\u7ecf\u8fc7SIG\u7ec4\u5185\u6210\u5458\u516c\u5f00\u8ba8\u8bba\u51b3\u5b9aOpenStack\u7684\u7248\u672c\u6f14\u8fdb\u8def\u7ebf\u3002\u89c4\u5212\u4e2d\u7684\u7248\u672c\u53ef\u80fd\u56e0\u4e3a\u9700\u6c42\u66f4\u53d8\u3001\u4eba\u529b\u53d8\u52a8\u7b49\u539f\u56e0\u8fdb\u884c\u8c03\u6574\u3002OpenStack SIG\u6b22\u8fce\u66f4\u591a\u5f00\u53d1\u8005\u3001\u5382\u5546\u53c2\u4e0e\uff0c\u5171\u540c\u5b8c\u5584openEuler\u7684OpenStack\u652f\u6301\u3002 \u25cf - \u5df2\u652f\u6301 \u25cb - \u89c4\u5212\u4e2d/\u5f00\u53d1\u4e2d \u25b2 - \u90e8\u5206openEuler\u7248\u672c\u652f\u6301 Queens Rocky Train Ussuri Victoria Wallaby Xena Yoga Antelope openEuler 20.03 LTS SP1 \u25cf openEuler 
20.03 LTS SP2 \u25cf \u25cf openEuler 20.03 LTS SP3 \u25cf \u25cf \u25cf openEuler 20.03 LTS SP4 \u25cf openEuler 21.03 \u25cf openEuler 21.09 \u25cf openEuler 22.03 LTS \u25cf \u25cf openEuler 22.03 LTS SP1 \u25cf \u25cf openEuler 22.03 LTS SP2 \u25cf \u25cf openEuler 22.03 LTS SP3 \u25cf \u25cf openEuler 22.03 LTS SP4 \u25cb \u25cb openEuler 22.09 \u25cf \u25cf openEuler 24.03 LTS \u25cf \u25cf Queens Rocky Train Victoria Wallaby Yoga Antelope Keystone \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf Glance \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf Nova \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf Cinder \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf Neutron \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf Tempest \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf Horizon \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf Ironic \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf Placement \u25cf \u25cf \u25cf \u25cf \u25cf Trove \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf Kolla \u25cf \u25cf \u25cf \u25cf \u25cf \u25cf Rally \u25b2 \u25b2 Swift \u25cf \u25cf \u25cf \u25cf Heat \u25cf \u25b2 \u25cf \u25cf Ceilometer \u25cf \u25b2 \u25cf \u25cf Aodh \u25cf \u25b2 \u25cf \u25cf Cyborg \u25cf \u25b2 \u25cf \u25cf Gnocchi \u25cf \u25cf \u25cf \u25cf OpenStack-helm \u25cf \u25cf Barbican \u25b2 \u25cf Octavia \u25b2 \u25cf Designate \u25b2 \u25cf Manila \u25b2 \u25cf Masakari \u25b2 \u25cf Mistral \u25b2 \u25cf Senlin \u25b2 \u25cf Zaqar \u25b2 \u25cf Note: openEuler 20.03 LTS SP2\u4e0d\u652f\u6301Rally Heat\u3001Ceilometer\u3001Swift\u3001Aodh\u548cCyborg\u53ea\u572822.03 LTS\u4ee5\u4e0a\u7248\u672c\u652f\u6301 Barbican\u3001Octavia\u3001Designate\u3001Manila\u3001Masakari\u3001Mistral\u3001Senlin\u548cZaqar\u53ea\u572822.03 LTS SP2\u4ee5\u4e0a\u7248\u672c\u652f\u6301 oepkg\u8f6f\u4ef6\u4ed3\u5730\u5740\u5217\u8868 \u00b6 Queens\u3001Rocky\u3001Train\u7248\u672c\u7684\u652f\u6301\u653e\u5728SIG\u5b98\u65b9\u8ba4\u8bc1\u7684\u7b2c\u4e09\u65b9\u8f6f\u4ef6\u5e73\u53f0oepkg: 20.03-LTS-SP1 Train: https://repo.oepkgs.net/openEuler/rpm/openEuler-20.03-LTS-SP1/contrib/openstack/train/ \u8be5Train\u7248\u672c\u4e0d\u662f\u7eaf\u539f\u751f\u4ee3\u7801\uff0c\u5305\u542b\u4e86\u667a\u80fd\u7f51\u5361\u652f\u6301\u7684\u76f8\u5173\u4ee3\u7801\uff0c\u7528\u6237\u4f7f\u7528\u524d\u8bf7\u81ea\u884c\u8bc4\u5ba1 20.03-LTS-SP2 Rocky\uff1a https://repo.oepkgs.net/openEuler/rpm/openEuler-20.03-LTS-SP2/budding-openeuler/openstack/queens/ 20.03-LTS-SP3 Rocky\uff1a https://repo.oepkgs.net/openEuler/rpm/openEuler-20.03-LTS-SP3/budding-openeuler/openstack/rocky/ 20.03-LTS-SP2 Queens\uff1a https://repo.oepkgs.net/openEuler/rpm/openEuler-20.03-LTS-SP2/budding-openeuler/openstack/queens/ 20.03-LTS-SP3 Rocky\uff1a https://repo.oepkgs.net/openEuler/rpm/openEuler-20.03-LTS-SP3/budding-openeuler/openstack/rocky/ \u53e6\u5916\uff0c20.03-LTS-SP1\u867d\u7136\u6709Queens\u3001Rocky\u7248\u672c\u7684\u8f6f\u4ef6\u5305\uff0c\u4f46\u672a\u7ecf\u8fc7\u9a8c\u8bc1\uff0c\u8bf7\u8c28\u614e\u4f7f\u7528\uff1a 20.03-LTS-SP1 Queens: https://repo.oepkgs.net/openEuler/rpm/openEuler-20.03-LTS-SP1/contrib/openstack/queens/ 20.03-LTS-SP1 Rocky: https://repo.oepkgs.net/openEuler/rpm/openEuler-20.03-LTS-SP1/contrib/openstack/rocky/ Maintainer\u7684\u52a0\u5165\u548c\u9000\u51fa \u00b6 \u79c9\u627f\u5f00\u6e90\u5f00\u653e\u7684\u7406\u5ff5\uff0cOpenStack SIG\u5728maintainer\u6210\u5458\u7684\u7ba1\u7406\u65b9\u9762\u4e5f\u6709\u4e00\u5b9a\u7684\u89c4\u8303\u548c\u8981\u6c42\u3002 \u5982\u4f55\u6210\u4e3amaintainer \u00b6 
maintainer\u4f5c\u4e3aSIG\u7684\u76f4\u63a5\u8d1f\u8d23\u4eba\uff0c\u62e5\u6709\u4ee3\u7801\u5408\u5165\u3001\u8def\u6807\u89c4\u5212\u3001\u63d0\u540dmaintainer\u7b49\u65b9\u9762\u7684\u6743\u5229\uff0c\u540c\u65f6\u4e5f\u6709\u8f6f\u4ef6\u8d28\u91cf\u770b\u62a4\u3001\u7248\u672c\u5f00\u53d1\u7684\u4e49\u52a1\u3002\u5982\u679c\u60a8\u60f3\u6210\u4e3aOpenStack SIG\u7684\u4e00\u540dmaintainer\uff0c\u9700\u8981\u6ee1\u8db3\u4ee5\u4e0b\u51e0\u70b9\u8981\u6c42\uff1a \u6301\u7eed\u53c2\u4e0eOpenStack SIG\u5f00\u53d1\u8d21\u732e\uff0c\u4e0d\u5c0f\u4e8e\u4e00\u4e2aopenEuler release\u5468\u671f\uff08\u4e00\u822c\u4e3a3\u4e2a\u6708\uff09 \u6301\u7eed\u53c2\u4e0eOpenStack SIG\u4ee3\u7801\u68c0\u89c6\uff0creview\u6392\u540d\u5e94\u4e0d\u4f4e\u4e8eSIG\u5e73\u5747\u91cf \u5b9a\u65f6\u53c2\u52a0OpenStack SIG\u4f8b\u4f1a\uff08\u4e00\u822c\u4e3a\u53cc\u5468\u4e00\u6b21\uff09\uff0c\u4e00\u4e2aopenEuler release\u5468\u671f\u4e00\u822c\u5305\u62ec6\u6b21\u4f8b\u4f1a\uff0c\u7f3a\u5e2d\u6b21\u6570\u5e94\u4e0d\u5927\u4e8e2\u6b21 \u52a0\u5206\u9879\uff1a \u79ef\u6781\u53c2\u52a0OpenStack SIG\u7ec4\u7ec7\u7684\u5404\u79cd\u6d3b\u52a8\uff0c\u6bd4\u5982\u7ebf\u4e0a\u5206\u4eab\u3001\u7ebf\u4e0bmeetup\u6216\u5cf0\u4f1a\u7b49\u3002 \u5e2e\u52a9SIG\u6269\u5c55\u8fd0\u8425\u8303\u56f4\uff0c\u8fdb\u884c\u8054\u5408\u6280\u672f\u521b\u65b0\uff0c\u4f8b\u5982\u4e3b\u52a8\u5f00\u6e90\u65b0\u9879\u76ee\uff0c\u5438\u5f15\u65b0\u7684\u5f00\u53d1\u8005\u3001\u5382\u5546\u52a0\u5165SIG\u7b49\u3002 SIG maintainer\u6bcf\u4e2a\u5b63\u5ea6\u4f1a\u7ec4\u7ec7\u95ed\u95e8\u4f1a\u8bae\uff0c\u5ba1\u89c6\u5f53\u524d\u8d21\u732e\u6570\u636e\uff0c\u6839\u636e\u8d21\u732e\u8005\u6ee1\u8db3\u76f8\u5173\u8981\u6c42\uff0c\u7ecf\u8ba8\u8bba\u8fbe\u6210\u4e00\u81f4\u540e\u5e76\u4e14\u8d21\u732e\u8005\u613f\u610f\u62c5\u4efbmaintainer\u4e00\u804c\u65f6\uff0cSIG\u4f1a\u5411openEuler TC\u63d0\u51fa\u76f8\u5173\u7533\u8bf7 \u6d3b\u8dc3maintainer \u00b6 \u53c2\u8003 Apache\u57fa\u91d1\u4f1a \u7b49\u793e\u533a\uff0c\u7ed3\u5408SIG\u5177\u4f53\u60c5\u51b5\uff0c\u5f15\u5165\u6d3b\u8dc3maintainer\u673a\u5236\u3002 \u5bf9\u4e8e\u65e0\u6cd5\u4fdd\u6301\u957f\u671f\u9ad8\u6d3b\u8dc3\uff0c\u4f46\u613f\u610f\u7ee7\u7eed\u627f\u62c5SIG\u8d23\u4efb\u7684maintainer\uff0cmaintainer\u89d2\u8272\u4fdd\u7559\u3002 \u975e\u9ad8\u6d3b\u8dc3maintainer\u8d23\u4efb\u4e0e\u6743\u9650\uff1a \u4fdd\u6301SIG\u52a8\u6001\u8ddf\u8fdb\uff0c\u53c2\u4e0eSIG\u91cd\u5927\u4e8b\u52a1\u3002 \u53c2\u4e0eSIG\u51b3\u7b56\u3002\u6d3b\u8dc3maintainer\u5bf9SIG\u4e8b\u52a1\u51b3\u7b56\u5177\u5907\u66f4\u9ad8\u6743\u91cd\uff0c\u610f\u89c1\u76f8\u5de6\u65f6\u4ee5\u6d3b\u8dc3\u8005\u4e3a\u51c6\u3002 \u4e0d\u5177\u5907\u63d0\u540d\u6743\u9650\u3002 \u6d3b\u8dc3maintainer\u5728SIG\u4e3b\u9875\u5217\u8868\u4e2d\u88ab\u5217\u51fa\u3002 \u5f53SIG maintainer\u56e0\u4e3a\u81ea\u8eab\u539f\u56e0\uff0c\u65e0\u6cd5\u4fdd\u6301\u957f\u671f\u9ad8\u6d3b\u8dc3\u65f6\uff0c\u53ef\u4e3b\u52a8\u7533\u8bf7\u9000\u51fa\u9ad8\u6d3b\u8dc3\u72b6\u6001\u3002SIG maintainer\u6bcf\u534a\u5e74\u4f8b\u884c\u5ba1\u89c6\u5f53\u524dmaintainer\u5217\u8868\uff0c\u66f4\u65b0\u6d3b\u8dc3\u5217\u8868\u3002 maintainer\u7684\u9000\u51fa \u00b6 \u5f53SIG maintainer\u56e0\u4e3a\u81ea\u8eab\u539f\u56e0\uff08\u5de5\u4f5c\u53d8\u52a8\u3001\u4e1a\u52a1\u8c03\u6574\u7b49\u539f\u56e0\uff09\uff0c\u65e0\u6cd5\u518d\u62c5\u4efbmaintainer\u4e00\u804c\u65f6\uff0c\u53ef\u4e3b\u52a8\u7533\u8bf7\u9000\u51fa\u3002 SIG 
maintainer\u6bcf\u5e74\u4e5f\u4f1a\u4f8b\u884c\u5ba1\u89c6\u5f53\u524dmaintainer\u5217\u8868\uff0c\u5982\u679c\u53d1\u73b0\u6709\u4e0d\u518d\u9002\u5408\u62c5\u4efbmaintainer\u7684\u8d21\u732e\u8005\uff08\u65e0\u6cd5\u4fdd\u969c\u53c2\u4e0e\u7b49\u539f\u56e0\uff09\uff0c\u7ecf\u8ba8\u8bba\u8fbe\u6210\u4e00\u81f4\u540e\uff0c\u4f1a\u5411openEuler TC\u63d0\u51fa\u76f8\u5173\u7533\u8bf7\u3002 \u6d3b\u8dc3Maintainer \u00b6 \u59d3\u540d Gitee ID \u90ae\u7bb1 \u516c\u53f8 \u90d1\u633a tzing_t zhengting13@huawei.com \u534e\u4e3a \u738b\u4e1c\u5174 desert-sailor dongxing.wang_a@thundersoft.com \u521b\u8fbe\u5965\u601d\u7ef4 \u738b\u9759 Accessac wangjing@uniontech.com \u7edf\u4fe1\u8f6f\u4ef6 Maintainer/Committer\u5217\u8868 \u00b6 \u59d3\u540d Gitee ID \u90ae\u7bb1 \u516c\u53f8 \u9648\u7855 joec88 joseph.chn1988@gmail.com \u4e2d\u56fd\u8054\u901a \u674e\u6606\u5c71 liksh li_kunshan@163.com \u4e2d\u56fd\u8054\u901a \u9ec4\u586b\u534e huangtianhua huangtianhua223@gmail.com \u534e\u4e3a \u738b\u73ba\u6e90 xiyuanwang wangxiyuan1007@gmail.com \u534e\u4e3a \u5f20\u5e06 zh-f zh.f@outlook.com \u4e2d\u56fd\u7535\u4fe1 \u5f20\u8fce zhangy1317 zhangy1317@foxmail.com \u4e2d\u56fd\u8054\u901a \u97e9\u5149\u5b87 han-guangyu hanguangyu@uniontech.com \u7edf\u4fe1\u8f6f\u4ef6 \u738b\u4e1c\u5174 desert-sailor dongxing.wang_a@thundersoft.com \u521b\u8fbe\u5965\u601d\u7ef4 \u90d1\u633a tzing_t zhengting13@huawei.com \u534e\u4e3a \u738b\u9759 Accessac wangjing@uniontech.com \u7edf\u4fe1\u8f6f\u4ef6 \u5982\u4f55\u8d21\u732e \u00b6 OpenStack SIG\u79c9\u627fOpenStack\u793e\u533a4\u4e2aOpen\u539f\u5219\uff08Open source\u3001Open Design\u3001Open Development\u3001Open Community\uff09\uff0c\u6b22\u8fce\u5f00\u53d1\u8005\u3001\u7528\u6237\u3001\u5382\u5546\u4ee5\u5404\u79cd\u5f00\u6e90\u65b9\u5f0f\u53c2\u4e0eSIG\u8d21\u732e\uff0c\u5305\u62ec\u4f46\u4e0d\u9650\u4e8e\uff1a \u63d0\u4ea4Issue \u5982\u679c\u60a8\u5728\u4f7f\u7528OpenStack\u65f6\u9047\u5230\u4e86\u4efb\u4f55\u95ee\u9898\uff0c\u53ef\u4ee5\u5411SIG\u63d0\u4ea4ISSUE\uff0c\u5305\u62ec\u4e0d\u9650\u4e8e\u4f7f\u7528\u7591\u95ee\u3001\u8f6f\u4ef6\u5305BUG\u3001\u7279\u6027\u9700\u6c42\u7b49\u7b49\u3002 \u53c2\u4e0e\u6280\u672f\u8ba8\u8bba \u901a\u8fc7\u90ae\u4ef6\u5217\u8868\u3001\u5fae\u4fe1\u7fa4\u3001\u5728\u7ebf\u4f8b\u4f1a\u7b49\u65b9\u5f0f\uff0c\u4e0eSIG\u6210\u5458\u5b9e\u65f6\u8ba8\u8bbaOpenStack\u6280\u672f\u3002 \u53c2\u4e0eSIG\u7684\u8f6f\u4ef6\u5f00\u53d1\u6d4b\u8bd5\u5de5\u4f5c OpenStack SIG\u8ddf\u968fopenEuler\u7248\u672c\u5f00\u53d1\u7684\u8282\u594f\uff0c\u6bcf\u51e0\u4e2a\u6708\u5bf9\u5916\u53d1\u5e03\u4e0d\u540c\u7248\u672c\u7684OpenStack\uff0c\u6bcf\u4e2a\u7248\u672c\u5305\u542b\u4e86\u51e0\u767e\u4e2aRPM\u8f6f\u4ef6\u5305\uff0c\u5f00\u53d1\u8005\u53ef\u4ee5\u53c2\u4e0e\u5230\u8fd9\u4e9bRPM\u5305\u7684\u5f00\u53d1\u5de5\u4f5c\u4e2d\u3002 OpenStack SIG\u5305\u62ec\u4e00\u4e9b\u6765\u81ea\u5382\u5546\u6350\u732e\u3001\u81ea\u4e3b\u7814\u53d1\u7684\u9879\u76ee\uff0c\u5f00\u53d1\u8005\u53ef\u4ee5\u53c2\u4e0e\u76f8\u5173\u9879\u76ee\u7684\u5f00\u53d1\u5de5\u4f5c\u3002 openEuler\u65b0\u7248\u672c\u53d1\u5e03\u540e\uff0c\u7528\u6237\u53ef\u4ee5\u6d4b\u8bd5\u8bd5\u7528\u5bf9\u5e94\u7684OpenStack\uff0c\u76f8\u5173BUG\u548c\u95ee\u9898\u53ef\u4ee5\u63d0\u4ea4\u5230SIG\u3002 OpenStack SIG\u8fd8\u63d0\u4f9b\u4e86\u4e00\u7cfb\u5217\u63d0\u9ad8\u5f00\u53d1\u6548\u7387\u7684\u5de5\u5177\u548c\u6587\u6863\uff0c\u7528\u6237\u53ef\u4ee5\u5e2e\u5fd9\u4f18\u5316\u3001\u5b8c\u5584\u3002 \u6280\u672f\u9884\u8a00\u3001\u8054\u5408\u521b\u65b0 OpenStack 
SIG\u6b22\u8fce\u5404\u79cd\u5f62\u5f0f\u7684\u8054\u5408\u521b\u65b0\uff0c\u9080\u8bf7\u5404\u4f4d\u5f00\u53d1\u8005\u4ee5\u5f00\u6e90\u7684\u65b9\u5f0f\u3001\u4ee5SIG\u4e3a\u5e73\u53f0\uff0c\u521b\u9020\u5c5e\u4e8e\u56fd\u4eba\u7684\u4e91\u8ba1\u7b97\u65b0\u6280\u672f\u3002\u5982\u679c\u60a8\u6709idea\u6216\u5f00\u53d1\u610f\u613f\uff0c\u6b22\u8fce\u52a0\u5165SIG\u3002 \u5f53\u7136\uff0c\u8d21\u732e\u5f62\u5f0f\u4e0d\u4ec5\u5305\u542b\u8fd9\u4e9b\uff0c\u5176\u4ed6\u4efb\u4f55\u4e0eOpenStack\u76f8\u5173\u3001\u4e0e\u5f00\u6e90\u76f8\u5173\u7684\u4e8b\u52a1\u90fd\u53ef\u4ee5\u5e26\u5230SIG\u4e2d\u3002OpenStack SIG\u6b22\u8fce\u60a8\u7684\u53c2\u4e0e\u3002 \u9879\u76ee\u6e05\u5355 \u00b6 SIG\u5305\u542b\u7684\u5168\u90e8\u9879\u76ee\uff1a https://gitee.com/openeuler/openstack/blob/master/tools/oos/etc/openeuler_sig_repo.yaml OpenStack\u5305\u542b\u9879\u76ee\u4f17\u591a\uff0c\u4e3a\u4e86\u65b9\u4fbf\u7ba1\u7406\uff0c\u8bbe\u7f6e\u4e86\u7edf\u4e00\u5165\u53e3\u9879\u76ee\uff0c\u7528\u6237\u3001\u5f00\u53d1\u8005\u5bf9OpenStack SIG\u4ee5\u53ca\u5404OpenStack\u5b50\u9879\u76ee\u6709\u4efb\u4f55\u95ee\u9898\uff0c\u53ef\u4ee5\u5728\u8be5\u9879\u76ee\u4e2d\u63d0\u4ea4Issue\u3002 https://gitee.com/openeuler/openstack SIG\u540c\u65f6\u8054\u5408\u5404\u5927\u5382\u5546\u3001\u5f00\u53d1\u8005\uff0c\u521b\u5efa\u4e86\u4e00\u7cfb\u5217\u81ea\u7814\u9879\u76ee\uff1a https://gitee.com/openeuler/openstack-kolla-ansible-plugin https://gitee.com/openeuler/openstack-kolla-plugin https://gitee.com/openeuler/openstack-plugin https://gitee.com/openeuler/hostha https://gitee.com/openeuler/opensd \u4ea4\u6d41\u7fa4 \u00b6 \u6dfb\u52a0\u5c0f\u52a9\u624b\u56de\u590d\"\u52a0\u7fa4\"\u8fdb\u5165openEuler sig-OpenStack\u4ea4\u6d41\u7fa4","title":"OpenStack SIG"},{"location":"#openeuler-openstack-sig","text":"","title":"openEuler OpenStack SIG"},{"location":"#sig","text":"\u5728openEuler\u4e4b\u4e0a\u63d0\u4f9b\u539f\u751f\u7684OpenStack\uff0c\u6784\u5efa\u5f00\u653e\u53ef\u9760\u7684\u4e91\u8ba1\u7b97\u6280\u672f\u6808\u3002 \u5b9a\u671f\u53ec\u5f00\u4f1a\u8bae\uff0c\u6536\u96c6\u5f00\u53d1\u8005\u3001\u5382\u5546\u8bc9\u6c42\uff0c\u8ba8\u8bbaOpenStack\u793e\u533a\u53d1\u5c55\u3002","title":"SIG \u5de5\u4f5c\u76ee\u6807\u548c\u8303\u56f4"},{"location":"#_1","text":"\u516c\u5f00\u7684\u4f1a\u8bae\u65f6\u95f4\uff1a\u6708\u5ea6\u4f8b\u4f1a\uff0c\u6bcf\u6708\u4e2d\u4e0b\u65ec\u7684\u67d0\u4e2a\u5468\u4e09\u4e0b\u53483:00-4:00(\u5317\u4eac\u65f6\u95f4) \u4f1a\u8bae\u94fe\u63a5\uff1a\u901a\u8fc7\u5fae\u4fe1\u7fa4\u6d88\u606f\u548c\u90ae\u4ef6\u5217\u8868\u53d1\u51fa \u4f1a\u8bae\u7eaa\u8981\uff1a https://etherpad.openeuler.org/p/sig-openstack-meetings","title":"\u7ec4\u7ec7\u4f1a\u8bae"},{"location":"#openstack","text":"OpenStack SIG\u901a\u8fc7\u7528\u6237\u53cd\u9988\u7b49\u65b9\u5f0f\u6536\u96c6OpenStack\u7248\u672c\u9700\u6c42\uff0c\u7ecf\u8fc7SIG\u7ec4\u5185\u6210\u5458\u516c\u5f00\u8ba8\u8bba\u51b3\u5b9aOpenStack\u7684\u7248\u672c\u6f14\u8fdb\u8def\u7ebf\u3002\u89c4\u5212\u4e2d\u7684\u7248\u672c\u53ef\u80fd\u56e0\u4e3a\u9700\u6c42\u66f4\u53d8\u3001\u4eba\u529b\u53d8\u52a8\u7b49\u539f\u56e0\u8fdb\u884c\u8c03\u6574\u3002OpenStack SIG\u6b22\u8fce\u66f4\u591a\u5f00\u53d1\u8005\u3001\u5382\u5546\u53c2\u4e0e\uff0c\u5171\u540c\u5b8c\u5584openEuler\u7684OpenStack\u652f\u6301\u3002 \u25cf - \u5df2\u652f\u6301 \u25cb - \u89c4\u5212\u4e2d/\u5f00\u53d1\u4e2d \u25b2 - \u90e8\u5206openEuler\u7248\u672c\u652f\u6301 Queens Rocky Train Ussuri Victoria Wallaby Xena Yoga Antelope openEuler 20.03 LTS SP1 \u25cf openEuler 20.03 LTS SP2 
- openEuler 20.03 LTS SP1: ●
- openEuler 20.03 LTS SP2: ● ●
- openEuler 20.03 LTS SP3: ● ● ●
- openEuler 20.03 LTS SP4: ●
- openEuler 21.03: ●
- openEuler 21.09: ●
- openEuler 22.03 LTS: ● ●
- openEuler 22.03 LTS SP1: ● ●
- openEuler 22.03 LTS SP2: ● ●
- openEuler 22.03 LTS SP3: ● ●
- openEuler 22.03 LTS SP4: ○ ○
- openEuler 22.09: ● ●
- openEuler 24.03 LTS: ● ●

Component support across the packaged releases (Queens, Rocky, Train, Victoria, Wallaby, Yoga, Antelope):

- Keystone: ● ● ● ● ● ● ●
- Glance: ● ● ● ● ● ● ●
- Nova: ● ● ● ● ● ● ●
- Cinder: ● ● ● ● ● ● ●
- Neutron: ● ● ● ● ● ● ●
- Tempest: ● ● ● ● ● ● ●
- Horizon: ● ● ● ● ● ● ●
- Ironic: ● ● ● ● ● ● ●
- Placement: ● ● ● ● ●
- Trove: ● ● ● ● ● ●
- Kolla: ● ● ● ● ● ●
- Rally: ▲ ▲
- Swift: ● ● ● ●
- Heat: ● ▲ ● ●
- Ceilometer: ● ▲ ● ●
- Aodh: ● ▲ ● ●
- Cyborg: ● ▲ ● ●
- Gnocchi: ● ● ● ●
- OpenStack-helm: ● ●
- Barbican: ▲ ●
- Octavia: ▲ ●
- Designate: ▲ ●
- Manila: ▲ ●
- Masakari: ▲ ●
- Mistral: ▲ ●
- Senlin: ▲ ●
- Zaqar: ▲ ●

Notes:

- openEuler 20.03 LTS SP2 does not support Rally.
- Heat, Ceilometer, Swift, Aodh and Cyborg are only supported on 22.03 LTS and later.
- Barbican, Octavia, Designate, Manila, Masakari, Mistral, Senlin and Zaqar are only supported on 22.03 LTS SP2 and later.

### oepkg Repository List

Support for the Queens, Rocky and Train releases is provided through oepkg, a third-party software platform officially recognized by the SIG:

- 20.03-LTS-SP1 Train: https://repo.oepkgs.net/openEuler/rpm/openEuler-20.03-LTS-SP1/contrib/openstack/train/
  (this Train build is not pure upstream code; it also carries SmartNIC support patches, so please review it yourself before use)
- 20.03-LTS-SP2 Queens: https://repo.oepkgs.net/openEuler/rpm/openEuler-20.03-LTS-SP2/budding-openeuler/openstack/queens/
- 20.03-LTS-SP2 Rocky: https://repo.oepkgs.net/openEuler/rpm/openEuler-20.03-LTS-SP2/budding-openeuler/openstack/rocky/
- 20.03-LTS-SP3 Rocky: https://repo.oepkgs.net/openEuler/rpm/openEuler-20.03-LTS-SP3/budding-openeuler/openstack/rocky/

In addition, 20.03-LTS-SP1 also carries Queens and Rocky packages, but they have not been verified, so use them with caution:

- 20.03-LTS-SP1 Queens: https://repo.oepkgs.net/openEuler/rpm/openEuler-20.03-LTS-SP1/contrib/openstack/queens/
- 20.03-LTS-SP1 Rocky: https://repo.oepkgs.net/openEuler/rpm/openEuler-20.03-LTS-SP1/contrib/openstack/rocky/
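As a quick illustration (not part of the original list), one of these repositories can be enabled with an ordinary dnf `.repo` file. The repo id and file name below are arbitrary choices, and `gpgcheck=0` simply follows the convention of the other oepkg repo snippets in this document:

```bash
cat << 'EOF' > /etc/yum.repos.d/openstack-train-oepkg.repo
[openstack-train-oepkg]
name=OpenStack Train (oepkg, openEuler 20.03-LTS-SP1)
baseurl=https://repo.oepkgs.net/openEuler/rpm/openEuler-20.03-LTS-SP1/contrib/openstack/train/
enabled=1
gpgcheck=0
EOF
dnf clean all && dnf makecache
```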
### Joining and Leaving as a Maintainer

Following the principle of openness, the OpenStack SIG also has rules and requirements for managing its maintainers.

#### How to become a maintainer

As the people directly responsible for the SIG, maintainers have the right to merge code, plan the roadmap and nominate new maintainers, and they also carry the duty of guarding software quality and driving release development. If you want to become an OpenStack SIG maintainer, you need to meet the following requirements:

- Contribute to OpenStack SIG development continuously for at least one openEuler release cycle (usually 3 months).
- Take part in OpenStack SIG code review continuously; your review ranking should not be below the SIG average.
- Attend the OpenStack SIG regular meetings (usually bi-weekly); one openEuler release cycle normally includes 6 meetings, and you should miss no more than 2 of them.

Bonus points:

- Actively take part in activities organized by the OpenStack SIG, such as online sharing sessions, offline meetups or summits.
- Help the SIG extend its reach and carry out joint technical innovation, for example by open-sourcing new projects or attracting new developers and vendors to the SIG.

The SIG maintainers hold a closed-door meeting every quarter to review the current contribution data. When a contributor meets the requirements, consensus is reached through discussion, and the contributor is willing to take the maintainer role, the SIG submits the corresponding application to the openEuler TC.

#### Active maintainers

Referring to communities such as the Apache Software Foundation, and considering the SIG's own situation, an active-maintainer mechanism is introduced. Maintainers who cannot stay highly active over a long period but are still willing to carry SIG responsibilities keep the maintainer role.

Responsibilities and rights of non-highly-active maintainers:

- Keep up with SIG developments and take part in major SIG affairs.
- Take part in SIG decisions. Active maintainers carry a higher weight in SIG decisions; when opinions differ, the active maintainers' view prevails.
- They do not hold nomination rights.

Active maintainers are listed on the SIG home page. When a SIG maintainer cannot stay highly active for personal reasons, they can proactively apply to leave the highly active state. The SIG maintainers review the current maintainer list every half year and update the active list.

#### Maintainer retirement

When a SIG maintainer can no longer serve in the role for personal reasons (job change, business adjustment, and so on), they can proactively apply to retire. The SIG maintainers also review the current maintainer list every year; if a contributor is found to be no longer suitable as a maintainer (for example, participation can no longer be guaranteed), the corresponding application is submitted to the openEuler TC after consensus is reached through discussion.

#### Active Maintainers

| Name | Gitee ID | Email | Company |
|------|----------|-------|---------|
| Zheng Ting | tzing_t | zhengting13@huawei.com | Huawei |
| Wang Dongxing | desert-sailor | dongxing.wang_a@thundersoft.com | ThunderSoft |
| Wang Jing | Accessac | wangjing@uniontech.com | UnionTech Software |

#### Maintainer/Committer List

| Name | Gitee ID | Email | Company |
|------|----------|-------|---------|
| Chen Shuo | joec88 | joseph.chn1988@gmail.com | China Unicom |
| Li Kunshan | liksh | li_kunshan@163.com | China Unicom |
| Huang Tianhua | huangtianhua | huangtianhua223@gmail.com | Huawei |
| Wang Xiyuan | xiyuanwang | wangxiyuan1007@gmail.com | Huawei |
| Zhang Fan | zh-f | zh.f@outlook.com | China Telecom |
| Zhang Ying | zhangy1317 | zhangy1317@foxmail.com | China Unicom |
| Han Guangyu | han-guangyu | hanguangyu@uniontech.com | UnionTech Software |
| Wang Dongxing | desert-sailor | dongxing.wang_a@thundersoft.com | ThunderSoft |
| Zheng Ting | tzing_t | zhengting13@huawei.com | Huawei |
| Wang Jing | Accessac | wangjing@uniontech.com | UnionTech Software |

### How to Contribute

The OpenStack SIG follows the four "Open" principles of the OpenStack community (Open Source, Open Design, Open Development, Open Community) and welcomes developers, users and vendors to contribute to the SIG in any open-source way, including but not limited to:

- Submit issues. If you run into any problem while using OpenStack, you can file an issue with the SIG, including but not limited to usage questions, package bugs and feature requests.
- Join technical discussions. Discuss OpenStack technology with SIG members through the mailing list, the WeChat group and the online regular meetings.
- Take part in the SIG's software development and testing. The OpenStack SIG follows the openEuler release cadence and publishes different OpenStack releases every few months; each release contains several hundred RPM packages, and developers can take part in developing them. The SIG also hosts projects donated by vendors or developed by the SIG itself, and developers can join their development. After a new openEuler version is released, users can test the corresponding OpenStack and report bugs and problems to the SIG. The SIG also provides a series of tools and documents that improve development efficiency, which users can help optimize and refine.
- Technology exploration and joint innovation. The OpenStack SIG welcomes joint innovation in any form and invites developers, using the SIG as an open platform, to create new home-grown cloud computing technology in the open-source way. If you have an idea or the will to develop, you are welcome to join the SIG.

Of course, contributions are not limited to the forms above; anything related to OpenStack and open source can be brought to the SIG. The OpenStack SIG welcomes your participation.
## SIG RPM Packaging Workflow

One long-term development task of the OpenStack SIG is building and maintaining the RPM packages of the various OpenStack releases. To help developers who newly join the SIG understand the packaging workflow faster, this document walks through the SIG packaging process for reference.

### Excel Spreadsheet Explanation

When packaging, the SIG collects the packages that need handling in a shared spreadsheet so that developers can work on them collaboratively. The current spreadsheet format is:

| Project Name | openEuler Repo | SIG | Repo version | Required (Min) Version | lt Version | ne Version | Upper Version | Status | Requires | Depth | Author | PR link | PR status |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| pyrsistent | python-pyrsistent | sig-python-modules | 0.18.0 | 0.18.1 | | [] | 0.18.1 | Need Upgrade | [] | 13 | ... | ... | ... |
The "Project Name" column is the project name. The "openEuler Repo" column is the project's repository name on openEuler's Gitee, which is also the package name in the openEuler system; all openEuler package repositories live under https://gitee.com/src-openeuler. The "SIG" column records which SIG the package belongs to.

When handling a package, first check the "Status" column, which shows the package state. There are six states, and the developer handles the package accordingly:

- "OK": the current version can be used directly; nothing to do.
- "Need Create Repo": the package does not exist in openEuler yet, so a new repository has to be created under the src-openeuler organization on Gitee. The process follows the community guide on adding a new package. After creating and initializing the repository, add the package to the required OBS project.
- "Need Create Branch": the repository exists but lacks the required branch; the developer needs to create and initialize it.
- "Need Init Branch": the branch needs to be initialized and the package added to the required OBS project. The branch exists but contains no source tarball of any version, so the developer must initialize it by uploading the required source tarball, spec file and so on. Taking the 22.09 development cycle (which adapts the Yoga release) as an example, this work is done directly on the master branch: the project get_gitee_project_version had status "Need Init Branch", and the master branch of its corresponding repository python-neutron-tempest-plugin contained only README.md and README.en.md before handling, so the developer had to initialize that branch.
- "Need Downgrade": downgrade the package. Handle these cases last, and only after confirming with the SIG.
- "Need Upgrade": upgrade the package.

After determining how a package must be handled, work according to the version information. The "Repo version" column is the package version currently on the corresponding branch of the repository. "Required (Min) Version" is the minimum required version; if it is followed by "(Must)", exactly that version must be used. "Upper Version" is the highest usable version. If "Required (Min) Version" and "Upper Version" differ, prefer "Required (Min) Version"; for example, when upgrading, upgrade to "Required (Min) Version" first.

The "Requires" column lists the package's dependencies. The "Depth" column is the dependency depth: packages with Depth 1 are dependencies of packages with Depth 0, and so on, so packages with a higher Depth are dependencies of packages with a lower Depth. Rows with a higher Depth should be handled first, but a package with no dependencies ("Requires" is []) can also be handled directly. If certain packages need to be handled with priority, handle their "Requires" entries first.

When taking a package, first write your name in the "Author" column so that other developers know it is already being handled. After the pull request is submitted, paste its link into the "PR link" column. After the PR is merged, mark "Done" in the "PR status" column.
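For illustration only (the sheet itself is maintained online; the file name and column order below are assumptions), a plain CSV export of the spreadsheet can be ordered by the Depth column so that the deepest dependencies are picked up first:

```bash
# Assumes packages.csv is a CSV export with Depth as the 11th column
# and no embedded commas in the fields before it.
sort -t',' -k11,11nr packages.csv | cut -d',' -f1,9,11 | head
```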
### SIG Packaging Workflow

At present the SIG handles packaging mainly with oos, a tool written by the SIG itself; see the oos README for details. The "upgrade", "initialize branch" and "add the package to an OBS project" operations involved in the different "Status" values all have corresponding oos implementations.

The following uses the Yoga-cycle upgrade of the python-pyrsistent package as an example to demonstrate the packaging workflow and help developers get familiar with the OpenStack SIG's oos-based packaging process. After learning the basic flow, developers can find the remaining operations in the oos README. The python-pyrsistent package information is shown in the table above: the package needs to be upgraded from 0.18.0 to 0.18.1. Yoga is planned for the openEuler 22.09 release; as of May 2022 the change can simply be submitted to the master branch.

#### Sign the CLA

Contributing to the openEuler community requires signing the CLA. Developers new to the openEuler community can first read the openEuler contribution guide for an overview of the overall contribution process.

#### Prepare the environment

```bash
dnf install rpm-build rpmdevtools git
# Create the ~/rpmbuild directory; this is also the default oos working path
rpmdev-setuptree
pip install openstack-sig-tool==1.0.6
```

Note: openstack-sig-tool refactored the `oos spec` command in version 1.1.0. The `oos spec` operations in the following steps correspond to version 1.0.6. It is recommended to install the newer oos and use it according to the corresponding README.

#### Generate a personal access token (pat) for your Gitee account

First open the "Settings" page of your Gitee account. Choose "私人令牌" (personal tokens), then click "生成新令牌" (generate new token). Save the generated personal access token (pat) somewhere safe: it cannot be viewed again on Gitee, and if it is lost a new one has to be generated.

#### Generate and submit the spec for python-pyrsistent

```bash
export GITEE_PAT=   # your Gitee personal access token
oos spec push --name python-pyrsistent --version 0.18.1 -dp

# -dp, --do-push  [optional] push to the Gitee repository and open a PR;
#                 without it, the change is only committed to the local repository
```

Note that the `--name` argument here is the value of the "Project Name" column in the spreadsheet.

The `oos spec push` command automatically performs the following steps:

1. Fork the repository matching `--name` into the Gitee account matching the pat.
2. Clone the repository locally; the default path is ~/rpmbuild/src-repos.
3. Download the source tarball according to `--name` and `--version` and generate the spec file (reusing the changelog already present in the repository). The default path for this stage is ~/rpmbuild.
4. Run the RPM build locally. If the local build passes, the spec file and source tarball are committed to the git repository; with `-dp` the push and PR creation are performed automatically. If the local build fails, the process stops.

If the local build fails, you can edit the generated spec file and then run:

```bash
oos spec push --name python-pyrsistent --version 0.18.1 -dp -rs

# -rs, --reuse-spec  [optional] reuse the existing spec file instead of regenerating it
```

Repeat this loop until the upload succeeds.

Note 1: when upgrading, generate the spec with `oos spec push`, not `oos spec build`: the push command keeps the changelog of the spec already in the repository, while the build command generates a brand-new changelog.

Note 2: when fixing errors, the spec file already in the repository is a useful reference. Apart from the changelog, the current spec is regenerated by oos, so errors that earlier contributors hit may show up here again; their earlier fixes are worth consulting.

Note 3: the oos command also supports batch processing; see the oos README and try it yourself.
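As an optional sanity check after a successful local build (not a step required by the workflow above), the generated artifacts can be inspected under the default oos working directory, which follows the standard rpmdevtools layout:

```bash
# Generated spec and built source/binary RPMs for the example package
find ~/rpmbuild/SPECS ~/rpmbuild/SRPMS ~/rpmbuild/RPMS -name '*pyrsistent*'
```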
#### PR gate checks

At this point the forked repository is visible in your own Gitee account. Open the repository under your account and follow the link back to the original repository, where the automatically submitted PR can be found, together with the comments from openeuler-ci-bot.

For code hosted by openEuler on Gitee, submitting a PR automatically triggers the CI gate. A change that builds locally may still fail to build in the gate checks. When a submission fails to build, open the build details of the corresponding architecture from the PR page. Based on the error messages in the build-details log, fix the local spec and run again:

```bash
oos spec push --name python-pyrsistent --version 0.18.1 -dp -rs
```

The online tests then re-run automatically. For details about the gate and the meaning of each result, see the community's CI gate user guide (《门禁功能指导手册》).

#### PR review

After a PR passes the gate checks, it has to be reviewed by a maintainer of the SIG that owns the repository. To speed things up, once the gate passes you can manually @ the corresponding maintainers and ask for a review. After the PR is submitted, openeuler-ci-bot leaves a comment in which the @-mentioned users are the maintainers of the SIG that owns the current repository.

#### Notes

Some special problems you may run into are recorded here.

##### Tests not actually executed

In the spec files generated automatically by oos, the %check section defaults to `%{__python3} setup.py test`. In some packages this does not actually run the tests, yet the gate still reports success, so developers need to spot this manually. Suggested approach:

- If a spec file already existed before, check how its %check section was written; if it was not `%{__python3} setup.py test`, pay special attention.
- Open the gate's build details (see the "PR gate checks" section above) and check the %check part of the build log, for example by viewing the log as plain text. If the number of tests actually run is shown as 0, the tests were not executed.
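A hypothetical fix, assuming the package's tests can be run with pytest and that `python3-pytest` (plus any test-only dependencies) is added to the BuildRequires, is to call the test runner explicitly in %check instead of relying on `setup.py test`:

```
# Sketch only: adjust the runner and options to whatever the package's test suite expects.
BuildRequires:  python3-pytest

%check
%{__python3} -m pytest
```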
##### Package name mismatches

For a small number of packages, the package name used in the spec generated by oos does not match the existing package name, for example one uses a hyphen `-` while the other uses an underscore `_`. In this case the originally used package name is authoritative; do not rename the existing package. As a temporary workaround, the developer can manually change the relevant places in the spec file back to the original package name. At the same time, oos has a mapping-correction facility: developers can file an issue and the SIG will fix the mapping in oos.
## Installing OpenStack with Devstack

The upstream OpenStack Devstack project now supports installing OpenStack on openEuler. openEuler 20.03 LTS SP2 has been verified and is covered by official upstream CI; other openEuler versions need to be tested by users themselves (the openEuler master branch was verified on 2022-04-25).

### Installation steps

1. Prepare an openEuler environment (virtual machine images are available for 20.03 LTS SP2 and for master).

2. Configure the yum repositories.

openEuler 20.03 LTS SP2: the official openEuler repositories lack some RPMs that OpenStack needs, so first add the RPM repository prepared by the OpenStack SIG on oepkg:

```bash
vi /etc/yum.repos.d/openeuler.repo

[openstack]
name=openstack
baseurl=https://repo.oepkgs.net/openEuler/rpm/openEuler-20.03-LTS-SP2/budding-openeuler/openstack-master-ci/aarch64/
enabled=1
gpgcheck=0
```

openEuler master: use the master RPM repositories:

```bash
vi /etc/yum.repos.d/openeuler.repo

[mainline]
name=mainline
baseurl=http://119.3.219.20:82/openEuler:/Mainline/standard_aarch64/
gpgcheck=false

[epol]
name=epol
baseurl=http://119.3.219.20:82/openEuler:/Epol/standard_aarch64/
gpgcheck=false
```

3. Preliminary preparation.

openEuler 20.03 LTS SP2: in the default repo files of some official openEuler images, the EPOL-update URL may be configured incorrectly and needs to be fixed:

```bash
vi /etc/yum.repos.d/openEuler.repo
# Change the [EPOL-UPDATE] baseurl to
baseurl=http://repo.openeuler.org/openEuler-20.03-LTS-SP2/EPOL/update/main/$basearch/
```
openEuler master:

```bash
# The system pip conflicts with the pip used by devstack; remove it first
yum remove python3-pip
# The master VM image is missing some dependencies that devstack does not install automatically
yum install iptables tar wget python3-devel httpd-devel iscsi-initiator-utils libvirt python3-libvirt qemu memcached
```

4. Download devstack:

```bash
yum update
yum install git
cd /opt/
git clone https://opendev.org/openstack/devstack.git
```

5. Initialize the devstack environment:

```bash
# Create the stack user
/opt/devstack/tools/create-stack-user.sh
# Fix directory permissions
chown -R stack:stack /opt/devstack
chmod -R 755 /opt/devstack
chmod -R 755 /opt/stack
# Check out the branch of the OpenStack release you want to deploy, Yoga as an example;
# without this, the master version of OpenStack is installed
cd /opt/devstack
git checkout stable/yoga
```

6. Initialize the devstack configuration file.

Switch to the stack user with `su stack`. At this point, make sure the stack user's PATH contains `/usr/sbin`; if not, run `PATH=$PATH:/usr/sbin`.

Create the configuration file:

```bash
vi /opt/devstack/local.conf

[[local|localrc]]
DATABASE_PASSWORD=root
RABBIT_PASSWORD=root
SERVICE_PASSWORD=root
ADMIN_PASSWORD=root
OVN_BUILD_FROM_SOURCE=True
```

openEuler does not provide an OVN RPM package, so `OVN_BUILD_FROM_SOURCE=True` is required to build OVN from source.

If you are using an arm64 virtual machine, nested virtualization must also be configured for libvirt by appending the following to `local.conf`:

```ini
[[post-config|$NOVA_CONF]]
[libvirt]
cpu_mode=custom
cpu_model=cortex-a72
```

If you install Ironic, install its dependency in advance:

```bash
sudo dnf install syslinux-nonlinux
```

Special configuration for openEuler master: devstack has not yet been adapted to the newest openEuler, so a few issues have to be fixed manually.

1. Modify the devstack source: edit `/opt/devstack/tools/fixup_stuff.sh` and delete all the echo statements in the fixup_openeuler function, that is, the block:

```bash
(echo '[openstack-ci]'
echo 'name=openstack'
echo 'baseurl=https://repo.oepkgs.net/openEuler/rpm/openEuler-20.03-LTS-SP2/budding-openeuler/openstack-master-ci/'$arch'/'
echo 'enabled=1'
echo 'gpgcheck=0') | sudo tee -a /etc/yum.repos.d/openstack-master.repo > /dev/null
```
2. Modify the requirements source: setproctitle, a dependency of Yoga keystone, is pinned by devstack to a version that does not support Python 3.10 and needs to be raised. Clone the requirements project manually and edit it:

```bash
cd /opt/stack
git clone https://opendev.org/openstack/requirements --branch stable/yoga
vi /opt/stack/requirements/upper-constraints.txt
# set: setproctitle===1.2.3
```

3. OpenStack horizon has a bug and cannot be installed normally, so skip horizon for now by adding one line to `local.conf`:

```ini
[[local|localrc]]
disable_service horizon
```

If you really do need horizon, the following issues have to be solved:

```bash
# 1. pyScss, a horizon dependency, defaults to 1.3.7, which does not support Python 3.10.
#    Fix: clone the `requirements` project in advance and edit it:
vi /opt/stack/requirements/upper-constraints.txt
# set: pyScss===1.4.0

# 2. horizon depends on the httpd mod_wsgi module, but the openEuler mod_wsgi build is
#    currently broken (2022-04-25; once fixed, `yum install mod_wsgi` is enough), so it
#    cannot be installed from yum.
#    Fix: build and configure mod_wsgi from source manually; the process is involved and
#    is omitted here.
```

4. The dstat service depends on pcp-system-tools, whose build is currently broken (2022-04-25; once fixed, `yum install pcp-system-tools` is enough), so it cannot be installed from yum. Skip dstat for now:

```ini
[[local|localrc]]
disable_service dstat
```

7. Deploy OpenStack: enter the devstack directory and run `./stack.sh`, then wait for the OpenStack installation and deployment to finish.
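As a quick, optional check after `./stack.sh` finishes (assuming the deployment completed without errors), the credentials file that devstack places in its own directory can be sourced and the registered services listed:

```bash
cd /opt/devstack
source openrc admin admin
openstack service list
```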
## OpenStack-Queens Deployment Guide

Contents:

- OpenStack Introduction
- Conventions
- Preparing the Environment
- Environment Configuration
- Install the SQL Database
- Install RabbitMQ
- Install Memcached
- Install OpenStack Keystone
- Install Glance
- Install Nova
- Install Neutron
- Install Cinder
- Install horizon
- Install Tempest
- Install Ironic
- Install Kolla
- Install Trove

### OpenStack Introduction

OpenStack is both a community and a project. It provides an operating platform and a tool set for deploying clouds, giving organizations scalable and flexible cloud computing.

As an open-source cloud management platform, OpenStack combines several major components such as nova, cinder, neutron, glance, keystone and horizon to get its work done. OpenStack supports almost every type of cloud environment; the project aims to provide a cloud computing management platform that is simple to deploy, massively scalable, feature-rich and standardized. OpenStack delivers an Infrastructure-as-a-Service (IaaS) solution through a set of complementary services, each of which offers an API for integration.
# OpenStack-Queens Deployment Guide (openEuler 20.03 LTS SP2)

Contents:

- OpenStack overview
- Conventions
- Preparing the environment
  - Environment configuration
  - Installing the SQL database
  - Installing RabbitMQ
  - Installing Memcached
- Installing OpenStack
  - Keystone
  - Glance
  - Nova
  - Neutron
  - Cinder
  - horizon
  - Tempest
  - Ironic
  - Kolla
  - Trove

## OpenStack overview

OpenStack is both a community and a project. It provides an operating platform and tool set for deploying clouds, giving organizations scalable and flexible cloud computing. As an open source cloud management platform, OpenStack combines several major components (nova, cinder, neutron, glance, keystone, horizon, and so on) to do the actual work. OpenStack supports almost every type of cloud environment; the project aims to provide a cloud management platform that is simple to deploy, massively scalable, feature-rich, and standardized. OpenStack delivers an Infrastructure-as-a-Service (IaaS) solution through a set of complementary services, each of which exposes an API for integration.

The officially certified third-party oepkg yum repository for openEuler 20.03-LTS-SP2 already provides OpenStack-Queens. After configuring the oepkg yum repository, users can deploy OpenStack by following this document.

## Conventions

OpenStack supports several deployment topologies. This document covers both the All-in-One and the Distributed deployment modes, with the following conventions:

- All-in-One mode: ignore all suffixes.
- Distributed mode:
  - a `(CTL)` suffix means the configuration or command applies only to the `control node`;
  - a `(CPT)` suffix means the configuration or command applies only to the `compute node`;
  - anything else applies to both the `control node` and the `compute node`.

Note: the services affected by these conventions are Cinder, Nova, and Neutron.

## Preparing the environment

### Environment configuration

Configure the officially certified 20.03-LTS-SP2 third-party oepkg repository:

```shell
cat << EOF >> /etc/yum.repos.d/OpenStack_Queens.repo
[openstack_queens]
name=OpenStack_Queens
baseurl=https://repo.oepkgs.net/openEuler/rpm/openEuler-20.03-LTS-SP2/budding-openeuler/openstack/queens/$basearch/
gpgcheck=0
enabled=1
EOF

yum clean all && yum makecache
```

Set the host names and the host mapping. Set the host name on each node:

```shell
hostnamectl set-hostname controller    (CTL)
hostnamectl set-hostname compute       (CPT)
```

Assuming the controller node's IP is 10.0.0.11 and the compute node's IP (if it exists) is 10.0.0.12, add the following to /etc/hosts:

```
10.0.0.11 controller
10.0.0.12 compute
```

### Installing the SQL database

Install the packages:

```shell
yum install mariadb mariadb-server python2-PyMySQL
```

Create and edit /etc/my.cnf.d/openstack.cnf:

```shell
vim /etc/my.cnf.d/openstack.cnf

[mysqld]
bind-address = 10.0.0.11
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
```

Note: set `bind-address` to the management IP address of the control node.

Start the database service and enable it at boot:

```shell
systemctl enable mariadb.service
systemctl start mariadb.service
```

Configure the default database password (optional):

```shell
mysql_secure_installation
```

Note: follow the prompts.

### Installing RabbitMQ

Install the packages:

```shell
yum install rabbitmq-server
```

Start the RabbitMQ service and enable it at boot:

```shell
systemctl enable rabbitmq-server.service
systemctl start rabbitmq-server.service
```
Add the OpenStack user:

```shell
rabbitmqctl add_user openstack RABBIT_PASS
```

Note: replace RABBIT_PASS with the password you choose for the OpenStack user.

Grant the openstack user permission to configure, write, and read:

```shell
rabbitmqctl set_permissions openstack ".*" ".*" ".*"
```

### Installing Memcached

Install the dependency packages:

```shell
yum install memcached python2-memcached
```

Edit /etc/sysconfig/memcached:

```shell
vim /etc/sysconfig/memcached

OPTIONS="-l 127.0.0.1,::1,controller"
```

Start the Memcached service and enable it at boot:

```shell
systemctl enable memcached.service
systemctl start memcached.service
```

After the service starts, you can run `memcached-tool controller stats` to make sure it started correctly and is available; `controller` can be replaced with the management IP address of the control node.

## Installing OpenStack

### Keystone installation

Create the keystone database and grant privileges:

```shell
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE keystone;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
IDENTIFIED BY 'KEYSTONE_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
IDENTIFIED BY 'KEYSTONE_DBPASS';
MariaDB [(none)]> exit
```

Note: replace KEYSTONE_DBPASS with the password you choose for the Keystone database.

Install the packages:

```shell
yum install openstack-keystone httpd python2-mod_wsgi
```

Configure keystone:

```shell
vim /etc/keystone/keystone.conf

[database]
connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone

[token]
provider = fernet
```

Explanation: the [database] section configures the database entry point; the [token] section configures the token provider.

Note: replace KEYSTONE_DBPASS with the Keystone database password.

Synchronize the database:

```shell
su -s /bin/sh -c "keystone-manage db_sync" keystone
```

Initialize the Fernet key repositories:

```shell
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
```

Bootstrap the service:

```shell
keystone-manage bootstrap --bootstrap-password ADMIN_PASS \
--bootstrap-admin-url http://controller:5000/v3/ \
--bootstrap-internal-url http://controller:5000/v3/ \
--bootstrap-public-url http://controller:5000/v3/ \
--bootstrap-region-id RegionOne
```

Note: replace ADMIN_PASS with the password you choose for the admin user.

Configure the Apache HTTP server:

```shell
vim /etc/httpd/conf/httpd.conf

ServerName controller
```

```shell
ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
```

Explanation: the ServerName entry must refer to the control node.

Note: if the ServerName entry does not exist, create it.

Start the Apache HTTP service:

```shell
systemctl enable httpd.service
systemctl start httpd.service
```
Create the environment variable file:

```shell
cat << EOF >> ~/.admin-openrc
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
EOF
```

Note: replace ADMIN_PASS with the admin user's password.

Create the domain, projects, users, and roles in turn; python2-openstackclient must be installed first:

```shell
yum install python2-openstackclient
```

Import the environment variables:

```shell
source ~/.admin-openrc
```

Create the project `service`; the domain `default` was already created by `keystone-manage bootstrap`:

```shell
openstack domain create --description "An Example Domain" example
openstack project create --domain default --description "Service Project" service
```

Create a (non-admin) project `myproject`, a user `myuser`, and a role `myrole`, and add the role `myrole` to `myproject` and `myuser`:

```shell
openstack project create --domain default --description "Demo Project" myproject
openstack user create --domain default --password-prompt myuser
openstack role create myrole
openstack role add --project myproject --user myuser myrole
```

Verification. Unset the temporary OS_AUTH_URL and OS_PASSWORD environment variables:

```shell
source ~/.admin-openrc
unset OS_AUTH_URL OS_PASSWORD
```

Request a token for the admin user:

```shell
openstack --os-auth-url http://controller:5000/v3 \
--os-project-domain-name Default --os-user-domain-name Default \
--os-project-name admin --os-username admin token issue
```

Request a token for the myuser user:

```shell
openstack --os-auth-url http://controller:5000/v3 \
--os-project-domain-name Default --os-user-domain-name Default \
--os-project-name myproject --os-username myuser token issue
```

### Glance installation

Create the database, service credentials, and API endpoints.

Create the database:

```shell
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE glance;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
IDENTIFIED BY 'GLANCE_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
IDENTIFIED BY 'GLANCE_DBPASS';
MariaDB [(none)]> exit
```

Note: replace GLANCE_DBPASS with the password you choose for the glance database.

Create the service credentials:

```shell
source ~/.admin-openrc
openstack user create --domain default --password-prompt glance
openstack role add --project service --user glance admin
openstack service create --name glance --description "OpenStack Image" image
```

Create the image service API endpoints:

```shell
openstack endpoint create --region RegionOne image public http://controller:9292
openstack endpoint create --region RegionOne image internal http://controller:9292
openstack endpoint create --region RegionOne image admin http://controller:9292
```

Install the packages:

```shell
yum install openstack-glance
```

Configure glance:

```shell
vim /etc/glance/glance-api.conf

[database]
connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = GLANCE_PASS

[paste_deploy]
flavor = keystone

[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
```
```shell
vim /etc/glance/glance-registry.conf

[database]
connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = GLANCE_PASS

[paste_deploy]
flavor = keystone

[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
```

Explanation: the [database] section configures the database entry point; the [keystone_authtoken] and [paste_deploy] sections configure the identity service entry point; the [glance_store] section configures local file system storage and the location of the image files.

Note: replace GLANCE_DBPASS with the glance database password and GLANCE_PASS with the glance user's password.

Synchronize the database:

```shell
su -s /bin/sh -c "glance-manage db_sync" glance
```

Start the services:

```shell
systemctl enable openstack-glance-api.service openstack-glance-registry.service
systemctl start openstack-glance-api.service openstack-glance-registry.service
```

Verification. Download an image:

```shell
source ~/.admin-openrc
wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
```

Note: if your environment is a Kunpeng (aarch64) machine, download the arm64 version of the image instead.

Upload the image to the Image service:

```shell
openstack image create --disk-format qcow2 --container-format bare \
--file cirros-0.4.0-x86_64-disk.img --public cirros
```

Confirm the upload and check the image attributes:

```shell
openstack image list
```
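For the Kunpeng/aarch64 case mentioned above, the same verification might look like the following sketch; the aarch64 image URL is an assumption (check download.cirros-cloud.net for the file that is actually published), and the commands only illustrate the flow.

```shell
# Sketch: upload and verify a cirros image on an aarch64 host (image URL is an assumption).
source ~/.admin-openrc
wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-aarch64-disk.img
openstack image create --disk-format qcow2 --container-format bare \
  --file cirros-0.4.0-aarch64-disk.img --public cirros
# The image should report an "active" status once the upload completes.
openstack image show cirros -c status -c disk_format
```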
### Nova installation

Create the database, service credentials, and API endpoints.

Create the databases:

```shell
mysql -u root -p    (CTL)

MariaDB [(none)]> CREATE DATABASE nova_api;
MariaDB [(none)]> CREATE DATABASE nova;
MariaDB [(none)]> CREATE DATABASE nova_cell0;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> exit
```

Note: replace NOVA_DBPASS with the password you choose for the nova databases.

```shell
source ~/.admin-openrc    (CTL)
```

Create the nova service credentials:

```shell
openstack user create --domain default --password-prompt nova                       (CTL)
openstack role add --project service --user nova admin                              (CTL)
openstack service create --name nova --description "OpenStack Compute" compute      (CTL)
```

Create the placement service credentials:

```shell
openstack user create --domain default --password-prompt placement                  (CTL)
openstack role add --project service --user placement admin                         (CTL)
openstack service create --name placement --description "Placement API" placement   (CTL)
```

Create the nova API endpoints:

```shell
openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1     (CTL)
openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1   (CTL)
openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1      (CTL)
```

Create the placement API endpoints:

```shell
openstack endpoint create --region RegionOne placement public http://controller:8778        (CTL)
openstack endpoint create --region RegionOne placement internal http://controller:8778      (CTL)
openstack endpoint create --region RegionOne placement admin http://controller:8778         (CTL)
```

Install the packages:

```shell
yum install openstack-nova-api openstack-nova-conductor openstack-nova-console \
openstack-nova-novncproxy openstack-nova-scheduler openstack-nova-placement-api    (CTL)

yum install openstack-nova-compute    (CPT)
```

Note: on arm64 you also need to run:

```shell
yum install edk2-aarch64    (CPT)
```

Configure nova:

```shell
vim /etc/nova/nova.conf

[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
my_ip = 10.0.0.1
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver
compute_driver=libvirt.LibvirtDriver           (CPT)
instances_path = /var/lib/nova/instances/      (CPT)
lock_path = /var/lib/nova/tmp                  (CPT)

[api_database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api    (CTL)

[database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova        (CTL)

[api]
auth_strategy = keystone

[keystone_authtoken]
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = NOVA_PASS

[vnc]
enabled = true
server_listen = $my_ip
server_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html    (CPT)

[libvirt]
virt_type = qemu          (CPT)
cpu_mode = custom         (CPT)
cpu_model = cortex-a7     (CPT)

[glance]
api_servers = http://controller:9292

[oslo_concurrency]
lock_path = /var/lib/nova/tmp    (CTL)

[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = PLACEMENT_PASS

[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
service_metadata_proxy = true                     (CTL)
metadata_proxy_shared_secret = METADATA_SECRET    (CTL)
```

Explanation: the [DEFAULT] section enables the compute and metadata APIs, configures the RabbitMQ message queue entry point, sets my_ip, and enables the neutron network service; the [api_database] and [database] sections configure the database entry points; the [api] and [keystone_authtoken] sections configure the identity service entry point; the [vnc] section enables and configures the remote console entry point; the [glance] section configures the image service API address; the [oslo_concurrency] section configures the lock path; the [placement] section configures the placement service entry point.
Note: replace RABBIT_PASS with the password of the openstack account in RabbitMQ; set my_ip to the management IP address of the control node; replace NOVA_DBPASS with the nova database password; replace NOVA_PASS with the nova user's password; replace PLACEMENT_PASS with the placement user's password; replace NEUTRON_PASS with the neutron user's password; replace METADATA_SECRET with a suitable metadata proxy secret.

Additionally, add the Placement API access configuration by hand:

```shell
vim /etc/httpd/conf.d/00-nova-placement-api.conf    (CTL)

<Directory /usr/bin>
   <IfVersion >= 2.4>
      Require all granted
   </IfVersion>
   <IfVersion < 2.4>
      Order allow,deny
      Allow from all
   </IfVersion>
</Directory>
```

Restart the httpd service:

```shell
systemctl restart httpd    (CTL)
```

Determine whether the virtual machine supports hardware acceleration (x86 architecture):

```shell
egrep -c '(vmx|svm)' /proc/cpuinfo    (CPT)
```

If the return value is 0, hardware acceleration is not supported and libvirt must be configured to use QEMU instead of KVM:

```shell
vim /etc/nova/nova.conf    (CPT)

[libvirt]
virt_type = qemu
```

If the return value is 1 or greater, hardware acceleration is supported and no extra configuration is needed.

Note: on arm64 you also need to run the following:

```shell
mkdir -p /usr/share/AAVMF
chown nova:nova /usr/share/AAVMF
ln -s /usr/share/edk2/aarch64/QEMU_EFI-pflash.raw \
      /usr/share/AAVMF/AAVMF_CODE.fd                      (CPT)
ln -s /usr/share/edk2/aarch64/vars-template-pflash.raw \
      /usr/share/AAVMF/AAVMF_VARS.fd                      (CPT)

vim /etc/libvirt/qemu.conf

nvram = ["/usr/share/AAVMF/AAVMF_CODE.fd: \
          /usr/share/AAVMF/AAVMF_VARS.fd", \
         "/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw: \
          /usr/share/edk2/aarch64/vars-template-pflash.raw"]    (CPT)
```

Synchronize the databases.

Synchronize the nova-api database:

```shell
su -s /bin/sh -c "nova-manage api_db sync" nova    (CTL)
```

Register the cell0 database:

```shell
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova    (CTL)
```

Create the cell1 cell:

```shell
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova    (CTL)
```

Synchronize the nova database:

```shell
su -s /bin/sh -c "nova-manage db sync" nova    (CTL)
```

Verify that cell0 and cell1 are registered correctly:

```shell
su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova    (CTL)
```

Add the compute node to the OpenStack cluster:

```shell
su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova    (CPT)
```

Start the services:

```shell
systemctl enable \    (CTL)
openstack-nova-api.service \
openstack-nova-consoleauth.service \
openstack-nova-scheduler.service \
openstack-nova-conductor.service \
openstack-nova-novncproxy.service

systemctl start \    (CTL)
openstack-nova-api.service \
openstack-nova-consoleauth.service \
openstack-nova-scheduler.service \
openstack-nova-conductor.service \
openstack-nova-novncproxy.service

systemctl enable libvirtd.service openstack-nova-compute.service    (CPT)
systemctl start libvirtd.service openstack-nova-compute.service     (CPT)
```

Verification:

```shell
source ~/.admin-openrc    (CTL)
```

List the service components to verify that each process started and registered successfully:

```shell
openstack compute service list    (CTL)
```

List the API endpoints in the identity service to verify the connection to the identity service:

```shell
openstack catalog list    (CTL)
```

List the images in the image service to verify the connection to the image service:

```shell
openstack image list    (CTL)
```

Check that the cells and the placement API are working and that the other prerequisites are in place:

```shell
nova-status upgrade check    (CTL)
```
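The `nova-manage cell_v2 discover_hosts` step above has to be repeated whenever a new compute node is added. A common alternative, sketched below under the assumption that nova-scheduler runs on the control node, is to let the scheduler discover hosts periodically; the 300-second interval is only an example value.

```shell
# Sketch: periodic compute host discovery instead of manual discover_hosts runs (CTL).
vim /etc/nova/nova.conf

[scheduler]
discover_hosts_in_cells_interval = 300
```

```shell
systemctl restart openstack-nova-scheduler.service    (CTL)
```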
### Neutron installation

Create the database, service credentials, and API endpoints.

Create the database:

```shell
mysql -u root -p    (CTL)

MariaDB [(none)]> CREATE DATABASE neutron;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
IDENTIFIED BY 'NEUTRON_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
IDENTIFIED BY 'NEUTRON_DBPASS';
MariaDB [(none)]> exit
```

Note: replace NEUTRON_DBPASS with the password you choose for the neutron database.

```shell
source ~/.admin-openrc    (CTL)
```

Create the neutron service credentials:

```shell
openstack user create --domain default --password-prompt neutron    (CTL)
openstack role add --project service --user neutron admin           (CTL)
openstack service create --name neutron --description "OpenStack Networking" network    (CTL)
```

Create the Neutron service API endpoints:

```shell
openstack endpoint create --region RegionOne network public http://controller:9696      (CTL)
openstack endpoint create --region RegionOne network internal http://controller:9696    (CTL)
openstack endpoint create --region RegionOne network admin http://controller:9696       (CTL)
```

Install the packages:

```shell
yum install openstack-neutron openstack-neutron-linuxbridge-agent ebtables ipset \    (CTL)
openstack-neutron-l3-agent openstack-neutron-dhcp-agent \
openstack-neutron-metadata-agent

yum install openstack-neutron-linuxbridge-agent ebtables ipset    (CPT)
```

Configure neutron.

Configure the main configuration file:

```shell
vim /etc/neutron/neutron.conf

[database]
connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron    (CTL)

[DEFAULT]
core_plugin = ml2                             (CTL)
service_plugins = router                      (CTL)
allow_overlapping_ips = true                  (CTL)
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = true     (CTL)
notify_nova_on_port_data_changes = true       (CTL)
api_workers = 3                               (CTL)

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = neutron
password = NEUTRON_PASS

[nova]
auth_url = http://controller:5000    (CTL)
auth_type = password                 (CTL)
project_domain_name = Default        (CTL)
user_domain_name = Default           (CTL)
region_name = RegionOne              (CTL)
project_name = service               (CTL)
username = nova                      (CTL)
password = NOVA_PASS                 (CTL)

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
```

Explanation: the [database] section configures the database entry point; the [DEFAULT] section enables the ml2 and router plugins, allows overlapping IP addresses, and configures the RabbitMQ message queue entry point; the [DEFAULT] and [keystone_authtoken] sections configure the identity service entry point; the [DEFAULT] and [nova] sections let the network service notify compute of network topology changes; the [oslo_concurrency] section configures the lock path.
Note: replace NEUTRON_DBPASS with the neutron database password; replace RABBIT_PASS with the password of the openstack account in RabbitMQ; replace NEUTRON_PASS with the neutron user's password; replace NOVA_PASS with the nova user's password.

Configure the ML2 plugin:

```shell
vim /etc/neutron/plugins/ml2/ml2_conf.ini    (CTL)

[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security

[ml2_type_flat]
flat_networks = provider

[ml2_type_vxlan]
vni_ranges = 1:1000

[securitygroup]
enable_ipset = true
```

Create the symbolic link /etc/neutron/plugin.ini:

```shell
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
```

Note: the [ml2] section enables flat, vlan, and vxlan networks, enables the linuxbridge and l2population mechanisms, and enables the port security extension driver; the [ml2_type_flat] section configures the flat network as the provider virtual network; the [ml2_type_vxlan] section configures the VXLAN network identifier range; the [securitygroup] section enables ipset.

Remark: the concrete layer-2 configuration can be adjusted to the user's needs; this document uses a provider network with linuxbridge.

Configure the Linux bridge agent:

```shell
vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini

[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME

[vxlan]
enable_vxlan = true
local_ip = OVERLAY_INTERFACE_IP_ADDRESS
l2_population = true

[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
```

Explanation: the [linux_bridge] section maps the provider virtual network to the physical network interface; the [vxlan] section enables the VXLAN overlay network, configures the IP address of the physical interface that handles the overlay network, and enables layer-2 population; the [securitygroup] section enables security groups and configures the linux bridge iptables firewall driver.

Note: replace PROVIDER_INTERFACE_NAME with the physical network interface; replace OVERLAY_INTERFACE_IP_ADDRESS with the management IP address of the control node.

Configure the Layer-3 agent:

```shell
vim /etc/neutron/l3_agent.ini    (CTL)

[DEFAULT]
interface_driver = linuxbridge
```

Explanation: in the [DEFAULT] section, configure the interface driver as linuxbridge.

Configure the DHCP agent:

```shell
vim /etc/neutron/dhcp_agent.ini    (CTL)

[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
```

Explanation: the [DEFAULT] section configures the linuxbridge interface driver and the Dnsmasq DHCP driver, and enables isolated metadata.

Configure the metadata agent:

```shell
vim /etc/neutron/metadata_agent.ini    (CTL)

[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = METADATA_SECRET
```

Explanation: the [DEFAULT] section configures the metadata host and the shared secret.

Note: replace METADATA_SECRET with a suitable metadata proxy secret.
Configure nova:

```shell
vim /etc/nova/nova.conf

[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = Default
user_domain_name = Default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
service_metadata_proxy = true                     (CTL)
metadata_proxy_shared_secret = METADATA_SECRET    (CTL)
```

Explanation: the [neutron] section configures the access parameters, enables the metadata proxy, and configures the secret.

Note: replace NEUTRON_PASS with the neutron user's password; replace METADATA_SECRET with a suitable metadata proxy secret.

Synchronize the database:

```shell
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
--config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
```

Restart the compute API service:

```shell
systemctl restart openstack-nova-api.service
```

Start the network services:

```shell
systemctl enable openstack-neutron-server.service \    (CTL)
openstack-neutron-linuxbridge-agent.service openstack-neutron-dhcp-agent.service \
openstack-neutron-metadata-agent.service openstack-neutron-l3-agent.service

systemctl restart openstack-nova-api.service openstack-neutron-server.service \    (CTL)
openstack-neutron-linuxbridge-agent.service openstack-neutron-dhcp-agent.service \
openstack-neutron-metadata-agent.service openstack-neutron-l3-agent.service

systemctl enable openstack-neutron-linuxbridge-agent.service    (CPT)
systemctl restart openstack-neutron-linuxbridge-agent.service openstack-nova-compute.service    (CPT)
```

Verification. List the agents to verify that the neutron agents started successfully:

```shell
openstack network agent list
```
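As a usage example of the provider network plus linuxbridge setup configured above, the following sketch creates a flat provider network and a subnet. The network name, allocation pool, gateway, and subnet range are placeholders chosen for illustration and must match the actual physical provider network.

```shell
# Sketch: create a flat provider network and subnet (all values are illustrative).
source ~/.admin-openrc
openstack network create --share --external \
  --provider-physical-network provider \
  --provider-network-type flat provider-net
openstack subnet create --network provider-net \
  --allocation-pool start=10.0.0.100,end=10.0.0.200 \
  --gateway 10.0.0.1 --subnet-range 10.0.0.0/24 provider-subnet
openstack network list
```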
### Cinder installation

Create the database, service credentials, and API endpoints.

Create the database:

```shell
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE cinder;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \
IDENTIFIED BY 'CINDER_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \
IDENTIFIED BY 'CINDER_DBPASS';
MariaDB [(none)]> exit
```

Note: replace CINDER_DBPASS with the password you choose for the cinder database.

```shell
source ~/.admin-openrc
```

Create the cinder service credentials:

```shell
openstack user create --domain default --password-prompt cinder
openstack role add --project service --user cinder admin
openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
```

Create the block storage service API endpoints:

```shell
openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s
```

Install the packages:

```shell
yum install openstack-cinder-api openstack-cinder-scheduler    (CTL)

yum install lvm2 device-mapper-persistent-data scsi-target-utils rpcbind nfs-utils \    (CPT)
openstack-cinder-volume openstack-cinder-backup
```

Prepare the storage device; the following is only an example:

```shell
pvcreate /dev/vdb
vgcreate cinder-volumes /dev/vdb

vim /etc/lvm/lvm.conf

devices {
...
filter = [ "a/vdb/", "r/.*/"]
```

Explanation: in the devices section, add a filter that accepts the /dev/vdb device and rejects all other devices.

Prepare NFS:

```shell
mkdir -p /root/cinder/backup

cat << EOF >> /etc/exports
/root/cinder/backup 192.168.1.0/24(rw,sync,no_root_squash,no_all_squash)
EOF
```

Configure cinder:

```shell
vim /etc/cinder/cinder.conf

[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone
my_ip = 10.0.0.11
enabled_backends = lvm                                     (CPT)
backup_driver=cinder.backup.drivers.nfs.NFSBackupDriver    (CPT)
backup_share=HOST:PATH                                     (CPT)

[database]
connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = cinder
password = CINDER_PASS

[oslo_concurrency]
lock_path = /var/lib/cinder/tmp

[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver    (CPT)
volume_group = cinder-volumes                                (CPT)
iscsi_protocol = iscsi                                       (CPT)
iscsi_helper = tgtadm                                        (CPT)
```

Explanation: the [database] section configures the database entry point; the [DEFAULT] section configures the RabbitMQ message queue entry point and my_ip; the [DEFAULT] and [keystone_authtoken] sections configure the identity service entry point; the [oslo_concurrency] section configures the lock path.

Note: replace CINDER_DBPASS with the cinder database password; replace RABBIT_PASS with the password of the openstack account in RabbitMQ; set my_ip to the management IP address of the control node; replace CINDER_PASS with the cinder user's password; replace HOST:PATH with the NFS host IP and the shared path.

Synchronize the database:

```shell
su -s /bin/sh -c "cinder-manage db sync" cinder    (CTL)
```

Configure nova:

```shell
vim /etc/nova/nova.conf    (CTL)

[cinder]
os_region_name = RegionOne
```

Restart the compute API service:

```shell
systemctl restart openstack-nova-api.service
```

Start the cinder services:

```shell
systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service    (CTL)
systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service     (CTL)

systemctl enable rpcbind.service nfs-server.service tgtd.service iscsid.service \   (CPT)
openstack-cinder-volume.service \
openstack-cinder-backup.service
systemctl start rpcbind.service nfs-server.service tgtd.service iscsid.service \    (CPT)
openstack-cinder-volume.service \
openstack-cinder-backup.service
```

Note: when cinder attaches volumes through tgtadm, edit /etc/tgt/tgtd.conf with the following content so that tgtd can discover the cinder-volume iSCSI targets:

```
include /var/lib/cinder/volumes/*
```

Verification:

```shell
source ~/.admin-openrc
openstack volume service list
```
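A quick way to exercise the LVM backend configured above is to create a small test volume. This is only an illustrative sketch; the volume name and size are arbitrary.

```shell
# Sketch: create a 1 GB test volume, check that it becomes "available", then clean up.
source ~/.admin-openrc
openstack volume create --size 1 test-volume
openstack volume list
openstack volume delete test-volume
```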
### horizon installation

Install the packages:

```shell
yum install openstack-dashboard
```

Edit the file and modify the variables:

```shell
vim /etc/openstack-dashboard/local_settings

ALLOWED_HOSTS = ['*', ]
OPENSTACK_HOST = "controller"
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
```

Restart the httpd service:

```shell
systemctl restart httpd
```

Verification: open a browser, go to http://HOSTIP/dashboard/, and log in to horizon.

Note: replace HOSTIP with the management-plane IP address of the control node.

### Tempest installation

Tempest is OpenStack's integration test service. It is recommended if you need comprehensive automated functional testing of the installed OpenStack environment; otherwise it does not have to be installed.

Install Tempest:

```shell
yum install openstack-tempest
```

Initialize a workspace:

```shell
tempest init mytest
```

Edit the configuration file:

```shell
cd mytest
vi etc/tempest.conf
```

tempest.conf must be filled in with the information of the current OpenStack environment; refer to the official example for the details.

Run the tests:

```shell
tempest run
```
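Running the complete suite can take a long time. As a sketch, tempest can also list the discovered tests first and then run only a subset; the regular expression below is just an example.

```shell
# Sketch: list the discovered tests, then run only the identity API tests.
cd mytest
tempest run --list-tests
tempest run --regex tempest.api.identity
```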
### Ironic installation

Ironic is OpenStack's bare metal service. It is recommended if you need bare-metal provisioning; otherwise it does not have to be installed.

Set up the database. The bare metal service stores its information in a database; create an `ironic` database that the `ironic` user can access, replacing IRONIC_DBPASSWORD with a suitable password:

```shell
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE ironic CHARACTER SET utf8;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'localhost' \
IDENTIFIED BY 'IRONIC_DBPASSWORD';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'%' \
IDENTIFIED BY 'IRONIC_DBPASSWORD';
```

Create the service user credentials.

1. Create the Bare Metal service users:

```shell
openstack user create --password IRONIC_PASSWORD \
--email ironic@example.com ironic
openstack role add --project service --user ironic admin
openstack service create --name ironic --description "Ironic baremetal provisioning service" baremetal

openstack service create --name ironic-inspector --description "Ironic inspector baremetal provisioning service" baremetal-introspection
openstack user create --password IRONIC_INSPECTOR_PASSWORD --email ironic_inspector@example.com ironic_inspector
openstack role add --project service --user ironic-inspector admin
```

2. Create the Bare Metal service endpoints:

```shell
openstack endpoint create --region RegionOne baremetal admin http://$IRONIC_NODE:6385
openstack endpoint create --region RegionOne baremetal public http://$IRONIC_NODE:6385
openstack endpoint create --region RegionOne baremetal internal http://$IRONIC_NODE:6385
openstack endpoint create --region RegionOne baremetal-introspection internal http://172.20.19.13:5050/v1
openstack endpoint create --region RegionOne baremetal-introspection public http://172.20.19.13:5050/v1
openstack endpoint create --region RegionOne baremetal-introspection admin http://172.20.19.13:5050/v1
```

Configure the ironic-api service. The configuration file path is /etc/ironic/ironic.conf.

1. Configure the location of the database through the `connection` option, replacing IRONIC_DBPASSWORD with the `ironic` user's password and DB_IP with the IP address of the DB server:

```
[database]
# The SQLAlchemy connection string used to connect to the
# database (string value)
connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic
```

2. Configure the ironic-api service to use the RabbitMQ message broker with the following option; replace RPC_* with the detailed address and credentials of RabbitMQ:

```
[DEFAULT]
# A URL representing the messaging driver to use and its full
# configuration. (string value)
transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
```

You can also replace RabbitMQ with json-rpc if you prefer.

3. Configure the ironic-api service to use the identity service credentials; replace PUBLIC_IDENTITY_IP with the public IP of the identity server, PRIVATE_IDENTITY_IP with the private IP of the identity server, and IRONIC_PASSWORD with the password of the `ironic` user in the identity service:

```
[DEFAULT]
# Authentication strategy used by ironic-api: one of
# "keystone" or "noauth". "noauth" should not be used in a
# production environment because all authentication will be
# disabled. (string value)
auth_strategy=keystone

[keystone_authtoken]
# Authentication type to load (string value)
auth_type=password
# Complete public Identity API endpoint (string value)
www_authenticate_uri=http://PUBLIC_IDENTITY_IP:5000
# Complete admin Identity API endpoint. (string value)
auth_url=http://PRIVATE_IDENTITY_IP:5000
# Service username. (string value)
username=ironic
# Service account password. (string value)
password=IRONIC_PASSWORD
# Service tenant name. (string value)
project_name=service
# Domain name containing project (string value)
project_domain_name=Default
# User's domain name (string value)
user_domain_name=Default
```

4. Create the bare metal service database tables:

```shell
ironic-dbsync --config-file /etc/ironic/ironic.conf create_schema
```

5. Restart the ironic-api service:

```shell
sudo systemctl restart openstack-ironic-api
```
Configure the ironic-conductor service.

1. Replace HOST_IP with the IP of the conductor host:

```
[DEFAULT]
# IP address of this host. If unset, will determine the IP
# programmatically. If unable to do so, will use "127.0.0.1".
# (string value)
my_ip=HOST_IP
```

2. Configure the location of the database; ironic-conductor should use the same configuration as ironic-api. Replace IRONIC_DBPASSWORD with the `ironic` user's password and DB_IP with the IP address of the DB server:

```
[database]
# The SQLAlchemy connection string to use to connect to the
# database. (string value)
connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic
```

3. Configure the service to use the RabbitMQ message broker with the following option; ironic-conductor should use the same configuration as ironic-api. Replace RPC_* with the detailed address and credentials of RabbitMQ:

```
[DEFAULT]
# A URL representing the messaging driver to use and its full
# configuration. (string value)
transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
```

You can also replace RabbitMQ with json-rpc if you prefer.

4. Configure credentials for accessing other OpenStack services.

To communicate with other OpenStack services, the bare metal service authenticates against the OpenStack Identity service with service users when calling them. The credentials of these users must be configured in each configuration section associated with the corresponding service:

- [neutron] - access to the OpenStack networking service
- [glance] - access to the OpenStack image service
- [swift] - access to the OpenStack object storage service
- [cinder] - access to the OpenStack block storage service
- [inspector] - access to the OpenStack bare metal introspection service
- [service_catalog] - a special entry that stores the credentials the bare metal service uses to discover its own API URL endpoint as registered in the OpenStack Identity service catalog

For simplicity, the same service user can be used for all services. For backward compatibility this should be the same user configured in the [keystone_authtoken] section of the ironic-api service, but this is not mandatory; a different service user can be created and configured for each service.

In the example below, the authentication information for accessing the OpenStack networking service is configured as follows:

- the networking service is deployed in the identity service region named RegionOne, with only the public endpoint interface registered in the service catalog;
- requests use a specific CA SSL certificate for HTTPS connections;
- the same service user as for the ironic-api service is used;
- the dynamic password authentication plugin discovers a suitable identity service API version based on the other options.

```
[neutron]
# Authentication type to load (string value)
auth_type = password
# Authentication URL (string value)
auth_url=https://IDENTITY_IP:5000/
# Username (string value)
username=ironic
# User's password (string value)
password=IRONIC_PASSWORD
# Project name to scope to (string value)
project_name=service
# Domain ID containing project (string value)
project_domain_id=default
# User's domain id (string value)
user_domain_id=default
# PEM encoded Certificate Authority to use when verifying
# HTTPs connections. (string value)
cafile=/opt/stack/data/ca-bundle.pem
# The default region_name for endpoint URL discovery. (string
# value)
region_name = RegionOne
# List of interfaces, in order of preference, for endpoint
# URL. (list value)
valid_interfaces=public
```
By default, to communicate with other services the bare metal service tries to discover a suitable endpoint for each service through the identity service's service catalog. To use a different endpoint for a specific service, specify it with the endpoint_override option in the bare metal service's configuration file:

```
[neutron]
...
endpoint_override =
```

5. Configure the allowed drivers and hardware types.

Set the hardware types that the ironic-conductor service may use through enabled_hardware_types:

```
[DEFAULT]
enabled_hardware_types = ipmi
```

Configure the hardware interfaces:

```
enabled_boot_interfaces = pxe
enabled_deploy_interfaces = direct,iscsi
enabled_inspect_interfaces = inspector
enabled_management_interfaces = ipmitool
enabled_power_interfaces = ipmitool
```

Configure the interface defaults:

```
[DEFAULT]
default_deploy_interface = direct
default_network_interface = neutron
```

If any driver that uses Direct deploy is enabled, the Swift backend of the image service must be installed and configured. The Ceph object gateway (RADOS gateway) is also supported as an image service backend.

6. Restart the ironic-conductor service:

```shell
sudo systemctl restart openstack-ironic-conductor
```

Configure the ironic-inspector service. The configuration file path is /etc/ironic-inspector/inspector.conf.

1. Create the database:

```shell
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE ironic_inspector CHARACTER SET utf8;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic_inspector.* TO 'ironic_inspector'@'localhost' \
IDENTIFIED BY 'IRONIC_INSPECTOR_DBPASSWORD';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic_inspector.* TO 'ironic_inspector'@'%' \
IDENTIFIED BY 'IRONIC_INSPECTOR_DBPASSWORD';
```

2. Configure the location of the database through the `connection` option, replacing IRONIC_INSPECTOR_DBPASSWORD with the `ironic_inspector` user's password and DB_IP with the IP address of the DB server:

```
[database]
backend = sqlalchemy
connection = mysql+pymysql://ironic_inspector:IRONIC_INSPECTOR_DBPASSWORD@DB_IP/ironic_inspector
```

3. Configure the message queue communication address:

```
[DEFAULT]
transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
```

4. Set up keystone authentication:

```
[DEFAULT]
auth_strategy = keystone

[ironic]
api_endpoint = http://IRONIC_API_HOST_ADDRESS:6385
auth_type = password
auth_url = http://PUBLIC_IDENTITY_IP:5000
auth_strategy = keystone
ironic_url = http://IRONIC_API_HOST_ADDRESS:6385
os_region = RegionOne
project_name = service
project_domain_name = Default
user_domain_name = Default
username = IRONIC_SERVICE_USER_NAME
password = IRONIC_SERVICE_USER_PASSWORD
```

5. Configure the ironic-inspector dnsmasq service. The configuration file is /etc/ironic-inspector/dnsmasq.conf:
```
port=0
interface=enp3s0                            # replace with the actual listening network interface
dhcp-range=172.20.19.100,172.20.19.110      # replace with the actual DHCP address range
bind-interfaces
enable-tftp
dhcp-match=set:efi,option:client-arch,7
dhcp-match=set:efi,option:client-arch,9
dhcp-match=aarch64, option:client-arch,11
dhcp-boot=tag:aarch64,grubaa64.efi
dhcp-boot=tag:!aarch64,tag:efi,grubx64.efi
dhcp-boot=tag:!aarch64,tag:!efi,pxelinux.0
tftp-root=/tftpboot                         # replace with the actual tftpboot directory
log-facility=/var/log/dnsmasq.log
```

6. Start the services:

```shell
systemctl enable --now openstack-ironic-inspector.service
systemctl enable --now openstack-ironic-inspector-dnsmasq.service
```

Building the deploy ramdisk image.

The Queens ramdisk image can be built with the ironic-python-agent service or the disk-image-builder tool, or with the community's latest ironic-python-agent-builder. Users may also choose other tools. To use the native Queens tools, install the corresponding package:

```shell
yum install openstack-ironic-python-agent
```

or

```shell
yum install diskimage-builder
```

Refer to the official documentation for the detailed usage.

The complete process of building the deploy image used by ironic with ironic-python-agent-builder is described here.

Install ironic-python-agent-builder.

1. Install the tool:

```shell
pip install ironic-python-agent-builder
```

2. Modify the python interpreter in the following files:

```shell
/usr/bin/yum /usr/libexec/urlgrabber-ext-down
```

3. Install the other required tools:

```shell
yum install git
```

Because `DIB` depends on the `semanage` command, make sure the command is available before building the image: `semanage --help`. If the command is missing, install it:

```shell
# First find out which package provides it
[root@localhost ~]# yum provides /usr/sbin/semanage
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirror.vcu.edu
 * extras: mirror.vcu.edu
 * updates: mirror.math.princeton.edu
policycoreutils-python-2.5-34.el7.aarch64 : SELinux policy core python utilities
Repo        : base
Matched from:
Filename    : /usr/sbin/semanage

# Install it
[root@localhost ~]# yum install policycoreutils-python
```

Build the image.

On `arm` architectures, add:

```shell
export ARCH=aarch64
```

Basic usage:

```shell
usage: ironic-python-agent-builder [-h] [-r RELEASE] [-o OUTPUT] [-e ELEMENT]
                                   [-b BRANCH] [-v] [--extra-args EXTRA_ARGS]
                                   distribution

positional arguments:
  distribution          Distribution to use

optional arguments:
  -h, --help            show this help message and exit
  -r RELEASE, --release RELEASE
                        Distribution release to use
  -o OUTPUT, --output OUTPUT
                        Output base file name
  -e ELEMENT, --element ELEMENT
                        Additional DIB element to use
  -b BRANCH, --branch BRANCH
                        If set, override the branch that is used for ironic-
                        python-agent and requirements
  -v, --verbose         Enable verbose logging in diskimage-builder
  --extra-args EXTRA_ARGS
                        Extra arguments to pass to diskimage-builder
```

Example:

```shell
ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky
```
Allow SSH login. Initialize the environment variables, then build the image:

```shell
export DIB_DEV_USER_USERNAME=ipa
export DIB_DEV_USER_PWDLESS_SUDO=yes
export DIB_DEV_USER_PASSWORD='123'
ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky -e selinux-permissive -e devuser
```

Specify a code repository. Initialize the corresponding environment variables, then build the image:

```shell
# Specify the repository location and version
DIB_REPOLOCATION_ironic_python_agent=git@172.20.2.149:liuzz/ironic-python-agent.git
DIB_REPOREF_ironic_python_agent=origin/develop

# Clone the code directly from gerrit
DIB_REPOLOCATION_ironic_python_agent=https://review.opendev.org/openstack/ironic-python-agent
DIB_REPOREF_ironic_python_agent=refs/changes/43/701043/1
```

Reference: [source-repositories](https://docs.openstack.org/diskimage-builder/latest/elements/source-repositories/README.html).

Specifying the repository location and version has been verified to work.

### Kolla installation

Kolla provides production-ready containerized deployment for OpenStack services. openEuler 20.03 LTS SP2 introduced the Kolla and Kolla-ansible services.

Installing Kolla is very simple; just install the corresponding RPM packages:

```shell
yum install openstack-kolla openstack-kolla-ansible
```

After the installation, the kolla-ansible, kolla-build, kolla-genpwd, kolla-mergepwd and related commands are available.
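As a rough illustration of what usually follows the package installation (not covered by this guide), a typical kolla-ansible all-in-one flow looks roughly like the sketch below. The inventory path, the example-configuration path, and the exact sequence are assumptions based on upstream Kolla-ansible conventions; consult the Kolla-ansible documentation for the release actually packaged in openEuler.

```shell
# Sketch of a typical kolla-ansible all-in-one flow (paths and steps are illustrative).
cp -r /usr/share/kolla-ansible/etc_examples/kolla /etc/kolla
cp /usr/share/kolla-ansible/ansible/inventory/all-in-one .
kolla-genpwd                                      # fill /etc/kolla/passwords.yml with random passwords
kolla-ansible -i ./all-in-one bootstrap-servers
kolla-ansible -i ./all-in-one prechecks
kolla-ansible -i ./all-in-one deploy
```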
### Trove installation

Trove is OpenStack's database service. It is recommended if users want the database service provided by OpenStack; otherwise it does not have to be installed.

Set up the database. The database service stores its information in a database; create a `trove` database that the `trove` user can access, replacing TROVE_DBPASSWORD with a suitable password:

```shell
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE trove CHARACTER SET utf8;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'localhost' \
IDENTIFIED BY 'TROVE_DBPASSWORD';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'%' \
IDENTIFIED BY 'TROVE_DBPASSWORD';
```

Create the service user credentials.

1. Create the Trove service user:

```shell
openstack user create --password TROVE_PASSWORD \
--email trove@example.com trove
openstack role add --project service --user trove admin
openstack service create --name trove --description "Database service" database
```

Explanation: replace TROVE_PASSWORD with the password of the trove user.

2. Create the Database service endpoints:

```shell
openstack endpoint create --region RegionOne database public http://$TROVE_NODE:8779/v1.0/%\(tenant_id\)s
openstack endpoint create --region RegionOne database internal http://$TROVE_NODE:8779/v1.0/%\(tenant_id\)s
openstack endpoint create --region RegionOne database admin http://$TROVE_NODE:8779/v1.0/%\(tenant_id\)s
```

Explanation: replace $TROVE_NODE with the node where the Trove API service is deployed.

Install and configure the Trove components.

1. Install the Trove packages:

```shell
yum install openstack-trove python-troveclient
```

2. Configure `trove.conf`:

```shell
vim /etc/trove/trove.conf

[DEFAULT]
bind_host=TROVE_NODE_IP
log_dir = /var/log/trove
auth_strategy = keystone
# Config option for showing the IP address that nova doles out
add_addresses = True
network_label_regex = ^NETWORK_LABEL$
api_paste_config = /etc/trove/api-paste.ini
trove_auth_url = http://controller:35357/v3/
nova_compute_url = http://controller:8774/v2
cinder_url = http://controller:8776/v1
nova_proxy_admin_user = admin
nova_proxy_admin_pass = ADMIN_PASS
nova_proxy_admin_tenant_name = service
taskmanager_manager = trove.taskmanager.manager.Manager
use_nova_server_config_drive = True
# Set these if using Neutron Networking
network_driver=trove.network.neutron.NeutronDriver
network_label_regex=.*
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/

[database]
connection = mysql+pymysql://trove:TROVE_DBPASS@controller/trove

[keystone_authtoken]
www_authenticate_uri = http://controller:5000/v3/
auth_url=http://controller:35357/v3/
#auth_uri = http://controller/identity
#auth_url = http://controller/identity_admin
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = trove
password = TROVE_PASS
```

Explanation: in the [DEFAULT] group, bind_host is the IP of the Trove deployment node; nova_compute_url and cinder_url are the endpoints created by Nova and Cinder in Keystone; the nova_proxy_XXX options hold the information of a user that can access the Nova service, with the admin user used as an example above; transport_url is the RabbitMQ connection information, with RABBIT_PASS replaced by the RabbitMQ password. The connection in the [database] group points to the database created for Trove in mysql earlier. In the Trove user information, replace TROVE_PASS with the actual trove user's password.

3. Configure `trove-taskmanager.conf`:

```shell
vim /etc/trove/trove-taskmanager.conf

[DEFAULT]
log_dir = /var/log/trove
trove_auth_url = http://controller/identity/v2.0
nova_compute_url = http://controller:8774/v2
cinder_url = http://controller:8776/v1
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/

[database]
connection = mysql+pymysql://trove:TROVE_DBPASS@controller/trove
```

Explanation: configure it by analogy with `trove.conf`.
4. Configure `trove-conductor.conf`:

```shell
vim /etc/trove/trove-conductor.conf
```

```
[DEFAULT]
log_dir = /var/log/trove
trove_auth_url = http://controller/identity/v2.0
nova_compute_url = http://controller:8774/v2
cinder_url = http://controller:8776/v1
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/

[database]
connection = mysql+pymysql://trove:trove@controller/trove
```

Explanation: configure it by analogy with `trove.conf`.

5. Configure `trove-guestagent.conf`:

```shell
vim /etc/trove/trove-guestagent.conf
```

```
[DEFAULT]
rabbit_host = controller
rabbit_password = RABBIT_PASS
nova_proxy_admin_user = admin
nova_proxy_admin_pass = ADMIN_PASS
nova_proxy_admin_tenant_name = service
trove_auth_url = http://controller/identity_admin/v2.0
```

**Explanation:** `guestagent` is a standalone Trove component that must be built into the virtual machine images Trove creates through Nova. After a database instance is created, the guestagent process starts inside it and reports heartbeats to Trove through the message queue (RabbitMQ), so the RabbitMQ user and password must be configured here.

6. Generate the Trove database tables:

```shell
su -s /bin/sh -c "trove-manage db_sync" trove
```

Finish the installation and configuration

1. Configure the Trove services to start on boot:

```shell
systemctl enable openstack-trove-api.service \
                 openstack-trove-taskmanager.service \
                 openstack-trove-conductor.service
```

2. Start the services:

```shell
systemctl start openstack-trove-api.service \
                openstack-trove-taskmanager.service \
                openstack-trove-conductor.service
```

# OpenStack-Queens Deployment Guide

- OpenStack Introduction
- Conventions
- Prepare the Environment
- Environment Configuration
- Install SQL Database
- Install RabbitMQ
- Install Memcached
- Install OpenStack
- Keystone Installation
- Glance Installation
- Nova Installation
- Neutron Installation
- Cinder Installation
- Horizon Installation
- Tempest Installation
- Ironic Installation
- Kolla Installation
- Trove Installation

## OpenStack Introduction

OpenStack is both a community and a project. It provides an operating platform and a tool set for deploying clouds, offering organizations scalable and flexible cloud computing.

As an open source cloud computing management platform, OpenStack combines several major components, such as nova, cinder, neutron, glance, keystone and horizon, to do its work. OpenStack supports almost every type of cloud environment; the project aims to deliver a cloud management platform that is simple to implement, massively scalable, feature rich and based on unified standards. OpenStack provides an Infrastructure-as-a-Service (IaaS) solution through a set of complementary services, each of which offers an API for integration.

The officially certified third-party oepkg yum repository for openEuler 20.03-LTS-SP2 already supports OpenStack Queens. Users can configure the oepkg yum repository and then deploy OpenStack by following this document.

## Conventions

OpenStack supports multiple deployment topologies. This document covers both the ALL in One and the Distributed deployment modes, using the following conventions:

ALL in One mode: ignore all suffixes.

Distributed mode:

- A `(CTL)` suffix means the configuration item or command applies only to the `control node`.
- A `(CPT)` suffix means the configuration item or command applies only to the `compute node`.
- Anything else applies to both the `control node` and the `compute node`.

Note: the services affected by these conventions are Cinder, Nova and Neutron.

## Prepare the Environment

## Environment Configuration

Configure the officially certified third-party oepkg repository for 20.03-LTS-SP2:

```shell
cat << EOF >> /etc/yum.repos.d/OpenStack_Queens.repo
[openstack_queens]
name=OpenStack_Queens
baseurl=https://repo.oepkgs.net/openEuler/rpm/openEuler-20.03-LTS-SP2/budding-openeuler/openstack/queens/\$basearch/
gpgcheck=0
enabled=1
EOF

yum clean all && yum makecache
```

Set the hostname and host mappings.

Set the hostname of each node:

```shell
hostnamectl set-hostname controller                          (CTL)
hostnamectl set-hostname compute                              (CPT)
```

Assuming the IP of the controller node is 10.0.0.11 and the IP of the compute node (if present) is 10.0.0.12, add the following to `/etc/hosts`:

```
10.0.0.11 controller
10.0.0.12 compute
```

## Install SQL Database

Run the following command to install the packages:

```shell
yum install mariadb mariadb-server python2-PyMySQL
```

Run the following command to create and edit the `/etc/my.cnf.d/openstack.cnf` file:

```shell
vim /etc/my.cnf.d/openstack.cnf
```

```
[mysqld]
bind-address = 10.0.0.11
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
```

Note: set `bind-address` to the management IP address of the control node.

Start the database service and configure it to start on boot:

```shell
systemctl enable mariadb.service
systemctl start mariadb.service
```

Configure the default database password (optional):

```shell
mysql_secure_installation
```

Note: follow the on-screen prompts.
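Before continuing, it can be worth a quick sanity check that MariaDB is running and listening on the management address. This check is not part of the original guide; it assumes the example bind address 10.0.0.11 used above:

```shell
systemctl status mariadb.service
# confirm MariaDB is listening on the management address (port 3306)
ss -tnlp | grep 3306
# list the databases locally with the root password set above
mysql -u root -p -e "SHOW DATABASES;"
```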
## Install RabbitMQ

Run the following command to install the packages:

```shell
yum install rabbitmq-server
```

Start the RabbitMQ service and configure it to start on boot:

```shell
systemctl enable rabbitmq-server.service
systemctl start rabbitmq-server.service
```

Add the OpenStack user:

```shell
rabbitmqctl add_user openstack RABBIT_PASS
```

Note: replace `RABBIT_PASS` with a password for the OpenStack user.

Set the permissions of the openstack user to allow configuration, write and read access:

```shell
rabbitmqctl set_permissions openstack ".*" ".*" ".*"
```

## Install Memcached

Run the following command to install the dependency packages:

```shell
yum install memcached python2-memcached
```

Edit the `/etc/sysconfig/memcached` file:

```shell
vim /etc/sysconfig/memcached
```

```
OPTIONS="-l 127.0.0.1,::1,controller"
```

Run the following commands to start the Memcached service and configure it to start on boot:

```shell
systemctl enable memcached.service
systemctl start memcached.service
```

After the service has started, you can run `memcached-tool controller stats` to make sure it started correctly and is available; `controller` can be replaced with the management IP address of the control node.

## Install OpenStack

### Keystone Installation

Create the keystone database and grant privileges:

```shell
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE keystone;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
IDENTIFIED BY 'KEYSTONE_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
IDENTIFIED BY 'KEYSTONE_DBPASS';
MariaDB [(none)]> exit
```

Note: replace `KEYSTONE_DBPASS` with a password for the Keystone database.

Install the packages:

```shell
yum install openstack-keystone httpd python2-mod_wsgi
```

Configure keystone:

```shell
vim /etc/keystone/keystone.conf
```

```
[database]
connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone

[token]
provider = fernet
```

Explanation:

- `[database]` section: configure the database entry point.
- `[token]` section: configure the token provider.

Note: replace `KEYSTONE_DBPASS` with the password of the Keystone database.

Synchronize the database:

```shell
su -s /bin/sh -c "keystone-manage db_sync" keystone
```

Initialize the Fernet key repositories:

```shell
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
```

Bootstrap the service:

```shell
keystone-manage bootstrap --bootstrap-password ADMIN_PASS \
    --bootstrap-admin-url http://controller:5000/v3/ \
    --bootstrap-internal-url http://controller:5000/v3/ \
    --bootstrap-public-url http://controller:5000/v3/ \
    --bootstrap-region-id RegionOne
```

Note: replace `ADMIN_PASS` with a password for the admin user.

Configure the Apache HTTP server:

```shell
vim /etc/httpd/conf/httpd.conf
```

```
ServerName controller
```

```shell
ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
```

Explanation: set the `ServerName` entry to refer to the control node.

Note: if the `ServerName` entry does not exist, it must be created.

Start the Apache HTTP service:

```shell
systemctl enable httpd.service
systemctl start httpd.service
```

Create the environment variable configuration:

```shell
cat << EOF >> ~/.admin-openrc
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
EOF
```

Note: replace `ADMIN_PASS` with the password of the admin user.

Create the domain, projects, users and roles in turn; python2-openstackclient must be installed first:

```shell
yum install python2-openstackclient
```

Import the environment variables:

```shell
source ~/.admin-openrc
```

Create the project `service`; the domain `default` was already created by `keystone-manage bootstrap`:

```shell
openstack domain create --description "An Example Domain" example
openstack project create --domain default --description "Service Project" service
```

Create the (non-admin) project `myproject`, the user `myuser` and the role `myrole`, and add the role `myrole` to `myproject` and `myuser`:

```shell
openstack project create --domain default --description "Demo Project" myproject
openstack user create --domain default --password-prompt myuser
openstack role create myrole
openstack role add --project myproject --user myuser myrole
```

Verification

Unset the temporary environment variables OS_AUTH_URL and OS_PASSWORD:

```shell
source ~/.admin-openrc
unset OS_AUTH_URL OS_PASSWORD
```

Request a token for the admin user:

```shell
openstack --os-auth-url http://controller:5000/v3 \
    --os-project-domain-name Default --os-user-domain-name Default \
    --os-project-name admin --os-username admin token issue
```

Request a token for the myuser user:

```shell
openstack --os-auth-url http://controller:5000/v3 \
    --os-project-domain-name Default --os-user-domain-name Default \
    --os-project-name myproject --os-username myuser token issue
```

### Glance Installation

Create the database, service credentials and API endpoints.

Create the database:

```shell
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE glance;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
IDENTIFIED BY 'GLANCE_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
IDENTIFIED BY 'GLANCE_DBPASS';
MariaDB [(none)]> exit
```

Note: replace `GLANCE_DBPASS` with a password for the glance database.

Create the service credentials:

```shell
source ~/.admin-openrc

openstack user create --domain default --password-prompt glance
openstack role add --project service --user glance admin
openstack service create --name glance --description "OpenStack Image" image
```

Create the Image service API endpoints:

```shell
openstack endpoint create --region RegionOne image public http://controller:9292
openstack endpoint create --region RegionOne image internal http://controller:9292
openstack endpoint create --region RegionOne image admin http://controller:9292
```
Install the packages:

```shell
yum install openstack-glance
```

Configure glance:

```shell
vim /etc/glance/glance-api.conf
```

```
[database]
connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = GLANCE_PASS

[paste_deploy]
flavor = keystone

[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
```

```shell
vim /etc/glance/glance-registry.conf
```

```
[database]
connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = GLANCE_PASS

[paste_deploy]
flavor = keystone

[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
```

Explanation:

- `[database]` section: configure the database entry point.
- `[keystone_authtoken]` and `[paste_deploy]` sections: configure the Identity service entry point.
- `[glance_store]` section: configure the local filesystem store and the location of the image files.

Note:

- Replace `GLANCE_DBPASS` with the password of the glance database.
- Replace `GLANCE_PASS` with the password of the glance user.

Synchronize the database:

```shell
su -s /bin/sh -c "glance-manage db_sync" glance
```

Start the services:

```shell
systemctl enable openstack-glance-api.service openstack-glance-registry.service
systemctl start openstack-glance-api.service openstack-glance-registry.service
```

Verification

Download an image:

```shell
source ~/.admin-openrc
wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
```

Note: if your environment runs on the Kunpeng (arm64) architecture, download the arm64 version of the image.

Upload the image to the Image service:

```shell
openstack image create --disk-format qcow2 --container-format bare \
    --file cirros-0.4.0-x86_64-disk.img --public cirros
```

Confirm the upload and verify the image attributes:

```shell
openstack image list
```

### Nova Installation

Create the database, service credentials and API endpoints.

Create the databases:

```shell
mysql -u root -p                                              (CTL)

MariaDB [(none)]> CREATE DATABASE nova_api;
MariaDB [(none)]> CREATE DATABASE nova;
MariaDB [(none)]> CREATE DATABASE nova_cell0;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> exit
```

Note: replace `NOVA_DBPASS` with a password for the nova databases.

```shell
source ~/.admin-openrc                                        (CTL)
```

Create the nova service credentials:

```shell
openstack user create --domain default --password-prompt nova                          (CTL)
openstack role add --project service --user nova admin                                 (CTL)
openstack service create --name nova --description "OpenStack Compute" compute         (CTL)
```

Create the placement service credentials:

```shell
openstack user create --domain default --password-prompt placement                     (CTL)
openstack role add --project service --user placement admin                            (CTL)
openstack service create --name placement --description "Placement API" placement      (CTL)
```

Create the nova API endpoints:

```shell
openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1     (CTL)
openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1   (CTL)
openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1      (CTL)
```

Create the placement API endpoints:

```shell
openstack endpoint create --region RegionOne placement public http://controller:8778        (CTL)
openstack endpoint create --region RegionOne placement internal http://controller:8778      (CTL)
openstack endpoint create --region RegionOne placement admin http://controller:8778         (CTL)
```

Install the packages:

```shell
yum install openstack-nova-api openstack-nova-conductor openstack-nova-console \
    openstack-nova-novncproxy openstack-nova-scheduler openstack-nova-placement-api    (CTL)

yum install openstack-nova-compute                                                     (CPT)
```

Note: on arm64, the following command is also required:

```shell
yum install edk2-aarch64                                      (CPT)
```

Configure nova:

```shell
vim /etc/nova/nova.conf
```

```
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
my_ip = 10.0.0.11
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver
compute_driver = libvirt.LibvirtDriver                        (CPT)
instances_path = /var/lib/nova/instances/                     (CPT)
lock_path = /var/lib/nova/tmp                                 (CPT)

[api_database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api    (CTL)

[database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova        (CTL)

[api]
auth_strategy = keystone

[keystone_authtoken]
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = NOVA_PASS

[vnc]
enabled = true
server_listen = $my_ip
server_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html    (CPT)

[libvirt]
virt_type = qemu                                              (CPT)
cpu_mode = custom                                             (CPT)
cpu_model = cortex-a7                                         (CPT)

[glance]
api_servers = http://controller:9292

[oslo_concurrency]
lock_path = /var/lib/nova/tmp                                 (CTL)

[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = PLACEMENT_PASS

[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
service_metadata_proxy = true                                 (CTL)
metadata_proxy_shared_secret = METADATA_SECRET                (CTL)
```
Explanation:

- `[DEFAULT]` section: enable the compute and metadata APIs, configure the RabbitMQ message queue entry point, configure `my_ip`, and enable the neutron network service.
- `[api_database]` and `[database]` sections: configure the database entry points.
- `[api]` and `[keystone_authtoken]` sections: configure the Identity service entry point.
- `[vnc]` section: enable and configure the remote console entry point.
- `[glance]` section: configure the address of the Image service API.
- `[oslo_concurrency]` section: configure the lock path.
- `[placement]` section: configure the entry point of the placement service.

Note:

- Replace `RABBIT_PASS` with the password of the openstack account in RabbitMQ.
- Set `my_ip` to the management IP address of the control node.
- Replace `NOVA_DBPASS` with the password of the nova database.
- Replace `NOVA_PASS` with the password of the nova user.
- Replace `PLACEMENT_PASS` with the password of the placement user.
- Replace `NEUTRON_PASS` with the password of the neutron user.
- Replace `METADATA_SECRET` with a suitable metadata proxy secret.

Additionally, add the Placement API access configuration manually:

```shell
vim /etc/httpd/conf.d/00-nova-placement-api.conf              (CTL)
```

```
<Directory /usr/bin>
   <IfVersion >= 2.4>
      Require all granted
   </IfVersion>
   <IfVersion < 2.4>
      Order allow,deny
      Allow from all
   </IfVersion>
</Directory>
```

Restart the httpd service:

```shell
systemctl restart httpd                                       (CTL)
```

Check whether virtual machine hardware acceleration is supported (x86 architecture):

```shell
egrep -c '(vmx|svm)' /proc/cpuinfo                            (CPT)
```

If the return value is 0, hardware acceleration is not supported and libvirt must be configured to use QEMU instead of KVM:

```shell
vim /etc/nova/nova.conf                                       (CPT)
```

```
[libvirt]
virt_type = qemu
```

If the return value is 1 or greater, hardware acceleration is supported and no extra configuration is needed.

Note: on arm64, the following commands are also required:

```shell
mkdir -p /usr/share/AAVMF
chown nova:nova /usr/share/AAVMF

ln -s /usr/share/edk2/aarch64/QEMU_EFI-pflash.raw \
      /usr/share/AAVMF/AAVMF_CODE.fd                          (CPT)
ln -s /usr/share/edk2/aarch64/vars-template-pflash.raw \
      /usr/share/AAVMF/AAVMF_VARS.fd                          (CPT)
```

```shell
vim /etc/libvirt/qemu.conf
```

```
nvram = ["/usr/share/AAVMF/AAVMF_CODE.fd: \
          /usr/share/AAVMF/AAVMF_VARS.fd", \
         "/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw: \
          /usr/share/edk2/aarch64/vars-template-pflash.raw"]  (CPT)
```

Synchronize the databases.

Synchronize the nova-api database:

```shell
su -s /bin/sh -c "nova-manage api_db sync" nova               (CTL)
```

Register the cell0 database:

```shell
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova         (CTL)
```

Create the cell1 cell:

```shell
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova    (CTL)
```

Synchronize the nova database:

```shell
su -s /bin/sh -c "nova-manage db sync" nova                   (CTL)
```

Verify that cell0 and cell1 are registered correctly:

```shell
su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova        (CTL)
```

Add the compute node to the OpenStack cluster:

```shell
su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova    (CPT)
```

Start the services:

```shell
systemctl enable \                                            (CTL)
    openstack-nova-api.service \
    openstack-nova-consoleauth.service \
    openstack-nova-scheduler.service \
    openstack-nova-conductor.service \
    openstack-nova-novncproxy.service

systemctl start \                                             (CTL)
    openstack-nova-api.service \
    openstack-nova-consoleauth.service \
    openstack-nova-scheduler.service \
    openstack-nova-conductor.service \
    openstack-nova-novncproxy.service

systemctl enable libvirtd.service openstack-nova-compute.service    (CPT)
systemctl start libvirtd.service openstack-nova-compute.service     (CPT)
```

Verification:

```shell
source ~/.admin-openrc                                        (CTL)
```

List the service components to verify that each process started and registered successfully:

```shell
openstack compute service list                                (CTL)
```

List the API endpoints in the Identity service to verify the connection to the Identity service:

```shell
openstack catalog list                                        (CTL)
```

List the images in the Image service to verify the connection to the Image service:

```shell
openstack image list                                          (CTL)
```

Check whether the cells and the placement API are working correctly and whether the other prerequisites are in place:

```shell
nova-status upgrade check                                     (CTL)
```

### Neutron Installation

Create the database, service credentials and API endpoints.

Create the database:

```shell
mysql -u root -p                                              (CTL)

MariaDB [(none)]> CREATE DATABASE neutron;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
IDENTIFIED BY 'NEUTRON_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
IDENTIFIED BY 'NEUTRON_DBPASS';
MariaDB [(none)]> exit
```

Note: replace `NEUTRON_DBPASS` with a password for the neutron database.

```shell
source ~/.admin-openrc                                        (CTL)
```

Create the neutron service credentials:

```shell
openstack user create --domain default --password-prompt neutron                        (CTL)
openstack role add --project service --user neutron admin                               (CTL)
openstack service create --name neutron --description "OpenStack Networking" network    (CTL)
```

Create the Networking service API endpoints:

```shell
openstack endpoint create --region RegionOne network public http://controller:9696      (CTL)
openstack endpoint create --region RegionOne network internal http://controller:9696    (CTL)
openstack endpoint create --region RegionOne network admin http://controller:9696       (CTL)
```

Install the packages:

```shell
yum install openstack-neutron openstack-neutron-linuxbridge-agent ebtables ipset \      (CTL)
    openstack-neutron-l3-agent openstack-neutron-dhcp-agent \
    openstack-neutron-metadata-agent

yum install openstack-neutron-linuxbridge-agent ebtables ipset                          (CPT)
```

Configure neutron.

Main configuration:

```shell
vim /etc/neutron/neutron.conf
```

```
[database]
connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron    (CTL)

[DEFAULT]
core_plugin = ml2                                             (CTL)
service_plugins = router                                      (CTL)
allow_overlapping_ips = true                                  (CTL)
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = true                     (CTL)
notify_nova_on_port_data_changes = true                       (CTL)
api_workers = 3                                               (CTL)

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = neutron
password = NEUTRON_PASS

[nova]
auth_url = http://controller:5000                             (CTL)
auth_type = password                                          (CTL)
project_domain_name = Default                                 (CTL)
user_domain_name = Default                                    (CTL)
region_name = RegionOne                                       (CTL)
project_name = service                                        (CTL)
username = nova                                               (CTL)
password = NOVA_PASS                                          (CTL)

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
```
Explanation:

- `[database]` section: configure the database entry point.
- `[DEFAULT]` section: enable the ml2 plug-in and the router plug-in, allow overlapping IP addresses, and configure the RabbitMQ message queue entry point.
- `[DEFAULT]` and `[keystone_authtoken]` sections: configure the Identity service entry point.
- `[DEFAULT]` and `[nova]` sections: configure Networking to notify Compute of network topology changes.
- `[oslo_concurrency]` section: configure the lock path.

Note:

- Replace `NEUTRON_DBPASS` with the password of the neutron database.
- Replace `RABBIT_PASS` with the password of the openstack account in RabbitMQ.
- Replace `NEUTRON_PASS` with the password of the neutron user.
- Replace `NOVA_PASS` with the password of the nova user.

Configure the ML2 plug-in:

```shell
vim /etc/neutron/plugins/ml2/ml2_conf.ini                     (CTL)
```

```
[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security

[ml2_type_flat]
flat_networks = provider

[ml2_type_vxlan]
vni_ranges = 1:1000

[securitygroup]
enable_ipset = true
```

Create the symbolic link /etc/neutron/plugin.ini:

```shell
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
```

Note:

- `[ml2]` section: enable flat, vlan and vxlan networks, enable the linuxbridge and l2population mechanisms, and enable the port security extension driver.
- `[ml2_type_flat]` section: configure the flat network as the provider virtual network.
- `[ml2_type_vxlan]` section: configure the VXLAN network identifier range.
- `[securitygroup]` section: allow ipset.

Remark: the concrete layer-2 configuration can be adapted to the user's needs; this document uses provider network + linuxbridge.

Configure the Linux bridge agent:

```shell
vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
```

```
[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME

[vxlan]
enable_vxlan = true
local_ip = OVERLAY_INTERFACE_IP_ADDRESS
l2_population = true

[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
```

Explanation:

- `[linux_bridge]` section: map the provider virtual network to the physical network interface.
- `[vxlan]` section: enable the vxlan overlay network, configure the IP address of the physical interface that handles the overlay network, and enable layer-2 population.
- `[securitygroup]` section: allow security groups and configure the linux bridge iptables firewall driver.

Note:

- Replace `PROVIDER_INTERFACE_NAME` with the physical network interface.
- Replace `OVERLAY_INTERFACE_IP_ADDRESS` with the management IP address of the control node.

Configure the Layer-3 agent:

```shell
vim /etc/neutron/l3_agent.ini                                 (CTL)
```

```
[DEFAULT]
interface_driver = linuxbridge
```

Explanation: in the `[DEFAULT]` section, set the interface driver to linuxbridge.

Configure the DHCP agent:

```shell
vim /etc/neutron/dhcp_agent.ini                               (CTL)
```

```
[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
```

Explanation: `[DEFAULT]` section: configure the linuxbridge interface driver and the Dnsmasq DHCP driver, and enable isolated metadata.

Configure the metadata agent:

```shell
vim /etc/neutron/metadata_agent.ini                           (CTL)
```

```
[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = METADATA_SECRET
```

Explanation: `[DEFAULT]` section: configure the metadata host and the shared secret.

Note: replace `METADATA_SECRET` with a suitable metadata proxy secret.

Configure nova:

```shell
vim /etc/nova/nova.conf
```

```
[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = Default
user_domain_name = Default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
service_metadata_proxy = true                                 (CTL)
metadata_proxy_shared_secret = METADATA_SECRET                (CTL)
```

Explanation: `[neutron]` section: configure the access parameters, enable the metadata proxy, and configure the secret.

Note:

- Replace `NEUTRON_PASS` with the password of the neutron user.
- Replace `METADATA_SECRET` with a suitable metadata proxy secret.

Synchronize the database:

```shell
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
    --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
```

Restart the Compute API service:

```shell
systemctl restart openstack-nova-api.service
```

Start the network services:

```shell
systemctl enable openstack-neutron-server.service \           (CTL)
    openstack-neutron-linuxbridge-agent.service openstack-neutron-dhcp-agent.service \
    openstack-neutron-metadata-agent.service openstack-neutron-l3-agent.service

systemctl restart openstack-nova-api.service openstack-neutron-server.service \    (CTL)
    openstack-neutron-linuxbridge-agent.service openstack-neutron-dhcp-agent.service \
    openstack-neutron-metadata-agent.service openstack-neutron-l3-agent.service

systemctl enable openstack-neutron-linuxbridge-agent.service                                    (CPT)
systemctl restart openstack-neutron-linuxbridge-agent.service openstack-nova-compute.service    (CPT)
```

Verification

List the agents to verify that the neutron agents started successfully:

```shell
openstack network agent list
```

### Cinder Installation

Create the database, service credentials and API endpoints.

Create the database:

```shell
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE cinder;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \
IDENTIFIED BY 'CINDER_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \
IDENTIFIED BY 'CINDER_DBPASS';
MariaDB [(none)]> exit
```

Note: replace `CINDER_DBPASS` with a password for the cinder database.

```shell
source ~/.admin-openrc
```

Create the cinder service credentials:

```shell
openstack user create --domain default --password-prompt cinder
openstack role add --project service --user cinder admin
openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
```

Create the Block Storage service API endpoints:

```shell
openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s
```

Install the packages:

```shell
yum install openstack-cinder-api openstack-cinder-scheduler                          (CTL)

yum install lvm2 device-mapper-persistent-data scsi-target-utils rpcbind nfs-utils \ (CPT)
    openstack-cinder-volume openstack-cinder-backup
```

Prepare the storage device; the following is only an example:

```shell
pvcreate /dev/vdb
vgcreate cinder-volumes /dev/vdb

vim /etc/lvm/lvm.conf
```

```
devices {
    ...
    filter = [ "a/vdb/", "r/.*/"]
}
```

Explanation: in the `devices` section, add a filter to accept the /dev/vdb device and reject all other devices.

Prepare NFS:

```shell
mkdir -p /root/cinder/backup

cat << EOF >> /etc/exports
/root/cinder/backup 192.168.1.0/24(rw,sync,no_root_squash,no_all_squash)
EOF
```

Configure cinder:

```shell
vim /etc/cinder/cinder.conf
```

```
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone
my_ip = 10.0.0.11
enabled_backends = lvm                                        (CPT)
backup_driver=cinder.backup.drivers.nfs.NFSBackupDriver       (CPT)
backup_share=HOST:PATH                                        (CPT)

[database]
connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = cinder
password = CINDER_PASS

[oslo_concurrency]
lock_path = /var/lib/cinder/tmp

[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver     (CPT)
volume_group = cinder-volumes                                 (CPT)
iscsi_protocol = iscsi                                        (CPT)
iscsi_helper = tgtadm                                         (CPT)
```

Explanation:

- `[database]` section: configure the database entry point.
- `[DEFAULT]` section: configure the RabbitMQ message queue entry point and `my_ip`.
- `[DEFAULT]` and `[keystone_authtoken]` sections: configure the Identity service entry point.
- `[oslo_concurrency]` section: configure the lock path.

Note:

- Replace `CINDER_DBPASS` with the password of the cinder database.
- Replace `RABBIT_PASS` with the password of the openstack account in RabbitMQ.
- Set `my_ip` to the management IP address of the control node.
- Replace `CINDER_PASS` with the password of the cinder user.
- Replace `HOST:PATH` with the NFS host IP and shared path.

Synchronize the database:

```shell
su -s /bin/sh -c "cinder-manage db sync" cinder               (CTL)
```

Configure nova:

```shell
vim /etc/nova/nova.conf                                       (CTL)
```

```
[cinder]
os_region_name = RegionOne
```

Restart the Compute API service:

```shell
systemctl restart openstack-nova-api.service
```
Start the cinder services:

```shell
systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service    (CTL)
systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service     (CTL)

systemctl enable rpcbind.service nfs-server.service tgtd.service iscsid.service \   (CPT)
    openstack-cinder-volume.service \
    openstack-cinder-backup.service
systemctl start rpcbind.service nfs-server.service tgtd.service iscsid.service \    (CPT)
    openstack-cinder-volume.service \
    openstack-cinder-backup.service
```

Note: when cinder attaches volumes via tgtadm, modify /etc/tgt/tgtd.conf as follows so that tgtd can discover the iscsi targets of cinder-volume:

```
include /var/lib/cinder/volumes/*
```

Verification:

```shell
source ~/.admin-openrc
openstack volume service list
```

### Horizon Installation

Install the packages:

```shell
yum install openstack-dashboard
```

Modify the configuration file variables:

```shell
vim /etc/openstack-dashboard/local_settings
```

```
ALLOWED_HOSTS = ['*', ]
OPENSTACK_HOST = "controller"
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
```

Restart the httpd service:

```shell
systemctl restart httpd
```

Verification: open a browser, enter the URL http://HOSTIP/dashboard/ and log in to horizon.

Note: replace `HOSTIP` with the management-plane IP address of the control node.

### Tempest Installation

Tempest is the integration test service of OpenStack. It is recommended if users need to fully and automatically test the functionality of the installed OpenStack environment; otherwise it can be skipped.

Install Tempest:

```shell
yum install openstack-tempest
```

Initialize a workspace:

```shell
tempest init mytest
```

Modify the configuration file:

```shell
cd mytest
vi etc/tempest.conf
```

`tempest.conf` must be filled in with the information of the current OpenStack environment; for details, refer to the official sample configuration.

Run the tests:

```shell
tempest run
```
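For orientation, a minimal `etc/tempest.conf` sketch is shown below. It is not part of the original guide; every value is a placeholder for the example environment built in this document and must be replaced with real data (the image and network UUIDs can be taken from `openstack image list` and `openstack network list`):

```ini
[auth]
admin_username = admin
admin_password = ADMIN_PASS
admin_project_name = admin
admin_domain_name = Default

[identity]
uri_v3 = http://controller:5000/v3
auth_version = v3

[compute]
image_ref = IMAGE_UUID
flavor_ref = FLAVOR_ID

[network]
public_network_id = EXTERNAL_NETWORK_UUID
```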
### Ironic Installation

Ironic is the bare metal service of OpenStack. It is recommended if users need bare metal provisioning; otherwise it can be skipped.

Set up the database

The Bare Metal service stores information in a database. Create an `ironic` database that the `ironic` user can access, replacing `IRONIC_DBPASSWORD` with a suitable password:

```shell
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE ironic CHARACTER SET utf8;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'localhost' \
IDENTIFIED BY 'IRONIC_DBPASSWORD';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'%' \
IDENTIFIED BY 'IRONIC_DBPASSWORD';
```

Create the service users and credentials

1. Create the Bare Metal service users:

```shell
openstack user create --password IRONIC_PASSWORD \
    --email ironic@example.com ironic
openstack role add --project service --user ironic admin
openstack service create --name ironic --description "Ironic baremetal provisioning service" baremetal

openstack service create --name ironic-inspector --description "Ironic inspector baremetal provisioning service" baremetal-introspection
openstack user create --password IRONIC_INSPECTOR_PASSWORD --email ironic_inspector@example.com ironic_inspector
openstack role add --project service --user ironic_inspector admin
```

2. Create the Bare Metal service endpoints:

```shell
openstack endpoint create --region RegionOne baremetal admin http://$IRONIC_NODE:6385
openstack endpoint create --region RegionOne baremetal public http://$IRONIC_NODE:6385
openstack endpoint create --region RegionOne baremetal internal http://$IRONIC_NODE:6385

openstack endpoint create --region RegionOne baremetal-introspection internal http://172.20.19.13:5050/v1
openstack endpoint create --region RegionOne baremetal-introspection public http://172.20.19.13:5050/v1
openstack endpoint create --region RegionOne baremetal-introspection admin http://172.20.19.13:5050/v1
```

Configure the ironic-api service

Configuration file path: /etc/ironic/ironic.conf

1. Configure the location of the database via the `connection` option, as shown below, replacing `IRONIC_DBPASSWORD` with the password of the `ironic` user and `DB_IP` with the IP address of the DB server:

```
[database]

# The SQLAlchemy connection string used to connect to the
# database (string value)
connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic
```

2. Configure the ironic-api service to use the RabbitMQ message broker via the following option, replacing `RPC_*` with the detailed address and credentials of RabbitMQ:

```
[DEFAULT]

# A URL representing the messaging driver to use and its full
# configuration. (string value)
transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
```

Users may also replace RabbitMQ with the json-rpc mechanism.

3. Configure the ironic-api service to use the credentials of the Identity service, replacing `PUBLIC_IDENTITY_IP` with the public IP of the Identity server, `PRIVATE_IDENTITY_IP` with the private IP of the Identity server, and `IRONIC_PASSWORD` with the password of the `ironic` user in the Identity service:

```
[DEFAULT]

# Authentication strategy used by ironic-api: one of
# "keystone" or "noauth". "noauth" should not be used in a
# production environment because all authentication will be
# disabled. (string value)
auth_strategy=keystone

[keystone_authtoken]

# Authentication type to load (string value)
auth_type=password

# Complete public Identity API endpoint (string value)
www_authenticate_uri=http://PUBLIC_IDENTITY_IP:5000

# Complete admin Identity API endpoint. (string value)
auth_url=http://PRIVATE_IDENTITY_IP:5000

# Service username. (string value)
username=ironic

# Service account password. (string value)
password=IRONIC_PASSWORD

# Service tenant name. (string value)
project_name=service

# Domain name containing project (string value)
project_domain_name=Default

# User's domain name (string value)
user_domain_name=Default
```

4. Create the Bare Metal service database tables:

```shell
ironic-dbsync --config-file /etc/ironic/ironic.conf create_schema
```

5. Restart the ironic-api service:

```shell
sudo systemctl restart openstack-ironic-api
```

Configure the ironic-conductor service

1. Replace `HOST_IP` with the IP of the conductor host:

```
[DEFAULT]

# IP address of this host. If unset, will determine the IP
# programmatically. If unable to do so, will use "127.0.0.1".
# (string value)
my_ip=HOST_IP
```

2. Configure the location of the database. ironic-conductor should use the same configuration as ironic-api. Replace `IRONIC_DBPASSWORD` with the password of the `ironic` user and `DB_IP` with the IP address of the DB server:

```
[database]

# The SQLAlchemy connection string to use to connect to the
# database. (string value)
connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic
```

3. Configure the service to use the RabbitMQ message broker via the following option. ironic-conductor should use the same configuration as ironic-api. Replace `RPC_*` with the detailed address and credentials of RabbitMQ:

```
[DEFAULT]

# A URL representing the messaging driver to use and its full
# configuration. (string value)
transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
```

Users may also replace RabbitMQ with the json-rpc mechanism.

4. Configure credentials for accessing other OpenStack services.

To communicate with other OpenStack services, the Bare Metal service needs to authenticate against the OpenStack Identity service with service users when it calls them. The credentials of these users must be configured in each configuration section related to the corresponding service:

- `[neutron]` - access to the OpenStack Networking service
- `[glance]` - access to the OpenStack Image service
- `[swift]` - access to the OpenStack Object Storage service
- `[cinder]` - access to the OpenStack Block Storage service
- `[inspector]` - access to the OpenStack bare metal introspection service
- `[service_catalog]` - a special entry that stores the credentials the Bare Metal service uses to discover its own API URL endpoint as registered in the OpenStack Identity service catalog

For simplicity, the same service user can be used for all services. For backward compatibility this should be the same user that is configured in `[keystone_authtoken]` of the ironic-api service. This is not mandatory, however; a different service user can be created and configured for each service.

In the following example, the authentication information for accessing the OpenStack Networking service is configured such that:

- the Networking service is deployed in the Identity service region named RegionOne, and only the public endpoint interface is registered in the service catalog;
- a specific CA SSL certificate is used for HTTPS connections when making requests;
- the same service user as for the ironic-api service is used;
- the dynamic password authentication plugin discovers a suitable Identity service API version based on the other options.

```
[neutron]

# Authentication type to load (string value)
auth_type = password

# Authentication URL (string value)
auth_url=https://IDENTITY_IP:5000/

# Username (string value)
username=ironic

# User's password (string value)
password=IRONIC_PASSWORD

# Project name to scope to (string value)
project_name=service

# Domain ID containing project (string value)
project_domain_id=default

# User's domain id (string value)
user_domain_id=default

# PEM encoded Certificate Authority to use when verifying
# HTTPs connections. (string value)
cafile=/opt/stack/data/ca-bundle.pem

# The default region_name for endpoint URL discovery. (string
# value)
region_name = RegionOne

# List of interfaces, in order of preference, for endpoint
# URL. (list value)
valid_interfaces=public
```

By default, to communicate with other services, the Bare Metal service tries to discover a suitable endpoint for the service via the service catalog of the Identity service. If you want to use a different endpoint for a particular service, specify it via the `endpoint_override` option in the Bare Metal service configuration file:

```
[neutron]
...
endpoint_override = <NEUTRON_API_URL>
```

5. Configure the allowed drivers and hardware types.

Set the hardware types allowed by the ironic-conductor service via `enabled_hardware_types`:

```
[DEFAULT]
enabled_hardware_types = ipmi
```

Configure the hardware interfaces:

```
enabled_boot_interfaces = pxe
enabled_deploy_interfaces = direct,iscsi
enabled_inspect_interfaces = inspector
enabled_management_interfaces = ipmitool
enabled_power_interfaces = ipmitool
```

Configure the interface defaults:

```
[DEFAULT]
default_deploy_interface = direct
default_network_interface = neutron
```

If any driver that uses Direct deploy is enabled, the Swift backend of the Image service must be installed and configured. The Ceph Object Gateway (RADOS Gateway) is also supported as a backend for the Image service.

6. Restart the ironic-conductor service:

```shell
sudo systemctl restart openstack-ironic-conductor
```

Configure the ironic-inspector service

Configuration file path: /etc/ironic-inspector/inspector.conf

1. Create the database:

```shell
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE ironic_inspector CHARACTER SET utf8;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic_inspector.* TO 'ironic_inspector'@'localhost' \
IDENTIFIED BY 'IRONIC_INSPECTOR_DBPASSWORD';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic_inspector.* TO 'ironic_inspector'@'%' \
IDENTIFIED BY 'IRONIC_INSPECTOR_DBPASSWORD';
```

2. Configure the location of the database via the `connection` option, as shown below, replacing `IRONIC_INSPECTOR_DBPASSWORD` with the password of the `ironic_inspector` user and `DB_IP` with the IP address of the DB server:

```
[database]
backend = sqlalchemy
connection = mysql+pymysql://ironic_inspector:IRONIC_INSPECTOR_DBPASSWORD@DB_IP/ironic_inspector
```

3. Configure the message queue communication address:

```
[DEFAULT]
transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
```

4. Set up keystone authentication:

```
[DEFAULT]
auth_strategy = keystone

[ironic]
api_endpoint = http://IRONIC_API_HOST_ADDRESS:6385
auth_type = password
auth_url = http://PUBLIC_IDENTITY_IP:5000
auth_strategy = keystone
ironic_url = http://IRONIC_API_HOST_ADDRESS:6385
os_region = RegionOne
project_name = service
project_domain_name = Default
user_domain_name = Default
username = IRONIC_SERVICE_USER_NAME
password = IRONIC_SERVICE_USER_PASSWORD
```

5. Configure the ironic-inspector dnsmasq service:

```
# configuration file: /etc/ironic-inspector/dnsmasq.conf
port=0
interface=enp3s0                        # replace with the actual listening network interface
dhcp-range=172.20.19.100,172.20.19.110  # replace with the actual DHCP address range
bind-interfaces
enable-tftp

dhcp-match=set:efi,option:client-arch,7
dhcp-match=set:efi,option:client-arch,9
dhcp-match=aarch64, option:client-arch,11
dhcp-boot=tag:aarch64,grubaa64.efi
dhcp-boot=tag:!aarch64,tag:efi,grubx64.efi
dhcp-boot=tag:!aarch64,tag:!efi,pxelinux.0

tftp-root=/tftpboot                     # replace with the actual tftpboot directory
log-facility=/var/log/dnsmasq.log
```

6. Start the services:

```shell
systemctl enable --now openstack-ironic-inspector.service
systemctl enable --now openstack-ironic-inspector-dnsmasq.service
```
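At this point it can help to confirm that ironic-api and ironic-conductor can reach each other. A minimal check, assuming the bare metal CLI plugin is available (the client package name, for example python2-ironicclient, is an assumption and may differ in this repository):

```shell
source ~/.admin-openrc
# the enabled hardware types (e.g. ipmi) should be listed once the conductor is up
openstack baremetal driver list
# a fresh deployment should return an empty node list without errors
openstack baremetal node list
```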
Q\u7248\u7684ramdisk\u955c\u50cf\u652f\u6301\u901a\u8fc7ironic-python-agent\u670d\u52a1\u6216disk-image-builder\u5de5\u5177\u5236\u4f5c\uff0c\u4e5f\u53ef\u4ee5\u4f7f\u7528\u793e\u533a\u6700\u65b0\u7684ironic-python-agent-builder\u3002\u7528\u6237\u4e5f\u53ef\u4ee5\u81ea\u884c\u9009\u62e9\u5176\u4ed6\u5de5\u5177\u5236\u4f5c\u3002 \u82e5\u4f7f\u7528Q\u7248\u539f\u751f\u5de5\u5177\uff0c\u5219\u9700\u8981\u5b89\u88c5\u5bf9\u5e94\u7684\u8f6f\u4ef6\u5305\u3002 yum install openstack-ironic-python-agent \u6216\u8005 yum install diskimage-builder \u5177\u4f53\u7684\u4f7f\u7528\u65b9\u6cd5\u53ef\u4ee5\u53c2\u8003 \u5b98\u65b9\u6587\u6863 \u8fd9\u91cc\u4ecb\u7ecd\u4e0b\u4f7f\u7528ironic-python-agent-builder\u6784\u5efaironic\u4f7f\u7528\u7684deploy\u955c\u50cf\u7684\u5b8c\u6574\u8fc7\u7a0b\u3002 \u5b89\u88c5 ironic-python-agent-builder 1. \u5b89\u88c5\u5de5\u5177\uff1a ```shell pip install ironic-python-agent-builder ``` 2. \u4fee\u6539\u4ee5\u4e0b\u6587\u4ef6\u4e2d\u7684python\u89e3\u91ca\u5668\uff1a ```shell /usr/bin/yum /usr/libexec/urlgrabber-ext-down ``` 3. \u5b89\u88c5\u5176\u5b83\u5fc5\u987b\u7684\u5de5\u5177\uff1a ```shell yum install git ``` \u7531\u4e8e`DIB`\u4f9d\u8d56`semanage`\u547d\u4ee4\uff0c\u6240\u4ee5\u5728\u5236\u4f5c\u955c\u50cf\u4e4b\u524d\u786e\u5b9a\u8be5\u547d\u4ee4\u662f\u5426\u53ef\u7528\uff1a`semanage --help`\uff0c\u5982\u679c\u63d0\u793a\u65e0\u6b64\u547d\u4ee4\uff0c\u5b89\u88c5\u5373\u53ef\uff1a ```shell # \u5148\u67e5\u8be2\u9700\u8981\u5b89\u88c5\u54ea\u4e2a\u5305 [root@localhost ~]# yum provides /usr/sbin/semanage \u5df2\u52a0\u8f7d\u63d2\u4ef6\uff1afastestmirror Loading mirror speeds from cached hostfile * base: mirror.vcu.edu * extras: mirror.vcu.edu * updates: mirror.math.princeton.edu policycoreutils-python-2.5-34.el7.aarch64 : SELinux policy core python utilities \u6e90 \uff1abase \u5339\u914d\u6765\u6e90\uff1a \u6587\u4ef6\u540d \uff1a/usr/sbin/semanage # \u5b89\u88c5 [root@localhost ~]# yum install policycoreutils-python ``` \u5236\u4f5c\u955c\u50cf \u5982\u679c\u662f`arm`\u67b6\u6784\uff0c\u9700\u8981\u6dfb\u52a0\uff1a ```shell export ARCH=aarch64 ``` \u57fa\u672c\u7528\u6cd5\uff1a ```shell usage: ironic-python-agent-builder [-h] [-r RELEASE] [-o OUTPUT] [-e ELEMENT] [-b BRANCH] [-v] [--extra-args EXTRA_ARGS] distribution positional arguments: distribution Distribution to use optional arguments: -h, --help show this help message and exit -r RELEASE, --release RELEASE Distribution release to use -o OUTPUT, --output OUTPUT Output base file name -e ELEMENT, --element ELEMENT Additional DIB element to use -b BRANCH, --branch BRANCH If set, override the branch that is used for ironic- python-agent and requirements -v, --verbose Enable verbose logging in diskimage-builder --extra-args EXTRA_ARGS Extra arguments to pass to diskimage-builder ``` \u4e3e\u4f8b\u8bf4\u660e\uff1a ```shell ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky ``` \u5141\u8bb8ssh\u767b\u5f55 \u521d\u59cb\u5316\u73af\u5883\u53d8\u91cf\uff0c\u7136\u540e\u5236\u4f5c\u955c\u50cf\uff1a ```shell export DIB_DEV_USER_USERNAME=ipa \\ export DIB_DEV_USER_PWDLESS_SUDO=yes \\ export DIB_DEV_USER_PASSWORD='123' ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky -e selinux-permissive -e devuser ``` \u6307\u5b9a\u4ee3\u7801\u4ed3\u5e93 \u521d\u59cb\u5316\u5bf9\u5e94\u7684\u73af\u5883\u53d8\u91cf\uff0c\u7136\u540e\u5236\u4f5c\u955c\u50cf\uff1a ```shell # \u6307\u5b9a\u4ed3\u5e93\u5730\u5740\u4ee5\u53ca\u7248\u672c 
Specify a code repository

Initialize the corresponding environment variables, then build the image:

```shell
# Specify the repository location and version
DIB_REPOLOCATION_ironic_python_agent=git@172.20.2.149:liuzz/ironic-python-agent.git
DIB_REPOREF_ironic_python_agent=origin/develop

# Or clone the code directly from gerrit
DIB_REPOLOCATION_ironic_python_agent=https://review.opendev.org/openstack/ironic-python-agent
DIB_REPOREF_ironic_python_agent=refs/changes/43/701043/1
```

Reference: [source-repositories](https://docs.openstack.org/diskimage-builder/latest/elements/source-repositories/README.html).

Specifying the repository location and version has been verified to work.

### Kolla Installation

Kolla provides production-ready containerized deployment for OpenStack services. openEuler 20.03 LTS SP2 introduces the Kolla and Kolla-ansible services.

Installing Kolla is very simple; just install the corresponding RPM packages:

yum install openstack-kolla openstack-kolla-ansible

After installation, the kolla-ansible, kolla-build, kolla-genpwd, kolla-mergepwd and other commands are available.
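As a rough sketch of how those commands are typically chained for an all-in-one deployment (the inventory path and the globals.yml keys named in the comments are assumptions about a default kolla-ansible package layout, not settings prescribed by this guide):

```shell
# Generate random service passwords into /etc/kolla/passwords.yml
kolla-genpwd

# Describe the deployment in /etc/kolla/globals.yml
# (e.g. openstack_release, network_interface, kolla_internal_vip_address)

# Prepare the host, run sanity checks, then deploy
kolla-ansible -i /usr/share/kolla-ansible/ansible/inventory/all-in-one bootstrap-servers
kolla-ansible -i /usr/share/kolla-ansible/ansible/inventory/all-in-one prechecks
kolla-ansible -i /usr/share/kolla-ansible/ansible/inventory/all-in-one deploy
```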
### Trove Installation

Trove is the OpenStack Database service. It is recommended if users need the database service provided by OpenStack; otherwise it does not need to be installed.

Set up the database

The Database service stores information in a database. Create a trove database that the trove user can access, replacing TROVE_DBPASSWORD with a suitable password:

mysql -u root -p
MariaDB [(none)]> CREATE DATABASE trove CHARACTER SET utf8;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'localhost' \
IDENTIFIED BY 'TROVE_DBPASSWORD';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'%' \
IDENTIFIED BY 'TROVE_DBPASSWORD';

Create the service user credentials

1. Create the Trove service user:

openstack user create --password TROVE_PASSWORD \
  --email trove@example.com trove
openstack role add --project service --user trove admin
openstack service create --name trove --description "Database service" database

Explanation: replace TROVE_PASSWORD with the password of the trove user.

2. Create the Database service endpoints:

openstack endpoint create --region RegionOne database public http://$TROVE_NODE:8779/v1.0/%\(tenant_id\)s
openstack endpoint create --region RegionOne database internal http://$TROVE_NODE:8779/v1.0/%\(tenant_id\)s
openstack endpoint create --region RegionOne database admin http://$TROVE_NODE:8779/v1.0/%\(tenant_id\)s

Explanation: replace $TROVE_NODE with the node where the Trove API service is deployed.

Install and configure the Trove components

1. Install the Trove packages:

```shell script
yum install openstack-trove python-troveclient
```

2. Configure `trove.conf`:

```shell script
vim /etc/trove/trove.conf

[DEFAULT]
bind_host=TROVE_NODE_IP
log_dir = /var/log/trove
auth_strategy = keystone
# Config option for showing the IP address that nova doles out
add_addresses = True
network_label_regex = ^NETWORK_LABEL$
api_paste_config = /etc/trove/api-paste.ini
trove_auth_url = http://controller:35357/v3/
nova_compute_url = http://controller:8774/v2
cinder_url = http://controller:8776/v1
nova_proxy_admin_user = admin
nova_proxy_admin_pass = ADMIN_PASS
nova_proxy_admin_tenant_name = service
taskmanager_manager = trove.taskmanager.manager.Manager
use_nova_server_config_drive = True
# Set these if using Neutron Networking
network_driver=trove.network.neutron.NeutronDriver
network_label_regex=.*
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/

[database]
connection = mysql+pymysql://trove:TROVE_DBPASS@controller/trove

[keystone_authtoken]
www_authenticate_uri = http://controller:5000/v3/
auth_url = http://controller:35357/v3/
#auth_uri = http://controller/identity
#auth_url = http://controller/identity_admin
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = trove
password = TROVE_PASS
```

Explanation:
In the [DEFAULT] group, set bind_host to the IP of the node where Trove is deployed.
nova_compute_url and cinder_url are the endpoints created for Nova and Cinder in Keystone.
nova_proxy_XXX is the information of a user that can access the Nova service; the example above uses the admin user.
transport_url is the RabbitMQ connection information; replace RABBIT_PASS with the RabbitMQ password.
In the [database] group, connection is the database created for Trove in MySQL earlier.
In the Trove user information, replace TROVE_PASS with the actual password of the trove user.

3. Configure `trove-taskmanager.conf`:

```shell script
vim /etc/trove/trove-taskmanager.conf

[DEFAULT]
log_dir = /var/log/trove
trove_auth_url = http://controller/identity/v2.0
nova_compute_url = http://controller:8774/v2
cinder_url = http://controller:8776/v1
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/

[database]
connection = mysql+pymysql://trove:TROVE_DBPASS@controller/trove
```

Explanation: follow the `trove.conf` configuration.
4. Configure `trove-conductor.conf`:

```shell script
vim /etc/trove/trove-conductor.conf

[DEFAULT]
log_dir = /var/log/trove
trove_auth_url = http://controller/identity/v2.0
nova_compute_url = http://controller:8774/v2
cinder_url = http://controller:8776/v1
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/

[database]
connection = mysql+pymysql://trove:trove@controller/trove
```

Explanation: follow the `trove.conf` configuration.

5. Configure `trove-guestagent.conf`:

```shell script
vim /etc/trove/trove-guestagent.conf

[DEFAULT]
rabbit_host = controller
rabbit_password = RABBIT_PASS
nova_proxy_admin_user = admin
nova_proxy_admin_pass = ADMIN_PASS
nova_proxy_admin_tenant_name = service
trove_auth_url = http://controller/identity_admin/v2.0
```

Explanation: guestagent is a standalone Trove component that must be built into the virtual machine image Trove creates through Nova. After a database instance is created, the guestagent process starts and reports heartbeats to Trove through the message queue (RabbitMQ), so the RabbitMQ user and password must be configured here.

6. Populate the Trove database tables:

```shell script
su -s /bin/sh -c "trove-manage db_sync" trove
```

Finish the installation and configuration

1. Configure the Trove services to start on boot:

```shell script
systemctl enable openstack-trove-api.service \
  openstack-trove-taskmanager.service \
  openstack-trove-conductor.service
```

2. Start the services:

```shell script
systemctl start openstack-trove-api.service \
  openstack-trove-taskmanager.service \
  openstack-trove-conductor.service
```
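A small, hedged verification sketch once the services are up; `openstack database instance list` assumes the python-troveclient plugin installed above provides the `database` commands, otherwise the classic `trove list` can be used instead:

```shell
# The three Trove services should be active
systemctl status openstack-trove-api.service \
  openstack-trove-taskmanager.service \
  openstack-trove-conductor.service

# With admin credentials loaded, the Database API should answer
# (an empty list on a fresh installation)
source admin-openrc
openstack database instance list
```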
## OpenStack-Rocky Deployment Guide

Contents: OpenStack introduction · Prepare the environment (OpenStack yum repository configuration, environment configuration) · Install the SQL database · Install RabbitMQ · Install Memcached · Install OpenStack (Keystone, Glance, Nova, Neutron, Cinder, Horizon, Tempest, Ironic, Kolla, Trove)

### OpenStack Introduction

OpenStack is both a community and a project. It provides an operating platform and a tool set for deploying clouds, offering organizations scalable and flexible cloud computing.

As an open-source cloud computing management platform, OpenStack combines several major components such as nova, cinder, neutron, glance, keystone and horizon to do the actual work. OpenStack supports almost every type of cloud environment. The project aims to provide a cloud computing management platform that is simple to deploy, massively scalable, feature-rich and standardized. OpenStack delivers an Infrastructure-as-a-Service (IaaS) solution through a set of complementary services, each of which offers an API for integration.

The officially certified third-party oepkg yum repository for openEuler 20.03-LTS-SP2 already supports OpenStack Rocky. After configuring the oepkg yum repository, users can deploy OpenStack by following this document.

### Prepare the Environment

#### OpenStack yum repository configuration

Configure the officially certified third-party repository oepkg for 20.03-LTS-SP2, taking x86_64 as an example:

$ cat << EOF >> /etc/yum.repos.d/OpenStack_Rocky.repo
[openstack_rocky]
name=OpenStack_Rocky
baseurl=https://repo.oepkgs.net/openEuler/rpm/openEuler-20.03-LTS-SP2/budding-openeuler/openstack/rocky/x86_64/
gpgcheck=0
enabled=1
EOF
$ yum clean all && yum makecache

#### Environment configuration

Add the controller information to /etc/hosts. For example, if the node IP is 10.0.0.11, add:

10.0.0.11 controller

### Install the SQL database

1. Run the following command to install the packages.

$ yum install mariadb mariadb-server python2-PyMySQL

2. Create and edit the /etc/my.cnf.d/openstack.cnf file. Copy the following content into the file, setting bind-address to the management IP address of the controller node.

[mysqld]
bind-address = 10.0.0.11
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8

3. Start the database service and configure it to start on boot:

$ systemctl enable mariadb.service
$ systemctl start mariadb.service

### Install RabbitMQ

1. Run the following command to install the packages.

$ yum install rabbitmq-server

2. Start the RabbitMQ service and configure it to start on boot.

$ systemctl enable rabbitmq-server.service
$ systemctl start rabbitmq-server.service
3. Add the openstack user.

$ rabbitmqctl add_user openstack RABBIT_PASS

Replace RABBIT_PASS to set the password for the openstack user.

4. Set the permissions of the openstack user to allow configure, write and read access:

$ rabbitmqctl set_permissions openstack ".*" ".*" ".*"

### Install Memcached

1. Run the following command to install the dependency packages.

$ yum install memcached python2-memcached

2. Edit the /etc/sysconfig/memcached file and add the following content:

OPTIONS="-l 127.0.0.1,::1,controller"

Change OPTIONS to the management IP address of the controller node in the actual environment.

3. Run the following commands to start the Memcached service and configure it to start on boot.

$ systemctl enable memcached.service
$ systemctl start memcached.service

### Install OpenStack

#### Keystone Installation

1. Access the database as the root user, create the keystone database and grant privileges.

$ mysql -u root -p
MariaDB [(none)]> CREATE DATABASE keystone;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
IDENTIFIED BY 'KEYSTONE_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
IDENTIFIED BY 'KEYSTONE_DBPASS';
MariaDB [(none)]> exit

Replace KEYSTONE_DBPASS to set the password for the keystone database.

2. Run the following command to install the packages.

$ yum install openstack-keystone httpd python2-mod_wsgi

3. Configure keystone by editing the /etc/keystone/keystone.conf file. In the [database] section, configure the database entry; in the [token] section, configure the token provider.

[database]
connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone

[token]
provider = fernet

Replace KEYSTONE_DBPASS with the password of the keystone database.

4. Run the following command to synchronize the database.

su -s /bin/sh -c "keystone-manage db_sync" keystone

5. Run the following commands to initialize the Fernet key repositories.

$ keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
$ keystone-manage credential_setup --keystone-user keystone --keystone-group keystone

6. Run the following command to bootstrap the Identity service.

$ keystone-manage bootstrap --bootstrap-password ADMIN_PASS \
  --bootstrap-admin-url http://controller:5000/v3/ \
  --bootstrap-internal-url http://controller:5000/v3/ \
  --bootstrap-public-url http://controller:5000/v3/ \
  --bootstrap-region-id RegionOne

Replace ADMIN_PASS to set the password for the admin user.

7. Edit the /etc/httpd/conf/httpd.conf file to configure the Apache HTTP server.

$ vim /etc/httpd/conf/httpd.conf

Configure the ServerName option to reference the controller node, as shown below. Create the ServerName entry if it does not exist.

ServerName controller

8. Run the following command to create a link to the /usr/share/keystone/wsgi-keystone.conf file.

$ ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
9. To finish the installation, run the following commands to start the Apache HTTP service.

$ systemctl enable httpd.service
$ systemctl start httpd.service

Install the OpenStack client:

$ yum install python2-openstackclient

Create the OpenStack client environment script

Create the environment variable script for the admin user:

# vim admin-openrc
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2

Replace ADMIN_PASS with the password of the admin user, the same password set in the keystone-manage bootstrap command above.

Run the script to load the environment variables:

$ source admin-openrc

Run the following commands to create a domain, projects, users and roles.

Create the domain 'example':

$ openstack domain create --description "An Example Domain" example

Note: the domain 'default' was already created by keystone-manage bootstrap.

Create the project 'service':

$ openstack project create --domain default --description "Service Project" service

Create the (non-admin) project 'myproject', the user 'myuser' and the role 'myrole', and add the role 'myrole' to 'myproject' and 'myuser':

$ openstack project create --domain default --description "Demo Project" myproject
$ openstack user create --domain default --password-prompt myuser
$ openstack role create myrole
$ openstack role add --project myproject --user myuser myrole

Verification

Unset the temporary environment variables OS_AUTH_URL and OS_PASSWORD:

$ unset OS_AUTH_URL OS_PASSWORD

Request a token for the admin user:

$ openstack --os-auth-url http://controller:5000/v3 \
  --os-project-domain-name Default --os-user-domain-name Default \
  --os-project-name admin --os-username admin token issue

Request a token for the myuser user:

$ openstack --os-auth-url http://controller:5000/v3 \
  --os-project-domain-name Default --os-user-domain-name Default \
  --os-project-name myproject --os-username myuser token issue
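For day-to-day use it can help to keep a client environment script for the non-admin user as well, mirroring admin-openrc above. A hedged sketch; the file name myuser-openrc and the MYUSER_PASS placeholder are assumptions following the pattern of this guide, not names it defines:

```shell
# vim myuser-openrc
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=myproject
export OS_USERNAME=myuser
export OS_PASSWORD=MYUSER_PASS   # the password entered at --password-prompt above
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
```

Sourcing this file and running `openstack token issue` should then return a token scoped to myproject.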
#### Glance Installation

Create the database, service credentials and API endpoints

1. Create the database: access the database as the root user, create the glance database and grant privileges.

$ mysql -u root -p
MariaDB [(none)]> CREATE DATABASE glance;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
IDENTIFIED BY 'GLANCE_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
IDENTIFIED BY 'GLANCE_DBPASS';
MariaDB [(none)]> exit

Replace GLANCE_DBPASS to set the password for the glance database.

$ source admin-openrc

2. Run the following commands to create the glance service credentials, create the glance user and add the 'admin' role to the 'glance' user.

$ openstack user create --domain default --password-prompt glance
$ openstack role add --project service --user glance admin
$ openstack service create --name glance --description "OpenStack Image" image

3. Create the Image service API endpoints:

$ openstack endpoint create --region RegionOne image public http://controller:9292
$ openstack endpoint create --region RegionOne image internal http://controller:9292
$ openstack endpoint create --region RegionOne image admin http://controller:9292

Install and configure

1. Install the packages:

$ yum install openstack-glance

2. Configure glance by editing the /etc/glance/glance-api.conf file:
In the [database] section, configure the database entry.
In the [keystone_authtoken] and [paste_deploy] sections, configure the Identity service entry.
In the [glance_store] section, configure the local file system store and the location of image files.

[database]
# ...
connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance

[keystone_authtoken]
# ...
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = GLANCE_PASS

[paste_deploy]
# ...
flavor = keystone

[glance_store]
# ...
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/

3. Edit the /etc/glance/glance-registry.conf file:
In the [database] section, configure the database entry.
In the [keystone_authtoken] and [paste_deploy] sections, configure the Identity service entry.

```ini
[database]
# ...
connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance

[keystone_authtoken]
# ...
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = GLANCE_PASS

[paste_deploy]
# ...
flavor = keystone
```
Replace GLANCE_DBPASS with the password of the glance database and GLANCE_PASS with the password of the glance user.

4. Synchronize the database:

$ su -s /bin/sh -c "glance-manage db_sync" glance

5. Start the Image service:

$ systemctl enable openstack-glance-api.service openstack-glance-registry.service
$ systemctl start openstack-glance-api.service openstack-glance-registry.service

Verification

1. Download an image.

```shell
$ source admin-openrc

# Note: if your environment runs on the Kunpeng architecture, download the arm64 version of the image.
$ wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
```

2. Upload the image to the Image service:

$ glance image-create --name "cirros" --file cirros-0.4.0-x86_64-disk.img --disk-format qcow2 --container-format bare --visibility=public

3. Confirm that the image was uploaded and verify its attributes:

$ glance image-list

#### Nova Installation

Create the database, service credentials and API endpoints

1. Create the databases: access the database as the root user, create the nova, nova_api, nova_cell0 and placement databases and grant privileges.

$ mysql -u root -p
MariaDB [(none)]> CREATE DATABASE nova_api;
MariaDB [(none)]> CREATE DATABASE nova;
MariaDB [(none)]> CREATE DATABASE nova_cell0;
MariaDB [(none)]> CREATE DATABASE placement;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' \
IDENTIFIED BY 'PLACEMENT_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' \
IDENTIFIED BY 'PLACEMENT_DBPASS';
MariaDB [(none)]> exit

Replace NOVA_DBPASS and PLACEMENT_DBPASS to set the passwords for the nova and placement databases.

2. Run the following commands to create the nova service credentials, create the nova user and add the 'admin' role to the 'nova' user.

$ . admin-openrc
$ openstack user create --domain default --password-prompt nova
$ openstack role add --project service --user nova admin
$ openstack service create --name nova --description "OpenStack Compute" compute

3. Create the Compute service API endpoints:

$ openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1
$ openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1
$ openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1

4. Create the placement user and add the 'admin' role to the 'placement' user:

$ openstack user create --domain default --password-prompt placement
$ openstack role add --project service --user placement admin

5. Create the placement service credentials and API endpoints:

$ openstack service create --name placement --description "Placement API" placement
$ openstack endpoint create --region RegionOne placement public http://controller:8778
$ openstack endpoint create --region RegionOne placement internal http://controller:8778
$ openstack endpoint create --region RegionOne placement admin http://controller:8778

Install and configure

1. Install the packages:

$ yum install openstack-nova-api openstack-nova-conductor \
  openstack-nova-novncproxy openstack-nova-scheduler openstack-nova-compute \
  openstack-nova-placement-api openstack-nova-console

2. Configure nova by editing the /etc/nova/nova.conf file:
In the [DEFAULT] section, enable the compute and metadata APIs, configure the RabbitMQ message queue entry, configure my_ip, and enable the neutron network service.
In the [api_database], [database] and [placement_database] sections, configure the database entries.
In the [api] and [keystone_authtoken] sections, configure the Identity service entry.
In the [vnc] section, enable and configure the remote console entry.
In the [glance] section, configure the address of the Image service API.
In the [oslo_concurrency] section, configure the lock path.
In the [placement] section, configure the entry of the placement service.

[DEFAULT]
# ...
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
my_ip = 10.0.0.11
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver
compute_driver = libvirt.LibvirtDriver
instances_path = /var/lib/nova/instances/

[api_database]
# ...
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api

[database]
# ...
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova

[placement_database]
# ...
connection = mysql+pymysql://placement:PLACEMENT_DBPASS@controller/placement

[api]
# ...
auth_strategy = keystone

[keystone_authtoken]
# ...
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = NOVA_PASS

[vnc]
enabled = true
# ...
server_listen = $my_ip
server_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html

[glance]
# ...
api_servers = http://controller:9292

[oslo_concurrency]
# ...
lock_path = /var/lib/nova/tmp

[placement]
# ...
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = PLACEMENT_PASS

[neutron]
# ...
auth_url = http://controller:5000
auth_type = password
project_domain_name = Default
user_domain_name = Default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS

Replace RABBIT_PASS with the password of the openstack account in RabbitMQ;
configure my_ip as the management IP address of the controller node;
replace NOVA_DBPASS with the password of the nova database;
replace PLACEMENT_DBPASS with the password of the placement database;
replace NOVA_PASS with the password of the nova user;
replace PLACEMENT_PASS with the password of the placement user;
replace NEUTRON_PASS with the password of the neutron user.

3. Edit /etc/httpd/conf.d/00-nova-placement-api.conf and add the access configuration for the Placement API:

<IfVersion >= 2.4>
  Require all granted
</IfVersion>
<IfVersion < 2.4>
  Order allow,deny
  Allow from all
</IfVersion>

4. Restart the httpd service:

$ systemctl restart httpd

5. Synchronize the nova-api database:

$ su -s /bin/sh -c "nova-manage api_db sync" nova

6. Register the cell0 database:

$ su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova

7. Create the cell1 cell:

$ su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova

8. Synchronize the nova database:

$ su -s /bin/sh -c "nova-manage db sync" nova

9. Verify that cell0 and cell1 are registered correctly:

su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova

10. Determine whether the host supports hardware acceleration for virtual machines (x86 architecture):

$ egrep -c '(vmx|svm)' /proc/cpuinfo

If the command returns 0, hardware acceleration is not supported and libvirt must be configured to use QEMU instead of KVM.

Note: on an ARM64 server, cpu_mode must also be set to custom and cpu_model to cortex-a72:

# vim /etc/nova/nova.conf
[libvirt]
# ...
virt_type = qemu
cpu_mode = custom
cpu_model = cortex-a72

If the command returns 1 or a larger value, hardware acceleration is supported and no extra configuration is needed.

Note: on the arm64 architecture, the following commands also need to be executed on the compute node:

mkdir -p /usr/share/AAVMF
ln -s /usr/share/edk2/aarch64/QEMU_EFI-pflash.raw \
  /usr/share/AAVMF/AAVMF_CODE.fd
ln -s /usr/share/edk2/aarch64/vars-template-pflash.raw \
  /usr/share/AAVMF/AAVMF_VARS.fd
chown nova:nova /usr/share/AAVMF -R

vim /etc/libvirt/qemu.conf
nvram = ["/usr/share/AAVMF/AAVMF_CODE.fd:/usr/share/AAVMF/AAVMF_VARS.fd",
         "/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw:/usr/share/edk2/aarch64/vars-template-pflash.raw"]

11. Start the Compute services and their dependencies, and configure them to start on boot:

$ systemctl enable \
  openstack-nova-api.service \
  openstack-nova-scheduler.service \
  openstack-nova-conductor.service \
  openstack-nova-novncproxy.service
$ systemctl start \
  openstack-nova-api.service \
  openstack-nova-scheduler.service \
  openstack-nova-conductor.service \
  openstack-nova-novncproxy.service
$ systemctl enable libvirtd.service openstack-nova-compute.service
$ systemctl start libvirtd.service openstack-nova-compute.service

12. Add the compute node to the cell database.

Confirm that the compute node exists:

$ . admin-openrc
$ openstack compute service list --service nova-compute

Register the compute node:

$ su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova

Verification

$ . admin-openrc

List the service components to verify that every process started and registered successfully:

$ openstack compute service list

List the API endpoints in the Identity service to verify the connection to the Identity service:

$ openstack catalog list

List the images in the Image service to verify the connection to the Image service:

$ openstack image list

Check whether the cells and the placement API are working properly, and whether the other prerequisites are in place:

$ nova-status upgrade check

#### Neutron Installation

Create the database, service credentials and API endpoints

1. Create the database: access the database as the root user, create the neutron database and grant privileges.

$ mysql -u root -p
MariaDB [(none)]> CREATE DATABASE neutron;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
IDENTIFIED BY 'NEUTRON_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
IDENTIFIED BY 'NEUTRON_DBPASS';
MariaDB [(none)]> exit

Replace NEUTRON_DBPASS to set the password for the neutron database.

$ . admin-openrc
2. Run the following commands to create the neutron service credentials, create the neutron user and add the 'admin' role to the 'neutron' user.

Create the neutron service:

$ openstack user create --domain default --password-prompt neutron
$ openstack role add --project service --user neutron admin
$ openstack service create --name neutron --description "OpenStack Networking" network

3. Create the Networking service API endpoints:

$ openstack endpoint create --region RegionOne network public http://controller:9696
$ openstack endpoint create --region RegionOne network internal http://controller:9696
$ openstack endpoint create --region RegionOne network admin http://controller:9696

Install and configure the self-service network

1. Install the packages:

$ yum install openstack-neutron openstack-neutron-ml2 \
  openstack-neutron-linuxbridge ebtables ipset

2. Configure neutron by editing the /etc/neutron/neutron.conf file:
In the [database] section, configure the database entry.
In the [DEFAULT] section, enable the ml2 and router plug-ins, allow overlapping IP addresses, and configure the RabbitMQ message queue entry.
In the [DEFAULT] and [keystone_authtoken] sections, configure the Identity service entry.
In the [DEFAULT] and [nova] sections, configure Networking to notify Compute of network topology changes.
In the [oslo_concurrency] section, configure the lock path.

[database]
# ...
connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron

[DEFAULT]
# ...
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true

[keystone_authtoken]
# ...
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = neutron
password = NEUTRON_PASS

[nova]
# ...
auth_url = http://controller:5000
auth_type = password
project_domain_name = Default
user_domain_name = Default
region_name = RegionOne
project_name = service
username = nova
password = NOVA_PASS

[oslo_concurrency]
# ...
lock_path = /var/lib/neutron/tmp

Replace NEUTRON_DBPASS with the password of the neutron database;
replace RABBIT_PASS with the password of the openstack account in RabbitMQ;
replace NEUTRON_PASS with the password of the neutron user;
replace NOVA_PASS with the password of the nova user.

3. Configure the ML2 plug-in by editing the /etc/neutron/plugins/ml2/ml2_conf.ini file:
In the [ml2] section, enable flat, vlan and vxlan networks, enable the linux bridge and layer-2 population mechanisms, and enable the port security extension driver.
In the [ml2_type_flat] section, configure flat networks as provider virtual networks.
In the [ml2_type_vxlan] section, configure the VXLAN network identifier range.
In the [securitygroup] section, enable ipset.

# vim /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
# ...
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security

[ml2_type_flat]
# ...
flat_networks = provider

[ml2_type_vxlan]
# ...
vni_ranges = 1:1000

[securitygroup]
# ...
enable_ipset = true

4. Configure the Linux bridge agent by editing the /etc/neutron/plugins/ml2/linuxbridge_agent.ini file:
In the [linux_bridge] section, map the provider virtual network to the physical network interface.
In the [vxlan] section, enable vxlan overlay networks, configure the IP address of the physical network interface that handles overlay networks, and enable layer-2 population.
In the [securitygroup] section, enable security groups and configure the linux bridge iptables firewall driver.

[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME

[vxlan]
enable_vxlan = true
local_ip = OVERLAY_INTERFACE_IP_ADDRESS
l2_population = true

[securitygroup]
# ...
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

Replace PROVIDER_INTERFACE_NAME with the physical network interface;
replace OVERLAY_INTERFACE_IP_ADDRESS with the management IP address of the controller node.

5. Configure the layer-3 agent by editing the /etc/neutron/l3_agent.ini file:
In the [DEFAULT] section, configure the interface driver as linuxbridge.

[DEFAULT]
# ...
interface_driver = linuxbridge

6. Configure the DHCP agent by editing the /etc/neutron/dhcp_agent.ini file:
In the [DEFAULT] section, configure the linuxbridge interface driver and the Dnsmasq DHCP driver, and enable isolated metadata.

[DEFAULT]
# ...
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true

7. Configure the metadata agent by editing the /etc/neutron/metadata_agent.ini file:
In the [DEFAULT] section, configure the metadata host and the shared secret.

[DEFAULT]
# ...
nova_metadata_host = controller
metadata_proxy_shared_secret = METADATA_SECRET

Replace METADATA_SECRET with a suitable metadata proxy secret.

8. Configure the Compute service by editing the /etc/nova/nova.conf file:
In the [neutron] section, configure the access parameters, enable the metadata proxy and configure the secret.

[neutron]
# ...
auth_url = http://controller:5000
auth_type = password
project_domain_name = Default
user_domain_name = Default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
service_metadata_proxy = true
metadata_proxy_shared_secret = METADATA_SECRET

Replace NEUTRON_PASS with the password of the neutron user;
replace METADATA_SECRET with a suitable metadata proxy secret.

Finish the installation

1. Add a link to the configuration file:

$ ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

2. Synchronize the database:

$ su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
  --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

3. Restart the Compute API service:

$ systemctl restart openstack-nova-api.service

4. Start the Networking services and configure them to start on boot:

$ systemctl enable neutron-server.service \
  neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
  neutron-metadata-agent.service
$ systemctl start neutron-server.service \
  neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
  neutron-metadata-agent.service
$ systemctl enable neutron-l3-agent.service
$ systemctl start neutron-l3-agent.service

Verification

List the agents to verify that the neutron agents started successfully:

$ openstack network agent list
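Once the agents are up, a first pair of networks is usually created before launching instances. A hedged sketch, using the 'provider' physical network mapped in ml2_conf.ini above; the address ranges, DNS server and the 'selfservice' name are illustrative assumptions rather than values from this guide:

```shell
source admin-openrc

# Provider (flat) network on the physical network named 'provider'
openstack network create --share --external \
  --provider-physical-network provider \
  --provider-network-type flat provider
openstack subnet create --network provider \
  --allocation-pool start=203.0.113.101,end=203.0.113.250 \
  --dns-nameserver 8.8.8.8 --gateway 203.0.113.1 \
  --subnet-range 203.0.113.0/24 provider

# Self-service (vxlan) tenant network
openstack network create selfservice
openstack subnet create --network selfservice \
  --dns-nameserver 8.8.8.8 --gateway 172.16.1.1 \
  --subnet-range 172.16.1.0/24 selfservice
```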
#### Cinder Installation

Create the database, service credentials and API endpoints

1. Create the database: access the database as the root user, create the cinder database and grant privileges.

$ mysql -u root -p
MariaDB [(none)]> CREATE DATABASE cinder;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \
IDENTIFIED BY 'CINDER_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \
IDENTIFIED BY 'CINDER_DBPASS';
MariaDB [(none)]> exit

Replace CINDER_DBPASS to set the password for the cinder database.

$ source admin-openrc

2. Create the cinder service credentials: create the cinder user, add the 'admin' role to the 'cinder' user, and create the cinderv2 and cinderv3 services.

$ openstack user create --domain default --password-prompt cinder
$ openstack role add --project service --user cinder admin
$ openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
$ openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3

3. Create the Block Storage service API endpoints:

$ openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(project_id\)s
$ openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(project_id\)s
$ openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(project_id\)s
$ openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s
$ openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s
$ openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s

Install and configure the controller node

1. Install the packages:

$ yum install openstack-cinder

2. Configure cinder by editing the /etc/cinder/cinder.conf file:
In the [database] section, configure the database entry.
In the [DEFAULT] section, configure the RabbitMQ message queue entry and my_ip.
In the [DEFAULT] and [keystone_authtoken] sections, configure the Identity service entry.
In the [oslo_concurrency] section, configure the lock path.

[database]
# ...
connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder

[DEFAULT]
# ...
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone
my_ip = 10.0.0.11

[keystone_authtoken]
# ...
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = cinder
password = CINDER_PASS

[oslo_concurrency]
# ...
lock_path = /var/lib/cinder/tmp

Replace CINDER_DBPASS with the password of the cinder database;
replace RABBIT_PASS with the password of the openstack account in RabbitMQ;
configure my_ip as the management IP address of the controller node;
replace CINDER_PASS with the password of the cinder user.

3. Synchronize the database:

$ su -s /bin/sh -c "cinder-manage db sync" cinder

4. Configure Compute to use Block Storage by editing the /etc/nova/nova.conf file.

[cinder]
os_region_name = RegionOne

5. Finish the installation.

Restart the Compute API service:

$ systemctl restart openstack-nova-api.service

Start the Block Storage services:

$ systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
$ systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service

Install and configure a storage node (LVM)

1. Install the packages:

$ yum install lvm2 device-mapper-persistent-data scsi-target-utils python2-keystone \
  openstack-cinder-volume

2. Create the LVM physical volume /dev/sdb:

$ pvcreate /dev/sdb

3. Create the LVM volume group cinder-volumes:

$ vgcreate cinder-volumes /dev/sdb

4. Edit the /etc/lvm/lvm.conf file:
In the devices section, add a filter that accepts the /dev/sdb device and rejects all other devices.

devices {
# ...
filter = [ "a/sdb/", "r/.*/"]
}

5. Edit the /etc/cinder/cinder.conf file:
In the [lvm] section, configure the LVM backend with the LVM driver, the cinder-volumes volume group, the iSCSI protocol and an appropriate iSCSI service.
In the [DEFAULT] section, enable the LVM backend and configure the location of the Image service API.

[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
target_protocol = iscsi
target_helper = lioadm

[DEFAULT]
# ...
enabled_backends = lvm
glance_api_servers = http://controller:9292

Note: when cinder attaches volumes through tgtadm, modify /etc/tgt/tgtd.conf with the following content so that tgtd can discover the iSCSI targets of cinder-volume:

include /var/lib/cinder/volumes/*

6. Finish the installation:

$ systemctl enable openstack-cinder-volume.service tgtd.service iscsid.service
$ systemctl start openstack-cinder-volume.service tgtd.service iscsid.service

Install and configure a storage node (Ceph RBD)

1. Install the packages:

$ yum install ceph-common python2-rados python2-rbd python2-keystone openstack-cinder-volume

2. In the [DEFAULT] section, enable the ceph-rbd backend and configure the location of the Image service API.

[DEFAULT]
enabled_backends = ceph-rbd

3. Add a ceph rbd configuration section whose name matches the one used in enabled_backends:

[ceph-rbd]
glance_api_version = 2
rados_connect_timeout = -1
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = False
rbd_max_clone_depth = 5
rbd_pool =               # name of the RBD storage pool
rbd_secret_uuid =        # randomly generated secret UUID
rbd_store_chunk_size = 4
rbd_user =
volume_backend_name = ceph-rbd
volume_driver = cinder.volume.drivers.rbd.RBDDriver

4. Configure the ceph client on the storage node. The /etc/ceph/ directory must contain the ceph cluster access configuration, including ceph.conf and the keyring:

[root@openeuler ~]# ll /etc/ceph
-rw-r--r-- 1 root root   82 Jun 16 17:11 ceph.client..keyring
-rw-r--r-- 1 root root 1.5K Jun 16 17:11 ceph.conf
-rw-r--r-- 1 root root   92 Jun 16 17:11 rbdmap

5. On the storage node, check that the ceph cluster is healthy and reachable:

[root@openeuler ~]# ceph --user cinder -s
  cluster:
    id:     b7b2fac6-420f-4ec1-aea2-4862d29b4059
    health: HEALTH_OK
  services:
    mon: 3 daemons, quorum VIRT01,VIRT02,VIRT03
    mgr: VIRT03(active), standbys: VIRT02, VIRT01
    mds: cephfs_virt-1/1/1 up {0=VIRT03=up:active}, 2 up:standby
    osd: 15 osds: 15 up, 15 in
  data:
    pools:   7 pools, 1416 pgs
    objects: 5.41M objects, 19.8TiB
    usage:   49.3TiB used, 59.9TiB / 109TiB avail
    pgs:     1414 active
  io:
    client: 2.73MiB/s rd, 22.4MiB/s wr, 3.21kop/s rd, 1.19kop/s wr

6. Start the service:

$ systemctl enable openstack-cinder-volume.service
$ systemctl start openstack-cinder-volume.service
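To actually route new volumes to this backend, a volume type bound to the volume_backend_name configured above is normally created. A hedged sketch; the type name ceph-rbd and the 1 GB test volume are illustrative, not values required by this guide:

```shell
source admin-openrc

# Create a volume type tied to the ceph backend configured above
openstack volume type create ceph-rbd
openstack volume type set --property volume_backend_name=ceph-rbd ceph-rbd

# Volumes created with this type should land on the RBD backend
openstack volume create --type ceph-rbd --size 1 test-rbd-volume
```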
Install and configure the backup service

Edit the /etc/cinder/cinder.conf file:
In the [DEFAULT] section, configure the backup options.

[DEFAULT]
# ...
# Note: openEuler 21.03 does not provide an OpenStack Swift package; users need to install it
# themselves, or use another backup backend such as NFS. NFS has been tested and works.
backup_driver = cinder.backup.drivers.swift.SwiftBackupDriver
backup_swift_url = SWIFT_URL

Replace SWIFT_URL with the URL of the Object Storage service, which can be found through the object-store API endpoint:

$ openstack catalog show object-store

Finish the installation:

$ systemctl enable openstack-cinder-backup.service
$ systemctl start openstack-cinder-backup.service

Verification

List the service components to verify that every step succeeded:

$ source admin-openrc
$ openstack volume service list

Note: the swift component is not supported yet; if your environment allows, you can configure ceph as the backend instead.

#### Horizon Installation

1. Install the packages:

$ yum install openstack-dashboard

2. Modify the file /usr/share/openstack-dashboard/openstack_dashboard/local/local_settings.py.

Modify the following variables:

ALLOWED_HOSTS = ['*', ]
OPENSTACK_HOST = "controller"
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'controller:11211',
    }
}

Add the following variables:

OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 3,
}
WEBROOT = "/dashboard/"
COMPRESS_OFFLINE = True
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "default"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "admin"
LOGIN_URL = '/dashboard/auth/login/'
LOGOUT_URL = '/dashboard/auth/logout/'

3. Modify the file /etc/httpd/conf.d/openstack-dashboard.conf:

WSGIDaemonProcess dashboard
WSGIProcessGroup dashboard
WSGISocketPrefix run/wsgi
WSGIApplicationGroup %{GLOBAL}
WSGIScriptAlias /dashboard /usr/share/openstack-dashboard/openstack_dashboard/wsgi/django.wsgi
Alias /dashboard/static /usr/share/openstack-dashboard/static

<Directory /usr/share/openstack-dashboard/openstack_dashboard/wsgi>
  Options All
  AllowOverride All
  Require all granted
</Directory>

<Directory /usr/share/openstack-dashboard/static>
  Options All
  AllowOverride All
  Require all granted
</Directory>

4. In the /usr/share/openstack-dashboard directory, run:

$ ./manage.py compress

5. Restart the httpd service:

$ systemctl restart httpd

6. Verification: open a browser, go to http://<host IP>/dashboard and log in to horizon.

#### Tempest Installation

Tempest is the OpenStack integration test service. It is recommended if users need comprehensive automated functional testing of the installed OpenStack environment; otherwise it does not need to be installed.

1. Install Tempest:

$ yum install openstack-tempest

2. Initialize a workspace directory:

$ tempest init mytest
3. Modify the configuration file.

$ cd mytest
$ vi etc/tempest.conf

tempest.conf must be filled with the information of the current OpenStack environment; for the details, refer to the official sample (a minimal sketch is given after the run step below).

4. Run the tests:

$ tempest run
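As a hedged, minimal illustration of what such a tempest.conf might contain (option names follow upstream Tempest; ADMIN_PASS, IMAGE_UUID and FLAVOR_ID are placeholders for values from this environment, and the official sample remains the reference):

```shell
cd mytest
cat >> etc/tempest.conf << 'EOF'
[auth]
admin_username = admin
admin_password = ADMIN_PASS
admin_project_name = admin
admin_domain_name = Default

[identity]
uri_v3 = http://controller:5000/v3

[compute]
# IDs of an uploaded image (e.g. cirros) and a small flavor
image_ref = IMAGE_UUID
flavor_ref = FLAVOR_ID
EOF

# Run only the smoke-tagged subset first
tempest run --smoke
```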
#### Ironic Installation

Ironic is the OpenStack Bare Metal service. It is recommended if users need to provision bare metal machines; otherwise it does not need to be installed.

Set up the database

The Bare Metal service stores information in a database. Create an ironic database that the ironic user can access, replacing IRONIC_DBPASSWORD with a suitable password:

$ mysql -u root -p
MariaDB [(none)]> CREATE DATABASE ironic CHARACTER SET utf8;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'localhost' \
IDENTIFIED BY 'IRONIC_DBPASSWORD';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'%' \
IDENTIFIED BY 'IRONIC_DBPASSWORD';

Component installation and configuration

##### Create the service user credentials

1. Create the Bare Metal service users:

$ openstack user create --password IRONIC_PASSWORD \
  --email ironic@example.com ironic
$ openstack role add --project service --user ironic admin
$ openstack service create --name ironic --description \
  "Ironic baremetal provisioning service" baremetal
$ openstack service create --name ironic-inspector --description "Ironic inspector baremetal provisioning service" baremetal-introspection
$ openstack user create --password IRONIC_INSPECTOR_PASSWORD --email ironic_inspector@example.com ironic_inspector
$ openstack role add --project service --user ironic-inspector admin

2. Create the Bare Metal service endpoints:

$ openstack endpoint create --region RegionOne baremetal admin http://$IRONIC_NODE:6385
$ openstack endpoint create --region RegionOne baremetal public http://$IRONIC_NODE:6385
$ openstack endpoint create --region RegionOne baremetal internal http://$IRONIC_NODE:6385
$ openstack endpoint create --region RegionOne baremetal-introspection internal http://$IRONIC_NODE:5050/v1
$ openstack endpoint create --region RegionOne baremetal-introspection public http://$IRONIC_NODE:5050/v1
$ openstack endpoint create --region RegionOne baremetal-introspection admin http://$IRONIC_NODE:5050/v1

##### Configure the ironic-api service

Configuration file path: /etc/ironic/ironic.conf

1. Configure the location of the database through the connection option. As shown below, replace IRONIC_DBPASSWORD with the password of the ironic user and DB_IP with the IP address of the DB server:

[database]
# The SQLAlchemy connection string used to connect to the
# database (string value)
connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic

2. Configure the ironic-api service to use the RabbitMQ message broker through the following option, replacing RPC_* with the detailed address and credentials of RabbitMQ:

[DEFAULT]
# A URL representing the messaging driver to use and its full
# configuration. (string value)
transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/

Users can also replace rabbitmq with json-rpc on their own.

3. Configure the ironic-api service to use the credentials of the Identity service. Replace PUBLIC_IDENTITY_IP with the public IP of the Identity server, PRIVATE_IDENTITY_IP with the private IP of the Identity server, and IRONIC_PASSWORD with the password of the ironic user in the Identity service:

[DEFAULT]
# Authentication strategy used by ironic-api: one of
# "keystone" or "noauth". "noauth" should not be used in a
# production environment because all authentication will be
# disabled. (string value)
auth_strategy=keystone
force_config_drive = True

[keystone_authtoken]
# Authentication type to load (string value)
auth_type=password
# Complete public Identity API endpoint (string value)
www_authenticate_uri=http://PUBLIC_IDENTITY_IP:5000
# Complete admin Identity API endpoint. (string value)
auth_url=http://PRIVATE_IDENTITY_IP:5000
# Service username. (string value)
username=ironic
# Service account password. (string value)
password=IRONIC_PASSWORD
# Service tenant name. (string value)
project_name=service
# Domain name containing project (string value)
project_domain_name=Default
# User's domain name (string value)
user_domain_name=Default

4. Specify the ironic log directory in the configuration file:

[DEFAULT]
log_dir = /var/log/ironic/

5. Create the Bare Metal service database tables:

$ ironic-dbsync --config-file /etc/ironic/ironic.conf create_schema

6. Restart the ironic-api service:

$ systemctl restart openstack-ironic-api

##### Configure the ironic-conductor service

1. Replace HOST_IP with the IP of the conductor host:

[DEFAULT]
# IP address of this host. If unset, will determine the IP
# programmatically. If unable to do so, will use "127.0.0.1".
# (string value)
my_ip=HOST_IP

2. Configure the location of the database. ironic-conductor should use the same configuration as ironic-api. Replace IRONIC_DBPASSWORD with the password of the ironic user and DB_IP with the IP address of the DB server:

[database]
# The SQLAlchemy connection string to use to connect to the
# database. (string value)
connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic

3. Configure the service to use the RabbitMQ message broker through the following option. ironic-conductor should use the same configuration as ironic-api; replace RPC_* with the detailed address and credentials of RabbitMQ:

[DEFAULT]
# A URL representing the messaging driver to use and its full
# configuration. (string value)
4. Configure credentials for accessing other OpenStack services.

To communicate with other OpenStack services, the Bare Metal service needs to authenticate with the OpenStack Identity service using service users when it makes requests to other services. The credentials of these users must be configured in the configuration section associated with each service:

- `[neutron]` - to access the OpenStack Networking service
- `[glance]` - to access the OpenStack Image service
- `[swift]` - to access the OpenStack Object Storage service
- `[cinder]` - to access the OpenStack Block Storage service
- `[inspector]` - to access the OpenStack bare-metal introspection service
- `[service_catalog]` - a special entry that stores the credentials the Bare Metal service uses to discover its own API URL endpoint as registered in the OpenStack Identity service catalog

For simplicity, the same service user can be used for all services. For backward compatibility this should be the same user configured in the `[keystone_authtoken]` section of the ironic-api service; this is not mandatory, and a different service user can be created and configured for each service instead.

In the example below, the credentials used to access the OpenStack Networking service are configured as follows:

- the Networking service is deployed in the Identity service region named RegionOne and only the public endpoint interface is registered in the service catalog
- requests use HTTPS connections with a specific CA SSL certificate
- the same service user as configured for the ironic-api service
- the dynamic password authentication plugin discovers a suitable Identity service API version based on the other options

```ini
[neutron]
# Authentication type to load
auth_type = password
# Authentication URL
auth_url=https://IDENTITY_IP:5000/
# Username
username=ironic
# User's password
password=IRONIC_PASSWORD
# Project name to scope to
project_name=service
# Domain ID containing project
project_domain_id=default
# User's domain id
user_domain_id=default
# PEM encoded Certificate Authority to use when verifying HTTPs connections
cafile=/opt/stack/data/ca-bundle.pem
# The default region_name for endpoint URL discovery
region_name = RegionOne
# List of interfaces, in order of preference, for endpoint URL
valid_interfaces=public
```
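Because the same service user can be reused for every service, the other sections follow the same pattern. The snippet below is only a minimal sketch of a matching `[glance]` section under the assumption that the Image service is reached over plain HTTP in the same region; the values must be adapted to the actual deployment.

```ini
[glance]
# Sketch only: reuses the ironic service user configured for ironic-api above.
auth_type = password
auth_url = http://PRIVATE_IDENTITY_IP:5000
username = ironic
password = IRONIC_PASSWORD
project_name = service
project_domain_id = default
user_domain_id = default
region_name = RegionOne
valid_interfaces = public
```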
By default, to communicate with other services, the Bare Metal service attempts to discover a suitable endpoint for each service through the Identity service catalog. To use a different endpoint for a particular service, specify it with the `endpoint_override` option in the Bare Metal service configuration file:

```ini
[neutron]
# ...
endpoint_override =
```

5. Configure the allowed drivers and hardware types.

Set the hardware types allowed by the ironic-conductor service with `enabled_hardware_types`:

```ini
[DEFAULT]
enabled_hardware_types = ipmi
```

Configure the hardware interfaces:

```ini
enabled_boot_interfaces = pxe
enabled_deploy_interfaces = direct,iscsi
enabled_inspect_interfaces = inspector
enabled_management_interfaces = ipmitool
enabled_power_interfaces = ipmitool
```

Configure the interface defaults:

```ini
[DEFAULT]
default_deploy_interface = direct
default_network_interface = neutron
```

If any driver that uses Direct deploy is enabled, the Swift backend of the Image service must be installed and configured. The Ceph Object Gateway (RADOS Gateway) is also supported as an Image service backend.

6. Restart the ironic-conductor service:

```shell
$ systemctl restart openstack-ironic-conductor
```
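As a quick sanity check that is not part of the original steps, the enabled hardware types can be listed once both ironic-api and ironic-conductor are running; this assumes the admin credentials created earlier are loaded in the shell.

```shell
$ source admin-openrc
# Should list the "ipmi" hardware type enabled above; an empty list usually means
# ironic-conductor is not registered yet or cannot reach the database or RabbitMQ.
$ openstack baremetal driver list
```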
##### Configure the ironic-inspector service

Configuration file path: /etc/ironic-inspector/inspector.conf

1. Create the database:

```shell
$ mysql -u root -p
MariaDB [(none)]> CREATE DATABASE ironic_inspector CHARACTER SET utf8;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic_inspector.* TO 'ironic_inspector'@'localhost' \
  IDENTIFIED BY 'IRONIC_INSPECTOR_DBPASSWORD';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic_inspector.* TO 'ironic_inspector'@'%' \
  IDENTIFIED BY 'IRONIC_INSPECTOR_DBPASSWORD';
```

2. Configure the database location through the `connection` option. Replace `IRONIC_INSPECTOR_DBPASSWORD` with the password of the `ironic_inspector` user and `DB_IP` with the IP address of the DB server:

```ini
[database]
backend = sqlalchemy
connection = mysql+pymysql://ironic_inspector:IRONIC_INSPECTOR_DBPASSWORD@DB_IP/ironic_inspector
```

3. Run `ironic-inspector-dbsync` to generate the tables:

```shell
$ ironic-inspector-dbsync --config-file /etc/ironic-inspector/inspector.conf upgrade
```

4. Configure the message queue transport address:

```ini
[DEFAULT]
transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
```

5. Set up Keystone authentication:

```ini
[DEFAULT]
auth_strategy = keystone

[ironic]
api_endpoint = http://IRONIC_API_HOST_ADDRESS:6385
auth_type = password
auth_url = http://PUBLIC_IDENTITY_IP:5000
auth_strategy = keystone
ironic_url = http://IRONIC_API_HOST_ADDRESS:6385
os_region = RegionOne
project_name = service
project_domain_name = Default
user_domain_name = Default
username = IRONIC_SERVICE_USER_NAME
password = IRONIC_SERVICE_USER_PASSWORD
```

6. Configure the ironic-inspector dnsmasq service:

```ini
# Configuration file: /etc/ironic-inspector/dnsmasq.conf
port=0
interface=enp3s0                           # replace with the actual listening network interface
dhcp-range=172.20.19.100,172.20.19.110     # replace with the actual DHCP address range
bind-interfaces
enable-tftp
dhcp-match=set:efi,option:client-arch,7
dhcp-match=set:efi,option:client-arch,9
dhcp-match=aarch64, option:client-arch,11
dhcp-boot=tag:aarch64,grubaa64.efi
dhcp-boot=tag:!aarch64,tag:efi,grubx64.efi
dhcp-boot=tag:!aarch64,tag:!efi,pxelinux.0
tftp-root=/tftpboot                        # replace with the actual tftpboot directory
log-facility=/var/log/dnsmasq.log
```

7. Start the services:

```shell
$ systemctl enable --now openstack-ironic-inspector.service
$ systemctl enable --now openstack-ironic-inspector-dnsmasq.service
```

8. If the ironic services are deployed on a standalone node, the iscsid service also needs to be deployed and started:

```shell
$ systemctl enable openstack-cinder-volume.service tgtd.service iscsid.service
$ systemctl start openstack-cinder-volume.service tgtd.service iscsid.service
```

Note: support on the ARM architecture is incomplete; adapt the steps to your own situation.

#### Building the deploy ramdisk image

The ramdisk image is currently built with ironic-python-agent-builder. The following describes the complete process of building the deploy image used by ironic with this tool. (You can also obtain ironic-python-agent in other ways; only the ipa-builder method is described here.)

##### Install ironic-python-agent-builder

1. Install the tool:

```shell
$ pip install ironic-python-agent-builder
```

2. Modify the Python interpreter (the shebang line) in the following files: `/usr/bin/yum`, `/usr/libexec/urlgrabber-ext-down`.

3. Install the other required tools:

```shell
$ yum install git
```

Because DIB depends on the `semanage` command, confirm that it is available before building the image by running `semanage --help`. If the command is reported as missing, install it:

```shell
# First find out which package provides it
[root@localhost ~]# yum provides /usr/sbin/semanage
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirror.vcu.edu
 * extras: mirror.vcu.edu
 * updates: mirror.math.princeton.edu
policycoreutils-python-2.5-34.el7.aarch64 : SELinux policy core python utilities
Repo        : base
Matched from:
Filename    : /usr/sbin/semanage
# Install it
[root@localhost ~]# yum install policycoreutils-python
```
##### Build the image

For the aarch64 architecture, additionally set:

```shell
$ export ARCH=aarch64
```

###### Standard image

Basic usage:

```
usage: ironic-python-agent-builder [-h] [-r RELEASE] [-o OUTPUT] [-e ELEMENT]
                                   [-b BRANCH] [-v] [--extra-args EXTRA_ARGS]
                                   distribution

positional arguments:
  distribution          Distribution to use

optional arguments:
  -h, --help            show this help message and exit
  -r RELEASE, --release RELEASE
                        Distribution release to use
  -o OUTPUT, --output OUTPUT
                        Output base file name
  -e ELEMENT, --element ELEMENT
                        Additional DIB element to use
  -b BRANCH, --branch BRANCH
                        If set, override the branch that is used for
                        ironic-python-agent and requirements
  -v, --verbose         Enable verbose logging in diskimage-builder
  --extra-args EXTRA_ARGS
                        Extra arguments to pass to diskimage-builder
```

Example:

```shell
$ ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky
```

###### Allow SSH login

Initialize the environment variables, then build the image:

```shell
$ export DIB_DEV_USER_USERNAME=ipa
$ export DIB_DEV_USER_PWDLESS_SUDO=yes
$ export DIB_DEV_USER_PASSWORD='123'
$ ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky -e selinux-permissive -e devuser
```

###### Specify the code repository

Initialize the corresponding environment variables, then build the image:

```shell
# Specify the repository address and version
DIB_REPOLOCATION_ironic_python_agent=git@172.20.2.149:liuzz/ironic-python-agent.git
DIB_REPOREF_ironic_python_agent=origin/develop

# Clone the code directly from gerrit
DIB_REPOLOCATION_ironic_python_agent=https://review.opendev.org/openstack/ironic-python-agent
DIB_REPOREF_ironic_python_agent=refs/changes/43/701043/1
```

Reference: source-repositories.

Specifying the repository address and version has been verified to work.

## Kolla Installation

Kolla provides production-ready containerized deployment of the OpenStack services. The Kolla and Kolla-ansible packages were introduced in openEuler 20.03 LTS SP2.

Installing Kolla is straightforward; just install the corresponding RPM packages:

```shell
$ yum install openstack-kolla openstack-kolla-ansible
```

After installation, commands such as `kolla-ansible`, `kolla-build`, `kolla-genpwd` and `kolla-mergepwd` are available.
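As a hedged illustration of where to go next, which this guide does not cover, a minimal all-in-one kolla-ansible flow usually looks like the sketch below. The inventory path and the need to edit `/etc/kolla/globals.yml` are assumptions based on kolla-ansible defaults and should be checked against the installed version.

```shell
# Generate random service passwords into /etc/kolla/passwords.yml
$ kolla-genpwd

# Edit /etc/kolla/globals.yml first (base distro, network interfaces,
# kolla_internal_vip_address, ...), then run the usual phases:
$ kolla-ansible -i /usr/share/kolla-ansible/ansible/inventory/all-in-one bootstrap-servers
$ kolla-ansible -i /usr/share/kolla-ansible/ansible/inventory/all-in-one prechecks
$ kolla-ansible -i /usr/share/kolla-ansible/ansible/inventory/all-in-one deploy
```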
## Trove Installation

Trove is the OpenStack database service. It is recommended if you want to use the database service provided by OpenStack; otherwise it can be skipped.

#### Set up the database

The Database service stores information in a database. Create a `trove` database that the `trove` user can access, replacing `TROVE_DBPASSWORD` with a suitable password:

```shell
$ mysql -u root -p
MariaDB [(none)]> CREATE DATABASE trove CHARACTER SET utf8;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'localhost' \
  IDENTIFIED BY 'TROVE_DBPASSWORD';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'%' \
  IDENTIFIED BY 'TROVE_DBPASSWORD';
```

#### Create service user credentials

1. Create the Trove service user:

```shell
$ openstack user create --password TROVE_PASSWORD \
  --email trove@example.com trove
$ openstack role add --project service --user trove admin
$ openstack service create --name trove --description "Database service" database
```

Explanation: replace `TROVE_PASSWORD` with the password of the `trove` user.

2. Create the Database service endpoints:

```shell
$ openstack endpoint create --region RegionOne database public http://$TROVE_NODE:8779/v1.0/%\(tenant_id\)s
$ openstack endpoint create --region RegionOne database internal http://$TROVE_NODE:8779/v1.0/%\(tenant_id\)s
$ openstack endpoint create --region RegionOne database admin http://$TROVE_NODE:8779/v1.0/%\(tenant_id\)s
```

Explanation: replace `$TROVE_NODE` with the node where the Trove API service is deployed.

#### Install and configure the Trove components

1. Install the Trove packages:

```shell
$ yum install openstack-trove python-troveclient
```

2. Configure /etc/trove/trove.conf:

```ini
[DEFAULT]
bind_host=TROVE_NODE_IP
log_dir = /var/log/trove
auth_strategy = keystone
# Config option for showing the IP address that nova doles out
add_addresses = True
network_label_regex = ^NETWORK_LABEL$
api_paste_config = /etc/trove/api-paste.ini

trove_auth_url = http://controller:35357/v3/
nova_compute_url = http://controller:8774/v2
cinder_url = http://controller:8776/v1

nova_proxy_admin_user = admin
nova_proxy_admin_pass = ADMIN_PASS
nova_proxy_admin_tenant_name = service
taskmanager_manager = trove.taskmanager.manager.Manager
use_nova_server_config_drive = True

# Set these if using Neutron Networking
network_driver=trove.network.neutron.NeutronDriver
network_label_regex=.*

transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/

[database]
connection = mysql+pymysql://trove:TROVE_DBPASS@controller/trove

[keystone_authtoken]
www_authenticate_uri = http://controller:5000/v3/
auth_url = http://controller:35357/v3/
#auth_uri = http://controller/identity
#auth_url = http://controller/identity_admin
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = trove
password = TROVE_PASS
```

Explanation:

- in the `[DEFAULT]` section, set `bind_host` to the IP of the node where Trove is deployed;
- `nova_compute_url` and `cinder_url` are the endpoints created by Nova and Cinder in Keystone;
- `nova_proxy_XXX` is the information of a user that can access the Nova service; the example above uses the `admin` user;
- `transport_url` is the RabbitMQ connection information; replace `RABBIT_PASS` with the RabbitMQ password;
- `connection` in the `[database]` section refers to the database created for Trove in MySQL earlier;
- in the Trove user information, replace `TROVE_PASS` with the actual password of the trove user.

3. Configure /etc/trove/trove-taskmanager.conf:

```ini
[DEFAULT]
log_dir = /var/log/trove
trove_auth_url = http://controller/identity/v2.0
nova_compute_url = http://controller:8774/v2
cinder_url = http://controller:8776/v1
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/

[database]
connection = mysql+pymysql://trove:TROVE_DBPASS@controller/trove
```

Explanation: configure by analogy with trove.conf.

4. Configure /etc/trove/trove-conductor.conf:

```ini
[DEFAULT]
log_dir = /var/log/trove
trove_auth_url = http://controller/identity/v2.0
nova_compute_url = http://controller:8774/v2
cinder_url = http://controller:8776/v1
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/

[database]
connection = mysql+pymysql://trove:trove@controller/trove
```

Explanation: configure by analogy with trove.conf.

5. Configure /etc/trove/trove-guestagent.conf:

```ini
[DEFAULT]
rabbit_host = controller
rabbit_password = RABBIT_PASS
nova_proxy_admin_user = admin
nova_proxy_admin_pass = ADMIN_PASS
nova_proxy_admin_tenant_name = service
trove_auth_url = http://controller/identity_admin/v2.0
```

Explanation: the guest agent is a separate Trove component that must be built into the virtual machine image Trove creates through Nova. After a database instance is created, the guest agent process starts and reports heartbeats to Trove through the message queue (RabbitMQ), so the RabbitMQ user and password must be configured here.

6. Generate the Trove database tables:

```shell
$ su -s /bin/sh -c "trove-manage db_sync" trove
```

#### Finish the installation

1. Configure the Trove services to start on boot:

```shell
$ systemctl enable openstack-trove-api.service \
  openstack-trove-taskmanager.service \
  openstack-trove-conductor.service
```

2. Start the services:

```shell
$ systemctl start openstack-trove-api.service \
  openstack-trove-taskmanager.service \
  openstack-trove-conductor.service
```
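A quick sanity check that is not part of the original steps: with the admin credentials loaded and python-troveclient installed (both shown earlier), an empty instance list rather than an error indicates that the Trove API, database and RabbitMQ wiring are in place.

```shell
$ source admin-openrc
$ trove list    # expect an empty table on a fresh deployment
```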
# OpenStack-Rocky Deployment Guide

- OpenStack Introduction
- Prepare the environment
  - OpenStack yum repository configuration
  - Environment configuration
- Install SQL DataBase
- Install RabbitMQ
- Install Memcached
- Install OpenStack
  - Keystone installation
  - Glance installation
  - Nova installation
  - Neutron installation
  - Cinder installation
  - Horizon installation
  - Tempest installation
  - Ironic installation
  - Kolla installation
  - Trove installation

## OpenStack Introduction

OpenStack is both a community and a project. It provides an operating platform and a toolset for deploying clouds, offering organizations scalable, flexible cloud computing.

As an open source cloud computing management platform, OpenStack combines several major components such as nova, cinder, neutron, glance, keystone and horizon to get its work done. OpenStack supports almost every type of cloud environment; the project aims to provide a cloud management platform that is simple to deploy, massively scalable, feature-rich and standardized. OpenStack delivers an Infrastructure-as-a-Service (IaaS) solution through a set of complementary services, each of which offers an API for integration.

The officially certified third-party oepkg yum repository for openEuler 20.03-LTS-SP2 already provides the OpenStack-Rocky release. After configuring the oepkg yum repository, users can deploy OpenStack by following this document.

## Prepare the environment

### OpenStack yum repository configuration

Configure the officially certified 20.03-LTS-SP2 third-party oepkg repository, taking x86_64 as an example:

```shell
$ cat << EOF >> /etc/yum.repos.d/OpenStack_Rocky.repo
[openstack_rocky]
name=OpenStack_Rocky
baseurl=https://repo.oepkgs.net/openEuler/rpm/openEuler-20.03-LTS-SP2/budding-openeuler/openstack/rocky/x86_64/
gpgcheck=0
enabled=1
EOF
$ yum clean all && yum makecache
```

### Environment configuration

Add the controller entry to /etc/hosts. For example, if the node IP is 10.0.0.11, add:

```
10.0.0.11 controller
```

## Install SQL DataBase

1. Run the following command to install the packages:

```shell
$ yum install mariadb mariadb-server python2-PyMySQL
```

2. Create and edit the /etc/my.cnf.d/openstack.cnf file. Copy the following content into the file, setting `bind-address` to the management IP address of the controller node:

```ini
[mysqld]
bind-address = 10.0.0.11
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
```

3. Start the DataBase service and configure it to start on boot:

```shell
$ systemctl enable mariadb.service
$ systemctl start mariadb.service
```

## Install RabbitMQ

1. Run the following command to install the package:

```shell
$ yum install rabbitmq-server
```

2. Start the RabbitMQ service and configure it to start on boot:

```shell
$ systemctl enable rabbitmq-server.service
$ systemctl start rabbitmq-server.service
```

3. Add the openstack user:

```shell
$ rabbitmqctl add_user openstack RABBIT_PASS
```

Replace `RABBIT_PASS` to set a password for the openstack user.

4. Set the permissions of the openstack user to allow configure, write and read access:

```shell
$ rabbitmqctl set_permissions openstack ".*" ".*" ".*"
```
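An optional check, assuming the commands above completed without errors: rabbitmqctl can confirm that the user and its permissions were actually created.

```shell
$ rabbitmqctl list_users                # should list "openstack" in addition to the default "guest"
$ rabbitmqctl list_permissions -p /     # should show ".*  .*  .*" for openstack on the default vhost
```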
## Install Memcached

1. Run the following command to install the dependency packages:

```shell
$ yum install memcached python2-memcached
```

2. Edit the /etc/sysconfig/memcached file and add the following content:

```
OPTIONS="-l 127.0.0.1,::1,controller"
```

Change `OPTIONS` to use the management IP address of the controller node in your environment.

3. Run the following commands to start the Memcached service and configure it to start on boot:

```shell
$ systemctl enable memcached.service
$ systemctl start memcached.service
```

## Install OpenStack

### Keystone Installation

1. Access the database as the root user, create the keystone database and grant privileges:

```shell
$ mysql -u root -p
MariaDB [(none)]> CREATE DATABASE keystone;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
  IDENTIFIED BY 'KEYSTONE_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
  IDENTIFIED BY 'KEYSTONE_DBPASS';
MariaDB [(none)]> exit
```

Replace `KEYSTONE_DBPASS` to set a password for the Keystone database.

2. Run the following command to install the packages:

```shell
$ yum install openstack-keystone httpd python2-mod_wsgi
```

3. Configure keystone by editing the /etc/keystone/keystone.conf file. In the `[database]` section, configure the database entry; in the `[token]` section, configure the token provider:

```ini
[database]
connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone

[token]
provider = fernet
```

Replace `KEYSTONE_DBPASS` with the password of the Keystone database.

4. Synchronize the database:

```shell
$ su -s /bin/sh -c "keystone-manage db_sync" keystone
```

5. Initialize the Fernet key repositories:

```shell
$ keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
$ keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
```

6. Bootstrap the Identity service:

```shell
$ keystone-manage bootstrap --bootstrap-password ADMIN_PASS \
  --bootstrap-admin-url http://controller:5000/v3/ \
  --bootstrap-internal-url http://controller:5000/v3/ \
  --bootstrap-public-url http://controller:5000/v3/ \
  --bootstrap-region-id RegionOne
```

Replace `ADMIN_PASS` to set a password for the admin user.

7. Edit the /etc/httpd/conf/httpd.conf file to configure the Apache HTTP server:

```shell
$ vim /etc/httpd/conf/httpd.conf
```

Configure the `ServerName` entry to reference the controller node:

```
ServerName controller
```

Create the `ServerName` entry if it does not exist.

8. Create a link to the /usr/share/keystone/wsgi-keystone.conf file:

```shell
$ ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
```

9. To finish the installation, start the Apache HTTP service:

```shell
$ systemctl enable httpd.service
$ systemctl start httpd.service
```
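Before continuing, a simple check that is not in the original text, assuming the controller name resolves as configured in /etc/hosts: ask Keystone for its version document. A JSON response means Apache and the keystone WSGI application are serving requests.

```shell
$ curl http://controller:5000/v3
# Expected: a JSON body describing the Identity API v3 version
```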
10. Install the OpenStack client:

```shell
$ yum install python2-openstackclient
```

11. Create the OpenStack client environment script. Create the environment variable script for the admin user:

```shell
# vim admin-openrc
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
```

Replace `ADMIN_PASS` with the password of the admin user, the same password set in the `keystone-manage bootstrap` command above.

Run the script to load the environment variables:

```shell
$ source admin-openrc
```

12. Run the following commands to create a domain, projects, users and roles.

Create the domain 'example':

```shell
$ openstack domain create --description "An Example Domain" example
```

Note: the domain 'default' was already created during `keystone-manage bootstrap`.

Create the project 'service':

```shell
$ openstack project create --domain default --description "Service Project" service
```

Create the (non-admin) project 'myproject', user 'myuser' and role 'myrole', and add the role 'myrole' to 'myproject' and 'myuser':

```shell
$ openstack project create --domain default --description "Demo Project" myproject
$ openstack user create --domain default --password-prompt myuser
$ openstack role create myrole
$ openstack role add --project myproject --user myuser myrole
```

13. Verification.

Unset the temporary environment variables OS_AUTH_URL and OS_PASSWORD:

```shell
$ unset OS_AUTH_URL OS_PASSWORD
```

Request a token as the admin user:

```shell
$ openstack --os-auth-url http://controller:5000/v3 \
  --os-project-domain-name Default --os-user-domain-name Default \
  --os-project-name admin --os-username admin token issue
```

Request a token as the myuser user:

```shell
$ openstack --os-auth-url http://controller:5000/v3 \
  --os-project-domain-name Default --os-user-domain-name Default \
  --os-project-name myproject --os-username myuser token issue
```
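Purely as a convenience, and mirroring the admin-openrc script above, a client environment script for the demo user can be kept alongside it. This file is not part of the original steps; the password value is whatever was entered at the `--password-prompt` when creating myuser.

```shell
# vim demo-openrc
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=myproject
export OS_USERNAME=myuser
export OS_PASSWORD=MYUSER_PASS    # the password entered when creating myuser
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
```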
### Glance Installation

1. Create the database, service credentials and API endpoints.

Create the database: access the database as the root user, create the glance database and grant privileges:

```shell
$ mysql -u root -p
MariaDB [(none)]> CREATE DATABASE glance;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
  IDENTIFIED BY 'GLANCE_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
  IDENTIFIED BY 'GLANCE_DBPASS';
MariaDB [(none)]> exit
```

Replace `GLANCE_DBPASS` to set a password for the glance database.

```shell
$ source admin-openrc
```

Run the following commands to create the glance service credentials, create the glance user and add the 'admin' role to the 'glance' user:

```shell
$ openstack user create --domain default --password-prompt glance
$ openstack role add --project service --user glance admin
$ openstack service create --name glance --description "OpenStack Image" image
```

Create the Image service API endpoints:

```shell
$ openstack endpoint create --region RegionOne image public http://controller:9292
$ openstack endpoint create --region RegionOne image internal http://controller:9292
$ openstack endpoint create --region RegionOne image admin http://controller:9292
```

2. Install and configure.

Install the package:

```shell
$ yum install openstack-glance
```

Configure glance by editing the /etc/glance/glance-api.conf file:

- in the `[database]` section, configure the database entry;
- in the `[keystone_authtoken]` and `[paste_deploy]` sections, configure the Identity service entry;
- in the `[glance_store]` section, configure the local filesystem store and the location of the image files.

```ini
[database]
# ...
connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance

[keystone_authtoken]
# ...
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = GLANCE_PASS

[paste_deploy]
# ...
flavor = keystone

[glance_store]
# ...
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
```

Edit the /etc/glance/glance-registry.conf file:

- in the `[database]` section, configure the database entry;
- in the `[keystone_authtoken]` and `[paste_deploy]` sections, configure the Identity service entry.

```ini
[database]
# ...
connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance

[keystone_authtoken]
# ...
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = GLANCE_PASS

[paste_deploy]
# ...
flavor = keystone
```

Replace `GLANCE_DBPASS` with the password of the glance database and `GLANCE_PASS` with the password of the glance user.

3. Synchronize the database:

```shell
$ su -s /bin/sh -c "glance-manage db_sync" glance
```

4. Start the Image services:

```shell
$ systemctl enable openstack-glance-api.service openstack-glance-registry.service
$ systemctl start openstack-glance-api.service openstack-glance-registry.service
```

5. Verification.

Download an image. Note: if your environment uses the Kunpeng (aarch64) architecture, download the arm64 version of the image.

```shell
$ source admin-openrc
$ wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
```

Upload the image to the Image service:

```shell
$ glance image-create --name "cirros" --file cirros-0.4.0-x86_64-disk.img --disk-format qcow2 --container-format bare --visibility=public
```

Confirm the image was uploaded and verify its attributes:

```shell
$ glance image-list
```
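Optionally, and not part of the original steps, the uploaded image's properties and checksum can be inspected with the unified client; this assumes admin-openrc is still loaded.

```shell
$ openstack image show cirros
# "status" should be "active"; "checksum" should match the md5sum of the downloaded file:
$ md5sum cirros-0.4.0-x86_64-disk.img
```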
### Nova Installation

1. Create the databases, service credentials and API endpoints.

Create the databases: access the database as the root user, create the nova, nova_api, nova_cell0 and placement databases and grant privileges:

```shell
$ mysql -u root -p
MariaDB [(none)]> CREATE DATABASE nova_api;
MariaDB [(none)]> CREATE DATABASE nova;
MariaDB [(none)]> CREATE DATABASE nova_cell0;
MariaDB [(none)]> CREATE DATABASE placement;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \
  IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \
  IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
  IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
  IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \
  IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \
  IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' \
  IDENTIFIED BY 'PLACEMENT_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' \
  IDENTIFIED BY 'PLACEMENT_DBPASS';
MariaDB [(none)]> exit
```

Replace `NOVA_DBPASS` and `PLACEMENT_DBPASS` to set passwords for the nova and placement databases.

Run the following commands to create the nova service credentials, create the nova user and add the 'admin' role to the 'nova' user:

```shell
$ . admin-openrc
$ openstack user create --domain default --password-prompt nova
$ openstack role add --project service --user nova admin
$ openstack service create --name nova --description "OpenStack Compute" compute
```

Create the Compute service API endpoints:

```shell
$ openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1
$ openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1
$ openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1
```

Create the placement user and add the 'admin' role to the 'placement' user:

```shell
$ openstack user create --domain default --password-prompt placement
$ openstack role add --project service --user placement admin
```

Create the placement service credentials and API endpoints:

```shell
$ openstack service create --name placement --description "Placement API" placement
$ openstack endpoint create --region RegionOne placement public http://controller:8778
$ openstack endpoint create --region RegionOne placement internal http://controller:8778
$ openstack endpoint create --region RegionOne placement admin http://controller:8778
```

2. Install and configure.

Install the packages:

```shell
$ yum install openstack-nova-api openstack-nova-conductor \
  openstack-nova-novncproxy openstack-nova-scheduler openstack-nova-compute \
  openstack-nova-placement-api openstack-nova-console
```

Configure nova by editing the /etc/nova/nova.conf file:

- in the `[DEFAULT]` section, enable the compute and metadata APIs, configure the RabbitMQ message queue entry, configure `my_ip`, and enable the Networking service neutron;
- in the `[api_database]`, `[database]` and `[placement_database]` sections, configure the database entries;
- in the `[api]` and `[keystone_authtoken]` sections, configure the Identity service entry;
- in the `[vnc]` section, enable and configure the remote console entry;
- in the `[glance]` section, configure the address of the Image service API;
- in the `[oslo_concurrency]` section, configure the lock path;
- in the `[placement]` section, configure the entry of the Placement service.

```ini
[DEFAULT]
# ...
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
my_ip = 10.0.0.11
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver
compute_driver = libvirt.LibvirtDriver
instances_path = /var/lib/nova/instances/

[api_database]
# ...
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api

[database]
# ...
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova

[placement_database]
# ...
connection = mysql+pymysql://placement:PLACEMENT_DBPASS@controller/placement

[api]
# ...
auth_strategy = keystone

[keystone_authtoken]
# ...
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = NOVA_PASS

[vnc]
enabled = true
# ...
server_listen = $my_ip
server_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html

[glance]
# ...
api_servers = http://controller:9292

[oslo_concurrency]
# ...
lock_path = /var/lib/nova/tmp

[placement]
# ...
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = PLACEMENT_PASS

[neutron]
# ...
auth_url = http://controller:5000
auth_type = password
project_domain_name = Default
user_domain_name = Default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
```

Replace `RABBIT_PASS` with the password of the openstack account in RabbitMQ; set `my_ip` to the management IP address of the controller node; replace `NOVA_DBPASS` with the password of the nova database; replace `PLACEMENT_DBPASS` with the password of the placement database; replace `NOVA_PASS` with the password of the nova user; replace `PLACEMENT_PASS` with the password of the placement user; replace `NEUTRON_PASS` with the password of the neutron user.

Edit /etc/httpd/conf.d/00-nova-placement-api.conf to add the Placement API access configuration:

```
<Directory /usr/bin>
   <IfVersion >= 2.4>
      Require all granted
   </IfVersion>
   <IfVersion < 2.4>
      Order allow,deny
      Allow from all
   </IfVersion>
</Directory>
```

Restart the httpd service:

```shell
$ systemctl restart httpd
```

Synchronize the nova-api database:

```shell
$ su -s /bin/sh -c "nova-manage api_db sync" nova
```

Register the cell0 database:

```shell
$ su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
```

Create the cell1 cell:

```shell
$ su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
```

Synchronize the nova database:

```shell
$ su -s /bin/sh -c "nova-manage db sync" nova
```

Verify that cell0 and cell1 are registered correctly:

```shell
$ su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova
```

Determine whether hardware acceleration for virtual machines is supported (x86 architecture):

```shell
$ egrep -c '(vmx|svm)' /proc/cpuinfo
```

If the return value is 0, hardware acceleration is not supported and libvirt must be configured to use QEMU instead of KVM.

Note: on an ARM64 server, additionally set `cpu_mode` to `custom` and `cpu_model` to `cortex-a72`:

```ini
# vim /etc/nova/nova.conf
[libvirt]
# ...
virt_type = qemu
cpu_mode = custom
cpu_model = cortex-a72
```

If the return value is 1 or greater, hardware acceleration is supported and no extra configuration is required.

Note: for the arm64 architecture, the following commands must also be run on the compute node:

```shell
mkdir -p /usr/share/AAVMF
ln -s /usr/share/edk2/aarch64/QEMU_EFI-pflash.raw \
  /usr/share/AAVMF/AAVMF_CODE.fd
ln -s /usr/share/edk2/aarch64/vars-template-pflash.raw \
  /usr/share/AAVMF/AAVMF_VARS.fd
chown nova:nova /usr/share/AAVMF -R

vim /etc/libvirt/qemu.conf
nvram = ["/usr/share/AAVMF/AAVMF_CODE.fd:/usr/share/AAVMF/AAVMF_VARS.fd",
         "/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw:/usr/share/edk2/aarch64/vars-template-pflash.raw"]
```

Start the Compute services and their dependencies, and configure them to start on boot:

```shell
$ systemctl enable \
  openstack-nova-api.service \
  openstack-nova-scheduler.service \
  openstack-nova-conductor.service \
  openstack-nova-novncproxy.service
$ systemctl start \
  openstack-nova-api.service \
  openstack-nova-scheduler.service \
  openstack-nova-conductor.service \
  openstack-nova-novncproxy.service
$ systemctl enable libvirtd.service openstack-nova-compute.service
$ systemctl start libvirtd.service openstack-nova-compute.service
```

Add the compute node to the cell database.

Confirm that the compute node exists:

```shell
$ . admin-openrc
$ openstack compute service list --service nova-compute
```

Register the compute node:

```shell
$ su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
```

3. Verification.

```shell
$ . admin-openrc
```

List the service components to verify that every process started and registered successfully:

```shell
$ openstack compute service list
```

List the API endpoints in the Identity service to verify connectivity with the Identity service:

```shell
$ openstack catalog list
```

List the images in the Image service to verify connectivity with the Image service:

```shell
$ openstack image list
```

Check whether the cells and the placement API are working correctly and whether the other prerequisites are in place:

```shell
$ nova-status upgrade check
```
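Not required by this guide, but a small flavor is usually needed before the first test instance can be booted; the example below is only a sketch, and the name and sizes are arbitrary.

```shell
$ openstack flavor create --id 0 --vcpus 1 --ram 512 --disk 1 m1.tiny
```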
### Neutron Installation

1. Create the database, service credentials and API endpoints.

Create the database: access the database as the root user, create the neutron database and grant privileges:

```shell
$ mysql -u root -p
MariaDB [(none)]> CREATE DATABASE neutron;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
  IDENTIFIED BY 'NEUTRON_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
  IDENTIFIED BY 'NEUTRON_DBPASS';
MariaDB [(none)]> exit
```

Replace `NEUTRON_DBPASS` to set a password for the neutron database.

```shell
$ . admin-openrc
```

Run the following commands to create the neutron service credentials, create the neutron user and add the 'admin' role to the 'neutron' user.

Create the neutron service:

```shell
$ openstack user create --domain default --password-prompt neutron
$ openstack role add --project service --user neutron admin
$ openstack service create --name neutron --description "OpenStack Networking" network
```

Create the Networking service API endpoints:

```shell
$ openstack endpoint create --region RegionOne network public http://controller:9696
$ openstack endpoint create --region RegionOne network internal http://controller:9696
$ openstack endpoint create --region RegionOne network admin http://controller:9696
```

2. Install and configure the self-service network.

Install the packages:

```shell
$ yum install openstack-neutron openstack-neutron-ml2 \
  openstack-neutron-linuxbridge ebtables ipset
```

Configure neutron by editing the /etc/neutron/neutron.conf file:

- in the `[database]` section, configure the database entry;
- in the `[DEFAULT]` section, enable the ml2 and router plug-ins, allow overlapping IP addresses, and configure the RabbitMQ message queue entry;
- in the `[DEFAULT]` and `[keystone_authtoken]` sections, configure the Identity service entry;
- in the `[DEFAULT]` and `[nova]` sections, configure Networking to notify Compute of network topology changes;
- in the `[oslo_concurrency]` section, configure the lock path.

```ini
[database]
# ...
connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron

[DEFAULT]
# ...
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true

[keystone_authtoken]
# ...
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = neutron
password = NEUTRON_PASS

[nova]
# ...
auth_url = http://controller:5000
auth_type = password
project_domain_name = Default
user_domain_name = Default
region_name = RegionOne
project_name = service
username = nova
password = NOVA_PASS

[oslo_concurrency]
# ...
lock_path = /var/lib/neutron/tmp
```

Replace `NEUTRON_DBPASS` with the password of the neutron database, `RABBIT_PASS` with the password of the openstack account in RabbitMQ, `NEUTRON_PASS` with the password of the neutron user, and `NOVA_PASS` with the password of the nova user.

Configure the ML2 plug-in by editing the /etc/neutron/plugins/ml2/ml2_conf.ini file:

- in the `[ml2]` section, enable flat, vlan and vxlan networks, enable the linuxbridge and layer-2 population mechanisms, and enable the port security extension driver;
- in the `[ml2_type_flat]` section, configure the flat network as the provider virtual network;
- in the `[ml2_type_vxlan]` section, configure the VXLAN network identifier range;
- in the `[securitygroup]` section, enable ipset.

```ini
# vim /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
# ...
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security

[ml2_type_flat]
# ...
flat_networks = provider

[ml2_type_vxlan]
# ...
vni_ranges = 1:1000

[securitygroup]
# ...
enable_ipset = true
```

Configure the Linux bridge agent by editing the /etc/neutron/plugins/ml2/linuxbridge_agent.ini file:

- in the `[linux_bridge]` section, map the provider virtual network to the physical network interface;
- in the `[vxlan]` section, enable the vxlan overlay network, configure the IP address of the physical network interface that handles the overlay network, and enable layer-2 population;
- in the `[securitygroup]` section, enable security groups and configure the linux bridge iptables firewall driver.

```ini
[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME

[vxlan]
enable_vxlan = true
local_ip = OVERLAY_INTERFACE_IP_ADDRESS
l2_population = true

[securitygroup]
# ...
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
```

Replace `PROVIDER_INTERFACE_NAME` with the physical network interface and `OVERLAY_INTERFACE_IP_ADDRESS` with the management IP address of the controller node.

Configure the Layer-3 agent by editing the /etc/neutron/l3_agent.ini file; in the `[DEFAULT]` section, configure the interface driver as linuxbridge:

```ini
[DEFAULT]
# ...
interface_driver = linuxbridge
```

Configure the DHCP agent by editing the /etc/neutron/dhcp_agent.ini file; in the `[DEFAULT]` section, configure the linuxbridge interface driver and the Dnsmasq DHCP driver, and enable isolated metadata:

```ini
[DEFAULT]
# ...
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
```

Configure the metadata agent by editing the /etc/neutron/metadata_agent.ini file; in the `[DEFAULT]` section, configure the metadata host and the shared secret:

```ini
[DEFAULT]
# ...
nova_metadata_host = controller
metadata_proxy_shared_secret = METADATA_SECRET
```

Replace `METADATA_SECRET` with a suitable metadata proxy secret.

3. Configure the Compute service.

Edit the /etc/nova/nova.conf file; in the `[neutron]` section, configure the access parameters, enable the metadata proxy and configure the secret:

```ini
[neutron]
# ...
auth_url = http://controller:5000
auth_type = password
project_domain_name = Default
user_domain_name = Default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
service_metadata_proxy = true
metadata_proxy_shared_secret = METADATA_SECRET
```

Replace `NEUTRON_PASS` with the password of the neutron user and `METADATA_SECRET` with the metadata proxy secret chosen above.

4. Finish the installation.

Add the configuration file link:

```shell
$ ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
```

Synchronize the database:

```shell
$ su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
  --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
```

Restart the Compute API service:

```shell
$ systemctl restart openstack-nova-api.service
```

Start the Networking services and configure them to start on boot:

```shell
$ systemctl enable neutron-server.service \
  neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
  neutron-metadata-agent.service
$ systemctl start neutron-server.service \
  neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
  neutron-metadata-agent.service
$ systemctl enable neutron-l3-agent.service
$ systemctl start neutron-l3-agent.service
```

5. Verification.

List the agents to verify that the neutron agents started successfully:

```shell
$ openstack network agent list
```
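Going one step beyond the guide, a provider (flat) network can be created to exercise the ML2 and linuxbridge configuration above. This is only a sketch: it assumes the `provider` physical network mapping configured earlier, and the 203.0.113.0/24 range is a placeholder that must be replaced with a real external subnet.

```shell
$ source admin-openrc
$ openstack network create --share --external \
    --provider-physical-network provider \
    --provider-network-type flat provider
$ openstack subnet create --network provider \
    --allocation-pool start=203.0.113.101,end=203.0.113.250 \
    --dns-nameserver 8.8.8.8 --gateway 203.0.113.1 \
    --subnet-range 203.0.113.0/24 provider
```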
http://controller:8776/v2/%\\(project_id\\)s $ openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\\(project_id\\)s $ openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\\(project_id\\)s $ openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\\(project_id\\)s \u5b89\u88c5\u548c\u914d\u7f6e\u63a7\u5236\u8282\u70b9 \u5b89\u88c5\u8f6f\u4ef6\u5305\uff1a $ yum install openstack-cinder \u914d\u7f6ecinder\uff1a \u7f16\u8f91 /etc/cinder/cinder.conf \u6587\u4ef6\uff1a \u5728[database]\u90e8\u5206\uff0c\u914d\u7f6e\u6570\u636e\u5e93\u5165\u53e3\uff1b \u5728[DEFAULT]\u90e8\u5206\uff0c\u914d\u7f6eRabbitMQ\u6d88\u606f\u961f\u5217\u5165\u53e3\uff0c\u914d\u7f6emy_ip\uff1b \u5728[DEFAULT] [keystone_authtoken]\u90e8\u5206\uff0c\u914d\u7f6e\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u5165\u53e3\uff1b \u5728[oslo_concurrency]\u90e8\u5206\uff0c\u914d\u7f6elock path\u3002 [database] # ... connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder [DEFAULT] # ... transport_url = rabbit://openstack:RABBIT_PASS@controller auth_strategy = keystone my_ip = 10.0.0.11 [keystone_authtoken] # ... www_authenticate_uri = http://controller:5000 auth_url = http://controller:5000 memcached_servers = controller:11211 auth_type = password project_domain_name = Default user_domain_name = Default project_name = service username = cinder password = CINDER_PASS [oslo_concurrency] # ... lock_path = /var/lib/cinder/tmp \u66ff\u6362CINDER_DBPASS\u4e3acinder\u6570\u636e\u5e93\u7684\u5bc6\u7801\uff1b \u66ff\u6362RABBIT_PASS\u4e3aRabbitMQ\u4e2dopenstack\u8d26\u6237\u7684\u5bc6\u7801\uff1b \u914d\u7f6emy_ip\u4e3a\u63a7\u5236\u8282\u70b9\u7684\u7ba1\u7406IP\u5730\u5740\uff1b \u66ff\u6362CINDER_PASS\u4e3acinder\u7528\u6237\u7684\u5bc6\u7801\uff1b \u540c\u6b65\u6570\u636e\u5e93\uff1a $ su -s /bin/sh -c \"cinder-manage db sync\" cinder \u914d\u7f6e\u8ba1\u7b97\u4f7f\u7528\u5757\u5b58\u50a8\uff1a \u7f16\u8f91 /etc/nova/nova.conf \u6587\u4ef6\u3002 [cinder] os_region_name = RegionOne \u5b8c\u6210\u5b89\u88c5\uff1a \u91cd\u542f\u8ba1\u7b97API\u670d\u52a1 $ systemctl restart openstack-nova-api.service \u542f\u52a8\u5757\u5b58\u50a8\u670d\u52a1 $ systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service $ systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service \u5b89\u88c5\u548c\u914d\u7f6e\u5b58\u50a8\u8282\u70b9\uff08LVM\uff09 \u5b89\u88c5\u8f6f\u4ef6\u5305\uff1a $ yum install lvm2 device-mapper-persistent-data scsi-target-utils python2-keystone \\ openstack-cinder-volume \u521b\u5efaLVM\u7269\u7406\u5377 /dev/sdb\uff1a $ pvcreate /dev/sdb \u521b\u5efaLVM\u5377\u7ec4 cinder-volumes\uff1a $ vgcreate cinder-volumes /dev/sdb \u7f16\u8f91 /etc/lvm/lvm.conf \u6587\u4ef6\uff1a \u5728devices\u90e8\u5206\uff0c\u6dfb\u52a0\u8fc7\u6ee4\u4ee5\u63a5\u53d7/dev/sdb\u8bbe\u5907\u62d2\u7edd\u5176\u4ed6\u8bbe\u5907\u3002 devices { # ... filter = [ \"a/sdb/\", \"r/.*/\"] \u7f16\u8f91 /etc/cinder/cinder.conf \u6587\u4ef6\uff1a \u5728[lvm]\u90e8\u5206\uff0c\u4f7f\u7528LVM\u9a71\u52a8\u3001cinder-volumes\u5377\u7ec4\u3001iSCSI\u534f\u8bae\u548c\u9002\u5f53\u7684iSCSI\u670d\u52a1\u914d\u7f6eLVM\u540e\u7aef\u3002 \u5728[DEFAULT]\u90e8\u5206\uff0c\u542f\u7528LVM\u540e\u7aef\uff0c\u914d\u7f6e\u955c\u50cf\u670d\u52a1API\u7684\u4f4d\u7f6e\u3002 [lvm] volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver volume_group = cinder-volumes target_protocol = iscsi target_helper = lioadm [DEFAULT] # ... 
Note: when cinder attaches volumes using tgtadm, modify /etc/tgt/tgtd.conf with the following content so that tgtd can discover the iSCSI targets created by cinder-volume:

```
include /var/lib/cinder/volumes/*
```

Finalize the installation:

```
$ systemctl enable openstack-cinder-volume.service tgtd.service iscsid.service
$ systemctl start openstack-cinder-volume.service tgtd.service iscsid.service
```

**Install and configure the storage node (Ceph RBD).**

Install the packages:

```
$ yum install ceph-common python2-rados python2-rbd python2-keystone openstack-cinder-volume
```

In the [DEFAULT] section, enable the ceph-rbd back end:

```
[DEFAULT]
enabled_backends = ceph-rbd
```

Add a ceph rbd configuration section; the section name must match the value used in enabled_backends:

```
[ceph-rbd]
glance_api_version = 2
rados_connect_timeout = -1
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = False
rbd_max_clone_depth = 5
rbd_pool =          # name of the RBD storage pool
rbd_secret_uuid =   # randomly generated secret UUID
rbd_store_chunk_size = 4
rbd_user =
volume_backend_name = ceph-rbd
volume_driver = cinder.volume.drivers.rbd.RBDDriver
```

Configure the ceph client on the storage node. Make sure the /etc/ceph/ directory contains the ceph cluster access configuration, including ceph.conf and the keyring:

```
[root@openeuler ~]# ll /etc/ceph
-rw-r--r-- 1 root root   82 Jun 16 17:11 ceph.client..keyring
-rw-r--r-- 1 root root 1.5K Jun 16 17:11 ceph.conf
-rw-r--r-- 1 root root   92 Jun 16 17:11 rbdmap
```

On the storage node, check that the ceph cluster is healthy and reachable:

```
[root@openeuler ~]# ceph --user cinder -s
  cluster:
    id:     b7b2fac6-420f-4ec1-aea2-4862d29b4059
    health: HEALTH_OK
  services:
    mon: 3 daemons, quorum VIRT01,VIRT02,VIRT03
    mgr: VIRT03(active), standbys: VIRT02, VIRT01
    mds: cephfs_virt-1/1/1 up {0=VIRT03=up:active}, 2 up:standby
    osd: 15 osds: 15 up, 15 in
  data:
    pools:   7 pools, 1416 pgs
    objects: 5.41M objects, 19.8TiB
    usage:   49.3TiB used, 59.9TiB / 109TiB avail
    pgs:     1414 active
  io:
    client: 2.73MiB/s rd, 22.4MiB/s wr, 3.21kop/s rd, 1.19kop/s wr
```

Start the service:

```
$ systemctl enable openstack-cinder-volume.service
$ systemctl start openstack-cinder-volume.service
```
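The rbd_pool, rbd_user, and rbd_secret_uuid values above depend on your Ceph deployment. A minimal sketch of preparing them, assuming an illustrative pool named volumes and a Ceph client named cinder (adapt the names and capabilities to your cluster):

```
# Generate a UUID once and reuse it as rbd_secret_uuid on every cinder-volume
# (and, for attachments, on the nova-compute) node; the value is just an example.
uuidgen
# 457eb676-33da-42ec-9a8c-9293d545c337

# Assumed Ceph-side setup: create a pool and a keyring for the cinder client,
# then place the keyring under /etc/ceph/ on the storage node.
ceph osd pool create volumes 128
ceph auth get-or-create client.cinder mon 'allow r' osd 'allow rwx pool=volumes' \
  -o /etc/ceph/ceph.client.cinder.keyring
```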
**Install and configure the backup service.**

Edit the /etc/cinder/cinder.conf file. In the [DEFAULT] section, configure the backup options:

```
[DEFAULT]
# ...
# Note: openEuler 21.03 does not provide an OpenStack Swift package; install it
# yourself, or use another backup back end such as NFS. NFS has been tested and works.
backup_driver = cinder.backup.drivers.swift.SwiftBackupDriver
backup_swift_url = SWIFT_URL
```

Replace SWIFT_URL with the URL of the Object Storage service. The URL can be found from the object-store API endpoint:

```
$ openstack catalog show object-store
```

Finalize the installation:

```
$ systemctl enable openstack-cinder-backup.service
$ systemctl start openstack-cinder-backup.service
```

**Verification.** List the service components to verify that each step succeeded:

```
$ source admin-openrc
$ openstack volume service list
```

Note: the Swift component is not supported yet; if your environment allows, configure Ceph as the backend instead.

## Horizon Installation

1. Install the package:

```
$ yum install openstack-dashboard
```

2. Modify the file /usr/share/openstack-dashboard/openstack_dashboard/local/local_settings.py.

Modify the following variables:

```
ALLOWED_HOSTS = ['*', ]
OPENSTACK_HOST = "controller"
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'controller:11211',
    }
}
```

Add the following variables:

```
OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 3,
}
WEBROOT = "/dashboard/"
COMPRESS_OFFLINE = True
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "default"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "admin"
LOGIN_URL = '/dashboard/auth/login/'
LOGOUT_URL = '/dashboard/auth/logout/'
```

3. Modify the file /etc/httpd/conf.d/openstack-dashboard.conf:

```
WSGIDaemonProcess dashboard
WSGIProcessGroup dashboard
WSGISocketPrefix run/wsgi
WSGIApplicationGroup %{GLOBAL}
WSGIScriptAlias /dashboard /usr/share/openstack-dashboard/openstack_dashboard/wsgi/django.wsgi
Alias /dashboard/static /usr/share/openstack-dashboard/static

<Directory /usr/share/openstack-dashboard/openstack_dashboard/wsgi>
    Options All
    AllowOverride All
    Require all granted
</Directory>

<Directory /usr/share/openstack-dashboard/static>
    Options All
    AllowOverride All
    Require all granted
</Directory>
```

4. In the /usr/share/openstack-dashboard directory, run:

```
$ ./manage.py compress
```

5. Restart the httpd service:

```
$ systemctl restart httpd
```

6. Verify: open a browser, enter the dashboard URL (http://...), and log in to Horizon.

## Tempest Installation

Tempest is the OpenStack integration test suite. It is recommended if you need fully automated functional testing of the installed OpenStack environment; otherwise it can be skipped.

1. Install Tempest:

```
$ yum install openstack-tempest
```

2. Initialize a workspace:

```
$ tempest init mytest
```

3. Edit the configuration file:

```
$ cd mytest
$ vi etc/tempest.conf
```

tempest.conf must be configured with the details of the current OpenStack environment; see the official sample for the full contents.

4. Run the tests:

```
$ tempest run
```
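For orientation, a minimal illustrative tempest.conf fragment for this kind of environment is sketched below; all values are placeholders for your deployment and only a small subset of the available options is shown (refer to the official sample for the complete list):

```
[auth]
admin_username = admin
admin_password = ADMIN_PASS
admin_project_name = admin
admin_domain_name = Default

[identity]
uri_v3 = http://controller:5000/v3

[compute]
image_ref = <UUID of an uploaded test image, e.g. cirros>
flavor_ref = <ID of a small flavor>
```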
## Ironic Installation

Ironic is the OpenStack Bare Metal service. It is recommended if you need bare metal provisioning; otherwise it can be skipped.

**Set up the database.**

The Bare Metal service stores information in a database. Create an ironic database that the ironic user can access; replace IRONIC_DBPASSWORD with a suitable password:

```
$ mysql -u root -p
MariaDB [(none)]> CREATE DATABASE ironic CHARACTER SET utf8;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'localhost' \
  IDENTIFIED BY 'IRONIC_DBPASSWORD';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'%' \
  IDENTIFIED BY 'IRONIC_DBPASSWORD';
```

**Component installation and configuration.**

##### Create the service user credentials

1. Create the Bare Metal service users:

```
$ openstack user create --password IRONIC_PASSWORD \
  --email ironic@example.com ironic
$ openstack role add --project service --user ironic admin
$ openstack service create --name ironic --description \
  "Ironic baremetal provisioning service" baremetal
$ openstack service create --name ironic-inspector --description "Ironic inspector baremetal provisioning service" baremetal-introspection
$ openstack user create --password IRONIC_INSPECTOR_PASSWORD --email ironic_inspector@example.com ironic_inspector
$ openstack role add --project service --user ironic-inspector admin
```

2. Create the Bare Metal service endpoints:

```
$ openstack endpoint create --region RegionOne baremetal admin http://$IRONIC_NODE:6385
$ openstack endpoint create --region RegionOne baremetal public http://$IRONIC_NODE:6385
$ openstack endpoint create --region RegionOne baremetal internal http://$IRONIC_NODE:6385
$ openstack endpoint create --region RegionOne baremetal-introspection internal http://$IRONIC_NODE:5050/v1
$ openstack endpoint create --region RegionOne baremetal-introspection public http://$IRONIC_NODE:5050/v1
$ openstack endpoint create --region RegionOne baremetal-introspection admin http://$IRONIC_NODE:5050/v1
```

##### Configure the ironic-api service

Configuration file path: /etc/ironic/ironic.conf

1. Configure the database location with the connection option as shown below. Replace IRONIC_DBPASSWORD with the password of the ironic user and DB_IP with the IP address of the DB server:

```
[database]
# The SQLAlchemy connection string used to connect to the
# database (string value)
connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic
```

2. Configure the ironic-api service to use the RabbitMQ message broker with the following option; replace RPC_* with the RabbitMQ address details and credentials:

```
[DEFAULT]
# A URL representing the messaging driver to use and its full
# configuration. (string value)
transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
```

You can also replace RabbitMQ with the json-rpc transport if you prefer.
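For the json-rpc alternative mentioned above, a rough sketch of the corresponding ironic.conf settings is given below. This assumes a release where the json-rpc transport is available; the option names are taken from upstream ironic and may not apply to the packages in this repository, so treat it as an illustration rather than a tested configuration:

```
[DEFAULT]
# use ironic's built-in JSON-RPC instead of an oslo.messaging (RabbitMQ) transport
rpc_transport = json-rpc

[json_rpc]
# address and port the conductor's JSON-RPC endpoint listens on (illustrative values)
host_ip = 0.0.0.0
port = 8089
```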
3. Configure the ironic-api service to use the Identity service credentials. Replace PUBLIC_IDENTITY_IP with the public IP of the Identity server, PRIVATE_IDENTITY_IP with the private IP of the Identity server, and IRONIC_PASSWORD with the password of the ironic user in the Identity service:

```
[DEFAULT]
# Authentication strategy used by ironic-api: one of
# "keystone" or "noauth". "noauth" should not be used in a
# production environment because all authentication will be
# disabled. (string value)
auth_strategy=keystone
force_config_drive = True

[keystone_authtoken]
# Authentication type to load (string value)
auth_type=password
# Complete public Identity API endpoint (string value)
www_authenticate_uri=http://PUBLIC_IDENTITY_IP:5000
# Complete admin Identity API endpoint. (string value)
auth_url=http://PRIVATE_IDENTITY_IP:5000
# Service username. (string value)
username=ironic
# Service account password. (string value)
password=IRONIC_PASSWORD
# Service tenant name. (string value)
project_name=service
# Domain name containing project (string value)
project_domain_name=Default
# User's domain name (string value)
user_domain_name=Default
```

4. Specify the ironic log directory in the configuration file:

```
[DEFAULT]
log_dir = /var/log/ironic/
```

5. Create the Bare Metal service database tables:

```
$ ironic-dbsync --config-file /etc/ironic/ironic.conf create_schema
```

6. Restart the ironic-api service:

```
$ systemctl restart openstack-ironic-api
```

##### Configure the ironic-conductor service

1. Replace HOST_IP with the IP of the conductor host:

```
[DEFAULT]
# IP address of this host. If unset, will determine the IP
# programmatically. If unable to do so, will use "127.0.0.1".
# (string value)
my_ip=HOST_IP
```

2. Configure the database location; ironic-conductor should use the same configuration as ironic-api. Replace IRONIC_DBPASSWORD with the password of the ironic user and DB_IP with the IP address of the DB server:

```
[database]
# The SQLAlchemy connection string to use to connect to the
# database. (string value)
connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic
```
3. Configure the ironic-conductor service to use the RabbitMQ message broker; it should use the same configuration as ironic-api. Replace RPC_* with the RabbitMQ address details and credentials:

```
[DEFAULT]
# A URL representing the messaging driver to use and its full
# configuration. (string value)
transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
```

You can also replace RabbitMQ with the json-rpc transport if you prefer.

4. Configure credentials for accessing other OpenStack services.

To communicate with other OpenStack services, the Bare Metal service needs to authenticate with the OpenStack Identity service using a service user when calling them. The credentials of these users must be configured in the configuration section associated with each service:

- [neutron] - accessing the OpenStack Networking service
- [glance] - accessing the OpenStack Image service
- [swift] - accessing the OpenStack Object Storage service
- [cinder] - accessing the OpenStack Block Storage service
- [inspector] - accessing the OpenStack bare metal introspection service
- [service_catalog] - a special entry holding the credentials the Bare Metal service uses to discover its own API URL endpoint as registered in the OpenStack Identity service catalog

For simplicity, the same service user can be used for all services. For backward compatibility, it should be the same user that is configured in the [keystone_authtoken] section of the ironic-api service, but this is not mandatory; you can also create and configure a different service user for each service.

In the example below, the credentials for accessing the OpenStack Networking service are configured such that:

- the Networking service is deployed in the Identity service region named RegionOne, with only the public endpoint interface registered in the service catalog;
- requests use HTTPS connections with a specific CA SSL certificate;
- the same service user as configured for ironic-api is used;
- the dynamic password authentication plugin discovers a suitable Identity service API version based on the other options.

```
[neutron]
# Authentication type to load (string value)
auth_type = password
# Authentication URL (string value)
auth_url=https://IDENTITY_IP:5000/
# Username (string value)
username=ironic
# User's password (string value)
password=IRONIC_PASSWORD
# Project name to scope to (string value)
project_name=service
# Domain ID containing project (string value)
project_domain_id=default
# User's domain id (string value)
user_domain_id=default
# PEM encoded Certificate Authority to use when verifying
# HTTPs connections. (string value)
cafile=/opt/stack/data/ca-bundle.pem
# The default region_name for endpoint URL discovery. (string
# value)
region_name = RegionOne
# List of interfaces, in order of preference, for endpoint
# URL. (list value)
valid_interfaces=public
```
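The other service sections follow the same pattern. As an illustration only, a [glance] section reusing the same ironic service user as above might look like the sketch below (the option names mirror the [neutron] example; adapt the URL, certificate, and interface settings to your deployment):

```
[glance]
auth_type = password
auth_url = https://IDENTITY_IP:5000/
username = ironic
password = IRONIC_PASSWORD
project_name = service
project_domain_id = default
user_domain_id = default
region_name = RegionOne
valid_interfaces = public
```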
By default, to communicate with another service, the Bare Metal service tries to discover a suitable endpoint for that service through the service catalog of the Identity service. If you want to use a different endpoint for a particular service, specify it with the endpoint_override option in the Bare Metal service configuration file:

```
[neutron]
# ...
endpoint_override =
```

5. Configure the allowed drivers and hardware types.

Set the hardware types allowed by the ironic-conductor service with enabled_hardware_types:

```
[DEFAULT]
enabled_hardware_types = ipmi
```

Configure the hardware interfaces:

```
enabled_boot_interfaces = pxe
enabled_deploy_interfaces = direct,iscsi
enabled_inspect_interfaces = inspector
enabled_management_interfaces = ipmitool
enabled_power_interfaces = ipmitool
```

Configure the interface defaults:

```
[DEFAULT]
default_deploy_interface = direct
default_network_interface = neutron
```

If any driver using direct deploy is enabled, the Swift back end of the Image service must be installed and configured. The Ceph Object Gateway (RADOS Gateway) is also supported as an Image service back end.

6. Restart the ironic-conductor service:

```
$ systemctl restart openstack-ironic-conductor
```

##### Configure the ironic-inspector service

Configuration file path: /etc/ironic-inspector/inspector.conf

1. Create the database:

```
$ mysql -u root -p
MariaDB [(none)]> CREATE DATABASE ironic_inspector CHARACTER SET utf8;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic_inspector.* TO 'ironic_inspector'@'localhost' \
  IDENTIFIED BY 'IRONIC_INSPECTOR_DBPASSWORD';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic_inspector.* TO 'ironic_inspector'@'%' \
  IDENTIFIED BY 'IRONIC_INSPECTOR_DBPASSWORD';
```

2. Configure the database location with the connection option as shown below. Replace IRONIC_INSPECTOR_DBPASSWORD with the password of the ironic_inspector user and DB_IP with the IP address of the DB server:

```
[database]
backend = sqlalchemy
connection = mysql+pymysql://ironic_inspector:IRONIC_INSPECTOR_DBPASSWORD@DB_IP/ironic_inspector
```

3. Create the database tables with ironic-inspector-dbsync:

```
ironic-inspector-dbsync --config-file /etc/ironic-inspector/inspector.conf upgrade
```

4. Configure the message queue transport address:

```
[DEFAULT]
transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
```

5. Set up Keystone authentication:

```
[DEFAULT]
auth_strategy = keystone

[ironic]
api_endpoint = http://IRONIC_API_HOST_ADDRRESS:6385
auth_type = password
auth_url = http://PUBLIC_IDENTITY_IP:5000
auth_strategy = keystone
ironic_url = http://IRONIC_API_HOST_ADDRRESS:6385
os_region = RegionOne
project_name = service
project_domain_name = Default
user_domain_name = Default
username = IRONIC_SERVICE_USER_NAME
password = IRONIC_SERVICE_USER_PASSWORD
```
6. Configure the ironic-inspector dnsmasq service:

```
# Configuration file: /etc/ironic-inspector/dnsmasq.conf
port=0
interface=enp3s0                          # replace with the actual listening network interface
dhcp-range=172.20.19.100,172.20.19.110    # replace with the actual DHCP address range
bind-interfaces
enable-tftp
dhcp-match=set:efi,option:client-arch,7
dhcp-match=set:efi,option:client-arch,9
dhcp-match=aarch64, option:client-arch,11
dhcp-boot=tag:aarch64,grubaa64.efi
dhcp-boot=tag:!aarch64,tag:efi,grubx64.efi
dhcp-boot=tag:!aarch64,tag:!efi,pxelinux.0
tftp-root=/tftpboot                       # replace with the actual tftpboot directory
log-facility=/var/log/dnsmasq.log
```

7. Start the services:

```
$ systemctl enable --now openstack-ironic-inspector.service
$ systemctl enable --now openstack-ironic-inspector-dnsmasq.service
```

8. If the ironic services are deployed on a standalone node, the iscsid.service service also needs to be deployed and started:

```
$ systemctl enable openstack-cinder-volume.service tgtd.service iscsid.service
$ systemctl start openstack-cinder-volume.service tgtd.service iscsid.service
```

Note: support for the arm architecture is incomplete; adapt the steps to your own situation as needed.

**Building the deploy ramdisk image.**

The ramdisk image can currently be built with ironic-python-agent-builder. This section walks through the complete process of building the deploy image used by ironic with this tool. (You can also obtain ironic-python-agent in another way that suits your situation; the method described here uses ipa-builder to build the IPA image.)

##### Install ironic-python-agent-builder

1. Install the tool:

```
$ pip install ironic-python-agent-builder
```

2. Modify the python interpreter in the following files:

```
/usr/bin/yum
/usr/libexec/urlgrabber-ext-down
```

3. Install the other required tools:

```
$ yum install git
```

Since DIB depends on the semanage command, check that the command is available before building the image by running `semanage --help`. If it reports that the command does not exist, install it:

```
# First check which package provides it
[root@localhost ~]# yum provides /usr/sbin/semanage
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirror.vcu.edu
 * extras: mirror.vcu.edu
 * updates: mirror.math.princeton.edu
policycoreutils-python-2.5-34.el7.aarch64 : SELinux policy core python utilities
Repo        : base
Matched from:
Filename    : /usr/sbin/semanage
# Install it
[root@localhost ~]# yum install policycoreutils-python
```

##### Build the image

For the aarch64 architecture, additionally export:

```
$ export ARCH=aarch64
```

###### Base image

Basic usage:

```
usage: ironic-python-agent-builder [-h] [-r RELEASE] [-o OUTPUT] [-e ELEMENT]
                                   [-b BRANCH] [-v] [--extra-args EXTRA_ARGS]
                                   distribution

positional arguments:
  distribution          Distribution to use

optional arguments:
  -h, --help            show this help message and exit
  -r RELEASE, --release RELEASE
                        Distribution release to use
  -o OUTPUT, --output OUTPUT
                        Output base file name
  -e ELEMENT, --element ELEMENT
                        Additional DIB element to use
  -b BRANCH, --branch BRANCH
                        If set, override the branch that is used for ironic-
                        python-agent and requirements
  -v, --verbose         Enable verbose logging in diskimage-builder
  --extra-args EXTRA_ARGS
                        Extra arguments to pass to diskimage-builder
```
For example:

```
$ ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky
```

###### Allow SSH login

Initialize the environment variables, then build the image:

```
$ export DIB_DEV_USER_USERNAME=ipa
$ export DIB_DEV_USER_PWDLESS_SUDO=yes
$ export DIB_DEV_USER_PASSWORD='123'
$ ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky -e selinux-permissive -e devuser
```

###### Specify the code repository

Initialize the corresponding environment variables, then build the image:

```
# Specify the repository location and version
DIB_REPOLOCATION_ironic_python_agent=git@172.20.2.149:liuzz/ironic-python-agent.git
DIB_REPOREF_ironic_python_agent=origin/develop

# Clone the code directly from gerrit
DIB_REPOLOCATION_ironic_python_agent=https://review.opendev.org/openstack/ironic-python-agent
DIB_REPOREF_ironic_python_agent=refs/changes/43/701043/1
```

Reference: source-repositories.

Specifying the repository location and version has been verified to work.

## Kolla Installation

Kolla provides production-ready containerized deployment for OpenStack services. openEuler 20.03 LTS SP2 introduces the Kolla and Kolla-ansible services.

Installing Kolla is straightforward; just install the corresponding RPM packages:

```
$ yum install openstack-kolla openstack-kolla-ansible
```

After installation, the kolla-ansible, kolla-build, kolla-genpwd, and kolla-mergepwd commands are available.
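As a rough sketch of how these commands are typically used, a minimal all-in-one kolla-ansible flow is shown below. The example configuration and inventory paths are the usual locations for a packaged kolla-ansible install and may differ here; consult the kolla-ansible documentation for your release before relying on them:

```
# copy the sample configuration and an inventory (paths are assumptions)
cp -r /usr/share/kolla-ansible/etc_examples/kolla /etc/kolla
cp /usr/share/kolla-ansible/ansible/inventory/all-in-one .

# fill /etc/kolla/passwords.yml with random passwords
kolla-genpwd

# prepare the hosts, run the prechecks, then deploy
kolla-ansible -i all-in-one bootstrap-servers
kolla-ansible -i all-in-one prechecks
kolla-ansible -i all-in-one deploy
```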
## Trove Installation

Trove is the OpenStack Database service. It is recommended if you want to use the database service provided by OpenStack; otherwise it can be skipped.

1. Set up the database.

The Database service stores information in a database. Create a trove database that the trove user can access; replace TROVE_DBPASSWORD with an appropriate password:

```
$ mysql -u root -p
MariaDB [(none)]> CREATE DATABASE trove CHARACTER SET utf8;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'localhost' \
  IDENTIFIED BY 'TROVE_DBPASSWORD';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'%' \
  IDENTIFIED BY 'TROVE_DBPASSWORD';
```

2. Create the service user credentials.

1) Create the Trove service user:

```
$ openstack user create --password TROVE_PASSWORD \
  --email trove@example.com trove
$ openstack role add --project service --user trove admin
$ openstack service create --name trove --description "Database service" database
```

Explanation: replace TROVE_PASSWORD with the password of the trove user.

2) Create the Database service endpoints:

```
$ openstack endpoint create --region RegionOne database public http://$TROVE_NODE:8779/v1.0/%\(tenant_id\)s
$ openstack endpoint create --region RegionOne database internal http://$TROVE_NODE:8779/v1.0/%\(tenant_id\)s
$ openstack endpoint create --region RegionOne database admin http://$TROVE_NODE:8779/v1.0/%\(tenant_id\)s
```

Explanation: replace $TROVE_NODE with the node where the Trove API service is deployed.

3. Install and configure the Trove components.

1) Install the Trove packages:

```
$ yum install openstack-trove python-troveclient
```

2) Configure /etc/trove/trove.conf:

```
[DEFAULT]
bind_host=TROVE_NODE_IP
log_dir = /var/log/trove
auth_strategy = keystone
# Config option for showing the IP address that nova doles out
add_addresses = True
network_label_regex = ^NETWORK_LABEL$
api_paste_config = /etc/trove/api-paste.ini
trove_auth_url = http://controller:35357/v3/
nova_compute_url = http://controller:8774/v2
cinder_url = http://controller:8776/v1
nova_proxy_admin_user = admin
nova_proxy_admin_pass = ADMIN_PASS
nova_proxy_admin_tenant_name = service
taskmanager_manager = trove.taskmanager.manager.Manager
use_nova_server_config_drive = True
# Set these if using Neutron Networking
network_driver=trove.network.neutron.NeutronDriver
network_label_regex=.*
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/

[database]
connection = mysql+pymysql://trove:TROVE_DBPASS@controller/trove

[keystone_authtoken]
www_authenticate_uri = http://controller:5000/v3/
auth_url=http://controller:35357/v3/
#auth_uri = http://controller/identity
#auth_url = http://controller/identity_admin
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = trove
password = TROVE_PASS
```

Explanation:

- In the [DEFAULT] section, bind_host is the IP of the node where Trove is deployed.
- nova_compute_url and cinder_url are the endpoints created by Nova and Cinder in Keystone.
- nova_proxy_XXX is the information of a user that can access the Nova service; the example above uses the admin user.
- transport_url is the RabbitMQ connection information; replace RABBIT_PASS with the RabbitMQ password.
- connection in the [database] section points to the database created for Trove in mysql earlier.
- In the Trove user information, replace TROVE_PASS with the actual password of the trove user.

3) Configure /etc/trove/trove-taskmanager.conf:

```
[DEFAULT]
log_dir = /var/log/trove
trove_auth_url = http://controller/identity/v2.0
nova_compute_url = http://controller:8774/v2
cinder_url = http://controller:8776/v1
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/

[database]
connection = mysql+pymysql://trove:TROVE_DBPASS@controller/trove
```

Explanation: configure it by analogy with trove.conf.

4) Configure /etc/trove/trove-conductor.conf:

```
[DEFAULT]
log_dir = /var/log/trove
trove_auth_url = http://controller/identity/v2.0
nova_compute_url = http://controller:8774/v2
cinder_url = http://controller:8776/v1
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/

[database]
connection = mysql+pymysql://trove:trove@controller/trove
```

Explanation: configure it by analogy with trove.conf.

5) Configure /etc/trove/trove-guestagent.conf:

```
[DEFAULT]
rabbit_host = controller
rabbit_password = RABBIT_PASS
nova_proxy_admin_user = admin
nova_proxy_admin_pass = ADMIN_PASS
nova_proxy_admin_tenant_name = service
trove_auth_url = http://controller/identity_admin/v2.0
```

Explanation: guestagent is a standalone Trove component. It must be built in advance into the virtual machine image that Trove creates through Nova. After a database instance is created, the guestagent process starts and reports heartbeats to Trove over the message queue (RabbitMQ), so the RabbitMQ user and password must be configured here.
6) Generate the Trove database tables:

```
$ su -s /bin/sh -c "trove-manage db_sync" trove
```

4. Finalize the installation.

1) Configure the Trove services to start at boot:

```
$ systemctl enable openstack-trove-api.service \
  openstack-trove-taskmanager.service \
  openstack-trove-conductor.service
```

2) Start the services:

```
$ systemctl start openstack-trove-api.service \
  openstack-trove-taskmanager.service \
  openstack-trove-conductor.service
```

# OpenStack-Queens Deployment Guide

- OpenStack Introduction
- Conventions
- Preparing the Environment
    - Environment Configuration
    - Installing the SQL Database
    - Installing RabbitMQ
    - Installing Memcached
- Installing OpenStack
    - Keystone Installation
    - Glance Installation
    - Nova Installation
    - Neutron Installation
    - Cinder Installation
    - Horizon Installation
    - Tempest Installation
    - Ironic Installation
    - Kolla Installation
    - Trove Installation
    - Rally Installation

## OpenStack Introduction

OpenStack is both a community and a project. It provides an operating platform and a tool set for deploying clouds, offering organizations scalable and flexible cloud computing.

As an open source cloud computing management platform, OpenStack combines several major components, such as nova, cinder, neutron, glance, keystone, and horizon, to do the actual work. OpenStack supports almost all types of cloud environments. The project aims to provide a cloud computing management platform that is simple to implement, massively scalable, feature rich, and based on unified standards. OpenStack delivers an Infrastructure-as-a-Service (IaaS) solution through a set of complementary services, each of which exposes an API for integration.

The officially certified third-party oepkg yum repository for openEuler 20.03-LTS-SP3 already supports the OpenStack Queens release. After configuring the oepkg yum repository, you can deploy OpenStack by following this document.

## Conventions

OpenStack supports several deployment topologies. This document covers both the ALL in One and the Distributed deployment modes, with the following conventions:

- ALL in One mode: ignore all suffixes.
- Distributed mode:
    - A `(CTL)` suffix means the configuration item or command applies only to the `controller node`.
    - A `(CPT)` suffix means the configuration item or command applies only to the `compute node`.
    - Everything else applies to both the `controller node` and the `compute node`.

Note: the services covered by the conventions above are Cinder, Nova, and Neutron.
## Preparing the Environment

### Environment Configuration

Configure the officially certified third-party oepkg repository for 20.03-LTS-SP3:

```
cat << EOF >> /etc/yum.repos.d/OpenStack_Queens.repo
[openstack_queens]
name=OpenStack_Queens
baseurl=https://repo.oepkgs.net/openEuler/rpm/openEuler-20.03-LTS-SP3/budding-openeuler/openstack/queens/\$basearch/
gpgcheck=0
enabled=1
EOF
```

Note: if the Epol repository is enabled in the environment, raise the priority of the queens repository by setting priority=1:

```
cat << EOF >> /etc/yum.repos.d/OpenStack_Queens.repo
[openstack_queens]
name=OpenStack_Queens
baseurl=https://repo.oepkgs.net/openEuler/rpm/openEuler-20.03-LTS-SP3/budding-openeuler/openstack/queens/\$basearch/
gpgcheck=0
enabled=1
priority=1
EOF
```

```
$ yum clean all && yum makecache
```

Modify the host names and the host mapping.

Set the host name of each node:

```
hostnamectl set-hostname controller    (CTL)
hostnamectl set-hostname compute       (CPT)
```

Assuming the IP of the controller node is 10.0.0.11 and the IP of the compute node (if present) is 10.0.0.12, add the following to /etc/hosts:

```
10.0.0.11   controller
10.0.0.12   compute
```

### Installing the SQL Database

Run the following command to install the packages:

```
yum install mariadb mariadb-server python2-PyMySQL
```

Create and edit the /etc/my.cnf.d/openstack.cnf file:

```
vim /etc/my.cnf.d/openstack.cnf

[mysqld]
bind-address = 10.0.0.11
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
```

Note: set bind-address to the management IP address of the controller node.

Start the database service and configure it to start at boot:

```
systemctl enable mariadb.service
systemctl start mariadb.service
```

Set a default password for the database (optional):

```
mysql_secure_installation
```

Note: just follow the prompts.

### Installing RabbitMQ

Run the following command to install the packages:

```
yum install rabbitmq-server
```

Start the RabbitMQ service and configure it to start at boot:

```
systemctl enable rabbitmq-server.service
systemctl start rabbitmq-server.service
```

Add the openstack user:

```
rabbitmqctl add_user openstack RABBIT_PASS
```

Note: replace RABBIT_PASS with a password for the openstack user.

Set the permissions of the openstack user to allow configuration, write, and read access:

```
rabbitmqctl set_permissions openstack ".*" ".*" ".*"
```
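As an optional sanity check at this point, you can confirm that the openstack user and its permissions were created as expected (these are standard rabbitmqctl subcommands; the output shown by your broker may vary by version):

```
rabbitmqctl list_users
rabbitmqctl list_permissions
```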
### Installing Memcached

Run the following command to install the dependency packages:

```
yum install memcached python2-memcached
```

Edit the /etc/sysconfig/memcached file:

```
vim /etc/sysconfig/memcached

OPTIONS="-l 127.0.0.1,::1,controller"
```

Run the following commands to start the Memcached service and configure it to start at boot:

```
systemctl enable memcached.service
systemctl start memcached.service
```

After the service is started, you can run `memcached-tool controller stats` to confirm that it started normally and is available; `controller` can be replaced with the management IP address of the controller node.

## Installing OpenStack

### Keystone Installation

Create the keystone database and grant privileges:

```
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE keystone;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
IDENTIFIED BY 'KEYSTONE_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
IDENTIFIED BY 'KEYSTONE_DBPASS';
MariaDB [(none)]> exit
```

Note: replace KEYSTONE_DBPASS with a password for the keystone database.

Install the packages:

```
yum install openstack-keystone httpd python2-mod_wsgi
```

Configure keystone:

```
vim /etc/keystone/keystone.conf

[database]
connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone

[token]
provider = fernet
```

Explanation: in the [database] section, configure the database access; in the [token] section, configure the token provider.

Note: replace KEYSTONE_DBPASS with the password of the keystone database.

Populate the database:

```
su -s /bin/sh -c "keystone-manage db_sync" keystone
```

Initialize the Fernet key repositories:

```
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
```

Bootstrap the service:

```
keystone-manage bootstrap --bootstrap-password ADMIN_PASS \
--bootstrap-admin-url http://controller:5000/v3/ \
--bootstrap-internal-url http://controller:5000/v3/ \
--bootstrap-public-url http://controller:5000/v3/ \
--bootstrap-region-id RegionOne
```

Note: replace ADMIN_PASS with a password for the admin user.

Configure the Apache HTTP server:

```
vim /etc/httpd/conf/httpd.conf

ServerName controller
```

```
ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
```

Explanation: the ServerName entry must reference the controller node.

Note: if the ServerName entry does not exist, create it.

Start the Apache HTTP service:

```
systemctl enable httpd.service
systemctl start httpd.service
```

Create the environment variable configuration:

```
cat << EOF >> ~/.admin-openrc
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
EOF
```

Note: replace ADMIN_PASS with the password of the admin user.
Create the domain, projects, users, and roles in turn. python2-openstackclient must be installed first:

```
yum install python2-openstackclient
```

Import the environment variables:

```
source ~/.admin-openrc
```

Create the project service; the domain default was already created during keystone-manage bootstrap:

```
openstack domain create --description "An Example Domain" example

openstack project create --domain default --description "Service Project" service
```

Create the (non-admin) project myproject, user myuser, and role myrole, and add the role myrole to myproject and myuser:

```
openstack project create --domain default --description "Demo Project" myproject
openstack user create --domain default --password-prompt myuser
openstack role create myrole
openstack role add --project myproject --user myuser myrole
```

Verification

Unset the temporary environment variables OS_AUTH_URL and OS_PASSWORD:

```
source ~/.admin-openrc
unset OS_AUTH_URL OS_PASSWORD
```

Request a token for the admin user:

```
openstack --os-auth-url http://controller:5000/v3 \
--os-project-domain-name Default --os-user-domain-name Default \
--os-project-name admin --os-username admin token issue
```

Request a token for the myuser user:

```
openstack --os-auth-url http://controller:5000/v3 \
--os-project-domain-name Default --os-user-domain-name Default \
--os-project-name myproject --os-username myuser token issue
```

### Glance Installation

Create the database, service credentials, and API endpoints.

Create the database:

```
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE glance;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
IDENTIFIED BY 'GLANCE_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
IDENTIFIED BY 'GLANCE_DBPASS';
MariaDB [(none)]> exit
```

Note: replace GLANCE_DBPASS with a password for the glance database.

Create the service credentials:

```
source ~/.admin-openrc

openstack user create --domain default --password-prompt glance
openstack role add --project service --user glance admin
openstack service create --name glance --description "OpenStack Image" image
```

Create the Image service API endpoints:

```
openstack endpoint create --region RegionOne image public http://controller:9292
openstack endpoint create --region RegionOne image internal http://controller:9292
openstack endpoint create --region RegionOne image admin http://controller:9292
```

Install the packages:

```
yum install openstack-glance
```

Configure glance:

```
vim /etc/glance/glance-api.conf

[database]
connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = GLANCE_PASS

[paste_deploy]
flavor = keystone

[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
```

```
vim /etc/glance/glance-registry.conf

[database]
connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = GLANCE_PASS

[paste_deploy]
flavor = keystone

[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
```
Explanation: in the [database] section, configure the database access; in the [keystone_authtoken] and [paste_deploy] sections, configure the Identity service access; in the [glance_store] section, configure the local filesystem store and the location of the image files.

Note: replace GLANCE_DBPASS with the password of the glance database, and GLANCE_PASS with the password of the glance user.

Populate the database:

```
su -s /bin/sh -c "glance-manage db_sync" glance
```

Start the services:

```
systemctl enable openstack-glance-api.service openstack-glance-registry.service
systemctl start openstack-glance-api.service openstack-glance-registry.service
```

Verification

Download an image:

```
source ~/.admin-openrc
wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
```

Note: if your environment uses the Kunpeng (aarch64) architecture, download the arm64 version of the image.

Upload the image to the Image service:

```
openstack image create --disk-format qcow2 --container-format bare \
--file cirros-0.4.0-x86_64-disk.img --public cirros
```

Confirm that the image was uploaded and verify its attributes:

```
openstack image list
```
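For aarch64 (Kunpeng) environments, the note above asks for the arm64 build of the image. A sketch of the equivalent download and upload, assuming the aarch64 build of the same cirros 0.4.0 release is used:

```
# illustrative aarch64 equivalent of the steps above
wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-aarch64-disk.img
openstack image create --disk-format qcow2 --container-format bare \
  --file cirros-0.4.0-aarch64-disk.img --public cirros
```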
### Nova Installation

Create the database, service credentials, and API endpoints.

Create the databases:

```
mysql -u root -p (CTL)

MariaDB [(none)]> CREATE DATABASE nova_api;
MariaDB [(none)]> CREATE DATABASE nova;
MariaDB [(none)]> CREATE DATABASE nova_cell0;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> exit
```

Note: replace NOVA_DBPASS with a password for the nova databases.

```
source ~/.admin-openrc (CTL)
```

Create the nova service credentials:

```
openstack user create --domain default --password-prompt nova (CTL)
openstack role add --project service --user nova admin (CTL)
openstack service create --name nova --description "OpenStack Compute" compute (CTL)
```

Create the placement service credentials:

```
openstack user create --domain default --password-prompt placement (CTL)
openstack role add --project service --user placement admin (CTL)
openstack service create --name placement --description "Placement API" placement (CTL)
```

Create the nova API endpoints:

```
openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1 (CTL)
openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1 (CTL)
openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1 (CTL)
```

Create the placement API endpoints:

```
openstack endpoint create --region RegionOne placement public http://controller:8778 (CTL)
openstack endpoint create --region RegionOne placement internal http://controller:8778 (CTL)
openstack endpoint create --region RegionOne placement admin http://controller:8778 (CTL)
```

Install the packages:

```
yum install openstack-nova-api openstack-nova-conductor openstack-nova-console \
    novnc openstack-nova-novncproxy openstack-nova-scheduler \
    openstack-nova-placement-api (CTL)

yum install openstack-nova-compute (CPT)
```

Note: for the arm64 architecture, the following command is also required:

```
yum install edk2-aarch64 (CPT)
```

Configure nova:

```
vim /etc/nova/nova.conf

[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
my_ip = 10.0.0.1
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver
compute_driver = libvirt.LibvirtDriver (CPT)
instances_path = /var/lib/nova/instances/ (CPT)
lock_path = /var/lib/nova/tmp (CPT)
logdir = /var/log/nova/

[api_database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api (CTL)

[database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova

[api]
auth_strategy = keystone

[keystone_authtoken]
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = NOVA_PASS

[vnc]
enabled = true
server_listen = $my_ip
server_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html (CPT)

[glance]
api_servers = http://controller:9292

[oslo_concurrency]
lock_path = /var/lib/nova/tmp (CTL)

[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = PLACEMENT_PASS

[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
service_metadata_proxy = true (CTL)
metadata_proxy_shared_secret = METADATA_SECRET (CTL)
```

Explanation:

- In the [DEFAULT] section, enable the compute and metadata APIs, configure the RabbitMQ message queue access, set my_ip, and enable the neutron networking service.
- In the [api_database] and [database] sections, configure the database access.
- In the [api] and [keystone_authtoken] sections, configure the Identity service access.
- In the [vnc] section, enable and configure the remote console access.
- In the [glance] section, configure the address of the Image service API.
- In the [oslo_concurrency] section, configure the lock path.
- In the [placement] section, configure the placement service access.

Note:

- Replace RABBIT_PASS with the password of the openstack account in RabbitMQ.
- Set my_ip to the management IP address of the controller node.
- Replace NOVA_DBPASS with the password of the nova database.
- Replace NOVA_PASS with the password of the nova user.
- Replace PLACEMENT_PASS with the password of the placement user.
- Replace NEUTRON_PASS with the password of the neutron user.
- Replace METADATA_SECRET with a suitable metadata proxy secret.

Additionally, manually add the Placement API access configuration:

```
vim /etc/httpd/conf.d/00-nova-placement-api.conf (CTL)

<Directory /usr/bin>
   <IfVersion >= 2.4>
      Require all granted
   </IfVersion>
   <IfVersion < 2.4>
      Order allow,deny
      Allow from all
   </IfVersion>
</Directory>
```
Restart the httpd service:

```
systemctl restart httpd (CTL)
```

Determine whether virtual machine hardware acceleration is supported (x86 architecture):

```
egrep -c '(vmx|svm)' /proc/cpuinfo (CPT)
```

If the return value is 0, hardware acceleration is not supported and libvirt must be configured to use QEMU instead of KVM:

```
vim /etc/nova/nova.conf (CPT)

[libvirt]
virt_type = qemu
```

If the return value is 1 or greater, hardware acceleration is supported and no extra configuration is needed.

Note: for the arm64 architecture, the following commands are also required on the compute node:

```
mkdir -p /usr/share/AAVMF
chown nova:nova /usr/share/AAVMF

ln -s /usr/share/edk2/aarch64/QEMU_EFI-pflash.raw \
      /usr/share/AAVMF/AAVMF_CODE.fd
ln -s /usr/share/edk2/aarch64/vars-template-pflash.raw \
      /usr/share/AAVMF/AAVMF_VARS.fd

vim /etc/libvirt/qemu.conf

nvram = ["/usr/share/AAVMF/AAVMF_CODE.fd: \
          /usr/share/AAVMF/AAVMF_VARS.fd", \
         "/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw: \
          /usr/share/edk2/aarch64/vars-template-pflash.raw"]
```

Furthermore, when the ARM deployment environment is nested virtualization, configure libvirt as follows:

```
[libvirt]
virt_type = qemu
cpu_mode = custom
cpu_model = cortex-a72
```

Populate the databases.

Populate the nova-api database:

```
su -s /bin/sh -c "nova-manage api_db sync" nova (CTL)
```

Register the cell0 database:

```
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova (CTL)
```

Create the cell1 cell:

```
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova (CTL)
```

Populate the nova database:

```
su -s /bin/sh -c "nova-manage db sync" nova (CTL)
```

Verify that cell0 and cell1 are registered correctly:

```
su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova (CTL)
```

Add the compute node to the OpenStack cluster:

```
su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova (CPT)
```

Start the services:

```
systemctl enable \ (CTL)
openstack-nova-api.service \
openstack-nova-consoleauth.service \
openstack-nova-scheduler.service \
openstack-nova-conductor.service \
openstack-nova-novncproxy.service

systemctl start \ (CTL)
openstack-nova-api.service \
openstack-nova-consoleauth.service \
openstack-nova-scheduler.service \
openstack-nova-conductor.service \
openstack-nova-novncproxy.service

systemctl enable libvirtd.service openstack-nova-compute.service (CPT)
systemctl start libvirtd.service openstack-nova-compute.service (CPT)
```

Verification

```
source ~/.admin-openrc (CTL)
```

List the service components to verify that every process started and registered successfully:

```
openstack compute service list (CTL)
```

List the API endpoints in the Identity service to verify connectivity with the Identity service:

```
openstack catalog list (CTL)
```

List the images in the Image service to verify connectivity with the Image service:

```
openstack image list (CTL)
```

Check whether the cells and the placement API are working correctly and whether the other prerequisites are in place:

```
nova-status upgrade check (CTL)
```
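Related to the discover_hosts step above: instead of running it manually every time a compute node is added, nova's scheduler can be told to discover new hosts periodically. A minimal optional sketch using the standard nova [scheduler] option (shown here only as a convenience; keep the manual command if you prefer explicit control):

```
# /etc/nova/nova.conf on the controller (CTL), optional
[scheduler]
# discover newly added compute hosts every 5 minutes
discover_hosts_in_cells_interval = 300
```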
### Neutron Installation

Create the database, service credentials, and API endpoints.

Create the database:

```
mysql -u root -p (CTL)

MariaDB [(none)]> CREATE DATABASE neutron;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
IDENTIFIED BY 'NEUTRON_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
IDENTIFIED BY 'NEUTRON_DBPASS';
MariaDB [(none)]> exit
```

Note: replace NEUTRON_DBPASS with a password for the neutron database.

```
source ~/.admin-openrc (CTL)
```

Create the neutron service credentials:

```
openstack user create --domain default --password-prompt neutron (CTL)
openstack role add --project service --user neutron admin (CTL)
openstack service create --name neutron --description "OpenStack Networking" network (CTL)
```

Create the Networking service API endpoints:

```
openstack endpoint create --region RegionOne network public http://controller:9696 (CTL)
openstack endpoint create --region RegionOne network internal http://controller:9696 (CTL)
openstack endpoint create --region RegionOne network admin http://controller:9696 (CTL)
```

Install the packages:

```
yum install openstack-neutron openstack-neutron-linuxbridge-agent \ (CTL)
            ebtables ipset openstack-neutron-l3-agent \
            openstack-neutron-dhcp-agent \
            openstack-neutron-metadata-agent

yum install openstack-neutron-linuxbridge-agent ebtables ipset (CPT)
```

Configure neutron.

Configure the main configuration file:

```
vim /etc/neutron/neutron.conf

[database]
connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron (CTL)

[DEFAULT]
core_plugin = ml2 (CTL)
service_plugins = router (CTL)
allow_overlapping_ips = true (CTL)
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = true (CTL)
notify_nova_on_port_data_changes = true (CTL)
api_workers = 3 (CTL)

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = neutron
password = NEUTRON_PASS

[nova]
auth_url = http://controller:5000 (CTL)
auth_type = password (CTL)
project_domain_name = Default (CTL)
user_domain_name = Default (CTL)
region_name = RegionOne (CTL)
project_name = service (CTL)
username = nova (CTL)
password = NOVA_PASS (CTL)

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
```

Explanation:

- In the [database] section, configure the database access.
- In the [DEFAULT] section, enable the ml2 and router plugins, allow overlapping IP addresses, and configure the RabbitMQ message queue access.
- In the [DEFAULT] and [keystone_authtoken] sections, configure the Identity service access.
- In the [DEFAULT] and [nova] sections, configure Networking to notify Compute of network topology changes.
- In the [oslo_concurrency] section, configure the lock path.

Note:

- Replace NEUTRON_DBPASS with the password of the neutron database.
- Replace RABBIT_PASS with the password of the openstack account in RabbitMQ.
- Replace NEUTRON_PASS with the password of the neutron user.
- Replace NOVA_PASS with the password of the nova user.
Configure the ML2 plug-in:

```shell
vim /etc/neutron/plugins/ml2/ml2_conf.ini

[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security

[ml2_type_flat]
flat_networks = provider

[ml2_type_vxlan]
vni_ranges = 1:1000

[securitygroup]
enable_ipset = true
```

Create a symbolic link for /etc/neutron/plugin.ini:

```shell
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
```

Note: the [ml2] section enables flat, vlan and vxlan networks, enables the linuxbridge and l2population mechanisms, and enables the port security extension driver; the [ml2_type_flat] section configures the flat network as the provider virtual network; the [ml2_type_vxlan] section configures the VXLAN network identifier range; the [securitygroup] section enables ipset.

Additionally, the exact layer-2 configuration can be modified according to your needs; this document uses provider network + linuxbridge.

Configure the Linux bridge agent:

```shell
vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini

[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME

[vxlan]
enable_vxlan = true
local_ip = OVERLAY_INTERFACE_IP_ADDRESS
l2_population = true

[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
```

Explanation: the [linux_bridge] section maps the provider virtual network to the physical network interface; the [vxlan] section enables the VXLAN overlay network, configures the IP address of the physical interface that handles the overlay network, and enables layer-2 population; the [securitygroup] section enables security groups and configures the linux bridge iptables firewall driver.

Note: replace PROVIDER_INTERFACE_NAME with the physical network interface; replace OVERLAY_INTERFACE_IP_ADDRESS with the management IP address of the controller node.

Configure the Layer-3 agent:

```shell
vim /etc/neutron/l3_agent.ini     (CTL)

[DEFAULT]
interface_driver = linuxbridge
```

Explanation: in the [DEFAULT] section, configure linuxbridge as the interface driver.

Configure the DHCP agent:

```shell
vim /etc/neutron/dhcp_agent.ini     (CTL)

[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
```

Explanation: the [DEFAULT] section configures the linuxbridge interface driver and the Dnsmasq DHCP driver, and enables isolated metadata.

Configure the metadata agent:

```shell
vim /etc/neutron/metadata_agent.ini     (CTL)

[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = METADATA_SECRET
```

Explanation: the [DEFAULT] section configures the metadata host and the shared secret.

Note: replace METADATA_SECRET with a suitable metadata proxy secret.

Configure Nova for networking:

```shell
vim /etc/nova/nova.conf

[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = Default
user_domain_name = Default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
service_metadata_proxy = true     (CTL)
metadata_proxy_shared_secret = METADATA_SECRET     (CTL)
```

Explanation: the [neutron] section configures the access parameters, enables the metadata proxy, and configures the secret.

Note: replace NEUTRON_PASS with the password of the neutron user; replace METADATA_SECRET with a suitable metadata proxy secret.
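Depending on the kernel configuration, the Linux bridge agent configured above may also need the bridge netfilter module loaded so that security-group iptables rules apply to bridged traffic. This check is not part of the original package list; treat it as an optional, hedged addition.

```shell
# load the bridge netfilter module and make sure bridged traffic passes through iptables
modprobe br_netfilter
sysctl net.bridge.bridge-nf-call-iptables=1
sysctl net.bridge.bridge-nf-call-ip6tables=1
```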
Synchronize the database:

```shell
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
  --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
```

Restart the Compute API service:

```shell
systemctl restart openstack-nova-api.service
```

Start the networking services:

```shell
systemctl enable openstack-neutron-server.service \     (CTL)
    openstack-neutron-linuxbridge-agent.service openstack-neutron-dhcp-agent.service \
    openstack-neutron-metadata-agent.service openstack-neutron-l3-agent.service
systemctl restart openstack-nova-api.service openstack-neutron-server.service \     (CTL)
    openstack-neutron-linuxbridge-agent.service openstack-neutron-dhcp-agent.service \
    openstack-neutron-metadata-agent.service openstack-neutron-l3-agent.service

systemctl enable openstack-neutron-linuxbridge-agent.service     (CPT)
systemctl restart openstack-neutron-linuxbridge-agent.service openstack-nova-compute.service     (CPT)
```

Verification.

List the agents to verify that the neutron agents started successfully:

```shell
openstack network agent list
```

## Cinder Installation

Create the database, service credentials, and API endpoints.

Create the database:

```shell
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE cinder;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \
  IDENTIFIED BY 'CINDER_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \
  IDENTIFIED BY 'CINDER_DBPASS';
MariaDB [(none)]> exit
```

Note: replace CINDER_DBPASS with the password you want to set for the cinder database.

```shell
source ~/.admin-openrc
```

Create the cinder service credentials:

```shell
openstack user create --domain default --password-prompt cinder
openstack role add --project service --user cinder admin
openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
```

Create the Block Storage service API endpoints:

```shell
openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s
```

Install the packages:

```shell
yum install openstack-cinder-api openstack-cinder-scheduler     (CTL)

yum install lvm2 device-mapper-persistent-data scsi-target-utils rpcbind nfs-utils \     (CPT)
            openstack-cinder-volume openstack-cinder-backup
```

Prepare the storage device (the following is only an example):

```shell
pvcreate /dev/vdb
vgcreate cinder-volumes /dev/vdb
```
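A quick, optional way to confirm that the volume group was created before editing lvm.conf below is to list it with the standard LVM tools; this check is not part of the original steps.

```shell
# the cinder-volumes volume group should appear with /dev/vdb as its physical volume
vgs cinder-volumes
pvs /dev/vdb
```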
```shell
vim /etc/lvm/lvm.conf

devices {
...
filter = [ "a/vdb/", "r/.*/"]
```

Explanation: in the devices section, add a filter that accepts the /dev/vdb device and rejects all other devices.

Prepare NFS:

```shell
mkdir -p /root/cinder/backup

cat << EOF >> /etc/exports
/root/cinder/backup 192.168.1.0/24(rw,sync,no_root_squash,no_all_squash)
EOF
```

Configure Cinder:

```shell
vim /etc/cinder/cinder.conf

[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone
my_ip = 10.0.0.11
enabled_backends = lvm     (CPT)
backup_driver=cinder.backup.drivers.nfs.NFSBackupDriver     (CPT)
backup_share=HOST:PATH     (CPT)

[database]
connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = cinder
password = CINDER_PASS

[oslo_concurrency]
lock_path = /var/lib/cinder/tmp

[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver     (CPT)
volume_group = cinder-volumes     (CPT)
iscsi_protocol = iscsi     (CPT)
iscsi_helper = tgtadm     (CPT)
```

Explanation: the [database] section configures the database entry point; the [DEFAULT] section configures the RabbitMQ message queue entry point and my_ip; the [DEFAULT] and [keystone_authtoken] sections configure the Identity service entry point; the [oslo_concurrency] section configures the lock path.

Note: replace CINDER_DBPASS with the password of the cinder database; replace RABBIT_PASS with the password of the openstack account in RabbitMQ; set my_ip to the management IP address of the controller node; replace CINDER_PASS with the password of the cinder user; replace HOST:PATH with the NFS host IP and shared path.

Synchronize the database:

```shell
su -s /bin/sh -c "cinder-manage db sync" cinder     (CTL)
```

Configure Nova:

```shell
vim /etc/nova/nova.conf     (CTL)

[cinder]
os_region_name = RegionOne
```

Restart the Compute API service:

```shell
systemctl restart openstack-nova-api.service
```

Start the cinder services:

```shell
systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service     (CTL)
systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service     (CTL)

systemctl enable rpcbind.service nfs-server.service tgtd.service iscsid.service \     (CPT)
    openstack-cinder-volume.service \
    openstack-cinder-backup.service
systemctl start rpcbind.service nfs-server.service tgtd.service iscsid.service \     (CPT)
    openstack-cinder-volume.service \
    openstack-cinder-backup.service
```

Note: when cinder attaches volumes through tgtadm, edit /etc/tgt/tgtd.conf with the following content so that tgtd can discover the iSCSI targets of cinder-volume:

```shell
include /var/lib/cinder/volumes/*
```

Verification:

```shell
source ~/.admin-openrc
openstack volume service list
```
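Once `openstack volume service list` shows the scheduler and volume services as up, a small end-to-end check is to create and delete a test volume. This is a minimal sketch; the volume name and size are arbitrary.

```shell
source ~/.admin-openrc
openstack volume create --size 1 test-volume     # 1 GB volume on the lvm backend
openstack volume list                            # the volume should be reported as "available"
openstack volume delete test-volume
```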
## Horizon Installation

Install the packages:

```shell
yum install openstack-dashboard
```

Modify the configuration file and edit the variables:

```shell
vim /etc/openstack-dashboard/local_settings

ALLOWED_HOSTS = ['*', ]
OPENSTACK_HOST = "controller"
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
```

Restart the httpd service:

```shell
systemctl restart httpd
```

Verification: open a browser, enter the URL http://HOSTIP/dashboard/ , and log in to horizon.

Note: replace HOSTIP with the management-plane IP address of the controller node.

## Tempest Installation

Tempest is the integration test service of OpenStack. It is recommended if you need comprehensive automated functional testing of the installed OpenStack environment; otherwise it can be skipped.

Install Tempest:

```shell
yum install openstack-tempest
```

Initialize a working directory:

```shell
tempest init mytest
```

Modify the configuration file:

```shell
cd mytest
vi etc/tempest.conf
```

tempest.conf must be configured with the information of the current OpenStack environment; refer to the official example for the details.

Run the tests:

```shell
tempest run
```

## Ironic Installation

Ironic is the bare metal service of OpenStack. It is recommended if you need to provision bare metal machines; otherwise it can be skipped.

Set up the database.

The Bare Metal service stores its information in a database. Create an ironic database that the ironic user can access, replacing IRONIC_DBPASSWORD with a suitable password:

```shell
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE ironic CHARACTER SET utf8;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'localhost' \
  IDENTIFIED BY 'IRONIC_DBPASSWORD';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'%' \
  IDENTIFIED BY 'IRONIC_DBPASSWORD';
```

Install the packages:

```shell
yum install openstack-ironic-api openstack-ironic-conductor python2-ironicclient
```

Start the services:

```shell
systemctl enable openstack-ironic-api openstack-ironic-conductor
systemctl start openstack-ironic-api openstack-ironic-conductor
```

Create the service user authentication.

1. Create the Bare Metal service user:

```shell
openstack user create --password IRONIC_PASSWORD \
  --email ironic@example.com ironic
openstack role add --project service --user ironic admin
openstack service create --name ironic --description "Ironic baremetal provisioning service" baremetal
```

2. Create the Bare Metal service access endpoints:

```shell
openstack endpoint create --region RegionOne baremetal admin http://$IRONIC_NODE:6385
openstack endpoint create --region RegionOne baremetal public http://$IRONIC_NODE:6385
openstack endpoint create --region RegionOne baremetal internal http://$IRONIC_NODE:6385
```

Configure the ironic-api service. The configuration file path is /etc/ironic/ironic.conf.

1. Configure the location of the database through the connection option, as shown below. Replace IRONIC_DBPASSWORD with the password of the ironic user and DB_IP with the IP address of the DB server:

```shell
[database]

# The SQLAlchemy connection string used to connect to the
# database (string value)
connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic
```
2. Configure the ironic-api service to use the RabbitMQ message broker with the following options. Replace RPC_* with the detailed address and credentials of RabbitMQ:

```shell
[DEFAULT]

# A URL representing the messaging driver to use and its full
# configuration. (string value)
transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
```

You may also replace RabbitMQ with json-rpc.

3. Configure the ironic-api service to use the credentials of the Identity service. Replace PUBLIC_IDENTITY_IP with the public IP of the Identity server, PRIVATE_IDENTITY_IP with the private IP of the Identity server, and IRONIC_PASSWORD with the password of the ironic user in the Identity service:

```shell
[DEFAULT]

# Authentication strategy used by ironic-api: one of
# "keystone" or "noauth". "noauth" should not be used in a
# production environment because all authentication will be
# disabled. (string value)
auth_strategy=keystone

[keystone_authtoken]

# Authentication type to load (string value)
auth_type=password

# Complete public Identity API endpoint (string value)
www_authenticate_uri=http://PUBLIC_IDENTITY_IP:5000

# Complete admin Identity API endpoint. (string value)
auth_url=http://PRIVATE_IDENTITY_IP:5000

# Service username. (string value)
username=ironic

# Service account password. (string value)
password=IRONIC_PASSWORD

# Service tenant name. (string value)
project_name=service

# Domain name containing project (string value)
project_domain_name=Default

# User's domain name (string value)
user_domain_name=Default
```

4. Create the Bare Metal service database tables:

```shell
ironic-dbsync --config-file /etc/ironic/ironic.conf create_schema
```

5. Restart the ironic-api service:

```shell
sudo systemctl restart openstack-ironic-api
```

Configure the ironic-conductor service.

1. Replace HOST_IP with the IP of the conductor host:

```shell
[DEFAULT]

# IP address of this host. If unset, will determine the IP
# programmatically. If unable to do so, will use "127.0.0.1".
# (string value)
my_ip=HOST_IP
```

2. Configure the location of the database. ironic-conductor should use the same configuration as ironic-api. Replace IRONIC_DBPASSWORD with the password of the ironic user and DB_IP with the IP address of the DB server:

```shell
[database]

# The SQLAlchemy connection string to use to connect to the
# database. (string value)
connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic
```
3. Configure the ironic-conductor service to use the RabbitMQ message broker with the following options. ironic-conductor should use the same configuration as ironic-api. Replace RPC_* with the detailed address and credentials of RabbitMQ:

```shell
[DEFAULT]

# A URL representing the messaging driver to use and its full
# configuration. (string value)
transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
```

You may also replace RabbitMQ with json-rpc.

4. Configure credentials for accessing other OpenStack services.

To communicate with other OpenStack services, the Bare Metal service needs to authenticate with the OpenStack Identity service using service users when it requests those services. The credentials of these users must be configured in each configuration section related to the corresponding service:

- [neutron] - accessing the OpenStack Networking service
- [glance] - accessing the OpenStack Image service
- [swift] - accessing the OpenStack Object Storage service
- [cinder] - accessing the OpenStack Block Storage service
- [inspector] - accessing the OpenStack bare metal introspection service
- [service_catalog] - a special entry that stores the credentials the Bare Metal service uses to discover its own API URL endpoint registered in the OpenStack Identity service catalog

For simplicity, the same service user can be used for all services. For backward compatibility, this should be the same user configured in [keystone_authtoken] of the ironic-api service. This is not mandatory, however; a different service user can be created and configured for each service.

In the following example, the authentication information for accessing the OpenStack Networking service is configured as:

- the Networking service is deployed in the Identity service region named RegionOne, and only the public endpoint interface is registered in the service catalog;
- requests use a specific CA SSL certificate for HTTPS connections;
- the same service user as configured for the ironic-api service;
- the dynamic password authentication plug-in discovers a suitable Identity service API version based on the other options.

```shell
[neutron]

# Authentication type to load (string value)
auth_type = password

# Authentication URL (string value)
auth_url=https://IDENTITY_IP:5000/

# Username (string value)
username=ironic

# User's password (string value)
password=IRONIC_PASSWORD

# Project name to scope to (string value)
project_name=service

# Domain ID containing project (string value)
project_domain_id=default

# User's domain id (string value)
user_domain_id=default

# PEM encoded Certificate Authority to use when verifying
# HTTPs connections. (string value)
cafile=/opt/stack/data/ca-bundle.pem

# The default region_name for endpoint URL discovery. (string
# value)
region_name = RegionOne

# List of interfaces, in order of preference, for endpoint
# URL. (list value)
valid_interfaces=public
```
By default, to communicate with other services, the Bare Metal service attempts to discover a suitable endpoint for each service through the service catalog of the Identity service. If you want to use a different endpoint for a particular service, specify it with the endpoint_override option in the Bare Metal service configuration file:

```shell
[neutron]
...
endpoint_override =
```

5. Configure the allowed drivers and hardware types.

Set the hardware types allowed by the ironic-conductor service through enabled_hardware_types:

```shell
[DEFAULT]
enabled_hardware_types = ipmi
```

Configure the hardware interfaces:

```shell
enabled_boot_interfaces = pxe
enabled_deploy_interfaces = direct,iscsi
enabled_inspect_interfaces = inspector
enabled_management_interfaces = ipmitool
enabled_power_interfaces = ipmitool
```

Configure the interface defaults:

```shell
[DEFAULT]
default_deploy_interface = direct
default_network_interface = neutron
```

If any driver that uses Direct deploy is enabled, the Swift backend of the Image service must be installed and configured. The Ceph Object Gateway (RADOS Gateway) is also supported as an Image service backend.

6. Restart the ironic-conductor service:

```shell
sudo systemctl restart openstack-ironic-conductor
```

Building the deploy ramdisk image

The Queens ramdisk image can be built with the ironic-python-agent service or the disk-image-builder tool, or with the community's latest ironic-python-agent-builder. You may also choose other tools.

If you use the native Queens tools, install the corresponding package:

```shell
yum install openstack-ironic-python-agent
```

or

```shell
yum install diskimage-builder
```

Refer to the official documentation for the detailed usage.

The following describes the complete process of building the deploy image used by ironic with ironic-python-agent-builder.

Install ironic-python-agent-builder

1. Install the tool:

```shell
pip install ironic-python-agent-builder
```

2. Modify the python interpreter in the following files:

```shell
/usr/bin/yum /usr/libexec/urlgrabber-ext-down
```
3. Install the other required tools:

```shell
yum install git
```

Because `DIB` depends on the `semanage` command, check whether the command is available before building the image: `semanage --help`. If the command is missing, install it:

```shell
# first find out which package provides it
[root@localhost ~]# yum provides /usr/sbin/semanage
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirror.vcu.edu
 * extras: mirror.vcu.edu
 * updates: mirror.math.princeton.edu
policycoreutils-python-2.5-34.el7.aarch64 : SELinux policy core python utilities
Repo         : base
Matched from :
Filename     : /usr/sbin/semanage
# install it
[root@localhost ~]# yum install policycoreutils-python
```

Build the image

If you are on the `arm` architecture, additionally set:

```shell
export ARCH=aarch64
```

Basic usage:

```shell
usage: ironic-python-agent-builder [-h] [-r RELEASE] [-o OUTPUT] [-e ELEMENT]
                                   [-b BRANCH] [-v] [--extra-args EXTRA_ARGS]
                                   distribution

positional arguments:
  distribution          Distribution to use

optional arguments:
  -h, --help            show this help message and exit
  -r RELEASE, --release RELEASE
                        Distribution release to use
  -o OUTPUT, --output OUTPUT
                        Output base file name
  -e ELEMENT, --element ELEMENT
                        Additional DIB element to use
  -b BRANCH, --branch BRANCH
                        If set, override the branch that is used for ironic-
                        python-agent and requirements
  -v, --verbose         Enable verbose logging in diskimage-builder
  --extra-args EXTRA_ARGS
                        Extra arguments to pass to diskimage-builder
```

Example:

```shell
ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky
```

Allow SSH login

Initialize the environment variables and then build the image:

```shell
export DIB_DEV_USER_USERNAME=ipa \
export DIB_DEV_USER_PWDLESS_SUDO=yes \
export DIB_DEV_USER_PASSWORD='123'
ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky -e selinux-permissive -e devuser
```

Specify a code repository

Initialize the corresponding environment variables and then build the image:

```shell
# specify the repository location and version
DIB_REPOLOCATION_ironic_python_agent=git@172.20.2.149:liuzz/ironic-python-agent.git
DIB_REPOREF_ironic_python_agent=origin/develop

# clone the code directly from gerrit
DIB_REPOLOCATION_ironic_python_agent=https://review.opendev.org/openstack/ironic-python-agent
DIB_REPOREF_ironic_python_agent=refs/changes/43/701043/1
```

Reference: [source-repositories](https://docs.openstack.org/diskimage-builder/latest/elements/source-repositories/README.html).

Specifying the repository location and version has been verified to work.

In Queens we also provide services such as ironic-inspector, which can be installed according to your needs.
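Once the deploy kernel and ramdisk have been built, a typical next step is to upload them to Glance and enroll a bare metal node that references them. The sketch below is only illustrative: the file names assume the `-o /mnt/ironic-agent-ssh` output prefix used above, and the IPMI address and credentials are placeholders, not values from this guide.

```shell
source ~/.admin-openrc

# upload the deploy images produced by ironic-python-agent-builder (output prefix assumed)
openstack image create deploy-kernel --public \
  --disk-format aki --container-format aki --file /mnt/ironic-agent-ssh.kernel
openstack image create deploy-ramdisk --public \
  --disk-format ari --container-format ari --file /mnt/ironic-agent-ssh.initramfs

# enroll an example node using the ipmi hardware type enabled earlier (all values are placeholders)
openstack baremetal node create --driver ipmi \
  --driver-info ipmi_address=BMC_IP \
  --driver-info ipmi_username=BMC_USER \
  --driver-info ipmi_password=BMC_PASSWORD \
  --driver-info deploy_kernel=DEPLOY_KERNEL_IMAGE_UUID \
  --driver-info deploy_ramdisk=DEPLOY_RAMDISK_IMAGE_UUID
openstack baremetal node list
```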
## Kolla Installation

Kolla provides production-ready containerized deployment for OpenStack services. openEuler 20.03 LTS SP2 already includes the Kolla and Kolla-ansible services, but Kolla and Kolla-ansible do not natively support openEuler, so the OpenStack SIG provides the two patch packages openstack-kolla-plugin and openstack-kolla-ansible-plugin in openEuler 20.03 LTS SP3.

Installing Kolla is very simple: just install the corresponding RPM packages.

For versions that support openEuler:

```shell
yum install openstack-kolla-plugin openstack-kolla-ansible-plugin
```

For versions that do not support openEuler:

```shell
yum install openstack-kolla openstack-kolla-ansible
```

After installation, the kolla-ansible, kolla-build, kolla-genpwd, kolla-mergepwd and other commands are available.

## Trove Installation

Trove is the database service of OpenStack. It is recommended if you use the database service provided by OpenStack; otherwise it can be skipped.

Set up the database.

The Database service stores its information in a database. Create a trove database that the trove user can access, replacing TROVE_DBPASSWORD with a suitable password:

```shell
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE trove CHARACTER SET utf8;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'localhost' \
  IDENTIFIED BY 'TROVE_DBPASSWORD';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'%' \
  IDENTIFIED BY 'TROVE_DBPASSWORD';
```

Create the service user authentication.

1. Create the Trove service user:

```shell
openstack user create --password TROVE_PASSWORD \
  --email trove@example.com trove
openstack role add --project service --user trove admin
openstack service create --name trove --description "Database service" database
```

Explanation: replace TROVE_PASSWORD with the password of the trove user.

2. Create the Database service access endpoints:

```shell
openstack endpoint create --region RegionOne database public http://$TROVE_NODE:8779/v1.0/%\(tenant_id\)s
openstack endpoint create --region RegionOne database internal http://$TROVE_NODE:8779/v1.0/%\(tenant_id\)s
openstack endpoint create --region RegionOne database admin http://$TROVE_NODE:8779/v1.0/%\(tenant_id\)s
```

Explanation: replace $TROVE_NODE with the node where the Trove API service is deployed.

Install and configure the Trove components.

1. Install the Trove packages:

```shell
yum install openstack-trove python2-troveclient
```
2. Configure `trove.conf`:

```shell
vim /etc/trove/trove.conf

[DEFAULT]
bind_host=TROVE_NODE_IP
log_dir = /var/log/trove
auth_strategy = keystone
# Config option for showing the IP address that nova doles out
add_addresses = True
network_label_regex = ^NETWORK_LABEL$
api_paste_config = /etc/trove/api-paste.ini

trove_auth_url = http://controller:35357/v3/
nova_compute_url = http://controller:8774/v2
cinder_url = http://controller:8776/v1

nova_proxy_admin_user = admin
nova_proxy_admin_pass = ADMIN_PASS
nova_proxy_admin_tenant_name = service
taskmanager_manager = trove.taskmanager.manager.Manager
use_nova_server_config_drive = True

# Set these if using Neutron Networking
network_driver=trove.network.neutron.NeutronDriver
network_label_regex=.*

transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/

[database]
connection = mysql+pymysql://trove:TROVE_DBPASS@controller/trove

[keystone_authtoken]
www_authenticate_uri = http://controller:5000/v3/
auth_url=http://controller:35357/v3/
#auth_uri = http://controller/identity
#auth_url = http://controller/identity_admin
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = trove
password = TROVE_PASS
```

Explanation: in the [DEFAULT] group, bind_host is the IP of the node where Trove is deployed; nova_compute_url and cinder_url are the endpoints created for Nova and Cinder in Keystone; nova_proxy_XXX is the information of a user that can access the Nova service (the admin user is used in the example above); transport_url is the RabbitMQ connection information, where RABBIT_PASS is replaced with the RabbitMQ password. In the [database] group, connection is the database information created for Trove in MySQL earlier. In the Trove user information, replace TROVE_PASS with the actual password of the trove user.

3. Configure `trove-taskmanager.conf`:

```shell
vim /etc/trove/trove-taskmanager.conf

[DEFAULT]
log_dir = /var/log/trove
trove_auth_url = http://controller/identity/v2.0
nova_compute_url = http://controller:8774/v2
cinder_url = http://controller:8776/v1
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/

[database]
connection = mysql+pymysql://trove:TROVE_DBPASS@controller/trove
```

Explanation: follow the `trove.conf` configuration.
4. Configure `trove-conductor.conf`:

```shell
vim /etc/trove/trove-conductor.conf

[DEFAULT]
log_dir = /var/log/trove
trove_auth_url = http://controller/identity/v2.0
nova_compute_url = http://controller:8774/v2
cinder_url = http://controller:8776/v1
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/

[database]
connection = mysql+pymysql://trove:trove@controller/trove
```

Explanation: follow the `trove.conf` configuration.

5. Configure `trove-guestagent.conf`:

```shell
vim /etc/trove/trove-guestagent.conf

[DEFAULT]
rabbit_host = controller
rabbit_password = RABBIT_PASS
nova_proxy_admin_user = admin
nova_proxy_admin_pass = ADMIN_PASS
nova_proxy_admin_tenant_name = service
trove_auth_url = http://controller/identity_admin/v2.0
```

Explanation: `guestagent` is an independent component of trove that must be built in advance into the virtual machine images that Trove creates through Nova. After a database instance is created, the guestagent process starts and reports heartbeats to Trove through the message queue (RabbitMQ), so the RabbitMQ user and password information must be configured.

6. Generate the `Trove` database tables:

```shell
su -s /bin/sh -c "trove-manage db_sync" trove
```

Finish the installation and configuration.

1. Configure the Trove services to start automatically:

```shell
systemctl enable openstack-trove-api.service \
    openstack-trove-taskmanager.service \
    openstack-trove-conductor.service
```

2. Start the services:

```shell
systemctl start openstack-trove-api.service \
    openstack-trove-taskmanager.service \
    openstack-trove-conductor.service
```

## Rally Installation

Rally is the performance testing tool provided by OpenStack. It only requires a simple installation:

```shell
yum install openstack-rally openstack-rally-plugins
```
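After installation, Rally still needs its database initialized and a deployment registered before tasks can run. The following is only a sketch of a typical first run; the exact sub-commands can differ slightly between Rally releases, and the scenario file name is just an example.

```shell
# initialize Rally's internal database (older releases use `rally-manage db create`)
rally db create

# register the existing cloud using the credentials from ~/.admin-openrc
source ~/.admin-openrc
rally deployment create --fromenv --name existing

# run an example scenario and produce an HTML report (boot-and-delete.yaml is a sample task file)
rally task start boot-and-delete.yaml
rally task report --out rally-report.html
```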
\u652f\u6301\u51e0\u4e4e\u6240\u6709\u7c7b\u578b\u7684\u4e91\u73af\u5883\uff0c\u9879\u76ee\u76ee\u6807\u662f\u63d0\u4f9b\u5b9e\u65bd\u7b80\u5355\u3001\u53ef\u5927\u89c4\u6a21\u6269\u5c55\u3001\u4e30\u5bcc\u3001\u6807\u51c6\u7edf\u4e00\u7684\u4e91\u8ba1\u7b97\u7ba1\u7406\u5e73\u53f0\u3002OpenStack \u901a\u8fc7\u5404\u79cd\u4e92\u8865\u7684\u670d\u52a1\u63d0\u4f9b\u4e86\u57fa\u7840\u8bbe\u65bd\u5373\u670d\u52a1\uff08IaaS\uff09\u7684\u89e3\u51b3\u65b9\u6848\uff0c\u6bcf\u4e2a\u670d\u52a1\u63d0\u4f9b API \u8fdb\u884c\u96c6\u6210\u3002 openEuler 20.03-LTS-SP3 \u7248\u672c\u5b98\u65b9\u8ba4\u8bc1\u7684\u7b2c\u4e09\u65b9 oepkg yum \u6e90\u5df2\u7ecf\u652f\u6301 Openstack-Queens \u7248\u672c\uff0c\u7528\u6237\u53ef\u4ee5\u914d\u7f6e\u597d oepkg yum \u6e90\u540e\u6839\u636e\u6b64\u6587\u6863\u8fdb\u884c OpenStack \u90e8\u7f72\u3002","title":"OpenStack \u7b80\u4ecb"},{"location":"install/openEuler-20.03-LTS-SP3/OpenStack-queens/#_1","text":"Openstack \u652f\u6301\u591a\u79cd\u5f62\u6001\u90e8\u7f72\uff0c\u6b64\u6587\u6863\u652f\u6301 ALL in One \u4ee5\u53ca Distributed \u4e24\u79cd\u90e8\u7f72\u65b9\u5f0f\uff0c\u6309\u7167\u5982\u4e0b\u65b9\u5f0f\u7ea6\u5b9a\uff1a ALL in One \u6a21\u5f0f: \u5ffd\u7565\u6240\u6709\u53ef\u80fd\u7684\u540e\u7f00 Distributed \u6a21\u5f0f: \u4ee5 `(CTL)` \u4e3a\u540e\u7f00\u8868\u793a\u6b64\u6761\u914d\u7f6e\u6216\u8005\u547d\u4ee4\u4ec5\u9002\u7528`\u63a7\u5236\u8282\u70b9` \u4ee5 `(CPT)` \u4e3a\u540e\u7f00\u8868\u793a\u6b64\u6761\u914d\u7f6e\u6216\u8005\u547d\u4ee4\u4ec5\u9002\u7528`\u8ba1\u7b97\u8282\u70b9` \u9664\u6b64\u4e4b\u5916\u8868\u793a\u6b64\u6761\u914d\u7f6e\u6216\u8005\u547d\u4ee4\u540c\u65f6\u9002\u7528`\u63a7\u5236\u8282\u70b9`\u548c`\u8ba1\u7b97\u8282\u70b9` \u6ce8\u610f \u6d89\u53ca\u5230\u4ee5\u4e0a\u7ea6\u5b9a\u7684\u670d\u52a1\u5982\u4e0b\uff1a Cinder Nova Neutron","title":"\u7ea6\u5b9a"},{"location":"install/openEuler-20.03-LTS-SP3/OpenStack-queens/#_2","text":"","title":"\u51c6\u5907\u73af\u5883"},{"location":"install/openEuler-20.03-LTS-SP3/OpenStack-queens/#_3","text":"\u914d\u7f6e 20.03-LTS-SP3 \u5b98\u65b9\u8ba4\u8bc1\u7684\u7b2c\u4e09\u65b9\u6e90 oepkg cat << EOF >> /etc/yum.repos.d/OpenStack_Queens.repo [openstack_queens] name=OpenStack_Queens baseurl=https://repo.oepkgs.net/openEuler/rpm/openEuler-20.03-LTS-SP3/budding-openeuler/openstack/queens/$basearch/ gpgcheck=0 enabled=1 EOF \u6ce8\u610f \u5982\u679c\u73af\u5883\u542f\u7528\u4e86Epol\u6e90\uff0c\u9700\u8981\u63d0\u9ad8queens\u4ed3\u7684\u4f18\u5148\u7ea7\uff0c\u8bbe\u7f6epriority=1\uff1a cat << EOF >> /etc/yum.repos.d/OpenStack_Queens.repo [openstack_queens] name=OpenStack_Queens baseurl=https://repo.oepkgs.net/openEuler/rpm/openEuler-20.03-LTS-SP3/budding-openeuler/openstack/queens/$basearch/ gpgcheck=0 enabled=1 priority=1 EOF $ yum clean all && yum makecache \u4fee\u6539\u4e3b\u673a\u540d\u4ee5\u53ca\u6620\u5c04 \u8bbe\u7f6e\u5404\u4e2a\u8282\u70b9\u7684\u4e3b\u673a\u540d hostnamectl set-hostname controller (CTL) hostnamectl set-hostname compute (CPT) \u5047\u8bbecontroller\u8282\u70b9\u7684IP\u662f 10.0.0.11 ,compute\u8282\u70b9\u7684IP\u662f 10.0.0.12 \uff08\u5982\u679c\u5b58\u5728\u7684\u8bdd\uff09,\u5219\u4e8e /etc/hosts \u65b0\u589e\u5982\u4e0b\uff1a 10.0.0.11 controller 10.0.0.12 compute","title":"\u73af\u5883\u914d\u7f6e"},{"location":"install/openEuler-20.03-LTS-SP3/OpenStack-queens/#sql-database","text":"\u6267\u884c\u5982\u4e0b\u547d\u4ee4\uff0c\u5b89\u88c5\u8f6f\u4ef6\u5305\u3002 yum install mariadb mariadb-server python2-PyMySQL 
\u6267\u884c\u5982\u4e0b\u547d\u4ee4\uff0c\u521b\u5efa\u5e76\u7f16\u8f91 /etc/my.cnf.d/openstack.cnf \u6587\u4ef6\u3002 vim /etc/my.cnf.d/openstack.cnf [mysqld] bind-address = 10.0.0.11 default-storage-engine = innodb innodb_file_per_table = on max_connections = 4096 collation-server = utf8_general_ci character-set-server = utf8 \u6ce8\u610f \u5176\u4e2d bind-address \u8bbe\u7f6e\u4e3a\u63a7\u5236\u8282\u70b9\u7684\u7ba1\u7406IP\u5730\u5740\u3002 \u542f\u52a8 DataBase \u670d\u52a1\uff0c\u5e76\u4e3a\u5176\u914d\u7f6e\u5f00\u673a\u81ea\u542f\u52a8\uff1a systemctl enable mariadb.service systemctl start mariadb.service \u914d\u7f6eDataBase\u7684\u9ed8\u8ba4\u5bc6\u7801\uff08\u53ef\u9009\uff09 mysql_secure_installation \u6ce8\u610f \u6839\u636e\u63d0\u793a\u8fdb\u884c\u5373\u53ef","title":"\u5b89\u88c5 SQL DataBase"},{"location":"install/openEuler-20.03-LTS-SP3/OpenStack-queens/#rabbitmq","text":"\u6267\u884c\u5982\u4e0b\u547d\u4ee4\uff0c\u5b89\u88c5\u8f6f\u4ef6\u5305\u3002 yum install rabbitmq-server \u542f\u52a8 RabbitMQ \u670d\u52a1\uff0c\u5e76\u4e3a\u5176\u914d\u7f6e\u5f00\u673a\u81ea\u542f\u52a8\u3002 systemctl enable rabbitmq-server.service systemctl start rabbitmq-server.service \u6dfb\u52a0 OpenStack\u7528\u6237\u3002 rabbitmqctl add_user openstack RABBIT_PASS \u6ce8\u610f \u66ff\u6362 RABBIT_PASS \uff0c\u4e3a OpenStack \u7528\u6237\u8bbe\u7f6e\u5bc6\u7801 \u8bbe\u7f6eopenstack\u7528\u6237\u6743\u9650\uff0c\u5141\u8bb8\u8fdb\u884c\u914d\u7f6e\u3001\u5199\u3001\u8bfb\uff1a rabbitmqctl set_permissions openstack \".*\" \".*\" \".*\"","title":"\u5b89\u88c5 RabbitMQ"},{"location":"install/openEuler-20.03-LTS-SP3/OpenStack-queens/#memcached","text":"\u6267\u884c\u5982\u4e0b\u547d\u4ee4\uff0c\u5b89\u88c5\u4f9d\u8d56\u8f6f\u4ef6\u5305\u3002 yum install memcached python2-memcached \u7f16\u8f91 /etc/sysconfig/memcached \u6587\u4ef6\u3002 vim /etc/sysconfig/memcached OPTIONS=\"-l 127.0.0.1,::1,controller\" \u6267\u884c\u5982\u4e0b\u547d\u4ee4\uff0c\u542f\u52a8 Memcached \u670d\u52a1\uff0c\u5e76\u4e3a\u5176\u914d\u7f6e\u5f00\u673a\u542f\u52a8\u3002 systemctl enable memcached.service systemctl start memcached.service \u670d\u52a1\u542f\u52a8\u540e\uff0c\u53ef\u4ee5\u901a\u8fc7\u547d\u4ee4 memcached-tool controller stats \u786e\u4fdd\u542f\u52a8\u6b63\u5e38\uff0c\u670d\u52a1\u53ef\u7528\uff0c\u5176\u4e2d\u53ef\u4ee5\u5c06 controller \u66ff\u6362\u4e3a\u63a7\u5236\u8282\u70b9\u7684\u7ba1\u7406IP\u5730\u5740\u3002","title":"\u5b89\u88c5 Memcached"},{"location":"install/openEuler-20.03-LTS-SP3/OpenStack-queens/#openstack_1","text":"","title":"\u5b89\u88c5 OpenStack"},{"location":"install/openEuler-20.03-LTS-SP3/OpenStack-queens/#keystone","text":"\u521b\u5efa keystone \u6570\u636e\u5e93\u5e76\u6388\u6743\u3002 mysql -u root -p MariaDB [(none)]> CREATE DATABASE keystone; MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \\ IDENTIFIED BY 'KEYSTONE_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \\ IDENTIFIED BY 'KEYSTONE_DBPASS'; MariaDB [(none)]> exit \u6ce8\u610f \u66ff\u6362 KEYSTONE_DBPASS \uff0c\u4e3a Keystone \u6570\u636e\u5e93\u8bbe\u7f6e\u5bc6\u7801 \u5b89\u88c5\u8f6f\u4ef6\u5305\u3002 yum install openstack-keystone httpd python2-mod_wsgi \u914d\u7f6ekeystone\u76f8\u5173\u914d\u7f6e vim /etc/keystone/keystone.conf [database] connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone [token] provider = fernet \u89e3\u91ca [database]\u90e8\u5206\uff0c\u914d\u7f6e\u6570\u636e\u5e93\u5165\u53e3 
[token]\u90e8\u5206\uff0c\u914d\u7f6etoken provider \u6ce8\u610f\uff1a \u66ff\u6362 KEYSTONE_DBPASS \u4e3a Keystone \u6570\u636e\u5e93\u7684\u5bc6\u7801 \u540c\u6b65\u6570\u636e\u5e93\u3002 su -s /bin/sh -c \"keystone-manage db_sync\" keystone \u521d\u59cb\u5316Fernet\u5bc6\u94a5\u4ed3\u5e93\u3002 keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone keystone-manage credential_setup --keystone-user keystone --keystone-group keystone \u542f\u52a8\u670d\u52a1\u3002 keystone-manage bootstrap --bootstrap-password ADMIN_PASS \\ --bootstrap-admin-url http://controller:5000/v3/ \\ --bootstrap-internal-url http://controller:5000/v3/ \\ --bootstrap-public-url http://controller:5000/v3/ \\ --bootstrap-region-id RegionOne \u6ce8\u610f \u66ff\u6362 ADMIN_PASS \uff0c\u4e3a admin \u7528\u6237\u8bbe\u7f6e\u5bc6\u7801 \u914d\u7f6eApache HTTP server vim /etc/httpd/conf/httpd.conf ServerName controller ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/ \u89e3\u91ca \u914d\u7f6e ServerName \u9879\u5f15\u7528\u63a7\u5236\u8282\u70b9 \u6ce8\u610f \u5982\u679c ServerName \u9879\u4e0d\u5b58\u5728\u5219\u9700\u8981\u521b\u5efa \u542f\u52a8Apache HTTP\u670d\u52a1\u3002 systemctl enable httpd.service systemctl start httpd.service \u521b\u5efa\u73af\u5883\u53d8\u91cf\u914d\u7f6e\u3002 cat << EOF >> ~/.admin-openrc export OS_PROJECT_DOMAIN_NAME=Default export OS_USER_DOMAIN_NAME=Default export OS_PROJECT_NAME=admin export OS_USERNAME=admin export OS_PASSWORD=ADMIN_PASS export OS_AUTH_URL=http://controller:5000/v3 export OS_IDENTITY_API_VERSION=3 export OS_IMAGE_API_VERSION=2 EOF \u6ce8\u610f \u66ff\u6362 ADMIN_PASS \u4e3a admin \u7528\u6237\u7684\u5bc6\u7801 \u4f9d\u6b21\u521b\u5efadomain, projects, users, roles\uff0c\u9700\u8981\u5148\u5b89\u88c5\u597dpython2-openstackclient\uff1a yum install python2-openstackclient \u5bfc\u5165\u73af\u5883\u53d8\u91cf source ~/.admin-openrc \u521b\u5efaproject service \uff0c\u5176\u4e2d domain default \u5728 keystone-manage bootstrap \u65f6\u5df2\u521b\u5efa openstack domain create --description \"An Example Domain\" example openstack project create --domain default --description \"Service Project\" service \u521b\u5efa\uff08non-admin\uff09project myproject \uff0cuser myuser \u548c role myrole \uff0c\u4e3a myproject \u548c myuser \u6dfb\u52a0\u89d2\u8272 myrole openstack project create --domain default --description \"Demo Project\" myproject openstack user create --domain default --password-prompt myuser openstack role create myrole openstack role add --project myproject --user myuser myrole \u9a8c\u8bc1 \u53d6\u6d88\u4e34\u65f6\u73af\u5883\u53d8\u91cfOS_AUTH_URL\u548cOS_PASSWORD\uff1a source ~/.admin-openrc unset OS_AUTH_URL OS_PASSWORD \u4e3aadmin\u7528\u6237\u8bf7\u6c42token\uff1a openstack --os-auth-url http://controller:5000/v3 \\ --os-project-domain-name Default --os-user-domain-name Default \\ --os-project-name admin --os-username admin token issue \u4e3amyuser\u7528\u6237\u8bf7\u6c42token\uff1a openstack --os-auth-url http://controller:5000/v3 \\ --os-project-domain-name Default --os-user-domain-name Default \\ --os-project-name myproject --os-username myuser token issue","title":"Keystone \u5b89\u88c5"},{"location":"install/openEuler-20.03-LTS-SP3/OpenStack-queens/#glance","text":"\u521b\u5efa\u6570\u636e\u5e93\u3001\u670d\u52a1\u51ed\u8bc1\u548c API \u7aef\u70b9 \u521b\u5efa\u6570\u636e\u5e93\uff1a mysql -u root -p MariaDB [(none)]> CREATE DATABASE glance; MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \\ 
IDENTIFIED BY 'GLANCE_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \\ IDENTIFIED BY 'GLANCE_DBPASS'; MariaDB [(none)]> exit \u6ce8\u610f: \u66ff\u6362 GLANCE_DBPASS \uff0c\u4e3a glance \u6570\u636e\u5e93\u8bbe\u7f6e\u5bc6\u7801 \u521b\u5efa\u670d\u52a1\u51ed\u8bc1 source ~/.admin-openrc openstack user create --domain default --password-prompt glance openstack role add --project service --user glance admin openstack service create --name glance --description \"OpenStack Image\" image \u521b\u5efa\u955c\u50cf\u670d\u52a1API\u7aef\u70b9\uff1a openstack endpoint create --region RegionOne image public http://controller:9292 openstack endpoint create --region RegionOne image internal http://controller:9292 openstack endpoint create --region RegionOne image admin http://controller:9292 \u5b89\u88c5\u8f6f\u4ef6\u5305 yum install openstack-glance \u914d\u7f6eglance\u76f8\u5173\u914d\u7f6e\uff1a vim /etc/glance/glance-api.conf [database] connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance [keystone_authtoken] www_authenticate_uri = http://controller:5000 auth_url = http://controller:5000 memcached_servers = controller:11211 auth_type = password project_domain_name = Default user_domain_name = Default project_name = service username = glance password = GLANCE_PASS [paste_deploy] flavor = keystone [glance_store] stores = file,http default_store = file filesystem_store_datadir = /var/lib/glance/images/ vim /etc/glance/glance-registry.conf [database] connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance [keystone_authtoken] www_authenticate_uri = http://controller:5000 auth_url = http://controller:5000 memcached_servers = controller:11211 auth_type = password project_domain_name = Default user_domain_name = Default project_name = service username = glance password = GLANCE_PASS [paste_deploy] flavor = keystone [glance_store] stores = file,http default_store = file filesystem_store_datadir = /var/lib/glance/images/ \u89e3\u91ca: [database]\u90e8\u5206\uff0c\u914d\u7f6e\u6570\u636e\u5e93\u5165\u53e3 [keystone_authtoken] [paste_deploy]\u90e8\u5206\uff0c\u914d\u7f6e\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u5165\u53e3 [glance_store]\u90e8\u5206\uff0c\u914d\u7f6e\u672c\u5730\u6587\u4ef6\u7cfb\u7edf\u5b58\u50a8\u548c\u955c\u50cf\u6587\u4ef6\u7684\u4f4d\u7f6e \u6ce8\u610f \u66ff\u6362 GLANCE_DBPASS \u4e3a glance \u6570\u636e\u5e93\u7684\u5bc6\u7801 \u66ff\u6362 GLANCE_PASS \u4e3a glance \u7528\u6237\u7684\u5bc6\u7801 \u540c\u6b65\u6570\u636e\u5e93\uff1a su -s /bin/sh -c \"glance-manage db_sync\" glance \u542f\u52a8\u670d\u52a1\uff1a systemctl enable openstack-glance-api.service openstack-glance-registry.service systemctl start openstack-glance-api.service openstack-glance-registry.service \u9a8c\u8bc1 \u4e0b\u8f7d\u955c\u50cf source ~/.admin-openrc wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img \u6ce8\u610f \u5982\u679c\u60a8\u4f7f\u7528\u7684\u73af\u5883\u662f\u9cb2\u9e4f\u67b6\u6784\uff0c\u8bf7\u4e0b\u8f7darm64\u7248\u672c\u7684\u955c\u50cf \u5411Image\u670d\u52a1\u4e0a\u4f20\u955c\u50cf\uff1a openstack image create --disk-format qcow2 --container-format bare \\ --file cirros-0.4.0-x86_64-disk.img --public cirros \u786e\u8ba4\u955c\u50cf\u4e0a\u4f20\u5e76\u9a8c\u8bc1\u5c5e\u6027\uff1a openstack image list","title":"Glance \u5b89\u88c5"},{"location":"install/openEuler-20.03-LTS-SP3/OpenStack-queens/#nova","text":"\u521b\u5efa\u6570\u636e\u5e93\u3001\u670d\u52a1\u51ed\u8bc1\u548c API \u7aef\u70b9 
\u521b\u5efa\u6570\u636e\u5e93\uff1a mysql -u root -p (CPT) MariaDB [(none)]> CREATE DATABASE nova_api; MariaDB [(none)]> CREATE DATABASE nova; MariaDB [(none)]> CREATE DATABASE nova_cell0; MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \\ IDENTIFIED BY 'NOVA_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \\ IDENTIFIED BY 'NOVA_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \\ IDENTIFIED BY 'NOVA_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \\ IDENTIFIED BY 'NOVA_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \\ IDENTIFIED BY 'NOVA_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \\ IDENTIFIED BY 'NOVA_DBPASS'; MariaDB [(none)]> exit \u6ce8\u610f \u66ff\u6362NOVA_DBPASS\uff0c\u4e3anova\u6570\u636e\u5e93\u8bbe\u7f6e\u5bc6\u7801 source ~/.admin-openrc (CPT) \u521b\u5efanova\u670d\u52a1\u51ed\u8bc1: openstack user create --domain default --password-prompt nova (CTP) openstack role add --project service --user nova admin (CPT) openstack service create --name nova --description \"OpenStack Compute\" compute (CPT) \u521b\u5efaplacement\u670d\u52a1\u51ed\u8bc1: openstack user create --domain default --password-prompt placement (CPT) openstack role add --project service --user placement admin (CPT) openstack service create --name placement --description \"Placement API\" placement (CPT) \u521b\u5efanova API\u7aef\u70b9\uff1a openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1 (CPT) openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1 (CPT) openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1 (CPT) \u521b\u5efaplacement API\u7aef\u70b9\uff1a openstack endpoint create --region RegionOne placement public http://controller:8778 (CPT) openstack endpoint create --region RegionOne placement internal http://controller:8778 (CPT) openstack endpoint create --region RegionOne placement admin http://controller:8778 (CPT) \u5b89\u88c5\u8f6f\u4ef6\u5305 yum install openstack-nova-api openstack-nova-conductor openstack-nova-console \\ novnc openstack-nova-novncproxy openstack-nova-scheduler \\ openstack-nova-placement-api (CTL) yum install openstack-nova-compute (CPT) \u6ce8\u610f \u5982\u679c\u4e3aarm64\u7ed3\u6784\uff0c\u8fd8\u9700\u8981\u6267\u884c\u4ee5\u4e0b\u547d\u4ee4 yum install edk2-aarch64 (CPT) \u914d\u7f6enova\u76f8\u5173\u914d\u7f6e vim /etc/nova/nova.conf [DEFAULT] enabled_apis = osapi_compute,metadata transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/ my_ip = 10.0.0.1 use_neutron = true firewall_driver = nova.virt.firewall.NoopFirewallDriver compute_driver = libvirt.LibvirtDriver (CPT) instances_path = /var/lib/nova/instances/ (CPT) lock_path = /var/lib/nova/tmp (CPT) logdir = /var/log/nova/ [api_database] connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api (CTL) [database] connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova [api] auth_strategy = keystone [keystone_authtoken] www_authenticate_uri = http://controller:5000/ auth_url = http://controller:5000/ memcached_servers = controller:11211 auth_type = password project_domain_name = Default user_domain_name = Default project_name = service username = nova password = NOVA_PASS [vnc] enabled = true server_listen = $my_ip server_proxyclient_address = $my_ip novncproxy_base_url = http://controller:6080/vnc_auto.html (CPT) 
[glance] api_servers = http://controller:9292 [oslo_concurrency] lock_path = /var/lib/nova/tmp (CTL) [placement] region_name = RegionOne project_domain_name = Default project_name = service auth_type = password user_domain_name = Default auth_url = http://controller:5000/v3 username = placement password = PLACEMENT_PASS [neutron] auth_url = http://controller:5000 auth_type = password project_domain_name = default user_domain_name = default region_name = RegionOne project_name = service username = neutron password = NEUTRON_PASS service_metadata_proxy = true (CTL) metadata_proxy_shared_secret = METADATA_SECRET (CTL) \u89e3\u91ca [default]\u90e8\u5206\uff0c\u542f\u7528\u8ba1\u7b97\u548c\u5143\u6570\u636e\u7684API\uff0c\u914d\u7f6eRabbitMQ\u6d88\u606f\u961f\u5217\u5165\u53e3\uff0c\u914d\u7f6emy_ip\uff0c\u542f\u7528\u7f51\u7edc\u670d\u52a1neutron\uff1b [api_database] [database]\u90e8\u5206\uff0c\u914d\u7f6e\u6570\u636e\u5e93\u5165\u53e3\uff1b [api] [keystone_authtoken]\u90e8\u5206\uff0c\u914d\u7f6e\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u5165\u53e3\uff1b [vnc]\u90e8\u5206\uff0c\u542f\u7528\u5e76\u914d\u7f6e\u8fdc\u7a0b\u63a7\u5236\u53f0\u5165\u53e3\uff1b [glance]\u90e8\u5206\uff0c\u914d\u7f6e\u955c\u50cf\u670d\u52a1API\u7684\u5730\u5740\uff1b [oslo_concurrency]\u90e8\u5206\uff0c\u914d\u7f6elock path\uff1b [placement]\u90e8\u5206\uff0c\u914d\u7f6eplacement\u670d\u52a1\u7684\u5165\u53e3\u3002 \u6ce8\u610f \u66ff\u6362 RABBIT_PASS \u4e3a RabbitMQ \u4e2d openstack \u8d26\u6237\u7684\u5bc6\u7801\uff1b \u914d\u7f6e my_ip \u4e3a\u63a7\u5236\u8282\u70b9\u7684\u7ba1\u7406IP\u5730\u5740\uff1b \u66ff\u6362 NOVA_DBPASS \u4e3anova\u6570\u636e\u5e93\u7684\u5bc6\u7801\uff1b \u66ff\u6362 NOVA_PASS \u4e3anova\u7528\u6237\u7684\u5bc6\u7801\uff1b \u66ff\u6362 PLACEMENT_PASS \u4e3aplacement\u7528\u6237\u7684\u5bc6\u7801\uff1b \u66ff\u6362 NEUTRON_PASS \u4e3aneutron\u7528\u6237\u7684\u5bc6\u7801\uff1b \u66ff\u6362 METADATA_SECRET \u4e3a\u5408\u9002\u7684\u5143\u6570\u636e\u4ee3\u7406secret\u3002 \u989d\u5916 \u624b\u52a8\u589e\u52a0Placement API\u63a5\u5165\u914d\u7f6e\u3002 vim /etc/httpd/conf.d/00-nova-placement-api.conf (CTL) = 2.4> Require all granted Order allow,deny Allow from all \u91cd\u542fhttpd\u670d\u52a1\uff1a systemctl restart httpd (CTL) \u786e\u5b9a\u662f\u5426\u652f\u6301\u865a\u62df\u673a\u786c\u4ef6\u52a0\u901f\uff08x86\u67b6\u6784\uff09\uff1a egrep -c '(vmx|svm)' /proc/cpuinfo (CPT) \u5982\u679c\u8fd4\u56de\u503c\u4e3a0\u5219\u4e0d\u652f\u6301\u786c\u4ef6\u52a0\u901f\uff0c\u9700\u8981\u914d\u7f6elibvirt\u4f7f\u7528QEMU\u800c\u4e0d\u662fKVM\uff1a vim /etc/nova/nova.conf (CPT) [libvirt] virt_type = qemu \u5982\u679c\u8fd4\u56de\u503c\u4e3a1\u6216\u66f4\u5927\u7684\u503c\uff0c\u5219\u652f\u6301\u786c\u4ef6\u52a0\u901f\uff0c\u4e0d\u9700\u8981\u8fdb\u884c\u989d\u5916\u7684\u914d\u7f6e \u6ce8\u610f \u5982\u679c\u4e3aarm64\u7ed3\u6784\uff0c\u8fd8\u9700\u8981\u5728\u8ba1\u7b97\u8282\u70b9\u6267\u884c\u4ee5\u4e0b\u547d\u4ee4 mkdir -p /usr/share/AAVMF chown nova:nova /usr/share/AAVMF ln -s /usr/share/edk2/aarch64/QEMU_EFI-pflash.raw \\ /usr/share/AAVMF/AAVMF_CODE.fd ln -s /usr/share/edk2/aarch64/vars-template-pflash.raw \\ /usr/share/AAVMF/AAVMF_VARS.fd vim /etc/libvirt/qemu.conf nvram = [\"/usr/share/AAVMF/AAVMF_CODE.fd: \\ /usr/share/AAVMF/AAVMF_VARS.fd\", \\ \"/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw: \\ /usr/share/edk2/aarch64/vars-template-pflash.raw\"] \u5e76\u4e14\u5f53ARM\u67b6\u6784\u4e0b\u7684\u90e8\u7f72\u73af\u5883\u4e3a\u5d4c\u5957\u865a\u62df\u5316\u65f6\uff0c libvirt 
\u914d\u7f6e\u5982\u4e0b\uff1a [libvirt] virt_type = qemu cpu_mode = custom cpu_model = cortex-a72 \u540c\u6b65\u6570\u636e\u5e93 \u540c\u6b65nova-api\u6570\u636e\u5e93\uff1a su -s /bin/sh -c \"nova-manage api_db sync\" nova (CTL) \u6ce8\u518ccell0\u6570\u636e\u5e93\uff1a su -s /bin/sh -c \"nova-manage cell_v2 map_cell0\" nova (CTL) \u521b\u5efacell1 cell\uff1a su -s /bin/sh -c \"nova-manage cell_v2 create_cell --name=cell1 --verbose\" nova (CTL) \u540c\u6b65nova\u6570\u636e\u5e93\uff1a su -s /bin/sh -c \"nova-manage db sync\" nova (CTL) \u9a8c\u8bc1cell0\u548ccell1\u6ce8\u518c\u6b63\u786e\uff1a su -s /bin/sh -c \"nova-manage cell_v2 list_cells\" nova (CTL) \u6dfb\u52a0\u8ba1\u7b97\u8282\u70b9\u5230openstack\u96c6\u7fa4 su -s /bin/sh -c \"nova-manage cell_v2 discover_hosts --verbose\" nova (CPT) \u542f\u52a8\u670d\u52a1 systemctl enable \\ (CTL) openstack-nova-api.service \\ openstack-nova-consoleauth.service \\ openstack-nova-scheduler.service \\ openstack-nova-conductor.service \\ openstack-nova-novncproxy.service systemctl start \\ (CTL) openstack-nova-api.service \\ openstack-nova-consoleauth.service \\ openstack-nova-scheduler.service \\ openstack-nova-conductor.service \\ openstack-nova-novncproxy.service systemctl enable libvirtd.service openstack-nova-compute.service (CPT) systemctl start libvirtd.service openstack-nova-compute.service (CPT) \u9a8c\u8bc1 source ~/.admin-openrc (CTL) \u5217\u51fa\u670d\u52a1\u7ec4\u4ef6\uff0c\u9a8c\u8bc1\u6bcf\u4e2a\u6d41\u7a0b\u90fd\u6210\u529f\u542f\u52a8\u548c\u6ce8\u518c\uff1a openstack compute service list (CTL) \u5217\u51fa\u8eab\u4efd\u670d\u52a1\u4e2d\u7684API\u7aef\u70b9\uff0c\u9a8c\u8bc1\u4e0e\u8eab\u4efd\u670d\u52a1\u7684\u8fde\u63a5\uff1a openstack catalog list (CTL) \u5217\u51fa\u955c\u50cf\u670d\u52a1\u4e2d\u7684\u955c\u50cf\uff0c\u9a8c\u8bc1\u4e0e\u955c\u50cf\u670d\u52a1\u7684\u8fde\u63a5\uff1a openstack image list (CTL) \u68c0\u67e5cells\u548cplacement API\u662f\u5426\u8fd0\u4f5c\u6210\u529f\uff0c\u4ee5\u53ca\u5176\u4ed6\u5fc5\u8981\u6761\u4ef6\u662f\u5426\u5df2\u5177\u5907\u3002 nova-status upgrade check (CTL)","title":"Nova \u5b89\u88c5"},{"location":"install/openEuler-20.03-LTS-SP3/OpenStack-queens/#neutron","text":"\u521b\u5efa\u6570\u636e\u5e93\u3001\u670d\u52a1\u51ed\u8bc1\u548c API \u7aef\u70b9 \u521b\u5efa\u6570\u636e\u5e93\uff1a mysql -u root -p (CTL) MariaDB [(none)]> CREATE DATABASE neutron; MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \\ IDENTIFIED BY 'NEUTRON_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \\ IDENTIFIED BY 'NEUTRON_DBPASS'; MariaDB [(none)]> exit \u6ce8\u610f \u66ff\u6362 NEUTRON_DBPASS \u4e3a neutron \u6570\u636e\u5e93\u8bbe\u7f6e\u5bc6\u7801\u3002 source ~/.admin-openrc (CTL) \u521b\u5efaneutron\u670d\u52a1\u51ed\u8bc1 openstack user create --domain default --password-prompt neutron (CTL) openstack role add --project service --user neutron admin (CTL) openstack service create --name neutron --description \"OpenStack Networking\" network (CTL) \u521b\u5efaNeutron\u670d\u52a1API\u7aef\u70b9\uff1a openstack endpoint create --region RegionOne network public http://controller:9696 (CTL) openstack endpoint create --region RegionOne network internal http://controller:9696 (CTL) openstack endpoint create --region RegionOne network admin http://controller:9696 (CTL) \u5b89\u88c5\u8f6f\u4ef6\u5305\uff1a yum install openstack-neutron openstack-neutron-linuxbridge-agent \\ (CTL) ebtables ipset openstack-neutron-l3-agent \\ openstack-neutron-dhcp-agent 
## Neutron Installation

Create the database, service credentials and API endpoints

Create the database:

```
mysql -u root -p                                                             (CTL)

MariaDB [(none)]> CREATE DATABASE neutron;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
  IDENTIFIED BY 'NEUTRON_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
  IDENTIFIED BY 'NEUTRON_DBPASS';
MariaDB [(none)]> exit
```

Note: replace `NEUTRON_DBPASS` to set a password for the neutron database.

```
source ~/.admin-openrc                                                       (CTL)
```

Create the neutron service credentials:

```
openstack user create --domain default --password-prompt neutron             (CTL)
openstack role add --project service --user neutron admin                    (CTL)
openstack service create --name neutron --description "OpenStack Networking" network   (CTL)
```

Create the Neutron service API endpoints:

```
openstack endpoint create --region RegionOne network public http://controller:9696     (CTL)
openstack endpoint create --region RegionOne network internal http://controller:9696   (CTL)
openstack endpoint create --region RegionOne network admin http://controller:9696      (CTL)
```

Install the packages:

```
yum install openstack-neutron openstack-neutron-linuxbridge-agent \          (CTL)
    ebtables ipset openstack-neutron-l3-agent \
    openstack-neutron-dhcp-agent \
    openstack-neutron-metadata-agent

yum install openstack-neutron-linuxbridge-agent ebtables ipset               (CPT)
```

Configure neutron

Edit the main configuration file:

```
vim /etc/neutron/neutron.conf

[database]
connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron       (CTL)

[DEFAULT]
core_plugin = ml2                                                            (CTL)
service_plugins = router                                                     (CTL)
allow_overlapping_ips = true                                                 (CTL)
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = true                                    (CTL)
notify_nova_on_port_data_changes = true                                      (CTL)
api_workers = 3                                                              (CTL)

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = neutron
password = NEUTRON_PASS

[nova]
auth_url = http://controller:5000                                            (CTL)
auth_type = password                                                         (CTL)
project_domain_name = Default                                                (CTL)
user_domain_name = Default                                                   (CTL)
region_name = RegionOne                                                      (CTL)
project_name = service                                                       (CTL)
username = nova                                                              (CTL)
password = NOVA_PASS                                                         (CTL)

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
```

Explanation: the `[database]` section configures the database entry; the `[DEFAULT]` section enables the ml2 and router plugins, allows overlapping IP addresses, and configures the RabbitMQ message queue entry; the `[DEFAULT]` and `[keystone_authtoken]` sections configure the identity service entry; the `[DEFAULT]` and `[nova]` sections configure networking to notify compute of network topology changes; the `[oslo_concurrency]` section configures the lock path.

Note: replace `NEUTRON_DBPASS` with the password of the neutron database; replace `RABBIT_PASS` with the password of the openstack account in RabbitMQ; replace `NEUTRON_PASS` with the password of the neutron user; replace `NOVA_PASS` with the password of the nova user.

Configure the ML2 plugin:

```
vim /etc/neutron/plugins/ml2/ml2_conf.ini

[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security

[ml2_type_flat]
flat_networks = provider

[ml2_type_vxlan]
vni_ranges = 1:1000

[securitygroup]
enable_ipset = true
```

Create the symbolic link /etc/neutron/plugin.ini:

```
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
```

Note: the `[ml2]` section enables flat, vlan and vxlan networks, enables the linuxbridge and l2population mechanisms, and enables the port security extension driver; the `[ml2_type_flat]` section configures the flat network as the provider virtual network; the `[ml2_type_vxlan]` section configures the VXLAN network identifier range; the `[securitygroup]` section enables ipset.

Additional note: the concrete L2 configuration can be adjusted to the user's needs; this document uses a provider network with linuxbridge.

Configure the Linux bridge agent:

```
vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini

[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME

[vxlan]
enable_vxlan = true
local_ip = OVERLAY_INTERFACE_IP_ADDRESS
l2_population = true

[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
```

Explanation: the `[linux_bridge]` section maps the provider virtual network to the physical network interface; the `[vxlan]` section enables the VXLAN overlay network, configures the IP address of the physical interface that handles the overlay network, and enables layer-2 population; the `[securitygroup]` section allows security groups and configures the linux bridge iptables firewall driver.

Note: replace `PROVIDER_INTERFACE_NAME` with the name of the physical network interface; replace `OVERLAY_INTERFACE_IP_ADDRESS` with the management IP address of the controller node.

Configure the Layer-3 agent:

```
vim /etc/neutron/l3_agent.ini                                                (CTL)

[DEFAULT]
interface_driver = linuxbridge
```

Explanation: in the `[DEFAULT]` section, set the interface driver to linuxbridge.

Configure the DHCP agent:

```
vim /etc/neutron/dhcp_agent.ini                                              (CTL)

[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
```

Explanation: the `[DEFAULT]` section configures the linuxbridge interface driver and the Dnsmasq DHCP driver, and enables isolated metadata.

Configure the metadata agent:

```
vim /etc/neutron/metadata_agent.ini                                          (CTL)

[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = METADATA_SECRET
```

Explanation: the `[DEFAULT]` section configures the metadata host and the shared secret.

Note: replace `METADATA_SECRET` with a suitable metadata proxy secret.

Configure nova:

```
vim /etc/nova/nova.conf

[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = Default
user_domain_name = Default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
service_metadata_proxy = true                                                (CTL)
metadata_proxy_shared_secret = METADATA_SECRET                               (CTL)
```

Explanation: the `[neutron]` section configures the access parameters, enables the metadata proxy and configures the secret.

Note: replace `NEUTRON_PASS` with the password of the neutron user; replace `METADATA_SECRET` with a suitable metadata proxy secret.

Synchronize the database:

```
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
  --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
```

Restart the compute API service:

```
systemctl restart openstack-nova-api.service
```

Start the network services:

```
systemctl enable openstack-neutron-server.service \                          (CTL)
    openstack-neutron-linuxbridge-agent.service openstack-neutron-dhcp-agent.service \
    openstack-neutron-metadata-agent.service openstack-neutron-l3-agent.service
systemctl restart openstack-nova-api.service openstack-neutron-server.service \   (CTL)
    openstack-neutron-linuxbridge-agent.service openstack-neutron-dhcp-agent.service \
    openstack-neutron-metadata-agent.service openstack-neutron-l3-agent.service

systemctl enable openstack-neutron-linuxbridge-agent.service                 (CPT)
systemctl restart openstack-neutron-linuxbridge-agent.service openstack-nova-compute.service   (CPT)
```

Verification

List the agents to verify that the neutron agents started successfully:

```
openstack network agent list
```
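As an optional smoke test (not one of the steps above), a provider network and subnet can be created once the agents are up. This is only a sketch: the physical network name must match the `provider` mapping configured earlier, and the subnet range, gateway and DNS server below are illustrative values for your own environment.

```
source ~/.admin-openrc                                                       (CTL)

# Create a flat provider network on the physical network mapped as "provider"
openstack network create --share --external \
  --provider-physical-network provider \
  --provider-network-type flat provider-net

# Create a subnet on it; all addresses here are examples only
openstack subnet create --network provider-net \
  --allocation-pool start=192.168.1.100,end=192.168.1.200 \
  --dns-nameserver 8.8.8.8 --gateway 192.168.1.1 \
  --subnet-range 192.168.1.0/24 provider-subnet
```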
## Cinder Installation

Create the database, service credentials and API endpoints

Create the database:

```
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE cinder;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \
  IDENTIFIED BY 'CINDER_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \
  IDENTIFIED BY 'CINDER_DBPASS';
MariaDB [(none)]> exit
```

Note: replace `CINDER_DBPASS` to set a password for the cinder database.

```
source ~/.admin-openrc
```

Create the cinder service credentials:

```
openstack user create --domain default --password-prompt cinder
openstack role add --project service --user cinder admin
openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
```

Create the block storage service API endpoints:

```
openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s
```

Install the packages:

```
yum install openstack-cinder-api openstack-cinder-scheduler                  (CTL)

yum install lvm2 device-mapper-persistent-data scsi-target-utils rpcbind nfs-utils \   (CPT)
    openstack-cinder-volume openstack-cinder-backup
```

Prepare the storage device (the following is an example only):

```
pvcreate /dev/vdb
vgcreate cinder-volumes /dev/vdb

vim /etc/lvm/lvm.conf

devices {
...
filter = [ "a/vdb/", "r/.*/"]
```

Explanation: in the devices section, add a filter that accepts the /dev/vdb device and rejects all other devices.

Prepare NFS:

```
mkdir -p /root/cinder/backup

cat << EOF >> /etc/exports
/root/cinder/backup 192.168.1.0/24(rw,sync,no_root_squash,no_all_squash)
EOF
```

Configure cinder:

```
vim /etc/cinder/cinder.conf

[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone
my_ip = 10.0.0.11
enabled_backends = lvm                                                       (CPT)
backup_driver=cinder.backup.drivers.nfs.NFSBackupDriver                      (CPT)
backup_share=HOST:PATH                                                       (CPT)

[database]
connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = cinder
password = CINDER_PASS

[oslo_concurrency]
lock_path = /var/lib/cinder/tmp

[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver                    (CPT)
volume_group = cinder-volumes                                                (CPT)
iscsi_protocol = iscsi                                                       (CPT)
iscsi_helper = tgtadm                                                        (CPT)
```

Explanation: the `[database]` section configures the database entry; the `[DEFAULT]` section configures the RabbitMQ message queue entry and `my_ip`; the `[DEFAULT]` and `[keystone_authtoken]` sections configure the identity service entry; the `[oslo_concurrency]` section configures the lock path.

Note: replace `CINDER_DBPASS` with the password of the cinder database; replace `RABBIT_PASS` with the password of the openstack account in RabbitMQ; set `my_ip` to the management IP address of the controller node; replace `CINDER_PASS` with the password of the cinder user; replace `HOST:PATH` with the NFS host IP and the shared path.

Synchronize the database:

```
su -s /bin/sh -c "cinder-manage db sync" cinder                              (CTL)
```

Configure nova:

```
vim /etc/nova/nova.conf                                                      (CTL)

[cinder]
os_region_name = RegionOne
```

Restart the compute API service:

```
systemctl restart openstack-nova-api.service
```

Start the cinder services:

```
systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service    (CTL)
systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service     (CTL)

systemctl enable rpcbind.service nfs-server.service tgtd.service iscsid.service \   (CPT)
    openstack-cinder-volume.service \
    openstack-cinder-backup.service
systemctl start rpcbind.service nfs-server.service tgtd.service iscsid.service \    (CPT)
    openstack-cinder-volume.service \
    openstack-cinder-backup.service
```

Note: when cinder attaches volumes through tgtadm, modify /etc/tgt/tgtd.conf with the following content so that tgtd can discover the iSCSI targets of cinder-volume:

```
include /var/lib/cinder/volumes/*
```

Verification

```
source ~/.admin-openrc
openstack volume service list
```

## Horizon Installation

Install the package:

```
yum install openstack-dashboard
```

Modify the variables in the configuration file:

```
vim /etc/openstack-dashboard/local_settings

ALLOWED_HOSTS = ['*', ]
OPENSTACK_HOST = "controller"
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
```

Restart the httpd service:

```
systemctl restart httpd
```

Verification

Open a browser, enter the URL http://HOSTIP/dashboard/ and log in to horizon.

Note: replace `HOSTIP` with the management-plane IP address of the controller node.

## Tempest Installation

Tempest is OpenStack's integration test service. It is recommended when users need comprehensive automated testing of the functionality of an installed OpenStack environment; otherwise it does not need to be installed.

Install Tempest:

```
yum install openstack-tempest
```

Initialize a working directory:

```
tempest init mytest
```

Modify the configuration file:

```
cd mytest
vi etc/tempest.conf
```

tempest.conf must be configured with the information of the current OpenStack environment; for the details, refer to the official example.

Run the tests:

```
tempest run
```
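As a rough illustration of what usually needs to be filled in, the fragment below sketches a few commonly required tempest.conf fields; the exact option set depends on the Tempest version, and every value here (credentials, UUIDs) is a placeholder for your own environment, not part of the original guide.

```
[auth]
admin_username = admin
admin_password = ADMIN_PASS
admin_project_name = admin
admin_domain_name = Default

[identity]
uri_v3 = http://controller:5000/v3

[compute]
image_ref = <IMAGE_UUID>
flavor_ref = <FLAVOR_ID>

[network]
public_network_id = <PUBLIC_NETWORK_UUID>
```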
## Ironic Installation

Ironic is OpenStack's bare metal service. It is recommended when users need bare-metal provisioning; otherwise it does not need to be installed.

Set up the database

The bare metal service stores information in a database. Create an `ironic` database that the `ironic` user can access, replacing `IRONIC_DBPASSWORD` with a suitable password:

```
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE ironic CHARACTER SET utf8;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'localhost' \
  IDENTIFIED BY 'IRONIC_DBPASSWORD';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'%' \
  IDENTIFIED BY 'IRONIC_DBPASSWORD';
```

Install the packages

```
yum install openstack-ironic-api openstack-ironic-conductor python2-ironicclient
```

Start the services

```
systemctl enable openstack-ironic-api openstack-ironic-conductor
systemctl start openstack-ironic-api openstack-ironic-conductor
```

Create the service user credentials

1. Create the Bare Metal service user:

```
openstack user create --password IRONIC_PASSWORD \
  --email ironic@example.com ironic
openstack role add --project service --user ironic admin
openstack service create --name ironic --description "Ironic baremetal provisioning service" baremetal
```

2. Create the Bare Metal service access endpoints:

```
openstack endpoint create --region RegionOne baremetal admin http://$IRONIC_NODE:6385
openstack endpoint create --region RegionOne baremetal public http://$IRONIC_NODE:6385
openstack endpoint create --region RegionOne baremetal internal http://$IRONIC_NODE:6385
```

Configure the ironic-api service

The configuration file path is /etc/ironic/ironic.conf.

1. Configure the database location through the `connection` option as shown below, replacing `IRONIC_DBPASSWORD` with the password of the `ironic` user and `DB_IP` with the IP address of the DB server:

```
[database]
# The SQLAlchemy connection string used to connect to the
# database (string value)
connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic
```

2. Configure the ironic-api service to use the RabbitMQ message broker through the following option, replacing `RPC_*` with the RabbitMQ address details and credentials:

```
[DEFAULT]
# A URL representing the messaging driver to use and its full
# configuration. (string value)
transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
```

Users may also replace RabbitMQ with the json-rpc mode on their own.

3. Configure the ironic-api service to use the credentials of the identity service, replacing `PUBLIC_IDENTITY_IP` with the public IP of the identity server, `PRIVATE_IDENTITY_IP` with the private IP of the identity server, and `IRONIC_PASSWORD` with the password of the `ironic` user in the identity service:

```
[DEFAULT]
# Authentication strategy used by ironic-api: one of
# "keystone" or "noauth". "noauth" should not be used in a
# production environment because all authentication will be
# disabled. (string value)
auth_strategy=keystone

[keystone_authtoken]
# Authentication type to load (string value)
auth_type=password
# Complete public Identity API endpoint (string value)
www_authenticate_uri=http://PUBLIC_IDENTITY_IP:5000
# Complete admin Identity API endpoint. (string value)
auth_url=http://PRIVATE_IDENTITY_IP:5000
# Service username. (string value)
username=ironic
# Service account password. (string value)
password=IRONIC_PASSWORD
# Service tenant name. (string value)
project_name=service
# Domain name containing project (string value)
project_domain_name=Default
# User's domain name (string value)
user_domain_name=Default
```

4. Create the bare metal service database tables:

```
ironic-dbsync --config-file /etc/ironic/ironic.conf create_schema
```

5. Restart the ironic-api service:

```
sudo systemctl restart openstack-ironic-api
```

Configure the ironic-conductor service

1. Replace `HOST_IP` with the IP of the conductor host:

```
[DEFAULT]
# IP address of this host. If unset, will determine the IP
# programmatically. If unable to do so, will use "127.0.0.1".
# (string value)
my_ip=HOST_IP
```

2. Configure the database location; ironic-conductor should use the same configuration as ironic-api. Replace `IRONIC_DBPASSWORD` with the password of the `ironic` user and `DB_IP` with the IP address of the DB server:

```
[database]
# The SQLAlchemy connection string to use to connect to the
# database. (string value)
connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic
```

3. Configure the ironic-conductor service to use the RabbitMQ message broker through the following option; ironic-conductor should use the same configuration as ironic-api. Replace `RPC_*` with the RabbitMQ address details and credentials:

```
[DEFAULT]
# A URL representing the messaging driver to use and its full
# configuration. (string value)
transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
```

Users may also replace RabbitMQ with the json-rpc mode on their own.
4. Configure the credentials for accessing other OpenStack services

To communicate with other OpenStack services, the bare metal service needs to authenticate against the OpenStack Identity service with a service user when it sends requests to those services. The credentials of these users must be configured in each configuration section associated with the corresponding service:

- `[neutron]` - access to the OpenStack networking service
- `[glance]` - access to the OpenStack image service
- `[swift]` - access to the OpenStack object storage service
- `[cinder]` - access to the OpenStack block storage service
- `[inspector]` - access to the OpenStack bare metal introspection service
- `[service_catalog]` - a special entry that stores the credentials the bare metal service uses to discover its own API URL endpoint as registered in the OpenStack Identity service catalog

For simplicity, the same service user can be used for all services. For backward compatibility, this user should be the same one configured in the `[keystone_authtoken]` section of the ironic-api service; however, this is not mandatory, and a different service user can be created and configured for each service.

In the following example, the authentication information for the user accessing the OpenStack networking service is configured such that:

- the networking service is deployed in the identity service region named RegionOne, with only the public endpoint interface registered in the service catalog;
- requests use a specific CA SSL certificate for HTTPS connections;
- the same service user as configured for the ironic-api service is used;
- the dynamic password authentication plugin discovers a suitable identity service API version based on the other options.

```
[neutron]
# Authentication type to load (string value)
auth_type = password
# Authentication URL (string value)
auth_url=https://IDENTITY_IP:5000/
# Username (string value)
username=ironic
# User's password (string value)
password=IRONIC_PASSWORD
# Project name to scope to (string value)
project_name=service
# Domain ID containing project (string value)
project_domain_id=default
# User's domain id (string value)
user_domain_id=default
# PEM encoded Certificate Authority to use when verifying
# HTTPs connections. (string value)
cafile=/opt/stack/data/ca-bundle.pem
# The default region_name for endpoint URL discovery. (string
# value)
region_name = RegionOne
# List of interfaces, in order of preference, for endpoint
# URL. (list value)
valid_interfaces=public
```

By default, to communicate with other services the bare metal service tries to discover a suitable endpoint for each service through the service catalog of the identity service. To use a different endpoint for a particular service, specify it with the `endpoint_override` option in the bare metal service configuration file:

```
[neutron]
...
endpoint_override = <NEUTRON_API_ADDRESS>
```

5. Configure the enabled drivers and hardware types

Set the hardware types that the ironic-conductor service is allowed to use via `enabled_hardware_types`:

```
[DEFAULT]
enabled_hardware_types = ipmi
```

Configure the hardware interfaces:

```
enabled_boot_interfaces = pxe
enabled_deploy_interfaces = direct,iscsi
enabled_inspect_interfaces = inspector
enabled_management_interfaces = ipmitool
enabled_power_interfaces = ipmitool
```

Configure the interface defaults:

```
[DEFAULT]
default_deploy_interface = direct
default_network_interface = neutron
```

If any driver that uses Direct deploy is enabled, the Swift backend of the image service must be installed and configured. The Ceph object gateway (RADOS gateway) is also supported as an image service backend.

6. Restart the ironic-conductor service:

```
sudo systemctl restart openstack-ironic-conductor
```
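With the `ipmi` hardware type and the interfaces above enabled, a bare metal node can later be enrolled roughly as sketched below. This is only an illustration and not a step of this guide: the BMC address, credentials, MAC address and node UUID are placeholders you must replace.

```
# Enroll a node managed over IPMI (all values are placeholders)
openstack baremetal node create --driver ipmi \
  --driver-info ipmi_address=BMC_IP \
  --driver-info ipmi_username=BMC_USER \
  --driver-info ipmi_password=BMC_PASSWORD \
  --name example-node-0

# Register one of the node's NICs so the networking service can wire it up
openstack baremetal port create NIC_MAC_ADDRESS --node <NODE_UUID>
```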
Building the deploy ramdisk image

The Queens-release ramdisk image can be built with the ironic-python-agent service or the disk-image-builder tool, or with the community's latest ironic-python-agent-builder. Users may also choose other tools of their own.

To use the native Queens tools, install the corresponding package:

```
yum install openstack-ironic-python-agent
```

or

```
yum install diskimage-builder
```

For detailed usage, refer to the official documentation.

The following describes the complete process of building the deploy image used by ironic with ironic-python-agent-builder.

Install ironic-python-agent-builder

1. Install the tool:

```shell
pip install ironic-python-agent-builder
```

2. Modify the python interpreter in the following files:

```shell
/usr/bin/yum
/usr/libexec/urlgrabber-ext-down
```

3. Install the other required tools:

```shell
yum install git
```

Since `DIB` depends on the `semanage` command, make sure the command is available before building the image by running `semanage --help`. If it reports that the command is not found, install it:

```shell
# First query which package needs to be installed
[root@localhost ~]# yum provides /usr/sbin/semanage
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirror.vcu.edu
 * extras: mirror.vcu.edu
 * updates: mirror.math.princeton.edu
policycoreutils-python-2.5-34.el7.aarch64 : SELinux policy core python utilities
Repo        : base
Matched from:
Filename    : /usr/sbin/semanage

# Install it
[root@localhost ~]# yum install policycoreutils-python
```

Build the image

If the architecture is `arm`, add:

```shell
export ARCH=aarch64
```

Basic usage:

```shell
usage: ironic-python-agent-builder [-h] [-r RELEASE] [-o OUTPUT] [-e ELEMENT]
                                   [-b BRANCH] [-v] [--extra-args EXTRA_ARGS]
                                   distribution

positional arguments:
  distribution          Distribution to use

optional arguments:
  -h, --help            show this help message and exit
  -r RELEASE, --release RELEASE
                        Distribution release to use
  -o OUTPUT, --output OUTPUT
                        Output base file name
  -e ELEMENT, --element ELEMENT
                        Additional DIB element to use
  -b BRANCH, --branch BRANCH
                        If set, override the branch that is used for ironic-
                        python-agent and requirements
  -v, --verbose         Enable verbose logging in diskimage-builder
  --extra-args EXTRA_ARGS
                        Extra arguments to pass to diskimage-builder
```

Example:

```shell
ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky
```

Allow SSH login

Initialize the environment variables, then build the image:

```shell
export DIB_DEV_USER_USERNAME=ipa \
export DIB_DEV_USER_PWDLESS_SUDO=yes \
export DIB_DEV_USER_PASSWORD='123'
ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky -e selinux-permissive -e devuser
```

Specify the code repository

Initialize the corresponding environment variables, then build the image:

```shell
# Specify the repository address and version
DIB_REPOLOCATION_ironic_python_agent=git@172.20.2.149:liuzz/ironic-python-agent.git
DIB_REPOREF_ironic_python_agent=origin/develop

# Clone the code directly from gerrit
DIB_REPOLOCATION_ironic_python_agent=https://review.opendev.org/openstack/ironic-python-agent
DIB_REPOREF_ironic_python_agent=refs/changes/43/701043/1
```

Reference: [source-repositories](https://docs.openstack.org/diskimage-builder/latest/elements/source-repositories/README.html).

Specifying the repository address and version has been verified to work.

In Queens we also provide services such as ironic-inspector, which users can install according to their own needs.
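One common way to make the deploy kernel and ramdisk built above available to ironic is to register them in Glance. The sketch below assumes the builder's default output naming for the `-o /mnt/ironic-agent-ssh` example (`.kernel` and `.initramfs` suffixes); the image names are arbitrary and this is not a step of the original guide.

```shell
source ~/.admin-openrc

openstack image create deploy-kernel --public \
  --disk-format aki --container-format aki \
  --file /mnt/ironic-agent-ssh.kernel

openstack image create deploy-initrd --public \
  --disk-format ari --container-format ari \
  --file /mnt/ironic-agent-ssh.initramfs
```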
## Kolla Installation

Kolla provides production-ready containerized deployment of OpenStack services. The Kolla and Kolla-ansible services were already introduced in openEuler 20.03 LTS SP2, but Kolla and Kolla-ansible do not support openEuler natively, so the OpenStack SIG provides the two patch packages openstack-kolla-plugin and openstack-kolla-ansible-plugin in openEuler 20.03 LTS SP3.

Installing Kolla is very simple; just install the corresponding RPM packages.

Version with openEuler support:

```
yum install openstack-kolla-plugin openstack-kolla-ansible-plugin
```

Version without openEuler support:

```
yum install openstack-kolla openstack-kolla-ansible
```

After the installation, the kolla-ansible, kolla-build, kolla-genpwd and kolla-mergepwd commands are available.
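As a rough sketch of how these commands are typically used for an all-in-one deployment (the inventory path and every value below are illustrative and not part of this guide):

```shell
# Generate random service passwords into /etc/kolla/passwords.yml
kolla-genpwd

# Minimal /etc/kolla/globals.yml settings (example values only):
#   kolla_base_distro: "centos"
#   openstack_release: "rocky"
#   network_interface: "eth0"
#   kolla_internal_vip_address: "10.0.0.250"

# Deploy with the bundled all-in-one inventory
kolla-ansible -i /usr/share/kolla-ansible/ansible/inventory/all-in-one bootstrap-servers
kolla-ansible -i /usr/share/kolla-ansible/ansible/inventory/all-in-one prechecks
kolla-ansible -i /usr/share/kolla-ansible/ansible/inventory/all-in-one deploy
```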
## Trove Installation

Trove is OpenStack's database service. It is recommended when users want the database service provided by OpenStack; otherwise it does not need to be installed.

Set up the database

The database service stores information in a database. Create a `trove` database that the `trove` user can access, replacing `TROVE_DBPASSWORD` with a suitable password:

```
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE trove CHARACTER SET utf8;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'localhost' \
  IDENTIFIED BY 'TROVE_DBPASSWORD';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'%' \
  IDENTIFIED BY 'TROVE_DBPASSWORD';
```

Create the service user credentials

1. Create the Trove service user:

```
openstack user create --password TROVE_PASSWORD \
  --email trove@example.com trove
openstack role add --project service --user trove admin
openstack service create --name trove --description "Database service" database
```

Explanation: replace `TROVE_PASSWORD` with the password of the trove user.

2. Create the Database service access endpoints:

```
openstack endpoint create --region RegionOne database public http://$TROVE_NODE:8779/v1.0/%\(tenant_id\)s
openstack endpoint create --region RegionOne database internal http://$TROVE_NODE:8779/v1.0/%\(tenant_id\)s
openstack endpoint create --region RegionOne database admin http://$TROVE_NODE:8779/v1.0/%\(tenant_id\)s
```

Explanation: replace `$TROVE_NODE` with the node where the Trove API service is deployed.

Install and configure the Trove components

1. Install the Trove packages:

```shell
yum install openstack-trove python2-troveclient
```

2. Configure `trove.conf`:

```shell
vim /etc/trove/trove.conf

[DEFAULT]
bind_host=TROVE_NODE_IP
log_dir = /var/log/trove
auth_strategy = keystone
# Config option for showing the IP address that nova doles out
add_addresses = True
network_label_regex = ^NETWORK_LABEL$
api_paste_config = /etc/trove/api-paste.ini
trove_auth_url = http://controller:35357/v3/
nova_compute_url = http://controller:8774/v2
cinder_url = http://controller:8776/v1
nova_proxy_admin_user = admin
nova_proxy_admin_pass = ADMIN_PASS
nova_proxy_admin_tenant_name = service
taskmanager_manager = trove.taskmanager.manager.Manager
use_nova_server_config_drive = True
# Set these if using Neutron Networking
network_driver=trove.network.neutron.NeutronDriver
network_label_regex=.*
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/

[database]
connection = mysql+pymysql://trove:TROVE_DBPASS@controller/trove

[keystone_authtoken]
www_authenticate_uri = http://controller:5000/v3/
auth_url=http://controller:35357/v3/
#auth_uri = http://controller/identity
#auth_url = http://controller/identity_admin
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = trove
password = TROVE_PASS
```

Explanation:

- in the `[DEFAULT]` section, `bind_host` is set to the IP of the node where Trove is deployed;
- `nova_compute_url` and `cinder_url` are the endpoints created for Nova and Cinder in Keystone;
- `nova_proxy_XXX` is the information of a user that can access the Nova service; the example above uses the `admin` user;
- `transport_url` is the RabbitMQ connection information; replace `RABBIT_PASS` with the RabbitMQ password;
- `connection` in the `[database]` section is the information of the database created for Trove in mysql earlier;
- in the Trove user information, replace `TROVE_PASS` with the actual password of the trove user.

3. Configure `trove-taskmanager.conf`:

```shell
vim /etc/trove/trove-taskmanager.conf

[DEFAULT]
log_dir = /var/log/trove
trove_auth_url = http://controller/identity/v2.0
nova_compute_url = http://controller:8774/v2
cinder_url = http://controller:8776/v1
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/

[database]
connection = mysql+pymysql://trove:TROVE_DBPASS@controller/trove
```

Explanation: refer to the `trove.conf` configuration.

4. Configure `trove-conductor.conf`:

```shell
vim /etc/trove/trove-conductor.conf

[DEFAULT]
log_dir = /var/log/trove
trove_auth_url = http://controller/identity/v2.0
nova_compute_url = http://controller:8774/v2
cinder_url = http://controller:8776/v1
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/

[database]
connection = mysql+pymysql://trove:trove@controller/trove
```

Explanation: refer to the `trove.conf` configuration.

5. Configure `trove-guestagent.conf`:

```shell
vim /etc/trove/trove-guestagent.conf

[DEFAULT]
rabbit_host = controller
rabbit_password = RABBIT_PASS
nova_proxy_admin_user = admin
nova_proxy_admin_pass = ADMIN_PASS
nova_proxy_admin_tenant_name = service
trove_auth_url = http://controller/identity_admin/v2.0
```

Explanation: `guestagent` is an independent component of trove that needs to be built in advance into the virtual machine image that Trove creates through Nova. After a database instance is created, the guestagent process starts and reports heartbeats to Trove through the message queue (RabbitMQ), so the RabbitMQ user and password information must be configured here.

6. Generate the Trove database tables:

```shell
su -s /bin/sh -c "trove-manage db_sync" trove
```

Complete the installation

1. Configure the Trove services to start automatically:

```shell
systemctl enable openstack-trove-api.service \
    openstack-trove-taskmanager.service \
    openstack-trove-conductor.service
```

2. Start the services:

```shell
systemctl start openstack-trove-api.service \
    openstack-trove-taskmanager.service \
    openstack-trove-conductor.service
```

## Rally Installation

Rally is the performance testing tool provided by OpenStack. Only a simple installation is needed:

```
yum install openstack-rally openstack-rally-plugins
```
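A typical first run after installation looks roughly like the sketch below; command names can differ slightly between Rally versions, and the task file name is a placeholder for your own scenario definition.

```shell
source ~/.admin-openrc

# Initialize Rally's own database
rally db create

# Register the existing cloud from the loaded environment variables
rally deployment create --fromenv --name existing

# Run a task defined in a scenario file (task.yaml is your own file)
rally task start task.yaml
```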
# OpenStack-Rocky Deployment Guide

- OpenStack Introduction
- Preparing the Environment
  - OpenStack yum repository configuration
  - Environment configuration
- Install SQL DataBase
- Install RabbitMQ
- Install Memcached
- Install OpenStack
  - Keystone Installation
  - Glance Installation
  - Nova Installation
  - Neutron Installation
  - Cinder Installation
  - Horizon Installation
  - Tempest Installation
  - Ironic Installation
  - Kolla Installation
  - Trove Installation
  - Rally Installation

## OpenStack Introduction

OpenStack is both a community and a project. It provides an operating platform and a toolkit for deploying clouds, offering organizations scalable and flexible cloud computing.

As an open source cloud computing management platform, OpenStack combines several major components, such as nova, cinder, neutron, glance, keystone and horizon, to accomplish its work. OpenStack supports almost all types of cloud environments; the goal of the project is to provide a cloud computing management platform that is simple to deploy, massively scalable, feature-rich and standardized. OpenStack delivers an Infrastructure-as-a-Service (IaaS) solution through a set of complementary services, each of which offers an API for integration.

The officially certified third-party oepkg yum repository for openEuler 20.03-LTS-SP3 already supports the OpenStack-Rocky release. Users can configure the oepkg yum repository and then deploy OpenStack by following this document.

## Preparing the Environment

### OpenStack yum repository configuration

Configure the officially certified third-party oepkg repository for 20.03-LTS-SP3:

```
$ cat << EOF >> /etc/yum.repos.d/OpenStack_Rocky.repo
[openstack_rocky]
name=OpenStack_Rocky
baseurl=https://repo.oepkgs.net/openEuler/rpm/openEuler-20.03-LTS-SP3/budding-openeuler/openstack/rocky/\$basearch/
gpgcheck=0
enabled=1
EOF
```

Note

If the Epol repository is enabled in the environment, the priority of the rocky repository must be raised by setting priority=1:

```
$ cat << EOF >> /etc/yum.repos.d/OpenStack_Rocky.repo
[openstack_rocky]
name=OpenStack_Rocky
baseurl=https://repo.oepkgs.net/openEuler/rpm/openEuler-20.03-LTS-SP3/budding-openeuler/openstack/rocky/\$basearch/
gpgcheck=0
enabled=1
priority=1
EOF

$ yum clean all && yum makecache
```

### Environment configuration

Add the controller information to /etc/hosts. For example, if the node IP is 10.0.0.11, add:

```
10.0.0.11 controller
```

## Install SQL DataBase

1. Run the following command to install the packages:

```
$ yum install mariadb mariadb-server python2-PyMySQL
```

2. Create and edit the /etc/my.cnf.d/openstack.cnf file. Copy the following content into the file, setting bind-address to the management IP address of the controller node:

```
[mysqld]
bind-address = 10.0.0.11
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
```

3. Start the DataBase service and configure it to start at boot:

```
$ systemctl enable mariadb.service
$ systemctl start mariadb.service
```

## Install RabbitMQ

1. Run the following command to install the package:

```
$ yum install rabbitmq-server
```

2. Start the RabbitMQ service and configure it to start at boot:

```
$ systemctl enable rabbitmq-server.service
$ systemctl start rabbitmq-server.service
```

3. Add the OpenStack user:

```
$ rabbitmqctl add_user openstack RABBIT_PASS
```

Replace RABBIT_PASS to set a password for the OpenStack user.

4. Set the permissions of the openstack user to allow configure, write and read access:

```
$ rabbitmqctl set_permissions openstack ".*" ".*" ".*"
```

## Install Memcached

1. Run the following command to install the dependency packages:

```
$ yum install memcached python2-memcached
```

2. Edit the /etc/sysconfig/memcached file and add the following content:

```
OPTIONS="-l 127.0.0.1,::1,controller"
```

Change OPTIONS to the management IP address of the controller node in the actual environment.

3. Run the following commands to start the Memcached service and configure it to start at boot:

```
$ systemctl enable memcached.service
$ systemctl start memcached.service
```
## Install OpenStack

### Keystone Installation

Access the database as the root user, create the keystone database and grant privileges:

```
$ mysql -u root -p

MariaDB [(none)]> CREATE DATABASE keystone;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
IDENTIFIED BY 'KEYSTONE_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
IDENTIFIED BY 'KEYSTONE_DBPASS';
MariaDB [(none)]> exit
```

Replace KEYSTONE_DBPASS to set a password for the Keystone database.

Run the following command to install the packages:

```
$ yum install openstack-keystone httpd python2-mod_wsgi
```

Configure keystone by editing the /etc/keystone/keystone.conf file. In the [database] section, configure the database entry; in the [token] section, configure the token provider:

```
[database]
connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone

[token]
provider = fernet
```

Replace KEYSTONE_DBPASS with the password of the Keystone database.

Run the following command to synchronize the database:

```
su -s /bin/sh -c "keystone-manage db_sync" keystone
```

Run the following commands to initialize the Fernet key repositories:

```
$ keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
$ keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
```

Run the following command to bootstrap the identity service:

```
$ keystone-manage bootstrap --bootstrap-password ADMIN_PASS \
  --bootstrap-admin-url http://controller:5000/v3/ \
  --bootstrap-internal-url http://controller:5000/v3/ \
  --bootstrap-public-url http://controller:5000/v3/ \
  --bootstrap-region-id RegionOne
```

Replace ADMIN_PASS to set a password for the admin user.

Edit the /etc/httpd/conf/httpd.conf file to configure the Apache HTTP server:

```
$ vim /etc/httpd/conf/httpd.conf
```

Configure the ServerName entry to reference the controller node, as shown below:

```
ServerName controller
```

Create the ServerName entry if it does not exist.

Run the following command to create a link to the /usr/share/keystone/wsgi-keystone.conf file:

```
$ ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
```

When the installation is complete, run the following commands to start the Apache HTTP service:

```
$ systemctl enable httpd.service
$ systemctl start httpd.service
```

Install the OpenStackClient:

```
$ yum install python2-openstackclient
```

Create the OpenStack client environment scripts

Create the environment variable script for the admin user:

```
# vim admin-openrc

export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
```

Replace ADMIN_PASS with the password of the admin user, consistent with the password set in the keystone-manage bootstrap command above. Run the script to load the environment variables:

```
$ source admin-openrc
```

Run the following commands to create the domain, projects, users and roles.

Create the domain 'example':

```
$ openstack domain create --description "An Example Domain" example
```

Note: the domain 'default' was already created during keystone-manage bootstrap.

Create the project 'service':

```
$ openstack project create --domain default --description "Service Project" service
```

Create the (non-admin) project 'myproject', user 'myuser' and role 'myrole', and add the role 'myrole' to 'myproject' and 'myuser':

```
$ openstack project create --domain default --description "Demo Project" myproject
$ openstack user create --domain default --password-prompt myuser
$ openstack role create myrole
$ openstack role add --project myproject --user myuser myrole
```

Verification

Unset the temporary environment variables OS_AUTH_URL and OS_PASSWORD:

```
$ unset OS_AUTH_URL OS_PASSWORD
```

Request a token for the admin user:

```
$ openstack --os-auth-url http://controller:5000/v3 \
  --os-project-domain-name Default --os-user-domain-name Default \
  --os-project-name admin --os-username admin token issue
```

Request a token for the myuser user:

```
$ openstack --os-auth-url http://controller:5000/v3 \
  --os-project-domain-name Default --os-user-domain-name Default \
  --os-project-name myproject --os-username myuser token issue
```
### Glance Installation

Create the database, service credentials and API endpoints

Create the database: access the database as the root user, create the glance database and grant privileges:

```
$ mysql -u root -p

MariaDB [(none)]> CREATE DATABASE glance;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
IDENTIFIED BY 'GLANCE_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
IDENTIFIED BY 'GLANCE_DBPASS';
MariaDB [(none)]> exit
```

Replace GLANCE_DBPASS to set a password for the glance database.

```
$ source admin-openrc
```

Run the following commands to create the glance service credentials, create the glance user and add the 'admin' role to the 'glance' user:

```
$ openstack user create --domain default --password-prompt glance
$ openstack role add --project service --user glance admin
$ openstack service create --name glance --description "OpenStack Image" image
```

Create the image service API endpoints:

```
$ openstack endpoint create --region RegionOne image public http://controller:9292
$ openstack endpoint create --region RegionOne image internal http://controller:9292
$ openstack endpoint create --region RegionOne image admin http://controller:9292
```

Install and configure

Install the package:

```
$ yum install openstack-glance
```

Configure glance by editing the /etc/glance/glance-api.conf file:

- in the [database] section, configure the database entry;
- in the [keystone_authtoken] and [paste_deploy] sections, configure the identity service entry;
- in the [glance_store] section, configure the local file system store and the location of the image files.

```
[database]
# ...
connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance

[keystone_authtoken]
# ...
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = GLANCE_PASS

[paste_deploy]
# ...
flavor = keystone

[glance_store]
# ...
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
```

Edit the /etc/glance/glance-registry.conf file:

- in the [database] section, configure the database entry;
- in the [keystone_authtoken] and [paste_deploy] sections, configure the identity service entry.

```
[database]
# ...
connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance

[keystone_authtoken]
# ...
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = GLANCE_PASS

[paste_deploy]
# ...
flavor = keystone
```

Replace GLANCE_DBPASS with the password of the glance database and GLANCE_PASS with the password of the glance user.

Synchronize the database:

```
$ su -s /bin/sh -c "glance-manage db_sync" glance
```

Start the image services:

```
$ systemctl enable openstack-glance-api.service openstack-glance-registry.service
$ systemctl start openstack-glance-api.service openstack-glance-registry.service
```

Verification

Download an image:

```shell
$ source admin-openrc

# Note: if your environment uses the Kunpeng architecture, download the arm64 version of the image.
$ wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
```

Upload the image to the Image service:

```shell
$ glance image-create --name "cirros" --file cirros-0.4.0-x86_64-disk.img --disk-format qcow2 --container-format bare --visibility=public
```

Confirm that the image was uploaded and verify its attributes:

```shell
$ glance image-list
```
### Nova Installation

Create the database, service credentials and API endpoints

Create the databases: access the database as the root user, create the nova, nova_api, nova_cell0 and placement databases and grant privileges:

```
$ mysql -u root -p

MariaDB [(none)]> CREATE DATABASE nova_api;
MariaDB [(none)]> CREATE DATABASE nova;
MariaDB [(none)]> CREATE DATABASE nova_cell0;
MariaDB [(none)]> CREATE DATABASE placement;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' \
IDENTIFIED BY 'PLACEMENT_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' \
IDENTIFIED BY 'PLACEMENT_DBPASS';
MariaDB [(none)]> exit
```

Replace NOVA_DBPASS and PLACEMENT_DBPASS to set passwords for the nova and placement databases.

Run the following commands to create the nova service credentials, create the nova user and add the 'admin' role to the 'nova' user:

```
$ . admin-openrc
$ openstack user create --domain default --password-prompt nova
$ openstack role add --project service --user nova admin
$ openstack service create --name nova --description "OpenStack Compute" compute
```

Create the compute service API endpoints:

```
$ openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1
$ openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1
$ openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1
```

Create the placement user and add the 'admin' role to the 'placement' user:

```
$ openstack user create --domain default --password-prompt placement
$ openstack role add --project service --user placement admin
```

Create the placement service credentials and the API endpoints:

```
$ openstack service create --name placement --description "Placement API" placement
$ openstack endpoint create --region RegionOne placement public http://controller:8778
$ openstack endpoint create --region RegionOne placement internal http://controller:8778
$ openstack endpoint create --region RegionOne placement admin http://controller:8778
```

Install and configure

Install the packages:

```
$ yum install openstack-nova-api openstack-nova-conductor \
  openstack-nova-novncproxy openstack-nova-scheduler openstack-nova-compute \
  openstack-nova-placement-api openstack-nova-console
```

Configure nova by editing the /etc/nova/nova.conf file:

- in the [DEFAULT] section, enable the compute and metadata APIs, configure the RabbitMQ message queue entry, set my_ip, and enable the neutron network service;
- in the [api_database], [database] and [placement_database] sections, configure the database entries;
- in the [api] and [keystone_authtoken] sections, configure the identity service entry;
- in the [vnc] section, enable and configure the remote console entry;
- in the [glance] section, configure the image service API address;
- in the [oslo_concurrency] section, configure the lock path;
- in the [placement] section, configure the placement service entry.

```
[DEFAULT]
# ...
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
my_ip = 10.0.0.11
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver
compute_driver = libvirt.LibvirtDriver
instances_path = /var/lib/nova/instances/

[api_database]
# ...
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api

[database]
# ...
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova

[placement_database]
# ...
connection = mysql+pymysql://placement:PLACEMENT_DBPASS@controller/placement

[api]
# ...
auth_strategy = keystone

[keystone_authtoken]
# ...
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = NOVA_PASS

[vnc]
enabled = true
# ...
server_listen = $my_ip
server_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html

[glance]
# ...
api_servers = http://controller:9292

[oslo_concurrency]
# ...
lock_path = /var/lib/nova/tmp

[placement]
# ...
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = PLACEMENT_PASS

[neutron]
# ...
auth_url = http://controller:5000
auth_type = password
project_domain_name = Default
user_domain_name = Default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
```

Replace RABBIT_PASS with the password of the openstack account in RabbitMQ; set my_ip to the management IP address of the controller node; replace NOVA_DBPASS with the password of the nova database; replace PLACEMENT_DBPASS with the password of the placement database; replace NOVA_PASS with the password of the nova user; replace PLACEMENT_PASS with the password of the placement user; replace NEUTRON_PASS with the password of the neutron user.

Edit /etc/httpd/conf.d/00-nova-placement-api.conf to add the Placement API access configuration:

```
<Directory /usr/bin>
   <IfVersion >= 2.4>
      Require all granted
   </IfVersion>
   <IfVersion < 2.4>
      Order allow,deny
      Allow from all
   </IfVersion>
</Directory>
```

Restart the httpd service:

```
$ systemctl restart httpd
```

Synchronize the nova-api database:

```
$ su -s /bin/sh -c "nova-manage api_db sync" nova
```

Register the cell0 database:

```
$ su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
```

Create the cell1 cell:

```
$ su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
```

Synchronize the nova database:

```
$ su -s /bin/sh -c "nova-manage db sync" nova
```

Verify that cell0 and cell1 are registered correctly:

```
su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova
```

Check whether the host supports virtual machine hardware acceleration (x86 architecture):

```
$ egrep -c '(vmx|svm)' /proc/cpuinfo
```

If the command returns 0, hardware acceleration is not supported and libvirt must be configured to use QEMU instead of KVM.

Note: on an ARM64 server, you also need to set cpu_mode to custom and cpu_model to cortex-a72:
virt_type = qemu cpu_mode = custom cpu_model = cortex-a72 \u5982\u679c\u8fd4\u56de\u503c\u4e3a1\u6216\u66f4\u5927\u7684\u503c\uff0c\u5219\u652f\u6301\u786c\u4ef6\u52a0\u901f\uff0c\u4e0d\u9700\u8981\u8fdb\u884c\u989d\u5916\u7684\u914d\u7f6e \u6ce8\u610f \u5982\u679c\u4e3aarm64\u7ed3\u6784\uff0c\u8fd8\u9700\u8981\u5728 compute \u8282\u70b9\u6267\u884c\u4ee5\u4e0b\u547d\u4ee4 mkdir -p /usr/share/AAVMF ln -s /usr/share/edk2/aarch64/QEMU_EFI-pflash.raw \\ /usr/share/AAVMF/AAVMF_CODE.fd ln -s /usr/share/edk2/aarch64/vars-template-pflash.raw \\ /usr/share/AAVMF/AAVMF_VARS.fd chown nova:nova /usr/share/AAVMF -R vim /etc/libvirt/qemu.conf nvram = [\"/usr/share/AAVMF/AAVMF_CODE.fd:/usr/share/AAVMF/AAVMF_VARS.fd\", \"/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw:/usr/share/edk2/aarch64/vars-template-pflash.raw\" ] \u542f\u52a8\u8ba1\u7b97\u670d\u52a1\u53ca\u5176\u4f9d\u8d56\u9879\uff0c\u5e76\u914d\u7f6e\u5176\u5f00\u673a\u542f\u52a8\uff1a $ systemctl enable \\ openstack-nova-api.service \\ openstack-nova-scheduler.service \\ openstack-nova-conductor.service \\ openstack-nova-novncproxy.service $ systemctl start \\ openstack-nova-api.service \\ openstack-nova-scheduler.service \\ openstack-nova-conductor.service \\ openstack-nova-novncproxy.service $ systemctl enable libvirtd.service openstack-nova-compute.service $ systemctl start libvirtd.service openstack-nova-compute.service \u6dfb\u52a0\u8ba1\u7b97\u8282\u70b9\u5230cell\u6570\u636e\u5e93\uff1a \u786e\u8ba4\u8ba1\u7b97\u8282\u70b9\u5b58\u5728\uff1a $ . admin-openrc $ openstack compute service list --service nova-compute \u6ce8\u518c\u8ba1\u7b97\u8282\u70b9\uff1a $ su -s /bin/sh -c \"nova-manage cell_v2 discover_hosts --verbose\" nova \u9a8c\u8bc1 $ . admin-openrc \u5217\u51fa\u670d\u52a1\u7ec4\u4ef6\uff0c\u9a8c\u8bc1\u6bcf\u4e2a\u6d41\u7a0b\u90fd\u6210\u529f\u542f\u52a8\u548c\u6ce8\u518c\uff1a $ openstack compute service list \u5217\u51fa\u8eab\u4efd\u670d\u52a1\u4e2d\u7684API\u7aef\u70b9\uff0c\u9a8c\u8bc1\u4e0e\u8eab\u4efd\u670d\u52a1\u7684\u8fde\u63a5\uff1a $ openstack catalog list \u5217\u51fa\u955c\u50cf\u670d\u52a1\u4e2d\u7684\u955c\u50cf\uff0c\u9a8c\u8bc1\u4e0e\u955c\u50cf\u670d\u52a1\u7684\u8fde\u63a5\uff1a $ openstack image list \u68c0\u67e5cells\u548cplacement API\u662f\u5426\u8fd0\u4f5c\u6210\u529f\uff0c\u4ee5\u53ca\u5176\u4ed6\u5fc5\u8981\u6761\u4ef6\u662f\u5426\u5df2\u5177\u5907\u3002 $ nova-status upgrade check Neutron \u5b89\u88c5 \u00b6 \u521b\u5efa\u6570\u636e\u5e93\u3001\u670d\u52a1\u51ed\u8bc1\u548c API \u7aef\u70b9 \u521b\u5efa\u6570\u636e\u5e93\uff1a \u4f5c\u4e3aroot\u7528\u6237\u8bbf\u95ee\u6570\u636e\u5e93\uff0c\u521b\u5efa neutron \u6570\u636e\u5e93\u5e76\u6388\u6743\u3002 $ mysql -u root -p MariaDB [(none)]> CREATE DATABASE neutron; MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \\ IDENTIFIED BY 'NEUTRON_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \\ IDENTIFIED BY 'NEUTRON_DBPASS'; MariaDB [(none)]> exit \u66ff\u6362NEUTRON_DBPASS\uff0c\u4e3aneutron\u6570\u636e\u5e93\u8bbe\u7f6e\u5bc6\u7801\u3002 $ . 
admin-openrc \u6267\u884c\u5982\u4e0b\u547d\u4ee4\uff0c\u5b8c\u6210\u521b\u5efa neutron \u670d\u52a1\u51ed\u8bc1\u3001\u521b\u5efaneutron\u7528\u6237\u548c\u6dfb\u52a0\u2018admin\u2019\u89d2\u8272\u5230\u2018neutron\u2019\u7528\u6237\u64cd\u4f5c\u3002 \u521b\u5efaneutron\u670d\u52a1 $ openstack user create --domain default --password-prompt neutron $ openstack role add --project service --user neutron admin $ openstack service create --name neutron --description \"OpenStack Networking\" network \u521b\u5efa\u7f51\u7edc\u670d\u52a1API\u7aef\u70b9\uff1a $ openstack endpoint create --region RegionOne network public http://controller:9696 $ openstack endpoint create --region RegionOne network internal http://controller:9696 $ openstack endpoint create --region RegionOne network admin http://controller:9696 \u5b89\u88c5\u548c\u914d\u7f6e Self-service \u7f51\u7edc \u5b89\u88c5\u8f6f\u4ef6\u5305\uff1a $ yum install openstack-neutron openstack-neutron-ml2 \\ openstack-neutron-linuxbridge ebtables ipset \u914d\u7f6eneutron\uff1a \u7f16\u8f91 /etc/neutron/neutron.conf \u6587\u4ef6\uff1a \u5728[database]\u90e8\u5206\uff0c\u914d\u7f6e\u6570\u636e\u5e93\u5165\u53e3\uff1b \u5728[default]\u90e8\u5206\uff0c\u542f\u7528ml2\u63d2\u4ef6\u548crouter\u63d2\u4ef6\uff0c\u5141\u8bb8ip\u5730\u5740\u91cd\u53e0\uff0c\u914d\u7f6eRabbitMQ\u6d88\u606f\u961f\u5217\u5165\u53e3\uff1b \u5728[default] [keystone]\u90e8\u5206\uff0c\u914d\u7f6e\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u5165\u53e3\uff1b \u5728[default] [nova]\u90e8\u5206\uff0c\u914d\u7f6e\u7f51\u7edc\u6765\u901a\u77e5\u8ba1\u7b97\u7f51\u7edc\u62d3\u6251\u7684\u53d8\u5316\uff1b \u5728[oslo_concurrency]\u90e8\u5206\uff0c\u914d\u7f6elock path\u3002 [database] # ... connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron [DEFAULT] # ... core_plugin = ml2 service_plugins = router allow_overlapping_ips = true transport_url = rabbit://openstack:RABBIT_PASS@controller auth_strategy = keystone notify_nova_on_port_status_changes = true notify_nova_on_port_data_changes = true [keystone_authtoken] # ... www_authenticate_uri = http://controller:5000 auth_url = http://controller:5000 memcached_servers = controller:11211 auth_type = password project_domain_name = Default user_domain_name = Default project_name = service username = neutron password = NEUTRON_PASS [nova] # ... auth_url = http://controller:5000 auth_type = password project_domain_name = Default user_domain_name = Default region_name = RegionOne project_name = service username = nova password = NOVA_PASS [oslo_concurrency] # ... 
lock_path = /var/lib/neutron/tmp \u66ff\u6362NEUTRON_DBPASS\u4e3aneutron\u6570\u636e\u5e93\u7684\u5bc6\u7801\uff1b \u66ff\u6362RABBIT_PASS\u4e3aRabbitMQ\u4e2dopenstack\u8d26\u6237\u7684\u5bc6\u7801\uff1b \u66ff\u6362NEUTRON_PASS\u4e3aneutron\u7528\u6237\u7684\u5bc6\u7801\uff1b \u66ff\u6362NOVA_PASS\u4e3anova\u7528\u6237\u7684\u5bc6\u7801\u3002 \u914d\u7f6eML2\u63d2\u4ef6\uff1a \u7f16\u8f91 /etc/neutron/plugins/ml2/ml2_conf.ini \u6587\u4ef6\uff1a \u5728[ml2]\u90e8\u5206\uff0c\u542f\u7528 flat\u3001vlan\u3001vxlan \u7f51\u7edc\uff0c\u542f\u7528\u7f51\u6865\u53ca layer-2 population \u673a\u5236\uff0c\u542f\u7528\u7aef\u53e3\u5b89\u5168\u6269\u5c55\u9a71\u52a8\uff1b \u5728[ml2_type_flat]\u90e8\u5206\uff0c\u914d\u7f6e flat \u7f51\u7edc\u4e3a provider \u865a\u62df\u7f51\u7edc\uff1b \u5728[ml2_type_vxlan]\u90e8\u5206\uff0c\u914d\u7f6e VXLAN \u7f51\u7edc\u6807\u8bc6\u7b26\u8303\u56f4\uff1b \u5728[securitygroup]\u90e8\u5206\uff0c\u914d\u7f6e\u5141\u8bb8 ipset\u3002 # vim /etc/neutron/plugins/ml2/ml2_conf.ini [ml2] # ... type_drivers = flat,vlan,vxlan tenant_network_types = vxlan mechanism_drivers = linuxbridge,l2population extension_drivers = port_security [ml2_type_flat] # ... flat_networks = provider [ml2_type_vxlan] # ... vni_ranges = 1:1000 [securitygroup] # ... enable_ipset = true \u914d\u7f6e Linux bridge \u4ee3\u7406\uff1a \u7f16\u8f91 /etc/neutron/plugins/ml2/linuxbridge_agent.ini \u6587\u4ef6\uff1a \u5728[linux_bridge]\u90e8\u5206\uff0c\u6620\u5c04 provider \u865a\u62df\u7f51\u7edc\u5230\u7269\u7406\u7f51\u7edc\u63a5\u53e3\uff1b \u5728[vxlan]\u90e8\u5206\uff0c\u542f\u7528 vxlan \u8986\u76d6\u7f51\u7edc\uff0c\u914d\u7f6e\u5904\u7406\u8986\u76d6\u7f51\u7edc\u7684\u7269\u7406\u7f51\u7edc\u63a5\u53e3 IP \u5730\u5740\uff0c\u542f\u7528 layer-2 population\uff1b \u5728[securitygroup]\u90e8\u5206\uff0c\u5141\u8bb8\u5b89\u5168\u7ec4\uff0c\u914d\u7f6e linux bridge iptables \u9632\u706b\u5899\u9a71\u52a8\u3002 [linux_bridge] physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME [vxlan] enable_vxlan = true local_ip = OVERLAY_INTERFACE_IP_ADDRESS l2_population = true [securitygroup] # ... enable_security_group = true firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver \u66ff\u6362PROVIDER_INTERFACE_NAME\u4e3a\u7269\u7406\u7f51\u7edc\u63a5\u53e3\uff1b \u66ff\u6362OVERLAY_INTERFACE_IP_ADDRESS\u4e3a\u63a7\u5236\u8282\u70b9\u7684\u7ba1\u7406IP\u5730\u5740\u3002 \u914d\u7f6eLayer-3\u4ee3\u7406\uff1a \u7f16\u8f91 /etc/neutron/l3_agent.ini \u6587\u4ef6\uff1a \u5728[default]\u90e8\u5206\uff0c\u914d\u7f6e\u63a5\u53e3\u9a71\u52a8\u4e3alinuxbridge [DEFAULT] # ... interface_driver = linuxbridge \u914d\u7f6eDHCP\u4ee3\u7406\uff1a \u7f16\u8f91 /etc/neutron/dhcp_agent.ini \u6587\u4ef6\uff1a \u5728[default]\u90e8\u5206\uff0c\u914d\u7f6elinuxbridge\u63a5\u53e3\u9a71\u52a8\u3001Dnsmasq DHCP\u9a71\u52a8\uff0c\u542f\u7528\u9694\u79bb\u7684\u5143\u6570\u636e\u3002 [DEFAULT] # ... interface_driver = linuxbridge dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq enable_isolated_metadata = true \u914d\u7f6emetadata\u4ee3\u7406\uff1a \u7f16\u8f91 /etc/neutron/metadata_agent.ini \u6587\u4ef6\uff1a \u5728[default]\u90e8\u5206\uff0c\u914d\u7f6e\u5143\u6570\u636e\u4e3b\u673a\u548cshared secret\u3002 [DEFAULT] # ... 
nova_metadata_host = controller metadata_proxy_shared_secret = METADATA_SECRET \u66ff\u6362METADATA_SECRET\u4e3a\u5408\u9002\u7684\u5143\u6570\u636e\u4ee3\u7406secret\u3002 \u914d\u7f6e\u8ba1\u7b97\u670d\u52a1 \u7f16\u8f91 /etc/nova/nova.conf \u6587\u4ef6\uff1a \u5728[neutron]\u90e8\u5206\uff0c\u914d\u7f6e\u8bbf\u95ee\u53c2\u6570\uff0c\u542f\u7528\u5143\u6570\u636e\u4ee3\u7406\uff0c\u914d\u7f6esecret\u3002 [neutron] # ... auth_url = http://controller:5000 auth_type = password project_domain_name = Default user_domain_name = Default region_name = RegionOne project_name = service username = neutron password = NEUTRON_PASS service_metadata_proxy = true metadata_proxy_shared_secret = METADATA_SECRET \u66ff\u6362NEUTRON_PASS\u4e3aneutron\u7528\u6237\u7684\u5bc6\u7801\uff1b \u66ff\u6362METADATA_SECRET\u4e3a\u5408\u9002\u7684\u5143\u6570\u636e\u4ee3\u7406secret\u3002 \u5b8c\u6210\u5b89\u88c5 \u6dfb\u52a0\u914d\u7f6e\u6587\u4ef6\u94fe\u63a5\uff1a $ ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini \u540c\u6b65\u6570\u636e\u5e93\uff1a $ su -s /bin/sh -c \"neutron-db-manage --config-file /etc/neutron/neutron.conf \\ --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head\" neutron \u91cd\u542f\u8ba1\u7b97API\u670d\u52a1\uff1a $ systemctl restart openstack-nova-api.service \u542f\u52a8\u7f51\u7edc\u670d\u52a1\u5e76\u914d\u7f6e\u5f00\u673a\u542f\u52a8\uff1a $ systemctl enable neutron-server.service \\ neutron-linuxbridge-agent.service neutron-dhcp-agent.service \\ neutron-metadata-agent.service $ systemctl start neutron-server.service \\ neutron-linuxbridge-agent.service neutron-dhcp-agent.service \\ neutron-metadata-agent.service $ systemctl enable neutron-l3-agent.service $ systemctl start neutron-l3-agent.service \u9a8c\u8bc1 \u5217\u51fa\u4ee3\u7406\u9a8c\u8bc1 neutron \u4ee3\u7406\u542f\u52a8\u6210\u529f\uff1a $ openstack network agent list Cinder \u5b89\u88c5 \u00b6 \u521b\u5efa\u6570\u636e\u5e93\u3001\u670d\u52a1\u51ed\u8bc1\u548c API \u7aef\u70b9 \u521b\u5efa\u6570\u636e\u5e93\uff1a \u4f5c\u4e3aroot\u7528\u6237\u8bbf\u95ee\u6570\u636e\u5e93\uff0c\u521b\u5efacinder\u6570\u636e\u5e93\u5e76\u6388\u6743\u3002 $ mysql -u root -p MariaDB [(none)]> CREATE DATABASE cinder; MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \\ IDENTIFIED BY 'CINDER_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \\ IDENTIFIED BY 'CINDER_DBPASS'; MariaDB [(none)]> exit \u66ff\u6362CINDER_DBPASS\uff0c\u4e3acinder\u6570\u636e\u5e93\u8bbe\u7f6e\u5bc6\u7801\u3002 $ source admin-openrc \u521b\u5efacinder\u670d\u52a1\u51ed\u8bc1\uff1a \u521b\u5efacinder\u7528\u6237 \u6dfb\u52a0\u2018admin\u2019\u89d2\u8272\u5230\u7528\u6237\u2018cinder\u2019 \u521b\u5efacinderv2\u548ccinderv3\u670d\u52a1 $ openstack user create --domain default --password-prompt cinder $ openstack role add --project service --user cinder admin $ openstack service create --name cinderv2 --description \"OpenStack Block Storage\" volumev2 $ openstack service create --name cinderv3 --description \"OpenStack Block Storage\" volumev3 \u521b\u5efa\u5757\u5b58\u50a8\u670d\u52a1API\u7aef\u70b9\uff1a $ openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\\(project_id\\)s $ openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\\(project_id\\)s $ openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\\(project_id\\)s $ openstack endpoint create --region RegionOne volumev3 public 
http://controller:8776/v3/%\\(project_id\\)s $ openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\\(project_id\\)s $ openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\\(project_id\\)s \u5b89\u88c5\u548c\u914d\u7f6e\u63a7\u5236\u8282\u70b9 \u5b89\u88c5\u8f6f\u4ef6\u5305\uff1a $ yum install openstack-cinder \u914d\u7f6ecinder\uff1a \u7f16\u8f91 /etc/cinder/cinder.conf \u6587\u4ef6\uff1a \u5728[database]\u90e8\u5206\uff0c\u914d\u7f6e\u6570\u636e\u5e93\u5165\u53e3\uff1b \u5728[DEFAULT]\u90e8\u5206\uff0c\u914d\u7f6eRabbitMQ\u6d88\u606f\u961f\u5217\u5165\u53e3\uff0c\u914d\u7f6emy_ip\uff1b \u5728[DEFAULT] [keystone_authtoken]\u90e8\u5206\uff0c\u914d\u7f6e\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u5165\u53e3\uff1b \u5728[oslo_concurrency]\u90e8\u5206\uff0c\u914d\u7f6elock path\u3002 [database] # ... connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder [DEFAULT] # ... transport_url = rabbit://openstack:RABBIT_PASS@controller auth_strategy = keystone my_ip = 10.0.0.11 [keystone_authtoken] # ... www_authenticate_uri = http://controller:5000 auth_url = http://controller:5000 memcached_servers = controller:11211 auth_type = password project_domain_name = Default user_domain_name = Default project_name = service username = cinder password = CINDER_PASS [oslo_concurrency] # ... lock_path = /var/lib/cinder/tmp \u66ff\u6362CINDER_DBPASS\u4e3acinder\u6570\u636e\u5e93\u7684\u5bc6\u7801\uff1b \u66ff\u6362RABBIT_PASS\u4e3aRabbitMQ\u4e2dopenstack\u8d26\u6237\u7684\u5bc6\u7801\uff1b \u914d\u7f6emy_ip\u4e3a\u63a7\u5236\u8282\u70b9\u7684\u7ba1\u7406IP\u5730\u5740\uff1b \u66ff\u6362CINDER_PASS\u4e3acinder\u7528\u6237\u7684\u5bc6\u7801\uff1b \u540c\u6b65\u6570\u636e\u5e93\uff1a $ su -s /bin/sh -c \"cinder-manage db sync\" cinder \u914d\u7f6e\u8ba1\u7b97\u4f7f\u7528\u5757\u5b58\u50a8\uff1a \u7f16\u8f91 /etc/nova/nova.conf \u6587\u4ef6\u3002 [cinder] os_region_name = RegionOne \u5b8c\u6210\u5b89\u88c5\uff1a \u91cd\u542f\u8ba1\u7b97API\u670d\u52a1 $ systemctl restart openstack-nova-api.service \u542f\u52a8\u5757\u5b58\u50a8\u670d\u52a1 $ systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service $ systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service \u5b89\u88c5\u548c\u914d\u7f6e\u5b58\u50a8\u8282\u70b9\uff08LVM\uff09 \u5b89\u88c5\u8f6f\u4ef6\u5305\uff1a $ yum install lvm2 device-mapper-persistent-data scsi-target-utils python2-keystone \\ openstack-cinder-volume \u521b\u5efaLVM\u7269\u7406\u5377 /dev/sdb\uff1a $ pvcreate /dev/sdb \u521b\u5efaLVM\u5377\u7ec4 cinder-volumes\uff1a $ vgcreate cinder-volumes /dev/sdb \u7f16\u8f91 /etc/lvm/lvm.conf \u6587\u4ef6\uff1a \u5728devices\u90e8\u5206\uff0c\u6dfb\u52a0\u8fc7\u6ee4\u4ee5\u63a5\u53d7/dev/sdb\u8bbe\u5907\u62d2\u7edd\u5176\u4ed6\u8bbe\u5907\u3002 devices { # ... filter = [ \"a/sdb/\", \"r/.*/\"] \u7f16\u8f91 /etc/cinder/cinder.conf \u6587\u4ef6\uff1a \u5728[lvm]\u90e8\u5206\uff0c\u4f7f\u7528LVM\u9a71\u52a8\u3001cinder-volumes\u5377\u7ec4\u3001iSCSI\u534f\u8bae\u548c\u9002\u5f53\u7684iSCSI\u670d\u52a1\u914d\u7f6eLVM\u540e\u7aef\u3002 \u5728[DEFAULT]\u90e8\u5206\uff0c\u542f\u7528LVM\u540e\u7aef\uff0c\u914d\u7f6e\u955c\u50cf\u670d\u52a1API\u7684\u4f4d\u7f6e\u3002 [lvm] volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver volume_group = cinder-volumes target_protocol = iscsi target_helper = lioadm [DEFAULT] # ... 
enabled_backends = lvm glance_api_servers = http://controller:9292 \u6ce8\u610f \u5f53cinder\u4f7f\u7528tgtadm\u7684\u65b9\u5f0f\u6302\u5377\u7684\u65f6\u5019\uff0c\u8981\u4fee\u6539/etc/tgt/tgtd.conf\uff0c\u5185\u5bb9\u5982\u4e0b\uff0c\u4fdd\u8bc1tgtd\u53ef\u4ee5\u53d1\u73b0cinder-volume\u7684iscsi target\u3002 include /var/lib/cinder/volumes/* \u5b8c\u6210\u5b89\u88c5\uff1a $ systemctl enable openstack-cinder-volume.service tgtd.service iscsid.service $ systemctl start openstack-cinder-volume.service tgtd.service iscsid.service \u5b89\u88c5\u548c\u914d\u7f6e\u5b58\u50a8\u8282\u70b9\uff08ceph RBD\uff09 \u5b89\u88c5\u8f6f\u4ef6\u5305\uff1a $ yum install ceph-common python2-rados python2-rbd python2-keystone openstack-cinder-volume \u5728[DEFAULT]\u90e8\u5206\uff0c\u542f\u7528LVM\u540e\u7aef\uff0c\u914d\u7f6e\u955c\u50cf\u670d\u52a1API\u7684\u4f4d\u7f6e\u3002 [DEFAULT] enabled_backends = ceph-rbd \u6dfb\u52a0ceph rbd\u914d\u7f6e\u90e8\u5206\uff0c\u914d\u7f6e\u5757\u547d\u540d\u4e0eenabled_backends\u4e2d\u4fdd\u6301\u4e00\u81f4 [ceph-rbd] glance_api_version = 2 rados_connect_timeout = -1 rbd_ceph_conf = /etc/ceph/ceph.conf rbd_flatten_volume_from_snapshot = False rbd_max_clone_depth = 5 rbd_pool = # RBD\u5b58\u50a8\u6c60\u540d\u79f0 rbd_secret_uuid = # \u968f\u673a\u751f\u6210SECRET UUID rbd_store_chunk_size = 4 rbd_user = volume_backend_name = ceph-rbd volume_driver = cinder.volume.drivers.rbd.RBDDriver \u914d\u7f6e\u5b58\u50a8\u8282\u70b9ceph\u5ba2\u6237\u7aef\uff0c\u9700\u8981\u4fdd\u8bc1/etc/ceph/\u76ee\u5f55\u4e2d\u5305\u542bceph\u96c6\u7fa4\u8bbf\u95ee\u914d\u7f6e\uff0c\u5305\u62ecceph.conf\u4ee5\u53cakeyring [root@openeuler ~]# ll /etc/ceph -rw-r--r-- 1 root root 82 Jun 16 17:11 ceph.client..keyring -rw-r--r-- 1 root root 1.5K Jun 16 17:11 ceph.conf -rw-r--r-- 1 root root 92 Jun 16 17:11 rbdmap \u5728\u5b58\u50a8\u8282\u70b9\u68c0\u67e5ceph\u96c6\u7fa4\u662f\u5426\u6b63\u5e38\u53ef\u8bbf\u95ee [root@openeuler ~]# ceph --user cinder -s cluster: id: b7b2fac6-420f-4ec1-aea2-4862d29b4059 health: HEALTH_OK services: mon: 3 daemons, quorum VIRT01,VIRT02,VIRT03 mgr: VIRT03(active), standbys: VIRT02, VIRT01 mds: cephfs_virt-1/1/1 up {0=VIRT03=up:active}, 2 up:standby osd: 15 osds: 15 up, 15 in data: pools: 7 pools, 1416 pgs objects: 5.41M objects, 19.8TiB usage: 49.3TiB used, 59.9TiB / 109TiB avail pgs: 1414 active io: client: 2.73MiB/s rd, 22.4MiB/s wr, 3.21kop/s rd, 1.19kop/s wr \u542f\u52a8\u670d\u52a1 $ systemctl enable openstack-cinder-volume.service $ systemctl start openstack-cinder-volume.service \u5b89\u88c5\u548c\u914d\u7f6e\u5907\u4efd\u670d\u52a1 \u7f16\u8f91 /etc/cinder/cinder.conf \u6587\u4ef6\uff1a \u5728[DEFAULT]\u90e8\u5206\uff0c\u914d\u7f6e\u5907\u4efd\u9009\u9879 [DEFAULT] # ... 
# \u6ce8\u610f: openEuler 21.03\u4e2d\u6ca1\u6709\u63d0\u4f9bOpenStack Swift\u8f6f\u4ef6\u5305\uff0c\u9700\u8981\u7528\u6237\u81ea\u884c\u5b89\u88c5\u3002\u6216\u8005\u4f7f\u7528\u5176\u4ed6\u7684\u5907\u4efd\u540e\u7aef\uff0c\u4f8b\u5982\uff0cNFS\u3002NFS\u5df2\u7ecf\u8fc7\u6d4b\u8bd5\u9a8c\u8bc1\uff0c\u53ef\u4ee5\u6b63\u5e38\u4f7f\u7528\u3002 backup_driver = cinder.backup.drivers.swift.SwiftBackupDriver backup_swift_url = SWIFT_URL \u66ff\u6362SWIFT_URL\u4e3a\u5bf9\u8c61\u5b58\u50a8\u670d\u52a1\u7684URL\uff0c\u8be5URL\u53ef\u4ee5\u901a\u8fc7\u5bf9\u8c61\u5b58\u50a8API\u7aef\u70b9\u627e\u5230\uff1a $ openstack catalog show object-store \u5b8c\u6210\u5b89\u88c5\uff1a $ systemctl enable openstack-cinder-backup.service $ systemctl start openstack-cinder-backup.service \u9a8c\u8bc1 \u5217\u51fa\u670d\u52a1\u7ec4\u4ef6\u9a8c\u8bc1\u6bcf\u4e2a\u6b65\u9aa4\u6210\u529f\uff1a $ source admin-openrc $ openstack volume service list \u6ce8\uff1a\u76ee\u524d\u6682\u672a\u5bf9swift\u7ec4\u4ef6\u8fdb\u884c\u652f\u6301\uff0c\u6709\u6761\u4ef6\u7684\u540c\u5b66\u53ef\u4ee5\u914d\u7f6e\u5bf9\u63a5ceph\u3002 Horizon \u5b89\u88c5 \u00b6 \u5b89\u88c5\u8f6f\u4ef6\u5305 $ yum install openstack-dashboard 2. \u4fee\u6539\u6587\u4ef6 /usr/share/openstack-dashboard/openstack_dashboard/local/local_settings.py \u4fee\u6539\u53d8\u91cf ALLOWED_HOSTS = ['*', ] OPENSTACK_HOST = \"controller\" OPENSTACK_KEYSTONE_URL = \"http://%s:5000/v3\" % OPENSTACK_HOST OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True SESSION_ENGINE = 'django.contrib.sessions.backends.cache' CACHES = { 'default': { 'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache', 'LOCATION': 'controller:11211', } } \u65b0\u589e\u53d8\u91cf OPENSTACK_API_VERSIONS = { \"identity\": 3, \"image\": 2, \"volume\": 3, } WEBROOT = \"/dashboard/\" COMPRESS_OFFLINE = True OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = \"default\" OPENSTACK_KEYSTONE_DEFAULT_ROLE = \"admin\" LOGIN_URL = '/dashboard/auth/login/' LOGOUT_URL = '/dashboard/auth/logout/' 3. \u4fee\u6539\u6587\u4ef6/etc/httpd/conf.d/openstack-dashboard.conf WSGIDaemonProcess dashboard WSGIProcessGroup dashboard WSGISocketPrefix run/wsgi WSGIApplicationGroup %{GLOBAL} WSGIScriptAlias /dashboard /usr/share/openstack-dashboard/openstack_dashboard/wsgi/django.wsgi Alias /dashboard/static /usr/share/openstack-dashboard/static Options All AllowOverride All Require all granted Options All AllowOverride All Require all granted 4. \u5728/usr/share/openstack-dashboard\u76ee\u5f55\u4e0b\u6267\u884c $ ./manage.py compress 5. \u91cd\u542f httpd \u670d\u52a1 $ systemctl restart httpd 5. \u9a8c\u8bc1 \u6253\u5f00\u6d4f\u89c8\u5668\uff0c\u8f93\u5165\u7f51\u5740http:// \uff0c\u767b\u5f55 horizon\u3002 Tempest \u5b89\u88c5 \u00b6 Tempest\u662fOpenStack\u7684\u96c6\u6210\u6d4b\u8bd5\u670d\u52a1\uff0c\u5982\u679c\u7528\u6237\u9700\u8981\u5168\u9762\u81ea\u52a8\u5316\u6d4b\u8bd5\u5df2\u5b89\u88c5\u7684OpenStack\u73af\u5883\u7684\u529f\u80fd,\u5219\u63a8\u8350\u4f7f\u7528\u8be5\u7ec4\u4ef6\u3002\u5426\u5219\uff0c\u53ef\u4ee5\u4e0d\u7528\u5b89\u88c5 \u5b89\u88c5Tempest $ yum install openstack-tempest \u521d\u59cb\u5316\u76ee\u5f55 $ tempest init mytest 3. 
\u4fee\u6539\u914d\u7f6e\u6587\u4ef6\u3002 $ cd mytest $ vi etc/tempest.conf tempest.conf\u4e2d\u9700\u8981\u914d\u7f6e\u5f53\u524dOpenStack\u73af\u5883\u7684\u4fe1\u606f\uff0c\u5177\u4f53\u5185\u5bb9\u53ef\u4ee5\u53c2\u8003 \u5b98\u65b9\u793a\u4f8b \u6267\u884c\u6d4b\u8bd5 $ tempest run Ironic \u5b89\u88c5 \u00b6 Ironic\u662fOpenStack\u7684\u88f8\u91d1\u5c5e\u670d\u52a1\uff0c\u5982\u679c\u7528\u6237\u9700\u8981\u8fdb\u884c\u88f8\u673a\u90e8\u7f72\u5219\u63a8\u8350\u4f7f\u7528\u8be5\u7ec4\u4ef6\u3002\u5426\u5219\uff0c\u53ef\u4ee5\u4e0d\u7528\u5b89\u88c5\u3002 \u8bbe\u7f6e\u6570\u636e\u5e93 \u88f8\u91d1\u5c5e\u670d\u52a1\u5728\u6570\u636e\u5e93\u4e2d\u5b58\u50a8\u4fe1\u606f\uff0c\u521b\u5efa\u4e00\u4e2a ironic \u7528\u6237\u53ef\u4ee5\u8bbf\u95ee\u7684 ironic \u6570\u636e\u5e93\uff0c\u66ff\u6362 IRONIC_DBPASSWORD \u4e3a\u5408\u9002\u7684\u5bc6\u7801 $ mysql -u root -p MariaDB [(none)]> CREATE DATABASE ironic CHARACTER SET utf8; MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'localhost' \\ IDENTIFIED BY 'IRONIC_DBPASSWORD'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'%' \\ IDENTIFIED BY 'IRONIC_DBPASSWORD'; \u5b89\u88c5\u8f6f\u4ef6\u5305 yum install openstack-ironic-api openstack-ironic-conductor python2-ironicclient \u542f\u52a8\u670d\u52a1 systemctl enable openstack-ironic-api openstack-ironic-conductor systemctl start openstack-ironic-api openstack-ironic-conductor \u7ec4\u4ef6\u5b89\u88c5\u4e0e\u914d\u7f6e ##### \u521b\u5efa\u670d\u52a1\u7528\u6237\u8ba4\u8bc1 1\u3001\u521b\u5efaBare Metal\u670d\u52a1\u7528\u6237 $ openstack user create --password IRONIC_PASSWORD \\ --email ironic@example.com ironic $ openstack role add --project service --user ironic admin $ openstack service create --name ironic --description \\ \"Ironic baremetal provisioning service\" baremetal 2\u3001\u521b\u5efaBare Metal\u670d\u52a1\u8bbf\u95ee\u5165\u53e3 $ openstack endpoint create --region RegionOne baremetal admin http://$IRONIC_NODE:6385 $ openstack endpoint create --region RegionOne baremetal public http://$IRONIC_NODE:6385 $ openstack endpoint create --region RegionOne baremetal internal http://$IRONIC_NODE:6385 ##### \u914d\u7f6eironic-api\u670d\u52a1 \u914d\u7f6e\u6587\u4ef6\u8def\u5f84/etc/ironic/ironic.conf 1\u3001\u901a\u8fc7 connection \u9009\u9879\u914d\u7f6e\u6570\u636e\u5e93\u7684\u4f4d\u7f6e\uff0c\u5982\u4e0b\u6240\u793a\uff0c\u66ff\u6362 IRONIC_DBPASSWORD \u4e3a ironic \u7528\u6237\u7684\u5bc6\u7801\uff0c\u66ff\u6362 DB_IP \u4e3aDB\u670d\u52a1\u5668\u6240\u5728\u7684IP\u5730\u5740\uff1a [database] # The SQLAlchemy connection string used to connect to the # database (string value) connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic 2\u3001\u901a\u8fc7\u4ee5\u4e0b\u9009\u9879\u914d\u7f6eironic-api\u670d\u52a1\u4f7f\u7528RabbitMQ\u6d88\u606f\u4ee3\u7406\uff0c\u66ff\u6362 RPC_* \u4e3aRabbitMQ\u7684\u8be6\u7ec6\u5730\u5740\u548c\u51ed\u8bc1 [DEFAULT] # A URL representing the messaging driver to use and its full # configuration. 
(string value) transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/ \u7528\u6237\u4e5f\u53ef\u81ea\u884c\u4f7f\u7528json-rpc\u65b9\u5f0f\u66ff\u6362rabbitmq 3\u3001\u914d\u7f6eironic-api\u670d\u52a1\u4f7f\u7528\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u7684\u51ed\u8bc1\uff0c\u66ff\u6362 PUBLIC_IDENTITY_IP \u4e3a\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u5668\u7684\u516c\u5171IP\uff0c\u66ff\u6362 PRIVATE_IDENTITY_IP \u4e3a\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u5668\u7684\u79c1\u6709IP\uff0c\u66ff\u6362 IRONIC_PASSWORD \u4e3a\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u4e2d ironic \u7528\u6237\u7684\u5bc6\u7801\uff1a [DEFAULT] # Authentication strategy used by ironic-api: one of # \"keystone\" or \"noauth\". \"noauth\" should not be used in a # production environment because all authentication will be # disabled. (string value) auth_strategy=keystone force_config_drive = True [keystone_authtoken] # Authentication type to load (string value) auth_type=password # Complete public Identity API endpoint (string value) www_authenticate_uri=http://PUBLIC_IDENTITY_IP:5000 # Complete admin Identity API endpoint. (string value) auth_url=http://PRIVATE_IDENTITY_IP:5000 # Service username. (string value) username=ironic # Service account password. (string value) password=IRONIC_PASSWORD # Service tenant name. (string value) project_name=service # Domain name containing project (string value) project_domain_name=Default # User's domain name (string value) user_domain_name=Default 4\u3001\u9700\u8981\u5728\u914d\u7f6e\u6587\u4ef6\u4e2d\u6307\u5b9aironic\u65e5\u5fd7\u76ee\u5f55 [DEFAULT] log_dir = /var/log/ironic/ 5\u3001\u521b\u5efa\u88f8\u91d1\u5c5e\u670d\u52a1\u6570\u636e\u5e93\u8868 $ ironic-dbsync --config-file /etc/ironic/ironic.conf create_schema 6\u3001\u91cd\u542fironic-api\u670d\u52a1 $ systemctl restart openstack-ironic-api ##### \u914d\u7f6eironic-conductor\u670d\u52a1 1\u3001\u66ff\u6362 HOST_IP \u4e3aconductor host\u7684IP [DEFAULT] # IP address of this host. If unset, will determine the IP # programmatically. If unable to do so, will use \"127.0.0.1\". # (string value) my_ip=HOST_IP 2\u3001\u914d\u7f6e\u6570\u636e\u5e93\u7684\u4f4d\u7f6e\uff0cironic-conductor\u5e94\u8be5\u4f7f\u7528\u548cironic-api\u76f8\u540c\u7684\u914d\u7f6e\u3002\u66ff\u6362 IRONIC_DBPASSWORD \u4e3a ironic \u7528\u6237\u7684\u5bc6\u7801\uff0c\u66ff\u6362DB_IP\u4e3aDB\u670d\u52a1\u5668\u6240\u5728\u7684IP\u5730\u5740\uff1a [database] # The SQLAlchemy connection string to use to connect to the # database. (string value) connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic 3\u3001\u901a\u8fc7\u4ee5\u4e0b\u9009\u9879\u914d\u7f6eironic-api\u670d\u52a1\u4f7f\u7528RabbitMQ\u6d88\u606f\u4ee3\u7406\uff0cironic-conductor\u5e94\u8be5\u4f7f\u7528\u548cironic-api\u76f8\u540c\u7684\u914d\u7f6e\uff0c\u66ff\u6362 RPC_* \u4e3aRabbitMQ\u7684\u8be6\u7ec6\u5730\u5740\u548c\u51ed\u8bc1 [DEFAULT] # A URL representing the messaging driver to use and its full # configuration. 
(string value) transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/ \u7528\u6237\u4e5f\u53ef\u81ea\u884c\u4f7f\u7528json-rpc\u65b9\u5f0f\u66ff\u6362rabbitmq 4\u3001\u914d\u7f6e\u51ed\u8bc1\u8bbf\u95ee\u5176\u4ed6OpenStack\u670d\u52a1 \u4e3a\u4e86\u4e0e\u5176\u4ed6OpenStack\u670d\u52a1\u8fdb\u884c\u901a\u4fe1\uff0c\u88f8\u91d1\u5c5e\u670d\u52a1\u5728\u8bf7\u6c42\u5176\u4ed6\u670d\u52a1\u65f6\u9700\u8981\u4f7f\u7528\u670d\u52a1\u7528\u6237\u4e0eOpenStack Identity\u670d\u52a1\u8fdb\u884c\u8ba4\u8bc1\u3002\u8fd9\u4e9b\u7528\u6237\u7684\u51ed\u636e\u5fc5\u987b\u5728\u4e0e\u76f8\u5e94\u670d\u52a1\u76f8\u5173\u7684\u6bcf\u4e2a\u914d\u7f6e\u6587\u4ef6\u4e2d\u8fdb\u884c\u914d\u7f6e\u3002 [neutron] - \u8bbf\u95eeOpenstack\u7f51\u7edc\u670d\u52a1 [glance] - \u8bbf\u95eeOpenstack\u955c\u50cf\u670d\u52a1 [swift] - \u8bbf\u95eeOpenstack\u5bf9\u8c61\u5b58\u50a8\u670d\u52a1 [cinder] - \u8bbf\u95eeOpenstack\u5757\u5b58\u50a8\u670d\u52a1 [inspector] - \u8bbf\u95eeOpenstack\u88f8\u91d1\u5c5eintrospection\u670d\u52a1 [service_catalog] - \u4e00\u4e2a\u7279\u6b8a\u9879\u7528\u4e8e\u4fdd\u5b58\u88f8\u91d1\u5c5e\u670d\u52a1\u4f7f\u7528\u7684\u51ed\u8bc1\uff0c\u8be5\u51ed\u8bc1\u7528\u4e8e\u53d1\u73b0\u6ce8\u518c\u5728Openstack\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u76ee\u5f55\u4e2d\u7684\u81ea\u5df1\u7684API URL\u7aef\u70b9 \u7b80\u5355\u8d77\u89c1\uff0c\u53ef\u4ee5\u5bf9\u6240\u6709\u670d\u52a1\u4f7f\u7528\u540c\u4e00\u4e2a\u670d\u52a1\u7528\u6237\u3002\u4e3a\u4e86\u5411\u540e\u517c\u5bb9\uff0c\u8be5\u7528\u6237\u5e94\u8be5\u548cironic-api\u670d\u52a1\u7684[keystone_authtoken]\u6240\u914d\u7f6e\u7684\u4e3a\u540c\u4e00\u4e2a\u7528\u6237\u3002\u4f46\u8fd9\u4e0d\u662f\u5fc5\u987b\u7684\uff0c\u4e5f\u53ef\u4ee5\u4e3a\u6bcf\u4e2a\u670d\u52a1\u521b\u5efa\u5e76\u914d\u7f6e\u4e0d\u540c\u7684\u670d\u52a1\u7528\u6237\u3002 \u5728\u4e0b\u9762\u7684\u793a\u4f8b\u4e2d\uff0c\u7528\u6237\u8bbf\u95eeopenstack\u7f51\u7edc\u670d\u52a1\u7684\u8eab\u4efd\u9a8c\u8bc1\u4fe1\u606f\u914d\u7f6e\u4e3a\uff1a \u7f51\u7edc\u670d\u52a1\u90e8\u7f72\u5728\u540d\u4e3aRegionOne\u7684\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u57df\u4e2d\uff0c\u4ec5\u5728\u670d\u52a1\u76ee\u5f55\u4e2d\u6ce8\u518c\u516c\u5171\u7aef\u70b9\u63a5\u53e3 \u8bf7\u6c42\u65f6\u4f7f\u7528\u7279\u5b9a\u7684CA SSL\u8bc1\u4e66\u8fdb\u884cHTTPS\u8fde\u63a5 \u4e0eironic-api\u670d\u52a1\u914d\u7f6e\u76f8\u540c\u7684\u670d\u52a1\u7528\u6237 \u52a8\u6001\u5bc6\u7801\u8ba4\u8bc1\u63d2\u4ef6\u57fa\u4e8e\u5176\u4ed6\u9009\u9879\u53d1\u73b0\u5408\u9002\u7684\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1API\u7248\u672c [neutron] # Authentication type to load (string value) auth_type = password # Authentication URL (string value) auth_url=https://IDENTITY_IP:5000/ # Username (string value) username=ironic # User's password (string value) password=IRONIC_PASSWORD # Project name to scope to (string value) project_name=service # Domain ID containing project (string value) project_domain_id=default # User's domain id (string value) user_domain_id=default # PEM encoded Certificate Authority to use when verifying # HTTPs connections. (string value) cafile=/opt/stack/data/ca-bundle.pem # The default region_name for endpoint URL discovery. (string # value) region_name = RegionOne # List of interfaces, in order of preference, for endpoint # URL. 
(list value) valid_interfaces=public \u9ed8\u8ba4\u60c5\u51b5\u4e0b\uff0c\u4e3a\u4e86\u4e0e\u5176\u4ed6\u670d\u52a1\u8fdb\u884c\u901a\u4fe1\uff0c\u88f8\u91d1\u5c5e\u670d\u52a1\u4f1a\u5c1d\u8bd5\u901a\u8fc7\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u7684\u670d\u52a1\u76ee\u5f55\u53d1\u73b0\u8be5\u670d\u52a1\u5408\u9002\u7684\u7aef\u70b9\u3002\u5982\u679c\u5e0c\u671b\u5bf9\u4e00\u4e2a\u7279\u5b9a\u670d\u52a1\u4f7f\u7528\u4e00\u4e2a\u4e0d\u540c\u7684\u7aef\u70b9\uff0c\u5219\u5728\u88f8\u91d1\u5c5e\u670d\u52a1\u7684\u914d\u7f6e\u6587\u4ef6\u4e2d\u901a\u8fc7endpoint_override\u9009\u9879\u8fdb\u884c\u6307\u5b9a\uff1a [neutron] # ... endpoint_override = 5\u3001\u914d\u7f6e\u5141\u8bb8\u7684\u9a71\u52a8\u7a0b\u5e8f\u548c\u786c\u4ef6\u7c7b\u578b \u901a\u8fc7\u8bbe\u7f6eenabled_hardware_types\u8bbe\u7f6eironic-conductor\u670d\u52a1\u5141\u8bb8\u4f7f\u7528\u7684\u786c\u4ef6\u7c7b\u578b\uff1a [DEFAULT] enabled_hardware_types = ipmi \u914d\u7f6e\u786c\u4ef6\u63a5\u53e3\uff1a enabled_boot_interfaces = pxe enabled_deploy_interfaces = direct,iscsi enabled_inspect_interfaces = inspector enabled_management_interfaces = ipmitool enabled_power_interfaces = ipmitool \u914d\u7f6e\u63a5\u53e3\u9ed8\u8ba4\u503c\uff1a [DEFAULT] default_deploy_interface = direct default_network_interface = neutron \u5982\u679c\u542f\u7528\u4e86\u4efb\u4f55\u4f7f\u7528Direct deploy\u7684\u9a71\u52a8\uff0c\u5fc5\u987b\u5b89\u88c5\u548c\u914d\u7f6e\u955c\u50cf\u670d\u52a1\u7684Swift\u540e\u7aef\u3002Ceph\u5bf9\u8c61\u7f51\u5173(RADOS\u7f51\u5173)\u4e5f\u652f\u6301\u4f5c\u4e3a\u955c\u50cf\u670d\u52a1\u7684\u540e\u7aef\u3002 6\u3001\u91cd\u542fironic-conductor\u670d\u52a1 $ systemctl restart openstack-ironic-conductor deploy ramdisk\u955c\u50cf\u5236\u4f5c \u76ee\u524dramdisk\u955c\u50cf\u652f\u6301\u901a\u8fc7ironic python agent builder\u6765\u8fdb\u884c\u5236\u4f5c\uff0c\u8fd9\u91cc\u4ecb\u7ecd\u4e0b\u4f7f\u7528\u8fd9\u4e2a\u5de5\u5177\u6784\u5efaironic\u4f7f\u7528\u7684deploy\u955c\u50cf\u7684\u5b8c\u6574\u8fc7\u7a0b\u3002\uff08\u7528\u6237\u4e5f\u53ef\u4ee5\u6839\u636e\u81ea\u5df1\u7684\u60c5\u51b5\u83b7\u53d6ironic-python-agent\uff0c\u8fd9\u91cc\u63d0\u4f9b\u4f7f\u7528ipa-builder\u5236\u4f5cipa\u65b9\u6cd5\uff09 ##### \u5b89\u88c5 ironic-python-agent-builder \u5b89\u88c5\u5de5\u5177\uff1a $ pip install ironic-python-agent-builder \u4fee\u6539\u4ee5\u4e0b\u6587\u4ef6\u4e2d\u7684python\u89e3\u91ca\u5668\uff1a $ /usr/bin/yum /usr/libexec/urlgrabber-ext-down \u5b89\u88c5\u5176\u5b83\u5fc5\u987b\u7684\u5de5\u5177\uff1a $ yum install git \u7531\u4e8e DIB \u4f9d\u8d56 semanage \u547d\u4ee4\uff0c\u6240\u4ee5\u5728\u5236\u4f5c\u955c\u50cf\u4e4b\u524d\u786e\u5b9a\u8be5\u547d\u4ee4\u662f\u5426\u53ef\u7528\uff1a semanage --help \uff0c\u5982\u679c\u63d0\u793a\u65e0\u6b64\u547d\u4ee4\uff0c\u5b89\u88c5\u5373\u53ef\uff1a # \u5148\u67e5\u8be2\u9700\u8981\u5b89\u88c5\u54ea\u4e2a\u5305 [root@localhost ~]# yum provides /usr/sbin/semanage \u5df2\u52a0\u8f7d\u63d2\u4ef6\uff1afastestmirror Loading mirror speeds from cached hostfile * base: mirror.vcu.edu * extras: mirror.vcu.edu * updates: mirror.math.princeton.edu policycoreutils-python-2.5-34.el7.aarch64 : SELinux policy core python utilities \u6e90 \uff1abase \u5339\u914d\u6765\u6e90\uff1a \u6587\u4ef6\u540d \uff1a/usr/sbin/semanage # \u5b89\u88c5 [root@localhost ~]# yum install policycoreutils-python ##### \u5236\u4f5c\u955c\u50cf \u5982\u679c\u662f aarch64 \u67b6\u6784\uff0c\u8fd8\u9700\u8981\u6dfb\u52a0\uff1a $ export ARCH=aarch64 ###### \u666e\u901a\u955c\u50cf \u57fa\u672c\u7528\u6cd5\uff1a usage: 
ironic-python-agent-builder [-h] [-r RELEASE] [-o OUTPUT] [-e ELEMENT] [-b BRANCH] [-v] [--extra-args EXTRA_ARGS] distribution positional arguments: distribution Distribution to use optional arguments: -h, --help show this help message and exit -r RELEASE, --release RELEASE Distribution release to use -o OUTPUT, --output OUTPUT Output base file name -e ELEMENT, --element ELEMENT Additional DIB element to use -b BRANCH, --branch BRANCH If set, override the branch that is used for ironic- python-agent and requirements -v, --verbose Enable verbose logging in diskimage-builder --extra-args EXTRA_ARGS Extra arguments to pass to diskimage-builder \u4e3e\u4f8b\u8bf4\u660e\uff1a $ ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky ###### \u5141\u8bb8ssh\u767b\u5f55 \u521d\u59cb\u5316\u73af\u5883\u53d8\u91cf\uff0c\u7136\u540e\u5236\u4f5c\u955c\u50cf\uff1a $ export DIB_DEV_USER_USERNAME=ipa \\ $ export DIB_DEV_USER_PWDLESS_SUDO=yes \\ $ export DIB_DEV_USER_PASSWORD='123' $ ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky -e selinux-permissive -e devuser ###### \u6307\u5b9a\u4ee3\u7801\u4ed3\u5e93 \u521d\u59cb\u5316\u5bf9\u5e94\u7684\u73af\u5883\u53d8\u91cf\uff0c\u7136\u540e\u5236\u4f5c\u955c\u50cf\uff1a # \u6307\u5b9a\u4ed3\u5e93\u5730\u5740\u4ee5\u53ca\u7248\u672c DIB_REPOLOCATION_ironic_python_agent=git@172.20.2.149:liuzz/ironic-python-agent.git DIB_REPOREF_ironic_python_agent=origin/develop # \u76f4\u63a5\u4ecegerrit\u4e0aclone\u4ee3\u7801 DIB_REPOLOCATION_ironic_python_agent=https://review.opendev.org/openstack/ironic-python-agent DIB_REPOREF_ironic_python_agent=refs/changes/43/701043/1 \u53c2\u8003\uff1a source-repositories \u3002 \u6307\u5b9a\u4ed3\u5e93\u5730\u5740\u53ca\u7248\u672c\u9a8c\u8bc1\u6210\u529f\u3002 \u5728Rocky\u4e2d\uff0c\u6211\u4eec\u8fd8\u63d0\u4f9b\u4e86ironic-inspector\u7b49\u670d\u52a1\uff0c\u7528\u6237\u53ef\u6839\u636e\u81ea\u8eab\u9700\u6c42\u5b89\u88c5\u3002 Kolla \u5b89\u88c5 \u00b6 Kolla\u4e3aOpenStack\u670d\u52a1\u63d0\u4f9b\u751f\u4ea7\u73af\u5883\u53ef\u7528\u7684\u5bb9\u5668\u5316\u90e8\u7f72\u7684\u529f\u80fd\u3002openEuler 20.03 LTS SP2\u4e2d\u5df2\u7ecf\u5f15\u5165\u4e86Kolla\u548cKolla-ansible\u670d\u52a1\uff0c\u4f46\u662fKolla \u4ee5\u53ca Kolla-ansible \u539f\u751f\u5e76\u4e0d\u652f\u6301 openEuler\uff0c \u56e0\u6b64 Openstack SIG \u5728openEuler 20.03 LTS SP3\u4e2d\u63d0\u4f9b\u4e86 openstack-kolla-plugin \u548c openstack-kolla-ansible-plugin \u8fd9\u4e24\u4e2a\u8865\u4e01\u5305\u3002 Kolla\u7684\u5b89\u88c5\u5341\u5206\u7b80\u5355\uff0c\u53ea\u9700\u8981\u5b89\u88c5\u5bf9\u5e94\u7684RPM\u5305\u5373\u53ef \u652f\u6301 openEuler \u7248\u672c\uff1a yum install openstack-kolla-plugin openstack-kolla-ansible-plugin \u4e0d\u652f\u6301 openEuler \u7248\u672c\uff1a yum install openstack-kolla openstack-kolla-ansible \u5b89\u88c5\u5b8c\u540e\uff0c\u5c31\u53ef\u4ee5\u4f7f\u7528 kolla-ansible , kolla-build , kolla-genpwd , kolla-mergepwd \u7b49\u547d\u4ee4\u4e86\u3002 Trove \u5b89\u88c5 \u00b6 Trove\u662fOpenStack\u7684\u6570\u636e\u5e93\u670d\u52a1\uff0c\u5982\u679c\u7528\u6237\u4f7f\u7528OpenStack\u63d0\u4f9b\u7684\u6570\u636e\u5e93\u670d\u52a1\u5219\u63a8\u8350\u4f7f\u7528\u8be5\u7ec4\u4ef6\u3002\u5426\u5219\uff0c\u53ef\u4ee5\u4e0d\u7528\u5b89\u88c5\u3002 \u8bbe\u7f6e\u6570\u636e\u5e93 \u6570\u636e\u5e93\u670d\u52a1\u5728\u6570\u636e\u5e93\u4e2d\u5b58\u50a8\u4fe1\u606f\uff0c\u521b\u5efa\u4e00\u4e2a trove \u7528\u6237\u53ef\u4ee5\u8bbf\u95ee trove \u6570\u636e\u5e93\uff0c\u66ff\u6362 TROVE_DBPASSWORD 
\u4e3a\u5bf9\u5e94\u5bc6\u7801 $ mysql -u root -p MariaDB [(none)]> CREATE DATABASE trove CHARACTER SET utf8; MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'localhost' \\ IDENTIFIED BY 'TROVE_DBPASSWORD'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'%' \\ IDENTIFIED BY 'TROVE_DBPASSWORD'; \u521b\u5efa\u670d\u52a1\u7528\u6237\u8ba4\u8bc1 1\u3001\u521b\u5efa Trove \u670d\u52a1\u7528\u6237 $ openstack user create --password TROVE_PASSWORD \\ --email trove@example.com trove $ openstack role add --project service --user trove admin $ openstack service create --name trove --description \"Database service\" database \u89e3\u91ca\uff1a TROVE_PASSWORD \u66ff\u6362\u4e3a trove \u7528\u6237\u7684\u5bc6\u7801 2\u3001\u521b\u5efa Database \u670d\u52a1\u8bbf\u95ee\u5165\u53e3 $ openstack endpoint create --region RegionOne database public http://$TROVE_NODE:8779/v1.0/%\\(tenant_id\\)s $ openstack endpoint create --region RegionOne database internal http://$TROVE_NODE:8779/v1.0/%\\(tenant_id\\)s $ openstack endpoint create --region RegionOne database admin http://$TROVE_NODE:8779/v1.0/%\\(tenant_id\\)s \u89e3\u91ca\uff1a $TROVE_NODE \u66ff\u6362\u4e3aTrove\u7684API\u670d\u52a1\u90e8\u7f72\u8282\u70b9 \u5b89\u88c5\u548c\u914d\u7f6e Trove \u5404\u7ec4\u4ef6 1\u3001\u5b89\u88c5 Trove \u5305 $ yum install openstack-trove python2-troveclient 2\u3001\u914d\u7f6e /etc/trove/trove.conf [DEFAULT] bind_host=TROVE_NODE_IP log_dir = /var/log/trove auth_strategy = keystone # Config option for showing the IP address that nova doles out add_addresses = True network_label_regex = ^NETWORK_LABEL$ api_paste_config = /etc/trove/api-paste.ini trove_auth_url = http://controller:35357/v3/ nova_compute_url = http://controller:8774/v2 cinder_url = http://controller:8776/v1 nova_proxy_admin_user = admin nova_proxy_admin_pass = ADMIN_PASS nova_proxy_admin_tenant_name = service taskmanager_manager = trove.taskmanager.manager.Manager use_nova_server_config_drive = True # Set these if using Neutron Networking network_driver=trove.network.neutron.NeutronDriver network_label_regex=.* transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/ [database] connection = mysql+pymysql://trove:TROVE_DBPASS@controller/trove [keystone_authtoken] www_authenticate_uri = http://controller:5000/v3/ auth_url=http://controller:35357/v3/ #auth_uri = http://controller/identity #auth_url = http://controller/identity_admin auth_type = password project_domain_name = Default user_domain_name = Default project_name = service username = trove password = TROVE_PASS \u89e3\u91ca\uff1a - [Default] \u5206\u7ec4\u4e2d bind_host \u914d\u7f6e\u4e3aTrove\u90e8\u7f72\u8282\u70b9\u7684IP - nova_compute_url \u548c cinder_url \u4e3aNova\u548cCinder\u5728Keystone\u4e2d\u521b\u5efa\u7684endpoint - nova_proxy_XXX \u4e3a\u4e00\u4e2a\u80fd\u8bbf\u95eeNova\u670d\u52a1\u7684\u7528\u6237\u4fe1\u606f\uff0c\u4e0a\u4f8b\u4e2d\u4f7f\u7528 admin \u7528\u6237\u4e3a\u4f8b - transport_url \u4e3a RabbitMQ \u8fde\u63a5\u4fe1\u606f\uff0c RABBIT_PASS \u66ff\u6362\u4e3aRabbitMQ\u7684\u5bc6\u7801 - [database] \u5206\u7ec4\u4e2d\u7684 connection \u4e3a\u524d\u9762\u5728mysql\u4e2d\u4e3aTrove\u521b\u5efa\u7684\u6570\u636e\u5e93\u4fe1\u606f - Trove\u7684\u7528\u6237\u4fe1\u606f\u4e2d TROVE_PASS \u66ff\u6362\u4e3a\u5b9e\u9645trove\u7528\u6237\u7684\u5bc6\u7801 3\u3001\u914d\u7f6e /etc/trove/trove-taskmanager.conf [DEFAULT] log_dir = /var/log/trove trove_auth_url = http://controller/identity/v2.0 nova_compute_url = http://controller:8774/v2 cinder_url = 
http://controller:8776/v1 transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/ [database] connection = mysql+pymysql://trove:TROVE_DBPASS@controller/trove \u89e3\u91ca\uff1a \u53c2\u7167 trove.conf \u914d\u7f6e 4\u3001\u914d\u7f6e /etc/trove/trove-conductor.conf [DEFAULT] log_dir = /var/log/trove trove_auth_url = http://controller/identity/v2.0 nova_compute_url = http://controller:8774/v2 cinder_url = http://controller:8776/v1 transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/ [database] connection = mysql+pymysql://trove:trove@controller/trove \u89e3\u91ca\uff1a \u53c2\u7167 trove.conf \u914d\u7f6e 5\u3001\u914d\u7f6e /etc/trove/trove-guestagent.conf [DEFAULT] rabbit_host = controller rabbit_password = RABBIT_PASS nova_proxy_admin_user = admin nova_proxy_admin_pass = ADMIN_PASS nova_proxy_admin_tenant_name = service trove_auth_url = http://controller/identity_admin/v2.0 \u89e3\u91ca\uff1a guestagent \u662ftrove\u4e2d\u4e00\u4e2a\u72ec\u7acb\u7ec4\u4ef6\uff0c\u9700\u8981\u9884\u5148\u5185\u7f6e\u5230Trove\u901a\u8fc7Nova\u521b\u5efa\u7684\u865a\u62df \u673a\u955c\u50cf\u4e2d\uff0c\u5728\u521b\u5efa\u597d\u6570\u636e\u5e93\u5b9e\u4f8b\u540e\uff0c\u4f1a\u8d77guestagent\u8fdb\u7a0b\uff0c\u8d1f\u8d23\u901a\u8fc7\u6d88\u606f\u961f\u5217\uff08RabbitMQ\uff09\u5411Trove\u4e0a \u62a5\u5fc3\u8df3\uff0c\u56e0\u6b64\u9700\u8981\u914d\u7f6eRabbitMQ\u7684\u7528\u6237\u548c\u5bc6\u7801\u4fe1\u606f\u3002 6\u3001\u751f\u6210\u6570\u636e Trove \u6570\u636e\u5e93\u8868 $ su -s /bin/sh -c \"trove-manage db_sync\" trove \u5b8c\u6210\u5b89\u88c5\u914d\u7f6e 1\u3001\u914d\u7f6e Trove \u670d\u52a1\u81ea\u542f\u52a8 $ systemctl enable openstack-trove-api.service \\ openstack-trove-taskmanager.service \\ openstack-trove-conductor.service 2\u3001\u542f\u52a8\u670d\u52a1 $ systemctl start openstack-trove-api.service \\ openstack-trove-taskmanager.service \\ openstack-trove-conductor.service Rally \u5b89\u88c5 \u00b6 Rally\u662fOpenStack\u63d0\u4f9b\u7684\u6027\u80fd\u6d4b\u8bd5\u5de5\u5177\u3002\u53ea\u9700\u8981\u7b80\u5355\u7684\u5b89\u88c5\u5373\u53ef\u3002 yum install openstack-rally openstack-rally-plugins","title":"openEuler-20.03-LTS-SP3_Rocky"},{"location":"install/openEuler-20.03-LTS-SP3/OpenStack-rocky/#openstack-rocky","text":"OpenStack-Rocky \u90e8\u7f72\u6307\u5357 OpenStack \u7b80\u4ecb \u51c6\u5907\u73af\u5883 OpenStack yum\u6e90\u914d\u7f6e \u73af\u5883\u914d\u7f6e \u5b89\u88c5 SQL DataBase \u5b89\u88c5 RabbitMQ \u5b89\u88c5 Memcached \u5b89\u88c5 OpenStack Keystone \u5b89\u88c5 Glance \u5b89\u88c5 ... ... ... 
\u6ce8\u610f\uff1a\u5982\u679c\u60a8\u4f7f\u7528\u7684\u73af\u5883\u662f\u9cb2\u9e4f\u67b6\u6784\uff0c\u8bf7\u4e0b\u8f7darm64\u7248\u672c\u7684\u955c\u50cf\u3002 Nova \u5b89\u88c5 Neutron \u5b89\u88c5 Cinder \u5b89\u88c5 Horizon \u5b89\u88c5 Tempest \u5b89\u88c5 Ironic \u5b89\u88c5 Kolla \u5b89\u88c5 Trove \u5b89\u88c5 Rally \u5b89\u88c5","title":"OpenStack-Rocky \u90e8\u7f72\u6307\u5357"},{"location":"install/openEuler-20.03-LTS-SP3/OpenStack-rocky/#openstack","text":"OpenStack \u662f\u4e00\u4e2a\u793e\u533a\uff0c\u4e5f\u662f\u4e00\u4e2a\u9879\u76ee\u3002\u5b83\u63d0\u4f9b\u4e86\u4e00\u4e2a\u90e8\u7f72\u4e91\u7684\u64cd\u4f5c\u5e73\u53f0\u6216\u5de5\u5177\u96c6\uff0c\u4e3a\u7ec4\u7ec7\u63d0\u4f9b\u53ef\u6269\u5c55\u7684\u3001\u7075\u6d3b\u7684\u4e91\u8ba1\u7b97\u3002 \u4f5c\u4e3a\u4e00\u4e2a\u5f00\u6e90\u7684\u4e91\u8ba1\u7b97\u7ba1\u7406\u5e73\u53f0\uff0cOpenStack \u7531 nova\u3001cinder\u3001neutron\u3001glance\u3001keystone\u3001horizon \u7b49\u51e0\u4e2a\u4e3b\u8981\u7684\u7ec4\u4ef6\u7ec4\u5408\u8d77\u6765\u5b8c\u6210\u5177\u4f53\u5de5\u4f5c\u3002OpenStack \u652f\u6301\u51e0\u4e4e\u6240\u6709\u7c7b\u578b\u7684\u4e91\u73af\u5883\uff0c\u9879\u76ee\u76ee\u6807\u662f\u63d0\u4f9b\u5b9e\u65bd\u7b80\u5355\u3001\u53ef\u5927\u89c4\u6a21\u6269\u5c55\u3001\u4e30\u5bcc\u3001\u6807\u51c6\u7edf\u4e00\u7684\u4e91\u8ba1\u7b97\u7ba1\u7406\u5e73\u53f0\u3002OpenStack \u901a\u8fc7\u5404\u79cd\u4e92\u8865\u7684\u670d\u52a1\u63d0\u4f9b\u4e86\u57fa\u7840\u8bbe\u65bd\u5373\u670d\u52a1\uff08IaaS\uff09\u7684\u89e3\u51b3\u65b9\u6848\uff0c\u6bcf\u4e2a\u670d\u52a1\u63d0\u4f9b API \u8fdb\u884c\u96c6\u6210\u3002 openEuler 20.03-LTS-SP3 \u7248\u672c\u5b98\u65b9\u8ba4\u8bc1\u7684\u7b2c\u4e09\u65b9 oepkg yum \u6e90\u5df2\u7ecf\u652f\u6301 Openstack-Rocky \u7248\u672c\uff0c\u7528\u6237\u53ef\u4ee5\u914d\u7f6e\u597d oepkg yum \u6e90\u540e\u6839\u636e\u6b64\u6587\u6863\u8fdb\u884c OpenStack \u90e8\u7f72\u3002","title":"OpenStack \u7b80\u4ecb"},{"location":"install/openEuler-20.03-LTS-SP3/OpenStack-rocky/#_1","text":"","title":"\u51c6\u5907\u73af\u5883"},{"location":"install/openEuler-20.03-LTS-SP3/OpenStack-rocky/#openstack-yum","text":"\u914d\u7f6e 20.03-LTS-SP3 \u5b98\u65b9\u8ba4\u8bc1\u7684\u7b2c\u4e09\u65b9\u6e90 oepkg $ cat << EOF >> /etc/yum.repos.d/OpenStack_Rocky.repo [openstack_rocky] name=OpenStack_Rocky baseurl=https://repo.oepkgs.net/openEuler/rpm/openEuler-20.03-LTS-SP3/budding-openeuler/openstack/rocky/$basearch/ gpgcheck=0 enabled=1 EOF \u6ce8\u610f \u5982\u679c\u73af\u5883\u542f\u7528\u4e86Epol\u6e90\uff0c\u9700\u8981\u63d0\u9ad8rocky\u4ed3\u7684\u4f18\u5148\u7ea7\uff0c\u8bbe\u7f6epriority=1\uff1a $ cat << EOF >> /etc/yum.repos.d/OpenStack_Rocky.repo [openstack_rocky] name=OpenStack_Rocky baseurl=https://repo.oepkgs.net/openEuler/rpm/openEuler-20.03-LTS-SP3/budding-openeuler/openstack/rocky/$basearch/ gpgcheck=0 enabled=1 priority=1 EOF $ yum clean all && yum makecache","title":"OpenStack yum\u6e90\u914d\u7f6e"},{"location":"install/openEuler-20.03-LTS-SP3/OpenStack-rocky/#_2","text":"\u5728 /etc/hosts \u4e2d\u6dfb\u52a0controller\u4fe1\u606f\uff0c\u4f8b\u5982\u8282\u70b9IP\u662f 10.0.0.11 \uff0c\u5219\u65b0\u589e\uff1a 10.0.0.11 controller","title":"\u73af\u5883\u914d\u7f6e"},{"location":"install/openEuler-20.03-LTS-SP3/OpenStack-rocky/#sql-database","text":"\u6267\u884c\u5982\u4e0b\u547d\u4ee4\uff0c\u5b89\u88c5\u8f6f\u4ef6\u5305\u3002 $ yum install mariadb mariadb-server python2-PyMySQL 2. 
\u521b\u5efa\u5e76\u7f16\u8f91 /etc/my.cnf.d/openstack.cnf \u6587\u4ef6\u3002 \u590d\u5236\u5982\u4e0b\u5185\u5bb9\u5230\u6587\u4ef6\uff0c\u5176\u4e2d bind-address \u8bbe\u7f6e\u4e3a\u63a7\u5236\u8282\u70b9\u7684\u7ba1\u7406IP\u5730\u5740\u3002 [mysqld] bind-address = 10.0.0.11 default-storage-engine = innodb innodb_file_per_table = on max_connections = 4096 collation-server = utf8_general_ci character-set-server = utf8 \u542f\u52a8 DataBase \u670d\u52a1\uff0c\u5e76\u4e3a\u5176\u914d\u7f6e\u5f00\u673a\u81ea\u542f\u52a8\uff1a $ systemctl enable mariadb.service $ systemctl start mariadb.service","title":"\u5b89\u88c5 SQL DataBase"},{"location":"install/openEuler-20.03-LTS-SP3/OpenStack-rocky/#rabbitmq","text":"\u6267\u884c\u5982\u4e0b\u547d\u4ee4\uff0c\u5b89\u88c5\u8f6f\u4ef6\u5305\u3002 $ yum install rabbitmq-server \u542f\u52a8 RabbitMQ \u670d\u52a1\uff0c\u5e76\u4e3a\u5176\u914d\u7f6e\u5f00\u673a\u81ea\u542f\u52a8\u3002 $ systemctl enable rabbitmq-server.service $ systemctl start rabbitmq-server.service 3. \u6dfb\u52a0 OpenStack\u7528\u6237\u3002 $ rabbitmqctl add_user openstack RABBIT_PASS 4. \u66ff\u6362 RABBIT_PASS\uff0c\u4e3aOpenStack\u7528\u6237\u8bbe\u7f6e\u5bc6\u7801 \u8bbe\u7f6eopenstack\u7528\u6237\u6743\u9650\uff0c\u5141\u8bb8\u8fdb\u884c\u914d\u7f6e\u3001\u5199\u3001\u8bfb\uff1a $ rabbitmqctl set_permissions openstack \".*\" \".*\" \".*\"","title":"\u5b89\u88c5 RabbitMQ"},{"location":"install/openEuler-20.03-LTS-SP3/OpenStack-rocky/#memcached","text":"\u6267\u884c\u5982\u4e0b\u547d\u4ee4\uff0c\u5b89\u88c5\u4f9d\u8d56\u8f6f\u4ef6\u5305\u3002 $ yum install memcached python2-memcached 2. \u7f16\u8f91 /etc/sysconfig/memcached \u6587\u4ef6\uff0c\u6dfb\u52a0\u4ee5\u4e0b\u5185\u5bb9 OPTIONS=\"-l 127.0.0.1,::1,controller\" OPTIONS \u4fee\u6539\u4e3a\u5b9e\u9645\u73af\u5883\u4e2d\u63a7\u5236\u8282\u70b9\u7684\u7ba1\u7406IP\u5730\u5740\u3002 \u6267\u884c\u5982\u4e0b\u547d\u4ee4\uff0c\u542f\u52a8 Memcached \u670d\u52a1\uff0c\u5e76\u4e3a\u5176\u914d\u7f6e\u5f00\u673a\u542f\u52a8\u3002 $ systemctl enable memcached.service $ systemctl start memcached.service","title":"\u5b89\u88c5 Memcached"},{"location":"install/openEuler-20.03-LTS-SP3/OpenStack-rocky/#openstack_1","text":"","title":"\u5b89\u88c5 OpenStack"},{"location":"install/openEuler-20.03-LTS-SP3/OpenStack-rocky/#keystone","text":"\u4ee5 root \u7528\u6237\u8bbf\u95ee\u6570\u636e\u5e93\uff0c\u521b\u5efa keystone \u6570\u636e\u5e93\u5e76\u6388\u6743\u3002 $ mysql -u root -p MariaDB [(none)]> CREATE DATABASE keystone; MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \\ IDENTIFIED BY 'KEYSTONE_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \\ IDENTIFIED BY 'KEYSTONE_DBPASS'; MariaDB [(none)]> exit \u66ff\u6362 KEYSTONE_DBPASS\uff0c\u4e3a Keystone \u6570\u636e\u5e93\u8bbe\u7f6e\u5bc6\u7801 \u6267\u884c\u5982\u4e0b\u547d\u4ee4\uff0c\u5b89\u88c5\u8f6f\u4ef6\u5305\u3002 $ yum install openstack-keystone httpd python2-mod_wsgi \u914d\u7f6ekeystone\uff0c\u7f16\u8f91 /etc/keystone/keystone.conf \u6587\u4ef6\u3002\u5728[database]\u90e8\u5206\uff0c\u914d\u7f6e\u6570\u636e\u5e93\u5165\u53e3\u3002\u5728[token]\u90e8\u5206\uff0c\u914d\u7f6etoken provider [database] connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone [token] provider = fernet \u66ff\u6362KEYSTONE_DBPASS\u4e3aKeystone\u6570\u636e\u5e93\u7684\u5bc6\u7801 \u6267\u884c\u5982\u4e0b\u547d\u4ee4\uff0c\u540c\u6b65\u6570\u636e\u5e93\u3002 su -s /bin/sh -c \"keystone-manage db_sync\" keystone 
\u6267\u884c\u5982\u4e0b\u547d\u4ee4\uff0c\u521d\u59cb\u5316Fernet\u5bc6\u94a5\u4ed3\u5e93\u3002 $ keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone $ keystone-manage credential_setup --keystone-user keystone --keystone-group keystone \u6267\u884c\u5982\u4e0b\u547d\u4ee4\uff0c\u542f\u52a8\u8eab\u4efd\u670d\u52a1\u3002 $ keystone-manage bootstrap --bootstrap-password ADMIN_PASS \\ --bootstrap-admin-url http://controller:5000/v3/ \\ --bootstrap-internal-url http://controller:5000/v3/ \\ --bootstrap-public-url http://controller:5000/v3/ \\ --bootstrap-region-id RegionOne \u66ff\u6362 ADMIN_PASS\uff0c\u4e3a admin \u7528\u6237\u8bbe\u7f6e\u5bc6\u7801\u3002 \u7f16\u8f91 /etc/httpd/conf/httpd.conf \u6587\u4ef6\uff0c\u914d\u7f6eApache HTTP server $ vim /etc/httpd/conf/httpd.conf \u914d\u7f6e ServerName \u9879\u5f15\u7528\u63a7\u5236\u8282\u70b9\uff0c\u5982\u4e0b\u6240\u793a\u3002 ServerName controller \u5982\u679c ServerName \u9879\u4e0d\u5b58\u5728\u5219\u9700\u8981\u521b\u5efa\u3002 \u6267\u884c\u5982\u4e0b\u547d\u4ee4\uff0c\u4e3a /usr/share/keystone/wsgi-keystone.conf \u6587\u4ef6\u521b\u5efa\u94fe\u63a5\u3002 $ ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/ \u5b8c\u6210\u5b89\u88c5\uff0c\u6267\u884c\u5982\u4e0b\u547d\u4ee4\uff0c\u542f\u52a8Apache HTTP\u670d\u52a1\u3002 $ systemctl enable httpd.service $ systemctl start httpd.service \u5b89\u88c5OpenStackClient $ yum install python2-openstackclient \u521b\u5efa OpenStack client \u73af\u5883\u811a\u672c \u521b\u5efaadmin\u7528\u6237\u7684\u73af\u5883\u53d8\u91cf\u811a\u672c\uff1a # vim admin-openrc export OS_PROJECT_DOMAIN_NAME=Default export OS_USER_DOMAIN_NAME=Default export OS_PROJECT_NAME=admin export OS_USERNAME=admin export OS_PASSWORD=ADMIN_PASS export OS_AUTH_URL=http://controller:5000/v3 export OS_IDENTITY_API_VERSION=3 export OS_IMAGE_API_VERSION=2 \u66ff\u6362ADMIN_PASS\u4e3aadmin\u7528\u6237\u7684\u5bc6\u7801, \u4e0e\u4e0a\u8ff0 keystone-manage bootstrap \u547d\u4ee4\u4e2d\u8bbe\u7f6e\u7684\u5bc6\u7801\u4e00\u81f4 \u8fd0\u884c\u811a\u672c\u52a0\u8f7d\u73af\u5883\u53d8\u91cf\uff1a $ source admin-openrc \u5206\u522b\u6267\u884c\u5982\u4e0b\u547d\u4ee4\uff0c\u521b\u5efadomain, projects, users, roles\u3002 \u521b\u5efadomain \u2018example\u2019\uff1a $ openstack domain create --description \"An Example Domain\" example \u6ce8\uff1adomain \u2018default\u2019\u5728 keystone-manage bootstrap \u65f6\u5df2\u521b\u5efa \u521b\u5efaproject \u2018service\u2019\uff1a $ openstack project create --domain default --description \"Service Project\" service \u521b\u5efa\uff08non-admin\uff09project \u2019myproject\u2018\uff0cuser \u2019myuser\u2018 \u548c role \u2019myrole\u2018\uff0c\u4e3a\u2018myproject\u2019\u548c\u2018myuser\u2019\u6dfb\u52a0\u89d2\u8272\u2018myrole\u2019\uff1a $ openstack project create --domain default --description \"Demo Project\" myproject $ openstack user create --domain default --password-prompt myuser $ openstack role create myrole $ openstack role add --project myproject --user myuser myrole \u9a8c\u8bc1 \u53d6\u6d88\u4e34\u65f6\u73af\u5883\u53d8\u91cfOS_AUTH_URL\u548cOS_PASSWORD\uff1a $ unset OS_AUTH_URL OS_PASSWORD \u4e3aadmin\u7528\u6237\u8bf7\u6c42token\uff1a $ openstack --os-auth-url http://controller:5000/v3 \\ --os-project-domain-name Default --os-user-domain-name Default \\ --os-project-name admin --os-username admin token issue \u4e3amyuser\u7528\u6237\u8bf7\u6c42token\uff1a $ openstack --os-auth-url http://controller:5000/v3 \\ --os-project-domain-name Default 
--os-user-domain-name Default \\ --os-project-name myproject --os-username myuser token issue","title":"Keystone \u5b89\u88c5"},{"location":"install/openEuler-20.03-LTS-SP3/OpenStack-rocky/#glance","text":"\u521b\u5efa\u6570\u636e\u5e93\u3001\u670d\u52a1\u51ed\u8bc1\u548c API \u7aef\u70b9 \u521b\u5efa\u6570\u636e\u5e93\uff1a \u4ee5 root \u7528\u6237\u8bbf\u95ee\u6570\u636e\u5e93\uff0c\u521b\u5efa glance \u6570\u636e\u5e93\u5e76\u6388\u6743\u3002 $ mysql -u root -p MariaDB [(none)]> CREATE DATABASE glance; MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \\ IDENTIFIED BY 'GLANCE_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \\ IDENTIFIED BY 'GLANCE_DBPASS'; MariaDB [(none)]> exit \u66ff\u6362 GLANCE_DBPASS\uff0c\u4e3a glance \u6570\u636e\u5e93\u8bbe\u7f6e\u5bc6\u7801\u3002 $ source admin-openrc \u6267\u884c\u4ee5\u4e0b\u547d\u4ee4\uff0c\u5206\u522b\u5b8c\u6210\u521b\u5efa glance \u670d\u52a1\u51ed\u8bc1\u3001\u521b\u5efaglance\u7528\u6237\u548c\u6dfb\u52a0\u2018admin\u2019\u89d2\u8272\u5230\u7528\u6237\u2018glance\u2019\u3002 $ openstack user create --domain default --password-prompt glance $ openstack role add --project service --user glance admin $ openstack service create --name glance --description \"OpenStack Image\" image \u521b\u5efa\u955c\u50cf\u670d\u52a1API\u7aef\u70b9\uff1a $ openstack endpoint create --region RegionOne image public http://controller:9292 $ openstack endpoint create --region RegionOne image internal http://controller:9292 $ openstack endpoint create --region RegionOne image admin http://controller:9292 \u5b89\u88c5\u548c\u914d\u7f6e \u5b89\u88c5\u8f6f\u4ef6\u5305\uff1a $ yum install openstack-glance \u914d\u7f6eglance\uff1a \u7f16\u8f91 /etc/glance/glance-api.conf \u6587\u4ef6\uff1a \u5728[database]\u90e8\u5206\uff0c\u914d\u7f6e\u6570\u636e\u5e93\u5165\u53e3 \u5728[keystone_authtoken] [paste_deploy]\u90e8\u5206\uff0c\u914d\u7f6e\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u5165\u53e3 \u5728[glance_store]\u90e8\u5206\uff0c\u914d\u7f6e\u672c\u5730\u6587\u4ef6\u7cfb\u7edf\u5b58\u50a8\u548c\u955c\u50cf\u6587\u4ef6\u7684\u4f4d\u7f6e [database] # ... connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance [keystone_authtoken] # ... www_authenticate_uri = http://controller:5000 auth_url = http://controller:5000 memcached_servers = controller:11211 auth_type = password project_domain_name = Default user_domain_name = Default project_name = service username = glance password = GLANCE_PASS [paste_deploy] # ... flavor = keystone [glance_store] # ... 
stores = file,http default_store = file filesystem_store_datadir = /var/lib/glance/images/ \u7f16\u8f91 /etc/glance/glance-registry.conf \u6587\u4ef6\uff1a \u5728[database]\u90e8\u5206\uff0c\u914d\u7f6e\u6570\u636e\u5e93\u5165\u53e3 \u5728[keystone_authtoken] [paste_deploy]\u90e8\u5206\uff0c\u914d\u7f6e\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u5165\u53e3 ```ini [database]","title":"Glance \u5b89\u88c5"},{"location":"install/openEuler-20.03-LTS-SP3/OpenStack-rocky/#_3","text":"connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance [keystone_authtoken]","title":"..."},{"location":"install/openEuler-20.03-LTS-SP3/OpenStack-rocky/#_4","text":"www_authenticate_uri = http://controller:5000 auth_url = http://controller:5000 memcached_servers = controller:11211 auth_type = password project_domain_name = Default user_domain_name = Default project_name = service username = glance password = GLANCE_PASS [paste_deploy]","title":"..."},{"location":"install/openEuler-20.03-LTS-SP3/OpenStack-rocky/#_5","text":"flavor = keystone ``` \u5176\u4e2d\uff0c\u66ff\u6362 GLANCE_DBPASS \u4e3a glance \u6570\u636e\u5e93\u7684\u5bc6\u7801\uff0c\u66ff\u6362 GLANCE_PASS \u4e3a glance \u7528\u6237\u7684\u5bc6\u7801\u3002 \u540c\u6b65\u6570\u636e\u5e93\uff1a $ su -s /bin/sh -c \"glance-manage db_sync\" glance \u542f\u52a8\u955c\u50cf\u670d\u52a1\uff1a $ systemctl enable openstack-glance-api.service openstack-glance-registry.service $ systemctl start openstack-glance-api.service openstack-glance-registry.service \u9a8c\u8bc1 \u4e0b\u8f7d\u955c\u50cf ```shell $ source admin-openrc","title":"..."},{"location":"install/openEuler-20.03-LTS-SP3/OpenStack-rocky/#arm64","text":"$ wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img ``` \u5411Image\u670d\u52a1\u4e0a\u4f20\u955c\u50cf\uff1a shell $ glance image-create --name \"cirros\" --file cirros-0.4.0-x86_64-disk.img --disk-format qcow2 --container-format bare --visibility=public \u786e\u8ba4\u955c\u50cf\u4e0a\u4f20\u5e76\u9a8c\u8bc1\u5c5e\u6027\uff1a shell $ glance image-list","title":"\u6ce8\u610f\uff1a\u5982\u679c\u60a8\u4f7f\u7528\u7684\u73af\u5883\u662f\u9cb2\u9e4f\u67b6\u6784\uff0c\u8bf7\u4e0b\u8f7darm64\u7248\u672c\u7684\u955c\u50cf\u3002"},{"location":"install/openEuler-20.03-LTS-SP3/OpenStack-rocky/#nova","text":"\u521b\u5efa\u6570\u636e\u5e93\u3001\u670d\u52a1\u51ed\u8bc1\u548c API \u7aef\u70b9 \u521b\u5efa\u6570\u636e\u5e93\uff1a \u4f5c\u4e3aroot\u7528\u6237\u8bbf\u95ee\u6570\u636e\u5e93\uff0c\u521b\u5efanova\u3001nova_api\u3001nova_cell0 \u6570\u636e\u5e93\u5e76\u6388\u6743 $ mysql -u root -p MariaDB [(none)]> CREATE DATABASE nova_api; MariaDB [(none)]> CREATE DATABASE nova; MariaDB [(none)]> CREATE DATABASE nova_cell0; MariaDB [(none)]> CREATE DATABASE placement; MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \\ IDENTIFIED BY 'NOVA_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \\ IDENTIFIED BY 'NOVA_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \\ IDENTIFIED BY 'NOVA_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \\ IDENTIFIED BY 'NOVA_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \\ IDENTIFIED BY 'NOVA_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \\ IDENTIFIED BY 'NOVA_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' \\ IDENTIFIED BY 'PLACEMENT_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* 
TO 'placement'@'%' \\ IDENTIFIED BY 'PLACEMENT_DBPASS'; MariaDB [(none)]> exit \u66ff\u6362NOVA_DBPASS\u53caPLACEMENT_DBPASS\uff0c\u4e3anova\u53caplacement\u6570\u636e\u5e93\u8bbe\u7f6e\u5bc6\u7801 \u6267\u884c\u5982\u4e0b\u547d\u4ee4\uff0c\u5b8c\u6210\u521b\u5efanova\u670d\u52a1\u51ed\u8bc1\u3001\u521b\u5efanova\u7528\u6237\u4ee5\u53ca\u6dfb\u52a0\u2018admin\u2019\u89d2\u8272\u5230\u7528\u6237\u2018nova\u2019\u3002 $ . admin-openrc $ openstack user create --domain default --password-prompt nova $ openstack role add --project service --user nova admin $ openstack service create --name nova --description \"OpenStack Compute\" compute \u521b\u5efa\u8ba1\u7b97\u670d\u52a1API\u7aef\u70b9\uff1a $ openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1 $ openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1 $ openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1 \u521b\u5efaplacement\u7528\u6237\u5e76\u6dfb\u52a0\u2018admin\u2019\u89d2\u8272\u5230\u7528\u6237\u2018placement\u2019\uff1a $ openstack user create --domain default --password-prompt placement $ openstack role add --project service --user placement admin \u521b\u5efaplacement\u670d\u52a1\u51ed\u8bc1\u53caAPI\u670d\u52a1\u7aef\u70b9\uff1a $ openstack service create --name placement --description \"Placement API\" placement $ openstack endpoint create --region RegionOne placement public http://controller:8778 $ openstack endpoint create --region RegionOne placement internal http://controller:8778 $ openstack endpoint create --region RegionOne placement admin http://controller:8778 \u5b89\u88c5\u548c\u914d\u7f6e \u5b89\u88c5\u8f6f\u4ef6\u5305\uff1a $ yum install openstack-nova-api openstack-nova-conductor \\ openstack-nova-novncproxy openstack-nova-scheduler openstack-nova-compute \\ openstack-nova-placement-api openstack-nova-console \u914d\u7f6enova\uff1a \u7f16\u8f91 /etc/nova/nova.conf \u6587\u4ef6\uff1a \u5728[default]\u90e8\u5206\uff0c\u542f\u7528\u8ba1\u7b97\u548c\u5143\u6570\u636e\u7684API\uff0c\u914d\u7f6eRabbitMQ\u6d88\u606f\u961f\u5217\u5165\u53e3\uff0c\u914d\u7f6emy_ip\uff0c\u542f\u7528\u7f51\u7edc\u670d\u52a1neutron\uff1b \u5728[api_database] [database] [placement_database]\u90e8\u5206\uff0c\u914d\u7f6e\u6570\u636e\u5e93\u5165\u53e3\uff1b \u5728[api] [keystone_authtoken]\u90e8\u5206\uff0c\u914d\u7f6e\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u5165\u53e3\uff1b \u5728[vnc]\u90e8\u5206\uff0c\u542f\u7528\u5e76\u914d\u7f6e\u8fdc\u7a0b\u63a7\u5236\u53f0\u5165\u53e3\uff1b \u5728[glance]\u90e8\u5206\uff0c\u914d\u7f6e\u955c\u50cf\u670d\u52a1API\u7684\u5730\u5740\uff1b \u5728[oslo_concurrency]\u90e8\u5206\uff0c\u914d\u7f6elock path\uff1b \u5728[placement]\u90e8\u5206\uff0c\u914d\u7f6eplacement\u670d\u52a1\u7684\u5165\u53e3\u3002 [DEFAULT] # ... enabled_apis = osapi_compute,metadata transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/ my_ip = 10.0.0.11 use_neutron = true firewall_driver = nova.virt.firewall.NoopFirewallDriver compute_driver = libvirt.LibvirtDriver instances_path = /var/lib/nova/instances/ [api_database] # ... connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api [database] # ... connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova [placement_database] # ... connection = mysql+pymysql://placement:PLACEMENT_DBPASS@controller/placement [api] # ... auth_strategy = keystone [keystone_authtoken] # ... 
www_authenticate_uri = http://controller:5000/ auth_url = http://controller:5000/ memcached_servers = controller:11211 auth_type = password project_domain_name = Default user_domain_name = Default project_name = service username = nova password = NOVA_PASS [vnc] enabled = true # ... server_listen = $my_ip server_proxyclient_address = $my_ip novncproxy_base_url = http://controller:6080/vnc_auto.html [glance] # ... api_servers = http://controller:9292 [oslo_concurrency] # ... lock_path = /var/lib/nova/tmp [placement] # ... region_name = RegionOne project_domain_name = Default project_name = service auth_type = password user_domain_name = Default auth_url = http://controller:5000/v3 username = placement password = PLACEMENT_PASS [neutron] # ... auth_url = http://controller:5000 auth_type = password project_domain_name = Default user_domain_name = Default region_name = RegionOne project_name = service username = neutron password = NEUTRON_PASS \u66ff\u6362RABBIT_PASS\u4e3aRabbitMQ\u4e2dopenstack\u8d26\u6237\u7684\u5bc6\u7801\uff1b \u914d\u7f6emy_ip\u4e3a\u63a7\u5236\u8282\u70b9\u7684\u7ba1\u7406IP\u5730\u5740\uff1b \u66ff\u6362NOVA_DBPASS\u4e3anova\u6570\u636e\u5e93\u7684\u5bc6\u7801\uff1b \u66ff\u6362PLACEMENT_DBPASS\u4e3aplacement\u6570\u636e\u5e93\u7684\u5bc6\u7801\uff1b \u66ff\u6362NOVA_PASS\u4e3anova\u7528\u6237\u7684\u5bc6\u7801\uff1b \u66ff\u6362PLACEMENT_PASS\u4e3aplacement\u7528\u6237\u7684\u5bc6\u7801\uff1b \u66ff\u6362NEUTRON_PASS\u4e3aneutron\u7528\u6237\u7684\u5bc6\u7801\uff1b \u7f16\u8f91 /etc/httpd/conf.d/00-nova-placement-api.conf \uff0c\u589e\u52a0Placement API\u63a5\u5165\u914d\u7f6e = 2.4> Require all granted Order allow,deny Allow from all \u91cd\u542fhttpd\u670d\u52a1\uff1a $ systemctl restart httpd \u540c\u6b65nova-api\u6570\u636e\u5e93\uff1a $ su -s /bin/sh -c \"nova-manage api_db sync\" nova \u6ce8\u518ccell0\u6570\u636e\u5e93\uff1a $ su -s /bin/sh -c \"nova-manage cell_v2 map_cell0\" nova \u521b\u5efacell1 cell\uff1a $ su -s /bin/sh -c \"nova-manage cell_v2 create_cell --name=cell1 --verbose\" nova \u540c\u6b65nova\u6570\u636e\u5e93\uff1a $ su -s /bin/sh -c \"nova-manage db sync\" nova \u9a8c\u8bc1cell0\u548ccell1\u6ce8\u518c\u6b63\u786e\uff1a su -s /bin/sh -c \"nova-manage cell_v2 list_cells\" nova \u786e\u5b9a\u662f\u5426\u652f\u6301\u865a\u62df\u673a\u786c\u4ef6\u52a0\u901f\uff08x86\u67b6\u6784\uff09\uff1a $ egrep -c '(vmx|svm)' /proc/cpuinfo \u5982\u679c\u8fd4\u56de\u503c\u4e3a0\u5219\u4e0d\u652f\u6301\u786c\u4ef6\u52a0\u901f\uff0c\u9700\u8981\u914d\u7f6elibvirt\u4f7f\u7528QEMU\u800c\u4e0d\u662fKVM\uff1a \u6ce8\u610f\uff1a \u5982\u679c\u662f\u5728ARM64\u7684\u670d\u52a1\u5668\u4e0a\uff0c\u8fd8\u9700\u8981\u5728\u914d\u7f6e cpu_mode \u4e3a custom , cpu_model \u4e3a cortex-a72 # vim /etc/nova/nova.conf [libvirt] # ... 
virt_type = qemu cpu_mode = custom cpu_model = cortex-a72 \u5982\u679c\u8fd4\u56de\u503c\u4e3a1\u6216\u66f4\u5927\u7684\u503c\uff0c\u5219\u652f\u6301\u786c\u4ef6\u52a0\u901f\uff0c\u4e0d\u9700\u8981\u8fdb\u884c\u989d\u5916\u7684\u914d\u7f6e \u6ce8\u610f \u5982\u679c\u4e3aarm64\u7ed3\u6784\uff0c\u8fd8\u9700\u8981\u5728 compute \u8282\u70b9\u6267\u884c\u4ee5\u4e0b\u547d\u4ee4 mkdir -p /usr/share/AAVMF ln -s /usr/share/edk2/aarch64/QEMU_EFI-pflash.raw \\ /usr/share/AAVMF/AAVMF_CODE.fd ln -s /usr/share/edk2/aarch64/vars-template-pflash.raw \\ /usr/share/AAVMF/AAVMF_VARS.fd chown nova:nova /usr/share/AAVMF -R vim /etc/libvirt/qemu.conf nvram = [\"/usr/share/AAVMF/AAVMF_CODE.fd:/usr/share/AAVMF/AAVMF_VARS.fd\", \"/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw:/usr/share/edk2/aarch64/vars-template-pflash.raw\" ] \u542f\u52a8\u8ba1\u7b97\u670d\u52a1\u53ca\u5176\u4f9d\u8d56\u9879\uff0c\u5e76\u914d\u7f6e\u5176\u5f00\u673a\u542f\u52a8\uff1a $ systemctl enable \\ openstack-nova-api.service \\ openstack-nova-scheduler.service \\ openstack-nova-conductor.service \\ openstack-nova-novncproxy.service $ systemctl start \\ openstack-nova-api.service \\ openstack-nova-scheduler.service \\ openstack-nova-conductor.service \\ openstack-nova-novncproxy.service $ systemctl enable libvirtd.service openstack-nova-compute.service $ systemctl start libvirtd.service openstack-nova-compute.service \u6dfb\u52a0\u8ba1\u7b97\u8282\u70b9\u5230cell\u6570\u636e\u5e93\uff1a \u786e\u8ba4\u8ba1\u7b97\u8282\u70b9\u5b58\u5728\uff1a $ . admin-openrc $ openstack compute service list --service nova-compute \u6ce8\u518c\u8ba1\u7b97\u8282\u70b9\uff1a $ su -s /bin/sh -c \"nova-manage cell_v2 discover_hosts --verbose\" nova \u9a8c\u8bc1 $ . admin-openrc \u5217\u51fa\u670d\u52a1\u7ec4\u4ef6\uff0c\u9a8c\u8bc1\u6bcf\u4e2a\u6d41\u7a0b\u90fd\u6210\u529f\u542f\u52a8\u548c\u6ce8\u518c\uff1a $ openstack compute service list \u5217\u51fa\u8eab\u4efd\u670d\u52a1\u4e2d\u7684API\u7aef\u70b9\uff0c\u9a8c\u8bc1\u4e0e\u8eab\u4efd\u670d\u52a1\u7684\u8fde\u63a5\uff1a $ openstack catalog list \u5217\u51fa\u955c\u50cf\u670d\u52a1\u4e2d\u7684\u955c\u50cf\uff0c\u9a8c\u8bc1\u4e0e\u955c\u50cf\u670d\u52a1\u7684\u8fde\u63a5\uff1a $ openstack image list \u68c0\u67e5cells\u548cplacement API\u662f\u5426\u8fd0\u4f5c\u6210\u529f\uff0c\u4ee5\u53ca\u5176\u4ed6\u5fc5\u8981\u6761\u4ef6\u662f\u5426\u5df2\u5177\u5907\u3002 $ nova-status upgrade check","title":"Nova \u5b89\u88c5"},{"location":"install/openEuler-20.03-LTS-SP3/OpenStack-rocky/#neutron","text":"\u521b\u5efa\u6570\u636e\u5e93\u3001\u670d\u52a1\u51ed\u8bc1\u548c API \u7aef\u70b9 \u521b\u5efa\u6570\u636e\u5e93\uff1a \u4f5c\u4e3aroot\u7528\u6237\u8bbf\u95ee\u6570\u636e\u5e93\uff0c\u521b\u5efa neutron \u6570\u636e\u5e93\u5e76\u6388\u6743\u3002 $ mysql -u root -p MariaDB [(none)]> CREATE DATABASE neutron; MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \\ IDENTIFIED BY 'NEUTRON_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \\ IDENTIFIED BY 'NEUTRON_DBPASS'; MariaDB [(none)]> exit \u66ff\u6362NEUTRON_DBPASS\uff0c\u4e3aneutron\u6570\u636e\u5e93\u8bbe\u7f6e\u5bc6\u7801\u3002 $ . 
admin-openrc \u6267\u884c\u5982\u4e0b\u547d\u4ee4\uff0c\u5b8c\u6210\u521b\u5efa neutron \u670d\u52a1\u51ed\u8bc1\u3001\u521b\u5efaneutron\u7528\u6237\u548c\u6dfb\u52a0\u2018admin\u2019\u89d2\u8272\u5230\u2018neutron\u2019\u7528\u6237\u64cd\u4f5c\u3002 \u521b\u5efaneutron\u670d\u52a1 $ openstack user create --domain default --password-prompt neutron $ openstack role add --project service --user neutron admin $ openstack service create --name neutron --description \"OpenStack Networking\" network \u521b\u5efa\u7f51\u7edc\u670d\u52a1API\u7aef\u70b9\uff1a $ openstack endpoint create --region RegionOne network public http://controller:9696 $ openstack endpoint create --region RegionOne network internal http://controller:9696 $ openstack endpoint create --region RegionOne network admin http://controller:9696 \u5b89\u88c5\u548c\u914d\u7f6e Self-service \u7f51\u7edc \u5b89\u88c5\u8f6f\u4ef6\u5305\uff1a $ yum install openstack-neutron openstack-neutron-ml2 \\ openstack-neutron-linuxbridge ebtables ipset \u914d\u7f6eneutron\uff1a \u7f16\u8f91 /etc/neutron/neutron.conf \u6587\u4ef6\uff1a \u5728[database]\u90e8\u5206\uff0c\u914d\u7f6e\u6570\u636e\u5e93\u5165\u53e3\uff1b \u5728[default]\u90e8\u5206\uff0c\u542f\u7528ml2\u63d2\u4ef6\u548crouter\u63d2\u4ef6\uff0c\u5141\u8bb8ip\u5730\u5740\u91cd\u53e0\uff0c\u914d\u7f6eRabbitMQ\u6d88\u606f\u961f\u5217\u5165\u53e3\uff1b \u5728[default] [keystone]\u90e8\u5206\uff0c\u914d\u7f6e\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u5165\u53e3\uff1b \u5728[default] [nova]\u90e8\u5206\uff0c\u914d\u7f6e\u7f51\u7edc\u6765\u901a\u77e5\u8ba1\u7b97\u7f51\u7edc\u62d3\u6251\u7684\u53d8\u5316\uff1b \u5728[oslo_concurrency]\u90e8\u5206\uff0c\u914d\u7f6elock path\u3002 [database] # ... connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron [DEFAULT] # ... core_plugin = ml2 service_plugins = router allow_overlapping_ips = true transport_url = rabbit://openstack:RABBIT_PASS@controller auth_strategy = keystone notify_nova_on_port_status_changes = true notify_nova_on_port_data_changes = true [keystone_authtoken] # ... www_authenticate_uri = http://controller:5000 auth_url = http://controller:5000 memcached_servers = controller:11211 auth_type = password project_domain_name = Default user_domain_name = Default project_name = service username = neutron password = NEUTRON_PASS [nova] # ... auth_url = http://controller:5000 auth_type = password project_domain_name = Default user_domain_name = Default region_name = RegionOne project_name = service username = nova password = NOVA_PASS [oslo_concurrency] # ... 
lock_path = /var/lib/neutron/tmp \u66ff\u6362NEUTRON_DBPASS\u4e3aneutron\u6570\u636e\u5e93\u7684\u5bc6\u7801\uff1b \u66ff\u6362RABBIT_PASS\u4e3aRabbitMQ\u4e2dopenstack\u8d26\u6237\u7684\u5bc6\u7801\uff1b \u66ff\u6362NEUTRON_PASS\u4e3aneutron\u7528\u6237\u7684\u5bc6\u7801\uff1b \u66ff\u6362NOVA_PASS\u4e3anova\u7528\u6237\u7684\u5bc6\u7801\u3002 \u914d\u7f6eML2\u63d2\u4ef6\uff1a \u7f16\u8f91 /etc/neutron/plugins/ml2/ml2_conf.ini \u6587\u4ef6\uff1a \u5728[ml2]\u90e8\u5206\uff0c\u542f\u7528 flat\u3001vlan\u3001vxlan \u7f51\u7edc\uff0c\u542f\u7528\u7f51\u6865\u53ca layer-2 population \u673a\u5236\uff0c\u542f\u7528\u7aef\u53e3\u5b89\u5168\u6269\u5c55\u9a71\u52a8\uff1b \u5728[ml2_type_flat]\u90e8\u5206\uff0c\u914d\u7f6e flat \u7f51\u7edc\u4e3a provider \u865a\u62df\u7f51\u7edc\uff1b \u5728[ml2_type_vxlan]\u90e8\u5206\uff0c\u914d\u7f6e VXLAN \u7f51\u7edc\u6807\u8bc6\u7b26\u8303\u56f4\uff1b \u5728[securitygroup]\u90e8\u5206\uff0c\u914d\u7f6e\u5141\u8bb8 ipset\u3002 # vim /etc/neutron/plugins/ml2/ml2_conf.ini [ml2] # ... type_drivers = flat,vlan,vxlan tenant_network_types = vxlan mechanism_drivers = linuxbridge,l2population extension_drivers = port_security [ml2_type_flat] # ... flat_networks = provider [ml2_type_vxlan] # ... vni_ranges = 1:1000 [securitygroup] # ... enable_ipset = true \u914d\u7f6e Linux bridge \u4ee3\u7406\uff1a \u7f16\u8f91 /etc/neutron/plugins/ml2/linuxbridge_agent.ini \u6587\u4ef6\uff1a \u5728[linux_bridge]\u90e8\u5206\uff0c\u6620\u5c04 provider \u865a\u62df\u7f51\u7edc\u5230\u7269\u7406\u7f51\u7edc\u63a5\u53e3\uff1b \u5728[vxlan]\u90e8\u5206\uff0c\u542f\u7528 vxlan \u8986\u76d6\u7f51\u7edc\uff0c\u914d\u7f6e\u5904\u7406\u8986\u76d6\u7f51\u7edc\u7684\u7269\u7406\u7f51\u7edc\u63a5\u53e3 IP \u5730\u5740\uff0c\u542f\u7528 layer-2 population\uff1b \u5728[securitygroup]\u90e8\u5206\uff0c\u5141\u8bb8\u5b89\u5168\u7ec4\uff0c\u914d\u7f6e linux bridge iptables \u9632\u706b\u5899\u9a71\u52a8\u3002 [linux_bridge] physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME [vxlan] enable_vxlan = true local_ip = OVERLAY_INTERFACE_IP_ADDRESS l2_population = true [securitygroup] # ... enable_security_group = true firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver \u66ff\u6362PROVIDER_INTERFACE_NAME\u4e3a\u7269\u7406\u7f51\u7edc\u63a5\u53e3\uff1b \u66ff\u6362OVERLAY_INTERFACE_IP_ADDRESS\u4e3a\u63a7\u5236\u8282\u70b9\u7684\u7ba1\u7406IP\u5730\u5740\u3002 \u914d\u7f6eLayer-3\u4ee3\u7406\uff1a \u7f16\u8f91 /etc/neutron/l3_agent.ini \u6587\u4ef6\uff1a \u5728[default]\u90e8\u5206\uff0c\u914d\u7f6e\u63a5\u53e3\u9a71\u52a8\u4e3alinuxbridge [DEFAULT] # ... interface_driver = linuxbridge \u914d\u7f6eDHCP\u4ee3\u7406\uff1a \u7f16\u8f91 /etc/neutron/dhcp_agent.ini \u6587\u4ef6\uff1a \u5728[default]\u90e8\u5206\uff0c\u914d\u7f6elinuxbridge\u63a5\u53e3\u9a71\u52a8\u3001Dnsmasq DHCP\u9a71\u52a8\uff0c\u542f\u7528\u9694\u79bb\u7684\u5143\u6570\u636e\u3002 [DEFAULT] # ... interface_driver = linuxbridge dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq enable_isolated_metadata = true \u914d\u7f6emetadata\u4ee3\u7406\uff1a \u7f16\u8f91 /etc/neutron/metadata_agent.ini \u6587\u4ef6\uff1a \u5728[default]\u90e8\u5206\uff0c\u914d\u7f6e\u5143\u6570\u636e\u4e3b\u673a\u548cshared secret\u3002 [DEFAULT] # ... 
nova_metadata_host = controller metadata_proxy_shared_secret = METADATA_SECRET \u66ff\u6362METADATA_SECRET\u4e3a\u5408\u9002\u7684\u5143\u6570\u636e\u4ee3\u7406secret\u3002 \u914d\u7f6e\u8ba1\u7b97\u670d\u52a1 \u7f16\u8f91 /etc/nova/nova.conf \u6587\u4ef6\uff1a \u5728[neutron]\u90e8\u5206\uff0c\u914d\u7f6e\u8bbf\u95ee\u53c2\u6570\uff0c\u542f\u7528\u5143\u6570\u636e\u4ee3\u7406\uff0c\u914d\u7f6esecret\u3002 [neutron] # ... auth_url = http://controller:5000 auth_type = password project_domain_name = Default user_domain_name = Default region_name = RegionOne project_name = service username = neutron password = NEUTRON_PASS service_metadata_proxy = true metadata_proxy_shared_secret = METADATA_SECRET \u66ff\u6362NEUTRON_PASS\u4e3aneutron\u7528\u6237\u7684\u5bc6\u7801\uff1b \u66ff\u6362METADATA_SECRET\u4e3a\u5408\u9002\u7684\u5143\u6570\u636e\u4ee3\u7406secret\u3002 \u5b8c\u6210\u5b89\u88c5 \u6dfb\u52a0\u914d\u7f6e\u6587\u4ef6\u94fe\u63a5\uff1a $ ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini \u540c\u6b65\u6570\u636e\u5e93\uff1a $ su -s /bin/sh -c \"neutron-db-manage --config-file /etc/neutron/neutron.conf \\ --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head\" neutron \u91cd\u542f\u8ba1\u7b97API\u670d\u52a1\uff1a $ systemctl restart openstack-nova-api.service \u542f\u52a8\u7f51\u7edc\u670d\u52a1\u5e76\u914d\u7f6e\u5f00\u673a\u542f\u52a8\uff1a $ systemctl enable neutron-server.service \\ neutron-linuxbridge-agent.service neutron-dhcp-agent.service \\ neutron-metadata-agent.service $ systemctl start neutron-server.service \\ neutron-linuxbridge-agent.service neutron-dhcp-agent.service \\ neutron-metadata-agent.service $ systemctl enable neutron-l3-agent.service $ systemctl start neutron-l3-agent.service \u9a8c\u8bc1 \u5217\u51fa\u4ee3\u7406\u9a8c\u8bc1 neutron \u4ee3\u7406\u542f\u52a8\u6210\u529f\uff1a $ openstack network agent list","title":"Neutron \u5b89\u88c5"},{"location":"install/openEuler-20.03-LTS-SP3/OpenStack-rocky/#cinder","text":"\u521b\u5efa\u6570\u636e\u5e93\u3001\u670d\u52a1\u51ed\u8bc1\u548c API \u7aef\u70b9 \u521b\u5efa\u6570\u636e\u5e93\uff1a \u4f5c\u4e3aroot\u7528\u6237\u8bbf\u95ee\u6570\u636e\u5e93\uff0c\u521b\u5efacinder\u6570\u636e\u5e93\u5e76\u6388\u6743\u3002 $ mysql -u root -p MariaDB [(none)]> CREATE DATABASE cinder; MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \\ IDENTIFIED BY 'CINDER_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \\ IDENTIFIED BY 'CINDER_DBPASS'; MariaDB [(none)]> exit \u66ff\u6362CINDER_DBPASS\uff0c\u4e3acinder\u6570\u636e\u5e93\u8bbe\u7f6e\u5bc6\u7801\u3002 $ source admin-openrc \u521b\u5efacinder\u670d\u52a1\u51ed\u8bc1\uff1a \u521b\u5efacinder\u7528\u6237 \u6dfb\u52a0\u2018admin\u2019\u89d2\u8272\u5230\u7528\u6237\u2018cinder\u2019 \u521b\u5efacinderv2\u548ccinderv3\u670d\u52a1 $ openstack user create --domain default --password-prompt cinder $ openstack role add --project service --user cinder admin $ openstack service create --name cinderv2 --description \"OpenStack Block Storage\" volumev2 $ openstack service create --name cinderv3 --description \"OpenStack Block Storage\" volumev3 \u521b\u5efa\u5757\u5b58\u50a8\u670d\u52a1API\u7aef\u70b9\uff1a $ openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\\(project_id\\)s $ openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\\(project_id\\)s $ openstack endpoint create --region RegionOne volumev2 admin 
http://controller:8776/v2/%\\(project_id\\)s $ openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\\(project_id\\)s $ openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\\(project_id\\)s $ openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\\(project_id\\)s \u5b89\u88c5\u548c\u914d\u7f6e\u63a7\u5236\u8282\u70b9 \u5b89\u88c5\u8f6f\u4ef6\u5305\uff1a $ yum install openstack-cinder \u914d\u7f6ecinder\uff1a \u7f16\u8f91 /etc/cinder/cinder.conf \u6587\u4ef6\uff1a \u5728[database]\u90e8\u5206\uff0c\u914d\u7f6e\u6570\u636e\u5e93\u5165\u53e3\uff1b \u5728[DEFAULT]\u90e8\u5206\uff0c\u914d\u7f6eRabbitMQ\u6d88\u606f\u961f\u5217\u5165\u53e3\uff0c\u914d\u7f6emy_ip\uff1b \u5728[DEFAULT] [keystone_authtoken]\u90e8\u5206\uff0c\u914d\u7f6e\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u5165\u53e3\uff1b \u5728[oslo_concurrency]\u90e8\u5206\uff0c\u914d\u7f6elock path\u3002 [database] # ... connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder [DEFAULT] # ... transport_url = rabbit://openstack:RABBIT_PASS@controller auth_strategy = keystone my_ip = 10.0.0.11 [keystone_authtoken] # ... www_authenticate_uri = http://controller:5000 auth_url = http://controller:5000 memcached_servers = controller:11211 auth_type = password project_domain_name = Default user_domain_name = Default project_name = service username = cinder password = CINDER_PASS [oslo_concurrency] # ... lock_path = /var/lib/cinder/tmp \u66ff\u6362CINDER_DBPASS\u4e3acinder\u6570\u636e\u5e93\u7684\u5bc6\u7801\uff1b \u66ff\u6362RABBIT_PASS\u4e3aRabbitMQ\u4e2dopenstack\u8d26\u6237\u7684\u5bc6\u7801\uff1b \u914d\u7f6emy_ip\u4e3a\u63a7\u5236\u8282\u70b9\u7684\u7ba1\u7406IP\u5730\u5740\uff1b \u66ff\u6362CINDER_PASS\u4e3acinder\u7528\u6237\u7684\u5bc6\u7801\uff1b \u540c\u6b65\u6570\u636e\u5e93\uff1a $ su -s /bin/sh -c \"cinder-manage db sync\" cinder \u914d\u7f6e\u8ba1\u7b97\u4f7f\u7528\u5757\u5b58\u50a8\uff1a \u7f16\u8f91 /etc/nova/nova.conf \u6587\u4ef6\u3002 [cinder] os_region_name = RegionOne \u5b8c\u6210\u5b89\u88c5\uff1a \u91cd\u542f\u8ba1\u7b97API\u670d\u52a1 $ systemctl restart openstack-nova-api.service \u542f\u52a8\u5757\u5b58\u50a8\u670d\u52a1 $ systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service $ systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service \u5b89\u88c5\u548c\u914d\u7f6e\u5b58\u50a8\u8282\u70b9\uff08LVM\uff09 \u5b89\u88c5\u8f6f\u4ef6\u5305\uff1a $ yum install lvm2 device-mapper-persistent-data scsi-target-utils python2-keystone \\ openstack-cinder-volume \u521b\u5efaLVM\u7269\u7406\u5377 /dev/sdb\uff1a $ pvcreate /dev/sdb \u521b\u5efaLVM\u5377\u7ec4 cinder-volumes\uff1a $ vgcreate cinder-volumes /dev/sdb \u7f16\u8f91 /etc/lvm/lvm.conf \u6587\u4ef6\uff1a \u5728devices\u90e8\u5206\uff0c\u6dfb\u52a0\u8fc7\u6ee4\u4ee5\u63a5\u53d7/dev/sdb\u8bbe\u5907\u62d2\u7edd\u5176\u4ed6\u8bbe\u5907\u3002 devices { # ... filter = [ \"a/sdb/\", \"r/.*/\"] \u7f16\u8f91 /etc/cinder/cinder.conf \u6587\u4ef6\uff1a \u5728[lvm]\u90e8\u5206\uff0c\u4f7f\u7528LVM\u9a71\u52a8\u3001cinder-volumes\u5377\u7ec4\u3001iSCSI\u534f\u8bae\u548c\u9002\u5f53\u7684iSCSI\u670d\u52a1\u914d\u7f6eLVM\u540e\u7aef\u3002 \u5728[DEFAULT]\u90e8\u5206\uff0c\u542f\u7528LVM\u540e\u7aef\uff0c\u914d\u7f6e\u955c\u50cf\u670d\u52a1API\u7684\u4f4d\u7f6e\u3002 [lvm] volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver volume_group = cinder-volumes target_protocol = iscsi target_helper = lioadm [DEFAULT] # ... 
enabled_backends = lvm glance_api_servers = http://controller:9292 \u6ce8\u610f \u5f53cinder\u4f7f\u7528tgtadm\u7684\u65b9\u5f0f\u6302\u5377\u7684\u65f6\u5019\uff0c\u8981\u4fee\u6539/etc/tgt/tgtd.conf\uff0c\u5185\u5bb9\u5982\u4e0b\uff0c\u4fdd\u8bc1tgtd\u53ef\u4ee5\u53d1\u73b0cinder-volume\u7684iscsi target\u3002 include /var/lib/cinder/volumes/* \u5b8c\u6210\u5b89\u88c5\uff1a $ systemctl enable openstack-cinder-volume.service tgtd.service iscsid.service $ systemctl start openstack-cinder-volume.service tgtd.service iscsid.service \u5b89\u88c5\u548c\u914d\u7f6e\u5b58\u50a8\u8282\u70b9\uff08ceph RBD\uff09 \u5b89\u88c5\u8f6f\u4ef6\u5305\uff1a $ yum install ceph-common python2-rados python2-rbd python2-keystone openstack-cinder-volume \u5728[DEFAULT]\u90e8\u5206\uff0c\u542f\u7528LVM\u540e\u7aef\uff0c\u914d\u7f6e\u955c\u50cf\u670d\u52a1API\u7684\u4f4d\u7f6e\u3002 [DEFAULT] enabled_backends = ceph-rbd \u6dfb\u52a0ceph rbd\u914d\u7f6e\u90e8\u5206\uff0c\u914d\u7f6e\u5757\u547d\u540d\u4e0eenabled_backends\u4e2d\u4fdd\u6301\u4e00\u81f4 [ceph-rbd] glance_api_version = 2 rados_connect_timeout = -1 rbd_ceph_conf = /etc/ceph/ceph.conf rbd_flatten_volume_from_snapshot = False rbd_max_clone_depth = 5 rbd_pool = # RBD\u5b58\u50a8\u6c60\u540d\u79f0 rbd_secret_uuid = # \u968f\u673a\u751f\u6210SECRET UUID rbd_store_chunk_size = 4 rbd_user = volume_backend_name = ceph-rbd volume_driver = cinder.volume.drivers.rbd.RBDDriver \u914d\u7f6e\u5b58\u50a8\u8282\u70b9ceph\u5ba2\u6237\u7aef\uff0c\u9700\u8981\u4fdd\u8bc1/etc/ceph/\u76ee\u5f55\u4e2d\u5305\u542bceph\u96c6\u7fa4\u8bbf\u95ee\u914d\u7f6e\uff0c\u5305\u62ecceph.conf\u4ee5\u53cakeyring [root@openeuler ~]# ll /etc/ceph -rw-r--r-- 1 root root 82 Jun 16 17:11 ceph.client..keyring -rw-r--r-- 1 root root 1.5K Jun 16 17:11 ceph.conf -rw-r--r-- 1 root root 92 Jun 16 17:11 rbdmap \u5728\u5b58\u50a8\u8282\u70b9\u68c0\u67e5ceph\u96c6\u7fa4\u662f\u5426\u6b63\u5e38\u53ef\u8bbf\u95ee [root@openeuler ~]# ceph --user cinder -s cluster: id: b7b2fac6-420f-4ec1-aea2-4862d29b4059 health: HEALTH_OK services: mon: 3 daemons, quorum VIRT01,VIRT02,VIRT03 mgr: VIRT03(active), standbys: VIRT02, VIRT01 mds: cephfs_virt-1/1/1 up {0=VIRT03=up:active}, 2 up:standby osd: 15 osds: 15 up, 15 in data: pools: 7 pools, 1416 pgs objects: 5.41M objects, 19.8TiB usage: 49.3TiB used, 59.9TiB / 109TiB avail pgs: 1414 active io: client: 2.73MiB/s rd, 22.4MiB/s wr, 3.21kop/s rd, 1.19kop/s wr \u542f\u52a8\u670d\u52a1 $ systemctl enable openstack-cinder-volume.service $ systemctl start openstack-cinder-volume.service \u5b89\u88c5\u548c\u914d\u7f6e\u5907\u4efd\u670d\u52a1 \u7f16\u8f91 /etc/cinder/cinder.conf \u6587\u4ef6\uff1a \u5728[DEFAULT]\u90e8\u5206\uff0c\u914d\u7f6e\u5907\u4efd\u9009\u9879 [DEFAULT] # ... 
# \u6ce8\u610f: openEuler 21.03\u4e2d\u6ca1\u6709\u63d0\u4f9bOpenStack Swift\u8f6f\u4ef6\u5305\uff0c\u9700\u8981\u7528\u6237\u81ea\u884c\u5b89\u88c5\u3002\u6216\u8005\u4f7f\u7528\u5176\u4ed6\u7684\u5907\u4efd\u540e\u7aef\uff0c\u4f8b\u5982\uff0cNFS\u3002NFS\u5df2\u7ecf\u8fc7\u6d4b\u8bd5\u9a8c\u8bc1\uff0c\u53ef\u4ee5\u6b63\u5e38\u4f7f\u7528\u3002 backup_driver = cinder.backup.drivers.swift.SwiftBackupDriver backup_swift_url = SWIFT_URL \u66ff\u6362SWIFT_URL\u4e3a\u5bf9\u8c61\u5b58\u50a8\u670d\u52a1\u7684URL\uff0c\u8be5URL\u53ef\u4ee5\u901a\u8fc7\u5bf9\u8c61\u5b58\u50a8API\u7aef\u70b9\u627e\u5230\uff1a $ openstack catalog show object-store \u5b8c\u6210\u5b89\u88c5\uff1a $ systemctl enable openstack-cinder-backup.service $ systemctl start openstack-cinder-backup.service \u9a8c\u8bc1 \u5217\u51fa\u670d\u52a1\u7ec4\u4ef6\u9a8c\u8bc1\u6bcf\u4e2a\u6b65\u9aa4\u6210\u529f\uff1a $ source admin-openrc $ openstack volume service list \u6ce8\uff1a\u76ee\u524d\u6682\u672a\u5bf9swift\u7ec4\u4ef6\u8fdb\u884c\u652f\u6301\uff0c\u6709\u6761\u4ef6\u7684\u540c\u5b66\u53ef\u4ee5\u914d\u7f6e\u5bf9\u63a5ceph\u3002","title":"Cinder \u5b89\u88c5"},{"location":"install/openEuler-20.03-LTS-SP3/OpenStack-rocky/#horizon","text":"\u5b89\u88c5\u8f6f\u4ef6\u5305 $ yum install openstack-dashboard 2. \u4fee\u6539\u6587\u4ef6 /usr/share/openstack-dashboard/openstack_dashboard/local/local_settings.py \u4fee\u6539\u53d8\u91cf ALLOWED_HOSTS = ['*', ] OPENSTACK_HOST = \"controller\" OPENSTACK_KEYSTONE_URL = \"http://%s:5000/v3\" % OPENSTACK_HOST OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True SESSION_ENGINE = 'django.contrib.sessions.backends.cache' CACHES = { 'default': { 'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache', 'LOCATION': 'controller:11211', } } \u65b0\u589e\u53d8\u91cf OPENSTACK_API_VERSIONS = { \"identity\": 3, \"image\": 2, \"volume\": 3, } WEBROOT = \"/dashboard/\" COMPRESS_OFFLINE = True OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = \"default\" OPENSTACK_KEYSTONE_DEFAULT_ROLE = \"admin\" LOGIN_URL = '/dashboard/auth/login/' LOGOUT_URL = '/dashboard/auth/logout/' 3. \u4fee\u6539\u6587\u4ef6/etc/httpd/conf.d/openstack-dashboard.conf WSGIDaemonProcess dashboard WSGIProcessGroup dashboard WSGISocketPrefix run/wsgi WSGIApplicationGroup %{GLOBAL} WSGIScriptAlias /dashboard /usr/share/openstack-dashboard/openstack_dashboard/wsgi/django.wsgi Alias /dashboard/static /usr/share/openstack-dashboard/static Options All AllowOverride All Require all granted Options All AllowOverride All Require all granted 4. \u5728/usr/share/openstack-dashboard\u76ee\u5f55\u4e0b\u6267\u884c $ ./manage.py compress 5. \u91cd\u542f httpd \u670d\u52a1 $ systemctl restart httpd 5. \u9a8c\u8bc1 \u6253\u5f00\u6d4f\u89c8\u5668\uff0c\u8f93\u5165\u7f51\u5740http:// \uff0c\u767b\u5f55 horizon\u3002","title":"Horizon \u5b89\u88c5"},{"location":"install/openEuler-20.03-LTS-SP3/OpenStack-rocky/#tempest","text":"Tempest\u662fOpenStack\u7684\u96c6\u6210\u6d4b\u8bd5\u670d\u52a1\uff0c\u5982\u679c\u7528\u6237\u9700\u8981\u5168\u9762\u81ea\u52a8\u5316\u6d4b\u8bd5\u5df2\u5b89\u88c5\u7684OpenStack\u73af\u5883\u7684\u529f\u80fd,\u5219\u63a8\u8350\u4f7f\u7528\u8be5\u7ec4\u4ef6\u3002\u5426\u5219\uff0c\u53ef\u4ee5\u4e0d\u7528\u5b89\u88c5 \u5b89\u88c5Tempest $ yum install openstack-tempest \u521d\u59cb\u5316\u76ee\u5f55 $ tempest init mytest 3. 
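The step that follows edits `etc/tempest.conf` with the details of the deployed cloud, and the guide defers the exact contents to the official sample. As rough orientation only, a minimal fragment might look like the sketch below; the option names are assumptions based on upstream Tempest defaults rather than values taken from this guide, and ADMIN_PASS stands for the admin password set during the Keystone installation:

```shell
# A minimal sketch only; in practice these values are normally set by editing
# mytest/etc/tempest.conf directly, as described in the next step.
cat >> mytest/etc/tempest.conf << 'EOF'
[auth]
admin_username = admin
admin_password = ADMIN_PASS
admin_project_name = admin
admin_domain_name = Default

[identity]
uri_v3 = http://controller:5000/v3
auth_version = v3
EOF
```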
\u4fee\u6539\u914d\u7f6e\u6587\u4ef6\u3002 $ cd mytest $ vi etc/tempest.conf tempest.conf\u4e2d\u9700\u8981\u914d\u7f6e\u5f53\u524dOpenStack\u73af\u5883\u7684\u4fe1\u606f\uff0c\u5177\u4f53\u5185\u5bb9\u53ef\u4ee5\u53c2\u8003 \u5b98\u65b9\u793a\u4f8b \u6267\u884c\u6d4b\u8bd5 $ tempest run","title":"Tempest \u5b89\u88c5"},{"location":"install/openEuler-20.03-LTS-SP3/OpenStack-rocky/#ironic","text":"Ironic\u662fOpenStack\u7684\u88f8\u91d1\u5c5e\u670d\u52a1\uff0c\u5982\u679c\u7528\u6237\u9700\u8981\u8fdb\u884c\u88f8\u673a\u90e8\u7f72\u5219\u63a8\u8350\u4f7f\u7528\u8be5\u7ec4\u4ef6\u3002\u5426\u5219\uff0c\u53ef\u4ee5\u4e0d\u7528\u5b89\u88c5\u3002 \u8bbe\u7f6e\u6570\u636e\u5e93 \u88f8\u91d1\u5c5e\u670d\u52a1\u5728\u6570\u636e\u5e93\u4e2d\u5b58\u50a8\u4fe1\u606f\uff0c\u521b\u5efa\u4e00\u4e2a ironic \u7528\u6237\u53ef\u4ee5\u8bbf\u95ee\u7684 ironic \u6570\u636e\u5e93\uff0c\u66ff\u6362 IRONIC_DBPASSWORD \u4e3a\u5408\u9002\u7684\u5bc6\u7801 $ mysql -u root -p MariaDB [(none)]> CREATE DATABASE ironic CHARACTER SET utf8; MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'localhost' \\ IDENTIFIED BY 'IRONIC_DBPASSWORD'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'%' \\ IDENTIFIED BY 'IRONIC_DBPASSWORD'; \u5b89\u88c5\u8f6f\u4ef6\u5305 yum install openstack-ironic-api openstack-ironic-conductor python2-ironicclient \u542f\u52a8\u670d\u52a1 systemctl enable openstack-ironic-api openstack-ironic-conductor systemctl start openstack-ironic-api openstack-ironic-conductor \u7ec4\u4ef6\u5b89\u88c5\u4e0e\u914d\u7f6e ##### \u521b\u5efa\u670d\u52a1\u7528\u6237\u8ba4\u8bc1 1\u3001\u521b\u5efaBare Metal\u670d\u52a1\u7528\u6237 $ openstack user create --password IRONIC_PASSWORD \\ --email ironic@example.com ironic $ openstack role add --project service --user ironic admin $ openstack service create --name ironic --description \\ \"Ironic baremetal provisioning service\" baremetal 2\u3001\u521b\u5efaBare Metal\u670d\u52a1\u8bbf\u95ee\u5165\u53e3 $ openstack endpoint create --region RegionOne baremetal admin http://$IRONIC_NODE:6385 $ openstack endpoint create --region RegionOne baremetal public http://$IRONIC_NODE:6385 $ openstack endpoint create --region RegionOne baremetal internal http://$IRONIC_NODE:6385 ##### \u914d\u7f6eironic-api\u670d\u52a1 \u914d\u7f6e\u6587\u4ef6\u8def\u5f84/etc/ironic/ironic.conf 1\u3001\u901a\u8fc7 connection \u9009\u9879\u914d\u7f6e\u6570\u636e\u5e93\u7684\u4f4d\u7f6e\uff0c\u5982\u4e0b\u6240\u793a\uff0c\u66ff\u6362 IRONIC_DBPASSWORD \u4e3a ironic \u7528\u6237\u7684\u5bc6\u7801\uff0c\u66ff\u6362 DB_IP \u4e3aDB\u670d\u52a1\u5668\u6240\u5728\u7684IP\u5730\u5740\uff1a [database] # The SQLAlchemy connection string used to connect to the # database (string value) connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic 2\u3001\u901a\u8fc7\u4ee5\u4e0b\u9009\u9879\u914d\u7f6eironic-api\u670d\u52a1\u4f7f\u7528RabbitMQ\u6d88\u606f\u4ee3\u7406\uff0c\u66ff\u6362 RPC_* \u4e3aRabbitMQ\u7684\u8be6\u7ec6\u5730\u5740\u548c\u51ed\u8bc1 [DEFAULT] # A URL representing the messaging driver to use and its full # configuration. 
(string value) transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/ \u7528\u6237\u4e5f\u53ef\u81ea\u884c\u4f7f\u7528json-rpc\u65b9\u5f0f\u66ff\u6362rabbitmq 3\u3001\u914d\u7f6eironic-api\u670d\u52a1\u4f7f\u7528\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u7684\u51ed\u8bc1\uff0c\u66ff\u6362 PUBLIC_IDENTITY_IP \u4e3a\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u5668\u7684\u516c\u5171IP\uff0c\u66ff\u6362 PRIVATE_IDENTITY_IP \u4e3a\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u5668\u7684\u79c1\u6709IP\uff0c\u66ff\u6362 IRONIC_PASSWORD \u4e3a\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u4e2d ironic \u7528\u6237\u7684\u5bc6\u7801\uff1a [DEFAULT] # Authentication strategy used by ironic-api: one of # \"keystone\" or \"noauth\". \"noauth\" should not be used in a # production environment because all authentication will be # disabled. (string value) auth_strategy=keystone force_config_drive = True [keystone_authtoken] # Authentication type to load (string value) auth_type=password # Complete public Identity API endpoint (string value) www_authenticate_uri=http://PUBLIC_IDENTITY_IP:5000 # Complete admin Identity API endpoint. (string value) auth_url=http://PRIVATE_IDENTITY_IP:5000 # Service username. (string value) username=ironic # Service account password. (string value) password=IRONIC_PASSWORD # Service tenant name. (string value) project_name=service # Domain name containing project (string value) project_domain_name=Default # User's domain name (string value) user_domain_name=Default 4\u3001\u9700\u8981\u5728\u914d\u7f6e\u6587\u4ef6\u4e2d\u6307\u5b9aironic\u65e5\u5fd7\u76ee\u5f55 [DEFAULT] log_dir = /var/log/ironic/ 5\u3001\u521b\u5efa\u88f8\u91d1\u5c5e\u670d\u52a1\u6570\u636e\u5e93\u8868 $ ironic-dbsync --config-file /etc/ironic/ironic.conf create_schema 6\u3001\u91cd\u542fironic-api\u670d\u52a1 $ systemctl restart openstack-ironic-api ##### \u914d\u7f6eironic-conductor\u670d\u52a1 1\u3001\u66ff\u6362 HOST_IP \u4e3aconductor host\u7684IP [DEFAULT] # IP address of this host. If unset, will determine the IP # programmatically. If unable to do so, will use \"127.0.0.1\". # (string value) my_ip=HOST_IP 2\u3001\u914d\u7f6e\u6570\u636e\u5e93\u7684\u4f4d\u7f6e\uff0cironic-conductor\u5e94\u8be5\u4f7f\u7528\u548cironic-api\u76f8\u540c\u7684\u914d\u7f6e\u3002\u66ff\u6362 IRONIC_DBPASSWORD \u4e3a ironic \u7528\u6237\u7684\u5bc6\u7801\uff0c\u66ff\u6362DB_IP\u4e3aDB\u670d\u52a1\u5668\u6240\u5728\u7684IP\u5730\u5740\uff1a [database] # The SQLAlchemy connection string to use to connect to the # database. (string value) connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic 3\u3001\u901a\u8fc7\u4ee5\u4e0b\u9009\u9879\u914d\u7f6eironic-api\u670d\u52a1\u4f7f\u7528RabbitMQ\u6d88\u606f\u4ee3\u7406\uff0cironic-conductor\u5e94\u8be5\u4f7f\u7528\u548cironic-api\u76f8\u540c\u7684\u914d\u7f6e\uff0c\u66ff\u6362 RPC_* \u4e3aRabbitMQ\u7684\u8be6\u7ec6\u5730\u5740\u548c\u51ed\u8bc1 [DEFAULT] # A URL representing the messaging driver to use and its full # configuration. 
(string value) transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/ \u7528\u6237\u4e5f\u53ef\u81ea\u884c\u4f7f\u7528json-rpc\u65b9\u5f0f\u66ff\u6362rabbitmq 4\u3001\u914d\u7f6e\u51ed\u8bc1\u8bbf\u95ee\u5176\u4ed6OpenStack\u670d\u52a1 \u4e3a\u4e86\u4e0e\u5176\u4ed6OpenStack\u670d\u52a1\u8fdb\u884c\u901a\u4fe1\uff0c\u88f8\u91d1\u5c5e\u670d\u52a1\u5728\u8bf7\u6c42\u5176\u4ed6\u670d\u52a1\u65f6\u9700\u8981\u4f7f\u7528\u670d\u52a1\u7528\u6237\u4e0eOpenStack Identity\u670d\u52a1\u8fdb\u884c\u8ba4\u8bc1\u3002\u8fd9\u4e9b\u7528\u6237\u7684\u51ed\u636e\u5fc5\u987b\u5728\u4e0e\u76f8\u5e94\u670d\u52a1\u76f8\u5173\u7684\u6bcf\u4e2a\u914d\u7f6e\u6587\u4ef6\u4e2d\u8fdb\u884c\u914d\u7f6e\u3002 [neutron] - \u8bbf\u95eeOpenstack\u7f51\u7edc\u670d\u52a1 [glance] - \u8bbf\u95eeOpenstack\u955c\u50cf\u670d\u52a1 [swift] - \u8bbf\u95eeOpenstack\u5bf9\u8c61\u5b58\u50a8\u670d\u52a1 [cinder] - \u8bbf\u95eeOpenstack\u5757\u5b58\u50a8\u670d\u52a1 [inspector] - \u8bbf\u95eeOpenstack\u88f8\u91d1\u5c5eintrospection\u670d\u52a1 [service_catalog] - \u4e00\u4e2a\u7279\u6b8a\u9879\u7528\u4e8e\u4fdd\u5b58\u88f8\u91d1\u5c5e\u670d\u52a1\u4f7f\u7528\u7684\u51ed\u8bc1\uff0c\u8be5\u51ed\u8bc1\u7528\u4e8e\u53d1\u73b0\u6ce8\u518c\u5728Openstack\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u76ee\u5f55\u4e2d\u7684\u81ea\u5df1\u7684API URL\u7aef\u70b9 \u7b80\u5355\u8d77\u89c1\uff0c\u53ef\u4ee5\u5bf9\u6240\u6709\u670d\u52a1\u4f7f\u7528\u540c\u4e00\u4e2a\u670d\u52a1\u7528\u6237\u3002\u4e3a\u4e86\u5411\u540e\u517c\u5bb9\uff0c\u8be5\u7528\u6237\u5e94\u8be5\u548cironic-api\u670d\u52a1\u7684[keystone_authtoken]\u6240\u914d\u7f6e\u7684\u4e3a\u540c\u4e00\u4e2a\u7528\u6237\u3002\u4f46\u8fd9\u4e0d\u662f\u5fc5\u987b\u7684\uff0c\u4e5f\u53ef\u4ee5\u4e3a\u6bcf\u4e2a\u670d\u52a1\u521b\u5efa\u5e76\u914d\u7f6e\u4e0d\u540c\u7684\u670d\u52a1\u7528\u6237\u3002 \u5728\u4e0b\u9762\u7684\u793a\u4f8b\u4e2d\uff0c\u7528\u6237\u8bbf\u95eeopenstack\u7f51\u7edc\u670d\u52a1\u7684\u8eab\u4efd\u9a8c\u8bc1\u4fe1\u606f\u914d\u7f6e\u4e3a\uff1a \u7f51\u7edc\u670d\u52a1\u90e8\u7f72\u5728\u540d\u4e3aRegionOne\u7684\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u57df\u4e2d\uff0c\u4ec5\u5728\u670d\u52a1\u76ee\u5f55\u4e2d\u6ce8\u518c\u516c\u5171\u7aef\u70b9\u63a5\u53e3 \u8bf7\u6c42\u65f6\u4f7f\u7528\u7279\u5b9a\u7684CA SSL\u8bc1\u4e66\u8fdb\u884cHTTPS\u8fde\u63a5 \u4e0eironic-api\u670d\u52a1\u914d\u7f6e\u76f8\u540c\u7684\u670d\u52a1\u7528\u6237 \u52a8\u6001\u5bc6\u7801\u8ba4\u8bc1\u63d2\u4ef6\u57fa\u4e8e\u5176\u4ed6\u9009\u9879\u53d1\u73b0\u5408\u9002\u7684\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1API\u7248\u672c [neutron] # Authentication type to load (string value) auth_type = password # Authentication URL (string value) auth_url=https://IDENTITY_IP:5000/ # Username (string value) username=ironic # User's password (string value) password=IRONIC_PASSWORD # Project name to scope to (string value) project_name=service # Domain ID containing project (string value) project_domain_id=default # User's domain id (string value) user_domain_id=default # PEM encoded Certificate Authority to use when verifying # HTTPs connections. (string value) cafile=/opt/stack/data/ca-bundle.pem # The default region_name for endpoint URL discovery. (string # value) region_name = RegionOne # List of interfaces, in order of preference, for endpoint # URL. 
(list value) valid_interfaces=public \u9ed8\u8ba4\u60c5\u51b5\u4e0b\uff0c\u4e3a\u4e86\u4e0e\u5176\u4ed6\u670d\u52a1\u8fdb\u884c\u901a\u4fe1\uff0c\u88f8\u91d1\u5c5e\u670d\u52a1\u4f1a\u5c1d\u8bd5\u901a\u8fc7\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u7684\u670d\u52a1\u76ee\u5f55\u53d1\u73b0\u8be5\u670d\u52a1\u5408\u9002\u7684\u7aef\u70b9\u3002\u5982\u679c\u5e0c\u671b\u5bf9\u4e00\u4e2a\u7279\u5b9a\u670d\u52a1\u4f7f\u7528\u4e00\u4e2a\u4e0d\u540c\u7684\u7aef\u70b9\uff0c\u5219\u5728\u88f8\u91d1\u5c5e\u670d\u52a1\u7684\u914d\u7f6e\u6587\u4ef6\u4e2d\u901a\u8fc7endpoint_override\u9009\u9879\u8fdb\u884c\u6307\u5b9a\uff1a [neutron] # ... endpoint_override = 5\u3001\u914d\u7f6e\u5141\u8bb8\u7684\u9a71\u52a8\u7a0b\u5e8f\u548c\u786c\u4ef6\u7c7b\u578b \u901a\u8fc7\u8bbe\u7f6eenabled_hardware_types\u8bbe\u7f6eironic-conductor\u670d\u52a1\u5141\u8bb8\u4f7f\u7528\u7684\u786c\u4ef6\u7c7b\u578b\uff1a [DEFAULT] enabled_hardware_types = ipmi \u914d\u7f6e\u786c\u4ef6\u63a5\u53e3\uff1a enabled_boot_interfaces = pxe enabled_deploy_interfaces = direct,iscsi enabled_inspect_interfaces = inspector enabled_management_interfaces = ipmitool enabled_power_interfaces = ipmitool \u914d\u7f6e\u63a5\u53e3\u9ed8\u8ba4\u503c\uff1a [DEFAULT] default_deploy_interface = direct default_network_interface = neutron \u5982\u679c\u542f\u7528\u4e86\u4efb\u4f55\u4f7f\u7528Direct deploy\u7684\u9a71\u52a8\uff0c\u5fc5\u987b\u5b89\u88c5\u548c\u914d\u7f6e\u955c\u50cf\u670d\u52a1\u7684Swift\u540e\u7aef\u3002Ceph\u5bf9\u8c61\u7f51\u5173(RADOS\u7f51\u5173)\u4e5f\u652f\u6301\u4f5c\u4e3a\u955c\u50cf\u670d\u52a1\u7684\u540e\u7aef\u3002 6\u3001\u91cd\u542fironic-conductor\u670d\u52a1 $ systemctl restart openstack-ironic-conductor deploy ramdisk\u955c\u50cf\u5236\u4f5c \u76ee\u524dramdisk\u955c\u50cf\u652f\u6301\u901a\u8fc7ironic python agent builder\u6765\u8fdb\u884c\u5236\u4f5c\uff0c\u8fd9\u91cc\u4ecb\u7ecd\u4e0b\u4f7f\u7528\u8fd9\u4e2a\u5de5\u5177\u6784\u5efaironic\u4f7f\u7528\u7684deploy\u955c\u50cf\u7684\u5b8c\u6574\u8fc7\u7a0b\u3002\uff08\u7528\u6237\u4e5f\u53ef\u4ee5\u6839\u636e\u81ea\u5df1\u7684\u60c5\u51b5\u83b7\u53d6ironic-python-agent\uff0c\u8fd9\u91cc\u63d0\u4f9b\u4f7f\u7528ipa-builder\u5236\u4f5cipa\u65b9\u6cd5\uff09 ##### \u5b89\u88c5 ironic-python-agent-builder \u5b89\u88c5\u5de5\u5177\uff1a $ pip install ironic-python-agent-builder \u4fee\u6539\u4ee5\u4e0b\u6587\u4ef6\u4e2d\u7684python\u89e3\u91ca\u5668\uff1a $ /usr/bin/yum /usr/libexec/urlgrabber-ext-down \u5b89\u88c5\u5176\u5b83\u5fc5\u987b\u7684\u5de5\u5177\uff1a $ yum install git \u7531\u4e8e DIB \u4f9d\u8d56 semanage \u547d\u4ee4\uff0c\u6240\u4ee5\u5728\u5236\u4f5c\u955c\u50cf\u4e4b\u524d\u786e\u5b9a\u8be5\u547d\u4ee4\u662f\u5426\u53ef\u7528\uff1a semanage --help \uff0c\u5982\u679c\u63d0\u793a\u65e0\u6b64\u547d\u4ee4\uff0c\u5b89\u88c5\u5373\u53ef\uff1a # \u5148\u67e5\u8be2\u9700\u8981\u5b89\u88c5\u54ea\u4e2a\u5305 [root@localhost ~]# yum provides /usr/sbin/semanage \u5df2\u52a0\u8f7d\u63d2\u4ef6\uff1afastestmirror Loading mirror speeds from cached hostfile * base: mirror.vcu.edu * extras: mirror.vcu.edu * updates: mirror.math.princeton.edu policycoreutils-python-2.5-34.el7.aarch64 : SELinux policy core python utilities \u6e90 \uff1abase \u5339\u914d\u6765\u6e90\uff1a \u6587\u4ef6\u540d \uff1a/usr/sbin/semanage # \u5b89\u88c5 [root@localhost ~]# yum install policycoreutils-python ##### \u5236\u4f5c\u955c\u50cf \u5982\u679c\u662f aarch64 \u67b6\u6784\uff0c\u8fd8\u9700\u8981\u6dfb\u52a0\uff1a $ export ARCH=aarch64 ###### \u666e\u901a\u955c\u50cf \u57fa\u672c\u7528\u6cd5\uff1a usage: 
ironic-python-agent-builder [-h] [-r RELEASE] [-o OUTPUT] [-e ELEMENT] [-b BRANCH] [-v] [--extra-args EXTRA_ARGS] distribution positional arguments: distribution Distribution to use optional arguments: -h, --help show this help message and exit -r RELEASE, --release RELEASE Distribution release to use -o OUTPUT, --output OUTPUT Output base file name -e ELEMENT, --element ELEMENT Additional DIB element to use -b BRANCH, --branch BRANCH If set, override the branch that is used for ironic- python-agent and requirements -v, --verbose Enable verbose logging in diskimage-builder --extra-args EXTRA_ARGS Extra arguments to pass to diskimage-builder \u4e3e\u4f8b\u8bf4\u660e\uff1a $ ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky ###### \u5141\u8bb8ssh\u767b\u5f55 \u521d\u59cb\u5316\u73af\u5883\u53d8\u91cf\uff0c\u7136\u540e\u5236\u4f5c\u955c\u50cf\uff1a $ export DIB_DEV_USER_USERNAME=ipa \\ $ export DIB_DEV_USER_PWDLESS_SUDO=yes \\ $ export DIB_DEV_USER_PASSWORD='123' $ ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky -e selinux-permissive -e devuser ###### \u6307\u5b9a\u4ee3\u7801\u4ed3\u5e93 \u521d\u59cb\u5316\u5bf9\u5e94\u7684\u73af\u5883\u53d8\u91cf\uff0c\u7136\u540e\u5236\u4f5c\u955c\u50cf\uff1a # \u6307\u5b9a\u4ed3\u5e93\u5730\u5740\u4ee5\u53ca\u7248\u672c DIB_REPOLOCATION_ironic_python_agent=git@172.20.2.149:liuzz/ironic-python-agent.git DIB_REPOREF_ironic_python_agent=origin/develop # \u76f4\u63a5\u4ecegerrit\u4e0aclone\u4ee3\u7801 DIB_REPOLOCATION_ironic_python_agent=https://review.opendev.org/openstack/ironic-python-agent DIB_REPOREF_ironic_python_agent=refs/changes/43/701043/1 \u53c2\u8003\uff1a source-repositories \u3002 \u6307\u5b9a\u4ed3\u5e93\u5730\u5740\u53ca\u7248\u672c\u9a8c\u8bc1\u6210\u529f\u3002 \u5728Rocky\u4e2d\uff0c\u6211\u4eec\u8fd8\u63d0\u4f9b\u4e86ironic-inspector\u7b49\u670d\u52a1\uff0c\u7528\u6237\u53ef\u6839\u636e\u81ea\u8eab\u9700\u6c42\u5b89\u88c5\u3002","title":"Ironic \u5b89\u88c5"},{"location":"install/openEuler-20.03-LTS-SP3/OpenStack-rocky/#kolla","text":"Kolla\u4e3aOpenStack\u670d\u52a1\u63d0\u4f9b\u751f\u4ea7\u73af\u5883\u53ef\u7528\u7684\u5bb9\u5668\u5316\u90e8\u7f72\u7684\u529f\u80fd\u3002openEuler 20.03 LTS SP2\u4e2d\u5df2\u7ecf\u5f15\u5165\u4e86Kolla\u548cKolla-ansible\u670d\u52a1\uff0c\u4f46\u662fKolla \u4ee5\u53ca Kolla-ansible \u539f\u751f\u5e76\u4e0d\u652f\u6301 openEuler\uff0c \u56e0\u6b64 Openstack SIG \u5728openEuler 20.03 LTS SP3\u4e2d\u63d0\u4f9b\u4e86 openstack-kolla-plugin \u548c openstack-kolla-ansible-plugin \u8fd9\u4e24\u4e2a\u8865\u4e01\u5305\u3002 Kolla\u7684\u5b89\u88c5\u5341\u5206\u7b80\u5355\uff0c\u53ea\u9700\u8981\u5b89\u88c5\u5bf9\u5e94\u7684RPM\u5305\u5373\u53ef \u652f\u6301 openEuler \u7248\u672c\uff1a yum install openstack-kolla-plugin openstack-kolla-ansible-plugin \u4e0d\u652f\u6301 openEuler \u7248\u672c\uff1a yum install openstack-kolla openstack-kolla-ansible \u5b89\u88c5\u5b8c\u540e\uff0c\u5c31\u53ef\u4ee5\u4f7f\u7528 kolla-ansible , kolla-build , kolla-genpwd , kolla-mergepwd \u7b49\u547d\u4ee4\u4e86\u3002","title":"Kolla \u5b89\u88c5"},{"location":"install/openEuler-20.03-LTS-SP3/OpenStack-rocky/#trove","text":"Trove\u662fOpenStack\u7684\u6570\u636e\u5e93\u670d\u52a1\uff0c\u5982\u679c\u7528\u6237\u4f7f\u7528OpenStack\u63d0\u4f9b\u7684\u6570\u636e\u5e93\u670d\u52a1\u5219\u63a8\u8350\u4f7f\u7528\u8be5\u7ec4\u4ef6\u3002\u5426\u5219\uff0c\u53ef\u4ee5\u4e0d\u7528\u5b89\u88c5\u3002 \u8bbe\u7f6e\u6570\u636e\u5e93 
\u6570\u636e\u5e93\u670d\u52a1\u5728\u6570\u636e\u5e93\u4e2d\u5b58\u50a8\u4fe1\u606f\uff0c\u521b\u5efa\u4e00\u4e2a trove \u7528\u6237\u53ef\u4ee5\u8bbf\u95ee trove \u6570\u636e\u5e93\uff0c\u66ff\u6362 TROVE_DBPASSWORD \u4e3a\u5bf9\u5e94\u5bc6\u7801 $ mysql -u root -p MariaDB [(none)]> CREATE DATABASE trove CHARACTER SET utf8; MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'localhost' \\ IDENTIFIED BY 'TROVE_DBPASSWORD'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'%' \\ IDENTIFIED BY 'TROVE_DBPASSWORD'; \u521b\u5efa\u670d\u52a1\u7528\u6237\u8ba4\u8bc1 1\u3001\u521b\u5efa Trove \u670d\u52a1\u7528\u6237 $ openstack user create --password TROVE_PASSWORD \\ --email trove@example.com trove $ openstack role add --project service --user trove admin $ openstack service create --name trove --description \"Database service\" database \u89e3\u91ca\uff1a TROVE_PASSWORD \u66ff\u6362\u4e3a trove \u7528\u6237\u7684\u5bc6\u7801 2\u3001\u521b\u5efa Database \u670d\u52a1\u8bbf\u95ee\u5165\u53e3 $ openstack endpoint create --region RegionOne database public http://$TROVE_NODE:8779/v1.0/%\\(tenant_id\\)s $ openstack endpoint create --region RegionOne database internal http://$TROVE_NODE:8779/v1.0/%\\(tenant_id\\)s $ openstack endpoint create --region RegionOne database admin http://$TROVE_NODE:8779/v1.0/%\\(tenant_id\\)s \u89e3\u91ca\uff1a $TROVE_NODE \u66ff\u6362\u4e3aTrove\u7684API\u670d\u52a1\u90e8\u7f72\u8282\u70b9 \u5b89\u88c5\u548c\u914d\u7f6e Trove \u5404\u7ec4\u4ef6 1\u3001\u5b89\u88c5 Trove \u5305 $ yum install openstack-trove python2-troveclient 2\u3001\u914d\u7f6e /etc/trove/trove.conf [DEFAULT] bind_host=TROVE_NODE_IP log_dir = /var/log/trove auth_strategy = keystone # Config option for showing the IP address that nova doles out add_addresses = True network_label_regex = ^NETWORK_LABEL$ api_paste_config = /etc/trove/api-paste.ini trove_auth_url = http://controller:35357/v3/ nova_compute_url = http://controller:8774/v2 cinder_url = http://controller:8776/v1 nova_proxy_admin_user = admin nova_proxy_admin_pass = ADMIN_PASS nova_proxy_admin_tenant_name = service taskmanager_manager = trove.taskmanager.manager.Manager use_nova_server_config_drive = True # Set these if using Neutron Networking network_driver=trove.network.neutron.NeutronDriver network_label_regex=.* transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/ [database] connection = mysql+pymysql://trove:TROVE_DBPASS@controller/trove [keystone_authtoken] www_authenticate_uri = http://controller:5000/v3/ auth_url=http://controller:35357/v3/ #auth_uri = http://controller/identity #auth_url = http://controller/identity_admin auth_type = password project_domain_name = Default user_domain_name = Default project_name = service username = trove password = TROVE_PASS \u89e3\u91ca\uff1a - [Default] \u5206\u7ec4\u4e2d bind_host \u914d\u7f6e\u4e3aTrove\u90e8\u7f72\u8282\u70b9\u7684IP - nova_compute_url \u548c cinder_url \u4e3aNova\u548cCinder\u5728Keystone\u4e2d\u521b\u5efa\u7684endpoint - nova_proxy_XXX \u4e3a\u4e00\u4e2a\u80fd\u8bbf\u95eeNova\u670d\u52a1\u7684\u7528\u6237\u4fe1\u606f\uff0c\u4e0a\u4f8b\u4e2d\u4f7f\u7528 admin \u7528\u6237\u4e3a\u4f8b - transport_url \u4e3a RabbitMQ \u8fde\u63a5\u4fe1\u606f\uff0c RABBIT_PASS \u66ff\u6362\u4e3aRabbitMQ\u7684\u5bc6\u7801 - [database] \u5206\u7ec4\u4e2d\u7684 connection \u4e3a\u524d\u9762\u5728mysql\u4e2d\u4e3aTrove\u521b\u5efa\u7684\u6570\u636e\u5e93\u4fe1\u606f - Trove\u7684\u7528\u6237\u4fe1\u606f\u4e2d TROVE_PASS 
\u66ff\u6362\u4e3a\u5b9e\u9645trove\u7528\u6237\u7684\u5bc6\u7801 3\u3001\u914d\u7f6e /etc/trove/trove-taskmanager.conf [DEFAULT] log_dir = /var/log/trove trove_auth_url = http://controller/identity/v2.0 nova_compute_url = http://controller:8774/v2 cinder_url = http://controller:8776/v1 transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/ [database] connection = mysql+pymysql://trove:TROVE_DBPASS@controller/trove \u89e3\u91ca\uff1a \u53c2\u7167 trove.conf \u914d\u7f6e 4\u3001\u914d\u7f6e /etc/trove/trove-conductor.conf [DEFAULT] log_dir = /var/log/trove trove_auth_url = http://controller/identity/v2.0 nova_compute_url = http://controller:8774/v2 cinder_url = http://controller:8776/v1 transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/ [database] connection = mysql+pymysql://trove:trove@controller/trove \u89e3\u91ca\uff1a \u53c2\u7167 trove.conf \u914d\u7f6e 5\u3001\u914d\u7f6e /etc/trove/trove-guestagent.conf [DEFAULT] rabbit_host = controller rabbit_password = RABBIT_PASS nova_proxy_admin_user = admin nova_proxy_admin_pass = ADMIN_PASS nova_proxy_admin_tenant_name = service trove_auth_url = http://controller/identity_admin/v2.0 \u89e3\u91ca\uff1a guestagent \u662ftrove\u4e2d\u4e00\u4e2a\u72ec\u7acb\u7ec4\u4ef6\uff0c\u9700\u8981\u9884\u5148\u5185\u7f6e\u5230Trove\u901a\u8fc7Nova\u521b\u5efa\u7684\u865a\u62df \u673a\u955c\u50cf\u4e2d\uff0c\u5728\u521b\u5efa\u597d\u6570\u636e\u5e93\u5b9e\u4f8b\u540e\uff0c\u4f1a\u8d77guestagent\u8fdb\u7a0b\uff0c\u8d1f\u8d23\u901a\u8fc7\u6d88\u606f\u961f\u5217\uff08RabbitMQ\uff09\u5411Trove\u4e0a \u62a5\u5fc3\u8df3\uff0c\u56e0\u6b64\u9700\u8981\u914d\u7f6eRabbitMQ\u7684\u7528\u6237\u548c\u5bc6\u7801\u4fe1\u606f\u3002 6\u3001\u751f\u6210\u6570\u636e Trove \u6570\u636e\u5e93\u8868 $ su -s /bin/sh -c \"trove-manage db_sync\" trove \u5b8c\u6210\u5b89\u88c5\u914d\u7f6e 1\u3001\u914d\u7f6e Trove \u670d\u52a1\u81ea\u542f\u52a8 $ systemctl enable openstack-trove-api.service \\ openstack-trove-taskmanager.service \\ openstack-trove-conductor.service 2\u3001\u542f\u52a8\u670d\u52a1 $ systemctl start openstack-trove-api.service \\ openstack-trove-taskmanager.service \\ openstack-trove-conductor.service","title":"Trove \u5b89\u88c5"},{"location":"install/openEuler-20.03-LTS-SP3/OpenStack-rocky/#rally","text":"Rally\u662fOpenStack\u63d0\u4f9b\u7684\u6027\u80fd\u6d4b\u8bd5\u5de5\u5177\u3002\u53ea\u9700\u8981\u7b80\u5355\u7684\u5b89\u88c5\u5373\u53ef\u3002 yum install openstack-rally openstack-rally-plugins","title":"Rally \u5b89\u88c5"},{"location":"install/openEuler-20.03-LTS-SP3/OpenStack-train/","text":"OpenStack-Train \u90e8\u7f72\u6307\u5357 \u00b6 OpenStack-Train \u90e8\u7f72\u6307\u5357 OpenStack \u7b80\u4ecb \u7ea6\u5b9a \u51c6\u5907\u73af\u5883 \u73af\u5883\u914d\u7f6e \u5b89\u88c5 SQL DataBase \u5b89\u88c5 RabbitMQ \u5b89\u88c5 Memcached \u5b89\u88c5 OpenStack Keystone \u5b89\u88c5 Glance \u5b89\u88c5 Placement\u5b89\u88c5 Nova \u5b89\u88c5 Neutron \u5b89\u88c5 Cinder \u5b89\u88c5 horizon \u5b89\u88c5 Tempest \u5b89\u88c5 Ironic \u5b89\u88c5 Kolla \u5b89\u88c5 Trove \u5b89\u88c5 Swift \u5b89\u88c5 Cyborg \u5b89\u88c5 Aodh \u5b89\u88c5 Gnocchi \u5b89\u88c5 Ceilometer \u5b89\u88c5 Heat \u5b89\u88c5 OpenStack \u7b80\u4ecb \u00b6 OpenStack \u662f\u4e00\u4e2a\u793e\u533a\uff0c\u4e5f\u662f\u4e00\u4e2a\u9879\u76ee\u3002\u5b83\u63d0\u4f9b\u4e86\u4e00\u4e2a\u90e8\u7f72\u4e91\u7684\u64cd\u4f5c\u5e73\u53f0\u6216\u5de5\u5177\u96c6\uff0c\u4e3a\u7ec4\u7ec7\u63d0\u4f9b\u53ef\u6269\u5c55\u7684\u3001\u7075\u6d3b\u7684\u4e91\u8ba1\u7b97\u3002 
As an open source cloud management platform, OpenStack combines several major components -- nova, cinder, neutron, glance, keystone, horizon and others -- to do the actual work. OpenStack supports almost every type of cloud environment; the project aims to provide a cloud computing management platform that is simple to deploy, massively scalable, feature-rich, and built on unified standards. OpenStack delivers an Infrastructure-as-a-Service (IaaS) solution through a set of complementary services, each of which exposes an API for integration.

The official openEuler 20.03-LTS-SP3 repositories already provide the OpenStack Train release. After configuring the yum repositories, users can deploy OpenStack by following this document.

## Conventions

OpenStack can be deployed in several topologies. This document covers both the All-in-One and the Distributed deployment modes, with the following conventions:

- All-in-One mode: ignore all suffixes.
- Distributed mode:
    - A `(CTL)` suffix means the configuration or command applies only to the `controller node`.
    - A `(CPT)` suffix means the configuration or command applies only to the `compute node`.
    - A `(STG)` suffix means the configuration or command applies only to the `storage node`.
    - Anything without a suffix applies to both the `controller node` and the `compute node`.

Note: the services affected by these conventions are Cinder, Nova, and Neutron.

## Preparing the Environment

### Environment configuration

Configure the official 20.03-LTS-SP3 yum repositories; the EPOL repository must be enabled to provide the OpenStack packages.

```shell
cat << 'EOF' >> /etc/yum.repos.d/20.03-LTS-SP3-OpenStack_Train.repo
[OS]
name=OS
baseurl=http://repo.openeuler.org/openEuler-20.03-LTS-SP3/OS/$basearch/
enabled=1
gpgcheck=1
gpgkey=http://repo.openeuler.org/openEuler-20.03-LTS-SP3/OS/$basearch/RPM-GPG-KEY-openEuler

[everything]
name=everything
baseurl=http://repo.openeuler.org/openEuler-20.03-LTS-SP3/everything/$basearch/
enabled=1
gpgcheck=1
gpgkey=http://repo.openeuler.org/openEuler-20.03-LTS-SP3/everything/$basearch/RPM-GPG-KEY-openEuler

[EPOL]
name=EPOL
baseurl=http://repo.openeuler.org/openEuler-20.03-LTS-SP3/EPOL/main/$basearch/
enabled=1
gpgcheck=1
gpgkey=http://repo.openeuler.org/openEuler-20.03-LTS-SP3/OS/$basearch/RPM-GPG-KEY-openEuler
EOF

yum clean all && yum makecache
```

Set the host names and the name mapping.

Set the host name on each node:

```shell
hostnamectl set-hostname controller (CTL)
hostnamectl set-hostname compute (CPT)
```

Assuming the controller node's IP is 10.0.0.11 and the compute node's IP (if one exists) is 10.0.0.12, add the following to `/etc/hosts`:

```shell
10.0.0.11 controller
10.0.0.12 compute
```
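Before any services are installed, it can be worth confirming that the repositories and the name resolution configured above behave as expected. The quick check below is a sketch added for convenience and is not part of the original steps; `openstack-keystone` is used only as a representative package from the EPOL repository:

```shell
# The EPOL repository added above is what carries the OpenStack Train packages,
# so resolving one of them is a quick repository test.
yum info openstack-keystone | head -n 5

# Host-name resolution relies on the /etc/hosts entries added above.
ping -c 2 controller
ping -c 2 compute    # only meaningful when a separate compute node exists (CPT)
```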
## Installing the SQL Database

Install the packages:

```shell
yum install mariadb mariadb-server python3-PyMySQL
```

Create and edit the `/etc/my.cnf.d/openstack.cnf` file:

```shell
vim /etc/my.cnf.d/openstack.cnf

[mysqld]
bind-address = 10.0.0.11
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
```

Note: set `bind-address` to the management IP address of the controller node.

Start the database service and enable it at boot:

```shell
systemctl enable mariadb.service
systemctl start mariadb.service
```

Set the default database password (optional):

```shell
mysql_secure_installation
```

Note: just follow the prompts.

## Installing RabbitMQ

Install the packages:

```shell
yum install rabbitmq-server
```

Start the RabbitMQ service and enable it at boot:

```shell
systemctl enable rabbitmq-server.service
systemctl start rabbitmq-server.service
```

Add the openstack user:

```shell
rabbitmqctl add_user openstack RABBIT_PASS
```

Note: replace `RABBIT_PASS` with the password you choose for the openstack user.

Grant the openstack user permission to configure, write, and read:

```shell
rabbitmqctl set_permissions openstack ".*" ".*" ".*"
```

## Installing Memcached

Install the dependency packages:

```shell
yum install memcached python3-memcached
```

Edit the `/etc/sysconfig/memcached` file:

```shell
vim /etc/sysconfig/memcached

OPTIONS="-l 127.0.0.1,::1,controller"
```

Start the Memcached service and enable it at boot:

```shell
systemctl enable memcached.service
systemctl start memcached.service
```

Note: after the service starts, you can run `memcached-tool controller stats` to make sure it started correctly and is reachable; `controller` can be replaced with the management IP address of the controller node.

## Installing OpenStack

### Keystone Installation

Create the keystone database and grant privileges:

```
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE keystone;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
IDENTIFIED BY 'KEYSTONE_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
IDENTIFIED BY 'KEYSTONE_DBPASS';
MariaDB [(none)]> exit
```

Note: replace `KEYSTONE_DBPASS` with the password you choose for the keystone database.

Install the packages:

```shell
yum install openstack-keystone httpd mod_wsgi
```

Configure keystone:

```shell
vim /etc/keystone/keystone.conf

[database]
connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone

[token]
provider = fernet
```

Explanation: the `[database]` section configures the database entry point; the `[token]` section configures the token provider.

Note: replace `KEYSTONE_DBPASS` with the password of the keystone database.

Synchronize the database:

```shell
su -s /bin/sh -c "keystone-manage db_sync" keystone
```
Initialize the Fernet key repositories:

```shell
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
```

Bootstrap the identity service:

```shell
keystone-manage bootstrap --bootstrap-password ADMIN_PASS \
    --bootstrap-admin-url http://controller:5000/v3/ \
    --bootstrap-internal-url http://controller:5000/v3/ \
    --bootstrap-public-url http://controller:5000/v3/ \
    --bootstrap-region-id RegionOne
```

Note: replace `ADMIN_PASS` with the password you choose for the admin user.

Configure the Apache HTTP server:

```shell
vim /etc/httpd/conf/httpd.conf

ServerName controller
```

```shell
ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
```

Explanation: set the `ServerName` entry to refer to the controller node.

Note: if the `ServerName` entry does not exist, it has to be created.

Start the Apache HTTP service:

```shell
systemctl enable httpd.service
systemctl start httpd.service
```

Create the environment variable file:

```shell
cat << EOF >> ~/.admin-openrc
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
EOF
```

Note: replace `ADMIN_PASS` with the password of the admin user.

Create the domain, projects, users, and roles in turn; python3-openstackclient must be installed first:

```shell
yum install python3-openstackclient
```

Load the environment variables:

```shell
source ~/.admin-openrc
```

Create the project `service`; the domain `default` was already created by `keystone-manage bootstrap`:

```shell
openstack domain create --description "An Example Domain" example
openstack project create --domain default --description "Service Project" service
```

Create the (non-admin) project `myproject`, the user `myuser`, and the role `myrole`, then add the role `myrole` to `myproject` and `myuser`:

```shell
openstack project create --domain default --description "Demo Project" myproject
openstack user create --domain default --password-prompt myuser
openstack role create myrole
openstack role add --project myproject --user myuser myrole
```

Verification

Unset the temporary environment variables OS_AUTH_URL and OS_PASSWORD:

```shell
source ~/.admin-openrc
unset OS_AUTH_URL OS_PASSWORD
```

Request a token for the admin user:

```shell
openstack --os-auth-url http://controller:5000/v3 \
    --os-project-domain-name Default --os-user-domain-name Default \
    --os-project-name admin --os-username admin token issue
```

Request a token for the myuser user:

```shell
openstack --os-auth-url http://controller:5000/v3 \
    --os-project-domain-name Default --os-user-domain-name Default \
    --os-project-name myproject --os-username myuser token issue
```

### Glance Installation

Create the database, service credentials, and API endpoints.

Create the database:

```
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE glance;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
IDENTIFIED BY 'GLANCE_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
IDENTIFIED BY 'GLANCE_DBPASS';
MariaDB [(none)]> exit
```

Note: replace `GLANCE_DBPASS` with the password you choose for the glance database.
Create the service credentials:

```shell
source ~/.admin-openrc

openstack user create --domain default --password-prompt glance
openstack role add --project service --user glance admin
openstack service create --name glance --description "OpenStack Image" image
```

Create the Image service API endpoints:

```shell
openstack endpoint create --region RegionOne image public http://controller:9292
openstack endpoint create --region RegionOne image internal http://controller:9292
openstack endpoint create --region RegionOne image admin http://controller:9292
```

Install the packages:

```shell
yum install openstack-glance
```

Configure glance:

```shell
vim /etc/glance/glance-api.conf

[database]
connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = GLANCE_PASS

[paste_deploy]
flavor = keystone

[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
```

Explanation: the `[database]` section configures the database entry point; the `[keystone_authtoken]` and `[paste_deploy]` sections configure the identity service entry point; the `[glance_store]` section configures local file-system storage and the location of the image files.

Note: replace `GLANCE_DBPASS` with the password of the glance database and `GLANCE_PASS` with the password of the glance user.

Synchronize the database:

```shell
su -s /bin/sh -c "glance-manage db_sync" glance
```

Start the service:

```shell
systemctl enable openstack-glance-api.service
systemctl start openstack-glance-api.service
```

Verification

Download an image:

```shell
source ~/.admin-openrc
wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
```

Note: if your environment is a Kunpeng (aarch64) machine, download the aarch64 version of the image; the image cirros-0.5.2-aarch64-disk.img has been tested.

Upload the image to the Image service:

```shell
openstack image create --disk-format qcow2 --container-format bare \
    --file cirros-0.4.0-x86_64-disk.img --public cirros
```

Confirm the image was uploaded and verify its attributes:

```shell
openstack image list
```

### Placement Installation

Create the database, service credentials, and API endpoints.

Create the database:

Access the database as the root user, then create the placement database and grant privileges:

```
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE placement;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' \
IDENTIFIED BY 'PLACEMENT_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' \
IDENTIFIED BY 'PLACEMENT_DBPASS';
MariaDB [(none)]> exit
```

Note: replace `PLACEMENT_DBPASS` with the password you choose for the placement database.

```shell
source ~/.admin-openrc
```

Run the following commands to create the placement service credentials, create the placement user, add the 'admin' role to the 'placement' user, and create the Placement API service:
```shell
openstack user create --domain default --password-prompt placement
openstack role add --project service --user placement admin
openstack service create --name placement --description "Placement API" placement
```

Create the Placement service API endpoints:

```shell
openstack endpoint create --region RegionOne placement public http://controller:8778
openstack endpoint create --region RegionOne placement internal http://controller:8778
openstack endpoint create --region RegionOne placement admin http://controller:8778
```

Install and configure the service.

Install the package:

```shell
yum install openstack-placement-api
```

Configure placement by editing the `/etc/placement/placement.conf` file: in the `[placement_database]` section configure the database entry point, and in the `[api]` and `[keystone_authtoken]` sections configure the identity service entry point:

```shell
# vim /etc/placement/placement.conf

[placement_database]
# ...
connection = mysql+pymysql://placement:PLACEMENT_DBPASS@controller/placement

[api]
# ...
auth_strategy = keystone

[keystone_authtoken]
# ...
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = placement
password = PLACEMENT_PASS
```

Replace `PLACEMENT_DBPASS` with the password of the placement database and `PLACEMENT_PASS` with the password of the placement user.

Synchronize the database:

```shell
su -s /bin/sh -c "placement-manage db sync" placement
```

Restart the httpd service:

```shell
systemctl restart httpd
```

Verification

Run a status check:

```shell
. ~/.admin-openrc
placement-status upgrade check
```

Install osc-placement and list the available resource classes and traits:

```shell
yum install python3-osc-placement
openstack --os-placement-api-version 1.2 resource class list --sort-column name
openstack --os-placement-api-version 1.6 trait list --sort-column name
```

### Nova Installation

Create the database, service credentials, and API endpoints.

Create the databases:

```
mysql -u root -p    (CTL)

MariaDB [(none)]> CREATE DATABASE nova_api;
MariaDB [(none)]> CREATE DATABASE nova;
MariaDB [(none)]> CREATE DATABASE nova_cell0;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> exit
```

Note: replace `NOVA_DBPASS` with the password you choose for the nova databases.

```shell
source ~/.admin-openrc    (CTL)
```

Create the nova service credentials:

```shell
openstack user create --domain default --password-prompt nova    (CTL)
openstack role add --project service --user nova admin           (CTL)
openstack service create --name nova --description "OpenStack Compute" compute    (CTL)
```

Create the nova API endpoints:
```shell
openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1    (CTL)
openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1  (CTL)
openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1     (CTL)
```

Install the packages:

```shell
yum install openstack-nova-api openstack-nova-conductor \    (CTL)
    openstack-nova-novncproxy openstack-nova-scheduler

yum install openstack-nova-compute    (CPT)
```

Note: on arm64 machines the following command is also required:

```shell
yum install edk2-aarch64    (CPT)
```

Configure nova:

```shell
vim /etc/nova/nova.conf

[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
my_ip = 10.0.0.11
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver
compute_driver = libvirt.LibvirtDriver       (CPT)
instances_path = /var/lib/nova/instances/    (CPT)
lock_path = /var/lib/nova/tmp                (CPT)
logdir = /var/log/nova/

[api_database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api    (CTL)

[database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova        (CTL)

[api]
auth_strategy = keystone

[keystone_authtoken]
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = NOVA_PASS

[vnc]
enabled = true
server_listen = $my_ip
server_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html    (CPT)

[glance]
api_servers = http://controller:9292

[oslo_concurrency]
lock_path = /var/lib/nova/tmp    (CTL)

[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = PLACEMENT_PASS

[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
service_metadata_proxy = true                      (CTL)
metadata_proxy_shared_secret = METADATA_SECRET     (CTL)
```

Explanation:

- the `[DEFAULT]` section enables the compute and metadata APIs, configures the RabbitMQ message queue entry point, sets `my_ip`, and enables the neutron network service;
- the `[api_database]` and `[database]` sections configure the database entry points;
- the `[api]` and `[keystone_authtoken]` sections configure the identity service entry point;
- the `[vnc]` section enables and configures the remote console entry point;
- the `[glance]` section configures the address of the Image service API;
- the `[oslo_concurrency]` section configures the lock path;
- the `[placement]` section configures the entry point of the Placement service.

Note:

- replace `RABBIT_PASS` with the password of the openstack account in RabbitMQ;
- set `my_ip` to the management IP address of the controller node;
- replace `NOVA_DBPASS` with the password of the nova database;
- replace `NOVA_PASS` with the password of the nova user;
- replace `PLACEMENT_PASS` with the password of the placement user;
- replace `NEUTRON_PASS` with the password of the neutron user;
- replace `METADATA_SECRET` with a suitable metadata proxy secret.

Additional steps:
Determine whether virtual machine hardware acceleration is supported (x86 architecture):

```shell
egrep -c '(vmx|svm)' /proc/cpuinfo    (CPT)
```

If the command returns 0, hardware acceleration is not supported and libvirt must be configured to use QEMU instead of KVM:

```shell
vim /etc/nova/nova.conf    (CPT)

[libvirt]
virt_type = qemu
```

If the command returns 1 or more, hardware acceleration is supported and `virt_type` can be set to `kvm`.

Note: on arm64 machines the following commands must also be run on the compute node:

```shell
mkdir -p /usr/share/AAVMF
chown nova:nova /usr/share/AAVMF

ln -s /usr/share/edk2/aarch64/QEMU_EFI-pflash.raw \
      /usr/share/AAVMF/AAVMF_CODE.fd
ln -s /usr/share/edk2/aarch64/vars-template-pflash.raw \
      /usr/share/AAVMF/AAVMF_VARS.fd

vim /etc/libvirt/qemu.conf

nvram = ["/usr/share/AAVMF/AAVMF_CODE.fd: \
          /usr/share/AAVMF/AAVMF_VARS.fd", \
         "/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw: \
          /usr/share/edk2/aarch64/vars-template-pflash.raw"]
```

In addition, when the ARM deployment environment is nested virtualization, configure libvirt as follows:

```
[libvirt]
virt_type = qemu
cpu_mode = custom
cpu_model = cortex-a72
```

Synchronize the databases.

Synchronize the nova-api database:

```shell
su -s /bin/sh -c "nova-manage api_db sync" nova    (CTL)
```

Register the cell0 database:

```shell
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova    (CTL)
```

Create the cell1 cell:

```shell
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova    (CTL)
```

Synchronize the nova database:

```shell
su -s /bin/sh -c "nova-manage db sync" nova    (CTL)
```

Verify that cell0 and cell1 are registered correctly:

```shell
su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova    (CTL)
```

Add the compute node to the OpenStack cluster:

```shell
su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova    (CTL)
```

Start the services:

```shell
systemctl enable \    (CTL)
    openstack-nova-api.service \
    openstack-nova-scheduler.service \
    openstack-nova-conductor.service \
    openstack-nova-novncproxy.service
systemctl start \    (CTL)
    openstack-nova-api.service \
    openstack-nova-scheduler.service \
    openstack-nova-conductor.service \
    openstack-nova-novncproxy.service

systemctl enable libvirtd.service openstack-nova-compute.service    (CPT)
systemctl start libvirtd.service openstack-nova-compute.service     (CPT)
```

Verification

```shell
source ~/.admin-openrc    (CTL)
```

List the service components to verify that every process started and registered successfully:

```shell
openstack compute service list    (CTL)
```

List the API endpoints in the identity service to verify the connection to it:

```shell
openstack catalog list    (CTL)
```

List the images in the Image service to verify the connection to it:

```shell
openstack image list    (CTL)
```

Check whether the cells are working and whether the other prerequisites are in place:

```shell
nova-status upgrade check    (CTL)
```

### Neutron Installation

Create the database, service credentials, and API endpoints.

Create the database:
```
mysql -u root -p    (CTL)

MariaDB [(none)]> CREATE DATABASE neutron;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
IDENTIFIED BY 'NEUTRON_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
IDENTIFIED BY 'NEUTRON_DBPASS';
MariaDB [(none)]> exit
```

Note: replace `NEUTRON_DBPASS` with the password you choose for the neutron database.

```shell
source ~/.admin-openrc    (CTL)
```

Create the neutron service credentials:

```shell
openstack user create --domain default --password-prompt neutron    (CTL)
openstack role add --project service --user neutron admin           (CTL)
openstack service create --name neutron --description "OpenStack Networking" network    (CTL)
```

Create the Neutron service API endpoints:

```shell
openstack endpoint create --region RegionOne network public http://controller:9696      (CTL)
openstack endpoint create --region RegionOne network internal http://controller:9696    (CTL)
openstack endpoint create --region RegionOne network admin http://controller:9696       (CTL)
```

Install the packages:

```shell
yum install openstack-neutron openstack-neutron-linuxbridge ebtables ipset \    (CTL)
    openstack-neutron-ml2

yum install openstack-neutron-linuxbridge ebtables ipset    (CPT)
```

Configure neutron.

Configure the main configuration file:

```shell
vim /etc/neutron/neutron.conf

[database]
connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron    (CTL)

[DEFAULT]
core_plugin = ml2                               (CTL)
service_plugins = router                        (CTL)
allow_overlapping_ips = true                    (CTL)
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = true       (CTL)
notify_nova_on_port_data_changes = true         (CTL)
api_workers = 3                                 (CTL)

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = neutron
password = NEUTRON_PASS

[nova]
auth_url = http://controller:5000    (CTL)
auth_type = password                 (CTL)
project_domain_name = Default        (CTL)
user_domain_name = Default           (CTL)
region_name = RegionOne              (CTL)
project_name = service               (CTL)
username = nova                      (CTL)
password = NOVA_PASS                 (CTL)

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
```

Explanation:

- the `[database]` section configures the database entry point;
- the `[DEFAULT]` section enables the ml2 plugin and the router plugin, allows overlapping IP addresses, and configures the RabbitMQ message queue entry point;
- the `[DEFAULT]` and `[keystone_authtoken]` sections configure the identity service entry point;
- the `[DEFAULT]` and `[nova]` sections configure networking to notify compute of network topology changes;
- the `[oslo_concurrency]` section configures the lock path.

Note:

- replace `NEUTRON_DBPASS` with the password of the neutron database;
- replace `RABBIT_PASS` with the password of the openstack account in RabbitMQ;
- replace `NEUTRON_PASS` with the password of the neutron user;
- replace `NOVA_PASS` with the password of the nova user.

Configure the ML2 plugin:

```shell
vim /etc/neutron/plugins/ml2/ml2_conf.ini    (CTL)

[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security

[ml2_type_flat]
flat_networks = provider

[ml2_type_vxlan]
vni_ranges = 1:1000

[securitygroup]
enable_ipset = true
```

Create a symbolic link at /etc/neutron/plugin.ini:
```shell
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
```

Note:

- the `[ml2]` section enables flat, vlan, and vxlan networks, enables the linuxbridge and l2population mechanisms, and enables the port security extension driver;
- the `[ml2_type_flat]` section configures the flat network as the provider virtual network;
- the `[ml2_type_vxlan]` section configures the VXLAN network identifier range;
- the `[securitygroup]` section enables ipset.

Supplement: the actual layer-2 configuration can be adjusted to the user's needs; this document uses a provider network with linuxbridge.

Configure the Linux bridge agent:

```shell
vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini

[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME

[vxlan]
enable_vxlan = true
local_ip = OVERLAY_INTERFACE_IP_ADDRESS
l2_population = true

[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
```

Explanation:

- the `[linux_bridge]` section maps the provider virtual network to the physical network interface;
- the `[vxlan]` section enables the vxlan overlay network, configures the IP address of the physical interface that handles the overlay network, and enables layer-2 population;
- the `[securitygroup]` section enables security groups and configures the linux bridge iptables firewall driver.

Note:

- replace `PROVIDER_INTERFACE_NAME` with the physical network interface;
- replace `OVERLAY_INTERFACE_IP_ADDRESS` with the management IP address of the controller node.

Configure the Layer-3 agent:

```shell
vim /etc/neutron/l3_agent.ini    (CTL)

[DEFAULT]
interface_driver = linuxbridge
```

Explanation: in the `[DEFAULT]` section, set the interface driver to linuxbridge.

Configure the DHCP agent:

```shell
vim /etc/neutron/dhcp_agent.ini    (CTL)

[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
```

Explanation: the `[DEFAULT]` section configures the linuxbridge interface driver and the Dnsmasq DHCP driver and enables isolated metadata.

Configure the metadata agent:

```shell
vim /etc/neutron/metadata_agent.ini    (CTL)

[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = METADATA_SECRET
```

Explanation: the `[DEFAULT]` section configures the metadata host and the shared secret.

Note: replace `METADATA_SECRET` with a suitable metadata proxy secret.

Configure nova:

```shell
vim /etc/nova/nova.conf

[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = Default
user_domain_name = Default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
service_metadata_proxy = true                     (CTL)
metadata_proxy_shared_secret = METADATA_SECRET    (CTL)
```

Explanation: the `[neutron]` section configures the access parameters, enables the metadata proxy, and configures the secret.

Note: replace `NEUTRON_PASS` with the password of the neutron user and `METADATA_SECRET` with a suitable metadata proxy secret.

Synchronize the database:

```shell
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \    (CTL)
    --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
```

Restart the Compute API service:

```shell
systemctl restart openstack-nova-api.service
```

Start the networking services:

```shell
systemctl enable neutron-server.service neutron-linuxbridge-agent.service \     (CTL)
    neutron-dhcp-agent.service neutron-metadata-agent.service \
    neutron-l3-agent.service
systemctl restart neutron-server.service neutron-linuxbridge-agent.service \    (CTL)
    neutron-dhcp-agent.service neutron-metadata-agent.service \
    neutron-l3-agent.service

systemctl enable neutron-linuxbridge-agent.service    (CPT)
systemctl restart neutron-linuxbridge-agent.service openstack-nova-compute.service    (CPT)
```

Verification

Verify that the neutron agents started successfully:

```shell
openstack network agent list
```
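Beyond listing the agents, a quick way to exercise the linuxbridge/provider configuration above is to create a flat provider network with a subnet on it. This is only an illustrative sketch and not part of the original guide; the network name, address range, gateway, and DNS server below are placeholders that must match your actual provider network:

```shell
source ~/.admin-openrc    (CTL)

# Create a flat, shared, external network mapped to the "provider" physical network
openstack network create --share --external \
    --provider-physical-network provider \
    --provider-network-type flat provider

# Add a subnet; all addresses below are placeholders for your environment
openstack subnet create --network provider \
    --allocation-pool start=10.0.0.200,end=10.0.0.250 \
    --dns-nameserver 114.114.114.114 --gateway 10.0.0.1 \
    --subnet-range 10.0.0.0/24 provider-subnet
```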
### Cinder Installation

Create the database, service credentials, and API endpoints.

Create the database:

```
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE cinder;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \
IDENTIFIED BY 'CINDER_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \
IDENTIFIED BY 'CINDER_DBPASS';
MariaDB [(none)]> exit
```

Note: replace `CINDER_DBPASS` with the password you choose for the cinder database.

```shell
source ~/.admin-openrc
```

Create the cinder service credentials:

```shell
openstack user create --domain default --password-prompt cinder
openstack role add --project service --user cinder admin
openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
```

Create the Block Storage service API endpoints:

```shell
openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s
```

Install the packages:

```shell
yum install openstack-cinder-api openstack-cinder-scheduler    (CTL)

yum install lvm2 device-mapper-persistent-data scsi-target-utils rpcbind nfs-utils \    (STG)
    openstack-cinder-volume openstack-cinder-backup
```

Prepare the storage devices; the following is only an example:

```shell
pvcreate /dev/vdb
vgcreate cinder-volumes /dev/vdb
```
```shell
vim /etc/lvm/lvm.conf

devices {
    ...
    filter = [ "a/vdb/", "r/.*/"]
}
```

Explanation: in the `devices` section, add a filter that accepts the /dev/vdb device and rejects all other devices.

Prepare NFS:

```shell
mkdir -p /root/cinder/backup

cat << EOF >> /etc/exports
/root/cinder/backup 192.168.1.0/24(rw,sync,no_root_squash,no_all_squash)
EOF
```

Configure cinder:

```shell
vim /etc/cinder/cinder.conf

[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone
my_ip = 10.0.0.11
enabled_backends = lvm                                       (STG)
backup_driver = cinder.backup.drivers.nfs.NFSBackupDriver    (STG)
backup_share = HOST:PATH                                     (STG)

[database]
connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = cinder
password = CINDER_PASS

[oslo_concurrency]
lock_path = /var/lib/cinder/tmp

[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver    (STG)
volume_group = cinder-volumes                                (STG)
iscsi_protocol = iscsi                                       (STG)
iscsi_helper = tgtadm                                        (STG)
```

Explanation:

- the `[database]` section configures the database entry point;
- the `[DEFAULT]` section configures the RabbitMQ message queue entry point and `my_ip`;
- the `[DEFAULT]` and `[keystone_authtoken]` sections configure the identity service entry point;
- the `[oslo_concurrency]` section configures the lock path.

Note:

- replace `CINDER_DBPASS` with the password of the cinder database;
- replace `RABBIT_PASS` with the password of the openstack account in RabbitMQ;
- set `my_ip` to the management IP address of the controller node;
- replace `CINDER_PASS` with the password of the cinder user;
- replace `HOST:PATH` with the NFS host IP and shared path.

Synchronize the database:

```shell
su -s /bin/sh -c "cinder-manage db sync" cinder    (CTL)
```

Configure nova:

```shell
vim /etc/nova/nova.conf    (CTL)

[cinder]
os_region_name = RegionOne
```

Restart the Compute API service:

```shell
systemctl restart openstack-nova-api.service
```

Start the cinder services:

```shell
systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service    (CTL)
systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service     (CTL)

systemctl enable rpcbind.service nfs-server.service tgtd.service iscsid.service \   (STG)
    openstack-cinder-volume.service \
    openstack-cinder-backup.service
systemctl start rpcbind.service nfs-server.service tgtd.service iscsid.service \    (STG)
    openstack-cinder-volume.service \
    openstack-cinder-backup.service
```

Note: when cinder attaches volumes with tgtadm, `/etc/tgt/tgtd.conf` must be modified with the following content so that tgtd can discover the iscsi targets of cinder-volume:

```
include /var/lib/cinder/volumes/*
```

Verification

```shell
source ~/.admin-openrc
openstack volume service list
```
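If the services above are reported as `up`, an end-to-end check is to create a small test volume on the LVM backend. This is an illustrative addition rather than part of the original guide, and the volume name is arbitrary:

```shell
source ~/.admin-openrc
# Create a 1 GiB volume and watch it become "available"
openstack volume create --size 1 test-volume
openstack volume list
```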
### Horizon Installation

Install the package:

```shell
yum install openstack-dashboard
```

Modify the configuration file.

Modify the variables:

```shell
vim /etc/openstack-dashboard/local_settings

OPENSTACK_HOST = "controller"
ALLOWED_HOSTS = ['*', ]

SESSION_ENGINE = 'django.contrib.sessions.backends.cache'

CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'controller:11211',
    }
}

OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST

OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True

OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"

OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"

OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 3,
}
```

Restart the httpd service:

```shell
systemctl restart httpd.service memcached.service
```

Verification: open a browser, enter the URL http://HOSTIP/dashboard/, and log in to horizon.

Note: replace HOSTIP with the management-plane IP address of the controller node.

### Tempest Installation

Tempest is the integration test service of OpenStack. It is recommended if you need comprehensive, automated functional testing of an installed OpenStack environment; otherwise it does not need to be installed.

Install Tempest:

```shell
yum install openstack-tempest
```

Initialize a working directory:

```shell
tempest init mytest
```

Modify the configuration file:

```shell
cd mytest
vi etc/tempest.conf
```

tempest.conf must be configured with the information of the current OpenStack environment; for the details, refer to the official example. A minimal illustrative sketch is shown below.
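As a starting point, a minimal `tempest.conf` for the deployment described in this guide might look like the following. The values are assumptions for illustration only (in particular, the image and flavor references must point at objects that really exist, such as the cirros image uploaded earlier); refer to the official example for the full set of options:

```
[auth]
admin_username = admin
admin_password = ADMIN_PASS
admin_project_name = admin
admin_domain_name = Default

[identity]
uri_v3 = http://controller:5000/v3
auth_version = v3

[compute]
# Use IDs from `openstack image list` and `openstack flavor list`
image_ref = <IMAGE_UUID>
flavor_ref = <FLAVOR_ID>
```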
Run the tests:

```shell
tempest run
```

Install tempest extensions (optional). The individual OpenStack services also provide their own tempest test packages, which can be installed to enrich the tempest test content. In Train, extension tests are provided for Cinder, Glance, Keystone, Ironic, and Trove; they can be installed and used as follows:

```shell
yum install python3-cinder-tempest-plugin python3-glance-tempest-plugin python3-ironic-tempest-plugin python3-keystone-tempest-plugin python3-trove-tempest-plugin
```

### Ironic Installation

Ironic is the bare metal service of OpenStack. It is recommended if you need to provision bare metal machines; otherwise it does not need to be installed.

Set up the database.

The Bare Metal service stores information in a database. Create an `ironic` database that the `ironic` user can access, replacing `IRONIC_DBPASSWORD` with a suitable password:

```
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE ironic CHARACTER SET utf8;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'localhost' \
IDENTIFIED BY 'IRONIC_DBPASSWORD';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'%' \
IDENTIFIED BY 'IRONIC_DBPASSWORD';
```

2. Install the packages:

```shell
yum install openstack-ironic-api openstack-ironic-conductor python3-ironicclient
```

Start the services:

```shell
systemctl enable openstack-ironic-api openstack-ironic-conductor
systemctl start openstack-ironic-api openstack-ironic-conductor
```

Create the service user authentication.

1. Create the Bare Metal service user:

```shell
openstack user create --password IRONIC_PASSWORD \
    --email ironic@example.com ironic
openstack role add --project service --user ironic admin
openstack service create --name ironic \
    --description "Ironic baremetal provisioning service" baremetal
```

2. Create the Bare Metal service endpoints:

```shell
openstack endpoint create --region RegionOne baremetal admin http://$IRONIC_NODE:6385
openstack endpoint create --region RegionOne baremetal public http://$IRONIC_NODE:6385
openstack endpoint create --region RegionOne baremetal internal http://$IRONIC_NODE:6385
```

Configure the ironic-api service.

The configuration file path is /etc/ironic/ironic.conf.

1. Configure the location of the database via the `connection` option, as shown below, replacing `IRONIC_DBPASSWORD` with the password of the `ironic` user and `DB_IP` with the IP address of the DB server:

```
[database]

# The SQLAlchemy connection string used to connect to the
# database (string value)
connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic
```

2. Configure the ironic-api service to use the RabbitMQ message broker via the following options, replacing `RPC_*` with the RabbitMQ address details and credentials:

```
[DEFAULT]

# A URL representing the messaging driver to use and its full
# configuration. (string value)
transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
```

Users may also replace rabbitmq with the json-rpc mechanism.

3. Configure the ironic-api service to use the credentials of the identity service, replacing `PUBLIC_IDENTITY_IP` with the public IP of the identity server, `PRIVATE_IDENTITY_IP` with the private IP of the identity server, and `IRONIC_PASSWORD` with the password of the `ironic` user in the identity service:

```
[DEFAULT]

# Authentication strategy used by ironic-api: one of
# "keystone" or "noauth". "noauth" should not be used in a
# production environment because all authentication will be
# disabled. (string value)
auth_strategy=keystone

[keystone_authtoken]

# Authentication type to load (string value)
auth_type=password

# Complete public Identity API endpoint (string value)
www_authenticate_uri=http://PUBLIC_IDENTITY_IP:5000

# Complete admin Identity API endpoint. (string value)
auth_url=http://PRIVATE_IDENTITY_IP:5000

# Service username. (string value)
username=ironic

# Service account password. (string value)
password=IRONIC_PASSWORD

# Service tenant name. (string value)
project_name=service

# Domain name containing project (string value)
project_domain_name=Default

# User's domain name (string value)
user_domain_name=Default
```
4. Create the Bare Metal service database tables:

```shell
ironic-dbsync --config-file /etc/ironic/ironic.conf create_schema
```

5. Restart the ironic-api service:

```shell
sudo systemctl restart openstack-ironic-api
```

Configure the ironic-conductor service.

1. Replace `HOST_IP` with the IP of the conductor host:

```
[DEFAULT]

# IP address of this host. If unset, will determine the IP
# programmatically. If unable to do so, will use "127.0.0.1".
# (string value)
my_ip=HOST_IP
```

2. Configure the location of the database. ironic-conductor should use the same configuration as ironic-api; replace `IRONIC_DBPASSWORD` with the password of the `ironic` user and `DB_IP` with the IP address of the DB server:

```
[database]

# The SQLAlchemy connection string to use to connect to the
# database. (string value)
connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic
```

3. Configure the service to use the RabbitMQ message broker via the following options. ironic-conductor should use the same configuration as ironic-api; replace `RPC_*` with the RabbitMQ address details and credentials:

```
[DEFAULT]

# A URL representing the messaging driver to use and its full
# configuration. (string value)
transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
```

Users may also replace rabbitmq with the json-rpc mechanism.

4. Configure credentials for accessing other OpenStack services.

To communicate with other OpenStack services, the Bare Metal service needs to authenticate with the OpenStack Identity service using service users when making requests to those services. The credentials of these users must be configured in each configuration section associated with the corresponding service:

- `[neutron]` - accessing the OpenStack Networking service
- `[glance]` - accessing the OpenStack Image service
- `[swift]` - accessing the OpenStack Object Storage service
- `[cinder]` - accessing the OpenStack Block Storage service
- `[inspector]` - accessing the OpenStack Bare Metal introspection service
- `[service_catalog]` - a special entry that holds the credentials the Bare Metal service uses to discover its own API URL endpoint as registered in the OpenStack Identity service catalog

For simplicity, the same service user can be used for all services. For backward compatibility this should be the same user that is configured in `[keystone_authtoken]` for the ironic-api service. This is not mandatory, however; a different service user can be created and configured for each service.

In the following example, the authentication information used to access the OpenStack Networking service is configured so that:
- the Networking service is deployed in the identity service region named RegionOne, and only the public endpoint interface is registered in the service catalog;
- requests use a specific CA SSL certificate for HTTPS connections;
- the same service user as configured for the ironic-api service is used;
- the dynamic password authentication plugin discovers a suitable identity service API version based on the other options.

```
[neutron]

# Authentication type to load (string value)
auth_type = password

# Authentication URL (string value)
auth_url=https://IDENTITY_IP:5000/

# Username (string value)
username=ironic

# User's password (string value)
password=IRONIC_PASSWORD

# Project name to scope to (string value)
project_name=service

# Domain ID containing project (string value)
project_domain_id=default

# User's domain id (string value)
user_domain_id=default

# PEM encoded Certificate Authority to use when verifying
# HTTPs connections. (string value)
cafile=/opt/stack/data/ca-bundle.pem

# The default region_name for endpoint URL discovery. (string
# value)
region_name = RegionOne

# List of interfaces, in order of preference, for endpoint
# URL. (list value)
valid_interfaces=public
```

By default, in order to communicate with another service, the Bare Metal service tries to discover a suitable endpoint for that service through the service catalog of the identity service. If you want to use a different endpoint for a particular service, specify it with the `endpoint_override` option in the Bare Metal service configuration file:

```
[neutron]
...
endpoint_override = 
```

5. Configure the allowed drivers and hardware types.

Set the hardware types the ironic-conductor service is allowed to use with `enabled_hardware_types`:

```
[DEFAULT]
enabled_hardware_types = ipmi
```

Configure the hardware interfaces:

```
enabled_boot_interfaces = pxe
enabled_deploy_interfaces = direct,iscsi
enabled_inspect_interfaces = inspector
enabled_management_interfaces = ipmitool
enabled_power_interfaces = ipmitool
```

Configure the interface defaults:

```
[DEFAULT]
default_deploy_interface = direct
default_network_interface = neutron
```

If any driver that uses Direct deploy is enabled, the Swift backend of the Image service must be installed and configured. The Ceph Object Gateway (RADOS Gateway) is also supported as a backend of the Image service.

6. Restart the ironic-conductor service:

```shell
sudo systemctl restart openstack-ironic-conductor
```

Configure the httpd service.

Create the httpd root directory used by ironic and set its owner and group. The directory path must match the path specified by the `http_root` option in the `[deploy]` section of /etc/ironic/ironic.conf:

```shell
mkdir -p /var/lib/ironic/httproot
chown ironic.ironic /var/lib/ironic/httproot
```

Install and configure the httpd service.

1. Install the httpd service (skip this if it is already installed):

```shell
yum install httpd -y
```
2. Create the /etc/httpd/conf.d/openstack-ironic-httpd.conf file with the following content:

```
Listen 8080

ServerName ironic.openeuler.com

ErrorLog "/var/log/httpd/openstack-ironic-httpd-error_log"
CustomLog "/var/log/httpd/openstack-ironic-httpd-access_log" "%h %l %u %t \"%r\" %>s %b"

DocumentRoot "/var/lib/ironic/httproot"

Options Indexes FollowSymLinks
Require all granted
LogLevel warn
AddDefaultCharset UTF-8
EnableSendfile on
```

Note that the listening port must match the port specified in the `http_url` option of the `[deploy]` section in /etc/ironic/ironic.conf.

3. Restart the httpd service:

```shell
systemctl restart httpd
```

7. Build the deploy ramdisk image.

The Train ramdisk image can be built with the ironic-python-agent service or with the disk-image-builder tool, or with the community's latest ironic-python-agent-builder. Users can also choose other tools of their own.

If you use the native Train tools, install the corresponding package:

```shell
yum install openstack-ironic-python-agent
```

or

```shell
yum install diskimage-builder
```

For the detailed usage, refer to the official documentation.

The following describes the complete process of building the deploy image used by ironic with ironic-python-agent-builder.

Install ironic-python-agent-builder.

1. Install the tool:

```shell
pip install ironic-python-agent-builder
```

2. Modify the python interpreter in the following files:

```shell
/usr/bin/yum /usr/libexec/urlgrabber-ext-down
```
3. Install the other required tools:

```shell
yum install git
```

Because `DIB` depends on the `semanage` command, check whether that command is available before building the image: `semanage --help`. If the command is missing, install it:

```shell
# First find out which package provides it
[root@localhost ~]# yum provides /usr/sbin/semanage
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirror.vcu.edu
 * extras: mirror.vcu.edu
 * updates: mirror.math.princeton.edu
policycoreutils-python-2.5-34.el7.aarch64 : SELinux policy core python utilities
Repo        : base
Matched from:
Filename    : /usr/sbin/semanage

# Install it
[root@localhost ~]# yum install policycoreutils-python
```

Build the image.

For the `arm` architecture, the following must be added:

```shell
export ARCH=aarch64
```

Basic usage:

```shell
usage: ironic-python-agent-builder [-h] [-r RELEASE] [-o OUTPUT] [-e ELEMENT]
                                   [-b BRANCH] [-v] [--extra-args EXTRA_ARGS]
                                   distribution

positional arguments:
  distribution          Distribution to use

optional arguments:
  -h, --help            show this help message and exit
  -r RELEASE, --release RELEASE
                        Distribution release to use
  -o OUTPUT, --output OUTPUT
                        Output base file name
  -e ELEMENT, --element ELEMENT
                        Additional DIB element to use
  -b BRANCH, --branch BRANCH
                        If set, override the branch that is used for ironic-
                        python-agent and requirements
  -v, --verbose         Enable verbose logging in diskimage-builder
  --extra-args EXTRA_ARGS
                        Extra arguments to pass to diskimage-builder
```

Example:

```shell
ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky
```

Allow ssh login.

Initialize the environment variables, then build the image:

```shell
export DIB_DEV_USER_USERNAME=ipa
export DIB_DEV_USER_PWDLESS_SUDO=yes
export DIB_DEV_USER_PASSWORD='123'
ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky -e selinux-permissive -e devuser
```

Specify a code repository.

Initialize the corresponding environment variables, then build the image:

```shell
# Specify the repository address and version
DIB_REPOLOCATION_ironic_python_agent=git@172.20.2.149:liuzz/ironic-python-agent.git
DIB_REPOREF_ironic_python_agent=origin/develop

# Clone the code directly from gerrit
DIB_REPOLOCATION_ironic_python_agent=https://review.opendev.org/openstack/ironic-python-agent
DIB_REPOREF_ironic_python_agent=refs/changes/43/701043/1
```

Reference: [source-repositories](https://docs.openstack.org/diskimage-builder/latest/elements/source-repositories/README.html).

Specifying the repository address and version has been verified to work.

Note

The PXE configuration file templates in native OpenStack do not support the arm64 architecture, so the native OpenStack code has to be modified by the user:

In Train, the community ironic still does not support UEFI PXE boot on arm64; the symptom is that the generated grub.cfg file (usually under /tftpboot/) has the wrong format, so PXE boot fails
and users need to adapt the code that generates grub.cfg by themselves.

TLS errors when ironic sends status-query requests to IPA:

In Train, IPA and ironic both enable TLS authentication by default when sending requests to each other; disable it as described in the official documentation.

Modify the ironic configuration file (/etc/ironic/ironic.conf), adding `ipa-insecure=1` to the configuration below:

```
[agent]
verify_ca = False

[pxe]
pxe_append_params = nofb nomodeset vga=normal coreos.autologin ipa-insecure=1
```

Add the IPA configuration file /etc/ironic_python_agent/ironic_python_agent.conf in the ramdisk image and configure TLS as follows (the /etc/ironic_python_agent directory has to be created first):

```
[DEFAULT]
enable_auto_tls = False
```

Set the permissions:

```shell
chown -R ipa.ipa /etc/ironic_python_agent/
```

Modify the service file of the IPA service to add the configuration-file option:

```shell
vim usr/lib/systemd/system/ironic-python-agent.service

[Unit]
Description=Ironic Python Agent
After=network-online.target

[Service]
ExecStartPre=/sbin/modprobe vfat
ExecStart=/usr/local/bin/ironic-python-agent --config-file /etc/ironic_python_agent/ironic_python_agent.conf
Restart=always
RestartSec=30s

[Install]
WantedBy=multi-user.target
```

In Train, services such as ironic-inspector are also provided; users can install them according to their own needs.

### Kolla Installation

Kolla provides production-ready containerized deployment for OpenStack services.

Installing Kolla is very simple; only the corresponding RPM packages need to be installed:

```shell
yum install openstack-kolla openstack-kolla-ansible
```

After the installation, commands such as `kolla-ansible`, `kolla-build`, `kolla-genpwd`, and `kolla-mergepwd` can be used to build images and deploy the container environment; a brief sketch of a typical run follows below.
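As an orientation only, a typical all-in-one container deployment with these tools follows the flow sketched below. This is not part of the original guide: the inventory path is the one normally shipped with kolla-ansible (adjust it if your package installs it elsewhere), and /etc/kolla/globals.yml must be edited for your environment first:

```shell
# Generate random passwords for the services into /etc/kolla/passwords.yml
kolla-genpwd

# Edit /etc/kolla/globals.yml first (base distro, network interface, VIP address, ...),
# then deploy everything on the local node using the bundled all-in-one inventory
kolla-ansible -i /usr/share/kolla-ansible/ansible/inventory/all-in-one bootstrap-servers
kolla-ansible -i /usr/share/kolla-ansible/ansible/inventory/all-in-one prechecks
kolla-ansible -i /usr/share/kolla-ansible/ansible/inventory/all-in-one deploy
```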
### Trove Installation

Trove is the Database service of OpenStack. It is recommended if users want the database service provided by OpenStack; otherwise it does not need to be installed.

Set up the database.

The Database service stores information in a database. Create a `trove` database that the `trove` user can access, replacing `TROVE_DBPASSWORD` with a suitable password:

```
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE trove CHARACTER SET utf8;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'localhost' \
IDENTIFIED BY 'TROVE_DBPASSWORD';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'%' \
IDENTIFIED BY 'TROVE_DBPASSWORD';
```

Create the service user authentication.

1. Create the Trove service user:

```shell
openstack user create --domain default --password-prompt trove
openstack role add --project service --user trove admin
openstack service create --name trove --description "Database" database
```

Explanation: `TROVE_PASSWORD` is replaced with the password of the trove user.

2. Create the Database service endpoints:

```shell
openstack endpoint create --region RegionOne database public http://controller:8779/v1.0/%\(tenant_id\)s
openstack endpoint create --region RegionOne database internal http://controller:8779/v1.0/%\(tenant_id\)s
openstack endpoint create --region RegionOne database admin http://controller:8779/v1.0/%\(tenant_id\)s
```

Install and configure the Trove components.

1. Install the Trove packages:

```shell
yum install openstack-trove python3-troveclient
```

2. Configure `trove.conf`:

```shell
vim /etc/trove/trove.conf

[DEFAULT]
log_dir = /var/log/trove
trove_auth_url = http://controller:5000/
nova_compute_url = http://controller:8774/v2
cinder_url = http://controller:8776/v1
swift_url = http://controller:8080/v1/AUTH_
rpc_backend = rabbit
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672
auth_strategy = keystone
add_addresses = True
api_paste_config = /etc/trove/api-paste.ini
nova_proxy_admin_user = admin
nova_proxy_admin_pass = ADMIN_PASSWORD
nova_proxy_admin_tenant_name = service
taskmanager_manager = trove.taskmanager.manager.Manager
use_nova_server_config_drive = True
# Set these if using Neutron Networking
network_driver = trove.network.neutron.NeutronDriver
network_label_regex = .*

[database]
connection = mysql+pymysql://trove:TROVE_DBPASSWORD@controller/trove

[keystone_authtoken]
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = trove
password = TROVE_PASSWORD
```

Explanation:

- In the `[DEFAULT]` group, `nova_compute_url` and `cinder_url` are the endpoints created by Nova and Cinder in Keystone.
- `nova_proxy_XXX` is the information of a user that can access the Nova service; the example above uses the `admin` user.
- `transport_url` is the RabbitMQ connection information; `RABBIT_PASS` is replaced with the RabbitMQ password.
- `connection` in the `[database]` group is the information of the database created for Trove in mysql earlier.
- In the Trove user information, `TROVE_PASSWORD` is replaced with the actual password of the trove user.

3. Configure `trove-guestagent.conf`:

```shell
vim /etc/trove/trove-guestagent.conf

rabbit_host = controller
rabbit_password = RABBIT_PASS
trove_auth_url = http://controller:5000/
```

Explanation: the guest agent is a standalone Trove component that must be built into the VM images that Trove boots through Nova. After a database instance is created, the guest agent process starts and reports heartbeats to Trove over the message queue (RabbitMQ), so the RabbitMQ user and password have to be configured here.

Starting with the Victoria release, Trove uses a single unified image to run the different database types; the database service runs in a Docker container inside the guest VM.

- `RABBIT_PASS` is replaced with the RabbitMQ password.
## Swift Installation

Swift provides an elastic, scalable and highly available distributed object storage service, suitable for storing large amounts of unstructured data.

Create the service credentials and API endpoints.

Create the service credentials:

```shell
# Create the swift user:
openstack user create --domain default --password-prompt swift
# Add the admin role to the swift user:
openstack role add --project service --user swift admin
# Create the swift service entity:
openstack service create --name swift --description "OpenStack Object Storage" object-store
```

Create the swift API endpoints:

```shell
openstack endpoint create --region RegionOne object-store public http://controller:8080/v1/AUTH_%\(project_id\)s
openstack endpoint create --region RegionOne object-store internal http://controller:8080/v1/AUTH_%\(project_id\)s
openstack endpoint create --region RegionOne object-store admin http://controller:8080/v1
```

Install the packages:

```shell
yum install openstack-swift-proxy python3-swiftclient python3-keystoneclient python3-keystonemiddleware memcached   (CTL)
```

Configure proxy-server. The Swift RPM package already ships a basically usable proxy-server.conf; only the IP address and the swift password need to be edited by hand.

Note: replace the password with the one you chose for the swift user in the Identity service.

Install and configure the storage nodes (STG).

Install the supporting packages:

```shell
yum install xfsprogs rsync
```

Format the /dev/vdb and /dev/vdc devices as XFS:

```shell
mkfs.xfs /dev/vdb
mkfs.xfs /dev/vdc
```

Create the mount point directory structure:

```shell
mkdir -p /srv/node/vdb
mkdir -p /srv/node/vdc
```

Find the UUIDs of the new partitions:

```shell
blkid
```

Edit the /etc/fstab file and add the following to it:

```
UUID="" /srv/node/vdb xfs noatime 0 2
UUID="" /srv/node/vdc xfs noatime 0 2
```

Mount the devices:

```shell
mount /srv/node/vdb
mount /srv/node/vdc
```

Note: if you do not need the redundancy, creating a single device is enough, and the rsync configuration below can be skipped.

(Optional) Create or edit the /etc/rsyncd.conf file to contain the following:

```ini
[DEFAULT]
uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
address = MANAGEMENT_INTERFACE_IP_ADDRESS

[account]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/account.lock

[container]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/container.lock

[object]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/object.lock
```
Replace MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network on the storage node.

Start the rsyncd service and enable it at boot:

```shell
systemctl enable rsyncd.service
systemctl start rsyncd.service
```

Install and configure the components on the storage nodes (STG).

Install the packages:

```shell
yum install openstack-swift-account openstack-swift-container openstack-swift-object
```

Edit the account-server.conf, container-server.conf and object-server.conf files in the /etc/swift directory, replacing bind_ip with the management-network IP address of the storage node.

Ensure proper ownership of the mount point directory structure:

```shell
chown -R swift:swift /srv/node
```

Create the recon directory and ensure proper ownership of it:

```shell
mkdir -p /var/cache/swift
chown -R root:swift /var/cache/swift
chmod -R 775 /var/cache/swift
```

Create the account ring (CTL).

Change to the /etc/swift directory:

```shell
cd /etc/swift
```

Create the base account.builder file:

```shell
swift-ring-builder account.builder create 10 1 1
```

Add each storage node to the ring:

```shell
swift-ring-builder account.builder add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6202 --device DEVICE_NAME --weight DEVICE_WEIGHT
```

Replace STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS with the management-network IP address of the storage node and DEVICE_NAME with the name of a storage device on that node.

Note: repeat this command for every storage device on every storage node.

Verify the ring contents:

```shell
swift-ring-builder account.builder
```

Rebalance the ring:

```shell
swift-ring-builder account.builder rebalance
```

Create the container ring (CTL).

Change to the /etc/swift directory. Create the base container.builder file:

```shell
swift-ring-builder container.builder create 10 1 1
```

Add each storage node to the ring:

```shell
swift-ring-builder container.builder \
    add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6201 \
    --device DEVICE_NAME --weight 100
```

Replace STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS with the management-network IP address of the storage node and DEVICE_NAME with the name of a storage device on that node.

Note: repeat this command for every storage device on every storage node.

Verify the ring contents:

```shell
swift-ring-builder container.builder
```

Rebalance the ring:

```shell
swift-ring-builder container.builder rebalance
```

Create the object ring (CTL).

Change to the /etc/swift directory. Create the base object.builder file:

```shell
swift-ring-builder object.builder create 10 1 1
```

Add each storage node to the ring:

```shell
swift-ring-builder object.builder \
    add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6200 \
    --device DEVICE_NAME --weight 100
```
Replace STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS with the management-network IP address of the storage node and DEVICE_NAME with the name of a storage device on that node.

Note: repeat this command for every storage device on every storage node.

Verify the ring contents:

```shell
swift-ring-builder object.builder
```

Rebalance the ring:

```shell
swift-ring-builder object.builder rebalance
```

Distribute the ring configuration files: copy account.ring.gz, container.ring.gz and object.ring.gz to the /etc/swift directory on every storage node and on any other node running the proxy service.

Finish the installation. Edit the /etc/swift/swift.conf file:

```ini
[swift-hash]
swift_hash_path_suffix = test-hash
swift_hash_path_prefix = test-hash

[storage-policy:0]
name = Policy-0
default = yes
```

Replace test-hash with unique values.

Copy the swift.conf file to the /etc/swift directory on every storage node and on any other node running the proxy service.

On all nodes, ensure proper ownership of the configuration directory:

```shell
chown -R root:swift /etc/swift
```

On the controller node and on any other node running the proxy service, start the Object Storage proxy service and its dependencies and enable them at boot:

```shell
systemctl enable openstack-swift-proxy.service memcached.service
systemctl start openstack-swift-proxy.service memcached.service
```

On the storage nodes, start the Object Storage services and enable them at boot:

```shell
systemctl enable openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service
systemctl start openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service
systemctl enable openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service
systemctl start openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service
systemctl enable openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service
systemctl start openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service
```
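For a quick smoke test of the object store, something like the following can be used once the services are up (container and object names are examples):

```shell
source ~/.admin-openrc
# Create a container, upload a small file into it as an object, then list it back
openstack container create demo-container
openstack object create demo-container /etc/hosts
openstack object list demo-container
```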
## Cyborg Installation

Cyborg provides accelerator device support for OpenStack, including GPUs, FPGAs, ASICs, NPs, SoCs, NVMe/NOF SSDs, ODP, DPDK/SPDK and so on.

Initialize the corresponding database:

```sql
CREATE DATABASE cyborg;
GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'localhost' IDENTIFIED BY 'CYBORG_DBPASS';
GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'%' IDENTIFIED BY 'CYBORG_DBPASS';
```

Create the corresponding Keystone resources:

```shell
openstack user create --domain default --password-prompt cyborg
openstack role add --project service --user cyborg admin
openstack service create --name cyborg --description "Acceleration Service" accelerator
openstack endpoint create --region RegionOne \
    accelerator public http://:6666/v1
openstack endpoint create --region RegionOne \
    accelerator internal http://:6666/v1
openstack endpoint create --region RegionOne \
    accelerator admin http://:6666/v1
```

Install Cyborg:

```shell
yum install openstack-cyborg
```

Configure Cyborg by editing /etc/cyborg/cyborg.conf:

```ini
[DEFAULT]
transport_url = rabbit://%RABBITMQ_USER%:%RABBITMQ_PASSWORD%@%OPENSTACK_HOST_IP%:5672/
use_syslog = False
state_path = /var/lib/cyborg
debug = True

[database]
connection = mysql+pymysql://%DATABASE_USER%:%DATABASE_PASSWORD%@%OPENSTACK_HOST_IP%/cyborg

[service_catalog]
project_domain_id = default
user_domain_id = default
project_name = service
password = PASSWORD
username = cyborg
auth_url = http://%OPENSTACK_HOST_IP%/identity
auth_type = password

[placement]
project_domain_name = Default
project_name = service
user_domain_name = Default
password = PASSWORD
username = placement
auth_url = http://%OPENSTACK_HOST_IP%/identity
auth_type = password

[keystone_authtoken]
memcached_servers = localhost:11211
project_domain_name = Default
project_name = service
user_domain_name = Default
password = PASSWORD
username = cyborg
auth_url = http://%OPENSTACK_HOST_IP%/identity
auth_type = password
```

Adjust the usernames, passwords, IP addresses and so on to match your environment.

Synchronize the database tables:

```shell
cyborg-dbsync --config-file /etc/cyborg/cyborg.conf upgrade
```

Start the Cyborg services:

```shell
systemctl enable openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent
systemctl start openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent
```

## Aodh Installation

Create the database:

```sql
CREATE DATABASE aodh;
GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'localhost' IDENTIFIED BY 'AODH_DBPASS';
GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'%' IDENTIFIED BY 'AODH_DBPASS';
```

Create the corresponding Keystone resources:

```shell
openstack user create --domain default --password-prompt aodh
openstack role add --project service --user aodh admin
openstack service create --name aodh --description "Telemetry" alarming
openstack endpoint create --region RegionOne alarming public http://controller:8042
openstack endpoint create --region RegionOne alarming internal http://controller:8042
openstack endpoint create --region RegionOne alarming admin http://controller:8042
```

Install Aodh:

```shell
yum install openstack-aodh-api openstack-aodh-evaluator openstack-aodh-notifier openstack-aodh-listener openstack-aodh-expirer python3-aodhclient
```

Edit the configuration file:

```ini
[database]
connection = mysql+pymysql://aodh:AODH_DBPASS@controller/aodh

[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = aodh
password = AODH_PASS

[service_credentials]
auth_type = password
auth_url = http://controller:5000/v3
project_domain_id = default
user_domain_id = default
project_name = service
username = aodh
password = AODH_PASS
interface = internalURL
region_name = RegionOne
```

Initialize the database:

```shell
aodh-dbsync
```

Start the Aodh services:

```shell
systemctl enable openstack-aodh-api.service openstack-aodh-evaluator.service openstack-aodh-notifier.service openstack-aodh-listener.service
systemctl start openstack-aodh-api.service openstack-aodh-evaluator.service openstack-aodh-notifier.service openstack-aodh-listener.service
```
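As a small, hedged example of exercising the alarming service (the alarm name, event type and log:// action are illustrative; the exact flags depend on the aodhclient release):

```shell
source ~/.admin-openrc
# A fresh deployment should simply return an empty alarm list
openstack alarm list
# Create an event alarm that is triggered when an instance finishes powering off
openstack alarm create --name instance-power-off --type event \
    --event-type compute.instance.power_off.end \
    --alarm-action 'log://'
```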
## Gnocchi Installation

Create the database:

```sql
CREATE DATABASE gnocchi;
GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'localhost' IDENTIFIED BY 'GNOCCHI_DBPASS';
GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'%' IDENTIFIED BY 'GNOCCHI_DBPASS';
```

Create the corresponding Keystone resources:

```shell
openstack user create --domain default --password-prompt gnocchi
openstack role add --project service --user gnocchi admin
openstack service create --name gnocchi --description "Metric Service" metric
openstack endpoint create --region RegionOne metric public http://controller:8041
openstack endpoint create --region RegionOne metric internal http://controller:8041
openstack endpoint create --region RegionOne metric admin http://controller:8041
```

Install Gnocchi:

```shell
yum install openstack-gnocchi-api openstack-gnocchi-metricd python3-gnocchiclient
```

Edit the configuration file /etc/gnocchi/gnocchi.conf:

```ini
[api]
auth_mode = keystone
port = 8041
uwsgi_mode = http-socket

[keystone_authtoken]
auth_type = password
auth_url = http://controller:5000/v3
project_domain_name = Default
user_domain_name = Default
project_name = service
username = gnocchi
password = GNOCCHI_PASS
interface = internalURL
region_name = RegionOne

[indexer]
url = mysql+pymysql://gnocchi:GNOCCHI_DBPASS@controller/gnocchi

[storage]
# coordination_url is not required but specifying one will improve
# performance with better workload division across workers.
coordination_url = redis://controller:6379
file_basepath = /var/lib/gnocchi
driver = file
```

Initialize the database:

```shell
gnocchi-upgrade
```

Start the Gnocchi services:

```shell
systemctl enable openstack-gnocchi-api.service openstack-gnocchi-metricd.service
systemctl start openstack-gnocchi-api.service openstack-gnocchi-metricd.service
```
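A minimal sketch of pushing a measure by hand with the `gnocchi` CLI from python3-gnocchiclient (the metric name and value are examples, and the admin credentials are assumed to be loaded):

```shell
source ~/.admin-openrc
# List the built-in archive policies, create a metric with the "low" policy,
# then add one measure to it and read it back
gnocchi archive-policy list
gnocchi metric create --archive-policy-name low demo_metric
gnocchi measures add demo_metric --measure "$(date -u +%Y-%m-%dT%H:%M:%S)@42"
gnocchi measures show demo_metric
```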
## Ceilometer Installation

Create the corresponding Keystone resources:

```shell
openstack user create --domain default --password-prompt ceilometer
openstack role add --project service --user ceilometer admin
openstack service create --name ceilometer --description "Telemetry" metering
```

Install Ceilometer:

```shell
yum install openstack-ceilometer-notification openstack-ceilometer-central
```

Edit the configuration file /etc/ceilometer/pipeline.yaml:

```yaml
publishers:
    # set address of Gnocchi
    # + filter out Gnocchi-related activity meters (Swift driver)
    # + set default archive policy
    - gnocchi://?filter_project=service&archive_policy=low
```

Edit the configuration file /etc/ceilometer/ceilometer.conf:

```ini
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller

[service_credentials]
auth_type = password
auth_url = http://controller:5000/v3
project_domain_id = default
user_domain_id = default
project_name = service
username = ceilometer
password = CEILOMETER_PASS
interface = internalURL
region_name = RegionOne
```

Initialize the database:

```shell
ceilometer-upgrade
```

Start the Ceilometer services:

```shell
systemctl enable openstack-ceilometer-notification.service openstack-ceilometer-central.service
systemctl start openstack-ceilometer-notification.service openstack-ceilometer-central.service
```
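Once the polling agents have run for an interval, Ceilometer should start publishing samples into Gnocchi; a rough way to check this (which resources and metrics appear depends on the polling configuration and on what is running in the cloud):

```shell
source ~/.admin-openrc
# Resources and metrics created by Ceilometer should show up here after a while
gnocchi resource list
gnocchi metric list
```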
## Heat Installation

Create the heat database and grant it the proper access, replacing HEAT_DBPASS with a suitable password:

```sql
CREATE DATABASE heat;
GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'localhost' IDENTIFIED BY 'HEAT_DBPASS';
GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'%' IDENTIFIED BY 'HEAT_DBPASS';
```

Create the service credentials: create the heat user and add the admin role to it:

```shell
openstack user create --domain default --password-prompt heat
openstack role add --project service --user heat admin
```

Create the heat and heat-cfn services and their API endpoints:

```shell
openstack service create --name heat --description "Orchestration" orchestration
openstack service create --name heat-cfn --description "Orchestration" cloudformation
openstack endpoint create --region RegionOne orchestration public http://controller:8004/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne orchestration internal http://controller:8004/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne orchestration admin http://controller:8004/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne cloudformation public http://controller:8000/v1
openstack endpoint create --region RegionOne cloudformation internal http://controller:8000/v1
openstack endpoint create --region RegionOne cloudformation admin http://controller:8000/v1
```

Create the additional information needed for stack management, including the heat domain with its domain admin user heat_domain_admin, the heat_stack_owner role and the heat_stack_user role:

```shell
# Create the heat domain first if it does not exist yet
openstack domain create heat
openstack user create --domain heat --password-prompt heat_domain_admin
openstack role add --domain heat --user-domain heat --user heat_domain_admin admin
openstack role create heat_stack_owner
openstack role create heat_stack_user
```

Install the packages:

```shell
yum install openstack-heat-api openstack-heat-api-cfn openstack-heat-engine
```

Edit the configuration file /etc/heat/heat.conf:

```ini
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
heat_metadata_server_url = http://controller:8000
heat_waitcondition_server_url = http://controller:8000/v1/waitcondition
stack_domain_admin = heat_domain_admin
stack_domain_admin_password = HEAT_DOMAIN_PASS
stack_user_domain_name = heat

[database]
connection = mysql+pymysql://heat:HEAT_DBPASS@controller/heat

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = heat
password = HEAT_PASS

[trustee]
auth_type = password
auth_url = http://controller:5000
username = heat
password = HEAT_PASS
user_domain_name = default

[clients_keystone]
auth_uri = http://controller:5000
```

Initialize the heat database tables:

```shell
su -s /bin/sh -c "heat-manage db_sync" heat
```

Start the services:

```shell
systemctl enable openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service
systemctl start openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service
```
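A small, self-contained smoke test for the orchestration service can be a stack that only creates a random string (the template and stack names are examples):

```shell
# Write a minimal HOT template and launch it as a stack
cat > test-stack.yaml << 'EOF'
heat_template_version: 2018-08-31
resources:
  demo_secret:
    type: OS::Heat::RandomString
    properties:
      length: 16
outputs:
  secret:
    value: {get_attr: [demo_secret, value]}
EOF
openstack stack create -t test-stack.yaml demo-stack
openstack stack list
```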
# OpenStack-Train Deployment Guide

Contents: OpenStack overview; conventions; preparing the environment; environment configuration; installing the SQL database, RabbitMQ and Memcached; installing OpenStack: Keystone, Glance, Placement, Nova, Neutron, Cinder, Horizon, Tempest, Ironic, Kolla, Trove, Swift, Cyborg, Aodh, Gnocchi, Ceilometer and Heat.

## OpenStack Overview

OpenStack is both a community and a project. It provides an operating platform and a tool set for deploying clouds, giving organizations scalable and flexible cloud computing. As an open source cloud management platform, OpenStack combines several major components such as nova, cinder, neutron, glance, keystone and horizon to do the actual work. OpenStack supports almost every type of cloud environment; the project aims to provide a cloud management platform that is simple to deploy, massively scalable, feature-rich and consistently standardized. OpenStack delivers an Infrastructure-as-a-Service (IaaS) solution through a set of complementary services, each of which exposes an API for integration.

The official repositories of the openEuler 20.03-LTS-SP3 release already support OpenStack-Train; once the yum repositories are configured, users can deploy OpenStack by following this document.

## Conventions

OpenStack supports several deployment topologies. This document covers both the ALL in One and the Distributed deployment modes, with the following conventions:

- ALL in One mode: ignore all possible suffixes.
- Distributed mode:
  - a `(CTL)` suffix means the configuration item or command applies only to the `controller node`;
  - a `(CPT)` suffix means the configuration item or command applies only to the `compute node`;
  - a `(STG)` suffix means the configuration item or command applies only to the `storage node`;
  - everything else applies to both the `controller node` and the `compute node`.

Note: the services affected by these conventions are Cinder, Nova and Neutron.

## Preparing the Environment

### Environment Configuration

Configure the official 20.03-LTS-SP3 yum repositories; the EPOL repository must be enabled to support OpenStack:

```shell
cat << EOF >> /etc/yum.repos.d/20.03-LTS-SP3-OpenStack_Train.repo
[OS]
name=OS
baseurl=http://repo.openeuler.org/openEuler-20.03-LTS-SP3/OS/\$basearch/
enabled=1
gpgcheck=1
gpgkey=http://repo.openeuler.org/openEuler-20.03-LTS-SP3/OS/\$basearch/RPM-GPG-KEY-openEuler

[everything]
name=everything
baseurl=http://repo.openeuler.org/openEuler-20.03-LTS-SP3/everything/\$basearch/
enabled=1
gpgcheck=1
gpgkey=http://repo.openeuler.org/openEuler-20.03-LTS-SP3/everything/\$basearch/RPM-GPG-KEY-openEuler

[EPOL]
name=EPOL
baseurl=http://repo.openeuler.org/openEuler-20.03-LTS-SP3/EPOL/main/\$basearch/
enabled=1
gpgcheck=1
gpgkey=http://repo.openeuler.org/openEuler-20.03-LTS-SP3/OS/\$basearch/RPM-GPG-KEY-openEuler
EOF

yum clean all && yum makecache
```

Set the host names and host mappings. Set each node's host name:

```shell
hostnamectl set-hostname controller   (CTL)
hostnamectl set-hostname compute      (CPT)
```

Assuming the controller node's IP is 10.0.0.11 and the compute node's IP (if one exists) is 10.0.0.12, add the following to /etc/hosts:

```
10.0.0.11 controller
10.0.0.12 compute
```

## Installing the SQL Database

Run the following command to install the packages:

```shell
yum install mariadb mariadb-server python3-PyMySQL
```

Run the following command to create and edit the /etc/my.cnf.d/openstack.cnf file:

```shell
vim /etc/my.cnf.d/openstack.cnf

[mysqld]
bind-address = 10.0.0.11
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
```

Note: set bind-address to the management IP address of the controller node.
Start the DataBase service and enable it at boot:

```shell
systemctl enable mariadb.service
systemctl start mariadb.service
```

Set the default password for the DataBase (optional):

```shell
mysql_secure_installation
```

Note: simply follow the prompts.

## Installing RabbitMQ

Run the following command to install the packages:

```shell
yum install rabbitmq-server
```

Start the RabbitMQ service and enable it at boot:

```shell
systemctl enable rabbitmq-server.service
systemctl start rabbitmq-server.service
```

Add the OpenStack user:

```shell
rabbitmqctl add_user openstack RABBIT_PASS
```

Note: replace RABBIT_PASS with the password you choose for the OpenStack user.

Give the openstack user permission to configure, write and read:

```shell
rabbitmqctl set_permissions openstack ".*" ".*" ".*"
```

## Installing Memcached

Run the following command to install the dependency packages:

```shell
yum install memcached python3-memcached
```

Edit the /etc/sysconfig/memcached file:

```shell
vim /etc/sysconfig/memcached

OPTIONS="-l 127.0.0.1,::1,controller"
```

Run the following commands to start the Memcached service and enable it at boot:

```shell
systemctl enable memcached.service
systemctl start memcached.service
```

Note: after the service starts, you can run `memcached-tool controller stats` to make sure it started correctly and is available; `controller` can be replaced with the management IP address of the controller node.

## Installing OpenStack

### Keystone Installation

Create the keystone database and grant privileges:

```shell
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE keystone;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
    IDENTIFIED BY 'KEYSTONE_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
    IDENTIFIED BY 'KEYSTONE_DBPASS';
MariaDB [(none)]> exit
```

Note: replace KEYSTONE_DBPASS with the password you choose for the Keystone database.

Install the packages:

```shell
yum install openstack-keystone httpd mod_wsgi
```

Configure keystone:

```shell
vim /etc/keystone/keystone.conf

[database]
connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone

[token]
provider = fernet
```

Explanation: the [database] section configures the database entry point; the [token] section configures the token provider.

Note: replace KEYSTONE_DBPASS with the password of the Keystone database.

Synchronize the database:

```shell
su -s /bin/sh -c "keystone-manage db_sync" keystone
```

Initialize the Fernet key repositories:

```shell
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
```

Bootstrap the service:

```shell
keystone-manage bootstrap --bootstrap-password ADMIN_PASS \
    --bootstrap-admin-url http://controller:5000/v3/ \
    --bootstrap-internal-url http://controller:5000/v3/ \
    --bootstrap-public-url http://controller:5000/v3/ \
    --bootstrap-region-id RegionOne
```
Note: replace ADMIN_PASS with the password you set for the admin user.

Configure the Apache HTTP server:

```shell
vim /etc/httpd/conf/httpd.conf

ServerName controller
```

```shell
ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
```

Explanation: the ServerName entry must reference the controller node.

Note: if the ServerName entry does not exist, it has to be created.

Start the Apache HTTP service:

```shell
systemctl enable httpd.service
systemctl start httpd.service
```

Create the environment variable configuration:

```shell
cat << EOF >> ~/.admin-openrc
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
EOF
```

Note: replace ADMIN_PASS with the password of the admin user.

Create the domain, projects, users and roles in turn; python3-openstackclient must be installed first:

```shell
yum install python3-openstackclient
```

Import the environment variables:

```shell
source ~/.admin-openrc
```

Create the project `service`; the domain `default` was already created during keystone-manage bootstrap:

```shell
openstack domain create --description "An Example Domain" example
openstack project create --domain default --description "Service Project" service
```

Create the (non-admin) project `myproject`, the user `myuser` and the role `myrole`, then add the role `myrole` to `myproject` and `myuser`:

```shell
openstack project create --domain default --description "Demo Project" myproject
openstack user create --domain default --password-prompt myuser
openstack role create myrole
openstack role add --project myproject --user myuser myrole
```

Verification: unset the temporary OS_AUTH_URL and OS_PASSWORD environment variables:

```shell
source ~/.admin-openrc
unset OS_AUTH_URL OS_PASSWORD
```

Request a token for the admin user:

```shell
openstack --os-auth-url http://controller:5000/v3 \
    --os-project-domain-name Default --os-user-domain-name Default \
    --os-project-name admin --os-username admin token issue
```

Request a token for the myuser user:

```shell
openstack --os-auth-url http://controller:5000/v3 \
    --os-project-domain-name Default --os-user-domain-name Default \
    --os-project-name myproject --os-username myuser token issue
```
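For day-to-day testing it can also help to keep a second credentials file for the unprivileged account created above; a sketch mirroring ~/.admin-openrc, where MYUSER_PASS stands for the password you chose for myuser:

```shell
cat << EOF >> ~/.demo-openrc
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=myproject
export OS_USERNAME=myuser
export OS_PASSWORD=MYUSER_PASS
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
EOF
```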
### Glance Installation

Create the database, service credentials and API endpoints.

Create the database:

```shell
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE glance;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
    IDENTIFIED BY 'GLANCE_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
    IDENTIFIED BY 'GLANCE_DBPASS';
MariaDB [(none)]> exit
```

Note: replace GLANCE_DBPASS with the password you choose for the glance database.

Create the service credentials:

```shell
source ~/.admin-openrc

openstack user create --domain default --password-prompt glance
openstack role add --project service --user glance admin
openstack service create --name glance --description "OpenStack Image" image
```

Create the Image service API endpoints:

```shell
openstack endpoint create --region RegionOne image public http://controller:9292
openstack endpoint create --region RegionOne image internal http://controller:9292
openstack endpoint create --region RegionOne image admin http://controller:9292
```

Install the packages:

```shell
yum install openstack-glance
```

Configure glance:

```shell
vim /etc/glance/glance-api.conf

[database]
connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = GLANCE_PASS

[paste_deploy]
flavor = keystone

[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
```

Explanation: the [database] section configures the database entry point; the [keystone_authtoken] and [paste_deploy] sections configure the Identity service entry point; the [glance_store] section configures the local file system store and the location of image files.

Note: replace GLANCE_DBPASS with the password of the glance database and GLANCE_PASS with the password of the glance user.

Synchronize the database:

```shell
su -s /bin/sh -c "glance-manage db_sync" glance
```

Start the service:

```shell
systemctl enable openstack-glance-api.service
systemctl start openstack-glance-api.service
```

Verification: download an image:

```shell
source ~/.admin-openrc
wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
```

Note: if your environment runs on Kunpeng (aarch64), download the aarch64 version of the image; the image cirros-0.5.2-aarch64-disk.img has been tested.

Upload the image to the Image service:

```shell
openstack image create --disk-format qcow2 --container-format bare \
    --file cirros-0.4.0-x86_64-disk.img --public cirros
```

Confirm that the image was uploaded and verify its attributes:

```shell
openstack image list
```

### Placement Installation

Create the database, service credentials and API endpoints.

Create the database: access the database as the root user, create the placement database and grant privileges:

```shell
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE placement;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' \
    IDENTIFIED BY 'PLACEMENT_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' \
    IDENTIFIED BY 'PLACEMENT_DBPASS';
MariaDB [(none)]> exit
```

Note: replace PLACEMENT_DBPASS with the password you choose for the placement database.

```shell
source admin-openrc
```

Run the following commands to create the placement service credentials, create the placement user and add the admin role to the placement user.

Create the Placement API service:

```shell
openstack user create --domain default --password-prompt placement
openstack role add --project service --user placement admin
openstack service create --name placement --description "Placement API" placement
```
Create the placement service API endpoints:

```shell
openstack endpoint create --region RegionOne placement public http://controller:8778
openstack endpoint create --region RegionOne placement internal http://controller:8778
openstack endpoint create --region RegionOne placement admin http://controller:8778
```

Install and configure the service.

Install the packages:

```shell
yum install openstack-placement-api
```

Configure placement by editing the /etc/placement/placement.conf file: in the [placement_database] section, configure the database entry point; in the [api] and [keystone_authtoken] sections, configure the Identity service entry point:

```shell
# vim /etc/placement/placement.conf

[placement_database]
# ...
connection = mysql+pymysql://placement:PLACEMENT_DBPASS@controller/placement

[api]
# ...
auth_strategy = keystone

[keystone_authtoken]
# ...
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = placement
password = PLACEMENT_PASS
```

Here, replace PLACEMENT_DBPASS with the password of the placement database and PLACEMENT_PASS with the password of the placement user.

Synchronize the database:

```shell
su -s /bin/sh -c "placement-manage db sync" placement
```

Restart the httpd service:

```shell
systemctl restart httpd
```

Verification: run a status check:

```shell
. admin-openrc
placement-status upgrade check
```

Install osc-placement and list the available resource classes and traits:

```shell
yum install python3-osc-placement
openstack --os-placement-api-version 1.2 resource class list --sort-column name
openstack --os-placement-api-version 1.6 trait list --sort-column name
```

### Nova Installation

Create the database, service credentials and API endpoints.

Create the databases:

```shell
mysql -u root -p                                                    (CTL)

MariaDB [(none)]> CREATE DATABASE nova_api;
MariaDB [(none)]> CREATE DATABASE nova;
MariaDB [(none)]> CREATE DATABASE nova_cell0;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \
    IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \
    IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
    IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
    IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \
    IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \
    IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> exit
```

Note: replace NOVA_DBPASS with the password you choose for the nova databases.

```shell
source ~/.admin-openrc                                              (CTL)
```

Create the nova service credentials:

```shell
openstack user create --domain default --password-prompt nova       (CTL)
openstack role add --project service --user nova admin              (CTL)
openstack service create --name nova --description "OpenStack Compute" compute   (CTL)
```

Create the nova API endpoints:

```shell
openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1     (CTL)
openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1   (CTL)
openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1      (CTL)
```
Install the packages:

```shell
yum install openstack-nova-api openstack-nova-conductor \            (CTL)
            openstack-nova-novncproxy openstack-nova-scheduler

yum install openstack-nova-compute                                   (CPT)
```

Note: on arm64 you also need to run the following command:

```shell
yum install edk2-aarch64                                             (CPT)
```

Configure nova:

```shell
vim /etc/nova/nova.conf

[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
my_ip = 10.0.0.11
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver
compute_driver = libvirt.LibvirtDriver                               (CPT)
instances_path = /var/lib/nova/instances/                            (CPT)
lock_path = /var/lib/nova/tmp                                        (CPT)
logdir = /var/log/nova/

[api_database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api    (CTL)

[database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova        (CTL)

[api]
auth_strategy = keystone

[keystone_authtoken]
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = NOVA_PASS

[vnc]
enabled = true
server_listen = $my_ip
server_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html           (CPT)

[glance]
api_servers = http://controller:9292

[oslo_concurrency]
lock_path = /var/lib/nova/tmp                                        (CTL)

[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = PLACEMENT_PASS

[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
service_metadata_proxy = true                                        (CTL)
metadata_proxy_shared_secret = METADATA_SECRET                       (CTL)
```

Explanation: the [DEFAULT] section enables the compute and metadata APIs, configures the RabbitMQ message queue entry point, sets my_ip and enables the Neutron network service; the [api_database] and [database] sections configure the database entry points; the [api] and [keystone_authtoken] sections configure the Identity service entry point; the [vnc] section enables and configures the remote console entry point; the [glance] section configures the Image service API address; the [oslo_concurrency] section configures the lock path; the [placement] section configures the Placement service entry point.

Note: replace RABBIT_PASS with the password of the openstack account in RabbitMQ; set my_ip to the management IP address of the controller node; replace NOVA_DBPASS with the password of the nova databases; replace NOVA_PASS with the password of the nova user; replace PLACEMENT_PASS with the password of the placement user; replace NEUTRON_PASS with the password of the neutron user; replace METADATA_SECRET with a suitable metadata proxy secret.
Additional steps.

Determine whether the host supports hardware acceleration for virtual machines (x86):

```shell
egrep -c '(vmx|svm)' /proc/cpuinfo                                   (CPT)
```

If the command returns 0, hardware acceleration is not supported and libvirt must be configured to use QEMU instead of KVM:

```shell
vim /etc/nova/nova.conf                                              (CPT)

[libvirt]
virt_type = qemu
```

If the command returns 1 or more, hardware acceleration is supported and virt_type can be set to kvm.

Note: on arm64 you also need to run the following on the compute node:

```shell
mkdir -p /usr/share/AAVMF
chown nova:nova /usr/share/AAVMF

ln -s /usr/share/edk2/aarch64/QEMU_EFI-pflash.raw \
      /usr/share/AAVMF/AAVMF_CODE.fd
ln -s /usr/share/edk2/aarch64/vars-template-pflash.raw \
      /usr/share/AAVMF/AAVMF_VARS.fd

vim /etc/libvirt/qemu.conf

nvram = ["/usr/share/AAVMF/AAVMF_CODE.fd: \
          /usr/share/AAVMF/AAVMF_VARS.fd", \
         "/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw: \
          /usr/share/edk2/aarch64/vars-template-pflash.raw"]
```

In addition, when the ARM deployment environment uses nested virtualization, configure libvirt as follows:

```ini
[libvirt]
virt_type = qemu
cpu_mode = custom
cpu_model = cortex-a72
```

Synchronize the databases.

Synchronize the nova-api database:

```shell
su -s /bin/sh -c "nova-manage api_db sync" nova                      (CTL)
```

Register the cell0 database:

```shell
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova                (CTL)
```

Create the cell1 cell:

```shell
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova   (CTL)
```

Synchronize the nova database:

```shell
su -s /bin/sh -c "nova-manage db sync" nova                          (CTL)
```

Verify that cell0 and cell1 are registered correctly:

```shell
su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova               (CTL)
```

Add the compute node to the OpenStack cluster:

```shell
su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova   (CTL)
```

Start the services:

```shell
systemctl enable \                                                   (CTL)
    openstack-nova-api.service \
    openstack-nova-scheduler.service \
    openstack-nova-conductor.service \
    openstack-nova-novncproxy.service
systemctl start \                                                    (CTL)
    openstack-nova-api.service \
    openstack-nova-scheduler.service \
    openstack-nova-conductor.service \
    openstack-nova-novncproxy.service

systemctl enable libvirtd.service openstack-nova-compute.service     (CPT)
systemctl start libvirtd.service openstack-nova-compute.service      (CPT)
```

Verification:

```shell
source ~/.admin-openrc                                               (CTL)
```

List the service components to verify that each process started and registered successfully:

```shell
openstack compute service list                                       (CTL)
```

List the API endpoints in the Identity service to verify connectivity with the Identity service:

```shell
openstack catalog list                                               (CTL)
```

List the images in the Image service to verify connectivity with the Image service:

```shell
openstack image list                                                 (CTL)
```

Check whether the cells are working and whether the other prerequisites are in place:

```shell
nova-status upgrade check                                            (CTL)
```
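At this point a test flavor and keypair can already be prepared (the names, sizes and the ~/.ssh/id_rsa.pub path are examples; an actual server can only be booted once the Neutron networks in the next section exist):

```shell
source ~/.admin-openrc
# Create a small flavor and register an existing public key as a keypair
openstack flavor create --vcpus 1 --ram 512 --disk 1 m1.tiny
openstack keypair create --public-key ~/.ssh/id_rsa.pub mykey
```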
### Neutron Installation

Create the database, service credentials and API endpoints.

Create the database:

```shell
mysql -u root -p                                                     (CTL)

MariaDB [(none)]> CREATE DATABASE neutron;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
    IDENTIFIED BY 'NEUTRON_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
    IDENTIFIED BY 'NEUTRON_DBPASS';
MariaDB [(none)]> exit
```

Note: replace NEUTRON_DBPASS with the password you choose for the neutron database.

```shell
source ~/.admin-openrc                                               (CTL)
```

Create the neutron service credentials:

```shell
openstack user create --domain default --password-prompt neutron     (CTL)
openstack role add --project service --user neutron admin            (CTL)
openstack service create --name neutron --description "OpenStack Networking" network   (CTL)
```

Create the Neutron service API endpoints:

```shell
openstack endpoint create --region RegionOne network public http://controller:9696     (CTL)
openstack endpoint create --region RegionOne network internal http://controller:9696   (CTL)
openstack endpoint create --region RegionOne network admin http://controller:9696      (CTL)
```

Install the packages:

```shell
yum install openstack-neutron openstack-neutron-linuxbridge ebtables ipset \   (CTL)
            openstack-neutron-ml2

yum install openstack-neutron-linuxbridge ebtables ipset              (CPT)
```

Configure neutron.

Configure the main configuration file:

```shell
vim /etc/neutron/neutron.conf

[database]
connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron   (CTL)

[DEFAULT]
core_plugin = ml2                                                     (CTL)
service_plugins = router                                              (CTL)
allow_overlapping_ips = true                                          (CTL)
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = true                             (CTL)
notify_nova_on_port_data_changes = true                               (CTL)
api_workers = 3                                                       (CTL)

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = neutron
password = NEUTRON_PASS

[nova]
auth_url = http://controller:5000                                     (CTL)
auth_type = password                                                  (CTL)
project_domain_name = Default                                         (CTL)
user_domain_name = Default                                            (CTL)
region_name = RegionOne                                               (CTL)
project_name = service                                                (CTL)
username = nova                                                       (CTL)
password = NOVA_PASS                                                  (CTL)

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
```

Explanation: the [database] section configures the database entry point; the [DEFAULT] section enables the ml2 and router plugins, allows overlapping IP addresses and configures the RabbitMQ message queue entry point; the [DEFAULT] and [keystone_authtoken] sections configure the Identity service entry point; the [DEFAULT] and [nova] sections configure the network to notify Compute of network topology changes; the [oslo_concurrency] section configures the lock path.

Note: replace NEUTRON_DBPASS with the password of the neutron database; replace RABBIT_PASS with the password of the openstack account in RabbitMQ; replace NEUTRON_PASS with the password of the neutron user; replace NOVA_PASS with the password of the nova user.

Configure the ML2 plugin:

```shell
vim /etc/neutron/plugins/ml2/ml2_conf.ini                              (CTL)

[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security

[ml2_type_flat]
flat_networks = provider

[ml2_type_vxlan]
vni_ranges = 1:1000

[securitygroup]
enable_ipset = true
```
Create the symbolic link /etc/neutron/plugin.ini:

```shell
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
```

Note: the [ml2] section enables flat, vlan and vxlan networks, enables the linuxbridge and l2population mechanisms and enables the port security extension driver; the [ml2_type_flat] section configures the flat network as the provider virtual network; the [ml2_type_vxlan] section configures the VXLAN network identifier range; the [securitygroup] section enables ipset.

Supplement: the concrete l2 configuration can be adjusted to your own needs; this document uses provider network + linuxbridge.

Configure the Linux bridge agent:

```shell
vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini

[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME

[vxlan]
enable_vxlan = true
local_ip = OVERLAY_INTERFACE_IP_ADDRESS
l2_population = true

[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
```

Explanation: the [linux_bridge] section maps the provider virtual network onto a physical network interface; the [vxlan] section enables the vxlan overlay network, configures the IP address of the physical interface that handles overlay traffic and enables layer-2 population; the [securitygroup] section enables security groups and configures the linux bridge iptables firewall driver.

Note: replace PROVIDER_INTERFACE_NAME with the physical network interface; replace OVERLAY_INTERFACE_IP_ADDRESS with the management IP address of the controller node.

Configure the Layer-3 agent:

```shell
vim /etc/neutron/l3_agent.ini                                          (CTL)

[DEFAULT]
interface_driver = linuxbridge
```

Explanation: in the [DEFAULT] section, set the interface driver to linuxbridge.

Configure the DHCP agent:

```shell
vim /etc/neutron/dhcp_agent.ini                                        (CTL)

[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
```

Explanation: the [DEFAULT] section configures the linuxbridge interface driver and the Dnsmasq DHCP driver and enables isolated metadata.

Configure the metadata agent:

```shell
vim /etc/neutron/metadata_agent.ini                                    (CTL)

[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = METADATA_SECRET
```

Explanation: the [DEFAULT] section configures the metadata host and the shared secret.

Note: replace METADATA_SECRET with a suitable metadata proxy secret.

Configure nova:

```shell
vim /etc/nova/nova.conf

[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = Default
user_domain_name = Default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
service_metadata_proxy = true                                          (CTL)
metadata_proxy_shared_secret = METADATA_SECRET                         (CTL)
```

Explanation: the [neutron] section configures the access parameters, enables the metadata proxy and configures the secret.

Note: replace NEUTRON_PASS with the password of the neutron user; replace METADATA_SECRET with a suitable metadata proxy secret.
Synchronize the database:

```shell
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \   (CTL)
    --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
```

Restart the Compute API service:

```shell
systemctl restart openstack-nova-api.service
```

Start the networking services:

```shell
systemctl enable neutron-server.service neutron-linuxbridge-agent.service \     (CTL)
    neutron-dhcp-agent.service neutron-metadata-agent.service \
    neutron-l3-agent.service
systemctl restart neutron-server.service neutron-linuxbridge-agent.service \    (CTL)
    neutron-dhcp-agent.service neutron-metadata-agent.service \
    neutron-l3-agent.service

systemctl enable neutron-linuxbridge-agent.service                              (CPT)
systemctl restart neutron-linuxbridge-agent.service openstack-nova-compute.service   (CPT)
```

Verification: verify that the neutron agents started successfully:

```shell
openstack network agent list
```
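A hedged sketch of creating a first provider network on top of this configuration (the physical network name must match the flat/linuxbridge mapping above; the CIDR, gateway, DNS server and allocation pool are examples for your own environment):

```shell
source ~/.admin-openrc
# Create a shared, external flat provider network and a subnet on it
openstack network create --share --external \
    --provider-physical-network provider \
    --provider-network-type flat provider
openstack subnet create --network provider \
    --allocation-pool start=192.168.1.100,end=192.168.1.200 \
    --dns-nameserver 8.8.8.8 --gateway 192.168.1.1 \
    --subnet-range 192.168.1.0/24 provider-subnet
```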
## Cinder Installation

Create the database, service credentials and API endpoints.

Create the database:

```shell
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE cinder;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \
  IDENTIFIED BY 'CINDER_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \
  IDENTIFIED BY 'CINDER_DBPASS';
MariaDB [(none)]> exit
```

Note: replace CINDER_DBPASS with the password for the cinder database.

```shell
source ~/.admin-openrc
```

Create the cinder service credentials:

```shell
openstack user create --domain default --password-prompt cinder
openstack role add --project service --user cinder admin
openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
```

Create the Block Storage service API endpoints:

```shell
openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s
```

Install the packages:

```shell
yum install openstack-cinder-api openstack-cinder-scheduler   (CTL)

yum install lvm2 device-mapper-persistent-data scsi-target-utils rpcbind nfs-utils \   (STG)
    openstack-cinder-volume openstack-cinder-backup
```

Prepare the storage device (the following is only an example):

```shell
pvcreate /dev/vdb
vgcreate cinder-volumes /dev/vdb

vim /etc/lvm/lvm.conf

devices {
...
filter = [ "a/vdb/", "r/.*/"]
```

Explanation: in the devices section, add a filter that accepts the /dev/vdb device and rejects all other devices.

Prepare NFS:

```shell
mkdir -p /root/cinder/backup

cat << EOF >> /etc/export
/root/cinder/backup 192.168.1.0/24(rw,sync,no_root_squash,no_all_squash)
EOF
```

Configure the cinder options:

```shell
vim /etc/cinder/cinder.conf

[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone
my_ip = 10.0.0.11
enabled_backends = lvm                                    (STG)
backup_driver=cinder.backup.drivers.nfs.NFSBackupDriver   (STG)
backup_share=HOST:PATH                                    (STG)

[database]
connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = cinder
password = CINDER_PASS

[oslo_concurrency]
lock_path = /var/lib/cinder/tmp

[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver   (STG)
volume_group = cinder-volumes                               (STG)
iscsi_protocol = iscsi                                      (STG)
iscsi_helper = tgtadm                                       (STG)
```

Explanation

- [database] section: configure the database entry point.
- [DEFAULT] section: configure the RabbitMQ message queue entry point and my_ip.
- [keystone_authtoken] section: configure the Identity service entry point.
- [oslo_concurrency] section: configure the lock path.

Note

- Replace CINDER_DBPASS with the password of the cinder database.
- Replace RABBIT_PASS with the password of the openstack account in RabbitMQ.
- Set my_ip to the management IP address of the controller node.
- Replace CINDER_PASS with the password of the cinder user.
- Replace HOST:PATH with the NFS host IP and shared path.

Synchronize the database:

```shell
su -s /bin/sh -c "cinder-manage db sync" cinder   (CTL)
```

Configure nova:

```shell
vim /etc/nova/nova.conf   (CTL)

[cinder]
os_region_name = RegionOne
```

Restart the Compute API service:

```shell
systemctl restart openstack-nova-api.service
```

Start the cinder services:

```shell
systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service   (CTL)
systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service    (CTL)

systemctl enable rpcbind.service nfs-server.service tgtd.service iscsid.service \   (STG)
    openstack-cinder-volume.service \
    openstack-cinder-backup.service
systemctl start rpcbind.service nfs-server.service tgtd.service iscsid.service \    (STG)
    openstack-cinder-volume.service \
    openstack-cinder-backup.service
```

Note: when cinder attaches volumes via tgtadm, edit /etc/tgt/tgtd.conf as follows so that tgtd can discover the iscsi targets of cinder-volume:

```shell
include /var/lib/cinder/volumes/*
```

Verification

```shell
source ~/.admin-openrc
openstack volume service list
```
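As a further functional check, a small test volume can be created and removed; a sketch, assuming the lvm backend configured above is active:

```shell
source ~/.admin-openrc
openstack volume create --size 1 test-volume
openstack volume list            # the volume should reach the "available" status
openstack volume delete test-volume
```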
## Horizon Installation

Install the package:

```shell
yum install openstack-dashboard
```

Modify the variables in the configuration file:

```shell
vim /etc/openstack-dashboard/local_settings

OPENSTACK_HOST = "controller"
ALLOWED_HOSTS = ['*', ]

SESSION_ENGINE = 'django.contrib.sessions.backends.cache'

CACHES = {
    'default': {
         'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
         'LOCATION': 'controller:11211',
    }
}

OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 3,
}
```

Restart the httpd service:

```shell
systemctl restart httpd.service memcached.service
```

Verification

Open a browser, enter http://HOSTIP/dashboard/ and log in to Horizon.

Note: replace HOSTIP with the management-plane IP address of the controller node.

## Tempest Installation

Tempest is the integration test service of OpenStack. It is recommended if you need comprehensive, automated functional testing of an installed OpenStack environment; otherwise it can be skipped.

Install Tempest:

```shell
yum install openstack-tempest
```

Initialize a workspace:

```shell
tempest init mytest
```

Modify the configuration file:

```shell
cd mytest
vi etc/tempest.conf
```

tempest.conf must describe the current OpenStack environment; see the official sample for the details.

Run the tests:

```shell
tempest run
```

Install the tempest extensions (optional)

The individual OpenStack services also provide their own tempest test packages, which you can install to enrich the test coverage. In Train we provide extension tests for Cinder, Glance, Keystone, Ironic and Trove, which can be installed and used with:

```shell
yum install python3-cinder-tempest-plugin python3-glance-tempest-plugin python3-ironic-tempest-plugin python3-keystone-tempest-plugin python3-trove-tempest-plugin
```
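When a full `tempest run` is too slow for a first pass, the run can be narrowed; a sketch using flags provided by the tempest CLI:

```shell
cd mytest
tempest run --smoke                              # only tests tagged as smoke tests
tempest run --regex '^tempest\.api\.identity'    # or restrict the run by regex
```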
## Ironic Installation

Ironic is the bare metal service of OpenStack. It is recommended if you need bare metal provisioning; otherwise it can be skipped.

Set up the database

The Bare Metal service stores information in a database. Create an ironic database that the ironic user can access, replacing IRONIC_DBPASSWORD with a suitable password:

```shell
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE ironic CHARACTER SET utf8;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'localhost' \
  IDENTIFIED BY 'IRONIC_DBPASSWORD';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'%' \
  IDENTIFIED BY 'IRONIC_DBPASSWORD';
```

Install the packages:

```shell
yum install openstack-ironic-api openstack-ironic-conductor python3-ironicclient
```

Start the services:

```shell
systemctl enable openstack-ironic-api openstack-ironic-conductor
systemctl start openstack-ironic-api openstack-ironic-conductor
```

Create the service user and endpoints

1. Create the Bare Metal service user:

```shell
openstack user create --password IRONIC_PASSWORD \
    --email ironic@example.com ironic
openstack role add --project service --user ironic admin
openstack service create --name ironic \
    --description "Ironic baremetal provisioning service" baremetal
```

2. Create the Bare Metal service endpoints:

```shell
openstack endpoint create --region RegionOne baremetal admin http://$IRONIC_NODE:6385
openstack endpoint create --region RegionOne baremetal public http://$IRONIC_NODE:6385
openstack endpoint create --region RegionOne baremetal internal http://$IRONIC_NODE:6385
```

Configure the ironic-api service

The configuration file is /etc/ironic/ironic.conf.

1. Configure the location of the database via the connection option, replacing IRONIC_DBPASSWORD with the password of the ironic user and DB_IP with the IP address of the DB server:

```shell
[database]

# The SQLAlchemy connection string used to connect to the
# database (string value)
connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic
```

2. Configure ironic-api to use the RabbitMQ message broker, replacing RPC_* with the RabbitMQ address details and credentials:

```shell
[DEFAULT]

# A URL representing the messaging driver to use and its full
# configuration. (string value)
transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
```

JSON-RPC can also be used instead of RabbitMQ.

3. Configure ironic-api to use the Identity service credentials, replacing PUBLIC_IDENTITY_IP with the public IP of the Identity server, PRIVATE_IDENTITY_IP with its private IP, and IRONIC_PASSWORD with the password of the ironic user in the Identity service:

```shell
[DEFAULT]

# Authentication strategy used by ironic-api: one of
# "keystone" or "noauth". "noauth" should not be used in a
# production environment because all authentication will be
# disabled. (string value)
auth_strategy=keystone

[keystone_authtoken]

# Authentication type to load (string value)
auth_type=password

# Complete public Identity API endpoint (string value)
www_authenticate_uri=http://PUBLIC_IDENTITY_IP:5000

# Complete admin Identity API endpoint. (string value)
auth_url=http://PRIVATE_IDENTITY_IP:5000

# Service username. (string value)
username=ironic

# Service account password. (string value)
password=IRONIC_PASSWORD

# Service tenant name. (string value)
project_name=service

# Domain name containing project (string value)
project_domain_name=Default

# User's domain name (string value)
user_domain_name=Default
```

4. Create the Bare Metal service database tables:

```shell
ironic-dbsync --config-file /etc/ironic/ironic.conf create_schema
```

5. Restart the ironic-api service:

```shell
sudo systemctl restart openstack-ironic-api
```
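For deployments without a message broker, the JSON-RPC alternative mentioned in step 2 can be enabled instead; a minimal sketch, assuming crudini is available and that this ironic build exposes the rpc_transport and [json_rpc] options (check your ironic.conf sample first):

```shell
# Sketch: switch ironic to its built-in JSON-RPC transport instead of RabbitMQ.
crudini --set /etc/ironic/ironic.conf DEFAULT rpc_transport json-rpc
crudini --set /etc/ironic/ironic.conf json_rpc host_ip 10.0.0.11   # conductor's management IP
```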
Configure the ironic-conductor service

1. Replace HOST_IP with the IP of the conductor host:

```shell
[DEFAULT]

# IP address of this host. If unset, will determine the IP
# programmatically. If unable to do so, will use "127.0.0.1".
# (string value)
my_ip=HOST_IP
```

2. Configure the location of the database. ironic-conductor should use the same configuration as ironic-api. Replace IRONIC_DBPASSWORD with the password of the ironic user and DB_IP with the IP address of the DB server:

```shell
[database]

# The SQLAlchemy connection string to use to connect to the
# database. (string value)
connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic
```

3. Configure the RabbitMQ message broker. ironic-conductor should use the same configuration as ironic-api. Replace RPC_* with the RabbitMQ address details and credentials:

```shell
[DEFAULT]

# A URL representing the messaging driver to use and its full
# configuration. (string value)
transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
```

JSON-RPC can also be used instead of RabbitMQ.

4. Configure credentials for accessing other OpenStack services.

To communicate with other OpenStack services, the Bare Metal service authenticates against the OpenStack Identity service with a service user when requesting them. The credentials of these users must be configured in each configuration section related to the corresponding service:

- [neutron] - access to the OpenStack Networking service
- [glance] - access to the OpenStack Image service
- [swift] - access to the OpenStack Object Storage service
- [cinder] - access to the OpenStack Block Storage service
- [inspector] - access to the OpenStack Bare Metal introspection service
- [service_catalog] - a special entry that holds the credentials the Bare Metal service uses to discover its own API URL endpoint as registered in the OpenStack Identity service catalog

For simplicity, the same service user can be used for all services. For backward compatibility this should be the same user configured in the [keystone_authtoken] section of ironic-api, but this is not required; a different service user can be created and configured for each service.

In the example below, the authentication information for accessing the OpenStack Networking service is configured as follows:

- the Networking service is deployed in the Identity service region named RegionOne, and only its public endpoint interface is registered in the service catalog
- requests use a specific CA SSL certificate for HTTPS connections
- the same service user as for ironic-api is used
- the dynamic password authentication plugin discovers a suitable Identity service API version based on the other options

```shell
[neutron]

# Authentication type to load (string value)
auth_type = password

# Authentication URL (string value)
auth_url=https://IDENTITY_IP:5000/

# Username (string value)
username=ironic

# User's password (string value)
password=IRONIC_PASSWORD

# Project name to scope to (string value)
project_name=service

# Domain ID containing project (string value)
project_domain_id=default

# User's domain id (string value)
user_domain_id=default

# PEM encoded Certificate Authority to use when verifying
# HTTPs connections. (string value)
cafile=/opt/stack/data/ca-bundle.pem

# The default region_name for endpoint URL discovery. (string
# value)
region_name = RegionOne

# List of interfaces, in order of preference, for endpoint
# URL. (list value)
valid_interfaces=public
```

By default, to communicate with another service, the Bare Metal service tries to discover a suitable endpoint for that service through the service catalog of the Identity service. If you want to use a different endpoint for a particular service, specify it through the endpoint_override option in the Bare Metal service configuration file:

```shell
[neutron]
...
endpoint_override =
```

5. Configure the allowed drivers and hardware types.

Set the hardware types allowed by the ironic-conductor service via enabled_hardware_types:

```shell
[DEFAULT]
enabled_hardware_types = ipmi
```

Configure the hardware interfaces:

```shell
enabled_boot_interfaces = pxe
enabled_deploy_interfaces = direct,iscsi
enabled_inspect_interfaces = inspector
enabled_management_interfaces = ipmitool
enabled_power_interfaces = ipmitool
```

Configure the interface defaults:

```shell
[DEFAULT]
default_deploy_interface = direct
default_network_interface = neutron
```

If any driver that uses Direct deploy is enabled, the Swift backend of the Image service must be installed and configured. The Ceph Object Gateway (RADOS Gateway) is also supported as an Image service backend.

6. Restart the ironic-conductor service:

```shell
sudo systemctl restart openstack-ironic-conductor
```
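Once both services are up, a node can be enrolled to confirm the configuration end to end; a sketch with placeholder BMC and MAC values (not part of the original guide):

```shell
# Sketch: enroll a test node that uses the ipmi hardware type enabled above.
openstack baremetal node create --driver ipmi --name bm-node-0 \
    --driver-info ipmi_address=BMC_IP \
    --driver-info ipmi_username=BMC_USER \
    --driver-info ipmi_password=BMC_PASSWORD
openstack baremetal port create NODE_NIC_MAC --node <node-uuid>
openstack baremetal node validate <node-uuid>
```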
Configure the httpd service

Create the httpd root directory used by ironic and set its owner and group. The path must match the http_root option in the [deploy] section of /etc/ironic/ironic.conf:

```shell
mkdir -p /var/lib/ironic/httproot
chown ironic.ironic /var/lib/ironic/httproot
```

Install and configure the httpd service.

1. Install the httpd service (skip if already installed):

```shell
yum install httpd -y
```

2. Create the file /etc/httpd/conf.d/openstack-ironic-httpd.conf with the following content (the surrounding VirtualHost/Directory blocks are a reconstruction; the original directives were flattened). Make sure the listening port matches the port specified in the http_url option of the [deploy] section of /etc/ironic/ironic.conf:

```shell
Listen 8080

<VirtualHost *:8080>
    ServerName ironic.openeuler.com

    ErrorLog "/var/log/httpd/openstack-ironic-httpd-error_log"
    CustomLog "/var/log/httpd/openstack-ironic-httpd-access_log" "%h %l %u %t \"%r\" %>s %b"

    DocumentRoot "/var/lib/ironic/httproot"
    <Directory "/var/lib/ironic/httproot">
        Options Indexes FollowSymLinks
        Require all granted
    </Directory>
    LogLevel warn
    AddDefaultCharset UTF-8
    EnableSendfile on
</VirtualHost>
```

3. Restart the httpd service:

```shell
systemctl restart httpd
```

Build the deploy ramdisk image

The Train ramdisk image can be built with the ironic-python-agent service or the disk-image-builder tool, or with the latest ironic-python-agent-builder from the community. You can also choose other tools.

If you use the native Train tools, install the corresponding package:

```shell
yum install openstack-ironic-python-agent
```

or

```shell
yum install diskimage-builder
```

See the official documentation for the detailed usage.

The complete process of building the deploy image used by ironic with ironic-python-agent-builder is described below.

Install ironic-python-agent-builder

1. Install the tool:

```shell
pip install ironic-python-agent-builder
```

2. Modify the python interpreter in the following files:

```shell
/usr/bin/yum
/usr/libexec/urlgrabber-ext-down
```

3. Install the other required tools:

```shell
yum install git
```

Because DIB depends on the semanage command, check whether it is available before building the image (`semanage --help`). If the command is missing, install it:

```shell
# First find out which package provides it
[root@localhost ~]# yum provides /usr/sbin/semanage
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirror.vcu.edu
 * extras: mirror.vcu.edu
 * updates: mirror.math.princeton.edu
policycoreutils-python-2.5-34.el7.aarch64 : SELinux policy core python utilities
Repo         : base
Matched from:
Filename     : /usr/sbin/semanage
# Install it
[root@localhost ~]# yum install policycoreutils-python
```

Build the image

On the arm architecture, add:

```shell
export ARCH=aarch64
```

Basic usage:

```shell
usage: ironic-python-agent-builder [-h] [-r RELEASE] [-o OUTPUT] [-e ELEMENT]
                                   [-b BRANCH] [-v] [--extra-args EXTRA_ARGS]
                                   distribution

positional arguments:
  distribution          Distribution to use

optional arguments:
  -h, --help            show this help message and exit
  -r RELEASE, --release RELEASE
                        Distribution release to use
  -o OUTPUT, --output OUTPUT
                        Output base file name
  -e ELEMENT, --element ELEMENT
                        Additional DIB element to use
  -b BRANCH, --branch BRANCH
                        If set, override the branch that is used for ironic-
                        python-agent and requirements
  -v, --verbose         Enable verbose logging in diskimage-builder
  --extra-args EXTRA_ARGS
                        Extra arguments to pass to diskimage-builder
```

Example:

```shell
ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky
```

Allow SSH login

Initialize the environment variables, then build the image:

```shell
export DIB_DEV_USER_USERNAME=ipa \
export DIB_DEV_USER_PWDLESS_SUDO=yes \
export DIB_DEV_USER_PASSWORD='123'
ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky -e selinux-permissive -e devuser
```

Specify the code repository

Initialize the corresponding environment variables, then build the image:

```shell
# Specify the repository location and version
DIB_REPOLOCATION_ironic_python_agent=git@172.20.2.149:liuzz/ironic-python-agent.git
DIB_REPOREF_ironic_python_agent=origin/develop

# Clone the code directly from gerrit
DIB_REPOLOCATION_ironic_python_agent=https://review.opendev.org/openstack/ironic-python-agent
DIB_REPOREF_ironic_python_agent=refs/changes/43/701043/1
```

Reference: [source-repositories](https://docs.openstack.org/diskimage-builder/latest/elements/source-repositories/README.html).

Specifying the repository location and version has been verified to work.
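After the build finishes, the resulting kernel and ramdisk can be registered in Glance so ironic can use them for deployment; a sketch, assuming the output files follow the -o prefix used in the examples above:

```shell
source ~/.admin-openrc
openstack image create deploy-kernel --public \
    --disk-format aki --container-format aki --file /mnt/ironic-agent-ssh.kernel
openstack image create deploy-ramdisk --public \
    --disk-format ari --container-format ari --file /mnt/ironic-agent-ssh.initramfs
```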
Note

- The PXE configuration file templates in upstream OpenStack do not support the arm64 architecture, so you have to modify the upstream OpenStack code yourself. In Train, the community ironic still does not support UEFI PXE boot on arm64: the generated grub.cfg file (usually under /tftpboot/) has the wrong format and the PXE boot fails, so the code that generates grub.cfg must be adapted by the user.

- TLS errors when ironic sends command-status query requests to IPA: in Train, both IPA and ironic enable TLS by default when sending requests to each other; disable it as described in the official documentation.

Modify the ironic configuration file (/etc/ironic/ironic.conf), adding ipa-insecure=1 to the following options:

```shell
[agent]
verify_ca = False

[pxe]
pxe_append_params = nofb nomodeset vga=normal coreos.autologin ipa-insecure=1
```

Add the IPA configuration file /etc/ironic_python_agent/ironic_python_agent.conf to the ramdisk image and configure TLS as follows (the /etc/ironic_python_agent directory must be created in advance):

```shell
/etc/ironic_python_agent/ironic_python_agent.conf

[DEFAULT]
enable_auto_tls = False
```

Set the permissions:

```shell
chown -R ipa.ipa /etc/ironic_python_agent/
```

Modify the service file of the IPA service to add the configuration file option:

```shell
vim usr/lib/systemd/system/ironic-python-agent.service

[Unit]
Description=Ironic Python Agent
After=network-online.target

[Service]
ExecStartPre=/sbin/modprobe vfat
ExecStart=/usr/local/bin/ironic-python-agent --config-file /etc/ironic_python_agent/ironic_python_agent.conf
Restart=always
RestartSec=30s

[Install]
WantedBy=multi-user.target
```

Train also provides services such as ironic-inspector, which you can install according to your needs.

## Kolla Installation

Kolla provides production-ready containerized deployment for OpenStack services.

Installing Kolla is very simple: just install the corresponding RPM packages:

```shell
yum install openstack-kolla openstack-kolla-ansible
```

After the installation you can use commands such as `kolla-ansible`, `kolla-build`, `kolla-genpwd` and `kolla-mergepwd` to build images and deploy container environments.
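As an illustration of what such a deployment looks like, a minimal all-in-one flow is sketched below; the inventory path is the one usually shipped with kolla-ansible and may differ on openEuler:

```shell
kolla-genpwd                                                   # populate /etc/kolla/passwords.yml
cp /usr/share/kolla-ansible/ansible/inventory/all-in-one .     # sample inventory (path assumed)
kolla-ansible -i all-in-one bootstrap-servers
kolla-ansible -i all-in-one prechecks
kolla-ansible -i all-in-one deploy
```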
## Trove Installation

Trove is the Database service of OpenStack. It is recommended if you want to use the database service provided by OpenStack; otherwise it can be skipped.

Set up the database

The Database service stores information in a database. Create a trove database that the trove user can access, replacing TROVE_DBPASSWORD with a suitable password:

```shell
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE trove CHARACTER SET utf8;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'localhost' \
  IDENTIFIED BY 'TROVE_DBPASSWORD';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'%' \
  IDENTIFIED BY 'TROVE_DBPASSWORD';
```

Create the service user and endpoints

1. Create the Trove service user:

```shell
openstack user create --domain default --password-prompt trove
openstack role add --project service --user trove admin
openstack service create --name trove --description "Database" database
```

Explanation: replace TROVE_PASSWORD with the password of the trove user.

2. Create the Database service endpoints:

```shell
openstack endpoint create --region RegionOne database public http://controller:8779/v1.0/%\(tenant_id\)s
openstack endpoint create --region RegionOne database internal http://controller:8779/v1.0/%\(tenant_id\)s
openstack endpoint create --region RegionOne database admin http://controller:8779/v1.0/%\(tenant_id\)s
```

Install and configure the Trove components

1. Install the Trove packages:

```shell
yum install openstack-trove python3-troveclient
```

2. Configure trove.conf:

```shell
vim /etc/trove/trove.conf

[DEFAULT]
log_dir = /var/log/trove
trove_auth_url = http://controller:5000/
nova_compute_url = http://controller:8774/v2
cinder_url = http://controller:8776/v1
swift_url = http://controller:8080/v1/AUTH_
rpc_backend = rabbit
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672
auth_strategy = keystone
add_addresses = True
api_paste_config = /etc/trove/api-paste.ini
nova_proxy_admin_user = admin
nova_proxy_admin_pass = ADMIN_PASSWORD
nova_proxy_admin_tenant_name = service
taskmanager_manager = trove.taskmanager.manager.Manager
use_nova_server_config_drive = True
# Set these if using Neutron Networking
network_driver = trove.network.neutron.NeutronDriver
network_label_regex = .*

[database]
connection = mysql+pymysql://trove:TROVE_DBPASSWORD@controller/trove

[keystone_authtoken]
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = trove
password = TROVE_PASSWORD
```

Explanation:

- In the [DEFAULT] section, nova_compute_url and cinder_url are the endpoints created for Nova and Cinder in Keystone.
- nova_proxy_XXX describe a user that can access the Nova service; the example above uses the admin user.
- transport_url is the RabbitMQ connection information; replace RABBIT_PASS with the RabbitMQ password.
- connection in the [database] section points at the database created for Trove in MySQL above.
- In the Trove user information, replace TROVE_PASSWORD with the actual password of the trove user.

3. Configure trove-guestagent.conf:

```shell
vim /etc/trove/trove-guestagent.conf

rabbit_host = controller
rabbit_password = RABBIT_PASS
trove_auth_url = http://controller:5000/
```

Explanation: guestagent is an independent Trove component that has to be pre-installed in the virtual machine images that Trove creates through Nova. After a database instance is created, the guestagent process starts and reports heartbeats to Trove through the message queue (RabbitMQ), so the RabbitMQ user and password must be configured.

Starting from the Victoria release, Trove uses a single unified image to run different types of databases; the database service runs in a Docker container inside the guest virtual machine.

- Replace RABBIT_PASS with the RabbitMQ password.
4. Generate the Trove database tables:

```shell
su -s /bin/sh -c "trove-manage db_sync" trove
```

Complete the installation

1. Configure the Trove services to start on boot:

```shell
systemctl enable openstack-trove-api.service \
    openstack-trove-taskmanager.service \
    openstack-trove-conductor.service
```

2. Start the services:

```shell
systemctl start openstack-trove-api.service \
    openstack-trove-taskmanager.service \
    openstack-trove-conductor.service
```
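To exercise the service, a datastore has to be registered and an instance created; a sketch only, with placeholder image, flavor and network IDs (the guest image must contain the trove-guestagent configured above):

```shell
# Register a mysql datastore version backed by a prepared guest image (sketch).
su -s /bin/sh -c "trove-manage datastore_update mysql ''" trove
su -s /bin/sh -c "trove-manage datastore_version_update mysql 5.7 mysql GUEST_IMAGE_UUID '' 1" trove

# Create a test database instance.
openstack database instance create test-db FLAVOR_ID --size 1 \
    --datastore mysql --datastore-version 5.7 --nic net-id=NETWORK_UUID
```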
## Swift Installation

Swift provides elastic, scalable and highly available distributed object storage, suitable for storing large amounts of unstructured data.

Create the service credentials and API endpoints.

Create the service credentials:

```shell
# Create the swift user:
openstack user create --domain default --password-prompt swift
# Add the admin role to the swift user:
openstack role add --project service --user swift admin
# Create the swift service entity:
openstack service create --name swift --description "OpenStack Object Storage" object-store
```

Create the swift API endpoints:

```shell
openstack endpoint create --region RegionOne object-store public http://controller:8080/v1/AUTH_%\(project_id\)s
openstack endpoint create --region RegionOne object-store internal http://controller:8080/v1/AUTH_%\(project_id\)s
openstack endpoint create --region RegionOne object-store admin http://controller:8080/v1
```

Install the packages:

```shell
yum install openstack-swift-proxy python3-swiftclient python3-keystoneclient python3-keystonemiddleware memcached   (CTL)
```

Configure proxy-server

The Swift RPM package already ships a basically usable proxy-server.conf; you only need to modify the ip and the swift password in it.

Note: replace password with the password you chose for the swift user in the Identity service.

Install and configure the storage nodes (STG)

Install the supporting packages:

```shell
yum install xfsprogs rsync
```

Format the /dev/vdb and /dev/vdc devices as XFS:

```shell
mkfs.xfs /dev/vdb
mkfs.xfs /dev/vdc
```

Create the mount point directory structure:

```shell
mkdir -p /srv/node/vdb
mkdir -p /srv/node/vdc
```

Find the UUIDs of the new partitions:

```shell
blkid
```

Edit the /etc/fstab file and add the following to it:

```shell
UUID="" /srv/node/vdb xfs noatime 0 2
UUID="" /srv/node/vdc xfs noatime 0 2
```

Mount the devices:

```shell
mount /srv/node/vdb
mount /srv/node/vdc
```

Note: if you do not need fault tolerance, creating a single device is sufficient and the rsync configuration below can be skipped.

(Optional) Create or edit the /etc/rsyncd.conf file to contain the following:

```shell
[DEFAULT]
uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
address = MANAGEMENT_INTERFACE_IP_ADDRESS

[account]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/account.lock

[container]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/container.lock

[object]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/object.lock
```

Replace MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network on the storage node.

Start the rsyncd service and configure it to start at boot:

```shell
systemctl enable rsyncd.service
systemctl start rsyncd.service
```

Install and configure the components on the storage nodes (STG)

Install the packages:

```shell
yum install openstack-swift-account openstack-swift-container openstack-swift-object
```

Edit the account-server.conf, container-server.conf and object-server.conf files in the /etc/swift directory, replacing bind_ip with the IP address of the management network on the storage node.

Ensure correct ownership of the mount point directory structure:

```shell
chown -R swift:swift /srv/node
```

Create the recon directory and ensure it has the correct ownership:

```shell
mkdir -p /var/cache/swift
chown -R root:swift /var/cache/swift
chmod -R 775 /var/cache/swift
```

Create the account ring (CTL)

Change to the /etc/swift directory:

```shell
cd /etc/swift
```

Create the base account.builder file:

```shell
swift-ring-builder account.builder create 10 1 1
```

Add each storage node to the ring:

```shell
swift-ring-builder account.builder add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6202 --device DEVICE_NAME --weight DEVICE_WEIGHT
```

Replace STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network on the storage node, and DEVICE_NAME with the name of a storage device on that node.

Note: repeat this command for every storage device on every storage node.

Verify the ring contents:

```shell
swift-ring-builder account.builder
```

Rebalance the ring:

```shell
swift-ring-builder account.builder rebalance
```

Create the container ring (CTL)

Change to the /etc/swift directory.

Create the base container.builder file:

```shell
swift-ring-builder container.builder create 10 1 1
```

Add each storage node to the ring:

```shell
swift-ring-builder container.builder \
    add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6201 \
    --device DEVICE_NAME --weight 100
```

Replace STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network on the storage node, and DEVICE_NAME with the name of a storage device on that node.

Note: repeat this command for every storage device on every storage node.

Verify the ring contents:

```shell
swift-ring-builder container.builder
```

Rebalance the ring:

```shell
swift-ring-builder container.builder rebalance
```

Create the object ring (CTL)

Change to the /etc/swift directory.

Create the base object.builder file:

```shell
swift-ring-builder object.builder create 10 1 1
```
Add each storage node to the ring:

```shell
swift-ring-builder object.builder \
    add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6200 \
    --device DEVICE_NAME --weight 100
```

Replace STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network on the storage node, and DEVICE_NAME with the name of a storage device on that node.

Note: repeat this command for every storage device on every storage node.

Verify the ring contents:

```shell
swift-ring-builder object.builder
```

Rebalance the ring:

```shell
swift-ring-builder object.builder rebalance
```

Distribute the ring configuration files:

Copy the account.ring.gz, container.ring.gz and object.ring.gz files to the /etc/swift directory on every storage node and on any other node running the proxy service.

Complete the installation

Edit the /etc/swift/swift.conf file:

```shell
[swift-hash]
swift_hash_path_suffix = test-hash
swift_hash_path_prefix = test-hash

[storage-policy:0]
name = Policy-0
default = yes
```

Replace test-hash with unique values.

Copy the swift.conf file to the /etc/swift directory on every storage node and on any other node running the proxy service.

On all nodes, ensure correct ownership of the configuration directory:

```shell
chown -R root:swift /etc/swift
```

On the controller node and on any other node running the proxy service, start the Object Storage proxy service and its dependencies and configure them to start at boot:

```shell
systemctl enable openstack-swift-proxy.service memcached.service
systemctl start openstack-swift-proxy.service memcached.service
```

On the storage nodes, start the Object Storage services and configure them to start at boot:

```shell
systemctl enable openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service
systemctl start openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service
systemctl enable openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service
systemctl start openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service
systemctl enable openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service
systemctl start openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service
```
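A simple end-to-end check of the object store (a sketch; any small local file will do):

```shell
source ~/.admin-openrc
openstack container create test-container
openstack object create test-container /etc/hosts
openstack object list test-container
```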
\u5b89\u88c5"},{"location":"install/openEuler-20.03-LTS-SP3/OpenStack-train/#cyborg","text":"Cyborg\u4e3aOpenStack\u63d0\u4f9b\u52a0\u901f\u5668\u8bbe\u5907\u7684\u652f\u6301\uff0c\u5305\u62ec GPU, FPGA, ASIC, NP, SoCs, NVMe/NOF SSDs, ODP, DPDK/SPDK\u7b49\u7b49\u3002 \u521d\u59cb\u5316\u5bf9\u5e94\u6570\u636e\u5e93 CREATE DATABASE cyborg; GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'localhost' IDENTIFIED BY 'CYBORG_DBPASS'; GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'%' IDENTIFIED BY 'CYBORG_DBPASS'; \u521b\u5efa\u5bf9\u5e94Keystone\u8d44\u6e90\u5bf9\u8c61 $ openstack user create --domain default --password-prompt cyborg $ openstack role add --project service --user cyborg admin $ openstack service create --name cyborg --description \"Acceleration Service\" accelerator $ openstack endpoint create --region RegionOne \\ accelerator public http://:6666/v1 $ openstack endpoint create --region RegionOne \\ accelerator internal http://:6666/v1 $ openstack endpoint create --region RegionOne \\ accelerator admin http://:6666/v1 \u5b89\u88c5Cyborg yum install openstack-cyborg \u914d\u7f6eCyborg \u4fee\u6539 /etc/cyborg/cyborg.conf [DEFAULT] transport_url = rabbit://%RABBITMQ_USER%:%RABBITMQ_PASSWORD%@%OPENSTACK_HOST_IP%:5672/ use_syslog = False state_path = /var/lib/cyborg debug = True [database] connection = mysql+pymysql://%DATABASE_USER%:%DATABASE_PASSWORD%@%OPENSTACK_HOST_IP%/cyborg [service_catalog] project_domain_id = default user_domain_id = default project_name = service password = PASSWORD username = cyborg auth_url = http://%OPENSTACK_HOST_IP%/identity auth_type = password [placement] project_domain_name = Default project_name = service user_domain_name = Default password = PASSWORD username = placement auth_url = http://%OPENSTACK_HOST_IP%/identity auth_type = password [keystone_authtoken] memcached_servers = localhost:11211 project_domain_name = Default project_name = service user_domain_name = Default password = PASSWORD username = cyborg auth_url = http://%OPENSTACK_HOST_IP%/identity auth_type = password \u81ea\u884c\u4fee\u6539\u5bf9\u5e94\u7684\u7528\u6237\u540d\u3001\u5bc6\u7801\u3001IP\u7b49\u4fe1\u606f \u540c\u6b65\u6570\u636e\u5e93\u8868\u683c cyborg-dbsync --config-file /etc/cyborg/cyborg.conf upgrade \u542f\u52a8Cyborg\u670d\u52a1 systemctl enable openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent systemctl start openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent","title":"Cyborg \u5b89\u88c5"},{"location":"install/openEuler-20.03-LTS-SP3/OpenStack-train/#aodh","text":"\u521b\u5efa\u6570\u636e\u5e93 CREATE DATABASE aodh; GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'localhost' IDENTIFIED BY 'AODH_DBPASS'; GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'%' IDENTIFIED BY 'AODH_DBPASS'; \u521b\u5efa\u5bf9\u5e94Keystone\u8d44\u6e90\u5bf9\u8c61 openstack user create --domain default --password-prompt aodh openstack role add --project service --user aodh admin openstack service create --name aodh --description \"Telemetry\" alarming openstack endpoint create --region RegionOne alarming public http://controller:8042 openstack endpoint create --region RegionOne alarming internal http://controller:8042 openstack endpoint create --region RegionOne alarming admin http://controller:8042 \u5b89\u88c5Aodh yum install openstack-aodh-api openstack-aodh-evaluator openstack-aodh-notifier openstack-aodh-listener openstack-aodh-expirer python3-aodhclient \u4fee\u6539\u914d\u7f6e\u6587\u4ef6 [database] connection = mysql+pymysql://aodh:AODH_DBPASS@controller/aodh 
## Aodh Installation

Create the database:

```shell
CREATE DATABASE aodh;
GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'localhost' IDENTIFIED BY 'AODH_DBPASS';
GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'%' IDENTIFIED BY 'AODH_DBPASS';
```

Create the corresponding Keystone resources:

```shell
openstack user create --domain default --password-prompt aodh
openstack role add --project service --user aodh admin
openstack service create --name aodh --description "Telemetry" alarming
openstack endpoint create --region RegionOne alarming public http://controller:8042
openstack endpoint create --region RegionOne alarming internal http://controller:8042
openstack endpoint create --region RegionOne alarming admin http://controller:8042
```

Install Aodh:

```shell
yum install openstack-aodh-api openstack-aodh-evaluator openstack-aodh-notifier openstack-aodh-listener openstack-aodh-expirer python3-aodhclient
```

Modify the configuration file:

```shell
[database]
connection = mysql+pymysql://aodh:AODH_DBPASS@controller/aodh

[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = aodh
password = AODH_PASS

[service_credentials]
auth_type = password
auth_url = http://controller:5000/v3
project_domain_id = default
user_domain_id = default
project_name = service
username = aodh
password = AODH_PASS
interface = internalURL
region_name = RegionOne
```

Initialize the database:

```shell
aodh-dbsync
```

Start the Aodh services:

```shell
systemctl enable openstack-aodh-api.service openstack-aodh-evaluator.service openstack-aodh-notifier.service openstack-aodh-listener.service
systemctl start openstack-aodh-api.service openstack-aodh-evaluator.service openstack-aodh-notifier.service openstack-aodh-listener.service
```
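Once Aodh (and Gnocchi, below) is running, a threshold alarm can be created to verify the service; a sketch with placeholder resource and metric values:

```shell
# Alarm on a Gnocchi CPU metric of one instance (names/IDs are placeholders).
openstack alarm create --name high-cpu --type gnocchi_resources_threshold \
    --metric cpu_util --threshold 80 --comparison-operator gt --aggregation-method mean \
    --resource-type instance --resource-id INSTANCE_UUID
openstack alarm list
```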
## Gnocchi Installation

Create the database:

```shell
CREATE DATABASE gnocchi;
GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'localhost' IDENTIFIED BY 'GNOCCHI_DBPASS';
GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'%' IDENTIFIED BY 'GNOCCHI_DBPASS';
```

Create the corresponding Keystone resources:

```shell
openstack user create --domain default --password-prompt gnocchi
openstack role add --project service --user gnocchi admin
openstack service create --name gnocchi --description "Metric Service" metric
openstack endpoint create --region RegionOne metric public http://controller:8041
openstack endpoint create --region RegionOne metric internal http://controller:8041
openstack endpoint create --region RegionOne metric admin http://controller:8041
```

Install Gnocchi:

```shell
yum install openstack-gnocchi-api openstack-gnocchi-metricd python3-gnocchiclient
```

Modify the configuration file /etc/gnocchi/gnocchi.conf:

```shell
[api]
auth_mode = keystone
port = 8041
uwsgi_mode = http-socket

[keystone_authtoken]
auth_type = password
auth_url = http://controller:5000/v3
project_domain_name = Default
user_domain_name = Default
project_name = service
username = gnocchi
password = GNOCCHI_PASS
interface = internalURL
region_name = RegionOne

[indexer]
url = mysql+pymysql://gnocchi:GNOCCHI_DBPASS@controller/gnocchi

[storage]
# coordination_url is not required but specifying one will improve
# performance with better workload division across workers.
coordination_url = redis://controller:6379
file_basepath = /var/lib/gnocchi
driver = file
```

Initialize the database:

```shell
gnocchi-upgrade
```

Start the Gnocchi services:

```shell
systemctl enable openstack-gnocchi-api.service openstack-gnocchi-metricd.service
systemctl start openstack-gnocchi-api.service openstack-gnocchi-metricd.service
```

## Ceilometer Installation

Create the corresponding Keystone resources:

```shell
openstack user create --domain default --password-prompt ceilometer
openstack role add --project service --user ceilometer admin
openstack service create --name ceilometer --description "Telemetry" metering
```

Install Ceilometer:

```shell
yum install openstack-ceilometer-notification openstack-ceilometer-central
```

Modify the configuration file /etc/ceilometer/pipeline.yaml:

```shell
publishers:
    # set address of Gnocchi
    # + filter out Gnocchi-related activity meters (Swift driver)
    # + set default archive policy
    - gnocchi://?filter_project=service&archive_policy=low
```

Modify the configuration file /etc/ceilometer/ceilometer.conf:

```shell
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller

[service_credentials]
auth_type = password
auth_url = http://controller:5000/v3
project_domain_id = default
user_domain_id = default
project_name = service
username = ceilometer
password = CEILOMETER_PASS
interface = internalURL
region_name = RegionOne
```

Initialize the database:

```shell
ceilometer-upgrade
```

Start the Ceilometer services:

```shell
systemctl enable openstack-ceilometer-notification.service openstack-ceilometer-central.service
systemctl start openstack-ceilometer-notification.service openstack-ceilometer-central.service
```
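To confirm that Ceilometer is publishing measurements into Gnocchi, the indexed resources and metrics can be listed; a sketch, assuming python3-gnocchiclient is installed:

```shell
gnocchi resource list
gnocchi metric list
```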
## Heat Installation

Create the heat database and grant it proper access, replacing HEAT_DBPASS with a suitable password:

```shell
CREATE DATABASE heat;
GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'localhost' IDENTIFIED BY 'HEAT_DBPASS';
GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'%' IDENTIFIED BY 'HEAT_DBPASS';
```

Create the service credentials: create the heat user and add the admin role to it:

```shell
openstack user create --domain default --password-prompt heat
openstack role add --project service --user heat admin
```

Create the heat and heat-cfn services and their API endpoints:

```shell
openstack service create --name heat --description "Orchestration" orchestration
openstack service create --name heat-cfn --description "Orchestration" cloudformation
openstack endpoint create --region RegionOne orchestration public http://controller:8004/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne orchestration internal http://controller:8004/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne orchestration admin http://controller:8004/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne cloudformation public http://controller:8000/v1
openstack endpoint create --region RegionOne cloudformation internal http://controller:8000/v1
openstack endpoint create --region RegionOne cloudformation admin http://controller:8000/v1
```

Create the additional information required for stack management, including the heat domain, the admin user heat_domain_admin of that domain, and the heat_stack_owner and heat_stack_user roles:

```shell
openstack user create --domain heat --password-prompt heat_domain_admin
openstack role add --domain heat --user-domain heat --user heat_domain_admin admin
openstack role create heat_stack_owner
openstack role create heat_stack_user
```

Install the packages:

```shell
yum install openstack-heat-api openstack-heat-api-cfn openstack-heat-engine
```

Modify the configuration file /etc/heat/heat.conf:

```shell
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
heat_metadata_server_url = http://controller:8000
heat_waitcondition_server_url = http://controller:8000/v1/waitcondition
stack_domain_admin = heat_domain_admin
stack_domain_admin_password = HEAT_DOMAIN_PASS
stack_user_domain_name = heat

[database]
connection = mysql+pymysql://heat:HEAT_DBPASS@controller/heat

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = heat
password = HEAT_PASS

[trustee]
auth_type = password
auth_url = http://controller:5000
username = heat
password = HEAT_PASS
user_domain_name = default

[clients_keystone]
auth_uri = http://controller:5000
```

Initialize the heat database tables:

```shell
su -s /bin/sh -c "heat-manage db_sync" heat
```

Start the services:

```shell
systemctl enable openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service
systemctl start openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service
```
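A trivial stack can be created to verify Heat end to end; a sketch with placeholder image, flavor and network names:

```shell
cat << EOF > test-stack.yaml
heat_template_version: 2014-10-16
resources:
  test_server:
    type: OS::Nova::Server
    properties:
      image: IMAGE_NAME
      flavor: FLAVOR_NAME
      networks:
        - network: NETWORK_NAME
EOF
openstack stack create -t test-stack.yaml test-stack
openstack stack list
```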
# OpenStack-Train Deployment Guide (openEuler 20.03-LTS-SP4)

Contents: OpenStack Introduction; Conventions; Preparing the Environment (Environment Configuration, Installing the SQL Database, Installing RabbitMQ, Installing Memcached); Installing OpenStack (Keystone, Glance, Placement, Nova, Neutron, Cinder, Horizon, Tempest, Ironic, Kolla, Trove, Swift, Cyborg, Aodh, Gnocchi, Ceilometer and Heat installation); Installing New Features (Neutron traffic distribution feature).

## OpenStack Introduction

OpenStack is both a community and a project. It provides an operating platform and a tool set for deploying clouds, offering organizations scalable and flexible cloud computing.

As an open source cloud computing management platform, OpenStack combines several major components such as nova, cinder, neutron, glance, keystone and horizon to get the work done. OpenStack supports virtually every type of cloud environment. The project aims to deliver a cloud computing management platform that is simple to implement, massively scalable, feature-rich and standardized. OpenStack provides an Infrastructure-as-a-Service (IaaS) solution through a set of complementary services, each of which offers an API for integration.

The official repositories of openEuler 20.03-LTS-SP4 already support the OpenStack-Train release. After configuring the yum repositories, you can deploy OpenStack by following this document.

## Conventions

OpenStack supports multiple deployment topologies. This document covers both the All-in-One and the Distributed deployment modes, with the following conventions:

All-in-One mode:

- ignore all possible suffixes

Distributed mode:

- a `(CTL)` suffix means the configuration item or command applies only to the `controller node`
- a `(CPT)` suffix means the configuration item or command applies only to the `compute nodes`
- a `(STG)` suffix means the configuration item or command applies only to the `storage nodes`
- anything else applies to both the `controller node` and the `compute nodes`

Note: the services affected by these conventions are:

- Cinder
- Nova
- Neutron

## Preparing the Environment

### Environment Configuration

Configure the official 20.03-LTS-SP4 yum repositories. The EPOL repository must be enabled to support OpenStack:

```shell
cat << EOF >> /etc/yum.repos.d/20.03-LTS-SP4-OpenStack_Train.repo
[OS]
name=OS
baseurl=http://repo.openeuler.org/openEuler-20.03-LTS-SP4/OS/\$basearch/
enabled=1
gpgcheck=1
gpgkey=http://repo.openeuler.org/openEuler-20.03-LTS-SP4/OS/\$basearch/RPM-GPG-KEY-openEuler

[everything]
name=everything
baseurl=http://repo.openeuler.org/openEuler-20.03-LTS-SP4/everything/\$basearch/
enabled=1
gpgcheck=1
gpgkey=http://repo.openeuler.org/openEuler-20.03-LTS-SP4/everything/\$basearch/RPM-GPG-KEY-openEuler

[EPOL]
name=EPOL
baseurl=http://repo.openeuler.org/openEuler-20.03-LTS-SP4/EPOL/main/\$basearch/
enabled=1
gpgcheck=1
gpgkey=http://repo.openeuler.org/openEuler-20.03-LTS-SP4/OS/\$basearch/RPM-GPG-KEY-openEuler
EOF

yum clean all && yum makecache
```

(The `$basearch` references are escaped so that the heredoc writes the literal variable into the repo file.)

Modify the host names and the host mapping.

Set the host name of each node:

```shell
hostnamectl set-hostname controller   (CTL)
hostnamectl set-hostname compute      (CPT)
```

Assuming the IP of the controller node is 10.0.0.11 and the IP of the compute node (if present) is 10.0.0.12, add the following to /etc/hosts:

```shell
10.0.0.11 controller
10.0.0.12 compute
```

### Installing the SQL Database

Run the following command to install the packages:

```shell
yum install mariadb mariadb-server python3-PyMySQL
```

Run the following command to create and edit the /etc/my.cnf.d/openstack.cnf file:

```shell
vi /etc/my.cnf.d/openstack.cnf

[mysqld]
bind-address = 10.0.0.11
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
```

Note: set bind-address to the management IP address of the controller node.

Start the database service and configure it to start at boot:

```shell
systemctl enable mariadb.service
systemctl start mariadb.service
```

Configure the default database password (optional):

```shell
mysql_secure_installation
```

Note: just follow the prompts.
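Before continuing, it is worth confirming that MariaDB is reachable on the management address; a minimal sketch:

```shell
mysql -h 10.0.0.11 -u root -p -e "SELECT VERSION();"
```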
### Installing RabbitMQ

Run the following command to install the package:

```shell
yum install rabbitmq-server
```

Start the RabbitMQ service and configure it to start at boot:

```shell
systemctl enable rabbitmq-server.service
systemctl start rabbitmq-server.service
```

Add the openstack user:

```shell
rabbitmqctl add_user openstack RABBIT_PASS
```

Note: replace RABBIT_PASS to set a password for the openstack user.

Set the permissions of the openstack user to allow configure, write and read access:

```shell
rabbitmqctl set_permissions openstack ".*" ".*" ".*"
```

### Installing Memcached

Run the following command to install the dependency packages:

```shell
yum install memcached python3-memcached
```

Edit the /etc/sysconfig/memcached file:

```shell
vi /etc/sysconfig/memcached

OPTIONS="-l 127.0.0.1,::1,controller"
```

Run the following commands to start the Memcached service and configure it to start at boot:

```shell
systemctl enable memcached.service
systemctl start memcached.service
```

Note: after the service starts, you can run `memcached-tool controller stats` to make sure it started correctly and is available; controller can be replaced with the management IP address of the controller node.

## Installing OpenStack

### Keystone Installation

Create the keystone database and grant privileges:

```shell
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE keystone;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
  IDENTIFIED BY 'KEYSTONE_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
  IDENTIFIED BY 'KEYSTONE_DBPASS';
MariaDB [(none)]> exit
```

Note: replace KEYSTONE_DBPASS to set a password for the keystone database.

Install the packages:

```shell
yum install openstack-keystone httpd mod_wsgi
```

Configure keystone:

```shell
vi /etc/keystone/keystone.conf

[database]
connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone

[token]
provider = fernet
```

Explanation

- [database] section: configure the database entry point.
- [token] section: configure the token provider.

Note: replace KEYSTONE_DBPASS with the password of the keystone database.

Synchronize the database:

```shell
su -s /bin/sh -c "keystone-manage db_sync" keystone
```

Initialize the Fernet key repositories:

```shell
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
```

Bootstrap the service:

```shell
keystone-manage bootstrap --bootstrap-password ADMIN_PASS \
    --bootstrap-admin-url http://controller:5000/v3/ \
    --bootstrap-internal-url http://controller:5000/v3/ \
    --bootstrap-public-url http://controller:5000/v3/ \
    --bootstrap-region-id RegionOne
```

Note: replace ADMIN_PASS to set a password for the admin user.

Configure the Apache HTTP server:

```shell
vi /etc/httpd/conf/httpd.conf

ServerName controller
```

```shell
ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
```

Explanation: set the ServerName entry to refer to the controller node.

Note: if the ServerName entry does not exist, it has to be created.
8. Start the Apache HTTP service:

```shell
systemctl enable httpd.service
systemctl start httpd.service
```

9. Create the environment variable file:

```shell
cat << EOF >> ~/.admin-openrc
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
EOF
```

Note: replace ADMIN_PASS with the password of the admin user.

10. Create the domain, projects, users and roles. python3-openstackclient must be installed first:

```shell
yum install python3-openstackclient
```

Import the environment variables:

```shell
source ~/.admin-openrc
```

Create the project `service`. The domain `default` was already created by `keystone-manage bootstrap`:

```shell
openstack domain create --description "An Example Domain" example

openstack project create --domain default --description "Service Project" service
```

Create the (non-admin) project `myproject`, the user `myuser` and the role `myrole`, then add the role `myrole` to `myproject` and `myuser`:

```shell
openstack project create --domain default --description "Demo Project" myproject
openstack user create --domain default --password-prompt myuser
openstack role create myrole
openstack role add --project myproject --user myuser myrole
```

11. Verification

Unset the temporary environment variables OS_AUTH_URL and OS_PASSWORD:

```shell
source ~/.admin-openrc
unset OS_AUTH_URL OS_PASSWORD
```

Request a token for the admin user:

```shell
openstack --os-auth-url http://controller:5000/v3 \
    --os-project-domain-name Default --os-user-domain-name Default \
    --os-project-name admin --os-username admin token issue
```

Request a token for the myuser user:

```shell
openstack --os-auth-url http://controller:5000/v3 \
    --os-project-domain-name Default --os-user-domain-name Default \
    --os-project-name myproject --os-username myuser token issue
```
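To double-check that the identity resources created above look as expected, a short sketch using only the admin credentials already sourced:

```shell
source ~/.admin-openrc

# The projects, users and role assignments created above should all be listed
openstack project list
openstack user list
openstack role assignment list --user myuser --project myproject --names
```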
### Install Glance

1. Create the database, service credentials and API endpoints.

Create the database:

```shell
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE glance;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
IDENTIFIED BY 'GLANCE_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
IDENTIFIED BY 'GLANCE_DBPASS';
MariaDB [(none)]> exit
```

Note: replace GLANCE_DBPASS with the password chosen for the glance database.

Create the service credentials:

```shell
source ~/.admin-openrc

openstack user create --domain default --password-prompt glance
openstack role add --project service --user glance admin
openstack service create --name glance --description "OpenStack Image" image
```

Create the image service API endpoints:

```shell
openstack endpoint create --region RegionOne image public http://controller:9292
openstack endpoint create --region RegionOne image internal http://controller:9292
openstack endpoint create --region RegionOne image admin http://controller:9292
```

2. Install the packages:

```shell
yum install openstack-glance
```

3. Configure glance:

```shell
vi /etc/glance/glance-api.conf

[database]
connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = GLANCE_PASS

[paste_deploy]
flavor = keystone

[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
```

Explanation:

- [database] section: configure the database entry point.
- [keystone_authtoken] and [paste_deploy] sections: configure the identity service entry point.
- [glance_store] section: configure the local filesystem store and the location of image files.

Note: replace GLANCE_DBPASS with the password of the glance database and GLANCE_PASS with the password of the glance user.

4. Synchronize the database:

```shell
su -s /bin/sh -c "glance-manage db_sync" glance
```

5. Start the service:

```shell
systemctl enable openstack-glance-api.service
systemctl start openstack-glance-api.service
```

6. Verification

Download an image:

```shell
source ~/.admin-openrc

wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
```

Note: if your environment is based on the Kunpeng (aarch64) architecture, download the aarch64 version of the image; the image cirros-0.5.2-aarch64-disk.img has been tested.

Upload the image to the Image service:

```shell
openstack image create --disk-format qcow2 --container-format bare \
    --file cirros-0.4.0-x86_64-disk.img --public cirros
```

Confirm the image was uploaded and verify its attributes:

```shell
openstack image list
```

### Install Placement

1. Create the database, service credentials and API endpoints.

Create the database:

Access the database as the root user, then create the placement database and grant privileges.

```shell
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE placement;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' \
IDENTIFIED BY 'PLACEMENT_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' \
IDENTIFIED BY 'PLACEMENT_DBPASS';
MariaDB [(none)]> exit
```

Note: replace PLACEMENT_DBPASS with the password chosen for the placement database.

```shell
source ~/.admin-openrc
```

Run the following commands to create the placement service credentials, create the placement user, and add the admin role to the placement user.

Create the Placement API service:

```shell
openstack user create --domain default --password-prompt placement
openstack role add --project service --user placement admin
openstack service create --name placement --description "Placement API" placement
```

Create the placement service API endpoints:

```shell
openstack endpoint create --region RegionOne placement public http://controller:8778
openstack endpoint create --region RegionOne placement internal http://controller:8778
openstack endpoint create --region RegionOne placement admin http://controller:8778
```

2. Install and configure.

Install the packages:

```shell
yum install openstack-placement-api
```

Configure placement by editing /etc/placement/placement.conf:

- In the [placement_database] section, configure the database entry point.
- In the [api] and [keystone_authtoken] sections, configure the identity service entry point.
```shell
vi /etc/placement/placement.conf

[placement_database]
# ...
connection = mysql+pymysql://placement:PLACEMENT_DBPASS@controller/placement

[api]
# ...
auth_strategy = keystone

[keystone_authtoken]
# ...
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = placement
password = PLACEMENT_PASS
```

Replace PLACEMENT_DBPASS with the password of the placement database and PLACEMENT_PASS with the password of the placement user.

3. Synchronize the database:

```shell
su -s /bin/sh -c "placement-manage db sync" placement
```

4. Start the httpd service:

```shell
systemctl restart httpd
```

5. Verification

Run the status check:

```shell
source ~/.admin-openrc
placement-status upgrade check
```

Install osc-placement and list the available resource classes and traits:

```shell
yum install python3-osc-placement

openstack --os-placement-api-version 1.2 resource class list --sort-column name
openstack --os-placement-api-version 1.6 trait list --sort-column name
```
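As an additional quick check (just a sketch), the Placement API root can also be queried directly; it should answer with a small JSON document listing the available API versions:

```shell
# The Placement endpoint should return a JSON version document
curl http://controller:8778
```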
### Install Nova

1. Create the database, service credentials and API endpoints.

Create the databases:

```shell
mysql -u root -p                                                          (CTL)

MariaDB [(none)]> CREATE DATABASE nova_api;
MariaDB [(none)]> CREATE DATABASE nova;
MariaDB [(none)]> CREATE DATABASE nova_cell0;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> exit
```

Note: replace NOVA_DBPASS with the password chosen for the nova databases.

```shell
source ~/.admin-openrc                                                    (CTL)
```

Create the nova service credentials:

```shell
openstack user create --domain default --password-prompt nova             (CTL)
openstack role add --project service --user nova admin                    (CTL)
openstack service create --name nova --description "OpenStack Compute" compute   (CTL)
```

Create the nova API endpoints:

```shell
openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1     (CTL)
openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1   (CTL)
openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1      (CTL)
```

2. Install the packages:

```shell
yum install openstack-nova-api openstack-nova-conductor \                 (CTL)
            openstack-nova-novncproxy openstack-nova-scheduler

yum install openstack-nova-compute                                        (CPT)
```

Note: on arm64, the following command is also required:

```shell
yum install edk2-aarch64                                                  (CPT)
```

3. Configure nova:

```shell
vi /etc/nova/nova.conf

[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
my_ip = 10.0.0.11
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver
compute_driver = libvirt.LibvirtDriver                                    (CPT)
instances_path = /var/lib/nova/instances/                                 (CPT)
lock_path = /var/lib/nova/tmp                                             (CPT)
logdir = /var/log/nova/

[api_database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api         (CTL)

[database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova             (CTL)

[api]
auth_strategy = keystone

[keystone_authtoken]
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = NOVA_PASS

[vnc]
enabled = true
server_listen = $my_ip
server_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html                (CPT)

[glance]
api_servers = http://controller:9292

[oslo_concurrency]
lock_path = /var/lib/nova/tmp                                             (CTL)

[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = PLACEMENT_PASS

[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
service_metadata_proxy = true                                             (CTL)
metadata_proxy_shared_secret = METADATA_SECRET                            (CTL)
```

Explanation:

- [DEFAULT] section: enable the compute and metadata APIs, configure the RabbitMQ message queue entry point, configure my_ip, and enable the neutron network service.
- [api_database] and [database] sections: configure the database entry points.
- [api] and [keystone_authtoken] sections: configure the identity service entry point.
- [vnc] section: enable and configure the remote console entry point.
- [glance] section: configure the address of the image service API.
- [oslo_concurrency] section: configure the lock path.
- [placement] section: configure the entry point of the placement service.

Note:

- Replace RABBIT_PASS with the password of the openstack account in RabbitMQ.
- Set my_ip to the management IP address of the controller node.
- Replace NOVA_DBPASS with the password of the nova databases.
- Replace NOVA_PASS with the password of the nova user.
- Replace PLACEMENT_PASS with the password of the placement user.
- Replace NEUTRON_PASS with the password of the neutron user.
- Replace METADATA_SECRET with a suitable metadata proxy secret.

Additional step: determine whether hardware acceleration for virtual machines is supported (x86 architecture):

```shell
egrep -c '(vmx|svm)' /proc/cpuinfo                                        (CPT)
```

If the command returns 0, hardware acceleration is not supported and libvirt must be configured to use QEMU instead of KVM:

```shell
vi /etc/nova/nova.conf                                                    (CPT)

[libvirt]
virt_type = qemu
```

If the command returns 1 or greater, hardware acceleration is supported and `virt_type` can be set to `kvm`.
Note: on arm64, the following commands must also be run on the compute node:

```shell
mkdir -p /usr/share/AAVMF
chown nova:nova /usr/share/AAVMF

ln -s /usr/share/edk2/aarch64/QEMU_EFI-pflash.raw \
      /usr/share/AAVMF/AAVMF_CODE.fd
ln -s /usr/share/edk2/aarch64/vars-template-pflash.raw \
      /usr/share/AAVMF/AAVMF_VARS.fd

vi /etc/libvirt/qemu.conf

nvram = ["/usr/share/AAVMF/AAVMF_CODE.fd: \
          /usr/share/AAVMF/AAVMF_VARS.fd", \
         "/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw: \
          /usr/share/edk2/aarch64/vars-template-pflash.raw"]
```

In addition, when the deployment environment on ARM is itself nested virtualization, configure libvirt as follows:

```shell
[libvirt]
virt_type = qemu
cpu_mode = custom
cpu_model = cortex-a72
```

4. Synchronize the databases.

Synchronize the nova-api database:

```shell
su -s /bin/sh -c "nova-manage api_db sync" nova                           (CTL)
```

Register the cell0 database:

```shell
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova                     (CTL)
```

Create the cell1 cell:

```shell
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova   (CTL)
```

Synchronize the nova database:

```shell
su -s /bin/sh -c "nova-manage db sync" nova                               (CTL)
```

Verify that cell0 and cell1 are registered correctly:

```shell
su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova                    (CTL)
```

Add the compute node to the OpenStack cluster:

```shell
su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova      (CTL)
```

5. Start the services:

```shell
systemctl enable \                                                        (CTL)
    openstack-nova-api.service \
    openstack-nova-scheduler.service \
    openstack-nova-conductor.service \
    openstack-nova-novncproxy.service
systemctl start \                                                         (CTL)
    openstack-nova-api.service \
    openstack-nova-scheduler.service \
    openstack-nova-conductor.service \
    openstack-nova-novncproxy.service

systemctl enable libvirtd.service openstack-nova-compute.service          (CPT)
systemctl start libvirtd.service openstack-nova-compute.service           (CPT)
```

6. Verification

```shell
source ~/.admin-openrc                                                    (CTL)
```

List the service components to verify that every process started and registered successfully:

```shell
openstack compute service list                                            (CTL)
```

List the API endpoints in the identity service to verify connectivity with the identity service:

```shell
openstack catalog list                                                    (CTL)
```

List the images in the image service to verify connectivity with the image service:

```shell
openstack image list                                                      (CTL)
```

Check whether the cells are working correctly and the other prerequisites are in place:

```shell
nova-status upgrade check                                                 (CTL)
```
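Compute nodes added later have to be discovered again with `nova-manage cell_v2 discover_hosts`; alternatively, the scheduler can discover them periodically. A small optional sketch using a standard nova option (interval in seconds):

```shell
vi /etc/nova/nova.conf                                                    (CTL)

[scheduler]
discover_hosts_in_cells_interval = 300
```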
### Install Neutron

1. Create the database, service credentials and API endpoints.

Create the database:

```shell
mysql -u root -p                                                          (CTL)

MariaDB [(none)]> CREATE DATABASE neutron;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
IDENTIFIED BY 'NEUTRON_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
IDENTIFIED BY 'NEUTRON_DBPASS';
MariaDB [(none)]> exit
```

Note: replace NEUTRON_DBPASS with the password chosen for the neutron database.

```shell
source ~/.admin-openrc                                                    (CTL)
```

Create the neutron service credentials:

```shell
openstack user create --domain default --password-prompt neutron          (CTL)
openstack role add --project service --user neutron admin                 (CTL)
openstack service create --name neutron --description "OpenStack Networking" network   (CTL)
```

Create the neutron service API endpoints:

```shell
openstack endpoint create --region RegionOne network public http://controller:9696     (CTL)
openstack endpoint create --region RegionOne network internal http://controller:9696   (CTL)
openstack endpoint create --region RegionOne network admin http://controller:9696      (CTL)
```

2. Install the packages:

```shell
yum install openstack-neutron openstack-neutron-linuxbridge ebtables ipset \   (CTL)
            openstack-neutron-ml2

yum install openstack-neutron-linuxbridge ebtables ipset                  (CPT)
```

3. Configure neutron.

Main configuration:

```shell
vi /etc/neutron/neutron.conf

[database]
connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron    (CTL)

[DEFAULT]
core_plugin = ml2                                                         (CTL)
service_plugins = router                                                  (CTL)
allow_overlapping_ips = true                                              (CTL)
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = true                                 (CTL)
notify_nova_on_port_data_changes = true                                   (CTL)
api_workers = 3                                                           (CTL)

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = neutron
password = NEUTRON_PASS

[nova]
auth_url = http://controller:5000                                         (CTL)
auth_type = password                                                      (CTL)
project_domain_name = Default                                             (CTL)
user_domain_name = Default                                                (CTL)
region_name = RegionOne                                                   (CTL)
project_name = service                                                    (CTL)
username = nova                                                           (CTL)
password = NOVA_PASS                                                      (CTL)

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
```

Explanation:

- [database] section: configure the database entry point.
- [DEFAULT] section: enable the ml2 and router plugins, allow overlapping IP addresses, and configure the RabbitMQ message queue entry point.
- [DEFAULT] and [keystone_authtoken] sections: configure the identity service entry point.
- [DEFAULT] and [nova] sections: configure the network service to notify compute of network topology changes.
- [oslo_concurrency] section: configure the lock path.

Note:

- Replace NEUTRON_DBPASS with the password of the neutron database.
- Replace RABBIT_PASS with the password of the openstack account in RabbitMQ.
- Replace NEUTRON_PASS with the password of the neutron user.
- Replace NOVA_PASS with the password of the nova user.

Configure the ML2 plugin:

```shell
vi /etc/neutron/plugins/ml2/ml2_conf.ini                                  (CTL)

[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security

[ml2_type_flat]
flat_networks = provider

[ml2_type_vxlan]
vni_ranges = 1:1000

[securitygroup]
enable_ipset = true
```

Create the symbolic link /etc/neutron/plugin.ini:

```shell
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
```

Note:

- [ml2] section: enable flat, vlan and vxlan networks, enable the linuxbridge and l2population mechanisms, and enable the port security extension driver.
- [ml2_type_flat] section: configure the flat network as the provider virtual network.
- [ml2_type_vxlan] section: configure the VXLAN network identifier range.
- [securitygroup] section: enable ipset.

Additional note: the exact layer-2 configuration can be adapted to your needs; this document uses a provider network with linuxbridge.

Configure the Linux bridge agent:
```shell
vi /etc/neutron/plugins/ml2/linuxbridge_agent.ini

[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME

[vxlan]
enable_vxlan = true
local_ip = OVERLAY_INTERFACE_IP_ADDRESS
l2_population = true

[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
```

Explanation:

- [linux_bridge] section: map the provider virtual network to the physical network interface.
- [vxlan] section: enable the VXLAN overlay network, configure the IP address of the physical interface that handles the overlay network, and enable layer-2 population.
- [securitygroup] section: allow security groups and configure the linux bridge iptables firewall driver.

Note:

- Replace PROVIDER_INTERFACE_NAME with the physical network interface.
- Replace OVERLAY_INTERFACE_IP_ADDRESS with the management IP address of the controller node.

Configure the Layer-3 agent:

```shell
vi /etc/neutron/l3_agent.ini                                              (CTL)

[DEFAULT]
interface_driver = linuxbridge
```

Explanation: in the [DEFAULT] section, set the interface driver to linuxbridge.

Configure the DHCP agent:

```shell
vi /etc/neutron/dhcp_agent.ini                                            (CTL)

[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
```

Explanation: in the [DEFAULT] section, configure the linuxbridge interface driver and the Dnsmasq DHCP driver, and enable isolated metadata.

Configure the metadata agent:

```shell
vi /etc/neutron/metadata_agent.ini                                        (CTL)

[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = METADATA_SECRET
```

Explanation: in the [DEFAULT] section, configure the metadata host and the shared secret.

Note: replace METADATA_SECRET with a suitable metadata proxy secret.

4. Configure nova:

```shell
vi /etc/nova/nova.conf

[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = Default
user_domain_name = Default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
service_metadata_proxy = true                                             (CTL)
metadata_proxy_shared_secret = METADATA_SECRET                            (CTL)
```

Explanation: in the [neutron] section, configure the access parameters, enable the metadata proxy, and configure the secret.

Note:

- Replace NEUTRON_PASS with the password of the neutron user.
- Replace METADATA_SECRET with a suitable metadata proxy secret.

5. Synchronize the database:

```shell
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \   (CTL)
    --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
```

6. Restart the compute API service:

```shell
systemctl restart openstack-nova-api.service
```

7. Start the network services:

```shell
systemctl enable neutron-server.service neutron-linuxbridge-agent.service \     (CTL)
    neutron-dhcp-agent.service neutron-metadata-agent.service \
    neutron-l3-agent.service
systemctl restart neutron-server.service neutron-linuxbridge-agent.service \    (CTL)
    neutron-dhcp-agent.service neutron-metadata-agent.service \
    neutron-l3-agent.service

systemctl enable neutron-linuxbridge-agent.service                        (CPT)
systemctl restart neutron-linuxbridge-agent.service openstack-nova-compute.service   (CPT)
```

8. Verification

Verify that the neutron agents started successfully:

```shell
openstack network agent list
```
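At this point the core services (Keystone, Glance, Placement, Nova, Neutron) are in place, so an end-to-end smoke test is possible. The following is a minimal sketch based on the standard install-guide flow; the subnet range, allocation pool, gateway and DNS server are example values that must be adapted to your provider network, and PROVIDER_INTERFACE_NAME must already be mapped as above.

```shell
source ~/.admin-openrc

# Create a provider (flat, external) network and a subnet on it -- adjust addressing to your site
openstack network create --share --external \
    --provider-physical-network provider \
    --provider-network-type flat provider
openstack subnet create --network provider \
    --allocation-pool start=203.0.113.101,end=203.0.113.200 \
    --dns-nameserver 8.8.8.8 --gateway 203.0.113.1 \
    --subnet-range 203.0.113.0/24 provider

# A tiny flavor and a test instance using the cirros image uploaded earlier
openstack flavor create --id 0 --vcpus 1 --ram 64 --disk 1 m1.nano
openstack server create --flavor m1.nano --image cirros \
    --network provider test-instance

# The instance should eventually reach the ACTIVE state
openstack server list
```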
### Install Cinder

1. Create the database, service credentials and API endpoints.

Create the database:

```shell
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE cinder;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \
IDENTIFIED BY 'CINDER_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \
IDENTIFIED BY 'CINDER_DBPASS';
MariaDB [(none)]> exit
```

Note: replace CINDER_DBPASS with the password chosen for the cinder database.

```shell
source ~/.admin-openrc
```

Create the cinder service credentials:

```shell
openstack user create --domain default --password-prompt cinder
openstack role add --project service --user cinder admin
openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
```

Create the block storage service API endpoints:

```shell
openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s
```

2. Install the packages:

```shell
yum install openstack-cinder-api openstack-cinder-scheduler               (CTL)

yum install lvm2 device-mapper-persistent-data scsi-target-utils rpcbind nfs-utils \   (STG)
            openstack-cinder-volume openstack-cinder-backup
```

3. Prepare the storage device. The following is only an example:

```shell
pvcreate /dev/vdb
vgcreate cinder-volumes /dev/vdb

vi /etc/lvm/lvm.conf
```
filter = [ \"a/vdb/\", \"r/.*/\"] \u89e3\u91ca \u5728devices\u90e8\u5206\uff0c\u6dfb\u52a0\u8fc7\u6ee4\u4ee5\u63a5\u53d7/dev/vdb\u8bbe\u5907\u62d2\u7edd\u5176\u4ed6\u8bbe\u5907\u3002 \u51c6\u5907NFS mkdir -p /root/cinder/backup cat << EOF >> /etc/export /root/cinder/backup 192.168.1.0/24(rw,sync,no_root_squash,no_all_squash) EOF \u914d\u7f6ecinder\u76f8\u5173\u914d\u7f6e\uff1a vi /etc/cinder/cinder.conf [DEFAULT] transport_url = rabbit://openstack:RABBIT_PASS@controller auth_strategy = keystone my_ip = 10.0.0.11 enabled_backends = lvm (STG) backup_driver=cinder.backup.drivers.nfs.NFSBackupDriver (STG) backup_share=HOST:PATH (STG) [database] connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder [keystone_authtoken] www_authenticate_uri = http://controller:5000 auth_url = http://controller:5000 memcached_servers = controller:11211 auth_type = password project_domain_name = Default user_domain_name = Default project_name = service username = cinder password = CINDER_PASS [oslo_concurrency] lock_path = /var/lib/cinder/tmp [lvm] volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver (STG) volume_group = cinder-volumes (STG) iscsi_protocol = iscsi (STG) iscsi_helper = tgtadm (STG) \u89e3\u91ca [database]\u90e8\u5206\uff0c\u914d\u7f6e\u6570\u636e\u5e93\u5165\u53e3\uff1b [DEFAULT]\u90e8\u5206\uff0c\u914d\u7f6eRabbitMQ\u6d88\u606f\u961f\u5217\u5165\u53e3\uff0c\u914d\u7f6emy_ip\uff1b [DEFAULT] [keystone_authtoken]\u90e8\u5206\uff0c\u914d\u7f6e\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u5165\u53e3\uff1b [oslo_concurrency]\u90e8\u5206\uff0c\u914d\u7f6elock path\u3002 \u6ce8\u610f \u66ff\u6362 CINDER_DBPASS \u4e3a cinder \u6570\u636e\u5e93\u7684\u5bc6\u7801\uff1b \u66ff\u6362 RABBIT_PASS \u4e3a RabbitMQ \u4e2d openstack \u8d26\u6237\u7684\u5bc6\u7801\uff1b \u914d\u7f6e my_ip \u4e3a\u63a7\u5236\u8282\u70b9\u7684\u7ba1\u7406 IP \u5730\u5740\uff1b \u66ff\u6362 CINDER_PASS \u4e3a cinder \u7528\u6237\u7684\u5bc6\u7801\uff1b \u66ff\u6362 HOST:PATH \u4e3a NFS\u7684HOSTIP\u548c\u5171\u4eab\u8def\u5f84\uff1b \u540c\u6b65\u6570\u636e\u5e93\uff1a su -s /bin/sh -c \"cinder-manage db sync\" cinder (CTL) \u914d\u7f6enova\uff1a vi /etc/nova/nova.conf (CTL) [cinder] os_region_name = RegionOne \u91cd\u542f\u8ba1\u7b97API\u670d\u52a1 systemctl restart openstack-nova-api.service \u542f\u52a8cinder\u670d\u52a1 systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service (CTL) systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service (CTL) systemctl enable rpcbind.service nfs-server.service tgtd.service iscsid.service \\ (STG) openstack-cinder-volume.service \\ openstack-cinder-backup.service systemctl start rpcbind.service nfs-server.service tgtd.service iscsid.service \\ (STG) openstack-cinder-volume.service \\ openstack-cinder-backup.service \u6ce8\u610f \u5f53cinder\u4f7f\u7528tgtadm\u7684\u65b9\u5f0f\u6302\u5377\u7684\u65f6\u5019\uff0c\u8981\u4fee\u6539/etc/tgt/tgtd.conf\uff0c\u5185\u5bb9\u5982\u4e0b\uff0c\u4fdd\u8bc1tgtd\u53ef\u4ee5\u53d1\u73b0cinder-volume\u7684iscsi target\u3002 include /var/lib/cinder/volumes/* \u9a8c\u8bc1 source ~/.admin-openrc openstack volume service list horizon \u5b89\u88c5 \u00b6 \u5b89\u88c5\u8f6f\u4ef6\u5305 yum install openstack-dashboard \u4fee\u6539\u6587\u4ef6 \u4fee\u6539\u53d8\u91cf vi /etc/openstack-dashboard/local_settings OPENSTACK_HOST = \"controller\" ALLOWED_HOSTS = ['*', ] SESSION_ENGINE = 'django.contrib.sessions.backends.cache' CACHES = { 'default': { 'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache', 
### Install Horizon

1. Install the packages:

```shell
yum install openstack-dashboard
```

2. Modify the variables in /etc/openstack-dashboard/local_settings:

```shell
vi /etc/openstack-dashboard/local_settings

OPENSTACK_HOST = "controller"
ALLOWED_HOSTS = ['*', ]

SESSION_ENGINE = 'django.contrib.sessions.backends.cache'

CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'controller:11211',
    }
}

OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"

OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 3,
}
```

3. Restart the httpd service:

```shell
systemctl restart httpd.service memcached.service
```

4. Verification

Open a browser, go to http://HOSTIP/dashboard/ and log in to horizon.

Note: replace HOSTIP with the management-plane IP address of the controller node.

### Install Tempest

Tempest is the integration test suite of OpenStack. It is recommended if you need comprehensive automated testing of the installed OpenStack environment; otherwise it can be skipped.

1. Install Tempest:

```shell
yum install openstack-tempest
```

2. Initialize a workspace:

```shell
tempest init mytest
```

3. Modify the configuration file:

```shell
cd mytest
vi etc/tempest.conf
```

tempest.conf must be populated with the details of the current OpenStack environment; refer to the official sample for the full contents (a minimal sketch of the key fields is given at the end of this section).

4. Run the tests:

```shell
tempest run
```

5. Install tempest plugins (optional)

The OpenStack services also provide their own tempest test packages, which can be installed to enrich the tempest test content. In Train we provide extension tests for Cinder, Glance, Keystone, Ironic and Trove, which can be installed as follows:

```shell
yum install python3-cinder-tempest-plugin python3-glance-tempest-plugin python3-ironic-tempest-plugin python3-keystone-tempest-plugin python3-trove-tempest-plugin
```
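As referenced in step 3 above, this is a minimal sketch of the kind of values tempest.conf needs. The option names are standard tempest options, but ADMIN_PASS and the endpoint must match your deployment, and the official sample remains the authoritative reference:

```shell
vi etc/tempest.conf

[auth]
admin_username = admin
admin_password = ADMIN_PASS
admin_project_name = admin
admin_domain_name = Default

[identity]
uri_v3 = http://controller:5000/v3
```

While the configuration is being tuned, a run can also be limited to the smoke tests, e.g. `tempest run --smoke`.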
### Install Ironic

Ironic is the bare metal service of OpenStack. It is recommended if you need to provision bare metal machines; otherwise it can be skipped.

1. Set up the database.

The bare metal service stores information in a database. Create an `ironic` database that the `ironic` user can access, replacing IRONIC_DBPASSWORD with a suitable password:

```shell
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE ironic CHARACTER SET utf8;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'localhost' \
IDENTIFIED BY 'IRONIC_DBPASSWORD';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'%' \
IDENTIFIED BY 'IRONIC_DBPASSWORD';
```

2. Install the packages:

```shell
yum install openstack-ironic-api openstack-ironic-conductor python3-ironicclient
```

3. Start the services:

```shell
systemctl enable openstack-ironic-api openstack-ironic-conductor
systemctl start openstack-ironic-api openstack-ironic-conductor
```

4. Create the service user authentication.

1) Create the Bare Metal service user:

```shell
openstack user create --password IRONIC_PASSWORD \
    --email ironic@example.com ironic
openstack role add --project service --user ironic admin
openstack service create --name ironic \
    --description "Ironic baremetal provisioning service" baremetal
```

2) Create the Bare Metal service endpoints:

```shell
openstack endpoint create --region RegionOne baremetal admin http://$IRONIC_NODE:6385
openstack endpoint create --region RegionOne baremetal public http://$IRONIC_NODE:6385
openstack endpoint create --region RegionOne baremetal internal http://$IRONIC_NODE:6385
```

5. Configure the ironic-api service.

The configuration file path is /etc/ironic/ironic.conf.

1) Configure the location of the database via the `connection` option, as shown below, replacing IRONIC_DBPASSWORD with the password of the ironic user and DB_IP with the IP address of the DB server:

```shell
[database]

# The SQLAlchemy connection string used to connect to the
# database (string value)
connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic
```

2) Configure the ironic-api service to use the RabbitMQ message broker via the following option, replacing RPC_* with the RabbitMQ address details and credentials:

```shell
[DEFAULT]

# A URL representing the messaging driver to use and its full
# configuration. (string value)
transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
```

Users may also replace rabbitmq with json-rpc on their own.
3) Configure the ironic-api service to use the identity service credentials, replacing PUBLIC_IDENTITY_IP with the public IP of the identity server, PRIVATE_IDENTITY_IP with the private IP of the identity server, and IRONIC_PASSWORD with the password of the ironic user in the identity service:

```shell
[DEFAULT]

# Authentication strategy used by ironic-api: one of
# "keystone" or "noauth". "noauth" should not be used in a
# production environment because all authentication will be
# disabled. (string value)
auth_strategy=keystone

[keystone_authtoken]

# Authentication type to load (string value)
auth_type=password

# Complete public Identity API endpoint (string value)
www_authenticate_uri=http://PUBLIC_IDENTITY_IP:5000

# Complete admin Identity API endpoint. (string value)
auth_url=http://PRIVATE_IDENTITY_IP:5000

# Service username. (string value)
username=ironic

# Service account password. (string value)
password=IRONIC_PASSWORD

# Service tenant name. (string value)
project_name=service

# Domain name containing project (string value)
project_domain_name=Default

# User's domain name (string value)
user_domain_name=Default
```

4) Create the bare metal service database tables:

```shell
ironic-dbsync --config-file /etc/ironic/ironic.conf create_schema
```

5) Restart the ironic-api service:

```shell
sudo systemctl restart openstack-ironic-api
```

6. Configure the ironic-conductor service.

1) Replace HOST_IP with the IP of the conductor host:

```shell
[DEFAULT]

# IP address of this host. If unset, will determine the IP
# programmatically. If unable to do so, will use "127.0.0.1".
# (string value)
my_ip=HOST_IP
```

2) Configure the location of the database. ironic-conductor should use the same configuration as ironic-api. Replace IRONIC_DBPASSWORD with the password of the ironic user and DB_IP with the IP address of the DB server:

```shell
[database]

# The SQLAlchemy connection string to use to connect to the
# database. (string value)
connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic
```

3) Configure the service to use the RabbitMQ message broker via the following option. ironic-conductor should use the same configuration as ironic-api; replace RPC_* with the RabbitMQ address details and credentials:

```shell
[DEFAULT]

# A URL representing the messaging driver to use and its full
# configuration. (string value)
transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
```

Users may also replace rabbitmq with json-rpc on their own.

4) Configure the credentials for accessing other OpenStack services.

To communicate with other OpenStack services, the bare metal service needs to authenticate with the OpenStack Identity service using service users when requesting those services. The credentials of these users must be configured in each configuration section associated with the corresponding service:

- [neutron] - access the OpenStack network service
- [glance] - access the OpenStack image service
- [swift] - access the OpenStack object storage service
- [cinder] - access the OpenStack block storage service
- [inspector] - access the OpenStack bare metal introspection service
- [service_catalog] - a special entry used to store the credentials of the bare metal service itself, which it uses to discover its own API URL endpoint as registered in the OpenStack identity service catalog

For simplicity, the same service user can be used for all services. For backward compatibility it should be the same user as configured in the [keystone_authtoken] section of the ironic-api service, but this is not mandatory; a separate service user can be created and configured for each service.

In the example below, the authentication information for accessing the OpenStack network service is configured as follows:
- the network service is deployed in the identity service region named RegionOne, with only the public endpoint interface registered in the service catalog;
- requests use a specific CA SSL certificate for HTTPS connections;
- the same service user as configured for the ironic-api service;
- the dynamic password authentication plugin discovers a suitable identity service API version based on the other options.

```shell
[neutron]

# Authentication type to load (string value)
auth_type = password

# Authentication URL (string value)
auth_url=https://IDENTITY_IP:5000/

# Username (string value)
username=ironic

# User's password (string value)
password=IRONIC_PASSWORD

# Project name to scope to (string value)
project_name=service

# Domain ID containing project (string value)
project_domain_id=default

# User's domain id (string value)
user_domain_id=default

# PEM encoded Certificate Authority to use when verifying
# HTTPs connections. (string value)
cafile=/opt/stack/data/ca-bundle.pem

# The default region_name for endpoint URL discovery. (string
# value)
region_name = RegionOne

# List of interfaces, in order of preference, for endpoint
# URL. (list value)
valid_interfaces=public
```

By default, to communicate with other services, the bare metal service tries to discover a suitable endpoint for the service through the service catalog of the identity service. If you want to use a different endpoint for a particular service, specify it via the endpoint_override option in the bare metal service configuration file:

```shell
[neutron]
...
endpoint_override =
```

5) Configure the allowed drivers and hardware types.

Set enabled_hardware_types to configure the hardware types allowed by the ironic-conductor service:

```shell
[DEFAULT]
enabled_hardware_types = ipmi
```

Configure the hardware interfaces:

```shell
enabled_boot_interfaces = pxe
enabled_deploy_interfaces = direct,iscsi
enabled_inspect_interfaces = inspector
enabled_management_interfaces = ipmitool
enabled_power_interfaces = ipmitool
```

Configure the interface defaults:

```shell
[DEFAULT]
default_deploy_interface = direct
default_network_interface = neutron
```

If any driver that uses direct deploy is enabled, the Swift backend of the image service must be installed and configured. The Ceph object gateway (RADOS gateway) is also supported as an image service backend.

6) Restart the ironic-conductor service:

```shell
sudo systemctl restart openstack-ironic-conductor
```
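At this point ironic-api and ironic-conductor are both configured. A quick optional check, using the python3-ironicclient installed earlier, is to confirm that the enabled hardware types are registered by a running conductor:

```shell
source ~/.admin-openrc

# The hardware types enabled above (e.g. ipmi) should be listed by a live conductor
openstack baremetal driver list
```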
7. Configure the httpd service.

1) Create the httpd root directory used by ironic and set its owner and group. The directory path must match the http_root configuration item in the [deploy] section of /etc/ironic/ironic.conf:

```shell
mkdir -p /var/lib/ironic/httproot
chown ironic.ironic /var/lib/ironic/httproot
```

2) Install and configure the httpd service.

Install the httpd service (skip if it is already installed):

```shell
yum install httpd -y
```

Create the file /etc/httpd/conf.d/openstack-ironic-httpd.conf with the following content (the virtual host and directory wrappers are reconstructed here, since the rendered original lost the surrounding tags):

```shell
Listen 8080

<VirtualHost *:8080>
    ServerName ironic.openeuler.com

    ErrorLog "/var/log/httpd/openstack-ironic-httpd-error_log"
    CustomLog "/var/log/httpd/openstack-ironic-httpd-access_log" "%h %l %u %t \"%r\" %>s %b"

    DocumentRoot "/var/lib/ironic/httproot"
    <Directory "/var/lib/ironic/httproot">
        Options Indexes FollowSymLinks
        Require all granted
    </Directory>
    LogLevel warn
    AddDefaultCharset UTF-8
    EnableSendfile on
</VirtualHost>
```

Note that the listening port must match the port specified in the http_url configuration item in the [deploy] section of /etc/ironic/ironic.conf.

Restart the httpd service:

```shell
systemctl restart httpd
```

8. Build the deploy ramdisk image.

The Train ramdisk image can be built with the ironic-python-agent service or the disk-image-builder tool; the community's latest ironic-python-agent-builder can also be used. Users may also choose other tools.

If you use the Train native tools, the corresponding packages need to be installed:

```shell
yum install openstack-ironic-python-agent
```

or

```shell
yum install diskimage-builder
```

For usage details, see the official documentation.

The following describes the complete process of building the deploy image used by ironic with ironic-python-agent-builder.

Install ironic-python-agent-builder:

1) Install the tool:

```shell
pip install ironic-python-agent-builder
```

2) Modify the python interpreter in the following files:

```shell
/usr/bin/yum
/usr/libexec/urlgrabber-ext-down
```
3) Install the other required tools:

```shell
yum install git
```

Since `DIB` depends on the `semanage` command, make sure the command is available before building the image: run `semanage --help`; if it reports that the command does not exist, install it:

```shell
# First query which package provides it
[root@localhost ~]# yum provides /usr/sbin/semanage
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirror.vcu.edu
 * extras: mirror.vcu.edu
 * updates: mirror.math.princeton.edu
policycoreutils-python-2.5-34.el7.aarch64 : SELinux policy core python utilities
Repo        : base
Matched from:
Filename    : /usr/sbin/semanage

# Install it
[root@localhost ~]# yum install policycoreutils-python
```

Build the image.

If you are on the `arm` architecture, add:

```shell
export ARCH=aarch64
```

Basic usage:

```shell
usage: ironic-python-agent-builder [-h] [-r RELEASE] [-o OUTPUT] [-e ELEMENT]
                                   [-b BRANCH] [-v] [--extra-args EXTRA_ARGS]
                                   distribution

positional arguments:
  distribution          Distribution to use

optional arguments:
  -h, --help            show this help message and exit
  -r RELEASE, --release RELEASE
                        Distribution release to use
  -o OUTPUT, --output OUTPUT
                        Output base file name
  -e ELEMENT, --element ELEMENT
                        Additional DIB element to use
  -b BRANCH, --branch BRANCH
                        If set, override the branch that is used for ironic-
                        python-agent and requirements
  -v, --verbose         Enable verbose logging in diskimage-builder
  --extra-args EXTRA_ARGS
                        Extra arguments to pass to diskimage-builder
```

Example:

```shell
ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky
```

Allow ssh login: initialize the environment variables, then build the image:

```shell
export DIB_DEV_USER_USERNAME=ipa \
export DIB_DEV_USER_PWDLESS_SUDO=yes \
export DIB_DEV_USER_PASSWORD='123'
ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky -e selinux-permissive -e devuser
```

Specify the code repository: initialize the corresponding environment variables, then build the image:

```shell
# Specify the repository location and version
DIB_REPOLOCATION_ironic_python_agent=git@172.20.2.149:liuzz/ironic-python-agent.git
DIB_REPOREF_ironic_python_agent=origin/develop

# Clone the code directly from gerrit
DIB_REPOLOCATION_ironic_python_agent=https://review.opendev.org/openstack/ironic-python-agent
DIB_REPOREF_ironic_python_agent=refs/changes/43/701043/1
```

Reference: [source-repositories](https://docs.openstack.org/diskimage-builder/latest/elements/source-repositories/README.html).

Specifying the repository location and version has been verified to work.

Note: the pxe configuration file template in native openstack does not support the arm64 architecture; the native openstack code must be modified by the user:

- In Train, the community ironic still does not support UEFI PXE boot on arm64. This manifests as the generated grub.cfg file (usually under /tftpboot/) having the wrong format, which causes PXE boot to fail.
  The code logic that generates grub.cfg must be modified by the user.

- TLS errors when ironic sends requests to ipa to query command execution status: in Train, both ipa and ironic send requests to each other with TLS authentication enabled by default; disable it according to the official documentation as follows.

  Modify the ironic configuration file (/etc/ironic/ironic.conf), adding ipa-insecure=1 in the following configuration:

```shell
[agent]
verify_ca = False

[pxe]
pxe_append_params = nofb nomodeset vga=normal coreos.autologin ipa-insecure=1
```

  Add the ipa configuration file /etc/ironic_python_agent/ironic_python_agent.conf inside the ramdisk image and configure TLS as follows (the /etc/ironic_python_agent directory must be created in advance):

```shell
/etc/ironic_python_agent/ironic_python_agent.conf

[DEFAULT]
enable_auto_tls = False
```

  Set the permissions:

```shell
chown -R ipa.ipa /etc/ironic_python_agent/
```

  Modify the service file of the ipa service to add the configuration file option:

```shell
vi /usr/lib/systemd/system/ironic-python-agent.service

[Unit]
Description=Ironic Python Agent
After=network-online.target

[Service]
ExecStartPre=/sbin/modprobe vfat
ExecStart=/usr/local/bin/ironic-python-agent --config-file /etc/ironic_python_agent/ironic_python_agent.conf
Restart=always
RestartSec=30s

[Install]
WantedBy=multi-user.target
```

In Train we also provide services such as ironic-inspector, which users can install according to their needs.

### Install Kolla

Kolla provides production-ready containerized deployment for OpenStack services.

Installing Kolla is very simple; just install the corresponding RPM packages:

```shell
yum install openstack-kolla openstack-kolla-ansible
```

After installation, commands such as `kolla-ansible`, `kolla-build`, `kolla-genpwd` and `kolla-mergepwd` can be used to build images and deploy container environments.
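The exact workflow depends on your inventory and configuration, but as a rough sketch of how these commands are typically strung together (the inventory file here stands for the multinode example shipped with kolla-ansible and must be adapted to your hosts):

```shell
# Generate service passwords into /etc/kolla/passwords.yml
kolla-genpwd

# Prepare the target hosts, run sanity checks, then deploy the containers
kolla-ansible -i ./multinode bootstrap-servers
kolla-ansible -i ./multinode prechecks
kolla-ansible -i ./multinode deploy
```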
### Install Trove

Trove is the database service of OpenStack. It is recommended if you want the database-as-a-service capability provided by OpenStack; otherwise it can be skipped.

1. Set up the database.

The database service stores information in a database. Create a `trove` database that the `trove` user can access, replacing TROVE_DBPASSWORD with a suitable password:

```shell
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE trove CHARACTER SET utf8;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'localhost' \
IDENTIFIED BY 'TROVE_DBPASSWORD';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'%' \
IDENTIFIED BY 'TROVE_DBPASSWORD';
```

2. Create the service user authentication.

1) Create the Trove service user:

```shell
openstack user create --domain default --password-prompt trove
openstack role add --project service --user trove admin
openstack service create --name trove --description "Database" database
```

Explanation: TROVE_PASSWORD is replaced by the password of the trove user.

2) Create the Database service endpoints:

```shell
openstack endpoint create --region RegionOne database public http://controller:8779/v1.0/%\(tenant_id\)s
openstack endpoint create --region RegionOne database internal http://controller:8779/v1.0/%\(tenant_id\)s
openstack endpoint create --region RegionOne database admin http://controller:8779/v1.0/%\(tenant_id\)s
```

3. Install and configure the Trove components.

1) Install the Trove packages:

```shell
yum install openstack-trove python3-troveclient
```

2) Configure trove.conf:

```shell
vi /etc/trove/trove.conf

[DEFAULT]
log_dir = /var/log/trove
trove_auth_url = http://controller:5000/
nova_compute_url = http://controller:8774/v2
cinder_url = http://controller:8776/v1
swift_url = http://controller:8080/v1/AUTH_
rpc_backend = rabbit
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672
auth_strategy = keystone
add_addresses = True
api_paste_config = /etc/trove/api-paste.ini
nova_proxy_admin_user = admin
nova_proxy_admin_pass = ADMIN_PASSWORD
nova_proxy_admin_tenant_name = service
taskmanager_manager = trove.taskmanager.manager.Manager
use_nova_server_config_drive = True
# Set these if using Neutron Networking
network_driver = trove.network.neutron.NeutronDriver
network_label_regex = .*

[database]
connection = mysql+pymysql://trove:TROVE_DBPASSWORD@controller/trove

[keystone_authtoken]
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = trove
password = TROVE_PASSWORD
```

Explanation:

- In the [DEFAULT] section, nova_compute_url and cinder_url are the endpoints created by Nova and Cinder in Keystone.
- nova_proxy_XXX is the information of a user that can access the Nova service; the example above uses the admin user.
- transport_url is the RabbitMQ connection information; replace RABBIT_PASS with the RabbitMQ password.
- The connection in the [database] section is the database information created for Trove in mysql earlier.
- In the Trove user information, replace TROVE_PASSWORD with the actual password of the trove user.

3) Configure trove-guestagent.conf:

```shell
vi /etc/trove/trove-guestagent.conf

rabbit_host = controller
rabbit_password = RABBIT_PASS
trove_auth_url = http://controller:5000/
```

Explanation: guestagent is an independent component of trove that needs to be pre-installed in the virtual machine images that Trove creates through Nova. After a database instance has been created, the guestagent process starts and reports heartbeats to Trove through the message queue (RabbitMQ), so the RabbitMQ user and password must be configured.

Starting from the Victoria release, Trove uses a single unified image to run different database types; the database services run in Docker containers inside the guest virtual machine.

Replace RABBIT_PASS with the RabbitMQ password.
4) Generate the Trove database tables:

```shell
su -s /bin/sh -c "trove-manage db_sync" trove
```

4. Complete the installation and configuration.

1) Configure the Trove services to start at boot:

```shell
systemctl enable openstack-trove-api.service \
                 openstack-trove-taskmanager.service \
                 openstack-trove-conductor.service
```

2) Start the services:

```shell
systemctl start openstack-trove-api.service \
                openstack-trove-taskmanager.service \
                openstack-trove-conductor.service
```

### Install Swift

Swift provides elastic, scalable and highly available distributed object storage, suitable for storing large amounts of unstructured data.

1. Create the service credentials and API endpoints.

Create the service credentials:

```shell
# Create the swift user
openstack user create --domain default --password-prompt swift
# Add the admin role to the swift user
openstack role add --project service --user swift admin
# Create the swift service entity
openstack service create --name swift --description "OpenStack Object Storage" object-store
```

Create the swift API endpoints:

```shell
openstack endpoint create --region RegionOne object-store public http://controller:8080/v1/AUTH_%\(project_id\)s
openstack endpoint create --region RegionOne object-store internal http://controller:8080/v1/AUTH_%\(project_id\)s
openstack endpoint create --region RegionOne object-store admin http://controller:8080/v1
```

2. Install the packages:

```shell
yum install openstack-swift-proxy python3-swiftclient python3-keystoneclient python3-keystonemiddleware memcached   (CTL)
```

3. Configure the proxy-server.

The Swift RPM package already ships a basically usable proxy-server.conf; only the ip and swift password in it need to be modified manually.

Note: replace password with the password you chose for the swift user in the identity service.

4. Install and configure the storage nodes.   (STG)

Install the supporting packages:

```shell
yum install xfsprogs rsync
```

Format the /dev/vdb and /dev/vdc devices as XFS:

```shell
mkfs.xfs /dev/vdb
mkfs.xfs /dev/vdc
```

Create the mount point directory structure:

```shell
mkdir -p /srv/node/vdb
mkdir -p /srv/node/vdc
```

Find the UUIDs of the new partitions:

```shell
blkid
```

Edit /etc/fstab and add the following to it:

```shell
UUID="" /srv/node/vdb xfs noatime 0 2
UUID="" /srv/node/vdc xfs noatime 0 2
```

Mount the devices:

```shell
mount /srv/node/vdb
mount /srv/node/vdc
```

Note: if you do not need the disaster recovery capability, only one device needs to be created in the steps above, and the rsync configuration below can also be skipped.

(Optional) Create or edit the /etc/rsyncd.conf file to include the following:

```shell
[DEFAULT]
uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
address = MANAGEMENT_INTERFACE_IP_ADDRESS

[account]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/account.lock

[container]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/container.lock
```
```shell
[DEFAULT]
uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
address = MANAGEMENT_INTERFACE_IP_ADDRESS

[account]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/account.lock

[container]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/container.lock

[object]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/object.lock
```

   Replace MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network on the storage node.

8. Start the rsyncd service and configure it to start at boot:

```shell
systemctl enable rsyncd.service
systemctl start rsyncd.service
```

Install and configure the components on the storage nodes (STG)

1. Install the packages:

```shell
yum install openstack-swift-account openstack-swift-container openstack-swift-object
```

2. Edit the account-server.conf, container-server.conf and object-server.conf files in the /etc/swift directory, replacing bind_ip with the IP address of the management network on the storage node.

3. Ensure proper ownership of the mount point directory structure:

```shell
chown -R swift:swift /srv/node
```

4. Create the recon directory and ensure proper ownership of it:

```shell
mkdir -p /var/cache/swift
chown -R root:swift /var/cache/swift
chmod -R 775 /var/cache/swift
```

Create the account ring (CTL)

1. Change to the /etc/swift directory:

```shell
cd /etc/swift
```

2. Create the base account.builder file:

```shell
swift-ring-builder account.builder create 10 1 1
```

3. Add each storage node to the ring:

```shell
swift-ring-builder account.builder add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6202 --device DEVICE_NAME --weight DEVICE_WEIGHT
```

   Replace STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network on the storage node, and DEVICE_NAME with the name of a storage device on the same node.

   Note

   Repeat this command for every storage device on every storage node.

4. Verify the ring contents:

```shell
swift-ring-builder account.builder
```

5. Rebalance the ring:

```shell
swift-ring-builder account.builder rebalance
```

Create the container ring (CTL)

1. Change to the /etc/swift directory.

2. Create the base container.builder file:

```shell
swift-ring-builder container.builder create 10 1 1
```

3. Add each storage node to the ring:

```shell
swift-ring-builder container.builder \
  add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6201 \
  --device DEVICE_NAME --weight 100
```

   Replace STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network on the storage node, and DEVICE_NAME with the name of a storage device on the same node.

   Note

   Repeat this command for every storage device on every storage node.

4. Verify the ring contents:

```shell
swift-ring-builder container.builder
```

5. Rebalance the ring:

```shell
swift-ring-builder container.builder rebalance
```
Create the object ring (CTL)

1. Change to the /etc/swift directory.

2. Create the base object.builder file:

```shell
swift-ring-builder object.builder create 10 1 1
```

3. Add each storage node to the ring:

```shell
swift-ring-builder object.builder \
  add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6200 \
  --device DEVICE_NAME --weight 100
```

   Replace STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network on the storage node, and DEVICE_NAME with the name of a storage device on the same node.

   Note

   Repeat this command for every storage device on every storage node.

4. Verify the ring contents:

```shell
swift-ring-builder object.builder
```

5. Rebalance the ring:

```shell
swift-ring-builder object.builder rebalance
```

Distribute the ring configuration files:

Copy the account.ring.gz, container.ring.gz and object.ring.gz files to the /etc/swift directory on every storage node and on any other node that runs the proxy service.

Finish the installation

1. Edit the /etc/swift/swift.conf file:

```shell
[swift-hash]
swift_hash_path_suffix = test-hash
swift_hash_path_prefix = test-hash

[storage-policy:0]
name = Policy-0
default = yes
```

   Replace test-hash with unique values.

2. Copy the swift.conf file to the /etc/swift directory on every storage node and on any other node that runs the proxy service.

3. On all nodes, ensure proper ownership of the configuration directory:

```shell
chown -R root:swift /etc/swift
```

4. On the controller node and on any other node that runs the proxy service, start the Object Storage proxy service and its dependencies, and configure them to start at boot:

```shell
systemctl enable openstack-swift-proxy.service memcached.service
systemctl start openstack-swift-proxy.service memcached.service
```

5. On the storage nodes, start the Object Storage services and configure them to start at boot:

```shell
systemctl enable openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service
systemctl start openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service
systemctl enable openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service
systemctl start openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service
systemctl enable openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service
systemctl start openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service
```
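After the proxy and storage services are up, a small end-to-end check helps confirm the rings and the proxy configuration. This is a minimal sketch assuming the admin credentials in `~/.admin-openrc`; the container and file names are only examples.

```shell
source ~/.admin-openrc
# Account statistics returned through the proxy server
swift stat
# Upload and list a small test object (names are examples)
openstack container create test-container
openstack object create test-container /etc/hosts
openstack object list test-container
```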
## Cyborg Installation

Cyborg provides accelerator device support for OpenStack, including GPU, FPGA, ASIC, NP, SoC, NVMe/NOF SSD, ODP, DPDK/SPDK and so on.

1. Initialize the corresponding database:

```shell
CREATE DATABASE cyborg;
GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'localhost' IDENTIFIED BY 'CYBORG_DBPASS';
GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'%' IDENTIFIED BY 'CYBORG_DBPASS';
```

2. Create the corresponding Keystone resource objects:

```shell
$ openstack user create --domain default --password-prompt cyborg
$ openstack role add --project service --user cyborg admin
$ openstack service create --name cyborg --description "Acceleration Service" accelerator
$ openstack endpoint create --region RegionOne \
  accelerator public http://:6666/v1
$ openstack endpoint create --region RegionOne \
  accelerator internal http://:6666/v1
$ openstack endpoint create --region RegionOne \
  accelerator admin http://:6666/v1
```

3. Install Cyborg:

```shell
yum install openstack-cyborg
```

4. Configure Cyborg by editing /etc/cyborg/cyborg.conf:

```shell
[DEFAULT]
transport_url = rabbit://%RABBITMQ_USER%:%RABBITMQ_PASSWORD%@%OPENSTACK_HOST_IP%:5672/
use_syslog = False
state_path = /var/lib/cyborg
debug = True

[database]
connection = mysql+pymysql://%DATABASE_USER%:%DATABASE_PASSWORD%@%OPENSTACK_HOST_IP%/cyborg

[service_catalog]
project_domain_id = default
user_domain_id = default
project_name = service
password = PASSWORD
username = cyborg
auth_url = http://%OPENSTACK_HOST_IP%/identity
auth_type = password

[placement]
project_domain_name = Default
project_name = service
user_domain_name = Default
password = PASSWORD
username = placement
auth_url = http://%OPENSTACK_HOST_IP%/identity
auth_type = password

[keystone_authtoken]
memcached_servers = localhost:11211
project_domain_name = Default
project_name = service
user_domain_name = Default
password = PASSWORD
username = cyborg
auth_url = http://%OPENSTACK_HOST_IP%/identity
auth_type = password
```

   Adjust the user names, passwords, IP addresses and similar values as appropriate for your environment.

5. Synchronize the database tables:

```shell
cyborg-dbsync --config-file /etc/cyborg/cyborg.conf upgrade
```

6. Start the Cyborg services:

```shell
systemctl enable openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent
systemctl start openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent
```

## Aodh Installation

1. Create the database:

```shell
CREATE DATABASE aodh;
GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'localhost' IDENTIFIED BY 'AODH_DBPASS';
GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'%' IDENTIFIED BY 'AODH_DBPASS';
```

2. Create the corresponding Keystone resource objects:

```shell
openstack user create --domain default --password-prompt aodh
openstack role add --project service --user aodh admin
openstack service create --name aodh --description "Telemetry" alarming
openstack endpoint create --region RegionOne alarming public http://controller:8042
openstack endpoint create --region RegionOne alarming internal http://controller:8042
openstack endpoint create --region RegionOne alarming admin http://controller:8042
```

3. Install Aodh:

```shell
yum install openstack-aodh-api openstack-aodh-evaluator openstack-aodh-notifier openstack-aodh-listener openstack-aodh-expirer python3-aodhclient
```

4. Edit the configuration file:
```shell
[database]
connection = mysql+pymysql://aodh:AODH_DBPASS@controller/aodh

[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = aodh
password = AODH_PASS

[service_credentials]
auth_type = password
auth_url = http://controller:5000/v3
project_domain_id = default
user_domain_id = default
project_name = service
username = aodh
password = AODH_PASS
interface = internalURL
region_name = RegionOne
```

5. Initialize the database:

```shell
aodh-dbsync
```

6. Start the Aodh services:

```shell
systemctl enable openstack-aodh-api.service openstack-aodh-evaluator.service openstack-aodh-notifier.service openstack-aodh-listener.service
systemctl start openstack-aodh-api.service openstack-aodh-evaluator.service openstack-aodh-notifier.service openstack-aodh-listener.service
```
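A short check can confirm that the alarming API is answering. This is a minimal sketch assuming the admin credentials in `~/.admin-openrc` and the `python3-aodhclient` package installed above.

```shell
source ~/.admin-openrc
# The alarming endpoint should appear in the catalog
openstack endpoint list --service alarming
# An empty list with no error means aodh-api is reachable
aodh alarm list
```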
## Gnocchi Installation

1. Create the database:

```shell
CREATE DATABASE gnocchi;
GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'localhost' IDENTIFIED BY 'GNOCCHI_DBPASS';
GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'%' IDENTIFIED BY 'GNOCCHI_DBPASS';
```

2. Create the corresponding Keystone resource objects:

```shell
openstack user create --domain default --password-prompt gnocchi
openstack role add --project service --user gnocchi admin
openstack service create --name gnocchi --description "Metric Service" metric
openstack endpoint create --region RegionOne metric public http://controller:8041
openstack endpoint create --region RegionOne metric internal http://controller:8041
openstack endpoint create --region RegionOne metric admin http://controller:8041
```

3. Install Gnocchi:

```shell
yum install openstack-gnocchi-api openstack-gnocchi-metricd python3-gnocchiclient
```

4. Edit the configuration file /etc/gnocchi/gnocchi.conf:

```shell
[api]
auth_mode = keystone
port = 8041
uwsgi_mode = http-socket

[keystone_authtoken]
auth_type = password
auth_url = http://controller:5000/v3
project_domain_name = Default
user_domain_name = Default
project_name = service
username = gnocchi
password = GNOCCHI_PASS
interface = internalURL
region_name = RegionOne

[indexer]
url = mysql+pymysql://gnocchi:GNOCCHI_DBPASS@controller/gnocchi

[storage]
# coordination_url is not required but specifying one will improve
# performance with better workload division across workers.
coordination_url = redis://controller:6379
file_basepath = /var/lib/gnocchi
driver = file
```

5. Initialize the database:

```shell
gnocchi-upgrade
```

6. Start the Gnocchi services:

```shell
systemctl enable openstack-gnocchi-api.service openstack-gnocchi-metricd.service
systemctl start openstack-gnocchi-api.service openstack-gnocchi-metricd.service
```

## Ceilometer Installation

1. Create the corresponding Keystone resource objects:

```shell
openstack user create --domain default --password-prompt ceilometer
openstack role add --project service --user ceilometer admin
openstack service create --name ceilometer --description "Telemetry" metering
```

2. Install Ceilometer:

```shell
yum install openstack-ceilometer-notification openstack-ceilometer-central
```

3. Edit the configuration file /etc/ceilometer/pipeline.yaml:

```shell
publishers:
    # set address of Gnocchi
    # + filter out Gnocchi-related activity meters (Swift driver)
    # + set default archive policy
    - gnocchi://?filter_project=service&archive_policy=low
```

4. Edit the configuration file /etc/ceilometer/ceilometer.conf:

```shell
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller

[service_credentials]
auth_type = password
auth_url = http://controller:5000/v3
project_domain_id = default
user_domain_id = default
project_name = service
username = ceilometer
password = CEILOMETER_PASS
interface = internalURL
region_name = RegionOne
```

5. Initialize the database:

```shell
ceilometer-upgrade
```

6. Start the Ceilometer services:

```shell
systemctl enable openstack-ceilometer-notification.service openstack-ceilometer-central.service
systemctl start openstack-ceilometer-notification.service openstack-ceilometer-central.service
```
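To confirm that the metering pipeline is wired together, the Gnocchi CLI can be queried after the agents have been running for a while. This is a minimal sketch assuming the admin credentials in `~/.admin-openrc` and the `python3-gnocchiclient` package installed above; resources and metrics only appear once Ceilometer starts publishing measures.

```shell
source ~/.admin-openrc
# Overall processing status of gnocchi-metricd
gnocchi status
# Resources and metrics created from Ceilometer samples
gnocchi resource list
gnocchi metric list
```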
## Heat Installation

1. Create the heat database and grant it the proper access permissions; replace HEAT_DBPASS with a suitable password:

```shell
CREATE DATABASE heat;
GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'localhost' IDENTIFIED BY 'HEAT_DBPASS';
GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'%' IDENTIFIED BY 'HEAT_DBPASS';
```

2. Create the service credentials: create the heat user and add the admin role to it:

```shell
openstack user create --domain default --password-prompt heat
openstack role add --project service --user heat admin
```

3. Create the heat and heat-cfn services and their API endpoints:

```shell
openstack service create --name heat --description "Orchestration" orchestration
openstack service create --name heat-cfn --description "Orchestration" cloudformation
openstack endpoint create --region RegionOne orchestration public http://controller:8004/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne orchestration internal http://controller:8004/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne orchestration admin http://controller:8004/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne cloudformation public http://controller:8000/v1
openstack endpoint create --region RegionOne cloudformation internal http://controller:8000/v1
openstack endpoint create --region RegionOne cloudformation admin http://controller:8000/v1
```

4. Create the additional information required for stack management, including the heat domain, the admin user heat_domain_admin for that domain, the heat_stack_owner role and the heat_stack_user role:

```shell
openstack user create --domain heat --password-prompt heat_domain_admin
openstack role add --domain heat --user-domain heat --user heat_domain_admin admin
openstack role create heat_stack_owner
openstack role create heat_stack_user
```

5. Install the packages:

```shell
yum install openstack-heat-api openstack-heat-api-cfn openstack-heat-engine
```

6. Edit the configuration file /etc/heat/heat.conf:

```shell
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
heat_metadata_server_url = http://controller:8000
heat_waitcondition_server_url = http://controller:8000/v1/waitcondition
stack_domain_admin = heat_domain_admin
stack_domain_admin_password = HEAT_DOMAIN_PASS
stack_user_domain_name = heat

[database]
connection = mysql+pymysql://heat:HEAT_DBPASS@controller/heat

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = heat
password = HEAT_PASS

[trustee]
auth_type = password
auth_url = http://controller:5000
username = heat
password = HEAT_PASS
user_domain_name = default

[clients_keystone]
auth_uri = http://controller:5000
```

7. Initialize the heat database tables:

```shell
su -s /bin/sh -c "heat-manage db_sync" heat
```

8. Start the services:

```shell
systemctl enable openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service
systemctl start openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service
```
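A minimal stack is a convenient way to verify that heat-api and heat-engine work end to end. The template and the names below are examples only, a sketch assuming the admin credentials in `~/.admin-openrc` and a working Neutron service:

```shell
source ~/.admin-openrc
# Minimal HOT template; the resource and network names are examples
cat << EOF > test-stack.yaml
heat_template_version: 2018-08-31
resources:
  test_net:
    type: OS::Neutron::Net
    properties:
      name: heat-test-net
EOF
openstack stack create -t test-stack.yaml test-stack
openstack stack list
# Clean up once the stack reaches CREATE_COMPLETE
openstack stack delete --yes test-stack
```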
## Installing New Features

### Neutron Traffic Distribution Feature

The traffic distribution feature is a new Neutron feature developed by the OpenStack SIG in openEuler 20.03 on top of OpenStack Train. It lets users pin a router to specific network nodes, and it also provides port forwarding based on the router's external gateway. The feature supports Neutron L3 HA and DVR; see the feature documentation for details. This section only describes the installation steps.

1. Deploy an OpenStack environment (non-containerized) as described in the previous chapters, then install the plugin:

```shell
dnf install -y openstack-neutron-distributed-traffic python3-neutron-lib-distributed-traffic
```

2. Configure the database. This feature extends Neutron's database tables, so the database must be synchronized:

```shell
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
    --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron   (CTL)
```

3. Edit the configuration file:

```shell
vim /etc/neutron/neutron.conf

[DEFAULT]
enable_set_route_for_single_port = True
network_nodes = network-1,network-2,network-3
router_scheduler_driver = neutron.scheduler.l3_agent_scheduler.PreferredL3AgentRoutersScheduler

[network-1]
compute_nodes = compute-1

[network-2]
compute_nodes = compute-2

[network-3]
compute_nodes = compute-3
```

   Here network-1, network-2 and network-3 are the hostnames of the network nodes, and compute-1, compute-2 and compute-3 are the hostnames of the compute nodes. With this configuration, when a user creates multiple routers attached to the same subnet, traffic from virtual machines on different compute nodes is steered to the router on the network node given in the configuration file.

4. Enable port forwarding based on the router's external gateway (optional). External-gateway-based port forwarding and floating-IP-based port forwarding cannot be used at the same time.

```shell
vim /etc/neutron/neutron.conf

[DEFAULT]
service_plugins = router,rg_port_forwarding
```

```shell
vim /etc/neutron/l3_agent.ini

[agent]
extensions = rg_port_forwarding
```

5. Restart the related services:

```shell
systemctl restart neutron-server.service neutron-dhcp-agent.service neutron-l3-agent.service   (CTL)
```

# OpenStack-Train Deployment Guide

Contents:

- OpenStack Overview
- Conventions
- Preparing the Environment
  - Environment Configuration
  - Installing the SQL Database
  - Installing RabbitMQ
  - Installing Memcached
- Installing OpenStack
  - Keystone Installation
  - Glance Installation
  - Placement Installation
  - Nova Installation
  - Neutron Installation
  - Cinder Installation
  - Horizon Installation
  - Tempest Installation
  - Ironic Installation
  - Kolla Installation
  - Trove Installation
  - Swift Installation
  - Cyborg Installation
  - Aodh Installation
  - Gnocchi Installation
  - Ceilometer Installation
  - Heat Installation
  - Installing New Features
    - Neutron Traffic Distribution Feature

## OpenStack Overview

OpenStack is both a community and a project. It provides an operating platform and a tool set for deploying clouds, giving organizations scalable and flexible cloud computing.

As an open source cloud computing management platform, OpenStack combines several major components, such as nova, cinder, neutron, glance, keystone and horizon, to get the actual work done. OpenStack supports almost every type of cloud environment. The project aims to deliver a cloud computing management platform that is simple to deploy, massively scalable, feature-rich and standardized. OpenStack provides an Infrastructure-as-a-Service (IaaS) solution through a set of complementary services, each of which offers an API for integration.

The official repositories of openEuler 20.03-LTS-SP4 already support the OpenStack Train release; users can configure the yum repositories and then deploy OpenStack by following this document.
## Conventions

OpenStack supports deployment in several topologies. This document covers both the All-in-One and the Distributed deployment modes, with the following conventions:

- All-in-One mode: ignore all suffixes.
- Distributed mode:
  - A `(CTL)` suffix means the configuration item or command applies only to the `controller node`.
  - A `(CPT)` suffix means the configuration item or command applies only to the `compute nodes`.
  - A `(STG)` suffix means the configuration item or command applies only to the `storage nodes`.
  - Anything else applies to both the `controller node` and the `compute nodes`.

Note: the services covered by these conventions are Cinder, Nova and Neutron.

## Preparing the Environment

### Environment Configuration

1. Configure the official 20.03-LTS-SP4 yum repositories; the EPOL repository must be enabled to support OpenStack:

```shell
cat << EOF >> /etc/yum.repos.d/20.03-LTS-SP4-OpenStack_Train.repo
[OS]
name=OS
baseurl=http://repo.openeuler.org/openEuler-20.03-LTS-SP4/OS/\$basearch/
enabled=1
gpgcheck=1
gpgkey=http://repo.openeuler.org/openEuler-20.03-LTS-SP4/OS/\$basearch/RPM-GPG-KEY-openEuler

[everything]
name=everything
baseurl=http://repo.openeuler.org/openEuler-20.03-LTS-SP4/everything/\$basearch/
enabled=1
gpgcheck=1
gpgkey=http://repo.openeuler.org/openEuler-20.03-LTS-SP4/everything/\$basearch/RPM-GPG-KEY-openEuler

[EPOL]
name=EPOL
baseurl=http://repo.openeuler.org/openEuler-20.03-LTS-SP4/EPOL/main/\$basearch/
enabled=1
gpgcheck=1
gpgkey=http://repo.openeuler.org/openEuler-20.03-LTS-SP4/OS/\$basearch/RPM-GPG-KEY-openEuler
EOF

yum clean all && yum makecache
```

2. Modify the host names and mappings. Set the host name on each node:

```shell
hostnamectl set-hostname controller   (CTL)
hostnamectl set-hostname compute      (CPT)
```

   Assuming the IP of the controller node is 10.0.0.11 and the IP of the compute node (if any) is 10.0.0.12, add the following to /etc/hosts:

```shell
10.0.0.11 controller
10.0.0.12 compute
```

## Installing the SQL Database

1. Run the following command to install the packages:

```shell
yum install mariadb mariadb-server python3-PyMySQL
```

2. Run the following command to create and edit the /etc/my.cnf.d/openstack.cnf file:

```shell
vi /etc/my.cnf.d/openstack.cnf

[mysqld]
bind-address = 10.0.0.11
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
```

   Note: set bind-address to the management IP address of the controller node.

3. Start the database service and configure it to start at boot:

```shell
systemctl enable mariadb.service
systemctl start mariadb.service
```
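Before creating the service databases, it can be worth confirming that MariaDB is running and accepting connections. A minimal sketch:

```shell
systemctl status mariadb.service
# A local login confirms the server is answering; enter the root password when prompted
mysql -u root -p -e "SELECT VERSION();"
```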
4. Configure the default password for the database (optional):

```shell
mysql_secure_installation
```

   Note: just follow the prompts.

## Installing RabbitMQ

1. Run the following command to install the packages:

```shell
yum install rabbitmq-server
```

2. Start the RabbitMQ service and configure it to start at boot:

```shell
systemctl enable rabbitmq-server.service
systemctl start rabbitmq-server.service
```

3. Add the openstack user:

```shell
rabbitmqctl add_user openstack RABBIT_PASS
```

   Note: replace RABBIT_PASS with the password chosen for the openstack user.

4. Set the permissions of the openstack user to allow configuration, write and read access:

```shell
rabbitmqctl set_permissions openstack ".*" ".*" ".*"
```

## Installing Memcached

1. Run the following command to install the dependency packages:

```shell
yum install memcached python3-memcached
```

2. Edit the /etc/sysconfig/memcached file:

```shell
vi /etc/sysconfig/memcached

OPTIONS="-l 127.0.0.1,::1,controller"
```

3. Run the following commands to start the Memcached service and configure it to start at boot:

```shell
systemctl enable memcached.service
systemctl start memcached.service
```

   Note: after the service starts, you can run `memcached-tool controller stats` to make sure it started correctly and is available; controller can be replaced with the management IP address of the controller node.

## Installing OpenStack

### Keystone Installation

1. Create the keystone database and grant privileges:

```shell
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE keystone;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
IDENTIFIED BY 'KEYSTONE_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
IDENTIFIED BY 'KEYSTONE_DBPASS';
MariaDB [(none)]> exit
```

   Note: replace KEYSTONE_DBPASS with the password chosen for the Keystone database.

2. Install the packages:

```shell
yum install openstack-keystone httpd mod_wsgi
```

3. Configure keystone:

```shell
vi /etc/keystone/keystone.conf

[database]
connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone

[token]
provider = fernet
```

   Explanation:

   - In the [database] section, configure the database entry point.
   - In the [token] section, configure the token provider.

   Note: replace KEYSTONE_DBPASS with the password of the Keystone database.

4. Synchronize the database:

```shell
su -s /bin/sh -c "keystone-manage db_sync" keystone
```

5. Initialize the Fernet key repositories:

```shell
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
```
6. Bootstrap the identity service:

```shell
keystone-manage bootstrap --bootstrap-password ADMIN_PASS \
    --bootstrap-admin-url http://controller:5000/v3/ \
    --bootstrap-internal-url http://controller:5000/v3/ \
    --bootstrap-public-url http://controller:5000/v3/ \
    --bootstrap-region-id RegionOne
```

   Note: replace ADMIN_PASS with the password chosen for the admin user.

7. Configure the Apache HTTP server:

```shell
vi /etc/httpd/conf/httpd.conf

ServerName controller
```

```shell
ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
```

   Explanation: configure the ServerName entry to point to the controller node.

   Note: if the ServerName entry does not exist, it must be created.

8. Start the Apache HTTP service:

```shell
systemctl enable httpd.service
systemctl start httpd.service
```

9. Create an environment variable file:

```shell
cat << EOF >> ~/.admin-openrc
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
EOF
```

   Note: replace ADMIN_PASS with the password of the admin user.

10. Create the domain, projects, users and roles in turn; python3-openstackclient must be installed first:

```shell
yum install python3-openstackclient
```

    Import the environment variables:

```shell
source ~/.admin-openrc
```

    Create the project service; the domain default was already created during keystone-manage bootstrap:

```shell
openstack domain create --description "An Example Domain" example
openstack project create --domain default --description "Service Project" service
```

    Create the (non-admin) project myproject, the user myuser and the role myrole, then add the role myrole to myproject and myuser:

```shell
openstack project create --domain default --description "Demo Project" myproject
openstack user create --domain default --password-prompt myuser
openstack role create myrole
openstack role add --project myproject --user myuser myrole
```

11. Verification

    Unset the temporary environment variables OS_AUTH_URL and OS_PASSWORD:

```shell
source ~/.admin-openrc
unset OS_AUTH_URL OS_PASSWORD
```

    Request a token for the admin user:

```shell
openstack --os-auth-url http://controller:5000/v3 \
    --os-project-domain-name Default --os-user-domain-name Default \
    --os-project-name admin --os-username admin token issue
```

    Request a token for the myuser user:

```shell
openstack --os-auth-url http://controller:5000/v3 \
    --os-project-domain-name Default --os-user-domain-name Default \
    --os-project-name myproject --os-username myuser token issue
```
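Optionally, a credentials file for the non-admin `myuser` account can be created alongside `~/.admin-openrc`, which is convenient for testing with reduced privileges later. This mirrors the admin file above; `MYUSER_PASS` is a placeholder for the password chosen for `myuser`:

```shell
cat << EOF >> ~/.demo-openrc
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=myproject
export OS_USERNAME=myuser
# Replace MYUSER_PASS with the password chosen for myuser
export OS_PASSWORD=MYUSER_PASS
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
EOF
```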
### Glance Installation

1. Create the database, service credentials and API endpoints.

   Create the database:

```shell
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE glance;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
IDENTIFIED BY 'GLANCE_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
IDENTIFIED BY 'GLANCE_DBPASS';
MariaDB [(none)]> exit
```

   Note: replace GLANCE_DBPASS with the password chosen for the glance database.

   Create the service credentials:

```shell
source ~/.admin-openrc

openstack user create --domain default --password-prompt glance
openstack role add --project service --user glance admin
openstack service create --name glance --description "OpenStack Image" image
```

   Create the Image service API endpoints:

```shell
openstack endpoint create --region RegionOne image public http://controller:9292
openstack endpoint create --region RegionOne image internal http://controller:9292
openstack endpoint create --region RegionOne image admin http://controller:9292
```

2. Install the packages:

```shell
yum install openstack-glance
```

3. Configure glance:

```shell
vi /etc/glance/glance-api.conf

[database]
connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = GLANCE_PASS

[paste_deploy]
flavor = keystone

[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
```

   Explanation:

   - In the [database] section, configure the database entry point.
   - In the [keystone_authtoken] and [paste_deploy] sections, configure the identity service entry point.
   - In the [glance_store] section, configure the local file system store and the location of the image files.

   Note: replace GLANCE_DBPASS with the password of the glance database, and GLANCE_PASS with the password of the glance user.

4. Synchronize the database:

```shell
su -s /bin/sh -c "glance-manage db_sync" glance
```

5. Start the service:

```shell
systemctl enable openstack-glance-api.service
systemctl start openstack-glance-api.service
```

6. Verification

   Download an image:

```shell
source ~/.admin-openrc
wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
```

   Note: if your environment runs on the Kunpeng (aarch64) architecture, download the aarch64 version of the image; the image cirros-0.5.2-aarch64-disk.img has been tested.

   Upload the image to the Image service:

```shell
openstack image create --disk-format qcow2 --container-format bare \
    --file cirros-0.4.0-x86_64-disk.img --public cirros
```

   Confirm the upload and verify the image attributes:

```shell
openstack image list
```
### Placement Installation

1. Create the database, service credentials and API endpoints.

   Create the database. Access the database as the root user, create the placement database and grant privileges:

```shell
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE placement;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' \
IDENTIFIED BY 'PLACEMENT_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' \
IDENTIFIED BY 'PLACEMENT_DBPASS';
MariaDB [(none)]> exit
```

   Note: replace PLACEMENT_DBPASS with the password chosen for the placement database.

```shell
source admin-openrc
```

   Run the following commands to create the placement service credentials, create the placement user and add the admin role to the placement user.

   Create the Placement API service:

```shell
openstack user create --domain default --password-prompt placement
openstack role add --project service --user placement admin
openstack service create --name placement --description "Placement API" placement
```

   Create the placement service API endpoints:

```shell
openstack endpoint create --region RegionOne placement public http://controller:8778
openstack endpoint create --region RegionOne placement internal http://controller:8778
openstack endpoint create --region RegionOne placement admin http://controller:8778
```

2. Install and configure.

   Install the packages:

```shell
yum install openstack-placement-api
```

   Configure placement by editing the /etc/placement/placement.conf file:

   - In the [placement_database] section, configure the database entry point.
   - In the [api] and [keystone_authtoken] sections, configure the identity service entry point.

```shell
# vi /etc/placement/placement.conf

[placement_database]
# ...
connection = mysql+pymysql://placement:PLACEMENT_DBPASS@controller/placement

[api]
# ...
auth_strategy = keystone

[keystone_authtoken]
# ...
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = placement
password = PLACEMENT_PASS
```

   Replace PLACEMENT_DBPASS with the password of the placement database, and PLACEMENT_PASS with the password of the placement user.

   Synchronize the database:

```shell
su -s /bin/sh -c "placement-manage db sync" placement
```

   Restart the httpd service:

```shell
systemctl restart httpd
```

3. Verification

   Run the following commands to perform a status check:

```shell
. admin-openrc
placement-status upgrade check
```

   Install osc-placement and list the available resource classes and traits:

```shell
yum install python3-osc-placement
openstack --os-placement-api-version 1.2 resource class list --sort-column name
openstack --os-placement-api-version 1.6 trait list --sort-column name
```
### Nova Installation

1. Create the database, service credentials and API endpoints.

   Create the databases:

```shell
mysql -u root -p   (CTL)

MariaDB [(none)]> CREATE DATABASE nova_api;
MariaDB [(none)]> CREATE DATABASE nova;
MariaDB [(none)]> CREATE DATABASE nova_cell0;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> exit
```

   Note: replace NOVA_DBPASS with the password chosen for the nova databases.

```shell
source ~/.admin-openrc   (CTL)
```

   Create the nova service credentials:

```shell
openstack user create --domain default --password-prompt nova   (CTL)
openstack role add --project service --user nova admin   (CTL)
openstack service create --name nova --description "OpenStack Compute" compute   (CTL)
```

   Create the nova API endpoints:

```shell
openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1   (CTL)
openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1   (CTL)
openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1   (CTL)
```

2. Install the packages:

```shell
yum install openstack-nova-api openstack-nova-conductor \   (CTL)
            openstack-nova-novncproxy openstack-nova-scheduler

yum install openstack-nova-compute   (CPT)
```

   Note: on arm64, the following command must also be run:

```shell
yum install edk2-aarch64   (CPT)
```

3. Configure nova:

```shell
vi /etc/nova/nova.conf

[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
my_ip = 10.0.0.1
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver
compute_driver=libvirt.LibvirtDriver   (CPT)
instances_path = /var/lib/nova/instances/   (CPT)
lock_path = /var/lib/nova/tmp   (CPT)
logdir = /var/log/nova/

[api_database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api   (CTL)

[database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova   (CTL)

[api]
auth_strategy = keystone

[keystone_authtoken]
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = NOVA_PASS

[vnc]
enabled = true
server_listen = $my_ip
server_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html   (CPT)

[glance]
api_servers = http://controller:9292

[oslo_concurrency]
lock_path = /var/lib/nova/tmp   (CTL)

[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = PLACEMENT_PASS

[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
service_metadata_proxy = true   (CTL)
metadata_proxy_shared_secret = METADATA_SECRET   (CTL)
```

   Explanation:

   - In the [DEFAULT] section, enable the compute and metadata APIs, configure the RabbitMQ message queue entry point, configure my_ip and enable the neutron network service.
   - In the [api_database] and [database] sections, configure the database entry points.
   - In the [api] and [keystone_authtoken] sections, configure the identity service entry point.
   - In the [vnc] section, enable and configure the remote console entry point.
   - In the [glance] section, configure the address of the Image service API.
   - In the [oslo_concurrency] section, configure the lock path.
   - In the [placement] section, configure the entry point of the Placement service.

   Note:

   - Replace RABBIT_PASS with the password of the openstack account in RabbitMQ.
   - Set my_ip to the management IP address of the controller node.
   - Replace NOVA_DBPASS with the password of the nova database.
   - Replace NOVA_PASS with the password of the nova user.
   - Replace PLACEMENT_PASS with the password of the placement user.
   - Replace NEUTRON_PASS with the password of the neutron user.
   - Replace METADATA_SECRET with a suitable metadata proxy secret.

   Additional step: determine whether the compute node supports hardware virtualization acceleration (x86 architecture):

```shell
egrep -c '(vmx|svm)' /proc/cpuinfo   (CPT)
```
   If the command returns 0, hardware acceleration is not supported and libvirt must be configured to use QEMU instead of KVM:

```shell
vi /etc/nova/nova.conf   (CPT)

[libvirt]
virt_type = qemu
```

   If the command returns 1 or more, hardware acceleration is supported and virt_type can be set to kvm.

   Note: on arm64, the following commands must also be run on the compute nodes:

```shell
mkdir -p /usr/share/AAVMF
chown nova:nova /usr/share/AAVMF

ln -s /usr/share/edk2/aarch64/QEMU_EFI-pflash.raw \
      /usr/share/AAVMF/AAVMF_CODE.fd
ln -s /usr/share/edk2/aarch64/vars-template-pflash.raw \
      /usr/share/AAVMF/AAVMF_VARS.fd

vi /etc/libvirt/qemu.conf

nvram = ["/usr/share/AAVMF/AAVMF_CODE.fd: \
          /usr/share/AAVMF/AAVMF_VARS.fd", \
         "/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw: \
          /usr/share/edk2/aarch64/vars-template-pflash.raw"]
```

   In addition, when the ARM deployment environment is nested virtualization, configure libvirt as follows:

```shell
[libvirt]
virt_type = qemu
cpu_mode = custom
cpu_model = cortex-a72
```

4. Synchronize the databases.

   Synchronize the nova-api database:

```shell
su -s /bin/sh -c "nova-manage api_db sync" nova   (CTL)
```

   Register the cell0 database:

```shell
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova   (CTL)
```

   Create the cell1 cell:

```shell
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova   (CTL)
```

   Synchronize the nova database:

```shell
su -s /bin/sh -c "nova-manage db sync" nova   (CTL)
```

   Verify that cell0 and cell1 are registered correctly:

```shell
su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova   (CTL)
```

   Add the compute node to the OpenStack cluster:

```shell
su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova   (CTL)
```

5. Start the services:

```shell
systemctl enable \   (CTL)
    openstack-nova-api.service \
    openstack-nova-scheduler.service \
    openstack-nova-conductor.service \
    openstack-nova-novncproxy.service
systemctl start \   (CTL)
    openstack-nova-api.service \
    openstack-nova-scheduler.service \
    openstack-nova-conductor.service \
    openstack-nova-novncproxy.service

systemctl enable libvirtd.service openstack-nova-compute.service   (CPT)
systemctl start libvirtd.service openstack-nova-compute.service   (CPT)
```

6. Verification

```shell
source ~/.admin-openrc   (CTL)
```

   List the service components to verify that each process started and registered successfully:

```shell
openstack compute service list   (CTL)
```

   List the API endpoints in the identity service to verify the connection to the identity service:

```shell
openstack catalog list   (CTL)
```

   List the images in the Image service to verify the connection to the Image service:

```shell
openstack image list   (CTL)
```

   Check whether the cells are working properly and whether the other prerequisites are in place:

```shell
nova-status upgrade check   (CTL)
```
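As an additional check, the compute node discovered above should now be visible as a hypervisor. A minimal sketch assuming the admin credentials in `~/.admin-openrc`:

```shell
source ~/.admin-openrc
# The compute node registered by discover_hosts should be listed here
openstack hypervisor list
openstack compute service list --service nova-compute
```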
### Neutron Installation

1. Create the database, service credentials and API endpoints.

   Create the database:

```shell
mysql -u root -p   (CTL)

MariaDB [(none)]> CREATE DATABASE neutron;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
IDENTIFIED BY 'NEUTRON_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
IDENTIFIED BY 'NEUTRON_DBPASS';
MariaDB [(none)]> exit
```

   Note: replace NEUTRON_DBPASS with the password chosen for the neutron database.

```shell
source ~/.admin-openrc   (CTL)
```

   Create the neutron service credentials:

```shell
openstack user create --domain default --password-prompt neutron   (CTL)
openstack role add --project service --user neutron admin   (CTL)
openstack service create --name neutron --description "OpenStack Networking" network   (CTL)
```

   Create the Neutron service API endpoints:

```shell
openstack endpoint create --region RegionOne network public http://controller:9696   (CTL)
openstack endpoint create --region RegionOne network internal http://controller:9696   (CTL)
openstack endpoint create --region RegionOne network admin http://controller:9696   (CTL)
```

2. Install the packages:

```shell
yum install openstack-neutron openstack-neutron-linuxbridge ebtables ipset \   (CTL)
            openstack-neutron-ml2

yum install openstack-neutron-linuxbridge ebtables ipset   (CPT)
```

3. Configure neutron.

   Main configuration:

```shell
vi /etc/neutron/neutron.conf

[database]
connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron   (CTL)

[DEFAULT]
core_plugin = ml2   (CTL)
service_plugins = router   (CTL)
allow_overlapping_ips = true   (CTL)
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = true   (CTL)
notify_nova_on_port_data_changes = true   (CTL)
api_workers = 3   (CTL)

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = neutron
password = NEUTRON_PASS

[nova]
auth_url = http://controller:5000   (CTL)
auth_type = password   (CTL)
project_domain_name = Default   (CTL)
user_domain_name = Default   (CTL)
region_name = RegionOne   (CTL)
project_name = service   (CTL)
username = nova   (CTL)
password = NOVA_PASS   (CTL)

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
```

   Explanation:

   - In the [database] section, configure the database entry point.
   - In the [DEFAULT] section, enable the ml2 and router plugins, allow overlapping IP addresses and configure the RabbitMQ message queue entry point.
   - In the [DEFAULT] and [keystone_authtoken] sections, configure the identity service entry point.
   - In the [DEFAULT] and [nova] sections, configure networking to notify compute of network topology changes.
   - In the [oslo_concurrency] section, configure the lock path.

   Note:

   - Replace NEUTRON_DBPASS with the password of the neutron database.
   - Replace RABBIT_PASS with the password of the openstack account in RabbitMQ.
   - Replace NEUTRON_PASS with the password of the neutron user.
   - Replace NOVA_PASS with the password of the nova user.

   Configure the ML2 plugin:

```shell
vi /etc/neutron/plugins/ml2/ml2_conf.ini   (CTL)

[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security

[ml2_type_flat]
flat_networks = provider

[ml2_type_vxlan]
vni_ranges = 1:1000

[securitygroup]
enable_ipset = true
```

   Create the symbolic link /etc/neutron/plugin.ini:

```shell
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
```
   Note:

   - In the [ml2] section, enable flat, vlan and vxlan networks, enable the linuxbridge and l2population mechanisms, and enable the port security extension driver.
   - In the [ml2_type_flat] section, configure the flat network as the provider virtual network.
   - In the [ml2_type_vxlan] section, configure the VXLAN network identifier range.
   - In the [securitygroup] section, enable ipset.

   The concrete l2 configuration can be adjusted to the user's needs; this document uses provider network + linuxbridge.

   Configure the Linux bridge agent:

```shell
vi /etc/neutron/plugins/ml2/linuxbridge_agent.ini

[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME

[vxlan]
enable_vxlan = true
local_ip = OVERLAY_INTERFACE_IP_ADDRESS
l2_population = true

[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
```

   Explanation:

   - In the [linux_bridge] section, map the provider virtual network to the physical network interface.
   - In the [vxlan] section, enable VXLAN overlay networks, configure the IP address of the physical network interface that handles the overlay networks, and enable layer-2 population.
   - In the [securitygroup] section, allow security groups and configure the linux bridge iptables firewall driver.

   Note:

   - Replace PROVIDER_INTERFACE_NAME with the physical network interface.
   - Replace OVERLAY_INTERFACE_IP_ADDRESS with the management IP address of the controller node.

   Configure the Layer-3 agent:

```shell
vi /etc/neutron/l3_agent.ini   (CTL)

[DEFAULT]
interface_driver = linuxbridge
```

   Explanation: in the [DEFAULT] section, configure the interface driver as linuxbridge.

   Configure the DHCP agent:

```shell
vi /etc/neutron/dhcp_agent.ini   (CTL)

[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
```

   Explanation: in the [DEFAULT] section, configure the linuxbridge interface driver and the Dnsmasq DHCP driver, and enable isolated metadata.

   Configure the metadata agent:

```shell
vi /etc/neutron/metadata_agent.ini   (CTL)

[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = METADATA_SECRET
```

   Explanation: in the [DEFAULT] section, configure the metadata host and the shared secret.

   Note: replace METADATA_SECRET with a suitable metadata proxy secret.

4. Configure nova:

```shell
vi /etc/nova/nova.conf

[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = Default
user_domain_name = Default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
service_metadata_proxy = true   (CTL)
metadata_proxy_shared_secret = METADATA_SECRET   (CTL)
```

   Explanation: in the [neutron] section, configure the access parameters, enable the metadata proxy and configure the secret.

   Note:

   - Replace NEUTRON_PASS with the password of the neutron user.
   - Replace METADATA_SECRET with a suitable metadata proxy secret.
5. Synchronize the database:

```shell
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \   (CTL)
    --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
```

6. Restart the Compute API service:

```shell
systemctl restart openstack-nova-api.service
```

7. Start the networking services:

```shell
systemctl enable neutron-server.service neutron-linuxbridge-agent.service \   (CTL)
                 neutron-dhcp-agent.service neutron-metadata-agent.service \
                 neutron-l3-agent.service
systemctl restart neutron-server.service neutron-linuxbridge-agent.service \   (CTL)
                  neutron-dhcp-agent.service neutron-metadata-agent.service \
                  neutron-l3-agent.service

systemctl enable neutron-linuxbridge-agent.service   (CPT)
systemctl restart neutron-linuxbridge-agent.service openstack-nova-compute.service   (CPT)
```

8. Verification

   Verify that the neutron agents started successfully:

```shell
openstack network agent list
```
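With the agents up, creating a provider network exercises the flat network and linuxbridge configuration above. This is a minimal sketch; the network name, CIDR, gateway and allocation pool are examples and must be adapted to the actual physical network:

```shell
source ~/.admin-openrc
openstack network create --share --external \
    --provider-physical-network provider \
    --provider-network-type flat provider
openstack subnet create --network provider \
    --allocation-pool start=203.0.113.101,end=203.0.113.250 \
    --dns-nameserver 8.8.8.8 --gateway 203.0.113.1 \
    --subnet-range 203.0.113.0/24 provider-subnet
```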
filter = [ \"a/vdb/\", \"r/.*/\"] \u89e3\u91ca \u5728devices\u90e8\u5206\uff0c\u6dfb\u52a0\u8fc7\u6ee4\u4ee5\u63a5\u53d7/dev/vdb\u8bbe\u5907\u62d2\u7edd\u5176\u4ed6\u8bbe\u5907\u3002 \u51c6\u5907NFS mkdir -p /root/cinder/backup cat << EOF >> /etc/export /root/cinder/backup 192.168.1.0/24(rw,sync,no_root_squash,no_all_squash) EOF \u914d\u7f6ecinder\u76f8\u5173\u914d\u7f6e\uff1a vi /etc/cinder/cinder.conf [DEFAULT] transport_url = rabbit://openstack:RABBIT_PASS@controller auth_strategy = keystone my_ip = 10.0.0.11 enabled_backends = lvm (STG) backup_driver=cinder.backup.drivers.nfs.NFSBackupDriver (STG) backup_share=HOST:PATH (STG) [database] connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder [keystone_authtoken] www_authenticate_uri = http://controller:5000 auth_url = http://controller:5000 memcached_servers = controller:11211 auth_type = password project_domain_name = Default user_domain_name = Default project_name = service username = cinder password = CINDER_PASS [oslo_concurrency] lock_path = /var/lib/cinder/tmp [lvm] volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver (STG) volume_group = cinder-volumes (STG) iscsi_protocol = iscsi (STG) iscsi_helper = tgtadm (STG) \u89e3\u91ca [database]\u90e8\u5206\uff0c\u914d\u7f6e\u6570\u636e\u5e93\u5165\u53e3\uff1b [DEFAULT]\u90e8\u5206\uff0c\u914d\u7f6eRabbitMQ\u6d88\u606f\u961f\u5217\u5165\u53e3\uff0c\u914d\u7f6emy_ip\uff1b [DEFAULT] [keystone_authtoken]\u90e8\u5206\uff0c\u914d\u7f6e\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u5165\u53e3\uff1b [oslo_concurrency]\u90e8\u5206\uff0c\u914d\u7f6elock path\u3002 \u6ce8\u610f \u66ff\u6362 CINDER_DBPASS \u4e3a cinder \u6570\u636e\u5e93\u7684\u5bc6\u7801\uff1b \u66ff\u6362 RABBIT_PASS \u4e3a RabbitMQ \u4e2d openstack \u8d26\u6237\u7684\u5bc6\u7801\uff1b \u914d\u7f6e my_ip \u4e3a\u63a7\u5236\u8282\u70b9\u7684\u7ba1\u7406 IP \u5730\u5740\uff1b \u66ff\u6362 CINDER_PASS \u4e3a cinder \u7528\u6237\u7684\u5bc6\u7801\uff1b \u66ff\u6362 HOST:PATH \u4e3a NFS\u7684HOSTIP\u548c\u5171\u4eab\u8def\u5f84\uff1b \u540c\u6b65\u6570\u636e\u5e93\uff1a su -s /bin/sh -c \"cinder-manage db sync\" cinder (CTL) \u914d\u7f6enova\uff1a vi /etc/nova/nova.conf (CTL) [cinder] os_region_name = RegionOne \u91cd\u542f\u8ba1\u7b97API\u670d\u52a1 systemctl restart openstack-nova-api.service \u542f\u52a8cinder\u670d\u52a1 systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service (CTL) systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service (CTL) systemctl enable rpcbind.service nfs-server.service tgtd.service iscsid.service \\ (STG) openstack-cinder-volume.service \\ openstack-cinder-backup.service systemctl start rpcbind.service nfs-server.service tgtd.service iscsid.service \\ (STG) openstack-cinder-volume.service \\ openstack-cinder-backup.service \u6ce8\u610f \u5f53cinder\u4f7f\u7528tgtadm\u7684\u65b9\u5f0f\u6302\u5377\u7684\u65f6\u5019\uff0c\u8981\u4fee\u6539/etc/tgt/tgtd.conf\uff0c\u5185\u5bb9\u5982\u4e0b\uff0c\u4fdd\u8bc1tgtd\u53ef\u4ee5\u53d1\u73b0cinder-volume\u7684iscsi target\u3002 include /var/lib/cinder/volumes/* \u9a8c\u8bc1 source ~/.admin-openrc openstack volume service list","title":"Cinder \u5b89\u88c5"},{"location":"install/openEuler-20.03-LTS-SP4/OpenStack-train/#horizon","text":"\u5b89\u88c5\u8f6f\u4ef6\u5305 yum install openstack-dashboard \u4fee\u6539\u6587\u4ef6 \u4fee\u6539\u53d8\u91cf vi /etc/openstack-dashboard/local_settings OPENSTACK_HOST = \"controller\" ALLOWED_HOSTS = ['*', ] SESSION_ENGINE = 'django.contrib.sessions.backends.cache' CACHES = { 
'default': { 'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache', 'LOCATION': 'controller:11211', } } OPENSTACK_KEYSTONE_URL = \"http://%s:5000/v3\" % OPENSTACK_HOST OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = \"Default\" OPENSTACK_KEYSTONE_DEFAULT_ROLE = \"user\" OPENSTACK_API_VERSIONS = { \"identity\": 3, \"image\": 2, \"volume\": 3, } \u91cd\u542f httpd \u670d\u52a1 systemctl restart httpd.service memcached.service \u9a8c\u8bc1 \u6253\u5f00\u6d4f\u89c8\u5668\uff0c\u8f93\u5165\u7f51\u5740 http://HOSTIP/dashboard/ \uff0c\u767b\u5f55 horizon\u3002 \u6ce8\u610f \u66ff\u6362HOSTIP\u4e3a\u63a7\u5236\u8282\u70b9\u7ba1\u7406\u5e73\u9762IP\u5730\u5740","title":"horizon \u5b89\u88c5"},{"location":"install/openEuler-20.03-LTS-SP4/OpenStack-train/#tempest","text":"Tempest\u662fOpenStack\u7684\u96c6\u6210\u6d4b\u8bd5\u670d\u52a1\uff0c\u5982\u679c\u7528\u6237\u9700\u8981\u5168\u9762\u81ea\u52a8\u5316\u6d4b\u8bd5\u5df2\u5b89\u88c5\u7684OpenStack\u73af\u5883\u7684\u529f\u80fd,\u5219\u63a8\u8350\u4f7f\u7528\u8be5\u7ec4\u4ef6\u3002\u5426\u5219\uff0c\u53ef\u4ee5\u4e0d\u7528\u5b89\u88c5\u3002 \u5b89\u88c5Tempest yum install openstack-tempest \u521d\u59cb\u5316\u76ee\u5f55 tempest init mytest \u4fee\u6539\u914d\u7f6e\u6587\u4ef6\u3002 cd mytest vi etc/tempest.conf tempest.conf\u4e2d\u9700\u8981\u914d\u7f6e\u5f53\u524dOpenStack\u73af\u5883\u7684\u4fe1\u606f\uff0c\u5177\u4f53\u5185\u5bb9\u53ef\u4ee5\u53c2\u8003 \u5b98\u65b9\u793a\u4f8b \u6267\u884c\u6d4b\u8bd5 tempest run \u5b89\u88c5tempest\u6269\u5c55\uff08\u53ef\u9009\uff09 OpenStack\u5404\u4e2a\u670d\u52a1\u672c\u8eab\u4e5f\u63d0\u4f9b\u4e86\u4e00\u4e9btempest\u6d4b\u8bd5\u5305\uff0c\u7528\u6237\u53ef\u4ee5\u5b89\u88c5\u8fd9\u4e9b\u5305\u6765\u4e30\u5bcctempest\u7684\u6d4b\u8bd5\u5185\u5bb9\u3002\u5728Train\u4e2d\uff0c\u6211\u4eec\u63d0\u4f9b\u4e86Cinder\u3001Glance\u3001Keystone\u3001Ironic\u3001Trove\u7684\u6269\u5c55\u6d4b\u8bd5\uff0c\u7528\u6237\u53ef\u4ee5\u6267\u884c\u5982\u4e0b\u547d\u4ee4\u8fdb\u884c\u5b89\u88c5\u4f7f\u7528\uff1a yum install python3-cinder-tempest-plugin python3-glance-tempest-plugin python3-ironic-tempest-plugin python3-keystone-tempest-plugin python3-trove-tempest-plugin","title":"Tempest \u5b89\u88c5"},{"location":"install/openEuler-20.03-LTS-SP4/OpenStack-train/#ironic","text":"Ironic\u662fOpenStack\u7684\u88f8\u91d1\u5c5e\u670d\u52a1\uff0c\u5982\u679c\u7528\u6237\u9700\u8981\u8fdb\u884c\u88f8\u673a\u90e8\u7f72\u5219\u63a8\u8350\u4f7f\u7528\u8be5\u7ec4\u4ef6\u3002\u5426\u5219\uff0c\u53ef\u4ee5\u4e0d\u7528\u5b89\u88c5\u3002 \u8bbe\u7f6e\u6570\u636e\u5e93 \u88f8\u91d1\u5c5e\u670d\u52a1\u5728\u6570\u636e\u5e93\u4e2d\u5b58\u50a8\u4fe1\u606f\uff0c\u521b\u5efa\u4e00\u4e2a ironic \u7528\u6237\u53ef\u4ee5\u8bbf\u95ee\u7684 ironic \u6570\u636e\u5e93\uff0c\u66ff\u6362 IRONIC_DBPASSWORD \u4e3a\u5408\u9002\u7684\u5bc6\u7801 mysql -u root -p MariaDB [(none)]> CREATE DATABASE ironic CHARACTER SET utf8; MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'localhost' \\ IDENTIFIED BY 'IRONIC_DBPASSWORD'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'%' \\ IDENTIFIED BY 'IRONIC_DBPASSWORD'; 2. 
\u5b89\u88c5\u8f6f\u4ef6\u5305 yum install openstack-ironic-api openstack-ironic-conductor python3-ironicclient \u542f\u52a8\u670d\u52a1 systemctl enable openstack-ironic-api openstack-ironic-conductor systemctl start openstack-ironic-api openstack-ironic-conductor \u521b\u5efa\u670d\u52a1\u7528\u6237\u8ba4\u8bc1 1\u3001\u521b\u5efaBare Metal\u670d\u52a1\u7528\u6237 openstack user create --password IRONIC_PASSWORD \\ --email ironic@example.com ironic openstack role add --project service --user ironic admin openstack service create --name ironic \\ --description \"Ironic baremetal provisioning service\" baremetal 2\u3001\u521b\u5efaBare Metal\u670d\u52a1\u8bbf\u95ee\u5165\u53e3 openstack endpoint create --region RegionOne baremetal admin http://$IRONIC_NODE:6385 openstack endpoint create --region RegionOne baremetal public http://$IRONIC_NODE:6385 openstack endpoint create --region RegionOne baremetal internal http://$IRONIC_NODE:6385 \u914d\u7f6eironic-api\u670d\u52a1 \u914d\u7f6e\u6587\u4ef6\u8def\u5f84/etc/ironic/ironic.conf 1\u3001\u901a\u8fc7 connection \u9009\u9879\u914d\u7f6e\u6570\u636e\u5e93\u7684\u4f4d\u7f6e\uff0c\u5982\u4e0b\u6240\u793a\uff0c\u66ff\u6362 IRONIC_DBPASSWORD \u4e3a ironic \u7528\u6237\u7684\u5bc6\u7801\uff0c\u66ff\u6362 DB_IP \u4e3aDB\u670d\u52a1\u5668\u6240\u5728\u7684IP\u5730\u5740\uff1a [database] # The SQLAlchemy connection string used to connect to the # database (string value) connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic 2\u3001\u901a\u8fc7\u4ee5\u4e0b\u9009\u9879\u914d\u7f6eironic-api\u670d\u52a1\u4f7f\u7528RabbitMQ\u6d88\u606f\u4ee3\u7406\uff0c\u66ff\u6362 RPC_* \u4e3aRabbitMQ\u7684\u8be6\u7ec6\u5730\u5740\u548c\u51ed\u8bc1 [DEFAULT] # A URL representing the messaging driver to use and its full # configuration. (string value) transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/ \u7528\u6237\u4e5f\u53ef\u81ea\u884c\u4f7f\u7528json-rpc\u65b9\u5f0f\u66ff\u6362rabbitmq 3\u3001\u914d\u7f6eironic-api\u670d\u52a1\u4f7f\u7528\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u7684\u51ed\u8bc1\uff0c\u66ff\u6362 PUBLIC_IDENTITY_IP \u4e3a\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u5668\u7684\u516c\u5171IP\uff0c\u66ff\u6362 PRIVATE_IDENTITY_IP \u4e3a\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u5668\u7684\u79c1\u6709IP\uff0c\u66ff\u6362 IRONIC_PASSWORD \u4e3a\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u4e2d ironic \u7528\u6237\u7684\u5bc6\u7801\uff1a [DEFAULT] # Authentication strategy used by ironic-api: one of # \"keystone\" or \"noauth\". \"noauth\" should not be used in a # production environment because all authentication will be # disabled. (string value) auth_strategy=keystone [keystone_authtoken] # Authentication type to load (string value) auth_type=password # Complete public Identity API endpoint (string value) www_authenticate_uri=http://PUBLIC_IDENTITY_IP:5000 # Complete admin Identity API endpoint. (string value) auth_url=http://PRIVATE_IDENTITY_IP:5000 # Service username. (string value) username=ironic # Service account password. (string value) password=IRONIC_PASSWORD # Service tenant name. 
(string value) project_name=service # Domain name containing project (string value) project_domain_name=Default # User's domain name (string value) user_domain_name=Default 4\u3001\u521b\u5efa\u88f8\u91d1\u5c5e\u670d\u52a1\u6570\u636e\u5e93\u8868 ironic-dbsync --config-file /etc/ironic/ironic.conf create_schema 5\u3001\u91cd\u542fironic-api\u670d\u52a1 sudo systemctl restart openstack-ironic-api \u914d\u7f6eironic-conductor\u670d\u52a1 1\u3001\u66ff\u6362 HOST_IP \u4e3aconductor host\u7684IP [DEFAULT] # IP address of this host. If unset, will determine the IP # programmatically. If unable to do so, will use \"127.0.0.1\". # (string value) my_ip=HOST_IP 2\u3001\u914d\u7f6e\u6570\u636e\u5e93\u7684\u4f4d\u7f6e\uff0cironic-conductor\u5e94\u8be5\u4f7f\u7528\u548cironic-api\u76f8\u540c\u7684\u914d\u7f6e\u3002\u66ff\u6362 IRONIC_DBPASSWORD \u4e3a ironic \u7528\u6237\u7684\u5bc6\u7801\uff0c\u66ff\u6362DB_IP\u4e3aDB\u670d\u52a1\u5668\u6240\u5728\u7684IP\u5730\u5740\uff1a [database] # The SQLAlchemy connection string to use to connect to the # database. (string value) connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic 3\u3001\u901a\u8fc7\u4ee5\u4e0b\u9009\u9879\u914d\u7f6eironic-api\u670d\u52a1\u4f7f\u7528RabbitMQ\u6d88\u606f\u4ee3\u7406\uff0cironic-conductor\u5e94\u8be5\u4f7f\u7528\u548cironic-api\u76f8\u540c\u7684\u914d\u7f6e\uff0c\u66ff\u6362 RPC_* \u4e3aRabbitMQ\u7684\u8be6\u7ec6\u5730\u5740\u548c\u51ed\u8bc1 [DEFAULT] # A URL representing the messaging driver to use and its full # configuration. (string value) transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/ \u7528\u6237\u4e5f\u53ef\u81ea\u884c\u4f7f\u7528json-rpc\u65b9\u5f0f\u66ff\u6362rabbitmq 4\u3001\u914d\u7f6e\u51ed\u8bc1\u8bbf\u95ee\u5176\u4ed6OpenStack\u670d\u52a1 \u4e3a\u4e86\u4e0e\u5176\u4ed6OpenStack\u670d\u52a1\u8fdb\u884c\u901a\u4fe1\uff0c\u88f8\u91d1\u5c5e\u670d\u52a1\u5728\u8bf7\u6c42\u5176\u4ed6\u670d\u52a1\u65f6\u9700\u8981\u4f7f\u7528\u670d\u52a1\u7528\u6237\u4e0eOpenStack Identity\u670d\u52a1\u8fdb\u884c\u8ba4\u8bc1\u3002\u8fd9\u4e9b\u7528\u6237\u7684\u51ed\u636e\u5fc5\u987b\u5728\u4e0e\u76f8\u5e94\u670d\u52a1\u76f8\u5173\u7684\u6bcf\u4e2a\u914d\u7f6e\u6587\u4ef6\u4e2d\u8fdb\u884c\u914d\u7f6e\u3002 [neutron] - \u8bbf\u95eeOpenStack\u7f51\u7edc\u670d\u52a1 [glance] - \u8bbf\u95eeOpenStack\u955c\u50cf\u670d\u52a1 [swift] - \u8bbf\u95eeOpenStack\u5bf9\u8c61\u5b58\u50a8\u670d\u52a1 [cinder] - \u8bbf\u95eeOpenStack\u5757\u5b58\u50a8\u670d\u52a1 [inspector] - \u8bbf\u95eeOpenStack\u88f8\u91d1\u5c5eintrospection\u670d\u52a1 [service_catalog] - \u4e00\u4e2a\u7279\u6b8a\u9879\u7528\u4e8e\u4fdd\u5b58\u88f8\u91d1\u5c5e\u670d\u52a1\u4f7f\u7528\u7684\u51ed\u8bc1\uff0c\u8be5\u51ed\u8bc1\u7528\u4e8e\u53d1\u73b0\u6ce8\u518c\u5728OpenStack\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u76ee\u5f55\u4e2d\u7684\u81ea\u5df1\u7684API URL\u7aef\u70b9 \u7b80\u5355\u8d77\u89c1\uff0c\u53ef\u4ee5\u5bf9\u6240\u6709\u670d\u52a1\u4f7f\u7528\u540c\u4e00\u4e2a\u670d\u52a1\u7528\u6237\u3002\u4e3a\u4e86\u5411\u540e\u517c\u5bb9\uff0c\u8be5\u7528\u6237\u5e94\u8be5\u548cironic-api\u670d\u52a1\u7684[keystone_authtoken]\u6240\u914d\u7f6e\u7684\u4e3a\u540c\u4e00\u4e2a\u7528\u6237\u3002\u4f46\u8fd9\u4e0d\u662f\u5fc5\u987b\u7684\uff0c\u4e5f\u53ef\u4ee5\u4e3a\u6bcf\u4e2a\u670d\u52a1\u521b\u5efa\u5e76\u914d\u7f6e\u4e0d\u540c\u7684\u670d\u52a1\u7528\u6237\u3002 \u5728\u4e0b\u9762\u7684\u793a\u4f8b\u4e2d\uff0c\u7528\u6237\u8bbf\u95eeOpenStack\u7f51\u7edc\u670d\u52a1\u7684\u8eab\u4efd\u9a8c\u8bc1\u4fe1\u606f\u914d\u7f6e\u4e3a\uff1a 
\u7f51\u7edc\u670d\u52a1\u90e8\u7f72\u5728\u540d\u4e3aRegionOne\u7684\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u57df\u4e2d\uff0c\u4ec5\u5728\u670d\u52a1\u76ee\u5f55\u4e2d\u6ce8\u518c\u516c\u5171\u7aef\u70b9\u63a5\u53e3 \u8bf7\u6c42\u65f6\u4f7f\u7528\u7279\u5b9a\u7684CA SSL\u8bc1\u4e66\u8fdb\u884cHTTPS\u8fde\u63a5 \u4e0eironic-api\u670d\u52a1\u914d\u7f6e\u76f8\u540c\u7684\u670d\u52a1\u7528\u6237 \u52a8\u6001\u5bc6\u7801\u8ba4\u8bc1\u63d2\u4ef6\u57fa\u4e8e\u5176\u4ed6\u9009\u9879\u53d1\u73b0\u5408\u9002\u7684\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1API\u7248\u672c [neutron] # Authentication type to load (string value) auth_type = password # Authentication URL (string value) auth_url=https://IDENTITY_IP:5000/ # Username (string value) username=ironic # User's password (string value) password=IRONIC_PASSWORD # Project name to scope to (string value) project_name=service # Domain ID containing project (string value) project_domain_id=default # User's domain id (string value) user_domain_id=default # PEM encoded Certificate Authority to use when verifying # HTTPs connections. (string value) cafile=/opt/stack/data/ca-bundle.pem # The default region_name for endpoint URL discovery. (string # value) region_name = RegionOne # List of interfaces, in order of preference, for endpoint # URL. (list value) valid_interfaces=public \u9ed8\u8ba4\u60c5\u51b5\u4e0b\uff0c\u4e3a\u4e86\u4e0e\u5176\u4ed6\u670d\u52a1\u8fdb\u884c\u901a\u4fe1\uff0c\u88f8\u91d1\u5c5e\u670d\u52a1\u4f1a\u5c1d\u8bd5\u901a\u8fc7\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u7684\u670d\u52a1\u76ee\u5f55\u53d1\u73b0\u8be5\u670d\u52a1\u5408\u9002\u7684\u7aef\u70b9\u3002\u5982\u679c\u5e0c\u671b\u5bf9\u4e00\u4e2a\u7279\u5b9a\u670d\u52a1\u4f7f\u7528\u4e00\u4e2a\u4e0d\u540c\u7684\u7aef\u70b9\uff0c\u5219\u5728\u88f8\u91d1\u5c5e\u670d\u52a1\u7684\u914d\u7f6e\u6587\u4ef6\u4e2d\u901a\u8fc7endpoint_override\u9009\u9879\u8fdb\u884c\u6307\u5b9a\uff1a [neutron] ... endpoint_override = 5\u3001\u914d\u7f6e\u5141\u8bb8\u7684\u9a71\u52a8\u7a0b\u5e8f\u548c\u786c\u4ef6\u7c7b\u578b \u901a\u8fc7\u8bbe\u7f6eenabled_hardware_types\u8bbe\u7f6eironic-conductor\u670d\u52a1\u5141\u8bb8\u4f7f\u7528\u7684\u786c\u4ef6\u7c7b\u578b\uff1a [DEFAULT] enabled_hardware_types = ipmi \u914d\u7f6e\u786c\u4ef6\u63a5\u53e3\uff1a enabled_boot_interfaces = pxe enabled_deploy_interfaces = direct,iscsi enabled_inspect_interfaces = inspector enabled_management_interfaces = ipmitool enabled_power_interfaces = ipmitool \u914d\u7f6e\u63a5\u53e3\u9ed8\u8ba4\u503c\uff1a [DEFAULT] default_deploy_interface = direct default_network_interface = neutron \u5982\u679c\u542f\u7528\u4e86\u4efb\u4f55\u4f7f\u7528Direct deploy\u7684\u9a71\u52a8\uff0c\u5fc5\u987b\u5b89\u88c5\u548c\u914d\u7f6e\u955c\u50cf\u670d\u52a1\u7684Swift\u540e\u7aef\u3002Ceph\u5bf9\u8c61\u7f51\u5173(RADOS\u7f51\u5173)\u4e5f\u652f\u6301\u4f5c\u4e3a\u955c\u50cf\u670d\u52a1\u7684\u540e\u7aef\u3002 6\u3001\u91cd\u542fironic-conductor\u670d\u52a1 sudo systemctl restart openstack-ironic-conductor \u914d\u7f6ehttpd\u670d\u52a1 \u521b\u5efaironic\u8981\u4f7f\u7528\u7684httpd\u7684root\u76ee\u5f55\u5e76\u8bbe\u7f6e\u5c5e\u4e3b\u5c5e\u7ec4\uff0c\u76ee\u5f55\u8def\u5f84\u8981\u548c/etc/ironic/ironic.conf\u4e2d[deploy]\u7ec4\u4e2dhttp_root \u914d\u7f6e\u9879\u6307\u5b9a\u7684\u8def\u5f84\u8981\u4e00\u81f4\u3002 mkdir -p /var/lib/ironic/httproot ``chown ironic.ironic /var/lib/ironic/httproot \u5b89\u88c5\u548c\u914d\u7f6ehttpd\u670d\u52a1 \u5b89\u88c5httpd\u670d\u52a1\uff0c\u5df2\u6709\u8bf7\u5ffd\u7565 yum install httpd -y 2. 
\u521b\u5efa/etc/httpd/conf.d/openstack-ironic-httpd.conf\u6587\u4ef6\uff0c\u5185\u5bb9\u5982\u4e0b\uff1a Listen 8080 ServerName ironic.openeuler.com ErrorLog \"/var/log/httpd/openstack-ironic-httpd-error_log\" CustomLog \"/var/log/httpd/openstack-ironic-httpd-access_log\" \"%h %l %u %t \\\"%r\\\" %>s %b\" DocumentRoot \"/var/lib/ironic/httproot\" Options Indexes FollowSymLinks Require all granted LogLevel warn AddDefaultCharset UTF-8 EnableSendfile on \u6ce8\u610f\u76d1\u542c\u7684\u7aef\u53e3\u8981\u548c/etc/ironic/ironic.conf\u91cc[deploy]\u9009\u9879\u4e2dhttp_url\u914d\u7f6e\u9879\u4e2d\u6307\u5b9a\u7684\u7aef\u53e3\u4e00\u81f4\u3002 \u91cd\u542fhttpd\u670d\u52a1\u3002 systemctl restart httpd 7. deploy ramdisk\u955c\u50cf\u5236\u4f5c T\u7248\u7684ramdisk\u955c\u50cf\u652f\u6301\u901a\u8fc7ironic-python-agent\u670d\u52a1\u6216disk-image-builder\u5de5\u5177\u5236\u4f5c\uff0c\u4e5f\u53ef\u4ee5\u4f7f\u7528\u793e\u533a\u6700\u65b0\u7684ironic-python-agent-builder\u3002\u7528\u6237\u4e5f\u53ef\u4ee5\u81ea\u884c\u9009\u62e9\u5176\u4ed6\u5de5\u5177\u5236\u4f5c\u3002 \u82e5\u4f7f\u7528T\u7248\u539f\u751f\u5de5\u5177\uff0c\u5219\u9700\u8981\u5b89\u88c5\u5bf9\u5e94\u7684\u8f6f\u4ef6\u5305\u3002 yum install openstack-ironic-python-agent \u6216\u8005 yum install diskimage-builder \u5177\u4f53\u7684\u4f7f\u7528\u65b9\u6cd5\u53ef\u4ee5\u53c2\u8003 \u5b98\u65b9\u6587\u6863 \u8fd9\u91cc\u4ecb\u7ecd\u4e0b\u4f7f\u7528ironic-python-agent-builder\u6784\u5efaironic\u4f7f\u7528\u7684deploy\u955c\u50cf\u7684\u5b8c\u6574\u8fc7\u7a0b\u3002 \u5b89\u88c5 ironic-python-agent-builder 1. \u5b89\u88c5\u5de5\u5177\uff1a ```shell pip install ironic-python-agent-builder ``` 2. \u4fee\u6539\u4ee5\u4e0b\u6587\u4ef6\u4e2d\u7684python\u89e3\u91ca\u5668\uff1a ```shell /usr/bin/yum /usr/libexec/urlgrabber-ext-down ``` 3. 
\u5b89\u88c5\u5176\u5b83\u5fc5\u987b\u7684\u5de5\u5177\uff1a ```shell yum install git ``` \u7531\u4e8e`DIB`\u4f9d\u8d56`semanage`\u547d\u4ee4\uff0c\u6240\u4ee5\u5728\u5236\u4f5c\u955c\u50cf\u4e4b\u524d\u786e\u5b9a\u8be5\u547d\u4ee4\u662f\u5426\u53ef\u7528\uff1a`semanage --help`\uff0c\u5982\u679c\u63d0\u793a\u65e0\u6b64\u547d\u4ee4\uff0c\u5b89\u88c5\u5373\u53ef\uff1a ```shell # \u5148\u67e5\u8be2\u9700\u8981\u5b89\u88c5\u54ea\u4e2a\u5305 [root@localhost ~]# yum provides /usr/sbin/semanage \u5df2\u52a0\u8f7d\u63d2\u4ef6\uff1afastestmirror Loading mirror speeds from cached hostfile * base: mirror.vcu.edu * extras: mirror.vcu.edu * updates: mirror.math.princeton.edu policycoreutils-python-2.5-34.el7.aarch64 : SELinux policy core python utilities \u6e90 \uff1abase \u5339\u914d\u6765\u6e90\uff1a \u6587\u4ef6\u540d \uff1a/usr/sbin/semanage # \u5b89\u88c5 [root@localhost ~]# yum install policycoreutils-python ``` \u5236\u4f5c\u955c\u50cf \u5982\u679c\u662f`arm`\u67b6\u6784\uff0c\u9700\u8981\u6dfb\u52a0\uff1a ```shell export ARCH=aarch64 ``` \u57fa\u672c\u7528\u6cd5\uff1a ```shell usage: ironic-python-agent-builder [-h] [-r RELEASE] [-o OUTPUT] [-e ELEMENT] [-b BRANCH] [-v] [--extra-args EXTRA_ARGS] distribution positional arguments: distribution Distribution to use optional arguments: -h, --help show this help message and exit -r RELEASE, --release RELEASE Distribution release to use -o OUTPUT, --output OUTPUT Output base file name -e ELEMENT, --element ELEMENT Additional DIB element to use -b BRANCH, --branch BRANCH If set, override the branch that is used for ironic- python-agent and requirements -v, --verbose Enable verbose logging in diskimage-builder --extra-args EXTRA_ARGS Extra arguments to pass to diskimage-builder ``` \u4e3e\u4f8b\u8bf4\u660e\uff1a ```shell ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky ``` \u5141\u8bb8ssh\u767b\u5f55 \u521d\u59cb\u5316\u73af\u5883\u53d8\u91cf\uff0c\u7136\u540e\u5236\u4f5c\u955c\u50cf\uff1a ```shell export DIB_DEV_USER_USERNAME=ipa \\ export DIB_DEV_USER_PWDLESS_SUDO=yes \\ export DIB_DEV_USER_PASSWORD='123' ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky -e selinux-permissive -e devuser ``` \u6307\u5b9a\u4ee3\u7801\u4ed3\u5e93 \u521d\u59cb\u5316\u5bf9\u5e94\u7684\u73af\u5883\u53d8\u91cf\uff0c\u7136\u540e\u5236\u4f5c\u955c\u50cf\uff1a ```shell # \u6307\u5b9a\u4ed3\u5e93\u5730\u5740\u4ee5\u53ca\u7248\u672c DIB_REPOLOCATION_ironic_python_agent=git@172.20.2.149:liuzz/ironic-python-agent.git DIB_REPOREF_ironic_python_agent=origin/develop # \u76f4\u63a5\u4ecegerrit\u4e0aclone\u4ee3\u7801 DIB_REPOLOCATION_ironic_python_agent=https://review.opendev.org/openstack/ironic-python-agent DIB_REPOREF_ironic_python_agent=refs/changes/43/701043/1 ``` \u53c2\u8003\uff1a[source-repositories](https://docs.openstack.org/diskimage-builder/latest/elements/source-repositories/README.html)\u3002 \u6307\u5b9a\u4ed3\u5e93\u5730\u5740\u53ca\u7248\u672c\u9a8c\u8bc1\u6210\u529f\u3002 \u6ce8\u610f \u539f\u751f\u7684openstack\u91cc\u7684pxe\u914d\u7f6e\u6587\u4ef6\u7684\u6a21\u7248\u4e0d\u652f\u6301arm64\u67b6\u6784\uff0c\u9700\u8981\u81ea\u5df1\u5bf9\u539f\u751fopenstack\u4ee3\u7801\u8fdb\u884c\u4fee\u6539\uff1a \u5728T\u7248\u4e2d\uff0c\u793e\u533a\u7684ironic\u4ecd\u7136\u4e0d\u652f\u6301arm64\u4f4d\u7684uefi pxe\u542f\u52a8\uff0c\u8868\u73b0\u4e3a\u751f\u6210\u7684grub.cfg\u6587\u4ef6(\u4e00\u822c\u4f4d\u4e8e/tftpboot/\u4e0b)\u683c\u5f0f\u4e0d\u5bf9\u800c\u5bfc\u81f4pxe\u542f\u52a8\u5931\u8d25 
\u9700\u8981\u7528\u6237\u5bf9\u751f\u6210grub.cfg\u7684\u4ee3\u7801\u903b\u8f91\u81ea\u884c\u4fee\u6539\u3002 ironic\u5411ipa\u53d1\u9001\u67e5\u8be2\u547d\u4ee4\u6267\u884c\u72b6\u6001\u8bf7\u6c42\u7684tls\u62a5\u9519\uff1a T\u7248\u7684ipa\u548cironic\u9ed8\u8ba4\u90fd\u4f1a\u5f00\u542ftls\u8ba4\u8bc1\u7684\u65b9\u5f0f\u5411\u5bf9\u65b9\u53d1\u9001\u8bf7\u6c42\uff0c\u8ddf\u636e\u5b98\u7f51\u7684\u8bf4\u660e\u8fdb\u884c\u5173\u95ed\u5373\u53ef\u3002 \u4fee\u6539ironic\u914d\u7f6e\u6587\u4ef6(/etc/ironic/ironic.conf)\u4e0b\u9762\u7684\u914d\u7f6e\u4e2d\u6dfb\u52a0ipa-insecure=1\uff1a [agent] verify_ca = False [pxe] pxe_append_params = nofb nomodeset vga=normal coreos.autologin ipa-insecure=1 ramdisk\u955c\u50cf\u4e2d\u6dfb\u52a0ipa\u914d\u7f6e\u6587\u4ef6/etc/ironic_python_agent/ironic_python_agent.conf\u5e76\u914d\u7f6etls\u7684\u914d\u7f6e\u5982\u4e0b\uff1a /etc/ironic_python_agent/ironic_python_agent.conf (\u9700\u8981\u63d0\u524d\u521b\u5efa/etc/ironic_python_agent\u76ee\u5f55\uff09 [DEFAULT] enable_auto_tls = False \u8bbe\u7f6e\u6743\u9650\uff1a chown -R ipa.ipa /etc/ironic_python_agent/ \u4fee\u6539ipa\u670d\u52a1\u7684\u670d\u52a1\u542f\u52a8\u6587\u4ef6\uff0c\u6dfb\u52a0\u914d\u7f6e\u6587\u4ef6\u9009\u9879 vi usr/lib/systemd/system/ironic-python-agent.service [Unit] Description=Ironic Python Agent After=network-online.target [Service] ExecStartPre=/sbin/modprobe vfat ExecStart=/usr/local/bin/ironic-python-agent --config-file /etc/ironic_python_agent/ironic_python_agent.conf Restart=always RestartSec=30s [Install] WantedBy=multi-user.target \u5728Train\u4e2d\uff0c\u6211\u4eec\u8fd8\u63d0\u4f9b\u4e86ironic-inspector\u7b49\u670d\u52a1\uff0c\u7528\u6237\u53ef\u6839\u636e\u81ea\u8eab\u9700\u6c42\u5b89\u88c5\u3002","title":"Ironic \u5b89\u88c5"},{"location":"install/openEuler-20.03-LTS-SP4/OpenStack-train/#kolla","text":"Kolla\u4e3aOpenStack\u670d\u52a1\u63d0\u4f9b\u751f\u4ea7\u73af\u5883\u53ef\u7528\u7684\u5bb9\u5668\u5316\u90e8\u7f72\u7684\u529f\u80fd\u3002 Kolla\u7684\u5b89\u88c5\u5341\u5206\u7b80\u5355\uff0c\u53ea\u9700\u8981\u5b89\u88c5\u5bf9\u5e94\u7684RPM\u5305\u5373\u53ef yum install openstack-kolla openstack-kolla-ansible \u5b89\u88c5\u5b8c\u540e\uff0c\u5c31\u53ef\u4ee5\u4f7f\u7528 kolla-ansible , kolla-build , kolla-genpwd , kolla-mergepwd \u7b49\u547d\u4ee4\u8fdb\u884c\u76f8\u5173\u7684\u955c\u50cf\u5236\u4f5c\u548c\u5bb9\u5668\u73af\u5883\u90e8\u7f72\u4e86\u3002","title":"Kolla \u5b89\u88c5"},{"location":"install/openEuler-20.03-LTS-SP4/OpenStack-train/#trove","text":"Trove\u662fOpenStack\u7684\u6570\u636e\u5e93\u670d\u52a1\uff0c\u5982\u679c\u7528\u6237\u4f7f\u7528OpenStack\u63d0\u4f9b\u7684\u6570\u636e\u5e93\u670d\u52a1\u5219\u63a8\u8350\u4f7f\u7528\u8be5\u7ec4\u4ef6\u3002\u5426\u5219\uff0c\u53ef\u4ee5\u4e0d\u7528\u5b89\u88c5\u3002 \u8bbe\u7f6e\u6570\u636e\u5e93 \u6570\u636e\u5e93\u670d\u52a1\u5728\u6570\u636e\u5e93\u4e2d\u5b58\u50a8\u4fe1\u606f\uff0c\u521b\u5efa\u4e00\u4e2a trove \u7528\u6237\u53ef\u4ee5\u8bbf\u95ee\u7684 trove \u6570\u636e\u5e93\uff0c\u66ff\u6362 TROVE_DBPASSWORD \u4e3a\u5408\u9002\u7684\u5bc6\u7801 mysql -u root -p MariaDB [(none)]> CREATE DATABASE trove CHARACTER SET utf8; MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'localhost' \\ IDENTIFIED BY 'TROVE_DBPASSWORD'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'%' \\ IDENTIFIED BY 'TROVE_DBPASSWORD'; \u521b\u5efa\u670d\u52a1\u7528\u6237\u8ba4\u8bc1 1\u3001\u521b\u5efa Trove \u670d\u52a1\u7528\u6237 openstack user create --domain default --password-prompt trove openstack 
role add --project service --user trove admin openstack service create --name trove --description \"Database\" database \u89e3\u91ca\uff1a TROVE_PASSWORD \u66ff\u6362\u4e3a trove \u7528\u6237\u7684\u5bc6\u7801 2\u3001\u521b\u5efa Database \u670d\u52a1\u8bbf\u95ee\u5165\u53e3 openstack endpoint create --region RegionOne database public http://controller:8779/v1.0/%\\(tenant_id\\)s openstack endpoint create --region RegionOne database internal http://controller:8779/v1.0/%\\(tenant_id\\)s openstack endpoint create --region RegionOne database admin http://controller:8779/v1.0/%\\(tenant_id\\)s \u5b89\u88c5\u548c\u914d\u7f6e Trove \u5404\u7ec4\u4ef6 1\u3001\u5b89\u88c5 Trove \u5305 ``shell script yum install openstack-trove python3-troveclient 2. \u914d\u7f6e`trove.conf` ```shell script vi /etc/trove/trove.conf [DEFAULT] log_dir = /var/log/trove trove_auth_url = http://controller:5000/ nova_compute_url = http://controller:8774/v2 cinder_url = http://controller:8776/v1 swift_url = http://controller:8080/v1/AUTH_ rpc_backend = rabbit transport_url = rabbit://openstack:RABBIT_PASS@controller:5672 auth_strategy = keystone add_addresses = True api_paste_config = /etc/trove/api-paste.ini nova_proxy_admin_user = admin nova_proxy_admin_pass = ADMIN_PASSWORD nova_proxy_admin_tenant_name = service taskmanager_manager = trove.taskmanager.manager.Manager use_nova_server_config_drive = True # Set these if using Neutron Networking network_driver = trove.network.neutron.NeutronDriver network_label_regex = .* [database] connection = mysql+pymysql://trove:TROVE_DBPASSWORD@controller/trove [keystone_authtoken] www_authenticate_uri = http://controller:5000/ auth_url = http://controller:5000/ auth_type = password project_domain_name = default user_domain_name = default project_name = service username = trove password = TROVE_PASSWORD **\u89e3\u91ca\uff1a** - [Default] \u5206\u7ec4\u4e2d nova_compute_url \u548c cinder_url \u4e3aNova\u548cCinder\u5728Keystone\u4e2d\u521b\u5efa\u7684endpoint - nova_proxy_XXX \u4e3a\u4e00\u4e2a\u80fd\u8bbf\u95eeNova\u670d\u52a1\u7684\u7528\u6237\u4fe1\u606f\uff0c\u4e0a\u4f8b\u4e2d\u4f7f\u7528 admin \u7528\u6237\u4e3a\u4f8b - transport_url \u4e3a RabbitMQ \u8fde\u63a5\u4fe1\u606f\uff0c RABBIT_PASS \u66ff\u6362\u4e3aRabbitMQ\u7684\u5bc6\u7801 - [database] \u5206\u7ec4\u4e2d\u7684 connection \u4e3a\u524d\u9762\u5728mysql\u4e2d\u4e3aTrove\u521b\u5efa\u7684\u6570\u636e\u5e93\u4fe1\u606f - Trove\u7684\u7528\u6237\u4fe1\u606f\u4e2d TROVE_PASSWORD`\u66ff\u6362\u4e3a\u5b9e\u9645trove\u7528\u6237\u7684\u5bc6\u7801 \u914d\u7f6e trove-guestagent.conf ```shell script vi /etc/trove/trove-guestagent.conf rabbit_host = controller rabbit_password = RABBIT_PASS trove_auth_url = http://controller:5000/ **\u89e3\u91ca\uff1a** `guestagent`\u662ftrove\u4e2d\u4e00\u4e2a\u72ec\u7acb\u7ec4\u4ef6\uff0c\u9700\u8981\u9884\u5148\u5185\u7f6e\u5230Trove\u901a\u8fc7Nova\u521b\u5efa\u7684\u865a\u62df \u673a\u955c\u50cf\u4e2d\uff0c\u5728\u521b\u5efa\u597d\u6570\u636e\u5e93\u5b9e\u4f8b\u540e\uff0c\u4f1a\u8d77guestagent\u8fdb\u7a0b\uff0c\u8d1f\u8d23\u901a\u8fc7\u6d88\u606f\u961f\u5217\uff08RabbitMQ\uff09\u5411Trove\u4e0a \u62a5\u5fc3\u8df3\uff0c\u56e0\u6b64\u9700\u8981\u914d\u7f6eRabbitMQ\u7684\u7528\u6237\u548c\u5bc6\u7801\u4fe1\u606f\u3002 **\u4eceVictoria\u7248\u5f00\u59cb\uff0cTrove\u4f7f\u7528\u4e00\u4e2a\u7edf\u4e00\u7684\u955c\u50cf\u6765\u8dd1\u4e0d\u540c\u7c7b\u578b\u7684\u6570\u636e\u5e93\uff0c\u6570\u636e\u5e93\u670d\u52a1\u8fd0\u884c\u5728Guest\u865a\u62df\u673a\u7684Docker\u5bb9\u5668\u4e2d\u3002** - 
`RABBIT_PASS`\u66ff\u6362\u4e3aRabbitMQ\u7684\u5bc6\u7801 4. \u751f\u6210\u6570\u636e`Trove`\u6570\u636e\u5e93\u8868 ```shell script su -s /bin/sh -c \"trove-manage db_sync\" trove \u5b8c\u6210\u5b89\u88c5\u914d\u7f6e \u914d\u7f6e Trove \u670d\u52a1\u81ea\u542f\u52a8 ```shell script systemctl enable openstack-trove-api.service \\ openstack-trove-taskmanager.service \\ openstack-trove-conductor.service 2. \u542f\u52a8\u670d\u52a1 ```shell script systemctl start openstack-trove-api.service \\ openstack-trove-taskmanager.service \\ openstack-trove-conductor.service","title":"Trove \u5b89\u88c5"},{"location":"install/openEuler-20.03-LTS-SP4/OpenStack-train/#swift","text":"Swift \u63d0\u4f9b\u4e86\u5f39\u6027\u53ef\u4f38\u7f29\u3001\u9ad8\u53ef\u7528\u7684\u5206\u5e03\u5f0f\u5bf9\u8c61\u5b58\u50a8\u670d\u52a1\uff0c\u9002\u5408\u5b58\u50a8\u5927\u89c4\u6a21\u975e\u7ed3\u6784\u5316\u6570\u636e\u3002 \u521b\u5efa\u670d\u52a1\u51ed\u8bc1\u3001API\u7aef\u70b9\u3002 \u521b\u5efa\u670d\u52a1\u51ed\u8bc1 #\u521b\u5efaswift\u7528\u6237\uff1a openstack user create --domain default --password-prompt swift #admin\u4e3aswift\u7528\u6237\u6dfb\u52a0\u89d2\u8272\uff1a openstack role add --project service --user swift admin #\u521b\u5efaswift\u670d\u52a1\u5b9e\u4f53\uff1a openstack service create --name swift --description \"OpenStack Object Storage\" object-store \u521b\u5efaswift API \u7aef\u70b9: openstack endpoint create --region RegionOne object-store public http://controller:8080/v1/AUTH_%\\(project_id\\)s openstack endpoint create --region RegionOne object-store internal http://controller:8080/v1/AUTH_%\\(project_id\\)s openstack endpoint create --region RegionOne object-store admin http://controller:8080/v1 \u5b89\u88c5\u8f6f\u4ef6\u5305\uff1a yum install openstack-swift-proxy python3-swiftclient python3-keystoneclient python3-keystonemiddleware memcached \uff08CTL\uff09 \u914d\u7f6eproxy-server\u76f8\u5173\u914d\u7f6e Swift RPM\u5305\u91cc\u5df2\u7ecf\u5305\u542b\u4e86\u4e00\u4e2a\u57fa\u672c\u53ef\u7528\u7684proxy-server.conf\uff0c\u53ea\u9700\u8981\u624b\u52a8\u4fee\u6539\u5176\u4e2d\u7684ip\u548cswift password\u5373\u53ef\u3002 ***\u6ce8\u610f*** **\u6ce8\u610f\u66ff\u6362password\u4e3a\u60a8swift\u5728\u8eab\u4efd\u670d\u52a1\u4e2d\u4e3a\u7528\u6237\u9009\u62e9\u7684\u5bc6\u7801** \u5b89\u88c5\u548c\u914d\u7f6e\u5b58\u50a8\u8282\u70b9 \uff08STG\uff09 \u5b89\u88c5\u652f\u6301\u7684\u7a0b\u5e8f\u5305: yum install xfsprogs rsync \u5c06/dev/vdb\u548c/dev/vdc\u8bbe\u5907\u683c\u5f0f\u5316\u4e3a XFS mkfs.xfs /dev/vdb mkfs.xfs /dev/vdc \u521b\u5efa\u6302\u8f7d\u70b9\u76ee\u5f55\u7ed3\u6784: mkdir -p /srv/node/vdb mkdir -p /srv/node/vdc \u627e\u5230\u65b0\u5206\u533a\u7684 UUID: blkid \u7f16\u8f91/etc/fstab\u6587\u4ef6\u5e76\u5c06\u4ee5\u4e0b\u5185\u5bb9\u6dfb\u52a0\u5230\u5176\u4e2d: UUID=\"\" /srv/node/vdb xfs noatime 0 2 UUID=\"\" /srv/node/vdc xfs noatime 0 2 \u6302\u8f7d\u8bbe\u5907\uff1a mount /srv/node/vdb mount /srv/node/vdc \u6ce8\u610f \u5982\u679c\u7528\u6237\u4e0d\u9700\u8981\u5bb9\u707e\u529f\u80fd\uff0c\u4ee5\u4e0a\u6b65\u9aa4\u53ea\u9700\u8981\u521b\u5efa\u4e00\u4e2a\u8bbe\u5907\u5373\u53ef\uff0c\u540c\u65f6\u53ef\u4ee5\u8df3\u8fc7\u4e0b\u9762\u7684rsync\u914d\u7f6e \uff08\u53ef\u9009\uff09\u521b\u5efa\u6216\u7f16\u8f91/etc/rsyncd.conf\u6587\u4ef6\u4ee5\u5305\u542b\u4ee5\u4e0b\u5185\u5bb9: [DEFAULT] uid = swift gid = swift log file = /var/log/rsyncd.log pid file = /var/run/rsyncd.pid address = MANAGEMENT_INTERFACE_IP_ADDRESS [account] max connections = 2 path = /srv/node/ read only = False 
lock file = /var/lock/account.lock [container] max connections = 2 path = /srv/node/ read only = False lock file = /var/lock/container.lock [object] max connections = 2 path = /srv/node/ read only = False lock file = /var/lock/object.lock \u66ff\u6362MANAGEMENT_INTERFACE_IP_ADDRESS\u4e3a\u5b58\u50a8\u8282\u70b9\u4e0a\u7ba1\u7406\u7f51\u7edc\u7684IP\u5730\u5740 \u542f\u52a8rsyncd\u670d\u52a1\u5e76\u914d\u7f6e\u5b83\u5728\u7cfb\u7edf\u542f\u52a8\u65f6\u542f\u52a8: systemctl enable rsyncd.service systemctl start rsyncd.service \u5728\u5b58\u50a8\u8282\u70b9\u5b89\u88c5\u548c\u914d\u7f6e\u7ec4\u4ef6 \uff08STG\uff09 \u5b89\u88c5\u8f6f\u4ef6\u5305: yum install openstack-swift-account openstack-swift-container openstack-swift-object \u7f16\u8f91/etc/swift\u76ee\u5f55\u7684account-server.conf\u3001container-server.conf\u548cobject-server.conf\u6587\u4ef6\uff0c\u66ff\u6362bind_ip\u4e3a\u5b58\u50a8\u8282\u70b9\u4e0a\u7ba1\u7406\u7f51\u7edc\u7684IP\u5730\u5740\u3002 \u786e\u4fdd\u6302\u8f7d\u70b9\u76ee\u5f55\u7ed3\u6784\u7684\u6b63\u786e\u6240\u6709\u6743: chown -R swift:swift /srv/node \u521b\u5efarecon\u76ee\u5f55\u5e76\u786e\u4fdd\u5176\u62e5\u6709\u6b63\u786e\u7684\u6240\u6709\u6743\uff1a mkdir -p /var/cache/swift chown -R root:swift /var/cache/swift chmod -R 775 /var/cache/swift \u521b\u5efa\u8d26\u53f7\u73af (CTL) \u5207\u6362\u5230/etc/swift\u76ee\u5f55\u3002 cd /etc/swift \u521b\u5efa\u57fa\u7840account.builder\u6587\u4ef6: swift-ring-builder account.builder create 10 1 1 \u5c06\u6bcf\u4e2a\u5b58\u50a8\u8282\u70b9\u6dfb\u52a0\u5230\u73af\u4e2d\uff1a swift-ring-builder account.builder add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6202 --device DEVICE_NAME --weight DEVICE_WEIGHT \u66ff\u6362STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS\u4e3a\u5b58\u50a8\u8282\u70b9\u4e0a\u7ba1\u7406\u7f51\u7edc\u7684IP\u5730\u5740\u3002\u66ff\u6362DEVICE_NAME\u4e3a\u540c\u4e00\u5b58\u50a8\u8282\u70b9\u4e0a\u7684\u5b58\u50a8\u8bbe\u5907\u540d\u79f0 \u6ce8\u610f *** *\u5bf9\u6bcf\u4e2a\u5b58\u50a8\u8282\u70b9\u4e0a\u7684\u6bcf\u4e2a\u5b58\u50a8\u8bbe\u5907\u91cd\u590d\u6b64\u547d\u4ee4 \u9a8c\u8bc1\u6212\u6307\u5185\u5bb9\uff1a swift-ring-builder account.builder \u91cd\u65b0\u5e73\u8861\u6212\u6307\uff1a swift-ring-builder account.builder rebalance \u521b\u5efa\u5bb9\u5668\u73af (CTL) \u5207\u6362\u5230 /etc/swift \u76ee\u5f55\u3002 \u521b\u5efa\u57fa\u7840 container.builder \u6587\u4ef6\uff1a swift-ring-builder container.builder create 10 1 1 \u5c06\u6bcf\u4e2a\u5b58\u50a8\u8282\u70b9\u6dfb\u52a0\u5230\u73af\u4e2d\uff1a swift-ring-builder container.builder \\ add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6201 \\ --device DEVICE_NAME --weight 100 \u66ff\u6362STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS\u4e3a\u5b58\u50a8\u8282\u70b9\u4e0a\u7ba1\u7406\u7f51\u7edc\u7684IP\u5730\u5740\u3002\u66ff\u6362DEVICE_NAME\u4e3a\u540c\u4e00\u5b58\u50a8\u8282\u70b9\u4e0a\u7684\u5b58\u50a8\u8bbe\u5907\u540d\u79f0 \u6ce8\u610f \u5bf9\u6bcf\u4e2a\u5b58\u50a8\u8282\u70b9\u4e0a\u7684\u6bcf\u4e2a\u5b58\u50a8\u8bbe\u5907\u91cd\u590d\u6b64\u547d\u4ee4 \u9a8c\u8bc1\u6212\u6307\u5185\u5bb9\uff1a swift-ring-builder container.builder \u91cd\u65b0\u5e73\u8861\u6212\u6307\uff1a swift-ring-builder container.builder rebalance \u521b\u5efa\u5bf9\u8c61\u73af (CTL) \u5207\u6362\u5230 /etc/swift \u76ee\u5f55\u3002 \u521b\u5efa\u57fa\u7840 object.builder \u6587\u4ef6\uff1a swift-ring-builder object.builder create 10 1 1 
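The three arguments to the `create` commands above are the ring's partition power, replica count, and minimum hours between partition moves. A short note on what `10 1 1` means, plus an illustrative alternative (the three-replica value is an assumption for a multi-device layout, not part of this guide):

```shell
# Syntax of the ring create step used above:
#   swift-ring-builder <builder-file> create <part_power> <replicas> <min_part_hours>
# "create 10 1 1" means 2^10 = 1024 partitions, 1 replica, and a 1-hour lock
# between partition moves -- enough for the single-device setup in this guide.
# Illustrative only: with three storage devices, three replicas would be typical:
#   swift-ring-builder object.builder create 10 3 1
```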
\u5c06\u6bcf\u4e2a\u5b58\u50a8\u8282\u70b9\u6dfb\u52a0\u5230\u73af\u4e2d swift-ring-builder object.builder \\ add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6200 \\ --device DEVICE_NAME --weight 100 \u66ff\u6362STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS\u4e3a\u5b58\u50a8\u8282\u70b9\u4e0a\u7ba1\u7406\u7f51\u7edc\u7684IP\u5730\u5740\u3002\u66ff\u6362DEVICE_NAME\u4e3a\u540c\u4e00\u5b58\u50a8\u8282\u70b9\u4e0a\u7684\u5b58\u50a8\u8bbe\u5907\u540d\u79f0 \u6ce8\u610f *** *\u5bf9\u6bcf\u4e2a\u5b58\u50a8\u8282\u70b9\u4e0a\u7684\u6bcf\u4e2a\u5b58\u50a8\u8bbe\u5907\u91cd\u590d\u6b64\u547d\u4ee4 \u9a8c\u8bc1\u6212\u6307\u5185\u5bb9\uff1a swift-ring-builder object.builder \u91cd\u65b0\u5e73\u8861\u6212\u6307\uff1a swift-ring-builder object.builder rebalance \u5206\u53d1\u73af\u914d\u7f6e\u6587\u4ef6\uff1a \u5c06 account.ring.gz \uff0c container.ring.gz \u4ee5\u53ca object.ring.gz \u6587\u4ef6\u590d\u5236\u5230 /etc/swift \u6bcf\u4e2a\u5b58\u50a8\u8282\u70b9\u548c\u8fd0\u884c\u4ee3\u7406\u670d\u52a1\u7684\u4efb\u4f55\u5176\u4ed6\u8282\u70b9\u4e0a\u76ee\u5f55\u3002 \u5b8c\u6210\u5b89\u88c5 \u7f16\u8f91 /etc/swift/swift.conf \u6587\u4ef6 [swift-hash] swift_hash_path_suffix = test-hash swift_hash_path_prefix = test-hash [storage-policy:0] name = Policy-0 default = yes \u7528\u552f\u4e00\u503c\u66ff\u6362 test-hash \u5c06swift.conf\u6587\u4ef6\u590d\u5236\u5230/etc/swift\u6bcf\u4e2a\u5b58\u50a8\u8282\u70b9\u548c\u8fd0\u884c\u4ee3\u7406\u670d\u52a1\u7684\u4efb\u4f55\u5176\u4ed6\u8282\u70b9\u4e0a\u7684\u76ee\u5f55\u3002 \u5728\u6240\u6709\u8282\u70b9\u4e0a\uff0c\u786e\u4fdd\u914d\u7f6e\u76ee\u5f55\u7684\u6b63\u786e\u6240\u6709\u6743\uff1a chown -R root:swift /etc/swift \u5728\u63a7\u5236\u5668\u8282\u70b9\u548c\u8fd0\u884c\u4ee3\u7406\u670d\u52a1\u7684\u4efb\u4f55\u5176\u4ed6\u8282\u70b9\u4e0a\uff0c\u542f\u52a8\u5bf9\u8c61\u5b58\u50a8\u4ee3\u7406\u670d\u52a1\u53ca\u5176\u4f9d\u8d56\u9879\uff0c\u5e76\u5c06\u5b83\u4eec\u914d\u7f6e\u4e3a\u5728\u7cfb\u7edf\u542f\u52a8\u65f6\u542f\u52a8\uff1a systemctl enable openstack-swift-proxy.service memcached.service systemctl start openstack-swift-proxy.service memcached.service \u5728\u5b58\u50a8\u8282\u70b9\u4e0a\uff0c\u542f\u52a8\u5bf9\u8c61\u5b58\u50a8\u670d\u52a1\u5e76\u5c06\u5b83\u4eec\u914d\u7f6e\u4e3a\u5728\u7cfb\u7edf\u542f\u52a8\u65f6\u542f\u52a8\uff1a systemctl enable openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service systemctl start openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service systemctl enable openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service systemctl start openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service systemctl enable openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service systemctl start openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service","title":"Swift 
\u5b89\u88c5"},{"location":"install/openEuler-20.03-LTS-SP4/OpenStack-train/#cyborg","text":"Cyborg\u4e3aOpenStack\u63d0\u4f9b\u52a0\u901f\u5668\u8bbe\u5907\u7684\u652f\u6301\uff0c\u5305\u62ec GPU, FPGA, ASIC, NP, SoCs, NVMe/NOF SSDs, ODP, DPDK/SPDK\u7b49\u7b49\u3002 \u521d\u59cb\u5316\u5bf9\u5e94\u6570\u636e\u5e93 CREATE DATABASE cyborg; GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'localhost' IDENTIFIED BY 'CYBORG_DBPASS'; GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'%' IDENTIFIED BY 'CYBORG_DBPASS'; \u521b\u5efa\u5bf9\u5e94Keystone\u8d44\u6e90\u5bf9\u8c61 $ openstack user create --domain default --password-prompt cyborg $ openstack role add --project service --user cyborg admin $ openstack service create --name cyborg --description \"Acceleration Service\" accelerator $ openstack endpoint create --region RegionOne \\ accelerator public http://:6666/v1 $ openstack endpoint create --region RegionOne \\ accelerator internal http://:6666/v1 $ openstack endpoint create --region RegionOne \\ accelerator admin http://:6666/v1 \u5b89\u88c5Cyborg yum install openstack-cyborg \u914d\u7f6eCyborg \u4fee\u6539 /etc/cyborg/cyborg.conf [DEFAULT] transport_url = rabbit://%RABBITMQ_USER%:%RABBITMQ_PASSWORD%@%OPENSTACK_HOST_IP%:5672/ use_syslog = False state_path = /var/lib/cyborg debug = True [database] connection = mysql+pymysql://%DATABASE_USER%:%DATABASE_PASSWORD%@%OPENSTACK_HOST_IP%/cyborg [service_catalog] project_domain_id = default user_domain_id = default project_name = service password = PASSWORD username = cyborg auth_url = http://%OPENSTACK_HOST_IP%/identity auth_type = password [placement] project_domain_name = Default project_name = service user_domain_name = Default password = PASSWORD username = placement auth_url = http://%OPENSTACK_HOST_IP%/identity auth_type = password [keystone_authtoken] memcached_servers = localhost:11211 project_domain_name = Default project_name = service user_domain_name = Default password = PASSWORD username = cyborg auth_url = http://%OPENSTACK_HOST_IP%/identity auth_type = password \u81ea\u884c\u4fee\u6539\u5bf9\u5e94\u7684\u7528\u6237\u540d\u3001\u5bc6\u7801\u3001IP\u7b49\u4fe1\u606f \u540c\u6b65\u6570\u636e\u5e93\u8868\u683c cyborg-dbsync --config-file /etc/cyborg/cyborg.conf upgrade \u542f\u52a8Cyborg\u670d\u52a1 systemctl enable openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent systemctl start openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent","title":"Cyborg \u5b89\u88c5"},{"location":"install/openEuler-20.03-LTS-SP4/OpenStack-train/#aodh","text":"\u521b\u5efa\u6570\u636e\u5e93 CREATE DATABASE aodh; GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'localhost' IDENTIFIED BY 'AODH_DBPASS'; GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'%' IDENTIFIED BY 'AODH_DBPASS'; \u521b\u5efa\u5bf9\u5e94Keystone\u8d44\u6e90\u5bf9\u8c61 openstack user create --domain default --password-prompt aodh openstack role add --project service --user aodh admin openstack service create --name aodh --description \"Telemetry\" alarming openstack endpoint create --region RegionOne alarming public http://controller:8042 openstack endpoint create --region RegionOne alarming internal http://controller:8042 openstack endpoint create --region RegionOne alarming admin http://controller:8042 \u5b89\u88c5Aodh yum install openstack-aodh-api openstack-aodh-evaluator openstack-aodh-notifier openstack-aodh-listener openstack-aodh-expirer python3-aodhclient \u4fee\u6539\u914d\u7f6e\u6587\u4ef6 [database] connection = mysql+pymysql://aodh:AODH_DBPASS@controller/aodh 
[DEFAULT] transport_url = rabbit://openstack:RABBIT_PASS@controller auth_strategy = keystone [keystone_authtoken] www_authenticate_uri = http://controller:5000 auth_url = http://controller:5000 memcached_servers = controller:11211 auth_type = password project_domain_id = default user_domain_id = default project_name = service username = aodh password = AODH_PASS [service_credentials] auth_type = password auth_url = http://controller:5000/v3 project_domain_id = default user_domain_id = default project_name = service username = aodh password = AODH_PASS interface = internalURL region_name = RegionOne \u521d\u59cb\u5316\u6570\u636e\u5e93 aodh-dbsync \u542f\u52a8Aodh\u670d\u52a1 systemctl enable openstack-aodh-api.service openstack-aodh-evaluator.service openstack-aodh-notifier.service openstack-aodh-listener.service systemctl start openstack-aodh-api.service openstack-aodh-evaluator.service openstack-aodh-notifier.service openstack-aodh-listener.service","title":"Aodh \u5b89\u88c5"},{"location":"install/openEuler-20.03-LTS-SP4/OpenStack-train/#gnocchi","text":"\u521b\u5efa\u6570\u636e\u5e93 CREATE DATABASE gnocchi; GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'localhost' IDENTIFIED BY 'GNOCCHI_DBPASS'; GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'%' IDENTIFIED BY 'GNOCCHI_DBPASS'; \u521b\u5efa\u5bf9\u5e94Keystone\u8d44\u6e90\u5bf9\u8c61 openstack user create --domain default --password-prompt gnocchi openstack role add --project service --user gnocchi admin openstack service create --name gnocchi --description \"Metric Service\" metric openstack endpoint create --region RegionOne metric public http://controller:8041 openstack endpoint create --region RegionOne metric internal http://controller:8041 openstack endpoint create --region RegionOne metric admin http://controller:8041 \u5b89\u88c5Gnocchi yum install openstack-gnocchi-api openstack-gnocchi-metricd python3-gnocchiclient \u4fee\u6539\u914d\u7f6e\u6587\u4ef6 /etc/gnocchi/gnocchi.conf [api] auth_mode = keystone port = 8041 uwsgi_mode = http-socket [keystone_authtoken] auth_type = password auth_url = http://controller:5000/v3 project_domain_name = Default user_domain_name = Default project_name = service username = gnocchi password = GNOCCHI_PASS interface = internalURL region_name = RegionOne [indexer] url = mysql+pymysql://gnocchi:GNOCCHI_DBPASS@controller/gnocchi [storage] # coordination_url is not required but specifying one will improve # performance with better workload division across workers. 
coordination_url = redis://controller:6379 file_basepath = /var/lib/gnocchi driver = file \u521d\u59cb\u5316\u6570\u636e\u5e93 gnocchi-upgrade \u542f\u52a8Gnocchi\u670d\u52a1 systemctl enable openstack-gnocchi-api.service openstack-gnocchi-metricd.service systemctl start openstack-gnocchi-api.service openstack-gnocchi-metricd.service","title":"Gnocchi \u5b89\u88c5"},{"location":"install/openEuler-20.03-LTS-SP4/OpenStack-train/#ceilometer","text":"\u521b\u5efa\u5bf9\u5e94Keystone\u8d44\u6e90\u5bf9\u8c61 openstack user create --domain default --password-prompt ceilometer openstack role add --project service --user ceilometer admin openstack service create --name ceilometer --description \"Telemetry\" metering \u5b89\u88c5Ceilometer yum install openstack-ceilometer-notification openstack-ceilometer-central \u4fee\u6539\u914d\u7f6e\u6587\u4ef6 /etc/ceilometer/pipeline.yaml publishers: # set address of Gnocchi # + filter out Gnocchi-related activity meters (Swift driver) # + set default archive policy - gnocchi://?filter_project=service&archive_policy=low \u4fee\u6539\u914d\u7f6e\u6587\u4ef6 /etc/ceilometer/ceilometer.conf [DEFAULT] transport_url = rabbit://openstack:RABBIT_PASS@controller [service_credentials] auth_type = password auth_url = http://controller:5000/v3 project_domain_id = default user_domain_id = default project_name = service username = ceilometer password = CEILOMETER_PASS interface = internalURL region_name = RegionOne \u521d\u59cb\u5316\u6570\u636e\u5e93 ceilometer-upgrade \u542f\u52a8Ceilometer\u670d\u52a1 systemctl enable openstack-ceilometer-notification.service openstack-ceilometer-central.service systemctl start openstack-ceilometer-notification.service openstack-ceilometer-central.service","title":"Ceilometer \u5b89\u88c5"},{"location":"install/openEuler-20.03-LTS-SP4/OpenStack-train/#heat","text":"\u521b\u5efa heat \u6570\u636e\u5e93\uff0c\u5e76\u6388\u4e88 heat \u6570\u636e\u5e93\u6b63\u786e\u7684\u8bbf\u95ee\u6743\u9650\uff0c\u66ff\u6362 HEAT_DBPASS \u4e3a\u5408\u9002\u7684\u5bc6\u7801 CREATE DATABASE heat; GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'localhost' IDENTIFIED BY 'HEAT_DBPASS'; GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'%' IDENTIFIED BY 'HEAT_DBPASS'; \u521b\u5efa\u670d\u52a1\u51ed\u8bc1\uff0c\u521b\u5efa heat \u7528\u6237\uff0c\u5e76\u4e3a\u5176\u589e\u52a0 admin \u89d2\u8272 openstack user create --domain default --password-prompt heat openstack role add --project service --user heat admin \u521b\u5efa heat \u548c heat-cfn \u670d\u52a1\u53ca\u5176\u5bf9\u5e94\u7684API\u7aef\u70b9 openstack service create --name heat --description \"Orchestration\" orchestration openstack service create --name heat-cfn --description \"Orchestration\" cloudformation openstack endpoint create --region RegionOne orchestration public http://controller:8004/v1/%\\(tenant_id\\)s openstack endpoint create --region RegionOne orchestration internal http://controller:8004/v1/%\\(tenant_id\\)s openstack endpoint create --region RegionOne orchestration admin http://controller:8004/v1/%\\(tenant_id\\)s openstack endpoint create --region RegionOne cloudformation public http://controller:8000/v1 openstack endpoint create --region RegionOne cloudformation internal http://controller:8000/v1 openstack endpoint create --region RegionOne cloudformation admin http://controller:8000/v1 \u521b\u5efastack\u7ba1\u7406\u7684\u989d\u5916\u4fe1\u606f\uff0c\u5305\u62ec heat domain\u53ca\u5176\u5bf9\u5e94domain\u7684admin\u7528\u6237 heat_domain_admin \uff0c heat_stack_owner \u89d2\u8272\uff0c 
heat_stack_user \u89d2\u8272 openstack user create --domain heat --password-prompt heat_domain_admin openstack role add --domain heat --user-domain heat --user heat_domain_admin admin openstack role create heat_stack_owner openstack role create heat_stack_user \u5b89\u88c5\u8f6f\u4ef6\u5305 yum install openstack-heat-api openstack-heat-api-cfn openstack-heat-engine \u4fee\u6539\u914d\u7f6e\u6587\u4ef6 /etc/heat/heat.conf [DEFAULT] transport_url = rabbit://openstack:RABBIT_PASS@controller heat_metadata_server_url = http://controller:8000 heat_waitcondition_server_url = http://controller:8000/v1/waitcondition stack_domain_admin = heat_domain_admin stack_domain_admin_password = HEAT_DOMAIN_PASS stack_user_domain_name = heat [database] connection = mysql+pymysql://heat:HEAT_DBPASS@controller/heat [keystone_authtoken] www_authenticate_uri = http://controller:5000 auth_url = http://controller:5000 memcached_servers = controller:11211 auth_type = password project_domain_name = default user_domain_name = default project_name = service username = heat password = HEAT_PASS [trustee] auth_type = password auth_url = http://controller:5000 username = heat password = HEAT_PASS user_domain_name = default [clients_keystone] auth_uri = http://controller:5000 \u521d\u59cb\u5316 heat \u6570\u636e\u5e93\u8868 su -s /bin/sh -c \"heat-manage db_sync\" heat \u542f\u52a8\u670d\u52a1 systemctl enable openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service systemctl start openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service","title":"Heat \u5b89\u88c5"},{"location":"install/openEuler-20.03-LTS-SP4/OpenStack-train/#_4","text":"","title":"\u65b0\u7279\u6027\u7684\u5b89\u88c5"},{"location":"install/openEuler-20.03-LTS-SP4/OpenStack-train/#neutron_1","text":"\u6d41\u91cf\u5206\u6563\u7279\u6027\u662fOpenStack SIG\u5728openEuler 20.03\u4e2d\u57fa\u4e8eOpenStack Train\u5f00\u53d1\u7684Neutron\u65b0\u7279\u6027\uff0c\u8be5\u7279\u6027\u5141\u8bb8\u7528\u6237\u6307\u5b9a\u8def\u7531\u5668\u6240\u5728\u7684\u7f51\u7edc\u8282\u70b9\uff0c\u540c\u65f6\u8fd8\u63d0\u4f9b\u57fa\u4e8e\u8def\u7531\u5668\u5916\u90e8\u7f51\u5173\u7684\u7aef\u53e3\u8f6c\u53d1\u7684\u529f\u80fd\u3002\u8be5\u7279\u6027\u652f\u6301Neutron\u7684L3 HA\u548cDVR\uff0c\u5177\u4f53\u7ec6\u8282\u53ef\u4ee5\u53c2\u8003 \u7279\u6027\u6587\u6863 \u3002\u672c\u6587\u6863\u4e3b\u8981\u63cf\u8ff0\u5b89\u88c5\u6b65\u9aa4\u3002 \u6309\u7167\u524d\u9762\u7ae0\u8282\u90e8\u7f72\u597d\u4e00\u5957OpenStack\u73af\u5883\uff08\u975e\u5bb9\u5668\uff09\uff0c\u7136\u540e\u5148\u5b89\u88c5plugin\u3002 dnf install -y openstack-neutron-distributed-traffic python3-neutron-lib-distributed-traffic \u914d\u7f6e\u6570\u636e\u5e93 \u672c\u7279\u6027\u5bf9Neutron\u7684\u6570\u636e\u8868\u8fdb\u884c\u4e86\u6269\u5145\uff0c\u56e0\u6b64\u9700\u8981\u540c\u6b65\u6570\u636e\u5e93 su -s /bin/sh -c \"neutron-db-manage --config-file /etc/neutron/neutron.conf \\ --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head\" neutron (CTL) \u7f16\u8f91\u914d\u7f6e\u6587\u4ef6\u3002 vim /etc/neutron/neutron.conf [DEFAULT] enable_set_route_for_single_port = True network_nodes = network-1,network-2,network-3 router_scheduler_driver = neutron.scheduler.l3_agent_scheduler.PreferredL3AgentRoutersScheduler [network-1] compute_nodes = compute-1 [network-2] compute_nodes = compute-2 [network-3] compute_nodes = compute-3 
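The host names placed in `network_nodes` and `compute_nodes` have to match the hosts that the Neutron agents report. A quick check that can be run before editing the file (a sketch, not a step required by this guide):

```shell
# The entries in network_nodes / compute_nodes should match the Host column
openstack network agent list -c "Agent Type" -c Host
```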
\u5176\u4e2dnetwork-1\u3001network-2\u548cnetwork-3\u662f\u7f51\u7edc\u8282\u70b9\u7684hostname\uff0ccompute-1\u3001compute-2\u548ccompute-3\u662f\u8ba1\u7b97\u8282\u70b9\u7684hostname\u3002\u6309\u7167\u4e0a\u9762\u8bbe\u7f6e\u7528\u6237\u5728\u521b\u5efa\u591a\u4e2a\u8def\u7531\u5668\u8fde\u63a5\u5230\u540c\u4e00\u5b50\u7f51\u65f6\uff0c\u4f4d\u4e8e\u4e0d\u540c\u8ba1\u7b97\u8282\u70b9\u7684\u865a\u62df\u673a\u7684\u6d41\u91cf\u5c31\u6309\u7167\u914d\u7f6e\u6587\u4ef6\u627e\u5230\u5bf9\u5e94\u7684\u7f51\u7edc\u8282\u70b9\u7684\u8def\u7531\u5668\u3002 \u6253\u5f00\u57fa\u4e8e\u8def\u7531\u5668\u5916\u90e8\u7f51\u5173\u7684\u7aef\u53e3\u8f6c\u53d1\uff08\u53ef\u9009\uff09\u3002\u57fa\u4e8e\u5916\u90e8\u7f51\u5173\u7684\u7aef\u53e3\u8f6c\u53d1\u4e0e\u57fa\u4e8e\u6d6e\u52a8IP\u7684\u7aef\u53e3\u8f6c\u53d1\u4e0d\u80fd\u540c\u65f6\u4f7f\u7528\u3002 vim /etc/neutron/neutron.conf [DEFAULT] service_plugins = router,rg_port_forwarding vim /etc/neutron/l3_agent.ini [agent] extensions = rg_port_forwarding \u91cd\u542f\u76f8\u5173\u670d\u52a1\u3002 systemctl restart neutron-server.service neutron-dhcp-agent.service neutron-l3-agent.service (CTL)","title":"Neutron\u6d41\u91cf\u5206\u6563\u7279\u6027"},{"location":"install/openEuler-21.09/OpenStack-wallaby/","text":"OpenStack-Wallaby \u90e8\u7f72\u6307\u5357 \u00b6 OpenStack-Wallaby \u90e8\u7f72\u6307\u5357 OpenStack \u7b80\u4ecb \u7ea6\u5b9a \u51c6\u5907\u73af\u5883 \u73af\u5883\u914d\u7f6e \u5b89\u88c5 SQL DataBase \u5b89\u88c5 RabbitMQ \u5b89\u88c5 Memcached \u5b89\u88c5 OpenStack Keystone \u5b89\u88c5 Glance \u5b89\u88c5 Placement\u5b89\u88c5 Nova \u5b89\u88c5 Neutron \u5b89\u88c5 Cinder \u5b89\u88c5 horizon \u5b89\u88c5 Tempest \u5b89\u88c5 Ironic \u5b89\u88c5 Kolla \u5b89\u88c5 Trove \u5b89\u88c5 Swift \u5b89\u88c5 OpenStack \u7b80\u4ecb \u00b6 OpenStack \u662f\u4e00\u4e2a\u793e\u533a\uff0c\u4e5f\u662f\u4e00\u4e2a\u9879\u76ee\u3002\u5b83\u63d0\u4f9b\u4e86\u4e00\u4e2a\u90e8\u7f72\u4e91\u7684\u64cd\u4f5c\u5e73\u53f0\u6216\u5de5\u5177\u96c6\uff0c\u4e3a\u7ec4\u7ec7\u63d0\u4f9b\u53ef\u6269\u5c55\u7684\u3001\u7075\u6d3b\u7684\u4e91\u8ba1\u7b97\u3002 \u4f5c\u4e3a\u4e00\u4e2a\u5f00\u6e90\u7684\u4e91\u8ba1\u7b97\u7ba1\u7406\u5e73\u53f0\uff0cOpenStack \u7531nova\u3001cinder\u3001neutron\u3001glance\u3001keystone\u3001horizon\u7b49\u51e0\u4e2a\u4e3b\u8981\u7684\u7ec4\u4ef6\u7ec4\u5408\u8d77\u6765\u5b8c\u6210\u5177\u4f53\u5de5\u4f5c\u3002OpenStack \u652f\u6301\u51e0\u4e4e\u6240\u6709\u7c7b\u578b\u7684\u4e91\u73af\u5883\uff0c\u9879\u76ee\u76ee\u6807\u662f\u63d0\u4f9b\u5b9e\u65bd\u7b80\u5355\u3001\u53ef\u5927\u89c4\u6a21\u6269\u5c55\u3001\u4e30\u5bcc\u3001\u6807\u51c6\u7edf\u4e00\u7684\u4e91\u8ba1\u7b97\u7ba1\u7406\u5e73\u53f0\u3002OpenStack \u901a\u8fc7\u5404\u79cd\u4e92\u8865\u7684\u670d\u52a1\u63d0\u4f9b\u4e86\u57fa\u7840\u8bbe\u65bd\u5373\u670d\u52a1\uff08IaaS\uff09\u7684\u89e3\u51b3\u65b9\u6848\uff0c\u6bcf\u4e2a\u670d\u52a1\u63d0\u4f9b API \u8fdb\u884c\u96c6\u6210\u3002 openEuler 21.09 \u7248\u672c\u5b98\u65b9\u6e90\u5df2\u7ecf\u652f\u6301 OpenStack-Wallaby \u7248\u672c\uff0c\u7528\u6237\u53ef\u4ee5\u914d\u7f6e\u597d yum \u6e90\u540e\u6839\u636e\u6b64\u6587\u6863\u8fdb\u884c OpenStack \u90e8\u7f72\u3002 \u7ea6\u5b9a \u00b6 OpenStack \u652f\u6301\u591a\u79cd\u5f62\u6001\u90e8\u7f72\uff0c\u6b64\u6587\u6863\u652f\u6301 ALL in One \u4ee5\u53ca Distributed \u4e24\u79cd\u90e8\u7f72\u65b9\u5f0f\uff0c\u6309\u7167\u5982\u4e0b\u65b9\u5f0f\u7ea6\u5b9a\uff1a ALL in One \u6a21\u5f0f: \u5ffd\u7565\u6240\u6709\u53ef\u80fd\u7684\u540e\u7f00 Distributed \u6a21\u5f0f: \u4ee5 
`(CTL)` \u4e3a\u540e\u7f00\u8868\u793a\u6b64\u6761\u914d\u7f6e\u6216\u8005\u547d\u4ee4\u4ec5\u9002\u7528`\u63a7\u5236\u8282\u70b9` \u4ee5 `(CPT)` \u4e3a\u540e\u7f00\u8868\u793a\u6b64\u6761\u914d\u7f6e\u6216\u8005\u547d\u4ee4\u4ec5\u9002\u7528`\u8ba1\u7b97\u8282\u70b9` \u4ee5 `(STG)` \u4e3a\u540e\u7f00\u8868\u793a\u6b64\u6761\u914d\u7f6e\u6216\u8005\u547d\u4ee4\u4ec5\u9002\u7528`\u5b58\u50a8\u8282\u70b9` \u9664\u6b64\u4e4b\u5916\u8868\u793a\u6b64\u6761\u914d\u7f6e\u6216\u8005\u547d\u4ee4\u540c\u65f6\u9002\u7528`\u63a7\u5236\u8282\u70b9`\u548c`\u8ba1\u7b97\u8282\u70b9` \u6ce8\u610f \u6d89\u53ca\u5230\u4ee5\u4e0a\u7ea6\u5b9a\u7684\u670d\u52a1\u5982\u4e0b\uff1a Cinder Nova Neutron \u51c6\u5907\u73af\u5883 \u00b6 \u73af\u5883\u914d\u7f6e \u00b6 \u914d\u7f6e 21.09 \u5b98\u65b9yum\u6e90\uff0c\u9700\u8981\u542f\u7528EPOL\u8f6f\u4ef6\u4ed3\u4ee5\u652f\u6301OpenStack cat << EOF >> /etc/yum.repos.d/21.09-OpenStack_Wallaby.repo [OS] name=OS baseurl=http://repo.openeuler.org/openEuler-21.09/OS/$basearch/ enabled=1 gpgcheck=1 gpgkey=http://repo.openeuler.org/openEuler-21.09/OS/$basearch/RPM-GPG-KEY-openEuler [everything] name=everything baseurl=http://repo.openeuler.org/openEuler-21.09/everything/$basearch/ enabled=1 gpgcheck=1 gpgkey=http://repo.openeuler.org/openEuler-21.09/everything/$basearch/RPM-GPG-KEY-openEuler [EPOL] name=EPOL baseurl=http://repo.openeuler.org/openEuler-21.09/EPOL/$basearch/ enabled=1 gpgcheck=1 gpgkey=http://repo.openeuler.org/openEuler-21.09/OS/$basearch/RPM-GPG-KEY-openEuler EOF yum clean all && yum makecache \u4fee\u6539\u4e3b\u673a\u540d\u4ee5\u53ca\u6620\u5c04 \u8bbe\u7f6e\u5404\u4e2a\u8282\u70b9\u7684\u4e3b\u673a\u540d hostnamectl set-hostname controller (CTL) hostnamectl set-hostname compute (CPT) \u5047\u8bbecontroller\u8282\u70b9\u7684IP\u662f 10.0.0.11 ,compute\u8282\u70b9\u7684IP\u662f 10.0.0.12 \uff08\u5982\u679c\u5b58\u5728\u7684\u8bdd\uff09,\u5219\u4e8e /etc/hosts \u65b0\u589e\u5982\u4e0b\uff1a 10.0.0.11 controller 10.0.0.12 compute \u5b89\u88c5 SQL DataBase \u00b6 \u6267\u884c\u5982\u4e0b\u547d\u4ee4\uff0c\u5b89\u88c5\u8f6f\u4ef6\u5305\u3002 yum install mariadb mariadb-server python3-PyMySQL \u6267\u884c\u5982\u4e0b\u547d\u4ee4\uff0c\u521b\u5efa\u5e76\u7f16\u8f91 /etc/my.cnf.d/openstack.cnf \u6587\u4ef6\u3002 vim /etc/my.cnf.d/openstack.cnf [mysqld] bind-address = 10.0.0.11 default-storage-engine = innodb innodb_file_per_table = on max_connections = 4096 collation-server = utf8_general_ci character-set-server = utf8 \u6ce8\u610f \u5176\u4e2d bind-address \u8bbe\u7f6e\u4e3a\u63a7\u5236\u8282\u70b9\u7684\u7ba1\u7406IP\u5730\u5740\u3002 \u542f\u52a8 DataBase \u670d\u52a1\uff0c\u5e76\u4e3a\u5176\u914d\u7f6e\u5f00\u673a\u81ea\u542f\u52a8\uff1a systemctl enable mariadb.service systemctl start mariadb.service \u914d\u7f6eDataBase\u7684\u9ed8\u8ba4\u5bc6\u7801\uff08\u53ef\u9009\uff09 mysql_secure_installation \u6ce8\u610f \u6839\u636e\u63d0\u793a\u8fdb\u884c\u5373\u53ef \u5b89\u88c5 RabbitMQ \u00b6 \u6267\u884c\u5982\u4e0b\u547d\u4ee4\uff0c\u5b89\u88c5\u8f6f\u4ef6\u5305\u3002 yum install rabbitmq-server \u542f\u52a8 RabbitMQ \u670d\u52a1\uff0c\u5e76\u4e3a\u5176\u914d\u7f6e\u5f00\u673a\u81ea\u542f\u52a8\u3002 systemctl enable rabbitmq-server.service systemctl start rabbitmq-server.service \u6dfb\u52a0 OpenStack\u7528\u6237\u3002 rabbitmqctl add_user openstack RABBIT_PASS \u6ce8\u610f \u66ff\u6362 RABBIT_PASS \uff0c\u4e3a OpenStack \u7528\u6237\u8bbe\u7f6e\u5bc6\u7801 
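Before granting permissions in the next step, the newly created account can be checked; a small optional verification sketch (not required by this guide):

```shell
# Confirm the openstack user exists and that its password works
rabbitmqctl list_users
rabbitmqctl authenticate_user openstack RABBIT_PASS
```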
\u8bbe\u7f6eopenstack\u7528\u6237\u6743\u9650\uff0c\u5141\u8bb8\u8fdb\u884c\u914d\u7f6e\u3001\u5199\u3001\u8bfb\uff1a rabbitmqctl set_permissions openstack \".*\" \".*\" \".*\" \u5b89\u88c5 Memcached \u00b6 \u6267\u884c\u5982\u4e0b\u547d\u4ee4\uff0c\u5b89\u88c5\u4f9d\u8d56\u8f6f\u4ef6\u5305\u3002 yum install memcached python3-memcached \u7f16\u8f91 /etc/sysconfig/memcached \u6587\u4ef6\u3002 vim /etc/sysconfig/memcached OPTIONS=\"-l 127.0.0.1,::1,controller\" \u6267\u884c\u5982\u4e0b\u547d\u4ee4\uff0c\u542f\u52a8 Memcached \u670d\u52a1\uff0c\u5e76\u4e3a\u5176\u914d\u7f6e\u5f00\u673a\u542f\u52a8\u3002 systemctl enable memcached.service systemctl start memcached.service \u6ce8\u610f \u670d\u52a1\u542f\u52a8\u540e\uff0c\u53ef\u4ee5\u901a\u8fc7\u547d\u4ee4 memcached-tool controller stats \u786e\u4fdd\u542f\u52a8\u6b63\u5e38\uff0c\u670d\u52a1\u53ef\u7528\uff0c\u5176\u4e2d\u53ef\u4ee5\u5c06 controller \u66ff\u6362\u4e3a\u63a7\u5236\u8282\u70b9\u7684\u7ba1\u7406IP\u5730\u5740\u3002 \u5b89\u88c5 OpenStack \u00b6 Keystone \u5b89\u88c5 \u00b6 \u521b\u5efa keystone \u6570\u636e\u5e93\u5e76\u6388\u6743\u3002 mysql -u root -p MariaDB [(none)]> CREATE DATABASE keystone; MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \\ IDENTIFIED BY 'KEYSTONE_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \\ IDENTIFIED BY 'KEYSTONE_DBPASS'; MariaDB [(none)]> exit \u6ce8\u610f \u66ff\u6362 KEYSTONE_DBPASS \uff0c\u4e3a Keystone \u6570\u636e\u5e93\u8bbe\u7f6e\u5bc6\u7801 \u5b89\u88c5\u8f6f\u4ef6\u5305\u3002 yum install openstack-keystone httpd mod_wsgi \u914d\u7f6ekeystone\u76f8\u5173\u914d\u7f6e vim /etc/keystone/keystone.conf [database] connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone [token] provider = fernet \u89e3\u91ca [database]\u90e8\u5206\uff0c\u914d\u7f6e\u6570\u636e\u5e93\u5165\u53e3 [token]\u90e8\u5206\uff0c\u914d\u7f6etoken provider \u6ce8\u610f\uff1a \u66ff\u6362 KEYSTONE_DBPASS \u4e3a Keystone \u6570\u636e\u5e93\u7684\u5bc6\u7801 \u540c\u6b65\u6570\u636e\u5e93\u3002 su -s /bin/sh -c \"keystone-manage db_sync\" keystone \u521d\u59cb\u5316Fernet\u5bc6\u94a5\u4ed3\u5e93\u3002 keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone keystone-manage credential_setup --keystone-user keystone --keystone-group keystone \u542f\u52a8\u670d\u52a1\u3002 keystone-manage bootstrap --bootstrap-password ADMIN_PASS \\ --bootstrap-admin-url http://controller:5000/v3/ \\ --bootstrap-internal-url http://controller:5000/v3/ \\ --bootstrap-public-url http://controller:5000/v3/ \\ --bootstrap-region-id RegionOne \u6ce8\u610f \u66ff\u6362 ADMIN_PASS \uff0c\u4e3a admin \u7528\u6237\u8bbe\u7f6e\u5bc6\u7801 \u914d\u7f6eApache HTTP server vim /etc/httpd/conf/httpd.conf ServerName controller ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/ \u89e3\u91ca \u914d\u7f6e ServerName \u9879\u5f15\u7528\u63a7\u5236\u8282\u70b9 \u6ce8\u610f \u5982\u679c ServerName \u9879\u4e0d\u5b58\u5728\u5219\u9700\u8981\u521b\u5efa \u542f\u52a8Apache HTTP\u670d\u52a1\u3002 systemctl enable httpd.service systemctl start httpd.service \u521b\u5efa\u73af\u5883\u53d8\u91cf\u914d\u7f6e\u3002 cat << EOF >> ~/.admin-openrc export OS_PROJECT_DOMAIN_NAME=Default export OS_USER_DOMAIN_NAME=Default export OS_PROJECT_NAME=admin export OS_USERNAME=admin export OS_PASSWORD=ADMIN_PASS export OS_AUTH_URL=http://controller:5000/v3 export OS_IDENTITY_API_VERSION=3 export OS_IMAGE_API_VERSION=2 EOF \u6ce8\u610f \u66ff\u6362 ADMIN_PASS \u4e3a admin 
\u7528\u6237\u7684\u5bc6\u7801 \u4f9d\u6b21\u521b\u5efadomain, projects, users, roles\uff0c\u9700\u8981\u5148\u5b89\u88c5\u597dpython3-openstackclient\uff1a yum install python3-openstackclient \u5bfc\u5165\u73af\u5883\u53d8\u91cf source ~/.admin-openrc \u521b\u5efaproject service \uff0c\u5176\u4e2d domain default \u5728 keystone-manage bootstrap \u65f6\u5df2\u521b\u5efa openstack domain create --description \"An Example Domain\" example openstack project create --domain default --description \"Service Project\" service \u521b\u5efa\uff08non-admin\uff09project myproject \uff0cuser myuser \u548c role myrole \uff0c\u4e3a myproject \u548c myuser \u6dfb\u52a0\u89d2\u8272 myrole openstack project create --domain default --description \"Demo Project\" myproject openstack user create --domain default --password-prompt myuser openstack role create myrole openstack role add --project myproject --user myuser myrole \u9a8c\u8bc1 \u53d6\u6d88\u4e34\u65f6\u73af\u5883\u53d8\u91cfOS_AUTH_URL\u548cOS_PASSWORD\uff1a source ~/.admin-openrc unset OS_AUTH_URL OS_PASSWORD \u4e3aadmin\u7528\u6237\u8bf7\u6c42token\uff1a openstack --os-auth-url http://controller:5000/v3 \\ --os-project-domain-name Default --os-user-domain-name Default \\ --os-project-name admin --os-username admin token issue \u4e3amyuser\u7528\u6237\u8bf7\u6c42token\uff1a openstack --os-auth-url http://controller:5000/v3 \\ --os-project-domain-name Default --os-user-domain-name Default \\ --os-project-name myproject --os-username myuser token issue Glance \u5b89\u88c5 \u00b6 \u521b\u5efa\u6570\u636e\u5e93\u3001\u670d\u52a1\u51ed\u8bc1\u548c API \u7aef\u70b9 \u521b\u5efa\u6570\u636e\u5e93\uff1a mysql -u root -p MariaDB [(none)]> CREATE DATABASE glance; MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \\ IDENTIFIED BY 'GLANCE_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \\ IDENTIFIED BY 'GLANCE_DBPASS'; MariaDB [(none)]> exit \u6ce8\u610f: \u66ff\u6362 GLANCE_DBPASS \uff0c\u4e3a glance \u6570\u636e\u5e93\u8bbe\u7f6e\u5bc6\u7801 \u521b\u5efa\u670d\u52a1\u51ed\u8bc1 source ~/.admin-openrc openstack user create --domain default --password-prompt glance openstack role add --project service --user glance admin openstack service create --name glance --description \"OpenStack Image\" image \u521b\u5efa\u955c\u50cf\u670d\u52a1API\u7aef\u70b9\uff1a openstack endpoint create --region RegionOne image public http://controller:9292 openstack endpoint create --region RegionOne image internal http://controller:9292 openstack endpoint create --region RegionOne image admin http://controller:9292 \u5b89\u88c5\u8f6f\u4ef6\u5305 yum install openstack-glance \u914d\u7f6eglance\u76f8\u5173\u914d\u7f6e\uff1a vim /etc/glance/glance-api.conf [database] connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance [keystone_authtoken] www_authenticate_uri = http://controller:5000 auth_url = http://controller:5000 memcached_servers = controller:11211 auth_type = password project_domain_name = Default user_domain_name = Default project_name = service username = glance password = GLANCE_PASS [paste_deploy] flavor = keystone [glance_store] stores = file,http default_store = file filesystem_store_datadir = /var/lib/glance/images/ \u89e3\u91ca: [database]\u90e8\u5206\uff0c\u914d\u7f6e\u6570\u636e\u5e93\u5165\u53e3 [keystone_authtoken] [paste_deploy]\u90e8\u5206\uff0c\u914d\u7f6e\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u5165\u53e3 
[glance_store]\u90e8\u5206\uff0c\u914d\u7f6e\u672c\u5730\u6587\u4ef6\u7cfb\u7edf\u5b58\u50a8\u548c\u955c\u50cf\u6587\u4ef6\u7684\u4f4d\u7f6e \u6ce8\u610f \u66ff\u6362 GLANCE_DBPASS \u4e3a glance \u6570\u636e\u5e93\u7684\u5bc6\u7801 \u66ff\u6362 GLANCE_PASS \u4e3a glance \u7528\u6237\u7684\u5bc6\u7801 \u540c\u6b65\u6570\u636e\u5e93\uff1a su -s /bin/sh -c \"glance-manage db_sync\" glance \u542f\u52a8\u670d\u52a1\uff1a systemctl enable openstack-glance-api.service systemctl start openstack-glance-api.service \u9a8c\u8bc1 \u4e0b\u8f7d\u955c\u50cf source ~/.admin-openrc wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img \u6ce8\u610f \u5982\u679c\u60a8\u4f7f\u7528\u7684\u73af\u5883\u662f\u9cb2\u9e4f\u67b6\u6784\uff0c\u8bf7\u4e0b\u8f7daarch64\u7248\u672c\u7684\u955c\u50cf\uff1b\u5df2\u5bf9\u955c\u50cfcirros-0.5.2-aarch64-disk.img\u8fdb\u884c\u6d4b\u8bd5\u3002 \u5411Image\u670d\u52a1\u4e0a\u4f20\u955c\u50cf\uff1a openstack image create --disk-format qcow2 --container-format bare \\ --file cirros-0.4.0-x86_64-disk.img --public cirros \u786e\u8ba4\u955c\u50cf\u4e0a\u4f20\u5e76\u9a8c\u8bc1\u5c5e\u6027\uff1a openstack image list Placement\u5b89\u88c5 \u00b6 \u521b\u5efa\u6570\u636e\u5e93\u3001\u670d\u52a1\u51ed\u8bc1\u548c API \u7aef\u70b9 \u521b\u5efa\u6570\u636e\u5e93\uff1a \u4f5c\u4e3a root \u7528\u6237\u8bbf\u95ee\u6570\u636e\u5e93\uff0c\u521b\u5efa placement \u6570\u636e\u5e93\u5e76\u6388\u6743\u3002 mysql -u root -p MariaDB [(none)]> CREATE DATABASE placement; MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' \\ IDENTIFIED BY 'PLACEMENT_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' \\ IDENTIFIED BY 'PLACEMENT_DBPASS'; MariaDB [(none)]> exit \u6ce8\u610f \u66ff\u6362 PLACEMENT_DBPASS \u4e3a placement \u6570\u636e\u5e93\u8bbe\u7f6e\u5bc6\u7801 source admin-openrc \u6267\u884c\u5982\u4e0b\u547d\u4ee4\uff0c\u521b\u5efa placement \u670d\u52a1\u51ed\u8bc1\u3001\u521b\u5efa placement \u7528\u6237\u4ee5\u53ca\u6dfb\u52a0\u2018admin\u2019\u89d2\u8272\u5230\u7528\u6237\u2018placement\u2019\u3002 \u521b\u5efaPlacement API\u670d\u52a1 openstack user create --domain default --password-prompt placement openstack role add --project service --user placement admin openstack service create --name placement --description \"Placement API\" placement \u521b\u5efaplacement\u670d\u52a1API\u7aef\u70b9\uff1a openstack endpoint create --region RegionOne placement public http://controller:8778 openstack endpoint create --region RegionOne placement internal http://controller:8778 openstack endpoint create --region RegionOne placement admin http://controller:8778 \u5b89\u88c5\u548c\u914d\u7f6e \u5b89\u88c5\u8f6f\u4ef6\u5305\uff1a yum install openstack-placement-api \u914d\u7f6eplacement\uff1a \u7f16\u8f91 /etc/placement/placement.conf \u6587\u4ef6\uff1a \u5728[placement_database]\u90e8\u5206\uff0c\u914d\u7f6e\u6570\u636e\u5e93\u5165\u53e3 \u5728[api] [keystone_authtoken]\u90e8\u5206\uff0c\u914d\u7f6e\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u5165\u53e3 # vim /etc/placement/placement.conf [placement_database] # ... connection = mysql+pymysql://placement:PLACEMENT_DBPASS@controller/placement [api] # ... auth_strategy = keystone [keystone_authtoken] # ... 
auth_url = http://controller:5000/v3 memcached_servers = controller:11211 auth_type = password project_domain_name = Default user_domain_name = Default project_name = service username = placement password = PLACEMENT_PASS \u5176\u4e2d\uff0c\u66ff\u6362 PLACEMENT_DBPASS \u4e3a placement \u6570\u636e\u5e93\u7684\u5bc6\u7801\uff0c\u66ff\u6362 PLACEMENT_PASS \u4e3a placement \u7528\u6237\u7684\u5bc6\u7801\u3002 \u540c\u6b65\u6570\u636e\u5e93\uff1a su -s /bin/sh -c \"placement-manage db sync\" placement \u542f\u52a8httpd\u670d\u52a1\uff1a systemctl restart httpd \u9a8c\u8bc1 \u6267\u884c\u5982\u4e0b\u547d\u4ee4\uff0c\u6267\u884c\u72b6\u6001\u68c0\u67e5\uff1a . admin-openrc placement-status upgrade check \u5b89\u88c5osc-placement\uff0c\u5217\u51fa\u53ef\u7528\u7684\u8d44\u6e90\u7c7b\u522b\u53ca\u7279\u6027\uff1a yum install python3-osc-placement openstack --os-placement-api-version 1.2 resource class list --sort-column name openstack --os-placement-api-version 1.6 trait list --sort-column name Nova \u5b89\u88c5 \u00b6 \u521b\u5efa\u6570\u636e\u5e93\u3001\u670d\u52a1\u51ed\u8bc1\u548c API \u7aef\u70b9 \u521b\u5efa\u6570\u636e\u5e93\uff1a mysql -u root -p (CTL) MariaDB [(none)]> CREATE DATABASE nova_api; MariaDB [(none)]> CREATE DATABASE nova; MariaDB [(none)]> CREATE DATABASE nova_cell0; MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \\ IDENTIFIED BY 'NOVA_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \\ IDENTIFIED BY 'NOVA_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \\ IDENTIFIED BY 'NOVA_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \\ IDENTIFIED BY 'NOVA_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \\ IDENTIFIED BY 'NOVA_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \\ IDENTIFIED BY 'NOVA_DBPASS'; MariaDB [(none)]> exit \u6ce8\u610f \u66ff\u6362NOVA_DBPASS\uff0c\u4e3anova\u6570\u636e\u5e93\u8bbe\u7f6e\u5bc6\u7801 source ~/.admin-openrc (CTL) \u521b\u5efanova\u670d\u52a1\u51ed\u8bc1: openstack user create --domain default --password-prompt nova (CTL) openstack role add --project service --user nova admin (CTL) openstack service create --name nova --description \"OpenStack Compute\" compute (CTL) \u521b\u5efanova API\u7aef\u70b9\uff1a openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1 (CTL) openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1 (CTL) openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1 (CTL) \u5b89\u88c5\u8f6f\u4ef6\u5305 yum install openstack-nova-api openstack-nova-conductor \\ (CTL) openstack-nova-novncproxy openstack-nova-scheduler yum install openstack-nova-compute (CPT) \u6ce8\u610f \u5982\u679c\u4e3aarm64\u7ed3\u6784\uff0c\u8fd8\u9700\u8981\u6267\u884c\u4ee5\u4e0b\u547d\u4ee4 yum install edk2-aarch64 (CPT) \u914d\u7f6enova\u76f8\u5173\u914d\u7f6e vim /etc/nova/nova.conf [DEFAULT] enabled_apis = osapi_compute,metadata transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/ my_ip = 10.0.0.1 use_neutron = true firewall_driver = nova.virt.firewall.NoopFirewallDriver compute_driver=libvirt.LibvirtDriver (CPT) instances_path = /var/lib/nova/instances/ (CPT) lock_path = /var/lib/nova/tmp (CPT) [api_database] connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api (CTL) [database] connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova (CTL) [api] auth_strategy = 
keystone [keystone_authtoken] www_authenticate_uri = http://controller:5000/ auth_url = http://controller:5000/ memcached_servers = controller:11211 auth_type = password project_domain_name = Default user_domain_name = Default project_name = service username = nova password = NOVA_PASS [vnc] enabled = true server_listen = $my_ip server_proxyclient_address = $my_ip novncproxy_base_url = http://controller:6080/vnc_auto.html (CPT) [libvirt] virt_type = qemu (CPT) cpu_mode = custom (CPT) cpu_model = cortex-a72 (CPT) [glance] api_servers = http://controller:9292 [oslo_concurrency] lock_path = /var/lib/nova/tmp (CTL) [placement] region_name = RegionOne project_domain_name = Default project_name = service auth_type = password user_domain_name = Default auth_url = http://controller:5000/v3 username = placement password = PLACEMENT_PASS [neutron] auth_url = http://controller:5000 auth_type = password project_domain_name = default user_domain_name = default region_name = RegionOne project_name = service username = neutron password = NEUTRON_PASS service_metadata_proxy = true (CTL) metadata_proxy_shared_secret = METADATA_SECRET (CTL) \u89e3\u91ca [default]\u90e8\u5206\uff0c\u542f\u7528\u8ba1\u7b97\u548c\u5143\u6570\u636e\u7684API\uff0c\u914d\u7f6eRabbitMQ\u6d88\u606f\u961f\u5217\u5165\u53e3\uff0c\u914d\u7f6emy_ip\uff0c\u542f\u7528\u7f51\u7edc\u670d\u52a1neutron\uff1b [api_database] [database]\u90e8\u5206\uff0c\u914d\u7f6e\u6570\u636e\u5e93\u5165\u53e3\uff1b [api] [keystone_authtoken]\u90e8\u5206\uff0c\u914d\u7f6e\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u5165\u53e3\uff1b [vnc]\u90e8\u5206\uff0c\u542f\u7528\u5e76\u914d\u7f6e\u8fdc\u7a0b\u63a7\u5236\u53f0\u5165\u53e3\uff1b [glance]\u90e8\u5206\uff0c\u914d\u7f6e\u955c\u50cf\u670d\u52a1API\u7684\u5730\u5740\uff1b [oslo_concurrency]\u90e8\u5206\uff0c\u914d\u7f6elock path\uff1b [placement]\u90e8\u5206\uff0c\u914d\u7f6eplacement\u670d\u52a1\u7684\u5165\u53e3\u3002 \u6ce8\u610f \u66ff\u6362 RABBIT_PASS \u4e3a RabbitMQ \u4e2d openstack \u8d26\u6237\u7684\u5bc6\u7801\uff1b \u914d\u7f6e my_ip \u4e3a\u63a7\u5236\u8282\u70b9\u7684\u7ba1\u7406IP\u5730\u5740\uff1b \u66ff\u6362 NOVA_DBPASS \u4e3anova\u6570\u636e\u5e93\u7684\u5bc6\u7801\uff1b \u66ff\u6362 NOVA_PASS \u4e3anova\u7528\u6237\u7684\u5bc6\u7801\uff1b \u66ff\u6362 PLACEMENT_PASS \u4e3aplacement\u7528\u6237\u7684\u5bc6\u7801\uff1b \u66ff\u6362 NEUTRON_PASS \u4e3aneutron\u7528\u6237\u7684\u5bc6\u7801\uff1b \u66ff\u6362 METADATA_SECRET \u4e3a\u5408\u9002\u7684\u5143\u6570\u636e\u4ee3\u7406secret\u3002 \u989d\u5916 \u786e\u5b9a\u662f\u5426\u652f\u6301\u865a\u62df\u673a\u786c\u4ef6\u52a0\u901f\uff08x86\u67b6\u6784\uff09\uff1a egrep -c '(vmx|svm)' /proc/cpuinfo (CPT) \u5982\u679c\u8fd4\u56de\u503c\u4e3a0\u5219\u4e0d\u652f\u6301\u786c\u4ef6\u52a0\u901f\uff0c\u9700\u8981\u914d\u7f6elibvirt\u4f7f\u7528QEMU\u800c\u4e0d\u662fKVM\uff1a vim /etc/nova/nova.conf (CPT) [libvirt] virt_type = qemu \u5982\u679c\u8fd4\u56de\u503c\u4e3a1\u6216\u66f4\u5927\u7684\u503c\uff0c\u5219\u652f\u6301\u786c\u4ef6\u52a0\u901f\uff0c\u4e0d\u9700\u8981\u8fdb\u884c\u989d\u5916\u7684\u914d\u7f6e \u6ce8\u610f \u5982\u679c\u4e3aarm64\u7ed3\u6784\uff0c\u8fd8\u9700\u8981\u6267\u884c\u4ee5\u4e0b\u547d\u4ee4 vim /etc/libvirt/qemu.conf nvram = [\"/usr/share/AAVMF/AAVMF_CODE.fd: \\ /usr/share/AAVMF/AAVMF_VARS.fd\", \\ \"/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw: \\ /usr/share/edk2/aarch64/vars-template-pflash.raw\"] vim /etc/qemu/firmware/edk2-aarch64.json { \"description\": \"UEFI firmware for ARM64 virtual machines\", \"interface-types\": [ \"uefi\" ], 
\"mapping\": { \"device\": \"flash\", \"executable\": { \"filename\": \"/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw\", \"format\": \"raw\" }, \"nvram-template\": { \"filename\": \"/usr/share/edk2/aarch64/vars-template-pflash.raw\", \"format\": \"raw\" } }, \"targets\": [ { \"architecture\": \"aarch64\", \"machines\": [ \"virt-*\" ] } ], \"features\": [ ], \"tags\": [ ] } (CPT) \u540c\u6b65\u6570\u636e\u5e93 \u540c\u6b65nova-api\u6570\u636e\u5e93\uff1a su -s /bin/sh -c \"nova-manage api_db sync\" nova (CTL) \u6ce8\u518ccell0\u6570\u636e\u5e93\uff1a su -s /bin/sh -c \"nova-manage cell_v2 map_cell0\" nova (CTL) \u521b\u5efacell1 cell\uff1a su -s /bin/sh -c \"nova-manage cell_v2 create_cell --name=cell1 --verbose\" nova (CTL) \u540c\u6b65nova\u6570\u636e\u5e93\uff1a su -s /bin/sh -c \"nova-manage db sync\" nova (CTL) \u9a8c\u8bc1cell0\u548ccell1\u6ce8\u518c\u6b63\u786e\uff1a su -s /bin/sh -c \"nova-manage cell_v2 list_cells\" nova (CTL) \u6dfb\u52a0\u8ba1\u7b97\u8282\u70b9\u5230openstack\u96c6\u7fa4 su -s /bin/sh -c \"nova-manage cell_v2 discover_hosts --verbose\" nova (CPT) \u542f\u52a8\u670d\u52a1 systemctl enable \\ (CTL) openstack-nova-api.service \\ openstack-nova-scheduler.service \\ openstack-nova-conductor.service \\ openstack-nova-novncproxy.service systemctl start \\ (CTL) openstack-nova-api.service \\ openstack-nova-scheduler.service \\ openstack-nova-conductor.service \\ openstack-nova-novncproxy.service systemctl enable libvirtd.service openstack-nova-compute.service (CPT) systemctl start libvirtd.service openstack-nova-compute.service (CPT) \u9a8c\u8bc1 source ~/.admin-openrc (CTL) \u5217\u51fa\u670d\u52a1\u7ec4\u4ef6\uff0c\u9a8c\u8bc1\u6bcf\u4e2a\u6d41\u7a0b\u90fd\u6210\u529f\u542f\u52a8\u548c\u6ce8\u518c\uff1a openstack compute service list (CTL) \u5217\u51fa\u8eab\u4efd\u670d\u52a1\u4e2d\u7684API\u7aef\u70b9\uff0c\u9a8c\u8bc1\u4e0e\u8eab\u4efd\u670d\u52a1\u7684\u8fde\u63a5\uff1a openstack catalog list (CTL) \u5217\u51fa\u955c\u50cf\u670d\u52a1\u4e2d\u7684\u955c\u50cf\uff0c\u9a8c\u8bc1\u4e0e\u955c\u50cf\u670d\u52a1\u7684\u8fde\u63a5\uff1a openstack image list (CTL) \u68c0\u67e5cells\u662f\u5426\u8fd0\u4f5c\u6210\u529f\uff0c\u4ee5\u53ca\u5176\u4ed6\u5fc5\u8981\u6761\u4ef6\u662f\u5426\u5df2\u5177\u5907\u3002 nova-status upgrade check (CTL) Neutron \u5b89\u88c5 \u00b6 \u521b\u5efa\u6570\u636e\u5e93\u3001\u670d\u52a1\u51ed\u8bc1\u548c API \u7aef\u70b9 \u521b\u5efa\u6570\u636e\u5e93\uff1a mysql -u root -p (CTL) MariaDB [(none)]> CREATE DATABASE neutron; MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \\ IDENTIFIED BY 'NEUTRON_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \\ IDENTIFIED BY 'NEUTRON_DBPASS'; MariaDB [(none)]> exit \u6ce8\u610f \u66ff\u6362 NEUTRON_DBPASS \u4e3a neutron \u6570\u636e\u5e93\u8bbe\u7f6e\u5bc6\u7801\u3002 source ~/.admin-openrc (CTL) \u521b\u5efaneutron\u670d\u52a1\u51ed\u8bc1 openstack user create --domain default --password-prompt neutron (CTL) openstack role add --project service --user neutron admin (CTL) openstack service create --name neutron --description \"OpenStack Networking\" network (CTL) \u521b\u5efaNeutron\u670d\u52a1API\u7aef\u70b9\uff1a openstack endpoint create --region RegionOne network public http://controller:9696 (CTL) openstack endpoint create --region RegionOne network internal http://controller:9696 (CTL) openstack endpoint create --region RegionOne network admin http://controller:9696 (CTL) \u5b89\u88c5\u8f6f\u4ef6\u5305\uff1a yum install openstack-neutron 
openstack-neutron-linuxbridge ebtables ipset \\ (CTL) openstack-neutron-ml2 yum install openstack-neutron-linuxbridge ebtables ipset (CPT) \u914d\u7f6eneutron\u76f8\u5173\u914d\u7f6e\uff1a \u914d\u7f6e\u4e3b\u4f53\u914d\u7f6e vim /etc/neutron/neutron.conf [database] connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron (CTL) [DEFAULT] core_plugin = ml2 (CTL) service_plugins = router (CTL) allow_overlapping_ips = true (CTL) transport_url = rabbit://openstack:RABBIT_PASS@controller auth_strategy = keystone notify_nova_on_port_status_changes = true (CTL) notify_nova_on_port_data_changes = true (CTL) api_workers = 3 (CTL) [keystone_authtoken] www_authenticate_uri = http://controller:5000 auth_url = http://controller:5000 memcached_servers = controller:11211 auth_type = password project_domain_name = Default user_domain_name = Default project_name = service username = neutron password = NEUTRON_PASS [nova] auth_url = http://controller:5000 (CTL) auth_type = password (CTL) project_domain_name = Default (CTL) user_domain_name = Default (CTL) region_name = RegionOne (CTL) project_name = service (CTL) username = nova (CTL) password = NOVA_PASS (CTL) [oslo_concurrency] lock_path = /var/lib/neutron/tmp \u89e3\u91ca [database]\u90e8\u5206\uff0c\u914d\u7f6e\u6570\u636e\u5e93\u5165\u53e3\uff1b [default]\u90e8\u5206\uff0c\u542f\u7528ml2\u63d2\u4ef6\u548crouter\u63d2\u4ef6\uff0c\u5141\u8bb8ip\u5730\u5740\u91cd\u53e0\uff0c\u914d\u7f6eRabbitMQ\u6d88\u606f\u961f\u5217\u5165\u53e3\uff1b [default] [keystone]\u90e8\u5206\uff0c\u914d\u7f6e\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u5165\u53e3\uff1b [default] [nova]\u90e8\u5206\uff0c\u914d\u7f6e\u7f51\u7edc\u6765\u901a\u77e5\u8ba1\u7b97\u7f51\u7edc\u62d3\u6251\u7684\u53d8\u5316\uff1b [oslo_concurrency]\u90e8\u5206\uff0c\u914d\u7f6elock path\u3002 \u6ce8\u610f \u66ff\u6362 NEUTRON_DBPASS \u4e3a neutron \u6570\u636e\u5e93\u7684\u5bc6\u7801\uff1b \u66ff\u6362 RABBIT_PASS \u4e3a RabbitMQ\u4e2dopenstack \u8d26\u6237\u7684\u5bc6\u7801\uff1b \u66ff\u6362 NEUTRON_PASS \u4e3a neutron \u7528\u6237\u7684\u5bc6\u7801\uff1b \u66ff\u6362 NOVA_PASS \u4e3a nova \u7528\u6237\u7684\u5bc6\u7801\u3002 \u914d\u7f6eML2\u63d2\u4ef6\uff1a vim /etc/neutron/plugins/ml2/ml2_conf.ini [ml2] type_drivers = flat,vlan,vxlan tenant_network_types = vxlan mechanism_drivers = linuxbridge,l2population extension_drivers = port_security [ml2_type_flat] flat_networks = provider [ml2_type_vxlan] vni_ranges = 1:1000 [securitygroup] enable_ipset = true \u521b\u5efa/etc/neutron/plugin.ini\u7684\u7b26\u53f7\u94fe\u63a5 ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini \u6ce8\u610f [ml2]\u90e8\u5206\uff0c\u542f\u7528 flat\u3001vlan\u3001vxlan \u7f51\u7edc\uff0c\u542f\u7528 linuxbridge \u53ca l2population \u673a\u5236\uff0c\u542f\u7528\u7aef\u53e3\u5b89\u5168\u6269\u5c55\u9a71\u52a8\uff1b [ml2_type_flat]\u90e8\u5206\uff0c\u914d\u7f6e flat \u7f51\u7edc\u4e3a provider \u865a\u62df\u7f51\u7edc\uff1b [ml2_type_vxlan]\u90e8\u5206\uff0c\u914d\u7f6e VXLAN \u7f51\u7edc\u6807\u8bc6\u7b26\u8303\u56f4\uff1b [securitygroup]\u90e8\u5206\uff0c\u914d\u7f6e\u5141\u8bb8 ipset\u3002 \u8865\u5145 l2 \u7684\u5177\u4f53\u914d\u7f6e\u53ef\u4ee5\u6839\u636e\u7528\u6237\u9700\u6c42\u81ea\u884c\u4fee\u6539\uff0c\u672c\u6587\u4f7f\u7528\u7684\u662fprovider network + linuxbridge \u914d\u7f6e Linux bridge \u4ee3\u7406\uff1a vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini [linux_bridge] physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME [vxlan] enable_vxlan = true local_ip = 
OVERLAY_INTERFACE_IP_ADDRESS
l2_population = true

[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

Explanation:
the [linux_bridge] section maps the provider virtual network to a physical network interface;
the [vxlan] section enables the VXLAN overlay network, sets the IP address of the physical interface that handles overlay traffic, and enables layer-2 population;
the [securitygroup] section enables security groups and configures the Linux bridge iptables firewall driver.

Note:
replace PROVIDER_INTERFACE_NAME with the physical network interface;
replace OVERLAY_INTERFACE_IP_ADDRESS with the management IP address of the controller node.

Configure the Layer-3 agent:

vim /etc/neutron/l3_agent.ini (CTL)
[DEFAULT]
interface_driver = linuxbridge

Explanation: the [DEFAULT] section sets the interface driver to linuxbridge.

Configure the DHCP agent:

vim /etc/neutron/dhcp_agent.ini (CTL)
[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true

Explanation: the [DEFAULT] section sets the linuxbridge interface driver and the Dnsmasq DHCP driver, and enables isolated metadata.

Configure the metadata agent:

vim /etc/neutron/metadata_agent.ini (CTL)
[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = METADATA_SECRET

Explanation: the [DEFAULT] section sets the metadata host and the shared secret.
Note: replace METADATA_SECRET with a suitable metadata proxy secret.

Configure the Nova side:

vim /etc/nova/nova.conf
[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = Default
user_domain_name = Default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
service_metadata_proxy = true (CTL)
metadata_proxy_shared_secret = METADATA_SECRET (CTL)

Explanation: the [neutron] section sets the access parameters, enables the metadata proxy and configures the secret.
Note:
replace NEUTRON_PASS with the password of the neutron user;
replace METADATA_SECRET with a suitable metadata proxy secret.

Synchronize the database:

su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
--config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

Restart the Compute API service:

systemctl restart openstack-nova-api.service

Start the networking services:

systemctl enable neutron-server.service neutron-linuxbridge-agent.service \ (CTL)
neutron-dhcp-agent.service neutron-metadata-agent.service
systemctl enable neutron-l3-agent.service (CTL)
systemctl restart openstack-nova-api.service neutron-server.service \ (CTL)
neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
neutron-metadata-agent.service neutron-l3-agent.service

systemctl enable neutron-linuxbridge-agent.service (CPT)
systemctl restart neutron-linuxbridge-agent.service openstack-nova-compute.service (CPT)

Verification

Verify that the neutron agents started successfully:

openstack network agent list

Cinder Installation ¶
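The storage-node steps later in this section create an LVM volume group on a spare disk (the examples use /dev/vdb) and export an NFS share for backups. An optional pre-check on the storage node, where the device name is taken from that example and is therefore only an assumption about your environment:

```shell
# Confirm the spare block device exists and carries no data you still need (STG)
lsblk /dev/vdb

# Confirm the LVM userspace tools are present before running pvcreate/vgcreate below
rpm -q lvm2 || echo "lvm2 is not installed yet; it is installed in the package step below"
```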
\u521b\u5efa\u6570\u636e\u5e93\u3001\u670d\u52a1\u51ed\u8bc1\u548c API \u7aef\u70b9 \u521b\u5efa\u6570\u636e\u5e93\uff1a mysql -u root -p MariaDB [(none)]> CREATE DATABASE cinder; MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \\ IDENTIFIED BY 'CINDER_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \\ IDENTIFIED BY 'CINDER_DBPASS'; MariaDB [(none)]> exit \u6ce8\u610f \u66ff\u6362 CINDER_DBPASS \u4e3acinder\u6570\u636e\u5e93\u8bbe\u7f6e\u5bc6\u7801\u3002 source ~/.admin-openrc \u521b\u5efacinder\u670d\u52a1\u51ed\u8bc1\uff1a openstack user create --domain default --password-prompt cinder openstack role add --project service --user cinder admin openstack service create --name cinderv2 --description \"OpenStack Block Storage\" volumev2 openstack service create --name cinderv3 --description \"OpenStack Block Storage\" volumev3 \u521b\u5efa\u5757\u5b58\u50a8\u670d\u52a1API\u7aef\u70b9\uff1a openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\\(project_id\\)s openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\\(project_id\\)s openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\\(project_id\\)s openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\\(project_id\\)s openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\\(project_id\\)s openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\\(project_id\\)s \u5b89\u88c5\u8f6f\u4ef6\u5305\uff1a yum install openstack-cinder-api openstack-cinder-scheduler (CTL) yum install lvm2 device-mapper-persistent-data scsi-target-utils rpcbind nfs-utils \\ (STG) openstack-cinder-volume openstack-cinder-backup \u51c6\u5907\u5b58\u50a8\u8bbe\u5907\uff0c\u4ee5\u4e0b\u4ec5\u4e3a\u793a\u4f8b\uff1a pvcreate /dev/vdb vgcreate cinder-volumes /dev/vdb vim /etc/lvm/lvm.conf devices { ... 
filter = [ \"a/vdb/\", \"r/.*/\"] \u89e3\u91ca \u5728devices\u90e8\u5206\uff0c\u6dfb\u52a0\u8fc7\u6ee4\u4ee5\u63a5\u53d7/dev/vdb\u8bbe\u5907\u62d2\u7edd\u5176\u4ed6\u8bbe\u5907\u3002 \u51c6\u5907NFS mkdir -p /root/cinder/backup cat << EOF >> /etc/export /root/cinder/backup 192.168.1.0/24(rw,sync,no_root_squash,no_all_squash) EOF \u914d\u7f6ecinder\u76f8\u5173\u914d\u7f6e\uff1a vim /etc/cinder/cinder.conf [DEFAULT] transport_url = rabbit://openstack:RABBIT_PASS@controller auth_strategy = keystone my_ip = 10.0.0.11 enabled_backends = lvm (STG) backup_driver=cinder.backup.drivers.nfs.NFSBackupDriver (STG) backup_share=HOST:PATH (STG) [database] connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder [keystone_authtoken] www_authenticate_uri = http://controller:5000 auth_url = http://controller:5000 memcached_servers = controller:11211 auth_type = password project_domain_name = Default user_domain_name = Default project_name = service username = cinder password = CINDER_PASS [oslo_concurrency] lock_path = /var/lib/cinder/tmp [lvm] volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver (STG) volume_group = cinder-volumes (STG) iscsi_protocol = iscsi (STG) iscsi_helper = tgtadm (STG) \u89e3\u91ca [database]\u90e8\u5206\uff0c\u914d\u7f6e\u6570\u636e\u5e93\u5165\u53e3\uff1b [DEFAULT]\u90e8\u5206\uff0c\u914d\u7f6eRabbitMQ\u6d88\u606f\u961f\u5217\u5165\u53e3\uff0c\u914d\u7f6emy_ip\uff1b [DEFAULT] [keystone_authtoken]\u90e8\u5206\uff0c\u914d\u7f6e\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u5165\u53e3\uff1b [oslo_concurrency]\u90e8\u5206\uff0c\u914d\u7f6elock path\u3002 \u6ce8\u610f \u66ff\u6362 CINDER_DBPASS \u4e3a cinder \u6570\u636e\u5e93\u7684\u5bc6\u7801\uff1b \u66ff\u6362 RABBIT_PASS \u4e3a RabbitMQ \u4e2d openstack \u8d26\u6237\u7684\u5bc6\u7801\uff1b \u914d\u7f6e my_ip \u4e3a\u63a7\u5236\u8282\u70b9\u7684\u7ba1\u7406 IP \u5730\u5740\uff1b \u66ff\u6362 CINDER_PASS \u4e3a cinder \u7528\u6237\u7684\u5bc6\u7801\uff1b \u66ff\u6362 HOST:PATH \u4e3a NFS\u7684HOSTIP\u548c\u5171\u4eab\u8def\u5f84\uff1b \u540c\u6b65\u6570\u636e\u5e93\uff1a su -s /bin/sh -c \"cinder-manage db sync\" cinder (CTL) \u914d\u7f6enova\uff1a vim /etc/nova/nova.conf (CTL) [cinder] os_region_name = RegionOne \u91cd\u542f\u8ba1\u7b97API\u670d\u52a1 systemctl restart openstack-nova-api.service \u542f\u52a8cinder\u670d\u52a1 systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service (CTL) systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service (CTL) systemctl enable rpcbind.service nfs-server.service tgtd.service iscsid.service \\ (STG) openstack-cinder-volume.service \\ openstack-cinder-backup.service systemctl start rpcbind.service nfs-server.service tgtd.service iscsid.service \\ (STG) openstack-cinder-volume.service \\ openstack-cinder-backup.service \u6ce8\u610f \u5f53cinder\u4f7f\u7528tgtadm\u7684\u65b9\u5f0f\u6302\u5377\u7684\u65f6\u5019\uff0c\u8981\u4fee\u6539/etc/tgt/tgtd.conf\uff0c\u5185\u5bb9\u5982\u4e0b\uff0c\u4fdd\u8bc1tgtd\u53ef\u4ee5\u53d1\u73b0cinder-volume\u7684iscsi target\u3002 include /var/lib/cinder/volumes/* \u9a8c\u8bc1 source ~/.admin-openrc openstack volume service list horizon \u5b89\u88c5 \u00b6 \u5b89\u88c5\u8f6f\u4ef6\u5305 yum install openstack-dashboard \u4fee\u6539\u6587\u4ef6 \u4fee\u6539\u53d8\u91cf vim /etc/openstack-dashboard/local_settings OPENSTACK_HOST = \"controller\" ALLOWED_HOSTS = ['*', ] SESSION_ENGINE = 'django.contrib.sessions.backends.cache' CACHES = { 'default': { 'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache', 
'LOCATION': 'controller:11211', } } OPENSTACK_KEYSTONE_URL = \"http://%s:5000/v3\" % OPENSTACK_HOST OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = \"Default\" OPENSTACK_KEYSTONE_DEFAULT_ROLE = \"user\" OPENSTACK_API_VERSIONS = { \"identity\": 3, \"image\": 2, \"volume\": 3, } \u91cd\u542f httpd \u670d\u52a1 systemctl restart httpd.service memcached.service \u9a8c\u8bc1 \u6253\u5f00\u6d4f\u89c8\u5668\uff0c\u8f93\u5165\u7f51\u5740 http://HOSTIP/dashboard/ \uff0c\u767b\u5f55 horizon\u3002 \u6ce8\u610f \u66ff\u6362HOSTIP\u4e3a\u63a7\u5236\u8282\u70b9\u7ba1\u7406\u5e73\u9762IP\u5730\u5740 Tempest \u5b89\u88c5 \u00b6 Tempest\u662fOpenStack\u7684\u96c6\u6210\u6d4b\u8bd5\u670d\u52a1\uff0c\u5982\u679c\u7528\u6237\u9700\u8981\u5168\u9762\u81ea\u52a8\u5316\u6d4b\u8bd5\u5df2\u5b89\u88c5\u7684OpenStack\u73af\u5883\u7684\u529f\u80fd,\u5219\u63a8\u8350\u4f7f\u7528\u8be5\u7ec4\u4ef6\u3002\u5426\u5219\uff0c\u53ef\u4ee5\u4e0d\u7528\u5b89\u88c5\u3002 \u5b89\u88c5Tempest yum install openstack-tempest \u521d\u59cb\u5316\u76ee\u5f55 tempest init mytest \u4fee\u6539\u914d\u7f6e\u6587\u4ef6\u3002 cd mytest vi etc/tempest.conf tempest.conf\u4e2d\u9700\u8981\u914d\u7f6e\u5f53\u524dOpenStack\u73af\u5883\u7684\u4fe1\u606f\uff0c\u5177\u4f53\u5185\u5bb9\u53ef\u4ee5\u53c2\u8003 \u5b98\u65b9\u793a\u4f8b \u6267\u884c\u6d4b\u8bd5 tempest run \u5b89\u88c5tempest\u6269\u5c55\uff08\u53ef\u9009\uff09 OpenStack\u5404\u4e2a\u670d\u52a1\u672c\u8eab\u4e5f\u63d0\u4f9b\u4e86\u4e00\u4e9btempest\u6d4b\u8bd5\u5305\uff0c\u7528\u6237\u53ef\u4ee5\u5b89\u88c5\u8fd9\u4e9b\u5305\u6765\u4e30\u5bcctempest\u7684\u6d4b\u8bd5\u5185\u5bb9\u3002\u5728Wallaby\u4e2d\uff0c\u6211\u4eec\u63d0\u4f9b\u4e86Cinder\u3001Glance\u3001Keystone\u3001Ironic\u3001Trove\u7684\u6269\u5c55\u6d4b\u8bd5\uff0c\u7528\u6237\u53ef\u4ee5\u6267\u884c\u5982\u4e0b\u547d\u4ee4\u8fdb\u884c\u5b89\u88c5\u4f7f\u7528\uff1a yum install python3-cinder-tempest-plugin python3-glance-tempest-plugin python3-ironic-tempest-plugin python3-keystone-tempest-plugin python3-trove-tempest-plugin Ironic \u5b89\u88c5 \u00b6 Ironic\u662fOpenStack\u7684\u88f8\u91d1\u5c5e\u670d\u52a1\uff0c\u5982\u679c\u7528\u6237\u9700\u8981\u8fdb\u884c\u88f8\u673a\u90e8\u7f72\u5219\u63a8\u8350\u4f7f\u7528\u8be5\u7ec4\u4ef6\u3002\u5426\u5219\uff0c\u53ef\u4ee5\u4e0d\u7528\u5b89\u88c5\u3002 \u8bbe\u7f6e\u6570\u636e\u5e93 \u88f8\u91d1\u5c5e\u670d\u52a1\u5728\u6570\u636e\u5e93\u4e2d\u5b58\u50a8\u4fe1\u606f\uff0c\u521b\u5efa\u4e00\u4e2a ironic \u7528\u6237\u53ef\u4ee5\u8bbf\u95ee\u7684 ironic \u6570\u636e\u5e93\uff0c\u66ff\u6362 IRONIC_DBPASSWORD \u4e3a\u5408\u9002\u7684\u5bc6\u7801 mysql -u root -p MariaDB [(none)]> CREATE DATABASE ironic CHARACTER SET utf8; MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'localhost' \\ IDENTIFIED BY 'IRONIC_DBPASSWORD'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'%' \\ IDENTIFIED BY 'IRONIC_DBPASSWORD'; \u521b\u5efa\u670d\u52a1\u7528\u6237\u8ba4\u8bc1 1\u3001\u521b\u5efaBare Metal\u670d\u52a1\u7528\u6237 openstack user create --password IRONIC_PASSWORD \\ --email ironic@example.com ironic openstack role add --project service --user ironic admin openstack service create --name ironic --description \"Ironic baremetal provisioning service\" baremetal openstack service create --name ironic-inspector --description \"Ironic inspector baremetal provisioning service\" baremetal-introspection openstack user create --password IRONIC_INSPECTOR_PASSWORD --email ironic_inspector@example.com ironic_inspector openstack role add 
--project service --user ironic-inspector admin 2\u3001\u521b\u5efaBare Metal\u670d\u52a1\u8bbf\u95ee\u5165\u53e3 openstack endpoint create --region RegionOne baremetal admin http://$IRONIC_NODE:6385 openstack endpoint create --region RegionOne baremetal public http://$IRONIC_NODE:6385 openstack endpoint create --region RegionOne baremetal internal http://$IRONIC_NODE:6385 openstack endpoint create --region RegionOne baremetal-introspection internal http://172.20.19.13:5050/v1 openstack endpoint create --region RegionOne baremetal-introspection public http://172.20.19.13:5050/v1 openstack endpoint create --region RegionOne baremetal-introspection admin http://172.20.19.13:5050/v1 \u914d\u7f6eironic-api\u670d\u52a1 \u914d\u7f6e\u6587\u4ef6\u8def\u5f84/etc/ironic/ironic.conf 1\u3001\u901a\u8fc7 connection \u9009\u9879\u914d\u7f6e\u6570\u636e\u5e93\u7684\u4f4d\u7f6e\uff0c\u5982\u4e0b\u6240\u793a\uff0c\u66ff\u6362 IRONIC_DBPASSWORD \u4e3a ironic \u7528\u6237\u7684\u5bc6\u7801\uff0c\u66ff\u6362 DB_IP \u4e3aDB\u670d\u52a1\u5668\u6240\u5728\u7684IP\u5730\u5740\uff1a [database] # The SQLAlchemy connection string used to connect to the # database (string value) connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic 2\u3001\u901a\u8fc7\u4ee5\u4e0b\u9009\u9879\u914d\u7f6eironic-api\u670d\u52a1\u4f7f\u7528RabbitMQ\u6d88\u606f\u4ee3\u7406\uff0c\u66ff\u6362 RPC_* \u4e3aRabbitMQ\u7684\u8be6\u7ec6\u5730\u5740\u548c\u51ed\u8bc1 [DEFAULT] # A URL representing the messaging driver to use and its full # configuration. (string value) transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/ \u7528\u6237\u4e5f\u53ef\u81ea\u884c\u4f7f\u7528json-rpc\u65b9\u5f0f\u66ff\u6362rabbitmq 3\u3001\u914d\u7f6eironic-api\u670d\u52a1\u4f7f\u7528\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u7684\u51ed\u8bc1\uff0c\u66ff\u6362 PUBLIC_IDENTITY_IP \u4e3a\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u5668\u7684\u516c\u5171IP\uff0c\u66ff\u6362 PRIVATE_IDENTITY_IP \u4e3a\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u5668\u7684\u79c1\u6709IP\uff0c\u66ff\u6362 IRONIC_PASSWORD \u4e3a\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u4e2d ironic \u7528\u6237\u7684\u5bc6\u7801\uff1a [DEFAULT] # Authentication strategy used by ironic-api: one of # \"keystone\" or \"noauth\". \"noauth\" should not be used in a # production environment because all authentication will be # disabled. (string value) auth_strategy=keystone host = controller memcache_servers = controller:11211 enabled_network_interfaces = flat,noop,neutron default_network_interface = noop transport_url = rabbit://openstack:RABBITPASSWD@controller:5672/ enabled_hardware_types = ipmi enabled_boot_interfaces = pxe enabled_deploy_interfaces = direct default_deploy_interface = direct enabled_inspect_interfaces = inspector enabled_management_interfaces = ipmitool enabled_power_interfaces = ipmitool enabled_rescue_interfaces = no-rescue,agent isolinux_bin = /usr/share/syslinux/isolinux.bin logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s [keystone_authtoken] # Authentication type to load (string value) auth_type=password # Complete public Identity API endpoint (string value) www_authenticate_uri=http://PUBLIC_IDENTITY_IP:5000 # Complete admin Identity API endpoint. (string value) auth_url=http://PRIVATE_IDENTITY_IP:5000 # Service username. (string value) username=ironic # Service account password. (string value) password=IRONIC_PASSWORD # Service tenant name. 
(string value) project_name=service # Domain name containing project (string value) project_domain_name=Default # User's domain name (string value) user_domain_name=Default [agent] deploy_logs_collect = always deploy_logs_local_path = /var/log/ironic/deploy deploy_logs_storage_backend = local image_download_source = http stream_raw_images = false force_raw_images = false verify_ca = False [oslo_concurrency] [oslo_messaging_notifications] transport_url = rabbit://openstack:123456@172.20.19.25:5672/ topics = notifications driver = messagingv2 [oslo_messaging_rabbit] amqp_durable_queues = True rabbit_ha_queues = True [pxe] ipxe_enabled = false pxe_append_params = nofb nomodeset vga=normal coreos.autologin ipa-insecure=1 image_cache_size = 204800 tftp_root=/var/lib/tftpboot/cephfs/ tftp_master_path=/var/lib/tftpboot/cephfs/master_images [dhcp] dhcp_provider = none 4\u3001\u521b\u5efa\u88f8\u91d1\u5c5e\u670d\u52a1\u6570\u636e\u5e93\u8868 ironic-dbsync --config-file /etc/ironic/ironic.conf create_schema 5\u3001\u91cd\u542fironic-api\u670d\u52a1 sudo systemctl restart openstack-ironic-api \u914d\u7f6eironic-conductor\u670d\u52a1 1\u3001\u66ff\u6362 HOST_IP \u4e3aconductor host\u7684IP [DEFAULT] # IP address of this host. If unset, will determine the IP # programmatically. If unable to do so, will use \"127.0.0.1\". # (string value) my_ip=HOST_IP 2\u3001\u914d\u7f6e\u6570\u636e\u5e93\u7684\u4f4d\u7f6e\uff0cironic-conductor\u5e94\u8be5\u4f7f\u7528\u548cironic-api\u76f8\u540c\u7684\u914d\u7f6e\u3002\u66ff\u6362 IRONIC_DBPASSWORD \u4e3a ironic \u7528\u6237\u7684\u5bc6\u7801\uff0c\u66ff\u6362DB_IP\u4e3aDB\u670d\u52a1\u5668\u6240\u5728\u7684IP\u5730\u5740\uff1a [database] # The SQLAlchemy connection string to use to connect to the # database. (string value) connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic 3\u3001\u901a\u8fc7\u4ee5\u4e0b\u9009\u9879\u914d\u7f6eironic-api\u670d\u52a1\u4f7f\u7528RabbitMQ\u6d88\u606f\u4ee3\u7406\uff0cironic-conductor\u5e94\u8be5\u4f7f\u7528\u548cironic-api\u76f8\u540c\u7684\u914d\u7f6e\uff0c\u66ff\u6362 RPC_* \u4e3aRabbitMQ\u7684\u8be6\u7ec6\u5730\u5740\u548c\u51ed\u8bc1 [DEFAULT] # A URL representing the messaging driver to use and its full # configuration. 
(string value) transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/ \u7528\u6237\u4e5f\u53ef\u81ea\u884c\u4f7f\u7528json-rpc\u65b9\u5f0f\u66ff\u6362rabbitmq 4\u3001\u914d\u7f6e\u51ed\u8bc1\u8bbf\u95ee\u5176\u4ed6OpenStack\u670d\u52a1 \u4e3a\u4e86\u4e0e\u5176\u4ed6OpenStack\u670d\u52a1\u8fdb\u884c\u901a\u4fe1\uff0c\u88f8\u91d1\u5c5e\u670d\u52a1\u5728\u8bf7\u6c42\u5176\u4ed6\u670d\u52a1\u65f6\u9700\u8981\u4f7f\u7528\u670d\u52a1\u7528\u6237\u4e0eOpenStack Identity\u670d\u52a1\u8fdb\u884c\u8ba4\u8bc1\u3002\u8fd9\u4e9b\u7528\u6237\u7684\u51ed\u636e\u5fc5\u987b\u5728\u4e0e\u76f8\u5e94\u670d\u52a1\u76f8\u5173\u7684\u6bcf\u4e2a\u914d\u7f6e\u6587\u4ef6\u4e2d\u8fdb\u884c\u914d\u7f6e\u3002 [neutron] - \u8bbf\u95eeOpenStack\u7f51\u7edc\u670d\u52a1 [glance] - \u8bbf\u95eeOpenStack\u955c\u50cf\u670d\u52a1 [swift] - \u8bbf\u95eeOpenStack\u5bf9\u8c61\u5b58\u50a8\u670d\u52a1 [cinder] - \u8bbf\u95eeOpenStack\u5757\u5b58\u50a8\u670d\u52a1 [inspector] - \u8bbf\u95eeOpenStack\u88f8\u91d1\u5c5eintrospection\u670d\u52a1 [service_catalog] - \u4e00\u4e2a\u7279\u6b8a\u9879\u7528\u4e8e\u4fdd\u5b58\u88f8\u91d1\u5c5e\u670d\u52a1\u4f7f\u7528\u7684\u51ed\u8bc1\uff0c\u8be5\u51ed\u8bc1\u7528\u4e8e\u53d1\u73b0\u6ce8\u518c\u5728OpenStack\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u76ee\u5f55\u4e2d\u7684\u81ea\u5df1\u7684API URL\u7aef\u70b9 \u7b80\u5355\u8d77\u89c1\uff0c\u53ef\u4ee5\u5bf9\u6240\u6709\u670d\u52a1\u4f7f\u7528\u540c\u4e00\u4e2a\u670d\u52a1\u7528\u6237\u3002\u4e3a\u4e86\u5411\u540e\u517c\u5bb9\uff0c\u8be5\u7528\u6237\u5e94\u8be5\u548cironic-api\u670d\u52a1\u7684[keystone_authtoken]\u6240\u914d\u7f6e\u7684\u4e3a\u540c\u4e00\u4e2a\u7528\u6237\u3002\u4f46\u8fd9\u4e0d\u662f\u5fc5\u987b\u7684\uff0c\u4e5f\u53ef\u4ee5\u4e3a\u6bcf\u4e2a\u670d\u52a1\u521b\u5efa\u5e76\u914d\u7f6e\u4e0d\u540c\u7684\u670d\u52a1\u7528\u6237\u3002 \u5728\u4e0b\u9762\u7684\u793a\u4f8b\u4e2d\uff0c\u7528\u6237\u8bbf\u95eeOpenStack\u7f51\u7edc\u670d\u52a1\u7684\u8eab\u4efd\u9a8c\u8bc1\u4fe1\u606f\u914d\u7f6e\u4e3a\uff1a \u7f51\u7edc\u670d\u52a1\u90e8\u7f72\u5728\u540d\u4e3aRegionOne\u7684\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u57df\u4e2d\uff0c\u4ec5\u5728\u670d\u52a1\u76ee\u5f55\u4e2d\u6ce8\u518c\u516c\u5171\u7aef\u70b9\u63a5\u53e3 \u8bf7\u6c42\u65f6\u4f7f\u7528\u7279\u5b9a\u7684CA SSL\u8bc1\u4e66\u8fdb\u884cHTTPS\u8fde\u63a5 \u4e0eironic-api\u670d\u52a1\u914d\u7f6e\u76f8\u540c\u7684\u670d\u52a1\u7528\u6237 \u52a8\u6001\u5bc6\u7801\u8ba4\u8bc1\u63d2\u4ef6\u57fa\u4e8e\u5176\u4ed6\u9009\u9879\u53d1\u73b0\u5408\u9002\u7684\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1API\u7248\u672c [neutron] # Authentication type to load (string value) auth_type = password # Authentication URL (string value) auth_url=https://IDENTITY_IP:5000/ # Username (string value) username=ironic # User's password (string value) password=IRONIC_PASSWORD # Project name to scope to (string value) project_name=service # Domain ID containing project (string value) project_domain_id=default # User's domain id (string value) user_domain_id=default # PEM encoded Certificate Authority to use when verifying # HTTPs connections. (string value) cafile=/opt/stack/data/ca-bundle.pem # The default region_name for endpoint URL discovery. (string # value) region_name = RegionOne # List of interfaces, in order of preference, for endpoint # URL. 
(list value) valid_interfaces=public \u9ed8\u8ba4\u60c5\u51b5\u4e0b\uff0c\u4e3a\u4e86\u4e0e\u5176\u4ed6\u670d\u52a1\u8fdb\u884c\u901a\u4fe1\uff0c\u88f8\u91d1\u5c5e\u670d\u52a1\u4f1a\u5c1d\u8bd5\u901a\u8fc7\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u7684\u670d\u52a1\u76ee\u5f55\u53d1\u73b0\u8be5\u670d\u52a1\u5408\u9002\u7684\u7aef\u70b9\u3002\u5982\u679c\u5e0c\u671b\u5bf9\u4e00\u4e2a\u7279\u5b9a\u670d\u52a1\u4f7f\u7528\u4e00\u4e2a\u4e0d\u540c\u7684\u7aef\u70b9\uff0c\u5219\u5728\u88f8\u91d1\u5c5e\u670d\u52a1\u7684\u914d\u7f6e\u6587\u4ef6\u4e2d\u901a\u8fc7endpoint_override\u9009\u9879\u8fdb\u884c\u6307\u5b9a\uff1a [neutron] ... endpoint_override = 5\u3001\u914d\u7f6e\u5141\u8bb8\u7684\u9a71\u52a8\u7a0b\u5e8f\u548c\u786c\u4ef6\u7c7b\u578b \u901a\u8fc7\u8bbe\u7f6eenabled_hardware_types\u8bbe\u7f6eironic-conductor\u670d\u52a1\u5141\u8bb8\u4f7f\u7528\u7684\u786c\u4ef6\u7c7b\u578b\uff1a [DEFAULT] enabled_hardware_types = ipmi \u914d\u7f6e\u786c\u4ef6\u63a5\u53e3\uff1a enabled_boot_interfaces = pxe enabled_deploy_interfaces = direct,iscsi enabled_inspect_interfaces = inspector enabled_management_interfaces = ipmitool enabled_power_interfaces = ipmitool \u914d\u7f6e\u63a5\u53e3\u9ed8\u8ba4\u503c\uff1a [DEFAULT] default_deploy_interface = direct default_network_interface = neutron \u5982\u679c\u542f\u7528\u4e86\u4efb\u4f55\u4f7f\u7528Direct deploy\u7684\u9a71\u52a8\uff0c\u5fc5\u987b\u5b89\u88c5\u548c\u914d\u7f6e\u955c\u50cf\u670d\u52a1\u7684Swift\u540e\u7aef\u3002Ceph\u5bf9\u8c61\u7f51\u5173(RADOS\u7f51\u5173)\u4e5f\u652f\u6301\u4f5c\u4e3a\u955c\u50cf\u670d\u52a1\u7684\u540e\u7aef\u3002 6\u3001\u91cd\u542fironic-conductor\u670d\u52a1 sudo systemctl restart openstack-ironic-conductor \u914d\u7f6eironic-inspector\u670d\u52a1 \u914d\u7f6e\u6587\u4ef6\u8def\u5f84/etc/ironic-inspector/inspector.conf 1\u3001\u521b\u5efa\u6570\u636e\u5e93 # mysql -u root -p MariaDB [(none)]> CREATE DATABASE ironic_inspector CHARACTER SET utf8; MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic_inspector.* TO 'ironic_inspector'@'localhost' \\ IDENTIFIED BY 'IRONIC_INSPECTOR_DBPASSWORD'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic_inspector.* TO 'ironic_inspector'@'%' \\ IDENTIFIED BY 'IRONIC_INSPECTOR_DBPASSWORD'; 2\u3001\u901a\u8fc7 connection \u9009\u9879\u914d\u7f6e\u6570\u636e\u5e93\u7684\u4f4d\u7f6e\uff0c\u5982\u4e0b\u6240\u793a\uff0c\u66ff\u6362 IRONIC_INSPECTOR_DBPASSWORD \u4e3a ironic_inspector \u7528\u6237\u7684\u5bc6\u7801\uff0c\u66ff\u6362 DB_IP \u4e3aDB\u670d\u52a1\u5668\u6240\u5728\u7684IP\u5730\u5740\uff1a [database] backend = sqlalchemy connection = mysql+pymysql://ironic_inspector:IRONIC_INSPECTOR_DBPASSWORD@DB_IP/ironic_inspector min_pool_size = 100 max_pool_size = 500 pool_timeout = 30 max_retries = 5 max_overflow = 200 db_retry_interval = 2 db_inc_retry_interval = True db_max_retry_interval = 2 db_max_retries = 5 3\u3001\u914d\u7f6e\u6d88\u606f\u5ea6\u5217\u901a\u4fe1\u5730\u5740 [DEFAULT] transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/ 4\u3001\u8bbe\u7f6ekeystone\u8ba4\u8bc1 [DEFAULT] auth_strategy = keystone timeout = 900 rootwrap_config = /etc/ironic-inspector/rootwrap.conf logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_dir = /var/log/ironic-inspector state_path = /var/lib/ironic-inspector use_stderr = False [ironic] api_endpoint = http://IRONIC_API_HOST_ADDRRESS:6385 auth_type = password auth_url = http://PUBLIC_IDENTITY_IP:5000 auth_strategy = keystone 
ironic_url = http://IRONIC_API_HOST_ADDRRESS:6385
os_region = RegionOne
project_name = service
project_domain_name = Default
user_domain_name = Default
username = IRONIC_SERVICE_USER_NAME
password = IRONIC_SERVICE_USER_PASSWORD

[keystone_authtoken]
auth_type = password
auth_url = http://control:5000
www_authenticate_uri = http://control:5000
project_domain_name = default
user_domain_name = default
project_name = service
username = ironic_inspector
password = IRONICPASSWD
region_name = RegionOne
memcache_servers = control:11211
token_cache_time = 300

[processing]
add_ports = active
processing_hooks = $default_processing_hooks,local_link_connection,lldp_basic
ramdisk_logs_dir = /var/log/ironic-inspector/ramdisk
always_store_ramdisk_logs = true
store_data = none
power_off = false

[pxe_filter]
driver = iptables

[capabilities]
boot_mode=True

5. Configure the ironic-inspector dnsmasq service

# Configuration file: /etc/ironic-inspector/dnsmasq.conf
port=0
interface=enp3s0   # replace with the actual listening network interface
dhcp-range=172.20.19.100,172.20.19.110   # replace with the actual DHCP address range
bind-interfaces
enable-tftp
dhcp-match=set:efi,option:client-arch,7
dhcp-match=set:efi,option:client-arch,9
dhcp-match=aarch64, option:client-arch,11
dhcp-boot=tag:aarch64,grubaa64.efi
dhcp-boot=tag:!aarch64,tag:efi,grubx64.efi
dhcp-boot=tag:!aarch64,tag:!efi,pxelinux.0
tftp-root=/tftpboot   # replace with the actual tftpboot directory
log-facility=/var/log/dnsmasq.log

6. Disable DHCP on the subnet of the ironic provisioning network

openstack subnet set --no-dhcp 72426e89-f552-4dc4-9ac7-c4e131ce7f3c

7. Initialize the ironic-inspector database

Run on the controller node:

ironic-inspector-dbsync --config-file /etc/ironic-inspector/inspector.conf upgrade

8. Start the services

systemctl enable --now openstack-ironic-inspector.service
systemctl enable --now openstack-ironic-inspector-dnsmasq.service

Configure the httpd service

Create the httpd root directory used by ironic and set its owner and group; the path must match the http_root option in the [deploy] section of /etc/ironic/ironic.conf.

mkdir -p /var/lib/ironic/httproot
chown ironic.ironic /var/lib/ironic/httproot

Install and configure the httpd service

Install the httpd service (skip if it is already installed):

yum install httpd -y

Create the /etc/httpd/conf.d/openstack-ironic-httpd.conf file with the following content:

Listen 8080
ServerName ironic.openeuler.com
ErrorLog "/var/log/httpd/openstack-ironic-httpd-error_log"
CustomLog "/var/log/httpd/openstack-ironic-httpd-access_log" "%h %l %u %t \"%r\" %>s %b"
DocumentRoot "/var/lib/ironic/httproot"
Options Indexes FollowSymLinks
Require all granted
LogLevel warn
AddDefaultCharset UTF-8
EnableSendfile on

Note that the listening port must match the port given in the http_url option of the [deploy] section in /etc/ironic/ironic.conf.

Restart the httpd service:

systemctl restart httpd

Building the deploy ramdisk image
W\u7248\u7684ramdisk\u955c\u50cf\u652f\u6301\u901a\u8fc7ironic-python-agent\u670d\u52a1\u6216disk-image-builder\u5de5\u5177\u5236\u4f5c\uff0c\u4e5f\u53ef\u4ee5\u4f7f\u7528\u793e\u533a\u6700\u65b0\u7684ironic-python-agent-builder\u3002\u7528\u6237\u4e5f\u53ef\u4ee5\u81ea\u884c\u9009\u62e9\u5176\u4ed6\u5de5\u5177\u5236\u4f5c\u3002 \u82e5\u4f7f\u7528W\u7248\u539f\u751f\u5de5\u5177\uff0c\u5219\u9700\u8981\u5b89\u88c5\u5bf9\u5e94\u7684\u8f6f\u4ef6\u5305\u3002 yum install openstack-ironic-python-agent \u6216\u8005 yum install diskimage-builder \u5177\u4f53\u7684\u4f7f\u7528\u65b9\u6cd5\u53ef\u4ee5\u53c2\u8003 \u5b98\u65b9\u6587\u6863 \u8fd9\u91cc\u4ecb\u7ecd\u4e0b\u4f7f\u7528ironic-python-agent-builder\u6784\u5efaironic\u4f7f\u7528\u7684deploy\u955c\u50cf\u7684\u5b8c\u6574\u8fc7\u7a0b\u3002 \u5b89\u88c5 ironic-python-agent-builder 1. \u5b89\u88c5\u5de5\u5177\uff1a ```shell pip install ironic-python-agent-builder ``` 2. \u4fee\u6539\u4ee5\u4e0b\u6587\u4ef6\u4e2d\u7684python\u89e3\u91ca\u5668\uff1a ```shell /usr/bin/yum /usr/libexec/urlgrabber-ext-down ``` 3. \u5b89\u88c5\u5176\u5b83\u5fc5\u987b\u7684\u5de5\u5177\uff1a ```shell yum install git ``` \u7531\u4e8e`DIB`\u4f9d\u8d56`semanage`\u547d\u4ee4\uff0c\u6240\u4ee5\u5728\u5236\u4f5c\u955c\u50cf\u4e4b\u524d\u786e\u5b9a\u8be5\u547d\u4ee4\u662f\u5426\u53ef\u7528\uff1a`semanage --help`\uff0c\u5982\u679c\u63d0\u793a\u65e0\u6b64\u547d\u4ee4\uff0c\u5b89\u88c5\u5373\u53ef\uff1a ```shell # \u5148\u67e5\u8be2\u9700\u8981\u5b89\u88c5\u54ea\u4e2a\u5305 [root@localhost ~]# yum provides /usr/sbin/semanage \u5df2\u52a0\u8f7d\u63d2\u4ef6\uff1afastestmirror Loading mirror speeds from cached hostfile * base: mirror.vcu.edu * extras: mirror.vcu.edu * updates: mirror.math.princeton.edu policycoreutils-python-2.5-34.el7.aarch64 : SELinux policy core python utilities \u6e90 \uff1abase \u5339\u914d\u6765\u6e90\uff1a \u6587\u4ef6\u540d \uff1a/usr/sbin/semanage # \u5b89\u88c5 [root@localhost ~]# yum install policycoreutils-python ``` \u5236\u4f5c\u955c\u50cf \u5982\u679c\u662f`arm`\u67b6\u6784\uff0c\u9700\u8981\u6dfb\u52a0\uff1a ```shell export ARCH=aarch64 ``` \u57fa\u672c\u7528\u6cd5\uff1a ```shell usage: ironic-python-agent-builder [-h] [-r RELEASE] [-o OUTPUT] [-e ELEMENT] [-b BRANCH] [-v] [--extra-args EXTRA_ARGS] distribution positional arguments: distribution Distribution to use optional arguments: -h, --help show this help message and exit -r RELEASE, --release RELEASE Distribution release to use -o OUTPUT, --output OUTPUT Output base file name -e ELEMENT, --element ELEMENT Additional DIB element to use -b BRANCH, --branch BRANCH If set, override the branch that is used for ironic- python-agent and requirements -v, --verbose Enable verbose logging in diskimage-builder --extra-args EXTRA_ARGS Extra arguments to pass to diskimage-builder ``` \u4e3e\u4f8b\u8bf4\u660e\uff1a ```shell ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky ``` \u5141\u8bb8ssh\u767b\u5f55 \u521d\u59cb\u5316\u73af\u5883\u53d8\u91cf\uff0c\u7136\u540e\u5236\u4f5c\u955c\u50cf\uff1a ```shell export DIB_DEV_USER_USERNAME=ipa \\ export DIB_DEV_USER_PWDLESS_SUDO=yes \\ export DIB_DEV_USER_PASSWORD='123' ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky -e selinux-permissive -e devuser ``` \u6307\u5b9a\u4ee3\u7801\u4ed3\u5e93 \u521d\u59cb\u5316\u5bf9\u5e94\u7684\u73af\u5883\u53d8\u91cf\uff0c\u7136\u540e\u5236\u4f5c\u955c\u50cf\uff1a ```shell # \u6307\u5b9a\u4ed3\u5e93\u5730\u5740\u4ee5\u53ca\u7248\u672c 
DIB_REPOLOCATION_ironic_python_agent=git@172.20.2.149:liuzz/ironic-python-agent.git DIB_REPOREF_ironic_python_agent=origin/develop # \u76f4\u63a5\u4ecegerrit\u4e0aclone\u4ee3\u7801 DIB_REPOLOCATION_ironic_python_agent=https://review.opendev.org/openstack/ironic-python-agent DIB_REPOREF_ironic_python_agent=refs/changes/43/701043/1 ``` \u53c2\u8003\uff1a[source-repositories](https://docs.openstack.org/diskimage-builder/latest/elements/source-repositories/README.html)\u3002 \u6307\u5b9a\u4ed3\u5e93\u5730\u5740\u53ca\u7248\u672c\u9a8c\u8bc1\u6210\u529f\u3002 \u6ce8\u610f \u539f\u751f\u7684openstack\u91cc\u7684pxe\u914d\u7f6e\u6587\u4ef6\u7684\u6a21\u7248\u4e0d\u652f\u6301arm64\u67b6\u6784\uff0c\u9700\u8981\u81ea\u5df1\u5bf9\u539f\u751fopenstack\u4ee3\u7801\u8fdb\u884c\u4fee\u6539\uff1a \u5728W\u7248\u4e2d\uff0c\u793e\u533a\u7684ironic\u4ecd\u7136\u4e0d\u652f\u6301arm64\u4f4d\u7684uefi pxe\u542f\u52a8\uff0c\u8868\u73b0\u4e3a\u751f\u6210\u7684grub.cfg\u6587\u4ef6(\u4e00\u822c\u4f4d\u4e8e/tftpboot/\u4e0b)\u683c\u5f0f\u4e0d\u5bf9\u800c\u5bfc\u81f4pxe\u542f\u52a8\u5931\u8d25\uff0c\u5982\u4e0b\uff1a \u751f\u6210\u7684\u9519\u8bef\u914d\u7f6e\u6587\u4ef6\uff1a \u5982\u4e0a\u56fe\u6240\u793a\uff0carm\u67b6\u6784\u91cc\u5bfb\u627evmlinux\u548cramdisk\u955c\u50cf\u7684\u547d\u4ee4\u5206\u522b\u662flinux\u548cinitrd\uff0c\u4e0a\u56fe\u6240\u793a\u7684\u6807\u7ea2\u547d\u4ee4\u662fx86\u67b6\u6784\u4e0b\u7684uefi pxe\u542f\u52a8\u3002 \u9700\u8981\u7528\u6237\u5bf9\u751f\u6210grub.cfg\u7684\u4ee3\u7801\u903b\u8f91\u81ea\u884c\u4fee\u6539\u3002 ironic\u5411ipa\u53d1\u9001\u67e5\u8be2\u547d\u4ee4\u6267\u884c\u72b6\u6001\u8bf7\u6c42\u7684tls\u62a5\u9519\uff1a w\u7248\u7684ipa\u548cironic\u9ed8\u8ba4\u90fd\u4f1a\u5f00\u542ftls\u8ba4\u8bc1\u7684\u65b9\u5f0f\u5411\u5bf9\u65b9\u53d1\u9001\u8bf7\u6c42\uff0c\u8ddf\u636e\u5b98\u7f51\u7684\u8bf4\u660e\u8fdb\u884c\u5173\u95ed\u5373\u53ef\u3002 \u4fee\u6539ironic\u914d\u7f6e\u6587\u4ef6(/etc/ironic/ironic.conf)\u4e0b\u9762\u7684\u914d\u7f6e\u4e2d\u6dfb\u52a0ipa-insecure=1\uff1a [agent] verify_ca = False [pxe] pxe_append_params = nofb nomodeset vga=normal coreos.autologin ipa-insecure=1 2) ramdisk\u955c\u50cf\u4e2d\u6dfb\u52a0ipa\u914d\u7f6e\u6587\u4ef6/etc/ironic_python_agent/ironic_python_agent.conf\u5e76\u914d\u7f6etls\u7684\u914d\u7f6e\u5982\u4e0b\uff1a /etc/ironic_python_agent/ironic_python_agent.conf (\u9700\u8981\u63d0\u524d\u521b\u5efa/etc/ironic_python_agent\u76ee\u5f55\uff09 [DEFAULT] enable_auto_tls = False \u8bbe\u7f6e\u6743\u9650\uff1a chown -R ipa.ipa /etc/ironic_python_agent/ \u4fee\u6539ipa\u670d\u52a1\u7684\u670d\u52a1\u542f\u52a8\u6587\u4ef6\uff0c\u6dfb\u52a0\u914d\u7f6e\u6587\u4ef6\u9009\u9879 vim usr/lib/systemd/system/ironic-python-agent.service [Unit] Description=Ironic Python Agent After=network-online.target [Service] ExecStartPre=/sbin/modprobe vfat ExecStart=/usr/local/bin/ironic-python-agent --config-file /etc/ironic_python_agent/ironic_python_agent.conf Restart=always RestartSec=30s [Install] WantedBy=multi-user.target Kolla \u5b89\u88c5 \u00b6 Kolla\u4e3aOpenStack\u670d\u52a1\u63d0\u4f9b\u751f\u4ea7\u73af\u5883\u53ef\u7528\u7684\u5bb9\u5668\u5316\u90e8\u7f72\u7684\u529f\u80fd\u3002openEuler 21.09\u4e2d\u5f15\u5165\u4e86Kolla\u548cKolla-ansible\u670d\u52a1\u3002 Kolla\u7684\u5b89\u88c5\u5341\u5206\u7b80\u5355\uff0c\u53ea\u9700\u8981\u5b89\u88c5\u5bf9\u5e94\u7684RPM\u5305\u5373\u53ef yum install openstack-kolla openstack-kolla-ansible \u5b89\u88c5\u5b8c\u540e\uff0c\u5c31\u53ef\u4ee5\u4f7f\u7528 kolla-ansible , kolla-build , kolla-genpwd , 
**Trove Installation**

Trove is the Database service of OpenStack. It is recommended if users want the database service provided by OpenStack; otherwise it does not need to be installed.

Set up the database

The Database service stores information in a database. Create a trove database that the trove user can access, replacing TROVE_DBPASSWORD with a suitable password:

```shell
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE trove CHARACTER SET utf8;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'localhost' \
IDENTIFIED BY 'TROVE_DBPASSWORD';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'%' \
IDENTIFIED BY 'TROVE_DBPASSWORD';
```

Create the service credentials

1. Create the Trove service user:

```shell
openstack user create --password TROVE_PASSWORD \
  --email trove@example.com trove
openstack role add --project service --user trove admin
openstack service create --name trove --description "Database service" database
```

Explanation: replace TROVE_PASSWORD with the password of the trove user.

2. Create the Database service endpoints:

```shell
openstack endpoint create --region RegionOne database public http://controller:8779/v1.0/%\(tenant_id\)s
openstack endpoint create --region RegionOne database internal http://controller:8779/v1.0/%\(tenant_id\)s
openstack endpoint create --region RegionOne database admin http://controller:8779/v1.0/%\(tenant_id\)s
```

Install and configure the Trove components

1. Install the Trove packages:

```shell
yum install openstack-trove python-troveclient
```

2.
\u914d\u7f6e`trove.conf` ```shell script vim /etc/trove/trove.conf [DEFAULT] bind_host=TROVE_NODE_IP log_dir = /var/log/trove network_driver = trove.network.neutron.NeutronDriver management_security_groups = nova_keypair = trove-mgmt default_datastore = mysql taskmanager_manager = trove.taskmanager.manager.Manager trove_api_workers = 5 transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/ reboot_time_out = 300 usage_timeout = 900 agent_call_high_timeout = 1200 use_syslog = False debug = True # Set these if using Neutron Networking network_driver=trove.network.neutron.NeutronDriver network_label_regex=.* transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/ [database] connection = mysql+pymysql://trove:TROVE_DBPASS@controller/trove [keystone_authtoken] project_domain_name = Default project_name = service user_domain_name = Default password = trove username = trove auth_url = http://controller:5000/v3/ auth_type = password [service_credentials] auth_url = http://controller:5000/v3/ region_name = RegionOne project_name = service password = trove project_domain_name = Default user_domain_name = Default username = trove [mariadb] tcp_ports = 3306,4444,4567,4568 [mysql] tcp_ports = 3306 [postgresql] tcp_ports = 5432 \u89e3\u91ca\uff1a [Default] \u5206\u7ec4\u4e2d bind_host \u914d\u7f6e\u4e3aTrove\u90e8\u7f72\u8282\u70b9\u7684IP nova_compute_url \u548c cinder_url \u4e3aNova\u548cCinder\u5728Keystone\u4e2d\u521b\u5efa\u7684endpoint nova_proxy_XXX \u4e3a\u4e00\u4e2a\u80fd\u8bbf\u95eeNova\u670d\u52a1\u7684\u7528\u6237\u4fe1\u606f\uff0c\u4e0a\u4f8b\u4e2d\u4f7f\u7528 admin \u7528\u6237\u4e3a\u4f8b transport_url \u4e3a RabbitMQ \u8fde\u63a5\u4fe1\u606f\uff0c RABBIT_PASS \u66ff\u6362\u4e3aRabbitMQ\u7684\u5bc6\u7801 [database] \u5206\u7ec4\u4e2d\u7684 connection \u4e3a\u524d\u9762\u5728mysql\u4e2d\u4e3aTrove\u521b\u5efa\u7684\u6570\u636e\u5e93\u4fe1\u606f Trove\u7684\u7528\u6237\u4fe1\u606f\u4e2d TROVE_PASS \u66ff\u6362\u4e3a\u5b9e\u9645trove\u7528\u6237\u7684\u5bc6\u7801 \u914d\u7f6e trove-guestagent.conf ```shell script vim /etc/trove/trove-guestagent.conf [DEFAULT] log_file = trove-guestagent.log log_dir = /var/log/trove/ ignore_users = os_admin control_exchange = trove transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/ rpc_backend = rabbit command_process_timeout = 60 use_syslog = False debug = True [service_credentials] auth_url = http://controller:5000/v3/ region_name = RegionOne project_name = service password = TROVE_PASS project_domain_name = Default user_domain_name = Default username = trove [mysql] docker_image = your-registry/your-repo/mysql backup_docker_image = your-registry/your-repo/db-backup-mysql:1.1.0 **\u89e3\u91ca\uff1a** `guestagent`\u662ftrove\u4e2d\u4e00\u4e2a\u72ec\u7acb\u7ec4\u4ef6\uff0c\u9700\u8981\u9884\u5148\u5185\u7f6e\u5230Trove\u901a\u8fc7Nova\u521b\u5efa\u7684\u865a\u62df \u673a\u955c\u50cf\u4e2d\uff0c\u5728\u521b\u5efa\u597d\u6570\u636e\u5e93\u5b9e\u4f8b\u540e\uff0c\u4f1a\u8d77guestagent\u8fdb\u7a0b\uff0c\u8d1f\u8d23\u901a\u8fc7\u6d88\u606f\u961f\u5217\uff08RabbitMQ\uff09\u5411Trove\u4e0a \u62a5\u5fc3\u8df3\uff0c\u56e0\u6b64\u9700\u8981\u914d\u7f6eRabbitMQ\u7684\u7528\u6237\u548c\u5bc6\u7801\u4fe1\u606f\u3002 **\u4eceVictoria\u7248\u5f00\u59cb\uff0cTrove\u4f7f\u7528\u4e00\u4e2a\u7edf\u4e00\u7684\u955c\u50cf\u6765\u8dd1\u4e0d\u540c\u7c7b\u578b\u7684\u6570\u636e\u5e93\uff0c\u6570\u636e\u5e93\u670d\u52a1\u8fd0\u884c\u5728Guest\u865a\u62df\u673a\u7684Docker\u5bb9\u5668\u4e2d\u3002** - `transport_url` 
\u4e3a`RabbitMQ`\u8fde\u63a5\u4fe1\u606f\uff0c`RABBIT_PASS`\u66ff\u6362\u4e3aRabbitMQ\u7684\u5bc6\u7801 - Trove\u7684\u7528\u6237\u4fe1\u606f\u4e2d`TROVE_PASS`\u66ff\u6362\u4e3a\u5b9e\u9645trove\u7528\u6237\u7684\u5bc6\u7801 6. \u751f\u6210\u6570\u636e`Trove`\u6570\u636e\u5e93\u8868 ```shell script su -s /bin/sh -c \"trove-manage db_sync\" trove 4. \u5b8c\u6210\u5b89\u88c5\u914d\u7f6e 1. \u914d\u7f6e Trove \u670d\u52a1\u81ea\u542f\u52a8 ```shell script systemctl enable openstack-trove-api.service \\ openstack-trove-taskmanager.service \\ openstack-trove-conductor.service 2. \u542f\u52a8\u670d\u52a1 ```shell script systemctl start openstack-trove-api.service \\ openstack-trove-taskmanager.service \\ openstack-trove-conductor.service Swift \u5b89\u88c5 \u00b6 Swift \u63d0\u4f9b\u4e86\u5f39\u6027\u53ef\u4f38\u7f29\u3001\u9ad8\u53ef\u7528\u7684\u5206\u5e03\u5f0f\u5bf9\u8c61\u5b58\u50a8\u670d\u52a1\uff0c\u9002\u5408\u5b58\u50a8\u5927\u89c4\u6a21\u975e\u7ed3\u6784\u5316\u6570\u636e\u3002 \u521b\u5efa\u670d\u52a1\u51ed\u8bc1\u3001API\u7aef\u70b9\u3002 \u521b\u5efa\u670d\u52a1\u51ed\u8bc1 #\u521b\u5efaswift\u7528\u6237\uff1a openstack user create --domain default --password-prompt swift #admin\u4e3aswift\u7528\u6237\u6dfb\u52a0\u89d2\u8272\uff1a openstack role add --project service --user swift admin #\u521b\u5efaswift\u670d\u52a1\u5b9e\u4f53\uff1a openstack service create --name swift --description \"OpenStack Object Storage\" object-store \u521b\u5efaswift API \u7aef\u70b9: openstack endpoint create --region RegionOne object-store public http://controller:8080/v1/AUTH_%\\(project_id\\)s openstack endpoint create --region RegionOne object-store internal http://controller:8080/v1/AUTH_%\\(project_id\\)s openstack endpoint create --region RegionOne object-store admin http://controller:8080/v1 \u5b89\u88c5\u8f6f\u4ef6\u5305\uff1a yum install openstack-swift-proxy python3-swiftclient python3-keystoneclient python3-keystonemiddleware memcached \uff08CTL\uff09 \u914d\u7f6eproxy-server\u76f8\u5173\u914d\u7f6e Swift RPM\u5305\u91cc\u5df2\u7ecf\u5305\u542b\u4e86\u4e00\u4e2a\u57fa\u672c\u53ef\u7528\u7684proxy-server.conf\uff0c\u53ea\u9700\u8981\u624b\u52a8\u4fee\u6539\u5176\u4e2d\u7684ip\u548cswift password\u5373\u53ef\u3002 ***\u6ce8\u610f*** **\u6ce8\u610f\u66ff\u6362password\u4e3a\u60a8swift\u5728\u8eab\u4efd\u670d\u52a1\u4e2d\u4e3a\u7528\u6237\u9009\u62e9\u7684\u5bc6\u7801** \u5b89\u88c5\u548c\u914d\u7f6e\u5b58\u50a8\u8282\u70b9 \uff08STG\uff09 \u5b89\u88c5\u652f\u6301\u7684\u7a0b\u5e8f\u5305: yum install xfsprogs rsync \u5c06/dev/vdb\u548c/dev/vdc\u8bbe\u5907\u683c\u5f0f\u5316\u4e3a XFS mkfs.xfs /dev/vdb mkfs.xfs /dev/vdc \u521b\u5efa\u6302\u8f7d\u70b9\u76ee\u5f55\u7ed3\u6784: mkdir -p /srv/node/vdb mkdir -p /srv/node/vdc \u627e\u5230\u65b0\u5206\u533a\u7684 UUID: blkid \u7f16\u8f91/etc/fstab\u6587\u4ef6\u5e76\u5c06\u4ee5\u4e0b\u5185\u5bb9\u6dfb\u52a0\u5230\u5176\u4e2d: UUID=\"\" /srv/node/vdb xfs noatime 0 2 UUID=\"\" /srv/node/vdc xfs noatime 0 2 \u6302\u8f7d\u8bbe\u5907\uff1a mount /srv/node/vdb mount /srv/node/vdc \u6ce8\u610f \u5982\u679c\u7528\u6237\u4e0d\u9700\u8981\u5bb9\u707e\u529f\u80fd\uff0c\u4ee5\u4e0a\u6b65\u9aa4\u53ea\u9700\u8981\u521b\u5efa\u4e00\u4e2a\u8bbe\u5907\u5373\u53ef\uff0c\u540c\u65f6\u53ef\u4ee5\u8df3\u8fc7\u4e0b\u9762\u7684rsync\u914d\u7f6e \uff08\u53ef\u9009\uff09\u521b\u5efa\u6216\u7f16\u8f91/etc/rsyncd.conf\u6587\u4ef6\u4ee5\u5305\u542b\u4ee5\u4e0b\u5185\u5bb9: [DEFAULT] uid = swift gid = swift log file = /var/log/rsyncd.log pid file = /var/run/rsyncd.pid address = 
MANAGEMENT_INTERFACE_IP_ADDRESS

[account]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/account.lock

[container]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/container.lock

[object]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/object.lock

Replace MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network on the storage node.

Start the rsyncd service and configure it to start when the system boots:

```shell
systemctl enable rsyncd.service
systemctl start rsyncd.service
```

Install and configure the components on the storage nodes (STG)

Install the packages:

```shell
yum install openstack-swift-account openstack-swift-container openstack-swift-object
```

Edit the account-server.conf, container-server.conf and object-server.conf files in the /etc/swift directory, replacing bind_ip with the IP address of the management network on the storage node.

Ensure proper ownership of the mount point directory structure:

```shell
chown -R swift:swift /srv/node
```

Create the recon directory and ensure it has the correct ownership:

```shell
mkdir -p /var/cache/swift
chown -R root:swift /var/cache/swift
chmod -R 775 /var/cache/swift
```

Create the account ring (CTL)

Change to the /etc/swift directory:

```shell
cd /etc/swift
```

Create the base account.builder file:

```shell
swift-ring-builder account.builder create 10 1 1
```

Add each storage node to the ring:

```shell
swift-ring-builder account.builder add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6202 --device DEVICE_NAME --weight DEVICE_WEIGHT
```

Replace STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network on the storage node, and DEVICE_NAME with the name of a storage device on that node.

Note: repeat this command for every storage device on every storage node.

Verify the ring contents:

```shell
swift-ring-builder account.builder
```

Rebalance the ring:

```shell
swift-ring-builder account.builder rebalance
```

Create the container ring (CTL)

Change to the /etc/swift directory.

Create the base container.builder file:

```shell
swift-ring-builder container.builder create 10 1 1
```

Add each storage node to the ring:

```shell
swift-ring-builder container.builder \
  add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6201 \
  --device DEVICE_NAME --weight 100
```

Replace STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network on the storage node, and DEVICE_NAME with the name of a storage device on that node.

Note: repeat this command for every storage device on every storage node.

Verify the ring contents:

```shell
swift-ring-builder container.builder
```

Rebalance the ring:

```shell
swift-ring-builder container.builder rebalance
```
Create the object ring (CTL)

Change to the /etc/swift directory.

Create the base object.builder file:

```shell
swift-ring-builder object.builder create 10 1 1
```

Add each storage node to the ring:

```shell
swift-ring-builder object.builder \
  add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6200 \
  --device DEVICE_NAME --weight 100
```

Replace STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network on the storage node, and DEVICE_NAME with the name of a storage device on that node.

Note: repeat this command for every storage device on every storage node.

Verify the ring contents:

```shell
swift-ring-builder object.builder
```

Rebalance the ring:

```shell
swift-ring-builder object.builder rebalance
```
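Since the same add/rebalance sequence has to be repeated for every device and every ring, a small loop can save typing. A hedged convenience sketch (the devices vdb and vdc are the example devices formatted earlier; the node IP is purely illustrative):

```shell
# Add the example devices of one storage node to all three rings, then rebalance each ring.
STORAGE_NODE=10.0.0.51            # management IP of the storage node (illustrative)
for entry in "account.builder 6202" "container.builder 6201" "object.builder 6200"; do
    set -- $entry                 # $1 = builder file, $2 = port
    for dev in vdb vdc; do
        swift-ring-builder "$1" add --region 1 --zone 1 \
            --ip "$STORAGE_NODE" --port "$2" --device "$dev" --weight 100
    done
    swift-ring-builder "$1" rebalance
done
```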
Distribute the ring configuration files:

Copy the account.ring.gz, container.ring.gz and object.ring.gz files to the /etc/swift directory on every storage node and on any other node running the proxy service.

Finish the installation

Edit the /etc/swift/swift.conf file:

```
[swift-hash]
swift_hash_path_suffix = test-hash
swift_hash_path_prefix = test-hash

[storage-policy:0]
name = Policy-0
default = yes
```

Replace test-hash with unique values.

Copy the swift.conf file to the /etc/swift directory on every storage node and on any other node running the proxy service.

On all nodes, ensure proper ownership of the configuration directory:

```shell
chown -R root:swift /etc/swift
```

On the controller node and on any other node running the proxy service, start the Object Storage proxy service and its dependencies and configure them to start when the system boots:

```shell
systemctl enable openstack-swift-proxy.service memcached.service
systemctl start openstack-swift-proxy.service memcached.service
```

On the storage nodes, start the Object Storage services and configure them to start when the system boots:

```shell
systemctl enable openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service
systemctl start openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service

systemctl enable openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service
systemctl start openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service

systemctl enable openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service
systemctl start openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service
```

**OpenStack-Wallaby Deployment Guide**

Contents: OpenStack introduction, Conventions, Preparing the environment, Environment configuration, installing SQL DataBase, RabbitMQ, Memcached, OpenStack Keystone, Glance, Placement, Nova, Neutron, Cinder, Horizon, Tempest, Ironic, Kolla, Trove and Swift.

**OpenStack Introduction**

OpenStack is both a community and a project. It provides an operating platform, or toolkit, for deploying clouds, giving organizations scalable and flexible cloud computing. As an open-source cloud management platform, OpenStack combines several major components, such as Nova, Cinder, Neutron, Glance, Keystone and Horizon, to get the actual work done. OpenStack supports almost every type of cloud environment; the project aims to provide a cloud management platform that is simple to implement, massively scalable, feature-rich and consistently standardized. OpenStack delivers an Infrastructure-as-a-Service (IaaS) solution through a set of complementary services, each of which exposes an API for integration.

The official openEuler 21.09 repositories already support OpenStack Wallaby, so after configuring the yum repositories users can deploy OpenStack by following this document.

**Conventions**

OpenStack supports several deployment topologies. This document covers both the All-in-One and the Distributed modes, with the following conventions:

- All-in-One mode: ignore all suffixes.
- Distributed mode:
  - a `(CTL)` suffix means the configuration item or command applies only to the control node;
  - a `(CPT)` suffix means it applies only to the compute nodes;
  - a `(STG)` suffix means it applies only to the storage nodes;
  - anything without a suffix applies to both the control node and the compute nodes.

Note: the services that use these conventions are Cinder, Nova and Neutron.

**Preparing the Environment**

**Environment Configuration**

Configure the openEuler 21.09 official yum repositories; the EPOL repository must be enabled to provide the OpenStack packages. (The heredoc delimiter is quoted so that `$basearch` is left for yum to expand instead of being expanded by the shell.)

```shell
cat << 'EOF' >> /etc/yum.repos.d/21.09-OpenStack_Wallaby.repo
[OS]
name=OS
baseurl=http://repo.openeuler.org/openEuler-21.09/OS/$basearch/
enabled=1
gpgcheck=1
gpgkey=http://repo.openeuler.org/openEuler-21.09/OS/$basearch/RPM-GPG-KEY-openEuler

[everything]
name=everything
baseurl=http://repo.openeuler.org/openEuler-21.09/everything/$basearch/
enabled=1
gpgcheck=1
gpgkey=http://repo.openeuler.org/openEuler-21.09/everything/$basearch/RPM-GPG-KEY-openEuler

[EPOL]
name=EPOL
baseurl=http://repo.openeuler.org/openEuler-21.09/EPOL/$basearch/
enabled=1
gpgcheck=1
gpgkey=http://repo.openeuler.org/openEuler-21.09/OS/$basearch/RPM-GPG-KEY-openEuler
EOF

yum clean all && yum makecache
```
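Optionally, a quick hedged sanity check that the new repositories are usable before any packages are installed (the package name is only an example of something the EPOL repository should now resolve):

```shell
# List the configured repositories and confirm an OpenStack package can be resolved.
yum repolist
yum info openstack-keystone
```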
Modify the host names and the name mapping

Set the host name of each node:

```shell
hostnamectl set-hostname controller    (CTL)
hostnamectl set-hostname compute       (CPT)
```

Assuming the IP of the controller node is 10.0.0.11 and the IP of the compute node (if there is one) is 10.0.0.12, add the following to /etc/hosts:

```
10.0.0.11 controller
10.0.0.12 compute
```

**Install SQL DataBase**

Install the packages:

```shell
yum install mariadb mariadb-server python3-PyMySQL
```

Create and edit the /etc/my.cnf.d/openstack.cnf file:

```shell
vim /etc/my.cnf.d/openstack.cnf

[mysqld]
bind-address = 10.0.0.11
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
```

Note: set bind-address to the management IP address of the control node.

Start the DataBase service and enable it at boot:

```shell
systemctl enable mariadb.service
systemctl start mariadb.service
```

Set the default database password (optional):

```shell
mysql_secure_installation
```

Note: just follow the prompts.

**Install RabbitMQ**

Install the packages:

```shell
yum install rabbitmq-server
```

Start the RabbitMQ service and enable it at boot:

```shell
systemctl enable rabbitmq-server.service
systemctl start rabbitmq-server.service
```

Add the openstack user:

```shell
rabbitmqctl add_user openstack RABBIT_PASS
```

Note: replace RABBIT_PASS with the password for the openstack user.

Give the openstack user permission to configure, write and read:

```shell
rabbitmqctl set_permissions openstack ".*" ".*" ".*"
```

**Install Memcached**

Install the dependency packages:

```shell
yum install memcached python3-memcached
```

Edit the /etc/sysconfig/memcached file:

```shell
vim /etc/sysconfig/memcached

OPTIONS="-l 127.0.0.1,::1,controller"
```

Start the Memcached service and enable it at boot:

systemctl enable
memcached.service systemctl start memcached.service \u6ce8\u610f \u670d\u52a1\u542f\u52a8\u540e\uff0c\u53ef\u4ee5\u901a\u8fc7\u547d\u4ee4 memcached-tool controller stats \u786e\u4fdd\u542f\u52a8\u6b63\u5e38\uff0c\u670d\u52a1\u53ef\u7528\uff0c\u5176\u4e2d\u53ef\u4ee5\u5c06 controller \u66ff\u6362\u4e3a\u63a7\u5236\u8282\u70b9\u7684\u7ba1\u7406IP\u5730\u5740\u3002","title":"\u5b89\u88c5 Memcached"},{"location":"install/openEuler-21.09/OpenStack-wallaby/#openstack_1","text":"","title":"\u5b89\u88c5 OpenStack"},{"location":"install/openEuler-21.09/OpenStack-wallaby/#keystone","text":"\u521b\u5efa keystone \u6570\u636e\u5e93\u5e76\u6388\u6743\u3002 mysql -u root -p MariaDB [(none)]> CREATE DATABASE keystone; MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \\ IDENTIFIED BY 'KEYSTONE_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \\ IDENTIFIED BY 'KEYSTONE_DBPASS'; MariaDB [(none)]> exit \u6ce8\u610f \u66ff\u6362 KEYSTONE_DBPASS \uff0c\u4e3a Keystone \u6570\u636e\u5e93\u8bbe\u7f6e\u5bc6\u7801 \u5b89\u88c5\u8f6f\u4ef6\u5305\u3002 yum install openstack-keystone httpd mod_wsgi \u914d\u7f6ekeystone\u76f8\u5173\u914d\u7f6e vim /etc/keystone/keystone.conf [database] connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone [token] provider = fernet \u89e3\u91ca [database]\u90e8\u5206\uff0c\u914d\u7f6e\u6570\u636e\u5e93\u5165\u53e3 [token]\u90e8\u5206\uff0c\u914d\u7f6etoken provider \u6ce8\u610f\uff1a \u66ff\u6362 KEYSTONE_DBPASS \u4e3a Keystone \u6570\u636e\u5e93\u7684\u5bc6\u7801 \u540c\u6b65\u6570\u636e\u5e93\u3002 su -s /bin/sh -c \"keystone-manage db_sync\" keystone \u521d\u59cb\u5316Fernet\u5bc6\u94a5\u4ed3\u5e93\u3002 keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone keystone-manage credential_setup --keystone-user keystone --keystone-group keystone \u542f\u52a8\u670d\u52a1\u3002 keystone-manage bootstrap --bootstrap-password ADMIN_PASS \\ --bootstrap-admin-url http://controller:5000/v3/ \\ --bootstrap-internal-url http://controller:5000/v3/ \\ --bootstrap-public-url http://controller:5000/v3/ \\ --bootstrap-region-id RegionOne \u6ce8\u610f \u66ff\u6362 ADMIN_PASS \uff0c\u4e3a admin \u7528\u6237\u8bbe\u7f6e\u5bc6\u7801 \u914d\u7f6eApache HTTP server vim /etc/httpd/conf/httpd.conf ServerName controller ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/ \u89e3\u91ca \u914d\u7f6e ServerName \u9879\u5f15\u7528\u63a7\u5236\u8282\u70b9 \u6ce8\u610f \u5982\u679c ServerName \u9879\u4e0d\u5b58\u5728\u5219\u9700\u8981\u521b\u5efa \u542f\u52a8Apache HTTP\u670d\u52a1\u3002 systemctl enable httpd.service systemctl start httpd.service \u521b\u5efa\u73af\u5883\u53d8\u91cf\u914d\u7f6e\u3002 cat << EOF >> ~/.admin-openrc export OS_PROJECT_DOMAIN_NAME=Default export OS_USER_DOMAIN_NAME=Default export OS_PROJECT_NAME=admin export OS_USERNAME=admin export OS_PASSWORD=ADMIN_PASS export OS_AUTH_URL=http://controller:5000/v3 export OS_IDENTITY_API_VERSION=3 export OS_IMAGE_API_VERSION=2 EOF \u6ce8\u610f \u66ff\u6362 ADMIN_PASS \u4e3a admin \u7528\u6237\u7684\u5bc6\u7801 \u4f9d\u6b21\u521b\u5efadomain, projects, users, roles\uff0c\u9700\u8981\u5148\u5b89\u88c5\u597dpython3-openstackclient\uff1a yum install python3-openstackclient \u5bfc\u5165\u73af\u5883\u53d8\u91cf source ~/.admin-openrc \u521b\u5efaproject service \uff0c\u5176\u4e2d domain default \u5728 keystone-manage bootstrap \u65f6\u5df2\u521b\u5efa openstack domain create --description \"An Example Domain\" example openstack project create 
--domain default --description \"Service Project\" service \u521b\u5efa\uff08non-admin\uff09project myproject \uff0cuser myuser \u548c role myrole \uff0c\u4e3a myproject \u548c myuser \u6dfb\u52a0\u89d2\u8272 myrole openstack project create --domain default --description \"Demo Project\" myproject openstack user create --domain default --password-prompt myuser openstack role create myrole openstack role add --project myproject --user myuser myrole \u9a8c\u8bc1 \u53d6\u6d88\u4e34\u65f6\u73af\u5883\u53d8\u91cfOS_AUTH_URL\u548cOS_PASSWORD\uff1a source ~/.admin-openrc unset OS_AUTH_URL OS_PASSWORD \u4e3aadmin\u7528\u6237\u8bf7\u6c42token\uff1a openstack --os-auth-url http://controller:5000/v3 \\ --os-project-domain-name Default --os-user-domain-name Default \\ --os-project-name admin --os-username admin token issue \u4e3amyuser\u7528\u6237\u8bf7\u6c42token\uff1a openstack --os-auth-url http://controller:5000/v3 \\ --os-project-domain-name Default --os-user-domain-name Default \\ --os-project-name myproject --os-username myuser token issue","title":"Keystone \u5b89\u88c5"},{"location":"install/openEuler-21.09/OpenStack-wallaby/#glance","text":"\u521b\u5efa\u6570\u636e\u5e93\u3001\u670d\u52a1\u51ed\u8bc1\u548c API \u7aef\u70b9 \u521b\u5efa\u6570\u636e\u5e93\uff1a mysql -u root -p MariaDB [(none)]> CREATE DATABASE glance; MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \\ IDENTIFIED BY 'GLANCE_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \\ IDENTIFIED BY 'GLANCE_DBPASS'; MariaDB [(none)]> exit \u6ce8\u610f: \u66ff\u6362 GLANCE_DBPASS \uff0c\u4e3a glance \u6570\u636e\u5e93\u8bbe\u7f6e\u5bc6\u7801 \u521b\u5efa\u670d\u52a1\u51ed\u8bc1 source ~/.admin-openrc openstack user create --domain default --password-prompt glance openstack role add --project service --user glance admin openstack service create --name glance --description \"OpenStack Image\" image \u521b\u5efa\u955c\u50cf\u670d\u52a1API\u7aef\u70b9\uff1a openstack endpoint create --region RegionOne image public http://controller:9292 openstack endpoint create --region RegionOne image internal http://controller:9292 openstack endpoint create --region RegionOne image admin http://controller:9292 \u5b89\u88c5\u8f6f\u4ef6\u5305 yum install openstack-glance \u914d\u7f6eglance\u76f8\u5173\u914d\u7f6e\uff1a vim /etc/glance/glance-api.conf [database] connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance [keystone_authtoken] www_authenticate_uri = http://controller:5000 auth_url = http://controller:5000 memcached_servers = controller:11211 auth_type = password project_domain_name = Default user_domain_name = Default project_name = service username = glance password = GLANCE_PASS [paste_deploy] flavor = keystone [glance_store] stores = file,http default_store = file filesystem_store_datadir = /var/lib/glance/images/ \u89e3\u91ca: [database]\u90e8\u5206\uff0c\u914d\u7f6e\u6570\u636e\u5e93\u5165\u53e3 [keystone_authtoken] [paste_deploy]\u90e8\u5206\uff0c\u914d\u7f6e\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u5165\u53e3 [glance_store]\u90e8\u5206\uff0c\u914d\u7f6e\u672c\u5730\u6587\u4ef6\u7cfb\u7edf\u5b58\u50a8\u548c\u955c\u50cf\u6587\u4ef6\u7684\u4f4d\u7f6e \u6ce8\u610f \u66ff\u6362 GLANCE_DBPASS \u4e3a glance \u6570\u636e\u5e93\u7684\u5bc6\u7801 \u66ff\u6362 GLANCE_PASS \u4e3a glance \u7528\u6237\u7684\u5bc6\u7801 \u540c\u6b65\u6570\u636e\u5e93\uff1a su -s /bin/sh -c \"glance-manage db_sync\" glance \u542f\u52a8\u670d\u52a1\uff1a systemctl enable openstack-glance-api.service systemctl 
start openstack-glance-api.service \u9a8c\u8bc1 \u4e0b\u8f7d\u955c\u50cf source ~/.admin-openrc wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img \u6ce8\u610f \u5982\u679c\u60a8\u4f7f\u7528\u7684\u73af\u5883\u662f\u9cb2\u9e4f\u67b6\u6784\uff0c\u8bf7\u4e0b\u8f7daarch64\u7248\u672c\u7684\u955c\u50cf\uff1b\u5df2\u5bf9\u955c\u50cfcirros-0.5.2-aarch64-disk.img\u8fdb\u884c\u6d4b\u8bd5\u3002 \u5411Image\u670d\u52a1\u4e0a\u4f20\u955c\u50cf\uff1a openstack image create --disk-format qcow2 --container-format bare \\ --file cirros-0.4.0-x86_64-disk.img --public cirros \u786e\u8ba4\u955c\u50cf\u4e0a\u4f20\u5e76\u9a8c\u8bc1\u5c5e\u6027\uff1a openstack image list","title":"Glance \u5b89\u88c5"},{"location":"install/openEuler-21.09/OpenStack-wallaby/#placement","text":"\u521b\u5efa\u6570\u636e\u5e93\u3001\u670d\u52a1\u51ed\u8bc1\u548c API \u7aef\u70b9 \u521b\u5efa\u6570\u636e\u5e93\uff1a \u4f5c\u4e3a root \u7528\u6237\u8bbf\u95ee\u6570\u636e\u5e93\uff0c\u521b\u5efa placement \u6570\u636e\u5e93\u5e76\u6388\u6743\u3002 mysql -u root -p MariaDB [(none)]> CREATE DATABASE placement; MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' \\ IDENTIFIED BY 'PLACEMENT_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' \\ IDENTIFIED BY 'PLACEMENT_DBPASS'; MariaDB [(none)]> exit \u6ce8\u610f \u66ff\u6362 PLACEMENT_DBPASS \u4e3a placement \u6570\u636e\u5e93\u8bbe\u7f6e\u5bc6\u7801 source admin-openrc \u6267\u884c\u5982\u4e0b\u547d\u4ee4\uff0c\u521b\u5efa placement \u670d\u52a1\u51ed\u8bc1\u3001\u521b\u5efa placement \u7528\u6237\u4ee5\u53ca\u6dfb\u52a0\u2018admin\u2019\u89d2\u8272\u5230\u7528\u6237\u2018placement\u2019\u3002 \u521b\u5efaPlacement API\u670d\u52a1 openstack user create --domain default --password-prompt placement openstack role add --project service --user placement admin openstack service create --name placement --description \"Placement API\" placement \u521b\u5efaplacement\u670d\u52a1API\u7aef\u70b9\uff1a openstack endpoint create --region RegionOne placement public http://controller:8778 openstack endpoint create --region RegionOne placement internal http://controller:8778 openstack endpoint create --region RegionOne placement admin http://controller:8778 \u5b89\u88c5\u548c\u914d\u7f6e \u5b89\u88c5\u8f6f\u4ef6\u5305\uff1a yum install openstack-placement-api \u914d\u7f6eplacement\uff1a \u7f16\u8f91 /etc/placement/placement.conf \u6587\u4ef6\uff1a \u5728[placement_database]\u90e8\u5206\uff0c\u914d\u7f6e\u6570\u636e\u5e93\u5165\u53e3 \u5728[api] [keystone_authtoken]\u90e8\u5206\uff0c\u914d\u7f6e\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u5165\u53e3 # vim /etc/placement/placement.conf [placement_database] # ... connection = mysql+pymysql://placement:PLACEMENT_DBPASS@controller/placement [api] # ... auth_strategy = keystone [keystone_authtoken] # ... auth_url = http://controller:5000/v3 memcached_servers = controller:11211 auth_type = password project_domain_name = Default user_domain_name = Default project_name = service username = placement password = PLACEMENT_PASS \u5176\u4e2d\uff0c\u66ff\u6362 PLACEMENT_DBPASS \u4e3a placement \u6570\u636e\u5e93\u7684\u5bc6\u7801\uff0c\u66ff\u6362 PLACEMENT_PASS \u4e3a placement \u7528\u6237\u7684\u5bc6\u7801\u3002 \u540c\u6b65\u6570\u636e\u5e93\uff1a su -s /bin/sh -c \"placement-manage db sync\" placement \u542f\u52a8httpd\u670d\u52a1\uff1a systemctl restart httpd \u9a8c\u8bc1 \u6267\u884c\u5982\u4e0b\u547d\u4ee4\uff0c\u6267\u884c\u72b6\u6001\u68c0\u67e5\uff1a . 
admin-openrc placement-status upgrade check \u5b89\u88c5osc-placement\uff0c\u5217\u51fa\u53ef\u7528\u7684\u8d44\u6e90\u7c7b\u522b\u53ca\u7279\u6027\uff1a yum install python3-osc-placement openstack --os-placement-api-version 1.2 resource class list --sort-column name openstack --os-placement-api-version 1.6 trait list --sort-column name","title":"Placement\u5b89\u88c5"},{"location":"install/openEuler-21.09/OpenStack-wallaby/#nova","text":"\u521b\u5efa\u6570\u636e\u5e93\u3001\u670d\u52a1\u51ed\u8bc1\u548c API \u7aef\u70b9 \u521b\u5efa\u6570\u636e\u5e93\uff1a mysql -u root -p (CTL) MariaDB [(none)]> CREATE DATABASE nova_api; MariaDB [(none)]> CREATE DATABASE nova; MariaDB [(none)]> CREATE DATABASE nova_cell0; MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \\ IDENTIFIED BY 'NOVA_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \\ IDENTIFIED BY 'NOVA_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \\ IDENTIFIED BY 'NOVA_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \\ IDENTIFIED BY 'NOVA_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \\ IDENTIFIED BY 'NOVA_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \\ IDENTIFIED BY 'NOVA_DBPASS'; MariaDB [(none)]> exit \u6ce8\u610f \u66ff\u6362NOVA_DBPASS\uff0c\u4e3anova\u6570\u636e\u5e93\u8bbe\u7f6e\u5bc6\u7801 source ~/.admin-openrc (CTL) \u521b\u5efanova\u670d\u52a1\u51ed\u8bc1: openstack user create --domain default --password-prompt nova (CTL) openstack role add --project service --user nova admin (CTL) openstack service create --name nova --description \"OpenStack Compute\" compute (CTL) \u521b\u5efanova API\u7aef\u70b9\uff1a openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1 (CTL) openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1 (CTL) openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1 (CTL) \u5b89\u88c5\u8f6f\u4ef6\u5305 yum install openstack-nova-api openstack-nova-conductor \\ (CTL) openstack-nova-novncproxy openstack-nova-scheduler yum install openstack-nova-compute (CPT) \u6ce8\u610f \u5982\u679c\u4e3aarm64\u7ed3\u6784\uff0c\u8fd8\u9700\u8981\u6267\u884c\u4ee5\u4e0b\u547d\u4ee4 yum install edk2-aarch64 (CPT) \u914d\u7f6enova\u76f8\u5173\u914d\u7f6e vim /etc/nova/nova.conf [DEFAULT] enabled_apis = osapi_compute,metadata transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/ my_ip = 10.0.0.1 use_neutron = true firewall_driver = nova.virt.firewall.NoopFirewallDriver compute_driver=libvirt.LibvirtDriver (CPT) instances_path = /var/lib/nova/instances/ (CPT) lock_path = /var/lib/nova/tmp (CPT) [api_database] connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api (CTL) [database] connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova (CTL) [api] auth_strategy = keystone [keystone_authtoken] www_authenticate_uri = http://controller:5000/ auth_url = http://controller:5000/ memcached_servers = controller:11211 auth_type = password project_domain_name = Default user_domain_name = Default project_name = service username = nova password = NOVA_PASS [vnc] enabled = true server_listen = $my_ip server_proxyclient_address = $my_ip novncproxy_base_url = http://controller:6080/vnc_auto.html (CPT) [libvirt] virt_type = qemu (CPT) cpu_mode = custom (CPT) cpu_model = cortex-a72 (CPT) [glance] api_servers = http://controller:9292 [oslo_concurrency] 
lock_path = /var/lib/nova/tmp (CTL) [placement] region_name = RegionOne project_domain_name = Default project_name = service auth_type = password user_domain_name = Default auth_url = http://controller:5000/v3 username = placement password = PLACEMENT_PASS [neutron] auth_url = http://controller:5000 auth_type = password project_domain_name = default user_domain_name = default region_name = RegionOne project_name = service username = neutron password = NEUTRON_PASS service_metadata_proxy = true (CTL) metadata_proxy_shared_secret = METADATA_SECRET (CTL) \u89e3\u91ca [default]\u90e8\u5206\uff0c\u542f\u7528\u8ba1\u7b97\u548c\u5143\u6570\u636e\u7684API\uff0c\u914d\u7f6eRabbitMQ\u6d88\u606f\u961f\u5217\u5165\u53e3\uff0c\u914d\u7f6emy_ip\uff0c\u542f\u7528\u7f51\u7edc\u670d\u52a1neutron\uff1b [api_database] [database]\u90e8\u5206\uff0c\u914d\u7f6e\u6570\u636e\u5e93\u5165\u53e3\uff1b [api] [keystone_authtoken]\u90e8\u5206\uff0c\u914d\u7f6e\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u5165\u53e3\uff1b [vnc]\u90e8\u5206\uff0c\u542f\u7528\u5e76\u914d\u7f6e\u8fdc\u7a0b\u63a7\u5236\u53f0\u5165\u53e3\uff1b [glance]\u90e8\u5206\uff0c\u914d\u7f6e\u955c\u50cf\u670d\u52a1API\u7684\u5730\u5740\uff1b [oslo_concurrency]\u90e8\u5206\uff0c\u914d\u7f6elock path\uff1b [placement]\u90e8\u5206\uff0c\u914d\u7f6eplacement\u670d\u52a1\u7684\u5165\u53e3\u3002 \u6ce8\u610f \u66ff\u6362 RABBIT_PASS \u4e3a RabbitMQ \u4e2d openstack \u8d26\u6237\u7684\u5bc6\u7801\uff1b \u914d\u7f6e my_ip \u4e3a\u63a7\u5236\u8282\u70b9\u7684\u7ba1\u7406IP\u5730\u5740\uff1b \u66ff\u6362 NOVA_DBPASS \u4e3anova\u6570\u636e\u5e93\u7684\u5bc6\u7801\uff1b \u66ff\u6362 NOVA_PASS \u4e3anova\u7528\u6237\u7684\u5bc6\u7801\uff1b \u66ff\u6362 PLACEMENT_PASS \u4e3aplacement\u7528\u6237\u7684\u5bc6\u7801\uff1b \u66ff\u6362 NEUTRON_PASS \u4e3aneutron\u7528\u6237\u7684\u5bc6\u7801\uff1b \u66ff\u6362 METADATA_SECRET \u4e3a\u5408\u9002\u7684\u5143\u6570\u636e\u4ee3\u7406secret\u3002 \u989d\u5916 \u786e\u5b9a\u662f\u5426\u652f\u6301\u865a\u62df\u673a\u786c\u4ef6\u52a0\u901f\uff08x86\u67b6\u6784\uff09\uff1a egrep -c '(vmx|svm)' /proc/cpuinfo (CPT) \u5982\u679c\u8fd4\u56de\u503c\u4e3a0\u5219\u4e0d\u652f\u6301\u786c\u4ef6\u52a0\u901f\uff0c\u9700\u8981\u914d\u7f6elibvirt\u4f7f\u7528QEMU\u800c\u4e0d\u662fKVM\uff1a vim /etc/nova/nova.conf (CPT) [libvirt] virt_type = qemu \u5982\u679c\u8fd4\u56de\u503c\u4e3a1\u6216\u66f4\u5927\u7684\u503c\uff0c\u5219\u652f\u6301\u786c\u4ef6\u52a0\u901f\uff0c\u4e0d\u9700\u8981\u8fdb\u884c\u989d\u5916\u7684\u914d\u7f6e \u6ce8\u610f \u5982\u679c\u4e3aarm64\u7ed3\u6784\uff0c\u8fd8\u9700\u8981\u6267\u884c\u4ee5\u4e0b\u547d\u4ee4 vim /etc/libvirt/qemu.conf nvram = [\"/usr/share/AAVMF/AAVMF_CODE.fd: \\ /usr/share/AAVMF/AAVMF_VARS.fd\", \\ \"/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw: \\ /usr/share/edk2/aarch64/vars-template-pflash.raw\"] vim /etc/qemu/firmware/edk2-aarch64.json { \"description\": \"UEFI firmware for ARM64 virtual machines\", \"interface-types\": [ \"uefi\" ], \"mapping\": { \"device\": \"flash\", \"executable\": { \"filename\": \"/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw\", \"format\": \"raw\" }, \"nvram-template\": { \"filename\": \"/usr/share/edk2/aarch64/vars-template-pflash.raw\", \"format\": \"raw\" } }, \"targets\": [ { \"architecture\": \"aarch64\", \"machines\": [ \"virt-*\" ] } ], \"features\": [ ], \"tags\": [ ] } (CPT) \u540c\u6b65\u6570\u636e\u5e93 \u540c\u6b65nova-api\u6570\u636e\u5e93\uff1a su -s /bin/sh -c \"nova-manage api_db sync\" nova (CTL) \u6ce8\u518ccell0\u6570\u636e\u5e93\uff1a su -s /bin/sh -c 
\"nova-manage cell_v2 map_cell0\" nova (CTL) \u521b\u5efacell1 cell\uff1a su -s /bin/sh -c \"nova-manage cell_v2 create_cell --name=cell1 --verbose\" nova (CTL) \u540c\u6b65nova\u6570\u636e\u5e93\uff1a su -s /bin/sh -c \"nova-manage db sync\" nova (CTL) \u9a8c\u8bc1cell0\u548ccell1\u6ce8\u518c\u6b63\u786e\uff1a su -s /bin/sh -c \"nova-manage cell_v2 list_cells\" nova (CTL) \u6dfb\u52a0\u8ba1\u7b97\u8282\u70b9\u5230openstack\u96c6\u7fa4 su -s /bin/sh -c \"nova-manage cell_v2 discover_hosts --verbose\" nova (CPT) \u542f\u52a8\u670d\u52a1 systemctl enable \\ (CTL) openstack-nova-api.service \\ openstack-nova-scheduler.service \\ openstack-nova-conductor.service \\ openstack-nova-novncproxy.service systemctl start \\ (CTL) openstack-nova-api.service \\ openstack-nova-scheduler.service \\ openstack-nova-conductor.service \\ openstack-nova-novncproxy.service systemctl enable libvirtd.service openstack-nova-compute.service (CPT) systemctl start libvirtd.service openstack-nova-compute.service (CPT) \u9a8c\u8bc1 source ~/.admin-openrc (CTL) \u5217\u51fa\u670d\u52a1\u7ec4\u4ef6\uff0c\u9a8c\u8bc1\u6bcf\u4e2a\u6d41\u7a0b\u90fd\u6210\u529f\u542f\u52a8\u548c\u6ce8\u518c\uff1a openstack compute service list (CTL) \u5217\u51fa\u8eab\u4efd\u670d\u52a1\u4e2d\u7684API\u7aef\u70b9\uff0c\u9a8c\u8bc1\u4e0e\u8eab\u4efd\u670d\u52a1\u7684\u8fde\u63a5\uff1a openstack catalog list (CTL) \u5217\u51fa\u955c\u50cf\u670d\u52a1\u4e2d\u7684\u955c\u50cf\uff0c\u9a8c\u8bc1\u4e0e\u955c\u50cf\u670d\u52a1\u7684\u8fde\u63a5\uff1a openstack image list (CTL) \u68c0\u67e5cells\u662f\u5426\u8fd0\u4f5c\u6210\u529f\uff0c\u4ee5\u53ca\u5176\u4ed6\u5fc5\u8981\u6761\u4ef6\u662f\u5426\u5df2\u5177\u5907\u3002 nova-status upgrade check (CTL)","title":"Nova \u5b89\u88c5"},{"location":"install/openEuler-21.09/OpenStack-wallaby/#neutron","text":"\u521b\u5efa\u6570\u636e\u5e93\u3001\u670d\u52a1\u51ed\u8bc1\u548c API \u7aef\u70b9 \u521b\u5efa\u6570\u636e\u5e93\uff1a mysql -u root -p (CTL) MariaDB [(none)]> CREATE DATABASE neutron; MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \\ IDENTIFIED BY 'NEUTRON_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \\ IDENTIFIED BY 'NEUTRON_DBPASS'; MariaDB [(none)]> exit \u6ce8\u610f \u66ff\u6362 NEUTRON_DBPASS \u4e3a neutron \u6570\u636e\u5e93\u8bbe\u7f6e\u5bc6\u7801\u3002 source ~/.admin-openrc (CTL) \u521b\u5efaneutron\u670d\u52a1\u51ed\u8bc1 openstack user create --domain default --password-prompt neutron (CTL) openstack role add --project service --user neutron admin (CTL) openstack service create --name neutron --description \"OpenStack Networking\" network (CTL) \u521b\u5efaNeutron\u670d\u52a1API\u7aef\u70b9\uff1a openstack endpoint create --region RegionOne network public http://controller:9696 (CTL) openstack endpoint create --region RegionOne network internal http://controller:9696 (CTL) openstack endpoint create --region RegionOne network admin http://controller:9696 (CTL) \u5b89\u88c5\u8f6f\u4ef6\u5305\uff1a yum install openstack-neutron openstack-neutron-linuxbridge ebtables ipset \\ (CTL) openstack-neutron-ml2 yum install openstack-neutron-linuxbridge ebtables ipset (CPT) \u914d\u7f6eneutron\u76f8\u5173\u914d\u7f6e\uff1a \u914d\u7f6e\u4e3b\u4f53\u914d\u7f6e vim /etc/neutron/neutron.conf [database] connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron (CTL) [DEFAULT] core_plugin = ml2 (CTL) service_plugins = router (CTL) allow_overlapping_ips = true (CTL) transport_url = rabbit://openstack:RABBIT_PASS@controller 
auth_strategy = keystone notify_nova_on_port_status_changes = true (CTL) notify_nova_on_port_data_changes = true (CTL) api_workers = 3 (CTL) [keystone_authtoken] www_authenticate_uri = http://controller:5000 auth_url = http://controller:5000 memcached_servers = controller:11211 auth_type = password project_domain_name = Default user_domain_name = Default project_name = service username = neutron password = NEUTRON_PASS [nova] auth_url = http://controller:5000 (CTL) auth_type = password (CTL) project_domain_name = Default (CTL) user_domain_name = Default (CTL) region_name = RegionOne (CTL) project_name = service (CTL) username = nova (CTL) password = NOVA_PASS (CTL) [oslo_concurrency] lock_path = /var/lib/neutron/tmp \u89e3\u91ca [database]\u90e8\u5206\uff0c\u914d\u7f6e\u6570\u636e\u5e93\u5165\u53e3\uff1b [default]\u90e8\u5206\uff0c\u542f\u7528ml2\u63d2\u4ef6\u548crouter\u63d2\u4ef6\uff0c\u5141\u8bb8ip\u5730\u5740\u91cd\u53e0\uff0c\u914d\u7f6eRabbitMQ\u6d88\u606f\u961f\u5217\u5165\u53e3\uff1b [default] [keystone]\u90e8\u5206\uff0c\u914d\u7f6e\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u5165\u53e3\uff1b [default] [nova]\u90e8\u5206\uff0c\u914d\u7f6e\u7f51\u7edc\u6765\u901a\u77e5\u8ba1\u7b97\u7f51\u7edc\u62d3\u6251\u7684\u53d8\u5316\uff1b [oslo_concurrency]\u90e8\u5206\uff0c\u914d\u7f6elock path\u3002 \u6ce8\u610f \u66ff\u6362 NEUTRON_DBPASS \u4e3a neutron \u6570\u636e\u5e93\u7684\u5bc6\u7801\uff1b \u66ff\u6362 RABBIT_PASS \u4e3a RabbitMQ\u4e2dopenstack \u8d26\u6237\u7684\u5bc6\u7801\uff1b \u66ff\u6362 NEUTRON_PASS \u4e3a neutron \u7528\u6237\u7684\u5bc6\u7801\uff1b \u66ff\u6362 NOVA_PASS \u4e3a nova \u7528\u6237\u7684\u5bc6\u7801\u3002 \u914d\u7f6eML2\u63d2\u4ef6\uff1a vim /etc/neutron/plugins/ml2/ml2_conf.ini [ml2] type_drivers = flat,vlan,vxlan tenant_network_types = vxlan mechanism_drivers = linuxbridge,l2population extension_drivers = port_security [ml2_type_flat] flat_networks = provider [ml2_type_vxlan] vni_ranges = 1:1000 [securitygroup] enable_ipset = true \u521b\u5efa/etc/neutron/plugin.ini\u7684\u7b26\u53f7\u94fe\u63a5 ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini \u6ce8\u610f [ml2]\u90e8\u5206\uff0c\u542f\u7528 flat\u3001vlan\u3001vxlan \u7f51\u7edc\uff0c\u542f\u7528 linuxbridge \u53ca l2population \u673a\u5236\uff0c\u542f\u7528\u7aef\u53e3\u5b89\u5168\u6269\u5c55\u9a71\u52a8\uff1b [ml2_type_flat]\u90e8\u5206\uff0c\u914d\u7f6e flat \u7f51\u7edc\u4e3a provider \u865a\u62df\u7f51\u7edc\uff1b [ml2_type_vxlan]\u90e8\u5206\uff0c\u914d\u7f6e VXLAN \u7f51\u7edc\u6807\u8bc6\u7b26\u8303\u56f4\uff1b [securitygroup]\u90e8\u5206\uff0c\u914d\u7f6e\u5141\u8bb8 ipset\u3002 \u8865\u5145 l2 \u7684\u5177\u4f53\u914d\u7f6e\u53ef\u4ee5\u6839\u636e\u7528\u6237\u9700\u6c42\u81ea\u884c\u4fee\u6539\uff0c\u672c\u6587\u4f7f\u7528\u7684\u662fprovider network + linuxbridge \u914d\u7f6e Linux bridge \u4ee3\u7406\uff1a vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini [linux_bridge] physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME [vxlan] enable_vxlan = true local_ip = OVERLAY_INTERFACE_IP_ADDRESS l2_population = true [securitygroup] enable_security_group = true firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver \u89e3\u91ca [linux_bridge]\u90e8\u5206\uff0c\u6620\u5c04 provider \u865a\u62df\u7f51\u7edc\u5230\u7269\u7406\u7f51\u7edc\u63a5\u53e3\uff1b [vxlan]\u90e8\u5206\uff0c\u542f\u7528 vxlan \u8986\u76d6\u7f51\u7edc\uff0c\u914d\u7f6e\u5904\u7406\u8986\u76d6\u7f51\u7edc\u7684\u7269\u7406\u7f51\u7edc\u63a5\u53e3 IP \u5730\u5740\uff0c\u542f\u7528 layer-2 
population\uff1b [securitygroup]\u90e8\u5206\uff0c\u5141\u8bb8\u5b89\u5168\u7ec4\uff0c\u914d\u7f6e linux bridge iptables \u9632\u706b\u5899\u9a71\u52a8\u3002 \u6ce8\u610f \u66ff\u6362 PROVIDER_INTERFACE_NAME \u4e3a\u7269\u7406\u7f51\u7edc\u63a5\u53e3\uff1b \u66ff\u6362 OVERLAY_INTERFACE_IP_ADDRESS \u4e3a\u63a7\u5236\u8282\u70b9\u7684\u7ba1\u7406IP\u5730\u5740\u3002 \u914d\u7f6eLayer-3\u4ee3\u7406\uff1a vim /etc/neutron/l3_agent.ini (CTL) [DEFAULT] interface_driver = linuxbridge \u89e3\u91ca \u5728[default]\u90e8\u5206\uff0c\u914d\u7f6e\u63a5\u53e3\u9a71\u52a8\u4e3alinuxbridge \u914d\u7f6eDHCP\u4ee3\u7406\uff1a vim /etc/neutron/dhcp_agent.ini (CTL) [DEFAULT] interface_driver = linuxbridge dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq enable_isolated_metadata = true \u89e3\u91ca [default]\u90e8\u5206\uff0c\u914d\u7f6elinuxbridge\u63a5\u53e3\u9a71\u52a8\u3001Dnsmasq DHCP\u9a71\u52a8\uff0c\u542f\u7528\u9694\u79bb\u7684\u5143\u6570\u636e\u3002 \u914d\u7f6emetadata\u4ee3\u7406\uff1a vim /etc/neutron/metadata_agent.ini (CTL) [DEFAULT] nova_metadata_host = controller metadata_proxy_shared_secret = METADATA_SECRET \u89e3\u91ca [default]\u90e8\u5206\uff0c\u914d\u7f6e\u5143\u6570\u636e\u4e3b\u673a\u548cshared secret\u3002 \u6ce8\u610f \u66ff\u6362 METADATA_SECRET \u4e3a\u5408\u9002\u7684\u5143\u6570\u636e\u4ee3\u7406secret\u3002 \u914d\u7f6enova\u76f8\u5173\u914d\u7f6e vim /etc/nova/nova.conf [neutron] auth_url = http://controller:5000 auth_type = password project_domain_name = Default user_domain_name = Default region_name = RegionOne project_name = service username = neutron password = NEUTRON_PASS service_metadata_proxy = true (CTL) metadata_proxy_shared_secret = METADATA_SECRET (CTL) \u89e3\u91ca [neutron]\u90e8\u5206\uff0c\u914d\u7f6e\u8bbf\u95ee\u53c2\u6570\uff0c\u542f\u7528\u5143\u6570\u636e\u4ee3\u7406\uff0c\u914d\u7f6esecret\u3002 \u6ce8\u610f \u66ff\u6362 NEUTRON_PASS \u4e3a neutron \u7528\u6237\u7684\u5bc6\u7801\uff1b \u66ff\u6362 METADATA_SECRET \u4e3a\u5408\u9002\u7684\u5143\u6570\u636e\u4ee3\u7406secret\u3002 \u540c\u6b65\u6570\u636e\u5e93\uff1a su -s /bin/sh -c \"neutron-db-manage --config-file /etc/neutron/neutron.conf \\ --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head\" neutron \u91cd\u542f\u8ba1\u7b97API\u670d\u52a1\uff1a systemctl restart openstack-nova-api.service \u542f\u52a8\u7f51\u7edc\u670d\u52a1 systemctl enable neutron-server.service neutron-linuxbridge-agent.service \\ (CTL) neutron-dhcp-agent.service neutron-metadata-agent.service \\ systemctl enable neutron-l3-agent.service systemctl restart openstack-nova-api.service neutron-server.service (CTL) neutron-linuxbridge-agent.service neutron-dhcp-agent.service \\ neutron-metadata-agent.service neutron-l3-agent.service systemctl enable neutron-linuxbridge-agent.service (CPT) systemctl restart neutron-linuxbridge-agent.service openstack-nova-compute.service (CPT) \u9a8c\u8bc1 \u9a8c\u8bc1 neutron \u4ee3\u7406\u542f\u52a8\u6210\u529f\uff1a openstack network agent list","title":"Neutron \u5b89\u88c5"},{"location":"install/openEuler-21.09/OpenStack-wallaby/#cinder","text":"\u521b\u5efa\u6570\u636e\u5e93\u3001\u670d\u52a1\u51ed\u8bc1\u548c API \u7aef\u70b9 \u521b\u5efa\u6570\u636e\u5e93\uff1a mysql -u root -p MariaDB [(none)]> CREATE DATABASE cinder; MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \\ IDENTIFIED BY 'CINDER_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \\ IDENTIFIED BY 'CINDER_DBPASS'; MariaDB [(none)]> exit \u6ce8\u610f \u66ff\u6362 
CINDER_DBPASS \u4e3acinder\u6570\u636e\u5e93\u8bbe\u7f6e\u5bc6\u7801\u3002 source ~/.admin-openrc \u521b\u5efacinder\u670d\u52a1\u51ed\u8bc1\uff1a openstack user create --domain default --password-prompt cinder openstack role add --project service --user cinder admin openstack service create --name cinderv2 --description \"OpenStack Block Storage\" volumev2 openstack service create --name cinderv3 --description \"OpenStack Block Storage\" volumev3 \u521b\u5efa\u5757\u5b58\u50a8\u670d\u52a1API\u7aef\u70b9\uff1a openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\\(project_id\\)s openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\\(project_id\\)s openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\\(project_id\\)s openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\\(project_id\\)s openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\\(project_id\\)s openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\\(project_id\\)s \u5b89\u88c5\u8f6f\u4ef6\u5305\uff1a yum install openstack-cinder-api openstack-cinder-scheduler (CTL) yum install lvm2 device-mapper-persistent-data scsi-target-utils rpcbind nfs-utils \\ (STG) openstack-cinder-volume openstack-cinder-backup \u51c6\u5907\u5b58\u50a8\u8bbe\u5907\uff0c\u4ee5\u4e0b\u4ec5\u4e3a\u793a\u4f8b\uff1a pvcreate /dev/vdb vgcreate cinder-volumes /dev/vdb vim /etc/lvm/lvm.conf devices { ... filter = [ \"a/vdb/\", \"r/.*/\"] \u89e3\u91ca \u5728devices\u90e8\u5206\uff0c\u6dfb\u52a0\u8fc7\u6ee4\u4ee5\u63a5\u53d7/dev/vdb\u8bbe\u5907\u62d2\u7edd\u5176\u4ed6\u8bbe\u5907\u3002 \u51c6\u5907NFS mkdir -p /root/cinder/backup cat << EOF >> /etc/export /root/cinder/backup 192.168.1.0/24(rw,sync,no_root_squash,no_all_squash) EOF \u914d\u7f6ecinder\u76f8\u5173\u914d\u7f6e\uff1a vim /etc/cinder/cinder.conf [DEFAULT] transport_url = rabbit://openstack:RABBIT_PASS@controller auth_strategy = keystone my_ip = 10.0.0.11 enabled_backends = lvm (STG) backup_driver=cinder.backup.drivers.nfs.NFSBackupDriver (STG) backup_share=HOST:PATH (STG) [database] connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder [keystone_authtoken] www_authenticate_uri = http://controller:5000 auth_url = http://controller:5000 memcached_servers = controller:11211 auth_type = password project_domain_name = Default user_domain_name = Default project_name = service username = cinder password = CINDER_PASS [oslo_concurrency] lock_path = /var/lib/cinder/tmp [lvm] volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver (STG) volume_group = cinder-volumes (STG) iscsi_protocol = iscsi (STG) iscsi_helper = tgtadm (STG) \u89e3\u91ca [database]\u90e8\u5206\uff0c\u914d\u7f6e\u6570\u636e\u5e93\u5165\u53e3\uff1b [DEFAULT]\u90e8\u5206\uff0c\u914d\u7f6eRabbitMQ\u6d88\u606f\u961f\u5217\u5165\u53e3\uff0c\u914d\u7f6emy_ip\uff1b [DEFAULT] [keystone_authtoken]\u90e8\u5206\uff0c\u914d\u7f6e\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u5165\u53e3\uff1b [oslo_concurrency]\u90e8\u5206\uff0c\u914d\u7f6elock path\u3002 \u6ce8\u610f \u66ff\u6362 CINDER_DBPASS \u4e3a cinder \u6570\u636e\u5e93\u7684\u5bc6\u7801\uff1b \u66ff\u6362 RABBIT_PASS \u4e3a RabbitMQ \u4e2d openstack \u8d26\u6237\u7684\u5bc6\u7801\uff1b \u914d\u7f6e my_ip \u4e3a\u63a7\u5236\u8282\u70b9\u7684\u7ba1\u7406 IP \u5730\u5740\uff1b \u66ff\u6362 CINDER_PASS \u4e3a cinder \u7528\u6237\u7684\u5bc6\u7801\uff1b \u66ff\u6362 HOST:PATH \u4e3a 
NFS\u7684HOSTIP\u548c\u5171\u4eab\u8def\u5f84\uff1b \u540c\u6b65\u6570\u636e\u5e93\uff1a su -s /bin/sh -c \"cinder-manage db sync\" cinder (CTL) \u914d\u7f6enova\uff1a vim /etc/nova/nova.conf (CTL) [cinder] os_region_name = RegionOne \u91cd\u542f\u8ba1\u7b97API\u670d\u52a1 systemctl restart openstack-nova-api.service \u542f\u52a8cinder\u670d\u52a1 systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service (CTL) systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service (CTL) systemctl enable rpcbind.service nfs-server.service tgtd.service iscsid.service \\ (STG) openstack-cinder-volume.service \\ openstack-cinder-backup.service systemctl start rpcbind.service nfs-server.service tgtd.service iscsid.service \\ (STG) openstack-cinder-volume.service \\ openstack-cinder-backup.service \u6ce8\u610f \u5f53cinder\u4f7f\u7528tgtadm\u7684\u65b9\u5f0f\u6302\u5377\u7684\u65f6\u5019\uff0c\u8981\u4fee\u6539/etc/tgt/tgtd.conf\uff0c\u5185\u5bb9\u5982\u4e0b\uff0c\u4fdd\u8bc1tgtd\u53ef\u4ee5\u53d1\u73b0cinder-volume\u7684iscsi target\u3002 include /var/lib/cinder/volumes/* \u9a8c\u8bc1 source ~/.admin-openrc openstack volume service list","title":"Cinder \u5b89\u88c5"},{"location":"install/openEuler-21.09/OpenStack-wallaby/#horizon","text":"\u5b89\u88c5\u8f6f\u4ef6\u5305 yum install openstack-dashboard \u4fee\u6539\u6587\u4ef6 \u4fee\u6539\u53d8\u91cf vim /etc/openstack-dashboard/local_settings OPENSTACK_HOST = \"controller\" ALLOWED_HOSTS = ['*', ] SESSION_ENGINE = 'django.contrib.sessions.backends.cache' CACHES = { 'default': { 'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache', 'LOCATION': 'controller:11211', } } OPENSTACK_KEYSTONE_URL = \"http://%s:5000/v3\" % OPENSTACK_HOST OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = \"Default\" OPENSTACK_KEYSTONE_DEFAULT_ROLE = \"user\" OPENSTACK_API_VERSIONS = { \"identity\": 3, \"image\": 2, \"volume\": 3, } \u91cd\u542f httpd \u670d\u52a1 systemctl restart httpd.service memcached.service \u9a8c\u8bc1 \u6253\u5f00\u6d4f\u89c8\u5668\uff0c\u8f93\u5165\u7f51\u5740 http://HOSTIP/dashboard/ \uff0c\u767b\u5f55 horizon\u3002 \u6ce8\u610f \u66ff\u6362HOSTIP\u4e3a\u63a7\u5236\u8282\u70b9\u7ba1\u7406\u5e73\u9762IP\u5730\u5740","title":"horizon \u5b89\u88c5"},{"location":"install/openEuler-21.09/OpenStack-wallaby/#tempest","text":"Tempest\u662fOpenStack\u7684\u96c6\u6210\u6d4b\u8bd5\u670d\u52a1\uff0c\u5982\u679c\u7528\u6237\u9700\u8981\u5168\u9762\u81ea\u52a8\u5316\u6d4b\u8bd5\u5df2\u5b89\u88c5\u7684OpenStack\u73af\u5883\u7684\u529f\u80fd,\u5219\u63a8\u8350\u4f7f\u7528\u8be5\u7ec4\u4ef6\u3002\u5426\u5219\uff0c\u53ef\u4ee5\u4e0d\u7528\u5b89\u88c5\u3002 \u5b89\u88c5Tempest yum install openstack-tempest \u521d\u59cb\u5316\u76ee\u5f55 tempest init mytest \u4fee\u6539\u914d\u7f6e\u6587\u4ef6\u3002 cd mytest vi etc/tempest.conf tempest.conf\u4e2d\u9700\u8981\u914d\u7f6e\u5f53\u524dOpenStack\u73af\u5883\u7684\u4fe1\u606f\uff0c\u5177\u4f53\u5185\u5bb9\u53ef\u4ee5\u53c2\u8003 \u5b98\u65b9\u793a\u4f8b \u6267\u884c\u6d4b\u8bd5 tempest run \u5b89\u88c5tempest\u6269\u5c55\uff08\u53ef\u9009\uff09 
OpenStack\u5404\u4e2a\u670d\u52a1\u672c\u8eab\u4e5f\u63d0\u4f9b\u4e86\u4e00\u4e9btempest\u6d4b\u8bd5\u5305\uff0c\u7528\u6237\u53ef\u4ee5\u5b89\u88c5\u8fd9\u4e9b\u5305\u6765\u4e30\u5bcctempest\u7684\u6d4b\u8bd5\u5185\u5bb9\u3002\u5728Wallaby\u4e2d\uff0c\u6211\u4eec\u63d0\u4f9b\u4e86Cinder\u3001Glance\u3001Keystone\u3001Ironic\u3001Trove\u7684\u6269\u5c55\u6d4b\u8bd5\uff0c\u7528\u6237\u53ef\u4ee5\u6267\u884c\u5982\u4e0b\u547d\u4ee4\u8fdb\u884c\u5b89\u88c5\u4f7f\u7528\uff1a yum install python3-cinder-tempest-plugin python3-glance-tempest-plugin python3-ironic-tempest-plugin python3-keystone-tempest-plugin python3-trove-tempest-plugin","title":"Tempest \u5b89\u88c5"},{"location":"install/openEuler-21.09/OpenStack-wallaby/#ironic","text":"Ironic\u662fOpenStack\u7684\u88f8\u91d1\u5c5e\u670d\u52a1\uff0c\u5982\u679c\u7528\u6237\u9700\u8981\u8fdb\u884c\u88f8\u673a\u90e8\u7f72\u5219\u63a8\u8350\u4f7f\u7528\u8be5\u7ec4\u4ef6\u3002\u5426\u5219\uff0c\u53ef\u4ee5\u4e0d\u7528\u5b89\u88c5\u3002 \u8bbe\u7f6e\u6570\u636e\u5e93 \u88f8\u91d1\u5c5e\u670d\u52a1\u5728\u6570\u636e\u5e93\u4e2d\u5b58\u50a8\u4fe1\u606f\uff0c\u521b\u5efa\u4e00\u4e2a ironic \u7528\u6237\u53ef\u4ee5\u8bbf\u95ee\u7684 ironic \u6570\u636e\u5e93\uff0c\u66ff\u6362 IRONIC_DBPASSWORD \u4e3a\u5408\u9002\u7684\u5bc6\u7801 mysql -u root -p MariaDB [(none)]> CREATE DATABASE ironic CHARACTER SET utf8; MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'localhost' \\ IDENTIFIED BY 'IRONIC_DBPASSWORD'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'%' \\ IDENTIFIED BY 'IRONIC_DBPASSWORD'; \u521b\u5efa\u670d\u52a1\u7528\u6237\u8ba4\u8bc1 1\u3001\u521b\u5efaBare Metal\u670d\u52a1\u7528\u6237 openstack user create --password IRONIC_PASSWORD \\ --email ironic@example.com ironic openstack role add --project service --user ironic admin openstack service create --name ironic --description \"Ironic baremetal provisioning service\" baremetal openstack service create --name ironic-inspector --description \"Ironic inspector baremetal provisioning service\" baremetal-introspection openstack user create --password IRONIC_INSPECTOR_PASSWORD --email ironic_inspector@example.com ironic_inspector openstack role add --project service --user ironic-inspector admin 2\u3001\u521b\u5efaBare Metal\u670d\u52a1\u8bbf\u95ee\u5165\u53e3 openstack endpoint create --region RegionOne baremetal admin http://$IRONIC_NODE:6385 openstack endpoint create --region RegionOne baremetal public http://$IRONIC_NODE:6385 openstack endpoint create --region RegionOne baremetal internal http://$IRONIC_NODE:6385 openstack endpoint create --region RegionOne baremetal-introspection internal http://172.20.19.13:5050/v1 openstack endpoint create --region RegionOne baremetal-introspection public http://172.20.19.13:5050/v1 openstack endpoint create --region RegionOne baremetal-introspection admin http://172.20.19.13:5050/v1 \u914d\u7f6eironic-api\u670d\u52a1 \u914d\u7f6e\u6587\u4ef6\u8def\u5f84/etc/ironic/ironic.conf 1\u3001\u901a\u8fc7 connection \u9009\u9879\u914d\u7f6e\u6570\u636e\u5e93\u7684\u4f4d\u7f6e\uff0c\u5982\u4e0b\u6240\u793a\uff0c\u66ff\u6362 IRONIC_DBPASSWORD \u4e3a ironic \u7528\u6237\u7684\u5bc6\u7801\uff0c\u66ff\u6362 DB_IP \u4e3aDB\u670d\u52a1\u5668\u6240\u5728\u7684IP\u5730\u5740\uff1a [database] # The SQLAlchemy connection string used to connect to the # database (string value) connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic 
2\u3001\u901a\u8fc7\u4ee5\u4e0b\u9009\u9879\u914d\u7f6eironic-api\u670d\u52a1\u4f7f\u7528RabbitMQ\u6d88\u606f\u4ee3\u7406\uff0c\u66ff\u6362 RPC_* \u4e3aRabbitMQ\u7684\u8be6\u7ec6\u5730\u5740\u548c\u51ed\u8bc1 [DEFAULT] # A URL representing the messaging driver to use and its full # configuration. (string value) transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/ \u7528\u6237\u4e5f\u53ef\u81ea\u884c\u4f7f\u7528json-rpc\u65b9\u5f0f\u66ff\u6362rabbitmq 3\u3001\u914d\u7f6eironic-api\u670d\u52a1\u4f7f\u7528\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u7684\u51ed\u8bc1\uff0c\u66ff\u6362 PUBLIC_IDENTITY_IP \u4e3a\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u5668\u7684\u516c\u5171IP\uff0c\u66ff\u6362 PRIVATE_IDENTITY_IP \u4e3a\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u5668\u7684\u79c1\u6709IP\uff0c\u66ff\u6362 IRONIC_PASSWORD \u4e3a\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u4e2d ironic \u7528\u6237\u7684\u5bc6\u7801\uff1a [DEFAULT] # Authentication strategy used by ironic-api: one of # \"keystone\" or \"noauth\". \"noauth\" should not be used in a # production environment because all authentication will be # disabled. (string value) auth_strategy=keystone host = controller memcache_servers = controller:11211 enabled_network_interfaces = flat,noop,neutron default_network_interface = noop transport_url = rabbit://openstack:RABBITPASSWD@controller:5672/ enabled_hardware_types = ipmi enabled_boot_interfaces = pxe enabled_deploy_interfaces = direct default_deploy_interface = direct enabled_inspect_interfaces = inspector enabled_management_interfaces = ipmitool enabled_power_interfaces = ipmitool enabled_rescue_interfaces = no-rescue,agent isolinux_bin = /usr/share/syslinux/isolinux.bin logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s [keystone_authtoken] # Authentication type to load (string value) auth_type=password # Complete public Identity API endpoint (string value) www_authenticate_uri=http://PUBLIC_IDENTITY_IP:5000 # Complete admin Identity API endpoint. (string value) auth_url=http://PRIVATE_IDENTITY_IP:5000 # Service username. (string value) username=ironic # Service account password. (string value) password=IRONIC_PASSWORD # Service tenant name. (string value) project_name=service # Domain name containing project (string value) project_domain_name=Default # User's domain name (string value) user_domain_name=Default [agent] deploy_logs_collect = always deploy_logs_local_path = /var/log/ironic/deploy deploy_logs_storage_backend = local image_download_source = http stream_raw_images = false force_raw_images = false verify_ca = False [oslo_concurrency] [oslo_messaging_notifications] transport_url = rabbit://openstack:123456@172.20.19.25:5672/ topics = notifications driver = messagingv2 [oslo_messaging_rabbit] amqp_durable_queues = True rabbit_ha_queues = True [pxe] ipxe_enabled = false pxe_append_params = nofb nomodeset vga=normal coreos.autologin ipa-insecure=1 image_cache_size = 204800 tftp_root=/var/lib/tftpboot/cephfs/ tftp_master_path=/var/lib/tftpboot/cephfs/master_images [dhcp] dhcp_provider = none 4\u3001\u521b\u5efa\u88f8\u91d1\u5c5e\u670d\u52a1\u6570\u636e\u5e93\u8868 ironic-dbsync --config-file /etc/ironic/ironic.conf create_schema 5\u3001\u91cd\u542fironic-api\u670d\u52a1 sudo systemctl restart openstack-ironic-api \u914d\u7f6eironic-conductor\u670d\u52a1 1\u3001\u66ff\u6362 HOST_IP \u4e3aconductor host\u7684IP [DEFAULT] # IP address of this host. 
If unset, will determine the IP # programmatically. If unable to do so, will use \"127.0.0.1\". # (string value) my_ip=HOST_IP 2\u3001\u914d\u7f6e\u6570\u636e\u5e93\u7684\u4f4d\u7f6e\uff0cironic-conductor\u5e94\u8be5\u4f7f\u7528\u548cironic-api\u76f8\u540c\u7684\u914d\u7f6e\u3002\u66ff\u6362 IRONIC_DBPASSWORD \u4e3a ironic \u7528\u6237\u7684\u5bc6\u7801\uff0c\u66ff\u6362DB_IP\u4e3aDB\u670d\u52a1\u5668\u6240\u5728\u7684IP\u5730\u5740\uff1a [database] # The SQLAlchemy connection string to use to connect to the # database. (string value) connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic 3\u3001\u901a\u8fc7\u4ee5\u4e0b\u9009\u9879\u914d\u7f6eironic-api\u670d\u52a1\u4f7f\u7528RabbitMQ\u6d88\u606f\u4ee3\u7406\uff0cironic-conductor\u5e94\u8be5\u4f7f\u7528\u548cironic-api\u76f8\u540c\u7684\u914d\u7f6e\uff0c\u66ff\u6362 RPC_* \u4e3aRabbitMQ\u7684\u8be6\u7ec6\u5730\u5740\u548c\u51ed\u8bc1 [DEFAULT] # A URL representing the messaging driver to use and its full # configuration. (string value) transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/ \u7528\u6237\u4e5f\u53ef\u81ea\u884c\u4f7f\u7528json-rpc\u65b9\u5f0f\u66ff\u6362rabbitmq 4\u3001\u914d\u7f6e\u51ed\u8bc1\u8bbf\u95ee\u5176\u4ed6OpenStack\u670d\u52a1 \u4e3a\u4e86\u4e0e\u5176\u4ed6OpenStack\u670d\u52a1\u8fdb\u884c\u901a\u4fe1\uff0c\u88f8\u91d1\u5c5e\u670d\u52a1\u5728\u8bf7\u6c42\u5176\u4ed6\u670d\u52a1\u65f6\u9700\u8981\u4f7f\u7528\u670d\u52a1\u7528\u6237\u4e0eOpenStack Identity\u670d\u52a1\u8fdb\u884c\u8ba4\u8bc1\u3002\u8fd9\u4e9b\u7528\u6237\u7684\u51ed\u636e\u5fc5\u987b\u5728\u4e0e\u76f8\u5e94\u670d\u52a1\u76f8\u5173\u7684\u6bcf\u4e2a\u914d\u7f6e\u6587\u4ef6\u4e2d\u8fdb\u884c\u914d\u7f6e\u3002 [neutron] - \u8bbf\u95eeOpenStack\u7f51\u7edc\u670d\u52a1 [glance] - \u8bbf\u95eeOpenStack\u955c\u50cf\u670d\u52a1 [swift] - \u8bbf\u95eeOpenStack\u5bf9\u8c61\u5b58\u50a8\u670d\u52a1 [cinder] - \u8bbf\u95eeOpenStack\u5757\u5b58\u50a8\u670d\u52a1 [inspector] - \u8bbf\u95eeOpenStack\u88f8\u91d1\u5c5eintrospection\u670d\u52a1 [service_catalog] - \u4e00\u4e2a\u7279\u6b8a\u9879\u7528\u4e8e\u4fdd\u5b58\u88f8\u91d1\u5c5e\u670d\u52a1\u4f7f\u7528\u7684\u51ed\u8bc1\uff0c\u8be5\u51ed\u8bc1\u7528\u4e8e\u53d1\u73b0\u6ce8\u518c\u5728OpenStack\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u76ee\u5f55\u4e2d\u7684\u81ea\u5df1\u7684API URL\u7aef\u70b9 \u7b80\u5355\u8d77\u89c1\uff0c\u53ef\u4ee5\u5bf9\u6240\u6709\u670d\u52a1\u4f7f\u7528\u540c\u4e00\u4e2a\u670d\u52a1\u7528\u6237\u3002\u4e3a\u4e86\u5411\u540e\u517c\u5bb9\uff0c\u8be5\u7528\u6237\u5e94\u8be5\u548cironic-api\u670d\u52a1\u7684[keystone_authtoken]\u6240\u914d\u7f6e\u7684\u4e3a\u540c\u4e00\u4e2a\u7528\u6237\u3002\u4f46\u8fd9\u4e0d\u662f\u5fc5\u987b\u7684\uff0c\u4e5f\u53ef\u4ee5\u4e3a\u6bcf\u4e2a\u670d\u52a1\u521b\u5efa\u5e76\u914d\u7f6e\u4e0d\u540c\u7684\u670d\u52a1\u7528\u6237\u3002 \u5728\u4e0b\u9762\u7684\u793a\u4f8b\u4e2d\uff0c\u7528\u6237\u8bbf\u95eeOpenStack\u7f51\u7edc\u670d\u52a1\u7684\u8eab\u4efd\u9a8c\u8bc1\u4fe1\u606f\u914d\u7f6e\u4e3a\uff1a \u7f51\u7edc\u670d\u52a1\u90e8\u7f72\u5728\u540d\u4e3aRegionOne\u7684\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u57df\u4e2d\uff0c\u4ec5\u5728\u670d\u52a1\u76ee\u5f55\u4e2d\u6ce8\u518c\u516c\u5171\u7aef\u70b9\u63a5\u53e3 \u8bf7\u6c42\u65f6\u4f7f\u7528\u7279\u5b9a\u7684CA SSL\u8bc1\u4e66\u8fdb\u884cHTTPS\u8fde\u63a5 \u4e0eironic-api\u670d\u52a1\u914d\u7f6e\u76f8\u540c\u7684\u670d\u52a1\u7528\u6237 
\u52a8\u6001\u5bc6\u7801\u8ba4\u8bc1\u63d2\u4ef6\u57fa\u4e8e\u5176\u4ed6\u9009\u9879\u53d1\u73b0\u5408\u9002\u7684\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1API\u7248\u672c [neutron] # Authentication type to load (string value) auth_type = password # Authentication URL (string value) auth_url=https://IDENTITY_IP:5000/ # Username (string value) username=ironic # User's password (string value) password=IRONIC_PASSWORD # Project name to scope to (string value) project_name=service # Domain ID containing project (string value) project_domain_id=default # User's domain id (string value) user_domain_id=default # PEM encoded Certificate Authority to use when verifying # HTTPs connections. (string value) cafile=/opt/stack/data/ca-bundle.pem # The default region_name for endpoint URL discovery. (string # value) region_name = RegionOne # List of interfaces, in order of preference, for endpoint # URL. (list value) valid_interfaces=public \u9ed8\u8ba4\u60c5\u51b5\u4e0b\uff0c\u4e3a\u4e86\u4e0e\u5176\u4ed6\u670d\u52a1\u8fdb\u884c\u901a\u4fe1\uff0c\u88f8\u91d1\u5c5e\u670d\u52a1\u4f1a\u5c1d\u8bd5\u901a\u8fc7\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u7684\u670d\u52a1\u76ee\u5f55\u53d1\u73b0\u8be5\u670d\u52a1\u5408\u9002\u7684\u7aef\u70b9\u3002\u5982\u679c\u5e0c\u671b\u5bf9\u4e00\u4e2a\u7279\u5b9a\u670d\u52a1\u4f7f\u7528\u4e00\u4e2a\u4e0d\u540c\u7684\u7aef\u70b9\uff0c\u5219\u5728\u88f8\u91d1\u5c5e\u670d\u52a1\u7684\u914d\u7f6e\u6587\u4ef6\u4e2d\u901a\u8fc7endpoint_override\u9009\u9879\u8fdb\u884c\u6307\u5b9a\uff1a [neutron] ... endpoint_override = 5\u3001\u914d\u7f6e\u5141\u8bb8\u7684\u9a71\u52a8\u7a0b\u5e8f\u548c\u786c\u4ef6\u7c7b\u578b \u901a\u8fc7\u8bbe\u7f6eenabled_hardware_types\u8bbe\u7f6eironic-conductor\u670d\u52a1\u5141\u8bb8\u4f7f\u7528\u7684\u786c\u4ef6\u7c7b\u578b\uff1a [DEFAULT] enabled_hardware_types = ipmi \u914d\u7f6e\u786c\u4ef6\u63a5\u53e3\uff1a enabled_boot_interfaces = pxe enabled_deploy_interfaces = direct,iscsi enabled_inspect_interfaces = inspector enabled_management_interfaces = ipmitool enabled_power_interfaces = ipmitool \u914d\u7f6e\u63a5\u53e3\u9ed8\u8ba4\u503c\uff1a [DEFAULT] default_deploy_interface = direct default_network_interface = neutron \u5982\u679c\u542f\u7528\u4e86\u4efb\u4f55\u4f7f\u7528Direct deploy\u7684\u9a71\u52a8\uff0c\u5fc5\u987b\u5b89\u88c5\u548c\u914d\u7f6e\u955c\u50cf\u670d\u52a1\u7684Swift\u540e\u7aef\u3002Ceph\u5bf9\u8c61\u7f51\u5173(RADOS\u7f51\u5173)\u4e5f\u652f\u6301\u4f5c\u4e3a\u955c\u50cf\u670d\u52a1\u7684\u540e\u7aef\u3002 6\u3001\u91cd\u542fironic-conductor\u670d\u52a1 sudo systemctl restart openstack-ironic-conductor \u914d\u7f6eironic-inspector\u670d\u52a1 \u914d\u7f6e\u6587\u4ef6\u8def\u5f84/etc/ironic-inspector/inspector.conf 1\u3001\u521b\u5efa\u6570\u636e\u5e93 # mysql -u root -p MariaDB [(none)]> CREATE DATABASE ironic_inspector CHARACTER SET utf8; MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic_inspector.* TO 'ironic_inspector'@'localhost' \\ IDENTIFIED BY 'IRONIC_INSPECTOR_DBPASSWORD'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic_inspector.* TO 'ironic_inspector'@'%' \\ IDENTIFIED BY 'IRONIC_INSPECTOR_DBPASSWORD'; 2\u3001\u901a\u8fc7 connection \u9009\u9879\u914d\u7f6e\u6570\u636e\u5e93\u7684\u4f4d\u7f6e\uff0c\u5982\u4e0b\u6240\u793a\uff0c\u66ff\u6362 IRONIC_INSPECTOR_DBPASSWORD \u4e3a ironic_inspector \u7528\u6237\u7684\u5bc6\u7801\uff0c\u66ff\u6362 DB_IP \u4e3aDB\u670d\u52a1\u5668\u6240\u5728\u7684IP\u5730\u5740\uff1a [database] backend = sqlalchemy connection = mysql+pymysql://ironic_inspector:IRONIC_INSPECTOR_DBPASSWORD@DB_IP/ironic_inspector 
min_pool_size = 100 max_pool_size = 500 pool_timeout = 30 max_retries = 5 max_overflow = 200 db_retry_interval = 2 db_inc_retry_interval = True db_max_retry_interval = 2 db_max_retries = 5 3\u3001\u914d\u7f6e\u6d88\u606f\u5ea6\u5217\u901a\u4fe1\u5730\u5740 [DEFAULT] transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/ 4\u3001\u8bbe\u7f6ekeystone\u8ba4\u8bc1 [DEFAULT] auth_strategy = keystone timeout = 900 rootwrap_config = /etc/ironic-inspector/rootwrap.conf logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_dir = /var/log/ironic-inspector state_path = /var/lib/ironic-inspector use_stderr = False [ironic] api_endpoint = http://IRONIC_API_HOST_ADDRRESS:6385 auth_type = password auth_url = http://PUBLIC_IDENTITY_IP:5000 auth_strategy = keystone ironic_url = http://IRONIC_API_HOST_ADDRRESS:6385 os_region = RegionOne project_name = service project_domain_name = Default user_domain_name = Default username = IRONIC_SERVICE_USER_NAME password = IRONIC_SERVICE_USER_PASSWORD [keystone_authtoken] auth_type = password auth_url = http://control:5000 www_authenticate_uri = http://control:5000 project_domain_name = default user_domain_name = default project_name = service username = ironic_inspector password = IRONICPASSWD region_name = RegionOne memcache_servers = control:11211 token_cache_time = 300 [processing] add_ports = active processing_hooks = $default_processing_hooks,local_link_connection,lldp_basic ramdisk_logs_dir = /var/log/ironic-inspector/ramdisk always_store_ramdisk_logs = true store_data =none power_off = false [pxe_filter] driver = iptables [capabilities] boot_mode=True 5\u3001\u914d\u7f6eironic inspector dnsmasq\u670d\u52a1 # \u914d\u7f6e\u6587\u4ef6\u5730\u5740\uff1a/etc/ironic-inspector/dnsmasq.conf port=0 interface=enp3s0 #\u66ff\u6362\u4e3a\u5b9e\u9645\u76d1\u542c\u7f51\u7edc\u63a5\u53e3 dhcp-range=172.20.19.100,172.20.19.110 #\u66ff\u6362\u4e3a\u5b9e\u9645dhcp\u5730\u5740\u8303\u56f4 bind-interfaces enable-tftp dhcp-match=set:efi,option:client-arch,7 dhcp-match=set:efi,option:client-arch,9 dhcp-match=aarch64, option:client-arch,11 dhcp-boot=tag:aarch64,grubaa64.efi dhcp-boot=tag:!aarch64,tag:efi,grubx64.efi dhcp-boot=tag:!aarch64,tag:!efi,pxelinux.0 tftp-root=/tftpboot #\u66ff\u6362\u4e3a\u5b9e\u9645tftpboot\u76ee\u5f55 log-facility=/var/log/dnsmasq.log 6\u3001\u5173\u95edironic provision\u7f51\u7edc\u5b50\u7f51\u7684dhcp openstack subnet set --no-dhcp 72426e89-f552-4dc4-9ac7-c4e131ce7f3c 7\u3001\u521d\u59cb\u5316ironic-inspector\u670d\u52a1\u7684\u6570\u636e\u5e93 \u5728\u63a7\u5236\u8282\u70b9\u6267\u884c\uff1a ironic-inspector-dbsync --config-file /etc/ironic-inspector/inspector.conf upgrade 8\u3001\u542f\u52a8\u670d\u52a1 systemctl enable --now openstack-ironic-inspector.service systemctl enable --now openstack-ironic-inspector-dnsmasq.service \u914d\u7f6ehttpd\u670d\u52a1 \u521b\u5efaironic\u8981\u4f7f\u7528\u7684httpd\u7684root\u76ee\u5f55\u5e76\u8bbe\u7f6e\u5c5e\u4e3b\u5c5e\u7ec4\uff0c\u76ee\u5f55\u8def\u5f84\u8981\u548c/etc/ironic/ironic.conf\u4e2d[deploy]\u7ec4\u4e2dhttp_root \u914d\u7f6e\u9879\u6307\u5b9a\u7684\u8def\u5f84\u8981\u4e00\u81f4\u3002 mkdir -p /var/lib/ironic/httproot ``chown ironic.ironic /var/lib/ironic/httproot \u5b89\u88c5\u548c\u914d\u7f6ehttpd\u670d\u52a1 \u5b89\u88c5httpd\u670d\u52a1\uff0c\u5df2\u6709\u8bf7\u5ffd\u7565 yum install httpd -y 
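For reference, a minimal sketch of the matching [deploy] options in /etc/ironic/ironic.conf is shown below. It assumes the conductor's management IP is 10.0.0.11 and that httpd will listen on port 8080, as in the vhost file created next; adjust both values to your environment.

```shell
# Illustrative values only; keep them consistent with the httpd vhost below.
vim /etc/ironic/ironic.conf

[deploy]
# Must match the directory created above with mkdir/chown.
http_root = /var/lib/ironic/httproot
# Must match the address and port httpd actually listens on.
http_url = http://10.0.0.11:8080
```

If you change the port here, remember to change the Listen directive in the vhost file below as well.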
\u521b\u5efa/etc/httpd/conf.d/openstack-ironic-httpd.conf\u6587\u4ef6\uff0c\u5185\u5bb9\u5982\u4e0b\uff1a Listen 8080 ServerName ironic.openeuler.com ErrorLog \"/var/log/httpd/openstack-ironic-httpd-error_log\" CustomLog \"/var/log/httpd/openstack-ironic-httpd-access_log\" \"%h %l %u %t \\\"%r\\\" %>s %b\" DocumentRoot \"/var/lib/ironic/httproot\" Options Indexes FollowSymLinks Require all granted LogLevel warn AddDefaultCharset UTF-8 EnableSendfile on \u6ce8\u610f\u76d1\u542c\u7684\u7aef\u53e3\u8981\u548c/etc/ironic/ironic.conf\u91cc[deploy]\u9009\u9879\u4e2dhttp_url\u914d\u7f6e\u9879\u4e2d\u6307\u5b9a\u7684\u7aef\u53e3\u4e00\u81f4\u3002 \u91cd\u542fhttpd\u670d\u52a1\u3002 systemctl restart httpd deploy ramdisk\u955c\u50cf\u5236\u4f5c W\u7248\u7684ramdisk\u955c\u50cf\u652f\u6301\u901a\u8fc7ironic-python-agent\u670d\u52a1\u6216disk-image-builder\u5de5\u5177\u5236\u4f5c\uff0c\u4e5f\u53ef\u4ee5\u4f7f\u7528\u793e\u533a\u6700\u65b0\u7684ironic-python-agent-builder\u3002\u7528\u6237\u4e5f\u53ef\u4ee5\u81ea\u884c\u9009\u62e9\u5176\u4ed6\u5de5\u5177\u5236\u4f5c\u3002 \u82e5\u4f7f\u7528W\u7248\u539f\u751f\u5de5\u5177\uff0c\u5219\u9700\u8981\u5b89\u88c5\u5bf9\u5e94\u7684\u8f6f\u4ef6\u5305\u3002 yum install openstack-ironic-python-agent \u6216\u8005 yum install diskimage-builder \u5177\u4f53\u7684\u4f7f\u7528\u65b9\u6cd5\u53ef\u4ee5\u53c2\u8003 \u5b98\u65b9\u6587\u6863 \u8fd9\u91cc\u4ecb\u7ecd\u4e0b\u4f7f\u7528ironic-python-agent-builder\u6784\u5efaironic\u4f7f\u7528\u7684deploy\u955c\u50cf\u7684\u5b8c\u6574\u8fc7\u7a0b\u3002 \u5b89\u88c5 ironic-python-agent-builder 1. \u5b89\u88c5\u5de5\u5177\uff1a ```shell pip install ironic-python-agent-builder ``` 2. \u4fee\u6539\u4ee5\u4e0b\u6587\u4ef6\u4e2d\u7684python\u89e3\u91ca\u5668\uff1a ```shell /usr/bin/yum /usr/libexec/urlgrabber-ext-down ``` 3. 
\u5b89\u88c5\u5176\u5b83\u5fc5\u987b\u7684\u5de5\u5177\uff1a ```shell yum install git ``` \u7531\u4e8e`DIB`\u4f9d\u8d56`semanage`\u547d\u4ee4\uff0c\u6240\u4ee5\u5728\u5236\u4f5c\u955c\u50cf\u4e4b\u524d\u786e\u5b9a\u8be5\u547d\u4ee4\u662f\u5426\u53ef\u7528\uff1a`semanage --help`\uff0c\u5982\u679c\u63d0\u793a\u65e0\u6b64\u547d\u4ee4\uff0c\u5b89\u88c5\u5373\u53ef\uff1a ```shell # \u5148\u67e5\u8be2\u9700\u8981\u5b89\u88c5\u54ea\u4e2a\u5305 [root@localhost ~]# yum provides /usr/sbin/semanage \u5df2\u52a0\u8f7d\u63d2\u4ef6\uff1afastestmirror Loading mirror speeds from cached hostfile * base: mirror.vcu.edu * extras: mirror.vcu.edu * updates: mirror.math.princeton.edu policycoreutils-python-2.5-34.el7.aarch64 : SELinux policy core python utilities \u6e90 \uff1abase \u5339\u914d\u6765\u6e90\uff1a \u6587\u4ef6\u540d \uff1a/usr/sbin/semanage # \u5b89\u88c5 [root@localhost ~]# yum install policycoreutils-python ``` \u5236\u4f5c\u955c\u50cf \u5982\u679c\u662f`arm`\u67b6\u6784\uff0c\u9700\u8981\u6dfb\u52a0\uff1a ```shell export ARCH=aarch64 ``` \u57fa\u672c\u7528\u6cd5\uff1a ```shell usage: ironic-python-agent-builder [-h] [-r RELEASE] [-o OUTPUT] [-e ELEMENT] [-b BRANCH] [-v] [--extra-args EXTRA_ARGS] distribution positional arguments: distribution Distribution to use optional arguments: -h, --help show this help message and exit -r RELEASE, --release RELEASE Distribution release to use -o OUTPUT, --output OUTPUT Output base file name -e ELEMENT, --element ELEMENT Additional DIB element to use -b BRANCH, --branch BRANCH If set, override the branch that is used for ironic- python-agent and requirements -v, --verbose Enable verbose logging in diskimage-builder --extra-args EXTRA_ARGS Extra arguments to pass to diskimage-builder ``` \u4e3e\u4f8b\u8bf4\u660e\uff1a ```shell ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky ``` \u5141\u8bb8ssh\u767b\u5f55 \u521d\u59cb\u5316\u73af\u5883\u53d8\u91cf\uff0c\u7136\u540e\u5236\u4f5c\u955c\u50cf\uff1a ```shell export DIB_DEV_USER_USERNAME=ipa \\ export DIB_DEV_USER_PWDLESS_SUDO=yes \\ export DIB_DEV_USER_PASSWORD='123' ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky -e selinux-permissive -e devuser ``` \u6307\u5b9a\u4ee3\u7801\u4ed3\u5e93 \u521d\u59cb\u5316\u5bf9\u5e94\u7684\u73af\u5883\u53d8\u91cf\uff0c\u7136\u540e\u5236\u4f5c\u955c\u50cf\uff1a ```shell # \u6307\u5b9a\u4ed3\u5e93\u5730\u5740\u4ee5\u53ca\u7248\u672c DIB_REPOLOCATION_ironic_python_agent=git@172.20.2.149:liuzz/ironic-python-agent.git DIB_REPOREF_ironic_python_agent=origin/develop # \u76f4\u63a5\u4ecegerrit\u4e0aclone\u4ee3\u7801 DIB_REPOLOCATION_ironic_python_agent=https://review.opendev.org/openstack/ironic-python-agent DIB_REPOREF_ironic_python_agent=refs/changes/43/701043/1 ``` \u53c2\u8003\uff1a[source-repositories](https://docs.openstack.org/diskimage-builder/latest/elements/source-repositories/README.html)\u3002 \u6307\u5b9a\u4ed3\u5e93\u5730\u5740\u53ca\u7248\u672c\u9a8c\u8bc1\u6210\u529f\u3002 \u6ce8\u610f \u539f\u751f\u7684openstack\u91cc\u7684pxe\u914d\u7f6e\u6587\u4ef6\u7684\u6a21\u7248\u4e0d\u652f\u6301arm64\u67b6\u6784\uff0c\u9700\u8981\u81ea\u5df1\u5bf9\u539f\u751fopenstack\u4ee3\u7801\u8fdb\u884c\u4fee\u6539\uff1a \u5728W\u7248\u4e2d\uff0c\u793e\u533a\u7684ironic\u4ecd\u7136\u4e0d\u652f\u6301arm64\u4f4d\u7684uefi 
pxe\u542f\u52a8\uff0c\u8868\u73b0\u4e3a\u751f\u6210\u7684grub.cfg\u6587\u4ef6(\u4e00\u822c\u4f4d\u4e8e/tftpboot/\u4e0b)\u683c\u5f0f\u4e0d\u5bf9\u800c\u5bfc\u81f4pxe\u542f\u52a8\u5931\u8d25\uff0c\u5982\u4e0b\uff1a \u751f\u6210\u7684\u9519\u8bef\u914d\u7f6e\u6587\u4ef6\uff1a \u5982\u4e0a\u56fe\u6240\u793a\uff0carm\u67b6\u6784\u91cc\u5bfb\u627evmlinux\u548cramdisk\u955c\u50cf\u7684\u547d\u4ee4\u5206\u522b\u662flinux\u548cinitrd\uff0c\u4e0a\u56fe\u6240\u793a\u7684\u6807\u7ea2\u547d\u4ee4\u662fx86\u67b6\u6784\u4e0b\u7684uefi pxe\u542f\u52a8\u3002 \u9700\u8981\u7528\u6237\u5bf9\u751f\u6210grub.cfg\u7684\u4ee3\u7801\u903b\u8f91\u81ea\u884c\u4fee\u6539\u3002 ironic\u5411ipa\u53d1\u9001\u67e5\u8be2\u547d\u4ee4\u6267\u884c\u72b6\u6001\u8bf7\u6c42\u7684tls\u62a5\u9519\uff1a w\u7248\u7684ipa\u548cironic\u9ed8\u8ba4\u90fd\u4f1a\u5f00\u542ftls\u8ba4\u8bc1\u7684\u65b9\u5f0f\u5411\u5bf9\u65b9\u53d1\u9001\u8bf7\u6c42\uff0c\u8ddf\u636e\u5b98\u7f51\u7684\u8bf4\u660e\u8fdb\u884c\u5173\u95ed\u5373\u53ef\u3002 \u4fee\u6539ironic\u914d\u7f6e\u6587\u4ef6(/etc/ironic/ironic.conf)\u4e0b\u9762\u7684\u914d\u7f6e\u4e2d\u6dfb\u52a0ipa-insecure=1\uff1a [agent] verify_ca = False [pxe] pxe_append_params = nofb nomodeset vga=normal coreos.autologin ipa-insecure=1 2) ramdisk\u955c\u50cf\u4e2d\u6dfb\u52a0ipa\u914d\u7f6e\u6587\u4ef6/etc/ironic_python_agent/ironic_python_agent.conf\u5e76\u914d\u7f6etls\u7684\u914d\u7f6e\u5982\u4e0b\uff1a /etc/ironic_python_agent/ironic_python_agent.conf (\u9700\u8981\u63d0\u524d\u521b\u5efa/etc/ironic_python_agent\u76ee\u5f55\uff09 [DEFAULT] enable_auto_tls = False \u8bbe\u7f6e\u6743\u9650\uff1a chown -R ipa.ipa /etc/ironic_python_agent/ \u4fee\u6539ipa\u670d\u52a1\u7684\u670d\u52a1\u542f\u52a8\u6587\u4ef6\uff0c\u6dfb\u52a0\u914d\u7f6e\u6587\u4ef6\u9009\u9879 vim usr/lib/systemd/system/ironic-python-agent.service [Unit] Description=Ironic Python Agent After=network-online.target [Service] ExecStartPre=/sbin/modprobe vfat ExecStart=/usr/local/bin/ironic-python-agent --config-file /etc/ironic_python_agent/ironic_python_agent.conf Restart=always RestartSec=30s [Install] WantedBy=multi-user.target","title":"Ironic \u5b89\u88c5"},{"location":"install/openEuler-21.09/OpenStack-wallaby/#kolla","text":"Kolla\u4e3aOpenStack\u670d\u52a1\u63d0\u4f9b\u751f\u4ea7\u73af\u5883\u53ef\u7528\u7684\u5bb9\u5668\u5316\u90e8\u7f72\u7684\u529f\u80fd\u3002openEuler 21.09\u4e2d\u5f15\u5165\u4e86Kolla\u548cKolla-ansible\u670d\u52a1\u3002 Kolla\u7684\u5b89\u88c5\u5341\u5206\u7b80\u5355\uff0c\u53ea\u9700\u8981\u5b89\u88c5\u5bf9\u5e94\u7684RPM\u5305\u5373\u53ef yum install openstack-kolla openstack-kolla-ansible \u5b89\u88c5\u5b8c\u540e\uff0c\u5c31\u53ef\u4ee5\u4f7f\u7528 kolla-ansible , kolla-build , kolla-genpwd , kolla-mergepwd \u7b49\u547d\u4ee4\u4e86\u3002","title":"Kolla \u5b89\u88c5"},{"location":"install/openEuler-21.09/OpenStack-wallaby/#trove","text":"Trove\u662fOpenStack\u7684\u6570\u636e\u5e93\u670d\u52a1\uff0c\u5982\u679c\u7528\u6237\u4f7f\u7528OpenStack\u63d0\u4f9b\u7684\u6570\u636e\u5e93\u670d\u52a1\u5219\u63a8\u8350\u4f7f\u7528\u8be5\u7ec4\u4ef6\u3002\u5426\u5219\uff0c\u53ef\u4ee5\u4e0d\u7528\u5b89\u88c5\u3002 \u8bbe\u7f6e\u6570\u636e\u5e93 \u6570\u636e\u5e93\u670d\u52a1\u5728\u6570\u636e\u5e93\u4e2d\u5b58\u50a8\u4fe1\u606f\uff0c\u521b\u5efa\u4e00\u4e2a trove \u7528\u6237\u53ef\u4ee5\u8bbf\u95ee\u7684 trove \u6570\u636e\u5e93\uff0c\u66ff\u6362 TROVE_DBPASSWORD \u4e3a\u5408\u9002\u7684\u5bc6\u7801 mysql -u root -p MariaDB [(none)]> CREATE DATABASE trove CHARACTER SET utf8; MariaDB [(none)]> GRANT ALL 
PRIVILEGES ON trove.* TO 'trove'@'localhost' \\ IDENTIFIED BY 'TROVE_DBPASSWORD'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'%' \\ IDENTIFIED BY 'TROVE_DBPASSWORD'; \u521b\u5efa\u670d\u52a1\u7528\u6237\u8ba4\u8bc1 1\u3001\u521b\u5efa Trove \u670d\u52a1\u7528\u6237 openstack user create --password TROVE_PASSWORD \\ --email trove@example.com trove openstack role add --project service --user trove admin openstack service create --name trove --description \"Database service\" database \u89e3\u91ca\uff1a TROVE_PASSWORD \u66ff\u6362\u4e3a trove \u7528\u6237\u7684\u5bc6\u7801 2\u3001\u521b\u5efa Database \u670d\u52a1\u8bbf\u95ee\u5165\u53e3 openstack endpoint create --region RegionOne database public http://controller:8779/v1.0/%\\(tenant_id\\)s openstack endpoint create --region RegionOne database internal http://controller:8779/v1.0/%\\(tenant_id\\)s openstack endpoint create --region RegionOne database admin http://controller:8779/v1.0/%\\(tenant_id\\)s \u5b89\u88c5\u548c\u914d\u7f6e Trove \u5404\u7ec4\u4ef6 1\u3001\u5b89\u88c5 Trove \u5305 ```shell script yum install openstack-trove python-troveclient 2. \u914d\u7f6e`trove.conf` ```shell script vim /etc/trove/trove.conf [DEFAULT] bind_host=TROVE_NODE_IP log_dir = /var/log/trove network_driver = trove.network.neutron.NeutronDriver management_security_groups = nova_keypair = trove-mgmt default_datastore = mysql taskmanager_manager = trove.taskmanager.manager.Manager trove_api_workers = 5 transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/ reboot_time_out = 300 usage_timeout = 900 agent_call_high_timeout = 1200 use_syslog = False debug = True # Set these if using Neutron Networking network_driver=trove.network.neutron.NeutronDriver network_label_regex=.* transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/ [database] connection = mysql+pymysql://trove:TROVE_DBPASS@controller/trove [keystone_authtoken] project_domain_name = Default project_name = service user_domain_name = Default password = trove username = trove auth_url = http://controller:5000/v3/ auth_type = password [service_credentials] auth_url = http://controller:5000/v3/ region_name = RegionOne project_name = service password = trove project_domain_name = Default user_domain_name = Default username = trove [mariadb] tcp_ports = 3306,4444,4567,4568 [mysql] tcp_ports = 3306 [postgresql] tcp_ports = 5432 \u89e3\u91ca\uff1a [Default] \u5206\u7ec4\u4e2d bind_host \u914d\u7f6e\u4e3aTrove\u90e8\u7f72\u8282\u70b9\u7684IP nova_compute_url \u548c cinder_url \u4e3aNova\u548cCinder\u5728Keystone\u4e2d\u521b\u5efa\u7684endpoint nova_proxy_XXX \u4e3a\u4e00\u4e2a\u80fd\u8bbf\u95eeNova\u670d\u52a1\u7684\u7528\u6237\u4fe1\u606f\uff0c\u4e0a\u4f8b\u4e2d\u4f7f\u7528 admin \u7528\u6237\u4e3a\u4f8b transport_url \u4e3a RabbitMQ \u8fde\u63a5\u4fe1\u606f\uff0c RABBIT_PASS \u66ff\u6362\u4e3aRabbitMQ\u7684\u5bc6\u7801 [database] \u5206\u7ec4\u4e2d\u7684 connection \u4e3a\u524d\u9762\u5728mysql\u4e2d\u4e3aTrove\u521b\u5efa\u7684\u6570\u636e\u5e93\u4fe1\u606f Trove\u7684\u7528\u6237\u4fe1\u606f\u4e2d TROVE_PASS \u66ff\u6362\u4e3a\u5b9e\u9645trove\u7528\u6237\u7684\u5bc6\u7801 \u914d\u7f6e trove-guestagent.conf ```shell script vim /etc/trove/trove-guestagent.conf [DEFAULT] log_file = trove-guestagent.log log_dir = /var/log/trove/ ignore_users = os_admin control_exchange = trove transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/ rpc_backend = rabbit command_process_timeout = 60 use_syslog = False debug = True [service_credentials] auth_url = 
http://controller:5000/v3/ region_name = RegionOne project_name = service password = TROVE_PASS project_domain_name = Default user_domain_name = Default username = trove [mysql] docker_image = your-registry/your-repo/mysql backup_docker_image = your-registry/your-repo/db-backup-mysql:1.1.0 **\u89e3\u91ca\uff1a** `guestagent`\u662ftrove\u4e2d\u4e00\u4e2a\u72ec\u7acb\u7ec4\u4ef6\uff0c\u9700\u8981\u9884\u5148\u5185\u7f6e\u5230Trove\u901a\u8fc7Nova\u521b\u5efa\u7684\u865a\u62df \u673a\u955c\u50cf\u4e2d\uff0c\u5728\u521b\u5efa\u597d\u6570\u636e\u5e93\u5b9e\u4f8b\u540e\uff0c\u4f1a\u8d77guestagent\u8fdb\u7a0b\uff0c\u8d1f\u8d23\u901a\u8fc7\u6d88\u606f\u961f\u5217\uff08RabbitMQ\uff09\u5411Trove\u4e0a \u62a5\u5fc3\u8df3\uff0c\u56e0\u6b64\u9700\u8981\u914d\u7f6eRabbitMQ\u7684\u7528\u6237\u548c\u5bc6\u7801\u4fe1\u606f\u3002 **\u4eceVictoria\u7248\u5f00\u59cb\uff0cTrove\u4f7f\u7528\u4e00\u4e2a\u7edf\u4e00\u7684\u955c\u50cf\u6765\u8dd1\u4e0d\u540c\u7c7b\u578b\u7684\u6570\u636e\u5e93\uff0c\u6570\u636e\u5e93\u670d\u52a1\u8fd0\u884c\u5728Guest\u865a\u62df\u673a\u7684Docker\u5bb9\u5668\u4e2d\u3002** - `transport_url` \u4e3a`RabbitMQ`\u8fde\u63a5\u4fe1\u606f\uff0c`RABBIT_PASS`\u66ff\u6362\u4e3aRabbitMQ\u7684\u5bc6\u7801 - Trove\u7684\u7528\u6237\u4fe1\u606f\u4e2d`TROVE_PASS`\u66ff\u6362\u4e3a\u5b9e\u9645trove\u7528\u6237\u7684\u5bc6\u7801 6. \u751f\u6210\u6570\u636e`Trove`\u6570\u636e\u5e93\u8868 ```shell script su -s /bin/sh -c \"trove-manage db_sync\" trove 4. \u5b8c\u6210\u5b89\u88c5\u914d\u7f6e 1. \u914d\u7f6e Trove \u670d\u52a1\u81ea\u542f\u52a8 ```shell script systemctl enable openstack-trove-api.service \\ openstack-trove-taskmanager.service \\ openstack-trove-conductor.service 2. \u542f\u52a8\u670d\u52a1 ```shell script systemctl start openstack-trove-api.service \\ openstack-trove-taskmanager.service \\ openstack-trove-conductor.service","title":"Trove \u5b89\u88c5"},{"location":"install/openEuler-21.09/OpenStack-wallaby/#swift","text":"Swift \u63d0\u4f9b\u4e86\u5f39\u6027\u53ef\u4f38\u7f29\u3001\u9ad8\u53ef\u7528\u7684\u5206\u5e03\u5f0f\u5bf9\u8c61\u5b58\u50a8\u670d\u52a1\uff0c\u9002\u5408\u5b58\u50a8\u5927\u89c4\u6a21\u975e\u7ed3\u6784\u5316\u6570\u636e\u3002 \u521b\u5efa\u670d\u52a1\u51ed\u8bc1\u3001API\u7aef\u70b9\u3002 \u521b\u5efa\u670d\u52a1\u51ed\u8bc1 #\u521b\u5efaswift\u7528\u6237\uff1a openstack user create --domain default --password-prompt swift #admin\u4e3aswift\u7528\u6237\u6dfb\u52a0\u89d2\u8272\uff1a openstack role add --project service --user swift admin #\u521b\u5efaswift\u670d\u52a1\u5b9e\u4f53\uff1a openstack service create --name swift --description \"OpenStack Object Storage\" object-store \u521b\u5efaswift API \u7aef\u70b9: openstack endpoint create --region RegionOne object-store public http://controller:8080/v1/AUTH_%\\(project_id\\)s openstack endpoint create --region RegionOne object-store internal http://controller:8080/v1/AUTH_%\\(project_id\\)s openstack endpoint create --region RegionOne object-store admin http://controller:8080/v1 \u5b89\u88c5\u8f6f\u4ef6\u5305\uff1a yum install openstack-swift-proxy python3-swiftclient python3-keystoneclient python3-keystonemiddleware memcached \uff08CTL\uff09 \u914d\u7f6eproxy-server\u76f8\u5173\u914d\u7f6e Swift RPM\u5305\u91cc\u5df2\u7ecf\u5305\u542b\u4e86\u4e00\u4e2a\u57fa\u672c\u53ef\u7528\u7684proxy-server.conf\uff0c\u53ea\u9700\u8981\u624b\u52a8\u4fee\u6539\u5176\u4e2d\u7684ip\u548cswift password\u5373\u53ef\u3002 ***\u6ce8\u610f*** 
**\u6ce8\u610f\u66ff\u6362password\u4e3a\u60a8swift\u5728\u8eab\u4efd\u670d\u52a1\u4e2d\u4e3a\u7528\u6237\u9009\u62e9\u7684\u5bc6\u7801** \u5b89\u88c5\u548c\u914d\u7f6e\u5b58\u50a8\u8282\u70b9 \uff08STG\uff09 \u5b89\u88c5\u652f\u6301\u7684\u7a0b\u5e8f\u5305: yum install xfsprogs rsync \u5c06/dev/vdb\u548c/dev/vdc\u8bbe\u5907\u683c\u5f0f\u5316\u4e3a XFS mkfs.xfs /dev/vdb mkfs.xfs /dev/vdc \u521b\u5efa\u6302\u8f7d\u70b9\u76ee\u5f55\u7ed3\u6784: mkdir -p /srv/node/vdb mkdir -p /srv/node/vdc \u627e\u5230\u65b0\u5206\u533a\u7684 UUID: blkid \u7f16\u8f91/etc/fstab\u6587\u4ef6\u5e76\u5c06\u4ee5\u4e0b\u5185\u5bb9\u6dfb\u52a0\u5230\u5176\u4e2d: UUID=\"\" /srv/node/vdb xfs noatime 0 2 UUID=\"\" /srv/node/vdc xfs noatime 0 2 \u6302\u8f7d\u8bbe\u5907\uff1a mount /srv/node/vdb mount /srv/node/vdc \u6ce8\u610f \u5982\u679c\u7528\u6237\u4e0d\u9700\u8981\u5bb9\u707e\u529f\u80fd\uff0c\u4ee5\u4e0a\u6b65\u9aa4\u53ea\u9700\u8981\u521b\u5efa\u4e00\u4e2a\u8bbe\u5907\u5373\u53ef\uff0c\u540c\u65f6\u53ef\u4ee5\u8df3\u8fc7\u4e0b\u9762\u7684rsync\u914d\u7f6e \uff08\u53ef\u9009\uff09\u521b\u5efa\u6216\u7f16\u8f91/etc/rsyncd.conf\u6587\u4ef6\u4ee5\u5305\u542b\u4ee5\u4e0b\u5185\u5bb9: [DEFAULT] uid = swift gid = swift log file = /var/log/rsyncd.log pid file = /var/run/rsyncd.pid address = MANAGEMENT_INTERFACE_IP_ADDRESS [account] max connections = 2 path = /srv/node/ read only = False lock file = /var/lock/account.lock [container] max connections = 2 path = /srv/node/ read only = False lock file = /var/lock/container.lock [object] max connections = 2 path = /srv/node/ read only = False lock file = /var/lock/object.lock \u66ff\u6362MANAGEMENT_INTERFACE_IP_ADDRESS\u4e3a\u5b58\u50a8\u8282\u70b9\u4e0a\u7ba1\u7406\u7f51\u7edc\u7684IP\u5730\u5740 \u542f\u52a8rsyncd\u670d\u52a1\u5e76\u914d\u7f6e\u5b83\u5728\u7cfb\u7edf\u542f\u52a8\u65f6\u542f\u52a8: systemctl enable rsyncd.service systemctl start rsyncd.service \u5728\u5b58\u50a8\u8282\u70b9\u5b89\u88c5\u548c\u914d\u7f6e\u7ec4\u4ef6 \uff08STG\uff09 \u5b89\u88c5\u8f6f\u4ef6\u5305: yum install openstack-swift-account openstack-swift-container openstack-swift-object \u7f16\u8f91/etc/swift\u76ee\u5f55\u7684account-server.conf\u3001container-server.conf\u548cobject-server.conf\u6587\u4ef6\uff0c\u66ff\u6362bind_ip\u4e3a\u5b58\u50a8\u8282\u70b9\u4e0a\u7ba1\u7406\u7f51\u7edc\u7684IP\u5730\u5740\u3002 \u786e\u4fdd\u6302\u8f7d\u70b9\u76ee\u5f55\u7ed3\u6784\u7684\u6b63\u786e\u6240\u6709\u6743: chown -R swift:swift /srv/node \u521b\u5efarecon\u76ee\u5f55\u5e76\u786e\u4fdd\u5176\u62e5\u6709\u6b63\u786e\u7684\u6240\u6709\u6743\uff1a mkdir -p /var/cache/swift chown -R root:swift /var/cache/swift chmod -R 775 /var/cache/swift \u521b\u5efa\u8d26\u53f7\u73af (CTL) \u5207\u6362\u5230/etc/swift\u76ee\u5f55\u3002 cd /etc/swift \u521b\u5efa\u57fa\u7840account.builder\u6587\u4ef6: swift-ring-builder account.builder create 10 1 1 \u5c06\u6bcf\u4e2a\u5b58\u50a8\u8282\u70b9\u6dfb\u52a0\u5230\u73af\u4e2d\uff1a swift-ring-builder account.builder add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6202 --device DEVICE_NAME --weight DEVICE_WEIGHT \u66ff\u6362STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS\u4e3a\u5b58\u50a8\u8282\u70b9\u4e0a\u7ba1\u7406\u7f51\u7edc\u7684IP\u5730\u5740\u3002\u66ff\u6362DEVICE_NAME\u4e3a\u540c\u4e00\u5b58\u50a8\u8282\u70b9\u4e0a\u7684\u5b58\u50a8\u8bbe\u5907\u540d\u79f0 \u6ce8\u610f *** *\u5bf9\u6bcf\u4e2a\u5b58\u50a8\u8282\u70b9\u4e0a\u7684\u6bcf\u4e2a\u5b58\u50a8\u8bbe\u5907\u91cd\u590d\u6b64\u547d\u4ee4 \u9a8c\u8bc1\u6212\u6307\u5185\u5bb9\uff1a 
swift-ring-builder account.builder

Rebalance the ring:

swift-ring-builder account.builder rebalance

Create the container ring (CTL)

Change to the /etc/swift directory.

Create the base container.builder file:

swift-ring-builder container.builder create 10 1 1

Add each storage node to the ring:

swift-ring-builder container.builder \
    add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6201 \
    --device DEVICE_NAME --weight 100

Replace STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network on the storage node. Replace DEVICE_NAME with the name of a storage device on the same storage node.

Note: repeat this command for every storage device on every storage node.

Verify the ring contents:

swift-ring-builder container.builder

Rebalance the ring:

swift-ring-builder container.builder rebalance

Create the object ring (CTL)

Change to the /etc/swift directory.

Create the base object.builder file:

swift-ring-builder object.builder create 10 1 1

Add each storage node to the ring:

swift-ring-builder object.builder \
    add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6200 \
    --device DEVICE_NAME --weight 100

Replace STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network on the storage node. Replace DEVICE_NAME with the name of a storage device on the same storage node.

Note: repeat this command for every storage device on every storage node.

Verify the ring contents:

swift-ring-builder object.builder

Rebalance the ring:

swift-ring-builder object.builder rebalance

Distribute the ring configuration files:

Copy the account.ring.gz, container.ring.gz, and object.ring.gz files to the /etc/swift directory on every storage node and on any other node running the proxy service.

Finish the installation

Edit the /etc/swift/swift.conf file:

[swift-hash]
swift_hash_path_suffix = test-hash
swift_hash_path_prefix = test-hash

[storage-policy:0]
name = Policy-0
default = yes

Replace test-hash with unique values.

Copy the swift.conf file to the /etc/swift directory on every storage node and on any other node running the proxy service.

On all nodes, make sure the configuration directory has the correct ownership:

chown -R root:swift /etc/swift

On the controller node and on any other node running the proxy service, start the Object Storage proxy service and its dependencies, and configure them to start at system boot:

systemctl enable openstack-swift-proxy.service memcached.service
systemctl start openstack-swift-proxy.service memcached.service
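Before moving on to the storage nodes, it can help to confirm that the proxy actually came up. A minimal check, assuming the proxy listens on port 8080 of the controller and that the healthcheck middleware in the default proxy-server.conf pipeline is enabled:

```shell
# Quick sanity check of the proxy node (assumes port 8080 and the default
# healthcheck middleware in /etc/swift/proxy-server.conf).
systemctl status openstack-swift-proxy.service memcached.service
curl -i http://controller:8080/healthcheck   # expect an HTTP 200 response with body "OK"
```

If the check fails, inspect the proxy logs before configuring the storage nodes.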
\u5728\u5b58\u50a8\u8282\u70b9\u4e0a\uff0c\u542f\u52a8\u5bf9\u8c61\u5b58\u50a8\u670d\u52a1\u5e76\u5c06\u5b83\u4eec\u914d\u7f6e\u4e3a\u5728\u7cfb\u7edf\u542f\u52a8\u65f6\u542f\u52a8\uff1a systemctl enable openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service systemctl start openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service systemctl enable openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service systemctl start openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service systemctl enable openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service systemctl start openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service","title":"Swift \u5b89\u88c5"},{"location":"install/openEuler-22.03-LTS/OpenStack-train/","text":"OpenStack-Train \u90e8\u7f72\u6307\u5357 \u00b6 OpenStack-Train \u90e8\u7f72\u6307\u5357 OpenStack \u7b80\u4ecb \u7ea6\u5b9a \u51c6\u5907\u73af\u5883 \u73af\u5883\u914d\u7f6e \u5b89\u88c5 SQL DataBase \u5b89\u88c5 RabbitMQ \u5b89\u88c5 Memcached \u5b89\u88c5 OpenStack Keystone \u5b89\u88c5 Glance \u5b89\u88c5 Placement\u5b89\u88c5 Nova \u5b89\u88c5 Neutron \u5b89\u88c5 Cinder \u5b89\u88c5 horizon \u5b89\u88c5 Tempest \u5b89\u88c5 Ironic \u5b89\u88c5 Kolla \u5b89\u88c5 Trove \u5b89\u88c5 Swift \u5b89\u88c5 Cyborg \u5b89\u88c5 Aodh \u5b89\u88c5 Gnocchi \u5b89\u88c5 Ceilometer \u5b89\u88c5 Heat \u5b89\u88c5 \u57fa\u4e8eOpenStack SIG\u5f00\u53d1\u5de5\u5177oos\u5feb\u901f\u90e8\u7f72 OpenStack \u7b80\u4ecb \u00b6 OpenStack \u662f\u4e00\u4e2a\u793e\u533a\uff0c\u4e5f\u662f\u4e00\u4e2a\u9879\u76ee\u3002\u5b83\u63d0\u4f9b\u4e86\u4e00\u4e2a\u90e8\u7f72\u4e91\u7684\u64cd\u4f5c\u5e73\u53f0\u6216\u5de5\u5177\u96c6\uff0c\u4e3a\u7ec4\u7ec7\u63d0\u4f9b\u53ef\u6269\u5c55\u7684\u3001\u7075\u6d3b\u7684\u4e91\u8ba1\u7b97\u3002 \u4f5c\u4e3a\u4e00\u4e2a\u5f00\u6e90\u7684\u4e91\u8ba1\u7b97\u7ba1\u7406\u5e73\u53f0\uff0cOpenStack \u7531nova\u3001cinder\u3001neutron\u3001glance\u3001keystone\u3001horizon\u7b49\u51e0\u4e2a\u4e3b\u8981\u7684\u7ec4\u4ef6\u7ec4\u5408\u8d77\u6765\u5b8c\u6210\u5177\u4f53\u5de5\u4f5c\u3002OpenStack \u652f\u6301\u51e0\u4e4e\u6240\u6709\u7c7b\u578b\u7684\u4e91\u73af\u5883\uff0c\u9879\u76ee\u76ee\u6807\u662f\u63d0\u4f9b\u5b9e\u65bd\u7b80\u5355\u3001\u53ef\u5927\u89c4\u6a21\u6269\u5c55\u3001\u4e30\u5bcc\u3001\u6807\u51c6\u7edf\u4e00\u7684\u4e91\u8ba1\u7b97\u7ba1\u7406\u5e73\u53f0\u3002OpenStack \u901a\u8fc7\u5404\u79cd\u4e92\u8865\u7684\u670d\u52a1\u63d0\u4f9b\u4e86\u57fa\u7840\u8bbe\u65bd\u5373\u670d\u52a1\uff08IaaS\uff09\u7684\u89e3\u51b3\u65b9\u6848\uff0c\u6bcf\u4e2a\u670d\u52a1\u63d0\u4f9b API \u8fdb\u884c\u96c6\u6210\u3002 openEuler 22.03-LTS\u7248\u672c\u5b98\u65b9\u6e90\u5df2\u7ecf\u652f\u6301 OpenStack-Train \u7248\u672c\uff0c\u7528\u6237\u53ef\u4ee5\u914d\u7f6e\u597d yum \u6e90\u540e\u6839\u636e\u6b64\u6587\u6863\u8fdb\u884c OpenStack \u90e8\u7f72\u3002 \u7ea6\u5b9a \u00b6 OpenStack \u652f\u6301\u591a\u79cd\u5f62\u6001\u90e8\u7f72\uff0c\u6b64\u6587\u6863\u652f\u6301 ALL in One 
\u4ee5\u53ca Distributed \u4e24\u79cd\u90e8\u7f72\u65b9\u5f0f\uff0c\u6309\u7167\u5982\u4e0b\u65b9\u5f0f\u7ea6\u5b9a\uff1a ALL in One \u6a21\u5f0f: \u5ffd\u7565\u6240\u6709\u53ef\u80fd\u7684\u540e\u7f00 Distributed \u6a21\u5f0f: \u4ee5 `(CTL)` \u4e3a\u540e\u7f00\u8868\u793a\u6b64\u6761\u914d\u7f6e\u6216\u8005\u547d\u4ee4\u4ec5\u9002\u7528`\u63a7\u5236\u8282\u70b9` \u4ee5 `(CPT)` \u4e3a\u540e\u7f00\u8868\u793a\u6b64\u6761\u914d\u7f6e\u6216\u8005\u547d\u4ee4\u4ec5\u9002\u7528`\u8ba1\u7b97\u8282\u70b9` \u4ee5 `(STG)` \u4e3a\u540e\u7f00\u8868\u793a\u6b64\u6761\u914d\u7f6e\u6216\u8005\u547d\u4ee4\u4ec5\u9002\u7528`\u5b58\u50a8\u8282\u70b9` \u9664\u6b64\u4e4b\u5916\u8868\u793a\u6b64\u6761\u914d\u7f6e\u6216\u8005\u547d\u4ee4\u540c\u65f6\u9002\u7528`\u63a7\u5236\u8282\u70b9`\u548c`\u8ba1\u7b97\u8282\u70b9` \u6ce8\u610f \u6d89\u53ca\u5230\u4ee5\u4e0a\u7ea6\u5b9a\u7684\u670d\u52a1\u5982\u4e0b\uff1a Cinder Nova Neutron \u51c6\u5907\u73af\u5883 \u00b6 \u73af\u5883\u914d\u7f6e \u00b6 \u542f\u52a8OpenStack Train yum\u6e90 yum update yum install openstack-release-train yum clean all && yum makecache \u6ce8\u610f \uff1a\u5982\u679c\u4f60\u7684\u73af\u5883\u7684YUM\u6e90\u6ca1\u6709\u542f\u7528EPOL\uff0c\u9700\u8981\u540c\u65f6\u914d\u7f6eEPOL vi /etc/yum.repos.d/openEuler.repo [EPOL] name=EPOL baseurl=http://repo.openeuler.org/openEuler-22.03-LTS/EPOL/main/$basearch/ enabled=1 gpgcheck=1 gpgkey=http://repo.openeuler.org/openEuler-22.03-LTS/OS/$basearch/RPM-GPG-KEY-openEuler EOF \u4fee\u6539\u4e3b\u673a\u540d\u4ee5\u53ca\u6620\u5c04 \u8bbe\u7f6e\u5404\u4e2a\u8282\u70b9\u7684\u4e3b\u673a\u540d hostnamectl set-hostname controller (CTL) hostnamectl set-hostname compute (CPT) \u5047\u8bbecontroller\u8282\u70b9\u7684IP\u662f 10.0.0.11 ,compute\u8282\u70b9\u7684IP\u662f 10.0.0.12 \uff08\u5982\u679c\u5b58\u5728\u7684\u8bdd\uff09,\u5219\u4e8e /etc/hosts \u65b0\u589e\u5982\u4e0b\uff1a 10.0.0.11 controller 10.0.0.12 compute \u5b89\u88c5 SQL DataBase \u00b6 \u6267\u884c\u5982\u4e0b\u547d\u4ee4\uff0c\u5b89\u88c5\u8f6f\u4ef6\u5305\u3002 yum install mariadb mariadb-server python3-PyMySQL \u6267\u884c\u5982\u4e0b\u547d\u4ee4\uff0c\u521b\u5efa\u5e76\u7f16\u8f91 /etc/my.cnf.d/openstack.cnf \u6587\u4ef6\u3002 vim /etc/my.cnf.d/openstack.cnf [mysqld] bind-address = 10.0.0.11 default-storage-engine = innodb innodb_file_per_table = on max_connections = 4096 collation-server = utf8_general_ci character-set-server = utf8 \u6ce8\u610f \u5176\u4e2d bind-address \u8bbe\u7f6e\u4e3a\u63a7\u5236\u8282\u70b9\u7684\u7ba1\u7406IP\u5730\u5740\u3002 \u542f\u52a8 DataBase \u670d\u52a1\uff0c\u5e76\u4e3a\u5176\u914d\u7f6e\u5f00\u673a\u81ea\u542f\u52a8\uff1a systemctl enable mariadb.service systemctl start mariadb.service \u914d\u7f6eDataBase\u7684\u9ed8\u8ba4\u5bc6\u7801\uff08\u53ef\u9009\uff09 mysql_secure_installation \u6ce8\u610f \u6839\u636e\u63d0\u793a\u8fdb\u884c\u5373\u53ef \u5b89\u88c5 RabbitMQ \u00b6 \u6267\u884c\u5982\u4e0b\u547d\u4ee4\uff0c\u5b89\u88c5\u8f6f\u4ef6\u5305\u3002 yum install rabbitmq-server \u542f\u52a8 RabbitMQ \u670d\u52a1\uff0c\u5e76\u4e3a\u5176\u914d\u7f6e\u5f00\u673a\u81ea\u542f\u52a8\u3002 systemctl enable rabbitmq-server.service systemctl start rabbitmq-server.service \u6dfb\u52a0 OpenStack\u7528\u6237\u3002 rabbitmqctl add_user openstack RABBIT_PASS \u6ce8\u610f \u66ff\u6362 RABBIT_PASS \uff0c\u4e3a OpenStack \u7528\u6237\u8bbe\u7f6e\u5bc6\u7801 \u8bbe\u7f6eopenstack\u7528\u6237\u6743\u9650\uff0c\u5141\u8bb8\u8fdb\u884c\u914d\u7f6e\u3001\u5199\u3001\u8bfb\uff1a rabbitmqctl set_permissions openstack \".*\" \".*\" 
\".*\" \u5b89\u88c5 Memcached \u00b6 \u6267\u884c\u5982\u4e0b\u547d\u4ee4\uff0c\u5b89\u88c5\u4f9d\u8d56\u8f6f\u4ef6\u5305\u3002 yum install memcached python3-memcached \u7f16\u8f91 /etc/sysconfig/memcached \u6587\u4ef6\u3002 vim /etc/sysconfig/memcached OPTIONS=\"-l 127.0.0.1,::1,controller\" \u6267\u884c\u5982\u4e0b\u547d\u4ee4\uff0c\u542f\u52a8 Memcached \u670d\u52a1\uff0c\u5e76\u4e3a\u5176\u914d\u7f6e\u5f00\u673a\u542f\u52a8\u3002 systemctl enable memcached.service systemctl start memcached.service \u6ce8\u610f \u670d\u52a1\u542f\u52a8\u540e\uff0c\u53ef\u4ee5\u901a\u8fc7\u547d\u4ee4 memcached-tool controller stats \u786e\u4fdd\u542f\u52a8\u6b63\u5e38\uff0c\u670d\u52a1\u53ef\u7528\uff0c\u5176\u4e2d\u53ef\u4ee5\u5c06 controller \u66ff\u6362\u4e3a\u63a7\u5236\u8282\u70b9\u7684\u7ba1\u7406IP\u5730\u5740\u3002 \u5b89\u88c5 OpenStack \u00b6 Keystone \u5b89\u88c5 \u00b6 \u521b\u5efa keystone \u6570\u636e\u5e93\u5e76\u6388\u6743\u3002 mysql -u root -p MariaDB [(none)]> CREATE DATABASE keystone; MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \\ IDENTIFIED BY 'KEYSTONE_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \\ IDENTIFIED BY 'KEYSTONE_DBPASS'; MariaDB [(none)]> exit \u6ce8\u610f \u66ff\u6362 KEYSTONE_DBPASS \uff0c\u4e3a Keystone \u6570\u636e\u5e93\u8bbe\u7f6e\u5bc6\u7801 \u5b89\u88c5\u8f6f\u4ef6\u5305\u3002 yum install openstack-keystone httpd mod_wsgi \u914d\u7f6ekeystone\u76f8\u5173\u914d\u7f6e vim /etc/keystone/keystone.conf [database] connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone [token] provider = fernet \u89e3\u91ca [database]\u90e8\u5206\uff0c\u914d\u7f6e\u6570\u636e\u5e93\u5165\u53e3 [token]\u90e8\u5206\uff0c\u914d\u7f6etoken provider \u6ce8\u610f\uff1a \u66ff\u6362 KEYSTONE_DBPASS \u4e3a Keystone \u6570\u636e\u5e93\u7684\u5bc6\u7801 \u540c\u6b65\u6570\u636e\u5e93\u3002 su -s /bin/sh -c \"keystone-manage db_sync\" keystone \u521d\u59cb\u5316Fernet\u5bc6\u94a5\u4ed3\u5e93\u3002 keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone keystone-manage credential_setup --keystone-user keystone --keystone-group keystone \u542f\u52a8\u670d\u52a1\u3002 keystone-manage bootstrap --bootstrap-password ADMIN_PASS \\ --bootstrap-admin-url http://controller:5000/v3/ \\ --bootstrap-internal-url http://controller:5000/v3/ \\ --bootstrap-public-url http://controller:5000/v3/ \\ --bootstrap-region-id RegionOne \u6ce8\u610f \u66ff\u6362 ADMIN_PASS \uff0c\u4e3a admin \u7528\u6237\u8bbe\u7f6e\u5bc6\u7801 \u914d\u7f6eApache HTTP server vim /etc/httpd/conf/httpd.conf ServerName controller ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/ \u89e3\u91ca \u914d\u7f6e ServerName \u9879\u5f15\u7528\u63a7\u5236\u8282\u70b9 \u6ce8\u610f \u5982\u679c ServerName \u9879\u4e0d\u5b58\u5728\u5219\u9700\u8981\u521b\u5efa \u542f\u52a8Apache HTTP\u670d\u52a1\u3002 systemctl enable httpd.service systemctl start httpd.service \u521b\u5efa\u73af\u5883\u53d8\u91cf\u914d\u7f6e\u3002 cat << EOF >> ~/.admin-openrc export OS_PROJECT_DOMAIN_NAME=Default export OS_USER_DOMAIN_NAME=Default export OS_PROJECT_NAME=admin export OS_USERNAME=admin export OS_PASSWORD=ADMIN_PASS export OS_AUTH_URL=http://controller:5000/v3 export OS_IDENTITY_API_VERSION=3 export OS_IMAGE_API_VERSION=2 EOF \u6ce8\u610f \u66ff\u6362 ADMIN_PASS \u4e3a admin \u7528\u6237\u7684\u5bc6\u7801 \u4f9d\u6b21\u521b\u5efadomain, projects, users, roles\uff0c\u9700\u8981\u5148\u5b89\u88c5\u597dpython3-openstackclient\uff1a yum install 
python3-openstackclient \u5bfc\u5165\u73af\u5883\u53d8\u91cf source ~/.admin-openrc \u521b\u5efaproject service \uff0c\u5176\u4e2d domain default \u5728 keystone-manage bootstrap \u65f6\u5df2\u521b\u5efa openstack domain create --description \"An Example Domain\" example openstack project create --domain default --description \"Service Project\" service \u521b\u5efa\uff08non-admin\uff09project myproject \uff0cuser myuser \u548c role myrole \uff0c\u4e3a myproject \u548c myuser \u6dfb\u52a0\u89d2\u8272 myrole openstack project create --domain default --description \"Demo Project\" myproject openstack user create --domain default --password-prompt myuser openstack role create myrole openstack role add --project myproject --user myuser myrole \u9a8c\u8bc1 \u53d6\u6d88\u4e34\u65f6\u73af\u5883\u53d8\u91cfOS_AUTH_URL\u548cOS_PASSWORD\uff1a source ~/.admin-openrc unset OS_AUTH_URL OS_PASSWORD \u4e3aadmin\u7528\u6237\u8bf7\u6c42token\uff1a openstack --os-auth-url http://controller:5000/v3 \\ --os-project-domain-name Default --os-user-domain-name Default \\ --os-project-name admin --os-username admin token issue \u4e3amyuser\u7528\u6237\u8bf7\u6c42token\uff1a openstack --os-auth-url http://controller:5000/v3 \\ --os-project-domain-name Default --os-user-domain-name Default \\ --os-project-name myproject --os-username myuser token issue Glance \u5b89\u88c5 \u00b6 \u521b\u5efa\u6570\u636e\u5e93\u3001\u670d\u52a1\u51ed\u8bc1\u548c API \u7aef\u70b9 \u521b\u5efa\u6570\u636e\u5e93\uff1a mysql -u root -p MariaDB [(none)]> CREATE DATABASE glance; MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \\ IDENTIFIED BY 'GLANCE_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \\ IDENTIFIED BY 'GLANCE_DBPASS'; MariaDB [(none)]> exit \u6ce8\u610f: \u66ff\u6362 GLANCE_DBPASS \uff0c\u4e3a glance \u6570\u636e\u5e93\u8bbe\u7f6e\u5bc6\u7801 \u521b\u5efa\u670d\u52a1\u51ed\u8bc1 source ~/.admin-openrc openstack user create --domain default --password-prompt glance openstack role add --project service --user glance admin openstack service create --name glance --description \"OpenStack Image\" image \u521b\u5efa\u955c\u50cf\u670d\u52a1API\u7aef\u70b9\uff1a openstack endpoint create --region RegionOne image public http://controller:9292 openstack endpoint create --region RegionOne image internal http://controller:9292 openstack endpoint create --region RegionOne image admin http://controller:9292 \u5b89\u88c5\u8f6f\u4ef6\u5305 yum install openstack-glance \u914d\u7f6eglance\u76f8\u5173\u914d\u7f6e\uff1a vim /etc/glance/glance-api.conf [database] connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance [keystone_authtoken] www_authenticate_uri = http://controller:5000 auth_url = http://controller:5000 memcached_servers = controller:11211 auth_type = password project_domain_name = Default user_domain_name = Default project_name = service username = glance password = GLANCE_PASS [paste_deploy] flavor = keystone [glance_store] stores = file,http default_store = file filesystem_store_datadir = /var/lib/glance/images/ \u89e3\u91ca: [database]\u90e8\u5206\uff0c\u914d\u7f6e\u6570\u636e\u5e93\u5165\u53e3 [keystone_authtoken] [paste_deploy]\u90e8\u5206\uff0c\u914d\u7f6e\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u5165\u53e3 [glance_store]\u90e8\u5206\uff0c\u914d\u7f6e\u672c\u5730\u6587\u4ef6\u7cfb\u7edf\u5b58\u50a8\u548c\u955c\u50cf\u6587\u4ef6\u7684\u4f4d\u7f6e \u6ce8\u610f \u66ff\u6362 GLANCE_DBPASS \u4e3a glance \u6570\u636e\u5e93\u7684\u5bc6\u7801 \u66ff\u6362 GLANCE_PASS 
\u4e3a glance \u7528\u6237\u7684\u5bc6\u7801 \u540c\u6b65\u6570\u636e\u5e93\uff1a su -s /bin/sh -c \"glance-manage db_sync\" glance \u542f\u52a8\u670d\u52a1\uff1a systemctl enable openstack-glance-api.service systemctl start openstack-glance-api.service \u9a8c\u8bc1 \u4e0b\u8f7d\u955c\u50cf source ~/.admin-openrc wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img \u6ce8\u610f \u5982\u679c\u60a8\u4f7f\u7528\u7684\u73af\u5883\u662f\u9cb2\u9e4f\u67b6\u6784\uff0c\u8bf7\u4e0b\u8f7daarch64\u7248\u672c\u7684\u955c\u50cf\uff1b\u5df2\u5bf9\u955c\u50cfcirros-0.5.2-aarch64-disk.img\u8fdb\u884c\u6d4b\u8bd5\u3002 \u5411Image\u670d\u52a1\u4e0a\u4f20\u955c\u50cf\uff1a openstack image create --disk-format qcow2 --container-format bare \\ --file cirros-0.4.0-x86_64-disk.img --public cirros \u786e\u8ba4\u955c\u50cf\u4e0a\u4f20\u5e76\u9a8c\u8bc1\u5c5e\u6027\uff1a openstack image list Placement\u5b89\u88c5 \u00b6 \u521b\u5efa\u6570\u636e\u5e93\u3001\u670d\u52a1\u51ed\u8bc1\u548c API \u7aef\u70b9 \u521b\u5efa\u6570\u636e\u5e93\uff1a \u4f5c\u4e3a root \u7528\u6237\u8bbf\u95ee\u6570\u636e\u5e93\uff0c\u521b\u5efa placement \u6570\u636e\u5e93\u5e76\u6388\u6743\u3002 mysql -u root -p MariaDB [(none)]> CREATE DATABASE placement; MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' \\ IDENTIFIED BY 'PLACEMENT_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' \\ IDENTIFIED BY 'PLACEMENT_DBPASS'; MariaDB [(none)]> exit \u6ce8\u610f \u66ff\u6362 PLACEMENT_DBPASS \u4e3a placement \u6570\u636e\u5e93\u8bbe\u7f6e\u5bc6\u7801 source admin-openrc \u6267\u884c\u5982\u4e0b\u547d\u4ee4\uff0c\u521b\u5efa placement \u670d\u52a1\u51ed\u8bc1\u3001\u521b\u5efa placement \u7528\u6237\u4ee5\u53ca\u6dfb\u52a0\u2018admin\u2019\u89d2\u8272\u5230\u7528\u6237\u2018placement\u2019\u3002 \u521b\u5efaPlacement API\u670d\u52a1 openstack user create --domain default --password-prompt placement openstack role add --project service --user placement admin openstack service create --name placement --description \"Placement API\" placement \u521b\u5efaplacement\u670d\u52a1API\u7aef\u70b9\uff1a openstack endpoint create --region RegionOne placement public http://controller:8778 openstack endpoint create --region RegionOne placement internal http://controller:8778 openstack endpoint create --region RegionOne placement admin http://controller:8778 \u5b89\u88c5\u548c\u914d\u7f6e \u5b89\u88c5\u8f6f\u4ef6\u5305\uff1a yum install openstack-placement-api \u914d\u7f6eplacement\uff1a \u7f16\u8f91 /etc/placement/placement.conf \u6587\u4ef6\uff1a \u5728[placement_database]\u90e8\u5206\uff0c\u914d\u7f6e\u6570\u636e\u5e93\u5165\u53e3 \u5728[api] [keystone_authtoken]\u90e8\u5206\uff0c\u914d\u7f6e\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u5165\u53e3 # vim /etc/placement/placement.conf [placement_database] # ... connection = mysql+pymysql://placement:PLACEMENT_DBPASS@controller/placement [api] # ... auth_strategy = keystone [keystone_authtoken] # ... 
### Placement Installation

Create the database, service credentials and API endpoints.

Create the database. Access the database as the root user, create the placement database and grant privileges:

```shell
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE placement;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' \
                  IDENTIFIED BY 'PLACEMENT_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' \
                  IDENTIFIED BY 'PLACEMENT_DBPASS';
MariaDB [(none)]> exit
```

Note: replace PLACEMENT_DBPASS with the password you set for the placement database.

```shell
source admin-openrc
```

Run the following commands to create the placement service credentials, create the placement user, and add the admin role to the placement user, then create the Placement API service:

```shell
openstack user create --domain default --password-prompt placement
openstack role add --project service --user placement admin
openstack service create --name placement --description "Placement API" placement
```

Create the Placement service API endpoints:

```shell
openstack endpoint create --region RegionOne placement public http://controller:8778
openstack endpoint create --region RegionOne placement internal http://controller:8778
openstack endpoint create --region RegionOne placement admin http://controller:8778
```

Installation and configuration

Install the packages:

```shell
yum install openstack-placement-api
```

Configure Placement by editing the /etc/placement/placement.conf file: in the [placement_database] section, configure the database entry; in the [api] and [keystone_authtoken] sections, configure the Identity service entry.

```shell
# vim /etc/placement/placement.conf

[placement_database]
# ...
connection = mysql+pymysql://placement:PLACEMENT_DBPASS@controller/placement

[api]
# ...
auth_strategy = keystone

[keystone_authtoken]
# ...
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = placement
password = PLACEMENT_PASS
```

Replace PLACEMENT_DBPASS with the password of the placement database and PLACEMENT_PASS with the password of the placement user.

Synchronize the database:

```shell
su -s /bin/sh -c "placement-manage db sync" placement
```

Restart the httpd service:

```shell
systemctl restart httpd
```

Verification

Run a status check:

```shell
. admin-openrc
placement-status upgrade check
```

Install osc-placement and list the available resource classes and traits:

```shell
yum install python3-osc-placement
openstack --os-placement-api-version 1.2 resource class list --sort-column name
openstack --os-placement-api-version 1.6 trait list --sort-column name
```

### Nova Installation

Create the database, service credentials and API endpoints.

Create the databases:

```shell
mysql -u root -p    (CTL)

MariaDB [(none)]> CREATE DATABASE nova_api;
MariaDB [(none)]> CREATE DATABASE nova;
MariaDB [(none)]> CREATE DATABASE nova_cell0;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \
                  IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \
                  IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
                  IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
                  IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \
                  IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \
                  IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> exit
```

Note: replace NOVA_DBPASS with the password you set for the nova databases.

```shell
source ~/.admin-openrc    (CTL)
```

Create the nova service credentials:

```shell
openstack user create --domain default --password-prompt nova    (CTL)
openstack role add --project service --user nova admin    (CTL)
openstack service create --name nova --description "OpenStack Compute" compute    (CTL)
```

Create the nova API endpoints:

```shell
openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1    (CTL)
openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1    (CTL)
openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1    (CTL)
```

Install the packages:

```shell
yum install openstack-nova-api openstack-nova-conductor \    (CTL)
            openstack-nova-novncproxy openstack-nova-scheduler

yum install openstack-nova-compute    (CPT)
```

Note: on arm64, additionally run:

```shell
yum install edk2-aarch64    (CPT)
```
Configure Nova:

```shell
vim /etc/nova/nova.conf

[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
my_ip = 10.0.0.1
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver
compute_driver = libvirt.LibvirtDriver    (CPT)
instances_path = /var/lib/nova/instances/    (CPT)
lock_path = /var/lib/nova/tmp    (CPT)
logdir = /var/log/nova/    (CPT)

[api_database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api    (CTL)

[database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova    (CTL)

[api]
auth_strategy = keystone

[keystone_authtoken]
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = NOVA_PASS

[vnc]
enabled = true
server_listen = $my_ip
server_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html    (CPT)

[glance]
api_servers = http://controller:9292

[oslo_concurrency]
lock_path = /var/lib/nova/tmp    (CTL)

[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = PLACEMENT_PASS

[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
service_metadata_proxy = true    (CTL)
metadata_proxy_shared_secret = METADATA_SECRET    (CTL)
```

Explanation:

- [DEFAULT]: enable the compute and metadata APIs, configure the RabbitMQ message queue entry, set my_ip, and enable the neutron network service;
- [api_database] and [database]: database connection entries;
- [api] and [keystone_authtoken]: Identity service entry;
- [vnc]: enable and configure the remote console entry;
- [glance]: address of the Image service API;
- [oslo_concurrency]: lock path;
- [placement]: entry of the Placement service.

Note:

- replace RABBIT_PASS with the password of the openstack account in RabbitMQ;
- set my_ip to the management IP address of the controller node;
- replace NOVA_DBPASS with the password of the nova database;
- replace NOVA_PASS with the password of the nova user;
- replace PLACEMENT_PASS with the password of the placement user;
- replace NEUTRON_PASS with the password of the neutron user;
- replace METADATA_SECRET with a suitable metadata proxy secret.

Extra: determine whether the host supports hardware acceleration for virtual machines (x86):

```shell
egrep -c '(vmx|svm)' /proc/cpuinfo    (CPT)
```

If the command returns 0, hardware acceleration is not supported and libvirt must be configured to use QEMU instead of KVM:

```shell
vim /etc/nova/nova.conf    (CPT)

[libvirt]
virt_type = qemu
```

If it returns 1 or greater, hardware acceleration is supported and virt_type can be set to kvm.

Note: on arm64, additionally run the following commands on the compute nodes:

```shell
mkdir -p /usr/share/AAVMF
chown nova:nova /usr/share/AAVMF
ln -s /usr/share/edk2/aarch64/QEMU_EFI-pflash.raw \
      /usr/share/AAVMF/AAVMF_CODE.fd
ln -s /usr/share/edk2/aarch64/vars-template-pflash.raw \
      /usr/share/AAVMF/AAVMF_VARS.fd
```
\"/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw: \\ /usr/share/edk2/aarch64/vars-template-pflash.raw\"] \u5e76\u4e14\u5f53ARM\u67b6\u6784\u4e0b\u7684\u90e8\u7f72\u73af\u5883\u4e3a\u5d4c\u5957\u865a\u62df\u5316\u65f6\uff0c libvirt \u914d\u7f6e\u5982\u4e0b\uff1a [libvirt] virt_type = qemu cpu_mode = custom cpu_model = cortex-a72 \u540c\u6b65\u6570\u636e\u5e93 \u540c\u6b65nova-api\u6570\u636e\u5e93\uff1a su -s /bin/sh -c \"nova-manage api_db sync\" nova (CTL) \u6ce8\u518ccell0\u6570\u636e\u5e93\uff1a su -s /bin/sh -c \"nova-manage cell_v2 map_cell0\" nova (CTL) \u521b\u5efacell1 cell\uff1a su -s /bin/sh -c \"nova-manage cell_v2 create_cell --name=cell1 --verbose\" nova (CTL) \u540c\u6b65nova\u6570\u636e\u5e93\uff1a su -s /bin/sh -c \"nova-manage db sync\" nova (CTL) \u9a8c\u8bc1cell0\u548ccell1\u6ce8\u518c\u6b63\u786e\uff1a su -s /bin/sh -c \"nova-manage cell_v2 list_cells\" nova (CTL) \u6dfb\u52a0\u8ba1\u7b97\u8282\u70b9\u5230openstack\u96c6\u7fa4 su -s /bin/sh -c \"nova-manage cell_v2 discover_hosts --verbose\" nova (CTL) \u542f\u52a8\u670d\u52a1 systemctl enable \\ (CTL) openstack-nova-api.service \\ openstack-nova-scheduler.service \\ openstack-nova-conductor.service \\ openstack-nova-novncproxy.service systemctl start \\ (CTL) openstack-nova-api.service \\ openstack-nova-scheduler.service \\ openstack-nova-conductor.service \\ openstack-nova-novncproxy.service systemctl enable libvirtd.service openstack-nova-compute.service (CPT) systemctl start libvirtd.service openstack-nova-compute.service (CPT) \u9a8c\u8bc1 source ~/.admin-openrc (CTL) \u5217\u51fa\u670d\u52a1\u7ec4\u4ef6\uff0c\u9a8c\u8bc1\u6bcf\u4e2a\u6d41\u7a0b\u90fd\u6210\u529f\u542f\u52a8\u548c\u6ce8\u518c\uff1a openstack compute service list (CTL) \u5217\u51fa\u8eab\u4efd\u670d\u52a1\u4e2d\u7684API\u7aef\u70b9\uff0c\u9a8c\u8bc1\u4e0e\u8eab\u4efd\u670d\u52a1\u7684\u8fde\u63a5\uff1a openstack catalog list (CTL) \u5217\u51fa\u955c\u50cf\u670d\u52a1\u4e2d\u7684\u955c\u50cf\uff0c\u9a8c\u8bc1\u4e0e\u955c\u50cf\u670d\u52a1\u7684\u8fde\u63a5\uff1a openstack image list (CTL) \u68c0\u67e5cells\u662f\u5426\u8fd0\u4f5c\u6210\u529f\uff0c\u4ee5\u53ca\u5176\u4ed6\u5fc5\u8981\u6761\u4ef6\u662f\u5426\u5df2\u5177\u5907\u3002 nova-status upgrade check (CTL) Neutron \u5b89\u88c5 \u00b6 \u521b\u5efa\u6570\u636e\u5e93\u3001\u670d\u52a1\u51ed\u8bc1\u548c API \u7aef\u70b9 \u521b\u5efa\u6570\u636e\u5e93\uff1a mysql -u root -p (CTL) MariaDB [(none)]> CREATE DATABASE neutron; MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \\ IDENTIFIED BY 'NEUTRON_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \\ IDENTIFIED BY 'NEUTRON_DBPASS'; MariaDB [(none)]> exit \u6ce8\u610f \u66ff\u6362 NEUTRON_DBPASS \u4e3a neutron \u6570\u636e\u5e93\u8bbe\u7f6e\u5bc6\u7801\u3002 source ~/.admin-openrc (CTL) \u521b\u5efaneutron\u670d\u52a1\u51ed\u8bc1 openstack user create --domain default --password-prompt neutron (CTL) openstack role add --project service --user neutron admin (CTL) openstack service create --name neutron --description \"OpenStack Networking\" network (CTL) \u521b\u5efaNeutron\u670d\u52a1API\u7aef\u70b9\uff1a openstack endpoint create --region RegionOne network public http://controller:9696 (CTL) openstack endpoint create --region RegionOne network internal http://controller:9696 (CTL) openstack endpoint create --region RegionOne network admin http://controller:9696 (CTL) \u5b89\u88c5\u8f6f\u4ef6\u5305\uff1a yum install openstack-neutron openstack-neutron-linuxbridge ebtables ipset \\ (CTL) 
### Neutron Installation

Create the database, service credentials and API endpoints.

Create the database:

```shell
mysql -u root -p    (CTL)

MariaDB [(none)]> CREATE DATABASE neutron;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
                  IDENTIFIED BY 'NEUTRON_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
                  IDENTIFIED BY 'NEUTRON_DBPASS';
MariaDB [(none)]> exit
```

Note: replace NEUTRON_DBPASS with the password you set for the neutron database.

```shell
source ~/.admin-openrc    (CTL)
```

Create the neutron service credentials:

```shell
openstack user create --domain default --password-prompt neutron    (CTL)
openstack role add --project service --user neutron admin    (CTL)
openstack service create --name neutron --description "OpenStack Networking" network    (CTL)
```

Create the Neutron service API endpoints:

```shell
openstack endpoint create --region RegionOne network public http://controller:9696    (CTL)
openstack endpoint create --region RegionOne network internal http://controller:9696    (CTL)
openstack endpoint create --region RegionOne network admin http://controller:9696    (CTL)
```

Install the packages:

```shell
yum install openstack-neutron openstack-neutron-linuxbridge ebtables ipset \    (CTL)
            openstack-neutron-ml2

yum install openstack-neutron-linuxbridge ebtables ipset    (CPT)
```

Configure Neutron.

Main configuration:

```shell
vim /etc/neutron/neutron.conf

[database]
connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron    (CTL)

[DEFAULT]
core_plugin = ml2    (CTL)
service_plugins = router    (CTL)
allow_overlapping_ips = true    (CTL)
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = true    (CTL)
notify_nova_on_port_data_changes = true    (CTL)
api_workers = 3    (CTL)

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = neutron
password = NEUTRON_PASS

[nova]
auth_url = http://controller:5000    (CTL)
auth_type = password    (CTL)
project_domain_name = Default    (CTL)
user_domain_name = Default    (CTL)
region_name = RegionOne    (CTL)
project_name = service    (CTL)
username = nova    (CTL)
password = NOVA_PASS    (CTL)

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
```

Explanation:

- [database]: database connection entry;
- [DEFAULT]: enable the ML2 plug-in and the router plug-in, allow overlapping IP addresses, configure the RabbitMQ message queue entry;
- [DEFAULT] and [keystone_authtoken]: Identity service entry;
- [DEFAULT] and [nova]: configure networking to notify Compute of network topology changes;
- [oslo_concurrency]: lock path.

Note:

- replace NEUTRON_DBPASS with the password of the neutron database;
- replace RABBIT_PASS with the password of the openstack account in RabbitMQ;
- replace NEUTRON_PASS with the password of the neutron user;
- replace NOVA_PASS with the password of the nova user.

Configure the ML2 plug-in:

```shell
vim /etc/neutron/plugins/ml2/ml2_conf.ini

[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security

[ml2_type_flat]
flat_networks = provider

[ml2_type_vxlan]
vni_ranges = 1:1000

[securitygroup]
enable_ipset = true
```

Create the /etc/neutron/plugin.ini symbolic link:

```shell
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
```

Note:

- [ml2]: enable flat, VLAN and VXLAN networks, enable the linuxbridge and l2population mechanisms, and enable the port security extension driver;
- [ml2_type_flat]: configure the flat network as the provider virtual network;
- [ml2_type_vxlan]: configure the VXLAN network identifier range;
- [securitygroup]: enable ipset.

Additional note: the concrete L2 configuration can be adapted to your needs; this document uses a provider network with linuxbridge.

Configure the Linux bridge agent:

```shell
vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini

[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME

[vxlan]
enable_vxlan = true
local_ip = OVERLAY_INTERFACE_IP_ADDRESS
l2_population = true

[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
```
Explanation:

- [linux_bridge]: map the provider virtual network to the physical network interface;
- [vxlan]: enable the VXLAN overlay network, configure the IP address of the physical interface that handles overlay traffic, and enable layer-2 population;
- [securitygroup]: enable security groups and configure the Linux bridge iptables firewall driver.

Note: replace PROVIDER_INTERFACE_NAME with the physical network interface, and OVERLAY_INTERFACE_IP_ADDRESS with the management IP address of the controller node.

Configure the Layer-3 agent:

```shell
vim /etc/neutron/l3_agent.ini    (CTL)

[DEFAULT]
interface_driver = linuxbridge
```

Explanation: in the [DEFAULT] section, set the interface driver to linuxbridge.

Configure the DHCP agent:

```shell
vim /etc/neutron/dhcp_agent.ini    (CTL)

[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
```

Explanation: in the [DEFAULT] section, configure the linuxbridge interface driver and the Dnsmasq DHCP driver, and enable isolated metadata.

Configure the metadata agent:

```shell
vim /etc/neutron/metadata_agent.ini    (CTL)

[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = METADATA_SECRET
```

Explanation: in the [DEFAULT] section, configure the metadata host and the shared secret.

Note: replace METADATA_SECRET with a suitable metadata proxy secret.

Configure Nova:

```shell
vim /etc/nova/nova.conf

[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = Default
user_domain_name = Default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
service_metadata_proxy = true    (CTL)
metadata_proxy_shared_secret = METADATA_SECRET    (CTL)
```

Explanation: in the [neutron] section, configure the access parameters, enable the metadata proxy, and configure the secret.

Note: replace NEUTRON_PASS with the password of the neutron user, and METADATA_SECRET with a suitable metadata proxy secret.

Synchronize the database:

```shell
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
    --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
```

Restart the Compute API service:

```shell
systemctl restart openstack-nova-api.service
```

Start the networking services:

```shell
systemctl enable neutron-server.service neutron-linuxbridge-agent.service \    (CTL)
                 neutron-dhcp-agent.service neutron-metadata-agent.service \
                 neutron-l3-agent.service
systemctl restart neutron-server.service neutron-linuxbridge-agent.service \    (CTL)
                  neutron-dhcp-agent.service neutron-metadata-agent.service \
                  neutron-l3-agent.service

systemctl enable neutron-linuxbridge-agent.service    (CPT)
systemctl restart neutron-linuxbridge-agent.service openstack-nova-compute.service    (CPT)
```

Verification

Verify that the neutron agents started successfully:

```shell
openstack network agent list
```
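Optionally (not part of the original steps), the provider network described above can be exercised by creating a flat external network; a minimal sketch, assuming the physical network label `provider` configured in ml2_conf.ini, with an example subnet range that must be replaced by your own:

```shell
# Hypothetical test network; adjust the subnet range, gateway and DNS to your environment.
openstack network create --share --external \
    --provider-physical-network provider \
    --provider-network-type flat provider    (CTL)
openstack subnet create --network provider \
    --allocation-pool start=203.0.113.101,end=203.0.113.250 \
    --dns-nameserver 8.8.8.8 --gateway 203.0.113.1 \
    --subnet-range 203.0.113.0/24 provider-subnet    (CTL)
```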
### Cinder Installation

Create the database, service credentials and API endpoints.

Create the database:

```shell
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE cinder;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \
                  IDENTIFIED BY 'CINDER_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \
                  IDENTIFIED BY 'CINDER_DBPASS';
MariaDB [(none)]> exit
```

Note: replace CINDER_DBPASS with the password you set for the cinder database.

```shell
source ~/.admin-openrc
```

Create the cinder service credentials:

```shell
openstack user create --domain default --password-prompt cinder
openstack role add --project service --user cinder admin
openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
```

Create the Block Storage service API endpoints:

```shell
openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s
```

Install the packages:

```shell
yum install openstack-cinder-api openstack-cinder-scheduler    (CTL)

yum install lvm2 device-mapper-persistent-data scsi-target-utils rpcbind nfs-utils \    (STG)
            openstack-cinder-volume openstack-cinder-backup
```

Prepare the storage device; the following is only an example:

```shell
pvcreate /dev/vdb
vgcreate cinder-volumes /dev/vdb
```
filter = [ \"a/vdb/\", \"r/.*/\"] \u89e3\u91ca \u5728devices\u90e8\u5206\uff0c\u6dfb\u52a0\u8fc7\u6ee4\u4ee5\u63a5\u53d7/dev/vdb\u8bbe\u5907\u62d2\u7edd\u5176\u4ed6\u8bbe\u5907\u3002 \u51c6\u5907NFS mkdir -p /root/cinder/backup cat << EOF >> /etc/export /root/cinder/backup 192.168.1.0/24(rw,sync,no_root_squash,no_all_squash) EOF \u914d\u7f6ecinder\u76f8\u5173\u914d\u7f6e\uff1a vim /etc/cinder/cinder.conf [DEFAULT] transport_url = rabbit://openstack:RABBIT_PASS@controller auth_strategy = keystone my_ip = 10.0.0.11 enabled_backends = lvm (STG) backup_driver=cinder.backup.drivers.nfs.NFSBackupDriver (STG) backup_share=HOST:PATH (STG) [database] connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder [keystone_authtoken] www_authenticate_uri = http://controller:5000 auth_url = http://controller:5000 memcached_servers = controller:11211 auth_type = password project_domain_name = Default user_domain_name = Default project_name = service username = cinder password = CINDER_PASS [oslo_concurrency] lock_path = /var/lib/cinder/tmp [lvm] volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver (STG) volume_group = cinder-volumes (STG) iscsi_protocol = iscsi (STG) iscsi_helper = tgtadm (STG) \u89e3\u91ca [database]\u90e8\u5206\uff0c\u914d\u7f6e\u6570\u636e\u5e93\u5165\u53e3\uff1b [DEFAULT]\u90e8\u5206\uff0c\u914d\u7f6eRabbitMQ\u6d88\u606f\u961f\u5217\u5165\u53e3\uff0c\u914d\u7f6emy_ip\uff1b [DEFAULT] [keystone_authtoken]\u90e8\u5206\uff0c\u914d\u7f6e\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u5165\u53e3\uff1b [oslo_concurrency]\u90e8\u5206\uff0c\u914d\u7f6elock path\u3002 \u6ce8\u610f \u66ff\u6362 CINDER_DBPASS \u4e3a cinder \u6570\u636e\u5e93\u7684\u5bc6\u7801\uff1b \u66ff\u6362 RABBIT_PASS \u4e3a RabbitMQ \u4e2d openstack \u8d26\u6237\u7684\u5bc6\u7801\uff1b \u914d\u7f6e my_ip \u4e3a\u63a7\u5236\u8282\u70b9\u7684\u7ba1\u7406 IP \u5730\u5740\uff1b \u66ff\u6362 CINDER_PASS \u4e3a cinder \u7528\u6237\u7684\u5bc6\u7801\uff1b \u66ff\u6362 HOST:PATH \u4e3a NFS \u7684HOSTIP\u548c\u5171\u4eab\u8def\u5f84\uff1b \u540c\u6b65\u6570\u636e\u5e93\uff1a su -s /bin/sh -c \"cinder-manage db sync\" cinder (CTL) \u914d\u7f6enova\uff1a vim /etc/nova/nova.conf (CTL) [cinder] os_region_name = RegionOne \u91cd\u542f\u8ba1\u7b97API\u670d\u52a1 systemctl restart openstack-nova-api.service \u542f\u52a8cinder\u670d\u52a1 systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service (CTL) systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service (CTL) systemctl enable rpcbind.service nfs-server.service tgtd.service iscsid.service \\ (STG) openstack-cinder-volume.service \\ openstack-cinder-backup.service systemctl start rpcbind.service nfs-server.service tgtd.service iscsid.service \\ (STG) openstack-cinder-volume.service \\ openstack-cinder-backup.service \u6ce8\u610f \u5f53cinder\u4f7f\u7528tgtadm\u7684\u65b9\u5f0f\u6302\u5377\u7684\u65f6\u5019\uff0c\u8981\u4fee\u6539/etc/tgt/tgtd.conf\uff0c\u5185\u5bb9\u5982\u4e0b\uff0c\u4fdd\u8bc1tgtd\u53ef\u4ee5\u53d1\u73b0cinder-volume\u7684iscsi target\u3002 include /var/lib/cinder/volumes/* \u9a8c\u8bc1 source ~/.admin-openrc openstack volume service list horizon \u5b89\u88c5 \u00b6 \u5b89\u88c5\u8f6f\u4ef6\u5305 yum install openstack-dashboard \u4fee\u6539\u6587\u4ef6 \u4fee\u6539\u53d8\u91cf vim /etc/openstack-dashboard/local_settings OPENSTACK_HOST = \"controller\" ALLOWED_HOSTS = ['*', ] SESSION_ENGINE = 'django.contrib.sessions.backends.cache' CACHES = { 'default': { 'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache', 
### Horizon Installation

Install the packages:

```shell
yum install openstack-dashboard
```

Modify the following variables in the file:

```shell
vim /etc/openstack-dashboard/local_settings

OPENSTACK_HOST = "controller"
ALLOWED_HOSTS = ['*', ]

SESSION_ENGINE = 'django.contrib.sessions.backends.cache'

CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'controller:11211',
    }
}

OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "member"

WEBROOT = '/dashboard'
POLICY_FILES_PATH = "/etc/openstack-dashboard"

OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 3,
}
```

Restart the httpd service:

```shell
systemctl restart httpd.service memcached.service
```

Verification

Open a browser, go to http://HOSTIP/dashboard/ and log in to horizon.

Note: replace HOSTIP with the management-plane IP address of the controller node.

### Tempest Installation

Tempest is the OpenStack integration test service. It is recommended if you need comprehensive automated functional testing of the installed OpenStack environment; otherwise it does not need to be installed.

Install Tempest:

```shell
yum install openstack-tempest
```

Initialize a working directory:

```shell
tempest init mytest
```

Edit the configuration file:

```shell
cd mytest
vi etc/tempest.conf
```

tempest.conf must be filled in with the information of the current OpenStack environment; refer to the official sample for the details.

Run the tests:

```shell
tempest run
```

Install tempest plugins (optional). The individual OpenStack services also provide their own tempest test packages, which can be installed to extend tempest's coverage. In Train we provide plugin tests for Cinder, Glance, Keystone, Ironic and Trove; install and use them with:

```shell
yum install python3-cinder-tempest-plugin python3-glance-tempest-plugin python3-ironic-tempest-plugin python3-keystone-tempest-plugin python3-trove-tempest-plugin
```
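As a starting point for the tempest.conf mentioned above, a minimal sketch of commonly required options; all values are environment-specific placeholders (credentials, endpoint, image/flavor/network IDs), and the official sample referenced above remains the authoritative list:

```ini
# Minimal sketch of etc/tempest.conf; replace every placeholder with values from your deployment.
[auth]
admin_username = admin
admin_password = ADMIN_PASS
admin_project_name = admin
admin_domain_name = Default

[identity]
uri_v3 = http://controller:5000/v3
auth_version = v3
region = RegionOne

[compute]
image_ref = IMAGE_UUID      ; e.g. the cirros image uploaded earlier
flavor_ref = FLAVOR_ID

[network]
public_network_id = PROVIDER_NETWORK_UUID
```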
### Ironic Installation

Ironic is the OpenStack Bare Metal service. It is recommended if you need bare metal provisioning; otherwise it does not need to be installed.

Set up the database

The Bare Metal service stores its information in a database. Create an ironic database that the ironic user can access, replacing IRONIC_DBPASSWORD with a suitable password:

```shell
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE ironic CHARACTER SET utf8;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'localhost' \
                  IDENTIFIED BY 'IRONIC_DBPASSWORD';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'%' \
                  IDENTIFIED BY 'IRONIC_DBPASSWORD';
```

Install the packages:

```shell
yum install openstack-ironic-api openstack-ironic-conductor python3-ironicclient
```

Start the services:

```shell
systemctl enable openstack-ironic-api openstack-ironic-conductor
systemctl start openstack-ironic-api openstack-ironic-conductor
```

Create the service user authentication

1. Create the Bare Metal service user:

```shell
openstack user create --password IRONIC_PASSWORD \
    --email ironic@example.com ironic
openstack role add --project service --user ironic admin
openstack service create --name ironic \
    --description "Ironic baremetal provisioning service" baremetal
```

2. Create the Bare Metal service endpoints:

```shell
openstack endpoint create --region RegionOne baremetal admin http://$IRONIC_NODE:6385
openstack endpoint create --region RegionOne baremetal public http://$IRONIC_NODE:6385
openstack endpoint create --region RegionOne baremetal internal http://$IRONIC_NODE:6385
```

Configure the ironic-api service

Configuration file path: /etc/ironic/ironic.conf

1. Configure the location of the database via the connection option, as shown below. Replace IRONIC_DBPASSWORD with the password of the ironic user and DB_IP with the IP address of the database server:

```shell
[database]

# The SQLAlchemy connection string used to connect to the
# database (string value)
connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic
```

2. Configure the ironic-api service to use the RabbitMQ message broker via the following option. Replace RPC_* with the RabbitMQ address details and credentials:

```shell
[DEFAULT]

# A URL representing the messaging driver to use and its full
# configuration. (string value)
transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
```

Users may also replace RabbitMQ with the json-rpc mechanism.

3. Configure the credentials that the ironic-api service uses with the Identity service. Replace PUBLIC_IDENTITY_IP with the public IP of the Identity server, PRIVATE_IDENTITY_IP with its private IP, and IRONIC_PASSWORD with the password of the ironic user in the Identity service:

```shell
[DEFAULT]

# Authentication strategy used by ironic-api: one of
# "keystone" or "noauth". "noauth" should not be used in a
# production environment because all authentication will be
# disabled. (string value)
auth_strategy=keystone

[keystone_authtoken]

# Authentication type to load (string value)
auth_type=password

# Complete public Identity API endpoint (string value)
www_authenticate_uri=http://PUBLIC_IDENTITY_IP:5000

# Complete admin Identity API endpoint. (string value)
auth_url=http://PRIVATE_IDENTITY_IP:5000

# Service username. (string value)
username=ironic

# Service account password. (string value)
password=IRONIC_PASSWORD

# Service tenant name. (string value)
project_name=service

# Domain name containing project (string value)
project_domain_name=Default

# User's domain name (string value)
user_domain_name=Default
```
4. Create the Bare Metal service database tables:

```shell
ironic-dbsync --config-file /etc/ironic/ironic.conf create_schema
```

5. Restart the ironic-api service:

```shell
sudo systemctl restart openstack-ironic-api
```

Configure the ironic-conductor service

1. Replace HOST_IP with the IP of the conductor host:

```shell
[DEFAULT]

# IP address of this host. If unset, will determine the IP
# programmatically. If unable to do so, will use "127.0.0.1".
# (string value)
my_ip=HOST_IP
```

2. Configure the location of the database. ironic-conductor should use the same configuration as ironic-api. Replace IRONIC_DBPASSWORD with the password of the ironic user and DB_IP with the IP address of the database server:

```shell
[database]

# The SQLAlchemy connection string to use to connect to the
# database. (string value)
connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic
```

3. Configure the service to use the RabbitMQ message broker via the following option. ironic-conductor should use the same configuration as ironic-api; replace RPC_* with the RabbitMQ address details and credentials:

```shell
[DEFAULT]

# A URL representing the messaging driver to use and its full
# configuration. (string value)
transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
```

Users may also replace RabbitMQ with the json-rpc mechanism.

4. Configure credentials for accessing other OpenStack services.

To communicate with other OpenStack services, the Bare Metal service needs to authenticate with the OpenStack Identity service using a service user when making requests. The credentials of these users must be configured in each configuration section associated with the corresponding service:

- [neutron] - accessing the OpenStack Networking service
- [glance] - accessing the OpenStack Image service
- [swift] - accessing the OpenStack Object Storage service
- [cinder] - accessing the OpenStack Block Storage service
- [inspector] - accessing the OpenStack Bare Metal introspection service
- [service_catalog] - a special entry that stores the credentials the Bare Metal service uses to discover its own API URL endpoint as registered in the OpenStack Identity service catalog

For simplicity, the same service user can be used for all services. For backward compatibility, it should be the same user as configured in [keystone_authtoken] for the ironic-api service, but this is not mandatory; a different service user can be created and configured for each service.

In the following example, the authentication information used to access the OpenStack Networking service is configured as follows:
- the Networking service is deployed in the Identity service region named RegionOne, and only the public endpoint interface is registered in the service catalog;
- requests use a specific CA SSL certificate for HTTPS connections;
- the same service user as configured for the ironic-api service is used;
- the dynamic password authentication plugin discovers a suitable Identity service API version based on the other options.

```shell
[neutron]

# Authentication type to load (string value)
auth_type = password

# Authentication URL (string value)
auth_url=https://IDENTITY_IP:5000/

# Username (string value)
username=ironic

# User's password (string value)
password=IRONIC_PASSWORD

# Project name to scope to (string value)
project_name=service

# Domain ID containing project (string value)
project_domain_id=default

# User's domain id (string value)
user_domain_id=default

# PEM encoded Certificate Authority to use when verifying
# HTTPs connections. (string value)
cafile=/opt/stack/data/ca-bundle.pem

# The default region_name for endpoint URL discovery. (string
# value)
region_name = RegionOne

# List of interfaces, in order of preference, for endpoint
# URL. (list value)
valid_interfaces=public
```

By default, to communicate with other services, the Bare Metal service attempts to discover a suitable endpoint for the service through the service catalog of the Identity service. If you want to use a different endpoint for a particular service, specify it with the endpoint_override option in the Bare Metal service configuration file:

```shell
[neutron]
...
endpoint_override =
```

5. Configure the allowed drivers and hardware types.

Set the hardware types allowed by the ironic-conductor service via enabled_hardware_types:

```shell
[DEFAULT]
enabled_hardware_types = ipmi
```

Configure the hardware interfaces:

```shell
enabled_boot_interfaces = pxe
enabled_deploy_interfaces = direct,iscsi
enabled_inspect_interfaces = inspector
enabled_management_interfaces = ipmitool
enabled_power_interfaces = ipmitool
```

Configure the interface defaults:

```shell
[DEFAULT]
default_deploy_interface = direct
default_network_interface = neutron
```

If any driver that uses direct deploy is enabled, the Swift backend of the Image service must be installed and configured. The Ceph Object Gateway (RADOS Gateway) is also supported as an Image service backend.

6. Restart the ironic-conductor service:

```shell
sudo systemctl restart openstack-ironic-conductor
```

Configure the httpd service

Create the httpd root directory to be used by ironic and set its owner and group. The directory path must match the path specified by the http_root option in the [deploy] section of /etc/ironic/ironic.conf:

```shell
mkdir -p /var/lib/ironic/httproot
chown ironic.ironic /var/lib/ironic/httproot
```

Install and configure the httpd service

1. Install the httpd service (skip if it is already installed):

```shell
yum install httpd -y
```
2. Create the /etc/httpd/conf.d/openstack-ironic-httpd.conf file with the following content:

```shell
Listen 8080

ServerName ironic.openeuler.com

ErrorLog "/var/log/httpd/openstack-ironic-httpd-error_log"
CustomLog "/var/log/httpd/openstack-ironic-httpd-access_log" "%h %l %u %t \"%r\" %>s %b"

DocumentRoot "/var/lib/ironic/httproot"
Options Indexes FollowSymLinks
Require all granted
LogLevel warn
AddDefaultCharset UTF-8
EnableSendfile on
```

Note that the listening port must match the port specified in the http_url option of the [deploy] section in /etc/ironic/ironic.conf.

Restart the httpd service:

```shell
systemctl restart httpd
```

7. Build the deploy ramdisk image

In Train, the ramdisk image can be built with the ironic-python-agent service or the disk-image-builder tool, or with the community's latest ironic-python-agent-builder. Users may also choose other tools. To use the native Train tools, install the corresponding package:

```shell
yum install openstack-ironic-python-agent
```

or

```shell
yum install diskimage-builder
```

See the official documentation for detailed usage.

The following describes the complete process of building the deploy image used by ironic with ironic-python-agent-builder.

Install ironic-python-agent-builder

1. Install the tool:

```shell
pip install ironic-python-agent-builder
```

2. Change the Python interpreter in the following files:

```shell
/usr/bin/yum /usr/libexec/urlgrabber-ext-down
```
3. Install the other required tools:

```shell
yum install git
```

Because `DIB` depends on the `semanage` command, make sure the command is available before building the image: `semanage --help`. If the command is not found, install it:

```shell
# First, find out which package needs to be installed
[root@localhost ~]# yum provides /usr/sbin/semanage
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirror.vcu.edu
 * extras: mirror.vcu.edu
 * updates: mirror.math.princeton.edu
policycoreutils-python-2.5-34.el7.aarch64 : SELinux policy core python utilities
Repo        : base
Matched from:
Filename    : /usr/sbin/semanage

# Install it
[root@localhost ~]# yum install policycoreutils-python
```

Build the image

On the `arm` architecture, additionally export:

```shell
export ARCH=aarch64
```

Basic usage:

```shell
usage: ironic-python-agent-builder [-h] [-r RELEASE] [-o OUTPUT] [-e ELEMENT]
                                   [-b BRANCH] [-v] [--extra-args EXTRA_ARGS]
                                   distribution

positional arguments:
  distribution          Distribution to use

optional arguments:
  -h, --help            show this help message and exit
  -r RELEASE, --release RELEASE
                        Distribution release to use
  -o OUTPUT, --output OUTPUT
                        Output base file name
  -e ELEMENT, --element ELEMENT
                        Additional DIB element to use
  -b BRANCH, --branch BRANCH
                        If set, override the branch that is used for ironic-
                        python-agent and requirements
  -v, --verbose         Enable verbose logging in diskimage-builder
  --extra-args EXTRA_ARGS
                        Extra arguments to pass to diskimage-builder
```

Example:

```shell
ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky
```

Allow SSH login

Initialize the environment variables, then build the image:

```shell
export DIB_DEV_USER_USERNAME=ipa
export DIB_DEV_USER_PWDLESS_SUDO=yes
export DIB_DEV_USER_PASSWORD='123'
ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky -e selinux-permissive -e devuser
```

Specify the code repository

Initialize the corresponding environment variables, then build the image:

```shell
# Specify the repository location and branch
DIB_REPOLOCATION_ironic_python_agent=git@172.20.2.149:liuzz/ironic-python-agent.git
DIB_REPOREF_ironic_python_agent=origin/develop

# Clone the code directly from gerrit
DIB_REPOLOCATION_ironic_python_agent=https://review.opendev.org/openstack/ironic-python-agent
DIB_REPOREF_ironic_python_agent=refs/changes/43/701043/1
```

Reference: [source-repositories](https://docs.openstack.org/diskimage-builder/latest/elements/source-repositories/README.html).

Specifying the repository location and version has been verified to work.

Note

The PXE configuration file template in upstream OpenStack does not support the arm64 architecture, so the upstream OpenStack code needs to be modified by the user: in Train, the community ironic still does not support UEFI PXE boot on arm64. The symptom is that the generated grub.cfg file (usually located under /tftpboot/) has the wrong format, which makes PXE boot fail, so users need to adapt the code that generates grub.cfg themselves.
TLS errors when ironic sends command execution status queries to IPA: in Train, both IPA and ironic enable TLS verification by default when sending requests to each other; disable it as described in the official documentation.

Modify the ironic configuration file (/etc/ironic/ironic.conf), adding ipa-insecure=1 to the following configuration:

```shell
[agent]
verify_ca = False

[pxe]
pxe_append_params = nofb nomodeset vga=normal coreos.autologin ipa-insecure=1
```

In the ramdisk image, add the IPA configuration file /etc/ironic_python_agent/ironic_python_agent.conf and configure TLS as follows (the /etc/ironic_python_agent directory must be created in advance):

```shell
# /etc/ironic_python_agent/ironic_python_agent.conf
[DEFAULT]
enable_auto_tls = False
```

Set the permissions:

```shell
chown -R ipa.ipa /etc/ironic_python_agent/
```

Modify the service unit file of the IPA service to add the configuration file option:

```shell
vim /usr/lib/systemd/system/ironic-python-agent.service

[Unit]
Description=Ironic Python Agent
After=network-online.target

[Service]
ExecStartPre=/sbin/modprobe vfat
ExecStart=/usr/local/bin/ironic-python-agent --config-file /etc/ironic_python_agent/ironic_python_agent.conf
Restart=always
RestartSec=30s

[Install]
WantedBy=multi-user.target
```

In Train we also provide services such as ironic-inspector, which users can install according to their needs.

### Kolla Installation

Kolla provides production-ready containerized deployment for OpenStack services.

Installing Kolla is straightforward: just install the corresponding RPM packages:

```shell
yum install openstack-kolla openstack-kolla-ansible
```

After the installation, commands such as kolla-ansible, kolla-build, kolla-genpwd and kolla-mergepwd can be used to build the images and deploy the container environment.
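To illustrate the commands listed above, a minimal deployment sketch (not part of the original steps); the inventory file and the contents of /etc/kolla/globals.yml are environment-specific and must be prepared according to the kolla-ansible documentation before running it:

```shell
# Hypothetical minimal kolla-ansible flow; assumes /etc/kolla/globals.yml and an
# inventory file (e.g. the all-in-one sample) have already been prepared.
kolla-genpwd                                      # generate /etc/kolla/passwords.yml
kolla-ansible -i ./all-in-one bootstrap-servers
kolla-ansible -i ./all-in-one prechecks
kolla-ansible -i ./all-in-one deploy
kolla-ansible -i ./all-in-one post-deploy         # writes the admin openrc file
```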
### Trove Installation

Trove is the OpenStack Database service. It is recommended if you use the database service provided by OpenStack; otherwise it does not need to be installed.

Set up the database

The Database service stores its information in a database. Create a trove database that the trove user can access, replacing TROVE_DBPASSWORD with a suitable password:

```shell
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE trove CHARACTER SET utf8;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'localhost' \
                  IDENTIFIED BY 'TROVE_DBPASSWORD';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'%' \
                  IDENTIFIED BY 'TROVE_DBPASSWORD';
```

Create the service user authentication

1. Create the Trove service user:

```shell
openstack user create --domain default --password-prompt trove
openstack role add --project service --user trove admin
openstack service create --name trove --description "Database" database
```

Explanation: replace TROVE_PASSWORD with the password of the trove user.

2. Create the Database service endpoints:

```shell
openstack endpoint create --region RegionOne database public http://controller:8779/v1.0/%\(tenant_id\)s
openstack endpoint create --region RegionOne database internal http://controller:8779/v1.0/%\(tenant_id\)s
openstack endpoint create --region RegionOne database admin http://controller:8779/v1.0/%\(tenant_id\)s
```

Install and configure the Trove components

1. Install the Trove packages:

```shell
yum install openstack-trove python3-troveclient
```

2. Configure `trove.conf`:

```shell
vim /etc/trove/trove.conf

[DEFAULT]
log_dir = /var/log/trove
trove_auth_url = http://controller:5000/
nova_compute_url = http://controller:8774/v2
cinder_url = http://controller:8776/v1
swift_url = http://controller:8080/v1/AUTH_
rpc_backend = rabbit
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672
auth_strategy = keystone
add_addresses = True
api_paste_config = /etc/trove/api-paste.ini
nova_proxy_admin_user = admin
nova_proxy_admin_pass = ADMIN_PASSWORD
nova_proxy_admin_tenant_name = service
taskmanager_manager = trove.taskmanager.manager.Manager
use_nova_server_config_drive = True
# Set these if using Neutron Networking
network_driver = trove.network.neutron.NeutronDriver
network_label_regex = .*

[database]
connection = mysql+pymysql://trove:TROVE_DBPASSWORD@controller/trove

[keystone_authtoken]
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = trove
password = TROVE_PASSWORD
```

**Explanation:**

- In the [DEFAULT] group, nova_compute_url and cinder_url are the endpoints created for Nova and Cinder in Keystone;
- nova_proxy_XXX is the information of a user that can access the Nova service; the example above uses the admin user;
- transport_url is the RabbitMQ connection information; replace RABBIT_PASS with the RabbitMQ password;
- connection in the [database] group is the database information created for Trove in MySQL above;
- in the Trove user information, replace TROVE_PASSWORD with the actual password of the trove user.

3. Configure `trove-guestagent.conf`:

```shell
vim /etc/trove/trove-guestagent.conf

rabbit_host = controller
rabbit_password = RABBIT_PASS
trove_auth_url = http://controller:5000/
```

**Explanation:**

`guestagent` is a standalone Trove component that must be pre-installed in the virtual machine images that Trove creates through Nova. After a database instance is created, the guestagent process starts and reports heartbeats to Trove through the message queue (RabbitMQ), so the RabbitMQ user and password must be configured.

**Starting with Victoria, Trove uses a single unified image to run the different database types; the database services run in Docker containers inside the guest VM.**

- Replace `RABBIT_PASS` with the RabbitMQ password.
4. Populate the `Trove` database tables:

```shell
su -s /bin/sh -c "trove-manage db_sync" trove
```

Complete the installation and configuration

1. Configure the Trove services to start on boot:

```shell
systemctl enable openstack-trove-api.service \
                 openstack-trove-taskmanager.service \
                 openstack-trove-conductor.service
```

2. Start the services:

```shell
systemctl start openstack-trove-api.service \
                openstack-trove-taskmanager.service \
                openstack-trove-conductor.service
```

### Swift Installation

Swift provides an elastic, scalable and highly available distributed object storage service, suitable for storing large-scale unstructured data.

Create the service credentials and API endpoints.

Create the service credentials:

```shell
# Create the swift user:
openstack user create --domain default --password-prompt swift
# Add the admin role to the swift user:
openstack role add --project service --user swift admin
# Create the swift service entity:
openstack service create --name swift --description "OpenStack Object Storage" object-store
```

Create the swift API endpoints:

```shell
openstack endpoint create --region RegionOne object-store public http://controller:8080/v1/AUTH_%\(project_id\)s
openstack endpoint create --region RegionOne object-store internal http://controller:8080/v1/AUTH_%\(project_id\)s
openstack endpoint create --region RegionOne object-store admin http://controller:8080/v1
```

Install the packages:

```shell
yum install openstack-swift-proxy python3-swiftclient python3-keystoneclient python3-keystonemiddleware memcached    (CTL)
```

Configure the proxy-server

The Swift RPM package already ships a basically usable proxy-server.conf; you only need to modify the IP and the swift password in it by hand.

***Note***

**Replace the password with the one you chose for the swift user in the Identity service.**
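For orientation, a sketch of the part of /etc/swift/proxy-server.conf that these edits usually touch; the option names follow the standard keystonemiddleware auth_token filter, and SWIFT_PASS is a placeholder for the password chosen above (verify against the file shipped in the RPM):

```ini
# Typical [filter:authtoken] edits in /etc/swift/proxy-server.conf (sketch).
[filter:authtoken]
paste.filter_factory = keystonemiddleware.auth_token:filter_factory
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = swift
password = SWIFT_PASS
```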
Install and configure the storage nodes (STG)

Install the supporting packages:

```shell
yum install xfsprogs rsync
```

Format the /dev/vdb and /dev/vdc devices as XFS:

```shell
mkfs.xfs /dev/vdb
mkfs.xfs /dev/vdc
```

Create the mount point directory structure:

```shell
mkdir -p /srv/node/vdb
mkdir -p /srv/node/vdc
```

Find the UUIDs of the new partitions:

```shell
blkid
```

Edit the /etc/fstab file and add the following to it:

```shell
UUID="" /srv/node/vdb xfs noatime 0 2
UUID="" /srv/node/vdc xfs noatime 0 2
```

Mount the devices:

```shell
mount /srv/node/vdb
mount /srv/node/vdc
```

Note: if disaster recovery is not required, only one device needs to be created in the steps above, and the rsync configuration below can be skipped.

(Optional) Create or edit the /etc/rsyncd.conf file to contain the following:

```shell
[DEFAULT]
uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
address = MANAGEMENT_INTERFACE_IP_ADDRESS

[account]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/account.lock

[container]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/container.lock

[object]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/object.lock
```

Replace MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network on the storage node.

Start the rsyncd service and configure it to start at boot:

```shell
systemctl enable rsyncd.service
systemctl start rsyncd.service
```

Install and configure the components on the storage nodes (STG)

Install the packages:

```shell
yum install openstack-swift-account openstack-swift-container openstack-swift-object
```

Edit the account-server.conf, container-server.conf and object-server.conf files in the /etc/swift directory, replacing bind_ip with the IP address of the management network on the storage node.

Ensure correct ownership of the mount point directory structure:

```shell
chown -R swift:swift /srv/node
```

Create the recon directory and ensure it has the correct ownership:

```shell
mkdir -p /var/cache/swift
chown -R root:swift /var/cache/swift
chmod -R 775 /var/cache/swift
```

Create the account ring (CTL)

Change to the /etc/swift directory:

```shell
cd /etc/swift
```

Create the base account.builder file:

```shell
swift-ring-builder account.builder create 10 1 1
```

Add each storage node to the ring:

```shell
swift-ring-builder account.builder add --region 1 --zone 1 \
    --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6202 \
    --device DEVICE_NAME --weight DEVICE_WEIGHT
```

Replace STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network on the storage node, and DEVICE_NAME with the name of the storage device on that storage node.

Note: repeat this command for every storage device on every storage node.

Verify the ring contents:

```shell
swift-ring-builder account.builder
```

Rebalance the ring:

```shell
swift-ring-builder account.builder rebalance
```

Create the container ring (CTL)

Change to the /etc/swift directory.

Create the base container.builder file:

```shell
swift-ring-builder container.builder create 10 1 1
```

Add each storage node to the ring:

```shell
swift-ring-builder container.builder \
    add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6201 \
    --device DEVICE_NAME --weight 100
```

Replace STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network on the storage node, and DEVICE_NAME with the name of the storage device on that storage node.

Note: repeat this command for every storage device on every storage node.

Verify the ring contents:

```shell
swift-ring-builder container.builder
```

Rebalance the ring:

```shell
swift-ring-builder container.builder rebalance
```

Create the object ring (CTL)

Change to the /etc/swift directory.

Create the base object.builder file:

```shell
swift-ring-builder object.builder create 10 1 1
```

Add each storage node to the ring:

```shell
swift-ring-builder object.builder \
    add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6200 \
    --device DEVICE_NAME --weight 100
```
Replace STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network on the storage node, and DEVICE_NAME with the name of the storage device on that storage node.

Note: repeat this command for every storage device on every storage node.

Verify the ring contents:

```shell
swift-ring-builder object.builder
```

Rebalance the ring:

```shell
swift-ring-builder object.builder rebalance
```

Distribute the ring configuration files: copy the account.ring.gz, container.ring.gz and object.ring.gz files to the /etc/swift directory on every storage node and on any other node running the proxy service.

Complete the installation

Edit the /etc/swift/swift.conf file:

```shell
[swift-hash]
swift_hash_path_suffix = test-hash
swift_hash_path_prefix = test-hash

[storage-policy:0]
name = Policy-0
default = yes
```

Replace test-hash with unique values.

Copy the swift.conf file to the /etc/swift directory on every storage node and on any other node running the proxy service.

On all nodes, ensure correct ownership of the configuration directory:

```shell
chown -R root:swift /etc/swift
```

On the controller node and on any other node running the proxy service, start the Object Storage proxy service and its dependencies, and configure them to start at boot:

```shell
systemctl enable openstack-swift-proxy.service memcached.service
systemctl start openstack-swift-proxy.service memcached.service
```

On the storage nodes, start the Object Storage services and configure them to start at boot:

```shell
systemctl enable openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service
systemctl start openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service

systemctl enable openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service
systemctl start openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service

systemctl enable openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service
systemctl start openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service
```
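Optionally (not part of the original steps), the Swift installation can be smoke-tested from the controller with the admin credentials; the container name below is just an example:

```shell
# Hypothetical smoke test: show account status and create/list a container.
source ~/.admin-openrc
openstack object store account show
openstack container create demo-container
openstack container list
```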
## Cyborg Installation

Cyborg provides accelerator device support for OpenStack, including GPUs, FPGAs, ASICs, NPs, SoCs, NVMe/NOF SSDs, ODP, DPDK/SPDK and so on.

Initialize the corresponding database:

```
CREATE DATABASE cyborg;
GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'localhost' IDENTIFIED BY 'CYBORG_DBPASS';
GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'%' IDENTIFIED BY 'CYBORG_DBPASS';
```

Create the corresponding Keystone resources:

```
openstack user create --domain default --password-prompt cyborg
openstack role add --project service --user cyborg admin
openstack service create --name cyborg --description "Acceleration Service" accelerator
openstack endpoint create --region RegionOne accelerator public http://:6666/v1
openstack endpoint create --region RegionOne accelerator internal http://:6666/v1
openstack endpoint create --region RegionOne accelerator admin http://:6666/v1
```

Install Cyborg:

```
yum install openstack-cyborg
```

Configure Cyborg by modifying /etc/cyborg/cyborg.conf:

```
[DEFAULT]
transport_url = rabbit://%RABBITMQ_USER%:%RABBITMQ_PASSWORD%@%OPENSTACK_HOST_IP%:5672/
use_syslog = False
state_path = /var/lib/cyborg
debug = True

[database]
connection = mysql+pymysql://%DATABASE_USER%:%DATABASE_PASSWORD%@%OPENSTACK_HOST_IP%/cyborg

[service_catalog]
project_domain_id = default
user_domain_id = default
project_name = service
password = PASSWORD
username = cyborg
auth_url = http://%OPENSTACK_HOST_IP%/identity
auth_type = password

[placement]
project_domain_name = Default
project_name = service
user_domain_name = Default
password = PASSWORD
username = placement
auth_url = http://%OPENSTACK_HOST_IP%/identity
auth_type = password

[keystone_authtoken]
memcached_servers = localhost:11211
project_domain_name = Default
project_name = service
user_domain_name = Default
password = PASSWORD
username = cyborg
auth_url = http://%OPENSTACK_HOST_IP%/identity
auth_type = password
```

Adjust the usernames, passwords and IP addresses to match your environment.

Synchronize the database tables:

```
cyborg-dbsync --config-file /etc/cyborg/cyborg.conf upgrade
```

Start the Cyborg services:

```
systemctl enable openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent
systemctl start openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent
```
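After the services come up, one simple way to check that the accelerator service was registered correctly is to query the Keystone catalog. This is a suggested check rather than a step from the original guide; it assumes admin credentials are loaded.

```
source ~/.admin-openrc

# The "accelerator" service and its three endpoints should be listed
openstack service list
openstack endpoint list --service accelerator
```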
## Aodh Installation

Create the database:

```
CREATE DATABASE aodh;
GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'localhost' IDENTIFIED BY 'AODH_DBPASS';
GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'%' IDENTIFIED BY 'AODH_DBPASS';
```

Create the corresponding Keystone resources:

```
openstack user create --domain default --password-prompt aodh
openstack role add --project service --user aodh admin
openstack service create --name aodh --description "Telemetry" alarming
openstack endpoint create --region RegionOne alarming public http://controller:8042
openstack endpoint create --region RegionOne alarming internal http://controller:8042
openstack endpoint create --region RegionOne alarming admin http://controller:8042
```

Install Aodh:

```
yum install openstack-aodh-api openstack-aodh-evaluator openstack-aodh-notifier openstack-aodh-listener openstack-aodh-expirer python3-aodhclient
```

Modify the configuration file:

```
[database]
connection = mysql+pymysql://aodh:AODH_DBPASS@controller/aodh

[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = aodh
password = AODH_PASS

[service_credentials]
auth_type = password
auth_url = http://controller:5000/v3
project_domain_id = default
user_domain_id = default
project_name = service
username = aodh
password = AODH_PASS
interface = internalURL
region_name = RegionOne
```

Initialize the database:

```
aodh-dbsync
```

Start the Aodh services:

```
systemctl enable openstack-aodh-api.service openstack-aodh-evaluator.service openstack-aodh-notifier.service openstack-aodh-listener.service
systemctl start openstack-aodh-api.service openstack-aodh-evaluator.service openstack-aodh-notifier.service openstack-aodh-listener.service
```
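A quick way to confirm that the alarming API is reachable is to list alarms through the client installed above. This check is not in the original text; it assumes python3-aodhclient is installed and admin credentials are loaded.

```
source ~/.admin-openrc

# Should return an empty result on a fresh installation rather than an error
openstack alarm list
```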
## Gnocchi Installation

Create the database:

```
CREATE DATABASE gnocchi;
GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'localhost' IDENTIFIED BY 'GNOCCHI_DBPASS';
GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'%' IDENTIFIED BY 'GNOCCHI_DBPASS';
```

Create the corresponding Keystone resources:

```
openstack user create --domain default --password-prompt gnocchi
openstack role add --project service --user gnocchi admin
openstack service create --name gnocchi --description "Metric Service" metric
openstack endpoint create --region RegionOne metric public http://controller:8041
openstack endpoint create --region RegionOne metric internal http://controller:8041
openstack endpoint create --region RegionOne metric admin http://controller:8041
```

Install Gnocchi:

```
yum install openstack-gnocchi-api openstack-gnocchi-metricd python3-gnocchiclient
```

Modify the configuration file /etc/gnocchi/gnocchi.conf:

```
[api]
auth_mode = keystone
port = 8041
uwsgi_mode = http-socket

[keystone_authtoken]
auth_type = password
auth_url = http://controller:5000/v3
project_domain_name = Default
user_domain_name = Default
project_name = service
username = gnocchi
password = GNOCCHI_PASS
interface = internalURL
region_name = RegionOne

[indexer]
url = mysql+pymysql://gnocchi:GNOCCHI_DBPASS@controller/gnocchi

[storage]
# coordination_url is not required but specifying one will improve
# performance with better workload division across workers.
coordination_url = redis://controller:6379
file_basepath = /var/lib/gnocchi
driver = file
```

Initialize the database:

```
gnocchi-upgrade
```

Start the Gnocchi services:

```
systemctl enable openstack-gnocchi-api.service openstack-gnocchi-metricd.service
systemctl start openstack-gnocchi-api.service openstack-gnocchi-metricd.service
```

## Ceilometer Installation

Create the corresponding Keystone resources:

```
openstack user create --domain default --password-prompt ceilometer
openstack role add --project service --user ceilometer admin
openstack service create --name ceilometer --description "Telemetry" metering
```

Install Ceilometer:

```
yum install openstack-ceilometer-notification openstack-ceilometer-central
```

Modify the configuration file /etc/ceilometer/pipeline.yaml:

```
publishers:
    # set address of Gnocchi
    # + filter out Gnocchi-related activity meters (Swift driver)
    # + set default archive policy
    - gnocchi://?filter_project=service&archive_policy=low
```

Modify the configuration file /etc/ceilometer/ceilometer.conf:

```
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller

[service_credentials]
auth_type = password
auth_url = http://controller:5000/v3
project_domain_id = default
user_domain_id = default
project_name = service
username = ceilometer
password = CEILOMETER_PASS
interface = internalURL
region_name = RegionOne
```

Initialize the database:

```
ceilometer-upgrade
```

Start the Ceilometer services:

```
systemctl enable openstack-ceilometer-notification.service openstack-ceilometer-central.service
systemctl start openstack-ceilometer-notification.service openstack-ceilometer-central.service
```
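To confirm that Ceilometer is publishing measures and Gnocchi's metricd workers are processing them, the Gnocchi CLI can be queried a few minutes after both services start. This is an optional check not present in the original guide; it assumes python3-gnocchiclient is installed and admin credentials are loaded.

```
source ~/.admin-openrc

# Resources and metrics fill up once Ceilometer publishes data to Gnocchi
gnocchi resource list
gnocchi metric list

# Show the processing backlog of the metricd workers; it should stay small
gnocchi status
```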
## Heat Installation

Create the heat database and grant it the proper access rights, replacing HEAT_DBPASS with a suitable password:

```
CREATE DATABASE heat;
GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'localhost' IDENTIFIED BY 'HEAT_DBPASS';
GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'%' IDENTIFIED BY 'HEAT_DBPASS';
```

Create the service credentials: create the heat user and add the admin role to it:

```
openstack user create --domain default --password-prompt heat
openstack role add --project service --user heat admin
```

Create the heat and heat-cfn services and their API endpoints:

```
openstack service create --name heat --description "Orchestration" orchestration
openstack service create --name heat-cfn --description "Orchestration" cloudformation
openstack endpoint create --region RegionOne orchestration public http://controller:8004/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne orchestration internal http://controller:8004/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne orchestration admin http://controller:8004/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne cloudformation public http://controller:8000/v1
openstack endpoint create --region RegionOne cloudformation internal http://controller:8000/v1
openstack endpoint create --region RegionOne cloudformation admin http://controller:8000/v1
```

Create the additional resources required for stack management, including the heat domain admin user heat_domain_admin, the heat_stack_owner role and the heat_stack_user role:

```
openstack user create --domain heat --password-prompt heat_domain_admin
openstack role add --domain heat --user-domain heat --user heat_domain_admin admin
openstack role create heat_stack_owner
openstack role create heat_stack_user
```

Install the packages:

```
yum install openstack-heat-api openstack-heat-api-cfn openstack-heat-engine
```

Modify the configuration file /etc/heat/heat.conf:

```
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
heat_metadata_server_url = http://controller:8000
heat_waitcondition_server_url = http://controller:8000/v1/waitcondition
stack_domain_admin = heat_domain_admin
stack_domain_admin_password = HEAT_DOMAIN_PASS
stack_user_domain_name = heat

[database]
connection = mysql+pymysql://heat:HEAT_DBPASS@controller/heat

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = heat
password = HEAT_PASS

[trustee]
auth_type = password
auth_url = http://controller:5000
username = heat
password = HEAT_PASS
user_domain_name = default

[clients_keystone]
auth_uri = http://controller:5000
```

Initialize the heat database tables:

```
su -s /bin/sh -c "heat-manage db_sync" heat
```

Start the services:

```
systemctl enable openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service
systemctl start openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service
```
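To confirm that the orchestration engine works end to end, a minimal Heat Orchestration Template can be created and then deleted. This example is not part of the original guide: the flavor, image and network names (`m1.tiny`, `cirros`, `provider`) are placeholders that must match resources already existing in your environment, and the `openstack stack` commands assume the python3-heatclient plugin for openstackclient is available.

```yaml
# demo-stack.yaml - minimal HOT template that boots one server
heat_template_version: 2018-08-31

resources:
  demo_server:
    type: OS::Nova::Server
    properties:
      flavor: m1.tiny          # replace with an existing flavor
      image: cirros            # replace with an existing image
      networks:
        - network: provider    # replace with an existing network
```

```
source ~/.admin-openrc
openstack stack create -t demo-stack.yaml demo-stack
openstack stack list
openstack stack delete demo-stack
```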
## Quick deployment with the OpenStack SIG tool oos

oos (openEuler OpenStack SIG) is the command-line tool provided by the OpenStack SIG. Its `oos env` sub-commands provide Ansible playbooks for one-click deployment of OpenStack (either all-in-one or a three-node cluster), which users can run to quickly set up an OpenStack environment based on openEuler RPM packages. The oos tool supports two deployment modes: through a cloud provider (currently only Huawei Cloud is supported) or onto managed hosts. The following uses an all-in-one deployment on Huawei Cloud as an example of how to use oos.

Install the oos tool:

```
pip install openstack-sig-tool
```

Configure the Huawei Cloud provider information.

Open the /usr/local/etc/oos/oos.conf file and adjust the configuration to match the Huawei Cloud resources you own:

```
[huaweicloud]
ak =
sk =
region = ap-southeast-3
root_volume_size = 100
data_volume_size = 100
security_group_name = oos
image_format = openEuler-%%(release)s-%%(arch)s
vpc_name = oos_vpc
subnet1_name = oos_subnet1
subnet2_name = oos_subnet2
```

Configure the OpenStack environment information.

Open the /usr/local/etc/oos/oos.conf file and adjust the configuration according to your machines and requirements. The content is as follows:

```
[environment]
mysql_root_password = root
mysql_project_password = root
rabbitmq_password = root
project_identity_password = root
enabled_service = keystone,neutron,cinder,placement,nova,glance,horizon,aodh,ceilometer,cyborg,gnocchi,kolla,heat,swift,trove,tempest
neutron_provider_interface_name = br-ex
default_ext_subnet_range = 10.100.100.0/24
default_ext_subnet_gateway = 10.100.100.1
neutron_dataplane_interface_name = eth1
cinder_block_device = vdb
swift_storage_devices = vdc
swift_hash_path_suffix = ash
swift_hash_path_prefix = has
glance_api_workers = 2
cinder_api_workers = 2
nova_api_workers = 2
nova_metadata_api_workers = 2
nova_conductor_workers = 2
nova_scheduler_workers = 2
neutron_api_workers = 2
horizon_allowed_host = *
kolla_openeuler_plugin = false
```

Key configuration options:

| Option | Description |
|:------|:------------|
| enabled_service | List of services to install; trim it according to your needs |
| neutron_provider_interface_name | Name of the neutron L3 bridge |
| default_ext_subnet_range | IP range of the neutron private network |
| default_ext_subnet_gateway | Gateway of the neutron private network |
| neutron_dataplane_interface_name | NIC used by neutron; a dedicated new NIC is recommended to avoid conflicts with existing NICs and to prevent the all-in-one host from losing connectivity |
| cinder_block_device | Name of the block device used by cinder |
| swift_storage_devices | Name of the block device used by swift |
| kolla_openeuler_plugin | Whether to enable the kolla plugin; when set to True, kolla can deploy openEuler containers |

Create an openEuler 22.03-LTS x86_64 virtual machine on Huawei Cloud for the all-in-one OpenStack deployment:

```
# sshpass is used during `oos env create` to set up password-less access to the target VM
dnf install sshpass
oos env create -r 22.03-lts -f small -a x86 -n test-oos all_in_one
```

The full list of parameters can be viewed with `oos env create --help`.

Deploy the OpenStack all-in-one environment:

```
oos env setup test-oos -r train
```

The full list of parameters can be viewed with `oos env setup --help`.

Initialize the tempest environment.

If you want to run tempest tests in this environment, run `oos env init`, which automatically creates the OpenStack resources tempest needs:

```
oos env init test-oos
```

After the command completes successfully, a mytest directory is created in the user's home directory; change into it and you can run the `tempest run` command.

If you deploy the OpenStack environment onto managed hosts instead, the overall procedure matches the Huawei Cloud case above: steps 1, 3, 5 and 6 are unchanged, step 2 (configuring the Huawei Cloud provider) is dropped, and step 4 changes from creating a virtual machine on Huawei Cloud to managing an existing host:

```
# sshpass is used during `oos env create` to set up password-less access to the target host
dnf install sshpass
oos env manage -r 22.03-lts -i TARGET_MACHINE_IP -p TARGET_MACHINE_PASSWD -n test-oos
```

Replace TARGET_MACHINE_IP with the IP of the target machine and TARGET_MACHINE_PASSWD with its password. The full list of parameters can be viewed with `oos env manage --help`.
## OpenStack-Train Deployment Guide

Contents: OpenStack introduction, conventions, preparing the environment, environment configuration, installing the SQL database, installing RabbitMQ, installing Memcached, installing OpenStack (Keystone, Glance, Placement, Nova, Neutron, Cinder, Horizon, Tempest, Ironic, Kolla, Trove, Swift, Cyborg, Aodh, Gnocchi, Ceilometer, Heat), and quick deployment with the OpenStack SIG development tool oos.

## OpenStack Introduction

OpenStack is both a community and a project. It provides an operating platform and a toolset for deploying clouds, giving organizations scalable and flexible cloud computing. As an open-source cloud management platform, OpenStack combines several major components, such as nova, cinder, neutron, glance, keystone and horizon, to do the actual work. OpenStack supports practically every type of cloud environment; the project aims to deliver a cloud management platform that is simple to implement, massively scalable, feature-rich and standardized. OpenStack provides an Infrastructure-as-a-Service (IaaS) solution through a set of complementary services, each of which offers an API for integration.

The official openEuler 22.03-LTS repositories already support the OpenStack Train release. Users can configure the yum repository and then deploy OpenStack by following this document.

## Conventions

OpenStack supports multiple deployment topologies. This document covers both the all-in-one and the distributed deployment modes, with the following conventions:

- ALL in One mode: ignore all suffixes.
- Distributed mode:
  - a `(CTL)` suffix means the configuration item or command applies only to the `controller node`;
  - a `(CPT)` suffix means the configuration item or command applies only to the `compute nodes`;
  - a `(STG)` suffix means the configuration item or command applies only to the `storage nodes`;
  - anything else applies to both the `controller node` and the `compute nodes`.

Note: the services affected by the above conventions are:

- Cinder
- Nova
- Neutron

## Preparing the Environment

## Environment Configuration

Enable the OpenStack Train yum repository:

```
yum update
yum install openstack-release-train
yum clean all && yum makecache
```

Note: if EPOL is not enabled in your yum configuration, configure it as well:

```
vi /etc/yum.repos.d/openEuler.repo

[EPOL]
name=EPOL
baseurl=http://repo.openeuler.org/openEuler-22.03-LTS/EPOL/main/$basearch/
enabled=1
gpgcheck=1
gpgkey=http://repo.openeuler.org/openEuler-22.03-LTS/OS/$basearch/RPM-GPG-KEY-openEuler
```

Modify the hostnames and host mappings.

Set the hostname of each node:

```
hostnamectl set-hostname controller    (CTL)
hostnamectl set-hostname compute       (CPT)
```

Assuming the IP of the controller node is 10.0.0.11 and the IP of the compute node (if present) is 10.0.0.12, add the following to /etc/hosts:

```
10.0.0.11 controller
10.0.0.12 compute
```

## Installing the SQL Database

Install the packages:

```
yum install mariadb mariadb-server python3-PyMySQL
```

Create and edit the /etc/my.cnf.d/openstack.cnf file:

```
vim /etc/my.cnf.d/openstack.cnf

[mysqld]
bind-address = 10.0.0.11
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
```

Note: set `bind-address` to the management IP address of the controller node.

Start the database service and configure it to start at boot:

```
systemctl enable mariadb.service
systemctl start mariadb.service
```

Set a default password for the database (optional):

```
mysql_secure_installation
```

Note: follow the prompts.

## Installing RabbitMQ

Install the packages:

```
yum install rabbitmq-server
```

Start the RabbitMQ service and configure it to start at boot:

```
systemctl enable rabbitmq-server.service
systemctl start rabbitmq-server.service
```

Add the OpenStack user:

```
rabbitmqctl add_user openstack RABBIT_PASS
```

Note: replace RABBIT_PASS with the password for the openstack user.

Set the permissions of the openstack user to allow configure, write and read access:

```
rabbitmqctl set_permissions openstack ".*" ".*" ".*"
```

## Installing Memcached

Install the dependency packages:

```
yum install memcached python3-memcached
```

Edit the /etc/sysconfig/memcached file:

```
vim /etc/sysconfig/memcached

OPTIONS="-l 127.0.0.1,::1,controller"
```

Start the Memcached service and configure it to start at boot:

```
systemctl enable memcached.service
systemctl start memcached.service
```

Note:
After the service starts, you can run `memcached-tool controller stats` to make sure it started normally and is available; `controller` can be replaced with the management IP address of the controller node.

## Installing OpenStack

## Keystone Installation

Create the keystone database and grant privileges:

```
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE keystone;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
IDENTIFIED BY 'KEYSTONE_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
IDENTIFIED BY 'KEYSTONE_DBPASS';
MariaDB [(none)]> exit
```

Note: replace KEYSTONE_DBPASS with the password for the keystone database.

Install the packages:

```
yum install openstack-keystone httpd mod_wsgi
```

Configure keystone:

```
vim /etc/keystone/keystone.conf

[database]
connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone

[token]
provider = fernet
```

Explanation: the [database] section configures the database entry point; the [token] section configures the token provider.

Note: replace KEYSTONE_DBPASS with the password of the keystone database.

Synchronize the database:

```
su -s /bin/sh -c "keystone-manage db_sync" keystone
```

Initialize the Fernet key repositories:

```
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
```

Bootstrap the service:

```
keystone-manage bootstrap --bootstrap-password ADMIN_PASS \
    --bootstrap-admin-url http://controller:5000/v3/ \
    --bootstrap-internal-url http://controller:5000/v3/ \
    --bootstrap-public-url http://controller:5000/v3/ \
    --bootstrap-region-id RegionOne
```

Note: replace ADMIN_PASS with the password for the admin user.

Configure the Apache HTTP server:

```
vim /etc/httpd/conf/httpd.conf

ServerName controller
```

```
ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
```

Explanation: set the ServerName option to refer to the controller node.

Note: if the ServerName option does not exist, create it.

Start the Apache HTTP service:

```
systemctl enable httpd.service
systemctl start httpd.service
```

Create the environment variable configuration:

```
cat << EOF >> ~/.admin-openrc
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
EOF
```

Note: replace ADMIN_PASS with the password of the admin user.

Next, create the domain, projects, users and roles. python3-openstackclient must be installed first:

```
yum install python3-openstackclient
```

Import the environment variables:

```
source ~/.admin-openrc
```

Create the project `service`; the domain `default` was already created during `keystone-manage bootstrap`:

```
openstack domain create --description "An Example Domain" example
openstack project create --domain default --description "Service Project" service
```
Create the (non-admin) project `myproject`, user `myuser` and role `myrole`, and add the role `myrole` to `myproject` and `myuser`:

```
openstack project create --domain default --description "Demo Project" myproject
openstack user create --domain default --password-prompt myuser
openstack role create myrole
openstack role add --project myproject --user myuser myrole
```

Verification

Unset the temporary environment variables OS_AUTH_URL and OS_PASSWORD:

```
source ~/.admin-openrc
unset OS_AUTH_URL OS_PASSWORD
```

Request a token for the admin user:

```
openstack --os-auth-url http://controller:5000/v3 \
    --os-project-domain-name Default --os-user-domain-name Default \
    --os-project-name admin --os-username admin token issue
```

Request a token for the myuser user:

```
openstack --os-auth-url http://controller:5000/v3 \
    --os-project-domain-name Default --os-user-domain-name Default \
    --os-project-name myproject --os-username myuser token issue
```

## Glance Installation

Create the database, service credentials and API endpoints.

Create the database:

```
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE glance;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
IDENTIFIED BY 'GLANCE_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
IDENTIFIED BY 'GLANCE_DBPASS';
MariaDB [(none)]> exit
```

Note: replace GLANCE_DBPASS with the password for the glance database.

Create the service credentials:

```
source ~/.admin-openrc

openstack user create --domain default --password-prompt glance
openstack role add --project service --user glance admin
openstack service create --name glance --description "OpenStack Image" image
```

Create the Image service API endpoints:

```
openstack endpoint create --region RegionOne image public http://controller:9292
openstack endpoint create --region RegionOne image internal http://controller:9292
openstack endpoint create --region RegionOne image admin http://controller:9292
```

Install the packages:

```
yum install openstack-glance
```

Configure glance:

```
vim /etc/glance/glance-api.conf

[database]
connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = GLANCE_PASS

[paste_deploy]
flavor = keystone

[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
```

Explanation: the [database] section configures the database entry point; the [keystone_authtoken] and [paste_deploy] sections configure the identity service entry point; the [glance_store] section configures the local file system store and the location of the image files.

Note: replace GLANCE_DBPASS with the password of the glance database, and GLANCE_PASS with the password of the glance user.

Synchronize the database:

```
su -s /bin/sh -c "glance-manage db_sync" glance
```

Start the service:

```
systemctl enable openstack-glance-api.service
systemctl start openstack-glance-api.service
```

Verification
Download the image:

```
source ~/.admin-openrc
wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
```

Note: if your environment uses the Kunpeng (aarch64) architecture, download the aarch64 version of the image; the cirros-0.5.2-aarch64-disk.img image has been tested.

Upload the image to the Image service:

```
openstack image create --disk-format qcow2 --container-format bare \
    --file cirros-0.4.0-x86_64-disk.img --public cirros
```

Confirm the upload and verify the image attributes:

```
openstack image list
```

## Placement Installation

Create the database, service credentials and API endpoints.

Create the database:

Access the database as the root user, create the placement database and grant privileges:

```
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE placement;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' \
IDENTIFIED BY 'PLACEMENT_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' \
IDENTIFIED BY 'PLACEMENT_DBPASS';
MariaDB [(none)]> exit
```

Note: replace PLACEMENT_DBPASS with the password for the placement database.

```
source admin-openrc
```

Run the following commands to create the placement service credentials, create the placement user, and add the admin role to the placement user.

Create the Placement API service:

```
openstack user create --domain default --password-prompt placement
openstack role add --project service --user placement admin
openstack service create --name placement --description "Placement API" placement
```

Create the placement service API endpoints:

```
openstack endpoint create --region RegionOne placement public http://controller:8778
openstack endpoint create --region RegionOne placement internal http://controller:8778
openstack endpoint create --region RegionOne placement admin http://controller:8778
```

Installation and configuration

Install the packages:

```
yum install openstack-placement-api
```

Configure placement by editing the /etc/placement/placement.conf file: in the [placement_database] section, configure the database entry point; in the [api] and [keystone_authtoken] sections, configure the identity service entry point:

```
# vim /etc/placement/placement.conf

[placement_database]
# ...
connection = mysql+pymysql://placement:PLACEMENT_DBPASS@controller/placement

[api]
# ...
auth_strategy = keystone

[keystone_authtoken]
# ...
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = placement
password = PLACEMENT_PASS
```

Replace PLACEMENT_DBPASS with the password of the placement database, and PLACEMENT_PASS with the password of the placement user.

Synchronize the database:

```
su -s /bin/sh -c "placement-manage db sync" placement
```

Start the httpd service:

```
systemctl restart httpd
```

Verification

Run the following commands to perform a status check:
```
. admin-openrc
placement-status upgrade check
```

Install osc-placement and list the available resource classes and traits:

```
yum install python3-osc-placement
openstack --os-placement-api-version 1.2 resource class list --sort-column name
openstack --os-placement-api-version 1.6 trait list --sort-column name
```

## Nova Installation

Create the database, service credentials and API endpoints.

Create the databases:

```
mysql -u root -p                                                    (CTL)

MariaDB [(none)]> CREATE DATABASE nova_api;
MariaDB [(none)]> CREATE DATABASE nova;
MariaDB [(none)]> CREATE DATABASE nova_cell0;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> exit
```

Note: replace NOVA_DBPASS with the password for the nova databases.

```
source ~/.admin-openrc    (CTL)
```

Create the nova service credentials:

```
openstack user create --domain default --password-prompt nova                       (CTL)
openstack role add --project service --user nova admin                              (CTL)
openstack service create --name nova --description "OpenStack Compute" compute      (CTL)
```

Create the nova API endpoints:

```
openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1    (CTL)
openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1  (CTL)
openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1     (CTL)
```

Install the packages:

```
yum install openstack-nova-api openstack-nova-conductor \       (CTL)
            openstack-nova-novncproxy openstack-nova-scheduler

yum install openstack-nova-compute                              (CPT)
```

Note: on arm64, additionally run the following command:

```
yum install edk2-aarch64    (CPT)
```
Configure nova:

```
vim /etc/nova/nova.conf

[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
my_ip = 10.0.0.1
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver
compute_driver=libvirt.LibvirtDriver                         (CPT)
instances_path = /var/lib/nova/instances/                    (CPT)
lock_path = /var/lib/nova/tmp                                 (CPT)
logdir = /var/log/nova/                                       (CPT)

[api_database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api    (CTL)

[database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova        (CTL)

[api]
auth_strategy = keystone

[keystone_authtoken]
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = NOVA_PASS

[vnc]
enabled = true
server_listen = $my_ip
server_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html    (CPT)

[glance]
api_servers = http://controller:9292

[oslo_concurrency]
lock_path = /var/lib/nova/tmp                                 (CTL)

[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = PLACEMENT_PASS

[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
service_metadata_proxy = true                                 (CTL)
metadata_proxy_shared_secret = METADATA_SECRET                (CTL)
```

Explanation: the [DEFAULT] section enables the compute and metadata APIs, configures the RabbitMQ message queue entry point, sets my_ip, and enables the neutron network service; the [api_database] and [database] sections configure the database entry points; the [api] and [keystone_authtoken] sections configure the identity service entry point; the [vnc] section enables and configures the remote console entry point; the [glance] section configures the Image service API address; the [oslo_concurrency] section configures the lock path; the [placement] section configures the placement service entry point.

Note: replace RABBIT_PASS with the password of the openstack account in RabbitMQ; set my_ip to the management IP address of the controller node; replace NOVA_DBPASS with the password of the nova databases; replace NOVA_PASS with the password of the nova user; replace PLACEMENT_PASS with the password of the placement user; replace NEUTRON_PASS with the password of the neutron user; replace METADATA_SECRET with a suitable metadata proxy secret.

Additional step: determine whether virtual machine hardware acceleration is supported (x86 architecture):

```
egrep -c '(vmx|svm)' /proc/cpuinfo    (CPT)
```

If the command returns 0, hardware acceleration is not supported and libvirt must be configured to use QEMU instead of KVM:

```
vim /etc/nova/nova.conf    (CPT)

[libvirt]
virt_type = qemu
```

If the command returns 1 or more, hardware acceleration is supported and `virt_type` can be set to `kvm`.

Note: on arm64, additionally run the following commands on the compute nodes:

```
mkdir -p /usr/share/AAVMF
chown nova:nova /usr/share/AAVMF

ln -s /usr/share/edk2/aarch64/QEMU_EFI-pflash.raw \
      /usr/share/AAVMF/AAVMF_CODE.fd
ln -s /usr/share/edk2/aarch64/vars-template-pflash.raw \
      /usr/share/AAVMF/AAVMF_VARS.fd

vim /etc/libvirt/qemu.conf

nvram = ["/usr/share/AAVMF/AAVMF_CODE.fd: \
          /usr/share/AAVMF/AAVMF_VARS.fd", \
         "/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw: \
          /usr/share/edk2/aarch64/vars-template-pflash.raw"]
```

In addition, when the deployment environment on ARM is nested virtualization, configure libvirt as follows:

```
[libvirt]
virt_type = qemu
cpu_mode = custom
cpu_model = cortex-a72
```

Synchronize the databases.

Synchronize the nova-api database:

```
su -s /bin/sh -c "nova-manage api_db sync" nova    (CTL)
```

Register the cell0 database:

```
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova    (CTL)
```

Create the cell1 cell:

```
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova    (CTL)
```

Synchronize the nova database:

```
su -s /bin/sh -c "nova-manage db sync" nova    (CTL)
```

Verify that cell0 and cell1 are registered correctly:

```
su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova    (CTL)
```

Add the compute nodes to the OpenStack cluster:

```
su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova    (CTL)
```

Start the services:

```
systemctl enable \                          (CTL)
    openstack-nova-api.service \
    openstack-nova-scheduler.service \
    openstack-nova-conductor.service \
    openstack-nova-novncproxy.service
systemctl start \                           (CTL)
    openstack-nova-api.service \
    openstack-nova-scheduler.service \
    openstack-nova-conductor.service \
    openstack-nova-novncproxy.service

systemctl enable libvirtd.service openstack-nova-compute.service    (CPT)
systemctl start libvirtd.service openstack-nova-compute.service     (CPT)
```

Verification

```
source ~/.admin-openrc    (CTL)
```

List the service components to verify that every process started and registered successfully:

```
openstack compute service list    (CTL)
```

List the API endpoints in the identity service to verify connectivity to the identity service:

```
openstack catalog list    (CTL)
```

List the images in the Image service to verify connectivity to the Image service:

```
openstack image list    (CTL)
```

Check that the cells are working and that the other necessary prerequisites are in place:

```
nova-status upgrade check    (CTL)
```

## Neutron Installation

Create the database, service credentials and API endpoints.

Create the database:

```
mysql -u root -p    (CTL)

MariaDB [(none)]> CREATE DATABASE neutron;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
IDENTIFIED BY 'NEUTRON_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
IDENTIFIED BY 'NEUTRON_DBPASS';
MariaDB [(none)]> exit
```

Note: replace NEUTRON_DBPASS with the password for the neutron database.

```
source ~/.admin-openrc    (CTL)
```

Create the neutron service credentials:

```
openstack user create --domain default --password-prompt neutron                          (CTL)
openstack role add --project service --user neutron admin                                 (CTL)
openstack service create --name neutron --description "OpenStack Networking" network      (CTL)
```

Create the Neutron service API endpoints:

```
openstack endpoint create --region RegionOne network public http://controller:9696      (CTL)
openstack endpoint create --region RegionOne network internal http://controller:9696    (CTL)
openstack endpoint create --region RegionOne network admin http://controller:9696       (CTL)
```

Install the packages:

```
yum install openstack-neutron openstack-neutron-linuxbridge ebtables ipset \    (CTL)
            openstack-neutron-ml2

yum install openstack-neutron-linuxbridge ebtables ipset    (CPT)
```
Configure neutron.

Configure the main configuration file:

```
vim /etc/neutron/neutron.conf

[database]
connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron    (CTL)

[DEFAULT]
core_plugin = ml2                                (CTL)
service_plugins = router                         (CTL)
allow_overlapping_ips = true                     (CTL)
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = true        (CTL)
notify_nova_on_port_data_changes = true          (CTL)
api_workers = 3                                  (CTL)

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = neutron
password = NEUTRON_PASS

[nova]
auth_url = http://controller:5000                (CTL)
auth_type = password                             (CTL)
project_domain_name = Default                    (CTL)
user_domain_name = Default                       (CTL)
region_name = RegionOne                          (CTL)
project_name = service                           (CTL)
username = nova                                  (CTL)
password = NOVA_PASS                             (CTL)

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
```

Explanation: the [database] section configures the database entry point; the [DEFAULT] section enables the ml2 and router plugins, allows overlapping IP addresses, and configures the RabbitMQ message queue entry point; the [DEFAULT] and [keystone_authtoken] sections configure the identity service entry point; the [DEFAULT] and [nova] sections configure networking to notify compute of network topology changes; the [oslo_concurrency] section configures the lock path.

Note: replace NEUTRON_DBPASS with the password of the neutron database; replace RABBIT_PASS with the password of the openstack account in RabbitMQ; replace NEUTRON_PASS with the password of the neutron user; replace NOVA_PASS with the password of the nova user.

Configure the ML2 plugin:

```
vim /etc/neutron/plugins/ml2/ml2_conf.ini

[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security

[ml2_type_flat]
flat_networks = provider

[ml2_type_vxlan]
vni_ranges = 1:1000

[securitygroup]
enable_ipset = true
```

Create the symbolic link /etc/neutron/plugin.ini:

```
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
```

Note: the [ml2] section enables the flat, vlan and vxlan networks, enables the linuxbridge and l2population mechanisms, and enables the port security extension driver; the [ml2_type_flat] section configures the flat network as the provider virtual network; the [ml2_type_vxlan] section configures the VXLAN network identifier range; the [securitygroup] section enables ipset.

Supplement: the specific L2 configuration can be adjusted to user needs; this document uses provider network + linuxbridge.

Configure the Linux bridge agent:

```
vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini

[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME

[vxlan]
enable_vxlan = true
local_ip = OVERLAY_INTERFACE_IP_ADDRESS
l2_population = true

[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
```

Explanation: the [linux_bridge] section maps the provider virtual network to the physical network interface; the [vxlan] section enables the vxlan overlay network, configures the IP address of the physical network interface that handles overlay traffic, and enables layer-2 population;
the [securitygroup] section allows security groups and configures the linux bridge iptables firewall driver.

Note: replace PROVIDER_INTERFACE_NAME with the physical network interface; replace OVERLAY_INTERFACE_IP_ADDRESS with the management IP address of the controller node.

Configure the Layer-3 agent:

```
vim /etc/neutron/l3_agent.ini    (CTL)

[DEFAULT]
interface_driver = linuxbridge
```

Explanation: the [DEFAULT] section configures the interface driver as linuxbridge.

Configure the DHCP agent:

```
vim /etc/neutron/dhcp_agent.ini    (CTL)

[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
```

Explanation: the [DEFAULT] section configures the linuxbridge interface driver and the Dnsmasq DHCP driver, and enables isolated metadata.

Configure the metadata agent:

```
vim /etc/neutron/metadata_agent.ini    (CTL)

[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = METADATA_SECRET
```

Explanation: the [DEFAULT] section configures the metadata host and the shared secret.

Note: replace METADATA_SECRET with a suitable metadata proxy secret.

Configure the related nova settings:

```
vim /etc/nova/nova.conf

[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = Default
user_domain_name = Default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
service_metadata_proxy = true                     (CTL)
metadata_proxy_shared_secret = METADATA_SECRET    (CTL)
```

Explanation: the [neutron] section configures the access parameters, enables the metadata proxy, and configures the secret.

Note: replace NEUTRON_PASS with the password of the neutron user; replace METADATA_SECRET with a suitable metadata proxy secret.

Synchronize the database:

```
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
    --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
```

Restart the compute API service:

```
systemctl restart openstack-nova-api.service
```

Start the networking services:

```
systemctl enable neutron-server.service neutron-linuxbridge-agent.service \    (CTL)
                 neutron-dhcp-agent.service neutron-metadata-agent.service \
                 neutron-l3-agent.service
systemctl restart neutron-server.service neutron-linuxbridge-agent.service \   (CTL)
                  neutron-dhcp-agent.service neutron-metadata-agent.service \
                  neutron-l3-agent.service

systemctl enable neutron-linuxbridge-agent.service                                        (CPT)
systemctl restart neutron-linuxbridge-agent.service openstack-nova-compute.service        (CPT)
```

Verification

Verify that the neutron agents started successfully:

```
openstack network agent list
```
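With the agents up, a common next step is to create the provider (external) network that instances will attach to. The commands below are a hedged sketch based on the flat `provider` physical network configured in ml2_conf.ini and linuxbridge_agent.ini above; they are not part of the original guide, and the subnet range, gateway and DNS server (203.0.113.0/24, 203.0.113.1, 8.8.4.4) are placeholder values that must be replaced with your real external network details.

```
source ~/.admin-openrc

# Flat provider network mapped to the "provider" physical network
openstack network create --share --external \
    --provider-physical-network provider \
    --provider-network-type flat provider

# Subnet on that network; adjust the ranges to your environment
openstack subnet create --network provider \
    --allocation-pool start=203.0.113.101,end=203.0.113.250 \
    --dns-nameserver 8.8.4.4 --gateway 203.0.113.1 \
    --subnet-range 203.0.113.0/24 provider
```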
## Cinder Installation

Create the database, service credentials and API endpoints.

Create the database:

```
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE cinder;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \
IDENTIFIED BY 'CINDER_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \
IDENTIFIED BY 'CINDER_DBPASS';
MariaDB [(none)]> exit
```

Note: replace CINDER_DBPASS with the password for the cinder database.

```
source ~/.admin-openrc
```

Create the cinder service credentials:

```
openstack user create --domain default --password-prompt cinder
openstack role add --project service --user cinder admin
openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
```

Create the Block Storage service API endpoints:

```
openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s
```

Install the packages:

```
yum install openstack-cinder-api openstack-cinder-scheduler    (CTL)

yum install lvm2 device-mapper-persistent-data scsi-target-utils rpcbind nfs-utils \    (STG)
            openstack-cinder-volume openstack-cinder-backup
```

Prepare the storage device; the following is only an example:

```
pvcreate /dev/vdb
vgcreate cinder-volumes /dev/vdb

vim /etc/lvm/lvm.conf

devices {
...
filter = [ "a/vdb/", "r/.*/"]
```

Explanation: in the devices section, add a filter that accepts the /dev/vdb device and rejects all other devices.

Prepare NFS:

```
mkdir -p /root/cinder/backup

cat << EOF >> /etc/exports
/root/cinder/backup 192.168.1.0/24(rw,sync,no_root_squash,no_all_squash)
EOF
```

Configure cinder:

```
vim /etc/cinder/cinder.conf

[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone
my_ip = 10.0.0.11
enabled_backends = lvm                                          (STG)
backup_driver=cinder.backup.drivers.nfs.NFSBackupDriver        (STG)
backup_share=HOST:PATH                                          (STG)

[database]
connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = cinder
password = CINDER_PASS

[oslo_concurrency]
lock_path = /var/lib/cinder/tmp

[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver    (STG)
volume_group = cinder-volumes                                (STG)
iscsi_protocol = iscsi                                       (STG)
iscsi_helper = tgtadm                                        (STG)
```

Explanation: the [database] section configures the database entry point; the [DEFAULT] section configures the RabbitMQ message queue entry point and my_ip; the [DEFAULT] and [keystone_authtoken] sections configure the identity service entry point; the [oslo_concurrency] section configures the lock path.

Note: replace CINDER_DBPASS with the password of the cinder database; replace RABBIT_PASS with the password of the openstack account in RabbitMQ; set my_ip to the management IP address of the controller node; replace CINDER_PASS with the password of the cinder user; replace HOST:PATH with the NFS
server host IP and shared path.

Synchronize the database:

```
su -s /bin/sh -c "cinder-manage db sync" cinder    (CTL)
```

Configure nova:

```
vim /etc/nova/nova.conf    (CTL)

[cinder]
os_region_name = RegionOne
```

Restart the compute API service:

```
systemctl restart openstack-nova-api.service
```

Start the cinder services:

```
systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service    (CTL)
systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service     (CTL)

systemctl enable rpcbind.service nfs-server.service tgtd.service iscsid.service \    (STG)
                 openstack-cinder-volume.service \
                 openstack-cinder-backup.service
systemctl start rpcbind.service nfs-server.service tgtd.service iscsid.service \     (STG)
                openstack-cinder-volume.service \
                openstack-cinder-backup.service
```

Note: when cinder attaches volumes using tgtadm, modify /etc/tgt/tgtd.conf with the following content to ensure tgtd can discover the cinder-volume iscsi targets:

```
include /var/lib/cinder/volumes/*
```

Verification

```
source ~/.admin-openrc
openstack volume service list
```
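If the cinder-scheduler and cinder-volume services show up with state "up", a small volume can be created as a smoke test. This is a suggested check that is not in the original guide; the volume name `demo-volume` is an arbitrary example and the volume should reach the `available` status within a few seconds on the LVM backend configured above.

```
source ~/.admin-openrc

openstack volume create --size 1 demo-volume
openstack volume list
openstack volume delete demo-volume
```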
Install tempest extensions (optional): the OpenStack services themselves also provide tempest test packages, which can be installed to enrich the tempest test coverage. In Train we provide extension tests for Cinder, Glance, Keystone, Ironic and Trove; install them with:

```shell
yum install python3-cinder-tempest-plugin python3-glance-tempest-plugin python3-ironic-tempest-plugin python3-keystone-tempest-plugin python3-trove-tempest-plugin
```

### Ironic Installation

Ironic is the bare metal service of OpenStack. It is recommended if you need to provision bare metal machines; otherwise it can be skipped.

Set up the database

The Bare Metal service stores information in a database. Create an `ironic` database that the `ironic` user can access, replacing `IRONIC_DBPASSWORD` with a suitable password:

```shell
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE ironic CHARACTER SET utf8;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'localhost' \
    IDENTIFIED BY 'IRONIC_DBPASSWORD';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'%' \
    IDENTIFIED BY 'IRONIC_DBPASSWORD';
```

Install the packages:

```shell
yum install openstack-ironic-api openstack-ironic-conductor python3-ironicclient
```

Start the services:

```shell
systemctl enable openstack-ironic-api openstack-ironic-conductor
systemctl start openstack-ironic-api openstack-ironic-conductor
```

Create the service user authentication

1. Create the Bare Metal service user:

```shell
openstack user create --password IRONIC_PASSWORD \
    --email ironic@example.com ironic
openstack role add --project service --user ironic admin
openstack service create --name ironic \
    --description "Ironic baremetal provisioning service" baremetal
```

2. Create the Bare Metal service endpoints:

```shell
openstack endpoint create --region RegionOne baremetal admin http://$IRONIC_NODE:6385
openstack endpoint create --region RegionOne baremetal public http://$IRONIC_NODE:6385
openstack endpoint create --region RegionOne baremetal internal http://$IRONIC_NODE:6385
```

Configure the ironic-api service

The configuration file is /etc/ironic/ironic.conf.

1. Configure the location of the database through the `connection` option, as shown below. Replace `IRONIC_DBPASSWORD` with the password of the ironic user and `DB_IP` with the IP address of the DB server:

```shell
[database]

# The SQLAlchemy connection string used to connect to the
# database (string value)
connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic
```
2. Configure the ironic-api service to use the RabbitMQ message broker through the following option; replace `RPC_*` with the RabbitMQ address details and credentials:

```shell
[DEFAULT]

# A URL representing the messaging driver to use and its full
# configuration. (string value)
transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
```

Users may also replace rabbitmq with the json-rpc method themselves.

3. Configure the ironic-api service to use the credentials of the Identity service. Replace `PUBLIC_IDENTITY_IP` with the public IP of the Identity server, `PRIVATE_IDENTITY_IP` with the private IP of the Identity server, and `IRONIC_PASSWORD` with the password of the `ironic` user in the Identity service:

```shell
[DEFAULT]

# Authentication strategy used by ironic-api: one of
# "keystone" or "noauth". "noauth" should not be used in a
# production environment because all authentication will be
# disabled. (string value)
auth_strategy=keystone

[keystone_authtoken]
# Authentication type to load (string value)
auth_type=password
# Complete public Identity API endpoint (string value)
www_authenticate_uri=http://PUBLIC_IDENTITY_IP:5000
# Complete admin Identity API endpoint. (string value)
auth_url=http://PRIVATE_IDENTITY_IP:5000
# Service username. (string value)
username=ironic
# Service account password. (string value)
password=IRONIC_PASSWORD
# Service tenant name. (string value)
project_name=service
# Domain name containing project (string value)
project_domain_name=Default
# User's domain name (string value)
user_domain_name=Default
```

4. Create the Bare Metal service database tables:

```shell
ironic-dbsync --config-file /etc/ironic/ironic.conf create_schema
```

5. Restart the ironic-api service:

```shell
sudo systemctl restart openstack-ironic-api
```

Configure the ironic-conductor service

1. Replace `HOST_IP` with the IP of the conductor host:

```shell
[DEFAULT]

# IP address of this host. If unset, will determine the IP
# programmatically. If unable to do so, will use "127.0.0.1".
# (string value)
my_ip=HOST_IP
```

2. Configure the location of the database. ironic-conductor should use the same configuration as ironic-api. Replace `IRONIC_DBPASSWORD` with the password of the ironic user and `DB_IP` with the IP address of the DB server:

```shell
[database]

# The SQLAlchemy connection string to use to connect to the
# database. (string value)
connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic
```
3. Configure the ironic-conductor service to use the RabbitMQ message broker through the following option. ironic-conductor should use the same configuration as ironic-api; replace `RPC_*` with the RabbitMQ address details and credentials:

```shell
[DEFAULT]

# A URL representing the messaging driver to use and its full
# configuration. (string value)
transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
```

Users may also replace rabbitmq with the json-rpc method themselves.

4. Configure the credentials for accessing other OpenStack services

To communicate with other OpenStack services, the Bare Metal service needs to authenticate with the OpenStack Identity service using a service user when requesting those services. The credentials of these users must be configured in each configuration section associated with the corresponding service:

- `[neutron]` - accessing the OpenStack Networking service
- `[glance]` - accessing the OpenStack Image service
- `[swift]` - accessing the OpenStack Object Storage service
- `[cinder]` - accessing the OpenStack Block Storage service
- `[inspector]` - accessing the OpenStack bare metal introspection service
- `[service_catalog]` - a special entry used to store the credentials the Bare Metal service uses to discover its own API URL endpoint as registered in the OpenStack Identity service catalog

For simplicity, the same service user can be used for all services. For backward compatibility, it should be the same user as configured in `[keystone_authtoken]` of the ironic-api service, but this is not mandatory; a different service user can be created and configured for each service.

In the example below, the authentication information for accessing the OpenStack Networking service is configured as follows:

- the Networking service is deployed in the Identity service region named RegionOne, with only the public endpoint interface registered in the service catalog;
- requests use a specific CA SSL certificate for HTTPS connections;
- the same service user as configured for ironic-api;
- the dynamic password authentication plugin discovers a suitable Identity service API version based on the other options.

```shell
[neutron]

# Authentication type to load (string value)
auth_type = password
# Authentication URL (string value)
auth_url=https://IDENTITY_IP:5000/
# Username (string value)
username=ironic
# User's password (string value)
password=IRONIC_PASSWORD
# Project name to scope to (string value)
project_name=service
# Domain ID containing project (string value)
project_domain_id=default
# User's domain id (string value)
user_domain_id=default
# PEM encoded Certificate Authority to use when verifying
# HTTPs connections. (string value)
cafile=/opt/stack/data/ca-bundle.pem
# The default region_name for endpoint URL discovery. (string
# value)
region_name = RegionOne
# List of interfaces, in order of preference, for endpoint
# URL. (list value)
valid_interfaces=public
```
By default, to communicate with other services, the Bare Metal service attempts to discover a suitable endpoint for each service through the service catalog of the Identity service. To use a different endpoint for a specific service, specify it through the `endpoint_override` option in the Bare Metal service configuration file:

```shell
[neutron]
...
endpoint_override =
```

5. Configure the allowed drivers and hardware types

Set the hardware types that the ironic-conductor service is allowed to use through `enabled_hardware_types`:

```shell
[DEFAULT]
enabled_hardware_types = ipmi
```

Configure the hardware interfaces:

```shell
enabled_boot_interfaces = pxe
enabled_deploy_interfaces = direct,iscsi
enabled_inspect_interfaces = inspector
enabled_management_interfaces = ipmitool
enabled_power_interfaces = ipmitool
```

Configure the interface defaults:

```shell
[DEFAULT]
default_deploy_interface = direct
default_network_interface = neutron
```

If any driver that uses Direct deploy is enabled, the Swift backend of the Image service must be installed and configured. The Ceph Object Gateway (RADOS Gateway) is also supported as an Image service backend.

6. Restart the ironic-conductor service:

```shell
sudo systemctl restart openstack-ironic-conductor
```

Configure the httpd service

Create the httpd root directory used by ironic and set its owner and group. The directory path must match the `http_root` option in the `[deploy]` section of /etc/ironic/ironic.conf:

```shell
mkdir -p /var/lib/ironic/httproot
chown ironic.ironic /var/lib/ironic/httproot
```

Install and configure the httpd service

1. Install the httpd service (skip if already installed):

```shell
yum install httpd -y
```

2. Create the /etc/httpd/conf.d/openstack-ironic-httpd.conf file with the following content:

```shell
Listen 8080

ServerName ironic.openeuler.com

ErrorLog "/var/log/httpd/openstack-ironic-httpd-error_log"
CustomLog "/var/log/httpd/openstack-ironic-httpd-access_log" "%h %l %u %t \"%r\" %>s %b"

DocumentRoot "/var/lib/ironic/httproot"

Options Indexes FollowSymLinks
Require all granted
LogLevel warn
AddDefaultCharset UTF-8
EnableSendfile on
```

Note: the listening port must match the port specified in the `http_url` option in the `[deploy]` section of /etc/ironic/ironic.conf.

3. Restart the httpd service:

```shell
systemctl restart httpd
```
7. Build the deploy ramdisk image

The Train ramdisk image can be built with the ironic-python-agent service or the disk-image-builder tool, or with the community's latest ironic-python-agent-builder. Users may also choose other tools.

If the Train native tools are used, install the corresponding packages:

```shell
yum install openstack-ironic-python-agent
```

or

```shell
yum install diskimage-builder
```

For the detailed usage, see the official documentation.

The following describes the complete process of building the deploy image used by ironic with ironic-python-agent-builder.

Install ironic-python-agent-builder

1. Install the tool:

```shell
pip install ironic-python-agent-builder
```

2. Modify the python interpreter in the following files:

```shell
/usr/bin/yum
/usr/libexec/urlgrabber-ext-down
```

3. Install the other required tools:

```shell
yum install git
```

Because `DIB` depends on the `semanage` command, confirm the command is available before building the image: `semanage --help`. If the command is not found, install it:

```shell
# First check which package provides the command
[root@localhost ~]# yum provides /usr/sbin/semanage
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirror.vcu.edu
 * extras: mirror.vcu.edu
 * updates: mirror.math.princeton.edu
policycoreutils-python-2.5-34.el7.aarch64 : SELinux policy core python utilities
Repo         : base
Matched from:
Filename     : /usr/sbin/semanage
# Install it
[root@localhost ~]# yum install policycoreutils-python
```

Build the image

On the `arm` architecture, add:

```shell
export ARCH=aarch64
```

Basic usage:

```shell
usage: ironic-python-agent-builder [-h] [-r RELEASE] [-o OUTPUT] [-e ELEMENT]
                                   [-b BRANCH] [-v] [--extra-args EXTRA_ARGS]
                                   distribution

positional arguments:
  distribution          Distribution to use

optional arguments:
  -h, --help            show this help message and exit
  -r RELEASE, --release RELEASE
                        Distribution release to use
  -o OUTPUT, --output OUTPUT
                        Output base file name
  -e ELEMENT, --element ELEMENT
                        Additional DIB element to use
  -b BRANCH, --branch BRANCH
                        If set, override the branch that is used for ironic-
                        python-agent and requirements
  -v, --verbose         Enable verbose logging in diskimage-builder
  --extra-args EXTRA_ARGS
                        Extra arguments to pass to diskimage-builder
```

Example:

```shell
ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky
```

Allow ssh login

Initialize the environment variables, then build the image:

```shell
export DIB_DEV_USER_USERNAME=ipa \
export DIB_DEV_USER_PWDLESS_SUDO=yes \
export DIB_DEV_USER_PASSWORD='123'
ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky -e selinux-permissive -e devuser
```
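Whichever build variant is used, the resulting deploy kernel and ramdisk normally have to be registered in Glance before a node can reference them. A minimal sketch, assuming the builder produced `/mnt/ironic-agent-ssh.kernel` and `/mnt/ironic-agent-ssh.initramfs` (the file names follow from the `-o /mnt/ironic-agent-ssh` value above and the image names are arbitrary):

```shell
source ~/.admin-openrc
# Register the deploy kernel built above.
openstack image create deploy-kernel --public \
    --disk-format aki --container-format aki \
    --file /mnt/ironic-agent-ssh.kernel
# Register the deploy ramdisk built above.
openstack image create deploy-initrd --public \
    --disk-format ari --container-format ari \
    --file /mnt/ironic-agent-ssh.initramfs
```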
Specify a code repository

Initialize the corresponding environment variables, then build the image:

```shell
# Specify the repository address and version
DIB_REPOLOCATION_ironic_python_agent=git@172.20.2.149:liuzz/ironic-python-agent.git
DIB_REPOREF_ironic_python_agent=origin/develop

# Clone the code directly from gerrit
DIB_REPOLOCATION_ironic_python_agent=https://review.opendev.org/openstack/ironic-python-agent
DIB_REPOREF_ironic_python_agent=refs/changes/43/701043/1
```

Reference: [source-repositories](https://docs.openstack.org/diskimage-builder/latest/elements/source-repositories/README.html).

Specifying the repository address and version has been verified to work.

Note

- The PXE configuration file template in native OpenStack does not support the arm64 architecture; you need to modify the native OpenStack code yourself. In Train, the community ironic still does not support UEFI PXE boot on arm64: the generated grub.cfg file (usually under /tftpboot/) has the wrong format and PXE boot fails. Users need to modify the code that generates grub.cfg themselves.
- TLS errors when ironic sends command-status query requests to ipa: in Train, ipa and ironic both enable TLS authentication by default when sending requests to each other; disable it according to the official documentation.

Modify the ironic configuration file (/etc/ironic/ironic.conf), adding `ipa-insecure=1` to the following options:

```shell
[agent]
verify_ca = False

[pxe]
pxe_append_params = nofb nomodeset vga=normal coreos.autologin ipa-insecure=1
```

Add the ipa configuration file /etc/ironic_python_agent/ironic_python_agent.conf to the ramdisk image and configure TLS as follows (the /etc/ironic_python_agent directory must be created in advance):

```shell
/etc/ironic_python_agent/ironic_python_agent.conf

[DEFAULT]
enable_auto_tls = False
```

Set the permissions:

```shell
chown -R ipa.ipa /etc/ironic_python_agent/
```

Modify the service startup file of the ipa service to add the configuration file option:

```shell
vim usr/lib/systemd/system/ironic-python-agent.service

[Unit]
Description=Ironic Python Agent
After=network-online.target

[Service]
ExecStartPre=/sbin/modprobe vfat
ExecStart=/usr/local/bin/ironic-python-agent --config-file /etc/ironic_python_agent/ironic_python_agent.conf
Restart=always
RestartSec=30s

[Install]
WantedBy=multi-user.target
```

In Train we also provide services such as ironic-inspector; users can install them as required.

### Kolla Installation

Kolla provides production-ready containerized deployment for OpenStack services.

Installing Kolla is very simple: just install the corresponding RPM packages:

```shell
yum install openstack-kolla openstack-kolla-ansible
```

After installation, commands such as `kolla-ansible`, `kolla-build`, `kolla-genpwd` and `kolla-mergepwd` can be used to build images and deploy the container environment.
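As a usage sketch only (not an openEuler-specific step of this guide): a typical kolla-ansible all-in-one run after installing the packages looks roughly like the following, assuming `/etc/kolla/globals.yml` has already been edited for the target environment and the stock all-in-one inventory shipped with kolla-ansible is used:

```shell
# Generate random service passwords into /etc/kolla/passwords.yml.
kolla-genpwd
# Prepare the deployment host listed in the all-in-one inventory.
kolla-ansible -i /usr/share/kolla-ansible/ansible/inventory/all-in-one bootstrap-servers
# Sanity-check the environment, then deploy the containers.
kolla-ansible -i /usr/share/kolla-ansible/ansible/inventory/all-in-one prechecks
kolla-ansible -i /usr/share/kolla-ansible/ansible/inventory/all-in-one deploy
```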
### Trove Installation

Trove is the Database service of OpenStack. It is recommended if users want the database service provided by OpenStack; otherwise it can be skipped.

Set up the database

The Database service stores information in a database. Create a `trove` database that the `trove` user can access, replacing `TROVE_DBPASSWORD` with a suitable password:

```shell
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE trove CHARACTER SET utf8;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'localhost' \
    IDENTIFIED BY 'TROVE_DBPASSWORD';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'%' \
    IDENTIFIED BY 'TROVE_DBPASSWORD';
```

Create the service user authentication

1. Create the Trove service user:

```shell
openstack user create --domain default --password-prompt trove
openstack role add --project service --user trove admin
openstack service create --name trove --description "Database" database
```

Explanation: replace `TROVE_PASSWORD` with the password of the trove user.

2. Create the Database service endpoints:

```shell
openstack endpoint create --region RegionOne database public http://controller:8779/v1.0/%\(tenant_id\)s
openstack endpoint create --region RegionOne database internal http://controller:8779/v1.0/%\(tenant_id\)s
openstack endpoint create --region RegionOne database admin http://controller:8779/v1.0/%\(tenant_id\)s
```

Install and configure the Trove components

1. Install the Trove packages:

```shell
yum install openstack-trove python3-troveclient
```
2. Configure `trove.conf`:

```shell
vim /etc/trove/trove.conf

[DEFAULT]
log_dir = /var/log/trove
trove_auth_url = http://controller:5000/
nova_compute_url = http://controller:8774/v2
cinder_url = http://controller:8776/v1
swift_url = http://controller:8080/v1/AUTH_
rpc_backend = rabbit
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672
auth_strategy = keystone
add_addresses = True
api_paste_config = /etc/trove/api-paste.ini
nova_proxy_admin_user = admin
nova_proxy_admin_pass = ADMIN_PASSWORD
nova_proxy_admin_tenant_name = service
taskmanager_manager = trove.taskmanager.manager.Manager
use_nova_server_config_drive = True
# Set these if using Neutron Networking
network_driver = trove.network.neutron.NeutronDriver
network_label_regex = .*

[database]
connection = mysql+pymysql://trove:TROVE_DBPASSWORD@controller/trove

[keystone_authtoken]
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = trove
password = TROVE_PASSWORD
```

**Explanation:**

- In the `[DEFAULT]` group, `nova_compute_url` and `cinder_url` are the endpoints created in Keystone for Nova and Cinder.
- `nova_proxy_XXX` is the information of a user that can access the Nova service; the example above uses the `admin` user.
- `transport_url` is the RabbitMQ connection information; replace `RABBIT_PASS` with the RabbitMQ password.
- The `connection` in the `[database]` group is the database information created for Trove in mysql earlier.
- In the Trove user information, replace `TROVE_PASSWORD` with the actual password of the trove user.

3. Configure `trove-guestagent.conf`:

```shell
vim /etc/trove/trove-guestagent.conf

rabbit_host = controller
rabbit_password = RABBIT_PASS
trove_auth_url = http://controller:5000/
```

**Explanation:** the `guestagent` is an independent Trove component that must be built into the virtual machine image Trove creates through Nova. After a database instance is created, the guestagent process starts and reports heartbeats to Trove through the message queue (RabbitMQ), so the RabbitMQ user and password must be configured.

**Starting from the Victoria release, Trove uses a single unified image to run different database types; the database services run in Docker containers inside the guest VM.**

- Replace `RABBIT_PASS` with the RabbitMQ password.

4. Generate the Trove database tables:

```shell
su -s /bin/sh -c "trove-manage db_sync" trove
```

Complete the installation

1. Configure the Trove services to start automatically:

```shell
systemctl enable openstack-trove-api.service \
openstack-trove-taskmanager.service \
openstack-trove-conductor.service
```
2. Start the services:

```shell
systemctl start openstack-trove-api.service \
openstack-trove-taskmanager.service \
openstack-trove-conductor.service
```

### Swift Installation

Swift provides elastic, scalable, highly available distributed object storage, suitable for storing large-scale unstructured data.

Create the service credentials and API endpoints.

Create the service credentials:

```shell
# Create the swift user:
openstack user create --domain default --password-prompt swift
# Add the admin role to the swift user:
openstack role add --project service --user swift admin
# Create the swift service entity:
openstack service create --name swift --description "OpenStack Object Storage" object-store
```

Create the swift API endpoints:

```shell
openstack endpoint create --region RegionOne object-store public http://controller:8080/v1/AUTH_%\(project_id\)s
openstack endpoint create --region RegionOne object-store internal http://controller:8080/v1/AUTH_%\(project_id\)s
openstack endpoint create --region RegionOne object-store admin http://controller:8080/v1
```

Install the packages:

```shell
yum install openstack-swift-proxy python3-swiftclient python3-keystoneclient python3-keystonemiddleware memcached    (CTL)
```

Configure the proxy-server

The Swift RPM package already contains a basically usable proxy-server.conf; only the ip and swift password in it need to be changed manually.

***Note:*** **replace password with the password you chose for the swift user in the Identity service.**

Install and configure the storage nodes (STG)

Install the supporting packages:

```shell
yum install xfsprogs rsync
```

Format the /dev/vdb and /dev/vdc devices as XFS:

```shell
mkfs.xfs /dev/vdb
mkfs.xfs /dev/vdc
```

Create the mount point directory structure:

```shell
mkdir -p /srv/node/vdb
mkdir -p /srv/node/vdc
```

Find the UUIDs of the new partitions:

```shell
blkid
```

Edit the /etc/fstab file and add the following to it:

```shell
UUID="" /srv/node/vdb xfs noatime 0 2
UUID="" /srv/node/vdc xfs noatime 0 2
```

Mount the devices:

```shell
mount /srv/node/vdb
mount /srv/node/vdc
```

Note: if disaster recovery is not required, only one device needs to be created, and the rsync configuration below can be skipped.

(Optional) Create or edit the /etc/rsyncd.conf file to contain the following:

```shell
[DEFAULT]
uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
address = MANAGEMENT_INTERFACE_IP_ADDRESS

[account]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/account.lock

[container]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/container.lock

[object]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/object.lock
```

Replace MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network on the storage node.
Start the rsyncd service and configure it to start at system boot:

```shell
systemctl enable rsyncd.service
systemctl start rsyncd.service
```

Install and configure the components on the storage nodes (STG)

Install the packages:

```shell
yum install openstack-swift-account openstack-swift-container openstack-swift-object
```

Edit the account-server.conf, container-server.conf and object-server.conf files in the /etc/swift directory, replacing bind_ip with the IP address of the management network on the storage node.

Ensure correct ownership of the mount point directory structure:

```shell
chown -R swift:swift /srv/node
```

Create the recon directory and ensure it has correct ownership:

```shell
mkdir -p /var/cache/swift
chown -R root:swift /var/cache/swift
chmod -R 775 /var/cache/swift
```

Create the account ring (CTL)

Change to the /etc/swift directory:

```shell
cd /etc/swift
```

Create the base account.builder file:

```shell
swift-ring-builder account.builder create 10 1 1
```

Add each storage node to the ring:

```shell
swift-ring-builder account.builder add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6202 --device DEVICE_NAME --weight DEVICE_WEIGHT
```

Replace STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network on the storage node, and DEVICE_NAME with the name of a storage device on the same storage node.

Note: repeat this command for every storage device on every storage node.

Verify the ring contents:

```shell
swift-ring-builder account.builder
```

Rebalance the ring:

```shell
swift-ring-builder account.builder rebalance
```

Create the container ring (CTL)

Change to the /etc/swift directory.

Create the base container.builder file:

```shell
swift-ring-builder container.builder create 10 1 1
```

Add each storage node to the ring:

```shell
swift-ring-builder container.builder \
  add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6201 \
  --device DEVICE_NAME --weight 100
```

Replace STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network on the storage node, and DEVICE_NAME with the name of a storage device on the same storage node.

Note: repeat this command for every storage device on every storage node.

Verify the ring contents:

```shell
swift-ring-builder container.builder
```

Rebalance the ring:

```shell
swift-ring-builder container.builder rebalance
```

Create the object ring (CTL)

Change to the /etc/swift directory.

Create the base object.builder file:

```shell
swift-ring-builder object.builder create 10 1 1
```

Add each storage node to the ring:

```shell
swift-ring-builder object.builder \
  add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6200 \
  --device DEVICE_NAME --weight 100
```
Replace STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network on the storage node, and DEVICE_NAME with the name of a storage device on the same storage node.

Note: repeat this command for every storage device on every storage node.

Verify the ring contents:

```shell
swift-ring-builder object.builder
```

Rebalance the ring:

```shell
swift-ring-builder object.builder rebalance
```

Distribute the ring configuration files:

Copy the account.ring.gz, container.ring.gz and object.ring.gz files to the /etc/swift directory on every storage node and on any other node running the proxy service.

Complete the installation

Edit the /etc/swift/swift.conf file:

```shell
[swift-hash]
swift_hash_path_suffix = test-hash
swift_hash_path_prefix = test-hash

[storage-policy:0]
name = Policy-0
default = yes
```

Replace test-hash with unique values.

Copy the swift.conf file to the /etc/swift directory on every storage node and on any other node running the proxy service.

On all nodes, ensure correct ownership of the configuration directory:

```shell
chown -R root:swift /etc/swift
```

On the controller node and any other node running the proxy service, start the Object Storage proxy service and its dependencies, and configure them to start at system boot:

```shell
systemctl enable openstack-swift-proxy.service memcached.service
systemctl start openstack-swift-proxy.service memcached.service
```

On the storage nodes, start the Object Storage services and configure them to start at system boot:

```shell
systemctl enable openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service
systemctl start openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service

systemctl enable openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service
systemctl start openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service

systemctl enable openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service
systemctl start openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service
```

### Cyborg Installation

Cyborg provides support for accelerator devices in OpenStack, including GPU, FPGA, ASIC, NP, SoCs, NVMe/NOF SSDs, ODP, DPDK/SPDK and so on.

Initialize the database:
```shell
CREATE DATABASE cyborg;
GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'localhost' IDENTIFIED BY 'CYBORG_DBPASS';
GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'%' IDENTIFIED BY 'CYBORG_DBPASS';
```

Create the corresponding Keystone resource objects:

```shell
$ openstack user create --domain default --password-prompt cyborg
$ openstack role add --project service --user cyborg admin
$ openstack service create --name cyborg --description "Acceleration Service" accelerator

$ openstack endpoint create --region RegionOne \
  accelerator public http://:6666/v1
$ openstack endpoint create --region RegionOne \
  accelerator internal http://:6666/v1
$ openstack endpoint create --region RegionOne \
  accelerator admin http://:6666/v1
```

Install Cyborg:

```shell
yum install openstack-cyborg
```

Configure Cyborg

Modify /etc/cyborg/cyborg.conf:

```shell
[DEFAULT]
transport_url = rabbit://%RABBITMQ_USER%:%RABBITMQ_PASSWORD%@%OPENSTACK_HOST_IP%:5672/
use_syslog = False
state_path = /var/lib/cyborg
debug = True

[database]
connection = mysql+pymysql://%DATABASE_USER%:%DATABASE_PASSWORD%@%OPENSTACK_HOST_IP%/cyborg

[service_catalog]
project_domain_id = default
user_domain_id = default
project_name = service
password = PASSWORD
username = cyborg
auth_url = http://%OPENSTACK_HOST_IP%/identity
auth_type = password

[placement]
project_domain_name = Default
project_name = service
user_domain_name = Default
password = PASSWORD
username = placement
auth_url = http://%OPENSTACK_HOST_IP%/identity
auth_type = password

[keystone_authtoken]
memcached_servers = localhost:11211
project_domain_name = Default
project_name = service
user_domain_name = Default
password = PASSWORD
username = cyborg
auth_url = http://%OPENSTACK_HOST_IP%/identity
auth_type = password
```

Modify the corresponding user names, passwords and IPs yourself.

Synchronize the database tables:

```shell
cyborg-dbsync --config-file /etc/cyborg/cyborg.conf upgrade
```

Start the Cyborg services:

```shell
systemctl enable openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent
systemctl start openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent
```

### Aodh Installation

Create the database:

```shell
CREATE DATABASE aodh;
GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'localhost' IDENTIFIED BY 'AODH_DBPASS';
GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'%' IDENTIFIED BY 'AODH_DBPASS';
```

Create the corresponding Keystone resource objects:

```shell
openstack user create --domain default --password-prompt aodh
openstack role add --project service --user aodh admin
openstack service create --name aodh --description "Telemetry" alarming

openstack endpoint create --region RegionOne alarming public http://controller:8042
openstack endpoint create --region RegionOne alarming internal http://controller:8042
openstack endpoint create --region RegionOne alarming admin http://controller:8042
```

Install Aodh:

```shell
yum install openstack-aodh-api openstack-aodh-evaluator openstack-aodh-notifier openstack-aodh-listener openstack-aodh-expirer python3-aodhclient
```

Modify the configuration file:

```shell
[database]
connection = mysql+pymysql://aodh:AODH_DBPASS@controller/aodh

[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = aodh
password = AODH_PASS
[service_credentials]
auth_type = password
auth_url = http://controller:5000/v3
project_domain_id = default
user_domain_id = default
project_name = service
username = aodh
password = AODH_PASS
interface = internalURL
region_name = RegionOne
```

Initialize the database:

```shell
aodh-dbsync
```

Start the Aodh services:

```shell
systemctl enable openstack-aodh-api.service openstack-aodh-evaluator.service openstack-aodh-notifier.service openstack-aodh-listener.service
systemctl start openstack-aodh-api.service openstack-aodh-evaluator.service openstack-aodh-notifier.service openstack-aodh-listener.service
```

### Gnocchi Installation

Create the database:

```shell
CREATE DATABASE gnocchi;
GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'localhost' IDENTIFIED BY 'GNOCCHI_DBPASS';
GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'%' IDENTIFIED BY 'GNOCCHI_DBPASS';
```

Create the corresponding Keystone resource objects:

```shell
openstack user create --domain default --password-prompt gnocchi
openstack role add --project service --user gnocchi admin
openstack service create --name gnocchi --description "Metric Service" metric

openstack endpoint create --region RegionOne metric public http://controller:8041
openstack endpoint create --region RegionOne metric internal http://controller:8041
openstack endpoint create --region RegionOne metric admin http://controller:8041
```

Install Gnocchi:

```shell
yum install openstack-gnocchi-api openstack-gnocchi-metricd python3-gnocchiclient
```

Modify the configuration file /etc/gnocchi/gnocchi.conf:

```shell
[api]
auth_mode = keystone
port = 8041
uwsgi_mode = http-socket

[keystone_authtoken]
auth_type = password
auth_url = http://controller:5000/v3
project_domain_name = Default
user_domain_name = Default
project_name = service
username = gnocchi
password = GNOCCHI_PASS
interface = internalURL
region_name = RegionOne

[indexer]
url = mysql+pymysql://gnocchi:GNOCCHI_DBPASS@controller/gnocchi

[storage]
# coordination_url is not required but specifying one will improve
# performance with better workload division across workers.
coordination_url = redis://controller:6379
file_basepath = /var/lib/gnocchi
driver = file
```

Initialize the database:

```shell
gnocchi-upgrade
```

Start the Gnocchi services:

```shell
systemctl enable openstack-gnocchi-api.service openstack-gnocchi-metricd.service
systemctl start openstack-gnocchi-api.service openstack-gnocchi-metricd.service
```

### Ceilometer Installation

Create the corresponding Keystone resource objects:

```shell
openstack user create --domain default --password-prompt ceilometer
openstack role add --project service --user ceilometer admin
openstack service create --name ceilometer --description "Telemetry" metering
```

Install Ceilometer:

```shell
yum install openstack-ceilometer-notification openstack-ceilometer-central
```

Modify the configuration file /etc/ceilometer/pipeline.yaml:

```shell
publishers:
    # set address of Gnocchi
    # + filter out Gnocchi-related activity meters (Swift driver)
    # + set default archive policy
    - gnocchi://?filter_project=service&archive_policy=low
```

Modify the configuration file /etc/ceilometer/ceilometer.conf:

```shell
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller

[service_credentials]
auth_type = password
auth_url = http://controller:5000/v3
project_domain_id = default
user_domain_id = default
project_name = service
username = ceilometer
password = CEILOMETER_PASS
interface = internalURL
region_name = RegionOne
```

Initialize the database:

```shell
ceilometer-upgrade
```

Start the Ceilometer services:

```shell
systemctl enable openstack-ceilometer-notification.service openstack-ceilometer-central.service
systemctl start openstack-ceilometer-notification.service openstack-ceilometer-central.service
```

### Heat Installation

Create the heat database and grant it the proper access privileges; replace HEAT_DBPASS with a suitable password:

```shell
CREATE DATABASE heat;
GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'localhost' IDENTIFIED BY 'HEAT_DBPASS';
GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'%' IDENTIFIED BY 'HEAT_DBPASS';
```

Create the service credentials: create the heat user and add the admin role to it:

```shell
openstack user create --domain default --password-prompt heat
openstack role add --project service --user heat admin
```

Create the heat and heat-cfn services and their corresponding API endpoints:

```shell
openstack service create --name heat --description "Orchestration" orchestration
openstack service create --name heat-cfn --description "Orchestration" cloudformation

openstack endpoint create --region RegionOne orchestration public http://controller:8004/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne orchestration internal http://controller:8004/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne orchestration admin http://controller:8004/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne cloudformation public http://controller:8000/v1
openstack endpoint create --region RegionOne cloudformation internal http://controller:8000/v1
openstack endpoint create --region RegionOne cloudformation admin http://controller:8000/v1
```

Create the additional information needed for stack management, including the heat domain and its admin user heat_domain_admin, the heat_stack_owner role and the heat_stack_user role:
```shell
openstack user create --domain heat --password-prompt heat_domain_admin
openstack role add --domain heat --user-domain heat --user heat_domain_admin admin
openstack role create heat_stack_owner
openstack role create heat_stack_user
```

Install the packages:

```shell
yum install openstack-heat-api openstack-heat-api-cfn openstack-heat-engine
```

Modify the configuration file /etc/heat/heat.conf:

```shell
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
heat_metadata_server_url = http://controller:8000
heat_waitcondition_server_url = http://controller:8000/v1/waitcondition
stack_domain_admin = heat_domain_admin
stack_domain_admin_password = HEAT_DOMAIN_PASS
stack_user_domain_name = heat

[database]
connection = mysql+pymysql://heat:HEAT_DBPASS@controller/heat

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = heat
password = HEAT_PASS

[trustee]
auth_type = password
auth_url = http://controller:5000
username = heat
password = HEAT_PASS
user_domain_name = default

[clients_keystone]
auth_uri = http://controller:5000
```

Initialize the heat database tables:

```shell
su -s /bin/sh -c "heat-manage db_sync" heat
```

Start the services:

```shell
systemctl enable openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service
systemctl start openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service
```

### Quick Deployment with the OpenStack SIG Tool oos

oos (openEuler OpenStack SIG) is the command-line tool provided by the OpenStack SIG. The `oos env` family of commands provides ansible scripts for one-click deployment of OpenStack (`all in one` or three-node `cluster`); users can use these scripts to quickly deploy an OpenStack environment based on openEuler RPM packages.

The oos tool supports two ways of deploying an OpenStack environment: through a cloud provider (currently only the Huawei Cloud provider) or by managing existing hosts. The following uses deploying an `all in one` OpenStack environment on Huawei Cloud as an example to illustrate how to use the oos tool.

Install the oos tool:

```shell
pip install openstack-sig-tool
```

Configure the Huawei Cloud provider information

Open the /usr/local/etc/oos/oos.conf file and set the configuration to the Huawei Cloud resources you own:

```shell
[huaweicloud]
ak =
sk =
region = ap-southeast-3
root_volume_size = 100
data_volume_size = 100
security_group_name = oos
image_format = openEuler-%%(release)s-%%(arch)s
vpc_name = oos_vpc
subnet1_name = oos_subnet1
subnet2_name = oos_subnet2
```

Configure the OpenStack environment information

Open the /usr/local/etc/oos/oos.conf file and modify the configuration according to the current machine environment and requirements. The content is as follows:

```shell
[environment]
mysql_root_password = root
mysql_project_password = root
rabbitmq_password = root
project_identity_password = root
enabled_service = keystone,neutron,cinder,placement,nova,glance,horizon,aodh,ceilometer,cyborg,gnocchi,kolla,heat,swift,trove,tempest
neutron_provider_interface_name = br-ex
default_ext_subnet_range = 10.100.100.0/24
default_ext_subnet_gateway = 10.100.100.1
neutron_dataplane_interface_name = eth1
cinder_block_device = vdb
swift_storage_devices = vdc
swift_hash_path_suffix = ash
swift_hash_path_prefix = has
glance_api_workers = 2
cinder_api_workers = 2
nova_api_workers = 2
nova_metadata_api_workers = 2
nova_conductor_workers = 2
nova_scheduler_workers = 2
neutron_api_workers = 2
horizon_allowed_host = *
kolla_openeuler_plugin = false
```

Key configuration items:

| Configuration item | Description |
|:--|:--|
| enabled_service | List of services to install; remove entries as needed |
| neutron_provider_interface_name | Name of the neutron L3 bridge |
| default_ext_subnet_range | IP range of the neutron private network |
| default_ext_subnet_gateway | Gateway of the neutron private network |
| neutron_dataplane_interface_name | NIC used by neutron; a new, dedicated NIC is recommended to avoid conflicts with existing NICs and to prevent the all-in-one host from losing connectivity |
| cinder_block_device | Name of the block device used by cinder |
| swift_storage_devices | Names of the block devices used by swift |
| kolla_openeuler_plugin | Whether to enable the kolla plugin; when set to True, kolla supports deploying openEuler containers |

Create an openEuler 22.03-LTS x86_64 virtual machine on Huawei Cloud for deploying the `all in one` OpenStack:

```shell
# sshpass is used during `oos env create` to configure password-free access to the target VM
dnf install sshpass

oos env create -r 22.03-lts -f small -a x86 -n test-oos all_in_one
```

See `oos env create --help` for the detailed parameters.

Deploy the OpenStack `all in one` environment:

```shell
oos env setup test-oos -r train
```

See `oos env setup --help` for the detailed parameters.

Initialize the tempest environment

If you want to run tempest tests against this environment, run `oos env init`, which automatically creates the OpenStack resources required by tempest:

```shell
oos env init test-oos
```

After the command succeeds, a mytest directory is generated in the user's home directory; enter it and you can run the `tempest run` command.

If the OpenStack environment is deployed by managing existing hosts, the overall logic is the same as for Huawei Cloud above: steps 1, 3, 5 and 6 are unchanged, step 2 (configuring the Huawei Cloud provider information) is removed, and step 4 changes from creating a virtual machine on Huawei Cloud to managing the host:

```shell
# sshpass is used during `oos env create` to configure password-free access to the target host
dnf install sshpass

oos env manage -r 22.03-lts -i TARGET_MACHINE_IP -p TARGET_MACHINE_PASSWD -n test-oos
```
Replace TARGET_MACHINE_IP with the IP of the target machine and TARGET_MACHINE_PASSWD with the password of the target machine. See `oos env manage --help` for the detailed parameters.

## OpenStack-Wallaby Deployment Guide

Contents:

- OpenStack Introduction
- Conventions
- Preparing the Environment
  - Environment Configuration
  - Install SQL DataBase
  - Install RabbitMQ
  - Install Memcached
- Install OpenStack
  - Keystone Installation
  - Glance Installation
  - Placement Installation
  - Nova Installation
  - Neutron Installation
  - Cinder Installation
  - Horizon Installation
  - Tempest Installation
  - Ironic Installation
  - Kolla Installation
  - Trove Installation
  - Swift Installation
  - Cyborg Installation
  - Aodh Installation
  - Gnocchi Installation
  - Ceilometer Installation
  - Heat Installation
- Quick Deployment with the OpenStack SIG Tool oos

### OpenStack Introduction

OpenStack is both a community and a project. It provides an operating platform and toolkit for deploying clouds, offering organizations scalable and flexible cloud computing.

As an open source cloud computing management platform, OpenStack combines several major components such as nova, cinder, neutron, glance, keystone and horizon to do its work. OpenStack supports almost all types of cloud environments. The project aims to provide a cloud computing management platform that is simple to implement, massively scalable, feature rich and standardized. OpenStack provides an Infrastructure-as-a-Service (IaaS) solution through a set of complementary services, each of which offers an API for integration.

The official openEuler 22.03 LTS repository already supports the OpenStack-Wallaby release. After configuring the yum repository, users can deploy OpenStack following this document.

### Conventions

OpenStack supports multiple deployment topologies. This document covers both the `ALL in One` and `Distributed` modes, with the following conventions:

- `ALL in One` mode: ignore all possible suffixes.
- `Distributed` mode:
  - a `(CTL)` suffix means the configuration or command applies only to the `controller node`;
  - a `(CPT)` suffix means the configuration or command applies only to `compute nodes`;
  - a `(STG)` suffix means the configuration or command applies only to `storage nodes`;
  - otherwise, the configuration or command applies to both the `controller node` and `compute nodes`.

Note: the services affected by the conventions above are:

- Cinder
- Nova
- Neutron
### Preparing the Environment

#### Environment Configuration

Configure the official 22.03 LTS yum repository; the EPOL repository must be enabled to support OpenStack:

```shell
yum update
yum install openstack-release-wallaby
yum clean all && yum makecache
```

Note: if EPOL is not enabled in your yum configuration, configure it as well:

```shell
vi /etc/yum.repos.d/openEuler.repo

[EPOL]
name=EPOL
baseurl=http://repo.openeuler.org/openEuler-22.03-LTS/EPOL/main/$basearch/
enabled=1
gpgcheck=1
gpgkey=http://repo.openeuler.org/openEuler-22.03-LTS/OS/$basearch/RPM-GPG-KEY-openEuler
EOF
```

Modify the host name and mapping

Set the host name of each node:

```shell
hostnamectl set-hostname controller                                 (CTL)
hostnamectl set-hostname compute                                    (CPT)
```

Assuming the IP of the controller node is 10.0.0.11 and the IP of the compute node (if any) is 10.0.0.12, add the following to /etc/hosts:

```shell
10.0.0.11   controller
10.0.0.12   compute
```

#### Install SQL DataBase

Install the packages:

```shell
yum install mariadb mariadb-server python3-PyMySQL
```

Create and edit the /etc/my.cnf.d/openstack.cnf file:

```shell
vim /etc/my.cnf.d/openstack.cnf

[mysqld]
bind-address = 10.0.0.11
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
```

Note: set `bind-address` to the management IP address of the controller node.

Start the DataBase service and enable it at boot:

```shell
systemctl enable mariadb.service
systemctl start mariadb.service
```

Configure the default password of the DataBase (optional):

```shell
mysql_secure_installation
```

Note: follow the prompts.

#### Install RabbitMQ

Install the package:

```shell
yum install rabbitmq-server
```

Start the RabbitMQ service and enable it at boot:

```shell
systemctl enable rabbitmq-server.service
systemctl start rabbitmq-server.service
```

Add the openstack user:

```shell
rabbitmqctl add_user openstack RABBIT_PASS
```

Note: replace `RABBIT_PASS` to set the password for the openstack user.

Set the permissions of the openstack user to allow configuration, write and read access:

```shell
rabbitmqctl set_permissions openstack ".*" ".*" ".*"
```

#### Install Memcached

Install the dependency packages:

```shell
yum install memcached python3-memcached
```

Edit the /etc/sysconfig/memcached file:

```shell
vim /etc/sysconfig/memcached

OPTIONS="-l 127.0.0.1,::1,controller"
```

Start the Memcached service and enable it at boot:

```shell
systemctl enable memcached.service
systemctl start memcached.service
```

Note: after the service starts, you can run `memcached-tool controller stats` to make sure it started normally and the service is available; `controller` can be replaced with the management IP address of the controller node.
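Before moving on to the OpenStack services, it can be worth confirming that the three base services respond. A small optional verification sketch using standard MariaDB, RabbitMQ and memcached tooling (nothing here is required by the guide itself):

```shell
# MariaDB accepts connections and lists its databases.
mysql -u root -p -e "SHOW DATABASES;"
# The openstack RabbitMQ user exists and has the permissions set above.
rabbitmqctl list_users
rabbitmqctl list_permissions
# Memcached answers on the controller address.
memcached-tool controller stats
```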
## Install OpenStack

### Keystone Installation

Create the keystone database and grant privileges:

```shell
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE keystone;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
IDENTIFIED BY 'KEYSTONE_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
IDENTIFIED BY 'KEYSTONE_DBPASS';
MariaDB [(none)]> exit
```

Note: replace KEYSTONE_DBPASS with the password for the Keystone database.

Install the packages:

```shell
yum install openstack-keystone httpd mod_wsgi
```

Configure Keystone:

```shell
vim /etc/keystone/keystone.conf

[database]
connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone

[token]
provider = fernet
```

Explanation: the [database] section configures the database entry; the [token] section configures the token provider.

Note: replace KEYSTONE_DBPASS with the Keystone database password.

Sync the database:

```shell
su -s /bin/sh -c "keystone-manage db_sync" keystone
```

Initialize the Fernet key repositories:

```shell
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
```

Bootstrap the service:

```shell
keystone-manage bootstrap --bootstrap-password ADMIN_PASS \
--bootstrap-admin-url http://controller:5000/v3/ \
--bootstrap-internal-url http://controller:5000/v3/ \
--bootstrap-public-url http://controller:5000/v3/ \
--bootstrap-region-id RegionOne
```

Note: replace ADMIN_PASS with the password for the admin user.

Configure the Apache HTTP server:

```shell
vim /etc/httpd/conf/httpd.conf

ServerName controller
```

```shell
ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
```

Explanation: the ServerName entry refers to the controller node.

Note: if the ServerName entry does not exist, create it.

Start the Apache HTTP service:

```shell
systemctl enable httpd.service
systemctl start httpd.service
```

Create an environment-variable file:

```shell
cat << EOF >> ~/.admin-openrc
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
EOF
```

Note: replace ADMIN_PASS with the admin user's password.

Create domains, projects, users, and roles in turn. python3-openstackclient must be installed first:

```shell
yum install python3-openstackclient
```

Load the environment variables:

```shell
source ~/.admin-openrc
```

Create the project `service`; the domain `default` was already created by `keystone-manage bootstrap`:

```shell
openstack domain create --description "An Example Domain" example
openstack project create --domain default --description "Service Project" service
```

Create a (non-admin) project `myproject`, a user `myuser`, and a role `myrole`, then add the role `myrole` to `myproject` and `myuser`:

```shell
openstack project create --domain default --description "Demo Project" myproject
openstack user create --domain default --password-prompt myuser
openstack role create myrole
openstack role add --project myproject --user myuser myrole
```
Verification. Unset the temporary environment variables OS_AUTH_URL and OS_PASSWORD:

```shell
source ~/.admin-openrc
unset OS_AUTH_URL OS_PASSWORD
```

Request a token for the admin user:

```shell
openstack --os-auth-url http://controller:5000/v3 \
--os-project-domain-name Default --os-user-domain-name Default \
--os-project-name admin --os-username admin token issue
```

Request a token for the myuser user:

```shell
openstack --os-auth-url http://controller:5000/v3 \
--os-project-domain-name Default --os-user-domain-name Default \
--os-project-name myproject --os-username myuser token issue
```

### Glance Installation

Create the database, service credentials, and API endpoints.

Create the database:

```shell
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE glance;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
IDENTIFIED BY 'GLANCE_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
IDENTIFIED BY 'GLANCE_DBPASS';
MariaDB [(none)]> exit
```

Note: replace GLANCE_DBPASS with the password for the glance database.

Create the service credentials:

```shell
source ~/.admin-openrc

openstack user create --domain default --password-prompt glance
openstack role add --project service --user glance admin
openstack service create --name glance --description "OpenStack Image" image
```

Create the Image service API endpoints:

```shell
openstack endpoint create --region RegionOne image public http://controller:9292
openstack endpoint create --region RegionOne image internal http://controller:9292
openstack endpoint create --region RegionOne image admin http://controller:9292
```

Install the packages:

```shell
yum install openstack-glance
```

Configure Glance:

```shell
vim /etc/glance/glance-api.conf

[database]
connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = GLANCE_PASS

[paste_deploy]
flavor = keystone

[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
```

Explanation: the [database] section configures the database entry; the [keystone_authtoken] and [paste_deploy] sections configure the Identity service entry; the [glance_store] section configures the local filesystem store and the location of image files.

Note: replace GLANCE_DBPASS with the glance database password and GLANCE_PASS with the glance user's password.

Sync the database:

```shell
su -s /bin/sh -c "glance-manage db_sync" glance
```

Start the service:

```shell
systemctl enable openstack-glance-api.service
systemctl start openstack-glance-api.service
```

Verification. Download an image:

```shell
source ~/.admin-openrc
wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
```

Note: if your environment is a Kunpeng (aarch64) machine, download the aarch64 image instead; the image cirros-0.5.2-aarch64-disk.img has been tested.

Upload the image to the Image service:

```shell
openstack image create --disk-format qcow2 --container-format bare \
--file cirros-0.4.0-x86_64-disk.img --public cirros
```
Confirm the image upload and validate its attributes:

```shell
openstack image list
```

### Placement Installation

Create the database, service credentials, and API endpoints.

Create the database: access the database as the root user, create the placement database, and grant privileges:

```shell
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE placement;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' \
IDENTIFIED BY 'PLACEMENT_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' \
IDENTIFIED BY 'PLACEMENT_DBPASS';
MariaDB [(none)]> exit
```

Note: replace PLACEMENT_DBPASS with the password for the placement database.

```shell
source ~/.admin-openrc
```

Run the following commands to create the placement service credentials, create the placement user, and add the admin role to the placement user.

Create the Placement API service:

```shell
openstack user create --domain default --password-prompt placement
openstack role add --project service --user placement admin
openstack service create --name placement --description "Placement API" placement
```

Create the Placement service API endpoints:

```shell
openstack endpoint create --region RegionOne placement public http://controller:8778
openstack endpoint create --region RegionOne placement internal http://controller:8778
openstack endpoint create --region RegionOne placement admin http://controller:8778
```

Install and configure.

Install the packages:

```shell
yum install openstack-placement-api
```

Configure Placement by editing the /etc/placement/placement.conf file: configure the database entry in the [placement_database] section, and the Identity service entry in the [api] and [keystone_authtoken] sections:

```shell
# vim /etc/placement/placement.conf

[placement_database]
# ...
connection = mysql+pymysql://placement:PLACEMENT_DBPASS@controller/placement

[api]
# ...
auth_strategy = keystone

[keystone_authtoken]
# ...
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = placement
password = PLACEMENT_PASS
```

Replace PLACEMENT_DBPASS with the placement database password and PLACEMENT_PASS with the placement user's password.

Sync the database:

```shell
su -s /bin/sh -c "placement-manage db sync" placement
```

Start the httpd service:

```shell
systemctl restart httpd
```

Verification. Run a status check:

```shell
source ~/.admin-openrc
placement-status upgrade check
```
Install osc-placement and list the available resource classes and traits:

```shell
yum install python3-osc-placement

openstack --os-placement-api-version 1.2 resource class list --sort-column name
openstack --os-placement-api-version 1.6 trait list --sort-column name
```

### Nova Installation

Create the database, service credentials, and API endpoints.

Create the databases:

```shell
mysql -u root -p    (CTL)

MariaDB [(none)]> CREATE DATABASE nova_api;
MariaDB [(none)]> CREATE DATABASE nova;
MariaDB [(none)]> CREATE DATABASE nova_cell0;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> exit
```

Note: replace NOVA_DBPASS with the password for the nova databases.

```shell
source ~/.admin-openrc    (CTL)
```

Create the nova service credentials:

```shell
openstack user create --domain default --password-prompt nova    (CTL)
openstack role add --project service --user nova admin    (CTL)
openstack service create --name nova --description "OpenStack Compute" compute    (CTL)
```

Create the nova API endpoints:

```shell
openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1    (CTL)
openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1    (CTL)
openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1    (CTL)
```

Install the packages:

```shell
yum install openstack-nova-api openstack-nova-conductor \    (CTL)
            openstack-nova-novncproxy openstack-nova-scheduler

yum install openstack-nova-compute    (CPT)
```

Note: on arm64 machines, also run the following command:

```shell
yum install edk2-aarch64    (CPT)
```

Configure Nova:

```shell
vim /etc/nova/nova.conf

[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
my_ip = 10.0.0.11
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver
compute_driver = libvirt.LibvirtDriver    (CPT)
instances_path = /var/lib/nova/instances/    (CPT)
lock_path = /var/lib/nova/tmp    (CPT)

[api_database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api    (CTL)

[database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova    (CTL)

[api]
auth_strategy = keystone

[keystone_authtoken]
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = NOVA_PASS

[vnc]
enabled = true
server_listen = $my_ip
server_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html    (CPT)

[libvirt]
virt_type = qemu    (CPT)
cpu_mode = custom    (CPT)
cpu_model = cortex-a72    (CPT)

[glance]
api_servers = http://controller:9292

[oslo_concurrency]
lock_path = /var/lib/nova/tmp    (CTL)

[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = PLACEMENT_PASS

[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
service_metadata_proxy = true    (CTL)
metadata_proxy_shared_secret = METADATA_SECRET    (CTL)
```
Explanation:

- [DEFAULT] section: enable the compute and metadata APIs, configure the RabbitMQ message queue entry, set my_ip, and enable the Neutron network service;
- [api_database] and [database] sections: configure the database entries;
- [api] and [keystone_authtoken] sections: configure the Identity service entry;
- [vnc] section: enable and configure the remote console entry;
- [glance] section: configure the Image service API address;
- [oslo_concurrency] section: configure the lock path;
- [placement] section: configure the Placement service entry.

Note:

- replace RABBIT_PASS with the password of the openstack account in RabbitMQ;
- set my_ip to the management IP address of the controller node;
- replace NOVA_DBPASS with the nova database password;
- replace NOVA_PASS with the nova user's password;
- replace PLACEMENT_PASS with the placement user's password;
- replace NEUTRON_PASS with the neutron user's password;
- replace METADATA_SECRET with a suitable metadata proxy secret.

Additionally, check whether hardware acceleration for virtual machines is supported (x86 architecture):

```shell
egrep -c '(vmx|svm)' /proc/cpuinfo    (CPT)
```

If the command returns 0, hardware acceleration is not supported and libvirt must be configured to use QEMU instead of KVM:

```shell
vim /etc/nova/nova.conf    (CPT)

[libvirt]
virt_type = qemu
```

If it returns 1 or more, hardware acceleration is supported and no extra configuration is required.

Note: on arm64 machines, also do the following:

```shell
vim /etc/libvirt/qemu.conf

nvram = ["/usr/share/AAVMF/AAVMF_CODE.fd: \
          /usr/share/AAVMF/AAVMF_VARS.fd", \
         "/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw: \
          /usr/share/edk2/aarch64/vars-template-pflash.raw"]
```

```shell
vim /etc/qemu/firmware/edk2-aarch64.json    (CPT)

{
    "description": "UEFI firmware for ARM64 virtual machines",
    "interface-types": [
        "uefi"
    ],
    "mapping": {
        "device": "flash",
        "executable": {
            "filename": "/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw",
            "format": "raw"
        },
        "nvram-template": {
            "filename": "/usr/share/edk2/aarch64/vars-template-pflash.raw",
            "format": "raw"
        }
    },
    "targets": [
        {
            "architecture": "aarch64",
            "machines": [
                "virt-*"
            ]
        }
    ],
    "features": [],
    "tags": []
}
```

Sync the databases.

Sync the nova-api database:

```shell
su -s /bin/sh -c "nova-manage api_db sync" nova    (CTL)
```

Register the cell0 database:

```shell
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova    (CTL)
```

Create the cell1 cell:

```shell
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova    (CTL)
```
Sync the nova database:

```shell
su -s /bin/sh -c "nova-manage db sync" nova    (CTL)
```

Verify that cell0 and cell1 are registered correctly:

```shell
su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova    (CTL)
```

Add the compute node to the OpenStack cluster:

```shell
su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova    (CPT)
```

Start the services:

```shell
systemctl enable \    (CTL)
    openstack-nova-api.service \
    openstack-nova-scheduler.service \
    openstack-nova-conductor.service \
    openstack-nova-novncproxy.service
systemctl start \    (CTL)
    openstack-nova-api.service \
    openstack-nova-scheduler.service \
    openstack-nova-conductor.service \
    openstack-nova-novncproxy.service

systemctl enable libvirtd.service openstack-nova-compute.service    (CPT)
systemctl start libvirtd.service openstack-nova-compute.service    (CPT)
```

Verification:

```shell
source ~/.admin-openrc    (CTL)
```

List the service components to verify that every process started and registered successfully:

```shell
openstack compute service list    (CTL)
```

List the API endpoints in the Identity service to verify connectivity with it:

```shell
openstack catalog list    (CTL)
```

List the images in the Image service to verify connectivity with it:

```shell
openstack image list    (CTL)
```

Check that the cells are working and that the other prerequisites are in place:

```shell
nova-status upgrade check    (CTL)
```

### Neutron Installation

Create the database, service credentials, and API endpoints.

Create the database:

```shell
mysql -u root -p    (CTL)

MariaDB [(none)]> CREATE DATABASE neutron;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
IDENTIFIED BY 'NEUTRON_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
IDENTIFIED BY 'NEUTRON_DBPASS';
MariaDB [(none)]> exit
```

Note: replace NEUTRON_DBPASS with the password for the neutron database.

```shell
source ~/.admin-openrc    (CTL)
```

Create the neutron service credentials:

```shell
openstack user create --domain default --password-prompt neutron    (CTL)
openstack role add --project service --user neutron admin    (CTL)
openstack service create --name neutron --description "OpenStack Networking" network    (CTL)
```

Create the Networking service API endpoints:

```shell
openstack endpoint create --region RegionOne network public http://controller:9696    (CTL)
openstack endpoint create --region RegionOne network internal http://controller:9696    (CTL)
openstack endpoint create --region RegionOne network admin http://controller:9696    (CTL)
```

Install the packages:

```shell
yum install openstack-neutron openstack-neutron-linuxbridge ebtables ipset \    (CTL)
            openstack-neutron-ml2

yum install openstack-neutron-linuxbridge ebtables ipset    (CPT)
```

Configure Neutron.

Main configuration:

```shell
vim /etc/neutron/neutron.conf

[database]
connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron    (CTL)

[DEFAULT]
core_plugin = ml2    (CTL)
service_plugins = router    (CTL)
allow_overlapping_ips = true    (CTL)
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = true    (CTL)
notify_nova_on_port_data_changes = true    (CTL)
api_workers = 3    (CTL)

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = neutron
password = NEUTRON_PASS

[nova]
auth_url = http://controller:5000    (CTL)
auth_type = password    (CTL)
project_domain_name = Default    (CTL)
user_domain_name = Default    (CTL)
region_name = RegionOne    (CTL)
project_name = service    (CTL)
username = nova    (CTL)
password = NOVA_PASS    (CTL)

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
```
Explanation:

- [database] section: configure the database entry;
- [DEFAULT] section: enable the ml2 and router plugins, allow overlapping IP addresses, and configure the RabbitMQ message queue entry;
- [keystone_authtoken] section: configure the Identity service entry;
- [nova] section: configure Networking to notify Compute of network topology changes;
- [oslo_concurrency] section: configure the lock path.

Note:

- replace NEUTRON_DBPASS with the neutron database password;
- replace RABBIT_PASS with the password of the openstack account in RabbitMQ;
- replace NEUTRON_PASS with the neutron user's password;
- replace NOVA_PASS with the nova user's password.

Configure the ML2 plugin:

```shell
vim /etc/neutron/plugins/ml2/ml2_conf.ini

[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security

[ml2_type_flat]
flat_networks = provider

[ml2_type_vxlan]
vni_ranges = 1:1000

[securitygroup]
enable_ipset = true
```

Create a symbolic link for /etc/neutron/plugin.ini:

```shell
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
```

Note:

- [ml2] section: enable flat, vlan, and vxlan networks, enable the linuxbridge and l2population mechanisms, and enable the port security extension driver;
- [ml2_type_flat] section: configure the flat network as the provider virtual network;
- [ml2_type_vxlan] section: configure the VXLAN network identifier range;
- [securitygroup] section: allow ipset.

Additional note: the layer-2 configuration can be adjusted according to your needs; this document uses a provider network with linuxbridge.

Configure the Linux bridge agent:

```shell
vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini

[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME

[vxlan]
enable_vxlan = true
local_ip = OVERLAY_INTERFACE_IP_ADDRESS
l2_population = true

[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
```

Explanation:

- [linux_bridge] section: map the provider virtual network to a physical network interface;
- [vxlan] section: enable the VXLAN overlay network, configure the IP address of the physical interface that handles the overlay traffic, and enable layer-2 population;
- [securitygroup] section: allow security groups and configure the linux bridge iptables firewall driver.
Note: replace PROVIDER_INTERFACE_NAME with the physical network interface and OVERLAY_INTERFACE_IP_ADDRESS with the management IP address of the controller node.

Configure the Layer-3 agent:

```shell
vim /etc/neutron/l3_agent.ini    (CTL)

[DEFAULT]
interface_driver = linuxbridge
```

Explanation: in the [DEFAULT] section, set the interface driver to linuxbridge.

Configure the DHCP agent:

```shell
vim /etc/neutron/dhcp_agent.ini    (CTL)

[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
```

Explanation: the [DEFAULT] section configures the linuxbridge interface driver and the Dnsmasq DHCP driver, and enables isolated metadata.

Configure the metadata agent:

```shell
vim /etc/neutron/metadata_agent.ini    (CTL)

[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = METADATA_SECRET
```

Explanation: the [DEFAULT] section configures the metadata host and the shared secret.

Note: replace METADATA_SECRET with a suitable metadata proxy secret.

Configure Nova:

```shell
vim /etc/nova/nova.conf

[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = Default
user_domain_name = Default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
service_metadata_proxy = true    (CTL)
metadata_proxy_shared_secret = METADATA_SECRET    (CTL)
```

Explanation: the [neutron] section configures the access parameters, enables the metadata proxy, and sets the secret.

Note: replace NEUTRON_PASS with the neutron user's password and METADATA_SECRET with a suitable metadata proxy secret.

Sync the database:

```shell
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
--config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
```

Restart the compute API service:

```shell
systemctl restart openstack-nova-api.service
```

Start the network services:

```shell
systemctl enable neutron-server.service neutron-linuxbridge-agent.service \    (CTL)
                 neutron-dhcp-agent.service neutron-metadata-agent.service
systemctl enable neutron-l3-agent.service
systemctl restart openstack-nova-api.service neutron-server.service \    (CTL)
                  neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
                  neutron-metadata-agent.service neutron-l3-agent.service

systemctl enable neutron-linuxbridge-agent.service    (CPT)
systemctl restart neutron-linuxbridge-agent.service openstack-nova-compute.service    (CPT)
```

Verification. Verify that the neutron agents started successfully:

```shell
openstack network agent list
```

### Cinder Installation

Create the database, service credentials, and API endpoints.

Create the database:

```shell
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE cinder;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \
IDENTIFIED BY 'CINDER_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \
IDENTIFIED BY 'CINDER_DBPASS';
MariaDB [(none)]> exit
```

Note: replace CINDER_DBPASS with the password for the cinder database.

```shell
source ~/.admin-openrc
```

Create the cinder service credentials:

```shell
openstack user create --domain default --password-prompt cinder
openstack role add --project service --user cinder admin
```
```shell
openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
```

Create the Block Storage service API endpoints:

```shell
openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s
```

Install the packages:

```shell
yum install openstack-cinder-api openstack-cinder-scheduler    (CTL)

yum install lvm2 device-mapper-persistent-data scsi-target-utils rpcbind nfs-utils \    (STG)
            openstack-cinder-volume openstack-cinder-backup
```

Prepare the storage device; the following is only an example:

```shell
pvcreate /dev/vdb
vgcreate cinder-volumes /dev/vdb

vim /etc/lvm/lvm.conf

devices {
    ...
    filter = [ "a/vdb/", "r/.*/"]
```

Explanation: in the devices section, add a filter that accepts the /dev/vdb device and rejects all other devices.

Prepare NFS:

```shell
mkdir -p /root/cinder/backup

cat << EOF >> /etc/exports
/root/cinder/backup 192.168.1.0/24(rw,sync,no_root_squash,no_all_squash)
EOF
```

Configure Cinder:

```shell
vim /etc/cinder/cinder.conf

[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone
my_ip = 10.0.0.11
enabled_backends = lvm    (STG)
backup_driver = cinder.backup.drivers.nfs.NFSBackupDriver    (STG)
backup_share = HOST:PATH    (STG)

[database]
connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = cinder
password = CINDER_PASS

[oslo_concurrency]
lock_path = /var/lib/cinder/tmp

[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver    (STG)
volume_group = cinder-volumes    (STG)
iscsi_protocol = iscsi    (STG)
iscsi_helper = tgtadm    (STG)
```

Explanation:

- [database] section: configure the database entry;
- [DEFAULT] section: configure the RabbitMQ message queue entry and my_ip;
- [keystone_authtoken] section: configure the Identity service entry;
- [oslo_concurrency] section: configure the lock path.

Note:

- replace CINDER_DBPASS with the cinder database password;
- replace RABBIT_PASS with the password of the openstack account in RabbitMQ;
- set my_ip to the management IP address of the controller node;
- replace CINDER_PASS with the cinder user's password;
- replace HOST:PATH with the NFS host IP and shared path (a short example follows the Nova configuration below).

Sync the database:

```shell
su -s /bin/sh -c "cinder-manage db sync" cinder    (CTL)
```

Configure Nova:

```shell
vim /etc/nova/nova.conf    (CTL)

[cinder]
os_region_name = RegionOne
```
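As a concrete illustration of the HOST:PATH placeholder, the sketch below reuses the values from the /etc/exports example above; the IP 192.168.1.10 is only an assumed address for the storage node running the NFS server, not a required value:

```shell
# Activate the export added to /etc/exports and confirm it is visible
exportfs -ar
showmount -e localhost

# Corresponding value in /etc/cinder/cinder.conf (assumed addresses):
# backup_share = 192.168.1.10:/root/cinder/backup
```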
Restart the compute API service:

```shell
systemctl restart openstack-nova-api.service
```

Start the Cinder services:

```shell
systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service    (CTL)
systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service    (CTL)

systemctl enable rpcbind.service nfs-server.service tgtd.service iscsid.service \    (STG)
                 openstack-cinder-volume.service \
                 openstack-cinder-backup.service
systemctl start rpcbind.service nfs-server.service tgtd.service iscsid.service \    (STG)
                openstack-cinder-volume.service \
                openstack-cinder-backup.service
```

Note: when Cinder attaches volumes via tgtadm, edit /etc/tgt/tgtd.conf with the following content so that tgtd can discover the cinder-volume iSCSI targets:

```
include /var/lib/cinder/volumes/*
```

Verification:

```shell
source ~/.admin-openrc
openstack volume service list
```

### Horizon Installation

Install the packages:

```shell
yum install openstack-dashboard
```

Modify the variables in the configuration file:

```shell
vim /etc/openstack-dashboard/local_settings

OPENSTACK_HOST = "controller"
ALLOWED_HOSTS = ['*', ]

SESSION_ENGINE = 'django.contrib.sessions.backends.cache'

CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'controller:11211',
    }
}

OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST

OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True

OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"

OPENSTACK_KEYSTONE_DEFAULT_ROLE = "member"

WEBROOT = '/dashboard'
POLICY_FILES_PATH = "/etc/openstack-dashboard"

OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 3,
}
```

Restart the httpd service:

```shell
systemctl restart httpd.service memcached.service
```

Verification: open a browser, go to http://HOSTIP/dashboard/, and log in to Horizon.

Note: replace HOSTIP with the management-plane IP address of the controller node.

### Tempest Installation

Tempest is the OpenStack integration test suite. It is recommended if you want to run comprehensive automated functional tests against the installed OpenStack environment; otherwise it can be skipped.

Install Tempest:

```shell
yum install openstack-tempest
```

Initialize a working directory:

```shell
tempest init mytest
```

Edit the configuration file:

```shell
cd mytest
vi etc/tempest.conf
```

tempest.conf must describe the current OpenStack environment; refer to the official sample for details (a minimal sketch is shown at the end of this section).

Run the tests:

```shell
tempest run
```

Install Tempest extensions (optional). The OpenStack services also ship their own Tempest test packages, which can be installed to extend the test coverage. In Wallaby, extension tests for Cinder, Glance, Keystone, Ironic, and Trove are provided and can be installed with:

```shell
yum install python3-cinder-tempest-plugin python3-glance-tempest-plugin python3-ironic-tempest-plugin python3-keystone-tempest-plugin python3-trove-tempest-plugin
```
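The exact contents of tempest.conf depend on your deployment; the following is only a minimal sketch of the kind of options it usually carries. Section and option names follow upstream Tempest, while the credential values are assumptions that must match the environment built above:

```
# etc/tempest.conf -- minimal sketch, values are placeholders
[auth]
admin_username = admin
admin_password = ADMIN_PASS
admin_project_name = admin
admin_domain_name = Default

[identity]
uri_v3 = http://controller:5000/v3

[compute]
image_ref = <cirros image UUID from "openstack image list">
flavor_ref = <flavor ID from "openstack flavor list">
```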
### Ironic Installation

Ironic is the OpenStack bare-metal service. It is recommended if you need bare-metal provisioning; otherwise it can be skipped.

Set up the database. The Bare Metal service stores its information in a database. Create an ironic database that the ironic user can access, replacing IRONIC_DBPASSWORD with a suitable password:

```shell
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE ironic CHARACTER SET utf8;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'localhost' \
IDENTIFIED BY 'IRONIC_DBPASSWORD';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'%' \
IDENTIFIED BY 'IRONIC_DBPASSWORD';
```

Create the service user credentials.

1. Create the Bare Metal service users:

```shell
openstack user create --password IRONIC_PASSWORD \
    --email ironic@example.com ironic
openstack role add --project service --user ironic admin
openstack service create --name ironic --description "Ironic baremetal provisioning service" baremetal

openstack service create --name ironic-inspector --description "Ironic inspector baremetal provisioning service" baremetal-introspection
openstack user create --password IRONIC_INSPECTOR_PASSWORD --email ironic_inspector@example.com ironic_inspector
openstack role add --project service --user ironic-inspector admin
```

2. Create the Bare Metal service endpoints:

```shell
openstack endpoint create --region RegionOne baremetal admin http://$IRONIC_NODE:6385
openstack endpoint create --region RegionOne baremetal public http://$IRONIC_NODE:6385
openstack endpoint create --region RegionOne baremetal internal http://$IRONIC_NODE:6385
openstack endpoint create --region RegionOne baremetal-introspection internal http://172.20.19.13:5050/v1
openstack endpoint create --region RegionOne baremetal-introspection public http://172.20.19.13:5050/v1
openstack endpoint create --region RegionOne baremetal-introspection admin http://172.20.19.13:5050/v1
```

Configure the ironic-api service. The configuration file path is /etc/ironic/ironic.conf.

1. Configure the location of the database with the connection option, as shown below, replacing IRONIC_DBPASSWORD with the ironic user's password and DB_IP with the IP address of the DB server:

```
[database]

# The SQLAlchemy connection string used to connect to the
# database (string value)
connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic
```

2. Configure the ironic-api service to use the RabbitMQ message broker with the following option, replacing RPC_* with the RabbitMQ address and credentials:

```
[DEFAULT]

# A URL representing the messaging driver to use and its full
# configuration. (string value)
transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
```
You may also replace RabbitMQ with json-rpc.

3. Configure the ironic-api service to use the Identity service credentials, replacing PUBLIC_IDENTITY_IP with the public IP of the Identity server, PRIVATE_IDENTITY_IP with the private IP of the Identity server, and IRONIC_PASSWORD with the ironic user's password in the Identity service:

```
[DEFAULT]

# Authentication strategy used by ironic-api: one of
# "keystone" or "noauth". "noauth" should not be used in a
# production environment because all authentication will be
# disabled. (string value)
auth_strategy=keystone
host = controller
memcache_servers = controller:11211
enabled_network_interfaces = flat,noop,neutron
default_network_interface = noop
transport_url = rabbit://openstack:RABBITPASSWD@controller:5672/
enabled_hardware_types = ipmi
enabled_boot_interfaces = pxe
enabled_deploy_interfaces = direct
default_deploy_interface = direct
enabled_inspect_interfaces = inspector
enabled_management_interfaces = ipmitool
enabled_power_interfaces = ipmitool
enabled_rescue_interfaces = no-rescue,agent
isolinux_bin = /usr/share/syslinux/isolinux.bin
logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s

[keystone_authtoken]
# Authentication type to load (string value)
auth_type=password
# Complete public Identity API endpoint (string value)
www_authenticate_uri=http://PUBLIC_IDENTITY_IP:5000
# Complete admin Identity API endpoint. (string value)
auth_url=http://PRIVATE_IDENTITY_IP:5000
# Service username. (string value)
username=ironic
# Service account password. (string value)
password=IRONIC_PASSWORD
# Service tenant name. (string value)
project_name=service
# Domain name containing project (string value)
project_domain_name=Default
# User's domain name (string value)
user_domain_name=Default

[agent]
deploy_logs_collect = always
deploy_logs_local_path = /var/log/ironic/deploy
deploy_logs_storage_backend = local
image_download_source = http
stream_raw_images = false
force_raw_images = false
verify_ca = False

[oslo_concurrency]

[oslo_messaging_notifications]
transport_url = rabbit://openstack:123456@172.20.19.25:5672/
topics = notifications
driver = messagingv2

[oslo_messaging_rabbit]
amqp_durable_queues = True
rabbit_ha_queues = True

[pxe]
ipxe_enabled = false
pxe_append_params = nofb nomodeset vga=normal coreos.autologin ipa-insecure=1
image_cache_size = 204800
tftp_root=/var/lib/tftpboot/cephfs/
tftp_master_path=/var/lib/tftpboot/cephfs/master_images

[dhcp]
dhcp_provider = none
```

4. Create the Bare Metal service database tables:

```shell
ironic-dbsync --config-file /etc/ironic/ironic.conf create_schema
```

5. Restart the ironic-api service:

```shell
sudo systemctl restart openstack-ironic-api
```

Configure the ironic-conductor service.

1. Replace HOST_IP with the IP of the conductor host:

```
[DEFAULT]

# IP address of this host. If unset, will determine the IP
# programmatically. If unable to do so, will use "127.0.0.1".
# (string value)
my_ip=HOST_IP
```
2. Configure the location of the database; ironic-conductor should use the same configuration as ironic-api. Replace IRONIC_DBPASSWORD with the ironic user's password and DB_IP with the IP address of the DB server:

```
[database]

# The SQLAlchemy connection string to use to connect to the
# database. (string value)
connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic
```

3. Configure the service to use the RabbitMQ message broker with the following option; ironic-conductor should use the same configuration as ironic-api. Replace RPC_* with the RabbitMQ address and credentials:

```
[DEFAULT]

# A URL representing the messaging driver to use and its full
# configuration. (string value)
transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
```

You may also replace RabbitMQ with json-rpc.

4. Configure the credentials used to access other OpenStack services.

To communicate with other OpenStack services, the Bare Metal service needs service users to authenticate with the OpenStack Identity service when making requests to them. The credentials of these users must be configured in each configuration section associated with the corresponding service:

- [neutron] - access the OpenStack Networking service
- [glance] - access the OpenStack Image service
- [swift] - access the OpenStack Object Storage service
- [cinder] - access the OpenStack Block Storage service
- [inspector] - access the OpenStack bare-metal introspection service
- [service_catalog] - a special entry used to store the credentials the Bare Metal service uses to discover its own API URL endpoint as registered in the OpenStack Identity service catalog

For simplicity, the same service user can be used for all services. For backward compatibility it should be the same user configured in the [keystone_authtoken] section of the ironic-api service, but this is not mandatory; a different service user can be created and configured for each service.

In the following example, the credentials for accessing the OpenStack Networking service are configured so that:

- the Networking service is deployed in the Identity service region named RegionOne, with only the public endpoint interface registered in the service catalog;
- requests use a specific CA SSL certificate for HTTPS connections;
- the same service user as for the ironic-api service is used;
- the dynamic password authentication plugin discovers a suitable Identity service API version based on the other options.
```
[neutron]

# Authentication type to load (string value)
auth_type = password
# Authentication URL (string value)
auth_url=https://IDENTITY_IP:5000/
# Username (string value)
username=ironic
# User's password (string value)
password=IRONIC_PASSWORD
# Project name to scope to (string value)
project_name=service
# Domain ID containing project (string value)
project_domain_id=default
# User's domain id (string value)
user_domain_id=default
# PEM encoded Certificate Authority to use when verifying
# HTTPs connections. (string value)
cafile=/opt/stack/data/ca-bundle.pem
# The default region_name for endpoint URL discovery. (string
# value)
region_name = RegionOne
# List of interfaces, in order of preference, for endpoint
# URL. (list value)
valid_interfaces=public
```

By default, in order to communicate with another service, the Bare Metal service tries to discover a suitable endpoint for that service through the Identity service catalog. To use a different endpoint for a particular service, specify it with the endpoint_override option in the Bare Metal service configuration file:

```
[neutron]
...
endpoint_override =
```

5. Configure the allowed drivers and hardware types.

Set the hardware types allowed by the ironic-conductor service with enabled_hardware_types:

```
[DEFAULT]
enabled_hardware_types = ipmi
```

Configure the hardware interfaces:

```
enabled_boot_interfaces = pxe
enabled_deploy_interfaces = direct,iscsi
enabled_inspect_interfaces = inspector
enabled_management_interfaces = ipmitool
enabled_power_interfaces = ipmitool
```

Configure the interface defaults:

```
[DEFAULT]
default_deploy_interface = direct
default_network_interface = neutron
```

If any driver that uses Direct deploy is enabled, the Swift backend of the Image service must be installed and configured. The Ceph Object Gateway (RADOS Gateway) is also supported as an Image service backend.

6. Restart the ironic-conductor service:

```shell
sudo systemctl restart openstack-ironic-conductor
```

Configure the ironic-inspector service. The configuration file path is /etc/ironic-inspector/inspector.conf.

1. Create the database:

```shell
# mysql -u root -p

MariaDB [(none)]> CREATE DATABASE ironic_inspector CHARACTER SET utf8;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic_inspector.* TO 'ironic_inspector'@'localhost' \
IDENTIFIED BY 'IRONIC_INSPECTOR_DBPASSWORD';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic_inspector.* TO 'ironic_inspector'@'%' \
IDENTIFIED BY 'IRONIC_INSPECTOR_DBPASSWORD';
```

2. Configure the location of the database with the connection option, as shown below, replacing IRONIC_INSPECTOR_DBPASSWORD with the ironic_inspector user's password and DB_IP with the IP address of the DB server:

```
[database]
backend = sqlalchemy
connection = mysql+pymysql://ironic_inspector:IRONIC_INSPECTOR_DBPASSWORD@DB_IP/ironic_inspector
min_pool_size = 100
max_pool_size = 500
pool_timeout = 30
max_retries = 5
max_overflow = 200
db_retry_interval = 2
db_inc_retry_interval = True
db_max_retry_interval = 2
db_max_retries = 5
```
3. Configure the message queue address:

```
[DEFAULT]
transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
```

4. Set up Keystone authentication:

```
[DEFAULT]

auth_strategy = keystone
timeout = 900
rootwrap_config = /etc/ironic-inspector/rootwrap.conf
logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s
log_dir = /var/log/ironic-inspector
state_path = /var/lib/ironic-inspector
use_stderr = False

[ironic]
api_endpoint = http://IRONIC_API_HOST_ADDRRESS:6385
auth_type = password
auth_url = http://PUBLIC_IDENTITY_IP:5000
auth_strategy = keystone
ironic_url = http://IRONIC_API_HOST_ADDRRESS:6385
os_region = RegionOne
project_name = service
project_domain_name = Default
user_domain_name = Default
username = IRONIC_SERVICE_USER_NAME
password = IRONIC_SERVICE_USER_PASSWORD

[keystone_authtoken]
auth_type = password
auth_url = http://control:5000
www_authenticate_uri = http://control:5000
project_domain_name = default
user_domain_name = default
project_name = service
username = ironic_inspector
password = IRONICPASSWD
region_name = RegionOne
memcache_servers = control:11211
token_cache_time = 300

[processing]
add_ports = active
processing_hooks = $default_processing_hooks,local_link_connection,lldp_basic
ramdisk_logs_dir = /var/log/ironic-inspector/ramdisk
always_store_ramdisk_logs = true
store_data = none
power_off = false

[pxe_filter]
driver = iptables

[capabilities]
boot_mode=True
```

5. Configure the ironic-inspector dnsmasq service:

```
# Configuration file: /etc/ironic-inspector/dnsmasq.conf
port=0
interface=enp3s0                        # replace with the actual listening interface
dhcp-range=172.20.19.100,172.20.19.110  # replace with the actual DHCP address range
bind-interfaces
enable-tftp
dhcp-match=set:efi,option:client-arch,7
dhcp-match=set:efi,option:client-arch,9
dhcp-match=aarch64, option:client-arch,11
dhcp-boot=tag:aarch64,grubaa64.efi
dhcp-boot=tag:!aarch64,tag:efi,grubx64.efi
dhcp-boot=tag:!aarch64,tag:!efi,pxelinux.0
tftp-root=/tftpboot                     # replace with the actual tftpboot directory
log-facility=/var/log/dnsmasq.log
```

6. Disable DHCP on the subnet of the ironic provision network:

```shell
openstack subnet set --no-dhcp 72426e89-f552-4dc4-9ac7-c4e131ce7f3c
```

7. Initialize the database of the ironic-inspector service; run on the controller node:

```shell
ironic-inspector-dbsync --config-file /etc/ironic-inspector/inspector.conf upgrade
```

8. Start the services:

```shell
systemctl enable --now openstack-ironic-inspector.service
systemctl enable --now openstack-ironic-inspector-dnsmasq.service
```

Configure the httpd service.

Create the httpd root directory used by ironic and set its owner and group. The directory path must match the path specified by the http_root option in the [deploy] section of /etc/ironic/ironic.conf:

```shell
mkdir -p /var/lib/ironic/httproot
chown ironic.ironic /var/lib/ironic/httproot
```

Install and configure the httpd service.

Install the httpd service (skip if it is already installed):

```shell
yum install httpd -y
```

Create the file /etc/httpd/conf.d/openstack-ironic-httpd.conf with the following content:

```
Listen 8080

<VirtualHost *:8080>
    ServerName ironic.openeuler.com

    ErrorLog "/var/log/httpd/openstack-ironic-httpd-error_log"
    CustomLog "/var/log/httpd/openstack-ironic-httpd-access_log" "%h %l %u %t \"%r\" %>s %b"

    DocumentRoot "/var/lib/ironic/httproot"
    <Directory "/var/lib/ironic/httproot">
        Options Indexes FollowSymLinks
        Require all granted
    </Directory>
    LogLevel warn
    AddDefaultCharset UTF-8
    EnableSendfile on
</VirtualHost>
```
\"/var/log/httpd/openstack-ironic-httpd-access_log\" \"%h %l %u %t \\\"%r\\\" %>s %b\" DocumentRoot \"/var/lib/ironic/httproot\" Options Indexes FollowSymLinks Require all granted LogLevel warn AddDefaultCharset UTF-8 EnableSendfile on \u6ce8\u610f\u76d1\u542c\u7684\u7aef\u53e3\u8981\u548c/etc/ironic/ironic.conf\u91cc[deploy]\u9009\u9879\u4e2dhttp_url\u914d\u7f6e\u9879\u4e2d\u6307\u5b9a\u7684\u7aef\u53e3\u4e00\u81f4\u3002 \u91cd\u542fhttpd\u670d\u52a1\u3002 systemctl restart httpd deploy ramdisk\u955c\u50cf\u5236\u4f5c W\u7248\u7684ramdisk\u955c\u50cf\u652f\u6301\u901a\u8fc7ironic-python-agent\u670d\u52a1\u6216disk-image-builder\u5de5\u5177\u5236\u4f5c\uff0c\u4e5f\u53ef\u4ee5\u4f7f\u7528\u793e\u533a\u6700\u65b0\u7684ironic-python-agent-builder\u3002\u7528\u6237\u4e5f\u53ef\u4ee5\u81ea\u884c\u9009\u62e9\u5176\u4ed6\u5de5\u5177\u5236\u4f5c\u3002 \u82e5\u4f7f\u7528W\u7248\u539f\u751f\u5de5\u5177\uff0c\u5219\u9700\u8981\u5b89\u88c5\u5bf9\u5e94\u7684\u8f6f\u4ef6\u5305\u3002 yum install openstack-ironic-python-agent \u6216\u8005 yum install diskimage-builder \u5177\u4f53\u7684\u4f7f\u7528\u65b9\u6cd5\u53ef\u4ee5\u53c2\u8003 \u5b98\u65b9\u6587\u6863 \u8fd9\u91cc\u4ecb\u7ecd\u4e0b\u4f7f\u7528ironic-python-agent-builder\u6784\u5efaironic\u4f7f\u7528\u7684deploy\u955c\u50cf\u7684\u5b8c\u6574\u8fc7\u7a0b\u3002 \u5b89\u88c5 ironic-python-agent-builder 1. \u5b89\u88c5\u5de5\u5177\uff1a ```shell pip install ironic-python-agent-builder ``` 2. \u4fee\u6539\u4ee5\u4e0b\u6587\u4ef6\u4e2d\u7684python\u89e3\u91ca\u5668\uff1a ```shell /usr/bin/yum /usr/libexec/urlgrabber-ext-down ``` 3. \u5b89\u88c5\u5176\u5b83\u5fc5\u987b\u7684\u5de5\u5177\uff1a ```shell yum install git ``` \u7531\u4e8e`DIB`\u4f9d\u8d56`semanage`\u547d\u4ee4\uff0c\u6240\u4ee5\u5728\u5236\u4f5c\u955c\u50cf\u4e4b\u524d\u786e\u5b9a\u8be5\u547d\u4ee4\u662f\u5426\u53ef\u7528\uff1a`semanage --help`\uff0c\u5982\u679c\u63d0\u793a\u65e0\u6b64\u547d\u4ee4\uff0c\u5b89\u88c5\u5373\u53ef\uff1a ```shell # \u5148\u67e5\u8be2\u9700\u8981\u5b89\u88c5\u54ea\u4e2a\u5305 [root@localhost ~]# yum provides /usr/sbin/semanage \u5df2\u52a0\u8f7d\u63d2\u4ef6\uff1afastestmirror Loading mirror speeds from cached hostfile * base: mirror.vcu.edu * extras: mirror.vcu.edu * updates: mirror.math.princeton.edu policycoreutils-python-2.5-34.el7.aarch64 : SELinux policy core python utilities \u6e90 \uff1abase \u5339\u914d\u6765\u6e90\uff1a \u6587\u4ef6\u540d \uff1a/usr/sbin/semanage # \u5b89\u88c5 [root@localhost ~]# yum install policycoreutils-python ``` \u5236\u4f5c\u955c\u50cf \u5982\u679c\u662f`arm`\u67b6\u6784\uff0c\u9700\u8981\u6dfb\u52a0\uff1a ```shell export ARCH=aarch64 ``` \u57fa\u672c\u7528\u6cd5\uff1a ```shell usage: ironic-python-agent-builder [-h] [-r RELEASE] [-o OUTPUT] [-e ELEMENT] [-b BRANCH] [-v] [--extra-args EXTRA_ARGS] distribution positional arguments: distribution Distribution to use optional arguments: -h, --help show this help message and exit -r RELEASE, --release RELEASE Distribution release to use -o OUTPUT, --output OUTPUT Output base file name -e ELEMENT, --element ELEMENT Additional DIB element to use -b BRANCH, --branch BRANCH If set, override the branch that is used for ironic- python-agent and requirements -v, --verbose Enable verbose logging in diskimage-builder --extra-args EXTRA_ARGS Extra arguments to pass to diskimage-builder ``` \u4e3e\u4f8b\u8bf4\u660e\uff1a ```shell ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky ``` \u5141\u8bb8ssh\u767b\u5f55 
Initialize the environment variables, then build the image:

```shell
export DIB_DEV_USER_USERNAME=ipa
export DIB_DEV_USER_PWDLESS_SUDO=yes
export DIB_DEV_USER_PASSWORD='123'
ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky -e selinux-permissive -e devuser
```

Specify the code repository.

Initialize the corresponding environment variables, then build the image:

```shell
# Specify the repository location and version
DIB_REPOLOCATION_ironic_python_agent=git@172.20.2.149:liuzz/ironic-python-agent.git
DIB_REPOREF_ironic_python_agent=origin/develop

# Clone the code directly from gerrit
DIB_REPOLOCATION_ironic_python_agent=https://review.opendev.org/openstack/ironic-python-agent
DIB_REPOREF_ironic_python_agent=refs/changes/43/701043/1
```

Reference: [source-repositories](https://docs.openstack.org/diskimage-builder/latest/elements/source-repositories/README.html).

Specifying the repository location and version has been verified to work.

Note: the PXE configuration file template in native OpenStack does not support the arm64 architecture, so the native OpenStack code must be modified by the user.

In Wallaby, the community ironic still does not support arm64 UEFI PXE boot. The symptom is that the generated grub.cfg file (usually under /tftpboot/) has the wrong format, which makes PXE boot fail, as shown below:

The incorrectly generated configuration file:

![ironic-err](../../img/install/ironic-err.png)

As shown in the image above, the commands that look up the vmlinux and ramdisk images on the arm architecture are `linux` and `initrd` respectively; the highlighted commands in the image are the x86 UEFI PXE boot ones. You need to modify the code that generates grub.cfg yourself.

TLS errors when ironic sends status-query requests to IPA: in Wallaby, both IPA and ironic enable TLS authentication by default when sending requests to each other. Disable it as described in the official documentation:

1. Edit the ironic configuration file (/etc/ironic/ironic.conf) and add ipa-insecure=1 to the following configuration:

```
[agent]
verify_ca = False

[pxe]
pxe_append_params = nofb nomodeset vga=normal coreos.autologin ipa-insecure=1
```

2. In the ramdisk image, add the IPA configuration file /etc/ironic_python_agent/ironic_python_agent.conf (the /etc/ironic_python_agent directory must be created in advance) with the following TLS configuration:

```
[DEFAULT]
enable_auto_tls = False
```

Set the permissions:

```
chown -R ipa.ipa /etc/ironic_python_agent/
```
Modify the systemd unit file of the IPA service and add the config-file option: vim usr/lib/systemd/system/ironic-python-agent.service

```
[Unit]
Description=Ironic Python Agent
After=network-online.target

[Service]
ExecStartPre=/sbin/modprobe vfat
ExecStart=/usr/local/bin/ironic-python-agent --config-file /etc/ironic_python_agent/ironic_python_agent.conf
Restart=always
RestartSec=30s

[Install]
WantedBy=multi-user.target
```

Kolla Installation

Kolla provides production-ready containerized deployment of OpenStack services. openEuler 22.03 LTS introduces the Kolla and Kolla-ansible services. Installing Kolla is simple: just install the corresponding RPM packages.

yum install openstack-kolla openstack-kolla-ansible

After installation, the kolla-ansible, kolla-build, kolla-genpwd and kolla-mergepwd commands are available.

Trove Installation

Trove is the OpenStack Database service. It is recommended if you want to offer databases through OpenStack; otherwise it does not need to be installed.

Set up the database

The Database service stores information in a database. Create a trove database that the trove user can access, replacing TROVE_DBPASSWORD with a suitable password:

mysql -u root -p
MariaDB [(none)]> CREATE DATABASE trove CHARACTER SET utf8;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'localhost' \
IDENTIFIED BY 'TROVE_DBPASSWORD';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'%' \
IDENTIFIED BY 'TROVE_DBPASSWORD';

Create the service user credentials

1. Create the Trove service user:

openstack user create --password TROVE_PASSWORD \
--email trove@example.com trove
openstack role add --project service --user trove admin
openstack service create --name trove --description "Database service" database

Explanation: replace TROVE_PASSWORD with the password of the trove user.

2. Create the Database service endpoints:

openstack endpoint create --region RegionOne database public http://controller:8779/v1.0/%\(tenant_id\)s
openstack endpoint create --region RegionOne database internal http://controller:8779/v1.0/%\(tenant_id\)s
openstack endpoint create --region RegionOne database admin http://controller:8779/v1.0/%\(tenant_id\)s

Install and configure the Trove components

1. Install the Trove packages:

```shell
yum install openstack-trove python-troveclient
```

2. Configure trove.conf:

```shell
vim /etc/trove/trove.conf

[DEFAULT]
bind_host=TROVE_NODE_IP
log_dir = /var/log/trove
network_driver = trove.network.neutron.NeutronDriver
management_security_groups =
nova_keypair = trove-mgmt
default_datastore = mysql
taskmanager_manager = trove.taskmanager.manager.Manager
trove_api_workers = 5
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
reboot_time_out = 300
usage_timeout = 900
agent_call_high_timeout = 1200
use_syslog = False
debug = True

# Set these if using Neutron Networking
network_driver=trove.network.neutron.NeutronDriver
network_label_regex=.*
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/

[database]
connection = mysql+pymysql://trove:TROVE_DBPASS@controller/trove

[keystone_authtoken]
project_domain_name = Default
project_name = service
user_domain_name = Default
password = trove
username = trove
auth_url = http://controller:5000/v3/
auth_type = password

[service_credentials]
auth_url = http://controller:5000/v3/
region_name = RegionOne
project_name = service
password = trove
project_domain_name = Default
user_domain_name = Default
username = trove

[mariadb]
tcp_ports = 3306,4444,4567,4568

[mysql]
tcp_ports = 3306

[postgresql]
tcp_ports = 5432
```

Explanation:
- In the [DEFAULT] section, set bind_host to the IP of the node where Trove is deployed
- nova_compute_url and cinder_url are the endpoints created for Nova and Cinder in Keystone
- nova_proxy_XXX is the credential of a user that can access the Nova service; the example above uses the admin user
- transport_url is the RabbitMQ connection information; replace RABBIT_PASS with the RabbitMQ password
- connection in the [database] section is the database created for Trove in MySQL earlier
- In the Trove user credentials, replace TROVE_PASS with the actual password of the trove user

3. Configure trove-guestagent.conf:

```shell
vim /etc/trove/trove-guestagent.conf

[DEFAULT]
log_file = trove-guestagent.log
log_dir = /var/log/trove/
ignore_users = os_admin
control_exchange = trove
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
rpc_backend = rabbit
command_process_timeout = 60
use_syslog = False
debug = True

[service_credentials]
auth_url = http://controller:5000/v3
region_name = RegionOne
project_name = service
password = TROVE_PASS
project_domain_name = Default
user_domain_name = Default
username = trove

[mysql]
docker_image = your-registry/your-repo/mysql
backup_docker_image = your-registry/your-repo/db-backup-mysql:1.1.0
```

Explanation: guestagent is a standalone Trove component that must be built into the virtual machine image that Trove boots through Nova. After a database instance is created, the guestagent process starts inside it and reports heartbeats to Trove through the message queue (RabbitMQ), so the RabbitMQ user and password must be configured here.

Starting from the Victoria release, Trove uses a single unified image to run different database types; the database service itself runs in a Docker container inside the guest virtual machine.

- transport_url is the RabbitMQ connection information; replace RABBIT_PASS with the RabbitMQ password
- In the Trove user credentials, replace TROVE_PASS with the actual password of the trove user

4. Populate the Trove database tables:

```shell
su -s /bin/sh -c "trove-manage db_sync" trove
```

Finalize the installation

1. Enable the Trove services to start on boot:

```shell
systemctl enable openstack-trove-api.service \
openstack-trove-taskmanager.service \
openstack-trove-conductor.service
```

2. Start the services:

```shell
systemctl start openstack-trove-api.service \
openstack-trove-taskmanager.service \
openstack-trove-conductor.service
```

Swift Installation

Swift provides an elastic, scalable and highly available distributed object storage service, suitable for storing large amounts of unstructured data.

Create the service credentials and API endpoints.

Create the service credentials:

# Create the swift user:
openstack user create --domain default --password-prompt swift
# Add the admin role to the swift user:
openstack role add --project service --user swift admin
# Create the swift service entity:
openstack service create --name swift --description "OpenStack Object Storage" object-store

Create the swift API endpoints:

openstack endpoint create --region RegionOne object-store public http://controller:8080/v1/AUTH_%\(project_id\)s
openstack endpoint create --region RegionOne object-store internal http://controller:8080/v1/AUTH_%\(project_id\)s
openstack endpoint create --region RegionOne object-store admin http://controller:8080/v1

Install the packages:

yum install openstack-swift-proxy python3-swiftclient python3-keystoneclient python3-keystonemiddleware memcached (CTL)

Configure the proxy-server

The Swift RPM package already ships a basically usable proxy-server.conf; you only need to edit the ip and the swift password in it manually. Note: replace the password with the one you chose for the swift user in the Identity service.

Install and configure the storage nodes (STG)

Install the supporting packages:

yum install xfsprogs rsync

Format the /dev/vdb and /dev/vdc devices as XFS:

mkfs.xfs /dev/vdb
mkfs.xfs /dev/vdc

Create the mount point directory structure:

mkdir -p /srv/node/vdb
mkdir -p /srv/node/vdc

Find the UUIDs of the new partitions:

blkid

Edit the /etc/fstab file and add the following entries to it:

UUID="" /srv/node/vdb xfs noatime 0 2
UUID="" /srv/node/vdc xfs noatime 0 2

Mount the devices:

mount /srv/node/vdb
mount /srv/node/vdc

Note: if fault tolerance is not required, only one device needs to be created in the steps above, and the rsync configuration below can be skipped.

(Optional) Create or edit the /etc/rsyncd.conf file to contain the following: [DEFAULT] uid = swift gid = swift log file = /var/log/rsyncd.log pid file = /var/run/rsyncd.pid address =
MANAGEMENT_INTERFACE_IP_ADDRESS [account] max connections = 2 path = /srv/node/ read only = False lock file = /var/lock/account.lock [container] max connections = 2 path = /srv/node/ read only = False lock file = /var/lock/container.lock [object] max connections = 2 path = /srv/node/ read only = False lock file = /var/lock/object.lock \u66ff\u6362MANAGEMENT_INTERFACE_IP_ADDRESS\u4e3a\u5b58\u50a8\u8282\u70b9\u4e0a\u7ba1\u7406\u7f51\u7edc\u7684IP\u5730\u5740 \u542f\u52a8rsyncd\u670d\u52a1\u5e76\u914d\u7f6e\u5b83\u5728\u7cfb\u7edf\u542f\u52a8\u65f6\u542f\u52a8: systemctl enable rsyncd.service systemctl start rsyncd.service \u5728\u5b58\u50a8\u8282\u70b9\u5b89\u88c5\u548c\u914d\u7f6e\u7ec4\u4ef6 \uff08STG\uff09 \u5b89\u88c5\u8f6f\u4ef6\u5305: yum install openstack-swift-account openstack-swift-container openstack-swift-object \u7f16\u8f91/etc/swift\u76ee\u5f55\u7684account-server.conf\u3001container-server.conf\u548cobject-server.conf\u6587\u4ef6\uff0c\u66ff\u6362bind_ip\u4e3a\u5b58\u50a8\u8282\u70b9\u4e0a\u7ba1\u7406\u7f51\u7edc\u7684IP\u5730\u5740\u3002 \u786e\u4fdd\u6302\u8f7d\u70b9\u76ee\u5f55\u7ed3\u6784\u7684\u6b63\u786e\u6240\u6709\u6743: chown -R swift:swift /srv/node \u521b\u5efarecon\u76ee\u5f55\u5e76\u786e\u4fdd\u5176\u62e5\u6709\u6b63\u786e\u7684\u6240\u6709\u6743\uff1a mkdir -p /var/cache/swift chown -R root:swift /var/cache/swift chmod -R 775 /var/cache/swift \u521b\u5efa\u8d26\u53f7\u73af (CTL) \u5207\u6362\u5230/etc/swift\u76ee\u5f55\u3002 cd /etc/swift \u521b\u5efa\u57fa\u7840account.builder\u6587\u4ef6: swift-ring-builder account.builder create 10 1 1 \u5c06\u6bcf\u4e2a\u5b58\u50a8\u8282\u70b9\u6dfb\u52a0\u5230\u73af\u4e2d\uff1a swift-ring-builder account.builder add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6202 --device DEVICE_NAME --weight DEVICE_WEIGHT \u66ff\u6362STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS\u4e3a\u5b58\u50a8\u8282\u70b9\u4e0a\u7ba1\u7406\u7f51\u7edc\u7684IP\u5730\u5740\u3002\u66ff\u6362DEVICE_NAME\u4e3a\u540c\u4e00\u5b58\u50a8\u8282\u70b9\u4e0a\u7684\u5b58\u50a8\u8bbe\u5907\u540d\u79f0 \u6ce8\u610f *** *\u5bf9\u6bcf\u4e2a\u5b58\u50a8\u8282\u70b9\u4e0a\u7684\u6bcf\u4e2a\u5b58\u50a8\u8bbe\u5907\u91cd\u590d\u6b64\u547d\u4ee4 \u9a8c\u8bc1\u6212\u6307\u5185\u5bb9\uff1a swift-ring-builder account.builder \u91cd\u65b0\u5e73\u8861\u6212\u6307\uff1a swift-ring-builder account.builder rebalance \u521b\u5efa\u5bb9\u5668\u73af (CTL) \u5207\u6362\u5230 /etc/swift \u76ee\u5f55\u3002 \u521b\u5efa\u57fa\u7840 container.builder \u6587\u4ef6\uff1a swift-ring-builder container.builder create 10 1 1 \u5c06\u6bcf\u4e2a\u5b58\u50a8\u8282\u70b9\u6dfb\u52a0\u5230\u73af\u4e2d\uff1a swift-ring-builder container.builder \\ add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6201 \\ --device DEVICE_NAME --weight 100 \u66ff\u6362STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS\u4e3a\u5b58\u50a8\u8282\u70b9\u4e0a\u7ba1\u7406\u7f51\u7edc\u7684IP\u5730\u5740\u3002\u66ff\u6362DEVICE_NAME\u4e3a\u540c\u4e00\u5b58\u50a8\u8282\u70b9\u4e0a\u7684\u5b58\u50a8\u8bbe\u5907\u540d\u79f0 \u6ce8\u610f \u5bf9\u6bcf\u4e2a\u5b58\u50a8\u8282\u70b9\u4e0a\u7684\u6bcf\u4e2a\u5b58\u50a8\u8bbe\u5907\u91cd\u590d\u6b64\u547d\u4ee4 \u9a8c\u8bc1\u6212\u6307\u5185\u5bb9\uff1a swift-ring-builder container.builder \u91cd\u65b0\u5e73\u8861\u6212\u6307\uff1a swift-ring-builder container.builder rebalance \u521b\u5efa\u5bf9\u8c61\u73af (CTL) \u5207\u6362\u5230 /etc/swift \u76ee\u5f55\u3002 \u521b\u5efa\u57fa\u7840 object.builder \u6587\u4ef6\uff1a 
swift-ring-builder object.builder create 10 1 1 \u5c06\u6bcf\u4e2a\u5b58\u50a8\u8282\u70b9\u6dfb\u52a0\u5230\u73af\u4e2d swift-ring-builder object.builder \\ add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6200 \\ --device DEVICE_NAME --weight 100 \u66ff\u6362STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS\u4e3a\u5b58\u50a8\u8282\u70b9\u4e0a\u7ba1\u7406\u7f51\u7edc\u7684IP\u5730\u5740\u3002\u66ff\u6362DEVICE_NAME\u4e3a\u540c\u4e00\u5b58\u50a8\u8282\u70b9\u4e0a\u7684\u5b58\u50a8\u8bbe\u5907\u540d\u79f0 \u6ce8\u610f *** *\u5bf9\u6bcf\u4e2a\u5b58\u50a8\u8282\u70b9\u4e0a\u7684\u6bcf\u4e2a\u5b58\u50a8\u8bbe\u5907\u91cd\u590d\u6b64\u547d\u4ee4 \u9a8c\u8bc1\u6212\u6307\u5185\u5bb9\uff1a swift-ring-builder object.builder \u91cd\u65b0\u5e73\u8861\u6212\u6307\uff1a swift-ring-builder object.builder rebalance \u5206\u53d1\u73af\u914d\u7f6e\u6587\u4ef6\uff1a \u5c06 account.ring.gz \uff0c container.ring.gz \u4ee5\u53ca object.ring.gz \u6587\u4ef6\u590d\u5236\u5230\u6bcf\u4e2a\u5b58\u50a8\u8282\u70b9\u548c\u8fd0\u884c\u4ee3\u7406\u670d\u52a1\u7684\u4efb\u4f55\u5176\u4ed6\u8282\u70b9\u4e0a\u7684 /etc/swift \u76ee\u5f55\u3002 \u5b8c\u6210\u5b89\u88c5 \u7f16\u8f91 /etc/swift/swift.conf \u6587\u4ef6 [swift-hash] swift_hash_path_suffix = test-hash swift_hash_path_prefix = test-hash [storage-policy:0] name = Policy-0 default = yes \u7528\u552f\u4e00\u503c\u66ff\u6362 test-hash \u5c06swift.conf\u6587\u4ef6\u590d\u5236\u5230/etc/swift\u6bcf\u4e2a\u5b58\u50a8\u8282\u70b9\u548c\u8fd0\u884c\u4ee3\u7406\u670d\u52a1\u7684\u4efb\u4f55\u5176\u4ed6\u8282\u70b9\u4e0a\u7684\u76ee\u5f55\u3002 \u5728\u6240\u6709\u8282\u70b9\u4e0a\uff0c\u786e\u4fdd\u914d\u7f6e\u76ee\u5f55\u7684\u6b63\u786e\u6240\u6709\u6743\uff1a chown -R root:swift /etc/swift \u5728\u63a7\u5236\u5668\u8282\u70b9\u548c\u8fd0\u884c\u4ee3\u7406\u670d\u52a1\u7684\u4efb\u4f55\u5176\u4ed6\u8282\u70b9\u4e0a\uff0c\u542f\u52a8\u5bf9\u8c61\u5b58\u50a8\u4ee3\u7406\u670d\u52a1\u53ca\u5176\u4f9d\u8d56\u9879\uff0c\u5e76\u5c06\u5b83\u4eec\u914d\u7f6e\u4e3a\u5728\u7cfb\u7edf\u542f\u52a8\u65f6\u542f\u52a8\uff1a systemctl enable openstack-swift-proxy.service memcached.service systemctl start openstack-swift-proxy.service memcached.service \u5728\u5b58\u50a8\u8282\u70b9\u4e0a\uff0c\u542f\u52a8\u5bf9\u8c61\u5b58\u50a8\u670d\u52a1\u5e76\u5c06\u5b83\u4eec\u914d\u7f6e\u4e3a\u5728\u7cfb\u7edf\u542f\u52a8\u65f6\u542f\u52a8\uff1a systemctl enable openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service systemctl start openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service systemctl enable openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service systemctl start openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service systemctl enable openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service systemctl start openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service Cyborg \u5b89\u88c5 \u00b6 
Cyborg\u4e3aOpenStack\u63d0\u4f9b\u52a0\u901f\u5668\u8bbe\u5907\u7684\u652f\u6301\uff0c\u5305\u62ec GPU, FPGA, ASIC, NP, SoCs, NVMe/NOF SSDs, ODP, DPDK/SPDK\u7b49\u7b49\u3002 \u521d\u59cb\u5316\u5bf9\u5e94\u6570\u636e\u5e93 CREATE DATABASE cyborg; GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'localhost' IDENTIFIED BY 'CYBORG_DBPASS'; GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'%' IDENTIFIED BY 'CYBORG_DBPASS'; \u521b\u5efa\u5bf9\u5e94Keystone\u8d44\u6e90\u5bf9\u8c61 $ openstack user create --domain default --password-prompt cyborg $ openstack role add --project service --user cyborg admin $ openstack service create --name cyborg --description \"Acceleration Service\" accelerator $ openstack endpoint create --region RegionOne \\ accelerator public http://:6666/v1 $ openstack endpoint create --region RegionOne \\ accelerator internal http://:6666/v1 $ openstack endpoint create --region RegionOne \\ accelerator admin http://:6666/v1 \u5b89\u88c5Cyborg yum install openstack-cyborg \u914d\u7f6eCyborg \u4fee\u6539 /etc/cyborg/cyborg.conf [DEFAULT] transport_url = rabbit://%RABBITMQ_USER%:%RABBITMQ_PASSWORD%@%OPENSTACK_HOST_IP%:5672/ use_syslog = False state_path = /var/lib/cyborg debug = True [database] connection = mysql+pymysql://%DATABASE_USER%:%DATABASE_PASSWORD%@%OPENSTACK_HOST_IP%/cyborg [service_catalog] project_domain_id = default user_domain_id = default project_name = service password = PASSWORD username = cyborg auth_url = http://%OPENSTACK_HOST_IP%/identity auth_type = password [placement] project_domain_name = Default project_name = service user_domain_name = Default password = PASSWORD username = placement auth_url = http://%OPENSTACK_HOST_IP%/identity auth_type = password [keystone_authtoken] memcached_servers = localhost:11211 project_domain_name = Default project_name = service user_domain_name = Default password = PASSWORD username = cyborg auth_url = http://%OPENSTACK_HOST_IP%/identity auth_type = password \u81ea\u884c\u4fee\u6539\u5bf9\u5e94\u7684\u7528\u6237\u540d\u3001\u5bc6\u7801\u3001IP\u7b49\u4fe1\u606f \u540c\u6b65\u6570\u636e\u5e93\u8868\u683c cyborg-dbsync --config-file /etc/cyborg/cyborg.conf upgrade \u542f\u52a8Cyborg\u670d\u52a1 systemctl enable openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent systemctl start openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent Aodh \u5b89\u88c5 \u00b6 \u521b\u5efa\u6570\u636e\u5e93 CREATE DATABASE aodh; GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'localhost' IDENTIFIED BY 'AODH_DBPASS'; GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'%' IDENTIFIED BY 'AODH_DBPASS'; \u521b\u5efa\u5bf9\u5e94Keystone\u8d44\u6e90\u5bf9\u8c61 openstack user create --domain default --password-prompt aodh openstack role add --project service --user aodh admin openstack service create --name aodh --description \"Telemetry\" alarming openstack endpoint create --region RegionOne alarming public http://controller:8042 openstack endpoint create --region RegionOne alarming internal http://controller:8042 openstack endpoint create --region RegionOne alarming admin http://controller:8042 \u5b89\u88c5Aodh yum install openstack-aodh-api openstack-aodh-evaluator openstack-aodh-notifier openstack-aodh-listener openstack-aodh-expirer python3-aodhclient \u6ce8\u610f aodh\u4f9d\u8d56\u7684\u8f6f\u4ef6\u5305pytho3-pyparsing\u5728openEuler\u7684OS\u4ed3\u4e0d\u9002\u914d\uff0c\u9700\u8981\u8986\u76d6\u5b89\u88c5OpenStack\u5bf9\u5e94\u7248\u672c\uff0c\u53ef\u4ee5\u4f7f\u7528 yum list |grep pyparsing |grep OpenStack | awk '{print $2}' 
\u83b7\u53d6\u5bf9\u5e94\u7684\u7248\u672c VERSION,\u7136\u540e\u518d yum install -y python3-pyparsing-VERSION \u8986\u76d6\u5b89\u88c5\u9002\u914d\u7684pyparsing \u4fee\u6539\u914d\u7f6e\u6587\u4ef6 [database] connection = mysql+pymysql://aodh:AODH_DBPASS@controller/aodh [DEFAULT] transport_url = rabbit://openstack:RABBIT_PASS@controller auth_strategy = keystone [keystone_authtoken] www_authenticate_uri = http://controller:5000 auth_url = http://controller:5000 memcached_servers = controller:11211 auth_type = password project_domain_id = default user_domain_id = default project_name = service username = aodh password = AODH_PASS [service_credentials] auth_type = password auth_url = http://controller:5000/v3 project_domain_id = default user_domain_id = default project_name = service username = aodh password = AODH_PASS interface = internalURL region_name = RegionOne \u521d\u59cb\u5316\u6570\u636e\u5e93 aodh-dbsync \u542f\u52a8Aodh\u670d\u52a1 systemctl enable openstack-aodh-api.service openstack-aodh-evaluator.service openstack-aodh-notifier.service openstack-aodh-listener.service systemctl start openstack-aodh-api.service openstack-aodh-evaluator.service openstack-aodh-notifier.service openstack-aodh-listener.service Gnocchi \u5b89\u88c5 \u00b6 \u521b\u5efa\u6570\u636e\u5e93 CREATE DATABASE gnocchi; GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'localhost' IDENTIFIED BY 'GNOCCHI_DBPASS'; GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'%' IDENTIFIED BY 'GNOCCHI_DBPASS'; \u521b\u5efa\u5bf9\u5e94Keystone\u8d44\u6e90\u5bf9\u8c61 openstack user create --domain default --password-prompt gnocchi openstack role add --project service --user gnocchi admin openstack service create --name gnocchi --description \"Metric Service\" metric openstack endpoint create --region RegionOne metric public http://controller:8041 openstack endpoint create --region RegionOne metric internal http://controller:8041 openstack endpoint create --region RegionOne metric admin http://controller:8041 \u5b89\u88c5Gnocchi yum install openstack-gnocchi-api openstack-gnocchi-metricd python3-gnocchiclient \u4fee\u6539\u914d\u7f6e\u6587\u4ef6 /etc/gnocchi/gnocchi.conf [api] auth_mode = keystone port = 8041 uwsgi_mode = http-socket [keystone_authtoken] auth_type = password auth_url = http://controller:5000/v3 project_domain_name = Default user_domain_name = Default project_name = service username = gnocchi password = GNOCCHI_PASS interface = internalURL region_name = RegionOne [indexer] url = mysql+pymysql://gnocchi:GNOCCHI_DBPASS@controller/gnocchi [storage] # coordination_url is not required but specifying one will improve # performance with better workload division across workers. 
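# Assumption of this example layout (not part of the packaged defaults): the redis
# coordination backend referenced below expects a Redis instance reachable at
# controller:6379. If none is running yet, it can typically be installed from the
# distribution repositories, e.g. "yum install redis && systemctl enable --now redis";
# alternatively, omit coordination_url and gnocchi-metricd will run without a coordinator.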
coordination_url = redis://controller:6379 file_basepath = /var/lib/gnocchi driver = file \u521d\u59cb\u5316\u6570\u636e\u5e93 gnocchi-upgrade \u542f\u52a8Gnocchi\u670d\u52a1 systemctl enable openstack-gnocchi-api.service openstack-gnocchi-metricd.service systemctl start openstack-gnocchi-api.service openstack-gnocchi-metricd.service Ceilometer \u5b89\u88c5 \u00b6 \u521b\u5efa\u5bf9\u5e94Keystone\u8d44\u6e90\u5bf9\u8c61 openstack user create --domain default --password-prompt ceilometer openstack role add --project service --user ceilometer admin openstack service create --name ceilometer --description \"Telemetry\" metering \u5b89\u88c5Ceilometer yum install openstack-ceilometer-notification openstack-ceilometer-central \u4fee\u6539\u914d\u7f6e\u6587\u4ef6 /etc/ceilometer/pipeline.yaml publishers: # set address of Gnocchi # + filter out Gnocchi-related activity meters (Swift driver) # + set default archive policy - gnocchi://?filter_project=service&archive_policy=low \u4fee\u6539\u914d\u7f6e\u6587\u4ef6 /etc/ceilometer/ceilometer.conf [DEFAULT] transport_url = rabbit://openstack:RABBIT_PASS@controller [service_credentials] auth_type = password auth_url = http://controller:5000/v3 project_domain_id = default user_domain_id = default project_name = service username = ceilometer password = CEILOMETER_PASS interface = internalURL region_name = RegionOne \u521d\u59cb\u5316\u6570\u636e\u5e93 ceilometer-upgrade \u542f\u52a8Ceilometer\u670d\u52a1 systemctl enable openstack-ceilometer-notification.service openstack-ceilometer-central.service systemctl start openstack-ceilometer-notification.service openstack-ceilometer-central.service Heat \u5b89\u88c5 \u00b6 \u521b\u5efa heat \u6570\u636e\u5e93\uff0c\u5e76\u6388\u4e88 heat \u6570\u636e\u5e93\u6b63\u786e\u7684\u8bbf\u95ee\u6743\u9650\uff0c\u66ff\u6362 HEAT_DBPASS \u4e3a\u5408\u9002\u7684\u5bc6\u7801 CREATE DATABASE heat; GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'localhost' IDENTIFIED BY 'HEAT_DBPASS'; GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'%' IDENTIFIED BY 'HEAT_DBPASS'; \u521b\u5efa\u670d\u52a1\u51ed\u8bc1\uff0c\u521b\u5efa heat \u7528\u6237\uff0c\u5e76\u4e3a\u5176\u589e\u52a0 admin \u89d2\u8272 openstack user create --domain default --password-prompt heat openstack role add --project service --user heat admin \u521b\u5efa heat \u548c heat-cfn \u670d\u52a1\u53ca\u5176\u5bf9\u5e94\u7684API\u7aef\u70b9 openstack service create --name heat --description \"Orchestration\" orchestration openstack service create --name heat-cfn --description \"Orchestration\" cloudformation openstack endpoint create --region RegionOne orchestration public http://controller:8004/v1/%\\(tenant_id\\)s openstack endpoint create --region RegionOne orchestration internal http://controller:8004/v1/%\\(tenant_id\\)s openstack endpoint create --region RegionOne orchestration admin http://controller:8004/v1/%\\(tenant_id\\)s openstack endpoint create --region RegionOne cloudformation public http://controller:8000/v1 openstack endpoint create --region RegionOne cloudformation internal http://controller:8000/v1 openstack endpoint create --region RegionOne cloudformation admin http://controller:8000/v1 \u521b\u5efastack\u7ba1\u7406\u7684\u989d\u5916\u4fe1\u606f\uff0c\u5305\u62ec heat domain\u53ca\u5176\u5bf9\u5e94domain\u7684admin\u7528\u6237 heat_domain_admin \uff0c heat_stack_owner \u89d2\u8272\uff0c heat_stack_user \u89d2\u8272 openstack user create --domain heat --password-prompt heat_domain_admin openstack role add --domain heat --user-domain heat --user heat_domain_admin 
admin openstack role create heat_stack_owner openstack role create heat_stack_user \u5b89\u88c5\u8f6f\u4ef6\u5305 yum install openstack-heat-api openstack-heat-api-cfn openstack-heat-engine \u4fee\u6539\u914d\u7f6e\u6587\u4ef6 /etc/heat/heat.conf [DEFAULT] transport_url = rabbit://openstack:RABBIT_PASS@controller heat_metadata_server_url = http://controller:8000 heat_waitcondition_server_url = http://controller:8000/v1/waitcondition stack_domain_admin = heat_domain_admin stack_domain_admin_password = HEAT_DOMAIN_PASS stack_user_domain_name = heat [database] connection = mysql+pymysql://heat:HEAT_DBPASS@controller/heat [keystone_authtoken] www_authenticate_uri = http://controller:5000 auth_url = http://controller:5000 memcached_servers = controller:11211 auth_type = password project_domain_name = default user_domain_name = default project_name = service username = heat password = HEAT_PASS [trustee] auth_type = password auth_url = http://controller:5000 username = heat password = HEAT_PASS user_domain_name = default [clients_keystone] auth_uri = http://controller:5000 \u521d\u59cb\u5316 heat \u6570\u636e\u5e93\u8868 su -s /bin/sh -c \"heat-manage db_sync\" heat \u542f\u52a8\u670d\u52a1 systemctl enable openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service systemctl start openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service \u57fa\u4e8eOpenStack SIG\u5f00\u53d1\u5de5\u5177oos\u5feb\u901f\u90e8\u7f72 \u00b6 oos (openEuler OpenStack SIG)\u662fOpenStack SIG\u63d0\u4f9b\u7684\u547d\u4ee4\u884c\u5de5\u5177\u3002\u5176\u4e2d oos env \u7cfb\u5217\u547d\u4ee4\u63d0\u4f9b\u4e86\u4e00\u952e\u90e8\u7f72OpenStack \uff08 all in one \u6216\u4e09\u8282\u70b9 cluster \uff09\u7684ansible\u811a\u672c\uff0c\u7528\u6237\u53ef\u4ee5\u4f7f\u7528\u8be5\u811a\u672c\u5feb\u901f\u90e8\u7f72\u4e00\u5957\u57fa\u4e8e openEuler RPM \u7684 OpenStack \u73af\u5883\u3002 oos \u5de5\u5177\u652f\u6301\u5bf9\u63a5\u4e91provider\uff08\u76ee\u524d\u4ec5\u652f\u6301\u534e\u4e3a\u4e91provider\uff09\u548c\u4e3b\u673a\u7eb3\u7ba1\u4e24\u79cd\u65b9\u5f0f\u6765\u90e8\u7f72 OpenStack \u73af\u5883\uff0c\u4e0b\u9762\u4ee5\u5bf9\u63a5\u534e\u4e3a\u4e91\u90e8\u7f72\u4e00\u5957 all in one \u7684OpenStack\u73af\u5883\u4e3a\u4f8b\u8bf4\u660e oos \u5de5\u5177\u7684\u4f7f\u7528\u65b9\u6cd5\u3002 \u5b89\u88c5 oos \u5de5\u5177 pip install openstack-sig-tool \u914d\u7f6e\u5bf9\u63a5\u534e\u4e3a\u4e91provider\u7684\u4fe1\u606f \u6253\u5f00 /usr/local/etc/oos/oos.conf \u6587\u4ef6\uff0c\u4fee\u6539\u914d\u7f6e\u4e3a\u60a8\u62e5\u6709\u7684\u534e\u4e3a\u4e91\u8d44\u6e90\u4fe1\u606f\uff1a [huaweicloud] ak = sk = region = ap-southeast-3 root_volume_size = 100 data_volume_size = 100 security_group_name = oos image_format = openEuler-%%(release)s-%%(arch)s vpc_name = oos_vpc subnet1_name = oos_subnet1 subnet2_name = oos_subnet2 \u914d\u7f6e OpenStack \u73af\u5883\u4fe1\u606f \u6253\u5f00 /usr/local/etc/oos/oos.conf \u6587\u4ef6\uff0c\u6839\u636e\u5f53\u524d\u673a\u5668\u73af\u5883\u548c\u9700\u6c42\u4fee\u6539\u914d\u7f6e\u3002\u5185\u5bb9\u5982\u4e0b\uff1a [environment] mysql_root_password = root mysql_project_password = root rabbitmq_password = root project_identity_password = root enabled_service = keystone,neutron,cinder,placement,nova,glance,horizon,aodh,ceilometer,cyborg,gnocchi,kolla,heat,swift,trove,tempest neutron_provider_interface_name = br-ex default_ext_subnet_range = 10.100.100.0/24 default_ext_subnet_gateway = 10.100.100.1 neutron_dataplane_interface_name = eth1 
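# Assumption for the two device settings below: the deployment machine has spare,
# unformatted block devices (for example the extra data volumes created by the cloud
# provider). Adjust vdb/vdc to the actual device names on the target machine.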
cinder_block_device = vdb swift_storage_devices = vdc swift_hash_path_suffix = ash swift_hash_path_prefix = has glance_api_workers = 2 cinder_api_workers = 2 nova_api_workers = 2 nova_metadata_api_workers = 2 nova_conductor_workers = 2 nova_scheduler_workers = 2 neutron_api_workers = 2 horizon_allowed_host = * kolla_openeuler_plugin = false \u5173\u952e\u914d\u7f6e \u914d\u7f6e\u9879 \u89e3\u91ca enabled_service \u5b89\u88c5\u670d\u52a1\u5217\u8868\uff0c\u6839\u636e\u7528\u6237\u9700\u6c42\u81ea\u884c\u5220\u51cf neutron_provider_interface_name neutron L3\u7f51\u6865\u540d\u79f0 default_ext_subnet_range neutron\u79c1\u7f51IP\u6bb5 default_ext_subnet_gateway neutron\u79c1\u7f51gateway neutron_dataplane_interface_name neutron\u4f7f\u7528\u7684\u7f51\u5361\uff0c\u63a8\u8350\u4f7f\u7528\u4e00\u5f20\u65b0\u7684\u7f51\u5361\uff0c\u4ee5\u514d\u548c\u73b0\u6709\u7f51\u5361\u51b2\u7a81\uff0c\u9632\u6b62all in one\u4e3b\u673a\u65ad\u8fde\u7684\u60c5\u51b5 cinder_block_device cinder\u4f7f\u7528\u7684\u5377\u8bbe\u5907\u540d swift_storage_devices swift\u4f7f\u7528\u7684\u5377\u8bbe\u5907\u540d kolla_openeuler_plugin \u662f\u5426\u542f\u7528kolla plugin\u3002\u8bbe\u7f6e\u4e3aTrue\uff0ckolla\u5c06\u652f\u6301\u90e8\u7f72openEuler\u5bb9\u5668 \u534e\u4e3a\u4e91\u4e0a\u9762\u521b\u5efa\u4e00\u53f0openEuler 22.03-LTS\u7684x86_64\u865a\u62df\u673a\uff0c\u7528\u4e8e\u90e8\u7f72 all in one \u7684 OpenStack # sshpass\u5728`oos env create`\u8fc7\u7a0b\u4e2d\u88ab\u4f7f\u7528\uff0c\u7528\u4e8e\u914d\u7f6e\u5bf9\u76ee\u6807\u865a\u62df\u673a\u7684\u514d\u5bc6\u8bbf\u95ee dnf install sshpass oos env create -r 22.03-lts -f small -a x86 -n test-oos all_in_one \u5177\u4f53\u7684\u53c2\u6570\u53ef\u4ee5\u4f7f\u7528 oos env create --help \u547d\u4ee4\u67e5\u770b \u90e8\u7f72OpenStack all in one \u73af\u5883 oos env setup test-oos -r wallaby \u5177\u4f53\u7684\u53c2\u6570\u53ef\u4ee5\u4f7f\u7528 oos env setup --help \u547d\u4ee4\u67e5\u770b \u521d\u59cb\u5316tempest\u73af\u5883 \u5982\u679c\u7528\u6237\u60f3\u4f7f\u7528\u8be5\u73af\u5883\u8fd0\u884ctempest\u6d4b\u8bd5\u7684\u8bdd\uff0c\u53ef\u4ee5\u6267\u884c\u547d\u4ee4 oos env init \uff0c\u4f1a\u81ea\u52a8\u628atempest\u9700\u8981\u7684OpenStack\u8d44\u6e90\u81ea\u52a8\u521b\u5efa\u597d oos env init test-oos \u547d\u4ee4\u6267\u884c\u6210\u529f\u540e\uff0c\u5728\u7528\u6237\u7684\u6839\u76ee\u5f55\u4e0b\u4f1a\u751f\u6210mytest\u76ee\u5f55\uff0c\u8fdb\u5165\u5176\u4e2d\u5c31\u53ef\u4ee5\u6267\u884ctempest run\u547d\u4ee4\u4e86\u3002 \u5982\u679c\u662f\u4ee5\u4e3b\u673a\u7eb3\u7ba1\u7684\u65b9\u5f0f\u90e8\u7f72 OpenStack \u73af\u5883\uff0c\u603b\u4f53\u903b\u8f91\u4e0e\u4e0a\u6587\u5bf9\u63a5\u534e\u4e3a\u4e91\u65f6\u4e00\u81f4\uff0c1\u30013\u30015\u30016\u6b65\u64cd\u4f5c\u4e0d\u53d8\uff0c\u53bb\u9664\u7b2c2\u6b65\u5bf9\u534e\u4e3a\u4e91provider\u4fe1\u606f\u7684\u914d\u7f6e\uff0c\u7b2c4\u6b65\u7531\u5728\u534e\u4e3a\u4e91\u4e0a\u521b\u5efa\u865a\u62df\u673a\u6539\u4e3a\u7eb3\u7ba1\u4e3b\u673a\u64cd\u4f5c\u3002 # sshpass\u5728`oos env create`\u8fc7\u7a0b\u4e2d\u88ab\u4f7f\u7528\uff0c\u7528\u4e8e\u914d\u7f6e\u5bf9\u76ee\u6807\u4e3b\u673a\u7684\u514d\u5bc6\u8bbf\u95ee dnf install sshpass oos env manage -r 22.03-lts -i TARGET_MACHINE_IP -p TARGET_MACHINE_PASSWD -n test-oos \u66ff\u6362 TARGET_MACHINE_IP \u4e3a\u76ee\u6807\u673aip\u3001 TARGET_MACHINE_PASSWD \u4e3a\u76ee\u6807\u673a\u5bc6\u7801\u3002\u5177\u4f53\u7684\u53c2\u6570\u53ef\u4ee5\u4f7f\u7528 oos env manage --help 
\u547d\u4ee4\u67e5\u770b\u3002","title":"openEuler-22.03-LTS_Wallaby"},{"location":"install/openEuler-22.03-LTS/OpenStack-wallaby/#openstack-wallaby","text":"OpenStack-Wallaby \u90e8\u7f72\u6307\u5357 OpenStack \u7b80\u4ecb \u7ea6\u5b9a \u51c6\u5907\u73af\u5883 \u73af\u5883\u914d\u7f6e \u5b89\u88c5 SQL DataBase \u5b89\u88c5 RabbitMQ \u5b89\u88c5 Memcached \u5b89\u88c5 OpenStack Keystone \u5b89\u88c5 Glance \u5b89\u88c5 Placement\u5b89\u88c5 Nova \u5b89\u88c5 Neutron \u5b89\u88c5 Cinder \u5b89\u88c5 horizon \u5b89\u88c5 Tempest \u5b89\u88c5 Ironic \u5b89\u88c5 Kolla \u5b89\u88c5 Trove \u5b89\u88c5 Swift \u5b89\u88c5 Cyborg \u5b89\u88c5 Aodh \u5b89\u88c5 Gnocchi \u5b89\u88c5 Ceilometer \u5b89\u88c5 Heat \u5b89\u88c5 \u57fa\u4e8eOpenStack SIG\u5f00\u53d1\u5de5\u5177oos\u5feb\u901f\u90e8\u7f72","title":"OpenStack-Wallaby \u90e8\u7f72\u6307\u5357"},{"location":"install/openEuler-22.03-LTS/OpenStack-wallaby/#openstack","text":"OpenStack \u662f\u4e00\u4e2a\u793e\u533a\uff0c\u4e5f\u662f\u4e00\u4e2a\u9879\u76ee\u3002\u5b83\u63d0\u4f9b\u4e86\u4e00\u4e2a\u90e8\u7f72\u4e91\u7684\u64cd\u4f5c\u5e73\u53f0\u6216\u5de5\u5177\u96c6\uff0c\u4e3a\u7ec4\u7ec7\u63d0\u4f9b\u53ef\u6269\u5c55\u7684\u3001\u7075\u6d3b\u7684\u4e91\u8ba1\u7b97\u3002 \u4f5c\u4e3a\u4e00\u4e2a\u5f00\u6e90\u7684\u4e91\u8ba1\u7b97\u7ba1\u7406\u5e73\u53f0\uff0cOpenStack \u7531nova\u3001cinder\u3001neutron\u3001glance\u3001keystone\u3001horizon\u7b49\u51e0\u4e2a\u4e3b\u8981\u7684\u7ec4\u4ef6\u7ec4\u5408\u8d77\u6765\u5b8c\u6210\u5177\u4f53\u5de5\u4f5c\u3002OpenStack \u652f\u6301\u51e0\u4e4e\u6240\u6709\u7c7b\u578b\u7684\u4e91\u73af\u5883\uff0c\u9879\u76ee\u76ee\u6807\u662f\u63d0\u4f9b\u5b9e\u65bd\u7b80\u5355\u3001\u53ef\u5927\u89c4\u6a21\u6269\u5c55\u3001\u4e30\u5bcc\u3001\u6807\u51c6\u7edf\u4e00\u7684\u4e91\u8ba1\u7b97\u7ba1\u7406\u5e73\u53f0\u3002OpenStack \u901a\u8fc7\u5404\u79cd\u4e92\u8865\u7684\u670d\u52a1\u63d0\u4f9b\u4e86\u57fa\u7840\u8bbe\u65bd\u5373\u670d\u52a1\uff08IaaS\uff09\u7684\u89e3\u51b3\u65b9\u6848\uff0c\u6bcf\u4e2a\u670d\u52a1\u63d0\u4f9b API \u8fdb\u884c\u96c6\u6210\u3002 openEuler 22.03 LTS \u7248\u672c\u5b98\u65b9\u6e90\u5df2\u7ecf\u652f\u6301 OpenStack-Wallaby \u7248\u672c\uff0c\u7528\u6237\u53ef\u4ee5\u914d\u7f6e\u597d yum \u6e90\u540e\u6839\u636e\u6b64\u6587\u6863\u8fdb\u884c OpenStack \u90e8\u7f72\u3002","title":"OpenStack \u7b80\u4ecb"},{"location":"install/openEuler-22.03-LTS/OpenStack-wallaby/#_1","text":"OpenStack \u652f\u6301\u591a\u79cd\u5f62\u6001\u90e8\u7f72\uff0c\u6b64\u6587\u6863\u652f\u6301 ALL in One \u4ee5\u53ca Distributed \u4e24\u79cd\u90e8\u7f72\u65b9\u5f0f\uff0c\u6309\u7167\u5982\u4e0b\u65b9\u5f0f\u7ea6\u5b9a\uff1a ALL in One \u6a21\u5f0f: \u5ffd\u7565\u6240\u6709\u53ef\u80fd\u7684\u540e\u7f00 Distributed \u6a21\u5f0f: \u4ee5 `(CTL)` \u4e3a\u540e\u7f00\u8868\u793a\u6b64\u6761\u914d\u7f6e\u6216\u8005\u547d\u4ee4\u4ec5\u9002\u7528`\u63a7\u5236\u8282\u70b9` \u4ee5 `(CPT)` \u4e3a\u540e\u7f00\u8868\u793a\u6b64\u6761\u914d\u7f6e\u6216\u8005\u547d\u4ee4\u4ec5\u9002\u7528`\u8ba1\u7b97\u8282\u70b9` \u4ee5 `(STG)` \u4e3a\u540e\u7f00\u8868\u793a\u6b64\u6761\u914d\u7f6e\u6216\u8005\u547d\u4ee4\u4ec5\u9002\u7528`\u5b58\u50a8\u8282\u70b9` \u9664\u6b64\u4e4b\u5916\u8868\u793a\u6b64\u6761\u914d\u7f6e\u6216\u8005\u547d\u4ee4\u540c\u65f6\u9002\u7528`\u63a7\u5236\u8282\u70b9`\u548c`\u8ba1\u7b97\u8282\u70b9` \u6ce8\u610f \u6d89\u53ca\u5230\u4ee5\u4e0a\u7ea6\u5b9a\u7684\u670d\u52a1\u5982\u4e0b\uff1a Cinder Nova 
Neutron","title":"\u7ea6\u5b9a"},{"location":"install/openEuler-22.03-LTS/OpenStack-wallaby/#_2","text":"","title":"\u51c6\u5907\u73af\u5883"},{"location":"install/openEuler-22.03-LTS/OpenStack-wallaby/#_3","text":"\u914d\u7f6e 22.03 LTS \u5b98\u65b9yum\u6e90\uff0c\u9700\u8981\u542f\u7528EPOL\u8f6f\u4ef6\u4ed3\u4ee5\u652f\u6301OpenStack yum update yum install openstack-release-wallaby yum clean all && yum makecache \u6ce8\u610f \uff1a\u5982\u679c\u4f60\u7684\u73af\u5883\u7684YUM\u6e90\u6ca1\u6709\u542f\u7528EPOL\uff0c\u9700\u8981\u540c\u65f6\u914d\u7f6eEPOL vi /etc/yum.repos.d/openEuler.repo [EPOL] name=EPOL baseurl=http://repo.openeuler.org/openEuler-22.03-LTS/EPOL/main/$basearch/ enabled=1 gpgcheck=1 gpgkey=http://repo.openeuler.org/openEuler-22.03-LTS/OS/$basearch/RPM-GPG-KEY-openEuler EOF \u4fee\u6539\u4e3b\u673a\u540d\u4ee5\u53ca\u6620\u5c04 \u8bbe\u7f6e\u5404\u4e2a\u8282\u70b9\u7684\u4e3b\u673a\u540d hostnamectl set-hostname controller (CTL) hostnamectl set-hostname compute (CPT) \u5047\u8bbecontroller\u8282\u70b9\u7684IP\u662f 10.0.0.11 ,compute\u8282\u70b9\u7684IP\u662f 10.0.0.12 \uff08\u5982\u679c\u5b58\u5728\u7684\u8bdd\uff09,\u5219\u4e8e /etc/hosts \u65b0\u589e\u5982\u4e0b\uff1a 10.0.0.11 controller 10.0.0.12 compute","title":"\u73af\u5883\u914d\u7f6e"},{"location":"install/openEuler-22.03-LTS/OpenStack-wallaby/#sql-database","text":"\u6267\u884c\u5982\u4e0b\u547d\u4ee4\uff0c\u5b89\u88c5\u8f6f\u4ef6\u5305\u3002 yum install mariadb mariadb-server python3-PyMySQL \u6267\u884c\u5982\u4e0b\u547d\u4ee4\uff0c\u521b\u5efa\u5e76\u7f16\u8f91 /etc/my.cnf.d/openstack.cnf \u6587\u4ef6\u3002 vim /etc/my.cnf.d/openstack.cnf [mysqld] bind-address = 10.0.0.11 default-storage-engine = innodb innodb_file_per_table = on max_connections = 4096 collation-server = utf8_general_ci character-set-server = utf8 \u6ce8\u610f \u5176\u4e2d bind-address \u8bbe\u7f6e\u4e3a\u63a7\u5236\u8282\u70b9\u7684\u7ba1\u7406IP\u5730\u5740\u3002 \u542f\u52a8 DataBase \u670d\u52a1\uff0c\u5e76\u4e3a\u5176\u914d\u7f6e\u5f00\u673a\u81ea\u542f\u52a8\uff1a systemctl enable mariadb.service systemctl start mariadb.service \u914d\u7f6eDataBase\u7684\u9ed8\u8ba4\u5bc6\u7801\uff08\u53ef\u9009\uff09 mysql_secure_installation \u6ce8\u610f \u6839\u636e\u63d0\u793a\u8fdb\u884c\u5373\u53ef","title":"\u5b89\u88c5 SQL DataBase"},{"location":"install/openEuler-22.03-LTS/OpenStack-wallaby/#rabbitmq","text":"\u6267\u884c\u5982\u4e0b\u547d\u4ee4\uff0c\u5b89\u88c5\u8f6f\u4ef6\u5305\u3002 yum install rabbitmq-server \u542f\u52a8 RabbitMQ \u670d\u52a1\uff0c\u5e76\u4e3a\u5176\u914d\u7f6e\u5f00\u673a\u81ea\u542f\u52a8\u3002 systemctl enable rabbitmq-server.service systemctl start rabbitmq-server.service \u6dfb\u52a0 OpenStack\u7528\u6237\u3002 rabbitmqctl add_user openstack RABBIT_PASS \u6ce8\u610f \u66ff\u6362 RABBIT_PASS \uff0c\u4e3a OpenStack \u7528\u6237\u8bbe\u7f6e\u5bc6\u7801 \u8bbe\u7f6eopenstack\u7528\u6237\u6743\u9650\uff0c\u5141\u8bb8\u8fdb\u884c\u914d\u7f6e\u3001\u5199\u3001\u8bfb\uff1a rabbitmqctl set_permissions openstack \".*\" \".*\" \".*\"","title":"\u5b89\u88c5 RabbitMQ"},{"location":"install/openEuler-22.03-LTS/OpenStack-wallaby/#memcached","text":"\u6267\u884c\u5982\u4e0b\u547d\u4ee4\uff0c\u5b89\u88c5\u4f9d\u8d56\u8f6f\u4ef6\u5305\u3002 yum install memcached python3-memcached \u7f16\u8f91 /etc/sysconfig/memcached \u6587\u4ef6\u3002 vim /etc/sysconfig/memcached OPTIONS=\"-l 127.0.0.1,::1,controller\" \u6267\u884c\u5982\u4e0b\u547d\u4ee4\uff0c\u542f\u52a8 Memcached 
\u670d\u52a1\uff0c\u5e76\u4e3a\u5176\u914d\u7f6e\u5f00\u673a\u542f\u52a8\u3002 systemctl enable memcached.service systemctl start memcached.service \u6ce8\u610f \u670d\u52a1\u542f\u52a8\u540e\uff0c\u53ef\u4ee5\u901a\u8fc7\u547d\u4ee4 memcached-tool controller stats \u786e\u4fdd\u542f\u52a8\u6b63\u5e38\uff0c\u670d\u52a1\u53ef\u7528\uff0c\u5176\u4e2d\u53ef\u4ee5\u5c06 controller \u66ff\u6362\u4e3a\u63a7\u5236\u8282\u70b9\u7684\u7ba1\u7406IP\u5730\u5740\u3002","title":"\u5b89\u88c5 Memcached"},{"location":"install/openEuler-22.03-LTS/OpenStack-wallaby/#openstack_1","text":"","title":"\u5b89\u88c5 OpenStack"},{"location":"install/openEuler-22.03-LTS/OpenStack-wallaby/#keystone","text":"\u521b\u5efa keystone \u6570\u636e\u5e93\u5e76\u6388\u6743\u3002 mysql -u root -p MariaDB [(none)]> CREATE DATABASE keystone; MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \\ IDENTIFIED BY 'KEYSTONE_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \\ IDENTIFIED BY 'KEYSTONE_DBPASS'; MariaDB [(none)]> exit \u6ce8\u610f \u66ff\u6362 KEYSTONE_DBPASS \uff0c\u4e3a Keystone \u6570\u636e\u5e93\u8bbe\u7f6e\u5bc6\u7801 \u5b89\u88c5\u8f6f\u4ef6\u5305\u3002 yum install openstack-keystone httpd mod_wsgi \u914d\u7f6ekeystone\u76f8\u5173\u914d\u7f6e vim /etc/keystone/keystone.conf [database] connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone [token] provider = fernet \u89e3\u91ca [database]\u90e8\u5206\uff0c\u914d\u7f6e\u6570\u636e\u5e93\u5165\u53e3 [token]\u90e8\u5206\uff0c\u914d\u7f6etoken provider \u6ce8\u610f\uff1a \u66ff\u6362 KEYSTONE_DBPASS \u4e3a Keystone \u6570\u636e\u5e93\u7684\u5bc6\u7801 \u540c\u6b65\u6570\u636e\u5e93\u3002 su -s /bin/sh -c \"keystone-manage db_sync\" keystone \u521d\u59cb\u5316Fernet\u5bc6\u94a5\u4ed3\u5e93\u3002 keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone keystone-manage credential_setup --keystone-user keystone --keystone-group keystone \u542f\u52a8\u670d\u52a1\u3002 keystone-manage bootstrap --bootstrap-password ADMIN_PASS \\ --bootstrap-admin-url http://controller:5000/v3/ \\ --bootstrap-internal-url http://controller:5000/v3/ \\ --bootstrap-public-url http://controller:5000/v3/ \\ --bootstrap-region-id RegionOne \u6ce8\u610f \u66ff\u6362 ADMIN_PASS \uff0c\u4e3a admin \u7528\u6237\u8bbe\u7f6e\u5bc6\u7801 \u914d\u7f6eApache HTTP server vim /etc/httpd/conf/httpd.conf ServerName controller ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/ \u89e3\u91ca \u914d\u7f6e ServerName \u9879\u5f15\u7528\u63a7\u5236\u8282\u70b9 \u6ce8\u610f \u5982\u679c ServerName \u9879\u4e0d\u5b58\u5728\u5219\u9700\u8981\u521b\u5efa \u542f\u52a8Apache HTTP\u670d\u52a1\u3002 systemctl enable httpd.service systemctl start httpd.service \u521b\u5efa\u73af\u5883\u53d8\u91cf\u914d\u7f6e\u3002 cat << EOF >> ~/.admin-openrc export OS_PROJECT_DOMAIN_NAME=Default export OS_USER_DOMAIN_NAME=Default export OS_PROJECT_NAME=admin export OS_USERNAME=admin export OS_PASSWORD=ADMIN_PASS export OS_AUTH_URL=http://controller:5000/v3 export OS_IDENTITY_API_VERSION=3 export OS_IMAGE_API_VERSION=2 EOF \u6ce8\u610f \u66ff\u6362 ADMIN_PASS \u4e3a admin \u7528\u6237\u7684\u5bc6\u7801 \u4f9d\u6b21\u521b\u5efadomain, projects, users, roles\uff0c\u9700\u8981\u5148\u5b89\u88c5\u597dpython3-openstackclient\uff1a yum install python3-openstackclient \u5bfc\u5165\u73af\u5883\u53d8\u91cf source ~/.admin-openrc \u521b\u5efaproject service \uff0c\u5176\u4e2d domain default \u5728 keystone-manage bootstrap 
\u65f6\u5df2\u521b\u5efa openstack domain create --description \"An Example Domain\" example openstack project create --domain default --description \"Service Project\" service \u521b\u5efa\uff08non-admin\uff09project myproject \uff0cuser myuser \u548c role myrole \uff0c\u4e3a myproject \u548c myuser \u6dfb\u52a0\u89d2\u8272 myrole openstack project create --domain default --description \"Demo Project\" myproject openstack user create --domain default --password-prompt myuser openstack role create myrole openstack role add --project myproject --user myuser myrole \u9a8c\u8bc1 \u53d6\u6d88\u4e34\u65f6\u73af\u5883\u53d8\u91cfOS_AUTH_URL\u548cOS_PASSWORD\uff1a source ~/.admin-openrc unset OS_AUTH_URL OS_PASSWORD \u4e3aadmin\u7528\u6237\u8bf7\u6c42token\uff1a openstack --os-auth-url http://controller:5000/v3 \\ --os-project-domain-name Default --os-user-domain-name Default \\ --os-project-name admin --os-username admin token issue \u4e3amyuser\u7528\u6237\u8bf7\u6c42token\uff1a openstack --os-auth-url http://controller:5000/v3 \\ --os-project-domain-name Default --os-user-domain-name Default \\ --os-project-name myproject --os-username myuser token issue","title":"Keystone \u5b89\u88c5"},{"location":"install/openEuler-22.03-LTS/OpenStack-wallaby/#glance","text":"\u521b\u5efa\u6570\u636e\u5e93\u3001\u670d\u52a1\u51ed\u8bc1\u548c API \u7aef\u70b9 \u521b\u5efa\u6570\u636e\u5e93\uff1a mysql -u root -p MariaDB [(none)]> CREATE DATABASE glance; MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \\ IDENTIFIED BY 'GLANCE_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \\ IDENTIFIED BY 'GLANCE_DBPASS'; MariaDB [(none)]> exit \u6ce8\u610f: \u66ff\u6362 GLANCE_DBPASS \uff0c\u4e3a glance \u6570\u636e\u5e93\u8bbe\u7f6e\u5bc6\u7801 \u521b\u5efa\u670d\u52a1\u51ed\u8bc1 source ~/.admin-openrc openstack user create --domain default --password-prompt glance openstack role add --project service --user glance admin openstack service create --name glance --description \"OpenStack Image\" image \u521b\u5efa\u955c\u50cf\u670d\u52a1API\u7aef\u70b9\uff1a openstack endpoint create --region RegionOne image public http://controller:9292 openstack endpoint create --region RegionOne image internal http://controller:9292 openstack endpoint create --region RegionOne image admin http://controller:9292 \u5b89\u88c5\u8f6f\u4ef6\u5305 yum install openstack-glance \u914d\u7f6eglance\u76f8\u5173\u914d\u7f6e\uff1a vim /etc/glance/glance-api.conf [database] connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance [keystone_authtoken] www_authenticate_uri = http://controller:5000 auth_url = http://controller:5000 memcached_servers = controller:11211 auth_type = password project_domain_name = Default user_domain_name = Default project_name = service username = glance password = GLANCE_PASS [paste_deploy] flavor = keystone [glance_store] stores = file,http default_store = file filesystem_store_datadir = /var/lib/glance/images/ \u89e3\u91ca: [database]\u90e8\u5206\uff0c\u914d\u7f6e\u6570\u636e\u5e93\u5165\u53e3 [keystone_authtoken] [paste_deploy]\u90e8\u5206\uff0c\u914d\u7f6e\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u5165\u53e3 [glance_store]\u90e8\u5206\uff0c\u914d\u7f6e\u672c\u5730\u6587\u4ef6\u7cfb\u7edf\u5b58\u50a8\u548c\u955c\u50cf\u6587\u4ef6\u7684\u4f4d\u7f6e \u6ce8\u610f \u66ff\u6362 GLANCE_DBPASS \u4e3a glance \u6570\u636e\u5e93\u7684\u5bc6\u7801 \u66ff\u6362 GLANCE_PASS \u4e3a glance \u7528\u6237\u7684\u5bc6\u7801 \u540c\u6b65\u6570\u636e\u5e93\uff1a su -s /bin/sh -c 
\"glance-manage db_sync\" glance \u542f\u52a8\u670d\u52a1\uff1a systemctl enable openstack-glance-api.service systemctl start openstack-glance-api.service \u9a8c\u8bc1 \u4e0b\u8f7d\u955c\u50cf source ~/.admin-openrc wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img \u6ce8\u610f \u5982\u679c\u60a8\u4f7f\u7528\u7684\u73af\u5883\u662f\u9cb2\u9e4f\u67b6\u6784\uff0c\u8bf7\u4e0b\u8f7daarch64\u7248\u672c\u7684\u955c\u50cf\uff1b\u5df2\u5bf9\u955c\u50cfcirros-0.5.2-aarch64-disk.img\u8fdb\u884c\u6d4b\u8bd5\u3002 \u5411Image\u670d\u52a1\u4e0a\u4f20\u955c\u50cf\uff1a openstack image create --disk-format qcow2 --container-format bare \\ --file cirros-0.4.0-x86_64-disk.img --public cirros \u786e\u8ba4\u955c\u50cf\u4e0a\u4f20\u5e76\u9a8c\u8bc1\u5c5e\u6027\uff1a openstack image list","title":"Glance \u5b89\u88c5"},{"location":"install/openEuler-22.03-LTS/OpenStack-wallaby/#placement","text":"\u521b\u5efa\u6570\u636e\u5e93\u3001\u670d\u52a1\u51ed\u8bc1\u548c API \u7aef\u70b9 \u521b\u5efa\u6570\u636e\u5e93\uff1a \u4f5c\u4e3a root \u7528\u6237\u8bbf\u95ee\u6570\u636e\u5e93\uff0c\u521b\u5efa placement \u6570\u636e\u5e93\u5e76\u6388\u6743\u3002 mysql -u root -p MariaDB [(none)]> CREATE DATABASE placement; MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' \\ IDENTIFIED BY 'PLACEMENT_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' \\ IDENTIFIED BY 'PLACEMENT_DBPASS'; MariaDB [(none)]> exit \u6ce8\u610f \u66ff\u6362 PLACEMENT_DBPASS \u4e3a placement \u6570\u636e\u5e93\u8bbe\u7f6e\u5bc6\u7801 source admin-openrc \u6267\u884c\u5982\u4e0b\u547d\u4ee4\uff0c\u521b\u5efa placement \u670d\u52a1\u51ed\u8bc1\u3001\u521b\u5efa placement \u7528\u6237\u4ee5\u53ca\u6dfb\u52a0\u2018admin\u2019\u89d2\u8272\u5230\u7528\u6237\u2018placement\u2019\u3002 \u521b\u5efaPlacement API\u670d\u52a1 openstack user create --domain default --password-prompt placement openstack role add --project service --user placement admin openstack service create --name placement --description \"Placement API\" placement \u521b\u5efaplacement\u670d\u52a1API\u7aef\u70b9\uff1a openstack endpoint create --region RegionOne placement public http://controller:8778 openstack endpoint create --region RegionOne placement internal http://controller:8778 openstack endpoint create --region RegionOne placement admin http://controller:8778 \u5b89\u88c5\u548c\u914d\u7f6e \u5b89\u88c5\u8f6f\u4ef6\u5305\uff1a yum install openstack-placement-api \u914d\u7f6eplacement\uff1a \u7f16\u8f91 /etc/placement/placement.conf \u6587\u4ef6\uff1a \u5728[placement_database]\u90e8\u5206\uff0c\u914d\u7f6e\u6570\u636e\u5e93\u5165\u53e3 \u5728[api] [keystone_authtoken]\u90e8\u5206\uff0c\u914d\u7f6e\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u5165\u53e3 # vim /etc/placement/placement.conf [placement_database] # ... connection = mysql+pymysql://placement:PLACEMENT_DBPASS@controller/placement [api] # ... auth_strategy = keystone [keystone_authtoken] # ... 
auth_url = http://controller:5000/v3 memcached_servers = controller:11211 auth_type = password project_domain_name = Default user_domain_name = Default project_name = service username = placement password = PLACEMENT_PASS \u5176\u4e2d\uff0c\u66ff\u6362 PLACEMENT_DBPASS \u4e3a placement \u6570\u636e\u5e93\u7684\u5bc6\u7801\uff0c\u66ff\u6362 PLACEMENT_PASS \u4e3a placement \u7528\u6237\u7684\u5bc6\u7801\u3002 \u540c\u6b65\u6570\u636e\u5e93\uff1a su -s /bin/sh -c \"placement-manage db sync\" placement \u542f\u52a8httpd\u670d\u52a1\uff1a systemctl restart httpd \u9a8c\u8bc1 \u6267\u884c\u5982\u4e0b\u547d\u4ee4\uff0c\u6267\u884c\u72b6\u6001\u68c0\u67e5\uff1a . admin-openrc placement-status upgrade check \u5b89\u88c5osc-placement\uff0c\u5217\u51fa\u53ef\u7528\u7684\u8d44\u6e90\u7c7b\u522b\u53ca\u7279\u6027\uff1a yum install python3-osc-placement openstack --os-placement-api-version 1.2 resource class list --sort-column name openstack --os-placement-api-version 1.6 trait list --sort-column name","title":"Placement\u5b89\u88c5"},{"location":"install/openEuler-22.03-LTS/OpenStack-wallaby/#nova","text":"\u521b\u5efa\u6570\u636e\u5e93\u3001\u670d\u52a1\u51ed\u8bc1\u548c API \u7aef\u70b9 \u521b\u5efa\u6570\u636e\u5e93\uff1a mysql -u root -p (CTL) MariaDB [(none)]> CREATE DATABASE nova_api; MariaDB [(none)]> CREATE DATABASE nova; MariaDB [(none)]> CREATE DATABASE nova_cell0; MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \\ IDENTIFIED BY 'NOVA_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \\ IDENTIFIED BY 'NOVA_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \\ IDENTIFIED BY 'NOVA_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \\ IDENTIFIED BY 'NOVA_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \\ IDENTIFIED BY 'NOVA_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \\ IDENTIFIED BY 'NOVA_DBPASS'; MariaDB [(none)]> exit \u6ce8\u610f \u66ff\u6362NOVA_DBPASS\uff0c\u4e3anova\u6570\u636e\u5e93\u8bbe\u7f6e\u5bc6\u7801 source ~/.admin-openrc (CTL) \u521b\u5efanova\u670d\u52a1\u51ed\u8bc1: openstack user create --domain default --password-prompt nova (CTL) openstack role add --project service --user nova admin (CTL) openstack service create --name nova --description \"OpenStack Compute\" compute (CTL) \u521b\u5efanova API\u7aef\u70b9\uff1a openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1 (CTL) openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1 (CTL) openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1 (CTL) \u5b89\u88c5\u8f6f\u4ef6\u5305 yum install openstack-nova-api openstack-nova-conductor \\ (CTL) openstack-nova-novncproxy openstack-nova-scheduler yum install openstack-nova-compute (CPT) \u6ce8\u610f \u5982\u679c\u4e3aarm64\u7ed3\u6784\uff0c\u8fd8\u9700\u8981\u6267\u884c\u4ee5\u4e0b\u547d\u4ee4 yum install edk2-aarch64 (CPT) \u914d\u7f6enova\u76f8\u5173\u914d\u7f6e vim /etc/nova/nova.conf [DEFAULT] enabled_apis = osapi_compute,metadata transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/ my_ip = 10.0.0.1 use_neutron = true firewall_driver = nova.virt.firewall.NoopFirewallDriver compute_driver=libvirt.LibvirtDriver (CPT) instances_path = /var/lib/nova/instances/ (CPT) lock_path = /var/lib/nova/tmp (CPT) [api_database] connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api (CTL) [database] 
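# Note: the nova_cell0 database created earlier does not need its own connection entry;
# "nova-manage cell_v2 map_cell0" (run later in this guide) derives it from the [database]
# connection below by appending _cell0 to the database name.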
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova (CTL) [api] auth_strategy = keystone [keystone_authtoken] www_authenticate_uri = http://controller:5000/ auth_url = http://controller:5000/ memcached_servers = controller:11211 auth_type = password project_domain_name = Default user_domain_name = Default project_name = service username = nova password = NOVA_PASS [vnc] enabled = true server_listen = $my_ip server_proxyclient_address = $my_ip novncproxy_base_url = http://controller:6080/vnc_auto.html (CPT) [libvirt] virt_type = qemu (CPT) cpu_mode = custom (CPT) cpu_model = cortex-a72 (CPT) [glance] api_servers = http://controller:9292 [oslo_concurrency] lock_path = /var/lib/nova/tmp (CTL) [placement] region_name = RegionOne project_domain_name = Default project_name = service auth_type = password user_domain_name = Default auth_url = http://controller:5000/v3 username = placement password = PLACEMENT_PASS [neutron] auth_url = http://controller:5000 auth_type = password project_domain_name = default user_domain_name = default region_name = RegionOne project_name = service username = neutron password = NEUTRON_PASS service_metadata_proxy = true (CTL) metadata_proxy_shared_secret = METADATA_SECRET (CTL) \u89e3\u91ca [default]\u90e8\u5206\uff0c\u542f\u7528\u8ba1\u7b97\u548c\u5143\u6570\u636e\u7684API\uff0c\u914d\u7f6eRabbitMQ\u6d88\u606f\u961f\u5217\u5165\u53e3\uff0c\u914d\u7f6emy_ip\uff0c\u542f\u7528\u7f51\u7edc\u670d\u52a1neutron\uff1b [api_database] [database]\u90e8\u5206\uff0c\u914d\u7f6e\u6570\u636e\u5e93\u5165\u53e3\uff1b [api] [keystone_authtoken]\u90e8\u5206\uff0c\u914d\u7f6e\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u5165\u53e3\uff1b [vnc]\u90e8\u5206\uff0c\u542f\u7528\u5e76\u914d\u7f6e\u8fdc\u7a0b\u63a7\u5236\u53f0\u5165\u53e3\uff1b [glance]\u90e8\u5206\uff0c\u914d\u7f6e\u955c\u50cf\u670d\u52a1API\u7684\u5730\u5740\uff1b [oslo_concurrency]\u90e8\u5206\uff0c\u914d\u7f6elock path\uff1b [placement]\u90e8\u5206\uff0c\u914d\u7f6eplacement\u670d\u52a1\u7684\u5165\u53e3\u3002 \u6ce8\u610f \u66ff\u6362 RABBIT_PASS \u4e3a RabbitMQ \u4e2d openstack \u8d26\u6237\u7684\u5bc6\u7801\uff1b \u914d\u7f6e my_ip \u4e3a\u63a7\u5236\u8282\u70b9\u7684\u7ba1\u7406IP\u5730\u5740\uff1b \u66ff\u6362 NOVA_DBPASS \u4e3anova\u6570\u636e\u5e93\u7684\u5bc6\u7801\uff1b \u66ff\u6362 NOVA_PASS \u4e3anova\u7528\u6237\u7684\u5bc6\u7801\uff1b \u66ff\u6362 PLACEMENT_PASS \u4e3aplacement\u7528\u6237\u7684\u5bc6\u7801\uff1b \u66ff\u6362 NEUTRON_PASS \u4e3aneutron\u7528\u6237\u7684\u5bc6\u7801\uff1b \u66ff\u6362 METADATA_SECRET \u4e3a\u5408\u9002\u7684\u5143\u6570\u636e\u4ee3\u7406secret\u3002 \u989d\u5916 \u786e\u5b9a\u662f\u5426\u652f\u6301\u865a\u62df\u673a\u786c\u4ef6\u52a0\u901f\uff08x86\u67b6\u6784\uff09\uff1a egrep -c '(vmx|svm)' /proc/cpuinfo (CPT) \u5982\u679c\u8fd4\u56de\u503c\u4e3a0\u5219\u4e0d\u652f\u6301\u786c\u4ef6\u52a0\u901f\uff0c\u9700\u8981\u914d\u7f6elibvirt\u4f7f\u7528QEMU\u800c\u4e0d\u662fKVM\uff1a vim /etc/nova/nova.conf (CPT) [libvirt] virt_type = qemu \u5982\u679c\u8fd4\u56de\u503c\u4e3a1\u6216\u66f4\u5927\u7684\u503c\uff0c\u5219\u652f\u6301\u786c\u4ef6\u52a0\u901f\uff0c\u4e0d\u9700\u8981\u8fdb\u884c\u989d\u5916\u7684\u914d\u7f6e \u6ce8\u610f \u5982\u679c\u4e3aarm64\u7ed3\u6784\uff0c\u8fd8\u9700\u8981\u6267\u884c\u4ee5\u4e0b\u547d\u4ee4 vim /etc/libvirt/qemu.conf nvram = [\"/usr/share/AAVMF/AAVMF_CODE.fd: \\ /usr/share/AAVMF/AAVMF_VARS.fd\", \\ \"/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw: \\ /usr/share/edk2/aarch64/vars-template-pflash.raw\"] vim /etc/qemu/firmware/edk2-aarch64.json { 
\"description\": \"UEFI firmware for ARM64 virtual machines\", \"interface-types\": [ \"uefi\" ], \"mapping\": { \"device\": \"flash\", \"executable\": { \"filename\": \"/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw\", \"format\": \"raw\" }, \"nvram-template\": { \"filename\": \"/usr/share/edk2/aarch64/vars-template-pflash.raw\", \"format\": \"raw\" } }, \"targets\": [ { \"architecture\": \"aarch64\", \"machines\": [ \"virt-*\" ] } ], \"features\": [ ], \"tags\": [ ] } (CPT) \u540c\u6b65\u6570\u636e\u5e93 \u540c\u6b65nova-api\u6570\u636e\u5e93\uff1a su -s /bin/sh -c \"nova-manage api_db sync\" nova (CTL) \u6ce8\u518ccell0\u6570\u636e\u5e93\uff1a su -s /bin/sh -c \"nova-manage cell_v2 map_cell0\" nova (CTL) \u521b\u5efacell1 cell\uff1a su -s /bin/sh -c \"nova-manage cell_v2 create_cell --name=cell1 --verbose\" nova (CTL) \u540c\u6b65nova\u6570\u636e\u5e93\uff1a su -s /bin/sh -c \"nova-manage db sync\" nova (CTL) \u9a8c\u8bc1cell0\u548ccell1\u6ce8\u518c\u6b63\u786e\uff1a su -s /bin/sh -c \"nova-manage cell_v2 list_cells\" nova (CTL) \u6dfb\u52a0\u8ba1\u7b97\u8282\u70b9\u5230openstack\u96c6\u7fa4 su -s /bin/sh -c \"nova-manage cell_v2 discover_hosts --verbose\" nova (CPT) \u542f\u52a8\u670d\u52a1 systemctl enable \\ (CTL) openstack-nova-api.service \\ openstack-nova-scheduler.service \\ openstack-nova-conductor.service \\ openstack-nova-novncproxy.service systemctl start \\ (CTL) openstack-nova-api.service \\ openstack-nova-scheduler.service \\ openstack-nova-conductor.service \\ openstack-nova-novncproxy.service systemctl enable libvirtd.service openstack-nova-compute.service (CPT) systemctl start libvirtd.service openstack-nova-compute.service (CPT) \u9a8c\u8bc1 source ~/.admin-openrc (CTL) \u5217\u51fa\u670d\u52a1\u7ec4\u4ef6\uff0c\u9a8c\u8bc1\u6bcf\u4e2a\u6d41\u7a0b\u90fd\u6210\u529f\u542f\u52a8\u548c\u6ce8\u518c\uff1a openstack compute service list (CTL) \u5217\u51fa\u8eab\u4efd\u670d\u52a1\u4e2d\u7684API\u7aef\u70b9\uff0c\u9a8c\u8bc1\u4e0e\u8eab\u4efd\u670d\u52a1\u7684\u8fde\u63a5\uff1a openstack catalog list (CTL) \u5217\u51fa\u955c\u50cf\u670d\u52a1\u4e2d\u7684\u955c\u50cf\uff0c\u9a8c\u8bc1\u4e0e\u955c\u50cf\u670d\u52a1\u7684\u8fde\u63a5\uff1a openstack image list (CTL) \u68c0\u67e5cells\u662f\u5426\u8fd0\u4f5c\u6210\u529f\uff0c\u4ee5\u53ca\u5176\u4ed6\u5fc5\u8981\u6761\u4ef6\u662f\u5426\u5df2\u5177\u5907\u3002 nova-status upgrade check (CTL)","title":"Nova \u5b89\u88c5"},{"location":"install/openEuler-22.03-LTS/OpenStack-wallaby/#neutron","text":"\u521b\u5efa\u6570\u636e\u5e93\u3001\u670d\u52a1\u51ed\u8bc1\u548c API \u7aef\u70b9 \u521b\u5efa\u6570\u636e\u5e93\uff1a mysql -u root -p (CTL) MariaDB [(none)]> CREATE DATABASE neutron; MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \\ IDENTIFIED BY 'NEUTRON_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \\ IDENTIFIED BY 'NEUTRON_DBPASS'; MariaDB [(none)]> exit \u6ce8\u610f \u66ff\u6362 NEUTRON_DBPASS \u4e3a neutron \u6570\u636e\u5e93\u8bbe\u7f6e\u5bc6\u7801\u3002 source ~/.admin-openrc (CTL) \u521b\u5efaneutron\u670d\u52a1\u51ed\u8bc1 openstack user create --domain default --password-prompt neutron (CTL) openstack role add --project service --user neutron admin (CTL) openstack service create --name neutron --description \"OpenStack Networking\" network (CTL) \u521b\u5efaNeutron\u670d\u52a1API\u7aef\u70b9\uff1a openstack endpoint create --region RegionOne network public http://controller:9696 (CTL) openstack endpoint create --region RegionOne network internal http://controller:9696 
(CTL) openstack endpoint create --region RegionOne network admin http://controller:9696 (CTL) \u5b89\u88c5\u8f6f\u4ef6\u5305\uff1a yum install openstack-neutron openstack-neutron-linuxbridge ebtables ipset \\ (CTL) openstack-neutron-ml2 yum install openstack-neutron-linuxbridge ebtables ipset (CPT) \u914d\u7f6eneutron\u76f8\u5173\u914d\u7f6e\uff1a \u914d\u7f6e\u4e3b\u4f53\u914d\u7f6e vim /etc/neutron/neutron.conf [database] connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron (CTL) [DEFAULT] core_plugin = ml2 (CTL) service_plugins = router (CTL) allow_overlapping_ips = true (CTL) transport_url = rabbit://openstack:RABBIT_PASS@controller auth_strategy = keystone notify_nova_on_port_status_changes = true (CTL) notify_nova_on_port_data_changes = true (CTL) api_workers = 3 (CTL) [keystone_authtoken] www_authenticate_uri = http://controller:5000 auth_url = http://controller:5000 memcached_servers = controller:11211 auth_type = password project_domain_name = Default user_domain_name = Default project_name = service username = neutron password = NEUTRON_PASS [nova] auth_url = http://controller:5000 (CTL) auth_type = password (CTL) project_domain_name = Default (CTL) user_domain_name = Default (CTL) region_name = RegionOne (CTL) project_name = service (CTL) username = nova (CTL) password = NOVA_PASS (CTL) [oslo_concurrency] lock_path = /var/lib/neutron/tmp \u89e3\u91ca [database]\u90e8\u5206\uff0c\u914d\u7f6e\u6570\u636e\u5e93\u5165\u53e3\uff1b [default]\u90e8\u5206\uff0c\u542f\u7528ml2\u63d2\u4ef6\u548crouter\u63d2\u4ef6\uff0c\u5141\u8bb8ip\u5730\u5740\u91cd\u53e0\uff0c\u914d\u7f6eRabbitMQ\u6d88\u606f\u961f\u5217\u5165\u53e3\uff1b [default] [keystone]\u90e8\u5206\uff0c\u914d\u7f6e\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u5165\u53e3\uff1b [default] [nova]\u90e8\u5206\uff0c\u914d\u7f6e\u7f51\u7edc\u6765\u901a\u77e5\u8ba1\u7b97\u7f51\u7edc\u62d3\u6251\u7684\u53d8\u5316\uff1b [oslo_concurrency]\u90e8\u5206\uff0c\u914d\u7f6elock path\u3002 \u6ce8\u610f \u66ff\u6362 NEUTRON_DBPASS \u4e3a neutron \u6570\u636e\u5e93\u7684\u5bc6\u7801\uff1b \u66ff\u6362 RABBIT_PASS \u4e3a RabbitMQ\u4e2dopenstack \u8d26\u6237\u7684\u5bc6\u7801\uff1b \u66ff\u6362 NEUTRON_PASS \u4e3a neutron \u7528\u6237\u7684\u5bc6\u7801\uff1b \u66ff\u6362 NOVA_PASS \u4e3a nova \u7528\u6237\u7684\u5bc6\u7801\u3002 \u914d\u7f6eML2\u63d2\u4ef6\uff1a vim /etc/neutron/plugins/ml2/ml2_conf.ini [ml2] type_drivers = flat,vlan,vxlan tenant_network_types = vxlan mechanism_drivers = linuxbridge,l2population extension_drivers = port_security [ml2_type_flat] flat_networks = provider [ml2_type_vxlan] vni_ranges = 1:1000 [securitygroup] enable_ipset = true \u521b\u5efa/etc/neutron/plugin.ini\u7684\u7b26\u53f7\u94fe\u63a5 ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini \u6ce8\u610f [ml2]\u90e8\u5206\uff0c\u542f\u7528 flat\u3001vlan\u3001vxlan \u7f51\u7edc\uff0c\u542f\u7528 linuxbridge \u53ca l2population \u673a\u5236\uff0c\u542f\u7528\u7aef\u53e3\u5b89\u5168\u6269\u5c55\u9a71\u52a8\uff1b [ml2_type_flat]\u90e8\u5206\uff0c\u914d\u7f6e flat \u7f51\u7edc\u4e3a provider \u865a\u62df\u7f51\u7edc\uff1b [ml2_type_vxlan]\u90e8\u5206\uff0c\u914d\u7f6e VXLAN \u7f51\u7edc\u6807\u8bc6\u7b26\u8303\u56f4\uff1b [securitygroup]\u90e8\u5206\uff0c\u914d\u7f6e\u5141\u8bb8 ipset\u3002 \u8865\u5145 l2 \u7684\u5177\u4f53\u914d\u7f6e\u53ef\u4ee5\u6839\u636e\u7528\u6237\u9700\u6c42\u81ea\u884c\u4fee\u6539\uff0c\u672c\u6587\u4f7f\u7528\u7684\u662fprovider network + linuxbridge \u914d\u7f6e Linux bridge \u4ee3\u7406\uff1a vim 
/etc/neutron/plugins/ml2/linuxbridge_agent.ini [linux_bridge] physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME [vxlan] enable_vxlan = true local_ip = OVERLAY_INTERFACE_IP_ADDRESS l2_population = true [securitygroup] enable_security_group = true firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver \u89e3\u91ca [linux_bridge]\u90e8\u5206\uff0c\u6620\u5c04 provider \u865a\u62df\u7f51\u7edc\u5230\u7269\u7406\u7f51\u7edc\u63a5\u53e3\uff1b [vxlan]\u90e8\u5206\uff0c\u542f\u7528 vxlan \u8986\u76d6\u7f51\u7edc\uff0c\u914d\u7f6e\u5904\u7406\u8986\u76d6\u7f51\u7edc\u7684\u7269\u7406\u7f51\u7edc\u63a5\u53e3 IP \u5730\u5740\uff0c\u542f\u7528 layer-2 population\uff1b [securitygroup]\u90e8\u5206\uff0c\u5141\u8bb8\u5b89\u5168\u7ec4\uff0c\u914d\u7f6e linux bridge iptables \u9632\u706b\u5899\u9a71\u52a8\u3002 \u6ce8\u610f \u66ff\u6362 PROVIDER_INTERFACE_NAME \u4e3a\u7269\u7406\u7f51\u7edc\u63a5\u53e3\uff1b \u66ff\u6362 OVERLAY_INTERFACE_IP_ADDRESS \u4e3a\u63a7\u5236\u8282\u70b9\u7684\u7ba1\u7406IP\u5730\u5740\u3002 \u914d\u7f6eLayer-3\u4ee3\u7406\uff1a vim /etc/neutron/l3_agent.ini (CTL) [DEFAULT] interface_driver = linuxbridge \u89e3\u91ca \u5728[default]\u90e8\u5206\uff0c\u914d\u7f6e\u63a5\u53e3\u9a71\u52a8\u4e3alinuxbridge \u914d\u7f6eDHCP\u4ee3\u7406\uff1a vim /etc/neutron/dhcp_agent.ini (CTL) [DEFAULT] interface_driver = linuxbridge dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq enable_isolated_metadata = true \u89e3\u91ca [default]\u90e8\u5206\uff0c\u914d\u7f6elinuxbridge\u63a5\u53e3\u9a71\u52a8\u3001Dnsmasq DHCP\u9a71\u52a8\uff0c\u542f\u7528\u9694\u79bb\u7684\u5143\u6570\u636e\u3002 \u914d\u7f6emetadata\u4ee3\u7406\uff1a vim /etc/neutron/metadata_agent.ini (CTL) [DEFAULT] nova_metadata_host = controller metadata_proxy_shared_secret = METADATA_SECRET \u89e3\u91ca [default]\u90e8\u5206\uff0c\u914d\u7f6e\u5143\u6570\u636e\u4e3b\u673a\u548cshared secret\u3002 \u6ce8\u610f \u66ff\u6362 METADATA_SECRET \u4e3a\u5408\u9002\u7684\u5143\u6570\u636e\u4ee3\u7406secret\u3002 \u914d\u7f6enova\u76f8\u5173\u914d\u7f6e vim /etc/nova/nova.conf [neutron] auth_url = http://controller:5000 auth_type = password project_domain_name = Default user_domain_name = Default region_name = RegionOne project_name = service username = neutron password = NEUTRON_PASS service_metadata_proxy = true (CTL) metadata_proxy_shared_secret = METADATA_SECRET (CTL) \u89e3\u91ca [neutron]\u90e8\u5206\uff0c\u914d\u7f6e\u8bbf\u95ee\u53c2\u6570\uff0c\u542f\u7528\u5143\u6570\u636e\u4ee3\u7406\uff0c\u914d\u7f6esecret\u3002 \u6ce8\u610f \u66ff\u6362 NEUTRON_PASS \u4e3a neutron \u7528\u6237\u7684\u5bc6\u7801\uff1b \u66ff\u6362 METADATA_SECRET \u4e3a\u5408\u9002\u7684\u5143\u6570\u636e\u4ee3\u7406secret\u3002 \u540c\u6b65\u6570\u636e\u5e93\uff1a su -s /bin/sh -c \"neutron-db-manage --config-file /etc/neutron/neutron.conf \\ --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head\" neutron \u91cd\u542f\u8ba1\u7b97API\u670d\u52a1\uff1a systemctl restart openstack-nova-api.service \u542f\u52a8\u7f51\u7edc\u670d\u52a1 systemctl enable neutron-server.service neutron-linuxbridge-agent.service \\ (CTL) neutron-dhcp-agent.service neutron-metadata-agent.service \\ systemctl enable neutron-l3-agent.service systemctl restart openstack-nova-api.service neutron-server.service (CTL) neutron-linuxbridge-agent.service neutron-dhcp-agent.service \\ neutron-metadata-agent.service neutron-l3-agent.service systemctl enable neutron-linuxbridge-agent.service (CPT) systemctl restart neutron-linuxbridge-agent.service 
openstack-nova-compute.service (CPT) \u9a8c\u8bc1 \u9a8c\u8bc1 neutron \u4ee3\u7406\u542f\u52a8\u6210\u529f\uff1a openstack network agent list","title":"Neutron \u5b89\u88c5"},{"location":"install/openEuler-22.03-LTS/OpenStack-wallaby/#cinder","text":"\u521b\u5efa\u6570\u636e\u5e93\u3001\u670d\u52a1\u51ed\u8bc1\u548c API \u7aef\u70b9 \u521b\u5efa\u6570\u636e\u5e93\uff1a mysql -u root -p MariaDB [(none)]> CREATE DATABASE cinder; MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \\ IDENTIFIED BY 'CINDER_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \\ IDENTIFIED BY 'CINDER_DBPASS'; MariaDB [(none)]> exit \u6ce8\u610f \u66ff\u6362 CINDER_DBPASS \u4e3acinder\u6570\u636e\u5e93\u8bbe\u7f6e\u5bc6\u7801\u3002 source ~/.admin-openrc \u521b\u5efacinder\u670d\u52a1\u51ed\u8bc1\uff1a openstack user create --domain default --password-prompt cinder openstack role add --project service --user cinder admin openstack service create --name cinderv2 --description \"OpenStack Block Storage\" volumev2 openstack service create --name cinderv3 --description \"OpenStack Block Storage\" volumev3 \u521b\u5efa\u5757\u5b58\u50a8\u670d\u52a1API\u7aef\u70b9\uff1a openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\\(project_id\\)s openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\\(project_id\\)s openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\\(project_id\\)s openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\\(project_id\\)s openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\\(project_id\\)s openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\\(project_id\\)s \u5b89\u88c5\u8f6f\u4ef6\u5305\uff1a yum install openstack-cinder-api openstack-cinder-scheduler (CTL) yum install lvm2 device-mapper-persistent-data scsi-target-utils rpcbind nfs-utils \\ (STG) openstack-cinder-volume openstack-cinder-backup \u51c6\u5907\u5b58\u50a8\u8bbe\u5907\uff0c\u4ee5\u4e0b\u4ec5\u4e3a\u793a\u4f8b\uff1a pvcreate /dev/vdb vgcreate cinder-volumes /dev/vdb vim /etc/lvm/lvm.conf devices { ... 
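# (Added note, not in the original text) the filter on the next line accepts
# only the /dev/vdb device that backs the cinder-volumes volume group and
# rejects everything else ("a/" entries are accepted, "r/" entries rejected);
# adjust the device name if a different disk was used with pvcreate above.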
filter = [ \"a/vdb/\", \"r/.*/\"] \u89e3\u91ca \u5728devices\u90e8\u5206\uff0c\u6dfb\u52a0\u8fc7\u6ee4\u4ee5\u63a5\u53d7/dev/vdb\u8bbe\u5907\u62d2\u7edd\u5176\u4ed6\u8bbe\u5907\u3002 \u51c6\u5907NFS mkdir -p /root/cinder/backup cat << EOF >> /etc/export /root/cinder/backup 192.168.1.0/24(rw,sync,no_root_squash,no_all_squash) EOF \u914d\u7f6ecinder\u76f8\u5173\u914d\u7f6e\uff1a vim /etc/cinder/cinder.conf [DEFAULT] transport_url = rabbit://openstack:RABBIT_PASS@controller auth_strategy = keystone my_ip = 10.0.0.11 enabled_backends = lvm (STG) backup_driver=cinder.backup.drivers.nfs.NFSBackupDriver (STG) backup_share=HOST:PATH (STG) [database] connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder [keystone_authtoken] www_authenticate_uri = http://controller:5000 auth_url = http://controller:5000 memcached_servers = controller:11211 auth_type = password project_domain_name = Default user_domain_name = Default project_name = service username = cinder password = CINDER_PASS [oslo_concurrency] lock_path = /var/lib/cinder/tmp [lvm] volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver (STG) volume_group = cinder-volumes (STG) iscsi_protocol = iscsi (STG) iscsi_helper = tgtadm (STG) \u89e3\u91ca [database]\u90e8\u5206\uff0c\u914d\u7f6e\u6570\u636e\u5e93\u5165\u53e3\uff1b [DEFAULT]\u90e8\u5206\uff0c\u914d\u7f6eRabbitMQ\u6d88\u606f\u961f\u5217\u5165\u53e3\uff0c\u914d\u7f6emy_ip\uff1b [DEFAULT] [keystone_authtoken]\u90e8\u5206\uff0c\u914d\u7f6e\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u5165\u53e3\uff1b [oslo_concurrency]\u90e8\u5206\uff0c\u914d\u7f6elock path\u3002 \u6ce8\u610f \u66ff\u6362 CINDER_DBPASS \u4e3a cinder \u6570\u636e\u5e93\u7684\u5bc6\u7801\uff1b \u66ff\u6362 RABBIT_PASS \u4e3a RabbitMQ \u4e2d openstack \u8d26\u6237\u7684\u5bc6\u7801\uff1b \u914d\u7f6e my_ip \u4e3a\u63a7\u5236\u8282\u70b9\u7684\u7ba1\u7406 IP \u5730\u5740\uff1b \u66ff\u6362 CINDER_PASS \u4e3a cinder \u7528\u6237\u7684\u5bc6\u7801\uff1b \u66ff\u6362 HOST:PATH \u4e3a NFS \u7684HOSTIP\u548c\u5171\u4eab\u8def\u5f84\uff1b \u540c\u6b65\u6570\u636e\u5e93\uff1a su -s /bin/sh -c \"cinder-manage db sync\" cinder (CTL) \u914d\u7f6enova\uff1a vim /etc/nova/nova.conf (CTL) [cinder] os_region_name = RegionOne \u91cd\u542f\u8ba1\u7b97API\u670d\u52a1 systemctl restart openstack-nova-api.service \u542f\u52a8cinder\u670d\u52a1 systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service (CTL) systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service (CTL) systemctl enable rpcbind.service nfs-server.service tgtd.service iscsid.service \\ (STG) openstack-cinder-volume.service \\ openstack-cinder-backup.service systemctl start rpcbind.service nfs-server.service tgtd.service iscsid.service \\ (STG) openstack-cinder-volume.service \\ openstack-cinder-backup.service \u6ce8\u610f \u5f53cinder\u4f7f\u7528tgtadm\u7684\u65b9\u5f0f\u6302\u5377\u7684\u65f6\u5019\uff0c\u8981\u4fee\u6539/etc/tgt/tgtd.conf\uff0c\u5185\u5bb9\u5982\u4e0b\uff0c\u4fdd\u8bc1tgtd\u53ef\u4ee5\u53d1\u73b0cinder-volume\u7684iscsi target\u3002 include /var/lib/cinder/volumes/* \u9a8c\u8bc1 source ~/.admin-openrc openstack volume service list","title":"Cinder \u5b89\u88c5"},{"location":"install/openEuler-22.03-LTS/OpenStack-wallaby/#horizon","text":"\u5b89\u88c5\u8f6f\u4ef6\u5305 yum install openstack-dashboard \u4fee\u6539\u6587\u4ef6 \u4fee\u6539\u53d8\u91cf vim /etc/openstack-dashboard/local_settings OPENSTACK_HOST = \"controller\" ALLOWED_HOSTS = ['*', ] SESSION_ENGINE = 'django.contrib.sessions.backends.cache' CACHES = { 
'default': { 'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache', 'LOCATION': 'controller:11211', } } OPENSTACK_KEYSTONE_URL = \"http://%s:5000/v3\" % OPENSTACK_HOST OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = \"Default\" OPENSTACK_KEYSTONE_DEFAULT_ROLE = \"member\" WEBROOT = '/dashboard' POLICY_FILES_PATH = \"/etc/openstack-dashboard\" OPENSTACK_API_VERSIONS = { \"identity\": 3, \"image\": 2, \"volume\": 3, } \u91cd\u542f httpd \u670d\u52a1 systemctl restart httpd.service memcached.service \u9a8c\u8bc1 \u6253\u5f00\u6d4f\u89c8\u5668\uff0c\u8f93\u5165\u7f51\u5740 http://HOSTIP/dashboard/ \uff0c\u767b\u5f55 horizon\u3002 \u6ce8\u610f \u66ff\u6362HOSTIP\u4e3a\u63a7\u5236\u8282\u70b9\u7ba1\u7406\u5e73\u9762IP\u5730\u5740","title":"horizon \u5b89\u88c5"},{"location":"install/openEuler-22.03-LTS/OpenStack-wallaby/#tempest","text":"Tempest\u662fOpenStack\u7684\u96c6\u6210\u6d4b\u8bd5\u670d\u52a1\uff0c\u5982\u679c\u7528\u6237\u9700\u8981\u5168\u9762\u81ea\u52a8\u5316\u6d4b\u8bd5\u5df2\u5b89\u88c5\u7684OpenStack\u73af\u5883\u7684\u529f\u80fd,\u5219\u63a8\u8350\u4f7f\u7528\u8be5\u7ec4\u4ef6\u3002\u5426\u5219\uff0c\u53ef\u4ee5\u4e0d\u7528\u5b89\u88c5\u3002 \u5b89\u88c5Tempest yum install openstack-tempest \u521d\u59cb\u5316\u76ee\u5f55 tempest init mytest \u4fee\u6539\u914d\u7f6e\u6587\u4ef6\u3002 cd mytest vi etc/tempest.conf tempest.conf\u4e2d\u9700\u8981\u914d\u7f6e\u5f53\u524dOpenStack\u73af\u5883\u7684\u4fe1\u606f\uff0c\u5177\u4f53\u5185\u5bb9\u53ef\u4ee5\u53c2\u8003 \u5b98\u65b9\u793a\u4f8b \u6267\u884c\u6d4b\u8bd5 tempest run \u5b89\u88c5tempest\u6269\u5c55\uff08\u53ef\u9009\uff09 OpenStack\u5404\u4e2a\u670d\u52a1\u672c\u8eab\u4e5f\u63d0\u4f9b\u4e86\u4e00\u4e9btempest\u6d4b\u8bd5\u5305\uff0c\u7528\u6237\u53ef\u4ee5\u5b89\u88c5\u8fd9\u4e9b\u5305\u6765\u4e30\u5bcctempest\u7684\u6d4b\u8bd5\u5185\u5bb9\u3002\u5728Wallaby\u4e2d\uff0c\u6211\u4eec\u63d0\u4f9b\u4e86Cinder\u3001Glance\u3001Keystone\u3001Ironic\u3001Trove\u7684\u6269\u5c55\u6d4b\u8bd5\uff0c\u7528\u6237\u53ef\u4ee5\u6267\u884c\u5982\u4e0b\u547d\u4ee4\u8fdb\u884c\u5b89\u88c5\u4f7f\u7528\uff1a yum install python3-cinder-tempest-plugin python3-glance-tempest-plugin python3-ironic-tempest-plugin python3-keystone-tempest-plugin python3-trove-tempest-plugin","title":"Tempest \u5b89\u88c5"},{"location":"install/openEuler-22.03-LTS/OpenStack-wallaby/#ironic","text":"Ironic\u662fOpenStack\u7684\u88f8\u91d1\u5c5e\u670d\u52a1\uff0c\u5982\u679c\u7528\u6237\u9700\u8981\u8fdb\u884c\u88f8\u673a\u90e8\u7f72\u5219\u63a8\u8350\u4f7f\u7528\u8be5\u7ec4\u4ef6\u3002\u5426\u5219\uff0c\u53ef\u4ee5\u4e0d\u7528\u5b89\u88c5\u3002 \u8bbe\u7f6e\u6570\u636e\u5e93 \u88f8\u91d1\u5c5e\u670d\u52a1\u5728\u6570\u636e\u5e93\u4e2d\u5b58\u50a8\u4fe1\u606f\uff0c\u521b\u5efa\u4e00\u4e2a ironic \u7528\u6237\u53ef\u4ee5\u8bbf\u95ee\u7684 ironic \u6570\u636e\u5e93\uff0c\u66ff\u6362 IRONIC_DBPASSWORD \u4e3a\u5408\u9002\u7684\u5bc6\u7801 mysql -u root -p MariaDB [(none)]> CREATE DATABASE ironic CHARACTER SET utf8; MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'localhost' \\ IDENTIFIED BY 'IRONIC_DBPASSWORD'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'%' \\ IDENTIFIED BY 'IRONIC_DBPASSWORD'; \u521b\u5efa\u670d\u52a1\u7528\u6237\u8ba4\u8bc1 1\u3001\u521b\u5efaBare Metal\u670d\u52a1\u7528\u6237 openstack user create --password IRONIC_PASSWORD \\ --email ironic@example.com ironic openstack role add --project service --user ironic admin openstack service create --name ironic --description \"Ironic 
baremetal provisioning service\" baremetal openstack service create --name ironic-inspector --description \"Ironic inspector baremetal provisioning service\" baremetal-introspection openstack user create --password IRONIC_INSPECTOR_PASSWORD --email ironic_inspector@example.com ironic_inspector openstack role add --project service --user ironic-inspector admin 2\u3001\u521b\u5efaBare Metal\u670d\u52a1\u8bbf\u95ee\u5165\u53e3 openstack endpoint create --region RegionOne baremetal admin http://$IRONIC_NODE:6385 openstack endpoint create --region RegionOne baremetal public http://$IRONIC_NODE:6385 openstack endpoint create --region RegionOne baremetal internal http://$IRONIC_NODE:6385 openstack endpoint create --region RegionOne baremetal-introspection internal http://172.20.19.13:5050/v1 openstack endpoint create --region RegionOne baremetal-introspection public http://172.20.19.13:5050/v1 openstack endpoint create --region RegionOne baremetal-introspection admin http://172.20.19.13:5050/v1 \u914d\u7f6eironic-api\u670d\u52a1 \u914d\u7f6e\u6587\u4ef6\u8def\u5f84/etc/ironic/ironic.conf 1\u3001\u901a\u8fc7 connection \u9009\u9879\u914d\u7f6e\u6570\u636e\u5e93\u7684\u4f4d\u7f6e\uff0c\u5982\u4e0b\u6240\u793a\uff0c\u66ff\u6362 IRONIC_DBPASSWORD \u4e3a ironic \u7528\u6237\u7684\u5bc6\u7801\uff0c\u66ff\u6362 DB_IP \u4e3aDB\u670d\u52a1\u5668\u6240\u5728\u7684IP\u5730\u5740\uff1a [database] # The SQLAlchemy connection string used to connect to the # database (string value) connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic 2\u3001\u901a\u8fc7\u4ee5\u4e0b\u9009\u9879\u914d\u7f6eironic-api\u670d\u52a1\u4f7f\u7528RabbitMQ\u6d88\u606f\u4ee3\u7406\uff0c\u66ff\u6362 RPC_* \u4e3aRabbitMQ\u7684\u8be6\u7ec6\u5730\u5740\u548c\u51ed\u8bc1 [DEFAULT] # A URL representing the messaging driver to use and its full # configuration. (string value) transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/ \u7528\u6237\u4e5f\u53ef\u81ea\u884c\u4f7f\u7528json-rpc\u65b9\u5f0f\u66ff\u6362rabbitmq 3\u3001\u914d\u7f6eironic-api\u670d\u52a1\u4f7f\u7528\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u7684\u51ed\u8bc1\uff0c\u66ff\u6362 PUBLIC_IDENTITY_IP \u4e3a\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u5668\u7684\u516c\u5171IP\uff0c\u66ff\u6362 PRIVATE_IDENTITY_IP \u4e3a\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u5668\u7684\u79c1\u6709IP\uff0c\u66ff\u6362 IRONIC_PASSWORD \u4e3a\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u4e2d ironic \u7528\u6237\u7684\u5bc6\u7801\uff1a [DEFAULT] # Authentication strategy used by ironic-api: one of # \"keystone\" or \"noauth\". \"noauth\" should not be used in a # production environment because all authentication will be # disabled. 
(string value) auth_strategy=keystone host = controller memcache_servers = controller:11211 enabled_network_interfaces = flat,noop,neutron default_network_interface = noop transport_url = rabbit://openstack:RABBITPASSWD@controller:5672/ enabled_hardware_types = ipmi enabled_boot_interfaces = pxe enabled_deploy_interfaces = direct default_deploy_interface = direct enabled_inspect_interfaces = inspector enabled_management_interfaces = ipmitool enabled_power_interfaces = ipmitool enabled_rescue_interfaces = no-rescue,agent isolinux_bin = /usr/share/syslinux/isolinux.bin logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s [keystone_authtoken] # Authentication type to load (string value) auth_type=password # Complete public Identity API endpoint (string value) www_authenticate_uri=http://PUBLIC_IDENTITY_IP:5000 # Complete admin Identity API endpoint. (string value) auth_url=http://PRIVATE_IDENTITY_IP:5000 # Service username. (string value) username=ironic # Service account password. (string value) password=IRONIC_PASSWORD # Service tenant name. (string value) project_name=service # Domain name containing project (string value) project_domain_name=Default # User's domain name (string value) user_domain_name=Default [agent] deploy_logs_collect = always deploy_logs_local_path = /var/log/ironic/deploy deploy_logs_storage_backend = local image_download_source = http stream_raw_images = false force_raw_images = false verify_ca = False [oslo_concurrency] [oslo_messaging_notifications] transport_url = rabbit://openstack:123456@172.20.19.25:5672/ topics = notifications driver = messagingv2 [oslo_messaging_rabbit] amqp_durable_queues = True rabbit_ha_queues = True [pxe] ipxe_enabled = false pxe_append_params = nofb nomodeset vga=normal coreos.autologin ipa-insecure=1 image_cache_size = 204800 tftp_root=/var/lib/tftpboot/cephfs/ tftp_master_path=/var/lib/tftpboot/cephfs/master_images [dhcp] dhcp_provider = none 4\u3001\u521b\u5efa\u88f8\u91d1\u5c5e\u670d\u52a1\u6570\u636e\u5e93\u8868 ironic-dbsync --config-file /etc/ironic/ironic.conf create_schema 5\u3001\u91cd\u542fironic-api\u670d\u52a1 sudo systemctl restart openstack-ironic-api \u914d\u7f6eironic-conductor\u670d\u52a1 1\u3001\u66ff\u6362 HOST_IP \u4e3aconductor host\u7684IP [DEFAULT] # IP address of this host. If unset, will determine the IP # programmatically. If unable to do so, will use \"127.0.0.1\". # (string value) my_ip=HOST_IP 2\u3001\u914d\u7f6e\u6570\u636e\u5e93\u7684\u4f4d\u7f6e\uff0cironic-conductor\u5e94\u8be5\u4f7f\u7528\u548cironic-api\u76f8\u540c\u7684\u914d\u7f6e\u3002\u66ff\u6362 IRONIC_DBPASSWORD \u4e3a ironic \u7528\u6237\u7684\u5bc6\u7801\uff0c\u66ff\u6362DB_IP\u4e3aDB\u670d\u52a1\u5668\u6240\u5728\u7684IP\u5730\u5740\uff1a [database] # The SQLAlchemy connection string to use to connect to the # database. (string value) connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic 3\u3001\u901a\u8fc7\u4ee5\u4e0b\u9009\u9879\u914d\u7f6eironic-api\u670d\u52a1\u4f7f\u7528RabbitMQ\u6d88\u606f\u4ee3\u7406\uff0cironic-conductor\u5e94\u8be5\u4f7f\u7528\u548cironic-api\u76f8\u540c\u7684\u914d\u7f6e\uff0c\u66ff\u6362 RPC_* \u4e3aRabbitMQ\u7684\u8be6\u7ec6\u5730\u5740\u548c\u51ed\u8bc1 [DEFAULT] # A URL representing the messaging driver to use and its full # configuration. 
(string value) transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/ \u7528\u6237\u4e5f\u53ef\u81ea\u884c\u4f7f\u7528json-rpc\u65b9\u5f0f\u66ff\u6362rabbitmq 4\u3001\u914d\u7f6e\u51ed\u8bc1\u8bbf\u95ee\u5176\u4ed6OpenStack\u670d\u52a1 \u4e3a\u4e86\u4e0e\u5176\u4ed6OpenStack\u670d\u52a1\u8fdb\u884c\u901a\u4fe1\uff0c\u88f8\u91d1\u5c5e\u670d\u52a1\u5728\u8bf7\u6c42\u5176\u4ed6\u670d\u52a1\u65f6\u9700\u8981\u4f7f\u7528\u670d\u52a1\u7528\u6237\u4e0eOpenStack Identity\u670d\u52a1\u8fdb\u884c\u8ba4\u8bc1\u3002\u8fd9\u4e9b\u7528\u6237\u7684\u51ed\u636e\u5fc5\u987b\u5728\u4e0e\u76f8\u5e94\u670d\u52a1\u76f8\u5173\u7684\u6bcf\u4e2a\u914d\u7f6e\u6587\u4ef6\u4e2d\u8fdb\u884c\u914d\u7f6e\u3002 [neutron] - \u8bbf\u95eeOpenStack\u7f51\u7edc\u670d\u52a1 [glance] - \u8bbf\u95eeOpenStack\u955c\u50cf\u670d\u52a1 [swift] - \u8bbf\u95eeOpenStack\u5bf9\u8c61\u5b58\u50a8\u670d\u52a1 [cinder] - \u8bbf\u95eeOpenStack\u5757\u5b58\u50a8\u670d\u52a1 [inspector] - \u8bbf\u95eeOpenStack\u88f8\u91d1\u5c5eintrospection\u670d\u52a1 [service_catalog] - \u4e00\u4e2a\u7279\u6b8a\u9879\u7528\u4e8e\u4fdd\u5b58\u88f8\u91d1\u5c5e\u670d\u52a1\u4f7f\u7528\u7684\u51ed\u8bc1\uff0c\u8be5\u51ed\u8bc1\u7528\u4e8e\u53d1\u73b0\u6ce8\u518c\u5728OpenStack\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u76ee\u5f55\u4e2d\u7684\u81ea\u5df1\u7684API URL\u7aef\u70b9 \u7b80\u5355\u8d77\u89c1\uff0c\u53ef\u4ee5\u5bf9\u6240\u6709\u670d\u52a1\u4f7f\u7528\u540c\u4e00\u4e2a\u670d\u52a1\u7528\u6237\u3002\u4e3a\u4e86\u5411\u540e\u517c\u5bb9\uff0c\u8be5\u7528\u6237\u5e94\u8be5\u548cironic-api\u670d\u52a1\u7684[keystone_authtoken]\u6240\u914d\u7f6e\u7684\u4e3a\u540c\u4e00\u4e2a\u7528\u6237\u3002\u4f46\u8fd9\u4e0d\u662f\u5fc5\u987b\u7684\uff0c\u4e5f\u53ef\u4ee5\u4e3a\u6bcf\u4e2a\u670d\u52a1\u521b\u5efa\u5e76\u914d\u7f6e\u4e0d\u540c\u7684\u670d\u52a1\u7528\u6237\u3002 \u5728\u4e0b\u9762\u7684\u793a\u4f8b\u4e2d\uff0c\u7528\u6237\u8bbf\u95eeOpenStack\u7f51\u7edc\u670d\u52a1\u7684\u8eab\u4efd\u9a8c\u8bc1\u4fe1\u606f\u914d\u7f6e\u4e3a\uff1a \u7f51\u7edc\u670d\u52a1\u90e8\u7f72\u5728\u540d\u4e3aRegionOne\u7684\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u57df\u4e2d\uff0c\u4ec5\u5728\u670d\u52a1\u76ee\u5f55\u4e2d\u6ce8\u518c\u516c\u5171\u7aef\u70b9\u63a5\u53e3 \u8bf7\u6c42\u65f6\u4f7f\u7528\u7279\u5b9a\u7684CA SSL\u8bc1\u4e66\u8fdb\u884cHTTPS\u8fde\u63a5 \u4e0eironic-api\u670d\u52a1\u914d\u7f6e\u76f8\u540c\u7684\u670d\u52a1\u7528\u6237 \u52a8\u6001\u5bc6\u7801\u8ba4\u8bc1\u63d2\u4ef6\u57fa\u4e8e\u5176\u4ed6\u9009\u9879\u53d1\u73b0\u5408\u9002\u7684\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1API\u7248\u672c [neutron] # Authentication type to load (string value) auth_type = password # Authentication URL (string value) auth_url=https://IDENTITY_IP:5000/ # Username (string value) username=ironic # User's password (string value) password=IRONIC_PASSWORD # Project name to scope to (string value) project_name=service # Domain ID containing project (string value) project_domain_id=default # User's domain id (string value) user_domain_id=default # PEM encoded Certificate Authority to use when verifying # HTTPs connections. (string value) cafile=/opt/stack/data/ca-bundle.pem # The default region_name for endpoint URL discovery. (string # value) region_name = RegionOne # List of interfaces, in order of preference, for endpoint # URL. 
(list value) valid_interfaces=public \u9ed8\u8ba4\u60c5\u51b5\u4e0b\uff0c\u4e3a\u4e86\u4e0e\u5176\u4ed6\u670d\u52a1\u8fdb\u884c\u901a\u4fe1\uff0c\u88f8\u91d1\u5c5e\u670d\u52a1\u4f1a\u5c1d\u8bd5\u901a\u8fc7\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u7684\u670d\u52a1\u76ee\u5f55\u53d1\u73b0\u8be5\u670d\u52a1\u5408\u9002\u7684\u7aef\u70b9\u3002\u5982\u679c\u5e0c\u671b\u5bf9\u4e00\u4e2a\u7279\u5b9a\u670d\u52a1\u4f7f\u7528\u4e00\u4e2a\u4e0d\u540c\u7684\u7aef\u70b9\uff0c\u5219\u5728\u88f8\u91d1\u5c5e\u670d\u52a1\u7684\u914d\u7f6e\u6587\u4ef6\u4e2d\u901a\u8fc7endpoint_override\u9009\u9879\u8fdb\u884c\u6307\u5b9a\uff1a [neutron] ... endpoint_override = 5\u3001\u914d\u7f6e\u5141\u8bb8\u7684\u9a71\u52a8\u7a0b\u5e8f\u548c\u786c\u4ef6\u7c7b\u578b \u901a\u8fc7\u8bbe\u7f6eenabled_hardware_types\u8bbe\u7f6eironic-conductor\u670d\u52a1\u5141\u8bb8\u4f7f\u7528\u7684\u786c\u4ef6\u7c7b\u578b\uff1a [DEFAULT] enabled_hardware_types = ipmi \u914d\u7f6e\u786c\u4ef6\u63a5\u53e3\uff1a enabled_boot_interfaces = pxe enabled_deploy_interfaces = direct,iscsi enabled_inspect_interfaces = inspector enabled_management_interfaces = ipmitool enabled_power_interfaces = ipmitool \u914d\u7f6e\u63a5\u53e3\u9ed8\u8ba4\u503c\uff1a [DEFAULT] default_deploy_interface = direct default_network_interface = neutron \u5982\u679c\u542f\u7528\u4e86\u4efb\u4f55\u4f7f\u7528Direct deploy\u7684\u9a71\u52a8\uff0c\u5fc5\u987b\u5b89\u88c5\u548c\u914d\u7f6e\u955c\u50cf\u670d\u52a1\u7684Swift\u540e\u7aef\u3002Ceph\u5bf9\u8c61\u7f51\u5173(RADOS\u7f51\u5173)\u4e5f\u652f\u6301\u4f5c\u4e3a\u955c\u50cf\u670d\u52a1\u7684\u540e\u7aef\u3002 6\u3001\u91cd\u542fironic-conductor\u670d\u52a1 sudo systemctl restart openstack-ironic-conductor \u914d\u7f6eironic-inspector\u670d\u52a1 \u914d\u7f6e\u6587\u4ef6\u8def\u5f84/etc/ironic-inspector/inspector.conf 1\u3001\u521b\u5efa\u6570\u636e\u5e93 # mysql -u root -p MariaDB [(none)]> CREATE DATABASE ironic_inspector CHARACTER SET utf8; MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic_inspector.* TO 'ironic_inspector'@'localhost' \\ IDENTIFIED BY 'IRONIC_INSPECTOR_DBPASSWORD'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic_inspector.* TO 'ironic_inspector'@'%' \\ IDENTIFIED BY 'IRONIC_INSPECTOR_DBPASSWORD'; 2\u3001\u901a\u8fc7 connection \u9009\u9879\u914d\u7f6e\u6570\u636e\u5e93\u7684\u4f4d\u7f6e\uff0c\u5982\u4e0b\u6240\u793a\uff0c\u66ff\u6362 IRONIC_INSPECTOR_DBPASSWORD \u4e3a ironic_inspector \u7528\u6237\u7684\u5bc6\u7801\uff0c\u66ff\u6362 DB_IP \u4e3aDB\u670d\u52a1\u5668\u6240\u5728\u7684IP\u5730\u5740\uff1a [database] backend = sqlalchemy connection = mysql+pymysql://ironic_inspector:IRONIC_INSPECTOR_DBPASSWORD@DB_IP/ironic_inspector min_pool_size = 100 max_pool_size = 500 pool_timeout = 30 max_retries = 5 max_overflow = 200 db_retry_interval = 2 db_inc_retry_interval = True db_max_retry_interval = 2 db_max_retries = 5 3\u3001\u914d\u7f6e\u6d88\u606f\u5ea6\u5217\u901a\u4fe1\u5730\u5740 [DEFAULT] transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/ 4\u3001\u8bbe\u7f6ekeystone\u8ba4\u8bc1 [DEFAULT] auth_strategy = keystone timeout = 900 rootwrap_config = /etc/ironic-inspector/rootwrap.conf logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_dir = /var/log/ironic-inspector state_path = /var/lib/ironic-inspector use_stderr = False [ironic] api_endpoint = http://IRONIC_API_HOST_ADDRRESS:6385 auth_type = password auth_url = http://PUBLIC_IDENTITY_IP:5000 auth_strategy = keystone 
ironic_url = http://IRONIC_API_HOST_ADDRRESS:6385 os_region = RegionOne project_name = service project_domain_name = Default user_domain_name = Default username = IRONIC_SERVICE_USER_NAME password = IRONIC_SERVICE_USER_PASSWORD [keystone_authtoken] auth_type = password auth_url = http://control:5000 www_authenticate_uri = http://control:5000 project_domain_name = default user_domain_name = default project_name = service username = ironic_inspector password = IRONICPASSWD region_name = RegionOne memcache_servers = control:11211 token_cache_time = 300 [processing] add_ports = active processing_hooks = $default_processing_hooks,local_link_connection,lldp_basic ramdisk_logs_dir = /var/log/ironic-inspector/ramdisk always_store_ramdisk_logs = true store_data =none power_off = false [pxe_filter] driver = iptables [capabilities] boot_mode=True 5\u3001\u914d\u7f6eironic inspector dnsmasq\u670d\u52a1 # \u914d\u7f6e\u6587\u4ef6\u5730\u5740\uff1a/etc/ironic-inspector/dnsmasq.conf port=0 interface=enp3s0 #\u66ff\u6362\u4e3a\u5b9e\u9645\u76d1\u542c\u7f51\u7edc\u63a5\u53e3 dhcp-range=172.20.19.100,172.20.19.110 #\u66ff\u6362\u4e3a\u5b9e\u9645dhcp\u5730\u5740\u8303\u56f4 bind-interfaces enable-tftp dhcp-match=set:efi,option:client-arch,7 dhcp-match=set:efi,option:client-arch,9 dhcp-match=aarch64, option:client-arch,11 dhcp-boot=tag:aarch64,grubaa64.efi dhcp-boot=tag:!aarch64,tag:efi,grubx64.efi dhcp-boot=tag:!aarch64,tag:!efi,pxelinux.0 tftp-root=/tftpboot #\u66ff\u6362\u4e3a\u5b9e\u9645tftpboot\u76ee\u5f55 log-facility=/var/log/dnsmasq.log 6\u3001\u5173\u95edironic provision\u7f51\u7edc\u5b50\u7f51\u7684dhcp openstack subnet set --no-dhcp 72426e89-f552-4dc4-9ac7-c4e131ce7f3c 7\u3001\u521d\u59cb\u5316ironic-inspector\u670d\u52a1\u7684\u6570\u636e\u5e93 \u5728\u63a7\u5236\u8282\u70b9\u6267\u884c\uff1a ironic-inspector-dbsync --config-file /etc/ironic-inspector/inspector.conf upgrade 8\u3001\u542f\u52a8\u670d\u52a1 systemctl enable --now openstack-ironic-inspector.service systemctl enable --now openstack-ironic-inspector-dnsmasq.service \u914d\u7f6ehttpd\u670d\u52a1 \u521b\u5efaironic\u8981\u4f7f\u7528\u7684httpd\u7684root\u76ee\u5f55\u5e76\u8bbe\u7f6e\u5c5e\u4e3b\u5c5e\u7ec4\uff0c\u76ee\u5f55\u8def\u5f84\u8981\u548c/etc/ironic/ironic.conf\u4e2d[deploy]\u7ec4\u4e2dhttp_root \u914d\u7f6e\u9879\u6307\u5b9a\u7684\u8def\u5f84\u8981\u4e00\u81f4\u3002 mkdir -p /var/lib/ironic/httproot ``chown ironic.ironic /var/lib/ironic/httproot \u5b89\u88c5\u548c\u914d\u7f6ehttpd\u670d\u52a1 \u5b89\u88c5httpd\u670d\u52a1\uff0c\u5df2\u6709\u8bf7\u5ffd\u7565 yum install httpd -y \u521b\u5efa/etc/httpd/conf.d/openstack-ironic-httpd.conf\u6587\u4ef6\uff0c\u5185\u5bb9\u5982\u4e0b\uff1a Listen 8080 ServerName ironic.openeuler.com ErrorLog \"/var/log/httpd/openstack-ironic-httpd-error_log\" CustomLog \"/var/log/httpd/openstack-ironic-httpd-access_log\" \"%h %l %u %t \\\"%r\\\" %>s %b\" DocumentRoot \"/var/lib/ironic/httproot\" Options Indexes FollowSymLinks Require all granted LogLevel warn AddDefaultCharset UTF-8 EnableSendfile on \u6ce8\u610f\u76d1\u542c\u7684\u7aef\u53e3\u8981\u548c/etc/ironic/ironic.conf\u91cc[deploy]\u9009\u9879\u4e2dhttp_url\u914d\u7f6e\u9879\u4e2d\u6307\u5b9a\u7684\u7aef\u53e3\u4e00\u81f4\u3002 \u91cd\u542fhttpd\u670d\u52a1\u3002 systemctl restart httpd deploy ramdisk\u955c\u50cf\u5236\u4f5c 
W\u7248\u7684ramdisk\u955c\u50cf\u652f\u6301\u901a\u8fc7ironic-python-agent\u670d\u52a1\u6216disk-image-builder\u5de5\u5177\u5236\u4f5c\uff0c\u4e5f\u53ef\u4ee5\u4f7f\u7528\u793e\u533a\u6700\u65b0\u7684ironic-python-agent-builder\u3002\u7528\u6237\u4e5f\u53ef\u4ee5\u81ea\u884c\u9009\u62e9\u5176\u4ed6\u5de5\u5177\u5236\u4f5c\u3002 \u82e5\u4f7f\u7528W\u7248\u539f\u751f\u5de5\u5177\uff0c\u5219\u9700\u8981\u5b89\u88c5\u5bf9\u5e94\u7684\u8f6f\u4ef6\u5305\u3002 yum install openstack-ironic-python-agent \u6216\u8005 yum install diskimage-builder \u5177\u4f53\u7684\u4f7f\u7528\u65b9\u6cd5\u53ef\u4ee5\u53c2\u8003 \u5b98\u65b9\u6587\u6863 \u8fd9\u91cc\u4ecb\u7ecd\u4e0b\u4f7f\u7528ironic-python-agent-builder\u6784\u5efaironic\u4f7f\u7528\u7684deploy\u955c\u50cf\u7684\u5b8c\u6574\u8fc7\u7a0b\u3002 \u5b89\u88c5 ironic-python-agent-builder 1. \u5b89\u88c5\u5de5\u5177\uff1a ```shell pip install ironic-python-agent-builder ``` 2. \u4fee\u6539\u4ee5\u4e0b\u6587\u4ef6\u4e2d\u7684python\u89e3\u91ca\u5668\uff1a ```shell /usr/bin/yum /usr/libexec/urlgrabber-ext-down ``` 3. \u5b89\u88c5\u5176\u5b83\u5fc5\u987b\u7684\u5de5\u5177\uff1a ```shell yum install git ``` \u7531\u4e8e`DIB`\u4f9d\u8d56`semanage`\u547d\u4ee4\uff0c\u6240\u4ee5\u5728\u5236\u4f5c\u955c\u50cf\u4e4b\u524d\u786e\u5b9a\u8be5\u547d\u4ee4\u662f\u5426\u53ef\u7528\uff1a`semanage --help`\uff0c\u5982\u679c\u63d0\u793a\u65e0\u6b64\u547d\u4ee4\uff0c\u5b89\u88c5\u5373\u53ef\uff1a ```shell # \u5148\u67e5\u8be2\u9700\u8981\u5b89\u88c5\u54ea\u4e2a\u5305 [root@localhost ~]# yum provides /usr/sbin/semanage \u5df2\u52a0\u8f7d\u63d2\u4ef6\uff1afastestmirror Loading mirror speeds from cached hostfile * base: mirror.vcu.edu * extras: mirror.vcu.edu * updates: mirror.math.princeton.edu policycoreutils-python-2.5-34.el7.aarch64 : SELinux policy core python utilities \u6e90 \uff1abase \u5339\u914d\u6765\u6e90\uff1a \u6587\u4ef6\u540d \uff1a/usr/sbin/semanage # \u5b89\u88c5 [root@localhost ~]# yum install policycoreutils-python ``` \u5236\u4f5c\u955c\u50cf \u5982\u679c\u662f`arm`\u67b6\u6784\uff0c\u9700\u8981\u6dfb\u52a0\uff1a ```shell export ARCH=aarch64 ``` \u57fa\u672c\u7528\u6cd5\uff1a ```shell usage: ironic-python-agent-builder [-h] [-r RELEASE] [-o OUTPUT] [-e ELEMENT] [-b BRANCH] [-v] [--extra-args EXTRA_ARGS] distribution positional arguments: distribution Distribution to use optional arguments: -h, --help show this help message and exit -r RELEASE, --release RELEASE Distribution release to use -o OUTPUT, --output OUTPUT Output base file name -e ELEMENT, --element ELEMENT Additional DIB element to use -b BRANCH, --branch BRANCH If set, override the branch that is used for ironic- python-agent and requirements -v, --verbose Enable verbose logging in diskimage-builder --extra-args EXTRA_ARGS Extra arguments to pass to diskimage-builder ``` \u4e3e\u4f8b\u8bf4\u660e\uff1a ```shell ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky ``` \u5141\u8bb8ssh\u767b\u5f55 \u521d\u59cb\u5316\u73af\u5883\u53d8\u91cf\uff0c\u7136\u540e\u5236\u4f5c\u955c\u50cf\uff1a ```shell export DIB_DEV_USER_USERNAME=ipa \\ export DIB_DEV_USER_PWDLESS_SUDO=yes \\ export DIB_DEV_USER_PASSWORD='123' ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky -e selinux-permissive -e devuser ``` \u6307\u5b9a\u4ee3\u7801\u4ed3\u5e93 \u521d\u59cb\u5316\u5bf9\u5e94\u7684\u73af\u5883\u53d8\u91cf\uff0c\u7136\u540e\u5236\u4f5c\u955c\u50cf\uff1a ```shell # \u6307\u5b9a\u4ed3\u5e93\u5730\u5740\u4ee5\u53ca\u7248\u672c 
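# (Added note, not in the original text) DIB_REPOLOCATION_<element> and
# DIB_REPOREF_<element> are standard diskimage-builder "source-repositories"
# variables; they point the ironic-python-agent element at an alternative git
# repository and ref instead of the default upstream source. The two pairs
# below are the document's own examples: an internal GitLab mirror and a
# change fetched from Gerrit.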
DIB_REPOLOCATION_ironic_python_agent=git@172.20.2.149:liuzz/ironic-python-agent.git DIB_REPOREF_ironic_python_agent=origin/develop # \u76f4\u63a5\u4ecegerrit\u4e0aclone\u4ee3\u7801 DIB_REPOLOCATION_ironic_python_agent=https://review.opendev.org/openstack/ironic-python-agent DIB_REPOREF_ironic_python_agent=refs/changes/43/701043/1 ``` \u53c2\u8003\uff1a[source-repositories](https://docs.openstack.org/diskimage-builder/latest/elements/source-repositories/README.html)\u3002 \u6307\u5b9a\u4ed3\u5e93\u5730\u5740\u53ca\u7248\u672c\u9a8c\u8bc1\u6210\u529f\u3002 \u6ce8\u610f \u539f\u751f\u7684openstack\u91cc\u7684pxe\u914d\u7f6e\u6587\u4ef6\u7684\u6a21\u7248\u4e0d\u652f\u6301arm64\u67b6\u6784\uff0c\u9700\u8981\u81ea\u5df1\u5bf9\u539f\u751fopenstack\u4ee3\u7801\u8fdb\u884c\u4fee\u6539\uff1a \u5728W\u7248\u4e2d\uff0c\u793e\u533a\u7684ironic\u4ecd\u7136\u4e0d\u652f\u6301arm64\u4f4d\u7684uefi pxe\u542f\u52a8\uff0c\u8868\u73b0\u4e3a\u751f\u6210\u7684grub.cfg\u6587\u4ef6(\u4e00\u822c\u4f4d\u4e8e/tftpboot/\u4e0b)\u683c\u5f0f\u4e0d\u5bf9\u800c\u5bfc\u81f4pxe\u542f\u52a8\u5931\u8d25\uff0c\u5982\u4e0b\uff1a \u751f\u6210\u7684\u9519\u8bef\u914d\u7f6e\u6587\u4ef6\uff1a ![ironic-err](../../img/install/ironic-err.png) \u5982\u4e0a\u56fe\u6240\u793a\uff0carm\u67b6\u6784\u91cc\u5bfb\u627evmlinux\u548cramdisk\u955c\u50cf\u7684\u547d\u4ee4\u5206\u522b\u662flinux\u548cinitrd\uff0c\u4e0a\u56fe\u6240\u793a\u7684\u6807\u7ea2\u547d\u4ee4\u662fx86\u67b6\u6784\u4e0b\u7684uefi pxe\u542f\u52a8\u3002 \u9700\u8981\u7528\u6237\u5bf9\u751f\u6210grub.cfg\u7684\u4ee3\u7801\u903b\u8f91\u81ea\u884c\u4fee\u6539\u3002 ironic\u5411ipa\u53d1\u9001\u67e5\u8be2\u547d\u4ee4\u6267\u884c\u72b6\u6001\u8bf7\u6c42\u7684tls\u62a5\u9519\uff1a w\u7248\u7684ipa\u548cironic\u9ed8\u8ba4\u90fd\u4f1a\u5f00\u542ftls\u8ba4\u8bc1\u7684\u65b9\u5f0f\u5411\u5bf9\u65b9\u53d1\u9001\u8bf7\u6c42\uff0c\u8ddf\u636e\u5b98\u7f51\u7684\u8bf4\u660e\u8fdb\u884c\u5173\u95ed\u5373\u53ef\u3002 1. \u4fee\u6539ironic\u914d\u7f6e\u6587\u4ef6(/etc/ironic/ironic.conf)\u4e0b\u9762\u7684\u914d\u7f6e\u4e2d\u6dfb\u52a0ipa-insecure=1\uff1a ``` [agent] verify_ca = False [pxe] pxe_append_params = nofb nomodeset vga=normal coreos.autologin ipa-insecure=1 ``` 2) ramdisk\u955c\u50cf\u4e2d\u6dfb\u52a0ipa\u914d\u7f6e\u6587\u4ef6/etc/ironic_python_agent/ironic_python_agent.conf\u5e76\u914d\u7f6etls\u7684\u914d\u7f6e\u5982\u4e0b\uff1a /etc/ironic_python_agent/ironic_python_agent.conf (\u9700\u8981\u63d0\u524d\u521b\u5efa/etc/ironic_python_agent\u76ee\u5f55\uff09 ``` [DEFAULT] enable_auto_tls = False ``` \u8bbe\u7f6e\u6743\u9650\uff1a ``` chown -R ipa.ipa /etc/ironic_python_agent/ ``` 3. 
\u4fee\u6539ipa\u670d\u52a1\u7684\u670d\u52a1\u542f\u52a8\u6587\u4ef6\uff0c\u6dfb\u52a0\u914d\u7f6e\u6587\u4ef6\u9009\u9879 vim usr/lib/systemd/system/ironic-python-agent.service ``` [Unit] Description=Ironic Python Agent After=network-online.target [Service] ExecStartPre=/sbin/modprobe vfat ExecStart=/usr/local/bin/ironic-python-agent --config-file /etc/ironic_python_agent/ironic_python_agent.conf Restart=always RestartSec=30s [Install] WantedBy=multi-user.target ```","title":"Ironic \u5b89\u88c5"},{"location":"install/openEuler-22.03-LTS/OpenStack-wallaby/#kolla","text":"Kolla\u4e3aOpenStack\u670d\u52a1\u63d0\u4f9b\u751f\u4ea7\u73af\u5883\u53ef\u7528\u7684\u5bb9\u5668\u5316\u90e8\u7f72\u7684\u529f\u80fd\u3002openEuler 22.03 LTS\u4e2d\u5f15\u5165\u4e86Kolla\u548cKolla-ansible\u670d\u52a1\u3002 Kolla\u7684\u5b89\u88c5\u5341\u5206\u7b80\u5355\uff0c\u53ea\u9700\u8981\u5b89\u88c5\u5bf9\u5e94\u7684RPM\u5305\u5373\u53ef yum install openstack-kolla openstack-kolla-ansible \u5b89\u88c5\u5b8c\u540e\uff0c\u5c31\u53ef\u4ee5\u4f7f\u7528 kolla-ansible , kolla-build , kolla-genpwd , kolla-mergepwd \u7b49\u547d\u4ee4\u4e86\u3002","title":"Kolla \u5b89\u88c5"},{"location":"install/openEuler-22.03-LTS/OpenStack-wallaby/#trove","text":"Trove\u662fOpenStack\u7684\u6570\u636e\u5e93\u670d\u52a1\uff0c\u5982\u679c\u7528\u6237\u4f7f\u7528OpenStack\u63d0\u4f9b\u7684\u6570\u636e\u5e93\u670d\u52a1\u5219\u63a8\u8350\u4f7f\u7528\u8be5\u7ec4\u4ef6\u3002\u5426\u5219\uff0c\u53ef\u4ee5\u4e0d\u7528\u5b89\u88c5\u3002 \u8bbe\u7f6e\u6570\u636e\u5e93 \u6570\u636e\u5e93\u670d\u52a1\u5728\u6570\u636e\u5e93\u4e2d\u5b58\u50a8\u4fe1\u606f\uff0c\u521b\u5efa\u4e00\u4e2a trove \u7528\u6237\u53ef\u4ee5\u8bbf\u95ee\u7684 trove \u6570\u636e\u5e93\uff0c\u66ff\u6362 TROVE_DBPASSWORD \u4e3a\u5408\u9002\u7684\u5bc6\u7801 mysql -u root -p MariaDB [(none)]> CREATE DATABASE trove CHARACTER SET utf8; MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'localhost' \\ IDENTIFIED BY 'TROVE_DBPASSWORD'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'%' \\ IDENTIFIED BY 'TROVE_DBPASSWORD'; \u521b\u5efa\u670d\u52a1\u7528\u6237\u8ba4\u8bc1 1\u3001\u521b\u5efa Trove \u670d\u52a1\u7528\u6237 openstack user create --password TROVE_PASSWORD \\ --email trove@example.com trove openstack role add --project service --user trove admin openstack service create --name trove --description \"Database service\" database \u89e3\u91ca\uff1a TROVE_PASSWORD \u66ff\u6362\u4e3a trove \u7528\u6237\u7684\u5bc6\u7801 2\u3001\u521b\u5efa Database \u670d\u52a1\u8bbf\u95ee\u5165\u53e3 openstack endpoint create --region RegionOne database public http://controller:8779/v1.0/%\\(tenant_id\\)s openstack endpoint create --region RegionOne database internal http://controller:8779/v1.0/%\\(tenant_id\\)s openstack endpoint create --region RegionOne database admin http://controller:8779/v1.0/%\\(tenant_id\\)s \u5b89\u88c5\u548c\u914d\u7f6e Trove \u5404\u7ec4\u4ef6 1\u3001\u5b89\u88c5 Trove \u5305 ``shell script yum install openstack-trove python-troveclient 2. 
\u914d\u7f6e`trove.conf` ```shell script vim /etc/trove/trove.conf [DEFAULT] bind_host=TROVE_NODE_IP log_dir = /var/log/trove network_driver = trove.network.neutron.NeutronDriver management_security_groups = nova_keypair = trove-mgmt default_datastore = mysql taskmanager_manager = trove.taskmanager.manager.Manager trove_api_workers = 5 transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/ reboot_time_out = 300 usage_timeout = 900 agent_call_high_timeout = 1200 use_syslog = False debug = True # Set these if using Neutron Networking network_driver=trove.network.neutron.NeutronDriver network_label_regex=.* transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/ [database] connection = mysql+pymysql://trove:TROVE_DBPASS@controller/trove [keystone_authtoken] project_domain_name = Default project_name = service user_domain_name = Default password = trove username = trove auth_url = http://controller:5000/v3/ auth_type = password [service_credentials] auth_url = http://controller:5000/v3/ region_name = RegionOne project_name = service password = trove project_domain_name = Default user_domain_name = Default username = trove [mariadb] tcp_ports = 3306,4444,4567,4568 [mysql] tcp_ports = 3306 [postgresql] tcp_ports = 5432 **\u89e3\u91ca\uff1a** - [Default] \u5206\u7ec4\u4e2d bind_host \u914d\u7f6e\u4e3aTrove\u90e8\u7f72\u8282\u70b9\u7684IP - nova_compute_url \u548c cinder_url \u4e3aNova\u548cCinder\u5728Keystone\u4e2d\u521b\u5efa\u7684endpoint - nova_proxy_XXX \u4e3a\u4e00\u4e2a\u80fd\u8bbf\u95eeNova\u670d\u52a1\u7684\u7528\u6237\u4fe1\u606f\uff0c\u4e0a\u4f8b\u4e2d\u4f7f\u7528 admin \u7528\u6237\u4e3a\u4f8b - transport_url \u4e3a RabbitMQ \u8fde\u63a5\u4fe1\u606f\uff0c RABBIT_PASS \u66ff\u6362\u4e3aRabbitMQ\u7684\u5bc6\u7801 - [database] \u5206\u7ec4\u4e2d\u7684 connection \u4e3a\u524d\u9762\u5728mysql\u4e2d\u4e3aTrove\u521b\u5efa\u7684\u6570\u636e\u5e93\u4fe1\u606f - Trove\u7684\u7528\u6237\u4fe1\u606f\u4e2d TROVE_PASS`\u66ff\u6362\u4e3a\u5b9e\u9645trove\u7528\u6237\u7684\u5bc6\u7801 \u914d\u7f6e trove-guestagent.conf ```shell script vim /etc/trove/trove-guestagent.conf [DEFAULT] log_file = trove-guestagent.log log_dir = /var/log/trove/ ignore_users = os_admin control_exchange = trove transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/ rpc_backend = rabbit command_process_timeout = 60 use_syslog = False debug = True [service_credentials] auth_url = http://controller:5000/v3/ region_name = RegionOne project_name = service password = TROVE_PASS project_domain_name = Default user_domain_name = Default username = trove [mysql] docker_image = your-registry/your-repo/mysql backup_docker_image = your-registry/your-repo/db-backup-mysql:1.1.0 **\u89e3\u91ca\uff1a** `guestagent`\u662ftrove\u4e2d\u4e00\u4e2a\u72ec\u7acb\u7ec4\u4ef6\uff0c\u9700\u8981\u9884\u5148\u5185\u7f6e\u5230Trove\u901a\u8fc7Nova\u521b\u5efa\u7684\u865a\u62df \u673a\u955c\u50cf\u4e2d\uff0c\u5728\u521b\u5efa\u597d\u6570\u636e\u5e93\u5b9e\u4f8b\u540e\uff0c\u4f1a\u8d77guestagent\u8fdb\u7a0b\uff0c\u8d1f\u8d23\u901a\u8fc7\u6d88\u606f\u961f\u5217\uff08RabbitMQ\uff09\u5411Trove\u4e0a \u62a5\u5fc3\u8df3\uff0c\u56e0\u6b64\u9700\u8981\u914d\u7f6eRabbitMQ\u7684\u7528\u6237\u548c\u5bc6\u7801\u4fe1\u606f\u3002 **\u4eceVictoria\u7248\u5f00\u59cb\uff0cTrove\u4f7f\u7528\u4e00\u4e2a\u7edf\u4e00\u7684\u955c\u50cf\u6765\u8dd1\u4e0d\u540c\u7c7b\u578b\u7684\u6570\u636e\u5e93\uff0c\u6570\u636e\u5e93\u670d\u52a1\u8fd0\u884c\u5728Guest\u865a\u62df\u673a\u7684Docker\u5bb9\u5668\u4e2d\u3002** - `transport_url` 
\u4e3a`RabbitMQ`\u8fde\u63a5\u4fe1\u606f\uff0c`RABBIT_PASS`\u66ff\u6362\u4e3aRabbitMQ\u7684\u5bc6\u7801 - Trove\u7684\u7528\u6237\u4fe1\u606f\u4e2d`TROVE_PASS`\u66ff\u6362\u4e3a\u5b9e\u9645trove\u7528\u6237\u7684\u5bc6\u7801 6. \u751f\u6210\u6570\u636e`Trove`\u6570\u636e\u5e93\u8868 ```shell script su -s /bin/sh -c \"trove-manage db_sync\" trove 4. \u5b8c\u6210\u5b89\u88c5\u914d\u7f6e 1. \u914d\u7f6e Trove \u670d\u52a1\u81ea\u542f\u52a8 ```shell script systemctl enable openstack-trove-api.service \\ openstack-trove-taskmanager.service \\ openstack-trove-conductor.service 2. \u542f\u52a8\u670d\u52a1 ```shell script systemctl start openstack-trove-api.service \\ openstack-trove-taskmanager.service \\ openstack-trove-conductor.service","title":"Trove \u5b89\u88c5"},{"location":"install/openEuler-22.03-LTS/OpenStack-wallaby/#swift","text":"Swift \u63d0\u4f9b\u4e86\u5f39\u6027\u53ef\u4f38\u7f29\u3001\u9ad8\u53ef\u7528\u7684\u5206\u5e03\u5f0f\u5bf9\u8c61\u5b58\u50a8\u670d\u52a1\uff0c\u9002\u5408\u5b58\u50a8\u5927\u89c4\u6a21\u975e\u7ed3\u6784\u5316\u6570\u636e\u3002 \u521b\u5efa\u670d\u52a1\u51ed\u8bc1\u3001API\u7aef\u70b9\u3002 \u521b\u5efa\u670d\u52a1\u51ed\u8bc1 #\u521b\u5efaswift\u7528\u6237\uff1a openstack user create --domain default --password-prompt swift #\u4e3aswift\u7528\u6237\u6dfb\u52a0admin\u89d2\u8272\uff1a openstack role add --project service --user swift admin #\u521b\u5efaswift\u670d\u52a1\u5b9e\u4f53\uff1a openstack service create --name swift --description \"OpenStack Object Storage\" object-store \u521b\u5efaswift API \u7aef\u70b9: openstack endpoint create --region RegionOne object-store public http://controller:8080/v1/AUTH_%\\(project_id\\)s openstack endpoint create --region RegionOne object-store internal http://controller:8080/v1/AUTH_%\\(project_id\\)s openstack endpoint create --region RegionOne object-store admin http://controller:8080/v1 \u5b89\u88c5\u8f6f\u4ef6\u5305\uff1a yum install openstack-swift-proxy python3-swiftclient python3-keystoneclient python3-keystonemiddleware memcached \uff08CTL\uff09 \u914d\u7f6eproxy-server\u76f8\u5173\u914d\u7f6e Swift RPM\u5305\u91cc\u5df2\u7ecf\u5305\u542b\u4e86\u4e00\u4e2a\u57fa\u672c\u53ef\u7528\u7684proxy-server.conf\uff0c\u53ea\u9700\u8981\u624b\u52a8\u4fee\u6539\u5176\u4e2d\u7684ip\u548cswift password\u5373\u53ef\u3002 ***\u6ce8\u610f*** **\u6ce8\u610f\u66ff\u6362password\u4e3a\u60a8\u5728\u8eab\u4efd\u670d\u52a1\u4e2d\u4e3aswift\u7528\u6237\u9009\u62e9\u7684\u5bc6\u7801** \u5b89\u88c5\u548c\u914d\u7f6e\u5b58\u50a8\u8282\u70b9 \uff08STG\uff09 \u5b89\u88c5\u652f\u6301\u7684\u7a0b\u5e8f\u5305: yum install xfsprogs rsync \u5c06/dev/vdb\u548c/dev/vdc\u8bbe\u5907\u683c\u5f0f\u5316\u4e3a XFS mkfs.xfs /dev/vdb mkfs.xfs /dev/vdc \u521b\u5efa\u6302\u8f7d\u70b9\u76ee\u5f55\u7ed3\u6784: mkdir -p /srv/node/vdb mkdir -p /srv/node/vdc \u627e\u5230\u65b0\u5206\u533a\u7684 UUID: blkid \u7f16\u8f91/etc/fstab\u6587\u4ef6\u5e76\u5c06\u4ee5\u4e0b\u5185\u5bb9\u6dfb\u52a0\u5230\u5176\u4e2d: UUID=\"\" /srv/node/vdb xfs noatime 0 2 UUID=\"\" /srv/node/vdc xfs noatime 0 2 \u6302\u8f7d\u8bbe\u5907\uff1a mount /srv/node/vdb mount /srv/node/vdc \u6ce8\u610f \u5982\u679c\u7528\u6237\u4e0d\u9700\u8981\u5bb9\u707e\u529f\u80fd\uff0c\u4ee5\u4e0a\u6b65\u9aa4\u53ea\u9700\u8981\u521b\u5efa\u4e00\u4e2a\u8bbe\u5907\u5373\u53ef\uff0c\u540c\u65f6\u53ef\u4ee5\u8df3\u8fc7\u4e0b\u9762\u7684rsync\u914d\u7f6e \uff08\u53ef\u9009\uff09\u521b\u5efa\u6216\u7f16\u8f91/etc/rsyncd.conf\u6587\u4ef6\u4ee5\u5305\u542b\u4ee5\u4e0b\u5185\u5bb9: [DEFAULT] uid = swift gid = swift 
log file = /var/log/rsyncd.log pid file = /var/run/rsyncd.pid address = MANAGEMENT_INTERFACE_IP_ADDRESS [account] max connections = 2 path = /srv/node/ read only = False lock file = /var/lock/account.lock [container] max connections = 2 path = /srv/node/ read only = False lock file = /var/lock/container.lock [object] max connections = 2 path = /srv/node/ read only = False lock file = /var/lock/object.lock \u66ff\u6362MANAGEMENT_INTERFACE_IP_ADDRESS\u4e3a\u5b58\u50a8\u8282\u70b9\u4e0a\u7ba1\u7406\u7f51\u7edc\u7684IP\u5730\u5740 \u542f\u52a8rsyncd\u670d\u52a1\u5e76\u914d\u7f6e\u5b83\u5728\u7cfb\u7edf\u542f\u52a8\u65f6\u542f\u52a8: systemctl enable rsyncd.service systemctl start rsyncd.service \u5728\u5b58\u50a8\u8282\u70b9\u5b89\u88c5\u548c\u914d\u7f6e\u7ec4\u4ef6 \uff08STG\uff09 \u5b89\u88c5\u8f6f\u4ef6\u5305: yum install openstack-swift-account openstack-swift-container openstack-swift-object \u7f16\u8f91/etc/swift\u76ee\u5f55\u7684account-server.conf\u3001container-server.conf\u548cobject-server.conf\u6587\u4ef6\uff0c\u66ff\u6362bind_ip\u4e3a\u5b58\u50a8\u8282\u70b9\u4e0a\u7ba1\u7406\u7f51\u7edc\u7684IP\u5730\u5740\u3002 \u786e\u4fdd\u6302\u8f7d\u70b9\u76ee\u5f55\u7ed3\u6784\u7684\u6b63\u786e\u6240\u6709\u6743: chown -R swift:swift /srv/node \u521b\u5efarecon\u76ee\u5f55\u5e76\u786e\u4fdd\u5176\u62e5\u6709\u6b63\u786e\u7684\u6240\u6709\u6743\uff1a mkdir -p /var/cache/swift chown -R root:swift /var/cache/swift chmod -R 775 /var/cache/swift \u521b\u5efa\u8d26\u53f7\u73af (CTL) \u5207\u6362\u5230/etc/swift\u76ee\u5f55\u3002 cd /etc/swift \u521b\u5efa\u57fa\u7840account.builder\u6587\u4ef6: swift-ring-builder account.builder create 10 1 1 \u5c06\u6bcf\u4e2a\u5b58\u50a8\u8282\u70b9\u6dfb\u52a0\u5230\u73af\u4e2d\uff1a swift-ring-builder account.builder add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6202 --device DEVICE_NAME --weight DEVICE_WEIGHT \u66ff\u6362STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS\u4e3a\u5b58\u50a8\u8282\u70b9\u4e0a\u7ba1\u7406\u7f51\u7edc\u7684IP\u5730\u5740\u3002\u66ff\u6362DEVICE_NAME\u4e3a\u540c\u4e00\u5b58\u50a8\u8282\u70b9\u4e0a\u7684\u5b58\u50a8\u8bbe\u5907\u540d\u79f0 \u6ce8\u610f *** *\u5bf9\u6bcf\u4e2a\u5b58\u50a8\u8282\u70b9\u4e0a\u7684\u6bcf\u4e2a\u5b58\u50a8\u8bbe\u5907\u91cd\u590d\u6b64\u547d\u4ee4 \u9a8c\u8bc1\u6212\u6307\u5185\u5bb9\uff1a swift-ring-builder account.builder \u91cd\u65b0\u5e73\u8861\u6212\u6307\uff1a swift-ring-builder account.builder rebalance \u521b\u5efa\u5bb9\u5668\u73af (CTL) \u5207\u6362\u5230 /etc/swift \u76ee\u5f55\u3002 \u521b\u5efa\u57fa\u7840 container.builder \u6587\u4ef6\uff1a swift-ring-builder container.builder create 10 1 1 \u5c06\u6bcf\u4e2a\u5b58\u50a8\u8282\u70b9\u6dfb\u52a0\u5230\u73af\u4e2d\uff1a swift-ring-builder container.builder \\ add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6201 \\ --device DEVICE_NAME --weight 100 \u66ff\u6362STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS\u4e3a\u5b58\u50a8\u8282\u70b9\u4e0a\u7ba1\u7406\u7f51\u7edc\u7684IP\u5730\u5740\u3002\u66ff\u6362DEVICE_NAME\u4e3a\u540c\u4e00\u5b58\u50a8\u8282\u70b9\u4e0a\u7684\u5b58\u50a8\u8bbe\u5907\u540d\u79f0 \u6ce8\u610f \u5bf9\u6bcf\u4e2a\u5b58\u50a8\u8282\u70b9\u4e0a\u7684\u6bcf\u4e2a\u5b58\u50a8\u8bbe\u5907\u91cd\u590d\u6b64\u547d\u4ee4 \u9a8c\u8bc1\u6212\u6307\u5185\u5bb9\uff1a swift-ring-builder container.builder \u91cd\u65b0\u5e73\u8861\u6212\u6307\uff1a swift-ring-builder container.builder rebalance \u521b\u5efa\u5bf9\u8c61\u73af (CTL) \u5207\u6362\u5230 /etc/swift 
\u76ee\u5f55\u3002 \u521b\u5efa\u57fa\u7840 object.builder \u6587\u4ef6\uff1a swift-ring-builder object.builder create 10 1 1 \u5c06\u6bcf\u4e2a\u5b58\u50a8\u8282\u70b9\u6dfb\u52a0\u5230\u73af\u4e2d swift-ring-builder object.builder \\ add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6200 \\ --device DEVICE_NAME --weight 100 \u66ff\u6362STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS\u4e3a\u5b58\u50a8\u8282\u70b9\u4e0a\u7ba1\u7406\u7f51\u7edc\u7684IP\u5730\u5740\u3002\u66ff\u6362DEVICE_NAME\u4e3a\u540c\u4e00\u5b58\u50a8\u8282\u70b9\u4e0a\u7684\u5b58\u50a8\u8bbe\u5907\u540d\u79f0 \u6ce8\u610f *** *\u5bf9\u6bcf\u4e2a\u5b58\u50a8\u8282\u70b9\u4e0a\u7684\u6bcf\u4e2a\u5b58\u50a8\u8bbe\u5907\u91cd\u590d\u6b64\u547d\u4ee4 \u9a8c\u8bc1\u6212\u6307\u5185\u5bb9\uff1a swift-ring-builder object.builder \u91cd\u65b0\u5e73\u8861\u6212\u6307\uff1a swift-ring-builder object.builder rebalance \u5206\u53d1\u73af\u914d\u7f6e\u6587\u4ef6\uff1a \u5c06 account.ring.gz \uff0c container.ring.gz \u4ee5\u53ca object.ring.gz \u6587\u4ef6\u590d\u5236\u5230\u6bcf\u4e2a\u5b58\u50a8\u8282\u70b9\u548c\u8fd0\u884c\u4ee3\u7406\u670d\u52a1\u7684\u4efb\u4f55\u5176\u4ed6\u8282\u70b9\u4e0a\u7684 /etc/swift \u76ee\u5f55\u3002 \u5b8c\u6210\u5b89\u88c5 \u7f16\u8f91 /etc/swift/swift.conf \u6587\u4ef6 [swift-hash] swift_hash_path_suffix = test-hash swift_hash_path_prefix = test-hash [storage-policy:0] name = Policy-0 default = yes \u7528\u552f\u4e00\u503c\u66ff\u6362 test-hash \u5c06swift.conf\u6587\u4ef6\u590d\u5236\u5230/etc/swift\u6bcf\u4e2a\u5b58\u50a8\u8282\u70b9\u548c\u8fd0\u884c\u4ee3\u7406\u670d\u52a1\u7684\u4efb\u4f55\u5176\u4ed6\u8282\u70b9\u4e0a\u7684\u76ee\u5f55\u3002 \u5728\u6240\u6709\u8282\u70b9\u4e0a\uff0c\u786e\u4fdd\u914d\u7f6e\u76ee\u5f55\u7684\u6b63\u786e\u6240\u6709\u6743\uff1a chown -R root:swift /etc/swift \u5728\u63a7\u5236\u5668\u8282\u70b9\u548c\u8fd0\u884c\u4ee3\u7406\u670d\u52a1\u7684\u4efb\u4f55\u5176\u4ed6\u8282\u70b9\u4e0a\uff0c\u542f\u52a8\u5bf9\u8c61\u5b58\u50a8\u4ee3\u7406\u670d\u52a1\u53ca\u5176\u4f9d\u8d56\u9879\uff0c\u5e76\u5c06\u5b83\u4eec\u914d\u7f6e\u4e3a\u5728\u7cfb\u7edf\u542f\u52a8\u65f6\u542f\u52a8\uff1a systemctl enable openstack-swift-proxy.service memcached.service systemctl start openstack-swift-proxy.service memcached.service \u5728\u5b58\u50a8\u8282\u70b9\u4e0a\uff0c\u542f\u52a8\u5bf9\u8c61\u5b58\u50a8\u670d\u52a1\u5e76\u5c06\u5b83\u4eec\u914d\u7f6e\u4e3a\u5728\u7cfb\u7edf\u542f\u52a8\u65f6\u542f\u52a8\uff1a systemctl enable openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service systemctl start openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service systemctl enable openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service systemctl start openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service systemctl enable openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service systemctl start openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service","title":"Swift 
## Cyborg Installation

Cyborg provides accelerator device support for OpenStack, including GPU, FPGA, ASIC, NP, SoCs, NVMe/NOF SSDs, ODP, DPDK/SPDK and so on.

Initialize the corresponding database:

```
CREATE DATABASE cyborg;
GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'localhost' IDENTIFIED BY 'CYBORG_DBPASS';
GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'%' IDENTIFIED BY 'CYBORG_DBPASS';
```

Create the corresponding Keystone resources:

```
$ openstack user create --domain default --password-prompt cyborg
$ openstack role add --project service --user cyborg admin
$ openstack service create --name cyborg --description "Acceleration Service" accelerator
$ openstack endpoint create --region RegionOne \
  accelerator public http://:6666/v1
$ openstack endpoint create --region RegionOne \
  accelerator internal http://:6666/v1
$ openstack endpoint create --region RegionOne \
  accelerator admin http://:6666/v1
```

Install Cyborg:

```
yum install openstack-cyborg
```

Configure Cyborg by modifying /etc/cyborg/cyborg.conf:

```
[DEFAULT]
transport_url = rabbit://%RABBITMQ_USER%:%RABBITMQ_PASSWORD%@%OPENSTACK_HOST_IP%:5672/
use_syslog = False
state_path = /var/lib/cyborg
debug = True

[database]
connection = mysql+pymysql://%DATABASE_USER%:%DATABASE_PASSWORD%@%OPENSTACK_HOST_IP%/cyborg

[service_catalog]
project_domain_id = default
user_domain_id = default
project_name = service
password = PASSWORD
username = cyborg
auth_url = http://%OPENSTACK_HOST_IP%/identity
auth_type = password

[placement]
project_domain_name = Default
project_name = service
user_domain_name = Default
password = PASSWORD
username = placement
auth_url = http://%OPENSTACK_HOST_IP%/identity
auth_type = password

[keystone_authtoken]
memcached_servers = localhost:11211
project_domain_name = Default
project_name = service
user_domain_name = Default
password = PASSWORD
username = cyborg
auth_url = http://%OPENSTACK_HOST_IP%/identity
auth_type = password
```

Adjust the usernames, passwords, IP addresses and other placeholders to match your environment.

Sync the database tables:

```
cyborg-dbsync --config-file /etc/cyborg/cyborg.conf upgrade
```

Start the Cyborg services:

```
systemctl enable openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent
systemctl start openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent
```
## Aodh Installation

Create the database:

```
CREATE DATABASE aodh;
GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'localhost' IDENTIFIED BY 'AODH_DBPASS';
GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'%' IDENTIFIED BY 'AODH_DBPASS';
```

Create the corresponding Keystone resources:

```
openstack user create --domain default --password-prompt aodh
openstack role add --project service --user aodh admin
openstack service create --name aodh --description "Telemetry" alarming
openstack endpoint create --region RegionOne alarming public http://controller:8042
openstack endpoint create --region RegionOne alarming internal http://controller:8042
openstack endpoint create --region RegionOne alarming admin http://controller:8042
```

Install Aodh:

```
yum install openstack-aodh-api openstack-aodh-evaluator openstack-aodh-notifier openstack-aodh-listener openstack-aodh-expirer python3-aodhclient
```

Note: the python3-pyparsing package that Aodh depends on in the openEuler OS repository is not compatible; it must be overridden with the build shipped for the corresponding OpenStack version. Use `yum list | grep pyparsing | grep OpenStack | awk '{print $2}'` to obtain the matching version VERSION, then run `yum install -y python3-pyparsing-VERSION` to install the compatible pyparsing (see the scripted sketch at the end of this section).

Modify the configuration file:

```
[database]
connection = mysql+pymysql://aodh:AODH_DBPASS@controller/aodh

[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = aodh
password = AODH_PASS

[service_credentials]
auth_type = password
auth_url = http://controller:5000/v3
project_domain_id = default
user_domain_id = default
project_name = service
username = aodh
password = AODH_PASS
interface = internalURL
region_name = RegionOne
```

Initialize the database:

```
aodh-dbsync
```

Start the Aodh services:

```
systemctl enable openstack-aodh-api.service openstack-aodh-evaluator.service openstack-aodh-notifier.service openstack-aodh-listener.service
systemctl start openstack-aodh-api.service openstack-aodh-evaluator.service openstack-aodh-notifier.service openstack-aodh-listener.service
```
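The pyparsing override described in the note above can be scripted. This is only a sketch chaining the two commands from the note; it assumes `yum list` reports exactly one matching OpenStack build of python3-pyparsing in your configured repositories.

```
# Look up the OpenStack-specific build of python3-pyparsing and install it
# over the version from the OS repository (verify the reported version first).
PYPARSING_VERSION=$(yum list | grep pyparsing | grep OpenStack | awk '{print $2}')
yum install -y "python3-pyparsing-${PYPARSING_VERSION}"
```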
## Gnocchi Installation

Create the database:

```
CREATE DATABASE gnocchi;
GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'localhost' IDENTIFIED BY 'GNOCCHI_DBPASS';
GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'%' IDENTIFIED BY 'GNOCCHI_DBPASS';
```

Create the corresponding Keystone resources:

```
openstack user create --domain default --password-prompt gnocchi
openstack role add --project service --user gnocchi admin
openstack service create --name gnocchi --description "Metric Service" metric
openstack endpoint create --region RegionOne metric public http://controller:8041
openstack endpoint create --region RegionOne metric internal http://controller:8041
openstack endpoint create --region RegionOne metric admin http://controller:8041
```

Install Gnocchi:

```
yum install openstack-gnocchi-api openstack-gnocchi-metricd python3-gnocchiclient
```

Modify the configuration file /etc/gnocchi/gnocchi.conf:

```
[api]
auth_mode = keystone
port = 8041
uwsgi_mode = http-socket
[keystone_authtoken]
auth_type = password
auth_url = http://controller:5000/v3
project_domain_name = Default
user_domain_name = Default
project_name = service
username = gnocchi
password = GNOCCHI_PASS
interface = internalURL
region_name = RegionOne
[indexer]
url = mysql+pymysql://gnocchi:GNOCCHI_DBPASS@controller/gnocchi
[storage]
# coordination_url is not required but specifying one will improve
# performance with better workload division across workers.
coordination_url = redis://controller:6379
file_basepath = /var/lib/gnocchi
driver = file
```

Initialize the database:

```
gnocchi-upgrade
```

Start the Gnocchi services:

```
systemctl enable openstack-gnocchi-api.service openstack-gnocchi-metricd.service
systemctl start openstack-gnocchi-api.service openstack-gnocchi-metricd.service
```

## Ceilometer Installation

Create the corresponding Keystone resources:

```
openstack user create --domain default --password-prompt ceilometer
openstack role add --project service --user ceilometer admin
openstack service create --name ceilometer --description "Telemetry" metering
```

Install Ceilometer:

```
yum install openstack-ceilometer-notification openstack-ceilometer-central
```

Modify the configuration file /etc/ceilometer/pipeline.yaml:

```
publishers:
    # set address of Gnocchi
    # + filter out Gnocchi-related activity meters (Swift driver)
    # + set default archive policy
    - gnocchi://?filter_project=service&archive_policy=low
```

Modify the configuration file /etc/ceilometer/ceilometer.conf:

```
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
[service_credentials]
auth_type = password
auth_url = http://controller:5000/v3
project_domain_id = default
user_domain_id = default
project_name = service
username = ceilometer
password = CEILOMETER_PASS
interface = internalURL
region_name = RegionOne
```

Initialize the database:

```
ceilometer-upgrade
```

Start the Ceilometer services:

```
systemctl enable openstack-ceilometer-notification.service openstack-ceilometer-central.service
systemctl start openstack-ceilometer-notification.service openstack-ceilometer-central.service
```
## Heat Installation

Create the heat database and grant it the proper access rights, replacing HEAT_DBPASS with a suitable password:

```
CREATE DATABASE heat;
GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'localhost' IDENTIFIED BY 'HEAT_DBPASS';
GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'%' IDENTIFIED BY 'HEAT_DBPASS';
```

Create the service credentials: create the heat user and add the admin role to it:

```
openstack user create --domain default --password-prompt heat
openstack role add --project service --user heat admin
```

Create the heat and heat-cfn services and their API endpoints:

```
openstack service create --name heat --description "Orchestration" orchestration
openstack service create --name heat-cfn --description "Orchestration" cloudformation
openstack endpoint create --region RegionOne orchestration public http://controller:8004/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne orchestration internal http://controller:8004/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne orchestration admin http://controller:8004/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne cloudformation public http://controller:8000/v1
openstack endpoint create --region RegionOne cloudformation internal http://controller:8000/v1
openstack endpoint create --region RegionOne cloudformation admin http://controller:8000/v1
```

Create the additional resources required for stack management, including the heat domain, its domain admin user heat_domain_admin, and the heat_stack_owner and heat_stack_user roles:

```
openstack user create --domain heat --password-prompt heat_domain_admin
openstack role add --domain heat --user-domain heat --user heat_domain_admin admin
openstack role create heat_stack_owner
openstack role create heat_stack_user
```

Install the packages:

```
yum install openstack-heat-api openstack-heat-api-cfn openstack-heat-engine
```

Modify the configuration file /etc/heat/heat.conf:

```
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
heat_metadata_server_url = http://controller:8000
heat_waitcondition_server_url = http://controller:8000/v1/waitcondition
stack_domain_admin = heat_domain_admin
stack_domain_admin_password = HEAT_DOMAIN_PASS
stack_user_domain_name = heat
[database]
connection = mysql+pymysql://heat:HEAT_DBPASS@controller/heat
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = heat
password = HEAT_PASS
[trustee]
auth_type = password
auth_url = http://controller:5000
username = heat
password = HEAT_PASS
user_domain_name = default
[clients_keystone]
auth_uri = http://controller:5000
```

Initialize the heat database tables:

```
su -s /bin/sh -c "heat-manage db_sync" heat
```

Start the services:

```
systemctl enable openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service
systemctl start openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service
```
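Once the Heat services are running, a quick way to check the installation is to create a minimal stack. The template below is only an illustrative sketch; the image name (cirros), flavor (m1.tiny) and network (provider-net) are assumptions and must already exist in your environment.

```
# Sketch: create and list a minimal Heat stack with a single server resource.
cat > test-stack.yaml << 'EOF'
heat_template_version: 2018-08-31
resources:
  test_server:
    type: OS::Nova::Server
    properties:
      image: cirros        # assumed image
      flavor: m1.tiny      # assumed flavor
      networks:
        - network: provider-net   # assumed network
EOF
openstack stack create -t test-stack.yaml test-stack
openstack stack list
```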
## Rapid Deployment with the OpenStack SIG Tool oos

oos (openEuler OpenStack SIG) is the command-line tool provided by the OpenStack SIG. The `oos env` family of commands provides ansible scripts for one-click deployment of OpenStack (all in one or a three-node cluster), which users can use to quickly deploy an OpenStack environment based on openEuler RPM packages. oos supports two deployment modes: against a cloud provider (currently only the Huawei Cloud provider) and against managed hosts. The following uses an all-in-one deployment on Huawei Cloud as an example of how to use oos.

Install the oos tool:

```
pip install openstack-sig-tool
```

Configure the Huawei Cloud provider information: open the /usr/local/etc/oos/oos.conf file and fill in the details of the Huawei Cloud resources you own:

```
[huaweicloud]
ak =
sk =
region = ap-southeast-3
root_volume_size = 100
data_volume_size = 100
security_group_name = oos
image_format = openEuler-%%(release)s-%%(arch)s
vpc_name = oos_vpc
subnet1_name = oos_subnet1
subnet2_name = oos_subnet2
```

Configure the OpenStack environment information: open the /usr/local/etc/oos/oos.conf file and adjust the configuration to the current machine and your needs. The content is as follows:

```
[environment]
mysql_root_password = root
mysql_project_password = root
rabbitmq_password = root
project_identity_password = root
enabled_service = keystone,neutron,cinder,placement,nova,glance,horizon,aodh,ceilometer,cyborg,gnocchi,kolla,heat,swift,trove,tempest
neutron_provider_interface_name = br-ex
default_ext_subnet_range = 10.100.100.0/24
default_ext_subnet_gateway = 10.100.100.1
neutron_dataplane_interface_name = eth1
cinder_block_device = vdb
swift_storage_devices = vdc
swift_hash_path_suffix = ash
swift_hash_path_prefix = has
glance_api_workers = 2
cinder_api_workers = 2
nova_api_workers = 2
nova_metadata_api_workers = 2
nova_conductor_workers = 2
nova_scheduler_workers = 2
neutron_api_workers = 2
horizon_allowed_host = *
kolla_openeuler_plugin = false
```

Key configuration items:

| Item | Description |
|:---|:---|
| enabled_service | List of services to install; trim it as needed. |
| neutron_provider_interface_name | Name of the neutron L3 bridge. |
| default_ext_subnet_range | neutron private network IP range. |
| default_ext_subnet_gateway | neutron private network gateway. |
| neutron_dataplane_interface_name | NIC used by neutron. A new, dedicated NIC is recommended to avoid conflicts with the existing NIC and to prevent losing the connection to the all-in-one host. |
| cinder_block_device | Name of the block device used by cinder. |
| swift_storage_devices | Name of the block device used by swift. |
| kolla_openeuler_plugin | Whether to enable the kolla plugin. If set to True, kolla will support deploying openEuler containers. |

Create an openEuler 22.03-LTS x86_64 virtual machine on Huawei Cloud for the all-in-one OpenStack deployment:

```
# sshpass is used during `oos env create` to set up password-free access to the target VM
dnf install sshpass
oos env create -r 22.03-lts -f small -a x86 -n test-oos all_in_one
```

See `oos env create --help` for the full list of parameters.

Deploy the all-in-one OpenStack environment:

```
oos env setup test-oos -r wallaby
```

See `oos env setup --help` for the full list of parameters.

Initialize the tempest environment. If you want to run tempest tests in this environment, run `oos env init`, which automatically creates the OpenStack resources tempest needs:

```
oos env init test-oos
```

After the command succeeds, a mytest directory is generated under the user's home directory; enter it and you can run the `tempest run` command.

If you deploy OpenStack by managing existing hosts instead, the overall flow is the same as for Huawei Cloud above: steps 1, 3, 5 and 6 are unchanged, step 2 (configuring the Huawei Cloud provider) is dropped, and step 4 changes from creating a virtual machine on Huawei Cloud to taking over the host:

```
# sshpass is used during `oos env create` to set up password-free access to the target host
dnf install sshpass
oos env manage -r 22.03-lts -i TARGET_MACHINE_IP -p TARGET_MACHINE_PASSWD -n test-oos
```

Replace TARGET_MACHINE_IP with the target machine's IP address and TARGET_MACHINE_PASSWD with the target machine's password. See `oos env manage --help` for the full list of parameters.
# OpenStack-Train Deployment Guide

This guide covers: an introduction to OpenStack, the conventions used, environment preparation, installation of SQL DataBase, RabbitMQ, Memcached, Keystone, Glance, Placement, Nova, Neutron, Cinder, Horizon, Tempest, Ironic, Kolla, Trove, Swift, Cyborg, Aodh, Gnocchi, Ceilometer and Heat, rapid deployment with the OpenStack SIG tool oos, and deployment with the OpenStack SIG tool opensd (pre-deployment checks; optional ceph pool creation, initialization and user authentication; optional lvm configuration; yum repo backup, configuration and cache refresh; cloning and installing opensd; ssh mutual trust including key generation, host IP file generation, password change script and optional trust with the ceph monitor; opensd configuration including random password generation, inventory file, global variables and ssh connectivity checks; and deployment execution: bootstrap, server reboot, pre-deployment checks and the deployment itself).

## OpenStack Overview

OpenStack is both a community and a project. It provides an operating platform and toolset for deploying clouds, giving organizations scalable and flexible cloud computing.

As an open source cloud management platform, OpenStack combines several major components, such as nova, cinder, neutron, glance, keystone and horizon, to do the actual work. OpenStack supports almost every type of cloud environment; the project aims to provide a cloud management platform that is simple to implement, massively scalable, feature-rich and standardized. OpenStack delivers an Infrastructure as a Service (IaaS) solution through a set of complementary services, each of which offers an API for integration.

The official openEuler 22.03-LTS-SP1 repositories already support the OpenStack-Train release; after configuring the yum repository, users can deploy OpenStack by following this document.

## Conventions

OpenStack supports multiple deployment topologies. This document covers both ALL in One and Distributed deployments, with the following conventions:

- ALL in One mode: ignore all possible suffixes.
- Distributed mode:
  - the suffix `(CTL)` means the configuration or command applies only to the `controller node`;
  - the suffix `(CPT)` means it applies only to `compute nodes`;
  - the suffix `(STG)` means it applies only to `storage nodes`;
  - anything else applies to both the `controller node` and `compute nodes`.

Note: the services affected by these conventions are Cinder, Nova and Neutron.

## Preparing the Environment

### Environment configuration

Enable the OpenStack Train yum repository:

```
yum update
yum install openstack-release-train
yum clean all && yum makecache
```

Note: if EPOL is not enabled in your environment's YUM configuration, configure it as well and make sure it is present, as shown below:

```
vi /etc/yum.repos.d/openEuler.repo

[EPOL]
name=EPOL
baseurl=http://repo.openeuler.org/openEuler-22.03-LTS-SP1/EPOL/main/$basearch/
enabled=1
gpgcheck=1
gpgkey=http://repo.openeuler.org/openEuler-22.03-LTS-SP1/OS/$basearch/RPM-GPG-KEY-openEuler
```

Modify the host names and the host mapping. Set the host name of each node:

```
hostnamectl set-hostname controller (CTL)
hostnamectl set-hostname compute (CPT)
```
Assuming the controller node's IP address is 10.0.0.11 and the compute node's IP address (if it exists) is 10.0.0.12, add the following to /etc/hosts:

```
10.0.0.11 controller
10.0.0.12 compute
```

## Installing SQL DataBase

Install the packages:

```
yum install mariadb mariadb-server python3-PyMySQL
```

Create and edit the /etc/my.cnf.d/openstack.cnf file:

```
vim /etc/my.cnf.d/openstack.cnf

[mysqld]
bind-address = 10.0.0.11
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
```

Note: set bind-address to the management IP address of the controller node.

Start the DataBase service and enable it at boot:

```
systemctl enable mariadb.service
systemctl start mariadb.service
```

Set the default database password (optional):

```
mysql_secure_installation
```

Note: just follow the prompts.

## Installing RabbitMQ

Install the packages:

```
yum install rabbitmq-server
```

Start the RabbitMQ service and enable it at boot:

```
systemctl enable rabbitmq-server.service
systemctl start rabbitmq-server.service
```

Add the OpenStack user:

```
rabbitmqctl add_user openstack RABBIT_PASS
```

Note: replace RABBIT_PASS with the password for the OpenStack user.

Grant the openstack user permission to configure, write and read:

```
rabbitmqctl set_permissions openstack ".*" ".*" ".*"
```

## Installing Memcached

Install the dependency packages:

```
yum install memcached python3-memcached
```

Edit the /etc/sysconfig/memcached file:

```
vim /etc/sysconfig/memcached

OPTIONS="-l 127.0.0.1,::1,controller"
```

Start the Memcached service and enable it at boot:

```
systemctl enable memcached.service
systemctl start memcached.service
```

Note: after the service starts, you can run `memcached-tool controller stats` to make sure it started correctly and is available; controller can be replaced with the management IP address of the controller node.
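Before moving on to the OpenStack components, it can help to confirm that the three base services are actually up. The following quick checks are optional and only a sketch; replace controller with the controller's management IP address if name resolution is not in place yet.

```
# Optional sanity checks for the base services installed above.
systemctl is-active mariadb.service rabbitmq-server.service memcached.service
rabbitmqctl list_users                    # the openstack user should be listed
memcached-tool controller stats | head    # basic memcached statistics
```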
## Installing OpenStack

### Keystone Installation

Create the keystone database and grant privileges:

```
mysql -u root -p
MariaDB [(none)]> CREATE DATABASE keystone;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
IDENTIFIED BY 'KEYSTONE_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
IDENTIFIED BY 'KEYSTONE_DBPASS';
MariaDB [(none)]> exit
```

Note: replace KEYSTONE_DBPASS with the password for the Keystone database.

Install the packages:

```
yum install openstack-keystone httpd mod_wsgi
```

Configure keystone:

```
vim /etc/keystone/keystone.conf

[database]
connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone

[token]
provider = fernet
```

Explanation: the [database] section configures the database entry point; the [token] section configures the token provider. Note: replace KEYSTONE_DBPASS with the Keystone database password.

Sync the database:

```
su -s /bin/sh -c "keystone-manage db_sync" keystone
```

Initialize the Fernet key repositories:

```
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
```

Bootstrap the service:

```
keystone-manage bootstrap --bootstrap-password ADMIN_PASS \
  --bootstrap-admin-url http://controller:5000/v3/ \
  --bootstrap-internal-url http://controller:5000/v3/ \
  --bootstrap-public-url http://controller:5000/v3/ \
  --bootstrap-region-id RegionOne
```

Note: replace ADMIN_PASS with the password for the admin user.

Configure the Apache HTTP server:

```
vim /etc/httpd/conf/httpd.conf
ServerName controller

ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
```

Explanation: point the ServerName directive at the controller node. Note: if the ServerName entry does not exist, create it.

Start the Apache HTTP service:

```
systemctl enable httpd.service
systemctl start httpd.service
```

Create the environment variable configuration:

```
cat << EOF >> ~/.admin-openrc
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
EOF
```

Note: replace ADMIN_PASS with the admin user's password.

Create the domain, projects, users and roles in turn; python3-openstackclient must be installed first:

```
yum install python3-openstackclient-4.0.2
```

Load the environment variables:

```
source ~/.admin-openrc
```

Create the project service (the domain default was already created during keystone-manage bootstrap):

```
openstack domain create --description "An Example Domain" example
openstack project create --domain default --description "Service Project" service
```

Create the (non-admin) project myproject, user myuser and role myrole, then add the role myrole to myproject and myuser:

```
openstack project create --domain default --description "Demo Project" myproject
openstack user create --domain default --password-prompt myuser
openstack role create myrole
openstack role add --project myproject --user myuser myrole
```

Verification. Unset the temporary OS_AUTH_URL and OS_PASSWORD environment variables:

```
source ~/.admin-openrc
unset OS_AUTH_URL OS_PASSWORD
```

Request a token for the admin user:

```
openstack --os-auth-url http://controller:5000/v3 \
  --os-project-domain-name Default --os-user-domain-name Default \
  --os-project-name admin --os-username admin token issue
```

Request a token for the myuser user:

```
openstack --os-auth-url http://controller:5000/v3 \
  --os-project-domain-name Default --os-user-domain-name Default \
  --os-project-name myproject --os-username myuser token issue
```
### Glance Installation

Create the database, service credentials and API endpoints.

Create the database:

```
mysql -u root -p
MariaDB [(none)]> CREATE DATABASE glance;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
IDENTIFIED BY 'GLANCE_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
IDENTIFIED BY 'GLANCE_DBPASS';
MariaDB [(none)]> exit
```

Note: replace GLANCE_DBPASS with the password for the glance database.

Create the service credentials:

```
source ~/.admin-openrc
openstack user create --domain default --password-prompt glance
openstack role add --project service --user glance admin
openstack service create --name glance --description "OpenStack Image" image
```

Create the image service API endpoints:

```
openstack endpoint create --region RegionOne image public http://controller:9292
openstack endpoint create --region RegionOne image internal http://controller:9292
openstack endpoint create --region RegionOne image admin http://controller:9292
```

Install the packages:

```
yum install openstack-glance
```

Configure glance:

```
vim /etc/glance/glance-api.conf

[database]
connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = GLANCE_PASS
[paste_deploy]
flavor = keystone
[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
```

Explanation: [database] configures the database entry point; [keystone_authtoken] and [paste_deploy] configure the identity service entry point; [glance_store] configures local filesystem storage and the location of the image files.

Note: replace GLANCE_DBPASS with the glance database password and GLANCE_PASS with the glance user's password.

Sync the database:

```
su -s /bin/sh -c "glance-manage db_sync" glance
```

Start the service:

```
systemctl enable openstack-glance-api.service
systemctl start openstack-glance-api.service
```

Verification. Download an image:

```
source ~/.admin-openrc
wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
```

Note: if your environment is a Kunpeng (aarch64) system, download the aarch64 version of the image instead; the image cirros-0.5.2-aarch64-disk.img has been tested.

Upload the image to the Image service:

```
openstack image create --disk-format qcow2 --container-format bare \
  --file cirros-0.4.0-x86_64-disk.img --public cirros
```

Confirm the upload and verify the attributes:

```
openstack image list
```
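For Kunpeng/aarch64 environments, the note above recommends the aarch64 cirros image. A possible sequence, assuming the same cirros download mirror layout as the x86_64 image above, is:

```
# Sketch for aarch64 hosts: fetch and upload the aarch64 cirros image instead.
wget http://download.cirros-cloud.net/0.5.2/cirros-0.5.2-aarch64-disk.img
openstack image create --disk-format qcow2 --container-format bare \
  --file cirros-0.5.2-aarch64-disk.img --public cirros
openstack image list
```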
### Placement Installation

Create the database, service credentials and API endpoints.

Create the database: access the database as the root user, create the placement database and grant privileges:

```
mysql -u root -p
MariaDB [(none)]> CREATE DATABASE placement;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' \
IDENTIFIED BY 'PLACEMENT_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' \
IDENTIFIED BY 'PLACEMENT_DBPASS';
MariaDB [(none)]> exit
```

Note: replace PLACEMENT_DBPASS with the password for the placement database.

Create the placement service credentials, create the placement user and add the admin role to it:

```
source admin-openrc
openstack user create --domain default --password-prompt placement
openstack role add --project service --user placement admin
openstack service create --name placement --description "Placement API" placement
```

Create the placement service API endpoints:

```
openstack endpoint create --region RegionOne placement public http://controller:8778
openstack endpoint create --region RegionOne placement internal http://controller:8778
openstack endpoint create --region RegionOne placement admin http://controller:8778
```

Install and configure. Install the packages:

```
yum install openstack-placement-api
```

Configure placement by editing the /etc/placement/placement.conf file: configure the database entry point in [placement_database], and the identity service entry point in [api] and [keystone_authtoken]:

```
# vim /etc/placement/placement.conf
[placement_database]
# ...
connection = mysql+pymysql://placement:PLACEMENT_DBPASS@controller/placement
[api]
# ...
auth_strategy = keystone
[keystone_authtoken]
# ...
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = placement
password = PLACEMENT_PASS
```

Replace PLACEMENT_DBPASS with the placement database password and PLACEMENT_PASS with the placement user's password.

Sync the database:

```
su -s /bin/sh -c "placement-manage db sync" placement
```

Start the httpd service:

```
systemctl restart httpd
```

Verification. Run the status check:

```
. admin-openrc
placement-status upgrade check
```

Install osc-placement and list the available resource classes and traits:

```
yum install python3-osc-placement
openstack --os-placement-api-version 1.2 resource class list --sort-column name
openstack --os-placement-api-version 1.6 trait list --sort-column name
```
### Nova Installation

Create the database, service credentials and API endpoints.

Create the databases:

```
mysql -u root -p (CTL)
MariaDB [(none)]> CREATE DATABASE nova_api;
MariaDB [(none)]> CREATE DATABASE nova;
MariaDB [(none)]> CREATE DATABASE nova_cell0;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> exit
```

Note: replace NOVA_DBPASS with the password for the nova databases.

Create the nova service credentials and API endpoints:

```
source ~/.admin-openrc (CTL)
openstack user create --domain default --password-prompt nova (CTL)
openstack role add --project service --user nova admin (CTL)
openstack service create --name nova --description "OpenStack Compute" compute (CTL)
openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1 (CTL)
openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1 (CTL)
openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1 (CTL)
```

Install the packages:

```
yum install openstack-nova-api openstack-nova-conductor \ (CTL)
  openstack-nova-novncproxy openstack-nova-scheduler
yum install openstack-nova-compute (CPT)
```

Note: on arm64, also run:

```
yum install edk2-aarch64 (CPT)
```

Configure nova:

```
vim /etc/nova/nova.conf

[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
my_ip = 10.0.0.1
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver
compute_driver = libvirt.LibvirtDriver (CPT)
instances_path = /var/lib/nova/instances/ (CPT)
lock_path = /var/lib/nova/tmp (CPT)
[api_database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api (CTL)
[database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova (CTL)
[api]
auth_strategy = keystone
[keystone_authtoken]
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = NOVA_PASS
[vnc]
enabled = true
server_listen = $my_ip
server_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html (CPT)
[glance]
api_servers = http://controller:9292
[oslo_concurrency]
lock_path = /var/lib/nova/tmp (CTL)
[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = PLACEMENT_PASS
[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
service_metadata_proxy = true (CTL)
metadata_proxy_shared_secret = METADATA_SECRET (CTL)
```

Explanation: [DEFAULT] enables the compute and metadata APIs, configures the RabbitMQ entry point and my_ip, and enables the neutron network service; [api_database] and [database] configure the database entry points; [api] and [keystone_authtoken] configure the identity service entry point; [vnc] enables and configures the remote console; [glance] configures the image service API address; [oslo_concurrency] configures the lock path; [placement] configures the placement service entry point.

Note: replace RABBIT_PASS with the password of the openstack account in RabbitMQ; set my_ip to the management IP address of the controller node; replace NOVA_DBPASS with the nova database password, NOVA_PASS with the nova user's password, PLACEMENT_PASS with the placement user's password, NEUTRON_PASS with the neutron user's password, and METADATA_SECRET with a suitable metadata proxy secret.

Additional step: determine whether virtual machine hardware acceleration is supported (x86):

```
egrep -c '(vmx|svm)' /proc/cpuinfo (CPT)
```

If the command returns 0, hardware acceleration is not supported and libvirt must be configured to use QEMU instead of KVM:

```
vim /etc/nova/nova.conf (CPT)

[libvirt]
virt_type = qemu
```

If it returns 1 or greater, hardware acceleration is supported and virt_type can be set to kvm.

Note: on arm64, additionally run the following on the compute nodes:

```
mkdir -p /usr/share/AAVMF
chown nova:nova /usr/share/AAVMF
ln -s /usr/share/edk2/aarch64/QEMU_EFI-pflash.raw \
  /usr/share/AAVMF/AAVMF_CODE.fd
ln -s /usr/share/edk2/aarch64/vars-template-pflash.raw \
  /usr/share/AAVMF/AAVMF_VARS.fd

vim /etc/libvirt/qemu.conf
nvram = ["/usr/share/AAVMF/AAVMF_CODE.fd: \
  /usr/share/AAVMF/AAVMF_VARS.fd", \
  "/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw: \
  /usr/share/edk2/aarch64/vars-template-pflash.raw"]
```

When the ARM deployment environment is itself nested virtualization, configure libvirt as follows:

```
[libvirt]
virt_type = qemu
cpu_mode = custom
cpu_model = cortex-a72
```

Sync the databases. In order: sync the nova-api database, register the cell0 database, create the cell1 cell, sync the nova database, and verify that cell0 and cell1 are registered correctly:

```
su -s /bin/sh -c "nova-manage api_db sync" nova (CTL)
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova (CTL)
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova (CTL)
su -s /bin/sh -c "nova-manage db sync" nova (CTL)
su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova (CTL)
```

Add the compute nodes to the OpenStack cluster:

```
su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova (CTL)
```

Start the services:

```
systemctl enable \ (CTL)
  openstack-nova-api.service \
  openstack-nova-scheduler.service \
  openstack-nova-conductor.service \
  openstack-nova-novncproxy.service
systemctl start \ (CTL)
  openstack-nova-api.service \
  openstack-nova-scheduler.service \
  openstack-nova-conductor.service \
  openstack-nova-novncproxy.service
systemctl enable libvirtd.service openstack-nova-compute.service (CPT)
systemctl start libvirtd.service openstack-nova-compute.service (CPT)
```

Verification:

```
source ~/.admin-openrc (CTL)
openstack compute service list (CTL)   # every component started and registered
openstack catalog list (CTL)           # connectivity to the identity service
openstack image list (CTL)             # connectivity to the image service
nova-status upgrade check (CTL)        # cells and other prerequisites are in place
```
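As an alternative to re-running `nova-manage cell_v2 discover_hosts` every time a compute node is added, the scheduler can be told to discover new hosts periodically. This is an optional tweak and not part of the original steps; the 300-second interval is only an example value.

```
vim /etc/nova/nova.conf (CTL)

[scheduler]
discover_hosts_in_cells_interval = 300
```

Restart openstack-nova-scheduler.service after changing this setting.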
### Neutron Installation

Create the database, service credentials and API endpoints.

Create the database:

```
mysql -u root -p (CTL)
MariaDB [(none)]> CREATE DATABASE neutron;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
IDENTIFIED BY 'NEUTRON_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
IDENTIFIED BY 'NEUTRON_DBPASS';
MariaDB [(none)]> exit
```

Note: replace NEUTRON_DBPASS with the password for the neutron database.

Create the neutron service credentials and API endpoints:

```
source ~/.admin-openrc (CTL)
openstack user create --domain default --password-prompt neutron (CTL)
openstack role add --project service --user neutron admin (CTL)
openstack service create --name neutron --description "OpenStack Networking" network (CTL)
openstack endpoint create --region RegionOne network public http://controller:9696 (CTL)
openstack endpoint create --region RegionOne network internal http://controller:9696 (CTL)
openstack endpoint create --region RegionOne network admin http://controller:9696 (CTL)
```

Install the packages:

```
yum install openstack-neutron openstack-neutron-linuxbridge ebtables ipset \ (CTL)
  openstack-neutron-ml2
yum install openstack-neutron-linuxbridge ebtables ipset (CPT)
```

Configure the main neutron configuration:

```
vim /etc/neutron/neutron.conf

[database]
connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron (CTL)
[DEFAULT]
core_plugin = ml2 (CTL)
service_plugins = router (CTL)
allow_overlapping_ips = true (CTL)
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = true (CTL)
notify_nova_on_port_data_changes = true (CTL)
api_workers = 3 (CTL)
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = neutron
password = NEUTRON_PASS
[nova]
auth_url = http://controller:5000 (CTL)
auth_type = password (CTL)
project_domain_name = Default (CTL)
user_domain_name = Default (CTL)
region_name = RegionOne (CTL)
project_name = service (CTL)
username = nova (CTL)
password = NOVA_PASS (CTL)
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
```

Explanation: [database] configures the database entry point; [DEFAULT] enables the ml2 and router plugins, allows overlapping IP addresses and configures the RabbitMQ entry point; [keystone_authtoken] configures the identity service entry point; [nova] configures notifications to compute about network topology changes; [oslo_concurrency] configures the lock path.

Note: replace NEUTRON_DBPASS with the neutron database password, RABBIT_PASS with the password of the openstack account in RabbitMQ, NEUTRON_PASS with the neutron user's password, and NOVA_PASS with the nova user's password.

Configure the ML2 plugin:

```
vim /etc/neutron/plugins/ml2/ml2_conf.ini

[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security
[ml2_type_flat]
flat_networks = provider
[ml2_type_vxlan]
vni_ranges = 1:1000
[securitygroup]
enable_ipset = true
```

Create the /etc/neutron/plugin.ini symbolic link:

```
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
```

Note: [ml2] enables flat, vlan and vxlan networks, the linuxbridge and l2population mechanisms, and the port security extension driver; [ml2_type_flat] configures the flat network as the provider virtual network; [ml2_type_vxlan] sets the VXLAN network identifier range; [securitygroup] enables ipset. The layer-2 configuration can be adapted to your needs; this document uses provider network + linuxbridge.

Configure the Linux bridge agent:

```
vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini

[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME
[vxlan]
enable_vxlan = true
local_ip = OVERLAY_INTERFACE_IP_ADDRESS
l2_population = true
[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
```

Explanation: [linux_bridge] maps the provider virtual network to a physical network interface; [vxlan] enables the vxlan overlay network, sets the IP address of the physical interface that handles overlay traffic and enables layer-2 population; [securitygroup] enables security groups and the linux bridge iptables firewall driver.

Note: replace PROVIDER_INTERFACE_NAME with the physical network interface and OVERLAY_INTERFACE_IP_ADDRESS with the management IP address of the controller node.

Configure the Layer-3 agent (interface driver linuxbridge):

```
vim /etc/neutron/l3_agent.ini (CTL)

[DEFAULT]
interface_driver = linuxbridge
```

Configure the DHCP agent (linuxbridge interface driver, Dnsmasq DHCP driver, isolated metadata enabled):

```
vim /etc/neutron/dhcp_agent.ini (CTL)

[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
```

Configure the metadata agent (metadata host and shared secret):

```
vim /etc/neutron/metadata_agent.ini (CTL)

[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = METADATA_SECRET
```

Note: replace METADATA_SECRET with a suitable metadata proxy secret.

Configure the related nova settings:

```
vim /etc/nova/nova.conf

[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = Default
user_domain_name = Default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
service_metadata_proxy = true (CTL)
metadata_proxy_shared_secret = METADATA_SECRET (CTL)
```

Explanation: [neutron] configures the access parameters, enables the metadata proxy and configures the secret. Note: replace NEUTRON_PASS with the neutron user's password and METADATA_SECRET with the metadata proxy secret.

Sync the database:

```
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
  --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
```

Restart the compute API service:

```
systemctl restart openstack-nova-api.service
```

Start the networking services:

```
systemctl enable neutron-server.service neutron-linuxbridge-agent.service \ (CTL)
  neutron-dhcp-agent.service neutron-metadata-agent.service \
  neutron-l3-agent.service
systemctl restart neutron-server.service neutron-linuxbridge-agent.service \ (CTL)
  neutron-dhcp-agent.service neutron-metadata-agent.service \
  neutron-l3-agent.service
systemctl enable neutron-linuxbridge-agent.service (CPT)
systemctl restart neutron-linuxbridge-agent.service openstack-nova-compute.service (CPT)
```

Verification: check that the neutron agents started successfully:

```
openstack network agent list
```
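With the agents up, one way to exercise the flat provider network configured above is to create an external network and subnet. This is an optional verification sketch; the network name, subnet range, allocation pool, gateway and DNS server below are placeholders that must match your physical provider network.

```
# Sketch: create an external provider network and subnet for verification.
source ~/.admin-openrc
openstack network create --share --external \
  --provider-physical-network provider \
  --provider-network-type flat provider-net
openstack subnet create --network provider-net \
  --allocation-pool start=203.0.113.100,end=203.0.113.200 \
  --dns-nameserver 8.8.8.8 --gateway 203.0.113.1 \
  --subnet-range 203.0.113.0/24 provider-subnet
```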
### Cinder Installation

Create the database, service credentials and API endpoints.

Create the database:

```
mysql -u root -p
MariaDB [(none)]> CREATE DATABASE cinder;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \
IDENTIFIED BY 'CINDER_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \
IDENTIFIED BY 'CINDER_DBPASS';
MariaDB [(none)]> exit
```

Note: replace CINDER_DBPASS with the password for the cinder database.

Create the cinder service credentials:

```
source ~/.admin-openrc
openstack user create --domain default --password-prompt cinder
openstack role add --project service --user cinder admin
openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
```

Create the block storage service API endpoints:

```
openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s
```

Install the packages:

```
yum install openstack-cinder-api openstack-cinder-scheduler (CTL)
yum install lvm2 device-mapper-persistent-data scsi-target-utils rpcbind nfs-utils \ (STG)
  openstack-cinder-volume openstack-cinder-backup
```

Prepare the storage device (example only):

```
pvcreate /dev/vdb
vgcreate cinder-volumes /dev/vdb

vim /etc/lvm/lvm.conf
devices {
...
filter = [ "a/vdb/", "r/.*/"]
```

Explanation: in the devices section, add a filter that accepts the /dev/vdb device and rejects all other devices.

Prepare NFS:

```
mkdir -p /root/cinder/backup

cat << EOF >> /etc/exports
/root/cinder/backup 192.168.1.0/24(rw,sync,no_root_squash,no_all_squash)
EOF
```

Configure cinder:

```
vim /etc/cinder/cinder.conf

[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone
my_ip = 10.0.0.11
enabled_backends = lvm (STG)
backup_driver = cinder.backup.drivers.nfs.NFSBackupDriver (STG)
backup_share = HOST:PATH (STG)
[database]
connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = cinder
password = CINDER_PASS
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver (STG)
volume_group = cinder-volumes (STG)
iscsi_protocol = iscsi (STG)
iscsi_helper = tgtadm (STG)
```

Explanation: [database] configures the database entry point; [DEFAULT] configures the RabbitMQ entry point and my_ip; [keystone_authtoken] configures the identity service entry point; [oslo_concurrency] configures the lock path.

Note: replace CINDER_DBPASS with the cinder database password, RABBIT_PASS with the password of the openstack account in RabbitMQ, CINDER_PASS with the cinder user's password, and HOST:PATH with the NFS host IP and shared path; set my_ip to the management IP address of the controller node.

Sync the database:

```
su -s /bin/sh -c "cinder-manage db sync" cinder (CTL)
```

Configure nova:

```
vim /etc/nova/nova.conf (CTL)

[cinder]
os_region_name = RegionOne
```

Restart the compute API service and start the cinder services:

```
systemctl restart openstack-nova-api.service
systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service (CTL)
systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service (CTL)
systemctl enable rpcbind.service nfs-server.service tgtd.service iscsid.service \ (STG)
  openstack-cinder-volume.service \
  openstack-cinder-backup.service
systemctl start rpcbind.service nfs-server.service tgtd.service iscsid.service \ (STG)
  openstack-cinder-volume.service \
  openstack-cinder-backup.service
```

Note: when cinder attaches volumes with tgtadm, modify /etc/tgt/tgtd.conf as follows so that tgtd can discover the cinder-volume iscsi targets:

```
include /var/lib/cinder/volumes/*
```

Verification:

```
source ~/.admin-openrc
openstack volume service list
```

### Horizon Installation

Install the package:

```
yum install openstack-dashboard
```

Modify the variables in the settings file:

```
vim /etc/openstack-dashboard/local_settings

OPENSTACK_HOST = "controller"
ALLOWED_HOSTS = ['*', ]
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'controller:11211',
    }
}
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "member"
WEBROOT = '/dashboard'
POLICY_FILES_PATH = "/etc/openstack-dashboard"
OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 3,
}
```

Restart the httpd service:

```
systemctl restart httpd.service memcached.service
```

Verification: open a browser at http://HOSTIP/dashboard/ and log in to horizon. Note: replace HOSTIP with the management-plane IP address of the controller node.

### Tempest Installation

Tempest is the OpenStack integration test service. It is recommended if you need comprehensive automated functional testing of the installed OpenStack environment; otherwise it can be skipped.

Install Tempest:

```
yum install openstack-tempest
```

Initialize a working directory:

```
tempest init mytest
```

Modify the configuration file:

```
cd mytest
vi etc/tempest.conf
```

tempest.conf must describe the current OpenStack environment; for the details, refer to the official sample configuration.

Run the tests:

```
tempest run
```

Install tempest extensions (optional). The individual OpenStack services also provide tempest test packages that users can install to enrich the test coverage. For Train, extension tests are provided for Cinder, Glance, Keystone, Ironic and Trove; install them with:

```
yum install python3-cinder-tempest-plugin python3-glance-tempest-plugin python3-ironic-tempest-plugin python3-keystone-tempest-plugin python3-trove-tempest-plugin
```
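A full `tempest run` can take a long time, so it is often useful to start with the smoke suite or a single API area before running everything. The commands below are a sketch using tempest's standard flags, run from the mytest directory created above.

```
# Sketch: run a reduced tempest selection first.
cd mytest
tempest run --smoke                             # quick smoke tests only
tempest run --regex '^tempest\.api\.identity'   # e.g. only the identity API tests
```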
## Ironic Installation

Ironic is the Bare Metal service of OpenStack. It is recommended if you need to provision bare metal machines; otherwise it can be skipped.

1. Set up the database.

The Bare Metal service stores information in a database. Create an `ironic` database that can be accessed by the `ironic` user, replacing `IRONIC_DBPASSWORD` with a suitable password:

```shell
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE ironic CHARACTER SET utf8;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'localhost' \
    IDENTIFIED BY 'IRONIC_DBPASSWORD';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'%' \
    IDENTIFIED BY 'IRONIC_DBPASSWORD';
```

2. Install the packages:

```shell
yum install openstack-ironic-api openstack-ironic-conductor python3-ironicclient
```

3. Start the services:

```shell
systemctl enable openstack-ironic-api openstack-ironic-conductor
systemctl start openstack-ironic-api openstack-ironic-conductor
```

Create the service user and credentials

1. Create the Bare Metal service user:

```shell
openstack user create --password IRONIC_PASSWORD \
    --email ironic@example.com ironic
openstack role add --project service --user ironic admin
openstack service create --name ironic \
    --description "Ironic baremetal provisioning service" baremetal
```

2. Create the Bare Metal service API endpoints:

```shell
openstack endpoint create --region RegionOne baremetal admin http://$IRONIC_NODE:6385
openstack endpoint create --region RegionOne baremetal public http://$IRONIC_NODE:6385
openstack endpoint create --region RegionOne baremetal internal http://$IRONIC_NODE:6385
```

Configure the ironic-api service

The configuration file path is `/etc/ironic/ironic.conf`.

1. Configure the location of the database via the `connection` option as shown below, replacing `IRONIC_DBPASSWORD` with the password of the `ironic` user and `DB_IP` with the IP address of the DB server:

```shell
[database]

# The SQLAlchemy connection string used to connect to the
# database (string value)
connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic
```

2. Configure the ironic-api service to use the RabbitMQ message broker via the following options, replacing `RPC_*` with the actual address and credentials of RabbitMQ:

```shell
[DEFAULT]

# A URL representing the messaging driver to use and its full
# configuration. (string value)
transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
```

json-rpc can also be used instead of RabbitMQ if preferred.

3. Configure the ironic-api service to use the credentials of the Identity service, replacing `PUBLIC_IDENTITY_IP` with the public IP of the Identity server, `PRIVATE_IDENTITY_IP` with the private IP of the Identity server, and `IRONIC_PASSWORD` with the password of the `ironic` user in the Identity service:
```shell
[DEFAULT]

# Authentication strategy used by ironic-api: one of
# "keystone" or "noauth". "noauth" should not be used in a
# production environment because all authentication will be
# disabled. (string value)
auth_strategy=keystone

[keystone_authtoken]

# Authentication type to load (string value)
auth_type=password

# Complete public Identity API endpoint (string value)
www_authenticate_uri=http://PUBLIC_IDENTITY_IP:5000

# Complete admin Identity API endpoint. (string value)
auth_url=http://PRIVATE_IDENTITY_IP:5000

# Service username. (string value)
username=ironic

# Service account password. (string value)
password=IRONIC_PASSWORD

# Service tenant name. (string value)
project_name=service

# Domain name containing project (string value)
project_domain_name=Default

# User's domain name (string value)
user_domain_name=Default
```

4. Create the Bare Metal service database tables:

```shell
ironic-dbsync --config-file /etc/ironic/ironic.conf create_schema
```

5. Restart the ironic-api service:

```shell
sudo systemctl restart openstack-ironic-api
```

Configure the ironic-conductor service

1. Replace `HOST_IP` with the IP of the conductor host:

```shell
[DEFAULT]

# IP address of this host. If unset, will determine the IP
# programmatically. If unable to do so, will use "127.0.0.1".
# (string value)
my_ip=HOST_IP
```

2. Configure the location of the database. ironic-conductor should use the same configuration as ironic-api. Replace `IRONIC_DBPASSWORD` with the password of the `ironic` user and `DB_IP` with the IP address of the DB server:

```shell
[database]

# The SQLAlchemy connection string to use to connect to the
# database. (string value)
connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic
```

3. Configure the ironic-conductor service to use the RabbitMQ message broker via the following options. ironic-conductor should use the same configuration as ironic-api. Replace `RPC_*` with the actual address and credentials of RabbitMQ:
```shell
[DEFAULT]

# A URL representing the messaging driver to use and its full
# configuration. (string value)
transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
```

json-rpc can also be used instead of RabbitMQ if preferred.

4. Configure the credentials used to access other OpenStack services.

To communicate with other OpenStack services, the Bare Metal service needs to authenticate with the OpenStack Identity service as a service user when making requests to those services. The credentials of these users must be configured in the configuration section associated with each corresponding service:

- `[neutron]` - to access the OpenStack Networking service
- `[glance]` - to access the OpenStack Image service
- `[swift]` - to access the OpenStack Object Storage service
- `[cinder]` - to access the OpenStack Block Storage service
- `[inspector]` - to access the OpenStack Bare Metal introspection service
- `[service_catalog]` - a special entry that stores the credentials the Bare Metal service itself uses to discover its own API URL endpoint as registered in the OpenStack Identity service catalog

For simplicity, the same service user can be used for all services. For backward compatibility, it should be the same user that is configured in the `[keystone_authtoken]` section of the ironic-api service. This is not mandatory, however; a different service user can be created and configured for each service.

In the following example, the credentials the user needs to access the OpenStack Networking service are configured such that:

- the Networking service is deployed in the Identity service region named RegionOne, and only the public endpoint interface is registered in the service catalog;
- a specific CA SSL certificate is used for HTTPS connections when making requests;
- the same service user as the one configured for the ironic-api service is used;
- the dynamic password authentication plugin discovers a suitable Identity service API version based on the other options.

```shell
[neutron]

# Authentication type to load (string value)
auth_type = password

# Authentication URL (string value)
auth_url=https://IDENTITY_IP:5000/

# Username (string value)
username=ironic

# User's password (string value)
password=IRONIC_PASSWORD

# Project name to scope to (string value)
project_name=service

# Domain ID containing project (string value)
project_domain_id=default

# User's domain id (string value)
user_domain_id=default

# PEM encoded Certificate Authority to use when verifying
# HTTPs connections. (string value)
cafile=/opt/stack/data/ca-bundle.pem

# The default region_name for endpoint URL discovery. (string
# value)
region_name = RegionOne

# List of interfaces, in order of preference, for endpoint
# URL. (list value)
valid_interfaces=public
```
By default, to communicate with another service, the Bare Metal service tries to discover a suitable endpoint for that service through the service catalog of the Identity service. If you want to use a different endpoint for a specific service, specify it with the `endpoint_override` option in the configuration file of the Bare Metal service:

```shell
[neutron]
...
endpoint_override =
```

5. Configure the allowed drivers and hardware types.

Set the hardware types allowed by the ironic-conductor service with `enabled_hardware_types`:

```shell
[DEFAULT]
enabled_hardware_types = ipmi
```

Configure the hardware interfaces:

```shell
enabled_boot_interfaces = pxe
enabled_deploy_interfaces = direct,iscsi
enabled_inspect_interfaces = inspector
enabled_management_interfaces = ipmitool
enabled_power_interfaces = ipmitool
```

Configure the interface default values:

```shell
[DEFAULT]
default_deploy_interface = direct
default_network_interface = neutron
```

If any driver that uses Direct deploy is enabled, the Swift backend of the Image service must be installed and configured. The Ceph Object Gateway (RADOS Gateway) is also supported as an Image service backend.

6. Restart the ironic-conductor service:

```shell
sudo systemctl restart openstack-ironic-conductor
```

Configure the httpd service

1. Create the httpd root directory used by ironic and set its owner and group. The directory path must be the same as the path specified in the `http_root` option of the `[deploy]` section in `/etc/ironic/ironic.conf`:

```shell
mkdir -p /var/lib/ironic/httproot
chown ironic.ironic /var/lib/ironic/httproot
```

2. Install the httpd service (skip if it is already installed):

```shell
yum install httpd -y
```

3. Create the `/etc/httpd/conf.d/openstack-ironic-httpd.conf` file with the following content:

```
Listen 8080

<VirtualHost *:8080>
    ServerName ironic.openeuler.com

    ErrorLog "/var/log/httpd/openstack-ironic-httpd-error_log"
    CustomLog "/var/log/httpd/openstack-ironic-httpd-access_log" "%h %l %u %t \"%r\" %>s %b"

    DocumentRoot "/var/lib/ironic/httproot"
    <Directory "/var/lib/ironic/httproot">
        Options Indexes FollowSymLinks
        Require all granted
    </Directory>
    LogLevel warn
    AddDefaultCharset UTF-8
    EnableSendfile on
</VirtualHost>
```

Note that the listening port must be the same as the port specified in the `http_url` option of the `[deploy]` section in `/etc/ironic/ironic.conf`.

4. Restart the httpd service:

```shell
systemctl restart httpd
```
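Once ironic-api and ironic-conductor are running, bare metal nodes can be enrolled with the OpenStack CLI installed above (python3-ironicclient). The following is only a sketch of a typical enrollment flow for an IPMI-managed node; `BMC_IP`, `BMC_USER`, `BMC_PASSWORD`, `NODE_NIC_MAC` and the deploy image UUIDs are placeholders, and the deploy kernel/ramdisk are assumed to have already been uploaded to Glance (they are built in the next step):

```shell
# Enroll a node using the ipmi hardware type enabled above
openstack baremetal node create --driver ipmi \
  --driver-info ipmi_address=BMC_IP \
  --driver-info ipmi_username=BMC_USER \
  --driver-info ipmi_password=BMC_PASSWORD \
  --driver-info deploy_kernel=DEPLOY_KERNEL_IMAGE_UUID \
  --driver-info deploy_ramdisk=DEPLOY_RAMDISK_IMAGE_UUID \
  --name bm-node-0

# Register the node's provisioning NIC, then move the node to the available state
openstack baremetal port create NODE_NIC_MAC --node NODE_UUID
openstack baremetal node manage bm-node-0
openstack baremetal node provide bm-node-0
```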
7. Build the deploy ramdisk image.

The ramdisk image for Train can be built with the ironic-python-agent service or the disk-image-builder tool, or with the latest ironic-python-agent-builder from the community. Users can also choose other tools to build it.

If the native tools of the Train release are used, the corresponding packages need to be installed:

```shell
yum install openstack-ironic-python-agent
```

or

```shell
yum install diskimage-builder
```

Refer to the official documentation for the detailed usage.

The following describes the complete process of building the deploy image used by ironic with ironic-python-agent-builder.

Install ironic-python-agent-builder

1. Install the tool:

```shell
pip install ironic-python-agent-builder
```

2. Modify the python interpreter in the following files:

```shell
/usr/bin/yum /usr/libexec/urlgrabber-ext-down
```

3. Install the other required tools:

```shell
yum install git
```

Since `DIB` depends on the `semanage` command, check whether the command is available before building the image with `semanage --help`. If it reports that the command does not exist, install it:

```shell
# First query which package needs to be installed
[root@localhost ~]# yum provides /usr/sbin/semanage
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirror.vcu.edu
 * extras: mirror.vcu.edu
 * updates: mirror.math.princeton.edu
policycoreutils-python-2.5-34.el7.aarch64 : SELinux policy core python utilities
Repo        : base
Matched from:
Filename    : /usr/sbin/semanage
# Install it
[root@localhost ~]# yum install policycoreutils-python
```

Build the image

If the architecture is `arm`, add:

```shell
export ARCH=aarch64
```

Basic usage:

```shell
usage: ironic-python-agent-builder [-h] [-r RELEASE] [-o OUTPUT] [-e ELEMENT]
                                   [-b BRANCH] [-v] [--extra-args EXTRA_ARGS]
                                   distribution

positional arguments:
  distribution          Distribution to use

optional arguments:
  -h, --help            show this help message and exit
  -r RELEASE, --release RELEASE
                        Distribution release to use
  -o OUTPUT, --output OUTPUT
                        Output base file name
  -e ELEMENT, --element ELEMENT
                        Additional DIB element to use
  -b BRANCH, --branch BRANCH
                        If set, override the branch that is used for ironic-
                        python-agent and requirements
  -v, --verbose         Enable verbose logging in diskimage-builder
  --extra-args EXTRA_ARGS
                        Extra arguments to pass to diskimage-builder
```

For example:

```shell
ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky
```

Allow SSH login

Initialize the environment variables and then build the image:

```shell
export DIB_DEV_USER_USERNAME=ipa
export DIB_DEV_USER_PWDLESS_SUDO=yes
export DIB_DEV_USER_PASSWORD='123'
ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky -e selinux-permissive -e devuser
```

Specify the code repository

Initialize the corresponding environment variables and then build the image:
```shell
# Specify the repository location and version
DIB_REPOLOCATION_ironic_python_agent=git@172.20.2.149:liuzz/ironic-python-agent.git
DIB_REPOREF_ironic_python_agent=origin/develop

# Clone the code directly from gerrit
DIB_REPOLOCATION_ironic_python_agent=https://review.opendev.org/openstack/ironic-python-agent
DIB_REPOREF_ironic_python_agent=refs/changes/43/701043/1
```

Reference: [source-repositories](https://docs.openstack.org/diskimage-builder/latest/elements/source-repositories/README.html).

Specifying the repository location and version has been verified to work.

Note

The PXE configuration file template in native OpenStack does not support the arm64 architecture, so the native OpenStack code needs to be modified by the user:

- In Train, the community ironic still does not support UEFI PXE boot on arm64. The symptom is that the generated grub.cfg file (usually under /tftpboot/) has an incorrect format, which causes PXE boot to fail. Users need to modify the code logic that generates grub.cfg by themselves.

- TLS errors when ironic sends requests to ipa to query the command execution status: in Train, both ipa and ironic enable TLS authentication by default when sending requests to each other. Disable it as described in the official documentation:

  1. Modify the ironic configuration file (/etc/ironic/ironic.conf), adding `ipa-insecure=1` to the following configuration:

  ```
  [agent]
  verify_ca = False

  [pxe]
  pxe_append_params = nofb nomodeset vga=normal coreos.autologin ipa-insecure=1
  ```

  2. Add the ipa configuration file /etc/ironic_python_agent/ironic_python_agent.conf in the ramdisk image and configure TLS as follows (the /etc/ironic_python_agent directory needs to be created in advance):

  ```
  [DEFAULT]
  enable_auto_tls = False
  ```

  Set the permissions:

  ```shell
  chown -R ipa.ipa /etc/ironic_python_agent/
  ```

  3. Modify the service startup file of the ipa service, adding the configuration file option:

  ```shell
  vim usr/lib/systemd/system/ironic-python-agent.service

  [Unit]
  Description=Ironic Python Agent
  After=network-online.target

  [Service]
  ExecStartPre=/sbin/modprobe vfat
  ExecStart=/usr/local/bin/ironic-python-agent --config-file /etc/ironic_python_agent/ironic_python_agent.conf
  Restart=always
  RestartSec=30s

  [Install]
  WantedBy=multi-user.target
  ```

In Train we also provide services such as ironic-inspector; users can install them according to their own needs.

## Kolla Installation

Kolla provides production-ready containerized deployment of the OpenStack services.

Installing Kolla is very simple: just install the corresponding RPM packages:

```shell
yum install openstack-kolla openstack-kolla-ansible
```

After installation, commands such as `kolla-ansible`, `kolla-build`, `kolla-genpwd` and `kolla-mergepwd` can be used to build images and deploy container environments.
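As a rough sketch of how these commands are typically combined: passwords are generated first, then a kolla-ansible inventory is used for the deployment. The `all-in-one` inventory below is the sample inventory shipped with kolla-ansible (copy it to the working directory first; its exact installed location depends on how the package lays out its files), and `/etc/kolla/globals.yml` must be edited for your environment before deploying:

```shell
# Generate random passwords for all services into /etc/kolla/passwords.yml
kolla-genpwd

# Deploy using a copy of the sample all-in-one inventory
kolla-ansible -i ./all-in-one bootstrap-servers
kolla-ansible -i ./all-in-one prechecks
kolla-ansible -i ./all-in-one deploy
kolla-ansible -i ./all-in-one post-deploy
```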
## Trove Installation

Trove is the Database service of OpenStack. It is recommended if you want to use the database service provided by OpenStack; otherwise it can be skipped.

Set up the database

The Database service stores information in a database. Create a `trove` database that can be accessed by the `trove` user, replacing `TROVE_DBPASSWORD` with a suitable password:

```shell
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE trove CHARACTER SET utf8;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'localhost' \
    IDENTIFIED BY 'TROVE_DBPASSWORD';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'%' \
    IDENTIFIED BY 'TROVE_DBPASSWORD';
```

Create the service user and credentials

1. Create the Trove service user:

```shell
openstack user create --domain default --password-prompt trove
openstack role add --project service --user trove admin
openstack service create --name trove --description "Database" database
```

Explanation: replace `TROVE_PASSWORD` with the password of the `trove` user.

2. Create the Database service API endpoints:

```shell
openstack endpoint create --region RegionOne database public http://controller:8779/v1.0/%\(tenant_id\)s
openstack endpoint create --region RegionOne database internal http://controller:8779/v1.0/%\(tenant_id\)s
openstack endpoint create --region RegionOne database admin http://controller:8779/v1.0/%\(tenant_id\)s
```

Install and configure the Trove components

1. Install the Trove packages:

```shell
yum install openstack-trove python3-troveclient
```

2. Configure `trove.conf`:

```shell
vim /etc/trove/trove.conf

[DEFAULT]
log_dir = /var/log/trove
trove_auth_url = http://controller:5000/
nova_compute_url = http://controller:8774/v2
cinder_url = http://controller:8776/v1
swift_url = http://controller:8080/v1/AUTH_
rpc_backend = rabbit
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672
auth_strategy = keystone
add_addresses = True
api_paste_config = /etc/trove/api-paste.ini
nova_proxy_admin_user = admin
nova_proxy_admin_pass = ADMIN_PASSWORD
nova_proxy_admin_tenant_name = service
taskmanager_manager = trove.taskmanager.manager.Manager
use_nova_server_config_drive = True
# Set these if using Neutron Networking
network_driver = trove.network.neutron.NeutronDriver
network_label_regex = .*

[database]
connection = mysql+pymysql://trove:TROVE_DBPASSWORD@controller/trove

[keystone_authtoken]
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = trove
password = TROVE_PASSWORD
```

Explanation:

- In the `[DEFAULT]` section, `nova_compute_url` and `cinder_url` are the endpoints created for Nova and Cinder in Keystone.
- `nova_proxy_XXX` is the information of a user that can access the Nova service; the example above uses the `admin` user.
- `transport_url` is the RabbitMQ connection information; replace `RABBIT_PASS` with the RabbitMQ password.
- `connection` in the `[database]` section is the information of the database created for Trove in MySQL earlier.
- In the Trove user information, replace `TROVE_PASSWORD` with the actual password of the trove user.
3. Configure `trove-guestagent.conf`:

```shell
vim /etc/trove/trove-guestagent.conf

rabbit_host = controller
rabbit_password = RABBIT_PASS
trove_auth_url = http://controller:5000/
```

Explanation: `guestagent` is an independent component of Trove. It needs to be pre-installed in the virtual machine image that Trove creates through Nova. After a database instance is created, the guestagent process starts inside it and reports heartbeats to Trove through the message queue (RabbitMQ), so the RabbitMQ user and password information must be configured here.

Starting from the Victoria release, Trove uses a single unified image to run different types of databases; the database services run in Docker containers inside the guest virtual machine.

- Replace `RABBIT_PASS` with the RabbitMQ password.

4. Generate the Trove database tables:

```shell
su -s /bin/sh -c "trove-manage db_sync" trove
```

Finish the installation

1. Configure the Trove services to start automatically:

```shell
systemctl enable openstack-trove-api.service \
    openstack-trove-taskmanager.service \
    openstack-trove-conductor.service
```

2. Start the services:

```shell
systemctl start openstack-trove-api.service \
    openstack-trove-taskmanager.service \
    openstack-trove-conductor.service
```
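With the services running, a quick sanity check can be done with the Database service CLI (the `openstack database` commands come from the python3-troveclient package installed above; an admin credentials file such as `~/.admin-openrc` is assumed, and an empty list is expected on a fresh installation):

```shell
source ~/.admin-openrc

# Both commands should return without errors; the instance list is empty on a new deployment
openstack database instance list
openstack database flavor list
```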
## Swift Installation

Swift provides an elastic, scalable and highly available distributed object storage service, suitable for storing unstructured data at large scale.

Create the service credentials and API endpoints.

Create the service credentials:

```shell
# Create the swift user:
openstack user create --domain default --password-prompt swift
# Add the admin role for the swift user:
openstack role add --project service --user swift admin
# Create the swift service entity:
openstack service create --name swift --description "OpenStack Object Storage" object-store
```

Create the swift API endpoints:

```shell
openstack endpoint create --region RegionOne object-store public http://controller:8080/v1/AUTH_%\(project_id\)s
openstack endpoint create --region RegionOne object-store internal http://controller:8080/v1/AUTH_%\(project_id\)s
openstack endpoint create --region RegionOne object-store admin http://controller:8080/v1
```

Install the packages:

```shell
yum install openstack-swift-proxy python3-swiftclient python3-keystoneclient python3-keystonemiddleware memcached    (CTL)
```

Configure proxy-server

The Swift RPM package already ships a basically usable proxy-server.conf; only the IP and the swift password in it need to be modified manually.

Note: replace the password with the one you chose for the swift user in the Identity service.

Install and configure the storage nodes (STG)

Install the supporting packages:

```shell
yum install xfsprogs rsync
```

Format the /dev/vdb and /dev/vdc devices as XFS:

```shell
mkfs.xfs /dev/vdb
mkfs.xfs /dev/vdc
```

Create the mount point directory structure:

```shell
mkdir -p /srv/node/vdb
mkdir -p /srv/node/vdc
```

Find the UUIDs of the new partitions:

```shell
blkid
```

Edit the /etc/fstab file and add the following to it:

```shell
UUID="" /srv/node/vdb xfs noatime 0 2
UUID="" /srv/node/vdc xfs noatime 0 2
```

Mount the devices:

```shell
mount /srv/node/vdb
mount /srv/node/vdc
```

Note: if disaster tolerance is not required, only one device needs to be created in the steps above and the rsync configuration below can be skipped.

(Optional) Create or edit the /etc/rsyncd.conf file to contain the following:

```
[DEFAULT]
uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
address = MANAGEMENT_INTERFACE_IP_ADDRESS

[account]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/account.lock

[container]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/container.lock

[object]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/object.lock
```

Replace `MANAGEMENT_INTERFACE_IP_ADDRESS` with the IP address of the management network on the storage node.

Start the rsyncd service and configure it to start when the system boots:

```shell
systemctl enable rsyncd.service
systemctl start rsyncd.service
```

Install and configure the components on the storage nodes (STG)

Install the packages:

```shell
yum install openstack-swift-account openstack-swift-container openstack-swift-object
```

Edit the account-server.conf, container-server.conf and object-server.conf files in the /etc/swift directory, replacing `bind_ip` with the IP address of the management network on the storage node.

Ensure the correct ownership of the mount point directory structure:

```shell
chown -R swift:swift /srv/node
```

Create the recon directory and ensure that it has the correct ownership:

```shell
mkdir -p /var/cache/swift
chown -R root:swift /var/cache/swift
chmod -R 775 /var/cache/swift
```

Create the account ring (CTL)

Change to the /etc/swift directory:

```shell
cd /etc/swift
```

Create the base account.builder file:

```shell
swift-ring-builder account.builder create 10 1 1
```

Add each storage node to the ring:

```shell
swift-ring-builder account.builder add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6202 --device DEVICE_NAME --weight DEVICE_WEIGHT
```

Replace `STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS` with the IP address of the management network on the storage node and `DEVICE_NAME` with the name of a storage device on the same storage node.

Note: repeat this command for every storage device on every storage node.

Verify the ring contents:

```shell
swift-ring-builder account.builder
```

Rebalance the ring:

```shell
swift-ring-builder account.builder rebalance
```
Create the container ring (CTL)

Change to the /etc/swift directory.

Create the base container.builder file:

```shell
swift-ring-builder container.builder create 10 1 1
```

Add each storage node to the ring:

```shell
swift-ring-builder container.builder \
  add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6201 \
  --device DEVICE_NAME --weight 100
```

Replace `STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS` with the IP address of the management network on the storage node and `DEVICE_NAME` with the name of a storage device on the same storage node.

Note: repeat this command for every storage device on every storage node.

Verify the ring contents:

```shell
swift-ring-builder container.builder
```

Rebalance the ring:

```shell
swift-ring-builder container.builder rebalance
```

Create the object ring (CTL)

Change to the /etc/swift directory.

Create the base object.builder file:

```shell
swift-ring-builder object.builder create 10 1 1
```

Add each storage node to the ring:

```shell
swift-ring-builder object.builder \
  add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6200 \
  --device DEVICE_NAME --weight 100
```

Replace `STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS` with the IP address of the management network on the storage node and `DEVICE_NAME` with the name of a storage device on the same storage node.

Note: repeat this command for every storage device on every storage node.

Verify the ring contents:

```shell
swift-ring-builder object.builder
```

Rebalance the ring:

```shell
swift-ring-builder object.builder rebalance
```

Distribute the ring configuration files:

Copy the account.ring.gz, container.ring.gz and object.ring.gz files to the /etc/swift directory on every storage node and on any other node running the proxy service.

Finish the installation

Edit the /etc/swift/swift.conf file:

```
[swift-hash]
swift_hash_path_suffix = test-hash
swift_hash_path_prefix = test-hash

[storage-policy:0]
name = Policy-0
default = yes
```

Replace `test-hash` with unique values.

Copy the swift.conf file to the /etc/swift directory on every storage node and on any other node running the proxy service.

On all nodes, ensure the correct ownership of the configuration directory:

```shell
chown -R root:swift /etc/swift
```

On the controller node and on any other node running the proxy service, start the Object Storage proxy service and its dependencies, and configure them to start when the system boots:

```shell
systemctl enable openstack-swift-proxy.service memcached.service
systemctl start openstack-swift-proxy.service memcached.service
```

On the storage nodes, start the Object Storage services and configure them to start when the system boots:
```shell
systemctl enable openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service
systemctl start openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service

systemctl enable openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service
systemctl start openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service

systemctl enable openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service
systemctl start openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service
```

## Cyborg Installation

Cyborg provides support for accelerator devices in OpenStack, including GPU, FPGA, ASIC, NP, SoCs, NVMe/NOF SSDs, ODP, DPDK/SPDK and so on.

Initialize the corresponding database:

```shell
CREATE DATABASE cyborg;
GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'localhost' IDENTIFIED BY 'CYBORG_DBPASS';
GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'%' IDENTIFIED BY 'CYBORG_DBPASS';
```

Create the corresponding Keystone resource objects:

```shell
$ openstack user create --domain default --password-prompt cyborg
$ openstack role add --project service --user cyborg admin
$ openstack service create --name cyborg --description "Acceleration Service" accelerator

$ openstack endpoint create --region RegionOne \
  accelerator public http://:6666/v1
$ openstack endpoint create --region RegionOne \
  accelerator internal http://:6666/v1
$ openstack endpoint create --region RegionOne \
  accelerator admin http://:6666/v1
```

Install Cyborg:

```shell
yum install openstack-cyborg
```

Configure Cyborg

Modify /etc/cyborg/cyborg.conf:

```
[DEFAULT]
transport_url = rabbit://%RABBITMQ_USER%:%RABBITMQ_PASSWORD%@%OPENSTACK_HOST_IP%:5672/
use_syslog = False
state_path = /var/lib/cyborg
debug = True

[database]
connection = mysql+pymysql://%DATABASE_USER%:%DATABASE_PASSWORD%@%OPENSTACK_HOST_IP%/cyborg

[service_catalog]
project_domain_id = default
user_domain_id = default
project_name = service
password = PASSWORD
username = cyborg
auth_url = http://%OPENSTACK_HOST_IP%/identity
auth_type = password

[placement]
project_domain_name = Default
project_name = service
user_domain_name = Default
password = PASSWORD
username = placement
auth_url = http://%OPENSTACK_HOST_IP%/identity
auth_type = password

[keystone_authtoken]
memcached_servers = localhost:11211
project_domain_name = Default
project_name = service
user_domain_name = Default
password = PASSWORD
username = cyborg
auth_url = http://%OPENSTACK_HOST_IP%/identity
auth_type = password
```

Modify the corresponding user names, passwords, IP addresses and other information as appropriate.

Synchronize the database tables:

```shell
cyborg-dbsync --config-file /etc/cyborg/cyborg.conf upgrade
```

Start the Cyborg services:

```shell
systemctl enable openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent
systemctl start openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent
```
## Aodh Installation

Create the database:

```shell
CREATE DATABASE aodh;
GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'localhost' IDENTIFIED BY 'AODH_DBPASS';
GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'%' IDENTIFIED BY 'AODH_DBPASS';
```

Create the corresponding Keystone resource objects:

```shell
openstack user create --domain default --password-prompt aodh
openstack role add --project service --user aodh admin
openstack service create --name aodh --description "Telemetry" alarming

openstack endpoint create --region RegionOne alarming public http://controller:8042
openstack endpoint create --region RegionOne alarming internal http://controller:8042
openstack endpoint create --region RegionOne alarming admin http://controller:8042
```

Install Aodh:

```shell
yum install openstack-aodh-api openstack-aodh-evaluator openstack-aodh-notifier openstack-aodh-listener openstack-aodh-expirer python3-aodhclient
```

Modify the configuration file:

```
[database]
connection = mysql+pymysql://aodh:AODH_DBPASS@controller/aodh

[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = aodh
password = AODH_PASS

[service_credentials]
auth_type = password
auth_url = http://controller:5000/v3
project_domain_id = default
user_domain_id = default
project_name = service
username = aodh
password = AODH_PASS
interface = internalURL
region_name = RegionOne
```

Initialize the database:

```shell
aodh-dbsync
```

Start the Aodh services:

```shell
systemctl enable openstack-aodh-api.service openstack-aodh-evaluator.service openstack-aodh-notifier.service openstack-aodh-listener.service
systemctl start openstack-aodh-api.service openstack-aodh-evaluator.service openstack-aodh-notifier.service openstack-aodh-listener.service
```
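A quick way to confirm that aodh-api is reachable is to list alarms with the client installed above (python3-aodhclient provides the `openstack alarm` commands; an empty list is expected on a fresh installation, and an admin credentials file such as `~/.admin-openrc` is assumed):

```shell
source ~/.admin-openrc
openstack alarm list
```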
## Gnocchi Installation

Create the database:

```shell
CREATE DATABASE gnocchi;
GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'localhost' IDENTIFIED BY 'GNOCCHI_DBPASS';
GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'%' IDENTIFIED BY 'GNOCCHI_DBPASS';
```

Create the corresponding Keystone resource objects:

```shell
openstack user create --domain default --password-prompt gnocchi
openstack role add --project service --user gnocchi admin
openstack service create --name gnocchi --description "Metric Service" metric

openstack endpoint create --region RegionOne metric public http://controller:8041
openstack endpoint create --region RegionOne metric internal http://controller:8041
openstack endpoint create --region RegionOne metric admin http://controller:8041
```

Install Gnocchi:

```shell
yum install openstack-gnocchi-api openstack-gnocchi-metricd python3-gnocchiclient
```

Modify the configuration file /etc/gnocchi/gnocchi.conf:

```
[api]
auth_mode = keystone
port = 8041
uwsgi_mode = http-socket

[keystone_authtoken]
auth_type = password
auth_url = http://controller:5000/v3
project_domain_name = Default
user_domain_name = Default
project_name = service
username = gnocchi
password = GNOCCHI_PASS
interface = internalURL
region_name = RegionOne

[indexer]
url = mysql+pymysql://gnocchi:GNOCCHI_DBPASS@controller/gnocchi

[storage]
# coordination_url is not required but specifying one will improve
# performance with better workload division across workers.
coordination_url = redis://controller:6379
file_basepath = /var/lib/gnocchi
driver = file
```

Initialize the database:

```shell
gnocchi-upgrade
```

Start the Gnocchi services:

```shell
systemctl enable openstack-gnocchi-api.service openstack-gnocchi-metricd.service
systemctl start openstack-gnocchi-api.service openstack-gnocchi-metricd.service
```

## Ceilometer Installation

Create the corresponding Keystone resource objects:

```shell
openstack user create --domain default --password-prompt ceilometer
openstack role add --project service --user ceilometer admin
openstack service create --name ceilometer --description "Telemetry" metering
```

Install Ceilometer:

```shell
yum install openstack-ceilometer-notification openstack-ceilometer-central
```

Modify the configuration file /etc/ceilometer/pipeline.yaml:

```
publishers:
    # set address of Gnocchi
    # + filter out Gnocchi-related activity meters (Swift driver)
    # + set default archive policy
    - gnocchi://?filter_project=service&archive_policy=low
```

Modify the configuration file /etc/ceilometer/ceilometer.conf:

```
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller

[service_credentials]
auth_type = password
auth_url = http://controller:5000/v3
project_domain_id = default
user_domain_id = default
project_name = service
username = ceilometer
password = CEILOMETER_PASS
interface = internalURL
region_name = RegionOne
```

Initialize the database:

```shell
ceilometer-upgrade
```

Start the Ceilometer services:

```shell
systemctl enable openstack-ceilometer-notification.service openstack-ceilometer-central.service
systemctl start openstack-ceilometer-notification.service openstack-ceilometer-central.service
```
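Once Ceilometer is polling and publishing to Gnocchi, measurements should start to appear. A minimal check with the gnocchi client installed above (results will stay empty until the first polling interval has elapsed; an admin credentials file such as `~/.admin-openrc` is assumed):

```shell
source ~/.admin-openrc

# Resources discovered by Ceilometer (instances, images, ...) and their metrics
gnocchi resource list
gnocchi metric list
```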
## Heat Installation

Create the heat database and grant it the correct access permissions, replacing `HEAT_DBPASS` with a suitable password:

```shell
CREATE DATABASE heat;
GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'localhost' IDENTIFIED BY 'HEAT_DBPASS';
GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'%' IDENTIFIED BY 'HEAT_DBPASS';
```

Create the service credentials: create the heat user and add the admin role to it:

```shell
openstack user create --domain default --password-prompt heat
openstack role add --project service --user heat admin
```

Create the heat and heat-cfn services and their corresponding API endpoints:

```shell
openstack service create --name heat --description "Orchestration" orchestration
openstack service create --name heat-cfn --description "Orchestration" cloudformation

openstack endpoint create --region RegionOne orchestration public http://controller:8004/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne orchestration internal http://controller:8004/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne orchestration admin http://controller:8004/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne cloudformation public http://controller:8000/v1
openstack endpoint create --region RegionOne cloudformation internal http://controller:8000/v1
openstack endpoint create --region RegionOne cloudformation admin http://controller:8000/v1
```

Create the additional information needed for stack management, including the heat domain and its admin user heat_domain_admin, the heat_stack_owner role and the heat_stack_user role:

```shell
openstack user create --domain heat --password-prompt heat_domain_admin
openstack role add --domain heat --user-domain heat --user heat_domain_admin admin
openstack role create heat_stack_owner
openstack role create heat_stack_user
```

Install the packages:

```shell
yum install openstack-heat-api openstack-heat-api-cfn openstack-heat-engine
```

Modify the configuration file /etc/heat/heat.conf:

```
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
heat_metadata_server_url = http://controller:8000
heat_waitcondition_server_url = http://controller:8000/v1/waitcondition
stack_domain_admin = heat_domain_admin
stack_domain_admin_password = HEAT_DOMAIN_PASS
stack_user_domain_name = heat

[database]
connection = mysql+pymysql://heat:HEAT_DBPASS@controller/heat

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = heat
password = HEAT_PASS

[trustee]
auth_type = password
auth_url = http://controller:5000
username = heat
password = HEAT_PASS
user_domain_name = default

[clients_keystone]
auth_uri = http://controller:5000
```

Initialize the heat database tables:

```shell
su -s /bin/sh -c "heat-manage db_sync" heat
```

Start the services:

```shell
systemctl enable openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service
systemctl start openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service
```
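To exercise the Orchestration service end to end, a stack can be created from a minimal HOT template. The following is only a sketch: `IMAGE_NAME`, `FLAVOR_NAME` and `NETWORK_NAME` are placeholders for resources that must already exist in your environment, and the `openstack stack` commands assume that python3-heatclient is installed:

```shell
cat > test-stack.yaml << 'EOF'
heat_template_version: 2018-08-31

description: Minimal test stack that boots a single server

parameters:
  image:
    type: string
    default: IMAGE_NAME
  flavor:
    type: string
    default: FLAVOR_NAME
  network:
    type: string
    default: NETWORK_NAME

resources:
  test_server:
    type: OS::Nova::Server
    properties:
      image: { get_param: image }
      flavor: { get_param: flavor }
      networks:
        - network: { get_param: network }
EOF

openstack stack create -t test-stack.yaml test-stack
openstack stack list
```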
## Quick Deployment with the OpenStack SIG Development Tool oos

oos (openEuler OpenStack SIG) is the command line tool provided by the OpenStack SIG. The `oos env` family of commands provides ansible scripts for one-click deployment of OpenStack (`all in one` or a three-node `cluster`), with which users can quickly deploy an OpenStack environment based on openEuler RPM packages. The oos tool supports deploying an OpenStack environment either against a cloud provider (currently only the Huawei Cloud provider is supported) or against managed hosts. The following takes deploying an `all in one` OpenStack environment against Huawei Cloud as an example to explain how to use the oos tool.

1. Install the oos tool:

```shell
pip install openstack-sig-tool
```

2. Configure the Huawei Cloud provider information.

Open the /usr/local/etc/oos/oos.conf file and set the configuration to the Huawei Cloud resources you own:

```
[huaweicloud]
ak = 
sk = 
region = ap-southeast-3
root_volume_size = 100
data_volume_size = 100
security_group_name = oos
image_format = openEuler-%%(release)s-%%(arch)s
vpc_name = oos_vpc
subnet1_name = oos_subnet1
subnet2_name = oos_subnet2
```

3. Configure the OpenStack environment information.

Open the /usr/local/etc/oos/oos.conf file and modify the configuration according to the current machine environment and your needs. The content is as follows:

```
[environment]
mysql_root_password = root
mysql_project_password = root
rabbitmq_password = root
project_identity_password = root
enabled_service = keystone,neutron,cinder,placement,nova,glance,horizon,aodh,ceilometer,cyborg,gnocchi,kolla,heat,swift,trove,tempest
neutron_provider_interface_name = br-ex
default_ext_subnet_range = 10.100.100.0/24
default_ext_subnet_gateway = 10.100.100.1
neutron_dataplane_interface_name = eth1
cinder_block_device = vdb
swift_storage_devices = vdc
swift_hash_path_suffix = ash
swift_hash_path_prefix = has
glance_api_workers = 2
cinder_api_workers = 2
nova_api_workers = 2
nova_metadata_api_workers = 2
nova_conductor_workers = 2
nova_scheduler_workers = 2
neutron_api_workers = 2
horizon_allowed_host = *
kolla_openeuler_plugin = false
```

Key configuration items:

| Configuration item | Explanation |
|:---|:---|
| enabled_service | list of services to install; trim it according to your needs |
| neutron_provider_interface_name | name of the neutron L3 bridge |
| default_ext_subnet_range | neutron private network IP range |
| default_ext_subnet_gateway | neutron private network gateway |
| neutron_dataplane_interface_name | NIC used by neutron; a dedicated new NIC is recommended to avoid conflicts with existing NICs, which could disconnect the all-in-one host |
| cinder_block_device | name of the volume device used by cinder |
| swift_storage_devices | name of the volume device used by swift |
| kolla_openeuler_plugin | whether to enable the kolla plugin; when set to True, kolla will support deploying openEuler containers |

4. Create an openEuler 22.03-LTS-SP1 x86_64 virtual machine on Huawei Cloud for deploying the `all in one` OpenStack:

```shell
# sshpass is used during `oos env create` to configure passwordless access to the target virtual machine
dnf install sshpass
oos env create -r 22.03-lts-sp1 -f small -a x86 -n test-oos all_in_one
```

The detailed parameters can be viewed with the `oos env create --help` command.

5. Deploy the OpenStack `all in one` environment:

```shell
oos env setup test-oos -r train
```

The detailed parameters can be viewed with the `oos env setup --help` command.

6. Initialize the tempest environment.

If you want to run tempest tests in this environment, you can run the `oos env init` command, which automatically creates the OpenStack resources needed by tempest:

```shell
oos env init test-oos
```

After the command succeeds, a `mytest` directory is generated in the user's home directory; enter it and the `tempest run` command can be executed.

If OpenStack is deployed on managed hosts instead, the overall flow is the same as deploying against Huawei Cloud above: steps 1, 3, 5 and 6 remain unchanged, step 2 (configuring the Huawei Cloud provider information) is removed, and step 4 changes from creating a virtual machine on Huawei Cloud to managing the target host:

```shell
# sshpass is used to configure passwordless access to the target host
dnf install sshpass
oos env manage -r 22.03-lts-sp1 -i TARGET_MACHINE_IP -p TARGET_MACHINE_PASSWD -n test-oos
```

Replace `TARGET_MACHINE_IP` with the IP of the target machine and `TARGET_MACHINE_PASSWD` with the password of the target machine. The detailed parameters can be viewed with the `oos env manage --help` command.
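Putting the steps above together, a typical all-in-one run against Huawei Cloud looks roughly like the following (all commands are the ones documented above; the release, flavor and environment name are simply the values used in the examples, and /usr/local/etc/oos/oos.conf must be edited first):

```shell
# Install the tool and the sshpass dependency used by `oos env create`
pip install openstack-sig-tool
dnf install sshpass

# Create the VM, deploy OpenStack Train on it and prepare the tempest resources
oos env create -r 22.03-lts-sp1 -f small -a x86 -n test-oos all_in_one
oos env setup test-oos -r train
oos env init test-oos
```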
## Deployment with the OpenStack SIG Deployment Tool opensd

opensd is used to deploy the OpenStack component services in batches with scripts.

### Deployment Steps

#### 1. Information to confirm before deployment

- When installing the operating system, selinux must be set to disabled.
- When installing the operating system, set `UseDNS` to `no` in the /etc/ssh/sshd_config configuration file.
- The operating system language must be set to English.
- Before deployment, make sure that the /etc/hosts file on all compute nodes does not contain an entry resolving the compute host itself.

#### 2. Creating the ceph pools and credentials (optional)

Skip this step if ceph is not used or an existing ceph cluster is used. Run the following on any ceph monitor node:

##### 2.1 Create the pools

```shell
ceph osd pool create volumes 2048
ceph osd pool create images 2048
```

##### 2.2 Initialize the pools

```shell
rbd pool init volumes
rbd pool init images
```

##### 2.3 Create the user credentials

```shell
ceph auth get-or-create client.glance mon 'profile rbd' osd 'profile rbd pool=images' mgr 'profile rbd pool=images'
ceph auth get-or-create client.cinder mon 'profile rbd' osd 'profile rbd pool=volumes, profile rbd pool=images' mgr 'profile rbd pool=volumes'
```

#### 3. Configuring LVM (optional)

According to the disk configuration and free space of the physical machine, mount additional disk space for the MySQL data directory. An example is shown below (configure according to the actual situation):

```shell
fdisk -l

Disk /dev/sdd: 479.6 GB, 479559942144 bytes, 936640512 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk label type: dos
Disk identifier: 0x000ed242
```

Create the partition:

```shell
parted /dev/sdd mkpart primary 0 -1
```

Create the PV:

```shell
partprobe /dev/sdd1
pvcreate /dev/sdd1
```

Create and activate the VG:

```shell
vgcreate vg_mariadb /dev/sdd1
vgchange -ay vg_mariadb
```

Check the VG capacity:

```shell
vgdisplay

  --- Volume group ---
  VG Name               vg_mariadb
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  2
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               1
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               446.62 GiB
  PE Size               4.00 MiB
  Total PE              114335
  Alloc PE / Size       114176 / 446.00 GiB
  Free  PE / Size       159 / 636.00 MiB
  VG UUID               bVUmDc-VkMu-Vi43-mg27-TEkG-oQfK-TvqdEc
```

Create the LV:

```shell
lvcreate -L 446G -n lv_mariadb vg_mariadb
```

Format the disk and get the UUID of the volume:

```shell
mkfs.ext4 /dev/mapper/vg_mariadb-lv_mariadb
blkid /dev/mapper/vg_mariadb-lv_mariadb

/dev/mapper/vg_mariadb-lv_mariadb: UUID="98d513eb-5f64-4aa5-810e-dc7143884fa2" TYPE="ext4"
```

Note: 98d513eb-5f64-4aa5-810e-dc7143884fa2 is the UUID of the volume.

Mount the disk:

```shell
mount /dev/mapper/vg_mariadb-lv_mariadb /var/lib/mysql
rm -rf /var/lib/mysql/*
```
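The UUID obtained above is typically used to make the mount persistent across reboots. A sketch of the corresponding /etc/fstab entry, using the UUID from the example output (replace it with the UUID reported by blkid on your own machine):

```shell
cat >> /etc/fstab << 'EOF'
UUID=98d513eb-5f64-4aa5-810e-dc7143884fa2 /var/lib/mysql ext4 defaults 0 0
EOF
```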
#### 4. Configuring the yum repo

Run the following on the deployment node:

##### 4.1 Back up the yum sources

```shell
mkdir /etc/yum.repos.d/bak/
mv /etc/yum.repos.d/*.repo /etc/yum.repos.d/bak/
```

##### 4.2 Configure the yum repo

```shell
cat > /etc/yum.repos.d/opensd.repo << 'EOF'
[train]
name=train
baseurl=http://119.3.219.20:82/openEuler:/22.03:/LTS:/SP1:/Epol:/Multi-Version:/OpenStack:/Train/standard_$basearch/
enabled=1
gpgcheck=0

[epol]
name=epol
baseurl=http://119.3.219.20:82/openEuler:/22.03:/LTS:/SP1:/Epol/standard_$basearch/
enabled=1
gpgcheck=0

[everything]
name=everything
baseurl=http://119.3.219.20:82/openEuler:/22.03:/LTS:/SP1/standard_$basearch/
enabled=1
gpgcheck=0
EOF
```

##### 4.3 Refresh the yum cache

```shell
yum clean all
yum makecache
```

#### 5. Installing opensd

Run the following on the deployment node:

##### 5.1 Clone the opensd source code and install it

```shell
git clone https://gitee.com/openeuler/opensd
cd opensd
python3 setup.py install
```

#### 6. Setting up SSH trust

Run the following on the deployment node:

##### 6.1 Generate the key pair

Run the following command and press Enter through all prompts:

```shell
ssh-keygen
```

##### 6.2 Generate the host IP address file

Configure all the host IPs that will be used in auto_ssh_host_ip, for example:

```shell
cd /usr/local/share/opensd/tools/
vim auto_ssh_host_ip

10.0.0.1
10.0.0.2
...
10.0.0.10
```

##### 6.3 Change the password and run the script

Replace the string 123123 in the passwordless-access script /usr/local/bin/opensd-auto-ssh with the real password of the hosts:

```shell
# Replace the 123123 string in the script
vim /usr/local/bin/opensd-auto-ssh

## Install expect and then run the script
dnf install expect -y
opensd-auto-ssh
```

##### 6.4 Set up SSH trust between the deployment node and the ceph monitor (optional)

```shell
ssh-copy-id root@x.x.x.x
```
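Passwordless access can be verified from the deployment node before continuing; a minimal check against the first host listed in auto_ssh_host_ip (the IP below is simply the one from the example above):

```shell
# Should print the remote hostname without prompting for a password
ssh root@10.0.0.1 hostname
```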
#### 7. Configuring opensd

Run the following on the deployment node:

##### 7.1 Generate random passwords

Install python3-pbr, python3-utils, python3-pyyaml and python3-oslo-utils, then generate the passwords randomly:

```shell
dnf install python3-pbr python3-utils python3-pyyaml python3-oslo-utils -y

# Run the command to generate the passwords
opensd-genpwd

# Check that the passwords were generated
cat /usr/local/share/opensd/etc_examples/opensd/passwords.yml
```

##### 7.2 Configure the inventory file

The host information contains the host name, the ansible_host IP and the availability_zone; all three must be configured and none can be omitted. Example:

```shell
vim /usr/local/share/opensd/ansible/inventory/multinode

# Host information of the three controller nodes
[control]
controller1 ansible_host=10.0.0.35 availability_zone=az01.cell01.cn-yogadev-1
controller2 ansible_host=10.0.0.36 availability_zone=az01.cell01.cn-yogadev-1
controller3 ansible_host=10.0.0.37 availability_zone=az01.cell01.cn-yogadev-1

# Network node information, kept the same as the controller nodes
[network]
controller1 ansible_host=10.0.0.35 availability_zone=az01.cell01.cn-yogadev-1
controller2 ansible_host=10.0.0.36 availability_zone=az01.cell01.cn-yogadev-1
controller3 ansible_host=10.0.0.37 availability_zone=az01.cell01.cn-yogadev-1

# cinder-volume service node information
[storage]
storage1 ansible_host=10.0.0.61 availability_zone=az01.cell01.cn-yogadev-1
storage2 ansible_host=10.0.0.78 availability_zone=az01.cell01.cn-yogadev-1
storage3 ansible_host=10.0.0.82 availability_zone=az01.cell01.cn-yogadev-1

# Cell1 cluster information
[cell-control-cell1]
cell1 ansible_host=10.0.0.24 availability_zone=az01.cell01.cn-yogadev-1
cell2 ansible_host=10.0.0.25 availability_zone=az01.cell01.cn-yogadev-1
cell3 ansible_host=10.0.0.26 availability_zone=az01.cell01.cn-yogadev-1

[compute-cell1]
compute1 ansible_host=10.0.0.27 availability_zone=az01.cell01.cn-yogadev-1
compute2 ansible_host=10.0.0.28 availability_zone=az01.cell01.cn-yogadev-1
compute3 ansible_host=10.0.0.29 availability_zone=az01.cell01.cn-yogadev-1

[cell1:children]
cell-control-cell1
compute-cell1

# Cell2 cluster information
[cell-control-cell2]
cell4 ansible_host=10.0.0.36 availability_zone=az03.cell02.cn-yogadev-1
cell5 ansible_host=10.0.0.37 availability_zone=az03.cell02.cn-yogadev-1
cell6 ansible_host=10.0.0.38 availability_zone=az03.cell02.cn-yogadev-1

[compute-cell2]
compute4 ansible_host=10.0.0.39 availability_zone=az03.cell02.cn-yogadev-1
compute5 ansible_host=10.0.0.40 availability_zone=az03.cell02.cn-yogadev-1
compute6 ansible_host=10.0.0.41 availability_zone=az03.cell02.cn-yogadev-1

[cell2:children]
cell-control-cell2
compute-cell2

[baremetal]

[compute-cell1-ironic]

# Fill in the control host groups of all cell clusters
[nova-conductor:children]
cell-control-cell1
cell-control-cell2

# Fill in the compute host groups of all cell clusters
[nova-compute:children]
compute-added
compute-cell1
compute-cell2

# The host groups below do not need to be changed; keep them as they are
[compute-added]

[chrony-server:children]
control

[pacemaker:children]
control

......
......
```
### 7.3 Configure the global variables

Note: only the options that carry a comment below need to be changed; the other parameters can be left as they are, and options that do not apply to your environment are left empty.

```
vim /usr/local/share/opensd/etc_examples/opensd/globals.yml

########################
# Network & Base options
########################
network_interface: "eth0"            # NIC name of the management network
neutron_external_interface: "eth1"   # NIC name of the business network
cidr_netmask: 24                     # netmask of the management network
opensd_vip_address: 10.0.0.33        # virtual IP address of the controller nodes
cell1_vip_address: 10.0.0.34         # virtual IP address of the cell1 cluster
cell2_vip_address: 10.0.0.35         # virtual IP address of the cell2 cluster
external_fqdn: ""                    # external domain name used for VNC access to VMs
external_ntp_servers: []             # external NTP server addresses
yumrepo_host:                        # IP address of the yum repository
yumrepo_port:                        # port of the yum repository
environment:                         # type of the yum repository
upgrade_all_packages: "yes"          # whether to upgrade all installed packages (runs yum upgrade); set to "yes" for an initial deployment
enable_miner: "no"                   # whether to deploy the miner service
enable_chrony: "no"                  # whether to deploy the chrony service
enable_pri_mariadb: "no"             # whether to deploy mariadb for a private cloud
enable_hosts_file_modify: "no"       # whether to add node information to /etc/hosts when scaling out compute nodes or deploying the ironic service

########################
# Available zone options
########################
az_cephmon_compose:
  - availability_zone:               # availability zone name; must match the "availability_zone" value of az01 in the multinode inventory
    ceph_mon_host:                   # address of one ceph monitor host for az01; the deployment node needs SSH trust with this host
    reserve_vcpu_based_on_numa:
  - availability_zone:               # availability zone name; must match the "availability_zone" value of az02 in the multinode inventory
    ceph_mon_host:                   # address of one ceph monitor host for az02; the deployment node needs SSH trust with this host
    reserve_vcpu_based_on_numa:
  - availability_zone:               # availability zone name; must match the "availability_zone" value of az03 in the multinode inventory
    ceph_mon_host:                   # address of one ceph monitor host for az03; the deployment node needs SSH trust with this host
    reserve_vcpu_based_on_numa:

# `reserve_vcpu_based_on_numa` is set to `yes` or `no`. For example, on a host with:
#   NUMA node0 CPU(s): 0-15,32-47
#   NUMA node1 CPU(s): 16-31,48-63
# with reserve_vcpu_based_on_numa: "yes", vcpus are reserved evenly per NUMA node:
#   vcpu_pin_set = 2-15,34-47,18-31,50-63
# with reserve_vcpu_based_on_numa: "no", vcpus are reserved sequentially starting from the first vcpu:
#   vcpu_pin_set = 8-64

#######################
# Nova options
#######################
nova_reserved_host_memory_mb: 2048   # memory reserved for the compute service on compute nodes
enable_cells: "yes"                  # whether the cell nodes are deployed on separate nodes
support_gpu: "False"                 # whether the cell has GPU servers; True if so, otherwise False

#######################
# Neutron options
#######################
monitor_ip:                          # monitoring nodes
  - 10.0.0.9
  - 10.0.0.10
enable_meter_full_eip: True          # whether full EIP metering is allowed, default True
enable_meter_port_forwarding: True   # whether port forwarding metering is allowed, default True
enable_meter_ecs_ipv6: True          # whether ecs_ipv6 metering is allowed, default True
enable_meter: True                   # whether metering is enabled, default True
is_sdn_arch: False                   # whether this is an SDN architecture, default False

# The network type enabled by default is vlan; only one of vlan and vxlan can be chosen.
enable_vxlan_network_type: False     # set to True to use a vxlan network, False to use a vlan network
enable_neutron_fwaas: False          # set to True if the environment uses a firewall, to enable the firewall function

# Neutron provider
neutron_provider_networks:
  network_types: "{{ 'vxlan' if enable_vxlan_network_type else 'vlan' }}"
  network_vlan_ranges: "default:xxx:xxx"   # vlan range of the business network planned before deployment
  network_mappings: "default:br-provider"
  network_interface: "{{ neutron_external_interface }}"
  network_vxlan_ranges: ""                 # vxlan range of the business network planned before deployment

# The options below configure the SDN controller; set `enable_sdn_controller` to True to enable it.
# Fill in the other parameters according to the plan made before deployment and the SDN deployment information.
enable_sdn_controller: False
sdn_controller_ip_address:           # IP address of the SDN controller
sdn_controller_username:             # user name of the SDN controller
sdn_controller_password:             # password of the SDN controller

#######################
# Dimsagent options
#######################
enable_dimsagent: "no"               # change to yes to install the image service agent
# Address and domain name for S3
s3_address_domain_pair:
  - host_ip:
    host_name:

#######################
# Trove options
#######################
enable_trove: "no"                   # change to yes to install trove
# default network
trove_default_neutron_networks:      # ID of the trove management network: `openstack network list|grep -w trove-mgmt|awk '{print$2}'`
# S3 setup (fill in null for the values below if there is no S3)
s3_endpoint_host_ip:                 # S3 IP
s3_endpoint_host_name:               # S3 domain name
s3_endpoint_url:                     # S3 URL, usually http://<S3 domain name>
s3_access_key:                       # S3 access key
s3_secret_key:                       # S3 secret key

#######################
# Ironic options
#######################
enable_ironic: "no"                  # whether to enable bare-metal deployment; disabled by default
ironic_neutron_provisioning_network_uuid:
ironic_neutron_cleaning_network_uuid: "{{ ironic_neutron_provisioning_network_uuid }}"
ironic_dnsmasq_interface:
ironic_dnsmasq_dhcp_range:
ironic_tftp_server_address: "{{ hostvars[inventory_hostname]['ansible_' + ironic_dnsmasq_interface]['ipv4']['address'] }}"
# switch device information
neutron_ml2_conf_genericswitch:
  genericswitch:xxxxxxx:
    device_type:
    ngs_mac_address:
    ip:
    username:
    password:
    ngs_port_default_vlan:

# Package state setting
haproxy_package_state: "present"
mariadb_package_state: "present"
rabbitmq_package_state: "present"
memcached_package_state: "present"
ceph_client_package_state: "present"
keystone_package_state: "present"
glance_package_state: "present"
cinder_package_state: "present"
nova_package_state: "present"
neutron_package_state: "present"
miner_package_state: "present"
```

### 7.4 Check the SSH connection status of all nodes

```
dnf install ansible -y
ansible all -i /usr/local/share/opensd/ansible/inventory/multinode -m ping

# every host reporting "SUCCESS" means the connections are fine, for example:
compute1 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "ping": "pong"
}
```
## 8. Run the deployment

Run the following on the deployment node.

### 8.1 Run bootstrap

```
# run the bootstrap stage
opensd -i /usr/local/share/opensd/ansible/inventory/multinode bootstrap --forks 50
```

### 8.2 Reboot the servers

Note: the reboot is required because bootstrap may upgrade the kernel, change the SELinux configuration, or set up GPU servers. If the hosts were already installed with the new kernel, SELinux is disabled, and there are no GPU servers, this step can be skipped.

```
# reboot the corresponding nodes manually
init 6

# after the reboot completes, check connectivity again
ansible all -i /usr/local/share/opensd/ansible/inventory/multinode -m ping

# after the operating system has rebooted, enable the yum repositories again
```

### 8.3 Run the pre-deployment checks

```
opensd -i /usr/local/share/opensd/ansible/inventory/multinode prechecks --forks 50
```

### 8.4 Run the deployment

```
ln -s /usr/bin/python3 /usr/bin/python
```

Full deployment:

```
opensd -i /usr/local/share/opensd/ansible/inventory/multinode deploy --forks 50
```

Single-service deployment:

```
opensd -i /usr/local/share/opensd/ansible/inventory/multinode deploy --forks 50 -t service_name
```

# OpenStack-Train Deployment Guide

This guide covers an OpenStack introduction, the conventions used in this document, environment preparation and configuration, installation of SQL DataBase, RabbitMQ, Memcached and the OpenStack services (Keystone, Glance, Placement, Nova, Neutron, Cinder, horizon, Tempest, Ironic, Kolla, Trove, Swift, Cyborg, Aodh, Gnocchi, Ceilometer and Heat), quick deployment with the OpenStack SIG development tool oos, and deployment with the OpenStack SIG deployment tool opensd (pre-deployment information, ceph pool and authentication, lvm, yum repo, opensd installation, SSH trust, opensd configuration, and running the deployment).

## OpenStack Introduction

OpenStack is both a community and a project. It provides an operating platform and a tool set for deploying clouds, giving organizations scalable and flexible cloud computing.

As an open source cloud management platform, OpenStack combines several major components, such as nova, cinder, neutron, glance, keystone and horizon, to do the actual work. OpenStack supports almost every type of cloud environment. The project aims to provide a cloud management platform that is simple to implement, massively scalable, feature-rich and standardized. OpenStack delivers an Infrastructure-as-a-Service (IaaS) solution through a set of complementary services, each of which offers an API for integration.

The official openEuler 22.03-LTS-SP1 repositories already include OpenStack-Train. After configuring the yum repository, users can deploy OpenStack by following this document.

## Conventions

OpenStack supports several deployment topologies. This document covers the ALL in One and Distributed deployment modes, with the following conventions:

ALL in One mode: ignore all possible suffixes.

Distributed mode:

- a `(CTL)` suffix means the configuration item or command applies only to the `controller node`
- a `(CPT)` suffix means the configuration item or command applies only to the `compute node`
- a `(STG)` suffix means the configuration item or command applies only to the `storage node`
- anything else applies to both the `controller node` and the `compute node`

Note: the services covered by the conventions above are Cinder, Nova and Neutron.

## Prepare the Environment

## Environment Configuration

Enable the OpenStack Train yum repository:

```
yum update
yum install openstack-release-train
yum clean all && yum makecache
```

Note: if EPOL is not enabled in your YUM configuration, configure it as well and make sure it is present, as shown below:

```
vi /etc/yum.repos.d/openEuler.repo

[EPOL]
name=EPOL
baseurl=http://repo.openeuler.org/openEuler-22.03-LTS-SP1/EPOL/main/$basearch/
enabled=1
gpgcheck=1
gpgkey=http://repo.openeuler.org/openEuler-22.03-LTS-SP1/OS/$basearch/RPM-GPG-KEY-openEuler
```

Modify the host names and mappings.

Set the host name on each node:

```
hostnamectl set-hostname controller    (CTL)
hostnamectl set-hostname compute       (CPT)
```

Assuming the IP of the controller node is 10.0.0.11 and the IP of the compute node (if present) is 10.0.0.12, add the following to /etc/hosts:

```
10.0.0.11   controller
10.0.0.12   compute
```

## Install SQL DataBase

Install the packages:

```
yum install mariadb mariadb-server python3-PyMySQL
```

Create and edit the /etc/my.cnf.d/openstack.cnf file:

```
vim /etc/my.cnf.d/openstack.cnf

[mysqld]
bind-address = 10.0.0.11
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
```

Note: set bind-address to the management IP address of the controller node.

Start the DataBase service and enable it at boot:

```
systemctl enable mariadb.service
systemctl start mariadb.service
```

Set the default database password (optional):

```
mysql_secure_installation
```

Note: follow the prompts.

## Install RabbitMQ

Install the packages:

```
yum install rabbitmq-server
```

Start the RabbitMQ service and enable it at boot:

```
systemctl enable rabbitmq-server.service
systemctl start rabbitmq-server.service
```

Add the OpenStack user:

```
rabbitmqctl add_user openstack RABBIT_PASS
```

Note: replace RABBIT_PASS with the password to set for the OpenStack user.

Grant the openstack user permission to configure, write and read:

```
rabbitmqctl set_permissions openstack ".*" ".*" ".*"
```

## Install Memcached

Install the dependency packages:

```
yum install memcached python3-memcached
```

Edit the /etc/sysconfig/memcached file:

```
vim /etc/sysconfig/memcached

OPTIONS="-l 127.0.0.1,::1,controller"
```

Start the Memcached service and enable it at boot:

```
systemctl enable memcached.service
systemctl start memcached.service
```

Note: after the service starts, you can run `memcached-tool controller stats` to make sure it started correctly and is available; `controller` can be replaced with the management IP address of the controller node.

## Install OpenStack

### Keystone Installation

Create the keystone database and grant privileges:

```
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE keystone;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
IDENTIFIED BY 'KEYSTONE_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
IDENTIFIED BY 'KEYSTONE_DBPASS';
MariaDB [(none)]> exit
```

Note: replace KEYSTONE_DBPASS with the password to set for the Keystone database.

Install the packages:

```
yum install openstack-keystone httpd mod_wsgi
```

Configure keystone:

```
vim /etc/keystone/keystone.conf

[database]
connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone

[token]
provider = fernet
```

Explanation:

- [database]: configure the database entry point
- [token]: configure the token provider

Note: replace KEYSTONE_DBPASS with the password of the Keystone database.

Synchronize the database:

```
su -s /bin/sh -c "keystone-manage db_sync" keystone
```

Initialize the Fernet key repositories:

```
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
```

Bootstrap the service:

```
keystone-manage bootstrap --bootstrap-password ADMIN_PASS \
--bootstrap-admin-url http://controller:5000/v3/ \
--bootstrap-internal-url http://controller:5000/v3/ \
--bootstrap-public-url http://controller:5000/v3/ \
--bootstrap-region-id RegionOne
```

Note: replace ADMIN_PASS with the password to set for the admin user.

Configure the Apache HTTP server:

```
vim /etc/httpd/conf/httpd.conf

ServerName controller
```

```
ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
```

Explanation: the ServerName entry must reference the controller node.

Note: create the ServerName entry if it does not exist.

Start the Apache HTTP service:

```
systemctl enable httpd.service
systemctl start httpd.service
```

Create the environment variable file:

```
cat << EOF >> ~/.admin-openrc
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
EOF
```

Note: replace ADMIN_PASS with the password of the admin user.

Create the domain, projects, users and roles in turn. python3-openstackclient (version 4.0.2 in the Train release) must be installed first:

```
yum install python3-openstackclient
```

Import the environment variables:

```
source ~/.admin-openrc
```

Create the project `service`; the domain `default` was already created by `keystone-manage bootstrap`:

```
openstack domain create --description "An Example Domain" example
openstack project create --domain default --description "Service Project" service
```

Create the (non-admin) project `myproject`, the user `myuser` and the role `myrole`, and add the role `myrole` to `myproject` and `myuser`:

```
openstack project create --domain default --description "Demo Project" myproject
openstack user create --domain default --password-prompt myuser
openstack role create myrole
openstack role add --project myproject --user myuser myrole
```

Verification

Unset the temporary environment variables OS_AUTH_URL and OS_PASSWORD:

```
source ~/.admin-openrc
unset OS_AUTH_URL OS_PASSWORD
```

Request a token for the admin user:
```
openstack --os-auth-url http://controller:5000/v3 \
--os-project-domain-name Default --os-user-domain-name Default \
--os-project-name admin --os-username admin token issue
```

Request a token for the myuser user:

```
openstack --os-auth-url http://controller:5000/v3 \
--os-project-domain-name Default --os-user-domain-name Default \
--os-project-name myproject --os-username myuser token issue
```

### Glance Installation

Create the database, service credentials and API endpoints.

Create the database:

```
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE glance;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
IDENTIFIED BY 'GLANCE_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
IDENTIFIED BY 'GLANCE_DBPASS';
MariaDB [(none)]> exit
```

Note: replace GLANCE_DBPASS with the password to set for the glance database.

Create the service credentials:

```
source ~/.admin-openrc

openstack user create --domain default --password-prompt glance
openstack role add --project service --user glance admin
openstack service create --name glance --description "OpenStack Image" image
```

Create the image service API endpoints:

```
openstack endpoint create --region RegionOne image public http://controller:9292
openstack endpoint create --region RegionOne image internal http://controller:9292
openstack endpoint create --region RegionOne image admin http://controller:9292
```

Install the packages:

```
yum install openstack-glance
```

Configure glance:

```
vim /etc/glance/glance-api.conf

[database]
connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = GLANCE_PASS

[paste_deploy]
flavor = keystone

[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
```

Explanation:

- [database]: configure the database entry point
- [keystone_authtoken] and [paste_deploy]: configure the identity service entry point
- [glance_store]: configure the local filesystem store and the location of the image files

Note:

- replace GLANCE_DBPASS with the password of the glance database
- replace GLANCE_PASS with the password of the glance user

Synchronize the database:

```
su -s /bin/sh -c "glance-manage db_sync" glance
```

Start the service:

```
systemctl enable openstack-glance-api.service
systemctl start openstack-glance-api.service
```

Verification

Download an image:

```
source ~/.admin-openrc

wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
```

Note: if your environment is on the Kunpeng (aarch64) architecture, download the aarch64 version of the image; the cirros-0.5.2-aarch64-disk.img image has been tested.

Upload the image to the Image service:

```
openstack image create --disk-format qcow2 --container-format bare \
--file cirros-0.4.0-x86_64-disk.img --public cirros
```

Confirm the upload and check the image attributes:

```
openstack image list
```

### Placement Installation

Create the database, service credentials and API endpoints.

Create the database:

Access the database as the root user, create the placement database and grant privileges:

```
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE placement;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' \
IDENTIFIED BY 'PLACEMENT_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' \
IDENTIFIED BY 'PLACEMENT_DBPASS';
MariaDB [(none)]> exit
```

Note: replace PLACEMENT_DBPASS with the password to set for the placement database.

```
source admin-openrc
```

Run the following commands to create the placement service credentials, create the placement user and add the admin role to the placement user.

Create the Placement API service:

```
openstack user create --domain default --password-prompt placement
openstack role add --project service --user placement admin
openstack service create --name placement --description "Placement API" placement
```

Create the placement service API endpoints:

```
openstack endpoint create --region RegionOne placement public http://controller:8778
openstack endpoint create --region RegionOne placement internal http://controller:8778
openstack endpoint create --region RegionOne placement admin http://controller:8778
```

Install and configure.

Install the packages:

```
yum install openstack-placement-api
```

Configure placement:

Edit the /etc/placement/placement.conf file:

- in the [placement_database] section, configure the database entry point
- in the [api] and [keystone_authtoken] sections, configure the identity service entry point

```
# vim /etc/placement/placement.conf

[placement_database]
# ...
connection = mysql+pymysql://placement:PLACEMENT_DBPASS@controller/placement

[api]
# ...
auth_strategy = keystone

[keystone_authtoken]
# ...
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = placement
password = PLACEMENT_PASS
```

Replace PLACEMENT_DBPASS with the password of the placement database and PLACEMENT_PASS with the password of the placement user.

Synchronize the database:

```
su -s /bin/sh -c "placement-manage db sync" placement
```

Restart the httpd service:

```
systemctl restart httpd
```

Verification

Run the status check:

```
. admin-openrc
placement-status upgrade check
```

Install osc-placement and list the available resource classes and traits:

```
yum install python3-osc-placement

openstack --os-placement-api-version 1.2 resource class list --sort-column name
openstack --os-placement-api-version 1.6 trait list --sort-column name
```

### Nova Installation

Create the database, service credentials and API endpoints.

Create the databases:

```
mysql -u root -p                                                 (CTL)

MariaDB [(none)]> CREATE DATABASE nova_api;
MariaDB [(none)]> CREATE DATABASE nova;
MariaDB [(none)]> CREATE DATABASE nova_cell0;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> exit
```

Note: replace NOVA_DBPASS with the password to set for the nova databases.

```
source ~/.admin-openrc                                           (CTL)
```

Create the nova service credentials:

```
openstack user create --domain default --password-prompt nova    (CTL)
openstack role add --project service --user nova admin           (CTL)
openstack service create --name nova --description "OpenStack Compute" compute   (CTL)
```

Create the nova API endpoints:

```
openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1     (CTL)
openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1   (CTL)
openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1      (CTL)
```

Install the packages:

```
yum install openstack-nova-api openstack-nova-conductor \        (CTL)
            openstack-nova-novncproxy openstack-nova-scheduler

yum install openstack-nova-compute                               (CPT)
```

Note: on arm64, also run:

```
yum install edk2-aarch64                                         (CPT)
```

Configure nova:

```
vim /etc/nova/nova.conf

[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
my_ip = 10.0.0.1
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver
compute_driver = libvirt.LibvirtDriver                           (CPT)
instances_path = /var/lib/nova/instances/                        (CPT)
lock_path = /var/lib/nova/tmp                                    (CPT)

[api_database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api   (CTL)

[database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova       (CTL)

[api]
auth_strategy = keystone

[keystone_authtoken]
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = NOVA_PASS

[vnc]
enabled = true
server_listen = $my_ip
server_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html       (CPT)

[glance]
api_servers = http://controller:9292

[oslo_concurrency]
lock_path = /var/lib/nova/tmp                                    (CTL)

[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = PLACEMENT_PASS

[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
service_metadata_proxy = true                                    (CTL)
metadata_proxy_shared_secret = METADATA_SECRET                   (CTL)
```

Explanation:

- [DEFAULT]: enable the compute and metadata APIs, configure the RabbitMQ message queue entry point, configure my_ip, and enable the neutron network service
- [api_database] and [database]: configure the database entry points
- [api] and [keystone_authtoken]: configure the identity service entry point
- [vnc]: enable and configure the remote console entry point
- [glance]: configure the address of the image service API
- [oslo_concurrency]: configure the lock path
- [placement]: configure the entry point of the placement service

Note:

- replace RABBIT_PASS with the password of the openstack account in RabbitMQ
- set my_ip to the management IP address of the controller node
- replace NOVA_DBPASS with the password of the nova database
- replace NOVA_PASS with the password of the nova user
- replace PLACEMENT_PASS with the password of the placement user
- replace NEUTRON_PASS with the password of the neutron user
- replace METADATA_SECRET with a suitable metadata proxy secret

Extra:

Check whether hardware acceleration for virtual machines is supported (x86 architecture):

```
egrep -c '(vmx|svm)' /proc/cpuinfo                               (CPT)
```

If the return value is 0, hardware acceleration is not supported and libvirt must be configured to use QEMU instead of KVM:

```
vim /etc/nova/nova.conf                                          (CPT)

[libvirt]
virt_type = qemu
```

If the return value is 1 or greater, hardware acceleration is supported and virt_type can be set to kvm.

Note: on arm64, also run the following on the compute nodes:

```
mkdir -p /usr/share/AAVMF
chown nova:nova /usr/share/AAVMF

ln -s /usr/share/edk2/aarch64/QEMU_EFI-pflash.raw \
      /usr/share/AAVMF/AAVMF_CODE.fd
ln -s /usr/share/edk2/aarch64/vars-template-pflash.raw \
      /usr/share/AAVMF/AAVMF_VARS.fd

vim /etc/libvirt/qemu.conf

nvram = ["/usr/share/AAVMF/AAVMF_CODE.fd: \
          /usr/share/AAVMF/AAVMF_VARS.fd", \
         "/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw: \
          /usr/share/edk2/aarch64/vars-template-pflash.raw"]
```

In addition, when an ARM deployment runs on nested virtualization, configure libvirt as follows:

```
[libvirt]
virt_type = qemu
cpu_mode = custom
cpu_model = cortex-a72
```

Synchronize the databases.

Synchronize the nova-api database:

```
su -s /bin/sh -c "nova-manage api_db sync" nova                  (CTL)
```

Register the cell0 database:

```
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova            (CTL)
```

Create the cell1 cell:

```
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova   (CTL)
```

Synchronize the nova database:

```
su -s /bin/sh -c "nova-manage db sync" nova                      (CTL)
```

Verify that cell0 and cell1 are registered correctly:

```
su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova           (CTL)
```

Add the compute nodes to the OpenStack cluster:

```
su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova   (CTL)
```

Start the services:

```
systemctl enable \                                               (CTL)
    openstack-nova-api.service \
    openstack-nova-scheduler.service \
    openstack-nova-conductor.service \
    openstack-nova-novncproxy.service
systemctl start \                                                (CTL)
    openstack-nova-api.service \
    openstack-nova-scheduler.service \
    openstack-nova-conductor.service \
    openstack-nova-novncproxy.service

systemctl enable libvirtd.service openstack-nova-compute.service   (CPT)
systemctl start libvirtd.service openstack-nova-compute.service    (CPT)
```

Verification

```
source ~/.admin-openrc                                           (CTL)
```

List the service components and verify that every process started and registered successfully:

```
openstack compute service list                                   (CTL)
```

List the API endpoints in the identity service and verify the connection to it:

```
openstack catalog list                                           (CTL)
```

List the images in the image service and verify the connection to it:

```
openstack image list                                             (CTL)
```

Check whether the cells are working properly and the other prerequisites are in place:

```
nova-status upgrade check                                        (CTL)
```

### Neutron Installation

Create the database, service credentials and API endpoints.

Create the database:

```
mysql -u root -p                                                 (CTL)

MariaDB [(none)]> CREATE DATABASE neutron;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
IDENTIFIED BY 'NEUTRON_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
IDENTIFIED BY 'NEUTRON_DBPASS';
MariaDB [(none)]> exit
```

Note: replace NEUTRON_DBPASS with the password to set for the neutron database.

```
source ~/.admin-openrc                                           (CTL)
```

Create the neutron service credentials:

```
openstack user create --domain default --password-prompt neutron   (CTL)
openstack role add --project service --user neutron admin          (CTL)
openstack service create --name neutron --description "OpenStack Networking" network   (CTL)
```

Create the Neutron service API endpoints:

```
openstack endpoint create --region RegionOne network public http://controller:9696     (CTL)
openstack endpoint create --region RegionOne network internal http://controller:9696   (CTL)
openstack endpoint create --region RegionOne network admin http://controller:9696      (CTL)
```

Install the packages:

```
yum install openstack-neutron openstack-neutron-linuxbridge ebtables ipset \   (CTL)
            openstack-neutron-ml2

yum install openstack-neutron-linuxbridge ebtables ipset         (CPT)
```

Configure neutron.

Configure the main configuration file:

```
vim /etc/neutron/neutron.conf

[database]
connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron   (CTL)

[DEFAULT]
core_plugin = ml2                                                (CTL)
service_plugins = router                                         (CTL)
allow_overlapping_ips = true                                     (CTL)
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = true                        (CTL)
notify_nova_on_port_data_changes = true                          (CTL)
api_workers = 3                                                  (CTL)

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = neutron
password = NEUTRON_PASS

[nova]
auth_url = http://controller:5000                                (CTL)
auth_type = password                                             (CTL)
project_domain_name = Default                                    (CTL)
user_domain_name = Default                                       (CTL)
region_name = RegionOne                                          (CTL)
project_name = service                                           (CTL)
username = nova                                                  (CTL)
password = NOVA_PASS                                             (CTL)

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
```

Explanation:

- [database]: configure the database entry point
- [DEFAULT]: enable the ml2 and router plugins, allow overlapping IP addresses, and configure the RabbitMQ message queue entry point
- [DEFAULT] and [keystone_authtoken]: configure the identity service entry point
- [DEFAULT] and [nova]: configure the network service to notify compute of network topology changes
- [oslo_concurrency]: configure the lock path

Note:

- replace NEUTRON_DBPASS with the password of the neutron database
- replace RABBIT_PASS with the password of the openstack account in RabbitMQ
- replace NEUTRON_PASS with the password of the neutron user
- replace NOVA_PASS with the password of the nova user

Configure the ML2 plugin:

```
vim /etc/neutron/plugins/ml2/ml2_conf.ini

[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security

[ml2_type_flat]
flat_networks = provider

[ml2_type_vxlan]
vni_ranges = 1:1000

[securitygroup]
enable_ipset = true
```

Create the symbolic link /etc/neutron/plugin.ini:

```
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
```

Note:

- [ml2]: enable the flat, vlan and vxlan networks, enable the linuxbridge and l2population mechanisms, and enable the port security extension driver
- [ml2_type_flat]: configure the flat network as the provider virtual network
- [ml2_type_vxlan]: configure the VXLAN network identifier range
- [securitygroup]: allow ipset

Supplementary note: the detailed layer-2 configuration can be adjusted to the user's requirements; this document uses a provider network with linuxbridge.

Configure the Linux bridge agent:

```
vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini

[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME

[vxlan]
enable_vxlan = true
local_ip = OVERLAY_INTERFACE_IP_ADDRESS
l2_population = true

[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
```

Explanation:

- [linux_bridge]: map the provider virtual network to the physical network interface
- [vxlan]: enable the vxlan overlay network, configure the IP address of the physical interface that handles the overlay network, and enable layer-2 population
- [securitygroup]: allow security groups and configure the linux bridge iptables firewall driver

Note:

- replace PROVIDER_INTERFACE_NAME with the physical network interface
- replace OVERLAY_INTERFACE_IP_ADDRESS with the management IP address of the controller node

Configure the Layer-3 agent:

```
vim /etc/neutron/l3_agent.ini                                    (CTL)

[DEFAULT]
interface_driver = linuxbridge
```

Explanation: in the [DEFAULT] section, configure the interface driver as linuxbridge.

Configure the DHCP agent:

```
vim /etc/neutron/dhcp_agent.ini                                  (CTL)

[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
```

Explanation: in the [DEFAULT] section, configure the linuxbridge interface driver and the Dnsmasq DHCP driver, and enable isolated metadata.

Configure the metadata agent:

```
vim /etc/neutron/metadata_agent.ini                              (CTL)

[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = METADATA_SECRET
```

Explanation: in the [DEFAULT] section, configure the metadata host and the shared secret.

Note: replace METADATA_SECRET with a suitable metadata proxy secret.

Configure nova:

```
vim /etc/nova/nova.conf

[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = Default
user_domain_name = Default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
service_metadata_proxy = true                                    (CTL)
metadata_proxy_shared_secret = METADATA_SECRET                   (CTL)
```

Explanation: in the [neutron] section, configure the access parameters, enable the metadata proxy and configure the secret.

Note:

- replace NEUTRON_PASS with the password of the neutron user
- replace METADATA_SECRET with a suitable metadata proxy secret

Synchronize the database:

```
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
--config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
```

Restart the compute API service:

```
systemctl restart openstack-nova-api.service
```

Start the network services:

```
systemctl enable neutron-server.service neutron-linuxbridge-agent.service \    (CTL)
                 neutron-dhcp-agent.service neutron-metadata-agent.service \
                 neutron-l3-agent.service
systemctl restart neutron-server.service neutron-linuxbridge-agent.service \   (CTL)
                  neutron-dhcp-agent.service neutron-metadata-agent.service \
                  neutron-l3-agent.service

systemctl enable neutron-linuxbridge-agent.service               (CPT)
systemctl restart neutron-linuxbridge-agent.service openstack-nova-compute.service   (CPT)
```

Verification

Verify that the neutron agents started successfully:

```
openstack network agent list
```

### Cinder Installation

Create the database, service credentials and API endpoints.

Create the database:

```
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE cinder;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \
IDENTIFIED BY 'CINDER_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \
IDENTIFIED BY 'CINDER_DBPASS';
MariaDB [(none)]> exit
```

Note: replace CINDER_DBPASS with the password to set for the cinder database.

```
source ~/.admin-openrc
```

Create the cinder service credentials:

```
openstack user create --domain default --password-prompt cinder
openstack role add --project service --user cinder admin
openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
```

Create the block storage service API endpoints:

```
openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s
```

Install the packages:

```
yum install openstack-cinder-api openstack-cinder-scheduler      (CTL)

yum install lvm2 device-mapper-persistent-data scsi-target-utils rpcbind nfs-utils \   (STG)
            openstack-cinder-volume openstack-cinder-backup
```

Prepare the storage device; the following is only an example:

```
pvcreate /dev/vdb
vgcreate cinder-volumes /dev/vdb

vim /etc/lvm/lvm.conf

devices {
...
filter = [ "a/vdb/", "r/.*/"]
```

Explanation: in the devices section, add a filter that accepts the /dev/vdb device and rejects all other devices.

Prepare NFS:

```
mkdir -p /root/cinder/backup

cat << EOF >> /etc/exports
/root/cinder/backup 192.168.1.0/24(rw,sync,no_root_squash,no_all_squash)
EOF
```

Configure cinder:

```
vim /etc/cinder/cinder.conf

[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone
my_ip = 10.0.0.11
enabled_backends = lvm                                           (STG)
backup_driver = cinder.backup.drivers.nfs.NFSBackupDriver        (STG)
backup_share = HOST:PATH                                         (STG)

[database]
connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = cinder
password = CINDER_PASS

[oslo_concurrency]
lock_path = /var/lib/cinder/tmp

[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver        (STG)
volume_group = cinder-volumes                                    (STG)
iscsi_protocol = iscsi                                           (STG)
iscsi_helper = tgtadm                                            (STG)
```

Explanation:

- [database]: configure the database entry point
- [DEFAULT]: configure the RabbitMQ message queue entry point and my_ip
- [DEFAULT] and [keystone_authtoken]: configure the identity service entry point
- [oslo_concurrency]: configure the lock path

Note:

- replace CINDER_DBPASS with the password of the cinder database
- replace RABBIT_PASS with the password of the openstack account in RabbitMQ
- set my_ip to the management IP address of the controller node
- replace CINDER_PASS with the password of the cinder user
- replace HOST:PATH with the NFS host IP and shared path

Synchronize the database:

```
su -s /bin/sh -c "cinder-manage db sync" cinder                  (CTL)
```
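Before wiring cinder into nova, it can be worth a quick check on the storage node that the LVM backend prepared above is actually in place. This is only a suggested sanity check, using the cinder-volumes volume group name from the example:

```
# the volume group created for the lvm backend should be listed here   (STG)
vgs cinder-volumes
pvs
```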
Configure nova:

```
vim /etc/nova/nova.conf                                          (CTL)

[cinder]
os_region_name = RegionOne
```

Restart the compute API service:

```
systemctl restart openstack-nova-api.service
```

Start the cinder services:

```
systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service   (CTL)
systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service    (CTL)

systemctl enable rpcbind.service nfs-server.service tgtd.service iscsid.service \  (STG)
                 openstack-cinder-volume.service \
                 openstack-cinder-backup.service
systemctl start rpcbind.service nfs-server.service tgtd.service iscsid.service \   (STG)
                openstack-cinder-volume.service \
                openstack-cinder-backup.service
```

Note: when cinder attaches volumes through tgtadm, modify /etc/tgt/tgtd.conf with the content below so that tgtd can discover the iscsi targets of cinder-volume:

```
include /var/lib/cinder/volumes/*
```

Verification

```
source ~/.admin-openrc
openstack volume service list
```

### horizon Installation

Install the packages:

```
yum install openstack-dashboard
```

Modify the configuration file.

Modify the variables:

```
vim /etc/openstack-dashboard/local_settings

OPENSTACK_HOST = "controller"
ALLOWED_HOSTS = ['*', ]

SESSION_ENGINE = 'django.contrib.sessions.backends.cache'

CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'controller:11211',
    }
}

OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "member"
WEBROOT = '/dashboard'
POLICY_FILES_PATH = "/etc/openstack-dashboard"

OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 3,
}
```

Restart the httpd service:

```
systemctl restart httpd.service memcached.service
```

Verification

Open a browser, enter http://HOSTIP/dashboard/ in the address bar, and log in to horizon.

Note: replace HOSTIP with the management plane IP address of the controller node.

### Tempest Installation

Tempest is the integration test service of OpenStack. It is recommended if you need comprehensive automated testing of the functionality of an installed OpenStack environment; otherwise it does not have to be installed.

Install Tempest:

```
yum install openstack-tempest
```

Initialize a workspace:

```
tempest init mytest
```

Modify the configuration file:

```
cd mytest
vi etc/tempest.conf
```

tempest.conf must be filled in with the information of the current OpenStack environment; for the details, refer to the official example.

Run the tests:

```
tempest run
```
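For orientation only, here is a minimal sketch of what the relevant parts of etc/tempest.conf might look like for the environment built in this guide. It assumes the admin credentials and the cirros image created earlier, the bracketed values are placeholders to replace with the IDs and passwords of your own deployment, and it is nowhere near a complete configuration; refer to the official example for the full set of options:

```
[auth]
admin_username = admin
admin_password = ADMIN_PASS
admin_project_name = admin
admin_domain_name = Default

[identity]
uri_v3 = http://controller:5000/v3
region = RegionOne

[compute]
# UUIDs taken from `openstack image list` and `openstack flavor list`
image_ref = <cirros-image-uuid>
flavor_ref = <flavor-id>
```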
OpenStack\u5404\u4e2a\u670d\u52a1\u672c\u8eab\u4e5f\u63d0\u4f9b\u4e86\u4e00\u4e9btempest\u6d4b\u8bd5\u5305\uff0c\u7528\u6237\u53ef\u4ee5\u5b89\u88c5\u8fd9\u4e9b\u5305\u6765\u4e30\u5bcctempest\u7684\u6d4b\u8bd5\u5185\u5bb9\u3002\u5728Train\u4e2d\uff0c\u6211\u4eec\u63d0\u4f9b\u4e86Cinder\u3001Glance\u3001Keystone\u3001Ironic\u3001Trove\u7684\u6269\u5c55\u6d4b\u8bd5\uff0c\u7528\u6237\u53ef\u4ee5\u6267\u884c\u5982\u4e0b\u547d\u4ee4\u8fdb\u884c\u5b89\u88c5\u4f7f\u7528\uff1a yum install python3-cinder-tempest-plugin python3-glance-tempest-plugin python3-ironic-tempest-plugin python3-keystone-tempest-plugin python3-trove-tempest-plugin","title":"Tempest \u5b89\u88c5"},{"location":"install/openEuler-22.03-LTS-SP1/OpenStack-train/#ironic","text":"Ironic\u662fOpenStack\u7684\u88f8\u91d1\u5c5e\u670d\u52a1\uff0c\u5982\u679c\u7528\u6237\u9700\u8981\u8fdb\u884c\u88f8\u673a\u90e8\u7f72\u5219\u63a8\u8350\u4f7f\u7528\u8be5\u7ec4\u4ef6\u3002\u5426\u5219\uff0c\u53ef\u4ee5\u4e0d\u7528\u5b89\u88c5\u3002 \u8bbe\u7f6e\u6570\u636e\u5e93 \u88f8\u91d1\u5c5e\u670d\u52a1\u5728\u6570\u636e\u5e93\u4e2d\u5b58\u50a8\u4fe1\u606f\uff0c\u521b\u5efa\u4e00\u4e2a ironic \u7528\u6237\u53ef\u4ee5\u8bbf\u95ee\u7684 ironic \u6570\u636e\u5e93\uff0c\u66ff\u6362 IRONIC_DBPASSWORD \u4e3a\u5408\u9002\u7684\u5bc6\u7801 mysql -u root -p MariaDB [(none)]> CREATE DATABASE ironic CHARACTER SET utf8; MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'localhost' \\ IDENTIFIED BY 'IRONIC_DBPASSWORD'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'%' \\ IDENTIFIED BY 'IRONIC_DBPASSWORD'; 2. \u5b89\u88c5\u8f6f\u4ef6\u5305 yum install openstack-ironic-api openstack-ironic-conductor python3-ironicclient \u542f\u52a8\u670d\u52a1 systemctl enable openstack-ironic-api openstack-ironic-conductor systemctl start openstack-ironic-api openstack-ironic-conductor \u521b\u5efa\u670d\u52a1\u7528\u6237\u8ba4\u8bc1 1\u3001\u521b\u5efaBare Metal\u670d\u52a1\u7528\u6237 openstack user create --password IRONIC_PASSWORD \\ --email ironic@example.com ironic openstack role add --project service --user ironic admin openstack service create --name ironic \\ --description \"Ironic baremetal provisioning service\" baremetal 2\u3001\u521b\u5efaBare Metal\u670d\u52a1\u8bbf\u95ee\u5165\u53e3 openstack endpoint create --region RegionOne baremetal admin http://$IRONIC_NODE:6385 openstack endpoint create --region RegionOne baremetal public http://$IRONIC_NODE:6385 openstack endpoint create --region RegionOne baremetal internal http://$IRONIC_NODE:6385 \u914d\u7f6eironic-api\u670d\u52a1 \u914d\u7f6e\u6587\u4ef6\u8def\u5f84/etc/ironic/ironic.conf 1\u3001\u901a\u8fc7 connection \u9009\u9879\u914d\u7f6e\u6570\u636e\u5e93\u7684\u4f4d\u7f6e\uff0c\u5982\u4e0b\u6240\u793a\uff0c\u66ff\u6362 IRONIC_DBPASSWORD \u4e3a ironic \u7528\u6237\u7684\u5bc6\u7801\uff0c\u66ff\u6362 DB_IP \u4e3aDB\u670d\u52a1\u5668\u6240\u5728\u7684IP\u5730\u5740\uff1a [database] # The SQLAlchemy connection string used to connect to the # database (string value) connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic 2\u3001\u901a\u8fc7\u4ee5\u4e0b\u9009\u9879\u914d\u7f6eironic-api\u670d\u52a1\u4f7f\u7528RabbitMQ\u6d88\u606f\u4ee3\u7406\uff0c\u66ff\u6362 RPC_* \u4e3aRabbitMQ\u7684\u8be6\u7ec6\u5730\u5740\u548c\u51ed\u8bc1 [DEFAULT] # A URL representing the messaging driver to use and its full # configuration. 
(string value) transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/ \u7528\u6237\u4e5f\u53ef\u81ea\u884c\u4f7f\u7528json-rpc\u65b9\u5f0f\u66ff\u6362rabbitmq 3\u3001\u914d\u7f6eironic-api\u670d\u52a1\u4f7f\u7528\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u7684\u51ed\u8bc1\uff0c\u66ff\u6362 PUBLIC_IDENTITY_IP \u4e3a\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u5668\u7684\u516c\u5171IP\uff0c\u66ff\u6362 PRIVATE_IDENTITY_IP \u4e3a\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u5668\u7684\u79c1\u6709IP\uff0c\u66ff\u6362 IRONIC_PASSWORD \u4e3a\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u4e2d ironic \u7528\u6237\u7684\u5bc6\u7801\uff1a [DEFAULT] # Authentication strategy used by ironic-api: one of # \"keystone\" or \"noauth\". \"noauth\" should not be used in a # production environment because all authentication will be # disabled. (string value) auth_strategy=keystone [keystone_authtoken] # Authentication type to load (string value) auth_type=password # Complete public Identity API endpoint (string value) www_authenticate_uri=http://PUBLIC_IDENTITY_IP:5000 # Complete admin Identity API endpoint. (string value) auth_url=http://PRIVATE_IDENTITY_IP:5000 # Service username. (string value) username=ironic # Service account password. (string value) password=IRONIC_PASSWORD # Service tenant name. (string value) project_name=service # Domain name containing project (string value) project_domain_name=Default # User's domain name (string value) user_domain_name=Default 4\u3001\u521b\u5efa\u88f8\u91d1\u5c5e\u670d\u52a1\u6570\u636e\u5e93\u8868 ironic-dbsync --config-file /etc/ironic/ironic.conf create_schema 5\u3001\u91cd\u542fironic-api\u670d\u52a1 sudo systemctl restart openstack-ironic-api \u914d\u7f6eironic-conductor\u670d\u52a1 1\u3001\u66ff\u6362 HOST_IP \u4e3aconductor host\u7684IP [DEFAULT] # IP address of this host. If unset, will determine the IP # programmatically. If unable to do so, will use \"127.0.0.1\". # (string value) my_ip=HOST_IP 2\u3001\u914d\u7f6e\u6570\u636e\u5e93\u7684\u4f4d\u7f6e\uff0cironic-conductor\u5e94\u8be5\u4f7f\u7528\u548cironic-api\u76f8\u540c\u7684\u914d\u7f6e\u3002\u66ff\u6362 IRONIC_DBPASSWORD \u4e3a ironic \u7528\u6237\u7684\u5bc6\u7801\uff0c\u66ff\u6362DB_IP\u4e3aDB\u670d\u52a1\u5668\u6240\u5728\u7684IP\u5730\u5740\uff1a [database] # The SQLAlchemy connection string to use to connect to the # database. (string value) connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic 3\u3001\u901a\u8fc7\u4ee5\u4e0b\u9009\u9879\u914d\u7f6eironic-api\u670d\u52a1\u4f7f\u7528RabbitMQ\u6d88\u606f\u4ee3\u7406\uff0cironic-conductor\u5e94\u8be5\u4f7f\u7528\u548cironic-api\u76f8\u540c\u7684\u914d\u7f6e\uff0c\u66ff\u6362 RPC_* \u4e3aRabbitMQ\u7684\u8be6\u7ec6\u5730\u5740\u548c\u51ed\u8bc1 [DEFAULT] # A URL representing the messaging driver to use and its full # configuration. 
6. Configure the ironic-conductor service

1. Replace `HOST_IP` with the IP of the conductor host:

```ini
[DEFAULT]
# IP address of this host. If unset, will determine the IP
# programmatically. If unable to do so, will use "127.0.0.1".
# (string value)
my_ip=HOST_IP
```

2. Set the database location. ironic-conductor should use the same settings as ironic-api. Replace `IRONIC_DBPASSWORD` with the password of the `ironic` user and `DB_IP` with the IP address of the database server:

```ini
[database]
# The SQLAlchemy connection string to use to connect to the
# database. (string value)
connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic
```

3. Configure the RabbitMQ message broker. ironic-conductor should use the same settings as ironic-api. Replace `RPC_*` with the actual RabbitMQ address and credentials:

```ini
[DEFAULT]
# A URL representing the messaging driver to use and its full
# configuration. (string value)
transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
```

You may also replace RabbitMQ with json-rpc.

4. Configure credentials for accessing other OpenStack services

To communicate with other OpenStack services, the bare metal service needs service users that authenticate with the OpenStack Identity service when it calls them. These credentials must be configured in the section associated with each service:

- [neutron] - access to the OpenStack Networking service
- [glance] - access to the OpenStack Image service
- [swift] - access to the OpenStack Object Storage service
- [cinder] - access to the OpenStack Block Storage service
- [inspector] - access to the OpenStack bare metal introspection service
- [service_catalog] - a special entry holding the credentials the bare metal service uses to discover its own API URL endpoint as registered in the Identity service catalog

For simplicity the same service user can be used for all services. For backward compatibility it should be the same user configured in [keystone_authtoken] of the ironic-api service, but this is not required; a different service user can be created and configured for each service.

In the following example, the credentials for accessing the OpenStack Networking service are configured so that:

- the Networking service is deployed in the Identity service region named RegionOne, with only the public endpoint interface registered in the service catalog
- requests use a specific CA SSL certificate for HTTPS connections
- the same service user as configured for ironic-api is used
- the dynamic password authentication plugin discovers a suitable Identity service API version based on the other options
```ini
[neutron]
# Authentication type to load (string value)
auth_type = password
# Authentication URL (string value)
auth_url=https://IDENTITY_IP:5000/
# Username (string value)
username=ironic
# User's password (string value)
password=IRONIC_PASSWORD
# Project name to scope to (string value)
project_name=service
# Domain ID containing project (string value)
project_domain_id=default
# User's domain id (string value)
user_domain_id=default
# PEM encoded Certificate Authority to use when verifying
# HTTPs connections. (string value)
cafile=/opt/stack/data/ca-bundle.pem
# The default region_name for endpoint URL discovery. (string
# value)
region_name = RegionOne
# List of interfaces, in order of preference, for endpoint
# URL. (list value)
valid_interfaces=public
```

By default, to talk to another service the bare metal service tries to discover a suitable endpoint for it through the Identity service catalog. If you want to use a different endpoint for a particular service, specify it with the endpoint_override option in the bare metal service configuration file:

```ini
[neutron]
...
endpoint_override =
```

5. Configure the allowed drivers and hardware types

Set the hardware types allowed by the ironic-conductor service with enabled_hardware_types:

```ini
[DEFAULT]
enabled_hardware_types = ipmi
```

Configure the hardware interfaces:

```ini
enabled_boot_interfaces = pxe
enabled_deploy_interfaces = direct,iscsi
enabled_inspect_interfaces = inspector
enabled_management_interfaces = ipmitool
enabled_power_interfaces = ipmitool
```

Configure the interface defaults:

```ini
[DEFAULT]
default_deploy_interface = direct
default_network_interface = neutron
```

If any driver that uses Direct deploy is enabled, the Swift backend of the Image service must be installed and configured. The Ceph Object Gateway (RADOS Gateway) is also supported as an Image service backend.

6. Restart the ironic-conductor service:

```shell
sudo systemctl restart openstack-ironic-conductor
```

7. Configure the httpd service

1. Create the httpd root directory used by ironic and set its owner and group. The path must match the http_root option in the [deploy] section of /etc/ironic/ironic.conf:

```shell
mkdir -p /var/lib/ironic/httproot
chown ironic.ironic /var/lib/ironic/httproot
```

2. Install the httpd service (skip if it is already installed):

```shell
yum install httpd -y
```

3. Create /etc/httpd/conf.d/openstack-ironic-httpd.conf with the following content. Note that the listening port must match the port in the http_url option of the [deploy] section of /etc/ironic/ironic.conf:

```apache
Listen 8080

<VirtualHost *:8080>
    ServerName ironic.openeuler.com
    ErrorLog "/var/log/httpd/openstack-ironic-httpd-error_log"
    CustomLog "/var/log/httpd/openstack-ironic-httpd-access_log" "%h %l %u %t \"%r\" %>s %b"
    DocumentRoot "/var/lib/ironic/httproot"
    <Directory "/var/lib/ironic/httproot">
        Options Indexes FollowSymLinks
        Require all granted
    </Directory>
    LogLevel warn
    AddDefaultCharset UTF-8
    EnableSendfile on
</VirtualHost>
```

4. Restart the httpd service:

```shell
systemctl restart httpd
```
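With the services configured, a node can be enrolled to exercise the setup end to end. The commands below are a minimal sketch of our own (not part of the original guide) using the ipmi hardware type enabled above; the BMC address, credentials, MAC address, and node UUID are placeholders:

```shell
# Enroll a bare metal node using the ipmi hardware type
openstack baremetal node create --driver ipmi --name node-0 \
  --driver-info ipmi_address=BMC_IP \
  --driver-info ipmi_username=BMC_USER \
  --driver-info ipmi_password=BMC_PASSWORD

# Register the NIC that will PXE boot (replace with the node's real MAC and UUID)
openstack baremetal port create AA:BB:CC:DD:EE:FF --node NODE_UUID

# Check that the node is listed and its power state can be read
openstack baremetal node list
```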
8. Build the deploy ramdisk image

The Train ramdisk image can be built with the ironic-python-agent service or the disk-image-builder tool, or with the latest community ironic-python-agent-builder. You may also choose any other tool. If you use the native Train tools, install the corresponding package first:

```shell
yum install openstack-ironic-python-agent
```

or

```shell
yum install diskimage-builder
```

See the official documentation for detailed usage.

The following describes the complete process of building the deploy image used by ironic with ironic-python-agent-builder.

Install ironic-python-agent-builder

1. Install the tool:

```shell
pip install ironic-python-agent-builder
```

2. Modify the python interpreter in the following files:

```shell
/usr/bin/yum /usr/libexec/urlgrabber-ext-down
```

3. Install the other required tools:

```shell
yum install git
```

Because `DIB` depends on the `semanage` command, check that it is available before building the image: `semanage --help`. If the command is not found, install it:

```shell
# First find out which package provides it
[root@localhost ~]# yum provides /usr/sbin/semanage
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirror.vcu.edu
 * extras: mirror.vcu.edu
 * updates: mirror.math.princeton.edu
policycoreutils-python-2.5-34.el7.aarch64 : SELinux policy core python utilities
Repo        : base
Matched from:
Filename    : /usr/sbin/semanage
# Install it
[root@localhost ~]# yum install policycoreutils-python
```

Build the image

On the `arm` architecture, add:

```shell
export ARCH=aarch64
```

Basic usage:

```shell
usage: ironic-python-agent-builder [-h] [-r RELEASE] [-o OUTPUT] [-e ELEMENT]
                                   [-b BRANCH] [-v] [--extra-args EXTRA_ARGS]
                                   distribution

positional arguments:
  distribution          Distribution to use

optional arguments:
  -h, --help            show this help message and exit
  -r RELEASE, --release RELEASE
                        Distribution release to use
  -o OUTPUT, --output OUTPUT
                        Output base file name
  -e ELEMENT, --element ELEMENT
                        Additional DIB element to use
  -b BRANCH, --branch BRANCH
                        If set, override the branch that is used for ironic-
                        python-agent and requirements
  -v, --verbose         Enable verbose logging in diskimage-builder
  --extra-args EXTRA_ARGS
                        Extra arguments to pass to diskimage-builder
```

Example:

```shell
ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky
```

Allow ssh login

Initialize the environment variables, then build the image:

```shell
export DIB_DEV_USER_USERNAME=ipa
export DIB_DEV_USER_PWDLESS_SUDO=yes
export DIB_DEV_USER_PASSWORD='123'
ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky -e selinux-permissive -e devuser
```
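Once a build finishes, the resulting kernel and ramdisk are usually registered in the Image service so ironic can reference them. This is a sketch of our own (not in the original guide); the output file names assume the `-o /mnt/ironic-agent-ssh` prefix used above:

```shell
# File names follow the -o prefix passed to ironic-python-agent-builder (assumption)
openstack image create deploy-kernel --public \
  --disk-format aki --container-format aki \
  --file /mnt/ironic-agent-ssh.kernel

openstack image create deploy-ramdisk --public \
  --disk-format ari --container-format ari \
  --file /mnt/ironic-agent-ssh.initramfs
```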
Specify the code repository

Initialize the corresponding environment variables, then build the image:

```shell
# Specify the repository location and version
DIB_REPOLOCATION_ironic_python_agent=git@172.20.2.149:liuzz/ironic-python-agent.git
DIB_REPOREF_ironic_python_agent=origin/develop

# Clone the code directly from gerrit
DIB_REPOLOCATION_ironic_python_agent=https://review.opendev.org/openstack/ironic-python-agent
DIB_REPOREF_ironic_python_agent=refs/changes/43/701043/1
```

Reference: [source-repositories](https://docs.openstack.org/diskimage-builder/latest/elements/source-repositories/README.html). Specifying the repository location and version has been verified to work.

Notes

- The PXE configuration file template in upstream OpenStack does not support the arm64 architecture, so you need to modify the upstream OpenStack code yourself. In Train, community ironic still does not support uefi pxe boot on arm64; this shows up as a malformed grub.cfg (usually under /tftpboot/) that makes the pxe boot fail. You must adapt the code that generates grub.cfg yourself.
- TLS errors when ironic sends command-status queries to ipa: in Train both ipa and ironic enable TLS by default when sending requests to each other; disable it as described in the official documentation.

Modify the ironic configuration file (/etc/ironic/ironic.conf), adding ipa-insecure=1 to the following options:

```ini
[agent]
verify_ca = False

[pxe]
pxe_append_params = nofb nomodeset vga=normal coreos.autologin ipa-insecure=1
```

Add the ipa configuration file /etc/ironic_python_agent/ironic_python_agent.conf to the ramdisk image (create the /etc/ironic_python_agent directory first) and configure TLS as follows:

```ini
[DEFAULT]
enable_auto_tls = False
```

Set the permissions:

```shell
chown -R ipa.ipa /etc/ironic_python_agent/
```

Modify the service unit file of the ipa service and add the config-file option:

```shell
vim /usr/lib/systemd/system/ironic-python-agent.service
```

```ini
[Unit]
Description=Ironic Python Agent
After=network-online.target

[Service]
ExecStartPre=/sbin/modprobe vfat
ExecStart=/usr/local/bin/ironic-python-agent --config-file /etc/ironic_python_agent/ironic_python_agent.conf
Restart=always
RestartSec=30s

[Install]
WantedBy=multi-user.target
```

In Train we also provide services such as ironic-inspector; install them according to your needs.

## Kolla Installation

Kolla provides production-ready containerized deployment for OpenStack services.

Installing Kolla is very simple: just install the corresponding RPM packages.

```shell
yum install openstack-kolla openstack-kolla-ansible
```

After installation you can use commands such as `kolla-ansible`, `kolla-build`, `kolla-genpwd`, and `kolla-mergepwd` to build images and deploy the container environment.
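As an illustration of how these commands typically fit together (a sketch of our own, not prescribed by this guide), a minimal workflow assuming the default /etc/kolla/globals.yml and an Ansible inventory file you provide might look like:

```shell
# Generate the passwords in /etc/kolla/passwords.yml
kolla-genpwd

# Build local images (optional when pulling from a registry instead)
kolla-build

# Deploy against your inventory (all-in-one or multinode)
kolla-ansible -i /path/to/inventory bootstrap-servers
kolla-ansible -i /path/to/inventory prechecks
kolla-ansible -i /path/to/inventory deploy
```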
## Trove Installation

Trove is the OpenStack Database service. It is recommended if you want to use the database service provided by OpenStack; otherwise it can be skipped.

1. Set up the database

The Database service stores its information in a database. Create a `trove` database accessible by a `trove` user, replacing `TROVE_DBPASSWORD` with a suitable password:

```shell
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE trove CHARACTER SET utf8;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'localhost' \
  IDENTIFIED BY 'TROVE_DBPASSWORD';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'%' \
  IDENTIFIED BY 'TROVE_DBPASSWORD';
```

2. Create the service user and endpoints

1. Create the Trove service user:

```shell
openstack user create --domain default --password-prompt trove
openstack role add --project service --user trove admin
openstack service create --name trove --description "Database" database
```

Note: replace `TROVE_PASSWORD` with the password of the `trove` user.

2. Create the Database service endpoints:

```shell
openstack endpoint create --region RegionOne database public http://controller:8779/v1.0/%\(tenant_id\)s
openstack endpoint create --region RegionOne database internal http://controller:8779/v1.0/%\(tenant_id\)s
openstack endpoint create --region RegionOne database admin http://controller:8779/v1.0/%\(tenant_id\)s
```

3. Install and configure the Trove components

1. Install the Trove packages:

```shell
yum install openstack-trove python3-troveclient
```
2. Configure `trove.conf`:

```shell
vim /etc/trove/trove.conf
```

```ini
[DEFAULT]
log_dir = /var/log/trove
trove_auth_url = http://controller:5000/
nova_compute_url = http://controller:8774/v2
cinder_url = http://controller:8776/v1
swift_url = http://controller:8080/v1/AUTH_
rpc_backend = rabbit
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672
auth_strategy = keystone
add_addresses = True
api_paste_config = /etc/trove/api-paste.ini
nova_proxy_admin_user = admin
nova_proxy_admin_pass = ADMIN_PASSWORD
nova_proxy_admin_tenant_name = service
taskmanager_manager = trove.taskmanager.manager.Manager
use_nova_server_config_drive = True
# Set these if using Neutron Networking
network_driver = trove.network.neutron.NeutronDriver
network_label_regex = .*

[database]
connection = mysql+pymysql://trove:TROVE_DBPASSWORD@controller/trove

[keystone_authtoken]
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = trove
password = TROVE_PASSWORD
```

**Notes:**

- In the [DEFAULT] section, nova_compute_url and cinder_url are the endpoints created for Nova and Cinder in Keystone.
- The nova_proxy_XXX options describe a user that can access the Nova service; the example uses the admin user.
- transport_url is the RabbitMQ connection information; replace RABBIT_PASS with the RabbitMQ password.
- In the [database] section, connection is the database created for Trove in mysql above.
- In the Trove user information, replace `TROVE_PASSWORD` with the actual password of the trove user.

3. Configure trove-guestagent.conf:

```shell
vim /etc/trove/trove-guestagent.conf
```

```ini
rabbit_host = controller
rabbit_password = RABBIT_PASS
trove_auth_url = http://controller:5000/
```

**Notes:**

`guestagent` is a separate Trove component that has to be built into the virtual machine image Trove boots through Nova. After a database instance is created, the guestagent process starts and reports heartbeats to Trove through the message queue (RabbitMQ), so the RabbitMQ user and password must be configured here.

**Starting from the Victoria release, Trove uses one unified image to run the different database types; the database service runs in a Docker container inside the guest VM.**

- Replace `RABBIT_PASS` with the RabbitMQ password.

4. Create the `Trove` database tables:

```shell
su -s /bin/sh -c "trove-manage db_sync" trove
```

5. Finish the installation

1. Enable the Trove services at boot:

```shell
systemctl enable openstack-trove-api.service \
  openstack-trove-taskmanager.service \
  openstack-trove-conductor.service
```
2. Start the services:

```shell
systemctl start openstack-trove-api.service \
  openstack-trove-taskmanager.service \
  openstack-trove-conductor.service
```

## Swift Installation

Swift provides an elastic, scalable, highly available distributed object storage service, suitable for storing large amounts of unstructured data.

1. Create the service credentials and API endpoints.

Create the service credentials:

```shell
# Create the swift user:
openstack user create --domain default --password-prompt swift
# Add the admin role to the swift user:
openstack role add --project service --user swift admin
# Create the swift service entity:
openstack service create --name swift --description "OpenStack Object Storage" object-store
```

Create the swift API endpoints:

```shell
openstack endpoint create --region RegionOne object-store public http://controller:8080/v1/AUTH_%\(project_id\)s
openstack endpoint create --region RegionOne object-store internal http://controller:8080/v1/AUTH_%\(project_id\)s
openstack endpoint create --region RegionOne object-store admin http://controller:8080/v1
```

2. Install the packages (CTL):

```shell
yum install openstack-swift-proxy python3-swiftclient python3-keystoneclient python3-keystonemiddleware memcached
```

3. Configure the proxy-server

The Swift RPM package already ships a basically usable proxy-server.conf; you only need to adjust the IP addresses and the swift password in it by hand.

***Note***: **replace the password with the one you chose for the swift user in the Identity service.**

4. Install and configure the storage nodes (STG)

Install the supporting packages:

```shell
yum install xfsprogs rsync
```

Format the /dev/vdb and /dev/vdc devices as XFS:

```shell
mkfs.xfs /dev/vdb
mkfs.xfs /dev/vdc
```

Create the mount point directory structure:

```shell
mkdir -p /srv/node/vdb
mkdir -p /srv/node/vdc
```

Find the UUIDs of the new partitions:

```shell
blkid
```

Edit the /etc/fstab file and add the following to it:

```
UUID="" /srv/node/vdb xfs noatime 0 2
UUID="" /srv/node/vdc xfs noatime 0 2
```

Mount the devices:

```shell
mount /srv/node/vdb
mount /srv/node/vdc
```

Note: if you do not need resiliency, only one device has to be created in the steps above, and the rsync configuration below can be skipped.

(Optional) Create or edit the /etc/rsyncd.conf file to include the following:

```ini
[DEFAULT]
uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
address = MANAGEMENT_INTERFACE_IP_ADDRESS

[account]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/account.lock

[container]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/container.lock

[object]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/object.lock
```

Replace MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network on the storage node.
Start the rsyncd service and configure it to start at boot:

```shell
systemctl enable rsyncd.service
systemctl start rsyncd.service
```

5. Install and configure the components on the storage nodes (STG)

Install the packages:

```shell
yum install openstack-swift-account openstack-swift-container openstack-swift-object
```

Edit the account-server.conf, container-server.conf, and object-server.conf files in the /etc/swift directory, replacing bind_ip with the IP address of the management network on the storage node.

Ensure proper ownership of the mount point directory structure:

```shell
chown -R swift:swift /srv/node
```

Create the recon directory and ensure it has the correct ownership:

```shell
mkdir -p /var/cache/swift
chown -R root:swift /var/cache/swift
chmod -R 775 /var/cache/swift
```

6. Create the account ring (CTL)

Change to the /etc/swift directory:

```shell
cd /etc/swift
```

Create the base account.builder file:

```shell
swift-ring-builder account.builder create 10 1 1
```

Add each storage node to the ring:

```shell
swift-ring-builder account.builder add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6202 --device DEVICE_NAME --weight DEVICE_WEIGHT
```

Replace STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network on the storage node and DEVICE_NAME with the name of a storage device on the same node.

Note: repeat this command for every storage device on every storage node.

Verify the ring contents:

```shell
swift-ring-builder account.builder
```

Rebalance the ring:

```shell
swift-ring-builder account.builder rebalance
```

7. Create the container ring (CTL)

Change to the /etc/swift directory.

Create the base container.builder file:

```shell
swift-ring-builder container.builder create 10 1 1
```

Add each storage node to the ring:

```shell
swift-ring-builder container.builder \
  add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6201 \
  --device DEVICE_NAME --weight 100
```

Replace STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network on the storage node and DEVICE_NAME with the name of a storage device on the same node.

Note: repeat this command for every storage device on every storage node.

Verify the ring contents:

```shell
swift-ring-builder container.builder
```

Rebalance the ring:

```shell
swift-ring-builder container.builder rebalance
```
8. Create the object ring (CTL)

Change to the /etc/swift directory.

Create the base object.builder file:

```shell
swift-ring-builder object.builder create 10 1 1
```

Add each storage node to the ring:

```shell
swift-ring-builder object.builder \
  add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6200 \
  --device DEVICE_NAME --weight 100
```

Replace STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network on the storage node and DEVICE_NAME with the name of a storage device on the same node.

Note: repeat this command for every storage device on every storage node.

Verify the ring contents:

```shell
swift-ring-builder object.builder
```

Rebalance the ring:

```shell
swift-ring-builder object.builder rebalance
```

9. Distribute the ring configuration files

Copy the account.ring.gz, container.ring.gz, and object.ring.gz files to the /etc/swift directory on every storage node and on any other node running the proxy service.

10. Finish the installation

Edit the /etc/swift/swift.conf file:

```ini
[swift-hash]
swift_hash_path_suffix = test-hash
swift_hash_path_prefix = test-hash

[storage-policy:0]
name = Policy-0
default = yes
```

Replace test-hash with unique values.

Copy the swift.conf file to the /etc/swift directory on every storage node and on any other node running the proxy service.

On all nodes, ensure proper ownership of the configuration directory:

```shell
chown -R root:swift /etc/swift
```

On the controller node and on any other node running the proxy service, start the object storage proxy service and its dependencies and configure them to start at boot:

```shell
systemctl enable openstack-swift-proxy.service memcached.service
systemctl start openstack-swift-proxy.service memcached.service
```

On the storage nodes, start the object storage services and configure them to start at boot:

```shell
systemctl enable openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service
systemctl start openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service

systemctl enable openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service
systemctl start openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service

systemctl enable openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service
systemctl start openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service
```
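A quick smoke test we find useful (not part of the original steps) is to upload and list an object with the credentials of any regular project; the container and file names below are arbitrary:

```shell
# Show account statistics through the proxy
swift stat

# Create a container, upload a small object, and list it back
openstack container create demo
openstack object create demo /etc/hosts
openstack object list demo
```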
## Cyborg Installation

Cyborg provides acceleration device support for OpenStack, including GPU, FPGA, ASIC, NP, SoCs, NVMe/NOF SSDs, ODP, DPDK/SPDK, and so on.

1. Initialize the corresponding database:

```sql
CREATE DATABASE cyborg;
GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'localhost' IDENTIFIED BY 'CYBORG_DBPASS';
GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'%' IDENTIFIED BY 'CYBORG_DBPASS';
```

2. Create the corresponding Keystone resource objects:

```shell
openstack user create --domain default --password-prompt cyborg
openstack role add --project service --user cyborg admin
openstack service create --name cyborg --description "Acceleration Service" accelerator
openstack endpoint create --region RegionOne \
  accelerator public http://:6666/v1
openstack endpoint create --region RegionOne \
  accelerator internal http://:6666/v1
openstack endpoint create --region RegionOne \
  accelerator admin http://:6666/v1
```

3. Install Cyborg:

```shell
yum install openstack-cyborg
```

4. Configure Cyborg by modifying /etc/cyborg/cyborg.conf:

```ini
[DEFAULT]
transport_url = rabbit://%RABBITMQ_USER%:%RABBITMQ_PASSWORD%@%OPENSTACK_HOST_IP%:5672/
use_syslog = False
state_path = /var/lib/cyborg
debug = True

[database]
connection = mysql+pymysql://%DATABASE_USER%:%DATABASE_PASSWORD%@%OPENSTACK_HOST_IP%/cyborg

[service_catalog]
project_domain_id = default
user_domain_id = default
project_name = service
password = PASSWORD
username = cyborg
auth_url = http://%OPENSTACK_HOST_IP%/identity
auth_type = password

[placement]
project_domain_name = Default
project_name = service
user_domain_name = Default
password = PASSWORD
username = placement
auth_url = http://%OPENSTACK_HOST_IP%/identity
auth_type = password

[keystone_authtoken]
memcached_servers = localhost:11211
project_domain_name = Default
project_name = service
user_domain_name = Default
password = PASSWORD
username = cyborg
auth_url = http://%OPENSTACK_HOST_IP%/identity
auth_type = password
```

Adjust the user names, passwords, IP addresses, and similar values to your environment.

5. Synchronize the database tables:

```shell
cyborg-dbsync --config-file /etc/cyborg/cyborg.conf upgrade
```

6. Start the Cyborg services:

```shell
systemctl enable openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent
systemctl start openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent
```
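To check that the API and the agent can talk to each other, one option we suggest (assuming the python3-cyborgclient OSC plugin is installed) is to list the accelerator devices discovered on the host:

```shell
# On a host without accelerators this should return an empty table rather than an error
openstack accelerator device list
```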
## Aodh Installation

1. Create the database:

```sql
CREATE DATABASE aodh;
GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'localhost' IDENTIFIED BY 'AODH_DBPASS';
GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'%' IDENTIFIED BY 'AODH_DBPASS';
```

2. Create the corresponding Keystone resource objects:

```shell
openstack user create --domain default --password-prompt aodh
openstack role add --project service --user aodh admin
openstack service create --name aodh --description "Telemetry" alarming
openstack endpoint create --region RegionOne alarming public http://controller:8042
openstack endpoint create --region RegionOne alarming internal http://controller:8042
openstack endpoint create --region RegionOne alarming admin http://controller:8042
```

3. Install Aodh:

```shell
yum install openstack-aodh-api openstack-aodh-evaluator openstack-aodh-notifier openstack-aodh-listener openstack-aodh-expirer python3-aodhclient
```

4. Modify the configuration file:

```ini
[database]
connection = mysql+pymysql://aodh:AODH_DBPASS@controller/aodh

[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = aodh
password = AODH_PASS

[service_credentials]
auth_type = password
auth_url = http://controller:5000/v3
project_domain_id = default
user_domain_id = default
project_name = service
username = aodh
password = AODH_PASS
interface = internalURL
region_name = RegionOne
```

5. Initialize the database:

```shell
aodh-dbsync
```

6. Start the Aodh services:

```shell
systemctl enable openstack-aodh-api.service openstack-aodh-evaluator.service openstack-aodh-notifier.service openstack-aodh-listener.service
systemctl start openstack-aodh-api.service openstack-aodh-evaluator.service openstack-aodh-notifier.service openstack-aodh-listener.service
```
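A simple check we suggest once the services are up (not part of the original steps) is to query the alarming API; an empty list is the expected result on a fresh install:

```shell
# Exercises the aodh-api service end to end through the alarming endpoint
openstack alarm list
```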
## Gnocchi Installation

1. Create the database:

```sql
CREATE DATABASE gnocchi;
GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'localhost' IDENTIFIED BY 'GNOCCHI_DBPASS';
GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'%' IDENTIFIED BY 'GNOCCHI_DBPASS';
```

2. Create the corresponding Keystone resource objects:

```shell
openstack user create --domain default --password-prompt gnocchi
openstack role add --project service --user gnocchi admin
openstack service create --name gnocchi --description "Metric Service" metric
openstack endpoint create --region RegionOne metric public http://controller:8041
openstack endpoint create --region RegionOne metric internal http://controller:8041
openstack endpoint create --region RegionOne metric admin http://controller:8041
```

3. Install Gnocchi:

```shell
yum install openstack-gnocchi-api openstack-gnocchi-metricd python3-gnocchiclient
```

4. Modify the configuration file /etc/gnocchi/gnocchi.conf:

```ini
[api]
auth_mode = keystone
port = 8041
uwsgi_mode = http-socket

[keystone_authtoken]
auth_type = password
auth_url = http://controller:5000/v3
project_domain_name = Default
user_domain_name = Default
project_name = service
username = gnocchi
password = GNOCCHI_PASS
interface = internalURL
region_name = RegionOne

[indexer]
url = mysql+pymysql://gnocchi:GNOCCHI_DBPASS@controller/gnocchi

[storage]
# coordination_url is not required but specifying one will improve
# performance with better workload division across workers.
coordination_url = redis://controller:6379
file_basepath = /var/lib/gnocchi
driver = file
```

5. Initialize the database:

```shell
gnocchi-upgrade
```

6. Start the Gnocchi services:

```shell
systemctl enable openstack-gnocchi-api.service openstack-gnocchi-metricd.service
systemctl start openstack-gnocchi-api.service openstack-gnocchi-metricd.service
```

## Ceilometer Installation

1. Create the corresponding Keystone resource objects:

```shell
openstack user create --domain default --password-prompt ceilometer
openstack role add --project service --user ceilometer admin
openstack service create --name ceilometer --description "Telemetry" metering
```

2. Install Ceilometer:

```shell
yum install openstack-ceilometer-notification openstack-ceilometer-central
```

3. Modify the configuration file /etc/ceilometer/pipeline.yaml:

```yaml
publishers:
    # set address of Gnocchi
    # + filter out Gnocchi-related activity meters (Swift driver)
    # + set default archive policy
    - gnocchi://?filter_project=service&archive_policy=low
```

4. Modify the configuration file /etc/ceilometer/ceilometer.conf:

```ini
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller

[service_credentials]
auth_type = password
auth_url = http://controller:5000/v3
project_domain_id = default
user_domain_id = default
project_name = service
username = ceilometer
password = CEILOMETER_PASS
interface = internalURL
region_name = RegionOne
```

5. Initialize the database:

```shell
ceilometer-upgrade
```

6. Start the Ceilometer services:

```shell
systemctl enable openstack-ceilometer-notification.service openstack-ceilometer-central.service
systemctl start openstack-ceilometer-notification.service openstack-ceilometer-central.service
```
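To confirm that measurements flow from Ceilometer into Gnocchi, a simple check we suggest (not part of the original steps) is to query Gnocchi after a few polling intervals; if the `openstack metric` plugin commands are unavailable in your client version, the standalone `gnocchi` CLI from python3-gnocchiclient offers the same operations:

```shell
# Processing backlog of gnocchi-metricd, and the resources registered by Ceilometer so far
openstack metric status
openstack metric resource list
```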
## Heat Installation

1. Create the heat database and grant it the proper access permissions, replacing HEAT_DBPASS with a suitable password:

```sql
CREATE DATABASE heat;
GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'localhost' IDENTIFIED BY 'HEAT_DBPASS';
GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'%' IDENTIFIED BY 'HEAT_DBPASS';
```

2. Create the service credentials: create the heat user and add the admin role to it:

```shell
openstack user create --domain default --password-prompt heat
openstack role add --project service --user heat admin
```

3. Create the heat and heat-cfn services and their API endpoints:

```shell
openstack service create --name heat --description "Orchestration" orchestration
openstack service create --name heat-cfn --description "Orchestration" cloudformation
openstack endpoint create --region RegionOne orchestration public http://controller:8004/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne orchestration internal http://controller:8004/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne orchestration admin http://controller:8004/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne cloudformation public http://controller:8000/v1
openstack endpoint create --region RegionOne cloudformation internal http://controller:8000/v1
openstack endpoint create --region RegionOne cloudformation admin http://controller:8000/v1
```

4. Create the additional information needed for stack management, including the heat domain, the admin user heat_domain_admin of that domain, the heat_stack_owner role, and the heat_stack_user role:

```shell
openstack user create --domain heat --password-prompt heat_domain_admin
openstack role add --domain heat --user-domain heat --user heat_domain_admin admin
openstack role create heat_stack_owner
openstack role create heat_stack_user
```

5. Install the packages:

```shell
yum install openstack-heat-api openstack-heat-api-cfn openstack-heat-engine
```

6. Modify the configuration file /etc/heat/heat.conf:

```ini
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
heat_metadata_server_url = http://controller:8000
heat_waitcondition_server_url = http://controller:8000/v1/waitcondition
stack_domain_admin = heat_domain_admin
stack_domain_admin_password = HEAT_DOMAIN_PASS
stack_user_domain_name = heat

[database]
connection = mysql+pymysql://heat:HEAT_DBPASS@controller/heat

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = heat
password = HEAT_PASS

[trustee]
auth_type = password
auth_url = http://controller:5000
username = heat
password = HEAT_PASS
user_domain_name = default

[clients_keystone]
auth_uri = http://controller:5000
```

7. Initialize the heat database tables:

```shell
su -s /bin/sh -c "heat-manage db_sync" heat
```

8. Start the services:

```shell
systemctl enable openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service
systemctl start openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service
```
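To verify the orchestration engine end to end without depending on Nova or Neutron, a throwaway stack built from a trivial template works well. This is a sketch of our own (not part of the original guide); OS::Heat::RandomString is a built-in resource type, and the template version shown is one supported by Train:

```shell
cat > minimal.yaml << 'EOF'
heat_template_version: 2018-08-31
resources:
  demo_secret:
    type: OS::Heat::RandomString
    properties:
      length: 16
outputs:
  secret:
    value: {get_attr: [demo_secret, value]}
EOF

# Create the stack, inspect its output, then clean up
openstack stack create -t minimal.yaml demo-stack
openstack stack list
openstack stack output show demo-stack secret
openstack stack delete --yes demo-stack
```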
## Rapid Deployment Based on the OpenStack SIG Development Tool oos

oos (openEuler OpenStack SIG) is the command-line tool provided by the OpenStack SIG. Its `oos env` family of commands provides ansible scripts for one-click deployment of OpenStack (all in one or a three-node cluster), which lets users quickly deploy an OpenStack environment based on openEuler RPM packages.

The oos tool supports two ways of deploying an OpenStack environment: against a cloud provider (currently only the Huawei Cloud provider is supported) or by managing existing hosts. The following uses the Huawei Cloud provider to deploy an all-in-one OpenStack environment as an example of how to use oos.

1. Install the oos tool:

```shell
pip install openstack-sig-tool
```

2. Configure the Huawei Cloud provider information

Open the /usr/local/etc/oos/oos.conf file and change the configuration to the Huawei Cloud resources you own:

```ini
[huaweicloud]
ak =
sk =
region = ap-southeast-3
root_volume_size = 100
data_volume_size = 100
security_group_name = oos
image_format = openEuler-%%(release)s-%%(arch)s
vpc_name = oos_vpc
subnet1_name = oos_subnet1
subnet2_name = oos_subnet2
```

3. Configure the OpenStack environment information

Open the /usr/local/etc/oos/oos.conf file and adjust the configuration to the current machine and your needs. The content is as follows:

```ini
[environment]
mysql_root_password = root
mysql_project_password = root
rabbitmq_password = root
project_identity_password = root
enabled_service = keystone,neutron,cinder,placement,nova,glance,horizon,aodh,ceilometer,cyborg,gnocchi,kolla,heat,swift,trove,tempest
neutron_provider_interface_name = br-ex
default_ext_subnet_range = 10.100.100.0/24
default_ext_subnet_gateway = 10.100.100.1
neutron_dataplane_interface_name = eth1
cinder_block_device = vdb
swift_storage_devices = vdc
swift_hash_path_suffix = ash
swift_hash_path_prefix = has
glance_api_workers = 2
cinder_api_workers = 2
nova_api_workers = 2
nova_metadata_api_workers = 2
nova_conductor_workers = 2
nova_scheduler_workers = 2
neutron_api_workers = 2
horizon_allowed_host = *
kolla_openeuler_plugin = false
```

Key options:

| Option | Description |
|:--|:--|
| enabled_service | List of services to install; trim it according to your needs |
| neutron_provider_interface_name | Name of the neutron L3 bridge |
| default_ext_subnet_range | neutron private network IP range |
| default_ext_subnet_gateway | neutron private network gateway |
| neutron_dataplane_interface_name | NIC used by neutron; a new, dedicated NIC is recommended to avoid conflicts with existing NICs and to prevent the all-in-one host from losing connectivity |
| cinder_block_device | Name of the volume device used by cinder |
| swift_storage_devices | Name of the volume device used by swift |
| kolla_openeuler_plugin | Whether to enable the kolla plugin; when set to True, kolla can deploy openEuler containers |

4. Create an openEuler 22.03-LTS-SP1 x86_64 virtual machine on Huawei Cloud for the all-in-one OpenStack deployment:

```shell
# sshpass is used during `oos env create` to set up passwordless access to the target VM
dnf install sshpass
oos env create -r 22.03-lts-sp1 -f small -a x86 -n test-oos all_in_one
```

See `oos env create --help` for the full list of parameters.

5. Deploy the OpenStack all-in-one environment:

```shell
oos env setup test-oos -r train
```

See `oos env setup --help` for the full list of parameters.

6. Initialize the tempest environment

If you want to run tempest tests against this environment, run `oos env init`, which automatically creates the OpenStack resources tempest needs:

```shell
oos env init test-oos
```

After the command succeeds, a mytest directory is generated in the user's home directory; change into it and you can run the tempest run command.

If you deploy the OpenStack environment by managing existing hosts instead, the overall flow is the same as with Huawei Cloud above: steps 1, 3, 5, and 6 stay unchanged, step 2 (the Huawei Cloud provider configuration) is dropped, and step 4 becomes managing the host rather than creating a VM on Huawei Cloud:

```shell
# sshpass is used during `oos env create` to set up passwordless access to the target host
dnf install sshpass
oos env manage -r 22.03-lts-sp1 -i TARGET_MACHINE_IP -p TARGET_MACHINE_PASSWD -n test-oos
```

Replace TARGET_MACHINE_IP with the IP of the target machine and TARGET_MACHINE_PASSWD with its password. See `oos env manage --help` for the full list of parameters.
## Deployment Based on the OpenStack SIG Deployment Tool opensd

opensd is used to deploy the OpenStack component services in bulk with scripts.

## Deployment Steps

### 1. Information to confirm before deployment

- When installing the operating system, set selinux to disabled.
- When installing the operating system, set UseDNS to no in the /etc/ssh/sshd_config configuration file.
- The operating system language must be set to English.
- Before deployment, make sure the /etc/hosts file on every compute node contains no entries resolving the compute hosts.

### 2. Creating ceph pools and credentials (optional)

Skip this step if you do not use ceph or already have a ceph cluster.

Run the following on any ceph monitor node.

#### 2.1 Create the pools

```shell
ceph osd pool create volumes 2048
ceph osd pool create images 2048
```

#### 2.2 Initialize the pools

```shell
rbd pool init volumes
rbd pool init images
```

#### 2.3 Create the user credentials

```shell
ceph auth get-or-create client.glance mon 'profile rbd' osd 'profile rbd pool=images' mgr 'profile rbd pool=images'
ceph auth get-or-create client.cinder mon 'profile rbd' osd 'profile rbd pool=volumes, profile rbd pool=images' mgr 'profile rbd pool=volumes'
```
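If glance and cinder will consume these pools directly, the keyrings created above typically also have to be copied to the corresponding service hosts. The commands below are a sketch of our own following the usual Ceph/OpenStack integration pattern (not part of this guide); GLANCE_HOST and CINDER_HOST are placeholders:

```shell
# Export the keyrings to the hosts running glance-api and cinder-volume
ceph auth get-or-create client.glance | ssh GLANCE_HOST sudo tee /etc/ceph/ceph.client.glance.keyring
ssh GLANCE_HOST sudo chown glance:glance /etc/ceph/ceph.client.glance.keyring

ceph auth get-or-create client.cinder | ssh CINDER_HOST sudo tee /etc/ceph/ceph.client.cinder.keyring
ssh CINDER_HOST sudo chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring
```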
### 3. Configuring lvm (optional)

Depending on the physical machine's disk layout and which disks are free, mount additional disk space for the mysql data directory. Example (adjust to your actual situation):

```shell
fdisk -l

Disk /dev/sdd: 479.6 GB, 479559942144 bytes, 936640512 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk label type: dos
Disk identifier: 0x000ed242
```

Create the partition:

```shell
parted /dev/sdd mkpart primary 0 -1
```

Create the pv:

```shell
partprobe /dev/sdd1
pvcreate /dev/sdd1
```

Create and activate the vg:

```shell
vgcreate vg_mariadb /dev/sdd1
vgchange -ay vg_mariadb
```

Check the vg capacity:

```shell
vgdisplay

  --- Volume group ---
  VG Name               vg_mariadb
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  2
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               1
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               446.62 GiB
  PE Size               4.00 MiB
  Total PE              114335
  Alloc PE / Size       114176 / 446.00 GiB
  Free  PE / Size       159 / 636.00 MiB
  VG UUID               bVUmDc-VkMu-Vi43-mg27-TEkG-oQfK-TvqdEc
```

Create the lv:

```shell
lvcreate -L 446G -n lv_mariadb vg_mariadb
```

Format the volume and get its UUID:

```shell
mkfs.ext4 /dev/mapper/vg_mariadb-lv_mariadb
blkid /dev/mapper/vg_mariadb-lv_mariadb

/dev/mapper/vg_mariadb-lv_mariadb: UUID="98d513eb-5f64-4aa5-810e-dc7143884fa2" TYPE="ext4"
```

Note: 98d513eb-5f64-4aa5-810e-dc7143884fa2 is the UUID of the volume.

Mount the disk:

```shell
mount /dev/mapper/vg_mariadb-lv_mariadb /var/lib/mysql
rm -rf /var/lib/mysql/*
```

### 4. Configuring the yum repo

Run the following on the deployment node.

#### 4.1 Back up the yum sources

```shell
mkdir /etc/yum.repos.d/bak/
mv /etc/yum.repos.d/*.repo /etc/yum.repos.d/bak/
```

#### 4.2 Configure the yum repo

```shell
cat > /etc/yum.repos.d/opensd.repo << 'EOF'
[train]
name=train
baseurl=http://119.3.219.20:82/openEuler:/22.03:/LTS:/SP1:/Epol:/Multi-Version:/OpenStack:/Train/standard_$basearch/
enabled=1
gpgcheck=0

[epol]
name=epol
baseurl=http://119.3.219.20:82/openEuler:/22.03:/LTS:/SP1:/Epol/standard_$basearch/
enabled=1
gpgcheck=0

[everything]
name=everything
baseurl=http://119.3.219.20:82/openEuler:/22.03:/LTS:/SP1/standard_$basearch/
enabled=1
gpgcheck=0
EOF
```

#### 4.3 Refresh the yum cache

```shell
yum clean all
yum makecache
```

### 5. Installing opensd

Run the following on the deployment node.

#### 5.1 Clone the opensd source code and install it

```shell
git clone https://gitee.com/openeuler/opensd
cd opensd
python3 setup.py install
```

### 6. Setting up ssh mutual trust

Run the following on the deployment node.

#### 6.1 Generate the key pair

Run the following command and press Enter through all prompts:

```shell
ssh-keygen
```

#### 6.2 Generate the host IP address file

Configure all host IPs that will be used in auto_ssh_host_ip, for example:

```shell
cd /usr/local/share/opensd/tools/
vim auto_ssh_host_ip

10.0.0.1
10.0.0.2
...
10.0.0.10
```

#### 6.3 Change the password and run the script

Replace the 123123 string inside the passwordless-login script /usr/local/bin/opensd-auto-ssh with the real host password:

```shell
# Replace the 123123 string inside the script
vim /usr/local/bin/opensd-auto-ssh

## Install expect, then run the script
dnf install expect -y
opensd-auto-ssh
```

#### 6.4 Set up mutual trust between the deployment node and the ceph monitor (optional)

```shell
ssh-copy-id root@x.x.x.x
```
### 7. Configuring opensd

Run the following on the deployment node.

#### 7.1 Generate random passwords

Install python3-pbr, python3-utils, python3-pyyaml, and python3-oslo-utils, then generate random passwords:

```shell
dnf install python3-pbr python3-utils python3-pyyaml python3-oslo-utils -y

# Run the command that generates the passwords
opensd-genpwd

# Check that the passwords were generated
cat /usr/local/share/opensd/etc_examples/opensd/passwords.yml
```

#### 7.2 Configure the inventory file

The host information consists of the host name, the ansible_host IP, and the availability_zone; all three must be configured, none can be omitted. Example:

```shell
vim /usr/local/share/opensd/ansible/inventory/multinode
```

```ini
# The three controller node hosts
[control]
controller1 ansible_host=10.0.0.35 availability_zone=az01.cell01.cn-yogadev-1
controller2 ansible_host=10.0.0.36 availability_zone=az01.cell01.cn-yogadev-1
controller3 ansible_host=10.0.0.37 availability_zone=az01.cell01.cn-yogadev-1

# Network node information, kept identical to the controller nodes
[network]
controller1 ansible_host=10.0.0.35 availability_zone=az01.cell01.cn-yogadev-1
controller2 ansible_host=10.0.0.36 availability_zone=az01.cell01.cn-yogadev-1
controller3 ansible_host=10.0.0.37 availability_zone=az01.cell01.cn-yogadev-1

# cinder-volume service node information
[storage]
storage1 ansible_host=10.0.0.61 availability_zone=az01.cell01.cn-yogadev-1
storage2 ansible_host=10.0.0.78 availability_zone=az01.cell01.cn-yogadev-1
storage3 ansible_host=10.0.0.82 availability_zone=az01.cell01.cn-yogadev-1

# Cell1 cluster information
[cell-control-cell1]
cell1 ansible_host=10.0.0.24 availability_zone=az01.cell01.cn-yogadev-1
cell2 ansible_host=10.0.0.25 availability_zone=az01.cell01.cn-yogadev-1
cell3 ansible_host=10.0.0.26 availability_zone=az01.cell01.cn-yogadev-1

[compute-cell1]
compute1 ansible_host=10.0.0.27 availability_zone=az01.cell01.cn-yogadev-1
compute2 ansible_host=10.0.0.28 availability_zone=az01.cell01.cn-yogadev-1
compute3 ansible_host=10.0.0.29 availability_zone=az01.cell01.cn-yogadev-1

[cell1:children]
cell-control-cell1
compute-cell1

# Cell2 cluster information
[cell-control-cell2]
cell4 ansible_host=10.0.0.36 availability_zone=az03.cell02.cn-yogadev-1
cell5 ansible_host=10.0.0.37 availability_zone=az03.cell02.cn-yogadev-1
cell6 ansible_host=10.0.0.38 availability_zone=az03.cell02.cn-yogadev-1

[compute-cell2]
compute4 ansible_host=10.0.0.39 availability_zone=az03.cell02.cn-yogadev-1
compute5 ansible_host=10.0.0.40 availability_zone=az03.cell02.cn-yogadev-1
compute6 ansible_host=10.0.0.41 availability_zone=az03.cell02.cn-yogadev-1

[cell2:children]
cell-control-cell2
compute-cell2

[baremetal]

[compute-cell1-ironic]

# List the control host groups of all cell clusters
[nova-conductor:children]
cell-control-cell1
cell-control-cell2

# List the compute host groups of all cell clusters
[nova-compute:children]
compute-added
compute-cell1
compute-cell2

# The host groups below do not need to be changed; keep them as they are
[compute-added]

[chrony-server:children]
control

[pacemaker:children]
control

......
......
```
#### 7.3 Configure the global variables

Note: only the commented configuration items mentioned in this document need to be changed; the other parameters do not. Leave an item empty if it does not apply to your environment.

```shell
vim /usr/local/share/opensd/etc_examples/opensd/globals.yml
```

```yaml
########################
# Network & Base options
########################
network_interface: "eth0"            # NIC name of the management network
neutron_external_interface: "eth1"   # NIC name of the data-plane (business) network
cidr_netmask: 24                     # netmask of the management network
opensd_vip_address: 10.0.0.33        # virtual IP address of the controller nodes
cell1_vip_address: 10.0.0.34         # virtual IP address of the cell1 cluster
cell2_vip_address: 10.0.0.35         # virtual IP address of the cell2 cluster
external_fqdn: ""                    # external domain name used for vnc access to VMs
external_ntp_servers: []             # external ntp server addresses
yumrepo_host:                        # IP address of the yum repository
yumrepo_port:                        # port of the yum repository
environment:                         # type of the yum repository
upgrade_all_packages: "yes"          # whether to upgrade all installed packages (run yum upgrade); set to "yes" for an initial deployment
enable_miner: "no"                   # whether to deploy the miner service
enable_chrony: "no"                  # whether to deploy the chrony service
enable_pri_mariadb: "no"             # whether to deploy mariadb for a private cloud
enable_hosts_file_modify: "no"       # whether to add node information to /etc/hosts when scaling out compute nodes and deploying the ironic service

########################
# Available zone options
########################
az_cephmon_compose:
  - availability_zone:               # availability zone name; must match the "availability_zone" value of az01 in the multinode host file
    ceph_mon_host:                   # one ceph monitor host address for az01; the deployment node needs ssh mutual trust with it
    reserve_vcpu_based_on_numa:
  - availability_zone:               # availability zone name; must match the "availability_zone" value of az02 in the multinode host file
    ceph_mon_host:                   # one ceph monitor host address for az02; the deployment node needs ssh mutual trust with it
    reserve_vcpu_based_on_numa:
  - availability_zone:               # availability zone name; must match the "availability_zone" value of az03 in the multinode host file
    ceph_mon_host:                   # one ceph monitor host address for az03; the deployment node needs ssh mutual trust with it
    reserve_vcpu_based_on_numa:
```

`reserve_vcpu_based_on_numa` is set to `yes` or `no`. For example, given:

```
NUMA node0 CPU(s):   0-15,32-47
NUMA node1 CPU(s):   16-31,48-63
```

With reserve_vcpu_based_on_numa: "yes", vcpus are reserved evenly per NUMA node:

```
vcpu_pin_set = 2-15,34-47,18-31,50-63
```
\u5f53reserve_vcpu_based_on_numa: \"no\", \u4ece\u7b2c\u4e00\u4e2avcpu\u5f00\u59cb\uff0c\u987a\u5e8f\u9884\u7559vcpu: vcpu_pin_set = 8-64 ####################### # Nova options ####################### nova_reserved_host_memory_mb: 2048 #\u8ba1\u7b97\u8282\u70b9\u7ed9\u8ba1\u7b97\u670d\u52a1\u9884\u7559\u7684\u5185\u5b58\u5927\u5c0f enable_cells: \"yes\" #cell\u8282\u70b9\u662f\u5426\u5355\u72ec\u8282\u70b9\u90e8\u7f72 support_gpu: \"False\" #cell\u8282\u70b9\u662f\u5426\u6709GPU\u670d\u52a1\u5668\uff0c\u5982\u679c\u6709\u5219\u4e3aTrue\uff0c\u5426\u5219\u4e3aFalse ####################### # Neutron options ####################### monitor_ip: - 10.0.0.9 #\u914d\u7f6e\u76d1\u63a7\u8282\u70b9 - 10.0.0.10 enable_meter_full_eip: True #\u914d\u7f6e\u662f\u5426\u5141\u8bb8EIP\u5168\u91cf\u76d1\u63a7\uff0c\u9ed8\u8ba4\u4e3aTrue enable_meter_port_forwarding: True #\u914d\u7f6e\u662f\u5426\u5141\u8bb8port forwarding\u76d1\u63a7\uff0c\u9ed8\u8ba4\u4e3aTrue enable_meter_ecs_ipv6: True #\u914d\u7f6e\u662f\u5426\u5141\u8bb8ecs_ipv6\u76d1\u63a7\uff0c\u9ed8\u8ba4\u4e3aTrue enable_meter: True #\u914d\u7f6e\u662f\u5426\u5f00\u542f\u76d1\u63a7\uff0c\u9ed8\u8ba4\u4e3aTrue is_sdn_arch: False #\u914d\u7f6e\u662f\u5426\u662fsdn\u67b6\u6784\uff0c\u9ed8\u8ba4\u4e3aFalse # \u9ed8\u8ba4\u4f7f\u80fd\u7684\u7f51\u7edc\u7c7b\u578b\u662fvlan,vlan\u548cvxlan\u4e24\u79cd\u7c7b\u578b\u53ea\u80fd\u4e8c\u9009\u4e00. enable_vxlan_network_type: False # \u9ed8\u8ba4\u4f7f\u80fd\u7684\u7f51\u7edc\u7c7b\u578b\u662fvlan,\u5982\u679c\u4f7f\u7528vxlan\u7f51\u7edc\uff0c\u914d\u7f6e\u4e3aTrue, \u5982\u679c\u4f7f\u7528vlan\u7f51\u7edc\uff0c\u914d\u7f6e\u4e3aFalse. enable_neutron_fwaas: False # \u73af\u5883\u6709\u4f7f\u7528\u9632\u706b\u5899, \u8bbe\u7f6e\u4e3aTrue, \u4f7f\u80fd\u9632\u62a4\u5899\u529f\u80fd. # Neutron provider neutron_provider_networks: network_types: \"{{ 'vxlan' if enable_vxlan_network_type else 'vlan' }}\" network_vlan_ranges: \"default:xxx:xxx\" #\u90e8\u7f72\u4e4b\u524d\u89c4\u5212\u7684\u4e1a\u52a1\u7f51\u7edcvlan\u8303\u56f4 network_mappings: \"default:br-provider\" network_interface: \"{{ neutron_external_interface }}\" network_vxlan_ranges: \"\" #\u90e8\u7f72\u4e4b\u524d\u89c4\u5212\u7684\u4e1a\u52a1\u7f51\u7edcvxlan\u8303\u56f4 # \u5982\u4e0b\u8fd9\u4e9b\u914d\u7f6e\u662fSND\u63a7\u5236\u5668\u7684\u914d\u7f6e\u53c2\u6570, `enable_sdn_controller`\u8bbe\u7f6e\u4e3aTrue, \u4f7f\u80fdSND\u63a7\u5236\u5668\u529f\u80fd. # \u5176\u4ed6\u53c2\u6570\u8bf7\u6839\u636e\u90e8\u7f72\u4e4b\u524d\u7684\u89c4\u5212\u548cSDN\u90e8\u7f72\u4fe1\u606f\u786e\u5b9a. 
enable_sdn_controller: False sdn_controller_ip_address: # SDN\u63a7\u5236\u5668ip\u5730\u5740 sdn_controller_username: # SDN\u63a7\u5236\u5668\u7684\u7528\u6237\u540d sdn_controller_password: # SDN\u63a7\u5236\u5668\u7684\u7528\u6237\u5bc6\u7801 ####################### # Dimsagent options ####################### enable_dimsagent: \"no\" # \u5b89\u88c5\u955c\u50cf\u670d\u52a1agent, \u9700\u8981\u6539\u4e3ayes # Address and domain name for s2 s3_address_domain_pair: - host_ip: host_name: ####################### # Trove options ####################### enable_trove: \"no\" #\u5b89\u88c5trove \u9700\u8981\u6539\u4e3ayes #default network trove_default_neutron_networks: #trove \u7684\u7ba1\u7406\u7f51\u7edcid `openstack network list|grep -w trove-mgmt|awk '{print$2}'` #s3 setup(\u5982\u679c\u6ca1\u6709s3,\u4ee5\u4e0b\u503c\u586bnull) s3_endpoint_host_ip: #s3\u7684ip s3_endpoint_host_name: #s3\u7684\u57df\u540d s3_endpoint_url: #s3\u7684url \u00b7\u4e00\u822c\u4e3ahttp\uff1a//s3\u57df\u540d s3_access_key: #s3\u7684ak s3_secret_key: #s3\u7684sk ####################### # Ironic options ####################### enable_ironic: \"no\" #\u662f\u5426\u5f00\u673a\u88f8\u91d1\u5c5e\u90e8\u7f72\uff0c\u9ed8\u8ba4\u4e0d\u5f00\u542f ironic_neutron_provisioning_network_uuid: ironic_neutron_cleaning_network_uuid: \"{{ ironic_neutron_provisioning_network_uuid }}\" ironic_dnsmasq_interface: ironic_dnsmasq_dhcp_range: ironic_tftp_server_address: \"{{ hostvars[inventory_hostname]['ansible_' + ironic_dnsmasq_interface]['ipv4']['address'] }}\" # \u4ea4\u6362\u673a\u8bbe\u5907\u76f8\u5173\u4fe1\u606f neutron_ml2_conf_genericswitch: genericswitch:xxxxxxx: device_type: ngs_mac_address: ip: username: password: ngs_port_default_vlan: # Package state setting haproxy_package_state: \"present\" mariadb_package_state: \"present\" rabbitmq_package_state: \"present\" memcached_package_state: \"present\" ceph_client_package_state: \"present\" keystone_package_state: \"present\" glance_package_state: \"present\" cinder_package_state: \"present\" nova_package_state: \"present\" neutron_package_state: \"present\" miner_package_state: \"present\"","title":"7.3 \u914d\u7f6e\u5168\u5c40\u53d8\u91cf"},{"location":"install/openEuler-22.03-LTS-SP1/OpenStack-train/#74-ssh","text":"dnf install ansible -y ansible all -i /usr/local/share/opensd/ansible/inventory/multinode -m ping # \u6267\u884c\u7ed3\u679c\u663e\u793a\u6bcf\u53f0\u4e3b\u673a\u90fd\u662f\"SUCCESS\"\u5373\u8bf4\u660e\u8fde\u63a5\u72b6\u6001\u6ca1\u95ee\u9898,\u793a\u4f8b\uff1a compute1 | SUCCESS => { \"ansible_facts\": { \"discovered_interpreter_python\": \"/usr/bin/python\" }, \"changed\": false, \"ping\": \"pong\" }","title":"7.4 \u68c0\u67e5\u6240\u6709\u8282\u70b9ssh\u8fde\u63a5\u72b6\u6001"},{"location":"install/openEuler-22.03-LTS-SP1/OpenStack-train/#8","text":"\u5728\u90e8\u7f72\u8282\u70b9\u6267\u884c\uff1a","title":"8. 
\u6267\u884c\u90e8\u7f72"},{"location":"install/openEuler-22.03-LTS-SP1/OpenStack-train/#81-bootstrap","text":"# \u6267\u884c\u90e8\u7f72 opensd -i /usr/local/share/opensd/ansible/inventory/multinode bootstrap --forks 50","title":"8.1 \u6267\u884cbootstrap"},{"location":"install/openEuler-22.03-LTS-SP1/OpenStack-train/#82","text":"\u6ce8\uff1a\u6267\u884c\u91cd\u542f\u7684\u539f\u56e0\u662f:bootstrap\u53ef\u80fd\u4f1a\u5347\u5185\u6838,\u66f4\u6539selinux\u914d\u7f6e\u6216\u8005\u6709GPU\u670d\u52a1\u5668,\u5982\u679c\u88c5\u673a\u8fc7\u7a0b\u5df2\u7ecf\u662f\u65b0\u7248\u5185\u6838,selinux disable\u6216\u8005\u6ca1\u6709GPU\u670d\u52a1\u5668,\u5219\u4e0d\u9700\u8981\u6267\u884c\u8be5\u6b65\u9aa4 # \u624b\u52a8\u91cd\u542f\u5bf9\u5e94\u8282\u70b9,\u6267\u884c\u547d\u4ee4 init6 # \u91cd\u542f\u5b8c\u6210\u540e\uff0c\u518d\u6b21\u68c0\u67e5\u8fde\u901a\u6027 ansible all -i /usr/local/share/opensd/ansible/inventory/multinode -m ping # \u91cd\u542f\u5b8c\u540e\u64cd\u4f5c\u7cfb\u7edf\u540e\uff0c\u518d\u6b21\u542f\u52a8yum\u6e90","title":"8.2 \u91cd\u542f\u670d\u52a1\u5668"},{"location":"install/openEuler-22.03-LTS-SP1/OpenStack-train/#83","text":"opensd -i /usr/local/share/opensd/ansible/inventory/multinode prechecks --forks 50","title":"8.3 \u6267\u884c\u90e8\u7f72\u524d\u68c0\u67e5"},{"location":"install/openEuler-22.03-LTS-SP1/OpenStack-train/#84","text":"ln -s /usr/bin/python3 /usr/bin/python \u5168\u91cf\u90e8\u7f72\uff1a opensd -i /usr/local/share/opensd/ansible/inventory/multinode deploy --forks 50 \u5355\u670d\u52a1\u90e8\u7f72\uff1a opensd -i /usr/local/share/opensd/ansible/inventory/multinode deploy --forks 50 -t service_name","title":"8.4 \u6267\u884c\u90e8\u7f72"},{"location":"install/openEuler-22.03-LTS-SP1/OpenStack-wallaby/","text":"OpenStack-Wallaby \u90e8\u7f72\u6307\u5357 \u00b6 OpenStack-Wallaby \u90e8\u7f72\u6307\u5357 OpenStack \u7b80\u4ecb \u7ea6\u5b9a \u51c6\u5907\u73af\u5883 \u73af\u5883\u914d\u7f6e \u5b89\u88c5 SQL DataBase \u5b89\u88c5 RabbitMQ \u5b89\u88c5 Memcached \u5b89\u88c5 OpenStack Keystone \u5b89\u88c5 Glance \u5b89\u88c5 Placement\u5b89\u88c5 Nova \u5b89\u88c5 Neutron \u5b89\u88c5 Cinder \u5b89\u88c5 horizon \u5b89\u88c5 Tempest \u5b89\u88c5 Ironic \u5b89\u88c5 Kolla \u5b89\u88c5 Trove \u5b89\u88c5 Swift \u5b89\u88c5 Cyborg \u5b89\u88c5 Aodh \u5b89\u88c5 Gnocchi \u5b89\u88c5 Ceilometer \u5b89\u88c5 Heat \u5b89\u88c5 \u57fa\u4e8eOpenStack SIG\u5f00\u53d1\u5de5\u5177oos\u5feb\u901f\u90e8\u7f72 OpenStack \u7b80\u4ecb \u00b6 OpenStack \u662f\u4e00\u4e2a\u793e\u533a\uff0c\u4e5f\u662f\u4e00\u4e2a\u9879\u76ee\u3002\u5b83\u63d0\u4f9b\u4e86\u4e00\u4e2a\u90e8\u7f72\u4e91\u7684\u64cd\u4f5c\u5e73\u53f0\u6216\u5de5\u5177\u96c6\uff0c\u4e3a\u7ec4\u7ec7\u63d0\u4f9b\u53ef\u6269\u5c55\u7684\u3001\u7075\u6d3b\u7684\u4e91\u8ba1\u7b97\u3002 \u4f5c\u4e3a\u4e00\u4e2a\u5f00\u6e90\u7684\u4e91\u8ba1\u7b97\u7ba1\u7406\u5e73\u53f0\uff0cOpenStack \u7531nova\u3001cinder\u3001neutron\u3001glance\u3001keystone\u3001horizon\u7b49\u51e0\u4e2a\u4e3b\u8981\u7684\u7ec4\u4ef6\u7ec4\u5408\u8d77\u6765\u5b8c\u6210\u5177\u4f53\u5de5\u4f5c\u3002OpenStack \u652f\u6301\u51e0\u4e4e\u6240\u6709\u7c7b\u578b\u7684\u4e91\u73af\u5883\uff0c\u9879\u76ee\u76ee\u6807\u662f\u63d0\u4f9b\u5b9e\u65bd\u7b80\u5355\u3001\u53ef\u5927\u89c4\u6a21\u6269\u5c55\u3001\u4e30\u5bcc\u3001\u6807\u51c6\u7edf\u4e00\u7684\u4e91\u8ba1\u7b97\u7ba1\u7406\u5e73\u53f0\u3002OpenStack 
\u901a\u8fc7\u5404\u79cd\u4e92\u8865\u7684\u670d\u52a1\u63d0\u4f9b\u4e86\u57fa\u7840\u8bbe\u65bd\u5373\u670d\u52a1\uff08IaaS\uff09\u7684\u89e3\u51b3\u65b9\u6848\uff0c\u6bcf\u4e2a\u670d\u52a1\u63d0\u4f9b API \u8fdb\u884c\u96c6\u6210\u3002 openEuler 22.03-LTS-SP1\u7248\u672c\u5b98\u65b9\u6e90\u5df2\u7ecf\u652f\u6301 OpenStack-Wallaby \u7248\u672c\uff0c\u7528\u6237\u53ef\u4ee5\u914d\u7f6e\u597d yum \u6e90\u540e\u6839\u636e\u6b64\u6587\u6863\u8fdb\u884c OpenStack \u90e8\u7f72\u3002 \u7ea6\u5b9a \u00b6 OpenStack \u652f\u6301\u591a\u79cd\u5f62\u6001\u90e8\u7f72\uff0c\u6b64\u6587\u6863\u652f\u6301 ALL in One \u4ee5\u53ca Distributed \u4e24\u79cd\u90e8\u7f72\u65b9\u5f0f\uff0c\u6309\u7167\u5982\u4e0b\u65b9\u5f0f\u7ea6\u5b9a\uff1a ALL in One \u6a21\u5f0f: \u5ffd\u7565\u6240\u6709\u53ef\u80fd\u7684\u540e\u7f00 Distributed \u6a21\u5f0f: \u4ee5 `(CTL)` \u4e3a\u540e\u7f00\u8868\u793a\u6b64\u6761\u914d\u7f6e\u6216\u8005\u547d\u4ee4\u4ec5\u9002\u7528`\u63a7\u5236\u8282\u70b9` \u4ee5 `(CPT)` \u4e3a\u540e\u7f00\u8868\u793a\u6b64\u6761\u914d\u7f6e\u6216\u8005\u547d\u4ee4\u4ec5\u9002\u7528`\u8ba1\u7b97\u8282\u70b9` \u4ee5 `(STG)` \u4e3a\u540e\u7f00\u8868\u793a\u6b64\u6761\u914d\u7f6e\u6216\u8005\u547d\u4ee4\u4ec5\u9002\u7528`\u5b58\u50a8\u8282\u70b9` \u9664\u6b64\u4e4b\u5916\u8868\u793a\u6b64\u6761\u914d\u7f6e\u6216\u8005\u547d\u4ee4\u540c\u65f6\u9002\u7528`\u63a7\u5236\u8282\u70b9`\u548c`\u8ba1\u7b97\u8282\u70b9` \u6ce8\u610f \u6d89\u53ca\u5230\u4ee5\u4e0a\u7ea6\u5b9a\u7684\u670d\u52a1\u5982\u4e0b\uff1a Cinder Nova Neutron \u51c6\u5907\u73af\u5883 \u00b6 \u73af\u5883\u914d\u7f6e \u00b6 \u914d\u7f6e 22.03 LTS \u5b98\u65b9yum\u6e90\uff0c\u9700\u8981\u542f\u7528EPOL\u8f6f\u4ef6\u4ed3\u4ee5\u652f\u6301OpenStack yum update yum install openstack-release-wallaby yum clean all && yum makecache \u6ce8\u610f \uff1a\u5982\u679c\u4f60\u7684\u73af\u5883\u7684YUM\u6e90\u6ca1\u6709\u542f\u7528EPOL\uff0c\u9700\u8981\u540c\u65f6\u914d\u7f6eEPOL\uff0c\u786e\u4fddEPOL\u5df2\u914d\u7f6e\uff0c\u5982\u4e0b\u6240\u793a\u3002 vi /etc/yum.repos.d/openEuler.repo [EPOL] name=EPOL baseurl=http://repo.openeuler.org/openEuler-22.03-LTS-SP1/EPOL/main/$basearch/ enabled=1 gpgcheck=1 gpgkey=http://repo.openeuler.org/openEuler-22.03-LTS-SP1/OS/$basearch/RPM-GPG-KEY-openEuler EOF \u4fee\u6539\u4e3b\u673a\u540d\u4ee5\u53ca\u6620\u5c04 \u8bbe\u7f6e\u5404\u4e2a\u8282\u70b9\u7684\u4e3b\u673a\u540d hostnamectl set-hostname controller (CTL) hostnamectl set-hostname compute (CPT) \u5047\u8bbecontroller\u8282\u70b9\u7684IP\u662f 10.0.0.11 ,compute\u8282\u70b9\u7684IP\u662f 10.0.0.12 \uff08\u5982\u679c\u5b58\u5728\u7684\u8bdd\uff09,\u5219\u4e8e /etc/hosts \u65b0\u589e\u5982\u4e0b\uff1a 10.0.0.11 controller 10.0.0.12 compute \u5b89\u88c5 SQL DataBase \u00b6 \u6267\u884c\u5982\u4e0b\u547d\u4ee4\uff0c\u5b89\u88c5\u8f6f\u4ef6\u5305\u3002 yum install mariadb mariadb-server python3-PyMySQL \u6267\u884c\u5982\u4e0b\u547d\u4ee4\uff0c\u521b\u5efa\u5e76\u7f16\u8f91 /etc/my.cnf.d/openstack.cnf \u6587\u4ef6\u3002 vim /etc/my.cnf.d/openstack.cnf [mysqld] bind-address = 10.0.0.11 default-storage-engine = innodb innodb_file_per_table = on max_connections = 4096 collation-server = utf8_general_ci character-set-server = utf8 \u6ce8\u610f \u5176\u4e2d bind-address \u8bbe\u7f6e\u4e3a\u63a7\u5236\u8282\u70b9\u7684\u7ba1\u7406IP\u5730\u5740\u3002 \u542f\u52a8 DataBase \u670d\u52a1\uff0c\u5e76\u4e3a\u5176\u914d\u7f6e\u5f00\u673a\u81ea\u542f\u52a8\uff1a systemctl enable mariadb.service systemctl start mariadb.service 
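A minimal sanity check (a sketch, not part of the original guide), assuming the example controller management address 10.0.0.11 set as bind-address in /etc/my.cnf.d/openstack.cnf above: confirm the MariaDB service is active and listening on that address before continuing.
# verify the service started correctly
systemctl status mariadb.service
# verify MariaDB is listening on port 3306 of the management address
ss -tlnp | grep 3306
# verify a client can connect over the management address (prompts for the root password)
mysql -h 10.0.0.11 -u root -p -e "SELECT VERSION();"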
\u914d\u7f6eDataBase\u7684\u9ed8\u8ba4\u5bc6\u7801\uff08\u53ef\u9009\uff09 mysql_secure_installation \u6ce8\u610f \u6839\u636e\u63d0\u793a\u8fdb\u884c\u5373\u53ef \u5b89\u88c5 RabbitMQ \u00b6 \u6267\u884c\u5982\u4e0b\u547d\u4ee4\uff0c\u5b89\u88c5\u8f6f\u4ef6\u5305\u3002 yum install rabbitmq-server \u542f\u52a8 RabbitMQ \u670d\u52a1\uff0c\u5e76\u4e3a\u5176\u914d\u7f6e\u5f00\u673a\u81ea\u542f\u52a8\u3002 systemctl enable rabbitmq-server.service systemctl start rabbitmq-server.service \u6dfb\u52a0 OpenStack\u7528\u6237\u3002 rabbitmqctl add_user openstack RABBIT_PASS \u6ce8\u610f \u66ff\u6362 RABBIT_PASS \uff0c\u4e3a OpenStack \u7528\u6237\u8bbe\u7f6e\u5bc6\u7801 \u8bbe\u7f6eopenstack\u7528\u6237\u6743\u9650\uff0c\u5141\u8bb8\u8fdb\u884c\u914d\u7f6e\u3001\u5199\u3001\u8bfb\uff1a rabbitmqctl set_permissions openstack \".*\" \".*\" \".*\" \u5b89\u88c5 Memcached \u00b6 \u6267\u884c\u5982\u4e0b\u547d\u4ee4\uff0c\u5b89\u88c5\u4f9d\u8d56\u8f6f\u4ef6\u5305\u3002 yum install memcached python3-memcached \u7f16\u8f91 /etc/sysconfig/memcached \u6587\u4ef6\u3002 vim /etc/sysconfig/memcached OPTIONS=\"-l 127.0.0.1,::1,controller\" \u6267\u884c\u5982\u4e0b\u547d\u4ee4\uff0c\u542f\u52a8 Memcached \u670d\u52a1\uff0c\u5e76\u4e3a\u5176\u914d\u7f6e\u5f00\u673a\u542f\u52a8\u3002 systemctl enable memcached.service systemctl start memcached.service \u6ce8\u610f \u670d\u52a1\u542f\u52a8\u540e\uff0c\u53ef\u4ee5\u901a\u8fc7\u547d\u4ee4 memcached-tool controller stats \u786e\u4fdd\u542f\u52a8\u6b63\u5e38\uff0c\u670d\u52a1\u53ef\u7528\uff0c\u5176\u4e2d\u53ef\u4ee5\u5c06 controller \u66ff\u6362\u4e3a\u63a7\u5236\u8282\u70b9\u7684\u7ba1\u7406IP\u5730\u5740\u3002 \u5b89\u88c5 OpenStack \u00b6 Keystone \u5b89\u88c5 \u00b6 \u521b\u5efa keystone \u6570\u636e\u5e93\u5e76\u6388\u6743\u3002 mysql -u root -p MariaDB [(none)]> CREATE DATABASE keystone; MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \\ IDENTIFIED BY 'KEYSTONE_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \\ IDENTIFIED BY 'KEYSTONE_DBPASS'; MariaDB [(none)]> exit \u6ce8\u610f \u66ff\u6362 KEYSTONE_DBPASS \uff0c\u4e3a Keystone \u6570\u636e\u5e93\u8bbe\u7f6e\u5bc6\u7801 \u5b89\u88c5\u8f6f\u4ef6\u5305\u3002 yum install openstack-keystone httpd mod_wsgi \u914d\u7f6ekeystone\u76f8\u5173\u914d\u7f6e vim /etc/keystone/keystone.conf [database] connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone [token] provider = fernet \u89e3\u91ca [database]\u90e8\u5206\uff0c\u914d\u7f6e\u6570\u636e\u5e93\u5165\u53e3 [token]\u90e8\u5206\uff0c\u914d\u7f6etoken provider \u6ce8\u610f\uff1a \u66ff\u6362 KEYSTONE_DBPASS \u4e3a Keystone \u6570\u636e\u5e93\u7684\u5bc6\u7801 \u540c\u6b65\u6570\u636e\u5e93\u3002 su -s /bin/sh -c \"keystone-manage db_sync\" keystone \u521d\u59cb\u5316Fernet\u5bc6\u94a5\u4ed3\u5e93\u3002 keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone keystone-manage credential_setup --keystone-user keystone --keystone-group keystone \u542f\u52a8\u670d\u52a1\u3002 keystone-manage bootstrap --bootstrap-password ADMIN_PASS \\ --bootstrap-admin-url http://controller:5000/v3/ \\ --bootstrap-internal-url http://controller:5000/v3/ \\ --bootstrap-public-url http://controller:5000/v3/ \\ --bootstrap-region-id RegionOne \u6ce8\u610f \u66ff\u6362 ADMIN_PASS \uff0c\u4e3a admin \u7528\u6237\u8bbe\u7f6e\u5bc6\u7801 \u914d\u7f6eApache HTTP server vim /etc/httpd/conf/httpd.conf ServerName controller ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/ \u89e3\u91ca 
\u914d\u7f6e ServerName \u9879\u5f15\u7528\u63a7\u5236\u8282\u70b9 \u6ce8\u610f \u5982\u679c ServerName \u9879\u4e0d\u5b58\u5728\u5219\u9700\u8981\u521b\u5efa \u542f\u52a8Apache HTTP\u670d\u52a1\u3002 systemctl enable httpd.service systemctl start httpd.service \u521b\u5efa\u73af\u5883\u53d8\u91cf\u914d\u7f6e\u3002 cat << EOF >> ~/.admin-openrc export OS_PROJECT_DOMAIN_NAME=Default export OS_USER_DOMAIN_NAME=Default export OS_PROJECT_NAME=admin export OS_USERNAME=admin export OS_PASSWORD=ADMIN_PASS export OS_AUTH_URL=http://controller:5000/v3 export OS_IDENTITY_API_VERSION=3 export OS_IMAGE_API_VERSION=2 EOF \u6ce8\u610f \u66ff\u6362 ADMIN_PASS \u4e3a admin \u7528\u6237\u7684\u5bc6\u7801 \u4f9d\u6b21\u521b\u5efadomain, projects, users, roles\uff0c\u9700\u8981\u5148\u5b89\u88c5\u597dpython3-openstackclient\uff1a yum install python3-openstackclient \u5bfc\u5165\u73af\u5883\u53d8\u91cf source ~/.admin-openrc \u521b\u5efaproject service \uff0c\u5176\u4e2d domain default \u5728 keystone-manage bootstrap \u65f6\u5df2\u521b\u5efa openstack domain create --description \"An Example Domain\" example openstack project create --domain default --description \"Service Project\" service \u521b\u5efa\uff08non-admin\uff09project myproject \uff0cuser myuser \u548c role myrole \uff0c\u4e3a myproject \u548c myuser \u6dfb\u52a0\u89d2\u8272 myrole openstack project create --domain default --description \"Demo Project\" myproject openstack user create --domain default --password-prompt myuser openstack role create myrole openstack role add --project myproject --user myuser myrole \u9a8c\u8bc1 \u53d6\u6d88\u4e34\u65f6\u73af\u5883\u53d8\u91cfOS_AUTH_URL\u548cOS_PASSWORD\uff1a source ~/.admin-openrc unset OS_AUTH_URL OS_PASSWORD \u4e3aadmin\u7528\u6237\u8bf7\u6c42token\uff1a openstack --os-auth-url http://controller:5000/v3 \\ --os-project-domain-name Default --os-user-domain-name Default \\ --os-project-name admin --os-username admin token issue \u4e3amyuser\u7528\u6237\u8bf7\u6c42token\uff1a openstack --os-auth-url http://controller:5000/v3 \\ --os-project-domain-name Default --os-user-domain-name Default \\ --os-project-name myproject --os-username myuser token issue Glance \u5b89\u88c5 \u00b6 \u521b\u5efa\u6570\u636e\u5e93\u3001\u670d\u52a1\u51ed\u8bc1\u548c API \u7aef\u70b9 \u521b\u5efa\u6570\u636e\u5e93\uff1a mysql -u root -p MariaDB [(none)]> CREATE DATABASE glance; MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \\ IDENTIFIED BY 'GLANCE_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \\ IDENTIFIED BY 'GLANCE_DBPASS'; MariaDB [(none)]> exit \u6ce8\u610f: \u66ff\u6362 GLANCE_DBPASS \uff0c\u4e3a glance \u6570\u636e\u5e93\u8bbe\u7f6e\u5bc6\u7801 \u521b\u5efa\u670d\u52a1\u51ed\u8bc1 source ~/.admin-openrc openstack user create --domain default --password-prompt glance openstack role add --project service --user glance admin openstack service create --name glance --description \"OpenStack Image\" image \u521b\u5efa\u955c\u50cf\u670d\u52a1API\u7aef\u70b9\uff1a openstack endpoint create --region RegionOne image public http://controller:9292 openstack endpoint create --region RegionOne image internal http://controller:9292 openstack endpoint create --region RegionOne image admin http://controller:9292 \u5b89\u88c5\u8f6f\u4ef6\u5305 yum install openstack-glance \u914d\u7f6eglance\u76f8\u5173\u914d\u7f6e\uff1a vim /etc/glance/glance-api.conf [database] connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance [keystone_authtoken] www_authenticate_uri = 
http://controller:5000 auth_url = http://controller:5000 memcached_servers = controller:11211 auth_type = password project_domain_name = Default user_domain_name = Default project_name = service username = glance password = GLANCE_PASS [paste_deploy] flavor = keystone [glance_store] stores = file,http default_store = file filesystem_store_datadir = /var/lib/glance/images/ \u89e3\u91ca: [database]\u90e8\u5206\uff0c\u914d\u7f6e\u6570\u636e\u5e93\u5165\u53e3 [keystone_authtoken] [paste_deploy]\u90e8\u5206\uff0c\u914d\u7f6e\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u5165\u53e3 [glance_store]\u90e8\u5206\uff0c\u914d\u7f6e\u672c\u5730\u6587\u4ef6\u7cfb\u7edf\u5b58\u50a8\u548c\u955c\u50cf\u6587\u4ef6\u7684\u4f4d\u7f6e \u6ce8\u610f \u66ff\u6362 GLANCE_DBPASS \u4e3a glance \u6570\u636e\u5e93\u7684\u5bc6\u7801 \u66ff\u6362 GLANCE_PASS \u4e3a glance \u7528\u6237\u7684\u5bc6\u7801 \u540c\u6b65\u6570\u636e\u5e93\uff1a su -s /bin/sh -c \"glance-manage db_sync\" glance \u542f\u52a8\u670d\u52a1\uff1a systemctl enable openstack-glance-api.service systemctl start openstack-glance-api.service \u9a8c\u8bc1 \u4e0b\u8f7d\u955c\u50cf source ~/.admin-openrc wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img \u6ce8\u610f \u5982\u679c\u60a8\u4f7f\u7528\u7684\u73af\u5883\u662f\u9cb2\u9e4f\u67b6\u6784\uff0c\u8bf7\u4e0b\u8f7daarch64\u7248\u672c\u7684\u955c\u50cf\uff1b\u5df2\u5bf9\u955c\u50cfcirros-0.5.2-aarch64-disk.img\u8fdb\u884c\u6d4b\u8bd5\u3002 \u5411Image\u670d\u52a1\u4e0a\u4f20\u955c\u50cf\uff1a openstack image create --disk-format qcow2 --container-format bare \\ --file cirros-0.4.0-x86_64-disk.img --public cirros \u786e\u8ba4\u955c\u50cf\u4e0a\u4f20\u5e76\u9a8c\u8bc1\u5c5e\u6027\uff1a openstack image list Placement\u5b89\u88c5 \u00b6 \u521b\u5efa\u6570\u636e\u5e93\u3001\u670d\u52a1\u51ed\u8bc1\u548c API \u7aef\u70b9 \u521b\u5efa\u6570\u636e\u5e93\uff1a \u4f5c\u4e3a root \u7528\u6237\u8bbf\u95ee\u6570\u636e\u5e93\uff0c\u521b\u5efa placement \u6570\u636e\u5e93\u5e76\u6388\u6743\u3002 mysql -u root -p MariaDB [(none)]> CREATE DATABASE placement; MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' \\ IDENTIFIED BY 'PLACEMENT_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' \\ IDENTIFIED BY 'PLACEMENT_DBPASS'; MariaDB [(none)]> exit \u6ce8\u610f \u66ff\u6362 PLACEMENT_DBPASS \u4e3a placement \u6570\u636e\u5e93\u8bbe\u7f6e\u5bc6\u7801 source admin-openrc \u6267\u884c\u5982\u4e0b\u547d\u4ee4\uff0c\u521b\u5efa placement \u670d\u52a1\u51ed\u8bc1\u3001\u521b\u5efa placement \u7528\u6237\u4ee5\u53ca\u6dfb\u52a0\u2018admin\u2019\u89d2\u8272\u5230\u7528\u6237\u2018placement\u2019\u3002 \u521b\u5efaPlacement API\u670d\u52a1 openstack user create --domain default --password-prompt placement openstack role add --project service --user placement admin openstack service create --name placement --description \"Placement API\" placement \u521b\u5efaplacement\u670d\u52a1API\u7aef\u70b9\uff1a openstack endpoint create --region RegionOne placement public http://controller:8778 openstack endpoint create --region RegionOne placement internal http://controller:8778 openstack endpoint create --region RegionOne placement admin http://controller:8778 \u5b89\u88c5\u548c\u914d\u7f6e \u5b89\u88c5\u8f6f\u4ef6\u5305\uff1a yum install openstack-placement-api \u914d\u7f6eplacement\uff1a \u7f16\u8f91 /etc/placement/placement.conf \u6587\u4ef6\uff1a \u5728[placement_database]\u90e8\u5206\uff0c\u914d\u7f6e\u6570\u636e\u5e93\u5165\u53e3 \u5728[api] 
[keystone_authtoken]\u90e8\u5206\uff0c\u914d\u7f6e\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u5165\u53e3 # vim /etc/placement/placement.conf [placement_database] # ... connection = mysql+pymysql://placement:PLACEMENT_DBPASS@controller/placement [api] # ... auth_strategy = keystone [keystone_authtoken] # ... auth_url = http://controller:5000/v3 memcached_servers = controller:11211 auth_type = password project_domain_name = Default user_domain_name = Default project_name = service username = placement password = PLACEMENT_PASS \u5176\u4e2d\uff0c\u66ff\u6362 PLACEMENT_DBPASS \u4e3a placement \u6570\u636e\u5e93\u7684\u5bc6\u7801\uff0c\u66ff\u6362 PLACEMENT_PASS \u4e3a placement \u7528\u6237\u7684\u5bc6\u7801\u3002 \u540c\u6b65\u6570\u636e\u5e93\uff1a su -s /bin/sh -c \"placement-manage db sync\" placement \u542f\u52a8httpd\u670d\u52a1\uff1a systemctl restart httpd \u9a8c\u8bc1 \u6267\u884c\u5982\u4e0b\u547d\u4ee4\uff0c\u6267\u884c\u72b6\u6001\u68c0\u67e5\uff1a . admin-openrc placement-status upgrade check \u5b89\u88c5osc-placement\uff0c\u5217\u51fa\u53ef\u7528\u7684\u8d44\u6e90\u7c7b\u522b\u53ca\u7279\u6027\uff1a yum install python3-osc-placement openstack --os-placement-api-version 1.2 resource class list --sort-column name openstack --os-placement-api-version 1.6 trait list --sort-column name Nova \u5b89\u88c5 \u00b6 \u521b\u5efa\u6570\u636e\u5e93\u3001\u670d\u52a1\u51ed\u8bc1\u548c API \u7aef\u70b9 \u521b\u5efa\u6570\u636e\u5e93\uff1a mysql -u root -p (CTL) MariaDB [(none)]> CREATE DATABASE nova_api; MariaDB [(none)]> CREATE DATABASE nova; MariaDB [(none)]> CREATE DATABASE nova_cell0; MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \\ IDENTIFIED BY 'NOVA_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \\ IDENTIFIED BY 'NOVA_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \\ IDENTIFIED BY 'NOVA_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \\ IDENTIFIED BY 'NOVA_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \\ IDENTIFIED BY 'NOVA_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \\ IDENTIFIED BY 'NOVA_DBPASS'; MariaDB [(none)]> exit \u6ce8\u610f \u66ff\u6362NOVA_DBPASS\uff0c\u4e3anova\u6570\u636e\u5e93\u8bbe\u7f6e\u5bc6\u7801 source ~/.admin-openrc (CTL) \u521b\u5efanova\u670d\u52a1\u51ed\u8bc1: openstack user create --domain default --password-prompt nova (CTL) openstack role add --project service --user nova admin (CTL) openstack service create --name nova --description \"OpenStack Compute\" compute (CTL) \u521b\u5efanova API\u7aef\u70b9\uff1a openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1 (CTL) openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1 (CTL) openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1 (CTL) \u5b89\u88c5\u8f6f\u4ef6\u5305 yum install openstack-nova-api openstack-nova-conductor \\ (CTL) openstack-nova-novncproxy openstack-nova-scheduler yum install openstack-nova-compute (CPT) \u6ce8\u610f \u5982\u679c\u4e3aarm64\u7ed3\u6784\uff0c\u8fd8\u9700\u8981\u6267\u884c\u4ee5\u4e0b\u547d\u4ee4 yum install edk2-aarch64 (CPT) \u914d\u7f6enova\u76f8\u5173\u914d\u7f6e vim /etc/nova/nova.conf [DEFAULT] enabled_apis = osapi_compute,metadata transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/ my_ip = 10.0.0.1 use_neutron = true firewall_driver = nova.virt.firewall.NoopFirewallDriver 
compute_driver=libvirt.LibvirtDriver (CPT) instances_path = /var/lib/nova/instances/ (CPT) lock_path = /var/lib/nova/tmp (CPT) logdir = /var/log/nova/ (CPT) [api_database] connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api (CTL) [database] connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova (CTL) [api] auth_strategy = keystone [keystone_authtoken] www_authenticate_uri = http://controller:5000/ auth_url = http://controller:5000/ memcached_servers = controller:11211 auth_type = password project_domain_name = Default user_domain_name = Default project_name = service username = nova password = NOVA_PASS [vnc] enabled = true server_listen = $my_ip server_proxyclient_address = $my_ip novncproxy_base_url = http://controller:6080/vnc_auto.html (CPT) [libvirt] virt_type = qemu (CPT) cpu_mode = custom (CPT) cpu_model = cortex-a72 (CPT) [glance] api_servers = http://controller:9292 [oslo_concurrency] lock_path = /var/lib/nova/tmp (CTL) [placement] region_name = RegionOne project_domain_name = Default project_name = service auth_type = password user_domain_name = Default auth_url = http://controller:5000/v3 username = placement password = PLACEMENT_PASS [neutron] auth_url = http://controller:5000 auth_type = password project_domain_name = default user_domain_name = default region_name = RegionOne project_name = service username = neutron password = NEUTRON_PASS service_metadata_proxy = true (CTL) metadata_proxy_shared_secret = METADATA_SECRET (CTL) \u89e3\u91ca [default]\u90e8\u5206\uff0c\u542f\u7528\u8ba1\u7b97\u548c\u5143\u6570\u636e\u7684API\uff0c\u914d\u7f6eRabbitMQ\u6d88\u606f\u961f\u5217\u5165\u53e3\uff0c\u914d\u7f6emy_ip\uff0c\u542f\u7528\u7f51\u7edc\u670d\u52a1neutron\uff1b [api_database] [database]\u90e8\u5206\uff0c\u914d\u7f6e\u6570\u636e\u5e93\u5165\u53e3\uff1b [api] [keystone_authtoken]\u90e8\u5206\uff0c\u914d\u7f6e\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u5165\u53e3\uff1b [vnc]\u90e8\u5206\uff0c\u542f\u7528\u5e76\u914d\u7f6e\u8fdc\u7a0b\u63a7\u5236\u53f0\u5165\u53e3\uff1b [glance]\u90e8\u5206\uff0c\u914d\u7f6e\u955c\u50cf\u670d\u52a1API\u7684\u5730\u5740\uff1b [oslo_concurrency]\u90e8\u5206\uff0c\u914d\u7f6elock path\uff1b [placement]\u90e8\u5206\uff0c\u914d\u7f6eplacement\u670d\u52a1\u7684\u5165\u53e3\u3002 \u6ce8\u610f \u66ff\u6362 RABBIT_PASS \u4e3a RabbitMQ \u4e2d openstack \u8d26\u6237\u7684\u5bc6\u7801\uff1b \u914d\u7f6e my_ip \u4e3a\u63a7\u5236\u8282\u70b9\u7684\u7ba1\u7406IP\u5730\u5740\uff1b \u66ff\u6362 NOVA_DBPASS \u4e3anova\u6570\u636e\u5e93\u7684\u5bc6\u7801\uff1b \u66ff\u6362 NOVA_PASS \u4e3anova\u7528\u6237\u7684\u5bc6\u7801\uff1b \u66ff\u6362 PLACEMENT_PASS \u4e3aplacement\u7528\u6237\u7684\u5bc6\u7801\uff1b \u66ff\u6362 NEUTRON_PASS \u4e3aneutron\u7528\u6237\u7684\u5bc6\u7801\uff1b \u66ff\u6362 METADATA_SECRET \u4e3a\u5408\u9002\u7684\u5143\u6570\u636e\u4ee3\u7406secret\u3002 \u989d\u5916 \u786e\u5b9a\u662f\u5426\u652f\u6301\u865a\u62df\u673a\u786c\u4ef6\u52a0\u901f\uff08x86\u67b6\u6784\uff09\uff1a egrep -c '(vmx|svm)' /proc/cpuinfo (CPT) \u5982\u679c\u8fd4\u56de\u503c\u4e3a0\u5219\u4e0d\u652f\u6301\u786c\u4ef6\u52a0\u901f\uff0c\u9700\u8981\u914d\u7f6elibvirt\u4f7f\u7528QEMU\u800c\u4e0d\u662fKVM\uff1a vim /etc/nova/nova.conf (CPT) [libvirt] virt_type = qemu \u5982\u679c\u8fd4\u56de\u503c\u4e3a1\u6216\u66f4\u5927\u7684\u503c\uff0c\u5219\u652f\u6301\u786c\u4ef6\u52a0\u901f\uff0c\u4e0d\u9700\u8981\u8fdb\u884c\u989d\u5916\u7684\u914d\u7f6e \u6ce8\u610f \u5982\u679c\u4e3aarm64\u7ed3\u6784\uff0c\u8fd8\u9700\u8981\u6267\u884c\u4ee5\u4e0b\u547d\u4ee4 vim 
/etc/libvirt/qemu.conf nvram = [\"/usr/share/AAVMF/AAVMF_CODE.fd: \\ /usr/share/AAVMF/AAVMF_VARS.fd\", \\ \"/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw: \\ /usr/share/edk2/aarch64/vars-template-pflash.raw\"] vim /etc/qemu/firmware/edk2-aarch64.json { \"description\": \"UEFI firmware for ARM64 virtual machines\", \"interface-types\": [ \"uefi\" ], \"mapping\": { \"device\": \"flash\", \"executable\": { \"filename\": \"/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw\", \"format\": \"raw\" }, \"nvram-template\": { \"filename\": \"/usr/share/edk2/aarch64/vars-template-pflash.raw\", \"format\": \"raw\" } }, \"targets\": [ { \"architecture\": \"aarch64\", \"machines\": [ \"virt-*\" ] } ], \"features\": [ ], \"tags\": [ ] } (CPT) \u540c\u6b65\u6570\u636e\u5e93 \u540c\u6b65nova-api\u6570\u636e\u5e93\uff1a su -s /bin/sh -c \"nova-manage api_db sync\" nova (CTL) \u6ce8\u518ccell0\u6570\u636e\u5e93\uff1a su -s /bin/sh -c \"nova-manage cell_v2 map_cell0\" nova (CTL) \u521b\u5efacell1 cell\uff1a su -s /bin/sh -c \"nova-manage cell_v2 create_cell --name=cell1 --verbose\" nova (CTL) \u540c\u6b65nova\u6570\u636e\u5e93\uff1a su -s /bin/sh -c \"nova-manage db sync\" nova (CTL) \u9a8c\u8bc1cell0\u548ccell1\u6ce8\u518c\u6b63\u786e\uff1a su -s /bin/sh -c \"nova-manage cell_v2 list_cells\" nova (CTL) \u6dfb\u52a0\u8ba1\u7b97\u8282\u70b9\u5230openstack\u96c6\u7fa4 su -s /bin/sh -c \"nova-manage cell_v2 discover_hosts --verbose\" nova (CPT) \u542f\u52a8\u670d\u52a1 systemctl enable \\ (CTL) openstack-nova-api.service \\ openstack-nova-scheduler.service \\ openstack-nova-conductor.service \\ openstack-nova-novncproxy.service systemctl start \\ (CTL) openstack-nova-api.service \\ openstack-nova-scheduler.service \\ openstack-nova-conductor.service \\ openstack-nova-novncproxy.service systemctl enable libvirtd.service openstack-nova-compute.service (CPT) systemctl start libvirtd.service openstack-nova-compute.service (CPT) \u9a8c\u8bc1 source ~/.admin-openrc (CTL) \u5217\u51fa\u670d\u52a1\u7ec4\u4ef6\uff0c\u9a8c\u8bc1\u6bcf\u4e2a\u6d41\u7a0b\u90fd\u6210\u529f\u542f\u52a8\u548c\u6ce8\u518c\uff1a openstack compute service list (CTL) \u5217\u51fa\u8eab\u4efd\u670d\u52a1\u4e2d\u7684API\u7aef\u70b9\uff0c\u9a8c\u8bc1\u4e0e\u8eab\u4efd\u670d\u52a1\u7684\u8fde\u63a5\uff1a openstack catalog list (CTL) \u5217\u51fa\u955c\u50cf\u670d\u52a1\u4e2d\u7684\u955c\u50cf\uff0c\u9a8c\u8bc1\u4e0e\u955c\u50cf\u670d\u52a1\u7684\u8fde\u63a5\uff1a openstack image list (CTL) \u68c0\u67e5cells\u662f\u5426\u8fd0\u4f5c\u6210\u529f\uff0c\u4ee5\u53ca\u5176\u4ed6\u5fc5\u8981\u6761\u4ef6\u662f\u5426\u5df2\u5177\u5907\u3002 nova-status upgrade check (CTL) Neutron \u5b89\u88c5 \u00b6 \u521b\u5efa\u6570\u636e\u5e93\u3001\u670d\u52a1\u51ed\u8bc1\u548c API \u7aef\u70b9 \u521b\u5efa\u6570\u636e\u5e93\uff1a mysql -u root -p (CTL) MariaDB [(none)]> CREATE DATABASE neutron; MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \\ IDENTIFIED BY 'NEUTRON_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \\ IDENTIFIED BY 'NEUTRON_DBPASS'; MariaDB [(none)]> exit \u6ce8\u610f \u66ff\u6362 NEUTRON_DBPASS \u4e3a neutron \u6570\u636e\u5e93\u8bbe\u7f6e\u5bc6\u7801\u3002 source ~/.admin-openrc (CTL) \u521b\u5efaneutron\u670d\u52a1\u51ed\u8bc1 openstack user create --domain default --password-prompt neutron (CTL) openstack role add --project service --user neutron admin (CTL) openstack service create --name neutron --description \"OpenStack Networking\" network (CTL) \u521b\u5efaNeutron\u670d\u52a1API\u7aef\u70b9\uff1a 
openstack endpoint create --region RegionOne network public http://controller:9696 (CTL) openstack endpoint create --region RegionOne network internal http://controller:9696 (CTL) openstack endpoint create --region RegionOne network admin http://controller:9696 (CTL) \u5b89\u88c5\u8f6f\u4ef6\u5305\uff1a yum install openstack-neutron openstack-neutron-linuxbridge ebtables ipset \\ (CTL) openstack-neutron-ml2 yum install openstack-neutron-linuxbridge ebtables ipset (CPT) \u914d\u7f6eneutron\u76f8\u5173\u914d\u7f6e\uff1a \u914d\u7f6e\u4e3b\u4f53\u914d\u7f6e vim /etc/neutron/neutron.conf [database] connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron (CTL) [DEFAULT] core_plugin = ml2 (CTL) service_plugins = router (CTL) allow_overlapping_ips = true (CTL) transport_url = rabbit://openstack:RABBIT_PASS@controller auth_strategy = keystone notify_nova_on_port_status_changes = true (CTL) notify_nova_on_port_data_changes = true (CTL) api_workers = 3 (CTL) [keystone_authtoken] www_authenticate_uri = http://controller:5000 auth_url = http://controller:5000 memcached_servers = controller:11211 auth_type = password project_domain_name = Default user_domain_name = Default project_name = service username = neutron password = NEUTRON_PASS [nova] auth_url = http://controller:5000 (CTL) auth_type = password (CTL) project_domain_name = Default (CTL) user_domain_name = Default (CTL) region_name = RegionOne (CTL) project_name = service (CTL) username = nova (CTL) password = NOVA_PASS (CTL) [oslo_concurrency] lock_path = /var/lib/neutron/tmp \u89e3\u91ca [database]\u90e8\u5206\uff0c\u914d\u7f6e\u6570\u636e\u5e93\u5165\u53e3\uff1b [default]\u90e8\u5206\uff0c\u542f\u7528ml2\u63d2\u4ef6\u548crouter\u63d2\u4ef6\uff0c\u5141\u8bb8ip\u5730\u5740\u91cd\u53e0\uff0c\u914d\u7f6eRabbitMQ\u6d88\u606f\u961f\u5217\u5165\u53e3\uff1b [default] [keystone]\u90e8\u5206\uff0c\u914d\u7f6e\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u5165\u53e3\uff1b [default] [nova]\u90e8\u5206\uff0c\u914d\u7f6e\u7f51\u7edc\u6765\u901a\u77e5\u8ba1\u7b97\u7f51\u7edc\u62d3\u6251\u7684\u53d8\u5316\uff1b [oslo_concurrency]\u90e8\u5206\uff0c\u914d\u7f6elock path\u3002 \u6ce8\u610f \u66ff\u6362 NEUTRON_DBPASS \u4e3a neutron \u6570\u636e\u5e93\u7684\u5bc6\u7801\uff1b \u66ff\u6362 RABBIT_PASS \u4e3a RabbitMQ\u4e2dopenstack \u8d26\u6237\u7684\u5bc6\u7801\uff1b \u66ff\u6362 NEUTRON_PASS \u4e3a neutron \u7528\u6237\u7684\u5bc6\u7801\uff1b \u66ff\u6362 NOVA_PASS \u4e3a nova \u7528\u6237\u7684\u5bc6\u7801\u3002 \u914d\u7f6eML2\u63d2\u4ef6\uff1a vim /etc/neutron/plugins/ml2/ml2_conf.ini [ml2] type_drivers = flat,vlan,vxlan tenant_network_types = vxlan mechanism_drivers = linuxbridge,l2population extension_drivers = port_security [ml2_type_flat] flat_networks = provider [ml2_type_vxlan] vni_ranges = 1:1000 [securitygroup] enable_ipset = true \u521b\u5efa/etc/neutron/plugin.ini\u7684\u7b26\u53f7\u94fe\u63a5 ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini \u6ce8\u610f [ml2]\u90e8\u5206\uff0c\u542f\u7528 flat\u3001vlan\u3001vxlan \u7f51\u7edc\uff0c\u542f\u7528 linuxbridge \u53ca l2population \u673a\u5236\uff0c\u542f\u7528\u7aef\u53e3\u5b89\u5168\u6269\u5c55\u9a71\u52a8\uff1b [ml2_type_flat]\u90e8\u5206\uff0c\u914d\u7f6e flat \u7f51\u7edc\u4e3a provider \u865a\u62df\u7f51\u7edc\uff1b [ml2_type_vxlan]\u90e8\u5206\uff0c\u914d\u7f6e VXLAN \u7f51\u7edc\u6807\u8bc6\u7b26\u8303\u56f4\uff1b [securitygroup]\u90e8\u5206\uff0c\u914d\u7f6e\u5141\u8bb8 ipset\u3002 \u8865\u5145 l2 
\u7684\u5177\u4f53\u914d\u7f6e\u53ef\u4ee5\u6839\u636e\u7528\u6237\u9700\u6c42\u81ea\u884c\u4fee\u6539\uff0c\u672c\u6587\u4f7f\u7528\u7684\u662fprovider network + linuxbridge \u914d\u7f6e Linux bridge \u4ee3\u7406\uff1a vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini [linux_bridge] physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME [vxlan] enable_vxlan = true local_ip = OVERLAY_INTERFACE_IP_ADDRESS l2_population = true [securitygroup] enable_security_group = true firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver \u89e3\u91ca [linux_bridge]\u90e8\u5206\uff0c\u6620\u5c04 provider \u865a\u62df\u7f51\u7edc\u5230\u7269\u7406\u7f51\u7edc\u63a5\u53e3\uff1b [vxlan]\u90e8\u5206\uff0c\u542f\u7528 vxlan \u8986\u76d6\u7f51\u7edc\uff0c\u914d\u7f6e\u5904\u7406\u8986\u76d6\u7f51\u7edc\u7684\u7269\u7406\u7f51\u7edc\u63a5\u53e3 IP \u5730\u5740\uff0c\u542f\u7528 layer-2 population\uff1b [securitygroup]\u90e8\u5206\uff0c\u5141\u8bb8\u5b89\u5168\u7ec4\uff0c\u914d\u7f6e linux bridge iptables \u9632\u706b\u5899\u9a71\u52a8\u3002 \u6ce8\u610f \u66ff\u6362 PROVIDER_INTERFACE_NAME \u4e3a\u7269\u7406\u7f51\u7edc\u63a5\u53e3\uff1b \u66ff\u6362 OVERLAY_INTERFACE_IP_ADDRESS \u4e3a\u63a7\u5236\u8282\u70b9\u7684\u7ba1\u7406IP\u5730\u5740\u3002 \u914d\u7f6eLayer-3\u4ee3\u7406\uff1a vim /etc/neutron/l3_agent.ini (CTL) [DEFAULT] interface_driver = linuxbridge \u89e3\u91ca \u5728[default]\u90e8\u5206\uff0c\u914d\u7f6e\u63a5\u53e3\u9a71\u52a8\u4e3alinuxbridge \u914d\u7f6eDHCP\u4ee3\u7406\uff1a vim /etc/neutron/dhcp_agent.ini (CTL) [DEFAULT] interface_driver = linuxbridge dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq enable_isolated_metadata = true \u89e3\u91ca [default]\u90e8\u5206\uff0c\u914d\u7f6elinuxbridge\u63a5\u53e3\u9a71\u52a8\u3001Dnsmasq DHCP\u9a71\u52a8\uff0c\u542f\u7528\u9694\u79bb\u7684\u5143\u6570\u636e\u3002 \u914d\u7f6emetadata\u4ee3\u7406\uff1a vim /etc/neutron/metadata_agent.ini (CTL) [DEFAULT] nova_metadata_host = controller metadata_proxy_shared_secret = METADATA_SECRET \u89e3\u91ca [default]\u90e8\u5206\uff0c\u914d\u7f6e\u5143\u6570\u636e\u4e3b\u673a\u548cshared secret\u3002 \u6ce8\u610f \u66ff\u6362 METADATA_SECRET \u4e3a\u5408\u9002\u7684\u5143\u6570\u636e\u4ee3\u7406secret\u3002 \u914d\u7f6enova\u76f8\u5173\u914d\u7f6e vim /etc/nova/nova.conf [neutron] auth_url = http://controller:5000 auth_type = password project_domain_name = Default user_domain_name = Default region_name = RegionOne project_name = service username = neutron password = NEUTRON_PASS service_metadata_proxy = true (CTL) metadata_proxy_shared_secret = METADATA_SECRET (CTL) \u89e3\u91ca [neutron]\u90e8\u5206\uff0c\u914d\u7f6e\u8bbf\u95ee\u53c2\u6570\uff0c\u542f\u7528\u5143\u6570\u636e\u4ee3\u7406\uff0c\u914d\u7f6esecret\u3002 \u6ce8\u610f \u66ff\u6362 NEUTRON_PASS \u4e3a neutron \u7528\u6237\u7684\u5bc6\u7801\uff1b \u66ff\u6362 METADATA_SECRET \u4e3a\u5408\u9002\u7684\u5143\u6570\u636e\u4ee3\u7406secret\u3002 \u540c\u6b65\u6570\u636e\u5e93\uff1a su -s /bin/sh -c \"neutron-db-manage --config-file /etc/neutron/neutron.conf \\ --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head\" neutron \u91cd\u542f\u8ba1\u7b97API\u670d\u52a1\uff1a systemctl restart openstack-nova-api.service \u542f\u52a8\u7f51\u7edc\u670d\u52a1 systemctl enable neutron-server.service neutron-linuxbridge-agent.service \\ (CTL) neutron-dhcp-agent.service neutron-metadata-agent.service \\ systemctl enable neutron-l3-agent.service systemctl restart openstack-nova-api.service neutron-server.service (CTL) 
neutron-linuxbridge-agent.service neutron-dhcp-agent.service \\ neutron-metadata-agent.service neutron-l3-agent.service systemctl enable neutron-linuxbridge-agent.service (CPT) systemctl restart neutron-linuxbridge-agent.service openstack-nova-compute.service (CPT) \u9a8c\u8bc1 \u9a8c\u8bc1 neutron \u4ee3\u7406\u542f\u52a8\u6210\u529f\uff1a openstack network agent list Cinder \u5b89\u88c5 \u00b6 \u521b\u5efa\u6570\u636e\u5e93\u3001\u670d\u52a1\u51ed\u8bc1\u548c API \u7aef\u70b9 \u521b\u5efa\u6570\u636e\u5e93\uff1a mysql -u root -p MariaDB [(none)]> CREATE DATABASE cinder; MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \\ IDENTIFIED BY 'CINDER_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \\ IDENTIFIED BY 'CINDER_DBPASS'; MariaDB [(none)]> exit \u6ce8\u610f \u66ff\u6362 CINDER_DBPASS \u4e3acinder\u6570\u636e\u5e93\u8bbe\u7f6e\u5bc6\u7801\u3002 source ~/.admin-openrc \u521b\u5efacinder\u670d\u52a1\u51ed\u8bc1\uff1a openstack user create --domain default --password-prompt cinder openstack role add --project service --user cinder admin openstack service create --name cinderv2 --description \"OpenStack Block Storage\" volumev2 openstack service create --name cinderv3 --description \"OpenStack Block Storage\" volumev3 \u521b\u5efa\u5757\u5b58\u50a8\u670d\u52a1API\u7aef\u70b9\uff1a openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\\(project_id\\)s openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\\(project_id\\)s openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\\(project_id\\)s openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\\(project_id\\)s openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\\(project_id\\)s openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\\(project_id\\)s \u5b89\u88c5\u8f6f\u4ef6\u5305\uff1a yum install openstack-cinder-api openstack-cinder-scheduler (CTL) yum install lvm2 device-mapper-persistent-data scsi-target-utils rpcbind nfs-utils \\ (STG) openstack-cinder-volume openstack-cinder-backup \u51c6\u5907\u5b58\u50a8\u8bbe\u5907\uff0c\u4ee5\u4e0b\u4ec5\u4e3a\u793a\u4f8b\uff1a pvcreate /dev/vdb vgcreate cinder-volumes /dev/vdb vim /etc/lvm/lvm.conf devices { ... 
filter = [ \"a/vdb/\", \"r/.*/\"] \u89e3\u91ca \u5728devices\u90e8\u5206\uff0c\u6dfb\u52a0\u8fc7\u6ee4\u4ee5\u63a5\u53d7/dev/vdb\u8bbe\u5907\u62d2\u7edd\u5176\u4ed6\u8bbe\u5907\u3002 \u51c6\u5907NFS mkdir -p /root/cinder/backup cat << EOF >> /etc/export /root/cinder/backup 192.168.1.0/24(rw,sync,no_root_squash,no_all_squash) EOF \u914d\u7f6ecinder\u76f8\u5173\u914d\u7f6e\uff1a vim /etc/cinder/cinder.conf [DEFAULT] transport_url = rabbit://openstack:RABBIT_PASS@controller auth_strategy = keystone my_ip = 10.0.0.11 enabled_backends = lvm (STG) backup_driver=cinder.backup.drivers.nfs.NFSBackupDriver (STG) backup_share=HOST:PATH (STG) [database] connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder [keystone_authtoken] www_authenticate_uri = http://controller:5000 auth_url = http://controller:5000 memcached_servers = controller:11211 auth_type = password project_domain_name = Default user_domain_name = Default project_name = service username = cinder password = CINDER_PASS [oslo_concurrency] lock_path = /var/lib/cinder/tmp [lvm] volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver (STG) volume_group = cinder-volumes (STG) iscsi_protocol = iscsi (STG) iscsi_helper = tgtadm (STG) \u89e3\u91ca [database]\u90e8\u5206\uff0c\u914d\u7f6e\u6570\u636e\u5e93\u5165\u53e3\uff1b [DEFAULT]\u90e8\u5206\uff0c\u914d\u7f6eRabbitMQ\u6d88\u606f\u961f\u5217\u5165\u53e3\uff0c\u914d\u7f6emy_ip\uff1b [DEFAULT] [keystone_authtoken]\u90e8\u5206\uff0c\u914d\u7f6e\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u5165\u53e3\uff1b [oslo_concurrency]\u90e8\u5206\uff0c\u914d\u7f6elock path\u3002 \u6ce8\u610f \u66ff\u6362 CINDER_DBPASS \u4e3a cinder \u6570\u636e\u5e93\u7684\u5bc6\u7801\uff1b \u66ff\u6362 RABBIT_PASS \u4e3a RabbitMQ \u4e2d openstack \u8d26\u6237\u7684\u5bc6\u7801\uff1b \u914d\u7f6e my_ip \u4e3a\u63a7\u5236\u8282\u70b9\u7684\u7ba1\u7406 IP \u5730\u5740\uff1b \u66ff\u6362 CINDER_PASS \u4e3a cinder \u7528\u6237\u7684\u5bc6\u7801\uff1b \u66ff\u6362 HOST:PATH \u4e3a NFS \u7684HOSTIP\u548c\u5171\u4eab\u8def\u5f84\uff1b \u540c\u6b65\u6570\u636e\u5e93\uff1a su -s /bin/sh -c \"cinder-manage db sync\" cinder (CTL) \u914d\u7f6enova\uff1a vim /etc/nova/nova.conf (CTL) [cinder] os_region_name = RegionOne \u91cd\u542f\u8ba1\u7b97API\u670d\u52a1 systemctl restart openstack-nova-api.service \u542f\u52a8cinder\u670d\u52a1 systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service (CTL) systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service (CTL) systemctl enable rpcbind.service nfs-server.service tgtd.service iscsid.service \\ (STG) openstack-cinder-volume.service \\ openstack-cinder-backup.service systemctl start rpcbind.service nfs-server.service tgtd.service iscsid.service \\ (STG) openstack-cinder-volume.service \\ openstack-cinder-backup.service \u6ce8\u610f \u5f53cinder\u4f7f\u7528tgtadm\u7684\u65b9\u5f0f\u6302\u5377\u7684\u65f6\u5019\uff0c\u8981\u4fee\u6539/etc/tgt/tgtd.conf\uff0c\u5185\u5bb9\u5982\u4e0b\uff0c\u4fdd\u8bc1tgtd\u53ef\u4ee5\u53d1\u73b0cinder-volume\u7684iscsi target\u3002 include /var/lib/cinder/volumes/* \u9a8c\u8bc1 source ~/.admin-openrc openstack volume service list horizon \u5b89\u88c5 \u00b6 \u5b89\u88c5\u8f6f\u4ef6\u5305 yum install openstack-dashboard \u4fee\u6539\u6587\u4ef6 \u4fee\u6539\u53d8\u91cf vim /etc/openstack-dashboard/local_settings OPENSTACK_HOST = \"controller\" ALLOWED_HOSTS = ['*', ] SESSION_ENGINE = 'django.contrib.sessions.backends.cache' CACHES = { 'default': { 'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache', 
'LOCATION': 'controller:11211', } } OPENSTACK_KEYSTONE_URL = \"http://%s:5000/v3\" % OPENSTACK_HOST OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = \"Default\" OPENSTACK_KEYSTONE_DEFAULT_ROLE = \"member\" WEBROOT = '/dashboard' POLICY_FILES_PATH = \"/etc/openstack-dashboard\" OPENSTACK_API_VERSIONS = { \"identity\": 3, \"image\": 2, \"volume\": 3, } \u91cd\u542f httpd \u670d\u52a1 systemctl restart httpd.service memcached.service \u9a8c\u8bc1 \u6253\u5f00\u6d4f\u89c8\u5668\uff0c\u8f93\u5165\u7f51\u5740 http://HOSTIP/dashboard/ \uff0c\u767b\u5f55 horizon\u3002 \u6ce8\u610f \u66ff\u6362HOSTIP\u4e3a\u63a7\u5236\u8282\u70b9\u7ba1\u7406\u5e73\u9762IP\u5730\u5740 Tempest \u5b89\u88c5 \u00b6 Tempest\u662fOpenStack\u7684\u96c6\u6210\u6d4b\u8bd5\u670d\u52a1\uff0c\u5982\u679c\u7528\u6237\u9700\u8981\u5168\u9762\u81ea\u52a8\u5316\u6d4b\u8bd5\u5df2\u5b89\u88c5\u7684OpenStack\u73af\u5883\u7684\u529f\u80fd,\u5219\u63a8\u8350\u4f7f\u7528\u8be5\u7ec4\u4ef6\u3002\u5426\u5219\uff0c\u53ef\u4ee5\u4e0d\u7528\u5b89\u88c5\u3002 \u5b89\u88c5Tempest yum install openstack-tempest \u521d\u59cb\u5316\u76ee\u5f55 tempest init mytest \u4fee\u6539\u914d\u7f6e\u6587\u4ef6\u3002 cd mytest vi etc/tempest.conf tempest.conf\u4e2d\u9700\u8981\u914d\u7f6e\u5f53\u524dOpenStack\u73af\u5883\u7684\u4fe1\u606f\uff0c\u5177\u4f53\u5185\u5bb9\u53ef\u4ee5\u53c2\u8003 \u5b98\u65b9\u793a\u4f8b \u6267\u884c\u6d4b\u8bd5 tempest run \u5b89\u88c5tempest\u6269\u5c55\uff08\u53ef\u9009\uff09 OpenStack\u5404\u4e2a\u670d\u52a1\u672c\u8eab\u4e5f\u63d0\u4f9b\u4e86\u4e00\u4e9btempest\u6d4b\u8bd5\u5305\uff0c\u7528\u6237\u53ef\u4ee5\u5b89\u88c5\u8fd9\u4e9b\u5305\u6765\u4e30\u5bcctempest\u7684\u6d4b\u8bd5\u5185\u5bb9\u3002\u5728Wallaby\u4e2d\uff0c\u6211\u4eec\u63d0\u4f9b\u4e86Cinder\u3001Glance\u3001Keystone\u3001Ironic\u3001Trove\u7684\u6269\u5c55\u6d4b\u8bd5\uff0c\u7528\u6237\u53ef\u4ee5\u6267\u884c\u5982\u4e0b\u547d\u4ee4\u8fdb\u884c\u5b89\u88c5\u4f7f\u7528\uff1a yum install python3-cinder-tempest-plugin python3-glance-tempest-plugin python3-ironic-tempest-plugin python3-keystone-tempest-plugin python3-trove-tempest-plugin Ironic \u5b89\u88c5 \u00b6 Ironic\u662fOpenStack\u7684\u88f8\u91d1\u5c5e\u670d\u52a1\uff0c\u5982\u679c\u7528\u6237\u9700\u8981\u8fdb\u884c\u88f8\u673a\u90e8\u7f72\u5219\u63a8\u8350\u4f7f\u7528\u8be5\u7ec4\u4ef6\u3002\u5426\u5219\uff0c\u53ef\u4ee5\u4e0d\u7528\u5b89\u88c5\u3002 \u8bbe\u7f6e\u6570\u636e\u5e93 \u88f8\u91d1\u5c5e\u670d\u52a1\u5728\u6570\u636e\u5e93\u4e2d\u5b58\u50a8\u4fe1\u606f\uff0c\u521b\u5efa\u4e00\u4e2a ironic \u7528\u6237\u53ef\u4ee5\u8bbf\u95ee\u7684 ironic \u6570\u636e\u5e93\uff0c\u66ff\u6362 IRONIC_DBPASSWORD \u4e3a\u5408\u9002\u7684\u5bc6\u7801 mysql -u root -p MariaDB [(none)]> CREATE DATABASE ironic CHARACTER SET utf8; MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'localhost' \\ IDENTIFIED BY 'IRONIC_DBPASSWORD'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'%' \\ IDENTIFIED BY 'IRONIC_DBPASSWORD'; \u521b\u5efa\u670d\u52a1\u7528\u6237\u8ba4\u8bc1 1\u3001\u521b\u5efaBare Metal\u670d\u52a1\u7528\u6237 openstack user create --password IRONIC_PASSWORD \\ --email ironic@example.com ironic openstack role add --project service --user ironic admin openstack service create --name ironic --description \"Ironic baremetal provisioning service\" baremetal openstack service create --name ironic-inspector --description \"Ironic inspector baremetal provisioning service\" baremetal-introspection openstack user create --password IRONIC_INSPECTOR_PASSWORD 
--email ironic_inspector@example.com ironic_inspector openstack role add --project service --user ironic-inspector admin 2\u3001\u521b\u5efaBare Metal\u670d\u52a1\u8bbf\u95ee\u5165\u53e3 openstack endpoint create --region RegionOne baremetal admin http://$IRONIC_NODE:6385 openstack endpoint create --region RegionOne baremetal public http://$IRONIC_NODE:6385 openstack endpoint create --region RegionOne baremetal internal http://$IRONIC_NODE:6385 openstack endpoint create --region RegionOne baremetal-introspection internal http://172.20.19.13:5050/v1 openstack endpoint create --region RegionOne baremetal-introspection public http://172.20.19.13:5050/v1 openstack endpoint create --region RegionOne baremetal-introspection admin http://172.20.19.13:5050/v1 \u914d\u7f6eironic-api\u670d\u52a1 \u914d\u7f6e\u6587\u4ef6\u8def\u5f84/etc/ironic/ironic.conf 1\u3001\u901a\u8fc7 connection \u9009\u9879\u914d\u7f6e\u6570\u636e\u5e93\u7684\u4f4d\u7f6e\uff0c\u5982\u4e0b\u6240\u793a\uff0c\u66ff\u6362 IRONIC_DBPASSWORD \u4e3a ironic \u7528\u6237\u7684\u5bc6\u7801\uff0c\u66ff\u6362 DB_IP \u4e3aDB\u670d\u52a1\u5668\u6240\u5728\u7684IP\u5730\u5740\uff1a [database] # The SQLAlchemy connection string used to connect to the # database (string value) connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic 2\u3001\u901a\u8fc7\u4ee5\u4e0b\u9009\u9879\u914d\u7f6eironic-api\u670d\u52a1\u4f7f\u7528RabbitMQ\u6d88\u606f\u4ee3\u7406\uff0c\u66ff\u6362 RPC_* \u4e3aRabbitMQ\u7684\u8be6\u7ec6\u5730\u5740\u548c\u51ed\u8bc1 [DEFAULT] # A URL representing the messaging driver to use and its full # configuration. (string value) transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/ \u7528\u6237\u4e5f\u53ef\u81ea\u884c\u4f7f\u7528json-rpc\u65b9\u5f0f\u66ff\u6362rabbitmq 3\u3001\u914d\u7f6eironic-api\u670d\u52a1\u4f7f\u7528\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u7684\u51ed\u8bc1\uff0c\u66ff\u6362 PUBLIC_IDENTITY_IP \u4e3a\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u5668\u7684\u516c\u5171IP\uff0c\u66ff\u6362 PRIVATE_IDENTITY_IP \u4e3a\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u5668\u7684\u79c1\u6709IP\uff0c\u66ff\u6362 IRONIC_PASSWORD \u4e3a\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u4e2d ironic \u7528\u6237\u7684\u5bc6\u7801\uff1a [DEFAULT] # Authentication strategy used by ironic-api: one of # \"keystone\" or \"noauth\". \"noauth\" should not be used in a # production environment because all authentication will be # disabled. (string value) auth_strategy=keystone host = controller memcache_servers = controller:11211 enabled_network_interfaces = flat,noop,neutron default_network_interface = noop transport_url = rabbit://openstack:RABBITPASSWD@controller:5672/ enabled_hardware_types = ipmi enabled_boot_interfaces = pxe enabled_deploy_interfaces = direct default_deploy_interface = direct enabled_inspect_interfaces = inspector enabled_management_interfaces = ipmitool enabled_power_interfaces = ipmitool enabled_rescue_interfaces = no-rescue,agent isolinux_bin = /usr/share/syslinux/isolinux.bin logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s [keystone_authtoken] # Authentication type to load (string value) auth_type=password # Complete public Identity API endpoint (string value) www_authenticate_uri=http://PUBLIC_IDENTITY_IP:5000 # Complete admin Identity API endpoint. (string value) auth_url=http://PRIVATE_IDENTITY_IP:5000 # Service username. (string value) username=ironic # Service account password. 
(string value) password=IRONIC_PASSWORD # Service tenant name. (string value) project_name=service # Domain name containing project (string value) project_domain_name=Default # User's domain name (string value) user_domain_name=Default [agent] deploy_logs_collect = always deploy_logs_local_path = /var/log/ironic/deploy deploy_logs_storage_backend = local image_download_source = http stream_raw_images = false force_raw_images = false verify_ca = False [oslo_concurrency] [oslo_messaging_notifications] transport_url = rabbit://openstack:123456@172.20.19.25:5672/ topics = notifications driver = messagingv2 [oslo_messaging_rabbit] amqp_durable_queues = True rabbit_ha_queues = True [pxe] ipxe_enabled = false pxe_append_params = nofb nomodeset vga=normal coreos.autologin ipa-insecure=1 image_cache_size = 204800 tftp_root=/var/lib/tftpboot/cephfs/ tftp_master_path=/var/lib/tftpboot/cephfs/master_images [dhcp] dhcp_provider = none 4\u3001\u521b\u5efa\u88f8\u91d1\u5c5e\u670d\u52a1\u6570\u636e\u5e93\u8868 ironic-dbsync --config-file /etc/ironic/ironic.conf create_schema 5\u3001\u91cd\u542fironic-api\u670d\u52a1 sudo systemctl restart openstack-ironic-api \u914d\u7f6eironic-conductor\u670d\u52a1 1\u3001\u66ff\u6362 HOST_IP \u4e3aconductor host\u7684IP [DEFAULT] # IP address of this host. If unset, will determine the IP # programmatically. If unable to do so, will use \"127.0.0.1\". # (string value) my_ip=HOST_IP 2\u3001\u914d\u7f6e\u6570\u636e\u5e93\u7684\u4f4d\u7f6e\uff0cironic-conductor\u5e94\u8be5\u4f7f\u7528\u548cironic-api\u76f8\u540c\u7684\u914d\u7f6e\u3002\u66ff\u6362 IRONIC_DBPASSWORD \u4e3a ironic \u7528\u6237\u7684\u5bc6\u7801\uff0c\u66ff\u6362DB_IP\u4e3aDB\u670d\u52a1\u5668\u6240\u5728\u7684IP\u5730\u5740\uff1a [database] # The SQLAlchemy connection string to use to connect to the # database. (string value) connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic 3\u3001\u901a\u8fc7\u4ee5\u4e0b\u9009\u9879\u914d\u7f6eironic-api\u670d\u52a1\u4f7f\u7528RabbitMQ\u6d88\u606f\u4ee3\u7406\uff0cironic-conductor\u5e94\u8be5\u4f7f\u7528\u548cironic-api\u76f8\u540c\u7684\u914d\u7f6e\uff0c\u66ff\u6362 RPC_* \u4e3aRabbitMQ\u7684\u8be6\u7ec6\u5730\u5740\u548c\u51ed\u8bc1 [DEFAULT] # A URL representing the messaging driver to use and its full # configuration. 
(string value) transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/ \u7528\u6237\u4e5f\u53ef\u81ea\u884c\u4f7f\u7528json-rpc\u65b9\u5f0f\u66ff\u6362rabbitmq 4\u3001\u914d\u7f6e\u51ed\u8bc1\u8bbf\u95ee\u5176\u4ed6OpenStack\u670d\u52a1 \u4e3a\u4e86\u4e0e\u5176\u4ed6OpenStack\u670d\u52a1\u8fdb\u884c\u901a\u4fe1\uff0c\u88f8\u91d1\u5c5e\u670d\u52a1\u5728\u8bf7\u6c42\u5176\u4ed6\u670d\u52a1\u65f6\u9700\u8981\u4f7f\u7528\u670d\u52a1\u7528\u6237\u4e0eOpenStack Identity\u670d\u52a1\u8fdb\u884c\u8ba4\u8bc1\u3002\u8fd9\u4e9b\u7528\u6237\u7684\u51ed\u636e\u5fc5\u987b\u5728\u4e0e\u76f8\u5e94\u670d\u52a1\u76f8\u5173\u7684\u6bcf\u4e2a\u914d\u7f6e\u6587\u4ef6\u4e2d\u8fdb\u884c\u914d\u7f6e\u3002 [neutron] - \u8bbf\u95eeOpenStack\u7f51\u7edc\u670d\u52a1 [glance] - \u8bbf\u95eeOpenStack\u955c\u50cf\u670d\u52a1 [swift] - \u8bbf\u95eeOpenStack\u5bf9\u8c61\u5b58\u50a8\u670d\u52a1 [cinder] - \u8bbf\u95eeOpenStack\u5757\u5b58\u50a8\u670d\u52a1 [inspector] - \u8bbf\u95eeOpenStack\u88f8\u91d1\u5c5eintrospection\u670d\u52a1 [service_catalog] - \u4e00\u4e2a\u7279\u6b8a\u9879\u7528\u4e8e\u4fdd\u5b58\u88f8\u91d1\u5c5e\u670d\u52a1\u4f7f\u7528\u7684\u51ed\u8bc1\uff0c\u8be5\u51ed\u8bc1\u7528\u4e8e\u53d1\u73b0\u6ce8\u518c\u5728OpenStack\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u76ee\u5f55\u4e2d\u7684\u81ea\u5df1\u7684API URL\u7aef\u70b9 \u7b80\u5355\u8d77\u89c1\uff0c\u53ef\u4ee5\u5bf9\u6240\u6709\u670d\u52a1\u4f7f\u7528\u540c\u4e00\u4e2a\u670d\u52a1\u7528\u6237\u3002\u4e3a\u4e86\u5411\u540e\u517c\u5bb9\uff0c\u8be5\u7528\u6237\u5e94\u8be5\u548cironic-api\u670d\u52a1\u7684[keystone_authtoken]\u6240\u914d\u7f6e\u7684\u4e3a\u540c\u4e00\u4e2a\u7528\u6237\u3002\u4f46\u8fd9\u4e0d\u662f\u5fc5\u987b\u7684\uff0c\u4e5f\u53ef\u4ee5\u4e3a\u6bcf\u4e2a\u670d\u52a1\u521b\u5efa\u5e76\u914d\u7f6e\u4e0d\u540c\u7684\u670d\u52a1\u7528\u6237\u3002 \u5728\u4e0b\u9762\u7684\u793a\u4f8b\u4e2d\uff0c\u7528\u6237\u8bbf\u95eeOpenStack\u7f51\u7edc\u670d\u52a1\u7684\u8eab\u4efd\u9a8c\u8bc1\u4fe1\u606f\u914d\u7f6e\u4e3a\uff1a \u7f51\u7edc\u670d\u52a1\u90e8\u7f72\u5728\u540d\u4e3aRegionOne\u7684\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u57df\u4e2d\uff0c\u4ec5\u5728\u670d\u52a1\u76ee\u5f55\u4e2d\u6ce8\u518c\u516c\u5171\u7aef\u70b9\u63a5\u53e3 \u8bf7\u6c42\u65f6\u4f7f\u7528\u7279\u5b9a\u7684CA SSL\u8bc1\u4e66\u8fdb\u884cHTTPS\u8fde\u63a5 \u4e0eironic-api\u670d\u52a1\u914d\u7f6e\u76f8\u540c\u7684\u670d\u52a1\u7528\u6237 \u52a8\u6001\u5bc6\u7801\u8ba4\u8bc1\u63d2\u4ef6\u57fa\u4e8e\u5176\u4ed6\u9009\u9879\u53d1\u73b0\u5408\u9002\u7684\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1API\u7248\u672c [neutron] # Authentication type to load (string value) auth_type = password # Authentication URL (string value) auth_url=https://IDENTITY_IP:5000/ # Username (string value) username=ironic # User's password (string value) password=IRONIC_PASSWORD # Project name to scope to (string value) project_name=service # Domain ID containing project (string value) project_domain_id=default # User's domain id (string value) user_domain_id=default # PEM encoded Certificate Authority to use when verifying # HTTPs connections. (string value) cafile=/opt/stack/data/ca-bundle.pem # The default region_name for endpoint URL discovery. (string # value) region_name = RegionOne # List of interfaces, in order of preference, for endpoint # URL. 
```
[neutron]
# Authentication type to load (string value)
auth_type = password
# Authentication URL (string value)
auth_url=https://IDENTITY_IP:5000/
# Username (string value)
username=ironic
# User's password (string value)
password=IRONIC_PASSWORD
# Project name to scope to (string value)
project_name=service
# Domain ID containing project (string value)
project_domain_id=default
# User's domain id (string value)
user_domain_id=default
# PEM encoded Certificate Authority to use when verifying
# HTTPs connections. (string value)
cafile=/opt/stack/data/ca-bundle.pem
# The default region_name for endpoint URL discovery. (string
# value)
region_name = RegionOne
# List of interfaces, in order of preference, for endpoint
# URL. (list value)
valid_interfaces=public
```

By default, to communicate with another service, the Bare Metal service tries to discover a suitable endpoint for that service through the service catalog of the Identity service. If you want to use a different endpoint for a particular service, specify it with the endpoint_override option in the Bare Metal service configuration file:

```
[neutron]
...
endpoint_override =
```

5. Configure the allowed drivers and hardware types. Set the hardware types allowed by the ironic-conductor service with enabled_hardware_types:

```
[DEFAULT]
enabled_hardware_types = ipmi
```

Configure the hardware interfaces:

```
enabled_boot_interfaces = pxe
enabled_deploy_interfaces = direct,iscsi
enabled_inspect_interfaces = inspector
enabled_management_interfaces = ipmitool
enabled_power_interfaces = ipmitool
```

Configure the interface defaults:

```
[DEFAULT]
default_deploy_interface = direct
default_network_interface = neutron
```

If any driver that uses Direct deploy is enabled, the Swift backend of the Image service must be installed and configured. The Ceph Object Gateway (RADOS Gateway) is also supported as an Image service backend.

6. Restart the ironic-conductor service:

```
sudo systemctl restart openstack-ironic-conductor
```

Configure the ironic-inspector service

The configuration file path is /etc/ironic-inspector/inspector.conf.

1. Create the database:

```
# mysql -u root -p
MariaDB [(none)]> CREATE DATABASE ironic_inspector CHARACTER SET utf8;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic_inspector.* TO 'ironic_inspector'@'localhost' \
  IDENTIFIED BY 'IRONIC_INSPECTOR_DBPASSWORD';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic_inspector.* TO 'ironic_inspector'@'%' \
  IDENTIFIED BY 'IRONIC_INSPECTOR_DBPASSWORD';
```

2. Configure the location of the database with the connection option, as shown below. Replace IRONIC_INSPECTOR_DBPASSWORD with the password of the ironic_inspector user and DB_IP with the IP address of the database server:

```
[database]
backend = sqlalchemy
connection = mysql+pymysql://ironic_inspector:IRONIC_INSPECTOR_DBPASSWORD@DB_IP/ironic_inspector
min_pool_size = 100
max_pool_size = 500
pool_timeout = 30
max_retries = 5
max_overflow = 200
db_retry_interval = 2
db_inc_retry_interval = True
db_max_retry_interval = 2
db_max_retries = 5
```

3. Configure the message queue transport URL:

```
[DEFAULT]
transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
```
4. Configure Keystone authentication:

```
[DEFAULT]
auth_strategy = keystone
timeout = 900
rootwrap_config = /etc/ironic-inspector/rootwrap.conf
logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s
log_dir = /var/log/ironic-inspector
state_path = /var/lib/ironic-inspector
use_stderr = False

[ironic]
api_endpoint = http://IRONIC_API_HOST_ADDRESS:6385
auth_type = password
auth_url = http://PUBLIC_IDENTITY_IP:5000
auth_strategy = keystone
ironic_url = http://IRONIC_API_HOST_ADDRESS:6385
os_region = RegionOne
project_name = service
project_domain_name = Default
user_domain_name = Default
username = IRONIC_SERVICE_USER_NAME
password = IRONIC_SERVICE_USER_PASSWORD

[keystone_authtoken]
auth_type = password
auth_url = http://control:5000
www_authenticate_uri = http://control:5000
project_domain_name = default
user_domain_name = default
project_name = service
username = ironic_inspector
password = IRONICPASSWD
region_name = RegionOne
memcache_servers = control:11211
token_cache_time = 300

[processing]
add_ports = active
processing_hooks = $default_processing_hooks,local_link_connection,lldp_basic
ramdisk_logs_dir = /var/log/ironic-inspector/ramdisk
always_store_ramdisk_logs = true
store_data = none
power_off = false

[pxe_filter]
driver = iptables

[capabilities]
boot_mode=True
```

5. Configure the ironic-inspector dnsmasq service:

```
# Configuration file: /etc/ironic-inspector/dnsmasq.conf
port=0
interface=enp3s0                        # replace with the actual interface to listen on
dhcp-range=172.20.19.100,172.20.19.110  # replace with the actual DHCP address range
bind-interfaces
enable-tftp
dhcp-match=set:efi,option:client-arch,7
dhcp-match=set:efi,option:client-arch,9
dhcp-match=aarch64, option:client-arch,11
dhcp-boot=tag:aarch64,grubaa64.efi
dhcp-boot=tag:!aarch64,tag:efi,grubx64.efi
dhcp-boot=tag:!aarch64,tag:!efi,pxelinux.0
tftp-root=/tftpboot                     # replace with the actual tftpboot directory
log-facility=/var/log/dnsmasq.log
```

6. Disable DHCP on the subnet of the ironic provisioning network:

```
openstack subnet set --no-dhcp 72426e89-f552-4dc4-9ac7-c4e131ce7f3c
```

7. Initialize the database of the ironic-inspector service. Run on the controller node:

```
ironic-inspector-dbsync --config-file /etc/ironic-inspector/inspector.conf upgrade
```

8. Start the services:

```
systemctl enable --now openstack-ironic-inspector.service
systemctl enable --now openstack-ironic-inspector-dnsmasq.service
```

Configure the httpd service

Create the httpd root directory that ironic will use and set its owner and group. The directory path must match the path specified by the http_root option in the [deploy] section of /etc/ironic/ironic.conf:

```
mkdir -p /var/lib/ironic/httproot
chown ironic.ironic /var/lib/ironic/httproot
```

Install and configure the httpd service.

Install the httpd service (skip this if it is already installed):

```
yum install httpd -y
```

Create the file /etc/httpd/conf.d/openstack-ironic-httpd.conf with the following content:

```
Listen 8080

<VirtualHost *:8080>
    ServerName ironic.openeuler.com
    ErrorLog "/var/log/httpd/openstack-ironic-httpd-error_log"
    CustomLog "/var/log/httpd/openstack-ironic-httpd-access_log" "%h %l %u %t \"%r\" %>s %b"
    DocumentRoot "/var/lib/ironic/httproot"
    <Directory "/var/lib/ironic/httproot">
        Options Indexes FollowSymLinks
        Require all granted
    </Directory>
    LogLevel warn
    AddDefaultCharset UTF-8
    EnableSendfile on
</VirtualHost>
```

Note that the listening port must match the port specified in the http_url option of the [deploy] section in /etc/ironic/ironic.conf.

Restart the httpd service:

```
systemctl restart httpd
```
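Before building the deploy images, it can be useful to confirm that the Ironic services configured above respond. The following is a minimal sanity check, assuming python3-ironicclient is installed and the admin credentials created during the Keystone installation are available in ~/.admin-openrc:

```shell
source ~/.admin-openrc
# The hardware types enabled above should be listed here
openstack baremetal driver list
# No nodes are enrolled yet, so an empty list is expected
openstack baremetal node list
```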
Building the deploy ramdisk image

The ramdisk image for the Wallaby release can be built with the ironic-python-agent service or the disk-image-builder tool, or with the latest ironic-python-agent-builder from the community. You can also choose another tool of your own.

To use the native Wallaby tools, install the corresponding package:

```shell
yum install openstack-ironic-python-agent
```

or

```shell
yum install diskimage-builder
```

Refer to the official documentation for detailed usage.

The following describes the complete process of building the deploy image used by ironic with ironic-python-agent-builder.

Install ironic-python-agent-builder

1. Install the tool:

```shell
pip install ironic-python-agent-builder
```

2. Modify the Python interpreter in the following files:

```shell
/usr/bin/yum
/usr/libexec/urlgrabber-ext-down
```

3. Install the other required tools:

```shell
yum install git
```

Because `DIB` depends on the `semanage` command, check that the command is available before building the image: `semanage --help`. If the command is not found, install it:

```shell
# First find out which package needs to be installed
[root@localhost ~]# yum provides /usr/sbin/semanage
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirror.vcu.edu
 * extras: mirror.vcu.edu
 * updates: mirror.math.princeton.edu
policycoreutils-python-2.5-34.el7.aarch64 : SELinux policy core python utilities
Repo        : base
Matched from:
Filename    : /usr/sbin/semanage
# Install it
[root@localhost ~]# yum install policycoreutils-python
```

Build the image

For the `arm` architecture, additionally set:

```shell
export ARCH=aarch64
```

Basic usage:

```shell
usage: ironic-python-agent-builder [-h] [-r RELEASE] [-o OUTPUT] [-e ELEMENT]
                                   [-b BRANCH] [-v] [--extra-args EXTRA_ARGS]
                                   distribution

positional arguments:
  distribution          Distribution to use

optional arguments:
  -h, --help            show this help message and exit
  -r RELEASE, --release RELEASE
                        Distribution release to use
  -o OUTPUT, --output OUTPUT
                        Output base file name
  -e ELEMENT, --element ELEMENT
                        Additional DIB element to use
  -b BRANCH, --branch BRANCH
                        If set, override the branch that is used for ironic-
                        python-agent and requirements
  -v, --verbose         Enable verbose logging in diskimage-builder
  --extra-args EXTRA_ARGS
                        Extra arguments to pass to diskimage-builder
```

Example:

```shell
ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky
```

Allow SSH login

Initialize the environment variables, then build the image:

```shell
export DIB_DEV_USER_USERNAME=ipa
export DIB_DEV_USER_PWDLESS_SUDO=yes
export DIB_DEV_USER_PASSWORD='123'
ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky -e selinux-permissive -e devuser
```
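Once a deploy image has been built, it has to be made available to Ironic. The following is only a sketch: it assumes the output file names produced by the command above (/mnt/ironic-agent-ssh.kernel and /mnt/ironic-agent-ssh.initramfs) and a hypothetical node name node-0; adjust both to your environment.

```shell
source ~/.admin-openrc
# Upload the deploy kernel and ramdisk to the Image service
openstack image create deploy-kernel --public \
  --disk-format aki --container-format aki --file /mnt/ironic-agent-ssh.kernel
openstack image create deploy-initrd --public \
  --disk-format ari --container-format ari --file /mnt/ironic-agent-ssh.initramfs
# Reference the two images in the driver_info of an enrolled node
openstack baremetal node set node-0 \
  --driver-info deploy_kernel=<deploy-kernel-image-uuid> \
  --driver-info deploy_ramdisk=<deploy-initrd-image-uuid>
```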
Specify the code repository

Initialize the corresponding environment variables, then build the image:

```shell
# Specify the repository location and version
DIB_REPOLOCATION_ironic_python_agent=git@172.20.2.149:liuzz/ironic-python-agent.git
DIB_REPOREF_ironic_python_agent=origin/develop

# Clone the code directly from gerrit
DIB_REPOLOCATION_ironic_python_agent=https://review.opendev.org/openstack/ironic-python-agent
DIB_REPOREF_ironic_python_agent=refs/changes/43/701043/1
```

Reference: [source-repositories](https://docs.openstack.org/diskimage-builder/latest/elements/source-repositories/README.html).

Specifying the repository location and version has been verified to work.

Notes

The PXE configuration file template in native OpenStack does not support the arm64 architecture; the native OpenStack code has to be modified by yourself.

In the Wallaby release, the community ironic still does not support UEFI PXE boot on arm64. The symptom is that the generated grub.cfg file (usually under /tftpboot/) has the wrong format, so PXE boot fails: the generated configuration uses the x86 UEFI PXE boot commands, whereas on the arm architecture the commands that load the vmlinux and ramdisk images are linux and initrd respectively. You need to modify the code logic that generates grub.cfg yourself.

TLS errors when ironic sends requests to IPA to query command execution status:

In the Wallaby release, both IPA and ironic enable TLS authentication by default when sending requests to each other. Disable it as described in the official documentation:

1. In the ironic configuration file (/etc/ironic/ironic.conf), add ipa-insecure=1 to the following configuration:

```
[agent]
verify_ca = False

[pxe]
pxe_append_params = nofb nomodeset vga=normal coreos.autologin ipa-insecure=1
```

2. In the ramdisk image, add the IPA configuration file /etc/ironic_python_agent/ironic_python_agent.conf and configure TLS as follows (the /etc/ironic_python_agent directory needs to be created first):

```
[DEFAULT]
enable_auto_tls = False
```

Set the permissions:

```
chown -R ipa.ipa /etc/ironic_python_agent/
```
3. Modify the service unit file of the IPA service to add the configuration file option:

```
vim /usr/lib/systemd/system/ironic-python-agent.service
```

```
[Unit]
Description=Ironic Python Agent
After=network-online.target

[Service]
ExecStartPre=/sbin/modprobe vfat
ExecStart=/usr/local/bin/ironic-python-agent --config-file /etc/ironic_python_agent/ironic_python_agent.conf
Restart=always
RestartSec=30s

[Install]
WantedBy=multi-user.target
```

Kolla Installation

Kolla provides production-ready containerized deployment for OpenStack services. Kolla and Kolla-ansible were introduced in openEuler 22.03 LTS.

Installing Kolla is very simple; just install the corresponding RPM packages:

```
yum install openstack-kolla openstack-kolla-ansible
```

After the installation, the kolla-ansible, kolla-build, kolla-genpwd, kolla-mergepwd and other commands are available.
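As a rough illustration of how these commands are typically chained together (a sketch only, not specific to openEuler packaging; the /etc/kolla layout and the sample inventory path are assumptions that may differ on your system):

```shell
# Fill in random passwords for the services listed in /etc/kolla/passwords.yml
kolla-genpwd
# Deploy using the sample all-in-one inventory shipped with kolla-ansible
kolla-ansible -i /usr/share/kolla-ansible/ansible/inventory/all-in-one bootstrap-servers
kolla-ansible -i /usr/share/kolla-ansible/ansible/inventory/all-in-one prechecks
kolla-ansible -i /usr/share/kolla-ansible/ansible/inventory/all-in-one deploy
```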
Trove Installation

Trove is the Database service of OpenStack. It is recommended if you want the database service provided by OpenStack; otherwise it does not need to be installed.

1. Set up the database

The Database service stores information in a database. Create a trove database that the trove user can access, replacing TROVE_DBPASSWORD with a suitable password:

```
mysql -u root -p
MariaDB [(none)]> CREATE DATABASE trove CHARACTER SET utf8;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'localhost' \
  IDENTIFIED BY 'TROVE_DBPASSWORD';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'%' \
  IDENTIFIED BY 'TROVE_DBPASSWORD';
```

2. Create the service user credentials

1) Create the Trove service user:

```
openstack user create --password TROVE_PASSWORD --email trove@example.com trove
openstack role add --project service --user trove admin
openstack service create --name trove --description "Database service" database
```

Explanation: replace TROVE_PASSWORD with the password of the trove user.

2) Create the Database service endpoints:

```
openstack endpoint create --region RegionOne database public http://controller:8779/v1.0/%\(tenant_id\)s
openstack endpoint create --region RegionOne database internal http://controller:8779/v1.0/%\(tenant_id\)s
openstack endpoint create --region RegionOne database admin http://controller:8779/v1.0/%\(tenant_id\)s
```

3. Install and configure the Trove components

1) Install the Trove packages:

```
yum install openstack-trove python-troveclient
```

2) Configure trove.conf:

```
vim /etc/trove/trove.conf

[DEFAULT]
bind_host=TROVE_NODE_IP
log_dir = /var/log/trove
network_driver = trove.network.neutron.NeutronDriver
management_security_groups =
nova_keypair = trove-mgmt
default_datastore = mysql
taskmanager_manager = trove.taskmanager.manager.Manager
trove_api_workers = 5
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
reboot_time_out = 300
usage_timeout = 900
agent_call_high_timeout = 1200
use_syslog = False
debug = True

# Set these if using Neutron Networking
network_driver=trove.network.neutron.NeutronDriver
network_label_regex=.*
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/

[database]
connection = mysql+pymysql://trove:TROVE_DBPASS@controller/trove

[keystone_authtoken]
project_domain_name = Default
project_name = service
user_domain_name = Default
password = trove
username = trove
auth_url = http://controller:5000/v3/
auth_type = password

[service_credentials]
auth_url = http://controller:5000/v3/
region_name = RegionOne
project_name = service
password = trove
project_domain_name = Default
user_domain_name = Default
username = trove

[mariadb]
tcp_ports = 3306,4444,4567,4568

[mysql]
tcp_ports = 3306

[postgresql]
tcp_ports = 5432
```

Explanation:

- bind_host in the [DEFAULT] section is the IP of the node where Trove is deployed
- nova_compute_url and cinder_url are the endpoints created in Keystone for Nova and Cinder
- nova_proxy_XXX is the information of a user that can access the Nova service; the example above uses the admin user
- transport_url is the RabbitMQ connection information; replace RABBIT_PASS with the RabbitMQ password
- connection in the [database] section is the database information created for Trove in MySQL above
- in the Trove user information, replace TROVE_PASS with the actual password of the trove user

3) Configure trove-guestagent.conf:

```
vim /etc/trove/trove-guestagent.conf

[DEFAULT]
log_file = trove-guestagent.log
log_dir = /var/log/trove/
ignore_users = os_admin
control_exchange = trove
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
rpc_backend = rabbit
command_process_timeout = 60
use_syslog = False
debug = True

[service_credentials]
auth_url = http://controller:5000/v3/
region_name = RegionOne
project_name = service
password = TROVE_PASS
project_domain_name = Default
user_domain_name = Default
username = trove

[mysql]
docker_image = your-registry/your-repo/mysql
backup_docker_image = your-registry/your-repo/db-backup-mysql:1.1.0
```

Explanation:

The guestagent is an independent component of Trove. It must be built in advance into the VM image that Trove creates through Nova. After a database instance has been created, the guestagent process starts and reports heartbeats to Trove through the message queue (RabbitMQ), so the RabbitMQ user and password must be configured. Starting from the Victoria release, Trove uses one unified image to run different types of databases; the database service runs in a Docker container inside the guest VM.

- transport_url is the RabbitMQ connection information; replace RABBIT_PASS with the RabbitMQ password
- in the Trove user information, replace TROVE_PASS with the actual password of the trove user

4) Generate the Trove database tables:

```
su -s /bin/sh -c "trove-manage db_sync" trove
```

4. Finish the installation and configuration

Configure the Trove services to start automatically:

```
systemctl enable openstack-trove-api.service \
  openstack-trove-taskmanager.service \
  openstack-trove-conductor.service
```

Start the services:

```
systemctl start openstack-trove-api.service \
  openstack-trove-taskmanager.service \
  openstack-trove-conductor.service
```
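A quick way to confirm that the Database service API answers (a sketch, assuming admin credentials in ~/.admin-openrc and python-troveclient installed as above; the lists may be empty on a fresh install):

```shell
source ~/.admin-openrc
openstack datastore list
openstack database instance list
```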
Swift Installation

Swift provides an elastic, scalable, highly available distributed object storage service, suitable for storing large amounts of unstructured data.

Create the service credentials and API endpoints.

Create the service credentials:

```
# Create the swift user:
openstack user create --domain default --password-prompt swift
# Add the admin role to the swift user:
openstack role add --project service --user swift admin
# Create the swift service entity:
openstack service create --name swift --description "OpenStack Object Storage" object-store
```

Create the swift API endpoints:

```
openstack endpoint create --region RegionOne object-store public http://controller:8080/v1/AUTH_%\(project_id\)s
openstack endpoint create --region RegionOne object-store internal http://controller:8080/v1/AUTH_%\(project_id\)s
openstack endpoint create --region RegionOne object-store admin http://controller:8080/v1
```

Install the packages (CTL):

```
yum install openstack-swift-proxy python3-swiftclient python3-keystoneclient python3-keystonemiddleware memcached
```

Configure the proxy-server. The Swift RPM package already contains a basically usable proxy-server.conf; only the IP address and the swift password in it need to be changed manually.

Note: replace the password with the password you chose for the swift user in the Identity service.

Install and configure the storage nodes (STG)

Install the supporting packages:

```
yum install xfsprogs rsync
```

Format the /dev/vdb and /dev/vdc devices as XFS:

```
mkfs.xfs /dev/vdb
mkfs.xfs /dev/vdc
```

Create the mount point directory structure:

```
mkdir -p /srv/node/vdb
mkdir -p /srv/node/vdc
```

Find the UUIDs of the new partitions:

```
blkid
```

Edit the /etc/fstab file and add the following to it:

```
UUID="" /srv/node/vdb xfs noatime 0 2
UUID="" /srv/node/vdc xfs noatime 0 2
```

Mount the devices:

```
mount /srv/node/vdb
mount /srv/node/vdc
```

Note: if you do not need the fault-tolerance capability, the steps above only need to create one device, and the rsync configuration below can be skipped.

(Optional) Create or edit the /etc/rsyncd.conf file to contain the following:

```
[DEFAULT]
uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
address = MANAGEMENT_INTERFACE_IP_ADDRESS

[account]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/account.lock

[container]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/container.lock

[object]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/object.lock
```

Replace MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network on the storage node.

Start the rsyncd service and configure it to start at system boot:

```
systemctl enable rsyncd.service
systemctl start rsyncd.service
```
Install and configure the components on the storage nodes (STG)

Install the packages:

```
yum install openstack-swift-account openstack-swift-container openstack-swift-object
```

Edit the account-server.conf, container-server.conf and object-server.conf files in the /etc/swift directory, replacing bind_ip with the IP address of the management network on the storage node.

Ensure correct ownership of the mount point directory structure:

```
chown -R swift:swift /srv/node
```

Create the recon directory and ensure it has the correct ownership:

```
mkdir -p /var/cache/swift
chown -R root:swift /var/cache/swift
chmod -R 775 /var/cache/swift
```

Create the account ring (CTL)

Change to the /etc/swift directory:

```
cd /etc/swift
```

Create the base account.builder file:

```
swift-ring-builder account.builder create 10 1 1
```

Add each storage node to the ring:

```
swift-ring-builder account.builder add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6202 --device DEVICE_NAME --weight DEVICE_WEIGHT
```

Replace STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network on the storage node, and DEVICE_NAME with the name of a storage device on the same storage node.

Note: repeat this command for every storage device on every storage node.

Verify the ring contents:

```
swift-ring-builder account.builder
```

Rebalance the ring:

```
swift-ring-builder account.builder rebalance
```

Create the container ring (CTL)

Change to the /etc/swift directory.

Create the base container.builder file:

```
swift-ring-builder container.builder create 10 1 1
```

Add each storage node to the ring:

```
swift-ring-builder container.builder \
  add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6201 \
  --device DEVICE_NAME --weight 100
```

Replace STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network on the storage node, and DEVICE_NAME with the name of a storage device on the same storage node.

Note: repeat this command for every storage device on every storage node.

Verify the ring contents:

```
swift-ring-builder container.builder
```

Rebalance the ring:

```
swift-ring-builder container.builder rebalance
```

Create the object ring (CTL)

Change to the /etc/swift directory.

Create the base object.builder file:

```
swift-ring-builder object.builder create 10 1 1
```

Add each storage node to the ring:

```
swift-ring-builder object.builder \
  add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6200 \
  --device DEVICE_NAME --weight 100
```

Replace STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network on the storage node, and DEVICE_NAME with the name of a storage device on the same storage node.

Note: repeat this command for every storage device on every storage node.

Verify the ring contents:

```
swift-ring-builder object.builder
```
Rebalance the ring:

```
swift-ring-builder object.builder rebalance
```

Distribute the ring configuration files:

Copy the account.ring.gz, container.ring.gz and object.ring.gz files to the /etc/swift directory on every storage node and on any other node running the proxy service.

Finalize the installation

Edit the /etc/swift/swift.conf file:

```
[swift-hash]
swift_hash_path_suffix = test-hash
swift_hash_path_prefix = test-hash

[storage-policy:0]
name = Policy-0
default = yes
```

Replace test-hash with unique values.

Copy the swift.conf file to the /etc/swift directory on every storage node and on any other node running the proxy service.

On all nodes, ensure correct ownership of the configuration directory:

```
chown -R root:swift /etc/swift
```

On the controller node and on any other node running the proxy service, start the Object Storage proxy service and its dependencies, and configure them to start at system boot:

```
systemctl enable openstack-swift-proxy.service memcached.service
systemctl start openstack-swift-proxy.service memcached.service
```

On the storage nodes, start the Object Storage services and configure them to start at system boot:

```
systemctl enable openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service
systemctl start openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service
systemctl enable openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service
systemctl start openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service
systemctl enable openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service
systemctl start openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service
```
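A short end-to-end check of the object store (a sketch, assuming admin credentials in ~/.admin-openrc; the container name and the uploaded file are arbitrary):

```shell
source ~/.admin-openrc
openstack container create test-container
# Upload a small object and list it to exercise the proxy and storage nodes
openstack object create test-container /etc/hosts
openstack object list test-container
```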
Cyborg Installation

Cyborg provides accelerator device support for OpenStack, including GPU, FPGA, ASIC, NP, SoCs, NVMe/NOF SSDs, ODP, DPDK/SPDK and so on.

Initialize the corresponding database:

```
CREATE DATABASE cyborg;
GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'localhost' IDENTIFIED BY 'CYBORG_DBPASS';
GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'%' IDENTIFIED BY 'CYBORG_DBPASS';
```

Create the corresponding Keystone resource objects:

```
$ openstack user create --domain default --password-prompt cyborg
$ openstack role add --project service --user cyborg admin
$ openstack service create --name cyborg --description "Acceleration Service" accelerator
$ openstack endpoint create --region RegionOne \
  accelerator public http://:6666/v1
$ openstack endpoint create --region RegionOne \
  accelerator internal http://:6666/v1
$ openstack endpoint create --region RegionOne \
  accelerator admin http://:6666/v1
```

Install Cyborg:

```
yum install openstack-cyborg
```

Configure Cyborg by modifying /etc/cyborg/cyborg.conf:

```
[DEFAULT]
transport_url = rabbit://%RABBITMQ_USER%:%RABBITMQ_PASSWORD%@%OPENSTACK_HOST_IP%:5672/
use_syslog = False
state_path = /var/lib/cyborg
debug = True

[database]
connection = mysql+pymysql://%DATABASE_USER%:%DATABASE_PASSWORD%@%OPENSTACK_HOST_IP%/cyborg

[service_catalog]
project_domain_id = default
user_domain_id = default
project_name = service
password = PASSWORD
username = cyborg
auth_url = http://%OPENSTACK_HOST_IP%/identity
auth_type = password

[placement]
project_domain_name = Default
project_name = service
user_domain_name = Default
password = PASSWORD
username = placement
auth_url = http://%OPENSTACK_HOST_IP%/identity
auth_type = password

[keystone_authtoken]
memcached_servers = localhost:11211
project_domain_name = Default
project_name = service
user_domain_name = Default
password = PASSWORD
username = cyborg
auth_url = http://%OPENSTACK_HOST_IP%/identity
auth_type = password
```

Change the user names, passwords, IP addresses and other information to match your environment.

Synchronize the database tables:

```
cyborg-dbsync --config-file /etc/cyborg/cyborg.conf upgrade
```

Start the Cyborg services:

```
systemctl enable openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent
systemctl start openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent
```
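To check that cyborg-agent has come up and can report devices, the accelerator commands of the OpenStack client can be used (a sketch, assuming python3-cyborgclient is installed; the list is expected to be empty on hosts without accelerator hardware):

```shell
source ~/.admin-openrc
openstack accelerator device list
```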
Aodh Installation

Create the database:

```
CREATE DATABASE aodh;
GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'localhost' IDENTIFIED BY 'AODH_DBPASS';
GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'%' IDENTIFIED BY 'AODH_DBPASS';
```

Create the corresponding Keystone resource objects:

```
openstack user create --domain default --password-prompt aodh
openstack role add --project service --user aodh admin
openstack service create --name aodh --description "Telemetry" alarming
openstack endpoint create --region RegionOne alarming public http://controller:8042
openstack endpoint create --region RegionOne alarming internal http://controller:8042
openstack endpoint create --region RegionOne alarming admin http://controller:8042
```

Install Aodh:

```
yum install openstack-aodh-api openstack-aodh-evaluator openstack-aodh-notifier openstack-aodh-listener openstack-aodh-expirer python3-aodhclient
```

Note: the python3-pyparsing package that aodh depends on in the openEuler OS repository is not compatible; the OpenStack version has to be installed over it. Use `yum list | grep pyparsing | grep OpenStack | awk '{print $2}'` to get the corresponding version VERSION, then run `yum install -y python3-pyparsing-VERSION` to overwrite it with the compatible pyparsing.

Modify the configuration file:

```
[database]
connection = mysql+pymysql://aodh:AODH_DBPASS@controller/aodh

[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = aodh
password = AODH_PASS

[service_credentials]
auth_type = password
auth_url = http://controller:5000/v3
project_domain_id = default
user_domain_id = default
project_name = service
username = aodh
password = AODH_PASS
interface = internalURL
region_name = RegionOne
```

Initialize the database:

```
aodh-dbsync
```

Start the Aodh services:

```
systemctl enable openstack-aodh-api.service openstack-aodh-evaluator.service openstack-aodh-notifier.service openstack-aodh-listener.service
systemctl start openstack-aodh-api.service openstack-aodh-evaluator.service openstack-aodh-notifier.service openstack-aodh-listener.service
```
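python3-aodhclient, installed above, adds alarm commands to the OpenStack client, so a quick check that the alarming API responds can look like this (an empty list is the expected result on a fresh install):

```shell
source ~/.admin-openrc
openstack alarm list
```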
Gnocchi Installation

Create the database:

```
CREATE DATABASE gnocchi;
GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'localhost' IDENTIFIED BY 'GNOCCHI_DBPASS';
GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'%' IDENTIFIED BY 'GNOCCHI_DBPASS';
```

Create the corresponding Keystone resource objects:

```
openstack user create --domain default --password-prompt gnocchi
openstack role add --project service --user gnocchi admin
openstack service create --name gnocchi --description "Metric Service" metric
openstack endpoint create --region RegionOne metric public http://controller:8041
openstack endpoint create --region RegionOne metric internal http://controller:8041
openstack endpoint create --region RegionOne metric admin http://controller:8041
```

Install Gnocchi:

```
yum install openstack-gnocchi-api openstack-gnocchi-metricd python3-gnocchiclient
```

Modify the configuration file /etc/gnocchi/gnocchi.conf:

```
[api]
auth_mode = keystone
port = 8041
uwsgi_mode = http-socket

[keystone_authtoken]
auth_type = password
auth_url = http://controller:5000/v3
project_domain_name = Default
user_domain_name = Default
project_name = service
username = gnocchi
password = GNOCCHI_PASS
interface = internalURL
region_name = RegionOne

[indexer]
url = mysql+pymysql://gnocchi:GNOCCHI_DBPASS@controller/gnocchi

[storage]
# coordination_url is not required but specifying one will improve
# performance with better workload division across workers.
coordination_url = redis://controller:6379
file_basepath = /var/lib/gnocchi
driver = file
```

Initialize the database:

```
gnocchi-upgrade
```

Start the Gnocchi services:

```
systemctl enable openstack-gnocchi-api.service openstack-gnocchi-metricd.service
systemctl start openstack-gnocchi-api.service openstack-gnocchi-metricd.service
```

Ceilometer Installation

Create the corresponding Keystone resource objects:

```
openstack user create --domain default --password-prompt ceilometer
openstack role add --project service --user ceilometer admin
openstack service create --name ceilometer --description "Telemetry" metering
```

Install Ceilometer:

```
yum install openstack-ceilometer-notification openstack-ceilometer-central
```

Modify the configuration file /etc/ceilometer/pipeline.yaml:

```
publishers:
    # set address of Gnocchi
    # + filter out Gnocchi-related activity meters (Swift driver)
    # + set default archive policy
    - gnocchi://?filter_project=service&archive_policy=low
```

Modify the configuration file /etc/ceilometer/ceilometer.conf:

```
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller

[service_credentials]
auth_type = password
auth_url = http://controller:5000/v3
project_domain_id = default
user_domain_id = default
project_name = service
username = ceilometer
password = CEILOMETER_PASS
interface = internalURL
region_name = RegionOne
```

Initialize the database:

```
ceilometer-upgrade
```

Start the Ceilometer services:

```
systemctl enable openstack-ceilometer-notification.service openstack-ceilometer-central.service
systemctl start openstack-ceilometer-notification.service openstack-ceilometer-central.service
```
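Once ceilometer-central has completed a polling cycle, the measurements should appear as Gnocchi resources. A minimal check, using python3-gnocchiclient installed above (a sketch; resources may take a few minutes to appear):

```shell
source ~/.admin-openrc
# Overall health of the metric daemons and the processing backlog
openstack metric status
# Resources created from Ceilometer samples
openstack metric resource list
```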
Heat Installation

Create the heat database and grant it the proper access rights, replacing HEAT_DBPASS with a suitable password:

```
CREATE DATABASE heat;
GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'localhost' IDENTIFIED BY 'HEAT_DBPASS';
GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'%' IDENTIFIED BY 'HEAT_DBPASS';
```

Create the service credentials: create the heat user and add the admin role to it:

```
openstack user create --domain default --password-prompt heat
openstack role add --project service --user heat admin
```

Create the heat and heat-cfn services and their corresponding API endpoints:

```
openstack service create --name heat --description "Orchestration" orchestration
openstack service create --name heat-cfn --description "Orchestration" cloudformation
openstack endpoint create --region RegionOne orchestration public http://controller:8004/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne orchestration internal http://controller:8004/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne orchestration admin http://controller:8004/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne cloudformation public http://controller:8000/v1
openstack endpoint create --region RegionOne cloudformation internal http://controller:8000/v1
openstack endpoint create --region RegionOne cloudformation admin http://controller:8000/v1
```

Create the additional information required for stack management, including the heat domain with its domain admin user heat_domain_admin, the heat_stack_owner role, and the heat_stack_user role:

```
openstack user create --domain heat --password-prompt heat_domain_admin
openstack role add --domain heat --user-domain heat --user heat_domain_admin admin
openstack role create heat_stack_owner
openstack role create heat_stack_user
```

Install the packages:

```
yum install openstack-heat-api openstack-heat-api-cfn openstack-heat-engine
```

Modify the configuration file /etc/heat/heat.conf:

```
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
heat_metadata_server_url = http://controller:8000
heat_waitcondition_server_url = http://controller:8000/v1/waitcondition
stack_domain_admin = heat_domain_admin
stack_domain_admin_password = HEAT_DOMAIN_PASS
stack_user_domain_name = heat

[database]
connection = mysql+pymysql://heat:HEAT_DBPASS@controller/heat

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = heat
password = HEAT_PASS

[trustee]
auth_type = password
auth_url = http://controller:5000
username = heat
password = HEAT_PASS
user_domain_name = default

[clients_keystone]
auth_uri = http://controller:5000
```

Initialize the heat database tables:

```
su -s /bin/sh -c "heat-manage db_sync" heat
```

Start the services:

```
systemctl enable openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service
systemctl start openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service
```
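To confirm that the orchestration engine accepts templates, an empty stack is enough (a sketch, assuming python3-heatclient is available and admin credentials are loaded; the template path and stack name are arbitrary):

```shell
source ~/.admin-openrc
cat > /tmp/empty-stack.yaml << 'EOF'
heat_template_version: 2018-08-31
description: Minimal template used only to verify the Heat deployment
resources: {}
EOF
openstack stack create -t /tmp/empty-stack.yaml verify-heat
openstack stack list
openstack stack delete --yes verify-heat
```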
Rapid deployment with the OpenStack SIG development tool oos

oos (openEuler OpenStack SIG) is the command line tool provided by the OpenStack SIG. Its oos env family of commands provides ansible scripts for one-click deployment of OpenStack (all in one, or a three-node cluster); you can use these scripts to quickly deploy an OpenStack environment based on openEuler RPM packages. The oos tool supports deploying an OpenStack environment either through a cloud provider (currently only the Huawei Cloud provider is supported) or by managing existing hosts. The following uses deploying an all-in-one OpenStack environment on Huawei Cloud as an example of how to use the oos tool.

Install the oos tool:

```
pip install openstack-sig-tool
```

Configure the information for the Huawei Cloud provider. Open the /usr/local/etc/oos/oos.conf file and change the configuration to the Huawei Cloud resources you own:

```
[huaweicloud]
ak =
sk =
region = ap-southeast-3
root_volume_size = 100
data_volume_size = 100
security_group_name = oos
image_format = openEuler-%%(release)s-%%(arch)s
vpc_name = oos_vpc
subnet1_name = oos_subnet1
subnet2_name = oos_subnet2
```

Configure the OpenStack environment information. Open the /usr/local/etc/oos/oos.conf file and modify the configuration according to the current machine environment and your needs. The content is as follows:

```
[environment]
mysql_root_password = root
mysql_project_password = root
rabbitmq_password = root
project_identity_password = root
enabled_service = keystone,neutron,cinder,placement,nova,glance,horizon,aodh,ceilometer,cyborg,gnocchi,kolla,heat,swift,trove,tempest
neutron_provider_interface_name = br-ex
default_ext_subnet_range = 10.100.100.0/24
default_ext_subnet_gateway = 10.100.100.1
neutron_dataplane_interface_name = eth1
cinder_block_device = vdb
swift_storage_devices = vdc
swift_hash_path_suffix = ash
swift_hash_path_prefix = has
glance_api_workers = 2
cinder_api_workers = 2
nova_api_workers = 2
nova_metadata_api_workers = 2
nova_conductor_workers = 2
nova_scheduler_workers = 2
neutron_api_workers = 2
horizon_allowed_host = *
kolla_openeuler_plugin = false
```

Key configuration options:

| Option | Description |
|:---|:---|
| enabled_service | List of services to install; trim it according to your needs |
| neutron_provider_interface_name | Name of the neutron L3 bridge |
| default_ext_subnet_range | IP range of the neutron private network |
| default_ext_subnet_gateway | Gateway of the neutron private network |
| neutron_dataplane_interface_name | NIC used by neutron; a new NIC is recommended to avoid conflicts with existing NICs and to prevent the all-in-one host from losing connectivity |
| cinder_block_device | Name of the volume device used by cinder |
| swift_storage_devices | Names of the volume devices used by swift |
| kolla_openeuler_plugin | Whether to enable the kolla plugin. If set to True, kolla will support deploying openEuler containers |

Create an openEuler 22.03-LTS-SP1 x86_64 virtual machine on Huawei Cloud for deploying the all-in-one OpenStack:

```
# sshpass is used during `oos env create` to set up password-free access to the target VM
dnf install sshpass
oos env create -r 22.03-lts-sp1 -f small -a x86 -n test-oos all_in_one
```

The detailed parameters can be viewed with the oos env create --help command.

Deploy the OpenStack all-in-one environment:

```
oos env setup test-oos -r wallaby
```

The detailed parameters can be viewed with the oos env setup --help command.

Initialize the tempest environment. If you want to run tempest tests in this environment, run the oos env init command, which automatically creates the OpenStack resources tempest needs:

```
oos env init test-oos
```

After the command completes successfully, a mytest directory is generated in the user's home directory; enter it and you can run the tempest run command.

If the OpenStack environment is deployed by managing existing hosts instead, the overall logic is the same as when using Huawei Cloud above: steps 1, 3, 5 and 6 stay the same, step 2 (configuring the Huawei Cloud provider information) is removed, and step 4 changes from creating a VM on Huawei Cloud to managing an existing host:

```
# sshpass is used during `oos env create` to set up password-free access to the target host
dnf install sshpass
oos env manage -r 22.03-lts-sp1 -i TARGET_MACHINE_IP -p TARGET_MACHINE_PASSWD -n test-oos
```

Replace TARGET_MACHINE_IP with the IP of the target machine and TARGET_MACHINE_PASSWD with its password.
The detailed parameters can be viewed with the oos env manage --help command.

OpenStack-Wallaby Deployment Guide

- Introduction to OpenStack
- Conventions
- Preparing the environment
- Environment configuration
- Installing the SQL DataBase
- Installing RabbitMQ
- Installing Memcached
- Installing OpenStack
- Keystone installation
- Glance installation
- Placement installation
- Nova installation
- Neutron installation
- Cinder installation
- Horizon installation
- Tempest installation
- Ironic installation
- Kolla installation
- Trove installation
- Swift installation
- Cyborg installation
- Aodh installation
- Gnocchi installation
- Ceilometer installation
- Heat installation
- Rapid deployment with the OpenStack SIG development tool oos

Introduction to OpenStack

OpenStack is both a community and a project. It provides an operating platform, or a toolset, for deploying clouds, giving organizations scalable and flexible cloud computing.

As an open source cloud computing management platform, OpenStack combines several main components such as nova, cinder, neutron, glance, keystone and horizon to do the actual work. OpenStack supports almost all types of cloud environments; the project aims to provide a cloud computing management platform that is simple to implement, massively scalable, feature-rich, and standardized. OpenStack delivers an Infrastructure-as-a-Service (IaaS) solution through a set of complementary services, each of which provides an API for integration.

The official repository of openEuler 22.03-LTS-SP1 already supports the OpenStack-Wallaby release. After configuring the yum repository, users can deploy OpenStack by following this document.

Conventions

OpenStack supports multiple deployment forms. This document covers the ALL in One and Distributed deployment modes, with the following conventions:

ALL in One mode: ignore all possible suffixes.

Distributed mode:

- a `(CTL)` suffix means the configuration or command applies only to the `controller node`
- a `(CPT)` suffix means the configuration or command applies only to the `compute node`
- a `(STG)` suffix means the configuration or command applies only to the `storage node`
- otherwise, the configuration or command applies to both the `controller node` and the `compute node`

Note: the services covered by the above conventions are:

- Cinder
- Nova
- Neutron

Preparing the environment

Environment configuration

Configure the official 22.03 LTS yum repository. The EPOL software repository needs to be enabled to support OpenStack:

```
yum update
yum install openstack-release-wallaby
yum clean all && yum makecache
```

Note: if the yum repositories in your environment do not have EPOL enabled, configure EPOL as well and make sure it is present, as shown below:

```
vi /etc/yum.repos.d/openEuler.repo

[EPOL]
name=EPOL
baseurl=http://repo.openeuler.org/openEuler-22.03-LTS-SP1/EPOL/main/$basearch/
enabled=1
gpgcheck=1
gpgkey=http://repo.openeuler.org/openEuler-22.03-LTS-SP1/OS/$basearch/RPM-GPG-KEY-openEuler
```

Modify the host names and mappings.

Set the host name of each node:

```
hostnamectl set-hostname controller    (CTL)
hostnamectl set-hostname compute       (CPT)
```

Assuming the IP of the controller node is 10.0.0.11 and the IP of the compute node (if it exists) is 10.0.0.12, add the following to /etc/hosts:

```
10.0.0.11   controller
10.0.0.12   compute
```

Installing the SQL DataBase

Run the following command to install the packages:

```
yum install mariadb mariadb-server python3-PyMySQL
```

Run the following command to create and edit the /etc/my.cnf.d/openstack.cnf file:

```
vim /etc/my.cnf.d/openstack.cnf

[mysqld]
bind-address = 10.0.0.11
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
```

Note: set bind-address to the management IP address of the controller node.

Start the DataBase service and configure it to start at boot:

```
systemctl enable mariadb.service
systemctl start mariadb.service
```

Configure the default password of the DataBase (optional):

```
mysql_secure_installation
```

Note: just follow the prompts.

Installing RabbitMQ

Run the following command to install the packages:

```
yum install rabbitmq-server
```

Start the RabbitMQ service and configure it to start at boot:

```
systemctl enable rabbitmq-server.service
systemctl start rabbitmq-server.service
```

Add the OpenStack user:

```
rabbitmqctl add_user openstack RABBIT_PASS
```

Note: replace RABBIT_PASS to set a password for the OpenStack user.

Set the permissions of the openstack user to allow configuration, write and read access:

```
rabbitmqctl set_permissions openstack ".*" ".*" ".*"
```

Installing Memcached

Run the following command to install the dependency packages:

```
yum install memcached python3-memcached
```

Edit the /etc/sysconfig/memcached file:

```
vim /etc/sysconfig/memcached

OPTIONS="-l 127.0.0.1,::1,controller"
```
Run the following commands to start the Memcached service and configure it to start at boot:

```
systemctl enable memcached.service
systemctl start memcached.service
```

Note: after the service has started, you can run `memcached-tool controller stats` to make sure it started correctly and is available; controller can be replaced with the management IP address of the controller node.

Installing OpenStack

Keystone Installation

Create the keystone database and grant privileges:

```
mysql -u root -p
MariaDB [(none)]> CREATE DATABASE keystone;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
  IDENTIFIED BY 'KEYSTONE_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
  IDENTIFIED BY 'KEYSTONE_DBPASS';
MariaDB [(none)]> exit
```

Note: replace KEYSTONE_DBPASS to set a password for the Keystone database.

Install the packages:

```
yum install openstack-keystone httpd mod_wsgi
```

Configure keystone:

```
vim /etc/keystone/keystone.conf

[database]
connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone

[token]
provider = fernet
```

Explanation:

- the [database] section configures the database entry
- the [token] section configures the token provider

Note: replace KEYSTONE_DBPASS with the password of the Keystone database.

Synchronize the database:

```
su -s /bin/sh -c "keystone-manage db_sync" keystone
```

Initialize the Fernet key repositories:

```
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
```

Bootstrap the service:

```
keystone-manage bootstrap --bootstrap-password ADMIN_PASS \
  --bootstrap-admin-url http://controller:5000/v3/ \
  --bootstrap-internal-url http://controller:5000/v3/ \
  --bootstrap-public-url http://controller:5000/v3/ \
  --bootstrap-region-id RegionOne
```

Note: replace ADMIN_PASS to set a password for the admin user.

Configure the Apache HTTP server:

```
vim /etc/httpd/conf/httpd.conf

ServerName controller
```

```
ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
```

Explanation: set the ServerName entry to refer to the controller node.

Note: if the ServerName entry does not exist, it needs to be created.

Start the Apache HTTP service:

```
systemctl enable httpd.service
systemctl start httpd.service
```

Create the environment variable configuration:

```
cat << EOF >> ~/.admin-openrc
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
EOF
```

Note: replace ADMIN_PASS with the password of the admin user.

Create the domain, projects, users and roles in turn. python3-openstackclient needs to be installed first:

```
yum install python3-openstackclient
```

Import the environment variables:

```
source ~/.admin-openrc
```
\uff0c\u5176\u4e2d domain default \u5728 keystone-manage bootstrap \u65f6\u5df2\u521b\u5efa openstack domain create --description \"An Example Domain\" example openstack project create --domain default --description \"Service Project\" service \u521b\u5efa\uff08non-admin\uff09project myproject \uff0cuser myuser \u548c role myrole \uff0c\u4e3a myproject \u548c myuser \u6dfb\u52a0\u89d2\u8272 myrole openstack project create --domain default --description \"Demo Project\" myproject openstack user create --domain default --password-prompt myuser openstack role create myrole openstack role add --project myproject --user myuser myrole \u9a8c\u8bc1 \u53d6\u6d88\u4e34\u65f6\u73af\u5883\u53d8\u91cfOS_AUTH_URL\u548cOS_PASSWORD\uff1a source ~/.admin-openrc unset OS_AUTH_URL OS_PASSWORD \u4e3aadmin\u7528\u6237\u8bf7\u6c42token\uff1a openstack --os-auth-url http://controller:5000/v3 \\ --os-project-domain-name Default --os-user-domain-name Default \\ --os-project-name admin --os-username admin token issue \u4e3amyuser\u7528\u6237\u8bf7\u6c42token\uff1a openstack --os-auth-url http://controller:5000/v3 \\ --os-project-domain-name Default --os-user-domain-name Default \\ --os-project-name myproject --os-username myuser token issue","title":"Keystone \u5b89\u88c5"},{"location":"install/openEuler-22.03-LTS-SP1/OpenStack-wallaby/#glance","text":"\u521b\u5efa\u6570\u636e\u5e93\u3001\u670d\u52a1\u51ed\u8bc1\u548c API \u7aef\u70b9 \u521b\u5efa\u6570\u636e\u5e93\uff1a mysql -u root -p MariaDB [(none)]> CREATE DATABASE glance; MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \\ IDENTIFIED BY 'GLANCE_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \\ IDENTIFIED BY 'GLANCE_DBPASS'; MariaDB [(none)]> exit \u6ce8\u610f: \u66ff\u6362 GLANCE_DBPASS \uff0c\u4e3a glance \u6570\u636e\u5e93\u8bbe\u7f6e\u5bc6\u7801 \u521b\u5efa\u670d\u52a1\u51ed\u8bc1 source ~/.admin-openrc openstack user create --domain default --password-prompt glance openstack role add --project service --user glance admin openstack service create --name glance --description \"OpenStack Image\" image \u521b\u5efa\u955c\u50cf\u670d\u52a1API\u7aef\u70b9\uff1a openstack endpoint create --region RegionOne image public http://controller:9292 openstack endpoint create --region RegionOne image internal http://controller:9292 openstack endpoint create --region RegionOne image admin http://controller:9292 \u5b89\u88c5\u8f6f\u4ef6\u5305 yum install openstack-glance \u914d\u7f6eglance\u76f8\u5173\u914d\u7f6e\uff1a vim /etc/glance/glance-api.conf [database] connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance [keystone_authtoken] www_authenticate_uri = http://controller:5000 auth_url = http://controller:5000 memcached_servers = controller:11211 auth_type = password project_domain_name = Default user_domain_name = Default project_name = service username = glance password = GLANCE_PASS [paste_deploy] flavor = keystone [glance_store] stores = file,http default_store = file filesystem_store_datadir = /var/lib/glance/images/ \u89e3\u91ca: [database]\u90e8\u5206\uff0c\u914d\u7f6e\u6570\u636e\u5e93\u5165\u53e3 [keystone_authtoken] [paste_deploy]\u90e8\u5206\uff0c\u914d\u7f6e\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u5165\u53e3 [glance_store]\u90e8\u5206\uff0c\u914d\u7f6e\u672c\u5730\u6587\u4ef6\u7cfb\u7edf\u5b58\u50a8\u548c\u955c\u50cf\u6587\u4ef6\u7684\u4f4d\u7f6e \u6ce8\u610f \u66ff\u6362 GLANCE_DBPASS \u4e3a glance \u6570\u636e\u5e93\u7684\u5bc6\u7801 \u66ff\u6362 GLANCE_PASS \u4e3a glance 
\u7528\u6237\u7684\u5bc6\u7801 \u540c\u6b65\u6570\u636e\u5e93\uff1a su -s /bin/sh -c \"glance-manage db_sync\" glance \u542f\u52a8\u670d\u52a1\uff1a systemctl enable openstack-glance-api.service systemctl start openstack-glance-api.service \u9a8c\u8bc1 \u4e0b\u8f7d\u955c\u50cf source ~/.admin-openrc wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img \u6ce8\u610f \u5982\u679c\u60a8\u4f7f\u7528\u7684\u73af\u5883\u662f\u9cb2\u9e4f\u67b6\u6784\uff0c\u8bf7\u4e0b\u8f7daarch64\u7248\u672c\u7684\u955c\u50cf\uff1b\u5df2\u5bf9\u955c\u50cfcirros-0.5.2-aarch64-disk.img\u8fdb\u884c\u6d4b\u8bd5\u3002 \u5411Image\u670d\u52a1\u4e0a\u4f20\u955c\u50cf\uff1a openstack image create --disk-format qcow2 --container-format bare \\ --file cirros-0.4.0-x86_64-disk.img --public cirros \u786e\u8ba4\u955c\u50cf\u4e0a\u4f20\u5e76\u9a8c\u8bc1\u5c5e\u6027\uff1a openstack image list","title":"Glance \u5b89\u88c5"},{"location":"install/openEuler-22.03-LTS-SP1/OpenStack-wallaby/#placement","text":"\u521b\u5efa\u6570\u636e\u5e93\u3001\u670d\u52a1\u51ed\u8bc1\u548c API \u7aef\u70b9 \u521b\u5efa\u6570\u636e\u5e93\uff1a \u4f5c\u4e3a root \u7528\u6237\u8bbf\u95ee\u6570\u636e\u5e93\uff0c\u521b\u5efa placement \u6570\u636e\u5e93\u5e76\u6388\u6743\u3002 mysql -u root -p MariaDB [(none)]> CREATE DATABASE placement; MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' \\ IDENTIFIED BY 'PLACEMENT_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' \\ IDENTIFIED BY 'PLACEMENT_DBPASS'; MariaDB [(none)]> exit \u6ce8\u610f \u66ff\u6362 PLACEMENT_DBPASS \u4e3a placement \u6570\u636e\u5e93\u8bbe\u7f6e\u5bc6\u7801 source admin-openrc \u6267\u884c\u5982\u4e0b\u547d\u4ee4\uff0c\u521b\u5efa placement \u670d\u52a1\u51ed\u8bc1\u3001\u521b\u5efa placement \u7528\u6237\u4ee5\u53ca\u6dfb\u52a0\u2018admin\u2019\u89d2\u8272\u5230\u7528\u6237\u2018placement\u2019\u3002 \u521b\u5efaPlacement API\u670d\u52a1 openstack user create --domain default --password-prompt placement openstack role add --project service --user placement admin openstack service create --name placement --description \"Placement API\" placement \u521b\u5efaplacement\u670d\u52a1API\u7aef\u70b9\uff1a openstack endpoint create --region RegionOne placement public http://controller:8778 openstack endpoint create --region RegionOne placement internal http://controller:8778 openstack endpoint create --region RegionOne placement admin http://controller:8778 \u5b89\u88c5\u548c\u914d\u7f6e \u5b89\u88c5\u8f6f\u4ef6\u5305\uff1a yum install openstack-placement-api \u914d\u7f6eplacement\uff1a \u7f16\u8f91 /etc/placement/placement.conf \u6587\u4ef6\uff1a \u5728[placement_database]\u90e8\u5206\uff0c\u914d\u7f6e\u6570\u636e\u5e93\u5165\u53e3 \u5728[api] [keystone_authtoken]\u90e8\u5206\uff0c\u914d\u7f6e\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u5165\u53e3 # vim /etc/placement/placement.conf [placement_database] # ... connection = mysql+pymysql://placement:PLACEMENT_DBPASS@controller/placement [api] # ... auth_strategy = keystone [keystone_authtoken] # ... 
auth_url = http://controller:5000/v3 memcached_servers = controller:11211 auth_type = password project_domain_name = Default user_domain_name = Default project_name = service username = placement password = PLACEMENT_PASS \u5176\u4e2d\uff0c\u66ff\u6362 PLACEMENT_DBPASS \u4e3a placement \u6570\u636e\u5e93\u7684\u5bc6\u7801\uff0c\u66ff\u6362 PLACEMENT_PASS \u4e3a placement \u7528\u6237\u7684\u5bc6\u7801\u3002 \u540c\u6b65\u6570\u636e\u5e93\uff1a su -s /bin/sh -c \"placement-manage db sync\" placement \u542f\u52a8httpd\u670d\u52a1\uff1a systemctl restart httpd \u9a8c\u8bc1 \u6267\u884c\u5982\u4e0b\u547d\u4ee4\uff0c\u6267\u884c\u72b6\u6001\u68c0\u67e5\uff1a . admin-openrc placement-status upgrade check \u5b89\u88c5osc-placement\uff0c\u5217\u51fa\u53ef\u7528\u7684\u8d44\u6e90\u7c7b\u522b\u53ca\u7279\u6027\uff1a yum install python3-osc-placement openstack --os-placement-api-version 1.2 resource class list --sort-column name openstack --os-placement-api-version 1.6 trait list --sort-column name","title":"Placement\u5b89\u88c5"},{"location":"install/openEuler-22.03-LTS-SP1/OpenStack-wallaby/#nova","text":"\u521b\u5efa\u6570\u636e\u5e93\u3001\u670d\u52a1\u51ed\u8bc1\u548c API \u7aef\u70b9 \u521b\u5efa\u6570\u636e\u5e93\uff1a mysql -u root -p (CTL) MariaDB [(none)]> CREATE DATABASE nova_api; MariaDB [(none)]> CREATE DATABASE nova; MariaDB [(none)]> CREATE DATABASE nova_cell0; MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \\ IDENTIFIED BY 'NOVA_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \\ IDENTIFIED BY 'NOVA_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \\ IDENTIFIED BY 'NOVA_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \\ IDENTIFIED BY 'NOVA_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \\ IDENTIFIED BY 'NOVA_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \\ IDENTIFIED BY 'NOVA_DBPASS'; MariaDB [(none)]> exit \u6ce8\u610f \u66ff\u6362NOVA_DBPASS\uff0c\u4e3anova\u6570\u636e\u5e93\u8bbe\u7f6e\u5bc6\u7801 source ~/.admin-openrc (CTL) \u521b\u5efanova\u670d\u52a1\u51ed\u8bc1: openstack user create --domain default --password-prompt nova (CTL) openstack role add --project service --user nova admin (CTL) openstack service create --name nova --description \"OpenStack Compute\" compute (CTL) \u521b\u5efanova API\u7aef\u70b9\uff1a openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1 (CTL) openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1 (CTL) openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1 (CTL) \u5b89\u88c5\u8f6f\u4ef6\u5305 yum install openstack-nova-api openstack-nova-conductor \\ (CTL) openstack-nova-novncproxy openstack-nova-scheduler yum install openstack-nova-compute (CPT) \u6ce8\u610f \u5982\u679c\u4e3aarm64\u7ed3\u6784\uff0c\u8fd8\u9700\u8981\u6267\u884c\u4ee5\u4e0b\u547d\u4ee4 yum install edk2-aarch64 (CPT) \u914d\u7f6enova\u76f8\u5173\u914d\u7f6e vim /etc/nova/nova.conf [DEFAULT] enabled_apis = osapi_compute,metadata transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/ my_ip = 10.0.0.1 use_neutron = true firewall_driver = nova.virt.firewall.NoopFirewallDriver compute_driver=libvirt.LibvirtDriver (CPT) instances_path = /var/lib/nova/instances/ (CPT) lock_path = /var/lib/nova/tmp (CPT) logdir = /var/log/nova/ (CPT) [api_database] connection = 
mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api (CTL) [database] connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova (CTL) [api] auth_strategy = keystone [keystone_authtoken] www_authenticate_uri = http://controller:5000/ auth_url = http://controller:5000/ memcached_servers = controller:11211 auth_type = password project_domain_name = Default user_domain_name = Default project_name = service username = nova password = NOVA_PASS [vnc] enabled = true server_listen = $my_ip server_proxyclient_address = $my_ip novncproxy_base_url = http://controller:6080/vnc_auto.html (CPT) [libvirt] virt_type = qemu (CPT) cpu_mode = custom (CPT) cpu_model = cortex-a72 (CPT) [glance] api_servers = http://controller:9292 [oslo_concurrency] lock_path = /var/lib/nova/tmp (CTL) [placement] region_name = RegionOne project_domain_name = Default project_name = service auth_type = password user_domain_name = Default auth_url = http://controller:5000/v3 username = placement password = PLACEMENT_PASS [neutron] auth_url = http://controller:5000 auth_type = password project_domain_name = default user_domain_name = default region_name = RegionOne project_name = service username = neutron password = NEUTRON_PASS service_metadata_proxy = true (CTL) metadata_proxy_shared_secret = METADATA_SECRET (CTL) \u89e3\u91ca [default]\u90e8\u5206\uff0c\u542f\u7528\u8ba1\u7b97\u548c\u5143\u6570\u636e\u7684API\uff0c\u914d\u7f6eRabbitMQ\u6d88\u606f\u961f\u5217\u5165\u53e3\uff0c\u914d\u7f6emy_ip\uff0c\u542f\u7528\u7f51\u7edc\u670d\u52a1neutron\uff1b [api_database] [database]\u90e8\u5206\uff0c\u914d\u7f6e\u6570\u636e\u5e93\u5165\u53e3\uff1b [api] [keystone_authtoken]\u90e8\u5206\uff0c\u914d\u7f6e\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u5165\u53e3\uff1b [vnc]\u90e8\u5206\uff0c\u542f\u7528\u5e76\u914d\u7f6e\u8fdc\u7a0b\u63a7\u5236\u53f0\u5165\u53e3\uff1b [glance]\u90e8\u5206\uff0c\u914d\u7f6e\u955c\u50cf\u670d\u52a1API\u7684\u5730\u5740\uff1b [oslo_concurrency]\u90e8\u5206\uff0c\u914d\u7f6elock path\uff1b [placement]\u90e8\u5206\uff0c\u914d\u7f6eplacement\u670d\u52a1\u7684\u5165\u53e3\u3002 \u6ce8\u610f \u66ff\u6362 RABBIT_PASS \u4e3a RabbitMQ \u4e2d openstack \u8d26\u6237\u7684\u5bc6\u7801\uff1b \u914d\u7f6e my_ip \u4e3a\u63a7\u5236\u8282\u70b9\u7684\u7ba1\u7406IP\u5730\u5740\uff1b \u66ff\u6362 NOVA_DBPASS \u4e3anova\u6570\u636e\u5e93\u7684\u5bc6\u7801\uff1b \u66ff\u6362 NOVA_PASS \u4e3anova\u7528\u6237\u7684\u5bc6\u7801\uff1b \u66ff\u6362 PLACEMENT_PASS \u4e3aplacement\u7528\u6237\u7684\u5bc6\u7801\uff1b \u66ff\u6362 NEUTRON_PASS \u4e3aneutron\u7528\u6237\u7684\u5bc6\u7801\uff1b \u66ff\u6362 METADATA_SECRET \u4e3a\u5408\u9002\u7684\u5143\u6570\u636e\u4ee3\u7406secret\u3002 \u989d\u5916 \u786e\u5b9a\u662f\u5426\u652f\u6301\u865a\u62df\u673a\u786c\u4ef6\u52a0\u901f\uff08x86\u67b6\u6784\uff09\uff1a egrep -c '(vmx|svm)' /proc/cpuinfo (CPT) \u5982\u679c\u8fd4\u56de\u503c\u4e3a0\u5219\u4e0d\u652f\u6301\u786c\u4ef6\u52a0\u901f\uff0c\u9700\u8981\u914d\u7f6elibvirt\u4f7f\u7528QEMU\u800c\u4e0d\u662fKVM\uff1a vim /etc/nova/nova.conf (CPT) [libvirt] virt_type = qemu \u5982\u679c\u8fd4\u56de\u503c\u4e3a1\u6216\u66f4\u5927\u7684\u503c\uff0c\u5219\u652f\u6301\u786c\u4ef6\u52a0\u901f\uff0c\u4e0d\u9700\u8981\u8fdb\u884c\u989d\u5916\u7684\u914d\u7f6e \u6ce8\u610f \u5982\u679c\u4e3aarm64\u7ed3\u6784\uff0c\u8fd8\u9700\u8981\u6267\u884c\u4ee5\u4e0b\u547d\u4ee4 vim /etc/libvirt/qemu.conf nvram = [\"/usr/share/AAVMF/AAVMF_CODE.fd: \\ /usr/share/AAVMF/AAVMF_VARS.fd\", \\ \"/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw: \\ 
/usr/share/edk2/aarch64/vars-template-pflash.raw\"] vim /etc/qemu/firmware/edk2-aarch64.json { \"description\": \"UEFI firmware for ARM64 virtual machines\", \"interface-types\": [ \"uefi\" ], \"mapping\": { \"device\": \"flash\", \"executable\": { \"filename\": \"/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw\", \"format\": \"raw\" }, \"nvram-template\": { \"filename\": \"/usr/share/edk2/aarch64/vars-template-pflash.raw\", \"format\": \"raw\" } }, \"targets\": [ { \"architecture\": \"aarch64\", \"machines\": [ \"virt-*\" ] } ], \"features\": [ ], \"tags\": [ ] } (CPT) \u540c\u6b65\u6570\u636e\u5e93 \u540c\u6b65nova-api\u6570\u636e\u5e93\uff1a su -s /bin/sh -c \"nova-manage api_db sync\" nova (CTL) \u6ce8\u518ccell0\u6570\u636e\u5e93\uff1a su -s /bin/sh -c \"nova-manage cell_v2 map_cell0\" nova (CTL) \u521b\u5efacell1 cell\uff1a su -s /bin/sh -c \"nova-manage cell_v2 create_cell --name=cell1 --verbose\" nova (CTL) \u540c\u6b65nova\u6570\u636e\u5e93\uff1a su -s /bin/sh -c \"nova-manage db sync\" nova (CTL) \u9a8c\u8bc1cell0\u548ccell1\u6ce8\u518c\u6b63\u786e\uff1a su -s /bin/sh -c \"nova-manage cell_v2 list_cells\" nova (CTL) \u6dfb\u52a0\u8ba1\u7b97\u8282\u70b9\u5230openstack\u96c6\u7fa4 su -s /bin/sh -c \"nova-manage cell_v2 discover_hosts --verbose\" nova (CPT) \u542f\u52a8\u670d\u52a1 systemctl enable \\ (CTL) openstack-nova-api.service \\ openstack-nova-scheduler.service \\ openstack-nova-conductor.service \\ openstack-nova-novncproxy.service systemctl start \\ (CTL) openstack-nova-api.service \\ openstack-nova-scheduler.service \\ openstack-nova-conductor.service \\ openstack-nova-novncproxy.service systemctl enable libvirtd.service openstack-nova-compute.service (CPT) systemctl start libvirtd.service openstack-nova-compute.service (CPT) \u9a8c\u8bc1 source ~/.admin-openrc (CTL) \u5217\u51fa\u670d\u52a1\u7ec4\u4ef6\uff0c\u9a8c\u8bc1\u6bcf\u4e2a\u6d41\u7a0b\u90fd\u6210\u529f\u542f\u52a8\u548c\u6ce8\u518c\uff1a openstack compute service list (CTL) \u5217\u51fa\u8eab\u4efd\u670d\u52a1\u4e2d\u7684API\u7aef\u70b9\uff0c\u9a8c\u8bc1\u4e0e\u8eab\u4efd\u670d\u52a1\u7684\u8fde\u63a5\uff1a openstack catalog list (CTL) \u5217\u51fa\u955c\u50cf\u670d\u52a1\u4e2d\u7684\u955c\u50cf\uff0c\u9a8c\u8bc1\u4e0e\u955c\u50cf\u670d\u52a1\u7684\u8fde\u63a5\uff1a openstack image list (CTL) \u68c0\u67e5cells\u662f\u5426\u8fd0\u4f5c\u6210\u529f\uff0c\u4ee5\u53ca\u5176\u4ed6\u5fc5\u8981\u6761\u4ef6\u662f\u5426\u5df2\u5177\u5907\u3002 nova-status upgrade check (CTL)","title":"Nova \u5b89\u88c5"},{"location":"install/openEuler-22.03-LTS-SP1/OpenStack-wallaby/#neutron","text":"\u521b\u5efa\u6570\u636e\u5e93\u3001\u670d\u52a1\u51ed\u8bc1\u548c API \u7aef\u70b9 \u521b\u5efa\u6570\u636e\u5e93\uff1a mysql -u root -p (CTL) MariaDB [(none)]> CREATE DATABASE neutron; MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \\ IDENTIFIED BY 'NEUTRON_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \\ IDENTIFIED BY 'NEUTRON_DBPASS'; MariaDB [(none)]> exit \u6ce8\u610f \u66ff\u6362 NEUTRON_DBPASS \u4e3a neutron \u6570\u636e\u5e93\u8bbe\u7f6e\u5bc6\u7801\u3002 source ~/.admin-openrc (CTL) \u521b\u5efaneutron\u670d\u52a1\u51ed\u8bc1 openstack user create --domain default --password-prompt neutron (CTL) openstack role add --project service --user neutron admin (CTL) openstack service create --name neutron --description \"OpenStack Networking\" network (CTL) \u521b\u5efaNeutron\u670d\u52a1API\u7aef\u70b9\uff1a openstack endpoint create --region RegionOne network public 
http://controller:9696 (CTL) openstack endpoint create --region RegionOne network internal http://controller:9696 (CTL) openstack endpoint create --region RegionOne network admin http://controller:9696 (CTL) \u5b89\u88c5\u8f6f\u4ef6\u5305\uff1a yum install openstack-neutron openstack-neutron-linuxbridge ebtables ipset \\ (CTL) openstack-neutron-ml2 yum install openstack-neutron-linuxbridge ebtables ipset (CPT) \u914d\u7f6eneutron\u76f8\u5173\u914d\u7f6e\uff1a \u914d\u7f6e\u4e3b\u4f53\u914d\u7f6e vim /etc/neutron/neutron.conf [database] connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron (CTL) [DEFAULT] core_plugin = ml2 (CTL) service_plugins = router (CTL) allow_overlapping_ips = true (CTL) transport_url = rabbit://openstack:RABBIT_PASS@controller auth_strategy = keystone notify_nova_on_port_status_changes = true (CTL) notify_nova_on_port_data_changes = true (CTL) api_workers = 3 (CTL) [keystone_authtoken] www_authenticate_uri = http://controller:5000 auth_url = http://controller:5000 memcached_servers = controller:11211 auth_type = password project_domain_name = Default user_domain_name = Default project_name = service username = neutron password = NEUTRON_PASS [nova] auth_url = http://controller:5000 (CTL) auth_type = password (CTL) project_domain_name = Default (CTL) user_domain_name = Default (CTL) region_name = RegionOne (CTL) project_name = service (CTL) username = nova (CTL) password = NOVA_PASS (CTL) [oslo_concurrency] lock_path = /var/lib/neutron/tmp \u89e3\u91ca [database]\u90e8\u5206\uff0c\u914d\u7f6e\u6570\u636e\u5e93\u5165\u53e3\uff1b [default]\u90e8\u5206\uff0c\u542f\u7528ml2\u63d2\u4ef6\u548crouter\u63d2\u4ef6\uff0c\u5141\u8bb8ip\u5730\u5740\u91cd\u53e0\uff0c\u914d\u7f6eRabbitMQ\u6d88\u606f\u961f\u5217\u5165\u53e3\uff1b [default] [keystone]\u90e8\u5206\uff0c\u914d\u7f6e\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u5165\u53e3\uff1b [default] [nova]\u90e8\u5206\uff0c\u914d\u7f6e\u7f51\u7edc\u6765\u901a\u77e5\u8ba1\u7b97\u7f51\u7edc\u62d3\u6251\u7684\u53d8\u5316\uff1b [oslo_concurrency]\u90e8\u5206\uff0c\u914d\u7f6elock path\u3002 \u6ce8\u610f \u66ff\u6362 NEUTRON_DBPASS \u4e3a neutron \u6570\u636e\u5e93\u7684\u5bc6\u7801\uff1b \u66ff\u6362 RABBIT_PASS \u4e3a RabbitMQ\u4e2dopenstack \u8d26\u6237\u7684\u5bc6\u7801\uff1b \u66ff\u6362 NEUTRON_PASS \u4e3a neutron \u7528\u6237\u7684\u5bc6\u7801\uff1b \u66ff\u6362 NOVA_PASS \u4e3a nova \u7528\u6237\u7684\u5bc6\u7801\u3002 \u914d\u7f6eML2\u63d2\u4ef6\uff1a vim /etc/neutron/plugins/ml2/ml2_conf.ini [ml2] type_drivers = flat,vlan,vxlan tenant_network_types = vxlan mechanism_drivers = linuxbridge,l2population extension_drivers = port_security [ml2_type_flat] flat_networks = provider [ml2_type_vxlan] vni_ranges = 1:1000 [securitygroup] enable_ipset = true \u521b\u5efa/etc/neutron/plugin.ini\u7684\u7b26\u53f7\u94fe\u63a5 ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini \u6ce8\u610f [ml2]\u90e8\u5206\uff0c\u542f\u7528 flat\u3001vlan\u3001vxlan \u7f51\u7edc\uff0c\u542f\u7528 linuxbridge \u53ca l2population \u673a\u5236\uff0c\u542f\u7528\u7aef\u53e3\u5b89\u5168\u6269\u5c55\u9a71\u52a8\uff1b [ml2_type_flat]\u90e8\u5206\uff0c\u914d\u7f6e flat \u7f51\u7edc\u4e3a provider \u865a\u62df\u7f51\u7edc\uff1b [ml2_type_vxlan]\u90e8\u5206\uff0c\u914d\u7f6e VXLAN \u7f51\u7edc\u6807\u8bc6\u7b26\u8303\u56f4\uff1b [securitygroup]\u90e8\u5206\uff0c\u914d\u7f6e\u5141\u8bb8 ipset\u3002 \u8865\u5145 l2 
\u7684\u5177\u4f53\u914d\u7f6e\u53ef\u4ee5\u6839\u636e\u7528\u6237\u9700\u6c42\u81ea\u884c\u4fee\u6539\uff0c\u672c\u6587\u4f7f\u7528\u7684\u662fprovider network + linuxbridge \u914d\u7f6e Linux bridge \u4ee3\u7406\uff1a vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini [linux_bridge] physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME [vxlan] enable_vxlan = true local_ip = OVERLAY_INTERFACE_IP_ADDRESS l2_population = true [securitygroup] enable_security_group = true firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver \u89e3\u91ca [linux_bridge]\u90e8\u5206\uff0c\u6620\u5c04 provider \u865a\u62df\u7f51\u7edc\u5230\u7269\u7406\u7f51\u7edc\u63a5\u53e3\uff1b [vxlan]\u90e8\u5206\uff0c\u542f\u7528 vxlan \u8986\u76d6\u7f51\u7edc\uff0c\u914d\u7f6e\u5904\u7406\u8986\u76d6\u7f51\u7edc\u7684\u7269\u7406\u7f51\u7edc\u63a5\u53e3 IP \u5730\u5740\uff0c\u542f\u7528 layer-2 population\uff1b [securitygroup]\u90e8\u5206\uff0c\u5141\u8bb8\u5b89\u5168\u7ec4\uff0c\u914d\u7f6e linux bridge iptables \u9632\u706b\u5899\u9a71\u52a8\u3002 \u6ce8\u610f \u66ff\u6362 PROVIDER_INTERFACE_NAME \u4e3a\u7269\u7406\u7f51\u7edc\u63a5\u53e3\uff1b \u66ff\u6362 OVERLAY_INTERFACE_IP_ADDRESS \u4e3a\u63a7\u5236\u8282\u70b9\u7684\u7ba1\u7406IP\u5730\u5740\u3002 \u914d\u7f6eLayer-3\u4ee3\u7406\uff1a vim /etc/neutron/l3_agent.ini (CTL) [DEFAULT] interface_driver = linuxbridge \u89e3\u91ca \u5728[default]\u90e8\u5206\uff0c\u914d\u7f6e\u63a5\u53e3\u9a71\u52a8\u4e3alinuxbridge \u914d\u7f6eDHCP\u4ee3\u7406\uff1a vim /etc/neutron/dhcp_agent.ini (CTL) [DEFAULT] interface_driver = linuxbridge dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq enable_isolated_metadata = true \u89e3\u91ca [default]\u90e8\u5206\uff0c\u914d\u7f6elinuxbridge\u63a5\u53e3\u9a71\u52a8\u3001Dnsmasq DHCP\u9a71\u52a8\uff0c\u542f\u7528\u9694\u79bb\u7684\u5143\u6570\u636e\u3002 \u914d\u7f6emetadata\u4ee3\u7406\uff1a vim /etc/neutron/metadata_agent.ini (CTL) [DEFAULT] nova_metadata_host = controller metadata_proxy_shared_secret = METADATA_SECRET \u89e3\u91ca [default]\u90e8\u5206\uff0c\u914d\u7f6e\u5143\u6570\u636e\u4e3b\u673a\u548cshared secret\u3002 \u6ce8\u610f \u66ff\u6362 METADATA_SECRET \u4e3a\u5408\u9002\u7684\u5143\u6570\u636e\u4ee3\u7406secret\u3002 \u914d\u7f6enova\u76f8\u5173\u914d\u7f6e vim /etc/nova/nova.conf [neutron] auth_url = http://controller:5000 auth_type = password project_domain_name = Default user_domain_name = Default region_name = RegionOne project_name = service username = neutron password = NEUTRON_PASS service_metadata_proxy = true (CTL) metadata_proxy_shared_secret = METADATA_SECRET (CTL) \u89e3\u91ca [neutron]\u90e8\u5206\uff0c\u914d\u7f6e\u8bbf\u95ee\u53c2\u6570\uff0c\u542f\u7528\u5143\u6570\u636e\u4ee3\u7406\uff0c\u914d\u7f6esecret\u3002 \u6ce8\u610f \u66ff\u6362 NEUTRON_PASS \u4e3a neutron \u7528\u6237\u7684\u5bc6\u7801\uff1b \u66ff\u6362 METADATA_SECRET \u4e3a\u5408\u9002\u7684\u5143\u6570\u636e\u4ee3\u7406secret\u3002 \u540c\u6b65\u6570\u636e\u5e93\uff1a su -s /bin/sh -c \"neutron-db-manage --config-file /etc/neutron/neutron.conf \\ --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head\" neutron \u91cd\u542f\u8ba1\u7b97API\u670d\u52a1\uff1a systemctl restart openstack-nova-api.service \u542f\u52a8\u7f51\u7edc\u670d\u52a1 systemctl enable neutron-server.service neutron-linuxbridge-agent.service \\ (CTL) neutron-dhcp-agent.service neutron-metadata-agent.service \\ systemctl enable neutron-l3-agent.service systemctl restart openstack-nova-api.service neutron-server.service (CTL) 
neutron-linuxbridge-agent.service neutron-dhcp-agent.service \\ neutron-metadata-agent.service neutron-l3-agent.service systemctl enable neutron-linuxbridge-agent.service (CPT) systemctl restart neutron-linuxbridge-agent.service openstack-nova-compute.service (CPT) \u9a8c\u8bc1 \u9a8c\u8bc1 neutron \u4ee3\u7406\u542f\u52a8\u6210\u529f\uff1a openstack network agent list","title":"Neutron \u5b89\u88c5"},{"location":"install/openEuler-22.03-LTS-SP1/OpenStack-wallaby/#cinder","text":"\u521b\u5efa\u6570\u636e\u5e93\u3001\u670d\u52a1\u51ed\u8bc1\u548c API \u7aef\u70b9 \u521b\u5efa\u6570\u636e\u5e93\uff1a mysql -u root -p MariaDB [(none)]> CREATE DATABASE cinder; MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \\ IDENTIFIED BY 'CINDER_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \\ IDENTIFIED BY 'CINDER_DBPASS'; MariaDB [(none)]> exit \u6ce8\u610f \u66ff\u6362 CINDER_DBPASS \u4e3acinder\u6570\u636e\u5e93\u8bbe\u7f6e\u5bc6\u7801\u3002 source ~/.admin-openrc \u521b\u5efacinder\u670d\u52a1\u51ed\u8bc1\uff1a openstack user create --domain default --password-prompt cinder openstack role add --project service --user cinder admin openstack service create --name cinderv2 --description \"OpenStack Block Storage\" volumev2 openstack service create --name cinderv3 --description \"OpenStack Block Storage\" volumev3 \u521b\u5efa\u5757\u5b58\u50a8\u670d\u52a1API\u7aef\u70b9\uff1a openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\\(project_id\\)s openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\\(project_id\\)s openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\\(project_id\\)s openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\\(project_id\\)s openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\\(project_id\\)s openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\\(project_id\\)s \u5b89\u88c5\u8f6f\u4ef6\u5305\uff1a yum install openstack-cinder-api openstack-cinder-scheduler (CTL) yum install lvm2 device-mapper-persistent-data scsi-target-utils rpcbind nfs-utils \\ (STG) openstack-cinder-volume openstack-cinder-backup \u51c6\u5907\u5b58\u50a8\u8bbe\u5907\uff0c\u4ee5\u4e0b\u4ec5\u4e3a\u793a\u4f8b\uff1a pvcreate /dev/vdb vgcreate cinder-volumes /dev/vdb vim /etc/lvm/lvm.conf devices { ... 
filter = [ \"a/vdb/\", \"r/.*/\"] \u89e3\u91ca \u5728devices\u90e8\u5206\uff0c\u6dfb\u52a0\u8fc7\u6ee4\u4ee5\u63a5\u53d7/dev/vdb\u8bbe\u5907\u62d2\u7edd\u5176\u4ed6\u8bbe\u5907\u3002 \u51c6\u5907NFS mkdir -p /root/cinder/backup cat << EOF >> /etc/export /root/cinder/backup 192.168.1.0/24(rw,sync,no_root_squash,no_all_squash) EOF \u914d\u7f6ecinder\u76f8\u5173\u914d\u7f6e\uff1a vim /etc/cinder/cinder.conf [DEFAULT] transport_url = rabbit://openstack:RABBIT_PASS@controller auth_strategy = keystone my_ip = 10.0.0.11 enabled_backends = lvm (STG) backup_driver=cinder.backup.drivers.nfs.NFSBackupDriver (STG) backup_share=HOST:PATH (STG) [database] connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder [keystone_authtoken] www_authenticate_uri = http://controller:5000 auth_url = http://controller:5000 memcached_servers = controller:11211 auth_type = password project_domain_name = Default user_domain_name = Default project_name = service username = cinder password = CINDER_PASS [oslo_concurrency] lock_path = /var/lib/cinder/tmp [lvm] volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver (STG) volume_group = cinder-volumes (STG) iscsi_protocol = iscsi (STG) iscsi_helper = tgtadm (STG) \u89e3\u91ca [database]\u90e8\u5206\uff0c\u914d\u7f6e\u6570\u636e\u5e93\u5165\u53e3\uff1b [DEFAULT]\u90e8\u5206\uff0c\u914d\u7f6eRabbitMQ\u6d88\u606f\u961f\u5217\u5165\u53e3\uff0c\u914d\u7f6emy_ip\uff1b [DEFAULT] [keystone_authtoken]\u90e8\u5206\uff0c\u914d\u7f6e\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u5165\u53e3\uff1b [oslo_concurrency]\u90e8\u5206\uff0c\u914d\u7f6elock path\u3002 \u6ce8\u610f \u66ff\u6362 CINDER_DBPASS \u4e3a cinder \u6570\u636e\u5e93\u7684\u5bc6\u7801\uff1b \u66ff\u6362 RABBIT_PASS \u4e3a RabbitMQ \u4e2d openstack \u8d26\u6237\u7684\u5bc6\u7801\uff1b \u914d\u7f6e my_ip \u4e3a\u63a7\u5236\u8282\u70b9\u7684\u7ba1\u7406 IP \u5730\u5740\uff1b \u66ff\u6362 CINDER_PASS \u4e3a cinder \u7528\u6237\u7684\u5bc6\u7801\uff1b \u66ff\u6362 HOST:PATH \u4e3a NFS \u7684HOSTIP\u548c\u5171\u4eab\u8def\u5f84\uff1b \u540c\u6b65\u6570\u636e\u5e93\uff1a su -s /bin/sh -c \"cinder-manage db sync\" cinder (CTL) \u914d\u7f6enova\uff1a vim /etc/nova/nova.conf (CTL) [cinder] os_region_name = RegionOne \u91cd\u542f\u8ba1\u7b97API\u670d\u52a1 systemctl restart openstack-nova-api.service \u542f\u52a8cinder\u670d\u52a1 systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service (CTL) systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service (CTL) systemctl enable rpcbind.service nfs-server.service tgtd.service iscsid.service \\ (STG) openstack-cinder-volume.service \\ openstack-cinder-backup.service systemctl start rpcbind.service nfs-server.service tgtd.service iscsid.service \\ (STG) openstack-cinder-volume.service \\ openstack-cinder-backup.service \u6ce8\u610f \u5f53cinder\u4f7f\u7528tgtadm\u7684\u65b9\u5f0f\u6302\u5377\u7684\u65f6\u5019\uff0c\u8981\u4fee\u6539/etc/tgt/tgtd.conf\uff0c\u5185\u5bb9\u5982\u4e0b\uff0c\u4fdd\u8bc1tgtd\u53ef\u4ee5\u53d1\u73b0cinder-volume\u7684iscsi target\u3002 include /var/lib/cinder/volumes/* \u9a8c\u8bc1 source ~/.admin-openrc openstack volume service list","title":"Cinder \u5b89\u88c5"},{"location":"install/openEuler-22.03-LTS-SP1/OpenStack-wallaby/#horizon","text":"\u5b89\u88c5\u8f6f\u4ef6\u5305 yum install openstack-dashboard \u4fee\u6539\u6587\u4ef6 \u4fee\u6539\u53d8\u91cf vim /etc/openstack-dashboard/local_settings OPENSTACK_HOST = \"controller\" ALLOWED_HOSTS = ['*', ] SESSION_ENGINE = 'django.contrib.sessions.backends.cache' CACHES 
= { 'default': { 'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache', 'LOCATION': 'controller:11211', } } OPENSTACK_KEYSTONE_URL = \"http://%s:5000/v3\" % OPENSTACK_HOST OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = \"Default\" OPENSTACK_KEYSTONE_DEFAULT_ROLE = \"member\" WEBROOT = '/dashboard' POLICY_FILES_PATH = \"/etc/openstack-dashboard\" OPENSTACK_API_VERSIONS = { \"identity\": 3, \"image\": 2, \"volume\": 3, } \u91cd\u542f httpd \u670d\u52a1 systemctl restart httpd.service memcached.service \u9a8c\u8bc1 \u6253\u5f00\u6d4f\u89c8\u5668\uff0c\u8f93\u5165\u7f51\u5740 http://HOSTIP/dashboard/ \uff0c\u767b\u5f55 horizon\u3002 \u6ce8\u610f \u66ff\u6362HOSTIP\u4e3a\u63a7\u5236\u8282\u70b9\u7ba1\u7406\u5e73\u9762IP\u5730\u5740","title":"horizon \u5b89\u88c5"},{"location":"install/openEuler-22.03-LTS-SP1/OpenStack-wallaby/#tempest","text":"Tempest\u662fOpenStack\u7684\u96c6\u6210\u6d4b\u8bd5\u670d\u52a1\uff0c\u5982\u679c\u7528\u6237\u9700\u8981\u5168\u9762\u81ea\u52a8\u5316\u6d4b\u8bd5\u5df2\u5b89\u88c5\u7684OpenStack\u73af\u5883\u7684\u529f\u80fd,\u5219\u63a8\u8350\u4f7f\u7528\u8be5\u7ec4\u4ef6\u3002\u5426\u5219\uff0c\u53ef\u4ee5\u4e0d\u7528\u5b89\u88c5\u3002 \u5b89\u88c5Tempest yum install openstack-tempest \u521d\u59cb\u5316\u76ee\u5f55 tempest init mytest \u4fee\u6539\u914d\u7f6e\u6587\u4ef6\u3002 cd mytest vi etc/tempest.conf tempest.conf\u4e2d\u9700\u8981\u914d\u7f6e\u5f53\u524dOpenStack\u73af\u5883\u7684\u4fe1\u606f\uff0c\u5177\u4f53\u5185\u5bb9\u53ef\u4ee5\u53c2\u8003 \u5b98\u65b9\u793a\u4f8b \u6267\u884c\u6d4b\u8bd5 tempest run \u5b89\u88c5tempest\u6269\u5c55\uff08\u53ef\u9009\uff09 OpenStack\u5404\u4e2a\u670d\u52a1\u672c\u8eab\u4e5f\u63d0\u4f9b\u4e86\u4e00\u4e9btempest\u6d4b\u8bd5\u5305\uff0c\u7528\u6237\u53ef\u4ee5\u5b89\u88c5\u8fd9\u4e9b\u5305\u6765\u4e30\u5bcctempest\u7684\u6d4b\u8bd5\u5185\u5bb9\u3002\u5728Wallaby\u4e2d\uff0c\u6211\u4eec\u63d0\u4f9b\u4e86Cinder\u3001Glance\u3001Keystone\u3001Ironic\u3001Trove\u7684\u6269\u5c55\u6d4b\u8bd5\uff0c\u7528\u6237\u53ef\u4ee5\u6267\u884c\u5982\u4e0b\u547d\u4ee4\u8fdb\u884c\u5b89\u88c5\u4f7f\u7528\uff1a yum install python3-cinder-tempest-plugin python3-glance-tempest-plugin python3-ironic-tempest-plugin python3-keystone-tempest-plugin python3-trove-tempest-plugin","title":"Tempest \u5b89\u88c5"},{"location":"install/openEuler-22.03-LTS-SP1/OpenStack-wallaby/#ironic","text":"Ironic\u662fOpenStack\u7684\u88f8\u91d1\u5c5e\u670d\u52a1\uff0c\u5982\u679c\u7528\u6237\u9700\u8981\u8fdb\u884c\u88f8\u673a\u90e8\u7f72\u5219\u63a8\u8350\u4f7f\u7528\u8be5\u7ec4\u4ef6\u3002\u5426\u5219\uff0c\u53ef\u4ee5\u4e0d\u7528\u5b89\u88c5\u3002 \u8bbe\u7f6e\u6570\u636e\u5e93 \u88f8\u91d1\u5c5e\u670d\u52a1\u5728\u6570\u636e\u5e93\u4e2d\u5b58\u50a8\u4fe1\u606f\uff0c\u521b\u5efa\u4e00\u4e2a ironic \u7528\u6237\u53ef\u4ee5\u8bbf\u95ee\u7684 ironic \u6570\u636e\u5e93\uff0c\u66ff\u6362 IRONIC_DBPASSWORD \u4e3a\u5408\u9002\u7684\u5bc6\u7801 mysql -u root -p MariaDB [(none)]> CREATE DATABASE ironic CHARACTER SET utf8; MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'localhost' \\ IDENTIFIED BY 'IRONIC_DBPASSWORD'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'%' \\ IDENTIFIED BY 'IRONIC_DBPASSWORD'; \u521b\u5efa\u670d\u52a1\u7528\u6237\u8ba4\u8bc1 1\u3001\u521b\u5efaBare Metal\u670d\u52a1\u7528\u6237 openstack user create --password IRONIC_PASSWORD \\ --email ironic@example.com ironic openstack role add --project service --user ironic admin openstack service create --name ironic 
--description \"Ironic baremetal provisioning service\" baremetal openstack service create --name ironic-inspector --description \"Ironic inspector baremetal provisioning service\" baremetal-introspection openstack user create --password IRONIC_INSPECTOR_PASSWORD --email ironic_inspector@example.com ironic_inspector openstack role add --project service --user ironic-inspector admin 2\u3001\u521b\u5efaBare Metal\u670d\u52a1\u8bbf\u95ee\u5165\u53e3 openstack endpoint create --region RegionOne baremetal admin http://$IRONIC_NODE:6385 openstack endpoint create --region RegionOne baremetal public http://$IRONIC_NODE:6385 openstack endpoint create --region RegionOne baremetal internal http://$IRONIC_NODE:6385 openstack endpoint create --region RegionOne baremetal-introspection internal http://172.20.19.13:5050/v1 openstack endpoint create --region RegionOne baremetal-introspection public http://172.20.19.13:5050/v1 openstack endpoint create --region RegionOne baremetal-introspection admin http://172.20.19.13:5050/v1 \u914d\u7f6eironic-api\u670d\u52a1 \u914d\u7f6e\u6587\u4ef6\u8def\u5f84/etc/ironic/ironic.conf 1\u3001\u901a\u8fc7 connection \u9009\u9879\u914d\u7f6e\u6570\u636e\u5e93\u7684\u4f4d\u7f6e\uff0c\u5982\u4e0b\u6240\u793a\uff0c\u66ff\u6362 IRONIC_DBPASSWORD \u4e3a ironic \u7528\u6237\u7684\u5bc6\u7801\uff0c\u66ff\u6362 DB_IP \u4e3aDB\u670d\u52a1\u5668\u6240\u5728\u7684IP\u5730\u5740\uff1a [database] # The SQLAlchemy connection string used to connect to the # database (string value) connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic 2\u3001\u901a\u8fc7\u4ee5\u4e0b\u9009\u9879\u914d\u7f6eironic-api\u670d\u52a1\u4f7f\u7528RabbitMQ\u6d88\u606f\u4ee3\u7406\uff0c\u66ff\u6362 RPC_* \u4e3aRabbitMQ\u7684\u8be6\u7ec6\u5730\u5740\u548c\u51ed\u8bc1 [DEFAULT] # A URL representing the messaging driver to use and its full # configuration. (string value) transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/ \u7528\u6237\u4e5f\u53ef\u81ea\u884c\u4f7f\u7528json-rpc\u65b9\u5f0f\u66ff\u6362rabbitmq 3\u3001\u914d\u7f6eironic-api\u670d\u52a1\u4f7f\u7528\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u7684\u51ed\u8bc1\uff0c\u66ff\u6362 PUBLIC_IDENTITY_IP \u4e3a\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u5668\u7684\u516c\u5171IP\uff0c\u66ff\u6362 PRIVATE_IDENTITY_IP \u4e3a\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u5668\u7684\u79c1\u6709IP\uff0c\u66ff\u6362 IRONIC_PASSWORD \u4e3a\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u4e2d ironic \u7528\u6237\u7684\u5bc6\u7801\uff1a [DEFAULT] # Authentication strategy used by ironic-api: one of # \"keystone\" or \"noauth\". \"noauth\" should not be used in a # production environment because all authentication will be # disabled. 
(string value) auth_strategy=keystone host = controller memcache_servers = controller:11211 enabled_network_interfaces = flat,noop,neutron default_network_interface = noop transport_url = rabbit://openstack:RABBITPASSWD@controller:5672/ enabled_hardware_types = ipmi enabled_boot_interfaces = pxe enabled_deploy_interfaces = direct default_deploy_interface = direct enabled_inspect_interfaces = inspector enabled_management_interfaces = ipmitool enabled_power_interfaces = ipmitool enabled_rescue_interfaces = no-rescue,agent isolinux_bin = /usr/share/syslinux/isolinux.bin logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s [keystone_authtoken] # Authentication type to load (string value) auth_type=password # Complete public Identity API endpoint (string value) www_authenticate_uri=http://PUBLIC_IDENTITY_IP:5000 # Complete admin Identity API endpoint. (string value) auth_url=http://PRIVATE_IDENTITY_IP:5000 # Service username. (string value) username=ironic # Service account password. (string value) password=IRONIC_PASSWORD # Service tenant name. (string value) project_name=service # Domain name containing project (string value) project_domain_name=Default # User's domain name (string value) user_domain_name=Default [agent] deploy_logs_collect = always deploy_logs_local_path = /var/log/ironic/deploy deploy_logs_storage_backend = local image_download_source = http stream_raw_images = false force_raw_images = false verify_ca = False [oslo_concurrency] [oslo_messaging_notifications] transport_url = rabbit://openstack:123456@172.20.19.25:5672/ topics = notifications driver = messagingv2 [oslo_messaging_rabbit] amqp_durable_queues = True rabbit_ha_queues = True [pxe] ipxe_enabled = false pxe_append_params = nofb nomodeset vga=normal coreos.autologin ipa-insecure=1 image_cache_size = 204800 tftp_root=/var/lib/tftpboot/cephfs/ tftp_master_path=/var/lib/tftpboot/cephfs/master_images [dhcp] dhcp_provider = none 4\u3001\u521b\u5efa\u88f8\u91d1\u5c5e\u670d\u52a1\u6570\u636e\u5e93\u8868 ironic-dbsync --config-file /etc/ironic/ironic.conf create_schema 5\u3001\u91cd\u542fironic-api\u670d\u52a1 sudo systemctl restart openstack-ironic-api \u914d\u7f6eironic-conductor\u670d\u52a1 1\u3001\u66ff\u6362 HOST_IP \u4e3aconductor host\u7684IP [DEFAULT] # IP address of this host. If unset, will determine the IP # programmatically. If unable to do so, will use \"127.0.0.1\". # (string value) my_ip=HOST_IP 2\u3001\u914d\u7f6e\u6570\u636e\u5e93\u7684\u4f4d\u7f6e\uff0cironic-conductor\u5e94\u8be5\u4f7f\u7528\u548cironic-api\u76f8\u540c\u7684\u914d\u7f6e\u3002\u66ff\u6362 IRONIC_DBPASSWORD \u4e3a ironic \u7528\u6237\u7684\u5bc6\u7801\uff0c\u66ff\u6362DB_IP\u4e3aDB\u670d\u52a1\u5668\u6240\u5728\u7684IP\u5730\u5740\uff1a [database] # The SQLAlchemy connection string to use to connect to the # database. (string value) connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic 3\u3001\u901a\u8fc7\u4ee5\u4e0b\u9009\u9879\u914d\u7f6eironic-api\u670d\u52a1\u4f7f\u7528RabbitMQ\u6d88\u606f\u4ee3\u7406\uff0cironic-conductor\u5e94\u8be5\u4f7f\u7528\u548cironic-api\u76f8\u540c\u7684\u914d\u7f6e\uff0c\u66ff\u6362 RPC_* \u4e3aRabbitMQ\u7684\u8be6\u7ec6\u5730\u5740\u548c\u51ed\u8bc1 [DEFAULT] # A URL representing the messaging driver to use and its full # configuration. 
(string value) transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/ \u7528\u6237\u4e5f\u53ef\u81ea\u884c\u4f7f\u7528json-rpc\u65b9\u5f0f\u66ff\u6362rabbitmq 4\u3001\u914d\u7f6e\u51ed\u8bc1\u8bbf\u95ee\u5176\u4ed6OpenStack\u670d\u52a1 \u4e3a\u4e86\u4e0e\u5176\u4ed6OpenStack\u670d\u52a1\u8fdb\u884c\u901a\u4fe1\uff0c\u88f8\u91d1\u5c5e\u670d\u52a1\u5728\u8bf7\u6c42\u5176\u4ed6\u670d\u52a1\u65f6\u9700\u8981\u4f7f\u7528\u670d\u52a1\u7528\u6237\u4e0eOpenStack Identity\u670d\u52a1\u8fdb\u884c\u8ba4\u8bc1\u3002\u8fd9\u4e9b\u7528\u6237\u7684\u51ed\u636e\u5fc5\u987b\u5728\u4e0e\u76f8\u5e94\u670d\u52a1\u76f8\u5173\u7684\u6bcf\u4e2a\u914d\u7f6e\u6587\u4ef6\u4e2d\u8fdb\u884c\u914d\u7f6e\u3002 [neutron] - \u8bbf\u95eeOpenStack\u7f51\u7edc\u670d\u52a1 [glance] - \u8bbf\u95eeOpenStack\u955c\u50cf\u670d\u52a1 [swift] - \u8bbf\u95eeOpenStack\u5bf9\u8c61\u5b58\u50a8\u670d\u52a1 [cinder] - \u8bbf\u95eeOpenStack\u5757\u5b58\u50a8\u670d\u52a1 [inspector] - \u8bbf\u95eeOpenStack\u88f8\u91d1\u5c5eintrospection\u670d\u52a1 [service_catalog] - \u4e00\u4e2a\u7279\u6b8a\u9879\u7528\u4e8e\u4fdd\u5b58\u88f8\u91d1\u5c5e\u670d\u52a1\u4f7f\u7528\u7684\u51ed\u8bc1\uff0c\u8be5\u51ed\u8bc1\u7528\u4e8e\u53d1\u73b0\u6ce8\u518c\u5728OpenStack\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u76ee\u5f55\u4e2d\u7684\u81ea\u5df1\u7684API URL\u7aef\u70b9 \u7b80\u5355\u8d77\u89c1\uff0c\u53ef\u4ee5\u5bf9\u6240\u6709\u670d\u52a1\u4f7f\u7528\u540c\u4e00\u4e2a\u670d\u52a1\u7528\u6237\u3002\u4e3a\u4e86\u5411\u540e\u517c\u5bb9\uff0c\u8be5\u7528\u6237\u5e94\u8be5\u548cironic-api\u670d\u52a1\u7684[keystone_authtoken]\u6240\u914d\u7f6e\u7684\u4e3a\u540c\u4e00\u4e2a\u7528\u6237\u3002\u4f46\u8fd9\u4e0d\u662f\u5fc5\u987b\u7684\uff0c\u4e5f\u53ef\u4ee5\u4e3a\u6bcf\u4e2a\u670d\u52a1\u521b\u5efa\u5e76\u914d\u7f6e\u4e0d\u540c\u7684\u670d\u52a1\u7528\u6237\u3002 \u5728\u4e0b\u9762\u7684\u793a\u4f8b\u4e2d\uff0c\u7528\u6237\u8bbf\u95eeOpenStack\u7f51\u7edc\u670d\u52a1\u7684\u8eab\u4efd\u9a8c\u8bc1\u4fe1\u606f\u914d\u7f6e\u4e3a\uff1a \u7f51\u7edc\u670d\u52a1\u90e8\u7f72\u5728\u540d\u4e3aRegionOne\u7684\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u57df\u4e2d\uff0c\u4ec5\u5728\u670d\u52a1\u76ee\u5f55\u4e2d\u6ce8\u518c\u516c\u5171\u7aef\u70b9\u63a5\u53e3 \u8bf7\u6c42\u65f6\u4f7f\u7528\u7279\u5b9a\u7684CA SSL\u8bc1\u4e66\u8fdb\u884cHTTPS\u8fde\u63a5 \u4e0eironic-api\u670d\u52a1\u914d\u7f6e\u76f8\u540c\u7684\u670d\u52a1\u7528\u6237 \u52a8\u6001\u5bc6\u7801\u8ba4\u8bc1\u63d2\u4ef6\u57fa\u4e8e\u5176\u4ed6\u9009\u9879\u53d1\u73b0\u5408\u9002\u7684\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1API\u7248\u672c [neutron] # Authentication type to load (string value) auth_type = password # Authentication URL (string value) auth_url=https://IDENTITY_IP:5000/ # Username (string value) username=ironic # User's password (string value) password=IRONIC_PASSWORD # Project name to scope to (string value) project_name=service # Domain ID containing project (string value) project_domain_id=default # User's domain id (string value) user_domain_id=default # PEM encoded Certificate Authority to use when verifying # HTTPs connections. (string value) cafile=/opt/stack/data/ca-bundle.pem # The default region_name for endpoint URL discovery. (string # value) region_name = RegionOne # List of interfaces, in order of preference, for endpoint # URL. 
(list value) valid_interfaces=public \u9ed8\u8ba4\u60c5\u51b5\u4e0b\uff0c\u4e3a\u4e86\u4e0e\u5176\u4ed6\u670d\u52a1\u8fdb\u884c\u901a\u4fe1\uff0c\u88f8\u91d1\u5c5e\u670d\u52a1\u4f1a\u5c1d\u8bd5\u901a\u8fc7\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u7684\u670d\u52a1\u76ee\u5f55\u53d1\u73b0\u8be5\u670d\u52a1\u5408\u9002\u7684\u7aef\u70b9\u3002\u5982\u679c\u5e0c\u671b\u5bf9\u4e00\u4e2a\u7279\u5b9a\u670d\u52a1\u4f7f\u7528\u4e00\u4e2a\u4e0d\u540c\u7684\u7aef\u70b9\uff0c\u5219\u5728\u88f8\u91d1\u5c5e\u670d\u52a1\u7684\u914d\u7f6e\u6587\u4ef6\u4e2d\u901a\u8fc7endpoint_override\u9009\u9879\u8fdb\u884c\u6307\u5b9a\uff1a [neutron] ... endpoint_override = 5\u3001\u914d\u7f6e\u5141\u8bb8\u7684\u9a71\u52a8\u7a0b\u5e8f\u548c\u786c\u4ef6\u7c7b\u578b \u901a\u8fc7\u8bbe\u7f6eenabled_hardware_types\u8bbe\u7f6eironic-conductor\u670d\u52a1\u5141\u8bb8\u4f7f\u7528\u7684\u786c\u4ef6\u7c7b\u578b\uff1a [DEFAULT] enabled_hardware_types = ipmi \u914d\u7f6e\u786c\u4ef6\u63a5\u53e3\uff1a enabled_boot_interfaces = pxe enabled_deploy_interfaces = direct,iscsi enabled_inspect_interfaces = inspector enabled_management_interfaces = ipmitool enabled_power_interfaces = ipmitool \u914d\u7f6e\u63a5\u53e3\u9ed8\u8ba4\u503c\uff1a [DEFAULT] default_deploy_interface = direct default_network_interface = neutron \u5982\u679c\u542f\u7528\u4e86\u4efb\u4f55\u4f7f\u7528Direct deploy\u7684\u9a71\u52a8\uff0c\u5fc5\u987b\u5b89\u88c5\u548c\u914d\u7f6e\u955c\u50cf\u670d\u52a1\u7684Swift\u540e\u7aef\u3002Ceph\u5bf9\u8c61\u7f51\u5173(RADOS\u7f51\u5173)\u4e5f\u652f\u6301\u4f5c\u4e3a\u955c\u50cf\u670d\u52a1\u7684\u540e\u7aef\u3002 6\u3001\u91cd\u542fironic-conductor\u670d\u52a1 sudo systemctl restart openstack-ironic-conductor \u914d\u7f6eironic-inspector\u670d\u52a1 \u914d\u7f6e\u6587\u4ef6\u8def\u5f84/etc/ironic-inspector/inspector.conf 1\u3001\u521b\u5efa\u6570\u636e\u5e93 # mysql -u root -p MariaDB [(none)]> CREATE DATABASE ironic_inspector CHARACTER SET utf8; MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic_inspector.* TO 'ironic_inspector'@'localhost' \\ IDENTIFIED BY 'IRONIC_INSPECTOR_DBPASSWORD'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic_inspector.* TO 'ironic_inspector'@'%' \\ IDENTIFIED BY 'IRONIC_INSPECTOR_DBPASSWORD'; 2\u3001\u901a\u8fc7 connection \u9009\u9879\u914d\u7f6e\u6570\u636e\u5e93\u7684\u4f4d\u7f6e\uff0c\u5982\u4e0b\u6240\u793a\uff0c\u66ff\u6362 IRONIC_INSPECTOR_DBPASSWORD \u4e3a ironic_inspector \u7528\u6237\u7684\u5bc6\u7801\uff0c\u66ff\u6362 DB_IP \u4e3aDB\u670d\u52a1\u5668\u6240\u5728\u7684IP\u5730\u5740\uff1a [database] backend = sqlalchemy connection = mysql+pymysql://ironic_inspector:IRONIC_INSPECTOR_DBPASSWORD@DB_IP/ironic_inspector min_pool_size = 100 max_pool_size = 500 pool_timeout = 30 max_retries = 5 max_overflow = 200 db_retry_interval = 2 db_inc_retry_interval = True db_max_retry_interval = 2 db_max_retries = 5 3\u3001\u914d\u7f6e\u6d88\u606f\u5ea6\u5217\u901a\u4fe1\u5730\u5740 [DEFAULT] transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/ 4\u3001\u8bbe\u7f6ekeystone\u8ba4\u8bc1 [DEFAULT] auth_strategy = keystone timeout = 900 rootwrap_config = /etc/ironic-inspector/rootwrap.conf logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_dir = /var/log/ironic-inspector state_path = /var/lib/ironic-inspector use_stderr = False [ironic] api_endpoint = http://IRONIC_API_HOST_ADDRRESS:6385 auth_type = password auth_url = http://PUBLIC_IDENTITY_IP:5000 auth_strategy = keystone 
ironic_url = http://IRONIC_API_HOST_ADDRRESS:6385 os_region = RegionOne project_name = service project_domain_name = Default user_domain_name = Default username = IRONIC_SERVICE_USER_NAME password = IRONIC_SERVICE_USER_PASSWORD [keystone_authtoken] auth_type = password auth_url = http://control:5000 www_authenticate_uri = http://control:5000 project_domain_name = default user_domain_name = default project_name = service username = ironic_inspector password = IRONICPASSWD region_name = RegionOne memcache_servers = control:11211 token_cache_time = 300 [processing] add_ports = active processing_hooks = $default_processing_hooks,local_link_connection,lldp_basic ramdisk_logs_dir = /var/log/ironic-inspector/ramdisk always_store_ramdisk_logs = true store_data =none power_off = false [pxe_filter] driver = iptables [capabilities] boot_mode=True 5\u3001\u914d\u7f6eironic inspector dnsmasq\u670d\u52a1 # \u914d\u7f6e\u6587\u4ef6\u5730\u5740\uff1a/etc/ironic-inspector/dnsmasq.conf port=0 interface=enp3s0 #\u66ff\u6362\u4e3a\u5b9e\u9645\u76d1\u542c\u7f51\u7edc\u63a5\u53e3 dhcp-range=172.20.19.100,172.20.19.110 #\u66ff\u6362\u4e3a\u5b9e\u9645dhcp\u5730\u5740\u8303\u56f4 bind-interfaces enable-tftp dhcp-match=set:efi,option:client-arch,7 dhcp-match=set:efi,option:client-arch,9 dhcp-match=aarch64, option:client-arch,11 dhcp-boot=tag:aarch64,grubaa64.efi dhcp-boot=tag:!aarch64,tag:efi,grubx64.efi dhcp-boot=tag:!aarch64,tag:!efi,pxelinux.0 tftp-root=/tftpboot #\u66ff\u6362\u4e3a\u5b9e\u9645tftpboot\u76ee\u5f55 log-facility=/var/log/dnsmasq.log 6\u3001\u5173\u95edironic provision\u7f51\u7edc\u5b50\u7f51\u7684dhcp openstack subnet set --no-dhcp 72426e89-f552-4dc4-9ac7-c4e131ce7f3c 7\u3001\u521d\u59cb\u5316ironic-inspector\u670d\u52a1\u7684\u6570\u636e\u5e93 \u5728\u63a7\u5236\u8282\u70b9\u6267\u884c\uff1a ironic-inspector-dbsync --config-file /etc/ironic-inspector/inspector.conf upgrade 8\u3001\u542f\u52a8\u670d\u52a1 systemctl enable --now openstack-ironic-inspector.service systemctl enable --now openstack-ironic-inspector-dnsmasq.service \u914d\u7f6ehttpd\u670d\u52a1 \u521b\u5efaironic\u8981\u4f7f\u7528\u7684httpd\u7684root\u76ee\u5f55\u5e76\u8bbe\u7f6e\u5c5e\u4e3b\u5c5e\u7ec4\uff0c\u76ee\u5f55\u8def\u5f84\u8981\u548c/etc/ironic/ironic.conf\u4e2d[deploy]\u7ec4\u4e2dhttp_root \u914d\u7f6e\u9879\u6307\u5b9a\u7684\u8def\u5f84\u8981\u4e00\u81f4\u3002 mkdir -p /var/lib/ironic/httproot ``chown ironic.ironic /var/lib/ironic/httproot \u5b89\u88c5\u548c\u914d\u7f6ehttpd\u670d\u52a1 \u5b89\u88c5httpd\u670d\u52a1\uff0c\u5df2\u6709\u8bf7\u5ffd\u7565 yum install httpd -y \u521b\u5efa/etc/httpd/conf.d/openstack-ironic-httpd.conf\u6587\u4ef6\uff0c\u5185\u5bb9\u5982\u4e0b\uff1a Listen 8080 ServerName ironic.openeuler.com ErrorLog \"/var/log/httpd/openstack-ironic-httpd-error_log\" CustomLog \"/var/log/httpd/openstack-ironic-httpd-access_log\" \"%h %l %u %t \\\"%r\\\" %>s %b\" DocumentRoot \"/var/lib/ironic/httproot\" Options Indexes FollowSymLinks Require all granted LogLevel warn AddDefaultCharset UTF-8 EnableSendfile on \u6ce8\u610f\u76d1\u542c\u7684\u7aef\u53e3\u8981\u548c/etc/ironic/ironic.conf\u91cc[deploy]\u9009\u9879\u4e2dhttp_url\u914d\u7f6e\u9879\u4e2d\u6307\u5b9a\u7684\u7aef\u53e3\u4e00\u81f4\u3002 \u91cd\u542fhttpd\u670d\u52a1\u3002 systemctl restart httpd deploy ramdisk\u955c\u50cf\u5236\u4f5c 
W\u7248\u7684ramdisk\u955c\u50cf\u652f\u6301\u901a\u8fc7ironic-python-agent\u670d\u52a1\u6216disk-image-builder\u5de5\u5177\u5236\u4f5c\uff0c\u4e5f\u53ef\u4ee5\u4f7f\u7528\u793e\u533a\u6700\u65b0\u7684ironic-python-agent-builder\u3002\u7528\u6237\u4e5f\u53ef\u4ee5\u81ea\u884c\u9009\u62e9\u5176\u4ed6\u5de5\u5177\u5236\u4f5c\u3002 \u82e5\u4f7f\u7528W\u7248\u539f\u751f\u5de5\u5177\uff0c\u5219\u9700\u8981\u5b89\u88c5\u5bf9\u5e94\u7684\u8f6f\u4ef6\u5305\u3002 yum install openstack-ironic-python-agent \u6216\u8005 yum install diskimage-builder \u5177\u4f53\u7684\u4f7f\u7528\u65b9\u6cd5\u53ef\u4ee5\u53c2\u8003 \u5b98\u65b9\u6587\u6863 \u8fd9\u91cc\u4ecb\u7ecd\u4e0b\u4f7f\u7528ironic-python-agent-builder\u6784\u5efaironic\u4f7f\u7528\u7684deploy\u955c\u50cf\u7684\u5b8c\u6574\u8fc7\u7a0b\u3002 \u5b89\u88c5 ironic-python-agent-builder 1. \u5b89\u88c5\u5de5\u5177\uff1a ```shell pip install ironic-python-agent-builder ``` 2. \u4fee\u6539\u4ee5\u4e0b\u6587\u4ef6\u4e2d\u7684python\u89e3\u91ca\u5668\uff1a ```shell /usr/bin/yum /usr/libexec/urlgrabber-ext-down ``` 3. \u5b89\u88c5\u5176\u5b83\u5fc5\u987b\u7684\u5de5\u5177\uff1a ```shell yum install git ``` \u7531\u4e8e`DIB`\u4f9d\u8d56`semanage`\u547d\u4ee4\uff0c\u6240\u4ee5\u5728\u5236\u4f5c\u955c\u50cf\u4e4b\u524d\u786e\u5b9a\u8be5\u547d\u4ee4\u662f\u5426\u53ef\u7528\uff1a`semanage --help`\uff0c\u5982\u679c\u63d0\u793a\u65e0\u6b64\u547d\u4ee4\uff0c\u5b89\u88c5\u5373\u53ef\uff1a ```shell # \u5148\u67e5\u8be2\u9700\u8981\u5b89\u88c5\u54ea\u4e2a\u5305 [root@localhost ~]# yum provides /usr/sbin/semanage \u5df2\u52a0\u8f7d\u63d2\u4ef6\uff1afastestmirror Loading mirror speeds from cached hostfile * base: mirror.vcu.edu * extras: mirror.vcu.edu * updates: mirror.math.princeton.edu policycoreutils-python-2.5-34.el7.aarch64 : SELinux policy core python utilities \u6e90 \uff1abase \u5339\u914d\u6765\u6e90\uff1a \u6587\u4ef6\u540d \uff1a/usr/sbin/semanage # \u5b89\u88c5 [root@localhost ~]# yum install policycoreutils-python ``` \u5236\u4f5c\u955c\u50cf \u5982\u679c\u662f`arm`\u67b6\u6784\uff0c\u9700\u8981\u6dfb\u52a0\uff1a ```shell export ARCH=aarch64 ``` \u57fa\u672c\u7528\u6cd5\uff1a ```shell usage: ironic-python-agent-builder [-h] [-r RELEASE] [-o OUTPUT] [-e ELEMENT] [-b BRANCH] [-v] [--extra-args EXTRA_ARGS] distribution positional arguments: distribution Distribution to use optional arguments: -h, --help show this help message and exit -r RELEASE, --release RELEASE Distribution release to use -o OUTPUT, --output OUTPUT Output base file name -e ELEMENT, --element ELEMENT Additional DIB element to use -b BRANCH, --branch BRANCH If set, override the branch that is used for ironic- python-agent and requirements -v, --verbose Enable verbose logging in diskimage-builder --extra-args EXTRA_ARGS Extra arguments to pass to diskimage-builder ``` \u4e3e\u4f8b\u8bf4\u660e\uff1a ```shell ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky ``` \u5141\u8bb8ssh\u767b\u5f55 \u521d\u59cb\u5316\u73af\u5883\u53d8\u91cf\uff0c\u7136\u540e\u5236\u4f5c\u955c\u50cf\uff1a ```shell export DIB_DEV_USER_USERNAME=ipa \\ export DIB_DEV_USER_PWDLESS_SUDO=yes \\ export DIB_DEV_USER_PASSWORD='123' ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky -e selinux-permissive -e devuser ``` \u6307\u5b9a\u4ee3\u7801\u4ed3\u5e93 \u521d\u59cb\u5316\u5bf9\u5e94\u7684\u73af\u5883\u53d8\u91cf\uff0c\u7136\u540e\u5236\u4f5c\u955c\u50cf\uff1a ```shell # \u6307\u5b9a\u4ed3\u5e93\u5730\u5740\u4ee5\u53ca\u7248\u672c 
DIB_REPOLOCATION_ironic_python_agent=git@172.20.2.149:liuzz/ironic-python-agent.git DIB_REPOREF_ironic_python_agent=origin/develop # \u76f4\u63a5\u4ecegerrit\u4e0aclone\u4ee3\u7801 DIB_REPOLOCATION_ironic_python_agent=https://review.opendev.org/openstack/ironic-python-agent DIB_REPOREF_ironic_python_agent=refs/changes/43/701043/1 ``` \u53c2\u8003\uff1a[source-repositories](https://docs.openstack.org/diskimage-builder/latest/elements/source-repositories/README.html)\u3002 \u6307\u5b9a\u4ed3\u5e93\u5730\u5740\u53ca\u7248\u672c\u9a8c\u8bc1\u6210\u529f\u3002 \u6ce8\u610f \u539f\u751f\u7684openstack\u91cc\u7684pxe\u914d\u7f6e\u6587\u4ef6\u7684\u6a21\u7248\u4e0d\u652f\u6301arm64\u67b6\u6784\uff0c\u9700\u8981\u81ea\u5df1\u5bf9\u539f\u751fopenstack\u4ee3\u7801\u8fdb\u884c\u4fee\u6539\uff1a \u5728W\u7248\u4e2d\uff0c\u793e\u533a\u7684ironic\u4ecd\u7136\u4e0d\u652f\u6301arm64\u4f4d\u7684uefi pxe\u542f\u52a8\uff0c\u8868\u73b0\u4e3a\u751f\u6210\u7684grub.cfg\u6587\u4ef6(\u4e00\u822c\u4f4d\u4e8e/tftpboot/\u4e0b)\u683c\u5f0f\u4e0d\u5bf9\u800c\u5bfc\u81f4pxe\u542f\u52a8\u5931\u8d25\uff0c\u5982\u4e0b\uff1a \u751f\u6210\u7684\u9519\u8bef\u914d\u7f6e\u6587\u4ef6\uff1a \u5982\u4e0a\u56fe\u6240\u793a\uff0carm\u67b6\u6784\u91cc\u5bfb\u627evmlinux\u548cramdisk\u955c\u50cf\u7684\u547d\u4ee4\u5206\u522b\u662flinux\u548cinitrd\uff0c\u4e0a\u56fe\u6240\u793a\u7684\u6807\u7ea2\u547d\u4ee4\u662fx86\u67b6\u6784\u4e0b\u7684uefi pxe\u542f\u52a8\u3002 \u9700\u8981\u7528\u6237\u5bf9\u751f\u6210grub.cfg\u7684\u4ee3\u7801\u903b\u8f91\u81ea\u884c\u4fee\u6539\u3002 ironic\u5411ipa\u53d1\u9001\u67e5\u8be2\u547d\u4ee4\u6267\u884c\u72b6\u6001\u8bf7\u6c42\u7684tls\u62a5\u9519\uff1a w\u7248\u7684ipa\u548cironic\u9ed8\u8ba4\u90fd\u4f1a\u5f00\u542ftls\u8ba4\u8bc1\u7684\u65b9\u5f0f\u5411\u5bf9\u65b9\u53d1\u9001\u8bf7\u6c42\uff0c\u8ddf\u636e\u5b98\u7f51\u7684\u8bf4\u660e\u8fdb\u884c\u5173\u95ed\u5373\u53ef\u3002 1. \u4fee\u6539ironic\u914d\u7f6e\u6587\u4ef6(/etc/ironic/ironic.conf)\u4e0b\u9762\u7684\u914d\u7f6e\u4e2d\u6dfb\u52a0ipa-insecure=1\uff1a ``` [agent] verify_ca = False [pxe] pxe_append_params = nofb nomodeset vga=normal coreos.autologin ipa-insecure=1 ``` 2) ramdisk\u955c\u50cf\u4e2d\u6dfb\u52a0ipa\u914d\u7f6e\u6587\u4ef6/etc/ironic_python_agent/ironic_python_agent.conf\u5e76\u914d\u7f6etls\u7684\u914d\u7f6e\u5982\u4e0b\uff1a /etc/ironic_python_agent/ironic_python_agent.conf (\u9700\u8981\u63d0\u524d\u521b\u5efa/etc/ironic_python_agent\u76ee\u5f55\uff09 ``` [DEFAULT] enable_auto_tls = False ``` \u8bbe\u7f6e\u6743\u9650\uff1a ``` chown -R ipa.ipa /etc/ironic_python_agent/ ``` 3. 
\u4fee\u6539ipa\u670d\u52a1\u7684\u670d\u52a1\u542f\u52a8\u6587\u4ef6\uff0c\u6dfb\u52a0\u914d\u7f6e\u6587\u4ef6\u9009\u9879 vim usr/lib/systemd/system/ironic-python-agent.service ``` [Unit] Description=Ironic Python Agent After=network-online.target [Service] ExecStartPre=/sbin/modprobe vfat ExecStart=/usr/local/bin/ironic-python-agent --config-file /etc/ironic_python_agent/ironic_python_agent.conf Restart=always RestartSec=30s [Install] WantedBy=multi-user.target ```","title":"Ironic \u5b89\u88c5"},{"location":"install/openEuler-22.03-LTS-SP1/OpenStack-wallaby/#kolla","text":"Kolla\u4e3aOpenStack\u670d\u52a1\u63d0\u4f9b\u751f\u4ea7\u73af\u5883\u53ef\u7528\u7684\u5bb9\u5668\u5316\u90e8\u7f72\u7684\u529f\u80fd\u3002openEuler 22.03 LTS\u4e2d\u5f15\u5165\u4e86Kolla\u548cKolla-ansible\u670d\u52a1\u3002 Kolla\u7684\u5b89\u88c5\u5341\u5206\u7b80\u5355\uff0c\u53ea\u9700\u8981\u5b89\u88c5\u5bf9\u5e94\u7684RPM\u5305\u5373\u53ef yum install openstack-kolla openstack-kolla-ansible \u5b89\u88c5\u5b8c\u540e\uff0c\u5c31\u53ef\u4ee5\u4f7f\u7528 kolla-ansible , kolla-build , kolla-genpwd , kolla-mergepwd \u7b49\u547d\u4ee4\u4e86\u3002","title":"Kolla \u5b89\u88c5"},{"location":"install/openEuler-22.03-LTS-SP1/OpenStack-wallaby/#trove","text":"Trove\u662fOpenStack\u7684\u6570\u636e\u5e93\u670d\u52a1\uff0c\u5982\u679c\u7528\u6237\u4f7f\u7528OpenStack\u63d0\u4f9b\u7684\u6570\u636e\u5e93\u670d\u52a1\u5219\u63a8\u8350\u4f7f\u7528\u8be5\u7ec4\u4ef6\u3002\u5426\u5219\uff0c\u53ef\u4ee5\u4e0d\u7528\u5b89\u88c5\u3002 1.\u8bbe\u7f6e\u6570\u636e\u5e93 \u6570\u636e\u5e93\u670d\u52a1\u5728\u6570\u636e\u5e93\u4e2d\u5b58\u50a8\u4fe1\u606f\uff0c\u521b\u5efa\u4e00\u4e2a trove \u7528\u6237\u53ef\u4ee5\u8bbf\u95ee\u7684 trove \u6570\u636e\u5e93\uff0c\u66ff\u6362 TROVE_DBPASSWORD \u4e3a\u5408\u9002\u7684\u5bc6\u7801 mysql -u root -p MariaDB [(none)]> CREATE DATABASE trove CHARACTER SET utf8; MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'localhost' \\ IDENTIFIED BY 'TROVE_DBPASSWORD'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'%' \\ IDENTIFIED BY 'TROVE_DBPASSWORD'; 2.\u521b\u5efa\u670d\u52a1\u7528\u6237\u8ba4\u8bc1 1\u3001\u521b\u5efa Trove \u670d\u52a1\u7528\u6237 openstack user create --password TROVE_PASSWORD \\ --email trove@example.com trove openstack role add --project service --user trove admin openstack service create --name trove --description \"Database service\" database \u89e3\u91ca\uff1a TROVE_PASSWORD \u66ff\u6362\u4e3a trove \u7528\u6237\u7684\u5bc6\u7801 2\u3001\u521b\u5efa Database \u670d\u52a1\u8bbf\u95ee\u5165\u53e3 openstack endpoint create --region RegionOne database public http://controller:8779/v1.0/%\\(tenant_id\\)s openstack endpoint create --region RegionOne database internal http://controller:8779/v1.0/%\\(tenant_id\\)s openstack endpoint create --region RegionOne database admin http://controller:8779/v1.0/%\\(tenant_id\\)s 3.\u5b89\u88c5\u548c\u914d\u7f6e Trove \u5404\u7ec4\u4ef6 1\u3001\u5b89\u88c5 Trove \u5305 yum install openstack-trove python-troveclient 2\u3001\u914d\u7f6e trove.conf vim /etc/trove/trove.conf [DEFAULT] bind_host=TROVE_NODE_IP log_dir = /var/log/trove network_driver = trove.network.neutron.NeutronDriver management_security_groups = nova_keypair = trove-mgmt default_datastore = mysql taskmanager_manager = trove.taskmanager.manager.Manager trove_api_workers = 5 transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/ reboot_time_out = 300 usage_timeout = 900 agent_call_high_timeout = 1200 use_syslog = False debug = True # Set these if 
using Neutron Networking network_driver=trove.network.neutron.NeutronDriver network_label_regex=.* transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/ [database] connection = mysql+pymysql://trove:TROVE_DBPASS@controller/trove [keystone_authtoken] project_domain_name = Default project_name = service user_domain_name = Default password = trove username = trove auth_url = http://controller:5000/v3/ auth_type = password [service_credentials] auth_url = http://controller:5000/v3/ region_name = RegionOne project_name = service password = trove project_domain_name = Default user_domain_name = Default username = trove [mariadb] tcp_ports = 3306,4444,4567,4568 [mysql] tcp_ports = 3306 [postgresql] tcp_ports = 5432 \u89e3\u91ca\uff1a - `[Default]`\u5206\u7ec4\u4e2d`bind_host`\u914d\u7f6e\u4e3aTrove\u90e8\u7f72\u8282\u70b9\u7684IP - `nova_compute_url` \u548c `cinder_url` \u4e3aNova\u548cCinder\u5728Keystone\u4e2d\u521b\u5efa\u7684endpoint - `nova_proxy_XXX` \u4e3a\u4e00\u4e2a\u80fd\u8bbf\u95eeNova\u670d\u52a1\u7684\u7528\u6237\u4fe1\u606f\uff0c\u4e0a\u4f8b\u4e2d\u4f7f\u7528`admin`\u7528\u6237\u4e3a\u4f8b - `transport_url` \u4e3a`RabbitMQ`\u8fde\u63a5\u4fe1\u606f\uff0c`RABBIT_PASS`\u66ff\u6362\u4e3aRabbitMQ\u7684\u5bc6\u7801 - `[database]`\u5206\u7ec4\u4e2d\u7684`connection` \u4e3a\u524d\u9762\u5728mysql\u4e2d\u4e3aTrove\u521b\u5efa\u7684\u6570\u636e\u5e93\u4fe1\u606f - Trove\u7684\u7528\u6237\u4fe1\u606f\u4e2d`TROVE_PASS`\u66ff\u6362\u4e3a\u5b9e\u9645trove\u7528\u6237\u7684\u5bc6\u7801 3\u3001\u914d\u7f6e trove-guestagent.conf vim /etc/trove/trove-guestagent.conf [DEFAULT] log_file = trove-guestagent.log log_dir = /var/log/trove/ ignore_users = os_admin control_exchange = trove transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/ rpc_backend = rabbit command_process_timeout = 60 use_syslog = False debug = True [service_credentials] auth_url = http://controller:5000/v3/ region_name = RegionOne project_name = service password = TROVE_PASS project_domain_name = Default user_domain_name = Default username = trove [mysql] docker_image = your-registry/your-repo/mysql backup_docker_image = your-registry/your-repo/db-backup-mysql:1.1.0 \u89e3\u91ca\uff1a guestagent \u662ftrove\u4e2d\u4e00\u4e2a\u72ec\u7acb\u7ec4\u4ef6\uff0c\u9700\u8981\u9884\u5148\u5185\u7f6e\u5230Trove\u901a\u8fc7Nova\u521b\u5efa\u7684\u865a\u62df \u673a\u955c\u50cf\u4e2d\uff0c\u5728\u521b\u5efa\u597d\u6570\u636e\u5e93\u5b9e\u4f8b\u540e\uff0c\u4f1a\u8d77guestagent\u8fdb\u7a0b\uff0c\u8d1f\u8d23\u901a\u8fc7\u6d88\u606f\u961f\u5217\uff08RabbitMQ\uff09\u5411Trove\u4e0a \u62a5\u5fc3\u8df3\uff0c\u56e0\u6b64\u9700\u8981\u914d\u7f6eRabbitMQ\u7684\u7528\u6237\u548c\u5bc6\u7801\u4fe1\u606f\u3002 \u4eceVictoria\u7248\u5f00\u59cb\uff0cTrove\u4f7f\u7528\u4e00\u4e2a\u7edf\u4e00\u7684\u955c\u50cf\u6765\u8dd1\u4e0d\u540c\u7c7b\u578b\u7684\u6570\u636e\u5e93\uff0c\u6570\u636e\u5e93\u670d\u52a1\u8fd0\u884c\u5728Guest\u865a\u62df\u673a\u7684Docker\u5bb9\u5668\u4e2d\u3002 - transport_url \u4e3a RabbitMQ \u8fde\u63a5\u4fe1\u606f\uff0c RABBIT_PASS \u66ff\u6362\u4e3aRabbitMQ\u7684\u5bc6\u7801 - Trove\u7684\u7528\u6237\u4fe1\u606f\u4e2d TROVE_PASS \u66ff\u6362\u4e3a\u5b9e\u9645trove\u7528\u6237\u7684\u5bc6\u7801 4\u3001\u751f\u6210\u6570\u636e Trove \u6570\u636e\u5e93\u8868 su -s /bin/sh -c \"trove-manage db_sync\" trove 4.\u5b8c\u6210\u5b89\u88c5\u914d\u7f6e \u914d\u7f6e Trove \u670d\u52a1\u81ea\u542f\u52a8 systemctl enable openstack-trove-api.service \\ openstack-trove-taskmanager.service \\ openstack-trove-conductor.service 
\u542f\u52a8\u670d\u52a1 systemctl start openstack-trove-api.service \\ openstack-trove-taskmanager.service \\ openstack-trove-conductor.service","title":"Trove \u5b89\u88c5"},{"location":"install/openEuler-22.03-LTS-SP1/OpenStack-wallaby/#swift","text":"Swift \u63d0\u4f9b\u4e86\u5f39\u6027\u53ef\u4f38\u7f29\u3001\u9ad8\u53ef\u7528\u7684\u5206\u5e03\u5f0f\u5bf9\u8c61\u5b58\u50a8\u670d\u52a1\uff0c\u9002\u5408\u5b58\u50a8\u5927\u89c4\u6a21\u975e\u7ed3\u6784\u5316\u6570\u636e\u3002 \u521b\u5efa\u670d\u52a1\u51ed\u8bc1\u3001API\u7aef\u70b9\u3002 \u521b\u5efa\u670d\u52a1\u51ed\u8bc1 #\u521b\u5efaswift\u7528\u6237\uff1a openstack user create --domain default --password-prompt swift #\u4e3aswift\u7528\u6237\u6dfb\u52a0admin\u89d2\u8272\uff1a openstack role add --project service --user swift admin #\u521b\u5efaswift\u670d\u52a1\u5b9e\u4f53\uff1a openstack service create --name swift --description \"OpenStack Object Storage\" object-store \u521b\u5efaswift API \u7aef\u70b9: openstack endpoint create --region RegionOne object-store public http://controller:8080/v1/AUTH_%\\(project_id\\)s openstack endpoint create --region RegionOne object-store internal http://controller:8080/v1/AUTH_%\\(project_id\\)s openstack endpoint create --region RegionOne object-store admin http://controller:8080/v1 \u5b89\u88c5\u8f6f\u4ef6\u5305\uff1a yum install openstack-swift-proxy python3-swiftclient python3-keystoneclient python3-keystonemiddleware memcached \uff08CTL\uff09 \u914d\u7f6eproxy-server\u76f8\u5173\u914d\u7f6e Swift RPM\u5305\u91cc\u5df2\u7ecf\u5305\u542b\u4e86\u4e00\u4e2a\u57fa\u672c\u53ef\u7528\u7684proxy-server.conf\uff0c\u53ea\u9700\u8981\u624b\u52a8\u4fee\u6539\u5176\u4e2d\u7684ip\u548cswift password\u5373\u53ef\u3002 \u6ce8\u610f \u6ce8\u610f\u66ff\u6362password\u4e3a\u60a8\u5728\u8eab\u4efd\u670d\u52a1\u4e2d\u4e3aswift\u7528\u6237\u9009\u62e9\u7684\u5bc6\u7801 \u5b89\u88c5\u548c\u914d\u7f6e\u5b58\u50a8\u8282\u70b9 \uff08STG\uff09 \u5b89\u88c5\u652f\u6301\u7684\u7a0b\u5e8f\u5305: yum install xfsprogs rsync \u5c06/dev/vdb\u548c/dev/vdc\u8bbe\u5907\u683c\u5f0f\u5316\u4e3a XFS mkfs.xfs /dev/vdb mkfs.xfs /dev/vdc \u521b\u5efa\u6302\u8f7d\u70b9\u76ee\u5f55\u7ed3\u6784: mkdir -p /srv/node/vdb mkdir -p /srv/node/vdc \u627e\u5230\u65b0\u5206\u533a\u7684 UUID: blkid \u7f16\u8f91/etc/fstab\u6587\u4ef6\u5e76\u5c06\u4ee5\u4e0b\u5185\u5bb9\u6dfb\u52a0\u5230\u5176\u4e2d: UUID=\"\" /srv/node/vdb xfs noatime 0 2 UUID=\"\" /srv/node/vdc xfs noatime 0 2 \u6302\u8f7d\u8bbe\u5907\uff1a mount /srv/node/vdb mount /srv/node/vdc \u6ce8\u610f \u5982\u679c\u7528\u6237\u4e0d\u9700\u8981\u5bb9\u707e\u529f\u80fd\uff0c\u4ee5\u4e0a\u6b65\u9aa4\u53ea\u9700\u8981\u521b\u5efa\u4e00\u4e2a\u8bbe\u5907\u5373\u53ef\uff0c\u540c\u65f6\u53ef\u4ee5\u8df3\u8fc7\u4e0b\u9762\u7684rsync\u914d\u7f6e \uff08\u53ef\u9009\uff09\u521b\u5efa\u6216\u7f16\u8f91/etc/rsyncd.conf\u6587\u4ef6\u4ee5\u5305\u542b\u4ee5\u4e0b\u5185\u5bb9: [DEFAULT] uid = swift gid = swift log file = /var/log/rsyncd.log pid file = /var/run/rsyncd.pid address = MANAGEMENT_INTERFACE_IP_ADDRESS [account] max connections = 2 path = /srv/node/ read only = False lock file = /var/lock/account.lock [container] max connections = 2 path = /srv/node/ read only = False lock file = /var/lock/container.lock [object] max connections = 2 path = /srv/node/ read only = False lock file = /var/lock/object.lock \u66ff\u6362MANAGEMENT_INTERFACE_IP_ADDRESS\u4e3a\u5b58\u50a8\u8282\u70b9\u4e0a\u7ba1\u7406\u7f51\u7edc\u7684IP\u5730\u5740 
\u542f\u52a8rsyncd\u670d\u52a1\u5e76\u914d\u7f6e\u5b83\u5728\u7cfb\u7edf\u542f\u52a8\u65f6\u542f\u52a8: systemctl enable rsyncd.service systemctl start rsyncd.service \u5728\u5b58\u50a8\u8282\u70b9\u5b89\u88c5\u548c\u914d\u7f6e\u7ec4\u4ef6 \uff08STG\uff09 \u5b89\u88c5\u8f6f\u4ef6\u5305: yum install openstack-swift-account openstack-swift-container openstack-swift-object \u7f16\u8f91/etc/swift\u76ee\u5f55\u7684account-server.conf\u3001container-server.conf\u548cobject-server.conf\u6587\u4ef6\uff0c\u66ff\u6362bind_ip\u4e3a\u5b58\u50a8\u8282\u70b9\u4e0a\u7ba1\u7406\u7f51\u7edc\u7684IP\u5730\u5740\u3002 \u786e\u4fdd\u6302\u8f7d\u70b9\u76ee\u5f55\u7ed3\u6784\u7684\u6b63\u786e\u6240\u6709\u6743: chown -R swift:swift /srv/node \u521b\u5efarecon\u76ee\u5f55\u5e76\u786e\u4fdd\u5176\u62e5\u6709\u6b63\u786e\u7684\u6240\u6709\u6743\uff1a mkdir -p /var/cache/swift chown -R root:swift /var/cache/swift chmod -R 775 /var/cache/swift \u521b\u5efa\u8d26\u53f7\u73af (CTL) \u5207\u6362\u5230/etc/swift\u76ee\u5f55\u3002 cd /etc/swift \u521b\u5efa\u57fa\u7840account.builder\u6587\u4ef6: swift-ring-builder account.builder create 10 1 1 \u5c06\u6bcf\u4e2a\u5b58\u50a8\u8282\u70b9\u6dfb\u52a0\u5230\u73af\u4e2d\uff1a swift-ring-builder account.builder add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6202 --device DEVICE_NAME --weight DEVICE_WEIGHT \u66ff\u6362STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS\u4e3a\u5b58\u50a8\u8282\u70b9\u4e0a\u7ba1\u7406\u7f51\u7edc\u7684IP\u5730\u5740\u3002\u66ff\u6362DEVICE_NAME\u4e3a\u540c\u4e00\u5b58\u50a8\u8282\u70b9\u4e0a\u7684\u5b58\u50a8\u8bbe\u5907\u540d\u79f0 \u6ce8\u610f *** *\u5bf9\u6bcf\u4e2a\u5b58\u50a8\u8282\u70b9\u4e0a\u7684\u6bcf\u4e2a\u5b58\u50a8\u8bbe\u5907\u91cd\u590d\u6b64\u547d\u4ee4 \u9a8c\u8bc1\u6212\u6307\u5185\u5bb9\uff1a swift-ring-builder account.builder \u91cd\u65b0\u5e73\u8861\u6212\u6307\uff1a swift-ring-builder account.builder rebalance \u521b\u5efa\u5bb9\u5668\u73af (CTL) \u5207\u6362\u5230 /etc/swift \u76ee\u5f55\u3002 \u521b\u5efa\u57fa\u7840 container.builder \u6587\u4ef6\uff1a swift-ring-builder container.builder create 10 1 1 \u5c06\u6bcf\u4e2a\u5b58\u50a8\u8282\u70b9\u6dfb\u52a0\u5230\u73af\u4e2d\uff1a swift-ring-builder container.builder \\ add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6201 \\ --device DEVICE_NAME --weight 100 \u66ff\u6362STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS\u4e3a\u5b58\u50a8\u8282\u70b9\u4e0a\u7ba1\u7406\u7f51\u7edc\u7684IP\u5730\u5740\u3002\u66ff\u6362DEVICE_NAME\u4e3a\u540c\u4e00\u5b58\u50a8\u8282\u70b9\u4e0a\u7684\u5b58\u50a8\u8bbe\u5907\u540d\u79f0 \u6ce8\u610f \u5bf9\u6bcf\u4e2a\u5b58\u50a8\u8282\u70b9\u4e0a\u7684\u6bcf\u4e2a\u5b58\u50a8\u8bbe\u5907\u91cd\u590d\u6b64\u547d\u4ee4 \u9a8c\u8bc1\u6212\u6307\u5185\u5bb9\uff1a swift-ring-builder container.builder \u91cd\u65b0\u5e73\u8861\u6212\u6307\uff1a swift-ring-builder container.builder rebalance \u521b\u5efa\u5bf9\u8c61\u73af (CTL) \u5207\u6362\u5230 /etc/swift \u76ee\u5f55\u3002 \u521b\u5efa\u57fa\u7840 object.builder \u6587\u4ef6\uff1a swift-ring-builder object.builder create 10 1 1 \u5c06\u6bcf\u4e2a\u5b58\u50a8\u8282\u70b9\u6dfb\u52a0\u5230\u73af\u4e2d swift-ring-builder object.builder \\ add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6200 \\ --device DEVICE_NAME --weight 100 
\u66ff\u6362STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS\u4e3a\u5b58\u50a8\u8282\u70b9\u4e0a\u7ba1\u7406\u7f51\u7edc\u7684IP\u5730\u5740\u3002\u66ff\u6362DEVICE_NAME\u4e3a\u540c\u4e00\u5b58\u50a8\u8282\u70b9\u4e0a\u7684\u5b58\u50a8\u8bbe\u5907\u540d\u79f0 \u6ce8\u610f *** *\u5bf9\u6bcf\u4e2a\u5b58\u50a8\u8282\u70b9\u4e0a\u7684\u6bcf\u4e2a\u5b58\u50a8\u8bbe\u5907\u91cd\u590d\u6b64\u547d\u4ee4 \u9a8c\u8bc1\u6212\u6307\u5185\u5bb9\uff1a swift-ring-builder object.builder \u91cd\u65b0\u5e73\u8861\u6212\u6307\uff1a swift-ring-builder object.builder rebalance \u5206\u53d1\u73af\u914d\u7f6e\u6587\u4ef6\uff1a \u5c06 account.ring.gz \uff0c container.ring.gz \u4ee5\u53ca object.ring.gz \u6587\u4ef6\u590d\u5236\u5230\u6bcf\u4e2a\u5b58\u50a8\u8282\u70b9\u548c\u8fd0\u884c\u4ee3\u7406\u670d\u52a1\u7684\u4efb\u4f55\u5176\u4ed6\u8282\u70b9\u4e0a\u7684 /etc/swift \u76ee\u5f55\u3002 \u5b8c\u6210\u5b89\u88c5 \u7f16\u8f91 /etc/swift/swift.conf \u6587\u4ef6 [swift-hash] swift_hash_path_suffix = test-hash swift_hash_path_prefix = test-hash [storage-policy:0] name = Policy-0 default = yes \u7528\u552f\u4e00\u503c\u66ff\u6362 test-hash \u5c06swift.conf\u6587\u4ef6\u590d\u5236\u5230/etc/swift\u6bcf\u4e2a\u5b58\u50a8\u8282\u70b9\u548c\u8fd0\u884c\u4ee3\u7406\u670d\u52a1\u7684\u4efb\u4f55\u5176\u4ed6\u8282\u70b9\u4e0a\u7684\u76ee\u5f55\u3002 \u5728\u6240\u6709\u8282\u70b9\u4e0a\uff0c\u786e\u4fdd\u914d\u7f6e\u76ee\u5f55\u7684\u6b63\u786e\u6240\u6709\u6743\uff1a chown -R root:swift /etc/swift \u5728\u63a7\u5236\u5668\u8282\u70b9\u548c\u8fd0\u884c\u4ee3\u7406\u670d\u52a1\u7684\u4efb\u4f55\u5176\u4ed6\u8282\u70b9\u4e0a\uff0c\u542f\u52a8\u5bf9\u8c61\u5b58\u50a8\u4ee3\u7406\u670d\u52a1\u53ca\u5176\u4f9d\u8d56\u9879\uff0c\u5e76\u5c06\u5b83\u4eec\u914d\u7f6e\u4e3a\u5728\u7cfb\u7edf\u542f\u52a8\u65f6\u542f\u52a8\uff1a systemctl enable openstack-swift-proxy.service memcached.service systemctl start openstack-swift-proxy.service memcached.service \u5728\u5b58\u50a8\u8282\u70b9\u4e0a\uff0c\u542f\u52a8\u5bf9\u8c61\u5b58\u50a8\u670d\u52a1\u5e76\u5c06\u5b83\u4eec\u914d\u7f6e\u4e3a\u5728\u7cfb\u7edf\u542f\u52a8\u65f6\u542f\u52a8\uff1a systemctl enable openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service systemctl start openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service systemctl enable openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service systemctl start openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service systemctl enable openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service systemctl start openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service","title":"Swift \u5b89\u88c5"},{"location":"install/openEuler-22.03-LTS-SP1/OpenStack-wallaby/#cyborg","text":"Cyborg\u4e3aOpenStack\u63d0\u4f9b\u52a0\u901f\u5668\u8bbe\u5907\u7684\u652f\u6301\uff0c\u5305\u62ec GPU, FPGA, ASIC, NP, SoCs, NVMe/NOF SSDs, ODP, DPDK/SPDK\u7b49\u7b49\u3002 \u521d\u59cb\u5316\u5bf9\u5e94\u6570\u636e\u5e93 CREATE DATABASE cyborg; GRANT ALL PRIVILEGES ON 
cyborg.* TO 'cyborg'@'localhost' IDENTIFIED BY 'CYBORG_DBPASS'; GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'%' IDENTIFIED BY 'CYBORG_DBPASS'; \u521b\u5efa\u5bf9\u5e94Keystone\u8d44\u6e90\u5bf9\u8c61 $ openstack user create --domain default --password-prompt cyborg $ openstack role add --project service --user cyborg admin $ openstack service create --name cyborg --description \"Acceleration Service\" accelerator $ openstack endpoint create --region RegionOne \\ accelerator public http://:6666/v1 $ openstack endpoint create --region RegionOne \\ accelerator internal http://:6666/v1 $ openstack endpoint create --region RegionOne \\ accelerator admin http://:6666/v1 \u5b89\u88c5Cyborg yum install openstack-cyborg \u914d\u7f6eCyborg \u4fee\u6539 /etc/cyborg/cyborg.conf [DEFAULT] transport_url = rabbit://%RABBITMQ_USER%:%RABBITMQ_PASSWORD%@%OPENSTACK_HOST_IP%:5672/ use_syslog = False state_path = /var/lib/cyborg debug = True [database] connection = mysql+pymysql://%DATABASE_USER%:%DATABASE_PASSWORD%@%OPENSTACK_HOST_IP%/cyborg [service_catalog] project_domain_id = default user_domain_id = default project_name = service password = PASSWORD username = cyborg auth_url = http://%OPENSTACK_HOST_IP%/identity auth_type = password [placement] project_domain_name = Default project_name = service user_domain_name = Default password = PASSWORD username = placement auth_url = http://%OPENSTACK_HOST_IP%/identity auth_type = password [keystone_authtoken] memcached_servers = localhost:11211 project_domain_name = Default project_name = service user_domain_name = Default password = PASSWORD username = cyborg auth_url = http://%OPENSTACK_HOST_IP%/identity auth_type = password \u81ea\u884c\u4fee\u6539\u5bf9\u5e94\u7684\u7528\u6237\u540d\u3001\u5bc6\u7801\u3001IP\u7b49\u4fe1\u606f \u540c\u6b65\u6570\u636e\u5e93\u8868\u683c cyborg-dbsync --config-file /etc/cyborg/cyborg.conf upgrade \u542f\u52a8Cyborg\u670d\u52a1 systemctl enable openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent systemctl start openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent","title":"Cyborg \u5b89\u88c5"},{"location":"install/openEuler-22.03-LTS-SP1/OpenStack-wallaby/#aodh","text":"\u521b\u5efa\u6570\u636e\u5e93 CREATE DATABASE aodh; GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'localhost' IDENTIFIED BY 'AODH_DBPASS'; GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'%' IDENTIFIED BY 'AODH_DBPASS'; \u521b\u5efa\u5bf9\u5e94Keystone\u8d44\u6e90\u5bf9\u8c61 openstack user create --domain default --password-prompt aodh openstack role add --project service --user aodh admin openstack service create --name aodh --description \"Telemetry\" alarming openstack endpoint create --region RegionOne alarming public http://controller:8042 openstack endpoint create --region RegionOne alarming internal http://controller:8042 openstack endpoint create --region RegionOne alarming admin http://controller:8042 \u5b89\u88c5Aodh yum install openstack-aodh-api openstack-aodh-evaluator openstack-aodh-notifier openstack-aodh-listener openstack-aodh-expirer python3-aodhclient \u6ce8\u610f aodh\u4f9d\u8d56\u7684\u8f6f\u4ef6\u5305pytho3-pyparsing\u5728openEuler\u7684OS\u4ed3\u4e0d\u9002\u914d\uff0c\u9700\u8981\u8986\u76d6\u5b89\u88c5OpenStack\u5bf9\u5e94\u7248\u672c\uff0c\u53ef\u4ee5\u4f7f\u7528 yum list |grep pyparsing |grep OpenStack | awk '{print $2}' \u83b7\u53d6\u5bf9\u5e94\u7684\u7248\u672c VERSION,\u7136\u540e\u518d yum install -y python3-pyparsing-VERSION \u8986\u76d6\u5b89\u88c5\u9002\u914d\u7684pyparsing 
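The pyparsing workaround described in the note above can be collapsed into two shell commands. This is a minimal sketch, assuming the `yum list` output contains exactly one OpenStack-tagged `python3-pyparsing` entry; adjust the grep filters if your repository labels the package differently.

```
# Pick out the OpenStack-matched python3-pyparsing version (per the note above),
# then install it over the incompatible version from the openEuler OS repo.
VERSION=$(yum list | grep pyparsing | grep OpenStack | awk '{print $2}')
yum install -y python3-pyparsing-${VERSION}
```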
\u4fee\u6539\u914d\u7f6e\u6587\u4ef6 [database] connection = mysql+pymysql://aodh:AODH_DBPASS@controller/aodh [DEFAULT] transport_url = rabbit://openstack:RABBIT_PASS@controller auth_strategy = keystone [keystone_authtoken] www_authenticate_uri = http://controller:5000 auth_url = http://controller:5000 memcached_servers = controller:11211 auth_type = password project_domain_id = default user_domain_id = default project_name = service username = aodh password = AODH_PASS [service_credentials] auth_type = password auth_url = http://controller:5000/v3 project_domain_id = default user_domain_id = default project_name = service username = aodh password = AODH_PASS interface = internalURL region_name = RegionOne \u521d\u59cb\u5316\u6570\u636e\u5e93 aodh-dbsync \u542f\u52a8Aodh\u670d\u52a1 systemctl enable openstack-aodh-api.service openstack-aodh-evaluator.service openstack-aodh-notifier.service openstack-aodh-listener.service systemctl start openstack-aodh-api.service openstack-aodh-evaluator.service openstack-aodh-notifier.service openstack-aodh-listener.service","title":"Aodh \u5b89\u88c5"},{"location":"install/openEuler-22.03-LTS-SP1/OpenStack-wallaby/#gnocchi","text":"\u521b\u5efa\u6570\u636e\u5e93 CREATE DATABASE gnocchi; GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'localhost' IDENTIFIED BY 'GNOCCHI_DBPASS'; GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'%' IDENTIFIED BY 'GNOCCHI_DBPASS'; \u521b\u5efa\u5bf9\u5e94Keystone\u8d44\u6e90\u5bf9\u8c61 openstack user create --domain default --password-prompt gnocchi openstack role add --project service --user gnocchi admin openstack service create --name gnocchi --description \"Metric Service\" metric openstack endpoint create --region RegionOne metric public http://controller:8041 openstack endpoint create --region RegionOne metric internal http://controller:8041 openstack endpoint create --region RegionOne metric admin http://controller:8041 \u5b89\u88c5Gnocchi yum install openstack-gnocchi-api openstack-gnocchi-metricd python3-gnocchiclient \u4fee\u6539\u914d\u7f6e\u6587\u4ef6 /etc/gnocchi/gnocchi.conf [api] auth_mode = keystone port = 8041 uwsgi_mode = http-socket [keystone_authtoken] auth_type = password auth_url = http://controller:5000/v3 project_domain_name = Default user_domain_name = Default project_name = service username = gnocchi password = GNOCCHI_PASS interface = internalURL region_name = RegionOne [indexer] url = mysql+pymysql://gnocchi:GNOCCHI_DBPASS@controller/gnocchi [storage] # coordination_url is not required but specifying one will improve # performance with better workload division across workers. 
coordination_url = redis://controller:6379 file_basepath = /var/lib/gnocchi driver = file \u521d\u59cb\u5316\u6570\u636e\u5e93 gnocchi-upgrade \u542f\u52a8Gnocchi\u670d\u52a1 systemctl enable openstack-gnocchi-api.service openstack-gnocchi-metricd.service systemctl start openstack-gnocchi-api.service openstack-gnocchi-metricd.service","title":"Gnocchi \u5b89\u88c5"},{"location":"install/openEuler-22.03-LTS-SP1/OpenStack-wallaby/#ceilometer","text":"\u521b\u5efa\u5bf9\u5e94Keystone\u8d44\u6e90\u5bf9\u8c61 openstack user create --domain default --password-prompt ceilometer openstack role add --project service --user ceilometer admin openstack service create --name ceilometer --description \"Telemetry\" metering \u5b89\u88c5Ceilometer yum install openstack-ceilometer-notification openstack-ceilometer-central \u4fee\u6539\u914d\u7f6e\u6587\u4ef6 /etc/ceilometer/pipeline.yaml publishers: # set address of Gnocchi # + filter out Gnocchi-related activity meters (Swift driver) # + set default archive policy - gnocchi://?filter_project=service&archive_policy=low \u4fee\u6539\u914d\u7f6e\u6587\u4ef6 /etc/ceilometer/ceilometer.conf [DEFAULT] transport_url = rabbit://openstack:RABBIT_PASS@controller [service_credentials] auth_type = password auth_url = http://controller:5000/v3 project_domain_id = default user_domain_id = default project_name = service username = ceilometer password = CEILOMETER_PASS interface = internalURL region_name = RegionOne \u521d\u59cb\u5316\u6570\u636e\u5e93 ceilometer-upgrade \u542f\u52a8Ceilometer\u670d\u52a1 systemctl enable openstack-ceilometer-notification.service openstack-ceilometer-central.service systemctl start openstack-ceilometer-notification.service openstack-ceilometer-central.service","title":"Ceilometer \u5b89\u88c5"},{"location":"install/openEuler-22.03-LTS-SP1/OpenStack-wallaby/#heat","text":"\u521b\u5efa heat \u6570\u636e\u5e93\uff0c\u5e76\u6388\u4e88 heat \u6570\u636e\u5e93\u6b63\u786e\u7684\u8bbf\u95ee\u6743\u9650\uff0c\u66ff\u6362 HEAT_DBPASS \u4e3a\u5408\u9002\u7684\u5bc6\u7801 CREATE DATABASE heat; GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'localhost' IDENTIFIED BY 'HEAT_DBPASS'; GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'%' IDENTIFIED BY 'HEAT_DBPASS'; \u521b\u5efa\u670d\u52a1\u51ed\u8bc1\uff0c\u521b\u5efa heat \u7528\u6237\uff0c\u5e76\u4e3a\u5176\u589e\u52a0 admin \u89d2\u8272 openstack user create --domain default --password-prompt heat openstack role add --project service --user heat admin \u521b\u5efa heat \u548c heat-cfn \u670d\u52a1\u53ca\u5176\u5bf9\u5e94\u7684API\u7aef\u70b9 openstack service create --name heat --description \"Orchestration\" orchestration openstack service create --name heat-cfn --description \"Orchestration\" cloudformation openstack endpoint create --region RegionOne orchestration public http://controller:8004/v1/%\\(tenant_id\\)s openstack endpoint create --region RegionOne orchestration internal http://controller:8004/v1/%\\(tenant_id\\)s openstack endpoint create --region RegionOne orchestration admin http://controller:8004/v1/%\\(tenant_id\\)s openstack endpoint create --region RegionOne cloudformation public http://controller:8000/v1 openstack endpoint create --region RegionOne cloudformation internal http://controller:8000/v1 openstack endpoint create --region RegionOne cloudformation admin http://controller:8000/v1 \u521b\u5efastack\u7ba1\u7406\u7684\u989d\u5916\u4fe1\u606f\uff0c\u5305\u62ec heat domain\u53ca\u5176\u5bf9\u5e94domain\u7684admin\u7528\u6237 heat_domain_admin \uff0c heat_stack_owner \u89d2\u8272\uff0c 
heat_stack_user \u89d2\u8272 openstack user create --domain heat --password-prompt heat_domain_admin openstack role add --domain heat --user-domain heat --user heat_domain_admin admin openstack role create heat_stack_owner openstack role create heat_stack_user \u5b89\u88c5\u8f6f\u4ef6\u5305 yum install openstack-heat-api openstack-heat-api-cfn openstack-heat-engine \u4fee\u6539\u914d\u7f6e\u6587\u4ef6 /etc/heat/heat.conf [DEFAULT] transport_url = rabbit://openstack:RABBIT_PASS@controller heat_metadata_server_url = http://controller:8000 heat_waitcondition_server_url = http://controller:8000/v1/waitcondition stack_domain_admin = heat_domain_admin stack_domain_admin_password = HEAT_DOMAIN_PASS stack_user_domain_name = heat [database] connection = mysql+pymysql://heat:HEAT_DBPASS@controller/heat [keystone_authtoken] www_authenticate_uri = http://controller:5000 auth_url = http://controller:5000 memcached_servers = controller:11211 auth_type = password project_domain_name = default user_domain_name = default project_name = service username = heat password = HEAT_PASS [trustee] auth_type = password auth_url = http://controller:5000 username = heat password = HEAT_PASS user_domain_name = default [clients_keystone] auth_uri = http://controller:5000 \u521d\u59cb\u5316 heat \u6570\u636e\u5e93\u8868 su -s /bin/sh -c \"heat-manage db_sync\" heat \u542f\u52a8\u670d\u52a1 systemctl enable openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service systemctl start openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service","title":"Heat \u5b89\u88c5"},{"location":"install/openEuler-22.03-LTS-SP1/OpenStack-wallaby/#openstack-sigoos","text":"oos (openEuler OpenStack SIG)\u662fOpenStack SIG\u63d0\u4f9b\u7684\u547d\u4ee4\u884c\u5de5\u5177\u3002\u5176\u4e2d oos env \u7cfb\u5217\u547d\u4ee4\u63d0\u4f9b\u4e86\u4e00\u952e\u90e8\u7f72OpenStack \uff08 all in one \u6216\u4e09\u8282\u70b9 cluster \uff09\u7684ansible\u811a\u672c\uff0c\u7528\u6237\u53ef\u4ee5\u4f7f\u7528\u8be5\u811a\u672c\u5feb\u901f\u90e8\u7f72\u4e00\u5957\u57fa\u4e8e openEuler RPM \u7684 OpenStack \u73af\u5883\u3002 oos \u5de5\u5177\u652f\u6301\u5bf9\u63a5\u4e91provider\uff08\u76ee\u524d\u4ec5\u652f\u6301\u534e\u4e3a\u4e91provider\uff09\u548c\u4e3b\u673a\u7eb3\u7ba1\u4e24\u79cd\u65b9\u5f0f\u6765\u90e8\u7f72 OpenStack \u73af\u5883\uff0c\u4e0b\u9762\u4ee5\u5bf9\u63a5\u534e\u4e3a\u4e91\u90e8\u7f72\u4e00\u5957 all in one \u7684OpenStack\u73af\u5883\u4e3a\u4f8b\u8bf4\u660e oos \u5de5\u5177\u7684\u4f7f\u7528\u65b9\u6cd5\u3002 \u5b89\u88c5 oos \u5de5\u5177 pip install openstack-sig-tool \u914d\u7f6e\u5bf9\u63a5\u534e\u4e3a\u4e91provider\u7684\u4fe1\u606f \u6253\u5f00 /usr/local/etc/oos/oos.conf \u6587\u4ef6\uff0c\u4fee\u6539\u914d\u7f6e\u4e3a\u60a8\u62e5\u6709\u7684\u534e\u4e3a\u4e91\u8d44\u6e90\u4fe1\u606f\uff1a [huaweicloud] ak = sk = region = ap-southeast-3 root_volume_size = 100 data_volume_size = 100 security_group_name = oos image_format = openEuler-%%(release)s-%%(arch)s vpc_name = oos_vpc subnet1_name = oos_subnet1 subnet2_name = oos_subnet2 \u914d\u7f6e OpenStack \u73af\u5883\u4fe1\u606f \u6253\u5f00 /usr/local/etc/oos/oos.conf \u6587\u4ef6\uff0c\u6839\u636e\u5f53\u524d\u673a\u5668\u73af\u5883\u548c\u9700\u6c42\u4fee\u6539\u914d\u7f6e\u3002\u5185\u5bb9\u5982\u4e0b\uff1a [environment] mysql_root_password = root mysql_project_password = root rabbitmq_password = root project_identity_password = root enabled_service = 
keystone,neutron,cinder,placement,nova,glance,horizon,aodh,ceilometer,cyborg,gnocchi,kolla,heat,swift,trove,tempest neutron_provider_interface_name = br-ex default_ext_subnet_range = 10.100.100.0/24 default_ext_subnet_gateway = 10.100.100.1 neutron_dataplane_interface_name = eth1 cinder_block_device = vdb swift_storage_devices = vdc swift_hash_path_suffix = ash swift_hash_path_prefix = has glance_api_workers = 2 cinder_api_workers = 2 nova_api_workers = 2 nova_metadata_api_workers = 2 nova_conductor_workers = 2 nova_scheduler_workers = 2 neutron_api_workers = 2 horizon_allowed_host = * kolla_openeuler_plugin = false \u5173\u952e\u914d\u7f6e \u914d\u7f6e\u9879 \u89e3\u91ca enabled_service \u5b89\u88c5\u670d\u52a1\u5217\u8868\uff0c\u6839\u636e\u7528\u6237\u9700\u6c42\u81ea\u884c\u5220\u51cf neutron_provider_interface_name neutron L3\u7f51\u6865\u540d\u79f0 default_ext_subnet_range neutron\u79c1\u7f51IP\u6bb5 default_ext_subnet_gateway neutron\u79c1\u7f51gateway neutron_dataplane_interface_name neutron\u4f7f\u7528\u7684\u7f51\u5361\uff0c\u63a8\u8350\u4f7f\u7528\u4e00\u5f20\u65b0\u7684\u7f51\u5361\uff0c\u4ee5\u514d\u548c\u73b0\u6709\u7f51\u5361\u51b2\u7a81\uff0c\u9632\u6b62all in one\u4e3b\u673a\u65ad\u8fde\u7684\u60c5\u51b5 cinder_block_device cinder\u4f7f\u7528\u7684\u5377\u8bbe\u5907\u540d swift_storage_devices swift\u4f7f\u7528\u7684\u5377\u8bbe\u5907\u540d kolla_openeuler_plugin \u662f\u5426\u542f\u7528kolla plugin\u3002\u8bbe\u7f6e\u4e3aTrue\uff0ckolla\u5c06\u652f\u6301\u90e8\u7f72openEuler\u5bb9\u5668 \u534e\u4e3a\u4e91\u4e0a\u9762\u521b\u5efa\u4e00\u53f0openEuler 22.03-LTS-SP1\u7684x86_64\u865a\u62df\u673a\uff0c\u7528\u4e8e\u90e8\u7f72 all in one \u7684 OpenStack # sshpass\u5728`oos env create`\u8fc7\u7a0b\u4e2d\u88ab\u4f7f\u7528\uff0c\u7528\u4e8e\u914d\u7f6e\u5bf9\u76ee\u6807\u865a\u62df\u673a\u7684\u514d\u5bc6\u8bbf\u95ee dnf install sshpass oos env create -r 22.03-lts-sp1 -f small -a x86 -n test-oos all_in_one \u5177\u4f53\u7684\u53c2\u6570\u53ef\u4ee5\u4f7f\u7528 oos env create --help \u547d\u4ee4\u67e5\u770b \u90e8\u7f72OpenStack all in one \u73af\u5883 oos env setup test-oos -r wallaby \u5177\u4f53\u7684\u53c2\u6570\u53ef\u4ee5\u4f7f\u7528 oos env setup --help \u547d\u4ee4\u67e5\u770b \u521d\u59cb\u5316tempest\u73af\u5883 \u5982\u679c\u7528\u6237\u60f3\u4f7f\u7528\u8be5\u73af\u5883\u8fd0\u884ctempest\u6d4b\u8bd5\u7684\u8bdd\uff0c\u53ef\u4ee5\u6267\u884c\u547d\u4ee4 oos env init \uff0c\u4f1a\u81ea\u52a8\u628atempest\u9700\u8981\u7684OpenStack\u8d44\u6e90\u81ea\u52a8\u521b\u5efa\u597d oos env init test-oos \u547d\u4ee4\u6267\u884c\u6210\u529f\u540e\uff0c\u5728\u7528\u6237\u7684\u6839\u76ee\u5f55\u4e0b\u4f1a\u751f\u6210mytest\u76ee\u5f55\uff0c\u8fdb\u5165\u5176\u4e2d\u5c31\u53ef\u4ee5\u6267\u884ctempest run\u547d\u4ee4\u4e86\u3002 \u5982\u679c\u662f\u4ee5\u4e3b\u673a\u7eb3\u7ba1\u7684\u65b9\u5f0f\u90e8\u7f72 OpenStack \u73af\u5883\uff0c\u603b\u4f53\u903b\u8f91\u4e0e\u4e0a\u6587\u5bf9\u63a5\u534e\u4e3a\u4e91\u65f6\u4e00\u81f4\uff0c1\u30013\u30015\u30016\u6b65\u64cd\u4f5c\u4e0d\u53d8\uff0c\u53bb\u9664\u7b2c2\u6b65\u5bf9\u534e\u4e3a\u4e91provider\u4fe1\u606f\u7684\u914d\u7f6e\uff0c\u7b2c4\u6b65\u7531\u5728\u534e\u4e3a\u4e91\u4e0a\u521b\u5efa\u865a\u62df\u673a\u6539\u4e3a\u7eb3\u7ba1\u4e3b\u673a\u64cd\u4f5c\u3002 # sshpass\u5728`oos env create`\u8fc7\u7a0b\u4e2d\u88ab\u4f7f\u7528\uff0c\u7528\u4e8e\u914d\u7f6e\u5bf9\u76ee\u6807\u4e3b\u673a\u7684\u514d\u5bc6\u8bbf\u95ee dnf install sshpass oos env manage -r 22.03-lts-sp1 -i TARGET_MACHINE_IP -p TARGET_MACHINE_PASSWD -n test-oos 
\u66ff\u6362 TARGET_MACHINE_IP \u4e3a\u76ee\u6807\u673aip\u3001 TARGET_MACHINE_PASSWD \u4e3a\u76ee\u6807\u673a\u5bc6\u7801\u3002\u5177\u4f53\u7684\u53c2\u6570\u53ef\u4ee5\u4f7f\u7528 oos env manage --help \u547d\u4ee4\u67e5\u770b\u3002","title":"\u57fa\u4e8eOpenStack SIG\u5f00\u53d1\u5de5\u5177oos\u5feb\u901f\u90e8\u7f72"},{"location":"install/openEuler-22.03-LTS-SP2/OpenStack-train/","text":"OpenStack-Train \u90e8\u7f72\u6307\u5357 \u00b6 OpenStack-Train \u90e8\u7f72\u6307\u5357 OpenStack \u7b80\u4ecb \u7ea6\u5b9a \u51c6\u5907\u73af\u5883 \u73af\u5883\u914d\u7f6e \u5b89\u88c5 SQL DataBase \u5b89\u88c5 RabbitMQ \u5b89\u88c5 Memcached \u5b89\u88c5 OpenStack Keystone \u5b89\u88c5 Glance \u5b89\u88c5 Placement\u5b89\u88c5 Nova \u5b89\u88c5 Neutron \u5b89\u88c5 Cinder \u5b89\u88c5 horizon \u5b89\u88c5 Tempest \u5b89\u88c5 Ironic \u5b89\u88c5 Kolla \u5b89\u88c5 Trove \u5b89\u88c5 Swift \u5b89\u88c5 Cyborg \u5b89\u88c5 Aodh \u5b89\u88c5 Gnocchi \u5b89\u88c5 Ceilometer \u5b89\u88c5 Heat \u5b89\u88c5 \u57fa\u4e8eOpenStack SIG\u5f00\u53d1\u5de5\u5177oos\u5feb\u901f\u90e8\u7f72 \u57fa\u4e8eOpenStack SIG\u90e8\u7f72\u5de5\u5177opensd\u90e8\u7f72 \u90e8\u7f72\u6b65\u9aa4 1. \u90e8\u7f72\u524d\u9700\u8981\u786e\u8ba4\u7684\u4fe1\u606f 2. ceph pool\u4e0e\u8ba4\u8bc1\u521b\u5efa\uff08\u53ef\u9009\uff09 2.1 \u521b\u5efapool: 2.2 \u521d\u59cb\u5316pool 2.3 \u521b\u5efa\u7528\u6237\u8ba4\u8bc1 3. \u914d\u7f6elvm\uff08\u53ef\u9009\uff09 4. \u914d\u7f6eyum repo 4.1 \u5907\u4efdyum\u6e90 4.2 \u914d\u7f6eyum repo 4.3 \u66f4\u65b0yum\u7f13\u5b58 5. \u5b89\u88c5opensd 5.1 \u514b\u9686opensd\u6e90\u7801\u5e76\u5b89\u88c5 6. \u505assh\u4e92\u4fe1 6.1 \u751f\u6210\u5bc6\u94a5\u5bf9 6.2 \u751f\u6210\u4e3b\u673aIP\u5730\u5740\u6587\u4ef6 6.3 \u66f4\u6539\u5bc6\u7801\u5e76\u6267\u884c\u811a\u672c 6.4 \u90e8\u7f72\u8282\u70b9\u4e0eceph monitor\u505a\u4e92\u4fe1\uff08\u53ef\u9009\uff09 7. \u914d\u7f6eopensd 7.1 \u751f\u6210\u968f\u673a\u5bc6\u7801 7.2 \u914d\u7f6einventory\u6587\u4ef6 7.3 \u914d\u7f6e\u5168\u5c40\u53d8\u91cf 7.4 \u68c0\u67e5\u6240\u6709\u8282\u70b9ssh\u8fde\u63a5\u72b6\u6001 8. 
\u6267\u884c\u90e8\u7f72 8.1 \u6267\u884cbootstrap 8.2 \u91cd\u542f\u670d\u52a1\u5668 8.3 \u6267\u884c\u90e8\u7f72\u524d\u68c0\u67e5 8.4 \u6267\u884c\u90e8\u7f72 OpenStack \u7b80\u4ecb \u00b6 OpenStack \u662f\u4e00\u4e2a\u793e\u533a\uff0c\u4e5f\u662f\u4e00\u4e2a\u9879\u76ee\u3002\u5b83\u63d0\u4f9b\u4e86\u4e00\u4e2a\u90e8\u7f72\u4e91\u7684\u64cd\u4f5c\u5e73\u53f0\u6216\u5de5\u5177\u96c6\uff0c\u4e3a\u7ec4\u7ec7\u63d0\u4f9b\u53ef\u6269\u5c55\u7684\u3001\u7075\u6d3b\u7684\u4e91\u8ba1\u7b97\u3002 \u4f5c\u4e3a\u4e00\u4e2a\u5f00\u6e90\u7684\u4e91\u8ba1\u7b97\u7ba1\u7406\u5e73\u53f0\uff0cOpenStack \u7531nova\u3001cinder\u3001neutron\u3001glance\u3001keystone\u3001horizon\u7b49\u51e0\u4e2a\u4e3b\u8981\u7684\u7ec4\u4ef6\u7ec4\u5408\u8d77\u6765\u5b8c\u6210\u5177\u4f53\u5de5\u4f5c\u3002OpenStack \u652f\u6301\u51e0\u4e4e\u6240\u6709\u7c7b\u578b\u7684\u4e91\u73af\u5883\uff0c\u9879\u76ee\u76ee\u6807\u662f\u63d0\u4f9b\u5b9e\u65bd\u7b80\u5355\u3001\u53ef\u5927\u89c4\u6a21\u6269\u5c55\u3001\u4e30\u5bcc\u3001\u6807\u51c6\u7edf\u4e00\u7684\u4e91\u8ba1\u7b97\u7ba1\u7406\u5e73\u53f0\u3002OpenStack \u901a\u8fc7\u5404\u79cd\u4e92\u8865\u7684\u670d\u52a1\u63d0\u4f9b\u4e86\u57fa\u7840\u8bbe\u65bd\u5373\u670d\u52a1\uff08IaaS\uff09\u7684\u89e3\u51b3\u65b9\u6848\uff0c\u6bcf\u4e2a\u670d\u52a1\u63d0\u4f9b API \u8fdb\u884c\u96c6\u6210\u3002 openEuler 22.03-LTS-SP2\u7248\u672c\u5b98\u65b9\u6e90\u5df2\u7ecf\u652f\u6301 OpenStack-Train \u7248\u672c\uff0c\u7528\u6237\u53ef\u4ee5\u914d\u7f6e\u597d yum \u6e90\u540e\u6839\u636e\u6b64\u6587\u6863\u8fdb\u884c OpenStack \u90e8\u7f72\u3002 \u7ea6\u5b9a \u00b6 OpenStack \u652f\u6301\u591a\u79cd\u5f62\u6001\u90e8\u7f72\uff0c\u6b64\u6587\u6863\u652f\u6301 ALL in One \u4ee5\u53ca Distributed \u4e24\u79cd\u90e8\u7f72\u65b9\u5f0f\uff0c\u6309\u7167\u5982\u4e0b\u65b9\u5f0f\u7ea6\u5b9a\uff1a ALL in One \u6a21\u5f0f: \u5ffd\u7565\u6240\u6709\u53ef\u80fd\u7684\u540e\u7f00 Distributed \u6a21\u5f0f: \u4ee5 `(CTL)` \u4e3a\u540e\u7f00\u8868\u793a\u6b64\u6761\u914d\u7f6e\u6216\u8005\u547d\u4ee4\u4ec5\u9002\u7528`\u63a7\u5236\u8282\u70b9` \u4ee5 `(CPT)` \u4e3a\u540e\u7f00\u8868\u793a\u6b64\u6761\u914d\u7f6e\u6216\u8005\u547d\u4ee4\u4ec5\u9002\u7528`\u8ba1\u7b97\u8282\u70b9` \u4ee5 `(STG)` \u4e3a\u540e\u7f00\u8868\u793a\u6b64\u6761\u914d\u7f6e\u6216\u8005\u547d\u4ee4\u4ec5\u9002\u7528`\u5b58\u50a8\u8282\u70b9` \u9664\u6b64\u4e4b\u5916\u8868\u793a\u6b64\u6761\u914d\u7f6e\u6216\u8005\u547d\u4ee4\u540c\u65f6\u9002\u7528`\u63a7\u5236\u8282\u70b9`\u548c`\u8ba1\u7b97\u8282\u70b9` \u6ce8\u610f \u6d89\u53ca\u5230\u4ee5\u4e0a\u7ea6\u5b9a\u7684\u670d\u52a1\u5982\u4e0b\uff1a Cinder Nova Neutron \u51c6\u5907\u73af\u5883 \u00b6 \u73af\u5883\u914d\u7f6e \u00b6 \u542f\u52a8OpenStack Train yum\u6e90 yum update yum install openstack-release-train yum clean all && yum makecache \u6ce8\u610f \uff1a\u5982\u679c\u4f60\u7684\u73af\u5883\u7684YUM\u6e90\u6ca1\u6709\u542f\u7528EPOL\uff0c\u9700\u8981\u540c\u65f6\u914d\u7f6eEPOL\uff0c\u786e\u4fddEPOL\u5df2\u914d\u7f6e\uff0c\u5982\u4e0b\u6240\u793a vi /etc/yum.repos.d/openEuler.repo [EPOL] name=EPOL baseurl=http://repo.openeuler.org/openEuler-22.03-LTS-SP2/EPOL/main/$basearch/ enabled=1 gpgcheck=1 gpgkey=http://repo.openeuler.org/openEuler-22.03-LTS-SP2/OS/$basearch/RPM-GPG-KEY-openEuler EOF \u4fee\u6539\u4e3b\u673a\u540d\u4ee5\u53ca\u6620\u5c04 \u8bbe\u7f6e\u5404\u4e2a\u8282\u70b9\u7684\u4e3b\u673a\u540d hostnamectl set-hostname controller (CTL) hostnamectl set-hostname compute (CPT) \u5047\u8bbecontroller\u8282\u70b9\u7684IP\u662f 10.0.0.11 ,compute\u8282\u70b9\u7684IP\u662f 
10.0.0.12 \uff08\u5982\u679c\u5b58\u5728\u7684\u8bdd\uff09,\u5219\u4e8e /etc/hosts \u65b0\u589e\u5982\u4e0b\uff1a 10.0.0.11 controller 10.0.0.12 compute \u5b89\u88c5 SQL DataBase \u00b6 \u6267\u884c\u5982\u4e0b\u547d\u4ee4\uff0c\u5b89\u88c5\u8f6f\u4ef6\u5305\u3002 yum install mariadb mariadb-server python3-PyMySQL \u6267\u884c\u5982\u4e0b\u547d\u4ee4\uff0c\u521b\u5efa\u5e76\u7f16\u8f91 /etc/my.cnf.d/openstack.cnf \u6587\u4ef6\u3002 vim /etc/my.cnf.d/openstack.cnf [mysqld] bind-address = 10.0.0.11 default-storage-engine = innodb innodb_file_per_table = on max_connections = 4096 collation-server = utf8_general_ci character-set-server = utf8 \u6ce8\u610f \u5176\u4e2d bind-address \u8bbe\u7f6e\u4e3a\u63a7\u5236\u8282\u70b9\u7684\u7ba1\u7406IP\u5730\u5740\u3002 \u542f\u52a8 DataBase \u670d\u52a1\uff0c\u5e76\u4e3a\u5176\u914d\u7f6e\u5f00\u673a\u81ea\u542f\u52a8\uff1a systemctl enable mariadb.service systemctl start mariadb.service \u914d\u7f6eDataBase\u7684\u9ed8\u8ba4\u5bc6\u7801\uff08\u53ef\u9009\uff09 mysql_secure_installation \u6ce8\u610f \u6839\u636e\u63d0\u793a\u8fdb\u884c\u5373\u53ef \u5b89\u88c5 RabbitMQ \u00b6 \u6267\u884c\u5982\u4e0b\u547d\u4ee4\uff0c\u5b89\u88c5\u8f6f\u4ef6\u5305\u3002 yum install rabbitmq-server \u542f\u52a8 RabbitMQ \u670d\u52a1\uff0c\u5e76\u4e3a\u5176\u914d\u7f6e\u5f00\u673a\u81ea\u542f\u52a8\u3002 systemctl enable rabbitmq-server.service systemctl start rabbitmq-server.service \u6dfb\u52a0 OpenStack\u7528\u6237\u3002 rabbitmqctl add_user openstack RABBIT_PASS \u6ce8\u610f \u66ff\u6362 RABBIT_PASS \uff0c\u4e3a OpenStack \u7528\u6237\u8bbe\u7f6e\u5bc6\u7801 \u8bbe\u7f6eopenstack\u7528\u6237\u6743\u9650\uff0c\u5141\u8bb8\u8fdb\u884c\u914d\u7f6e\u3001\u5199\u3001\u8bfb\uff1a rabbitmqctl set_permissions openstack \".*\" \".*\" \".*\" \u5b89\u88c5 Memcached \u00b6 \u6267\u884c\u5982\u4e0b\u547d\u4ee4\uff0c\u5b89\u88c5\u4f9d\u8d56\u8f6f\u4ef6\u5305\u3002 yum install memcached python3-memcached \u7f16\u8f91 /etc/sysconfig/memcached \u6587\u4ef6\u3002 vim /etc/sysconfig/memcached OPTIONS=\"-l 127.0.0.1,::1,controller\" \u6267\u884c\u5982\u4e0b\u547d\u4ee4\uff0c\u542f\u52a8 Memcached \u670d\u52a1\uff0c\u5e76\u4e3a\u5176\u914d\u7f6e\u5f00\u673a\u542f\u52a8\u3002 systemctl enable memcached.service systemctl start memcached.service \u6ce8\u610f \u670d\u52a1\u542f\u52a8\u540e\uff0c\u53ef\u4ee5\u901a\u8fc7\u547d\u4ee4 memcached-tool controller stats \u786e\u4fdd\u542f\u52a8\u6b63\u5e38\uff0c\u670d\u52a1\u53ef\u7528\uff0c\u5176\u4e2d\u53ef\u4ee5\u5c06 controller \u66ff\u6362\u4e3a\u63a7\u5236\u8282\u70b9\u7684\u7ba1\u7406IP\u5730\u5740\u3002 \u5b89\u88c5 OpenStack \u00b6 Keystone \u5b89\u88c5 \u00b6 \u521b\u5efa keystone \u6570\u636e\u5e93\u5e76\u6388\u6743\u3002 mysql -u root -p MariaDB [(none)]> CREATE DATABASE keystone; MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \\ IDENTIFIED BY 'KEYSTONE_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \\ IDENTIFIED BY 'KEYSTONE_DBPASS'; MariaDB [(none)]> exit \u6ce8\u610f \u66ff\u6362 KEYSTONE_DBPASS \uff0c\u4e3a Keystone \u6570\u636e\u5e93\u8bbe\u7f6e\u5bc6\u7801 \u5b89\u88c5\u8f6f\u4ef6\u5305\u3002 yum install openstack-keystone httpd mod_wsgi \u914d\u7f6ekeystone\u76f8\u5173\u914d\u7f6e vim /etc/keystone/keystone.conf [database] connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone [token] provider = fernet \u89e3\u91ca [database]\u90e8\u5206\uff0c\u914d\u7f6e\u6570\u636e\u5e93\u5165\u53e3 [token]\u90e8\u5206\uff0c\u914d\u7f6etoken provider 
\u6ce8\u610f\uff1a \u66ff\u6362 KEYSTONE_DBPASS \u4e3a Keystone \u6570\u636e\u5e93\u7684\u5bc6\u7801 \u540c\u6b65\u6570\u636e\u5e93\u3002 su -s /bin/sh -c \"keystone-manage db_sync\" keystone \u521d\u59cb\u5316Fernet\u5bc6\u94a5\u4ed3\u5e93\u3002 keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone keystone-manage credential_setup --keystone-user keystone --keystone-group keystone \u542f\u52a8\u670d\u52a1\u3002 keystone-manage bootstrap --bootstrap-password ADMIN_PASS \\ --bootstrap-admin-url http://controller:5000/v3/ \\ --bootstrap-internal-url http://controller:5000/v3/ \\ --bootstrap-public-url http://controller:5000/v3/ \\ --bootstrap-region-id RegionOne \u6ce8\u610f \u66ff\u6362 ADMIN_PASS \uff0c\u4e3a admin \u7528\u6237\u8bbe\u7f6e\u5bc6\u7801 \u914d\u7f6eApache HTTP server vim /etc/httpd/conf/httpd.conf ServerName controller ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/ \u89e3\u91ca \u914d\u7f6e ServerName \u9879\u5f15\u7528\u63a7\u5236\u8282\u70b9 \u6ce8\u610f \u5982\u679c ServerName \u9879\u4e0d\u5b58\u5728\u5219\u9700\u8981\u521b\u5efa \u542f\u52a8Apache HTTP\u670d\u52a1\u3002 systemctl enable httpd.service systemctl start httpd.service \u521b\u5efa\u73af\u5883\u53d8\u91cf\u914d\u7f6e\u3002 cat << EOF >> ~/.admin-openrc export OS_PROJECT_DOMAIN_NAME=Default export OS_USER_DOMAIN_NAME=Default export OS_PROJECT_NAME=admin export OS_USERNAME=admin export OS_PASSWORD=ADMIN_PASS export OS_AUTH_URL=http://controller:5000/v3 export OS_IDENTITY_API_VERSION=3 export OS_IMAGE_API_VERSION=2 EOF \u6ce8\u610f \u66ff\u6362 ADMIN_PASS \u4e3a admin \u7528\u6237\u7684\u5bc6\u7801 \u4f9d\u6b21\u521b\u5efadomain, projects, users, roles\uff0c\u9700\u8981\u5148\u5b89\u88c5\u597dpython3-openstackclient\uff1a yum install python3-openstackclient==4.0.2 \u5bfc\u5165\u73af\u5883\u53d8\u91cf source ~/.admin-openrc \u521b\u5efaproject service \uff0c\u5176\u4e2d domain default \u5728 keystone-manage bootstrap \u65f6\u5df2\u521b\u5efa openstack domain create --description \"An Example Domain\" example openstack project create --domain default --description \"Service Project\" service \u521b\u5efa\uff08non-admin\uff09project myproject \uff0cuser myuser \u548c role myrole \uff0c\u4e3a myproject \u548c myuser \u6dfb\u52a0\u89d2\u8272 myrole openstack project create --domain default --description \"Demo Project\" myproject openstack user create --domain default --password-prompt myuser openstack role create myrole openstack role add --project myproject --user myuser myrole \u9a8c\u8bc1 \u53d6\u6d88\u4e34\u65f6\u73af\u5883\u53d8\u91cfOS_AUTH_URL\u548cOS_PASSWORD\uff1a source ~/.admin-openrc unset OS_AUTH_URL OS_PASSWORD \u4e3aadmin\u7528\u6237\u8bf7\u6c42token\uff1a openstack --os-auth-url http://controller:5000/v3 \\ --os-project-domain-name Default --os-user-domain-name Default \\ --os-project-name admin --os-username admin token issue \u4e3amyuser\u7528\u6237\u8bf7\u6c42token\uff1a openstack --os-auth-url http://controller:5000/v3 \\ --os-project-domain-name Default --os-user-domain-name Default \\ --os-project-name myproject --os-username myuser token issue Glance \u5b89\u88c5 \u00b6 \u521b\u5efa\u6570\u636e\u5e93\u3001\u670d\u52a1\u51ed\u8bc1\u548c API \u7aef\u70b9 \u521b\u5efa\u6570\u636e\u5e93\uff1a mysql -u root -p MariaDB [(none)]> CREATE DATABASE glance; MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \\ IDENTIFIED BY 'GLANCE_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \\ IDENTIFIED BY 
'GLANCE_DBPASS'; MariaDB [(none)]> exit \u6ce8\u610f: \u66ff\u6362 GLANCE_DBPASS \uff0c\u4e3a glance \u6570\u636e\u5e93\u8bbe\u7f6e\u5bc6\u7801 \u521b\u5efa\u670d\u52a1\u51ed\u8bc1 source ~/.admin-openrc openstack user create --domain default --password-prompt glance openstack role add --project service --user glance admin openstack service create --name glance --description \"OpenStack Image\" image \u521b\u5efa\u955c\u50cf\u670d\u52a1API\u7aef\u70b9\uff1a openstack endpoint create --region RegionOne image public http://controller:9292 openstack endpoint create --region RegionOne image internal http://controller:9292 openstack endpoint create --region RegionOne image admin http://controller:9292 \u5b89\u88c5\u8f6f\u4ef6\u5305 yum install openstack-glance \u914d\u7f6eglance\u76f8\u5173\u914d\u7f6e\uff1a vim /etc/glance/glance-api.conf [database] connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance [keystone_authtoken] www_authenticate_uri = http://controller:5000 auth_url = http://controller:5000 memcached_servers = controller:11211 auth_type = password project_domain_name = Default user_domain_name = Default project_name = service username = glance password = GLANCE_PASS [paste_deploy] flavor = keystone [glance_store] stores = file,http default_store = file filesystem_store_datadir = /var/lib/glance/images/ \u89e3\u91ca: [database]\u90e8\u5206\uff0c\u914d\u7f6e\u6570\u636e\u5e93\u5165\u53e3 [keystone_authtoken] [paste_deploy]\u90e8\u5206\uff0c\u914d\u7f6e\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u5165\u53e3 [glance_store]\u90e8\u5206\uff0c\u914d\u7f6e\u672c\u5730\u6587\u4ef6\u7cfb\u7edf\u5b58\u50a8\u548c\u955c\u50cf\u6587\u4ef6\u7684\u4f4d\u7f6e \u6ce8\u610f \u66ff\u6362 GLANCE_DBPASS \u4e3a glance \u6570\u636e\u5e93\u7684\u5bc6\u7801 \u66ff\u6362 GLANCE_PASS \u4e3a glance \u7528\u6237\u7684\u5bc6\u7801 \u540c\u6b65\u6570\u636e\u5e93\uff1a su -s /bin/sh -c \"glance-manage db_sync\" glance \u542f\u52a8\u670d\u52a1\uff1a systemctl enable openstack-glance-api.service systemctl start openstack-glance-api.service \u9a8c\u8bc1 \u4e0b\u8f7d\u955c\u50cf source ~/.admin-openrc wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img \u6ce8\u610f \u5982\u679c\u60a8\u4f7f\u7528\u7684\u73af\u5883\u662f\u9cb2\u9e4f\u67b6\u6784\uff0c\u8bf7\u4e0b\u8f7daarch64\u7248\u672c\u7684\u955c\u50cf\uff1b\u5df2\u5bf9\u955c\u50cfcirros-0.5.2-aarch64-disk.img\u8fdb\u884c\u6d4b\u8bd5\u3002 \u5411Image\u670d\u52a1\u4e0a\u4f20\u955c\u50cf\uff1a openstack image create --disk-format qcow2 --container-format bare \\ --file cirros-0.4.0-x86_64-disk.img --public cirros \u786e\u8ba4\u955c\u50cf\u4e0a\u4f20\u5e76\u9a8c\u8bc1\u5c5e\u6027\uff1a openstack image list Placement\u5b89\u88c5 \u00b6 \u521b\u5efa\u6570\u636e\u5e93\u3001\u670d\u52a1\u51ed\u8bc1\u548c API \u7aef\u70b9 \u521b\u5efa\u6570\u636e\u5e93\uff1a \u4f5c\u4e3a root \u7528\u6237\u8bbf\u95ee\u6570\u636e\u5e93\uff0c\u521b\u5efa placement \u6570\u636e\u5e93\u5e76\u6388\u6743\u3002 mysql -u root -p MariaDB [(none)]> CREATE DATABASE placement; MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' \\ IDENTIFIED BY 'PLACEMENT_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' \\ IDENTIFIED BY 'PLACEMENT_DBPASS'; MariaDB [(none)]> exit \u6ce8\u610f \u66ff\u6362 PLACEMENT_DBPASS \u4e3a placement \u6570\u636e\u5e93\u8bbe\u7f6e\u5bc6\u7801 source admin-openrc \u6267\u884c\u5982\u4e0b\u547d\u4ee4\uff0c\u521b\u5efa placement \u670d\u52a1\u51ed\u8bc1\u3001\u521b\u5efa placement 
\u7528\u6237\u4ee5\u53ca\u6dfb\u52a0\u2018admin\u2019\u89d2\u8272\u5230\u7528\u6237\u2018placement\u2019\u3002 \u521b\u5efaPlacement API\u670d\u52a1 openstack user create --domain default --password-prompt placement openstack role add --project service --user placement admin openstack service create --name placement --description \"Placement API\" placement \u521b\u5efaplacement\u670d\u52a1API\u7aef\u70b9\uff1a openstack endpoint create --region RegionOne placement public http://controller:8778 openstack endpoint create --region RegionOne placement internal http://controller:8778 openstack endpoint create --region RegionOne placement admin http://controller:8778 \u5b89\u88c5\u548c\u914d\u7f6e \u5b89\u88c5\u8f6f\u4ef6\u5305\uff1a yum install openstack-placement-api \u914d\u7f6eplacement\uff1a \u7f16\u8f91 /etc/placement/placement.conf \u6587\u4ef6\uff1a \u5728[placement_database]\u90e8\u5206\uff0c\u914d\u7f6e\u6570\u636e\u5e93\u5165\u53e3 \u5728[api] [keystone_authtoken]\u90e8\u5206\uff0c\u914d\u7f6e\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u5165\u53e3 # vim /etc/placement/placement.conf [placement_database] # ... connection = mysql+pymysql://placement:PLACEMENT_DBPASS@controller/placement [api] # ... auth_strategy = keystone [keystone_authtoken] # ... auth_url = http://controller:5000/v3 memcached_servers = controller:11211 auth_type = password project_domain_name = Default user_domain_name = Default project_name = service username = placement password = PLACEMENT_PASS \u5176\u4e2d\uff0c\u66ff\u6362 PLACEMENT_DBPASS \u4e3a placement \u6570\u636e\u5e93\u7684\u5bc6\u7801\uff0c\u66ff\u6362 PLACEMENT_PASS \u4e3a placement \u7528\u6237\u7684\u5bc6\u7801\u3002 \u540c\u6b65\u6570\u636e\u5e93\uff1a su -s /bin/sh -c \"placement-manage db sync\" placement \u542f\u52a8httpd\u670d\u52a1\uff1a systemctl restart httpd \u9a8c\u8bc1 \u6267\u884c\u5982\u4e0b\u547d\u4ee4\uff0c\u6267\u884c\u72b6\u6001\u68c0\u67e5\uff1a . 
admin-openrc placement-status upgrade check \u5b89\u88c5osc-placement\uff0c\u5217\u51fa\u53ef\u7528\u7684\u8d44\u6e90\u7c7b\u522b\u53ca\u7279\u6027\uff1a yum install python3-osc-placement openstack --os-placement-api-version 1.2 resource class list --sort-column name openstack --os-placement-api-version 1.6 trait list --sort-column name Nova \u5b89\u88c5 \u00b6 \u521b\u5efa\u6570\u636e\u5e93\u3001\u670d\u52a1\u51ed\u8bc1\u548c API \u7aef\u70b9 \u521b\u5efa\u6570\u636e\u5e93\uff1a mysql -u root -p (CTL) MariaDB [(none)]> CREATE DATABASE nova_api; MariaDB [(none)]> CREATE DATABASE nova; MariaDB [(none)]> CREATE DATABASE nova_cell0; MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \\ IDENTIFIED BY 'NOVA_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \\ IDENTIFIED BY 'NOVA_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \\ IDENTIFIED BY 'NOVA_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \\ IDENTIFIED BY 'NOVA_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \\ IDENTIFIED BY 'NOVA_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \\ IDENTIFIED BY 'NOVA_DBPASS'; MariaDB [(none)]> exit \u6ce8\u610f \u66ff\u6362NOVA_DBPASS\uff0c\u4e3anova\u6570\u636e\u5e93\u8bbe\u7f6e\u5bc6\u7801 source ~/.admin-openrc (CTL) \u521b\u5efanova\u670d\u52a1\u51ed\u8bc1: openstack user create --domain default --password-prompt nova (CTL) openstack role add --project service --user nova admin (CTL) openstack service create --name nova --description \"OpenStack Compute\" compute (CTL) \u521b\u5efanova API\u7aef\u70b9\uff1a openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1 (CTL) openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1 (CTL) openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1 (CTL) \u5b89\u88c5\u8f6f\u4ef6\u5305 yum install openstack-nova-api openstack-nova-conductor \\ (CTL) openstack-nova-novncproxy openstack-nova-scheduler yum install openstack-nova-compute (CPT) \u6ce8\u610f \u5982\u679c\u4e3aarm64\u7ed3\u6784\uff0c\u8fd8\u9700\u8981\u6267\u884c\u4ee5\u4e0b\u547d\u4ee4 yum install edk2-aarch64 (CPT) \u914d\u7f6enova\u76f8\u5173\u914d\u7f6e vim /etc/nova/nova.conf [DEFAULT] enabled_apis = osapi_compute,metadata transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/ my_ip = 10.0.0.1 use_neutron = true firewall_driver = nova.virt.firewall.NoopFirewallDriver compute_driver=libvirt.LibvirtDriver (CPT) instances_path = /var/lib/nova/instances/ (CPT) lock_path = /var/lib/nova/tmp (CPT) [api_database] connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api (CTL) [database] connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova (CTL) [api] auth_strategy = keystone [keystone_authtoken] www_authenticate_uri = http://controller:5000/ auth_url = http://controller:5000/ memcached_servers = controller:11211 auth_type = password project_domain_name = Default user_domain_name = Default project_name = service username = nova password = NOVA_PASS [vnc] enabled = true server_listen = $my_ip server_proxyclient_address = $my_ip novncproxy_base_url = http://controller:6080/vnc_auto.html (CPT) [glance] api_servers = http://controller:9292 [oslo_concurrency] lock_path = /var/lib/nova/tmp (CTL) [placement] region_name = RegionOne project_domain_name = Default project_name = service auth_type = password user_domain_name = 
Default auth_url = http://controller:5000/v3 username = placement password = PLACEMENT_PASS [neutron] auth_url = http://controller:5000 auth_type = password project_domain_name = default user_domain_name = default region_name = RegionOne project_name = service username = neutron password = NEUTRON_PASS service_metadata_proxy = true (CTL) metadata_proxy_shared_secret = METADATA_SECRET (CTL) \u89e3\u91ca [default]\u90e8\u5206\uff0c\u542f\u7528\u8ba1\u7b97\u548c\u5143\u6570\u636e\u7684API\uff0c\u914d\u7f6eRabbitMQ\u6d88\u606f\u961f\u5217\u5165\u53e3\uff0c\u914d\u7f6emy_ip\uff0c\u542f\u7528\u7f51\u7edc\u670d\u52a1neutron\uff1b [api_database] [database]\u90e8\u5206\uff0c\u914d\u7f6e\u6570\u636e\u5e93\u5165\u53e3\uff1b [api] [keystone_authtoken]\u90e8\u5206\uff0c\u914d\u7f6e\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u5165\u53e3\uff1b [vnc]\u90e8\u5206\uff0c\u542f\u7528\u5e76\u914d\u7f6e\u8fdc\u7a0b\u63a7\u5236\u53f0\u5165\u53e3\uff1b [glance]\u90e8\u5206\uff0c\u914d\u7f6e\u955c\u50cf\u670d\u52a1API\u7684\u5730\u5740\uff1b [oslo_concurrency]\u90e8\u5206\uff0c\u914d\u7f6elock path\uff1b [placement]\u90e8\u5206\uff0c\u914d\u7f6eplacement\u670d\u52a1\u7684\u5165\u53e3\u3002 \u6ce8\u610f \u66ff\u6362 RABBIT_PASS \u4e3a RabbitMQ \u4e2d openstack \u8d26\u6237\u7684\u5bc6\u7801\uff1b \u914d\u7f6e my_ip \u4e3a\u63a7\u5236\u8282\u70b9\u7684\u7ba1\u7406IP\u5730\u5740\uff1b \u66ff\u6362 NOVA_DBPASS \u4e3anova\u6570\u636e\u5e93\u7684\u5bc6\u7801\uff1b \u66ff\u6362 NOVA_PASS \u4e3anova\u7528\u6237\u7684\u5bc6\u7801\uff1b \u66ff\u6362 PLACEMENT_PASS \u4e3aplacement\u7528\u6237\u7684\u5bc6\u7801\uff1b \u66ff\u6362 NEUTRON_PASS \u4e3aneutron\u7528\u6237\u7684\u5bc6\u7801\uff1b \u66ff\u6362 METADATA_SECRET \u4e3a\u5408\u9002\u7684\u5143\u6570\u636e\u4ee3\u7406secret\u3002 \u989d\u5916 \u786e\u5b9a\u662f\u5426\u652f\u6301\u865a\u62df\u673a\u786c\u4ef6\u52a0\u901f\uff08x86\u67b6\u6784\uff09\uff1a egrep -c '(vmx|svm)' /proc/cpuinfo (CPT) \u5982\u679c\u8fd4\u56de\u503c\u4e3a0\u5219\u4e0d\u652f\u6301\u786c\u4ef6\u52a0\u901f\uff0c\u9700\u8981\u914d\u7f6elibvirt\u4f7f\u7528QEMU\u800c\u4e0d\u662fKVM\uff1a vim /etc/nova/nova.conf (CPT) [libvirt] virt_type = qemu \u5982\u679c\u8fd4\u56de\u503c\u4e3a1\u6216\u66f4\u5927\u7684\u503c\uff0c\u5219\u652f\u6301\u786c\u4ef6\u52a0\u901f\uff0c\u5219 virt_type \u53ef\u4ee5\u914d\u7f6e\u4e3a kvm \u6ce8\u610f \u5982\u679c\u4e3aarm64\u7ed3\u6784\uff0c\u8fd8\u9700\u8981\u5728\u8ba1\u7b97\u8282\u70b9\u6267\u884c\u4ee5\u4e0b\u547d\u4ee4 mkdir -p /usr/share/AAVMF chown nova:nova /usr/share/AAVMF ln -s /usr/share/edk2/aarch64/QEMU_EFI-pflash.raw \\ /usr/share/AAVMF/AAVMF_CODE.fd ln -s /usr/share/edk2/aarch64/vars-template-pflash.raw \\ /usr/share/AAVMF/AAVMF_VARS.fd vim /etc/libvirt/qemu.conf nvram = [\"/usr/share/AAVMF/AAVMF_CODE.fd: \\ /usr/share/AAVMF/AAVMF_VARS.fd\", \\ \"/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw: \\ /usr/share/edk2/aarch64/vars-template-pflash.raw\"] \u5e76\u4e14\u5f53ARM\u67b6\u6784\u4e0b\u7684\u90e8\u7f72\u73af\u5883\u4e3a\u5d4c\u5957\u865a\u62df\u5316\u65f6\uff0c libvirt \u914d\u7f6e\u5982\u4e0b\uff1a [libvirt] virt_type = qemu cpu_mode = custom cpu_model = cortex-a72 \u540c\u6b65\u6570\u636e\u5e93 \u540c\u6b65nova-api\u6570\u636e\u5e93\uff1a su -s /bin/sh -c \"nova-manage api_db sync\" nova (CTL) \u6ce8\u518ccell0\u6570\u636e\u5e93\uff1a su -s /bin/sh -c \"nova-manage cell_v2 map_cell0\" nova (CTL) \u521b\u5efacell1 cell\uff1a su -s /bin/sh -c \"nova-manage cell_v2 create_cell --name=cell1 --verbose\" nova (CTL) \u540c\u6b65nova\u6570\u636e\u5e93\uff1a su -s /bin/sh -c 
\"nova-manage db sync\" nova (CTL) \u9a8c\u8bc1cell0\u548ccell1\u6ce8\u518c\u6b63\u786e\uff1a su -s /bin/sh -c \"nova-manage cell_v2 list_cells\" nova (CTL) \u6dfb\u52a0\u8ba1\u7b97\u8282\u70b9\u5230openstack\u96c6\u7fa4 su -s /bin/sh -c \"nova-manage cell_v2 discover_hosts --verbose\" nova (CTL) \u542f\u52a8\u670d\u52a1 systemctl enable \\ (CTL) openstack-nova-api.service \\ openstack-nova-scheduler.service \\ openstack-nova-conductor.service \\ openstack-nova-novncproxy.service systemctl start \\ (CTL) openstack-nova-api.service \\ openstack-nova-scheduler.service \\ openstack-nova-conductor.service \\ openstack-nova-novncproxy.service systemctl enable libvirtd.service openstack-nova-compute.service (CPT) systemctl start libvirtd.service openstack-nova-compute.service (CPT) \u9a8c\u8bc1 source ~/.admin-openrc (CTL) \u5217\u51fa\u670d\u52a1\u7ec4\u4ef6\uff0c\u9a8c\u8bc1\u6bcf\u4e2a\u6d41\u7a0b\u90fd\u6210\u529f\u542f\u52a8\u548c\u6ce8\u518c\uff1a openstack compute service list (CTL) \u5217\u51fa\u8eab\u4efd\u670d\u52a1\u4e2d\u7684API\u7aef\u70b9\uff0c\u9a8c\u8bc1\u4e0e\u8eab\u4efd\u670d\u52a1\u7684\u8fde\u63a5\uff1a openstack catalog list (CTL) \u5217\u51fa\u955c\u50cf\u670d\u52a1\u4e2d\u7684\u955c\u50cf\uff0c\u9a8c\u8bc1\u4e0e\u955c\u50cf\u670d\u52a1\u7684\u8fde\u63a5\uff1a openstack image list (CTL) \u68c0\u67e5cells\u662f\u5426\u8fd0\u4f5c\u6210\u529f\uff0c\u4ee5\u53ca\u5176\u4ed6\u5fc5\u8981\u6761\u4ef6\u662f\u5426\u5df2\u5177\u5907\u3002 nova-status upgrade check (CTL) Neutron \u5b89\u88c5 \u00b6 \u521b\u5efa\u6570\u636e\u5e93\u3001\u670d\u52a1\u51ed\u8bc1\u548c API \u7aef\u70b9 \u521b\u5efa\u6570\u636e\u5e93\uff1a mysql -u root -p (CTL) MariaDB [(none)]> CREATE DATABASE neutron; MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \\ IDENTIFIED BY 'NEUTRON_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \\ IDENTIFIED BY 'NEUTRON_DBPASS'; MariaDB [(none)]> exit \u6ce8\u610f \u66ff\u6362 NEUTRON_DBPASS \u4e3a neutron \u6570\u636e\u5e93\u8bbe\u7f6e\u5bc6\u7801\u3002 source ~/.admin-openrc (CTL) \u521b\u5efaneutron\u670d\u52a1\u51ed\u8bc1 openstack user create --domain default --password-prompt neutron (CTL) openstack role add --project service --user neutron admin (CTL) openstack service create --name neutron --description \"OpenStack Networking\" network (CTL) \u521b\u5efaNeutron\u670d\u52a1API\u7aef\u70b9\uff1a openstack endpoint create --region RegionOne network public http://controller:9696 (CTL) openstack endpoint create --region RegionOne network internal http://controller:9696 (CTL) openstack endpoint create --region RegionOne network admin http://controller:9696 (CTL) \u5b89\u88c5\u8f6f\u4ef6\u5305\uff1a yum install openstack-neutron openstack-neutron-linuxbridge ebtables ipset \\ (CTL) openstack-neutron-ml2 yum install openstack-neutron-linuxbridge ebtables ipset (CPT) \u914d\u7f6eneutron\u76f8\u5173\u914d\u7f6e\uff1a \u914d\u7f6e\u4e3b\u4f53\u914d\u7f6e vim /etc/neutron/neutron.conf [database] connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron (CTL) [DEFAULT] core_plugin = ml2 (CTL) service_plugins = router (CTL) allow_overlapping_ips = true (CTL) transport_url = rabbit://openstack:RABBIT_PASS@controller auth_strategy = keystone notify_nova_on_port_status_changes = true (CTL) notify_nova_on_port_data_changes = true (CTL) api_workers = 3 (CTL) [keystone_authtoken] www_authenticate_uri = http://controller:5000 auth_url = http://controller:5000 memcached_servers = controller:11211 auth_type = password 
Configure neutron. Edit the main configuration file:

```shell
vim /etc/neutron/neutron.conf

[database]
connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron   (CTL)

[DEFAULT]
core_plugin = ml2                                        (CTL)
service_plugins = router                                 (CTL)
allow_overlapping_ips = true                             (CTL)
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = true                (CTL)
notify_nova_on_port_data_changes = true                  (CTL)
api_workers = 3                                          (CTL)

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = neutron
password = NEUTRON_PASS

[nova]
auth_url = http://controller:5000                        (CTL)
auth_type = password                                     (CTL)
project_domain_name = Default                            (CTL)
user_domain_name = Default                               (CTL)
region_name = RegionOne                                  (CTL)
project_name = service                                   (CTL)
username = nova                                          (CTL)
password = NOVA_PASS                                     (CTL)

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
```

Explanation:

- The `[database]` section configures the database connection.
- The `[DEFAULT]` section enables the ml2 and router plugins, allows overlapping IP addresses, and configures the RabbitMQ message queue entry point.
- The `[keystone_authtoken]` section configures the identity service entry point.
- The `[nova]` section configures the network service to notify compute of network topology changes.
- The `[oslo_concurrency]` section configures the lock path.

Note:

- Replace `NEUTRON_DBPASS` with the password of the neutron database.
- Replace `RABBIT_PASS` with the password of the `openstack` account in RabbitMQ.
- Replace `NEUTRON_PASS` with the password of the `neutron` user.
- Replace `NOVA_PASS` with the password of the `nova` user.

Configure the ML2 plugin:

```shell
vim /etc/neutron/plugins/ml2/ml2_conf.ini

[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security

[ml2_type_flat]
flat_networks = provider

[ml2_type_vxlan]
vni_ranges = 1:1000

[securitygroup]
enable_ipset = true
```

Create a symbolic link for /etc/neutron/plugin.ini:

```shell
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
```

Note:

- The `[ml2]` section enables flat, vlan, and vxlan networks, enables the linuxbridge and l2population mechanisms, and enables the port security extension driver.
- The `[ml2_type_flat]` section configures the flat network as the provider virtual network.
- The `[ml2_type_vxlan]` section configures the VXLAN network identifier range.
- The `[securitygroup]` section enables ipset.

Additional remark: the exact Layer-2 configuration can be adapted to your own needs; this document uses a provider network with linuxbridge.

Configure the Linux bridge agent:

```shell
vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini

[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME

[vxlan]
enable_vxlan = true
local_ip = OVERLAY_INTERFACE_IP_ADDRESS
l2_population = true

[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
```

Explanation:

- The `[linux_bridge]` section maps the provider virtual network to the physical network interface.
- The `[vxlan]` section enables the VXLAN overlay network, configures the IP address of the physical network interface that handles the overlay network, and enables layer-2 population.
- The `[securitygroup]` section enables security groups and configures the linux bridge iptables firewall driver.

Note:

- Replace `PROVIDER_INTERFACE_NAME` with the physical network interface.
- Replace `OVERLAY_INTERFACE_IP_ADDRESS` with the management IP address of the controller node.

Configure the Layer-3 agent:

```shell
vim /etc/neutron/l3_agent.ini                            (CTL)

[DEFAULT]
interface_driver = linuxbridge
```

Explanation: in the `[DEFAULT]` section, set the interface driver to linuxbridge.

Configure the DHCP agent:

```shell
vim /etc/neutron/dhcp_agent.ini                          (CTL)

[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
```

Explanation: the `[DEFAULT]` section configures the linuxbridge interface driver and the Dnsmasq DHCP driver, and enables isolated metadata.

Configure the metadata agent:

```shell
vim /etc/neutron/metadata_agent.ini                      (CTL)

[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = METADATA_SECRET
```

Explanation: the `[DEFAULT]` section configures the metadata host and the shared secret.

Note: replace `METADATA_SECRET` with a suitable metadata proxy secret.

Configure the nova side:

```shell
vim /etc/nova/nova.conf

[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = Default
user_domain_name = Default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
service_metadata_proxy = true                            (CTL)
metadata_proxy_shared_secret = METADATA_SECRET           (CTL)
```

Explanation: the `[neutron]` section configures the access parameters, enables the metadata proxy, and configures the secret.

Note:

- Replace `NEUTRON_PASS` with the password of the `neutron` user.
- Replace `METADATA_SECRET` with a suitable metadata proxy secret.

Sync the database:

```shell
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
    --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
```

Restart the compute API service:

```shell
systemctl restart openstack-nova-api.service
```

Start the network services:

```shell
systemctl enable neutron-server.service neutron-linuxbridge-agent.service \    (CTL)
                 neutron-dhcp-agent.service neutron-metadata-agent.service \
                 neutron-l3-agent.service
systemctl restart neutron-server.service neutron-linuxbridge-agent.service \   (CTL)
                  neutron-dhcp-agent.service neutron-metadata-agent.service \
                  neutron-l3-agent.service

systemctl enable neutron-linuxbridge-agent.service                                   (CPT)
systemctl restart neutron-linuxbridge-agent.service openstack-nova-compute.service   (CPT)
```

**Verification**

Verify that the neutron agents started successfully:

```shell
openstack network agent list
```

## Cinder Installation

**Create the database, service credentials, and API endpoints**

Create the database:

```shell
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE cinder;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \
                  IDENTIFIED BY 'CINDER_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \
                  IDENTIFIED BY 'CINDER_DBPASS';
MariaDB [(none)]> exit
```

Note: replace `CINDER_DBPASS` with the password you set for the cinder database.

```shell
source ~/.admin-openrc
```

Create the cinder service credentials:

```shell
openstack user create --domain default --password-prompt cinder
openstack role add --project service --user cinder admin
```
```shell
openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
```

Create the block storage service API endpoints:

```shell
openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s
```

Install the packages:

```shell
yum install openstack-cinder-api openstack-cinder-scheduler                            (CTL)

yum install lvm2 device-mapper-persistent-data scsi-target-utils rpcbind nfs-utils \   (STG)
            openstack-cinder-volume openstack-cinder-backup
```

Prepare the storage devices (the following is only an example):

```shell
pvcreate /dev/vdb
vgcreate cinder-volumes /dev/vdb

vim /etc/lvm/lvm.conf

devices {
   ...
   filter = [ "a/vdb/", "r/.*/"]
```

Explanation: in the `devices` section, add a filter that accepts the /dev/vdb device and rejects all other devices.

Prepare NFS:

```shell
mkdir -p /root/cinder/backup

cat << EOF >> /etc/exports
/root/cinder/backup 192.168.1.0/24(rw,sync,no_root_squash,no_all_squash)
EOF
```

Configure cinder:

```shell
vim /etc/cinder/cinder.conf

[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone
my_ip = 10.0.0.11
enabled_backends = lvm                                    (STG)
backup_driver=cinder.backup.drivers.nfs.NFSBackupDriver   (STG)
backup_share=HOST:PATH                                    (STG)

[database]
connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = cinder
password = CINDER_PASS

[oslo_concurrency]
lock_path = /var/lib/cinder/tmp

[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver   (STG)
volume_group = cinder-volumes                               (STG)
iscsi_protocol = iscsi                                      (STG)
iscsi_helper = tgtadm                                       (STG)
```

Explanation:

- The `[database]` section configures the database connection.
- The `[DEFAULT]` section configures the RabbitMQ message queue entry point and `my_ip`.
- The `[keystone_authtoken]` section configures the identity service entry point.
- The `[oslo_concurrency]` section configures the lock path.

Note:

- Replace `CINDER_DBPASS` with the password of the cinder database.
- Replace `RABBIT_PASS` with the password of the `openstack` account in RabbitMQ.
- Set `my_ip` to the management IP address of the controller node.
- Replace `CINDER_PASS` with the password of the `cinder` user.
- Replace `HOST:PATH` with the host IP and shared path of the NFS server.

Sync the database:

```shell
su -s /bin/sh -c "cinder-manage db sync" cinder          (CTL)
```

Configure nova:

```shell
vim /etc/nova/nova.conf                                  (CTL)

[cinder]
os_region_name = RegionOne
```

Restart the compute API service:

```shell
systemctl restart openstack-nova-api.service
```
Start the cinder services:

```shell
systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service   (CTL)
systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service    (CTL)

systemctl enable rpcbind.service nfs-server.service tgtd.service iscsid.service \  (STG)
                 openstack-cinder-volume.service \
                 openstack-cinder-backup.service
systemctl start rpcbind.service nfs-server.service tgtd.service iscsid.service \   (STG)
                openstack-cinder-volume.service \
                openstack-cinder-backup.service
```

Note: when cinder attaches volumes through tgtadm, modify /etc/tgt/tgtd.conf with the following content so that tgtd can discover the iscsi targets of cinder-volume:

```
include /var/lib/cinder/volumes/*
```

**Verification**

```shell
source ~/.admin-openrc
openstack volume service list
```

## Horizon Installation

Install the packages:

```shell
yum install openstack-dashboard
```

Modify the configuration file and set the following variables:

```shell
vim /etc/openstack-dashboard/local_settings

OPENSTACK_HOST = "controller"
ALLOWED_HOSTS = ['*', ]

SESSION_ENGINE = 'django.contrib.sessions.backends.cache'

CACHES = {
    'default': {
         'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
         'LOCATION': 'controller:11211',
    }
}

OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "member"
WEBROOT = '/dashboard'
POLICY_FILES_PATH = "/etc/openstack-dashboard"

OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 3,
}
```

Restart the httpd service:

```shell
systemctl restart httpd.service memcached.service
```

**Verification**

Open a browser, enter http://HOSTIP/dashboard/, and log in to horizon.

Note: replace `HOSTIP` with the management-plane IP address of the controller node.

## Tempest Installation

Tempest is the integration test suite of OpenStack. It is recommended if you need fully automated functional testing of an installed OpenStack environment; otherwise it can be skipped.

Install Tempest:

```shell
yum install openstack-tempest
```

Initialize a workspace:

```shell
tempest init mytest
```

Modify the configuration file:

```shell
cd mytest
vi etc/tempest.conf
```

tempest.conf must describe the current OpenStack environment; refer to the official example for details (a minimal sketch is also given at the end of this section).

Run the tests:

```shell
tempest run
```

Install the tempest extensions (optional)

The OpenStack services also provide their own tempest test packages, which can be installed to enrich the tempest test content. In Train, extension tests for Cinder, Glance, Keystone, Ironic, and Trove are provided and can be installed with:

```shell
yum install python3-cinder-tempest-plugin python3-glance-tempest-plugin python3-ironic-tempest-plugin python3-keystone-tempest-plugin python3-trove-tempest-plugin
```
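For reference, the following is a minimal tempest.conf sketch, not taken from this guide: the option names come from upstream Tempest, while `ADMIN_PASS`, `IMAGE_ID`, `FLAVOR_ID`, and `EXTERNAL_NETWORK_ID` are placeholders that must match your own environment.

```
[auth]
admin_username = admin
admin_project_name = admin
admin_password = ADMIN_PASS
admin_domain_name = Default

[identity]
uri_v3 = http://controller:5000/v3
auth_version = v3

[compute]
image_ref = IMAGE_ID
flavor_ref = FLAVOR_ID

[network]
public_network_id = EXTERNAL_NETWORK_ID
```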
## Ironic Installation

Ironic is the bare metal service of OpenStack. It is recommended if you need to provision bare metal machines; otherwise it can be skipped.

Set up the database

The bare metal service stores its information in a database. Create an `ironic` database that the `ironic` user can access, replacing `IRONIC_DBPASSWORD` with a suitable password:

```shell
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE ironic CHARACTER SET utf8;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'localhost' \
                  IDENTIFIED BY 'IRONIC_DBPASSWORD';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'%' \
                  IDENTIFIED BY 'IRONIC_DBPASSWORD';
```

Install the packages

```shell
yum install openstack-ironic-api openstack-ironic-conductor python3-ironicclient
```

Start the services

```shell
systemctl enable openstack-ironic-api openstack-ironic-conductor
systemctl start openstack-ironic-api openstack-ironic-conductor
```

Create the service user authentication

1. Create the Bare Metal service user:

```shell
openstack user create --password IRONIC_PASSWORD \
    --email ironic@example.com ironic
openstack role add --project service --user ironic admin
openstack service create --name ironic \
    --description "Ironic baremetal provisioning service" baremetal
```

2. Create the Bare Metal service access endpoints:

```shell
openstack endpoint create --region RegionOne baremetal admin http://$IRONIC_NODE:6385
openstack endpoint create --region RegionOne baremetal public http://$IRONIC_NODE:6385
openstack endpoint create --region RegionOne baremetal internal http://$IRONIC_NODE:6385
```

Configure the ironic-api service

The configuration file path is /etc/ironic/ironic.conf.

1. Configure the location of the database via the `connection` option, as shown below. Replace `IRONIC_DBPASSWORD` with the password of the `ironic` user and `DB_IP` with the IP address of the DB server:

```
[database]

# The SQLAlchemy connection string used to connect to the
# database (string value)
connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic
```

2. Configure the ironic-api service to use the RabbitMQ message broker with the following option. Replace `RPC_*` with the RabbitMQ address details and credentials:

```
[DEFAULT]

# A URL representing the messaging driver to use and its full
# configuration. (string value)
transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
```

Alternatively, you can use json-rpc instead of RabbitMQ (a sketch follows below).
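The json-rpc alternative mentioned above is only sketched here; the option names (`rpc_transport` in `[DEFAULT]` and the `[json_rpc]` section) come from upstream ironic documentation for this feature and should be checked against the installed release before use:

```
[DEFAULT]
# use JSON-RPC between ironic-api and ironic-conductor instead of a message queue
rpc_transport = json-rpc

[json_rpc]
# address and port on which the conductor's JSON-RPC server listens
host_ip = CONDUCTOR_HOST_IP
port = 8089
```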
3. Configure the ironic-api service to use the identity service credentials. Replace `PUBLIC_IDENTITY_IP` with the public IP of the identity server, `PRIVATE_IDENTITY_IP` with the private IP of the identity server, and `IRONIC_PASSWORD` with the password of the `ironic` user in the identity service:

```
[DEFAULT]

# Authentication strategy used by ironic-api: one of
# "keystone" or "noauth". "noauth" should not be used in a
# production environment because all authentication will be
# disabled. (string value)
auth_strategy=keystone

[keystone_authtoken]

# Authentication type to load (string value)
auth_type=password

# Complete public Identity API endpoint (string value)
www_authenticate_uri=http://PUBLIC_IDENTITY_IP:5000

# Complete admin Identity API endpoint. (string value)
auth_url=http://PRIVATE_IDENTITY_IP:5000

# Service username. (string value)
username=ironic

# Service account password. (string value)
password=IRONIC_PASSWORD

# Service tenant name. (string value)
project_name=service

# Domain name containing project (string value)
project_domain_name=Default

# User's domain name (string value)
user_domain_name=Default
```

4. Create the bare metal service database tables:

```shell
ironic-dbsync --config-file /etc/ironic/ironic.conf create_schema
```

5. Restart the ironic-api service:

```shell
sudo systemctl restart openstack-ironic-api
```

Configure the ironic-conductor service

1. Replace `HOST_IP` with the IP address of the conductor host:

```
[DEFAULT]

# IP address of this host. If unset, will determine the IP
# programmatically. If unable to do so, will use "127.0.0.1".
# (string value)
my_ip=HOST_IP
```

2. Configure the location of the database. ironic-conductor should use the same configuration as ironic-api. Replace `IRONIC_DBPASSWORD` with the password of the `ironic` user and `DB_IP` with the IP address of the DB server:

```
[database]

# The SQLAlchemy connection string to use to connect to the
# database. (string value)
connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic
```
3. Configure the ironic-conductor service to use the RabbitMQ message broker with the following option. ironic-conductor should use the same configuration as ironic-api. Replace `RPC_*` with the RabbitMQ address details and credentials:

```
[DEFAULT]

# A URL representing the messaging driver to use and its full
# configuration. (string value)
transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
```

You can also use json-rpc instead of RabbitMQ here.

4. Configure the credentials for accessing other OpenStack services.

To communicate with other OpenStack services, the bare metal service must authenticate against the OpenStack Identity service with a service user when calling those services. The credentials of these users must be configured in the configuration section associated with each service:

- `[neutron]` - access to the OpenStack network service
- `[glance]` - access to the OpenStack image service
- `[swift]` - access to the OpenStack object storage service
- `[cinder]` - access to the OpenStack block storage service
- `[inspector]` - access to the OpenStack bare metal introspection service
- `[service_catalog]` - a special entry that holds the credentials the bare metal service uses to discover its own API URL endpoint as registered in the OpenStack identity service catalog

For simplicity, the same service user can be used for all services. For backward compatibility this should be the same user configured in the `[keystone_authtoken]` section of the ironic-api service; this is not mandatory, and a different service user can be created and configured for each service instead.

In the example below, the authentication information for accessing the OpenStack network service is configured so that:

- the network service is deployed in the identity service region named RegionOne, with only the public endpoint interface registered in the service catalog;
- a specific CA SSL certificate is used for HTTPS connections when sending requests;
- the same service user as configured for the ironic-api service is used;
- the dynamic password authentication plugin discovers a suitable identity service API version based on the other options.
```
[neutron]

# Authentication type to load (string value)
auth_type = password

# Authentication URL (string value)
auth_url=https://IDENTITY_IP:5000/

# Username (string value)
username=ironic

# User's password (string value)
password=IRONIC_PASSWORD

# Project name to scope to (string value)
project_name=service

# Domain ID containing project (string value)
project_domain_id=default

# User's domain id (string value)
user_domain_id=default

# PEM encoded Certificate Authority to use when verifying
# HTTPs connections. (string value)
cafile=/opt/stack/data/ca-bundle.pem

# The default region_name for endpoint URL discovery. (string
# value)
region_name = RegionOne

# List of interfaces, in order of preference, for endpoint
# URL. (list value)
valid_interfaces=public
```

By default, in order to communicate with another service, the bare metal service tries to discover a suitable endpoint for that service through the service catalog of the identity service. To use a different endpoint for a particular service, specify it with the `endpoint_override` option in the bare metal service configuration file:

```
[neutron]
...
endpoint_override =
```

5. Configure the allowed drivers and hardware types.

Set the hardware types allowed by the ironic-conductor service with `enabled_hardware_types`:

```
[DEFAULT]
enabled_hardware_types = ipmi
```

Configure the hardware interfaces:

```
enabled_boot_interfaces = pxe
enabled_deploy_interfaces = direct,iscsi
enabled_inspect_interfaces = inspector
enabled_management_interfaces = ipmitool
enabled_power_interfaces = ipmitool
```

Configure the interface defaults:

```
[DEFAULT]
default_deploy_interface = direct
default_network_interface = neutron
```

If any driver that uses Direct deploy is enabled, the Swift backend of the image service must be installed and configured. The Ceph object gateway (RADOS gateway) is also supported as an image service backend.

6. Restart the ironic-conductor service:

```shell
sudo systemctl restart openstack-ironic-conductor
```

Configure the httpd service

Create the httpd root directory used by ironic and set its owner and group. The directory path must match the path specified by the `http_root` configuration item in the `[deploy]` group of /etc/ironic/ironic.conf:

```shell
mkdir -p /var/lib/ironic/httproot
chown ironic.ironic /var/lib/ironic/httproot
```

Install and configure the httpd service

1. Install the httpd service; skip this if it is already installed:

```shell
yum install httpd -y
```

2. Create the file /etc/httpd/conf.d/openstack-ironic-httpd.conf with the following content:

```
Listen 8080

<VirtualHost *:8080>
    ServerName ironic.openeuler.com

    ErrorLog "/var/log/httpd/openstack-ironic-httpd-error_log"
    CustomLog "/var/log/httpd/openstack-ironic-httpd-access_log" "%h %l %u %t \"%r\" %>s %b"

    DocumentRoot "/var/lib/ironic/httproot"
    <Directory "/var/lib/ironic/httproot">
        Options Indexes FollowSymLinks
        Require all granted
    </Directory>

    LogLevel warn
    AddDefaultCharset UTF-8
    EnableSendfile on
</VirtualHost>
```

Note that the listening port must match the port specified by the `http_url` configuration item in the `[deploy]` options of /etc/ironic/ironic.conf.

3. Restart the httpd service:

```shell
systemctl restart httpd
```
7. Build the deploy ramdisk image

The Train ramdisk image can be built with the ironic-python-agent service or the disk-image-builder tool; the community's latest ironic-python-agent-builder can also be used. Users may also choose other tools.

If you use the Train native tools, install the corresponding package:

```shell
yum install openstack-ironic-python-agent
```

or

```shell
yum install diskimage-builder
```

See the official documentation for detailed usage.

The complete process of building the deploy image used by ironic with ironic-python-agent-builder is described below.

Install ironic-python-agent-builder

1. Install the tool:

```shell
pip install ironic-python-agent-builder
```

2. Modify the python interpreter in the following files:

```shell
/usr/bin/yum /usr/libexec/urlgrabber-ext-down
```

3. Install the other required tools:

```shell
yum install git
```

Because `DIB` depends on the `semanage` command, check whether the command is available before building the image (`semanage --help`). If the command is missing, install it:

```shell
# first find out which package provides it
[root@localhost ~]# yum provides /usr/sbin/semanage
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirror.vcu.edu
 * extras: mirror.vcu.edu
 * updates: mirror.math.princeton.edu
policycoreutils-python-2.5-34.el7.aarch64 : SELinux policy core python utilities
Repo        : base
Matched from:
Filename    : /usr/sbin/semanage
# install it
[root@localhost ~]# yum install policycoreutils-python
```

Build the image

For the `arm` architecture, additionally set:

```shell
export ARCH=aarch64
```

Basic usage:

```shell
usage: ironic-python-agent-builder [-h] [-r RELEASE] [-o OUTPUT] [-e ELEMENT]
                                   [-b BRANCH] [-v] [--extra-args EXTRA_ARGS]
                                   distribution

positional arguments:
  distribution          Distribution to use

optional arguments:
  -h, --help            show this help message and exit
  -r RELEASE, --release RELEASE
                        Distribution release to use
  -o OUTPUT, --output OUTPUT
                        Output base file name
  -e ELEMENT, --element ELEMENT
                        Additional DIB element to use
  -b BRANCH, --branch BRANCH
                        If set, override the branch that is used for ironic-
                        python-agent and requirements
  -v, --verbose         Enable verbose logging in diskimage-builder
  --extra-args EXTRA_ARGS
                        Extra arguments to pass to diskimage-builder
```

Example:

```shell
ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky
```

Allow ssh login

Initialize the environment variables and then build the image:

```shell
export DIB_DEV_USER_USERNAME=ipa \
export DIB_DEV_USER_PWDLESS_SUDO=yes \
export DIB_DEV_USER_PASSWORD='123'
ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky -e selinux-permissive -e devuser
```

Specify the code repository

Initialize the corresponding environment variables and then build the image:
```shell
# specify the repository location and version
DIB_REPOLOCATION_ironic_python_agent=git@172.20.2.149:liuzz/ironic-python-agent.git
DIB_REPOREF_ironic_python_agent=origin/develop

# clone the code directly from gerrit
DIB_REPOLOCATION_ironic_python_agent=https://review.opendev.org/openstack/ironic-python-agent
DIB_REPOREF_ironic_python_agent=refs/changes/43/701043/1
```

Reference: [source-repositories](https://docs.openstack.org/diskimage-builder/latest/elements/source-repositories/README.html).

Specifying the repository location and version has been verified to work.

Note:

- The PXE configuration file template in native OpenStack does not support the arm64 architecture, so the native OpenStack code has to be modified by the user. In Train, the community version of ironic still does not support arm64 UEFI PXE boot: the generated grub.cfg file (usually located under /tftpboot/) has the wrong format, which makes PXE boot fail. Users need to modify the code logic that generates grub.cfg themselves.
- TLS errors occur when ironic sends requests to ipa to query the command execution status. In Train, both ipa and ironic enable TLS authentication by default when sending requests to each other; it can be disabled as described in the official documentation:

  Modify the ironic configuration file (/etc/ironic/ironic.conf), adding `ipa-insecure=1` to the following configuration:

  ```
  [agent]
  verify_ca = False

  [pxe]
  pxe_append_params = nofb nomodeset vga=normal coreos.autologin ipa-insecure=1
  ```

  Add the ipa configuration file /etc/ironic_python_agent/ironic_python_agent.conf to the ramdisk image and configure TLS as follows (the /etc/ironic_python_agent directory must be created in advance):

  ```
  [DEFAULT]
  enable_auto_tls = False
  ```

  Set the permissions:

  ```shell
  chown -R ipa.ipa /etc/ironic_python_agent/
  ```

  Modify the service unit file of the ipa service to add the configuration file option:

  ```shell
  vim usr/lib/systemd/system/ironic-python-agent.service

  [Unit]
  Description=Ironic Python Agent
  After=network-online.target

  [Service]
  ExecStartPre=/sbin/modprobe vfat
  ExecStart=/usr/local/bin/ironic-python-agent --config-file /etc/ironic_python_agent/ironic_python_agent.conf
  Restart=always
  RestartSec=30s

  [Install]
  WantedBy=multi-user.target
  ```

In Train, the ironic-inspector service and others are also provided; users can install them according to their own needs.

## Kolla Installation

Kolla provides production-ready containerized deployment for OpenStack services.

Installing Kolla is straightforward: just install the corresponding RPM packages:

```shell
yum install openstack-kolla openstack-kolla-ansible
```

After installation, the `kolla-ansible`, `kolla-build`, `kolla-genpwd`, and `kolla-mergepwd` commands can be used to build images and deploy the container environment.
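As an illustrative sketch of a typical workflow (not part of this guide): the subcommands below are standard kolla-ansible subcommands, but the inventory path is an assumption and may differ depending on how the package lays out its files.

```shell
# generate random passwords for all services into /etc/kolla/passwords.yml
kolla-genpwd

# deploy an all-in-one environment using the sample inventory shipped with kolla-ansible
# (assumed path; adjust to your installation)
INVENTORY=/usr/share/kolla-ansible/ansible/inventory/all-in-one
kolla-ansible -i "$INVENTORY" bootstrap-servers
kolla-ansible -i "$INVENTORY" prechecks
kolla-ansible -i "$INVENTORY" deploy
```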
## Trove Installation

Trove is the Database service of OpenStack. It is recommended if you want to use the database service provided by OpenStack; otherwise it can be skipped.

Set up the database

The Database service stores its information in a database. Create a `trove` database that the `trove` user can access, replacing `TROVE_DBPASSWORD` with a suitable password:

```shell
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE trove CHARACTER SET utf8;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'localhost' \
                  IDENTIFIED BY 'TROVE_DBPASSWORD';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'%' \
                  IDENTIFIED BY 'TROVE_DBPASSWORD';
```

Create the service user authentication

1. Create the Trove service user:

```shell
openstack user create --domain default --password-prompt trove
openstack role add --project service --user trove admin
openstack service create --name trove --description "Database" database
```

Explanation: replace `TROVE_PASSWORD` with the password of the `trove` user.

2. Create the Database service access endpoints:

```shell
openstack endpoint create --region RegionOne database public http://controller:8779/v1.0/%\(tenant_id\)s
openstack endpoint create --region RegionOne database internal http://controller:8779/v1.0/%\(tenant_id\)s
openstack endpoint create --region RegionOne database admin http://controller:8779/v1.0/%\(tenant_id\)s
```

Install and configure the Trove components

1. Install the Trove packages:

```shell
yum install openstack-trove python3-troveclient
```

2. Configure `trove.conf`:

```shell
vim /etc/trove/trove.conf

[DEFAULT]
log_dir = /var/log/trove
trove_auth_url = http://controller:5000/
nova_compute_url = http://controller:8774/v2
cinder_url = http://controller:8776/v1
swift_url = http://controller:8080/v1/AUTH_
rpc_backend = rabbit
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672
auth_strategy = keystone
add_addresses = True
api_paste_config = /etc/trove/api-paste.ini
nova_proxy_admin_user = admin
nova_proxy_admin_pass = ADMIN_PASSWORD
nova_proxy_admin_tenant_name = service
taskmanager_manager = trove.taskmanager.manager.Manager
use_nova_server_config_drive = True
# Set these if using Neutron Networking
network_driver = trove.network.neutron.NeutronDriver
network_label_regex = .*

[database]
connection = mysql+pymysql://trove:TROVE_DBPASSWORD@controller/trove

[keystone_authtoken]
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = trove
password = TROVE_PASSWORD
```

**Explanation:**

- In the `[DEFAULT]` group, `nova_compute_url` and `cinder_url` are the endpoints created in Keystone by Nova and Cinder.
- `nova_proxy_XXX` is the information of a user that can access the Nova service; the example above uses the `admin` user.
- `transport_url` is the RabbitMQ connection information; replace `RABBIT_PASS` with the RabbitMQ password.
- The `connection` option in the `[database]` group points to the database created for Trove in mysql earlier.
- In the Trove user information, replace `TROVE_PASSWORD` with the actual password of the `trove` user.

3. Configure `trove-guestagent.conf`:

```shell
vim /etc/trove/trove-guestagent.conf

rabbit_host = controller
rabbit_password = RABBIT_PASS
trove_auth_url = http://controller:5000/
```

**Explanation:** `guestagent` is an independent component of trove and needs to be built into the guest VM images that Trove creates through Nova in advance. After a database instance is created, the guestagent process starts and reports heartbeats to Trove through the message queue (RabbitMQ), so the RabbitMQ user and password must be configured here.

**Starting with the Victoria release, Trove uses a single unified image to run different database types; the database services run in Docker containers inside the guest VM.**

- Replace `RABBIT_PASS` with the RabbitMQ password.

4. Generate the Trove database tables:

```shell
su -s /bin/sh -c "trove-manage db_sync" trove
```

Complete the installation

1. Configure the Trove services to start automatically:

```shell
systemctl enable openstack-trove-api.service \
                 openstack-trove-taskmanager.service \
                 openstack-trove-conductor.service
```

2. Start the services:

```shell
systemctl start openstack-trove-api.service \
                openstack-trove-taskmanager.service \
                openstack-trove-conductor.service
```

## Swift Installation

Swift provides an elastic, scalable, and highly available distributed object storage service, suitable for storing large amounts of unstructured data.

Create the service credentials and API endpoints.

Create the service credentials:

```shell
# create the swift user:
openstack user create --domain default --password-prompt swift
# add the admin role to the swift user:
openstack role add --project service --user swift admin
# create the swift service entity:
openstack service create --name swift --description "OpenStack Object Storage" object-store
```

Create the swift API endpoints:

```shell
openstack endpoint create --region RegionOne object-store public http://controller:8080/v1/AUTH_%\(project_id\)s
openstack endpoint create --region RegionOne object-store internal http://controller:8080/v1/AUTH_%\(project_id\)s
openstack endpoint create --region RegionOne object-store admin http://controller:8080/v1
```

Install the packages:

```shell
yum install openstack-swift-proxy python3-swiftclient python3-keystoneclient python3-keystonemiddleware memcached   (CTL)
```

Configure the proxy-server

The Swift RPM package already ships a basically usable proxy-server.conf; only the IP and the swift password in it need to be modified manually (the fields that typically change are sketched below).

***Note***

**Replace the password with the password you chose for the `swift` user in the identity service.**
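The following sketch only illustrates the authtoken settings that usually need editing in the stock proxy-server.conf; the option names follow the upstream Swift install guide, and `controller` and `SWIFT_PASS` are placeholders for your identity service host and the swift user's password:

```
[filter:authtoken]
paste.filter_factory = keystonemiddleware.auth_token:filter_factory
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = swift
password = SWIFT_PASS
delay_auth_decision = True
```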
Install and configure the storage nodes (STG)

Install the supporting packages:

```shell
yum install xfsprogs rsync
```

Format the /dev/vdb and /dev/vdc devices as XFS:

```shell
mkfs.xfs /dev/vdb
mkfs.xfs /dev/vdc
```

Create the mount point directory structure:

```shell
mkdir -p /srv/node/vdb
mkdir -p /srv/node/vdc
```

Find the UUIDs of the new partitions:

```shell
blkid
```

Edit the /etc/fstab file and add the following to it (fill in the UUIDs found with blkid):

```
UUID="" /srv/node/vdb xfs noatime 0 2
UUID="" /srv/node/vdc xfs noatime 0 2
```

Mount the devices:

```shell
mount /srv/node/vdb
mount /srv/node/vdc
```

Note: if you do not need the disaster-recovery capability, the steps above only need to be done for a single device, and the rsync configuration below can be skipped.

(Optional) Create or edit the /etc/rsyncd.conf file to include the following:

```
[DEFAULT]
uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
address = MANAGEMENT_INTERFACE_IP_ADDRESS

[account]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/account.lock

[container]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/container.lock

[object]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/object.lock
```

Replace `MANAGEMENT_INTERFACE_IP_ADDRESS` with the IP address of the management network on the storage node.

Start the rsyncd service and configure it to start at boot:

```shell
systemctl enable rsyncd.service
systemctl start rsyncd.service
```

Install and configure the components on the storage nodes (STG)

Install the packages:

```shell
yum install openstack-swift-account openstack-swift-container openstack-swift-object
```

Edit the account-server.conf, container-server.conf, and object-server.conf files in the /etc/swift directory, replacing `bind_ip` with the IP address of the management network on the storage node.

Ensure proper ownership of the mount point directory structure:

```shell
chown -R swift:swift /srv/node
```

Create the recon directory and ensure it has the proper ownership:

```shell
mkdir -p /var/cache/swift
chown -R root:swift /var/cache/swift
chmod -R 775 /var/cache/swift
```

Create the account ring (CTL)

Change to the /etc/swift directory:

```shell
cd /etc/swift
```

Create the base account.builder file:

```shell
swift-ring-builder account.builder create 10 1 1
```

Add each storage node to the ring:

```shell
swift-ring-builder account.builder add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6202 --device DEVICE_NAME --weight DEVICE_WEIGHT
```

Replace `STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS` with the IP address of the management network on the storage node and `DEVICE_NAME` with the name of a storage device on the same storage node.

Note: repeat this command for every storage device on every storage node.

Verify the ring contents:

```shell
swift-ring-builder account.builder
```

Rebalance the ring:

```shell
swift-ring-builder account.builder rebalance
```
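For illustration only (the IP address below is a made-up example value, not part of this guide), a single storage node at 10.0.0.51 with the two devices prepared above would be added to the account ring like this:

```shell
swift-ring-builder account.builder add \
    --region 1 --zone 1 --ip 10.0.0.51 --port 6202 --device vdb --weight 100
swift-ring-builder account.builder add \
    --region 1 --zone 1 --ip 10.0.0.51 --port 6202 --device vdc --weight 100
```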
Create the container ring (CTL)

Change to the /etc/swift directory.

Create the base container.builder file:

```shell
swift-ring-builder container.builder create 10 1 1
```

Add each storage node to the ring:

```shell
swift-ring-builder container.builder \
    add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6201 \
    --device DEVICE_NAME --weight 100
```

Replace `STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS` with the IP address of the management network on the storage node and `DEVICE_NAME` with the name of a storage device on the same storage node.

Note: repeat this command for every storage device on every storage node.

Verify the ring contents:

```shell
swift-ring-builder container.builder
```

Rebalance the ring:

```shell
swift-ring-builder container.builder rebalance
```

Create the object ring (CTL)

Change to the /etc/swift directory.

Create the base object.builder file:

```shell
swift-ring-builder object.builder create 10 1 1
```

Add each storage node to the ring:

```shell
swift-ring-builder object.builder \
    add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6200 \
    --device DEVICE_NAME --weight 100
```

Replace `STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS` with the IP address of the management network on the storage node and `DEVICE_NAME` with the name of a storage device on the same storage node.

Note: repeat this command for every storage device on every storage node.

Verify the ring contents:

```shell
swift-ring-builder object.builder
```

Rebalance the ring:

```shell
swift-ring-builder object.builder rebalance
```

Distribute the ring configuration files

Copy the account.ring.gz, container.ring.gz, and object.ring.gz files to the /etc/swift directory on every storage node and on any other node running the proxy service.

Complete the installation

Edit the /etc/swift/swift.conf file:

```
[swift-hash]
swift_hash_path_suffix = test-hash
swift_hash_path_prefix = test-hash

[storage-policy:0]
name = Policy-0
default = yes
```

Replace `test-hash` with unique values.
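As a minimal sketch (assuming openssl is available; any other source of randomness works just as well), unique values for the two hash options can be generated like this and then pasted into swift.conf:

```shell
# generate one random value for swift_hash_path_suffix and one for swift_hash_path_prefix
openssl rand -hex 10
openssl rand -hex 10
```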
Copy the swift.conf file to the /etc/swift directory on every storage node and on any other node running the proxy service.

On all nodes, ensure proper ownership of the configuration directory:

```shell
chown -R root:swift /etc/swift
```

On the controller node and on any other node running the proxy service, start the object storage proxy service and its dependencies and configure them to start at boot:

```shell
systemctl enable openstack-swift-proxy.service memcached.service
systemctl start openstack-swift-proxy.service memcached.service
```

On the storage nodes, start the object storage services and configure them to start at boot:

```shell
systemctl enable openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service
systemctl start openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service

systemctl enable openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service
systemctl start openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service

systemctl enable openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service
systemctl start openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service
```

## Cyborg Installation

Cyborg provides accelerator device support for OpenStack, including GPUs, FPGAs, ASICs, NPs, SoCs, NVMe/NOF SSDs, ODP, DPDK/SPDK, and so on.

Initialize the corresponding database:

```
CREATE DATABASE cyborg;
GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'localhost' IDENTIFIED BY 'CYBORG_DBPASS';
GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'%' IDENTIFIED BY 'CYBORG_DBPASS';
```

Create the corresponding Keystone resource objects:

```shell
$ openstack user create --domain default --password-prompt cyborg
$ openstack role add --project service --user cyborg admin
$ openstack service create --name cyborg --description "Acceleration Service" accelerator

$ openstack endpoint create --region RegionOne \
    accelerator public http://:6666/v1
$ openstack endpoint create --region RegionOne \
    accelerator internal http://:6666/v1
$ openstack endpoint create --region RegionOne \
    accelerator admin http://:6666/v1
```

Install Cyborg:

```shell
yum install openstack-cyborg
```

Configure Cyborg

Modify /etc/cyborg/cyborg.conf:

```
[DEFAULT]
transport_url = rabbit://%RABBITMQ_USER%:%RABBITMQ_PASSWORD%@%OPENSTACK_HOST_IP%:5672/
use_syslog = False
state_path = /var/lib/cyborg
debug = True

[database]
connection = mysql+pymysql://%DATABASE_USER%:%DATABASE_PASSWORD%@%OPENSTACK_HOST_IP%/cyborg

[service_catalog]
project_domain_id = default
user_domain_id = default
project_name = service
password = PASSWORD
username = cyborg
auth_url = http://%OPENSTACK_HOST_IP%/identity
auth_type = password

[placement]
project_domain_name = Default
project_name = service
user_domain_name = Default
password = PASSWORD
username = placement
auth_url = http://%OPENSTACK_HOST_IP%/identity
auth_type = password

[keystone_authtoken]
memcached_servers = localhost:11211
project_domain_name = Default
project_name = service
user_domain_name = Default
password = PASSWORD
username = cyborg
auth_url = http://%OPENSTACK_HOST_IP%/identity
auth_type = password
```

Adjust the usernames, passwords, IP addresses, and so on to your environment.

Sync the database tables:

```shell
cyborg-dbsync --config-file /etc/cyborg/cyborg.conf upgrade
```

Start the Cyborg services:

```shell
systemctl enable openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent
systemctl start openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent
```

## Aodh Installation

Create the database:

```
CREATE DATABASE aodh;
GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'localhost' IDENTIFIED BY 'AODH_DBPASS';
GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'%' IDENTIFIED BY 'AODH_DBPASS';
```
Create the corresponding Keystone resource objects:

```shell
openstack user create --domain default --password-prompt aodh
openstack role add --project service --user aodh admin
openstack service create --name aodh --description "Telemetry" alarming

openstack endpoint create --region RegionOne alarming public http://controller:8042
openstack endpoint create --region RegionOne alarming internal http://controller:8042
openstack endpoint create --region RegionOne alarming admin http://controller:8042
```

Install Aodh:

```shell
yum install openstack-aodh-api openstack-aodh-evaluator openstack-aodh-notifier openstack-aodh-listener openstack-aodh-expirer python3-aodhclient
```

Modify the configuration file:

```
[database]
connection = mysql+pymysql://aodh:AODH_DBPASS@controller/aodh

[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = aodh
password = AODH_PASS

[service_credentials]
auth_type = password
auth_url = http://controller:5000/v3
project_domain_id = default
user_domain_id = default
project_name = service
username = aodh
password = AODH_PASS
interface = internalURL
region_name = RegionOne
```

Initialize the database:

```shell
aodh-dbsync
```

Start the Aodh services:

```shell
systemctl enable openstack-aodh-api.service openstack-aodh-evaluator.service openstack-aodh-notifier.service openstack-aodh-listener.service
systemctl start openstack-aodh-api.service openstack-aodh-evaluator.service openstack-aodh-notifier.service openstack-aodh-listener.service
```

## Gnocchi Installation

Create the database:

```
CREATE DATABASE gnocchi;
GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'localhost' IDENTIFIED BY 'GNOCCHI_DBPASS';
GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'%' IDENTIFIED BY 'GNOCCHI_DBPASS';
```

Create the corresponding Keystone resource objects:

```shell
openstack user create --domain default --password-prompt gnocchi
openstack role add --project service --user gnocchi admin
openstack service create --name gnocchi --description "Metric Service" metric

openstack endpoint create --region RegionOne metric public http://controller:8041
openstack endpoint create --region RegionOne metric internal http://controller:8041
openstack endpoint create --region RegionOne metric admin http://controller:8041
```

Install Gnocchi:

```shell
yum install openstack-gnocchi-api openstack-gnocchi-metricd python3-gnocchiclient
```

Modify the configuration file /etc/gnocchi/gnocchi.conf:

```
[api]
auth_mode = keystone
port = 8041
uwsgi_mode = http-socket

[keystone_authtoken]
auth_type = password
auth_url = http://controller:5000/v3
project_domain_name = Default
user_domain_name = Default
project_name = service
username = gnocchi
password = GNOCCHI_PASS
interface = internalURL
region_name = RegionOne

[indexer]
url = mysql+pymysql://gnocchi:GNOCCHI_DBPASS@controller/gnocchi
```
```
[storage]
# coordination_url is not required but specifying one will improve
# performance with better workload division across workers.
coordination_url = redis://controller:6379
file_basepath = /var/lib/gnocchi
driver = file
```

Initialize the database:

```shell
gnocchi-upgrade
```

Start the Gnocchi services:

```shell
systemctl enable openstack-gnocchi-api.service openstack-gnocchi-metricd.service
systemctl start openstack-gnocchi-api.service openstack-gnocchi-metricd.service
```

## Ceilometer Installation

Create the corresponding Keystone resource objects:

```shell
openstack user create --domain default --password-prompt ceilometer
openstack role add --project service --user ceilometer admin
openstack service create --name ceilometer --description "Telemetry" metering
```

Install Ceilometer:

```shell
yum install openstack-ceilometer-notification openstack-ceilometer-central
```

Modify the configuration file /etc/ceilometer/pipeline.yaml:

```
publishers:
    # set address of Gnocchi
    # + filter out Gnocchi-related activity meters (Swift driver)
    # + set default archive policy
    - gnocchi://?filter_project=service&archive_policy=low
```

Modify the configuration file /etc/ceilometer/ceilometer.conf:

```
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller

[service_credentials]
auth_type = password
auth_url = http://controller:5000/v3
project_domain_id = default
user_domain_id = default
project_name = service
username = ceilometer
password = CEILOMETER_PASS
interface = internalURL
region_name = RegionOne
```

Initialize the database:

```shell
ceilometer-upgrade
```

Start the Ceilometer services:

```shell
systemctl enable openstack-ceilometer-notification.service openstack-ceilometer-central.service
systemctl start openstack-ceilometer-notification.service openstack-ceilometer-central.service
```

## Heat Installation

Create the heat database and grant it the proper access, replacing `HEAT_DBPASS` with a suitable password:

```
CREATE DATABASE heat;
GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'localhost' IDENTIFIED BY 'HEAT_DBPASS';
GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'%' IDENTIFIED BY 'HEAT_DBPASS';
```

Create the service credentials: create the `heat` user and add the admin role to it:

```shell
openstack user create --domain default --password-prompt heat
openstack role add --project service --user heat admin
```

Create the heat and heat-cfn services and their API endpoints:

```shell
openstack service create --name heat --description "Orchestration" orchestration
openstack service create --name heat-cfn --description "Orchestration" cloudformation

openstack endpoint create --region RegionOne orchestration public http://controller:8004/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne orchestration internal http://controller:8004/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne orchestration admin http://controller:8004/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne cloudformation public http://controller:8000/v1
openstack endpoint create --region RegionOne cloudformation internal http://controller:8000/v1
openstack endpoint create --region RegionOne cloudformation admin http://controller:8000/v1
```

Create the additional information required for stack management, including the heat domain, its domain admin user `heat_domain_admin`, the `heat_stack_owner` role, and the `heat_stack_user` role:
### Heat Installation

Create the heat database and grant it proper access; replace HEAT_DBPASS with a suitable password:

```sql
CREATE DATABASE heat;
GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'localhost' IDENTIFIED BY 'HEAT_DBPASS';
GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'%' IDENTIFIED BY 'HEAT_DBPASS';
```

Create the service credentials: create the heat user and add the admin role to it:

```shell
openstack user create --domain default --password-prompt heat
openstack role add --project service --user heat admin
```

Create the heat and heat-cfn services and their API endpoints:

```shell
openstack service create --name heat --description "Orchestration" orchestration
openstack service create --name heat-cfn --description "Orchestration" cloudformation
openstack endpoint create --region RegionOne orchestration public http://controller:8004/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne orchestration internal http://controller:8004/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne orchestration admin http://controller:8004/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne cloudformation public http://controller:8000/v1
openstack endpoint create --region RegionOne cloudformation internal http://controller:8000/v1
openstack endpoint create --region RegionOne cloudformation admin http://controller:8000/v1
```

Create the additional resources required for stack management, including the heat domain, its domain admin user heat_domain_admin, the heat_stack_owner role, and the heat_stack_user role:

```shell
openstack user create --domain heat --password-prompt heat_domain_admin
openstack role add --domain heat --user-domain heat --user heat_domain_admin admin
openstack role create heat_stack_owner
openstack role create heat_stack_user
```

Install the packages:

```shell
yum install openstack-heat-api openstack-heat-api-cfn openstack-heat-engine
```

Modify the configuration file /etc/heat/heat.conf:

```ini
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
heat_metadata_server_url = http://controller:8000
heat_waitcondition_server_url = http://controller:8000/v1/waitcondition
stack_domain_admin = heat_domain_admin
stack_domain_admin_password = HEAT_DOMAIN_PASS
stack_user_domain_name = heat

[database]
connection = mysql+pymysql://heat:HEAT_DBPASS@controller/heat

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = heat
password = HEAT_PASS

[trustee]
auth_type = password
auth_url = http://controller:5000
username = heat
password = HEAT_PASS
user_domain_name = default

[clients_keystone]
auth_uri = http://controller:5000
```

Initialize the heat database tables:

```shell
su -s /bin/sh -c "heat-manage db_sync" heat
```

Start the services:

```shell
systemctl enable openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service
systemctl start openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service
```
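To confirm the orchestration engine registered correctly (a quick check, not part of the original steps), list its services; each heat-engine worker should report an "up" status:

```shell
source ~/.admin-openrc
# Each heat-engine process on the controller should be listed with status "up".
openstack orchestration service list
```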
## Quick Deployment with the OpenStack SIG Development Tool oos

oos (openEuler OpenStack SIG) is the command-line tool provided by the OpenStack SIG. Its `oos env` sub-commands provide ansible playbooks for one-click deployment of OpenStack (all in one, or a three-node cluster), so users can quickly deploy an openEuler-RPM-based OpenStack environment. The oos tool can deploy OpenStack either through a cloud provider (currently only the Huawei Cloud provider is supported) or onto managed hosts. The following uses an all-in-one deployment on Huawei Cloud as an example of how to use oos.

1. Install the oos tool:

```shell
pip install openstack-sig-tool
```

2. Configure the Huawei Cloud provider information. Open the /usr/local/etc/oos/oos.conf file and fill in the Huawei Cloud resources you own:

```ini
[huaweicloud]
ak =
sk =
region = ap-southeast-3
root_volume_size = 100
data_volume_size = 100
security_group_name = oos
image_format = openEuler-%%(release)s-%%(arch)s
vpc_name = oos_vpc
subnet1_name = oos_subnet1
subnet2_name = oos_subnet2
```

3. Configure the OpenStack environment information. Open the /usr/local/etc/oos/oos.conf file and adjust the configuration to the current machine environment and your needs. The content is as follows:

```ini
[environment]
mysql_root_password = root
mysql_project_password = root
rabbitmq_password = root
project_identity_password = root
enabled_service = keystone,neutron,cinder,placement,nova,glance,horizon,aodh,ceilometer,cyborg,gnocchi,kolla,heat,swift,trove,tempest
neutron_provider_interface_name = br-ex
default_ext_subnet_range = 10.100.100.0/24
default_ext_subnet_gateway = 10.100.100.1
neutron_dataplane_interface_name = eth1
cinder_block_device = vdb
swift_storage_devices = vdc
swift_hash_path_suffix = ash
swift_hash_path_prefix = has
glance_api_workers = 2
cinder_api_workers = 2
nova_api_workers = 2
nova_metadata_api_workers = 2
nova_conductor_workers = 2
nova_scheduler_workers = 2
neutron_api_workers = 2
horizon_allowed_host = *
kolla_openeuler_plugin = false
```

Key configuration items:

| Item | Description |
|:--|:--|
| enabled_service | List of services to install; trim it to your needs |
| neutron_provider_interface_name | Name of the neutron L3 bridge |
| default_ext_subnet_range | IP range of the neutron private network |
| default_ext_subnet_gateway | Gateway of the neutron private network |
| neutron_dataplane_interface_name | NIC used by neutron; a new NIC is recommended to avoid conflicts with the existing NIC and losing the connection to the all-in-one host |
| cinder_block_device | Name of the block device used by cinder |
| swift_storage_devices | Names of the block devices used by swift |
| kolla_openeuler_plugin | Whether to enable the kolla plugin; when set to True, kolla supports deploying openEuler containers |

4. Create an openEuler 22.03-LTS-SP2 x86_64 virtual machine on Huawei Cloud for deploying the all-in-one OpenStack:

```shell
# sshpass is used during `oos env create` to set up passwordless access to the target VM
dnf install sshpass
oos env create -r 22.03-lts-sp2 -f small -a x86 -n test-oos all_in_one
```

The full list of parameters can be viewed with `oos env create --help`.

5. Deploy the OpenStack all-in-one environment:

```shell
oos env setup test-oos -r train
```

The full list of parameters can be viewed with `oos env setup --help`.

6. Initialize the tempest environment. If you want to run tempest tests against this environment, run `oos env init`, which automatically creates the OpenStack resources tempest needs:

```shell
oos env init test-oos
```

After the command succeeds, a mytest directory is generated under the user's home directory; enter it and you can run the `tempest run` command (see the example after this section).

If OpenStack is deployed onto managed hosts instead, the overall flow is the same as the Huawei Cloud case above: steps 1, 3, 5 and 6 are unchanged, step 2 (configuring the Huawei Cloud provider information) is dropped, and step 4 changes from creating a VM on Huawei Cloud to managing the host:

```shell
# sshpass is used during `oos env create` to set up passwordless access to the target host
dnf install sshpass
oos env manage -r 22.03-lts-sp2 -i TARGET_MACHINE_IP -p TARGET_MACHINE_PASSWD -n test-oos
```

Replace TARGET_MACHINE_IP with the target machine's IP and TARGET_MACHINE_PASSWD with the target machine's password. The full list of parameters can be viewed with `oos env manage --help`.
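A minimal tempest invocation from the generated workspace might look like the following (a sketch assuming the mytest directory created by `oos env init`):

```shell
# Run only the smoke test subset from the workspace generated by `oos env init`.
cd ~/mytest
tempest run --smoke
```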
## Deployment with the OpenStack SIG Deployment Tool opensd

opensd is used to deploy the OpenStack component services in batches through scripts.

### Deployment Steps

#### 1. Information to confirm before deployment

- When installing the operating system, set selinux to disabled.
- When installing the operating system, set UseDNS to no in the /etc/ssh/sshd_config configuration file.
- The operating system language must be set to English.
- Before deployment, make sure the /etc/hosts file on every compute node contains no entry resolving the compute host.

#### 2. Create ceph pools and authentication (optional)

Skip this step if you do not use ceph or already have a ceph cluster. Run the following on any ceph monitor node.

##### 2.1 Create the pools

```shell
ceph osd pool create volumes 2048
ceph osd pool create images 2048
```

##### 2.2 Initialize the pools

```shell
rbd pool init volumes
rbd pool init images
```

##### 2.3 Create the user authentication

```shell
ceph auth get-or-create client.glance mon 'profile rbd' osd 'profile rbd pool=images' mgr 'profile rbd pool=images'
ceph auth get-or-create client.cinder mon 'profile rbd' osd 'profile rbd pool=volumes, profile rbd pool=images' mgr 'profile rbd pool=volumes'
```

#### 3. Configure LVM (optional)

Depending on the physical machine's disk layout and free space, mount extra disk space for the MySQL data directory. An example follows (adjust to your actual situation):

```shell
fdisk -l

Disk /dev/sdd: 479.6 GB, 479559942144 bytes, 936640512 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk label type: dos
Disk identifier: 0x000ed242
```

Create the partition:

```shell
parted /dev/sdd mkpart 0 -1
```

Create the PV:

```shell
partprobe /dev/sdd1
pvcreate /dev/sdd1
```

Create and activate the VG:

```shell
vgcreate vg_mariadb /dev/sdd1
vgchange -ay vg_mariadb
```

Check the VG capacity:

```shell
vgdisplay

  --- Volume group ---
  VG Name               vg_mariadb
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  2
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               1
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               446.62 GiB
  PE Size               4.00 MiB
  Total PE              114335
  Alloc PE / Size       114176 / 446.00 GiB
  Free  PE / Size       159 / 636.00 MiB
  VG UUID               bVUmDc-VkMu-Vi43-mg27-TEkG-oQfK-TvqdEc
```

Create the LV:

```shell
lvcreate -L 446G -n lv_mariadb vg_mariadb
```

Format the disk and obtain the volume UUID:

```shell
mkfs.ext4 /dev/mapper/vg_mariadb-lv_mariadb
blkid /dev/mapper/vg_mariadb-lv_mariadb

/dev/mapper/vg_mariadb-lv_mariadb: UUID="98d513eb-5f64-4aa5-810e-dc7143884fa2" TYPE="ext4"
```

Note: 98d513eb-5f64-4aa5-810e-dc7143884fa2 is the volume UUID.

Mount the disk:

```shell
mount /dev/mapper/vg_mariadb-lv_mariadb /var/lib/mysql
rm -rf /var/lib/mysql/*
```
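To keep this mount across reboots (an optional addition that is not part of the original steps), an /etc/fstab entry referencing the volume UUID obtained above can be used, for example:

```shell
# Persist the MySQL data mount; replace the UUID with the one reported by blkid.
echo 'UUID=98d513eb-5f64-4aa5-810e-dc7143884fa2 /var/lib/mysql ext4 defaults 0 0' >> /etc/fstab
```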
#### 4. Configure the yum repo

Run on the deployment node:

##### 4.1 Back up the yum sources

```shell
mkdir /etc/yum.repos.d/bak/
mv /etc/yum.repos.d/*.repo /etc/yum.repos.d/bak/
```

##### 4.2 Configure the yum repo

```shell
cat > /etc/yum.repos.d/opensd.repo << 'EOF'
[train]
name=train
baseurl=http://119.3.219.20:82/openEuler:/22.03:/LTS:/SP2:/Epol:/Multi-Version:/OpenStack:/Train/standard_$basearch/
enabled=1
gpgcheck=0

[epol]
name=epol
baseurl=http://119.3.219.20:82/openEuler:/22.03:/LTS:/SP2:/Epol/standard_$basearch/
enabled=1
gpgcheck=0

[everything]
name=everything
baseurl=http://119.3.219.20:82/openEuler:/22.03:/LTS:/SP2/standard_$basearch/
enabled=1
gpgcheck=0
EOF
```

##### 4.3 Refresh the yum cache

```shell
yum clean all
yum makecache
```

#### 5. Install opensd

Run on the deployment node:

##### 5.1 Clone the opensd source code and install it

```shell
git clone https://gitee.com/openeuler/opensd
cd opensd
python3 setup.py install
```

#### 6. Set up SSH trust

Run on the deployment node:

##### 6.1 Generate the key pair

Run the following command and press Enter through all prompts:

```shell
ssh-keygen
```

##### 6.2 Generate the host IP address file

Configure the IPs of all hosts to be used in auto_ssh_host_ip, for example:

```shell
cd /usr/local/share/opensd/tools/
vim auto_ssh_host_ip

10.0.0.1
10.0.0.2
...
10.0.0.10
```

##### 6.3 Update the password and run the script

Replace the 123123 string inside the passwordless-access script /usr/local/bin/opensd-auto-ssh with the real host password:

```shell
# Replace the 123123 string inside the script
vim /usr/local/bin/opensd-auto-ssh

# Install expect and then run the script
dnf install expect -y
opensd-auto-ssh
```

##### 6.4 Set up trust between the deployment node and the ceph monitor (optional)

```shell
ssh-copy-id root@x.x.x.x
```
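Before moving on, passwordless access can be spot-checked against any host in the list (a simple check beyond the original instructions), for example:

```shell
# Should print the remote hostname without prompting for a password.
ssh root@10.0.0.1 hostname
```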
#### 7. Configure opensd

Run on the deployment node:

##### 7.1 Generate random passwords

Install python3-pbr, python3-utils, python3-pyyaml and python3-oslo-utils, then generate the random passwords:

```shell
dnf install python3-pbr python3-utils python3-pyyaml python3-oslo-utils -y

# Generate the passwords
opensd-genpwd

# Check that the passwords were generated
cat /usr/local/share/opensd/etc_examples/opensd/passwords.yml
```

##### 7.2 Configure the inventory file

The host information consists of the host name, the ansible_host IP, and the availability_zone; all three must be configured, for example:

```shell
vim /usr/local/share/opensd/ansible/inventory/multinode
```

```ini
# Host information for the three control nodes
[control]
controller1 ansible_host=10.0.0.35 availability_zone=az01.cell01.cn-yogadev-1
controller2 ansible_host=10.0.0.36 availability_zone=az01.cell01.cn-yogadev-1
controller3 ansible_host=10.0.0.37 availability_zone=az01.cell01.cn-yogadev-1

# Network node information, kept identical to the control nodes
[network]
controller1 ansible_host=10.0.0.35 availability_zone=az01.cell01.cn-yogadev-1
controller2 ansible_host=10.0.0.36 availability_zone=az01.cell01.cn-yogadev-1
controller3 ansible_host=10.0.0.37 availability_zone=az01.cell01.cn-yogadev-1

# cinder-volume service node information
[storage]
storage1 ansible_host=10.0.0.61 availability_zone=az01.cell01.cn-yogadev-1
storage2 ansible_host=10.0.0.78 availability_zone=az01.cell01.cn-yogadev-1
storage3 ansible_host=10.0.0.82 availability_zone=az01.cell01.cn-yogadev-1

# Cell1 cluster information
[cell-control-cell1]
cell1 ansible_host=10.0.0.24 availability_zone=az01.cell01.cn-yogadev-1
cell2 ansible_host=10.0.0.25 availability_zone=az01.cell01.cn-yogadev-1
cell3 ansible_host=10.0.0.26 availability_zone=az01.cell01.cn-yogadev-1

[compute-cell1]
compute1 ansible_host=10.0.0.27 availability_zone=az01.cell01.cn-yogadev-1
compute2 ansible_host=10.0.0.28 availability_zone=az01.cell01.cn-yogadev-1
compute3 ansible_host=10.0.0.29 availability_zone=az01.cell01.cn-yogadev-1

[cell1:children]
cell-control-cell1
compute-cell1

# Cell2 cluster information
[cell-control-cell2]
cell4 ansible_host=10.0.0.36 availability_zone=az03.cell02.cn-yogadev-1
cell5 ansible_host=10.0.0.37 availability_zone=az03.cell02.cn-yogadev-1
cell6 ansible_host=10.0.0.38 availability_zone=az03.cell02.cn-yogadev-1

[compute-cell2]
compute4 ansible_host=10.0.0.39 availability_zone=az03.cell02.cn-yogadev-1
compute5 ansible_host=10.0.0.40 availability_zone=az03.cell02.cn-yogadev-1
compute6 ansible_host=10.0.0.41 availability_zone=az03.cell02.cn-yogadev-1

[cell2:children]
cell-control-cell2
compute-cell2

[baremetal]

[compute-cell1-ironic]

# Fill in the control host groups of all cell clusters
[nova-conductor:children]
cell-control-cell1
cell-control-cell2

# Fill in the compute host groups of all cell clusters
[nova-compute:children]
compute-added
compute-cell1
compute-cell2

# The host groups below do not need to be changed; keep them as they are
[compute-added]

[chrony-server:children]
control

[pacemaker:children]
control

......
......
```
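A quick way to confirm the inventory parses as intended (an optional check, not part of the original steps) is to let ansible print the resulting host groups:

```shell
# Requires ansible (installed in step 7.4); prints the parsed group/host structure.
ansible-inventory -i /usr/local/share/opensd/ansible/inventory/multinode --graph
```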
##### 7.3 Configure the global variables

Note: only the configuration items with comments below need to be changed; other parameters do not need to be changed, and items with no relevant configuration are left empty.

```shell
vim /usr/local/share/opensd/etc_examples/opensd/globals.yml
```

```yaml
########################
# Network & Base options
########################
network_interface: "eth0"              # NIC name of the management network
neutron_external_interface: "eth1"     # NIC name of the data-plane network
cidr_netmask: 24                       # Netmask of the management network
opensd_vip_address: 10.0.0.33          # Virtual IP address of the control nodes
cell1_vip_address: 10.0.0.34           # Virtual IP address of the cell1 cluster
cell2_vip_address: 10.0.0.35           # Virtual IP address of the cell2 cluster
external_fqdn: ""                      # External domain name used for VNC access to VMs
external_ntp_servers: []               # External NTP server addresses
yumrepo_host:                          # IP address of the yum repo
yumrepo_port:                          # Port of the yum repo
environment:                           # Type of the yum repo
upgrade_all_packages: "yes"            # Whether to upgrade all installed packages (runs yum upgrade); set to "yes" for an initial deployment
enable_miner: "no"                     # Whether to deploy the miner service
enable_chrony: "no"                    # Whether to deploy the chrony service
enable_pri_mariadb: "no"               # Whether to deploy mariadb for a private cloud
enable_hosts_file_modify: "no"         # Whether to add node information to /etc/hosts when scaling out compute nodes or deploying the ironic service

########################
# Available zone options
########################
az_cephmon_compose:
  - availability_zone:                 # Availability zone name; must match the "availability_zone" value of az01 in the multinode host file
    ceph_mon_host:                     # Address of one ceph monitor host for az01; the deployment node needs SSH trust with this host
    reserve_vcpu_based_on_numa:
  - availability_zone:                 # Availability zone name; must match the "availability_zone" value of az02 in the multinode host file
    ceph_mon_host:                     # Address of one ceph monitor host for az02; the deployment node needs SSH trust with this host
    reserve_vcpu_based_on_numa:
  - availability_zone:                 # Availability zone name; must match the "availability_zone" value of az03 in the multinode host file
    ceph_mon_host:                     # Address of one ceph monitor host for az03; the deployment node needs SSH trust with this host
    reserve_vcpu_based_on_numa:
```

`reserve_vcpu_based_on_numa` is set to "yes" or "no". For example, given:

```
NUMA node0 CPU(s):   0-15,32-47
NUMA node1 CPU(s):   16-31,48-63
```

When reserve_vcpu_based_on_numa: "yes", vCPUs are reserved evenly per NUMA node:

```
vcpu_pin_set = 2-15,34-47,18-31,50-63
```

When reserve_vcpu_based_on_numa: "no", vCPUs are reserved sequentially starting from the first vCPU:

```
vcpu_pin_set = 8-64
```

```yaml
#######################
# Nova options
#######################
nova_reserved_host_memory_mb: 2048   # Memory reserved for the compute service on compute nodes
enable_cells: "yes"                  # Whether the cell nodes are deployed on separate nodes
support_gpu: "False"                 # Whether the cell nodes include GPU servers; True if so, otherwise False

#######################
# Neutron options
#######################
monitor_ip:                          # Monitoring node addresses
  - 10.0.0.9
  - 10.0.0.10
enable_meter_full_eip: True          # Whether to allow full EIP metering, default True
enable_meter_port_forwarding: True   # Whether to allow port forwarding metering, default True
enable_meter_ecs_ipv6: True          # Whether to allow ecs_ipv6 metering, default True
enable_meter: True                   # Whether to enable metering, default True
is_sdn_arch: False                   # Whether this is an SDN architecture, default False

# The default enabled network type is vlan; vlan and vxlan are mutually exclusive.
enable_vxlan_network_type: False     # Set to True for a vxlan network, False for a vlan network.
enable_neutron_fwaas: False          # If the environment uses a firewall, set to True to enable the firewall feature.

# Neutron provider
neutron_provider_networks:
  network_types: "{{ 'vxlan' if enable_vxlan_network_type else 'vlan' }}"
  network_vlan_ranges: "default:xxx:xxx"   # VLAN range of the data-plane network planned before deployment
  network_mappings: "default:br-provider"
  network_interface: "{{ neutron_external_interface }}"
  network_vxlan_ranges: ""                 # VXLAN range of the data-plane network planned before deployment

# The following items are the SDN controller parameters; set `enable_sdn_controller` to True to enable the SDN controller feature.
# Set the other parameters according to the pre-deployment plan and the SDN deployment information.
enable_sdn_controller: False
sdn_controller_ip_address:           # SDN controller IP address
sdn_controller_username:             # SDN controller user name
sdn_controller_password:             # SDN controller user password

#######################
# Dimsagent options
#######################
enable_dimsagent: "no"               # Change to yes to install the image service agent

# Address and domain name for s3
s3_address_domain_pair:
  - host_ip:
    host_name:

#######################
# Trove options
#######################
enable_trove: "no"                   # Change to yes to install trove

# default network
trove_default_neutron_networks:      # ID of the trove management network: `openstack network list|grep -w trove-mgmt|awk '{print$2}'`

# s3 setup (if there is no s3, fill in null for the values below)
s3_endpoint_host_ip:                 # IP of s3
s3_endpoint_host_name:               # Domain name of s3
s3_endpoint_url:                     # URL of s3, usually http://<s3 domain name>
s3_access_key:                       # AK of s3
s3_secret_key:                       # SK of s3

#######################
# Ironic options
#######################
enable_ironic: "no"                  # Whether to enable bare metal deployment, disabled by default
ironic_neutron_provisioning_network_uuid:
ironic_neutron_cleaning_network_uuid: "{{ ironic_neutron_provisioning_network_uuid }}"
ironic_dnsmasq_interface:
ironic_dnsmasq_dhcp_range:
ironic_tftp_server_address: "{{ hostvars[inventory_hostname]['ansible_' + ironic_dnsmasq_interface]['ipv4']['address'] }}"

# Switch device information
neutron_ml2_conf_genericswitch:
  genericswitch:xxxxxxx:
    device_type:
    ngs_mac_address:
    ip:
    username:
    password:
    ngs_port_default_vlan:

# Package state setting
haproxy_package_state: "present"
mariadb_package_state: "present"
rabbitmq_package_state: "present"
memcached_package_state: "present"
ceph_client_package_state: "present"
keystone_package_state: "present"
glance_package_state: "present"
cinder_package_state: "present"
nova_package_state: "present"
neutron_package_state: "present"
miner_package_state: "present"
```

##### 7.4 Check the SSH connection status of all nodes

```shell
dnf install ansible -y
ansible all -i /usr/local/share/opensd/ansible/inventory/multinode -m ping
```

If every host reports "SUCCESS", connectivity is fine, for example:

```
compute1 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "ping": "pong"
}
```
#### 8. Run the deployment

Run on the deployment node:

##### 8.1 Run bootstrap

```shell
# Run the bootstrap
opensd -i /usr/local/share/opensd/ansible/inventory/multinode bootstrap --forks 50
```

##### 8.2 Reboot the servers

Note: the reboot is needed because bootstrap may upgrade the kernel or change the selinux configuration, or the environment may contain GPU servers. If the hosts were already installed with the new kernel, selinux is disabled, and there are no GPU servers, this step can be skipped.

```shell
# Manually reboot the corresponding nodes
init 6

# After the reboot, check connectivity again
ansible all -i /usr/local/share/opensd/ansible/inventory/multinode -m ping

# After the operating system comes back up, enable the yum repo again
```

##### 8.3 Run the pre-deployment checks

```shell
opensd -i /usr/local/share/opensd/ansible/inventory/multinode prechecks --forks 50
```

##### 8.4 Run the deployment

```shell
ln -s /usr/bin/python3 /usr/bin/python
```

Full deployment:

```shell
opensd -i /usr/local/share/opensd/ansible/inventory/multinode deploy --forks 50
```

Single-service deployment:

```shell
opensd -i /usr/local/share/opensd/ansible/inventory/multinode deploy --forks 50 -t service_name
```
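Once the deployment finishes, a basic health check (not an opensd step itself, and assuming admin credentials are available on a controller node) is to list the registered services and agents:

```shell
# Assumes admin credentials have been sourced on a controller node.
openstack service list
openstack compute service list
openstack network agent list
```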
# OpenStack-Train Deployment Guide

Contents: OpenStack introduction; conventions; preparing the environment; environment configuration; installing SQL DataBase, RabbitMQ, Memcached and OpenStack (Keystone, Glance, Placement, Nova, Neutron, Cinder, horizon, Tempest, Ironic, Kolla, Trove, Swift, Cyborg, Aodh, Gnocchi, Ceilometer, Heat); quick deployment with the OpenStack SIG development tool oos; deployment with the OpenStack SIG deployment tool opensd (deployment steps 1–8, from the pre-deployment checks through running the deployment).

## OpenStack Introduction

OpenStack is both a community and a project. It provides an operating platform and a toolset for deploying clouds, offering organizations scalable and flexible cloud computing.

As an open source cloud computing management platform, OpenStack combines several major components such as nova, cinder, neutron, glance, keystone and horizon to do the actual work. OpenStack supports almost every type of cloud environment. The project aims to provide a cloud computing management platform that is simple to implement, massively scalable, feature-rich, and standardized. OpenStack delivers an Infrastructure-as-a-Service (IaaS) solution through a set of complementary services, each of which provides an API for integration.

The official repositories of openEuler 22.03-LTS-SP2 already support the OpenStack-Train release. Users can configure the yum repositories and then deploy OpenStack by following this document.

## Conventions

OpenStack supports multiple deployment forms. This document covers both the ALL in One and the Distributed deployment modes, with the following conventions:

- ALL in One mode: ignore all possible suffixes.
- Distributed mode:
  - A `(CTL)` suffix means the configuration or command applies only to the `control node`.
  - A `(CPT)` suffix means the configuration or command applies only to the `compute node`.
  - A `(STG)` suffix means the configuration or command applies only to the `storage node`.
  - Anything else applies to both the `control node` and the `compute node`.

Note: the services covered by the conventions above are Cinder, Nova, and Neutron.

## Preparing the Environment

### Environment Configuration

Enable the OpenStack Train yum repository:

```shell
yum update
yum install openstack-release-train
yum clean all && yum makecache
```
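As a quick check (not in the original steps), the newly enabled repository should show up in the repo list:

```shell
# The Train repository installed by openstack-release-train should appear here.
yum repolist
```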
Note: if the yum repositories in your environment do not have EPOL enabled, you also need to configure EPOL. Make sure EPOL is configured as shown below:

```shell
vi /etc/yum.repos.d/openEuler.repo

[EPOL]
name=EPOL
baseurl=http://repo.openeuler.org/openEuler-22.03-LTS-SP2/EPOL/main/$basearch/
enabled=1
gpgcheck=1
gpgkey=http://repo.openeuler.org/openEuler-22.03-LTS-SP2/OS/$basearch/RPM-GPG-KEY-openEuler
```

Modify the host names and the host mapping.

Set the host name of each node:

```shell
hostnamectl set-hostname controller                                         (CTL)
hostnamectl set-hostname compute                                            (CPT)
```

Assuming the IP of the controller node is 10.0.0.11 and the IP of the compute node (if any) is 10.0.0.12, add the following to /etc/hosts:

```
10.0.0.11   controller
10.0.0.12   compute
```

### Install SQL DataBase

Run the following command to install the packages:

```shell
yum install mariadb mariadb-server python3-PyMySQL
```

Run the following command to create and edit the /etc/my.cnf.d/openstack.cnf file:

```shell
vim /etc/my.cnf.d/openstack.cnf

[mysqld]
bind-address = 10.0.0.11
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
```

Note: set bind-address to the management IP address of the controller node.

Start the DataBase service and configure it to start at boot:

```shell
systemctl enable mariadb.service
systemctl start mariadb.service
```

Configure the default DataBase password (optional):

```shell
mysql_secure_installation
```

Note: follow the prompts.

### Install RabbitMQ

Run the following command to install the packages:

```shell
yum install rabbitmq-server
```

Start the RabbitMQ service and configure it to start at boot:

```shell
systemctl enable rabbitmq-server.service
systemctl start rabbitmq-server.service
```

Add the OpenStack user:

```shell
rabbitmqctl add_user openstack RABBIT_PASS
```

Note: replace RABBIT_PASS with the password for the OpenStack user.

Set the openstack user's permissions to allow configuration, write, and read access:

```shell
rabbitmqctl set_permissions openstack ".*" ".*" ".*"
```
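To double-check that the user and its permissions were applied (an optional step beyond the original instructions):

```shell
# The openstack user should be listed, with ".*" for configure, write and read on the default vhost.
rabbitmqctl list_users
rabbitmqctl list_permissions
```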
### Install Memcached

Run the following command to install the dependency packages:

```shell
yum install memcached python3-memcached
```

Edit the /etc/sysconfig/memcached file:

```shell
vim /etc/sysconfig/memcached

OPTIONS="-l 127.0.0.1,::1,controller"
```

Run the following command to start the Memcached service and configure it to start at boot:

```shell
systemctl enable memcached.service
systemctl start memcached.service
```

Note: after the service starts, you can run `memcached-tool controller stats` to make sure it started correctly and is available; `controller` can be replaced with the management IP address of the controller node.

## Install OpenStack

### Keystone Installation

Create the keystone database and grant privileges:

```shell
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE keystone;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
IDENTIFIED BY 'KEYSTONE_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
IDENTIFIED BY 'KEYSTONE_DBPASS';
MariaDB [(none)]> exit
```

Note: replace KEYSTONE_DBPASS with the password for the Keystone database.

Install the packages:

```shell
yum install openstack-keystone httpd mod_wsgi
```

Configure keystone:

```shell
vim /etc/keystone/keystone.conf

[database]
connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone

[token]
provider = fernet
```

Explanation:
- In the [database] section, configure the database entry.
- In the [token] section, configure the token provider.

Note: replace KEYSTONE_DBPASS with the password of the Keystone database.

Synchronize the database:

```shell
su -s /bin/sh -c "keystone-manage db_sync" keystone
```

Initialize the Fernet key repositories:

```shell
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
```

Bootstrap the identity service:

```shell
keystone-manage bootstrap --bootstrap-password ADMIN_PASS \
--bootstrap-admin-url http://controller:5000/v3/ \
--bootstrap-internal-url http://controller:5000/v3/ \
--bootstrap-public-url http://controller:5000/v3/ \
--bootstrap-region-id RegionOne
```

Note: replace ADMIN_PASS with the password for the admin user.

Configure the Apache HTTP server:

```shell
vim /etc/httpd/conf/httpd.conf

ServerName controller
```

```shell
ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
```

Explanation: configure the ServerName entry to reference the controller node.

Note: create the ServerName entry if it does not exist.

Start the Apache HTTP service:

```shell
systemctl enable httpd.service
systemctl start httpd.service
```

Create the environment variable configuration:

```shell
cat << EOF >> ~/.admin-openrc
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
EOF
```

Note: replace ADMIN_PASS with the password of the admin user.

Create the domain, projects, users, and roles in turn; python3-openstackclient must be installed first:

```shell
yum install python3-openstackclient-4.0.2
```

Import the environment variables:

```shell
source ~/.admin-openrc
```

Create the project service; the domain default has already been created by keystone-manage bootstrap:

```shell
openstack domain create --description "An Example Domain" example
openstack project create --domain default --description "Service Project" service
```

Create a (non-admin) project myproject, a user myuser, and a role myrole, then add the role myrole to myproject and myuser:

```shell
openstack project create --domain default --description "Demo Project" myproject
openstack user create --domain default --password-prompt myuser
openstack role create myrole
openstack role add --project myproject --user myuser myrole
```

Verification

Unset the temporary environment variables OS_AUTH_URL and OS_PASSWORD:

```shell
source ~/.admin-openrc
unset OS_AUTH_URL OS_PASSWORD
```

Request a token for the admin user:

```shell
openstack --os-auth-url http://controller:5000/v3 \
--os-project-domain-name Default --os-user-domain-name Default \
--os-project-name admin --os-username admin token issue
```

Request a token for the myuser user:

```shell
openstack --os-auth-url http://controller:5000/v3 \
--os-project-domain-name Default --os-user-domain-name Default \
--os-project-name myproject --os-username myuser token issue
```

### Glance Installation

Create the database, service credentials, and API endpoints.

Create the database:

```shell
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE glance;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
IDENTIFIED BY 'GLANCE_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
IDENTIFIED BY 'GLANCE_DBPASS';
MariaDB [(none)]> exit
```

Note: replace GLANCE_DBPASS with the password for the glance database.

Create the service credentials:

```shell
source ~/.admin-openrc

openstack user create --domain default --password-prompt glance
openstack role add --project service --user glance admin
openstack service create --name glance --description "OpenStack Image" image
```

Create the API endpoints for the image service:

```shell
openstack endpoint create --region RegionOne image public http://controller:9292
openstack endpoint create --region RegionOne image internal http://controller:9292
openstack endpoint create --region RegionOne image admin http://controller:9292
```

Install the packages:

```shell
yum install openstack-glance
```

Configure glance:

```shell
vim /etc/glance/glance-api.conf

[database]
connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = GLANCE_PASS

[paste_deploy]
flavor = keystone

[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
```

Explanation:
- In the [database] section, configure the database entry.
- In the [keystone_authtoken] and [paste_deploy] sections, configure the identity service entry.
- In the [glance_store] section, configure the local file system store and the location of image files.

Note:
- Replace GLANCE_DBPASS with the password of the glance database.
- Replace GLANCE_PASS with the password of the glance user.

Synchronize the database:

```shell
su -s /bin/sh -c "glance-manage db_sync" glance
```

Start the service:

```shell
systemctl enable openstack-glance-api.service
systemctl start openstack-glance-api.service
```

Verification

Download the image:

```shell
source ~/.admin-openrc

wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
```

Note: if your environment uses the Kunpeng (aarch64) architecture, download the aarch64 version of the image; the image cirros-0.5.2-aarch64-disk.img has been tested.

Upload the image to the Image service:

```shell
openstack image create --disk-format qcow2 --container-format bare \
--file cirros-0.4.0-x86_64-disk.img --public cirros
```

Confirm the image upload and verify its attributes:

```shell
openstack image list
```

### Placement Installation

Create the database, service credentials, and API endpoints.

Create the database:

Access the database as the root user, create the placement database, and grant privileges.

```shell
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE placement;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' \
IDENTIFIED BY 'PLACEMENT_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' \
IDENTIFIED BY 'PLACEMENT_DBPASS';
MariaDB [(none)]> exit
```

Note: replace PLACEMENT_DBPASS with the password for the placement database.

```shell
source admin-openrc
```

Run the following commands to create the placement service credentials, create the placement user, and add the admin role to the placement user.

Create the Placement API service:

```shell
openstack user create --domain default --password-prompt placement
openstack role add --project service --user placement admin
openstack service create --name placement --description "Placement API" placement
```

Create the API endpoints for the placement service:

```shell
openstack endpoint create --region RegionOne placement public http://controller:8778
openstack endpoint create --region RegionOne placement internal http://controller:8778
openstack endpoint create --region RegionOne placement admin http://controller:8778
```

Install and configure.

Install the packages:

```shell
yum install openstack-placement-api
```

Configure placement by editing the /etc/placement/placement.conf file:
- In the [placement_database] section, configure the database entry.
- In the [api] and [keystone_authtoken] sections, configure the identity service entry.

```shell
# vim /etc/placement/placement.conf

[placement_database]
# ...
connection = mysql+pymysql://placement:PLACEMENT_DBPASS@controller/placement

[api]
# ...
auth_strategy = keystone

[keystone_authtoken]
# ...
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = placement
password = PLACEMENT_PASS
```

Replace PLACEMENT_DBPASS with the password of the placement database and PLACEMENT_PASS with the password of the placement user.

Synchronize the database:

```shell
su -s /bin/sh -c "placement-manage db sync" placement
```

Start the httpd service:

```shell
systemctl restart httpd
```

Verification

Run the following commands to perform the status check:

```shell
. admin-openrc
placement-status upgrade check
```

Install osc-placement and list the available resource classes and traits:

```shell
yum install python3-osc-placement
openstack --os-placement-api-version 1.2 resource class list --sort-column name
openstack --os-placement-api-version 1.6 trait list --sort-column name
```

### Nova Installation

Create the database, service credentials, and API endpoints.

Create the database:

```shell
mysql -u root -p                                                            (CTL)

MariaDB [(none)]> CREATE DATABASE nova_api;
MariaDB [(none)]> CREATE DATABASE nova;
MariaDB [(none)]> CREATE DATABASE nova_cell0;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> exit
```

Note: replace NOVA_DBPASS with the password for the nova databases.

```shell
source ~/.admin-openrc                                                      (CTL)
```

Create the nova service credentials:

```shell
openstack user create --domain default --password-prompt nova               (CTL)
openstack role add --project service --user nova admin                      (CTL)
openstack service create --name nova --description "OpenStack Compute" compute   (CTL)
```

Create the nova API endpoints:

```shell
openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1     (CTL)
openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1   (CTL)
openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1      (CTL)
```

Install the packages:

```shell
yum install openstack-nova-api openstack-nova-conductor \                   (CTL)
            openstack-nova-novncproxy openstack-nova-scheduler

yum install openstack-nova-compute                                          (CPT)
```

Note: for the arm64 architecture, also run the following command:

```shell
yum install edk2-aarch64                                                    (CPT)
```

Configure nova:

```shell
vim /etc/nova/nova.conf

[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
my_ip = 10.0.0.1
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver
compute_driver=libvirt.LibvirtDriver                                        (CPT)
instances_path = /var/lib/nova/instances/                                   (CPT)
lock_path = /var/lib/nova/tmp                                               (CPT)

[api_database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api           (CTL)

[database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova               (CTL)

[api]
auth_strategy = keystone

[keystone_authtoken]
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = NOVA_PASS

[vnc]
enabled = true
server_listen = $my_ip
server_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html                  (CPT)

[glance]
api_servers = http://controller:9292

[oslo_concurrency]
lock_path = /var/lib/nova/tmp                                               (CTL)

[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = PLACEMENT_PASS

[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
service_metadata_proxy = true                                               (CTL)
metadata_proxy_shared_secret = METADATA_SECRET                              (CTL)
```

Explanation:
- In the [DEFAULT] section, enable the compute and metadata APIs, configure the RabbitMQ message queue entry, configure my_ip, and enable the neutron network service.
- In the [api_database] and [database] sections, configure the database entries.
- In the [api] and [keystone_authtoken] sections, configure the identity service entry.
- In the [vnc] section, enable and configure the remote console entry.
- In the [glance] section, configure the address of the image service API.
- In the [oslo_concurrency] section, configure the lock path.
- In the [placement] section, configure the entry of the placement service.

Note:
- Replace RABBIT_PASS with the password of the openstack account in RabbitMQ.
- Set my_ip to the management IP address of the controller node.
- Replace NOVA_DBPASS with the password of the nova databases.
- Replace NOVA_PASS with the password of the nova user.
- Replace PLACEMENT_PASS with the password of the placement user.
- Replace NEUTRON_PASS with the password of the neutron user.
- Replace METADATA_SECRET with a suitable metadata proxy secret.

Additionally, determine whether the host supports hardware acceleration for virtual machines (x86 architecture):

```shell
egrep -c '(vmx|svm)' /proc/cpuinfo                                          (CPT)
```

If the return value is 0, hardware acceleration is not supported and libvirt must be configured to use QEMU instead of KVM:

```shell
vim /etc/nova/nova.conf                                                     (CPT)

[libvirt]
virt_type = qemu
```

If the return value is 1 or greater, hardware acceleration is supported and virt_type can be set to kvm.

Note: for the arm64 architecture, the following also needs to be run on the compute nodes:

```shell
mkdir -p /usr/share/AAVMF
chown nova:nova /usr/share/AAVMF

ln -s /usr/share/edk2/aarch64/QEMU_EFI-pflash.raw \
      /usr/share/AAVMF/AAVMF_CODE.fd
ln -s /usr/share/edk2/aarch64/vars-template-pflash.raw \
      /usr/share/AAVMF/AAVMF_VARS.fd

vim /etc/libvirt/qemu.conf

nvram = ["/usr/share/AAVMF/AAVMF_CODE.fd: \
          /usr/share/AAVMF/AAVMF_VARS.fd", \
         "/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw: \
          /usr/share/edk2/aarch64/vars-template-pflash.raw"]
```

In addition, when the ARM deployment environment is nested virtualization, configure the [libvirt] section as follows:

```ini
[libvirt]
virt_type = qemu
cpu_mode = custom
cpu_model = cortex-a72
```

Synchronize the databases.

Synchronize the nova-api database:

```shell
su -s /bin/sh -c "nova-manage api_db sync" nova                             (CTL)
```

Register the cell0 database:

```shell
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova                       (CTL)
```

Create the cell1 cell:

```shell
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova   (CTL)
```

Synchronize the nova database:

```shell
su -s /bin/sh -c "nova-manage db sync" nova                                 (CTL)
```

Verify that cell0 and cell1 are registered correctly:

```shell
su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova                      (CTL)
```

Add the compute node to the OpenStack cluster:

```shell
su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova        (CTL)
```

Start the services:

```shell
systemctl enable \                                                          (CTL)
    openstack-nova-api.service \
    openstack-nova-scheduler.service \
    openstack-nova-conductor.service \
    openstack-nova-novncproxy.service
systemctl start \                                                           (CTL)
    openstack-nova-api.service \
    openstack-nova-scheduler.service \
    openstack-nova-conductor.service \
    openstack-nova-novncproxy.service

systemctl enable libvirtd.service openstack-nova-compute.service            (CPT)
systemctl start libvirtd.service openstack-nova-compute.service             (CPT)
```

Verification

```shell
source ~/.admin-openrc                                                      (CTL)
```

List the service components and verify that each process started and registered successfully:

```shell
openstack compute service list                                              (CTL)
```

List the API endpoints in the identity service and verify the connection to the identity service:

```shell
openstack catalog list                                                      (CTL)
```

List the images in the image service and verify the connection to the image service:

```shell
openstack image list                                                        (CTL)
```

Check whether the cells are working correctly and whether the other prerequisites are in place:

```shell
nova-status upgrade check                                                   (CTL)
```

### Neutron Installation

Create the database, service credentials, and API endpoints.

Create the database:

```shell
mysql -u root -p                                                            (CTL)

MariaDB [(none)]> CREATE DATABASE neutron;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
IDENTIFIED BY 'NEUTRON_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
IDENTIFIED BY 'NEUTRON_DBPASS';
MariaDB [(none)]> exit
```

Note: replace NEUTRON_DBPASS with the password for the neutron database.

```shell
source ~/.admin-openrc                                                      (CTL)
```

Create the neutron service credentials:

```shell
openstack user create --domain default --password-prompt neutron            (CTL)
openstack role add --project service --user neutron admin                   (CTL)
openstack service create --name neutron --description "OpenStack Networking" network   (CTL)
```

Create the API endpoints for the Neutron service:

```shell
openstack endpoint create --region RegionOne network public http://controller:9696     (CTL)
openstack endpoint create --region RegionOne network internal http://controller:9696   (CTL)
openstack endpoint create --region RegionOne network admin http://controller:9696      (CTL)
```

Install the packages:

```shell
yum install openstack-neutron openstack-neutron-linuxbridge ebtables ipset \    (CTL)
            openstack-neutron-ml2

yum install openstack-neutron-linuxbridge ebtables ipset                    (CPT)
```

Configure neutron.

Configure the main configuration:

```shell
vim /etc/neutron/neutron.conf

[database]
connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron      (CTL)

[DEFAULT]
core_plugin = ml2                                                           (CTL)
service_plugins = router                                                    (CTL)
allow_overlapping_ips = true                                                (CTL)
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = true                                   (CTL)
notify_nova_on_port_data_changes = true                                     (CTL)
api_workers = 3                                                             (CTL)

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = neutron
password = NEUTRON_PASS

[nova]
auth_url = http://controller:5000                                           (CTL)
auth_type = password                                                        (CTL)
project_domain_name = Default                                               (CTL)
user_domain_name = Default                                                  (CTL)
region_name = RegionOne                                                     (CTL)
project_name = service                                                      (CTL)
username = nova                                                             (CTL)
password = NOVA_PASS                                                        (CTL)

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
```

Explanation:
- In the [database] section, configure the database entry.
- In the [DEFAULT] section, enable the ml2 and router plug-ins, allow overlapping IP addresses, and configure the RabbitMQ message queue entry.
- In the [DEFAULT] and [keystone_authtoken] sections, configure the identity service entry.
- In the [DEFAULT] and [nova] sections, configure the network to notify nova of compute network topology changes.
- In the [oslo_concurrency] section, configure the lock path.

Note:
- Replace NEUTRON_DBPASS with the password of the neutron database.
- Replace RABBIT_PASS with the password of the openstack account in RabbitMQ.
- Replace NEUTRON_PASS with the password of the neutron user.
- Replace NOVA_PASS with the password of the nova user.

Configure the ML2 plug-in:

```shell
vim /etc/neutron/plugins/ml2/ml2_conf.ini

[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security

[ml2_type_flat]
flat_networks = provider

[ml2_type_vxlan]
vni_ranges = 1:1000

[securitygroup]
enable_ipset = true
```

Create the /etc/neutron/plugin.ini symbolic link:

```shell
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
```

Note:
- In the [ml2] section, enable the flat, vlan, and vxlan networks, enable the linuxbridge and l2population mechanisms, and enable the port security extension driver.
- In the [ml2_type_flat] section, configure the flat network as the provider virtual network.
- In the [ml2_type_vxlan] section, configure the VXLAN network identifier range.
- In the [securitygroup] section, allow ipset.

Supplement: the specific l2 configuration can be adapted to user needs; this document uses provider network + linuxbridge.

Configure the Linux bridge agent:

```shell
vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini

[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME

[vxlan]
enable_vxlan = true
local_ip = OVERLAY_INTERFACE_IP_ADDRESS
l2_population = true

[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
```

Explanation:
- In the [linux_bridge] section, map the provider virtual network to the physical network interface.
- In the [vxlan] section, enable the vxlan overlay network, configure the IP address of the physical network interface that handles the overlay network, and enable layer-2 population.
- In the [securitygroup] section, allow security groups and configure the linux bridge iptables firewall driver.

Note:
- Replace PROVIDER_INTERFACE_NAME with the physical network interface.
- Replace OVERLAY_INTERFACE_IP_ADDRESS with the management IP address of the controller node.

Configure the Layer-3 agent:

```shell
vim /etc/neutron/l3_agent.ini                                               (CTL)

[DEFAULT]
interface_driver = linuxbridge
```

Explanation: in the [DEFAULT] section, configure the interface driver as linuxbridge.

Configure the DHCP agent:

```shell
vim /etc/neutron/dhcp_agent.ini                                             (CTL)

[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
```

Explanation: in the [DEFAULT] section, configure the linuxbridge interface driver and the Dnsmasq DHCP driver, and enable isolated metadata.

Configure the metadata agent:

```shell
vim /etc/neutron/metadata_agent.ini                                         (CTL)

[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = METADATA_SECRET
```

Explanation: in the [DEFAULT] section, configure the metadata host and the shared secret.

Note: replace METADATA_SECRET with a suitable metadata proxy secret.

Configure the related settings in nova:

```shell
vim /etc/nova/nova.conf

[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = Default
user_domain_name = Default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
service_metadata_proxy = true                                               (CTL)
metadata_proxy_shared_secret = METADATA_SECRET                              (CTL)
```

Explanation: in the [neutron] section, configure the access parameters, enable the metadata proxy, and configure the secret.

Note:
- Replace NEUTRON_PASS with the password of the neutron user.
- Replace METADATA_SECRET with a suitable metadata proxy secret.

Synchronize the database:

```shell
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
--config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
```

Restart the compute API service:

```shell
systemctl restart openstack-nova-api.service
```

Start the network services:

```shell
systemctl enable neutron-server.service neutron-linuxbridge-agent.service \   (CTL)
                 neutron-dhcp-agent.service neutron-metadata-agent.service \
                 neutron-l3-agent.service
systemctl restart neutron-server.service neutron-linuxbridge-agent.service \  (CTL)
                  neutron-dhcp-agent.service neutron-metadata-agent.service \
                  neutron-l3-agent.service

systemctl enable neutron-linuxbridge-agent.service                          (CPT)
systemctl restart neutron-linuxbridge-agent.service openstack-nova-compute.service   (CPT)
```

Verification

Verify that the neutron agents started successfully:

```shell
openstack network agent list
```

### Cinder Installation

Create the database, service credentials, and API endpoints.

Create the database:

```shell
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE cinder;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \
IDENTIFIED BY 'CINDER_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \
IDENTIFIED BY 'CINDER_DBPASS';
MariaDB [(none)]> exit
```

Note: replace CINDER_DBPASS with the password for the cinder database.

```shell
source ~/.admin-openrc
```

Create the cinder service credentials:

openstack
user create --domain default --password-prompt cinder
openstack role add --project service --user cinder admin
openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
Create the Block Storage service API endpoints:
openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s
Install the packages:
yum install openstack-cinder-api openstack-cinder-scheduler (CTL)
yum install lvm2 device-mapper-persistent-data scsi-target-utils rpcbind nfs-utils \ (STG)
            openstack-cinder-volume openstack-cinder-backup
Prepare the storage device (the following is only an example):
pvcreate /dev/vdb
vgcreate cinder-volumes /dev/vdb
vim /etc/lvm/lvm.conf
devices {
...
filter = [ "a/vdb/", "r/.*/"]
Explanation: in the devices section, add a filter that accepts the /dev/vdb device and rejects all other devices.
Prepare NFS:
mkdir -p /root/cinder/backup
cat << EOF >> /etc/exports
/root/cinder/backup 192.168.1.0/24(rw,sync,no_root_squash,no_all_squash)
EOF
Configure cinder:
vim /etc/cinder/cinder.conf
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone
my_ip = 10.0.0.11
enabled_backends = lvm (STG)
backup_driver=cinder.backup.drivers.nfs.NFSBackupDriver (STG)
backup_share=HOST:PATH (STG)
[database]
connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = cinder
password = CINDER_PASS
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver (STG)
volume_group = cinder-volumes (STG)
iscsi_protocol = iscsi (STG)
iscsi_helper = tgtadm (STG)
Explanation: the [database] section configures the database connection; the [DEFAULT] section configures the RabbitMQ message queue entry point and my_ip; the [DEFAULT] and [keystone_authtoken] sections configure the Identity service entry point; the [oslo_concurrency] section configures the lock path.
Note: replace CINDER_DBPASS with the password of the cinder database; replace RABBIT_PASS with the password of the openstack account in RabbitMQ; set my_ip to the management IP address of the controller node; replace CINDER_PASS with the password of the cinder user; replace HOST:PATH with the NFS host IP and shared path.
Synchronize the database:
su -s /bin/sh -c "cinder-manage db sync" cinder (CTL)
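Before moving on to the nova configuration and service start-up below, it can be worth confirming that the backend prepared above matches what cinder.conf now expects. The following is a minimal sanity check, not part of the official procedure, assuming the example device /dev/vdb and the backup path /root/cinder/backup used above; adapt the names to your own environment.

```shell
# Run on the storage node (STG).

# The volume group referenced by volume_group in the [lvm] section
# should exist and contain /dev/vdb:
vgs cinder-volumes
pvs /dev/vdb

# The NFS export used by backup_share should appear in /etc/exports:
grep cinder /etc/exports
```

If vgs cannot find the volume group, re-check the filter line added to /etc/lvm/lvm.conf.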
Configure nova:
vim /etc/nova/nova.conf (CTL)
[cinder]
os_region_name = RegionOne
Restart the Compute API service:
systemctl restart openstack-nova-api.service
Start the cinder services:
systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service (CTL)
systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service (CTL)
systemctl enable rpcbind.service nfs-server.service tgtd.service iscsid.service \ (STG)
                 openstack-cinder-volume.service \
                 openstack-cinder-backup.service
systemctl start rpcbind.service nfs-server.service tgtd.service iscsid.service \ (STG)
                 openstack-cinder-volume.service \
                 openstack-cinder-backup.service
Note: when cinder attaches volumes through tgtadm, modify /etc/tgt/tgtd.conf with the following content so that tgtd can discover the iscsi targets of cinder-volume:
include /var/lib/cinder/volumes/*
Verification:
source ~/.admin-openrc
openstack volume service list","title":"Cinder Installation"},{"location":"install/openEuler-22.03-LTS-SP2/OpenStack-train/#horizon","text":"Install the package:
yum install openstack-dashboard
Modify the following variables in the configuration file:
vim /etc/openstack-dashboard/local_settings
OPENSTACK_HOST = "controller"
ALLOWED_HOSTS = ['*', ]
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'controller:11211',
    }
}
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "member"
WEBROOT = '/dashboard'
POLICY_FILES_PATH = "/etc/openstack-dashboard"
OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 3,
}
Restart the httpd service:
systemctl restart httpd.service memcached.service
Verification: open a browser, enter http://HOSTIP/dashboard/ , and log in to horizon.
Note: replace HOSTIP with the management-plane IP address of the controller node.","title":"Horizon Installation"},{"location":"install/openEuler-22.03-LTS-SP2/OpenStack-train/#tempest","text":"Tempest is the OpenStack integration test service. It is recommended if you need fully automated functional testing of the installed OpenStack environment; otherwise it does not have to be installed.
Install Tempest:
yum install openstack-tempest
Initialize a working directory:
tempest init mytest
Edit the configuration file:
cd mytest
vi etc/tempest.conf
tempest.conf must describe the current OpenStack environment; for details, refer to the official sample configuration.
Run the tests:
tempest run
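The fragment below is only a sketch of the handful of options tempest usually needs for the environment installed in this guide. Every value in it is an assumption (ADMIN_PASS and the IMAGE/FLAVOR/NETWORK placeholders are not defined anywhere in this document) and must be replaced with data from your own deployment, as described in the official sample configuration referenced above.

```shell
# Run inside the mytest directory created by `tempest init mytest`.
cat >> etc/tempest.conf << 'EOF'
[auth]
admin_username = admin
admin_project_name = admin
admin_password = ADMIN_PASS
admin_domain_name = Default

[identity]
uri_v3 = http://controller:5000/v3

[compute]
# Fill in with IDs from `openstack image list` / `openstack flavor list`.
image_ref = IMAGE_UUID
flavor_ref = FLAVOR_ID

[network]
# Fill in with the UUID from `openstack network list`.
public_network_id = NETWORK_UUID
EOF
```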
Install Tempest extensions (optional).
Each OpenStack service also ships its own tempest test packages, and installing them enriches the tempest test suite. In Train we provide extension tests for Cinder, Glance, Keystone, Ironic, and Trove; they can be installed and used with the following command:
yum install python3-cinder-tempest-plugin python3-glance-tempest-plugin python3-ironic-tempest-plugin python3-keystone-tempest-plugin python3-trove-tempest-plugin","title":"Tempest Installation"},{"location":"install/openEuler-22.03-LTS-SP2/OpenStack-train/#ironic","text":"Ironic is the OpenStack Bare Metal service. It is recommended if you need to provision bare-metal machines; otherwise it does not have to be installed.
Set up the database.
The Bare Metal service stores information in a database. Create an ironic database that the ironic user can access, replacing IRONIC_DBPASSWORD with a suitable password:
mysql -u root -p
MariaDB [(none)]> CREATE DATABASE ironic CHARACTER SET utf8;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'localhost' \
IDENTIFIED BY 'IRONIC_DBPASSWORD';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'%' \
IDENTIFIED BY 'IRONIC_DBPASSWORD';
2. Install the packages:
yum install openstack-ironic-api openstack-ironic-conductor python3-ironicclient
Start the services:
systemctl enable openstack-ironic-api openstack-ironic-conductor
systemctl start openstack-ironic-api openstack-ironic-conductor
Create the service user credentials.
1. Create the Bare Metal service user:
openstack user create --password IRONIC_PASSWORD \
--email ironic@example.com ironic
openstack role add --project service --user ironic admin
openstack service create --name ironic \
--description "Ironic baremetal provisioning service" baremetal
2. Create the Bare Metal service API endpoints:
openstack endpoint create --region RegionOne baremetal admin http://$IRONIC_NODE:6385
openstack endpoint create --region RegionOne baremetal public http://$IRONIC_NODE:6385
openstack endpoint create --region RegionOne baremetal internal http://$IRONIC_NODE:6385
Configure the ironic-api service.
The configuration file path is /etc/ironic/ironic.conf.
1. Configure the location of the database with the connection option as shown below, replacing IRONIC_DBPASSWORD with the password of the ironic user and DB_IP with the IP address of the database server:
[database]
# The SQLAlchemy connection string used to connect to the
# database (string value)
connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic
2. Configure the ironic-api service to use the RabbitMQ message broker with the following options, replacing RPC_* with the actual RabbitMQ address and credentials:
[DEFAULT]
# A URL representing the messaging driver to use and its full
# configuration.
(string value) transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/ \u7528\u6237\u4e5f\u53ef\u81ea\u884c\u4f7f\u7528json-rpc\u65b9\u5f0f\u66ff\u6362rabbitmq 3\u3001\u914d\u7f6eironic-api\u670d\u52a1\u4f7f\u7528\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u7684\u51ed\u8bc1\uff0c\u66ff\u6362 PUBLIC_IDENTITY_IP \u4e3a\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u5668\u7684\u516c\u5171IP\uff0c\u66ff\u6362 PRIVATE_IDENTITY_IP \u4e3a\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u5668\u7684\u79c1\u6709IP\uff0c\u66ff\u6362 IRONIC_PASSWORD \u4e3a\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u4e2d ironic \u7528\u6237\u7684\u5bc6\u7801\uff1a [DEFAULT] # Authentication strategy used by ironic-api: one of # \"keystone\" or \"noauth\". \"noauth\" should not be used in a # production environment because all authentication will be # disabled. (string value) auth_strategy=keystone [keystone_authtoken] # Authentication type to load (string value) auth_type=password # Complete public Identity API endpoint (string value) www_authenticate_uri=http://PUBLIC_IDENTITY_IP:5000 # Complete admin Identity API endpoint. (string value) auth_url=http://PRIVATE_IDENTITY_IP:5000 # Service username. (string value) username=ironic # Service account password. (string value) password=IRONIC_PASSWORD # Service tenant name. (string value) project_name=service # Domain name containing project (string value) project_domain_name=Default # User's domain name (string value) user_domain_name=Default 4\u3001\u521b\u5efa\u88f8\u91d1\u5c5e\u670d\u52a1\u6570\u636e\u5e93\u8868 ironic-dbsync --config-file /etc/ironic/ironic.conf create_schema 5\u3001\u91cd\u542fironic-api\u670d\u52a1 sudo systemctl restart openstack-ironic-api \u914d\u7f6eironic-conductor\u670d\u52a1 1\u3001\u66ff\u6362 HOST_IP \u4e3aconductor host\u7684IP [DEFAULT] # IP address of this host. If unset, will determine the IP # programmatically. If unable to do so, will use \"127.0.0.1\". # (string value) my_ip=HOST_IP 2\u3001\u914d\u7f6e\u6570\u636e\u5e93\u7684\u4f4d\u7f6e\uff0cironic-conductor\u5e94\u8be5\u4f7f\u7528\u548cironic-api\u76f8\u540c\u7684\u914d\u7f6e\u3002\u66ff\u6362 IRONIC_DBPASSWORD \u4e3a ironic \u7528\u6237\u7684\u5bc6\u7801\uff0c\u66ff\u6362DB_IP\u4e3aDB\u670d\u52a1\u5668\u6240\u5728\u7684IP\u5730\u5740\uff1a [database] # The SQLAlchemy connection string to use to connect to the # database. (string value) connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic 3\u3001\u901a\u8fc7\u4ee5\u4e0b\u9009\u9879\u914d\u7f6eironic-api\u670d\u52a1\u4f7f\u7528RabbitMQ\u6d88\u606f\u4ee3\u7406\uff0cironic-conductor\u5e94\u8be5\u4f7f\u7528\u548cironic-api\u76f8\u540c\u7684\u914d\u7f6e\uff0c\u66ff\u6362 RPC_* \u4e3aRabbitMQ\u7684\u8be6\u7ec6\u5730\u5740\u548c\u51ed\u8bc1 [DEFAULT] # A URL representing the messaging driver to use and its full # configuration. 
(string value) transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/ \u7528\u6237\u4e5f\u53ef\u81ea\u884c\u4f7f\u7528json-rpc\u65b9\u5f0f\u66ff\u6362rabbitmq 4\u3001\u914d\u7f6e\u51ed\u8bc1\u8bbf\u95ee\u5176\u4ed6OpenStack\u670d\u52a1 \u4e3a\u4e86\u4e0e\u5176\u4ed6OpenStack\u670d\u52a1\u8fdb\u884c\u901a\u4fe1\uff0c\u88f8\u91d1\u5c5e\u670d\u52a1\u5728\u8bf7\u6c42\u5176\u4ed6\u670d\u52a1\u65f6\u9700\u8981\u4f7f\u7528\u670d\u52a1\u7528\u6237\u4e0eOpenStack Identity\u670d\u52a1\u8fdb\u884c\u8ba4\u8bc1\u3002\u8fd9\u4e9b\u7528\u6237\u7684\u51ed\u636e\u5fc5\u987b\u5728\u4e0e\u76f8\u5e94\u670d\u52a1\u76f8\u5173\u7684\u6bcf\u4e2a\u914d\u7f6e\u6587\u4ef6\u4e2d\u8fdb\u884c\u914d\u7f6e\u3002 [neutron] - \u8bbf\u95eeOpenStack\u7f51\u7edc\u670d\u52a1 [glance] - \u8bbf\u95eeOpenStack\u955c\u50cf\u670d\u52a1 [swift] - \u8bbf\u95eeOpenStack\u5bf9\u8c61\u5b58\u50a8\u670d\u52a1 [cinder] - \u8bbf\u95eeOpenStack\u5757\u5b58\u50a8\u670d\u52a1 [inspector] - \u8bbf\u95eeOpenStack\u88f8\u91d1\u5c5eintrospection\u670d\u52a1 [service_catalog] - \u4e00\u4e2a\u7279\u6b8a\u9879\u7528\u4e8e\u4fdd\u5b58\u88f8\u91d1\u5c5e\u670d\u52a1\u4f7f\u7528\u7684\u51ed\u8bc1\uff0c\u8be5\u51ed\u8bc1\u7528\u4e8e\u53d1\u73b0\u6ce8\u518c\u5728OpenStack\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u76ee\u5f55\u4e2d\u7684\u81ea\u5df1\u7684API URL\u7aef\u70b9 \u7b80\u5355\u8d77\u89c1\uff0c\u53ef\u4ee5\u5bf9\u6240\u6709\u670d\u52a1\u4f7f\u7528\u540c\u4e00\u4e2a\u670d\u52a1\u7528\u6237\u3002\u4e3a\u4e86\u5411\u540e\u517c\u5bb9\uff0c\u8be5\u7528\u6237\u5e94\u8be5\u548cironic-api\u670d\u52a1\u7684[keystone_authtoken]\u6240\u914d\u7f6e\u7684\u4e3a\u540c\u4e00\u4e2a\u7528\u6237\u3002\u4f46\u8fd9\u4e0d\u662f\u5fc5\u987b\u7684\uff0c\u4e5f\u53ef\u4ee5\u4e3a\u6bcf\u4e2a\u670d\u52a1\u521b\u5efa\u5e76\u914d\u7f6e\u4e0d\u540c\u7684\u670d\u52a1\u7528\u6237\u3002 \u5728\u4e0b\u9762\u7684\u793a\u4f8b\u4e2d\uff0c\u7528\u6237\u8bbf\u95eeOpenStack\u7f51\u7edc\u670d\u52a1\u7684\u8eab\u4efd\u9a8c\u8bc1\u4fe1\u606f\u914d\u7f6e\u4e3a\uff1a \u7f51\u7edc\u670d\u52a1\u90e8\u7f72\u5728\u540d\u4e3aRegionOne\u7684\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u57df\u4e2d\uff0c\u4ec5\u5728\u670d\u52a1\u76ee\u5f55\u4e2d\u6ce8\u518c\u516c\u5171\u7aef\u70b9\u63a5\u53e3 \u8bf7\u6c42\u65f6\u4f7f\u7528\u7279\u5b9a\u7684CA SSL\u8bc1\u4e66\u8fdb\u884cHTTPS\u8fde\u63a5 \u4e0eironic-api\u670d\u52a1\u914d\u7f6e\u76f8\u540c\u7684\u670d\u52a1\u7528\u6237 \u52a8\u6001\u5bc6\u7801\u8ba4\u8bc1\u63d2\u4ef6\u57fa\u4e8e\u5176\u4ed6\u9009\u9879\u53d1\u73b0\u5408\u9002\u7684\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1API\u7248\u672c [neutron] # Authentication type to load (string value) auth_type = password # Authentication URL (string value) auth_url=https://IDENTITY_IP:5000/ # Username (string value) username=ironic # User's password (string value) password=IRONIC_PASSWORD # Project name to scope to (string value) project_name=service # Domain ID containing project (string value) project_domain_id=default # User's domain id (string value) user_domain_id=default # PEM encoded Certificate Authority to use when verifying # HTTPs connections. (string value) cafile=/opt/stack/data/ca-bundle.pem # The default region_name for endpoint URL discovery. (string # value) region_name = RegionOne # List of interfaces, in order of preference, for endpoint # URL. 
(list value)
valid_interfaces=public
By default, to communicate with other services, the Bare Metal service tries to discover a suitable endpoint for each service through the service catalog of the Identity service. If you want to use a different endpoint for a particular service, specify it with the endpoint_override option in the Bare Metal service configuration file:
[neutron]
...
endpoint_override =
5. Configure the allowed drivers and hardware types.
Set the hardware types that the ironic-conductor service may use with enabled_hardware_types:
[DEFAULT]
enabled_hardware_types = ipmi
Configure the hardware interfaces:
enabled_boot_interfaces = pxe
enabled_deploy_interfaces = direct,iscsi
enabled_inspect_interfaces = inspector
enabled_management_interfaces = ipmitool
enabled_power_interfaces = ipmitool
Configure the interface defaults:
[DEFAULT]
default_deploy_interface = direct
default_network_interface = neutron
If any driver that uses Direct deploy is enabled, the Swift backend of the Image service must be installed and configured. The Ceph Object Gateway (RADOS Gateway) is also supported as an Image service backend.
6. Restart the ironic-conductor service:
sudo systemctl restart openstack-ironic-conductor
Configure the httpd service.
Create the httpd root directory used by ironic and set its owner and group; the path must match the http_root option in the [deploy] section of /etc/ironic/ironic.conf:
mkdir -p /var/lib/ironic/httproot
chown ironic.ironic /var/lib/ironic/httproot
Install and configure the httpd service:
1. Install the httpd service (skip this step if it is already installed):
yum install httpd -y
2. Create the file /etc/httpd/conf.d/openstack-ironic-httpd.conf with the following content:
Listen 8080
<VirtualHost *:8080>
    ServerName ironic.openeuler.com
    ErrorLog "/var/log/httpd/openstack-ironic-httpd-error_log"
    CustomLog "/var/log/httpd/openstack-ironic-httpd-access_log" "%h %l %u %t \"%r\" %>s %b"
    DocumentRoot "/var/lib/ironic/httproot"
    <Directory "/var/lib/ironic/httproot">
        Options Indexes FollowSymLinks
        Require all granted
    </Directory>
    LogLevel warn
    AddDefaultCharset UTF-8
    EnableSendfile on
</VirtualHost>
Note that the listening port must match the port specified by the http_url option in the [deploy] section of /etc/ironic/ironic.conf.
Restart the httpd service:
systemctl restart httpd
7.
deploy ramdisk\u955c\u50cf\u5236\u4f5c T\u7248\u7684ramdisk\u955c\u50cf\u652f\u6301\u901a\u8fc7ironic-python-agent\u670d\u52a1\u6216disk-image-builder\u5de5\u5177\u5236\u4f5c\uff0c\u4e5f\u53ef\u4ee5\u4f7f\u7528\u793e\u533a\u6700\u65b0\u7684ironic-python-agent-builder\u3002\u7528\u6237\u4e5f\u53ef\u4ee5\u81ea\u884c\u9009\u62e9\u5176\u4ed6\u5de5\u5177\u5236\u4f5c\u3002 \u82e5\u4f7f\u7528T\u7248\u539f\u751f\u5de5\u5177\uff0c\u5219\u9700\u8981\u5b89\u88c5\u5bf9\u5e94\u7684\u8f6f\u4ef6\u5305\u3002 yum install openstack-ironic-python-agent \u6216\u8005 yum install diskimage-builder \u5177\u4f53\u7684\u4f7f\u7528\u65b9\u6cd5\u53ef\u4ee5\u53c2\u8003 \u5b98\u65b9\u6587\u6863 \u8fd9\u91cc\u4ecb\u7ecd\u4e0b\u4f7f\u7528ironic-python-agent-builder\u6784\u5efaironic\u4f7f\u7528\u7684deploy\u955c\u50cf\u7684\u5b8c\u6574\u8fc7\u7a0b\u3002 \u5b89\u88c5 ironic-python-agent-builder 1. \u5b89\u88c5\u5de5\u5177\uff1a ```shell pip install ironic-python-agent-builder ``` 2. \u4fee\u6539\u4ee5\u4e0b\u6587\u4ef6\u4e2d\u7684python\u89e3\u91ca\u5668\uff1a ```shell /usr/bin/yum /usr/libexec/urlgrabber-ext-down ``` 3. \u5b89\u88c5\u5176\u5b83\u5fc5\u987b\u7684\u5de5\u5177\uff1a ```shell yum install git ``` \u7531\u4e8e`DIB`\u4f9d\u8d56`semanage`\u547d\u4ee4\uff0c\u6240\u4ee5\u5728\u5236\u4f5c\u955c\u50cf\u4e4b\u524d\u786e\u5b9a\u8be5\u547d\u4ee4\u662f\u5426\u53ef\u7528\uff1a`semanage --help`\uff0c\u5982\u679c\u63d0\u793a\u65e0\u6b64\u547d\u4ee4\uff0c\u5b89\u88c5\u5373\u53ef\uff1a ```shell # \u5148\u67e5\u8be2\u9700\u8981\u5b89\u88c5\u54ea\u4e2a\u5305 [root@localhost ~]# yum provides /usr/sbin/semanage \u5df2\u52a0\u8f7d\u63d2\u4ef6\uff1afastestmirror Loading mirror speeds from cached hostfile * base: mirror.vcu.edu * extras: mirror.vcu.edu * updates: mirror.math.princeton.edu policycoreutils-python-2.5-34.el7.aarch64 : SELinux policy core python utilities \u6e90 \uff1abase \u5339\u914d\u6765\u6e90\uff1a \u6587\u4ef6\u540d \uff1a/usr/sbin/semanage # \u5b89\u88c5 [root@localhost ~]# yum install policycoreutils-python ``` \u5236\u4f5c\u955c\u50cf \u5982\u679c\u662f`arm`\u67b6\u6784\uff0c\u9700\u8981\u6dfb\u52a0\uff1a ```shell export ARCH=aarch64 ``` \u57fa\u672c\u7528\u6cd5\uff1a ```shell usage: ironic-python-agent-builder [-h] [-r RELEASE] [-o OUTPUT] [-e ELEMENT] [-b BRANCH] [-v] [--extra-args EXTRA_ARGS] distribution positional arguments: distribution Distribution to use optional arguments: -h, --help show this help message and exit -r RELEASE, --release RELEASE Distribution release to use -o OUTPUT, --output OUTPUT Output base file name -e ELEMENT, --element ELEMENT Additional DIB element to use -b BRANCH, --branch BRANCH If set, override the branch that is used for ironic- python-agent and requirements -v, --verbose Enable verbose logging in diskimage-builder --extra-args EXTRA_ARGS Extra arguments to pass to diskimage-builder ``` \u4e3e\u4f8b\u8bf4\u660e\uff1a ```shell ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky ``` \u5141\u8bb8ssh\u767b\u5f55 \u521d\u59cb\u5316\u73af\u5883\u53d8\u91cf\uff0c\u7136\u540e\u5236\u4f5c\u955c\u50cf\uff1a ```shell export DIB_DEV_USER_USERNAME=ipa \\ export DIB_DEV_USER_PWDLESS_SUDO=yes \\ export DIB_DEV_USER_PASSWORD='123' ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky -e selinux-permissive -e devuser ``` \u6307\u5b9a\u4ee3\u7801\u4ed3\u5e93 \u521d\u59cb\u5316\u5bf9\u5e94\u7684\u73af\u5883\u53d8\u91cf\uff0c\u7136\u540e\u5236\u4f5c\u955c\u50cf\uff1a ```shell # 
\u6307\u5b9a\u4ed3\u5e93\u5730\u5740\u4ee5\u53ca\u7248\u672c DIB_REPOLOCATION_ironic_python_agent=git@172.20.2.149:liuzz/ironic-python-agent.git DIB_REPOREF_ironic_python_agent=origin/develop # \u76f4\u63a5\u4ecegerrit\u4e0aclone\u4ee3\u7801 DIB_REPOLOCATION_ironic_python_agent=https://review.opendev.org/openstack/ironic-python-agent DIB_REPOREF_ironic_python_agent=refs/changes/43/701043/1 ``` \u53c2\u8003\uff1a[source-repositories](https://docs.openstack.org/diskimage-builder/latest/elements/source-repositories/README.html)\u3002 \u6307\u5b9a\u4ed3\u5e93\u5730\u5740\u53ca\u7248\u672c\u9a8c\u8bc1\u6210\u529f\u3002 \u6ce8\u610f \u539f\u751f\u7684openstack\u91cc\u7684pxe\u914d\u7f6e\u6587\u4ef6\u7684\u6a21\u7248\u4e0d\u652f\u6301arm64\u67b6\u6784\uff0c\u9700\u8981\u81ea\u5df1\u5bf9\u539f\u751fopenstack\u4ee3\u7801\u8fdb\u884c\u4fee\u6539\uff1a \u5728T\u7248\u4e2d\uff0c\u793e\u533a\u7684ironic\u4ecd\u7136\u4e0d\u652f\u6301arm64\u4f4d\u7684uefi pxe\u542f\u52a8\uff0c\u8868\u73b0\u4e3a\u751f\u6210\u7684grub.cfg\u6587\u4ef6(\u4e00\u822c\u4f4d\u4e8e/tftpboot/\u4e0b)\u683c\u5f0f\u4e0d\u5bf9\u800c\u5bfc\u81f4pxe\u542f\u52a8\u5931\u8d25 \u9700\u8981\u7528\u6237\u5bf9\u751f\u6210grub.cfg\u7684\u4ee3\u7801\u903b\u8f91\u81ea\u884c\u4fee\u6539\u3002 ironic\u5411ipa\u53d1\u9001\u67e5\u8be2\u547d\u4ee4\u6267\u884c\u72b6\u6001\u8bf7\u6c42\u7684tls\u62a5\u9519\uff1a T\u7248\u7684ipa\u548cironic\u9ed8\u8ba4\u90fd\u4f1a\u5f00\u542ftls\u8ba4\u8bc1\u7684\u65b9\u5f0f\u5411\u5bf9\u65b9\u53d1\u9001\u8bf7\u6c42\uff0c\u8ddf\u636e\u5b98\u7f51\u7684\u8bf4\u660e\u8fdb\u884c\u5173\u95ed\u5373\u53ef\u3002 \u4fee\u6539ironic\u914d\u7f6e\u6587\u4ef6(/etc/ironic/ironic.conf)\u4e0b\u9762\u7684\u914d\u7f6e\u4e2d\u6dfb\u52a0ipa-insecure=1\uff1a [agent] verify_ca = False [pxe] pxe_append_params = nofb nomodeset vga=normal coreos.autologin ipa-insecure=1 ramdisk\u955c\u50cf\u4e2d\u6dfb\u52a0ipa\u914d\u7f6e\u6587\u4ef6/etc/ironic_python_agent/ironic_python_agent.conf\u5e76\u914d\u7f6etls\u7684\u914d\u7f6e\u5982\u4e0b\uff1a /etc/ironic_python_agent/ironic_python_agent.conf (\u9700\u8981\u63d0\u524d\u521b\u5efa/etc/ironic_python_agent\u76ee\u5f55\uff09 [DEFAULT] enable_auto_tls = False \u8bbe\u7f6e\u6743\u9650\uff1a chown -R ipa.ipa /etc/ironic_python_agent/ \u4fee\u6539ipa\u670d\u52a1\u7684\u670d\u52a1\u542f\u52a8\u6587\u4ef6\uff0c\u6dfb\u52a0\u914d\u7f6e\u6587\u4ef6\u9009\u9879 vim usr/lib/systemd/system/ironic-python-agent.service [Unit] Description=Ironic Python Agent After=network-online.target [Service] ExecStartPre=/sbin/modprobe vfat ExecStart=/usr/local/bin/ironic-python-agent --config-file /etc/ironic_python_agent/ironic_python_agent.conf Restart=always RestartSec=30s [Install] WantedBy=multi-user.target \u5728Train\u4e2d\uff0c\u6211\u4eec\u8fd8\u63d0\u4f9b\u4e86ironic-inspector\u7b49\u670d\u52a1\uff0c\u7528\u6237\u53ef\u6839\u636e\u81ea\u8eab\u9700\u6c42\u5b89\u88c5\u3002","title":"Ironic \u5b89\u88c5"},{"location":"install/openEuler-22.03-LTS-SP2/OpenStack-train/#kolla","text":"Kolla\u4e3aOpenStack\u670d\u52a1\u63d0\u4f9b\u751f\u4ea7\u73af\u5883\u53ef\u7528\u7684\u5bb9\u5668\u5316\u90e8\u7f72\u7684\u529f\u80fd\u3002 Kolla\u7684\u5b89\u88c5\u5341\u5206\u7b80\u5355\uff0c\u53ea\u9700\u8981\u5b89\u88c5\u5bf9\u5e94\u7684RPM\u5305\u5373\u53ef yum install openstack-kolla openstack-kolla-ansible \u5b89\u88c5\u5b8c\u540e\uff0c\u5c31\u53ef\u4ee5\u4f7f\u7528 kolla-ansible , kolla-build , kolla-genpwd , kolla-mergepwd 
and similar commands can be used to build images and deploy the containerized environment.","title":"Kolla Installation"},{"location":"install/openEuler-22.03-LTS-SP2/OpenStack-train/#trove","text":"Trove is the OpenStack Database service. It is recommended if you want to use the database service provided by OpenStack; otherwise it does not have to be installed.
Set up the database.
The Database service stores information in a database. Create a trove database that the trove user can access, replacing TROVE_DBPASSWORD with a suitable password:
mysql -u root -p
MariaDB [(none)]> CREATE DATABASE trove CHARACTER SET utf8;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'localhost' \
IDENTIFIED BY 'TROVE_DBPASSWORD';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'%' \
IDENTIFIED BY 'TROVE_DBPASSWORD';
Create the service user credentials.
1. Create the Trove service user:
openstack user create --domain default --password-prompt trove
openstack role add --project service --user trove admin
openstack service create --name trove --description "Database" database
Explanation: replace TROVE_PASSWORD with the password of the trove user.
2. Create the Database service API endpoints:
openstack endpoint create --region RegionOne database public http://controller:8779/v1.0/%\(tenant_id\)s
openstack endpoint create --region RegionOne database internal http://controller:8779/v1.0/%\(tenant_id\)s
openstack endpoint create --region RegionOne database admin http://controller:8779/v1.0/%\(tenant_id\)s
Install and configure the Trove components.
1. Install the Trove packages:
yum install openstack-trove python3-troveclient
2.
\u914d\u7f6e`trove.conf` ```shell script vim /etc/trove/trove.conf [DEFAULT] log_dir = /var/log/trove trove_auth_url = http://controller:5000/ nova_compute_url = http://controller:8774/v2 cinder_url = http://controller:8776/v1 swift_url = http://controller:8080/v1/AUTH_ rpc_backend = rabbit transport_url = rabbit://openstack:RABBIT_PASS@controller:5672 auth_strategy = keystone add_addresses = True api_paste_config = /etc/trove/api-paste.ini nova_proxy_admin_user = admin nova_proxy_admin_pass = ADMIN_PASSWORD nova_proxy_admin_tenant_name = service taskmanager_manager = trove.taskmanager.manager.Manager use_nova_server_config_drive = True # Set these if using Neutron Networking network_driver = trove.network.neutron.NeutronDriver network_label_regex = .* [database] connection = mysql+pymysql://trove:TROVE_DBPASSWORD@controller/trove [keystone_authtoken] www_authenticate_uri = http://controller:5000/ auth_url = http://controller:5000/ auth_type = password project_domain_name = default user_domain_name = default project_name = service username = trove password = TROVE_PASSWORD **\u89e3\u91ca\uff1a** - [Default] \u5206\u7ec4\u4e2d nova_compute_url \u548c cinder_url \u4e3aNova\u548cCinder\u5728Keystone\u4e2d\u521b\u5efa\u7684endpoint - nova_proxy_XXX \u4e3a\u4e00\u4e2a\u80fd\u8bbf\u95eeNova\u670d\u52a1\u7684\u7528\u6237\u4fe1\u606f\uff0c\u4e0a\u4f8b\u4e2d\u4f7f\u7528 admin \u7528\u6237\u4e3a\u4f8b - transport_url \u4e3a RabbitMQ \u8fde\u63a5\u4fe1\u606f\uff0c RABBIT_PASS \u66ff\u6362\u4e3aRabbitMQ\u7684\u5bc6\u7801 - [database] \u5206\u7ec4\u4e2d\u7684 connection \u4e3a\u524d\u9762\u5728mysql\u4e2d\u4e3aTrove\u521b\u5efa\u7684\u6570\u636e\u5e93\u4fe1\u606f - Trove\u7684\u7528\u6237\u4fe1\u606f\u4e2d TROVE_PASSWORD`\u66ff\u6362\u4e3a\u5b9e\u9645trove\u7528\u6237\u7684\u5bc6\u7801 \u914d\u7f6e trove-guestagent.conf ```shell script vim /etc/trove/trove-guestagent.conf rabbit_host = controller rabbit_password = RABBIT_PASS trove_auth_url = http://controller:5000/ **\u89e3\u91ca\uff1a** `guestagent`\u662ftrove\u4e2d\u4e00\u4e2a\u72ec\u7acb\u7ec4\u4ef6\uff0c\u9700\u8981\u9884\u5148\u5185\u7f6e\u5230Trove\u901a\u8fc7Nova\u521b\u5efa\u7684\u865a\u62df \u673a\u955c\u50cf\u4e2d\uff0c\u5728\u521b\u5efa\u597d\u6570\u636e\u5e93\u5b9e\u4f8b\u540e\uff0c\u4f1a\u8d77guestagent\u8fdb\u7a0b\uff0c\u8d1f\u8d23\u901a\u8fc7\u6d88\u606f\u961f\u5217\uff08RabbitMQ\uff09\u5411Trove\u4e0a \u62a5\u5fc3\u8df3\uff0c\u56e0\u6b64\u9700\u8981\u914d\u7f6eRabbitMQ\u7684\u7528\u6237\u548c\u5bc6\u7801\u4fe1\u606f\u3002 **\u4eceVictoria\u7248\u5f00\u59cb\uff0cTrove\u4f7f\u7528\u4e00\u4e2a\u7edf\u4e00\u7684\u955c\u50cf\u6765\u8dd1\u4e0d\u540c\u7c7b\u578b\u7684\u6570\u636e\u5e93\uff0c\u6570\u636e\u5e93\u670d\u52a1\u8fd0\u884c\u5728Guest\u865a\u62df\u673a\u7684Docker\u5bb9\u5668\u4e2d\u3002** - `RABBIT_PASS`\u66ff\u6362\u4e3aRabbitMQ\u7684\u5bc6\u7801 4. \u751f\u6210\u6570\u636e`Trove`\u6570\u636e\u5e93\u8868 ```shell script su -s /bin/sh -c \"trove-manage db_sync\" trove \u5b8c\u6210\u5b89\u88c5\u914d\u7f6e \u914d\u7f6e Trove \u670d\u52a1\u81ea\u542f\u52a8 ```shell script systemctl enable openstack-trove-api.service \\ openstack-trove-taskmanager.service \\ openstack-trove-conductor.service 2. 
\u542f\u52a8\u670d\u52a1 ```shell script systemctl start openstack-trove-api.service \\ openstack-trove-taskmanager.service \\ openstack-trove-conductor.service","title":"Trove \u5b89\u88c5"},{"location":"install/openEuler-22.03-LTS-SP2/OpenStack-train/#swift","text":"Swift \u63d0\u4f9b\u4e86\u5f39\u6027\u53ef\u4f38\u7f29\u3001\u9ad8\u53ef\u7528\u7684\u5206\u5e03\u5f0f\u5bf9\u8c61\u5b58\u50a8\u670d\u52a1\uff0c\u9002\u5408\u5b58\u50a8\u5927\u89c4\u6a21\u975e\u7ed3\u6784\u5316\u6570\u636e\u3002 \u521b\u5efa\u670d\u52a1\u51ed\u8bc1\u3001API\u7aef\u70b9\u3002 \u521b\u5efa\u670d\u52a1\u51ed\u8bc1 #\u521b\u5efaswift\u7528\u6237\uff1a openstack user create --domain default --password-prompt swift #\u4e3aswift\u7528\u6237\u6dfb\u52a0admin\u89d2\u8272\uff1a openstack role add --project service --user swift admin #\u521b\u5efaswift\u670d\u52a1\u5b9e\u4f53\uff1a openstack service create --name swift --description \"OpenStack Object Storage\" object-store \u521b\u5efaswift API \u7aef\u70b9: openstack endpoint create --region RegionOne object-store public http://controller:8080/v1/AUTH_%\\(project_id\\)s openstack endpoint create --region RegionOne object-store internal http://controller:8080/v1/AUTH_%\\(project_id\\)s openstack endpoint create --region RegionOne object-store admin http://controller:8080/v1 \u5b89\u88c5\u8f6f\u4ef6\u5305\uff1a yum install openstack-swift-proxy python3-swiftclient python3-keystoneclient python3-keystonemiddleware memcached \uff08CTL\uff09 \u914d\u7f6eproxy-server\u76f8\u5173\u914d\u7f6e Swift RPM\u5305\u91cc\u5df2\u7ecf\u5305\u542b\u4e86\u4e00\u4e2a\u57fa\u672c\u53ef\u7528\u7684proxy-server.conf\uff0c\u53ea\u9700\u8981\u624b\u52a8\u4fee\u6539\u5176\u4e2d\u7684ip\u548cswift password\u5373\u53ef\u3002 ***\u6ce8\u610f*** **\u6ce8\u610f\u66ff\u6362password\u4e3a\u60a8\u5728\u8eab\u4efd\u670d\u52a1\u4e2d\u4e3aswift\u7528\u6237\u9009\u62e9\u7684\u5bc6\u7801** \u5b89\u88c5\u548c\u914d\u7f6e\u5b58\u50a8\u8282\u70b9 \uff08STG\uff09 \u5b89\u88c5\u652f\u6301\u7684\u7a0b\u5e8f\u5305: yum install xfsprogs rsync \u5c06/dev/vdb\u548c/dev/vdc\u8bbe\u5907\u683c\u5f0f\u5316\u4e3a XFS mkfs.xfs /dev/vdb mkfs.xfs /dev/vdc \u521b\u5efa\u6302\u8f7d\u70b9\u76ee\u5f55\u7ed3\u6784: mkdir -p /srv/node/vdb mkdir -p /srv/node/vdc \u627e\u5230\u65b0\u5206\u533a\u7684 UUID: blkid \u7f16\u8f91/etc/fstab\u6587\u4ef6\u5e76\u5c06\u4ee5\u4e0b\u5185\u5bb9\u6dfb\u52a0\u5230\u5176\u4e2d: UUID=\"\" /srv/node/vdb xfs noatime 0 2 UUID=\"\" /srv/node/vdc xfs noatime 0 2 \u6302\u8f7d\u8bbe\u5907\uff1a mount /srv/node/vdb mount /srv/node/vdc \u6ce8\u610f \u5982\u679c\u7528\u6237\u4e0d\u9700\u8981\u5bb9\u707e\u529f\u80fd\uff0c\u4ee5\u4e0a\u6b65\u9aa4\u53ea\u9700\u8981\u521b\u5efa\u4e00\u4e2a\u8bbe\u5907\u5373\u53ef\uff0c\u540c\u65f6\u53ef\u4ee5\u8df3\u8fc7\u4e0b\u9762\u7684rsync\u914d\u7f6e \uff08\u53ef\u9009\uff09\u521b\u5efa\u6216\u7f16\u8f91/etc/rsyncd.conf\u6587\u4ef6\u4ee5\u5305\u542b\u4ee5\u4e0b\u5185\u5bb9: [DEFAULT] uid = swift gid = swift log file = /var/log/rsyncd.log pid file = /var/run/rsyncd.pid address = MANAGEMENT_INTERFACE_IP_ADDRESS [account] max connections = 2 path = /srv/node/ read only = False lock file = /var/lock/account.lock [container] max connections = 2 path = /srv/node/ read only = False lock file = /var/lock/container.lock [object] max connections = 2 path = /srv/node/ read only = False lock file = /var/lock/object.lock \u66ff\u6362MANAGEMENT_INTERFACE_IP_ADDRESS\u4e3a\u5b58\u50a8\u8282\u70b9\u4e0a\u7ba1\u7406\u7f51\u7edc\u7684IP\u5730\u5740 
\u542f\u52a8rsyncd\u670d\u52a1\u5e76\u914d\u7f6e\u5b83\u5728\u7cfb\u7edf\u542f\u52a8\u65f6\u542f\u52a8: systemctl enable rsyncd.service systemctl start rsyncd.service \u5728\u5b58\u50a8\u8282\u70b9\u5b89\u88c5\u548c\u914d\u7f6e\u7ec4\u4ef6 \uff08STG\uff09 \u5b89\u88c5\u8f6f\u4ef6\u5305: yum install openstack-swift-account openstack-swift-container openstack-swift-object \u7f16\u8f91/etc/swift\u76ee\u5f55\u7684account-server.conf\u3001container-server.conf\u548cobject-server.conf\u6587\u4ef6\uff0c\u66ff\u6362bind_ip\u4e3a\u5b58\u50a8\u8282\u70b9\u4e0a\u7ba1\u7406\u7f51\u7edc\u7684IP\u5730\u5740\u3002 \u786e\u4fdd\u6302\u8f7d\u70b9\u76ee\u5f55\u7ed3\u6784\u7684\u6b63\u786e\u6240\u6709\u6743: chown -R swift:swift /srv/node \u521b\u5efarecon\u76ee\u5f55\u5e76\u786e\u4fdd\u5176\u62e5\u6709\u6b63\u786e\u7684\u6240\u6709\u6743\uff1a mkdir -p /var/cache/swift chown -R root:swift /var/cache/swift chmod -R 775 /var/cache/swift \u521b\u5efa\u8d26\u53f7\u73af (CTL) \u5207\u6362\u5230/etc/swift\u76ee\u5f55\u3002 cd /etc/swift \u521b\u5efa\u57fa\u7840account.builder\u6587\u4ef6: swift-ring-builder account.builder create 10 1 1 \u5c06\u6bcf\u4e2a\u5b58\u50a8\u8282\u70b9\u6dfb\u52a0\u5230\u73af\u4e2d\uff1a swift-ring-builder account.builder add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6202 --device DEVICE_NAME --weight DEVICE_WEIGHT \u66ff\u6362STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS\u4e3a\u5b58\u50a8\u8282\u70b9\u4e0a\u7ba1\u7406\u7f51\u7edc\u7684IP\u5730\u5740\u3002\u66ff\u6362DEVICE_NAME\u4e3a\u540c\u4e00\u5b58\u50a8\u8282\u70b9\u4e0a\u7684\u5b58\u50a8\u8bbe\u5907\u540d\u79f0 \u6ce8\u610f *** *\u5bf9\u6bcf\u4e2a\u5b58\u50a8\u8282\u70b9\u4e0a\u7684\u6bcf\u4e2a\u5b58\u50a8\u8bbe\u5907\u91cd\u590d\u6b64\u547d\u4ee4 \u9a8c\u8bc1\u6212\u6307\u5185\u5bb9\uff1a swift-ring-builder account.builder \u91cd\u65b0\u5e73\u8861\u6212\u6307\uff1a swift-ring-builder account.builder rebalance \u521b\u5efa\u5bb9\u5668\u73af (CTL) \u5207\u6362\u5230 /etc/swift \u76ee\u5f55\u3002 \u521b\u5efa\u57fa\u7840 container.builder \u6587\u4ef6\uff1a swift-ring-builder container.builder create 10 1 1 \u5c06\u6bcf\u4e2a\u5b58\u50a8\u8282\u70b9\u6dfb\u52a0\u5230\u73af\u4e2d\uff1a swift-ring-builder container.builder \\ add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6201 \\ --device DEVICE_NAME --weight 100 \u66ff\u6362STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS\u4e3a\u5b58\u50a8\u8282\u70b9\u4e0a\u7ba1\u7406\u7f51\u7edc\u7684IP\u5730\u5740\u3002\u66ff\u6362DEVICE_NAME\u4e3a\u540c\u4e00\u5b58\u50a8\u8282\u70b9\u4e0a\u7684\u5b58\u50a8\u8bbe\u5907\u540d\u79f0 \u6ce8\u610f \u5bf9\u6bcf\u4e2a\u5b58\u50a8\u8282\u70b9\u4e0a\u7684\u6bcf\u4e2a\u5b58\u50a8\u8bbe\u5907\u91cd\u590d\u6b64\u547d\u4ee4 \u9a8c\u8bc1\u6212\u6307\u5185\u5bb9\uff1a swift-ring-builder container.builder \u91cd\u65b0\u5e73\u8861\u6212\u6307\uff1a swift-ring-builder container.builder rebalance \u521b\u5efa\u5bf9\u8c61\u73af (CTL) \u5207\u6362\u5230 /etc/swift \u76ee\u5f55\u3002 \u521b\u5efa\u57fa\u7840 object.builder \u6587\u4ef6\uff1a swift-ring-builder object.builder create 10 1 1 \u5c06\u6bcf\u4e2a\u5b58\u50a8\u8282\u70b9\u6dfb\u52a0\u5230\u73af\u4e2d swift-ring-builder object.builder \\ add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6200 \\ --device DEVICE_NAME --weight 100 
\u66ff\u6362STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS\u4e3a\u5b58\u50a8\u8282\u70b9\u4e0a\u7ba1\u7406\u7f51\u7edc\u7684IP\u5730\u5740\u3002\u66ff\u6362DEVICE_NAME\u4e3a\u540c\u4e00\u5b58\u50a8\u8282\u70b9\u4e0a\u7684\u5b58\u50a8\u8bbe\u5907\u540d\u79f0 \u6ce8\u610f *** *\u5bf9\u6bcf\u4e2a\u5b58\u50a8\u8282\u70b9\u4e0a\u7684\u6bcf\u4e2a\u5b58\u50a8\u8bbe\u5907\u91cd\u590d\u6b64\u547d\u4ee4 \u9a8c\u8bc1\u6212\u6307\u5185\u5bb9\uff1a swift-ring-builder object.builder \u91cd\u65b0\u5e73\u8861\u6212\u6307\uff1a swift-ring-builder object.builder rebalance \u5206\u53d1\u73af\u914d\u7f6e\u6587\u4ef6\uff1a \u5c06 account.ring.gz \uff0c container.ring.gz \u4ee5\u53ca object.ring.gz \u6587\u4ef6\u590d\u5236\u5230\u6bcf\u4e2a\u5b58\u50a8\u8282\u70b9\u548c\u8fd0\u884c\u4ee3\u7406\u670d\u52a1\u7684\u4efb\u4f55\u5176\u4ed6\u8282\u70b9\u4e0a\u7684 /etc/swift \u76ee\u5f55\u3002 \u5b8c\u6210\u5b89\u88c5 \u7f16\u8f91 /etc/swift/swift.conf \u6587\u4ef6 [swift-hash] swift_hash_path_suffix = test-hash swift_hash_path_prefix = test-hash [storage-policy:0] name = Policy-0 default = yes \u7528\u552f\u4e00\u503c\u66ff\u6362 test-hash \u5c06swift.conf\u6587\u4ef6\u590d\u5236\u5230/etc/swift\u6bcf\u4e2a\u5b58\u50a8\u8282\u70b9\u548c\u8fd0\u884c\u4ee3\u7406\u670d\u52a1\u7684\u4efb\u4f55\u5176\u4ed6\u8282\u70b9\u4e0a\u7684\u76ee\u5f55\u3002 \u5728\u6240\u6709\u8282\u70b9\u4e0a\uff0c\u786e\u4fdd\u914d\u7f6e\u76ee\u5f55\u7684\u6b63\u786e\u6240\u6709\u6743\uff1a chown -R root:swift /etc/swift \u5728\u63a7\u5236\u5668\u8282\u70b9\u548c\u8fd0\u884c\u4ee3\u7406\u670d\u52a1\u7684\u4efb\u4f55\u5176\u4ed6\u8282\u70b9\u4e0a\uff0c\u542f\u52a8\u5bf9\u8c61\u5b58\u50a8\u4ee3\u7406\u670d\u52a1\u53ca\u5176\u4f9d\u8d56\u9879\uff0c\u5e76\u5c06\u5b83\u4eec\u914d\u7f6e\u4e3a\u5728\u7cfb\u7edf\u542f\u52a8\u65f6\u542f\u52a8\uff1a systemctl enable openstack-swift-proxy.service memcached.service systemctl start openstack-swift-proxy.service memcached.service \u5728\u5b58\u50a8\u8282\u70b9\u4e0a\uff0c\u542f\u52a8\u5bf9\u8c61\u5b58\u50a8\u670d\u52a1\u5e76\u5c06\u5b83\u4eec\u914d\u7f6e\u4e3a\u5728\u7cfb\u7edf\u542f\u52a8\u65f6\u542f\u52a8\uff1a systemctl enable openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service systemctl start openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service systemctl enable openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service systemctl start openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service systemctl enable openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service systemctl start openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service","title":"Swift \u5b89\u88c5"},{"location":"install/openEuler-22.03-LTS-SP2/OpenStack-train/#cyborg","text":"Cyborg\u4e3aOpenStack\u63d0\u4f9b\u52a0\u901f\u5668\u8bbe\u5907\u7684\u652f\u6301\uff0c\u5305\u62ec GPU, FPGA, ASIC, NP, SoCs, NVMe/NOF SSDs, ODP, DPDK/SPDK\u7b49\u7b49\u3002 \u521d\u59cb\u5316\u5bf9\u5e94\u6570\u636e\u5e93 CREATE DATABASE cyborg; GRANT ALL PRIVILEGES ON cyborg.* 
TO 'cyborg'@'localhost' IDENTIFIED BY 'CYBORG_DBPASS'; GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'%' IDENTIFIED BY 'CYBORG_DBPASS'; \u521b\u5efa\u5bf9\u5e94Keystone\u8d44\u6e90\u5bf9\u8c61 $ openstack user create --domain default --password-prompt cyborg $ openstack role add --project service --user cyborg admin $ openstack service create --name cyborg --description \"Acceleration Service\" accelerator $ openstack endpoint create --region RegionOne \\ accelerator public http://:6666/v1 $ openstack endpoint create --region RegionOne \\ accelerator internal http://:6666/v1 $ openstack endpoint create --region RegionOne \\ accelerator admin http://:6666/v1 \u5b89\u88c5Cyborg yum install openstack-cyborg \u914d\u7f6eCyborg \u4fee\u6539 /etc/cyborg/cyborg.conf [DEFAULT] transport_url = rabbit://%RABBITMQ_USER%:%RABBITMQ_PASSWORD%@%OPENSTACK_HOST_IP%:5672/ use_syslog = False state_path = /var/lib/cyborg debug = True [database] connection = mysql+pymysql://%DATABASE_USER%:%DATABASE_PASSWORD%@%OPENSTACK_HOST_IP%/cyborg [service_catalog] project_domain_id = default user_domain_id = default project_name = service password = PASSWORD username = cyborg auth_url = http://%OPENSTACK_HOST_IP%/identity auth_type = password [placement] project_domain_name = Default project_name = service user_domain_name = Default password = PASSWORD username = placement auth_url = http://%OPENSTACK_HOST_IP%/identity auth_type = password [keystone_authtoken] memcached_servers = localhost:11211 project_domain_name = Default project_name = service user_domain_name = Default password = PASSWORD username = cyborg auth_url = http://%OPENSTACK_HOST_IP%/identity auth_type = password \u81ea\u884c\u4fee\u6539\u5bf9\u5e94\u7684\u7528\u6237\u540d\u3001\u5bc6\u7801\u3001IP\u7b49\u4fe1\u606f \u540c\u6b65\u6570\u636e\u5e93\u8868\u683c cyborg-dbsync --config-file /etc/cyborg/cyborg.conf upgrade \u542f\u52a8Cyborg\u670d\u52a1 systemctl enable openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent systemctl start openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent","title":"Cyborg \u5b89\u88c5"},{"location":"install/openEuler-22.03-LTS-SP2/OpenStack-train/#aodh","text":"\u521b\u5efa\u6570\u636e\u5e93 CREATE DATABASE aodh; GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'localhost' IDENTIFIED BY 'AODH_DBPASS'; GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'%' IDENTIFIED BY 'AODH_DBPASS'; \u521b\u5efa\u5bf9\u5e94Keystone\u8d44\u6e90\u5bf9\u8c61 openstack user create --domain default --password-prompt aodh openstack role add --project service --user aodh admin openstack service create --name aodh --description \"Telemetry\" alarming openstack endpoint create --region RegionOne alarming public http://controller:8042 openstack endpoint create --region RegionOne alarming internal http://controller:8042 openstack endpoint create --region RegionOne alarming admin http://controller:8042 \u5b89\u88c5Aodh yum install openstack-aodh-api openstack-aodh-evaluator openstack-aodh-notifier openstack-aodh-listener openstack-aodh-expirer python3-aodhclient \u4fee\u6539\u914d\u7f6e\u6587\u4ef6 [database] connection = mysql+pymysql://aodh:AODH_DBPASS@controller/aodh [DEFAULT] transport_url = rabbit://openstack:RABBIT_PASS@controller auth_strategy = keystone [keystone_authtoken] www_authenticate_uri = http://controller:5000 auth_url = http://controller:5000 memcached_servers = controller:11211 auth_type = password project_domain_id = default user_domain_id = default project_name = service username = aodh password = AODH_PASS 
[service_credentials] auth_type = password auth_url = http://controller:5000/v3 project_domain_id = default user_domain_id = default project_name = service username = aodh password = AODH_PASS interface = internalURL region_name = RegionOne \u521d\u59cb\u5316\u6570\u636e\u5e93 aodh-dbsync \u542f\u52a8Aodh\u670d\u52a1 systemctl enable openstack-aodh-api.service openstack-aodh-evaluator.service openstack-aodh-notifier.service openstack-aodh-listener.service systemctl start openstack-aodh-api.service openstack-aodh-evaluator.service openstack-aodh-notifier.service openstack-aodh-listener.service","title":"Aodh \u5b89\u88c5"},{"location":"install/openEuler-22.03-LTS-SP2/OpenStack-train/#gnocchi","text":"\u521b\u5efa\u6570\u636e\u5e93 CREATE DATABASE gnocchi; GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'localhost' IDENTIFIED BY 'GNOCCHI_DBPASS'; GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'%' IDENTIFIED BY 'GNOCCHI_DBPASS'; \u521b\u5efa\u5bf9\u5e94Keystone\u8d44\u6e90\u5bf9\u8c61 openstack user create --domain default --password-prompt gnocchi openstack role add --project service --user gnocchi admin openstack service create --name gnocchi --description \"Metric Service\" metric openstack endpoint create --region RegionOne metric public http://controller:8041 openstack endpoint create --region RegionOne metric internal http://controller:8041 openstack endpoint create --region RegionOne metric admin http://controller:8041 \u5b89\u88c5Gnocchi yum install openstack-gnocchi-api openstack-gnocchi-metricd python3-gnocchiclient \u4fee\u6539\u914d\u7f6e\u6587\u4ef6 /etc/gnocchi/gnocchi.conf [api] auth_mode = keystone port = 8041 uwsgi_mode = http-socket [keystone_authtoken] auth_type = password auth_url = http://controller:5000/v3 project_domain_name = Default user_domain_name = Default project_name = service username = gnocchi password = GNOCCHI_PASS interface = internalURL region_name = RegionOne [indexer] url = mysql+pymysql://gnocchi:GNOCCHI_DBPASS@controller/gnocchi [storage] # coordination_url is not required but specifying one will improve # performance with better workload division across workers. 
coordination_url = redis://controller:6379 file_basepath = /var/lib/gnocchi driver = file \u521d\u59cb\u5316\u6570\u636e\u5e93 gnocchi-upgrade \u542f\u52a8Gnocchi\u670d\u52a1 systemctl enable openstack-gnocchi-api.service openstack-gnocchi-metricd.service systemctl start openstack-gnocchi-api.service openstack-gnocchi-metricd.service","title":"Gnocchi \u5b89\u88c5"},{"location":"install/openEuler-22.03-LTS-SP2/OpenStack-train/#ceilometer","text":"\u521b\u5efa\u5bf9\u5e94Keystone\u8d44\u6e90\u5bf9\u8c61 openstack user create --domain default --password-prompt ceilometer openstack role add --project service --user ceilometer admin openstack service create --name ceilometer --description \"Telemetry\" metering \u5b89\u88c5Ceilometer yum install openstack-ceilometer-notification openstack-ceilometer-central \u4fee\u6539\u914d\u7f6e\u6587\u4ef6 /etc/ceilometer/pipeline.yaml publishers: # set address of Gnocchi # + filter out Gnocchi-related activity meters (Swift driver) # + set default archive policy - gnocchi://?filter_project=service&archive_policy=low \u4fee\u6539\u914d\u7f6e\u6587\u4ef6 /etc/ceilometer/ceilometer.conf [DEFAULT] transport_url = rabbit://openstack:RABBIT_PASS@controller [service_credentials] auth_type = password auth_url = http://controller:5000/v3 project_domain_id = default user_domain_id = default project_name = service username = ceilometer password = CEILOMETER_PASS interface = internalURL region_name = RegionOne \u521d\u59cb\u5316\u6570\u636e\u5e93 ceilometer-upgrade \u542f\u52a8Ceilometer\u670d\u52a1 systemctl enable openstack-ceilometer-notification.service openstack-ceilometer-central.service systemctl start openstack-ceilometer-notification.service openstack-ceilometer-central.service","title":"Ceilometer \u5b89\u88c5"},{"location":"install/openEuler-22.03-LTS-SP2/OpenStack-train/#heat","text":"\u521b\u5efa heat \u6570\u636e\u5e93\uff0c\u5e76\u6388\u4e88 heat \u6570\u636e\u5e93\u6b63\u786e\u7684\u8bbf\u95ee\u6743\u9650\uff0c\u66ff\u6362 HEAT_DBPASS \u4e3a\u5408\u9002\u7684\u5bc6\u7801 CREATE DATABASE heat; GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'localhost' IDENTIFIED BY 'HEAT_DBPASS'; GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'%' IDENTIFIED BY 'HEAT_DBPASS'; \u521b\u5efa\u670d\u52a1\u51ed\u8bc1\uff0c\u521b\u5efa heat \u7528\u6237\uff0c\u5e76\u4e3a\u5176\u589e\u52a0 admin \u89d2\u8272 openstack user create --domain default --password-prompt heat openstack role add --project service --user heat admin \u521b\u5efa heat \u548c heat-cfn \u670d\u52a1\u53ca\u5176\u5bf9\u5e94\u7684API\u7aef\u70b9 openstack service create --name heat --description \"Orchestration\" orchestration openstack service create --name heat-cfn --description \"Orchestration\" cloudformation openstack endpoint create --region RegionOne orchestration public http://controller:8004/v1/%\\(tenant_id\\)s openstack endpoint create --region RegionOne orchestration internal http://controller:8004/v1/%\\(tenant_id\\)s openstack endpoint create --region RegionOne orchestration admin http://controller:8004/v1/%\\(tenant_id\\)s openstack endpoint create --region RegionOne cloudformation public http://controller:8000/v1 openstack endpoint create --region RegionOne cloudformation internal http://controller:8000/v1 openstack endpoint create --region RegionOne cloudformation admin http://controller:8000/v1 \u521b\u5efastack\u7ba1\u7406\u7684\u989d\u5916\u4fe1\u606f\uff0c\u5305\u62ec heat domain\u53ca\u5176\u5bf9\u5e94domain\u7684admin\u7528\u6237 heat_domain_admin \uff0c heat_stack_owner \u89d2\u8272\uff0c 
heat_stack_user \u89d2\u8272 openstack user create --domain heat --password-prompt heat_domain_admin openstack role add --domain heat --user-domain heat --user heat_domain_admin admin openstack role create heat_stack_owner openstack role create heat_stack_user \u5b89\u88c5\u8f6f\u4ef6\u5305 yum install openstack-heat-api openstack-heat-api-cfn openstack-heat-engine \u4fee\u6539\u914d\u7f6e\u6587\u4ef6 /etc/heat/heat.conf [DEFAULT] transport_url = rabbit://openstack:RABBIT_PASS@controller heat_metadata_server_url = http://controller:8000 heat_waitcondition_server_url = http://controller:8000/v1/waitcondition stack_domain_admin = heat_domain_admin stack_domain_admin_password = HEAT_DOMAIN_PASS stack_user_domain_name = heat [database] connection = mysql+pymysql://heat:HEAT_DBPASS@controller/heat [keystone_authtoken] www_authenticate_uri = http://controller:5000 auth_url = http://controller:5000 memcached_servers = controller:11211 auth_type = password project_domain_name = default user_domain_name = default project_name = service username = heat password = HEAT_PASS [trustee] auth_type = password auth_url = http://controller:5000 username = heat password = HEAT_PASS user_domain_name = default [clients_keystone] auth_uri = http://controller:5000 \u521d\u59cb\u5316 heat \u6570\u636e\u5e93\u8868 su -s /bin/sh -c \"heat-manage db_sync\" heat \u542f\u52a8\u670d\u52a1 systemctl enable openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service systemctl start openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service","title":"Heat \u5b89\u88c5"},{"location":"install/openEuler-22.03-LTS-SP2/OpenStack-train/#openstack-sigoos","text":"oos (openEuler OpenStack SIG)\u662fOpenStack SIG\u63d0\u4f9b\u7684\u547d\u4ee4\u884c\u5de5\u5177\u3002\u5176\u4e2d oos env \u7cfb\u5217\u547d\u4ee4\u63d0\u4f9b\u4e86\u4e00\u952e\u90e8\u7f72OpenStack \uff08 all in one \u6216\u4e09\u8282\u70b9 cluster \uff09\u7684ansible\u811a\u672c\uff0c\u7528\u6237\u53ef\u4ee5\u4f7f\u7528\u8be5\u811a\u672c\u5feb\u901f\u90e8\u7f72\u4e00\u5957\u57fa\u4e8e openEuler RPM \u7684 OpenStack \u73af\u5883\u3002 oos \u5de5\u5177\u652f\u6301\u5bf9\u63a5\u4e91provider\uff08\u76ee\u524d\u4ec5\u652f\u6301\u534e\u4e3a\u4e91provider\uff09\u548c\u4e3b\u673a\u7eb3\u7ba1\u4e24\u79cd\u65b9\u5f0f\u6765\u90e8\u7f72 OpenStack \u73af\u5883\uff0c\u4e0b\u9762\u4ee5\u5bf9\u63a5\u534e\u4e3a\u4e91\u90e8\u7f72\u4e00\u5957 all in one \u7684OpenStack\u73af\u5883\u4e3a\u4f8b\u8bf4\u660e oos \u5de5\u5177\u7684\u4f7f\u7528\u65b9\u6cd5\u3002 \u5b89\u88c5 oos \u5de5\u5177 pip install openstack-sig-tool \u914d\u7f6e\u5bf9\u63a5\u534e\u4e3a\u4e91provider\u7684\u4fe1\u606f \u6253\u5f00 /usr/local/etc/oos/oos.conf \u6587\u4ef6\uff0c\u4fee\u6539\u914d\u7f6e\u4e3a\u60a8\u62e5\u6709\u7684\u534e\u4e3a\u4e91\u8d44\u6e90\u4fe1\u606f\uff1a [huaweicloud] ak = sk = region = ap-southeast-3 root_volume_size = 100 data_volume_size = 100 security_group_name = oos image_format = openEuler-%%(release)s-%%(arch)s vpc_name = oos_vpc subnet1_name = oos_subnet1 subnet2_name = oos_subnet2 \u914d\u7f6e OpenStack \u73af\u5883\u4fe1\u606f \u6253\u5f00 /usr/local/etc/oos/oos.conf \u6587\u4ef6\uff0c\u6839\u636e\u5f53\u524d\u673a\u5668\u73af\u5883\u548c\u9700\u6c42\u4fee\u6539\u914d\u7f6e\u3002\u5185\u5bb9\u5982\u4e0b\uff1a [environment] mysql_root_password = root mysql_project_password = root rabbitmq_password = root project_identity_password = root enabled_service = 
## Quick Deployment with the OpenStack SIG Tool oos

oos (openEuler OpenStack SIG) is the command-line tool provided by the OpenStack SIG. Its `oos env` sub-commands wrap ansible playbooks that deploy OpenStack in one shot, either all-in-one or as a three-node cluster, so users can quickly bring up an RPM-based OpenStack environment on openEuler. oos can deploy onto a cloud provider (currently only Huawei Cloud is supported) or onto hosts you manage yourself. The following walks through deploying an all-in-one OpenStack environment on Huawei Cloud.

1. Install the oos tool:

```
pip install openstack-sig-tool
```

2. Configure the Huawei Cloud provider. Open `/usr/local/etc/oos/oos.conf` and fill in the Huawei Cloud resources you own:

```
[huaweicloud]
ak =
sk =
region = ap-southeast-3
root_volume_size = 100
data_volume_size = 100
security_group_name = oos
image_format = openEuler-%%(release)s-%%(arch)s
vpc_name = oos_vpc
subnet1_name = oos_subnet1
subnet2_name = oos_subnet2
```

3. Configure the OpenStack environment. In the same `/usr/local/etc/oos/oos.conf`, adjust the settings to match your machines and needs:

```
[environment]
mysql_root_password = root
mysql_project_password = root
rabbitmq_password = root
project_identity_password = root
enabled_service = keystone,neutron,cinder,placement,nova,glance,horizon,aodh,ceilometer,cyborg,gnocchi,kolla,heat,swift,trove,tempest
neutron_provider_interface_name = br-ex
default_ext_subnet_range = 10.100.100.0/24
default_ext_subnet_gateway = 10.100.100.1
neutron_dataplane_interface_name = eth1
cinder_block_device = vdb
swift_storage_devices = vdc
swift_hash_path_suffix = ash
swift_hash_path_prefix = has
glance_api_workers = 2
cinder_api_workers = 2
nova_api_workers = 2
nova_metadata_api_workers = 2
nova_conductor_workers = 2
nova_scheduler_workers = 2
neutron_api_workers = 2
horizon_allowed_host = *
kolla_openeuler_plugin = false
```

Key options:

| Option | Meaning |
|:---|:---|
| enabled_service | List of services to install; trim it to your needs |
| neutron_provider_interface_name | Name of the neutron L3 bridge |
| default_ext_subnet_range | Neutron private network CIDR |
| default_ext_subnet_gateway | Neutron private network gateway |
| neutron_dataplane_interface_name | NIC used by neutron; a dedicated NIC is recommended so it does not conflict with the existing one and disconnect the all-in-one host |
| cinder_block_device | Block device used by cinder |
| swift_storage_devices | Block devices used by swift |
| kolla_openeuler_plugin | Whether to enable the kolla plugin; set to True to let kolla deploy openEuler containers |

4. Create an openEuler 22.03-LTS-SP2 x86_64 VM on Huawei Cloud for the all-in-one deployment:

```
# sshpass is used during `oos env create` to set up password-less access to the target VM
dnf install sshpass
oos env create -r 22.03-lts-sp2 -f small -a x86 -n test-oos all_in_one
```

See `oos env create --help` for the full list of parameters.

5. Deploy the all-in-one OpenStack environment:

```
oos env setup test-oos -r train
```

See `oos env setup --help` for details.

6. Initialize the tempest environment. If you want to run tempest against this environment, run `oos env init`, which creates the OpenStack resources tempest needs:

```
oos env init test-oos
```

After the command succeeds, a `mytest` directory is created under the user's home directory; change into it and run `tempest run` (see the sketch below).
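For reference, once `oos env init` has created the `mytest` working directory, a typical tempest invocation looks like the following; the `--smoke` and `--regex` selections are only examples of limiting the run.

```
cd ~/mytest
tempest run --smoke                       # run only the smoke-tagged tests
tempest run --regex tempest.api.compute   # or select a subset by regex
```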
To deploy onto hosts you manage yourself instead of Huawei Cloud, the overall flow is the same as above: steps 1, 3, 5 and 6 are unchanged, step 2 (configuring the Huawei Cloud provider) is dropped, and step 4 changes from creating a VM on Huawei Cloud to taking over the target host:

```
# sshpass is used during `oos env create` to set up password-less access to the target host
dnf install sshpass
oos env manage -r 22.03-lts-sp2 -i TARGET_MACHINE_IP -p TARGET_MACHINE_PASSWD -n test-oos
```

Replace `TARGET_MACHINE_IP` with the target host's IP and `TARGET_MACHINE_PASSWD` with its password. See `oos env manage --help` for details.

## Deployment with the OpenStack SIG Tool opensd

opensd deploys the individual OpenStack component services in batch, via scripts. The deployment proceeds through the following steps.

### 1. Information to confirm before deployment

- When installing the operating system, set selinux to disabled.
- When installing the operating system, set `UseDNS no` in `/etc/ssh/sshd_config`.
- The operating system language must be set to English.
- Before deploying, make sure `/etc/hosts` on every compute node contains no entry resolving the compute host itself.

### 2. Create the ceph pools and credentials (optional)

Skip this step if you do not use ceph or already have a ceph cluster. Run the following on any ceph monitor node.

#### 2.1 Create the pools

```
ceph osd pool create volumes 2048
ceph osd pool create images 2048
```

#### 2.2 Initialize the pools

```
rbd pool init volumes
rbd pool init images
```

#### 2.3 Create the user credentials

```
ceph auth get-or-create client.glance mon 'profile rbd' osd 'profile rbd pool=images' mgr 'profile rbd pool=images'
ceph auth get-or-create client.cinder mon 'profile rbd' osd 'profile rbd pool=volumes, profile rbd pool=images' mgr 'profile rbd pool=volumes'
```
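The cephx users created in 2.3 usually also need their keyrings exported and copied to the nodes that will run glance-api and cinder-volume. The guide itself does not prescribe this; the file paths and host names below are assumptions for illustration only.

```
# Illustrative keyring distribution, run on the ceph monitor node
ceph auth get client.glance -o /etc/ceph/ceph.client.glance.keyring
ceph auth get client.cinder -o /etc/ceph/ceph.client.cinder.keyring
# copy them to the OpenStack nodes that need them (host names are placeholders)
scp /etc/ceph/ceph.client.glance.keyring root@controller1:/etc/ceph/
scp /etc/ceph/ceph.client.cinder.keyring root@storage1:/etc/ceph/
```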
### 3. Configure LVM (optional)

Depending on the physical machine's disk layout and free space, mount additional disk space for the mysql data directory. For example (adapt to your actual environment):

```
fdisk -l

Disk /dev/sdd: 479.6 GB, 479559942144 bytes, 936640512 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk label type: dos
Disk identifier: 0x000ed242
```

Create a partition:

```
parted /dev/sdd mkparted 0 -1
```

Create the PV:

```
partprobe /dev/sdd1
pvcreate /dev/sdd1
```

Create and activate the VG:

```
vgcreate vg_mariadb /dev/sdd1
vgchange -ay vg_mariadb
```

Check the VG capacity:

```
vgdisplay
  --- Volume group ---
  VG Name               vg_mariadb
  Format                lvm2
  VG Access             read/write
  VG Status             resizable
  VG Size               446.62 GiB
  PE Size               4.00 MiB
  Total PE              114335
  Alloc PE / Size       114176 / 446.00 GiB
  Free  PE / Size       159 / 636.00 MiB
  VG UUID               bVUmDc-VkMu-Vi43-mg27-TEkG-oQfK-TvqdEc
```

Create the LV:

```
lvcreate -L 446G -n lv_mariadb vg_mariadb
```

Format the disk and get the volume's UUID:

```
mkfs.ext4 /dev/mapper/vg_mariadb-lv_mariadb
blkid /dev/mapper/vg_mariadb-lv_mariadb
/dev/mapper/vg_mariadb-lv_mariadb: UUID="98d513eb-5f64-4aa5-810e-dc7143884fa2" TYPE="ext4"
```

Note: 98d513eb-5f64-4aa5-810e-dc7143884fa2 is the volume's UUID.

Mount the disk:

```
mount /dev/mapper/vg_mariadb-lv_mariadb /var/lib/mysql
rm -rf /var/lib/mysql/*
```

### 4. Configure the yum repo

Run on the deployment node.

#### 4.1 Back up the existing yum sources

```
mkdir /etc/yum.repos.d/bak/
mv /etc/yum.repos.d/*.repo /etc/yum.repos.d/bak/
```

#### 4.2 Configure the yum repo

```
cat > /etc/yum.repos.d/opensd.repo << EOF
[train]
name=train
baseurl=http://119.3.219.20:82/openEuler:/22.03:/LTS:/SP2:/Epol:/Multi-Version:/OpenStack:/Train/standard_$basearch/
enabled=1
gpgcheck=0

[epol]
name=epol
baseurl=http://119.3.219.20:82/openEuler:/22.03:/LTS:/SP2:/Epol/standard_$basearch/
enabled=1
gpgcheck=0

[everything]
name=everything
baseurl=http://119.3.219.20:82/openEuler:/22.03:/LTS:/SP2/standard_$basearch/
enabled=1
gpgcheck=0
EOF
```

#### 4.3 Refresh the yum cache

```
yum clean all
yum makecache
```

### 5. Install opensd

Run on the deployment node.

#### 5.1 Clone the opensd source and install it

```
git clone https://gitee.com/openeuler/opensd
cd opensd
python3 setup.py install
```

### 6. Set up SSH trust

Run on the deployment node.

#### 6.1 Generate a key pair

Run the following command and press Enter through the prompts:

```
ssh-keygen
```

#### 6.2 Create the host IP list

List all host IPs that will be used in `auto_ssh_host_ip`, for example:

```
cd /usr/local/share/opensd/tools/
vim auto_ssh_host_ip
10.0.0.1
10.0.0.2
...
10.0.0.10
```

#### 6.3 Set the password and run the script

In the password-less access script `/usr/local/bin/opensd-auto-ssh`, replace the `123123` placeholder with the hosts' real password, then run the script:

```
# replace the string 123123 inside the script
vim /usr/local/bin/opensd-auto-ssh
# install expect, then run the script
dnf install expect -y
opensd-auto-ssh
```

#### 6.4 Set up trust between the deployment node and the ceph monitor (optional)

```
ssh-copy-id root@x.x.x.x
```
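Before moving on, it can be worth confirming that every host listed in `auto_ssh_host_ip` really accepts key-based logins; a small loop such as the following (not part of the original procedure) does that.

```
# Sanity check: each host should print its hostname without asking for a password
while read -r ip; do
    ssh -o BatchMode=yes -o ConnectTimeout=5 "root@${ip}" hostname || echo "FAILED: ${ip}"
done < /usr/local/share/opensd/tools/auto_ssh_host_ip
```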
### 7. Configure opensd

Run on the deployment node.

#### 7.1 Generate random passwords

Install python3-pbr, python3-utils, python3-pyyaml and python3-oslo-utils, then generate the passwords:

```
dnf install python3-pbr python3-utils python3-pyyaml python3-oslo-utils -y
# generate the passwords
opensd-genpwd
# check that the passwords were generated
cat /usr/local/share/opensd/etc_examples/opensd/passwords.yml
```

#### 7.2 Configure the inventory file

Each host entry must contain the host name, the `ansible_host` IP and the `availability_zone`; all three are mandatory. Example:

```
vim /usr/local/share/opensd/ansible/inventory/multinode

# the three control node hosts
[control]
controller1 ansible_host=10.0.0.35 availability_zone=az01.cell01.cn-yogadev-1
controller2 ansible_host=10.0.0.36 availability_zone=az01.cell01.cn-yogadev-1
controller3 ansible_host=10.0.0.37 availability_zone=az01.cell01.cn-yogadev-1

# network nodes, kept identical to the control nodes
[network]
controller1 ansible_host=10.0.0.35 availability_zone=az01.cell01.cn-yogadev-1
controller2 ansible_host=10.0.0.36 availability_zone=az01.cell01.cn-yogadev-1
controller3 ansible_host=10.0.0.37 availability_zone=az01.cell01.cn-yogadev-1

# cinder-volume service nodes
[storage]
storage1 ansible_host=10.0.0.61 availability_zone=az01.cell01.cn-yogadev-1
storage2 ansible_host=10.0.0.78 availability_zone=az01.cell01.cn-yogadev-1
storage3 ansible_host=10.0.0.82 availability_zone=az01.cell01.cn-yogadev-1

# Cell1 cluster
[cell-control-cell1]
cell1 ansible_host=10.0.0.24 availability_zone=az01.cell01.cn-yogadev-1
cell2 ansible_host=10.0.0.25 availability_zone=az01.cell01.cn-yogadev-1
cell3 ansible_host=10.0.0.26 availability_zone=az01.cell01.cn-yogadev-1

[compute-cell1]
compute1 ansible_host=10.0.0.27 availability_zone=az01.cell01.cn-yogadev-1
compute2 ansible_host=10.0.0.28 availability_zone=az01.cell01.cn-yogadev-1
compute3 ansible_host=10.0.0.29 availability_zone=az01.cell01.cn-yogadev-1

[cell1:children]
cell-control-cell1
compute-cell1

# Cell2 cluster
[cell-control-cell2]
cell4 ansible_host=10.0.0.36 availability_zone=az03.cell02.cn-yogadev-1
cell5 ansible_host=10.0.0.37 availability_zone=az03.cell02.cn-yogadev-1
cell6 ansible_host=10.0.0.38 availability_zone=az03.cell02.cn-yogadev-1

[compute-cell2]
compute4 ansible_host=10.0.0.39 availability_zone=az03.cell02.cn-yogadev-1
compute5 ansible_host=10.0.0.40 availability_zone=az03.cell02.cn-yogadev-1
compute6 ansible_host=10.0.0.41 availability_zone=az03.cell02.cn-yogadev-1

[cell2:children]
cell-control-cell2
compute-cell2

[baremetal]

[compute-cell1-ironic]

# list the control host groups of every cell cluster
[nova-conductor:children]
cell-control-cell1
cell-control-cell2

# list the compute host groups of every cell cluster
[nova-compute:children]
compute-added
compute-cell1
compute-cell2

# the host groups below do not need to be changed; keep them as they are
[compute-added]

[chrony-server:children]
control

[pacemaker:children]
control

......
```
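Ansible can parse the inventory before any playbook runs, which is a cheap way to catch typos in host groups; this check is optional and not part of the opensd procedure (it needs the ansible package, which step 7.4 installs anyway).

```
# Optional inventory sanity check
ansible-inventory -i /usr/local/share/opensd/ansible/inventory/multinode --graph
ansible -i /usr/local/share/opensd/ansible/inventory/multinode all --list-hosts
```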
......","title":"7.2 \u914d\u7f6einventory\u6587\u4ef6"},{"location":"install/openEuler-22.03-LTS-SP2/OpenStack-train/#73","text":"\u6ce8: \u6587\u6863\u4e2d\u63d0\u5230\u7684\u6709\u6ce8\u91ca\u914d\u7f6e\u9879\u9700\u8981\u66f4\u6539\uff0c\u5176\u4ed6\u53c2\u6570\u4e0d\u9700\u8981\u66f4\u6539\uff0c\u82e5\u65e0\u76f8\u5173\u914d\u7f6e\u5219\u4e3a\u7a7a vim /usr/local/share/opensd/etc_examples/opensd/globals.yml ######################## # Network & Base options ######################## network_interface: \"eth0\" #\u7ba1\u7406\u7f51\u7edc\u7684\u7f51\u5361\u540d\u79f0 neutron_external_interface: \"eth1\" #\u4e1a\u52a1\u7f51\u7edc\u7684\u7f51\u5361\u540d\u79f0 cidr_netmask: 24 #\u7ba1\u7406\u7f51\u7684\u63a9\u7801 opensd_vip_address: 10.0.0.33 #\u63a7\u5236\u8282\u70b9\u865a\u62dfIP\u5730\u5740 cell1_vip_address: 10.0.0.34 #cell1\u96c6\u7fa4\u7684\u865a\u62dfIP\u5730\u5740 cell2_vip_address: 10.0.0.35 #cell2\u96c6\u7fa4\u7684\u865a\u62dfIP\u5730\u5740 external_fqdn: \"\" #\u7528\u4e8evnc\u8bbf\u95ee\u865a\u62df\u673a\u7684\u5916\u7f51\u57df\u540d\u5730\u5740 external_ntp_servers: [] #\u5916\u90e8ntp\u670d\u52a1\u5668\u5730\u5740 yumrepo_host: #yum\u6e90\u7684IP\u5730\u5740 yumrepo_port: #yum\u6e90\u7aef\u53e3\u53f7 environment: #yum\u6e90\u7684\u7c7b\u578b upgrade_all_packages: \"yes\" #\u662f\u5426\u5347\u7ea7\u6240\u6709\u5b89\u88c5\u7248\u7684\u7248\u672c(\u6267\u884cyum upgrade)\uff0c\u521d\u59cb\u90e8\u7f72\u8d44\u6e90\u8bf7\u8bbe\u7f6e\u4e3a\"yes\" enable_miner: \"no\" #\u662f\u5426\u5f00\u542f\u90e8\u7f72miner\u670d\u52a1 enable_chrony: \"no\" #\u662f\u5426\u5f00\u542f\u90e8\u7f72chrony\u670d\u52a1 enable_pri_mariadb: \"no\" #\u662f\u5426\u4e3a\u79c1\u6709\u4e91\u90e8\u7f72mariadb enable_hosts_file_modify: \"no\" # \u6269\u5bb9\u8ba1\u7b97\u8282\u70b9\u548c\u90e8\u7f72ironic\u670d\u52a1\u7684\u65f6\u5019\uff0c\u662f\u5426\u5c06\u8282\u70b9\u4fe1\u606f\u6dfb\u52a0\u5230`/etc/hosts` ######################## # Available zone options ######################## az_cephmon_compose: - availability_zone: #availability zone\u7684\u540d\u79f0\uff0c\u8be5\u540d\u79f0\u5fc5\u987b\u4e0emultinode\u4e3b\u673a\u6587\u4ef6\u5185\u7684az01\u7684\"availability_zone\"\u503c\u4fdd\u6301\u4e00\u81f4 ceph_mon_host: #az01\u5bf9\u5e94\u7684\u4e00\u53f0ceph monitor\u4e3b\u673a\u5730\u5740\uff0c\u90e8\u7f72\u8282\u70b9\u9700\u8981\u4e0e\u8be5\u4e3b\u673a\u505assh\u4e92\u4fe1 reserve_vcpu_based_on_numa: - availability_zone: #availability zone\u7684\u540d\u79f0\uff0c\u8be5\u540d\u79f0\u5fc5\u987b\u4e0emultinode\u4e3b\u673a\u6587\u4ef6\u5185\u7684az02\u7684\"availability_zone\"\u503c\u4fdd\u6301\u4e00\u81f4 ceph_mon_host: #az02\u5bf9\u5e94\u7684\u4e00\u53f0ceph monitor\u4e3b\u673a\u5730\u5740\uff0c\u90e8\u7f72\u8282\u70b9\u9700\u8981\u4e0e\u8be5\u4e3b\u673a\u505assh\u4e92\u4fe1 reserve_vcpu_based_on_numa: - availability_zone: #availability zone\u7684\u540d\u79f0\uff0c\u8be5\u540d\u79f0\u5fc5\u987b\u4e0emultinode\u4e3b\u673a\u6587\u4ef6\u5185\u7684az03\u7684\"availability_zone\"\u503c\u4fdd\u6301\u4e00\u81f4 ceph_mon_host: #az03\u5bf9\u5e94\u7684\u4e00\u53f0ceph monitor\u4e3b\u673a\u5730\u5740\uff0c\u90e8\u7f72\u8282\u70b9\u9700\u8981\u4e0e\u8be5\u4e3b\u673a\u505assh\u4e92\u4fe1 reserve_vcpu_based_on_numa: # `reserve_vcpu_based_on_numa`\u914d\u7f6e\u4e3a`yes` or `no`,\u4e3e\u4f8b\u8bf4\u660e\uff1a NUMA node0 CPU(s): 0-15,32-47 NUMA node1 CPU(s): 16-31,48-63 \u5f53reserve_vcpu_based_on_numa: \"yes\", \u6839\u636enuma node, \u5e73\u5747\u6bcf\u4e2anode\u9884\u7559vcpu: vcpu_pin_set = 2-15,34-47,18-31,50-63 
Continue in `globals.yml`:

```
#######################
# Nova options
#######################
nova_reserved_host_memory_mb: 2048   # memory reserved on compute nodes for the compute service
enable_cells: "yes"                  # whether the cell nodes are deployed on separate nodes
support_gpu: "False"                 # whether the cell nodes include GPU servers; True if so, otherwise False

#######################
# Neutron options
#######################
monitor_ip:                          # monitoring nodes
  - 10.0.0.9
  - 10.0.0.10
enable_meter_full_eip: True          # whether to allow full EIP metering; defaults to True
enable_meter_port_forwarding: True   # whether to allow port-forwarding metering; defaults to True
enable_meter_ecs_ipv6: True          # whether to allow ecs_ipv6 metering; defaults to True
enable_meter: True                   # whether to enable metering; defaults to True
is_sdn_arch: False                   # whether this is an SDN architecture; defaults to False

# The default network type is vlan; vlan and vxlan are mutually exclusive.
enable_vxlan_network_type: False     # set to True for a vxlan network, False for a vlan network
enable_neutron_fwaas: False          # set to True to enable the firewall function if the environment uses one

# Neutron provider
neutron_provider_networks:
  network_types: "{{ 'vxlan' if enable_vxlan_network_type else 'vlan' }}"
  network_vlan_ranges: "default:xxx:xxx"   # the business-network vlan range planned before deployment
  network_mappings: "default:br-provider"
  network_interface: "{{ neutron_external_interface }}"
  network_vxlan_ranges: ""                 # the business-network vxlan range planned before deployment

# The options below configure the SDN controller. Set `enable_sdn_controller` to True to enable it,
# and fill in the remaining parameters according to your pre-deployment planning and SDN deployment information.
```
```
enable_sdn_controller: False
sdn_controller_ip_address:           # IP address of the SDN controller
sdn_controller_username:             # user name of the SDN controller
sdn_controller_password:             # password of the SDN controller

#######################
# Dimsagent options
#######################
enable_dimsagent: "no"               # set to yes to install the image service agent

# Address and domain name for s3
s3_address_domain_pair:
  - host_ip:
    host_name:

#######################
# Trove options
#######################
enable_trove: "no"                   # set to yes to install trove
# default network
trove_default_neutron_networks:      # trove management network id: `openstack network list|grep -w trove-mgmt|awk '{print$2}'`
# s3 setup (fill the values below with null if there is no s3)
s3_endpoint_host_ip:                 # s3 IP
s3_endpoint_host_name:               # s3 domain name
s3_endpoint_url:                     # s3 URL, usually http://<s3 domain name>
s3_access_key:                       # s3 AK
s3_secret_key:                       # s3 SK

#######################
# Ironic options
#######################
enable_ironic: "no"                  # whether to deploy bare metal; disabled by default
ironic_neutron_provisioning_network_uuid:
ironic_neutron_cleaning_network_uuid: "{{ ironic_neutron_provisioning_network_uuid }}"
ironic_dnsmasq_interface:
ironic_dnsmasq_dhcp_range:
ironic_tftp_server_address: "{{ hostvars[inventory_hostname]['ansible_' + ironic_dnsmasq_interface]['ipv4']['address'] }}"

# switch device information
neutron_ml2_conf_genericswitch:
  genericswitch:xxxxxxx:
    device_type:
    ngs_mac_address:
    ip:
    username:
    password:
    ngs_port_default_vlan:

# Package state setting
haproxy_package_state: "present"
mariadb_package_state: "present"
rabbitmq_package_state: "present"
memcached_package_state: "present"
ceph_client_package_state: "present"
keystone_package_state: "present"
glance_package_state: "present"
cinder_package_state: "present"
nova_package_state: "present"
neutron_package_state: "present"
miner_package_state: "present"
```

#### 7.4 Check the SSH connectivity of all nodes

```
dnf install ansible -y
ansible all -i /usr/local/share/opensd/ansible/inventory/multinode -m ping
```

If every host reports "SUCCESS", the connections are fine, for example:

```
compute1 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "ping": "pong"
}
```
### 8. Run the deployment

Run on the deployment node.

#### 8.1 Run bootstrap

```
# run the bootstrap stage
opensd -i /usr/local/share/opensd/ansible/inventory/multinode bootstrap --forks 50
```

#### 8.2 Reboot the servers

Note: the reboot is needed because bootstrap may upgrade the kernel, change the selinux configuration, or handle GPU servers. If the installed system already runs the new kernel, selinux is disabled, and there are no GPU servers, this step can be skipped.

```
# manually reboot the affected nodes
init 6
# after the reboot, check connectivity again
ansible all -i /usr/local/share/opensd/ansible/inventory/multinode -m ping
# after the operating system is back up, re-enable the yum repo
```

#### 8.3 Run the pre-deployment checks

```
opensd -i /usr/local/share/opensd/ansible/inventory/multinode prechecks --forks 50
```

#### 8.4 Run the deployment

```
ln -s /usr/bin/python3 /usr/bin/python
```

Full deployment:

```
opensd -i /usr/local/share/opensd/ansible/inventory/multinode deploy --forks 50
```

Single-service deployment:

```
opensd -i /usr/local/share/opensd/ansible/inventory/multinode deploy --forks 50 -t service_name
```
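opensd itself does not print a verification checklist after `deploy` finishes. A quick hand check is to source the admin credentials on a controller and list the registered services and agents; the guide does not state where opensd places the credentials file, so the path below is only a placeholder.

```
# Post-deployment spot check (the rc file path is a placeholder, adjust to your environment)
source /root/admin-openrc.sh
openstack service list
openstack compute service list
openstack network agent list
openstack volume service list
```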
# OpenStack-Wallaby Deployment Guide

Contents:

- OpenStack Introduction
- Conventions
- Preparing the Environment
  - Environment Configuration
  - Installing the SQL Database
  - Installing RabbitMQ
  - Installing Memcached
- Installing OpenStack
  - Keystone Installation
  - Glance Installation
  - Placement Installation
  - Nova Installation
  - Neutron Installation
  - Cinder Installation
  - Horizon Installation
  - Tempest Installation
  - Ironic Installation
  - Kolla Installation
  - Trove Installation
  - Swift Installation
  - Cyborg Installation
  - Aodh Installation
  - Gnocchi Installation
  - Ceilometer Installation
  - Heat Installation
  - Quick deployment with the OpenStack SIG tool oos

## OpenStack Introduction

OpenStack is both a community and a project. It provides an operating platform and a tool set for deploying clouds, giving organizations scalable and flexible cloud computing.

As an open source cloud management platform, OpenStack combines several major components, such as nova, cinder, neutron, glance, keystone and horizon, to do its job. OpenStack supports almost every type of cloud environment. The project aims to deliver a cloud management platform that is simple to implement, massively scalable, feature-rich and standardized. OpenStack provides an Infrastructure-as-a-Service (IaaS) solution through a set of complementary services, each of which offers an API for integration.

The official openEuler 22.03-LTS-SP2 repositories already ship OpenStack-Wallaby. Users can deploy OpenStack by following this document after configuring the yum repository.

## Conventions

OpenStack supports several deployment topologies. This document covers the ALL in One and Distributed modes, with the following conventions:

- ALL in One mode: ignore all suffixes.
- Distributed mode:
  - a `(CTL)` suffix means the configuration or command applies only to the `control node`;
  - a `(CPT)` suffix means the configuration or command applies only to `compute nodes`;
  - a `(STG)` suffix means the configuration or command applies only to `storage nodes`;
  - anything else applies to both the `control node` and `compute nodes`.

Note: the services affected by these conventions are Cinder, Nova and Neutron.

## Preparing the Environment

### Environment Configuration

Configure the official 22.03 LTS yum repositories; the EPOL repository must be enabled to provide OpenStack:

```
yum update
yum install openstack-release-wallaby
yum clean all && yum makecache
```

Note: if EPOL is not enabled in your yum configuration, configure it as shown below and make sure it is present:

```
vi /etc/yum.repos.d/openEuler.repo

[EPOL]
name=EPOL
baseurl=http://repo.openeuler.org/openEuler-22.03-LTS-SP2/EPOL/main/$basearch/
enabled=1
gpgcheck=1
gpgkey=http://repo.openeuler.org/openEuler-22.03-LTS-SP2/OS/$basearch/RPM-GPG-KEY-openEuler
```

Set the host name and name resolution. Set the host name of each node:

```
hostnamectl set-hostname controller                       (CTL)
hostnamectl set-hostname compute                          (CPT)
```

Assuming the controller node's IP is 10.0.0.11 and the compute node's IP (if present) is 10.0.0.12, add the following to `/etc/hosts`:

```
10.0.0.11   controller
10.0.0.12   compute
```

### Installing the SQL Database

Install the packages:

```
yum install mariadb mariadb-server python3-PyMySQL
```

Create and edit `/etc/my.cnf.d/openstack.cnf`:

```
vim /etc/my.cnf.d/openstack.cnf

[mysqld]
bind-address = 10.0.0.11
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
```

Note: set `bind-address` to the management IP address of the controller node.

Start the database service and enable it at boot:

```
systemctl enable mariadb.service
systemctl start mariadb.service
```
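Optionally, confirm that MariaDB is listening on the management address configured in `openstack.cnf` before creating the service databases; this check is not part of the original guide.

```
# Confirm MariaDB is up and bound to the management address
ss -tnlp | grep 3306
mysql -h 10.0.0.11 -u root -p -e "SELECT VERSION();"
```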
Set the database's default password (optional):

```
mysql_secure_installation
```

Note: follow the prompts.

### Installing RabbitMQ

Install the packages:

```
yum install rabbitmq-server
```

Start the RabbitMQ service and enable it at boot:

```
systemctl enable rabbitmq-server.service
systemctl start rabbitmq-server.service
```

Add the OpenStack user:

```
rabbitmqctl add_user openstack RABBIT_PASS
```

Note: replace `RABBIT_PASS` with the password for the OpenStack user.

Grant the openstack user configure, write and read permissions:

```
rabbitmqctl set_permissions openstack ".*" ".*" ".*"
```

### Installing Memcached

Install the dependencies:

```
yum install memcached python3-memcached
```

Edit `/etc/sysconfig/memcached`:

```
vim /etc/sysconfig/memcached

OPTIONS="-l 127.0.0.1,::1,controller"
```

Start the Memcached service and enable it at boot:

```
systemctl enable memcached.service
systemctl start memcached.service
```

Note: once the service is up, run `memcached-tool controller stats` to make sure it started correctly and is reachable; `controller` can be replaced with the controller node's management IP address.

## Installing OpenStack

### Keystone Installation

Create the `keystone` database and grant privileges:

```
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE keystone;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
IDENTIFIED BY 'KEYSTONE_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
IDENTIFIED BY 'KEYSTONE_DBPASS';
MariaDB [(none)]> exit
```

Note: replace `KEYSTONE_DBPASS` with the password for the Keystone database.

Install the packages:

```
yum install openstack-keystone httpd mod_wsgi
```

Configure keystone:

```
vim /etc/keystone/keystone.conf

[database]
connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone

[token]
provider = fernet
```

Explanation: the `[database]` section configures the database entry point; the `[token]` section configures the token provider.

Note: replace `KEYSTONE_DBPASS` with the password of the Keystone database.

Synchronize the database:

```
su -s /bin/sh -c "keystone-manage db_sync" keystone
```

Initialize the Fernet key repositories:

```
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
```

Bootstrap the service:

```
keystone-manage bootstrap --bootstrap-password ADMIN_PASS \
    --bootstrap-admin-url http://controller:5000/v3/ \
    --bootstrap-internal-url http://controller:5000/v3/ \
    --bootstrap-public-url http://controller:5000/v3/ \
    --bootstrap-region-id RegionOne
```

Note: replace `ADMIN_PASS` with the password for the admin user.
Configure the Apache HTTP server:

```
vim /etc/httpd/conf/httpd.conf

ServerName controller
```

```
ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
```

Explanation: set the `ServerName` directive to refer to the controller node.

Note: create the `ServerName` entry if it does not exist.

Start the Apache HTTP service:

```
systemctl enable httpd.service
systemctl start httpd.service
```

Create the environment variable file:

```
cat << EOF >> ~/.admin-openrc
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
EOF
```

Note: replace `ADMIN_PASS` with the password of the admin user.

Create the domain, projects, users and roles. python3-openstackclient must be installed first:

```
yum install python3-openstackclient
```

Source the environment variables:

```
source ~/.admin-openrc
```

Create the project `service`; the domain `default` was already created by `keystone-manage bootstrap`:

```
openstack domain create --description "An Example Domain" example
openstack project create --domain default --description "Service Project" service
```

Create the (non-admin) project `myproject`, user `myuser` and role `myrole`, and add the role `myrole` to `myproject` and `myuser`:

```
openstack project create --domain default --description "Demo Project" myproject
openstack user create --domain default --password-prompt myuser
openstack role create myrole
openstack role add --project myproject --user myuser myrole
```

Verification. Unset the temporary environment variables OS_AUTH_URL and OS_PASSWORD:

```
source ~/.admin-openrc
unset OS_AUTH_URL OS_PASSWORD
```

Request a token for the admin user:

```
openstack --os-auth-url http://controller:5000/v3 \
    --os-project-domain-name Default --os-user-domain-name Default \
    --os-project-name admin --os-username admin token issue
```

Request a token for the myuser user:

```
openstack --os-auth-url http://controller:5000/v3 \
    --os-project-domain-name Default --os-user-domain-name Default \
    --os-project-name myproject --os-username myuser token issue
```
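Mirroring `~/.admin-openrc`, it can be handy to keep a credentials file for the `myuser` account created above; `MYUSER_PASS` stands for whatever password was entered at the prompt. This file is a convenience, not a required step of the guide.

```
cat << EOF >> ~/.demo-openrc
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=myproject
export OS_USERNAME=myuser
export OS_PASSWORD=MYUSER_PASS
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
EOF
```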
### Glance Installation

Create the database, service credentials and API endpoints.

Create the database:

```
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE glance;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
IDENTIFIED BY 'GLANCE_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
IDENTIFIED BY 'GLANCE_DBPASS';
MariaDB [(none)]> exit
```

Note: replace `GLANCE_DBPASS` with the password for the glance database.

Create the service credentials:

```
source ~/.admin-openrc

openstack user create --domain default --password-prompt glance
openstack role add --project service --user glance admin
openstack service create --name glance --description "OpenStack Image" image
```

Create the image service API endpoints:

```
openstack endpoint create --region RegionOne image public http://controller:9292
openstack endpoint create --region RegionOne image internal http://controller:9292
openstack endpoint create --region RegionOne image admin http://controller:9292
```

Install the packages:

```
yum install openstack-glance
```

Configure glance:

```
vim /etc/glance/glance-api.conf

[database]
connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = GLANCE_PASS

[paste_deploy]
flavor = keystone

[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
```

Explanation:

- `[database]`: database entry point;
- `[keystone_authtoken]` and `[paste_deploy]`: identity service entry point;
- `[glance_store]`: local file system store and the location of the image files.

Note: replace `GLANCE_DBPASS` with the password of the glance database and `GLANCE_PASS` with the password of the glance user.

Synchronize the database:

```
su -s /bin/sh -c "glance-manage db_sync" glance
```

Start the service:

```
systemctl enable openstack-glance-api.service
systemctl start openstack-glance-api.service
```

Verification. Download an image:

```
source ~/.admin-openrc
wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
```

Note: if your environment is Kunpeng (aarch64), download the aarch64 version of the image instead; the image cirros-0.5.2-aarch64-disk.img has been tested.

Upload the image to the Image service:

```
openstack image create --disk-format qcow2 --container-format bare \
    --file cirros-0.4.0-x86_64-disk.img --public cirros
```

Confirm the upload and verify the attributes:

```
openstack image list
```
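Two optional checks on the uploaded image, not required by the guide: inspect the source file with qemu-img and confirm that the image reached the `active` state in Glance.

```
qemu-img info cirros-0.4.0-x86_64-disk.img   # format and virtual size of the source file
openstack image show cirros                  # status should be "active"
```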
### Placement Installation

Create the database, service credentials and API endpoints.

Create the database: access the database as the root user, create the `placement` database and grant privileges:

```
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE placement;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' \
IDENTIFIED BY 'PLACEMENT_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' \
IDENTIFIED BY 'PLACEMENT_DBPASS';
MariaDB [(none)]> exit
```

Note: replace `PLACEMENT_DBPASS` with the password for the placement database.

```
source ~/.admin-openrc
```

Create the placement service credentials: create the placement user, add the admin role to it, and create the Placement API service:

```
openstack user create --domain default --password-prompt placement
openstack role add --project service --user placement admin
openstack service create --name placement --description "Placement API" placement
```

Create the placement service API endpoints:

```
openstack endpoint create --region RegionOne placement public http://controller:8778
openstack endpoint create --region RegionOne placement internal http://controller:8778
openstack endpoint create --region RegionOne placement admin http://controller:8778
```

Install and configure. Install the package:

```
yum install openstack-placement-api
```

Configure placement by editing `/etc/placement/placement.conf`: in the `[placement_database]` section configure the database entry point; in the `[api]` and `[keystone_authtoken]` sections configure the identity service entry point:

```
# vim /etc/placement/placement.conf

[placement_database]
# ...
connection = mysql+pymysql://placement:PLACEMENT_DBPASS@controller/placement

[api]
# ...
auth_strategy = keystone

[keystone_authtoken]
# ...
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = placement
password = PLACEMENT_PASS
```

Replace `PLACEMENT_DBPASS` with the password of the placement database and `PLACEMENT_PASS` with the password of the placement user.

Synchronize the database:

```
su -s /bin/sh -c "placement-manage db sync" placement
```

Restart the httpd service:

```
systemctl restart httpd
```

Verification. Run the status check:

```
source ~/.admin-openrc
placement-status upgrade check
```

Install osc-placement and list the available resource classes and traits:

```
yum install python3-osc-placement
openstack --os-placement-api-version 1.2 resource class list --sort-column name
openstack --os-placement-api-version 1.6 trait list --sort-column name
```

### Nova Installation

Create the database, service credentials and API endpoints.

Create the databases:

```
mysql -u root -p                                                    (CTL)

MariaDB [(none)]> CREATE DATABASE nova_api;
MariaDB [(none)]> CREATE DATABASE nova;
MariaDB [(none)]> CREATE DATABASE nova_cell0;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> exit
```

Note: replace `NOVA_DBPASS` with the password for the nova databases.

Create the nova service credentials:

```
source ~/.admin-openrc                                              (CTL)

openstack user create --domain default --password-prompt nova      (CTL)
openstack role add --project service --user nova admin             (CTL)
openstack service create --name nova --description "OpenStack Compute" compute    (CTL)
```

Create the nova API endpoints:

```
openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1     (CTL)
openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1   (CTL)
openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1      (CTL)
```

Install the packages:

```
yum install openstack-nova-api openstack-nova-conductor \           (CTL)
            openstack-nova-novncproxy openstack-nova-scheduler

yum install openstack-nova-compute                                  (CPT)
```

Note: on arm64, also run:

```
yum install edk2-aarch64                                            (CPT)
```

Configure nova by editing `/etc/nova/nova.conf`:

```
vim /etc/nova/nova.conf

[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
my_ip = 10.0.0.1
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver
compute_driver = libvirt.LibvirtDriver                              (CPT)
instances_path = /var/lib/nova/instances/                           (CPT)
lock_path = /var/lib/nova/tmp                                       (CPT)

[api_database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api   (CTL)

[database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova       (CTL)

[api]
auth_strategy = keystone

[keystone_authtoken]
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = NOVA_PASS

[vnc]
enabled = true
server_listen = $my_ip
server_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html          (CPT)

[libvirt]
virt_type = qemu                                                     (CPT)
cpu_mode = custom                                                    (CPT)
cpu_model = cortex-a72                                               (CPT)

[glance]
api_servers = http://controller:9292

[oslo_concurrency]
lock_path = /var/lib/nova/tmp                                        (CTL)

[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = PLACEMENT_PASS

[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
service_metadata_proxy = true                                        (CTL)
metadata_proxy_shared_secret = METADATA_SECRET                       (CTL)
```

Explanation:

- `[DEFAULT]`: enable the compute and metadata APIs, configure the RabbitMQ message queue entry point, set `my_ip`, and enable the neutron network service;
- `[api_database]`, `[database]`: database entry points;
- `[api]`, `[keystone_authtoken]`: identity service entry point;
- `[vnc]`: enable and configure the remote console entry point;
- `[glance]`: address of the image service API;
- `[oslo_concurrency]`: lock path;
- `[placement]`: entry point of the placement service.

Notes:

- replace `RABBIT_PASS` with the password of the openstack account in RabbitMQ;
- set `my_ip` to the management IP address of the controller node;
- replace `NOVA_DBPASS` with the password of the nova databases;
- replace `NOVA_PASS` with the password of the nova user;
- replace `PLACEMENT_PASS` with the password of the placement user;
- replace `NEUTRON_PASS` with the password of the neutron user;
- replace `METADATA_SECRET` with a suitable metadata proxy secret.

Additionally, determine whether VM hardware acceleration is supported (x86 architecture):

```
egrep -c '(vmx|svm)' /proc/cpuinfo                                   (CPT)
```

If the result is 0, hardware acceleration is not supported and libvirt must be configured to use QEMU instead of KVM:

```
vim /etc/nova/nova.conf                                              (CPT)

[libvirt]
virt_type = qemu
```

If the result is 1 or greater, hardware acceleration is supported and no extra configuration is needed.
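Besides grepping `/proc/cpuinfo`, the libvirt client ships a validator that reports whether hardware virtualization is actually usable on the compute node; running it is optional and not part of the original guide.

```
# (CPT) optional: let libvirt report whether KVM acceleration is usable
virt-host-validate qemu
```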
[\"/usr/share/AAVMF/AAVMF_CODE.fd: \\ /usr/share/AAVMF/AAVMF_VARS.fd\", \\ \"/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw: \\ /usr/share/edk2/aarch64/vars-template-pflash.raw\"] vim /etc/qemu/firmware/edk2-aarch64.json { \"description\": \"UEFI firmware for ARM64 virtual machines\", \"interface-types\": [ \"uefi\" ], \"mapping\": { \"device\": \"flash\", \"executable\": { \"filename\": \"/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw\", \"format\": \"raw\" }, \"nvram-template\": { \"filename\": \"/usr/share/edk2/aarch64/vars-template-pflash.raw\", \"format\": \"raw\" } }, \"targets\": [ { \"architecture\": \"aarch64\", \"machines\": [ \"virt-*\" ] } ], \"features\": [ ], \"tags\": [ ] } (CPT) \u540c\u6b65\u6570\u636e\u5e93 \u540c\u6b65nova-api\u6570\u636e\u5e93\uff1a su -s /bin/sh -c \"nova-manage api_db sync\" nova (CTL) \u6ce8\u518ccell0\u6570\u636e\u5e93\uff1a su -s /bin/sh -c \"nova-manage cell_v2 map_cell0\" nova (CTL) \u521b\u5efacell1 cell\uff1a su -s /bin/sh -c \"nova-manage cell_v2 create_cell --name=cell1 --verbose\" nova (CTL) \u540c\u6b65nova\u6570\u636e\u5e93\uff1a su -s /bin/sh -c \"nova-manage db sync\" nova (CTL) \u9a8c\u8bc1cell0\u548ccell1\u6ce8\u518c\u6b63\u786e\uff1a su -s /bin/sh -c \"nova-manage cell_v2 list_cells\" nova (CTL) \u6dfb\u52a0\u8ba1\u7b97\u8282\u70b9\u5230openstack\u96c6\u7fa4 su -s /bin/sh -c \"nova-manage cell_v2 discover_hosts --verbose\" nova (CPT) \u542f\u52a8\u670d\u52a1 systemctl enable \\ (CTL) openstack-nova-api.service \\ openstack-nova-scheduler.service \\ openstack-nova-conductor.service \\ openstack-nova-novncproxy.service systemctl start \\ (CTL) openstack-nova-api.service \\ openstack-nova-scheduler.service \\ openstack-nova-conductor.service \\ openstack-nova-novncproxy.service systemctl enable libvirtd.service openstack-nova-compute.service (CPT) systemctl start libvirtd.service openstack-nova-compute.service (CPT) \u9a8c\u8bc1 source ~/.admin-openrc (CTL) \u5217\u51fa\u670d\u52a1\u7ec4\u4ef6\uff0c\u9a8c\u8bc1\u6bcf\u4e2a\u6d41\u7a0b\u90fd\u6210\u529f\u542f\u52a8\u548c\u6ce8\u518c\uff1a openstack compute service list (CTL) \u5217\u51fa\u8eab\u4efd\u670d\u52a1\u4e2d\u7684API\u7aef\u70b9\uff0c\u9a8c\u8bc1\u4e0e\u8eab\u4efd\u670d\u52a1\u7684\u8fde\u63a5\uff1a openstack catalog list (CTL) \u5217\u51fa\u955c\u50cf\u670d\u52a1\u4e2d\u7684\u955c\u50cf\uff0c\u9a8c\u8bc1\u4e0e\u955c\u50cf\u670d\u52a1\u7684\u8fde\u63a5\uff1a openstack image list (CTL) \u68c0\u67e5cells\u662f\u5426\u8fd0\u4f5c\u6210\u529f\uff0c\u4ee5\u53ca\u5176\u4ed6\u5fc5\u8981\u6761\u4ef6\u662f\u5426\u5df2\u5177\u5907\u3002 nova-status upgrade check (CTL) Neutron \u5b89\u88c5 \u00b6 \u521b\u5efa\u6570\u636e\u5e93\u3001\u670d\u52a1\u51ed\u8bc1\u548c API \u7aef\u70b9 \u521b\u5efa\u6570\u636e\u5e93\uff1a mysql -u root -p (CTL) MariaDB [(none)]> CREATE DATABASE neutron; MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \\ IDENTIFIED BY 'NEUTRON_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \\ IDENTIFIED BY 'NEUTRON_DBPASS'; MariaDB [(none)]> exit \u6ce8\u610f \u66ff\u6362 NEUTRON_DBPASS \u4e3a neutron \u6570\u636e\u5e93\u8bbe\u7f6e\u5bc6\u7801\u3002 source ~/.admin-openrc (CTL) \u521b\u5efaneutron\u670d\u52a1\u51ed\u8bc1 openstack user create --domain default --password-prompt neutron (CTL) openstack role add --project service --user neutron admin (CTL) openstack service create --name neutron --description \"OpenStack Networking\" network (CTL) \u521b\u5efaNeutron\u670d\u52a1API\u7aef\u70b9\uff1a openstack endpoint create 
### Neutron Installation

Create the database, service credentials and API endpoints.

Create the database:

```
mysql -u root -p                                                      (CTL)

MariaDB [(none)]> CREATE DATABASE neutron;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
IDENTIFIED BY 'NEUTRON_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
IDENTIFIED BY 'NEUTRON_DBPASS';
MariaDB [(none)]> exit
```

Note: replace `NEUTRON_DBPASS` with the password for the neutron database.

```
source ~/.admin-openrc                                                (CTL)
```

Create the neutron service credentials:

```
openstack user create --domain default --password-prompt neutron     (CTL)
openstack role add --project service --user neutron admin            (CTL)
openstack service create --name neutron --description "OpenStack Networking" network    (CTL)
```

Create the Neutron service API endpoints:

```
openstack endpoint create --region RegionOne network public http://controller:9696      (CTL)
openstack endpoint create --region RegionOne network internal http://controller:9696    (CTL)
openstack endpoint create --region RegionOne network admin http://controller:9696       (CTL)
```

Install the packages:

```
yum install openstack-neutron openstack-neutron-linuxbridge ebtables ipset \   (CTL)
            openstack-neutron-ml2

yum install openstack-neutron-linuxbridge ebtables ipset              (CPT)
```

Configure neutron. Main configuration:

```
vim /etc/neutron/neutron.conf

[database]
connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron    (CTL)

[DEFAULT]
core_plugin = ml2                                                     (CTL)
service_plugins = router                                              (CTL)
allow_overlapping_ips = true                                          (CTL)
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = true                             (CTL)
notify_nova_on_port_data_changes = true                               (CTL)
api_workers = 3                                                       (CTL)

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = neutron
password = NEUTRON_PASS

[nova]
auth_url = http://controller:5000                                     (CTL)
auth_type = password                                                  (CTL)
project_domain_name = Default                                         (CTL)
user_domain_name = Default                                            (CTL)
region_name = RegionOne                                                (CTL)
project_name = service                                                 (CTL)
username = nova                                                        (CTL)
password = NOVA_PASS                                                   (CTL)

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
```

Explanation:

- `[database]`: database entry point;
- `[DEFAULT]`: enable the ml2 and router plugins, allow overlapping IP addresses, configure the RabbitMQ message queue entry point;
- `[DEFAULT]` and `[keystone_authtoken]`: identity service entry point;
- `[DEFAULT]` and `[nova]`: notify the compute service of network topology changes;
- `[oslo_concurrency]`: lock path.

Notes: replace `NEUTRON_DBPASS` with the password of the neutron database, `RABBIT_PASS` with the password of the openstack account in RabbitMQ, `NEUTRON_PASS` with the password of the neutron user, and `NOVA_PASS` with the password of the nova user.

Configure the ML2 plugin:

```
vim /etc/neutron/plugins/ml2/ml2_conf.ini

[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security

[ml2_type_flat]
flat_networks = provider

[ml2_type_vxlan]
vni_ranges = 1:1000

[securitygroup]
enable_ipset = true
```

Create the symbolic link /etc/neutron/plugin.ini:

```
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
```

Notes:

- `[ml2]`: enable the flat, vlan and vxlan networks, the linuxbridge and l2population mechanisms, and the port security extension driver;
- `[ml2_type_flat]`: use flat networks as the provider virtual network;
- `[ml2_type_vxlan]`: VXLAN network identifier range;
- `[securitygroup]`: allow ipset.

Supplementary note: the detailed L2 configuration can be adjusted to your needs; this document uses provider networks with linuxbridge.
Configure the Linux bridge agent:

```
vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini

[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME

[vxlan]
enable_vxlan = true
local_ip = OVERLAY_INTERFACE_IP_ADDRESS
l2_population = true

[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
```

Explanation:

- `[linux_bridge]`: map the provider virtual network to the physical network interface;
- `[vxlan]`: enable the vxlan overlay network, configure the IP address of the physical interface that handles the overlay network, enable layer-2 population;
- `[securitygroup]`: allow security groups, configure the linux bridge iptables firewall driver.

Notes: replace `PROVIDER_INTERFACE_NAME` with the physical network interface and `OVERLAY_INTERFACE_IP_ADDRESS` with the management IP address of the controller node.

Configure the Layer-3 agent:

```
vim /etc/neutron/l3_agent.ini                                          (CTL)

[DEFAULT]
interface_driver = linuxbridge
```

Explanation: in the `[DEFAULT]` section, set the interface driver to linuxbridge.

Configure the DHCP agent:

```
vim /etc/neutron/dhcp_agent.ini                                        (CTL)

[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
```

Explanation: in the `[DEFAULT]` section, configure the linuxbridge interface driver and the Dnsmasq DHCP driver, and enable isolated metadata.

Configure the metadata agent:

```
vim /etc/neutron/metadata_agent.ini                                    (CTL)

[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = METADATA_SECRET
```

Explanation: in the `[DEFAULT]` section, configure the metadata host and the shared secret.

Note: replace `METADATA_SECRET` with a suitable metadata proxy secret.

Configure the related settings in nova:

```
vim /etc/nova/nova.conf

[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = Default
user_domain_name = Default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
service_metadata_proxy = true                                          (CTL)
metadata_proxy_shared_secret = METADATA_SECRET                         (CTL)
```

Explanation: the `[neutron]` section configures the access parameters, enables the metadata proxy, and sets the secret.

Notes: replace `NEUTRON_PASS` with the password of the neutron user and `METADATA_SECRET` with a suitable metadata proxy secret.

Synchronize the database:

```
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
    --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
```

Restart the compute API service:

```
systemctl restart openstack-nova-api.service
```

Start the network services:

```
systemctl enable neutron-server.service neutron-linuxbridge-agent.service \    (CTL)
                 neutron-dhcp-agent.service neutron-metadata-agent.service
systemctl enable neutron-l3-agent.service
systemctl restart openstack-nova-api.service neutron-server.service \          (CTL)
                  neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
                  neutron-metadata-agent.service neutron-l3-agent.service
```
## Cinder Installation

Create the database, service credentials, and API endpoints.

Create the database:

```shell
mysql -u root -p
```

```
MariaDB [(none)]> CREATE DATABASE cinder;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \
  IDENTIFIED BY 'CINDER_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \
  IDENTIFIED BY 'CINDER_DBPASS';
MariaDB [(none)]> exit
```

Note: replace CINDER_DBPASS with the password you choose for the cinder database.

```shell
source ~/.admin-openrc
```

Create the cinder service credentials:

```shell
openstack user create --domain default --password-prompt cinder
openstack role add --project service --user cinder admin
openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
```

Create the Block Storage service API endpoints:

```shell
openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s
```

Install the packages:

```shell
yum install openstack-cinder-api openstack-cinder-scheduler                            (CTL)
yum install lvm2 device-mapper-persistent-data scsi-target-utils rpcbind nfs-utils \   (STG)
            openstack-cinder-volume openstack-cinder-backup
```
Prepare a storage device (the following is only an example):

```shell
pvcreate /dev/vdb
vgcreate cinder-volumes /dev/vdb

vim /etc/lvm/lvm.conf
```

```
devices {
...
filter = [ "a/vdb/", "r/.*/"]
```

Explanation: in the `devices` section, add a filter that accepts the /dev/vdb device and rejects all other devices.

Prepare NFS:

```shell
mkdir -p /root/cinder/backup

cat << EOF >> /etc/exports
/root/cinder/backup 192.168.1.0/24(rw,sync,no_root_squash,no_all_squash)
EOF
```

Configure Cinder:

```shell
vim /etc/cinder/cinder.conf
```

```
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone
my_ip = 10.0.0.11
enabled_backends = lvm                                     (STG)
backup_driver=cinder.backup.drivers.nfs.NFSBackupDriver    (STG)
backup_share=HOST:PATH                                     (STG)

[database]
connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = cinder
password = CINDER_PASS

[oslo_concurrency]
lock_path = /var/lib/cinder/tmp

[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver  (STG)
volume_group = cinder-volumes                              (STG)
iscsi_protocol = iscsi                                     (STG)
iscsi_helper = tgtadm                                      (STG)
```

Explanation:

- `[database]`: database connection.
- `[DEFAULT]`: RabbitMQ message queue entry point and `my_ip`.
- `[keystone_authtoken]`: Identity service entry point.
- `[oslo_concurrency]`: lock path.

Note:

- Replace CINDER_DBPASS with the password of the cinder database.
- Replace RABBIT_PASS with the password of the openstack account in RabbitMQ.
- Set `my_ip` to the management IP address of the controller node.
- Replace CINDER_PASS with the password of the cinder user.
- Replace HOST:PATH with the NFS host IP and shared path.

Synchronize the database:

```shell
su -s /bin/sh -c "cinder-manage db sync" cinder   (CTL)
```

Configure Nova:

```shell
vim /etc/nova/nova.conf   (CTL)
```

```
[cinder]
os_region_name = RegionOne
```

Restart the Compute API service:

```shell
systemctl restart openstack-nova-api.service
```

Start the Cinder services:

```shell
systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service     (CTL)
systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service      (CTL)

systemctl enable rpcbind.service nfs-server.service tgtd.service iscsid.service \    (STG)
                 openstack-cinder-volume.service \
                 openstack-cinder-backup.service
systemctl start rpcbind.service nfs-server.service tgtd.service iscsid.service \     (STG)
                openstack-cinder-volume.service \
                openstack-cinder-backup.service
```

Note: when Cinder attaches volumes through tgtadm, edit /etc/tgt/tgtd.conf with the following content so that tgtd can discover the iscsi targets of cinder-volume:

```
include /var/lib/cinder/volumes/*
```

Verification:

```shell
source ~/.admin-openrc
openstack volume service list
```
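Beyond listing the volume services, a small end-to-end check is to create and delete a test volume. This is a hedged sketch; the volume name is illustrative.

```shell
# Optional end-to-end check: create a 1 GB test volume, confirm it reaches
# the "available" status, then remove it ("test-vol" is an illustrative name).
openstack volume create --size 1 test-vol
openstack volume list
openstack volume delete test-vol
```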
## Horizon Installation

Install the package:

```shell
yum install openstack-dashboard
```

Modify the variables in the configuration file:

```shell
vim /etc/openstack-dashboard/local_settings
```

```
OPENSTACK_HOST = "controller"
ALLOWED_HOSTS = ['*', ]

SESSION_ENGINE = 'django.contrib.sessions.backends.cache'

CACHES = {
    'default': {
         'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
         'LOCATION': 'controller:11211',
    }
}

OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST

OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True

OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"

OPENSTACK_KEYSTONE_DEFAULT_ROLE = "member"

WEBROOT = '/dashboard'

POLICY_FILES_PATH = "/etc/openstack-dashboard"

OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 3,
}
```

Restart the httpd service:

```shell
systemctl restart httpd.service memcached.service
```

Verification: open a browser, go to http://HOSTIP/dashboard/ , and log in to Horizon.

Note: replace HOSTIP with the management-plane IP address of the controller node.

## Tempest Installation

Tempest is the integration test service of OpenStack. It is recommended if you need comprehensive, automated functional testing of an installed OpenStack environment; otherwise it does not have to be installed.

Install Tempest:

```shell
yum install openstack-tempest
```

Initialize a workspace:

```shell
tempest init mytest
```

Modify the configuration file:

```shell
cd mytest
vi etc/tempest.conf
```

tempest.conf must be filled in with the information of the current OpenStack environment; see the official sample configuration for details.

Run the tests:

```shell
tempest run
```

Install Tempest extensions (optional).

The OpenStack services also provide their own Tempest test plugins, which can be installed to extend the test coverage. In Wallaby, extension tests are provided for Cinder, Glance, Keystone, Ironic, and Trove; they can be installed with:

```shell
yum install python3-cinder-tempest-plugin python3-glance-tempest-plugin python3-ironic-tempest-plugin python3-keystone-tempest-plugin python3-trove-tempest-plugin
```
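As a rough orientation for the tempest.conf mentioned above (not a replacement for the official sample), a minimal configuration usually only needs admin credentials, the Keystone v3 endpoint, and an image/flavor to boot. All values below are illustrative assumptions for the environment described in this guide:

```
[auth]
admin_username = admin
admin_password = ADMIN_PASS            # illustrative; use your admin password
admin_project_name = admin
admin_domain_name = Default

[identity]
uri_v3 = http://controller:5000/v3

[compute]
image_ref = IMAGE_UUID                 # UUID of an image registered in Glance
flavor_ref = FLAVOR_ID                 # ID of a small flavor

[network]
public_network_id = PUBLIC_NETWORK_UUID
```

If the full suite takes too long, a subset can be selected, for example `tempest run --regex tempest.api.identity`.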
## Ironic Installation

Ironic is the bare metal service of OpenStack. It is recommended if you need bare metal provisioning; otherwise it does not have to be installed.

Set up the database.

The Bare Metal service stores information in a database. Create an ironic database that the ironic user can access, replacing IRONIC_DBPASSWORD with a suitable password:

```shell
mysql -u root -p
```

```
MariaDB [(none)]> CREATE DATABASE ironic CHARACTER SET utf8;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'localhost' \
  IDENTIFIED BY 'IRONIC_DBPASSWORD';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'%' \
  IDENTIFIED BY 'IRONIC_DBPASSWORD';
```

Create the service user credentials.

1. Create the Bare Metal service users:

```shell
openstack user create --password IRONIC_PASSWORD \
  --email ironic@example.com ironic
openstack role add --project service --user ironic admin
openstack service create --name ironic --description "Ironic baremetal provisioning service" baremetal
openstack service create --name ironic-inspector --description "Ironic inspector baremetal provisioning service" baremetal-introspection
openstack user create --password IRONIC_INSPECTOR_PASSWORD \
  --email ironic_inspector@example.com ironic_inspector
openstack role add --project service --user ironic-inspector admin
```

2. Create the Bare Metal service endpoints:

```shell
openstack endpoint create --region RegionOne baremetal admin http://$IRONIC_NODE:6385
openstack endpoint create --region RegionOne baremetal public http://$IRONIC_NODE:6385
openstack endpoint create --region RegionOne baremetal internal http://$IRONIC_NODE:6385
openstack endpoint create --region RegionOne baremetal-introspection internal http://172.20.19.13:5050/v1
openstack endpoint create --region RegionOne baremetal-introspection public http://172.20.19.13:5050/v1
openstack endpoint create --region RegionOne baremetal-introspection admin http://172.20.19.13:5050/v1
```

Configure the ironic-api service.

The configuration file is /etc/ironic/ironic.conf.

1. Configure the location of the database via the `connection` option, replacing IRONIC_DBPASSWORD with the password of the ironic user and DB_IP with the IP address of the DB server:

```
[database]
# The SQLAlchemy connection string used to connect to the
# database (string value)
connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic
```

2. Configure ironic-api to use the RabbitMQ message broker, replacing the RPC_* placeholders with the RabbitMQ address details and credentials (JSON-RPC can also be used instead of RabbitMQ):

```
[DEFAULT]
# A URL representing the messaging driver to use and its full
# configuration. (string value)
transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
```

3. Configure ironic-api to use the Identity service credentials, replacing PUBLIC_IDENTITY_IP with the public IP of the Identity server, PRIVATE_IDENTITY_IP with the private IP of the Identity server, and IRONIC_PASSWORD with the password of the ironic user in the Identity service:

```
[DEFAULT]
# Authentication strategy used by ironic-api: one of
# "keystone" or "noauth". "noauth" should not be used in a
# production environment because all authentication will be
# disabled. (string value)
auth_strategy=keystone

host = controller
memcache_servers = controller:11211
enabled_network_interfaces = flat,noop,neutron
default_network_interface = noop
transport_url = rabbit://openstack:RABBITPASSWD@controller:5672/
enabled_hardware_types = ipmi
enabled_boot_interfaces = pxe
enabled_deploy_interfaces = direct
default_deploy_interface = direct
enabled_inspect_interfaces = inspector
enabled_management_interfaces = ipmitool
enabled_power_interfaces = ipmitool
enabled_rescue_interfaces = no-rescue,agent
isolinux_bin = /usr/share/syslinux/isolinux.bin
logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s

[keystone_authtoken]
# Authentication type to load (string value)
auth_type=password
# Complete public Identity API endpoint (string value)
www_authenticate_uri=http://PUBLIC_IDENTITY_IP:5000
# Complete admin Identity API endpoint. (string value)
auth_url=http://PRIVATE_IDENTITY_IP:5000
# Service username. (string value)
username=ironic
# Service account password. (string value)
password=IRONIC_PASSWORD
# Service tenant name. (string value)
project_name=service
# Domain name containing project (string value)
project_domain_name=Default
# User's domain name (string value)
user_domain_name=Default
```
```
[agent]
deploy_logs_collect = always
deploy_logs_local_path = /var/log/ironic/deploy
deploy_logs_storage_backend = local
image_download_source = http
stream_raw_images = false
force_raw_images = false
verify_ca = False

[oslo_concurrency]

[oslo_messaging_notifications]
transport_url = rabbit://openstack:123456@172.20.19.25:5672/
topics = notifications
driver = messagingv2

[oslo_messaging_rabbit]
amqp_durable_queues = True
rabbit_ha_queues = True

[pxe]
ipxe_enabled = false
pxe_append_params = nofb nomodeset vga=normal coreos.autologin ipa-insecure=1
image_cache_size = 204800
tftp_root=/var/lib/tftpboot/cephfs/
tftp_master_path=/var/lib/tftpboot/cephfs/master_images

[dhcp]
dhcp_provider = none
```

4. Create the Bare Metal service database tables:

```shell
ironic-dbsync --config-file /etc/ironic/ironic.conf create_schema
```

5. Restart the ironic-api service:

```shell
sudo systemctl restart openstack-ironic-api
```

Configure the ironic-conductor service.

1. Replace HOST_IP with the IP of the conductor host:

```
[DEFAULT]
# IP address of this host. If unset, will determine the IP
# programmatically. If unable to do so, will use "127.0.0.1".
# (string value)
my_ip=HOST_IP
```

2. Configure the location of the database. ironic-conductor should use the same configuration as ironic-api. Replace IRONIC_DBPASSWORD with the password of the ironic user and DB_IP with the IP address of the DB server:

```
[database]
# The SQLAlchemy connection string to use to connect to the
# database. (string value)
connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic
```

3. Configure the RabbitMQ message broker. ironic-conductor should use the same configuration as ironic-api. Replace the RPC_* placeholders with the RabbitMQ address details and credentials (JSON-RPC can also be used instead of RabbitMQ):
```
[DEFAULT]
# A URL representing the messaging driver to use and its full
# configuration. (string value)
transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
```

4. Configure the credentials used to access other OpenStack services.

To communicate with other OpenStack services, the Bare Metal service needs to authenticate with the OpenStack Identity service using service user credentials when making requests. These credentials must be configured in each configuration section associated with the corresponding service:

- `[neutron]` - to access the OpenStack Networking service
- `[glance]` - to access the OpenStack Image service
- `[swift]` - to access the OpenStack Object Storage service
- `[cinder]` - to access the OpenStack Block Storage service
- `[inspector]` - to access the OpenStack Bare Metal introspection service
- `[service_catalog]` - a special entry that holds the credentials the Bare Metal service uses to discover its own API URL endpoint as registered in the OpenStack Identity service catalog

For simplicity, the same service user can be used for all services. For backward compatibility it should be the same user that is configured in the `[keystone_authtoken]` section of the ironic-api service, but this is not mandatory; a different service user can be created and configured for each service.

In the following example, the authentication information used to access the OpenStack Networking service is configured so that:

- the Networking service is deployed in the Identity service region named RegionOne, and only the public endpoint interface is registered in the service catalog;
- a specific CA SSL certificate is used for HTTPS connections when making requests;
- the same service user as configured for the ironic-api service is used;
- the dynamic password authentication plugin discovers a suitable Identity service API version based on the other options.

```
[neutron]
# Authentication type to load (string value)
auth_type = password
# Authentication URL (string value)
auth_url=https://IDENTITY_IP:5000/
# Username (string value)
username=ironic
# User's password (string value)
password=IRONIC_PASSWORD
# Project name to scope to (string value)
project_name=service
# Domain ID containing project (string value)
project_domain_id=default
# User's domain id (string value)
user_domain_id=default
# PEM encoded Certificate Authority to use when verifying
# HTTPs connections. (string value)
cafile=/opt/stack/data/ca-bundle.pem
# The default region_name for endpoint URL discovery. (string
# value)
region_name = RegionOne
# List of interfaces, in order of preference, for endpoint
# URL. (list value)
valid_interfaces=public
```
By default, to communicate with another service, the Bare Metal service tries to discover a suitable endpoint for that service through the service catalog of the Identity service. If you want to use a different endpoint for a particular service, specify it with the `endpoint_override` option in the configuration file of the Bare Metal service:

```
[neutron]
...
endpoint_override =
```

5. Configure the enabled drivers and hardware types.

Set the hardware types allowed by the ironic-conductor service with `enabled_hardware_types`:

```
[DEFAULT]
enabled_hardware_types = ipmi
```

Configure the hardware interfaces:

```
enabled_boot_interfaces = pxe
enabled_deploy_interfaces = direct,iscsi
enabled_inspect_interfaces = inspector
enabled_management_interfaces = ipmitool
enabled_power_interfaces = ipmitool
```

Configure the interface defaults:

```
[DEFAULT]
default_deploy_interface = direct
default_network_interface = neutron
```

If any driver that uses Direct deploy is enabled, the Swift backend of the Image service must be installed and configured. The Ceph Object Gateway (RADOS Gateway) is also supported as an Image service backend.

6. Restart the ironic-conductor service:

```shell
sudo systemctl restart openstack-ironic-conductor
```

Configure the ironic-inspector service.

The configuration file is /etc/ironic-inspector/inspector.conf.

1. Create the database:

```shell
mysql -u root -p
```

```
MariaDB [(none)]> CREATE DATABASE ironic_inspector CHARACTER SET utf8;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic_inspector.* TO 'ironic_inspector'@'localhost' \
  IDENTIFIED BY 'IRONIC_INSPECTOR_DBPASSWORD';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic_inspector.* TO 'ironic_inspector'@'%' \
  IDENTIFIED BY 'IRONIC_INSPECTOR_DBPASSWORD';
```

2. Configure the location of the database via the `connection` option, replacing IRONIC_INSPECTOR_DBPASSWORD with the password of the ironic_inspector user and DB_IP with the IP address of the DB server:

```
[database]
backend = sqlalchemy
connection = mysql+pymysql://ironic_inspector:IRONIC_INSPECTOR_DBPASSWORD@DB_IP/ironic_inspector
min_pool_size = 100
max_pool_size = 500
pool_timeout = 30
max_retries = 5
max_overflow = 200
db_retry_interval = 2
db_inc_retry_interval = True
db_max_retry_interval = 2
db_max_retries = 5
```

3. Configure the message queue transport address:

```
[DEFAULT]
transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
```

4. Set up Keystone authentication:
```
[DEFAULT]
auth_strategy = keystone
timeout = 900
rootwrap_config = /etc/ironic-inspector/rootwrap.conf
logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s
log_dir = /var/log/ironic-inspector
state_path = /var/lib/ironic-inspector
use_stderr = False

[ironic]
api_endpoint = http://IRONIC_API_HOST_ADDRRESS:6385
auth_type = password
auth_url = http://PUBLIC_IDENTITY_IP:5000
auth_strategy = keystone
ironic_url = http://IRONIC_API_HOST_ADDRRESS:6385
os_region = RegionOne
project_name = service
project_domain_name = Default
user_domain_name = Default
username = IRONIC_SERVICE_USER_NAME
password = IRONIC_SERVICE_USER_PASSWORD

[keystone_authtoken]
auth_type = password
auth_url = http://control:5000
www_authenticate_uri = http://control:5000
project_domain_name = default
user_domain_name = default
project_name = service
username = ironic_inspector
password = IRONICPASSWD
region_name = RegionOne
memcache_servers = control:11211
token_cache_time = 300

[processing]
add_ports = active
processing_hooks = $default_processing_hooks,local_link_connection,lldp_basic
ramdisk_logs_dir = /var/log/ironic-inspector/ramdisk
always_store_ramdisk_logs = true
store_data = none
power_off = false

[pxe_filter]
driver = iptables

[capabilities]
boot_mode=True
```

5. Configure the ironic-inspector dnsmasq service (configuration file: /etc/ironic-inspector/dnsmasq.conf):

```
port=0
interface=enp3s0                         # replace with the actual listening interface
dhcp-range=172.20.19.100,172.20.19.110   # replace with the actual DHCP address range
bind-interfaces
enable-tftp
dhcp-match=set:efi,option:client-arch,7
dhcp-match=set:efi,option:client-arch,9
dhcp-match=aarch64, option:client-arch,11
dhcp-boot=tag:aarch64,grubaa64.efi
dhcp-boot=tag:!aarch64,tag:efi,grubx64.efi
dhcp-boot=tag:!aarch64,tag:!efi,pxelinux.0
tftp-root=/tftpboot                      # replace with the actual tftpboot directory
log-facility=/var/log/dnsmasq.log
```

6. Disable DHCP on the subnet of the ironic provisioning network:

```shell
openstack subnet set --no-dhcp 72426e89-f552-4dc4-9ac7-c4e131ce7f3c
```

7. Initialize the database of the ironic-inspector service (run on the controller node):

```shell
ironic-inspector-dbsync --config-file /etc/ironic-inspector/inspector.conf upgrade
```

8. Start the services:

```shell
systemctl enable --now openstack-ironic-inspector.service
systemctl enable --now openstack-ironic-inspector-dnsmasq.service
```

Configure the httpd service.

Create the httpd root directory used by Ironic and set its owner and group. The directory path must match the `http_root` option in the `[deploy]` section of /etc/ironic/ironic.conf:

```shell
mkdir -p /var/lib/ironic/httproot
chown ironic.ironic /var/lib/ironic/httproot
```

Install and configure the httpd service.

Install the httpd service (skip if already installed):

```shell
yum install httpd -y
```

Create the file /etc/httpd/conf.d/openstack-ironic-httpd.conf with the following content:

```
Listen 8080

ServerName ironic.openeuler.com
ErrorLog "/var/log/httpd/openstack-ironic-httpd-error_log"
CustomLog "/var/log/httpd/openstack-ironic-httpd-access_log" "%h %l %u %t \"%r\" %>s %b"
DocumentRoot "/var/lib/ironic/httproot"
Options Indexes FollowSymLinks
Require all granted
LogLevel warn
AddDefaultCharset UTF-8
EnableSendfile on
```

Note that the listening port must match the port specified in the `http_url` option of the `[deploy]` section of /etc/ironic/ironic.conf.

Restart the httpd service:

```shell
systemctl restart httpd
```

Build the deploy ramdisk image.
The Wallaby ramdisk image can be built with the ironic-python-agent service or the disk-image-builder tool, or with the latest upstream ironic-python-agent-builder. Users may also choose other tools.

If you use the native Wallaby tools, install the corresponding package:

```shell
yum install openstack-ironic-python-agent
```

or

```shell
yum install diskimage-builder
```

For detailed usage, refer to the official documentation.

The following describes the complete process of building the deploy image used by Ironic with ironic-python-agent-builder.

Install ironic-python-agent-builder.

1. Install the tool:

```shell
pip install ironic-python-agent-builder
```

2. Modify the Python interpreter in the following files:

```shell
/usr/bin/yum /usr/libexec/urlgrabber-ext-down
```

3. Install the other required tools:

```shell
yum install git
```

Because `DIB` depends on the `semanage` command, check whether it is available before building the image (`semanage --help`). If the command is missing, install it:

```shell
# First, find out which package provides it
[root@localhost ~]# yum provides /usr/sbin/semanage
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirror.vcu.edu
 * extras: mirror.vcu.edu
 * updates: mirror.math.princeton.edu
policycoreutils-python-2.5-34.el7.aarch64 : SELinux policy core python utilities
Repo        : base
Matched from:
Filename    : /usr/sbin/semanage
# Install it
[root@localhost ~]# yum install policycoreutils-python
```

Build the image.

For the arm architecture, add:

```shell
export ARCH=aarch64
```

Basic usage:

```shell
usage: ironic-python-agent-builder [-h] [-r RELEASE] [-o OUTPUT] [-e ELEMENT]
                                   [-b BRANCH] [-v] [--extra-args EXTRA_ARGS]
                                   distribution

positional arguments:
  distribution          Distribution to use

optional arguments:
  -h, --help            show this help message and exit
  -r RELEASE, --release RELEASE
                        Distribution release to use
  -o OUTPUT, --output OUTPUT
                        Output base file name
  -e ELEMENT, --element ELEMENT
                        Additional DIB element to use
  -b BRANCH, --branch BRANCH
                        If set, override the branch that is used for ironic-
                        python-agent and requirements
  -v, --verbose         Enable verbose logging in diskimage-builder
  --extra-args EXTRA_ARGS
                        Extra arguments to pass to diskimage-builder
```

Example:

```shell
ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky
```

Allow SSH login.

Initialize the environment variables, then build the image:

```shell
export DIB_DEV_USER_USERNAME=ipa
export DIB_DEV_USER_PWDLESS_SUDO=yes
export DIB_DEV_USER_PASSWORD='123'
ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky -e selinux-permissive -e devuser
```
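Once a deploy image has been built with any of the variants in this section, the kernel and ramdisk usually have to be registered in Glance before Ironic can use them. This is only a hedged sketch; it assumes the `-o /mnt/ironic-agent-ssh` output base used above produces `/mnt/ironic-agent-ssh.kernel` and `/mnt/ironic-agent-ssh.initramfs`, so adjust the file names to your actual build output.

```shell
# Register the built deploy kernel/ramdisk in Glance (file names follow the
# -o output base from the build command above; adjust if yours differ).
openstack image create deploy-kernel --public \
  --disk-format aki --container-format aki \
  --file /mnt/ironic-agent-ssh.kernel
openstack image create deploy-ramdisk --public \
  --disk-format ari --container-format ari \
  --file /mnt/ironic-agent-ssh.initramfs
```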
Specify a code repository.

Initialize the corresponding environment variables, then build the image:

```shell
# Specify the repository location and version
DIB_REPOLOCATION_ironic_python_agent=git@172.20.2.149:liuzz/ironic-python-agent.git
DIB_REPOREF_ironic_python_agent=origin/develop

# Clone the code directly from gerrit
DIB_REPOLOCATION_ironic_python_agent=https://review.opendev.org/openstack/ironic-python-agent
DIB_REPOREF_ironic_python_agent=refs/changes/43/701043/1
```

Reference: [source-repositories](https://docs.openstack.org/diskimage-builder/latest/elements/source-repositories/README.html).

Specifying the repository location and version has been verified to work.

Note:

The PXE configuration file template in native OpenStack does not support the arm64 architecture and has to be modified by the user.

In Wallaby, upstream Ironic still does not support UEFI PXE boot on arm64: the generated grub.cfg file (usually under /tftpboot/) has the wrong format, which causes PXE boot to fail. An example of an incorrectly generated configuration file:

![ironic-err](../../img/install/ironic-err.png)

As shown above, on the arm architecture the commands that locate the vmlinux and ramdisk images are `linux` and `initrd` respectively, while the highlighted commands in the figure are the ones used for UEFI PXE boot on x86. Users need to modify the code that generates grub.cfg themselves.

TLS errors when Ironic queries the command execution status from IPA:

In Wallaby, both IPA and Ironic send requests to each other with TLS verification enabled by default. Disable it as described in the official documentation:

1. In the Ironic configuration file (/etc/ironic/ironic.conf), add `ipa-insecure=1` to the following options:

```
[agent]
verify_ca = False

[pxe]
pxe_append_params = nofb nomodeset vga=normal coreos.autologin ipa-insecure=1
```

2. In the ramdisk image, add the IPA configuration file /etc/ironic_python_agent/ironic_python_agent.conf (the /etc/ironic_python_agent directory has to be created first) and configure TLS as follows:

```
[DEFAULT]
enable_auto_tls = False
```

Set the permissions:

```
chown -R ipa.ipa /etc/ironic_python_agent/
```
3. Modify the service unit file of the IPA service to add the configuration file option:

```shell
vim /usr/lib/systemd/system/ironic-python-agent.service
```

```
[Unit]
Description=Ironic Python Agent
After=network-online.target

[Service]
ExecStartPre=/sbin/modprobe vfat
ExecStart=/usr/local/bin/ironic-python-agent --config-file /etc/ironic_python_agent/ironic_python_agent.conf
Restart=always
RestartSec=30s

[Install]
WantedBy=multi-user.target
```

## Kolla Installation

Kolla provides production-ready containerized deployment for OpenStack services. Kolla and Kolla-ansible were introduced in openEuler 22.03 LTS.

Installing Kolla is simple; just install the corresponding RPM packages:

```shell
yum install openstack-kolla openstack-kolla-ansible
```

After installation, commands such as `kolla-ansible`, `kolla-build`, `kolla-genpwd`, and `kolla-mergepwd` are available.
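For orientation only (this is not part of the tested flow in this guide), a typical kolla-ansible all-in-one run looks roughly like the hedged sketch below; the inventory path and globals.yml location follow the upstream defaults and may differ on openEuler.

```shell
# Hedged sketch of a kolla-ansible all-in-one deployment (upstream defaults,
# shown only for orientation; paths and prechecks may differ on your system).
kolla-genpwd                                   # generate /etc/kolla/passwords.yml
vim /etc/kolla/globals.yml                     # set base distro, network interface, VIP, ...
kolla-ansible -i /usr/share/kolla-ansible/ansible/inventory/all-in-one bootstrap-servers
kolla-ansible -i /usr/share/kolla-ansible/ansible/inventory/all-in-one prechecks
kolla-ansible -i /usr/share/kolla-ansible/ansible/inventory/all-in-one deploy
```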
## Trove Installation

Trove is the database service of OpenStack. It is recommended if you want to use the database service provided by OpenStack; otherwise it does not have to be installed.

Set up the database.

The Database service stores information in a database. Create a trove database that the trove user can access, replacing TROVE_DBPASSWORD with a suitable password:

```shell
mysql -u root -p
```

```
MariaDB [(none)]> CREATE DATABASE trove CHARACTER SET utf8;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'localhost' \
  IDENTIFIED BY 'TROVE_DBPASSWORD';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'%' \
  IDENTIFIED BY 'TROVE_DBPASSWORD';
```

Create the service user credentials.

1. Create the Trove service user:

```shell
openstack user create --password TROVE_PASSWORD \
  --email trove@example.com trove
openstack role add --project service --user trove admin
openstack service create --name trove --description "Database service" database
```

Explanation: replace TROVE_PASSWORD with the password of the trove user.

2. Create the Database service endpoints:

```shell
openstack endpoint create --region RegionOne database public http://controller:8779/v1.0/%\(tenant_id\)s
openstack endpoint create --region RegionOne database internal http://controller:8779/v1.0/%\(tenant_id\)s
openstack endpoint create --region RegionOne database admin http://controller:8779/v1.0/%\(tenant_id\)s
```

Install and configure the Trove components.

1. Install the Trove packages:

```shell
yum install openstack-trove python-troveclient
```

2. Configure trove.conf:

```shell
vim /etc/trove/trove.conf
```

```
[DEFAULT]
bind_host = TROVE_NODE_IP
log_dir = /var/log/trove
network_driver = trove.network.neutron.NeutronDriver
management_security_groups =
nova_keypair = trove-mgmt
default_datastore = mysql
taskmanager_manager = trove.taskmanager.manager.Manager
trove_api_workers = 5
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
reboot_time_out = 300
usage_timeout = 900
agent_call_high_timeout = 1200
use_syslog = False
debug = True

# Set these if using Neutron Networking
network_driver = trove.network.neutron.NeutronDriver
network_label_regex = .*
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/

[database]
connection = mysql+pymysql://trove:TROVE_DBPASS@controller/trove

[keystone_authtoken]
project_domain_name = Default
project_name = service
user_domain_name = Default
password = trove
username = trove
auth_url = http://controller:5000/v3/
auth_type = password

[service_credentials]
auth_url = http://controller:5000/v3/
region_name = RegionOne
project_name = service
password = trove
project_domain_name = Default
user_domain_name = Default
username = trove

[mariadb]
tcp_ports = 3306,4444,4567,4568

[mysql]
tcp_ports = 3306

[postgresql]
tcp_ports = 5432
```

Explanation:

- In `[DEFAULT]`, set `bind_host` to the IP of the node where Trove is deployed.
- `nova_compute_url` and `cinder_url` are the endpoints created for Nova and Cinder in Keystone.
- `nova_proxy_XXX` is the information of a user that can access the Nova service; the example uses the admin user.
- `transport_url` is the RabbitMQ connection information; replace RABBIT_PASS with the RabbitMQ password.
- `connection` in `[database]` points to the database created for Trove in MySQL above.
- In the Trove user information, replace TROVE_PASS with the actual password of the trove user.

3. Configure trove-guestagent.conf:

```shell
vim /etc/trove/trove-guestagent.conf
```

```
[DEFAULT]
log_file = trove-guestagent.log
log_dir = /var/log/trove/
ignore_users = os_admin
control_exchange = trove
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
rpc_backend = rabbit
command_process_timeout = 60
use_syslog = False
debug = True

[service_credentials]
auth_url = http://controller:5000/v3/
region_name = RegionOne
project_name = service
password = TROVE_PASS
project_domain_name = Default
user_domain_name = Default
username = trove

[mysql]
docker_image = your-registry/your-repo/mysql
backup_docker_image = your-registry/your-repo/db-backup-mysql:1.1.0
```

Explanation: the guestagent is an independent Trove component that must be built into the virtual machine image Trove boots through Nova. After a database instance is created, the guestagent process starts and reports heartbeats to Trove through the message queue (RabbitMQ), so the RabbitMQ user and password must be configured here.

Starting from the Victoria release, Trove uses a single unified image to run different types of databases; the database service runs in a Docker container inside the guest virtual machine.

- `transport_url` is the RabbitMQ connection information; replace RABBIT_PASS with the RabbitMQ password.
- In the Trove user information, replace TROVE_PASS with the actual password of the trove user.
4. Populate the Trove database:

```shell
su -s /bin/sh -c "trove-manage db_sync" trove
```

5. Finalize the installation: configure the Trove services to start at boot and start them:

```shell
systemctl enable openstack-trove-api.service \
                 openstack-trove-taskmanager.service \
                 openstack-trove-conductor.service
systemctl start openstack-trove-api.service \
                openstack-trove-taskmanager.service \
                openstack-trove-conductor.service
```

## Swift Installation

Swift provides an elastic, scalable, and highly available distributed object storage service, suitable for storing large amounts of unstructured data.

Create the service credentials and API endpoints.

Create the service credentials:

```shell
# Create the swift user:
openstack user create --domain default --password-prompt swift
# Add the admin role to the swift user:
openstack role add --project service --user swift admin
# Create the swift service entity:
openstack service create --name swift --description "OpenStack Object Storage" object-store
```

Create the Swift API endpoints:

```shell
openstack endpoint create --region RegionOne object-store public http://controller:8080/v1/AUTH_%\(project_id\)s
openstack endpoint create --region RegionOne object-store internal http://controller:8080/v1/AUTH_%\(project_id\)s
openstack endpoint create --region RegionOne object-store admin http://controller:8080/v1
```

Install the packages:

```shell
yum install openstack-swift-proxy python3-swiftclient python3-keystoneclient python3-keystonemiddleware memcached   (CTL)
```

Configure the proxy-server.

The Swift RPM package already contains a basically usable proxy-server.conf; only the IP and the swift password need to be modified manually.

Note: replace the password with the one you chose for the swift user in the Identity service.

Install and configure the storage nodes. (STG)

Install the supporting packages:

```shell
yum install xfsprogs rsync
```

Format the /dev/vdb and /dev/vdc devices as XFS:

```shell
mkfs.xfs /dev/vdb
mkfs.xfs /dev/vdc
```

Create the mount point directory structure:

```shell
mkdir -p /srv/node/vdb
mkdir -p /srv/node/vdc
```

Find the UUIDs of the new partitions:

```shell
blkid
```

Edit the /etc/fstab file and add the following to it:

```
UUID="" /srv/node/vdb xfs noatime 0 2
UUID="" /srv/node/vdc xfs noatime 0 2
```

Mount the devices:

```shell
mount /srv/node/vdb
mount /srv/node/vdc
```

Note: if disaster recovery is not required, only one device needs to be created in the steps above, and the rsync configuration below can be skipped.

(Optional) Create or edit the /etc/rsyncd.conf file to contain the following:

```
[DEFAULT]
uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
address = MANAGEMENT_INTERFACE_IP_ADDRESS

[account]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/account.lock

[container]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/container.lock

[object]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/object.lock
```

Replace MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network on the storage node.
Start the rsyncd service and configure it to start at boot:

```shell
systemctl enable rsyncd.service
systemctl start rsyncd.service
```

Install and configure the components on the storage nodes. (STG)

Install the packages:

```shell
yum install openstack-swift-account openstack-swift-container openstack-swift-object
```

Edit the account-server.conf, container-server.conf, and object-server.conf files in the /etc/swift directory, replacing `bind_ip` with the IP address of the management network on the storage node.

Ensure correct ownership of the mount point directory structure:

```shell
chown -R swift:swift /srv/node
```

Create the recon directory and ensure it has correct ownership:

```shell
mkdir -p /var/cache/swift
chown -R root:swift /var/cache/swift
chmod -R 775 /var/cache/swift
```

Create the account ring. (CTL)

Change to the /etc/swift directory:

```shell
cd /etc/swift
```

Create the base account.builder file:

```shell
swift-ring-builder account.builder create 10 1 1
```

Add each storage node to the ring:

```shell
swift-ring-builder account.builder add \
  --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6202 \
  --device DEVICE_NAME --weight DEVICE_WEIGHT
```

Replace STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network on the storage node and DEVICE_NAME with the name of a storage device on the same storage node.

Note: repeat this command for every storage device on every storage node.

Verify the ring contents:

```shell
swift-ring-builder account.builder
```

Rebalance the ring:

```shell
swift-ring-builder account.builder rebalance
```

Create the container ring. (CTL)

Change to the /etc/swift directory.

Create the base container.builder file:

```shell
swift-ring-builder container.builder create 10 1 1
```

Add each storage node to the ring:

```shell
swift-ring-builder container.builder add \
  --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6201 \
  --device DEVICE_NAME --weight 100
```

Replace STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network on the storage node and DEVICE_NAME with the name of a storage device on the same storage node.

Note: repeat this command for every storage device on every storage node.

Verify the ring contents:

```shell
swift-ring-builder container.builder
```

Rebalance the ring:

```shell
swift-ring-builder container.builder rebalance
```

Create the object ring. (CTL)

Change to the /etc/swift directory.

Create the base object.builder file:
```shell
swift-ring-builder object.builder create 10 1 1
```

Add each storage node to the ring:

```shell
swift-ring-builder object.builder add \
  --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6200 \
  --device DEVICE_NAME --weight 100
```

Replace STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network on the storage node and DEVICE_NAME with the name of a storage device on the same storage node.

Note: repeat this command for every storage device on every storage node.

Verify the ring contents:

```shell
swift-ring-builder object.builder
```

Rebalance the ring:

```shell
swift-ring-builder object.builder rebalance
```

Distribute the ring configuration files: copy the account.ring.gz, container.ring.gz, and object.ring.gz files to the /etc/swift directory on every storage node and on any other node that runs the proxy service.

Finalize the installation.

Edit the /etc/swift/swift.conf file:

```
[swift-hash]
swift_hash_path_suffix = test-hash
swift_hash_path_prefix = test-hash

[storage-policy:0]
name = Policy-0
default = yes
```

Replace test-hash with unique values.

Copy the swift.conf file to the /etc/swift directory on every storage node and on any other node that runs the proxy service.

On all nodes, ensure correct ownership of the configuration directory:

```shell
chown -R root:swift /etc/swift
```

On the controller node and on any other node that runs the proxy service, start the Object Storage proxy service and its dependencies, and configure them to start at boot:

```shell
systemctl enable openstack-swift-proxy.service memcached.service
systemctl start openstack-swift-proxy.service memcached.service
```

On the storage nodes, start the Object Storage services and configure them to start at boot:

```shell
systemctl enable openstack-swift-account.service openstack-swift-account-auditor.service \
                 openstack-swift-account-reaper.service openstack-swift-account-replicator.service
systemctl start openstack-swift-account.service openstack-swift-account-auditor.service \
                openstack-swift-account-reaper.service openstack-swift-account-replicator.service

systemctl enable openstack-swift-container.service openstack-swift-container-auditor.service \
                 openstack-swift-container-replicator.service openstack-swift-container-updater.service
systemctl start openstack-swift-container.service openstack-swift-container-auditor.service \
                openstack-swift-container-replicator.service openstack-swift-container-updater.service

systemctl enable openstack-swift-object.service openstack-swift-object-auditor.service \
                 openstack-swift-object-replicator.service openstack-swift-object-updater.service
systemctl start openstack-swift-object.service openstack-swift-object-auditor.service \
                openstack-swift-object-replicator.service openstack-swift-object-updater.service
```
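The section above does not include a verification step. A hedged sketch of a quick functional check with the clients installed earlier, assuming the admin environment file from the previous sections; the container name and file are illustrative:

```shell
# Optional check: show account status and round-trip a small object
# ("test-container" and /etc/hosts are illustrative choices).
source ~/.admin-openrc
swift stat
openstack container create test-container
openstack object create test-container /etc/hosts
openstack object list test-container
```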
## Cyborg Installation

Cyborg provides acceleration device support for OpenStack, including GPU, FPGA, ASIC, NP, SoCs, NVMe/NOF SSDs, ODP, DPDK/SPDK, and so on.

Initialize the database:

```
CREATE DATABASE cyborg;
GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'localhost' IDENTIFIED BY 'CYBORG_DBPASS';
GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'%' IDENTIFIED BY 'CYBORG_DBPASS';
```

Create the corresponding Keystone resources:

```shell
$ openstack user create --domain default --password-prompt cyborg
$ openstack role add --project service --user cyborg admin
$ openstack service create --name cyborg --description "Acceleration Service" accelerator
$ openstack endpoint create --region RegionOne \
  accelerator public http://:6666/v1
$ openstack endpoint create --region RegionOne \
  accelerator internal http://:6666/v1
$ openstack endpoint create --region RegionOne \
  accelerator admin http://:6666/v1
```

Install Cyborg:

```shell
yum install openstack-cyborg
```

Configure Cyborg by editing /etc/cyborg/cyborg.conf:

```
[DEFAULT]
transport_url = rabbit://%RABBITMQ_USER%:%RABBITMQ_PASSWORD%@%OPENSTACK_HOST_IP%:5672/
use_syslog = False
state_path = /var/lib/cyborg
debug = True

[database]
connection = mysql+pymysql://%DATABASE_USER%:%DATABASE_PASSWORD%@%OPENSTACK_HOST_IP%/cyborg

[service_catalog]
project_domain_id = default
user_domain_id = default
project_name = service
password = PASSWORD
username = cyborg
auth_url = http://%OPENSTACK_HOST_IP%/identity
auth_type = password

[placement]
project_domain_name = Default
project_name = service
user_domain_name = Default
password = PASSWORD
username = placement
auth_url = http://%OPENSTACK_HOST_IP%/identity
auth_type = password

[keystone_authtoken]
memcached_servers = localhost:11211
project_domain_name = Default
project_name = service
user_domain_name = Default
password = PASSWORD
username = cyborg
auth_url = http://%OPENSTACK_HOST_IP%/identity
auth_type = password
```

Replace the user names, passwords, IP addresses, and so on with the values of your environment.

Synchronize the database tables:

```shell
cyborg-dbsync --config-file /etc/cyborg/cyborg.conf upgrade
```

Start the Cyborg services:

```shell
systemctl enable openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent
systemctl start openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent
```
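No verification step is given above; a hedged sketch of a basic health check follows. The `openstack accelerator` command requires the python-cyborgclient OSC plugin, which the steps above do not install, so that part is an assumption.

```shell
# Check that the Cyborg services are running.
systemctl status openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent

# If python3-cyborgclient is installed (not covered above), the API can be queried:
# yum install python3-cyborgclient
openstack accelerator device list
```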
## Aodh Installation

Create the database:

```
CREATE DATABASE aodh;
GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'localhost' IDENTIFIED BY 'AODH_DBPASS';
GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'%' IDENTIFIED BY 'AODH_DBPASS';
```

Create the corresponding Keystone resources:

```shell
openstack user create --domain default --password-prompt aodh
openstack role add --project service --user aodh admin
openstack service create --name aodh --description "Telemetry" alarming
openstack endpoint create --region RegionOne alarming public http://controller:8042
openstack endpoint create --region RegionOne alarming internal http://controller:8042
openstack endpoint create --region RegionOne alarming admin http://controller:8042
```

Install Aodh:

```shell
yum install openstack-aodh-api openstack-aodh-evaluator openstack-aodh-notifier openstack-aodh-listener openstack-aodh-expirer python3-aodhclient
```

Note: the python3-pyparsing package that Aodh depends on in the openEuler OS repository is not suitable; the matching OpenStack version must be installed over it. Use

```shell
yum list | grep pyparsing | grep OpenStack | awk '{print $2}'
```

to obtain the corresponding version VERSION, then run `yum install -y python3-pyparsing-VERSION` to install the compatible pyparsing over the existing one.

Modify the configuration file:

```
[database]
connection = mysql+pymysql://aodh:AODH_DBPASS@controller/aodh

[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = aodh
password = AODH_PASS

[service_credentials]
auth_type = password
auth_url = http://controller:5000/v3
project_domain_id = default
user_domain_id = default
project_name = service
username = aodh
password = AODH_PASS
interface = internalURL
region_name = RegionOne
```

Initialize the database:

```shell
aodh-dbsync
```

Start the Aodh services:

```shell
systemctl enable openstack-aodh-api.service openstack-aodh-evaluator.service openstack-aodh-notifier.service openstack-aodh-listener.service
systemctl start openstack-aodh-api.service openstack-aodh-evaluator.service openstack-aodh-notifier.service openstack-aodh-listener.service
```

## Gnocchi Installation

Create the database:

```
CREATE DATABASE gnocchi;
GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'localhost' IDENTIFIED BY 'GNOCCHI_DBPASS';
GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'%' IDENTIFIED BY 'GNOCCHI_DBPASS';
```

Create the corresponding Keystone resources:

```shell
openstack user create --domain default --password-prompt gnocchi
openstack role add --project service --user gnocchi admin
openstack service create --name gnocchi --description "Metric Service" metric
openstack endpoint create --region RegionOne metric public http://controller:8041
openstack endpoint create --region RegionOne metric internal http://controller:8041
openstack endpoint create --region RegionOne metric admin http://controller:8041
```

Install Gnocchi:

```shell
yum install openstack-gnocchi-api openstack-gnocchi-metricd python3-gnocchiclient
```

Modify the configuration file /etc/gnocchi/gnocchi.conf:

```
[api]
auth_mode = keystone
port = 8041
uwsgi_mode = http-socket

[keystone_authtoken]
auth_type = password
auth_url = http://controller:5000/v3
project_domain_name = Default
user_domain_name = Default
project_name = service
username = gnocchi
password = GNOCCHI_PASS
interface = internalURL
region_name = RegionOne

[indexer]
url = mysql+pymysql://gnocchi:GNOCCHI_DBPASS@controller/gnocchi

[storage]
# coordination_url is not required but specifying one will improve
# performance with better workload division across workers.
coordination_url = redis://controller:6379
file_basepath = /var/lib/gnocchi
driver = file
```
Initialize the database:

```shell
gnocchi-upgrade
```

Start the Gnocchi services:

```shell
systemctl enable openstack-gnocchi-api.service openstack-gnocchi-metricd.service
systemctl start openstack-gnocchi-api.service openstack-gnocchi-metricd.service
```

## Ceilometer Installation

Create the corresponding Keystone resources:

```shell
openstack user create --domain default --password-prompt ceilometer
openstack role add --project service --user ceilometer admin
openstack service create --name ceilometer --description "Telemetry" metering
```

Install Ceilometer:

```shell
yum install openstack-ceilometer-notification openstack-ceilometer-central
```

Modify the configuration file /etc/ceilometer/pipeline.yaml:

```
publishers:
    # set address of Gnocchi
    # + filter out Gnocchi-related activity meters (Swift driver)
    # + set default archive policy
    - gnocchi://?filter_project=service&archive_policy=low
```

Modify the configuration file /etc/ceilometer/ceilometer.conf:

```
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller

[service_credentials]
auth_type = password
auth_url = http://controller:5000/v3
project_domain_id = default
user_domain_id = default
project_name = service
username = ceilometer
password = CEILOMETER_PASS
interface = internalURL
region_name = RegionOne
```

Initialize the database:

```shell
ceilometer-upgrade
```

Start the Ceilometer services:

```shell
systemctl enable openstack-ceilometer-notification.service openstack-ceilometer-central.service
systemctl start openstack-ceilometer-notification.service openstack-ceilometer-central.service
```
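A hedged sketch of a quick check that the telemetry chain is wired up: after Ceilometer has published some samples, Gnocchi should list metrics and Aodh should answer alarm queries. The clients used here, python3-gnocchiclient and python3-aodhclient, were installed in the sections above.

```shell
# Gnocchi: list known metrics (may be empty until Ceilometer publishes samples).
gnocchi metric list

# Aodh: list alarms (empty output still confirms the alarming API responds).
openstack alarm list
```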
### Heat Installation

Create the `heat` database and grant it the proper access privileges, replacing HEAT_DBPASS with a suitable password:

```
CREATE DATABASE heat;
GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'localhost' IDENTIFIED BY 'HEAT_DBPASS';
GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'%' IDENTIFIED BY 'HEAT_DBPASS';
```

Create the service credentials: create the `heat` user and add the `admin` role to it:

```
openstack user create --domain default --password-prompt heat
openstack role add --project service --user heat admin
```

Create the `heat` and `heat-cfn` services and their API endpoints:

```
openstack service create --name heat --description "Orchestration" orchestration
openstack service create --name heat-cfn --description "Orchestration" cloudformation
openstack endpoint create --region RegionOne orchestration public http://controller:8004/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne orchestration internal http://controller:8004/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne orchestration admin http://controller:8004/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne cloudformation public http://controller:8000/v1
openstack endpoint create --region RegionOne cloudformation internal http://controller:8000/v1
openstack endpoint create --region RegionOne cloudformation admin http://controller:8000/v1
```

Create the additional information needed for stack management, including the `heat` domain, its domain admin user `heat_domain_admin`, the `heat_stack_owner` role, and the `heat_stack_user` role:

```
openstack domain create heat
openstack user create --domain heat --password-prompt heat_domain_admin
openstack role add --domain heat --user-domain heat --user heat_domain_admin admin
openstack role create heat_stack_owner
openstack role create heat_stack_user
```

Install the packages:

```
yum install openstack-heat-api openstack-heat-api-cfn openstack-heat-engine
```

Edit the configuration file `/etc/heat/heat.conf`:

```
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
heat_metadata_server_url = http://controller:8000
heat_waitcondition_server_url = http://controller:8000/v1/waitcondition
stack_domain_admin = heat_domain_admin
stack_domain_admin_password = HEAT_DOMAIN_PASS
stack_user_domain_name = heat

[database]
connection = mysql+pymysql://heat:HEAT_DBPASS@controller/heat

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = heat
password = HEAT_PASS

[trustee]
auth_type = password
auth_url = http://controller:5000
username = heat
password = HEAT_PASS
user_domain_name = default

[clients_keystone]
auth_uri = http://controller:5000
```

Initialize the heat database tables:

```
su -s /bin/sh -c "heat-manage db_sync" heat
```

Start the services:

```
systemctl enable openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service
systemctl start openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service
```
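To confirm the Heat engine accepts requests, a throwaway stack can be created from an empty template. This is an optional sketch; it assumes a `python3-heatclient` package for the `openstack stack` commands, and the stack name is arbitrary:

```shell
# Optional Heat smoke test (python3-heatclient assumed for `openstack stack ...`)
yum install python3-heatclient
source ~/.admin-openrc
cat > /tmp/empty.yaml << 'EOF'
heat_template_version: wallaby
description: Empty template used only to exercise the Heat API
resources: {}
EOF
openstack stack create -t /tmp/empty.yaml smoke-test
openstack stack list
openstack stack delete --yes smoke-test
```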
### Quick deployment with the OpenStack SIG development tool oos

oos (openEuler OpenStack SIG) is the command-line tool provided by the OpenStack SIG. The `oos env` family of commands provides ansible scripts for one-click deployment of OpenStack (all-in-one or a three-node cluster), which users can use to quickly deploy an openEuler-RPM-based OpenStack environment. The oos tool can deploy OpenStack either through a cloud provider (currently only Huawei Cloud is supported) or by managing existing hosts. The following walks through deploying an all-in-one OpenStack environment on Huawei Cloud to show how oos is used.

1. Install the oos tool:

```
pip install openstack-sig-tool
```

2. Configure the Huawei Cloud provider information. Open `/usr/local/etc/oos/oos.conf` and fill it in with the Huawei Cloud resources you own:

```
[huaweicloud]
ak =
sk =
region = ap-southeast-3
root_volume_size = 100
data_volume_size = 100
security_group_name = oos
image_format = openEuler-%%(release)s-%%(arch)s
vpc_name = oos_vpc
subnet1_name = oos_subnet1
subnet2_name = oos_subnet2
```

3. Configure the OpenStack environment information. Open `/usr/local/etc/oos/oos.conf` and adjust the configuration to your machine and requirements:

```
[environment]
mysql_root_password = root
mysql_project_password = root
rabbitmq_password = root
project_identity_password = root
enabled_service = keystone,neutron,cinder,placement,nova,glance,horizon,aodh,ceilometer,cyborg,gnocchi,kolla,heat,swift,trove,tempest
neutron_provider_interface_name = br-ex
default_ext_subnet_range = 10.100.100.0/24
default_ext_subnet_gateway = 10.100.100.1
neutron_dataplane_interface_name = eth1
cinder_block_device = vdb
swift_storage_devices = vdc
swift_hash_path_suffix = ash
swift_hash_path_prefix = has
glance_api_workers = 2
cinder_api_workers = 2
nova_api_workers = 2
nova_metadata_api_workers = 2
nova_conductor_workers = 2
nova_scheduler_workers = 2
neutron_api_workers = 2
horizon_allowed_host = *
kolla_openeuler_plugin = false
```

Key options:

| Option | Description |
|:------|:------------|
| enabled_service | List of services to install; trim it to your needs |
| neutron_provider_interface_name | Name of the neutron L3 bridge |
| default_ext_subnet_range | IP range of the neutron private network |
| default_ext_subnet_gateway | Gateway of the neutron private network |
| neutron_dataplane_interface_name | NIC used by neutron; a new, dedicated NIC is recommended to avoid conflicts with the existing NIC and to prevent the all-in-one host from losing connectivity |
| cinder_block_device | Block device name used by cinder |
| swift_storage_devices | Block device name used by swift |
| kolla_openeuler_plugin | Whether to enable the kolla plugin; when set to True, kolla supports deploying openEuler containers |

4. Create an openEuler 22.03-LTS-SP2 x86_64 virtual machine on Huawei Cloud for the all-in-one OpenStack deployment:

```
# sshpass is used during `oos env create` to set up password-free access to the target VM
dnf install sshpass
oos env create -r 22.03-lts-sp2 -f small -a x86 -n test-oos all_in_one
```

See `oos env create --help` for the full list of parameters.

5. Deploy the all-in-one OpenStack environment:

```
oos env setup test-oos -r wallaby
```

See `oos env setup --help` for the full list of parameters.

6. Initialize the tempest environment. If you want to run tempest tests against this environment, run `oos env init`, which automatically creates the OpenStack resources tempest needs:

```
oos env init test-oos
```

After the command succeeds, a `mytest` directory is created in the user's home directory; change into it to run `tempest run`.

When deploying OpenStack by managing existing hosts, the overall flow matches the Huawei Cloud case above: steps 1, 3, 5, and 6 are unchanged, step 2 (Huawei Cloud provider configuration) is dropped, and step 4 changes from creating a VM on Huawei Cloud to managing the host:

```
# sshpass is used during `oos env create` to set up password-free access to the target host
dnf install sshpass
oos env manage -r 22.03-lts-sp2 -i TARGET_MACHINE_IP -p TARGET_MACHINE_PASSWD -n test-oos
```

Replace TARGET_MACHINE_IP with the target host's IP and TARGET_MACHINE_PASSWD with its password. The full list of parameters can be viewed with `oos env manage --help`.
# OpenStack-Wallaby Deployment Guide (openEuler 22.03-LTS-SP2)

Contents: an introduction to OpenStack, the conventions used in this guide, environment preparation and configuration, installation of the SQL database, RabbitMQ, and Memcached, installation of the OpenStack services Keystone, Glance, Placement, Nova, Neutron, Cinder, Horizon, Tempest, Ironic, Kolla, Trove, Swift, Cyborg, Aodh, Gnocchi, Ceilometer, and Heat, and finally quick deployment with the OpenStack SIG development tool oos.

## Introduction to OpenStack

OpenStack is both a community and a project. It provides an operating platform, or toolset, for deploying clouds, giving organizations scalable and flexible cloud computing. As an open source cloud management platform, OpenStack combines several major components such as Nova, Cinder, Neutron, Glance, Keystone, and Horizon to get its work done. OpenStack supports virtually every type of cloud environment; the project aims to deliver a cloud management platform that is simple to implement, massively scalable, feature rich, and standardized. OpenStack provides an Infrastructure-as-a-Service (IaaS) solution through a set of complementary services, each of which offers an API for integration.

The official openEuler 22.03-LTS-SP2 repositories already support the OpenStack Wallaby release. Users can configure the yum source and then deploy OpenStack by following this document.

## Conventions

OpenStack supports several deployment topologies. This document covers both the ALL in One and the Distributed deployment modes, with the following conventions:

- ALL in One mode: ignore all suffixes.
- Distributed mode:
  - A `(CTL)` suffix means the configuration item or command applies only to `control nodes`.
  - A `(CPT)` suffix means the configuration item or command applies only to `compute nodes`.
  - A `(STG)` suffix means the configuration item or command applies only to `storage nodes`.
  - Anything without a suffix applies to both `control nodes` and `compute nodes`.

Note: the services covered by the conventions above are Cinder, Nova, and Neutron.
Neutron","title":"\u7ea6\u5b9a"},{"location":"install/openEuler-22.03-LTS-SP2/OpenStack-wallaby/#_2","text":"","title":"\u51c6\u5907\u73af\u5883"},{"location":"install/openEuler-22.03-LTS-SP2/OpenStack-wallaby/#_3","text":"\u914d\u7f6e 22.03 LTS \u5b98\u65b9yum\u6e90\uff0c\u9700\u8981\u542f\u7528EPOL\u8f6f\u4ef6\u4ed3\u4ee5\u652f\u6301OpenStack yum update yum install openstack-release-wallaby yum clean all && yum makecache \u6ce8\u610f \uff1a\u5982\u679c\u4f60\u7684\u73af\u5883\u7684YUM\u6e90\u6ca1\u6709\u542f\u7528EPOL\uff0c\u9700\u8981\u540c\u65f6\u914d\u7f6eEPOL\uff0c\u786e\u4fddEPOL\u5df2\u914d\u7f6e\uff0c\u5982\u4e0b\u6240\u793a\u3002 vi /etc/yum.repos.d/openEuler.repo [EPOL] name=EPOL baseurl=http://repo.openeuler.org/openEuler-22.03-LTS-SP2/EPOL/main/$basearch/ enabled=1 gpgcheck=1 gpgkey=http://repo.openeuler.org/openEuler-22.03-LTS-SP2/OS/$basearch/RPM-GPG-KEY-openEuler EOF \u4fee\u6539\u4e3b\u673a\u540d\u4ee5\u53ca\u6620\u5c04 \u8bbe\u7f6e\u5404\u4e2a\u8282\u70b9\u7684\u4e3b\u673a\u540d hostnamectl set-hostname controller (CTL) hostnamectl set-hostname compute (CPT) \u5047\u8bbecontroller\u8282\u70b9\u7684IP\u662f 10.0.0.11 ,compute\u8282\u70b9\u7684IP\u662f 10.0.0.12 \uff08\u5982\u679c\u5b58\u5728\u7684\u8bdd\uff09,\u5219\u4e8e /etc/hosts \u65b0\u589e\u5982\u4e0b\uff1a 10.0.0.11 controller 10.0.0.12 compute","title":"\u73af\u5883\u914d\u7f6e"},{"location":"install/openEuler-22.03-LTS-SP2/OpenStack-wallaby/#sql-database","text":"\u6267\u884c\u5982\u4e0b\u547d\u4ee4\uff0c\u5b89\u88c5\u8f6f\u4ef6\u5305\u3002 yum install mariadb mariadb-server python3-PyMySQL \u6267\u884c\u5982\u4e0b\u547d\u4ee4\uff0c\u521b\u5efa\u5e76\u7f16\u8f91 /etc/my.cnf.d/openstack.cnf \u6587\u4ef6\u3002 vim /etc/my.cnf.d/openstack.cnf [mysqld] bind-address = 10.0.0.11 default-storage-engine = innodb innodb_file_per_table = on max_connections = 4096 collation-server = utf8_general_ci character-set-server = utf8 \u6ce8\u610f \u5176\u4e2d bind-address \u8bbe\u7f6e\u4e3a\u63a7\u5236\u8282\u70b9\u7684\u7ba1\u7406IP\u5730\u5740\u3002 \u542f\u52a8 DataBase \u670d\u52a1\uff0c\u5e76\u4e3a\u5176\u914d\u7f6e\u5f00\u673a\u81ea\u542f\u52a8\uff1a systemctl enable mariadb.service systemctl start mariadb.service \u914d\u7f6eDataBase\u7684\u9ed8\u8ba4\u5bc6\u7801\uff08\u53ef\u9009\uff09 mysql_secure_installation \u6ce8\u610f \u6839\u636e\u63d0\u793a\u8fdb\u884c\u5373\u53ef","title":"\u5b89\u88c5 SQL DataBase"},{"location":"install/openEuler-22.03-LTS-SP2/OpenStack-wallaby/#rabbitmq","text":"\u6267\u884c\u5982\u4e0b\u547d\u4ee4\uff0c\u5b89\u88c5\u8f6f\u4ef6\u5305\u3002 yum install rabbitmq-server \u542f\u52a8 RabbitMQ \u670d\u52a1\uff0c\u5e76\u4e3a\u5176\u914d\u7f6e\u5f00\u673a\u81ea\u542f\u52a8\u3002 systemctl enable rabbitmq-server.service systemctl start rabbitmq-server.service \u6dfb\u52a0 OpenStack\u7528\u6237\u3002 rabbitmqctl add_user openstack RABBIT_PASS \u6ce8\u610f \u66ff\u6362 RABBIT_PASS \uff0c\u4e3a OpenStack \u7528\u6237\u8bbe\u7f6e\u5bc6\u7801 \u8bbe\u7f6eopenstack\u7528\u6237\u6743\u9650\uff0c\u5141\u8bb8\u8fdb\u884c\u914d\u7f6e\u3001\u5199\u3001\u8bfb\uff1a rabbitmqctl set_permissions openstack \".*\" \".*\" \".*\"","title":"\u5b89\u88c5 RabbitMQ"},{"location":"install/openEuler-22.03-LTS-SP2/OpenStack-wallaby/#memcached","text":"\u6267\u884c\u5982\u4e0b\u547d\u4ee4\uff0c\u5b89\u88c5\u4f9d\u8d56\u8f6f\u4ef6\u5305\u3002 yum install memcached python3-memcached \u7f16\u8f91 /etc/sysconfig/memcached \u6587\u4ef6\u3002 vim /etc/sysconfig/memcached OPTIONS=\"-l 127.0.0.1,::1,controller\" 
### Install Memcached

Run the following command to install the dependency packages:

```
yum install memcached python3-memcached
```

Edit the `/etc/sysconfig/memcached` file:

```
vim /etc/sysconfig/memcached

OPTIONS="-l 127.0.0.1,::1,controller"
```

Run the following commands to start the Memcached service and enable it at boot:

```
systemctl enable memcached.service
systemctl start memcached.service
```

Note: after the service starts, you can run `memcached-tool controller stats` to make sure it started correctly and is reachable; `controller` can be replaced with the control node's management IP address.

## Install OpenStack

### Keystone Installation

Create the keystone database and grant privileges:

```
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE keystone;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
IDENTIFIED BY 'KEYSTONE_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
IDENTIFIED BY 'KEYSTONE_DBPASS';
MariaDB [(none)]> exit
```

Note: replace KEYSTONE_DBPASS with the password you want for the Keystone database.

Install the packages:

```
yum install openstack-keystone httpd mod_wsgi
```

Configure keystone:

```
vim /etc/keystone/keystone.conf

[database]
connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone

[token]
provider = fernet
```

Explanation: the [database] section configures the database entry; the [token] section configures the token provider.

Note: replace KEYSTONE_DBPASS with the Keystone database password.

Synchronize the database:

```
su -s /bin/sh -c "keystone-manage db_sync" keystone
```

Initialize the Fernet key repositories:

```
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
```

Bootstrap the service:

```
keystone-manage bootstrap --bootstrap-password ADMIN_PASS \
    --bootstrap-admin-url http://controller:5000/v3/ \
    --bootstrap-internal-url http://controller:5000/v3/ \
    --bootstrap-public-url http://controller:5000/v3/ \
    --bootstrap-region-id RegionOne
```

Note: replace ADMIN_PASS with the password you want for the admin user.

Configure the Apache HTTP server:

```
vim /etc/httpd/conf/httpd.conf

ServerName controller
```

```
ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
```

Explanation: the ServerName entry must reference the control node.

Note: if the ServerName entry does not exist, create it.

Start the Apache HTTP service:

```
systemctl enable httpd.service
systemctl start httpd.service
```

Create the environment variable configuration:

```
cat << EOF >> ~/.admin-openrc
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
EOF
```

Note: replace ADMIN_PASS with the admin user's password.

To create the domain, projects, users, and roles, first install python3-openstackclient:

```
yum install python3-openstackclient
```

Import the environment variables:

```
source ~/.admin-openrc
```
Create the project `service`; the `default` domain was already created by `keystone-manage bootstrap`:

```
openstack domain create --description "An Example Domain" example
openstack project create --domain default --description "Service Project" service
```

Create a (non-admin) project `myproject`, a user `myuser`, and a role `myrole`, and add the role `myrole` to `myproject` and `myuser`:

```
openstack project create --domain default --description "Demo Project" myproject
openstack user create --domain default --password-prompt myuser
openstack role create myrole
openstack role add --project myproject --user myuser myrole
```

Verification. Unset the temporary environment variables OS_AUTH_URL and OS_PASSWORD:

```
source ~/.admin-openrc
unset OS_AUTH_URL OS_PASSWORD
```

Request a token for the admin user:

```
openstack --os-auth-url http://controller:5000/v3 \
    --os-project-domain-name Default --os-user-domain-name Default \
    --os-project-name admin --os-username admin token issue
```

Request a token for the myuser user:

```
openstack --os-auth-url http://controller:5000/v3 \
    --os-project-domain-name Default --os-user-domain-name Default \
    --os-project-name myproject --os-username myuser token issue
```
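For later checks as the unprivileged user it can be convenient to keep a second credentials file alongside `~/.admin-openrc`. The file name `~/.myuser-openrc` and the MYUSER_PASS placeholder below are illustrative choices, not part of the original steps:

```shell
cat << EOF >> ~/.myuser-openrc
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=myproject
export OS_USERNAME=myuser
export OS_PASSWORD=MYUSER_PASS
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
EOF
source ~/.myuser-openrc
openstack token issue
```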
### Glance Installation

Create the database, service credentials, and API endpoints. Create the database:

```
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE glance;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
IDENTIFIED BY 'GLANCE_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
IDENTIFIED BY 'GLANCE_DBPASS';
MariaDB [(none)]> exit
```

Note: replace GLANCE_DBPASS with the password you want for the glance database.

Create the service credentials:

```
source ~/.admin-openrc

openstack user create --domain default --password-prompt glance
openstack role add --project service --user glance admin
openstack service create --name glance --description "OpenStack Image" image
```

Create the Image service API endpoints:

```
openstack endpoint create --region RegionOne image public http://controller:9292
openstack endpoint create --region RegionOne image internal http://controller:9292
openstack endpoint create --region RegionOne image admin http://controller:9292
```

Install the packages:

```
yum install openstack-glance
```

Configure glance:

```
vim /etc/glance/glance-api.conf

[database]
connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = GLANCE_PASS

[paste_deploy]
flavor = keystone

[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
```

Explanation: the [database] section configures the database entry; the [keystone_authtoken] and [paste_deploy] sections configure the Identity service entry; the [glance_store] section configures the local filesystem store and the location of image files.

Note: replace GLANCE_DBPASS with the glance database password and GLANCE_PASS with the glance user's password.

Synchronize the database:

```
su -s /bin/sh -c "glance-manage db_sync" glance
```

Start the service:

```
systemctl enable openstack-glance-api.service
systemctl start openstack-glance-api.service
```

Verification. Download an image:

```
source ~/.admin-openrc

wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
```

Note: if your environment is on the Kunpeng (AArch64) architecture, download the aarch64 version of the image; the cirros-0.5.2-aarch64-disk.img image has been tested.

Upload the image to the Image service:

```
openstack image create --disk-format qcow2 --container-format bare \
    --file cirros-0.4.0-x86_64-disk.img --public cirros
```

Confirm the image was uploaded and verify its attributes:

```
openstack image list
```

### Placement Installation

Create the database, service credentials, and API endpoints. Create the database: access the database as the root user, create the placement database, and grant privileges:

```
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE placement;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' \
IDENTIFIED BY 'PLACEMENT_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' \
IDENTIFIED BY 'PLACEMENT_DBPASS';
MariaDB [(none)]> exit
```

Note: replace PLACEMENT_DBPASS with the password you want for the placement database.

```
source ~/.admin-openrc
```

Run the following commands to create the placement service credentials, create the placement user, and add the `admin` role to the `placement` user. Create the Placement API service:

```
openstack user create --domain default --password-prompt placement
openstack role add --project service --user placement admin
openstack service create --name placement --description "Placement API" placement
```

Create the Placement service API endpoints:

```
openstack endpoint create --region RegionOne placement public http://controller:8778
openstack endpoint create --region RegionOne placement internal http://controller:8778
openstack endpoint create --region RegionOne placement admin http://controller:8778
```

Install and configure. Install the packages:

```
yum install openstack-placement-api
```
Configure placement. Edit the `/etc/placement/placement.conf` file: in the [placement_database] section, configure the database entry; in the [api] and [keystone_authtoken] sections, configure the Identity service entry:

```
# vim /etc/placement/placement.conf

[placement_database]
# ...
connection = mysql+pymysql://placement:PLACEMENT_DBPASS@controller/placement

[api]
# ...
auth_strategy = keystone

[keystone_authtoken]
# ...
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = placement
password = PLACEMENT_PASS
```

Replace PLACEMENT_DBPASS with the placement database password and PLACEMENT_PASS with the placement user's password.

Synchronize the database:

```
su -s /bin/sh -c "placement-manage db sync" placement
```

Start the httpd service:

```
systemctl restart httpd
```

Verification. Run a status check:

```
source ~/.admin-openrc

placement-status upgrade check
```

Install osc-placement and list the available resource classes and traits:

```
yum install python3-osc-placement

openstack --os-placement-api-version 1.2 resource class list --sort-column name
openstack --os-placement-api-version 1.6 trait list --sort-column name
```

### Nova Installation

Create the databases, service credentials, and API endpoints. Create the databases:

```
mysql -u root -p    (CTL)

MariaDB [(none)]> CREATE DATABASE nova_api;
MariaDB [(none)]> CREATE DATABASE nova;
MariaDB [(none)]> CREATE DATABASE nova_cell0;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> exit
```

Note: replace NOVA_DBPASS with the password you want for the nova databases.

```
source ~/.admin-openrc    (CTL)
```

Create the nova service credentials:

```
openstack user create --domain default --password-prompt nova    (CTL)
openstack role add --project service --user nova admin           (CTL)
openstack service create --name nova --description "OpenStack Compute" compute    (CTL)
```

Create the nova API endpoints:

```
openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1      (CTL)
openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1    (CTL)
openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1       (CTL)
```

Install the packages:

```
yum install openstack-nova-api openstack-nova-conductor \    (CTL)
    openstack-nova-novncproxy openstack-nova-scheduler

yum install openstack-nova-compute    (CPT)
```

Note: on arm64, also run:

```
yum install edk2-aarch64    (CPT)
```
Configure nova:

```
vim /etc/nova/nova.conf

[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
my_ip = 10.0.0.1
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver
compute_driver = libvirt.LibvirtDriver           (CPT)
instances_path = /var/lib/nova/instances/        (CPT)
lock_path = /var/lib/nova/tmp                    (CPT)

[api_database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api    (CTL)

[database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova        (CTL)

[api]
auth_strategy = keystone

[keystone_authtoken]
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = NOVA_PASS

[vnc]
enabled = true
server_listen = $my_ip
server_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html    (CPT)

[libvirt]
virt_type = qemu           (CPT)
cpu_mode = custom          (CPT)
cpu_model = cortex-a72     (CPT)

[glance]
api_servers = http://controller:9292

[oslo_concurrency]
lock_path = /var/lib/nova/tmp    (CTL)

[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = PLACEMENT_PASS

[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
service_metadata_proxy = true                     (CTL)
metadata_proxy_shared_secret = METADATA_SECRET    (CTL)
```

Explanation:

- [DEFAULT]: enable the compute and metadata APIs, configure the RabbitMQ message queue entry, set my_ip, and enable the neutron network service;
- [api_database] and [database]: configure the database entries;
- [api] and [keystone_authtoken]: configure the Identity service entry;
- [vnc]: enable and configure the remote console entry;
- [glance]: configure the Image service API address;
- [oslo_concurrency]: configure the lock path;
- [placement]: configure the Placement service entry.

Note:

- replace RABBIT_PASS with the openstack account's password in RabbitMQ;
- set my_ip to the control node's management IP address;
- replace NOVA_DBPASS with the nova database password;
- replace NOVA_PASS with the nova user's password;
- replace PLACEMENT_PASS with the placement user's password;
- replace NEUTRON_PASS with the neutron user's password;
- replace METADATA_SECRET with a suitable metadata proxy secret.

Additionally, check whether the host supports hardware acceleration for virtual machines (x86):

```
egrep -c '(vmx|svm)' /proc/cpuinfo    (CPT)
```

If the command returns 0, hardware acceleration is not supported and libvirt must be configured to use QEMU instead of KVM:

```
vim /etc/nova/nova.conf    (CPT)

[libvirt]
virt_type = qemu
```

If the command returns 1 or more, hardware acceleration is supported and no extra configuration is needed.

Note: on arm64, also run the following:

```
vim /etc/libvirt/qemu.conf

nvram = ["/usr/share/AAVMF/AAVMF_CODE.fd: \
          /usr/share/AAVMF/AAVMF_VARS.fd", \
         "/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw: \
          /usr/share/edk2/aarch64/vars-template-pflash.raw"]
```

Then edit `/etc/qemu/firmware/edk2-aarch64.json`:
{ \"description\": \"UEFI firmware for ARM64 virtual machines\", \"interface-types\": [ \"uefi\" ], \"mapping\": { \"device\": \"flash\", \"executable\": { \"filename\": \"/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw\", \"format\": \"raw\" }, \"nvram-template\": { \"filename\": \"/usr/share/edk2/aarch64/vars-template-pflash.raw\", \"format\": \"raw\" } }, \"targets\": [ { \"architecture\": \"aarch64\", \"machines\": [ \"virt-*\" ] } ], \"features\": [ ], \"tags\": [ ] } (CPT) \u540c\u6b65\u6570\u636e\u5e93 \u540c\u6b65nova-api\u6570\u636e\u5e93\uff1a su -s /bin/sh -c \"nova-manage api_db sync\" nova (CTL) \u6ce8\u518ccell0\u6570\u636e\u5e93\uff1a su -s /bin/sh -c \"nova-manage cell_v2 map_cell0\" nova (CTL) \u521b\u5efacell1 cell\uff1a su -s /bin/sh -c \"nova-manage cell_v2 create_cell --name=cell1 --verbose\" nova (CTL) \u540c\u6b65nova\u6570\u636e\u5e93\uff1a su -s /bin/sh -c \"nova-manage db sync\" nova (CTL) \u9a8c\u8bc1cell0\u548ccell1\u6ce8\u518c\u6b63\u786e\uff1a su -s /bin/sh -c \"nova-manage cell_v2 list_cells\" nova (CTL) \u6dfb\u52a0\u8ba1\u7b97\u8282\u70b9\u5230openstack\u96c6\u7fa4 su -s /bin/sh -c \"nova-manage cell_v2 discover_hosts --verbose\" nova (CPT) \u542f\u52a8\u670d\u52a1 systemctl enable \\ (CTL) openstack-nova-api.service \\ openstack-nova-scheduler.service \\ openstack-nova-conductor.service \\ openstack-nova-novncproxy.service systemctl start \\ (CTL) openstack-nova-api.service \\ openstack-nova-scheduler.service \\ openstack-nova-conductor.service \\ openstack-nova-novncproxy.service systemctl enable libvirtd.service openstack-nova-compute.service (CPT) systemctl start libvirtd.service openstack-nova-compute.service (CPT) \u9a8c\u8bc1 source ~/.admin-openrc (CTL) \u5217\u51fa\u670d\u52a1\u7ec4\u4ef6\uff0c\u9a8c\u8bc1\u6bcf\u4e2a\u6d41\u7a0b\u90fd\u6210\u529f\u542f\u52a8\u548c\u6ce8\u518c\uff1a openstack compute service list (CTL) \u5217\u51fa\u8eab\u4efd\u670d\u52a1\u4e2d\u7684API\u7aef\u70b9\uff0c\u9a8c\u8bc1\u4e0e\u8eab\u4efd\u670d\u52a1\u7684\u8fde\u63a5\uff1a openstack catalog list (CTL) \u5217\u51fa\u955c\u50cf\u670d\u52a1\u4e2d\u7684\u955c\u50cf\uff0c\u9a8c\u8bc1\u4e0e\u955c\u50cf\u670d\u52a1\u7684\u8fde\u63a5\uff1a openstack image list (CTL) \u68c0\u67e5cells\u662f\u5426\u8fd0\u4f5c\u6210\u529f\uff0c\u4ee5\u53ca\u5176\u4ed6\u5fc5\u8981\u6761\u4ef6\u662f\u5426\u5df2\u5177\u5907\u3002 nova-status upgrade check (CTL)","title":"Nova \u5b89\u88c5"},{"location":"install/openEuler-22.03-LTS-SP2/OpenStack-wallaby/#neutron","text":"\u521b\u5efa\u6570\u636e\u5e93\u3001\u670d\u52a1\u51ed\u8bc1\u548c API \u7aef\u70b9 \u521b\u5efa\u6570\u636e\u5e93\uff1a mysql -u root -p (CTL) MariaDB [(none)]> CREATE DATABASE neutron; MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \\ IDENTIFIED BY 'NEUTRON_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \\ IDENTIFIED BY 'NEUTRON_DBPASS'; MariaDB [(none)]> exit \u6ce8\u610f \u66ff\u6362 NEUTRON_DBPASS \u4e3a neutron \u6570\u636e\u5e93\u8bbe\u7f6e\u5bc6\u7801\u3002 source ~/.admin-openrc (CTL) \u521b\u5efaneutron\u670d\u52a1\u51ed\u8bc1 openstack user create --domain default --password-prompt neutron (CTL) openstack role add --project service --user neutron admin (CTL) openstack service create --name neutron --description \"OpenStack Networking\" network (CTL) \u521b\u5efaNeutron\u670d\u52a1API\u7aef\u70b9\uff1a openstack endpoint create --region RegionOne network public http://controller:9696 (CTL) openstack endpoint create --region RegionOne network internal 
### Neutron Installation

Create the database, service credentials, and API endpoints. Create the database:

```
mysql -u root -p    (CTL)

MariaDB [(none)]> CREATE DATABASE neutron;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
IDENTIFIED BY 'NEUTRON_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
IDENTIFIED BY 'NEUTRON_DBPASS';
MariaDB [(none)]> exit
```

Note: replace NEUTRON_DBPASS with the password you want for the neutron database.

```
source ~/.admin-openrc    (CTL)
```

Create the neutron service credentials:

```
openstack user create --domain default --password-prompt neutron    (CTL)
openstack role add --project service --user neutron admin           (CTL)
openstack service create --name neutron --description "OpenStack Networking" network    (CTL)
```

Create the Neutron service API endpoints:

```
openstack endpoint create --region RegionOne network public http://controller:9696      (CTL)
openstack endpoint create --region RegionOne network internal http://controller:9696    (CTL)
openstack endpoint create --region RegionOne network admin http://controller:9696       (CTL)
```

Install the packages:

```
yum install openstack-neutron openstack-neutron-linuxbridge ebtables ipset \    (CTL)
    openstack-neutron-ml2

yum install openstack-neutron-linuxbridge ebtables ipset    (CPT)
```

Configure neutron. Configure the main configuration file:

```
vim /etc/neutron/neutron.conf

[database]
connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron    (CTL)

[DEFAULT]
core_plugin = ml2                             (CTL)
service_plugins = router                      (CTL)
allow_overlapping_ips = true                  (CTL)
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = true     (CTL)
notify_nova_on_port_data_changes = true       (CTL)
api_workers = 3                               (CTL)

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = neutron
password = NEUTRON_PASS

[nova]
auth_url = http://controller:5000    (CTL)
auth_type = password                 (CTL)
project_domain_name = Default        (CTL)
user_domain_name = Default           (CTL)
region_name = RegionOne              (CTL)
project_name = service               (CTL)
username = nova                      (CTL)
password = NOVA_PASS                 (CTL)

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
```

Explanation:

- [database]: configure the database entry;
- [DEFAULT]: enable the ml2 and router plugins, allow overlapping IP addresses, and configure the RabbitMQ message queue entry;
- [DEFAULT] and [keystone_authtoken]: configure the Identity service entry;
- [DEFAULT] and [nova]: configure networking to notify compute of network topology changes;
- [oslo_concurrency]: configure the lock path.

Note:

- replace NEUTRON_DBPASS with the neutron database password;
- replace RABBIT_PASS with the openstack account's password in RabbitMQ;
- replace NEUTRON_PASS with the neutron user's password;
- replace NOVA_PASS with the nova user's password.

Configure the ML2 plugin:

```
vim /etc/neutron/plugins/ml2/ml2_conf.ini

[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security

[ml2_type_flat]
flat_networks = provider

[ml2_type_vxlan]
vni_ranges = 1:1000

[securitygroup]
enable_ipset = true
```

Create the symbolic link for /etc/neutron/plugin.ini:

```
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
```

Note: the [ml2] section enables flat, vlan, and vxlan networks, enables the linuxbridge and l2population mechanisms, and enables the port security extension driver; the [ml2_type_flat] section configures the flat network as the provider virtual network; the [ml2_type_vxlan] section configures the VXLAN network identifier range; the [securitygroup] section enables ipset.

The exact L2 configuration can be adjusted to your needs; this document uses a provider network with linuxbridge.
Configure the Linux bridge agent:

```
vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini

[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME

[vxlan]
enable_vxlan = true
local_ip = OVERLAY_INTERFACE_IP_ADDRESS
l2_population = true

[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
```

Explanation: the [linux_bridge] section maps the provider virtual network to the physical network interface; the [vxlan] section enables the vxlan overlay network, sets the IP address of the physical interface that handles the overlay, and enables layer-2 population; the [securitygroup] section allows security groups and configures the linux bridge iptables firewall driver.

Note: replace PROVIDER_INTERFACE_NAME with the physical network interface; replace OVERLAY_INTERFACE_IP_ADDRESS with the control node's management IP address.

Configure the Layer-3 agent:

```
vim /etc/neutron/l3_agent.ini    (CTL)

[DEFAULT]
interface_driver = linuxbridge
```

Explanation: in the [DEFAULT] section, set the interface driver to linuxbridge.

Configure the DHCP agent:

```
vim /etc/neutron/dhcp_agent.ini    (CTL)

[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
```

Explanation: the [DEFAULT] section configures the linuxbridge interface driver and the Dnsmasq DHCP driver, and enables isolated metadata.

Configure the metadata agent:

```
vim /etc/neutron/metadata_agent.ini    (CTL)

[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = METADATA_SECRET
```

Explanation: the [DEFAULT] section configures the metadata host and the shared secret.

Note: replace METADATA_SECRET with a suitable metadata proxy secret.

Configure nova:

```
vim /etc/nova/nova.conf

[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = Default
user_domain_name = Default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
service_metadata_proxy = true                     (CTL)
metadata_proxy_shared_secret = METADATA_SECRET    (CTL)
```

Explanation: the [neutron] section configures the access parameters, enables the metadata proxy, and sets the secret.

Note: replace NEUTRON_PASS with the neutron user's password; replace METADATA_SECRET with a suitable metadata proxy secret.

Synchronize the database:

```
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
    --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
```

Restart the compute API service:

```
systemctl restart openstack-nova-api.service
```

Start the network services:

```
systemctl enable neutron-server.service neutron-linuxbridge-agent.service \    (CTL)
    neutron-dhcp-agent.service neutron-metadata-agent.service
systemctl enable neutron-l3-agent.service
systemctl restart openstack-nova-api.service neutron-server.service \    (CTL)
    neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
    neutron-metadata-agent.service neutron-l3-agent.service

systemctl enable neutron-linuxbridge-agent.service    (CPT)
systemctl restart neutron-linuxbridge-agent.service openstack-nova-compute.service    (CPT)
```

Verification. Verify that the neutron agents started successfully:

```
openstack network agent list
```
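If you want an external network to attach instances or routers to, a typical next step is to create the flat provider network that was mapped in the linuxbridge configuration. This is an optional sketch; the subnet range, allocation pool, gateway, and DNS server below are placeholders to replace with the values of your physical network:

```shell
source ~/.admin-openrc
openstack network create --share --external \
    --provider-physical-network provider \
    --provider-network-type flat provider
openstack subnet create --network provider \
    --allocation-pool start=203.0.113.101,end=203.0.113.200 \
    --dns-nameserver 8.8.8.8 --gateway 203.0.113.1 \
    --subnet-range 203.0.113.0/24 provider-subnet
```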
### Cinder Installation

Create the database, service credentials, and API endpoints. Create the database:

```
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE cinder;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \
IDENTIFIED BY 'CINDER_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \
IDENTIFIED BY 'CINDER_DBPASS';
MariaDB [(none)]> exit
```

Note: replace CINDER_DBPASS with the password you want for the cinder database.

```
source ~/.admin-openrc
```

Create the cinder service credentials:

```
openstack user create --domain default --password-prompt cinder
openstack role add --project service --user cinder admin
openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
```

Create the Block Storage service API endpoints:

```
openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s
```

Install the packages:

```
yum install openstack-cinder-api openstack-cinder-scheduler    (CTL)

yum install lvm2 device-mapper-persistent-data scsi-target-utils rpcbind nfs-utils \    (STG)
    openstack-cinder-volume openstack-cinder-backup
```

Prepare the storage device (the following is only an example):

```
pvcreate /dev/vdb
vgcreate cinder-volumes /dev/vdb
```

Then edit `/etc/lvm/lvm.conf`:
```
devices {
...
filter = [ "a/vdb/", "r/.*/"]
```

Explanation: in the devices section, add a filter that accepts the /dev/vdb device and rejects all other devices.

Prepare NFS:

```
mkdir -p /root/cinder/backup

cat << EOF >> /etc/exports
/root/cinder/backup 192.168.1.0/24(rw,sync,no_root_squash,no_all_squash)
EOF
```

Configure cinder:

```
vim /etc/cinder/cinder.conf

[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone
my_ip = 10.0.0.11
enabled_backends = lvm                                       (STG)
backup_driver = cinder.backup.drivers.nfs.NFSBackupDriver    (STG)
backup_share = HOST:PATH                                     (STG)

[database]
connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = cinder
password = CINDER_PASS

[oslo_concurrency]
lock_path = /var/lib/cinder/tmp

[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver    (STG)
volume_group = cinder-volumes                                (STG)
iscsi_protocol = iscsi                                       (STG)
iscsi_helper = tgtadm                                        (STG)
```

Explanation: the [database] section configures the database entry; the [DEFAULT] section configures the RabbitMQ message queue entry and my_ip; the [DEFAULT] and [keystone_authtoken] sections configure the Identity service entry; the [oslo_concurrency] section configures the lock path.

Note:

- replace CINDER_DBPASS with the cinder database password;
- replace RABBIT_PASS with the openstack account's password in RabbitMQ;
- set my_ip to the control node's management IP address;
- replace CINDER_PASS with the cinder user's password;
- replace HOST:PATH with the NFS host IP and shared path.

Synchronize the database:

```
su -s /bin/sh -c "cinder-manage db sync" cinder    (CTL)
```

Configure nova:

```
vim /etc/nova/nova.conf    (CTL)

[cinder]
os_region_name = RegionOne
```

Restart the compute API service:

```
systemctl restart openstack-nova-api.service
```

Start the cinder services:

```
systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service    (CTL)
systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service     (CTL)

systemctl enable rpcbind.service nfs-server.service tgtd.service iscsid.service \    (STG)
    openstack-cinder-volume.service \
    openstack-cinder-backup.service
systemctl start rpcbind.service nfs-server.service tgtd.service iscsid.service \     (STG)
    openstack-cinder-volume.service \
    openstack-cinder-backup.service
```

Note: when cinder attaches volumes through tgtadm, modify `/etc/tgt/tgtd.conf` as follows so that tgtd can discover the cinder-volume iscsi targets:

```
include /var/lib/cinder/volumes/*
```

Verification:

```
source ~/.admin-openrc

openstack volume service list
```
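Beyond checking the service list, an end-to-end check is to create and remove a small test volume (the volume name below is arbitrary):

```shell
source ~/.admin-openrc
openstack volume create --size 1 test-volume
openstack volume list
openstack volume delete test-volume
```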
### Horizon Installation

Install the packages:

```
yum install openstack-dashboard
```

Edit the file and set the following variables:

```
vim /etc/openstack-dashboard/local_settings

OPENSTACK_HOST = "controller"
ALLOWED_HOSTS = ['*', ]

SESSION_ENGINE = 'django.contrib.sessions.backends.cache'

CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'controller:11211',
    }
}

OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "member"

WEBROOT = '/dashboard'
POLICY_FILES_PATH = "/etc/openstack-dashboard"

OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 3,
}
```

Restart the httpd service:

```
systemctl restart httpd.service memcached.service
```

Verification: open a browser, go to http://HOSTIP/dashboard/, and log in to horizon.

Note: replace HOSTIP with the control node's management-plane IP address.

### Tempest Installation

Tempest is OpenStack's integration test service. It is recommended if you need comprehensive automated functional testing of the installed OpenStack environment; otherwise it can be skipped.

Install Tempest:

```
yum install openstack-tempest
```

Initialize a workspace:

```
tempest init mytest
```

Edit the configuration file:

```
cd mytest
vi etc/tempest.conf
```

tempest.conf must describe the current OpenStack environment; refer to the official sample for the details.

Run the tests:

```
tempest run
```

Install the tempest extensions (optional). The OpenStack services also provide their own tempest test packages, which can be installed to enrich the tempest test content. In Wallaby, extension tests are provided for Cinder, Glance, Keystone, Ironic, and Trove, and can be installed as follows:

```
yum install python3-cinder-tempest-plugin python3-glance-tempest-plugin python3-ironic-tempest-plugin python3-keystone-tempest-plugin python3-trove-tempest-plugin
```
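`tempest run` without arguments executes the whole suite, which can take a long time. For a quicker signal the run can be scoped with a regular expression; a minimal example, assuming etc/tempest.conf has already been filled in:

```shell
cd mytest
# Run only the Identity API tests
tempest run --regex '^tempest\.api\.identity'
```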
--description \"Ironic baremetal provisioning service\" baremetal openstack service create --name ironic-inspector --description \"Ironic inspector baremetal provisioning service\" baremetal-introspection openstack user create --password IRONIC_INSPECTOR_PASSWORD --email ironic_inspector@example.com ironic_inspector openstack role add --project service --user ironic-inspector admin 2\u3001\u521b\u5efaBare Metal\u670d\u52a1\u8bbf\u95ee\u5165\u53e3 openstack endpoint create --region RegionOne baremetal admin http://$IRONIC_NODE:6385 openstack endpoint create --region RegionOne baremetal public http://$IRONIC_NODE:6385 openstack endpoint create --region RegionOne baremetal internal http://$IRONIC_NODE:6385 openstack endpoint create --region RegionOne baremetal-introspection internal http://172.20.19.13:5050/v1 openstack endpoint create --region RegionOne baremetal-introspection public http://172.20.19.13:5050/v1 openstack endpoint create --region RegionOne baremetal-introspection admin http://172.20.19.13:5050/v1 \u914d\u7f6eironic-api\u670d\u52a1 \u914d\u7f6e\u6587\u4ef6\u8def\u5f84/etc/ironic/ironic.conf 1\u3001\u901a\u8fc7 connection \u9009\u9879\u914d\u7f6e\u6570\u636e\u5e93\u7684\u4f4d\u7f6e\uff0c\u5982\u4e0b\u6240\u793a\uff0c\u66ff\u6362 IRONIC_DBPASSWORD \u4e3a ironic \u7528\u6237\u7684\u5bc6\u7801\uff0c\u66ff\u6362 DB_IP \u4e3aDB\u670d\u52a1\u5668\u6240\u5728\u7684IP\u5730\u5740\uff1a [database] # The SQLAlchemy connection string used to connect to the # database (string value) connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic 2\u3001\u901a\u8fc7\u4ee5\u4e0b\u9009\u9879\u914d\u7f6eironic-api\u670d\u52a1\u4f7f\u7528RabbitMQ\u6d88\u606f\u4ee3\u7406\uff0c\u66ff\u6362 RPC_* \u4e3aRabbitMQ\u7684\u8be6\u7ec6\u5730\u5740\u548c\u51ed\u8bc1 [DEFAULT] # A URL representing the messaging driver to use and its full # configuration. (string value) transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/ \u7528\u6237\u4e5f\u53ef\u81ea\u884c\u4f7f\u7528json-rpc\u65b9\u5f0f\u66ff\u6362rabbitmq 3\u3001\u914d\u7f6eironic-api\u670d\u52a1\u4f7f\u7528\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u7684\u51ed\u8bc1\uff0c\u66ff\u6362 PUBLIC_IDENTITY_IP \u4e3a\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u5668\u7684\u516c\u5171IP\uff0c\u66ff\u6362 PRIVATE_IDENTITY_IP \u4e3a\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u5668\u7684\u79c1\u6709IP\uff0c\u66ff\u6362 IRONIC_PASSWORD \u4e3a\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u4e2d ironic \u7528\u6237\u7684\u5bc6\u7801\uff1a [DEFAULT] # Authentication strategy used by ironic-api: one of # \"keystone\" or \"noauth\". \"noauth\" should not be used in a # production environment because all authentication will be # disabled. 
```
[DEFAULT]

# Authentication strategy used by ironic-api: one of
# "keystone" or "noauth". "noauth" should not be used in a
# production environment because all authentication will be
# disabled. (string value)
auth_strategy=keystone

host = controller
memcache_servers = controller:11211
enabled_network_interfaces = flat,noop,neutron
default_network_interface = noop
transport_url = rabbit://openstack:RABBITPASSWD@controller:5672/
enabled_hardware_types = ipmi
enabled_boot_interfaces = pxe
enabled_deploy_interfaces = direct
default_deploy_interface = direct
enabled_inspect_interfaces = inspector
enabled_management_interfaces = ipmitool
enabled_power_interfaces = ipmitool
enabled_rescue_interfaces = no-rescue,agent
isolinux_bin = /usr/share/syslinux/isolinux.bin
logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s

[keystone_authtoken]
# Authentication type to load (string value)
auth_type=password
# Complete public Identity API endpoint (string value)
www_authenticate_uri=http://PUBLIC_IDENTITY_IP:5000
# Complete admin Identity API endpoint. (string value)
auth_url=http://PRIVATE_IDENTITY_IP:5000
# Service username. (string value)
username=ironic
# Service account password. (string value)
password=IRONIC_PASSWORD
# Service tenant name. (string value)
project_name=service
# Domain name containing project (string value)
project_domain_name=Default
# User's domain name (string value)
user_domain_name=Default

[agent]
deploy_logs_collect = always
deploy_logs_local_path = /var/log/ironic/deploy
deploy_logs_storage_backend = local
image_download_source = http
stream_raw_images = false
force_raw_images = false
verify_ca = False

[oslo_concurrency]

[oslo_messaging_notifications]
transport_url = rabbit://openstack:123456@172.20.19.25:5672/
topics = notifications
driver = messagingv2

[oslo_messaging_rabbit]
amqp_durable_queues = True
rabbit_ha_queues = True

[pxe]
ipxe_enabled = false
pxe_append_params = nofb nomodeset vga=normal coreos.autologin ipa-insecure=1
image_cache_size = 204800
tftp_root=/var/lib/tftpboot/cephfs/
tftp_master_path=/var/lib/tftpboot/cephfs/master_images

[dhcp]
dhcp_provider = none
```

4. Create the Bare Metal service database tables:

```
ironic-dbsync --config-file /etc/ironic/ironic.conf create_schema
```

5. Restart the ironic-api service:

```
sudo systemctl restart openstack-ironic-api
```

Configure the ironic-conductor service.

1. Replace HOST_IP with the conductor host's IP:

```
[DEFAULT]

# IP address of this host. If unset, will determine the IP
# programmatically. If unable to do so, will use "127.0.0.1".
# (string value)
my_ip=HOST_IP
```

2. Configure the database location. ironic-conductor should use the same configuration as ironic-api. Replace IRONIC_DBPASSWORD with the `ironic` user's password and DB_IP with the IP address of the DB server:

```
[database]

# The SQLAlchemy connection string to use to connect to the
# database. (string value)
connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic
```

3. Configure the RabbitMQ message broker through the following option. ironic-conductor should use the same configuration as ironic-api; replace RPC_* with the RabbitMQ address details and credentials:
(string value) transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/ \u7528\u6237\u4e5f\u53ef\u81ea\u884c\u4f7f\u7528json-rpc\u65b9\u5f0f\u66ff\u6362rabbitmq 4\u3001\u914d\u7f6e\u51ed\u8bc1\u8bbf\u95ee\u5176\u4ed6OpenStack\u670d\u52a1 \u4e3a\u4e86\u4e0e\u5176\u4ed6OpenStack\u670d\u52a1\u8fdb\u884c\u901a\u4fe1\uff0c\u88f8\u91d1\u5c5e\u670d\u52a1\u5728\u8bf7\u6c42\u5176\u4ed6\u670d\u52a1\u65f6\u9700\u8981\u4f7f\u7528\u670d\u52a1\u7528\u6237\u4e0eOpenStack Identity\u670d\u52a1\u8fdb\u884c\u8ba4\u8bc1\u3002\u8fd9\u4e9b\u7528\u6237\u7684\u51ed\u636e\u5fc5\u987b\u5728\u4e0e\u76f8\u5e94\u670d\u52a1\u76f8\u5173\u7684\u6bcf\u4e2a\u914d\u7f6e\u6587\u4ef6\u4e2d\u8fdb\u884c\u914d\u7f6e\u3002 [neutron] - \u8bbf\u95eeOpenStack\u7f51\u7edc\u670d\u52a1 [glance] - \u8bbf\u95eeOpenStack\u955c\u50cf\u670d\u52a1 [swift] - \u8bbf\u95eeOpenStack\u5bf9\u8c61\u5b58\u50a8\u670d\u52a1 [cinder] - \u8bbf\u95eeOpenStack\u5757\u5b58\u50a8\u670d\u52a1 [inspector] - \u8bbf\u95eeOpenStack\u88f8\u91d1\u5c5eintrospection\u670d\u52a1 [service_catalog] - \u4e00\u4e2a\u7279\u6b8a\u9879\u7528\u4e8e\u4fdd\u5b58\u88f8\u91d1\u5c5e\u670d\u52a1\u4f7f\u7528\u7684\u51ed\u8bc1\uff0c\u8be5\u51ed\u8bc1\u7528\u4e8e\u53d1\u73b0\u6ce8\u518c\u5728OpenStack\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u76ee\u5f55\u4e2d\u7684\u81ea\u5df1\u7684API URL\u7aef\u70b9 \u7b80\u5355\u8d77\u89c1\uff0c\u53ef\u4ee5\u5bf9\u6240\u6709\u670d\u52a1\u4f7f\u7528\u540c\u4e00\u4e2a\u670d\u52a1\u7528\u6237\u3002\u4e3a\u4e86\u5411\u540e\u517c\u5bb9\uff0c\u8be5\u7528\u6237\u5e94\u8be5\u548cironic-api\u670d\u52a1\u7684[keystone_authtoken]\u6240\u914d\u7f6e\u7684\u4e3a\u540c\u4e00\u4e2a\u7528\u6237\u3002\u4f46\u8fd9\u4e0d\u662f\u5fc5\u987b\u7684\uff0c\u4e5f\u53ef\u4ee5\u4e3a\u6bcf\u4e2a\u670d\u52a1\u521b\u5efa\u5e76\u914d\u7f6e\u4e0d\u540c\u7684\u670d\u52a1\u7528\u6237\u3002 \u5728\u4e0b\u9762\u7684\u793a\u4f8b\u4e2d\uff0c\u7528\u6237\u8bbf\u95eeOpenStack\u7f51\u7edc\u670d\u52a1\u7684\u8eab\u4efd\u9a8c\u8bc1\u4fe1\u606f\u914d\u7f6e\u4e3a\uff1a \u7f51\u7edc\u670d\u52a1\u90e8\u7f72\u5728\u540d\u4e3aRegionOne\u7684\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u57df\u4e2d\uff0c\u4ec5\u5728\u670d\u52a1\u76ee\u5f55\u4e2d\u6ce8\u518c\u516c\u5171\u7aef\u70b9\u63a5\u53e3 \u8bf7\u6c42\u65f6\u4f7f\u7528\u7279\u5b9a\u7684CA SSL\u8bc1\u4e66\u8fdb\u884cHTTPS\u8fde\u63a5 \u4e0eironic-api\u670d\u52a1\u914d\u7f6e\u76f8\u540c\u7684\u670d\u52a1\u7528\u6237 \u52a8\u6001\u5bc6\u7801\u8ba4\u8bc1\u63d2\u4ef6\u57fa\u4e8e\u5176\u4ed6\u9009\u9879\u53d1\u73b0\u5408\u9002\u7684\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1API\u7248\u672c [neutron] # Authentication type to load (string value) auth_type = password # Authentication URL (string value) auth_url=https://IDENTITY_IP:5000/ # Username (string value) username=ironic # User's password (string value) password=IRONIC_PASSWORD # Project name to scope to (string value) project_name=service # Domain ID containing project (string value) project_domain_id=default # User's domain id (string value) user_domain_id=default # PEM encoded Certificate Authority to use when verifying # HTTPs connections. (string value) cafile=/opt/stack/data/ca-bundle.pem # The default region_name for endpoint URL discovery. (string # value) region_name = RegionOne # List of interfaces, in order of preference, for endpoint # URL. 
(list value) valid_interfaces=public \u9ed8\u8ba4\u60c5\u51b5\u4e0b\uff0c\u4e3a\u4e86\u4e0e\u5176\u4ed6\u670d\u52a1\u8fdb\u884c\u901a\u4fe1\uff0c\u88f8\u91d1\u5c5e\u670d\u52a1\u4f1a\u5c1d\u8bd5\u901a\u8fc7\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u7684\u670d\u52a1\u76ee\u5f55\u53d1\u73b0\u8be5\u670d\u52a1\u5408\u9002\u7684\u7aef\u70b9\u3002\u5982\u679c\u5e0c\u671b\u5bf9\u4e00\u4e2a\u7279\u5b9a\u670d\u52a1\u4f7f\u7528\u4e00\u4e2a\u4e0d\u540c\u7684\u7aef\u70b9\uff0c\u5219\u5728\u88f8\u91d1\u5c5e\u670d\u52a1\u7684\u914d\u7f6e\u6587\u4ef6\u4e2d\u901a\u8fc7endpoint_override\u9009\u9879\u8fdb\u884c\u6307\u5b9a\uff1a [neutron] ... endpoint_override = 5\u3001\u914d\u7f6e\u5141\u8bb8\u7684\u9a71\u52a8\u7a0b\u5e8f\u548c\u786c\u4ef6\u7c7b\u578b \u901a\u8fc7\u8bbe\u7f6eenabled_hardware_types\u8bbe\u7f6eironic-conductor\u670d\u52a1\u5141\u8bb8\u4f7f\u7528\u7684\u786c\u4ef6\u7c7b\u578b\uff1a [DEFAULT] enabled_hardware_types = ipmi \u914d\u7f6e\u786c\u4ef6\u63a5\u53e3\uff1a enabled_boot_interfaces = pxe enabled_deploy_interfaces = direct,iscsi enabled_inspect_interfaces = inspector enabled_management_interfaces = ipmitool enabled_power_interfaces = ipmitool \u914d\u7f6e\u63a5\u53e3\u9ed8\u8ba4\u503c\uff1a [DEFAULT] default_deploy_interface = direct default_network_interface = neutron \u5982\u679c\u542f\u7528\u4e86\u4efb\u4f55\u4f7f\u7528Direct deploy\u7684\u9a71\u52a8\uff0c\u5fc5\u987b\u5b89\u88c5\u548c\u914d\u7f6e\u955c\u50cf\u670d\u52a1\u7684Swift\u540e\u7aef\u3002Ceph\u5bf9\u8c61\u7f51\u5173(RADOS\u7f51\u5173)\u4e5f\u652f\u6301\u4f5c\u4e3a\u955c\u50cf\u670d\u52a1\u7684\u540e\u7aef\u3002 6\u3001\u91cd\u542fironic-conductor\u670d\u52a1 sudo systemctl restart openstack-ironic-conductor \u914d\u7f6eironic-inspector\u670d\u52a1 \u914d\u7f6e\u6587\u4ef6\u8def\u5f84/etc/ironic-inspector/inspector.conf 1\u3001\u521b\u5efa\u6570\u636e\u5e93 # mysql -u root -p MariaDB [(none)]> CREATE DATABASE ironic_inspector CHARACTER SET utf8; MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic_inspector.* TO 'ironic_inspector'@'localhost' \\ IDENTIFIED BY 'IRONIC_INSPECTOR_DBPASSWORD'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic_inspector.* TO 'ironic_inspector'@'%' \\ IDENTIFIED BY 'IRONIC_INSPECTOR_DBPASSWORD'; 2\u3001\u901a\u8fc7 connection \u9009\u9879\u914d\u7f6e\u6570\u636e\u5e93\u7684\u4f4d\u7f6e\uff0c\u5982\u4e0b\u6240\u793a\uff0c\u66ff\u6362 IRONIC_INSPECTOR_DBPASSWORD \u4e3a ironic_inspector \u7528\u6237\u7684\u5bc6\u7801\uff0c\u66ff\u6362 DB_IP \u4e3aDB\u670d\u52a1\u5668\u6240\u5728\u7684IP\u5730\u5740\uff1a [database] backend = sqlalchemy connection = mysql+pymysql://ironic_inspector:IRONIC_INSPECTOR_DBPASSWORD@DB_IP/ironic_inspector min_pool_size = 100 max_pool_size = 500 pool_timeout = 30 max_retries = 5 max_overflow = 200 db_retry_interval = 2 db_inc_retry_interval = True db_max_retry_interval = 2 db_max_retries = 5 3\u3001\u914d\u7f6e\u6d88\u606f\u5ea6\u5217\u901a\u4fe1\u5730\u5740 [DEFAULT] transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/ 4\u3001\u8bbe\u7f6ekeystone\u8ba4\u8bc1 [DEFAULT] auth_strategy = keystone timeout = 900 rootwrap_config = /etc/ironic-inspector/rootwrap.conf logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_dir = /var/log/ironic-inspector state_path = /var/lib/ironic-inspector use_stderr = False [ironic] api_endpoint = http://IRONIC_API_HOST_ADDRRESS:6385 auth_type = password auth_url = http://PUBLIC_IDENTITY_IP:5000 auth_strategy = keystone 
ironic_url = http://IRONIC_API_HOST_ADDRRESS:6385 os_region = RegionOne project_name = service project_domain_name = Default user_domain_name = Default username = IRONIC_SERVICE_USER_NAME password = IRONIC_SERVICE_USER_PASSWORD [keystone_authtoken] auth_type = password auth_url = http://control:5000 www_authenticate_uri = http://control:5000 project_domain_name = default user_domain_name = default project_name = service username = ironic_inspector password = IRONICPASSWD region_name = RegionOne memcache_servers = control:11211 token_cache_time = 300 [processing] add_ports = active processing_hooks = $default_processing_hooks,local_link_connection,lldp_basic ramdisk_logs_dir = /var/log/ironic-inspector/ramdisk always_store_ramdisk_logs = true store_data =none power_off = false [pxe_filter] driver = iptables [capabilities] boot_mode=True 5\u3001\u914d\u7f6eironic inspector dnsmasq\u670d\u52a1 # \u914d\u7f6e\u6587\u4ef6\u5730\u5740\uff1a/etc/ironic-inspector/dnsmasq.conf port=0 interface=enp3s0 #\u66ff\u6362\u4e3a\u5b9e\u9645\u76d1\u542c\u7f51\u7edc\u63a5\u53e3 dhcp-range=172.20.19.100,172.20.19.110 #\u66ff\u6362\u4e3a\u5b9e\u9645dhcp\u5730\u5740\u8303\u56f4 bind-interfaces enable-tftp dhcp-match=set:efi,option:client-arch,7 dhcp-match=set:efi,option:client-arch,9 dhcp-match=aarch64, option:client-arch,11 dhcp-boot=tag:aarch64,grubaa64.efi dhcp-boot=tag:!aarch64,tag:efi,grubx64.efi dhcp-boot=tag:!aarch64,tag:!efi,pxelinux.0 tftp-root=/tftpboot #\u66ff\u6362\u4e3a\u5b9e\u9645tftpboot\u76ee\u5f55 log-facility=/var/log/dnsmasq.log 6\u3001\u5173\u95edironic provision\u7f51\u7edc\u5b50\u7f51\u7684dhcp openstack subnet set --no-dhcp 72426e89-f552-4dc4-9ac7-c4e131ce7f3c 7\u3001\u521d\u59cb\u5316ironic-inspector\u670d\u52a1\u7684\u6570\u636e\u5e93 \u5728\u63a7\u5236\u8282\u70b9\u6267\u884c\uff1a ironic-inspector-dbsync --config-file /etc/ironic-inspector/inspector.conf upgrade 8\u3001\u542f\u52a8\u670d\u52a1 systemctl enable --now openstack-ironic-inspector.service systemctl enable --now openstack-ironic-inspector-dnsmasq.service \u914d\u7f6ehttpd\u670d\u52a1 \u521b\u5efaironic\u8981\u4f7f\u7528\u7684httpd\u7684root\u76ee\u5f55\u5e76\u8bbe\u7f6e\u5c5e\u4e3b\u5c5e\u7ec4\uff0c\u76ee\u5f55\u8def\u5f84\u8981\u548c/etc/ironic/ironic.conf\u4e2d[deploy]\u7ec4\u4e2dhttp_root \u914d\u7f6e\u9879\u6307\u5b9a\u7684\u8def\u5f84\u8981\u4e00\u81f4\u3002 mkdir -p /var/lib/ironic/httproot ``chown ironic.ironic /var/lib/ironic/httproot \u5b89\u88c5\u548c\u914d\u7f6ehttpd\u670d\u52a1 \u5b89\u88c5httpd\u670d\u52a1\uff0c\u5df2\u6709\u8bf7\u5ffd\u7565 yum install httpd -y \u521b\u5efa/etc/httpd/conf.d/openstack-ironic-httpd.conf\u6587\u4ef6\uff0c\u5185\u5bb9\u5982\u4e0b\uff1a Listen 8080 ServerName ironic.openeuler.com ErrorLog \"/var/log/httpd/openstack-ironic-httpd-error_log\" CustomLog \"/var/log/httpd/openstack-ironic-httpd-access_log\" \"%h %l %u %t \\\"%r\\\" %>s %b\" DocumentRoot \"/var/lib/ironic/httproot\" Options Indexes FollowSymLinks Require all granted LogLevel warn AddDefaultCharset UTF-8 EnableSendfile on \u6ce8\u610f\u76d1\u542c\u7684\u7aef\u53e3\u8981\u548c/etc/ironic/ironic.conf\u91cc[deploy]\u9009\u9879\u4e2dhttp_url\u914d\u7f6e\u9879\u4e2d\u6307\u5b9a\u7684\u7aef\u53e3\u4e00\u81f4\u3002 \u91cd\u542fhttpd\u670d\u52a1\u3002 systemctl restart httpd deploy ramdisk\u955c\u50cf\u5236\u4f5c 
The Wallaby ramdisk image can be built with the ironic-python-agent service or the disk-image-builder tool, or with the latest upstream ironic-python-agent-builder. Users are also free to choose other tooling. To use the native Wallaby tools, install the corresponding package:

yum install openstack-ironic-python-agent

or

yum install diskimage-builder

See the official documentation for detailed usage. The following walks through the complete process of building the deploy image used by ironic with ironic-python-agent-builder.

Install ironic-python-agent-builder

1. Install the tool:

```shell
pip install ironic-python-agent-builder
```

2. Adjust the python interpreter in the following files:

```shell
/usr/bin/yum /usr/libexec/urlgrabber-ext-down
```

3. Install the other required tools:

```shell
yum install git
```

Because `DIB` depends on the `semanage` command, confirm the command is available before building the image (`semanage --help`). If it is missing, simply install it:

```shell
# First find out which package needs to be installed
[root@localhost ~]# yum provides /usr/sbin/semanage
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirror.vcu.edu
 * extras: mirror.vcu.edu
 * updates: mirror.math.princeton.edu
policycoreutils-python-2.5-34.el7.aarch64 : SELinux policy core python utilities
Repo        : base
Matched from:
Filename    : /usr/sbin/semanage
# Install it
[root@localhost ~]# yum install policycoreutils-python
```

Build the image

On the `arm` architecture, additionally set:

```shell
export ARCH=aarch64
```

Basic usage:

```shell
usage: ironic-python-agent-builder [-h] [-r RELEASE] [-o OUTPUT] [-e ELEMENT]
                                   [-b BRANCH] [-v] [--extra-args EXTRA_ARGS]
                                   distribution

positional arguments:
  distribution          Distribution to use

optional arguments:
  -h, --help            show this help message and exit
  -r RELEASE, --release RELEASE
                        Distribution release to use
  -o OUTPUT, --output OUTPUT
                        Output base file name
  -e ELEMENT, --element ELEMENT
                        Additional DIB element to use
  -b BRANCH, --branch BRANCH
                        If set, override the branch that is used for ironic-python-agent and requirements
  -v, --verbose         Enable verbose logging in diskimage-builder
  --extra-args EXTRA_ARGS
                        Extra arguments to pass to diskimage-builder
```

Example:

```shell
ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky
```

Allow SSH login

Initialize the environment variables, then build the image:

```shell
export DIB_DEV_USER_USERNAME=ipa
export DIB_DEV_USER_PWDLESS_SUDO=yes
export DIB_DEV_USER_PASSWORD='123'
ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky -e selinux-permissive -e devuser
```
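Once a deploy image has been produced, it is normally registered in Glance so that bare metal nodes can reference it. The following is a minimal sketch only: it assumes the build above produced `/mnt/ironic-agent-ssh.kernel` and `/mnt/ironic-agent-ssh.initramfs` (the actual file names depend on the `-o` output name) and that admin credentials are already loaded, for example via the `~/.admin-openrc` file created during the Keystone installation.

```shell
source ~/.admin-openrc

# Register the deploy kernel and ramdisk produced by ironic-python-agent-builder
openstack image create deploy-kernel --public \
  --disk-format aki --container-format aki \
  --file /mnt/ironic-agent-ssh.kernel

openstack image create deploy-initrd --public \
  --disk-format ari --container-format ari \
  --file /mnt/ironic-agent-ssh.initramfs
```

The resulting image UUIDs are typically referenced as `deploy_kernel` and `deploy_ramdisk` in the bare metal node's `driver_info` when nodes are enrolled.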
Specify the code repository

Initialize the corresponding environment variables, then build the image:

```shell
# Specify the repository location and version
DIB_REPOLOCATION_ironic_python_agent=git@172.20.2.149:liuzz/ironic-python-agent.git
DIB_REPOREF_ironic_python_agent=origin/develop

# Clone the code directly from gerrit
DIB_REPOLOCATION_ironic_python_agent=https://review.opendev.org/openstack/ironic-python-agent
DIB_REPOREF_ironic_python_agent=refs/changes/43/701043/1
```

Reference: [source-repositories](https://docs.openstack.org/diskimage-builder/latest/elements/source-repositories/README.html).

Specifying the repository location and version has been verified to work.

Notes

The PXE configuration file templates in native OpenStack do not support the arm64 architecture, so the native OpenStack code has to be modified by the user:

In the Wallaby release, community ironic still does not support UEFI PXE boot on arm64. The symptom is that the generated grub.cfg file (usually located under /tftpboot/) has the wrong format, which causes PXE boot to fail. An example of the incorrectly generated configuration file:

![ironic-err](../../img/install/ironic-err.png)

As shown above, on the arm architecture the commands that load the vmlinux and ramdisk images are linux and initrd respectively, while the highlighted commands in the figure belong to UEFI PXE boot on x86. Users need to modify the code that generates grub.cfg themselves.

TLS errors when ironic sends requests to IPA to query command execution status:

In the Wallaby release both IPA and ironic send requests to each other with TLS verification enabled by default; disable it as described in the official documentation.

1. Modify the ironic configuration file (/etc/ironic/ironic.conf), adding ipa-insecure=1 to the following configuration:

```
[agent]
verify_ca = False

[pxe]
pxe_append_params = nofb nomodeset vga=normal coreos.autologin ipa-insecure=1
```

2. In the ramdisk image, add the IPA configuration file /etc/ironic_python_agent/ironic_python_agent.conf (the /etc/ironic_python_agent directory has to be created first) and configure TLS as follows:

```
[DEFAULT]
enable_auto_tls = False
```

Set the permissions:

```
chown -R ipa.ipa /etc/ironic_python_agent/
```

3. Modify the service unit file of the IPA service to add the configuration file option:

vim usr/lib/systemd/system/ironic-python-agent.service

```
[Unit]
Description=Ironic Python Agent
After=network-online.target

[Service]
ExecStartPre=/sbin/modprobe vfat
ExecStart=/usr/local/bin/ironic-python-agent --config-file /etc/ironic_python_agent/ironic_python_agent.conf
Restart=always
RestartSec=30s

[Install]
WantedBy=multi-user.target
```

Kolla Installation

Kolla provides production-ready containerized deployment for OpenStack services. openEuler 22.03 LTS introduces the Kolla and Kolla-ansible services.

Installing Kolla is very simple; just install the corresponding RPM packages:

yum install openstack-kolla openstack-kolla-ansible

After installation, the kolla-ansible, kolla-build, kolla-genpwd, kolla-mergepwd and related commands are available.

Trove Installation

Trove is the Database service of OpenStack. It is recommended if users want the database service provided by OpenStack; otherwise it does not need to be installed.

Set up the database

The Database service stores information in a database. Create a trove database that the trove user can access, replacing TROVE_DBPASSWORD with a suitable password:

mysql -u root -p
MariaDB [(none)]> CREATE DATABASE trove CHARACTER SET utf8;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'localhost' \
IDENTIFIED BY 'TROVE_DBPASSWORD';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'%' \
IDENTIFIED BY 'TROVE_DBPASSWORD';

Create the service user credentials

1. Create the Trove service user:

openstack user create --password TROVE_PASSWORD \
  --email trove@example.com trove
openstack role add --project service --user trove admin
openstack service create --name trove --description "Database service" database

Explanation: replace TROVE_PASSWORD with the password of the trove user.

2. Create the Database service endpoints:

openstack endpoint create --region RegionOne database public http://controller:8779/v1.0/%\(tenant_id\)s
openstack endpoint create --region RegionOne database internal http://controller:8779/v1.0/%\(tenant_id\)s
openstack endpoint create --region RegionOne database admin http://controller:8779/v1.0/%\(tenant_id\)s

Install and configure the Trove components

1. Install the Trove packages:

yum install openstack-trove python-troveclient

2.
\u914d\u7f6e`trove.conf` ```shell script vim /etc/trove/trove.conf [DEFAULT] bind_host=TROVE_NODE_IP log_dir = /var/log/trove network_driver = trove.network.neutron.NeutronDriver management_security_groups = nova_keypair = trove-mgmt default_datastore = mysql taskmanager_manager = trove.taskmanager.manager.Manager trove_api_workers = 5 transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/ reboot_time_out = 300 usage_timeout = 900 agent_call_high_timeout = 1200 use_syslog = False debug = True # Set these if using Neutron Networking network_driver=trove.network.neutron.NeutronDriver network_label_regex=.* transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/ [database] connection = mysql+pymysql://trove:TROVE_DBPASS@controller/trove [keystone_authtoken] project_domain_name = Default project_name = service user_domain_name = Default password = trove username = trove auth_url = http://controller:5000/v3/ auth_type = password [service_credentials] auth_url = http://controller:5000/v3/ region_name = RegionOne project_name = service password = trove project_domain_name = Default user_domain_name = Default username = trove [mariadb] tcp_ports = 3306,4444,4567,4568 [mysql] tcp_ports = 3306 [postgresql] tcp_ports = 5432 **\u89e3\u91ca\uff1a** - [Default] \u5206\u7ec4\u4e2d bind_host \u914d\u7f6e\u4e3aTrove\u90e8\u7f72\u8282\u70b9\u7684IP - nova_compute_url \u548c cinder_url \u4e3aNova\u548cCinder\u5728Keystone\u4e2d\u521b\u5efa\u7684endpoint - nova_proxy_XXX \u4e3a\u4e00\u4e2a\u80fd\u8bbf\u95eeNova\u670d\u52a1\u7684\u7528\u6237\u4fe1\u606f\uff0c\u4e0a\u4f8b\u4e2d\u4f7f\u7528 admin \u7528\u6237\u4e3a\u4f8b - transport_url \u4e3a RabbitMQ \u8fde\u63a5\u4fe1\u606f\uff0c RABBIT_PASS \u66ff\u6362\u4e3aRabbitMQ\u7684\u5bc6\u7801 - [database] \u5206\u7ec4\u4e2d\u7684 connection \u4e3a\u524d\u9762\u5728mysql\u4e2d\u4e3aTrove\u521b\u5efa\u7684\u6570\u636e\u5e93\u4fe1\u606f - Trove\u7684\u7528\u6237\u4fe1\u606f\u4e2d TROVE_PASS`\u66ff\u6362\u4e3a\u5b9e\u9645trove\u7528\u6237\u7684\u5bc6\u7801 \u914d\u7f6e trove-guestagent.conf ```shell script vim /etc/trove/trove-guestagent.conf [DEFAULT] log_file = trove-guestagent.log log_dir = /var/log/trove/ ignore_users = os_admin control_exchange = trove transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/ rpc_backend = rabbit command_process_timeout = 60 use_syslog = False debug = True [service_credentials] auth_url = http://controller:5000/v3/ region_name = RegionOne project_name = service password = TROVE_PASS project_domain_name = Default user_domain_name = Default username = trove [mysql] docker_image = your-registry/your-repo/mysql backup_docker_image = your-registry/your-repo/db-backup-mysql:1.1.0 **\u89e3\u91ca\uff1a** `guestagent`\u662ftrove\u4e2d\u4e00\u4e2a\u72ec\u7acb\u7ec4\u4ef6\uff0c\u9700\u8981\u9884\u5148\u5185\u7f6e\u5230Trove\u901a\u8fc7Nova\u521b\u5efa\u7684\u865a\u62df \u673a\u955c\u50cf\u4e2d\uff0c\u5728\u521b\u5efa\u597d\u6570\u636e\u5e93\u5b9e\u4f8b\u540e\uff0c\u4f1a\u8d77guestagent\u8fdb\u7a0b\uff0c\u8d1f\u8d23\u901a\u8fc7\u6d88\u606f\u961f\u5217\uff08RabbitMQ\uff09\u5411Trove\u4e0a \u62a5\u5fc3\u8df3\uff0c\u56e0\u6b64\u9700\u8981\u914d\u7f6eRabbitMQ\u7684\u7528\u6237\u548c\u5bc6\u7801\u4fe1\u606f\u3002 **\u4eceVictoria\u7248\u5f00\u59cb\uff0cTrove\u4f7f\u7528\u4e00\u4e2a\u7edf\u4e00\u7684\u955c\u50cf\u6765\u8dd1\u4e0d\u540c\u7c7b\u578b\u7684\u6570\u636e\u5e93\uff0c\u6570\u636e\u5e93\u670d\u52a1\u8fd0\u884c\u5728Guest\u865a\u62df\u673a\u7684Docker\u5bb9\u5668\u4e2d\u3002** - `transport_url` 
\u4e3a`RabbitMQ`\u8fde\u63a5\u4fe1\u606f\uff0c`RABBIT_PASS`\u66ff\u6362\u4e3aRabbitMQ\u7684\u5bc6\u7801 - Trove\u7684\u7528\u6237\u4fe1\u606f\u4e2d`TROVE_PASS`\u66ff\u6362\u4e3a\u5b9e\u9645trove\u7528\u6237\u7684\u5bc6\u7801 6. \u751f\u6210\u6570\u636e`Trove`\u6570\u636e\u5e93\u8868 ```shell script su -s /bin/sh -c \"trove-manage db_sync\" trove 4. \u5b8c\u6210\u5b89\u88c5\u914d\u7f6e 1. \u914d\u7f6e Trove \u670d\u52a1\u81ea\u542f\u52a8 ```shell script systemctl enable openstack-trove-api.service \\ openstack-trove-taskmanager.service \\ openstack-trove-conductor.service 2. \u542f\u52a8\u670d\u52a1 ```shell script systemctl start openstack-trove-api.service \\ openstack-trove-taskmanager.service \\ openstack-trove-conductor.service","title":"Trove \u5b89\u88c5"},{"location":"install/openEuler-22.03-LTS-SP2/OpenStack-wallaby/#swift","text":"Swift \u63d0\u4f9b\u4e86\u5f39\u6027\u53ef\u4f38\u7f29\u3001\u9ad8\u53ef\u7528\u7684\u5206\u5e03\u5f0f\u5bf9\u8c61\u5b58\u50a8\u670d\u52a1\uff0c\u9002\u5408\u5b58\u50a8\u5927\u89c4\u6a21\u975e\u7ed3\u6784\u5316\u6570\u636e\u3002 \u521b\u5efa\u670d\u52a1\u51ed\u8bc1\u3001API\u7aef\u70b9\u3002 \u521b\u5efa\u670d\u52a1\u51ed\u8bc1 #\u521b\u5efaswift\u7528\u6237\uff1a openstack user create --domain default --password-prompt swift #\u4e3aswift\u7528\u6237\u6dfb\u52a0admin\u89d2\u8272\uff1a openstack role add --project service --user swift admin #\u521b\u5efaswift\u670d\u52a1\u5b9e\u4f53\uff1a openstack service create --name swift --description \"OpenStack Object Storage\" object-store \u521b\u5efaswift API \u7aef\u70b9: openstack endpoint create --region RegionOne object-store public http://controller:8080/v1/AUTH_%\\(project_id\\)s openstack endpoint create --region RegionOne object-store internal http://controller:8080/v1/AUTH_%\\(project_id\\)s openstack endpoint create --region RegionOne object-store admin http://controller:8080/v1 \u5b89\u88c5\u8f6f\u4ef6\u5305\uff1a yum install openstack-swift-proxy python3-swiftclient python3-keystoneclient python3-keystonemiddleware memcached \uff08CTL\uff09 \u914d\u7f6eproxy-server\u76f8\u5173\u914d\u7f6e Swift RPM\u5305\u91cc\u5df2\u7ecf\u5305\u542b\u4e86\u4e00\u4e2a\u57fa\u672c\u53ef\u7528\u7684proxy-server.conf\uff0c\u53ea\u9700\u8981\u624b\u52a8\u4fee\u6539\u5176\u4e2d\u7684ip\u548cswift password\u5373\u53ef\u3002 ***\u6ce8\u610f*** **\u6ce8\u610f\u66ff\u6362password\u4e3a\u60a8\u5728\u8eab\u4efd\u670d\u52a1\u4e2d\u4e3aswift\u7528\u6237\u9009\u62e9\u7684\u5bc6\u7801** \u5b89\u88c5\u548c\u914d\u7f6e\u5b58\u50a8\u8282\u70b9 \uff08STG\uff09 \u5b89\u88c5\u652f\u6301\u7684\u7a0b\u5e8f\u5305: yum install xfsprogs rsync \u5c06/dev/vdb\u548c/dev/vdc\u8bbe\u5907\u683c\u5f0f\u5316\u4e3a XFS mkfs.xfs /dev/vdb mkfs.xfs /dev/vdc \u521b\u5efa\u6302\u8f7d\u70b9\u76ee\u5f55\u7ed3\u6784: mkdir -p /srv/node/vdb mkdir -p /srv/node/vdc \u627e\u5230\u65b0\u5206\u533a\u7684 UUID: blkid \u7f16\u8f91/etc/fstab\u6587\u4ef6\u5e76\u5c06\u4ee5\u4e0b\u5185\u5bb9\u6dfb\u52a0\u5230\u5176\u4e2d: UUID=\"\" /srv/node/vdb xfs noatime 0 2 UUID=\"\" /srv/node/vdc xfs noatime 0 2 \u6302\u8f7d\u8bbe\u5907\uff1a mount /srv/node/vdb mount /srv/node/vdc \u6ce8\u610f \u5982\u679c\u7528\u6237\u4e0d\u9700\u8981\u5bb9\u707e\u529f\u80fd\uff0c\u4ee5\u4e0a\u6b65\u9aa4\u53ea\u9700\u8981\u521b\u5efa\u4e00\u4e2a\u8bbe\u5907\u5373\u53ef\uff0c\u540c\u65f6\u53ef\u4ee5\u8df3\u8fc7\u4e0b\u9762\u7684rsync\u914d\u7f6e \uff08\u53ef\u9009\uff09\u521b\u5efa\u6216\u7f16\u8f91/etc/rsyncd.conf\u6587\u4ef6\u4ee5\u5305\u542b\u4ee5\u4e0b\u5185\u5bb9: [DEFAULT] uid = swift gid = 
swift log file = /var/log/rsyncd.log pid file = /var/run/rsyncd.pid address = MANAGEMENT_INTERFACE_IP_ADDRESS [account] max connections = 2 path = /srv/node/ read only = False lock file = /var/lock/account.lock [container] max connections = 2 path = /srv/node/ read only = False lock file = /var/lock/container.lock [object] max connections = 2 path = /srv/node/ read only = False lock file = /var/lock/object.lock \u66ff\u6362MANAGEMENT_INTERFACE_IP_ADDRESS\u4e3a\u5b58\u50a8\u8282\u70b9\u4e0a\u7ba1\u7406\u7f51\u7edc\u7684IP\u5730\u5740 \u542f\u52a8rsyncd\u670d\u52a1\u5e76\u914d\u7f6e\u5b83\u5728\u7cfb\u7edf\u542f\u52a8\u65f6\u542f\u52a8: systemctl enable rsyncd.service systemctl start rsyncd.service \u5728\u5b58\u50a8\u8282\u70b9\u5b89\u88c5\u548c\u914d\u7f6e\u7ec4\u4ef6 \uff08STG\uff09 \u5b89\u88c5\u8f6f\u4ef6\u5305: yum install openstack-swift-account openstack-swift-container openstack-swift-object \u7f16\u8f91/etc/swift\u76ee\u5f55\u7684account-server.conf\u3001container-server.conf\u548cobject-server.conf\u6587\u4ef6\uff0c\u66ff\u6362bind_ip\u4e3a\u5b58\u50a8\u8282\u70b9\u4e0a\u7ba1\u7406\u7f51\u7edc\u7684IP\u5730\u5740\u3002 \u786e\u4fdd\u6302\u8f7d\u70b9\u76ee\u5f55\u7ed3\u6784\u7684\u6b63\u786e\u6240\u6709\u6743: chown -R swift:swift /srv/node \u521b\u5efarecon\u76ee\u5f55\u5e76\u786e\u4fdd\u5176\u62e5\u6709\u6b63\u786e\u7684\u6240\u6709\u6743\uff1a mkdir -p /var/cache/swift chown -R root:swift /var/cache/swift chmod -R 775 /var/cache/swift \u521b\u5efa\u8d26\u53f7\u73af (CTL) \u5207\u6362\u5230/etc/swift\u76ee\u5f55\u3002 cd /etc/swift \u521b\u5efa\u57fa\u7840account.builder\u6587\u4ef6: swift-ring-builder account.builder create 10 1 1 \u5c06\u6bcf\u4e2a\u5b58\u50a8\u8282\u70b9\u6dfb\u52a0\u5230\u73af\u4e2d\uff1a swift-ring-builder account.builder add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6202 --device DEVICE_NAME --weight DEVICE_WEIGHT \u66ff\u6362STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS\u4e3a\u5b58\u50a8\u8282\u70b9\u4e0a\u7ba1\u7406\u7f51\u7edc\u7684IP\u5730\u5740\u3002\u66ff\u6362DEVICE_NAME\u4e3a\u540c\u4e00\u5b58\u50a8\u8282\u70b9\u4e0a\u7684\u5b58\u50a8\u8bbe\u5907\u540d\u79f0 \u6ce8\u610f *** *\u5bf9\u6bcf\u4e2a\u5b58\u50a8\u8282\u70b9\u4e0a\u7684\u6bcf\u4e2a\u5b58\u50a8\u8bbe\u5907\u91cd\u590d\u6b64\u547d\u4ee4 \u9a8c\u8bc1\u6212\u6307\u5185\u5bb9\uff1a swift-ring-builder account.builder \u91cd\u65b0\u5e73\u8861\u6212\u6307\uff1a swift-ring-builder account.builder rebalance \u521b\u5efa\u5bb9\u5668\u73af (CTL) \u5207\u6362\u5230 /etc/swift \u76ee\u5f55\u3002 \u521b\u5efa\u57fa\u7840 container.builder \u6587\u4ef6\uff1a swift-ring-builder container.builder create 10 1 1 \u5c06\u6bcf\u4e2a\u5b58\u50a8\u8282\u70b9\u6dfb\u52a0\u5230\u73af\u4e2d\uff1a swift-ring-builder container.builder \\ add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6201 \\ --device DEVICE_NAME --weight 100 \u66ff\u6362STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS\u4e3a\u5b58\u50a8\u8282\u70b9\u4e0a\u7ba1\u7406\u7f51\u7edc\u7684IP\u5730\u5740\u3002\u66ff\u6362DEVICE_NAME\u4e3a\u540c\u4e00\u5b58\u50a8\u8282\u70b9\u4e0a\u7684\u5b58\u50a8\u8bbe\u5907\u540d\u79f0 \u6ce8\u610f \u5bf9\u6bcf\u4e2a\u5b58\u50a8\u8282\u70b9\u4e0a\u7684\u6bcf\u4e2a\u5b58\u50a8\u8bbe\u5907\u91cd\u590d\u6b64\u547d\u4ee4 \u9a8c\u8bc1\u6212\u6307\u5185\u5bb9\uff1a swift-ring-builder container.builder \u91cd\u65b0\u5e73\u8861\u6212\u6307\uff1a swift-ring-builder container.builder rebalance \u521b\u5efa\u5bf9\u8c61\u73af (CTL) \u5207\u6362\u5230 /etc/swift 
\u76ee\u5f55\u3002 \u521b\u5efa\u57fa\u7840 object.builder \u6587\u4ef6\uff1a swift-ring-builder object.builder create 10 1 1 \u5c06\u6bcf\u4e2a\u5b58\u50a8\u8282\u70b9\u6dfb\u52a0\u5230\u73af\u4e2d swift-ring-builder object.builder \\ add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6200 \\ --device DEVICE_NAME --weight 100 \u66ff\u6362STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS\u4e3a\u5b58\u50a8\u8282\u70b9\u4e0a\u7ba1\u7406\u7f51\u7edc\u7684IP\u5730\u5740\u3002\u66ff\u6362DEVICE_NAME\u4e3a\u540c\u4e00\u5b58\u50a8\u8282\u70b9\u4e0a\u7684\u5b58\u50a8\u8bbe\u5907\u540d\u79f0 \u6ce8\u610f *** *\u5bf9\u6bcf\u4e2a\u5b58\u50a8\u8282\u70b9\u4e0a\u7684\u6bcf\u4e2a\u5b58\u50a8\u8bbe\u5907\u91cd\u590d\u6b64\u547d\u4ee4 \u9a8c\u8bc1\u6212\u6307\u5185\u5bb9\uff1a swift-ring-builder object.builder \u91cd\u65b0\u5e73\u8861\u6212\u6307\uff1a swift-ring-builder object.builder rebalance \u5206\u53d1\u73af\u914d\u7f6e\u6587\u4ef6\uff1a \u5c06 account.ring.gz \uff0c container.ring.gz \u4ee5\u53ca object.ring.gz \u6587\u4ef6\u590d\u5236\u5230\u6bcf\u4e2a\u5b58\u50a8\u8282\u70b9\u548c\u8fd0\u884c\u4ee3\u7406\u670d\u52a1\u7684\u4efb\u4f55\u5176\u4ed6\u8282\u70b9\u4e0a\u7684 /etc/swift \u76ee\u5f55\u3002 \u5b8c\u6210\u5b89\u88c5 \u7f16\u8f91 /etc/swift/swift.conf \u6587\u4ef6 [swift-hash] swift_hash_path_suffix = test-hash swift_hash_path_prefix = test-hash [storage-policy:0] name = Policy-0 default = yes \u7528\u552f\u4e00\u503c\u66ff\u6362 test-hash \u5c06swift.conf\u6587\u4ef6\u590d\u5236\u5230/etc/swift\u6bcf\u4e2a\u5b58\u50a8\u8282\u70b9\u548c\u8fd0\u884c\u4ee3\u7406\u670d\u52a1\u7684\u4efb\u4f55\u5176\u4ed6\u8282\u70b9\u4e0a\u7684\u76ee\u5f55\u3002 \u5728\u6240\u6709\u8282\u70b9\u4e0a\uff0c\u786e\u4fdd\u914d\u7f6e\u76ee\u5f55\u7684\u6b63\u786e\u6240\u6709\u6743\uff1a chown -R root:swift /etc/swift \u5728\u63a7\u5236\u5668\u8282\u70b9\u548c\u8fd0\u884c\u4ee3\u7406\u670d\u52a1\u7684\u4efb\u4f55\u5176\u4ed6\u8282\u70b9\u4e0a\uff0c\u542f\u52a8\u5bf9\u8c61\u5b58\u50a8\u4ee3\u7406\u670d\u52a1\u53ca\u5176\u4f9d\u8d56\u9879\uff0c\u5e76\u5c06\u5b83\u4eec\u914d\u7f6e\u4e3a\u5728\u7cfb\u7edf\u542f\u52a8\u65f6\u542f\u52a8\uff1a systemctl enable openstack-swift-proxy.service memcached.service systemctl start openstack-swift-proxy.service memcached.service \u5728\u5b58\u50a8\u8282\u70b9\u4e0a\uff0c\u542f\u52a8\u5bf9\u8c61\u5b58\u50a8\u670d\u52a1\u5e76\u5c06\u5b83\u4eec\u914d\u7f6e\u4e3a\u5728\u7cfb\u7edf\u542f\u52a8\u65f6\u542f\u52a8\uff1a systemctl enable openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service systemctl start openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service systemctl enable openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service systemctl start openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service systemctl enable openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service systemctl start openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service","title":"Swift 
Cyborg Installation

Cyborg provides accelerator device support for OpenStack, including GPU, FPGA, ASIC, NP, SoCs, NVMe/NOF SSDs, ODP, DPDK/SPDK, and so on.

Initialize the corresponding database

CREATE DATABASE cyborg;
GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'localhost' IDENTIFIED BY 'CYBORG_DBPASS';
GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'%' IDENTIFIED BY 'CYBORG_DBPASS';

Create the corresponding Keystone resource objects

$ openstack user create --domain default --password-prompt cyborg
$ openstack role add --project service --user cyborg admin
$ openstack service create --name cyborg --description "Acceleration Service" accelerator
$ openstack endpoint create --region RegionOne \
  accelerator public http://:6666/v1
$ openstack endpoint create --region RegionOne \
  accelerator internal http://:6666/v1
$ openstack endpoint create --region RegionOne \
  accelerator admin http://:6666/v1

Install Cyborg

yum install openstack-cyborg

Configure Cyborg

Modify /etc/cyborg/cyborg.conf

[DEFAULT]
transport_url = rabbit://%RABBITMQ_USER%:%RABBITMQ_PASSWORD%@%OPENSTACK_HOST_IP%:5672/
use_syslog = False
state_path = /var/lib/cyborg
debug = True

[database]
connection = mysql+pymysql://%DATABASE_USER%:%DATABASE_PASSWORD%@%OPENSTACK_HOST_IP%/cyborg

[service_catalog]
project_domain_id = default
user_domain_id = default
project_name = service
password = PASSWORD
username = cyborg
auth_url = http://%OPENSTACK_HOST_IP%/identity
auth_type = password

[placement]
project_domain_name = Default
project_name = service
user_domain_name = Default
password = PASSWORD
username = placement
auth_url = http://%OPENSTACK_HOST_IP%/identity
auth_type = password

[keystone_authtoken]
memcached_servers = localhost:11211
project_domain_name = Default
project_name = service
user_domain_name = Default
password = PASSWORD
username = cyborg
auth_url = http://%OPENSTACK_HOST_IP%/identity
auth_type = password

Adjust the corresponding usernames, passwords, IP addresses and other values for your environment.

Synchronize the database tables

cyborg-dbsync --config-file /etc/cyborg/cyborg.conf upgrade

Start the Cyborg services

systemctl enable openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent
systemctl start openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent
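Once the Cyborg services are running, a quick way to confirm that the API is registered and reachable is to query the catalog and list devices from the controller. This is a minimal sketch only: it assumes admin credentials are loaded and that the `openstack accelerator` client plugin (python-cyborgclient) is installed; the device list will simply be empty until cyborg-agent discovers accelerator hardware.

```shell
source ~/.admin-openrc

# The accelerator service and its endpoints should appear in the catalog
openstack service list | grep accelerator
openstack endpoint list --service accelerator

# List devices reported by cyborg-agent (empty output is normal on hosts without accelerators)
openstack accelerator device list
```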
Aodh Installation

Create the database

CREATE DATABASE aodh;
GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'localhost' IDENTIFIED BY 'AODH_DBPASS';
GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'%' IDENTIFIED BY 'AODH_DBPASS';

Create the corresponding Keystone resource objects

openstack user create --domain default --password-prompt aodh
openstack role add --project service --user aodh admin
openstack service create --name aodh --description "Telemetry" alarming
openstack endpoint create --region RegionOne alarming public http://controller:8042
openstack endpoint create --region RegionOne alarming internal http://controller:8042
openstack endpoint create --region RegionOne alarming admin http://controller:8042

Install Aodh

yum install openstack-aodh-api openstack-aodh-evaluator openstack-aodh-notifier openstack-aodh-listener openstack-aodh-expirer python3-aodhclient

Note

The python3-pyparsing package that aodh depends on is not compatible in the openEuler OS repository, and the OpenStack-specific version must be installed over it. Use yum list | grep pyparsing | grep OpenStack | awk '{print $2}' to obtain the corresponding version VERSION, then run yum install -y python3-pyparsing-VERSION to overwrite-install the compatible pyparsing.

Modify the configuration file

[database]
connection = mysql+pymysql://aodh:AODH_DBPASS@controller/aodh

[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = aodh
password = AODH_PASS

[service_credentials]
auth_type = password
auth_url = http://controller:5000/v3
project_domain_id = default
user_domain_id = default
project_name = service
username = aodh
password = AODH_PASS
interface = internalURL
region_name = RegionOne

Initialize the database

aodh-dbsync

Start the Aodh services

systemctl enable openstack-aodh-api.service openstack-aodh-evaluator.service openstack-aodh-notifier.service openstack-aodh-listener.service
systemctl start openstack-aodh-api.service openstack-aodh-evaluator.service openstack-aodh-notifier.service openstack-aodh-listener.service

Gnocchi Installation

Create the database

CREATE DATABASE gnocchi;
GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'localhost' IDENTIFIED BY 'GNOCCHI_DBPASS';
GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'%' IDENTIFIED BY 'GNOCCHI_DBPASS';

Create the corresponding Keystone resource objects

openstack user create --domain default --password-prompt gnocchi
openstack role add --project service --user gnocchi admin
openstack service create --name gnocchi --description "Metric Service" metric
openstack endpoint create --region RegionOne metric public http://controller:8041
openstack endpoint create --region RegionOne metric internal http://controller:8041
openstack endpoint create --region RegionOne metric admin http://controller:8041

Install Gnocchi

yum install openstack-gnocchi-api openstack-gnocchi-metricd python3-gnocchiclient

Modify the configuration file /etc/gnocchi/gnocchi.conf

[api]
auth_mode = keystone
port = 8041
uwsgi_mode = http-socket

[keystone_authtoken]
auth_type = password
auth_url = http://controller:5000/v3
project_domain_name = Default
user_domain_name = Default
project_name = service
username = gnocchi
password = GNOCCHI_PASS
interface = internalURL
region_name = RegionOne

[indexer]
url = mysql+pymysql://gnocchi:GNOCCHI_DBPASS@controller/gnocchi

[storage]
# coordination_url is not required but specifying one will improve
# performance with better workload division across workers.
coordination_url = redis://controller:6379 file_basepath = /var/lib/gnocchi driver = file \u521d\u59cb\u5316\u6570\u636e\u5e93 gnocchi-upgrade \u542f\u52a8Gnocchi\u670d\u52a1 systemctl enable openstack-gnocchi-api.service openstack-gnocchi-metricd.service systemctl start openstack-gnocchi-api.service openstack-gnocchi-metricd.service","title":"Gnocchi \u5b89\u88c5"},{"location":"install/openEuler-22.03-LTS-SP2/OpenStack-wallaby/#ceilometer","text":"\u521b\u5efa\u5bf9\u5e94Keystone\u8d44\u6e90\u5bf9\u8c61 openstack user create --domain default --password-prompt ceilometer openstack role add --project service --user ceilometer admin openstack service create --name ceilometer --description \"Telemetry\" metering \u5b89\u88c5Ceilometer yum install openstack-ceilometer-notification openstack-ceilometer-central \u4fee\u6539\u914d\u7f6e\u6587\u4ef6 /etc/ceilometer/pipeline.yaml publishers: # set address of Gnocchi # + filter out Gnocchi-related activity meters (Swift driver) # + set default archive policy - gnocchi://?filter_project=service&archive_policy=low \u4fee\u6539\u914d\u7f6e\u6587\u4ef6 /etc/ceilometer/ceilometer.conf [DEFAULT] transport_url = rabbit://openstack:RABBIT_PASS@controller [service_credentials] auth_type = password auth_url = http://controller:5000/v3 project_domain_id = default user_domain_id = default project_name = service username = ceilometer password = CEILOMETER_PASS interface = internalURL region_name = RegionOne \u521d\u59cb\u5316\u6570\u636e\u5e93 ceilometer-upgrade \u542f\u52a8Ceilometer\u670d\u52a1 systemctl enable openstack-ceilometer-notification.service openstack-ceilometer-central.service systemctl start openstack-ceilometer-notification.service openstack-ceilometer-central.service","title":"Ceilometer \u5b89\u88c5"},{"location":"install/openEuler-22.03-LTS-SP2/OpenStack-wallaby/#heat","text":"\u521b\u5efa heat \u6570\u636e\u5e93\uff0c\u5e76\u6388\u4e88 heat \u6570\u636e\u5e93\u6b63\u786e\u7684\u8bbf\u95ee\u6743\u9650\uff0c\u66ff\u6362 HEAT_DBPASS \u4e3a\u5408\u9002\u7684\u5bc6\u7801 CREATE DATABASE heat; GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'localhost' IDENTIFIED BY 'HEAT_DBPASS'; GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'%' IDENTIFIED BY 'HEAT_DBPASS'; \u521b\u5efa\u670d\u52a1\u51ed\u8bc1\uff0c\u521b\u5efa heat \u7528\u6237\uff0c\u5e76\u4e3a\u5176\u589e\u52a0 admin \u89d2\u8272 openstack user create --domain default --password-prompt heat openstack role add --project service --user heat admin \u521b\u5efa heat \u548c heat-cfn \u670d\u52a1\u53ca\u5176\u5bf9\u5e94\u7684API\u7aef\u70b9 openstack service create --name heat --description \"Orchestration\" orchestration openstack service create --name heat-cfn --description \"Orchestration\" cloudformation openstack endpoint create --region RegionOne orchestration public http://controller:8004/v1/%\\(tenant_id\\)s openstack endpoint create --region RegionOne orchestration internal http://controller:8004/v1/%\\(tenant_id\\)s openstack endpoint create --region RegionOne orchestration admin http://controller:8004/v1/%\\(tenant_id\\)s openstack endpoint create --region RegionOne cloudformation public http://controller:8000/v1 openstack endpoint create --region RegionOne cloudformation internal http://controller:8000/v1 openstack endpoint create --region RegionOne cloudformation admin http://controller:8000/v1 \u521b\u5efastack\u7ba1\u7406\u7684\u989d\u5916\u4fe1\u606f\uff0c\u5305\u62ec heat domain\u53ca\u5176\u5bf9\u5e94domain\u7684admin\u7528\u6237 heat_domain_admin \uff0c heat_stack_owner \u89d2\u8272\uff0c 
heat_stack_user \u89d2\u8272 openstack user create --domain heat --password-prompt heat_domain_admin openstack role add --domain heat --user-domain heat --user heat_domain_admin admin openstack role create heat_stack_owner openstack role create heat_stack_user \u5b89\u88c5\u8f6f\u4ef6\u5305 yum install openstack-heat-api openstack-heat-api-cfn openstack-heat-engine \u4fee\u6539\u914d\u7f6e\u6587\u4ef6 /etc/heat/heat.conf [DEFAULT] transport_url = rabbit://openstack:RABBIT_PASS@controller heat_metadata_server_url = http://controller:8000 heat_waitcondition_server_url = http://controller:8000/v1/waitcondition stack_domain_admin = heat_domain_admin stack_domain_admin_password = HEAT_DOMAIN_PASS stack_user_domain_name = heat [database] connection = mysql+pymysql://heat:HEAT_DBPASS@controller/heat [keystone_authtoken] www_authenticate_uri = http://controller:5000 auth_url = http://controller:5000 memcached_servers = controller:11211 auth_type = password project_domain_name = default user_domain_name = default project_name = service username = heat password = HEAT_PASS [trustee] auth_type = password auth_url = http://controller:5000 username = heat password = HEAT_PASS user_domain_name = default [clients_keystone] auth_uri = http://controller:5000 \u521d\u59cb\u5316 heat \u6570\u636e\u5e93\u8868 su -s /bin/sh -c \"heat-manage db_sync\" heat \u542f\u52a8\u670d\u52a1 systemctl enable openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service systemctl start openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service","title":"Heat \u5b89\u88c5"},{"location":"install/openEuler-22.03-LTS-SP2/OpenStack-wallaby/#openstack-sigoos","text":"oos (openEuler OpenStack SIG)\u662fOpenStack SIG\u63d0\u4f9b\u7684\u547d\u4ee4\u884c\u5de5\u5177\u3002\u5176\u4e2d oos env \u7cfb\u5217\u547d\u4ee4\u63d0\u4f9b\u4e86\u4e00\u952e\u90e8\u7f72OpenStack \uff08 all in one \u6216\u4e09\u8282\u70b9 cluster \uff09\u7684ansible\u811a\u672c\uff0c\u7528\u6237\u53ef\u4ee5\u4f7f\u7528\u8be5\u811a\u672c\u5feb\u901f\u90e8\u7f72\u4e00\u5957\u57fa\u4e8e openEuler RPM \u7684 OpenStack \u73af\u5883\u3002 oos \u5de5\u5177\u652f\u6301\u5bf9\u63a5\u4e91provider\uff08\u76ee\u524d\u4ec5\u652f\u6301\u534e\u4e3a\u4e91provider\uff09\u548c\u4e3b\u673a\u7eb3\u7ba1\u4e24\u79cd\u65b9\u5f0f\u6765\u90e8\u7f72 OpenStack \u73af\u5883\uff0c\u4e0b\u9762\u4ee5\u5bf9\u63a5\u534e\u4e3a\u4e91\u90e8\u7f72\u4e00\u5957 all in one \u7684OpenStack\u73af\u5883\u4e3a\u4f8b\u8bf4\u660e oos \u5de5\u5177\u7684\u4f7f\u7528\u65b9\u6cd5\u3002 \u5b89\u88c5 oos \u5de5\u5177 pip install openstack-sig-tool \u914d\u7f6e\u5bf9\u63a5\u534e\u4e3a\u4e91provider\u7684\u4fe1\u606f \u6253\u5f00 /usr/local/etc/oos/oos.conf \u6587\u4ef6\uff0c\u4fee\u6539\u914d\u7f6e\u4e3a\u60a8\u62e5\u6709\u7684\u534e\u4e3a\u4e91\u8d44\u6e90\u4fe1\u606f\uff1a [huaweicloud] ak = sk = region = ap-southeast-3 root_volume_size = 100 data_volume_size = 100 security_group_name = oos image_format = openEuler-%%(release)s-%%(arch)s vpc_name = oos_vpc subnet1_name = oos_subnet1 subnet2_name = oos_subnet2 \u914d\u7f6e OpenStack \u73af\u5883\u4fe1\u606f \u6253\u5f00 /usr/local/etc/oos/oos.conf \u6587\u4ef6\uff0c\u6839\u636e\u5f53\u524d\u673a\u5668\u73af\u5883\u548c\u9700\u6c42\u4fee\u6539\u914d\u7f6e\u3002\u5185\u5bb9\u5982\u4e0b\uff1a [environment] mysql_root_password = root mysql_project_password = root rabbitmq_password = root project_identity_password = root enabled_service = 
keystone,neutron,cinder,placement,nova,glance,horizon,aodh,ceilometer,cyborg,gnocchi,kolla,heat,swift,trove,tempest
neutron_provider_interface_name = br-ex
default_ext_subnet_range = 10.100.100.0/24
default_ext_subnet_gateway = 10.100.100.1
neutron_dataplane_interface_name = eth1
cinder_block_device = vdb
swift_storage_devices = vdc
swift_hash_path_suffix = ash
swift_hash_path_prefix = has
glance_api_workers = 2
cinder_api_workers = 2
nova_api_workers = 2
nova_metadata_api_workers = 2
nova_conductor_workers = 2
nova_scheduler_workers = 2
neutron_api_workers = 2
horizon_allowed_host = *
kolla_openeuler_plugin = false

Key configuration options:

| Option | Description |
|:-------|:------------|
| enabled_service | List of services to install; trim it according to your needs |
| neutron_provider_interface_name | Name of the neutron L3 bridge |
| default_ext_subnet_range | IP range of the neutron private network |
| default_ext_subnet_gateway | Gateway of the neutron private network |
| neutron_dataplane_interface_name | NIC used by neutron. A dedicated new NIC is recommended to avoid conflicts with existing NICs and to prevent the all-in-one host from losing its connection |
| cinder_block_device | Name of the volume device used by cinder |
| swift_storage_devices | Names of the volume devices used by swift |
| kolla_openeuler_plugin | Whether to enable the kolla plugin. When set to True, kolla supports deploying openEuler containers |

Create an openEuler 22.03-LTS-SP2 x86_64 virtual machine on Huawei Cloud for deploying the all in one OpenStack:

# sshpass is used during `oos env create` to set up password-free access to the target virtual machine
dnf install sshpass
oos env create -r 22.03-lts-sp2 -f small -a x86 -n test-oos all_in_one

The full list of parameters can be viewed with the oos env create --help command.

Deploy the OpenStack all in one environment:

oos env setup test-oos -r wallaby

The full list of parameters can be viewed with the oos env setup --help command.

Initialize the tempest environment

If you want to run tempest tests in this environment, run oos env init, which automatically creates the OpenStack resources that tempest needs:

oos env init test-oos

After the command succeeds, a mytest directory is generated in the user's home directory; enter it and the tempest run command can be executed.

If the OpenStack environment is deployed by managing existing hosts, the overall flow is the same as the Huawei Cloud case above: steps 1, 3, 5 and 6 are unchanged, step 2 (configuring the Huawei Cloud provider information) is dropped, and step 4 changes from creating a virtual machine on Huawei Cloud to taking over the host.

# sshpass is used during `oos env create` to set up password-free access to the target host
dnf install sshpass
oos env manage -r 22.03-lts-sp2 -i TARGET_MACHINE_IP -p TARGET_MACHINE_PASSWD -n test-oos
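If the run fails early, it is usually worth checking by hand that password-based SSH to the target actually works, since oos relies on sshpass to bootstrap key-free access. A minimal check (assumptions: root login is allowed; 192.0.2.10 stands in for the real target IP):

```shell
# Confirm password-based SSH works for the host that will be managed
sshpass -p 'TARGET_MACHINE_PASSWD' ssh -o StrictHostKeyChecking=no root@192.0.2.10 'uname -m && cat /etc/os-release'
```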
\u66ff\u6362 TARGET_MACHINE_IP \u4e3a\u76ee\u6807\u673aip\u3001 TARGET_MACHINE_PASSWD \u4e3a\u76ee\u6807\u673a\u5bc6\u7801\u3002\u5177\u4f53\u7684\u53c2\u6570\u53ef\u4ee5\u4f7f\u7528 oos env manage --help \u547d\u4ee4\u67e5\u770b\u3002","title":"\u57fa\u4e8eOpenStack SIG\u5f00\u53d1\u5de5\u5177oos\u5feb\u901f\u90e8\u7f72"},{"location":"install/openEuler-22.03-LTS-SP3/OpenStack-train/","text":"OpenStack-Train \u90e8\u7f72\u6307\u5357 \u00b6 OpenStack-Train \u90e8\u7f72\u6307\u5357 OpenStack \u7b80\u4ecb \u7ea6\u5b9a \u51c6\u5907\u73af\u5883 \u73af\u5883\u914d\u7f6e \u5b89\u88c5 SQL DataBase \u5b89\u88c5 RabbitMQ \u5b89\u88c5 Memcached \u5b89\u88c5 OpenStack Keystone \u5b89\u88c5 Glance \u5b89\u88c5 Placement\u5b89\u88c5 Nova \u5b89\u88c5 Neutron \u5b89\u88c5 Cinder \u5b89\u88c5 horizon \u5b89\u88c5 Tempest \u5b89\u88c5 Ironic \u5b89\u88c5 Kolla \u5b89\u88c5 Trove \u5b89\u88c5 Swift \u5b89\u88c5 Cyborg \u5b89\u88c5 Aodh \u5b89\u88c5 Gnocchi \u5b89\u88c5 Ceilometer \u5b89\u88c5 Heat \u5b89\u88c5 \u57fa\u4e8eOpenStack SIG\u5f00\u53d1\u5de5\u5177oos\u5feb\u901f\u90e8\u7f72 \u57fa\u4e8eOpenStack SIG\u90e8\u7f72\u5de5\u5177opensd\u90e8\u7f72 \u90e8\u7f72\u6b65\u9aa4 1. \u90e8\u7f72\u524d\u9700\u8981\u786e\u8ba4\u7684\u4fe1\u606f 2. ceph pool\u4e0e\u8ba4\u8bc1\u521b\u5efa\uff08\u53ef\u9009\uff09 2.1 \u521b\u5efapool: 2.2 \u521d\u59cb\u5316pool 2.3 \u521b\u5efa\u7528\u6237\u8ba4\u8bc1 3. \u914d\u7f6elvm\uff08\u53ef\u9009\uff09 4. \u914d\u7f6eyum repo 4.1 \u5907\u4efdyum\u6e90 4.2 \u914d\u7f6eyum repo 4.3 \u66f4\u65b0yum\u7f13\u5b58 5. \u5b89\u88c5opensd 5.1 \u514b\u9686opensd\u6e90\u7801\u5e76\u5b89\u88c5 6. \u505assh\u4e92\u4fe1 6.1 \u751f\u6210\u5bc6\u94a5\u5bf9 6.2 \u751f\u6210\u4e3b\u673aIP\u5730\u5740\u6587\u4ef6 6.3 \u66f4\u6539\u5bc6\u7801\u5e76\u6267\u884c\u811a\u672c 6.4 \u90e8\u7f72\u8282\u70b9\u4e0eceph monitor\u505a\u4e92\u4fe1\uff08\u53ef\u9009\uff09 7. \u914d\u7f6eopensd 7.1 \u751f\u6210\u968f\u673a\u5bc6\u7801 7.2 \u914d\u7f6einventory\u6587\u4ef6 7.3 \u914d\u7f6e\u5168\u5c40\u53d8\u91cf 7.4 \u68c0\u67e5\u6240\u6709\u8282\u70b9ssh\u8fde\u63a5\u72b6\u6001 8. 
\u6267\u884c\u90e8\u7f72 8.1 \u6267\u884cbootstrap 8.2 \u91cd\u542f\u670d\u52a1\u5668 8.3 \u6267\u884c\u90e8\u7f72\u524d\u68c0\u67e5 8.4 \u6267\u884c\u90e8\u7f72 OpenStack \u7b80\u4ecb \u00b6 OpenStack \u662f\u4e00\u4e2a\u793e\u533a\uff0c\u4e5f\u662f\u4e00\u4e2a\u9879\u76ee\u3002\u5b83\u63d0\u4f9b\u4e86\u4e00\u4e2a\u90e8\u7f72\u4e91\u7684\u64cd\u4f5c\u5e73\u53f0\u6216\u5de5\u5177\u96c6\uff0c\u4e3a\u7ec4\u7ec7\u63d0\u4f9b\u53ef\u6269\u5c55\u7684\u3001\u7075\u6d3b\u7684\u4e91\u8ba1\u7b97\u3002 \u4f5c\u4e3a\u4e00\u4e2a\u5f00\u6e90\u7684\u4e91\u8ba1\u7b97\u7ba1\u7406\u5e73\u53f0\uff0cOpenStack \u7531nova\u3001cinder\u3001neutron\u3001glance\u3001keystone\u3001horizon\u7b49\u51e0\u4e2a\u4e3b\u8981\u7684\u7ec4\u4ef6\u7ec4\u5408\u8d77\u6765\u5b8c\u6210\u5177\u4f53\u5de5\u4f5c\u3002OpenStack \u652f\u6301\u51e0\u4e4e\u6240\u6709\u7c7b\u578b\u7684\u4e91\u73af\u5883\uff0c\u9879\u76ee\u76ee\u6807\u662f\u63d0\u4f9b\u5b9e\u65bd\u7b80\u5355\u3001\u53ef\u5927\u89c4\u6a21\u6269\u5c55\u3001\u4e30\u5bcc\u3001\u6807\u51c6\u7edf\u4e00\u7684\u4e91\u8ba1\u7b97\u7ba1\u7406\u5e73\u53f0\u3002OpenStack \u901a\u8fc7\u5404\u79cd\u4e92\u8865\u7684\u670d\u52a1\u63d0\u4f9b\u4e86\u57fa\u7840\u8bbe\u65bd\u5373\u670d\u52a1\uff08IaaS\uff09\u7684\u89e3\u51b3\u65b9\u6848\uff0c\u6bcf\u4e2a\u670d\u52a1\u63d0\u4f9b API \u8fdb\u884c\u96c6\u6210\u3002 openEuler 22.03-LTS-SP3\u7248\u672c\u5b98\u65b9\u6e90\u5df2\u7ecf\u652f\u6301 OpenStack-Train \u7248\u672c\uff0c\u7528\u6237\u53ef\u4ee5\u914d\u7f6e\u597d yum \u6e90\u540e\u6839\u636e\u6b64\u6587\u6863\u8fdb\u884c OpenStack \u90e8\u7f72\u3002 \u7ea6\u5b9a \u00b6 OpenStack \u652f\u6301\u591a\u79cd\u5f62\u6001\u90e8\u7f72\uff0c\u6b64\u6587\u6863\u652f\u6301 ALL in One \u4ee5\u53ca Distributed \u4e24\u79cd\u90e8\u7f72\u65b9\u5f0f\uff0c\u6309\u7167\u5982\u4e0b\u65b9\u5f0f\u7ea6\u5b9a\uff1a ALL in One \u6a21\u5f0f: \u5ffd\u7565\u6240\u6709\u53ef\u80fd\u7684\u540e\u7f00 Distributed \u6a21\u5f0f: \u4ee5 `(CTL)` \u4e3a\u540e\u7f00\u8868\u793a\u6b64\u6761\u914d\u7f6e\u6216\u8005\u547d\u4ee4\u4ec5\u9002\u7528`\u63a7\u5236\u8282\u70b9` \u4ee5 `(CPT)` \u4e3a\u540e\u7f00\u8868\u793a\u6b64\u6761\u914d\u7f6e\u6216\u8005\u547d\u4ee4\u4ec5\u9002\u7528`\u8ba1\u7b97\u8282\u70b9` \u4ee5 `(STG)` \u4e3a\u540e\u7f00\u8868\u793a\u6b64\u6761\u914d\u7f6e\u6216\u8005\u547d\u4ee4\u4ec5\u9002\u7528`\u5b58\u50a8\u8282\u70b9` \u9664\u6b64\u4e4b\u5916\u8868\u793a\u6b64\u6761\u914d\u7f6e\u6216\u8005\u547d\u4ee4\u540c\u65f6\u9002\u7528`\u63a7\u5236\u8282\u70b9`\u548c`\u8ba1\u7b97\u8282\u70b9` \u6ce8\u610f \u6d89\u53ca\u5230\u4ee5\u4e0a\u7ea6\u5b9a\u7684\u670d\u52a1\u5982\u4e0b\uff1a Cinder Nova Neutron \u51c6\u5907\u73af\u5883 \u00b6 \u73af\u5883\u914d\u7f6e \u00b6 \u542f\u52a8OpenStack Train yum\u6e90 yum update yum install openstack-release-train yum clean all && yum makecache \u6ce8\u610f \uff1a\u5982\u679c\u4f60\u7684\u73af\u5883\u7684YUM\u6e90\u6ca1\u6709\u542f\u7528EPOL\uff0c\u9700\u8981\u540c\u65f6\u914d\u7f6eEPOL\uff0c\u786e\u4fddEPOL\u5df2\u914d\u7f6e\uff0c\u5982\u4e0b\u6240\u793a vi /etc/yum.repos.d/openEuler.repo [EPOL] name=EPOL baseurl=http://repo.openeuler.org/openEuler-22.03-LTS-SP3/EPOL/main/$basearch/ enabled=1 gpgcheck=1 gpgkey=http://repo.openeuler.org/openEuler-22.03-LTS-SP3/OS/$basearch/RPM-GPG-KEY-openEuler EOF \u4fee\u6539\u4e3b\u673a\u540d\u4ee5\u53ca\u6620\u5c04 \u8bbe\u7f6e\u5404\u4e2a\u8282\u70b9\u7684\u4e3b\u673a\u540d hostnamectl set-hostname controller (CTL) hostnamectl set-hostname compute (CPT) \u5047\u8bbecontroller\u8282\u70b9\u7684IP\u662f 10.0.0.11 ,compute\u8282\u70b9\u7684IP\u662f 
10.0.0.12 \uff08\u5982\u679c\u5b58\u5728\u7684\u8bdd\uff09,\u5219\u4e8e /etc/hosts \u65b0\u589e\u5982\u4e0b\uff1a 10.0.0.11 controller 10.0.0.12 compute \u5b89\u88c5 SQL DataBase \u00b6 \u6267\u884c\u5982\u4e0b\u547d\u4ee4\uff0c\u5b89\u88c5\u8f6f\u4ef6\u5305\u3002 yum install mariadb mariadb-server python3-PyMySQL \u6267\u884c\u5982\u4e0b\u547d\u4ee4\uff0c\u521b\u5efa\u5e76\u7f16\u8f91 /etc/my.cnf.d/openstack.cnf \u6587\u4ef6\u3002 vim /etc/my.cnf.d/openstack.cnf [mysqld] bind-address = 10.0.0.11 default-storage-engine = innodb innodb_file_per_table = on max_connections = 4096 collation-server = utf8_general_ci character-set-server = utf8 \u6ce8\u610f \u5176\u4e2d bind-address \u8bbe\u7f6e\u4e3a\u63a7\u5236\u8282\u70b9\u7684\u7ba1\u7406IP\u5730\u5740\u3002 \u542f\u52a8 DataBase \u670d\u52a1\uff0c\u5e76\u4e3a\u5176\u914d\u7f6e\u5f00\u673a\u81ea\u542f\u52a8\uff1a systemctl enable mariadb.service systemctl start mariadb.service \u914d\u7f6eDataBase\u7684\u9ed8\u8ba4\u5bc6\u7801\uff08\u53ef\u9009\uff09 mysql_secure_installation \u6ce8\u610f \u6839\u636e\u63d0\u793a\u8fdb\u884c\u5373\u53ef \u5b89\u88c5 RabbitMQ \u00b6 \u6267\u884c\u5982\u4e0b\u547d\u4ee4\uff0c\u5b89\u88c5\u8f6f\u4ef6\u5305\u3002 yum install rabbitmq-server \u542f\u52a8 RabbitMQ \u670d\u52a1\uff0c\u5e76\u4e3a\u5176\u914d\u7f6e\u5f00\u673a\u81ea\u542f\u52a8\u3002 systemctl enable rabbitmq-server.service systemctl start rabbitmq-server.service \u6dfb\u52a0 OpenStack\u7528\u6237\u3002 rabbitmqctl add_user openstack RABBIT_PASS \u6ce8\u610f \u66ff\u6362 RABBIT_PASS \uff0c\u4e3a OpenStack \u7528\u6237\u8bbe\u7f6e\u5bc6\u7801 \u8bbe\u7f6eopenstack\u7528\u6237\u6743\u9650\uff0c\u5141\u8bb8\u8fdb\u884c\u914d\u7f6e\u3001\u5199\u3001\u8bfb\uff1a rabbitmqctl set_permissions openstack \".*\" \".*\" \".*\" \u5b89\u88c5 Memcached \u00b6 \u6267\u884c\u5982\u4e0b\u547d\u4ee4\uff0c\u5b89\u88c5\u4f9d\u8d56\u8f6f\u4ef6\u5305\u3002 yum install memcached python3-memcached \u7f16\u8f91 /etc/sysconfig/memcached \u6587\u4ef6\u3002 vim /etc/sysconfig/memcached OPTIONS=\"-l 127.0.0.1,::1,controller\" \u6267\u884c\u5982\u4e0b\u547d\u4ee4\uff0c\u542f\u52a8 Memcached \u670d\u52a1\uff0c\u5e76\u4e3a\u5176\u914d\u7f6e\u5f00\u673a\u542f\u52a8\u3002 systemctl enable memcached.service systemctl start memcached.service \u6ce8\u610f \u670d\u52a1\u542f\u52a8\u540e\uff0c\u53ef\u4ee5\u901a\u8fc7\u547d\u4ee4 memcached-tool controller stats \u786e\u4fdd\u542f\u52a8\u6b63\u5e38\uff0c\u670d\u52a1\u53ef\u7528\uff0c\u5176\u4e2d\u53ef\u4ee5\u5c06 controller \u66ff\u6362\u4e3a\u63a7\u5236\u8282\u70b9\u7684\u7ba1\u7406IP\u5730\u5740\u3002 \u5b89\u88c5 OpenStack \u00b6 Keystone \u5b89\u88c5 \u00b6 \u521b\u5efa keystone \u6570\u636e\u5e93\u5e76\u6388\u6743\u3002 mysql -u root -p MariaDB [(none)]> CREATE DATABASE keystone; MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \\ IDENTIFIED BY 'KEYSTONE_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \\ IDENTIFIED BY 'KEYSTONE_DBPASS'; MariaDB [(none)]> exit \u6ce8\u610f \u66ff\u6362 KEYSTONE_DBPASS \uff0c\u4e3a Keystone \u6570\u636e\u5e93\u8bbe\u7f6e\u5bc6\u7801 \u5b89\u88c5\u8f6f\u4ef6\u5305\u3002 yum install openstack-keystone httpd mod_wsgi \u914d\u7f6ekeystone\u76f8\u5173\u914d\u7f6e vim /etc/keystone/keystone.conf [database] connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone [token] provider = fernet \u89e3\u91ca [database]\u90e8\u5206\uff0c\u914d\u7f6e\u6570\u636e\u5e93\u5165\u53e3 [token]\u90e8\u5206\uff0c\u914d\u7f6etoken provider 
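Before running the database synchronization that follows, it can be worth confirming that the connection string configured above actually works. This is a quick, optional check, assuming the keystone database and user were created as shown earlier and that the controller host name resolves:

```shell
# Should list the (still empty) keystone database if credentials and grants are correct
mysql -h controller -u keystone -p'KEYSTONE_DBPASS' -e 'SHOW DATABASES;'
```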
Note: replace `KEYSTONE_DBPASS` with the password of the keystone database.

Synchronize the database:

```shell
su -s /bin/sh -c "keystone-manage db_sync" keystone
```

Initialize the Fernet key repositories:

```shell
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
```

Bootstrap the service:

```shell
keystone-manage bootstrap --bootstrap-password ADMIN_PASS \
    --bootstrap-admin-url http://controller:5000/v3/ \
    --bootstrap-internal-url http://controller:5000/v3/ \
    --bootstrap-public-url http://controller:5000/v3/ \
    --bootstrap-region-id RegionOne
```

Note: replace `ADMIN_PASS` to set a password for the admin user.

Configure the Apache HTTP server:

```shell
vim /etc/httpd/conf/httpd.conf

ServerName controller
```

```shell
ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
```

Explanation: set the `ServerName` directive to refer to the controller node.

Note: create the `ServerName` directive if it does not exist.

Start the Apache HTTP service:

```shell
systemctl enable httpd.service
systemctl start httpd.service
```

Create the environment variable file:

```shell
cat << EOF >> ~/.admin-openrc
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
EOF
```

Note: replace `ADMIN_PASS` with the password of the admin user.

Create the domain, projects, users, and roles in turn. python3-openstackclient (version 4.0.2 in the Train repository) must be installed first:

```shell
yum install python3-openstackclient
```

Import the environment variables:

```shell
source ~/.admin-openrc
```

Create the project `service`. The domain `default` was already created by `keystone-manage bootstrap`:

```shell
openstack domain create --description "An Example Domain" example
openstack project create --domain default --description "Service Project" service
```

Create a (non-admin) project `myproject`, user `myuser`, and role `myrole`, and add the role `myrole` to `myproject` and `myuser`:

```shell
openstack project create --domain default --description "Demo Project" myproject
openstack user create --domain default --password-prompt myuser
openstack role create myrole
openstack role add --project myproject --user myuser myrole
```

Verification

Unset the temporary environment variables `OS_AUTH_URL` and `OS_PASSWORD`:

```shell
source ~/.admin-openrc
unset OS_AUTH_URL OS_PASSWORD
```

Request a token as the admin user:

```shell
openstack --os-auth-url http://controller:5000/v3 \
    --os-project-domain-name Default --os-user-domain-name Default \
    --os-project-name admin --os-username admin token issue
```

Request a token as the myuser user:

```shell
openstack --os-auth-url http://controller:5000/v3 \
    --os-project-domain-name Default --os-user-domain-name Default \
    --os-project-name myproject --os-username myuser token issue
```

### Installing Glance

Create the database, service credentials, and API endpoints.

Create the database:

```
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE glance;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
IDENTIFIED BY 'GLANCE_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
IDENTIFIED BY 'GLANCE_DBPASS';
MariaDB [(none)]> exit
```
Note: replace `GLANCE_DBPASS` to set a password for the glance database.

Create the service credentials:

```shell
source ~/.admin-openrc

openstack user create --domain default --password-prompt glance
openstack role add --project service --user glance admin
openstack service create --name glance --description "OpenStack Image" image
```

Create the Image service API endpoints:

```shell
openstack endpoint create --region RegionOne image public http://controller:9292
openstack endpoint create --region RegionOne image internal http://controller:9292
openstack endpoint create --region RegionOne image admin http://controller:9292
```

Install the packages:

```shell
yum install openstack-glance
```

Configure glance:

```shell
vim /etc/glance/glance-api.conf

[database]
connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = GLANCE_PASS

[paste_deploy]
flavor = keystone

[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
```

Explanation:

- The `[database]` section configures the database entry point.
- The `[keystone_authtoken]` and `[paste_deploy]` sections configure the Identity service entry point.
- The `[glance_store]` section configures the local file system store and the location of the image files.

Note: replace `GLANCE_DBPASS` with the password of the glance database; replace `GLANCE_PASS` with the password of the glance user.

Synchronize the database:

```shell
su -s /bin/sh -c "glance-manage db_sync" glance
```

Start the service:

```shell
systemctl enable openstack-glance-api.service
systemctl start openstack-glance-api.service
```

Verification

Download an image:

```shell
source ~/.admin-openrc
wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
```

Note: if your environment runs on the Kunpeng (aarch64) architecture, download the aarch64 version of the image; the cirros-0.5.2-aarch64-disk.img image has been tested.

Upload the image to the Image service:

```shell
openstack image create --disk-format qcow2 --container-format bare \
    --file cirros-0.4.0-x86_64-disk.img --public cirros
```

Confirm the upload and verify the image attributes:

```shell
openstack image list
```

### Installing Placement

Create the database, service credentials, and API endpoints.

Create the database:

Access the database as the root user, create the placement database, and grant privileges:

```
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE placement;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' \
IDENTIFIED BY 'PLACEMENT_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' \
IDENTIFIED BY 'PLACEMENT_DBPASS';
MariaDB [(none)]> exit
```

Note: replace `PLACEMENT_DBPASS` to set a password for the placement database.

```shell
source admin-openrc
```

Run the following commands to create the placement service credentials, create the placement user, and add the 'admin' role to the 'placement' user.
Create the Placement API service:

```shell
openstack user create --domain default --password-prompt placement
openstack role add --project service --user placement admin
openstack service create --name placement --description "Placement API" placement
```

Create the Placement service API endpoints:

```shell
openstack endpoint create --region RegionOne placement public http://controller:8778
openstack endpoint create --region RegionOne placement internal http://controller:8778
openstack endpoint create --region RegionOne placement admin http://controller:8778
```

Installation and configuration

Install the packages:

```shell
yum install openstack-placement-api
```

Configure placement by editing the `/etc/placement/placement.conf` file:

- In the `[placement_database]` section, configure the database entry point.
- In the `[api]` and `[keystone_authtoken]` sections, configure the Identity service entry point.

```shell
# vim /etc/placement/placement.conf

[placement_database]
# ...
connection = mysql+pymysql://placement:PLACEMENT_DBPASS@controller/placement

[api]
# ...
auth_strategy = keystone

[keystone_authtoken]
# ...
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = placement
password = PLACEMENT_PASS
```

Replace `PLACEMENT_DBPASS` with the password of the placement database, and replace `PLACEMENT_PASS` with the password of the placement user.

Synchronize the database:

```shell
su -s /bin/sh -c "placement-manage db sync" placement
```

Start the httpd service:

```shell
systemctl restart httpd
```

Verification

Run the status check:

```shell
. admin-openrc
placement-status upgrade check
```
Install osc-placement and list the available resource classes and traits:

```shell
yum install python3-osc-placement

openstack --os-placement-api-version 1.2 resource class list --sort-column name
openstack --os-placement-api-version 1.6 trait list --sort-column name
```

### Installing Nova

Create the database, service credentials, and API endpoints.

Create the databases:

```
mysql -u root -p                                             (CTL)

MariaDB [(none)]> CREATE DATABASE nova_api;
MariaDB [(none)]> CREATE DATABASE nova;
MariaDB [(none)]> CREATE DATABASE nova_cell0;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> exit
```

Note: replace `NOVA_DBPASS` to set a password for the nova databases.

```shell
source ~/.admin-openrc                                       (CTL)
```

Create the nova service credentials:

```shell
openstack user create --domain default --password-prompt nova                               (CTL)
openstack role add --project service --user nova admin                                      (CTL)
openstack service create --name nova --description "OpenStack Compute" compute              (CTL)
```

Create the nova API endpoints:

```shell
openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1     (CTL)
openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1   (CTL)
openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1      (CTL)
```

Install the packages:

```shell
yum install openstack-nova-api openstack-nova-conductor \   (CTL)
            openstack-nova-novncproxy openstack-nova-scheduler

yum install openstack-nova-compute                           (CPT)
```

Note: on arm64, the following command must also be run:

```shell
yum install edk2-aarch64                                     (CPT)
```

Configure nova:

```shell
vim /etc/nova/nova.conf

[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
my_ip = 10.0.0.11
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver
compute_driver = libvirt.LibvirtDriver                       (CPT)
instances_path = /var/lib/nova/instances/                    (CPT)
lock_path = /var/lib/nova/tmp                                (CPT)

[api_database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api    (CTL)

[database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova        (CTL)

[api]
auth_strategy = keystone

[keystone_authtoken]
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = NOVA_PASS

[vnc]
enabled = true
server_listen = $my_ip
server_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html   (CPT)

[glance]
api_servers = http://controller:9292

[oslo_concurrency]
lock_path = /var/lib/nova/tmp                                (CTL)

[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = PLACEMENT_PASS

[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
service_metadata_proxy = true                                (CTL)
metadata_proxy_shared_secret = METADATA_SECRET               (CTL)
```
Explanation:

- The `[DEFAULT]` section enables the compute and metadata APIs, configures the RabbitMQ message queue entry point, sets `my_ip`, and enables the Neutron network service.
- The `[api_database]` and `[database]` sections configure the database entry points.
- The `[api]` and `[keystone_authtoken]` sections configure the Identity service entry point.
- The `[vnc]` section enables and configures the remote console entry point.
- The `[glance]` section configures the address of the Image service API.
- The `[oslo_concurrency]` section configures the lock path.
- The `[placement]` section configures the entry point of the Placement service.

Note:

- Replace `RABBIT_PASS` with the password of the openstack account in RabbitMQ.
- Set `my_ip` to the management IP address of the controller node.
- Replace `NOVA_DBPASS` with the password of the nova databases.
- Replace `NOVA_PASS` with the password of the nova user.
- Replace `PLACEMENT_PASS` with the password of the placement user.
- Replace `NEUTRON_PASS` with the password of the neutron user.
- Replace `METADATA_SECRET` with a suitable metadata proxy secret.

Additional steps

Determine whether virtual machine hardware acceleration is supported (x86 architecture):

```shell
egrep -c '(vmx|svm)' /proc/cpuinfo                           (CPT)
```

If the return value is 0, hardware acceleration is not supported, and libvirt must be configured to use QEMU instead of KVM:

```shell
vim /etc/nova/nova.conf                                      (CPT)

[libvirt]
virt_type = qemu
```

If the return value is 1 or greater, hardware acceleration is supported, and `virt_type` can be set to `kvm`.

Note: on arm64, the following commands must also be run on the compute nodes:

```shell
mkdir -p /usr/share/AAVMF
chown nova:nova /usr/share/AAVMF

ln -s /usr/share/edk2/aarch64/QEMU_EFI-pflash.raw \
      /usr/share/AAVMF/AAVMF_CODE.fd
ln -s /usr/share/edk2/aarch64/vars-template-pflash.raw \
      /usr/share/AAVMF/AAVMF_VARS.fd

vim /etc/libvirt/qemu.conf

nvram = ["/usr/share/AAVMF/AAVMF_CODE.fd: \
          /usr/share/AAVMF/AAVMF_VARS.fd", \
         "/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw: \
          /usr/share/edk2/aarch64/vars-template-pflash.raw"]
```

In addition, when the deployment environment on ARM uses nested virtualization, configure libvirt as follows:

```shell
[libvirt]
virt_type = qemu
cpu_mode = custom
cpu_model = cortex-a72
```

Synchronize the databases

Synchronize the nova-api database:

```shell
su -s /bin/sh -c "nova-manage api_db sync" nova              (CTL)
```

Register the cell0 database:

```shell
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova        (CTL)
```

Create the cell1 cell:

```shell
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova    (CTL)
```

Synchronize the nova database:

```shell
su -s /bin/sh -c "nova-manage db sync" nova                  (CTL)
```
\"nova-manage db sync\" nova (CTL) \u9a8c\u8bc1cell0\u548ccell1\u6ce8\u518c\u6b63\u786e\uff1a su -s /bin/sh -c \"nova-manage cell_v2 list_cells\" nova (CTL) \u6dfb\u52a0\u8ba1\u7b97\u8282\u70b9\u5230openstack\u96c6\u7fa4 su -s /bin/sh -c \"nova-manage cell_v2 discover_hosts --verbose\" nova (CTL) \u542f\u52a8\u670d\u52a1 systemctl enable \\ (CTL) openstack-nova-api.service \\ openstack-nova-scheduler.service \\ openstack-nova-conductor.service \\ openstack-nova-novncproxy.service systemctl start \\ (CTL) openstack-nova-api.service \\ openstack-nova-scheduler.service \\ openstack-nova-conductor.service \\ openstack-nova-novncproxy.service systemctl enable libvirtd.service openstack-nova-compute.service (CPT) systemctl start libvirtd.service openstack-nova-compute.service (CPT) \u9a8c\u8bc1 source ~/.admin-openrc (CTL) \u5217\u51fa\u670d\u52a1\u7ec4\u4ef6\uff0c\u9a8c\u8bc1\u6bcf\u4e2a\u6d41\u7a0b\u90fd\u6210\u529f\u542f\u52a8\u548c\u6ce8\u518c\uff1a openstack compute service list (CTL) \u5217\u51fa\u8eab\u4efd\u670d\u52a1\u4e2d\u7684API\u7aef\u70b9\uff0c\u9a8c\u8bc1\u4e0e\u8eab\u4efd\u670d\u52a1\u7684\u8fde\u63a5\uff1a openstack catalog list (CTL) \u5217\u51fa\u955c\u50cf\u670d\u52a1\u4e2d\u7684\u955c\u50cf\uff0c\u9a8c\u8bc1\u4e0e\u955c\u50cf\u670d\u52a1\u7684\u8fde\u63a5\uff1a openstack image list (CTL) \u68c0\u67e5cells\u662f\u5426\u8fd0\u4f5c\u6210\u529f\uff0c\u4ee5\u53ca\u5176\u4ed6\u5fc5\u8981\u6761\u4ef6\u662f\u5426\u5df2\u5177\u5907\u3002 nova-status upgrade check (CTL) Neutron \u5b89\u88c5 \u00b6 \u521b\u5efa\u6570\u636e\u5e93\u3001\u670d\u52a1\u51ed\u8bc1\u548c API \u7aef\u70b9 \u521b\u5efa\u6570\u636e\u5e93\uff1a mysql -u root -p (CTL) MariaDB [(none)]> CREATE DATABASE neutron; MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \\ IDENTIFIED BY 'NEUTRON_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \\ IDENTIFIED BY 'NEUTRON_DBPASS'; MariaDB [(none)]> exit \u6ce8\u610f \u66ff\u6362 NEUTRON_DBPASS \u4e3a neutron \u6570\u636e\u5e93\u8bbe\u7f6e\u5bc6\u7801\u3002 source ~/.admin-openrc (CTL) \u521b\u5efaneutron\u670d\u52a1\u51ed\u8bc1 openstack user create --domain default --password-prompt neutron (CTL) openstack role add --project service --user neutron admin (CTL) openstack service create --name neutron --description \"OpenStack Networking\" network (CTL) \u521b\u5efaNeutron\u670d\u52a1API\u7aef\u70b9\uff1a openstack endpoint create --region RegionOne network public http://controller:9696 (CTL) openstack endpoint create --region RegionOne network internal http://controller:9696 (CTL) openstack endpoint create --region RegionOne network admin http://controller:9696 (CTL) \u5b89\u88c5\u8f6f\u4ef6\u5305\uff1a yum install openstack-neutron openstack-neutron-linuxbridge ebtables ipset \\ (CTL) openstack-neutron-ml2 yum install openstack-neutron-linuxbridge ebtables ipset (CPT) \u914d\u7f6eneutron\u76f8\u5173\u914d\u7f6e\uff1a \u914d\u7f6e\u4e3b\u4f53\u914d\u7f6e vim /etc/neutron/neutron.conf [database] connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron (CTL) [DEFAULT] core_plugin = ml2 (CTL) service_plugins = router (CTL) allow_overlapping_ips = true (CTL) transport_url = rabbit://openstack:RABBIT_PASS@controller auth_strategy = keystone notify_nova_on_port_status_changes = true (CTL) notify_nova_on_port_data_changes = true (CTL) api_workers = 3 (CTL) [keystone_authtoken] www_authenticate_uri = http://controller:5000 auth_url = http://controller:5000 memcached_servers = controller:11211 auth_type = password 
Explanation:

- The `[database]` section configures the database entry point.
- The `[DEFAULT]` section enables the ML2 plug-in and the router plug-in, allows overlapping IP addresses, and configures the RabbitMQ message queue entry point.
- The `[DEFAULT]` and `[keystone_authtoken]` sections configure the Identity service entry point.
- The `[DEFAULT]` and `[nova]` sections configure networking to notify Compute of network topology changes.
- The `[oslo_concurrency]` section configures the lock path.

Note:

- Replace `NEUTRON_DBPASS` with the password of the neutron database.
- Replace `RABBIT_PASS` with the password of the openstack account in RabbitMQ.
- Replace `NEUTRON_PASS` with the password of the neutron user.
- Replace `NOVA_PASS` with the password of the nova user.

Configure the ML2 plug-in:

```shell
vim /etc/neutron/plugins/ml2/ml2_conf.ini

[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security

[ml2_type_flat]
flat_networks = provider

[ml2_type_vxlan]
vni_ranges = 1:1000

[securitygroup]
enable_ipset = true
```

Create the symbolic link /etc/neutron/plugin.ini:

```shell
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
```

Note:

- The `[ml2]` section enables flat, VLAN, and VXLAN networks, enables the linuxbridge and l2population mechanisms, and enables the port security extension driver.
- The `[ml2_type_flat]` section configures the flat network as a provider virtual network.
- The `[ml2_type_vxlan]` section configures the VXLAN network identifier range.
- The `[securitygroup]` section enables ipset.

Remark: the exact L2 configuration can be adjusted to the user's needs; this document uses a provider network with linuxbridge.

Configure the Linux bridge agent:

```shell
vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini

[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME

[vxlan]
enable_vxlan = true
local_ip = OVERLAY_INTERFACE_IP_ADDRESS
l2_population = true

[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
```

Explanation:

- The `[linux_bridge]` section maps the provider virtual network to the physical network interface.
- The `[vxlan]` section enables the VXLAN overlay network, configures the IP address of the physical network interface that handles overlay networks, and enables layer-2 population.
- The `[securitygroup]` section enables security groups and configures the Linux bridge iptables firewall driver.

Note:

- Replace `PROVIDER_INTERFACE_NAME` with the physical network interface.
- Replace `OVERLAY_INTERFACE_IP_ADDRESS` with the management IP address of the controller node.
Configure the Layer-3 agent:

```shell
vim /etc/neutron/l3_agent.ini                                (CTL)

[DEFAULT]
interface_driver = linuxbridge
```

Explanation: in the `[DEFAULT]` section, set the interface driver to linuxbridge.

Configure the DHCP agent:

```shell
vim /etc/neutron/dhcp_agent.ini                              (CTL)

[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
```

Explanation: the `[DEFAULT]` section configures the linuxbridge interface driver and the Dnsmasq DHCP driver, and enables isolated metadata.

Configure the metadata agent:

```shell
vim /etc/neutron/metadata_agent.ini                          (CTL)

[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = METADATA_SECRET
```

Explanation: the `[DEFAULT]` section configures the metadata host and the shared secret.

Note: replace `METADATA_SECRET` with a suitable metadata proxy secret.

Configure nova:

```shell
vim /etc/nova/nova.conf

[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = Default
user_domain_name = Default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
service_metadata_proxy = true                                (CTL)
metadata_proxy_shared_secret = METADATA_SECRET               (CTL)
```

Explanation: the `[neutron]` section configures the access parameters, enables the metadata proxy, and configures the secret.

Note: replace `NEUTRON_PASS` with the password of the neutron user; replace `METADATA_SECRET` with a suitable metadata proxy secret.

Synchronize the database:

```shell
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
    --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
```

Restart the Compute API service:

```shell
systemctl restart openstack-nova-api.service
```

Start the networking services:

```shell
systemctl enable neutron-server.service neutron-linuxbridge-agent.service \    (CTL)
    neutron-dhcp-agent.service neutron-metadata-agent.service \
    neutron-l3-agent.service
systemctl restart neutron-server.service neutron-linuxbridge-agent.service \   (CTL)
    neutron-dhcp-agent.service neutron-metadata-agent.service \
    neutron-l3-agent.service

systemctl enable neutron-linuxbridge-agent.service           (CPT)
systemctl restart neutron-linuxbridge-agent.service openstack-nova-compute.service    (CPT)
```

Verification

Verify that the neutron agents started successfully:

```shell
openstack network agent list
```

### Installing Cinder

Create the database, service credentials, and API endpoints.

Create the database:

```
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE cinder;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \
IDENTIFIED BY 'CINDER_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \
IDENTIFIED BY 'CINDER_DBPASS';
MariaDB [(none)]> exit
```

Note: replace `CINDER_DBPASS` to set a password for the cinder database.

```shell
source ~/.admin-openrc
```

Create the cinder service credentials:

```shell
openstack user create --domain default --password-prompt cinder
openstack role add --project service --user cinder admin
openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
```
Block Storage\" volumev3 \u521b\u5efa\u5757\u5b58\u50a8\u670d\u52a1API\u7aef\u70b9\uff1a openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\\(project_id\\)s openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\\(project_id\\)s openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\\(project_id\\)s openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\\(project_id\\)s openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\\(project_id\\)s openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\\(project_id\\)s \u5b89\u88c5\u8f6f\u4ef6\u5305\uff1a yum install openstack-cinder-api openstack-cinder-scheduler (CTL) yum install lvm2 device-mapper-persistent-data scsi-target-utils rpcbind nfs-utils \\ (STG) openstack-cinder-volume openstack-cinder-backup \u51c6\u5907\u5b58\u50a8\u8bbe\u5907\uff0c\u4ee5\u4e0b\u4ec5\u4e3a\u793a\u4f8b\uff1a pvcreate /dev/vdb vgcreate cinder-volumes /dev/vdb vim /etc/lvm/lvm.conf devices { ... filter = [ \"a/vdb/\", \"r/.*/\"] \u89e3\u91ca \u5728devices\u90e8\u5206\uff0c\u6dfb\u52a0\u8fc7\u6ee4\u4ee5\u63a5\u53d7/dev/vdb\u8bbe\u5907\u62d2\u7edd\u5176\u4ed6\u8bbe\u5907\u3002 \u51c6\u5907NFS mkdir -p /root/cinder/backup cat << EOF >> /etc/export /root/cinder/backup 192.168.1.0/24(rw,sync,no_root_squash,no_all_squash) EOF \u914d\u7f6ecinder\u76f8\u5173\u914d\u7f6e\uff1a vim /etc/cinder/cinder.conf [DEFAULT] transport_url = rabbit://openstack:RABBIT_PASS@controller auth_strategy = keystone my_ip = 10.0.0.11 enabled_backends = lvm (STG) backup_driver=cinder.backup.drivers.nfs.NFSBackupDriver (STG) backup_share=HOST:PATH (STG) [database] connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder [keystone_authtoken] www_authenticate_uri = http://controller:5000 auth_url = http://controller:5000 memcached_servers = controller:11211 auth_type = password project_domain_name = Default user_domain_name = Default project_name = service username = cinder password = CINDER_PASS [oslo_concurrency] lock_path = /var/lib/cinder/tmp [lvm] volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver (STG) volume_group = cinder-volumes (STG) iscsi_protocol = iscsi (STG) iscsi_helper = tgtadm (STG) \u89e3\u91ca [database]\u90e8\u5206\uff0c\u914d\u7f6e\u6570\u636e\u5e93\u5165\u53e3\uff1b [DEFAULT]\u90e8\u5206\uff0c\u914d\u7f6eRabbitMQ\u6d88\u606f\u961f\u5217\u5165\u53e3\uff0c\u914d\u7f6emy_ip\uff1b [DEFAULT] [keystone_authtoken]\u90e8\u5206\uff0c\u914d\u7f6e\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u5165\u53e3\uff1b [oslo_concurrency]\u90e8\u5206\uff0c\u914d\u7f6elock path\u3002 \u6ce8\u610f \u66ff\u6362 CINDER_DBPASS \u4e3a cinder \u6570\u636e\u5e93\u7684\u5bc6\u7801\uff1b \u66ff\u6362 RABBIT_PASS \u4e3a RabbitMQ \u4e2d openstack \u8d26\u6237\u7684\u5bc6\u7801\uff1b \u914d\u7f6e my_ip \u4e3a\u63a7\u5236\u8282\u70b9\u7684\u7ba1\u7406 IP \u5730\u5740\uff1b \u66ff\u6362 CINDER_PASS \u4e3a cinder \u7528\u6237\u7684\u5bc6\u7801\uff1b \u66ff\u6362 HOST:PATH \u4e3a NFS \u7684HOSTIP\u548c\u5171\u4eab\u8def\u5f84\uff1b \u540c\u6b65\u6570\u636e\u5e93\uff1a su -s /bin/sh -c \"cinder-manage db sync\" cinder (CTL) \u914d\u7f6enova\uff1a vim /etc/nova/nova.conf (CTL) [cinder] os_region_name = RegionOne \u91cd\u542f\u8ba1\u7b97API\u670d\u52a1 systemctl restart openstack-nova-api.service \u542f\u52a8cinder\u670d\u52a1 systemctl enable openstack-cinder-api.service 
```shell
systemctl enable rpcbind.service nfs-server.service tgtd.service iscsid.service \    (STG)
    openstack-cinder-volume.service \
    openstack-cinder-backup.service
systemctl start rpcbind.service nfs-server.service tgtd.service iscsid.service \     (STG)
    openstack-cinder-volume.service \
    openstack-cinder-backup.service
```

Note: when cinder attaches volumes through tgtadm, modify /etc/tgt/tgtd.conf with the following content so that tgtd can discover the iSCSI targets of cinder-volume:

```shell
include /var/lib/cinder/volumes/*
```

Verification

```shell
source ~/.admin-openrc
openstack volume service list
```

### Installing Horizon

Install the packages:

```shell
yum install openstack-dashboard
```

Modify the variables in the configuration file:

```shell
vim /etc/openstack-dashboard/local_settings

OPENSTACK_HOST = "controller"
ALLOWED_HOSTS = ['*', ]

SESSION_ENGINE = 'django.contrib.sessions.backends.cache'

CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'controller:11211',
    }
}

OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "member"
WEBROOT = '/dashboard'
POLICY_FILES_PATH = "/etc/openstack-dashboard"

OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 3,
}
```

Restart the httpd service:

```shell
systemctl restart httpd.service memcached.service
```

Verification: open a browser, go to http://HOSTIP/dashboard/ , and log in to Horizon.

Note: replace `HOSTIP` with the management plane IP address of the controller node.

### Installing Tempest

Tempest is the integration test suite of OpenStack. It is recommended if you need comprehensive automated functional testing of the installed OpenStack environment; otherwise it does not need to be installed.

Install Tempest:

```shell
yum install openstack-tempest
```

Initialize a workspace:

```shell
tempest init mytest
```

Modify the configuration file:

```shell
cd mytest
vi etc/tempest.conf
```

tempest.conf must be filled in with the information of the current OpenStack environment; refer to the official sample for the details.
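As a rough, non-authoritative sketch of the kind of options that usually need filling in (the values below, including the credentials and the image/flavor placeholders, are assumptions for this environment and are not part of the official sample):

```shell
vi etc/tempest.conf

[auth]
# Admin credentials of the deployment being tested
admin_username = admin
admin_password = ADMIN_PASS
admin_project_name = admin
admin_domain_name = Default

[identity]
# Keystone v3 endpoint of this deployment
uri_v3 = http://controller:5000/v3

[compute]
# IDs taken from the image and flavor created earlier (placeholders)
image_ref = <UUID of the cirros image>
flavor_ref = <ID of a small flavor>
```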
Run the tests:

```shell
tempest run
```

Install tempest extensions (optional)

The individual OpenStack services also provide their own tempest test packages, which can be installed to extend the tempest test coverage. In Train, extension tests are provided for Cinder, Glance, Keystone, Ironic, and Trove; they can be installed with:

```shell
yum install python3-cinder-tempest-plugin python3-glance-tempest-plugin python3-ironic-tempest-plugin python3-keystone-tempest-plugin python3-trove-tempest-plugin
```

### Installing Ironic

Ironic is the Bare Metal service of OpenStack. It is recommended if you need to provision bare metal machines; otherwise it does not need to be installed.

Set up the database

The Bare Metal service stores information in a database. Create an `ironic` database that the `ironic` user can access, and replace `IRONIC_DBPASSWORD` with a suitable password:

```
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE ironic CHARACTER SET utf8;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'localhost' \
IDENTIFIED BY 'IRONIC_DBPASSWORD';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'%' \
IDENTIFIED BY 'IRONIC_DBPASSWORD';
```

Install the packages:

```shell
yum install openstack-ironic-api openstack-ironic-conductor python3-ironicclient
```

Start the services:

```shell
systemctl enable openstack-ironic-api openstack-ironic-conductor
systemctl start openstack-ironic-api openstack-ironic-conductor
```

Create the service user credentials

1. Create the Bare Metal service user:

```shell
openstack user create --password IRONIC_PASSWORD \
    --email ironic@example.com ironic
openstack role add --project service --user ironic admin
openstack service create --name ironic \
    --description "Ironic baremetal provisioning service" baremetal
```

2. Create the Bare Metal service endpoints:

```shell
openstack endpoint create --region RegionOne baremetal admin http://$IRONIC_NODE:6385
openstack endpoint create --region RegionOne baremetal public http://$IRONIC_NODE:6385
openstack endpoint create --region RegionOne baremetal internal http://$IRONIC_NODE:6385
```

Configure the ironic-api service

Configuration file path: /etc/ironic/ironic.conf

1. Configure the location of the database through the `connection` option, replacing `IRONIC_DBPASSWORD` with the password of the `ironic` user and `DB_IP` with the IP address of the database server:

```shell
[database]

# The SQLAlchemy connection string used to connect to the
# database (string value)
connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic
```

2. Configure the ironic-api service to use the RabbitMQ message broker through the following option, replacing `RPC_*` with the RabbitMQ address details and credentials:

```shell
[DEFAULT]

# A URL representing the messaging driver to use and its full
# configuration. (string value)
transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
```

Users may also replace RabbitMQ with json-rpc on their own.

3. Configure the ironic-api service to use the Identity service credentials, replacing `PUBLIC_IDENTITY_IP` with the public IP of the Identity server, `PRIVATE_IDENTITY_IP` with the private IP of the Identity server, and `IRONIC_PASSWORD` with the password of the `ironic` user in the Identity service:

```shell
[DEFAULT]

# Authentication strategy used by ironic-api: one of
# "keystone" or "noauth". "noauth" should not be used in a
# production environment because all authentication will be
# disabled. (string value)
auth_strategy=keystone

[keystone_authtoken]
# Authentication type to load (string value)
auth_type=password
# Complete public Identity API endpoint (string value)
www_authenticate_uri=http://PUBLIC_IDENTITY_IP:5000
# Complete admin Identity API endpoint. (string value)
auth_url=http://PRIVATE_IDENTITY_IP:5000
# Service username. (string value)
username=ironic
# Service account password. (string value)
password=IRONIC_PASSWORD
# Service tenant name. (string value)
project_name=service
# Domain name containing project (string value)
project_domain_name=Default
# User's domain name (string value)
user_domain_name=Default
```
4. Create the Bare Metal service database tables:

```shell
ironic-dbsync --config-file /etc/ironic/ironic.conf create_schema
```

5. Restart the ironic-api service:

```shell
sudo systemctl restart openstack-ironic-api
```

Configure the ironic-conductor service

1. Replace `HOST_IP` with the IP of the conductor host:

```shell
[DEFAULT]

# IP address of this host. If unset, will determine the IP
# programmatically. If unable to do so, will use "127.0.0.1".
# (string value)
my_ip=HOST_IP
```

2. Configure the location of the database. ironic-conductor should use the same configuration as ironic-api. Replace `IRONIC_DBPASSWORD` with the password of the `ironic` user and `DB_IP` with the IP address of the database server:

```shell
[database]

# The SQLAlchemy connection string to use to connect to the
# database. (string value)
connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic
```

3. Configure the service to use the RabbitMQ message broker through the following option. ironic-conductor should use the same configuration as ironic-api. Replace `RPC_*` with the RabbitMQ address details and credentials:

```shell
[DEFAULT]

# A URL representing the messaging driver to use and its full
# configuration. (string value)
transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
```

Users may also replace RabbitMQ with json-rpc on their own.
4. Configure credentials for accessing other OpenStack services

To communicate with other OpenStack services, the Bare Metal service needs to authenticate with the OpenStack Identity service using service users when making requests to those services. The credentials of these users must be configured in each configuration section associated with the corresponding service:

- `[neutron]` - to access the OpenStack Networking service
- `[glance]` - to access the OpenStack Image service
- `[swift]` - to access the OpenStack Object Storage service
- `[cinder]` - to access the OpenStack Block Storage service
- `[inspector]` - to access the OpenStack bare metal introspection service
- `[service_catalog]` - a special entry that holds the credentials the Bare Metal service uses to discover its own API URL endpoint as registered in the OpenStack Identity service catalog

For simplicity, the same service user can be used for all services. For backward compatibility, it should be the same user as configured in `[keystone_authtoken]` of the ironic-api service. This is not mandatory, however; a different service user can be created and configured for each service.

In the example below, the authentication information for accessing the OpenStack Networking service is configured as follows:

- The Networking service is deployed in the Identity service region named RegionOne, and only the public endpoint interface is registered in the service catalog.
- Requests use a specific CA SSL certificate for HTTPS connections.
- The same service user as configured for the ironic-api service is used.
- The dynamic password authentication plug-in discovers a suitable Identity service API version based on the other options.

```shell
[neutron]

# Authentication type to load (string value)
auth_type = password
# Authentication URL (string value)
auth_url=https://IDENTITY_IP:5000/
# Username (string value)
username=ironic
# User's password (string value)
password=IRONIC_PASSWORD
# Project name to scope to (string value)
project_name=service
# Domain ID containing project (string value)
project_domain_id=default
# User's domain id (string value)
user_domain_id=default
# PEM encoded Certificate Authority to use when verifying
# HTTPs connections. (string value)
cafile=/opt/stack/data/ca-bundle.pem
# The default region_name for endpoint URL discovery. (string
# value)
region_name = RegionOne
# List of interfaces, in order of preference, for endpoint
# URL. (list value)
valid_interfaces=public
```
By default, in order to communicate with other services, the Bare Metal service tries to discover a suitable endpoint for each service through the service catalog of the Identity service. If you want to use a different endpoint for a particular service, specify it through the `endpoint_override` option in the Bare Metal service configuration file:

```shell
[neutron]
...
endpoint_override = 
```

5. Configure the allowed drivers and hardware types

Set the hardware types allowed by the ironic-conductor service through `enabled_hardware_types`:

```shell
[DEFAULT]
enabled_hardware_types = ipmi
```

Configure the hardware interfaces:

```shell
enabled_boot_interfaces = pxe
enabled_deploy_interfaces = direct,iscsi
enabled_inspect_interfaces = inspector
enabled_management_interfaces = ipmitool
enabled_power_interfaces = ipmitool
```

Configure the interface defaults:

```shell
[DEFAULT]
default_deploy_interface = direct
default_network_interface = neutron
```

If any driver that uses direct deploy is enabled, the Swift backend of the Image service must be installed and configured. The Ceph Object Gateway (RADOS Gateway) is also supported as an Image service backend.

6. Restart the ironic-conductor service:

```shell
sudo systemctl restart openstack-ironic-conductor
```

Configure the httpd service

Create the httpd root directory used by ironic and set its owner and group. The directory path must match the path specified by the `http_root` option in the `[deploy]` section of /etc/ironic/ironic.conf.

```shell
mkdir -p /var/lib/ironic/httproot
chown ironic.ironic /var/lib/ironic/httproot
```

Install and configure the httpd service

1. Install the httpd service (skip if it is already installed):

```shell
yum install httpd -y
```
2. Create the /etc/httpd/conf.d/openstack-ironic-httpd.conf file with the following content:

```shell
Listen 8080

<VirtualHost *:8080>
    ServerName ironic.openeuler.com

    ErrorLog "/var/log/httpd/openstack-ironic-httpd-error_log"
    CustomLog "/var/log/httpd/openstack-ironic-httpd-access_log" "%h %l %u %t \"%r\" %>s %b"

    DocumentRoot "/var/lib/ironic/httproot"
    <Directory "/var/lib/ironic/httproot">
        Options Indexes FollowSymLinks
        Require all granted
    </Directory>
    LogLevel warn
    AddDefaultCharset UTF-8
    EnableSendfile on
</VirtualHost>
```

Note: the listening port must match the port specified by the `http_url` option in the `[deploy]` section of /etc/ironic/ironic.conf.

Restart the httpd service:

```shell
systemctl restart httpd
```

7. Build the deploy ramdisk image

The Train ramdisk image can be built with the ironic-python-agent service or the disk-image-builder tool, or with the latest ironic-python-agent-builder from the community. Users may also choose other tools.

To use the native Train tools, install the corresponding package:

```shell
yum install openstack-ironic-python-agent
```

or

```shell
yum install diskimage-builder
```

See the official documentation for the detailed usage.

The following describes the complete process of building the deploy image used by ironic with ironic-python-agent-builder.

Install ironic-python-agent-builder

1. Install the tool:

```shell
pip install ironic-python-agent-builder
```

2. Modify the Python interpreter in the following files:

```shell
/usr/bin/yum /usr/libexec/urlgrabber-ext-down
```
3. Install the other required tools:

```shell
yum install git
```

Since `DIB` depends on the `semanage` command, make sure the command is available before building the image: `semanage --help`. If the command is missing, install it:

```shell
# First find out which package provides it
[root@localhost ~]# yum provides /usr/sbin/semanage
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirror.vcu.edu
 * extras: mirror.vcu.edu
 * updates: mirror.math.princeton.edu
policycoreutils-python-2.5-34.el7.aarch64 : SELinux policy core python utilities
Repo         : base
Matched from:
Filename     : /usr/sbin/semanage
# Install it
[root@localhost ~]# yum install policycoreutils-python
```

Build the image

On the `arm` architecture, additionally set:

```shell
export ARCH=aarch64
```

Basic usage:

```shell
usage: ironic-python-agent-builder [-h] [-r RELEASE] [-o OUTPUT] [-e ELEMENT]
                                   [-b BRANCH] [-v] [--extra-args EXTRA_ARGS]
                                   distribution

positional arguments:
  distribution          Distribution to use

optional arguments:
  -h, --help            show this help message and exit
  -r RELEASE, --release RELEASE
                        Distribution release to use
  -o OUTPUT, --output OUTPUT
                        Output base file name
  -e ELEMENT, --element ELEMENT
                        Additional DIB element to use
  -b BRANCH, --branch BRANCH
                        If set, override the branch that is used for
                        ironic-python-agent and requirements
  -v, --verbose         Enable verbose logging in diskimage-builder
  --extra-args EXTRA_ARGS
                        Extra arguments to pass to diskimage-builder
```

Example:

```shell
ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky
```

Allow SSH login

Initialize the environment variables, then build the image:

```shell
export DIB_DEV_USER_USERNAME=ipa
export DIB_DEV_USER_PWDLESS_SUDO=yes
export DIB_DEV_USER_PASSWORD='123'
ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky -e selinux-permissive -e devuser
```

Specify a code repository

Initialize the corresponding environment variables, then build the image:

```shell
# Specify the repository location and version
DIB_REPOLOCATION_ironic_python_agent=git@172.20.2.149:liuzz/ironic-python-agent.git
DIB_REPOREF_ironic_python_agent=origin/develop

# Clone the code directly from gerrit
DIB_REPOLOCATION_ironic_python_agent=https://review.opendev.org/openstack/ironic-python-agent
DIB_REPOREF_ironic_python_agent=refs/changes/43/701043/1
```

Reference: [source-repositories](https://docs.openstack.org/diskimage-builder/latest/elements/source-repositories/README.html).

Specifying the repository location and version has been verified to work.

Note

The PXE configuration file template in native OpenStack does not support the arm64 architecture, so the native OpenStack code must be modified:

In Train, the community ironic still does not support UEFI PXE boot on arm64. The symptom is that the generated grub.cfg file (usually located under /tftpboot/) has the wrong format, which makes PXE boot fail; users need to modify the code that generates grub.cfg themselves.
TLS errors when ironic sends status query requests to IPA:

In Train, both IPA and ironic enable TLS authentication by default when sending requests to each other. Disable it as described in the official documentation.

Modify the ironic configuration file (/etc/ironic/ironic.conf), adding `ipa-insecure=1` to the following configuration:

```shell
[agent]
verify_ca = False

[pxe]
pxe_append_params = nofb nomodeset vga=normal coreos.autologin ipa-insecure=1
```

Add the IPA configuration file /etc/ironic_python_agent/ironic_python_agent.conf to the ramdisk image and configure TLS as follows (the /etc/ironic_python_agent directory must be created in advance):

```shell
[DEFAULT]
enable_auto_tls = False
```

Set the permissions:

```shell
chown -R ipa.ipa /etc/ironic_python_agent/
```

Modify the service startup file of the IPA service to add the configuration file option:

```shell
vim /usr/lib/systemd/system/ironic-python-agent.service

[Unit]
Description=Ironic Python Agent
After=network-online.target

[Service]
ExecStartPre=/sbin/modprobe vfat
ExecStart=/usr/local/bin/ironic-python-agent --config-file /etc/ironic_python_agent/ironic_python_agent.conf
Restart=always
RestartSec=30s

[Install]
WantedBy=multi-user.target
```

In Train, services such as ironic-inspector are also provided; users can install them as needed.

### Installing Kolla

Kolla provides production-ready containerized deployment for OpenStack services.

Installing Kolla is very simple; only the corresponding RPM packages need to be installed:

```shell
yum install openstack-kolla openstack-kolla-ansible
```

After the installation, the `kolla-ansible`, `kolla-build`, `kolla-genpwd`, `kolla-mergepwd`, and related commands can be used to build images and deploy the container environment.
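As a rough illustration of how these commands fit together (the inventory path, base distribution, and image names below are placeholder assumptions, not prescribed by this guide), a typical workflow looks roughly like:

```shell
# Generate random service passwords into /etc/kolla/passwords.yml
kolla-genpwd

# Build container images locally (base distribution chosen only as an example)
kolla-build --base centos keystone nova

# Deploy using an Ansible inventory file prepared beforehand (placeholder path)
kolla-ansible -i ./multinode bootstrap-servers
kolla-ansible -i ./multinode prechecks
kolla-ansible -i ./multinode deploy
```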
### Installing Trove

Trove is the Database service of OpenStack. It is recommended if you want to use the database service provided by OpenStack; otherwise it does not need to be installed.

1. Set up the database

The Database service stores information in a database. Create a `trove` database that the `trove` user can access, and replace `TROVE_DBPASSWORD` with a suitable password:

```
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE trove CHARACTER SET utf8;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'localhost' \
IDENTIFIED BY 'TROVE_DBPASSWORD';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'%' \
IDENTIFIED BY 'TROVE_DBPASSWORD';
```

2. Create the service user credentials

1) Create the Trove service user:

```shell
openstack user create --domain default --password-prompt trove
openstack role add --project service --user trove admin
openstack service create --name trove --description "Database" database
```

Explanation: replace `TROVE_PASSWORD` with the password of the trove user.

2) Create the Database service endpoints:

```shell
openstack endpoint create --region RegionOne database public http://controller:8779/v1.0/%\(tenant_id\)s
openstack endpoint create --region RegionOne database internal http://controller:8779/v1.0/%\(tenant_id\)s
openstack endpoint create --region RegionOne database admin http://controller:8779/v1.0/%\(tenant_id\)s
```

3. Install and configure the Trove components

1) Install the Trove packages:

```shell
yum install openstack-trove python3-troveclient
```

2) Configure trove.conf:

```shell
vim /etc/trove/trove.conf

[DEFAULT]
log_dir = /var/log/trove
trove_auth_url = http://controller:5000/
nova_compute_url = http://controller:8774/v2
cinder_url = http://controller:8776/v1
swift_url = http://controller:8080/v1/AUTH_
rpc_backend = rabbit
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672
auth_strategy = keystone
add_addresses = True
api_paste_config = /etc/trove/api-paste.ini
nova_proxy_admin_user = admin
nova_proxy_admin_pass = ADMIN_PASSWORD
nova_proxy_admin_tenant_name = service
taskmanager_manager = trove.taskmanager.manager.Manager
use_nova_server_config_drive = True
# Set these if using Neutron Networking
network_driver = trove.network.neutron.NeutronDriver
network_label_regex = .*

[database]
connection = mysql+pymysql://trove:TROVE_DBPASSWORD@controller/trove

[keystone_authtoken]
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = trove
password = TROVE_PASSWORD
```

Explanation:

- In the `[DEFAULT]` section, `nova_compute_url` and `cinder_url` are the endpoints created for Nova and Cinder in Keystone.
- `nova_proxy_XXX` is the information of a user that can access the Nova service; the example above uses the `admin` user.
- `transport_url` is the RabbitMQ connection information; replace `RABBIT_PASS` with the RabbitMQ password.
- In the `[database]` section, `connection` is the information of the database created for Trove in MySQL earlier.
- In the Trove user information, replace `TROVE_PASSWORD` with the actual password of the trove user.

3) Configure trove-guestagent.conf:

```shell
vim /etc/trove/trove-guestagent.conf

rabbit_host = controller
rabbit_password = RABBIT_PASS
trove_auth_url = http://controller:5000/
```

Explanation: the guestagent is an independent Trove component that must be built in advance into the virtual machine images that Trove creates through Nova. After a database instance is created, the guestagent process starts and reports heartbeats to Trove through the message queue (RabbitMQ), so the RabbitMQ user and password must be configured. Starting from Victoria, Trove uses a single unified image to run the different database types; the database services run in Docker containers inside the guest VM. Replace `RABBIT_PASS` with the RabbitMQ password.

4) Generate the Trove database tables:

```shell
su -s /bin/sh -c "trove-manage db_sync" trove
```

4. Complete the installation and configuration

Configure the Trove services to start at boot:
## Swift Installation

Swift provides an elastic, scalable, and highly available distributed object storage service, suitable for storing large amounts of unstructured data.

Create the service credentials and API endpoints.

Create the service credentials:

```shell
# create the swift user
openstack user create --domain default --password-prompt swift
# add the admin role to the swift user
openstack role add --project service --user swift admin
# create the swift service entity
openstack service create --name swift --description "OpenStack Object Storage" object-store
```

Create the swift API endpoints:

```shell
openstack endpoint create --region RegionOne object-store public http://controller:8080/v1/AUTH_%\(project_id\)s
openstack endpoint create --region RegionOne object-store internal http://controller:8080/v1/AUTH_%\(project_id\)s
openstack endpoint create --region RegionOne object-store admin http://controller:8080/v1
```

Install the packages (CTL):

```shell
yum install openstack-swift-proxy python3-swiftclient python3-keystoneclient python3-keystonemiddleware memcached
```

Configure proxy-server:

The Swift RPM package already ships a basically usable proxy-server.conf; only the IP address and the swift password need to be changed manually.

Note: replace the password with the one you chose for the swift user in the Identity service.

Install and configure the storage nodes (STG):

Install the supporting packages:

```shell
yum install xfsprogs rsync
```

Format the /dev/vdb and /dev/vdc devices as XFS:

```shell
mkfs.xfs /dev/vdb
mkfs.xfs /dev/vdc
```

Create the mount point directory structure:

```shell
mkdir -p /srv/node/vdb
mkdir -p /srv/node/vdc
```

Find the UUIDs of the new partitions:

```shell
blkid
```

Edit the /etc/fstab file and add the following to it:

```
UUID="" /srv/node/vdb xfs noatime 0 2
UUID="" /srv/node/vdc xfs noatime 0 2
```

Mount the devices:

```shell
mount /srv/node/vdb
mount /srv/node/vdc
```

Note: if you do not need the redundancy, creating a single device is enough, and the rsync configuration below can be skipped.

(Optional) Create or edit the /etc/rsyncd.conf file to contain the following:

```ini
[DEFAULT]
uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
address = MANAGEMENT_INTERFACE_IP_ADDRESS

[account]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/account.lock

[container]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/container.lock

[object]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/object.lock
```
Replace MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network on the storage node.

Start the rsyncd service and configure it to start at boot:

```shell
systemctl enable rsyncd.service
systemctl start rsyncd.service
```

Install and configure the components on the storage nodes (STG):

Install the packages:

```shell
yum install openstack-swift-account openstack-swift-container openstack-swift-object
```

Edit the account-server.conf, container-server.conf, and object-server.conf files in the /etc/swift directory, replacing bind_ip with the IP address of the management network on the storage node.

Ensure proper ownership of the mount point directory structure:

```shell
chown -R swift:swift /srv/node
```

Create the recon directory and ensure proper ownership of it:

```shell
mkdir -p /var/cache/swift
chown -R root:swift /var/cache/swift
chmod -R 775 /var/cache/swift
```

Create the account ring (CTL):

Change to the /etc/swift directory:

```shell
cd /etc/swift
```

Create the base account.builder file:

```shell
swift-ring-builder account.builder create 10 1 1
```

Add each storage node to the ring:

```shell
swift-ring-builder account.builder add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6202 --device DEVICE_NAME --weight DEVICE_WEIGHT
```

Replace STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network on the storage node, and DEVICE_NAME with the name of a storage device on the same node.

Note: repeat this command for every storage device on every storage node.

Verify the ring contents:

```shell
swift-ring-builder account.builder
```

Rebalance the ring:

```shell
swift-ring-builder account.builder rebalance
```

Create the container ring (CTL):

Change to the /etc/swift directory.

Create the base container.builder file:

```shell
swift-ring-builder container.builder create 10 1 1
```

Add each storage node to the ring:

```shell
swift-ring-builder container.builder \
  add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6201 \
  --device DEVICE_NAME --weight 100
```

Replace STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network on the storage node, and DEVICE_NAME with the name of a storage device on the same node.

Note: repeat this command for every storage device on every storage node.

Verify the ring contents:

```shell
swift-ring-builder container.builder
```

Rebalance the ring:

```shell
swift-ring-builder container.builder rebalance
```

Create the object ring (CTL):

Change to the /etc/swift directory.

Create the base object.builder file:

```shell
swift-ring-builder object.builder create 10 1 1
```

Add each storage node to the ring:

```shell
swift-ring-builder object.builder \
  add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6200 \
  --device DEVICE_NAME --weight 100
```

Replace STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network on the storage node, and DEVICE_NAME with the name of a storage device on the same node.

Note: repeat this command for every storage device on every storage node.

Verify the ring contents:

```shell
swift-ring-builder object.builder
```

Rebalance the ring:

```shell
swift-ring-builder object.builder rebalance
```
Distribute the ring configuration files:

Copy the account.ring.gz, container.ring.gz, and object.ring.gz files to the /etc/swift directory on every storage node and on any other node running the proxy service.

Finish the installation:

Edit the /etc/swift/swift.conf file:

```ini
[swift-hash]
swift_hash_path_suffix = test-hash
swift_hash_path_prefix = test-hash

[storage-policy:0]
name = Policy-0
default = yes
```

Replace test-hash with unique values.

Copy the swift.conf file to the /etc/swift directory on every storage node and on any other node running the proxy service.

On all nodes, ensure proper ownership of the configuration directory:

```shell
chown -R root:swift /etc/swift
```

On the controller node and on any other node running the proxy service, start the Object Storage proxy service and its dependencies and configure them to start at boot:

```shell
systemctl enable openstack-swift-proxy.service memcached.service
systemctl start openstack-swift-proxy.service memcached.service
```

On the storage nodes, start the Object Storage services and configure them to start at boot:

```shell
systemctl enable openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service
systemctl start openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service

systemctl enable openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service
systemctl start openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service

systemctl enable openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service
systemctl start openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service
```
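A quick end-to-end check of the object store, as an optional sketch; the container and file names below are arbitrary examples:

```shell
source ~/.admin-openrc
swift stat                                 # account summary via the proxy service
echo "hello swift" > hello.txt
openstack container create demo-container  # example container name
openstack object create demo-container hello.txt
openstack object list demo-container
```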
## Cyborg Installation

Cyborg provides accelerator device support for OpenStack, including GPU, FPGA, ASIC, NP, SoCs, NVMe/NOF SSDs, ODP, DPDK/SPDK, and so on.

1. Initialize the database:

```sql
CREATE DATABASE cyborg;
GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'localhost' IDENTIFIED BY 'CYBORG_DBPASS';
GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'%' IDENTIFIED BY 'CYBORG_DBPASS';
```

2. Create the corresponding Keystone resource objects (insert the controller's address before the port in the endpoint URLs):

```shell
openstack user create --domain default --password-prompt cyborg
openstack role add --project service --user cyborg admin
openstack service create --name cyborg --description "Acceleration Service" accelerator
openstack endpoint create --region RegionOne \
  accelerator public http://:6666/v1
openstack endpoint create --region RegionOne \
  accelerator internal http://:6666/v1
openstack endpoint create --region RegionOne \
  accelerator admin http://:6666/v1
```

3. Install Cyborg:

```shell
yum install openstack-cyborg
```

4. Configure Cyborg:

Modify /etc/cyborg/cyborg.conf, adjusting the usernames, passwords, IP addresses, and so on to your environment:

```ini
[DEFAULT]
transport_url = rabbit://%RABBITMQ_USER%:%RABBITMQ_PASSWORD%@%OPENSTACK_HOST_IP%:5672/
use_syslog = False
state_path = /var/lib/cyborg
debug = True

[database]
connection = mysql+pymysql://%DATABASE_USER%:%DATABASE_PASSWORD%@%OPENSTACK_HOST_IP%/cyborg

[service_catalog]
project_domain_id = default
user_domain_id = default
project_name = service
password = PASSWORD
username = cyborg
auth_url = http://%OPENSTACK_HOST_IP%/identity
auth_type = password

[placement]
project_domain_name = Default
project_name = service
user_domain_name = Default
password = PASSWORD
username = placement
auth_url = http://%OPENSTACK_HOST_IP%/identity
auth_type = password

[keystone_authtoken]
memcached_servers = localhost:11211
project_domain_name = Default
project_name = service
user_domain_name = Default
password = PASSWORD
username = cyborg
auth_url = http://%OPENSTACK_HOST_IP%/identity
auth_type = password
```

5. Synchronize the database tables:

```shell
cyborg-dbsync --config-file /etc/cyborg/cyborg.conf upgrade
```

6. Start the Cyborg services:

```shell
systemctl enable openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent
systemctl start openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent
```
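As an optional sanity check, you can list the accelerator devices discovered by the agent. This assumes the Cyborg command-line client is installed (the package name python3-cyborgclient below is an assumption; it is not part of the installation above):

```shell
source ~/.admin-openrc
# requires the cyborg client plugin for the openstack CLI (assumed package: python3-cyborgclient)
openstack accelerator device list
```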
## Aodh Installation

1. Create the database:

```sql
CREATE DATABASE aodh;
GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'localhost' IDENTIFIED BY 'AODH_DBPASS';
GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'%' IDENTIFIED BY 'AODH_DBPASS';
```

2. Create the corresponding Keystone resource objects:

```shell
openstack user create --domain default --password-prompt aodh
openstack role add --project service --user aodh admin
openstack service create --name aodh --description "Telemetry" alarming
openstack endpoint create --region RegionOne alarming public http://controller:8042
openstack endpoint create --region RegionOne alarming internal http://controller:8042
openstack endpoint create --region RegionOne alarming admin http://controller:8042
```

3. Install Aodh:

```shell
yum install openstack-aodh-api openstack-aodh-evaluator openstack-aodh-notifier openstack-aodh-listener openstack-aodh-expirer python3-aodhclient
```

4. Modify the configuration file:

```ini
[database]
connection = mysql+pymysql://aodh:AODH_DBPASS@controller/aodh

[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = aodh
password = AODH_PASS

[service_credentials]
auth_type = password
auth_url = http://controller:5000/v3
project_domain_id = default
user_domain_id = default
project_name = service
username = aodh
password = AODH_PASS
interface = internalURL
region_name = RegionOne
```

5. Initialize the database:

```shell
aodh-dbsync
```

6. Start the Aodh services:

```shell
systemctl enable openstack-aodh-api.service openstack-aodh-evaluator.service openstack-aodh-notifier.service openstack-aodh-listener.service
systemctl start openstack-aodh-api.service openstack-aodh-evaluator.service openstack-aodh-notifier.service openstack-aodh-listener.service
```
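A minimal check that the alarming API is reachable, using the aodh client installed above (an empty list is the expected result on a fresh deployment):

```shell
source ~/.admin-openrc
aodh alarm list
```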
## Gnocchi Installation

1. Create the database:

```sql
CREATE DATABASE gnocchi;
GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'localhost' IDENTIFIED BY 'GNOCCHI_DBPASS';
GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'%' IDENTIFIED BY 'GNOCCHI_DBPASS';
```

2. Create the corresponding Keystone resource objects:

```shell
openstack user create --domain default --password-prompt gnocchi
openstack role add --project service --user gnocchi admin
openstack service create --name gnocchi --description "Metric Service" metric
openstack endpoint create --region RegionOne metric public http://controller:8041
openstack endpoint create --region RegionOne metric internal http://controller:8041
openstack endpoint create --region RegionOne metric admin http://controller:8041
```

3. Install Gnocchi:

```shell
yum install openstack-gnocchi-api openstack-gnocchi-metricd python3-gnocchiclient
```

4. Modify the configuration file /etc/gnocchi/gnocchi.conf:

```ini
[api]
auth_mode = keystone
port = 8041
uwsgi_mode = http-socket

[keystone_authtoken]
auth_type = password
auth_url = http://controller:5000/v3
project_domain_name = Default
user_domain_name = Default
project_name = service
username = gnocchi
password = GNOCCHI_PASS
interface = internalURL
region_name = RegionOne

[indexer]
url = mysql+pymysql://gnocchi:GNOCCHI_DBPASS@controller/gnocchi

[storage]
# coordination_url is not required but specifying one will improve
# performance with better workload division across workers.
coordination_url = redis://controller:6379
file_basepath = /var/lib/gnocchi
driver = file
```

5. Initialize the database:

```shell
gnocchi-upgrade
```

6. Start the Gnocchi services:

```shell
systemctl enable openstack-gnocchi-api.service openstack-gnocchi-metricd.service
systemctl start openstack-gnocchi-api.service openstack-gnocchi-metricd.service
```

## Ceilometer Installation

1. Create the corresponding Keystone resource objects:

```shell
openstack user create --domain default --password-prompt ceilometer
openstack role add --project service --user ceilometer admin
openstack service create --name ceilometer --description "Telemetry" metering
```

2. Install Ceilometer:

```shell
yum install openstack-ceilometer-notification openstack-ceilometer-central
```

3. Modify the configuration file /etc/ceilometer/pipeline.yaml:

```yaml
publishers:
    # set address of Gnocchi
    # + filter out Gnocchi-related activity meters (Swift driver)
    # + set default archive policy
    - gnocchi://?filter_project=service&archive_policy=low
```

4. Modify the configuration file /etc/ceilometer/ceilometer.conf:

```ini
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller

[service_credentials]
auth_type = password
auth_url = http://controller:5000/v3
project_domain_id = default
user_domain_id = default
project_name = service
username = ceilometer
password = CEILOMETER_PASS
interface = internalURL
region_name = RegionOne
```

5. Initialize the database:

```shell
ceilometer-upgrade
```

6. Start the Ceilometer services:

```shell
systemctl enable openstack-ceilometer-notification.service openstack-ceilometer-central.service
systemctl start openstack-ceilometer-notification.service openstack-ceilometer-central.service
```
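To verify that the metric API answers and that Ceilometer is publishing into Gnocchi, a small sketch using the gnocchi client installed above (resources and metrics appear only after Ceilometer has polled for a while):

```shell
source ~/.admin-openrc
gnocchi resource list   # resources reported by ceilometer
gnocchi metric list     # metrics stored in gnocchi
```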
## Heat Installation

1. Create the heat database and grant it the proper access permissions, replacing HEAT_DBPASS with a suitable password:

```sql
CREATE DATABASE heat;
GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'localhost' IDENTIFIED BY 'HEAT_DBPASS';
GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'%' IDENTIFIED BY 'HEAT_DBPASS';
```

2. Create the service credentials: create the heat user and add the admin role to it:

```shell
openstack user create --domain default --password-prompt heat
openstack role add --project service --user heat admin
```

3. Create the heat and heat-cfn services and their API endpoints:

```shell
openstack service create --name heat --description "Orchestration" orchestration
openstack service create --name heat-cfn --description "Orchestration" cloudformation
openstack endpoint create --region RegionOne orchestration public http://controller:8004/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne orchestration internal http://controller:8004/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne orchestration admin http://controller:8004/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne cloudformation public http://controller:8000/v1
openstack endpoint create --region RegionOne cloudformation internal http://controller:8000/v1
openstack endpoint create --region RegionOne cloudformation admin http://controller:8000/v1
```

4. Create the additional information required for stack management, including the heat domain and its admin user heat_domain_admin, the heat_stack_owner role, and the heat_stack_user role:

```shell
openstack user create --domain heat --password-prompt heat_domain_admin
openstack role add --domain heat --user-domain heat --user heat_domain_admin admin
openstack role create heat_stack_owner
openstack role create heat_stack_user
```

5. Install the packages:

```shell
yum install openstack-heat-api openstack-heat-api-cfn openstack-heat-engine
```

6. Modify the configuration file /etc/heat/heat.conf:

```ini
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
heat_metadata_server_url = http://controller:8000
heat_waitcondition_server_url = http://controller:8000/v1/waitcondition
stack_domain_admin = heat_domain_admin
stack_domain_admin_password = HEAT_DOMAIN_PASS
stack_user_domain_name = heat

[database]
connection = mysql+pymysql://heat:HEAT_DBPASS@controller/heat

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = heat
password = HEAT_PASS

[trustee]
auth_type = password
auth_url = http://controller:5000
username = heat
password = HEAT_PASS
user_domain_name = default

[clients_keystone]
auth_uri = http://controller:5000
```

7. Initialize the heat database tables:

```shell
su -s /bin/sh -c "heat-manage db_sync" heat
```

8. Start the services:

```shell
systemctl enable openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service
systemctl start openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service
```
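A brief check that the orchestration service registered correctly, using standard openstack CLI commands:

```shell
source ~/.admin-openrc
openstack orchestration service list   # heat-engine workers should show status "up"
openstack stack list                   # empty on a fresh deployment
```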
## Quick Deployment with the OpenStack SIG Development Tool oos

oos (openEuler OpenStack SIG tool) is the command-line tool provided by the OpenStack SIG. Its oos env sub-commands provide Ansible scripts for one-click deployment of OpenStack (all in one, or a three-node cluster), which you can use to quickly deploy an OpenStack environment based on openEuler RPM packages.

The oos tool can deploy an OpenStack environment either through a cloud provider (currently only the Huawei Cloud provider is supported) or by taking over existing hosts. The following uses the Huawei Cloud provider to deploy an all-in-one OpenStack environment as an example.

1. Install the oos tool:

```shell
pip install openstack-sig-tool
```

2. Configure the Huawei Cloud provider information.

Open the /usr/local/etc/oos/oos.conf file and set the configuration to your own Huawei Cloud resources:

```ini
[huaweicloud]
ak =
sk =
region = ap-southeast-3
root_volume_size = 100
data_volume_size = 100
security_group_name = oos
image_format = openEuler-%%(release)s-%%(arch)s
vpc_name = oos_vpc
subnet1_name = oos_subnet1
subnet2_name = oos_subnet2
```

3. Configure the OpenStack environment information.

Open the /usr/local/etc/oos/oos.conf file and adjust the configuration to the current machine and your needs. The content is as follows:

```ini
[environment]
mysql_root_password = root
mysql_project_password = root
rabbitmq_password = root
project_identity_password = root
enabled_service = keystone,neutron,cinder,placement,nova,glance,horizon,aodh,ceilometer,cyborg,gnocchi,kolla,heat,swift,trove,tempest
neutron_provider_interface_name = br-ex
default_ext_subnet_range = 10.100.100.0/24
default_ext_subnet_gateway = 10.100.100.1
neutron_dataplane_interface_name = eth1
cinder_block_device = vdb
swift_storage_devices = vdc
swift_hash_path_suffix = ash
swift_hash_path_prefix = has
glance_api_workers = 2
cinder_api_workers = 2
nova_api_workers = 2
nova_metadata_api_workers = 2
nova_conductor_workers = 2
nova_scheduler_workers = 2
neutron_api_workers = 2
horizon_allowed_host = *
kolla_openeuler_plugin = false
```

Key options:

| Option | Description |
|:---|:---|
| enabled_service | List of services to install; trim it according to your needs |
| neutron_provider_interface_name | Name of the neutron L3 bridge |
| default_ext_subnet_range | IP range of the neutron private network |
| default_ext_subnet_gateway | Gateway of the neutron private network |
| neutron_dataplane_interface_name | NIC used by neutron; a dedicated new NIC is recommended to avoid conflicts with the existing NIC and to keep the all-in-one host reachable |
| cinder_block_device | Volume device name used by cinder |
| swift_storage_devices | Volume device name used by swift |
| kolla_openeuler_plugin | Whether to enable the kolla plugin; when set to True, kolla can deploy openEuler containers |

4. Create an openEuler 22.03-LTS-SP3 x86_64 virtual machine on Huawei Cloud to host the all-in-one OpenStack deployment:

```shell
# sshpass is used during `oos env create` to set up password-less access to the target virtual machine
dnf install sshpass
oos env create -r 22.03-lts-SP3 -f small -a x86 -n test-oos all_in_one
```

See `oos env create --help` for the full list of parameters.

5. Deploy the all-in-one OpenStack environment:

```shell
oos env setup test-oos -r train
```

See `oos env setup --help` for the full list of parameters.

6. Initialize the tempest environment.

If you want to run tempest tests against this environment, run `oos env init`, which automatically creates the OpenStack resources tempest needs:

```shell
oos env init test-oos
```

After the command succeeds, a mytest directory is created in the user's home directory; change into it and run the tempest run command.

If you deploy the OpenStack environment by taking over existing hosts instead, the overall flow is the same as for Huawei Cloud above: steps 1, 3, 5, and 6 are unchanged, step 2 (configuring the Huawei Cloud provider) is dropped, and step 4 changes from creating a virtual machine on Huawei Cloud to taking over the host:

```shell
# sshpass is used during `oos env create` to set up password-less access to the target host
dnf install sshpass
oos env manage -r 22.03-lts-SP3 -i TARGET_MACHINE_IP -p TARGET_MACHINE_PASSWD -n test-oos
```

Replace TARGET_MACHINE_IP with the target machine's IP address and TARGET_MACHINE_PASSWD with its password. See `oos env manage --help` for the full list of parameters.
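Once `oos env setup` finishes, a quick sanity check is to log in to the deployed node and list the registered services with standard openstack CLI commands. The credential file path below is an assumption; use whatever credentials the deployment created in your environment:

```shell
# on the deployed all-in-one node
source ~/.admin-openrc          # path is an assumption
openstack service list
openstack compute service list
openstack network agent list
```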
## Deployment with the OpenStack SIG Deployment Tool opensd

opensd deploys the OpenStack component services in batches through scripts.

### Deployment Steps

### 1. Information to confirm before deployment

- When installing the operating system, set selinux to disabled.
- When installing the operating system, set UseDNS to no in the /etc/ssh/sshd_config configuration file.
- The operating system language must be set to English.
- Before deployment, make sure the /etc/hosts file on all compute nodes contains no entries resolving the compute hosts.

### 2. Create ceph pools and authentication (optional)

Skip this step if you do not use ceph or already have a ceph cluster.

Run the following on any ceph monitor node:

#### 2.1 Create the pools

```shell
ceph osd pool create volumes 2048
ceph osd pool create images 2048
```

#### 2.2 Initialize the pools

```shell
rbd pool init volumes
rbd pool init images
```

#### 2.3 Create the user authentication

```shell
ceph auth get-or-create client.glance mon 'profile rbd' osd 'profile rbd pool=images' mgr 'profile rbd pool=images'
ceph auth get-or-create client.cinder mon 'profile rbd' osd 'profile rbd pool=volumes, profile rbd pool=images' mgr 'profile rbd pool=volumes'
```

### 3. Configure LVM (optional)

Depending on the physical machine's disk layout and available free disks, mount additional disk space for the MySQL data directory. Example (adapt to your actual environment):

```shell
fdisk -l

Disk /dev/sdd: 479.6 GB, 479559942144 bytes, 936640512 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk label type: dos
Disk identifier: 0x000ed242
```

Create the partition:

```shell
parted /dev/sdd mkpart primary 0 -1
```

Create the PV:

```shell
partprobe /dev/sdd1
pvcreate /dev/sdd1
```

Create and activate the VG:

```shell
vgcreate vg_mariadb /dev/sdd1
vgchange -ay vg_mariadb
```

Check the VG capacity:

```shell
vgdisplay

--- Volume group ---
VG Name               vg_mariadb
System ID
Format                lvm2
Metadata Areas        1
Metadata Sequence No  2
VG Access             read/write
VG Status             resizable
MAX LV                0
Cur LV                1
Open LV               1
Max PV                0
Cur PV                1
Act PV                1
VG Size               446.62 GiB
PE Size               4.00 MiB
Total PE              114335
Alloc PE / Size       114176 / 446.00 GiB
Free  PE / Size       159 / 636.00 MiB
VG UUID               bVUmDc-VkMu-Vi43-mg27-TEkG-oQfK-TvqdEc
```

Create the LV:

```shell
lvcreate -L 446G -n lv_mariadb vg_mariadb
```

Format the disk and get the volume's UUID:

```shell
mkfs.ext4 /dev/mapper/vg_mariadb-lv_mariadb
blkid /dev/mapper/vg_mariadb-lv_mariadb

/dev/mapper/vg_mariadb-lv_mariadb: UUID="98d513eb-5f64-4aa5-810e-dc7143884fa2" TYPE="ext4"
```

Note: 98d513eb-5f64-4aa5-810e-dc7143884fa2 is the volume's UUID.

Mount the disk:

```shell
mount /dev/mapper/vg_mariadb-lv_mariadb /var/lib/mysql
rm -rf /var/lib/mysql/*
```
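The mount above does not persist across reboots. If you want it to, one option is an /etc/fstab entry; this step is not part of the original procedure, and the UUID below is the example value from the blkid output, so replace it with your own:

```shell
# persist the MariaDB data mount across reboots (UUID is the example from above)
echo 'UUID=98d513eb-5f64-4aa5-810e-dc7143884fa2 /var/lib/mysql ext4 defaults 0 2' >> /etc/fstab
mount -a
```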
### 4. Configure the yum repo

Run the following on the deployment node:

#### 4.1 Back up the existing yum repos

```shell
mkdir /etc/yum.repos.d/bak/
mv /etc/yum.repos.d/*.repo /etc/yum.repos.d/bak/
```

#### 4.2 Configure the yum repo

```shell
cat > /etc/yum.repos.d/opensd.repo << EOF
[train]
name=train
baseurl=http://119.3.219.20:82/openEuler:/22.03:/LTS:/SP3:/Epol:/Multi-Version:/OpenStack:/Train/standard_$basearch/
enabled=1
gpgcheck=0

[epol]
name=epol
baseurl=http://119.3.219.20:82/openEuler:/22.03:/LTS:/SP3:/Epol/standard_$basearch/
enabled=1
gpgcheck=0

[everything]
name=everything
baseurl=http://119.3.219.20:82/openEuler:/22.03:/LTS:/SP3/standard_$basearch/
enabled=1
gpgcheck=0
EOF
```

#### 4.3 Refresh the yum cache

```shell
yum clean all
yum makecache
```

### 5. Install opensd

Run the following on the deployment node:

#### 5.1 Clone the opensd source code and install it

```shell
git clone https://gitee.com/openeuler/opensd
cd opensd
python3 setup.py install
```

### 6. Set up SSH trust

Run the following on the deployment node:

#### 6.1 Generate a key pair

Run the following command and press Enter through all prompts:

```shell
ssh-keygen
```

#### 6.2 Generate the host IP address file

List all host IPs that will be used in auto_ssh_host_ip. Example:

```shell
cd /usr/local/share/opensd/tools/
vim auto_ssh_host_ip
```

```
10.0.0.1
10.0.0.2
...
10.0.0.10
```

#### 6.3 Update the password and run the script

In the password-less access script /usr/local/bin/opensd-auto-ssh, replace 123123 with the real host password:

```shell
# replace the 123123 string inside the script
vim /usr/local/bin/opensd-auto-ssh
# install expect, then run the script
dnf install expect -y
opensd-auto-ssh
```

#### 6.4 Set up SSH trust between the deployment node and the ceph monitor (optional)

```shell
ssh-copy-id root@x.x.x.x
```
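If you prefer not to put the real password into the opensd-auto-ssh script (step 6.3), a minimal manual alternative is to push the key yourself with ssh-copy-id, reusing the auto_ssh_host_ip file prepared in step 6.2; you will be prompted for each host's password interactively:

```shell
# manual alternative: push the key to every host listed in auto_ssh_host_ip
while read -r host; do
    ssh-copy-id "root@${host}"
done < /usr/local/share/opensd/tools/auto_ssh_host_ip
```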
### 7. Configure opensd

Run the following on the deployment node:

#### 7.1 Generate random passwords

Install python3-pbr, python3-utils, python3-pyyaml, and python3-oslo-utils, then generate random passwords:

```shell
dnf install python3-pbr python3-utils python3-pyyaml python3-oslo-utils -y
# generate the passwords
opensd-genpwd
# check that the passwords were generated
cat /usr/local/share/opensd/etc_examples/opensd/passwords.yml
```

#### 7.2 Configure the inventory file

Each host entry must contain the host name, the ansible_host IP, and the availability_zone; all three are mandatory. Example:

```shell
vim /usr/local/share/opensd/ansible/inventory/multinode
```

```ini
# control node host information (three controllers)
[control]
controller1 ansible_host=10.0.0.35 availability_zone=az01.cell01.cn-yogadev-1
controller2 ansible_host=10.0.0.36 availability_zone=az01.cell01.cn-yogadev-1
controller3 ansible_host=10.0.0.37 availability_zone=az01.cell01.cn-yogadev-1

# network node information, kept the same as the control nodes
[network]
controller1 ansible_host=10.0.0.35 availability_zone=az01.cell01.cn-yogadev-1
controller2 ansible_host=10.0.0.36 availability_zone=az01.cell01.cn-yogadev-1
controller3 ansible_host=10.0.0.37 availability_zone=az01.cell01.cn-yogadev-1

# cinder-volume service node information
[storage]
storage1 ansible_host=10.0.0.61 availability_zone=az01.cell01.cn-yogadev-1
storage2 ansible_host=10.0.0.78 availability_zone=az01.cell01.cn-yogadev-1
storage3 ansible_host=10.0.0.82 availability_zone=az01.cell01.cn-yogadev-1

# Cell1 cluster information
[cell-control-cell1]
cell1 ansible_host=10.0.0.24 availability_zone=az01.cell01.cn-yogadev-1
cell2 ansible_host=10.0.0.25 availability_zone=az01.cell01.cn-yogadev-1
cell3 ansible_host=10.0.0.26 availability_zone=az01.cell01.cn-yogadev-1

[compute-cell1]
compute1 ansible_host=10.0.0.27 availability_zone=az01.cell01.cn-yogadev-1
compute2 ansible_host=10.0.0.28 availability_zone=az01.cell01.cn-yogadev-1
compute3 ansible_host=10.0.0.29 availability_zone=az01.cell01.cn-yogadev-1

[cell1:children]
cell-control-cell1
compute-cell1

# Cell2 cluster information
[cell-control-cell2]
cell4 ansible_host=10.0.0.36 availability_zone=az03.cell02.cn-yogadev-1
cell5 ansible_host=10.0.0.37 availability_zone=az03.cell02.cn-yogadev-1
cell6 ansible_host=10.0.0.38 availability_zone=az03.cell02.cn-yogadev-1

[compute-cell2]
compute4 ansible_host=10.0.0.39 availability_zone=az03.cell02.cn-yogadev-1
compute5 ansible_host=10.0.0.40 availability_zone=az03.cell02.cn-yogadev-1
compute6 ansible_host=10.0.0.41 availability_zone=az03.cell02.cn-yogadev-1

[cell2:children]
cell-control-cell2
compute-cell2

[baremetal]

[compute-cell1-ironic]

# list the control host groups of all cell clusters
[nova-conductor:children]
cell-control-cell1
cell-control-cell2

# list the compute host groups of all cell clusters
[nova-compute:children]
compute-added
compute-cell1
compute-cell2

# the host groups below do not need to be changed; keep them as they are
[compute-added]

[chrony-server:children]
control

[pacemaker:children]
control

......
......
```
#### 7.3 Configure the global variables

Note: only the options commented on in this document need to be changed; leave the other parameters unchanged, and leave an option empty if there is no related configuration.

```shell
vim /usr/local/share/opensd/etc_examples/opensd/globals.yml
```

```yaml
########################
# Network & Base options
########################
network_interface: "eth0"            # NIC name of the management network
neutron_external_interface: "eth1"   # NIC name of the data-plane (business) network
cidr_netmask: 24                     # netmask of the management network
opensd_vip_address: 10.0.0.33        # virtual IP address of the control nodes
cell1_vip_address: 10.0.0.34         # virtual IP address of the cell1 cluster
cell2_vip_address: 10.0.0.35         # virtual IP address of the cell2 cluster
external_fqdn: ""                    # external domain name used for VNC access to instances
external_ntp_servers: []             # external NTP server addresses
yumrepo_host:                        # IP address of the yum repository
yumrepo_port:                        # port of the yum repository
environment:                         # type of the yum repository
upgrade_all_packages: "yes"          # whether to upgrade all installed packages (runs yum upgrade); set to "yes" for an initial deployment
enable_miner: "no"                   # whether to deploy the miner service
enable_chrony: "no"                  # whether to deploy the chrony service
enable_pri_mariadb: "no"             # whether to deploy mariadb for a private cloud
enable_hosts_file_modify: "no"       # whether to add node information to /etc/hosts when scaling out compute nodes or deploying the ironic service

########################
# Available zone options
########################
az_cephmon_compose:
  - availability_zone:               # AZ name; must match the "availability_zone" value of az01 in the multinode inventory
    ceph_mon_host:                   # one ceph monitor host of az01; the deployment node needs SSH trust with it
    reserve_vcpu_based_on_numa:
  - availability_zone:               # AZ name; must match the "availability_zone" value of az02 in the multinode inventory
    ceph_mon_host:                   # one ceph monitor host of az02; the deployment node needs SSH trust with it
    reserve_vcpu_based_on_numa:
  - availability_zone:               # AZ name; must match the "availability_zone" value of az03 in the multinode inventory
    ceph_mon_host:                   # one ceph monitor host of az03; the deployment node needs SSH trust with it
    reserve_vcpu_based_on_numa:
```

`reserve_vcpu_based_on_numa` is set to `"yes"` or `"no"`. For example, on a host with:

```
NUMA node0 CPU(s): 0-15,32-47
NUMA node1 CPU(s): 16-31,48-63
```

With `reserve_vcpu_based_on_numa: "yes"`, vCPUs are reserved evenly per NUMA node: `vcpu_pin_set = 2-15,34-47,18-31,50-63`.

With `reserve_vcpu_based_on_numa: "no"`, vCPUs are reserved sequentially starting from the first vCPU: `vcpu_pin_set = 8-64`.
```yaml
#######################
# Nova options
#######################
nova_reserved_host_memory_mb: 2048   # memory reserved for the compute service on compute nodes
enable_cells: "yes"                  # whether cell nodes are deployed on dedicated nodes
support_gpu: "False"                 # whether the cell nodes include GPU servers; True if so, otherwise False

#######################
# Neutron options
#######################
monitor_ip:                          # monitoring node addresses
  - 10.0.0.9
  - 10.0.0.10
enable_meter_full_eip: True          # whether to allow full EIP metering, default True
enable_meter_port_forwarding: True   # whether to allow port-forwarding metering, default True
enable_meter_ecs_ipv6: True          # whether to allow ecs_ipv6 metering, default True
enable_meter: True                   # whether to enable metering, default True
is_sdn_arch: False                   # whether this is an SDN architecture, default False
# The default network type is vlan; vlan and vxlan are mutually exclusive.
enable_vxlan_network_type: False     # set to True for a vxlan network, False for a vlan network
enable_neutron_fwaas: False          # set to True to enable the firewall function if the environment uses a firewall

# Neutron provider
neutron_provider_networks:
  network_types: "{{ 'vxlan' if enable_vxlan_network_type else 'vlan' }}"
  network_vlan_ranges: "default:xxx:xxx"   # vlan range of the data-plane network planned before deployment
  network_mappings: "default:br-provider"
  network_interface: "{{ neutron_external_interface }}"
  network_vxlan_ranges: ""                 # vxlan range of the data-plane network planned before deployment

# The options below configure the SDN controller; set `enable_sdn_controller` to True to enable it.
# Fill in the other parameters according to your pre-deployment planning and SDN deployment information.
```
```yaml
enable_sdn_controller: False
sdn_controller_ip_address:           # SDN controller IP address
sdn_controller_username:             # SDN controller username
sdn_controller_password:             # SDN controller password

#######################
# Dimsagent options
#######################
enable_dimsagent: "no"               # set to yes to install the image service agent
# Address and domain name for s3
s3_address_domain_pair:
  - host_ip:
    host_name:

#######################
# Trove options
#######################
enable_trove: "no"                   # set to yes to install trove
# default network
trove_default_neutron_networks:      # trove management network id: `openstack network list|grep -w trove-mgmt|awk '{print$2}'`
# s3 setup (fill in null for the values below if there is no s3)
s3_endpoint_host_ip:                 # s3 IP address
s3_endpoint_host_name:               # s3 domain name
s3_endpoint_url:                     # s3 URL, usually http://<s3 domain name>
s3_access_key:                       # s3 AK
s3_secret_key:                       # s3 SK

#######################
# Ironic options
#######################
enable_ironic: "no"                  # whether to enable bare-metal deployment, disabled by default
ironic_neutron_provisioning_network_uuid:
ironic_neutron_cleaning_network_uuid: "{{ ironic_neutron_provisioning_network_uuid }}"
ironic_dnsmasq_interface:
ironic_dnsmasq_dhcp_range:
ironic_tftp_server_address: "{{ hostvars[inventory_hostname]['ansible_' + ironic_dnsmasq_interface]['ipv4']['address'] }}"
# switch device information
neutron_ml2_conf_genericswitch:
  genericswitch:xxxxxxx:
    device_type:
    ngs_mac_address:
    ip:
    username:
    password:
    ngs_port_default_vlan:

# Package state setting
haproxy_package_state: "present"
mariadb_package_state: "present"
rabbitmq_package_state: "present"
memcached_package_state: "present"
ceph_client_package_state: "present"
keystone_package_state: "present"
glance_package_state: "present"
cinder_package_state: "present"
nova_package_state: "present"
neutron_package_state: "present"
miner_package_state: "present"
```

#### 7.4 Check the SSH connection status of all nodes

```shell
dnf install ansible -y
ansible all -i /usr/local/share/opensd/ansible/inventory/multinode -m ping
```

If every host reports "SUCCESS", the connections are fine. Example:

```
compute1 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "ping": "pong"
}
```
### 8. Run the deployment

Run the following on the deployment node:

#### 8.1 Run bootstrap

```shell
# run the bootstrap stage
opensd -i /usr/local/share/opensd/ansible/inventory/multinode bootstrap --forks 50
```

#### 8.2 Reboot the servers

Note: the reboot is needed because bootstrap may upgrade the kernel, change the selinux configuration, or the environment may contain GPU servers. If the machines were installed with the new kernel already, selinux is already disabled, and there are no GPU servers, this step can be skipped.

```shell
# manually reboot the corresponding nodes
init 6

# after the reboot, check connectivity again
ansible all -i /usr/local/share/opensd/ansible/inventory/multinode -m ping

# after the operating system comes back up, make sure the yum repositories configured earlier are enabled again
```

#### 8.3 Run the pre-deployment checks

```shell
opensd -i /usr/local/share/opensd/ansible/inventory/multinode prechecks --forks 50
```

#### 8.4 Run the deployment

```shell
ln -s /usr/bin/python3 /usr/bin/python
```

Full deployment:

```shell
opensd -i /usr/local/share/opensd/ansible/inventory/multinode deploy --forks 50
```

Single-service deployment:

```shell
opensd -i /usr/local/share/opensd/ansible/inventory/multinode deploy --forks 50 -t service_name
```
\u6267\u884c\u90e8\u7f72 8.1 \u6267\u884cbootstrap 8.2 \u91cd\u542f\u670d\u52a1\u5668 8.3 \u6267\u884c\u90e8\u7f72\u524d\u68c0\u67e5 8.4 \u6267\u884c\u90e8\u7f72","title":"OpenStack-Train \u90e8\u7f72\u6307\u5357"},{"location":"install/openEuler-22.03-LTS-SP3/OpenStack-train/#openstack","text":"OpenStack \u662f\u4e00\u4e2a\u793e\u533a\uff0c\u4e5f\u662f\u4e00\u4e2a\u9879\u76ee\u3002\u5b83\u63d0\u4f9b\u4e86\u4e00\u4e2a\u90e8\u7f72\u4e91\u7684\u64cd\u4f5c\u5e73\u53f0\u6216\u5de5\u5177\u96c6\uff0c\u4e3a\u7ec4\u7ec7\u63d0\u4f9b\u53ef\u6269\u5c55\u7684\u3001\u7075\u6d3b\u7684\u4e91\u8ba1\u7b97\u3002 \u4f5c\u4e3a\u4e00\u4e2a\u5f00\u6e90\u7684\u4e91\u8ba1\u7b97\u7ba1\u7406\u5e73\u53f0\uff0cOpenStack \u7531nova\u3001cinder\u3001neutron\u3001glance\u3001keystone\u3001horizon\u7b49\u51e0\u4e2a\u4e3b\u8981\u7684\u7ec4\u4ef6\u7ec4\u5408\u8d77\u6765\u5b8c\u6210\u5177\u4f53\u5de5\u4f5c\u3002OpenStack \u652f\u6301\u51e0\u4e4e\u6240\u6709\u7c7b\u578b\u7684\u4e91\u73af\u5883\uff0c\u9879\u76ee\u76ee\u6807\u662f\u63d0\u4f9b\u5b9e\u65bd\u7b80\u5355\u3001\u53ef\u5927\u89c4\u6a21\u6269\u5c55\u3001\u4e30\u5bcc\u3001\u6807\u51c6\u7edf\u4e00\u7684\u4e91\u8ba1\u7b97\u7ba1\u7406\u5e73\u53f0\u3002OpenStack \u901a\u8fc7\u5404\u79cd\u4e92\u8865\u7684\u670d\u52a1\u63d0\u4f9b\u4e86\u57fa\u7840\u8bbe\u65bd\u5373\u670d\u52a1\uff08IaaS\uff09\u7684\u89e3\u51b3\u65b9\u6848\uff0c\u6bcf\u4e2a\u670d\u52a1\u63d0\u4f9b API \u8fdb\u884c\u96c6\u6210\u3002 openEuler 22.03-LTS-SP3\u7248\u672c\u5b98\u65b9\u6e90\u5df2\u7ecf\u652f\u6301 OpenStack-Train \u7248\u672c\uff0c\u7528\u6237\u53ef\u4ee5\u914d\u7f6e\u597d yum \u6e90\u540e\u6839\u636e\u6b64\u6587\u6863\u8fdb\u884c OpenStack \u90e8\u7f72\u3002","title":"OpenStack \u7b80\u4ecb"},{"location":"install/openEuler-22.03-LTS-SP3/OpenStack-train/#_1","text":"OpenStack \u652f\u6301\u591a\u79cd\u5f62\u6001\u90e8\u7f72\uff0c\u6b64\u6587\u6863\u652f\u6301 ALL in One \u4ee5\u53ca Distributed \u4e24\u79cd\u90e8\u7f72\u65b9\u5f0f\uff0c\u6309\u7167\u5982\u4e0b\u65b9\u5f0f\u7ea6\u5b9a\uff1a ALL in One \u6a21\u5f0f: \u5ffd\u7565\u6240\u6709\u53ef\u80fd\u7684\u540e\u7f00 Distributed \u6a21\u5f0f: \u4ee5 `(CTL)` \u4e3a\u540e\u7f00\u8868\u793a\u6b64\u6761\u914d\u7f6e\u6216\u8005\u547d\u4ee4\u4ec5\u9002\u7528`\u63a7\u5236\u8282\u70b9` \u4ee5 `(CPT)` \u4e3a\u540e\u7f00\u8868\u793a\u6b64\u6761\u914d\u7f6e\u6216\u8005\u547d\u4ee4\u4ec5\u9002\u7528`\u8ba1\u7b97\u8282\u70b9` \u4ee5 `(STG)` \u4e3a\u540e\u7f00\u8868\u793a\u6b64\u6761\u914d\u7f6e\u6216\u8005\u547d\u4ee4\u4ec5\u9002\u7528`\u5b58\u50a8\u8282\u70b9` \u9664\u6b64\u4e4b\u5916\u8868\u793a\u6b64\u6761\u914d\u7f6e\u6216\u8005\u547d\u4ee4\u540c\u65f6\u9002\u7528`\u63a7\u5236\u8282\u70b9`\u548c`\u8ba1\u7b97\u8282\u70b9` \u6ce8\u610f \u6d89\u53ca\u5230\u4ee5\u4e0a\u7ea6\u5b9a\u7684\u670d\u52a1\u5982\u4e0b\uff1a Cinder Nova Neutron","title":"\u7ea6\u5b9a"},{"location":"install/openEuler-22.03-LTS-SP3/OpenStack-train/#_2","text":"","title":"\u51c6\u5907\u73af\u5883"},{"location":"install/openEuler-22.03-LTS-SP3/OpenStack-train/#_3","text":"\u542f\u52a8OpenStack Train yum\u6e90 yum update yum install openstack-release-train yum clean all && yum makecache \u6ce8\u610f \uff1a\u5982\u679c\u4f60\u7684\u73af\u5883\u7684YUM\u6e90\u6ca1\u6709\u542f\u7528EPOL\uff0c\u9700\u8981\u540c\u65f6\u914d\u7f6eEPOL\uff0c\u786e\u4fddEPOL\u5df2\u914d\u7f6e\uff0c\u5982\u4e0b\u6240\u793a vi /etc/yum.repos.d/openEuler.repo [EPOL] name=EPOL baseurl=http://repo.openeuler.org/openEuler-22.03-LTS-SP3/EPOL/main/$basearch/ enabled=1 gpgcheck=1 
gpgkey=http://repo.openeuler.org/openEuler-22.03-LTS-SP3/OS/$basearch/RPM-GPG-KEY-openEuler EOF \u4fee\u6539\u4e3b\u673a\u540d\u4ee5\u53ca\u6620\u5c04 \u8bbe\u7f6e\u5404\u4e2a\u8282\u70b9\u7684\u4e3b\u673a\u540d hostnamectl set-hostname controller (CTL) hostnamectl set-hostname compute (CPT) \u5047\u8bbecontroller\u8282\u70b9\u7684IP\u662f 10.0.0.11 ,compute\u8282\u70b9\u7684IP\u662f 10.0.0.12 \uff08\u5982\u679c\u5b58\u5728\u7684\u8bdd\uff09,\u5219\u4e8e /etc/hosts \u65b0\u589e\u5982\u4e0b\uff1a 10.0.0.11 controller 10.0.0.12 compute","title":"\u73af\u5883\u914d\u7f6e"},{"location":"install/openEuler-22.03-LTS-SP3/OpenStack-train/#sql-database","text":"\u6267\u884c\u5982\u4e0b\u547d\u4ee4\uff0c\u5b89\u88c5\u8f6f\u4ef6\u5305\u3002 yum install mariadb mariadb-server python3-PyMySQL \u6267\u884c\u5982\u4e0b\u547d\u4ee4\uff0c\u521b\u5efa\u5e76\u7f16\u8f91 /etc/my.cnf.d/openstack.cnf \u6587\u4ef6\u3002 vim /etc/my.cnf.d/openstack.cnf [mysqld] bind-address = 10.0.0.11 default-storage-engine = innodb innodb_file_per_table = on max_connections = 4096 collation-server = utf8_general_ci character-set-server = utf8 \u6ce8\u610f \u5176\u4e2d bind-address \u8bbe\u7f6e\u4e3a\u63a7\u5236\u8282\u70b9\u7684\u7ba1\u7406IP\u5730\u5740\u3002 \u542f\u52a8 DataBase \u670d\u52a1\uff0c\u5e76\u4e3a\u5176\u914d\u7f6e\u5f00\u673a\u81ea\u542f\u52a8\uff1a systemctl enable mariadb.service systemctl start mariadb.service \u914d\u7f6eDataBase\u7684\u9ed8\u8ba4\u5bc6\u7801\uff08\u53ef\u9009\uff09 mysql_secure_installation \u6ce8\u610f \u6839\u636e\u63d0\u793a\u8fdb\u884c\u5373\u53ef","title":"\u5b89\u88c5 SQL DataBase"},{"location":"install/openEuler-22.03-LTS-SP3/OpenStack-train/#rabbitmq","text":"\u6267\u884c\u5982\u4e0b\u547d\u4ee4\uff0c\u5b89\u88c5\u8f6f\u4ef6\u5305\u3002 yum install rabbitmq-server \u542f\u52a8 RabbitMQ \u670d\u52a1\uff0c\u5e76\u4e3a\u5176\u914d\u7f6e\u5f00\u673a\u81ea\u542f\u52a8\u3002 systemctl enable rabbitmq-server.service systemctl start rabbitmq-server.service \u6dfb\u52a0 OpenStack\u7528\u6237\u3002 rabbitmqctl add_user openstack RABBIT_PASS \u6ce8\u610f \u66ff\u6362 RABBIT_PASS \uff0c\u4e3a OpenStack \u7528\u6237\u8bbe\u7f6e\u5bc6\u7801 \u8bbe\u7f6eopenstack\u7528\u6237\u6743\u9650\uff0c\u5141\u8bb8\u8fdb\u884c\u914d\u7f6e\u3001\u5199\u3001\u8bfb\uff1a rabbitmqctl set_permissions openstack \".*\" \".*\" \".*\"","title":"\u5b89\u88c5 RabbitMQ"},{"location":"install/openEuler-22.03-LTS-SP3/OpenStack-train/#memcached","text":"\u6267\u884c\u5982\u4e0b\u547d\u4ee4\uff0c\u5b89\u88c5\u4f9d\u8d56\u8f6f\u4ef6\u5305\u3002 yum install memcached python3-memcached \u7f16\u8f91 /etc/sysconfig/memcached \u6587\u4ef6\u3002 vim /etc/sysconfig/memcached OPTIONS=\"-l 127.0.0.1,::1,controller\" \u6267\u884c\u5982\u4e0b\u547d\u4ee4\uff0c\u542f\u52a8 Memcached \u670d\u52a1\uff0c\u5e76\u4e3a\u5176\u914d\u7f6e\u5f00\u673a\u542f\u52a8\u3002 systemctl enable memcached.service systemctl start memcached.service \u6ce8\u610f \u670d\u52a1\u542f\u52a8\u540e\uff0c\u53ef\u4ee5\u901a\u8fc7\u547d\u4ee4 memcached-tool controller stats \u786e\u4fdd\u542f\u52a8\u6b63\u5e38\uff0c\u670d\u52a1\u53ef\u7528\uff0c\u5176\u4e2d\u53ef\u4ee5\u5c06 controller \u66ff\u6362\u4e3a\u63a7\u5236\u8282\u70b9\u7684\u7ba1\u7406IP\u5730\u5740\u3002","title":"\u5b89\u88c5 Memcached"},{"location":"install/openEuler-22.03-LTS-SP3/OpenStack-train/#openstack_1","text":"","title":"\u5b89\u88c5 OpenStack"},{"location":"install/openEuler-22.03-LTS-SP3/OpenStack-train/#keystone","text":"\u521b\u5efa keystone \u6570\u636e\u5e93\u5e76\u6388\u6743\u3002 mysql -u 
root -p MariaDB [(none)]> CREATE DATABASE keystone; MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \\ IDENTIFIED BY 'KEYSTONE_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \\ IDENTIFIED BY 'KEYSTONE_DBPASS'; MariaDB [(none)]> exit \u6ce8\u610f \u66ff\u6362 KEYSTONE_DBPASS \uff0c\u4e3a Keystone \u6570\u636e\u5e93\u8bbe\u7f6e\u5bc6\u7801 \u5b89\u88c5\u8f6f\u4ef6\u5305\u3002 yum install openstack-keystone httpd mod_wsgi \u914d\u7f6ekeystone\u76f8\u5173\u914d\u7f6e vim /etc/keystone/keystone.conf [database] connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone [token] provider = fernet \u89e3\u91ca [database]\u90e8\u5206\uff0c\u914d\u7f6e\u6570\u636e\u5e93\u5165\u53e3 [token]\u90e8\u5206\uff0c\u914d\u7f6etoken provider \u6ce8\u610f\uff1a \u66ff\u6362 KEYSTONE_DBPASS \u4e3a Keystone \u6570\u636e\u5e93\u7684\u5bc6\u7801 \u540c\u6b65\u6570\u636e\u5e93\u3002 su -s /bin/sh -c \"keystone-manage db_sync\" keystone \u521d\u59cb\u5316Fernet\u5bc6\u94a5\u4ed3\u5e93\u3002 keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone keystone-manage credential_setup --keystone-user keystone --keystone-group keystone \u542f\u52a8\u670d\u52a1\u3002 keystone-manage bootstrap --bootstrap-password ADMIN_PASS \\ --bootstrap-admin-url http://controller:5000/v3/ \\ --bootstrap-internal-url http://controller:5000/v3/ \\ --bootstrap-public-url http://controller:5000/v3/ \\ --bootstrap-region-id RegionOne \u6ce8\u610f \u66ff\u6362 ADMIN_PASS \uff0c\u4e3a admin \u7528\u6237\u8bbe\u7f6e\u5bc6\u7801 \u914d\u7f6eApache HTTP server vim /etc/httpd/conf/httpd.conf ServerName controller ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/ \u89e3\u91ca \u914d\u7f6e ServerName \u9879\u5f15\u7528\u63a7\u5236\u8282\u70b9 \u6ce8\u610f \u5982\u679c ServerName \u9879\u4e0d\u5b58\u5728\u5219\u9700\u8981\u521b\u5efa \u542f\u52a8Apache HTTP\u670d\u52a1\u3002 systemctl enable httpd.service systemctl start httpd.service \u521b\u5efa\u73af\u5883\u53d8\u91cf\u914d\u7f6e\u3002 cat << EOF >> ~/.admin-openrc export OS_PROJECT_DOMAIN_NAME=Default export OS_USER_DOMAIN_NAME=Default export OS_PROJECT_NAME=admin export OS_USERNAME=admin export OS_PASSWORD=ADMIN_PASS export OS_AUTH_URL=http://controller:5000/v3 export OS_IDENTITY_API_VERSION=3 export OS_IMAGE_API_VERSION=2 EOF \u6ce8\u610f \u66ff\u6362 ADMIN_PASS \u4e3a admin \u7528\u6237\u7684\u5bc6\u7801 \u4f9d\u6b21\u521b\u5efadomain, projects, users, roles\uff0c\u9700\u8981\u5148\u5b89\u88c5\u597dpython3-openstackclient\uff1a yum install python3-openstackclient==4.0.2 \u5bfc\u5165\u73af\u5883\u53d8\u91cf source ~/.admin-openrc \u521b\u5efaproject service \uff0c\u5176\u4e2d domain default \u5728 keystone-manage bootstrap \u65f6\u5df2\u521b\u5efa openstack domain create --description \"An Example Domain\" example openstack project create --domain default --description \"Service Project\" service \u521b\u5efa\uff08non-admin\uff09project myproject \uff0cuser myuser \u548c role myrole \uff0c\u4e3a myproject \u548c myuser \u6dfb\u52a0\u89d2\u8272 myrole openstack project create --domain default --description \"Demo Project\" myproject openstack user create --domain default --password-prompt myuser openstack role create myrole openstack role add --project myproject --user myuser myrole \u9a8c\u8bc1 \u53d6\u6d88\u4e34\u65f6\u73af\u5883\u53d8\u91cfOS_AUTH_URL\u548cOS_PASSWORD\uff1a source ~/.admin-openrc unset OS_AUTH_URL OS_PASSWORD \u4e3aadmin\u7528\u6237\u8bf7\u6c42token\uff1a 
Verify the installation. Unset the temporary `OS_AUTH_URL` and `OS_PASSWORD` environment variables:

```shell
source ~/.admin-openrc
unset OS_AUTH_URL OS_PASSWORD
```

Request a token for the admin user:

```shell
openstack --os-auth-url http://controller:5000/v3 \
    --os-project-domain-name Default --os-user-domain-name Default \
    --os-project-name admin --os-username admin token issue
```

Request a token for the myuser user:

```shell
openstack --os-auth-url http://controller:5000/v3 \
    --os-project-domain-name Default --os-user-domain-name Default \
    --os-project-name myproject --os-username myuser token issue
```

### Glance installation

Create the database, service credentials and API endpoints.

Create the database:

```shell
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE glance;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
IDENTIFIED BY 'GLANCE_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
IDENTIFIED BY 'GLANCE_DBPASS';
MariaDB [(none)]> exit
```

Note: replace `GLANCE_DBPASS` with the password you set for the glance database.

Create the service credentials:

```shell
source ~/.admin-openrc

openstack user create --domain default --password-prompt glance
openstack role add --project service --user glance admin
openstack service create --name glance --description "OpenStack Image" image
```

Create the image service API endpoints:

```shell
openstack endpoint create --region RegionOne image public http://controller:9292
openstack endpoint create --region RegionOne image internal http://controller:9292
openstack endpoint create --region RegionOne image admin http://controller:9292
```

Install the packages:

```shell
yum install openstack-glance
```

Configure glance:

```shell
vim /etc/glance/glance-api.conf

[database]
connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = GLANCE_PASS

[paste_deploy]
flavor = keystone

[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
```

Explanation: the `[database]` section configures the database access; the `[keystone_authtoken]` and `[paste_deploy]` sections configure the identity service access; the `[glance_store]` section configures the local filesystem store and the location of the image files.

Note: replace `GLANCE_DBPASS` with the password of the glance database and `GLANCE_PASS` with the password of the glance user.

Synchronize the database:

```shell
su -s /bin/sh -c "glance-manage db_sync" glance
```

Start the service:

```shell
systemctl enable openstack-glance-api.service
systemctl start openstack-glance-api.service
```

Verify the installation. Download an image:

```shell
source ~/.admin-openrc

wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
```

Note: if your environment uses the Kunpeng (aarch64) architecture, download the aarch64 version of the image; the cirros-0.5.2-aarch64-disk.img image has been tested.

Upload the image to the Image service:

```shell
openstack image create --disk-format qcow2 --container-format bare \
    --file cirros-0.4.0-x86_64-disk.img --public cirros
```
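On aarch64 environments, the equivalent steps with the tested cirros-0.5.2-aarch64-disk.img image would look roughly like the following; the download URL follows the usual cirros-cloud.net layout and should be verified before use:

```shell
wget http://download.cirros-cloud.net/0.5.2/cirros-0.5.2-aarch64-disk.img
openstack image create --disk-format qcow2 --container-format bare \
    --file cirros-0.5.2-aarch64-disk.img --public cirros
```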
Confirm that the image was uploaded and verify its attributes:

```shell
openstack image list
```

### Placement installation

Create the database, service credentials and API endpoints.

Create the database. Access the database as the root user, create the placement database and grant privileges:

```shell
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE placement;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' \
IDENTIFIED BY 'PLACEMENT_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' \
IDENTIFIED BY 'PLACEMENT_DBPASS';
MariaDB [(none)]> exit
```

Note: replace `PLACEMENT_DBPASS` with the password you set for the placement database.

```shell
source admin-openrc
```

Run the following commands to create the placement service credentials, create the placement user and add the admin role to the placement user.

Create the Placement API service:

```shell
openstack user create --domain default --password-prompt placement
openstack role add --project service --user placement admin
openstack service create --name placement --description "Placement API" placement
```

Create the placement service API endpoints:

```shell
openstack endpoint create --region RegionOne placement public http://controller:8778
openstack endpoint create --region RegionOne placement internal http://controller:8778
openstack endpoint create --region RegionOne placement admin http://controller:8778
```

Install and configure. Install the packages:

```shell
yum install openstack-placement-api
```

Configure placement by editing the `/etc/placement/placement.conf` file: in the `[placement_database]` section configure the database access; in the `[api]` and `[keystone_authtoken]` sections configure the identity service access:

```shell
# vim /etc/placement/placement.conf

[placement_database]
# ...
connection = mysql+pymysql://placement:PLACEMENT_DBPASS@controller/placement

[api]
# ...
auth_strategy = keystone

[keystone_authtoken]
# ...
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = placement
password = PLACEMENT_PASS
```

Replace `PLACEMENT_DBPASS` with the password of the placement database and `PLACEMENT_PASS` with the password of the placement user.

Synchronize the database:

```shell
su -s /bin/sh -c "placement-manage db sync" placement
```

Restart the httpd service:

```shell
systemctl restart httpd
```
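Before running the formal checks in the next step, a quick way to confirm that the Placement API is being served by httpd is to request its version document; this assumes the default endpoint on port 8778 configured above:

```shell
curl http://controller:8778
# Expected: a small JSON document listing the available Placement API versions.
```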
Verify the installation. Run the status checks:

```shell
. admin-openrc
placement-status upgrade check
```

Install osc-placement and list the available resource classes and traits:

```shell
yum install python3-osc-placement

openstack --os-placement-api-version 1.2 resource class list --sort-column name
openstack --os-placement-api-version 1.6 trait list --sort-column name
```

### Nova installation

Create the database, service credentials and API endpoints.

Create the databases:

```shell
mysql -u root -p    (CTL)

MariaDB [(none)]> CREATE DATABASE nova_api;
MariaDB [(none)]> CREATE DATABASE nova;
MariaDB [(none)]> CREATE DATABASE nova_cell0;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> exit
```

Note: replace `NOVA_DBPASS` with the password you set for the nova databases.

```shell
source ~/.admin-openrc    (CTL)
```

Create the nova service credentials:

```shell
openstack user create --domain default --password-prompt nova    (CTL)
openstack role add --project service --user nova admin           (CTL)
openstack service create --name nova --description "OpenStack Compute" compute    (CTL)
```

Create the nova API endpoints:

```shell
openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1      (CTL)
openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1    (CTL)
openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1       (CTL)
```

Install the packages:

```shell
yum install openstack-nova-api openstack-nova-conductor \          (CTL)
            openstack-nova-novncproxy openstack-nova-scheduler

yum install openstack-nova-compute                                 (CPT)
```

Note: on the arm64 architecture, the following command must also be run:

```shell
yum install edk2-aarch64    (CPT)
```
Configure nova:

```shell
vim /etc/nova/nova.conf

[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
my_ip = 10.0.0.1
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver
compute_driver = libvirt.LibvirtDriver        (CPT)
instances_path = /var/lib/nova/instances/     (CPT)
lock_path = /var/lib/nova/tmp                 (CPT)

[api_database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api    (CTL)

[database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova        (CTL)

[api]
auth_strategy = keystone

[keystone_authtoken]
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = NOVA_PASS

[vnc]
enabled = true
server_listen = $my_ip
server_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html    (CPT)

[glance]
api_servers = http://controller:9292

[oslo_concurrency]
lock_path = /var/lib/nova/tmp    (CTL)

[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = PLACEMENT_PASS

[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
service_metadata_proxy = true                     (CTL)
metadata_proxy_shared_secret = METADATA_SECRET    (CTL)
```

Explanation: the `[DEFAULT]` section enables the compute and metadata APIs, configures the RabbitMQ message queue access, sets `my_ip` and enables the neutron network service; the `[api_database]` and `[database]` sections configure the database access; the `[api]` and `[keystone_authtoken]` sections configure the identity service access; the `[vnc]` section enables and configures the remote console access; the `[glance]` section configures the image service API address; the `[oslo_concurrency]` section configures the lock path; the `[placement]` section configures the placement service access.

Note: replace `RABBIT_PASS` with the password of the openstack account in RabbitMQ; set `my_ip` to the management IP address of the controller node; replace `NOVA_DBPASS` with the password of the nova database; replace `NOVA_PASS` with the password of the nova user; replace `PLACEMENT_PASS` with the password of the placement user; replace `NEUTRON_PASS` with the password of the neutron user; replace `METADATA_SECRET` with a suitable metadata proxy secret.

Additional step: determine whether virtual machine hardware acceleration is supported (x86 architecture):

```shell
egrep -c '(vmx|svm)' /proc/cpuinfo    (CPT)
```

If the return value is 0, hardware acceleration is not supported and libvirt must be configured to use QEMU instead of KVM:

```shell
vim /etc/nova/nova.conf    (CPT)

[libvirt]
virt_type = qemu
```

If the return value is 1 or greater, hardware acceleration is supported and `virt_type` can be set to `kvm`.

Note: on the arm64 architecture, the following commands must also be run on the compute node:

```shell
mkdir -p /usr/share/AAVMF
chown nova:nova /usr/share/AAVMF

ln -s /usr/share/edk2/aarch64/QEMU_EFI-pflash.raw \
      /usr/share/AAVMF/AAVMF_CODE.fd
ln -s /usr/share/edk2/aarch64/vars-template-pflash.raw \
      /usr/share/AAVMF/AAVMF_VARS.fd

vim /etc/libvirt/qemu.conf

nvram = ["/usr/share/AAVMF/AAVMF_CODE.fd: \
          /usr/share/AAVMF/AAVMF_VARS.fd", \
         "/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw: \
          /usr/share/edk2/aarch64/vars-template-pflash.raw"]
```

In addition, when the deployment environment on ARM is nested virtualization, configure libvirt as follows:

```shell
[libvirt]
virt_type = qemu
cpu_mode = custom
cpu_model = cortex-a72
```
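The hardware-acceleration check above can be wrapped in a small helper on the compute node. This is only a convenience sketch that prints the `virt_type` value to put under `[libvirt]`, following the rule described above:

```shell
# Run on the compute node (CPT).
count=$(egrep -c '(vmx|svm)' /proc/cpuinfo)
if [ "$count" -eq 0 ]; then
    echo "No hardware acceleration: set [libvirt] virt_type = qemu"
else
    echo "Hardware acceleration available: [libvirt] virt_type = kvm can be used"
fi
```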
Synchronize the databases.

Synchronize the nova-api database:

```shell
su -s /bin/sh -c "nova-manage api_db sync" nova    (CTL)
```

Register the cell0 database:

```shell
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova    (CTL)
```

Create the cell1 cell:

```shell
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova    (CTL)
```

Synchronize the nova database:

```shell
su -s /bin/sh -c "nova-manage db sync" nova    (CTL)
```

Verify that cell0 and cell1 are registered correctly:

```shell
su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova    (CTL)
```

Add the compute node to the OpenStack cluster:

```shell
su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova    (CTL)
```

Start the services:

```shell
systemctl enable \                          (CTL)
    openstack-nova-api.service \
    openstack-nova-scheduler.service \
    openstack-nova-conductor.service \
    openstack-nova-novncproxy.service
systemctl start \                           (CTL)
    openstack-nova-api.service \
    openstack-nova-scheduler.service \
    openstack-nova-conductor.service \
    openstack-nova-novncproxy.service

systemctl enable libvirtd.service openstack-nova-compute.service    (CPT)
systemctl start libvirtd.service openstack-nova-compute.service     (CPT)
```

Verify the installation:

```shell
source ~/.admin-openrc    (CTL)
```

List the service components to verify that every process started and registered successfully:

```shell
openstack compute service list    (CTL)
```

List the API endpoints in the identity service to verify the connection to the identity service:

```shell
openstack catalog list    (CTL)
```

List the images in the image service to verify the connection to the image service:

```shell
openstack image list    (CTL)
```

Check whether the cells are working correctly and whether the other required conditions are in place:

```shell
nova-status upgrade check    (CTL)
```

### Neutron installation

Create the database, service credentials and API endpoints.

Create the database:

```shell
mysql -u root -p    (CTL)

MariaDB [(none)]> CREATE DATABASE neutron;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
IDENTIFIED BY 'NEUTRON_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
IDENTIFIED BY 'NEUTRON_DBPASS';
MariaDB [(none)]> exit
```

Note: replace `NEUTRON_DBPASS` with the password you set for the neutron database.

```shell
source ~/.admin-openrc    (CTL)
```

Create the neutron service credentials:

```shell
openstack user create --domain default --password-prompt neutron    (CTL)
openstack role add --project service --user neutron admin           (CTL)
openstack service create --name neutron --description "OpenStack Networking" network    (CTL)
```

Create the Neutron service API endpoints:

```shell
openstack endpoint create --region RegionOne network public http://controller:9696      (CTL)
openstack endpoint create --region RegionOne network internal http://controller:9696    (CTL)
openstack endpoint create --region RegionOne network admin http://controller:9696       (CTL)
```

Install the packages:

```shell
yum install openstack-neutron openstack-neutron-linuxbridge ebtables ipset \    (CTL)
            openstack-neutron-ml2

yum install openstack-neutron-linuxbridge ebtables ipset    (CPT)
```
Configure neutron. Edit the main configuration file:

```shell
vim /etc/neutron/neutron.conf

[database]
connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron    (CTL)

[DEFAULT]
core_plugin = ml2                               (CTL)
service_plugins = router                        (CTL)
allow_overlapping_ips = true                    (CTL)
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = true       (CTL)
notify_nova_on_port_data_changes = true         (CTL)
api_workers = 3                                 (CTL)

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = neutron
password = NEUTRON_PASS

[nova]
auth_url = http://controller:5000    (CTL)
auth_type = password                 (CTL)
project_domain_name = Default        (CTL)
user_domain_name = Default           (CTL)
region_name = RegionOne              (CTL)
project_name = service               (CTL)
username = nova                      (CTL)
password = NOVA_PASS                 (CTL)

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
```

Explanation: the `[database]` section configures the database access; the `[DEFAULT]` section enables the ml2 and router plugins, allows overlapping IP addresses and configures the RabbitMQ message queue access; the `[DEFAULT]` and `[keystone_authtoken]` sections configure the identity service access; the `[DEFAULT]` and `[nova]` sections configure the network service to notify compute of network topology changes; the `[oslo_concurrency]` section configures the lock path.

Note: replace `NEUTRON_DBPASS` with the password of the neutron database; replace `RABBIT_PASS` with the password of the openstack account in RabbitMQ; replace `NEUTRON_PASS` with the password of the neutron user; replace `NOVA_PASS` with the password of the nova user.

Configure the ML2 plugin:

```shell
vim /etc/neutron/plugins/ml2/ml2_conf.ini

[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security

[ml2_type_flat]
flat_networks = provider

[ml2_type_vxlan]
vni_ranges = 1:1000

[securitygroup]
enable_ipset = true
```

Create the /etc/neutron/plugin.ini symbolic link:

```shell
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
```

Note: the `[ml2]` section enables flat, vlan and vxlan networks, enables the linuxbridge and l2population mechanisms and enables the port security extension driver; the `[ml2_type_flat]` section configures the flat network as the provider virtual network; the `[ml2_type_vxlan]` section configures the VXLAN network identifier range; the `[securitygroup]` section enables ipset.

The specific layer-2 configuration can be adjusted to the user's needs; this document uses a provider network with linuxbridge.

Configure the Linux bridge agent:

```shell
vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini

[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME

[vxlan]
enable_vxlan = true
local_ip = OVERLAY_INTERFACE_IP_ADDRESS
l2_population = true

[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
```

Explanation: the `[linux_bridge]` section maps the provider virtual network to the physical network interface; the `[vxlan]` section enables the vxlan overlay network, configures the IP address of the physical network interface that handles the overlay traffic and enables layer-2 population; the `[securitygroup]` section enables security groups and configures the linux bridge iptables firewall driver.

Note: replace `PROVIDER_INTERFACE_NAME` with the name of the physical network interface; replace `OVERLAY_INTERFACE_IP_ADDRESS` with the management IP address of the controller node.
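The Linux bridge agent relies on the kernel's bridge netfilter support. If security groups do not take effect, it is worth confirming, as the upstream Neutron guide suggests, that the `br_netfilter` module is loaded and the related sysctls are enabled; a sketch:

```shell
# Load the bridge netfilter module and make iptables see bridged traffic.
modprobe br_netfilter
cat << EOF >> /etc/sysctl.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sysctl -p
```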
Configure the Layer-3 agent:

```shell
vim /etc/neutron/l3_agent.ini    (CTL)

[DEFAULT]
interface_driver = linuxbridge
```

Explanation: in the `[DEFAULT]` section, configure the interface driver as linuxbridge.

Configure the DHCP agent:

```shell
vim /etc/neutron/dhcp_agent.ini    (CTL)

[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
```

Explanation: the `[DEFAULT]` section configures the linuxbridge interface driver and the Dnsmasq DHCP driver, and enables isolated metadata.

Configure the metadata agent:

```shell
vim /etc/neutron/metadata_agent.ini    (CTL)

[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = METADATA_SECRET
```

Explanation: the `[DEFAULT]` section configures the metadata host and the shared secret.

Note: replace `METADATA_SECRET` with a suitable metadata proxy secret.

Configure the related nova settings:

```shell
vim /etc/nova/nova.conf

[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = Default
user_domain_name = Default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
service_metadata_proxy = true                     (CTL)
metadata_proxy_shared_secret = METADATA_SECRET    (CTL)
```

Explanation: the `[neutron]` section configures the access parameters, enables the metadata proxy and configures the secret.

Note: replace `NEUTRON_PASS` with the password of the neutron user; replace `METADATA_SECRET` with a suitable metadata proxy secret.

Synchronize the database:

```shell
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
    --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
```

Restart the compute API service:

```shell
systemctl restart openstack-nova-api.service
```

Start the network services:

```shell
systemctl enable neutron-server.service neutron-linuxbridge-agent.service \    (CTL)
                 neutron-dhcp-agent.service neutron-metadata-agent.service \
                 neutron-l3-agent.service
systemctl restart neutron-server.service neutron-linuxbridge-agent.service \   (CTL)
                  neutron-dhcp-agent.service neutron-metadata-agent.service \
                  neutron-l3-agent.service

systemctl enable neutron-linuxbridge-agent.service                                     (CPT)
systemctl restart neutron-linuxbridge-agent.service openstack-nova-compute.service     (CPT)
```

Verify that the neutron agents started successfully:

```shell
openstack network agent list
```
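With all agents up, a quick functional check is to create a flat provider network against the `provider` physical network mapped above. This mirrors the upstream launch-an-instance steps and is only a sketch; the 203.0.113.0/24 range, gateway and DNS server are placeholders to be replaced with the actual provider network values:

```shell
source ~/.admin-openrc
openstack network create --share --external \
    --provider-physical-network provider \
    --provider-network-type flat provider
openstack subnet create --network provider \
    --allocation-pool start=203.0.113.101,end=203.0.113.250 \
    --dns-nameserver 8.8.8.8 --gateway 203.0.113.1 \
    --subnet-range 203.0.113.0/24 provider
```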
### Cinder installation

Create the database, service credentials and API endpoints.

Create the database:

```shell
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE cinder;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \
IDENTIFIED BY 'CINDER_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \
IDENTIFIED BY 'CINDER_DBPASS';
MariaDB [(none)]> exit
```

Note: replace `CINDER_DBPASS` with the password you set for the cinder database.

```shell
source ~/.admin-openrc
```

Create the cinder service credentials:

```shell
openstack user create --domain default --password-prompt cinder
openstack role add --project service --user cinder admin
openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
```

Create the block storage service API endpoints:

```shell
openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s
```

Install the packages:

```shell
yum install openstack-cinder-api openstack-cinder-scheduler    (CTL)

yum install lvm2 device-mapper-persistent-data scsi-target-utils rpcbind nfs-utils \    (STG)
            openstack-cinder-volume openstack-cinder-backup
```

Prepare the storage device (the following is only an example):

```shell
pvcreate /dev/vdb
vgcreate cinder-volumes /dev/vdb

vim /etc/lvm/lvm.conf

devices {
    ...
    filter = [ "a/vdb/", "r/.*/"]
```

Explanation: in the `devices` section, add a filter that accepts the /dev/vdb device and rejects all other devices.

Prepare NFS:

```shell
mkdir -p /root/cinder/backup

cat << EOF >> /etc/exports
/root/cinder/backup 192.168.1.0/24(rw,sync,no_root_squash,no_all_squash)
EOF
```

Configure cinder:

```shell
vim /etc/cinder/cinder.conf

[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone
my_ip = 10.0.0.11
enabled_backends = lvm                                         (STG)
backup_driver = cinder.backup.drivers.nfs.NFSBackupDriver      (STG)
backup_share = HOST:PATH                                       (STG)

[database]
connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = cinder
password = CINDER_PASS

[oslo_concurrency]
lock_path = /var/lib/cinder/tmp

[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver      (STG)
volume_group = cinder-volumes                                  (STG)
iscsi_protocol = iscsi                                         (STG)
iscsi_helper = tgtadm                                          (STG)
```

Explanation: the `[database]` section configures the database access; the `[DEFAULT]` section configures the RabbitMQ message queue access and `my_ip`; the `[DEFAULT]` and `[keystone_authtoken]` sections configure the identity service access; the `[oslo_concurrency]` section configures the lock path.

Note: replace `CINDER_DBPASS` with the password of the cinder database; replace `RABBIT_PASS` with the password of the openstack account in RabbitMQ; set `my_ip` to the management IP address of the controller node; replace `CINDER_PASS` with the password of the cinder user; replace `HOST:PATH` with the NFS host IP and the shared path.

Synchronize the database:

```shell
su -s /bin/sh -c "cinder-manage db sync" cinder    (CTL)
```
Configure nova:

```shell
vim /etc/nova/nova.conf    (CTL)

[cinder]
os_region_name = RegionOne
```

Restart the compute API service:

```shell
systemctl restart openstack-nova-api.service
```

Start the cinder services:

```shell
systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service    (CTL)
systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service     (CTL)

systemctl enable rpcbind.service nfs-server.service tgtd.service iscsid.service \   (STG)
                 openstack-cinder-volume.service \
                 openstack-cinder-backup.service
systemctl start rpcbind.service nfs-server.service tgtd.service iscsid.service \    (STG)
                openstack-cinder-volume.service \
                openstack-cinder-backup.service
```

Note: when cinder attaches volumes through tgtadm, modify /etc/tgt/tgtd.conf with the following content so that tgtd can discover the iscsi targets of cinder-volume:

```shell
include /var/lib/cinder/volumes/*
```

Verify the installation:

```shell
source ~/.admin-openrc

openstack volume service list
```
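Beyond `openstack volume service list`, creating and deleting a small test volume exercises the scheduler and the LVM backend end to end; a minimal sketch:

```shell
source ~/.admin-openrc
openstack volume create --size 1 test-volume
openstack volume list          # the status should move from "creating" to "available"
openstack volume delete test-volume
```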
### Horizon installation

Install the packages:

```shell
yum install openstack-dashboard
```

Modify the variables in the configuration file:

```shell
vim /etc/openstack-dashboard/local_settings

OPENSTACK_HOST = "controller"
ALLOWED_HOSTS = ['*', ]

SESSION_ENGINE = 'django.contrib.sessions.backends.cache'

CACHES = {
    'default': {
         'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
         'LOCATION': 'controller:11211',
    }
}

OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST

OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True

OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"

OPENSTACK_KEYSTONE_DEFAULT_ROLE = "member"

WEBROOT = '/dashboard'
POLICY_FILES_PATH = "/etc/openstack-dashboard"

OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 3,
}
```

Restart the httpd service:

```shell
systemctl restart httpd.service memcached.service
```

Verify the installation: open a browser, enter the URL http://HOSTIP/dashboard/ and log in to horizon.

Note: replace `HOSTIP` with the IP address of the controller node's management plane.

### Tempest installation

Tempest is the integration test service of OpenStack. It is recommended if you need comprehensive automated testing of the functionality of an installed OpenStack environment; otherwise it is optional.

Install Tempest:

```shell
yum install openstack-tempest
```

Initialize a workspace directory:

```shell
tempest init mytest
```

Modify the configuration file:

```shell
cd mytest
vi etc/tempest.conf
```

The information about the current OpenStack environment needs to be configured in tempest.conf; for the details, refer to the official example.

Run the tests:

```shell
tempest run
```

Install the tempest extensions (optional). The OpenStack services themselves also provide tempest test packages, and users can install them to enrich the tempest test content. In Train, extension tests for Cinder, Glance, Keystone, Ironic and Trove are provided; they can be installed and used as follows:

```shell
yum install python3-cinder-tempest-plugin python3-glance-tempest-plugin python3-ironic-tempest-plugin python3-keystone-tempest-plugin python3-trove-tempest-plugin
```
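A full `tempest run` can take a long time. During bring-up it is common to restrict the run to a single service; a sketch, assuming the `mytest` workspace created above and tempest's standard `--list-tests`/`--regex` options:

```shell
cd mytest
# List the discovered test cases without running them.
tempest run --list-tests
# Run only the Identity API tests as a quick smoke check.
tempest run --regex '^tempest\.api\.identity'
```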
### Ironic installation

Ironic is the bare metal service of OpenStack. It is recommended if you need to provision bare metal machines; otherwise it is optional.

Set up the database. The bare metal service stores its information in a database. Create an `ironic` database that the `ironic` user can access, replacing `IRONIC_DBPASSWORD` with a suitable password:

```shell
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE ironic CHARACTER SET utf8;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'localhost' \
IDENTIFIED BY 'IRONIC_DBPASSWORD';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'%' \
IDENTIFIED BY 'IRONIC_DBPASSWORD';
```

Install the packages:

```shell
yum install openstack-ironic-api openstack-ironic-conductor python3-ironicclient
```

Start the services:

```shell
systemctl enable openstack-ironic-api openstack-ironic-conductor
systemctl start openstack-ironic-api openstack-ironic-conductor
```

Create the service user authentication.

1. Create the Bare Metal service user:

```shell
openstack user create --password IRONIC_PASSWORD \
    --email ironic@example.com ironic
openstack role add --project service --user ironic admin
openstack service create --name ironic \
    --description "Ironic baremetal provisioning service" baremetal
```

2. Create the Bare Metal service access endpoints:

```shell
openstack endpoint create --region RegionOne baremetal admin http://$IRONIC_NODE:6385
openstack endpoint create --region RegionOne baremetal public http://$IRONIC_NODE:6385
openstack endpoint create --region RegionOne baremetal internal http://$IRONIC_NODE:6385
```

Configure the ironic-api service. The configuration file path is /etc/ironic/ironic.conf.

1. Configure the location of the database via the `connection` option, as shown below, replacing `IRONIC_DBPASSWORD` with the password of the `ironic` user and `DB_IP` with the IP address of the DB server:

```shell
[database]

# The SQLAlchemy connection string used to connect to the
# database (string value)
connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic
```

2. Configure the ironic-api service to use the RabbitMQ message broker via the following option, replacing `RPC_*` with the detailed RabbitMQ address and credentials:

```shell
[DEFAULT]

# A URL representing the messaging driver to use and its full
# configuration. (string value)
transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
```

Users can also replace rabbitmq with the json-rpc mode themselves.

3. Configure the ironic-api service to use the credentials of the identity service, replacing `PUBLIC_IDENTITY_IP` with the public IP of the identity server, `PRIVATE_IDENTITY_IP` with the private IP of the identity server and `IRONIC_PASSWORD` with the password of the `ironic` user in the identity service:

```shell
[DEFAULT]

# Authentication strategy used by ironic-api: one of
# "keystone" or "noauth". "noauth" should not be used in a
# production environment because all authentication will be
# disabled. (string value)
auth_strategy=keystone

[keystone_authtoken]

# Authentication type to load (string value)
auth_type=password

# Complete public Identity API endpoint (string value)
www_authenticate_uri=http://PUBLIC_IDENTITY_IP:5000

# Complete admin Identity API endpoint. (string value)
auth_url=http://PRIVATE_IDENTITY_IP:5000

# Service username. (string value)
username=ironic

# Service account password. (string value)
password=IRONIC_PASSWORD

# Service tenant name. (string value)
project_name=service

# Domain name containing project (string value)
project_domain_name=Default

# User's domain name (string value)
user_domain_name=Default
```

4. Create the bare metal service database tables:

```shell
ironic-dbsync --config-file /etc/ironic/ironic.conf create_schema
```

5. Restart the ironic-api service:

```shell
sudo systemctl restart openstack-ironic-api
```
Configure the ironic-conductor service.

1. Replace `HOST_IP` with the IP of the conductor host:

```shell
[DEFAULT]

# IP address of this host. If unset, will determine the IP
# programmatically. If unable to do so, will use "127.0.0.1".
# (string value)
my_ip=HOST_IP
```

2. Configure the location of the database. ironic-conductor should use the same configuration as ironic-api. Replace `IRONIC_DBPASSWORD` with the password of the `ironic` user and `DB_IP` with the IP address of the DB server:

```shell
[database]

# The SQLAlchemy connection string to use to connect to the
# database. (string value)
connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic
```

3. Configure the service to use the RabbitMQ message broker via the following option. ironic-conductor should use the same configuration as ironic-api. Replace `RPC_*` with the detailed RabbitMQ address and credentials:

```shell
[DEFAULT]

# A URL representing the messaging driver to use and its full
# configuration. (string value)
transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
```

Users can also replace rabbitmq with the json-rpc mode themselves.

4. Configure the credentials for accessing other OpenStack services.

To communicate with other OpenStack services, the bare metal service needs to authenticate with the OpenStack Identity service using service user credentials when requesting those services. The credentials of these users must be configured in each configuration section associated with the corresponding service:

- `[neutron]` - access the OpenStack networking service
- `[glance]` - access the OpenStack image service
- `[swift]` - access the OpenStack object storage service
- `[cinder]` - access the OpenStack block storage service
- `[inspector]` - access the OpenStack bare metal introspection service
- `[service_catalog]` - a special entry that holds the credentials the bare metal service uses to discover its own API URL endpoint as registered in the OpenStack Identity service catalog

For simplicity, the same service user can be used for all services. For backward compatibility, it should be the same user configured in `[keystone_authtoken]` of the ironic-api service. This is not mandatory, however; a different service user can be created and configured for each service.

In the example below, the user's authentication information for accessing the OpenStack networking service is configured as follows:

- the networking service is deployed in the identity service region named RegionOne and only the public endpoint interface is registered in the service catalog
- requests use a specific CA SSL certificate for HTTPS connections
- the same service user as configured for the ironic-api service
- the dynamic password authentication plugin discovers a suitable identity service API version based on the other options

```shell
[neutron]

# Authentication type to load (string value)
auth_type = password

# Authentication URL (string value)
auth_url=https://IDENTITY_IP:5000/

# Username (string value)
username=ironic

# User's password (string value)
password=IRONIC_PASSWORD

# Project name to scope to (string value)
project_name=service

# Domain ID containing project (string value)
project_domain_id=default

# User's domain id (string value)
user_domain_id=default

# PEM encoded Certificate Authority to use when verifying
# HTTPs connections. (string value)
cafile=/opt/stack/data/ca-bundle.pem

# The default region_name for endpoint URL discovery. (string
# value)
region_name = RegionOne

# List of interfaces, in order of preference, for endpoint
# URL. (list value)
valid_interfaces=public
```
By default, to communicate with other services, the bare metal service tries to discover a suitable endpoint for each service through the service catalog of the identity service. If you want to use a different endpoint for a particular service, specify it via the `endpoint_override` option in the bare metal service configuration file:

```shell
[neutron]
...
endpoint_override =
```

5. Configure the allowed drivers and hardware types.

Set the hardware types allowed by the ironic-conductor service via `enabled_hardware_types`:

```shell
[DEFAULT]
enabled_hardware_types = ipmi
```

Configure the hardware interfaces:

```shell
enabled_boot_interfaces = pxe
enabled_deploy_interfaces = direct,iscsi
enabled_inspect_interfaces = inspector
enabled_management_interfaces = ipmitool
enabled_power_interfaces = ipmitool
```

Configure the interface defaults:

```shell
[DEFAULT]
default_deploy_interface = direct
default_network_interface = neutron
```

If any driver that uses Direct deploy is enabled, the Swift backend of the image service must be installed and configured. The Ceph object gateway (RADOS gateway) is also supported as an image service backend.

6. Restart the ironic-conductor service:

```shell
sudo systemctl restart openstack-ironic-conductor
```

Configure the httpd service.

Create the httpd root directory used by ironic and set its owner and group. The directory path must match the `http_root` option in the `[deploy]` section of /etc/ironic/ironic.conf:

```shell
mkdir -p /var/lib/ironic/httproot
chown ironic.ironic /var/lib/ironic/httproot
```

Install and configure the httpd service.

1. Install the httpd service (skip this step if it is already installed):

```shell
yum install httpd -y
```
2. Create the /etc/httpd/conf.d/openstack-ironic-httpd.conf file with the following content (the virtual host and directory container tags were lost in extraction and are restored here to give the directives their usual context):

```shell
Listen 8080

<VirtualHost *:8080>
    ServerName ironic.openeuler.com

    ErrorLog "/var/log/httpd/openstack-ironic-httpd-error_log"
    CustomLog "/var/log/httpd/openstack-ironic-httpd-access_log" "%h %l %u %t \"%r\" %>s %b"

    DocumentRoot "/var/lib/ironic/httproot"
    <Directory "/var/lib/ironic/httproot">
        Options Indexes FollowSymLinks
        Require all granted
    </Directory>
    LogLevel warn
    AddDefaultCharset UTF-8
    EnableSendfile on
</VirtualHost>
```

Note that the listening port must match the port specified in the `http_url` option of the `[deploy]` section of /etc/ironic/ironic.conf.

Restart the httpd service:

```shell
systemctl restart httpd
```

7. Build the deploy ramdisk image.

The ramdisk image for Train can be built with the ironic-python-agent service or the disk-image-builder tool, or with the latest ironic-python-agent-builder from the community. Users can also choose other tools to build it.

If you use the native Train tools, install the corresponding package:

```shell
yum install openstack-ironic-python-agent
```

or

```shell
yum install diskimage-builder
```

For the specific usage, refer to the official documentation.

The following describes the complete process of building the deploy image used by ironic with ironic-python-agent-builder.

Install ironic-python-agent-builder.

1. Install the tool:

```shell
pip install ironic-python-agent-builder
```

2. Modify the python interpreter in the following files:

```shell
/usr/bin/yum
/usr/libexec/urlgrabber-ext-down
```
3. Install the other required tools:

```shell
yum install git
```

Because `DIB` depends on the `semanage` command, make sure the command is available before building the image by running `semanage --help`. If the command is not found, install it:

```shell
# First query which package needs to be installed
[root@localhost ~]# yum provides /usr/sbin/semanage
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirror.vcu.edu
 * extras: mirror.vcu.edu
 * updates: mirror.math.princeton.edu
policycoreutils-python-2.5-34.el7.aarch64 : SELinux policy core python utilities
Repo        : base
Matched from:
Filename    : /usr/sbin/semanage
# Install it
[root@localhost ~]# yum install policycoreutils-python
```

Build the image.

If the architecture is `arm`, add:

```shell
export ARCH=aarch64
```

Basic usage:

```shell
usage: ironic-python-agent-builder [-h] [-r RELEASE] [-o OUTPUT] [-e ELEMENT]
                                   [-b BRANCH] [-v] [--extra-args EXTRA_ARGS]
                                   distribution

positional arguments:
  distribution          Distribution to use

optional arguments:
  -h, --help            show this help message and exit
  -r RELEASE, --release RELEASE
                        Distribution release to use
  -o OUTPUT, --output OUTPUT
                        Output base file name
  -e ELEMENT, --element ELEMENT
                        Additional DIB element to use
  -b BRANCH, --branch BRANCH
                        If set, override the branch that is used for ironic-
                        python-agent and requirements
  -v, --verbose         Enable verbose logging in diskimage-builder
  --extra-args EXTRA_ARGS
                        Extra arguments to pass to diskimage-builder
```

Example:

```shell
ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky
```

Allow SSH login: initialize the environment variables, then build the image:

```shell
export DIB_DEV_USER_USERNAME=ipa \
export DIB_DEV_USER_PWDLESS_SUDO=yes \
export DIB_DEV_USER_PASSWORD='123'
ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky -e selinux-permissive -e devuser
```

Specify the code repository: initialize the corresponding environment variables, then build the image:

```shell
# Specify the repository address and version
DIB_REPOLOCATION_ironic_python_agent=git@172.20.2.149:liuzz/ironic-python-agent.git
DIB_REPOREF_ironic_python_agent=origin/develop

# Clone the code directly from gerrit
DIB_REPOLOCATION_ironic_python_agent=https://review.opendev.org/openstack/ironic-python-agent
DIB_REPOREF_ironic_python_agent=refs/changes/43/701043/1
```

Reference: [source-repositories](https://docs.openstack.org/diskimage-builder/latest/elements/source-repositories/README.html).

Specifying the repository address and version has been verified to work.

Note: the pxe configuration file template in native OpenStack does not support the arm64 architecture, so the native OpenStack code needs to be modified:

In Train, the community ironic still does not support uefi pxe boot on arm64. The symptom is that the generated grub.cfg file (usually under /tftpboot/) has an incorrect format, which causes the pxe boot to fail; users need to modify the code that generates grub.cfg themselves.
TLS error when ironic sends command-status query requests to ipa: in Train, both ipa and ironic send requests to each other with TLS authentication enabled by default; disable it as described in the official documentation.

Modify the following configuration in the ironic configuration file (/etc/ironic/ironic.conf) and add `ipa-insecure=1`:

```shell
[agent]
verify_ca = False

[pxe]
pxe_append_params = nofb nomodeset vga=normal coreos.autologin ipa-insecure=1
```

In the ramdisk image, add the ipa configuration file /etc/ironic_python_agent/ironic_python_agent.conf and configure TLS as follows (the /etc/ironic_python_agent directory needs to be created in advance):

```shell
/etc/ironic_python_agent/ironic_python_agent.conf

[DEFAULT]
enable_auto_tls = False
```

Set the permissions:

```shell
chown -R ipa.ipa /etc/ironic_python_agent/
```

Modify the service startup file of the ipa service and add the configuration file option:

```shell
vim usr/lib/systemd/system/ironic-python-agent.service

[Unit]
Description=Ironic Python Agent
After=network-online.target

[Service]
ExecStartPre=/sbin/modprobe vfat
ExecStart=/usr/local/bin/ironic-python-agent --config-file /etc/ironic_python_agent/ironic_python_agent.conf
Restart=always
RestartSec=30s

[Install]
WantedBy=multi-user.target
```

In Train we also provide services such as ironic-inspector, which users can install according to their own needs.

### Kolla installation

Kolla provides production-ready containerized deployment for OpenStack services.

Installing Kolla is very simple; only the corresponding RPM packages need to be installed:

```shell
yum install openstack-kolla openstack-kolla-ansible
```

After the installation, commands such as `kolla-ansible`, `kolla-build`, `kolla-genpwd` and `kolla-mergepwd` can be used to build images and deploy container environments.
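As a rough illustration of the workflow those commands enable, an all-in-one deployment typically looks like the following. The inventory path and the globals.yml settings are the usual kolla-ansible defaults and may differ on openEuler, so treat this only as a sketch:

```shell
# Generate passwords into /etc/kolla/passwords.yml.
kolla-genpwd
# Adjust /etc/kolla/globals.yml (base distro, network interface, VIP address, ...).
vim /etc/kolla/globals.yml
# Prepare the host and deploy, using the bundled all-in-one inventory.
kolla-ansible -i /usr/share/kolla-ansible/ansible/inventory/all-in-one bootstrap-servers
kolla-ansible -i /usr/share/kolla-ansible/ansible/inventory/all-in-one prechecks
kolla-ansible -i /usr/share/kolla-ansible/ansible/inventory/all-in-one deploy
```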
### Trove installation

Trove is the database service of OpenStack. It is recommended if users consume database services provided by OpenStack; otherwise it is optional.

1. Set up the database.

The database service stores its information in a database. Create a `trove` database that the `trove` user can access, replacing `TROVE_DBPASSWORD` with a suitable password:

```shell
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE trove CHARACTER SET utf8;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'localhost' \
IDENTIFIED BY 'TROVE_DBPASSWORD';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'%' \
IDENTIFIED BY 'TROVE_DBPASSWORD';
```

2. Create the service user authentication.

1) Create the Trove service user:

```shell
openstack user create --domain default --password-prompt trove
openstack role add --project service --user trove admin
openstack service create --name trove --description "Database" database
```

Explanation: replace `TROVE_PASSWORD` with the password of the trove user.

2) Create the Database service access endpoints:

```shell
openstack endpoint create --region RegionOne database public http://controller:8779/v1.0/%\(tenant_id\)s
openstack endpoint create --region RegionOne database internal http://controller:8779/v1.0/%\(tenant_id\)s
openstack endpoint create --region RegionOne database admin http://controller:8779/v1.0/%\(tenant_id\)s
```

3. Install and configure the Trove components.

1) Install the Trove packages:

```shell
yum install openstack-trove python3-troveclient
```

2) Configure trove.conf:

```shell
vim /etc/trove/trove.conf

[DEFAULT]
log_dir = /var/log/trove
trove_auth_url = http://controller:5000/
nova_compute_url = http://controller:8774/v2
cinder_url = http://controller:8776/v1
swift_url = http://controller:8080/v1/AUTH_
rpc_backend = rabbit
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672
auth_strategy = keystone
add_addresses = True
api_paste_config = /etc/trove/api-paste.ini
nova_proxy_admin_user = admin
nova_proxy_admin_pass = ADMIN_PASSWORD
nova_proxy_admin_tenant_name = service
taskmanager_manager = trove.taskmanager.manager.Manager
use_nova_server_config_drive = True
# Set these if using Neutron Networking
network_driver = trove.network.neutron.NeutronDriver
network_label_regex = .*

[database]
connection = mysql+pymysql://trove:TROVE_DBPASSWORD@controller/trove

[keystone_authtoken]
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = trove
password = TROVE_PASSWORD
```

Explanation: in the `[DEFAULT]` section, `nova_compute_url` and `cinder_url` are the endpoints created by Nova and Cinder in Keystone; `nova_proxy_XXX` is the information of a user that can access the Nova service, and the example above uses the `admin` user; `transport_url` is the RabbitMQ connection information, where `RABBIT_PASS` is replaced with the RabbitMQ password. In the `[database]` section, `connection` is the information of the database created for Trove in mysql earlier. In Trove's own user information, replace `TROVE_PASSWORD` with the actual password of the trove user.

3) Configure trove-guestagent.conf:

```shell
vim /etc/trove/trove-guestagent.conf

rabbit_host = controller
rabbit_password = RABBIT_PASS
trove_auth_url = http://controller:5000/
```

Explanation: guestagent is a separate component of Trove that must be built in advance into the virtual machine image that Trove creates through Nova. After a database instance is created, the guestagent process starts and reports heartbeats to Trove through the message queue (RabbitMQ), so the RabbitMQ user and password information must be configured here. Starting from Victoria, Trove uses a single unified image to run different types of databases, and the database services run in Docker containers inside the guest virtual machine. Replace `RABBIT_PASS` with the RabbitMQ password.
4) Generate the Trove database tables:

```shell
su -s /bin/sh -c "trove-manage db_sync" trove
```

4. Finish the installation and configuration.

Configure the Trove services to start at boot:

```shell
systemctl enable openstack-trove-api.service \
                 openstack-trove-taskmanager.service \
                 openstack-trove-conductor.service
```

Start the services:

```shell
systemctl start openstack-trove-api.service \
                openstack-trove-taskmanager.service \
                openstack-trove-conductor.service
```

### Swift installation

Swift provides an elastic, scalable and highly available distributed object storage service, suitable for storing large amounts of unstructured data.

Create the service credentials and API endpoints.

Create the service credentials:

```shell
# Create the swift user:
openstack user create --domain default --password-prompt swift
# Add the admin role to the swift user:
openstack role add --project service --user swift admin
# Create the swift service entity:
openstack service create --name swift --description "OpenStack Object Storage" object-store
```

Create the swift API endpoints:

```shell
openstack endpoint create --region RegionOne object-store public http://controller:8080/v1/AUTH_%\(project_id\)s
openstack endpoint create --region RegionOne object-store internal http://controller:8080/v1/AUTH_%\(project_id\)s
openstack endpoint create --region RegionOne object-store admin http://controller:8080/v1
```

Install the packages:

```shell
yum install openstack-swift-proxy python3-swiftclient python3-keystoneclient python3-keystonemiddleware memcached    (CTL)
```

Configure proxy-server: the Swift RPM package already contains a basically usable proxy-server.conf; only the IP and the swift password in it need to be modified manually.

Note: replace the password with the one you selected for the swift user in the identity service.

Install and configure the storage nodes (STG).

Install the supporting packages:

```shell
yum install xfsprogs rsync
```

Format the /dev/vdb and /dev/vdc devices as XFS:

```shell
mkfs.xfs /dev/vdb
mkfs.xfs /dev/vdc
```

Create the mount point directory structure:

```shell
mkdir -p /srv/node/vdb
mkdir -p /srv/node/vdc
```

Find the UUIDs of the new partitions:

```shell
blkid
```

Edit the /etc/fstab file and add the following to it:

```shell
UUID="" /srv/node/vdb xfs noatime 0 2
UUID="" /srv/node/vdc xfs noatime 0 2
```

Mount the devices:

```shell
mount /srv/node/vdb
mount /srv/node/vdc
```

Note: if users do not need the disaster recovery function, only one device needs to be created in the steps above, and the rsync configuration below can be skipped as well.
file = /var/lock/container.lock [object] max connections = 2 path = /srv/node/ read only = False lock file = /var/lock/object.lock \u66ff\u6362MANAGEMENT_INTERFACE_IP_ADDRESS\u4e3a\u5b58\u50a8\u8282\u70b9\u4e0a\u7ba1\u7406\u7f51\u7edc\u7684IP\u5730\u5740 \u542f\u52a8rsyncd\u670d\u52a1\u5e76\u914d\u7f6e\u5b83\u5728\u7cfb\u7edf\u542f\u52a8\u65f6\u542f\u52a8: systemctl enable rsyncd.service systemctl start rsyncd.service \u5728\u5b58\u50a8\u8282\u70b9\u5b89\u88c5\u548c\u914d\u7f6e\u7ec4\u4ef6 \uff08STG\uff09 \u5b89\u88c5\u8f6f\u4ef6\u5305: yum install openstack-swift-account openstack-swift-container openstack-swift-object \u7f16\u8f91/etc/swift\u76ee\u5f55\u7684account-server.conf\u3001container-server.conf\u548cobject-server.conf\u6587\u4ef6\uff0c\u66ff\u6362bind_ip\u4e3a\u5b58\u50a8\u8282\u70b9\u4e0a\u7ba1\u7406\u7f51\u7edc\u7684IP\u5730\u5740\u3002 \u786e\u4fdd\u6302\u8f7d\u70b9\u76ee\u5f55\u7ed3\u6784\u7684\u6b63\u786e\u6240\u6709\u6743: chown -R swift:swift /srv/node \u521b\u5efarecon\u76ee\u5f55\u5e76\u786e\u4fdd\u5176\u62e5\u6709\u6b63\u786e\u7684\u6240\u6709\u6743\uff1a mkdir -p /var/cache/swift chown -R root:swift /var/cache/swift chmod -R 775 /var/cache/swift \u521b\u5efa\u8d26\u53f7\u73af (CTL) \u5207\u6362\u5230/etc/swift\u76ee\u5f55\u3002 cd /etc/swift \u521b\u5efa\u57fa\u7840account.builder\u6587\u4ef6: swift-ring-builder account.builder create 10 1 1 \u5c06\u6bcf\u4e2a\u5b58\u50a8\u8282\u70b9\u6dfb\u52a0\u5230\u73af\u4e2d\uff1a swift-ring-builder account.builder add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6202 --device DEVICE_NAME --weight DEVICE_WEIGHT \u66ff\u6362STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS\u4e3a\u5b58\u50a8\u8282\u70b9\u4e0a\u7ba1\u7406\u7f51\u7edc\u7684IP\u5730\u5740\u3002\u66ff\u6362DEVICE_NAME\u4e3a\u540c\u4e00\u5b58\u50a8\u8282\u70b9\u4e0a\u7684\u5b58\u50a8\u8bbe\u5907\u540d\u79f0 \u6ce8\u610f *** *\u5bf9\u6bcf\u4e2a\u5b58\u50a8\u8282\u70b9\u4e0a\u7684\u6bcf\u4e2a\u5b58\u50a8\u8bbe\u5907\u91cd\u590d\u6b64\u547d\u4ee4 \u9a8c\u8bc1\u6212\u6307\u5185\u5bb9\uff1a swift-ring-builder account.builder \u91cd\u65b0\u5e73\u8861\u6212\u6307\uff1a swift-ring-builder account.builder rebalance \u521b\u5efa\u5bb9\u5668\u73af (CTL) \u5207\u6362\u5230 /etc/swift \u76ee\u5f55\u3002 \u521b\u5efa\u57fa\u7840 container.builder \u6587\u4ef6\uff1a swift-ring-builder container.builder create 10 1 1 \u5c06\u6bcf\u4e2a\u5b58\u50a8\u8282\u70b9\u6dfb\u52a0\u5230\u73af\u4e2d\uff1a swift-ring-builder container.builder \\ add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6201 \\ --device DEVICE_NAME --weight 100 \u66ff\u6362STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS\u4e3a\u5b58\u50a8\u8282\u70b9\u4e0a\u7ba1\u7406\u7f51\u7edc\u7684IP\u5730\u5740\u3002\u66ff\u6362DEVICE_NAME\u4e3a\u540c\u4e00\u5b58\u50a8\u8282\u70b9\u4e0a\u7684\u5b58\u50a8\u8bbe\u5907\u540d\u79f0 \u6ce8\u610f \u5bf9\u6bcf\u4e2a\u5b58\u50a8\u8282\u70b9\u4e0a\u7684\u6bcf\u4e2a\u5b58\u50a8\u8bbe\u5907\u91cd\u590d\u6b64\u547d\u4ee4 \u9a8c\u8bc1\u6212\u6307\u5185\u5bb9\uff1a swift-ring-builder container.builder \u91cd\u65b0\u5e73\u8861\u6212\u6307\uff1a swift-ring-builder container.builder rebalance \u521b\u5efa\u5bf9\u8c61\u73af (CTL) \u5207\u6362\u5230 /etc/swift \u76ee\u5f55\u3002 \u521b\u5efa\u57fa\u7840 object.builder \u6587\u4ef6\uff1a swift-ring-builder object.builder create 10 1 1 \u5c06\u6bcf\u4e2a\u5b58\u50a8\u8282\u70b9\u6dfb\u52a0\u5230\u73af\u4e2d swift-ring-builder object.builder \\ add --region 1 --zone 1 --ip 
STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6200 \\ --device DEVICE_NAME --weight 100 \u66ff\u6362STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS\u4e3a\u5b58\u50a8\u8282\u70b9\u4e0a\u7ba1\u7406\u7f51\u7edc\u7684IP\u5730\u5740\u3002\u66ff\u6362DEVICE_NAME\u4e3a\u540c\u4e00\u5b58\u50a8\u8282\u70b9\u4e0a\u7684\u5b58\u50a8\u8bbe\u5907\u540d\u79f0 \u6ce8\u610f *** *\u5bf9\u6bcf\u4e2a\u5b58\u50a8\u8282\u70b9\u4e0a\u7684\u6bcf\u4e2a\u5b58\u50a8\u8bbe\u5907\u91cd\u590d\u6b64\u547d\u4ee4 \u9a8c\u8bc1\u6212\u6307\u5185\u5bb9\uff1a swift-ring-builder object.builder \u91cd\u65b0\u5e73\u8861\u6212\u6307\uff1a swift-ring-builder object.builder rebalance \u5206\u53d1\u73af\u914d\u7f6e\u6587\u4ef6\uff1a \u5c06 account.ring.gz \uff0c container.ring.gz \u4ee5\u53ca object.ring.gz \u6587\u4ef6\u590d\u5236\u5230\u6bcf\u4e2a\u5b58\u50a8\u8282\u70b9\u548c\u8fd0\u884c\u4ee3\u7406\u670d\u52a1\u7684\u4efb\u4f55\u5176\u4ed6\u8282\u70b9\u4e0a\u7684 /etc/swift \u76ee\u5f55\u3002 \u5b8c\u6210\u5b89\u88c5 \u7f16\u8f91 /etc/swift/swift.conf \u6587\u4ef6 [swift-hash] swift_hash_path_suffix = test-hash swift_hash_path_prefix = test-hash [storage-policy:0] name = Policy-0 default = yes \u7528\u552f\u4e00\u503c\u66ff\u6362 test-hash \u5c06swift.conf\u6587\u4ef6\u590d\u5236\u5230/etc/swift\u6bcf\u4e2a\u5b58\u50a8\u8282\u70b9\u548c\u8fd0\u884c\u4ee3\u7406\u670d\u52a1\u7684\u4efb\u4f55\u5176\u4ed6\u8282\u70b9\u4e0a\u7684\u76ee\u5f55\u3002 \u5728\u6240\u6709\u8282\u70b9\u4e0a\uff0c\u786e\u4fdd\u914d\u7f6e\u76ee\u5f55\u7684\u6b63\u786e\u6240\u6709\u6743\uff1a chown -R root:swift /etc/swift \u5728\u63a7\u5236\u5668\u8282\u70b9\u548c\u8fd0\u884c\u4ee3\u7406\u670d\u52a1\u7684\u4efb\u4f55\u5176\u4ed6\u8282\u70b9\u4e0a\uff0c\u542f\u52a8\u5bf9\u8c61\u5b58\u50a8\u4ee3\u7406\u670d\u52a1\u53ca\u5176\u4f9d\u8d56\u9879\uff0c\u5e76\u5c06\u5b83\u4eec\u914d\u7f6e\u4e3a\u5728\u7cfb\u7edf\u542f\u52a8\u65f6\u542f\u52a8\uff1a systemctl enable openstack-swift-proxy.service memcached.service systemctl start openstack-swift-proxy.service memcached.service \u5728\u5b58\u50a8\u8282\u70b9\u4e0a\uff0c\u542f\u52a8\u5bf9\u8c61\u5b58\u50a8\u670d\u52a1\u5e76\u5c06\u5b83\u4eec\u914d\u7f6e\u4e3a\u5728\u7cfb\u7edf\u542f\u52a8\u65f6\u542f\u52a8\uff1a systemctl enable openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service systemctl start openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service systemctl enable openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service systemctl start openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service systemctl enable openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service systemctl start openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service","title":"Swift \u5b89\u88c5"},{"location":"install/openEuler-22.03-LTS-SP3/OpenStack-train/#cyborg","text":"Cyborg\u4e3aOpenStack\u63d0\u4f9b\u52a0\u901f\u5668\u8bbe\u5907\u7684\u652f\u6301\uff0c\u5305\u62ec GPU, FPGA, ASIC, NP, SoCs, NVMe/NOF SSDs, ODP, DPDK/SPDK\u7b49\u7b49\u3002 
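Steps 6-8 above repeat the same `add` command once per storage device per node. As a small convenience, the loop below sketches what that looks like for the example layout used in this guide (two devices, vdb and vdc, on a single storage node whose management IP 10.0.0.51 is only a placeholder):

```
cd /etc/swift
# add both example devices to the account, container and object rings
for dev in vdb vdc; do
    swift-ring-builder account.builder   add --region 1 --zone 1 --ip 10.0.0.51 --port 6202 --device $dev --weight 100
    swift-ring-builder container.builder add --region 1 --zone 1 --ip 10.0.0.51 --port 6201 --device $dev --weight 100
    swift-ring-builder object.builder    add --region 1 --zone 1 --ip 10.0.0.51 --port 6200 --device $dev --weight 100
done
swift-ring-builder account.builder rebalance
swift-ring-builder container.builder rebalance
swift-ring-builder object.builder rebalance
```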
### Cyborg Installation

Cyborg provides accelerator device support for OpenStack, including GPU, FPGA, ASIC, NP, SoCs, NVMe/NOF SSDs, ODP, DPDK/SPDK and so on.

1. Initialize the database:

```
CREATE DATABASE cyborg;
GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'localhost' IDENTIFIED BY 'CYBORG_DBPASS';
GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'%' IDENTIFIED BY 'CYBORG_DBPASS';
```

2. Create the Keystone resource objects:

```
$ openstack user create --domain default --password-prompt cyborg
$ openstack role add --project service --user cyborg admin
$ openstack service create --name cyborg --description "Acceleration Service" accelerator
$ openstack endpoint create --region RegionOne \
  accelerator public http://:6666/v1
$ openstack endpoint create --region RegionOne \
  accelerator internal http://:6666/v1
$ openstack endpoint create --region RegionOne \
  accelerator admin http://:6666/v1
```

3. Install Cyborg:

```
yum install openstack-cyborg
```

4. Configure Cyborg. Modify /etc/cyborg/cyborg.conf:

```
[DEFAULT]
transport_url = rabbit://%RABBITMQ_USER%:%RABBITMQ_PASSWORD%@%OPENSTACK_HOST_IP%:5672/
use_syslog = False
state_path = /var/lib/cyborg
debug = True

[database]
connection = mysql+pymysql://%DATABASE_USER%:%DATABASE_PASSWORD%@%OPENSTACK_HOST_IP%/cyborg

[service_catalog]
project_domain_id = default
user_domain_id = default
project_name = service
password = PASSWORD
username = cyborg
auth_url = http://%OPENSTACK_HOST_IP%/identity
auth_type = password

[placement]
project_domain_name = Default
project_name = service
user_domain_name = Default
password = PASSWORD
username = placement
auth_url = http://%OPENSTACK_HOST_IP%/identity
auth_type = password

[keystone_authtoken]
memcached_servers = localhost:11211
project_domain_name = Default
project_name = service
user_domain_name = Default
password = PASSWORD
username = cyborg
auth_url = http://%OPENSTACK_HOST_IP%/identity
auth_type = password
```

Replace the usernames, passwords, IP addresses and similar placeholders with the values for your environment.

5. Synchronize the database tables:

```
cyborg-dbsync --config-file /etc/cyborg/cyborg.conf upgrade
```

6. Start the Cyborg services:

```
systemctl enable openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent
systemctl start openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent
```

### Aodh Installation

1. Create the database:

```
CREATE DATABASE aodh;
GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'localhost' IDENTIFIED BY 'AODH_DBPASS';
GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'%' IDENTIFIED BY 'AODH_DBPASS';
```

2. Create the Keystone resource objects:

```
openstack user create --domain default --password-prompt aodh
openstack role add --project service --user aodh admin
openstack service create --name aodh --description "Telemetry" alarming
openstack endpoint create --region RegionOne alarming public http://controller:8042
openstack endpoint create --region RegionOne alarming internal http://controller:8042
openstack endpoint create --region RegionOne alarming admin http://controller:8042
```

3. Install Aodh:

```
yum install openstack-aodh-api openstack-aodh-evaluator openstack-aodh-notifier openstack-aodh-listener openstack-aodh-expirer python3-aodhclient
```

4. Modify the configuration file:

```
[database]
connection = mysql+pymysql://aodh:AODH_DBPASS@controller/aodh

[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = aodh
password = AODH_PASS

[service_credentials]
auth_type = password
auth_url = http://controller:5000/v3
project_domain_id = default
user_domain_id = default
project_name = service
username = aodh
password = AODH_PASS
interface = internalURL
region_name = RegionOne
```

5. Initialize the database:

```
aodh-dbsync
```

6. Start the Aodh services:

```
systemctl enable openstack-aodh-api.service openstack-aodh-evaluator.service openstack-aodh-notifier.service openstack-aodh-listener.service
systemctl start openstack-aodh-api.service openstack-aodh-evaluator.service openstack-aodh-notifier.service openstack-aodh-listener.service
```
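Since python3-aodhclient is installed above, a quick smoke test of the alarming API is possible once the services are up. This is only a sketch and assumes admin credentials are already loaded in the shell environment:

```
# should return an empty alarm table (no alarms defined yet) rather than an
# authentication or connection error
openstack alarm list
```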
### Gnocchi Installation

1. Create the database:

```
CREATE DATABASE gnocchi;
GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'localhost' IDENTIFIED BY 'GNOCCHI_DBPASS';
GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'%' IDENTIFIED BY 'GNOCCHI_DBPASS';
```

2. Create the Keystone resource objects:

```
openstack user create --domain default --password-prompt gnocchi
openstack role add --project service --user gnocchi admin
openstack service create --name gnocchi --description "Metric Service" metric
openstack endpoint create --region RegionOne metric public http://controller:8041
openstack endpoint create --region RegionOne metric internal http://controller:8041
openstack endpoint create --region RegionOne metric admin http://controller:8041
```

3. Install Gnocchi:

```
yum install openstack-gnocchi-api openstack-gnocchi-metricd python3-gnocchiclient
```

4. Modify the configuration file /etc/gnocchi/gnocchi.conf:

```
[api]
auth_mode = keystone
port = 8041
uwsgi_mode = http-socket

[keystone_authtoken]
auth_type = password
auth_url = http://controller:5000/v3
project_domain_name = Default
user_domain_name = Default
project_name = service
username = gnocchi
password = GNOCCHI_PASS
interface = internalURL
region_name = RegionOne

[indexer]
url = mysql+pymysql://gnocchi:GNOCCHI_DBPASS@controller/gnocchi

[storage]
# coordination_url is not required but specifying one will improve
# performance with better workload division across workers.
coordination_url = redis://controller:6379
file_basepath = /var/lib/gnocchi
driver = file
```

5. Initialize the database:

```
gnocchi-upgrade
```

6. Start the Gnocchi services:

```
systemctl enable openstack-gnocchi-api.service openstack-gnocchi-metricd.service
systemctl start openstack-gnocchi-api.service openstack-gnocchi-metricd.service
```

### Ceilometer Installation

1. Create the Keystone resource objects:

```
openstack user create --domain default --password-prompt ceilometer
openstack role add --project service --user ceilometer admin
openstack service create --name ceilometer --description "Telemetry" metering
```

2. Install Ceilometer:

```
yum install openstack-ceilometer-notification openstack-ceilometer-central
```

3. Modify the configuration file /etc/ceilometer/pipeline.yaml:

```
publishers:
    # set address of Gnocchi
    # + filter out Gnocchi-related activity meters (Swift driver)
    # + set default archive policy
    - gnocchi://?filter_project=service&archive_policy=low
```

4. Modify the configuration file /etc/ceilometer/ceilometer.conf:

```
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller

[service_credentials]
auth_type = password
auth_url = http://controller:5000/v3
project_domain_id = default
user_domain_id = default
project_name = service
username = ceilometer
password = CEILOMETER_PASS
interface = internalURL
region_name = RegionOne
```

5. Initialize the database:

```
ceilometer-upgrade
```

6. Start the Ceilometer services:

```
systemctl enable openstack-ceilometer-notification.service openstack-ceilometer-central.service
systemctl start openstack-ceilometer-notification.service openstack-ceilometer-central.service
```
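Once the Ceilometer agents are running, measurements should start showing up in Gnocchi. A rough check, sketched below, assumes the `gnocchi` CLI from python3-gnocchiclient installed above and admin credentials already loaded in the environment:

```
# resources and metrics pushed by Ceilometer should appear in these listings
gnocchi resource list
gnocchi metric list
```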
### Heat Installation

1. Create the heat database and grant it the proper access rights; replace HEAT_DBPASS with a suitable password:

```
CREATE DATABASE heat;
GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'localhost' IDENTIFIED BY 'HEAT_DBPASS';
GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'%' IDENTIFIED BY 'HEAT_DBPASS';
```

2. Create the service credentials: create the heat user and add the admin role to it:

```
openstack user create --domain default --password-prompt heat
openstack role add --project service --user heat admin
```

3. Create the heat and heat-cfn services and their API endpoints:

```
openstack service create --name heat --description "Orchestration" orchestration
openstack service create --name heat-cfn --description "Orchestration" cloudformation
openstack endpoint create --region RegionOne orchestration public http://controller:8004/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne orchestration internal http://controller:8004/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne orchestration admin http://controller:8004/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne cloudformation public http://controller:8000/v1
openstack endpoint create --region RegionOne cloudformation internal http://controller:8000/v1
openstack endpoint create --region RegionOne cloudformation admin http://controller:8000/v1
```

4. Create the additional information needed for stack management, including the heat domain, the admin user heat_domain_admin for that domain, the heat_stack_owner role and the heat_stack_user role:

```
# create the dedicated heat domain for stack projects and users
openstack domain create --description "Stack projects and users" heat
openstack user create --domain heat --password-prompt heat_domain_admin
openstack role add --domain heat --user-domain heat --user heat_domain_admin admin
openstack role create heat_stack_owner
openstack role create heat_stack_user
```

5. Install the packages:

```
yum install openstack-heat-api openstack-heat-api-cfn openstack-heat-engine
```

6. Modify the configuration file /etc/heat/heat.conf:

```
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
heat_metadata_server_url = http://controller:8000
heat_waitcondition_server_url = http://controller:8000/v1/waitcondition
stack_domain_admin = heat_domain_admin
stack_domain_admin_password = HEAT_DOMAIN_PASS
stack_user_domain_name = heat

[database]
connection = mysql+pymysql://heat:HEAT_DBPASS@controller/heat

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = heat
password = HEAT_PASS

[trustee]
auth_type = password
auth_url = http://controller:5000
username = heat
password = HEAT_PASS
user_domain_name = default

[clients_keystone]
auth_uri = http://controller:5000
```

7. Initialize the heat database tables:

```
su -s /bin/sh -c "heat-manage db_sync" heat
```

8. Start the services:

```
systemctl enable openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service
systemctl start openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service
```
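To confirm that the orchestration engine works end to end, a trivial stack can be created. The template and commands below are only a sketch: they assume python3-heatclient is installed (so the `openstack stack` commands are available), that admin credentials are loaded, and the network name heat-test-net is just an example.

```
# minimal-stack.yaml -- a minimal HOT template that creates one Neutron network
heat_template_version: 2018-08-31

resources:
  test_net:
    type: OS::Neutron::Net
    properties:
      name: heat-test-net
```

```
openstack stack create -t minimal-stack.yaml test-stack
openstack stack list
openstack stack delete test-stack
```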
## Quick Deployment with the OpenStack SIG Development Tool oos

oos (openEuler OpenStack SIG) is the command-line tool provided by the OpenStack SIG. Its `oos env` family of commands ships ansible scripts for one-click deployment of OpenStack (all-in-one or a three-node cluster), so users can quickly bring up an OpenStack environment based on openEuler RPM packages. The oos tool supports two ways of providing machines: through a cloud provider (currently only Huawei Cloud is supported) or by taking over existing hosts. The following uses deploying an all-in-one OpenStack environment on Huawei Cloud as an example to illustrate how to use oos.

1. Install the oos tool:

```
pip install openstack-sig-tool
```

2. Configure the Huawei Cloud provider information. Open the /usr/local/etc/oos/oos.conf file and fill in the information of the Huawei Cloud resources you own:

```
[huaweicloud]
ak =
sk =
region = ap-southeast-3
root_volume_size = 100
data_volume_size = 100
security_group_name = oos
image_format = openEuler-%%(release)s-%%(arch)s
vpc_name = oos_vpc
subnet1_name = oos_subnet1
subnet2_name = oos_subnet2
```

3. Configure the OpenStack environment information. Open the /usr/local/etc/oos/oos.conf file and adjust the configuration to the current machine environment and your requirements. The content is as follows:

```
[environment]
mysql_root_password = root
mysql_project_password = root
rabbitmq_password = root
project_identity_password = root
enabled_service = keystone,neutron,cinder,placement,nova,glance,horizon,aodh,ceilometer,cyborg,gnocchi,kolla,heat,swift,trove,tempest
neutron_provider_interface_name = br-ex
default_ext_subnet_range = 10.100.100.0/24
default_ext_subnet_gateway = 10.100.100.1
neutron_dataplane_interface_name = eth1
cinder_block_device = vdb
swift_storage_devices = vdc
swift_hash_path_suffix = ash
swift_hash_path_prefix = has
glance_api_workers = 2
cinder_api_workers = 2
nova_api_workers = 2
nova_metadata_api_workers = 2
nova_conductor_workers = 2
nova_scheduler_workers = 2
neutron_api_workers = 2
horizon_allowed_host = *
kolla_openeuler_plugin = false
```

Key configuration items:

| Configuration item | Description |
|---|---|
| enabled_service | List of services to install; trim it according to your needs |
| neutron_provider_interface_name | Neutron L3 bridge name |
| default_ext_subnet_range | Neutron private network IP range |
| default_ext_subnet_gateway | Neutron private network gateway |
| neutron_dataplane_interface_name | NIC used by Neutron; a new NIC is recommended, to avoid conflicts with existing NICs and losing connectivity to the all-in-one host |
| cinder_block_device | Name of the block device used by Cinder |
| swift_storage_devices | Names of the block devices used by Swift |
| kolla_openeuler_plugin | Whether to enable the kolla plugin; when set to True, kolla supports deploying openEuler containers |

4. Create an openEuler 22.03-LTS-SP3 x86_64 virtual machine on Huawei Cloud to host the all-in-one OpenStack:

```
# sshpass is used during `oos env create` to set up passwordless access to the target VM
dnf install sshpass
oos env create -r 22.03-lts-SP3 -f small -a x86 -n test-oos all_in_one
```

The full list of parameters can be viewed with `oos env create --help`.

5. Deploy the OpenStack all-in-one environment:

```
oos env setup test-oos -r train
```

The full list of parameters can be viewed with `oos env setup --help`.

6. Initialize the tempest environment. If you want to run tempest tests against this environment, run `oos env init`, which automatically creates the OpenStack resources tempest needs:

```
oos env init test-oos
```

After the command succeeds, a mytest directory is generated under the user's home directory; enter it and you can run the `tempest run` command.

If you deploy the OpenStack environment by taking over existing hosts, the overall flow is the same as with Huawei Cloud above: steps 1, 3, 5 and 6 are unchanged, step 2 (configuring the Huawei Cloud provider information) is dropped, and step 4 changes from creating a virtual machine on Huawei Cloud to taking over the host:

```
# sshpass is used during `oos env create` to set up passwordless access to the target host
dnf install sshpass
oos env manage -r 22.03-lts-SP3 -i TARGET_MACHINE_IP -p TARGET_MACHINE_PASSWD -n test-oos
```

Replace TARGET_MACHINE_IP with the target machine's IP and TARGET_MACHINE_PASSWD with the target machine's password. The full list of parameters can be viewed with `oos env manage --help`.

## Deployment with the OpenStack SIG Deployment Tool opensd

opensd is used to deploy the OpenStack component services in batches via scripts.

### Deployment Steps

#### 1. Information to Confirm Before Deployment

- When installing the operating system, selinux must be set to disabled.
- When installing the operating system, set UseDNS to no in the /etc/ssh/sshd_config configuration file.
- The operating system language must be set to English.
- Before deployment, make sure that the /etc/hosts file on all compute nodes contains no entries resolving the compute hosts.

#### 2. Create Ceph Pools and Authentication (Optional)

If you do not use ceph, or already have a ceph cluster, this step can be skipped. Run the following on any ceph monitor node.

**2.1 Create the pools:**

```
ceph osd pool create volumes 2048
ceph osd pool create images 2048
```

**2.2 Initialize the pools:**

```
rbd pool init volumes
rbd pool init images
```

**2.3 Create the user authentication:**

```
ceph auth get-or-create client.glance mon 'profile rbd' osd 'profile rbd pool=images' mgr 'profile rbd pool=images'
ceph auth get-or-create client.cinder mon 'profile rbd' osd 'profile rbd pool=volumes, profile rbd pool=images' mgr 'profile rbd pool=volumes'
```
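The two `ceph auth get-or-create` commands print cephx keys that the glance and cinder services will eventually need. A common follow-up, sketched here and not part of the opensd flow itself (controller1 is only a placeholder for an OpenStack node that already has SSH trust with the monitor), is to save the keys as keyring files on the OpenStack side:

```
# run on the ceph monitor node
ceph auth get-or-create client.glance | ssh root@controller1 tee /etc/ceph/ceph.client.glance.keyring
ceph auth get-or-create client.cinder | ssh root@controller1 tee /etc/ceph/ceph.client.cinder.keyring
ssh root@controller1 chown glance:glance /etc/ceph/ceph.client.glance.keyring
ssh root@controller1 chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring
```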
#### 3. Configure LVM (Optional)

Depending on the physical machine's disk configuration and idle disks, mount extra disk space for the MySQL data directory. An example follows (configure according to the actual situation):

```
fdisk -l

Disk /dev/sdd: 479.6 GB, 479559942144 bytes, 936640512 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk label type: dos
Disk identifier: 0x000ed242
```

Create the partition:

```
parted /dev/sdd mkpart primary 0 -1
```

Create the PV:

```
partprobe /dev/sdd1
pvcreate /dev/sdd1
```

Create and activate the VG:

```
vgcreate vg_mariadb /dev/sdd1
vgchange -ay vg_mariadb
```

Check the VG capacity:

```
vgdisplay

  --- Volume group ---
  VG Name               vg_mariadb
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  2
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               1
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               446.62 GiB
  PE Size               4.00 MiB
  Total PE              114335
  Alloc PE / Size       114176 / 446.00 GiB
  Free  PE / Size       159 / 636.00 MiB
  VG UUID               bVUmDc-VkMu-Vi43-mg27-TEkG-oQfK-TvqdEc
```

Create the LV:

```
lvcreate -L 446G -n lv_mariadb vg_mariadb
```

Format the volume and get its UUID:

```
mkfs.ext4 /dev/mapper/vg_mariadb-lv_mariadb
blkid /dev/mapper/vg_mariadb-lv_mariadb

/dev/mapper/vg_mariadb-lv_mariadb: UUID="98d513eb-5f64-4aa5-810e-dc7143884fa2" TYPE="ext4"
```

Note: 98d513eb-5f64-4aa5-810e-dc7143884fa2 is the UUID of the volume.

Mount the disk:

```
mount /dev/mapper/vg_mariadb-lv_mariadb /var/lib/mysql
rm -rf /var/lib/mysql/*
```

#### 4. Configure the yum Repo

Run the following on the deployment node.

**4.1 Back up the existing yum repos:**

```
mkdir /etc/yum.repos.d/bak/
mv /etc/yum.repos.d/*.repo /etc/yum.repos.d/bak/
```

**4.2 Configure the yum repo:**

```
cat > /etc/yum.repos.d/opensd.repo << EOF
[train]
name=train
baseurl=http://119.3.219.20:82/openEuler:/22.03:/LTS:/SP3:/Epol:/Multi-Version:/OpenStack:/Train/standard_$basearch/
enabled=1
gpgcheck=0

[epol]
name=epol
baseurl=http://119.3.219.20:82/openEuler:/22.03:/LTS:/SP3:/Epol/standard_$basearch/
enabled=1
gpgcheck=0

[everything]
name=everything
baseurl=http://119.3.219.20:82/openEuler:/22.03:/LTS:/SP3/standard_$basearch/
enabled=1
gpgcheck=0
EOF
```

**4.3 Refresh the yum cache:**

```
yum clean all
yum makecache
```

#### 5. Install opensd

Run the following on the deployment node.

**5.1 Clone the opensd source code and install it:**

```
git clone https://gitee.com/openeuler/opensd
cd opensd
python3 setup.py install
```

#### 6. Set Up SSH Trust

Run the following on the deployment node.

**6.1 Generate the key pair.** Run the command and press Enter through all prompts:

```
ssh-keygen
```

**6.2 Generate the host IP address file.** List the IPs of all hosts involved in auto_ssh_host_ip, for example:

```
cd /usr/local/share/opensd/tools/
vim auto_ssh_host_ip

10.0.0.1
10.0.0.2
...
10.0.0.10
```

**6.3 Change the password and run the script.** In the passwordless-access script /usr/local/bin/opensd-auto-ssh, replace the string 123123 with the real host password:

```
# replace the 123123 string inside the script
vim /usr/local/bin/opensd-auto-ssh

## install expect and then run the script
dnf install expect -y
opensd-auto-ssh
```

**6.4 Set up trust between the deployment node and the ceph monitor (optional):**

```
ssh-copy-id root@x.x.x.x
```
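Before moving on to the opensd configuration, it is worth confirming that passwordless login really works for every host listed in auto_ssh_host_ip. A small sketch (the file path matches the one used above):

```
# BatchMode disables password prompts, so this fails fast for any host
# where the SSH trust was not set up correctly
while read -r ip; do
    ssh -o BatchMode=yes root@"$ip" hostname || echo "ssh trust NOT set up for $ip"
done < /usr/local/share/opensd/tools/auto_ssh_host_ip
```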
#### 7. Configure opensd

Run the following on the deployment node.

**7.1 Generate random passwords.** Install python3-pbr, python3-utils, python3-pyyaml and python3-oslo-utils, then generate the random passwords:

```
dnf install python3-pbr python3-utils python3-pyyaml python3-oslo-utils -y

# generate the passwords
opensd-genpwd

# check that the passwords were generated
cat /usr/local/share/opensd/etc_examples/opensd/passwords.yml
```

**7.2 Configure the inventory file.** The host information consists of the hostname, the ansible_host IP and the availability_zone; all three must be configured and none may be omitted. Example:

```
vim /usr/local/share/opensd/ansible/inventory/multinode

# host information of the three controller nodes
[control]
controller1 ansible_host=10.0.0.35 availability_zone=az01.cell01.cn-yogadev-1
controller2 ansible_host=10.0.0.36 availability_zone=az01.cell01.cn-yogadev-1
controller3 ansible_host=10.0.0.37 availability_zone=az01.cell01.cn-yogadev-1

# network node information, kept identical to the controller nodes
[network]
controller1 ansible_host=10.0.0.35 availability_zone=az01.cell01.cn-yogadev-1
controller2 ansible_host=10.0.0.36 availability_zone=az01.cell01.cn-yogadev-1
controller3 ansible_host=10.0.0.37 availability_zone=az01.cell01.cn-yogadev-1

# cinder-volume service node information
[storage]
storage1 ansible_host=10.0.0.61 availability_zone=az01.cell01.cn-yogadev-1
storage2 ansible_host=10.0.0.78 availability_zone=az01.cell01.cn-yogadev-1
storage3 ansible_host=10.0.0.82 availability_zone=az01.cell01.cn-yogadev-1

# cell1 cluster information
[cell-control-cell1]
cell1 ansible_host=10.0.0.24 availability_zone=az01.cell01.cn-yogadev-1
cell2 ansible_host=10.0.0.25 availability_zone=az01.cell01.cn-yogadev-1
cell3 ansible_host=10.0.0.26 availability_zone=az01.cell01.cn-yogadev-1

[compute-cell1]
compute1 ansible_host=10.0.0.27 availability_zone=az01.cell01.cn-yogadev-1
compute2 ansible_host=10.0.0.28 availability_zone=az01.cell01.cn-yogadev-1
compute3 ansible_host=10.0.0.29 availability_zone=az01.cell01.cn-yogadev-1

[cell1:children]
cell-control-cell1
compute-cell1

# cell2 cluster information
[cell-control-cell2]
cell4 ansible_host=10.0.0.36 availability_zone=az03.cell02.cn-yogadev-1
cell5 ansible_host=10.0.0.37 availability_zone=az03.cell02.cn-yogadev-1
cell6 ansible_host=10.0.0.38 availability_zone=az03.cell02.cn-yogadev-1

[compute-cell2]
compute4 ansible_host=10.0.0.39 availability_zone=az03.cell02.cn-yogadev-1
compute5 ansible_host=10.0.0.40 availability_zone=az03.cell02.cn-yogadev-1
compute6 ansible_host=10.0.0.41 availability_zone=az03.cell02.cn-yogadev-1

[cell2:children]
cell-control-cell2
compute-cell2

[baremetal]

[compute-cell1-ironic]

# list the control host groups of all cell clusters
[nova-conductor:children]
cell-control-cell1
cell-control-cell2

# list the compute host groups of all cell clusters
[nova-compute:children]
compute-added
compute-cell1
compute-cell2

# the host groups below do not need to be changed; keep them as they are
[compute-added]

[chrony-server:children]
control

[pacemaker:children]
control

......
......
```
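A quick way to check that the inventory parses and that the group hierarchy looks as intended is to let ansible dump it back. This is only a sketch; ansible is installed in section 7.4 anyway, so it is simply installed a little earlier here:

```
dnf install ansible -y
# print the parsed host-group tree of the multinode inventory
ansible-inventory -i /usr/local/share/opensd/ansible/inventory/multinode --graph
```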
**7.3 Configure the global variables.** Note: only the configuration items that carry comments below need to be changed; the other parameters do not need to be modified, and items that do not apply to your environment are left empty.

```
vim /usr/local/share/opensd/etc_examples/opensd/globals.yml

########################
# Network & Base options
########################
network_interface: "eth0"            # NIC name of the management network
neutron_external_interface: "eth1"   # NIC name of the data-plane (business) network
cidr_netmask: 24                     # netmask of the management network
opensd_vip_address: 10.0.0.33        # virtual IP address of the controller nodes
cell1_vip_address: 10.0.0.34         # virtual IP address of the cell1 cluster
cell2_vip_address: 10.0.0.35         # virtual IP address of the cell2 cluster
external_fqdn: ""                    # external domain name used for VNC access to VMs
external_ntp_servers: []             # external NTP server addresses
yumrepo_host:                        # IP address of the yum repo
yumrepo_port:                        # port of the yum repo
environment:                         # type of the yum repo
upgrade_all_packages: "yes"          # whether to upgrade all installed packages (runs yum upgrade); set to "yes" for an initial deployment
enable_miner: "no"                   # whether to deploy the miner service
enable_chrony: "no"                  # whether to deploy the chrony service
enable_pri_mariadb: "no"             # whether to deploy mariadb for a private cloud
enable_hosts_file_modify: "no"       # when scaling out compute nodes or deploying ironic, whether to add node information to /etc/hosts

########################
# Available zone options
########################
az_cephmon_compose:
  - availability_zone:               # AZ name; must match the "availability_zone" value of az01 in the multinode host file
    ceph_mon_host:                   # one ceph monitor host of az01; the deployment node needs SSH trust with this host
    reserve_vcpu_based_on_numa:
  - availability_zone:               # AZ name; must match the "availability_zone" value of az02 in the multinode host file
    ceph_mon_host:                   # one ceph monitor host of az02; the deployment node needs SSH trust with this host
    reserve_vcpu_based_on_numa:
  - availability_zone:               # AZ name; must match the "availability_zone" value of az03 in the multinode host file
    ceph_mon_host:                   # one ceph monitor host of az03; the deployment node needs SSH trust with this host
    reserve_vcpu_based_on_numa:
```

`reserve_vcpu_based_on_numa` is set to `yes` or `no`. As an example, for a host with:

```
NUMA node0 CPU(s):   0-15,32-47
NUMA node1 CPU(s):   16-31,48-63
```

- With reserve_vcpu_based_on_numa: "yes", vCPUs are reserved evenly per NUMA node: vcpu_pin_set = 2-15,34-47,18-31,50-63.
- With reserve_vcpu_based_on_numa: "no", vCPUs are reserved in order starting from the first vCPU: vcpu_pin_set = 8-64.

```
#######################
# Nova options
#######################
nova_reserved_host_memory_mb: 2048   # memory reserved for the compute service on compute nodes
enable_cells: "yes"                  # whether the cell nodes are deployed on separate nodes
support_gpu: "False"                 # whether the cell nodes include GPU servers; True if so, otherwise False

#######################
# Neutron options
#######################
monitor_ip:                          # monitoring nodes
  - 10.0.0.9
  - 10.0.0.10
enable_meter_full_eip: True          # whether full EIP metering is allowed, default True
enable_meter_port_forwarding: True   # whether port-forwarding metering is allowed, default True
enable_meter_ecs_ipv6: True          # whether ecs_ipv6 metering is allowed, default True
enable_meter: True                   # whether metering is enabled, default True
is_sdn_arch: False                   # whether this is an SDN architecture, default False

# The network type enabled by default is vlan; vlan and vxlan are mutually exclusive.
enable_vxlan_network_type: False     # set to True to use vxlan networks, False to use vlan networks
enable_neutron_fwaas: False          # if the environment uses a firewall, set to True to enable the firewall function

# Neutron provider
neutron_provider_networks:
  network_types: "{{ 'vxlan' if enable_vxlan_network_type else 'vlan' }}"
  network_vlan_ranges: "default:xxx:xxx"   # the business-network vlan range planned before deployment
  network_mappings: "default:br-provider"
  network_interface: "{{ neutron_external_interface }}"
  network_vxlan_ranges: ""                 # the business-network vxlan range planned before deployment

# The settings below are the SDN controller parameters; set `enable_sdn_controller` to True to enable the SDN controller.
# Fill in the other parameters according to the pre-deployment plan and the SDN deployment information.
enable_sdn_controller: False
sdn_controller_ip_address:           # SDN controller IP address
sdn_controller_username:             # SDN controller username
sdn_controller_password:             # SDN controller password

#######################
# Dimsagent options
#######################
enable_dimsagent: "no"               # change to yes to install the image service agent

# Address and domain name for s3
s3_address_domain_pair:
  - host_ip:
    host_name:

#######################
# Trove options
#######################
enable_trove: "no"                   # change to yes to install trove

# default network
trove_default_neutron_networks:      # trove management network id: `openstack network list|grep -w trove-mgmt|awk '{print$2}'`

# s3 setup (if there is no s3, fill in null for the values below)
s3_endpoint_host_ip:                 # s3 IP
s3_endpoint_host_name:               # s3 domain name
s3_endpoint_url:                     # s3 URL, usually http://<s3 domain name>
s3_access_key:                       # s3 AK
s3_secret_key:                       # s3 SK

#######################
# Ironic options
#######################
enable_ironic: "no"                  # whether to enable bare-metal deployment, disabled by default
ironic_neutron_provisioning_network_uuid:
ironic_neutron_cleaning_network_uuid: "{{ ironic_neutron_provisioning_network_uuid }}"
ironic_dnsmasq_interface:
ironic_dnsmasq_dhcp_range:
ironic_tftp_server_address: "{{ hostvars[inventory_hostname]['ansible_' + ironic_dnsmasq_interface]['ipv4']['address'] }}"

# switch device information
neutron_ml2_conf_genericswitch:
  genericswitch:xxxxxxx:
    device_type:
    ngs_mac_address:
    ip:
    username:
    password:
    ngs_port_default_vlan:

# Package state setting
haproxy_package_state: "present"
mariadb_package_state: "present"
rabbitmq_package_state: "present"
memcached_package_state: "present"
ceph_client_package_state: "present"
keystone_package_state: "present"
glance_package_state: "present"
cinder_package_state: "present"
nova_package_state: "present"
neutron_package_state: "present"
miner_package_state: "present"
```

**7.4 Check the SSH connection status of all nodes:**

```
dnf install ansible -y
ansible all -i /usr/local/share/opensd/ansible/inventory/multinode -m ping

# if every host reports "SUCCESS", the connections are fine, for example:
compute1 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "ping": "pong"
}
```
#### 8. Run the Deployment

Run the following on the deployment node.

**8.1 Run bootstrap:**

```
# run the bootstrap stage
opensd -i /usr/local/share/opensd/ansible/inventory/multinode bootstrap --forks 50
```

**8.2 Reboot the servers.** Note: the reboot is needed because bootstrap may upgrade the kernel, change the selinux configuration, or there may be GPU servers. If the machines were already installed with the new kernel, selinux was already disabled, and there are no GPU servers, this step can be skipped.

```
# reboot the corresponding nodes manually
init 6

# after the reboot, check connectivity again
ansible all -i /usr/local/share/opensd/ansible/inventory/multinode -m ping

# after the operating system has rebooted, enable the yum repo again
```

**8.3 Run the pre-deployment checks:**

```
opensd -i /usr/local/share/opensd/ansible/inventory/multinode prechecks --forks 50
```

**8.4 Run the deployment:**

```
ln -s /usr/bin/python3 /usr/bin/python

# full deployment:
opensd -i /usr/local/share/opensd/ansible/inventory/multinode deploy --forks 50

# single-service deployment:
opensd -i /usr/local/share/opensd/ansible/inventory/multinode deploy --forks 50 -t service_name
```

# OpenStack-Wallaby Deployment Guide

Contents: OpenStack introduction; conventions; preparing the environment (environment configuration, installing the SQL database, installing RabbitMQ, installing Memcached); installing OpenStack (Keystone, Glance, Placement, Nova, Neutron, Cinder, Horizon, Tempest, Ironic, Kolla, Trove, Swift, Cyborg, Aodh, Gnocchi, Ceilometer, Heat); quick deployment with the OpenStack SIG development tool oos.

## OpenStack Introduction

OpenStack is both a community and a project. It provides an operating platform and a toolset for deploying clouds, giving organizations scalable and flexible cloud computing.

As an open-source cloud computing management platform, OpenStack combines several main components such as nova, cinder, neutron, glance, keystone and horizon to do the actual work. OpenStack supports almost all types of cloud environments; the project's goal is a cloud computing management platform that is simple to implement, massively scalable, feature-rich and standardized. OpenStack delivers an Infrastructure-as-a-Service (IaaS) solution through a set of complementary services, each of which offers an API for integration.

The official openEuler 22.03-LTS-SP3 repositories already support the OpenStack-Wallaby release. Users can configure the yum repositories and then deploy OpenStack by following this document.

## Conventions

OpenStack supports multiple deployment topologies. This document covers both the All-in-One and the Distributed deployment modes, with the following conventions:

- All-in-One mode: ignore all suffixes.
- Distributed mode:
  - A `(CTL)` suffix means the configuration or command applies only to the controller node.
  - A `(CPT)` suffix means the configuration or command applies only to the compute nodes.
  - A `(STG)` suffix means the configuration or command applies only to the storage nodes.
  - Otherwise, the configuration or command applies to both the controller node and the compute nodes.

Note: the services affected by the conventions above are Cinder, Nova and Neutron.

## Preparing the Environment

### Environment Configuration

1. Configure the 22.03 LTS official yum repositories; the EPOL repository must be enabled to support OpenStack:

```
yum update
yum install openstack-release-wallaby
yum clean all && yum makecache
```

Note: if EPOL is not enabled in your environment's yum configuration, configure EPOL as well and make sure it is present, as shown below:

```
vi /etc/yum.repos.d/openEuler.repo

[EPOL]
name=EPOL
baseurl=http://repo.openeuler.org/openEuler-22.03-LTS-SP3/EPOL/main/$basearch/
enabled=1
gpgcheck=1
gpgkey=http://repo.openeuler.org/openEuler-22.03-LTS-SP3/OS/$basearch/RPM-GPG-KEY-openEuler
```

2. Modify the host names and the name mapping.

Set the host name of each node:

```
hostnamectl set-hostname controller                                (CTL)
hostnamectl set-hostname compute                                   (CPT)
```

Assuming the controller node's IP is 10.0.0.11 and the compute node's IP (if there is one) is 10.0.0.12, add the following to /etc/hosts:

```
10.0.0.11   controller
10.0.0.12   compute
```

### Install the SQL Database

1. Install the packages:

```
yum install mariadb mariadb-server python3-PyMySQL
```

2. Create and edit the /etc/my.cnf.d/openstack.cnf file:

```
vim /etc/my.cnf.d/openstack.cnf

[mysqld]
bind-address = 10.0.0.11
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
```

Note: bind-address is set to the management IP address of the controller node.

3. Start the database service and enable it at boot:

```
systemctl enable mariadb.service
systemctl start mariadb.service
```

4. Configure the default database password (optional):

```
mysql_secure_installation
```

Note: follow the prompts.
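A quick way to confirm that MariaDB picked up openstack.cnf and is listening on the management address is sketched below; 10.0.0.11 is the example controller IP used above:

```
# should print bind_address = 10.0.0.11, and show mysqld listening on port 3306
mysql -u root -e "SHOW VARIABLES LIKE 'bind_address';"
ss -tlnp | grep 3306
```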
### Install RabbitMQ

1. Install the packages:

```
yum install rabbitmq-server
```

2. Start the RabbitMQ service and enable it at boot:

```
systemctl enable rabbitmq-server.service
systemctl start rabbitmq-server.service
```

3. Add the OpenStack user:

```
rabbitmqctl add_user openstack RABBIT_PASS
```

Note: replace RABBIT_PASS with the password for the OpenStack user.

4. Set the openstack user's permissions to allow configure, write and read access:

```
rabbitmqctl set_permissions openstack ".*" ".*" ".*"
```

### Install Memcached

1. Install the dependency packages:

```
yum install memcached python3-memcached
```

2. Edit the /etc/sysconfig/memcached file:

```
vim /etc/sysconfig/memcached

OPTIONS="-l 127.0.0.1,::1,controller"
```

3. Start the Memcached service and enable it at boot:

```
systemctl enable memcached.service
systemctl start memcached.service
```

Note: after the service starts, you can run `memcached-tool controller stats` to make sure it started properly and is available; controller can be replaced with the management IP address of the controller node.

## Install OpenStack

### Keystone Installation

1. Create the keystone database and grant privileges:

```
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE keystone;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
IDENTIFIED BY 'KEYSTONE_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
IDENTIFIED BY 'KEYSTONE_DBPASS';
MariaDB [(none)]> exit
```

Note: replace KEYSTONE_DBPASS with a password for the Keystone database.

2. Install the packages:

```
yum install openstack-keystone httpd mod_wsgi
```

3. Configure keystone:

```
vim /etc/keystone/keystone.conf

[database]
connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone

[token]
provider = fernet
```

Explanation: the [database] section configures the database entry point; the [token] section configures the token provider.

Note: replace KEYSTONE_DBPASS with the password of the Keystone database.

4. Synchronize the database:

```
su -s /bin/sh -c "keystone-manage db_sync" keystone
```

5. Initialize the Fernet key repositories:

```
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
```

6. Bootstrap the service:

```
keystone-manage bootstrap --bootstrap-password ADMIN_PASS \
    --bootstrap-admin-url http://controller:5000/v3/ \
    --bootstrap-internal-url http://controller:5000/v3/ \
    --bootstrap-public-url http://controller:5000/v3/ \
    --bootstrap-region-id RegionOne
```

Note: replace ADMIN_PASS with a password for the admin user.

7. Configure the Apache HTTP server:

```
vim /etc/httpd/conf/httpd.conf

ServerName controller
```

```
ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
```

Explanation: the ServerName entry must reference the controller node.

Note: if the ServerName entry does not exist, create it.

8. Start the Apache HTTP service:

```
systemctl enable httpd.service
systemctl start httpd.service
```

9. Create the environment variable configuration:

```
cat << EOF >> ~/.admin-openrc
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
EOF
```

Note: replace ADMIN_PASS with the password of the admin user.

10. Create the domain, projects, users and roles in turn. python3-openstackclient must be installed first:

```
yum install python3-openstackclient
```

Import the environment variables:

```
source ~/.admin-openrc
```

Create the project service; the domain default was already created during keystone-manage bootstrap:

```
openstack domain create --description "An Example Domain" example
openstack project create --domain default --description "Service Project" service
```

Create the (non-admin) project myproject, the user myuser and the role myrole, and add the role myrole to myproject and myuser:

```
openstack project create --domain default --description "Demo Project" myproject
openstack user create --domain default --password-prompt myuser
openstack role create myrole
openstack role add --project myproject --user myuser myrole
```

11. Verification.

Unset the temporary environment variables OS_AUTH_URL and OS_PASSWORD:

```
source ~/.admin-openrc
unset OS_AUTH_URL OS_PASSWORD
```

Request a token as the admin user:

```
openstack --os-auth-url http://controller:5000/v3 \
    --os-project-domain-name Default --os-user-domain-name Default \
    --os-project-name admin --os-username admin token issue
```

Request a token as the myuser user:

```
openstack --os-auth-url http://controller:5000/v3 \
    --os-project-domain-name Default --os-user-domain-name Default \
    --os-project-name myproject --os-username myuser token issue
```
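This guide only creates ~/.admin-openrc. If you plan to exercise the cloud as the non-admin myuser later, a matching client environment file is convenient; the sketch below follows the same pattern, with MYUSER_PASS as a placeholder for the password you chose for myuser:

```
cat << EOF >> ~/.demo-openrc
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=myproject
export OS_USERNAME=myuser
export OS_PASSWORD=MYUSER_PASS
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
EOF
```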
### Glance Installation

Create the database, service credentials, and API endpoints.

Create the database:

```
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE glance;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
IDENTIFIED BY 'GLANCE_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
IDENTIFIED BY 'GLANCE_DBPASS';
MariaDB [(none)]> exit
```

**Note**: replace GLANCE_DBPASS with the password you choose for the glance database.

Create the service credentials:

```shell
source ~/.admin-openrc

openstack user create --domain default --password-prompt glance
openstack role add --project service --user glance admin
openstack service create --name glance --description "OpenStack Image" image
```

Create the Image service API endpoints:

```shell
openstack endpoint create --region RegionOne image public http://controller:9292
openstack endpoint create --region RegionOne image internal http://controller:9292
openstack endpoint create --region RegionOne image admin http://controller:9292
```

Install the package:

```shell
yum install openstack-glance
```

Configure glance:

```shell
vim /etc/glance/glance-api.conf

[database]
connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = GLANCE_PASS

[paste_deploy]
flavor = keystone

[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
```

Explanation:

- `[database]`: configure the database entry.
- `[keystone_authtoken]` and `[paste_deploy]`: configure the identity service entry.
- `[glance_store]`: configure the local filesystem store and the location of the image files.

**Note**: replace GLANCE_DBPASS with the password of the glance database, and GLANCE_PASS with the password of the glance user.

Synchronize the database:

```shell
su -s /bin/sh -c "glance-manage db_sync" glance
```

Start the service:

```shell
systemctl enable openstack-glance-api.service
systemctl start openstack-glance-api.service
```

Verification

Download an image:

```shell
source ~/.admin-openrc

wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
```

**Note**: if your environment uses the Kunpeng (aarch64) architecture, download an aarch64 image; the image cirros-0.5.2-aarch64-disk.img has been tested.

Upload the image to the Image service:

```shell
openstack image create --disk-format qcow2 --container-format bare \
    --file cirros-0.4.0-x86_64-disk.img --public cirros
```

Confirm the upload and validate the image attributes:

```shell
openstack image list
```
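In addition to `openstack image list`, the individual attributes of the uploaded image can be inspected. An optional check, assuming the image was named `cirros` as above:

```shell
# full property listing; "status" should be "active" and "disk_format" "qcow2"
openstack image show cirros

# or select only a few columns
openstack image show cirros -c status -c disk_format -c visibility
```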
### Placement Installation

Create the database, service credentials, and API endpoints.

Create the database: access the database as the root user, create the placement database, and grant privileges.

```
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE placement;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' \
IDENTIFIED BY 'PLACEMENT_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' \
IDENTIFIED BY 'PLACEMENT_DBPASS';
MariaDB [(none)]> exit
```

**Note**: replace PLACEMENT_DBPASS with the password you choose for the placement database.

```shell
source ~/.admin-openrc
```

Run the following commands to create the placement service credentials: create the placement user, add the admin role to the placement user, and create the Placement API service.

```shell
openstack user create --domain default --password-prompt placement
openstack role add --project service --user placement admin
openstack service create --name placement --description "Placement API" placement
```

Create the placement service API endpoints:

```shell
openstack endpoint create --region RegionOne placement public http://controller:8778
openstack endpoint create --region RegionOne placement internal http://controller:8778
openstack endpoint create --region RegionOne placement admin http://controller:8778
```

Installation and configuration

Install the package:

```shell
yum install openstack-placement-api
```

Configure placement by editing /etc/placement/placement.conf:

- `[placement_database]`: configure the database entry.
- `[api]` and `[keystone_authtoken]`: configure the identity service entry.

```shell
# vim /etc/placement/placement.conf

[placement_database]
# ...
connection = mysql+pymysql://placement:PLACEMENT_DBPASS@controller/placement

[api]
# ...
auth_strategy = keystone

[keystone_authtoken]
# ...
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = placement
password = PLACEMENT_PASS
```

Replace PLACEMENT_DBPASS with the password of the placement database and PLACEMENT_PASS with the password of the placement user.

Synchronize the database:

```shell
su -s /bin/sh -c "placement-manage db sync" placement
```

Restart the httpd service:

```shell
systemctl restart httpd
```

Verification

Run the status check:

```shell
source ~/.admin-openrc

placement-status upgrade check
```

Install osc-placement and list the available resource classes and traits:

```shell
yum install python3-osc-placement

openstack --os-placement-api-version 1.2 resource class list --sort-column name
openstack --os-placement-api-version 1.6 trait list --sort-column name
```

### Nova Installation

Create the database, service credentials, and API endpoints.

Create the databases:

```
mysql -u root -p    (CTL)

MariaDB [(none)]> CREATE DATABASE nova_api;
MariaDB [(none)]> CREATE DATABASE nova;
MariaDB [(none)]> CREATE DATABASE nova_cell0;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> exit
```

**Note**: replace NOVA_DBPASS with the password you choose for the nova databases.

```shell
source ~/.admin-openrc    (CTL)
```

Create the nova service credentials:

```shell
openstack user create --domain default --password-prompt nova    (CTL)
openstack role add --project service --user nova admin    (CTL)
openstack service create --name nova --description "OpenStack Compute" compute    (CTL)
```

Create the nova API endpoints:

```shell
openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1    (CTL)
openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1    (CTL)
openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1    (CTL)
```

Install the packages:

```shell
yum install openstack-nova-api openstack-nova-conductor \
            openstack-nova-novncproxy openstack-nova-scheduler    (CTL)

yum install openstack-nova-compute    (CPT)
```

**Note**: on arm64, additionally run:

```shell
yum install edk2-aarch64    (CPT)
```
Configure nova:

```shell
vim /etc/nova/nova.conf

[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
my_ip = 10.0.0.1
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver
compute_driver=libvirt.LibvirtDriver    (CPT)
instances_path = /var/lib/nova/instances/    (CPT)
lock_path = /var/lib/nova/tmp    (CPT)

[api_database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api    (CTL)

[database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova    (CTL)

[api]
auth_strategy = keystone

[keystone_authtoken]
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = NOVA_PASS

[vnc]
enabled = true
server_listen = $my_ip
server_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html    (CPT)

[libvirt]
virt_type = qemu    (CPT)
cpu_mode = custom    (CPT)
cpu_model = cortex-a72    (CPT)

[glance]
api_servers = http://controller:9292

[oslo_concurrency]
lock_path = /var/lib/nova/tmp    (CTL)

[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = PLACEMENT_PASS

[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
service_metadata_proxy = true    (CTL)
metadata_proxy_shared_secret = METADATA_SECRET    (CTL)
```

Explanation:

- `[DEFAULT]`: enable the compute and metadata APIs, configure the RabbitMQ message queue entry, set my_ip, and enable the neutron network service.
- `[api_database]` and `[database]`: configure the database entries.
- `[api]` and `[keystone_authtoken]`: configure the identity service entry.
- `[vnc]`: enable and configure the remote console entry.
- `[glance]`: configure the address of the Image service API.
- `[oslo_concurrency]`: configure the lock path.
- `[placement]`: configure the entry of the placement service.

**Note**:

- Replace RABBIT_PASS with the password of the openstack account in RabbitMQ.
- Set my_ip to the management IP address of the controller node.
- Replace NOVA_DBPASS with the password of the nova database.
- Replace NOVA_PASS with the password of the nova user.
- Replace PLACEMENT_PASS with the password of the placement user.
- Replace NEUTRON_PASS with the password of the neutron user.
- Replace METADATA_SECRET with a suitable metadata proxy secret.

Additional step: determine whether the host supports hardware acceleration for virtual machines (x86 architecture):

```shell
egrep -c '(vmx|svm)' /proc/cpuinfo    (CPT)
```

If the command returns 0, hardware acceleration is not supported and libvirt must be configured to use QEMU instead of KVM:

```shell
vim /etc/nova/nova.conf    (CPT)

[libvirt]
virt_type = qemu
```

If the command returns 1 or more, hardware acceleration is supported and no extra configuration is needed.
[\"/usr/share/AAVMF/AAVMF_CODE.fd: \\ /usr/share/AAVMF/AAVMF_VARS.fd\", \\ \"/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw: \\ /usr/share/edk2/aarch64/vars-template-pflash.raw\"] vim /etc/qemu/firmware/edk2-aarch64.json { \"description\": \"UEFI firmware for ARM64 virtual machines\", \"interface-types\": [ \"uefi\" ], \"mapping\": { \"device\": \"flash\", \"executable\": { \"filename\": \"/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw\", \"format\": \"raw\" }, \"nvram-template\": { \"filename\": \"/usr/share/edk2/aarch64/vars-template-pflash.raw\", \"format\": \"raw\" } }, \"targets\": [ { \"architecture\": \"aarch64\", \"machines\": [ \"virt-*\" ] } ], \"features\": [ ], \"tags\": [ ] } (CPT) \u540c\u6b65\u6570\u636e\u5e93 \u540c\u6b65nova-api\u6570\u636e\u5e93\uff1a su -s /bin/sh -c \"nova-manage api_db sync\" nova (CTL) \u6ce8\u518ccell0\u6570\u636e\u5e93\uff1a su -s /bin/sh -c \"nova-manage cell_v2 map_cell0\" nova (CTL) \u521b\u5efacell1 cell\uff1a su -s /bin/sh -c \"nova-manage cell_v2 create_cell --name=cell1 --verbose\" nova (CTL) \u540c\u6b65nova\u6570\u636e\u5e93\uff1a su -s /bin/sh -c \"nova-manage db sync\" nova (CTL) \u9a8c\u8bc1cell0\u548ccell1\u6ce8\u518c\u6b63\u786e\uff1a su -s /bin/sh -c \"nova-manage cell_v2 list_cells\" nova (CTL) \u6dfb\u52a0\u8ba1\u7b97\u8282\u70b9\u5230openstack\u96c6\u7fa4 su -s /bin/sh -c \"nova-manage cell_v2 discover_hosts --verbose\" nova (CPT) \u542f\u52a8\u670d\u52a1 systemctl enable \\ (CTL) openstack-nova-api.service \\ openstack-nova-scheduler.service \\ openstack-nova-conductor.service \\ openstack-nova-novncproxy.service systemctl start \\ (CTL) openstack-nova-api.service \\ openstack-nova-scheduler.service \\ openstack-nova-conductor.service \\ openstack-nova-novncproxy.service systemctl enable libvirtd.service openstack-nova-compute.service (CPT) systemctl start libvirtd.service openstack-nova-compute.service (CPT) \u9a8c\u8bc1 source ~/.admin-openrc (CTL) \u5217\u51fa\u670d\u52a1\u7ec4\u4ef6\uff0c\u9a8c\u8bc1\u6bcf\u4e2a\u6d41\u7a0b\u90fd\u6210\u529f\u542f\u52a8\u548c\u6ce8\u518c\uff1a openstack compute service list (CTL) \u5217\u51fa\u8eab\u4efd\u670d\u52a1\u4e2d\u7684API\u7aef\u70b9\uff0c\u9a8c\u8bc1\u4e0e\u8eab\u4efd\u670d\u52a1\u7684\u8fde\u63a5\uff1a openstack catalog list (CTL) \u5217\u51fa\u955c\u50cf\u670d\u52a1\u4e2d\u7684\u955c\u50cf\uff0c\u9a8c\u8bc1\u4e0e\u955c\u50cf\u670d\u52a1\u7684\u8fde\u63a5\uff1a openstack image list (CTL) \u68c0\u67e5cells\u662f\u5426\u8fd0\u4f5c\u6210\u529f\uff0c\u4ee5\u53ca\u5176\u4ed6\u5fc5\u8981\u6761\u4ef6\u662f\u5426\u5df2\u5177\u5907\u3002 nova-status upgrade check (CTL) Neutron \u5b89\u88c5 \u00b6 \u521b\u5efa\u6570\u636e\u5e93\u3001\u670d\u52a1\u51ed\u8bc1\u548c API \u7aef\u70b9 \u521b\u5efa\u6570\u636e\u5e93\uff1a mysql -u root -p (CTL) MariaDB [(none)]> CREATE DATABASE neutron; MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \\ IDENTIFIED BY 'NEUTRON_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \\ IDENTIFIED BY 'NEUTRON_DBPASS'; MariaDB [(none)]> exit \u6ce8\u610f \u66ff\u6362 NEUTRON_DBPASS \u4e3a neutron \u6570\u636e\u5e93\u8bbe\u7f6e\u5bc6\u7801\u3002 source ~/.admin-openrc (CTL) \u521b\u5efaneutron\u670d\u52a1\u51ed\u8bc1 openstack user create --domain default --password-prompt neutron (CTL) openstack role add --project service --user neutron admin (CTL) openstack service create --name neutron --description \"OpenStack Networking\" network (CTL) \u521b\u5efaNeutron\u670d\u52a1API\u7aef\u70b9\uff1a openstack endpoint create 
### Neutron Installation

Create the database, service credentials, and API endpoints.

Create the database:

```
mysql -u root -p    (CTL)

MariaDB [(none)]> CREATE DATABASE neutron;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
IDENTIFIED BY 'NEUTRON_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
IDENTIFIED BY 'NEUTRON_DBPASS';
MariaDB [(none)]> exit
```

**Note**: replace NEUTRON_DBPASS with the password you choose for the neutron database.

```shell
source ~/.admin-openrc    (CTL)
```

Create the neutron service credentials:

```shell
openstack user create --domain default --password-prompt neutron    (CTL)
openstack role add --project service --user neutron admin    (CTL)
openstack service create --name neutron --description "OpenStack Networking" network    (CTL)
```

Create the Networking service API endpoints:

```shell
openstack endpoint create --region RegionOne network public http://controller:9696    (CTL)
openstack endpoint create --region RegionOne network internal http://controller:9696    (CTL)
openstack endpoint create --region RegionOne network admin http://controller:9696    (CTL)
```

Install the packages:

```shell
yum install openstack-neutron openstack-neutron-linuxbridge ebtables ipset \
            openstack-neutron-ml2    (CTL)

yum install openstack-neutron-linuxbridge ebtables ipset    (CPT)
```

Configure neutron.

Main configuration:

```shell
vim /etc/neutron/neutron.conf

[database]
connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron    (CTL)

[DEFAULT]
core_plugin = ml2    (CTL)
service_plugins = router    (CTL)
allow_overlapping_ips = true    (CTL)
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = true    (CTL)
notify_nova_on_port_data_changes = true    (CTL)
api_workers = 3    (CTL)

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = neutron
password = NEUTRON_PASS

[nova]
auth_url = http://controller:5000    (CTL)
auth_type = password    (CTL)
project_domain_name = Default    (CTL)
user_domain_name = Default    (CTL)
region_name = RegionOne    (CTL)
project_name = service    (CTL)
username = nova    (CTL)
password = NOVA_PASS    (CTL)

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
```

Explanation:

- `[database]`: configure the database entry.
- `[DEFAULT]`: enable the ml2 and router plugins, allow overlapping IP addresses, and configure the RabbitMQ message queue entry.
- `[DEFAULT]` and `[keystone_authtoken]`: configure the identity service entry.
- `[DEFAULT]` and `[nova]`: configure Networking to notify Compute of network topology changes.
- `[oslo_concurrency]`: configure the lock path.

**Note**:

- Replace NEUTRON_DBPASS with the password of the neutron database.
- Replace RABBIT_PASS with the password of the openstack account in RabbitMQ.
- Replace NEUTRON_PASS with the password of the neutron user.
- Replace NOVA_PASS with the password of the nova user.

Configure the ML2 plugin:

```shell
vim /etc/neutron/plugins/ml2/ml2_conf.ini

[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security

[ml2_type_flat]
flat_networks = provider

[ml2_type_vxlan]
vni_ranges = 1:1000

[securitygroup]
enable_ipset = true
```

Create the symbolic link /etc/neutron/plugin.ini:

```shell
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
```

**Note**:

- `[ml2]`: enable flat, vlan, and vxlan networks, enable the linuxbridge and l2population mechanisms, and enable the port security extension driver.
- `[ml2_type_flat]`: configure the flat network as the provider virtual network.
- `[ml2_type_vxlan]`: configure the VXLAN network identifier range.
- `[securitygroup]`: allow ipset.

Supplement: the concrete L2 configuration can be adapted to your needs; this guide uses a provider network with linuxbridge.
Configure the Linux bridge agent:

```shell
vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini

[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME

[vxlan]
enable_vxlan = true
local_ip = OVERLAY_INTERFACE_IP_ADDRESS
l2_population = true

[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
```

Explanation:

- `[linux_bridge]`: map the provider virtual network to a physical network interface.
- `[vxlan]`: enable the vxlan overlay network, configure the IP address of the physical interface that handles the overlay network, and enable layer-2 population.
- `[securitygroup]`: allow security groups and configure the linux bridge iptables firewall driver.

**Note**: replace PROVIDER_INTERFACE_NAME with the physical network interface, and OVERLAY_INTERFACE_IP_ADDRESS with the management IP address of the controller node.

Configure the Layer-3 agent:

```shell
vim /etc/neutron/l3_agent.ini    (CTL)

[DEFAULT]
interface_driver = linuxbridge
```

Explanation: in `[DEFAULT]`, set the interface driver to linuxbridge.

Configure the DHCP agent:

```shell
vim /etc/neutron/dhcp_agent.ini    (CTL)

[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
```

Explanation: in `[DEFAULT]`, configure the linuxbridge interface driver and the Dnsmasq DHCP driver, and enable isolated metadata.

Configure the metadata agent:

```shell
vim /etc/neutron/metadata_agent.ini    (CTL)

[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = METADATA_SECRET
```

Explanation: in `[DEFAULT]`, configure the metadata host and the shared secret.

**Note**: replace METADATA_SECRET with a suitable metadata proxy secret.

Configure nova:

```shell
vim /etc/nova/nova.conf

[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = Default
user_domain_name = Default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
service_metadata_proxy = true    (CTL)
metadata_proxy_shared_secret = METADATA_SECRET    (CTL)
```

Explanation: the `[neutron]` section configures the access parameters, enables the metadata proxy, and sets the secret.

**Note**: replace NEUTRON_PASS with the password of the neutron user, and METADATA_SECRET with a suitable metadata proxy secret.

Synchronize the database:

```shell
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
    --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
```

Restart the Compute API service:

```shell
systemctl restart openstack-nova-api.service
```

Start the Networking services:

```shell
systemctl enable neutron-server.service neutron-linuxbridge-agent.service \
                 neutron-dhcp-agent.service neutron-metadata-agent.service    (CTL)
systemctl enable neutron-l3-agent.service
systemctl restart openstack-nova-api.service neutron-server.service \
                  neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
                  neutron-metadata-agent.service neutron-l3-agent.service    (CTL)
```
```shell
systemctl enable neutron-linuxbridge-agent.service    (CPT)
systemctl restart neutron-linuxbridge-agent.service openstack-nova-compute.service    (CPT)
```

Verification

Verify that the neutron agents started successfully:

```shell
openstack network agent list
```

### Cinder Installation

Create the database, service credentials, and API endpoints.

Create the database:

```
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE cinder;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \
IDENTIFIED BY 'CINDER_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \
IDENTIFIED BY 'CINDER_DBPASS';
MariaDB [(none)]> exit
```

**Note**: replace CINDER_DBPASS with the password you choose for the cinder database.

```shell
source ~/.admin-openrc
```

Create the cinder service credentials:

```shell
openstack user create --domain default --password-prompt cinder
openstack role add --project service --user cinder admin
openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
```

Create the Block Storage service API endpoints:

```shell
openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s
```

Install the packages:

```shell
yum install openstack-cinder-api openstack-cinder-scheduler    (CTL)

yum install lvm2 device-mapper-persistent-data scsi-target-utils rpcbind nfs-utils \
            openstack-cinder-volume openstack-cinder-backup    (STG)
```
filter = [ \"a/vdb/\", \"r/.*/\"] \u89e3\u91ca \u5728devices\u90e8\u5206\uff0c\u6dfb\u52a0\u8fc7\u6ee4\u4ee5\u63a5\u53d7/dev/vdb\u8bbe\u5907\u62d2\u7edd\u5176\u4ed6\u8bbe\u5907\u3002 \u51c6\u5907NFS mkdir -p /root/cinder/backup cat << EOF >> /etc/export /root/cinder/backup 192.168.1.0/24(rw,sync,no_root_squash,no_all_squash) EOF \u914d\u7f6ecinder\u76f8\u5173\u914d\u7f6e\uff1a vim /etc/cinder/cinder.conf [DEFAULT] transport_url = rabbit://openstack:RABBIT_PASS@controller auth_strategy = keystone my_ip = 10.0.0.11 enabled_backends = lvm (STG) backup_driver=cinder.backup.drivers.nfs.NFSBackupDriver (STG) backup_share=HOST:PATH (STG) [database] connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder [keystone_authtoken] www_authenticate_uri = http://controller:5000 auth_url = http://controller:5000 memcached_servers = controller:11211 auth_type = password project_domain_name = Default user_domain_name = Default project_name = service username = cinder password = CINDER_PASS [oslo_concurrency] lock_path = /var/lib/cinder/tmp [lvm] volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver (STG) volume_group = cinder-volumes (STG) iscsi_protocol = iscsi (STG) iscsi_helper = tgtadm (STG) \u89e3\u91ca [database]\u90e8\u5206\uff0c\u914d\u7f6e\u6570\u636e\u5e93\u5165\u53e3\uff1b [DEFAULT]\u90e8\u5206\uff0c\u914d\u7f6eRabbitMQ\u6d88\u606f\u961f\u5217\u5165\u53e3\uff0c\u914d\u7f6emy_ip\uff1b [DEFAULT] [keystone_authtoken]\u90e8\u5206\uff0c\u914d\u7f6e\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u5165\u53e3\uff1b [oslo_concurrency]\u90e8\u5206\uff0c\u914d\u7f6elock path\u3002 \u6ce8\u610f \u66ff\u6362 CINDER_DBPASS \u4e3a cinder \u6570\u636e\u5e93\u7684\u5bc6\u7801\uff1b \u66ff\u6362 RABBIT_PASS \u4e3a RabbitMQ \u4e2d openstack \u8d26\u6237\u7684\u5bc6\u7801\uff1b \u914d\u7f6e my_ip \u4e3a\u63a7\u5236\u8282\u70b9\u7684\u7ba1\u7406 IP \u5730\u5740\uff1b \u66ff\u6362 CINDER_PASS \u4e3a cinder \u7528\u6237\u7684\u5bc6\u7801\uff1b \u66ff\u6362 HOST:PATH \u4e3a NFS \u7684HOSTIP\u548c\u5171\u4eab\u8def\u5f84\uff1b \u540c\u6b65\u6570\u636e\u5e93\uff1a su -s /bin/sh -c \"cinder-manage db sync\" cinder (CTL) \u914d\u7f6enova\uff1a vim /etc/nova/nova.conf (CTL) [cinder] os_region_name = RegionOne \u91cd\u542f\u8ba1\u7b97API\u670d\u52a1 systemctl restart openstack-nova-api.service \u542f\u52a8cinder\u670d\u52a1 systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service (CTL) systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service (CTL) systemctl enable rpcbind.service nfs-server.service tgtd.service iscsid.service \\ (STG) openstack-cinder-volume.service \\ openstack-cinder-backup.service systemctl start rpcbind.service nfs-server.service tgtd.service iscsid.service \\ (STG) openstack-cinder-volume.service \\ openstack-cinder-backup.service \u6ce8\u610f \u5f53cinder\u4f7f\u7528tgtadm\u7684\u65b9\u5f0f\u6302\u5377\u7684\u65f6\u5019\uff0c\u8981\u4fee\u6539/etc/tgt/tgtd.conf\uff0c\u5185\u5bb9\u5982\u4e0b\uff0c\u4fdd\u8bc1tgtd\u53ef\u4ee5\u53d1\u73b0cinder-volume\u7684iscsi target\u3002 include /var/lib/cinder/volumes/* \u9a8c\u8bc1 source ~/.admin-openrc openstack volume service list horizon \u5b89\u88c5 \u00b6 \u5b89\u88c5\u8f6f\u4ef6\u5305 yum install openstack-dashboard \u4fee\u6539\u6587\u4ef6 \u4fee\u6539\u53d8\u91cf vim /etc/openstack-dashboard/local_settings OPENSTACK_HOST = \"controller\" ALLOWED_HOSTS = ['*', ] SESSION_ENGINE = 'django.contrib.sessions.backends.cache' CACHES = { 'default': { 'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache', 
### Horizon Installation

Install the package:

```shell
yum install openstack-dashboard
```

Modify the variables in the configuration file:

```shell
vim /etc/openstack-dashboard/local_settings

OPENSTACK_HOST = "controller"
ALLOWED_HOSTS = ['*', ]

SESSION_ENGINE = 'django.contrib.sessions.backends.cache'

CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'controller:11211',
    }
}

OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST

OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True

OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"

OPENSTACK_KEYSTONE_DEFAULT_ROLE = "member"

WEBROOT = '/dashboard'

POLICY_FILES_PATH = "/etc/openstack-dashboard"

OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 3,
}
```

Restart the httpd service:

```shell
systemctl restart httpd.service memcached.service
```

Verification: open a browser, go to http://HOSTIP/dashboard/, and log in to horizon.

**Note**: replace HOSTIP with the management-plane IP address of the controller node.

### Tempest Installation

Tempest is the integration test service of OpenStack. It is recommended if you need comprehensive automated functional testing of the installed OpenStack environment; otherwise it can be skipped.

Install Tempest:

```shell
yum install openstack-tempest
```

Initialize a working directory:

```shell
tempest init mytest
```

Edit the configuration file:

```shell
cd mytest
vi etc/tempest.conf
```

tempest.conf must describe the current OpenStack environment; refer to the official sample for the details (a minimal sketch follows at the end of this section).

Run the tests:

```shell
tempest run
```

Install tempest extensions (optional). The individual OpenStack services also provide tempest test packages that can be installed to enrich the test content. In Wallaby, extension tests for Cinder, Glance, Keystone, Ironic, and Trove are provided and can be installed with:

```shell
yum install python3-cinder-tempest-plugin python3-glance-tempest-plugin python3-ironic-tempest-plugin python3-keystone-tempest-plugin python3-trove-tempest-plugin
```
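As a reference, a minimal tempest.conf for the environment built above might look roughly like the sketch below. The exact option set depends on the Tempest release, and the UUIDs are placeholders, so treat the official sample as authoritative:

```
[auth]
admin_username = admin
admin_password = ADMIN_PASS
admin_project_name = admin
admin_domain_name = Default

[identity]
uri_v3 = http://controller:5000/v3
region = RegionOne

[compute]
image_ref = <UUID of the cirros image>
flavor_ref = <ID of a small flavor>

[network]
public_network_id = <UUID of the provider/external network, if any>
```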
### Ironic Installation

Ironic is the bare metal service of OpenStack. It is recommended if you need bare metal provisioning; otherwise it can be skipped.

1. Set up the database.

The Bare Metal service stores information in a database. Create an `ironic` database that the `ironic` user can access, replacing IRONIC_DBPASSWORD with a suitable password:

```
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE ironic CHARACTER SET utf8;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'localhost' \
IDENTIFIED BY 'IRONIC_DBPASSWORD';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'%' \
IDENTIFIED BY 'IRONIC_DBPASSWORD';
```

2. Create the service user authentication.

1) Create the Bare Metal service users:

```shell
openstack user create --password IRONIC_PASSWORD \
    --email ironic@example.com ironic
openstack role add --project service --user ironic admin
openstack service create --name ironic --description "Ironic baremetal provisioning service" baremetal

openstack service create --name ironic-inspector --description "Ironic inspector baremetal provisioning service" baremetal-introspection
openstack user create --password IRONIC_INSPECTOR_PASSWORD \
    --email ironic_inspector@example.com ironic_inspector
openstack role add --project service --user ironic-inspector admin
```

2) Create the Bare Metal service endpoints:

```shell
openstack endpoint create --region RegionOne baremetal admin http://$IRONIC_NODE:6385
openstack endpoint create --region RegionOne baremetal public http://$IRONIC_NODE:6385
openstack endpoint create --region RegionOne baremetal internal http://$IRONIC_NODE:6385
openstack endpoint create --region RegionOne baremetal-introspection internal http://172.20.19.13:5050/v1
openstack endpoint create --region RegionOne baremetal-introspection public http://172.20.19.13:5050/v1
openstack endpoint create --region RegionOne baremetal-introspection admin http://172.20.19.13:5050/v1
```

3. Configure the ironic-api service.

Configuration file path: /etc/ironic/ironic.conf.

1) Configure the location of the database via the `connection` option, replacing IRONIC_DBPASSWORD with the password of the ironic user and DB_IP with the IP address of the DB server:

```
[database]

# The SQLAlchemy connection string used to connect to the
# database (string value)
connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic
```

2) Configure the ironic-api service to use the RabbitMQ message broker, replacing RPC_* with the RabbitMQ address and credentials. (json-rpc can be used instead of RabbitMQ if preferred.)

```
[DEFAULT]

# A URL representing the messaging driver to use and its full
# configuration. (string value)
transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
```
3) Configure the ironic-api service to use the identity service credentials, replacing PUBLIC_IDENTITY_IP with the public IP of the identity server, PRIVATE_IDENTITY_IP with the private IP of the identity server, and IRONIC_PASSWORD with the password of the ironic user in the identity service:

```
[DEFAULT]

# Authentication strategy used by ironic-api: one of
# "keystone" or "noauth". "noauth" should not be used in a
# production environment because all authentication will be
# disabled. (string value)
auth_strategy=keystone
host = controller
memcache_servers = controller:11211
enabled_network_interfaces = flat,noop,neutron
default_network_interface = noop
transport_url = rabbit://openstack:RABBITPASSWD@controller:5672/
enabled_hardware_types = ipmi
enabled_boot_interfaces = pxe
enabled_deploy_interfaces = direct
default_deploy_interface = direct
enabled_inspect_interfaces = inspector
enabled_management_interfaces = ipmitool
enabled_power_interfaces = ipmitool
enabled_rescue_interfaces = no-rescue,agent
isolinux_bin = /usr/share/syslinux/isolinux.bin
logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s

[keystone_authtoken]

# Authentication type to load (string value)
auth_type=password
# Complete public Identity API endpoint (string value)
www_authenticate_uri=http://PUBLIC_IDENTITY_IP:5000
# Complete admin Identity API endpoint. (string value)
auth_url=http://PRIVATE_IDENTITY_IP:5000
# Service username. (string value)
username=ironic
# Service account password. (string value)
password=IRONIC_PASSWORD
# Service tenant name. (string value)
project_name=service
# Domain name containing project (string value)
project_domain_name=Default
# User's domain name (string value)
user_domain_name=Default

[agent]
deploy_logs_collect = always
deploy_logs_local_path = /var/log/ironic/deploy
deploy_logs_storage_backend = local
image_download_source = http
stream_raw_images = false
force_raw_images = false
verify_ca = False

[oslo_concurrency]

[oslo_messaging_notifications]
transport_url = rabbit://openstack:123456@172.20.19.25:5672/
topics = notifications
driver = messagingv2

[oslo_messaging_rabbit]
amqp_durable_queues = True
rabbit_ha_queues = True

[pxe]
ipxe_enabled = false
pxe_append_params = nofb nomodeset vga=normal coreos.autologin ipa-insecure=1
image_cache_size = 204800
tftp_root=/var/lib/tftpboot/cephfs/
tftp_master_path=/var/lib/tftpboot/cephfs/master_images

[dhcp]
dhcp_provider = none
```

4) Create the Bare Metal service database tables:

```shell
ironic-dbsync --config-file /etc/ironic/ironic.conf create_schema
```

5) Restart the ironic-api service:

```shell
sudo systemctl restart openstack-ironic-api
```

4. Configure the ironic-conductor service.

1) Replace HOST_IP with the IP of the conductor host:

```
[DEFAULT]

# IP address of this host. If unset, will determine the IP
# programmatically. If unable to do so, will use "127.0.0.1".
# (string value)
my_ip=HOST_IP
```

2) Configure the location of the database. ironic-conductor should use the same configuration as ironic-api. Replace IRONIC_DBPASSWORD with the password of the ironic user and DB_IP with the IP address of the DB server:

```
[database]

# The SQLAlchemy connection string to use to connect to the
# database. (string value)
connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic
```
3) Configure the service to use the RabbitMQ message broker. ironic-conductor should use the same configuration as ironic-api. Replace RPC_* with the RabbitMQ address and credentials. (json-rpc can be used instead of RabbitMQ if preferred.)

```
[DEFAULT]

# A URL representing the messaging driver to use and its full
# configuration. (string value)
transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
```

4) Configure credentials for accessing other OpenStack services.

To communicate with other OpenStack services, the Bare Metal service needs to authenticate with the OpenStack Identity service using service users when requesting those services. The credentials of these users must be configured in each configuration section associated with the corresponding service:

- `[neutron]` - access the OpenStack Networking service
- `[glance]` - access the OpenStack Image service
- `[swift]` - access the OpenStack Object Storage service
- `[cinder]` - access the OpenStack Block Storage service
- `[inspector]` - access the OpenStack Bare Metal introspection service
- `[service_catalog]` - a special entry that stores the credentials the Bare Metal service uses to discover its own API URL endpoint registered in the OpenStack Identity service catalog

For simplicity, the same service user can be used for all services. For backward compatibility, this should be the same user configured in `[keystone_authtoken]` of the ironic-api service. This is not mandatory, though; a different service user can be created and configured for each service.

In the following example, the authentication information used to access the OpenStack Networking service is configured such that:

- the Networking service is deployed in the identity service region named RegionOne, with only the public endpoint interface registered in the service catalog;
- requests use a specific CA SSL certificate for HTTPS connections;
- the same service user as configured for the ironic-api service is used;
- the dynamic password authentication plugin discovers a suitable identity service API version based on the other options.
```
[neutron]

# Authentication type to load (string value)
auth_type = password
# Authentication URL (string value)
auth_url=https://IDENTITY_IP:5000/
# Username (string value)
username=ironic
# User's password (string value)
password=IRONIC_PASSWORD
# Project name to scope to (string value)
project_name=service
# Domain ID containing project (string value)
project_domain_id=default
# User's domain id (string value)
user_domain_id=default
# PEM encoded Certificate Authority to use when verifying
# HTTPs connections. (string value)
cafile=/opt/stack/data/ca-bundle.pem
# The default region_name for endpoint URL discovery. (string
# value)
region_name = RegionOne
# List of interfaces, in order of preference, for endpoint
# URL. (list value)
valid_interfaces=public
```

By default, to communicate with another service, the Bare Metal service tries to discover a suitable endpoint for that service through the identity service catalog. If you want to use a different endpoint for a specific service, specify it with the endpoint_override option in the Bare Metal service configuration file:

```
[neutron]
...
endpoint_override = 
```

5) Configure the allowed drivers and hardware types.

Set the hardware types allowed by the ironic-conductor service via enabled_hardware_types:

```
[DEFAULT]
enabled_hardware_types = ipmi
```

Configure the hardware interfaces:

```
enabled_boot_interfaces = pxe
enabled_deploy_interfaces = direct,iscsi
enabled_inspect_interfaces = inspector
enabled_management_interfaces = ipmitool
enabled_power_interfaces = ipmitool
```

Configure the interface defaults:

```
[DEFAULT]
default_deploy_interface = direct
default_network_interface = neutron
```

If any driver using direct deploy is enabled, the Swift backend of the Image service must be installed and configured. The Ceph Object Gateway (RADOS Gateway) is also supported as an Image service backend.

6) Restart the ironic-conductor service:

```shell
sudo systemctl restart openstack-ironic-conductor
```

5. Configure the ironic-inspector service.

Configuration file path: /etc/ironic-inspector/inspector.conf.

1) Create the database:

```
# mysql -u root -p

MariaDB [(none)]> CREATE DATABASE ironic_inspector CHARACTER SET utf8;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic_inspector.* TO 'ironic_inspector'@'localhost' \
IDENTIFIED BY 'IRONIC_INSPECTOR_DBPASSWORD';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic_inspector.* TO 'ironic_inspector'@'%' \
IDENTIFIED BY 'IRONIC_INSPECTOR_DBPASSWORD';
```

2) Configure the location of the database via the `connection` option, replacing IRONIC_INSPECTOR_DBPASSWORD with the password of the ironic_inspector user and DB_IP with the IP address of the DB server:

```
[database]
backend = sqlalchemy
connection = mysql+pymysql://ironic_inspector:IRONIC_INSPECTOR_DBPASSWORD@DB_IP/ironic_inspector
min_pool_size = 100
max_pool_size = 500
pool_timeout = 30
max_retries = 5
max_overflow = 200
db_retry_interval = 2
db_inc_retry_interval = True
db_max_retry_interval = 2
db_max_retries = 5
```

3) Configure the message queue address:

```
[DEFAULT]
transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
```
4) Set up keystone authentication:

```
[DEFAULT]
auth_strategy = keystone
timeout = 900
rootwrap_config = /etc/ironic-inspector/rootwrap.conf
logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s
log_dir = /var/log/ironic-inspector
state_path = /var/lib/ironic-inspector
use_stderr = False

[ironic]
api_endpoint = http://IRONIC_API_HOST_ADDRRESS:6385
auth_type = password
auth_url = http://PUBLIC_IDENTITY_IP:5000
auth_strategy = keystone
ironic_url = http://IRONIC_API_HOST_ADDRRESS:6385
os_region = RegionOne
project_name = service
project_domain_name = Default
user_domain_name = Default
username = IRONIC_SERVICE_USER_NAME
password = IRONIC_SERVICE_USER_PASSWORD

[keystone_authtoken]
auth_type = password
auth_url = http://control:5000
www_authenticate_uri = http://control:5000
project_domain_name = default
user_domain_name = default
project_name = service
username = ironic_inspector
password = IRONICPASSWD
region_name = RegionOne
memcache_servers = control:11211
token_cache_time = 300

[processing]
add_ports = active
processing_hooks = $default_processing_hooks,local_link_connection,lldp_basic
ramdisk_logs_dir = /var/log/ironic-inspector/ramdisk
always_store_ramdisk_logs = true
store_data = none
power_off = false

[pxe_filter]
driver = iptables

[capabilities]
boot_mode=True
```

5) Configure the ironic-inspector dnsmasq service:

```
# configuration file: /etc/ironic-inspector/dnsmasq.conf
port=0
interface=enp3s0                          # replace with the actual listening interface
dhcp-range=172.20.19.100,172.20.19.110    # replace with the actual DHCP address range
bind-interfaces
enable-tftp
dhcp-match=set:efi,option:client-arch,7
dhcp-match=set:efi,option:client-arch,9
dhcp-match=aarch64, option:client-arch,11
dhcp-boot=tag:aarch64,grubaa64.efi
dhcp-boot=tag:!aarch64,tag:efi,grubx64.efi
dhcp-boot=tag:!aarch64,tag:!efi,pxelinux.0
tftp-root=/tftpboot                       # replace with the actual tftpboot directory
log-facility=/var/log/dnsmasq.log
```

6) Disable DHCP on the subnet of the ironic provision network:

```shell
openstack subnet set --no-dhcp 72426e89-f552-4dc4-9ac7-c4e131ce7f3c
```

7) Initialize the database of the ironic-inspector service. Run on the controller node:

```shell
ironic-inspector-dbsync --config-file /etc/ironic-inspector/inspector.conf upgrade
```

8) Start the services:

```shell
systemctl enable --now openstack-ironic-inspector.service
systemctl enable --now openstack-ironic-inspector-dnsmasq.service
```

6. Configure the httpd service.

Create the httpd root directory that ironic will use and set its owner and group. The path must match the http_root option in the [deploy] section of /etc/ironic/ironic.conf:

```shell
mkdir -p /var/lib/ironic/httproot
chown ironic.ironic /var/lib/ironic/httproot
```

Install and configure the httpd service.

Install the httpd service (skip if already installed):

```shell
yum install httpd -y
```

Create the file /etc/httpd/conf.d/openstack-ironic-httpd.conf with the following content:

```
Listen 8080

ServerName ironic.openeuler.com

ErrorLog "/var/log/httpd/openstack-ironic-httpd-error_log"
CustomLog "/var/log/httpd/openstack-ironic-httpd-access_log" "%h %l %u %t \"%r\" %>s %b"

DocumentRoot "/var/lib/ironic/httproot"

Options Indexes FollowSymLinks
Require all granted
LogLevel warn
AddDefaultCharset UTF-8
EnableSendfile on
```

Note that the listening port must match the port specified by the http_url option in the [deploy] section of /etc/ironic/ironic.conf (see the sketch below).

Restart the httpd service:

```shell
systemctl restart httpd
```
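For reference, the [deploy] settings in /etc/ironic/ironic.conf that the httpd configuration above must agree with look roughly like the following sketch; the values are illustrative and must be adapted to your environment:

```
[deploy]
# served by the httpd instance configured above
http_root = /var/lib/ironic/httproot
http_url = http://<controller-management-IP>:8080
```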
7. Build the deploy ramdisk image.

The ramdisk image for the W release can be built with the ironic-python-agent service or the disk-image-builder tool, or with the latest ironic-python-agent-builder from the community. You can also use another tool of your choice. To use the native W-release tools, install the corresponding package:

```shell
yum install openstack-ironic-python-agent
```

or

```shell
yum install diskimage-builder
```

See the official documentation for detailed usage.

The following describes the complete process of building the deploy image used by ironic with ironic-python-agent-builder.

Install ironic-python-agent-builder.

1. Install the tool:

```shell
pip install ironic-python-agent-builder
```

2. Modify the python interpreter in the following files:

```shell
/usr/bin/yum
/usr/libexec/urlgrabber-ext-down
```

3. Install the other required tools:

```shell
yum install git
```

Because `DIB` depends on the `semanage` command, check that the command is available before building the image: `semanage --help`. If it is missing, install it:

```shell
# first find out which package provides it
[root@localhost ~]# yum provides /usr/sbin/semanage
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirror.vcu.edu
 * extras: mirror.vcu.edu
 * updates: mirror.math.princeton.edu
policycoreutils-python-2.5-34.el7.aarch64 : SELinux policy core python utilities
Repo        : base
Matched from:
Filename    : /usr/sbin/semanage

# install it
[root@localhost ~]# yum install policycoreutils-python
```

Build the image.

On the `arm` architecture, additionally export:

```shell
export ARCH=aarch64
```

Basic usage:

```shell
usage: ironic-python-agent-builder [-h] [-r RELEASE] [-o OUTPUT] [-e ELEMENT]
                                   [-b BRANCH] [-v] [--extra-args EXTRA_ARGS]
                                   distribution

positional arguments:
  distribution          Distribution to use

optional arguments:
  -h, --help            show this help message and exit
  -r RELEASE, --release RELEASE
                        Distribution release to use
  -o OUTPUT, --output OUTPUT
                        Output base file name
  -e ELEMENT, --element ELEMENT
                        Additional DIB element to use
  -b BRANCH, --branch BRANCH
                        If set, override the branch that is used for ironic-
                        python-agent and requirements
  -v, --verbose         Enable verbose logging in diskimage-builder
  --extra-args EXTRA_ARGS
                        Extra arguments to pass to diskimage-builder
```

Example:

```shell
ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky
```

Allow ssh login.

Initialize the environment variables, then build the image:

```shell
export DIB_DEV_USER_USERNAME=ipa
export DIB_DEV_USER_PWDLESS_SUDO=yes
export DIB_DEV_USER_PASSWORD='123'
ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky -e selinux-permissive -e devuser
```
Specify a code repository.

Initialize the corresponding environment variables, then build the image:

```shell
# specify the repository location and version
DIB_REPOLOCATION_ironic_python_agent=git@172.20.2.149:liuzz/ironic-python-agent.git
DIB_REPOREF_ironic_python_agent=origin/develop

# clone the code directly from gerrit
DIB_REPOLOCATION_ironic_python_agent=https://review.opendev.org/openstack/ironic-python-agent
DIB_REPOREF_ironic_python_agent=refs/changes/43/701043/1
```

Reference: [source-repositories](https://docs.openstack.org/diskimage-builder/latest/elements/source-repositories/README.html).

Specifying the repository location and version has been verified to work.

**Note**: the PXE configuration file template in native OpenStack does not support the arm64 architecture, so the native OpenStack code has to be modified by the user.

In the W release, the community ironic still does not support arm64 UEFI PXE boot. The symptom is that the generated grub.cfg file (usually under /tftpboot/) has the wrong format, so PXE boot fails. The erroneously generated configuration file:

![ironic-err](../../img/install/ironic-err.png)

As shown above, on arm the commands that locate the vmlinux and ramdisk images are `linux` and `initrd` respectively; the highlighted commands in the figure are the x86 UEFI PXE boot form. Users need to modify the code that generates grub.cfg themselves.

TLS errors when ironic sends command-status query requests to IPA: in the W release, both IPA and ironic send requests to each other with TLS verification enabled by default. Disable it as described in the official documentation:

1. In the ironic configuration file (/etc/ironic/ironic.conf), add ipa-insecure=1 to the following configuration:

```
[agent]
verify_ca = False

[pxe]
pxe_append_params = nofb nomodeset vga=normal coreos.autologin ipa-insecure=1
```

2. In the ramdisk image, add the IPA configuration file /etc/ironic_python_agent/ironic_python_agent.conf (the /etc/ironic_python_agent directory must be created first) and configure TLS as follows:

```
[DEFAULT]
enable_auto_tls = False
```

Set the permissions:

```
chown -R ipa.ipa /etc/ironic_python_agent/
```
3. Modify the service startup file of the IPA service to add the configuration file option:

```shell
vim usr/lib/systemd/system/ironic-python-agent.service
```

```
[Unit]
Description=Ironic Python Agent
After=network-online.target

[Service]
ExecStartPre=/sbin/modprobe vfat
ExecStart=/usr/local/bin/ironic-python-agent --config-file /etc/ironic_python_agent/ironic_python_agent.conf
Restart=always
RestartSec=30s

[Install]
WantedBy=multi-user.target
```

### Kolla Installation

Kolla provides production-ready containerized deployment for OpenStack services. Kolla and Kolla-ansible were introduced in openEuler 22.03 LTS.

Installing Kolla is very simple; just install the corresponding RPM packages:

```shell
yum install openstack-kolla openstack-kolla-ansible
```

After the installation, commands such as `kolla-ansible`, `kolla-build`, `kolla-genpwd`, and `kolla-mergepwd` are available.
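A typical all-in-one workflow with these commands looks roughly like the sketch below. The inventory path and the /etc/kolla files follow the upstream Kolla-Ansible defaults and are assumptions, not part of this guide:

```shell
# generate random service passwords into /etc/kolla/passwords.yml
kolla-genpwd

# edit /etc/kolla/globals.yml (base distro, network interfaces, internal VIP, ...)

# prepare the hosts, run prechecks, then deploy with the bundled all-in-one inventory
kolla-ansible -i /usr/share/kolla-ansible/ansible/inventory/all-in-one bootstrap-servers
kolla-ansible -i /usr/share/kolla-ansible/ansible/inventory/all-in-one prechecks
kolla-ansible -i /usr/share/kolla-ansible/ansible/inventory/all-in-one deploy
```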
### Trove Installation

Trove is the Database service of OpenStack. It is recommended if you want to use the database service provided by OpenStack; otherwise it can be skipped.

1. Set up the database.

The Database service stores information in a database. Create a `trove` database that the `trove` user can access, replacing TROVE_DBPASSWORD with a suitable password:

```
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE trove CHARACTER SET utf8;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'localhost' \
IDENTIFIED BY 'TROVE_DBPASSWORD';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'%' \
IDENTIFIED BY 'TROVE_DBPASSWORD';
```

2. Create the service user authentication.

1) Create the Trove service user:

```shell
openstack user create --password TROVE_PASSWORD \
    --email trove@example.com trove
openstack role add --project service --user trove admin
openstack service create --name trove --description "Database service" database
```

Explanation: replace TROVE_PASSWORD with the password of the trove user.

2) Create the Database service endpoints:

```shell
openstack endpoint create --region RegionOne database public http://controller:8779/v1.0/%\(tenant_id\)s
openstack endpoint create --region RegionOne database internal http://controller:8779/v1.0/%\(tenant_id\)s
openstack endpoint create --region RegionOne database admin http://controller:8779/v1.0/%\(tenant_id\)s
```

3. Install and configure the Trove components.

1) Install the Trove packages:

```shell
yum install openstack-trove python-troveclient
```

2) Configure trove.conf:

```shell
vim /etc/trove/trove.conf

[DEFAULT]
bind_host=TROVE_NODE_IP
log_dir = /var/log/trove
network_driver = trove.network.neutron.NeutronDriver
management_security_groups = 
nova_keypair = trove-mgmt
default_datastore = mysql
taskmanager_manager = trove.taskmanager.manager.Manager
trove_api_workers = 5
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
reboot_time_out = 300
usage_timeout = 900
agent_call_high_timeout = 1200
use_syslog = False
debug = True

# Set these if using Neutron Networking
network_driver=trove.network.neutron.NeutronDriver
network_label_regex=.*

transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/

[database]
connection = mysql+pymysql://trove:TROVE_DBPASS@controller/trove

[keystone_authtoken]
project_domain_name = Default
project_name = service
user_domain_name = Default
password = trove
username = trove
auth_url = http://controller:5000/v3/
auth_type = password

[service_credentials]
auth_url = http://controller:5000/v3/
region_name = RegionOne
project_name = service
password = trove
project_domain_name = Default
user_domain_name = Default
username = trove

[mariadb]
tcp_ports = 3306,4444,4567,4568

[mysql]
tcp_ports = 3306

[postgresql]
tcp_ports = 5432
```

Explanation:

- In `[DEFAULT]`, `bind_host` is the IP of the node where Trove is deployed.
- `nova_compute_url` and `cinder_url` are the endpoints created in Keystone for Nova and Cinder.
- `nova_proxy_XXX` is a user that can access the Nova service; the example uses the admin user.
- `transport_url` is the RabbitMQ connection information; replace RABBIT_PASS with the RabbitMQ password.
- In `[database]`, `connection` points to the database created for Trove in mysql above.
- In the Trove user information, replace TROVE_PASS with the actual password of the trove user.

3) Configure trove-guestagent.conf:

```shell
vim /etc/trove/trove-guestagent.conf

[DEFAULT]
log_file = trove-guestagent.log
log_dir = /var/log/trove/
ignore_users = os_admin
control_exchange = trove
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
rpc_backend = rabbit
command_process_timeout = 60
use_syslog = False
debug = True

[service_credentials]
auth_url = http://controller:5000/v3/
region_name = RegionOne
project_name = service
password = TROVE_PASS
project_domain_name = Default
user_domain_name = Default
username = trove

[mysql]
docker_image = your-registry/your-repo/mysql
backup_docker_image = your-registry/your-repo/db-backup-mysql:1.1.0
```

Explanation:

- guestagent is an independent component of Trove that must be built in advance into the virtual machine image that Trove creates through Nova. After a database instance is created, the guestagent process starts and reports heartbeats to Trove through the message queue (RabbitMQ), so the RabbitMQ user and password must be configured.
- Starting from the Victoria release, Trove uses a single unified image to run different types of databases; the database services run in Docker containers inside the guest VM.
- `transport_url` is the RabbitMQ connection information; replace RABBIT_PASS with the RabbitMQ password.
- In the Trove user information, replace TROVE_PASS with the actual password of the trove user.
4) Populate the Trove database:

```
su -s /bin/sh -c "trove-manage db_sync" trove
```

4. Finish the installation

Configure the Trove services to start at boot:

```
systemctl enable openstack-trove-api.service \
                 openstack-trove-taskmanager.service \
                 openstack-trove-conductor.service
```

Start the services:

```
systemctl start openstack-trove-api.service \
                openstack-trove-taskmanager.service \
                openstack-trove-conductor.service
```

## Swift Installation

Swift provides an elastic, scalable, highly available distributed object storage service, suitable for storing large amounts of unstructured data.

1. Create the service credentials and API endpoints

Create the service credentials:

```
# Create the swift user
openstack user create --domain default --password-prompt swift
# Add the admin role to the swift user
openstack role add --project service --user swift admin
# Create the swift service entity
openstack service create --name swift --description "OpenStack Object Storage" object-store
```

Create the swift API endpoints:

```
openstack endpoint create --region RegionOne object-store public http://controller:8080/v1/AUTH_%\(project_id\)s
openstack endpoint create --region RegionOne object-store internal http://controller:8080/v1/AUTH_%\(project_id\)s
openstack endpoint create --region RegionOne object-store admin http://controller:8080/v1
```

2. Install the packages (CTL):

```
yum install openstack-swift-proxy python3-swiftclient python3-keystoneclient python3-keystonemiddleware memcached
```

3. Configure the proxy server

The Swift RPM package already contains a basically usable proxy-server.conf; you only need to edit the IP addresses and the swift password in it.

***Note***

**Replace the password with the one you chose for the swift user in the Identity service.**

4. Install and configure the storage nodes (STG)

Install the supporting packages:

```shell
yum install xfsprogs rsync
```

Format the /dev/vdb and /dev/vdc devices as XFS:

```shell
mkfs.xfs /dev/vdb
mkfs.xfs /dev/vdc
```

Create the mount point directory structure:

```shell
mkdir -p /srv/node/vdb
mkdir -p /srv/node/vdc
```

Find the UUIDs of the new partitions:

```shell
blkid
```

Edit the /etc/fstab file and add the following to it:

```shell
UUID="" /srv/node/vdb xfs noatime 0 2
UUID="" /srv/node/vdc xfs noatime 0 2
```

Mount the devices:

```shell
mount /srv/node/vdb
mount /srv/node/vdc
```

***Note***

**If you do not need resiliency across devices, it is enough to create a single device in the steps above, and you can also skip the rsync configuration below.**
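Filling the UUIDs into /etc/fstab by hand is error-prone. As a purely optional convenience, a small helper loop like the one below (run before the `mount` step above) appends the entries using the values reported by `blkid`; the device names are the same `vdb`/`vdc` examples used in this guide:

```shell
# Optional helper: append fstab entries for the example devices using their UUIDs
for dev in vdb vdc; do
    uuid=$(blkid -s UUID -o value /dev/$dev)
    echo "UUID=\"$uuid\" /srv/node/$dev xfs noatime 0 2" >> /etc/fstab
done
```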
(Optional) Create or edit the /etc/rsyncd.conf file so that it contains the following:

```shell
[DEFAULT]
uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
address = MANAGEMENT_INTERFACE_IP_ADDRESS

[account]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/account.lock

[container]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/container.lock

[object]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/object.lock
```

**Replace MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network on the storage node.**

Start the rsyncd service and configure it to start when the system boots:

```shell
systemctl enable rsyncd.service
systemctl start rsyncd.service
```

5. Install and configure the components on the storage nodes (STG)

Install the packages:

```shell
yum install openstack-swift-account openstack-swift-container openstack-swift-object
```

Edit the account-server.conf, container-server.conf and object-server.conf files in the /etc/swift directory, replacing bind_ip with the IP address of the management network on the storage node.

Ensure correct ownership of the mount point directory structure:

```shell
chown -R swift:swift /srv/node
```

Create the recon directory and ensure it has correct ownership:

```shell
mkdir -p /var/cache/swift
chown -R root:swift /var/cache/swift
chmod -R 775 /var/cache/swift
```

6. Create the account ring (CTL)

Change to the /etc/swift directory:

```shell
cd /etc/swift
```

Create the base account.builder file:

```shell
swift-ring-builder account.builder create 10 1 1
```

Add each storage node to the ring:

```shell
swift-ring-builder account.builder add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6202 --device DEVICE_NAME --weight DEVICE_WEIGHT
```

**Replace STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network on the storage node. Replace DEVICE_NAME with the name of a storage device on that storage node.**

***Note***

**Repeat this command for every storage device on every storage node.**

Verify the ring contents:

```shell
swift-ring-builder account.builder
```

Rebalance the ring:

```shell
swift-ring-builder account.builder rebalance
```
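As a concrete illustration of the add step above, for a single storage node at the hypothetical management address 10.0.0.51 with the two example devices from this guide, the command would be run twice:

```shell
# Illustrative values only: one storage node (10.0.0.51) with devices vdb and vdc
swift-ring-builder account.builder add --region 1 --zone 1 --ip 10.0.0.51 --port 6202 --device vdb --weight 100
swift-ring-builder account.builder add --region 1 --zone 1 --ip 10.0.0.51 --port 6202 --device vdc --weight 100
```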
7. Create the container ring (CTL)

Change to the `/etc/swift` directory.

Create the base `container.builder` file:

```shell
swift-ring-builder container.builder create 10 1 1
```

Add each storage node to the ring:

```shell
swift-ring-builder container.builder \
    add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6201 \
    --device DEVICE_NAME --weight 100
```

**Replace STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network on the storage node. Replace DEVICE_NAME with the name of a storage device on that storage node.**

***Note***

**Repeat this command for every storage device on every storage node.**

Verify the ring contents:

```shell
swift-ring-builder container.builder
```

Rebalance the ring:

```shell
swift-ring-builder container.builder rebalance
```

8. Create the object ring (CTL)

Change to the `/etc/swift` directory.

Create the base `object.builder` file:

```shell
swift-ring-builder object.builder create 10 1 1
```

Add each storage node to the ring:

```shell
swift-ring-builder object.builder \
    add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6200 \
    --device DEVICE_NAME --weight 100
```

**Replace STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network on the storage node. Replace DEVICE_NAME with the name of a storage device on that storage node.**

***Note***

**Repeat this command for every storage device on every storage node.**

Verify the ring contents:

```shell
swift-ring-builder object.builder
```

Rebalance the ring:

```shell
swift-ring-builder object.builder rebalance
```

Distribute the ring configuration files:

Copy the `account.ring.gz`, `container.ring.gz` and `object.ring.gz` files to the `/etc/swift` directory on every storage node and on any other node running the proxy service.

9. Finish the installation

Edit the /etc/swift/swift.conf file:

```
[swift-hash]
swift_hash_path_suffix = test-hash
swift_hash_path_prefix = test-hash

[storage-policy:0]
name = Policy-0
default = yes
```

Replace test-hash with unique values.

Copy the swift.conf file to the /etc/swift directory on every storage node and on any other node running the proxy service.

On all nodes, ensure correct ownership of the configuration directory:

```
chown -R root:swift /etc/swift
```

On the controller node and on any other node running the proxy service, start the Object Storage proxy service and its dependencies and configure them to start when the system boots:

```
systemctl enable openstack-swift-proxy.service memcached.service
systemctl start openstack-swift-proxy.service memcached.service
```

On the storage nodes, start the Object Storage services and configure them to start when the system boots:

```
systemctl enable openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service
systemctl start openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service
systemctl enable openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service
systemctl start openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service
systemctl enable openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service
systemctl start openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service
```
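With everything running, an optional smoke test from the controller (using the python3-swiftclient installed earlier and the admin credentials from this guide) might look like the following; the container name is arbitrary:

```shell
source ~/.admin-openrc
# Account statistics served through the proxy
swift stat
# Create a test container, upload a small object and list it back
openstack container create demo
openstack object create demo /etc/hosts
openstack object list demo
```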
## Cyborg Installation

Cyborg provides accelerator device support for OpenStack, including GPU, FPGA, ASIC, NP, SoCs, NVMe/NOF SSDs, ODP, DPDK/SPDK, and so on.

1. Initialize the database:

```
CREATE DATABASE cyborg;
GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'localhost' IDENTIFIED BY 'CYBORG_DBPASS';
GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'%' IDENTIFIED BY 'CYBORG_DBPASS';
```

2. Create the corresponding Keystone resources:

```
$ openstack user create --domain default --password-prompt cyborg
$ openstack role add --project service --user cyborg admin
$ openstack service create --name cyborg --description "Acceleration Service" accelerator
$ openstack endpoint create --region RegionOne \
  accelerator public http://:6666/v1
$ openstack endpoint create --region RegionOne \
  accelerator internal http://:6666/v1
$ openstack endpoint create --region RegionOne \
  accelerator admin http://:6666/v1
```

3. Install Cyborg:

```
yum install openstack-cyborg
```

4. Configure Cyborg

Edit /etc/cyborg/cyborg.conf:

```
[DEFAULT]
transport_url = rabbit://%RABBITMQ_USER%:%RABBITMQ_PASSWORD%@%OPENSTACK_HOST_IP%:5672/
use_syslog = False
state_path = /var/lib/cyborg
debug = True

[database]
connection = mysql+pymysql://%DATABASE_USER%:%DATABASE_PASSWORD%@%OPENSTACK_HOST_IP%/cyborg

[service_catalog]
project_domain_id = default
user_domain_id = default
project_name = service
password = PASSWORD
username = cyborg
auth_url = http://%OPENSTACK_HOST_IP%/identity
auth_type = password

[placement]
project_domain_name = Default
project_name = service
user_domain_name = Default
password = PASSWORD
username = placement
auth_url = http://%OPENSTACK_HOST_IP%/identity
auth_type = password

[keystone_authtoken]
memcached_servers = localhost:11211
project_domain_name = Default
project_name = service
user_domain_name = Default
password = PASSWORD
username = cyborg
auth_url = http://%OPENSTACK_HOST_IP%/identity
auth_type = password
```

Adjust the user names, passwords, IP addresses and similar values to match your environment.

5. Synchronize the database tables:

```
cyborg-dbsync --config-file /etc/cyborg/cyborg.conf upgrade
```

6. Start the Cyborg services:

```
systemctl enable openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent
systemctl start openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent
```

## Aodh Installation

1. Create the database:

```
CREATE DATABASE aodh;
GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'localhost' IDENTIFIED BY 'AODH_DBPASS';
GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'%' IDENTIFIED BY 'AODH_DBPASS';
```

2. Create the corresponding Keystone resources:

```
openstack user create --domain default --password-prompt aodh
openstack role add --project service --user aodh admin
openstack service create --name aodh --description "Telemetry" alarming
openstack endpoint create --region RegionOne alarming public http://controller:8042
openstack endpoint create --region RegionOne alarming internal http://controller:8042
openstack endpoint create --region RegionOne alarming admin http://controller:8042
```

3. Install Aodh:

```
yum install openstack-aodh-api openstack-aodh-evaluator openstack-aodh-notifier openstack-aodh-listener openstack-aodh-expirer python3-aodhclient
```
Note

The python3-pyparsing package that aodh depends on in the openEuler OS repository is not compatible and must be overridden with the version matching the OpenStack release. You can get the matching version VERSION with `yum list | grep pyparsing | grep OpenStack | awk '{print $2}'`, and then run `yum install -y python3-pyparsing-VERSION` to install the compatible pyparsing over it.

4. Edit the configuration file:

```
[database]
connection = mysql+pymysql://aodh:AODH_DBPASS@controller/aodh

[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = aodh
password = AODH_PASS

[service_credentials]
auth_type = password
auth_url = http://controller:5000/v3
project_domain_id = default
user_domain_id = default
project_name = service
username = aodh
password = AODH_PASS
interface = internalURL
region_name = RegionOne
```

5. Initialize the database:

```
aodh-dbsync
```

6. Start the Aodh services:

```
systemctl enable openstack-aodh-api.service openstack-aodh-evaluator.service openstack-aodh-notifier.service openstack-aodh-listener.service
systemctl start openstack-aodh-api.service openstack-aodh-evaluator.service openstack-aodh-notifier.service openstack-aodh-listener.service
```
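As an optional sanity check (using the python3-aodhclient installed above), listing alarms should succeed and return an empty table on a fresh deployment:

```shell
source ~/.admin-openrc
# An empty list (no alarms defined yet) means the Aodh API is reachable
openstack alarm list
```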
## Gnocchi Installation

1. Create the database:

```
CREATE DATABASE gnocchi;
GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'localhost' IDENTIFIED BY 'GNOCCHI_DBPASS';
GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'%' IDENTIFIED BY 'GNOCCHI_DBPASS';
```

2. Create the corresponding Keystone resources:

```
openstack user create --domain default --password-prompt gnocchi
openstack role add --project service --user gnocchi admin
openstack service create --name gnocchi --description "Metric Service" metric
openstack endpoint create --region RegionOne metric public http://controller:8041
openstack endpoint create --region RegionOne metric internal http://controller:8041
openstack endpoint create --region RegionOne metric admin http://controller:8041
```

3. Install Gnocchi:

```
yum install openstack-gnocchi-api openstack-gnocchi-metricd python3-gnocchiclient
```

4. Edit the configuration file /etc/gnocchi/gnocchi.conf:

```
[api]
auth_mode = keystone
port = 8041
uwsgi_mode = http-socket

[keystone_authtoken]
auth_type = password
auth_url = http://controller:5000/v3
project_domain_name = Default
user_domain_name = Default
project_name = service
username = gnocchi
password = GNOCCHI_PASS
interface = internalURL
region_name = RegionOne

[indexer]
url = mysql+pymysql://gnocchi:GNOCCHI_DBPASS@controller/gnocchi

[storage]
# coordination_url is not required but specifying one will improve
# performance with better workload division across workers.
coordination_url = redis://controller:6379
file_basepath = /var/lib/gnocchi
driver = file
```

5. Initialize the database:

```
gnocchi-upgrade
```

6. Start the Gnocchi services:

```
systemctl enable openstack-gnocchi-api.service openstack-gnocchi-metricd.service
systemctl start openstack-gnocchi-api.service openstack-gnocchi-metricd.service
```

## Ceilometer Installation

1. Create the corresponding Keystone resources:

```
openstack user create --domain default --password-prompt ceilometer
openstack role add --project service --user ceilometer admin
openstack service create --name ceilometer --description "Telemetry" metering
```

2. Install Ceilometer:

```
yum install openstack-ceilometer-notification openstack-ceilometer-central
```

3. Edit the configuration file /etc/ceilometer/pipeline.yaml:

```
publishers:
    # set address of Gnocchi
    # + filter out Gnocchi-related activity meters (Swift driver)
    # + set default archive policy
    - gnocchi://?filter_project=service&archive_policy=low
```

4. Edit the configuration file /etc/ceilometer/ceilometer.conf:

```
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller

[service_credentials]
auth_type = password
auth_url = http://controller:5000/v3
project_domain_id = default
user_domain_id = default
project_name = service
username = ceilometer
password = CEILOMETER_PASS
interface = internalURL
region_name = RegionOne
```

5. Initialize the database:

```
ceilometer-upgrade
```

6. Start the Ceilometer services:

```
systemctl enable openstack-ceilometer-notification.service openstack-ceilometer-central.service
systemctl start openstack-ceilometer-notification.service openstack-ceilometer-central.service
```
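Optionally, once Ceilometer has been running for a few minutes, you can confirm that measurements are flowing into Gnocchi using the python3-gnocchiclient installed earlier:

```shell
source ~/.admin-openrc
# Resources and metrics reported by Ceilometer should start to appear here
gnocchi resource list
gnocchi metric list
```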
## Heat Installation

1. Create the heat database and grant it the proper access privileges, replacing HEAT_DBPASS with a suitable password:

```
CREATE DATABASE heat;
GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'localhost' IDENTIFIED BY 'HEAT_DBPASS';
GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'%' IDENTIFIED BY 'HEAT_DBPASS';
```

2. Create the service credentials: create the heat user and add the admin role to it:

```
openstack user create --domain default --password-prompt heat
openstack role add --project service --user heat admin
```

3. Create the heat and heat-cfn services and their API endpoints:

```
openstack service create --name heat --description "Orchestration" orchestration
openstack service create --name heat-cfn --description "Orchestration" cloudformation
openstack endpoint create --region RegionOne orchestration public http://controller:8004/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne orchestration internal http://controller:8004/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne orchestration admin http://controller:8004/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne cloudformation public http://controller:8000/v1
openstack endpoint create --region RegionOne cloudformation internal http://controller:8000/v1
openstack endpoint create --region RegionOne cloudformation admin http://controller:8000/v1
```

4. Create the additional resources needed for stack management, including the heat domain and its admin user heat_domain_admin, the heat_stack_owner role, and the heat_stack_user role:

```
openstack user create --domain heat --password-prompt heat_domain_admin
openstack role add --domain heat --user-domain heat --user heat_domain_admin admin
openstack role create heat_stack_owner
openstack role create heat_stack_user
```

5. Install the packages:

```
yum install openstack-heat-api openstack-heat-api-cfn openstack-heat-engine
```

6. Edit the configuration file /etc/heat/heat.conf:

```
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
heat_metadata_server_url = http://controller:8000
heat_waitcondition_server_url = http://controller:8000/v1/waitcondition
stack_domain_admin = heat_domain_admin
stack_domain_admin_password = HEAT_DOMAIN_PASS
stack_user_domain_name = heat

[database]
connection = mysql+pymysql://heat:HEAT_DBPASS@controller/heat

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = heat
password = HEAT_PASS

[trustee]
auth_type = password
auth_url = http://controller:5000
username = heat
password = HEAT_PASS
user_domain_name = default

[clients_keystone]
auth_uri = http://controller:5000
```

7. Populate the heat database tables:

```
su -s /bin/sh -c "heat-manage db_sync" heat
```

8. Start the services:

```
systemctl enable openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service
systemctl start openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service
```
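If you want to verify the Heat deployment, the following optional check assumes the heat client plugin (python3-heatclient) is available; install it with yum if the commands are missing. Both the API and the engine should report as up:

```shell
source ~/.admin-openrc
# heat-engine workers should be listed with an "up" status
openstack orchestration service list
# An empty stack list confirms the orchestration API answers requests
openstack stack list
```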
## Rapid Deployment with the OpenStack SIG Tool oos

oos (openEuler OpenStack SIG) is the command-line tool provided by the OpenStack SIG. Its `oos env` family of commands wraps ansible playbooks for one-click deployment of OpenStack (either `all in one` or a three-node `cluster`), so users can quickly deploy an OpenStack environment based on openEuler RPM packages.

The oos tool supports two ways of provisioning machines for the OpenStack environment: through a cloud provider (currently only the Huawei Cloud provider is supported) or by managing existing hosts. The following example deploys an `all in one` OpenStack environment on Huawei Cloud to illustrate how the tool is used.

Install the oos tool:

```
pip install openstack-sig-tool
```

Configure the Huawei Cloud provider information. Open the /usr/local/etc/oos/oos.conf file and adjust the configuration to the Huawei Cloud resources you own:

```
[huaweicloud]
ak =
sk =
region = ap-southeast-3
root_volume_size = 100
data_volume_size = 100
security_group_name = oos
image_format = openEuler-%%(release)s-%%(arch)s
vpc_name = oos_vpc
subnet1_name = oos_subnet1
subnet2_name = oos_subnet2
```

Configure the OpenStack environment information. Open the /usr/local/etc/oos/oos.conf file and adjust the configuration to the current machine environment and your needs. The content looks like this:

```
[environment]
mysql_root_password = root
mysql_project_password = root
rabbitmq_password = root
project_identity_password = root
enabled_service = keystone,neutron,cinder,placement,nova,glance,horizon,aodh,ceilometer,cyborg,gnocchi,kolla,heat,swift,trove,tempest
neutron_provider_interface_name = br-ex
default_ext_subnet_range = 10.100.100.0/24
default_ext_subnet_gateway = 10.100.100.1
neutron_dataplane_interface_name = eth1
cinder_block_device = vdb
swift_storage_devices = vdc
swift_hash_path_suffix = ash
swift_hash_path_prefix = has
glance_api_workers = 2
cinder_api_workers = 2
nova_api_workers = 2
nova_metadata_api_workers = 2
nova_conductor_workers = 2
nova_scheduler_workers = 2
neutron_api_workers = 2
horizon_allowed_host = *
kolla_openeuler_plugin = false
```

Key configuration options:

| Option | Description |
|---|---|
| enabled_service | List of services to install; trim it according to your needs |
| neutron_provider_interface_name | Name of the neutron L3 bridge |
| default_ext_subnet_range | IP range of the neutron private network |
| default_ext_subnet_gateway | Gateway of the neutron private network |
| neutron_dataplane_interface_name | NIC used by neutron; a new, dedicated NIC is recommended to avoid conflicts with existing NICs and to prevent the all-in-one host from losing connectivity |
| cinder_block_device | Name of the volume device used by cinder |
| swift_storage_devices | Name of the volume device used by swift |
| kolla_openeuler_plugin | Whether to enable the kolla plugin; when set to True, kolla can deploy openEuler containers |

Create an openEuler 22.03-LTS-SP3 x86_64 virtual machine on Huawei Cloud for the `all in one` OpenStack deployment:

```
# sshpass is used during `oos env create` to set up passwordless access to the target virtual machine
dnf install sshpass
oos env create -r 22.03-lts-SP3 -f small -a x86 -n test-oos all_in_one
```

The full set of parameters can be viewed with the `oos env create --help` command.

Deploy the OpenStack `all in one` environment:

```
oos env setup test-oos -r wallaby
```

The full set of parameters can be viewed with the `oos env setup --help` command.

Initialize the tempest environment. If you want to run tempest tests against this environment, run `oos env init`, which automatically creates the OpenStack resources tempest needs:

```
oos env init test-oos
```

After the command succeeds, a mytest directory is generated in the user's home directory; change into it and you can run the `tempest run` command.

If you instead deploy the OpenStack environment by managing existing hosts, the overall flow is the same as the Huawei Cloud case above: steps 1, 3, 5 and 6 are unchanged, step 2 (configuring the Huawei Cloud provider information) is dropped, and step 4 changes from creating a virtual machine on Huawei Cloud to managing the host:

```
# sshpass is used during `oos env create` to set up passwordless access to the target host
dnf install sshpass
oos env manage -r 22.03-lts-SP3 -i TARGET_MACHINE_IP -p TARGET_MACHINE_PASSWD -n test-oos
```

Replace TARGET_MACHINE_IP with the IP of the target machine and TARGET_MACHINE_PASSWD with its password. The full set of parameters can be viewed with the `oos env manage --help` command.
# OpenStack-Wallaby Deployment Guide

Contents:

- Introduction to OpenStack
- Conventions
- Preparing the environment
- Environment configuration
- Installing the SQL database
- Installing RabbitMQ
- Installing Memcached
- Installing OpenStack
- Keystone installation
- Glance installation
- Placement installation
- Nova installation
- Neutron installation
- Cinder installation
- Horizon installation
- Tempest installation
- Ironic installation
- Kolla installation
- Trove installation
- Swift installation
- Cyborg installation
- Aodh installation
- Gnocchi installation
- Ceilometer installation
- Heat installation
- Rapid deployment with the OpenStack SIG development tool oos

## Introduction to OpenStack

OpenStack is both a community and a project. It provides an operating platform and a tool set for deploying clouds, giving organizations scalable and flexible cloud computing.

As an open source cloud computing management platform, OpenStack combines several major components, such as nova, cinder, neutron, glance, keystone, and horizon, to get its work done. OpenStack supports almost every type of cloud environment; the project aims to provide a cloud computing management platform that is simple to deploy, massively scalable, feature-rich, and standardized. OpenStack delivers an Infrastructure-as-a-Service (IaaS) solution through a set of complementary services, each of which exposes an API for integration.

The official openEuler 22.03-LTS-SP3 repositories already support the OpenStack-Wallaby release; after configuring the yum source, users can deploy OpenStack by following this document.

## Conventions

OpenStack supports multiple deployment topologies. This document covers both the ALL in One and the Distributed deployment modes, with the following conventions:

ALL in One mode:

- Ignore all suffixes.

Distributed mode:

- A `(CTL)` suffix means the configuration or command applies only to the `control node`.
- A `(CPT)` suffix means the configuration or command applies only to the `compute node`.
- A `(STG)` suffix means the configuration or command applies only to the `storage node`.
- Anything else applies to both the `control node` and the `compute node`.

Note

The services affected by these conventions are:

- Cinder
- Nova
- Neutron

## Preparing the Environment

### Environment Configuration

Configure the official 22.03 LTS yum source. The EPOL repository must be enabled to support OpenStack:

```
yum update
yum install openstack-release-wallaby
yum clean all && yum makecache
```

Note: if the yum source in your environment does not have EPOL enabled, configure EPOL as well and make sure it is present, as shown below:

```
vi /etc/yum.repos.d/openEuler.repo

[EPOL]
name=EPOL
baseurl=http://repo.openeuler.org/openEuler-22.03-LTS-SP3/EPOL/main/$basearch/
enabled=1
gpgcheck=1
gpgkey=http://repo.openeuler.org/openEuler-22.03-LTS-SP3/OS/$basearch/RPM-GPG-KEY-openEuler
```

Modify the host names and their mappings.

Set the host name of each node:

```
hostnamectl set-hostname controller  (CTL)
hostnamectl set-hostname compute     (CPT)
```

Assuming the IP of the controller node is 10.0.0.11 and the IP of the compute node (if present) is 10.0.0.12, add the following to /etc/hosts:

```
10.0.0.11  controller
10.0.0.12  compute
```

### Installing the SQL Database

Run the following command to install the packages:

```
yum install mariadb mariadb-server python3-PyMySQL
```

Run the following command to create and edit the /etc/my.cnf.d/openstack.cnf file:

```
vim /etc/my.cnf.d/openstack.cnf

[mysqld]
bind-address = 10.0.0.11
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
```

Note: set bind-address to the management IP address of the control node.

Start the database service and configure it to start at boot:

```
systemctl enable mariadb.service
systemctl start mariadb.service
```

Configure the default database password (optional):

```
mysql_secure_installation
```

Note: just follow the prompts.

### Installing RabbitMQ

Run the following command to install the packages:

```
yum install rabbitmq-server
```

Start the RabbitMQ service and configure it to start at boot:

```
systemctl enable rabbitmq-server.service
systemctl start rabbitmq-server.service
```

Add the OpenStack user:

```
rabbitmqctl add_user openstack RABBIT_PASS
```

Note: replace RABBIT_PASS with the password you set for the OpenStack user.

Set the permissions of the openstack user to allow configure, write and read access:

```
rabbitmqctl set_permissions openstack ".*" ".*" ".*"
```

### Installing Memcached

Run the following command to install the dependency packages:

```
yum install memcached python3-memcached
```

Edit the /etc/sysconfig/memcached file:

```
vim /etc/sysconfig/memcached

OPTIONS="-l 127.0.0.1,::1,controller"
```
Run the following commands to start the Memcached service and configure it to start at boot:

```
systemctl enable memcached.service
systemctl start memcached.service
```

Note: after the service starts, you can run `memcached-tool controller stats` to make sure it started correctly and is available; `controller` can be replaced with the management IP address of the control node.

## Installing OpenStack

### Keystone Installation

Create the keystone database and grant privileges:

```
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE keystone;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
IDENTIFIED BY 'KEYSTONE_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
IDENTIFIED BY 'KEYSTONE_DBPASS';
MariaDB [(none)]> exit
```

Note: replace KEYSTONE_DBPASS with the password you set for the Keystone database.

Install the packages:

```
yum install openstack-keystone httpd mod_wsgi
```

Configure keystone:

```
vim /etc/keystone/keystone.conf

[database]
connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone

[token]
provider = fernet
```

Explanation:

- [database] section: configure the database entry point.
- [token] section: configure the token provider.

Note: replace KEYSTONE_DBPASS with the password of the Keystone database.

Synchronize the database:

```
su -s /bin/sh -c "keystone-manage db_sync" keystone
```

Initialize the Fernet key repositories:

```
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
```

Bootstrap the service:

```
keystone-manage bootstrap --bootstrap-password ADMIN_PASS \
    --bootstrap-admin-url http://controller:5000/v3/ \
    --bootstrap-internal-url http://controller:5000/v3/ \
    --bootstrap-public-url http://controller:5000/v3/ \
    --bootstrap-region-id RegionOne
```

Note: replace ADMIN_PASS with the password you set for the admin user.

Configure the Apache HTTP server:

```
vim /etc/httpd/conf/httpd.conf

ServerName controller
```

```
ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
```

Explanation: the ServerName entry must reference the control node.

Note: if the ServerName entry does not exist, create it.

Start the Apache HTTP service:

```
systemctl enable httpd.service
systemctl start httpd.service
```

Create the environment variable configuration:

```
cat << EOF >> ~/.admin-openrc
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
EOF
```

Note: replace ADMIN_PASS with the password of the admin user.

Create the domain, projects, users and roles in turn. python3-openstackclient must be installed first:

```
yum install python3-openstackclient
```

Load the environment variables:

```
source ~/.admin-openrc
```
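At this point an optional sanity check can confirm that the admin credentials and the bootstrap step work; listing the endpoints should show the three Keystone endpoints registered above:

```shell
source ~/.admin-openrc
# The identity service should expose its public, internal and admin endpoints
openstack endpoint list
```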
Create the project `service`; the domain `default` was already created during `keystone-manage bootstrap`:

```
openstack domain create --description "An Example Domain" example
openstack project create --domain default --description "Service Project" service
```

Create the (non-admin) project `myproject`, the user `myuser` and the role `myrole`, then add the role `myrole` to `myproject` and `myuser`:

```
openstack project create --domain default --description "Demo Project" myproject
openstack user create --domain default --password-prompt myuser
openstack role create myrole
openstack role add --project myproject --user myuser myrole
```

Verification

Unset the temporary environment variables OS_AUTH_URL and OS_PASSWORD:

```
source ~/.admin-openrc
unset OS_AUTH_URL OS_PASSWORD
```

Request a token for the admin user:

```
openstack --os-auth-url http://controller:5000/v3 \
    --os-project-domain-name Default --os-user-domain-name Default \
    --os-project-name admin --os-username admin token issue
```

Request a token for the myuser user:

```
openstack --os-auth-url http://controller:5000/v3 \
    --os-project-domain-name Default --os-user-domain-name Default \
    --os-project-name myproject --os-username myuser token issue
```

### Glance Installation

Create the database, service credentials and API endpoints.

Create the database:

```
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE glance;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
IDENTIFIED BY 'GLANCE_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
IDENTIFIED BY 'GLANCE_DBPASS';
MariaDB [(none)]> exit
```

Note: replace GLANCE_DBPASS with the password you set for the glance database.

Create the service credentials:

```
source ~/.admin-openrc

openstack user create --domain default --password-prompt glance
openstack role add --project service --user glance admin
openstack service create --name glance --description "OpenStack Image" image
```

Create the Image service API endpoints:

```
openstack endpoint create --region RegionOne image public http://controller:9292
openstack endpoint create --region RegionOne image internal http://controller:9292
openstack endpoint create --region RegionOne image admin http://controller:9292
```

Install the packages:

```
yum install openstack-glance
```

Configure glance:

```
vim /etc/glance/glance-api.conf

[database]
connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = GLANCE_PASS

[paste_deploy]
flavor = keystone

[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
```

Explanation:

- [database] section: configure the database entry point.
- [keystone_authtoken] and [paste_deploy] sections: configure the identity service entry point.
- [glance_store] section: configure the local filesystem store and the location of image files.

Note:

- Replace GLANCE_DBPASS with the password of the glance database.
- Replace GLANCE_PASS with the password of the glance user.
Synchronize the database:

```
su -s /bin/sh -c "glance-manage db_sync" glance
```

Start the service:

```
systemctl enable openstack-glance-api.service
systemctl start openstack-glance-api.service
```

Verification

Download an image:

```
source ~/.admin-openrc
wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
```

Note: if your environment runs on the Kunpeng (aarch64) architecture, download the aarch64 version of the image; the cirros-0.5.2-aarch64-disk.img image has been tested.

Upload the image to the Image service:

```
openstack image create --disk-format qcow2 --container-format bare \
    --file cirros-0.4.0-x86_64-disk.img --public cirros
```

Confirm the image was uploaded and verify its properties:

```
openstack image list
```
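For aarch64 hosts, the equivalent steps with the tested cirros 0.5.2 image would look like the following; the download URL follows the standard cirros layout, so adjust it if your mirror differs:

```shell
wget http://download.cirros-cloud.net/0.5.2/cirros-0.5.2-aarch64-disk.img
openstack image create --disk-format qcow2 --container-format bare \
    --file cirros-0.5.2-aarch64-disk.img --public cirros
```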
### Placement Installation

Create the database, service credentials and API endpoints.

Create the database:

Access the database as the root user, create the placement database, and grant privileges:

```
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE placement;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' \
IDENTIFIED BY 'PLACEMENT_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' \
IDENTIFIED BY 'PLACEMENT_DBPASS';
MariaDB [(none)]> exit
```

Note: replace PLACEMENT_DBPASS with the password you set for the placement database.

```
source ~/.admin-openrc
```

Run the following commands to create the placement service credentials, create the placement user, and add the admin role to the placement user.

Create the Placement API service:

```
openstack user create --domain default --password-prompt placement
openstack role add --project service --user placement admin
openstack service create --name placement --description "Placement API" placement
```

Create the placement service API endpoints:

```
openstack endpoint create --region RegionOne placement public http://controller:8778
openstack endpoint create --region RegionOne placement internal http://controller:8778
openstack endpoint create --region RegionOne placement admin http://controller:8778
```

Install and configure

Install the packages:

```
yum install openstack-placement-api
```

Configure placement. Edit the /etc/placement/placement.conf file:

- In the [placement_database] section, configure the database entry point.
- In the [api] and [keystone_authtoken] sections, configure the identity service entry point.

```
# vim /etc/placement/placement.conf

[placement_database]
# ...
connection = mysql+pymysql://placement:PLACEMENT_DBPASS@controller/placement

[api]
# ...
auth_strategy = keystone

[keystone_authtoken]
# ...
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = placement
password = PLACEMENT_PASS
```

Replace PLACEMENT_DBPASS with the password of the placement database and PLACEMENT_PASS with the password of the placement user.

Synchronize the database:

```
su -s /bin/sh -c "placement-manage db sync" placement
```

Restart the httpd service:

```
systemctl restart httpd
```

Verification

Run the following commands to perform a status check:

```
source ~/.admin-openrc
placement-status upgrade check
```

Install osc-placement and list the available resource classes and traits:

```
yum install python3-osc-placement
openstack --os-placement-api-version 1.2 resource class list --sort-column name
openstack --os-placement-api-version 1.6 trait list --sort-column name
```

### Nova Installation

Create the database, service credentials and API endpoints.

Create the database:

```
mysql -u root -p  (CTL)

MariaDB [(none)]> CREATE DATABASE nova_api;
MariaDB [(none)]> CREATE DATABASE nova;
MariaDB [(none)]> CREATE DATABASE nova_cell0;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> exit
```

Note: replace NOVA_DBPASS with the password you set for the nova databases.

```
source ~/.admin-openrc  (CTL)
```

Create the nova service credentials:

```
openstack user create --domain default --password-prompt nova  (CTL)
openstack role add --project service --user nova admin  (CTL)
openstack service create --name nova --description "OpenStack Compute" compute  (CTL)
```

Create the nova API endpoints:

```
openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1  (CTL)
openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1  (CTL)
openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1  (CTL)
```

Install the packages:

```
yum install openstack-nova-api openstack-nova-conductor \  (CTL)
            openstack-nova-novncproxy openstack-nova-scheduler

yum install openstack-nova-compute  (CPT)
```

Note: on arm64 machines, the following command is also required:

```
yum install edk2-aarch64  (CPT)
```
Configure nova:

```
vim /etc/nova/nova.conf

[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
my_ip = 10.0.0.1
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver
compute_driver=libvirt.LibvirtDriver  (CPT)
instances_path = /var/lib/nova/instances/  (CPT)
lock_path = /var/lib/nova/tmp  (CPT)

[api_database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api  (CTL)

[database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova  (CTL)

[api]
auth_strategy = keystone

[keystone_authtoken]
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = NOVA_PASS

[vnc]
enabled = true
server_listen = $my_ip
server_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html  (CPT)

[libvirt]
virt_type = qemu  (CPT)
cpu_mode = custom  (CPT)
cpu_model = cortex-a72  (CPT)

[glance]
api_servers = http://controller:9292

[oslo_concurrency]
lock_path = /var/lib/nova/tmp  (CTL)

[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = PLACEMENT_PASS

[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
service_metadata_proxy = true  (CTL)
metadata_proxy_shared_secret = METADATA_SECRET  (CTL)
```

Explanation:

- [DEFAULT] section: enable the compute and metadata APIs, configure the RabbitMQ message queue entry point, configure my_ip, and enable the neutron network service.
- [api_database] and [database] sections: configure the database entry points.
- [api] and [keystone_authtoken] sections: configure the identity service entry point.
- [vnc] section: enable and configure the remote console entry point.
- [glance] section: configure the address of the Image service API.
- [oslo_concurrency] section: configure the lock path.
- [placement] section: configure the entry point of the placement service.

Note:

- Replace RABBIT_PASS with the password of the openstack account in RabbitMQ.
- Set my_ip to the management IP address of the control node.
- Replace NOVA_DBPASS with the password of the nova database.
- Replace NOVA_PASS with the password of the nova user.
- Replace PLACEMENT_PASS with the password of the placement user.
- Replace NEUTRON_PASS with the password of the neutron user.
- Replace METADATA_SECRET with a suitable metadata proxy secret.

Additional steps

Determine whether virtual machine hardware acceleration is supported (x86 architecture):

```
egrep -c '(vmx|svm)' /proc/cpuinfo  (CPT)
```

If the command returns 0, hardware acceleration is not supported, and libvirt must be configured to use QEMU instead of KVM:

```
vim /etc/nova/nova.conf  (CPT)

[libvirt]
virt_type = qemu
```

If the command returns 1 or more, hardware acceleration is supported and no extra configuration is needed.

Note: on arm64 machines, the following is also required:

```
vim /etc/libvirt/qemu.conf

nvram = ["/usr/share/AAVMF/AAVMF_CODE.fd: \
          /usr/share/AAVMF/AAVMF_VARS.fd", \
         "/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw: \
          /usr/share/edk2/aarch64/vars-template-pflash.raw"]
```

```
vim /etc/qemu/firmware/edk2-aarch64.json
```
```
{
    "description": "UEFI firmware for ARM64 virtual machines",
    "interface-types": [
        "uefi"
    ],
    "mapping": {
        "device": "flash",
        "executable": {
            "filename": "/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw",
            "format": "raw"
        },
        "nvram-template": {
            "filename": "/usr/share/edk2/aarch64/vars-template-pflash.raw",
            "format": "raw"
        }
    },
    "targets": [
        {
            "architecture": "aarch64",
            "machines": [
                "virt-*"
            ]
        }
    ],
    "features": [

    ],
    "tags": [

    ]
}
(CPT)
```

Synchronize the databases

Synchronize the nova-api database:

```
su -s /bin/sh -c "nova-manage api_db sync" nova  (CTL)
```

Register the cell0 database:

```
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova  (CTL)
```

Create the cell1 cell:

```
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova  (CTL)
```

Synchronize the nova database:

```
su -s /bin/sh -c "nova-manage db sync" nova  (CTL)
```

Verify that cell0 and cell1 are registered correctly:

```
su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova  (CTL)
```

Add the compute node to the OpenStack cluster:

```
su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova  (CPT)
```

Start the services:

```
systemctl enable \  (CTL)
    openstack-nova-api.service \
    openstack-nova-scheduler.service \
    openstack-nova-conductor.service \
    openstack-nova-novncproxy.service
systemctl start \  (CTL)
    openstack-nova-api.service \
    openstack-nova-scheduler.service \
    openstack-nova-conductor.service \
    openstack-nova-novncproxy.service

systemctl enable libvirtd.service openstack-nova-compute.service  (CPT)
systemctl start libvirtd.service openstack-nova-compute.service  (CPT)
```

Verification

```
source ~/.admin-openrc  (CTL)
```

List the service components to verify that every process started and registered successfully:

```
openstack compute service list  (CTL)
```

List the API endpoints in the Identity service to verify the connection to the Identity service:

```
openstack catalog list  (CTL)
```

List the images in the Image service to verify the connection to the Image service:

```
openstack image list  (CTL)
```

Check whether the cells are working properly and whether the other prerequisites are in place:

```
nova-status upgrade check  (CTL)
```
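As an optional extra step, you can create a small test flavor now so that later smoke tests have something to boot with; the name and sizes below are arbitrary examples and not part of the original procedure:

```shell
source ~/.admin-openrc
# Arbitrary example flavor: 1 vCPU, 256 MB RAM, 1 GB disk
openstack flavor create --vcpus 1 --ram 256 --disk 1 m1.tiny
```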
### Neutron Installation

Create the database, service credentials and API endpoints.

Create the database:

```
mysql -u root -p  (CTL)

MariaDB [(none)]> CREATE DATABASE neutron;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
IDENTIFIED BY 'NEUTRON_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
IDENTIFIED BY 'NEUTRON_DBPASS';
MariaDB [(none)]> exit
```

Note: replace NEUTRON_DBPASS with the password you set for the neutron database.

```
source ~/.admin-openrc  (CTL)
```

Create the neutron service credentials:

```
openstack user create --domain default --password-prompt neutron  (CTL)
openstack role add --project service --user neutron admin  (CTL)
openstack service create --name neutron --description "OpenStack Networking" network  (CTL)
```

Create the Neutron service API endpoints:

```
openstack endpoint create --region RegionOne network public http://controller:9696  (CTL)
openstack endpoint create --region RegionOne network internal http://controller:9696  (CTL)
openstack endpoint create --region RegionOne network admin http://controller:9696  (CTL)
```

Install the packages:

```
yum install openstack-neutron openstack-neutron-linuxbridge ebtables ipset \  (CTL)
            openstack-neutron-ml2

yum install openstack-neutron-linuxbridge ebtables ipset  (CPT)
```

Configure neutron.

Configure the main configuration file:

```
vim /etc/neutron/neutron.conf

[database]
connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron  (CTL)

[DEFAULT]
core_plugin = ml2  (CTL)
service_plugins = router  (CTL)
allow_overlapping_ips = true  (CTL)
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = true  (CTL)
notify_nova_on_port_data_changes = true  (CTL)
api_workers = 3  (CTL)

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = neutron
password = NEUTRON_PASS

[nova]
auth_url = http://controller:5000  (CTL)
auth_type = password  (CTL)
project_domain_name = Default  (CTL)
user_domain_name = Default  (CTL)
region_name = RegionOne  (CTL)
project_name = service  (CTL)
username = nova  (CTL)
password = NOVA_PASS  (CTL)

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
```

Explanation:

- [database] section: configure the database entry point.
- [DEFAULT] section: enable the ml2 and router plugins, allow overlapping IP addresses, and configure the RabbitMQ message queue entry point.
- [DEFAULT] and [keystone_authtoken] sections: configure the identity service entry point.
- [DEFAULT] and [nova] sections: configure the network service to notify the compute service of network topology changes.
- [oslo_concurrency] section: configure the lock path.

Note:

- Replace NEUTRON_DBPASS with the password of the neutron database.
- Replace RABBIT_PASS with the password of the openstack account in RabbitMQ.
- Replace NEUTRON_PASS with the password of the neutron user.
- Replace NOVA_PASS with the password of the nova user.

Configure the ML2 plugin:

```
vim /etc/neutron/plugins/ml2/ml2_conf.ini

[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security

[ml2_type_flat]
flat_networks = provider

[ml2_type_vxlan]
vni_ranges = 1:1000

[securitygroup]
enable_ipset = true
```

Create a symbolic link for /etc/neutron/plugin.ini:

```
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
```

Note:

- [ml2] section: enable flat, vlan, and vxlan networks, enable the linuxbridge and l2population mechanisms, and enable the port security extension driver.
- [ml2_type_flat] section: configure the flat network as the provider virtual network.
- [ml2_type_vxlan] section: configure the VXLAN network identifier range.
- [securitygroup] section: allow ipset.

Additional note: the L2 configuration can be adjusted to your needs; this document uses a provider network with linuxbridge.
/etc/neutron/plugins/ml2/linuxbridge_agent.ini [linux_bridge] physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME [vxlan] enable_vxlan = true local_ip = OVERLAY_INTERFACE_IP_ADDRESS l2_population = true [securitygroup] enable_security_group = true firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver \u89e3\u91ca [linux_bridge]\u90e8\u5206\uff0c\u6620\u5c04 provider \u865a\u62df\u7f51\u7edc\u5230\u7269\u7406\u7f51\u7edc\u63a5\u53e3\uff1b [vxlan]\u90e8\u5206\uff0c\u542f\u7528 vxlan \u8986\u76d6\u7f51\u7edc\uff0c\u914d\u7f6e\u5904\u7406\u8986\u76d6\u7f51\u7edc\u7684\u7269\u7406\u7f51\u7edc\u63a5\u53e3 IP \u5730\u5740\uff0c\u542f\u7528 layer-2 population\uff1b [securitygroup]\u90e8\u5206\uff0c\u5141\u8bb8\u5b89\u5168\u7ec4\uff0c\u914d\u7f6e linux bridge iptables \u9632\u706b\u5899\u9a71\u52a8\u3002 \u6ce8\u610f \u66ff\u6362 PROVIDER_INTERFACE_NAME \u4e3a\u7269\u7406\u7f51\u7edc\u63a5\u53e3\uff1b \u66ff\u6362 OVERLAY_INTERFACE_IP_ADDRESS \u4e3a\u63a7\u5236\u8282\u70b9\u7684\u7ba1\u7406IP\u5730\u5740\u3002 \u914d\u7f6eLayer-3\u4ee3\u7406\uff1a vim /etc/neutron/l3_agent.ini (CTL) [DEFAULT] interface_driver = linuxbridge \u89e3\u91ca \u5728[default]\u90e8\u5206\uff0c\u914d\u7f6e\u63a5\u53e3\u9a71\u52a8\u4e3alinuxbridge \u914d\u7f6eDHCP\u4ee3\u7406\uff1a vim /etc/neutron/dhcp_agent.ini (CTL) [DEFAULT] interface_driver = linuxbridge dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq enable_isolated_metadata = true \u89e3\u91ca [default]\u90e8\u5206\uff0c\u914d\u7f6elinuxbridge\u63a5\u53e3\u9a71\u52a8\u3001Dnsmasq DHCP\u9a71\u52a8\uff0c\u542f\u7528\u9694\u79bb\u7684\u5143\u6570\u636e\u3002 \u914d\u7f6emetadata\u4ee3\u7406\uff1a vim /etc/neutron/metadata_agent.ini (CTL) [DEFAULT] nova_metadata_host = controller metadata_proxy_shared_secret = METADATA_SECRET \u89e3\u91ca [default]\u90e8\u5206\uff0c\u914d\u7f6e\u5143\u6570\u636e\u4e3b\u673a\u548cshared secret\u3002 \u6ce8\u610f \u66ff\u6362 METADATA_SECRET \u4e3a\u5408\u9002\u7684\u5143\u6570\u636e\u4ee3\u7406secret\u3002 \u914d\u7f6enova\u76f8\u5173\u914d\u7f6e vim /etc/nova/nova.conf [neutron] auth_url = http://controller:5000 auth_type = password project_domain_name = Default user_domain_name = Default region_name = RegionOne project_name = service username = neutron password = NEUTRON_PASS service_metadata_proxy = true (CTL) metadata_proxy_shared_secret = METADATA_SECRET (CTL) \u89e3\u91ca [neutron]\u90e8\u5206\uff0c\u914d\u7f6e\u8bbf\u95ee\u53c2\u6570\uff0c\u542f\u7528\u5143\u6570\u636e\u4ee3\u7406\uff0c\u914d\u7f6esecret\u3002 \u6ce8\u610f \u66ff\u6362 NEUTRON_PASS \u4e3a neutron \u7528\u6237\u7684\u5bc6\u7801\uff1b \u66ff\u6362 METADATA_SECRET \u4e3a\u5408\u9002\u7684\u5143\u6570\u636e\u4ee3\u7406secret\u3002 \u540c\u6b65\u6570\u636e\u5e93\uff1a su -s /bin/sh -c \"neutron-db-manage --config-file /etc/neutron/neutron.conf \\ --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head\" neutron \u91cd\u542f\u8ba1\u7b97API\u670d\u52a1\uff1a systemctl restart openstack-nova-api.service \u542f\u52a8\u7f51\u7edc\u670d\u52a1 systemctl enable neutron-server.service neutron-linuxbridge-agent.service \\ (CTL) neutron-dhcp-agent.service neutron-metadata-agent.service systemctl enable neutron-l3-agent.service systemctl restart openstack-nova-api.service neutron-server.service (CTL) neutron-linuxbridge-agent.service neutron-dhcp-agent.service \\ neutron-metadata-agent.service neutron-l3-agent.service systemctl enable neutron-linuxbridge-agent.service (CPT) systemctl restart neutron-linuxbridge-agent.service 
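For reference, configuration edits such as the `[neutron]` block of `/etc/nova/nova.conf` above can also be scripted instead of edited by hand. A minimal sketch, assuming the `crudini` utility is available (it is not installed by this guide) and reusing the placeholder values from above:

```shell
# Apply the [neutron] options to nova.conf non-interactively (placeholders as above).
cfg=/etc/nova/nova.conf
crudini --set $cfg neutron auth_url http://controller:5000
crudini --set $cfg neutron auth_type password
crudini --set $cfg neutron project_domain_name Default
crudini --set $cfg neutron user_domain_name Default
crudini --set $cfg neutron region_name RegionOne
crudini --set $cfg neutron project_name service
crudini --set $cfg neutron username neutron
crudini --set $cfg neutron password NEUTRON_PASS
crudini --set $cfg neutron service_metadata_proxy true
crudini --set $cfg neutron metadata_proxy_shared_secret METADATA_SECRET
```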
Verification

Verify that the neutron agents started successfully:

```shell
openstack network agent list
```

### Cinder Installation

Create the database, the service credentials, and the API endpoints.

Create the database:

```shell
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE cinder;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \
                  IDENTIFIED BY 'CINDER_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \
                  IDENTIFIED BY 'CINDER_DBPASS';
MariaDB [(none)]> exit
```

Note: replace CINDER_DBPASS with a password for the cinder database.

```shell
source ~/.admin-openrc
```

Create the cinder service credentials:

```shell
openstack user create --domain default --password-prompt cinder
openstack role add --project service --user cinder admin
openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
```

Create the Block Storage service API endpoints:

```shell
openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s
```
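The six endpoint commands differ only in the service name and the interface, so they can equally be created in a loop; a small sketch using the same region and URL pattern as above:

```shell
# Create public/internal/admin endpoints for both volume API versions in one loop.
for svc in volumev2 volumev3; do
  ver=${svc#volumev}
  for iface in public internal admin; do
    openstack endpoint create --region RegionOne "$svc" "$iface" \
      "http://controller:8776/v${ver}/%(project_id)s"
  done
done
```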
filter = [ \"a/vdb/\", \"r/.*/\"] \u89e3\u91ca \u5728devices\u90e8\u5206\uff0c\u6dfb\u52a0\u8fc7\u6ee4\u4ee5\u63a5\u53d7/dev/vdb\u8bbe\u5907\u62d2\u7edd\u5176\u4ed6\u8bbe\u5907\u3002 \u51c6\u5907NFS mkdir -p /root/cinder/backup cat << EOF >> /etc/export /root/cinder/backup 192.168.1.0/24(rw,sync,no_root_squash,no_all_squash) EOF \u914d\u7f6ecinder\u76f8\u5173\u914d\u7f6e\uff1a vim /etc/cinder/cinder.conf [DEFAULT] transport_url = rabbit://openstack:RABBIT_PASS@controller auth_strategy = keystone my_ip = 10.0.0.11 enabled_backends = lvm (STG) backup_driver=cinder.backup.drivers.nfs.NFSBackupDriver (STG) backup_share=HOST:PATH (STG) [database] connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder [keystone_authtoken] www_authenticate_uri = http://controller:5000 auth_url = http://controller:5000 memcached_servers = controller:11211 auth_type = password project_domain_name = Default user_domain_name = Default project_name = service username = cinder password = CINDER_PASS [oslo_concurrency] lock_path = /var/lib/cinder/tmp [lvm] volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver (STG) volume_group = cinder-volumes (STG) iscsi_protocol = iscsi (STG) iscsi_helper = tgtadm (STG) \u89e3\u91ca [database]\u90e8\u5206\uff0c\u914d\u7f6e\u6570\u636e\u5e93\u5165\u53e3\uff1b [DEFAULT]\u90e8\u5206\uff0c\u914d\u7f6eRabbitMQ\u6d88\u606f\u961f\u5217\u5165\u53e3\uff0c\u914d\u7f6emy_ip\uff1b [DEFAULT] [keystone_authtoken]\u90e8\u5206\uff0c\u914d\u7f6e\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u5165\u53e3\uff1b [oslo_concurrency]\u90e8\u5206\uff0c\u914d\u7f6elock path\u3002 \u6ce8\u610f \u66ff\u6362 CINDER_DBPASS \u4e3a cinder \u6570\u636e\u5e93\u7684\u5bc6\u7801\uff1b \u66ff\u6362 RABBIT_PASS \u4e3a RabbitMQ \u4e2d openstack \u8d26\u6237\u7684\u5bc6\u7801\uff1b \u914d\u7f6e my_ip \u4e3a\u63a7\u5236\u8282\u70b9\u7684\u7ba1\u7406 IP \u5730\u5740\uff1b \u66ff\u6362 CINDER_PASS \u4e3a cinder \u7528\u6237\u7684\u5bc6\u7801\uff1b \u66ff\u6362 HOST:PATH \u4e3a NFS \u7684HOSTIP\u548c\u5171\u4eab\u8def\u5f84\uff1b \u540c\u6b65\u6570\u636e\u5e93\uff1a su -s /bin/sh -c \"cinder-manage db sync\" cinder (CTL) \u914d\u7f6enova\uff1a vim /etc/nova/nova.conf (CTL) [cinder] os_region_name = RegionOne \u91cd\u542f\u8ba1\u7b97API\u670d\u52a1 systemctl restart openstack-nova-api.service \u542f\u52a8cinder\u670d\u52a1 systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service (CTL) systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service (CTL) systemctl enable rpcbind.service nfs-server.service tgtd.service iscsid.service \\ (STG) openstack-cinder-volume.service \\ openstack-cinder-backup.service systemctl start rpcbind.service nfs-server.service tgtd.service iscsid.service \\ (STG) openstack-cinder-volume.service \\ openstack-cinder-backup.service \u6ce8\u610f \u5f53cinder\u4f7f\u7528tgtadm\u7684\u65b9\u5f0f\u6302\u5377\u7684\u65f6\u5019\uff0c\u8981\u4fee\u6539/etc/tgt/tgtd.conf\uff0c\u5185\u5bb9\u5982\u4e0b\uff0c\u4fdd\u8bc1tgtd\u53ef\u4ee5\u53d1\u73b0cinder-volume\u7684iscsi target\u3002 include /var/lib/cinder/volumes/* \u9a8c\u8bc1 source ~/.admin-openrc openstack volume service list","title":"Cinder \u5b89\u88c5"},{"location":"install/openEuler-22.03-LTS-SP3/OpenStack-wallaby/#horizon","text":"\u5b89\u88c5\u8f6f\u4ef6\u5305 yum install openstack-dashboard \u4fee\u6539\u6587\u4ef6 \u4fee\u6539\u53d8\u91cf vim /etc/openstack-dashboard/local_settings OPENSTACK_HOST = \"controller\" ALLOWED_HOSTS = ['*', ] SESSION_ENGINE = 'django.contrib.sessions.backends.cache' CACHES 
### Horizon Installation

Install the packages:

```shell
yum install openstack-dashboard
```

Modify the variables in /etc/openstack-dashboard/local_settings:

```shell
vim /etc/openstack-dashboard/local_settings

OPENSTACK_HOST = "controller"
ALLOWED_HOSTS = ['*', ]

SESSION_ENGINE = 'django.contrib.sessions.backends.cache'

CACHES = {
    'default': {
         'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
         'LOCATION': 'controller:11211',
    }
}

OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "member"
WEBROOT = '/dashboard'
POLICY_FILES_PATH = "/etc/openstack-dashboard"

OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 3,
}
```

Restart the httpd service:

```shell
systemctl restart httpd.service memcached.service
```

Verification

Open a browser, enter http://HOSTIP/dashboard/, and log in to horizon.

Note: replace HOSTIP with the management plane IP address of the controller node.

### Tempest Installation

Tempest is the integration test service of OpenStack. It is recommended if you need comprehensive automated functional testing of an installed OpenStack environment; otherwise it can be skipped.

Install Tempest:

```shell
yum install openstack-tempest
```

Initialize a workspace:

```shell
tempest init mytest
```

Modify the configuration file:

```shell
cd mytest
vi etc/tempest.conf
```

tempest.conf must describe the current OpenStack environment; refer to the official example for the details.

Run the tests:

```shell
tempest run
```

Install the tempest extensions (optional).

The individual OpenStack services also provide their own tempest test packages, which can be installed to extend the tempest test coverage. In Wallaby we provide extension tests for Cinder, Glance, Keystone, Ironic, and Trove, which can be installed with:

```shell
yum install python3-cinder-tempest-plugin python3-glance-tempest-plugin python3-ironic-tempest-plugin python3-keystone-tempest-plugin python3-trove-tempest-plugin
```
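Running the full suite can take a long time; the tempest CLI also accepts a regular expression to select a subset of tests, for example only the identity API tests:

```shell
cd mytest
tempest run --regex '^tempest\.api\.identity'
```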
--description \"Ironic baremetal provisioning service\" baremetal openstack service create --name ironic-inspector --description \"Ironic inspector baremetal provisioning service\" baremetal-introspection openstack user create --password IRONIC_INSPECTOR_PASSWORD --email ironic_inspector@example.com ironic_inspector openstack role add --project service --user ironic-inspector admin 2\u3001\u521b\u5efaBare Metal\u670d\u52a1\u8bbf\u95ee\u5165\u53e3 openstack endpoint create --region RegionOne baremetal admin http://$IRONIC_NODE:6385 openstack endpoint create --region RegionOne baremetal public http://$IRONIC_NODE:6385 openstack endpoint create --region RegionOne baremetal internal http://$IRONIC_NODE:6385 openstack endpoint create --region RegionOne baremetal-introspection internal http://172.20.19.13:5050/v1 openstack endpoint create --region RegionOne baremetal-introspection public http://172.20.19.13:5050/v1 openstack endpoint create --region RegionOne baremetal-introspection admin http://172.20.19.13:5050/v1 \u914d\u7f6eironic-api\u670d\u52a1 \u914d\u7f6e\u6587\u4ef6\u8def\u5f84/etc/ironic/ironic.conf 1\u3001\u901a\u8fc7 connection \u9009\u9879\u914d\u7f6e\u6570\u636e\u5e93\u7684\u4f4d\u7f6e\uff0c\u5982\u4e0b\u6240\u793a\uff0c\u66ff\u6362 IRONIC_DBPASSWORD \u4e3a ironic \u7528\u6237\u7684\u5bc6\u7801\uff0c\u66ff\u6362 DB_IP \u4e3aDB\u670d\u52a1\u5668\u6240\u5728\u7684IP\u5730\u5740\uff1a [database] # The SQLAlchemy connection string used to connect to the # database (string value) connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic 2\u3001\u901a\u8fc7\u4ee5\u4e0b\u9009\u9879\u914d\u7f6eironic-api\u670d\u52a1\u4f7f\u7528RabbitMQ\u6d88\u606f\u4ee3\u7406\uff0c\u66ff\u6362 RPC_* \u4e3aRabbitMQ\u7684\u8be6\u7ec6\u5730\u5740\u548c\u51ed\u8bc1 [DEFAULT] # A URL representing the messaging driver to use and its full # configuration. (string value) transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/ \u7528\u6237\u4e5f\u53ef\u81ea\u884c\u4f7f\u7528json-rpc\u65b9\u5f0f\u66ff\u6362rabbitmq 3\u3001\u914d\u7f6eironic-api\u670d\u52a1\u4f7f\u7528\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u7684\u51ed\u8bc1\uff0c\u66ff\u6362 PUBLIC_IDENTITY_IP \u4e3a\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u5668\u7684\u516c\u5171IP\uff0c\u66ff\u6362 PRIVATE_IDENTITY_IP \u4e3a\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u5668\u7684\u79c1\u6709IP\uff0c\u66ff\u6362 IRONIC_PASSWORD \u4e3a\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u4e2d ironic \u7528\u6237\u7684\u5bc6\u7801\uff1a [DEFAULT] # Authentication strategy used by ironic-api: one of # \"keystone\" or \"noauth\". \"noauth\" should not be used in a # production environment because all authentication will be # disabled. 
3. Configure the ironic-api service to use the Identity service credentials. Replace PUBLIC_IDENTITY_IP with the public IP of the Identity server, PRIVATE_IDENTITY_IP with its private IP, and IRONIC_PASSWORD with the password of the ironic user in the Identity service:

```shell
[DEFAULT]

# Authentication strategy used by ironic-api: one of
# "keystone" or "noauth". "noauth" should not be used in a
# production environment because all authentication will be
# disabled. (string value)
auth_strategy=keystone
host = controller
memcache_servers = controller:11211
enabled_network_interfaces = flat,noop,neutron
default_network_interface = noop
transport_url = rabbit://openstack:RABBITPASSWD@controller:5672/
enabled_hardware_types = ipmi
enabled_boot_interfaces = pxe
enabled_deploy_interfaces = direct
default_deploy_interface = direct
enabled_inspect_interfaces = inspector
enabled_management_interfaces = ipmitool
enabled_power_interfaces = ipmitool
enabled_rescue_interfaces = no-rescue,agent
isolinux_bin = /usr/share/syslinux/isolinux.bin
logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s

[keystone_authtoken]
# Authentication type to load (string value)
auth_type=password
# Complete public Identity API endpoint (string value)
www_authenticate_uri=http://PUBLIC_IDENTITY_IP:5000
# Complete admin Identity API endpoint. (string value)
auth_url=http://PRIVATE_IDENTITY_IP:5000
# Service username. (string value)
username=ironic
# Service account password. (string value)
password=IRONIC_PASSWORD
# Service tenant name. (string value)
project_name=service
# Domain name containing project (string value)
project_domain_name=Default
# User's domain name (string value)
user_domain_name=Default

[agent]
deploy_logs_collect = always
deploy_logs_local_path = /var/log/ironic/deploy
deploy_logs_storage_backend = local
image_download_source = http
stream_raw_images = false
force_raw_images = false
verify_ca = False

[oslo_concurrency]

[oslo_messaging_notifications]
transport_url = rabbit://openstack:123456@172.20.19.25:5672/
topics = notifications
driver = messagingv2

[oslo_messaging_rabbit]
amqp_durable_queues = True
rabbit_ha_queues = True

[pxe]
ipxe_enabled = false
pxe_append_params = nofb nomodeset vga=normal coreos.autologin ipa-insecure=1
image_cache_size = 204800
tftp_root=/var/lib/tftpboot/cephfs/
tftp_master_path=/var/lib/tftpboot/cephfs/master_images

[dhcp]
dhcp_provider = none
```

4. Create the Bare Metal service database tables:

```shell
ironic-dbsync --config-file /etc/ironic/ironic.conf create_schema
```

5. Restart the ironic-api service:

```shell
sudo systemctl restart openstack-ironic-api
```
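A quick way to confirm that ironic-api came back up is to request its root URL, which returns the supported API versions as JSON:

```shell
# The API should answer with a JSON version document on the port configured above.
curl -s http://$IRONIC_NODE:6385 | python3 -m json.tool
```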
Configure the ironic-conductor service.

1. Replace HOST_IP with the IP address of the conductor host:

```shell
[DEFAULT]

# IP address of this host. If unset, will determine the IP
# programmatically. If unable to do so, will use "127.0.0.1".
# (string value)
my_ip=HOST_IP
```

2. Configure the database location. ironic-conductor should use the same setting as ironic-api. Replace IRONIC_DBPASSWORD with the password of the ironic user and DB_IP with the IP address of the database server:

```shell
[database]

# The SQLAlchemy connection string to use to connect to the
# database. (string value)
connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic
```

3. Configure the service to use the RabbitMQ message broker with the following option. ironic-conductor should use the same setting as ironic-api. Replace RPC_* with the RabbitMQ address details and credentials:

```shell
[DEFAULT]

# A URL representing the messaging driver to use and its full
# configuration. (string value)
transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
```

RabbitMQ can also be replaced with json-rpc, as noted above.

4. Configure the credentials for accessing other OpenStack services.

To communicate with other OpenStack services, the Bare Metal service needs to authenticate against the Identity service with service user credentials when making requests to those services. These credentials must be configured in the section of the configuration file associated with each service:

- `[neutron]` - access the OpenStack Networking service
- `[glance]` - access the OpenStack Image service
- `[swift]` - access the OpenStack Object Storage service
- `[cinder]` - access the OpenStack Block Storage service
- `[inspector]` - access the OpenStack bare metal introspection service
- `[service_catalog]` - a special entry holding the credentials the Bare Metal service uses to discover its own API URL endpoint as registered in the OpenStack Identity service catalog

For simplicity, the same service user can be used for all services. For backward compatibility it should be the same user as configured in the `[keystone_authtoken]` section of the ironic-api service; this is not mandatory, however, and a separate service user can be created and configured for each service.

In the following example, the credentials for accessing the OpenStack Networking service are configured such that:

- the Networking service is deployed in the Identity service region named RegionOne, with only its public endpoint interface registered in the service catalog;
- requests use HTTPS connections verified against a specific CA SSL certificate;
- the same service user as configured for ironic-api is used;
- the dynamic password authentication plugin discovers a suitable Identity service API version based on the other options.
```shell
[neutron]

# Authentication type to load (string value)
auth_type = password
# Authentication URL (string value)
auth_url=https://IDENTITY_IP:5000/
# Username (string value)
username=ironic
# User's password (string value)
password=IRONIC_PASSWORD
# Project name to scope to (string value)
project_name=service
# Domain ID containing project (string value)
project_domain_id=default
# User's domain id (string value)
user_domain_id=default
# PEM encoded Certificate Authority to use when verifying
# HTTPs connections. (string value)
cafile=/opt/stack/data/ca-bundle.pem
# The default region_name for endpoint URL discovery. (string
# value)
region_name = RegionOne
# List of interfaces, in order of preference, for endpoint
# URL. (list value)
valid_interfaces=public
```

By default, to communicate with another service, the Bare Metal service tries to discover a suitable endpoint for that service through the Identity service catalog. To use a different endpoint for a particular service, specify it with the `endpoint_override` option in the corresponding section of the Bare Metal service configuration file:

```shell
[neutron]
...
endpoint_override =
```

5. Configure the enabled drivers and hardware types.

Set the hardware types the ironic-conductor service is allowed to use with enabled_hardware_types:

```shell
[DEFAULT]
enabled_hardware_types = ipmi
```

Configure the hardware interfaces:

```shell
enabled_boot_interfaces = pxe
enabled_deploy_interfaces = direct,iscsi
enabled_inspect_interfaces = inspector
enabled_management_interfaces = ipmitool
enabled_power_interfaces = ipmitool
```

Configure the interface defaults:

```shell
[DEFAULT]
default_deploy_interface = direct
default_network_interface = neutron
```

If any driver that uses Direct deploy is enabled, the Swift backend of the Image service must be installed and configured. The Ceph Object Gateway (RADOS Gateway) is also supported as an Image service backend.

6. Restart the ironic-conductor service:

```shell
sudo systemctl restart openstack-ironic-conductor
```
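With ironic-api and ironic-conductor both running, the hardware types enabled above should be visible from the client (assuming python3-ironicclient and its OpenStack CLI plugin are installed):

```shell
# Should list the ipmi hardware type enabled in ironic.conf.
openstack baremetal driver list
```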
Configure the ironic-inspector service.

The configuration file path is /etc/ironic-inspector/inspector.conf.

1. Create the database:

```shell
# mysql -u root -p

MariaDB [(none)]> CREATE DATABASE ironic_inspector CHARACTER SET utf8;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic_inspector.* TO 'ironic_inspector'@'localhost' \
                  IDENTIFIED BY 'IRONIC_INSPECTOR_DBPASSWORD';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic_inspector.* TO 'ironic_inspector'@'%' \
                  IDENTIFIED BY 'IRONIC_INSPECTOR_DBPASSWORD';
```

2. Configure the database location with the `connection` option, replacing IRONIC_INSPECTOR_DBPASSWORD with the password of the ironic_inspector user and DB_IP with the IP address of the database server:

```shell
[database]
backend = sqlalchemy
connection = mysql+pymysql://ironic_inspector:IRONIC_INSPECTOR_DBPASSWORD@DB_IP/ironic_inspector
min_pool_size = 100
max_pool_size = 500
pool_timeout = 30
max_retries = 5
max_overflow = 200
db_retry_interval = 2
db_inc_retry_interval = True
db_max_retry_interval = 2
db_max_retries = 5
```

3. Configure the message queue address:

```shell
[DEFAULT]
transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
```

4. Set up keystone authentication:

```shell
[DEFAULT]
auth_strategy = keystone
timeout = 900
rootwrap_config = /etc/ironic-inspector/rootwrap.conf
logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s
log_dir = /var/log/ironic-inspector
state_path = /var/lib/ironic-inspector
use_stderr = False

[ironic]
api_endpoint = http://IRONIC_API_HOST_ADDRRESS:6385
auth_type = password
auth_url = http://PUBLIC_IDENTITY_IP:5000
auth_strategy = keystone
ironic_url = http://IRONIC_API_HOST_ADDRRESS:6385
os_region = RegionOne
project_name = service
project_domain_name = Default
user_domain_name = Default
username = IRONIC_SERVICE_USER_NAME
password = IRONIC_SERVICE_USER_PASSWORD

[keystone_authtoken]
auth_type = password
auth_url = http://control:5000
www_authenticate_uri = http://control:5000
project_domain_name = default
user_domain_name = default
project_name = service
username = ironic_inspector
password = IRONICPASSWD
region_name = RegionOne
memcache_servers = control:11211
token_cache_time = 300

[processing]
add_ports = active
processing_hooks = $default_processing_hooks,local_link_connection,lldp_basic
ramdisk_logs_dir = /var/log/ironic-inspector/ramdisk
always_store_ramdisk_logs = true
store_data = none
power_off = false

[pxe_filter]
driver = iptables

[capabilities]
boot_mode=True
```

5. Configure the ironic-inspector dnsmasq service:

```shell
# Configuration file: /etc/ironic-inspector/dnsmasq.conf
port=0
interface=enp3s0                          # replace with the actual listening interface
dhcp-range=172.20.19.100,172.20.19.110    # replace with the actual DHCP address range
bind-interfaces
enable-tftp
dhcp-match=set:efi,option:client-arch,7
dhcp-match=set:efi,option:client-arch,9
dhcp-match=aarch64, option:client-arch,11
dhcp-boot=tag:aarch64,grubaa64.efi
dhcp-boot=tag:!aarch64,tag:efi,grubx64.efi
dhcp-boot=tag:!aarch64,tag:!efi,pxelinux.0
tftp-root=/tftpboot                       # replace with the actual tftpboot directory
log-facility=/var/log/dnsmasq.log
```

6. Disable DHCP on the subnet of the ironic provision network:

```shell
openstack subnet set --no-dhcp 72426e89-f552-4dc4-9ac7-c4e131ce7f3c
```

7. Initialize the database of the ironic-inspector service, on the controller node:

```shell
ironic-inspector-dbsync --config-file /etc/ironic-inspector/inspector.conf upgrade
```

8. Start the services:

```shell
systemctl enable --now openstack-ironic-inspector.service
systemctl enable --now openstack-ironic-inspector-dnsmasq.service
```

Configure the httpd service.

1. Create the httpd root directory used by ironic and set its owner and group. The path must match the `http_root` option in the `[deploy]` section of /etc/ironic/ironic.conf:

```shell
mkdir -p /var/lib/ironic/httproot
chown ironic.ironic /var/lib/ironic/httproot
```

2. Install and configure the httpd service.

Install the httpd service (skip if it is already installed):

```shell
yum install httpd -y
```

Create the file /etc/httpd/conf.d/openstack-ironic-httpd.conf with the following content:

```shell
Listen 8080

<VirtualHost *:8080>
    ServerName ironic.openeuler.com
    ErrorLog "/var/log/httpd/openstack-ironic-httpd-error_log"
    CustomLog "/var/log/httpd/openstack-ironic-httpd-access_log" "%h %l %u %t \"%r\" %>s %b"
    DocumentRoot "/var/lib/ironic/httproot"
    <Directory "/var/lib/ironic/httproot">
        Options Indexes FollowSymLinks
        Require all granted
    </Directory>
    LogLevel warn
    AddDefaultCharset UTF-8
    EnableSendfile on
</VirtualHost>
```

Note that the listening port must match the port specified by the `http_url` option in the `[deploy]` section of /etc/ironic/ironic.conf.

Restart the httpd service:

```shell
systemctl restart httpd
```
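To confirm the deploy HTTP server answers on the port configured above (the server name is illustrative; the node IP works just as well):

```shell
curl -I http://$IRONIC_NODE:8080/
```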
Build the deploy ramdisk image.

The Wallaby ramdisk image can be built with the ironic-python-agent service or the disk-image-builder tool, or with the latest community ironic-python-agent-builder. Users may also choose other tools.

If the native Wallaby tools are used, install the corresponding packages:

```shell
yum install openstack-ironic-python-agent
```

or

```shell
yum install diskimage-builder
```

Refer to the official documentation for their usage.

The following describes the complete process of building the deploy image used by ironic with ironic-python-agent-builder.

Install ironic-python-agent-builder:

1. Install the tool:

```shell
pip install ironic-python-agent-builder
```

2. Modify the python interpreter in the following files:

```shell
/usr/bin/yum
/usr/libexec/urlgrabber-ext-down
```

3. Install the other required tools:

```shell
yum install git
```

Since `DIB` depends on the `semanage` command, make sure the command is available before building the image: `semanage --help`. If the command is not found, install it:

```shell
# First find out which package provides it
[root@localhost ~]# yum provides /usr/sbin/semanage
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirror.vcu.edu
 * extras: mirror.vcu.edu
 * updates: mirror.math.princeton.edu
policycoreutils-python-2.5-34.el7.aarch64 : SELinux policy core python utilities
Repo        : base
Matched from:
Filename    : /usr/sbin/semanage

# Install it
[root@localhost ~]# yum install policycoreutils-python
```

Build the image.

For the arm architecture, additionally set:

```shell
export ARCH=aarch64
```

Basic usage:

```shell
usage: ironic-python-agent-builder [-h] [-r RELEASE] [-o OUTPUT] [-e ELEMENT]
                                   [-b BRANCH] [-v] [--extra-args EXTRA_ARGS]
                                   distribution

positional arguments:
  distribution          Distribution to use

optional arguments:
  -h, --help            show this help message and exit
  -r RELEASE, --release RELEASE
                        Distribution release to use
  -o OUTPUT, --output OUTPUT
                        Output base file name
  -e ELEMENT, --element ELEMENT
                        Additional DIB element to use
  -b BRANCH, --branch BRANCH
                        If set, override the branch that is used for ironic-
                        python-agent and requirements
  -v, --verbose         Enable verbose logging in diskimage-builder
  --extra-args EXTRA_ARGS
                        Extra arguments to pass to diskimage-builder
```

Example:

```shell
ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky
```

Allow SSH login:

Initialize the environment variables, then build the image:

```shell
export DIB_DEV_USER_USERNAME=ipa \
export DIB_DEV_USER_PWDLESS_SUDO=yes \
export DIB_DEV_USER_PASSWORD='123'
ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky -e selinux-permissive -e devuser
```
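After a successful build, the kernel and ramdisk produced by the tool (here assumed to be `/mnt/ironic-agent-ssh.kernel` and `/mnt/ironic-agent-ssh.initramfs`, matching the `-o` prefix above) are typically registered in Glance so nodes can reference them as deploy images; the image names below are illustrative:

```shell
openstack image create --public --disk-format aki --container-format aki \
  --file /mnt/ironic-agent-ssh.kernel deploy-vmlinuz
openstack image create --public --disk-format ari --container-format ari \
  --file /mnt/ironic-agent-ssh.initramfs deploy-initrd
```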
Specify the code repository:

Initialize the corresponding environment variables, then build the image:

```shell
# Specify the repository location and version
DIB_REPOLOCATION_ironic_python_agent=git@172.20.2.149:liuzz/ironic-python-agent.git
DIB_REPOREF_ironic_python_agent=origin/develop

# Clone the code directly from gerrit
DIB_REPOLOCATION_ironic_python_agent=https://review.opendev.org/openstack/ironic-python-agent
DIB_REPOREF_ironic_python_agent=refs/changes/43/701043/1
```

Reference: [source-repositories](https://docs.openstack.org/diskimage-builder/latest/elements/source-repositories/README.html).

Specifying the repository location and version has been verified to work.

Note

The PXE configuration file template in upstream OpenStack does not support the arm64 architecture; the upstream OpenStack code has to be modified by the user.

In Wallaby, community ironic still does not support UEFI PXE boot on arm64. The symptom is that the generated grub.cfg file (usually under /tftpboot/) has the wrong format, which causes the PXE boot to fail. The erroneous generated configuration looks like this:

![ironic-err](../../img/install/ironic-err.png)

As shown above, on the arm architecture the commands that load the vmlinux and the ramdisk image are `linux` and `initrd` respectively, while the highlighted commands in the image are the x86 UEFI PXE boot commands. Users need to modify the code that generates grub.cfg themselves.

TLS errors when ironic sends requests to IPA to query command execution status:

In Wallaby, both IPA and ironic enable TLS authentication by default when sending requests to each other; disable it as described in the official documentation:

1. Modify the ironic configuration file (/etc/ironic/ironic.conf), adding ipa-insecure=1 to the following options:

```shell
[agent]
verify_ca = False

[pxe]
pxe_append_params = nofb nomodeset vga=normal coreos.autologin ipa-insecure=1
```

2. In the ramdisk image, add the IPA configuration file /etc/ironic_python_agent/ironic_python_agent.conf (the /etc/ironic_python_agent directory must be created in advance) and configure TLS as follows:

```shell
[DEFAULT]
enable_auto_tls = False
```

Set the permissions:

```shell
chown -R ipa.ipa /etc/ironic_python_agent/
```
3. Modify the service file of the IPA service to add the configuration file option:

```shell
vim /usr/lib/systemd/system/ironic-python-agent.service

[Unit]
Description=Ironic Python Agent
After=network-online.target

[Service]
ExecStartPre=/sbin/modprobe vfat
ExecStart=/usr/local/bin/ironic-python-agent --config-file /etc/ironic_python_agent/ironic_python_agent.conf
Restart=always
RestartSec=30s

[Install]
WantedBy=multi-user.target
```

### Kolla Installation

Kolla provides production-ready containerized deployment for OpenStack services. The Kolla and Kolla-ansible services were introduced in openEuler 22.03 LTS.

Installing Kolla is very simple; just install the corresponding RPM packages:

```shell
yum install openstack-kolla openstack-kolla-ansible
```

After installation, commands such as `kolla-ansible`, `kolla-build`, `kolla-genpwd`, and `kolla-mergepwd` are available.

### Trove Installation

Trove is the database service of OpenStack. It is recommended if you want to use the database service provided by OpenStack; otherwise it can be skipped.

1. Set up the database

The database service stores its information in a database. Create a trove database accessible by the trove user, replacing TROVE_DBPASSWORD with a suitable password:

```shell
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE trove CHARACTER SET utf8;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'localhost' \
                  IDENTIFIED BY 'TROVE_DBPASSWORD';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'%' \
                  IDENTIFIED BY 'TROVE_DBPASSWORD';
```

2. Create the service user credentials

1) Create the Trove service user:

```shell
openstack user create --password TROVE_PASSWORD \
    --email trove@example.com trove
openstack role add --project service --user trove admin
openstack service create --name trove --description "Database service" database
```

Explanation: replace TROVE_PASSWORD with the password of the trove user.

2) Create the Database service endpoints:

```shell
openstack endpoint create --region RegionOne database public http://controller:8779/v1.0/%\(tenant_id\)s
openstack endpoint create --region RegionOne database internal http://controller:8779/v1.0/%\(tenant_id\)s
openstack endpoint create --region RegionOne database admin http://controller:8779/v1.0/%\(tenant_id\)s
```

3. Install and configure the Trove components

1) Install the Trove packages:

```shell
yum install openstack-trove python-troveclient
```
2) Configure trove.conf:

```shell
vim /etc/trove/trove.conf

[DEFAULT]
bind_host=TROVE_NODE_IP
log_dir = /var/log/trove
network_driver = trove.network.neutron.NeutronDriver
management_security_groups =
nova_keypair = trove-mgmt
default_datastore = mysql
taskmanager_manager = trove.taskmanager.manager.Manager
trove_api_workers = 5
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
reboot_time_out = 300
usage_timeout = 900
agent_call_high_timeout = 1200
use_syslog = False
debug = True

# Set these if using Neutron Networking
network_driver=trove.network.neutron.NeutronDriver
network_label_regex=.*

transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/

[database]
connection = mysql+pymysql://trove:TROVE_DBPASS@controller/trove

[keystone_authtoken]
project_domain_name = Default
project_name = service
user_domain_name = Default
password = trove
username = trove
auth_url = http://controller:5000/v3/
auth_type = password

[service_credentials]
auth_url = http://controller:5000/v3/
region_name = RegionOne
project_name = service
password = trove
project_domain_name = Default
user_domain_name = Default
username = trove

[mariadb]
tcp_ports = 3306,4444,4567,4568

[mysql]
tcp_ports = 3306

[postgresql]
tcp_ports = 5432
```

Explanation, for the `[DEFAULT]` group:

- `bind_host` is set to the IP of the node Trove is deployed on.
- `nova_compute_url` and `cinder_url` are the endpoints created for Nova and Cinder in Keystone.
- `nova_proxy_XXX` is the information of a user able to access the Nova service; the example uses the admin user.
- `transport_url` is the RabbitMQ connection information; replace RABBIT_PASS with the RabbitMQ password.
- `connection` in the `[database]` group is the information of the database created for Trove in mysql above.
- In the Trove user information, replace TROVE_PASS with the actual password of the trove user.

3) Configure trove-guestagent.conf:

```shell
vim /etc/trove/trove-guestagent.conf

[DEFAULT]
log_file = trove-guestagent.log
log_dir = /var/log/trove/
ignore_users = os_admin
control_exchange = trove
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
rpc_backend = rabbit
command_process_timeout = 60
use_syslog = False
debug = True

[service_credentials]
auth_url = http://controller:5000/v3/
region_name = RegionOne
project_name = service
password = TROVE_PASS
project_domain_name = Default
user_domain_name = Default
username = trove

[mysql]
docker_image = your-registry/your-repo/mysql
backup_docker_image = your-registry/your-repo/db-backup-mysql:1.1.0
```

Explanation: the guestagent is an independent Trove component that must be built into the virtual machine image that Trove creates through Nova in advance. After a database instance is created, the guestagent process starts and reports heartbeats to Trove through the message queue (RabbitMQ), so the RabbitMQ user and password must be configured. Starting with Victoria, Trove uses a single unified image to run the different database types; the database services run in Docker containers inside the guest virtual machine.

- `transport_url` is the RabbitMQ connection information; replace RABBIT_PASS with the RabbitMQ password.
- In the Trove user information, replace TROVE_PASS with the actual password of the trove user.
4. Populate the Trove database:

```shell
su -s /bin/sh -c "trove-manage db_sync" trove
```

5. Finish the installation and configuration

Configure the Trove services to start automatically:

```shell
systemctl enable openstack-trove-api.service \
                 openstack-trove-taskmanager.service \
                 openstack-trove-conductor.service
```

Start the services:

```shell
systemctl start openstack-trove-api.service \
                openstack-trove-taskmanager.service \
                openstack-trove-conductor.service
```

### Swift Installation

Swift provides an elastic, scalable, highly available distributed object storage service, suitable for storing large-scale unstructured data.

Create the service credentials and the API endpoints.

Create the service credentials:

```shell
# Create the swift user:
openstack user create --domain default --password-prompt swift
# Add the admin role to the swift user:
openstack role add --project service --user swift admin
# Create the swift service entity:
openstack service create --name swift --description "OpenStack Object Storage" object-store
```

Create the swift API endpoints:

```shell
openstack endpoint create --region RegionOne object-store public http://controller:8080/v1/AUTH_%\(project_id\)s
openstack endpoint create --region RegionOne object-store internal http://controller:8080/v1/AUTH_%\(project_id\)s
openstack endpoint create --region RegionOne object-store admin http://controller:8080/v1
```

Install the packages:

```shell
yum install openstack-swift-proxy python3-swiftclient python3-keystoneclient python3-keystonemiddleware memcached    (CTL)
```

Configure proxy-server:

The Swift RPM package already ships a basically usable proxy-server.conf; only the ip and the swift password in it need to be changed manually.

***Note***

**Replace the password with the password you chose for the swift user in the Identity service.**

4. Install and configure the storage nodes (STG)

Install the supporting packages:

```shell
yum install xfsprogs rsync
```

Format the /dev/vdb and /dev/vdc devices as XFS:

```shell
mkfs.xfs /dev/vdb
mkfs.xfs /dev/vdc
```

Create the mount point directory structure:

```shell
mkdir -p /srv/node/vdb
mkdir -p /srv/node/vdc
```

Find the UUIDs of the new partitions:

```shell
blkid
```

Edit the /etc/fstab file and add the following to it:

```shell
UUID="" /srv/node/vdb xfs noatime 0 2
UUID="" /srv/node/vdc xfs noatime 0 2
```

Mount the devices:

```shell
mount /srv/node/vdb
mount /srv/node/vdc
```

***Note***

**If disaster tolerance is not required, it is enough to create just one device, and the rsync configuration below can be skipped.**
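The fstab entries above need the UUIDs reported by blkid filled in; a small sketch that appends them automatically for the two example devices and mounts them:

```shell
# Append fstab entries using the UUID of each device, then mount them.
for dev in vdb vdc; do
  uuid=$(blkid -s UUID -o value /dev/$dev)
  echo "UUID=$uuid /srv/node/$dev xfs noatime 0 2" >> /etc/fstab
  mount /srv/node/$dev
done
```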
(Optional) Create or edit the /etc/rsyncd.conf file to contain the following:

```shell
[DEFAULT]
uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
address = MANAGEMENT_INTERFACE_IP_ADDRESS

[account]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/account.lock

[container]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/container.lock

[object]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/object.lock
```

**Replace MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network on the storage node.**

Start the rsyncd service and configure it to start when the system boots:

```shell
systemctl enable rsyncd.service
systemctl start rsyncd.service
```

5. Install and configure the components on the storage nodes (STG)

Install the packages:

```shell
yum install openstack-swift-account openstack-swift-container openstack-swift-object
```

Edit the account-server.conf, container-server.conf, and object-server.conf files in the /etc/swift directory, replacing bind_ip with the IP address of the management network on the storage node.

Ensure proper ownership of the mount point directory structure:

```shell
chown -R swift:swift /srv/node
```

Create the recon directory and ensure proper ownership of it:

```shell
mkdir -p /var/cache/swift
chown -R root:swift /var/cache/swift
chmod -R 775 /var/cache/swift
```

6. Create the account ring (CTL)

Change to the /etc/swift directory:

```shell
cd /etc/swift
```

Create the base account.builder file:

```shell
swift-ring-builder account.builder create 10 1 1
```

Add each storage node to the ring:

```shell
swift-ring-builder account.builder add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6202 --device DEVICE_NAME --weight DEVICE_WEIGHT
```

**Replace STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network on the storage node. Replace DEVICE_NAME with the name of a storage device on the same storage node.**

***Note***

**Repeat this command for every storage device on every storage node.**

Verify the ring contents:

```shell
swift-ring-builder account.builder
```

Rebalance the ring:

```shell
swift-ring-builder account.builder rebalance
```
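When a storage node holds several devices, the add step above can be repeated in a loop; a small sketch for the two example devices prepared earlier on one node:

```shell
# Add both example devices of one storage node to the account ring.
for dev in vdb vdc; do
  swift-ring-builder account.builder add \
    --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS \
    --port 6202 --device $dev --weight 100
done
```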
7. Create the container ring (CTL)

Change to the `/etc/swift` directory.

Create the base `container.builder` file:

```shell
swift-ring-builder container.builder create 10 1 1
```

Add each storage node to the ring:

```shell
swift-ring-builder container.builder \
    add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6201 \
    --device DEVICE_NAME --weight 100
```

**Replace STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network on the storage node. Replace DEVICE_NAME with the name of a storage device on the same storage node.**

***Note***

**Repeat this command for every storage device on every storage node.**

Verify the ring contents:

```shell
swift-ring-builder container.builder
```

Rebalance the ring:

```shell
swift-ring-builder container.builder rebalance
```

8. Create the object ring (CTL)

Change to the `/etc/swift` directory.

Create the base `object.builder` file:

```shell
swift-ring-builder object.builder create 10 1 1
```

Add each storage node to the ring:

```shell
swift-ring-builder object.builder \
    add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6200 \
    --device DEVICE_NAME --weight 100
```

**Replace STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network on the storage node. Replace DEVICE_NAME with the name of a storage device on the same storage node.**

***Note***

**Repeat this command for every storage device on every storage node.**

Verify the ring contents:

```shell
swift-ring-builder object.builder
```

Rebalance the ring:

```shell
swift-ring-builder object.builder rebalance
```

Distribute the ring configuration files:

Copy the `account.ring.gz`, `container.ring.gz`, and `object.ring.gz` files to the `/etc/swift` directory on every storage node and on any other node running the proxy service.

9. Complete the installation

Edit the /etc/swift/swift.conf file:

```shell
[swift-hash]
swift_hash_path_suffix = test-hash
swift_hash_path_prefix = test-hash

[storage-policy:0]
name = Policy-0
default = yes
```

Replace test-hash with unique values.

Copy the swift.conf file to the /etc/swift directory on every storage node and on any other node running the proxy service.

On all nodes, ensure proper ownership of the configuration directory:

```shell
chown -R root:swift /etc/swift
```

On the controller node and on any other node running the proxy service, start the Object Storage proxy service and its dependencies, and configure them to start when the system boots:

```shell
systemctl enable openstack-swift-proxy.service memcached.service
systemctl start openstack-swift-proxy.service memcached.service
```

On the storage nodes, start the Object Storage services and configure them to start when the system boots:

```shell
systemctl enable openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service
systemctl start openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service

systemctl enable openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service
systemctl start openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service
```
```shell
systemctl enable openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service
systemctl start openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service
```

### Cyborg Installation

Cyborg provides support for accelerator devices in OpenStack, including GPU, FPGA, ASIC, NP, SoCs, NVMe/NOF SSDs, ODP, DPDK/SPDK, and so on.

1. Initialize the database:

```shell
CREATE DATABASE cyborg;
GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'localhost' IDENTIFIED BY 'CYBORG_DBPASS';
GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'%' IDENTIFIED BY 'CYBORG_DBPASS';
```

2. Create the Keystone resource objects:

```shell
$ openstack user create --domain default --password-prompt cyborg
$ openstack role add --project service --user cyborg admin
$ openstack service create --name cyborg --description "Acceleration Service" accelerator

$ openstack endpoint create --region RegionOne \
  accelerator public http://<cyborg-ip>:6666/v1
$ openstack endpoint create --region RegionOne \
  accelerator internal http://<cyborg-ip>:6666/v1
$ openstack endpoint create --region RegionOne \
  accelerator admin http://<cyborg-ip>:6666/v1
```

3. Install Cyborg:

```shell
yum install openstack-cyborg
```

4. Configure Cyborg:

Modify /etc/cyborg/cyborg.conf:

```shell
[DEFAULT]
transport_url = rabbit://%RABBITMQ_USER%:%RABBITMQ_PASSWORD%@%OPENSTACK_HOST_IP%:5672/
use_syslog = False
state_path = /var/lib/cyborg
debug = True

[database]
connection = mysql+pymysql://%DATABASE_USER%:%DATABASE_PASSWORD%@%OPENSTACK_HOST_IP%/cyborg

[service_catalog]
project_domain_id = default
user_domain_id = default
project_name = service
password = PASSWORD
username = cyborg
auth_url = http://%OPENSTACK_HOST_IP%/identity
auth_type = password

[placement]
project_domain_name = Default
project_name = service
user_domain_name = Default
password = PASSWORD
username = placement
auth_url = http://%OPENSTACK_HOST_IP%/identity
auth_type = password

[keystone_authtoken]
memcached_servers = localhost:11211
project_domain_name = Default
project_name = service
user_domain_name = Default
password = PASSWORD
username = cyborg
auth_url = http://%OPENSTACK_HOST_IP%/identity
auth_type = password
```

Modify the usernames, passwords, IP addresses, and other values to match your environment.

5. Synchronize the database tables:

```shell
cyborg-dbsync --config-file /etc/cyborg/cyborg.conf upgrade
```

6. Start the Cyborg services:

```shell
systemctl enable openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent
systemctl start openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent
```
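A quick check that the three Cyborg services actually stayed up after starting:

```shell
systemctl status --no-pager openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent
```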
### Aodh Installation

1. Create the database:

```shell
CREATE DATABASE aodh;
GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'localhost' IDENTIFIED BY 'AODH_DBPASS';
GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'%' IDENTIFIED BY 'AODH_DBPASS';
```

2. Create the Keystone resource objects:

```shell
openstack user create --domain default --password-prompt aodh
openstack role add --project service --user aodh admin
openstack service create --name aodh --description "Telemetry" alarming
openstack endpoint create --region RegionOne alarming public http://controller:8042
openstack endpoint create --region RegionOne alarming internal http://controller:8042
openstack endpoint create --region RegionOne alarming admin http://controller:8042
```

3. Install Aodh:

```shell
yum install openstack-aodh-api openstack-aodh-evaluator openstack-aodh-notifier openstack-aodh-listener openstack-aodh-expirer python3-aodhclient
```

Note: the python3-pyparsing package in the openEuler OS repository is not compatible with aodh; the matching OpenStack version must be installed over it. Use `yum list |grep pyparsing |grep OpenStack | awk '{print $2}'` to obtain the corresponding version VERSION, then install the compatible pyparsing with `yum install -y python3-pyparsing-VERSION`.

4. Modify the configuration file:

```shell
[database]
connection = mysql+pymysql://aodh:AODH_DBPASS@controller/aodh

[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = aodh
password = AODH_PASS

[service_credentials]
auth_type = password
auth_url = http://controller:5000/v3
project_domain_id = default
user_domain_id = default
project_name = service
username = aodh
password = AODH_PASS
interface = internalURL
region_name = RegionOne
```

5. Initialize the database:

```shell
aodh-dbsync
```

6. Start the Aodh services:

```shell
systemctl enable openstack-aodh-api.service openstack-aodh-evaluator.service openstack-aodh-notifier.service openstack-aodh-listener.service
systemctl start openstack-aodh-api.service openstack-aodh-evaluator.service openstack-aodh-notifier.service openstack-aodh-listener.service
```
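With the services running, the alarming API can be exercised with the client installed above; an empty list is the expected result on a fresh deployment:

```shell
source ~/.admin-openrc
aodh alarm list
```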
4. Modify the configuration file `/etc/gnocchi/gnocchi.conf`:

```
[api]
auth_mode = keystone
port = 8041
uwsgi_mode = http-socket

[keystone_authtoken]
auth_type = password
auth_url = http://controller:5000/v3
project_domain_name = Default
user_domain_name = Default
project_name = service
username = gnocchi
password = GNOCCHI_PASS
interface = internalURL
region_name = RegionOne

[indexer]
url = mysql+pymysql://gnocchi:GNOCCHI_DBPASS@controller/gnocchi

[storage]
# coordination_url is not required but specifying one will improve
# performance with better workload division across workers.
coordination_url = redis://controller:6379
file_basepath = /var/lib/gnocchi
driver = file
```

5. Initialize the database:

```
gnocchi-upgrade
```

6. Start the Gnocchi services:

```
systemctl enable openstack-gnocchi-api.service openstack-gnocchi-metricd.service
systemctl start openstack-gnocchi-api.service openstack-gnocchi-metricd.service
```

## Ceilometer Installation

1. Create the corresponding Keystone resource objects:

```
openstack user create --domain default --password-prompt ceilometer
openstack role add --project service --user ceilometer admin
openstack service create --name ceilometer --description "Telemetry" metering
```

2. Install Ceilometer:

```
yum install openstack-ceilometer-notification openstack-ceilometer-central
```

3. Modify the configuration file `/etc/ceilometer/pipeline.yaml`:

```
publishers:
    # set address of Gnocchi
    # + filter out Gnocchi-related activity meters (Swift driver)
    # + set default archive policy
    - gnocchi://?filter_project=service&archive_policy=low
```

4. Modify the configuration file `/etc/ceilometer/ceilometer.conf`:

```
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller

[service_credentials]
auth_type = password
auth_url = http://controller:5000/v3
project_domain_id = default
user_domain_id = default
project_name = service
username = ceilometer
password = CEILOMETER_PASS
interface = internalURL
region_name = RegionOne
```

5. Initialize the database:

```
ceilometer-upgrade
```

6. Start the Ceilometer services:

```
systemctl enable openstack-ceilometer-notification.service openstack-ceilometer-central.service
systemctl start openstack-ceilometer-notification.service openstack-ceilometer-central.service
```
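The guide does not include a verification step for the telemetry stack. As an optional sketch, and assuming the python3-gnocchiclient and python3-aodhclient packages installed above, the services can be smoke-checked with their CLIs once Ceilometer has had a few minutes to publish samples:

```
source ~/.admin-openrc
# metrics and resources recorded by Ceilometer should start appearing in Gnocchi
gnocchi resource list
gnocchi metric list
# the Aodh alarm API should respond (the list stays empty until alarms are created)
aodh alarm list
```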
## Heat Installation

1. Create the heat database and grant it the proper access; replace HEAT_DBPASS with a suitable password:

```
CREATE DATABASE heat;
GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'localhost' IDENTIFIED BY 'HEAT_DBPASS';
GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'%' IDENTIFIED BY 'HEAT_DBPASS';
```

2. Create the service credentials: create the heat user and add the admin role to it:

```
openstack user create --domain default --password-prompt heat
openstack role add --project service --user heat admin
```

3. Create the heat and heat-cfn services and their API endpoints:

```
openstack service create --name heat --description "Orchestration" orchestration
openstack service create --name heat-cfn --description "Orchestration" cloudformation
openstack endpoint create --region RegionOne orchestration public http://controller:8004/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne orchestration internal http://controller:8004/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne orchestration admin http://controller:8004/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne cloudformation public http://controller:8000/v1
openstack endpoint create --region RegionOne cloudformation internal http://controller:8000/v1
openstack endpoint create --region RegionOne cloudformation admin http://controller:8000/v1
```

4. Create the additional identity information required for stack management, including the heat domain, its domain admin user heat_domain_admin, and the heat_stack_owner and heat_stack_user roles:

```
openstack user create --domain heat --password-prompt heat_domain_admin
openstack role add --domain heat --user-domain heat --user heat_domain_admin admin
openstack role create heat_stack_owner
openstack role create heat_stack_user
```

5. Install the packages:

```
yum install openstack-heat-api openstack-heat-api-cfn openstack-heat-engine
```

6. Modify the configuration file `/etc/heat/heat.conf`:

```
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
heat_metadata_server_url = http://controller:8000
heat_waitcondition_server_url = http://controller:8000/v1/waitcondition
stack_domain_admin = heat_domain_admin
stack_domain_admin_password = HEAT_DOMAIN_PASS
stack_user_domain_name = heat

[database]
connection = mysql+pymysql://heat:HEAT_DBPASS@controller/heat

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = heat
password = HEAT_PASS

[trustee]
auth_type = password
auth_url = http://controller:5000
username = heat
password = HEAT_PASS
user_domain_name = default

[clients_keystone]
auth_uri = http://controller:5000
```

7. Initialize the heat database tables:

```
su -s /bin/sh -c "heat-manage db_sync" heat
```

8. Start the services:

```
systemctl enable openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service
systemctl start openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service
```
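The guide stops at starting the Heat services. As a small optional verification sketch (the template file name and the no-op OS::Heat::None resource are illustrative choices, and python3-heatclient is assumed to be installed so the `openstack stack` commands are available), an empty stack can be created and removed:

```
cat > /tmp/heat-noop.yaml << 'EOF'
heat_template_version: 2016-10-14
resources:
  noop:
    type: OS::Heat::None
EOF
source ~/.admin-openrc
openstack stack create -t /tmp/heat-noop.yaml noop-stack
openstack stack list
openstack stack delete --yes noop-stack
```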
## Quick deployment with the OpenStack SIG development tool oos

oos (openEuler OpenStack SIG) is the command-line tool provided by the OpenStack SIG. Its `oos env` family of commands provides ansible scripts for one-click deployment of OpenStack (all in one, or a three-node cluster), so users can quickly deploy an openEuler-RPM-based OpenStack environment. The oos tool supports two ways of providing machines: a cloud provider (currently only the Huawei Cloud provider) or taking over existing hosts. The following uses an all-in-one OpenStack deployment on Huawei Cloud as an example of how to use oos.

Install the oos tool:

```
pip install openstack-sig-tool
```

Configure the Huawei Cloud provider information. Open `/usr/local/etc/oos/oos.conf` and fill in the Huawei Cloud resources you own:

```
[huaweicloud]
ak =
sk =
region = ap-southeast-3
root_volume_size = 100
data_volume_size = 100
security_group_name = oos
image_format = openEuler-%%(release)s-%%(arch)s
vpc_name = oos_vpc
subnet1_name = oos_subnet1
subnet2_name = oos_subnet2
```

Configure the OpenStack environment information. Open `/usr/local/etc/oos/oos.conf` and adjust the configuration to the current machine and your needs. The content looks like:

```
[environment]
mysql_root_password = root
mysql_project_password = root
rabbitmq_password = root
project_identity_password = root
enabled_service = keystone,neutron,cinder,placement,nova,glance,horizon,aodh,ceilometer,cyborg,gnocchi,kolla,heat,swift,trove,tempest
neutron_provider_interface_name = br-ex
default_ext_subnet_range = 10.100.100.0/24
default_ext_subnet_gateway = 10.100.100.1
neutron_dataplane_interface_name = eth1
cinder_block_device = vdb
swift_storage_devices = vdc
swift_hash_path_suffix = ash
swift_hash_path_prefix = has
glance_api_workers = 2
cinder_api_workers = 2
nova_api_workers = 2
nova_metadata_api_workers = 2
nova_conductor_workers = 2
nova_scheduler_workers = 2
neutron_api_workers = 2
horizon_allowed_host = *
kolla_openeuler_plugin = false
```

Key options:

| Option | Description |
|:---|:---|
| enabled_service | List of services to install; trim or extend it to your needs |
| neutron_provider_interface_name | Name of the Neutron L3 bridge |
| default_ext_subnet_range | IP range of the Neutron private network |
| default_ext_subnet_gateway | Gateway of the Neutron private network |
| neutron_dataplane_interface_name | NIC used by Neutron; a new, dedicated NIC is recommended to avoid conflicts with the existing NIC and to prevent the all-in-one host from losing connectivity |
| cinder_block_device | Name of the block device used by Cinder |
| swift_storage_devices | Name of the block device used by Swift |
| kolla_openeuler_plugin | Whether to enable the Kolla plugin; if set to True, Kolla can deploy openEuler containers |

Create an openEuler 22.03-LTS-SP3 x86_64 virtual machine on Huawei Cloud to host the all-in-one OpenStack deployment:

```
# sshpass is used during `oos env create` to configure password-free access to the target VM
dnf install sshpass
oos env create -r 22.03-lts-SP3 -f small -a x86 -n test-oos all_in_one
```

See `oos env create --help` for all parameters.

Deploy the all-in-one OpenStack environment:

```
oos env setup test-oos -r wallaby
```

See `oos env setup --help` for all parameters.

Initialize the Tempest environment. If you want to run Tempest tests against this environment, run `oos env init`, which automatically creates the OpenStack resources Tempest needs:

```
oos env init test-oos
```

After the command succeeds, a `mytest` directory is created under the user's home directory; change into it and run the `tempest run` command.

If OpenStack is deployed by taking over an existing host instead, the overall flow is the same as for Huawei Cloud above: steps 1, 3, 5, and 6 are unchanged, step 2 (configuring the Huawei Cloud provider) is dropped, and step 4 changes from creating a virtual machine on Huawei Cloud to taking over the host:

```
# sshpass is used during `oos env create` to configure password-free access to the target host
dnf install sshpass
oos env manage -r 22.03-lts-SP3 -i TARGET_MACHINE_IP -p TARGET_MACHINE_PASSWD -n test-oos
```

Replace TARGET_MACHINE_IP with the target machine's IP address and TARGET_MACHINE_PASSWD with the target machine's password. See `oos env manage --help` for all parameters.
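For reference only, the cloud-provider flow above can be strung together into a single small script. It simply re-uses the commands already shown and assumes the [huaweicloud] and [environment] sections of `/usr/local/etc/oos/oos.conf` are already filled in; the `~/mytest` path follows the guide's statement that `oos env init` creates the directory under the user's home directory:

```
#!/bin/sh
set -e
dnf install -y sshpass
oos env create -r 22.03-lts-SP3 -f small -a x86 -n test-oos all_in_one
oos env setup test-oos -r wallaby
oos env init test-oos
cd ~/mytest && tempest run
```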
# OpenStack-Train Deployment Guide (openEuler 22.03-LTS-SP4)

Contents:

- OpenStack overview
- Conventions
- Preparing the environment
  - Environment configuration
- Install the SQL database
- Install RabbitMQ
- Install Memcached
- Install OpenStack
  - Keystone, Glance, Placement, Nova, Neutron, Cinder, Horizon, Tempest, Ironic, Kolla, Trove, Swift, Cyborg, Aodh, Gnocchi, Ceilometer, Heat
- Quick deployment with the OpenStack SIG development tool oos
- Deployment with the OpenStack SIG deployment tool opensd
  - Deployment steps:
    1. Information to confirm before deployment
    2. Ceph pool and authentication creation (optional): 2.1 create the pool, 2.2 initialize the pool, 2.3 create the user authentication
    3. Configure LVM (optional)
    4. Configure the yum repo: 4.1 back up the yum source, 4.2 configure the yum repo, 4.3 refresh the yum cache
    5. Install opensd: 5.1 clone the opensd source code and install it
    6. Set up SSH mutual trust: 6.1 generate the key pair, 6.2 generate the host IP address file, 6.3 change the password and run the script, 6.4 set up trust between the deployment node and the ceph monitor (optional)
    7. Configure opensd: 7.1 generate random passwords, 7.2 configure the inventory file, 7.3 configure the global variables, 7.4 check the SSH connection status of all nodes
    8. Run the deployment: 8.1 run bootstrap, 8.2 reboot the servers, 8.3 run the pre-deployment checks, 8.4 deploy

## OpenStack Overview

OpenStack is both a community and a project. It provides an operating platform and a tool set for deploying clouds, giving organizations scalable and flexible cloud computing. As an open-source cloud management platform, OpenStack combines several core components, such as nova, cinder, neutron, glance, keystone, and horizon, to do the actual work. OpenStack supports almost every type of cloud environment; the project's goal is a cloud management platform that is simple to implement, massively scalable, feature-rich, and standardized. OpenStack delivers an Infrastructure-as-a-Service (IaaS) solution through a set of complementary services, each of which exposes an API for integration.

The official openEuler 22.03-LTS-SP4 repository already provides the OpenStack-Train release; after configuring the yum repository, users can deploy OpenStack by following this document.

## Conventions

OpenStack can be deployed in many topologies. This document covers both the ALL in One and the Distributed deployment modes, with the following conventions:

- ALL in One mode: ignore all possible suffixes.
- Distributed mode:
  - a `(CTL)` suffix means the configuration item or command applies only to the `controller node`;
  - a `(CPT)` suffix means the configuration item or command applies only to the `compute node`;
  - a `(STG)` suffix means the configuration item or command applies only to the `storage node`;
  - anything else applies to both the `controller node` and the `compute node`.

Note: the services affected by these conventions are Cinder, Nova, and Neutron.

## Preparing the Environment

### Environment configuration

Enable the OpenStack Train yum repository:

```
yum update
yum install openstack-release-train
yum clean all && yum makecache
```

Note: if EPOL is not enabled in your environment's yum configuration, configure EPOL as well and make sure it is present, as shown below:

```
vi /etc/yum.repos.d/openEuler.repo

[EPOL]
name=EPOL
baseurl=http://repo.openeuler.org/openEuler-22.03-LTS-SP4/EPOL/main/$basearch/
enabled=1
gpgcheck=1
gpgkey=http://repo.openeuler.org/openEuler-22.03-LTS-SP4/OS/$basearch/RPM-GPG-KEY-openEuler
```

Modify the host names and the host mapping.

Set the host name of each node:

```
hostnamectl set-hostname controller                          (CTL)
hostnamectl set-hostname compute                             (CPT)
```

Assuming the controller node's IP address is 10.0.0.11 and the compute node's IP address (if there is one) is
10.0.0.12, add the following entries to `/etc/hosts`:

```
10.0.0.11 controller
10.0.0.12 compute
```

## Install the SQL Database

Install the packages:

```
yum install mariadb mariadb-server python3-PyMySQL
```

Create and edit the `/etc/my.cnf.d/openstack.cnf` file:

```
vim /etc/my.cnf.d/openstack.cnf

[mysqld]
bind-address = 10.0.0.11
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
```

Note: set `bind-address` to the management IP address of the controller node.

Start the database service and enable it at boot:

```
systemctl enable mariadb.service
systemctl start mariadb.service
```

Set the default database password (optional):

```
mysql_secure_installation
```

Note: simply follow the prompts.

## Install RabbitMQ

Install the packages:

```
yum install rabbitmq-server
```

Start the RabbitMQ service and enable it at boot:

```
systemctl enable rabbitmq-server.service
systemctl start rabbitmq-server.service
```

Add the OpenStack user:

```
rabbitmqctl add_user openstack RABBIT_PASS
```

Note: replace RABBIT_PASS with the password you want for the OpenStack user.

Set the openstack user's permissions to allow configuration, write, and read access:

```
rabbitmqctl set_permissions openstack ".*" ".*" ".*"
```

## Install Memcached

Install the dependency packages:

```
yum install memcached python3-memcached
```

Edit the `/etc/sysconfig/memcached` file:

```
vim /etc/sysconfig/memcached

OPTIONS="-l 127.0.0.1,::1,controller"
```

Start the Memcached service and enable it at boot:

```
systemctl enable memcached.service
systemctl start memcached.service
```

Note: after the service starts, you can run `memcached-tool controller stats` to make sure it started correctly and is available; `controller` can be replaced with the management IP address of the controller node.

## Install OpenStack

### Keystone Installation

Create the keystone database and grant privileges:

```
mysql -u root -p
MariaDB [(none)]> CREATE DATABASE keystone;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
IDENTIFIED BY 'KEYSTONE_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
IDENTIFIED BY 'KEYSTONE_DBPASS';
MariaDB [(none)]> exit
```

Note: replace KEYSTONE_DBPASS with the password you want for the Keystone database.

Install the packages:

```
yum install openstack-keystone httpd mod_wsgi
```

Configure Keystone:

```
vim /etc/keystone/keystone.conf

[database]
connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone

[token]
provider = fernet
```

Explanation: the [database] section configures the database entry point; the [token] section configures the token provider.
Note: replace KEYSTONE_DBPASS with the password of the Keystone database.

Synchronize the database:

```
su -s /bin/sh -c "keystone-manage db_sync" keystone
```

Initialize the Fernet key repositories:

```
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
```

Bootstrap the identity service:

```
keystone-manage bootstrap --bootstrap-password ADMIN_PASS \
    --bootstrap-admin-url http://controller:5000/v3/ \
    --bootstrap-internal-url http://controller:5000/v3/ \
    --bootstrap-public-url http://controller:5000/v3/ \
    --bootstrap-region-id RegionOne
```

Note: replace ADMIN_PASS with the password you want for the admin user.

Configure the Apache HTTP server:

```
vim /etc/httpd/conf/httpd.conf

ServerName controller
```

```
ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
```

Explanation: set the ServerName entry to refer to the controller node. Note: create the ServerName entry if it does not exist.

Start the Apache HTTP service:

```
systemctl enable httpd.service
systemctl start httpd.service
```

Create the environment variable configuration:

```
cat << EOF >> ~/.admin-openrc
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
EOF
```

Note: replace ADMIN_PASS with the admin user's password.

Create the domain, projects, users, and roles in turn; python3-openstackclient must be installed first:

```
yum install python3-openstackclient
```

Source the environment variables:

```
source ~/.admin-openrc
```

Create the project `service`; the domain `default` was already created during `keystone-manage bootstrap`:

```
openstack domain create --description "An Example Domain" example
openstack project create --domain default --description "Service Project" service
```

Create the (non-admin) project `myproject`, user `myuser`, and role `myrole`, and add the role `myrole` to `myproject` and `myuser`:

```
openstack project create --domain default --description "Demo Project" myproject
openstack user create --domain default --password-prompt myuser
openstack role create myrole
openstack role add --project myproject --user myuser myrole
```

Verification.

Unset the temporary environment variables OS_AUTH_URL and OS_PASSWORD:

```
source ~/.admin-openrc
unset OS_AUTH_URL OS_PASSWORD
```

Request a token as the admin user:

```
openstack --os-auth-url http://controller:5000/v3 \
    --os-project-domain-name Default --os-user-domain-name Default \
    --os-project-name admin --os-username admin token issue
```

Request a token as the myuser user:

```
openstack --os-auth-url http://controller:5000/v3 \
    --os-project-domain-name Default --os-user-domain-name Default \
    --os-project-name myproject --os-username myuser token issue
```
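Optionally, and only as a sketch since the guide itself defines just `~/.admin-openrc`, a second credentials file for the demo user makes the myuser token request above less repetitive. The file name `~/.myuser-openrc` and the MYUSER_PASS placeholder are illustrative, not part of the guide:

```
cat << EOF >> ~/.myuser-openrc
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=myproject
export OS_USERNAME=myuser
# MYUSER_PASS: the password entered at 'openstack user create --password-prompt myuser'
export OS_PASSWORD=MYUSER_PASS
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
EOF
source ~/.myuser-openrc
openstack token issue
```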
### Glance Installation

Create the database, service credentials, and API endpoints.

Create the database:

```
mysql -u root -p
MariaDB [(none)]> CREATE DATABASE glance;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
IDENTIFIED BY 'GLANCE_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
IDENTIFIED BY 'GLANCE_DBPASS';
MariaDB [(none)]> exit
```

Note: replace GLANCE_DBPASS with the password you want for the glance database.

Create the service credentials:

```
source ~/.admin-openrc

openstack user create --domain default --password-prompt glance
openstack role add --project service --user glance admin
openstack service create --name glance --description "OpenStack Image" image
```

Create the image service API endpoints:

```
openstack endpoint create --region RegionOne image public http://controller:9292
openstack endpoint create --region RegionOne image internal http://controller:9292
openstack endpoint create --region RegionOne image admin http://controller:9292
```

Install the packages:

```
yum install openstack-glance
```

Configure Glance:

```
vim /etc/glance/glance-api.conf

[database]
connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = GLANCE_PASS

[paste_deploy]
flavor = keystone

[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
```

Explanation: the [database] section configures the database entry point; the [keystone_authtoken] and [paste_deploy] sections configure the identity service entry point; the [glance_store] section configures the local filesystem store and the location of image files.

Notes: replace GLANCE_DBPASS with the glance database password; replace GLANCE_PASS with the glance user's password.

Synchronize the database:

```
su -s /bin/sh -c "glance-manage db_sync" glance
```

Start the service:

```
systemctl enable openstack-glance-api.service
systemctl start openstack-glance-api.service
```

Verification.

Download an image:

```
source ~/.admin-openrc

wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
```

Note: if your environment is a Kunpeng (aarch64) machine, download the aarch64 version of the image; the cirros-0.5.2-aarch64-disk.img image has been tested.

Upload the image to the Image service:

```
openstack image create --disk-format qcow2 --container-format bare \
    --file cirros-0.4.0-x86_64-disk.img --public cirros
```

Confirm the image upload and verify its attributes:

```
openstack image list
```
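On aarch64 hosts the same verification can be done with the aarch64 image mentioned in the note above. The download URL below follows the usual download.cirros-cloud.net layout for the 0.5.2 release and should be treated as an assumption to verify, not part of the original guide:

```
wget http://download.cirros-cloud.net/0.5.2/cirros-0.5.2-aarch64-disk.img
openstack image create --disk-format qcow2 --container-format bare \
    --file cirros-0.5.2-aarch64-disk.img --public cirros
openstack image list
```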
### Placement Installation

Create the database, service credentials, and API endpoints.

Create the database: as the root user, access the database, then create the placement database and grant privileges:

```
mysql -u root -p
MariaDB [(none)]> CREATE DATABASE placement;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' \
IDENTIFIED BY 'PLACEMENT_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' \
IDENTIFIED BY 'PLACEMENT_DBPASS';
MariaDB [(none)]> exit
```

Note: replace PLACEMENT_DBPASS with the password you want for the placement database.

```
source admin-openrc
```

Run the following commands to create the placement service credentials, create the placement user, and add the admin role to the placement user.

Create the Placement API service:

```
openstack user create --domain default --password-prompt placement
openstack role add --project service --user placement admin
openstack service create --name placement --description "Placement API" placement
```

Create the placement service API endpoints:

```
openstack endpoint create --region RegionOne placement public http://controller:8778
openstack endpoint create --region RegionOne placement internal http://controller:8778
openstack endpoint create --region RegionOne placement admin http://controller:8778
```

Install and configure.

Install the packages:

```
yum install openstack-placement-api
```

Configure placement by editing the `/etc/placement/placement.conf` file: in the [placement_database] section, configure the database entry point; in the [api] and [keystone_authtoken] sections, configure the identity service entry point:

```
# vim /etc/placement/placement.conf

[placement_database]
# ...
connection = mysql+pymysql://placement:PLACEMENT_DBPASS@controller/placement

[api]
# ...
auth_strategy = keystone

[keystone_authtoken]
# ...
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = placement
password = PLACEMENT_PASS
```

Replace PLACEMENT_DBPASS with the placement database password and PLACEMENT_PASS with the placement user's password.

Synchronize the database:

```
su -s /bin/sh -c "placement-manage db sync" placement
```

Restart the httpd service:

```
systemctl restart httpd
```

Verification. Run the following commands to perform a status check:
```
. admin-openrc
placement-status upgrade check
```

Install osc-placement and list the available resource classes and traits:

```
yum install python3-osc-placement

openstack --os-placement-api-version 1.2 resource class list --sort-column name
openstack --os-placement-api-version 1.6 trait list --sort-column name
```

### Nova Installation

Create the database, service credentials, and API endpoints.

Create the databases:

```
mysql -u root -p                                              (CTL)

MariaDB [(none)]> CREATE DATABASE nova_api;
MariaDB [(none)]> CREATE DATABASE nova;
MariaDB [(none)]> CREATE DATABASE nova_cell0;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> exit
```

Note: replace NOVA_DBPASS with the password you want for the nova databases.

```
source ~/.admin-openrc                                        (CTL)
```

Create the nova service credentials:

```
openstack user create --domain default --password-prompt nova                      (CTL)
openstack role add --project service --user nova admin                             (CTL)
openstack service create --name nova --description "OpenStack Compute" compute     (CTL)
```

Create the nova API endpoints:

```
openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1    (CTL)
openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1  (CTL)
openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1     (CTL)
```

Install the packages:

```
yum install openstack-nova-api openstack-nova-conductor \    (CTL)
            openstack-nova-novncproxy openstack-nova-scheduler

yum install openstack-nova-compute                            (CPT)
```

Note: on arm64 you also need to run the following command:

```
yum install edk2-aarch64                                      (CPT)
```
Configure the nova-related settings:

```
vim /etc/nova/nova.conf

[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
my_ip = 10.0.0.1
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver
compute_driver = libvirt.LibvirtDriver                        (CPT)
instances_path = /var/lib/nova/instances/                     (CPT)
lock_path = /var/lib/nova/tmp                                 (CPT)

[api_database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api   (CTL)

[database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova       (CTL)

[api]
auth_strategy = keystone

[keystone_authtoken]
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = NOVA_PASS

[vnc]
enabled = true
server_listen = $my_ip
server_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html    (CPT)

[glance]
api_servers = http://controller:9292

[oslo_concurrency]
lock_path = /var/lib/nova/tmp                                 (CTL)

[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = PLACEMENT_PASS

[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
service_metadata_proxy = true                                 (CTL)
metadata_proxy_shared_secret = METADATA_SECRET                (CTL)
```

Explanation:

- the [DEFAULT] section enables the compute and metadata APIs, configures the RabbitMQ message queue entry point, sets `my_ip`, and enables the neutron network service;
- the [api_database] and [database] sections configure the database entry points;
- the [api] and [keystone_authtoken] sections configure the identity service entry point;
- the [vnc] section enables and configures the remote console entry point;
- the [glance] section configures the image service API address;
- the [oslo_concurrency] section configures the lock path;
- the [placement] section configures the placement service entry point.

Notes:

- replace RABBIT_PASS with the password of the openstack account in RabbitMQ;
- set `my_ip` to the management IP address of the controller node;
- replace NOVA_DBPASS with the nova database password;
- replace NOVA_PASS with the nova user's password;
- replace PLACEMENT_PASS with the placement user's password;
- replace NEUTRON_PASS with the neutron user's password;
- replace METADATA_SECRET with a suitable metadata proxy secret.

Additionally, determine whether the machine supports hardware acceleration for virtual machines (x86):

```
egrep -c '(vmx|svm)' /proc/cpuinfo                            (CPT)
```

If the command returns 0, hardware acceleration is not supported and libvirt must be configured to use QEMU instead of KVM:

```
vim /etc/nova/nova.conf                                       (CPT)

[libvirt]
virt_type = qemu
```

If it returns 1 or more, hardware acceleration is supported and `virt_type` can be set to `kvm`.

Note: on arm64, the following also has to be run on the compute nodes:

```
mkdir -p /usr/share/AAVMF
chown nova:nova /usr/share/AAVMF

ln -s /usr/share/edk2/aarch64/QEMU_EFI-pflash.raw \
      /usr/share/AAVMF/AAVMF_CODE.fd
ln -s /usr/share/edk2/aarch64/vars-template-pflash.raw \
      /usr/share/AAVMF/AAVMF_VARS.fd

vim /etc/libvirt/qemu.conf

nvram = ["/usr/share/AAVMF/AAVMF_CODE.fd: \
          /usr/share/AAVMF/AAVMF_VARS.fd", \
         "/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw: \
          /usr/share/edk2/aarch64/vars-template-pflash.raw"]
```

And when the deployment environment on the ARM architecture is itself nested virtualization, configure libvirt as follows:

```
[libvirt]
virt_type = qemu
cpu_mode = custom
cpu_model = cortex-a72
```

Synchronize the databases.

Sync the nova-api database:

```
su -s /bin/sh -c "nova-manage api_db sync" nova               (CTL)
```

Register the cell0 database:

```
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova         (CTL)
```

Create the cell1 cell:

```
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova    (CTL)
```
\"nova-manage db sync\" nova (CTL) \u9a8c\u8bc1cell0\u548ccell1\u6ce8\u518c\u6b63\u786e\uff1a su -s /bin/sh -c \"nova-manage cell_v2 list_cells\" nova (CTL) \u6dfb\u52a0\u8ba1\u7b97\u8282\u70b9\u5230openstack\u96c6\u7fa4 su -s /bin/sh -c \"nova-manage cell_v2 discover_hosts --verbose\" nova (CTL) \u542f\u52a8\u670d\u52a1 systemctl enable \\ (CTL) openstack-nova-api.service \\ openstack-nova-scheduler.service \\ openstack-nova-conductor.service \\ openstack-nova-novncproxy.service systemctl start \\ (CTL) openstack-nova-api.service \\ openstack-nova-scheduler.service \\ openstack-nova-conductor.service \\ openstack-nova-novncproxy.service systemctl enable libvirtd.service openstack-nova-compute.service (CPT) systemctl start libvirtd.service openstack-nova-compute.service (CPT) \u9a8c\u8bc1 source ~/.admin-openrc (CTL) \u5217\u51fa\u670d\u52a1\u7ec4\u4ef6\uff0c\u9a8c\u8bc1\u6bcf\u4e2a\u6d41\u7a0b\u90fd\u6210\u529f\u542f\u52a8\u548c\u6ce8\u518c\uff1a openstack compute service list (CTL) \u5217\u51fa\u8eab\u4efd\u670d\u52a1\u4e2d\u7684API\u7aef\u70b9\uff0c\u9a8c\u8bc1\u4e0e\u8eab\u4efd\u670d\u52a1\u7684\u8fde\u63a5\uff1a openstack catalog list (CTL) \u5217\u51fa\u955c\u50cf\u670d\u52a1\u4e2d\u7684\u955c\u50cf\uff0c\u9a8c\u8bc1\u4e0e\u955c\u50cf\u670d\u52a1\u7684\u8fde\u63a5\uff1a openstack image list (CTL) \u68c0\u67e5cells\u662f\u5426\u8fd0\u4f5c\u6210\u529f\uff0c\u4ee5\u53ca\u5176\u4ed6\u5fc5\u8981\u6761\u4ef6\u662f\u5426\u5df2\u5177\u5907\u3002 nova-status upgrade check (CTL) Neutron \u5b89\u88c5 \u00b6 \u521b\u5efa\u6570\u636e\u5e93\u3001\u670d\u52a1\u51ed\u8bc1\u548c API \u7aef\u70b9 \u521b\u5efa\u6570\u636e\u5e93\uff1a mysql -u root -p (CTL) MariaDB [(none)]> CREATE DATABASE neutron; MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \\ IDENTIFIED BY 'NEUTRON_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \\ IDENTIFIED BY 'NEUTRON_DBPASS'; MariaDB [(none)]> exit \u6ce8\u610f \u66ff\u6362 NEUTRON_DBPASS \u4e3a neutron \u6570\u636e\u5e93\u8bbe\u7f6e\u5bc6\u7801\u3002 source ~/.admin-openrc (CTL) \u521b\u5efaneutron\u670d\u52a1\u51ed\u8bc1 openstack user create --domain default --password-prompt neutron (CTL) openstack role add --project service --user neutron admin (CTL) openstack service create --name neutron --description \"OpenStack Networking\" network (CTL) \u521b\u5efaNeutron\u670d\u52a1API\u7aef\u70b9\uff1a openstack endpoint create --region RegionOne network public http://controller:9696 (CTL) openstack endpoint create --region RegionOne network internal http://controller:9696 (CTL) openstack endpoint create --region RegionOne network admin http://controller:9696 (CTL) \u5b89\u88c5\u8f6f\u4ef6\u5305\uff1a yum install openstack-neutron openstack-neutron-linuxbridge ebtables ipset \\ (CTL) openstack-neutron-ml2 yum install openstack-neutron-linuxbridge ebtables ipset (CPT) \u914d\u7f6eneutron\u76f8\u5173\u914d\u7f6e\uff1a \u914d\u7f6e\u4e3b\u4f53\u914d\u7f6e vim /etc/neutron/neutron.conf [database] connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron (CTL) [DEFAULT] core_plugin = ml2 (CTL) service_plugins = router (CTL) allow_overlapping_ips = true (CTL) transport_url = rabbit://openstack:RABBIT_PASS@controller auth_strategy = keystone notify_nova_on_port_status_changes = true (CTL) notify_nova_on_port_data_changes = true (CTL) api_workers = 3 (CTL) [keystone_authtoken] www_authenticate_uri = http://controller:5000 auth_url = http://controller:5000 memcached_servers = controller:11211 auth_type = password 
### Neutron Installation

Create the database, service credentials, and API endpoints.

Create the database:

```
mysql -u root -p                                              (CTL)

MariaDB [(none)]> CREATE DATABASE neutron;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
IDENTIFIED BY 'NEUTRON_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
IDENTIFIED BY 'NEUTRON_DBPASS';
MariaDB [(none)]> exit
```

Note: replace NEUTRON_DBPASS with the password you want for the neutron database.

```
source ~/.admin-openrc                                        (CTL)
```

Create the neutron service credentials:

```
openstack user create --domain default --password-prompt neutron                       (CTL)
openstack role add --project service --user neutron admin                              (CTL)
openstack service create --name neutron --description "OpenStack Networking" network   (CTL)
```

Create the Neutron service API endpoints:

```
openstack endpoint create --region RegionOne network public http://controller:9696     (CTL)
openstack endpoint create --region RegionOne network internal http://controller:9696   (CTL)
openstack endpoint create --region RegionOne network admin http://controller:9696      (CTL)
```

Install the packages:

```
yum install openstack-neutron openstack-neutron-linuxbridge ebtables ipset \    (CTL)
            openstack-neutron-ml2

yum install openstack-neutron-linuxbridge ebtables ipset      (CPT)
```

Configure the neutron-related settings.

Main configuration:

```
vim /etc/neutron/neutron.conf

[database]
connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron    (CTL)

[DEFAULT]
core_plugin = ml2                                             (CTL)
service_plugins = router                                      (CTL)
allow_overlapping_ips = true                                  (CTL)
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = true                     (CTL)
notify_nova_on_port_data_changes = true                       (CTL)
api_workers = 3                                               (CTL)

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = neutron
password = NEUTRON_PASS

[nova]
auth_url = http://controller:5000                             (CTL)
auth_type = password                                          (CTL)
project_domain_name = Default                                 (CTL)
user_domain_name = Default                                    (CTL)
region_name = RegionOne                                       (CTL)
project_name = service                                        (CTL)
username = nova                                               (CTL)
password = NOVA_PASS                                          (CTL)

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
```

Explanation: the [database] section configures the database entry point; the [DEFAULT] section enables the ml2 and router plugins, allows overlapping IP addresses, and configures the RabbitMQ message queue entry point; the [DEFAULT] and [keystone_authtoken] sections configure the identity service entry point; the [DEFAULT] and [nova] sections configure the network service to notify compute of topology changes; the [oslo_concurrency] section configures the lock path.

Notes: replace NEUTRON_DBPASS with the neutron database password; replace RABBIT_PASS with the password of the openstack account in RabbitMQ; replace NEUTRON_PASS with the neutron user's password; replace NOVA_PASS with the nova user's password.

Configure the ML2 plugin:

```
vim /etc/neutron/plugins/ml2/ml2_conf.ini

[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security

[ml2_type_flat]
flat_networks = provider

[ml2_type_vxlan]
vni_ranges = 1:1000

[securitygroup]
enable_ipset = true
```

Create the symbolic link /etc/neutron/plugin.ini:

```
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
```

Note: the [ml2] section enables flat, vlan, and vxlan networks, enables the linuxbridge and l2population mechanisms, and enables the port security extension driver; the [ml2_type_flat] section configures the flat network as the provider virtual network; the [ml2_type_vxlan] section configures the VXLAN network identifier range; the [securitygroup] section enables ipset.

The concrete layer-2 configuration can be adjusted to your own needs; this document uses provider networks with linuxbridge.

Configure the Linux bridge agent:

```
vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini

[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME

[vxlan]
enable_vxlan = true
local_ip = OVERLAY_INTERFACE_IP_ADDRESS
l2_population = true

[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
```

Explanation: the [linux_bridge] section maps the provider virtual network to the physical network interface; the [vxlan] section enables the vxlan overlay network, configures the IP address of the physical interface that handles the overlay network, and enables layer-2 population; the [securitygroup] section enables security groups and configures the linux bridge iptables firewall driver.

Notes: replace PROVIDER_INTERFACE_NAME with the physical network interface; replace OVERLAY_INTERFACE_IP_ADDRESS with the management IP address of the controller node.
Configure the Layer-3 agent:

```
vim /etc/neutron/l3_agent.ini                                 (CTL)

[DEFAULT]
interface_driver = linuxbridge
```

Explanation: in the [DEFAULT] section, set the interface driver to linuxbridge.

Configure the DHCP agent:

```
vim /etc/neutron/dhcp_agent.ini                               (CTL)

[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
```

Explanation: the [DEFAULT] section configures the linuxbridge interface driver and the Dnsmasq DHCP driver, and enables isolated metadata.

Configure the metadata agent:

```
vim /etc/neutron/metadata_agent.ini                           (CTL)

[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = METADATA_SECRET
```

Explanation: the [DEFAULT] section configures the metadata host and the shared secret.

Note: replace METADATA_SECRET with a suitable metadata proxy secret.

Configure the related Nova settings:

```
vim /etc/nova/nova.conf

[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = Default
user_domain_name = Default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
service_metadata_proxy = true                                 (CTL)
metadata_proxy_shared_secret = METADATA_SECRET                (CTL)
```

Explanation: the [neutron] section configures the access parameters, enables the metadata proxy, and sets the secret.

Notes: replace NEUTRON_PASS with the neutron user's password; replace METADATA_SECRET with a suitable metadata proxy secret.

Synchronize the database:

```
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
    --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
```

Restart the compute API service:

```
systemctl restart openstack-nova-api.service
```

Start the networking services:

```
systemctl enable neutron-server.service neutron-linuxbridge-agent.service \    (CTL)
                 neutron-dhcp-agent.service neutron-metadata-agent.service \
                 neutron-l3-agent.service
systemctl restart neutron-server.service neutron-linuxbridge-agent.service \   (CTL)
                  neutron-dhcp-agent.service neutron-metadata-agent.service \
                  neutron-l3-agent.service

systemctl enable neutron-linuxbridge-agent.service                                    (CPT)
systemctl restart neutron-linuxbridge-agent.service openstack-nova-compute.service    (CPT)
```

Verification.

Verify that the neutron agents started successfully:

```
openstack network agent list
```
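As an optional end-to-end check that is not part of the original guide: with Nova, Glance, and Neutron now in place, a provider network can be created and a tiny instance booted from the cirros image uploaded earlier. The 203.0.113.0/24 addresses below are placeholders for the subnet that actually backs PROVIDER_INTERFACE_NAME, and m1.nano is the flavor sketched in the Nova section:

```
source ~/.admin-openrc
# a flat network on the physical network named "provider" in ml2_conf.ini
openstack network create --share --external \
    --provider-physical-network provider \
    --provider-network-type flat provider
# substitute the real subnet of the provider interface for the placeholder range
openstack subnet create --network provider \
    --allocation-pool start=203.0.113.101,end=203.0.113.250 \
    --dns-nameserver 8.8.8.8 --gateway 203.0.113.1 \
    --subnet-range 203.0.113.0/24 provider-subnet
# boot and check a test instance
openstack server create --flavor m1.nano --image cirros --network provider test-vm
openstack server list
```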
### Cinder Installation

Create the database, service credentials, and API endpoints.

Create the database:

```
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE cinder;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \
IDENTIFIED BY 'CINDER_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \
IDENTIFIED BY 'CINDER_DBPASS';
MariaDB [(none)]> exit
```

Note: replace CINDER_DBPASS with the password you want for the cinder database.

```
source ~/.admin-openrc
```

Create the cinder service credentials:

```
openstack user create --domain default --password-prompt cinder
openstack role add --project service --user cinder admin
openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
```

Create the block storage service API endpoints:

```
openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s
```

Install the packages:

```
yum install openstack-cinder-api openstack-cinder-scheduler   (CTL)

yum install lvm2 device-mapper-persistent-data scsi-target-utils rpcbind nfs-utils \    (STG)
            openstack-cinder-volume openstack-cinder-backup
```

Prepare the storage device (the following is only an example):

```
pvcreate /dev/vdb
vgcreate cinder-volumes /dev/vdb

vim /etc/lvm/lvm.conf

devices {
...
filter = [ "a/vdb/", "r/.*/"]
```

Explanation: in the devices section, add a filter that accepts the /dev/vdb device and rejects all other devices.

Prepare NFS:

```
mkdir -p /root/cinder/backup

cat << EOF >> /etc/exports
/root/cinder/backup 192.168.1.0/24(rw,sync,no_root_squash,no_all_squash)
EOF
```

Configure the cinder-related settings:

```
vim /etc/cinder/cinder.conf

[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone
my_ip = 10.0.0.11
enabled_backends = lvm                                        (STG)
backup_driver = cinder.backup.drivers.nfs.NFSBackupDriver     (STG)
backup_share = HOST:PATH                                      (STG)

[database]
connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = cinder
password = CINDER_PASS

[oslo_concurrency]
lock_path = /var/lib/cinder/tmp

[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver     (STG)
volume_group = cinder-volumes                                 (STG)
iscsi_protocol = iscsi                                        (STG)
iscsi_helper = tgtadm                                         (STG)
```

Explanation: the [database] section configures the database entry point; the [DEFAULT] section configures the RabbitMQ message queue entry point and `my_ip`; the [DEFAULT] and [keystone_authtoken] sections configure the identity service entry point; the [oslo_concurrency] section configures the lock path.

Notes: replace CINDER_DBPASS with the cinder database password; replace RABBIT_PASS with the password of the openstack account in RabbitMQ; set `my_ip` to the management IP address of the controller node; replace CINDER_PASS with the cinder user's password; replace HOST:PATH with the NFS host IP and shared path.

Synchronize the database:

```
su -s /bin/sh -c "cinder-manage db sync" cinder               (CTL)
```

Configure Nova:

```
vim /etc/nova/nova.conf                                       (CTL)

[cinder]
os_region_name = RegionOne
```

Restart the compute API service:

```
systemctl restart openstack-nova-api.service
```

Start the cinder services:

```
systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service    (CTL)
systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service     (CTL)

systemctl enable rpcbind.service nfs-server.service tgtd.service iscsid.service \   (STG)
                 openstack-cinder-volume.service \
                 openstack-cinder-backup.service
systemctl start rpcbind.service nfs-server.service tgtd.service iscsid.service \    (STG)
                openstack-cinder-volume.service \
                openstack-cinder-backup.service
```

Note: when cinder attaches volumes through tgtadm, modify /etc/tgt/tgtd.conf with the content below so that tgtd can discover the cinder-volume iSCSI targets:

```
include /var/lib/cinder/volumes/*
```

Verification:

```
source ~/.admin-openrc
openstack volume service list
```
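As an optional check beyond `openstack volume service list` (not in the original guide), a small test volume can be created and, if the test instance from the Neutron sketch exists, attached to it:

```
source ~/.admin-openrc
openstack volume create --size 1 test-vol
openstack volume list
# optional: attach the volume to the test instance created earlier
openstack server add volume test-vm test-vol
```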
### Horizon Installation

Install the packages:

```
yum install openstack-dashboard
```

Modify the variables in the file `/etc/openstack-dashboard/local_settings`:

```
vim /etc/openstack-dashboard/local_settings

OPENSTACK_HOST = "controller"
ALLOWED_HOSTS = ['*', ]

SESSION_ENGINE = 'django.contrib.sessions.backends.cache'

CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'controller:11211',
    }
}

OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "member"
WEBROOT = '/dashboard'
POLICY_FILES_PATH = "/etc/openstack-dashboard"

OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 3,
}
```

Restart the httpd service:

```
systemctl restart httpd.service memcached.service
```

Verification: open a browser, go to http://HOSTIP/dashboard/, and log in to horizon.

Note: replace HOSTIP with the management-plane IP address of the controller node.

### Tempest Installation

Tempest is OpenStack's integration test service. It is recommended if you need comprehensive automated testing of the functionality of an installed OpenStack environment; otherwise it does not need to be installed.

Install Tempest:

```
yum install openstack-tempest
```

Initialize a working directory:

```
tempest init mytest
```

Modify the configuration file:

```
cd mytest
vi etc/tempest.conf
```

tempest.conf must describe the current OpenStack environment; refer to the official example for the details.

Run the tests:

```
tempest run
```

Install tempest extensions (optional).

The individual OpenStack services also provide their own tempest test packages, which users can install to enrich tempest's test coverage. In Train, extension tests are provided for Cinder, Glance, Keystone, Ironic, and Trove; they can be installed and used with:

```
yum install python3-cinder-tempest-plugin python3-glance-tempest-plugin python3-ironic-tempest-plugin python3-keystone-tempest-plugin python3-trove-tempest-plugin
```
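Once etc/tempest.conf points at this environment, a scoped run is usually a quicker first check than a full `tempest run`; the sketch below only uses standard tempest options (`--smoke` and `--regex`) and is a suggestion rather than part of the guide:

```
cd mytest
# run only the smoke-tagged tests first
tempest run --smoke
# or narrow the run to one service's API tests, e.g. identity
tempest run --regex '^tempest\.api\.identity'
```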
### Ironic Installation

Ironic is OpenStack's bare metal service. It is recommended if you need to provision bare metal machines; otherwise it does not need to be installed.

Set up the database. The bare metal service stores its information in a database; create an ironic database that the ironic user can access, and replace IRONIC_DBPASSWORD with a suitable password:

```
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE ironic CHARACTER SET utf8;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'localhost' \
IDENTIFIED BY 'IRONIC_DBPASSWORD';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'%' \
IDENTIFIED BY 'IRONIC_DBPASSWORD';
```

Install the packages:

```
yum install openstack-ironic-api openstack-ironic-conductor python3-ironicclient
```

Start the services:

```
systemctl enable openstack-ironic-api openstack-ironic-conductor
systemctl start openstack-ironic-api openstack-ironic-conductor
```

Create the service user credentials.

1. Create the Bare Metal service user:

```
openstack user create --password IRONIC_PASSWORD \
    --email ironic@example.com ironic
openstack role add --project service --user ironic admin
openstack service create --name ironic \
    --description "Ironic baremetal provisioning service" baremetal
```

2. Create the Bare Metal service endpoints:

```
openstack endpoint create --region RegionOne baremetal admin http://$IRONIC_NODE:6385
openstack endpoint create --region RegionOne baremetal public http://$IRONIC_NODE:6385
openstack endpoint create --region RegionOne baremetal internal http://$IRONIC_NODE:6385
```

Configure the ironic-api service (configuration file: /etc/ironic/ironic.conf).

1. Configure the location of the database through the `connection` option, as shown below; replace IRONIC_DBPASSWORD with the password of the ironic user and DB_IP with the IP address of the DB server:

```
[database]
# The SQLAlchemy connection string used to connect to the
# database (string value)
connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic
```

2. Configure the ironic-api service to use the RabbitMQ message broker through the following option; replace RPC_* with the RabbitMQ address details and credentials:

```
[DEFAULT]
# A URL representing the messaging driver to use and its full
# configuration. (string value)
transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
```

You may also replace RabbitMQ with json-rpc yourself.
(string value) auth_strategy=keystone [keystone_authtoken] # Authentication type to load (string value) auth_type=password # Complete public Identity API endpoint (string value) www_authenticate_uri=http://PUBLIC_IDENTITY_IP:5000 # Complete admin Identity API endpoint. (string value) auth_url=http://PRIVATE_IDENTITY_IP:5000 # Service username. (string value) username=ironic # Service account password. (string value) password=IRONIC_PASSWORD # Service tenant name. (string value) project_name=service # Domain name containing project (string value) project_domain_name=Default # User's domain name (string value) user_domain_name=Default 4\u3001\u521b\u5efa\u88f8\u91d1\u5c5e\u670d\u52a1\u6570\u636e\u5e93\u8868 ironic-dbsync --config-file /etc/ironic/ironic.conf create_schema 5\u3001\u91cd\u542fironic-api\u670d\u52a1 sudo systemctl restart openstack-ironic-api \u914d\u7f6eironic-conductor\u670d\u52a1 1\u3001\u66ff\u6362 HOST_IP \u4e3aconductor host\u7684IP [DEFAULT] # IP address of this host. If unset, will determine the IP # programmatically. If unable to do so, will use \"127.0.0.1\". # (string value) my_ip=HOST_IP 2\u3001\u914d\u7f6e\u6570\u636e\u5e93\u7684\u4f4d\u7f6e\uff0cironic-conductor\u5e94\u8be5\u4f7f\u7528\u548cironic-api\u76f8\u540c\u7684\u914d\u7f6e\u3002\u66ff\u6362 IRONIC_DBPASSWORD \u4e3a ironic \u7528\u6237\u7684\u5bc6\u7801\uff0c\u66ff\u6362DB_IP\u4e3aDB\u670d\u52a1\u5668\u6240\u5728\u7684IP\u5730\u5740\uff1a [database] # The SQLAlchemy connection string to use to connect to the # database. (string value) connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic 3\u3001\u901a\u8fc7\u4ee5\u4e0b\u9009\u9879\u914d\u7f6eironic-api\u670d\u52a1\u4f7f\u7528RabbitMQ\u6d88\u606f\u4ee3\u7406\uff0cironic-conductor\u5e94\u8be5\u4f7f\u7528\u548cironic-api\u76f8\u540c\u7684\u914d\u7f6e\uff0c\u66ff\u6362 RPC_* \u4e3aRabbitMQ\u7684\u8be6\u7ec6\u5730\u5740\u548c\u51ed\u8bc1 [DEFAULT] # A URL representing the messaging driver to use and its full # configuration. 
(string value) transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/ \u7528\u6237\u4e5f\u53ef\u81ea\u884c\u4f7f\u7528json-rpc\u65b9\u5f0f\u66ff\u6362rabbitmq 4\u3001\u914d\u7f6e\u51ed\u8bc1\u8bbf\u95ee\u5176\u4ed6OpenStack\u670d\u52a1 \u4e3a\u4e86\u4e0e\u5176\u4ed6OpenStack\u670d\u52a1\u8fdb\u884c\u901a\u4fe1\uff0c\u88f8\u91d1\u5c5e\u670d\u52a1\u5728\u8bf7\u6c42\u5176\u4ed6\u670d\u52a1\u65f6\u9700\u8981\u4f7f\u7528\u670d\u52a1\u7528\u6237\u4e0eOpenStack Identity\u670d\u52a1\u8fdb\u884c\u8ba4\u8bc1\u3002\u8fd9\u4e9b\u7528\u6237\u7684\u51ed\u636e\u5fc5\u987b\u5728\u4e0e\u76f8\u5e94\u670d\u52a1\u76f8\u5173\u7684\u6bcf\u4e2a\u914d\u7f6e\u6587\u4ef6\u4e2d\u8fdb\u884c\u914d\u7f6e\u3002 [neutron] - \u8bbf\u95eeOpenStack\u7f51\u7edc\u670d\u52a1 [glance] - \u8bbf\u95eeOpenStack\u955c\u50cf\u670d\u52a1 [swift] - \u8bbf\u95eeOpenStack\u5bf9\u8c61\u5b58\u50a8\u670d\u52a1 [cinder] - \u8bbf\u95eeOpenStack\u5757\u5b58\u50a8\u670d\u52a1 [inspector] - \u8bbf\u95eeOpenStack\u88f8\u91d1\u5c5eintrospection\u670d\u52a1 [service_catalog] - \u4e00\u4e2a\u7279\u6b8a\u9879\u7528\u4e8e\u4fdd\u5b58\u88f8\u91d1\u5c5e\u670d\u52a1\u4f7f\u7528\u7684\u51ed\u8bc1\uff0c\u8be5\u51ed\u8bc1\u7528\u4e8e\u53d1\u73b0\u6ce8\u518c\u5728OpenStack\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u76ee\u5f55\u4e2d\u7684\u81ea\u5df1\u7684API URL\u7aef\u70b9 \u7b80\u5355\u8d77\u89c1\uff0c\u53ef\u4ee5\u5bf9\u6240\u6709\u670d\u52a1\u4f7f\u7528\u540c\u4e00\u4e2a\u670d\u52a1\u7528\u6237\u3002\u4e3a\u4e86\u5411\u540e\u517c\u5bb9\uff0c\u8be5\u7528\u6237\u5e94\u8be5\u548cironic-api\u670d\u52a1\u7684[keystone_authtoken]\u6240\u914d\u7f6e\u7684\u4e3a\u540c\u4e00\u4e2a\u7528\u6237\u3002\u4f46\u8fd9\u4e0d\u662f\u5fc5\u987b\u7684\uff0c\u4e5f\u53ef\u4ee5\u4e3a\u6bcf\u4e2a\u670d\u52a1\u521b\u5efa\u5e76\u914d\u7f6e\u4e0d\u540c\u7684\u670d\u52a1\u7528\u6237\u3002 \u5728\u4e0b\u9762\u7684\u793a\u4f8b\u4e2d\uff0c\u7528\u6237\u8bbf\u95eeOpenStack\u7f51\u7edc\u670d\u52a1\u7684\u8eab\u4efd\u9a8c\u8bc1\u4fe1\u606f\u914d\u7f6e\u4e3a\uff1a \u7f51\u7edc\u670d\u52a1\u90e8\u7f72\u5728\u540d\u4e3aRegionOne\u7684\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u57df\u4e2d\uff0c\u4ec5\u5728\u670d\u52a1\u76ee\u5f55\u4e2d\u6ce8\u518c\u516c\u5171\u7aef\u70b9\u63a5\u53e3 \u8bf7\u6c42\u65f6\u4f7f\u7528\u7279\u5b9a\u7684CA SSL\u8bc1\u4e66\u8fdb\u884cHTTPS\u8fde\u63a5 \u4e0eironic-api\u670d\u52a1\u914d\u7f6e\u76f8\u540c\u7684\u670d\u52a1\u7528\u6237 \u52a8\u6001\u5bc6\u7801\u8ba4\u8bc1\u63d2\u4ef6\u57fa\u4e8e\u5176\u4ed6\u9009\u9879\u53d1\u73b0\u5408\u9002\u7684\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1API\u7248\u672c [neutron] # Authentication type to load (string value) auth_type = password # Authentication URL (string value) auth_url=https://IDENTITY_IP:5000/ # Username (string value) username=ironic # User's password (string value) password=IRONIC_PASSWORD # Project name to scope to (string value) project_name=service # Domain ID containing project (string value) project_domain_id=default # User's domain id (string value) user_domain_id=default # PEM encoded Certificate Authority to use when verifying # HTTPs connections. (string value) cafile=/opt/stack/data/ca-bundle.pem # The default region_name for endpoint URL discovery. (string # value) region_name = RegionOne # List of interfaces, in order of preference, for endpoint # URL. 
(list value) valid_interfaces=public \u9ed8\u8ba4\u60c5\u51b5\u4e0b\uff0c\u4e3a\u4e86\u4e0e\u5176\u4ed6\u670d\u52a1\u8fdb\u884c\u901a\u4fe1\uff0c\u88f8\u91d1\u5c5e\u670d\u52a1\u4f1a\u5c1d\u8bd5\u901a\u8fc7\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u7684\u670d\u52a1\u76ee\u5f55\u53d1\u73b0\u8be5\u670d\u52a1\u5408\u9002\u7684\u7aef\u70b9\u3002\u5982\u679c\u5e0c\u671b\u5bf9\u4e00\u4e2a\u7279\u5b9a\u670d\u52a1\u4f7f\u7528\u4e00\u4e2a\u4e0d\u540c\u7684\u7aef\u70b9\uff0c\u5219\u5728\u88f8\u91d1\u5c5e\u670d\u52a1\u7684\u914d\u7f6e\u6587\u4ef6\u4e2d\u901a\u8fc7endpoint_override\u9009\u9879\u8fdb\u884c\u6307\u5b9a\uff1a [neutron] ... endpoint_override = 5\u3001\u914d\u7f6e\u5141\u8bb8\u7684\u9a71\u52a8\u7a0b\u5e8f\u548c\u786c\u4ef6\u7c7b\u578b \u901a\u8fc7\u8bbe\u7f6eenabled_hardware_types\u8bbe\u7f6eironic-conductor\u670d\u52a1\u5141\u8bb8\u4f7f\u7528\u7684\u786c\u4ef6\u7c7b\u578b\uff1a [DEFAULT] enabled_hardware_types = ipmi \u914d\u7f6e\u786c\u4ef6\u63a5\u53e3\uff1a enabled_boot_interfaces = pxe enabled_deploy_interfaces = direct,iscsi enabled_inspect_interfaces = inspector enabled_management_interfaces = ipmitool enabled_power_interfaces = ipmitool \u914d\u7f6e\u63a5\u53e3\u9ed8\u8ba4\u503c\uff1a [DEFAULT] default_deploy_interface = direct default_network_interface = neutron \u5982\u679c\u542f\u7528\u4e86\u4efb\u4f55\u4f7f\u7528Direct deploy\u7684\u9a71\u52a8\uff0c\u5fc5\u987b\u5b89\u88c5\u548c\u914d\u7f6e\u955c\u50cf\u670d\u52a1\u7684Swift\u540e\u7aef\u3002Ceph\u5bf9\u8c61\u7f51\u5173(RADOS\u7f51\u5173)\u4e5f\u652f\u6301\u4f5c\u4e3a\u955c\u50cf\u670d\u52a1\u7684\u540e\u7aef\u3002 6\u3001\u91cd\u542fironic-conductor\u670d\u52a1 sudo systemctl restart openstack-ironic-conductor \u914d\u7f6ehttpd\u670d\u52a1 \u521b\u5efaironic\u8981\u4f7f\u7528\u7684httpd\u7684root\u76ee\u5f55\u5e76\u8bbe\u7f6e\u5c5e\u4e3b\u5c5e\u7ec4\uff0c\u76ee\u5f55\u8def\u5f84\u8981\u548c/etc/ironic/ironic.conf\u4e2d[deploy]\u7ec4\u4e2dhttp_root \u914d\u7f6e\u9879\u6307\u5b9a\u7684\u8def\u5f84\u8981\u4e00\u81f4\u3002 mkdir -p /var/lib/ironic/httproot ``chown ironic.ironic /var/lib/ironic/httproot \u5b89\u88c5\u548c\u914d\u7f6ehttpd\u670d\u52a1 \u5b89\u88c5httpd\u670d\u52a1\uff0c\u5df2\u6709\u8bf7\u5ffd\u7565 yum install httpd -y 2. 
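At this point the API and a conductor are running, so a quick sanity check can be done with the ironic client installed above (a sketch, assuming the admin credentials file ~/.admin-openrc used elsewhere in this guide):

```shell
source ~/.admin-openrc
# the hardware types enabled above should appear once a conductor has registered
openstack baremetal driver list
# a fresh deployment returns an empty list rather than an error
openstack baremetal node list
```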
Configure the httpd service.

Create the httpd root directory used by ironic and set its owner and group. The directory path must match the http_root option in the [deploy] section of /etc/ironic/ironic.conf:

```shell
mkdir -p /var/lib/ironic/httproot
chown ironic.ironic /var/lib/ironic/httproot
```

Install and configure the httpd service.

1. Install the httpd service (skip if it is already installed):

```shell
yum install httpd -y
```

2. Create the /etc/httpd/conf.d/openstack-ironic-httpd.conf file with the following content:

```
Listen 8080

<VirtualHost *:8080>
    ServerName ironic.openeuler.com

    ErrorLog "/var/log/httpd/openstack-ironic-httpd-error_log"
    CustomLog "/var/log/httpd/openstack-ironic-httpd-access_log" "%h %l %u %t \"%r\" %>s %b"

    DocumentRoot "/var/lib/ironic/httproot"
    <Directory "/var/lib/ironic/httproot">
        Options Indexes FollowSymLinks
        Require all granted
    </Directory>
    LogLevel warn
    AddDefaultCharset UTF-8
    EnableSendfile on
</VirtualHost>
```

Note that the listening port must match the port specified in the http_url option of the [deploy] section in /etc/ironic/ironic.conf.

Restart the httpd service:

```shell
systemctl restart httpd
```

7. Build the deploy ramdisk image.

The Train ramdisk image can be built with the ironic-python-agent service or the disk-image-builder tool, or with the latest ironic-python-agent-builder from the community. You may also choose other tools. If you use the native Train tools, install the corresponding package:

```shell
yum install openstack-ironic-python-agent
```

or

```shell
yum install diskimage-builder
```

See the official documentation for detailed usage.

The following describes the complete process of building the deploy image used by ironic with ironic-python-agent-builder.

Install ironic-python-agent-builder.

1. Install the tool:

```shell
pip install ironic-python-agent-builder
```

2. Modify the python interpreter in the following files:

```shell
/usr/bin/yum
/usr/libexec/urlgrabber-ext-down
```

3. Install the other required tools:

```shell
yum install git
```

Since `DIB` depends on the `semanage` command, make sure it is available before building the image: `semanage --help`. If the command is not found, install it:

```shell
# first find out which package provides it
[root@localhost ~]# yum provides /usr/sbin/semanage
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirror.vcu.edu
 * extras: mirror.vcu.edu
 * updates: mirror.math.princeton.edu
policycoreutils-python-2.5-34.el7.aarch64 : SELinux policy core python utilities
Repo        : base
Matched from:
Filename    : /usr/sbin/semanage

# install it
[root@localhost ~]# yum install policycoreutils-python
```

Build the image.

On `arm` architectures, additionally export:

```shell
export ARCH=aarch64
```

Basic usage:

```shell
usage: ironic-python-agent-builder [-h] [-r RELEASE] [-o OUTPUT] [-e ELEMENT]
                                   [-b BRANCH] [-v] [--extra-args EXTRA_ARGS]
                                   distribution

positional arguments:
  distribution          Distribution to use

optional arguments:
  -h, --help            show this help message and exit
  -r RELEASE, --release RELEASE
                        Distribution release to use
  -o OUTPUT, --output OUTPUT
                        Output base file name
  -e ELEMENT, --element ELEMENT
                        Additional DIB element to use
  -b BRANCH, --branch BRANCH
                        If set, override the branch that is used for ironic-
                        python-agent and requirements
  -v, --verbose         Enable verbose logging in diskimage-builder
  --extra-args EXTRA_ARGS
                        Extra arguments to pass to diskimage-builder
```

Example:

```shell
ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky
```

Allow SSH login. Initialize the environment variables, then build the image:

```shell
export DIB_DEV_USER_USERNAME=ipa
export DIB_DEV_USER_PWDLESS_SUDO=yes
export DIB_DEV_USER_PASSWORD='123'
ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky -e selinux-permissive -e devuser
```

Specify a code repository. Initialize the corresponding environment variables, then build the image:

```shell
# specify the repository location and version
DIB_REPOLOCATION_ironic_python_agent=git@172.20.2.149:liuzz/ironic-python-agent.git
DIB_REPOREF_ironic_python_agent=origin/develop

# clone the code directly from gerrit
DIB_REPOLOCATION_ironic_python_agent=https://review.opendev.org/openstack/ironic-python-agent
DIB_REPOREF_ironic_python_agent=refs/changes/43/701043/1
```

Reference: [source-repositories](https://docs.openstack.org/diskimage-builder/latest/elements/source-repositories/README.html).

Specifying the repository location and version has been verified to work.

Notes:

The PXE configuration file template in native OpenStack does not support the arm64 architecture, so you need to modify the native OpenStack code yourself. In Train, the community ironic still does not support arm64 UEFI PXE boot: the generated grub.cfg file (usually located under /tftpboot/) has an incorrect format and PXE boot fails, so you need to modify the code that generates grub.cfg yourself.

Ironic reports a TLS error when sending command-status query requests to IPA: both IPA and ironic in Train send requests to each other with TLS enabled by default; disable it as described in the official documentation.

Edit the ironic configuration file (/etc/ironic/ironic.conf) and add ipa-insecure=1 to the following configuration:

```
[agent]
verify_ca = False

[pxe]
pxe_append_params = nofb nomodeset vga=normal coreos.autologin ipa-insecure=1
```

Add the IPA configuration file /etc/ironic_python_agent/ironic_python_agent.conf to the ramdisk image and configure TLS as follows (the /etc/ironic_python_agent directory must be created first):

```
[DEFAULT]
enable_auto_tls = False
```

Set the permissions:

```shell
chown -R ipa.ipa /etc/ironic_python_agent/
```

Modify the service file of the IPA service and add the configuration file option:

```shell
vim usr/lib/systemd/system/ironic-python-agent.service
```

```
[Unit]
Description=Ironic Python Agent
After=network-online.target

[Service]
ExecStartPre=/sbin/modprobe vfat
ExecStart=/usr/local/bin/ironic-python-agent --config-file /etc/ironic_python_agent/ironic_python_agent.conf
Restart=always
RestartSec=30s

[Install]
WantedBy=multi-user.target
```
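Once the deploy kernel and ramdisk are built, they are typically registered with the image service so that ironic can reference them. This is only a sketch: it assumes the Glance service installed earlier in this guide and that the builder was run with `-o /mnt/ironic-agent-ssh`, producing /mnt/ironic-agent-ssh.kernel and /mnt/ironic-agent-ssh.initramfs (adjust the paths to your actual output).

```shell
source ~/.admin-openrc
# register the deploy kernel and ramdisk with Glance (file paths are assumptions, see above)
openstack image create deploy-kernel --public \
    --disk-format aki --container-format aki \
    --file /mnt/ironic-agent-ssh.kernel
openstack image create deploy-ramdisk --public \
    --disk-format ari --container-format ari \
    --file /mnt/ironic-agent-ssh.initramfs
```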
In Train we also provide services such as ironic-inspector, which you can install according to your needs.

### Kolla Installation

Kolla provides production-ready containerized deployment for OpenStack services. Installing Kolla is very simple: just install the corresponding RPM packages.

```shell
yum install openstack-kolla openstack-kolla-ansible
```

After the installation, you can use commands such as `kolla-ansible`, `kolla-build`, `kolla-genpwd` and `kolla-mergepwd` to build images and deploy the container environment.
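This guide does not prescribe a particular Kolla workflow; as an illustration only, a common flow with the commands listed above looks roughly like the following. The inventory path and the contents of /etc/kolla/globals.yml are assumptions (the sample inventory location may differ on openEuler), so treat this as a sketch rather than a deployment recipe.

```shell
# generate random passwords for the deployment
kolla-genpwd                     # fills /etc/kolla/passwords.yml

# edit /etc/kolla/globals.yml to match your environment
# (network interfaces, OpenStack release, enabled services, ...)

# assumed location of the sample all-in-one inventory shipped with kolla-ansible
INVENTORY=/usr/share/kolla-ansible/ansible/inventory/all-in-one

kolla-ansible -i "$INVENTORY" bootstrap-servers
kolla-ansible -i "$INVENTORY" prechecks
kolla-ansible -i "$INVENTORY" deploy
```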
### Trove Installation

Trove is the OpenStack database service. It is recommended if you want to use the database service provided by OpenStack; otherwise it can be skipped.

1. Set up the database.

The database service stores its information in a database. Create a `trove` database that the `trove` user can access, replacing TROVE_DBPASSWORD with a suitable password:

```shell
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE trove CHARACTER SET utf8;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'localhost' \
  IDENTIFIED BY 'TROVE_DBPASSWORD';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'%' \
  IDENTIFIED BY 'TROVE_DBPASSWORD';
```

2. Create the service user credentials.

1) Create the Trove service user:

```shell
openstack user create --domain default --password-prompt trove
openstack role add --project service --user trove admin
openstack service create --name trove --description "Database" database
```

Explanation: replace TROVE_PASSWORD with the password of the `trove` user.

2) Create the Database service endpoints:

```shell
openstack endpoint create --region RegionOne database public http://controller:8779/v1.0/%\(tenant_id\)s
openstack endpoint create --region RegionOne database internal http://controller:8779/v1.0/%\(tenant_id\)s
openstack endpoint create --region RegionOne database admin http://controller:8779/v1.0/%\(tenant_id\)s
```

3. Install and configure the Trove components.

1) Install the Trove packages:

```shell
yum install openstack-trove python3-troveclient
```

2) Configure trove.conf:

```shell
vim /etc/trove/trove.conf
```

```
[DEFAULT]
log_dir = /var/log/trove
trove_auth_url = http://controller:5000/
nova_compute_url = http://controller:8774/v2
cinder_url = http://controller:8776/v1
swift_url = http://controller:8080/v1/AUTH_
rpc_backend = rabbit
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672
auth_strategy = keystone
add_addresses = True
api_paste_config = /etc/trove/api-paste.ini
nova_proxy_admin_user = admin
nova_proxy_admin_pass = ADMIN_PASSWORD
nova_proxy_admin_tenant_name = service
taskmanager_manager = trove.taskmanager.manager.Manager
use_nova_server_config_drive = True
# Set these if using Neutron Networking
network_driver = trove.network.neutron.NeutronDriver
network_label_regex = .*

[database]
connection = mysql+pymysql://trove:TROVE_DBPASSWORD@controller/trove

[keystone_authtoken]
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = trove
password = TROVE_PASSWORD
```

Explanation:

- In the [DEFAULT] section, `nova_compute_url` and `cinder_url` are the endpoints created for Nova and Cinder in Keystone.
- `nova_proxy_XXX` is the information of a user that can access the Nova service; the example above uses the `admin` user.
- `transport_url` is the RabbitMQ connection information; replace RABBIT_PASS with the RabbitMQ password.
- `connection` in the [database] section is the information of the database created for Trove in MySQL earlier.
- In the Trove user information, replace TROVE_PASSWORD with the actual password of the `trove` user.

3) Configure trove-guestagent.conf:

```shell
vim /etc/trove/trove-guestagent.conf
```

```
rabbit_host = controller
rabbit_password = RABBIT_PASS
trove_auth_url = http://controller:5000/
```

Explanation: the guestagent is an independent Trove component that must be pre-installed in the virtual machine images Trove creates through Nova. After a database instance is created, the guestagent process starts and reports heartbeats to Trove through the message queue (RabbitMQ), so the RabbitMQ user and password must be configured. Starting from the Victoria release, Trove uses a single unified image to run different types of databases; the database service runs in a Docker container inside the guest virtual machine. Replace RABBIT_PASS with the RabbitMQ password.

4) Populate the Trove database tables:

```shell
su -s /bin/sh -c "trove-manage db_sync" trove
```

4. Finalize the installation.

Configure the Trove services to start on boot:

```shell
systemctl enable openstack-trove-api.service \
    openstack-trove-taskmanager.service \
    openstack-trove-conductor.service
```

Start the services:

```shell
systemctl start openstack-trove-api.service \
    openstack-trove-taskmanager.service \
    openstack-trove-conductor.service
```
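A quick sanity check after the services are up (a sketch, assuming the admin credentials file ~/.admin-openrc used elsewhere in this guide):

```shell
source ~/.admin-openrc
# the database endpoints created above should be listed
openstack endpoint list --service database
# an empty instance list (rather than an error) indicates trove-api is reachable
openstack database instance list
```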
### Swift Installation

Swift provides an elastic, scalable and highly available distributed object storage service, suitable for storing large amounts of unstructured data.

Create the service credentials and API endpoints.

Create the service credentials:

```shell
# create the swift user
openstack user create --domain default --password-prompt swift
# add the admin role to the swift user
openstack role add --project service --user swift admin
# create the swift service entity
openstack service create --name swift --description "OpenStack Object Storage" object-store
```

Create the swift API endpoints:

```shell
openstack endpoint create --region RegionOne object-store public http://controller:8080/v1/AUTH_%\(project_id\)s
openstack endpoint create --region RegionOne object-store internal http://controller:8080/v1/AUTH_%\(project_id\)s
openstack endpoint create --region RegionOne object-store admin http://controller:8080/v1
```

Install the packages (CTL):

```shell
yum install openstack-swift-proxy python3-swiftclient python3-keystoneclient python3-keystonemiddleware memcached
```

Configure the proxy-server. The Swift RPM package already contains an essentially usable proxy-server.conf; you only need to change the IP and the swift password in it manually.

**Note: replace the password with the one you chose for the swift user in the identity service.**

Install and configure the storage nodes (STG).

Install the supporting packages:

```shell
yum install xfsprogs rsync
```

Format the /dev/vdb and /dev/vdc devices as XFS:

```shell
mkfs.xfs /dev/vdb
mkfs.xfs /dev/vdc
```

Create the mount point directory structure:

```shell
mkdir -p /srv/node/vdb
mkdir -p /srv/node/vdc
```

Find the UUIDs of the new partitions:

```shell
blkid
```

Edit the /etc/fstab file and add the following to it:

```
UUID="" /srv/node/vdb xfs noatime 0 2
UUID="" /srv/node/vdc xfs noatime 0 2
```

Mount the devices:

```shell
mount /srv/node/vdb
mount /srv/node/vdc
```

Note: if you do not need the disaster-recovery capability, the steps above only need to create one device, and the rsync configuration below can be skipped.

(Optional) Create or edit the /etc/rsyncd.conf file to contain the following:

```
[DEFAULT]
uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
address = MANAGEMENT_INTERFACE_IP_ADDRESS

[account]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/account.lock

[container]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/container.lock

[object]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/object.lock
```

Replace MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network on the storage node.

Start the rsyncd service and configure it to start on boot:

```shell
systemctl enable rsyncd.service
systemctl start rsyncd.service
```

Install and configure the components on the storage nodes (STG).

Install the packages:

```shell
yum install openstack-swift-account openstack-swift-container openstack-swift-object
```

Edit the account-server.conf, container-server.conf and object-server.conf files in the /etc/swift directory, replacing bind_ip with the IP address of the management network on the storage node.

Ensure proper ownership of the mount point directory structure:

```shell
chown -R swift:swift /srv/node
```

Create the recon directory and ensure proper ownership of it:

```shell
mkdir -p /var/cache/swift
chown -R root:swift /var/cache/swift
chmod -R 775 /var/cache/swift
```

Create the account ring (CTL).

Change to the /etc/swift directory:

```shell
cd /etc/swift
```

Create the base account.builder file:

```shell
swift-ring-builder account.builder create 10 1 1
```

Add each storage node to the ring:

```shell
swift-ring-builder account.builder add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6202 --device DEVICE_NAME --weight DEVICE_WEIGHT
```

Replace STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network on the storage node, and DEVICE_NAME with the name of the storage device on the same storage node.

Note: repeat this command for every storage device on every storage node.

Verify the ring contents:

```shell
swift-ring-builder account.builder
```

Rebalance the ring:

```shell
swift-ring-builder account.builder rebalance
```

Create the container ring (CTL).

Change to the /etc/swift directory.

Create the base container.builder file:

```shell
swift-ring-builder container.builder create 10 1 1
```

Add each storage node to the ring:

```shell
swift-ring-builder container.builder \
    add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6201 \
    --device DEVICE_NAME --weight 100
```

Replace STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network on the storage node, and DEVICE_NAME with the name of the storage device on the same storage node.

Note: repeat this command for every storage device on every storage node.

Verify the ring contents:

```shell
swift-ring-builder container.builder
```

Rebalance the ring:

```shell
swift-ring-builder container.builder rebalance
```

Create the object ring (CTL).

Change to the /etc/swift directory.

Create the base object.builder file:

```shell
swift-ring-builder object.builder create 10 1 1
```

Add each storage node to the ring:

```shell
swift-ring-builder object.builder \
    add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6200 \
    --device DEVICE_NAME --weight 100
```

Replace STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network on the storage node, and DEVICE_NAME with the name of the storage device on the same storage node.

Note: repeat this command for every storage device on every storage node.

Verify the ring contents:

```shell
swift-ring-builder object.builder
```

Rebalance the ring:

```shell
swift-ring-builder object.builder rebalance
```

Distribute the ring configuration files: copy the account.ring.gz, container.ring.gz and object.ring.gz files to the /etc/swift directory on every storage node and on any other node running the proxy service.

Finalize the installation.

Edit the /etc/swift/swift.conf file:

```
[swift-hash]
swift_hash_path_suffix = test-hash
swift_hash_path_prefix = test-hash

[storage-policy:0]
name = Policy-0
default = yes
```

Replace test-hash with unique values.

Copy the swift.conf file to the /etc/swift directory on every storage node and on any other node running the proxy service.

On all nodes, ensure proper ownership of the configuration directory:

```shell
chown -R root:swift /etc/swift
```

On the controller node and on any other node running the proxy service, start the object storage proxy service and its dependencies and configure them to start on boot:

```shell
systemctl enable openstack-swift-proxy.service memcached.service
systemctl start openstack-swift-proxy.service memcached.service
```

On the storage nodes, start the object storage services and configure them to start on boot:

```shell
systemctl enable openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service
systemctl start openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service
systemctl enable openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service
systemctl start openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service
systemctl enable openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service
systemctl start openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service
```
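A small smoke test of the object store (a sketch, assuming the admin credentials file ~/.admin-openrc used elsewhere in this guide; any small local file can be used as the test object):

```shell
source ~/.admin-openrc
swift stat                                    # account statistics; only works if the proxy and rings are healthy
openstack container create demo-container
openstack object create demo-container /etc/swift/swift.conf
openstack object list demo-container
```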
### Cyborg Installation

Cyborg provides support for accelerator devices in OpenStack, including GPU, FPGA, ASIC, NP, SoCs, NVMe/NOF SSDs, ODP, DPDK/SPDK and so on.

1. Initialize the database:

```
CREATE DATABASE cyborg;
GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'localhost' IDENTIFIED BY 'CYBORG_DBPASS';
GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'%' IDENTIFIED BY 'CYBORG_DBPASS';
```

2. Create the corresponding Keystone resources:

```shell
$ openstack user create --domain default --password-prompt cyborg
$ openstack role add --project service --user cyborg admin
$ openstack service create --name cyborg --description "Acceleration Service" accelerator
$ openstack endpoint create --region RegionOne \
    accelerator public http://:6666/v1
$ openstack endpoint create --region RegionOne \
    accelerator internal http://:6666/v1
$ openstack endpoint create --region RegionOne \
    accelerator admin http://:6666/v1
```

3. Install Cyborg:

```shell
yum install openstack-cyborg
```

4. Configure Cyborg. Edit /etc/cyborg/cyborg.conf:

```
[DEFAULT]
transport_url = rabbit://%RABBITMQ_USER%:%RABBITMQ_PASSWORD%@%OPENSTACK_HOST_IP%:5672/
use_syslog = False
state_path = /var/lib/cyborg
debug = True

[database]
connection = mysql+pymysql://%DATABASE_USER%:%DATABASE_PASSWORD%@%OPENSTACK_HOST_IP%/cyborg

[service_catalog]
project_domain_id = default
user_domain_id = default
project_name = service
password = PASSWORD
username = cyborg
auth_url = http://%OPENSTACK_HOST_IP%/identity
auth_type = password

[placement]
project_domain_name = Default
project_name = service
user_domain_name = Default
password = PASSWORD
username = placement
auth_url = http://%OPENSTACK_HOST_IP%/identity
auth_type = password

[keystone_authtoken]
memcached_servers = localhost:11211
project_domain_name = Default
project_name = service
user_domain_name = Default
password = PASSWORD
username = cyborg
auth_url = http://%OPENSTACK_HOST_IP%/identity
auth_type = password
```

Adjust the user names, passwords, IP addresses and so on to your environment.

5. Synchronize the database tables:

```shell
cyborg-dbsync --config-file /etc/cyborg/cyborg.conf upgrade
```

6. Start the Cyborg services:

```shell
systemctl enable openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent
systemctl start openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent
```

### Aodh Installation

1. Create the database:

```
CREATE DATABASE aodh;
GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'localhost' IDENTIFIED BY 'AODH_DBPASS';
GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'%' IDENTIFIED BY 'AODH_DBPASS';
```

2. Create the corresponding Keystone resources:

```shell
openstack user create --domain default --password-prompt aodh
openstack role add --project service --user aodh admin
openstack service create --name aodh --description "Telemetry" alarming
openstack endpoint create --region RegionOne alarming public http://controller:8042
openstack endpoint create --region RegionOne alarming internal http://controller:8042
openstack endpoint create --region RegionOne alarming admin http://controller:8042
```

3. Install Aodh:

```shell
yum install openstack-aodh-api openstack-aodh-evaluator openstack-aodh-notifier openstack-aodh-listener openstack-aodh-expirer python3-aodhclient
```

4. Modify the configuration file:

```
[database]
connection = mysql+pymysql://aodh:AODH_DBPASS@controller/aodh

[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = aodh
password = AODH_PASS

[service_credentials]
auth_type = password
auth_url = http://controller:5000/v3
project_domain_id = default
user_domain_id = default
project_name = service
username = aodh
password = AODH_PASS
interface = internalURL
region_name = RegionOne
```

5. Initialize the database:

```shell
aodh-dbsync
```

6. Start the Aodh services:

```shell
systemctl enable openstack-aodh-api.service openstack-aodh-evaluator.service openstack-aodh-notifier.service openstack-aodh-listener.service
systemctl start openstack-aodh-api.service openstack-aodh-evaluator.service openstack-aodh-notifier.service openstack-aodh-listener.service
```
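A quick check that the alarming API answers (a sketch; python3-aodhclient was installed above, and the admin credentials file ~/.admin-openrc is assumed):

```shell
source ~/.admin-openrc
# an empty list (rather than an authentication or connection error) means aodh-api is serving requests
openstack alarm list
```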
### Gnocchi Installation

1. Create the database:

```
CREATE DATABASE gnocchi;
GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'localhost' IDENTIFIED BY 'GNOCCHI_DBPASS';
GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'%' IDENTIFIED BY 'GNOCCHI_DBPASS';
```

2. Create the corresponding Keystone resources:

```shell
openstack user create --domain default --password-prompt gnocchi
openstack role add --project service --user gnocchi admin
openstack service create --name gnocchi --description "Metric Service" metric
openstack endpoint create --region RegionOne metric public http://controller:8041
openstack endpoint create --region RegionOne metric internal http://controller:8041
openstack endpoint create --region RegionOne metric admin http://controller:8041
```

3. Install Gnocchi:

```shell
yum install openstack-gnocchi-api openstack-gnocchi-metricd python3-gnocchiclient
```

4. Modify the configuration file /etc/gnocchi/gnocchi.conf:

```
[api]
auth_mode = keystone
port = 8041
uwsgi_mode = http-socket

[keystone_authtoken]
auth_type = password
auth_url = http://controller:5000/v3
project_domain_name = Default
user_domain_name = Default
project_name = service
username = gnocchi
password = GNOCCHI_PASS
interface = internalURL
region_name = RegionOne

[indexer]
url = mysql+pymysql://gnocchi:GNOCCHI_DBPASS@controller/gnocchi

[storage]
# coordination_url is not required but specifying one will improve
# performance with better workload division across workers.
coordination_url = redis://controller:6379
file_basepath = /var/lib/gnocchi
driver = file
```

5. Initialize the database:

```shell
gnocchi-upgrade
```

6. Start the Gnocchi services:

```shell
systemctl enable openstack-gnocchi-api.service openstack-gnocchi-metricd.service
systemctl start openstack-gnocchi-api.service openstack-gnocchi-metricd.service
```

### Ceilometer Installation

1. Create the corresponding Keystone resources:

```shell
openstack user create --domain default --password-prompt ceilometer
openstack role add --project service --user ceilometer admin
openstack service create --name ceilometer --description "Telemetry" metering
```

2. Install Ceilometer:

```shell
yum install openstack-ceilometer-notification openstack-ceilometer-central
```

3. Modify the configuration file /etc/ceilometer/pipeline.yaml:

```
publishers:
    # set address of Gnocchi
    # + filter out Gnocchi-related activity meters (Swift driver)
    # + set default archive policy
    - gnocchi://?filter_project=service&archive_policy=low
```

4. Modify the configuration file /etc/ceilometer/ceilometer.conf:

```
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller

[service_credentials]
auth_type = password
auth_url = http://controller:5000/v3
project_domain_id = default
user_domain_id = default
project_name = service
username = ceilometer
password = CEILOMETER_PASS
interface = internalURL
region_name = RegionOne
```

5. Initialize the database:

```shell
ceilometer-upgrade
```

6. Start the Ceilometer services:

```shell
systemctl enable openstack-ceilometer-notification.service openstack-ceilometer-central.service
systemctl start openstack-ceilometer-notification.service openstack-ceilometer-central.service
```

### Heat Installation

1. Create the heat database and grant it the proper access permissions, replacing HEAT_DBPASS with a suitable password:

```
CREATE DATABASE heat;
GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'localhost' IDENTIFIED BY 'HEAT_DBPASS';
GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'%' IDENTIFIED BY 'HEAT_DBPASS';
```

2. Create the service credentials: create the heat user and add the admin role to it:

```shell
openstack user create --domain default --password-prompt heat
openstack role add --project service --user heat admin
```

3. Create the heat and heat-cfn services and their corresponding API endpoints:

```shell
openstack service create --name heat --description "Orchestration" orchestration
openstack service create --name heat-cfn --description "Orchestration" cloudformation
openstack endpoint create --region RegionOne orchestration public http://controller:8004/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne orchestration internal http://controller:8004/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne orchestration admin http://controller:8004/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne cloudformation public http://controller:8000/v1
openstack endpoint create --region RegionOne cloudformation internal http://controller:8000/v1
openstack endpoint create --region RegionOne cloudformation admin http://controller:8000/v1
```

4. Create the additional information required for stack management, including the heat domain, its admin user heat_domain_admin, the heat_stack_owner role and the heat_stack_user role:

```shell
openstack user create --domain heat --password-prompt heat_domain_admin
openstack role add --domain heat --user-domain heat --user heat_domain_admin admin
openstack role create heat_stack_owner
openstack role create heat_stack_user
```

5. Install the packages:

```shell
yum install openstack-heat-api openstack-heat-api-cfn openstack-heat-engine
```

6. Modify the configuration file /etc/heat/heat.conf:

```
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
heat_metadata_server_url = http://controller:8000
heat_waitcondition_server_url = http://controller:8000/v1/waitcondition
stack_domain_admin = heat_domain_admin
stack_domain_admin_password = HEAT_DOMAIN_PASS
stack_user_domain_name = heat

[database]
connection = mysql+pymysql://heat:HEAT_DBPASS@controller/heat

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = heat
password = HEAT_PASS

[trustee]
auth_type = password
auth_url = http://controller:5000
username = heat
password = HEAT_PASS
user_domain_name = default

[clients_keystone]
auth_uri = http://controller:5000
```

7. Initialize the heat database tables:

```shell
su -s /bin/sh -c "heat-manage db_sync" heat
```

8. Start the services:

```shell
systemctl enable openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service
systemctl start openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service
```
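A minimal end-to-end check of the orchestration service can be done with a self-contained template that does not need Nova or Glance resources. This is only a sketch: it assumes the admin credentials file ~/.admin-openrc used elsewhere in this guide and that the python3-heatclient OSC plugin is available (install it first if it is not).

```shell
source ~/.admin-openrc

# a template that only creates a random string, so no compute resources are required
cat > /tmp/test-stack.yaml << 'EOF'
heat_template_version: 2016-10-14

resources:
  random_str:
    type: OS::Heat::RandomString
    properties:
      length: 8

outputs:
  value:
    value: { get_attr: [random_str, value] }
EOF

openstack stack create -t /tmp/test-stack.yaml demo-stack
openstack stack list
openstack stack output show demo-stack value
```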
### Quick Deployment with the OpenStack SIG Tool oos

oos (openEuler OpenStack SIG) is the command line tool provided by the OpenStack SIG. Its `oos env` family of commands provides Ansible scripts for one-click deployment of OpenStack (all in one or a three-node cluster), which you can use to quickly deploy an OpenStack environment based on openEuler RPM packages. The oos tool supports two ways of deploying an OpenStack environment: through a cloud provider (currently only the Huawei Cloud provider is supported) or by managing existing hosts. The following uses deploying an all-in-one OpenStack environment on Huawei Cloud as an example to explain how to use the oos tool.

1. Install the oos tool:

```shell
pip install openstack-sig-tool
```

2. Configure the Huawei Cloud provider information. Open the /usr/local/etc/oos/oos.conf file and change the configuration to the Huawei Cloud resources you own:

```
[huaweicloud]
ak =
sk =
region = ap-southeast-3
root_volume_size = 100
data_volume_size = 100
security_group_name = oos
image_format = openEuler-%%(release)s-%%(arch)s
vpc_name = oos_vpc
subnet1_name = oos_subnet1
subnet2_name = oos_subnet2
```

3. Configure the OpenStack environment information. Open the /usr/local/etc/oos/oos.conf file and adjust the configuration to the current machine environment and your needs. The content is as follows:

```
[environment]
mysql_root_password = root
mysql_project_password = root
rabbitmq_password = root
project_identity_password = root
enabled_service = keystone,neutron,cinder,placement,nova,glance,horizon,aodh,ceilometer,cyborg,gnocchi,kolla,heat,swift,trove,tempest
neutron_provider_interface_name = br-ex
default_ext_subnet_range = 10.100.100.0/24
default_ext_subnet_gateway = 10.100.100.1
neutron_dataplane_interface_name = eth1
cinder_block_device = vdb
swift_storage_devices = vdc
swift_hash_path_suffix = ash
swift_hash_path_prefix = has
glance_api_workers = 2
cinder_api_workers = 2
nova_api_workers = 2
nova_metadata_api_workers = 2
nova_conductor_workers = 2
nova_scheduler_workers = 2
neutron_api_workers = 2
horizon_allowed_host = *
kolla_openeuler_plugin = false
```

Key configuration items:

| Configuration item | Explanation |
|:--|:--|
| enabled_service | List of services to install; trim it according to your needs |
| neutron_provider_interface_name | Name of the neutron L3 bridge |
| default_ext_subnet_range | IP range of the neutron private network |
| default_ext_subnet_gateway | Gateway of the neutron private network |
| neutron_dataplane_interface_name | NIC used by neutron; a new NIC is recommended to avoid conflicts with existing NICs and to keep the all-in-one host from losing connectivity |
| cinder_block_device | Name of the volume device used by cinder |
| swift_storage_devices | Name of the volume device used by swift |
| kolla_openeuler_plugin | Whether to enable the kolla plugin; if set to True, kolla will support deploying openEuler containers |

4. Create an openEuler 22.03-LTS-SP4 x86_64 virtual machine on Huawei Cloud for deploying the all-in-one OpenStack:

```shell
# sshpass is used during `oos env create` to set up password-free access to the target virtual machine
dnf install sshpass
oos env create -r 22.03-lts-sp4 -f small -a x86 -n test-oos all_in_one
```

See `oos env create --help` for the detailed parameters.

5. Deploy the OpenStack all-in-one environment:

```shell
oos env setup test-oos -r train
```

See `oos env setup --help` for the detailed parameters.

6. Initialize the tempest environment. If you want to run tempest tests on this environment, run `oos env init`, which automatically creates the OpenStack resources tempest needs:

```shell
oos env init test-oos
```

After the command succeeds, a mytest directory is generated under the user's home directory; enter it and you can run the `tempest run` command.

If you deploy the OpenStack environment by managing existing hosts, the overall flow is the same as with Huawei Cloud above: steps 1, 3, 5 and 6 are unchanged, step 2 (configuring the Huawei Cloud provider information) is removed, and step 4 changes from creating a virtual machine on Huawei Cloud to managing the host:

```shell
# sshpass is used during `oos env create` to set up password-free access to the target host
dnf install sshpass
oos env manage -r 22.03-lts-sp4 -i TARGET_MACHINE_IP -p TARGET_MACHINE_PASSWD -n test-oos
```

Replace TARGET_MACHINE_IP with the IP of the target machine and TARGET_MACHINE_PASSWD with the password of the target machine. See `oos env manage --help` for the detailed parameters.
### Deploying with the OpenStack SIG Deployment Tool opensd

opensd is used to deploy the OpenStack component services in batches through scripts.

#### Deployment Steps

#### 1. Information to confirm before deployment

- When installing the operating system, set selinux to disabled.
- When installing the operating system, set UseDNS to no in the /etc/ssh/sshd_config configuration file.
- The operating system language must be set to English.
- Before deploying, make sure the /etc/hosts file on all compute nodes contains no entries resolving the compute hosts.

#### 2. Create Ceph pools and authentication (optional)

Skip this step if you do not use Ceph or already have a Ceph cluster. Run the following on any Ceph monitor node:

#### 2.1 Create the pools

```shell
ceph osd pool create volumes 2048
ceph osd pool create images 2048
```

#### 2.2 Initialize the pools

```shell
rbd pool init volumes
rbd pool init images
```

#### 2.3 Create the user authentication

```shell
ceph auth get-or-create client.glance mon 'profile rbd' osd 'profile rbd pool=images' mgr 'profile rbd pool=images'
ceph auth get-or-create client.cinder mon 'profile rbd' osd 'profile rbd pool=volumes, profile rbd pool=images' mgr 'profile rbd pool=volumes'
```

#### 3. Configure LVM (optional)

According to the disk layout and free space of the physical machine, mount additional disk space for the MySQL data directory. An example follows (adapt it to your actual situation):

```shell
fdisk -l

Disk /dev/sdd: 479.6 GB, 479559942144 bytes, 936640512 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk label type: dos
Disk identifier: 0x000ed242
```

Create the partition:

```shell
parted /dev/sdd mkpart primary 0 -1
```

Create the PV:

```shell
partprobe /dev/sdd1
pvcreate /dev/sdd1
```

Create and activate the VG:

```shell
vgcreate vg_mariadb /dev/sdd1
vgchange -ay vg_mariadb
```

Check the VG capacity:

```shell
vgdisplay

  --- Volume group ---
  VG Name               vg_mariadb
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  2
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               1
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               446.62 GiB
  PE Size               4.00 MiB
  Total PE              114335
  Alloc PE / Size       114176 / 446.00 GiB
  Free  PE / Size       159 / 636.00 MiB
  VG UUID               bVUmDc-VkMu-Vi43-mg27-TEkG-oQfK-TvqdEc
```

Create the LV:

```shell
lvcreate -L 446G -n lv_mariadb vg_mariadb
```

Format the volume and obtain its UUID:

```shell
mkfs.ext4 /dev/mapper/vg_mariadb-lv_mariadb
blkid /dev/mapper/vg_mariadb-lv_mariadb

/dev/mapper/vg_mariadb-lv_mariadb: UUID="98d513eb-5f64-4aa5-810e-dc7143884fa2" TYPE="ext4"
```

Note: 98d513eb-5f64-4aa5-810e-dc7143884fa2 is the UUID of the volume.

Mount the disk:

```shell
mount /dev/mapper/vg_mariadb-lv_mariadb /var/lib/mysql
rm -rf /var/lib/mysql/*
```
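The example above mounts the logical volume manually, so the mount would be lost on reboot. As an optional follow-up (a sketch using the UUID from the example output above; substitute the UUID reported by blkid on your own machine), the mount can be made persistent through /etc/fstab:

```shell
# make the MariaDB data mount persistent across reboots
echo 'UUID=98d513eb-5f64-4aa5-810e-dc7143884fa2 /var/lib/mysql ext4 defaults 0 2' >> /etc/fstab
mount -a   # should complete without errors if the entry is valid
```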
#### 4. Configure the yum repo

Run the following on the deployment node:

#### 4.1 Back up the yum sources

```shell
mkdir /etc/yum.repos.d/bak/
mv /etc/yum.repos.d/*.repo /etc/yum.repos.d/bak/
```

#### 4.2 Configure the yum repo

```shell
cat > /etc/yum.repos.d/opensd.repo << 'EOF'
[train]
name=train
baseurl=http://119.3.219.20:82/openEuler:/22.03:/LTS:/SP4:/Epol:/Multi-Version:/OpenStack:/Train/standard_$basearch/
enabled=1
gpgcheck=0

[epol]
name=epol
baseurl=http://119.3.219.20:82/openEuler:/22.03:/LTS:/SP4:/Epol/standard_$basearch/
enabled=1
gpgcheck=0

[everything]
name=everything
baseurl=http://119.3.219.20:82/openEuler:/22.03:/LTS:/SP4/standard_$basearch/
enabled=1
gpgcheck=0
EOF
```

#### 4.3 Refresh the yum cache

```shell
yum clean all
yum makecache
```

#### 5. Install opensd

Run the following on the deployment node:

#### 5.1 Clone the opensd source code and install it

```shell
git clone https://gitee.com/openeuler/opensd
cd opensd
python3 setup.py install
```

#### 6. Set up SSH mutual trust

Run the following on the deployment node:

#### 6.1 Generate a key pair

Run the following command and press Enter through all prompts:

```shell
ssh-keygen
```

#### 6.2 Generate the host IP address file

Configure the IPs of all hosts that will be used in auto_ssh_host_ip, for example:

```shell
cd /usr/local/share/opensd/tools/
vim auto_ssh_host_ip
```

```
10.0.0.1
10.0.0.2
...
10.0.0.10
```

#### 6.3 Change the password and run the script

Replace the string 123123 in the password-free script /usr/local/bin/opensd-auto-ssh with the real password of the hosts:

```shell
# replace the 123123 string inside the script
vim /usr/local/bin/opensd-auto-ssh

# install expect, then run the script
dnf install expect -y
opensd-auto-ssh
```

#### 6.4 Set up mutual trust between the deployment node and the Ceph monitor (optional)

```shell
ssh-copy-id root@x.x.x.x
```
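Before moving on, it is worth confirming that password-free access actually works from the deployment node to every host in the list (a small sketch based on the auto_ssh_host_ip file filled in above):

```shell
cd /usr/local/share/opensd/tools/
# every host should print its hostname without prompting for a password
while read -r ip; do
    ssh -o BatchMode=yes -o StrictHostKeyChecking=no root@"$ip" hostname
done < auto_ssh_host_ip
```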
#### 7. Configure opensd

Run the following on the deployment node:

#### 7.1 Generate random passwords

Install python3-pbr, python3-utils, python3-pyyaml and python3-oslo-utils, then generate the random passwords:

```shell
dnf install python3-pbr python3-utils python3-pyyaml python3-oslo-utils -y

# run the command to generate the passwords
opensd-genpwd

# check that the passwords were generated
cat /usr/local/share/opensd/etc_examples/opensd/passwords.yml
```

#### 7.2 Configure the inventory file

The host information consists of the host name, the ansible_host IP and the availability_zone; all three must be configured, none can be omitted. Example:

```shell
vim /usr/local/share/opensd/ansible/inventory/multinode
```

```
# host information of the three control nodes
[control]
controller1 ansible_host=10.0.0.35 availability_zone=az01.cell01.cn-yogadev-1
controller2 ansible_host=10.0.0.36 availability_zone=az01.cell01.cn-yogadev-1
controller3 ansible_host=10.0.0.37 availability_zone=az01.cell01.cn-yogadev-1

# network node information, kept identical to the control nodes
[network]
controller1 ansible_host=10.0.0.35 availability_zone=az01.cell01.cn-yogadev-1
controller2 ansible_host=10.0.0.36 availability_zone=az01.cell01.cn-yogadev-1
controller3 ansible_host=10.0.0.37 availability_zone=az01.cell01.cn-yogadev-1

# cinder-volume service node information
[storage]
storage1 ansible_host=10.0.0.61 availability_zone=az01.cell01.cn-yogadev-1
storage2 ansible_host=10.0.0.78 availability_zone=az01.cell01.cn-yogadev-1
storage3 ansible_host=10.0.0.82 availability_zone=az01.cell01.cn-yogadev-1

# Cell1 cluster information
[cell-control-cell1]
cell1 ansible_host=10.0.0.24 availability_zone=az01.cell01.cn-yogadev-1
cell2 ansible_host=10.0.0.25 availability_zone=az01.cell01.cn-yogadev-1
cell3 ansible_host=10.0.0.26 availability_zone=az01.cell01.cn-yogadev-1

[compute-cell1]
compute1 ansible_host=10.0.0.27 availability_zone=az01.cell01.cn-yogadev-1
compute2 ansible_host=10.0.0.28 availability_zone=az01.cell01.cn-yogadev-1
compute3 ansible_host=10.0.0.29 availability_zone=az01.cell01.cn-yogadev-1

[cell1:children]
cell-control-cell1
compute-cell1

# Cell2 cluster information
[cell-control-cell2]
cell4 ansible_host=10.0.0.36 availability_zone=az03.cell02.cn-yogadev-1
cell5 ansible_host=10.0.0.37 availability_zone=az03.cell02.cn-yogadev-1
cell6 ansible_host=10.0.0.38 availability_zone=az03.cell02.cn-yogadev-1

[compute-cell2]
compute4 ansible_host=10.0.0.39 availability_zone=az03.cell02.cn-yogadev-1
compute5 ansible_host=10.0.0.40 availability_zone=az03.cell02.cn-yogadev-1
compute6 ansible_host=10.0.0.41 availability_zone=az03.cell02.cn-yogadev-1

[cell2:children]
cell-control-cell2
compute-cell2

[baremetal]

[compute-cell1-ironic]

# fill in the control host groups of all cell clusters
[nova-conductor:children]
cell-control-cell1
cell-control-cell2

# fill in the compute host groups of all cell clusters
[nova-compute:children]
compute-added
compute-cell1
compute-cell2

# the host groups below do not need to be changed; keep them as they are
[compute-added]

[chrony-server:children]
control

[pacemaker:children]
control

......
......
```
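Before filling in the global variables, the inventory structure can be sanity-checked (a sketch; this installs the same ansible package that step 7.4 installs later anyway):

```shell
dnf install ansible -y
# prints the group/host tree parsed from the inventory, which makes typos easy to spot
ansible-inventory -i /usr/local/share/opensd/ansible/inventory/multinode --graph
```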
7.3 \u914d\u7f6e\u5168\u5c40\u53d8\u91cf \u00b6 \u6ce8: \u6587\u6863\u4e2d\u63d0\u5230\u7684\u6709\u6ce8\u91ca\u914d\u7f6e\u9879\u9700\u8981\u66f4\u6539\uff0c\u5176\u4ed6\u53c2\u6570\u4e0d\u9700\u8981\u66f4\u6539\uff0c\u82e5\u65e0\u76f8\u5173\u914d\u7f6e\u5219\u4e3a\u7a7a vim /usr/local/share/opensd/etc_examples/opensd/globals.yml ######################## # Network & Base options ######################## network_interface: \"eth0\" #\u7ba1\u7406\u7f51\u7edc\u7684\u7f51\u5361\u540d\u79f0 neutron_external_interface: \"eth1\" #\u4e1a\u52a1\u7f51\u7edc\u7684\u7f51\u5361\u540d\u79f0 cidr_netmask: 24 #\u7ba1\u7406\u7f51\u7684\u63a9\u7801 opensd_vip_address: 10.0.0.33 #\u63a7\u5236\u8282\u70b9\u865a\u62dfIP\u5730\u5740 cell1_vip_address: 10.0.0.34 #cell1\u96c6\u7fa4\u7684\u865a\u62dfIP\u5730\u5740 cell2_vip_address: 10.0.0.35 #cell2\u96c6\u7fa4\u7684\u865a\u62dfIP\u5730\u5740 external_fqdn: \"\" #\u7528\u4e8evnc\u8bbf\u95ee\u865a\u62df\u673a\u7684\u5916\u7f51\u57df\u540d\u5730\u5740 external_ntp_servers: [] #\u5916\u90e8ntp\u670d\u52a1\u5668\u5730\u5740 yumrepo_host: #yum\u6e90\u7684IP\u5730\u5740 yumrepo_port: #yum\u6e90\u7aef\u53e3\u53f7 environment: #yum\u6e90\u7684\u7c7b\u578b upgrade_all_packages: \"yes\" #\u662f\u5426\u5347\u7ea7\u6240\u6709\u5b89\u88c5\u7248\u7684\u7248\u672c(\u6267\u884cyum upgrade)\uff0c\u521d\u59cb\u90e8\u7f72\u8d44\u6e90\u8bf7\u8bbe\u7f6e\u4e3a\"yes\" enable_miner: \"no\" #\u662f\u5426\u5f00\u542f\u90e8\u7f72miner\u670d\u52a1 enable_chrony: \"no\" #\u662f\u5426\u5f00\u542f\u90e8\u7f72chrony\u670d\u52a1 enable_pri_mariadb: \"no\" #\u662f\u5426\u4e3a\u79c1\u6709\u4e91\u90e8\u7f72mariadb enable_hosts_file_modify: \"no\" # \u6269\u5bb9\u8ba1\u7b97\u8282\u70b9\u548c\u90e8\u7f72ironic\u670d\u52a1\u7684\u65f6\u5019\uff0c\u662f\u5426\u5c06\u8282\u70b9\u4fe1\u606f\u6dfb\u52a0\u5230`/etc/hosts` ######################## # Available zone options ######################## az_cephmon_compose: - availability_zone: #availability zone\u7684\u540d\u79f0\uff0c\u8be5\u540d\u79f0\u5fc5\u987b\u4e0emultinode\u4e3b\u673a\u6587\u4ef6\u5185\u7684az01\u7684\"availability_zone\"\u503c\u4fdd\u6301\u4e00\u81f4 ceph_mon_host: #az01\u5bf9\u5e94\u7684\u4e00\u53f0ceph monitor\u4e3b\u673a\u5730\u5740\uff0c\u90e8\u7f72\u8282\u70b9\u9700\u8981\u4e0e\u8be5\u4e3b\u673a\u505assh\u4e92\u4fe1 reserve_vcpu_based_on_numa: - availability_zone: #availability zone\u7684\u540d\u79f0\uff0c\u8be5\u540d\u79f0\u5fc5\u987b\u4e0emultinode\u4e3b\u673a\u6587\u4ef6\u5185\u7684az02\u7684\"availability_zone\"\u503c\u4fdd\u6301\u4e00\u81f4 ceph_mon_host: #az02\u5bf9\u5e94\u7684\u4e00\u53f0ceph monitor\u4e3b\u673a\u5730\u5740\uff0c\u90e8\u7f72\u8282\u70b9\u9700\u8981\u4e0e\u8be5\u4e3b\u673a\u505assh\u4e92\u4fe1 reserve_vcpu_based_on_numa: - availability_zone: #availability zone\u7684\u540d\u79f0\uff0c\u8be5\u540d\u79f0\u5fc5\u987b\u4e0emultinode\u4e3b\u673a\u6587\u4ef6\u5185\u7684az03\u7684\"availability_zone\"\u503c\u4fdd\u6301\u4e00\u81f4 ceph_mon_host: #az03\u5bf9\u5e94\u7684\u4e00\u53f0ceph monitor\u4e3b\u673a\u5730\u5740\uff0c\u90e8\u7f72\u8282\u70b9\u9700\u8981\u4e0e\u8be5\u4e3b\u673a\u505assh\u4e92\u4fe1 reserve_vcpu_based_on_numa: # `reserve_vcpu_based_on_numa`\u914d\u7f6e\u4e3a`yes` or `no`,\u4e3e\u4f8b\u8bf4\u660e\uff1a NUMA node0 CPU(s): 0-15,32-47 NUMA node1 CPU(s): 16-31,48-63 \u5f53reserve_vcpu_based_on_numa: \"yes\", \u6839\u636enuma node, \u5e73\u5747\u6bcf\u4e2anode\u9884\u7559vcpu: vcpu_pin_set = 2-15,34-47,18-31,50-63 \u5f53reserve_vcpu_based_on_numa: \"no\", 
\u4ece\u7b2c\u4e00\u4e2avcpu\u5f00\u59cb\uff0c\u987a\u5e8f\u9884\u7559vcpu: vcpu_pin_set = 8-64 ####################### # Nova options ####################### nova_reserved_host_memory_mb: 2048 #\u8ba1\u7b97\u8282\u70b9\u7ed9\u8ba1\u7b97\u670d\u52a1\u9884\u7559\u7684\u5185\u5b58\u5927\u5c0f enable_cells: \"yes\" #cell\u8282\u70b9\u662f\u5426\u5355\u72ec\u8282\u70b9\u90e8\u7f72 support_gpu: \"False\" #cell\u8282\u70b9\u662f\u5426\u6709GPU\u670d\u52a1\u5668\uff0c\u5982\u679c\u6709\u5219\u4e3aTrue\uff0c\u5426\u5219\u4e3aFalse ####################### # Neutron options ####################### monitor_ip: - 10.0.0.9 #\u914d\u7f6e\u76d1\u63a7\u8282\u70b9 - 10.0.0.10 enable_meter_full_eip: True #\u914d\u7f6e\u662f\u5426\u5141\u8bb8EIP\u5168\u91cf\u76d1\u63a7\uff0c\u9ed8\u8ba4\u4e3aTrue enable_meter_port_forwarding: True #\u914d\u7f6e\u662f\u5426\u5141\u8bb8port forwarding\u76d1\u63a7\uff0c\u9ed8\u8ba4\u4e3aTrue enable_meter_ecs_ipv6: True #\u914d\u7f6e\u662f\u5426\u5141\u8bb8ecs_ipv6\u76d1\u63a7\uff0c\u9ed8\u8ba4\u4e3aTrue enable_meter: True #\u914d\u7f6e\u662f\u5426\u5f00\u542f\u76d1\u63a7\uff0c\u9ed8\u8ba4\u4e3aTrue is_sdn_arch: False #\u914d\u7f6e\u662f\u5426\u662fsdn\u67b6\u6784\uff0c\u9ed8\u8ba4\u4e3aFalse # \u9ed8\u8ba4\u4f7f\u80fd\u7684\u7f51\u7edc\u7c7b\u578b\u662fvlan,vlan\u548cvxlan\u4e24\u79cd\u7c7b\u578b\u53ea\u80fd\u4e8c\u9009\u4e00. enable_vxlan_network_type: False # \u9ed8\u8ba4\u4f7f\u80fd\u7684\u7f51\u7edc\u7c7b\u578b\u662fvlan,\u5982\u679c\u4f7f\u7528vxlan\u7f51\u7edc\uff0c\u914d\u7f6e\u4e3aTrue, \u5982\u679c\u4f7f\u7528vlan\u7f51\u7edc\uff0c\u914d\u7f6e\u4e3aFalse. enable_neutron_fwaas: False # \u73af\u5883\u6709\u4f7f\u7528\u9632\u706b\u5899, \u8bbe\u7f6e\u4e3aTrue, \u4f7f\u80fd\u9632\u62a4\u5899\u529f\u80fd. # Neutron provider neutron_provider_networks: network_types: \"{{ 'vxlan' if enable_vxlan_network_type else 'vlan' }}\" network_vlan_ranges: \"default:xxx:xxx\" #\u90e8\u7f72\u4e4b\u524d\u89c4\u5212\u7684\u4e1a\u52a1\u7f51\u7edcvlan\u8303\u56f4 network_mappings: \"default:br-provider\" network_interface: \"{{ neutron_external_interface }}\" network_vxlan_ranges: \"\" #\u90e8\u7f72\u4e4b\u524d\u89c4\u5212\u7684\u4e1a\u52a1\u7f51\u7edcvxlan\u8303\u56f4 # \u5982\u4e0b\u8fd9\u4e9b\u914d\u7f6e\u662fSND\u63a7\u5236\u5668\u7684\u914d\u7f6e\u53c2\u6570, `enable_sdn_controller`\u8bbe\u7f6e\u4e3aTrue, \u4f7f\u80fdSND\u63a7\u5236\u5668\u529f\u80fd. # \u5176\u4ed6\u53c2\u6570\u8bf7\u6839\u636e\u90e8\u7f72\u4e4b\u524d\u7684\u89c4\u5212\u548cSDN\u90e8\u7f72\u4fe1\u606f\u786e\u5b9a. 
enable_sdn_controller: False sdn_controller_ip_address: # SDN\u63a7\u5236\u5668ip\u5730\u5740 sdn_controller_username: # SDN\u63a7\u5236\u5668\u7684\u7528\u6237\u540d sdn_controller_password: # SDN\u63a7\u5236\u5668\u7684\u7528\u6237\u5bc6\u7801 ####################### # Dimsagent options ####################### enable_dimsagent: \"no\" # \u5b89\u88c5\u955c\u50cf\u670d\u52a1agent, \u9700\u8981\u6539\u4e3ayes # Address and domain name for s2 s3_address_domain_pair: - host_ip: host_name: ####################### # Trove options ####################### enable_trove: \"no\" #\u5b89\u88c5trove \u9700\u8981\u6539\u4e3ayes #default network trove_default_neutron_networks: #trove \u7684\u7ba1\u7406\u7f51\u7edcid `openstack network list|grep -w trove-mgmt|awk '{print$2}'` #s3 setup(\u5982\u679c\u6ca1\u6709s3,\u4ee5\u4e0b\u503c\u586bnull) s3_endpoint_host_ip: #s3\u7684ip s3_endpoint_host_name: #s3\u7684\u57df\u540d s3_endpoint_url: #s3\u7684url \u00b7\u4e00\u822c\u4e3ahttp\uff1a//s3\u57df\u540d s3_access_key: #s3\u7684ak s3_secret_key: #s3\u7684sk ####################### # Ironic options ####################### enable_ironic: \"no\" #\u662f\u5426\u5f00\u673a\u88f8\u91d1\u5c5e\u90e8\u7f72\uff0c\u9ed8\u8ba4\u4e0d\u5f00\u542f ironic_neutron_provisioning_network_uuid: ironic_neutron_cleaning_network_uuid: \"{{ ironic_neutron_provisioning_network_uuid }}\" ironic_dnsmasq_interface: ironic_dnsmasq_dhcp_range: ironic_tftp_server_address: \"{{ hostvars[inventory_hostname]['ansible_' + ironic_dnsmasq_interface]['ipv4']['address'] }}\" # \u4ea4\u6362\u673a\u8bbe\u5907\u76f8\u5173\u4fe1\u606f neutron_ml2_conf_genericswitch: genericswitch:xxxxxxx: device_type: ngs_mac_address: ip: username: password: ngs_port_default_vlan: # Package state setting haproxy_package_state: \"present\" mariadb_package_state: \"present\" rabbitmq_package_state: \"present\" memcached_package_state: \"present\" ceph_client_package_state: \"present\" keystone_package_state: \"present\" glance_package_state: \"present\" cinder_package_state: \"present\" nova_package_state: \"present\" neutron_package_state: \"present\" miner_package_state: \"present\" 7.4 \u68c0\u67e5\u6240\u6709\u8282\u70b9ssh\u8fde\u63a5\u72b6\u6001 \u00b6 dnf install ansible -y ansible all -i /usr/local/share/opensd/ansible/inventory/multinode -m ping # \u6267\u884c\u7ed3\u679c\u663e\u793a\u6bcf\u53f0\u4e3b\u673a\u90fd\u662f\"SUCCESS\"\u5373\u8bf4\u660e\u8fde\u63a5\u72b6\u6001\u6ca1\u95ee\u9898,\u793a\u4f8b\uff1a compute1 | SUCCESS => { \"ansible_facts\": { \"discovered_interpreter_python\": \"/usr/bin/python\" }, \"changed\": false, \"ping\": \"pong\" } 8. 
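With many hosts, the per-host JSON output of the ping check above is easy to misread. As an optional variant, the one-line output format makes failures stand out; this is only a sketch using the same inventory path as above, and any host that does not report SUCCESS should have its SSH trust and network checked before continuing.

```shell
# Compact ping over all inventory hosts; print only the hosts that did NOT answer SUCCESS
ansible all -i /usr/local/share/opensd/ansible/inventory/multinode -m ping -o | grep -v SUCCESS
```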
Run the deployment ¶ Run on the deployment node: 8.1 Run bootstrap ¶ # Run the deployment opensd -i /usr/local/share/opensd/ansible/inventory/multinode bootstrap --forks 50 8.2 Reboot the servers ¶ Note: the reboot is needed because bootstrap may upgrade the kernel, change the SELinux configuration, or the environment may contain GPU servers; if the nodes were already installed with the new kernel, SELinux is disabled, and there are no GPU servers, this step can be skipped. # Manually reboot the corresponding nodes by running init 6 # After the reboot completes, check connectivity again ansible all -i /usr/local/share/opensd/ansible/inventory/multinode -m ping # After the operating system has rebooted, enable the yum repository again 8.3 Run the pre-deployment checks ¶ opensd -i /usr/local/share/opensd/ansible/inventory/multinode prechecks --forks 50 8.4 Run the deployment ¶ ln -s /usr/bin/python3 /usr/bin/python Full deployment: opensd -i /usr/local/share/opensd/ansible/inventory/multinode deploy --forks 50 Single-service deployment: opensd -i /usr/local/share/opensd/ansible/inventory/multinode deploy --forks 50 -t service_name","title":"openEuler-22.03-LTS-SP4_Train"},{"location":"install/openEuler-22.03-LTS-SP4/OpenStack-train/#openstack-train","text":"OpenStack-Train Deployment Guide OpenStack Introduction Conventions Prepare the Environment Environment Configuration Install SQL DataBase Install RabbitMQ Install Memcached Install OpenStack Keystone Installation Glance Installation Placement Installation Nova Installation Neutron Installation Cinder Installation horizon Installation Tempest Installation Ironic Installation Kolla Installation Trove Installation Swift Installation Cyborg Installation Aodh Installation Gnocchi Installation Ceilometer Installation Heat Installation Quick deployment based on the OpenStack SIG development tool oos Deployment based on the OpenStack SIG deployment tool opensd Deployment steps 1. Information to confirm before deployment 2. Ceph pool and authentication creation (optional) 2.1 Create pools: 2.2 Initialize pools 2.3 Create user authentication 3. Configure LVM (optional) 4. Configure the yum repo 4.1 Back up the yum source 4.2 Configure the yum repo 4.3 Update the yum cache 5. Install opensd 5.1 Clone the opensd source code and install it 6. Set up SSH mutual trust 6.1 Generate a key pair 6.2 Generate the host IP address file 6.3 Change the password and run the script 6.4 Set up trust between the deployment node and the ceph monitor (optional) 7. Configure opensd 7.1 Generate random passwords 7.2 Configure the inventory file 7.3 Configure global variables 7.4 Check the SSH connection status of all nodes 8.
\u6267\u884c\u90e8\u7f72 8.1 \u6267\u884cbootstrap 8.2 \u91cd\u542f\u670d\u52a1\u5668 8.3 \u6267\u884c\u90e8\u7f72\u524d\u68c0\u67e5 8.4 \u6267\u884c\u90e8\u7f72","title":"OpenStack-Train \u90e8\u7f72\u6307\u5357"},{"location":"install/openEuler-22.03-LTS-SP4/OpenStack-train/#openstack","text":"OpenStack \u662f\u4e00\u4e2a\u793e\u533a\uff0c\u4e5f\u662f\u4e00\u4e2a\u9879\u76ee\u3002\u5b83\u63d0\u4f9b\u4e86\u4e00\u4e2a\u90e8\u7f72\u4e91\u7684\u64cd\u4f5c\u5e73\u53f0\u6216\u5de5\u5177\u96c6\uff0c\u4e3a\u7ec4\u7ec7\u63d0\u4f9b\u53ef\u6269\u5c55\u7684\u3001\u7075\u6d3b\u7684\u4e91\u8ba1\u7b97\u3002 \u4f5c\u4e3a\u4e00\u4e2a\u5f00\u6e90\u7684\u4e91\u8ba1\u7b97\u7ba1\u7406\u5e73\u53f0\uff0cOpenStack \u7531nova\u3001cinder\u3001neutron\u3001glance\u3001keystone\u3001horizon\u7b49\u51e0\u4e2a\u4e3b\u8981\u7684\u7ec4\u4ef6\u7ec4\u5408\u8d77\u6765\u5b8c\u6210\u5177\u4f53\u5de5\u4f5c\u3002OpenStack \u652f\u6301\u51e0\u4e4e\u6240\u6709\u7c7b\u578b\u7684\u4e91\u73af\u5883\uff0c\u9879\u76ee\u76ee\u6807\u662f\u63d0\u4f9b\u5b9e\u65bd\u7b80\u5355\u3001\u53ef\u5927\u89c4\u6a21\u6269\u5c55\u3001\u4e30\u5bcc\u3001\u6807\u51c6\u7edf\u4e00\u7684\u4e91\u8ba1\u7b97\u7ba1\u7406\u5e73\u53f0\u3002OpenStack \u901a\u8fc7\u5404\u79cd\u4e92\u8865\u7684\u670d\u52a1\u63d0\u4f9b\u4e86\u57fa\u7840\u8bbe\u65bd\u5373\u670d\u52a1\uff08IaaS\uff09\u7684\u89e3\u51b3\u65b9\u6848\uff0c\u6bcf\u4e2a\u670d\u52a1\u63d0\u4f9b API \u8fdb\u884c\u96c6\u6210\u3002 openEuler 22.03-LTS-SP4\u7248\u672c\u5b98\u65b9\u6e90\u5df2\u7ecf\u652f\u6301 OpenStack-Train \u7248\u672c\uff0c\u7528\u6237\u53ef\u4ee5\u914d\u7f6e\u597d yum \u6e90\u540e\u6839\u636e\u6b64\u6587\u6863\u8fdb\u884c OpenStack \u90e8\u7f72\u3002","title":"OpenStack \u7b80\u4ecb"},{"location":"install/openEuler-22.03-LTS-SP4/OpenStack-train/#_1","text":"OpenStack \u652f\u6301\u591a\u79cd\u5f62\u6001\u90e8\u7f72\uff0c\u6b64\u6587\u6863\u652f\u6301 ALL in One \u4ee5\u53ca Distributed \u4e24\u79cd\u90e8\u7f72\u65b9\u5f0f\uff0c\u6309\u7167\u5982\u4e0b\u65b9\u5f0f\u7ea6\u5b9a\uff1a ALL in One \u6a21\u5f0f: \u5ffd\u7565\u6240\u6709\u53ef\u80fd\u7684\u540e\u7f00 Distributed \u6a21\u5f0f: \u4ee5 `(CTL)` \u4e3a\u540e\u7f00\u8868\u793a\u6b64\u6761\u914d\u7f6e\u6216\u8005\u547d\u4ee4\u4ec5\u9002\u7528`\u63a7\u5236\u8282\u70b9` \u4ee5 `(CPT)` \u4e3a\u540e\u7f00\u8868\u793a\u6b64\u6761\u914d\u7f6e\u6216\u8005\u547d\u4ee4\u4ec5\u9002\u7528`\u8ba1\u7b97\u8282\u70b9` \u4ee5 `(STG)` \u4e3a\u540e\u7f00\u8868\u793a\u6b64\u6761\u914d\u7f6e\u6216\u8005\u547d\u4ee4\u4ec5\u9002\u7528`\u5b58\u50a8\u8282\u70b9` \u9664\u6b64\u4e4b\u5916\u8868\u793a\u6b64\u6761\u914d\u7f6e\u6216\u8005\u547d\u4ee4\u540c\u65f6\u9002\u7528`\u63a7\u5236\u8282\u70b9`\u548c`\u8ba1\u7b97\u8282\u70b9` \u6ce8\u610f \u6d89\u53ca\u5230\u4ee5\u4e0a\u7ea6\u5b9a\u7684\u670d\u52a1\u5982\u4e0b\uff1a Cinder Nova Neutron","title":"\u7ea6\u5b9a"},{"location":"install/openEuler-22.03-LTS-SP4/OpenStack-train/#_2","text":"","title":"\u51c6\u5907\u73af\u5883"},{"location":"install/openEuler-22.03-LTS-SP4/OpenStack-train/#_3","text":"\u542f\u52a8OpenStack Train yum\u6e90 yum update yum install openstack-release-train yum clean all && yum makecache \u6ce8\u610f \uff1a\u5982\u679c\u4f60\u7684\u73af\u5883\u7684YUM\u6e90\u6ca1\u6709\u542f\u7528EPOL\uff0c\u9700\u8981\u540c\u65f6\u914d\u7f6eEPOL\uff0c\u786e\u4fddEPOL\u5df2\u914d\u7f6e\uff0c\u5982\u4e0b\u6240\u793a vi /etc/yum.repos.d/openEuler.repo [EPOL] name=EPOL baseurl=http://repo.openeuler.org/openEuler-22.04-LTS-SP4/EPOL/main/$basearch/ enabled=1 gpgcheck=1 
gpgkey=http://repo.openeuler.org/openEuler-22.04-LTS-SP4/OS/$basearch/RPM-GPG-KEY-openEuler EOF \u4fee\u6539\u4e3b\u673a\u540d\u4ee5\u53ca\u6620\u5c04 \u8bbe\u7f6e\u5404\u4e2a\u8282\u70b9\u7684\u4e3b\u673a\u540d hostnamectl set-hostname controller (CTL) hostnamectl set-hostname compute (CPT) \u5047\u8bbecontroller\u8282\u70b9\u7684IP\u662f 10.0.0.11 ,compute\u8282\u70b9\u7684IP\u662f 10.0.0.12 \uff08\u5982\u679c\u5b58\u5728\u7684\u8bdd\uff09,\u5219\u4e8e /etc/hosts \u65b0\u589e\u5982\u4e0b\uff1a 10.0.0.11 controller 10.0.0.12 compute","title":"\u73af\u5883\u914d\u7f6e"},{"location":"install/openEuler-22.03-LTS-SP4/OpenStack-train/#sql-database","text":"\u6267\u884c\u5982\u4e0b\u547d\u4ee4\uff0c\u5b89\u88c5\u8f6f\u4ef6\u5305\u3002 yum install mariadb mariadb-server python3-PyMySQL \u6267\u884c\u5982\u4e0b\u547d\u4ee4\uff0c\u521b\u5efa\u5e76\u7f16\u8f91 /etc/my.cnf.d/openstack.cnf \u6587\u4ef6\u3002 vim /etc/my.cnf.d/openstack.cnf [mysqld] bind-address = 10.0.0.11 default-storage-engine = innodb innodb_file_per_table = on max_connections = 4096 collation-server = utf8_general_ci character-set-server = utf8 \u6ce8\u610f \u5176\u4e2d bind-address \u8bbe\u7f6e\u4e3a\u63a7\u5236\u8282\u70b9\u7684\u7ba1\u7406IP\u5730\u5740\u3002 \u542f\u52a8 DataBase \u670d\u52a1\uff0c\u5e76\u4e3a\u5176\u914d\u7f6e\u5f00\u673a\u81ea\u542f\u52a8\uff1a systemctl enable mariadb.service systemctl start mariadb.service \u914d\u7f6eDataBase\u7684\u9ed8\u8ba4\u5bc6\u7801\uff08\u53ef\u9009\uff09 mysql_secure_installation \u6ce8\u610f \u6839\u636e\u63d0\u793a\u8fdb\u884c\u5373\u53ef","title":"\u5b89\u88c5 SQL DataBase"},{"location":"install/openEuler-22.03-LTS-SP4/OpenStack-train/#rabbitmq","text":"\u6267\u884c\u5982\u4e0b\u547d\u4ee4\uff0c\u5b89\u88c5\u8f6f\u4ef6\u5305\u3002 yum install rabbitmq-server \u542f\u52a8 RabbitMQ \u670d\u52a1\uff0c\u5e76\u4e3a\u5176\u914d\u7f6e\u5f00\u673a\u81ea\u542f\u52a8\u3002 systemctl enable rabbitmq-server.service systemctl start rabbitmq-server.service \u6dfb\u52a0 OpenStack\u7528\u6237\u3002 rabbitmqctl add_user openstack RABBIT_PASS \u6ce8\u610f \u66ff\u6362 RABBIT_PASS \uff0c\u4e3a OpenStack \u7528\u6237\u8bbe\u7f6e\u5bc6\u7801 \u8bbe\u7f6eopenstack\u7528\u6237\u6743\u9650\uff0c\u5141\u8bb8\u8fdb\u884c\u914d\u7f6e\u3001\u5199\u3001\u8bfb\uff1a rabbitmqctl set_permissions openstack \".*\" \".*\" \".*\"","title":"\u5b89\u88c5 RabbitMQ"},{"location":"install/openEuler-22.03-LTS-SP4/OpenStack-train/#memcached","text":"\u6267\u884c\u5982\u4e0b\u547d\u4ee4\uff0c\u5b89\u88c5\u4f9d\u8d56\u8f6f\u4ef6\u5305\u3002 yum install memcached python3-memcached \u7f16\u8f91 /etc/sysconfig/memcached \u6587\u4ef6\u3002 vim /etc/sysconfig/memcached OPTIONS=\"-l 127.0.0.1,::1,controller\" \u6267\u884c\u5982\u4e0b\u547d\u4ee4\uff0c\u542f\u52a8 Memcached \u670d\u52a1\uff0c\u5e76\u4e3a\u5176\u914d\u7f6e\u5f00\u673a\u542f\u52a8\u3002 systemctl enable memcached.service systemctl start memcached.service \u6ce8\u610f \u670d\u52a1\u542f\u52a8\u540e\uff0c\u53ef\u4ee5\u901a\u8fc7\u547d\u4ee4 memcached-tool controller stats \u786e\u4fdd\u542f\u52a8\u6b63\u5e38\uff0c\u670d\u52a1\u53ef\u7528\uff0c\u5176\u4e2d\u53ef\u4ee5\u5c06 controller \u66ff\u6362\u4e3a\u63a7\u5236\u8282\u70b9\u7684\u7ba1\u7406IP\u5730\u5740\u3002","title":"\u5b89\u88c5 Memcached"},{"location":"install/openEuler-22.03-LTS-SP4/OpenStack-train/#openstack_1","text":"","title":"\u5b89\u88c5 OpenStack"},{"location":"install/openEuler-22.03-LTS-SP4/OpenStack-train/#keystone","text":"\u521b\u5efa keystone \u6570\u636e\u5e93\u5e76\u6388\u6743\u3002 mysql -u 
root -p MariaDB [(none)]> CREATE DATABASE keystone; MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \ IDENTIFIED BY 'KEYSTONE_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \ IDENTIFIED BY 'KEYSTONE_DBPASS'; MariaDB [(none)]> exit Note: replace KEYSTONE_DBPASS to set a password for the Keystone database. Install the packages. yum install openstack-keystone httpd mod_wsgi Configure keystone: vim /etc/keystone/keystone.conf [database] connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone [token] provider = fernet Explanation: the [database] section configures the database entry; the [token] section configures the token provider. Note: replace KEYSTONE_DBPASS with the password of the Keystone database. Synchronize the database. su -s /bin/sh -c "keystone-manage db_sync" keystone Initialize the Fernet key repositories. keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone keystone-manage credential_setup --keystone-user keystone --keystone-group keystone Bootstrap the service. keystone-manage bootstrap --bootstrap-password ADMIN_PASS \ --bootstrap-admin-url http://controller:5000/v3/ \ --bootstrap-internal-url http://controller:5000/v3/ \ --bootstrap-public-url http://controller:5000/v3/ \ --bootstrap-region-id RegionOne Note: replace ADMIN_PASS to set a password for the admin user. Configure the Apache HTTP server: vim /etc/httpd/conf/httpd.conf ServerName controller ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/ Explanation: the ServerName entry must reference the controller node. Note: if the ServerName entry does not exist, it needs to be created. Start the Apache HTTP service. systemctl enable httpd.service systemctl start httpd.service Create the environment variable configuration. cat << EOF >> ~/.admin-openrc export OS_PROJECT_DOMAIN_NAME=Default export OS_USER_DOMAIN_NAME=Default export OS_PROJECT_NAME=admin export OS_USERNAME=admin export OS_PASSWORD=ADMIN_PASS export OS_AUTH_URL=http://controller:5000/v3 export OS_IDENTITY_API_VERSION=3 export OS_IMAGE_API_VERSION=2 EOF Note: replace ADMIN_PASS with the password of the admin user. Create the domain, projects, users, and roles in turn; python3-openstackclient must be installed first: yum install python3-openstackclient Import the environment variables: source ~/.admin-openrc Create the project service; the domain default was already created during keystone-manage bootstrap: openstack domain create --description "An Example Domain" example openstack project create --domain default --description "Service Project" service Create a (non-admin) project myproject, a user myuser, and a role myrole, and add the role myrole to myproject and myuser: openstack project create --domain default --description "Demo Project" myproject openstack user create --domain default --password-prompt myuser openstack role create myrole openstack role add --project myproject --user myuser myrole Verification Unset the temporary environment variables OS_AUTH_URL and OS_PASSWORD: source ~/.admin-openrc unset OS_AUTH_URL OS_PASSWORD Request a token for the admin user:
openstack --os-auth-url http://controller:5000/v3 \\ --os-project-domain-name Default --os-user-domain-name Default \\ --os-project-name admin --os-username admin token issue \u4e3amyuser\u7528\u6237\u8bf7\u6c42token\uff1a openstack --os-auth-url http://controller:5000/v3 \\ --os-project-domain-name Default --os-user-domain-name Default \\ --os-project-name myproject --os-username myuser token issue","title":"Keystone \u5b89\u88c5"},{"location":"install/openEuler-22.03-LTS-SP4/OpenStack-train/#glance","text":"\u521b\u5efa\u6570\u636e\u5e93\u3001\u670d\u52a1\u51ed\u8bc1\u548c API \u7aef\u70b9 \u521b\u5efa\u6570\u636e\u5e93\uff1a mysql -u root -p MariaDB [(none)]> CREATE DATABASE glance; MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \\ IDENTIFIED BY 'GLANCE_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \\ IDENTIFIED BY 'GLANCE_DBPASS'; MariaDB [(none)]> exit \u6ce8\u610f: \u66ff\u6362 GLANCE_DBPASS \uff0c\u4e3a glance \u6570\u636e\u5e93\u8bbe\u7f6e\u5bc6\u7801 \u521b\u5efa\u670d\u52a1\u51ed\u8bc1 source ~/.admin-openrc openstack user create --domain default --password-prompt glance openstack role add --project service --user glance admin openstack service create --name glance --description \"OpenStack Image\" image \u521b\u5efa\u955c\u50cf\u670d\u52a1API\u7aef\u70b9\uff1a openstack endpoint create --region RegionOne image public http://controller:9292 openstack endpoint create --region RegionOne image internal http://controller:9292 openstack endpoint create --region RegionOne image admin http://controller:9292 \u5b89\u88c5\u8f6f\u4ef6\u5305 yum install openstack-glance \u914d\u7f6eglance\u76f8\u5173\u914d\u7f6e\uff1a vim /etc/glance/glance-api.conf [database] connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance [keystone_authtoken] www_authenticate_uri = http://controller:5000 auth_url = http://controller:5000 memcached_servers = controller:11211 auth_type = password project_domain_name = Default user_domain_name = Default project_name = service username = glance password = GLANCE_PASS [paste_deploy] flavor = keystone [glance_store] stores = file,http default_store = file filesystem_store_datadir = /var/lib/glance/images/ \u89e3\u91ca: [database]\u90e8\u5206\uff0c\u914d\u7f6e\u6570\u636e\u5e93\u5165\u53e3 [keystone_authtoken] [paste_deploy]\u90e8\u5206\uff0c\u914d\u7f6e\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u5165\u53e3 [glance_store]\u90e8\u5206\uff0c\u914d\u7f6e\u672c\u5730\u6587\u4ef6\u7cfb\u7edf\u5b58\u50a8\u548c\u955c\u50cf\u6587\u4ef6\u7684\u4f4d\u7f6e \u6ce8\u610f \u66ff\u6362 GLANCE_DBPASS \u4e3a glance \u6570\u636e\u5e93\u7684\u5bc6\u7801 \u66ff\u6362 GLANCE_PASS \u4e3a glance \u7528\u6237\u7684\u5bc6\u7801 \u540c\u6b65\u6570\u636e\u5e93\uff1a su -s /bin/sh -c \"glance-manage db_sync\" glance \u542f\u52a8\u670d\u52a1\uff1a systemctl enable openstack-glance-api.service systemctl start openstack-glance-api.service \u9a8c\u8bc1 \u4e0b\u8f7d\u955c\u50cf source ~/.admin-openrc wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img \u6ce8\u610f \u5982\u679c\u60a8\u4f7f\u7528\u7684\u73af\u5883\u662f\u9cb2\u9e4f\u67b6\u6784\uff0c\u8bf7\u4e0b\u8f7daarch64\u7248\u672c\u7684\u955c\u50cf\uff1b\u5df2\u5bf9\u955c\u50cfcirros-0.5.2-aarch64-disk.img\u8fdb\u884c\u6d4b\u8bd5\u3002 \u5411Image\u670d\u52a1\u4e0a\u4f20\u955c\u50cf\uff1a openstack image create --disk-format qcow2 --container-format bare \\ --file cirros-0.4.0-x86_64-disk.img --public cirros 
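For aarch64 (Kunpeng) environments, the note above points to cirros-0.5.2-aarch64-disk.img instead of the x86_64 image. A minimal sketch of the equivalent download and upload follows; the download URL is assumed to follow the usual cirros-cloud directory layout and should be adjusted if your mirror differs.

```shell
# Download the aarch64 cirros image (assumed URL layout) and upload it to Glance
wget http://download.cirros-cloud.net/0.5.2/cirros-0.5.2-aarch64-disk.img
openstack image create --disk-format qcow2 --container-format bare \
    --file cirros-0.5.2-aarch64-disk.img --public cirros
```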
\u786e\u8ba4\u955c\u50cf\u4e0a\u4f20\u5e76\u9a8c\u8bc1\u5c5e\u6027\uff1a openstack image list","title":"Glance \u5b89\u88c5"},{"location":"install/openEuler-22.03-LTS-SP4/OpenStack-train/#placement","text":"\u521b\u5efa\u6570\u636e\u5e93\u3001\u670d\u52a1\u51ed\u8bc1\u548c API \u7aef\u70b9 \u521b\u5efa\u6570\u636e\u5e93\uff1a \u4f5c\u4e3a root \u7528\u6237\u8bbf\u95ee\u6570\u636e\u5e93\uff0c\u521b\u5efa placement \u6570\u636e\u5e93\u5e76\u6388\u6743\u3002 mysql -u root -p MariaDB [(none)]> CREATE DATABASE placement; MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' \\ IDENTIFIED BY 'PLACEMENT_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' \\ IDENTIFIED BY 'PLACEMENT_DBPASS'; MariaDB [(none)]> exit \u6ce8\u610f \u66ff\u6362 PLACEMENT_DBPASS \u4e3a placement \u6570\u636e\u5e93\u8bbe\u7f6e\u5bc6\u7801 source admin-openrc \u6267\u884c\u5982\u4e0b\u547d\u4ee4\uff0c\u521b\u5efa placement \u670d\u52a1\u51ed\u8bc1\u3001\u521b\u5efa placement \u7528\u6237\u4ee5\u53ca\u6dfb\u52a0\u2018admin\u2019\u89d2\u8272\u5230\u7528\u6237\u2018placement\u2019\u3002 \u521b\u5efaPlacement API\u670d\u52a1 openstack user create --domain default --password-prompt placement openstack role add --project service --user placement admin openstack service create --name placement --description \"Placement API\" placement \u521b\u5efaplacement\u670d\u52a1API\u7aef\u70b9\uff1a openstack endpoint create --region RegionOne placement public http://controller:8778 openstack endpoint create --region RegionOne placement internal http://controller:8778 openstack endpoint create --region RegionOne placement admin http://controller:8778 \u5b89\u88c5\u548c\u914d\u7f6e \u5b89\u88c5\u8f6f\u4ef6\u5305\uff1a yum install openstack-placement-api \u914d\u7f6eplacement\uff1a \u7f16\u8f91 /etc/placement/placement.conf \u6587\u4ef6\uff1a \u5728[placement_database]\u90e8\u5206\uff0c\u914d\u7f6e\u6570\u636e\u5e93\u5165\u53e3 \u5728[api] [keystone_authtoken]\u90e8\u5206\uff0c\u914d\u7f6e\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u5165\u53e3 # vim /etc/placement/placement.conf [placement_database] # ... connection = mysql+pymysql://placement:PLACEMENT_DBPASS@controller/placement [api] # ... auth_strategy = keystone [keystone_authtoken] # ... auth_url = http://controller:5000/v3 memcached_servers = controller:11211 auth_type = password project_domain_name = Default user_domain_name = Default project_name = service username = placement password = PLACEMENT_PASS \u5176\u4e2d\uff0c\u66ff\u6362 PLACEMENT_DBPASS \u4e3a placement \u6570\u636e\u5e93\u7684\u5bc6\u7801\uff0c\u66ff\u6362 PLACEMENT_PASS \u4e3a placement \u7528\u6237\u7684\u5bc6\u7801\u3002 \u540c\u6b65\u6570\u636e\u5e93\uff1a su -s /bin/sh -c \"placement-manage db sync\" placement \u542f\u52a8httpd\u670d\u52a1\uff1a systemctl restart httpd \u9a8c\u8bc1 \u6267\u884c\u5982\u4e0b\u547d\u4ee4\uff0c\u6267\u884c\u72b6\u6001\u68c0\u67e5\uff1a . 
admin-openrc placement-status upgrade check \u5b89\u88c5osc-placement\uff0c\u5217\u51fa\u53ef\u7528\u7684\u8d44\u6e90\u7c7b\u522b\u53ca\u7279\u6027\uff1a yum install python3-osc-placement openstack --os-placement-api-version 1.2 resource class list --sort-column name openstack --os-placement-api-version 1.6 trait list --sort-column name","title":"Placement\u5b89\u88c5"},{"location":"install/openEuler-22.03-LTS-SP4/OpenStack-train/#nova","text":"\u521b\u5efa\u6570\u636e\u5e93\u3001\u670d\u52a1\u51ed\u8bc1\u548c API \u7aef\u70b9 \u521b\u5efa\u6570\u636e\u5e93\uff1a mysql -u root -p (CTL) MariaDB [(none)]> CREATE DATABASE nova_api; MariaDB [(none)]> CREATE DATABASE nova; MariaDB [(none)]> CREATE DATABASE nova_cell0; MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \\ IDENTIFIED BY 'NOVA_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \\ IDENTIFIED BY 'NOVA_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \\ IDENTIFIED BY 'NOVA_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \\ IDENTIFIED BY 'NOVA_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \\ IDENTIFIED BY 'NOVA_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \\ IDENTIFIED BY 'NOVA_DBPASS'; MariaDB [(none)]> exit \u6ce8\u610f \u66ff\u6362NOVA_DBPASS\uff0c\u4e3anova\u6570\u636e\u5e93\u8bbe\u7f6e\u5bc6\u7801 source ~/.admin-openrc (CTL) \u521b\u5efanova\u670d\u52a1\u51ed\u8bc1: openstack user create --domain default --password-prompt nova (CTL) openstack role add --project service --user nova admin (CTL) openstack service create --name nova --description \"OpenStack Compute\" compute (CTL) \u521b\u5efanova API\u7aef\u70b9\uff1a openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1 (CTL) openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1 (CTL) openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1 (CTL) \u5b89\u88c5\u8f6f\u4ef6\u5305 yum install openstack-nova-api openstack-nova-conductor \\ (CTL) openstack-nova-novncproxy openstack-nova-scheduler yum install openstack-nova-compute (CPT) \u6ce8\u610f \u5982\u679c\u4e3aarm64\u7ed3\u6784\uff0c\u8fd8\u9700\u8981\u6267\u884c\u4ee5\u4e0b\u547d\u4ee4 yum install edk2-aarch64 (CPT) \u914d\u7f6enova\u76f8\u5173\u914d\u7f6e vim /etc/nova/nova.conf [DEFAULT] enabled_apis = osapi_compute,metadata transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/ my_ip = 10.0.0.1 use_neutron = true firewall_driver = nova.virt.firewall.NoopFirewallDriver compute_driver=libvirt.LibvirtDriver (CPT) instances_path = /var/lib/nova/instances/ (CPT) lock_path = /var/lib/nova/tmp (CPT) [api_database] connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api (CTL) [database] connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova (CTL) [api] auth_strategy = keystone [keystone_authtoken] www_authenticate_uri = http://controller:5000/ auth_url = http://controller:5000/ memcached_servers = controller:11211 auth_type = password project_domain_name = Default user_domain_name = Default project_name = service username = nova password = NOVA_PASS [vnc] enabled = true server_listen = $my_ip server_proxyclient_address = $my_ip novncproxy_base_url = http://controller:6080/vnc_auto.html (CPT) [glance] api_servers = http://controller:9292 [oslo_concurrency] lock_path = /var/lib/nova/tmp (CTL) [placement] region_name = RegionOne 
project_domain_name = Default project_name = service auth_type = password user_domain_name = Default auth_url = http://controller:5000/v3 username = placement password = PLACEMENT_PASS [neutron] auth_url = http://controller:5000 auth_type = password project_domain_name = default user_domain_name = default region_name = RegionOne project_name = service username = neutron password = NEUTRON_PASS service_metadata_proxy = true (CTL) metadata_proxy_shared_secret = METADATA_SECRET (CTL) \u89e3\u91ca [default]\u90e8\u5206\uff0c\u542f\u7528\u8ba1\u7b97\u548c\u5143\u6570\u636e\u7684API\uff0c\u914d\u7f6eRabbitMQ\u6d88\u606f\u961f\u5217\u5165\u53e3\uff0c\u914d\u7f6emy_ip\uff0c\u542f\u7528\u7f51\u7edc\u670d\u52a1neutron\uff1b [api_database] [database]\u90e8\u5206\uff0c\u914d\u7f6e\u6570\u636e\u5e93\u5165\u53e3\uff1b [api] [keystone_authtoken]\u90e8\u5206\uff0c\u914d\u7f6e\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u5165\u53e3\uff1b [vnc]\u90e8\u5206\uff0c\u542f\u7528\u5e76\u914d\u7f6e\u8fdc\u7a0b\u63a7\u5236\u53f0\u5165\u53e3\uff1b [glance]\u90e8\u5206\uff0c\u914d\u7f6e\u955c\u50cf\u670d\u52a1API\u7684\u5730\u5740\uff1b [oslo_concurrency]\u90e8\u5206\uff0c\u914d\u7f6elock path\uff1b [placement]\u90e8\u5206\uff0c\u914d\u7f6eplacement\u670d\u52a1\u7684\u5165\u53e3\u3002 \u6ce8\u610f \u66ff\u6362 RABBIT_PASS \u4e3a RabbitMQ \u4e2d openstack \u8d26\u6237\u7684\u5bc6\u7801\uff1b \u914d\u7f6e my_ip \u4e3a\u63a7\u5236\u8282\u70b9\u7684\u7ba1\u7406IP\u5730\u5740\uff1b \u66ff\u6362 NOVA_DBPASS \u4e3anova\u6570\u636e\u5e93\u7684\u5bc6\u7801\uff1b \u66ff\u6362 NOVA_PASS \u4e3anova\u7528\u6237\u7684\u5bc6\u7801\uff1b \u66ff\u6362 PLACEMENT_PASS \u4e3aplacement\u7528\u6237\u7684\u5bc6\u7801\uff1b \u66ff\u6362 NEUTRON_PASS \u4e3aneutron\u7528\u6237\u7684\u5bc6\u7801\uff1b \u66ff\u6362 METADATA_SECRET \u4e3a\u5408\u9002\u7684\u5143\u6570\u636e\u4ee3\u7406secret\u3002 \u989d\u5916 \u786e\u5b9a\u662f\u5426\u652f\u6301\u865a\u62df\u673a\u786c\u4ef6\u52a0\u901f\uff08x86\u67b6\u6784\uff09\uff1a egrep -c '(vmx|svm)' /proc/cpuinfo (CPT) \u5982\u679c\u8fd4\u56de\u503c\u4e3a0\u5219\u4e0d\u652f\u6301\u786c\u4ef6\u52a0\u901f\uff0c\u9700\u8981\u914d\u7f6elibvirt\u4f7f\u7528QEMU\u800c\u4e0d\u662fKVM\uff1a vim /etc/nova/nova.conf (CPT) [libvirt] virt_type = qemu \u5982\u679c\u8fd4\u56de\u503c\u4e3a1\u6216\u66f4\u5927\u7684\u503c\uff0c\u5219\u652f\u6301\u786c\u4ef6\u52a0\u901f\uff0c\u5219 virt_type \u53ef\u4ee5\u914d\u7f6e\u4e3a kvm \u6ce8\u610f \u5982\u679c\u4e3aarm64\u7ed3\u6784\uff0c\u8fd8\u9700\u8981\u5728\u8ba1\u7b97\u8282\u70b9\u6267\u884c\u4ee5\u4e0b\u547d\u4ee4 mkdir -p /usr/share/AAVMF chown nova:nova /usr/share/AAVMF ln -s /usr/share/edk2/aarch64/QEMU_EFI-pflash.raw \\ /usr/share/AAVMF/AAVMF_CODE.fd ln -s /usr/share/edk2/aarch64/vars-template-pflash.raw \\ /usr/share/AAVMF/AAVMF_VARS.fd vim /etc/libvirt/qemu.conf nvram = [\"/usr/share/AAVMF/AAVMF_CODE.fd: \\ /usr/share/AAVMF/AAVMF_VARS.fd\", \\ \"/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw: \\ /usr/share/edk2/aarch64/vars-template-pflash.raw\"] \u5e76\u4e14\u5f53ARM\u67b6\u6784\u4e0b\u7684\u90e8\u7f72\u73af\u5883\u4e3a\u5d4c\u5957\u865a\u62df\u5316\u65f6\uff0c libvirt \u914d\u7f6e\u5982\u4e0b\uff1a [libvirt] virt_type = qemu cpu_mode = custom cpu_model = cortex-a72 \u540c\u6b65\u6570\u636e\u5e93 \u540c\u6b65nova-api\u6570\u636e\u5e93\uff1a su -s /bin/sh -c \"nova-manage api_db sync\" nova (CTL) \u6ce8\u518ccell0\u6570\u636e\u5e93\uff1a su -s /bin/sh -c \"nova-manage cell_v2 map_cell0\" nova (CTL) \u521b\u5efacell1 cell\uff1a su -s /bin/sh -c \"nova-manage cell_v2 create_cell 
--name=cell1 --verbose\" nova (CTL) \u540c\u6b65nova\u6570\u636e\u5e93\uff1a su -s /bin/sh -c \"nova-manage db sync\" nova (CTL) \u9a8c\u8bc1cell0\u548ccell1\u6ce8\u518c\u6b63\u786e\uff1a su -s /bin/sh -c \"nova-manage cell_v2 list_cells\" nova (CTL) \u6dfb\u52a0\u8ba1\u7b97\u8282\u70b9\u5230openstack\u96c6\u7fa4 su -s /bin/sh -c \"nova-manage cell_v2 discover_hosts --verbose\" nova (CTL) \u542f\u52a8\u670d\u52a1 systemctl enable \\ (CTL) openstack-nova-api.service \\ openstack-nova-scheduler.service \\ openstack-nova-conductor.service \\ openstack-nova-novncproxy.service systemctl start \\ (CTL) openstack-nova-api.service \\ openstack-nova-scheduler.service \\ openstack-nova-conductor.service \\ openstack-nova-novncproxy.service systemctl enable libvirtd.service openstack-nova-compute.service (CPT) systemctl start libvirtd.service openstack-nova-compute.service (CPT) \u9a8c\u8bc1 source ~/.admin-openrc (CTL) \u5217\u51fa\u670d\u52a1\u7ec4\u4ef6\uff0c\u9a8c\u8bc1\u6bcf\u4e2a\u6d41\u7a0b\u90fd\u6210\u529f\u542f\u52a8\u548c\u6ce8\u518c\uff1a openstack compute service list (CTL) \u5217\u51fa\u8eab\u4efd\u670d\u52a1\u4e2d\u7684API\u7aef\u70b9\uff0c\u9a8c\u8bc1\u4e0e\u8eab\u4efd\u670d\u52a1\u7684\u8fde\u63a5\uff1a openstack catalog list (CTL) \u5217\u51fa\u955c\u50cf\u670d\u52a1\u4e2d\u7684\u955c\u50cf\uff0c\u9a8c\u8bc1\u4e0e\u955c\u50cf\u670d\u52a1\u7684\u8fde\u63a5\uff1a openstack image list (CTL) \u68c0\u67e5cells\u662f\u5426\u8fd0\u4f5c\u6210\u529f\uff0c\u4ee5\u53ca\u5176\u4ed6\u5fc5\u8981\u6761\u4ef6\u662f\u5426\u5df2\u5177\u5907\u3002 nova-status upgrade check (CTL)","title":"Nova \u5b89\u88c5"},{"location":"install/openEuler-22.03-LTS-SP4/OpenStack-train/#neutron","text":"\u521b\u5efa\u6570\u636e\u5e93\u3001\u670d\u52a1\u51ed\u8bc1\u548c API \u7aef\u70b9 \u521b\u5efa\u6570\u636e\u5e93\uff1a mysql -u root -p (CTL) MariaDB [(none)]> CREATE DATABASE neutron; MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \\ IDENTIFIED BY 'NEUTRON_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \\ IDENTIFIED BY 'NEUTRON_DBPASS'; MariaDB [(none)]> exit \u6ce8\u610f \u66ff\u6362 NEUTRON_DBPASS \u4e3a neutron \u6570\u636e\u5e93\u8bbe\u7f6e\u5bc6\u7801\u3002 source ~/.admin-openrc (CTL) \u521b\u5efaneutron\u670d\u52a1\u51ed\u8bc1 openstack user create --domain default --password-prompt neutron (CTL) openstack role add --project service --user neutron admin (CTL) openstack service create --name neutron --description \"OpenStack Networking\" network (CTL) \u521b\u5efaNeutron\u670d\u52a1API\u7aef\u70b9\uff1a openstack endpoint create --region RegionOne network public http://controller:9696 (CTL) openstack endpoint create --region RegionOne network internal http://controller:9696 (CTL) openstack endpoint create --region RegionOne network admin http://controller:9696 (CTL) \u5b89\u88c5\u8f6f\u4ef6\u5305\uff1a yum install openstack-neutron openstack-neutron-linuxbridge ebtables ipset \\ (CTL) openstack-neutron-ml2 yum install openstack-neutron-linuxbridge ebtables ipset (CPT) \u914d\u7f6eneutron\u76f8\u5173\u914d\u7f6e\uff1a \u914d\u7f6e\u4e3b\u4f53\u914d\u7f6e vim /etc/neutron/neutron.conf [database] connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron (CTL) [DEFAULT] core_plugin = ml2 (CTL) service_plugins = router (CTL) allow_overlapping_ips = true (CTL) transport_url = rabbit://openstack:RABBIT_PASS@controller auth_strategy = keystone notify_nova_on_port_status_changes = true (CTL) notify_nova_on_port_data_changes = true (CTL) 
api_workers = 3 (CTL) [keystone_authtoken] www_authenticate_uri = http://controller:5000 auth_url = http://controller:5000 memcached_servers = controller:11211 auth_type = password project_domain_name = Default user_domain_name = Default project_name = service username = neutron password = NEUTRON_PASS [nova] auth_url = http://controller:5000 (CTL) auth_type = password (CTL) project_domain_name = Default (CTL) user_domain_name = Default (CTL) region_name = RegionOne (CTL) project_name = service (CTL) username = nova (CTL) password = NOVA_PASS (CTL) [oslo_concurrency] lock_path = /var/lib/neutron/tmp \u89e3\u91ca [database]\u90e8\u5206\uff0c\u914d\u7f6e\u6570\u636e\u5e93\u5165\u53e3\uff1b [default]\u90e8\u5206\uff0c\u542f\u7528ml2\u63d2\u4ef6\u548crouter\u63d2\u4ef6\uff0c\u5141\u8bb8ip\u5730\u5740\u91cd\u53e0\uff0c\u914d\u7f6eRabbitMQ\u6d88\u606f\u961f\u5217\u5165\u53e3\uff1b [default] [keystone]\u90e8\u5206\uff0c\u914d\u7f6e\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u5165\u53e3\uff1b [default] [nova]\u90e8\u5206\uff0c\u914d\u7f6e\u7f51\u7edc\u6765\u901a\u77e5\u8ba1\u7b97\u7f51\u7edc\u62d3\u6251\u7684\u53d8\u5316\uff1b [oslo_concurrency]\u90e8\u5206\uff0c\u914d\u7f6elock path\u3002 \u6ce8\u610f \u66ff\u6362 NEUTRON_DBPASS \u4e3a neutron \u6570\u636e\u5e93\u7684\u5bc6\u7801\uff1b \u66ff\u6362 RABBIT_PASS \u4e3a RabbitMQ\u4e2dopenstack \u8d26\u6237\u7684\u5bc6\u7801\uff1b \u66ff\u6362 NEUTRON_PASS \u4e3a neutron \u7528\u6237\u7684\u5bc6\u7801\uff1b \u66ff\u6362 NOVA_PASS \u4e3a nova \u7528\u6237\u7684\u5bc6\u7801\u3002 \u914d\u7f6eML2\u63d2\u4ef6\uff1a vim /etc/neutron/plugins/ml2/ml2_conf.ini [ml2] type_drivers = flat,vlan,vxlan tenant_network_types = vxlan mechanism_drivers = linuxbridge,l2population extension_drivers = port_security [ml2_type_flat] flat_networks = provider [ml2_type_vxlan] vni_ranges = 1:1000 [securitygroup] enable_ipset = true \u521b\u5efa/etc/neutron/plugin.ini\u7684\u7b26\u53f7\u94fe\u63a5 ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini \u6ce8\u610f [ml2]\u90e8\u5206\uff0c\u542f\u7528 flat\u3001vlan\u3001vxlan \u7f51\u7edc\uff0c\u542f\u7528 linuxbridge \u53ca l2population \u673a\u5236\uff0c\u542f\u7528\u7aef\u53e3\u5b89\u5168\u6269\u5c55\u9a71\u52a8\uff1b [ml2_type_flat]\u90e8\u5206\uff0c\u914d\u7f6e flat \u7f51\u7edc\u4e3a provider \u865a\u62df\u7f51\u7edc\uff1b [ml2_type_vxlan]\u90e8\u5206\uff0c\u914d\u7f6e VXLAN \u7f51\u7edc\u6807\u8bc6\u7b26\u8303\u56f4\uff1b [securitygroup]\u90e8\u5206\uff0c\u914d\u7f6e\u5141\u8bb8 ipset\u3002 \u8865\u5145 l2 \u7684\u5177\u4f53\u914d\u7f6e\u53ef\u4ee5\u6839\u636e\u7528\u6237\u9700\u6c42\u81ea\u884c\u4fee\u6539\uff0c\u672c\u6587\u4f7f\u7528\u7684\u662fprovider network + linuxbridge \u914d\u7f6e Linux bridge \u4ee3\u7406\uff1a vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini [linux_bridge] physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME [vxlan] enable_vxlan = true local_ip = OVERLAY_INTERFACE_IP_ADDRESS l2_population = true [securitygroup] enable_security_group = true firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver \u89e3\u91ca [linux_bridge]\u90e8\u5206\uff0c\u6620\u5c04 provider \u865a\u62df\u7f51\u7edc\u5230\u7269\u7406\u7f51\u7edc\u63a5\u53e3\uff1b [vxlan]\u90e8\u5206\uff0c\u542f\u7528 vxlan \u8986\u76d6\u7f51\u7edc\uff0c\u914d\u7f6e\u5904\u7406\u8986\u76d6\u7f51\u7edc\u7684\u7269\u7406\u7f51\u7edc\u63a5\u53e3 IP \u5730\u5740\uff0c\u542f\u7528 layer-2 population\uff1b [securitygroup]\u90e8\u5206\uff0c\u5141\u8bb8\u5b89\u5168\u7ec4\uff0c\u914d\u7f6e linux bridge 
iptables \u9632\u706b\u5899\u9a71\u52a8\u3002 \u6ce8\u610f \u66ff\u6362 PROVIDER_INTERFACE_NAME \u4e3a\u7269\u7406\u7f51\u7edc\u63a5\u53e3\uff1b \u66ff\u6362 OVERLAY_INTERFACE_IP_ADDRESS \u4e3a\u63a7\u5236\u8282\u70b9\u7684\u7ba1\u7406IP\u5730\u5740\u3002 \u914d\u7f6eLayer-3\u4ee3\u7406\uff1a vim /etc/neutron/l3_agent.ini (CTL) [DEFAULT] interface_driver = linuxbridge \u89e3\u91ca \u5728[default]\u90e8\u5206\uff0c\u914d\u7f6e\u63a5\u53e3\u9a71\u52a8\u4e3alinuxbridge \u914d\u7f6eDHCP\u4ee3\u7406\uff1a vim /etc/neutron/dhcp_agent.ini (CTL) [DEFAULT] interface_driver = linuxbridge dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq enable_isolated_metadata = true \u89e3\u91ca [default]\u90e8\u5206\uff0c\u914d\u7f6elinuxbridge\u63a5\u53e3\u9a71\u52a8\u3001Dnsmasq DHCP\u9a71\u52a8\uff0c\u542f\u7528\u9694\u79bb\u7684\u5143\u6570\u636e\u3002 \u914d\u7f6emetadata\u4ee3\u7406\uff1a vim /etc/neutron/metadata_agent.ini (CTL) [DEFAULT] nova_metadata_host = controller metadata_proxy_shared_secret = METADATA_SECRET \u89e3\u91ca [default]\u90e8\u5206\uff0c\u914d\u7f6e\u5143\u6570\u636e\u4e3b\u673a\u548cshared secret\u3002 \u6ce8\u610f \u66ff\u6362 METADATA_SECRET \u4e3a\u5408\u9002\u7684\u5143\u6570\u636e\u4ee3\u7406secret\u3002 \u914d\u7f6enova\u76f8\u5173\u914d\u7f6e vim /etc/nova/nova.conf [neutron] auth_url = http://controller:5000 auth_type = password project_domain_name = Default user_domain_name = Default region_name = RegionOne project_name = service username = neutron password = NEUTRON_PASS service_metadata_proxy = true (CTL) metadata_proxy_shared_secret = METADATA_SECRET (CTL) \u89e3\u91ca [neutron]\u90e8\u5206\uff0c\u914d\u7f6e\u8bbf\u95ee\u53c2\u6570\uff0c\u542f\u7528\u5143\u6570\u636e\u4ee3\u7406\uff0c\u914d\u7f6esecret\u3002 \u6ce8\u610f \u66ff\u6362 NEUTRON_PASS \u4e3a neutron \u7528\u6237\u7684\u5bc6\u7801\uff1b \u66ff\u6362 METADATA_SECRET \u4e3a\u5408\u9002\u7684\u5143\u6570\u636e\u4ee3\u7406secret\u3002 \u540c\u6b65\u6570\u636e\u5e93\uff1a su -s /bin/sh -c \"neutron-db-manage --config-file /etc/neutron/neutron.conf \\ --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head\" neutron \u91cd\u542f\u8ba1\u7b97API\u670d\u52a1\uff1a systemctl restart openstack-nova-api.service \u542f\u52a8\u7f51\u7edc\u670d\u52a1 systemctl enable neutron-server.service neutron-linuxbridge-agent.service \\ (CTL) neutron-dhcp-agent.service neutron-metadata-agent.service \\ neutron-l3-agent.service systemctl restart neutron-server.service neutron-linuxbridge-agent.service \\ (CTL) neutron-dhcp-agent.service neutron-metadata-agent.service \\ neutron-l3-agent.service systemctl enable neutron-linuxbridge-agent.service (CPT) systemctl restart neutron-linuxbridge-agent.service openstack-nova-compute.service (CPT) \u9a8c\u8bc1 \u9a8c\u8bc1 neutron \u4ee3\u7406\u542f\u52a8\u6210\u529f\uff1a openstack network agent list","title":"Neutron \u5b89\u88c5"},{"location":"install/openEuler-22.03-LTS-SP4/OpenStack-train/#cinder","text":"\u521b\u5efa\u6570\u636e\u5e93\u3001\u670d\u52a1\u51ed\u8bc1\u548c API \u7aef\u70b9 \u521b\u5efa\u6570\u636e\u5e93\uff1a mysql -u root -p MariaDB [(none)]> CREATE DATABASE cinder; MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \\ IDENTIFIED BY 'CINDER_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \\ IDENTIFIED BY 'CINDER_DBPASS'; MariaDB [(none)]> exit \u6ce8\u610f \u66ff\u6362 CINDER_DBPASS \u4e3acinder\u6570\u636e\u5e93\u8bbe\u7f6e\u5bc6\u7801\u3002 source ~/.admin-openrc \u521b\u5efacinder\u670d\u52a1\u51ed\u8bc1\uff1a openstack 
user create --domain default --password-prompt cinder openstack role add --project service --user cinder admin openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2 openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3 Create the Block Storage service API endpoints: openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(project_id\)s openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(project_id\)s openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(project_id\)s openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s Install the packages: yum install openstack-cinder-api openstack-cinder-scheduler (CTL) yum install lvm2 device-mapper-persistent-data scsi-target-utils rpcbind nfs-utils \ (STG) openstack-cinder-volume openstack-cinder-backup Prepare the storage device; the following is only an example: pvcreate /dev/vdb vgcreate cinder-volumes /dev/vdb vim /etc/lvm/lvm.conf devices { ... filter = [ "a/vdb/", "r/.*/"] Explanation: in the devices section, add a filter that accepts the /dev/vdb device and rejects all other devices. Prepare NFS: mkdir -p /root/cinder/backup cat << EOF >> /etc/exports /root/cinder/backup 192.168.1.0/24(rw,sync,no_root_squash,no_all_squash) EOF Configure cinder: vim /etc/cinder/cinder.conf [DEFAULT] transport_url = rabbit://openstack:RABBIT_PASS@controller auth_strategy = keystone my_ip = 10.0.0.11 enabled_backends = lvm (STG) backup_driver=cinder.backup.drivers.nfs.NFSBackupDriver (STG) backup_share=HOST:PATH (STG) [database] connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder [keystone_authtoken] www_authenticate_uri = http://controller:5000 auth_url = http://controller:5000 memcached_servers = controller:11211 auth_type = password project_domain_name = Default user_domain_name = Default project_name = service username = cinder password = CINDER_PASS [oslo_concurrency] lock_path = /var/lib/cinder/tmp [lvm] volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver (STG) volume_group = cinder-volumes (STG) iscsi_protocol = iscsi (STG) iscsi_helper = tgtadm (STG) Explanation: the [database] section configures the database entry; the [DEFAULT] section configures the RabbitMQ message queue entry and my_ip; the [DEFAULT] and [keystone_authtoken] sections configure the identity service entry; the [oslo_concurrency] section configures the lock path. Note: replace CINDER_DBPASS with the password of the cinder database; replace RABBIT_PASS with the password of the openstack account in RabbitMQ; set my_ip to the management IP address of the controller node; replace CINDER_PASS with the password of the cinder user; replace HOST:PATH with the NFS host IP and shared path. Synchronize the database: su -s /bin/sh -c "cinder-manage db sync" cinder (CTL)
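Before starting the cinder services in the next step, a quick sanity check on the storage node can confirm that the LVM backend and the NFS export prepared above are actually in place. This is an optional sketch; it assumes the volume group name cinder-volumes and the /etc/exports entry from the examples above.

```shell
# Storage node (STG): confirm the volume group used by the lvm backend exists
vgs cinder-volumes

# Confirm the backup export line written earlier is present
grep cinder /etc/exports
```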
\u914d\u7f6enova\uff1a vim /etc/nova/nova.conf (CTL) [cinder] os_region_name = RegionOne \u91cd\u542f\u8ba1\u7b97API\u670d\u52a1 systemctl restart openstack-nova-api.service \u542f\u52a8cinder\u670d\u52a1 systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service (CTL) systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service (CTL) systemctl enable rpcbind.service nfs-server.service tgtd.service iscsid.service \\ (STG) openstack-cinder-volume.service \\ openstack-cinder-backup.service systemctl start rpcbind.service nfs-server.service tgtd.service iscsid.service \\ (STG) openstack-cinder-volume.service \\ openstack-cinder-backup.service \u6ce8\u610f \u5f53cinder\u4f7f\u7528tgtadm\u7684\u65b9\u5f0f\u6302\u5377\u7684\u65f6\u5019\uff0c\u8981\u4fee\u6539/etc/tgt/tgtd.conf\uff0c\u5185\u5bb9\u5982\u4e0b\uff0c\u4fdd\u8bc1tgtd\u53ef\u4ee5\u53d1\u73b0cinder-volume\u7684iscsi target\u3002 include /var/lib/cinder/volumes/* \u9a8c\u8bc1 source ~/.admin-openrc openstack volume service list","title":"Cinder \u5b89\u88c5"},{"location":"install/openEuler-22.03-LTS-SP4/OpenStack-train/#horizon","text":"\u5b89\u88c5\u8f6f\u4ef6\u5305 yum install openstack-dashboard \u4fee\u6539\u6587\u4ef6 \u4fee\u6539\u53d8\u91cf vim /etc/openstack-dashboard/local_settings OPENSTACK_HOST = \"controller\" ALLOWED_HOSTS = ['*', ] SESSION_ENGINE = 'django.contrib.sessions.backends.cache' CACHES = { 'default': { 'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache', 'LOCATION': 'controller:11211', } } OPENSTACK_KEYSTONE_URL = \"http://%s:5000/v3\" % OPENSTACK_HOST OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = \"Default\" OPENSTACK_KEYSTONE_DEFAULT_ROLE = \"member\" WEBROOT = '/dashboard' POLICY_FILES_PATH = \"/etc/openstack-dashboard\" OPENSTACK_API_VERSIONS = { \"identity\": 3, \"image\": 2, \"volume\": 3, } \u91cd\u542f httpd \u670d\u52a1 systemctl restart httpd.service memcached.service \u9a8c\u8bc1 \u6253\u5f00\u6d4f\u89c8\u5668\uff0c\u8f93\u5165\u7f51\u5740 http://HOSTIP/dashboard/ \uff0c\u767b\u5f55 horizon\u3002 \u6ce8\u610f \u66ff\u6362HOSTIP\u4e3a\u63a7\u5236\u8282\u70b9\u7ba1\u7406\u5e73\u9762IP\u5730\u5740","title":"horizon \u5b89\u88c5"},{"location":"install/openEuler-22.03-LTS-SP4/OpenStack-train/#tempest","text":"Tempest\u662fOpenStack\u7684\u96c6\u6210\u6d4b\u8bd5\u670d\u52a1\uff0c\u5982\u679c\u7528\u6237\u9700\u8981\u5168\u9762\u81ea\u52a8\u5316\u6d4b\u8bd5\u5df2\u5b89\u88c5\u7684OpenStack\u73af\u5883\u7684\u529f\u80fd,\u5219\u63a8\u8350\u4f7f\u7528\u8be5\u7ec4\u4ef6\u3002\u5426\u5219\uff0c\u53ef\u4ee5\u4e0d\u7528\u5b89\u88c5\u3002 \u5b89\u88c5Tempest yum install openstack-tempest \u521d\u59cb\u5316\u76ee\u5f55 tempest init mytest \u4fee\u6539\u914d\u7f6e\u6587\u4ef6\u3002 cd mytest vi etc/tempest.conf tempest.conf\u4e2d\u9700\u8981\u914d\u7f6e\u5f53\u524dOpenStack\u73af\u5883\u7684\u4fe1\u606f\uff0c\u5177\u4f53\u5185\u5bb9\u53ef\u4ee5\u53c2\u8003 \u5b98\u65b9\u793a\u4f8b \u6267\u884c\u6d4b\u8bd5 tempest run \u5b89\u88c5tempest\u6269\u5c55\uff08\u53ef\u9009\uff09 
OpenStack\u5404\u4e2a\u670d\u52a1\u672c\u8eab\u4e5f\u63d0\u4f9b\u4e86\u4e00\u4e9btempest\u6d4b\u8bd5\u5305\uff0c\u7528\u6237\u53ef\u4ee5\u5b89\u88c5\u8fd9\u4e9b\u5305\u6765\u4e30\u5bcctempest\u7684\u6d4b\u8bd5\u5185\u5bb9\u3002\u5728Train\u4e2d\uff0c\u6211\u4eec\u63d0\u4f9b\u4e86Cinder\u3001Glance\u3001Keystone\u3001Ironic\u3001Trove\u7684\u6269\u5c55\u6d4b\u8bd5\uff0c\u7528\u6237\u53ef\u4ee5\u6267\u884c\u5982\u4e0b\u547d\u4ee4\u8fdb\u884c\u5b89\u88c5\u4f7f\u7528\uff1a yum install python3-cinder-tempest-plugin python3-glance-tempest-plugin python3-ironic-tempest-plugin python3-keystone-tempest-plugin python3-trove-tempest-plugin","title":"Tempest \u5b89\u88c5"},{"location":"install/openEuler-22.03-LTS-SP4/OpenStack-train/#ironic","text":"Ironic\u662fOpenStack\u7684\u88f8\u91d1\u5c5e\u670d\u52a1\uff0c\u5982\u679c\u7528\u6237\u9700\u8981\u8fdb\u884c\u88f8\u673a\u90e8\u7f72\u5219\u63a8\u8350\u4f7f\u7528\u8be5\u7ec4\u4ef6\u3002\u5426\u5219\uff0c\u53ef\u4ee5\u4e0d\u7528\u5b89\u88c5\u3002 \u8bbe\u7f6e\u6570\u636e\u5e93 \u88f8\u91d1\u5c5e\u670d\u52a1\u5728\u6570\u636e\u5e93\u4e2d\u5b58\u50a8\u4fe1\u606f\uff0c\u521b\u5efa\u4e00\u4e2a ironic \u7528\u6237\u53ef\u4ee5\u8bbf\u95ee\u7684 ironic \u6570\u636e\u5e93\uff0c\u66ff\u6362 IRONIC_DBPASSWORD \u4e3a\u5408\u9002\u7684\u5bc6\u7801 mysql -u root -p MariaDB [(none)]> CREATE DATABASE ironic CHARACTER SET utf8; MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'localhost' \\ IDENTIFIED BY 'IRONIC_DBPASSWORD'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'%' \\ IDENTIFIED BY 'IRONIC_DBPASSWORD'; 2. \u5b89\u88c5\u8f6f\u4ef6\u5305 yum install openstack-ironic-api openstack-ironic-conductor python3-ironicclient \u542f\u52a8\u670d\u52a1 systemctl enable openstack-ironic-api openstack-ironic-conductor systemctl start openstack-ironic-api openstack-ironic-conductor \u521b\u5efa\u670d\u52a1\u7528\u6237\u8ba4\u8bc1 1\u3001\u521b\u5efaBare Metal\u670d\u52a1\u7528\u6237 openstack user create --password IRONIC_PASSWORD \\ --email ironic@example.com ironic openstack role add --project service --user ironic admin openstack service create --name ironic \\ --description \"Ironic baremetal provisioning service\" baremetal 2\u3001\u521b\u5efaBare Metal\u670d\u52a1\u8bbf\u95ee\u5165\u53e3 openstack endpoint create --region RegionOne baremetal admin http://$IRONIC_NODE:6385 openstack endpoint create --region RegionOne baremetal public http://$IRONIC_NODE:6385 openstack endpoint create --region RegionOne baremetal internal http://$IRONIC_NODE:6385 \u914d\u7f6eironic-api\u670d\u52a1 \u914d\u7f6e\u6587\u4ef6\u8def\u5f84/etc/ironic/ironic.conf 1\u3001\u901a\u8fc7 connection \u9009\u9879\u914d\u7f6e\u6570\u636e\u5e93\u7684\u4f4d\u7f6e\uff0c\u5982\u4e0b\u6240\u793a\uff0c\u66ff\u6362 IRONIC_DBPASSWORD \u4e3a ironic \u7528\u6237\u7684\u5bc6\u7801\uff0c\u66ff\u6362 DB_IP \u4e3aDB\u670d\u52a1\u5668\u6240\u5728\u7684IP\u5730\u5740\uff1a [database] # The SQLAlchemy connection string used to connect to the # database (string value) connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic 2\u3001\u901a\u8fc7\u4ee5\u4e0b\u9009\u9879\u914d\u7f6eironic-api\u670d\u52a1\u4f7f\u7528RabbitMQ\u6d88\u606f\u4ee3\u7406\uff0c\u66ff\u6362 RPC_* \u4e3aRabbitMQ\u7684\u8be6\u7ec6\u5730\u5740\u548c\u51ed\u8bc1 [DEFAULT] # A URL representing the messaging driver to use and its full # configuration. 
(string value) transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/ \u7528\u6237\u4e5f\u53ef\u81ea\u884c\u4f7f\u7528json-rpc\u65b9\u5f0f\u66ff\u6362rabbitmq 3\u3001\u914d\u7f6eironic-api\u670d\u52a1\u4f7f\u7528\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u7684\u51ed\u8bc1\uff0c\u66ff\u6362 PUBLIC_IDENTITY_IP \u4e3a\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u5668\u7684\u516c\u5171IP\uff0c\u66ff\u6362 PRIVATE_IDENTITY_IP \u4e3a\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u5668\u7684\u79c1\u6709IP\uff0c\u66ff\u6362 IRONIC_PASSWORD \u4e3a\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u4e2d ironic \u7528\u6237\u7684\u5bc6\u7801\uff1a [DEFAULT] # Authentication strategy used by ironic-api: one of # \"keystone\" or \"noauth\". \"noauth\" should not be used in a # production environment because all authentication will be # disabled. (string value) auth_strategy=keystone [keystone_authtoken] # Authentication type to load (string value) auth_type=password # Complete public Identity API endpoint (string value) www_authenticate_uri=http://PUBLIC_IDENTITY_IP:5000 # Complete admin Identity API endpoint. (string value) auth_url=http://PRIVATE_IDENTITY_IP:5000 # Service username. (string value) username=ironic # Service account password. (string value) password=IRONIC_PASSWORD # Service tenant name. (string value) project_name=service # Domain name containing project (string value) project_domain_name=Default # User's domain name (string value) user_domain_name=Default 4\u3001\u521b\u5efa\u88f8\u91d1\u5c5e\u670d\u52a1\u6570\u636e\u5e93\u8868 ironic-dbsync --config-file /etc/ironic/ironic.conf create_schema 5\u3001\u91cd\u542fironic-api\u670d\u52a1 sudo systemctl restart openstack-ironic-api \u914d\u7f6eironic-conductor\u670d\u52a1 1\u3001\u66ff\u6362 HOST_IP \u4e3aconductor host\u7684IP [DEFAULT] # IP address of this host. If unset, will determine the IP # programmatically. If unable to do so, will use \"127.0.0.1\". # (string value) my_ip=HOST_IP 2\u3001\u914d\u7f6e\u6570\u636e\u5e93\u7684\u4f4d\u7f6e\uff0cironic-conductor\u5e94\u8be5\u4f7f\u7528\u548cironic-api\u76f8\u540c\u7684\u914d\u7f6e\u3002\u66ff\u6362 IRONIC_DBPASSWORD \u4e3a ironic \u7528\u6237\u7684\u5bc6\u7801\uff0c\u66ff\u6362DB_IP\u4e3aDB\u670d\u52a1\u5668\u6240\u5728\u7684IP\u5730\u5740\uff1a [database] # The SQLAlchemy connection string to use to connect to the # database. (string value) connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic 3\u3001\u901a\u8fc7\u4ee5\u4e0b\u9009\u9879\u914d\u7f6eironic-api\u670d\u52a1\u4f7f\u7528RabbitMQ\u6d88\u606f\u4ee3\u7406\uff0cironic-conductor\u5e94\u8be5\u4f7f\u7528\u548cironic-api\u76f8\u540c\u7684\u914d\u7f6e\uff0c\u66ff\u6362 RPC_* \u4e3aRabbitMQ\u7684\u8be6\u7ec6\u5730\u5740\u548c\u51ed\u8bc1 [DEFAULT] # A URL representing the messaging driver to use and its full # configuration. 
(string value) transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/ \u7528\u6237\u4e5f\u53ef\u81ea\u884c\u4f7f\u7528json-rpc\u65b9\u5f0f\u66ff\u6362rabbitmq 4\u3001\u914d\u7f6e\u51ed\u8bc1\u8bbf\u95ee\u5176\u4ed6OpenStack\u670d\u52a1 \u4e3a\u4e86\u4e0e\u5176\u4ed6OpenStack\u670d\u52a1\u8fdb\u884c\u901a\u4fe1\uff0c\u88f8\u91d1\u5c5e\u670d\u52a1\u5728\u8bf7\u6c42\u5176\u4ed6\u670d\u52a1\u65f6\u9700\u8981\u4f7f\u7528\u670d\u52a1\u7528\u6237\u4e0eOpenStack Identity\u670d\u52a1\u8fdb\u884c\u8ba4\u8bc1\u3002\u8fd9\u4e9b\u7528\u6237\u7684\u51ed\u636e\u5fc5\u987b\u5728\u4e0e\u76f8\u5e94\u670d\u52a1\u76f8\u5173\u7684\u6bcf\u4e2a\u914d\u7f6e\u6587\u4ef6\u4e2d\u8fdb\u884c\u914d\u7f6e\u3002 [neutron] - \u8bbf\u95eeOpenStack\u7f51\u7edc\u670d\u52a1 [glance] - \u8bbf\u95eeOpenStack\u955c\u50cf\u670d\u52a1 [swift] - \u8bbf\u95eeOpenStack\u5bf9\u8c61\u5b58\u50a8\u670d\u52a1 [cinder] - \u8bbf\u95eeOpenStack\u5757\u5b58\u50a8\u670d\u52a1 [inspector] - \u8bbf\u95eeOpenStack\u88f8\u91d1\u5c5eintrospection\u670d\u52a1 [service_catalog] - \u4e00\u4e2a\u7279\u6b8a\u9879\u7528\u4e8e\u4fdd\u5b58\u88f8\u91d1\u5c5e\u670d\u52a1\u4f7f\u7528\u7684\u51ed\u8bc1\uff0c\u8be5\u51ed\u8bc1\u7528\u4e8e\u53d1\u73b0\u6ce8\u518c\u5728OpenStack\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u76ee\u5f55\u4e2d\u7684\u81ea\u5df1\u7684API URL\u7aef\u70b9 \u7b80\u5355\u8d77\u89c1\uff0c\u53ef\u4ee5\u5bf9\u6240\u6709\u670d\u52a1\u4f7f\u7528\u540c\u4e00\u4e2a\u670d\u52a1\u7528\u6237\u3002\u4e3a\u4e86\u5411\u540e\u517c\u5bb9\uff0c\u8be5\u7528\u6237\u5e94\u8be5\u548cironic-api\u670d\u52a1\u7684[keystone_authtoken]\u6240\u914d\u7f6e\u7684\u4e3a\u540c\u4e00\u4e2a\u7528\u6237\u3002\u4f46\u8fd9\u4e0d\u662f\u5fc5\u987b\u7684\uff0c\u4e5f\u53ef\u4ee5\u4e3a\u6bcf\u4e2a\u670d\u52a1\u521b\u5efa\u5e76\u914d\u7f6e\u4e0d\u540c\u7684\u670d\u52a1\u7528\u6237\u3002 \u5728\u4e0b\u9762\u7684\u793a\u4f8b\u4e2d\uff0c\u7528\u6237\u8bbf\u95eeOpenStack\u7f51\u7edc\u670d\u52a1\u7684\u8eab\u4efd\u9a8c\u8bc1\u4fe1\u606f\u914d\u7f6e\u4e3a\uff1a \u7f51\u7edc\u670d\u52a1\u90e8\u7f72\u5728\u540d\u4e3aRegionOne\u7684\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u57df\u4e2d\uff0c\u4ec5\u5728\u670d\u52a1\u76ee\u5f55\u4e2d\u6ce8\u518c\u516c\u5171\u7aef\u70b9\u63a5\u53e3 \u8bf7\u6c42\u65f6\u4f7f\u7528\u7279\u5b9a\u7684CA SSL\u8bc1\u4e66\u8fdb\u884cHTTPS\u8fde\u63a5 \u4e0eironic-api\u670d\u52a1\u914d\u7f6e\u76f8\u540c\u7684\u670d\u52a1\u7528\u6237 \u52a8\u6001\u5bc6\u7801\u8ba4\u8bc1\u63d2\u4ef6\u57fa\u4e8e\u5176\u4ed6\u9009\u9879\u53d1\u73b0\u5408\u9002\u7684\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1API\u7248\u672c [neutron] # Authentication type to load (string value) auth_type = password # Authentication URL (string value) auth_url=https://IDENTITY_IP:5000/ # Username (string value) username=ironic # User's password (string value) password=IRONIC_PASSWORD # Project name to scope to (string value) project_name=service # Domain ID containing project (string value) project_domain_id=default # User's domain id (string value) user_domain_id=default # PEM encoded Certificate Authority to use when verifying # HTTPs connections. (string value) cafile=/opt/stack/data/ca-bundle.pem # The default region_name for endpoint URL discovery. (string # value) region_name = RegionOne # List of interfaces, in order of preference, for endpoint # URL. 
By default, in order to communicate with another service, the Bare Metal service tries to discover a suitable endpoint for that service through the Identity service catalog. If you want to use a different endpoint for a particular service, specify it with the `endpoint_override` option in the Bare Metal service configuration file:

```
[neutron]
...
endpoint_override =
```

5. Configure the allowed drivers and hardware types.

Set the hardware types that the ironic-conductor service is allowed to use with `enabled_hardware_types`:

```
[DEFAULT]
enabled_hardware_types = ipmi
```

Configure the hardware interfaces:

```
enabled_boot_interfaces = pxe
enabled_deploy_interfaces = direct,iscsi
enabled_inspect_interfaces = inspector
enabled_management_interfaces = ipmitool
enabled_power_interfaces = ipmitool
```

Configure the interface defaults:

```
[DEFAULT]
default_deploy_interface = direct
default_network_interface = neutron
```

If any driver that uses direct deploy is enabled, the Swift backend of the Image service must be installed and configured. The Ceph Object Gateway (RADOS Gateway) is also supported as an Image service backend.

6. Restart the ironic-conductor service:

```
sudo systemctl restart openstack-ironic-conductor
```

Configure the httpd service

Create the httpd root directory to be used by ironic and set its owner and group. The directory path must match the path specified by the `http_root` option in the `[deploy]` section of /etc/ironic/ironic.conf.

```
mkdir -p /var/lib/ironic/httproot
chown ironic.ironic /var/lib/ironic/httproot
```
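For reference, the matching settings in /etc/ironic/ironic.conf might look like the sketch below; the host IP and port 8080 are placeholders and must agree with the httpd configuration created in the next step:

```shell
# /etc/ironic/ironic.conf (sketch)
[deploy]
# Must match the directory created above
http_root = /var/lib/ironic/httproot
# Must match the address and port that httpd listens on
http_url = http://DEPLOY_SERVER_IP:8080
```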
Install and configure the httpd service

1. Install the httpd service (skip if it is already installed):

```
yum install httpd -y
```

2. Create the /etc/httpd/conf.d/openstack-ironic-httpd.conf file with the following content:

```
Listen 8080

<VirtualHost *:8080>
    ServerName ironic.openeuler.com

    ErrorLog "/var/log/httpd/openstack-ironic-httpd-error_log"
    CustomLog "/var/log/httpd/openstack-ironic-httpd-access_log" "%h %l %u %t \"%r\" %>s %b"

    DocumentRoot "/var/lib/ironic/httproot"
    <Directory "/var/lib/ironic/httproot">
        Options Indexes FollowSymLinks
        Require all granted
    </Directory>
    LogLevel warn
    AddDefaultCharset UTF-8
    EnableSendfile on
</VirtualHost>
```

Note that the listening port must match the port specified in the `http_url` option of the `[deploy]` section in /etc/ironic/ironic.conf.

3. Restart the httpd service:

```
systemctl restart httpd
```

7. Building the deploy ramdisk image

The Train ramdisk image can be built with the ironic-python-agent service or the disk-image-builder tool, or with the latest ironic-python-agent-builder from the community. You can also choose other tools.

If you use the native Train tools, install the corresponding package:

```
yum install openstack-ironic-python-agent
```

or

```
yum install diskimage-builder
```

See the official documentation for detailed usage.

The following describes the complete process of building the deploy image used by ironic with ironic-python-agent-builder.

Install ironic-python-agent-builder

1. Install the tool:

```shell
pip install ironic-python-agent-builder
```

2. Modify the python interpreter in the following files:

```shell
/usr/bin/yum /usr/libexec/urlgrabber-ext-down
```
3. Install the other required tools:

```shell
yum install git
```

Because `DIB` depends on the `semanage` command, check whether the command is available before building the image: `semanage --help`. If the command is not found, install it:

```shell
# First find out which package provides it
[root@localhost ~]# yum provides /usr/sbin/semanage
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirror.vcu.edu
 * extras: mirror.vcu.edu
 * updates: mirror.math.princeton.edu
policycoreutils-python-2.5-34.el7.aarch64 : SELinux policy core python utilities
Repo        : base
Matched from:
Filename    : /usr/sbin/semanage
# Install it
[root@localhost ~]# yum install policycoreutils-python
```

Build the image

On `arm` architectures you need to add:

```shell
export ARCH=aarch64
```

Basic usage:

```shell
usage: ironic-python-agent-builder [-h] [-r RELEASE] [-o OUTPUT] [-e ELEMENT]
                                   [-b BRANCH] [-v] [--extra-args EXTRA_ARGS]
                                   distribution

positional arguments:
  distribution          Distribution to use

optional arguments:
  -h, --help            show this help message and exit
  -r RELEASE, --release RELEASE
                        Distribution release to use
  -o OUTPUT, --output OUTPUT
                        Output base file name
  -e ELEMENT, --element ELEMENT
                        Additional DIB element to use
  -b BRANCH, --branch BRANCH
                        If set, override the branch that is used for ironic-
                        python-agent and requirements
  -v, --verbose         Enable verbose logging in diskimage-builder
  --extra-args EXTRA_ARGS
                        Extra arguments to pass to diskimage-builder
```

For example:

```shell
ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky
```

Allow SSH login

Initialize the environment variables, then build the image:

```shell
export DIB_DEV_USER_USERNAME=ipa
export DIB_DEV_USER_PWDLESS_SUDO=yes
export DIB_DEV_USER_PASSWORD='123'
ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky -e selinux-permissive -e devuser
```

Specify the code repository

Initialize the corresponding environment variables, then build the image:

```shell
# Specify the repository location and version
DIB_REPOLOCATION_ironic_python_agent=git@172.20.2.149:liuzz/ironic-python-agent.git
DIB_REPOREF_ironic_python_agent=origin/develop

# Clone the code directly from gerrit
DIB_REPOLOCATION_ironic_python_agent=https://review.opendev.org/openstack/ironic-python-agent
DIB_REPOREF_ironic_python_agent=refs/changes/43/701043/1
```

Reference: [source-repositories](https://docs.openstack.org/diskimage-builder/latest/elements/source-repositories/README.html).

Specifying the repository location and version has been verified to work.
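After the build finishes, the resulting kernel and ramdisk usually need to be registered in the Image service so they can be referenced when enrolling bare metal nodes. A minimal sketch, assuming the builder produced `/mnt/ironic-agent-ssh.kernel` and `/mnt/ironic-agent-ssh.initramfs` (the actual file names depend on the `-o` option used above), could be:

```shell
openstack image create deploy-vmlinuz --public \
  --disk-format aki --container-format aki \
  --file /mnt/ironic-agent-ssh.kernel
openstack image create deploy-initrd --public \
  --disk-format ari --container-format ari \
  --file /mnt/ironic-agent-ssh.initramfs
```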
Notes

The PXE configuration file templates in native OpenStack do not support the arm64 architecture, so you need to modify the native OpenStack code yourself. In Train, the community ironic still does not support UEFI PXE boot on arm64; this shows up as a malformed grub.cfg file (usually generated under /tftpboot/), which causes PXE boot to fail. You need to modify the code that generates grub.cfg yourself.

TLS errors when ironic sends command status query requests to IPA:

In Train, both IPA and ironic enable TLS authentication by default when sending requests to each other. Disable it as described in the official documentation.

Modify the ironic configuration file (/etc/ironic/ironic.conf), adding `ipa-insecure=1` to the following configuration:

```
[agent]
verify_ca = False

[pxe]
pxe_append_params = nofb nomodeset vga=normal coreos.autologin ipa-insecure=1
```

Add the IPA configuration file /etc/ironic_python_agent/ironic_python_agent.conf to the ramdisk image and configure TLS as follows (the /etc/ironic_python_agent directory needs to be created first):

```
[DEFAULT]
enable_auto_tls = False
```

Set the permissions:

```
chown -R ipa.ipa /etc/ironic_python_agent/
```

Modify the service unit file of the IPA service to add the configuration file option:

```
vim /usr/lib/systemd/system/ironic-python-agent.service

[Unit]
Description=Ironic Python Agent
After=network-online.target

[Service]
ExecStartPre=/sbin/modprobe vfat
ExecStart=/usr/local/bin/ironic-python-agent --config-file /etc/ironic_python_agent/ironic_python_agent.conf
Restart=always
RestartSec=30s

[Install]
WantedBy=multi-user.target
```

Train also provides services such as ironic-inspector, which you can install according to your own needs.

## Kolla Installation

Kolla provides production-ready containerized deployment for OpenStack services. Installing Kolla is very simple: just install the corresponding RPM packages.

```
yum install openstack-kolla openstack-kolla-ansible
```

After installation, you can use commands such as `kolla-ansible`, `kolla-build`, `kolla-genpwd`, and `kolla-mergepwd` to build images and deploy the container environment.
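As a rough sketch of how these commands fit together for a simple all-in-one container deployment, assuming /etc/kolla/globals.yml has already been prepared and using the stock all-in-one inventory shipped with kolla-ansible (its path may differ depending on how the package was installed):

```shell
# Generate random passwords into /etc/kolla/passwords.yml
kolla-genpwd

# Prepare the target host, run sanity checks, then deploy the containers
kolla-ansible -i /usr/share/kolla-ansible/ansible/inventory/all-in-one bootstrap-servers
kolla-ansible -i /usr/share/kolla-ansible/ansible/inventory/all-in-one prechecks
kolla-ansible -i /usr/share/kolla-ansible/ansible/inventory/all-in-one deploy
```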
## Trove Installation

Trove is the OpenStack Database service. It is recommended if you want the database service provided by OpenStack; otherwise it does not need to be installed.

1. Set up the database

The Database service stores information in a database. Create a `trove` database that the `trove` user can access, replacing `TROVE_DBPASSWORD` with a suitable password:

```
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE trove CHARACTER SET utf8;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'localhost' \
    IDENTIFIED BY 'TROVE_DBPASSWORD';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'%' \
    IDENTIFIED BY 'TROVE_DBPASSWORD';
```

2. Create the service user credentials

1) Create the Trove service user:

```
openstack user create --domain default --password-prompt trove
openstack role add --project service --user trove admin
openstack service create --name trove --description "Database" database
```

Note: replace `TROVE_PASSWORD` with the password of the `trove` user.

2) Create the Database service endpoints:

```
openstack endpoint create --region RegionOne database public http://controller:8779/v1.0/%\(tenant_id\)s
openstack endpoint create --region RegionOne database internal http://controller:8779/v1.0/%\(tenant_id\)s
openstack endpoint create --region RegionOne database admin http://controller:8779/v1.0/%\(tenant_id\)s
```

3. Install and configure the Trove components

1) Install the Trove packages:

```
yum install openstack-trove python3-troveclient
```

2) Configure trove.conf:

```
vim /etc/trove/trove.conf

[DEFAULT]
log_dir = /var/log/trove
trove_auth_url = http://controller:5000/
nova_compute_url = http://controller:8774/v2
cinder_url = http://controller:8776/v1
swift_url = http://controller:8080/v1/AUTH_
rpc_backend = rabbit
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672
auth_strategy = keystone
add_addresses = True
api_paste_config = /etc/trove/api-paste.ini
nova_proxy_admin_user = admin
nova_proxy_admin_pass = ADMIN_PASSWORD
nova_proxy_admin_tenant_name = service
taskmanager_manager = trove.taskmanager.manager.Manager
use_nova_server_config_drive = True
# Set these if using Neutron Networking
network_driver = trove.network.neutron.NeutronDriver
network_label_regex = .*

[database]
connection = mysql+pymysql://trove:TROVE_DBPASSWORD@controller/trove

[keystone_authtoken]
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = trove
password = TROVE_PASSWORD
```

Notes:

- In the `[DEFAULT]` section, `nova_compute_url` and `cinder_url` are the endpoints created for Nova and Cinder in Keystone.
- `nova_proxy_XXX` is the information of a user that can access the Nova service; the example above uses the `admin` user.
- `transport_url` is the RabbitMQ connection information; replace `RABBIT_PASS` with the RabbitMQ password.
- In the `[database]` section, `connection` is the information of the database created for Trove in MySQL earlier.
- In the Trove user information, replace `TROVE_PASSWORD` with the actual password of the `trove` user.

3) Configure trove-guestagent.conf:

```
vim /etc/trove/trove-guestagent.conf

rabbit_host = controller
rabbit_password = RABBIT_PASS
trove_auth_url = http://controller:5000/
```

Note: the guest agent is an independent Trove component that must be built into the VM image that Trove creates through Nova. After a database instance is created, the guest agent process starts and reports heartbeats to Trove through the message queue (RabbitMQ), so the RabbitMQ user and password must be configured here.

Starting from the Victoria release, Trove uses a single unified image to run different types of databases; the database services run in Docker containers inside the guest VM.

Replace `RABBIT_PASS` with the RabbitMQ password.
4. Populate the Trove database tables:

```
su -s /bin/sh -c "trove-manage db_sync" trove
```

5. Finalize the installation

Configure the Trove services to start automatically:

```
systemctl enable openstack-trove-api.service \
  openstack-trove-taskmanager.service \
  openstack-trove-conductor.service
```

Start the services:

```
systemctl start openstack-trove-api.service \
  openstack-trove-taskmanager.service \
  openstack-trove-conductor.service
```

## Swift Installation

Swift provides an elastic, scalable, and highly available distributed object storage service, suitable for storing large amounts of unstructured data.

Create the service credentials and API endpoints.

Create the service credentials:

```
# Create the swift user:
openstack user create --domain default --password-prompt swift
# Add the admin role to the swift user:
openstack role add --project service --user swift admin
# Create the swift service entity:
openstack service create --name swift --description "OpenStack Object Storage" object-store
```

Create the swift API endpoints:

```
openstack endpoint create --region RegionOne object-store public http://controller:8080/v1/AUTH_%\(project_id\)s
openstack endpoint create --region RegionOne object-store internal http://controller:8080/v1/AUTH_%\(project_id\)s
openstack endpoint create --region RegionOne object-store admin http://controller:8080/v1
```

Install the packages (CTL):

```
yum install openstack-swift-proxy python3-swiftclient python3-keystoneclient python3-keystonemiddleware memcached
```

Configure the proxy-server

The Swift RPM package already contains a basically usable proxy-server.conf; you only need to modify the IP and the swift password in it.

**Note: replace the password with the password you chose for the swift user in the Identity service.**

Install and configure the storage nodes (STG)

Install the supporting packages:

```
yum install xfsprogs rsync
```

Format the /dev/vdb and /dev/vdc devices as XFS:

```
mkfs.xfs /dev/vdb
mkfs.xfs /dev/vdc
```

Create the mount point directory structure:

```
mkdir -p /srv/node/vdb
mkdir -p /srv/node/vdc
```

Find the UUIDs of the new partitions:

```
blkid
```

Edit the /etc/fstab file and add the following to it:

```
UUID="" /srv/node/vdb xfs noatime 0 2
UUID="" /srv/node/vdc xfs noatime 0 2
```

Mount the devices:

```
mount /srv/node/vdb
mount /srv/node/vdc
```

Note: if you do not need fault tolerance, only one device needs to be created in the steps above, and the rsync configuration below can be skipped.
(Optional) Create or edit the /etc/rsyncd.conf file to contain the following:

```
[DEFAULT]
uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
address = MANAGEMENT_INTERFACE_IP_ADDRESS

[account]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/account.lock

[container]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/container.lock

[object]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/object.lock
```

Replace `MANAGEMENT_INTERFACE_IP_ADDRESS` with the IP address of the management network on the storage node.

Start the rsyncd service and configure it to start when the system boots:

```
systemctl enable rsyncd.service
systemctl start rsyncd.service
```

Install and configure the components on the storage nodes (STG)

Install the packages:

```
yum install openstack-swift-account openstack-swift-container openstack-swift-object
```

Edit the account-server.conf, container-server.conf, and object-server.conf files in the /etc/swift directory, replacing `bind_ip` with the IP address of the management network on the storage node.

Ensure proper ownership of the mount point directory structure:

```
chown -R swift:swift /srv/node
```

Create the recon directory and ensure it has the proper ownership:

```
mkdir -p /var/cache/swift
chown -R root:swift /var/cache/swift
chmod -R 775 /var/cache/swift
```

Create the account ring (CTL)

Change to the /etc/swift directory:

```
cd /etc/swift
```

Create the base account.builder file:

```
swift-ring-builder account.builder create 10 1 1
```

Add each storage node to the ring:

```
swift-ring-builder account.builder add \
  --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6202 \
  --device DEVICE_NAME --weight DEVICE_WEIGHT
```

Replace `STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS` with the IP address of the management network on the storage node, and `DEVICE_NAME` with the name of a storage device on the same storage node.

Note: repeat this command for every storage device on every storage node.

Verify the ring contents:

```
swift-ring-builder account.builder
```

Rebalance the ring:

```
swift-ring-builder account.builder rebalance
```

Create the container ring (CTL)

Change to the /etc/swift directory.

Create the base container.builder file:

```
swift-ring-builder container.builder create 10 1 1
```

Add each storage node to the ring:

```
swift-ring-builder container.builder add \
  --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6201 \
  --device DEVICE_NAME --weight 100
```

Replace `STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS` with the IP address of the management network on the storage node, and `DEVICE_NAME` with the name of a storage device on the same storage node.

Note: repeat this command for every storage device on every storage node.

Verify the ring contents:

```
swift-ring-builder container.builder
```

Rebalance the ring:

```
swift-ring-builder container.builder rebalance
```
Create the object ring (CTL)

Change to the /etc/swift directory.

Create the base object.builder file:

```
swift-ring-builder object.builder create 10 1 1
```

Add each storage node to the ring:

```
swift-ring-builder object.builder add \
  --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6200 \
  --device DEVICE_NAME --weight 100
```

Replace `STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS` with the IP address of the management network on the storage node, and `DEVICE_NAME` with the name of a storage device on the same storage node.

Note: repeat this command for every storage device on every storage node.

Verify the ring contents:

```
swift-ring-builder object.builder
```

Rebalance the ring:

```
swift-ring-builder object.builder rebalance
```

Distribute the ring configuration files:

Copy the account.ring.gz, container.ring.gz, and object.ring.gz files to the /etc/swift directory on every storage node and on any other node running the proxy service.

Finalize the installation

Edit the /etc/swift/swift.conf file:

```
[swift-hash]
swift_hash_path_suffix = test-hash
swift_hash_path_prefix = test-hash

[storage-policy:0]
name = Policy-0
default = yes
```

Replace test-hash with unique values.

Copy the swift.conf file to the /etc/swift directory on every storage node and on any other node running the proxy service.

On all nodes, ensure proper ownership of the configuration directory:

```
chown -R root:swift /etc/swift
```

On the controller node and on any other node running the proxy service, start the Object Storage proxy service and its dependencies, and configure them to start when the system boots:

```
systemctl enable openstack-swift-proxy.service memcached.service
systemctl start openstack-swift-proxy.service memcached.service
```

On the storage nodes, start the Object Storage services and configure them to start when the system boots:

```
systemctl enable openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service
systemctl start openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service
systemctl enable openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service
systemctl start openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service
systemctl enable openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service
systemctl start openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service
```
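Once the proxy and storage services are running, a quick sanity check can be performed from any host that has the OpenStack client and credentials with object store access loaded; the container name and uploaded file below are arbitrary examples:

```shell
# Show account status through the proxy
swift stat

# Create a container, upload a test object, and list it
openstack container create test-container
openstack object create test-container /etc/hosts
openstack object list test-container
```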
## Cyborg Installation

Cyborg provides accelerator device support for OpenStack, including GPU, FPGA, ASIC, NP, SoCs, NVMe/NOF SSDs, ODP, DPDK/SPDK, and so on.

1. Initialize the corresponding database:

```
CREATE DATABASE cyborg;
GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'localhost' IDENTIFIED BY 'CYBORG_DBPASS';
GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'%' IDENTIFIED BY 'CYBORG_DBPASS';
```

2. Create the corresponding Keystone resource objects:

```
$ openstack user create --domain default --password-prompt cyborg
$ openstack role add --project service --user cyborg admin
$ openstack service create --name cyborg --description "Acceleration Service" accelerator
$ openstack endpoint create --region RegionOne accelerator public http://:6666/v1
$ openstack endpoint create --region RegionOne accelerator internal http://:6666/v1
$ openstack endpoint create --region RegionOne accelerator admin http://:6666/v1
```

3. Install Cyborg:

```
yum install openstack-cyborg
```

4. Configure Cyborg

Modify /etc/cyborg/cyborg.conf:

```
[DEFAULT]
transport_url = rabbit://%RABBITMQ_USER%:%RABBITMQ_PASSWORD%@%OPENSTACK_HOST_IP%:5672/
use_syslog = False
state_path = /var/lib/cyborg
debug = True

[database]
connection = mysql+pymysql://%DATABASE_USER%:%DATABASE_PASSWORD%@%OPENSTACK_HOST_IP%/cyborg

[service_catalog]
project_domain_id = default
user_domain_id = default
project_name = service
password = PASSWORD
username = cyborg
auth_url = http://%OPENSTACK_HOST_IP%/identity
auth_type = password

[placement]
project_domain_name = Default
project_name = service
user_domain_name = Default
password = PASSWORD
username = placement
auth_url = http://%OPENSTACK_HOST_IP%/identity
auth_type = password

[keystone_authtoken]
memcached_servers = localhost:11211
project_domain_name = Default
project_name = service
user_domain_name = Default
password = PASSWORD
username = cyborg
auth_url = http://%OPENSTACK_HOST_IP%/identity
auth_type = password
```

Modify the corresponding user names, passwords, IP addresses, and other information yourself.

5. Synchronize the database tables:

```
cyborg-dbsync --config-file /etc/cyborg/cyborg.conf upgrade
```

6. Start the Cyborg services:

```
systemctl enable openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent
systemctl start openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent
```

## Aodh Installation

1. Create the database:

```
CREATE DATABASE aodh;
GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'localhost' IDENTIFIED BY 'AODH_DBPASS';
GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'%' IDENTIFIED BY 'AODH_DBPASS';
```

2. Create the corresponding Keystone resource objects:

```
openstack user create --domain default --password-prompt aodh
openstack role add --project service --user aodh admin
openstack service create --name aodh --description "Telemetry" alarming
openstack endpoint create --region RegionOne alarming public http://controller:8042
openstack endpoint create --region RegionOne alarming internal http://controller:8042
openstack endpoint create --region RegionOne alarming admin http://controller:8042
```

3. Install Aodh:

```
yum install openstack-aodh-api openstack-aodh-evaluator openstack-aodh-notifier openstack-aodh-listener openstack-aodh-expirer python3-aodhclient
```

4. Modify the configuration file:
```
[database]
connection = mysql+pymysql://aodh:AODH_DBPASS@controller/aodh

[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = aodh
password = AODH_PASS

[service_credentials]
auth_type = password
auth_url = http://controller:5000/v3
project_domain_id = default
user_domain_id = default
project_name = service
username = aodh
password = AODH_PASS
interface = internalURL
region_name = RegionOne
```

5. Initialize the database:

```
aodh-dbsync
```

6. Start the Aodh services:

```
systemctl enable openstack-aodh-api.service openstack-aodh-evaluator.service openstack-aodh-notifier.service openstack-aodh-listener.service
systemctl start openstack-aodh-api.service openstack-aodh-evaluator.service openstack-aodh-notifier.service openstack-aodh-listener.service
```

## Gnocchi Installation

1. Create the database:

```
CREATE DATABASE gnocchi;
GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'localhost' IDENTIFIED BY 'GNOCCHI_DBPASS';
GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'%' IDENTIFIED BY 'GNOCCHI_DBPASS';
```

2. Create the corresponding Keystone resource objects:

```
openstack user create --domain default --password-prompt gnocchi
openstack role add --project service --user gnocchi admin
openstack service create --name gnocchi --description "Metric Service" metric
openstack endpoint create --region RegionOne metric public http://controller:8041
openstack endpoint create --region RegionOne metric internal http://controller:8041
openstack endpoint create --region RegionOne metric admin http://controller:8041
```

3. Install Gnocchi:

```
yum install openstack-gnocchi-api openstack-gnocchi-metricd python3-gnocchiclient
```

4. Modify the configuration file /etc/gnocchi/gnocchi.conf:

```
[api]
auth_mode = keystone
port = 8041
uwsgi_mode = http-socket

[keystone_authtoken]
auth_type = password
auth_url = http://controller:5000/v3
project_domain_name = Default
user_domain_name = Default
project_name = service
username = gnocchi
password = GNOCCHI_PASS
interface = internalURL
region_name = RegionOne

[indexer]
url = mysql+pymysql://gnocchi:GNOCCHI_DBPASS@controller/gnocchi
```
```
[storage]
# coordination_url is not required but specifying one will improve
# performance with better workload division across workers.
coordination_url = redis://controller:6379
file_basepath = /var/lib/gnocchi
driver = file
```

5. Initialize the database:

```
gnocchi-upgrade
```

6. Start the Gnocchi services:

```
systemctl enable openstack-gnocchi-api.service openstack-gnocchi-metricd.service
systemctl start openstack-gnocchi-api.service openstack-gnocchi-metricd.service
```

## Ceilometer Installation

1. Create the corresponding Keystone resource objects:

```
openstack user create --domain default --password-prompt ceilometer
openstack role add --project service --user ceilometer admin
openstack service create --name ceilometer --description "Telemetry" metering
```

2. Install Ceilometer:

```
yum install openstack-ceilometer-notification openstack-ceilometer-central
```

3. Modify the configuration file /etc/ceilometer/pipeline.yaml:

```
publishers:
    # set address of Gnocchi
    # + filter out Gnocchi-related activity meters (Swift driver)
    # + set default archive policy
    - gnocchi://?filter_project=service&archive_policy=low
```

4. Modify the configuration file /etc/ceilometer/ceilometer.conf:

```
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller

[service_credentials]
auth_type = password
auth_url = http://controller:5000/v3
project_domain_id = default
user_domain_id = default
project_name = service
username = ceilometer
password = CEILOMETER_PASS
interface = internalURL
region_name = RegionOne
```

5. Initialize the database:

```
ceilometer-upgrade
```

6. Start the Ceilometer services:

```
systemctl enable openstack-ceilometer-notification.service openstack-ceilometer-central.service
systemctl start openstack-ceilometer-notification.service openstack-ceilometer-central.service
```
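To roughly confirm that the metering pipeline is wired together (Ceilometer publishing measures into Gnocchi), something like the following can be run a few minutes after the services start. This is only a sketch, assuming the `gnocchi` CLI from python3-gnocchiclient and admin credentials are available in the environment:

```shell
# Check that gnocchi-metricd has no growing processing backlog
gnocchi status

# List the resources and metrics that Ceilometer has registered
gnocchi resource list
gnocchi metric list
```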
## Heat Installation

1. Create the `heat` database and grant it the proper access permissions, replacing `HEAT_DBPASS` with a suitable password:

```
CREATE DATABASE heat;
GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'localhost' IDENTIFIED BY 'HEAT_DBPASS';
GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'%' IDENTIFIED BY 'HEAT_DBPASS';
```

2. Create the service credentials: create the `heat` user and add the `admin` role to it:

```
openstack user create --domain default --password-prompt heat
openstack role add --project service --user heat admin
```

3. Create the `heat` and `heat-cfn` services and their API endpoints:

```
openstack service create --name heat --description "Orchestration" orchestration
openstack service create --name heat-cfn --description "Orchestration" cloudformation
openstack endpoint create --region RegionOne orchestration public http://controller:8004/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne orchestration internal http://controller:8004/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne orchestration admin http://controller:8004/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne cloudformation public http://controller:8000/v1
openstack endpoint create --region RegionOne cloudformation internal http://controller:8000/v1
openstack endpoint create --region RegionOne cloudformation admin http://controller:8000/v1
```

4. Create the additional information required for stack management, including the heat domain, its domain admin user `heat_domain_admin`, the `heat_stack_owner` role, and the `heat_stack_user` role:

```
openstack user create --domain heat --password-prompt heat_domain_admin
openstack role add --domain heat --user-domain heat --user heat_domain_admin admin
openstack role create heat_stack_owner
openstack role create heat_stack_user
```

5. Install the packages:

```
yum install openstack-heat-api openstack-heat-api-cfn openstack-heat-engine
```

6. Modify the configuration file /etc/heat/heat.conf:

```
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
heat_metadata_server_url = http://controller:8000
heat_waitcondition_server_url = http://controller:8000/v1/waitcondition
stack_domain_admin = heat_domain_admin
stack_domain_admin_password = HEAT_DOMAIN_PASS
stack_user_domain_name = heat

[database]
connection = mysql+pymysql://heat:HEAT_DBPASS@controller/heat

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = heat
password = HEAT_PASS

[trustee]
auth_type = password
auth_url = http://controller:5000
username = heat
password = HEAT_PASS
user_domain_name = default

[clients_keystone]
auth_uri = http://controller:5000
```

7. Initialize the heat database tables:

```
su -s /bin/sh -c "heat-manage db_sync" heat
```

8. Start the services:

```
systemctl enable openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service
systemctl start openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service
```

## Quick deployment with the OpenStack SIG development tool oos

oos (openEuler OpenStack SIG) is the command-line tool provided by the OpenStack SIG. Its `oos env` family of commands provides ansible scripts for one-click deployment of OpenStack (either all-in-one or a three-node cluster), which you can use to quickly deploy an OpenStack environment based on openEuler RPM packages. The oos tool supports two ways of deploying an OpenStack environment: connecting to a cloud provider (currently only the Huawei Cloud provider is supported) or managing existing hosts. The following uses deploying an all-in-one OpenStack environment on Huawei Cloud as an example to explain how to use the oos tool.

Install the oos tool:

```
pip install openstack-sig-tool
```

Configure the Huawei Cloud provider information

Open the /usr/local/etc/oos/oos.conf file and change the configuration to the Huawei Cloud resources you own:

```
[huaweicloud]
ak =
sk =
region = ap-southeast-3
root_volume_size = 100
data_volume_size = 100
security_group_name = oos
image_format = openEuler-%%(release)s-%%(arch)s
vpc_name = oos_vpc
subnet1_name = oos_subnet1
subnet2_name = oos_subnet2
```

Configure the OpenStack environment information

Open the /usr/local/etc/oos/oos.conf file and modify the configuration according to the current machine environment and requirements. The content is as follows:
```
[environment]
mysql_root_password = root
mysql_project_password = root
rabbitmq_password = root
project_identity_password = root
enabled_service = keystone,neutron,cinder,placement,nova,glance,horizon,aodh,ceilometer,cyborg,gnocchi,kolla,heat,swift,trove,tempest
neutron_provider_interface_name = br-ex
default_ext_subnet_range = 10.100.100.0/24
default_ext_subnet_gateway = 10.100.100.1
neutron_dataplane_interface_name = eth1
cinder_block_device = vdb
swift_storage_devices = vdc
swift_hash_path_suffix = ash
swift_hash_path_prefix = has
glance_api_workers = 2
cinder_api_workers = 2
nova_api_workers = 2
nova_metadata_api_workers = 2
nova_conductor_workers = 2
nova_scheduler_workers = 2
neutron_api_workers = 2
horizon_allowed_host = *
kolla_openeuler_plugin = false
```

Key configuration items:

| Item | Description |
|:---|:---|
| enabled_service | List of services to install; trim it according to your needs |
| neutron_provider_interface_name | Name of the neutron L3 bridge |
| default_ext_subnet_range | neutron private network IP range |
| default_ext_subnet_gateway | neutron private network gateway |
| neutron_dataplane_interface_name | NIC used by neutron; a new, dedicated NIC is recommended to avoid conflicts with the existing NIC and to prevent the all-in-one host from losing connectivity |
| cinder_block_device | Name of the volume device used by cinder |
| swift_storage_devices | Name of the volume device used by swift |
| kolla_openeuler_plugin | Whether to enable the kolla plugin; when set to True, kolla supports deploying openEuler containers |

Create an openEuler 22.03-LTS-SP4 x86_64 virtual machine on Huawei Cloud for deploying the all-in-one OpenStack:

```
# sshpass is used during `oos env create` to set up password-less access to the target VM
dnf install sshpass
oos env create -r 22.03-lts-sp4 -f small -a x86 -n test-oos all_in_one
```

See `oos env create --help` for the detailed parameters.

Deploy the all-in-one OpenStack environment:

```
oos env setup test-oos -r train
```

See `oos env setup --help` for the detailed parameters.

Initialize the tempest environment

If you want to run tempest tests against this environment, run `oos env init`, which automatically creates the OpenStack resources that tempest needs:

```
oos env init test-oos
```

After the command succeeds, a mytest directory is generated under the user's home directory; enter it and you can run the `tempest run` command.

If you deploy the OpenStack environment by managing existing hosts instead, the overall logic is the same as when connecting to Huawei Cloud above: steps 1, 3, 5, and 6 stay the same, step 2 (configuring the Huawei Cloud provider information) is removed, and step 4 changes from creating a VM on Huawei Cloud to managing the host:

```
# sshpass is used during `oos env create` to set up password-less access to the target host
dnf install sshpass
oos env manage -r 22.03-lts-sp4 -i TARGET_MACHINE_IP -p TARGET_MACHINE_PASSWD -n test-oos
```
Replace `TARGET_MACHINE_IP` with the IP of the target machine and `TARGET_MACHINE_PASSWD` with the password of the target machine. See `oos env manage --help` for the detailed parameters.

## Deployment with the OpenStack SIG deployment tool opensd

opensd is used to deploy the various OpenStack component services in batches, in a scripted way.

### Deployment steps

#### 1. Information to confirm before deployment

- When installing the operating system, set selinux to disabled.
- When installing the operating system, set UseDNS to no in the /etc/ssh/sshd_config configuration file.
- The operating system language must be set to English.
- Before deploying, make sure the /etc/hosts file on all compute nodes contains no entries resolving the compute hosts.

#### 2. Ceph pool and authentication creation (optional)

Skip this step if you do not use ceph or already have a ceph cluster.

Run on any ceph monitor node:

##### 2.1 Create the pools

```
ceph osd pool create volumes 2048
ceph osd pool create images 2048
```

##### 2.2 Initialize the pools

```
rbd pool init volumes
rbd pool init images
```

##### 2.3 Create the user authentication

```
ceph auth get-or-create client.glance mon 'profile rbd' osd 'profile rbd pool=images' mgr 'profile rbd pool=images'
ceph auth get-or-create client.cinder mon 'profile rbd' osd 'profile rbd pool=volumes, profile rbd pool=images' mgr 'profile rbd pool=volumes'
```

#### 3. Configure LVM (optional)

Depending on the physical machine's disk configuration and free space, mount extra disk space for the MySQL data directory. An example follows (adjust to your actual situation):

```
fdisk -l

Disk /dev/sdd: 479.6 GB, 479559942144 bytes, 936640512 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk label type: dos
Disk identifier: 0x000ed242
```

Create the partition:

```
parted /dev/sdd mkpart primary 0 -1
```

Create the PV:

```
partprobe /dev/sdd1
pvcreate /dev/sdd1
```

Create and activate the VG:

```
vgcreate vg_mariadb /dev/sdd1
vgchange -ay vg_mariadb
```

Check the VG capacity:

```
vgdisplay

  --- Volume group ---
  VG Name               vg_mariadb
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  2
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               1
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               446.62 GiB
  PE Size               4.00 MiB
  Total PE              114335
  Alloc PE / Size       114176 / 446.00 GiB
  Free  PE / Size       159 / 636.00 MiB
  VG UUID               bVUmDc-VkMu-Vi43-mg27-TEkG-oQfK-TvqdEc
```
Create the LV:

```
lvcreate -L 446G -n lv_mariadb vg_mariadb
```

Format the disk and get the UUID of the volume:

```
mkfs.ext4 /dev/mapper/vg_mariadb-lv_mariadb
blkid /dev/mapper/vg_mariadb-lv_mariadb
/dev/mapper/vg_mariadb-lv_mariadb: UUID="98d513eb-5f64-4aa5-810e-dc7143884fa2" TYPE="ext4"
```

Note: 98d513eb-5f64-4aa5-810e-dc7143884fa2 is the UUID of the volume.

Mount the disk:

```
mount /dev/mapper/vg_mariadb-lv_mariadb /var/lib/mysql
rm -rf /var/lib/mysql/*
```

#### 4. Configure the yum repo

Run on the deployment node:

##### 4.1 Back up the yum repos

```
mkdir /etc/yum.repos.d/bak/
mv /etc/yum.repos.d/*.repo /etc/yum.repos.d/bak/
```

##### 4.2 Configure the yum repo

```
cat > /etc/yum.repos.d/opensd.repo << EOF
[train]
name=train
baseurl=http://119.3.219.20:82/openEuler:/22.03:/LTS:/SP4:/Epol:/Multi-Version:/OpenStack:/Train/standard_$basearch/
enabled=1
gpgcheck=0

[epol]
name=epol
baseurl=http://119.3.219.20:82/openEuler:/22.03:/LTS:/SP4:/Epol/standard_$basearch/
enabled=1
gpgcheck=0

[everything]
name=everything
baseurl=http://119.3.219.20:82/openEuler:/22.03:/LTS:/SP4s/standard_$basearch/
enabled=1
gpgcheck=0
EOF
```

##### 4.3 Refresh the yum cache

```
yum clean all
yum makecache
```

#### 5. Install opensd

Run on the deployment node:

##### 5.1 Clone the opensd source code and install it

```
git clone https://gitee.com/openeuler/opensd
cd opensd
python3 setup.py install
```

#### 6. Set up SSH mutual trust

Run on the deployment node:

##### 6.1 Generate the key pair

Run the following command and press Enter through all prompts:

```
ssh-keygen
```

##### 6.2 Generate the host IP address file

Configure all host IPs used in auto_ssh_host_ip, for example:

```
cd /usr/local/share/opensd/tools/
vim auto_ssh_host_ip

10.0.0.1
10.0.0.2
...
10.0.0.10
```

##### 6.3 Change the password and run the script

Replace the string 123123 inside the password-less access script /usr/local/bin/opensd-auto-ssh with the real host password:

```
# Replace the 123123 string inside the script
vim /usr/local/bin/opensd-auto-ssh

## Install expect, then run the script
dnf install expect -y
opensd-auto-ssh
```

##### 6.4 Set up mutual trust between the deployment node and the ceph monitor (optional)

```
ssh-copy-id root@x.x.x.x
```
#### 7. Configure opensd

Run on the deployment node:

##### 7.1 Generate random passwords

Install python3-pbr, python3-utils, python3-pyyaml, and python3-oslo-utils, then generate the random passwords:

```
dnf install python3-pbr python3-utils python3-pyyaml python3-oslo-utils -y

# Run the command to generate the passwords
opensd-genpwd

# Check that the passwords were generated
cat /usr/local/share/opensd/etc_examples/opensd/passwords.yml
```

##### 7.2 Configure the inventory file

Each host entry contains a host name, an ansible_host IP, and an availability_zone; all three must be configured and none can be omitted. Example:

```
vim /usr/local/share/opensd/ansible/inventory/multinode

# The three control node hosts
[control]
controller1 ansible_host=10.0.0.35 availability_zone=az01.cell01.cn-yogadev-1
controller2 ansible_host=10.0.0.36 availability_zone=az01.cell01.cn-yogadev-1
controller3 ansible_host=10.0.0.37 availability_zone=az01.cell01.cn-yogadev-1

# Network node information, kept the same as the control nodes
[network]
controller1 ansible_host=10.0.0.35 availability_zone=az01.cell01.cn-yogadev-1
controller2 ansible_host=10.0.0.36 availability_zone=az01.cell01.cn-yogadev-1
controller3 ansible_host=10.0.0.37 availability_zone=az01.cell01.cn-yogadev-1

# cinder-volume service nodes
[storage]
storage1 ansible_host=10.0.0.61 availability_zone=az01.cell01.cn-yogadev-1
storage2 ansible_host=10.0.0.78 availability_zone=az01.cell01.cn-yogadev-1
storage3 ansible_host=10.0.0.82 availability_zone=az01.cell01.cn-yogadev-1

# Cell1 cluster information
[cell-control-cell1]
cell1 ansible_host=10.0.0.24 availability_zone=az01.cell01.cn-yogadev-1
cell2 ansible_host=10.0.0.25 availability_zone=az01.cell01.cn-yogadev-1
cell3 ansible_host=10.0.0.26 availability_zone=az01.cell01.cn-yogadev-1

[compute-cell1]
compute1 ansible_host=10.0.0.27 availability_zone=az01.cell01.cn-yogadev-1
compute2 ansible_host=10.0.0.28 availability_zone=az01.cell01.cn-yogadev-1
compute3 ansible_host=10.0.0.29 availability_zone=az01.cell01.cn-yogadev-1

[cell1:children]
cell-control-cell1
compute-cell1

# Cell2 cluster information
[cell-control-cell2]
cell4 ansible_host=10.0.0.36 availability_zone=az03.cell02.cn-yogadev-1
cell5 ansible_host=10.0.0.37 availability_zone=az03.cell02.cn-yogadev-1
cell6 ansible_host=10.0.0.38 availability_zone=az03.cell02.cn-yogadev-1

[compute-cell2]
compute4 ansible_host=10.0.0.39 availability_zone=az03.cell02.cn-yogadev-1
compute5 ansible_host=10.0.0.40 availability_zone=az03.cell02.cn-yogadev-1
compute6 ansible_host=10.0.0.41 availability_zone=az03.cell02.cn-yogadev-1

[cell2:children]
cell-control-cell2
compute-cell2

[baremetal]

[compute-cell1-ironic]

# Fill in the control host groups of all cell clusters
[nova-conductor:children]
cell-control-cell1
cell-control-cell2

# Fill in the compute host groups of all cell clusters
[nova-compute:children]
compute-added
compute-cell1
compute-cell2

# The host groups below do not need to be changed; keep them as they are
[compute-added]

[chrony-server:children]
control

[pacemaker:children]
control

......
......
```
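Before going further, a quick way to confirm that the inventory parses as expected is to ask ansible to list the hosts it resolves for a group. This is just a sketch; it assumes ansible is already installed, as it will be in step 7.4 below:

```shell
ansible all -i /usr/local/share/opensd/ansible/inventory/multinode --list-hosts
ansible nova-compute -i /usr/local/share/opensd/ansible/inventory/multinode --list-hosts
```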
##### 7.3 Configure the global variables

Note: only the commented configuration items mentioned in this document need to be changed; the other parameters do not. Leave an item empty if there is no corresponding configuration.

```
vim /usr/local/share/opensd/etc_examples/opensd/globals.yml

########################
# Network & Base options
########################
network_interface: "eth0"           # NIC name of the management network
neutron_external_interface: "eth1"  # NIC name of the data-plane (business) network
cidr_netmask: 24                    # Netmask of the management network
opensd_vip_address: 10.0.0.33       # Virtual IP address of the control nodes
cell1_vip_address: 10.0.0.34        # Virtual IP address of the cell1 cluster
cell2_vip_address: 10.0.0.35        # Virtual IP address of the cell2 cluster
external_fqdn: ""                   # External domain name used for VNC access to VMs
external_ntp_servers: []            # External NTP server addresses
yumrepo_host:                       # IP address of the yum repo
yumrepo_port:                       # Port of the yum repo
environment:                        # Type of the yum repo
upgrade_all_packages: "yes"         # Whether to upgrade all installed packages (runs yum upgrade); set to "yes" for an initial deployment
enable_miner: "no"                  # Whether to deploy the miner service
enable_chrony: "no"                 # Whether to deploy the chrony service
enable_pri_mariadb: "no"            # Whether to deploy mariadb for a private cloud
enable_hosts_file_modify: "no"      # Whether to add node information to /etc/hosts when scaling out compute nodes and deploying the ironic service

########################
# Available zone options
########################
az_cephmon_compose:
- availability_zone:          # Name of the availability zone; must match the "availability_zone" value of az01 in the multinode host file
  ceph_mon_host:              # Address of one ceph monitor host for az01; the deployment node needs SSH mutual trust with this host
  reserve_vcpu_based_on_numa:
- availability_zone:          # Name of the availability zone; must match the "availability_zone" value of az02 in the multinode host file
  ceph_mon_host:              # Address of one ceph monitor host for az02; the deployment node needs SSH mutual trust with this host
  reserve_vcpu_based_on_numa:
- availability_zone:          # Name of the availability zone; must match the "availability_zone" value of az03 in the multinode host file
  ceph_mon_host:              # Address of one ceph monitor host for az03; the deployment node needs SSH mutual trust with this host
  reserve_vcpu_based_on_numa:

# `reserve_vcpu_based_on_numa` is set to `yes` or `no`. For example, with:
#   NUMA node0 CPU(s): 0-15,32-47
#   NUMA node1 CPU(s): 16-31,48-63
# When reserve_vcpu_based_on_numa: "yes", vcpus are reserved evenly per NUMA node:
#   vcpu_pin_set = 2-15,34-47,18-31,50-63
```
```
# When reserve_vcpu_based_on_numa: "no", vcpus are reserved sequentially starting from the first vcpu:
#   vcpu_pin_set = 8-64

#######################
# Nova options
#######################
nova_reserved_host_memory_mb: 2048  # Memory reserved for the compute service on compute nodes
enable_cells: "yes"                 # Whether the cell nodes are deployed on separate nodes
support_gpu: "False"                # Whether the cell nodes include GPU servers; True if they do, otherwise False

#######################
# Neutron options
#######################
monitor_ip:                         # Configure the monitoring nodes
- 10.0.0.9
- 10.0.0.10
enable_meter_full_eip: True         # Whether to allow full EIP metering; defaults to True
enable_meter_port_forwarding: True  # Whether to allow port forwarding metering; defaults to True
enable_meter_ecs_ipv6: True         # Whether to allow ecs_ipv6 metering; defaults to True
enable_meter: True                  # Whether to enable metering; defaults to True
is_sdn_arch: False                  # Whether this is an SDN architecture; defaults to False

# The network type enabled by default is vlan; vlan and vxlan are mutually exclusive.
enable_vxlan_network_type: False    # Set to True to use vxlan networks, False to use vlan networks
enable_neutron_fwaas: False         # If the environment uses a firewall, set to True to enable the firewall function

# Neutron provider
neutron_provider_networks:
  network_types: "{{ 'vxlan' if enable_vxlan_network_type else 'vlan' }}"
  network_vlan_ranges: "default:xxx:xxx"   # Data-plane network vlan range planned before deployment
  network_mappings: "default:br-provider"
  network_interface: "{{ neutron_external_interface }}"
  network_vxlan_ranges: ""                 # Data-plane network vxlan range planned before deployment

# The following options are the SDN controller parameters; set `enable_sdn_controller` to True to enable the SDN controller function.
# Determine the other parameters according to the pre-deployment planning and the SDN deployment information.
```
enable_sdn_controller: False sdn_controller_ip_address: # SDN\u63a7\u5236\u5668ip\u5730\u5740 sdn_controller_username: # SDN\u63a7\u5236\u5668\u7684\u7528\u6237\u540d sdn_controller_password: # SDN\u63a7\u5236\u5668\u7684\u7528\u6237\u5bc6\u7801 ####################### # Dimsagent options ####################### enable_dimsagent: \"no\" # \u5b89\u88c5\u955c\u50cf\u670d\u52a1agent, \u9700\u8981\u6539\u4e3ayes # Address and domain name for s2 s3_address_domain_pair: - host_ip: host_name: ####################### # Trove options ####################### enable_trove: \"no\" #\u5b89\u88c5trove \u9700\u8981\u6539\u4e3ayes #default network trove_default_neutron_networks: #trove \u7684\u7ba1\u7406\u7f51\u7edcid `openstack network list|grep -w trove-mgmt|awk '{print$2}'` #s3 setup(\u5982\u679c\u6ca1\u6709s3,\u4ee5\u4e0b\u503c\u586bnull) s3_endpoint_host_ip: #s3\u7684ip s3_endpoint_host_name: #s3\u7684\u57df\u540d s3_endpoint_url: #s3\u7684url \u00b7\u4e00\u822c\u4e3ahttp\uff1a//s3\u57df\u540d s3_access_key: #s3\u7684ak s3_secret_key: #s3\u7684sk ####################### # Ironic options ####################### enable_ironic: \"no\" #\u662f\u5426\u5f00\u673a\u88f8\u91d1\u5c5e\u90e8\u7f72\uff0c\u9ed8\u8ba4\u4e0d\u5f00\u542f ironic_neutron_provisioning_network_uuid: ironic_neutron_cleaning_network_uuid: \"{{ ironic_neutron_provisioning_network_uuid }}\" ironic_dnsmasq_interface: ironic_dnsmasq_dhcp_range: ironic_tftp_server_address: \"{{ hostvars[inventory_hostname]['ansible_' + ironic_dnsmasq_interface]['ipv4']['address'] }}\" # \u4ea4\u6362\u673a\u8bbe\u5907\u76f8\u5173\u4fe1\u606f neutron_ml2_conf_genericswitch: genericswitch:xxxxxxx: device_type: ngs_mac_address: ip: username: password: ngs_port_default_vlan: # Package state setting haproxy_package_state: \"present\" mariadb_package_state: \"present\" rabbitmq_package_state: \"present\" memcached_package_state: \"present\" ceph_client_package_state: \"present\" keystone_package_state: \"present\" glance_package_state: \"present\" cinder_package_state: \"present\" nova_package_state: \"present\" neutron_package_state: \"present\" miner_package_state: \"present\"","title":"7.3 \u914d\u7f6e\u5168\u5c40\u53d8\u91cf"},{"location":"install/openEuler-22.03-LTS-SP4/OpenStack-train/#74-ssh","text":"dnf install ansible -y ansible all -i /usr/local/share/opensd/ansible/inventory/multinode -m ping # \u6267\u884c\u7ed3\u679c\u663e\u793a\u6bcf\u53f0\u4e3b\u673a\u90fd\u662f\"SUCCESS\"\u5373\u8bf4\u660e\u8fde\u63a5\u72b6\u6001\u6ca1\u95ee\u9898,\u793a\u4f8b\uff1a compute1 | SUCCESS => { \"ansible_facts\": { \"discovered_interpreter_python\": \"/usr/bin/python\" }, \"changed\": false, \"ping\": \"pong\" }","title":"7.4 \u68c0\u67e5\u6240\u6709\u8282\u70b9ssh\u8fde\u63a5\u72b6\u6001"},{"location":"install/openEuler-22.03-LTS-SP4/OpenStack-train/#8","text":"\u5728\u90e8\u7f72\u8282\u70b9\u6267\u884c\uff1a","title":"8. 
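The connectivity check above requires passwordless SSH from the deployment node to every managed host, including the `ceph_mon_host` entries configured in `globals.yml`. A minimal sketch of distributing the deployment node's key is shown below; the host names are placeholders for your actual inventory entries.

```shell
# Generate a key on the deployment node if one does not exist yet
[ -f ~/.ssh/id_rsa ] || ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
# Push the public key to every managed host (placeholder names)
for host in controller1 controller2 compute1 ceph-mon1; do
    ssh-copy-id root@"$host"
done
# Re-run the connectivity check afterwards
ansible all -i /usr/local/share/opensd/ansible/inventory/multinode -m ping
```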
## 8. Run the deployment

Run the following on the deployment node.

### 8.1 Run bootstrap

```
# Run the deployment bootstrap
opensd -i /usr/local/share/opensd/ansible/inventory/multinode bootstrap --forks 50
```

### 8.2 Reboot the servers

Note: the reboot is needed because bootstrap may upgrade the kernel, change the SELinux configuration, or act on GPU servers. If the hosts were already installed with the new kernel, SELinux is disabled, and there are no GPU servers, this step can be skipped.

```
# Reboot the corresponding nodes manually
init 6
# After the reboot, check connectivity again
ansible all -i /usr/local/share/opensd/ansible/inventory/multinode -m ping
# After the operating system is back up, re-enable the yum repositories
```

### 8.3 Run the pre-deployment checks

```
opensd -i /usr/local/share/opensd/ansible/inventory/multinode prechecks --forks 50
```

### 8.4 Run the deployment

```
ln -s /usr/bin/python3 /usr/bin/python

# Full deployment:
opensd -i /usr/local/share/opensd/ansible/inventory/multinode deploy --forks 50

# Single-service deployment:
opensd -i /usr/local/share/opensd/ansible/inventory/multinode deploy --forks 50 -t service_name
```

# OpenStack-Wallaby Deployment Guide

Contents: OpenStack overview, conventions, preparing the environment, environment configuration, installing SQL DataBase, RabbitMQ, Memcached, and the OpenStack services (Keystone, Glance, Placement, Nova, Neutron, Cinder, Horizon, Tempest, Ironic, Kolla, Trove, Swift, Cyborg, Aodh, Gnocchi, Ceilometer, Heat), and quick deployment with the OpenStack SIG development tool oos.

## OpenStack overview

OpenStack is both a community and a project. It provides an operating platform and a tool set for deploying clouds, giving organizations scalable and flexible cloud computing. As an open source cloud management platform, OpenStack combines several major components, such as nova, cinder, neutron, glance, keystone, and horizon, to get its work done. OpenStack supports almost every type of cloud environment; the project's goal is a cloud management platform that is simple to deploy, massively scalable, feature rich, and consistent. OpenStack delivers an Infrastructure-as-a-Service (IaaS) solution through a set of complementary services, each of which exposes an API for integration.

The official openEuler 22.03-LTS-SP4 repositories already ship OpenStack-Wallaby. Configure the yum repositories and then deploy OpenStack by following this document.

## Conventions

OpenStack supports several deployment topologies. This document covers the ALL in One and Distributed modes, with the following conventions:

- ALL in One mode: ignore all suffixes.
- Distributed mode:
  - a `(CTL)` suffix means the configuration or command applies only to the `control node`;
  - a `(CPT)` suffix means it applies only to the `compute nodes`;
  - a `(STG)` suffix means it applies only to the `storage nodes`;
  - anything else applies to both the `control node` and the `compute nodes`.

Note: the services affected by these conventions are Cinder, Nova, and Neutron.

## Preparing the environment

### Environment configuration

Configure the official openEuler 22.03 LTS SP4 yum repositories; the EPOL repository must be enabled to get the OpenStack packages.

```
yum update
yum install openstack-release-wallaby
yum clean all && yum makecache
```

Note: if EPOL is not enabled in your yum configuration, configure it as shown below and make sure it is active.

```
vi /etc/yum.repos.d/openEuler.repo

[EPOL]
name=EPOL
baseurl=http://repo.openeuler.org/openEuler-22.03-LTS-SP4/EPOL/main/$basearch/
enabled=1
gpgcheck=1
gpgkey=http://repo.openeuler.org/openEuler-22.03-LTS-SP4/OS/$basearch/RPM-GPG-KEY-openEuler
```

Set the host names and name mapping. Set the host name of each node:

```
hostnamectl set-hostname controller    (CTL)
hostnamectl set-hostname compute       (CPT)
```

Assuming the controller node's IP is 10.0.0.11 and the compute node's IP (if present) is 10.0.0.12, add the following to /etc/hosts:

```
10.0.0.11 controller
10.0.0.12 compute
```

## Install SQL DataBase

Install the packages:

```
yum install mariadb mariadb-server python3-PyMySQL
```

Create and edit /etc/my.cnf.d/openstack.cnf:

```
vim /etc/my.cnf.d/openstack.cnf

[mysqld]
bind-address = 10.0.0.11
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
```

Note: set `bind-address` to the management IP address of the control node.

Start the DataBase service and enable it at boot:

```
systemctl enable mariadb.service
systemctl start mariadb.service
```
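Before moving on, a quick check that MariaDB is up and listening on the configured management address can save troubleshooting later; 10.0.0.11 below is the example bind-address from above, so substitute your own.

```shell
systemctl status mariadb.service --no-pager
# Connect over the configured management address and print the server version
mysql -h 10.0.0.11 -u root -p -e "SELECT VERSION();"
```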
Set the DataBase default password (optional):

```
mysql_secure_installation
```

Note: follow the prompts.

## Install RabbitMQ

Install the packages:

```
yum install rabbitmq-server
```

Start the RabbitMQ service and enable it at boot:

```
systemctl enable rabbitmq-server.service
systemctl start rabbitmq-server.service
```

Add the OpenStack user:

```
rabbitmqctl add_user openstack RABBIT_PASS
```

Note: replace RABBIT_PASS with the password for the OpenStack user.

Grant the openstack user configure, write, and read permissions:

```
rabbitmqctl set_permissions openstack ".*" ".*" ".*"
```

## Install Memcached

Install the dependency packages:

```
yum install memcached python3-memcached
```

Edit /etc/sysconfig/memcached:

```
vim /etc/sysconfig/memcached

OPTIONS="-l 127.0.0.1,::1,controller"
```

Start the Memcached service and enable it at boot:

```
systemctl enable memcached.service
systemctl start memcached.service
```

Note: after the service starts, you can run `memcached-tool controller stats` to make sure it started correctly and is available; `controller` can be replaced with the management IP of the control node.

## Install OpenStack

### Keystone installation

Create the keystone database and grant privileges:

```
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE keystone;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
IDENTIFIED BY 'KEYSTONE_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
IDENTIFIED BY 'KEYSTONE_DBPASS';
MariaDB [(none)]> exit
```

Note: replace KEYSTONE_DBPASS with the password for the Keystone database.

Install the packages:

```
yum install openstack-keystone httpd mod_wsgi
```

Configure keystone:

```
vim /etc/keystone/keystone.conf

[database]
connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone

[token]
provider = fernet
```

Explanation: the [database] section configures the database entry; the [token] section configures the token provider.

Note: replace KEYSTONE_DBPASS with the password of the Keystone database.

Synchronize the database:

```
su -s /bin/sh -c "keystone-manage db_sync" keystone
```

Initialize the Fernet key repositories:

```
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
```

Bootstrap the service:

```
keystone-manage bootstrap --bootstrap-password ADMIN_PASS \
--bootstrap-admin-url http://controller:5000/v3/ \
--bootstrap-internal-url http://controller:5000/v3/ \
--bootstrap-public-url http://controller:5000/v3/ \
--bootstrap-region-id RegionOne
```

Note: replace ADMIN_PASS with the password for the admin user.

Configure the Apache HTTP server:

```
vim /etc/httpd/conf/httpd.conf

ServerName controller
```

```
ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
```

Explanation: the ServerName entry must reference the control node.

Note: create the ServerName entry if it does not exist.

Start the Apache HTTP service:

```
systemctl enable httpd.service
systemctl start httpd.service
```

Create the environment variable file:

```
cat << EOF >> ~/.admin-openrc
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
EOF
```

Note: replace ADMIN_PASS with the password of the admin user.

Create the domain, projects, users, and roles in turn; python3-openstackclient must be installed first:

```
yum install python3-openstackclient
```

Load the environment variables:

```
source ~/.admin-openrc
```

Create the project `service`; the domain `default` was already created during `keystone-manage bootstrap`:

```
openstack domain create --description "An Example Domain" example
openstack project create --domain default --description "Service Project" service
```

Create the (non-admin) project `myproject`, user `myuser`, and role `myrole`, and add the role `myrole` to `myproject` and `myuser`:

```
openstack project create --domain default --description "Demo Project" myproject
openstack user create --domain default --password-prompt myuser
openstack role create myrole
openstack role add --project myproject --user myuser myrole
```

Verification. Unset the temporary environment variables OS_AUTH_URL and OS_PASSWORD:

```
source ~/.admin-openrc
unset OS_AUTH_URL OS_PASSWORD
```

Request a token for the admin user:

```
openstack --os-auth-url http://controller:5000/v3 \
--os-project-domain-name Default --os-user-domain-name Default \
--os-project-name admin --os-username admin token issue
```

Request a token for the myuser user:

```
openstack --os-auth-url http://controller:5000/v3 \
--os-project-domain-name Default --os-user-domain-name Default \
--os-project-name myproject --os-username myuser token issue
```
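In addition to the token checks, you can confirm that the Fernet key repositories created by the two `keystone-manage` setup commands above are in place; this assumes the default key repository paths (they differ if `key_repository` was overridden).

```shell
# Both directories should exist and contain numbered key files (0, 1, ...)
ls -l /etc/keystone/fernet-keys/
ls -l /etc/keystone/credential-keys/
```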
### Glance installation

Create the database, service credentials, and API endpoints.

Create the database:

```
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE glance;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
IDENTIFIED BY 'GLANCE_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
IDENTIFIED BY 'GLANCE_DBPASS';
MariaDB [(none)]> exit
```

Note: replace GLANCE_DBPASS with the password for the glance database.

Create the service credentials:

```
source ~/.admin-openrc

openstack user create --domain default --password-prompt glance
openstack role add --project service --user glance admin
openstack service create --name glance --description "OpenStack Image" image
```

Create the image service API endpoints:

```
openstack endpoint create --region RegionOne image public http://controller:9292
openstack endpoint create --region RegionOne image internal http://controller:9292
openstack endpoint create --region RegionOne image admin http://controller:9292
```

Install the packages:

```
yum install openstack-glance
```

Configure glance:

```
vim /etc/glance/glance-api.conf

[database]
connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = GLANCE_PASS

[paste_deploy]
flavor = keystone

[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
```

Explanation: the [database] section configures the database entry; the [keystone_authtoken] and [paste_deploy] sections configure the identity service entry; the [glance_store] section configures local file system storage and the image file location.

Note: replace GLANCE_DBPASS with the password of the glance database and GLANCE_PASS with the password of the glance user.

Synchronize the database:

```
su -s /bin/sh -c "glance-manage db_sync" glance
```

Start the service:

```
systemctl enable openstack-glance-api.service
systemctl start openstack-glance-api.service
```

Verification. Download an image:

```
source ~/.admin-openrc
wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
```

Note: if your environment is Kunpeng (aarch64), download the aarch64 version of the image; the image cirros-0.5.2-aarch64-disk.img has been tested.

Upload the image to the Image service:

```
openstack image create --disk-format qcow2 --container-format bare \
--file cirros-0.4.0-x86_64-disk.img --public cirros
```

Confirm the upload and check the image properties:

```
openstack image list
```

### Placement installation

Create the database, service credentials, and API endpoints.

Create the database. As the root user, access the database, create the placement database, and grant privileges:

```
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE placement;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' \
IDENTIFIED BY 'PLACEMENT_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' \
IDENTIFIED BY 'PLACEMENT_DBPASS';
MariaDB [(none)]> exit
```

Note: replace PLACEMENT_DBPASS with the password for the placement database.

```
source ~/.admin-openrc
```

Create the placement service credentials, the placement user, and add the admin role to the placement user. Create the Placement API service:

```
openstack user create --domain default --password-prompt placement
openstack role add --project service --user placement admin
openstack service create --name placement --description "Placement API" placement
```

Create the placement service API endpoints:

```
openstack endpoint create --region RegionOne placement public http://controller:8778
openstack endpoint create --region RegionOne placement internal http://controller:8778
openstack endpoint create --region RegionOne placement admin http://controller:8778
```

Installation and configuration. Install the packages:

```
yum install openstack-placement-api
```

Configure placement by editing /etc/placement/placement.conf: configure the database entry in the [placement_database] section and the identity service entry in the [api] and [keystone_authtoken] sections:

```
# vim /etc/placement/placement.conf

[placement_database]
# ...
connection = mysql+pymysql://placement:PLACEMENT_DBPASS@controller/placement

[api]
# ...
auth_strategy = keystone

[keystone_authtoken]
# ...
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = placement
password = PLACEMENT_PASS
```

Replace PLACEMENT_DBPASS with the password of the placement database and PLACEMENT_PASS with the password of the placement user.

Synchronize the database:

```
su -s /bin/sh -c "placement-manage db sync" placement
```

Restart the httpd service:

```
systemctl restart httpd
```

Verification. Run the status check:

```
source ~/.admin-openrc
placement-status upgrade check
```

Install osc-placement and list the available resource classes and traits:

```
yum install python3-osc-placement
openstack --os-placement-api-version 1.2 resource class list --sort-column name
openstack --os-placement-api-version 1.6 trait list --sort-column name
```
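With osc-placement installed, you can also query resource providers. This is a minimal optional check; the list is expected to be empty at this point because no compute node has registered with Placement yet (providers appear once the Nova compute service below is running).

```shell
source ~/.admin-openrc
# Should return an empty table for now; compute nodes show up here once nova-compute is up
openstack resource provider list
```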
### Nova installation

Create the database, service credentials, and API endpoints.

Create the databases:

```
mysql -u root -p    (CTL)

MariaDB [(none)]> CREATE DATABASE nova_api;
MariaDB [(none)]> CREATE DATABASE nova;
MariaDB [(none)]> CREATE DATABASE nova_cell0;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> exit
```

Note: replace NOVA_DBPASS with the password for the nova databases.

```
source ~/.admin-openrc    (CTL)
```

Create the nova service credentials:

```
openstack user create --domain default --password-prompt nova    (CTL)
openstack role add --project service --user nova admin    (CTL)
openstack service create --name nova --description "OpenStack Compute" compute    (CTL)
```

Create the nova API endpoints:

```
openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1    (CTL)
openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1    (CTL)
openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1    (CTL)
```

Install the packages:

```
yum install openstack-nova-api openstack-nova-conductor \    (CTL)
            openstack-nova-novncproxy openstack-nova-scheduler

yum install openstack-nova-compute    (CPT)
```

Note: on arm64 you also need to run:

```
yum install edk2-aarch64    (CPT)
```

Configure nova:

```
vim /etc/nova/nova.conf

[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
my_ip = 10.0.0.1
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver
compute_driver = libvirt.LibvirtDriver    (CPT)
instances_path = /var/lib/nova/instances/    (CPT)
lock_path = /var/lib/nova/tmp    (CPT)

[api_database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api    (CTL)

[database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova    (CTL)

[api]
auth_strategy = keystone

[keystone_authtoken]
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = NOVA_PASS

[vnc]
enabled = true
server_listen = $my_ip
server_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html    (CPT)

[libvirt]
virt_type = qemu    (CPT)
cpu_mode = custom    (CPT)
cpu_model = cortex-a72    (CPT)

[glance]
api_servers = http://controller:9292

[oslo_concurrency]
lock_path = /var/lib/nova/tmp    (CTL)

[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = PLACEMENT_PASS

[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
service_metadata_proxy = true    (CTL)
metadata_proxy_shared_secret = METADATA_SECRET    (CTL)
```

Explanation: the [DEFAULT] section enables the compute and metadata APIs, configures the RabbitMQ message queue entry, sets my_ip, and enables the neutron network service; the [api_database] and [database] sections configure the database entries; the [api] and [keystone_authtoken] sections configure the identity service entry; the [vnc] section enables and configures the remote console entry; the [glance] section configures the image service API address; the [oslo_concurrency] section configures the lock path; the [placement] section configures the placement service entry.

Note: replace RABBIT_PASS with the password of the openstack account in RabbitMQ; set my_ip to the management IP of the control node; replace NOVA_DBPASS with the password of the nova database; replace NOVA_PASS with the password of the nova user; replace PLACEMENT_PASS with the password of the placement user; replace NEUTRON_PASS with the password of the neutron user; replace METADATA_SECRET with a suitable metadata proxy secret.

Additionally, check whether virtual machine hardware acceleration is supported (x86 architecture):

```
egrep -c '(vmx|svm)' /proc/cpuinfo    (CPT)
```

If the return value is 0, hardware acceleration is not supported and libvirt must be configured to use QEMU instead of KVM:

```
vim /etc/nova/nova.conf    (CPT)

[libvirt]
virt_type = qemu
```

If the return value is 1 or greater, hardware acceleration is supported and no extra configuration is needed.

Note: on arm64 the following is also required:

```
vim /etc/libvirt/qemu.conf

nvram = ["/usr/share/AAVMF/AAVMF_CODE.fd:/usr/share/AAVMF/AAVMF_VARS.fd", \
         "/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw:/usr/share/edk2/aarch64/vars-template-pflash.raw"]

vim /etc/qemu/firmware/edk2-aarch64.json

{
    "description": "UEFI firmware for ARM64 virtual machines",
    "interface-types": [
        "uefi"
    ],
    "mapping": {
        "device": "flash",
        "executable": {
            "filename": "/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw",
            "format": "raw"
        },
        "nvram-template": {
            "filename": "/usr/share/edk2/aarch64/vars-template-pflash.raw",
            "format": "raw"
        }
    },
    "targets": [
        {
            "architecture": "aarch64",
            "machines": [
                "virt-*"
            ]
        }
    ],
    "features": [],
    "tags": []
}
    (CPT)
```

Synchronize the databases. Synchronize the nova-api database:

```
su -s /bin/sh -c "nova-manage api_db sync" nova    (CTL)
```

Register the cell0 database:

```
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova    (CTL)
```

Create the cell1 cell:

```
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova    (CTL)
```

Synchronize the nova database:

```
su -s /bin/sh -c "nova-manage db sync" nova    (CTL)
```

Verify that cell0 and cell1 are registered correctly:

```
su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova    (CTL)
```

Add the compute node to the OpenStack cluster:

```
su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova    (CPT)
```

Start the services:

```
systemctl enable \    (CTL)
    openstack-nova-api.service \
    openstack-nova-scheduler.service \
    openstack-nova-conductor.service \
    openstack-nova-novncproxy.service
systemctl start \    (CTL)
    openstack-nova-api.service \
    openstack-nova-scheduler.service \
    openstack-nova-conductor.service \
    openstack-nova-novncproxy.service

systemctl enable libvirtd.service openstack-nova-compute.service    (CPT)
systemctl start libvirtd.service openstack-nova-compute.service    (CPT)
```

Verification:

```
source ~/.admin-openrc    (CTL)
```

List the service components to verify that every process started and registered successfully:

```
openstack compute service list    (CTL)
```

List the API endpoints in the identity service to verify the connection to the identity service:

```
openstack catalog list    (CTL)
```

List the images in the image service to verify the connection to the image service:

```
openstack image list    (CTL)
```

Check whether the cells are working correctly and whether the other prerequisites are in place:

```
nova-status upgrade check    (CTL)
```
### Neutron installation

Create the database, service credentials, and API endpoints.

Create the database:

```
mysql -u root -p    (CTL)

MariaDB [(none)]> CREATE DATABASE neutron;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
IDENTIFIED BY 'NEUTRON_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
IDENTIFIED BY 'NEUTRON_DBPASS';
MariaDB [(none)]> exit
```

Note: replace NEUTRON_DBPASS with the password for the neutron database.

```
source ~/.admin-openrc    (CTL)
```

Create the neutron service credentials:

```
openstack user create --domain default --password-prompt neutron    (CTL)
openstack role add --project service --user neutron admin    (CTL)
openstack service create --name neutron --description "OpenStack Networking" network    (CTL)
```

Create the Neutron service API endpoints:

```
openstack endpoint create --region RegionOne network public http://controller:9696    (CTL)
openstack endpoint create --region RegionOne network internal http://controller:9696    (CTL)
openstack endpoint create --region RegionOne network admin http://controller:9696    (CTL)
```

Install the packages:

```
yum install openstack-neutron openstack-neutron-linuxbridge ebtables ipset \    (CTL)
            openstack-neutron-ml2

yum install openstack-neutron-linuxbridge ebtables ipset    (CPT)
```

Configure neutron. Main configuration:

```
vim /etc/neutron/neutron.conf

[database]
connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron    (CTL)

[DEFAULT]
core_plugin = ml2    (CTL)
service_plugins = router    (CTL)
allow_overlapping_ips = true    (CTL)
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = true    (CTL)
notify_nova_on_port_data_changes = true    (CTL)
api_workers = 3    (CTL)

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = neutron
password = NEUTRON_PASS

[nova]
auth_url = http://controller:5000    (CTL)
auth_type = password    (CTL)
project_domain_name = Default    (CTL)
user_domain_name = Default    (CTL)
region_name = RegionOne    (CTL)
project_name = service    (CTL)
username = nova    (CTL)
password = NOVA_PASS    (CTL)

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
```

Explanation: the [database] section configures the database entry; the [DEFAULT] section enables the ml2 and router plugins, allows overlapping IP addresses, and configures the RabbitMQ message queue entry; the [DEFAULT] and [keystone_authtoken] sections configure the identity service entry; the [DEFAULT] and [nova] sections configure notifying compute of network topology changes; the [oslo_concurrency] section configures the lock path.

Note: replace NEUTRON_DBPASS with the password of the neutron database; RABBIT_PASS with the password of the openstack account in RabbitMQ; NEUTRON_PASS with the password of the neutron user; NOVA_PASS with the password of the nova user.

Configure the ML2 plugin:

```
vim /etc/neutron/plugins/ml2/ml2_conf.ini

[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security

[ml2_type_flat]
flat_networks = provider

[ml2_type_vxlan]
vni_ranges = 1:1000

[securitygroup]
enable_ipset = true
```

Create the /etc/neutron/plugin.ini symbolic link:

```
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
```

Note: the [ml2] section enables flat, vlan, and vxlan networks, enables the linuxbridge and l2population mechanisms, and enables the port security extension driver; the [ml2_type_flat] section configures the flat network as the provider virtual network; the [ml2_type_vxlan] section configures the VXLAN network identifier range; the [securitygroup] section enables ipset.

Remark: the layer-2 details can be adapted to your needs; this document uses provider network + linuxbridge.

Configure the Linux bridge agent:

```
vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini

[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME

[vxlan]
enable_vxlan = true
local_ip = OVERLAY_INTERFACE_IP_ADDRESS
l2_population = true

[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
```

Explanation: the [linux_bridge] section maps the provider virtual network to the physical network interface; the [vxlan] section enables the vxlan overlay network, configures the IP address of the physical interface handling the overlay network, and enables layer-2 population; the [securitygroup] section enables security groups and configures the linux bridge iptables firewall driver.

Note: replace PROVIDER_INTERFACE_NAME with the physical network interface; replace OVERLAY_INTERFACE_IP_ADDRESS with the management IP of the control node.

Configure the Layer-3 agent:

```
vim /etc/neutron/l3_agent.ini    (CTL)

[DEFAULT]
interface_driver = linuxbridge
```

Explanation: in the [DEFAULT] section, set the interface driver to linuxbridge.

Configure the DHCP agent:

```
vim /etc/neutron/dhcp_agent.ini    (CTL)

[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
```

Explanation: the [DEFAULT] section configures the linuxbridge interface driver and the Dnsmasq DHCP driver, and enables isolated metadata.

Configure the metadata agent:

```
vim /etc/neutron/metadata_agent.ini    (CTL)

[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = METADATA_SECRET
```

Explanation: the [DEFAULT] section configures the metadata host and the shared secret.

Note: replace METADATA_SECRET with a suitable metadata proxy secret.

Configure nova:

```
vim /etc/nova/nova.conf

[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = Default
user_domain_name = Default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
service_metadata_proxy = true    (CTL)
metadata_proxy_shared_secret = METADATA_SECRET    (CTL)
```

Explanation: the [neutron] section configures the access parameters, enables the metadata proxy, and configures the secret.

Note: replace NEUTRON_PASS with the password of the neutron user; replace METADATA_SECRET with a suitable metadata proxy secret.

Synchronize the database:

```
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
--config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
```

Restart the compute API service:

```
systemctl restart openstack-nova-api.service
```

Start the networking services:

```
systemctl enable neutron-server.service neutron-linuxbridge-agent.service \    (CTL)
                 neutron-dhcp-agent.service neutron-metadata-agent.service
systemctl enable neutron-l3-agent.service
systemctl restart openstack-nova-api.service neutron-server.service \    (CTL)
                  neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
                  neutron-metadata-agent.service neutron-l3-agent.service

systemctl enable neutron-linuxbridge-agent.service    (CPT)
systemctl restart neutron-linuxbridge-agent.service openstack-nova-compute.service    (CPT)
```

Verification. Verify that the neutron agents started successfully:

```
openstack network agent list
```
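Once the agents report as up, a provider network matching the flat/linuxbridge configuration above can be created for an initial test. This is a hedged example using standard `openstack network`/`subnet create` options; the physical network name `provider` matches `flat_networks` and `physical_interface_mappings` above, while the subnet range, gateway, DNS server, and allocation pool are placeholders to adapt to your environment.

```shell
source ~/.admin-openrc    (CTL)

openstack network create --share --external \
    --provider-physical-network provider \
    --provider-network-type flat provider    (CTL)

openstack subnet create --network provider \
    --allocation-pool start=203.0.113.101,end=203.0.113.200 \
    --dns-nameserver 8.8.4.4 --gateway 203.0.113.1 \
    --subnet-range 203.0.113.0/24 provider-subnet    (CTL)
```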
### Cinder installation

Create the database, service credentials, and API endpoints.

Create the database:

```
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE cinder;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \
IDENTIFIED BY 'CINDER_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \
IDENTIFIED BY 'CINDER_DBPASS';
MariaDB [(none)]> exit
```

Note: replace CINDER_DBPASS with the password for the cinder database.

```
source ~/.admin-openrc
```

Create the cinder service credentials:

```
openstack user create --domain default --password-prompt cinder
openstack role add --project service --user cinder admin
openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
```

Create the block storage service API endpoints:

```
openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s
```

Install the packages:

```
yum install openstack-cinder-api openstack-cinder-scheduler    (CTL)

yum install lvm2 device-mapper-persistent-data scsi-target-utils rpcbind nfs-utils \    (STG)
            openstack-cinder-volume openstack-cinder-backup
```

Prepare the storage devices (the following is only an example):

```
pvcreate /dev/vdb
vgcreate cinder-volumes /dev/vdb

vim /etc/lvm/lvm.conf

devices {
...
filter = [ "a/vdb/", "r/.*/"]
```

Explanation: in the devices section, add a filter that accepts the /dev/vdb device and rejects all other devices.

Prepare NFS:

```
mkdir -p /root/cinder/backup

cat << EOF >> /etc/exports
/root/cinder/backup 192.168.1.0/24(rw,sync,no_root_squash,no_all_squash)
EOF
```

Configure cinder:

```
vim /etc/cinder/cinder.conf

[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone
my_ip = 10.0.0.11
enabled_backends = lvm    (STG)
backup_driver = cinder.backup.drivers.nfs.NFSBackupDriver    (STG)
backup_share = HOST:PATH    (STG)

[database]
connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = cinder
password = CINDER_PASS

[oslo_concurrency]
lock_path = /var/lib/cinder/tmp

[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver    (STG)
volume_group = cinder-volumes    (STG)
iscsi_protocol = iscsi    (STG)
iscsi_helper = tgtadm    (STG)
```

Explanation: the [database] section configures the database entry; the [DEFAULT] section configures the RabbitMQ message queue entry and my_ip; the [DEFAULT] and [keystone_authtoken] sections configure the identity service entry; the [oslo_concurrency] section configures the lock path.

Note: replace CINDER_DBPASS with the password of the cinder database; RABBIT_PASS with the password of the openstack account in RabbitMQ; set my_ip to the management IP of the control node; replace CINDER_PASS with the password of the cinder user; replace HOST:PATH with the NFS host IP and shared path.

Synchronize the database:

```
su -s /bin/sh -c "cinder-manage db sync" cinder    (CTL)
```

Configure nova:

```
vim /etc/nova/nova.conf    (CTL)

[cinder]
os_region_name = RegionOne
```

Restart the compute API service:

```
systemctl restart openstack-nova-api.service
```

Start the cinder services:

```
systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service    (CTL)
systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service    (CTL)

systemctl enable rpcbind.service nfs-server.service tgtd.service iscsid.service \    (STG)
                 openstack-cinder-volume.service \
                 openstack-cinder-backup.service
systemctl start rpcbind.service nfs-server.service tgtd.service iscsid.service \    (STG)
                openstack-cinder-volume.service \
                openstack-cinder-backup.service
```

Note: when cinder attaches volumes via tgtadm, modify /etc/tgt/tgtd.conf with the following content so that tgtd can discover the iscsi targets of cinder-volume:

```
include /var/lib/cinder/volumes/*
```

Verification:

```
source ~/.admin-openrc
openstack volume service list
```
### Horizon installation

Install the packages:

```
yum install openstack-dashboard
```

Modify the variables in the configuration file:

```
vim /etc/openstack-dashboard/local_settings

OPENSTACK_HOST = "controller"
ALLOWED_HOSTS = ['*', ]

SESSION_ENGINE = 'django.contrib.sessions.backends.cache'

CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'controller:11211',
    }
}

OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "member"

WEBROOT = '/dashboard'
POLICY_FILES_PATH = "/etc/openstack-dashboard"

OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 3,
}
```

Restart the httpd service:

```
systemctl restart httpd.service memcached.service
```

Verification: open a browser, enter the URL http://HOSTIP/dashboard/, and log in to horizon.

Note: replace HOSTIP with the management plane IP of the control node.

### Tempest installation

Tempest is OpenStack's integration test service. It is recommended if you need comprehensive automated functional testing of the installed OpenStack environment; otherwise it does not have to be installed.

Install Tempest:

```
yum install openstack-tempest
```

Initialize a directory:

```
tempest init mytest
```

Modify the configuration file:

```
cd mytest
vi etc/tempest.conf
```

tempest.conf must describe the current OpenStack environment; refer to the official sample for details.

Run the tests:

```
tempest run
```

Install tempest extensions (optional). The OpenStack services themselves also provide tempest test packages that can be installed to enrich the test content. In Wallaby we provide extension tests for Cinder, Glance, Keystone, Ironic, and Trove, which can be installed with:

```
yum install python3-cinder-tempest-plugin python3-glance-tempest-plugin python3-ironic-tempest-plugin python3-keystone-tempest-plugin python3-trove-tempest-plugin
```

### Ironic installation

Ironic is OpenStack's bare metal service. It is recommended if you need to provision bare metal machines; otherwise it does not have to be installed.

Set up the database. The bare metal service stores its information in a database. Create an ironic database accessible by an ironic user, replacing IRONIC_DBPASSWORD with a suitable password:

```
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE ironic CHARACTER SET utf8;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'localhost' \
IDENTIFIED BY 'IRONIC_DBPASSWORD';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'%' \
IDENTIFIED BY 'IRONIC_DBPASSWORD';
```

Create the service user authentication.

1. Create the Bare Metal service users:

```
openstack user create --password IRONIC_PASSWORD \
    --email ironic@example.com ironic
openstack role add --project service --user ironic admin
openstack service create --name ironic --description "Ironic baremetal provisioning service" baremetal

openstack service create --name ironic-inspector --description "Ironic inspector baremetal provisioning service" baremetal-introspection
openstack user create --password IRONIC_INSPECTOR_PASSWORD \
    --email ironic_inspector@example.com ironic_inspector
openstack role add --project service --user ironic_inspector admin
```

2. Create the Bare Metal service endpoints:

```
openstack endpoint create --region RegionOne baremetal admin http://$IRONIC_NODE:6385
openstack endpoint create --region RegionOne baremetal public http://$IRONIC_NODE:6385
openstack endpoint create --region RegionOne baremetal internal http://$IRONIC_NODE:6385
openstack endpoint create --region RegionOne baremetal-introspection internal http://172.20.19.13:5050/v1
openstack endpoint create --region RegionOne baremetal-introspection public http://172.20.19.13:5050/v1
openstack endpoint create --region RegionOne baremetal-introspection admin http://172.20.19.13:5050/v1
```

Configure the ironic-api service. The configuration file path is /etc/ironic/ironic.conf.

1. Configure the location of the database via the `connection` option, replacing IRONIC_DBPASSWORD with the password of the ironic user and DB_IP with the IP address of the DB server:

```
[database]

# The SQLAlchemy connection string used to connect to the
# database (string value)
connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic
```

2. Configure the ironic-api service to use the RabbitMQ message broker with the following option, replacing RPC_* with the RabbitMQ address details and credentials:

```
[DEFAULT]

# A URL representing the messaging driver to use and its full
# configuration. (string value)
transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
```

Users can also replace rabbitmq with the json-rpc mechanism.

3. Configure the ironic-api service with the identity service credentials, replacing PUBLIC_IDENTITY_IP with the public IP of the identity server, PRIVATE_IDENTITY_IP with its private IP, and IRONIC_PASSWORD with the password of the ironic user in the identity service:

```
[DEFAULT]

# Authentication strategy used by ironic-api: one of
# "keystone" or "noauth". "noauth" should not be used in a
# production environment because all authentication will be
# disabled. (string value)
auth_strategy=keystone

host = controller
memcache_servers = controller:11211
enabled_network_interfaces = flat,noop,neutron
default_network_interface = noop
transport_url = rabbit://openstack:RABBITPASSWD@controller:5672/
enabled_hardware_types = ipmi
enabled_boot_interfaces = pxe
enabled_deploy_interfaces = direct
default_deploy_interface = direct
enabled_inspect_interfaces = inspector
enabled_management_interfaces = ipmitool
enabled_power_interfaces = ipmitool
enabled_rescue_interfaces = no-rescue,agent
isolinux_bin = /usr/share/syslinux/isolinux.bin
logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s

[keystone_authtoken]
# Authentication type to load (string value)
auth_type=password
# Complete public Identity API endpoint (string value)
www_authenticate_uri=http://PUBLIC_IDENTITY_IP:5000
# Complete admin Identity API endpoint. (string value)
auth_url=http://PRIVATE_IDENTITY_IP:5000
# Service username. (string value)
username=ironic
# Service account password. (string value)
password=IRONIC_PASSWORD
# Service tenant name. (string value)
project_name=service
# Domain name containing project (string value)
project_domain_name=Default
# User's domain name (string value)
user_domain_name=Default
```
```
[agent]
deploy_logs_collect = always
deploy_logs_local_path = /var/log/ironic/deploy
deploy_logs_storage_backend = local
image_download_source = http
stream_raw_images = false
force_raw_images = false
verify_ca = False

[oslo_concurrency]

[oslo_messaging_notifications]
transport_url = rabbit://openstack:123456@172.20.19.25:5672/
topics = notifications
driver = messagingv2

[oslo_messaging_rabbit]
amqp_durable_queues = True
rabbit_ha_queues = True

[pxe]
ipxe_enabled = false
pxe_append_params = nofb nomodeset vga=normal coreos.autologin ipa-insecure=1
image_cache_size = 204800
tftp_root=/var/lib/tftpboot/cephfs/
tftp_master_path=/var/lib/tftpboot/cephfs/master_images

[dhcp]
dhcp_provider = none
```

4. Create the bare metal service database tables:

```
ironic-dbsync --config-file /etc/ironic/ironic.conf create_schema
```

5. Restart the ironic-api service:

```
sudo systemctl restart openstack-ironic-api
```

Configure the ironic-conductor service.

1. Replace HOST_IP with the IP of the conductor host:

```
[DEFAULT]

# IP address of this host. If unset, will determine the IP
# programmatically. If unable to do so, will use "127.0.0.1".
# (string value)
my_ip=HOST_IP
```

2. Configure the location of the database; ironic-conductor should use the same configuration as ironic-api. Replace IRONIC_DBPASSWORD with the password of the ironic user and DB_IP with the IP address of the DB server:

```
[database]

# The SQLAlchemy connection string to use to connect to the
# database. (string value)
connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic
```

3. Configure the service to use the RabbitMQ message broker with the following option; ironic-conductor should use the same configuration as ironic-api. Replace RPC_* with the RabbitMQ address details and credentials:

```
[DEFAULT]

# A URL representing the messaging driver to use and its full
# configuration. (string value)
transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
```

Users can also replace rabbitmq with the json-rpc mechanism.

4. Configure credentials for accessing other OpenStack services.

To communicate with other OpenStack services, the bare metal service needs to authenticate with the OpenStack Identity service using service user credentials when making requests. These credentials must be configured in each configuration section associated with the corresponding service:

- [neutron] - access the OpenStack networking service
- [glance] - access the OpenStack image service
- [swift] - access the OpenStack object storage service
- [cinder] - access the OpenStack block storage service
- [inspector] - access the OpenStack bare metal introspection service
- [service_catalog] - a special entry that holds the credentials the bare metal service uses to discover its own API URL endpoint as registered in the OpenStack identity service catalog

For simplicity, the same service user can be used for all services. For backward compatibility, it should be the same user as configured in the ironic-api service's [keystone_authtoken]. This is not mandatory; a different service user can be created and configured for each service.

In the following example, the authentication information used to access the OpenStack networking service is configured so that:

- the networking service is deployed in the identity service region named RegionOne, with only the public endpoint interface registered in the service catalog;
- requests use a specific CA SSL certificate for HTTPS connections;
- the same service user as the ironic-api service is used;
- the dynamic password authentication plugin discovers a suitable identity service API version based on the other options.

```
[neutron]

# Authentication type to load (string value)
auth_type = password
# Authentication URL (string value)
auth_url=https://IDENTITY_IP:5000/
# Username (string value)
username=ironic
# User's password (string value)
password=IRONIC_PASSWORD
# Project name to scope to (string value)
project_name=service
# Domain ID containing project (string value)
project_domain_id=default
# User's domain id (string value)
user_domain_id=default
# PEM encoded Certificate Authority to use when verifying
# HTTPs connections. (string value)
cafile=/opt/stack/data/ca-bundle.pem
# The default region_name for endpoint URL discovery. (string
# value)
region_name = RegionOne
# List of interfaces, in order of preference, for endpoint
# URL. (list value)
valid_interfaces=public
```
(list value) valid_interfaces=public \u9ed8\u8ba4\u60c5\u51b5\u4e0b\uff0c\u4e3a\u4e86\u4e0e\u5176\u4ed6\u670d\u52a1\u8fdb\u884c\u901a\u4fe1\uff0c\u88f8\u91d1\u5c5e\u670d\u52a1\u4f1a\u5c1d\u8bd5\u901a\u8fc7\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u7684\u670d\u52a1\u76ee\u5f55\u53d1\u73b0\u8be5\u670d\u52a1\u5408\u9002\u7684\u7aef\u70b9\u3002\u5982\u679c\u5e0c\u671b\u5bf9\u4e00\u4e2a\u7279\u5b9a\u670d\u52a1\u4f7f\u7528\u4e00\u4e2a\u4e0d\u540c\u7684\u7aef\u70b9\uff0c\u5219\u5728\u88f8\u91d1\u5c5e\u670d\u52a1\u7684\u914d\u7f6e\u6587\u4ef6\u4e2d\u901a\u8fc7endpoint_override\u9009\u9879\u8fdb\u884c\u6307\u5b9a\uff1a [neutron] ... endpoint_override = 5\u3001\u914d\u7f6e\u5141\u8bb8\u7684\u9a71\u52a8\u7a0b\u5e8f\u548c\u786c\u4ef6\u7c7b\u578b \u901a\u8fc7\u8bbe\u7f6eenabled_hardware_types\u8bbe\u7f6eironic-conductor\u670d\u52a1\u5141\u8bb8\u4f7f\u7528\u7684\u786c\u4ef6\u7c7b\u578b\uff1a [DEFAULT] enabled_hardware_types = ipmi \u914d\u7f6e\u786c\u4ef6\u63a5\u53e3\uff1a enabled_boot_interfaces = pxe enabled_deploy_interfaces = direct,iscsi enabled_inspect_interfaces = inspector enabled_management_interfaces = ipmitool enabled_power_interfaces = ipmitool \u914d\u7f6e\u63a5\u53e3\u9ed8\u8ba4\u503c\uff1a [DEFAULT] default_deploy_interface = direct default_network_interface = neutron \u5982\u679c\u542f\u7528\u4e86\u4efb\u4f55\u4f7f\u7528Direct deploy\u7684\u9a71\u52a8\uff0c\u5fc5\u987b\u5b89\u88c5\u548c\u914d\u7f6e\u955c\u50cf\u670d\u52a1\u7684Swift\u540e\u7aef\u3002Ceph\u5bf9\u8c61\u7f51\u5173(RADOS\u7f51\u5173)\u4e5f\u652f\u6301\u4f5c\u4e3a\u955c\u50cf\u670d\u52a1\u7684\u540e\u7aef\u3002 6\u3001\u91cd\u542fironic-conductor\u670d\u52a1 sudo systemctl restart openstack-ironic-conductor \u914d\u7f6eironic-inspector\u670d\u52a1 \u914d\u7f6e\u6587\u4ef6\u8def\u5f84/etc/ironic-inspector/inspector.conf 1\u3001\u521b\u5efa\u6570\u636e\u5e93 # mysql -u root -p MariaDB [(none)]> CREATE DATABASE ironic_inspector CHARACTER SET utf8; MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic_inspector.* TO 'ironic_inspector'@'localhost' \\ IDENTIFIED BY 'IRONIC_INSPECTOR_DBPASSWORD'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic_inspector.* TO 'ironic_inspector'@'%' \\ IDENTIFIED BY 'IRONIC_INSPECTOR_DBPASSWORD'; 2\u3001\u901a\u8fc7 connection \u9009\u9879\u914d\u7f6e\u6570\u636e\u5e93\u7684\u4f4d\u7f6e\uff0c\u5982\u4e0b\u6240\u793a\uff0c\u66ff\u6362 IRONIC_INSPECTOR_DBPASSWORD \u4e3a ironic_inspector \u7528\u6237\u7684\u5bc6\u7801\uff0c\u66ff\u6362 DB_IP \u4e3aDB\u670d\u52a1\u5668\u6240\u5728\u7684IP\u5730\u5740\uff1a [database] backend = sqlalchemy connection = mysql+pymysql://ironic_inspector:IRONIC_INSPECTOR_DBPASSWORD@DB_IP/ironic_inspector min_pool_size = 100 max_pool_size = 500 pool_timeout = 30 max_retries = 5 max_overflow = 200 db_retry_interval = 2 db_inc_retry_interval = True db_max_retry_interval = 2 db_max_retries = 5 3\u3001\u914d\u7f6e\u6d88\u606f\u5ea6\u5217\u901a\u4fe1\u5730\u5740 [DEFAULT] transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/ 4\u3001\u8bbe\u7f6ekeystone\u8ba4\u8bc1 [DEFAULT] auth_strategy = keystone timeout = 900 rootwrap_config = /etc/ironic-inspector/rootwrap.conf logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_dir = /var/log/ironic-inspector state_path = /var/lib/ironic-inspector use_stderr = False [ironic] api_endpoint = http://IRONIC_API_HOST_ADDRRESS:6385 auth_type = password auth_url = http://PUBLIC_IDENTITY_IP:5000 auth_strategy = keystone 
### Configuring the ironic-inspector service

The configuration file is /etc/ironic-inspector/inspector.conf.

1. Create the database

```
# mysql -u root -p

MariaDB [(none)]> CREATE DATABASE ironic_inspector CHARACTER SET utf8;

MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic_inspector.* TO 'ironic_inspector'@'localhost' \
  IDENTIFIED BY 'IRONIC_INSPECTOR_DBPASSWORD';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic_inspector.* TO 'ironic_inspector'@'%' \
  IDENTIFIED BY 'IRONIC_INSPECTOR_DBPASSWORD';
```

2. Configure the location of the database with the connection option, as shown below. Replace IRONIC_INSPECTOR_DBPASSWORD with the password of the ironic_inspector user and DB_IP with the IP address of the database server:

```
[database]
backend = sqlalchemy
connection = mysql+pymysql://ironic_inspector:IRONIC_INSPECTOR_DBPASSWORD@DB_IP/ironic_inspector
min_pool_size = 100
max_pool_size = 500
pool_timeout = 30
max_retries = 5
max_overflow = 200
db_retry_interval = 2
db_inc_retry_interval = True
db_max_retry_interval = 2
db_max_retries = 5
```

3. Configure the message queue transport URL

```
[DEFAULT]
transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
```

4. Configure Keystone authentication

```
[DEFAULT]
auth_strategy = keystone
timeout = 900
rootwrap_config = /etc/ironic-inspector/rootwrap.conf
logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s
log_dir = /var/log/ironic-inspector
state_path = /var/lib/ironic-inspector
use_stderr = False

[ironic]
api_endpoint = http://IRONIC_API_HOST_ADDRESS:6385
auth_type = password
auth_url = http://PUBLIC_IDENTITY_IP:5000
auth_strategy = keystone
ironic_url = http://IRONIC_API_HOST_ADDRESS:6385
os_region = RegionOne
project_name = service
project_domain_name = Default
user_domain_name = Default
username = IRONIC_SERVICE_USER_NAME
password = IRONIC_SERVICE_USER_PASSWORD

[keystone_authtoken]
auth_type = password
auth_url = http://control:5000
www_authenticate_uri = http://control:5000
project_domain_name = default
user_domain_name = default
project_name = service
username = ironic_inspector
password = IRONICPASSWD
region_name = RegionOne
memcache_servers = control:11211
token_cache_time = 300

[processing]
add_ports = active
processing_hooks = $default_processing_hooks,local_link_connection,lldp_basic
ramdisk_logs_dir = /var/log/ironic-inspector/ramdisk
always_store_ramdisk_logs = true
store_data = none
power_off = false

[pxe_filter]
driver = iptables

[capabilities]
boot_mode=True
```

5. Configure the ironic-inspector dnsmasq service

```
# Configuration file: /etc/ironic-inspector/dnsmasq.conf
port=0
interface=enp3s0                          # replace with the actual listening interface
dhcp-range=172.20.19.100,172.20.19.110    # replace with the actual DHCP address range
bind-interfaces
enable-tftp

dhcp-match=set:efi,option:client-arch,7
dhcp-match=set:efi,option:client-arch,9
dhcp-match=aarch64, option:client-arch,11
dhcp-boot=tag:aarch64,grubaa64.efi
dhcp-boot=tag:!aarch64,tag:efi,grubx64.efi
dhcp-boot=tag:!aarch64,tag:!efi,pxelinux.0

tftp-root=/tftpboot                       # replace with the actual tftpboot directory
log-facility=/var/log/dnsmasq.log
```

6. Disable DHCP on the subnet of the ironic provisioning network

```shell
openstack subnet set --no-dhcp 72426e89-f552-4dc4-9ac7-c4e131ce7f3c
```

7. Initialize the database of the ironic-inspector service

Run on the controller node:

```shell
ironic-inspector-dbsync --config-file /etc/ironic-inspector/inspector.conf upgrade
```

8. Start the services

```shell
systemctl enable --now openstack-ironic-inspector.service
systemctl enable --now openstack-ironic-inspector-dnsmasq.service
```

### 6. Configuring the httpd service

Create the httpd root directory used by ironic and set its owner and group. The directory path must match the path specified by the http_root option in the [deploy] section of /etc/ironic/ironic.conf.

```shell
mkdir -p /var/lib/ironic/httproot
chown ironic.ironic /var/lib/ironic/httproot
```

Install and configure the httpd service.

Install the httpd service (skip this if it is already installed):

```shell
yum install httpd -y
```

Create the file /etc/httpd/conf.d/openstack-ironic-httpd.conf with the following content:

```
Listen 8080

ServerName ironic.openeuler.com

ErrorLog "/var/log/httpd/openstack-ironic-httpd-error_log"
CustomLog "/var/log/httpd/openstack-ironic-httpd-access_log" "%h %l %u %t \"%r\" %>s %b"

DocumentRoot "/var/lib/ironic/httproot"

Options Indexes FollowSymLinks
Require all granted

LogLevel warn
AddDefaultCharset UTF-8
EnableSendfile on
```

Note that the listening port must match the port specified by the http_url option in the [deploy] section of /etc/ironic/ironic.conf.

Restart the httpd service:

```shell
systemctl restart httpd
```
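A quick way to confirm that the deploy HTTP server is reachable on the port referenced by http_url is to serve a throwaway file and fetch it back. This is only an illustrative check; the file name and the controller address below are placeholders:

```shell
# place a test file in the ironic http_root and fetch it through httpd
echo ok > /var/lib/ironic/httproot/healthcheck
curl http://<controller-ip>:8080/healthcheck   # expected output: ok
rm -f /var/lib/ironic/httproot/healthcheck
```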
### 7. Building the deploy ramdisk image

The Wallaby ramdisk image can be built with the ironic-python-agent service or the disk-image-builder tool, or with the latest community ironic-python-agent-builder. You can also choose other tools on your own.

If you use the native Wallaby tools, install the corresponding packages first:

```shell
yum install openstack-ironic-python-agent
```

or

```shell
yum install diskimage-builder
```

See the official documentation for detailed usage.

The following describes the complete process of building the deploy image used by ironic with ironic-python-agent-builder.

Install ironic-python-agent-builder

1. Install the tool:

```shell
pip install ironic-python-agent-builder
```

2. Modify the python interpreter in the following files:

```shell
/usr/bin/yum
/usr/libexec/urlgrabber-ext-down
```

3. Install the other required tools:

```shell
yum install git
```

Since `DIB` depends on the `semanage` command, make sure it is available before building the image: `semanage --help`. If the command is not found, install it:

```shell
# first find out which package provides it
[root@localhost ~]# yum provides /usr/sbin/semanage
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirror.vcu.edu
 * extras: mirror.vcu.edu
 * updates: mirror.math.princeton.edu
policycoreutils-python-2.5-34.el7.aarch64 : SELinux policy core python utilities
Repo        : base
Matched from:
Filename    : /usr/sbin/semanage

# install it
[root@localhost ~]# yum install policycoreutils-python
```

Build the image

If the architecture is `arm`, you also need to add:

```shell
export ARCH=aarch64
```

Basic usage:

```shell
usage: ironic-python-agent-builder [-h] [-r RELEASE] [-o OUTPUT] [-e ELEMENT]
                                   [-b BRANCH] [-v] [--extra-args EXTRA_ARGS]
                                   distribution

positional arguments:
  distribution          Distribution to use

optional arguments:
  -h, --help            show this help message and exit
  -r RELEASE, --release RELEASE
                        Distribution release to use
  -o OUTPUT, --output OUTPUT
                        Output base file name
  -e ELEMENT, --element ELEMENT
                        Additional DIB element to use
  -b BRANCH, --branch BRANCH
                        If set, override the branch that is used for ironic-
                        python-agent and requirements
  -v, --verbose         Enable verbose logging in diskimage-builder
  --extra-args EXTRA_ARGS
                        Extra arguments to pass to diskimage-builder
```

For example:

```shell
ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky
```

Allow SSH login

Initialize the environment variables, then build the image:

```shell
export DIB_DEV_USER_USERNAME=ipa
export DIB_DEV_USER_PWDLESS_SUDO=yes
export DIB_DEV_USER_PASSWORD='123'
ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky -e selinux-permissive -e devuser
```
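Once a deploy image has been built, it is typically registered in the Image service so that Ironic nodes can reference it. The sketch below assumes the builder produced /mnt/ironic-agent-ssh.kernel and /mnt/ironic-agent-ssh.initramfs (the usual naming for the -o prefix used above) and that `~/.admin-openrc` has been sourced; adjust the file names to whatever your build actually generated:

```shell
source ~/.admin-openrc

# register the deploy kernel and ramdisk in Glance (image names are examples)
openstack image create deploy-kernel --public \
    --disk-format aki --container-format aki \
    --file /mnt/ironic-agent-ssh.kernel
openstack image create deploy-initrd --public \
    --disk-format ari --container-format ari \
    --file /mnt/ironic-agent-ssh.initramfs
```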
Specify a code repository

Initialize the corresponding environment variables, then build the image:

```shell
# specify the repository location and version
DIB_REPOLOCATION_ironic_python_agent=git@172.20.2.149:liuzz/ironic-python-agent.git
DIB_REPOREF_ironic_python_agent=origin/develop

# clone the code directly from gerrit
DIB_REPOLOCATION_ironic_python_agent=https://review.opendev.org/openstack/ironic-python-agent
DIB_REPOREF_ironic_python_agent=refs/changes/43/701043/1
```

Reference: [source-repositories](https://docs.openstack.org/diskimage-builder/latest/elements/source-repositories/README.html).

Specifying the repository location and version has been verified to work.

Notes

The PXE configuration file template in upstream OpenStack does not support the arm64 architecture, so you need to modify the upstream OpenStack code yourself:

In the Wallaby release, the community Ironic still does not support UEFI PXE boot on arm64. The symptom is that the generated grub.cfg file (usually located under /tftpboot/) has the wrong format, so PXE boot fails. The incorrectly generated configuration file looks like this:

![ironic-err](../../img/install/ironic-err.png)

As shown above, on the arm architecture the commands that load the vmlinux and ramdisk images are linux and initrd respectively, while the highlighted commands in the image are the x86 UEFI PXE boot ones. You need to modify the code that generates grub.cfg yourself.

TLS errors when ironic sends command status query requests to IPA:

In the Wallaby release both IPA and ironic send requests to each other with TLS enabled by default. Disable it as described in the upstream documentation:

1. In the ironic configuration file (/etc/ironic/ironic.conf), add ipa-insecure=1 to the following configuration:

```
[agent]
verify_ca = False

[pxe]
pxe_append_params = nofb nomodeset vga=normal coreos.autologin ipa-insecure=1
```

2. In the ramdisk image, add the IPA configuration file /etc/ironic_python_agent/ironic_python_agent.conf (the /etc/ironic_python_agent directory must be created first) and configure TLS as follows:

```
[DEFAULT]
enable_auto_tls = False
```

Set the permissions:

```
chown -R ipa.ipa /etc/ironic_python_agent/
```
3. Modify the service unit file of the IPA service to add the configuration file option:

```shell
vim /usr/lib/systemd/system/ironic-python-agent.service
```

```
[Unit]
Description=Ironic Python Agent
After=network-online.target

[Service]
ExecStartPre=/sbin/modprobe vfat
ExecStart=/usr/local/bin/ironic-python-agent --config-file /etc/ironic_python_agent/ironic_python_agent.conf
Restart=always
RestartSec=30s

[Install]
WantedBy=multi-user.target
```

## Kolla Installation

Kolla provides production-ready containerized deployment for OpenStack services. Kolla and Kolla-ansible were introduced in openEuler 22.03 LTS.

Installing Kolla is very simple; just install the corresponding RPM packages:

```shell
yum install openstack-kolla openstack-kolla-ansible
```

After installation, commands such as kolla-ansible, kolla-build, kolla-genpwd and kolla-mergepwd are available.
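As a quick orientation, a first run with these commands typically looks roughly like the sketch below. This is only an illustrative sequence, assuming the inventory and configuration paths usually shipped with kolla-ansible (/usr/share/kolla-ansible/ansible/inventory/all-in-one and /etc/kolla/globals.yml); the actual paths on openEuler may differ, so consult the Kolla-ansible documentation for the full workflow:

```shell
# generate random service passwords into /etc/kolla/passwords.yml
kolla-genpwd

# prepare the target host, then deploy (all-in-one example)
kolla-ansible -i /usr/share/kolla-ansible/ansible/inventory/all-in-one bootstrap-servers
kolla-ansible -i /usr/share/kolla-ansible/ansible/inventory/all-in-one deploy
```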
## Trove Installation

Trove is the Database service of OpenStack. It is recommended if you want the database service provided by OpenStack; otherwise it does not need to be installed.

1. Set up the database

The Database service stores information in a database. Create a trove database that the trove user can access, replacing TROVE_DBPASSWORD with a suitable password:

```shell
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE trove CHARACTER SET utf8;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'localhost' \
  IDENTIFIED BY 'TROVE_DBPASSWORD';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'%' \
  IDENTIFIED BY 'TROVE_DBPASSWORD';
```

2. Create the service user credentials

1) Create the Trove service user:

```shell
openstack user create --password TROVE_PASSWORD \
  --email trove@example.com trove
openstack role add --project service --user trove admin
openstack service create --name trove --description "Database service" database
```

Explanation: replace TROVE_PASSWORD with the password of the trove user.

2) Create the Database service endpoints:

```shell
openstack endpoint create --region RegionOne database public http://controller:8779/v1.0/%\(tenant_id\)s
openstack endpoint create --region RegionOne database internal http://controller:8779/v1.0/%\(tenant_id\)s
openstack endpoint create --region RegionOne database admin http://controller:8779/v1.0/%\(tenant_id\)s
```

3. Install and configure the Trove components

1) Install the Trove packages:

```shell
yum install openstack-trove python-troveclient
```

2) Configure trove.conf:

```shell
vim /etc/trove/trove.conf
```

```
[DEFAULT]
bind_host=TROVE_NODE_IP
log_dir = /var/log/trove
network_driver = trove.network.neutron.NeutronDriver
management_security_groups =
nova_keypair = trove-mgmt
default_datastore = mysql
taskmanager_manager = trove.taskmanager.manager.Manager
trove_api_workers = 5
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
reboot_time_out = 300
usage_timeout = 900
agent_call_high_timeout = 1200
use_syslog = False
debug = True

# Set these if using Neutron Networking
network_driver=trove.network.neutron.NeutronDriver
network_label_regex=.*
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/

[database]
connection = mysql+pymysql://trove:TROVE_DBPASS@controller/trove

[keystone_authtoken]
project_domain_name = Default
project_name = service
user_domain_name = Default
password = trove
username = trove
auth_url = http://controller:5000/v3/
auth_type = password

[service_credentials]
auth_url = http://controller:5000/v3/
region_name = RegionOne
project_name = service
password = trove
project_domain_name = Default
user_domain_name = Default
username = trove

[mariadb]
tcp_ports = 3306,4444,4567,4568

[mysql]
tcp_ports = 3306

[postgresql]
tcp_ports = 5432
```

Explanation:

- In the [DEFAULT] section, bind_host is the IP of the node where Trove is deployed.
- nova_compute_url and cinder_url are the endpoints created for Nova and Cinder in Keystone.
- nova_proxy_XXX is the information of a user that can access the Nova service; the example above uses the admin user.
- transport_url is the RabbitMQ connection information; replace RABBIT_PASS with the RabbitMQ password.
- The connection option in the [database] section is the database created for Trove in MySQL above.
- In the Trove user information, replace TROVE_PASS with the actual password of the trove user.

3) Configure trove-guestagent.conf:

```shell
vim /etc/trove/trove-guestagent.conf
```

```
[DEFAULT]
log_file = trove-guestagent.log
log_dir = /var/log/trove/
ignore_users = os_admin
control_exchange = trove
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
rpc_backend = rabbit
command_process_timeout = 60
use_syslog = False
debug = True

[service_credentials]
auth_url = http://controller:5000/v3/
region_name = RegionOne
project_name = service
password = TROVE_PASS
project_domain_name = Default
user_domain_name = Default
username = trove

[mysql]
docker_image = your-registry/your-repo/mysql
backup_docker_image = your-registry/your-repo/db-backup-mysql:1.1.0
```

Explanation:

- guestagent is a separate Trove component that must be built in advance into the virtual machine image that Trove creates through Nova. After a database instance is created, the guestagent process starts and reports heartbeats to Trove through the message queue (RabbitMQ), so the RabbitMQ user and password must be configured here.
- Starting from the Victoria release, Trove uses a single unified image to run different database types; the database service runs in a Docker container inside the guest virtual machine.
- transport_url is the RabbitMQ connection information; replace RABBIT_PASS with the RabbitMQ password.
- In the Trove user information, replace TROVE_PASS with the actual password of the trove user.

4) Generate the Trove database tables:

```shell
su -s /bin/sh -c "trove-manage db_sync" trove
```

4. Complete the installation and configuration

Configure the Trove services to start on boot:

```shell
systemctl enable openstack-trove-api.service \
  openstack-trove-taskmanager.service \
  openstack-trove-conductor.service
```

Start the services:

```shell
systemctl start openstack-trove-api.service \
  openstack-trove-taskmanager.service \
  openstack-trove-conductor.service
```
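To check that the Trove API is responding, a quick smoke test with the python-troveclient plugin installed above can be used. This is only an illustrative check; it assumes `~/.admin-openrc` has been sourced, and a fresh deployment simply returns empty lists:

```shell
source ~/.admin-openrc

# list the registered datastores and any database instances
openstack datastore list
openstack database instance list
```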
## Swift Installation

Swift provides an elastic, scalable and highly available distributed object storage service, suitable for storing large amounts of unstructured data.

1. Create the service credentials and API endpoints.

Create the service credentials:

```shell
# create the swift user:
openstack user create --domain default --password-prompt swift
# add the admin role to the swift user:
openstack role add --project service --user swift admin
# create the swift service entity:
openstack service create --name swift --description "OpenStack Object Storage" object-store
```

Create the swift API endpoints:

```shell
openstack endpoint create --region RegionOne object-store public http://controller:8080/v1/AUTH_%\(project_id\)s
openstack endpoint create --region RegionOne object-store internal http://controller:8080/v1/AUTH_%\(project_id\)s
openstack endpoint create --region RegionOne object-store admin http://controller:8080/v1
```

2. Install the packages (CTL):

```shell
yum install openstack-swift-proxy python3-swiftclient python3-keystoneclient python3-keystonemiddleware memcached
```

3. Configure the proxy-server.

The Swift RPM package already contains a basically usable proxy-server.conf; you only need to modify the IP and the swift password in it.

***Note***

**Replace password with the password you chose for the swift user in the Identity service.**

4. Install and configure the storage nodes (STG)

Install the supporting packages:

```shell
yum install xfsprogs rsync
```

Format the /dev/vdb and /dev/vdc devices as XFS:

```shell
mkfs.xfs /dev/vdb
mkfs.xfs /dev/vdc
```

Create the mount point directory structure:

```shell
mkdir -p /srv/node/vdb
mkdir -p /srv/node/vdc
```

Find the UUIDs of the new partitions:

```shell
blkid
```

Edit the /etc/fstab file and add the following to it:

```shell
UUID="" /srv/node/vdb xfs noatime 0 2
UUID="" /srv/node/vdc xfs noatime 0 2
```

Mount the devices:

```shell
mount /srv/node/vdb
mount /srv/node/vdc
```

***Note***

**If you do not need fault tolerance, the steps above only need to create one device, and you can skip the following rsync configuration.**

(Optional) Create or edit the /etc/rsyncd.conf file to contain the following:

```shell
[DEFAULT]
uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
address = MANAGEMENT_INTERFACE_IP_ADDRESS

[account]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/account.lock

[container]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/container.lock

[object]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/object.lock
```

**Replace MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network on the storage node.**

Start the rsyncd service and configure it to start when the system boots:

```shell
systemctl enable rsyncd.service
systemctl start rsyncd.service
```

5. Install and configure the components on the storage nodes (STG)

Install the packages:

```shell
yum install openstack-swift-account openstack-swift-container openstack-swift-object
```

Edit the account-server.conf, container-server.conf and object-server.conf files in the /etc/swift directory, replacing bind_ip with the IP address of the management network on the storage node.

Ensure proper ownership of the mount point directory structure:

```shell
chown -R swift:swift /srv/node
```

Create the recon directory and ensure it has the correct ownership:

```shell
mkdir -p /var/cache/swift
chown -R root:swift /var/cache/swift
chmod -R 775 /var/cache/swift
```

6. Create the account ring (CTL)

Change to the /etc/swift directory:

```shell
cd /etc/swift
```

Create the base account.builder file:

```shell
swift-ring-builder account.builder create 10 1 1
```

Add each storage node to the ring:

```shell
swift-ring-builder account.builder add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6202 --device DEVICE_NAME --weight DEVICE_WEIGHT
```

**Replace STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network on the storage node. Replace DEVICE_NAME with the name of a storage device on the same storage node.**

***Note***

**Repeat this command for every storage device on every storage node.**

Verify the ring contents:

```shell
swift-ring-builder account.builder
```

Rebalance the ring:

```shell
swift-ring-builder account.builder rebalance
```

7. Create the container ring (CTL)

Change to the `/etc/swift` directory.

Create the base `container.builder` file:

```shell
swift-ring-builder container.builder create 10 1 1
```

Add each storage node to the ring:

```shell
swift-ring-builder container.builder \
  add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6201 \
  --device DEVICE_NAME --weight 100
```

**Replace STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network on the storage node. Replace DEVICE_NAME with the name of a storage device on the same storage node.**

***Note***

**Repeat this command for every storage device on every storage node.**

Verify the ring contents:

```shell
swift-ring-builder container.builder
```

Rebalance the ring:

```shell
swift-ring-builder container.builder rebalance
```

8. Create the object ring (CTL)

Change to the `/etc/swift` directory.

Create the base `object.builder` file:

```shell
swift-ring-builder object.builder create 10 1 1
```

Add each storage node to the ring:

```shell
swift-ring-builder object.builder \
  add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6200 \
  --device DEVICE_NAME --weight 100
```

**Replace STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network on the storage node. Replace DEVICE_NAME with the name of a storage device on the same storage node.**

***Note***

**Repeat this command for every storage device on every storage node.**

Verify the ring contents:

```shell
swift-ring-builder object.builder
```

Rebalance the ring:

```shell
swift-ring-builder object.builder rebalance
```

Distribute the ring configuration files:

Copy `account.ring.gz`, `container.ring.gz` and `object.ring.gz` to the `/etc/swift` directory on each storage node and on any other node running the proxy service.

9. Complete the installation

Edit the /etc/swift/swift.conf file:

```
[swift-hash]
swift_hash_path_suffix = test-hash
swift_hash_path_prefix = test-hash

[storage-policy:0]
name = Policy-0
default = yes
```

Replace test-hash with unique values.

Copy the swift.conf file to the /etc/swift directory on each storage node and on any other node running the proxy service.

On all nodes, ensure proper ownership of the configuration directory:

```shell
chown -R root:swift /etc/swift
```

On the controller node and on any other node running the proxy service, start the Object Storage proxy service and its dependencies, and configure them to start when the system boots:

```shell
systemctl enable openstack-swift-proxy.service memcached.service
systemctl start openstack-swift-proxy.service memcached.service
```

On the storage nodes, start the Object Storage services and configure them to start when the system boots:

```shell
systemctl enable openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service
systemctl start openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service
systemctl enable openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service
systemctl start openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service
systemctl enable openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service
systemctl start openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service
```
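A simple end-to-end check of the object store once everything is running is to create a container, upload a small object, and list it back. This is only an illustrative smoke test, assuming `~/.admin-openrc` has been sourced on a node with the clients installed:

```shell
source ~/.admin-openrc

swift stat                                   # show account statistics
openstack container create demo-container    # create a test container
echo hello > /tmp/hello.txt
openstack object create demo-container /tmp/hello.txt
openstack object list demo-container         # the uploaded object should be listed
```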
## Cyborg Installation

Cyborg provides acceleration device support for OpenStack, including GPUs, FPGAs, ASICs, NPs, SoCs, NVMe/NOF SSDs, ODP, DPDK/SPDK and so on.

1. Initialize the database

```
CREATE DATABASE cyborg;
GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'localhost' IDENTIFIED BY 'CYBORG_DBPASS';
GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'%' IDENTIFIED BY 'CYBORG_DBPASS';
```

2. Create the Keystone resource objects

```shell
$ openstack user create --domain default --password-prompt cyborg
$ openstack role add --project service --user cyborg admin
$ openstack service create --name cyborg --description "Acceleration Service" accelerator
$ openstack endpoint create --region RegionOne \
  accelerator public http://:6666/v1
$ openstack endpoint create --region RegionOne \
  accelerator internal http://:6666/v1
$ openstack endpoint create --region RegionOne \
  accelerator admin http://:6666/v1
```

3. Install Cyborg

```shell
yum install openstack-cyborg
```

4. Configure Cyborg

Modify /etc/cyborg/cyborg.conf:

```
[DEFAULT]
transport_url = rabbit://%RABBITMQ_USER%:%RABBITMQ_PASSWORD%@%OPENSTACK_HOST_IP%:5672/
use_syslog = False
state_path = /var/lib/cyborg
debug = True

[database]
connection = mysql+pymysql://%DATABASE_USER%:%DATABASE_PASSWORD%@%OPENSTACK_HOST_IP%/cyborg

[service_catalog]
project_domain_id = default
user_domain_id = default
project_name = service
password = PASSWORD
username = cyborg
auth_url = http://%OPENSTACK_HOST_IP%/identity
auth_type = password

[placement]
project_domain_name = Default
project_name = service
user_domain_name = Default
password = PASSWORD
username = placement
auth_url = http://%OPENSTACK_HOST_IP%/identity
auth_type = password

[keystone_authtoken]
memcached_servers = localhost:11211
project_domain_name = Default
project_name = service
user_domain_name = Default
password = PASSWORD
username = cyborg
auth_url = http://%OPENSTACK_HOST_IP%/identity
auth_type = password
```

Modify the usernames, passwords, IP addresses and other information according to your environment.

5. Synchronize the database tables

```shell
cyborg-dbsync --config-file /etc/cyborg/cyborg.conf upgrade
```

6. Start the Cyborg services

```shell
systemctl enable openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent
systemctl start openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent
```
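To confirm the three Cyborg services came up cleanly, a quick status check is usually enough. This is only an illustrative check, not part of the tested procedure; the API host below is a placeholder for whatever address you registered as the endpoint in step 2:

```shell
systemctl is-active openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent
journalctl -u openstack-cyborg-api --since "5 min ago" --no-pager   # look for startup errors
curl http://<cyborg-api-host>:6666/                                 # the API root should respond
```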
## Aodh Installation

1. Create the database

```
CREATE DATABASE aodh;
GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'localhost' IDENTIFIED BY 'AODH_DBPASS';
GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'%' IDENTIFIED BY 'AODH_DBPASS';
```

2. Create the Keystone resource objects

```shell
openstack user create --domain default --password-prompt aodh
openstack role add --project service --user aodh admin
openstack service create --name aodh --description "Telemetry" alarming
openstack endpoint create --region RegionOne alarming public http://controller:8042
openstack endpoint create --region RegionOne alarming internal http://controller:8042
openstack endpoint create --region RegionOne alarming admin http://controller:8042
```

3. Install Aodh

```shell
yum install openstack-aodh-api openstack-aodh-evaluator openstack-aodh-notifier openstack-aodh-listener openstack-aodh-expirer python3-aodhclient
```

Note

The python3-pyparsing package that aodh depends on in the openEuler OS repository is not compatible and must be overwritten with the version matching OpenStack. Use `yum list | grep pyparsing | grep OpenStack | awk '{print $2}'` to get the corresponding version VERSION, and then run `yum install -y python3-pyparsing-VERSION` to install the compatible pyparsing.

4. Modify the configuration file

```
[database]
connection = mysql+pymysql://aodh:AODH_DBPASS@controller/aodh

[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = aodh
password = AODH_PASS

[service_credentials]
auth_type = password
auth_url = http://controller:5000/v3
project_domain_id = default
user_domain_id = default
project_name = service
username = aodh
password = AODH_PASS
interface = internalURL
region_name = RegionOne
```

5. Initialize the database

```shell
aodh-dbsync
```

6. Start the Aodh services

```shell
systemctl enable openstack-aodh-api.service openstack-aodh-evaluator.service openstack-aodh-notifier.service openstack-aodh-listener.service
systemctl start openstack-aodh-api.service openstack-aodh-evaluator.service openstack-aodh-notifier.service openstack-aodh-listener.service
```
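A quick check that the alarming API is reachable, using the python3-aodhclient plugin installed above. This is only an illustrative smoke test; it assumes `~/.admin-openrc` has been sourced, and a fresh deployment simply returns an empty list:

```shell
source ~/.admin-openrc

openstack alarm list
```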
## Gnocchi Installation

1. Create the database

```
CREATE DATABASE gnocchi;
GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'localhost' IDENTIFIED BY 'GNOCCHI_DBPASS';
GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'%' IDENTIFIED BY 'GNOCCHI_DBPASS';
```

2. Create the Keystone resource objects

```shell
openstack user create --domain default --password-prompt gnocchi
openstack role add --project service --user gnocchi admin
openstack service create --name gnocchi --description "Metric Service" metric
openstack endpoint create --region RegionOne metric public http://controller:8041
openstack endpoint create --region RegionOne metric internal http://controller:8041
openstack endpoint create --region RegionOne metric admin http://controller:8041
```

3. Install Gnocchi

```shell
yum install openstack-gnocchi-api openstack-gnocchi-metricd python3-gnocchiclient
```

4. Modify the configuration file /etc/gnocchi/gnocchi.conf

```
[api]
auth_mode = keystone
port = 8041
uwsgi_mode = http-socket

[keystone_authtoken]
auth_type = password
auth_url = http://controller:5000/v3
project_domain_name = Default
user_domain_name = Default
project_name = service
username = gnocchi
password = GNOCCHI_PASS
interface = internalURL
region_name = RegionOne

[indexer]
url = mysql+pymysql://gnocchi:GNOCCHI_DBPASS@controller/gnocchi

[storage]
# coordination_url is not required but specifying one will improve
# performance with better workload division across workers.
coordination_url = redis://controller:6379
file_basepath = /var/lib/gnocchi
driver = file
```

5. Initialize the database

```shell
gnocchi-upgrade
```

6. Start the Gnocchi services

```shell
systemctl enable openstack-gnocchi-api.service openstack-gnocchi-metricd.service
systemctl start openstack-gnocchi-api.service openstack-gnocchi-metricd.service
```

## Ceilometer Installation

1. Create the Keystone resource objects

```shell
openstack user create --domain default --password-prompt ceilometer
openstack role add --project service --user ceilometer admin
openstack service create --name ceilometer --description "Telemetry" metering
```

2. Install Ceilometer

```shell
yum install openstack-ceilometer-notification openstack-ceilometer-central
```

3. Modify the configuration file /etc/ceilometer/pipeline.yaml

```
publishers:
    # set address of Gnocchi
    # + filter out Gnocchi-related activity meters (Swift driver)
    # + set default archive policy
    - gnocchi://?filter_project=service&archive_policy=low
```

4. Modify the configuration file /etc/ceilometer/ceilometer.conf

```
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller

[service_credentials]
auth_type = password
auth_url = http://controller:5000/v3
project_domain_id = default
user_domain_id = default
project_name = service
username = ceilometer
password = CEILOMETER_PASS
interface = internalURL
region_name = RegionOne
```

5. Initialize the database

```shell
ceilometer-upgrade
```

6. Start the Ceilometer services

```shell
systemctl enable openstack-ceilometer-notification.service openstack-ceilometer-central.service
systemctl start openstack-ceilometer-notification.service openstack-ceilometer-central.service
```
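Once both services are running, metrics published by Ceilometer should start appearing as Gnocchi resources. The sketch below is only an illustrative check with the python3-gnocchiclient CLI installed above; on a brand-new deployment the lists may stay empty until the first polling cycle completes:

```shell
source ~/.admin-openrc

gnocchi status          # shows the metricd backlog
gnocchi resource list   # resources created from Ceilometer samples
```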
## Heat Installation

1. Create the heat database and grant it the correct access privileges, replacing HEAT_DBPASS with a suitable password:

```
CREATE DATABASE heat;
GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'localhost' IDENTIFIED BY 'HEAT_DBPASS';
GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'%' IDENTIFIED BY 'HEAT_DBPASS';
```

2. Create the service credentials: create the heat user and add the admin role to it:

```shell
openstack user create --domain default --password-prompt heat
openstack role add --project service --user heat admin
```

3. Create the heat and heat-cfn services and their API endpoints:

```shell
openstack service create --name heat --description "Orchestration" orchestration
openstack service create --name heat-cfn --description "Orchestration" cloudformation
openstack endpoint create --region RegionOne orchestration public http://controller:8004/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne orchestration internal http://controller:8004/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne orchestration admin http://controller:8004/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne cloudformation public http://controller:8000/v1
openstack endpoint create --region RegionOne cloudformation internal http://controller:8000/v1
openstack endpoint create --region RegionOne cloudformation admin http://controller:8000/v1
```

4. Create the additional information required for stack management, including the heat domain, its admin user heat_domain_admin, the heat_stack_owner role and the heat_stack_user role:

```shell
openstack user create --domain heat --password-prompt heat_domain_admin
openstack role add --domain heat --user-domain heat --user heat_domain_admin admin
openstack role create heat_stack_owner
openstack role create heat_stack_user
```

5. Install the packages:

```shell
yum install openstack-heat-api openstack-heat-api-cfn openstack-heat-engine
```

6. Modify the configuration file /etc/heat/heat.conf

```
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
heat_metadata_server_url = http://controller:8000
heat_waitcondition_server_url = http://controller:8000/v1/waitcondition
stack_domain_admin = heat_domain_admin
stack_domain_admin_password = HEAT_DOMAIN_PASS
stack_user_domain_name = heat

[database]
connection = mysql+pymysql://heat:HEAT_DBPASS@controller/heat

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = heat
password = HEAT_PASS

[trustee]
auth_type = password
auth_url = http://controller:5000
username = heat
password = HEAT_PASS
user_domain_name = default

[clients_keystone]
auth_uri = http://controller:5000
```

7. Initialize the heat database tables:

```shell
su -s /bin/sh -c "heat-manage db_sync" heat
```

8. Start the services:

```shell
systemctl enable openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service
systemctl start openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service
```
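To verify that the orchestration engine registered correctly and the API answers, the heat command-line plugin can be queried. This is only an illustrative check; it assumes python3-heatclient is available (it is usually pulled in as a dependency) and that `~/.admin-openrc` has been sourced:

```shell
source ~/.admin-openrc

openstack orchestration service list   # heat-engine workers should report status "up"
openstack stack list                   # empty list on a fresh deployment
```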
## Quick deployment with the OpenStack SIG development tool oos

oos (openEuler OpenStack SIG) is the command-line tool provided by the OpenStack SIG. The oos env family of commands provides ansible scripts for one-click deployment of OpenStack (all in one, or a three-node cluster), which you can use to quickly deploy an OpenStack environment based on openEuler RPM packages. The oos tool supports two ways of deploying an OpenStack environment: through a cloud provider (currently only the Huawei Cloud provider is supported) or by taking over existing hosts. The following uses an all-in-one deployment on Huawei Cloud as an example to explain how to use the oos tool.

1. Install the oos tool:

```shell
pip install openstack-sig-tool
```

2. Configure the Huawei Cloud provider information

Open the /usr/local/etc/oos/oos.conf file and modify the configuration with the Huawei Cloud resources you own:

```
[huaweicloud]
ak =
sk =
region = ap-southeast-3
root_volume_size = 100
data_volume_size = 100
security_group_name = oos
image_format = openEuler-%%(release)s-%%(arch)s
vpc_name = oos_vpc
subnet1_name = oos_subnet1
subnet2_name = oos_subnet2
```

3. Configure the OpenStack environment information

Open the /usr/local/etc/oos/oos.conf file and modify the configuration according to the current machine environment and your needs. The content is as follows:

```
[environment]
mysql_root_password = root
mysql_project_password = root
rabbitmq_password = root
project_identity_password = root
enabled_service = keystone,neutron,cinder,placement,nova,glance,horizon,aodh,ceilometer,cyborg,gnocchi,kolla,heat,swift,trove,tempest
neutron_provider_interface_name = br-ex
default_ext_subnet_range = 10.100.100.0/24
default_ext_subnet_gateway = 10.100.100.1
neutron_dataplane_interface_name = eth1
cinder_block_device = vdb
swift_storage_devices = vdc
swift_hash_path_suffix = ash
swift_hash_path_prefix = has
glance_api_workers = 2
cinder_api_workers = 2
nova_api_workers = 2
nova_metadata_api_workers = 2
nova_conductor_workers = 2
nova_scheduler_workers = 2
neutron_api_workers = 2
horizon_allowed_host = *
kolla_openeuler_plugin = false
```

Key configuration options:

| Option | Description |
|:--|:--|
| enabled_service | List of services to install; trim it according to your needs |
| neutron_provider_interface_name | Name of the Neutron L3 bridge |
| default_ext_subnet_range | IP range of the Neutron private network |
| default_ext_subnet_gateway | Gateway of the Neutron private network |
| neutron_dataplane_interface_name | NIC used by Neutron; a new, dedicated NIC is recommended to avoid conflicts with the existing NIC and to prevent the all-in-one host from losing connectivity |
| cinder_block_device | Name of the block device used by Cinder |
| swift_storage_devices | Name of the block device used by Swift |
| kolla_openeuler_plugin | Whether to enable the Kolla plugin. If set to True, Kolla can deploy openEuler containers |

4. Create an openEuler 22.03-LTS-SP4 x86_64 virtual machine on Huawei Cloud for deploying the all-in-one OpenStack:

```shell
# sshpass is used during `oos env create` to set up password-free access to the target VM
dnf install sshpass
oos env create -r 22.03-lts-sp4 -f small -a x86 -n test-oos all_in_one
```

The detailed parameters can be viewed with the `oos env create --help` command.

5. Deploy the all-in-one OpenStack environment:

```shell
oos env setup test-oos -r wallaby
```

The detailed parameters can be viewed with the `oos env setup --help` command.

6. Initialize the tempest environment

If you want to run tempest tests in this environment, you can run `oos env init`, which automatically creates the OpenStack resources tempest needs:

```shell
oos env init test-oos
```

After the command succeeds, a mytest directory is generated in the user's home directory; enter it and you can run the tempest run command.

If you deploy the OpenStack environment by taking over existing hosts instead, the overall workflow is the same as the Huawei Cloud case above: steps 1, 3, 5 and 6 are unchanged, step 2 (configuring the Huawei Cloud provider information) is removed, and step 4 changes from creating a virtual machine on Huawei Cloud to taking over the host:

```shell
# sshpass is used during `oos env create` to set up password-free access to the target host
dnf install sshpass
oos env manage -r 22.03-lts-sp4 -i TARGET_MACHINE_IP -p TARGET_MACHINE_PASSWD -n test-oos
```

Replace TARGET_MACHINE_IP with the IP of the target machine and TARGET_MACHINE_PASSWD with its password. The detailed parameters can be viewed with the `oos env manage --help` command.
# OpenStack-Wallaby Deployment Guide

- Introduction to OpenStack
- Conventions
- Preparing the Environment
- Environment Configuration
- Installing the SQL Database
- Installing RabbitMQ
- Installing Memcached
- Installing OpenStack
- Keystone Installation
- Glance Installation
- Placement Installation
- Nova Installation
- Neutron Installation
- Cinder Installation
- Horizon Installation
- Tempest Installation
- Ironic Installation
- Kolla Installation
- Trove Installation
- Swift Installation
- Cyborg Installation
- Aodh Installation
- Gnocchi Installation
- Ceilometer Installation
- Heat Installation
- Quick deployment with the OpenStack SIG development tool oos

## Introduction to OpenStack

OpenStack is both a community and a project. It provides an operating platform, or a toolset, for deploying clouds, offering organizations scalable and flexible cloud computing.

As an open source cloud computing management platform, OpenStack combines several major components such as nova, cinder, neutron, glance, keystone and horizon to get the actual work done. OpenStack supports almost all types of cloud environments. The project aims to provide a cloud computing management platform that is simple to implement, massively scalable, feature-rich and standardized. OpenStack delivers an Infrastructure-as-a-Service (IaaS) solution through a set of complementary services, each of which offers an API for integration.

The official openEuler 22.03-LTS-SP4 repositories already support the OpenStack-Wallaby release. You can configure the yum repositories and then deploy OpenStack by following this document.

## Conventions

OpenStack supports multiple deployment topologies. This document covers both the All-in-One and the Distributed deployment modes, with the following conventions:

- All-in-One mode: ignore all possible suffixes.
- Distributed mode:
  - A `(CTL)` suffix means the configuration item or command applies only to the `controller node`.
  - A `(CPT)` suffix means the configuration item or command applies only to the `compute node`.
  - A `(STG)` suffix means the configuration item or command applies only to the `storage node`.
  - Otherwise, the configuration item or command applies to both the `controller node` and the `compute node`.

Note

The services affected by the conventions above are:

- Cinder
- Nova
- Neutron
## Preparing the Environment

### Environment Configuration

Configure the official 22.03 LTS yum repositories. The EPOL repository must be enabled to support OpenStack:

```shell
yum update
yum install openstack-release-wallaby
yum clean all && yum makecache
```

Note: if the yum repositories in your environment do not have EPOL enabled, configure EPOL as well and make sure it is present, as shown below:

```shell
vi /etc/yum.repos.d/openEuler.repo

[EPOL]
name=EPOL
baseurl=http://repo.openeuler.org/openEuler-22.03-LTS-SP4/EPOL/main/$basearch/
enabled=1
gpgcheck=1
gpgkey=http://repo.openeuler.org/openEuler-22.03-LTS-SP4/OS/$basearch/RPM-GPG-KEY-openEuler
```

Modify the host names and their mappings.

Set the host name of each node:

```shell
hostnamectl set-hostname controller                  (CTL)
hostnamectl set-hostname compute                     (CPT)
```

Assuming the IP of the controller node is 10.0.0.11 and the IP of the compute node (if present) is 10.0.0.12, add the following to /etc/hosts:

```shell
10.0.0.11   controller
10.0.0.12   compute
```

### Installing the SQL Database

Run the following command to install the packages:

```shell
yum install mariadb mariadb-server python3-PyMySQL
```

Run the following command to create and edit the /etc/my.cnf.d/openstack.cnf file:

```shell
vim /etc/my.cnf.d/openstack.cnf

[mysqld]
bind-address = 10.0.0.11
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
```

Note: set bind-address to the management IP address of the controller node.

Start the database service and configure it to start on boot:

```shell
systemctl enable mariadb.service
systemctl start mariadb.service
```

Configure the default database password (optional):

```shell
mysql_secure_installation
```

Note: just follow the prompts.

### Installing RabbitMQ

Run the following command to install the packages:

```shell
yum install rabbitmq-server
```

Start the RabbitMQ service and configure it to start on boot:

```shell
systemctl enable rabbitmq-server.service
systemctl start rabbitmq-server.service
```

Add the openstack user:

```shell
rabbitmqctl add_user openstack RABBIT_PASS
```

Note: replace RABBIT_PASS to set a password for the openstack user.

Set the permissions of the openstack user to allow configuration, write and read access:

```shell
rabbitmqctl set_permissions openstack ".*" ".*" ".*"
```
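A quick way to confirm that the user and its permissions were created as intended (purely an illustrative check):

```shell
rabbitmqctl list_users                # the openstack user should be listed
rabbitmqctl list_permissions -p /     # openstack should have ".*" for configure/write/read
```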
### Installing Memcached

Run the following command to install the dependency packages:

```shell
yum install memcached python3-memcached
```

Edit the /etc/sysconfig/memcached file:

```shell
vim /etc/sysconfig/memcached

OPTIONS="-l 127.0.0.1,::1,controller"
```

Run the following commands to start the Memcached service and configure it to start on boot:

```shell
systemctl enable memcached.service
systemctl start memcached.service
```

Note: after the service starts, you can make sure it started correctly and is usable with the command `memcached-tool controller stats`, where controller can be replaced with the management IP address of the controller node.

## Installing OpenStack

### Keystone Installation

Create the keystone database and grant privileges:

```shell
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE keystone;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
  IDENTIFIED BY 'KEYSTONE_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
  IDENTIFIED BY 'KEYSTONE_DBPASS';
MariaDB [(none)]> exit
```

Note: replace KEYSTONE_DBPASS to set a password for the keystone database.

Install the packages:

```shell
yum install openstack-keystone httpd mod_wsgi
```

Configure keystone:

```shell
vim /etc/keystone/keystone.conf

[database]
connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone

[token]
provider = fernet
```

Explanation:

- The [database] section configures the database entry point.
- The [token] section configures the token provider.

Note: replace KEYSTONE_DBPASS with the password of the keystone database.

Synchronize the database:

```shell
su -s /bin/sh -c "keystone-manage db_sync" keystone
```

Initialize the Fernet key repositories:

```shell
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
```

Bootstrap the service:

```shell
keystone-manage bootstrap --bootstrap-password ADMIN_PASS \
  --bootstrap-admin-url http://controller:5000/v3/ \
  --bootstrap-internal-url http://controller:5000/v3/ \
  --bootstrap-public-url http://controller:5000/v3/ \
  --bootstrap-region-id RegionOne
```

Note: replace ADMIN_PASS to set a password for the admin user.

Configure the Apache HTTP server:

```shell
vim /etc/httpd/conf/httpd.conf

ServerName controller
```

```shell
ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
```

Explanation: the ServerName directive should reference the controller node.

Note: if the ServerName directive does not exist, create it.

Start the Apache HTTP service:

```shell
systemctl enable httpd.service
systemctl start httpd.service
```

Create the environment variable configuration:

```shell
cat << EOF >> ~/.admin-openrc
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
EOF
```

Note: replace ADMIN_PASS with the password of the admin user.

Create the domain, projects, users and roles in turn. python3-openstackclient must be installed first:

```shell
yum install python3-openstackclient
```

Import the environment variables:

```shell
source ~/.admin-openrc
```
Create the project service; the domain default was already created during keystone-manage bootstrap:

```shell
openstack domain create --description "An Example Domain" example

openstack project create --domain default --description "Service Project" service
```

Create the (non-admin) project myproject, the user myuser and the role myrole, and add the role myrole to myproject and myuser:

```shell
openstack project create --domain default --description "Demo Project" myproject
openstack user create --domain default --password-prompt myuser
openstack role create myrole
openstack role add --project myproject --user myuser myrole
```

Verification

Unset the temporary environment variables OS_AUTH_URL and OS_PASSWORD:

```shell
source ~/.admin-openrc
unset OS_AUTH_URL OS_PASSWORD
```

Request a token for the admin user:

```shell
openstack --os-auth-url http://controller:5000/v3 \
  --os-project-domain-name Default --os-user-domain-name Default \
  --os-project-name admin --os-username admin token issue
```

Request a token for the myuser user:

```shell
openstack --os-auth-url http://controller:5000/v3 \
  --os-project-domain-name Default --os-user-domain-name Default \
  --os-project-name myproject --os-username myuser token issue
```

### Glance Installation

Create the database, service credentials and API endpoints.

Create the database:

```shell
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE glance;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
  IDENTIFIED BY 'GLANCE_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
  IDENTIFIED BY 'GLANCE_DBPASS';
MariaDB [(none)]> exit
```

Note: replace GLANCE_DBPASS to set a password for the glance database.

Create the service credentials:

```shell
source ~/.admin-openrc

openstack user create --domain default --password-prompt glance
openstack role add --project service --user glance admin
openstack service create --name glance --description "OpenStack Image" image
```

Create the Image service API endpoints:

```shell
openstack endpoint create --region RegionOne image public http://controller:9292
openstack endpoint create --region RegionOne image internal http://controller:9292
openstack endpoint create --region RegionOne image admin http://controller:9292
```

Install the packages:

```shell
yum install openstack-glance
```

Configure glance:

```shell
vim /etc/glance/glance-api.conf

[database]
connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = GLANCE_PASS

[paste_deploy]
flavor = keystone

[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
```

Explanation:

- The [database] section configures the database entry point.
- The [keystone_authtoken] and [paste_deploy] sections configure the Identity service entry point.
- The [glance_store] section configures the local filesystem store and the location of image files.

Note:

- Replace GLANCE_DBPASS with the password of the glance database.
- Replace GLANCE_PASS with the password of the glance user.
Synchronize the database:

```shell
su -s /bin/sh -c "glance-manage db_sync" glance
```

Start the service:

```shell
systemctl enable openstack-glance-api.service
systemctl start openstack-glance-api.service
```

**Verification**

Download an image:

```shell
source ~/.admin-openrc
wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
```

**Note**: if your environment runs on Kunpeng (aarch64) hardware, download the aarch64 version of the image; the image cirros-0.5.2-aarch64-disk.img has been tested.

Upload the image to the Image service:

```shell
openstack image create --disk-format qcow2 --container-format bare \
--file cirros-0.4.0-x86_64-disk.img --public cirros
```

Confirm the upload and verify the image attributes:

```shell
openstack image list
```

### Placement Installation

Create the database, service credentials and API endpoints.

Create the database. Access the database as the root user, then create the placement database and grant privileges:

```shell
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE placement;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' \
IDENTIFIED BY 'PLACEMENT_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' \
IDENTIFIED BY 'PLACEMENT_DBPASS';
MariaDB [(none)]> exit
```

**Note**: replace `PLACEMENT_DBPASS` with the password you want for the placement database.

```shell
source ~/.admin-openrc
```

Run the following commands to create the placement service credentials, create the placement user, and add the admin role to the placement user.

Create the Placement API service:

```shell
openstack user create --domain default --password-prompt placement
openstack role add --project service --user placement admin
openstack service create --name placement --description "Placement API" placement
```

Create the placement service API endpoints:

```shell
openstack endpoint create --region RegionOne placement public http://controller:8778
openstack endpoint create --region RegionOne placement internal http://controller:8778
openstack endpoint create --region RegionOne placement admin http://controller:8778
```

Install and configure Placement.

Install the packages:

```shell
yum install openstack-placement-api
```

Configure Placement by editing `/etc/placement/placement.conf`:

- In the `[placement_database]` section, configure the database entry.
- In the `[api]` and `[keystone_authtoken]` sections, configure the identity service entry.

```shell
vim /etc/placement/placement.conf

[placement_database]
# ...
connection = mysql+pymysql://placement:PLACEMENT_DBPASS@controller/placement

[api]
# ...
auth_strategy = keystone

[keystone_authtoken]
# ...
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = placement
password = PLACEMENT_PASS
```

Replace `PLACEMENT_DBPASS` with the password of the placement database, and `PLACEMENT_PASS` with the password of the placement user.

Synchronize the database:

```shell
su -s /bin/sh -c "placement-manage db sync" placement
```

Start the httpd service:

```shell
systemctl restart httpd
```

**Verification**

Run a status check:

```shell
source ~/.admin-openrc
placement-status upgrade check
```

Install osc-placement and list the available resource classes and traits:

```shell
yum install python3-osc-placement

openstack --os-placement-api-version 1.2 resource class list --sort-column name
openstack --os-placement-api-version 1.6 trait list --sort-column name
```

### Nova Installation

In this and the following sections, `(CTL)` marks steps executed on the controller node and `(CPT)` marks steps executed on compute nodes.

Create the database, service credentials and API endpoints.

Create the databases:

```shell
mysql -u root -p                                                     (CTL)

MariaDB [(none)]> CREATE DATABASE nova_api;
MariaDB [(none)]> CREATE DATABASE nova;
MariaDB [(none)]> CREATE DATABASE nova_cell0;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> exit
```

**Note**: replace `NOVA_DBPASS` with the password you want for the nova databases.

```shell
source ~/.admin-openrc                                               (CTL)
```

Create the nova service credentials:

```shell
openstack user create --domain default --password-prompt nova       (CTL)
openstack role add --project service --user nova admin              (CTL)
openstack service create --name nova --description "OpenStack Compute" compute   (CTL)
```

Create the nova API endpoints:

```shell
openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1     (CTL)
openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1   (CTL)
openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1      (CTL)
```

Install the packages:

```shell
yum install openstack-nova-api openstack-nova-conductor \
            openstack-nova-novncproxy openstack-nova-scheduler       (CTL)

yum install openstack-nova-compute                                   (CPT)
```

**Note**: on arm64, the following command is also required:

```shell
yum install edk2-aarch64                                             (CPT)
```
Configure Nova by editing `/etc/nova/nova.conf`:

```shell
vim /etc/nova/nova.conf

[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
my_ip = 10.0.0.1
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver
compute_driver = libvirt.LibvirtDriver                               (CPT)
instances_path = /var/lib/nova/instances/                            (CPT)
lock_path = /var/lib/nova/tmp                                        (CPT)

[api_database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api    (CTL)

[database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova        (CTL)

[api]
auth_strategy = keystone

[keystone_authtoken]
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = NOVA_PASS

[vnc]
enabled = true
server_listen = $my_ip
server_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html           (CPT)

[libvirt]
virt_type = qemu                                                     (CPT)
cpu_mode = custom                                                    (CPT)
cpu_model = cortex-a72                                               (CPT)

[glance]
api_servers = http://controller:9292

[oslo_concurrency]
lock_path = /var/lib/nova/tmp                                        (CTL)

[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = PLACEMENT_PASS

[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
service_metadata_proxy = true                                        (CTL)
metadata_proxy_shared_secret = METADATA_SECRET                       (CTL)
```

**Explanation**

- `[DEFAULT]`: enable the compute and metadata APIs, configure the RabbitMQ message queue entry, set `my_ip`, and enable the neutron network service.
- `[api_database]`, `[database]`: configure the database entries.
- `[api]`, `[keystone_authtoken]`: configure the identity service entry.
- `[vnc]`: enable and configure the remote console entry.
- `[glance]`: configure the address of the image service API.
- `[oslo_concurrency]`: configure the lock path.
- `[placement]`: configure the entry of the placement service.

**Note**

- Replace `RABBIT_PASS` with the password of the openstack account in RabbitMQ.
- Set `my_ip` to the management IP address of the controller node.
- Replace `NOVA_DBPASS` with the password of the nova database.
- Replace `NOVA_PASS` with the password of the nova user.
- Replace `PLACEMENT_PASS` with the password of the placement user.
- Replace `NEUTRON_PASS` with the password of the neutron user.
- Replace `METADATA_SECRET` with a suitable metadata proxy secret.

**Additional**

Determine whether the host supports hardware acceleration for virtual machines (x86 only):

```shell
egrep -c '(vmx|svm)' /proc/cpuinfo                                   (CPT)
```

If the command returns 0, hardware acceleration is not supported, and libvirt must be configured to use QEMU instead of KVM:

```shell
vim /etc/nova/nova.conf                                              (CPT)

[libvirt]
virt_type = qemu
```

If the command returns 1 or more, hardware acceleration is supported and no extra configuration is required.

**Note**: on arm64, the following configuration is also required:

```shell
vim /etc/libvirt/qemu.conf

nvram = ["/usr/share/AAVMF/AAVMF_CODE.fd: \
          /usr/share/AAVMF/AAVMF_VARS.fd", \
         "/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw: \
          /usr/share/edk2/aarch64/vars-template-pflash.raw"]
```

```shell
vim /etc/qemu/firmware/edk2-aarch64.json                             (CPT)

{
    "description": "UEFI firmware for ARM64 virtual machines",
    "interface-types": [
        "uefi"
    ],
    "mapping": {
        "device": "flash",
        "executable": {
            "filename": "/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw",
            "format": "raw"
        },
        "nvram-template": {
            "filename": "/usr/share/edk2/aarch64/vars-template-pflash.raw",
            "format": "raw"
        }
    },
    "targets": [
        {
            "architecture": "aarch64",
            "machines": [
                "virt-*"
            ]
        }
    ],
    "features": [],
    "tags": []
}
```

Synchronize the databases.

Synchronize the nova-api database:

```shell
su -s /bin/sh -c "nova-manage api_db sync" nova                      (CTL)
```

Register the cell0 database:

```shell
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova                (CTL)
```

Create the cell1 cell:

```shell
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova   (CTL)
```

Synchronize the nova database:

```shell
su -s /bin/sh -c "nova-manage db sync" nova                          (CTL)
```

Verify that cell0 and cell1 are registered correctly:

```shell
su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova               (CTL)
```

Add the compute node to the OpenStack cluster:

```shell
su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova (CPT)
```

Start the services:

```shell
systemctl enable \
    openstack-nova-api.service \
    openstack-nova-scheduler.service \
    openstack-nova-conductor.service \
    openstack-nova-novncproxy.service                                (CTL)
systemctl start \
    openstack-nova-api.service \
    openstack-nova-scheduler.service \
    openstack-nova-conductor.service \
    openstack-nova-novncproxy.service                                (CTL)
```

```shell
systemctl enable libvirtd.service openstack-nova-compute.service     (CPT)
systemctl start libvirtd.service openstack-nova-compute.service      (CPT)
```

**Verification**

```shell
source ~/.admin-openrc                                               (CTL)
```

List the service components to verify that every process started and registered successfully:

```shell
openstack compute service list                                       (CTL)
```

List the API endpoints in the identity service to verify connectivity to the identity service:

```shell
openstack catalog list                                               (CTL)
```

List the images in the image service to verify connectivity to the image service:

```shell
openstack image list                                                 (CTL)
```

Check whether the cells are working and whether the other prerequisites are in place:

```shell
nova-status upgrade check                                            (CTL)
```
### Neutron Installation

Create the database, service credentials and API endpoints.

Create the database:

```shell
mysql -u root -p                                                     (CTL)

MariaDB [(none)]> CREATE DATABASE neutron;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
IDENTIFIED BY 'NEUTRON_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
IDENTIFIED BY 'NEUTRON_DBPASS';
MariaDB [(none)]> exit
```

**Note**: replace `NEUTRON_DBPASS` with the password you want for the neutron database.

```shell
source ~/.admin-openrc                                               (CTL)
```

Create the neutron service credentials:

```shell
openstack user create --domain default --password-prompt neutron    (CTL)
openstack role add --project service --user neutron admin           (CTL)
openstack service create --name neutron --description "OpenStack Networking" network   (CTL)
```

Create the Neutron service API endpoints:

```shell
openstack endpoint create --region RegionOne network public http://controller:9696     (CTL)
openstack endpoint create --region RegionOne network internal http://controller:9696   (CTL)
openstack endpoint create --region RegionOne network admin http://controller:9696      (CTL)
```

Install the packages:

```shell
yum install openstack-neutron openstack-neutron-linuxbridge ebtables ipset \
            openstack-neutron-ml2                                    (CTL)

yum install openstack-neutron-linuxbridge ebtables ipset             (CPT)
```

Configure Neutron.

Configure the main settings:

```shell
vim /etc/neutron/neutron.conf

[database]
connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron   (CTL)

[DEFAULT]
core_plugin = ml2                                                    (CTL)
service_plugins = router                                             (CTL)
allow_overlapping_ips = true                                         (CTL)
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = true                            (CTL)
notify_nova_on_port_data_changes = true                              (CTL)
api_workers = 3                                                      (CTL)

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = neutron
password = NEUTRON_PASS

[nova]
auth_url = http://controller:5000                                    (CTL)
auth_type = password                                                 (CTL)
project_domain_name = Default                                        (CTL)
user_domain_name = Default                                           (CTL)
region_name = RegionOne                                              (CTL)
project_name = service                                               (CTL)
username = nova                                                      (CTL)
password = NOVA_PASS                                                 (CTL)

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
```

**Explanation**

- `[database]`: configure the database entry.
- `[DEFAULT]`: enable the ml2 and router plugins, allow overlapping IP addresses, and configure the RabbitMQ message queue entry.
- `[DEFAULT]`, `[keystone_authtoken]`: configure the identity service entry.
- `[DEFAULT]`, `[nova]`: configure networking to notify compute of network topology changes.
- `[oslo_concurrency]`: configure the lock path.

**Note**

- Replace `NEUTRON_DBPASS` with the password of the neutron database.
- Replace `RABBIT_PASS` with the password of the openstack account in RabbitMQ.
- Replace `NEUTRON_PASS` with the password of the neutron user.
- Replace `NOVA_PASS` with the password of the nova user.

Configure the ML2 plugin:

```shell
vim /etc/neutron/plugins/ml2/ml2_conf.ini

[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security

[ml2_type_flat]
flat_networks = provider

[ml2_type_vxlan]
vni_ranges = 1:1000

[securitygroup]
enable_ipset = true
```

Create a symbolic link at /etc/neutron/plugin.ini:

```shell
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
```

**Note**

- `[ml2]`: enable flat, vlan and vxlan networks, enable the linuxbridge and l2population mechanisms, and enable the port security extension driver.
- `[ml2_type_flat]`: configure the flat network as the provider virtual network.
- `[ml2_type_vxlan]`: configure the VXLAN network identifier range.
- `[securitygroup]`: allow ipset.

**Remark**: the layer-2 configuration can be adapted to your own needs; this guide uses a provider network with linuxbridge.

Configure the Linux bridge agent:

```shell
vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini

[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME

[vxlan]
enable_vxlan = true
local_ip = OVERLAY_INTERFACE_IP_ADDRESS
l2_population = true

[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
```

**Explanation**

- `[linux_bridge]`: map the provider virtual network to a physical network interface.
- `[vxlan]`: enable the vxlan overlay network, configure the IP address of the physical interface that handles the overlay network, and enable layer-2 population.
- `[securitygroup]`: allow security groups and configure the linux bridge iptables firewall driver.

**Note**

- Replace `PROVIDER_INTERFACE_NAME` with the physical network interface.
- Replace `OVERLAY_INTERFACE_IP_ADDRESS` with the management IP address of the controller node.

Configure the Layer-3 agent:

```shell
vim /etc/neutron/l3_agent.ini                                        (CTL)

[DEFAULT]
interface_driver = linuxbridge
```

**Explanation**: in the `[DEFAULT]` section, set the interface driver to linuxbridge.

Configure the DHCP agent:

```shell
vim /etc/neutron/dhcp_agent.ini                                      (CTL)

[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
```

**Explanation**: in the `[DEFAULT]` section, configure the linuxbridge interface driver and the Dnsmasq DHCP driver, and enable isolated metadata.

Configure the metadata agent:

```shell
vim /etc/neutron/metadata_agent.ini                                  (CTL)

[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = METADATA_SECRET
```

**Explanation**: in the `[DEFAULT]` section, configure the metadata host and the shared secret.

**Note**: replace `METADATA_SECRET` with a suitable metadata proxy secret.

Configure Nova for Neutron:

```shell
vim /etc/nova/nova.conf

[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = Default
user_domain_name = Default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
service_metadata_proxy = true                                        (CTL)
metadata_proxy_shared_secret = METADATA_SECRET                       (CTL)
```

**Explanation**: in the `[neutron]` section, configure the access parameters, enable the metadata proxy, and configure the secret.

**Note**

- Replace `NEUTRON_PASS` with the password of the neutron user.
- Replace `METADATA_SECRET` with a suitable metadata proxy secret.

Synchronize the database:

```shell
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
--config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
```

Restart the compute API service:

```shell
systemctl restart openstack-nova-api.service
```

Start the network services:

```shell
systemctl enable neutron-server.service neutron-linuxbridge-agent.service \
    neutron-dhcp-agent.service neutron-metadata-agent.service        (CTL)
systemctl enable neutron-l3-agent.service
systemctl restart openstack-nova-api.service neutron-server.service \
    neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
    neutron-metadata-agent.service neutron-l3-agent.service          (CTL)

systemctl enable neutron-linuxbridge-agent.service                   (CPT)
systemctl restart neutron-linuxbridge-agent.service openstack-nova-compute.service   (CPT)
```
**Verification**

Verify that the neutron agents started successfully:

```shell
openstack network agent list
```

### Cinder Installation

In this section, `(STG)` marks steps executed on the storage node.

Create the database, service credentials and API endpoints.

Create the database:

```shell
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE cinder;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \
IDENTIFIED BY 'CINDER_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \
IDENTIFIED BY 'CINDER_DBPASS';
MariaDB [(none)]> exit
```

**Note**: replace `CINDER_DBPASS` with the password you want for the cinder database.

```shell
source ~/.admin-openrc
```

Create the cinder service credentials:

```shell
openstack user create --domain default --password-prompt cinder
openstack role add --project service --user cinder admin
openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
```

Create the block storage service API endpoints:

```shell
openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s
```

Install the packages:

```shell
yum install openstack-cinder-api openstack-cinder-scheduler          (CTL)

yum install lvm2 device-mapper-persistent-data scsi-target-utils rpcbind nfs-utils \
            openstack-cinder-volume openstack-cinder-backup          (STG)
```

Prepare the storage devices. The following is only an example:

```shell
pvcreate /dev/vdb
vgcreate cinder-volumes /dev/vdb

vim /etc/lvm/lvm.conf

devices {
...
filter = [ "a/vdb/", "r/.*/"]
```

**Explanation**: in the `devices` section, add a filter that accepts the /dev/vdb device and rejects all other devices.

Prepare the NFS share:

```shell
mkdir -p /root/cinder/backup

cat << EOF >> /etc/exports
/root/cinder/backup 192.168.1.0/24(rw,sync,no_root_squash,no_all_squash)
EOF
```

Configure Cinder:

```shell
vim /etc/cinder/cinder.conf

[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone
my_ip = 10.0.0.11
enabled_backends = lvm                                               (STG)
backup_driver = cinder.backup.drivers.nfs.NFSBackupDriver            (STG)
backup_share = HOST:PATH                                             (STG)

[database]
connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = cinder
password = CINDER_PASS

[oslo_concurrency]
lock_path = /var/lib/cinder/tmp

[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver            (STG)
volume_group = cinder-volumes                                        (STG)
iscsi_protocol = iscsi                                               (STG)
iscsi_helper = tgtadm                                                (STG)
```

**Explanation**

- `[database]`: configure the database entry.
- `[DEFAULT]`: configure the RabbitMQ message queue entry and `my_ip`.
- `[DEFAULT]`, `[keystone_authtoken]`: configure the identity service entry.
- `[oslo_concurrency]`: configure the lock path.

**Note**

- Replace `CINDER_DBPASS` with the password of the cinder database.
- Replace `RABBIT_PASS` with the password of the openstack account in RabbitMQ.
- Set `my_ip` to the management IP address of the controller node.
- Replace `CINDER_PASS` with the password of the cinder user.
- Replace `HOST:PATH` with the NFS host IP and shared path.

Synchronize the database:

```shell
su -s /bin/sh -c "cinder-manage db sync" cinder                      (CTL)
```

Configure Nova:

```shell
vim /etc/nova/nova.conf                                              (CTL)

[cinder]
os_region_name = RegionOne
```

Restart the compute API service:

```shell
systemctl restart openstack-nova-api.service
```

Start the cinder services:

```shell
systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service    (CTL)
systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service     (CTL)

systemctl enable rpcbind.service nfs-server.service tgtd.service iscsid.service \
    openstack-cinder-volume.service \
    openstack-cinder-backup.service                                  (STG)
systemctl start rpcbind.service nfs-server.service tgtd.service iscsid.service \
    openstack-cinder-volume.service \
    openstack-cinder-backup.service                                  (STG)
```

**Note**: when cinder attaches volumes through tgtadm, modify /etc/tgt/tgtd.conf as follows so that tgtd can discover the iscsi targets of cinder-volume:

```shell
include /var/lib/cinder/volumes/*
```

**Verification**

```shell
source ~/.admin-openrc
openstack volume service list
```
### Horizon Installation

Install the packages:

```shell
yum install openstack-dashboard
```

Modify the configuration file.

Edit the variables:

```shell
vim /etc/openstack-dashboard/local_settings

OPENSTACK_HOST = "controller"
ALLOWED_HOSTS = ['*', ]

SESSION_ENGINE = 'django.contrib.sessions.backends.cache'

CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'controller:11211',
    }
}

OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST

OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True

OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"

OPENSTACK_KEYSTONE_DEFAULT_ROLE = "member"

WEBROOT = '/dashboard'

POLICY_FILES_PATH = "/etc/openstack-dashboard"

OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 3,
}
```

Restart the httpd service:

```shell
systemctl restart httpd.service memcached.service
```

**Verification**

Open a browser, go to http://HOSTIP/dashboard/ and log in to Horizon.

**Note**: replace `HOSTIP` with the management plane IP address of the controller node.

### Tempest Installation

Tempest is the integration test service of OpenStack. It is recommended if you need full automated functional testing of an installed OpenStack environment; otherwise it does not need to be installed.

Install Tempest:

```shell
yum install openstack-tempest
```

Initialize a workspace:

```shell
tempest init mytest
```

Modify the configuration file:

```shell
cd mytest
vi etc/tempest.conf
```

tempest.conf must describe the current OpenStack environment; refer to the official sample for the full set of options. A minimal sketch is shown below.
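The following minimal tempest.conf is only an illustrative sketch, not the packaged default: the password, image ID, flavor ID and network ID are placeholders to be replaced with values from your own deployment.

```
[auth]
admin_username = admin
admin_password = ADMIN_PASS
admin_project_name = admin
admin_domain_name = Default

[identity]
uri_v3 = http://controller:5000/v3
auth_version = v3

[compute]
image_ref = IMAGE_ID
flavor_ref = FLAVOR_ID

[network]
public_network_id = PUBLIC_NET_ID
```

With these values filled in, `tempest run --smoke` is a common way to limit the first run to the smoke tests before running the full suite.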
--description \"Ironic baremetal provisioning service\" baremetal openstack service create --name ironic-inspector --description \"Ironic inspector baremetal provisioning service\" baremetal-introspection openstack user create --password IRONIC_INSPECTOR_PASSWORD --email ironic_inspector@example.com ironic_inspector openstack role add --project service --user ironic-inspector admin 2\u3001\u521b\u5efaBare Metal\u670d\u52a1\u8bbf\u95ee\u5165\u53e3 openstack endpoint create --region RegionOne baremetal admin http://$IRONIC_NODE:6385 openstack endpoint create --region RegionOne baremetal public http://$IRONIC_NODE:6385 openstack endpoint create --region RegionOne baremetal internal http://$IRONIC_NODE:6385 openstack endpoint create --region RegionOne baremetal-introspection internal http://172.20.19.13:5050/v1 openstack endpoint create --region RegionOne baremetal-introspection public http://172.20.19.13:5050/v1 openstack endpoint create --region RegionOne baremetal-introspection admin http://172.20.19.13:5050/v1 \u914d\u7f6eironic-api\u670d\u52a1 \u914d\u7f6e\u6587\u4ef6\u8def\u5f84/etc/ironic/ironic.conf 1\u3001\u901a\u8fc7 connection \u9009\u9879\u914d\u7f6e\u6570\u636e\u5e93\u7684\u4f4d\u7f6e\uff0c\u5982\u4e0b\u6240\u793a\uff0c\u66ff\u6362 IRONIC_DBPASSWORD \u4e3a ironic \u7528\u6237\u7684\u5bc6\u7801\uff0c\u66ff\u6362 DB_IP \u4e3aDB\u670d\u52a1\u5668\u6240\u5728\u7684IP\u5730\u5740\uff1a [database] # The SQLAlchemy connection string used to connect to the # database (string value) connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic 2\u3001\u901a\u8fc7\u4ee5\u4e0b\u9009\u9879\u914d\u7f6eironic-api\u670d\u52a1\u4f7f\u7528RabbitMQ\u6d88\u606f\u4ee3\u7406\uff0c\u66ff\u6362 RPC_* \u4e3aRabbitMQ\u7684\u8be6\u7ec6\u5730\u5740\u548c\u51ed\u8bc1 [DEFAULT] # A URL representing the messaging driver to use and its full # configuration. (string value) transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/ \u7528\u6237\u4e5f\u53ef\u81ea\u884c\u4f7f\u7528json-rpc\u65b9\u5f0f\u66ff\u6362rabbitmq 3\u3001\u914d\u7f6eironic-api\u670d\u52a1\u4f7f\u7528\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u7684\u51ed\u8bc1\uff0c\u66ff\u6362 PUBLIC_IDENTITY_IP \u4e3a\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u5668\u7684\u516c\u5171IP\uff0c\u66ff\u6362 PRIVATE_IDENTITY_IP \u4e3a\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u5668\u7684\u79c1\u6709IP\uff0c\u66ff\u6362 IRONIC_PASSWORD \u4e3a\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u4e2d ironic \u7528\u6237\u7684\u5bc6\u7801\uff1a [DEFAULT] # Authentication strategy used by ironic-api: one of # \"keystone\" or \"noauth\". \"noauth\" should not be used in a # production environment because all authentication will be # disabled. 
3. Configure the ironic-api service to use the credentials of the identity service. Replace `PUBLIC_IDENTITY_IP` with the public IP of the identity server, `PRIVATE_IDENTITY_IP` with the private IP of the identity server, and `IRONIC_PASSWORD` with the password of the `ironic` user in the identity service:

```shell
[DEFAULT]

# Authentication strategy used by ironic-api: one of
# "keystone" or "noauth". "noauth" should not be used in a
# production environment because all authentication will be
# disabled. (string value)
auth_strategy=keystone
host = controller
memcache_servers = controller:11211
enabled_network_interfaces = flat,noop,neutron
default_network_interface = noop
transport_url = rabbit://openstack:RABBITPASSWD@controller:5672/
enabled_hardware_types = ipmi
enabled_boot_interfaces = pxe
enabled_deploy_interfaces = direct
default_deploy_interface = direct
enabled_inspect_interfaces = inspector
enabled_management_interfaces = ipmitool
enabled_power_interfaces = ipmitool
enabled_rescue_interfaces = no-rescue,agent
isolinux_bin = /usr/share/syslinux/isolinux.bin
logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s

[keystone_authtoken]
# Authentication type to load (string value)
auth_type=password
# Complete public Identity API endpoint (string value)
www_authenticate_uri=http://PUBLIC_IDENTITY_IP:5000
# Complete admin Identity API endpoint. (string value)
auth_url=http://PRIVATE_IDENTITY_IP:5000
# Service username. (string value)
username=ironic
# Service account password. (string value)
password=IRONIC_PASSWORD
# Service tenant name. (string value)
project_name=service
# Domain name containing project (string value)
project_domain_name=Default
# User's domain name (string value)
user_domain_name=Default

[agent]
deploy_logs_collect = always
deploy_logs_local_path = /var/log/ironic/deploy
deploy_logs_storage_backend = local
image_download_source = http
stream_raw_images = false
force_raw_images = false
verify_ca = False

[oslo_concurrency]

[oslo_messaging_notifications]
transport_url = rabbit://openstack:123456@172.20.19.25:5672/
topics = notifications
driver = messagingv2

[oslo_messaging_rabbit]
amqp_durable_queues = True
rabbit_ha_queues = True

[pxe]
ipxe_enabled = false
pxe_append_params = nofb nomodeset vga=normal coreos.autologin ipa-insecure=1
image_cache_size = 204800
tftp_root=/var/lib/tftpboot/cephfs/
tftp_master_path=/var/lib/tftpboot/cephfs/master_images

[dhcp]
dhcp_provider = none
```

4. Create the bare metal service database tables:

```shell
ironic-dbsync --config-file /etc/ironic/ironic.conf create_schema
```

5. Restart the ironic-api service:

```shell
sudo systemctl restart openstack-ironic-api
```

**Configure the ironic-conductor service**

1. Replace `HOST_IP` with the IP of the conductor host:

```shell
[DEFAULT]

# IP address of this host. If unset, will determine the IP
# programmatically. If unable to do so, will use "127.0.0.1".
# (string value)
my_ip=HOST_IP
```

2. Configure the location of the database. ironic-conductor should use the same configuration as ironic-api. Replace `IRONIC_DBPASSWORD` with the password of the `ironic` user and `DB_IP` with the IP address of the DB server:

```shell
[database]

# The SQLAlchemy connection string to use to connect to the
# database. (string value)
connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic
```
3. Configure the service to use the RabbitMQ message broker via the following option. ironic-conductor should use the same configuration as ironic-api. Replace `RPC_*` with the RabbitMQ address details and credentials:

```shell
[DEFAULT]

# A URL representing the messaging driver to use and its full
# configuration. (string value)
transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
```

You can also use json-rpc instead of RabbitMQ.

4. Configure credentials for accessing other OpenStack services.

To communicate with other OpenStack services, the bare metal service needs to authenticate with the OpenStack Identity service using service user credentials when making requests to those services. These credentials must be configured in each configuration section associated with the corresponding service:

- `[neutron]` - to access the OpenStack networking service
- `[glance]` - to access the OpenStack image service
- `[swift]` - to access the OpenStack object storage service
- `[cinder]` - to access the OpenStack block storage service
- `[inspector]` - to access the OpenStack bare metal introspection service
- `[service_catalog]` - a special entry that holds the credentials the bare metal service uses to discover its own API URL endpoint as registered in the OpenStack identity service catalog

For simplicity, the same service user can be used for all services. For backward compatibility it should be the same user that is configured in the `[keystone_authtoken]` section of the ironic-api service. This is not mandatory; a different service user can be created and configured for each service.

In the following example, the authentication information for accessing the OpenStack networking service is configured so that:

- the networking service is deployed in the identity service region named RegionOne, and only the public endpoint interface is registered in the service catalog;
- a specific CA SSL certificate is used for HTTPS connections when making requests;
- the same service user as for ironic-api is used;
- the dynamic password authentication plugin discovers a suitable identity service API version based on the other options.

```shell
[neutron]

# Authentication type to load (string value)
auth_type = password
# Authentication URL (string value)
auth_url=https://IDENTITY_IP:5000/
# Username (string value)
username=ironic
# User's password (string value)
password=IRONIC_PASSWORD
# Project name to scope to (string value)
project_name=service
# Domain ID containing project (string value)
project_domain_id=default
# User's domain id (string value)
user_domain_id=default
# PEM encoded Certificate Authority to use when verifying
# HTTPs connections. (string value)
cafile=/opt/stack/data/ca-bundle.pem
# The default region_name for endpoint URL discovery. (string
# value)
region_name = RegionOne
# List of interfaces, in order of preference, for endpoint
# URL. (list value)
valid_interfaces=public
```

By default, in order to communicate with another service, the bare metal service tries to discover a suitable endpoint for that service through the service catalog of the identity service. If you want to use a different endpoint for a particular service, specify it with the `endpoint_override` option in the bare metal service configuration file:

```shell
[neutron]
...
endpoint_override = <NEUTRON_API_ADDRESS>
```
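For illustration, if the same service user is reused for the image service, the corresponding `[glance]` section looks much like the `[neutron]` example above. This is a sketch using the same placeholder credentials, not an exhaustive list of options:

```
[glance]
auth_type = password
auth_url = https://IDENTITY_IP:5000/
username = ironic
password = IRONIC_PASSWORD
project_name = service
project_domain_id = default
user_domain_id = default
region_name = RegionOne
valid_interfaces = public
```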
5. Configure the allowed drivers and hardware types.

Set the hardware types allowed by the ironic-conductor service with `enabled_hardware_types`:

```shell
[DEFAULT]
enabled_hardware_types = ipmi
```

Configure the hardware interfaces:

```shell
enabled_boot_interfaces = pxe
enabled_deploy_interfaces = direct,iscsi
enabled_inspect_interfaces = inspector
enabled_management_interfaces = ipmitool
enabled_power_interfaces = ipmitool
```

Configure the interface defaults:

```shell
[DEFAULT]
default_deploy_interface = direct
default_network_interface = neutron
```

If any driver that uses Direct deploy is enabled, the Swift backend of the image service must be installed and configured. The Ceph Object Gateway (RADOS Gateway) is also supported as a backend for the image service.

6. Restart the ironic-conductor service:

```shell
sudo systemctl restart openstack-ironic-conductor
```

**Configure the ironic-inspector service**

The configuration file is /etc/ironic-inspector/inspector.conf.

1. Create the database:

```shell
# mysql -u root -p

MariaDB [(none)]> CREATE DATABASE ironic_inspector CHARACTER SET utf8;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic_inspector.* TO 'ironic_inspector'@'localhost' \
IDENTIFIED BY 'IRONIC_INSPECTOR_DBPASSWORD';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic_inspector.* TO 'ironic_inspector'@'%' \
IDENTIFIED BY 'IRONIC_INSPECTOR_DBPASSWORD';
```

2. Configure the location of the database via the `connection` option, as shown below. Replace `IRONIC_INSPECTOR_DBPASSWORD` with the password of the `ironic_inspector` user and `DB_IP` with the IP address of the DB server:

```shell
[database]
backend = sqlalchemy
connection = mysql+pymysql://ironic_inspector:IRONIC_INSPECTOR_DBPASSWORD@DB_IP/ironic_inspector
min_pool_size = 100
max_pool_size = 500
pool_timeout = 30
max_retries = 5
max_overflow = 200
db_retry_interval = 2
db_inc_retry_interval = True
db_max_retry_interval = 2
db_max_retries = 5
```

3. Configure the message queue communication address:

```shell
[DEFAULT]
transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
```

4. Set up keystone authentication:

```shell
[DEFAULT]

auth_strategy = keystone
timeout = 900
rootwrap_config = /etc/ironic-inspector/rootwrap.conf
logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s
log_dir = /var/log/ironic-inspector
state_path = /var/lib/ironic-inspector
use_stderr = False

[ironic]
api_endpoint = http://IRONIC_API_HOST_ADDRESS:6385
auth_type = password
auth_url = http://PUBLIC_IDENTITY_IP:5000
auth_strategy = keystone
ironic_url = http://IRONIC_API_HOST_ADDRESS:6385
os_region = RegionOne
project_name = service
project_domain_name = Default
user_domain_name = Default
username = IRONIC_SERVICE_USER_NAME
password = IRONIC_SERVICE_USER_PASSWORD

[keystone_authtoken]
auth_type = password
auth_url = http://control:5000
www_authenticate_uri = http://control:5000
project_domain_name = default
user_domain_name = default
project_name = service
username = ironic_inspector
password = IRONICPASSWD
region_name = RegionOne
memcache_servers = control:11211
token_cache_time = 300

[processing]
add_ports = active
processing_hooks = $default_processing_hooks,local_link_connection,lldp_basic
ramdisk_logs_dir = /var/log/ironic-inspector/ramdisk
always_store_ramdisk_logs = true
store_data = none
power_off = false

[pxe_filter]
driver = iptables

[capabilities]
boot_mode=True
```

5. Configure the ironic-inspector dnsmasq service:

```shell
# Configuration file: /etc/ironic-inspector/dnsmasq.conf
port=0
interface=enp3s0                       # replace with the actual listening interface
dhcp-range=172.20.19.100,172.20.19.110 # replace with the actual DHCP address range
bind-interfaces
enable-tftp
dhcp-match=set:efi,option:client-arch,7
dhcp-match=set:efi,option:client-arch,9
dhcp-match=aarch64, option:client-arch,11
dhcp-boot=tag:aarch64,grubaa64.efi
dhcp-boot=tag:!aarch64,tag:efi,grubx64.efi
dhcp-boot=tag:!aarch64,tag:!efi,pxelinux.0
tftp-root=/tftpboot                    # replace with the actual tftpboot directory
log-facility=/var/log/dnsmasq.log
```

6. Disable DHCP on the subnet of the ironic provisioning network:

```shell
openstack subnet set --no-dhcp 72426e89-f552-4dc4-9ac7-c4e131ce7f3c
```

7. Initialize the database of the ironic-inspector service.

Run on the controller node:

```shell
ironic-inspector-dbsync --config-file /etc/ironic-inspector/inspector.conf upgrade
```

8. Start the services:

```shell
systemctl enable --now openstack-ironic-inspector.service
systemctl enable --now openstack-ironic-inspector-dnsmasq.service
```

**Configure the httpd service**

Create the httpd root directory used by ironic and set its owner and group. The directory path must match the `http_root` option in the `[deploy]` section of /etc/ironic/ironic.conf:

```shell
mkdir -p /var/lib/ironic/httproot
chown ironic.ironic /var/lib/ironic/httproot
```

Install and configure the httpd service.

Install the httpd service (skip if it is already installed):

```shell
yum install httpd -y
```

Create the file /etc/httpd/conf.d/openstack-ironic-httpd.conf with the following content:

```shell
Listen 8080

<VirtualHost *:8080>
    ServerName ironic.openeuler.com

    ErrorLog "/var/log/httpd/openstack-ironic-httpd-error_log"
    CustomLog "/var/log/httpd/openstack-ironic-httpd-access_log" "%h %l %u %t \"%r\" %>s %b"

    DocumentRoot "/var/lib/ironic/httproot"
    <Directory "/var/lib/ironic/httproot">
        Options Indexes FollowSymLinks
        Require all granted
    </Directory>
    LogLevel warn
    AddDefaultCharset UTF-8
    EnableSendfile on
</VirtualHost>
```

Note that the listening port must match the port specified in the `http_url` option of the `[deploy]` section in /etc/ironic/ironic.conf.

Restart the httpd service:

```shell
systemctl restart httpd
```

**Build the deploy ramdisk image**
The Wallaby ramdisk image can be built with the ironic-python-agent service or the disk-image-builder tool, or with the latest ironic-python-agent-builder from the community. You can also use any other tool of your choice.

If you use the native Wallaby tools, install the corresponding package:

```shell
yum install openstack-ironic-python-agent
```

or

```shell
yum install diskimage-builder
```

See the official documentation for details on how to use them.

The following describes the complete process of building the deploy image used by ironic with ironic-python-agent-builder.

Install ironic-python-agent-builder:

1. Install the tool:

```shell
pip install ironic-python-agent-builder
```

2. Modify the python interpreter in the following files:

```shell
/usr/bin/yum
/usr/libexec/urlgrabber-ext-down
```

3. Install the other required tools:

```shell
yum install git
```

`DIB` depends on the `semanage` command, so make sure it is available before building the image by running `semanage --help`. If the command is not found, install it:

```shell
# Find out which package provides the command
[root@localhost ~]# yum provides /usr/sbin/semanage
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirror.vcu.edu
 * extras: mirror.vcu.edu
 * updates: mirror.math.princeton.edu
policycoreutils-python-2.5-34.el7.aarch64 : SELinux policy core python utilities
Repo        : base
Matched from:
Filename    : /usr/sbin/semanage
# Install it
[root@localhost ~]# yum install policycoreutils-python
```

Build the image.

On `arm` architectures, add:

```shell
export ARCH=aarch64
```

Basic usage:

```shell
usage: ironic-python-agent-builder [-h] [-r RELEASE] [-o OUTPUT] [-e ELEMENT]
                                   [-b BRANCH] [-v] [--extra-args EXTRA_ARGS]
                                   distribution

positional arguments:
  distribution          Distribution to use

optional arguments:
  -h, --help            show this help message and exit
  -r RELEASE, --release RELEASE
                        Distribution release to use
  -o OUTPUT, --output OUTPUT
                        Output base file name
  -e ELEMENT, --element ELEMENT
                        Additional DIB element to use
  -b BRANCH, --branch BRANCH
                        If set, override the branch that is used for ironic-
                        python-agent and requirements
  -v, --verbose         Enable verbose logging in diskimage-builder
  --extra-args EXTRA_ARGS
                        Extra arguments to pass to diskimage-builder
```

Example:

```shell
ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky
```

Allow SSH login.

Initialize the environment variables, then build the image:

```shell
export DIB_DEV_USER_USERNAME=ipa
export DIB_DEV_USER_PWDLESS_SUDO=yes
export DIB_DEV_USER_PASSWORD='123'

ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky -e selinux-permissive -e devuser
```

Specify the code repository.

Initialize the corresponding environment variables, then build the image:

```shell
# Specify the repository address and version
DIB_REPOLOCATION_ironic_python_agent=git@172.20.2.149:liuzz/ironic-python-agent.git
DIB_REPOREF_ironic_python_agent=origin/develop

# Clone the code directly from gerrit
DIB_REPOLOCATION_ironic_python_agent=https://review.opendev.org/openstack/ironic-python-agent
DIB_REPOREF_ironic_python_agent=refs/changes/43/701043/1
```

Reference: [source-repositories](https://docs.openstack.org/diskimage-builder/latest/elements/source-repositories/README.html).

Specifying the repository address and version in this way has been verified to work.

**Note**

The PXE configuration file template in native OpenStack does not support the arm64 architecture, so the native OpenStack code has to be modified.

In Wallaby, the community ironic still does not support UEFI PXE boot on arm64. The symptom is that the generated grub.cfg file (usually located under /tftpboot/) has the wrong format, so PXE boot fails.

Incorrectly generated configuration file:

![ironic-err](../../img/install/ironic-err.png)

As shown in the figure, on the arm architecture the commands that load the vmlinux and ramdisk images are `linux` and `initrd` respectively; the highlighted commands in the figure are the x86 UEFI PXE boot commands.

You need to modify the code that generates grub.cfg yourself.
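As a rough illustration only (the exact kernel and ramdisk paths come from ironic's PXE options and will differ per node), a working aarch64 entry is expected to load the deploy kernel and ramdisk with the `linux` and `initrd` commands; `deploy_kernel` and `deploy_ramdisk` below are placeholders for the paths ironic renders into the template:

```
set default=deploy
set timeout=5

menuentry 'deploy' {
    linux deploy_kernel selinux=0 troubleshoot=0 text
    initrd deploy_ramdisk
}
```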
**TLS errors when ironic queries command execution status from IPA**

In Wallaby, IPA and ironic both enable TLS authentication by default when sending requests to each other. Disable it as described in the upstream documentation:

1. Modify the ironic configuration file (/etc/ironic/ironic.conf), adding `ipa-insecure=1` to the following settings:

```shell
[agent]
verify_ca = False

[pxe]
pxe_append_params = nofb nomodeset vga=normal coreos.autologin ipa-insecure=1
```

2. Add the IPA configuration file /etc/ironic_python_agent/ironic_python_agent.conf to the ramdisk image and configure TLS as follows (the /etc/ironic_python_agent directory must be created first):

```shell
[DEFAULT]
enable_auto_tls = False
```

Set the permissions:

```shell
chown -R ipa.ipa /etc/ironic_python_agent/
```

3. Modify the service file of the IPA service to add the configuration file option:

```shell
vim /usr/lib/systemd/system/ironic-python-agent.service

[Unit]
Description=Ironic Python Agent
After=network-online.target

[Service]
ExecStartPre=/sbin/modprobe vfat
ExecStart=/usr/local/bin/ironic-python-agent --config-file /etc/ironic_python_agent/ironic_python_agent.conf
Restart=always
RestartSec=30s

[Install]
WantedBy=multi-user.target
```

### Kolla Installation

Kolla provides production-ready containerized deployment of the OpenStack services. The Kolla and Kolla-ansible services were introduced in openEuler 22.03 LTS.

Installing Kolla is very simple; just install the corresponding RPM packages:

```shell
yum install openstack-kolla openstack-kolla-ansible
```

After the installation, commands such as `kolla-ansible`, `kolla-build`, `kolla-genpwd` and `kolla-mergepwd` are available.
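As an illustrative sketch of a first run (the inventory path below is the usual kolla-ansible default; check it and /etc/kolla/globals.yml against the packaged files before relying on them):

```shell
# Generate random passwords into /etc/kolla/passwords.yml
kolla-genpwd

# After editing /etc/kolla/globals.yml for your environment:
kolla-ansible -i /usr/share/kolla-ansible/ansible/inventory/all-in-one bootstrap-servers
kolla-ansible -i /usr/share/kolla-ansible/ansible/inventory/all-in-one prechecks
kolla-ansible -i /usr/share/kolla-ansible/ansible/inventory/all-in-one deploy
```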
\u914d\u7f6e trove.conf vim /etc/trove/trove.conf [DEFAULT] bind_host=TROVE_NODE_IP log_dir = /var/log/trove network_driver = trove.network.neutron.NeutronDriver management_security_groups = nova_keypair = trove-mgmt default_datastore = mysql taskmanager_manager = trove.taskmanager.manager.Manager trove_api_workers = 5 transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/ reboot_time_out = 300 usage_timeout = 900 agent_call_high_timeout = 1200 use_syslog = False debug = True # Set these if using Neutron Networking network_driver=trove.network.neutron.NeutronDriver network_label_regex=.* transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/ [database] connection = mysql+pymysql://trove:TROVE_DBPASS@controller/trove [keystone_authtoken] project_domain_name = Default project_name = service user_domain_name = Default password = trove username = trove auth_url = http://controller:5000/v3/ auth_type = password [service_credentials] auth_url = http://controller:5000/v3/ region_name = RegionOne project_name = service password = trove project_domain_name = Default user_domain_name = Default username = trove [mariadb] tcp_ports = 3306,4444,4567,4568 [mysql] tcp_ports = 3306 [postgresql] tcp_ports = 5432 \u89e3\u91ca\uff1a [Default] \u5206\u7ec4\u4e2d bind_host \u914d\u7f6e\u4e3aTrove\u90e8\u7f72\u8282\u70b9\u7684IP nova_compute_url \u548c cinder_url \u4e3aNova\u548cCinder\u5728Keystone\u4e2d\u521b\u5efa\u7684endpoint nova_proxy_XXX \u4e3a\u4e00\u4e2a\u80fd\u8bbf\u95eeNova\u670d\u52a1\u7684\u7528\u6237\u4fe1\u606f\uff0c\u4e0a\u4f8b\u4e2d\u4f7f\u7528 admin \u7528\u6237\u4e3a\u4f8b transport_url \u4e3a RabbitMQ \u8fde\u63a5\u4fe1\u606f\uff0c RABBIT_PASS \u66ff\u6362\u4e3aRabbitMQ\u7684\u5bc6\u7801 [database] \u5206\u7ec4\u4e2d\u7684 connection \u4e3a\u524d\u9762\u5728mysql\u4e2d\u4e3aTrove\u521b\u5efa\u7684\u6570\u636e\u5e93\u4fe1\u606f Trove\u7684\u7528\u6237\u4fe1\u606f\u4e2d TROVE_PASS \u66ff\u6362\u4e3a\u5b9e\u9645trove\u7528\u6237\u7684\u5bc6\u7801 3.\u914d\u7f6e trove-guestagent.conf vim /etc/trove/trove-guestagent.conf [DEFAULT] log_file = trove-guestagent.log log_dir = /var/log/trove/ ignore_users = os_admin control_exchange = trove transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/ rpc_backend = rabbit command_process_timeout = 60 use_syslog = False debug = True [service_credentials] auth_url = http://controller:5000/v3/ region_name = RegionOne project_name = service password = TROVE_PASS project_domain_name = Default user_domain_name = Default username = trove [mysql] docker_image = your-registry/your-repo/mysql backup_docker_image = your-registry/your-repo/db-backup-mysql:1.1.0 \u89e3\u91ca\uff1a guestagent \u662ftrove\u4e2d\u4e00\u4e2a\u72ec\u7acb\u7ec4\u4ef6\uff0c\u9700\u8981\u9884\u5148\u5185\u7f6e\u5230Trove\u901a\u8fc7Nova\u521b\u5efa\u7684\u865a\u62df \u673a\u955c\u50cf\u4e2d\uff0c\u5728\u521b\u5efa\u597d\u6570\u636e\u5e93\u5b9e\u4f8b\u540e\uff0c\u4f1a\u8d77guestagent\u8fdb\u7a0b\uff0c\u8d1f\u8d23\u901a\u8fc7\u6d88\u606f\u961f\u5217\uff08RabbitMQ\uff09\u5411Trove\u4e0a \u62a5\u5fc3\u8df3\uff0c\u56e0\u6b64\u9700\u8981\u914d\u7f6eRabbitMQ\u7684\u7528\u6237\u548c\u5bc6\u7801\u4fe1\u606f\u3002 \u4eceVictoria\u7248\u5f00\u59cb\uff0cTrove\u4f7f\u7528\u4e00\u4e2a\u7edf\u4e00\u7684\u955c\u50cf\u6765\u8dd1\u4e0d\u540c\u7c7b\u578b\u7684\u6570\u636e\u5e93\uff0c\u6570\u636e\u5e93\u670d\u52a1\u8fd0\u884c\u5728Guest\u865a\u62df\u673a\u7684Docker\u5bb9\u5668\u4e2d\u3002 transport_url \u4e3a RabbitMQ \u8fde\u63a5\u4fe1\u606f\uff0c RABBIT_PASS 
## Swift Installation

Swift provides an elastic, scalable and highly available distributed object storage service, suitable for storing large amounts of unstructured data.

1. Create the service credentials and API endpoints.

Create the service credentials:

```shell
# Create the swift user:
openstack user create --domain default --password-prompt swift
# Add the admin role to the swift user:
openstack role add --project service --user swift admin
# Create the swift service entity:
openstack service create --name swift --description "OpenStack Object Storage" object-store
```

Create the swift API endpoints:

```shell
openstack endpoint create --region RegionOne object-store public http://controller:8080/v1/AUTH_%\(project_id\)s
openstack endpoint create --region RegionOne object-store internal http://controller:8080/v1/AUTH_%\(project_id\)s
openstack endpoint create --region RegionOne object-store admin http://controller:8080/v1
```

2. Install the packages (CTL):

```shell
yum install openstack-swift-proxy python3-swiftclient python3-keystoneclient python3-keystonemiddleware memcached
```

3. Configure the proxy-server.

The Swift RPM already ships a basically usable proxy-server.conf; only the IP and the swift password in it need to be changed by hand.

***Note***

**Replace the password with the one you chose for the swift user in the Identity service.**
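For orientation, the options you typically have to touch in `/etc/swift/proxy-server.conf` are sketched below. The exact layout of the packaged file may differ, so treat this as an assumption rather than the shipped defaults; `SWIFT_PASS` is a placeholder.

```
[filter:authtoken]
# Standard keystonemiddleware options; replace SWIFT_PASS with the swift user's password
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = swift
password = SWIFT_PASS
```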
4. Install and configure the storage nodes (STG).

Install the supporting packages:

```shell
yum install xfsprogs rsync
```

Format the /dev/vdb and /dev/vdc devices as XFS:

```shell
mkfs.xfs /dev/vdb
mkfs.xfs /dev/vdc
```

Create the mount point directory structure:

```shell
mkdir -p /srv/node/vdb
mkdir -p /srv/node/vdc
```

Find the UUIDs of the new partitions:

```shell
blkid
```

Edit the /etc/fstab file and add the following to it:

```shell
UUID="" /srv/node/vdb xfs noatime 0 2
UUID="" /srv/node/vdc xfs noatime 0 2
```

Mount the devices:

```shell
mount /srv/node/vdb
mount /srv/node/vdc
```

***Note***

**If you do not need the replication capability, only one device needs to be created in the steps above, and the rsync configuration below can be skipped.**

(Optional) Create or edit the /etc/rsyncd.conf file to contain the following:

```shell
[DEFAULT]
uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
address = MANAGEMENT_INTERFACE_IP_ADDRESS

[account]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/account.lock

[container]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/container.lock

[object]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/object.lock
```

**Replace MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network on the storage node.**

Start the rsyncd service and configure it to start at boot:

```shell
systemctl enable rsyncd.service
systemctl start rsyncd.service
```

5. Install and configure the components on the storage nodes (STG).

Install the packages:

```shell
yum install openstack-swift-account openstack-swift-container openstack-swift-object
```

Edit the account-server.conf, container-server.conf and object-server.conf files in the /etc/swift directory, replacing bind_ip with the IP address of the management network on the storage node.

Ensure correct ownership of the mount point directory structure:

```shell
chown -R swift:swift /srv/node
```

Create the recon directory and ensure it has the correct ownership:

```shell
mkdir -p /var/cache/swift
chown -R root:swift /var/cache/swift
chmod -R 775 /var/cache/swift
```

6. Create the account ring (CTL).

Change to the /etc/swift directory:

```shell
cd /etc/swift
```

Create the base account.builder file:

```shell
swift-ring-builder account.builder create 10 1 1
```

Add each storage node to the ring:

```shell
swift-ring-builder account.builder add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6202 --device DEVICE_NAME --weight DEVICE_WEIGHT
```

**Replace STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network on the storage node. Replace DEVICE_NAME with the name of a storage device on the same storage node.**

***Note***

**Repeat this command for every storage device on every storage node.**

Verify the ring contents:

```shell
swift-ring-builder account.builder
```

Rebalance the ring:

```shell
swift-ring-builder account.builder rebalance
```
7. Create the container ring (CTL).

Change to the `/etc/swift` directory.

Create the base `container.builder` file:

```shell
swift-ring-builder container.builder create 10 1 1
```

Add each storage node to the ring:

```shell
swift-ring-builder container.builder \
    add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6201 \
    --device DEVICE_NAME --weight 100
```

**Replace STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network on the storage node. Replace DEVICE_NAME with the name of a storage device on the same storage node.**

***Note***

**Repeat this command for every storage device on every storage node.**

Verify the ring contents:

```shell
swift-ring-builder container.builder
```

Rebalance the ring:

```shell
swift-ring-builder container.builder rebalance
```

8. Create the object ring (CTL).

Change to the `/etc/swift` directory.

Create the base `object.builder` file:

```shell
swift-ring-builder object.builder create 10 1 1
```

Add each storage node to the ring:

```shell
swift-ring-builder object.builder \
    add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6200 \
    --device DEVICE_NAME --weight 100
```

**Replace STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network on the storage node. Replace DEVICE_NAME with the name of a storage device on the same storage node.**

***Note***

**Repeat this command for every storage device on every storage node.**

Verify the ring contents:

```shell
swift-ring-builder object.builder
```

Rebalance the ring:

```shell
swift-ring-builder object.builder rebalance
```

Distribute the ring configuration files:

Copy the `account.ring.gz`, `container.ring.gz` and `object.ring.gz` files to the `/etc/swift` directory on every storage node and on any other node running the proxy service.

9. Finalize the installation.

Edit the /etc/swift/swift.conf file:

```
[swift-hash]
swift_hash_path_suffix = test-hash
swift_hash_path_prefix = test-hash

[storage-policy:0]
name = Policy-0
default = yes
```

Replace test-hash with unique values.

Copy the swift.conf file to the /etc/swift directory on every storage node and on any other node running the proxy service.

On all nodes, ensure correct ownership of the configuration directory:

```shell
chown -R root:swift /etc/swift
```

On the controller node and on any other node running the proxy service, start the object storage proxy service and its dependencies, and configure them to start at boot:

```shell
systemctl enable openstack-swift-proxy.service memcached.service
systemctl start openstack-swift-proxy.service memcached.service
```

On the storage nodes, start the object storage services and configure them to start at boot:

```shell
systemctl enable openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service
systemctl start openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service
systemctl enable openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service
systemctl start openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service
systemctl enable openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service
systemctl start openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service
```
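A quick smoke test (not in the original text) can confirm that the proxy and the rings work end to end; the container and file names below are arbitrary examples.

```shell
source ~/.admin-openrc

# Create a container, upload a small object, then list and download it
openstack container create demo-container
openstack object create demo-container /etc/hosts
openstack object list demo-container
openstack object save demo-container /etc/hosts --file /tmp/hosts.swift
```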
## Cyborg Installation

Cyborg provides accelerator device support for OpenStack, covering GPU, FPGA, ASIC, NP, SoCs, NVMe/NOF SSDs, ODP, DPDK/SPDK, and so on.

1. Initialize the corresponding database:

```shell
CREATE DATABASE cyborg;
GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'localhost' IDENTIFIED BY 'CYBORG_DBPASS';
GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'%' IDENTIFIED BY 'CYBORG_DBPASS';
```

2. Create the corresponding Keystone resources:

```shell
$ openstack user create --domain default --password-prompt cyborg
$ openstack role add --project service --user cyborg admin
$ openstack service create --name cyborg --description "Acceleration Service" accelerator
$ openstack endpoint create --region RegionOne \
    accelerator public http://:6666/v1
$ openstack endpoint create --region RegionOne \
    accelerator internal http://:6666/v1
$ openstack endpoint create --region RegionOne \
    accelerator admin http://:6666/v1
```

3. Install Cyborg:

```shell
yum install openstack-cyborg
```

4. Configure Cyborg.

Edit /etc/cyborg/cyborg.conf:

```
[DEFAULT]
transport_url = rabbit://%RABBITMQ_USER%:%RABBITMQ_PASSWORD%@%OPENSTACK_HOST_IP%:5672/
use_syslog = False
state_path = /var/lib/cyborg
debug = True

[database]
connection = mysql+pymysql://%DATABASE_USER%:%DATABASE_PASSWORD%@%OPENSTACK_HOST_IP%/cyborg

[service_catalog]
project_domain_id = default
user_domain_id = default
project_name = service
password = PASSWORD
username = cyborg
auth_url = http://%OPENSTACK_HOST_IP%/identity
auth_type = password

[placement]
project_domain_name = Default
project_name = service
user_domain_name = Default
password = PASSWORD
username = placement
auth_url = http://%OPENSTACK_HOST_IP%/identity
auth_type = password

[keystone_authtoken]
memcached_servers = localhost:11211
project_domain_name = Default
project_name = service
user_domain_name = Default
password = PASSWORD
username = cyborg
auth_url = http://%OPENSTACK_HOST_IP%/identity
auth_type = password
```

Adjust the usernames, passwords, IP addresses, and similar values to match your environment.

5. Synchronize the database tables:

```shell
cyborg-dbsync --config-file /etc/cyborg/cyborg.conf upgrade
```

6. Start the Cyborg services:

```shell
systemctl enable openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent
systemctl start openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent
```
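As an optional sanity check (not part of the original steps), the accelerator commands from python-cyborgclient can be used to confirm the API responds. The client package name below is an assumption for openEuler, and the list may be empty on hosts without supported accelerators.

```shell
# Assumed client package name on openEuler
yum install python3-cyborgclient

source ~/.admin-openrc
# List accelerator devices reported by cyborg-agent
openstack accelerator device list
```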
## Aodh Installation

1. Create the database:

```shell
CREATE DATABASE aodh;
GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'localhost' IDENTIFIED BY 'AODH_DBPASS';
GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'%' IDENTIFIED BY 'AODH_DBPASS';
```

2. Create the corresponding Keystone resources:

```shell
openstack user create --domain default --password-prompt aodh
openstack role add --project service --user aodh admin
openstack service create --name aodh --description "Telemetry" alarming
openstack endpoint create --region RegionOne alarming public http://controller:8042
openstack endpoint create --region RegionOne alarming internal http://controller:8042
openstack endpoint create --region RegionOne alarming admin http://controller:8042
```

3. Install Aodh:

```shell
yum install openstack-aodh-api openstack-aodh-evaluator openstack-aodh-notifier openstack-aodh-listener openstack-aodh-expirer python3-aodhclient
```

Note: the python3-pyparsing package that Aodh depends on in the openEuler OS repository is not compatible and must be overridden with the version matching OpenStack. You can obtain the matching VERSION with `yum list | grep pyparsing | grep OpenStack | awk '{print $2}'`, and then install the compatible pyparsing over it with `yum install -y python3-pyparsing-VERSION`.

4. Edit the configuration file (/etc/aodh/aodh.conf):

```
[database]
connection = mysql+pymysql://aodh:AODH_DBPASS@controller/aodh

[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = aodh
password = AODH_PASS

[service_credentials]
auth_type = password
auth_url = http://controller:5000/v3
project_domain_id = default
user_domain_id = default
project_name = service
username = aodh
password = AODH_PASS
interface = internalURL
region_name = RegionOne
```

5. Initialize the database:

```shell
aodh-dbsync
```

6. Start the Aodh services:

```shell
systemctl enable openstack-aodh-api.service openstack-aodh-evaluator.service openstack-aodh-notifier.service openstack-aodh-listener.service
systemctl start openstack-aodh-api.service openstack-aodh-evaluator.service openstack-aodh-notifier.service openstack-aodh-listener.service
```
## Gnocchi Installation

1. Create the database:

```shell
CREATE DATABASE gnocchi;
GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'localhost' IDENTIFIED BY 'GNOCCHI_DBPASS';
GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'%' IDENTIFIED BY 'GNOCCHI_DBPASS';
```

2. Create the corresponding Keystone resources:

```shell
openstack user create --domain default --password-prompt gnocchi
openstack role add --project service --user gnocchi admin
openstack service create --name gnocchi --description "Metric Service" metric
openstack endpoint create --region RegionOne metric public http://controller:8041
openstack endpoint create --region RegionOne metric internal http://controller:8041
openstack endpoint create --region RegionOne metric admin http://controller:8041
```

3. Install Gnocchi:

```shell
yum install openstack-gnocchi-api openstack-gnocchi-metricd python3-gnocchiclient
```

4. Edit the configuration file /etc/gnocchi/gnocchi.conf:

```
[api]
auth_mode = keystone
port = 8041
uwsgi_mode = http-socket

[keystone_authtoken]
auth_type = password
auth_url = http://controller:5000/v3
project_domain_name = Default
user_domain_name = Default
project_name = service
username = gnocchi
password = GNOCCHI_PASS
interface = internalURL
region_name = RegionOne

[indexer]
url = mysql+pymysql://gnocchi:GNOCCHI_DBPASS@controller/gnocchi

[storage]
# coordination_url is not required but specifying one will improve
# performance with better workload division across workers.
coordination_url = redis://controller:6379
file_basepath = /var/lib/gnocchi
driver = file
```

5. Initialize the database:

```shell
gnocchi-upgrade
```

6. Start the Gnocchi services:

```shell
systemctl enable openstack-gnocchi-api.service openstack-gnocchi-metricd.service
systemctl start openstack-gnocchi-api.service openstack-gnocchi-metricd.service
```

## Ceilometer Installation

1. Create the corresponding Keystone resources:

```shell
openstack user create --domain default --password-prompt ceilometer
openstack role add --project service --user ceilometer admin
openstack service create --name ceilometer --description "Telemetry" metering
```

2. Install Ceilometer:

```shell
yum install openstack-ceilometer-notification openstack-ceilometer-central
```

3. Edit the configuration file /etc/ceilometer/pipeline.yaml:

```
publishers:
    # set address of Gnocchi
    # + filter out Gnocchi-related activity meters (Swift driver)
    # + set default archive policy
    - gnocchi://?filter_project=service&archive_policy=low
```

4. Edit the configuration file /etc/ceilometer/ceilometer.conf:

```
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller

[service_credentials]
auth_type = password
auth_url = http://controller:5000/v3
project_domain_id = default
user_domain_id = default
project_name = service
username = ceilometer
password = CEILOMETER_PASS
interface = internalURL
region_name = RegionOne
```

5. Initialize the database:

```shell
ceilometer-upgrade
```

6. Start the Ceilometer services:

```shell
systemctl enable openstack-ceilometer-notification.service openstack-ceilometer-central.service
systemctl start openstack-ceilometer-notification.service openstack-ceilometer-central.service
```
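A quick check (not part of the original text) that the alarming and metric APIs respond can be done with the clients installed above; both commands may return empty results on a fresh deployment.

```shell
source ~/.admin-openrc

# Aodh alarming API
openstack alarm list

# Gnocchi storage/metricd status (from python3-gnocchiclient)
gnocchi status
```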
## Heat Installation

1. Create the heat database and grant it the proper access privileges; replace HEAT_DBPASS with a suitable password:

```shell
CREATE DATABASE heat;
GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'localhost' IDENTIFIED BY 'HEAT_DBPASS';
GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'%' IDENTIFIED BY 'HEAT_DBPASS';
```

2. Create the service credentials: create the heat user and add the admin role to it:

```shell
openstack user create --domain default --password-prompt heat
openstack role add --project service --user heat admin
```

3. Create the heat and heat-cfn services and their API endpoints:

```shell
openstack service create --name heat --description "Orchestration" orchestration
openstack service create --name heat-cfn --description "Orchestration" cloudformation
openstack endpoint create --region RegionOne orchestration public http://controller:8004/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne orchestration internal http://controller:8004/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne orchestration admin http://controller:8004/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne cloudformation public http://controller:8000/v1
openstack endpoint create --region RegionOne cloudformation internal http://controller:8000/v1
openstack endpoint create --region RegionOne cloudformation admin http://controller:8000/v1
```

4. Create the additional information required for stack management, including the heat domain, its admin user `heat_domain_admin`, and the `heat_stack_owner` and `heat_stack_user` roles:

```shell
openstack user create --domain heat --password-prompt heat_domain_admin
openstack role add --domain heat --user-domain heat --user heat_domain_admin admin
openstack role create heat_stack_owner
openstack role create heat_stack_user
```

5. Install the packages:

```shell
yum install openstack-heat-api openstack-heat-api-cfn openstack-heat-engine
```

6. Edit the configuration file /etc/heat/heat.conf:

```
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
heat_metadata_server_url = http://controller:8000
heat_waitcondition_server_url = http://controller:8000/v1/waitcondition
stack_domain_admin = heat_domain_admin
stack_domain_admin_password = HEAT_DOMAIN_PASS
stack_user_domain_name = heat

[database]
connection = mysql+pymysql://heat:HEAT_DBPASS@controller/heat

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = heat
password = HEAT_PASS

[trustee]
auth_type = password
auth_url = http://controller:5000
username = heat
password = HEAT_PASS
user_domain_name = default

[clients_keystone]
auth_uri = http://controller:5000
```

7. Initialize the heat database tables:

```shell
su -s /bin/sh -c "heat-manage db_sync" heat
```

8. Start the services:

```shell
systemctl enable openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service
systemctl start openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service
```
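To confirm the engine works end to end, a minimal stack can be created. The template below is an illustrative example (not from this guide) that defines only an output, so it does not consume any cloud resources.

```shell
cat > /tmp/hello.yaml << 'EOF'
heat_template_version: 2018-08-31

description: Minimal template used only to exercise the Heat engine

outputs:
  hello:
    description: A static output value
    value: "heat is working"
EOF

source ~/.admin-openrc
openstack stack create -t /tmp/hello.yaml hello-stack
openstack stack output show hello-stack hello
openstack stack delete --yes hello-stack
```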
## Quick Deployment Based on the OpenStack SIG Development Tool oos

oos (openEuler OpenStack SIG) is the command-line tool provided by the OpenStack SIG. Its `oos env` sub-commands provide ansible scripts for one-click deployment of OpenStack (either `all in one` or a three-node `cluster`), which users can use to quickly deploy an OpenStack environment based on openEuler RPMs. The oos tool supports two deployment modes: connecting to a cloud provider (currently only the Huawei Cloud provider is supported) and managing existing hosts. The following explains how to use oos by deploying an `all in one` OpenStack environment on Huawei Cloud.

1. Install the oos tool:

```shell
pip install openstack-sig-tool
```

2. Configure the Huawei Cloud provider information.

Open the /usr/local/etc/oos/oos.conf file and adjust the configuration to the Huawei Cloud resources you own:

```
[huaweicloud]
ak =
sk =
region = ap-southeast-3
root_volume_size = 100
data_volume_size = 100
security_group_name = oos
image_format = openEuler-%%(release)s-%%(arch)s
vpc_name = oos_vpc
subnet1_name = oos_subnet1
subnet2_name = oos_subnet2
```

3. Configure the OpenStack environment information.

Open the /usr/local/etc/oos/oos.conf file and adjust the configuration to the current machine environment and your needs. The content is as follows:

```
[environment]
mysql_root_password = root
mysql_project_password = root
rabbitmq_password = root
project_identity_password = root
enabled_service = keystone,neutron,cinder,placement,nova,glance,horizon,aodh,ceilometer,cyborg,gnocchi,kolla,heat,swift,trove,tempest
neutron_provider_interface_name = br-ex
default_ext_subnet_range = 10.100.100.0/24
default_ext_subnet_gateway = 10.100.100.1
neutron_dataplane_interface_name = eth1
cinder_block_device = vdb
swift_storage_devices = vdc
swift_hash_path_suffix = ash
swift_hash_path_prefix = has
glance_api_workers = 2
cinder_api_workers = 2
nova_api_workers = 2
nova_metadata_api_workers = 2
nova_conductor_workers = 2
nova_scheduler_workers = 2
neutron_api_workers = 2
horizon_allowed_host = *
kolla_openeuler_plugin = false
```

Key options:

| Option | Description |
|:--|:--|
| enabled_service | List of services to install; trim it according to your needs |
| neutron_provider_interface_name | Name of the neutron L3 bridge |
| default_ext_subnet_range | IP range of the neutron private network |
| default_ext_subnet_gateway | Gateway of the neutron private network |
| neutron_dataplane_interface_name | NIC used by neutron; a new, dedicated NIC is recommended to avoid conflicts with the existing NIC and keep the all-in-one host from losing connectivity |
| cinder_block_device | Name of the block device used by cinder |
| swift_storage_devices | Names of the block devices used by swift |
| kolla_openeuler_plugin | Whether to enable the kolla plugin. When set to True, kolla supports deploying openEuler containers |

4. Create an openEuler 22.03-LTS-SP4 x86_64 virtual machine on Huawei Cloud for the `all in one` OpenStack deployment:

```shell
# sshpass is used during `oos env create` to configure password-free access to the target VM
dnf install sshpass
oos env create -r 22.03-lts-sp4 -f small -a x86 -n test-oos all_in_one
```

The full list of parameters can be viewed with `oos env create --help`.

5. Deploy the OpenStack `all in one` environment:

```shell
oos env setup test-oos -r wallaby
```

The full list of parameters can be viewed with `oos env setup --help`.

6. Initialize the tempest environment.

If you want to run tempest tests against this environment, run `oos env init`, which automatically creates the OpenStack resources tempest needs:

```shell
oos env init test-oos
```

After the command succeeds, a mytest directory is generated under the user's home directory; enter it and you can run `tempest run`.

If the OpenStack environment is deployed in host-managed mode instead, the overall flow is the same as for Huawei Cloud above: steps 1, 3, 5 and 6 are unchanged, step 2 (the Huawei Cloud provider configuration) is dropped, and step 4 changes from creating a VM on Huawei Cloud to taking over an existing host:

```shell
# sshpass is used during `oos env create` to configure password-free access to the target host
dnf install sshpass
oos env manage -r 22.03-lts-sp4 -i TARGET_MACHINE_IP -p TARGET_MACHINE_PASSWD -n test-oos
```
Replace TARGET_MACHINE_IP with the IP of the target machine and TARGET_MACHINE_PASSWD with its password. The full list of parameters can be viewed with `oos env manage --help`.

# OpenStack Yoga Deployment Guide

Contents:

- RPM-based deployment
  - Environment preparation, clock synchronization, installing the database, the message queue, and the cache service
  - Deploying the services: Keystone, Glance, Placement, Nova, Neutron, Cinder, Horizon, Ironic, Trove, Swift, Cyborg, Aodh, Gnocchi, Ceilometer, Heat, Tempest
- Deployment based on the OpenStack SIG development tool oos
- Deployment based on the OpenStack SIG deployment tool opensd
  - Deployment steps:
    1. Information to confirm before deployment
    2. Creating ceph pools and authentication (optional): 2.1 create the pools; 2.2 initialize the pools; 2.3 create the user authentication
    3. Configuring lvm (optional)
    4. Configuring the yum repo: 4.1 back up the yum sources; 4.2 configure the yum repo; 4.3 refresh the yum cache
    5. Installing opensd: 5.1 clone the opensd source code and install it
    6. Setting up SSH mutual trust: 6.1 generate the key pair; 6.2 generate the host IP address file; 6.3 change the password and run the script; 6.4 set up trust between the deployment node and the ceph monitors (optional)
    7. Configuring opensd: 7.1 generate random passwords; 7.2 configure the inventory file; 7.3 configure the global variables; 7.4 check the SSH connection status of all nodes
    8. Running the deployment: 8.1 run bootstrap; 8.2 reboot the servers; 8.3 run the pre-deployment checks; 8.4 run the deployment
- OpenStack Helm based deployment: introduction, prerequisites, automatic installation, manual installation, using OpenStack-Helm
- Installing new features: Kolla support for iSula, Nova support for high/low-priority virtual machines
This document is the OpenStack deployment guide for openEuler 22.09 written by the openEuler OpenStack SIG; its content is provided by SIG contributors. If you have any questions or find any problems while reading it, please contact the SIG maintainers or file an issue directly.

## Conventions

This section describes some general conventions used throughout the document.

| Name | Definition |
|:--|:--|
| RABBIT_PASS | rabbitmq password, set by the user, used in the configuration of every OpenStack service |
| CINDER_PASS | Password of the cinder service's keystone user, used in the cinder configuration |
| CINDER_DBPASS | Password of the cinder service database, used in the cinder configuration |
| KEYSTONE_DBPASS | Password of the keystone service database, used in the keystone configuration |
| GLANCE_PASS | Password of the glance service's keystone user, used in the glance configuration |
| GLANCE_DBPASS | Password of the glance service database, used in the glance configuration |
| HEAT_PASS | Password of the heat user registered in keystone, used in the heat configuration |
| HEAT_DBPASS | Password of the heat service database, used in the heat configuration |
| CYBORG_PASS | Password of the cyborg user registered in keystone, used in the cyborg configuration |
| CYBORG_DBPASS | Password of the cyborg service database, used in the cyborg configuration |
| NEUTRON_PASS | Password of the neutron user registered in keystone, used in the neutron configuration |
| NEUTRON_DBPASS | Password of the neutron service database, used in the neutron configuration |
| PROVIDER_INTERFACE_NAME | Name of the physical network interface, used in the neutron configuration |
| OVERLAY_INTERFACE_IP_ADDRESS | Management IP address of the Controller node, used in the neutron configuration |
| METADATA_SECRET | Secret of the metadata proxy, used in the nova and neutron configurations |
| PLACEMENT_DBPASS | Password of the placement service database, used in the placement configuration |
| PLACEMENT_PASS | Password of the placement user registered in keystone, used in the placement configuration |
| NOVA_DBPASS | Password of the nova service database, used in the nova configuration |
| NOVA_PASS | Password of the nova user registered in keystone, used in the nova, cyborg, neutron and other configurations |
| IRONIC_DBPASS | Password of the ironic service database, used in the ironic configuration |
| IRONIC_PASS | Password of the ironic user registered in keystone, used in the ironic configuration |
| IRONIC_INSPECTOR_DBPASS | Password of the ironic-inspector service database, used in the ironic-inspector configuration |
| IRONIC_INSPECTOR_PASS | Password of the ironic-inspector user registered in keystone, used in the ironic-inspector configuration |
The OpenStack SIG provides several ways to deploy OpenStack on openEuler to cover different user scenarios; choose the one that fits your needs.

## RPM-based Deployment

### Environment Preparation

This document deploys OpenStack on the classic three-node topology. The three nodes are the control node (Controller), the compute node (Compute) and the storage node (Storage). The storage node usually runs only storage services; when resources are limited it does not have to be a separate node, and its services can simply be deployed on the compute node.

First prepare three openEuler 22.09 environments; download the image matching your environment and install it: ISO image or qcow2 image.

The installation below follows this topology:

- controller: 192.168.0.2
- compute: 192.168.0.3
- storage: 192.168.0.4

If the IPs of your environment differ, adjust the corresponding configuration files to your environment's IPs.

The three-node service topology of this document is shown in the figure of the original guide (it covers only the core services Keystone, Glance, Nova, Cinder and Neutron; for the other services refer to the corresponding deployment sections).

Before the formal deployment, perform the following configuration and checks on every node.

Make sure the EPOL yum repository is configured.

Open the /etc/yum.repos.d/openEuler.repo file and check whether the [EPOL] repository exists; if not, add the following:

```
[EPOL]
name=EPOL
baseurl=http://repo.openeuler.org/openEuler-22.09/EPOL/main/$basearch/
enabled=1
gpgcheck=1
gpgkey=http://repo.openeuler.org/openEuler-22.09/OS/$basearch/RPM-GPG-KEY-openEuler
```

Whether or not this file is changed, the first step on a new machine is always to refresh the yum sources by running `yum update`.

Modify the host names and the name mapping.

Change the host name on each node; taking controller as an example:

```shell
hostnamectl set-hostname controller
vi /etc/hostname   # change the content to controller
```

Then edit the /etc/hosts file on every node and add the following content:

```
192.168.0.2 controller
192.168.0.3 compute
192.168.0.4 storage
```

### Clock Synchronization
In a cluster environment the time of every node must stay consistent, which is usually guaranteed by time synchronization software. This document uses chrony. The steps are as follows.

Controller node:

Install the service:

```shell
dnf install chrony
```

Edit the /etc/chrony.conf configuration file and add one line:

```
# Which IPs are allowed to synchronize their clocks from this node
allow 192.168.0.0/24
```

Restart the service:

```shell
systemctl restart chronyd
```

Other nodes:

Install the service:

```shell
dnf install chrony
```

Edit the /etc/chrony.conf configuration file and add one line:

```
# NTP_SERVER is the controller IP, meaning time is obtained from that machine. Use 192.168.0.2 here,
# or the controller name already configured in /etc/hosts.
server NTP_SERVER iburst
```

At the same time, comment out the `pool pool.ntp.org iburst` line so that the clock is not synchronized from the public Internet.

Restart the service:

```shell
systemctl restart chronyd
```

After the configuration is done, check the result: run `chronyc sources` on a non-controller node. Output similar to the following indicates the node successfully synchronizes its clock from the controller:

```
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^* 192.168.0.2                   4   6     7     0  -1406ns[  +55us] +/-   16ms
```

### Installing the Database

The database is installed on the control node; mariadb is recommended here.

Install the packages:

```shell
dnf install mysql-config mariadb mariadb-server python3-PyMySQL
```

Add the configuration file /etc/my.cnf.d/openstack.cnf with the following content:

```
[mysqld]
bind-address = 192.168.0.2
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
```

Start the server:

```shell
systemctl start mariadb
```

Initialize the database, following the prompts:

```shell
mysql_secure_installation
```

An example session:
```
NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MariaDB
      SERVERS IN PRODUCTION USE!  PLEASE READ EACH STEP CAREFULLY!

In order to log into MariaDB to secure it, we'll need the current
password for the root user. If you've just installed MariaDB, and
haven't set the root password yet, you should just press enter here.

Enter current password for root (enter for none):   # Enter the password here; since we are initializing the DB, just press Enter
OK, successfully used password, moving on...

Setting the root password or using the unix_socket ensures that nobody
can log into the MariaDB root user without the proper authorisation.

You already have your root account protected, so you can safely answer 'n'.

Switch to unix_socket authentication [Y/n] N   # answer N as prompted
Enabled successfully!
Reloading privilege tables..
 ... Success!

You already have your root account protected, so you can safely answer 'n'.

Change the root password? [Y/n] Y   # enter Y and change the password
New password:
Re-enter new password:
Password updated successfully!
Reloading privilege tables..
 ... Success!

By default, a MariaDB installation has an anonymous user, allowing anyone
to log into MariaDB without having to have a user account created for
them. This is intended only for testing, and to make the installation
go a bit smoother. You should remove them before moving into a
production environment.

Remove anonymous users? [Y/n] Y   # enter Y to remove the anonymous users
 ... Success!

Normally, root should only be allowed to connect from 'localhost'. This
ensures that someone cannot guess at the root password from the network.

Disallow root login remotely? [Y/n] Y   # enter Y to disable remote root login
 ... Success!

By default, MariaDB comes with a database named 'test' that anyone can
access. This is also intended only for testing, and should be removed
before moving into a production environment.

Remove test database and access to it? [Y/n] Y   # enter Y to remove the test database
 - Dropping test database...
 ... Success!
 - Removing privileges on test database...
 ... Success!

Reloading the privilege tables will ensure that all changes made so far
will take effect immediately.

Reload privilege tables now? [Y/n] Y   # enter Y to reload the configuration
 ... Success!

Cleaning up...

All done!  If you've completed all of the above steps, your MariaDB
installation should now be secure.
```

Verify: using the password set in the previous step, check whether you can log in to mariadb:

```shell
mysql -uroot -p
```

### Installing the Message Queue

The message queue is installed on the control node; rabbitmq is recommended here.

Install the packages:

```shell
dnf install rabbitmq-server
```

Start the service:

```shell
systemctl start rabbitmq-server
```

Configure the openstack user. RABBIT_PASS is the password the OpenStack services use to log in to the message queue; it must stay consistent with the configuration of each service later on:

```shell
rabbitmqctl add_user openstack RABBIT_PASS
rabbitmqctl set_permissions openstack ".*" ".*" ".*"
```

### Installing the Cache Service

The cache service is installed on the control node; Memcached is recommended here.

Install the packages:

```shell
dnf install memcached python3-memcached
```

Edit the configuration file /etc/sysconfig/memcached:

```
OPTIONS="-l 127.0.0.1,::1,controller"
```

Start the service:

```shell
systemctl start memcached
```
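Optionally (not part of the original steps), you can confirm both middleware services are reachable before moving on; these are standard rabbitmqctl and systemd checks.

```shell
# The openstack user and its permissions should be listed
rabbitmqctl list_users
rabbitmqctl list_permissions

# memcached should be active and listening on port 11211
systemctl status memcached
ss -tlnp | grep 11211
```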
### Deploying the Services

#### Keystone

Keystone is the identity service provided by OpenStack. It is the entry point of the whole OpenStack deployment, providing tenant isolation, user authentication, service discovery and other functions, and must be installed.

Create the keystone database and grant privileges:

```shell
mysql -u root -p
MariaDB [(none)]> CREATE DATABASE keystone;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
IDENTIFIED BY 'KEYSTONE_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
IDENTIFIED BY 'KEYSTONE_DBPASS';
MariaDB [(none)]> exit
```

Note: replace KEYSTONE_DBPASS to set the password of the Keystone database.

Install the packages:

```shell
dnf install openstack-keystone httpd mod_wsgi
```

Configure keystone:

```shell
vim /etc/keystone/keystone.conf
```

```
[database]
connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone

[token]
provider = fernet
```

Explanation: the [database] section configures the database entry; the [token] section configures the token provider.

Synchronize the database:

```shell
su -s /bin/sh -c "keystone-manage db_sync" keystone
```

Initialize the Fernet key repositories:

```shell
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
```

Bootstrap the service:

```shell
keystone-manage bootstrap --bootstrap-password ADMIN_PASS \
    --bootstrap-admin-url http://controller:5000/v3/ \
    --bootstrap-internal-url http://controller:5000/v3/ \
    --bootstrap-public-url http://controller:5000/v3/ \
    --bootstrap-region-id RegionOne
```

Note: replace ADMIN_PASS to set the password of the admin user.

Configure the Apache HTTP server.

Open httpd.conf and configure it:

```shell
# Configuration file that needs to be modified
vim /etc/httpd/conf/httpd.conf
# Modify the following item; add it if it does not exist
ServerName controller
```

Create the symbolic link:

```shell
ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
```

Explanation: the ServerName item must reference the control node.

Note: if the ServerName item does not exist, it must be created.

Start the Apache HTTP service:

```shell
systemctl enable httpd.service
systemctl start httpd.service
```

Create the environment variable configuration:

```shell
cat << EOF >> ~/.admin-openrc
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
EOF
```

Note: replace ADMIN_PASS with the password of the admin user.

Create the domain, projects, users and roles in turn. python3-openstackclient must be installed first:

```shell
dnf install python3-openstackclient
```

Import the environment variables:

```shell
source ~/.admin-openrc
```

Create the project `service`; the domain `default` was already created during `keystone-manage bootstrap`:

```shell
openstack domain create --description "An Example Domain" example
openstack project create --domain default --description "Service Project" service
```

Create the (non-admin) project `myproject`, user `myuser` and role `myrole`, and add the role `myrole` to `myproject` and `myuser`:

```shell
openstack project create --domain default --description "Demo Project" myproject
openstack user create --domain default --password-prompt myuser
openstack role create myrole
openstack role add --project myproject --user myuser myrole
```

Verification.

Unset the temporary OS_AUTH_URL and OS_PASSWORD environment variables:

```shell
source ~/.admin-openrc
unset OS_AUTH_URL OS_PASSWORD
```

Request a token for the admin user:

```shell
openstack --os-auth-url http://controller:5000/v3 \
    --os-project-domain-name Default --os-user-domain-name Default \
    --os-project-name admin --os-username admin token issue
```

Request a token for the myuser user:

```shell
openstack --os-auth-url http://controller:5000/v3 \
    --os-project-domain-name Default --os-user-domain-name Default \
    --os-project-name myproject --os-username myuser token issue
```
#### Glance

Glance is the image service provided by OpenStack. It is responsible for uploading and downloading virtual machine and bare-metal images and must be installed.

Controller node:

Create the glance database and grant privileges:

```shell
mysql -u root -p
MariaDB [(none)]> CREATE DATABASE glance;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
IDENTIFIED BY 'GLANCE_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
IDENTIFIED BY 'GLANCE_DBPASS';
MariaDB [(none)]> exit
```

Note: replace GLANCE_DBPASS to set the password of the glance database.

Initialize the glance resource objects.

Import the environment variables:

```shell
source ~/.admin-openrc
```

When creating the user, the command line prompts for a password; enter a password of your choice and replace GLANCE_PASS below with it.

```shell
openstack user create --domain default --password-prompt glance
User Password:
Repeat User Password:
```

Add the glance user to the service project and assign the admin role:

```shell
openstack role add --project service --user glance admin
```

Create the glance service entity:

```shell
openstack service create --name glance --description "OpenStack Image" image
```

Create the glance API endpoints:

```shell
openstack endpoint create --region RegionOne image public http://controller:9292
openstack endpoint create --region RegionOne image internal http://controller:9292
openstack endpoint create --region RegionOne image admin http://controller:9292
```

Install the packages:

```shell
dnf install openstack-glance
```

Modify the glance configuration file:

```shell
vim /etc/glance/glance-api.conf
```

```
[database]
connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = GLANCE_PASS

[paste_deploy]
flavor = keystone

[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
```

Explanation: the [database] section configures the database entry; the [keystone_authtoken] and [paste_deploy] sections configure the identity service entry; the [glance_store] section configures local filesystem storage and the location of the image files.

Synchronize the database:

```shell
su -s /bin/sh -c "glance-manage db_sync" glance
```

Start the service:

```shell
systemctl enable openstack-glance-api.service
systemctl start openstack-glance-api.service
```

Verification.

Import the environment variables:

```shell
source ~/.admin-openrc
```

Download an image.

x86 image download:

```shell
wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
```

arm image download:

```shell
wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-aarch64-disk.img
```

Note: if your environment is on the Kunpeng architecture, download the aarch64 version of the image; the cirros-0.5.2-aarch64-disk.img image has been tested.
Upload the image to the Image service:

```shell
openstack image create --disk-format qcow2 --container-format bare \
    --file cirros-0.4.0-x86_64-disk.img --public cirros
```

Confirm the image was uploaded and verify its attributes:

```shell
openstack image list
```

#### Placement

Placement is the resource scheduling component provided by OpenStack. It is generally not user-facing and is called by components such as Nova; it is installed on the control node.

Before installing and configuring the Placement service, the corresponding database, service credentials and API endpoints must be created.

Create the database.

Access the database service as the root user:

```shell
mysql -u root -p
```

Create the placement database:

```shell
MariaDB [(none)]> CREATE DATABASE placement;
```

Grant database access:

```shell
MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' \
IDENTIFIED BY 'PLACEMENT_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' \
IDENTIFIED BY 'PLACEMENT_DBPASS';
```

Replace PLACEMENT_DBPASS with the placement database access password.

Exit the database client:

```shell
exit
```

Configure the user and endpoints.

Source the admin credentials to get admin command-line privileges:

```shell
source ~/.admin-openrc
```

Create the placement user and set its password:

```shell
openstack user create --domain default --password-prompt placement
User Password:
Repeat User Password:
```

Add the placement user to the service project and assign the admin role:

```shell
openstack role add --project service --user placement admin
```

Create the placement service entity:

```shell
openstack service create --name placement \
    --description "Placement API" placement
```

Create the Placement API service endpoints:

```shell
openstack endpoint create --region RegionOne \
    placement public http://controller:8778
openstack endpoint create --region RegionOne \
    placement internal http://controller:8778
openstack endpoint create --region RegionOne \
    placement admin http://controller:8778
```

Install and configure the components.

Install the packages:

```shell
dnf install openstack-placement-api
```

Edit the /etc/placement/placement.conf configuration file and complete the following steps.

In the [placement_database] section, configure the database entry:

```
[placement_database]
connection = mysql+pymysql://placement:PLACEMENT_DBPASS@controller/placement
```

Replace PLACEMENT_DBPASS with the password of the placement database.

In the [api] and [keystone_authtoken] sections, configure the identity service entry:

```
[api]
auth_strategy = keystone

[keystone_authtoken]
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = placement
password = PLACEMENT_PASS
```

Replace PLACEMENT_PASS with the password of the placement user.

Synchronize the database to populate the Placement database:

```shell
su -s /bin/sh -c "placement-manage db sync" placement
```

Start the service by restarting httpd:

```shell
systemctl restart httpd
```
Verification.

Source the admin credentials to get admin command-line privileges:

```shell
source ~/.admin-openrc
```

Run the status check:

```shell
placement-status upgrade check
```

```
+----------------------------------------------------------------------+
| Upgrade Check Results                                                |
+----------------------------------------------------------------------+
| Check: Missing Root Provider IDs                                     |
| Result: Success                                                      |
| Details: None                                                        |
+----------------------------------------------------------------------+
| Check: Incomplete Consumers                                          |
| Result: Success                                                      |
| Details: None                                                        |
+----------------------------------------------------------------------+
| Check: Policy File JSON to YAML Migration                            |
| Result: Failure                                                      |
| Details: Your policy file is JSON-formatted which is deprecated. You |
|   need to switch to YAML-formatted file. Use the                     |
|   ``oslopolicy-convert-json-to-yaml`` tool to convert the            |
|   existing JSON-formatted files to YAML in a backwards-              |
|   compatible manner: https://docs.openstack.org/oslo.policy/         |
|   latest/cli/oslopolicy-convert-json-to-yaml.html.                   |
+----------------------------------------------------------------------+
```

Here the result of `Policy File JSON to YAML Migration` is Failure. This is because JSON-formatted policy files in Placement have been deprecated since the Wallaby release. As the hint suggests, the `oslopolicy-convert-json-to-yaml` tool can be used to convert the existing JSON-formatted policy file to YAML:

```shell
oslopolicy-convert-json-to-yaml --namespace placement \
    --policy-file /etc/placement/policy.json \
    --output-file /etc/placement/policy.yaml
mv /etc/placement/policy.json{,.bak}
```

Note: in the current environment this issue can be ignored; it does not affect operation.

Run commands against the placement API.

Install the osc-placement plugin:

```shell
dnf install python3-osc-placement
```

List the available resource classes and traits:

```shell
openstack --os-placement-api-version 1.2 resource class list --sort-column name
```

```
+----------------------------+
| name                       |
+----------------------------+
| DISK_GB                    |
| FPGA                       |
| ...                        |
```

```shell
openstack --os-placement-api-version 1.6 trait list --sort-column name
```

```
+---------------------------------------+
| name                                  |
+---------------------------------------+
| COMPUTE_ACCELERATORS                  |
| COMPUTE_ARCH_AARCH64                  |
| ...                                   |
```
## Nova

Nova is the OpenStack compute service, responsible for creating and releasing virtual machines.

### Controller node

Perform the following operations on the controller node.

**Create the databases**

Access the database service as the root user:

```
mysql -u root -p
```

Create the nova_api, nova, and nova_cell0 databases:

```
MariaDB [(none)]> CREATE DATABASE nova_api;
MariaDB [(none)]> CREATE DATABASE nova;
MariaDB [(none)]> CREATE DATABASE nova_cell0;
```

Grant access to the databases:

```
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \
  IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \
  IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
  IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
  IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \
  IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \
  IDENTIFIED BY 'NOVA_DBPASS';
```

Replace NOVA_DBPASS with the password for the nova databases.

Exit the database client:

```
exit
```

**Configure the user and endpoints**

Source the admin credentials:

```
source ~/.admin-openrc
```

Create the nova user and set its password:

```
openstack user create --domain default --password-prompt nova
User Password:
Repeat User Password:
```

Add the nova user to the service project with the admin role:

```
openstack role add --project service --user nova admin
```

Create the nova service entity:

```
openstack service create --name nova \
  --description "OpenStack Compute" compute
```

Create the Nova API service endpoints:

```
openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1
openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1
openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1
```

**Install and configure the components**

Install the packages:

```
dnf install openstack-nova-api openstack-nova-conductor \
  openstack-nova-novncproxy openstack-nova-scheduler
```

Edit the /etc/nova/nova.conf configuration file as follows.

In the [DEFAULT] section, enable the compute and metadata APIs, configure the RabbitMQ message queue entry, set my_ip to the controller node's management IP, and explicitly define log_dir:

```
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
my_ip = 192.168.0.2
log_dir = /var/log/nova
```

Replace RABBIT_PASS with the password of the openstack account in RabbitMQ.

In the [api_database] and [database] sections, configure database access:

```
[api_database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api

[database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova
```

Replace NOVA_DBPASS with the password for the nova databases.

In the [api] and [keystone_authtoken] sections, configure identity service access:

```
[api]
auth_strategy = keystone

[keystone_authtoken]
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = NOVA_PASS
```

Replace NOVA_PASS with the nova user's password.

In the [vnc] section, enable and configure remote console access:

```
[vnc]
enabled = true
server_listen = $my_ip
server_proxyclient_address = $my_ip
```

In the [glance] section, configure the image service API address:

```
[glance]
api_servers = http://controller:9292
```

In the [oslo_concurrency] section, configure the lock path:

```
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
```

In the [placement] section, configure access to the Placement service:

```
[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = PLACEMENT_PASS
```

Replace PLACEMENT_PASS with the placement user's password.

**Synchronize the databases**

Synchronize the nova-api database:

```
su -s /bin/sh -c "nova-manage api_db sync" nova
```

Register the cell0 database:

```
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
```

Create the cell1 cell:

```
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
```

Synchronize the nova database:

```
su -s /bin/sh -c "nova-manage db sync" nova
```

Verify that cell0 and cell1 are registered correctly:

```
su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova
```

**Start the services**

```
systemctl enable \
  openstack-nova-api.service \
  openstack-nova-scheduler.service \
  openstack-nova-conductor.service \
  openstack-nova-novncproxy.service
systemctl start \
  openstack-nova-api.service \
  openstack-nova-scheduler.service \
  openstack-nova-conductor.service \
  openstack-nova-novncproxy.service
```
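At this point the Compute API should already be reachable through the endpoints created above. A quick sanity check, not part of the original procedure:

```
# Confirm the three compute endpoints registered earlier.
openstack endpoint list --service compute

# The API, conductor and scheduler services started above should report state "up".
openstack compute service list
```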
### Compute node

Perform the following operations on the compute node.

**Install the packages**

```
dnf install openstack-nova-compute
```

**Edit the /etc/nova/nova.conf configuration file**

In the [DEFAULT] section, enable the compute and metadata APIs, configure the RabbitMQ message queue entry, set my_ip to the compute node's management IP, and explicitly define compute_driver, instances_path, and log_dir:

```
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
my_ip = 192.168.0.3
compute_driver = libvirt.LibvirtDriver
instances_path = /var/lib/nova/instances
log_dir = /var/log/nova
```

Replace RABBIT_PASS with the password of the openstack account in RabbitMQ.

In the [api] and [keystone_authtoken] sections, configure identity service access:

```
[api]
auth_strategy = keystone

[keystone_authtoken]
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = NOVA_PASS
```

Replace NOVA_PASS with the nova user's password.

In the [vnc] section, enable and configure remote console access:

```
[vnc]
enabled = true
server_listen = $my_ip
server_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html
```

In the [glance] section, configure the image service API address:

```
[glance]
api_servers = http://controller:9292
```

In the [oslo_concurrency] section, configure the lock path:

```
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
```

In the [placement] section, configure access to the Placement service:

```
[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = PLACEMENT_PASS
```

Replace PLACEMENT_PASS with the placement user's password.

**Check whether the compute node supports hardware acceleration (x86_64)**

On an x86_64 processor, run the following command to check whether hardware acceleration is supported:

```
egrep -c '(vmx|svm)' /proc/cpuinfo
```

If the command returns 0, hardware acceleration is not supported and libvirt must be configured to use QEMU instead of the default KVM. Edit the [libvirt] section of /etc/nova/nova.conf:

```
[libvirt]
virt_type = qemu
```

If the command returns 1 or more, hardware acceleration is supported and no additional configuration is needed.

**Check whether the compute node supports hardware acceleration (arm64)**

On an arm64 processor, run the following command to check whether hardware acceleration is supported:

```
virt-host-validate   # provided by libvirt, which is already installed as a dependency of openstack-nova-compute
```

If FAIL is shown, hardware acceleration is not supported and libvirt must be configured to use QEMU instead of the default KVM:

```
QEMU: Checking if device /dev/kvm exists    : FAIL (Check that CPU and firmware supports virtualization and kvm module is loaded)
```

Edit the [libvirt] section of /etc/nova/nova.conf:

```
[libvirt]
virt_type = qemu
```

If PASS is shown, hardware acceleration is supported and no additional configuration is needed:

```
QEMU: Checking if device /dev/kvm exists    : PASS
```
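The two architecture-specific checks above can be collapsed into a small helper run on each compute node. This is only an illustrative sketch, not part of the original guide; it assumes the node is either x86_64 or aarch64 and only prints a recommendation rather than editing nova.conf:

```
#!/bin/sh
# Sketch: decide whether this compute node needs virt_type = qemu in [libvirt].
arch=$(uname -m)
if [ "$arch" = "x86_64" ]; then
    # Count CPUs advertising VMX (Intel) or SVM (AMD) extensions.
    accel=$(egrep -c '(vmx|svm)' /proc/cpuinfo)
elif [ "$arch" = "aarch64" ]; then
    # virt-host-validate reports FAIL for /dev/kvm when KVM is unavailable.
    if virt-host-validate qemu 2>/dev/null | grep -q '/dev/kvm exists.*FAIL'; then
        accel=0
    else
        accel=1
    fi
else
    accel=0
fi

if [ "$accel" -eq 0 ]; then
    echo "No KVM acceleration detected: set virt_type = qemu in the [libvirt] section of /etc/nova/nova.conf"
else
    echo "KVM acceleration available: the default virt_type can be kept"
fi
```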
\"format\": \"raw\" } }, \"targets\": [ { \"architecture\": \"aarch64\", \"machines\": [ \"virt-*\" ] } ], \"features\": [ ], \"tags\": [ ] } \u542f\u52a8\u670d\u52a1 systemctl enable libvirtd.service openstack-nova-compute.service systemctl start libvirtd.service openstack-nova-compute.service Controller\u8282\u70b9 \u5728\u63a7\u5236\u8282\u70b9\u6267\u884c\u4ee5\u4e0b\u64cd\u4f5c\u3002 \u6dfb\u52a0\u8ba1\u7b97\u8282\u70b9\u5230openstack\u96c6\u7fa4 source admin\u51ed\u8bc1\uff0c\u4ee5\u83b7\u53d6admin\u547d\u4ee4\u884c\u6743\u9650\uff1a source ~/.admin-openrc \u786e\u8ba4nova-compute\u670d\u52a1\u5df2\u8bc6\u522b\u5230\u6570\u636e\u5e93\u4e2d\uff1a openstack compute service list --service nova-compute \u53d1\u73b0\u8ba1\u7b97\u8282\u70b9\uff0c\u5c06\u8ba1\u7b97\u8282\u70b9\u6dfb\u52a0\u5230cell\u6570\u636e\u5e93\uff1a su -s /bin/sh -c \"nova-manage cell_v2 discover_hosts --verbose\" nova \u7ed3\u679c\u5982\u4e0b\uff1a Modules with known eventlet monkey patching issues were imported prior to eventlet monkey patching: urllib3. This warning can usually be ignored if the caller is only importing and not executing nova code. Found 2 cell mappings. Skipping cell0 since it does not contain hosts. Getting computes from cell 'cell1': 6dae034e-b2d9-4a6c-b6f0-60ada6a6ddc2 Checking host mapping for compute host 'compute': 6286a86f-09d7-4786-9137-1185654c9e2e Creating host mapping for compute host 'compute': 6286a86f-09d7-4786-9137-1185654c9e2e Found 1 unmapped computes in cell: 6dae034e-b2d9-4a6c-b6f0-60ada6a6ddc2 \u9a8c\u8bc1 \u5217\u51fa\u670d\u52a1\u7ec4\u4ef6\uff0c\u9a8c\u8bc1\u6bcf\u4e2a\u6d41\u7a0b\u90fd\u6210\u529f\u542f\u52a8\u548c\u6ce8\u518c\uff1a openstack compute service list \u5217\u51fa\u8eab\u4efd\u670d\u52a1\u4e2d\u7684API\u7aef\u70b9\uff0c\u9a8c\u8bc1\u4e0e\u8eab\u4efd\u670d\u52a1\u7684\u8fde\u63a5\uff1a openstack catalog list \u5217\u51fa\u955c\u50cf\u670d\u52a1\u4e2d\u7684\u955c\u50cf\uff0c\u9a8c\u8bc1\u4e0e\u955c\u50cf\u670d\u52a1\u7684\u8fde\u63a5\uff1a openstack image list \u68c0\u67e5cells\u662f\u5426\u8fd0\u4f5c\u6210\u529f\uff0c\u4ee5\u53ca\u5176\u4ed6\u5fc5\u8981\u6761\u4ef6\u662f\u5426\u5df2\u5177\u5907\u3002 nova-status upgrade check Neutron \u00b6 Neutron\u662fOpenStack\u7684\u7f51\u7edc\u670d\u52a1\uff0c\u63d0\u4f9b\u865a\u62df\u4ea4\u6362\u673a\u3001IP\u8def\u7531\u3001DHCP\u7b49\u529f\u80fd\u3002 Controller\u8282\u70b9 \u521b\u5efa\u6570\u636e\u5e93\u3001\u670d\u52a1\u51ed\u8bc1\u548c API \u670d\u52a1\u7aef\u70b9 \u521b\u5efa\u6570\u636e\u5e93\uff1a mysql -u root -p MariaDB [(none)]> CREATE DATABASE neutron; MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'NEUTRON_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'NEUTRON_DBPASS'; MariaDB [(none)]> exit; \u521b\u5efa\u7528\u6237\u548c\u670d\u52a1\uff0c\u5e76\u8bb0\u4f4f\u521b\u5efaneutron\u7528\u6237\u65f6\u8f93\u5165\u7684\u5bc6\u7801\uff0c\u7528\u4e8e\u914d\u7f6eNEUTRON_PASS\uff1a source ~/.admin-openrc openstack user create --domain default --password-prompt neutron openstack role add --project service --user neutron admin openstack service create --name neutron --description \"OpenStack Networking\" network \u90e8\u7f72 Neutron API \u670d\u52a1\uff1a openstack endpoint create --region RegionOne network public http://controller:9696 openstack endpoint create --region RegionOne network internal http://controller:9696 openstack endpoint create --region RegionOne network admin http://controller:9696 \u5b89\u88c5\u8f6f\u4ef6\u5305 dnf 
## Neutron

Neutron is the OpenStack networking service, providing virtual switching, IP routing, DHCP, and related functions.

### Controller node

**Create the database, service credentials, and API endpoints**

Create the database:

```
mysql -u root -p
MariaDB [(none)]> CREATE DATABASE neutron;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'NEUTRON_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'NEUTRON_DBPASS';
MariaDB [(none)]> exit;
```

Create the user and service. Remember the password entered when creating the neutron user; it is the value of NEUTRON_PASS used below:

```
source ~/.admin-openrc
openstack user create --domain default --password-prompt neutron
openstack role add --project service --user neutron admin
openstack service create --name neutron --description "OpenStack Networking" network
```

Deploy the Neutron API service endpoints:

```
openstack endpoint create --region RegionOne network public http://controller:9696
openstack endpoint create --region RegionOne network internal http://controller:9696
openstack endpoint create --region RegionOne network admin http://controller:9696
```

**Install the packages**

```
dnf install -y openstack-neutron openstack-neutron-linuxbridge ebtables ipset openstack-neutron-ml2
```

**Configure Neutron**

Edit /etc/neutron/neutron.conf:

```
[database]
connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron

[DEFAULT]
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = neutron
password = NEUTRON_PASS

[nova]
auth_url = http://controller:5000
auth_type = password
project_domain_name = Default
user_domain_name = Default
region_name = RegionOne
project_name = service
username = nova
password = NOVA_PASS

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
```

Configure ML2. The ML2 settings can be adjusted according to your needs; this document uses a provider network with linuxbridge.

Edit /etc/neutron/plugins/ml2/ml2_conf.ini:

```
[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security

[ml2_type_flat]
flat_networks = provider

[ml2_type_vxlan]
vni_ranges = 1:1000

[securitygroup]
enable_ipset = true
```

Edit /etc/neutron/plugins/ml2/linuxbridge_agent.ini:

```
[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME

[vxlan]
enable_vxlan = true
local_ip = OVERLAY_INTERFACE_IP_ADDRESS
l2_population = true

[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
```

Configure the Layer-3 agent. Edit /etc/neutron/l3_agent.ini:

```
[DEFAULT]
interface_driver = linuxbridge
```

Configure the DHCP agent. Edit /etc/neutron/dhcp_agent.ini:

```
[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
```

Configure the metadata agent. Edit /etc/neutron/metadata_agent.ini:

```
[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = METADATA_SECRET
```

Configure the nova service to use neutron. Edit /etc/nova/nova.conf:

```
[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
service_metadata_proxy = true
metadata_proxy_shared_secret = METADATA_SECRET
```

Create the symbolic link /etc/neutron/plugin.ini:

```
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
```

Synchronize the database:

```
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
```

Restart the nova API service:

```
systemctl restart openstack-nova-api
```

Start the networking services:

```
systemctl enable neutron-server.service neutron-linuxbridge-agent.service \
  neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service
systemctl start neutron-server.service neutron-linuxbridge-agent.service \
  neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service
```
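A quick way to confirm that the agents started above have registered with neutron-server (and, after the next subsection, that the compute-node agent has as well) is to list the agents. This check is not part of the original text:

```
# All listed agents should report Alive = :-) once they have checked in.
openstack network agent list
```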
### Compute node

**Install the packages**

```
dnf install openstack-neutron-linuxbridge ebtables ipset -y
```

**Configure Neutron**

Edit /etc/neutron/neutron.conf:

```
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = neutron
password = NEUTRON_PASS

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
```

Edit /etc/neutron/plugins/ml2/linuxbridge_agent.ini:

```
[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME

[vxlan]
enable_vxlan = true
local_ip = OVERLAY_INTERFACE_IP_ADDRESS
l2_population = true

[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
```

Configure the nova compute service to use neutron. Edit /etc/nova/nova.conf:

```
[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
```

Restart the nova-compute service:

```
systemctl restart openstack-nova-compute.service
```

Start the Neutron linuxbridge agent service:

```
systemctl enable neutron-linuxbridge-agent
systemctl start neutron-linuxbridge-agent
```
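With both the controller-side and compute-side agents running, a provider network and subnet can be created to exercise the configuration above. This is an illustrative example rather than part of the original procedure; `provider` matches the flat_networks / physical_interface_mappings name used in this guide, while the subnet addresses are placeholders for your environment:

```
source ~/.admin-openrc

# Flat provider network mapped to the physical interface configured above.
openstack network create --share --external \
  --provider-physical-network provider \
  --provider-network-type flat provider

# Subnet on the provider network; adjust ranges, gateway and DNS to your site.
openstack subnet create --network provider \
  --allocation-pool start=192.168.0.100,end=192.168.0.200 \
  --dns-nameserver 8.8.8.8 --gateway 192.168.0.1 \
  --subnet-range 192.168.0.0/24 provider
```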
## Cinder

Cinder is the OpenStack storage service, providing creation, release, and backup of block devices.

### Controller node

**Initialize the database**

CINDER_DBPASS is the user-defined password of the cinder database.

```
mysql -u root -p
MariaDB [(none)]> CREATE DATABASE cinder;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'CINDER_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'CINDER_DBPASS';
MariaDB [(none)]> exit
```

**Initialize the Keystone resource objects**

```
source ~/.admin-openrc
# When creating the user, the command line prompts for a password. Enter a password of your
# choice and replace CINDER_PASS below with it.
openstack user create --domain default --password-prompt cinder
openstack role add --project service --user cinder admin
openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s
```

**Install the packages**

```
dnf install openstack-cinder-api openstack-cinder-scheduler
```

**Edit the cinder configuration file /etc/cinder/cinder.conf**

```
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone
my_ip = 192.168.0.2

[database]
connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = cinder
password = CINDER_PASS

[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
```

**Synchronize the database**

```
su -s /bin/sh -c "cinder-manage db sync" cinder
```

**Edit the nova configuration file /etc/nova/nova.conf**

```
[cinder]
os_region_name = RegionOne
```

**Start the services**

```
systemctl restart openstack-nova-api
systemctl start openstack-cinder-api openstack-cinder-scheduler
```

### Storage node

The storage node must have at least one disk prepared in advance to serve as cinder's storage backend. The text below assumes that the storage node already has an unused disk named /dev/sdb; replace the device name according to your actual environment.

Cinder supports many kinds of backend storage. This guide uses the simplest one, LVM, as a reference; if you want to use another backend such as Ceph, configure it yourself.

**Install the packages**

```
dnf install lvm2 device-mapper-persistent-data scsi-target-utils rpcbind nfs-utils \
  openstack-cinder-volume openstack-cinder-backup
```

**Configure the LVM volume group**

```
pvcreate /dev/sdb
vgcreate cinder-volumes /dev/sdb
```

**Edit the cinder configuration file /etc/cinder/cinder.conf**

```
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone
my_ip = 192.168.0.4
enabled_backends = lvm
glance_api_servers = http://controller:9292

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = CINDER_PASS

[database]
connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder

[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
target_protocol = iscsi
target_helper = lioadm

[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
```
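Before starting cinder-volume in the next step, it can be worth double-checking that the volume group referenced by the [lvm] section actually exists on the storage node. This check is not part of the original text:

```
# Should list the cinder-volumes group created with vgcreate above.
vgs cinder-volumes
```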
**Configure cinder backup (optional)**

cinder-backup is an optional backup service. Cinder also supports many backup backends; this document uses swift storage. If you want to use a different backend such as NFS, configure it yourself, for example by following the NFS notes in the OpenStack official documentation.

Edit /etc/cinder/cinder.conf and add the following to [DEFAULT]:

```
[DEFAULT]
backup_driver = cinder.backup.drivers.swift.SwiftBackupDriver
backup_swift_url = SWIFT_URL
```

Here SWIFT_URL is the URL of the swift service in the environment. After the swift service has been deployed, obtain it by running:

```
openstack catalog show object-store
```

**Start the services**

```
systemctl start openstack-cinder-volume target
systemctl start openstack-cinder-backup   # optional
```

At this point the deployment of the Cinder service is complete. A simple verification can be run on the controller:

```
source ~/.admin-openrc
openstack volume service list
openstack volume list
```

## Horizon

Horizon is the web front end provided by OpenStack. It lets users control the OpenStack cluster with a browser instead of tedious CLI commands. Horizon is usually deployed on the controller node.

**Install the packages**

```
dnf install openstack-dashboard
```

**Edit the configuration file /etc/openstack-dashboard/local_settings**

```
OPENSTACK_HOST = "controller"
ALLOWED_HOSTS = ['*', ]
OPENSTACK_KEYSTONE_URL = "http://controller:5000/v3"

SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'controller:11211',
    }
}

OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "member"
WEBROOT = '/dashboard'
POLICY_FILES_PATH = "/etc/openstack-dashboard"

OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 3,
}
```

**Restart the service**

```
systemctl restart httpd
```

At this point the deployment of the horizon service is complete. Open a browser, enter http://192.168.0.2/dashboard, and the horizon login page opens.
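If no desktop browser is available on the deployment network, a rough reachability check of the dashboard can be done from the command line. This is only a convenience, not part of the original guide:

```
# An HTTP 200 (or a redirect to the login page) indicates the dashboard is being served.
curl -I http://192.168.0.2/dashboard
```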
## Ironic

Ironic is the OpenStack bare metal service. It is recommended when users need to provision bare metal machines; otherwise it does not need to be installed.

Perform the following operations on the controller node.

**Set up the database**

The bare metal service stores information in a database. Create an ironic database that the ironic user can access, replacing IRONIC_DBPASS with a suitable password:

```
mysql -u root -p
MariaDB [(none)]> CREATE DATABASE ironic CHARACTER SET utf8;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'localhost' \
  IDENTIFIED BY 'IRONIC_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'%' \
  IDENTIFIED BY 'IRONIC_DBPASS';
MariaDB [(none)]> exit
Bye
```

**Create the service user credentials**

Create the Bare Metal service users. Replace IRONIC_PASS with the ironic user's password and IRONIC_INSPECTOR_PASS with the ironic-inspector user's password:

```
openstack user create --password IRONIC_PASS \
  --email ironic@example.com ironic
openstack role add --project service --user ironic admin
openstack service create --name ironic \
  --description "Ironic baremetal provisioning service" baremetal
openstack service create --name ironic-inspector \
  --description "Ironic inspector baremetal provisioning service" baremetal-introspection
openstack user create --password IRONIC_INSPECTOR_PASS \
  --email ironic_inspector@example.com ironic-inspector
openstack role add --project service --user ironic-inspector admin
```

**Create the Bare Metal service endpoints**

```
openstack endpoint create --region RegionOne baremetal admin http://192.168.0.2:6385
openstack endpoint create --region RegionOne baremetal public http://192.168.0.2:6385
openstack endpoint create --region RegionOne baremetal internal http://192.168.0.2:6385
openstack endpoint create --region RegionOne baremetal-introspection internal http://192.168.0.2:5050/v1
openstack endpoint create --region RegionOne baremetal-introspection public http://192.168.0.2:5050/v1
openstack endpoint create --region RegionOne baremetal-introspection admin http://192.168.0.2:5050/v1
```

**Install the components**

```
dnf install openstack-ironic-api openstack-ironic-conductor python3-ironicclient
```

**Configure the ironic-api service**

The configuration file is /etc/ironic/ironic.conf.

Configure the location of the database via the connection option. Replace IRONIC_DBPASS with the password of the ironic user and DB_IP with the IP address of the DB server:

```
[database]
# The SQLAlchemy connection string used to connect to the
# database (string value)
# connection = mysql+pymysql://ironic:IRONIC_DBPASS@DB_IP/ironic
connection = mysql+pymysql://ironic:IRONIC_DBPASS@controller/ironic
```

Configure the ironic-api service to use the RabbitMQ message broker. Replace RPC_* with the RabbitMQ address and credentials:

```
[DEFAULT]
# A URL representing the messaging driver to use and its full
# configuration. (string value)
# transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
```

Users may also replace RabbitMQ with json-rpc by themselves.

Configure the ironic-api service to use the identity service credentials. Replace PUBLIC_IDENTITY_IP with the public IP of the identity server, PRIVATE_IDENTITY_IP with its private IP, IRONIC_PASS with the password of the ironic user in the identity service, and RABBIT_PASS with the password of the openstack account in RabbitMQ:

```
[DEFAULT]
# Authentication strategy used by ironic-api: one of
# "keystone" or "noauth". "noauth" should not be used in a
# production environment because all authentication will be
# disabled. (string value)
auth_strategy=keystone

host = controller
memcache_servers = controller:11211
enabled_network_interfaces = flat,noop,neutron
default_network_interface = noop
enabled_hardware_types = ipmi
enabled_boot_interfaces = pxe
enabled_deploy_interfaces = direct
default_deploy_interface = direct
enabled_inspect_interfaces = inspector
enabled_management_interfaces = ipmitool
enabled_power_interfaces = ipmitool
enabled_rescue_interfaces = no-rescue,agent
isolinux_bin = /usr/share/syslinux/isolinux.bin
logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s

[keystone_authtoken]
# Authentication type to load (string value)
auth_type=password
# Complete public Identity API endpoint (string value)
# www_authenticate_uri=http://PUBLIC_IDENTITY_IP:5000
www_authenticate_uri=http://controller:5000
# Complete admin Identity API endpoint. (string value)
# auth_url=http://PRIVATE_IDENTITY_IP:5000
auth_url=http://controller:5000
# Service username. (string value)
username=ironic
# Service account password. (string value)
password=IRONIC_PASS
# Service tenant name. (string value)
project_name=service
# Domain name containing project (string value)
project_domain_name=Default
# User's domain name (string value)
user_domain_name=Default

[agent]
deploy_logs_collect = always
deploy_logs_local_path = /var/log/ironic/deploy
deploy_logs_storage_backend = local
image_download_source = http
stream_raw_images = false
force_raw_images = false
verify_ca = False

[oslo_concurrency]

[oslo_messaging_notifications]
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
topics = notifications
driver = messagingv2

[oslo_messaging_rabbit]
amqp_durable_queues = True
rabbit_ha_queues = True

[pxe]
ipxe_enabled = false
pxe_append_params = nofb nomodeset vga=normal coreos.autologin ipa-insecure=1
image_cache_size = 204800
tftp_root=/var/lib/tftpboot/cephfs/
tftp_master_path=/var/lib/tftpboot/cephfs/master_images

[dhcp]
dhcp_provider = none
```

**Create the Bare Metal service database tables**

```
ironic-dbsync --config-file /etc/ironic/ironic.conf create_schema
```

**Restart the ironic-api service**

```
sudo systemctl restart openstack-ironic-api
```
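A minimal liveness check of the freshly restarted API, not part of the original text: the Ironic API root normally returns its version document without authentication, so a plain HTTP request against the endpoint registered above should be enough.

```
# Should print a small JSON document describing the available API versions.
curl -s http://192.168.0.2:6385 | python3 -m json.tool
```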
**Configure the ironic-conductor service**

The following is the standard configuration of the ironic-conductor service itself. ironic-conductor can be deployed on a different node from ironic-api; in this guide both are deployed on the controller node, so configuration items that repeat the previous ones can be skipped.

Set my_ip to the IP of the host running the conductor service:

```
[DEFAULT]
# IP address of this host. If unset, will determine the IP
# programmatically. If unable to do so, will use "127.0.0.1".
# (string value)
# my_ip=HOST_IP
my_ip = 192.168.0.2
```

Configure the location of the database. ironic-conductor should use the same configuration as ironic-api. Replace IRONIC_DBPASS with the password of the ironic user:

```
[database]
# The SQLAlchemy connection string to use to connect to the
# database. (string value)
connection = mysql+pymysql://ironic:IRONIC_DBPASS@controller/ironic
```

Configure the RabbitMQ message broker. ironic-conductor should use the same configuration as ironic-api. Replace RABBIT_PASS with the password of the openstack account in RabbitMQ:

```
[DEFAULT]
# A URL representing the messaging driver to use and its full
# configuration. (string value)
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
```

Users may also replace RabbitMQ with json-rpc by themselves.

**Configure credentials for accessing other OpenStack services**

To communicate with other OpenStack services, the Bare Metal service needs to authenticate against the OpenStack Identity service with a service user when it calls those services. The credentials of these users must be configured in each configuration section associated with the corresponding service:

- [neutron] - access to the OpenStack networking service
- [glance] - access to the OpenStack image service
- [swift] - access to the OpenStack object storage service
- [cinder] - access to the OpenStack block storage service
- [inspector] - access to the OpenStack bare metal introspection service
- [service_catalog] - a special entry that stores the credentials the Bare Metal service uses to discover its own API URL endpoint as registered in the OpenStack Identity service catalog

For simplicity, the same service user can be used for all services. For backward compatibility it should be the same user as configured in the [keystone_authtoken] section of ironic-api, but this is not required; a different service user can be created and configured for each service.

In the example below, the authentication information for the user accessing the OpenStack networking service is configured as follows:

- the networking service is deployed in the identity service region named RegionOne and registers only the public endpoint interface in the service catalog,
- requests use a specific CA SSL certificate for HTTPS connections,
- the same service user as configured for ironic-api is used,
- the dynamic password authentication plugin discovers a suitable identity service API version based on the other options.

Replace IRONIC_PASS with the ironic user's password:

```
[neutron]
# Authentication type to load (string value)
auth_type = password
# Authentication URL (string value)
auth_url=https://IDENTITY_IP:5000/
# Username (string value)
username=ironic
# User's password (string value)
password=IRONIC_PASS
# Project name to scope to (string value)
project_name=service
# Domain ID containing project (string value)
project_domain_id=default
# User's domain id (string value)
user_domain_id=default
# PEM encoded Certificate Authority to use when verifying
# HTTPs connections. (string value)
cafile=/opt/stack/data/ca-bundle.pem
# The default region_name for endpoint URL discovery. (string
# value)
region_name = RegionOne
# List of interfaces, in order of preference, for endpoint
# URL. (list value)
valid_interfaces=public

# Other reference configuration
[glance]
endpoint_override = http://controller:9292
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
auth_type = password
username = ironic
password = IRONIC_PASS
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service

[service_catalog]
region_name = RegionOne
project_domain_id = default
user_domain_id = default
project_name = service
password = IRONIC_PASS
username = ironic
auth_url = http://controller:5000
auth_type = password
```

By default, to communicate with another service, the Bare Metal service tries to discover a suitable endpoint for that service through the service catalog of the identity service. If you want to use a different endpoint for a particular service, specify it with the endpoint_override option in the Bare Metal service configuration file:

```
[neutron]
endpoint_override =
```

**Configure the allowed drivers and hardware types**

Set the hardware types allowed by ironic-conductor via enabled_hardware_types:

```
[DEFAULT]
enabled_hardware_types = ipmi
```

Configure the hardware interfaces:

```
enabled_boot_interfaces = pxe
enabled_deploy_interfaces = direct,iscsi
enabled_inspect_interfaces = inspector
enabled_management_interfaces = ipmitool
enabled_power_interfaces = ipmitool
```

Configure the interface defaults:

```
[DEFAULT]
default_deploy_interface = direct
default_network_interface = neutron
```

If any driver that uses direct deploy is enabled, the Swift backend of the image service must be installed and configured. The Ceph object gateway (RADOS gateway) is also supported as an image service backend.

**Restart the ironic-conductor service**

```
sudo systemctl restart openstack-ironic-conductor
```
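With ironic-api and ironic-conductor running, a node can be enrolled to exercise the ipmi hardware type and the interfaces enabled above. This example is illustrative and not part of the original text; the BMC address, credentials, MAC address, and node name are placeholders for your environment:

```
# Enroll a bare metal node using the ipmi hardware type enabled above.
openstack baremetal node create --driver ipmi --name bm-node-0 \
  --driver-info ipmi_address=BMC_IP \
  --driver-info ipmi_username=BMC_USER \
  --driver-info ipmi_password=BMC_PASSWORD

# Register the MAC address of the node's provisioning NIC
# (replace <node-uuid> with the UUID printed by the previous command).
openstack baremetal port create 11:22:33:44:55:66 --node <node-uuid>

# The node should now appear in the node list in the "enroll" state.
openstack baremetal node list
```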
**Configure the ironic-inspector service**

Install the components:

```
dnf install openstack-ironic-inspector
```

Create the database:

```
# mysql -u root -p
MariaDB [(none)]> CREATE DATABASE ironic_inspector CHARACTER SET utf8;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic_inspector.* TO 'ironic_inspector'@'localhost' \
  IDENTIFIED BY 'IRONIC_INSPECTOR_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic_inspector.* TO 'ironic_inspector'@'%' \
  IDENTIFIED BY 'IRONIC_INSPECTOR_DBPASS';
MariaDB [(none)]> exit
Bye
```

Configure /etc/ironic-inspector/inspector.conf.

Configure the location of the database via the connection option, replacing IRONIC_INSPECTOR_DBPASS with the password of the ironic_inspector user:

```
[database]
backend = sqlalchemy
connection = mysql+pymysql://ironic_inspector:IRONIC_INSPECTOR_DBPASS@controller/ironic_inspector
min_pool_size = 100
max_pool_size = 500
pool_timeout = 30
max_retries = 5
max_overflow = 200
db_retry_interval = 2
db_inc_retry_interval = True
db_max_retry_interval = 2
db_max_retries = 5
```

Configure the message queue address:

```
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
```

Set up keystone authentication:

```
[DEFAULT]
auth_strategy = keystone
timeout = 900
rootwrap_config = /etc/ironic-inspector/rootwrap.conf
logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s
log_dir = /var/log/ironic-inspector
state_path = /var/lib/ironic-inspector
use_stderr = False

[ironic]
api_endpoint = http://IRONIC_API_HOST_ADDRESS:6385
auth_type = password
auth_url = http://PUBLIC_IDENTITY_IP:5000
auth_strategy = keystone
ironic_url = http://IRONIC_API_HOST_ADDRESS:6385
os_region = RegionOne
project_name = service
project_domain_name = Default
user_domain_name = Default
username = IRONIC_SERVICE_USER_NAME
password = IRONIC_SERVICE_USER_PASSWORD

[keystone_authtoken]
auth_type = password
auth_url = http://controller:5000
www_authenticate_uri = http://controller:5000
project_domain_name = default
user_domain_name = default
project_name = service
username = ironic_inspector
password = IRONICPASSWD
region_name = RegionOne
memcache_servers = controller:11211
token_cache_time = 300

[processing]
add_ports = active
processing_hooks = $default_processing_hooks,local_link_connection,lldp_basic
ramdisk_logs_dir = /var/log/ironic-inspector/ramdisk
always_store_ramdisk_logs = true
store_data = none
power_off = false

[pxe_filter]
driver = iptables

[capabilities]
boot_mode=True
```

Configure the ironic-inspector dnsmasq service:

```
# Configuration file: /etc/ironic-inspector/dnsmasq.conf
port=0
interface=enp3s0                        # replace with the actual listening network interface
dhcp-range=192.168.0.40,192.168.0.50    # replace with the actual DHCP address range
bind-interfaces
enable-tftp
dhcp-match=set:efi,option:client-arch,7
dhcp-match=set:efi,option:client-arch,9
dhcp-match=aarch64, option:client-arch,11
dhcp-boot=tag:aarch64,grubaa64.efi
dhcp-boot=tag:!aarch64,tag:efi,grubx64.efi
dhcp-boot=tag:!aarch64,tag:!efi,pxelinux.0
tftp-root=/tftpboot                     # replace with the actual tftpboot directory
log-facility=/var/log/dnsmasq.log
```

Disable DHCP on the subnet of the ironic provisioning network:

```
openstack subnet set --no-dhcp 72426e89-f552-4dc4-9ac7-c4e131ce7f3c
```

Initialize the ironic-inspector database:

```
ironic-inspector-dbsync --config-file /etc/ironic-inspector/inspector.conf upgrade
```

Start the services:

```
systemctl enable --now openstack-ironic-inspector.service
systemctl enable --now openstack-ironic-inspector-dnsmasq.service
```

**Configure the httpd service**

Create the httpd root directory used by ironic and set its owner and group. The directory path must match the http_root configuration item in the [deploy] section of /etc/ironic/ironic.conf:

```
mkdir -p /var/lib/ironic/httproot
chown ironic.ironic /var/lib/ironic/httproot
```

Install and configure the httpd service.

Install the httpd service (skip if it is already installed):

```
dnf install httpd -y
```

Create the file /etc/httpd/conf.d/openstack-ironic-httpd.conf with the following content (the virtual host and directory stanzas were lost when this page was converted to plain text and are reconstructed here):

```
Listen 8080

<VirtualHost *:8080>
    ServerName ironic.openeuler.com
    ErrorLog "/var/log/httpd/openstack-ironic-httpd-error_log"
    CustomLog "/var/log/httpd/openstack-ironic-httpd-access_log" "%h %l %u %t \"%r\" %>s %b"
    DocumentRoot "/var/lib/ironic/httproot"
    <Directory "/var/lib/ironic/httproot">
        Options Indexes FollowSymLinks
        Require all granted
    </Directory>
    LogLevel warn
    AddDefaultCharset UTF-8
    EnableSendfile on
</VirtualHost>
```

Note that the listening port must match the port specified by the http_url configuration item in the [deploy] section of /etc/ironic/ironic.conf.

Restart the httpd service:

```
systemctl restart httpd
```

**Download or build the deploy ramdisk images**

Deploying a bare metal node requires two sets of images in total: deploy ramdisk images and user images. The deploy ramdisk images run the ironic-python-agent (IPA) service, through which Ironic prepares the environment of the bare metal node. The user images are what finally gets installed on the bare metal node for the user.

The ramdisk images can be built with the ironic-python-agent-builder or disk-image-builder tools; users may also choose other tools. When using the native tools, the corresponding packages must be installed.

For detailed usage see the official documentation. The community also provides pre-built deploy images that can be downloaded.

The following describes the complete process of building the deploy images used by ironic with ironic-python-agent-builder.

Install ironic-python-agent-builder:

```
dnf install python3-ironic-python-agent-builder python3-ironic-python-agent-builder-doc
```

or

```
pip3 install ironic-python-agent-builder
dnf install qemu-img git
```

Note: on a 22.09 system, when installing with dnf, both the main package and the doc package must be installed. The files packaged under /usr/share in the doc package are required at runtime; later system versions will merge these files into the python3-ironic-python-agent-builder package.

Build the image. Basic usage:

```
usage: ironic-python-agent-builder [-h] [-r RELEASE] [-o OUTPUT] [-e ELEMENT]
                                   [-b BRANCH] [-v] [--lzma]
                                   [--extra-args EXTRA_ARGS]
                                   [--elements-path ELEMENTS_PATH]
                                   distribution

positional arguments:
  distribution          Distribution to use

options:
  -h, --help            show this help message and exit
  -r RELEASE, --release RELEASE
                        Distribution release to use
  -o OUTPUT, --output OUTPUT
                        Output base file name
  -e ELEMENT, --element ELEMENT
                        Additional DIB element to use
  -b BRANCH, --branch BRANCH
                        If set, override the branch that is used for
                        ironic-python-agent and requirements
  -v, --verbose         Enable verbose logging in diskimage-builder
  --lzma                Use lzma compression for smaller images
  --extra-args EXTRA_ARGS
                        Extra arguments to pass to diskimage-builder
  --elements-path ELEMENTS_PATH
                        Path(s) to custom DIB elements separated by a colon
```

Example:

```
# -o specifies the name of the generated image
# "ubuntu" builds an image based on the ubuntu distribution
ironic-python-agent-builder -o my-ubuntu-ipa ubuntu
```
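The builder produces a kernel and an initramfs named after the -o value. To use them as deploy images they are typically uploaded to Glance; a minimal sketch, not part of the original text, assuming the two files produced by the example above and image names of your choosing:

```
# Upload the deploy kernel and ramdisk produced by ironic-python-agent-builder.
openstack image create deploy-kernel --public \
  --disk-format aki --container-format aki \
  --file my-ubuntu-ipa.kernel

openstack image create deploy-initrd --public \
  --disk-format ari --container-format ari \
  --file my-ubuntu-ipa.initramfs
```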
The architecture of the built image can be specified through the ARCH environment variable (the default is amd64). For an arm architecture, add:

```
export ARCH=aarch64
```

**Allow ssh login**

Initialize the environment variables to set the user name and password and enable passwordless sudo, and add the corresponding DIB elements with the -e option. Build the image as follows:

```
export DIB_DEV_USER_USERNAME=ipa
export DIB_DEV_USER_PWDLESS_SUDO=yes
export DIB_DEV_USER_PASSWORD='123'
ironic-python-agent-builder -o my-ssh-ubuntu-ipa -e selinux-permissive -e devuser ubuntu
```

**Specify the code repository**

Initialize the corresponding environment variables, then build the image:

```
# Clone the code directly from gerrit
DIB_REPOLOCATION_ironic_python_agent=https://review.opendev.org/openstack/ironic-python-agent
DIB_REPOREF_ironic_python_agent=stable/yoga

# Or specify a local repository and branch
DIB_REPOLOCATION_ironic_python_agent=/home/user/path/to/repo
DIB_REPOREF_ironic_python_agent=my-test-branch

ironic-python-agent-builder ubuntu
```

Reference: source-repositories.

**Notes**

The PXE configuration file template in upstream OpenStack does not support the arm64 architecture; the upstream OpenStack code needs to be modified by the user.

In the Wallaby release, community ironic still does not support arm64 UEFI PXE boot: the generated grub.cfg file (usually under /tftpboot/) has the wrong format, so PXE boot fails.

(Screenshot of the incorrectly generated grub.cfg omitted.) On the arm architecture, the commands that load the vmlinux and ramdisk images are `linux` and `initrd` respectively; the commands highlighted in the screenshot are the ones used for UEFI PXE boot on x86. Users need to modify the code that generates grub.cfg themselves.

TLS errors when ironic sends command-status query requests to IPA:

In the current versions, IPA and ironic both enable TLS authentication by default when sending requests to each other. Follow the official instructions to disable it.

Modify the ironic configuration file (/etc/ironic/ironic.conf), adding ipa-insecure=1 to the configuration below:

```
[agent]
verify_ca = False

[pxe]
pxe_append_params = nofb nomodeset vga=normal coreos.autologin ipa-insecure=1
```

In the ramdisk image, add the IPA configuration file /etc/ironic_python_agent/ironic_python_agent.conf and configure TLS as follows (the /etc/ironic_python_agent directory must be created in advance):

```
[DEFAULT]
enable_auto_tls = False
```

Set the permissions:

```
chown -R ipa.ipa /etc/ironic_python_agent/
```
In the ramdisk image, modify the service startup file of the IPA service to add the configuration file option. Edit /usr/lib/systemd/system/ironic-python-agent.service:

```
[Unit]
Description=Ironic Python Agent
After=network-online.target

[Service]
ExecStartPre=/sbin/modprobe vfat
ExecStart=/usr/local/bin/ironic-python-agent --config-file /etc/ironic_python_agent/ironic_python_agent.conf
Restart=always
RestartSec=30s

[Install]
WantedBy=multi-user.target
```

## Trove

Trove is the OpenStack database service. It is recommended when users want the database service provided by OpenStack; otherwise it does not need to be installed.

### Controller node

**Create the database**

The database service stores information in a database. Create a trove database that the trove user can access, replacing TROVE_DBPASS with a suitable password:

```
CREATE DATABASE trove CHARACTER SET utf8;
GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'localhost' IDENTIFIED BY 'TROVE_DBPASS';
GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'%' IDENTIFIED BY 'TROVE_DBPASS';
```

**Create the service credentials and API endpoints**

Create the service credentials:

```
# create the trove user
openstack user create --domain default --password-prompt trove
# add the admin role
openstack role add --project service --user trove admin
# create the database service
openstack service create --name trove --description "Database service" database
```

Create the API endpoints:

```
openstack endpoint create --region RegionOne database public http://controller:8779/v1.0/%\(tenant_id\)s
openstack endpoint create --region RegionOne database internal http://controller:8779/v1.0/%\(tenant_id\)s
openstack endpoint create --region RegionOne database admin http://controller:8779/v1.0/%\(tenant_id\)s
```

**Install Trove**

```
dnf install openstack-trove python-troveclient
```

**Edit the configuration files**

Edit /etc/trove/trove.conf:

```
[DEFAULT]
bind_host = 192.168.0.2
log_dir = /var/log/trove
network_driver = trove.network.neutron.NeutronDriver
network_label_regex = .*
management_security_groups =
nova_keypair = trove-mgmt
default_datastore = mysql
taskmanager_manager = trove.taskmanager.manager.Manager
trove_api_workers = 5
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
reboot_time_out = 300
usage_timeout = 900
agent_call_high_timeout = 1200
use_syslog = False
debug = True

[database]
connection = mysql+pymysql://trove:TROVE_DBPASS@controller/trove

[keystone_authtoken]
auth_url = http://controller:5000/v3/
auth_type = password
project_domain_name = Default
project_name = service
user_domain_name = Default
username = trove
password = TROVE_PASS

[service_credentials]
auth_url = http://controller:5000/v3/
region_name = RegionOne
project_name = service
project_domain_name = Default
user_domain_name = Default
username = trove
password = TROVE_PASS

[mariadb]
tcp_ports = 3306,4444,4567,4568

[mysql]
tcp_ports = 3306

[postgresql]
tcp_ports = 5432
```

Explanation:

- In the [DEFAULT] group, bind_host is the IP of the Trove controller node.
- transport_url is the RabbitMQ connection information; replace RABBIT_PASS with the RabbitMQ password.
- connection in the [database] group points at the database created for Trove in MySQL earlier.
- In Trove's user information, replace TROVE_PASS with the actual password of the trove user.

Edit /etc/trove/trove-guestagent.conf:

```
[DEFAULT]
log_file = trove-guestagent.log
log_dir = /var/log/trove/
ignore_users = os_admin
control_exchange = trove
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
rpc_backend = rabbit
command_process_timeout = 60
use_syslog = False
debug = True

[service_credentials]
auth_url = http://controller:5000/v3/
region_name = RegionOne
project_name = service
password = TROVE_PASS
project_domain_name = Default
user_domain_name = Default
username = trove

[mysql]
docker_image = your-registry/your-repo/mysql
backup_docker_image = your-registry/your-repo/db-backup-mysql:1.1.0
```

Explanation:

- guestagent is an independent component of Trove that must be built into the virtual machine image that Trove creates through Nova in advance. After a database instance has been created, the guestagent process starts and reports heartbeats to Trove through the message queue (RabbitMQ), so the RabbitMQ user and password must be configured here.
- transport_url is the RabbitMQ connection information; replace RABBIT_PASS with the RabbitMQ password.
- In Trove's user information, replace TROVE_PASS with the actual password of the trove user.
- Starting with the Victoria release, Trove uses a single unified image to run different database types; the database service runs inside a Docker container in the guest virtual machine.

**Synchronize the database**

```
su -s /bin/sh -c "trove-manage db_sync" trove
```

**Complete the installation**

```
# configure the services to start on boot
systemctl enable openstack-trove-api.service openstack-trove-taskmanager.service \
  openstack-trove-conductor.service
# start the services
systemctl start openstack-trove-api.service openstack-trove-taskmanager.service \
  openstack-trove-conductor.service
```
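Once the Trove services are running, the API can be exercised with the python-troveclient plugin installed above. A minimal, illustrative check that is not part of the original procedure (the datastore list stays empty until a datastore and guest image have been registered):

```
source ~/.admin-openrc

# Registered datastores (e.g. mysql) appear here once configured.
openstack datastore list

# Database instances created through Trove.
openstack database instance list
```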
**Storage nodes**

1. Install the supporting packages.

   ```shell
   dnf install openstack-swift-account openstack-swift-container openstack-swift-object
   dnf install xfsprogs rsync
   ```

2. Format the /dev/sdb and /dev/sdc devices as XFS.

   ```shell
   mkfs.xfs /dev/sdb
   mkfs.xfs /dev/sdc
   ```

3. Create the mount point directory structure.

   ```shell
   mkdir -p /srv/node/sdb
   mkdir -p /srv/node/sdc
   ```

4. Find the UUIDs of the new partitions.

   ```shell
   blkid
   ```

5. Edit /etc/fstab and add the following lines, filling in the UUIDs reported by blkid:

   ```
   UUID="" /srv/node/sdb xfs noatime 0 2
   UUID="" /srv/node/sdc xfs noatime 0 2
   ```

6. Mount the devices.

   ```shell
   mount /srv/node/sdb
   mount /srv/node/sdc
   ```

   Note: if you do not need the redundancy, a single device is enough for the steps above, and the rsync configuration below can be skipped.

7. (Optional) Create or edit the /etc/rsyncd.conf file with the following content:

   ```
   [DEFAULT]
   uid = swift
   gid = swift
   log file = /var/log/rsyncd.log
   pid file = /var/run/rsyncd.pid
   address = MANAGEMENT_INTERFACE_IP_ADDRESS

   [account]
   max connections = 2
   path = /srv/node/
   read only = False
   lock file = /var/lock/account.lock

   [container]
   max connections = 2
   path = /srv/node/
   read only = False
   lock file = /var/lock/container.lock

   [object]
   max connections = 2
   path = /srv/node/
   read only = False
   lock file = /var/lock/object.lock
   ```

   Replace `MANAGEMENT_INTERFACE_IP_ADDRESS` with the IP address of the management network on the storage node.

   Start the rsyncd service and enable it at boot:

   ```shell
   systemctl enable rsyncd.service
   systemctl start rsyncd.service
   ```

8. Configure the storage node.

   Edit the account-server.conf, container-server.conf and object-server.conf files in the /etc/swift directory, setting `bind_ip` to the IP address of the management network on the storage node:

   ```ini
   [DEFAULT]
   bind_ip = 192.168.0.4
   ```

   Ensure proper ownership of the mount point directory structure:

   ```shell
   chown -R swift:swift /srv/node
   ```

   Create the recon directory and ensure proper ownership of it:

   ```shell
   mkdir -p /var/cache/swift
   chown -R root:swift /var/cache/swift
   chmod -R 775 /var/cache/swift
   ```
**Create and distribute the rings on the controller node**

1. Create the account ring.

   Change to the /etc/swift directory:

   ```shell
   cd /etc/swift
   ```

   Create the base account.builder file (the three arguments are the partition power, the number of replicas, and the minimum number of hours before a partition can be moved again):

   ```shell
   swift-ring-builder account.builder create 10 1 1
   ```

   Add each storage node to the ring:

   ```shell
   swift-ring-builder account.builder add --region 1 --zone 1 \
       --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS \
       --port 6202 --device DEVICE_NAME \
       --weight 100
   ```

   Replace STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network on the storage node, and DEVICE_NAME with the name of a storage device on that node.

   Note: repeat this command for every storage device on every storage node.

   Verify the account ring contents and rebalance it:

   ```shell
   swift-ring-builder account.builder
   swift-ring-builder account.builder rebalance
   ```

2. Create the container ring.

   Change to the /etc/swift directory and create the base container.builder file:

   ```shell
   swift-ring-builder container.builder create 10 1 1
   ```

   Add each storage node to the ring:

   ```shell
   swift-ring-builder container.builder add --region 1 --zone 1 \
       --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6201 --device DEVICE_NAME \
       --weight 100
   ```

   Replace STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS and DEVICE_NAME as above.

   Note: repeat this command for every storage device on every storage node.

   Verify the container ring contents and rebalance it:

   ```shell
   swift-ring-builder container.builder
   swift-ring-builder container.builder rebalance
   ```

3. Create the object ring.

   Change to the /etc/swift directory and create the base object.builder file:

   ```shell
   swift-ring-builder object.builder create 10 1 1
   ```

   Add each storage node to the ring:

   ```shell
   swift-ring-builder object.builder add --region 1 --zone 1 \
       --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS \
       --port 6200 --device DEVICE_NAME \
       --weight 100
   ```

   Replace STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS and DEVICE_NAME as above.

   Note: repeat this command for every storage device on every storage node.

   Verify the object ring contents and rebalance it:

   ```shell
   swift-ring-builder object.builder
   swift-ring-builder object.builder rebalance
   ```

4. Distribute the ring configuration files.

   Copy account.ring.gz, container.ring.gz and object.ring.gz to the /etc/swift directory on every storage node and on any other node running the proxy service.

5. Edit the /etc/swift/swift.conf configuration file, replacing `test-hash` with unique values:

   ```ini
   [swift-hash]
   swift_hash_path_suffix = test-hash
   swift_hash_path_prefix = test-hash

   [storage-policy:0]
   name = Policy-0
   default = yes
   ```

   Copy the swift.conf file to the /etc/swift directory on every storage node and on any other node running the proxy service.

   On all nodes, ensure proper ownership of the configuration directory:

   ```shell
   chown -R root:swift /etc/swift
   ```
**Complete the installation** (a quick verification example follows this section)

On the controller node and on any other node running the proxy service, start the object storage proxy service and its dependencies, and enable them at boot:

```shell
systemctl enable openstack-swift-proxy.service memcached.service
systemctl start openstack-swift-proxy.service memcached.service
```

On the storage nodes, start the object storage services and enable them at boot:

```shell
systemctl enable openstack-swift-account.service \
    openstack-swift-account-auditor.service \
    openstack-swift-account-reaper.service \
    openstack-swift-account-replicator.service \
    openstack-swift-container.service \
    openstack-swift-container-auditor.service \
    openstack-swift-container-replicator.service \
    openstack-swift-container-updater.service \
    openstack-swift-object.service \
    openstack-swift-object-auditor.service \
    openstack-swift-object-replicator.service \
    openstack-swift-object-updater.service

systemctl start openstack-swift-account.service \
    openstack-swift-account-auditor.service \
    openstack-swift-account-reaper.service \
    openstack-swift-account-replicator.service \
    openstack-swift-container.service \
    openstack-swift-container-auditor.service \
    openstack-swift-container-replicator.service \
    openstack-swift-container-updater.service \
    openstack-swift-object.service \
    openstack-swift-object-auditor.service \
    openstack-swift-object-replicator.service \
    openstack-swift-object-updater.service
```
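As an optional sanity check, not part of the original steps, you can upload and download a small object once the proxy and storage services are running; the container name and file below are arbitrary examples:

```shell
source ~/.admin-openrc
# Show account statistics through the proxy
swift stat
# Create a container, upload a file, then fetch it back
openstack container create demo-container
openstack object create demo-container /etc/hosts
openstack object save demo-container /etc/hosts --file /tmp/hosts.check
```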
### Cyborg

Cyborg provides accelerator device support (GPU, FPGA, ASIC, NP, SoC, NVMe/NOF SSD, ODP, DPDK/SPDK, and so on) for OpenStack.

**Controller node**

1. Initialize the corresponding database.

   ```shell
   mysql -u root -p
   ```

   ```sql
   MariaDB [(none)]> CREATE DATABASE cyborg;
   MariaDB [(none)]> GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'localhost' IDENTIFIED BY 'CYBORG_DBPASS';
   MariaDB [(none)]> GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'%' IDENTIFIED BY 'CYBORG_DBPASS';
   MariaDB [(none)]> exit;
   ```

2. Create the user and service. Remember the password you enter when creating the cyborg user; it is the value of CYBORG_PASS used below.

   ```shell
   source ~/.admin-openrc
   openstack user create --domain default --password-prompt cyborg
   openstack role add --project service --user cyborg admin
   openstack service create --name cyborg --description "Acceleration Service" accelerator
   ```

3. The Cyborg API service is deployed with uwsgi; create the API endpoints.

   ```shell
   openstack endpoint create --region RegionOne accelerator public http://controller/accelerator/v2
   openstack endpoint create --region RegionOne accelerator internal http://controller/accelerator/v2
   openstack endpoint create --region RegionOne accelerator admin http://controller/accelerator/v2
   ```

4. Install Cyborg.

   ```shell
   dnf install openstack-cyborg
   ```

5. Configure Cyborg.

   Edit /etc/cyborg/cyborg.conf:

   ```ini
   [DEFAULT]
   transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
   use_syslog = False
   state_path = /var/lib/cyborg
   debug = True

   [api]
   host_ip = 0.0.0.0

   [database]
   connection = mysql+pymysql://cyborg:CYBORG_DBPASS@controller/cyborg

   [service_catalog]
   cafile = /opt/stack/data/ca-bundle.pem
   project_domain_id = default
   user_domain_id = default
   project_name = service
   username = cyborg
   password = CYBORG_PASS
   auth_url = http://controller:5000/v3/
   auth_type = password

   [placement]
   project_domain_name = Default
   project_name = service
   user_domain_name = Default
   username = placement
   password = PLACEMENT_PASS
   auth_url = http://controller:5000/v3/
   auth_type = password
   auth_section = keystone_authtoken

   [nova]
   project_domain_name = Default
   project_name = service
   user_domain_name = Default
   username = nova
   password = NOVA_PASS
   auth_url = http://controller:5000/v3/
   auth_type = password
   auth_section = keystone_authtoken

   [keystone_authtoken]
   memcached_servers = localhost:11211
   signing_dir = /var/cache/cyborg/api
   cafile = /opt/stack/data/ca-bundle.pem
   project_domain_name = Default
   project_name = service
   user_domain_name = Default
   username = cyborg
   password = CYBORG_PASS
   auth_url = http://controller:5000/v3/
   auth_type = password
   ```

6. Synchronize the database tables.

   ```shell
   cyborg-dbsync --config-file /etc/cyborg/cyborg.conf upgrade
   ```

7. Start the Cyborg services (a quick verification example follows this section).

   ```shell
   systemctl enable openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent
   systemctl start openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent
   ```
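As a hedged sanity check, not part of the original steps, you can ask the Cyborg API which accelerator devices the agent has discovered; on a host without accelerators the list is simply empty:

```shell
source ~/.admin-openrc
# Requires the python-cyborgclient OSC plugin; lists devices reported by cyborg-agent
openstack accelerator device list
```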
### Aodh

Aodh creates alarms based on the monitoring data collected by Ceilometer or Gnocchi, and triggers actions according to the rules you define.

**Controller node**

1. Create the database.

   ```sql
   CREATE DATABASE aodh;
   GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'localhost' IDENTIFIED BY 'AODH_DBPASS';
   GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'%' IDENTIFIED BY 'AODH_DBPASS';
   ```

2. Create the service credentials and API endpoints.

   Create the service credentials:

   ```shell
   openstack user create --domain default --password-prompt aodh
   openstack role add --project service --user aodh admin
   openstack service create --name aodh --description "Telemetry" alarming
   ```

   Create the API endpoints:

   ```shell
   openstack endpoint create --region RegionOne alarming public http://controller:8042
   openstack endpoint create --region RegionOne alarming internal http://controller:8042
   openstack endpoint create --region RegionOne alarming admin http://controller:8042
   ```

3. Install Aodh.

   ```shell
   dnf install openstack-aodh-api openstack-aodh-evaluator \
       openstack-aodh-notifier openstack-aodh-listener \
       openstack-aodh-expirer python3-aodhclient
   ```

4. Modify the configuration file.

   ```shell
   vim /etc/aodh/aodh.conf
   ```

   ```ini
   [database]
   connection = mysql+pymysql://aodh:AODH_DBPASS@controller/aodh

   [DEFAULT]
   transport_url = rabbit://openstack:RABBIT_PASS@controller
   auth_strategy = keystone

   [keystone_authtoken]
   www_authenticate_uri = http://controller:5000
   auth_url = http://controller:5000
   memcached_servers = controller:11211
   auth_type = password
   project_domain_id = default
   user_domain_id = default
   project_name = service
   username = aodh
   password = AODH_PASS

   [service_credentials]
   auth_type = password
   auth_url = http://controller:5000/v3
   project_domain_id = default
   user_domain_id = default
   project_name = service
   username = aodh
   password = AODH_PASS
   interface = internalURL
   region_name = RegionOne
   ```

5. Synchronize the database.

   ```shell
   aodh-dbsync
   ```

6. Complete the installation (an illustrative alarm example follows this section).

   ```shell
   # Enable the services at boot
   systemctl enable openstack-aodh-api.service openstack-aodh-evaluator.service \
       openstack-aodh-notifier.service openstack-aodh-listener.service
   # Start the services
   systemctl start openstack-aodh-api.service openstack-aodh-evaluator.service \
       openstack-aodh-notifier.service openstack-aodh-listener.service
   ```
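For a hedged illustration of what Aodh does once Gnocchi metrics are flowing, and not part of the original steps, the commands below sketch a CPU alarm on a single instance; the metric name, threshold and resource ID are placeholders that depend on your Ceilometer/Gnocchi configuration:

```shell
source ~/.admin-openrc
# List alarms (empty on a fresh installation)
openstack alarm list
# Illustrative alarm: fire when the mean of the 'cpu' metric of one instance exceeds a threshold;
# replace INSTANCE_UUID and tune the metric and threshold to your own setup
openstack alarm create --name demo-cpu-high \
    --type gnocchi_resources_threshold \
    --metric cpu --threshold 80 --aggregation-method mean \
    --comparison-operator gt \
    --resource-type instance --resource-id INSTANCE_UUID
```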
### Gnocchi

Gnocchi is an open-source time series database that Ceilometer can publish metrics to.

**Controller node**

1. Create the database.

   ```sql
   CREATE DATABASE gnocchi;
   GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'localhost' IDENTIFIED BY 'GNOCCHI_DBPASS';
   GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'%' IDENTIFIED BY 'GNOCCHI_DBPASS';
   ```

2. Create the service credentials and API endpoints.

   Create the service credentials:

   ```shell
   openstack user create --domain default --password-prompt gnocchi
   openstack role add --project service --user gnocchi admin
   openstack service create --name gnocchi --description "Metric Service" metric
   ```

   Create the API endpoints:

   ```shell
   openstack endpoint create --region RegionOne metric public http://controller:8041
   openstack endpoint create --region RegionOne metric internal http://controller:8041
   openstack endpoint create --region RegionOne metric admin http://controller:8041
   ```

3. Install Gnocchi.

   ```shell
   dnf install openstack-gnocchi-api openstack-gnocchi-metricd python3-gnocchiclient
   ```

4. Modify the configuration file.

   ```shell
   vim /etc/gnocchi/gnocchi.conf
   ```

   ```ini
   [api]
   auth_mode = keystone
   port = 8041
   uwsgi_mode = http-socket

   [keystone_authtoken]
   auth_type = password
   auth_url = http://controller:5000/v3
   project_domain_name = Default
   user_domain_name = Default
   project_name = service
   username = gnocchi
   password = GNOCCHI_PASS
   interface = internalURL
   region_name = RegionOne

   [indexer]
   url = mysql+pymysql://gnocchi:GNOCCHI_DBPASS@controller/gnocchi

   [storage]
   # coordination_url is not required but specifying one will improve
   # performance with better workload division across workers.
   # coordination_url = redis://controller:6379
   file_basepath = /var/lib/gnocchi
   driver = file
   ```

5. Synchronize the database.

   ```shell
   gnocchi-upgrade
   ```

6. Complete the installation.

   ```shell
   # Enable the services at boot
   systemctl enable openstack-gnocchi-api.service openstack-gnocchi-metricd.service
   # Start the services
   systemctl start openstack-gnocchi-api.service openstack-gnocchi-metricd.service
   ```
### Ceilometer

Ceilometer is the data collection service of OpenStack.

**Controller node**

1. Create the service credentials.

   ```shell
   openstack user create --domain default --password-prompt ceilometer
   openstack role add --project service --user ceilometer admin
   openstack service create --name ceilometer --description "Telemetry" metering
   ```

2. Install the Ceilometer packages.

   ```shell
   dnf install openstack-ceilometer-notification openstack-ceilometer-central
   ```

3. Edit the /etc/ceilometer/pipeline.yaml configuration file.

   ```yaml
   publishers:
       # set address of Gnocchi
       # + filter out Gnocchi-related activity meters (Swift driver)
       # + set default archive policy
       - gnocchi://?filter_project=service&archive_policy=low
   ```

4. Edit the /etc/ceilometer/ceilometer.conf configuration file.

   ```ini
   [DEFAULT]
   transport_url = rabbit://openstack:RABBIT_PASS@controller

   [service_credentials]
   auth_type = password
   auth_url = http://controller:5000/v3
   project_domain_id = default
   user_domain_id = default
   project_name = service
   username = ceilometer
   password = CEILOMETER_PASS
   interface = internalURL
   region_name = RegionOne
   ```

5. Synchronize the database.

   ```shell
   ceilometer-upgrade
   ```

6. Complete the Ceilometer installation on the controller node.

   ```shell
   # Enable the services at boot
   systemctl enable openstack-ceilometer-notification.service openstack-ceilometer-central.service
   # Start the services
   systemctl start openstack-ceilometer-notification.service openstack-ceilometer-central.service
   ```

**Compute node**

1. Install the Ceilometer packages.

   ```shell
   dnf install openstack-ceilometer-compute
   dnf install openstack-ceilometer-ipmi   # optional
   ```

2. Edit the /etc/ceilometer/ceilometer.conf configuration file.

   ```ini
   [DEFAULT]
   transport_url = rabbit://openstack:RABBIT_PASS@controller

   [service_credentials]
   auth_url = http://controller:5000
   project_domain_id = default
   user_domain_id = default
   auth_type = password
   username = ceilometer
   project_name = service
   password = CEILOMETER_PASS
   interface = internalURL
   region_name = RegionOne
   ```

3. Edit the /etc/nova/nova.conf configuration file.

   ```ini
   [DEFAULT]
   instance_usage_audit = True
   instance_usage_audit_period = hour

   [notifications]
   notify_on_state_change = vm_and_task_state

   [oslo_messaging_notifications]
   driver = messagingv2
   ```

4. Complete the installation (a hedged end-to-end check follows this section).

   ```shell
   systemctl enable openstack-ceilometer-compute.service
   systemctl start openstack-ceilometer-compute.service
   systemctl enable openstack-ceilometer-ipmi.service   # optional
   systemctl start openstack-ceilometer-ipmi.service    # optional
   # Restart the nova-compute service
   systemctl restart openstack-nova-compute.service
   ```
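As a hedged way to confirm the telemetry pipeline end to end, and not part of the original steps, you can check with the Gnocchi CLI whether Ceilometer has started pushing resources and measures; resources only appear after metered objects such as instances exist:

```shell
source ~/.admin-openrc
# Metricd backlog and measures-to-process counters
gnocchi status
# Resources that Ceilometer has registered in Gnocchi
gnocchi resource list
```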
### Heat

Heat is the OpenStack orchestration service: it orchestrates composite cloud applications from declarative templates, and is also called the Orchestration Service. The Heat services are normally installed on the controller node.

**Controller node**

1. Create the heat database and grant it the proper access, replacing HEAT_DBPASS with a suitable password.

   ```shell
   mysql -u root -p
   ```

   ```sql
   MariaDB [(none)]> CREATE DATABASE heat;
   MariaDB [(none)]> GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'localhost' IDENTIFIED BY 'HEAT_DBPASS';
   MariaDB [(none)]> GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'%' IDENTIFIED BY 'HEAT_DBPASS';
   MariaDB [(none)]> exit;
   ```

2. Create the service credentials: create the heat user and add the admin role to it.

   ```shell
   source ~/.admin-openrc
   openstack user create --domain default --password-prompt heat
   openstack role add --project service --user heat admin
   ```

3. Create the heat and heat-cfn services and their API endpoints.

   ```shell
   openstack service create --name heat --description "Orchestration" orchestration
   openstack service create --name heat-cfn --description "Orchestration" cloudformation
   openstack endpoint create --region RegionOne orchestration public http://controller:8004/v1/%\(tenant_id\)s
   openstack endpoint create --region RegionOne orchestration internal http://controller:8004/v1/%\(tenant_id\)s
   openstack endpoint create --region RegionOne orchestration admin http://controller:8004/v1/%\(tenant_id\)s
   openstack endpoint create --region RegionOne cloudformation public http://controller:8000/v1
   openstack endpoint create --region RegionOne cloudformation internal http://controller:8000/v1
   openstack endpoint create --region RegionOne cloudformation admin http://controller:8000/v1
   ```

4. Create the additional information required for stack management.

   Create the heat domain:

   ```shell
   openstack domain create --description "Stack projects and users" heat
   ```

   Create the heat_domain_admin user in the heat domain and note the password you enter; it is the HEAT_DOMAIN_PASS used below:

   ```shell
   openstack user create --domain heat --password-prompt heat_domain_admin
   ```

   Add the admin role to the heat_domain_admin user:

   ```shell
   openstack role add --domain heat --user-domain heat --user heat_domain_admin admin
   ```

   Create the heat_stack_owner role:

   ```shell
   openstack role create heat_stack_owner
   ```

   Create the heat_stack_user role:

   ```shell
   openstack role create heat_stack_user
   ```

5. Install the packages.

   ```shell
   dnf install openstack-heat-api openstack-heat-api-cfn openstack-heat-engine
   ```

6. Modify the /etc/heat/heat.conf configuration file.

   ```ini
   [DEFAULT]
   transport_url = rabbit://openstack:RABBIT_PASS@controller
   heat_metadata_server_url = http://controller:8000
   heat_waitcondition_server_url = http://controller:8000/v1/waitcondition
   stack_domain_admin = heat_domain_admin
   stack_domain_admin_password = HEAT_DOMAIN_PASS
   stack_user_domain_name = heat

   [database]
   connection = mysql+pymysql://heat:HEAT_DBPASS@controller/heat

   [keystone_authtoken]
   www_authenticate_uri = http://controller:5000
   auth_url = http://controller:5000
   memcached_servers = controller:11211
   auth_type = password
   project_domain_name = default
   user_domain_name = default
   project_name = service
   username = heat
   password = HEAT_PASS

   [trustee]
   auth_type = password
   auth_url = http://controller:5000
   username = heat
   password = HEAT_PASS
   user_domain_name = default

   [clients_keystone]
   auth_uri = http://controller:5000
   ```

7. Initialize the heat database tables.

   ```shell
   su -s /bin/sh -c "heat-manage db_sync" heat
   ```

8. Start the services (a smoke-test example follows this section).

   ```shell
   systemctl enable openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service
   systemctl start openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service
   ```
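For a hedged smoke test of the orchestration service, not part of the original steps, the snippet below creates a trivial stack from an inline HOT template; it uses only a built-in Heat resource type, so no images or networks are required, and the stack and file names are arbitrary examples:

```shell
source ~/.admin-openrc
# A minimal HOT template with a single random-string resource
cat > /tmp/demo-stack.yaml << 'EOF'
heat_template_version: 2018-08-31
resources:
  demo_secret:
    type: OS::Heat::RandomString
    properties:
      length: 16
outputs:
  secret_value:
    value: { get_attr: [demo_secret, value] }
EOF
openstack stack create -t /tmp/demo-stack.yaml demo-stack
openstack stack list
openstack stack delete --yes demo-stack
```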
### Tempest

Tempest is the integration test suite of OpenStack. Install this component if you want to run comprehensive automated functional tests against the installed OpenStack environment; otherwise it can be skipped.

**Controller node**

1. Install Tempest.

   ```shell
   dnf install openstack-tempest
   ```

2. Initialize a workspace directory.

   ```shell
   tempest init mytest
   ```

3. Modify the configuration file.

   ```shell
   cd mytest
   vi etc/tempest.conf
   ```

   tempest.conf must describe the current OpenStack environment; see the official sample for the available options (a hedged minimal example follows this section).

4. Run the tests.

   ```shell
   tempest run
   ```

5. Install tempest extensions (optional).

   The individual OpenStack services also ship their own tempest test packages, which you can install to enrich the test coverage. In Yoga we provide extension tests for Cinder, Glance, Keystone, Ironic and Trove; install them as follows:

   ```shell
   dnf install python3-cinder-tempest-plugin python3-glance-tempest-plugin python3-ironic-tempest-plugin python3-keystone-tempest-plugin python3-trove-tempest-plugin
   ```
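As a hedged starting point, not part of the original steps and not a substitute for the official sample, a minimal tempest.conf typically needs at least the sections below; every value here is a placeholder for your own credentials, image and network IDs, and writing the file this way overwrites whatever `tempest init` generated:

```shell
# Run from inside the mytest workspace; replace all placeholder values before using
cat > etc/tempest.conf << 'EOF'
[auth]
use_dynamic_credentials = true
admin_username = admin
admin_password = ADMIN_PASS
admin_project_name = admin
admin_domain_name = Default

[identity]
uri_v3 = http://controller:5000/v3
auth_version = v3

[compute]
image_ref = IMAGE_UUID
flavor_ref = FLAVOR_ID

[network]
public_network_id = EXTERNAL_NETWORK_UUID
EOF
```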
## Deploying with the OpenStack SIG development tool oos

oos (openEuler OpenStack SIG) is the command-line tool provided by the OpenStack SIG. Its `oos env` sub-commands provide ansible scripts for one-click deployment of OpenStack (all-in-one or a three-node cluster), which you can use to quickly bring up an openEuler-RPM-based OpenStack environment.

The oos tool can provision the machines it deploys OpenStack on either through a cloud provider (currently only the Huawei Cloud provider is supported) or by managing existing hosts. The following walks through the tool using an all-in-one OpenStack deployment on Huawei Cloud as the example.

1. Install the oos tool.

   The oos tool is evolving constantly, so its compatibility and usability cannot be guaranteed at every moment; use a verified version. Here we choose 1.0.6:

   ```shell
   pip install openstack-sig-tool==1.0.6
   ```

2. Configure the Huawei Cloud provider information.

   Open the /usr/local/etc/oos/oos.conf file and fill in the Huawei Cloud resources you own. AK/SK are your Huawei Cloud access keys; the other options can keep their defaults (the Singapore region is used by default). The corresponding resources must be created on the cloud beforehand, including:

   - a security group, named `oos` by default
   - an openEuler image whose name follows the format openEuler-%(release)s-%(arch)s, for example `openEuler-22.09-arm64`
   - a VPC named `oos_vpc`
   - two subnets under that VPC, named `oos_subnet1` and `oos_subnet2`

   ```ini
   [huaweicloud]
   ak =
   sk =
   region = ap-southeast-3
   root_volume_size = 100
   data_volume_size = 100
   security_group_name = oos
   image_format = openEuler-%%(release)s-%%(arch)s
   vpc_name = oos_vpc
   subnet1_name = oos_subnet1
   subnet2_name = oos_subnet2
   ```

3. Configure the OpenStack environment information.

   Open the /usr/local/etc/oos/oos.conf file and adjust the configuration to the current machine environment and your needs. The content is as follows:

   ```ini
   [environment]
   mysql_root_password = root
   mysql_project_password = root
   rabbitmq_password = root
   project_identity_password = root
   enabled_service = keystone,neutron,cinder,placement,nova,glance,horizon,aodh,ceilometer,cyborg,gnocchi,kolla,heat,swift,trove,tempest
   neutron_provider_interface_name = br-ex
   default_ext_subnet_range = 10.100.100.0/24
   default_ext_subnet_gateway = 10.100.100.1
   neutron_dataplane_interface_name = eth1
   cinder_block_device = vdb
   swift_storage_devices = vdc
   swift_hash_path_suffix = ash
   swift_hash_path_prefix = has
   glance_api_workers = 2
   cinder_api_workers = 2
   nova_api_workers = 2
   nova_metadata_api_workers = 2
   nova_conductor_workers = 2
   nova_scheduler_workers = 2
   neutron_api_workers = 2
   horizon_allowed_host = *
   kolla_openeuler_plugin = false
   ```

   Key options:

   | Option | Meaning |
   |:--|:--|
   | enabled_service | list of services to install; trim it to your needs |
   | neutron_provider_interface_name | name of the neutron L3 bridge |
   | default_ext_subnet_range | neutron private network IP range |
   | default_ext_subnet_gateway | neutron private network gateway |
   | neutron_dataplane_interface_name | NIC used by neutron; a dedicated new NIC is recommended so it does not conflict with an existing NIC and cut the connection to the all-in-one host |
   | cinder_block_device | name of the block device used by cinder |
   | swift_storage_devices | name of the block device used by swift |
   | kolla_openeuler_plugin | whether to enable the kolla plugin; when set to True, kolla can deploy openEuler containers (only supported on openEuler LTS) |

4. Create an openEuler 22.09 x86_64 virtual machine on Huawei Cloud for the all-in-one OpenStack deployment.

   ```shell
   # sshpass is used during `oos env create` to set up passwordless access to the target VM
   dnf install sshpass
   oos env create -r 22.09 -f small -a x86 -n test-oos all_in_one
   ```

   Use `oos env create --help` to see the full list of parameters.

5. Deploy the OpenStack all-in-one environment.

   ```shell
   oos env setup test-oos -r yoga
   ```

   Use `oos env setup --help` to see the full list of parameters.

6. Initialize the tempest environment.

   If you want to run tempest tests in this environment, run `oos env init`, which automatically creates the OpenStack resources tempest needs:

   ```shell
   oos env init test-oos
   ```

7. Run the tempest tests.

   You can let oos run them automatically:

   ```shell
   oos env test test-oos
   ```

   or log in to the target node manually, enter the mytest directory under the root directory, and run `tempest run` by hand.

If you instead deploy the OpenStack environment on managed hosts, the overall flow is the same as the Huawei Cloud case above: steps 1, 3, 5 and 6 are unchanged, step 2 (the Huawei Cloud provider configuration) is skipped, and step 4 is replaced by managing the host. The managed machine must provide:

- at least one NIC dedicated to oos, whose name matches the configuration option `neutron_dataplane_interface_name`
- at least one disk dedicated to oos, whose name matches the configuration option `cinder_block_device`
- if the swift service is to be deployed, one additional disk whose name matches the configuration option `swift_storage_devices`

```shell
# sshpass is used during `oos env create` to set up passwordless access to the target host
dnf install sshpass
oos env manage -r 22.09 -i TARGET_MACHINE_IP -p TARGET_MACHINE_PASSWD -n test-oos
```

Replace TARGET_MACHINE_IP with the IP of the target machine and TARGET_MACHINE_PASSWD with its password. Use `oos env manage --help` to see the full list of parameters.
## Deploying with the OpenStack SIG deployment tool opensd

opensd deploys the OpenStack component services in batch through scripts.

### Deployment steps

#### 1. Information to confirm before deployment

- When installing the operating system, set selinux to disabled.
- When installing the operating system, set UseDNS to no in the /etc/ssh/sshd_config configuration file.
- The operating system language must be set to English.
- Before deployment, make sure the /etc/hosts file on every compute node contains no entry that resolves the compute host itself.

#### 2. Ceph pool and auth creation (optional)

Skip this step if you do not use ceph or already have a ceph cluster. Run the following on any ceph monitor node.

##### 2.1 Create the pools

```shell
ceph osd pool create volumes 2048
ceph osd pool create images 2048
```

##### 2.2 Initialize the pools

```shell
rbd pool init volumes
rbd pool init images
```

##### 2.3 Create the user auth

```shell
ceph auth get-or-create client.glance mon 'profile rbd' osd 'profile rbd pool=images' mgr 'profile rbd pool=images'
ceph auth get-or-create client.cinder mon 'profile rbd' osd 'profile rbd pool=volumes, profile rbd pool=images' mgr 'profile rbd pool=volumes'
```

#### 3. Configure LVM (optional)

According to the physical machine's disk layout and free space, mount extra disk space for the mysql data directory. An example follows (adapt it to your actual situation); a sketch of making the mount persistent across reboots is given after this step.

```shell
fdisk -l

Disk /dev/sdd: 479.6 GB, 479559942144 bytes, 936640512 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk label type: dos
Disk identifier: 0x000ed242
```

Create the partition:

```shell
parted /dev/sdd mkpart primary 0 -1
```

Create the PV:

```shell
partprobe /dev/sdd1
pvcreate /dev/sdd1
```

Create and activate the VG:

```shell
vgcreate vg_mariadb /dev/sdd1
vgchange -ay vg_mariadb
```

Check the VG capacity:

```shell
vgdisplay

  --- Volume group ---
  VG Name               vg_mariadb
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  2
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               1
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               446.62 GiB
  PE Size               4.00 MiB
  Total PE              114335
  Alloc PE / Size       114176 / 446.00 GiB
  Free  PE / Size       159 / 636.00 MiB
  VG UUID               bVUmDc-VkMu-Vi43-mg27-TEkG-oQfK-TvqdEc
```

Create the LV:

```shell
lvcreate -L 446G -n lv_mariadb vg_mariadb
```

Format the volume and get its UUID:

```shell
mkfs.ext4 /dev/mapper/vg_mariadb-lv_mariadb
blkid /dev/mapper/vg_mariadb-lv_mariadb

/dev/mapper/vg_mariadb-lv_mariadb: UUID="98d513eb-5f64-4aa5-810e-dc7143884fa2" TYPE="ext4"
```

Note: 98d513eb-5f64-4aa5-810e-dc7143884fa2 is the UUID of the volume.

Mount the disk:

```shell
mount /dev/mapper/vg_mariadb-lv_mariadb /var/lib/mysql
rm -rf /var/lib/mysql/*
```
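The original steps mount the volume only for the current boot. As a hedged addition, you may also want the mount to survive reboots, for example by adding an fstab entry keyed on the UUID printed by blkid (the UUID below is the example value from above):

```shell
# Append an fstab entry for the mariadb volume; replace the UUID with your own blkid output
echo 'UUID=98d513eb-5f64-4aa5-810e-dc7143884fa2 /var/lib/mysql ext4 defaults 0 2' >> /etc/fstab
# Verify that the entry mounts cleanly
umount /var/lib/mysql && mount -a && findmnt /var/lib/mysql
```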
#### 4. Configure the yum repo

Run on the deployment node:

##### 4.1 Back up the yum sources

```shell
mkdir /etc/yum.repos.d/bak/
mv /etc/yum.repos.d/*.repo /etc/yum.repos.d/bak/
```

##### 4.2 Configure the yum repo

```shell
cat > /etc/yum.repos.d/opensd.repo << "EOF"
[epol]
name=epol
baseurl=http://repo.openeuler.org/openEuler-22.09/EPOL/main/$basearch/
enabled=1
gpgcheck=0

[everything]
name=everything
baseurl=http://repo.openeuler.org/openEuler-22.09/$basearch/
enabled=1
gpgcheck=0
EOF
```

##### 4.3 Refresh the yum cache

```shell
yum clean all
yum makecache
```

#### 5. Install opensd

Run on the deployment node:

##### 5.1 Clone the opensd source code and install it

```shell
git clone https://gitee.com/openeuler/opensd
cd opensd
python3 setup.py install
```

#### 6. Set up ssh trust

Run on the deployment node:

##### 6.1 Generate a key pair

Run the following command and press Enter through all prompts:

```shell
ssh-keygen
```

##### 6.2 Generate the host IP address file

List all host IPs that will be used in auto_ssh_host_ip, for example:

```shell
cd /usr/local/share/opensd/tools/
vim auto_ssh_host_ip
```

```
10.0.0.1
10.0.0.2
...
10.0.0.10
```

##### 6.3 Change the password and run the script

Replace the string 123123 inside the passwordless-access script /usr/local/bin/opensd-auto-ssh with the real host password:

```shell
# Replace the 123123 string inside the script
vim /usr/local/bin/opensd-auto-ssh
# Install expect, then run the script
dnf install expect -y
opensd-auto-ssh
```

##### 6.4 Set up trust between the deployment node and the ceph monitor (optional)

```shell
ssh-copy-id root@x.x.x.x
```
#### 7. Configure opensd

Run on the deployment node:

##### 7.1 Generate random passwords

Install python3-pbr, python3-utils, python3-pyyaml and python3-oslo-utils, then generate the passwords:

```shell
dnf install python3-pbr python3-utils python3-pyyaml python3-oslo-utils -y
# Generate the passwords
opensd-genpwd
# Check that the passwords were generated
cat /usr/local/share/opensd/etc_examples/opensd/passwords.yml
```

##### 7.2 Configure the inventory file

Each host entry needs the host name, the ansible_host IP and the availability_zone; all three are mandatory. Example:

```shell
vim /usr/local/share/opensd/ansible/inventory/multinode
```

```ini
# The three controller nodes
[control]
controller1 ansible_host=10.0.0.35 availability_zone=az01.cell01.cn-yogadev-1
controller2 ansible_host=10.0.0.36 availability_zone=az01.cell01.cn-yogadev-1
controller3 ansible_host=10.0.0.37 availability_zone=az01.cell01.cn-yogadev-1

# The network nodes, kept identical to the controller nodes
[network]
controller1 ansible_host=10.0.0.35 availability_zone=az01.cell01.cn-yogadev-1
controller2 ansible_host=10.0.0.36 availability_zone=az01.cell01.cn-yogadev-1
controller3 ansible_host=10.0.0.37 availability_zone=az01.cell01.cn-yogadev-1

# The cinder-volume service nodes
[storage]
storage1 ansible_host=10.0.0.61 availability_zone=az01.cell01.cn-yogadev-1
storage2 ansible_host=10.0.0.78 availability_zone=az01.cell01.cn-yogadev-1
storage3 ansible_host=10.0.0.82 availability_zone=az01.cell01.cn-yogadev-1

# Cell1 cluster
[cell-control-cell1]
cell1 ansible_host=10.0.0.24 availability_zone=az01.cell01.cn-yogadev-1
cell2 ansible_host=10.0.0.25 availability_zone=az01.cell01.cn-yogadev-1
cell3 ansible_host=10.0.0.26 availability_zone=az01.cell01.cn-yogadev-1

[compute-cell1]
compute1 ansible_host=10.0.0.27 availability_zone=az01.cell01.cn-yogadev-1
compute2 ansible_host=10.0.0.28 availability_zone=az01.cell01.cn-yogadev-1
compute3 ansible_host=10.0.0.29 availability_zone=az01.cell01.cn-yogadev-1

[cell1:children]
cell-control-cell1
compute-cell1

# Cell2 cluster
[cell-control-cell2]
cell4 ansible_host=10.0.0.36 availability_zone=az03.cell02.cn-yogadev-1
cell5 ansible_host=10.0.0.37 availability_zone=az03.cell02.cn-yogadev-1
cell6 ansible_host=10.0.0.38 availability_zone=az03.cell02.cn-yogadev-1

[compute-cell2]
compute4 ansible_host=10.0.0.39 availability_zone=az03.cell02.cn-yogadev-1
compute5 ansible_host=10.0.0.40 availability_zone=az03.cell02.cn-yogadev-1
compute6 ansible_host=10.0.0.41 availability_zone=az03.cell02.cn-yogadev-1

[cell2:children]
cell-control-cell2
compute-cell2

[baremetal]

[compute-cell1-ironic]

# List the control host groups of all cell clusters
[nova-conductor:children]
cell-control-cell1
cell-control-cell2

# List the compute host groups of all cell clusters
[nova-compute:children]
compute-added
compute-cell1
compute-cell2

# The host groups below do not need any changes; keep them as they are
[compute-added]

[chrony-server:children]
control

[pacemaker:children]
control

......
......
```
##### 7.3 Configure the global variables

Note: only the configuration items with comments in this document need to be changed; the other parameters can be left alone. Leave an item empty if it does not apply to your environment.

```shell
vim /usr/local/share/opensd/etc_examples/opensd/globals.yml
```

```yaml
########################
# Network & Base options
########################
network_interface: "eth0"            # NIC name of the management network
neutron_external_interface: "eth1"   # NIC name of the data-plane (business) network
cidr_netmask: 24                     # netmask of the management network
opensd_vip_address: 10.0.0.33        # virtual IP address of the controller nodes
cell1_vip_address: 10.0.0.34         # virtual IP address of the cell1 cluster
cell2_vip_address: 10.0.0.35         # virtual IP address of the cell2 cluster
external_fqdn: ""                    # public domain name used for VNC access to virtual machines
external_ntp_servers: []             # external ntp server addresses
yumrepo_host:                        # IP address of the yum repository
yumrepo_port:                        # port of the yum repository
environment:                         # type of the yum repository
upgrade_all_packages: "yes"          # whether to upgrade all installed packages (runs yum upgrade); set to "yes" for an initial deployment
enable_miner: "no"                   # whether to deploy the miner service
enable_chrony: "no"                  # whether to deploy the chrony service
enable_pri_mariadb: "no"             # whether to deploy mariadb for a private cloud
enable_hosts_file_modify: "no"       # whether to add node entries to /etc/hosts when scaling out compute nodes or deploying the ironic service

########################
# Available zone options
########################
az_cephmon_compose:
  - availability_zone:               # name of the availability zone; must match the "availability_zone" value of az01 in the multinode host file
    ceph_mon_host:                   # address of one ceph monitor host of az01; the deployment node needs ssh trust with it
    reserve_vcpu_based_on_numa:
  - availability_zone:               # name of the availability zone; must match the "availability_zone" value of az02 in the multinode host file
    ceph_mon_host:                   # address of one ceph monitor host of az02; the deployment node needs ssh trust with it
    reserve_vcpu_based_on_numa:
  - availability_zone:               # name of the availability zone; must match the "availability_zone" value of az03 in the multinode host file
    ceph_mon_host:                   # address of one ceph monitor host of az03; the deployment node needs ssh trust with it
    reserve_vcpu_based_on_numa:
```

`reserve_vcpu_based_on_numa` is set to `"yes"` or `"no"`. For example, on a host with:

```
NUMA node0 CPU(s): 0-15,32-47
NUMA node1 CPU(s): 16-31,48-63
```

with `reserve_vcpu_based_on_numa: "yes"`, vCPUs are reserved evenly per NUMA node: `vcpu_pin_set = 2-15,34-47,18-31,50-63`; with `reserve_vcpu_based_on_numa: "no"`, vCPUs are reserved sequentially starting from the first vCPU: `vcpu_pin_set = 8-64`.

```yaml
#######################
# Nova options
#######################
nova_reserved_host_memory_mb: 2048   # memory reserved for the compute service on compute nodes
enable_cells: "yes"                  # whether the cell nodes are deployed on dedicated nodes
support_gpu: "False"                 # whether the cell has GPU servers; True if so, otherwise False

#######################
# Neutron options
#######################
monitor_ip:                          # monitoring nodes
  - 10.0.0.9
  - 10.0.0.10
enable_meter_full_eip: True          # whether full EIP metering is allowed, default True
enable_meter_port_forwarding: True   # whether port forwarding metering is allowed, default True
enable_meter_ecs_ipv6: True          # whether ecs_ipv6 metering is allowed, default True
enable_meter: True                   # whether metering is enabled, default True
is_sdn_arch: False                   # whether this is an SDN architecture, default False
# The default enabled network type is vlan; vlan and vxlan are mutually exclusive.
enable_vxlan_network_type: False     # set to True for vxlan networks, False for vlan networks
enable_neutron_fwaas: False          # set to True to enable the firewall function if the environment uses a firewall

# Neutron provider
neutron_provider_networks:
  network_types: "{{ 'vxlan' if enable_vxlan_network_type else 'vlan' }}"
  network_vlan_ranges: "default:xxx:xxx"   # vlan range of the data-plane network planned before deployment
  network_mappings: "default:br-provider"
  network_interface: "{{ neutron_external_interface }}"
  network_vxlan_ranges: ""                 # vxlan range of the data-plane network planned before deployment

# The options below configure the SDN controller; set `enable_sdn_controller` to True to enable it.
# Fill in the other parameters according to your pre-deployment planning and SDN deployment information.
enable_sdn_controller: False
sdn_controller_ip_address:           # SDN controller IP address
sdn_controller_username:             # SDN controller user name
sdn_controller_password:             # SDN controller user password

#######################
# Dimsagent options
#######################
enable_dimsagent: "no"               # set to yes to install the image service agent
# Address and domain name for s3
s3_address_domain_pair:
  - host_ip:
    host_name:

#######################
# Trove options
#######################
enable_trove: "no"                   # set to yes to install trove
# default network
trove_default_neutron_networks:      # trove management network id: `openstack network list | grep -w trove-mgmt | awk '{print$2}'`
# s3 setup (fill the values below with null if there is no s3)
s3_endpoint_host_ip:                 # s3 IP
s3_endpoint_host_name:               # s3 domain name
s3_endpoint_url:                     # s3 URL, usually http://<s3 domain name>
s3_access_key:                       # s3 AK
s3_secret_key:                       # s3 SK

#######################
# Ironic options
#######################
enable_ironic: "no"                  # whether to enable bare metal deployment, disabled by default
ironic_neutron_provisioning_network_uuid:
ironic_neutron_cleaning_network_uuid: "{{ ironic_neutron_provisioning_network_uuid }}"
ironic_dnsmasq_interface:
ironic_dnsmasq_dhcp_range:
ironic_tftp_server_address: "{{ hostvars[inventory_hostname]['ansible_' + ironic_dnsmasq_interface]['ipv4']['address'] }}"
# Switch device information
neutron_ml2_conf_genericswitch:
  genericswitch:xxxxxxx:
    device_type:
    ngs_mac_address:
    ip:
    username:
    password:
    ngs_port_default_vlan:

# Package state setting
haproxy_package_state: "present"
mariadb_package_state: "present"
rabbitmq_package_state: "present"
memcached_package_state: "present"
ceph_client_package_state: "present"
keystone_package_state: "present"
glance_package_state: "present"
cinder_package_state: "present"
nova_package_state: "present"
neutron_package_state: "present"
miner_package_state: "present"
```

##### 7.4 Check the ssh connection status of all nodes

```shell
dnf install ansible -y
ansible all -i /usr/local/share/opensd/ansible/inventory/multinode -m ping
```

If every host reports "SUCCESS", the connectivity is fine. Example:

```
compute1 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "ping": "pong"
}
```
#### 8. Run the deployment

Run on the deployment node:

##### 8.1 Run bootstrap

```shell
# Run the bootstrap stage
opensd -i /usr/local/share/opensd/ansible/inventory/multinode bootstrap --forks 50
```

##### 8.2 Reboot the servers

Note: the reboot is needed because bootstrap may upgrade the kernel, change the selinux configuration, or GPU servers may be present. If the machines were installed with the new kernel and selinux disabled, and there are no GPU servers, this step can be skipped.

```shell
# Reboot the corresponding nodes manually
init 6
# After the reboot, check connectivity again
ansible all -i /usr/local/share/opensd/ansible/inventory/multinode -m ping
# After the operating system is back up, re-enable the yum repository
```

##### 8.3 Run the pre-deployment checks

```shell
opensd -i /usr/local/share/opensd/ansible/inventory/multinode prechecks --forks 50
```

##### 8.4 Run the deployment

```shell
ln -s /usr/bin/python3 /usr/bin/python
```

Full deployment:

```shell
opensd -i /usr/local/share/opensd/ansible/inventory/multinode deploy --forks 50
```

Single-service deployment:

```shell
opensd -i /usr/local/share/opensd/ansible/inventory/multinode deploy --forks 50 -t service_name
```

## Deploying with OpenStack-Helm

### Introduction

OpenStack-Helm is a project that lets users deploy OpenStack components on Kubernetes. It provides Helm charts for the individual OpenStack components, together with a set of scripts that take the user through the installation.

OpenStack-Helm is fairly complex, so deploying it on a fresh system is recommended. The whole deployment takes about 30 GB of disk space. Install as the root user.

### Prerequisites

Before starting the OpenStack-Helm installation you may need to do some basic system setup, including the host name and time; see the relevant parts of the "Deploying with RPM" chapter.

openEuler 22.09 already ships the OpenStack-Helm packages. First install the packages and patches:

```shell
dnf install openstack-helm openstack-helm-infra openstack-helm-images loci
```

This installs upstream openstack-helm, which does not support openEuler by default. To use openstack-helm on openEuler you also need to install the plugin package; this chapter describes how the plugin is used.

```shell
dnf install openstack-plugin-openstack-helm-openeuler-support
```
### Automated installation

The OpenStack-Helm installation files are placed under the system directory /usr/share/openstack-helm. The packages provided by openEuler include a simple installation wizard located at /usr/bin/openstack-helm. Run the command to enter the wizard:

```shell
openstack-helm
```

```
Welcome to OpenStack-Helm installation program for openEuler.
I will guide you through the installation.
Please refer to https://docs.openstack.org/openstack-helm/latest/ to get more information about OpenStack-Helm.
We recommend doing this on a new bare metal or virtual OS installation.
Now you have the following options:
i: Start automated installation
c: Check if all pods in Kubernetes are working
e: Exit
Your choice? [i/c/e]:
```

Type `i` and press Enter to go to the next page:

```
Welcome to OpenStack-Helm installation program for openEuler.
I will guide you through the installation.
Please refer to https://docs.openstack.org/openstack-helm/latest/ to get more information about OpenStack-Helm.
We recommend doing this on a new bare metal or virtual OS installation.
Now you have the following options:
i: Start automated installation
c: Check if all pods in Kubernetes are working
e: Exit
Your choice? [i/c/e]: i
There are two storage backends available for OpenStack-Helm: NFS and CEPH.
Which storage backend would you like to use?
n: NFS storage backend
c: CEPH storage backend
b: Go back to parent menu
Your choice? [n/c/b]:
```

OpenStack-Helm offers two storage backends, NFS and Ceph. Type `n` to select the NFS storage backend or `c` to select the Ceph storage backend, as needed.

After the storage backend is chosen you get a chance to confirm. When prompted, press Enter to start the installation. During the installation the program runs a series of installation scripts in sequence to complete the deployment. This can take tens of minutes; make sure there is enough disk space and a working Internet connection throughout.

The scripts executed during the installation deploy a number of Helm charts onto the system. Because target environments vary widely, some specific Helm charts may fail to deploy. In that case you will notice, at the end of the output, a message about waiting for the pods to come up that eventually times out. If this happens, you may need to use the manual installation method in the next section to locate the problem.

If you did not observe such a message, congratulations, the deployment is complete. See the "Using OpenStack-Helm" section to start using it.
### Manual installation

If you hit an error during the automated installation, or you want to control the whole installation flow manually, run the installation steps in the following order:

```shell
cd /usr/share/openstack-helm/openstack-helm

# Based on NFS
./tools/deployment/developer/common/010-deploy-k8s.sh
./tools/deployment/developer/common/020-setup-client.sh
./tools/deployment/developer/common/030-ingress.sh
./tools/deployment/developer/nfs/040-nfs-provisioner.sh
./tools/deployment/developer/nfs/050-mariadb.sh
./tools/deployment/developer/nfs/060-rabbitmq.sh
./tools/deployment/developer/nfs/070-memcached.sh
./tools/deployment/developer/nfs/080-keystone.sh
./tools/deployment/developer/nfs/090-heat.sh
./tools/deployment/developer/nfs/100-horizon.sh
./tools/deployment/developer/nfs/120-glance.sh
./tools/deployment/developer/nfs/140-openvswitch.sh
./tools/deployment/developer/nfs/150-libvirt.sh
./tools/deployment/developer/nfs/160-compute-kit.sh
./tools/deployment/developer/nfs/170-setup-gateway.sh

# Or based on Ceph
./tools/deployment/developer/common/010-deploy-k8s.sh
./tools/deployment/developer/common/020-setup-client.sh
./tools/deployment/developer/common/030-ingress.sh
./tools/deployment/developer/ceph/040-ceph.sh
./tools/deployment/developer/ceph/050-mariadb.sh
./tools/deployment/developer/ceph/060-rabbitmq.sh
./tools/deployment/developer/ceph/070-memcached.sh
./tools/deployment/developer/ceph/080-keystone.sh
./tools/deployment/developer/ceph/090-heat.sh
./tools/deployment/developer/ceph/100-horizon.sh
./tools/deployment/developer/ceph/120-glance.sh
./tools/deployment/developer/ceph/140-openvswitch.sh
./tools/deployment/developer/ceph/150-libvirt.sh
./tools/deployment/developer/ceph/160-compute-kit.sh
./tools/deployment/developer/ceph/170-setup-gateway.sh
```

After the installation completes, you can check the state of the pods on the system with `kubectl get pods -A`.

### Using OpenStack-Helm

After the system is deployed, the OpenStack CLI is installed at /usr/local/bin/openstack. Use it as in the following example:

```shell
export OS_CLOUD=openstack_helm
export OS_USERNAME='admin'
export OS_PASSWORD='password'
export OS_PROJECT_NAME='admin'
export OS_PROJECT_DOMAIN_NAME='default'
export OS_USER_DOMAIN_NAME='default'
export OS_AUTH_URL='http://keystone.openstack.svc.cluster.local/v3'

openstack service list
openstack stack list
```

You can of course also access the OpenStack dashboard through the web interface. The Horizon dashboard is at http://localhost:31000; log in with the following credentials:

- Domain: default
- User Name: admin
- Password: password

At this point you should see the familiar OpenStack dashboard.
### Installing new features

#### Kolla support for iSula

Kolla is OpenStack's containerized deployment solution based on Docker and Ansible; it consists of two projects, Kolla and Kolla-ansible. Kolla builds container images, while Kolla-ansible deploys them. Kolla-ansible is only supported on openEuler LTS releases and is not yet supported on the openEuler innovation releases. With openEuler 22.09, users can build the corresponding container images with Kolla. In openEuler 22.09 the OpenStack SIG also added iSula runtime support to Kolla. The steps are as follows:

1. Install Kolla:

    ```
    dnf install openstack-kolla docker
    ```

    After the installation you can already use the `kolla-build` command to build Docker-based container images; it is that simple. If you want to try the iSula-based workflow, continue with the following steps.

2. Install the OpenStack iSula plugin:

    ```
    dnf install openstack-plugin-kolla-isula-support
    ```

3. Start the isula-build service. Step 2 automatically installs the iSulad and isula-builder services; isulad starts automatically, but isula-builder does not, so it has to be started manually:

    ```
    systemctl start isula-builder
    ```

4. Configure Kolla. Add `base_runtime` to the `[DEFAULT]` section of `kolla.conf`:

    ```
    vim /etc/kolla/kolla.conf

    base_runtime=isula
    ```

At this point the installation is complete. Run `kolla-build` to build images on top of iSula, and once it finishes, run `isula images` to list the images.
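As a concrete illustration of the workflow above (not taken from the SIG documentation), building a single image family and checking the result with iSula might look like this; the `keystone` filter and the `--tag` value are arbitrary examples.

```
# Build only the images whose names match "keystone" as a smoke test.
# With base_runtime=isula set in /etc/kolla/kolla.conf, the images are
# produced through isula-build instead of Docker.
kolla-build --config-file /etc/kolla/kolla.conf --tag 22.09-test keystone

# The resulting images should appear in iSula's local image store
isula images | grep keystone
```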
#### Nova support for high- and low-priority virtual machines

The high/low-priority virtual machine feature is a Nova feature developed by the OpenStack SIG for openEuler 22.09 on top of OpenStack Yoga. It lets users assign a priority to a virtual machine; based on the priority, OpenStack automatically applies different CPU-pinning policies and, combined with openEuler's own skylark QoS service, lets high- and low-priority virtual machines share resources sensibly. See the feature documentation for the details; this section only describes the installation steps.

1. Deploy an OpenStack environment (non-containerized) following the earlier chapters, then install the plugin first:

    ```
    dnf install openstack-plugin-priority-vm
    ```

2. Configure the database. This feature extends Nova's database tables, so the databases need to be synchronized:

    ```
    nova-manage api_db sync
    nova-manage db sync
    ```

3. Restart the Nova services. Run the following on the controller node and on each compute node:

    ```
    systemctl restart openstack-nova-*
    ```

## OpenStack Yoga Deployment Guide

- RPM-based deployment
  - Environment preparation
  - Clock synchronization
  - Installing the database
  - Installing the message queue
  - Installing the cache service
  - Deploying services: Keystone, Glance, Placement, Nova, Neutron, Cinder, Horizon, Ironic, Trove, Swift, Cyborg, Aodh, Gnocchi, Ceilometer, Heat, Tempest
- Deployment with oos, the development tool of the OpenStack SIG
- Deployment with opensd, the deployment tool of the OpenStack SIG
  - Deployment steps: 1. Information to confirm before deployment; 2. Ceph pool and credential creation (optional): 2.1 create the pool, 2.2 initialize the pool, 2.3 create the user credentials; 3. Configure LVM (optional); 4. Configure the yum repo: 4.1 back up the yum sources, 4.2 configure the yum repo, 4.3 refresh the yum cache; 5. Install opensd: 5.1 clone the opensd source code and install it; 6. Set up SSH mutual trust: 6.1 generate the key pair, 6.2 generate the host IP address file, 6.3 change the password and run the script, 6.4 set up mutual trust between the deployment node and the Ceph monitors (optional); 7. Configure opensd: 7.1 generate random passwords, 7.2 configure the inventory file, 7.3 configure the global variables, 7.4 check the SSH connectivity of all nodes; 8. Run the deployment: 8.1 run bootstrap, 8.2 reboot the servers, 8.3 run the pre-deployment checks, 8.4 run the deployment
- OpenStack-Helm-based deployment
  - Introduction
  - Prerequisites
  - Automated installation
  - Manual installation
  - Using OpenStack-Helm
- Installing new features
  - Kolla support for iSula
  - Nova support for high- and low-priority virtual machines

This document is the OpenStack deployment guide for openEuler 22.09 written by the openEuler OpenStack SIG; its content is provided by SIG contributors. If you have any questions or find any problems while reading it, please contact the SIG maintainers or file an issue directly.

### Conventions

This section describes some conventions used throughout the document.

| Name | Definition |
|:-----|:-----------|
| RABBIT_PASS | RabbitMQ password, set by the user; used in the configuration of every OpenStack service |
| CINDER_PASS | Password of the cinder service's Keystone user; used in the cinder configuration |
| CINDER_DBPASS | Password of the cinder database; used in the cinder configuration |
| KEYSTONE_DBPASS | Password of the keystone database; used in the keystone configuration |
| GLANCE_PASS | Password of the glance service's Keystone user; used in the glance configuration |
| GLANCE_DBPASS | Password of the glance database; used in the glance configuration |
| HEAT_PASS | Password of the heat user registered in Keystone; used in the heat configuration |
| HEAT_DBPASS | Password of the heat database; used in the heat configuration |
| CYBORG_PASS | Password of the cyborg user registered in Keystone; used in the cyborg configuration |
| CYBORG_DBPASS | Password of the cyborg database; used in the cyborg configuration |
| NEUTRON_PASS | Password of the neutron user registered in Keystone; used in the neutron configuration |
| NEUTRON_DBPASS | Password of the neutron database; used in the neutron configuration |
| PROVIDER_INTERFACE_NAME | Name of the physical network interface; used in the neutron configuration |
| OVERLAY_INTERFACE_IP_ADDRESS | Management IP address of the controller node; used in the neutron configuration |
| METADATA_SECRET | Secret of the metadata proxy; used in the nova and neutron configurations |
| PLACEMENT_DBPASS | Password of the placement database; used in the placement configuration |
| PLACEMENT_PASS | Password of the placement user registered in Keystone; used in the placement configuration |
| NOVA_DBPASS | Password of the nova databases; used in the nova configuration |
| NOVA_PASS | Password of the nova user registered in Keystone; used in the nova, cyborg, neutron and other configurations |
| IRONIC_DBPASS | Password of the ironic database; used in the ironic configuration |
| IRONIC_PASS | Password of the ironic user registered in Keystone; used in the ironic configuration |
| IRONIC_INSPECTOR_DBPASS | Password of the ironic-inspector database; used in the ironic-inspector configuration |
| IRONIC_INSPECTOR_PASS | Password of the ironic-inspector user registered in Keystone; used in the ironic-inspector configuration |

The OpenStack SIG provides several ways of deploying OpenStack on openEuler to cover different user scenarios; choose the one that fits your needs.

### RPM-based Deployment

#### Environment preparation

This document deploys the classic three-node OpenStack environment: a controller node, a compute node and a storage node. The storage node normally runs only the storage services; if resources are limited, you can skip the dedicated storage node and run its services on the compute node instead.

First prepare three openEuler 22.09 environments. Depending on your setup, download the matching image (ISO image or qcow2 image) and install it.

The installation below follows this topology:

- controller: 192.168.0.2
- compute: 192.168.0.3
- storage: 192.168.0.4

If the IP addresses in your environment differ, adjust the corresponding configuration files accordingly.

The three-node service topology used in this document is shown in the figure below (it covers only the core services Keystone, Glance, Nova, Cinder and Neutron; for the other services, refer to the corresponding deployment sections).

Before the actual deployment, perform the following configuration and checks on every node.

**Make sure the EPOL yum repository is configured.** Open the `/etc/yum.repos.d/openEuler.repo` file and check whether the `[EPOL]` repository exists. If it does not, add the following content:

```
[EPOL]
name=EPOL
baseurl=http://repo.openeuler.org/openEuler-22.09/EPOL/main/$basearch/
enabled=1
gpgcheck=1
gpgkey=http://repo.openeuler.org/openEuler-22.09/OS/$basearch/RPM-GPG-KEY-openEuler
```

Whether or not you changed this file, the first step on a new machine is always to refresh the yum repositories by running `yum update`. A quick way to confirm the repository configuration is shown below.
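The following check uses standard dnf commands; the grep pattern is only illustrative and not part of the original procedure.

```
# Rebuild the metadata cache and make sure EPOL shows up among the enabled repositories
dnf clean all && dnf makecache
dnf repolist --enabled | grep -i epol
```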
**Modify hostnames and host mappings.** Change the hostname on each node; taking the controller as an example:

```
hostnamectl set-hostname controller
vi /etc/hostname    # change the content to: controller
```

Then edit the `/etc/hosts` file on every node and add the following entries:

```
192.168.0.2 controller
192.168.0.3 compute
192.168.0.4 storage
```

#### Clock synchronization

A cluster requires the time on every node to stay consistent at all times, which is normally guaranteed by clock-synchronization software. This document uses chrony. The steps are as follows.

**Controller node:**

1. Install the service:

    ```
    dnf install chrony
    ```

2. Edit the `/etc/chrony.conf` configuration file and add the following line (it defines which IP ranges are allowed to synchronize their clocks from this node):

    ```
    allow 192.168.0.0/24
    ```

3. Restart the service:

    ```
    systemctl restart chronyd
    ```

**Other nodes:**

1. Install the service:

    ```
    dnf install chrony
    ```

2. Edit `/etc/chrony.conf` and add the following line. `NTP_SERVER` is the controller IP, meaning time is obtained from that machine; here use `192.168.0.2`, or the `controller` name already configured in `/etc/hosts`:

    ```
    server NTP_SERVER iburst
    ```

    At the same time, comment out the `pool pool.ntp.org iburst` line so that the node does not synchronize its clock from the public Internet.

3. Restart the service:

    ```
    systemctl restart chronyd
    ```

After the configuration is done, check the result by running `chronyc sources` on a non-controller node. Output similar to the following indicates that the node is successfully synchronizing its clock from the controller:

```
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^* 192.168.0.2                   4   6     7     0  -1406ns[  +55us] +/-   16ms
```
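The steps above only restart chronyd. If you also want the service to survive reboots and to step the clock immediately, the following optional commands (not part of the original procedure) can be run on each node:

```
# Optional: enable chronyd at boot and force an immediate correction of the clock
systemctl enable chronyd
chronyc makestep
```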
#### Installing the database

The database is installed on the controller node; MariaDB is recommended.

1. Install the packages:

    ```
    dnf install mysql-config mariadb mariadb-server python3-PyMySQL
    ```

2. Create the configuration file `/etc/my.cnf.d/openstack.cnf` with the following content:

    ```
    [mysqld]
    bind-address = 192.168.0.2
    default-storage-engine = innodb
    innodb_file_per_table = on
    max_connections = 4096
    collation-server = utf8_general_ci
    character-set-server = utf8
    ```

3. Start the server:

    ```
    systemctl start mariadb
    ```

4. Initialize the database and follow the prompts:

    ```
    mysql_secure_installation
    ```

    An example session looks like this:

    ```
    NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MariaDB
          SERVERS IN PRODUCTION USE!  PLEASE READ EACH STEP CAREFULLY!

    In order to log into MariaDB to secure it, we'll need the current
    password for the root user. If you've just installed MariaDB, and
    haven't set the root password yet, you should just press enter here.

    # Enter the password here; since we are initializing the DB, just press Enter
    Enter current password for root (enter for none):
    OK, successfully used password, moving on...

    Setting the root password or using the unix_socket ensures that nobody
    can log into the MariaDB root user without the proper authorisation.

    You already have your root account protected, so you can safely answer 'n'.

    # Answer N here, as prompted
    Switch to unix_socket authentication [Y/n] N
    Enabled successfully!
    Reloading privilege tables..
     ... Success!

    You already have your root account protected, so you can safely answer 'n'.

    # Enter Y to change the password
    Change the root password? [Y/n] Y
    New password:
    Re-enter new password:
    Password updated successfully!
    Reloading privilege tables..
     ... Success!

    By default, a MariaDB installation has an anonymous user, allowing anyone
    to log into MariaDB without having to have a user account created for
    them.  This is intended only for testing, and to make the installation
    go a bit smoother.  You should remove them before moving into a
    production environment.

    # Enter Y to remove the anonymous users
    Remove anonymous users? [Y/n] Y
     ... Success!

    Normally, root should only be allowed to connect from 'localhost'.  This
    ensures that someone cannot guess at the root password from the network.

    # Enter Y to disallow remote root login
    Disallow root login remotely? [Y/n] Y
     ... Success!

    By default, MariaDB comes with a database named 'test' that anyone can
    access.  This is also intended only for testing, and should be removed
    before moving into a production environment.

    # Enter Y to remove the test database
    Remove test database and access to it? [Y/n] Y
     - Dropping test database...
     ... Success!
     - Removing privileges on test database...
     ... Success!

    Reloading the privilege tables will ensure that all changes made so far
    will take effect immediately.

    # Enter Y to reload the privilege tables
    Reload privilege tables now? [Y/n] Y
     ... Success!

    Cleaning up...

    All done!  If you've completed all of the above steps, your MariaDB
    installation should now be secure.
    ```

5. Verify the setup: using the password set in step 4, check that you can log in to MariaDB:

    ```
    mysql -uroot -p
    ```
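Besides checking that the login works, you can optionally confirm that the settings from `/etc/my.cnf.d/openstack.cnf` are in effect; this extra check is not part of the original guide:

```
# Log in with the root password set above and inspect the relevant variables
mysql -uroot -p -e "SHOW VARIABLES WHERE Variable_name IN ('bind_address','max_connections');"
```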
#### Installing the message queue

The message queue is installed on the controller node; RabbitMQ is recommended.

1. Install the packages:

    ```
    dnf install rabbitmq-server
    ```

2. Start the service:

    ```
    systemctl start rabbitmq-server
    ```

3. Configure the openstack user. `RABBIT_PASS` is the password the OpenStack services use to log in to the message queue; it must match the configuration of every service later on:

    ```
    rabbitmqctl add_user openstack RABBIT_PASS
    rabbitmqctl set_permissions openstack ".*" ".*" ".*"
    ```

#### Installing the cache service

The cache service is installed on the controller node; Memcached is recommended.

1. Install the packages:

    ```
    dnf install memcached python3-memcached
    ```

2. Edit the configuration file `/etc/sysconfig/memcached`:

    ```
    OPTIONS="-l 127.0.0.1,::1,controller"
    ```

3. Start the service:

    ```
    systemctl start memcached
    ```

#### Deploying services

##### Keystone

Keystone is the identity service provided by OpenStack and the entry point to the whole cloud; it provides tenant isolation, user authentication, service discovery and related functions. It must be installed.

1. Create the `keystone` database and grant access:

    ```
    mysql -u root -p

    MariaDB [(none)]> CREATE DATABASE keystone;
    MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
    IDENTIFIED BY 'KEYSTONE_DBPASS';
    MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
    IDENTIFIED BY 'KEYSTONE_DBPASS';
    MariaDB [(none)]> exit
    ```

    Note: replace `KEYSTONE_DBPASS` with the password you set for the Keystone database.

2. Install the packages:

    ```
    dnf install openstack-keystone httpd mod_wsgi
    ```

3. Configure Keystone:

    ```
    vim /etc/keystone/keystone.conf

    [database]
    connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone

    [token]
    provider = fernet
    ```

    Explanation: the `[database]` section configures the database endpoint; the `[token]` section configures the token provider.

4. Synchronize the database:

    ```
    su -s /bin/sh -c "keystone-manage db_sync" keystone
    ```

5. Initialize the Fernet key repositories:

    ```
    keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
    keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
    ```

6. Bootstrap the service:

    ```
    keystone-manage bootstrap --bootstrap-password ADMIN_PASS \
    --bootstrap-admin-url http://controller:5000/v3/ \
    --bootstrap-internal-url http://controller:5000/v3/ \
    --bootstrap-public-url http://controller:5000/v3/ \
    --bootstrap-region-id RegionOne
    ```

    Note: replace `ADMIN_PASS` with the password you set for the `admin` user.

7. Configure the Apache HTTP server. Open `httpd.conf` and edit it:

    ```
    # Path of the configuration file to edit
    vim /etc/httpd/conf/httpd.conf
    ```
#\u4fee\u6539\u4ee5\u4e0b\u9879\uff0c\u5982\u679c\u6ca1\u6709\u5219\u65b0\u6dfb\u52a0 ServerName controller \u521b\u5efa\u8f6f\u94fe\u63a5 ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/ \u89e3\u91ca \u914d\u7f6e ServerName \u9879\u5f15\u7528\u63a7\u5236\u8282\u70b9 \u6ce8\u610f \u5982\u679c ServerName \u9879\u4e0d\u5b58\u5728\u5219\u9700\u8981\u521b\u5efa \u542f\u52a8Apache HTTP\u670d\u52a1 systemctl enable httpd.service systemctl start httpd.service \u521b\u5efa\u73af\u5883\u53d8\u91cf\u914d\u7f6e cat << EOF >> ~/.admin-openrc export OS_PROJECT_DOMAIN_NAME=Default export OS_USER_DOMAIN_NAME=Default export OS_PROJECT_NAME=admin export OS_USERNAME=admin export OS_PASSWORD=ADMIN_PASS export OS_AUTH_URL=http://controller:5000/v3 export OS_IDENTITY_API_VERSION=3 export OS_IMAGE_API_VERSION=2 EOF \u6ce8\u610f \u66ff\u6362 ADMIN_PASS \u4e3a admin \u7528\u6237\u7684\u5bc6\u7801 \u4f9d\u6b21\u521b\u5efadomain, projects, users, roles \u9700\u8981\u5148\u5b89\u88c5python3-openstackclient dnf install python3-openstackclient \u5bfc\u5165\u73af\u5883\u53d8\u91cf source ~/.admin-openrc \u521b\u5efaproject service \uff0c\u5176\u4e2d domain default \u5728 keystone-manage bootstrap \u65f6\u5df2\u521b\u5efa openstack domain create --description \"An Example Domain\" example openstack project create --domain default --description \"Service Project\" service \u521b\u5efa\uff08non-admin\uff09project myproject \uff0cuser myuser \u548c role myrole \uff0c\u4e3a myproject \u548c myuser \u6dfb\u52a0\u89d2\u8272 myrole openstack project create --domain default --description \"Demo Project\" myproject openstack user create --domain default --password-prompt myuser openstack role create myrole openstack role add --project myproject --user myuser myrole \u9a8c\u8bc1 \u53d6\u6d88\u4e34\u65f6\u73af\u5883\u53d8\u91cfOS_AUTH_URL\u548cOS_PASSWORD\uff1a source ~/.admin-openrc unset OS_AUTH_URL OS_PASSWORD \u4e3aadmin\u7528\u6237\u8bf7\u6c42token\uff1a openstack --os-auth-url http://controller:5000/v3 \\ --os-project-domain-name Default --os-user-domain-name Default \\ --os-project-name admin --os-username admin token issue \u4e3amyuser\u7528\u6237\u8bf7\u6c42token\uff1a openstack --os-auth-url http://controller:5000/v3 \\ --os-project-domain-name Default --os-user-domain-name Default \\ --os-project-name myproject --os-username myuser token issue","title":"Keystone"},{"location":"install/openEuler-22.09/OpenStack-yoga/#glance","text":"Glance\u662fOpenStack\u63d0\u4f9b\u7684\u955c\u50cf\u670d\u52a1\uff0c\u8d1f\u8d23\u865a\u62df\u673a\u3001\u88f8\u673a\u955c\u50cf\u7684\u4e0a\u4f20\u4e0e\u4e0b\u8f7d\uff0c\u5fc5\u987b\u5b89\u88c5\u3002 Controller\u8282\u70b9 \uff1a \u521b\u5efa glance \u6570\u636e\u5e93\u5e76\u6388\u6743 mysql -u root -p MariaDB [(none)]> CREATE DATABASE glance; MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \\ IDENTIFIED BY 'GLANCE_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \\ IDENTIFIED BY 'GLANCE_DBPASS'; MariaDB [(none)]> exit \u6ce8\u610f: \u66ff\u6362 GLANCE_DBPASS \uff0c\u4e3a glance \u6570\u636e\u5e93\u8bbe\u7f6e\u5bc6\u7801 \u521d\u59cb\u5316 glance \u8d44\u6e90\u5bf9\u8c61 \u5bfc\u5165\u73af\u5883\u53d8\u91cf source ~/.admin-openrc \u521b\u5efa\u7528\u6237\u65f6\uff0c\u547d\u4ee4\u884c\u4f1a\u63d0\u793a\u8f93\u5165\u5bc6\u7801\uff0c\u8bf7\u8f93\u5165\u81ea\u5b9a\u4e49\u7684\u5bc6\u7801\uff0c\u4e0b\u6587\u6d89\u53ca\u5230 GLANCE_PASS \u7684\u5730\u65b9\u66ff\u6362\u6210\u8be5\u5bc6\u7801\u5373\u53ef\u3002 openstack user 
create --domain default --password-prompt glance User Password: Repeat User Password: \u6dfb\u52a0glance\u7528\u6237\u5230service project\u5e76\u6307\u5b9aadmin\u89d2\u8272\uff1a openstack role add --project service --user glance admin \u521b\u5efaglance\u670d\u52a1\u5b9e\u4f53\uff1a openstack service create --name glance --description \"OpenStack Image\" image \u521b\u5efaglance API\u670d\u52a1\uff1a openstack endpoint create --region RegionOne image public http://controller:9292 openstack endpoint create --region RegionOne image internal http://controller:9292 openstack endpoint create --region RegionOne image admin http://controller:9292 \u5b89\u88c5\u8f6f\u4ef6\u5305 dnf install openstack-glance \u4fee\u6539 glance \u914d\u7f6e\u6587\u4ef6 vim /etc/glance/glance-api.conf [database] connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance [keystone_authtoken] www_authenticate_uri = http://controller:5000 auth_url = http://controller:5000 memcached_servers = controller:11211 auth_type = password project_domain_name = Default user_domain_name = Default project_name = service username = glance password = GLANCE_PASS [paste_deploy] flavor = keystone [glance_store] stores = file,http default_store = file filesystem_store_datadir = /var/lib/glance/images/ \u89e3\u91ca: [database]\u90e8\u5206\uff0c\u914d\u7f6e\u6570\u636e\u5e93\u5165\u53e3 [keystone_authtoken] [paste_deploy]\u90e8\u5206\uff0c\u914d\u7f6e\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u5165\u53e3 [glance_store]\u90e8\u5206\uff0c\u914d\u7f6e\u672c\u5730\u6587\u4ef6\u7cfb\u7edf\u5b58\u50a8\u548c\u955c\u50cf\u6587\u4ef6\u7684\u4f4d\u7f6e \u540c\u6b65\u6570\u636e\u5e93 su -s /bin/sh -c \"glance-manage db_sync\" glance \u542f\u52a8\u670d\u52a1\uff1a systemctl enable openstack-glance-api.service systemctl start openstack-glance-api.service \u9a8c\u8bc1 \u5bfc\u5165\u73af\u5883\u53d8\u91cf source ~/.admin-openrcu \u4e0b\u8f7d\u955c\u50cf x86\u955c\u50cf\u4e0b\u8f7d\uff1a wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img arm\u955c\u50cf\u4e0b\u8f7d\uff1a wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-aarch64-disk.img \u6ce8\u610f \u5982\u679c\u60a8\u4f7f\u7528\u7684\u73af\u5883\u662f\u9cb2\u9e4f\u67b6\u6784\uff0c\u8bf7\u4e0b\u8f7daarch64\u7248\u672c\u7684\u955c\u50cf\uff1b\u5df2\u5bf9\u955c\u50cfcirros-0.5.2-aarch64-disk.img\u8fdb\u884c\u6d4b\u8bd5\u3002 \u5411Image\u670d\u52a1\u4e0a\u4f20\u955c\u50cf\uff1a openstack image create --disk-format qcow2 --container-format bare \\ --file cirros-0.4.0-x86_64-disk.img --public cirros \u786e\u8ba4\u955c\u50cf\u4e0a\u4f20\u5e76\u9a8c\u8bc1\u5c5e\u6027\uff1a openstack image list","title":"Glance"},{"location":"install/openEuler-22.09/OpenStack-yoga/#placement","text":"Placement\u662fOpenStack\u63d0\u4f9b\u7684\u8d44\u6e90\u8c03\u5ea6\u7ec4\u4ef6\uff0c\u4e00\u822c\u4e0d\u9762\u5411\u7528\u6237\uff0c\u7531Nova\u7b49\u7ec4\u4ef6\u8c03\u7528\uff0c\u5b89\u88c5\u5728\u63a7\u5236\u8282\u70b9\u3002 \u5b89\u88c5\u3001\u914d\u7f6ePlacement\u670d\u52a1\u524d\uff0c\u9700\u8981\u5148\u521b\u5efa\u76f8\u5e94\u7684\u6570\u636e\u5e93\u3001\u670d\u52a1\u51ed\u8bc1\u548cAPI endpoints\u3002 \u521b\u5efa\u6570\u636e\u5e93 \u4f7f\u7528root\u7528\u6237\u8bbf\u95ee\u6570\u636e\u5e93\u670d\u52a1\uff1a mysql -u root -p \u521b\u5efaplacement\u6570\u636e\u5e93\uff1a MariaDB [(none)]> CREATE DATABASE placement; \u6388\u6743\u6570\u636e\u5e93\u8bbf\u95ee\uff1a MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' \\ IDENTIFIED BY 'PLACEMENT_DBPASS'; MariaDB 
[(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' \\ IDENTIFIED BY 'PLACEMENT_DBPASS'; \u66ff\u6362 PLACEMENT_DBPASS \u4e3aplacement\u6570\u636e\u5e93\u8bbf\u95ee\u5bc6\u7801\u3002 \u9000\u51fa\u6570\u636e\u5e93\u8bbf\u95ee\u5ba2\u6237\u7aef\uff1a exit \u914d\u7f6e\u7528\u6237\u548cEndpoints source admin\u51ed\u8bc1\uff0c\u4ee5\u83b7\u53d6admin\u547d\u4ee4\u884c\u6743\u9650\uff1a source ~/.admin-openrc \u521b\u5efaplacement\u7528\u6237\u5e76\u8bbe\u7f6e\u7528\u6237\u5bc6\u7801\uff1a openstack user create --domain default --password-prompt placement User Password: Repeat User Password: \u6dfb\u52a0placement\u7528\u6237\u5230service project\u5e76\u6307\u5b9aadmin\u89d2\u8272\uff1a openstack role add --project service --user placement admin \u521b\u5efaplacement\u670d\u52a1\u5b9e\u4f53\uff1a openstack service create --name placement \\ --description \"Placement API\" placement \u521b\u5efaPlacement API\u670d\u52a1endpoints\uff1a openstack endpoint create --region RegionOne \\ placement public http://controller:8778 openstack endpoint create --region RegionOne \\ placement internal http://controller:8778 openstack endpoint create --region RegionOne \\ placement admin http://controller:8778 \u5b89\u88c5\u53ca\u914d\u7f6e\u7ec4\u4ef6 \u5b89\u88c5\u8f6f\u4ef6\u5305\uff1a dnf install openstack-placement-api \u7f16\u8f91 /etc/placement/placement.conf \u914d\u7f6e\u6587\u4ef6\uff0c\u5b8c\u6210\u5982\u4e0b\u64cd\u4f5c\uff1a \u5728 [placement_database] \u90e8\u5206\uff0c\u914d\u7f6e\u6570\u636e\u5e93\u5165\u53e3\uff1a [placement_database] connection = mysql+pymysql://placement:PLACEMENT_DBPASS@controller/placement \u66ff\u6362 PLACEMENT_DBPASS \u4e3aplacement\u6570\u636e\u5e93\u7684\u5bc6\u7801\u3002 \u5728 [api] \u548c [keystone_authtoken] \u90e8\u5206\uff0c\u914d\u7f6e\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u5165\u53e3\uff1a [api] auth_strategy = keystone [keystone_authtoken] auth_url = http://controller:5000/v3 memcached_servers = controller:11211 auth_type = password project_domain_name = Default user_domain_name = Default project_name = service username = placement password = PLACEMENT_PASS \u66ff\u6362 PLACEMENT_PASS \u4e3aplacement\u7528\u6237\u7684\u5bc6\u7801\u3002 \u6570\u636e\u5e93\u540c\u6b65\uff0c\u586b\u5145Placement\u6570\u636e\u5e93\uff1a su -s /bin/sh -c \"placement-manage db sync\" placement \u542f\u52a8\u670d\u52a1 \u91cd\u542fhttpd\u670d\u52a1\uff1a systemctl restart httpd \u9a8c\u8bc1 source admin\u51ed\u8bc1\uff0c\u4ee5\u83b7\u53d6admin\u547d\u4ee4\u884c\u6743\u9650 source ~/.admin-openrc \u6267\u884c\u72b6\u6001\u68c0\u67e5\uff1a placement-status upgrade check +----------------------------------------------------------------------+ | Upgrade Check Results | +----------------------------------------------------------------------+ | Check: Missing Root Provider IDs | | Result: Success | | Details: None | +----------------------------------------------------------------------+ | Check: Incomplete Consumers | | Result: Success | | Details: None | +----------------------------------------------------------------------+ | Check: Policy File JSON to YAML Migration | | Result: Failure | | Details: Your policy file is JSON-formatted which is deprecated. You | | need to switch to YAML-formatted file. Use the | | ``oslopolicy-convert-json-to-yaml`` tool to convert the | | existing JSON-formatted files to YAML in a backwards- | | compatible manner: https://docs.openstack.org/oslo.policy/ | | latest/cli/oslopolicy-convert-json-to-yaml.html. 
| +----------------------------------------------------------------------+ \u8fd9\u91cc\u53ef\u4ee5\u770b\u5230 Policy File JSON to YAML Migration \u7684\u7ed3\u679c\u4e3aFailure\u3002\u8fd9\u662f\u56e0\u4e3a\u5728Placement\u4e2d\uff0cJSON\u683c\u5f0f\u7684policy\u6587\u4ef6\u4eceWallaby\u7248\u672c\u5f00\u59cb\u5df2\u5904\u4e8e deprecated \u72b6\u6001\u3002\u53ef\u4ee5\u53c2\u8003\u63d0\u793a\uff0c\u4f7f\u7528 oslopolicy-convert-json-to-yaml \u5de5\u5177 \u5c06\u73b0\u6709\u7684JSON\u683c\u5f0fpolicy\u6587\u4ef6\u8f6c\u5316\u4e3aYAML\u683c\u5f0f\u3002 oslopolicy-convert-json-to-yaml --namespace placement \\ --policy-file /etc/placement/policy.json \\ --output-file /etc/placement/policy.yaml mv /etc/placement/policy.json{,.bak} \u6ce8\uff1a\u5f53\u524d\u73af\u5883\u4e2d\u6b64\u95ee\u9898\u53ef\u5ffd\u7565\uff0c\u4e0d\u5f71\u54cd\u8fd0\u884c\u3002 \u9488\u5bf9placement API\u8fd0\u884c\u547d\u4ee4\uff1a \u5b89\u88c5osc-placement\u63d2\u4ef6\uff1a dnf install python3-osc-placement \u5217\u51fa\u53ef\u7528\u7684\u8d44\u6e90\u7c7b\u522b\u53ca\u7279\u6027\uff1a openstack --os-placement-api-version 1.2 resource class list --sort-column name +----------------------------+ | name | +----------------------------+ | DISK_GB | | FPGA | | ... | openstack --os-placement-api-version 1.6 trait list --sort-column name +---------------------------------------+ | name | +---------------------------------------+ | COMPUTE_ACCELERATORS | | COMPUTE_ARCH_AARCH64 | | ... |","title":"Placement"},{"location":"install/openEuler-22.09/OpenStack-yoga/#nova","text":"Nova\u662fOpenStack\u7684\u8ba1\u7b97\u670d\u52a1\uff0c\u8d1f\u8d23\u865a\u62df\u673a\u7684\u521b\u5efa\u3001\u53d1\u653e\u7b49\u529f\u80fd\u3002 Controller\u8282\u70b9 \u5728\u63a7\u5236\u8282\u70b9\u6267\u884c\u4ee5\u4e0b\u64cd\u4f5c\u3002 \u521b\u5efa\u6570\u636e\u5e93 \u4f7f\u7528root\u7528\u6237\u8bbf\u95ee\u6570\u636e\u5e93\u670d\u52a1\uff1a mysql -u root -p \u521b\u5efa nova_api \u3001 nova \u548c nova_cell0 \u6570\u636e\u5e93\uff1a MariaDB [(none)]> CREATE DATABASE nova_api; MariaDB [(none)]> CREATE DATABASE nova; MariaDB [(none)]> CREATE DATABASE nova_cell0; \u6388\u6743\u6570\u636e\u5e93\u8bbf\u95ee\uff1a MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \\ IDENTIFIED BY 'NOVA_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \\ IDENTIFIED BY 'NOVA_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \\ IDENTIFIED BY 'NOVA_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \\ IDENTIFIED BY 'NOVA_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \\ IDENTIFIED BY 'NOVA_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \\ IDENTIFIED BY 'NOVA_DBPASS'; \u66ff\u6362 NOVA_DBPASS \u4e3anova\u76f8\u5173\u6570\u636e\u5e93\u8bbf\u95ee\u5bc6\u7801\u3002 \u9000\u51fa\u6570\u636e\u5e93\u8bbf\u95ee\u5ba2\u6237\u7aef\uff1a exit \u914d\u7f6e\u7528\u6237\u548cEndpoints source admin\u51ed\u8bc1\uff0c\u4ee5\u83b7\u53d6admin\u547d\u4ee4\u884c\u6743\u9650\uff1a source ~/.admin-openrc \u521b\u5efanova\u7528\u6237\u5e76\u8bbe\u7f6e\u7528\u6237\u5bc6\u7801\uff1a openstack user create --domain default --password-prompt nova User Password: Repeat User Password: \u6dfb\u52a0nova\u7528\u6237\u5230service project\u5e76\u6307\u5b9aadmin\u89d2\u8272\uff1a openstack role add --project service --user nova admin \u521b\u5efanova\u670d\u52a1\u5b9e\u4f53\uff1a openstack service create --name nova \\ --description 
\"OpenStack Compute\" compute \u521b\u5efaNova API\u670d\u52a1endpoints\uff1a openstack endpoint create --region RegionOne \\ compute public http://controller:8774/v2.1 openstack endpoint create --region RegionOne \\ compute internal http://controller:8774/v2.1 openstack endpoint create --region RegionOne \\ compute admin http://controller:8774/v2.1 \u5b89\u88c5\u53ca\u914d\u7f6e\u7ec4\u4ef6 \u5b89\u88c5\u8f6f\u4ef6\u5305\uff1a dnf install openstack-nova-api openstack-nova-conductor \\ openstack-nova-novncproxy openstack-nova-scheduler \u7f16\u8f91 /etc/nova/nova.conf \u914d\u7f6e\u6587\u4ef6\uff0c\u5b8c\u6210\u5982\u4e0b\u64cd\u4f5c\uff1a \u5728 [default] \u90e8\u5206\uff0c\u542f\u7528\u8ba1\u7b97\u548c\u5143\u6570\u636e\u7684API\uff0c\u914d\u7f6eRabbitMQ\u6d88\u606f\u961f\u5217\u5165\u53e3\uff0c\u4f7f\u7528controller\u8282\u70b9\u7ba1\u7406IP\u914d\u7f6emy_ip\uff0c\u663e\u5f0f\u5b9a\u4e49log_dir\uff1a [DEFAULT] enabled_apis = osapi_compute,metadata transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/ my_ip = 192.168.0.2 log_dir = /var/log/nova \u66ff\u6362 RABBIT_PASS \u4e3aRabbitMQ\u4e2dopenstack\u8d26\u6237\u7684\u5bc6\u7801\u3002 \u5728 [api_database] \u548c [database] \u90e8\u5206\uff0c\u914d\u7f6e\u6570\u636e\u5e93\u5165\u53e3\uff1a [api_database] connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api [database] connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova \u66ff\u6362 NOVA_DBPASS \u4e3anova\u76f8\u5173\u6570\u636e\u5e93\u7684\u5bc6\u7801\u3002 \u5728 [api] \u548c [keystone_authtoken] \u90e8\u5206\uff0c\u914d\u7f6e\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u5165\u53e3\uff1a [api] auth_strategy = keystone [keystone_authtoken] auth_url = http://controller:5000/v3 memcached_servers = controller:11211 auth_type = password project_domain_name = Default user_domain_name = Default project_name = service username = nova password = NOVA_PASS \u66ff\u6362 NOVA_PASS \u4e3anova\u7528\u6237\u7684\u5bc6\u7801\u3002 \u5728 [vnc] \u90e8\u5206\uff0c\u542f\u7528\u5e76\u914d\u7f6e\u8fdc\u7a0b\u63a7\u5236\u53f0\u5165\u53e3\uff1a [vnc] enabled = true server_listen = $my_ip server_proxyclient_address = $my_ip \u5728 [glance] \u90e8\u5206\uff0c\u914d\u7f6e\u955c\u50cf\u670d\u52a1API\u7684\u5730\u5740\uff1a [glance] api_servers = http://controller:9292 \u5728 [oslo_concurrency] \u90e8\u5206\uff0c\u914d\u7f6elock path\uff1a [oslo_concurrency] lock_path = /var/lib/nova/tmp [placement]\u90e8\u5206\uff0c\u914d\u7f6eplacement\u670d\u52a1\u7684\u5165\u53e3\uff1a [placement] region_name = RegionOne project_domain_name = Default project_name = service auth_type = password user_domain_name = Default auth_url = http://controller:5000/v3 username = placement password = PLACEMENT_PASS \u66ff\u6362 PLACEMENT_PASS \u4e3aplacement\u7528\u6237\u7684\u5bc6\u7801\u3002 \u6570\u636e\u5e93\u540c\u6b65\uff1a \u540c\u6b65nova-api\u6570\u636e\u5e93\uff1a su -s /bin/sh -c \"nova-manage api_db sync\" nova \u6ce8\u518ccell0\u6570\u636e\u5e93\uff1a su -s /bin/sh -c \"nova-manage cell_v2 map_cell0\" nova \u521b\u5efacell1 cell\uff1a su -s /bin/sh -c \"nova-manage cell_v2 create_cell --name=cell1 --verbose\" nova \u540c\u6b65nova\u6570\u636e\u5e93\uff1a su -s /bin/sh -c \"nova-manage db sync\" nova \u9a8c\u8bc1cell0\u548ccell1\u6ce8\u518c\u6b63\u786e\uff1a su -s /bin/sh -c \"nova-manage cell_v2 list_cells\" nova \u542f\u52a8\u670d\u52a1 systemctl enable \\ openstack-nova-api.service \\ openstack-nova-scheduler.service \\ openstack-nova-conductor.service \\ openstack-nova-novncproxy.service systemctl 
start \\ openstack-nova-api.service \\ openstack-nova-scheduler.service \\ openstack-nova-conductor.service \\ openstack-nova-novncproxy.service Compute\u8282\u70b9 \u5728\u8ba1\u7b97\u8282\u70b9\u6267\u884c\u4ee5\u4e0b\u64cd\u4f5c\u3002 \u5b89\u88c5\u8f6f\u4ef6\u5305 dnf install openstack-nova-compute \u7f16\u8f91 /etc/nova/nova.conf \u914d\u7f6e\u6587\u4ef6 \u5728 [default] \u90e8\u5206\uff0c\u542f\u7528\u8ba1\u7b97\u548c\u5143\u6570\u636e\u7684API\uff0c\u914d\u7f6eRabbitMQ\u6d88\u606f\u961f\u5217\u5165\u53e3\uff0c\u4f7f\u7528Compute\u8282\u70b9\u7ba1\u7406IP\u914d\u7f6emy_ip\uff0c\u663e\u5f0f\u5b9a\u4e49compute_driver\u3001instances_path\u3001log_dir\uff1a [DEFAULT] enabled_apis = osapi_compute,metadata transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/ my_ip = 192.168.0.3 compute_driver = libvirt.LibvirtDriver instances_path = /var/lib/nova/instances log_dir = /var/log/nova \u66ff\u6362 RABBIT_PASS \u4e3aRabbitMQ\u4e2dopenstack\u8d26\u6237\u7684\u5bc6\u7801\u3002 \u5728 [api] \u548c [keystone_authtoken] \u90e8\u5206\uff0c\u914d\u7f6e\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u5165\u53e3\uff1a [api] auth_strategy = keystone [keystone_authtoken] auth_url = http://controller:5000/v3 memcached_servers = controller:11211 auth_type = password project_domain_name = Default user_domain_name = Default project_name = service username = nova password = NOVA_PASS \u66ff\u6362 NOVA_PASS \u4e3anova\u7528\u6237\u7684\u5bc6\u7801\u3002 \u5728 [vnc] \u90e8\u5206\uff0c\u542f\u7528\u5e76\u914d\u7f6e\u8fdc\u7a0b\u63a7\u5236\u53f0\u5165\u53e3\uff1a [vnc] enabled = true server_listen = $my_ip server_proxyclient_address = $my_ip novncproxy_base_url = http://controller:6080/vnc_auto.html \u5728 [glance] \u90e8\u5206\uff0c\u914d\u7f6e\u955c\u50cf\u670d\u52a1API\u7684\u5730\u5740\uff1a [glance] api_servers = http://controller:9292 \u5728 [oslo_concurrency] \u90e8\u5206\uff0c\u914d\u7f6elock path\uff1a [oslo_concurrency] lock_path = /var/lib/nova/tmp [placement]\u90e8\u5206\uff0c\u914d\u7f6eplacement\u670d\u52a1\u7684\u5165\u53e3\uff1a [placement] region_name = RegionOne project_domain_name = Default project_name = service auth_type = password user_domain_name = Default auth_url = http://controller:5000/v3 username = placement password = PLACEMENT_PASS \u66ff\u6362 PLACEMENT_PASS \u4e3aplacement\u7528\u6237\u7684\u5bc6\u7801\u3002 \u786e\u8ba4\u8ba1\u7b97\u8282\u70b9\u662f\u5426\u652f\u6301\u865a\u62df\u673a\u786c\u4ef6\u52a0\u901f\uff08x86_64\uff09 \u5904\u7406\u5668\u4e3ax86_64\u67b6\u6784\u65f6\uff0c\u53ef\u901a\u8fc7\u8fd0\u884c\u5982\u4e0b\u547d\u4ee4\u786e\u8ba4\u662f\u5426\u652f\u6301\u786c\u4ef6\u52a0\u901f\uff1a egrep -c '(vmx|svm)' /proc/cpuinfo \u5982\u679c\u8fd4\u56de\u503c\u4e3a0\u5219\u4e0d\u652f\u6301\u786c\u4ef6\u52a0\u901f\uff0c\u9700\u8981\u914d\u7f6elibvirt\u4f7f\u7528QEMU\u800c\u4e0d\u662f\u9ed8\u8ba4\u7684KVM\u3002\u7f16\u8f91 /etc/nova/nova.conf \u7684 [libvirt] \u90e8\u5206\uff1a [libvirt] virt_type = qemu \u5982\u679c\u8fd4\u56de\u503c\u4e3a1\u6216\u66f4\u5927\u7684\u503c\uff0c\u5219\u652f\u6301\u786c\u4ef6\u52a0\u901f\uff0c\u4e0d\u9700\u8981\u8fdb\u884c\u989d\u5916\u7684\u914d\u7f6e\u3002 \u786e\u8ba4\u8ba1\u7b97\u8282\u70b9\u662f\u5426\u652f\u6301\u865a\u62df\u673a\u786c\u4ef6\u52a0\u901f\uff08arm64\uff09 \u5904\u7406\u5668\u4e3aarm64\u67b6\u6784\u65f6\uff0c\u53ef\u901a\u8fc7\u8fd0\u884c\u5982\u4e0b\u547d\u4ee4\u786e\u8ba4\u662f\u5426\u652f\u6301\u786c\u4ef6\u52a0\u901f\uff1a virt-host-validate # 
\u8be5\u547d\u4ee4\u7531libvirt\u63d0\u4f9b\uff0c\u6b64\u65f6libvirt\u5e94\u5df2\u4f5c\u4e3aopenstack-nova-compute\u4f9d\u8d56\u88ab\u5b89\u88c5\uff0c\u73af\u5883\u4e2d\u5df2\u6709\u6b64\u547d\u4ee4 \u663e\u793aFAIL\u65f6\uff0c\u8868\u793a\u4e0d\u652f\u6301\u786c\u4ef6\u52a0\u901f\uff0c\u9700\u8981\u914d\u7f6elibvirt\u4f7f\u7528QEMU\u800c\u4e0d\u662f\u9ed8\u8ba4\u7684KVM\u3002 QEMU: Checking if device /dev/kvm exists: FAIL (Check that CPU and firmware supports virtualization and kvm module is loaded) \u7f16\u8f91 /etc/nova/nova.conf \u7684 [libvirt] \u90e8\u5206\uff1a [libvirt] virt_type = qemu \u663e\u793aPASS\u65f6\uff0c\u8868\u793a\u652f\u6301\u786c\u4ef6\u52a0\u901f\uff0c\u4e0d\u9700\u8981\u8fdb\u884c\u989d\u5916\u7684\u914d\u7f6e\u3002 QEMU: Checking if device /dev/kvm exists: PASS \u914d\u7f6eqemu\uff08\u4ec5arm64\uff09 \u4ec5\u5f53\u5904\u7406\u5668\u4e3aarm64\u67b6\u6784\u65f6\u9700\u8981\u6267\u884c\u6b64\u64cd\u4f5c\u3002 \u7f16\u8f91 /etc/libvirt/qemu.conf : nvram = [\"/usr/share/AAVMF/AAVMF_CODE.fd: \\ /usr/share/AAVMF/AAVMF_VARS.fd\", \\ \"/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw: \\ /usr/share/edk2/aarch64/vars-template-pflash.raw\"] \u7f16\u8f91 /etc/qemu/firmware/edk2-aarch64.json { \"description\": \"UEFI firmware for ARM64 virtual machines\", \"interface-types\": [ \"uefi\" ], \"mapping\": { \"device\": \"flash\", \"executable\": { \"filename\": \"/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw\", \"format\": \"raw\" }, \"nvram-template\": { \"filename\": \"/usr/share/edk2/aarch64/vars-template-pflash.raw\", \"format\": \"raw\" } }, \"targets\": [ { \"architecture\": \"aarch64\", \"machines\": [ \"virt-*\" ] } ], \"features\": [ ], \"tags\": [ ] } \u542f\u52a8\u670d\u52a1 systemctl enable libvirtd.service openstack-nova-compute.service systemctl start libvirtd.service openstack-nova-compute.service Controller\u8282\u70b9 \u5728\u63a7\u5236\u8282\u70b9\u6267\u884c\u4ee5\u4e0b\u64cd\u4f5c\u3002 \u6dfb\u52a0\u8ba1\u7b97\u8282\u70b9\u5230openstack\u96c6\u7fa4 source admin\u51ed\u8bc1\uff0c\u4ee5\u83b7\u53d6admin\u547d\u4ee4\u884c\u6743\u9650\uff1a source ~/.admin-openrc \u786e\u8ba4nova-compute\u670d\u52a1\u5df2\u8bc6\u522b\u5230\u6570\u636e\u5e93\u4e2d\uff1a openstack compute service list --service nova-compute \u53d1\u73b0\u8ba1\u7b97\u8282\u70b9\uff0c\u5c06\u8ba1\u7b97\u8282\u70b9\u6dfb\u52a0\u5230cell\u6570\u636e\u5e93\uff1a su -s /bin/sh -c \"nova-manage cell_v2 discover_hosts --verbose\" nova \u7ed3\u679c\u5982\u4e0b\uff1a Modules with known eventlet monkey patching issues were imported prior to eventlet monkey patching: urllib3. This warning can usually be ignored if the caller is only importing and not executing nova code. Found 2 cell mappings. Skipping cell0 since it does not contain hosts. 
Getting computes from cell 'cell1': 6dae034e-b2d9-4a6c-b6f0-60ada6a6ddc2 Checking host mapping for compute host 'compute': 6286a86f-09d7-4786-9137-1185654c9e2e Creating host mapping for compute host 'compute': 6286a86f-09d7-4786-9137-1185654c9e2e Found 1 unmapped computes in cell: 6dae034e-b2d9-4a6c-b6f0-60ada6a6ddc2 \u9a8c\u8bc1 \u5217\u51fa\u670d\u52a1\u7ec4\u4ef6\uff0c\u9a8c\u8bc1\u6bcf\u4e2a\u6d41\u7a0b\u90fd\u6210\u529f\u542f\u52a8\u548c\u6ce8\u518c\uff1a openstack compute service list \u5217\u51fa\u8eab\u4efd\u670d\u52a1\u4e2d\u7684API\u7aef\u70b9\uff0c\u9a8c\u8bc1\u4e0e\u8eab\u4efd\u670d\u52a1\u7684\u8fde\u63a5\uff1a openstack catalog list \u5217\u51fa\u955c\u50cf\u670d\u52a1\u4e2d\u7684\u955c\u50cf\uff0c\u9a8c\u8bc1\u4e0e\u955c\u50cf\u670d\u52a1\u7684\u8fde\u63a5\uff1a openstack image list \u68c0\u67e5cells\u662f\u5426\u8fd0\u4f5c\u6210\u529f\uff0c\u4ee5\u53ca\u5176\u4ed6\u5fc5\u8981\u6761\u4ef6\u662f\u5426\u5df2\u5177\u5907\u3002 nova-status upgrade check","title":"Nova"},{"location":"install/openEuler-22.09/OpenStack-yoga/#neutron","text":"Neutron\u662fOpenStack\u7684\u7f51\u7edc\u670d\u52a1\uff0c\u63d0\u4f9b\u865a\u62df\u4ea4\u6362\u673a\u3001IP\u8def\u7531\u3001DHCP\u7b49\u529f\u80fd\u3002 Controller\u8282\u70b9 \u521b\u5efa\u6570\u636e\u5e93\u3001\u670d\u52a1\u51ed\u8bc1\u548c API \u670d\u52a1\u7aef\u70b9 \u521b\u5efa\u6570\u636e\u5e93\uff1a mysql -u root -p MariaDB [(none)]> CREATE DATABASE neutron; MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'NEUTRON_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'NEUTRON_DBPASS'; MariaDB [(none)]> exit; \u521b\u5efa\u7528\u6237\u548c\u670d\u52a1\uff0c\u5e76\u8bb0\u4f4f\u521b\u5efaneutron\u7528\u6237\u65f6\u8f93\u5165\u7684\u5bc6\u7801\uff0c\u7528\u4e8e\u914d\u7f6eNEUTRON_PASS\uff1a source ~/.admin-openrc openstack user create --domain default --password-prompt neutron openstack role add --project service --user neutron admin openstack service create --name neutron --description \"OpenStack Networking\" network \u90e8\u7f72 Neutron API \u670d\u52a1\uff1a openstack endpoint create --region RegionOne network public http://controller:9696 openstack endpoint create --region RegionOne network internal http://controller:9696 openstack endpoint create --region RegionOne network admin http://controller:9696 \u5b89\u88c5\u8f6f\u4ef6\u5305 dnf install -y openstack-neutron openstack-neutron-linuxbridge ebtables ipset openstack-neutron-ml2 3. 
\u914d\u7f6eNeutron \u4fee\u6539/etc/neutron/neutron.conf [database] connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron [DEFAULT] core_plugin = ml2 service_plugins = router allow_overlapping_ips = true transport_url = rabbit://openstack:RABBIT_PASS@controller auth_strategy = keystone notify_nova_on_port_status_changes = true notify_nova_on_port_data_changes = true [keystone_authtoken] www_authenticate_uri = http://controller:5000 auth_url = http://controller:5000 memcached_servers = controller:11211 auth_type = password project_domain_name = Default user_domain_name = Default project_name = service username = neutron password = NEUTRON_PASS [nova] auth_url = http://controller:5000 auth_type = password project_domain_name = Default user_domain_name = Default region_name = RegionOne project_name = service username = nova password = NOVA_PASS [oslo_concurrency] lock_path = /var/lib/neutron/tmp \u914d\u7f6eML2\uff0cML2\u5177\u4f53\u914d\u7f6e\u53ef\u4ee5\u6839\u636e\u7528\u6237\u9700\u6c42\u81ea\u884c\u4fee\u6539\uff0c\u672c\u6587\u4f7f\u7528\u7684\u662fprovider network + linuxbridge** \u4fee\u6539/etc/neutron/plugins/ml2/ml2_conf.ini [ml2] type_drivers = flat,vlan,vxlan tenant_network_types = vxlan mechanism_drivers = linuxbridge,l2population extension_drivers = port_security [ml2_type_flat] flat_networks = provider [ml2_type_vxlan] vni_ranges = 1:1000 [securitygroup] enable_ipset = true \u4fee\u6539/etc/neutron/plugins/ml2/linuxbridge_agent.ini [linux_bridge] physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME [vxlan] enable_vxlan = true local_ip = OVERLAY_INTERFACE_IP_ADDRESS l2_population = true [securitygroup] enable_security_group = true firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver \u914d\u7f6eLayer-3\u4ee3\u7406 \u4fee\u6539/etc/neutron/l3_agent.ini [DEFAULT] interface_driver = linuxbridge \u914d\u7f6eDHCP\u4ee3\u7406 \u4fee\u6539/etc/neutron/dhcp_agent.ini [DEFAULT] interface_driver = linuxbridge dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq enable_isolated_metadata = true \u914d\u7f6emetadata\u4ee3\u7406 \u4fee\u6539/etc/neutron/metadata_agent.ini [DEFAULT] nova_metadata_host = controller metadata_proxy_shared_secret = METADATA_SECRET \u914d\u7f6enova\u670d\u52a1\u4f7f\u7528neutron\uff0c\u4fee\u6539/etc/nova/nova.conf [neutron] auth_url = http://controller:5000 auth_type = password project_domain_name = default user_domain_name = default region_name = RegionOne project_name = service username = neutron password = NEUTRON_PASS service_metadata_proxy = true metadata_proxy_shared_secret = METADATA_SECRET \u521b\u5efa/etc/neutron/plugin.ini\u7684\u7b26\u53f7\u94fe\u63a5 ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini \u540c\u6b65\u6570\u636e\u5e93 su -s /bin/sh -c \"neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head\" neutron \u91cd\u542fnova api\u670d\u52a1 systemctl restart openstack-nova-api \u542f\u52a8\u7f51\u7edc\u670d\u52a1 systemctl enable neutron-server.service neutron-linuxbridge-agent.service \\ neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service systemctl start neutron-server.service neutron-linuxbridge-agent.service \\ neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service Compute\u8282\u70b9 \u5b89\u88c5\u8f6f\u4ef6\u5305 dnf install openstack-neutron-linuxbridge ebtables ipset -y \u914d\u7f6eNeutron \u4fee\u6539/etc/neutron/neutron.conf [DEFAULT] transport_url 
= rabbit://openstack:RABBIT_PASS@controller auth_strategy = keystone [keystone_authtoken] www_authenticate_uri = http://controller:5000 auth_url = http://controller:5000 memcached_servers = controller:11211 auth_type = password project_domain_name = Default user_domain_name = Default project_name = service username = neutron password = NEUTRON_PASS [oslo_concurrency] lock_path = /var/lib/neutron/tmp \u4fee\u6539/etc/neutron/plugins/ml2/linuxbridge_agent.ini [linux_bridge] physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME [vxlan] enable_vxlan = true local_ip = OVERLAY_INTERFACE_IP_ADDRESS l2_population = true [securitygroup] enable_security_group = true firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver \u914d\u7f6enova compute\u670d\u52a1\u4f7f\u7528neutron\uff0c\u4fee\u6539/etc/nova/nova.conf [neutron] auth_url = http://controller:5000 auth_type = password project_domain_name = default user_domain_name = default region_name = RegionOne project_name = service username = neutron password = NEUTRON_PASS \u91cd\u542fnova-compute\u670d\u52a1 systemctl restart openstack-nova-compute.service \u542f\u52a8Neutron linuxbridge agent\u670d\u52a1 systemctl enable neutron-linuxbridge-agent systemctl start neutron-linuxbridge-agent","title":"Neutron"},{"location":"install/openEuler-22.09/OpenStack-yoga/#cinder","text":"Cinder\u662fOpenStack\u7684\u5b58\u50a8\u670d\u52a1\uff0c\u63d0\u4f9b\u5757\u8bbe\u5907\u7684\u521b\u5efa\u3001\u53d1\u653e\u3001\u5907\u4efd\u7b49\u529f\u80fd\u3002 Controller\u8282\u70b9 \uff1a \u521d\u59cb\u5316\u6570\u636e\u5e93 CINDER_DBPASS \u662f\u7528\u6237\u81ea\u5b9a\u4e49\u7684cinder\u6570\u636e\u5e93\u5bc6\u7801\u3002 mysql -u root -p MariaDB [(none)]> CREATE DATABASE cinder; MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'CINDER_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'CINDER_DBPASS'; MariaDB [(none)]> exit \u521d\u59cb\u5316Keystone\u8d44\u6e90\u5bf9\u8c61 source ~/.admin-openrc #\u521b\u5efa\u7528\u6237\u65f6\uff0c\u547d\u4ee4\u884c\u4f1a\u63d0\u793a\u8f93\u5165\u5bc6\u7801\uff0c\u8bf7\u8f93\u5165\u81ea\u5b9a\u4e49\u7684\u5bc6\u7801\uff0c\u4e0b\u6587\u6d89\u53ca\u5230`CINDER_PASS`\u7684\u5730\u65b9\u66ff\u6362\u6210\u8be5\u5bc6\u7801\u5373\u53ef\u3002 openstack user create --domain default --password-prompt cinder openstack role add --project service --user cinder admin openstack service create --name cinderv3 --description \"OpenStack Block Storage\" volumev3 openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\\(project_id\\)s openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\\(project_id\\)s openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\\(project_id\\)s 3. 
\u5b89\u88c5\u8f6f\u4ef6\u5305 dnf install openstack-cinder-api openstack-cinder-scheduler \u4fee\u6539cinder\u914d\u7f6e\u6587\u4ef6 /etc/cinder/cinder.conf [DEFAULT] transport_url = rabbit://openstack:RABBIT_PASS@controller auth_strategy = keystone my_ip = 192.168.0.2 [database] connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder [keystone_authtoken] www_authenticate_uri = http://controller:5000 auth_url = http://controller:5000 memcached_servers = controller:11211 auth_type = password project_domain_name = Default user_domain_name = Default project_name = service username = cinder password = CINDER_PASS [oslo_concurrency] lock_path = /var/lib/cinder/tmp \u6570\u636e\u5e93\u540c\u6b65 su -s /bin/sh -c \"cinder-manage db sync\" cinder \u4fee\u6539nova\u914d\u7f6e /etc/nova/nova.conf [cinder] os_region_name = RegionOne \u542f\u52a8\u670d\u52a1 systemctl restart openstack-nova-api systemctl start openstack-cinder-api openstack-cinder-scheduler Storage\u8282\u70b9 \uff1a Storage\u8282\u70b9\u8981\u63d0\u524d\u51c6\u5907\u81f3\u5c11\u4e00\u5757\u786c\u76d8\uff0c\u4f5c\u4e3acinder\u7684\u5b58\u50a8\u540e\u7aef\uff0c\u4e0b\u6587\u9ed8\u8ba4storage\u8282\u70b9\u5df2\u7ecf\u5b58\u5728\u4e00\u5757\u672a\u4f7f\u7528\u7684\u786c\u76d8\uff0c\u8bbe\u5907\u540d\u79f0\u4e3a /dev/sdb \uff0c\u7528\u6237\u5728\u914d\u7f6e\u8fc7\u7a0b\u4e2d\uff0c\u8bf7\u6309\u7167\u771f\u5b9e\u73af\u5883\u4fe1\u606f\u8fdb\u884c\u540d\u79f0\u66ff\u6362\u3002 Cinder\u652f\u6301\u5f88\u591a\u7c7b\u578b\u7684\u540e\u7aef\u5b58\u50a8\uff0c\u672c\u6307\u5bfc\u4f7f\u7528\u6700\u7b80\u5355\u7684lvm\u4e3a\u53c2\u8003\uff0c\u5982\u679c\u60a8\u60f3\u4f7f\u7528\u5982ceph\u7b49\u5176\u4ed6\u540e\u7aef\uff0c\u8bf7\u81ea\u884c\u914d\u7f6e\u3002 \u5b89\u88c5\u8f6f\u4ef6\u5305 dnf install lvm2 device-mapper-persistent-data scsi-target-utils rpcbind nfs-utils openstack-cinder-volume openstack-cinder-backup \u914d\u7f6elvm\u5377\u7ec4 pvcreate /dev/sdb vgcreate cinder-volumes /dev/sdb \u4fee\u6539cinder\u914d\u7f6e /etc/cinder/cinder.conf [DEFAULT] transport_url = rabbit://openstack:RABBIT_PASS@controller auth_strategy = keystone my_ip = 192.168.0.4 enabled_backends = lvm glance_api_servers = http://controller:9292 [keystone_authtoken] www_authenticate_uri = http://controller:5000 auth_url = http://controller:5000 memcached_servers = controller:11211 auth_type = password project_domain_name = default user_domain_name = default project_name = service username = cinder password = CINDER_PASS [database] connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder [lvm] volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver volume_group = cinder-volumes target_protocol = iscsi target_helper = lioadm [oslo_concurrency] lock_path = /var/lib/cinder/tmp \u914d\u7f6ecinder backup \uff08\u53ef\u9009\uff09 cinder-backup\u662f\u53ef\u9009\u7684\u5907\u4efd\u670d\u52a1\uff0ccinder\u540c\u6837\u652f\u6301\u5f88\u591a\u79cd\u5907\u4efd\u540e\u7aef\uff0c\u672c\u6587\u4f7f\u7528swift\u5b58\u50a8\uff0c\u5982\u679c\u60a8\u60f3\u4f7f\u7528\u5982NFS\u7b49\u540e\u7aef\uff0c\u8bf7\u81ea\u884c\u914d\u7f6e\uff0c\u4f8b\u5982\u53ef\u4ee5\u53c2\u8003 OpenStack\u5b98\u65b9\u6587\u6863 \u5bf9NFS\u7684\u914d\u7f6e\u8bf4\u660e\u3002 \u4fee\u6539 /etc/cinder/cinder.conf \uff0c\u5728 [DEFAULT] \u4e2d\u65b0\u589e [DEFAULT] backup_driver = cinder.backup.drivers.swift.SwiftBackupDriver backup_swift_url = SWIFT_URL \u8fd9\u91cc\u7684 SWIFT_URL 
\u662f\u6307\u73af\u5883\u4e2dswift\u670d\u52a1\u7684URL\uff0c\u5728\u90e8\u7f72\u5b8cswift\u670d\u52a1\u540e\uff0c\u6267\u884c openstack catalog show object-store \u547d\u4ee4\u83b7\u53d6\u3002 \u542f\u52a8\u670d\u52a1 systemctl start openstack-cinder-volume target systemctl start openstack-cinder-backup (\u53ef\u9009) \u81f3\u6b64\uff0cCinder\u670d\u52a1\u7684\u90e8\u7f72\u5df2\u5168\u90e8\u5b8c\u6210\uff0c\u53ef\u4ee5\u5728controller\u901a\u8fc7\u4ee5\u4e0b\u547d\u4ee4\u8fdb\u884c\u7b80\u5355\u7684\u9a8c\u8bc1 source ~/.admin-openrc openstack storage service list openstack volume list","title":"Cinder"},{"location":"install/openEuler-22.09/OpenStack-yoga/#horizon","text":"Horizon\u662fOpenStack\u63d0\u4f9b\u7684\u524d\u7aef\u9875\u9762\uff0c\u53ef\u4ee5\u8ba9\u7528\u6237\u901a\u8fc7\u7f51\u9875\u9f20\u6807\u7684\u64cd\u4f5c\u6765\u63a7\u5236OpenStack\u96c6\u7fa4\uff0c\u800c\u4e0d\u7528\u7e41\u7410\u7684CLI\u547d\u4ee4\u884c\u3002Horizon\u4e00\u822c\u90e8\u7f72\u5728\u63a7\u5236\u8282\u70b9\u3002 \u5b89\u88c5\u8f6f\u4ef6\u5305 dnf install openstack-dashboard \u4fee\u6539\u914d\u7f6e\u6587\u4ef6 /etc/openstack-dashboard/local_settings OPENSTACK_HOST = \"controller\" ALLOWED_HOSTS = ['*', ] OPENSTACK_KEYSTONE_URL = \"http://controller:5000/v3\" SESSION_ENGINE = 'django.contrib.sessions.backends.cache' CACHES = { 'default': { 'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache', 'LOCATION': 'controller:11211', } } OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = \"Default\" OPENSTACK_KEYSTONE_DEFAULT_ROLE = \"member\" WEBROOT = '/dashboard' POLICY_FILES_PATH = \"/etc/openstack-dashboard\" OPENSTACK_API_VERSIONS = { \"identity\": 3, \"image\": 2, \"volume\": 3, } \u91cd\u542f\u670d\u52a1 systemctl restart httpd \u81f3\u6b64\uff0chorizon\u670d\u52a1\u7684\u90e8\u7f72\u5df2\u5168\u90e8\u5b8c\u6210\uff0c\u6253\u5f00\u6d4f\u89c8\u5668\uff0c\u8f93\u5165 http://192.168.0.2/dashboard \uff0c\u6253\u5f00horizon\u767b\u5f55\u9875\u9762\u3002","title":"Horizon"},{"location":"install/openEuler-22.09/OpenStack-yoga/#ironic","text":"Ironic\u662fOpenStack\u7684\u88f8\u91d1\u5c5e\u670d\u52a1\uff0c\u5982\u679c\u7528\u6237\u9700\u8981\u8fdb\u884c\u88f8\u673a\u90e8\u7f72\u5219\u63a8\u8350\u4f7f\u7528\u8be5\u7ec4\u4ef6\u3002\u5426\u5219\uff0c\u53ef\u4ee5\u4e0d\u7528\u5b89\u88c5\u3002 \u5728\u63a7\u5236\u8282\u70b9\u6267\u884c\u4ee5\u4e0b\u64cd\u4f5c\u3002 \u8bbe\u7f6e\u6570\u636e\u5e93 \u88f8\u91d1\u5c5e\u670d\u52a1\u5728\u6570\u636e\u5e93\u4e2d\u5b58\u50a8\u4fe1\u606f\uff0c\u521b\u5efa\u4e00\u4e2a ironic \u7528\u6237\u53ef\u4ee5\u8bbf\u95ee\u7684 ironic \u6570\u636e\u5e93\uff0c\u66ff\u6362 IRONIC_DBPASS \u4e3a\u5408\u9002\u7684\u5bc6\u7801 mysql -u root -p MariaDB [(none)]> CREATE DATABASE ironic CHARACTER SET utf8; MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'localhost' \\ IDENTIFIED BY 'IRONIC_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'%' \\ IDENTIFIED BY 'IRONIC_DBPASS'; MariaDB [(none)]> exit Bye \u521b\u5efa\u670d\u52a1\u7528\u6237\u8ba4\u8bc1 \u521b\u5efaBare Metal\u670d\u52a1\u7528\u6237 \u66ff\u6362 IRONIC_PASS \u4e3aironic\u7528\u6237\u5bc6\u7801\uff0c IRONIC_INSPECTOR_PASS \u4e3aironic_inspector\u7528\u6237\u5bc6\u7801\u3002 openstack user create --password IRONIC_PASS \\ --email ironic@example.com ironic openstack role add --project service --user ironic admin openstack service create --name ironic \\ --description \"Ironic baremetal provisioning service\" baremetal openstack service create --name 
openstack service create --name ironic-inspector --description \"Ironic inspector baremetal provisioning service\" baremetal-introspection openstack user create --password IRONIC_INSPECTOR_PASS --email ironic_inspector@example.com ironic-inspector openstack role add --project service --user ironic-inspector admin Create the Bare Metal service endpoints: openstack endpoint create --region RegionOne baremetal admin http://192.168.0.2:6385 openstack endpoint create --region RegionOne baremetal public http://192.168.0.2:6385 openstack endpoint create --region RegionOne baremetal internal http://192.168.0.2:6385 openstack endpoint create --region RegionOne baremetal-introspection internal http://192.168.0.2:5050/v1 openstack endpoint create --region RegionOne baremetal-introspection public http://192.168.0.2:5050/v1 openstack endpoint create --region RegionOne baremetal-introspection admin http://192.168.0.2:5050/v1 Install the components: dnf install openstack-ironic-api openstack-ironic-conductor python3-ironicclient Configure the ironic-api service. The configuration file path is /etc/ironic/ironic.conf. Configure the location of the database with the connection option as shown below, replacing IRONIC_DBPASS with the password of the ironic user and DB_IP with the IP address of the DB server: [database] # The SQLAlchemy connection string used to connect to the # database (string value) # connection = mysql+pymysql://ironic:IRONIC_DBPASS@DB_IP/ironic connection = mysql+pymysql://ironic:IRONIC_DBPASS@controller/ironic Configure the ironic-api service to use the RabbitMQ message broker with the following options, replacing RPC_* with the RabbitMQ address details and credentials: [DEFAULT] # A URL representing the messaging driver to use and its full # configuration. (string value) # transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/ transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/ You may also replace rabbitmq with json-rpc on your own. Configure the ironic-api service to use the credentials of the identity service, replacing PUBLIC_IDENTITY_IP with the public IP of the identity server, PRIVATE_IDENTITY_IP with the private IP of the identity server, IRONIC_PASS with the password of the ironic user in the identity service, and RABBIT_PASS with the password of the openstack account in RabbitMQ:
[DEFAULT] # Authentication strategy used by ironic-api: one of # \"keystone\" or \"noauth\". \"noauth\" should not be used in a # production environment because all authentication will be # disabled. (string value) auth_strategy=keystone host = controller memcache_servers = controller:11211 enabled_network_interfaces = flat,noop,neutron default_network_interface = noop enabled_hardware_types = ipmi enabled_boot_interfaces = pxe enabled_deploy_interfaces = direct default_deploy_interface = direct enabled_inspect_interfaces = inspector enabled_management_interfaces = ipmitool enabled_power_interfaces = ipmitool enabled_rescue_interfaces = no-rescue,agent isolinux_bin = /usr/share/syslinux/isolinux.bin logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s [keystone_authtoken] # Authentication type to load (string value) auth_type=password # Complete public Identity API endpoint (string value) # www_authenticate_uri=http://PUBLIC_IDENTITY_IP:5000 www_authenticate_uri=http://controller:5000 # Complete admin Identity API endpoint. (string value) # auth_url=http://PRIVATE_IDENTITY_IP:5000 auth_url=http://controller:5000 # Service username. (string value) username=ironic # Service account password. (string value) password=IRONIC_PASS # Service tenant name. (string value) project_name=service # Domain name containing project (string value) project_domain_name=Default # User's domain name (string value) user_domain_name=Default [agent] deploy_logs_collect = always deploy_logs_local_path = /var/log/ironic/deploy deploy_logs_storage_backend = local image_download_source = http stream_raw_images = false force_raw_images = false verify_ca = False [oslo_concurrency] [oslo_messaging_notifications] transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/ topics = notifications driver = messagingv2 [oslo_messaging_rabbit] amqp_durable_queues = True rabbit_ha_queues = True [pxe] ipxe_enabled = false pxe_append_params = nofb nomodeset vga=normal coreos.autologin ipa-insecure=1 image_cache_size = 204800 tftp_root=/var/lib/tftpboot/cephfs/ tftp_master_path=/var/lib/tftpboot/cephfs/master_images [dhcp] dhcp_provider = none Create the bare metal service database tables: ironic-dbsync --config-file /etc/ironic/ironic.conf create_schema Restart the ironic-api service: sudo systemctl restart openstack-ironic-api Configure the ironic-conductor service. The following is the standard configuration of the ironic-conductor service itself. ironic-conductor can run on a different node from ironic-api; in this guide both are deployed on the controller node, so duplicated configuration items can be skipped. Configure my_ip with the IP of the host running the conductor service: [DEFAULT] # IP address of this host. If unset, will determine the IP # programmatically. If unable to do so, will use \"127.0.0.1\". # (string value) # my_ip=HOST_IP my_ip = 192.168.0.2 Configure the location of the database; ironic-conductor should use the same configuration as ironic-api. Replace IRONIC_DBPASS with the password of the ironic user:
[database] # The SQLAlchemy connection string to use to connect to the # database. (string value) connection = mysql+pymysql://ironic:IRONIC_DBPASS@controller/ironic Configure the RabbitMQ message broker with the following option; ironic-conductor should use the same configuration as ironic-api. Replace RABBIT_PASS with the password of the openstack account in RabbitMQ: [DEFAULT] # A URL representing the messaging driver to use and its full # configuration. (string value) transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/ You may also replace rabbitmq with json-rpc on your own. Configure credentials for accessing other OpenStack services. To communicate with other OpenStack services, the bare metal service must authenticate with the OpenStack Identity service using service users when it calls them. The credentials of these users must be configured in each configuration section related to the corresponding service: [neutron] - access to the OpenStack networking service [glance] - access to the OpenStack image service [swift] - access to the OpenStack object storage service [cinder] - access to the OpenStack block storage service [inspector] - access to the OpenStack bare metal introspection service [service_catalog] - a special entry that stores the credentials the bare metal service uses to discover its own API URL endpoint as registered in the OpenStack Identity service catalog For simplicity, the same service user can be used for all services. For backward compatibility it should be the same user configured in [keystone_authtoken] of the ironic-api service, but this is not mandatory; a different service user can be created and configured for each service. In the example below, the authentication information used to access the OpenStack networking service is configured as follows: the networking service is deployed in the identity service region named RegionOne and only its public endpoint is registered in the service catalog; requests use a specific CA SSL certificate for HTTPS connections; the same service user as configured for ironic-api is used; and the dynamic password authentication plugin discovers a suitable identity service API version based on the other options. Replace IRONIC_PASS with the ironic user password. [neutron] # Authentication type to load (string value) auth_type = password # Authentication URL (string value) auth_url=https://IDENTITY_IP:5000/ # Username (string value) username=ironic # User's password (string value) password=IRONIC_PASS # Project name to scope to (string value) project_name=service # Domain ID containing project (string value) project_domain_id=default # User's domain id (string value) user_domain_id=default
# PEM encoded Certificate Authority to use when verifying # HTTPs connections. (string value) cafile=/opt/stack/data/ca-bundle.pem # The default region_name for endpoint URL discovery. (string # value) region_name = RegionOne # List of interfaces, in order of preference, for endpoint # URL. (list value) valid_interfaces=public # other reference configuration [glance] endpoint_override = http://controller:9292 www_authenticate_uri = http://controller:5000 auth_url = http://controller:5000 auth_type = password username = ironic password = IRONIC_PASS project_domain_name = default user_domain_name = default region_name = RegionOne project_name = service [service_catalog] region_name = RegionOne project_domain_id = default user_domain_id = default project_name = service password = IRONIC_PASS username = ironic auth_url = http://controller:5000 auth_type = password By default, in order to communicate with another service, the bare metal service tries to discover a suitable endpoint for that service through the service catalog of the identity service. If you want to use a different endpoint for a particular service, specify it with the endpoint_override option in the bare metal service configuration file: [neutron] endpoint_override = Configure the allowed drivers and hardware types. Set the hardware types allowed by the ironic-conductor service through enabled_hardware_types: [DEFAULT] enabled_hardware_types = ipmi Configure the hardware interfaces: enabled_boot_interfaces = pxe enabled_deploy_interfaces = direct,iscsi enabled_inspect_interfaces = inspector enabled_management_interfaces = ipmitool enabled_power_interfaces = ipmitool Configure the interface defaults: [DEFAULT] default_deploy_interface = direct default_network_interface = neutron If any driver that uses direct deploy is enabled, the Swift backend of the image service must be installed and configured. The Ceph Object Gateway (RADOS Gateway) is also supported as an image service backend. Restart the ironic-conductor service: sudo systemctl restart openstack-ironic-conductor Configure the ironic-inspector service. Install the component: dnf install openstack-ironic-inspector Create the database: # mysql -u root -p MariaDB [(none)]> CREATE DATABASE ironic_inspector CHARACTER SET utf8; MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic_inspector.* TO 'ironic_inspector'@'localhost' \\ IDENTIFIED BY 'IRONIC_INSPECTOR_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic_inspector.* TO 'ironic_inspector'@'%' \\ IDENTIFIED BY 'IRONIC_INSPECTOR_DBPASS'; MariaDB [(none)]> exit Bye Configure /etc/ironic-inspector/inspector.conf. Configure the location of the database with the connection option as shown below, replacing IRONIC_INSPECTOR_DBPASS with the password of the ironic_inspector user: [database] backend = sqlalchemy connection = mysql+pymysql://ironic_inspector:IRONIC_INSPECTOR_DBPASS@controller/ironic_inspector min_pool_size = 100 max_pool_size = 500 pool_timeout = 30 max_retries = 5 max_overflow = 200 db_retry_interval = 2
db_inc_retry_interval = True db_max_retry_interval = 2 db_max_retries = 5 Configure the message queue address: [DEFAULT] transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/ Set up keystone authentication: [DEFAULT] auth_strategy = keystone timeout = 900 rootwrap_config = /etc/ironic-inspector/rootwrap.conf logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_dir = /var/log/ironic-inspector state_path = /var/lib/ironic-inspector use_stderr = False [ironic] api_endpoint = http://IRONIC_API_HOST_ADDRESS:6385 auth_type = password auth_url = http://PUBLIC_IDENTITY_IP:5000 auth_strategy = keystone ironic_url = http://IRONIC_API_HOST_ADDRESS:6385 os_region = RegionOne project_name = service project_domain_name = Default user_domain_name = Default username = IRONIC_SERVICE_USER_NAME password = IRONIC_SERVICE_USER_PASSWORD [keystone_authtoken] auth_type = password auth_url = http://controller:5000 www_authenticate_uri = http://controller:5000 project_domain_name = default user_domain_name = default project_name = service username = ironic_inspector password = IRONICPASSWD region_name = RegionOne memcache_servers = controller:11211 token_cache_time = 300 [processing] add_ports = active processing_hooks = $default_processing_hooks,local_link_connection,lldp_basic ramdisk_logs_dir = /var/log/ironic-inspector/ramdisk always_store_ramdisk_logs = true store_data = none power_off = false [pxe_filter] driver = iptables [capabilities] boot_mode=True Configure the ironic-inspector dnsmasq service: # configuration file: /etc/ironic-inspector/dnsmasq.conf port=0 interface=enp3s0 # replace with the actual listening network interface dhcp-range=192.168.0.40,192.168.0.50 # replace with the actual DHCP address range bind-interfaces enable-tftp dhcp-match=set:efi,option:client-arch,7 dhcp-match=set:efi,option:client-arch,9 dhcp-match=aarch64, option:client-arch,11 dhcp-boot=tag:aarch64,grubaa64.efi dhcp-boot=tag:!aarch64,tag:efi,grubx64.efi dhcp-boot=tag:!aarch64,tag:!efi,pxelinux.0 tftp-root=/tftpboot # replace with the actual tftpboot directory log-facility=/var/log/dnsmasq.log Disable DHCP on the subnet of the ironic provisioning network: openstack subnet set --no-dhcp 72426e89-f552-4dc4-9ac7-c4e131ce7f3c Initialize the ironic-inspector database: ironic-inspector-dbsync --config-file /etc/ironic-inspector/inspector.conf upgrade Start the services: systemctl enable --now openstack-ironic-inspector.service systemctl enable --now openstack-ironic-inspector-dnsmasq.service Configure the httpd service. Create the httpd root directory used by ironic and set its owner and group; the directory path must match the path specified by the http_root option in the [deploy] section of /etc/ironic/ironic.conf: mkdir -p /var/lib/ironic/httproot chown ironic.ironic /var/lib/ironic/httproot Install and configure the httpd service. Install httpd (skip if it is already installed): dnf install httpd -y Create the file /etc/httpd/conf.d/openstack-ironic-httpd.conf with the following content:
Listen 8080 ServerName ironic.openeuler.com ErrorLog \"/var/log/httpd/openstack-ironic-httpd-error_log\" CustomLog \"/var/log/httpd/openstack-ironic-httpd-access_log\" \"%h %l %u %t \\\"%r\\\" %>s %b\" DocumentRoot \"/var/lib/ironic/httproot\" Options Indexes FollowSymLinks Require all granted LogLevel warn AddDefaultCharset UTF-8 EnableSendfile on Note that the listening port must match the port specified in the http_url option of the [deploy] section in /etc/ironic/ironic.conf. Restart the httpd service: systemctl restart httpd Download or build the deploy ramdisk images. Deploying a bare metal node requires two sets of images in total: deploy ramdisk images and user images. The deploy ramdisk images run the ironic-python-agent (IPA) service, through which Ironic prepares the environment of the bare metal node. The user images are what is finally installed on the bare metal node for the user. The ramdisk images can be built with the ironic-python-agent-builder or disk-image-builder tools; you can also choose other tools. If you use the native tools, the corresponding packages must be installed. For detailed usage see the official documentation; prebuilt deploy images are also published upstream and can be downloaded. The text below describes the complete process of building the deploy image used by ironic with ironic-python-agent-builder. Install ironic-python-agent-builder: dnf install python3-ironic-python-agent-builder python3-ironic-python-agent-builder-doc or pip3 install ironic-python-agent-builder dnf install qemu-img git Note: on 22.09, when installing with dnf, both the main package and the doc package must be installed. The files packaged under /usr/share in the doc package are required at runtime; later system versions will merge these files into the python3-ironic-python-agent-builder package. Build the image. Basic usage: usage: ironic-python-agent-builder [-h] [-r RELEASE] [-o OUTPUT] [-e ELEMENT] [-b BRANCH] [-v] [--lzma] [--extra-args EXTRA_ARGS] [--elements-path ELEMENTS_PATH] distribution positional arguments: distribution Distribution to use options: -h, --help show this help message and exit -r RELEASE, --release RELEASE Distribution release to use -o OUTPUT, --output OUTPUT Output base file name -e ELEMENT, --element ELEMENT Additional DIB element to use -b BRANCH, --branch BRANCH If set, override the branch that is used for ironic-python-agent and requirements -v, --verbose Enable verbose logging in diskimage-builder --lzma Use lzma compression for smaller images --extra-args EXTRA_ARGS Extra arguments to pass to diskimage-builder --elements-path ELEMENTS_PATH Path(s) to custom DIB elements separated by a colon Example: # the -o option sets the name of the generated image # ubuntu builds an image based on the ubuntu distribution ironic-python-agent-builder -o my-ubuntu-ipa ubuntu The architecture of the built image can be selected with the ARCH environment variable (amd64 by default). For an arm architecture, add: export ARCH=aarch64
Allow SSH login. Initialize the environment variables to set the username and password and enable passwordless sudo, and add the -e option to use the corresponding DIB elements. Build the image as follows: export DIB_DEV_USER_USERNAME=ipa \\ export DIB_DEV_USER_PWDLESS_SUDO=yes \\ export DIB_DEV_USER_PASSWORD='123' ironic-python-agent-builder -o my-ssh-ubuntu-ipa -e selinux-permissive -e devuser ubuntu Specify the code repository. Initialize the corresponding environment variables, then build the image: # clone the code directly from gerrit DIB_REPOLOCATION_ironic_python_agent=https://review.opendev.org/openstack/ironic-python-agent DIB_REPOREF_ironic_python_agent=stable/yoga # use a local repository and branch DIB_REPOLOCATION_ironic_python_agent=/home/user/path/to/repo DIB_REPOREF_ironic_python_agent=my-test-branch ironic-python-agent-builder ubuntu Reference: source-repositories. Note: the native PXE configuration file templates in OpenStack do not support the arm64 architecture, so you need to modify the upstream OpenStack code yourself. In the Wallaby release, upstream ironic still does not support UEFI PXE boot on arm64: the generated grub.cfg file (usually under /tftpboot/) has the wrong format, so PXE boot fails. In the incorrectly generated configuration file, the directives that load the vmlinux and ramdisk images on the arm architecture should be linux and initrd, but the generated directives are the ones used for x86 UEFI PXE boot. You need to modify the code that generates grub.cfg yourself. TLS errors when ironic sends command status queries to IPA: in the current version both IPA and ironic send requests to each other with TLS verification enabled by default; disable it as described in the official documentation. Edit the ironic configuration file (/etc/ironic/ironic.conf) and add ipa-insecure=1 to the following configuration: [agent] verify_ca = False [pxe] pxe_append_params = nofb nomodeset vga=normal coreos.autologin ipa-insecure=1 In the ramdisk image, add the IPA configuration file /etc/ironic_python_agent/ironic_python_agent.conf and configure TLS as follows (the /etc/ironic_python_agent directory must be created in advance): [DEFAULT] enable_auto_tls = False Set the permissions: chown -R ipa.ipa /etc/ironic_python_agent/ In the ramdisk image, modify the service file of the IPA service to add the configuration file option. Edit the /usr/lib/systemd/system/ironic-python-agent.service file: [Unit] Description=Ironic Python Agent After=network-online.target [Service] ExecStartPre=/sbin/modprobe vfat ExecStart=/usr/local/bin/ironic-python-agent --config-file /etc/ironic_python_agent/ironic_python_agent.conf Restart=always RestartSec=30s [Install] WantedBy=multi-user.target
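Once ironic-api, ironic-conductor, ironic-inspector and the deploy ramdisk are in place, a node still has to be registered before it can be provisioned. The sketch below is not part of the original guide: it uploads the kernel and initramfs produced by ironic-python-agent-builder to Glance and enrolls one IPMI-managed node; the image file names (assuming "-o my-ubuntu-ipa" above), node name, BMC address and credentials, MAC address, and UUID placeholders must all be replaced with real values.
# upload the deploy kernel/ramdisk built above
openstack image create deploy-kernel --public --disk-format aki --container-format aki --file my-ubuntu-ipa.kernel
openstack image create deploy-initrd --public --disk-format ari --container-format ari --file my-ubuntu-ipa.initramfs
# enroll one bare metal node managed over IPMI (all values are examples)
openstack baremetal node create --driver ipmi --name bm-node-01 \
  --driver-info ipmi_address=BMC_IP --driver-info ipmi_username=BMC_USER --driver-info ipmi_password=BMC_PASS \
  --driver-info deploy_kernel=DEPLOY_KERNEL_UUID --driver-info deploy_ramdisk=DEPLOY_INITRD_UUID
openstack baremetal port create NODE_NIC_MAC --node NODE_UUID
openstack baremetal node validate NODE_UUID
openstack baremetal node manage NODE_UUID && openstack baremetal node provide NODE_UUID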
","title":"Ironic"},{"location":"install/openEuler-22.09/OpenStack-yoga/#trove","text":"Trove is the OpenStack database service. It is recommended if you want to use the database service provided by OpenStack; otherwise it does not need to be installed. Controller node Create the database. The database service stores information in a database. Create a trove database that the trove user can access, replacing TROVE_DBPASS with a suitable password: CREATE DATABASE trove CHARACTER SET utf8; GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'localhost' IDENTIFIED BY 'TROVE_DBPASS'; GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'%' IDENTIFIED BY 'TROVE_DBPASS'; Create the service credentials and API endpoints. Create the service credentials: # create the trove user openstack user create --domain default --password-prompt trove # add the admin role openstack role add --project service --user trove admin # create the database service openstack service create --name trove --description \"Database service\" database Create the API endpoints: openstack endpoint create --region RegionOne database public http://controller:8779/v1.0/%\\(tenant_id\\)s openstack endpoint create --region RegionOne database internal http://controller:8779/v1.0/%\\(tenant_id\\)s openstack endpoint create --region RegionOne database admin http://controller:8779/v1.0/%\\(tenant_id\\)s Install Trove: dnf install openstack-trove python-troveclient Modify the configuration files. Edit /etc/trove/trove.conf: [DEFAULT] bind_host=192.168.0.2 log_dir = /var/log/trove network_driver = trove.network.neutron.NeutronDriver network_label_regex=.* management_security_groups = nova_keypair = trove-mgmt default_datastore = mysql taskmanager_manager = trove.taskmanager.manager.Manager trove_api_workers = 5 transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/ reboot_time_out = 300 usage_timeout = 900 agent_call_high_timeout = 1200 use_syslog = False debug = True [database] connection = mysql+pymysql://trove:TROVE_DBPASS@controller/trove [keystone_authtoken] auth_url = http://controller:5000/v3/ auth_type = password project_domain_name = Default project_name = service user_domain_name = Default username = trove password = TROVE_PASS [service_credentials] auth_url = http://controller:5000/v3/ region_name = RegionOne project_name = service project_domain_name = Default user_domain_name = Default username = trove password = TROVE_PASS [mariadb] tcp_ports = 3306,4444,4567,4568 [mysql] tcp_ports = 3306 [postgresql] tcp_ports = 5432 Explanation: in the [DEFAULT] section, bind_host is set to the IP of the Trove controller node.\\ transport_url is the RabbitMQ connection information; replace RABBIT_PASS with the RabbitMQ password.\\ connection in the [database] section is the database created for Trove in mysql earlier.\\ In the Trove user information, replace TROVE_PASS with the actual password of the trove user. Edit /etc/trove/trove-guestagent.conf: [DEFAULT] log_file = trove-guestagent.log log_dir = /var/log/trove/ ignore_users = os_admin control_exchange = trove transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/ rpc_backend = rabbit command_process_timeout = 60 use_syslog = False debug = True [service_credentials] auth_url = http://controller:5000/v3/ region_name = RegionOne project_name = service password = TROVE_PASS project_domain_name = Default user_domain_name = Default username = trove [mysql] docker_image = your-registry/your-repo/mysql backup_docker_image = your-registry/your-repo/db-backup-mysql:1.1.0 Explanation: the guestagent is an independent component of Trove that must be prebuilt into the virtual machine image Trove boots through Nova. After a database instance is created, the guestagent process starts and reports heartbeats to Trove through the message queue (RabbitMQ), so the RabbitMQ user and password must be configured here.\\ transport_url is the RabbitMQ connection information; replace RABBIT_PASS with the RabbitMQ password.\\ In the Trove user information, replace TROVE_PASS with the actual password of the trove user.\\ Starting from the Victoria release, Trove uses a single unified image to run the different database types; the database service runs in a Docker container inside the guest virtual machine. Synchronize the database: su -s /bin/sh -c \"trove-manage db_sync\" trove Complete the installation: # enable the services at boot systemctl enable openstack-trove-api.service openstack-trove-taskmanager.service \\ openstack-trove-conductor.service # start the services systemctl start openstack-trove-api.service openstack-trove-taskmanager.service \\ openstack-trove-conductor.service
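The packaged Trove services will start, but before users can create instances a datastore and a guest image normally have to be registered. The following is only a rough sketch of that step and is not taken from this guide; the guest image UUID, version string, flavor, network, and account names are placeholders, and the exact flags of the openstack database commands can differ slightly between client versions.
# register a mysql datastore and point it at a previously uploaded guest image (placeholders)
su -s /bin/sh -c \"trove-manage datastore_update mysql ''\" trove
su -s /bin/sh -c \"trove-manage datastore_version_update mysql 5.7 mysql GUEST_IMAGE_UUID '' 1\" trove
# create and inspect a test instance (flavor, size and network are examples)
openstack database instance create mysql-demo --flavor m1.small --size 5 \
  --nic net-id=NETWORK_UUID --datastore mysql --datastore-version 5.7 \
  --databases demo --users demo:DEMO_PASS
openstack database instance list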
","title":"Trove"},{"location":"install/openEuler-22.09/OpenStack-yoga/#swift","text":"Swift provides an elastic, scalable, highly available distributed object storage service, suitable for storing large amounts of unstructured data. Controller node Create the service credentials and API endpoints. Create the service credentials: # create the swift user openstack user create --domain default --password-prompt swift # add the admin role openstack role add --project service --user swift admin # create the object storage service openstack service create --name swift --description \"OpenStack Object Storage\" object-store Create the API endpoints: openstack endpoint create --region RegionOne object-store public http://controller:8080/v1/AUTH_%\\(project_id\\)s openstack endpoint create --region RegionOne object-store internal http://controller:8080/v1/AUTH_%\\(project_id\\)s openstack endpoint create --region RegionOne object-store admin http://controller:8080/v1 Install Swift: dnf install openstack-swift-proxy python3-swiftclient python3-keystoneclient \\ python3-keystonemiddleware memcached Configure the proxy-server.
The Swift RPM package already contains a basically usable proxy-server.conf; you only need to modify the IP and SWIFT_PASS in it. vim /etc/swift/proxy-server.conf [filter:authtoken] paste.filter_factory = keystonemiddleware.auth_token:filter_factory www_authenticate_uri = http://controller:5000 auth_url = http://controller:5000 memcached_servers = controller:11211 auth_type = password project_domain_id = default user_domain_id = default project_name = service username = swift password = SWIFT_PASS delay_auth_decision = True service_token_roles_required = True Storage node Install the supporting packages: dnf install openstack-swift-account openstack-swift-container openstack-swift-object dnf install xfsprogs rsync Format the /dev/sdb and /dev/sdc devices as XFS: mkfs.xfs /dev/sdb mkfs.xfs /dev/sdc Create the mount point directory structure: mkdir -p /srv/node/sdb mkdir -p /srv/node/sdc Find the UUIDs of the new partitions: blkid Edit the /etc/fstab file and add the following to it: UUID=\"\" /srv/node/sdb xfs noatime 0 2 UUID=\"\" /srv/node/sdc xfs noatime 0 2 Mount the devices: mount /srv/node/sdb mount /srv/node/sdc Note: if you do not need redundancy, the steps above only need to create one device, and the rsync configuration below can be skipped. (Optional) Create or edit the /etc/rsyncd.conf file to contain the following: [DEFAULT] uid = swift gid = swift log file = /var/log/rsyncd.log pid file = /var/run/rsyncd.pid address = MANAGEMENT_INTERFACE_IP_ADDRESS [account] max connections = 2 path = /srv/node/ read only = False lock file = /var/lock/account.lock [container] max connections = 2 path = /srv/node/ read only = False lock file = /var/lock/container.lock [object] max connections = 2 path = /srv/node/ read only = False lock file = /var/lock/object.lock Replace MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network on the storage node. Start the rsyncd service and configure it to start at system boot: systemctl enable rsyncd.service systemctl start rsyncd.service Configure the storage node. Edit the account-server.conf, container-server.conf and object-server.conf files in the /etc/swift directory, replacing bind_ip with the IP address of the management network on the storage node: [DEFAULT] bind_ip = 192.168.0.4 Ensure correct ownership of the mount point directory structure: chown -R swift:swift /srv/node Create the recon directory and ensure it has the correct ownership: mkdir -p /var/cache/swift chown -R root:swift /var/cache/swift chmod -R 775 /var/cache/swift Create and distribute the rings on the controller node. Create the account ring. Change to the /etc/swift directory: cd /etc/swift Create the base account.builder file: swift-ring-builder account.builder create 10 1 1
Add each storage node to the ring: swift-ring-builder account.builder add --region 1 --zone 1 \\ --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS \\ --port 6202 --device DEVICE_NAME \\ --weight 100 Replace STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network on the storage node.\\ Replace DEVICE_NAME with the name of a storage device on the same storage node. Note: repeat this command for every storage device on every storage node. Verify the account ring contents: swift-ring-builder account.builder Rebalance the account ring: swift-ring-builder account.builder rebalance Create the container ring. Change to the /etc/swift directory. Create the base container.builder file: swift-ring-builder container.builder create 10 1 1 Add each storage node to the ring: swift-ring-builder container.builder add --region 1 --zone 1 \\ --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6201 --device DEVICE_NAME \\ --weight 100 Replace STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network on the storage node.\\ Replace DEVICE_NAME with the name of a storage device on the same storage node. Note: repeat this command for every storage device on every storage node. Verify the container ring contents: swift-ring-builder container.builder Rebalance the container ring: swift-ring-builder container.builder rebalance Create the object ring. Change to the /etc/swift directory. Create the base object.builder file: swift-ring-builder object.builder create 10 1 1 Add each storage node to the ring: swift-ring-builder object.builder add --region 1 --zone 1 \\ --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS \\ --port 6200 --device DEVICE_NAME \\ --weight 100 Replace STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network on the storage node.\\ Replace DEVICE_NAME with the name of a storage device on the same storage node. Note: repeat this command for every storage device on every storage node. Verify the object ring contents: swift-ring-builder object.builder Rebalance the object ring: swift-ring-builder object.builder rebalance Distribute the ring configuration files. Copy the account.ring.gz, container.ring.gz and object.ring.gz files to the /etc/swift directory on every storage node and on any other node running the proxy service. Edit the configuration file /etc/swift/swift.conf: [swift-hash] swift_hash_path_suffix = test-hash swift_hash_path_prefix = test-hash [storage-policy:0] name = Policy-0 default = yes Replace test-hash with unique values. Copy the swift.conf file to the /etc/swift directory on every storage node and on any other node running the proxy service.
On all nodes, ensure correct ownership of the configuration directory: chown -R root:swift /etc/swift Complete the installation. On the controller node and any other node running the proxy service, start the object storage proxy service and its dependencies, and configure them to start at system boot: systemctl enable openstack-swift-proxy.service memcached.service systemctl start openstack-swift-proxy.service memcached.service On the storage nodes, start the object storage services and configure them to start at system boot: systemctl enable openstack-swift-account.service \\ openstack-swift-account-auditor.service \\ openstack-swift-account-reaper.service \\ openstack-swift-account-replicator.service \\ openstack-swift-container.service \\ openstack-swift-container-auditor.service \\ openstack-swift-container-replicator.service \\ openstack-swift-container-updater.service \\ openstack-swift-object.service \\ openstack-swift-object-auditor.service \\ openstack-swift-object-replicator.service \\ openstack-swift-object-updater.service systemctl start openstack-swift-account.service \\ openstack-swift-account-auditor.service \\ openstack-swift-account-reaper.service \\ openstack-swift-account-replicator.service \\ openstack-swift-container.service \\ openstack-swift-container-auditor.service \\ openstack-swift-container-replicator.service \\ openstack-swift-container-updater.service \\ openstack-swift-object.service \\ openstack-swift-object-auditor.service \\ openstack-swift-object-replicator.service \\ openstack-swift-object-updater.service
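As a quick functional check of the deployment (a suggestion, not part of the original steps), you can upload and list a small object from the controller; the container name is arbitrary and /etc/hosts is just a convenient test file.
source ~/.admin-openrc
swift stat                                  # account statistics should be returned without errors
openstack container create demo-container
openstack object create demo-container /etc/hosts
openstack object list demo-container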
","title":"Swift"},{"location":"install/openEuler-22.09/OpenStack-yoga/#cyborg","text":"Cyborg provides support for accelerator devices in OpenStack, including GPU, FPGA, ASIC, NP, SoCs, NVMe/NOF SSDs, ODP, DPDK/SPDK, and so on. Controller node Initialize the corresponding database: mysql -u root -p MariaDB [(none)]> CREATE DATABASE cyborg; MariaDB [(none)]> GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'localhost' IDENTIFIED BY 'CYBORG_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'%' IDENTIFIED BY 'CYBORG_DBPASS'; MariaDB [(none)]> exit; Create the user and service, and remember the password entered when creating the cyborg user; it is used to configure CYBORG_PASS: source ~/.admin-openrc openstack user create --domain default --password-prompt cyborg openstack role add --project service --user cyborg admin openstack service create --name cyborg --description \"Acceleration Service\" accelerator Deploy the Cyborg API service with uwsgi: openstack endpoint create --region RegionOne accelerator public http://controller/accelerator/v2 openstack endpoint create --region RegionOne accelerator internal http://controller/accelerator/v2 openstack endpoint create --region RegionOne accelerator admin http://controller/accelerator/v2 Install Cyborg: dnf install openstack-cyborg Configure Cyborg. Edit /etc/cyborg/cyborg.conf: [DEFAULT] transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/ use_syslog = False state_path = /var/lib/cyborg debug = True [api] host_ip = 0.0.0.0 [database] connection = mysql+pymysql://cyborg:CYBORG_DBPASS@controller/cyborg [service_catalog] cafile = /opt/stack/data/ca-bundle.pem project_domain_id = default user_domain_id = default project_name = service password = CYBORG_PASS username = cyborg auth_url = http://controller:5000/v3/ auth_type = password [placement] project_domain_name = Default project_name = service user_domain_name = Default password = PLACEMENT_PASS username = placement auth_url = http://controller:5000/v3/ auth_type = password auth_section = keystone_authtoken [nova] project_domain_name = Default project_name = service user_domain_name = Default password = NOVA_PASS username = nova auth_url = http://controller:5000/v3/ auth_type = password auth_section = keystone_authtoken [keystone_authtoken] memcached_servers = localhost:11211 signing_dir = /var/cache/cyborg/api cafile = /opt/stack/data/ca-bundle.pem project_domain_name = Default project_name = service user_domain_name = Default password = CYBORG_PASS username = cyborg auth_url = http://controller:5000/v3/ auth_type = password Synchronize the database tables: cyborg-dbsync --config-file /etc/cyborg/cyborg.conf upgrade Start the Cyborg services: systemctl enable openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent systemctl start openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent","title":"Cyborg"},{"location":"install/openEuler-22.09/OpenStack-yoga/#aodh","text":"Aodh can create alarms based on the monitoring data collected by Ceilometer or Gnocchi and set trigger rules. Controller node Create the database: CREATE DATABASE aodh; GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'localhost' IDENTIFIED BY 'AODH_DBPASS'; GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'%' IDENTIFIED BY 'AODH_DBPASS'; Create the service credentials and API endpoints. Create the service credentials: openstack user create --domain default --password-prompt aodh openstack role add --project service --user aodh admin openstack service create --name aodh --description \"Telemetry\" alarming Create the API endpoints: openstack endpoint create --region RegionOne alarming public http://controller:8042 openstack endpoint create --region RegionOne alarming internal http://controller:8042 openstack endpoint create --region RegionOne alarming admin http://controller:8042 Install Aodh: dnf install openstack-aodh-api openstack-aodh-evaluator \\ openstack-aodh-notifier openstack-aodh-listener \\ openstack-aodh-expirer python3-aodhclient Modify the configuration file: vim /etc/aodh/aodh.conf [database] connection = mysql+pymysql://aodh:AODH_DBPASS@controller/aodh [DEFAULT] transport_url = rabbit://openstack:RABBIT_PASS@controller auth_strategy = keystone [keystone_authtoken] www_authenticate_uri = http://controller:5000 auth_url = http://controller:5000 memcached_servers = controller:11211 auth_type = password project_domain_id = default user_domain_id = default project_name = service username = aodh password = AODH_PASS [service_credentials] auth_type = password auth_url = http://controller:5000/v3 project_domain_id = default user_domain_id = default project_name = service username = aodh password = AODH_PASS interface = internalURL region_name = RegionOne Synchronize the database: aodh-dbsync Complete the installation: # enable the services at boot systemctl enable openstack-aodh-api.service openstack-aodh-evaluator.service \\ openstack-aodh-notifier.service openstack-aodh-listener.service # start the services systemctl start openstack-aodh-api.service openstack-aodh-evaluator.service \\ openstack-aodh-notifier.service openstack-aodh-listener.service
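To confirm that the alarming service responds once the services are up, you can create a trivial threshold alarm. This is an illustrative sketch only: the metric name cpu_util, the granularity, and the INSTANCE_UUID resource are assumptions that depend on what Ceilometer and Gnocchi actually collect in your environment.
source ~/.admin-openrc
openstack alarm create --name demo-cpu-high --type gnocchi_resources_threshold \
  --metric cpu_util --threshold 80 --comparison-operator gt --aggregation-method mean \
  --granularity 300 --resource-type instance --resource-id INSTANCE_UUID
openstack alarm list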
","title":"Aodh"},{"location":"install/openEuler-22.09/OpenStack-yoga/#gnocchi","text":"Gnocchi is an open source time series database that can be used as a backend for Ceilometer. Controller node Create the database: CREATE DATABASE gnocchi; GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'localhost' IDENTIFIED BY 'GNOCCHI_DBPASS'; GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'%' IDENTIFIED BY 'GNOCCHI_DBPASS'; Create the service credentials and API endpoints. Create the service credentials: openstack user create --domain default --password-prompt gnocchi openstack role add --project service --user gnocchi admin openstack service create --name gnocchi --description \"Metric Service\" metric Create the API endpoints: openstack endpoint create --region RegionOne metric public http://controller:8041 openstack endpoint create --region RegionOne metric internal http://controller:8041 openstack endpoint create --region RegionOne metric admin http://controller:8041 Install Gnocchi: dnf install openstack-gnocchi-api openstack-gnocchi-metricd python3-gnocchiclient Modify the configuration file: vim /etc/gnocchi/gnocchi.conf [api] auth_mode = keystone port = 8041 uwsgi_mode = http-socket [keystone_authtoken] auth_type = password auth_url = http://controller:5000/v3 project_domain_name = Default user_domain_name = Default project_name = service username = gnocchi password = GNOCCHI_PASS interface = internalURL region_name = RegionOne [indexer] url = mysql+pymysql://gnocchi:GNOCCHI_DBPASS@controller/gnocchi
[storage] # coordination_url is not required but specifying one will improve # performance with better workload division across workers. # coordination_url = redis://controller:6379 file_basepath = /var/lib/gnocchi driver = file Synchronize the database: gnocchi-upgrade Complete the installation: # enable the services at boot systemctl enable openstack-gnocchi-api.service openstack-gnocchi-metricd.service # start the services systemctl start openstack-gnocchi-api.service openstack-gnocchi-metricd.service","title":"Gnocchi"},{"location":"install/openEuler-22.09/OpenStack-yoga/#ceilometer","text":"Ceilometer is the service responsible for data collection in OpenStack. Controller node Create the service credentials: openstack user create --domain default --password-prompt ceilometer openstack role add --project service --user ceilometer admin openstack service create --name ceilometer --description \"Telemetry\" metering Install the Ceilometer packages: dnf install openstack-ceilometer-notification openstack-ceilometer-central Edit the configuration file /etc/ceilometer/pipeline.yaml: publishers: # set address of Gnocchi # + filter out Gnocchi-related activity meters (Swift driver) # + set default archive policy - gnocchi://?filter_project=service&archive_policy=low Edit the configuration file /etc/ceilometer/ceilometer.conf: [DEFAULT] transport_url = rabbit://openstack:RABBIT_PASS@controller [service_credentials] auth_type = password auth_url = http://controller:5000/v3 project_domain_id = default user_domain_id = default project_name = service username = ceilometer password = CEILOMETER_PASS interface = internalURL region_name = RegionOne Synchronize the database: ceilometer-upgrade Complete the Ceilometer installation on the controller node: # enable the services at boot systemctl enable openstack-ceilometer-notification.service openstack-ceilometer-central.service # start the services systemctl start openstack-ceilometer-notification.service openstack-ceilometer-central.service Compute node Install the Ceilometer packages: dnf install openstack-ceilometer-compute dnf install openstack-ceilometer-ipmi # optional Edit the configuration file /etc/ceilometer/ceilometer.conf: [DEFAULT] transport_url = rabbit://openstack:RABBIT_PASS@controller [service_credentials] auth_url = http://controller:5000 project_domain_id = default user_domain_id = default auth_type = password username = ceilometer project_name = service password = CEILOMETER_PASS interface = internalURL region_name = RegionOne Edit the configuration file /etc/nova/nova.conf: [DEFAULT] instance_usage_audit = True instance_usage_audit_period = hour [notifications] notify_on_state_change = vm_and_task_state [oslo_messaging_notifications] driver = messagingv2 Complete the installation: systemctl enable openstack-ceilometer-compute.service systemctl start openstack-ceilometer-compute.service systemctl enable openstack-ceilometer-ipmi.service # optional systemctl start openstack-ceilometer-ipmi.service # optional # restart the nova-compute service systemctl restart openstack-nova-compute.service
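After both Gnocchi and Ceilometer are running, a quick way to confirm that data is flowing (again only a suggestion, not an original step) is to query Gnocchi through the client installed above; which resources and metric names appear depends on your pipeline configuration.
source ~/.admin-openrc
openstack metric resource list    # instances and other resources should appear once meters are reported
openstack metric list
openstack alarm list              # also exercises the Aodh API configured earlier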
","title":"Ceilometer"},{"location":"install/openEuler-22.09/OpenStack-yoga/#heat","text":"Heat is the OpenStack orchestration service, also called the Orchestration Service; it orchestrates composite cloud applications based on descriptive templates. The Heat services are usually installed on the controller node. Controller node Create the heat database and grant it the correct access permissions, replacing HEAT_DBPASS with a suitable password: mysql -u root -p MariaDB [(none)]> CREATE DATABASE heat; MariaDB [(none)]> GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'localhost' IDENTIFIED BY 'HEAT_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'%' IDENTIFIED BY 'HEAT_DBPASS'; MariaDB [(none)]> exit; Create the service credentials: create the heat user and add the admin role to it: source ~/.admin-openrc openstack user create --domain default --password-prompt heat openstack role add --project service --user heat admin Create the heat and heat-cfn services and their API endpoints: openstack service create --name heat --description \"Orchestration\" orchestration openstack service create --name heat-cfn --description \"Orchestration\" cloudformation openstack endpoint create --region RegionOne orchestration public http://controller:8004/v1/%\\(tenant_id\\)s openstack endpoint create --region RegionOne orchestration internal http://controller:8004/v1/%\\(tenant_id\\)s openstack endpoint create --region RegionOne orchestration admin http://controller:8004/v1/%\\(tenant_id\\)s openstack endpoint create --region RegionOne cloudformation public http://controller:8000/v1 openstack endpoint create --region RegionOne cloudformation internal http://controller:8000/v1 openstack endpoint create --region RegionOne cloudformation admin http://controller:8000/v1 Create the additional information for stack management. Create the heat domain: openstack domain create --description \"Stack projects and users\" heat Create the heat_domain_admin user in the heat domain and note the password you enter; it is used for HEAT_DOMAIN_PASS below: openstack user create --domain heat --password-prompt heat_domain_admin Add the admin role to the heat_domain_admin user: openstack role add --domain heat --user-domain heat --user heat_domain_admin admin Create the heat_stack_owner role: openstack role create heat_stack_owner Create the heat_stack_user role: openstack role create heat_stack_user Install the packages: dnf install openstack-heat-api openstack-heat-api-cfn openstack-heat-engine Modify the configuration file /etc/heat/heat.conf: [DEFAULT] transport_url = rabbit://openstack:RABBIT_PASS@controller heat_metadata_server_url = http://controller:8000 heat_waitcondition_server_url = http://controller:8000/v1/waitcondition stack_domain_admin = heat_domain_admin stack_domain_admin_password = HEAT_DOMAIN_PASS stack_user_domain_name = heat [database] connection = mysql+pymysql://heat:HEAT_DBPASS@controller/heat [keystone_authtoken] www_authenticate_uri = http://controller:5000 auth_url = http://controller:5000 memcached_servers = controller:11211 auth_type = password project_domain_name = default user_domain_name = default project_name = service username = heat password = HEAT_PASS [trustee] auth_type = password auth_url = http://controller:5000 username = heat password = HEAT_PASS user_domain_name = default [clients_keystone] auth_uri = http://controller:5000 Initialize the heat database tables: su -s /bin/sh -c \"heat-manage db_sync\" heat Start the services: systemctl enable openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service systemctl start openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service
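A simple way to verify the orchestration service end to end (an optional sketch, not part of the original text) is to create and delete a one-resource stack; the template below only defines a Neutron network, and the resource and stack names are arbitrary.
cat > demo-stack.yaml << 'EOF'
heat_template_version: 2018-08-31
resources:
  demo_net:
    type: OS::Neutron::Net
    properties:
      name: heat-demo-net
EOF
openstack stack create -t demo-stack.yaml demo-stack
openstack stack list
openstack stack delete --yes demo-stack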
","title":"Heat"},{"location":"install/openEuler-22.09/OpenStack-yoga/#tempest","text":"Tempest is the OpenStack integration test service. It is recommended if you need comprehensive automated functional testing of an installed OpenStack environment; otherwise it does not need to be installed. Controller node: Install Tempest: dnf install openstack-tempest Initialize a directory: tempest init mytest Modify the configuration file: cd mytest vi etc/tempest.conf tempest.conf must be configured with the information of the current OpenStack environment; for details refer to the official example. Run the tests: tempest run Install the tempest extensions (optional). The individual OpenStack services also provide their own tempest test packages, which you can install to enrich the tempest test content. In Yoga we provide extension tests for Cinder, Glance, Keystone, Ironic and Trove, which can be installed and used with the following command: dnf install python3-cinder-tempest-plugin python3-glance-tempest-plugin python3-ironic-tempest-plugin python3-keystone-tempest-plugin python3-trove-tempest-plugin
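Running the full suite can take a long time; in practice it is often easier to start with the smoke set or a filtered run. The commands below use standard tempest options and are offered only as a usage hint; the regular expression is just an example.
cd mytest
tempest run --smoke --concurrency 2                                      # quick sanity suite
tempest run --regex '(tempest.api.identity)|(keystone_tempest_plugin)'   # filter by test path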
### Deploying with the OpenStack SIG development tool oos

oos (openEuler OpenStack SIG tool) is the command line tool provided by the OpenStack SIG. Its `oos env` sub-commands wrap ansible scripts that deploy OpenStack (all-in-one or a three-node cluster) in one step, so you can quickly stand up an RPM-based OpenStack environment on openEuler. oos can deploy either against a cloud provider (currently only the Huawei Cloud provider is supported) or onto hosts you manage yourself. The following walks through deploying an all-in-one OpenStack environment on Huawei Cloud to show how oos is used.

Install the oos tool. oos is still evolving and its compatibility and usability cannot be guaranteed at every point in time, so a verified release is recommended; here we pick 1.0.6:

```
pip install openstack-sig-tool==1.0.6
```

Configure the Huawei Cloud provider. Open the `/usr/local/etc/oos/oos.conf` file and fill in the details of the Huawei Cloud resources you own. AK/SK are your Huawei Cloud access keys; the other options can be left at their defaults (the Singapore region is used by default). The following resources must be created on the cloud beforehand:

- a security group, named `oos` by default
- an openEuler image whose name follows the pattern `openEuler-%(release)s-%(arch)s`, for example `openEuler-22.09-arm64`
- a VPC named `oos_vpc`
- two subnets under that VPC, named `oos_subnet1` and `oos_subnet2`

```
[huaweicloud]
ak =
sk =
region = ap-southeast-3
root_volume_size = 100
data_volume_size = 100
security_group_name = oos
image_format = openEuler-%%(release)s-%%(arch)s
vpc_name = oos_vpc
subnet1_name = oos_subnet1
subnet2_name = oos_subnet2
```

Configure the OpenStack environment information. Open the `/usr/local/etc/oos/oos.conf` file and adjust the configuration to your machines and requirements. The content is as follows:

```
[environment]
mysql_root_password = root
mysql_project_password = root
rabbitmq_password = root
project_identity_password = root
enabled_service = keystone,neutron,cinder,placement,nova,glance,horizon,aodh,ceilometer,cyborg,gnocchi,kolla,heat,swift,trove,tempest
neutron_provider_interface_name = br-ex
default_ext_subnet_range = 10.100.100.0/24
default_ext_subnet_gateway = 10.100.100.1
neutron_dataplane_interface_name = eth1
cinder_block_device = vdb
swift_storage_devices = vdc
swift_hash_path_suffix = ash
swift_hash_path_prefix = has
glance_api_workers = 2
cinder_api_workers = 2
nova_api_workers = 2
nova_metadata_api_workers = 2
nova_conductor_workers = 2
nova_scheduler_workers = 2
neutron_api_workers = 2
horizon_allowed_host = *
kolla_openeuler_plugin = false
```

Key options:

| Option | Meaning |
|---|---|
| enabled_service | List of services to install; trim it to your needs |
| neutron_provider_interface_name | Name of the neutron L3 bridge |
| default_ext_subnet_range | IP range of the neutron private network |
| default_ext_subnet_gateway | Gateway of the neutron private network |
| neutron_dataplane_interface_name | NIC used by neutron; a dedicated new NIC is recommended so it does not conflict with the existing one and cut the all-in-one host off the network |
| cinder_block_device | Name of the block device used by cinder |
| swift_storage_devices | Names of the block devices used by swift |
| kolla_openeuler_plugin | Whether to enable the kolla plugin; when set to True, kolla can deploy openEuler containers (only supported on openEuler LTS) |

Create an openEuler 22.09 x86_64 VM on Huawei Cloud for the all-in-one OpenStack deployment:

```
# sshpass is used during `oos env create` to set up passwordless access to the target VM
dnf install sshpass
oos env create -r 22.09 -f small -a x86 -n test-oos all_in_one
```

See `oos env create --help` for the full list of parameters.

Deploy the OpenStack all-in-one environment:

```
oos env setup test-oos -r yoga
```

See `oos env setup --help` for the full list of parameters.

Initialize the tempest environment. If you want to run tempest tests against this environment, run `oos env init`, which automatically creates the OpenStack resources tempest needs:

```
oos env init test-oos
```
Run the tempest tests. You can let oos run them automatically:

```
oos env test test-oos
```

or log in to the target node yourself, enter the `mytest` directory under the root home directory, and run `tempest run` manually.

If you deploy the OpenStack environment onto hosts you manage yourself, the overall flow is the same as the Huawei Cloud case above: steps 1, 3, 5 and 6 are unchanged, step 2 (configuring the Huawei Cloud provider) is skipped, and step 4 becomes taking over the host. The managed host must provide:

- at least one NIC dedicated to oos, whose name matches the configuration item `neutron_dataplane_interface_name`
- at least one disk dedicated to oos, whose name matches the configuration item `cinder_block_device`
- if the swift service is to be deployed, an additional disk whose name matches the configuration item `swift_storage_devices`

```
# sshpass is used during `oos env create` to set up passwordless access to the target host
dnf install sshpass
oos env manage -r 22.09 -i TARGET_MACHINE_IP -p TARGET_MACHINE_PASSWD -n test-oos
```

Replace `TARGET_MACHINE_IP` with the IP of the target machine and `TARGET_MACHINE_PASSWD` with its password. See `oos env manage --help` for the full list of parameters.

### Deploying with the OpenStack SIG deployment tool opensd

opensd is used to deploy the OpenStack component services in bulk through scripts.

### Deployment steps

#### 1. Information to confirm before deployment

- When installing the operating system, set selinux to disabled.
- When installing the operating system, set `UseDNS` to `no` in `/etc/ssh/sshd_config`.
- The operating system language must be set to English.
- Before deploying, make sure `/etc/hosts` on all compute nodes contains no entry resolving the compute host itself.
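The checks above are stated as requirements only; as a non-authoritative sketch, they could be applied and verified with commands along these lines (standard paths, adjust to your images):

```bash
# Disable SELinux persistently (takes effect after reboot) and for the running system
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
setenforce 0 || true

# Turn off DNS lookups for incoming SSH connections
sed -i 's/^#\?UseDNS.*/UseDNS no/' /etc/ssh/sshd_config
systemctl restart sshd

# Switch the system locale to English
localectl set-locale LANG=en_US.UTF-8

# On each compute node: this should print nothing
grep -w "$(hostname)" /etc/hosts
```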
#### 2. Creating ceph pools and credentials (optional)

Skip this step if you do not use ceph or already have a ceph cluster. Run the following on any one ceph monitor node.

#### 2.1 Create the pools

```
ceph osd pool create volumes 2048
ceph osd pool create images 2048
```

#### 2.2 Initialize the pools

```
rbd pool init volumes
rbd pool init images
```

#### 2.3 Create the user credentials

```
ceph auth get-or-create client.glance mon 'profile rbd' osd 'profile rbd pool=images' mgr 'profile rbd pool=images'
ceph auth get-or-create client.cinder mon 'profile rbd' osd 'profile rbd pool=volumes, profile rbd pool=images' mgr 'profile rbd pool=volumes'
```

#### 3. Configuring LVM (optional)

Depending on the physical machine's disk layout and free space, mount additional disk space for the mysql data directory. An example follows (adapt it to your actual situation):

```
fdisk -l

Disk /dev/sdd: 479.6 GB, 479559942144 bytes, 936640512 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk label type: dos
Disk identifier: 0x000ed242
```

Create a partition:

```
parted /dev/sdd mkpart primary 0 -1
```

Create the PV:

```
partprobe /dev/sdd1
pvcreate /dev/sdd1
```

Create and activate the VG:

```
vgcreate vg_mariadb /dev/sdd1
vgchange -ay vg_mariadb
```

Check the VG capacity:

```
vgdisplay

  --- Volume group ---
  VG Name               vg_mariadb
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  2
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               1
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               446.62 GiB
  PE Size               4.00 MiB
  Total PE              114335
  Alloc PE / Size       114176 / 446.00 GiB
  Free  PE / Size       159 / 636.00 MiB
  VG UUID               bVUmDc-VkMu-Vi43-mg27-TEkG-oQfK-TvqdEc
```

Create the LV:

```
lvcreate -L 446G -n lv_mariadb vg_mariadb
```

Format the volume and obtain its UUID:

```
mkfs.ext4 /dev/mapper/vg_mariadb-lv_mariadb
blkid /dev/mapper/vg_mariadb-lv_mariadb

/dev/mapper/vg_mariadb-lv_mariadb: UUID="98d513eb-5f64-4aa5-810e-dc7143884fa2" TYPE="ext4"
```

Note: 98d513eb-5f64-4aa5-810e-dc7143884fa2 is the UUID of the volume.

Mount the disk:

```
mount /dev/mapper/vg_mariadb-lv_mariadb /var/lib/mysql
rm -rf /var/lib/mysql/*
```
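The guide mounts the volume manually; if you also want the mount to survive reboots, one option (not part of the original steps) is an `/etc/fstab` entry keyed on the UUID obtained above. The UUID below is the one from the example output and must be replaced with yours:

```bash
echo 'UUID=98d513eb-5f64-4aa5-810e-dc7143884fa2 /var/lib/mysql ext4 defaults 0 0' >> /etc/fstab
mount -a   # should return without errors if the entry is valid
```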
#### 4. Configuring the yum repo

Run the following on the deployment node.

#### 4.1 Back up the yum sources

```
mkdir /etc/yum.repos.d/bak/
mv /etc/yum.repos.d/*.repo /etc/yum.repos.d/bak/
```

#### 4.2 Configure the yum repo

```
cat > /etc/yum.repos.d/opensd.repo << 'EOF'
[epol]
name=epol
baseurl=http://repo.openeuler.org/openEuler-22.09/EPOL/main/$basearch/
enabled=1
gpgcheck=0

[everything]
name=everything
baseurl=http://repo.openeuler.org/openEuler-22.09/$basearch/
enabled=1
gpgcheck=0
EOF
```

#### 4.3 Refresh the yum cache

```
yum clean all
yum makecache
```

#### 5. Installing opensd

Run the following on the deployment node.

#### 5.1 Clone the opensd source and install it

```
git clone https://gitee.com/openeuler/opensd
cd opensd
python3 setup.py install
```

#### 6. Setting up SSH trust

Run the following on the deployment node.

#### 6.1 Generate a key pair

Run the following command and press Enter through all the prompts:

```
ssh-keygen
```

#### 6.2 Generate the host IP address file

List all host IPs that will be used in `auto_ssh_host_ip`, for example:

```
cd /usr/local/share/opensd/tools/
vim auto_ssh_host_ip

10.0.0.1
10.0.0.2
...
10.0.0.10
```

#### 6.3 Update the password and run the script

In the passwordless-access script `/usr/local/bin/opensd-auto-ssh`, replace the string `123123` with the hosts' real password:

```
# Replace the string 123123 inside the script
vim /usr/local/bin/opensd-auto-ssh

# Install expect, then run the script
dnf install expect -y
opensd-auto-ssh
```

#### 6.4 Set up trust between the deployment node and the ceph monitor (optional)

```
ssh-copy-id root@x.x.x.x
```
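As a quick optional check, not part of the original flow, you can confirm that key-based login really works before continuing by running a non-interactive SSH command against one of the hosts listed in `auto_ssh_host_ip` (the IP is illustrative):

```bash
# BatchMode makes ssh fail instead of falling back to a password prompt
ssh -o BatchMode=yes root@10.0.0.1 hostname
```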
#### 7. Configuring opensd

Run the following on the deployment node.

#### 7.1 Generate random passwords

Install python3-pbr, python3-utils, python3-pyyaml and python3-oslo-utils, then generate the passwords:

```
dnf install python3-pbr python3-utils python3-pyyaml python3-oslo-utils -y

# Generate the passwords
opensd-genpwd

# Check that the passwords were generated
cat /usr/local/share/opensd/etc_examples/opensd/passwords.yml
```

#### 7.2 Configure the inventory file

The host information consists of the host name, the ansible_host IP and the availability_zone; all three must be configured. Example:

```
vim /usr/local/share/opensd/ansible/inventory/multinode

# The three controller nodes
[control]
controller1 ansible_host=10.0.0.35 availability_zone=az01.cell01.cn-yogadev-1
controller2 ansible_host=10.0.0.36 availability_zone=az01.cell01.cn-yogadev-1
controller3 ansible_host=10.0.0.37 availability_zone=az01.cell01.cn-yogadev-1

# Network nodes, same as the controller nodes
[network]
controller1 ansible_host=10.0.0.35 availability_zone=az01.cell01.cn-yogadev-1
controller2 ansible_host=10.0.0.36 availability_zone=az01.cell01.cn-yogadev-1
controller3 ansible_host=10.0.0.37 availability_zone=az01.cell01.cn-yogadev-1

# cinder-volume service nodes
[storage]
storage1 ansible_host=10.0.0.61 availability_zone=az01.cell01.cn-yogadev-1
storage2 ansible_host=10.0.0.78 availability_zone=az01.cell01.cn-yogadev-1
storage3 ansible_host=10.0.0.82 availability_zone=az01.cell01.cn-yogadev-1

# Cell1 cluster
[cell-control-cell1]
cell1 ansible_host=10.0.0.24 availability_zone=az01.cell01.cn-yogadev-1
cell2 ansible_host=10.0.0.25 availability_zone=az01.cell01.cn-yogadev-1
cell3 ansible_host=10.0.0.26 availability_zone=az01.cell01.cn-yogadev-1

[compute-cell1]
compute1 ansible_host=10.0.0.27 availability_zone=az01.cell01.cn-yogadev-1
compute2 ansible_host=10.0.0.28 availability_zone=az01.cell01.cn-yogadev-1
compute3 ansible_host=10.0.0.29 availability_zone=az01.cell01.cn-yogadev-1

[cell1:children]
cell-control-cell1
compute-cell1

# Cell2 cluster
[cell-control-cell2]
cell4 ansible_host=10.0.0.36 availability_zone=az03.cell02.cn-yogadev-1
cell5 ansible_host=10.0.0.37 availability_zone=az03.cell02.cn-yogadev-1
cell6 ansible_host=10.0.0.38 availability_zone=az03.cell02.cn-yogadev-1

[compute-cell2]
compute4 ansible_host=10.0.0.39 availability_zone=az03.cell02.cn-yogadev-1
compute5 ansible_host=10.0.0.40 availability_zone=az03.cell02.cn-yogadev-1
compute6 ansible_host=10.0.0.41 availability_zone=az03.cell02.cn-yogadev-1

[cell2:children]
cell-control-cell2
compute-cell2

[baremetal]

[compute-cell1-ironic]

# List the control host groups of all cell clusters
[nova-conductor:children]
cell-control-cell1
cell-control-cell2

# List the compute host groups of all cell clusters
[nova-compute:children]
compute-added
compute-cell1
compute-cell2

# The host groups below do not need to be changed; keep them as they are
[compute-added]

[chrony-server:children]
control

[pacemaker:children]
control

......
......
```

#### 7.3 Configure the global variables

Note: only the configuration items with comments mentioned in this document need to be changed; the other parameters can stay as they are, and options that do not apply to your setup are left empty.

```
vim /usr/local/share/opensd/etc_examples/opensd/globals.yml

########################
# Network & Base options
########################
network_interface: "eth0"              # NIC name of the management network
neutron_external_interface: "eth1"     # NIC name of the data-plane network
cidr_netmask: 24                       # Netmask of the management network
opensd_vip_address: 10.0.0.33          # Virtual IP of the controller nodes
cell1_vip_address: 10.0.0.34           # Virtual IP of the cell1 cluster
cell2_vip_address: 10.0.0.35           # Virtual IP of the cell2 cluster
external_fqdn: ""                      # External domain name used for VNC access to instances
external_ntp_servers: []               # External NTP server addresses
yumrepo_host:                          # IP address of the yum repo
yumrepo_port:                          # Port of the yum repo
environment:                           # Type of the yum repo
upgrade_all_packages: "yes"            # Whether to upgrade all installed packages (runs yum upgrade); set to "yes" for an initial deployment
enable_miner: "no"                     # Whether to deploy the miner service
enable_chrony: "no"                    # Whether to deploy the chrony service
enable_pri_mariadb: "no"               # Whether to deploy mariadb for a private cloud
enable_hosts_file_modify: "no"         # Whether to add node information to /etc/hosts when scaling out compute nodes or deploying the ironic service
```
```
########################
# Available zone options
########################
az_cephmon_compose:
  - availability_zone:                 # Name of the availability zone; must match the "availability_zone" value of az01 in the multinode host file
    ceph_mon_host:                     # Address of one ceph monitor host for az01; the deployment node needs SSH trust with it
    reserve_vcpu_based_on_numa:
  - availability_zone:                 # Name of the availability zone; must match the "availability_zone" value of az02 in the multinode host file
    ceph_mon_host:                     # Address of one ceph monitor host for az02; the deployment node needs SSH trust with it
    reserve_vcpu_based_on_numa:
  - availability_zone:                 # Name of the availability zone; must match the "availability_zone" value of az03 in the multinode host file
    ceph_mon_host:                     # Address of one ceph monitor host for az03; the deployment node needs SSH trust with it
    reserve_vcpu_based_on_numa:
```

`reserve_vcpu_based_on_numa` is set to `yes` or `no`. For example, with:

```
NUMA node0 CPU(s):   0-15,32-47
NUMA node1 CPU(s):   16-31,48-63
```

when `reserve_vcpu_based_on_numa: "yes"`, vcpus are reserved evenly per NUMA node: `vcpu_pin_set = 2-15,34-47,18-31,50-63`; when `reserve_vcpu_based_on_numa: "no"`, vcpus are reserved sequentially starting from the first vcpu: `vcpu_pin_set = 8-64`.

```
#######################
# Nova options
#######################
nova_reserved_host_memory_mb: 2048     # Memory reserved for the compute service on the compute nodes
enable_cells: "yes"                    # Whether the cell nodes are deployed on separate nodes
support_gpu: "False"                   # Whether the cell has GPU servers; True if so, otherwise False

#######################
# Neutron options
#######################
monitor_ip:                            # Monitoring nodes
  - 10.0.0.9
  - 10.0.0.10
enable_meter_full_eip: True            # Whether to allow full EIP metering; default True
enable_meter_port_forwarding: True     # Whether to allow port-forwarding metering; default True
enable_meter_ecs_ipv6: True            # Whether to allow ecs_ipv6 metering; default True
enable_meter: True                     # Whether to enable metering; default True
is_sdn_arch: False                     # Whether this is an SDN architecture; default False
# The default network type is vlan; vlan and vxlan are mutually exclusive.
enable_vxlan_network_type: False       # Set to True for a vxlan network, False for a vlan network
enable_neutron_fwaas: False            # Set to True to enable the firewall function if the environment uses a firewall
```
```
# Neutron provider
neutron_provider_networks:
  network_types: "{{ 'vxlan' if enable_vxlan_network_type else 'vlan' }}"
  network_vlan_ranges: "default:xxx:xxx"          # vlan range of the data-plane network planned before deployment
  network_mappings: "default:br-provider"
  network_interface: "{{ neutron_external_interface }}"
  network_vxlan_ranges: ""                        # vxlan range of the data-plane network planned before deployment

# The options below are the SDN controller parameters; set `enable_sdn_controller` to True to enable the SDN controller.
# Fill in the other parameters according to your pre-deployment planning and SDN deployment information.
enable_sdn_controller: False
sdn_controller_ip_address:             # IP address of the SDN controller
sdn_controller_username:               # User name for the SDN controller
sdn_controller_password:               # Password for the SDN controller

#######################
# Dimsagent options
#######################
enable_dimsagent: "no"                 # Set to yes to install the image service agent

# Address and domain name for s3
s3_address_domain_pair:
  - host_ip:
    host_name:

#######################
# Trove options
#######################
enable_trove: "no"                     # Set to yes to install trove

# default network
trove_default_neutron_networks:        # ID of trove's management network: `openstack network list|grep -w trove-mgmt|awk '{print$2}'`

# s3 setup (fill in null for the values below if there is no s3)
s3_endpoint_host_ip:                   # IP of s3
s3_endpoint_host_name:                 # Domain name of s3
s3_endpoint_url:                       # URL of s3, usually http://<s3 domain name>
s3_access_key:                         # AK of s3
s3_secret_key:                         # SK of s3

#######################
# Ironic options
#######################
enable_ironic: "no"                    # Whether to enable bare-metal deployment; disabled by default
ironic_neutron_provisioning_network_uuid:
ironic_neutron_cleaning_network_uuid: "{{ ironic_neutron_provisioning_network_uuid }}"
ironic_dnsmasq_interface:
ironic_dnsmasq_dhcp_range:
ironic_tftp_server_address: "{{ hostvars[inventory_hostname]['ansible_' + ironic_dnsmasq_interface]['ipv4']['address'] }}"

# Switch device information
neutron_ml2_conf_genericswitch:
  genericswitch:xxxxxxx:
    device_type:
    ngs_mac_address:
    ip:
    username:
    password:
    ngs_port_default_vlan:

# Package state setting
haproxy_package_state: "present"
mariadb_package_state: "present"
rabbitmq_package_state: "present"
memcached_package_state: "present"
ceph_client_package_state: "present"
keystone_package_state: "present"
glance_package_state: "present"
cinder_package_state: "present"
nova_package_state: "present"
neutron_package_state: "present"
miner_package_state: "present"
```

#### 7.4 Check the SSH connection status of all nodes

```
dnf install ansible -y
ansible all -i /usr/local/share/opensd/ansible/inventory/multinode -m ping
```

If the result shows "SUCCESS" for every host, the connections are fine. Example:

```
compute1 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "ping": "pong"
}
```
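Beyond the ping module, ansible ad-hoc commands can also spot-check the prerequisites from step 1 across every host before deploying. This is only a suggestion, not part of the original guide:

```bash
# SELinux should report Disabled on every host
ansible all -i /usr/local/share/opensd/ansible/inventory/multinode -m command -a 'getenforce'

# The locale should be an English one on every host
ansible all -i /usr/local/share/opensd/ansible/inventory/multinode -m command -a 'localectl status'
```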
#### 8. Running the deployment

Run the following on the deployment node.

#### 8.1 Run bootstrap

```
# Run the bootstrap stage
opensd -i /usr/local/share/opensd/ansible/inventory/multinode bootstrap --forks 50
```

#### 8.2 Reboot the servers

Note: a reboot is needed because bootstrap may upgrade the kernel, change the selinux configuration, or the environment may contain GPU servers. If the hosts were installed with the new kernel and selinux disabled, and there are no GPU servers, this step can be skipped.

```
# Manually reboot the corresponding nodes
init 6

# After the reboot, check connectivity again
ansible all -i /usr/local/share/opensd/ansible/inventory/multinode -m ping

# After the operating systems are back up, enable the yum repo again
```

#### 8.3 Run the pre-deployment checks

```
opensd -i /usr/local/share/opensd/ansible/inventory/multinode prechecks --forks 50
```

#### 8.4 Run the deployment

```
ln -s /usr/bin/python3 /usr/bin/python
```

Full deployment:

```
opensd -i /usr/local/share/opensd/ansible/inventory/multinode deploy --forks 50
```

Single-service deployment:

```
opensd -i /usr/local/share/opensd/ansible/inventory/multinode deploy --forks 50 -t service_name
```
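For instance, to redeploy only the compute service after a configuration change, the single-service form could look like the following; `nova` here is only an illustrative value for the `service_name` tag, check which tags your opensd version supports:

```bash
opensd -i /usr/local/share/opensd/ansible/inventory/multinode deploy --forks 50 -t nova
```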
## Deploying with OpenStack Helm

### Introduction

OpenStack-Helm is a project that lets users deploy OpenStack components on Kubernetes. It provides Helm charts for the individual OpenStack components together with a set of scripts that take the user through the installation. OpenStack-Helm is fairly complex, so deploying it on a fresh system is recommended. The full deployment takes roughly 30 GB of disk space. Install as the root user.

### Prerequisites

Before installing OpenStack-Helm you may need some basic system setup, such as the host name and time; see the relevant information in the "RPM-based deployment" chapter.

openEuler 22.09 already ships the OpenStack-Helm packages. First install the packages and the patch set:

```
dnf install openstack-helm openstack-helm-infra openstack-helm-images loci
```

This installs the upstream openstack-helm, which does not support openEuler out of the box. To use openstack-helm on openEuler you additionally need to install the plugin package; this chapter describes how to use that plugin.

```
dnf install openstack-plugin-openstack-helm-openeuler-support
```

### Automated installation

The OpenStack-Helm installation files are placed under `/usr/share/openstack-helm` on the system. The openEuler package also includes a simple installation wizard located at `/usr/bin/openstack-helm`. Run the command to enter the wizard:

```
openstack-helm

Welcome to OpenStack-Helm installation program for openEuler.
I will guide you through the installation.
Please refer to https://docs.openstack.org/openstack-helm/latest/ to get more
information about OpenStack-Helm.

We recommend doing this on a new bare metal or virtual OS installation.

Now you have the following options:
i: Start automated installation
c: Check if all pods in Kubernetes are working
e: Exit
Your choice? [i/c/e]:
```

Enter `i` and press Enter to reach the next page:

```
Welcome to OpenStack-Helm installation program for openEuler.
I will guide you through the installation.
Please refer to https://docs.openstack.org/openstack-helm/latest/ to get more
information about OpenStack-Helm.

We recommend doing this on a new bare metal or virtual OS installation.

Now you have the following options:
i: Start automated installation
c: Check if all pods in Kubernetes are working
e: Exit
Your choice? [i/c/e]: i
There are two storage backends available for OpenStack-Helm: NFS and CEPH.
Which storage backend would you like to use?
n: NFS storage backend
c: CEPH storage backend
b: Go back to parent menu
Your choice? [n/c/b]:
```
OpenStack-Helm provides two storage backends: NFS and Ceph. Enter `n` to choose the NFS backend or `c` to choose the Ceph backend as needed.

After the storage backend is chosen you get a final confirmation; press Enter at the prompt to start the installation. During the installation the program runs a series of installation scripts in order to complete the deployment. This can take tens of minutes; make sure there is enough disk space and a working Internet connection throughout.

The scripts executed during installation deploy a number of Helm charts onto the system. Because target environments vary widely, some specific Helm charts may fail to deploy. In that case you will notice at the end of the output a message about waiting for pods to become ready that eventually times out. If this happens, you may need to follow the manual installation steps in the next section to locate the problem. If you do not observe this, congratulations, the deployment is complete; see the "Using OpenStack-Helm" section to get started.

### Manual installation

If you hit errors during the automated installation, or want to control the whole flow by installing manually, run the installation scripts in the following order:

```
cd /usr/share/openstack-helm/openstack-helm

# NFS-based
./tools/deployment/developer/common/010-deploy-k8s.sh
./tools/deployment/developer/common/020-setup-client.sh
./tools/deployment/developer/common/030-ingress.sh
./tools/deployment/developer/nfs/040-nfs-provisioner.sh
./tools/deployment/developer/nfs/050-mariadb.sh
./tools/deployment/developer/nfs/060-rabbitmq.sh
./tools/deployment/developer/nfs/070-memcached.sh
./tools/deployment/developer/nfs/080-keystone.sh
./tools/deployment/developer/nfs/090-heat.sh
./tools/deployment/developer/nfs/100-horizon.sh
./tools/deployment/developer/nfs/120-glance.sh
./tools/deployment/developer/nfs/140-openvswitch.sh
./tools/deployment/developer/nfs/150-libvirt.sh
./tools/deployment/developer/nfs/160-compute-kit.sh
./tools/deployment/developer/nfs/170-setup-gateway.sh

# or Ceph-based
./tools/deployment/developer/common/010-deploy-k8s.sh
./tools/deployment/developer/common/020-setup-client.sh
./tools/deployment/developer/common/030-ingress.sh
./tools/deployment/developer/ceph/040-ceph.sh
./tools/deployment/developer/ceph/050-mariadb.sh
./tools/deployment/developer/ceph/060-rabbitmq.sh
./tools/deployment/developer/ceph/070-memcached.sh
./tools/deployment/developer/ceph/080-keystone.sh
./tools/deployment/developer/ceph/090-heat.sh
./tools/deployment/developer/ceph/100-horizon.sh
./tools/deployment/developer/ceph/120-glance.sh
./tools/deployment/developer/ceph/140-openvswitch.sh
./tools/deployment/developer/ceph/150-libvirt.sh
./tools/deployment/developer/ceph/160-compute-kit.sh
./tools/deployment/developer/ceph/170-setup-gateway.sh
```
After the installation finishes you can use `kubectl get pods -A` to check how the pods on the system are doing.

### Using OpenStack-Helm

Once the deployment is complete, the OpenStack CLI is available at `/usr/local/bin/openstack`. Use the OpenStack CLI as in the following example:

```
export OS_CLOUD=openstack_helm
export OS_USERNAME='admin'
export OS_PASSWORD='password'
export OS_PROJECT_NAME='admin'
export OS_PROJECT_DOMAIN_NAME='default'
export OS_USER_DOMAIN_NAME='default'
export OS_AUTH_URL='http://keystone.openstack.svc.cluster.local/v3'

openstack service list
openstack stack list
```

You can of course also reach the OpenStack dashboard through the web interface. The Horizon dashboard is at http://localhost:31000; log in with the following credentials:

```
Domain:    default
User Name: admin
Password:  password
```

You should now see the familiar OpenStack dashboard.

## Installing the new features

### Kolla support for iSula

Kolla is OpenStack's container-based deployment solution built on Docker and ansible, consisting of the Kolla and Kolla-ansible projects. Kolla is the container image build tool and Kolla-ansible is the container image deployment tool. Kolla-ansible is only supported on openEuler LTS and is not yet supported on openEuler innovation releases. With openEuler 22.09, users can build the corresponding container images with Kolla. In addition, the OpenStack SIG added Kolla support for the iSula runtime in openEuler 22.09. The steps are as follows:

Install Kolla:

```
dnf install openstack-kolla docker
```

Once installed you can already build Docker-based container images with the `kolla-build` command, which is very simple. If you want to try the isula-based flow, continue as follows.

Install the OpenStack iSula plugin:

```
dnf install openstack-plugin-kolla-isula-support
```

Start the isula-build service. The previous step automatically installs the iSulad and isula-builder services; isulad starts automatically, but isula-builder does not and has to be started manually:

```
systemctl start isula-builder
```

Configure kolla: add `base_runtime` under `[Default]` in `kolla.conf`:

```
vim /etc/kolla/kolla.conf

base_runtime=isula
```

The installation is now complete: run `kolla-build` to build images based on isula, and afterwards run `isula images` to list the images.
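As a rough, unofficial illustration of that last step, you could restrict the build to a single image set and then confirm the result; the image name and tag below are arbitrary examples, see `kolla-build --help` for the options your version supports:

```bash
# Build only the keystone images with the configured isula base runtime
kolla-build --tag yoga keystone

# List the resulting images in iSula
isula images | grep keystone
```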
### Nova support for high/low priority instances

The high/low priority instance feature is a Nova feature developed by the OpenStack SIG for openEuler 22.09 on top of OpenStack Yoga. It lets users assign a priority to an instance; based on the priority, OpenStack automatically applies different CPU-pinning policies, and combined with openEuler's own skylark QoS service this lets high- and low-priority instances share resources sensibly. See the feature documentation for the details; this section only describes the installation steps.

Deploy an OpenStack environment (non-containerized) as described in the previous chapters, then install the plugin first:

```
dnf install openstack-plugin-priority-vm
```

Configure the database. This feature extends Nova's database tables, so the databases need to be synchronized:

```
nova-manage api_db sync
nova-manage db sync
```

Restart the Nova services, on the controller node and the compute nodes respectively:

```
systemctl restart openstack-nova-*
```

# OpenStack Antelope deployment guide

- RPM-based deployment
  - Environment preparation
  - Time synchronization
  - Installing the database
  - Installing the message queue
  - Installing the cache service
  - Deploying the services
    - Keystone
    - Glance
    - Placement
    - Nova
    - Neutron
    - Cinder
    - Horizon
    - Ironic
    - Trove
    - Swift
    - Cyborg
    - Aodh
    - Gnocchi
    - Ceilometer
    - Heat
    - Tempest
- Deploying with the OpenStack SIG development tool oos

This document is the OpenStack deployment guide for openEuler 24.03 LTS written by the openEuler OpenStack SIG; the content is provided by SIG contributors. If you have any questions or find any problems while reading it, please contact the SIG maintainers or file an issue directly.

## Conventions

This section describes some general conventions used in the document.

| Name | Definition |
|---|---|
| RABBIT_PASS | Password of rabbitmq, set by the user; used in the configuration of each OpenStack service |
| CINDER_PASS | Password of the cinder keystone user, used in the cinder configuration |
| CINDER_DBPASS | Password of the cinder database, used in the cinder configuration |
| KEYSTONE_DBPASS | Password of the keystone database, used in the keystone configuration |
| GLANCE_PASS | Password of the glance keystone user, used in the glance configuration |
| GLANCE_DBPASS | Password of the glance database, used in the glance configuration |
| HEAT_PASS | Password of the heat user registered in keystone, used in the heat configuration |
| HEAT_DBPASS | Password of the heat database, used in the heat configuration |
| CYBORG_PASS | Password of the cyborg user registered in keystone, used in the cyborg configuration |
| CYBORG_DBPASS | Password of the cyborg database, used in the cyborg configuration |
| NEUTRON_PASS | Password of the neutron user registered in keystone, used in the neutron configuration |
| NEUTRON_DBPASS | Password of the neutron database, used in the neutron configuration |
| PROVIDER_INTERFACE_NAME | Name of the physical network interface, used in the neutron configuration |
| OVERLAY_INTERFACE_IP_ADDRESS | Management IP address of the controller node, used in the neutron configuration |
| METADATA_SECRET | Secret of the metadata proxy, used in the nova and neutron configurations |
| PLACEMENT_DBPASS | Password of the placement database, used in the placement configuration |
| PLACEMENT_PASS | Password of the placement user registered in keystone, used in the placement configuration |
| NOVA_DBPASS | Password of the nova database, used in the nova configuration |
| NOVA_PASS | Password of the nova user registered in keystone, used in the nova, cyborg, neutron and other configurations |
| IRONIC_DBPASS | Password of the ironic database, used in the ironic configuration |
| IRONIC_PASS | Password of the ironic user registered in keystone, used in the ironic configuration |
| IRONIC_INSPECTOR_DBPASS | Password of the ironic-inspector database, used in the ironic-inspector configuration |
| IRONIC_INSPECTOR_PASS | Password of the ironic-inspector user registered in keystone, used in the ironic-inspector configuration |

The OpenStack SIG provides several ways to deploy OpenStack on openEuler to cover different user scenarios; choose the one you need.

## RPM-based deployment

### Environment preparation

This document deploys OpenStack on the classic three-node topology. The three nodes are the controller node (Controller), the compute node (Compute) and the storage node (Storage). The storage node usually only runs the storage services; when resources are limited it does not have to be a separate node, and its services can be deployed on the compute node instead.

First prepare three openEuler 24.03 LTS environments; download the image matching your setup and install it: ISO image, qcow2 image.

The installation below assumes the following topology:

```
controller: 192.168.0.2
compute:    192.168.0.3
storage:    192.168.0.4
```
If the IPs in your environment differ, adjust the corresponding configuration files to your IPs.

The three-node service topology used in this document is shown in the figure below (it only covers the core services Keystone, Glance, Nova, Cinder and Neutron; for the other services refer to their deployment sections).

Before the actual deployment, apply the following configuration and checks on every node.

Configure the official openEuler 24.03 LTS yum repositories; the EPOL repository must be enabled to get the OpenStack packages:

```
yum update
yum install openstack-release-antelope
yum clean all && yum makecache
```

Note: if EPOL is not enabled in your yum configuration, configure it as well and make sure it looks like the following:

```
vi /etc/yum.repos.d/openEuler.repo

[EPOL]
name=EPOL
baseurl=http://repo.openeuler.org/openEuler-24.03-LTS/EPOL/main/$basearch/
enabled=1
gpgcheck=1
gpgkey=http://repo.openeuler.org/openEuler-24.03-LTS/OS/$basearch/RPM-GPG-KEY-openEuler
```

Set the host name and the name mappings. Change the host name on each node; taking controller as the example:

```
hostnamectl set-hostname controller
vi /etc/hostname     # change the content to controller
```

Then edit the `/etc/hosts` file on every node and add the following entries:

```
192.168.0.2   controller
192.168.0.3   compute
192.168.0.4   storage
```

### Time synchronization

A cluster requires the time on every node to be consistent, which is usually guaranteed by time synchronization software. This document uses chrony. The steps are as follows.

**Controller node**:

Install the service:

```
dnf install chrony
```

Edit the `/etc/chrony.conf` configuration file and add one line:

```
# Allow these IPs to synchronize their clocks from this node
allow 192.168.0.0/24
```

Restart the service:

```
systemctl restart chronyd
```

**Other nodes**:

Install the service:

```
dnf install chrony
```

Edit the `/etc/chrony.conf` configuration file and add one line:

```
# NTP_SERVER is the controller IP, meaning time is taken from that machine.
# Here we use 192.168.0.2, or the controller name configured in /etc/hosts.
server NTP_SERVER iburst
```

At the same time comment out the line `pool pool.ntp.org iburst`, so that the clock is not synchronized from the public Internet.

Restart the service:

```
systemctl restart chronyd
```

After the configuration is done, check the result: run `chronyc sources` on the non-controller nodes. Output similar to the following means the clock is successfully synchronized from the controller:

```
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^* 192.168.0.2                   4   6     7     0  -1406ns[  +55us] +/-   16ms
```
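Optionally, and not part of the original steps, the controller can also confirm which nodes are pulling time from it; `chronyc clients` is a standard chrony command and needs to be run locally on the controller as root:

```bash
chronyc clients   # each compute/storage node should appear with a non-zero NTP packet count
```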
### Installing the database

The database is installed on the controller node; mariadb is recommended.

Install the packages:

```
dnf install mysql-config mariadb mariadb-server python3-PyMySQL
```

Create a new configuration file `/etc/my.cnf.d/openstack.cnf` with the following content:

```
[mysqld]
bind-address = 192.168.0.2
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
```

Start the server:

```
systemctl start mariadb
```

Initialize the database and follow the prompts:

```
mysql_secure_installation
```

Example:

```
NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MariaDB
      SERVERS IN PRODUCTION USE!  PLEASE READ EACH STEP CAREFULLY!

In order to log into MariaDB to secure it, we'll need the current
password for the root user. If you've just installed MariaDB, and
haven't set the root password yet, you should just press enter here.

# Enter the current password here; since we are initializing the DB, just press Enter
Enter current password for root (enter for none):
OK, successfully used password, moving on...

Setting the root password or using the unix_socket ensures that nobody
can log into the MariaDB root user without the proper authorisation.

You already have your root account protected, so you can safely answer 'n'.

# Answer N here as prompted
Switch to unix_socket authentication [Y/n] N
Enabled successfully!
Reloading privilege tables..
 ... Success!

You already have your root account protected, so you can safely answer 'n'.

# Enter Y to change the password
Change the root password? [Y/n] Y
New password:
Re-enter new password:
Password updated successfully!
Reloading privilege tables..
 ... Success!

By default, a MariaDB installation has an anonymous user, allowing anyone
to log into MariaDB without having to have a user account created for
them.  This is intended only for testing, and to make the installation
go a bit smoother.  You should remove them before moving into a
production environment.

# Enter Y to remove the anonymous users
Remove anonymous users? [Y/n] Y
 ... Success!

Normally, root should only be allowed to connect from 'localhost'.  This
ensures that someone cannot guess at the root password from the network.

# Enter Y to disable remote root login
Disallow root login remotely? [Y/n] Y
 ... Success!

By default, MariaDB comes with a database named 'test' that anyone can
access.  This is also intended only for testing, and should be removed
before moving into a production environment.

# Enter Y to remove the test database
Remove test database and access to it? [Y/n] Y
 - Dropping test database...
 ... Success!
 - Removing privileges on test database...
 ... Success!

Reloading the privilege tables will ensure that all changes made so far
will take effect immediately.

# Enter Y to reload the privilege tables
Reload privilege tables now? [Y/n] Y
 ... Success!

Cleaning up...

All done!  If you've completed all of the above steps, your MariaDB
installation should now be secure.
```

Verify: with the password set in step 4, check that you can log in to mariadb:

```
mysql -uroot -p
```
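An extra optional check, not in the original guide, is to confirm that the server is actually listening on the management address configured in `openstack.cnf`:

```bash
ss -tlnp | grep 3306   # should show mariadb/mysqld bound to 192.168.0.2:3306
```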
### Installing the message queue

The message queue is installed on the controller node; rabbitmq is recommended.

Install the packages:

```
dnf install rabbitmq-server
```

Start the service:

```
systemctl start rabbitmq-server
```

Configure the openstack user. `RABBIT_PASS` is the password the OpenStack services use to log in to the message queue; it must match the configuration of each service later:

```
rabbitmqctl add_user openstack RABBIT_PASS
rabbitmqctl set_permissions openstack ".*" ".*" ".*"
```

### Installing the cache service

The cache service is installed on the controller node; Memcached is recommended.

Install the packages:

```
dnf install memcached python3-memcached
```

Edit the configuration file `/etc/sysconfig/memcached`:

```
OPTIONS="-l 127.0.0.1,::1,controller"
```

Start the service:

```
systemctl start memcached
```

### Deploying the services

#### Keystone

Keystone is the identity service provided by OpenStack and the entry point to the whole cloud; it provides tenant isolation, user authentication, service discovery and so on, and must be installed.

Create the keystone database and grant privileges:

```
mysql -u root -p
MariaDB [(none)]> CREATE DATABASE keystone;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
IDENTIFIED BY 'KEYSTONE_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
IDENTIFIED BY 'KEYSTONE_DBPASS';
MariaDB [(none)]> exit
```

Note: replace `KEYSTONE_DBPASS` with the password you choose for the keystone database.

Install the packages:

```
dnf install openstack-keystone httpd mod_wsgi
```

Configure keystone:

```
vim /etc/keystone/keystone.conf

[database]
connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone

[token]
provider = fernet
```

Explanation: the [database] section configures the database entry point; the [token] section configures the token provider.

Synchronize the database:

```
su -s /bin/sh -c "keystone-manage db_sync" keystone
```

Initialize the Fernet key repositories:

```
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
```

Bootstrap the service:

```
keystone-manage bootstrap --bootstrap-password ADMIN_PASS \
--bootstrap-admin-url http://controller:5000/v3/ \
--bootstrap-internal-url http://controller:5000/v3/ \
--bootstrap-public-url http://controller:5000/v3/ \
--bootstrap-region-id RegionOne
```

Note: replace `ADMIN_PASS` with the password you choose for the admin user.

Configure the Apache HTTP server. Open httpd.conf and configure it:

```
# Configuration file that needs to be changed
vim /etc/httpd/conf/httpd.conf

# Change the following entry; add it if it does not exist
ServerName controller
```

Create the symbolic link:

```
ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
```

Explanation: the `ServerName` entry must reference the controller node. Note: if the `ServerName` entry does not exist, it needs to be created.
Start the Apache HTTP service:

```
systemctl enable httpd.service
systemctl start httpd.service
```

Create the environment variable file:

```
cat << EOF >> ~/.admin-openrc
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
EOF
```

Note: replace `ADMIN_PASS` with the password of the admin user.

Create the domain, projects, users and roles in turn. python3-openstackclient must be installed first:

```
dnf install python3-openstackclient
```

Import the environment variables:

```
source ~/.admin-openrc
```

Create the project `service`; the domain `default` was already created by `keystone-manage bootstrap`:

```
openstack domain create --description "An Example Domain" example
openstack project create --domain default --description "Service Project" service
```

Create the (non-admin) project `myproject`, the user `myuser` and the role `myrole`, and add the role `myrole` to `myproject` and `myuser`:

```
openstack project create --domain default --description "Demo Project" myproject
openstack user create --domain default --password-prompt myuser
openstack role create myrole
openstack role add --project myproject --user myuser myrole
```

Verification. Unset the temporary environment variables OS_AUTH_URL and OS_PASSWORD:

```
source ~/.admin-openrc
unset OS_AUTH_URL OS_PASSWORD
```

Request a token for the admin user:

```
openstack --os-auth-url http://controller:5000/v3 \
--os-project-domain-name Default --os-user-domain-name Default \
--os-project-name admin --os-username admin token issue
```

Request a token for the myuser user:

```
openstack --os-auth-url http://controller:5000/v3 \
--os-project-domain-name Default --os-user-domain-name Default \
--os-project-name myproject --os-username myuser token issue
```

#### Glance

Glance is the image service provided by OpenStack; it is responsible for uploading and downloading VM and bare-metal images and must be installed.

**Controller node**:

Create the glance database and grant privileges:

```
mysql -u root -p
MariaDB [(none)]> CREATE DATABASE glance;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
IDENTIFIED BY 'GLANCE_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
IDENTIFIED BY 'GLANCE_DBPASS';
MariaDB [(none)]> exit
```

Note: replace `GLANCE_DBPASS` with the password you choose for the glance database.

Initialize the glance resource objects.

Import the environment variables:

```
source ~/.admin-openrc
```

When creating the user, the command line prompts for a password; enter a password of your choice and replace `GLANCE_PASS` below with it:

```
openstack user create --domain default --password-prompt glance

User Password:
Repeat User Password:
```

Add the glance user to the service project and assign the admin role:

```
openstack role add --project service --user glance admin
```

Create the glance service entity:

```
openstack service create --name glance --description "OpenStack Image" image
```

Create the glance API endpoints:
### Glance

Glance is the image service of OpenStack. It handles uploading and downloading of virtual machine and bare metal images, and must be installed.

**Controller node:**

1. Create the `glance` database and grant privileges:

    ```shell
    mysql -u root -p
    MariaDB [(none)]> CREATE DATABASE glance;
    MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
    IDENTIFIED BY 'GLANCE_DBPASS';
    MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
    IDENTIFIED BY 'GLANCE_DBPASS';
    MariaDB [(none)]> exit
    ```

    Note: replace `GLANCE_DBPASS` with the password you choose for the glance database.

2. Initialize the glance Keystone resources.

    Load the environment variables:

    ```shell
    source ~/.admin-openrc
    ```

    When creating the user, the command line prompts for a password. Enter a password of your choice and use it wherever `GLANCE_PASS` appears below.

    ```shell
    openstack user create --domain default --password-prompt glance
    User Password:
    Repeat User Password:
    ```

    Add the glance user to the service project and assign the admin role:

    ```shell
    openstack role add --project service --user glance admin
    ```

    Create the glance service entity:

    ```shell
    openstack service create --name glance --description "OpenStack Image" image
    ```

    Create the glance API endpoints:

    ```shell
    openstack endpoint create --region RegionOne image public http://controller:9292
    openstack endpoint create --region RegionOne image internal http://controller:9292
    openstack endpoint create --region RegionOne image admin http://controller:9292
    ```

3. Install the package:

    ```shell
    dnf install openstack-glance
    ```

4. Edit the glance configuration file:

    ```shell
    vim /etc/glance/glance-api.conf

    [database]
    connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance

    [keystone_authtoken]
    www_authenticate_uri = http://controller:5000
    auth_url = http://controller:5000
    memcached_servers = controller:11211
    auth_type = password
    project_domain_name = Default
    user_domain_name = Default
    project_name = service
    username = glance
    password = GLANCE_PASS

    [paste_deploy]
    flavor = keystone

    [glance_store]
    stores = file,http
    default_store = file
    filesystem_store_datadir = /var/lib/glance/images/
    ```

    Explanation: the `[database]` section configures the database entry point; the `[keystone_authtoken]` and `[paste_deploy]` sections configure the identity service entry point; the `[glance_store]` section configures local file system storage and the location of image files.

5. Synchronize the database:

    ```shell
    su -s /bin/sh -c "glance-manage db_sync" glance
    ```

6. Start the service:

    ```shell
    systemctl enable openstack-glance-api.service
    systemctl start openstack-glance-api.service
    ```

7. Verify.

    Load the environment variables:

    ```shell
    source ~/.admin-openrc
    ```

    Download an image.

    x86 image:

    ```shell
    wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
    ```

    arm image:

    ```shell
    wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-aarch64-disk.img
    ```

    Note: if your environment runs on the Kunpeng (aarch64) architecture, download the aarch64 image; the image cirros-0.5.2-aarch64-disk.img has been tested.

    Upload the image to the Image service:

    ```shell
    openstack image create --disk-format qcow2 --container-format bare \
    --file cirros-0.4.0-x86_64-disk.img --public cirros
    ```

    Confirm the upload and check the image attributes:

    ```shell
    openstack image list
    ```
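Optionally, you can also inspect the attributes of a single image. This is a small extra check, not part of the original procedure, and assumes the image was named `cirros` as in the upload command above:

```shell
# Show the details (status, size, checksum, visibility) of the uploaded image.
openstack image show cirros
```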
### Placement

Placement is the resource scheduling component of OpenStack. It is normally not user-facing and is called by components such as Nova; it is installed on the controller node.

Before installing and configuring the Placement service, create its database, service credentials, and API endpoints.

1. Create the database.

    Access the database service as root:

    ```shell
    mysql -u root -p
    ```

    Create the placement database:

    ```shell
    MariaDB [(none)]> CREATE DATABASE placement;
    ```

    Grant access to the database:

    ```shell
    MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' \
    IDENTIFIED BY 'PLACEMENT_DBPASS';
    MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' \
    IDENTIFIED BY 'PLACEMENT_DBPASS';
    ```

    Replace `PLACEMENT_DBPASS` with the password for the placement database.

    Exit the database client:

    ```shell
    exit
    ```

2. Configure the user and endpoints.

    Source the admin credentials to gain admin CLI access:

    ```shell
    source ~/.admin-openrc
    ```

    Create the placement user and set its password:

    ```shell
    openstack user create --domain default --password-prompt placement
    User Password:
    Repeat User Password:
    ```

    Add the placement user to the service project and assign the admin role:

    ```shell
    openstack role add --project service --user placement admin
    ```

    Create the placement service entity:

    ```shell
    openstack service create --name placement \
    --description "Placement API" placement
    ```

    Create the Placement API service endpoints:

    ```shell
    openstack endpoint create --region RegionOne \
    placement public http://controller:8778
    openstack endpoint create --region RegionOne \
    placement internal http://controller:8778
    openstack endpoint create --region RegionOne \
    placement admin http://controller:8778
    ```

3. Install and configure the component.

    Install the package:

    ```shell
    dnf install openstack-placement-api
    ```

    Edit the `/etc/placement/placement.conf` configuration file as follows.

    In the `[placement_database]` section, configure the database entry point:

    ```ini
    [placement_database]
    connection = mysql+pymysql://placement:PLACEMENT_DBPASS@controller/placement
    ```

    Replace `PLACEMENT_DBPASS` with the password of the placement database.

    In the `[api]` and `[keystone_authtoken]` sections, configure the identity service entry point:

    ```ini
    [api]
    auth_strategy = keystone

    [keystone_authtoken]
    auth_url = http://controller:5000/v3
    memcached_servers = controller:11211
    auth_type = password
    project_domain_name = Default
    user_domain_name = Default
    project_name = service
    username = placement
    password = PLACEMENT_PASS
    ```

    Replace `PLACEMENT_PASS` with the password of the placement user.

4. Synchronize the database to populate the Placement database:

    ```shell
    su -s /bin/sh -c "placement-manage db sync" placement
    ```

5. Start the service by restarting httpd:

    ```shell
    systemctl restart httpd
    ```

6. Verify.

    Source the admin credentials to gain admin CLI access:

    ```shell
    source ~/.admin-openrc
    ```

    Run the status check:

    ```shell
    placement-status upgrade check
    +----------------------------------------------------------------------+
    | Upgrade Check Results                                                 |
    +----------------------------------------------------------------------+
    | Check: Missing Root Provider IDs                                      |
    | Result: Success                                                       |
    | Details: None                                                         |
    +----------------------------------------------------------------------+
    | Check: Incomplete Consumers                                           |
    | Result: Success                                                       |
    | Details: None                                                         |
    +----------------------------------------------------------------------+
    | Check: Policy File JSON to YAML Migration                             |
    | Result: Failure                                                       |
    | Details: Your policy file is JSON-formatted which is deprecated. You  |
    |   need to switch to YAML-formatted file. Use the                      |
    |   ``oslopolicy-convert-json-to-yaml`` tool to convert the             |
    |   existing JSON-formatted files to YAML in a backwards-               |
    |   compatible manner: https://docs.openstack.org/oslo.policy/          |
    |   latest/cli/oslopolicy-convert-json-to-yaml.html.                    |
    +----------------------------------------------------------------------+
    ```
    The result of the Policy File JSON to YAML Migration check is Failure. This is because JSON-formatted policy files have been deprecated in Placement since the Wallaby release. As the hint suggests, use the `oslopolicy-convert-json-to-yaml` tool to convert the existing JSON-formatted policy file to YAML:

    ```shell
    oslopolicy-convert-json-to-yaml --namespace placement \
    --policy-file /etc/placement/policy.json \
    --output-file /etc/placement/policy.yaml
    mv /etc/placement/policy.json{,.bak}
    ```

    Note: in the current environment this issue can be ignored; it does not affect operation.

    Run commands against the placement API.

    Install the osc-placement plugin:

    ```shell
    dnf install python3-osc-placement
    ```

    List the available resource classes and traits:

    ```shell
    openstack --os-placement-api-version 1.2 resource class list --sort-column name
    +----------------------------+
    | name                       |
    +----------------------------+
    | DISK_GB                    |
    | FPGA                       |
    | ...                        |

    openstack --os-placement-api-version 1.6 trait list --sort-column name
    +---------------------------------------+
    | name                                  |
    +---------------------------------------+
    | COMPUTE_ACCELERATORS                  |
    | COMPUTE_ARCH_AARCH64                  |
    | ...                                   |
    ```

### Nova

Nova is the compute service of OpenStack. It is responsible for creating and releasing virtual machines.

**Controller node**

Perform the following steps on the controller node.

1. Create the databases.

    Access the database service as root:

    ```shell
    mysql -u root -p
    ```

    Create the `nova_api`, `nova`, and `nova_cell0` databases:

    ```shell
    MariaDB [(none)]> CREATE DATABASE nova_api;
    MariaDB [(none)]> CREATE DATABASE nova;
    MariaDB [(none)]> CREATE DATABASE nova_cell0;
    ```

    Grant access to the databases:

    ```shell
    MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \
    IDENTIFIED BY 'NOVA_DBPASS';
    MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \
    IDENTIFIED BY 'NOVA_DBPASS';
    MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
    IDENTIFIED BY 'NOVA_DBPASS';
    MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
    IDENTIFIED BY 'NOVA_DBPASS';
    MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \
    IDENTIFIED BY 'NOVA_DBPASS';
    MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \
    IDENTIFIED BY 'NOVA_DBPASS';
    ```

    Replace `NOVA_DBPASS` with the password for the nova databases.

    Exit the database client:

    ```shell
    exit
    ```

2. Configure the user and endpoints.

    Source the admin credentials to gain admin CLI access:

    ```shell
    source ~/.admin-openrc
    ```

    Create the nova user and set its password:

    ```shell
    openstack user create --domain default --password-prompt nova
    User Password:
    Repeat User Password:
    ```

    Add the nova user to the service project and assign the admin role:

    ```shell
    openstack role add --project service --user nova admin
    ```

    Create the nova service entity:

    ```shell
    openstack service create --name nova \
    --description "OpenStack Compute" compute
    ```
    Create the Nova API service endpoints:

    ```shell
    openstack endpoint create --region RegionOne \
    compute public http://controller:8774/v2.1
    openstack endpoint create --region RegionOne \
    compute internal http://controller:8774/v2.1
    openstack endpoint create --region RegionOne \
    compute admin http://controller:8774/v2.1
    ```

3. Install and configure the components.

    Install the packages:

    ```shell
    dnf install openstack-nova-api openstack-nova-conductor \
    openstack-nova-novncproxy openstack-nova-scheduler
    ```

    Edit the `/etc/nova/nova.conf` configuration file as follows.

    In the `[DEFAULT]` section, enable the compute and metadata APIs, configure the RabbitMQ message queue entry point, set `my_ip` to the management IP of the controller node, and explicitly define `log_dir`:

    ```ini
    [DEFAULT]
    enabled_apis = osapi_compute,metadata
    transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
    my_ip = 192.168.0.2
    log_dir = /var/log/nova
    state_path = /var/lib/nova
    ```

    Replace `RABBIT_PASS` with the password of the openstack account in RabbitMQ.

    In the `[api_database]` and `[database]` sections, configure the database entry points:

    ```ini
    [api_database]
    connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api

    [database]
    connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova
    ```

    Replace `NOVA_DBPASS` with the password of the nova databases.

    In the `[api]` and `[keystone_authtoken]` sections, configure the identity service entry point:

    ```ini
    [api]
    auth_strategy = keystone

    [keystone_authtoken]
    auth_url = http://controller:5000/v3
    memcached_servers = controller:11211
    auth_type = password
    project_domain_name = Default
    user_domain_name = Default
    project_name = service
    username = nova
    password = NOVA_PASS
    ```

    Replace `NOVA_PASS` with the password of the nova user.

    In the `[vnc]` section, enable and configure the remote console entry point:

    ```ini
    [vnc]
    enabled = true
    server_listen = $my_ip
    server_proxyclient_address = $my_ip
    ```

    In the `[glance]` section, configure the address of the image service API:

    ```ini
    [glance]
    api_servers = http://controller:9292
    ```

    In the `[oslo_concurrency]` section, configure the lock path:

    ```ini
    [oslo_concurrency]
    lock_path = /var/lib/nova/tmp
    ```

    In the `[placement]` section, configure the entry point of the placement service:

    ```ini
    [placement]
    region_name = RegionOne
    project_domain_name = Default
    project_name = service
    auth_type = password
    user_domain_name = Default
    auth_url = http://controller:5000/v3
    username = placement
    password = PLACEMENT_PASS
    ```

    Replace `PLACEMENT_PASS` with the password of the placement user.

4. Synchronize the databases.

    Synchronize the nova-api database:

    ```shell
    su -s /bin/sh -c "nova-manage api_db sync" nova
    ```

    Register the cell0 database:

    ```shell
    su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
    ```

    Create the cell1 cell:

    ```shell
    su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
    ```

    Synchronize the nova database:

    ```shell
    su -s /bin/sh -c "nova-manage db sync" nova
    ```

    Verify that cell0 and cell1 are registered correctly:

    ```shell
    su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova
    ```

5. Start the services:

    ```shell
    systemctl enable \
    openstack-nova-api.service \
    openstack-nova-scheduler.service \
    openstack-nova-conductor.service \
    openstack-nova-novncproxy.service
    systemctl start \
    openstack-nova-api.service \
    openstack-nova-scheduler.service \
    openstack-nova-conductor.service \
    openstack-nova-novncproxy.service
    ```
**Compute node**

Perform the following steps on the compute node.

1. Install the package:

    ```shell
    dnf install openstack-nova-compute
    ```

2. Edit the `/etc/nova/nova.conf` configuration file.

    In the `[DEFAULT]` section, enable the compute and metadata APIs, configure the RabbitMQ message queue entry point, set `my_ip` to the management IP of the compute node, and explicitly define `compute_driver`, `instances_path`, and `log_dir`:

    ```ini
    [DEFAULT]
    enabled_apis = osapi_compute,metadata
    transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
    my_ip = 192.168.0.3
    compute_driver = libvirt.LibvirtDriver
    instances_path = /var/lib/nova/instances
    log_dir = /var/log/nova
    ```

    Replace `RABBIT_PASS` with the password of the openstack account in RabbitMQ.

    In the `[api]` and `[keystone_authtoken]` sections, configure the identity service entry point:

    ```ini
    [api]
    auth_strategy = keystone

    [keystone_authtoken]
    auth_url = http://controller:5000/v3
    memcached_servers = controller:11211
    auth_type = password
    project_domain_name = Default
    user_domain_name = Default
    project_name = service
    username = nova
    password = NOVA_PASS
    ```

    Replace `NOVA_PASS` with the password of the nova user.

    In the `[vnc]` section, enable and configure the remote console entry point:

    ```ini
    [vnc]
    enabled = true
    server_listen = $my_ip
    server_proxyclient_address = $my_ip
    novncproxy_base_url = http://controller:6080/vnc_auto.html
    ```

    In the `[glance]` section, configure the address of the image service API:

    ```ini
    [glance]
    api_servers = http://controller:9292
    ```

    In the `[oslo_concurrency]` section, configure the lock path:

    ```ini
    [oslo_concurrency]
    lock_path = /var/lib/nova/tmp
    ```

    In the `[placement]` section, configure the entry point of the placement service:

    ```ini
    [placement]
    region_name = RegionOne
    project_domain_name = Default
    project_name = service
    auth_type = password
    user_domain_name = Default
    auth_url = http://controller:5000/v3
    username = placement
    password = PLACEMENT_PASS
    ```

    Replace `PLACEMENT_PASS` with the password of the placement user.

3. Check whether the compute node supports hardware acceleration for virtual machines (x86_64).

    On an x86_64 processor, run the following command to check for hardware acceleration support:

    ```shell
    egrep -c '(vmx|svm)' /proc/cpuinfo
    ```

    If the command returns 0, hardware acceleration is not supported and libvirt must be configured to use QEMU instead of the default KVM. Edit the `[libvirt]` section of `/etc/nova/nova.conf`:

    ```ini
    [libvirt]
    virt_type = qemu
    ```

    If the command returns 1 or a greater value, hardware acceleration is supported and no extra configuration is needed.

4. Check whether the compute node supports hardware acceleration for virtual machines (arm64).

    On an arm64 processor, run the following command to check for hardware acceleration support:

    ```shell
    virt-host-validate
    ```
    This command is provided by libvirt. At this point libvirt has already been installed as a dependency of openstack-nova-compute, so the command is available in the environment.

    If the output shows FAIL, hardware acceleration is not supported and libvirt must be configured to use QEMU instead of the default KVM:

    ```text
    QEMU: Checking if device /dev/kvm exists: FAIL (Check that CPU and firmware supports virtualization and kvm module is loaded)
    ```

    Edit the `[libvirt]` section of `/etc/nova/nova.conf`:

    ```ini
    [libvirt]
    virt_type = qemu
    ```

    If the output shows PASS, hardware acceleration is supported and no extra configuration is needed:

    ```text
    QEMU: Checking if device /dev/kvm exists: PASS
    ```

5. Configure qemu (arm64 only).

    Perform this step only when the processor is of the arm64 architecture.

    Edit `/etc/libvirt/qemu.conf`:

    ```ini
    nvram = ["/usr/share/AAVMF/AAVMF_CODE.fd: \
    /usr/share/AAVMF/AAVMF_VARS.fd", \
    "/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw: \
    /usr/share/edk2/aarch64/vars-template-pflash.raw"]
    ```

    Edit `/etc/qemu/firmware/edk2-aarch64.json`:

    ```json
    {
        "description": "UEFI firmware for ARM64 virtual machines",
        "interface-types": [
            "uefi"
        ],
        "mapping": {
            "device": "flash",
            "executable": {
                "filename": "/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw",
                "format": "raw"
            },
            "nvram-template": {
                "filename": "/usr/share/edk2/aarch64/vars-template-pflash.raw",
                "format": "raw"
            }
        },
        "targets": [
            {
                "architecture": "aarch64",
                "machines": [
                    "virt-*"
                ]
            }
        ],
        "features": [],
        "tags": []
    }
    ```

6. Start the services:

    ```shell
    systemctl enable libvirtd.service openstack-nova-compute.service
    systemctl start libvirtd.service openstack-nova-compute.service
    ```

**Controller node**

Perform the following steps on the controller node.

7. Add the compute node to the OpenStack cluster.

    Source the admin credentials to gain admin CLI access:

    ```shell
    source ~/.admin-openrc
    ```

    Confirm that the nova-compute service is registered in the database:

    ```shell
    openstack compute service list --service nova-compute
    ```

    Discover the compute node and add it to the cell database:

    ```shell
    su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
    ```

    The result looks like this:

    ```text
    Modules with known eventlet monkey patching issues were imported prior to eventlet monkey patching: urllib3. This warning can usually be ignored if the caller is only importing and not executing nova code.
    Found 2 cell mappings.
    Skipping cell0 since it does not contain hosts.
    Getting computes from cell 'cell1': 6dae034e-b2d9-4a6c-b6f0-60ada6a6ddc2
    Checking host mapping for compute host 'compute': 6286a86f-09d7-4786-9137-1185654c9e2e
    Creating host mapping for compute host 'compute': 6286a86f-09d7-4786-9137-1185654c9e2e
    Found 1 unmapped computes in cell: 6dae034e-b2d9-4a6c-b6f0-60ada6a6ddc2
    ```
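    If you add compute nodes regularly, Nova can also discover them periodically instead of requiring this command each time. This is an optional setting documented upstream for Nova, not part of the original procedure; add it to `/etc/nova/nova.conf` on the controller node if desired:

    ```ini
    [scheduler]
    # Interval, in seconds, at which nova-scheduler discovers new compute hosts.
    discover_hosts_in_cells_interval = 300
    ```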
8. Verify.

    List the service components to verify that each process started and registered successfully:

    ```shell
    openstack compute service list
    ```

    List the API endpoints in the identity service to verify connectivity with the identity service:

    ```shell
    openstack catalog list
    ```

    List the images in the image service to verify connectivity with the image service:

    ```shell
    openstack image list
    ```

    Check whether the cells are working properly and whether the other prerequisites are in place:

    ```shell
    nova-status upgrade check
    ```

### Neutron

Neutron is the networking service of OpenStack. It provides virtual switching, IP routing, DHCP, and related functions.

**Controller node**

1. Create the database, service credentials, and API endpoints.

    Create the database:

    ```shell
    mysql -u root -p
    MariaDB [(none)]> CREATE DATABASE neutron;
    MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'NEUTRON_DBPASS';
    MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'NEUTRON_DBPASS';
    MariaDB [(none)]> exit;
    ```

    Create the user and service, and remember the password entered when creating the neutron user; it is used later as `NEUTRON_PASS`:

    ```shell
    source ~/.admin-openrc
    openstack user create --domain default --password-prompt neutron
    openstack role add --project service --user neutron admin
    openstack service create --name neutron --description "OpenStack Networking" network
    ```

    Create the Neutron API service endpoints:

    ```shell
    openstack endpoint create --region RegionOne network public http://controller:9696
    openstack endpoint create --region RegionOne network internal http://controller:9696
    openstack endpoint create --region RegionOne network admin http://controller:9696
    ```

2. Install the packages:

    ```shell
    dnf install -y openstack-neutron openstack-neutron-linuxbridge ebtables ipset openstack-neutron-ml2
    ```
3. Configure Neutron.

    Edit `/etc/neutron/neutron.conf`:

    ```ini
    [database]
    connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron

    [DEFAULT]
    core_plugin = ml2
    service_plugins = router
    allow_overlapping_ips = true
    transport_url = rabbit://openstack:RABBIT_PASS@controller
    auth_strategy = keystone
    notify_nova_on_port_status_changes = true
    notify_nova_on_port_data_changes = true

    [keystone_authtoken]
    www_authenticate_uri = http://controller:5000
    auth_url = http://controller:5000
    memcached_servers = controller:11211
    auth_type = password
    project_domain_name = Default
    user_domain_name = Default
    project_name = service
    username = neutron
    password = NEUTRON_PASS

    [nova]
    auth_url = http://controller:5000
    auth_type = password
    project_domain_name = Default
    user_domain_name = Default
    region_name = RegionOne
    project_name = service
    username = nova
    password = NOVA_PASS

    [oslo_concurrency]
    lock_path = /var/lib/neutron/tmp

    [experimental]
    linuxbridge = true
    ```

    Configure ML2. The ML2 settings can be adjusted to your needs; this guide uses a provider network with linuxbridge.

    Edit `/etc/neutron/plugins/ml2/ml2_conf.ini`:

    ```ini
    [ml2]
    type_drivers = flat,vlan,vxlan
    tenant_network_types = vxlan
    mechanism_drivers = linuxbridge,l2population
    extension_drivers = port_security

    [ml2_type_flat]
    flat_networks = provider

    [ml2_type_vxlan]
    vni_ranges = 1:1000

    [securitygroup]
    enable_ipset = true
    ```

    Edit `/etc/neutron/plugins/ml2/linuxbridge_agent.ini`:

    ```ini
    [linux_bridge]
    physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME

    [vxlan]
    enable_vxlan = true
    local_ip = OVERLAY_INTERFACE_IP_ADDRESS
    l2_population = true

    [securitygroup]
    enable_security_group = true
    firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
    ```

    Configure the Layer-3 agent by editing `/etc/neutron/l3_agent.ini`:

    ```ini
    [DEFAULT]
    interface_driver = linuxbridge
    ```

    Configure the DHCP agent by editing `/etc/neutron/dhcp_agent.ini`:

    ```ini
    [DEFAULT]
    interface_driver = linuxbridge
    dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
    enable_isolated_metadata = true
    ```

    Configure the metadata agent by editing `/etc/neutron/metadata_agent.ini`:

    ```ini
    [DEFAULT]
    nova_metadata_host = controller
    metadata_proxy_shared_secret = METADATA_SECRET
    ```

    Configure the nova service to use neutron by editing `/etc/nova/nova.conf`:

    ```ini
    [neutron]
    auth_url = http://controller:5000
    auth_type = password
    project_domain_name = default
    user_domain_name = default
    region_name = RegionOne
    project_name = service
    username = neutron
    password = NEUTRON_PASS
    service_metadata_proxy = true
    metadata_proxy_shared_secret = METADATA_SECRET
    ```

    Create a symbolic link for `/etc/neutron/plugin.ini`:

    ```shell
    ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
    ```

4. Synchronize the database:

    ```shell
    su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
    ```

5. Restart the nova-api service:

    ```shell
    systemctl restart openstack-nova-api
    ```

6. Start the networking services:

    ```shell
    systemctl enable neutron-server.service neutron-linuxbridge-agent.service \
    neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service
    systemctl start neutron-server.service neutron-linuxbridge-agent.service \
    neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service
    ```

**Compute node**

1. Install the packages:

    ```shell
    dnf install openstack-neutron-linuxbridge ebtables ipset -y
    ```

2. Configure Neutron.
    Edit `/etc/neutron/neutron.conf`:

    ```ini
    [DEFAULT]
    transport_url = rabbit://openstack:RABBIT_PASS@controller
    auth_strategy = keystone

    [keystone_authtoken]
    www_authenticate_uri = http://controller:5000
    auth_url = http://controller:5000
    memcached_servers = controller:11211
    auth_type = password
    project_domain_name = Default
    user_domain_name = Default
    project_name = service
    username = neutron
    password = NEUTRON_PASS

    [oslo_concurrency]
    lock_path = /var/lib/neutron/tmp
    ```

    Edit `/etc/neutron/plugins/ml2/linuxbridge_agent.ini`:

    ```ini
    [linux_bridge]
    physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME

    [vxlan]
    enable_vxlan = true
    local_ip = OVERLAY_INTERFACE_IP_ADDRESS
    l2_population = true

    [securitygroup]
    enable_security_group = true
    firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
    ```

    Configure the nova-compute service to use neutron by editing `/etc/nova/nova.conf`:

    ```ini
    [neutron]
    auth_url = http://controller:5000
    auth_type = password
    project_domain_name = default
    user_domain_name = default
    region_name = RegionOne
    project_name = service
    username = neutron
    password = NEUTRON_PASS
    ```

3. Restart the nova-compute service:

    ```shell
    systemctl restart openstack-nova-compute.service
    ```

4. Start the Neutron linuxbridge agent service:

    ```shell
    systemctl enable neutron-linuxbridge-agent
    systemctl start neutron-linuxbridge-agent
    ```
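With Nova and Neutron both in place, you can optionally run an end-to-end smoke test by creating a provider network, a small flavor, and a test instance. This is an illustrative sketch rather than part of the original procedure: it assumes the flat provider mapping configured above (`provider:PROVIDER_INTERFACE_NAME`), and the subnet range, gateway, and DNS values are placeholders that must match your physical network.

```shell
source ~/.admin-openrc

# Provider (flat) network attached to the physical network named "provider"
openstack network create --share --external \
  --provider-physical-network provider \
  --provider-network-type flat provider

# Subnet values below are placeholders; adapt them to the real provider network
openstack subnet create --network provider \
  --allocation-pool start=192.168.0.100,end=192.168.0.200 \
  --dns-nameserver 8.8.8.8 --gateway 192.168.0.1 \
  --subnet-range 192.168.0.0/24 provider

# A tiny flavor and a test instance using the cirros image uploaded earlier
openstack flavor create --id 0 --vcpus 1 --ram 64 --disk 1 m1.nano
openstack server create --flavor m1.nano --image cirros --network provider test-vm
openstack server list   # the instance should eventually reach ACTIVE
```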
### Cinder

Cinder is the block storage service of OpenStack. It provides creation, attachment, and backup of block devices.

**Controller node:**

1. Initialize the database. `CINDER_DBPASS` is a user-defined password for the cinder database.

    ```shell
    mysql -u root -p
    MariaDB [(none)]> CREATE DATABASE cinder;
    MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'CINDER_DBPASS';
    MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'CINDER_DBPASS';
    MariaDB [(none)]> exit
    ```

2. Initialize the Keystone resources:

    ```shell
    source ~/.admin-openrc
    # When creating the user, the command line prompts for a password. Enter a password of
    # your choice and use it wherever CINDER_PASS appears below.
    openstack user create --domain default --password-prompt cinder
    openstack role add --project service --user cinder admin
    openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
    openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s
    openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s
    openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s
    ```

3. Install the packages:

    ```shell
    dnf install openstack-cinder-api openstack-cinder-scheduler
    ```

4. Edit the cinder configuration file `/etc/cinder/cinder.conf`:

    ```ini
    [DEFAULT]
    transport_url = rabbit://openstack:RABBIT_PASS@controller
    auth_strategy = keystone
    my_ip = 192.168.0.2

    [database]
    connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder

    [keystone_authtoken]
    www_authenticate_uri = http://controller:5000
    auth_url = http://controller:5000
    memcached_servers = controller:11211
    auth_type = password
    project_domain_name = Default
    user_domain_name = Default
    project_name = service
    username = cinder
    password = CINDER_PASS

    [oslo_concurrency]
    lock_path = /var/lib/cinder/tmp
    ```

5. Synchronize the database:

    ```shell
    su -s /bin/sh -c "cinder-manage db sync" cinder
    ```

6. Edit the nova configuration `/etc/nova/nova.conf`:

    ```ini
    [cinder]
    os_region_name = RegionOne
    ```

7. Start the services:

    ```shell
    systemctl restart openstack-nova-api
    systemctl start openstack-cinder-api openstack-cinder-scheduler
    ```

**Storage node:**

Prepare at least one disk on the storage node in advance to serve as the cinder storage backend. The following steps assume the storage node already has an unused disk named `/dev/sdb`; replace the device name with the real one in your environment.

Cinder supports many types of backend storage. This guide uses the simplest option, LVM, as a reference. If you want to use another backend such as Ceph, configure it yourself.

1. Install the packages:

    ```shell
    dnf install lvm2 device-mapper-persistent-data scsi-target-utils rpcbind nfs-utils \
    openstack-cinder-volume openstack-cinder-backup
    ```

2. Configure the LVM volume group:

    ```shell
    pvcreate /dev/sdb
    vgcreate cinder-volumes /dev/sdb
    ```

3. Edit the cinder configuration `/etc/cinder/cinder.conf`:

    ```ini
    [DEFAULT]
    transport_url = rabbit://openstack:RABBIT_PASS@controller
    auth_strategy = keystone
    my_ip = 192.168.0.4
    enabled_backends = lvm
    glance_api_servers = http://controller:9292

    [keystone_authtoken]
    www_authenticate_uri = http://controller:5000
    auth_url = http://controller:5000
    memcached_servers = controller:11211
    auth_type = password
    project_domain_name = default
    user_domain_name = default
    project_name = service
    username = cinder
    password = CINDER_PASS

    [database]
    connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder

    [lvm]
    volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
    volume_group = cinder-volumes
    target_protocol = iscsi
    target_helper = lioadm

    [oslo_concurrency]
    lock_path = /var/lib/cinder/tmp
    ```

4. Configure cinder backup (optional).

    cinder-backup is an optional backup service. Cinder supports many backup backends; this guide uses Swift. If you want to use another backend such as NFS, configure it yourself, for example by following the NFS configuration notes in the official OpenStack documentation.

    Edit `/etc/cinder/cinder.conf` and add the following to the `[DEFAULT]` section:

    ```ini
    [DEFAULT]
    backup_driver = cinder.backup.drivers.swift.SwiftBackupDriver
    backup_swift_url = SWIFT_URL
    ```
    `SWIFT_URL` here is the URL of the swift service in the environment. After the swift service has been deployed, obtain it by running `openstack catalog show object-store`.

5. Start the services:

    ```shell
    systemctl start openstack-cinder-volume target
    systemctl start openstack-cinder-backup   # optional
    ```

At this point the Cinder deployment is complete. It can be verified on the controller with the following simple commands:

```shell
source ~/.admin-openrc
openstack volume service list
openstack volume list
```
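As an optional further check (an illustrative sketch, not part of the original procedure), you can create a small test volume and confirm that it becomes available, which exercises cinder-scheduler and the LVM backend on the storage node:

```shell
source ~/.admin-openrc
openstack volume create --size 1 test-volume
openstack volume list   # "test-volume" should reach the "available" status
```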
### Horizon

Horizon is the web front end provided by OpenStack. It lets users control the OpenStack cluster from a web page instead of the CLI. Horizon is normally deployed on the controller node.

1. Install the package:

    ```shell
    dnf install openstack-dashboard
    ```

2. Edit the configuration file `/etc/openstack-dashboard/local_settings`:

    ```python
    OPENSTACK_HOST = "controller"
    ALLOWED_HOSTS = ['*', ]
    OPENSTACK_KEYSTONE_URL = "http://controller:5000/v3"

    SESSION_ENGINE = 'django.contrib.sessions.backends.cache'

    CACHES = {
        'default': {
            'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
            'LOCATION': 'controller:11211',
        }
    }

    OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
    OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
    OPENSTACK_KEYSTONE_DEFAULT_ROLE = "member"

    WEBROOT = '/dashboard'
    POLICY_FILES_PATH = "/etc/openstack-dashboard"

    OPENSTACK_API_VERSIONS = {
        "identity": 3,
        "image": 2,
        "volume": 3,
    }
    ```

3. Restart the service:

    ```shell
    systemctl restart httpd
    ```

At this point the horizon deployment is complete. Open a browser, enter `http://192.168.0.2/dashboard`, and the horizon login page appears.

### Ironic

Ironic is the bare metal service of OpenStack. It is recommended if you need bare metal provisioning; otherwise it does not need to be installed.

Perform the following steps on the controller node.

1. Set up the database.

    The Bare Metal service stores its information in a database. Create an `ironic` database that the `ironic` user can access, and replace `IRONIC_DBPASS` with a suitable password:

    ```shell
    mysql -u root -p
    MariaDB [(none)]> CREATE DATABASE ironic CHARACTER SET utf8;
    MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'localhost' \
    IDENTIFIED BY 'IRONIC_DBPASS';
    MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'%' \
    IDENTIFIED BY 'IRONIC_DBPASS';
    MariaDB [(none)]> exit
    ```

2. Create the service user credentials.

    Create the Bare Metal service users. Replace `IRONIC_PASS` with the password of the ironic user and `IRONIC_INSPECTOR_PASS` with the password of the ironic_inspector user:

    ```shell
    openstack user create --password IRONIC_PASS \
    --email ironic@example.com ironic
    openstack role add --project service --user ironic admin
    openstack service create --name ironic \
    --description "Ironic baremetal provisioning service" baremetal
    openstack service create --name ironic-inspector --description "Ironic inspector baremetal provisioning service" baremetal-introspection
    openstack user create --password IRONIC_INSPECTOR_PASS --email ironic_inspector@example.com ironic-inspector
    openstack role add --project service --user ironic-inspector admin
    ```

3. Create the Bare Metal service access endpoints:

    ```shell
    openstack endpoint create --region RegionOne baremetal admin http://192.168.0.2:6385
    openstack endpoint create --region RegionOne baremetal public http://192.168.0.2:6385
    openstack endpoint create --region RegionOne baremetal internal http://192.168.0.2:6385
    openstack endpoint create --region RegionOne baremetal-introspection internal http://192.168.0.2:5050/v1
    openstack endpoint create --region RegionOne baremetal-introspection public http://192.168.0.2:5050/v1
    openstack endpoint create --region RegionOne baremetal-introspection admin http://192.168.0.2:5050/v1
    ```

4. Install the components:

    ```shell
    dnf install openstack-ironic-api openstack-ironic-conductor python3-ironicclient
    ```

5. Configure the ironic-api service.

    The configuration file is `/etc/ironic/ironic.conf`.

    Configure the location of the database through the `connection` option as shown below. Replace `IRONIC_DBPASS` with the password of the `ironic` user and `DB_IP` with the IP address of the DB server:

    ```ini
    [database]
    # The SQLAlchemy connection string used to connect to the
    # database (string value)
    # connection = mysql+pymysql://ironic:IRONIC_DBPASS@DB_IP/ironic
    connection = mysql+pymysql://ironic:IRONIC_DBPASS@controller/ironic
    ```

    Configure the ironic-api service to use the RabbitMQ message broker through the following option. Replace `RPC_*` with the RabbitMQ address and credentials:

    ```ini
    [DEFAULT]
    # A URL representing the messaging driver to use and its full
    # configuration. (string value)
    # transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
    transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
    ```

    You may also replace RabbitMQ with the json-rpc mechanism.

    Configure the ironic-api service to use the identity service credentials. Replace `PUBLIC_IDENTITY_IP` with the public IP of the identity server, `PRIVATE_IDENTITY_IP` with the private IP of the identity server, `IRONIC_PASS` with the password of the `ironic` user in the identity service, and `RABBIT_PASS` with the password of the openstack account in RabbitMQ:
    ```ini
    [DEFAULT]
    # Authentication strategy used by ironic-api: one of
    # "keystone" or "noauth". "noauth" should not be used in a
    # production environment because all authentication will be
    # disabled. (string value)
    auth_strategy=keystone
    host = controller
    memcache_servers = controller:11211
    enabled_network_interfaces = flat,noop,neutron
    default_network_interface = noop
    enabled_hardware_types = ipmi
    enabled_boot_interfaces = pxe
    enabled_deploy_interfaces = direct
    default_deploy_interface = direct
    enabled_inspect_interfaces = inspector
    enabled_management_interfaces = ipmitool
    enabled_power_interfaces = ipmitool
    enabled_rescue_interfaces = no-rescue,agent
    isolinux_bin = /usr/share/syslinux/isolinux.bin
    logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s

    [keystone_authtoken]
    # Authentication type to load (string value)
    auth_type=password
    # Complete public Identity API endpoint (string value)
    # www_authenticate_uri=http://PUBLIC_IDENTITY_IP:5000
    www_authenticate_uri=http://controller:5000
    # Complete admin Identity API endpoint. (string value)
    # auth_url=http://PRIVATE_IDENTITY_IP:5000
    auth_url=http://controller:5000
    # Service username. (string value)
    username=ironic
    # Service account password. (string value)
    password=IRONIC_PASS
    # Service tenant name. (string value)
    project_name=service
    # Domain name containing project (string value)
    project_domain_name=Default
    # User's domain name (string value)
    user_domain_name=Default

    [agent]
    deploy_logs_collect = always
    deploy_logs_local_path = /var/log/ironic/deploy
    deploy_logs_storage_backend = local
    image_download_source = http
    stream_raw_images = false
    force_raw_images = false
    verify_ca = False

    [oslo_concurrency]

    [oslo_messaging_notifications]
    transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
    topics = notifications
    driver = messagingv2

    [oslo_messaging_rabbit]
    amqp_durable_queues = True
    rabbit_ha_queues = True

    [pxe]
    ipxe_enabled = false
    pxe_append_params = nofb nomodeset vga=normal coreos.autologin ipa-insecure=1
    image_cache_size = 204800
    tftp_root=/var/lib/tftpboot/cephfs/
    tftp_master_path=/var/lib/tftpboot/cephfs/master_images

    [dhcp]
    dhcp_provider = none
    ```

    Create the Bare Metal service database tables:

    ```shell
    ironic-dbsync --config-file /etc/ironic/ironic.conf create_schema
    ```

    Restart the ironic-api service:

    ```shell
    sudo systemctl restart openstack-ironic-api
    ```

6. Configure the ironic-conductor service.

    The following is the standard configuration of the ironic-conductor service itself. ironic-conductor can be deployed on a different node from ironic-api; in this guide both run on the controller node, so configuration items that were already set above can be skipped.

    Set `my_ip` to the IP of the host running the conductor service:

    ```ini
    [DEFAULT]
    # IP address of this host. If unset, will determine the IP
    # programmatically. If unable to do so, will use "127.0.0.1".
    # (string value)
    # my_ip=HOST_IP
    my_ip = 192.168.0.2
    ```

    Configure the location of the database; ironic-conductor should use the same configuration as ironic-api. Replace `IRONIC_DBPASS` with the password of the `ironic` user:
    ```ini
    [database]
    # The SQLAlchemy connection string to use to connect to the
    # database. (string value)
    connection = mysql+pymysql://ironic:IRONIC_DBPASS@controller/ironic
    ```

    Configure the service to use the RabbitMQ message broker through the following option; ironic-conductor should use the same configuration as ironic-api. Replace `RABBIT_PASS` with the password of the openstack account in RabbitMQ:

    ```ini
    [DEFAULT]
    # A URL representing the messaging driver to use and its full
    # configuration. (string value)
    transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
    ```

    You may also replace RabbitMQ with the json-rpc mechanism.

    Configure credentials for accessing other OpenStack services.

    To communicate with other OpenStack services, the Bare Metal service needs to authenticate with the OpenStack Identity service using a service user when making requests. The credentials of these users must be configured in each configuration section related to the corresponding service:

    - `[neutron]` - access to the OpenStack networking service
    - `[glance]` - access to the OpenStack image service
    - `[swift]` - access to the OpenStack object storage service
    - `[cinder]` - access to the OpenStack block storage service
    - `[inspector]` - access to the OpenStack bare metal introspection service
    - `[service_catalog]` - a special entry that stores the credentials the Bare Metal service uses to discover its own API URL endpoint registered in the OpenStack identity service catalog

    For simplicity, you can use the same service user for all services. For backward compatibility, this should be the same user configured in the `[keystone_authtoken]` section of the ironic-api service. This is not mandatory; you can also create and configure a different service user for each service.

    In the example below, the authentication information for accessing the OpenStack networking service is configured as follows:

    - the networking service is deployed in the identity service region named RegionOne, with only the public endpoint registered in the service catalog;
    - requests use HTTPS connections with a specific CA SSL certificate;
    - the same service user as configured for ironic-api;
    - the dynamic password authentication plugin discovers a suitable identity service API version based on the other options.

    Replace `IRONIC_PASS` with the password of the ironic user.

    ```ini
    [neutron]
    # Authentication type to load (string value)
    auth_type = password
    # Authentication URL (string value)
    auth_url=https://IDENTITY_IP:5000/
    # Username (string value)
    username=ironic
    # User's password (string value)
    password=IRONIC_PASS
    # Project name to scope to (string value)
    project_name=service
    # Domain ID containing project (string value)
    project_domain_id=default
    # User's domain id (string value)
    user_domain_id=default
    # PEM encoded Certificate Authority to use when verifying
    # HTTPs connections. (string value)
    cafile=/opt/stack/data/ca-bundle.pem
    # The default region_name for endpoint URL discovery. (string
    # value)
    region_name = RegionOne
    # List of interfaces, in order of preference, for endpoint
    # URL. (list value)
    valid_interfaces=public
    ```
    Further reference configuration:

    ```ini
    [glance]
    endpoint_override = http://controller:9292
    www_authenticate_uri = http://controller:5000
    auth_url = http://controller:5000
    auth_type = password
    username = ironic
    password = IRONIC_PASS
    project_domain_name = default
    user_domain_name = default
    region_name = RegionOne
    project_name = service

    [service_catalog]
    region_name = RegionOne
    project_domain_id = default
    user_domain_id = default
    project_name = service
    password = IRONIC_PASS
    username = ironic
    auth_url = http://controller:5000
    auth_type = password
    ```

    By default, to communicate with another service, the Bare Metal service tries to discover a suitable endpoint for that service through the service catalog of the identity service. If you want to use a different endpoint for a particular service, specify it with the `endpoint_override` option in the Bare Metal service configuration file:

    ```ini
    [neutron]
    endpoint_override =
    ```

    Configure the enabled drivers and hardware types.

    Set the hardware types allowed by the ironic-conductor service through `enabled_hardware_types`:

    ```ini
    [DEFAULT]
    enabled_hardware_types = ipmi
    ```

    Configure the hardware interfaces:

    ```ini
    enabled_boot_interfaces = pxe
    enabled_deploy_interfaces = direct,iscsi
    enabled_inspect_interfaces = inspector
    enabled_management_interfaces = ipmitool
    enabled_power_interfaces = ipmitool
    ```

    Configure the interface defaults:

    ```ini
    [DEFAULT]
    default_deploy_interface = direct
    default_network_interface = neutron
    ```

    If any driver using Direct deploy is enabled, the Swift backend of the image service must be installed and configured. The Ceph Object Gateway (RADOS Gateway) is also supported as an image service backend.

    Restart the ironic-conductor service:

    ```shell
    sudo systemctl restart openstack-ironic-conductor
    ```
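    At this point you can optionally confirm that ironic-api and ironic-conductor are communicating. This quick check is not part of the original procedure and uses the python3-ironicclient installed above:

    ```shell
    source ~/.admin-openrc
    # The ipmi hardware type enabled above should appear in the list.
    openstack baremetal driver list
    ```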
7. Configure the ironic-inspector service.

    Install the component:

    ```shell
    dnf install openstack-ironic-inspector
    ```

    Create the database:

    ```shell
    # mysql -u root -p
    MariaDB [(none)]> CREATE DATABASE ironic_inspector CHARACTER SET utf8;
    MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic_inspector.* TO 'ironic_inspector'@'localhost' \
    IDENTIFIED BY 'IRONIC_INSPECTOR_DBPASS';
    MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic_inspector.* TO 'ironic_inspector'@'%' \
    IDENTIFIED BY 'IRONIC_INSPECTOR_DBPASS';
    MariaDB [(none)]> exit
    ```

    Configure `/etc/ironic-inspector/inspector.conf`.

    Configure the location of the database through the `connection` option as shown below. Replace `IRONIC_INSPECTOR_DBPASS` with the password of the `ironic_inspector` user:

    ```ini
    [database]
    backend = sqlalchemy
    connection = mysql+pymysql://ironic_inspector:IRONIC_INSPECTOR_DBPASS@controller/ironic_inspector
    min_pool_size = 100
    max_pool_size = 500
    pool_timeout = 30
    max_retries = 5
    max_overflow = 200
    db_retry_interval = 2
    db_inc_retry_interval = True
    db_max_retry_interval = 2
    db_max_retries = 5
    ```

    Configure the message queue address:

    ```ini
    [DEFAULT]
    transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
    ```

    Set up keystone authentication:

    ```ini
    [DEFAULT]
    auth_strategy = keystone
    timeout = 900
    rootwrap_config = /etc/ironic-inspector/rootwrap.conf
    logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s
    log_dir = /var/log/ironic-inspector
    state_path = /var/lib/ironic-inspector
    use_stderr = False

    [ironic]
    api_endpoint = http://IRONIC_API_HOST_ADDRESS:6385
    auth_type = password
    auth_url = http://PUBLIC_IDENTITY_IP:5000
    auth_strategy = keystone
    ironic_url = http://IRONIC_API_HOST_ADDRESS:6385
    os_region = RegionOne
    project_name = service
    project_domain_name = Default
    user_domain_name = Default
    username = IRONIC_SERVICE_USER_NAME
    password = IRONIC_SERVICE_USER_PASSWORD

    [keystone_authtoken]
    auth_type = password
    auth_url = http://controller:5000
    www_authenticate_uri = http://controller:5000
    project_domain_name = default
    user_domain_name = default
    project_name = service
    username = ironic_inspector
    password = IRONICPASSWD
    region_name = RegionOne
    memcache_servers = controller:11211
    token_cache_time = 300

    [processing]
    add_ports = active
    processing_hooks = $default_processing_hooks,local_link_connection,lldp_basic
    ramdisk_logs_dir = /var/log/ironic-inspector/ramdisk
    always_store_ramdisk_logs = true
    store_data = none
    power_off = false

    [pxe_filter]
    driver = iptables

    [capabilities]
    boot_mode=True
    ```

    Configure the ironic-inspector dnsmasq service:

    ```ini
    # Configuration file: /etc/ironic-inspector/dnsmasq.conf
    port=0
    interface=enp3s0                       # replace with the actual listening interface
    dhcp-range=192.168.0.40,192.168.0.50   # replace with the actual DHCP address range
    bind-interfaces
    enable-tftp
    dhcp-match=set:efi,option:client-arch,7
    dhcp-match=set:efi,option:client-arch,9
    dhcp-match=aarch64, option:client-arch,11
    dhcp-boot=tag:aarch64,grubaa64.efi
    dhcp-boot=tag:!aarch64,tag:efi,grubx64.efi
    dhcp-boot=tag:!aarch64,tag:!efi,pxelinux.0
    tftp-root=/tftpboot                    # replace with the actual tftpboot directory
    log-facility=/var/log/dnsmasq.log
    ```

    Disable DHCP on the subnet of the ironic provision network:

    ```shell
    openstack subnet set --no-dhcp 72426e89-f552-4dc4-9ac7-c4e131ce7f3c
    ```

    Initialize the database of the ironic-inspector service:

    ```shell
    ironic-inspector-dbsync --config-file /etc/ironic-inspector/inspector.conf upgrade
    ```

    Start the services:

    ```shell
    systemctl enable --now openstack-ironic-inspector.service
    systemctl enable --now openstack-ironic-inspector-dnsmasq.service
    ```

8. Configure the httpd service.

    Create the httpd root directory used by ironic and set its owner and group. The directory path must match the path specified by the `http_root` option in the `[deploy]` section of `/etc/ironic/ironic.conf`:

    ```shell
    mkdir -p /var/lib/ironic/httproot
    chown ironic.ironic /var/lib/ironic/httproot
    ```

    Install the httpd service (skip if it is already installed):

    ```shell
    dnf install httpd -y
    ```

    Create the file `/etc/httpd/conf.d/openstack-ironic-httpd.conf` with the following content:
    ```text
    Listen 8080

    <VirtualHost *:8080>
        ServerName ironic.openeuler.com
        ErrorLog "/var/log/httpd/openstack-ironic-httpd-error_log"
        CustomLog "/var/log/httpd/openstack-ironic-httpd-access_log" "%h %l %u %t \"%r\" %>s %b"
        DocumentRoot "/var/lib/ironic/httproot"
        <Directory "/var/lib/ironic/httproot">
            Options Indexes FollowSymLinks
            Require all granted
        </Directory>
        LogLevel warn
        AddDefaultCharset UTF-8
        EnableSendfile on
    </VirtualHost>
    ```

    (The `<VirtualHost>` and `<Directory>` container tags were lost when this page was converted to plain text and are reconstructed here; the directives themselves are unchanged.)

    Note that the listening port must match the port specified in the `http_url` option of the `[deploy]` section of `/etc/ironic/ironic.conf`.

    Restart the httpd service:

    ```shell
    systemctl restart httpd
    ```

9. Download or build the deploy ramdisk images.

    Deploying a bare metal node requires two sets of images in total: deploy ramdisk images and user images. The deploy ramdisk images run the ironic-python-agent (IPA) service, through which Ironic prepares the environment of the bare metal node. The user images are what finally gets installed on the bare metal node for the user.

    The ramdisk images can be built with the ironic-python-agent-builder or disk-image-builder tools; you may also choose other tools. When using the native tools, install the corresponding packages.

    For detailed usage see the official documentation; the community also provides prebuilt deploy images that you can try downloading.

    The following describes the complete process of building the deploy image used by ironic with ironic-python-agent-builder.

    Install ironic-python-agent-builder:

    ```shell
    dnf install python3-ironic-python-agent-builder
    # or
    pip3 install ironic-python-agent-builder
    dnf install qemu-img git
    ```

    Build the image.

    Basic usage:

    ```text
    usage: ironic-python-agent-builder [-h] [-r RELEASE] [-o OUTPUT] [-e ELEMENT]
                                       [-b BRANCH] [-v] [--lzma]
                                       [--extra-args EXTRA_ARGS]
                                       [--elements-path ELEMENTS_PATH]
                                       distribution

    positional arguments:
      distribution          Distribution to use

    options:
      -h, --help            show this help message and exit
      -r RELEASE, --release RELEASE
                            Distribution release to use
      -o OUTPUT, --output OUTPUT
                            Output base file name
      -e ELEMENT, --element ELEMENT
                            Additional DIB element to use
      -b BRANCH, --branch BRANCH
                            If set, override the branch that is used for
                            ironic-python-agent and requirements
      -v, --verbose         Enable verbose logging in diskimage-builder
      --lzma                Use lzma compression for smaller images
      --extra-args EXTRA_ARGS
                            Extra arguments to pass to diskimage-builder
      --elements-path ELEMENTS_PATH
                            Path(s) to custom DIB elements separated by a colon
    ```

    Example:

    ```shell
    # -o specifies the name of the generated image
    # "ubuntu" builds an Ubuntu-based image
    ironic-python-agent-builder -o my-ubuntu-ipa ubuntu
    ```

    The architecture of the built image can be selected with the `ARCH` environment variable (amd64 by default). For the arm architecture, add:

    ```shell
    export ARCH=aarch64
    ```

    Allow SSH login.

    Initialize the environment variables to set the user name and password and enable passwordless sudo, and add the corresponding DIB elements with the `-e` option. Build the image as follows:
    ```shell
    export DIB_DEV_USER_USERNAME=ipa
    export DIB_DEV_USER_PWDLESS_SUDO=yes
    export DIB_DEV_USER_PASSWORD='123'
    ironic-python-agent-builder -o my-ssh-ubuntu-ipa -e selinux-permissive -e devuser ubuntu
    ```

    Specify the code repository.

    Initialize the corresponding environment variables, then build the image:

    ```shell
    # Clone the code directly from gerrit
    DIB_REPOLOCATION_ironic_python_agent=https://opendev.org/openstack/ironic-python-agent
    DIB_REPOREF_ironic_python_agent=stable/2023.1

    # Or specify a local repository and branch
    DIB_REPOLOCATION_ironic_python_agent=/home/user/path/to/repo
    DIB_REPOREF_ironic_python_agent=my-test-branch

    ironic-python-agent-builder ubuntu
    ```

    Reference: source-repositories.

    Note.

    The PXE configuration file templates in upstream OpenStack do not support the arm64 architecture; you have to modify the upstream OpenStack code yourself. In the Wallaby release, the community's ironic still does not support arm64 UEFI PXE boot: the generated grub.cfg file (usually under /tftpboot/) has the wrong format, so PXE boot fails.

    In the incorrectly generated configuration file (shown as a screenshot in the original document), the highlighted commands are the x86 UEFI PXE boot commands, whereas on the arm architecture the commands that locate the vmlinux and ramdisk images are `linux` and `initrd` respectively. You need to modify the code logic that generates grub.cfg yourself.

    TLS errors when ironic sends command status queries to IPA:

    In the current versions, both IPA and ironic send requests to each other with TLS authentication enabled by default; disable it as described in the official documentation.

    In the ironic configuration file (`/etc/ironic/ironic.conf`), add `ipa-insecure=1` to the following configuration:

    ```ini
    [agent]
    verify_ca = False

    [pxe]
    pxe_append_params = nofb nomodeset vga=normal coreos.autologin ipa-insecure=1
    ```

    In the ramdisk image, add the IPA configuration file `/etc/ironic_python_agent/ironic_python_agent.conf` (the `/etc/ironic_python_agent` directory must be created in advance) and configure TLS as follows:

    ```ini
    [DEFAULT]
    enable_auto_tls = False
    ```

    Set the permissions:

    ```shell
    chown -R ipa.ipa /etc/ironic_python_agent/
    ```

    In the ramdisk image, modify the service startup file of the IPA service to add the configuration file option. Edit the `/usr/lib/systemd/system/ironic-python-agent.service` file:

    ```ini
    [Unit]
    Description=Ironic Python Agent
    After=network-online.target

    [Service]
    ExecStartPre=/sbin/modprobe vfat
    ExecStart=/usr/local/bin/ironic-python-agent --config-file /etc/ironic_python_agent/ironic_python_agent.conf
    Restart=always
    RestartSec=30s

    [Install]
    WantedBy=multi-user.target
    ```
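Once a deploy ramdisk and a user image are available, bare metal nodes can be enrolled and managed with the baremetal CLI. The following is only an illustrative sketch, not part of the original procedure; the node name and the IPMI address and credentials are placeholders for your own hardware:

```shell
source ~/.admin-openrc
openstack baremetal node create --driver ipmi --name example-node \
  --driver-info ipmi_address=IPMI_ADDRESS \
  --driver-info ipmi_username=IPMI_USER \
  --driver-info ipmi_password=IPMI_PASS
openstack baremetal node list   # the new node starts in the "enroll" provision state
```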
Trove

Trove is the OpenStack Database service. It is recommended if you want OpenStack to provide database instances as a service; otherwise it does not need to be installed.

Controller node

Create the database. The service stores its state in a database; create a trove database that the trove user can access, replacing TROVE_DBPASS with a suitable password:

    CREATE DATABASE trove CHARACTER SET utf8;
    GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'localhost' IDENTIFIED BY 'TROVE_DBPASS';
    GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'%' IDENTIFIED BY 'TROVE_DBPASS';

Create the service credentials and API endpoints.

Create the service credentials:

    # create the trove user
    openstack user create --domain default --password-prompt trove
    # add the admin role
    openstack role add --project service --user trove admin
    # create the database service
    openstack service create --name trove --description "Database service" database

Create the API endpoints:

    openstack endpoint create --region RegionOne database public http://controller:8779/v1.0/%\(tenant_id\)s
    openstack endpoint create --region RegionOne database internal http://controller:8779/v1.0/%\(tenant_id\)s
    openstack endpoint create --region RegionOne database admin http://controller:8779/v1.0/%\(tenant_id\)s

Install Trove:

    dnf install openstack-trove python-troveclient

Modify the configuration files.

Edit /etc/trove/trove.conf:

    [DEFAULT]
    bind_host = 192.168.0.2
    log_dir = /var/log/trove
    network_driver = trove.network.neutron.NeutronDriver
    network_label_regex = .*
    management_security_groups =
    nova_keypair = trove-mgmt
    default_datastore = mysql
    taskmanager_manager = trove.taskmanager.manager.Manager
    trove_api_workers = 5
    transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
    reboot_time_out = 300
    usage_timeout = 900
    agent_call_high_timeout = 1200
    use_syslog = False
    debug = True

    [database]
    connection = mysql+pymysql://trove:TROVE_DBPASS@controller/trove

    [keystone_authtoken]
    auth_url = http://controller:5000/v3/
    auth_type = password
    project_domain_name = Default
    project_name = service
    user_domain_name = Default
    username = trove
    password = TROVE_PASS

    [service_credentials]
    auth_url = http://controller:5000/v3/
    region_name = RegionOne
    project_name = service
    project_domain_name = Default
    user_domain_name = Default
    username = trove
    password = TROVE_PASS

    [mariadb]
    tcp_ports = 3306,4444,4567,4568

    [mysql]
    tcp_ports = 3306

    [postgresql]
    tcp_ports = 5432

Explanation: in the [DEFAULT] section, bind_host is the IP of the Trove controller node, and transport_url is the RabbitMQ connection string, with RABBIT_PASS replaced by the RabbitMQ password. The connection option in [database] points at the trove database created in MySQL above. Replace TROVE_PASS with the actual password of the trove user.

Edit /etc/trove/trove-guestagent.conf:

    [DEFAULT]
    log_file = trove-guestagent.log
    log_dir = /var/log/trove/
    ignore_users = os_admin
    control_exchange = trove
    transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
    rpc_backend = rabbit
    command_process_timeout = 60
    use_syslog = False
    debug = True

    [service_credentials]
    auth_url = http://controller:5000/v3/
    region_name = RegionOne
    project_name = service
    project_domain_name = Default
    user_domain_name = Default
    username = trove
    password = TROVE_PASS

    [mysql]
    docker_image = your-registry/your-repo/mysql
    backup_docker_image = your-registry/your-repo/db-backup-mysql:1.1.0

Explanation: the guest agent is a standalone Trove component that must be built into the guest VM image that Trove boots through Nova. After a database instance is created, the guest agent process reports heartbeats to Trove through the message queue (RabbitMQ), so the RabbitMQ connection and the trove user credentials must be configured here; replace RABBIT_PASS with the RabbitMQ password and TROVE_PASS with the actual password of the trove user. Since the Victoria release, Trove uses a single unified guest image for all database types, and the database service itself runs in a Docker container inside the guest VM.

Sync the database:

    su -s /bin/sh -c "trove-manage db_sync" trove

Finish the installation:

    # enable the services at boot
    systemctl enable openstack-trove-api.service openstack-trove-taskmanager.service \
        openstack-trove-conductor.service
    # start the services
    systemctl start openstack-trove-api.service openstack-trove-taskmanager.service \
        openstack-trove-conductor.service
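Once the services are up, a quick way to confirm that the Trove API answers is to list the registered datastores and existing instances. This is only a sketch, not part of the original procedure: it assumes the admin credentials file created in the Keystone section and that python-troveclient is installed as above; the datastore list may be empty until datastore versions have been registered.

```bash
source ~/.admin-openrc

# datastores known to Trove (may be empty on a fresh install)
openstack datastore list

# existing database instances (should return an empty table rather than an error)
openstack database instance list
```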
Swift

Swift provides an elastic, scalable and highly available distributed object storage service, suitable for storing large amounts of unstructured data.

Controller node

Create the service credentials and API endpoints.

Create the service credentials:

    # create the swift user
    openstack user create --domain default --password-prompt swift
    # add the admin role
    openstack role add --project service --user swift admin
    # create the object storage service
    openstack service create --name swift --description "OpenStack Object Storage" object-store

Create the API endpoints:

    openstack endpoint create --region RegionOne object-store public http://controller:8080/v1/AUTH_%\(project_id\)s
    openstack endpoint create --region RegionOne object-store internal http://controller:8080/v1/AUTH_%\(project_id\)s
    openstack endpoint create --region RegionOne object-store admin http://controller:8080/v1

Install Swift:

    dnf install openstack-swift-proxy python3-swiftclient python3-keystoneclient \
        python3-keystonemiddleware memcached

Configure the proxy server. The Swift RPM already ships a basically usable proxy-server.conf; only the IP and SWIFT_PASS need to be changed by hand:

    vim /etc/swift/proxy-server.conf

    [filter:authtoken]
    paste.filter_factory = keystonemiddleware.auth_token:filter_factory
    www_authenticate_uri = http://controller:5000
    auth_url = http://controller:5000
    memcached_servers = controller:11211
    auth_type = password
    project_domain_id = default
    user_domain_id = default
    project_name = service
    username = swift
    password = SWIFT_PASS
    delay_auth_decision = True
    service_token_roles_required = True
Storage node

Install the supporting packages:

    dnf install openstack-swift-account openstack-swift-container openstack-swift-object
    dnf install xfsprogs rsync

Format the /dev/sdb and /dev/sdc devices as XFS:

    mkfs.xfs /dev/sdb
    mkfs.xfs /dev/sdc

Create the mount point directory structure:

    mkdir -p /srv/node/sdb
    mkdir -p /srv/node/sdc

Find the UUIDs of the new partitions:

    blkid

Edit /etc/fstab and add the following entries:

    UUID="" /srv/node/sdb xfs noatime 0 2
    UUID="" /srv/node/sdc xfs noatime 0 2

Mount the devices:

    mount /srv/node/sdb
    mount /srv/node/sdc

Note: if you do not need the replication capability, only one device has to be created in the steps above, and the rsync configuration below can be skipped.

(Optional) Create or edit /etc/rsyncd.conf so that it contains the following:

    [DEFAULT]
    uid = swift
    gid = swift
    log file = /var/log/rsyncd.log
    pid file = /var/run/rsyncd.pid
    address = MANAGEMENT_INTERFACE_IP_ADDRESS

    [account]
    max connections = 2
    path = /srv/node/
    read only = False
    lock file = /var/lock/account.lock

    [container]
    max connections = 2
    path = /srv/node/
    read only = False
    lock file = /var/lock/container.lock

    [object]
    max connections = 2
    path = /srv/node/
    read only = False
    lock file = /var/lock/object.lock

Replace MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network on the storage node.

Start the rsyncd service and enable it at boot:

    systemctl enable rsyncd.service
    systemctl start rsyncd.service

Configure the storage node. Edit the account-server.conf, container-server.conf and object-server.conf files under /etc/swift, replacing bind_ip with the IP address of the management network on the storage node:

    [DEFAULT]
    bind_ip = 192.168.0.4

Ensure correct ownership of the mount point directory structure:

    chown -R swift:swift /srv/node

Create the recon directory and ensure it has the correct ownership:

    mkdir -p /var/cache/swift
    chown -R root:swift /var/cache/swift
    chmod -R 775 /var/cache/swift

Create and distribute the rings on the controller node

Create the account ring. Change into the /etc/swift directory:

    cd /etc/swift

Create the base account.builder file:

    swift-ring-builder account.builder create 10 1 1

Add each storage node to the ring:

    swift-ring-builder account.builder add --region 1 --zone 1 \
        --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS \
        --port 6202 --device DEVICE_NAME \
        --weight 100

Replace STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS with the management network IP address of the storage node, and DEVICE_NAME with the name of a storage device on that node. Note: repeat this command for every storage device on every storage node.

Verify the contents of the account ring and rebalance it:

    swift-ring-builder account.builder
    swift-ring-builder account.builder rebalance

Create the container ring. In the /etc/swift directory, create the base container.builder file:

    swift-ring-builder container.builder create 10 1 1

Add each storage node to the ring:

    swift-ring-builder container.builder add --region 1 --zone 1 \
        --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6201 --device DEVICE_NAME \
        --weight 100

Replace the placeholders as for the account ring. Note: repeat this command for every storage device on every storage node.

Verify the contents of the container ring and rebalance it:

    swift-ring-builder container.builder
    swift-ring-builder container.builder rebalance

Create the object ring. In the /etc/swift directory, create the base object.builder file:

    swift-ring-builder object.builder create 10 1 1

Add each storage node to the ring:

    swift-ring-builder object.builder add --region 1 --zone 1 \
        --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS \
        --port 6200 --device DEVICE_NAME \
        --weight 100

Replace the placeholders as above. Note: repeat this command for every storage device on every storage node.

Verify the contents of the object ring and rebalance it:

    swift-ring-builder object.builder
    swift-ring-builder object.builder rebalance
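As a concrete instance of the commands above, with the single storage node in this guide's topology (management IP 192.168.0.4) and the two devices sdb and sdc formatted earlier, the account ring would be populated as follows; the container and object rings follow the same pattern with ports 6201 and 6200.

```bash
cd /etc/swift
# one "add" per device on the storage node
swift-ring-builder account.builder add --region 1 --zone 1 \
    --ip 192.168.0.4 --port 6202 --device sdb --weight 100
swift-ring-builder account.builder add --region 1 --zone 1 \
    --ip 192.168.0.4 --port 6202 --device sdc --weight 100
swift-ring-builder account.builder rebalance
```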
Distribute the ring configuration files. Copy the account.ring.gz, container.ring.gz and object.ring.gz files to the /etc/swift directory on every storage node and on any other node that runs the proxy service.

Edit the configuration file /etc/swift/swift.conf:

    [swift-hash]
    swift_hash_path_suffix = test-hash
    swift_hash_path_prefix = test-hash

    [storage-policy:0]
    name = Policy-0
    default = yes

Replace test-hash with unique values.

Copy the swift.conf file to the /etc/swift directory on every storage node and on any other node that runs the proxy service.

On all nodes, ensure correct ownership of the configuration directory:

    chown -R root:swift /etc/swift

Finish the installation.

On the controller node and on any other node that runs the proxy service, start the object storage proxy service and its dependencies and enable them at boot:

    systemctl enable openstack-swift-proxy.service memcached.service
    systemctl start openstack-swift-proxy.service memcached.service

On the storage nodes, start the object storage services and enable them at boot:

    systemctl enable openstack-swift-account.service \
        openstack-swift-account-auditor.service \
        openstack-swift-account-reaper.service \
        openstack-swift-account-replicator.service \
        openstack-swift-container.service \
        openstack-swift-container-auditor.service \
        openstack-swift-container-replicator.service \
        openstack-swift-container-updater.service \
        openstack-swift-object.service \
        openstack-swift-object-auditor.service \
        openstack-swift-object-replicator.service \
        openstack-swift-object-updater.service
    systemctl start openstack-swift-account.service \
        openstack-swift-account-auditor.service \
        openstack-swift-account-reaper.service \
        openstack-swift-account-replicator.service \
        openstack-swift-container.service \
        openstack-swift-container-auditor.service \
        openstack-swift-container-replicator.service \
        openstack-swift-container-updater.service \
        openstack-swift-object.service \
        openstack-swift-object-auditor.service \
        openstack-swift-object-replicator.service \
        openstack-swift-object-updater.service
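A simple end-to-end check, sketched below and not part of the original procedure, is to create a container and upload an object through the proxy. It assumes the admin credentials file from the Keystone section; the container and file names are arbitrary examples.

```bash
source ~/.admin-openrc

echo "swift smoke test" > testfile
openstack container create demo-container
openstack object create demo-container testfile
openstack object list demo-container
```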
Cyborg

Cyborg provides support for accelerator devices in OpenStack, including GPU, FPGA, ASIC, NP, SoCs, NVMe/NOF SSDs, ODP, DPDK/SPDK and so on.

Controller node

Initialize the corresponding database:

    mysql -u root -p
    MariaDB [(none)]> CREATE DATABASE cyborg;
    MariaDB [(none)]> GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'localhost' IDENTIFIED BY 'CYBORG_DBPASS';
    MariaDB [(none)]> GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'%' IDENTIFIED BY 'CYBORG_DBPASS';
    MariaDB [(none)]> exit;

Create the user and service, and remember the password entered when creating the cyborg user; it is used for CYBORG_PASS:

    source ~/.admin-openrc
    openstack user create --domain default --password-prompt cyborg
    openstack role add --project service --user cyborg admin
    openstack service create --name cyborg --description "Acceleration Service" accelerator

The Cyborg API service is deployed with uwsgi; create the endpoints accordingly:

    openstack endpoint create --region RegionOne accelerator public http://controller/accelerator/v2
    openstack endpoint create --region RegionOne accelerator internal http://controller/accelerator/v2
    openstack endpoint create --region RegionOne accelerator admin http://controller/accelerator/v2

Install Cyborg:

    dnf install openstack-cyborg

Configure Cyborg. Modify /etc/cyborg/cyborg.conf:

    [DEFAULT]
    transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
    use_syslog = False
    state_path = /var/lib/cyborg
    debug = True

    [api]
    host_ip = 0.0.0.0

    [database]
    connection = mysql+pymysql://cyborg:CYBORG_DBPASS@controller/cyborg

    [service_catalog]
    cafile = /opt/stack/data/ca-bundle.pem
    project_domain_id = default
    user_domain_id = default
    project_name = service
    username = cyborg
    password = CYBORG_PASS
    auth_url = http://controller:5000/v3/
    auth_type = password

    [placement]
    project_domain_name = Default
    project_name = service
    user_domain_name = Default
    username = placement
    password = PLACEMENT_PASS
    auth_url = http://controller:5000/v3/
    auth_type = password
    auth_section = keystone_authtoken

    [nova]
    project_domain_name = Default
    project_name = service
    user_domain_name = Default
    username = nova
    password = NOVA_PASS
    auth_url = http://controller:5000/v3/
    auth_type = password
    auth_section = keystone_authtoken

    [keystone_authtoken]
    memcached_servers = localhost:11211
    signing_dir = /var/cache/cyborg/api
    cafile = /opt/stack/data/ca-bundle.pem
    project_domain_name = Default
    project_name = service
    user_domain_name = Default
    username = cyborg
    password = CYBORG_PASS
    auth_url = http://controller:5000/v3/
    auth_type = password

Sync the database tables:

    cyborg-dbsync --config-file /etc/cyborg/cyborg.conf upgrade

Start the Cyborg services:

    systemctl enable openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent
    systemctl start openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent

Aodh

Aodh creates alarms based on the monitoring data collected by Ceilometer or Gnocchi and defines the trigger rules.

Controller node

Create the database:

    CREATE DATABASE aodh;
    GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'localhost' IDENTIFIED BY 'AODH_DBPASS';
    GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'%' IDENTIFIED BY 'AODH_DBPASS';

Create the service credentials and API endpoints.

Create the service credentials:

    openstack user create --domain default --password-prompt aodh
    openstack role add --project service --user aodh admin
    openstack service create --name aodh --description "Telemetry" alarming

Create the API endpoints:

    openstack endpoint create --region RegionOne alarming public http://controller:8042
    openstack endpoint create --region RegionOne alarming internal http://controller:8042
    openstack endpoint create --region RegionOne alarming admin http://controller:8042

Install Aodh:

    dnf install openstack-aodh-api openstack-aodh-evaluator \
        openstack-aodh-notifier openstack-aodh-listener \
        openstack-aodh-expirer python3-aodhclient

Modify the configuration file:

    vim /etc/aodh/aodh.conf

    [database]
    connection = mysql+pymysql://aodh:AODH_DBPASS@controller/aodh

    [DEFAULT]
    transport_url = rabbit://openstack:RABBIT_PASS@controller
    auth_strategy = keystone

    [keystone_authtoken]
    www_authenticate_uri = http://controller:5000
    auth_url = http://controller:5000
    memcached_servers = controller:11211
    auth_type = password
    project_domain_id = default
    user_domain_id = default
    project_name = service
    username = aodh
    password = AODH_PASS

    [service_credentials]
    auth_type = password
    auth_url = http://controller:5000/v3
    project_domain_id = default
    user_domain_id = default
    project_name = service
    username = aodh
    password = AODH_PASS
    interface = internalURL
    region_name = RegionOne

Sync the database:

    aodh-dbsync

Finish the installation:

    # enable the services at boot
    systemctl enable openstack-aodh-api.service openstack-aodh-evaluator.service \
        openstack-aodh-notifier.service openstack-aodh-listener.service
    # start the services
    systemctl start openstack-aodh-api.service openstack-aodh-evaluator.service \
        openstack-aodh-notifier.service openstack-aodh-listener.service
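Once Ceilometer and Gnocchi (next sections) are also in place, a functional check is to create an alarm against a Gnocchi metric. The sketch below is only an illustration and not part of the original procedure: the instance UUID is a placeholder, and the memory.usage metric is assumed to be collected by your Ceilometer configuration.

```bash
source ~/.admin-openrc

# alarm that fires when the instance's memory.usage metric exceeds 800 MB
openstack alarm create \
    --name demo_memory_high \
    --type gnocchi_resources_threshold \
    --resource-type instance \
    --resource-id <instance-uuid> \
    --metric memory.usage \
    --threshold 800 \
    --comparison-operator gt \
    --aggregation-method mean \
    --granularity 300 \
    --evaluation-periods 3

openstack alarm list
```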
Gnocchi

Gnocchi is an open source time-series database that can be used as the back end of Ceilometer.

Controller node

Create the database:

    CREATE DATABASE gnocchi;
    GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'localhost' IDENTIFIED BY 'GNOCCHI_DBPASS';
    GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'%' IDENTIFIED BY 'GNOCCHI_DBPASS';

Create the service credentials and API endpoints.

Create the service credentials:

    openstack user create --domain default --password-prompt gnocchi
    openstack role add --project service --user gnocchi admin
    openstack service create --name gnocchi --description "Metric Service" metric

Create the API endpoints:

    openstack endpoint create --region RegionOne metric public http://controller:8041
    openstack endpoint create --region RegionOne metric internal http://controller:8041
    openstack endpoint create --region RegionOne metric admin http://controller:8041

Install Gnocchi:

    dnf install openstack-gnocchi-api openstack-gnocchi-metricd python3-gnocchiclient

Modify the configuration file:

    vim /etc/gnocchi/gnocchi.conf

    [api]
    auth_mode = keystone
    port = 8041
    uwsgi_mode = http-socket

    [keystone_authtoken]
    auth_type = password
    auth_url = http://controller:5000/v3
    project_domain_name = Default
    user_domain_name = Default
    project_name = service
    username = gnocchi
    password = GNOCCHI_PASS
    interface = internalURL
    region_name = RegionOne

    [indexer]
    url = mysql+pymysql://gnocchi:GNOCCHI_DBPASS@controller/gnocchi

    [storage]
    # coordination_url is not required but specifying one will improve
    # performance with better workload division across workers.
    # coordination_url = redis://controller:6379
    file_basepath = /var/lib/gnocchi
    driver = file

Sync the database:

    gnocchi-upgrade

Finish the installation:

    # enable the services at boot
    systemctl enable openstack-gnocchi-api.service openstack-gnocchi-metricd.service
    # start the services
    systemctl start openstack-gnocchi-api.service openstack-gnocchi-metricd.service

Ceilometer

Ceilometer is the service responsible for data collection (metering) in OpenStack.

Controller node

Create the service credentials:

    openstack user create --domain default --password-prompt ceilometer
    openstack role add --project service --user ceilometer admin
    openstack service create --name ceilometer --description "Telemetry" metering

Install the Ceilometer packages:

    dnf install openstack-ceilometer-notification openstack-ceilometer-central

Edit the configuration file /etc/ceilometer/pipeline.yaml:

    publishers:
        # set address of Gnocchi
        # + filter out Gnocchi-related activity meters (Swift driver)
        # + set default archive policy
        - gnocchi://?filter_project=service&archive_policy=low

Edit the configuration file /etc/ceilometer/ceilometer.conf:

    [DEFAULT]
    transport_url = rabbit://openstack:RABBIT_PASS@controller

    [service_credentials]
    auth_type = password
    auth_url = http://controller:5000/v3
    project_domain_id = default
    user_domain_id = default
    project_name = service
    username = ceilometer
    password = CEILOMETER_PASS
    interface = internalURL
    region_name = RegionOne

Sync the database:

    ceilometer-upgrade

Finish the Ceilometer installation on the controller node:

    # enable the services at boot
    systemctl enable openstack-ceilometer-notification.service openstack-ceilometer-central.service
    # start the services
    systemctl start openstack-ceilometer-notification.service openstack-ceilometer-central.service
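Once the central agent has been running for a few minutes, you can check that resources and measures are landing in Gnocchi. This is a sketch, assuming python3-gnocchiclient is installed as above and that the OSC metric plugin commands are available in your client version:

```bash
source ~/.admin-openrc

# resources registered by Ceilometer should start to appear
openstack metric resource list

# backlog of measures waiting to be processed by gnocchi-metricd
openstack metric status
```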
Compute node

Install the Ceilometer packages:

    dnf install openstack-ceilometer-compute
    dnf install openstack-ceilometer-ipmi    # optional

Edit the configuration file /etc/ceilometer/ceilometer.conf:

    [DEFAULT]
    transport_url = rabbit://openstack:RABBIT_PASS@controller

    [service_credentials]
    auth_url = http://controller:5000
    project_domain_id = default
    user_domain_id = default
    auth_type = password
    username = ceilometer
    project_name = service
    password = CEILOMETER_PASS
    interface = internalURL
    region_name = RegionOne

Edit the configuration file /etc/nova/nova.conf:

    [DEFAULT]
    instance_usage_audit = True
    instance_usage_audit_period = hour

    [notifications]
    notify_on_state_change = vm_and_task_state

    [oslo_messaging_notifications]
    driver = messagingv2

Finish the installation:

    systemctl enable openstack-ceilometer-compute.service
    systemctl start openstack-ceilometer-compute.service
    systemctl enable openstack-ceilometer-ipmi.service    # optional
    systemctl start openstack-ceilometer-ipmi.service     # optional
    # restart the nova-compute service
    systemctl restart openstack-nova-compute.service

Heat

Heat is the OpenStack orchestration service (also known as the Orchestration Service); it orchestrates composite cloud applications from descriptive templates. The Heat services are normally installed on the controller node.

Controller node

Create the heat database and grant it the proper access rights, replacing HEAT_DBPASS with a suitable password:

    mysql -u root -p
    MariaDB [(none)]> CREATE DATABASE heat;
    MariaDB [(none)]> GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'localhost' IDENTIFIED BY 'HEAT_DBPASS';
    MariaDB [(none)]> GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'%' IDENTIFIED BY 'HEAT_DBPASS';
    MariaDB [(none)]> exit;

Create the service credentials: create the heat user and add the admin role to it:

    source ~/.admin-openrc
    openstack user create --domain default --password-prompt heat
    openstack role add --project service --user heat admin

Create the heat and heat-cfn services and their API endpoints:

    openstack service create --name heat --description "Orchestration" orchestration
    openstack service create --name heat-cfn --description "Orchestration" cloudformation
    openstack endpoint create --region RegionOne orchestration public http://controller:8004/v1/%\(tenant_id\)s
    openstack endpoint create --region RegionOne orchestration internal http://controller:8004/v1/%\(tenant_id\)s
    openstack endpoint create --region RegionOne orchestration admin http://controller:8004/v1/%\(tenant_id\)s
    openstack endpoint create --region RegionOne cloudformation public http://controller:8000/v1
    openstack endpoint create --region RegionOne cloudformation internal http://controller:8000/v1
    openstack endpoint create --region RegionOne cloudformation admin http://controller:8000/v1

Create the additional resources needed for stack management.

Create the heat domain:

    openstack domain create --description "Stack projects and users" heat

Create the heat_domain_admin user in the heat domain, and note down the password you enter; it is used for HEAT_DOMAIN_PASS below:

    openstack user create --domain heat --password-prompt heat_domain_admin

Add the admin role to the heat_domain_admin user:

    openstack role add --domain heat --user-domain heat --user heat_domain_admin admin

Create the heat_stack_owner role:

    openstack role create heat_stack_owner

Create the heat_stack_user role:

    openstack role create heat_stack_user

Install the packages:

    dnf install openstack-heat-api openstack-heat-api-cfn openstack-heat-engine
Modify the configuration file /etc/heat/heat.conf:

    [DEFAULT]
    transport_url = rabbit://openstack:RABBIT_PASS@controller
    heat_metadata_server_url = http://controller:8000
    heat_waitcondition_server_url = http://controller:8000/v1/waitcondition
    stack_domain_admin = heat_domain_admin
    stack_domain_admin_password = HEAT_DOMAIN_PASS
    stack_user_domain_name = heat

    [database]
    connection = mysql+pymysql://heat:HEAT_DBPASS@controller/heat

    [keystone_authtoken]
    www_authenticate_uri = http://controller:5000
    auth_url = http://controller:5000
    memcached_servers = controller:11211
    auth_type = password
    project_domain_name = default
    user_domain_name = default
    project_name = service
    username = heat
    password = HEAT_PASS

    [trustee]
    auth_type = password
    auth_url = http://controller:5000
    username = heat
    password = HEAT_PASS
    user_domain_name = default

    [clients_keystone]
    auth_uri = http://controller:5000

Initialize the heat database tables:

    su -s /bin/sh -c "heat-manage db_sync" heat

Start the services:

    systemctl enable openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service
    systemctl start openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service
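To confirm that the engine can create stacks, a minimal HOT template can be launched against the environment. This is only a sketch, not part of the original procedure: it assumes the cirros image uploaded in the Glance section, and the flavor and network names are placeholders that must exist in your environment.

```bash
cat > demo-stack.yaml << 'EOF'
heat_template_version: 2018-08-31
description: Minimal stack that boots a single server
parameters:
  image:
    type: string
    default: cirros
  flavor:
    type: string
    default: m1.tiny
  network:
    type: string
resources:
  demo_server:
    type: OS::Nova::Server
    properties:
      image: { get_param: image }
      flavor: { get_param: flavor }
      networks:
        - network: { get_param: network }
outputs:
  server_ip:
    value: { get_attr: [demo_server, first_address] }
EOF

openstack stack create -t demo-stack.yaml --parameter network=<network-name> demo-stack
openstack stack list
```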
Tempest

Tempest is the OpenStack integration test suite. It is recommended if you need comprehensive automated functional testing of the installed OpenStack environment; otherwise it does not need to be installed.

Controller node:

Install Tempest:

    dnf install openstack-tempest

Initialize a test directory:

    tempest init mytest

Modify the configuration file:

    cd mytest
    vi etc/tempest.conf

tempest.conf must be filled in with the details of the current OpenStack environment; refer to the official sample for the specifics.

Run the tests:

    tempest run

Install Tempest plugins (optional). The individual OpenStack services also provide their own Tempest test packages, which can be installed to extend the Tempest test coverage. For Antelope, extension tests are provided for Cinder, Glance, Keystone, Ironic and Trove; install and use them with:

    dnf install python3-cinder-tempest-plugin python3-glance-tempest-plugin python3-ironic-tempest-plugin python3-keystone-tempest-plugin python3-trove-tempest-plugin

Deployment with the OpenStack SIG development tool oos

oos (openEuler OpenStack SIG) is the command-line tool provided by the OpenStack SIG. Its oos env sub-commands wrap Ansible playbooks for one-click deployment of OpenStack (all in one, or a three-node cluster), which can be used to quickly deploy an OpenStack environment based on openEuler RPMs.

oos can provision the target machines either through a cloud provider (currently only the Huawei Cloud provider is supported) or by managing existing hosts. The following uses the Huawei Cloud provider to deploy an all-in-one OpenStack environment as an example of how to use the tool.

Install the oos tool. oos is still evolving, and its compatibility and usability cannot be guaranteed at all times, so a verified release is recommended; version 1.3.1 is used here:

    pip install openstack-sig-tool==1.3.1

Configure the Huawei Cloud provider information. Open the /usr/local/etc/oos/oos.conf file and change the configuration to match the Huawei Cloud resources you own. AK/SK are your Huawei Cloud access keys; the other settings can keep their defaults (the Singapore region is used by default). The corresponding resources must be created on the cloud in advance, including:

    a security group, named oos by default
    an openEuler image whose name follows the pattern openEuler-%(release)s-%(arch)s, for example openEuler-24.03-arm64
    a VPC named oos_vpc
    two subnets under that VPC, named oos_subnet1 and oos_subnet2

    [huaweicloud]
    ak =
    sk =
    region = ap-southeast-3
    root_volume_size = 100
    data_volume_size = 100
    security_group_name = oos
    image_format = openEuler-%%(release)s-%%(arch)s
    vpc_name = oos_vpc
    subnet1_name = oos_subnet1
    subnet2_name = oos_subnet2

Configure the OpenStack environment information. Open the /usr/local/etc/oos/oos.conf file and adjust the configuration according to the current machine environment and your needs. The content is as follows:

    [environment]
    mysql_root_password = root
    mysql_project_password = root
    rabbitmq_password = root
    project_identity_password = root
    enabled_service = keystone,neutron,cinder,placement,nova,glance,horizon,aodh,ceilometer,cyborg,gnocchi,kolla,heat,swift,trove,tempest
    neutron_provider_interface_name = br-ex
    default_ext_subnet_range = 10.100.100.0/24
    default_ext_subnet_gateway = 10.100.100.1
    neutron_dataplane_interface_name = eth1
    cinder_block_device = vdb
    swift_storage_devices = vdc
    swift_hash_path_suffix = ash
    swift_hash_path_prefix = has
    glance_api_workers = 2
    cinder_api_workers = 2
    nova_api_workers = 2
    nova_metadata_api_workers = 2
    nova_conductor_workers = 2
    nova_scheduler_workers = 2
    neutron_api_workers = 2
    horizon_allowed_host = *
    kolla_openeuler_plugin = false

Key settings:

    enabled_service: list of services to install; trim it according to your needs
    neutron_provider_interface_name: name of the Neutron L3 bridge
    default_ext_subnet_range: Neutron private network IP range
    default_ext_subnet_gateway: Neutron private network gateway
    neutron_dataplane_interface_name: NIC used by Neutron; a new, dedicated NIC is recommended to avoid conflicts with the existing NIC and to prevent the all-in-one host from being cut off
    cinder_block_device: name of the block device used by Cinder
    swift_storage_devices: names of the block devices used by Swift
    kolla_openeuler_plugin: whether to enable the kolla plugin; when set to True, Kolla can deploy openEuler containers (supported only on openEuler LTS)

Create an openEuler 24.03 LTS x86_64 virtual machine on Huawei Cloud for deploying the all-in-one OpenStack:

    # sshpass is used during `oos env create` to set up passwordless access to the target VM
    dnf install sshpass
    oos env create -r 24.03-lts -f small -a x86 -n test-oos all_in_one
The full list of parameters can be viewed with the oos env create --help command.

Deploy the OpenStack all-in-one environment:

    oos env setup test-oos -r antelope

The full list of parameters can be viewed with the oos env setup --help command.

Initialize the Tempest environment. If you want to run Tempest tests against this environment, run oos env init, which automatically creates the OpenStack resources Tempest needs:

    oos env init test-oos

Run the Tempest tests. You can let oos run them automatically:

    oos env test test-oos

or log in to the target node manually, enter the mytest directory under the root directory, and run tempest run by hand.

If the OpenStack environment is deployed on managed hosts instead, the overall flow is the same as with the Huawei Cloud provider above: steps 1, 3, 5 and 6 are unchanged, step 2 (configuring the Huawei Cloud provider) is skipped, and step 4 becomes a host-management operation.

The managed machine must satisfy the following:

    at least one NIC dedicated to oos, with a name matching the configuration (neutron_dataplane_interface_name)
    at least one disk dedicated to oos, with a name matching the configuration (cinder_block_device)
    if the Swift service is to be deployed, an additional disk with a name matching the configuration (swift_storage_devices)

    # sshpass is used during `oos env create` to set up passwordless access to the target host
    dnf install sshpass
    oos env manage -r 24.03-lts -i TARGET_MACHINE_IP -p TARGET_MACHINE_PASSWD -n test-oos

Replace TARGET_MACHINE_IP with the IP of the target machine and TARGET_MACHINE_PASSWD with its password. The full list of parameters can be viewed with the oos env manage --help command.

OpenStack Antelope Deployment Guide

Contents: RPM-based deployment (environment preparation; time synchronization; installing the database, the message queue and the cache service; deploying the services: Keystone, Glance, Placement, Nova, Neutron, Cinder, Horizon, Ironic, Trove, Swift, Cyborg, Aodh, Gnocchi, Ceilometer, Heat, Tempest), and deployment with the OpenStack SIG development tool oos.

This document is the OpenStack deployment guide for openEuler 24.03 LTS written by the openEuler OpenStack SIG; the content is provided by SIG contributors. If you have any questions or find any problems while reading it, please contact the SIG maintainers or file an issue directly.
Conventions

This section describes some general conventions used throughout the document.

| Name | Definition |
|:--|:--|
| RABBIT_PASS | Password of rabbitmq, set by the user; used in the configuration of each OpenStack service |
| CINDER_PASS | Password of the cinder keystone user; used in the cinder configuration |
| CINDER_DBPASS | Password of the cinder service database; used in the cinder configuration |
| KEYSTONE_DBPASS | Password of the keystone service database; used in the keystone configuration |
| GLANCE_PASS | Password of the glance keystone user; used in the glance configuration |
| GLANCE_DBPASS | Password of the glance service database; used in the glance configuration |
| HEAT_PASS | Password of the heat user registered in keystone; used in the heat configuration |
| HEAT_DBPASS | Password of the heat service database; used in the heat configuration |
| CYBORG_PASS | Password of the cyborg user registered in keystone; used in the cyborg configuration |
| CYBORG_DBPASS | Password of the cyborg service database; used in the cyborg configuration |
| NEUTRON_PASS | Password of the neutron user registered in keystone; used in the neutron configuration |
| NEUTRON_DBPASS | Password of the neutron service database; used in the neutron configuration |
| PROVIDER_INTERFACE_NAME | Name of the physical network interface; used in the neutron configuration |
| OVERLAY_INTERFACE_IP_ADDRESS | Management IP address of the controller node; used in the neutron configuration |
| METADATA_SECRET | Secret of the metadata proxy; used in the nova and neutron configurations |
| PLACEMENT_DBPASS | Password of the placement service database; used in the placement configuration |
| PLACEMENT_PASS | Password of the placement user registered in keystone; used in the placement configuration |
| NOVA_DBPASS | Password of the nova service database; used in the nova configuration |
| NOVA_PASS | Password of the nova user registered in keystone; used in the nova, cyborg, neutron and other configurations |
| IRONIC_DBPASS | Password of the ironic service database; used in the ironic configuration |
| IRONIC_PASS | Password of the ironic user registered in keystone; used in the ironic configuration |
| IRONIC_INSPECTOR_DBPASS | Password of the ironic-inspector service database; used in the ironic-inspector configuration |
| IRONIC_INSPECTOR_PASS | Password of the ironic-inspector user registered in keystone; used in the ironic-inspector configuration |

The OpenStack SIG provides several ways of deploying OpenStack on openEuler to cover different user scenarios; choose the one that fits your needs.
RPM-based deployment

Environment preparation

This document deploys OpenStack on the classic three-node topology: a controller node, a compute node and a storage node. The storage node usually runs only the storage services; when resources are limited it does not have to be a separate node, and its services can be deployed on the compute node instead.

First prepare three openEuler 24.03 LTS environments; download the image matching your environment and install it: the ISO image or the qcow2 image.

The installation below follows this topology:

    controller: 192.168.0.2
    compute:    192.168.0.3
    storage:    192.168.0.4

If the IPs in your environment are different, adjust the corresponding configuration files to match them.

The three-node service topology is illustrated in a figure in the published document; it covers only the core services Keystone, Glance, Nova, Cinder and Neutron, while the other services are described in their own deployment sections.

Before the actual deployment, the following configuration and checks are required on every node.

Configure the official openEuler 24.03 LTS yum repositories; the EPOL repository must be enabled to provide the OpenStack packages:

    yum update
    yum install openstack-release-antelope
    yum clean all && yum makecache

Note: if EPOL is not enabled in the YUM configuration of your environment, configure it as well and make sure it is present, as shown below:

    vi /etc/yum.repos.d/openEuler.repo

    [EPOL]
    name=EPOL
    baseurl=http://repo.openeuler.org/openEuler-24.03-LTS/EPOL/main/$basearch/
    enabled=1
    gpgcheck=1
    gpgkey=http://repo.openeuler.org/openEuler-24.03-LTS/OS/$basearch/RPM-GPG-KEY-openEuler

Set the host names and the name mapping. Change the host name on each node; taking controller as an example:

    hostnamectl set-hostname controller

or edit /etc/hostname and set its content to controller.

Then edit the /etc/hosts file on every node and add the following entries:

    192.168.0.2 controller
    192.168.0.3 compute
    192.168.0.4 storage
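As an optional sanity check before continuing, you can verify on each node that the OpenStack repositories are visible and that the node names resolve. The commands below are a generic sketch and not part of the original procedure:

```bash
# the EPOL repository should appear in the repository list
dnf repolist | grep -i epol

# the three nodes should resolve to the addresses added to /etc/hosts
getent hosts controller compute storage
```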
Time synchronization

A cluster requires the time on every node to be consistent at all times, which is normally guaranteed by time synchronization software. This document uses chrony. The steps are as follows.

Controller node:

Install the service:

    dnf install chrony

Edit the /etc/chrony.conf configuration file and add one line:

    # defines which IPs are allowed to synchronize time from this node
    allow 192.168.0.0/24

Restart the service:

    systemctl restart chronyd

Other nodes:

Install the service:

    dnf install chrony

Edit the /etc/chrony.conf configuration file and add one line:

    # NTP_SERVER is the controller IP, i.e. the machine to get time from; use 192.168.0.2 here,
    # or the controller name configured in /etc/hosts
    server NTP_SERVER iburst

At the same time, comment out the line "pool pool.ntp.org iburst" so that the node does not synchronize time from the public Internet.

Restart the service:

    systemctl restart chronyd

After the configuration is done, check the result: run chronyc sources on the non-controller nodes; output similar to the following indicates that time is successfully synchronized from the controller.

    MS Name/IP address         Stratum Poll Reach LastRx Last sample
    ===============================================================================
    ^* 192.168.0.2                   4   6     7     0  -1406ns[  +55us] +/-   16ms
Installing the database

The database is installed on the controller node; MariaDB is recommended.

Install the packages:

    dnf install mysql-config mariadb mariadb-server python3-PyMySQL

Add the configuration file /etc/my.cnf.d/openstack.cnf with the following content:

    [mysqld]
    bind-address = 192.168.0.2
    default-storage-engine = innodb
    innodb_file_per_table = on
    max_connections = 4096
    collation-server = utf8_general_ci
    character-set-server = utf8

Start the server:

    systemctl start mariadb

Initialize the database, following the prompts:

    mysql_secure_installation

An example session:

    NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MariaDB
          SERVERS IN PRODUCTION USE!  PLEASE READ EACH STEP CAREFULLY!

    In order to log into MariaDB to secure it, we'll need the current
    password for the root user. If you've just installed MariaDB, and
    haven't set the root password yet, you should just press enter here.

    # enter the current root password; since the DB is being initialized, just press Enter
    Enter current password for root (enter for none):
    OK, successfully used password, moving on...

    # answer N as prompted
    Switch to unix_socket authentication [Y/n] N
    Enabled successfully!
    Reloading privilege tables..
     ... Success!

    # answer Y and set the root password
    Change the root password? [Y/n] Y
    New password:
    Re-enter new password:
    Password updated successfully!
    Reloading privilege tables..
     ... Success!

    # answer Y to remove the anonymous users
    Remove anonymous users? [Y/n] Y
     ... Success!

    # answer Y to disallow remote root login
    Disallow root login remotely? [Y/n] Y
     ... Success!

    # answer Y to remove the test database
    Remove test database and access to it? [Y/n] Y
     - Dropping test database...
     ... Success!
     - Removing privileges on test database...
     ... Success!

    # answer Y to reload the privilege tables
    Reload privilege tables now? [Y/n] Y
     ... Success!

    Cleaning up...
    All done!  If you've completed all of the above steps, your MariaDB
    installation should now be secure.

Verify: using the password set above, check that you can log in to MariaDB:

    mysql -uroot -p

Installing the message queue

The message queue is installed on the controller node; RabbitMQ is recommended.

Install the package:

    dnf install rabbitmq-server

Start the service:

    systemctl start rabbitmq-server

Create the openstack user. RABBIT_PASS is the password the OpenStack services use to log in to the message queue; it must stay consistent with the configuration of the individual services later on:

    rabbitmqctl add_user openstack RABBIT_PASS
    rabbitmqctl set_permissions openstack ".*" ".*" ".*"

Installing the cache service

The cache service is installed on the controller node; Memcached is recommended.

Install the packages:

    dnf install memcached python3-memcached

Modify the configuration file /etc/sysconfig/memcached:

    OPTIONS="-l 127.0.0.1,::1,controller"

Start the service:

    systemctl start memcached
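Before moving on to the OpenStack services, you may optionally confirm that the three base services respond. The checks below are a generic sketch, not part of the original procedure:

```bash
# MariaDB accepts the root password set during mysql_secure_installation
mysql -uroot -p -e "SELECT VERSION();"

# the openstack RabbitMQ user exists
rabbitmqctl list_users

# memcached is running
systemctl is-active memcached
```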
Deploying the services

Keystone

Keystone is the identity and authentication service provided by OpenStack and the entry point of the whole cloud. It provides tenant isolation, user authentication, service discovery and related functions, and must be installed.

Create the keystone database and grant privileges:

    mysql -u root -p
    MariaDB [(none)]> CREATE DATABASE keystone;
    MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
    IDENTIFIED BY 'KEYSTONE_DBPASS';
    MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
    IDENTIFIED BY 'KEYSTONE_DBPASS';
    MariaDB [(none)]> exit

Note: replace KEYSTONE_DBPASS with the password chosen for the Keystone database.

Install the packages:

    dnf install openstack-keystone httpd mod_wsgi

Configure Keystone:

    vim /etc/keystone/keystone.conf

    [database]
    connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone

    [token]
    provider = fernet

Explanation: the [database] section configures the database connection; the [token] section configures the token provider.

Sync the database:

    su -s /bin/sh -c "keystone-manage db_sync" keystone

Initialize the Fernet key repositories:

    keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
    keystone-manage credential_setup --keystone-user keystone --keystone-group keystone

Bootstrap the identity service:

    keystone-manage bootstrap --bootstrap-password ADMIN_PASS \
        --bootstrap-admin-url http://controller:5000/v3/ \
        --bootstrap-internal-url http://controller:5000/v3/ \
        --bootstrap-public-url http://controller:5000/v3/ \
        --bootstrap-region-id RegionOne

Note: replace ADMIN_PASS with the password chosen for the admin user.

Configure the Apache HTTP server. Open httpd.conf and configure it:

    # configuration file to modify
    vim /etc/httpd/conf/httpd.conf

    # change the following directive, adding it if it does not exist
    ServerName controller

Create the symbolic link:

    ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/

Explanation: the ServerName directive must reference the controller node. Note: if the ServerName directive does not exist, it must be added.

Start the Apache HTTP service:

    systemctl enable httpd.service
    systemctl start httpd.service

Create the environment variable file:

    cat << EOF >> ~/.admin-openrc
    export OS_PROJECT_DOMAIN_NAME=Default
    export OS_USER_DOMAIN_NAME=Default
    export OS_PROJECT_NAME=admin
    export OS_USERNAME=admin
    export OS_PASSWORD=ADMIN_PASS
    export OS_AUTH_URL=http://controller:5000/v3
    export OS_IDENTITY_API_VERSION=3
    export OS_IMAGE_API_VERSION=2
    EOF

Note: replace ADMIN_PASS with the password of the admin user.

Create the domain, projects, users and roles in turn. python3-openstackclient must be installed first:

    dnf install python3-openstackclient

Source the environment variables:

    source ~/.admin-openrc

Create the project service; the domain default was already created by keystone-manage bootstrap:

    openstack domain create --description "An Example Domain" example
    openstack project create --domain default --description "Service Project" service

Create the (non-admin) project myproject, user myuser and role myrole, and add the role myrole to myproject and myuser:

    openstack project create --domain default --description "Demo Project" myproject
    openstack user create --domain default --password-prompt myuser
    openstack role create myrole
    openstack role add --project myproject --user myuser myrole

Verification

Unset the temporary environment variables OS_AUTH_URL and OS_PASSWORD:

    source ~/.admin-openrc
    unset OS_AUTH_URL OS_PASSWORD

Request a token for the admin user:

    openstack --os-auth-url http://controller:5000/v3 \
        --os-project-domain-name Default --os-user-domain-name Default \
        --os-project-name admin --os-username admin token issue

Request a token for the myuser user:

    openstack --os-auth-url http://controller:5000/v3 \
        --os-project-domain-name Default --os-user-domain-name Default \
        --os-project-name myproject --os-username myuser token issue
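For convenience you may also keep a credentials file for the non-admin user created above, mirroring ~/.admin-openrc. This is a sketch, not part of the original procedure, with MYUSER_PASS standing for the password entered for myuser:

```bash
cat << EOF >> ~/.myuser-openrc
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=myproject
export OS_USERNAME=myuser
export OS_PASSWORD=MYUSER_PASS
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
EOF
```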
Glance

Glance is the image service provided by OpenStack; it is responsible for uploading and downloading virtual machine and bare metal images, and must be installed.

Controller node:

Create the glance database and grant privileges:

    mysql -u root -p
    MariaDB [(none)]> CREATE DATABASE glance;
    MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
    IDENTIFIED BY 'GLANCE_DBPASS';
    MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
    IDENTIFIED BY 'GLANCE_DBPASS';
    MariaDB [(none)]> exit

Note: replace GLANCE_DBPASS with the password chosen for the glance database.

Initialize the Glance resource objects.

Source the environment variables:

    source ~/.admin-openrc

When creating the user, the command line prompts for a password; enter a password of your own and replace GLANCE_PASS with it wherever it appears below:

    openstack user create --domain default --password-prompt glance
    User Password:
    Repeat User Password:

Add the glance user to the service project and assign the admin role:

    openstack role add --project service --user glance admin

Create the glance service entity:

    openstack service create --name glance --description "OpenStack Image" image

Create the glance API endpoints:

    openstack endpoint create --region RegionOne image public http://controller:9292
    openstack endpoint create --region RegionOne image internal http://controller:9292
    openstack endpoint create --region RegionOne image admin http://controller:9292

Install the package:

    dnf install openstack-glance

Modify the glance configuration file:

    vim /etc/glance/glance-api.conf

    [database]
    connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance

    [keystone_authtoken]
    www_authenticate_uri = http://controller:5000
    auth_url = http://controller:5000
    memcached_servers = controller:11211
    auth_type = password
    project_domain_name = Default
    user_domain_name = Default
    project_name = service
    username = glance
    password = GLANCE_PASS

    [paste_deploy]
    flavor = keystone

    [glance_store]
    stores = file,http
    default_store = file
    filesystem_store_datadir = /var/lib/glance/images/

Explanation: the [database] section configures the database connection; the [keystone_authtoken] and [paste_deploy] sections configure the identity service entry point; the [glance_store] section configures the local filesystem store and the location of the image files.

Sync the database:

    su -s /bin/sh -c "glance-manage db_sync" glance

Start the service:

    systemctl enable openstack-glance-api.service
    systemctl start openstack-glance-api.service

Verification

Source the environment variables:

    source ~/.admin-openrc

Download an image.

x86 image:

    wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img

arm image:

    wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-aarch64-disk.img
\u5982\u679c\u60a8\u4f7f\u7528\u7684\u73af\u5883\u662f\u9cb2\u9e4f\u67b6\u6784\uff0c\u8bf7\u4e0b\u8f7daarch64\u7248\u672c\u7684\u955c\u50cf\uff1b\u5df2\u5bf9\u955c\u50cfcirros-0.5.2-aarch64-disk.img\u8fdb\u884c\u6d4b\u8bd5\u3002 \u5411Image\u670d\u52a1\u4e0a\u4f20\u955c\u50cf\uff1a openstack image create --disk-format qcow2 --container-format bare \\ --file cirros-0.4.0-x86_64-disk.img --public cirros \u786e\u8ba4\u955c\u50cf\u4e0a\u4f20\u5e76\u9a8c\u8bc1\u5c5e\u6027\uff1a openstack image list","title":"Glance"},{"location":"install/openEuler-24.03-LTS/OpenStack-antelope/#placement","text":"Placement\u662fOpenStack\u63d0\u4f9b\u7684\u8d44\u6e90\u8c03\u5ea6\u7ec4\u4ef6\uff0c\u4e00\u822c\u4e0d\u9762\u5411\u7528\u6237\uff0c\u7531Nova\u7b49\u7ec4\u4ef6\u8c03\u7528\uff0c\u5b89\u88c5\u5728\u63a7\u5236\u8282\u70b9\u3002 \u5b89\u88c5\u3001\u914d\u7f6ePlacement\u670d\u52a1\u524d\uff0c\u9700\u8981\u5148\u521b\u5efa\u76f8\u5e94\u7684\u6570\u636e\u5e93\u3001\u670d\u52a1\u51ed\u8bc1\u548cAPI endpoints\u3002 \u521b\u5efa\u6570\u636e\u5e93 \u4f7f\u7528root\u7528\u6237\u8bbf\u95ee\u6570\u636e\u5e93\u670d\u52a1\uff1a mysql -u root -p \u521b\u5efaplacement\u6570\u636e\u5e93\uff1a MariaDB [(none)]> CREATE DATABASE placement; \u6388\u6743\u6570\u636e\u5e93\u8bbf\u95ee\uff1a MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' \\ IDENTIFIED BY 'PLACEMENT_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' \\ IDENTIFIED BY 'PLACEMENT_DBPASS'; \u66ff\u6362 PLACEMENT_DBPASS \u4e3aplacement\u6570\u636e\u5e93\u8bbf\u95ee\u5bc6\u7801\u3002 \u9000\u51fa\u6570\u636e\u5e93\u8bbf\u95ee\u5ba2\u6237\u7aef\uff1a exit \u914d\u7f6e\u7528\u6237\u548cEndpoints source admin\u51ed\u8bc1\uff0c\u4ee5\u83b7\u53d6admin\u547d\u4ee4\u884c\u6743\u9650\uff1a source ~/.admin-openrc \u521b\u5efaplacement\u7528\u6237\u5e76\u8bbe\u7f6e\u7528\u6237\u5bc6\u7801\uff1a openstack user create --domain default --password-prompt placement User Password: Repeat User Password: \u6dfb\u52a0placement\u7528\u6237\u5230service project\u5e76\u6307\u5b9aadmin\u89d2\u8272\uff1a openstack role add --project service --user placement admin \u521b\u5efaplacement\u670d\u52a1\u5b9e\u4f53\uff1a openstack service create --name placement \\ --description \"Placement API\" placement \u521b\u5efaPlacement API\u670d\u52a1endpoints\uff1a openstack endpoint create --region RegionOne \\ placement public http://controller:8778 openstack endpoint create --region RegionOne \\ placement internal http://controller:8778 openstack endpoint create --region RegionOne \\ placement admin http://controller:8778 \u5b89\u88c5\u53ca\u914d\u7f6e\u7ec4\u4ef6 \u5b89\u88c5\u8f6f\u4ef6\u5305\uff1a dnf install openstack-placement-api \u7f16\u8f91 /etc/placement/placement.conf \u914d\u7f6e\u6587\u4ef6\uff0c\u5b8c\u6210\u5982\u4e0b\u64cd\u4f5c\uff1a \u5728 [placement_database] \u90e8\u5206\uff0c\u914d\u7f6e\u6570\u636e\u5e93\u5165\u53e3\uff1a [placement_database] connection = mysql+pymysql://placement:PLACEMENT_DBPASS@controller/placement \u66ff\u6362 PLACEMENT_DBPASS \u4e3aplacement\u6570\u636e\u5e93\u7684\u5bc6\u7801\u3002 \u5728 [api] \u548c [keystone_authtoken] \u90e8\u5206\uff0c\u914d\u7f6e\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u5165\u53e3\uff1a [api] auth_strategy = keystone [keystone_authtoken] auth_url = http://controller:5000/v3 memcached_servers = controller:11211 auth_type = password project_domain_name = Default user_domain_name = Default project_name = service username = placement password = PLACEMENT_PASS \u66ff\u6362 
PLACEMENT_PASS \u4e3aplacement\u7528\u6237\u7684\u5bc6\u7801\u3002 \u6570\u636e\u5e93\u540c\u6b65\uff0c\u586b\u5145Placement\u6570\u636e\u5e93\uff1a su -s /bin/sh -c \"placement-manage db sync\" placement \u542f\u52a8\u670d\u52a1 \u91cd\u542fhttpd\u670d\u52a1\uff1a systemctl restart httpd \u9a8c\u8bc1 source admin\u51ed\u8bc1\uff0c\u4ee5\u83b7\u53d6admin\u547d\u4ee4\u884c\u6743\u9650 source ~/.admin-openrc \u6267\u884c\u72b6\u6001\u68c0\u67e5\uff1a placement-status upgrade check +----------------------------------------------------------------------+ | Upgrade Check Results | +----------------------------------------------------------------------+ | Check: Missing Root Provider IDs | | Result: Success | | Details: None | +----------------------------------------------------------------------+ | Check: Incomplete Consumers | | Result: Success | | Details: None | +----------------------------------------------------------------------+ | Check: Policy File JSON to YAML Migration | | Result: Failure | | Details: Your policy file is JSON-formatted which is deprecated. You | | need to switch to YAML-formatted file. Use the | | ``oslopolicy-convert-json-to-yaml`` tool to convert the | | existing JSON-formatted files to YAML in a backwards- | | compatible manner: https://docs.openstack.org/oslo.policy/ | | latest/cli/oslopolicy-convert-json-to-yaml.html. | +----------------------------------------------------------------------+ \u8fd9\u91cc\u53ef\u4ee5\u770b\u5230 Policy File JSON to YAML Migration \u7684\u7ed3\u679c\u4e3aFailure\u3002\u8fd9\u662f\u56e0\u4e3a\u5728Placement\u4e2d\uff0cJSON\u683c\u5f0f\u7684policy\u6587\u4ef6\u4eceWallaby\u7248\u672c\u5f00\u59cb\u5df2\u5904\u4e8e deprecated \u72b6\u6001\u3002\u53ef\u4ee5\u53c2\u8003\u63d0\u793a\uff0c\u4f7f\u7528 oslopolicy-convert-json-to-yaml \u5de5\u5177 \u5c06\u73b0\u6709\u7684JSON\u683c\u5f0fpolicy\u6587\u4ef6\u8f6c\u5316\u4e3aYAML\u683c\u5f0f\u3002 oslopolicy-convert-json-to-yaml --namespace placement \\ --policy-file /etc/placement/policy.json \\ --output-file /etc/placement/policy.yaml mv /etc/placement/policy.json{,.bak} \u6ce8\uff1a\u5f53\u524d\u73af\u5883\u4e2d\u6b64\u95ee\u9898\u53ef\u5ffd\u7565\uff0c\u4e0d\u5f71\u54cd\u8fd0\u884c\u3002 \u9488\u5bf9placement API\u8fd0\u884c\u547d\u4ee4\uff1a \u5b89\u88c5osc-placement\u63d2\u4ef6\uff1a dnf install python3-osc-placement \u5217\u51fa\u53ef\u7528\u7684\u8d44\u6e90\u7c7b\u522b\u53ca\u7279\u6027\uff1a openstack --os-placement-api-version 1.2 resource class list --sort-column name +----------------------------+ | name | +----------------------------+ | DISK_GB | | FPGA | | ... | openstack --os-placement-api-version 1.6 trait list --sort-column name +---------------------------------------+ | name | +---------------------------------------+ | COMPUTE_ACCELERATORS | | COMPUTE_ARCH_AARCH64 | | ... 
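Beyond `placement-status upgrade check`, it can also be useful to query the Placement API directly once compute resources are being reported. A minimal sketch, assuming the `python3-osc-placement` plugin installed above and the admin credentials file from the Keystone section; resource providers only appear after a nova-compute host (configured later in this guide) has registered:

```shell
source ~/.admin-openrc
# Every hypervisor registers itself as a resource provider
openstack resource provider list
# Inspect the VCPU / MEMORY_MB / DISK_GB inventory of one provider
openstack resource provider inventory list <provider-uuid>
```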
|","title":"Placement"},{"location":"install/openEuler-24.03-LTS/OpenStack-antelope/#nova","text":"Nova\u662fOpenStack\u7684\u8ba1\u7b97\u670d\u52a1\uff0c\u8d1f\u8d23\u865a\u62df\u673a\u7684\u521b\u5efa\u3001\u53d1\u653e\u7b49\u529f\u80fd\u3002 Controller\u8282\u70b9 \u5728\u63a7\u5236\u8282\u70b9\u6267\u884c\u4ee5\u4e0b\u64cd\u4f5c\u3002 \u521b\u5efa\u6570\u636e\u5e93 \u4f7f\u7528root\u7528\u6237\u8bbf\u95ee\u6570\u636e\u5e93\u670d\u52a1\uff1a mysql -u root -p \u521b\u5efa nova_api \u3001 nova \u548c nova_cell0 \u6570\u636e\u5e93\uff1a MariaDB [(none)]> CREATE DATABASE nova_api; MariaDB [(none)]> CREATE DATABASE nova; MariaDB [(none)]> CREATE DATABASE nova_cell0; \u6388\u6743\u6570\u636e\u5e93\u8bbf\u95ee\uff1a MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \\ IDENTIFIED BY 'NOVA_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \\ IDENTIFIED BY 'NOVA_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \\ IDENTIFIED BY 'NOVA_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \\ IDENTIFIED BY 'NOVA_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \\ IDENTIFIED BY 'NOVA_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \\ IDENTIFIED BY 'NOVA_DBPASS'; \u66ff\u6362 NOVA_DBPASS \u4e3anova\u76f8\u5173\u6570\u636e\u5e93\u8bbf\u95ee\u5bc6\u7801\u3002 \u9000\u51fa\u6570\u636e\u5e93\u8bbf\u95ee\u5ba2\u6237\u7aef\uff1a exit \u914d\u7f6e\u7528\u6237\u548cEndpoints source admin\u51ed\u8bc1\uff0c\u4ee5\u83b7\u53d6admin\u547d\u4ee4\u884c\u6743\u9650\uff1a source ~/.admin-openrc \u521b\u5efanova\u7528\u6237\u5e76\u8bbe\u7f6e\u7528\u6237\u5bc6\u7801\uff1a openstack user create --domain default --password-prompt nova User Password: Repeat User Password: \u6dfb\u52a0nova\u7528\u6237\u5230service project\u5e76\u6307\u5b9aadmin\u89d2\u8272\uff1a openstack role add --project service --user nova admin \u521b\u5efanova\u670d\u52a1\u5b9e\u4f53\uff1a openstack service create --name nova \\ --description \"OpenStack Compute\" compute \u521b\u5efaNova API\u670d\u52a1endpoints\uff1a openstack endpoint create --region RegionOne \\ compute public http://controller:8774/v2.1 openstack endpoint create --region RegionOne \\ compute internal http://controller:8774/v2.1 openstack endpoint create --region RegionOne \\ compute admin http://controller:8774/v2.1 \u5b89\u88c5\u53ca\u914d\u7f6e\u7ec4\u4ef6 \u5b89\u88c5\u8f6f\u4ef6\u5305\uff1a dnf install openstack-nova-api openstack-nova-conductor \\ openstack-nova-novncproxy openstack-nova-scheduler \u7f16\u8f91 /etc/nova/nova.conf \u914d\u7f6e\u6587\u4ef6\uff0c\u5b8c\u6210\u5982\u4e0b\u64cd\u4f5c\uff1a \u5728 [default] \u90e8\u5206\uff0c\u542f\u7528\u8ba1\u7b97\u548c\u5143\u6570\u636e\u7684API\uff0c\u914d\u7f6eRabbitMQ\u6d88\u606f\u961f\u5217\u5165\u53e3\uff0c\u4f7f\u7528controller\u8282\u70b9\u7ba1\u7406IP\u914d\u7f6emy_ip\uff0c\u663e\u5f0f\u5b9a\u4e49log_dir\uff1a [DEFAULT] enabled_apis = osapi_compute,metadata transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/ my_ip = 192.168.0.2 log_dir = /var/log/nova state_path = /var/lib/nova \u66ff\u6362 RABBIT_PASS \u4e3aRabbitMQ\u4e2dopenstack\u8d26\u6237\u7684\u5bc6\u7801\u3002 \u5728 [api_database] \u548c [database] \u90e8\u5206\uff0c\u914d\u7f6e\u6570\u636e\u5e93\u5165\u53e3\uff1a [api_database] connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api [database] connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova \u66ff\u6362 
NOVA_DBPASS \u4e3anova\u76f8\u5173\u6570\u636e\u5e93\u7684\u5bc6\u7801\u3002 \u5728 [api] \u548c [keystone_authtoken] \u90e8\u5206\uff0c\u914d\u7f6e\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u5165\u53e3\uff1a [api] auth_strategy = keystone [keystone_authtoken] auth_url = http://controller:5000/v3 memcached_servers = controller:11211 auth_type = password project_domain_name = Default user_domain_name = Default project_name = service username = nova password = NOVA_PASS \u66ff\u6362 NOVA_PASS \u4e3anova\u7528\u6237\u7684\u5bc6\u7801\u3002 \u5728 [vnc] \u90e8\u5206\uff0c\u542f\u7528\u5e76\u914d\u7f6e\u8fdc\u7a0b\u63a7\u5236\u53f0\u5165\u53e3\uff1a [vnc] enabled = true server_listen = $my_ip server_proxyclient_address = $my_ip \u5728 [glance] \u90e8\u5206\uff0c\u914d\u7f6e\u955c\u50cf\u670d\u52a1API\u7684\u5730\u5740\uff1a [glance] api_servers = http://controller:9292 \u5728 [oslo_concurrency] \u90e8\u5206\uff0c\u914d\u7f6elock path\uff1a [oslo_concurrency] lock_path = /var/lib/nova/tmp [placement]\u90e8\u5206\uff0c\u914d\u7f6eplacement\u670d\u52a1\u7684\u5165\u53e3\uff1a [placement] region_name = RegionOne project_domain_name = Default project_name = service auth_type = password user_domain_name = Default auth_url = http://controller:5000/v3 username = placement password = PLACEMENT_PASS \u66ff\u6362 PLACEMENT_PASS \u4e3aplacement\u7528\u6237\u7684\u5bc6\u7801\u3002 \u6570\u636e\u5e93\u540c\u6b65\uff1a \u540c\u6b65nova-api\u6570\u636e\u5e93\uff1a su -s /bin/sh -c \"nova-manage api_db sync\" nova \u6ce8\u518ccell0\u6570\u636e\u5e93\uff1a su -s /bin/sh -c \"nova-manage cell_v2 map_cell0\" nova \u521b\u5efacell1 cell\uff1a su -s /bin/sh -c \"nova-manage cell_v2 create_cell --name=cell1 --verbose\" nova \u540c\u6b65nova\u6570\u636e\u5e93\uff1a su -s /bin/sh -c \"nova-manage db sync\" nova \u9a8c\u8bc1cell0\u548ccell1\u6ce8\u518c\u6b63\u786e\uff1a su -s /bin/sh -c \"nova-manage cell_v2 list_cells\" nova \u542f\u52a8\u670d\u52a1 systemctl enable \\ openstack-nova-api.service \\ openstack-nova-scheduler.service \\ openstack-nova-conductor.service \\ openstack-nova-novncproxy.service systemctl start \\ openstack-nova-api.service \\ openstack-nova-scheduler.service \\ openstack-nova-conductor.service \\ openstack-nova-novncproxy.service Compute\u8282\u70b9 \u5728\u8ba1\u7b97\u8282\u70b9\u6267\u884c\u4ee5\u4e0b\u64cd\u4f5c\u3002 \u5b89\u88c5\u8f6f\u4ef6\u5305 dnf install openstack-nova-compute \u7f16\u8f91 /etc/nova/nova.conf \u914d\u7f6e\u6587\u4ef6 \u5728 [default] \u90e8\u5206\uff0c\u542f\u7528\u8ba1\u7b97\u548c\u5143\u6570\u636e\u7684API\uff0c\u914d\u7f6eRabbitMQ\u6d88\u606f\u961f\u5217\u5165\u53e3\uff0c\u4f7f\u7528Compute\u8282\u70b9\u7ba1\u7406IP\u914d\u7f6emy_ip\uff0c\u663e\u5f0f\u5b9a\u4e49compute_driver\u3001instances_path\u3001log_dir\uff1a [DEFAULT] enabled_apis = osapi_compute,metadata transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/ my_ip = 192.168.0.3 compute_driver = libvirt.LibvirtDriver instances_path = /var/lib/nova/instances log_dir = /var/log/nova \u66ff\u6362 RABBIT_PASS \u4e3aRabbitMQ\u4e2dopenstack\u8d26\u6237\u7684\u5bc6\u7801\u3002 \u5728 [api] \u548c [keystone_authtoken] \u90e8\u5206\uff0c\u914d\u7f6e\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u5165\u53e3\uff1a [api] auth_strategy = keystone [keystone_authtoken] auth_url = http://controller:5000/v3 memcached_servers = controller:11211 auth_type = password project_domain_name = Default user_domain_name = Default project_name = service username = nova password = NOVA_PASS \u66ff\u6362 NOVA_PASS 
\u4e3anova\u7528\u6237\u7684\u5bc6\u7801\u3002 \u5728 [vnc] \u90e8\u5206\uff0c\u542f\u7528\u5e76\u914d\u7f6e\u8fdc\u7a0b\u63a7\u5236\u53f0\u5165\u53e3\uff1a [vnc] enabled = true server_listen = $my_ip server_proxyclient_address = $my_ip novncproxy_base_url = http://controller:6080/vnc_auto.html \u5728 [glance] \u90e8\u5206\uff0c\u914d\u7f6e\u955c\u50cf\u670d\u52a1API\u7684\u5730\u5740\uff1a [glance] api_servers = http://controller:9292 \u5728 [oslo_concurrency] \u90e8\u5206\uff0c\u914d\u7f6elock path\uff1a [oslo_concurrency] lock_path = /var/lib/nova/tmp [placement]\u90e8\u5206\uff0c\u914d\u7f6eplacement\u670d\u52a1\u7684\u5165\u53e3\uff1a [placement] region_name = RegionOne project_domain_name = Default project_name = service auth_type = password user_domain_name = Default auth_url = http://controller:5000/v3 username = placement password = PLACEMENT_PASS \u66ff\u6362 PLACEMENT_PASS \u4e3aplacement\u7528\u6237\u7684\u5bc6\u7801\u3002 \u786e\u8ba4\u8ba1\u7b97\u8282\u70b9\u662f\u5426\u652f\u6301\u865a\u62df\u673a\u786c\u4ef6\u52a0\u901f\uff08x86_64\uff09 \u5904\u7406\u5668\u4e3ax86_64\u67b6\u6784\u65f6\uff0c\u53ef\u901a\u8fc7\u8fd0\u884c\u5982\u4e0b\u547d\u4ee4\u786e\u8ba4\u662f\u5426\u652f\u6301\u786c\u4ef6\u52a0\u901f\uff1a egrep -c '(vmx|svm)' /proc/cpuinfo \u5982\u679c\u8fd4\u56de\u503c\u4e3a0\u5219\u4e0d\u652f\u6301\u786c\u4ef6\u52a0\u901f\uff0c\u9700\u8981\u914d\u7f6elibvirt\u4f7f\u7528QEMU\u800c\u4e0d\u662f\u9ed8\u8ba4\u7684KVM\u3002\u7f16\u8f91 /etc/nova/nova.conf \u7684 [libvirt] \u90e8\u5206\uff1a [libvirt] virt_type = qemu \u5982\u679c\u8fd4\u56de\u503c\u4e3a1\u6216\u66f4\u5927\u7684\u503c\uff0c\u5219\u652f\u6301\u786c\u4ef6\u52a0\u901f\uff0c\u4e0d\u9700\u8981\u8fdb\u884c\u989d\u5916\u7684\u914d\u7f6e\u3002 \u786e\u8ba4\u8ba1\u7b97\u8282\u70b9\u662f\u5426\u652f\u6301\u865a\u62df\u673a\u786c\u4ef6\u52a0\u901f\uff08arm64\uff09 \u5904\u7406\u5668\u4e3aarm64\u67b6\u6784\u65f6\uff0c\u53ef\u901a\u8fc7\u8fd0\u884c\u5982\u4e0b\u547d\u4ee4\u786e\u8ba4\u662f\u5426\u652f\u6301\u786c\u4ef6\u52a0\u901f\uff1a virt-host-validate # \u8be5\u547d\u4ee4\u7531libvirt\u63d0\u4f9b\uff0c\u6b64\u65f6libvirt\u5e94\u5df2\u4f5c\u4e3aopenstack-nova-compute\u4f9d\u8d56\u88ab\u5b89\u88c5\uff0c\u73af\u5883\u4e2d\u5df2\u6709\u6b64\u547d\u4ee4 \u663e\u793aFAIL\u65f6\uff0c\u8868\u793a\u4e0d\u652f\u6301\u786c\u4ef6\u52a0\u901f\uff0c\u9700\u8981\u914d\u7f6elibvirt\u4f7f\u7528QEMU\u800c\u4e0d\u662f\u9ed8\u8ba4\u7684KVM\u3002 QEMU: Checking if device /dev/kvm exists: FAIL (Check that CPU and firmware supports virtualization and kvm module is loaded) \u7f16\u8f91 /etc/nova/nova.conf \u7684 [libvirt] \u90e8\u5206\uff1a [libvirt] virt_type = qemu \u663e\u793aPASS\u65f6\uff0c\u8868\u793a\u652f\u6301\u786c\u4ef6\u52a0\u901f\uff0c\u4e0d\u9700\u8981\u8fdb\u884c\u989d\u5916\u7684\u914d\u7f6e\u3002 QEMU: Checking if device /dev/kvm exists: PASS \u914d\u7f6eqemu\uff08\u4ec5arm64\uff09 \u4ec5\u5f53\u5904\u7406\u5668\u4e3aarm64\u67b6\u6784\u65f6\u9700\u8981\u6267\u884c\u6b64\u64cd\u4f5c\u3002 \u7f16\u8f91 /etc/libvirt/qemu.conf : nvram = [\"/usr/share/AAVMF/AAVMF_CODE.fd: \\ /usr/share/AAVMF/AAVMF_VARS.fd\", \\ \"/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw: \\ /usr/share/edk2/aarch64/vars-template-pflash.raw\"] \u7f16\u8f91 /etc/qemu/firmware/edk2-aarch64.json { \"description\": \"UEFI firmware for ARM64 virtual machines\", \"interface-types\": [ \"uefi\" ], \"mapping\": { \"device\": \"flash\", \"executable\": { \"filename\": \"/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw\", \"format\": \"raw\" }, \"nvram-template\": { 
\"filename\": \"/usr/share/edk2/aarch64/vars-template-pflash.raw\", \"format\": \"raw\" } }, \"targets\": [ { \"architecture\": \"aarch64\", \"machines\": [ \"virt-*\" ] } ], \"features\": [ ], \"tags\": [ ] } \u542f\u52a8\u670d\u52a1 systemctl enable libvirtd.service openstack-nova-compute.service systemctl start libvirtd.service openstack-nova-compute.service Controller\u8282\u70b9 \u5728\u63a7\u5236\u8282\u70b9\u6267\u884c\u4ee5\u4e0b\u64cd\u4f5c\u3002 \u6dfb\u52a0\u8ba1\u7b97\u8282\u70b9\u5230openstack\u96c6\u7fa4 source admin\u51ed\u8bc1\uff0c\u4ee5\u83b7\u53d6admin\u547d\u4ee4\u884c\u6743\u9650\uff1a source ~/.admin-openrc \u786e\u8ba4nova-compute\u670d\u52a1\u5df2\u8bc6\u522b\u5230\u6570\u636e\u5e93\u4e2d\uff1a openstack compute service list --service nova-compute \u53d1\u73b0\u8ba1\u7b97\u8282\u70b9\uff0c\u5c06\u8ba1\u7b97\u8282\u70b9\u6dfb\u52a0\u5230cell\u6570\u636e\u5e93\uff1a su -s /bin/sh -c \"nova-manage cell_v2 discover_hosts --verbose\" nova \u7ed3\u679c\u5982\u4e0b\uff1a Modules with known eventlet monkey patching issues were imported prior to eventlet monkey patching: urllib3. This warning can usually be ignored if the caller is only importing and not executing nova code. Found 2 cell mappings. Skipping cell0 since it does not contain hosts. Getting computes from cell 'cell1': 6dae034e-b2d9-4a6c-b6f0-60ada6a6ddc2 Checking host mapping for compute host 'compute': 6286a86f-09d7-4786-9137-1185654c9e2e Creating host mapping for compute host 'compute': 6286a86f-09d7-4786-9137-1185654c9e2e Found 1 unmapped computes in cell: 6dae034e-b2d9-4a6c-b6f0-60ada6a6ddc2 \u9a8c\u8bc1 \u5217\u51fa\u670d\u52a1\u7ec4\u4ef6\uff0c\u9a8c\u8bc1\u6bcf\u4e2a\u6d41\u7a0b\u90fd\u6210\u529f\u542f\u52a8\u548c\u6ce8\u518c\uff1a openstack compute service list \u5217\u51fa\u8eab\u4efd\u670d\u52a1\u4e2d\u7684API\u7aef\u70b9\uff0c\u9a8c\u8bc1\u4e0e\u8eab\u4efd\u670d\u52a1\u7684\u8fde\u63a5\uff1a openstack catalog list \u5217\u51fa\u955c\u50cf\u670d\u52a1\u4e2d\u7684\u955c\u50cf\uff0c\u9a8c\u8bc1\u4e0e\u955c\u50cf\u670d\u52a1\u7684\u8fde\u63a5\uff1a openstack image list \u68c0\u67e5cells\u662f\u5426\u8fd0\u4f5c\u6210\u529f\uff0c\u4ee5\u53ca\u5176\u4ed6\u5fc5\u8981\u6761\u4ef6\u662f\u5426\u5df2\u5177\u5907\u3002 nova-status upgrade check","title":"Nova"},{"location":"install/openEuler-24.03-LTS/OpenStack-antelope/#neutron","text":"Neutron\u662fOpenStack\u7684\u7f51\u7edc\u670d\u52a1\uff0c\u63d0\u4f9b\u865a\u62df\u4ea4\u6362\u673a\u3001IP\u8def\u7531\u3001DHCP\u7b49\u529f\u80fd\u3002 Controller\u8282\u70b9 \u521b\u5efa\u6570\u636e\u5e93\u3001\u670d\u52a1\u51ed\u8bc1\u548c API \u670d\u52a1\u7aef\u70b9 \u521b\u5efa\u6570\u636e\u5e93\uff1a mysql -u root -p MariaDB [(none)]> CREATE DATABASE neutron; MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'NEUTRON_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'NEUTRON_DBPASS'; MariaDB [(none)]> exit; \u521b\u5efa\u7528\u6237\u548c\u670d\u52a1\uff0c\u5e76\u8bb0\u4f4f\u521b\u5efaneutron\u7528\u6237\u65f6\u8f93\u5165\u7684\u5bc6\u7801\uff0c\u7528\u4e8e\u914d\u7f6eNEUTRON_PASS\uff1a source ~/.admin-openrc openstack user create --domain default --password-prompt neutron openstack role add --project service --user neutron admin openstack service create --name neutron --description \"OpenStack Networking\" network \u90e8\u7f72 Neutron API \u670d\u52a1\uff1a openstack endpoint create --region RegionOne network public http://controller:9696 openstack endpoint create --region RegionOne network 
internal http://controller:9696 openstack endpoint create --region RegionOne network admin http://controller:9696 \u5b89\u88c5\u8f6f\u4ef6\u5305 dnf install -y openstack-neutron openstack-neutron-linuxbridge ebtables ipset openstack-neutron-ml2 3. \u914d\u7f6eNeutron \u4fee\u6539/etc/neutron/neutron.conf [database] connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron [DEFAULT] core_plugin = ml2 service_plugins = router allow_overlapping_ips = true transport_url = rabbit://openstack:RABBIT_PASS@controller auth_strategy = keystone notify_nova_on_port_status_changes = true notify_nova_on_port_data_changes = true [keystone_authtoken] www_authenticate_uri = http://controller:5000 auth_url = http://controller:5000 memcached_servers = controller:11211 auth_type = password project_domain_name = Default user_domain_name = Default project_name = service username = neutron password = NEUTRON_PASS [nova] auth_url = http://controller:5000 auth_type = password project_domain_name = Default user_domain_name = Default region_name = RegionOne project_name = service username = nova password = NOVA_PASS [oslo_concurrency] lock_path = /var/lib/neutron/tmp [experimental] linuxbridge = true \u914d\u7f6eML2\uff0cML2\u5177\u4f53\u914d\u7f6e\u53ef\u4ee5\u6839\u636e\u7528\u6237\u9700\u6c42\u81ea\u884c\u4fee\u6539\uff0c\u672c\u6587\u4f7f\u7528\u7684\u662fprovider network + linuxbridge** \u4fee\u6539/etc/neutron/plugins/ml2/ml2_conf.ini [ml2] type_drivers = flat,vlan,vxlan tenant_network_types = vxlan mechanism_drivers = linuxbridge,l2population extension_drivers = port_security [ml2_type_flat] flat_networks = provider [ml2_type_vxlan] vni_ranges = 1:1000 [securitygroup] enable_ipset = true \u4fee\u6539/etc/neutron/plugins/ml2/linuxbridge_agent.ini [linux_bridge] physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME [vxlan] enable_vxlan = true local_ip = OVERLAY_INTERFACE_IP_ADDRESS l2_population = true [securitygroup] enable_security_group = true firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver \u914d\u7f6eLayer-3\u4ee3\u7406 \u4fee\u6539/etc/neutron/l3_agent.ini [DEFAULT] interface_driver = linuxbridge \u914d\u7f6eDHCP\u4ee3\u7406 \u4fee\u6539/etc/neutron/dhcp_agent.ini [DEFAULT] interface_driver = linuxbridge dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq enable_isolated_metadata = true \u914d\u7f6emetadata\u4ee3\u7406 \u4fee\u6539/etc/neutron/metadata_agent.ini [DEFAULT] nova_metadata_host = controller metadata_proxy_shared_secret = METADATA_SECRET \u914d\u7f6enova\u670d\u52a1\u4f7f\u7528neutron\uff0c\u4fee\u6539/etc/nova/nova.conf [neutron] auth_url = http://controller:5000 auth_type = password project_domain_name = default user_domain_name = default region_name = RegionOne project_name = service username = neutron password = NEUTRON_PASS service_metadata_proxy = true metadata_proxy_shared_secret = METADATA_SECRET \u521b\u5efa/etc/neutron/plugin.ini\u7684\u7b26\u53f7\u94fe\u63a5 ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini \u540c\u6b65\u6570\u636e\u5e93 su -s /bin/sh -c \"neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head\" neutron \u91cd\u542fnova api\u670d\u52a1 systemctl restart openstack-nova-api \u542f\u52a8\u7f51\u7edc\u670d\u52a1 systemctl enable neutron-server.service neutron-linuxbridge-agent.service \\ neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service systemctl start neutron-server.service 
neutron-linuxbridge-agent.service \\ neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service Compute\u8282\u70b9 \u5b89\u88c5\u8f6f\u4ef6\u5305 dnf install openstack-neutron-linuxbridge ebtables ipset -y \u914d\u7f6eNeutron \u4fee\u6539/etc/neutron/neutron.conf [DEFAULT] transport_url = rabbit://openstack:RABBIT_PASS@controller auth_strategy = keystone [keystone_authtoken] www_authenticate_uri = http://controller:5000 auth_url = http://controller:5000 memcached_servers = controller:11211 auth_type = password project_domain_name = Default user_domain_name = Default project_name = service username = neutron password = NEUTRON_PASS [oslo_concurrency] lock_path = /var/lib/neutron/tmp \u4fee\u6539/etc/neutron/plugins/ml2/linuxbridge_agent.ini [linux_bridge] physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME [vxlan] enable_vxlan = true local_ip = OVERLAY_INTERFACE_IP_ADDRESS l2_population = true [securitygroup] enable_security_group = true firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver \u914d\u7f6enova compute\u670d\u52a1\u4f7f\u7528neutron\uff0c\u4fee\u6539/etc/nova/nova.conf [neutron] auth_url = http://controller:5000 auth_type = password project_domain_name = default user_domain_name = default region_name = RegionOne project_name = service username = neutron password = NEUTRON_PASS \u91cd\u542fnova-compute\u670d\u52a1 systemctl restart openstack-nova-compute.service \u542f\u52a8Neutron linuxbridge agent\u670d\u52a1 systemctl enable neutron-linuxbridge-agent systemctl start neutron-linuxbridge-agent","title":"Neutron"},{"location":"install/openEuler-24.03-LTS/OpenStack-antelope/#cinder","text":"Cinder\u662fOpenStack\u7684\u5b58\u50a8\u670d\u52a1\uff0c\u63d0\u4f9b\u5757\u8bbe\u5907\u7684\u521b\u5efa\u3001\u53d1\u653e\u3001\u5907\u4efd\u7b49\u529f\u80fd\u3002 Controller\u8282\u70b9 \uff1a \u521d\u59cb\u5316\u6570\u636e\u5e93 CINDER_DBPASS \u662f\u7528\u6237\u81ea\u5b9a\u4e49\u7684cinder\u6570\u636e\u5e93\u5bc6\u7801\u3002 mysql -u root -p MariaDB [(none)]> CREATE DATABASE cinder; MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'CINDER_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'CINDER_DBPASS'; MariaDB [(none)]> exit \u521d\u59cb\u5316Keystone\u8d44\u6e90\u5bf9\u8c61 source ~/.admin-openrc #\u521b\u5efa\u7528\u6237\u65f6\uff0c\u547d\u4ee4\u884c\u4f1a\u63d0\u793a\u8f93\u5165\u5bc6\u7801\uff0c\u8bf7\u8f93\u5165\u81ea\u5b9a\u4e49\u7684\u5bc6\u7801\uff0c\u4e0b\u6587\u6d89\u53ca\u5230`CINDER_PASS`\u7684\u5730\u65b9\u66ff\u6362\u6210\u8be5\u5bc6\u7801\u5373\u53ef\u3002 openstack user create --domain default --password-prompt cinder openstack role add --project service --user cinder admin openstack service create --name cinderv3 --description \"OpenStack Block Storage\" volumev3 openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\\(project_id\\)s openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\\(project_id\\)s openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\\(project_id\\)s 3. 
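Before continuing with the Cinder installation, it is worth confirming that the Neutron agents on both nodes registered correctly; a minimal check, assuming the admin credentials file from earlier:

```shell
source ~/.admin-openrc
# Expect a linuxbridge agent per node plus the DHCP, L3 and metadata
# agents on the controller, all reported as alive (":-)")
openstack network agent list
```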
\u5b89\u88c5\u8f6f\u4ef6\u5305 dnf install openstack-cinder-api openstack-cinder-scheduler \u4fee\u6539cinder\u914d\u7f6e\u6587\u4ef6 /etc/cinder/cinder.conf [DEFAULT] transport_url = rabbit://openstack:RABBIT_PASS@controller auth_strategy = keystone my_ip = 192.168.0.2 [database] connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder [keystone_authtoken] www_authenticate_uri = http://controller:5000 auth_url = http://controller:5000 memcached_servers = controller:11211 auth_type = password project_domain_name = Default user_domain_name = Default project_name = service username = cinder password = CINDER_PASS [oslo_concurrency] lock_path = /var/lib/cinder/tmp \u6570\u636e\u5e93\u540c\u6b65 su -s /bin/sh -c \"cinder-manage db sync\" cinder \u4fee\u6539nova\u914d\u7f6e /etc/nova/nova.conf [cinder] os_region_name = RegionOne \u542f\u52a8\u670d\u52a1 systemctl restart openstack-nova-api systemctl start openstack-cinder-api openstack-cinder-scheduler Storage\u8282\u70b9 \uff1a Storage\u8282\u70b9\u8981\u63d0\u524d\u51c6\u5907\u81f3\u5c11\u4e00\u5757\u786c\u76d8\uff0c\u4f5c\u4e3acinder\u7684\u5b58\u50a8\u540e\u7aef\uff0c\u4e0b\u6587\u9ed8\u8ba4storage\u8282\u70b9\u5df2\u7ecf\u5b58\u5728\u4e00\u5757\u672a\u4f7f\u7528\u7684\u786c\u76d8\uff0c\u8bbe\u5907\u540d\u79f0\u4e3a /dev/sdb \uff0c\u7528\u6237\u5728\u914d\u7f6e\u8fc7\u7a0b\u4e2d\uff0c\u8bf7\u6309\u7167\u771f\u5b9e\u73af\u5883\u4fe1\u606f\u8fdb\u884c\u540d\u79f0\u66ff\u6362\u3002 Cinder\u652f\u6301\u5f88\u591a\u7c7b\u578b\u7684\u540e\u7aef\u5b58\u50a8\uff0c\u672c\u6307\u5bfc\u4f7f\u7528\u6700\u7b80\u5355\u7684lvm\u4e3a\u53c2\u8003\uff0c\u5982\u679c\u60a8\u60f3\u4f7f\u7528\u5982ceph\u7b49\u5176\u4ed6\u540e\u7aef\uff0c\u8bf7\u81ea\u884c\u914d\u7f6e\u3002 \u5b89\u88c5\u8f6f\u4ef6\u5305 dnf install lvm2 device-mapper-persistent-data scsi-target-utils rpcbind nfs-utils openstack-cinder-volume openstack-cinder-backup \u914d\u7f6elvm\u5377\u7ec4 pvcreate /dev/sdb vgcreate cinder-volumes /dev/sdb \u4fee\u6539cinder\u914d\u7f6e /etc/cinder/cinder.conf [DEFAULT] transport_url = rabbit://openstack:RABBIT_PASS@controller auth_strategy = keystone my_ip = 192.168.0.4 enabled_backends = lvm glance_api_servers = http://controller:9292 [keystone_authtoken] www_authenticate_uri = http://controller:5000 auth_url = http://controller:5000 memcached_servers = controller:11211 auth_type = password project_domain_name = default user_domain_name = default project_name = service username = cinder password = CINDER_PASS [database] connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder [lvm] volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver volume_group = cinder-volumes target_protocol = iscsi target_helper = lioadm [oslo_concurrency] lock_path = /var/lib/cinder/tmp \u914d\u7f6ecinder backup \uff08\u53ef\u9009\uff09 cinder-backup\u662f\u53ef\u9009\u7684\u5907\u4efd\u670d\u52a1\uff0ccinder\u540c\u6837\u652f\u6301\u5f88\u591a\u79cd\u5907\u4efd\u540e\u7aef\uff0c\u672c\u6587\u4f7f\u7528swift\u5b58\u50a8\uff0c\u5982\u679c\u60a8\u60f3\u4f7f\u7528\u5982NFS\u7b49\u540e\u7aef\uff0c\u8bf7\u81ea\u884c\u914d\u7f6e\uff0c\u4f8b\u5982\u53ef\u4ee5\u53c2\u8003 OpenStack\u5b98\u65b9\u6587\u6863 \u5bf9NFS\u7684\u914d\u7f6e\u8bf4\u660e\u3002 \u4fee\u6539 /etc/cinder/cinder.conf \uff0c\u5728 [DEFAULT] \u4e2d\u65b0\u589e [DEFAULT] backup_driver = cinder.backup.drivers.swift.SwiftBackupDriver backup_swift_url = SWIFT_URL \u8fd9\u91cc\u7684 SWIFT_URL 
\u662f\u6307\u73af\u5883\u4e2dswift\u670d\u52a1\u7684URL\uff0c\u5728\u90e8\u7f72\u5b8cswift\u670d\u52a1\u540e\uff0c\u6267\u884c openstack catalog show object-store \u547d\u4ee4\u83b7\u53d6\u3002 \u542f\u52a8\u670d\u52a1 systemctl start openstack-cinder-volume target systemctl start openstack-cinder-backup (\u53ef\u9009) \u81f3\u6b64\uff0cCinder\u670d\u52a1\u7684\u90e8\u7f72\u5df2\u5168\u90e8\u5b8c\u6210\uff0c\u53ef\u4ee5\u5728controller\u901a\u8fc7\u4ee5\u4e0b\u547d\u4ee4\u8fdb\u884c\u7b80\u5355\u7684\u9a8c\u8bc1 source ~/.admin-openrc openstack storage service list openstack volume list","title":"Cinder"},{"location":"install/openEuler-24.03-LTS/OpenStack-antelope/#horizon","text":"Horizon\u662fOpenStack\u63d0\u4f9b\u7684\u524d\u7aef\u9875\u9762\uff0c\u53ef\u4ee5\u8ba9\u7528\u6237\u901a\u8fc7\u7f51\u9875\u9f20\u6807\u7684\u64cd\u4f5c\u6765\u63a7\u5236OpenStack\u96c6\u7fa4\uff0c\u800c\u4e0d\u7528\u7e41\u7410\u7684CLI\u547d\u4ee4\u884c\u3002Horizon\u4e00\u822c\u90e8\u7f72\u5728\u63a7\u5236\u8282\u70b9\u3002 \u5b89\u88c5\u8f6f\u4ef6\u5305 dnf install openstack-dashboard \u4fee\u6539\u914d\u7f6e\u6587\u4ef6 /etc/openstack-dashboard/local_settings OPENSTACK_HOST = \"controller\" ALLOWED_HOSTS = ['*', ] OPENSTACK_KEYSTONE_URL = \"http://controller:5000/v3\" SESSION_ENGINE = 'django.contrib.sessions.backends.cache' CACHES = { 'default': { 'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache', 'LOCATION': 'controller:11211', } } OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = \"Default\" OPENSTACK_KEYSTONE_DEFAULT_ROLE = \"member\" WEBROOT = '/dashboard' POLICY_FILES_PATH = \"/etc/openstack-dashboard\" OPENSTACK_API_VERSIONS = { \"identity\": 3, \"image\": 2, \"volume\": 3, } \u91cd\u542f\u670d\u52a1 systemctl restart httpd \u81f3\u6b64\uff0chorizon\u670d\u52a1\u7684\u90e8\u7f72\u5df2\u5168\u90e8\u5b8c\u6210\uff0c\u6253\u5f00\u6d4f\u89c8\u5668\uff0c\u8f93\u5165 http://192.168.0.2/dashboard \uff0c\u6253\u5f00horizon\u767b\u5f55\u9875\u9762\u3002","title":"Horizon"},{"location":"install/openEuler-24.03-LTS/OpenStack-antelope/#ironic","text":"Ironic\u662fOpenStack\u7684\u88f8\u91d1\u5c5e\u670d\u52a1\uff0c\u5982\u679c\u7528\u6237\u9700\u8981\u8fdb\u884c\u88f8\u673a\u90e8\u7f72\u5219\u63a8\u8350\u4f7f\u7528\u8be5\u7ec4\u4ef6\u3002\u5426\u5219\uff0c\u53ef\u4ee5\u4e0d\u7528\u5b89\u88c5\u3002 \u5728\u63a7\u5236\u8282\u70b9\u6267\u884c\u4ee5\u4e0b\u64cd\u4f5c\u3002 \u8bbe\u7f6e\u6570\u636e\u5e93 \u88f8\u91d1\u5c5e\u670d\u52a1\u5728\u6570\u636e\u5e93\u4e2d\u5b58\u50a8\u4fe1\u606f\uff0c\u521b\u5efa\u4e00\u4e2a ironic \u7528\u6237\u53ef\u4ee5\u8bbf\u95ee\u7684 ironic \u6570\u636e\u5e93\uff0c\u66ff\u6362 IRONIC_DBPASS \u4e3a\u5408\u9002\u7684\u5bc6\u7801 mysql -u root -p MariaDB [(none)]> CREATE DATABASE ironic CHARACTER SET utf8; MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'localhost' \\ IDENTIFIED BY 'IRONIC_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'%' \\ IDENTIFIED BY 'IRONIC_DBPASS'; MariaDB [(none)]> exit Bye \u521b\u5efa\u670d\u52a1\u7528\u6237\u8ba4\u8bc1 \u521b\u5efaBare Metal\u670d\u52a1\u7528\u6237 \u66ff\u6362 IRONIC_PASS \u4e3aironic\u7528\u6237\u5bc6\u7801\uff0c IRONIC_INSPECTOR_PASS \u4e3aironic_inspector\u7528\u6237\u5bc6\u7801\u3002 openstack user create --password IRONIC_PASS \\ --email ironic@example.com ironic openstack role add --project service --user ironic admin openstack service create --name ironic \\ --description \"Ironic baremetal provisioning service\" baremetal openstack service create 
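As a complement to the service listing in the Cinder verification step above, creating and deleting a small test volume exercises the whole scheduler/volume path; a sketch assuming the LVM backend configured on the storage node and the admin credentials file:

```shell
source ~/.admin-openrc
openstack volume service list            # cinder-scheduler / cinder-volume should be "up"
openstack volume create --size 1 test-volume
openstack volume list                    # wait for the status to become "available"
openstack volume delete test-volume
```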
--name ironic-inspector --description \"Ironic inspector baremetal provisioning service\" baremetal-introspection openstack user create --password IRONIC_INSPECTOR_PASS --email ironic_inspector@example.com ironic-inspector openstack role add --project service --user ironic-inspector admin \u521b\u5efaBare Metal\u670d\u52a1\u8bbf\u95ee\u5165\u53e3 openstack endpoint create --region RegionOne baremetal admin http://192.168.0.2:6385 openstack endpoint create --region RegionOne baremetal public http://192.168.0.2:6385 openstack endpoint create --region RegionOne baremetal internal http://192.168.0.2:6385 openstack endpoint create --region RegionOne baremetal-introspection internal http://192.168.0.2:5050/v1 openstack endpoint create --region RegionOne baremetal-introspection public http://192.168.0.2:5050/v1 openstack endpoint create --region RegionOne baremetal-introspection admin http://192.168.0.2:5050/v1 \u5b89\u88c5\u7ec4\u4ef6 dnf install openstack-ironic-api openstack-ironic-conductor python3-ironicclient \u914d\u7f6eironic-api\u670d\u52a1 \u914d\u7f6e\u6587\u4ef6\u8def\u5f84/etc/ironic/ironic.conf \u901a\u8fc7 connection \u9009\u9879\u914d\u7f6e\u6570\u636e\u5e93\u7684\u4f4d\u7f6e\uff0c\u5982\u4e0b\u6240\u793a\uff0c\u66ff\u6362 IRONIC_DBPASS \u4e3a ironic \u7528\u6237\u7684\u5bc6\u7801\uff0c\u66ff\u6362 DB_IP \u4e3aDB\u670d\u52a1\u5668\u6240\u5728\u7684IP\u5730\u5740\uff1a [database] # The SQ LAlchemy connection string used to connect to the # database (string value) # connection = mysql+pymysql://ironic:IRONIC_DBPASS@DB_IP/ironic connection = mysql+pymysql://ironic:IRONIC_DBPASS@controller/ironic \u901a\u8fc7\u4ee5\u4e0b\u9009\u9879\u914d\u7f6eironic-api\u670d\u52a1\u4f7f\u7528RabbitMQ\u6d88\u606f\u4ee3\u7406\uff0c\u66ff\u6362 RPC_* \u4e3aRabbitMQ\u7684\u8be6\u7ec6\u5730\u5740\u548c\u51ed\u8bc1 [DEFAULT] # A URL representing the messaging driver to use and its full # configuration. (string value) # transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/ transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/ \u7528\u6237\u4e5f\u53ef\u81ea\u884c\u4f7f\u7528json-rpc\u65b9\u5f0f\u66ff\u6362rabbitmq \u914d\u7f6eironic-api\u670d\u52a1\u4f7f\u7528\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u7684\u51ed\u8bc1\uff0c\u66ff\u6362 PUBLIC_IDENTITY_IP \u4e3a\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u5668\u7684\u516c\u5171IP\uff0c\u66ff\u6362 PRIVATE_IDENTITY_IP \u4e3a\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u5668\u7684\u79c1\u6709IP\uff0c\u66ff\u6362 IRONIC_PASS \u4e3a\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u4e2d ironic \u7528\u6237\u7684\u5bc6\u7801\uff0c\u66ff\u6362 RABBIT_PASS \u4e3aRabbitMQ\u4e2dopenstack\u8d26\u6237\u7684\u5bc6\u7801\u3002\uff1a [DEFAULT] # Authentication strategy used by ironic-api: one of # \"keystone\" or \"noauth\". \"noauth\" should not be used in a # production environment because all authentication will be # disabled. 
(string value) auth_strategy=keystone host = controller memcache_servers = controller:11211 enabled_network_interfaces = flat,noop,neutron default_network_interface = noop enabled_hardware_types = ipmi enabled_boot_interfaces = pxe enabled_deploy_interfaces = direct default_deploy_interface = direct enabled_inspect_interfaces = inspector enabled_management_interfaces = ipmitool enabled_power_interfaces = ipmitool enabled_rescue_interfaces = no-rescue,agent isolinux_bin = /usr/share/syslinux/isolinux.bin logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s % (user_identity)s] %(instance)s%(message)s [keystone_authtoken] # Authentication type to load (string value) auth_type=password # Complete public Identity API endpoint (string value) # www_authenticate_uri=http://PUBLIC_IDENTITY_IP:5000 www_authenticate_uri=http://controller:5000 # Complete admin Identity API endpoint. (string value) # auth_url=http://PRIVATE_IDENTITY_IP:5000 auth_url=http://controller:5000 # Service username. (string value) username=ironic # Service account password. (string value) password=IRONIC_PASS # Service tenant name. (string value) project_name=service # Domain name containing project (string value) project_domain_name=Default # User's domain name (string value) user_domain_name=Default [agent] deploy_logs_collect = always deploy_logs_local_path = /var/log/ironic/deploy deploy_logs_storage_backend = local image_download_source = http stream_raw_images = false force_raw_images = false verify_ca = False [oslo_concurrency] [oslo_messaging_notifications] transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/ topics = notifications driver = messagingv2 [oslo_messaging_rabbit] amqp_durable_queues = True rabbit_ha_queues = True [pxe] ipxe_enabled = false pxe_append_params = nofb nomodeset vga=normal coreos.autologin ipa-insecure=1 image_cache_size = 204800 tftp_root=/var/lib/tftpboot/cephfs/ tftp_master_path=/var/lib/tftpboot/cephfs/master_images [dhcp] dhcp_provider = none \u521b\u5efa\u88f8\u91d1\u5c5e\u670d\u52a1\u6570\u636e\u5e93\u8868 ironic-dbsync --config-file /etc/ironic/ironic.conf create_schema \u91cd\u542fironic-api\u670d\u52a1 sudo systemctl restart openstack-ironic-api \u914d\u7f6eironic-conductor\u670d\u52a1 \u5982\u4e0b\u4e3aironic-conductor\u670d\u52a1\u81ea\u8eab\u7684\u6807\u51c6\u914d\u7f6e\uff0cironic-conductor\u670d\u52a1\u53ef\u4ee5\u4e0eironic-api\u670d\u52a1\u5206\u5e03\u4e8e\u4e0d\u540c\u8282\u70b9\uff0c\u672c\u6307\u5357\u4e2d\u5747\u90e8\u7f72\u4e0e\u63a7\u5236\u8282\u70b9\uff0c\u6240\u4ee5\u91cd\u590d\u7684\u914d\u7f6e\u9879\u53ef\u8df3\u8fc7\u3002 \u66ff\u6362\u4f7f\u7528conductor\u670d\u52a1\u6240\u5728host\u7684IP\u914d\u7f6emy_ip\uff1a [DEFAULT] # IP address of this host. If unset, will determine the IP # programmatically. If unable to do so, will use \"127.0.0.1\". # (string value) # my_ip=HOST_IP my_ip = 192.168.0.2 \u914d\u7f6e\u6570\u636e\u5e93\u7684\u4f4d\u7f6e\uff0cironic-conductor\u5e94\u8be5\u4f7f\u7528\u548cironic-api\u76f8\u540c\u7684\u914d\u7f6e\u3002\u66ff\u6362 IRONIC_DBPASS \u4e3a ironic \u7528\u6237\u7684\u5bc6\u7801\uff1a [database] # The SQLAlchemy connection string to use to connect to the # database. 
(string value) connection = mysql+pymysql://ironic:IRONIC_DBPASS@controller/ironic \u901a\u8fc7\u4ee5\u4e0b\u9009\u9879\u914d\u7f6eironic-api\u670d\u52a1\u4f7f\u7528RabbitMQ\u6d88\u606f\u4ee3\u7406\uff0cironic-conductor\u5e94\u8be5\u4f7f\u7528\u548cironic-api\u76f8\u540c\u7684\u914d\u7f6e\uff0c\u66ff\u6362 RABBIT_PASS \u4e3aRabbitMQ\u4e2dopenstack\u8d26\u6237\u7684\u5bc6\u7801\uff1a [DEFAULT] # A URL representing the messaging driver to use and its full # configuration. (string value) transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/ \u7528\u6237\u4e5f\u53ef\u81ea\u884c\u4f7f\u7528json-rpc\u65b9\u5f0f\u66ff\u6362rabbitmq \u914d\u7f6e\u51ed\u8bc1\u8bbf\u95ee\u5176\u4ed6OpenStack\u670d\u52a1 \u4e3a\u4e86\u4e0e\u5176\u4ed6OpenStack\u670d\u52a1\u8fdb\u884c\u901a\u4fe1\uff0c\u88f8\u91d1\u5c5e\u670d\u52a1\u5728\u8bf7\u6c42\u5176\u4ed6\u670d\u52a1\u65f6\u9700\u8981\u4f7f\u7528\u670d\u52a1\u7528\u6237\u4e0eOpenStack Identity\u670d\u52a1\u8fdb\u884c\u8ba4\u8bc1\u3002\u8fd9\u4e9b\u7528\u6237\u7684\u51ed\u636e\u5fc5\u987b\u5728\u4e0e\u76f8\u5e94\u670d\u52a1\u76f8\u5173\u7684\u6bcf\u4e2a\u914d\u7f6e\u6587\u4ef6\u4e2d\u8fdb\u884c\u914d\u7f6e\u3002 [neutron] - \u8bbf\u95eeOpenStack\u7f51\u7edc\u670d\u52a1 [glance] - \u8bbf\u95eeOpenStack\u955c\u50cf\u670d\u52a1 [swift] - \u8bbf\u95eeOpenStack\u5bf9\u8c61\u5b58\u50a8\u670d\u52a1 [cinder] - \u8bbf\u95eeOpenStack\u5757\u5b58\u50a8\u670d\u52a1 [inspector] - \u8bbf\u95eeOpenStack\u88f8\u91d1\u5c5eintrospection\u670d\u52a1 [service_catalog] - \u4e00\u4e2a\u7279\u6b8a\u9879\u7528\u4e8e\u4fdd\u5b58\u88f8\u91d1\u5c5e\u670d\u52a1\u4f7f\u7528\u7684\u51ed\u8bc1\uff0c\u8be5\u51ed\u8bc1\u7528\u4e8e\u53d1\u73b0\u6ce8\u518c\u5728OpenStack\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u76ee\u5f55\u4e2d\u7684\u81ea\u5df1\u7684API URL\u7aef\u70b9 \u7b80\u5355\u8d77\u89c1\uff0c\u53ef\u4ee5\u5bf9\u6240\u6709\u670d\u52a1\u4f7f\u7528\u540c\u4e00\u4e2a\u670d\u52a1\u7528\u6237\u3002\u4e3a\u4e86\u5411\u540e\u517c\u5bb9\uff0c\u8be5\u7528\u6237\u5e94\u8be5\u548cironic-api\u670d\u52a1\u7684[keystone_authtoken]\u6240\u914d\u7f6e\u7684\u4e3a\u540c\u4e00\u4e2a\u7528\u6237\u3002\u4f46\u8fd9\u4e0d\u662f\u5fc5\u987b\u7684\uff0c\u4e5f\u53ef\u4ee5\u4e3a\u6bcf\u4e2a\u670d\u52a1\u521b\u5efa\u5e76\u914d\u7f6e\u4e0d\u540c\u7684\u670d\u52a1\u7528\u6237\u3002 \u5728\u4e0b\u9762\u7684\u793a\u4f8b\u4e2d\uff0c\u7528\u6237\u8bbf\u95eeOpenStack\u7f51\u7edc\u670d\u52a1\u7684\u8eab\u4efd\u9a8c\u8bc1\u4fe1\u606f\u914d\u7f6e\u4e3a\uff1a \u7f51\u7edc\u670d\u52a1\u90e8\u7f72\u5728\u540d\u4e3aRegionOne\u7684\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u57df\u4e2d\uff0c\u4ec5\u5728\u670d\u52a1\u76ee\u5f55\u4e2d\u6ce8\u518c\u516c\u5171\u7aef\u70b9\u63a5\u53e3 \u8bf7\u6c42\u65f6\u4f7f\u7528\u7279\u5b9a\u7684CA SSL\u8bc1\u4e66\u8fdb\u884cHTTPS\u8fde\u63a5 \u4e0eironic-api\u670d\u52a1\u914d\u7f6e\u76f8\u540c\u7684\u670d\u52a1\u7528\u6237 \u52a8\u6001\u5bc6\u7801\u8ba4\u8bc1\u63d2\u4ef6\u57fa\u4e8e\u5176\u4ed6\u9009\u9879\u53d1\u73b0\u5408\u9002\u7684\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1API\u7248\u672c \u66ff\u6362IRONIC_PASS\u4e3aironic\u7528\u6237\u5bc6\u7801\u3002 [neutron] # Authentication type to load (string value) auth_type = password # Authentication URL (string value) auth_url=https://IDENTITY_IP:5000/ # Username (string value) username=ironic # User's password (string value) password=IRONIC_PASS # Project name to scope to (string value) project_name=service # Domain ID containing project (string value) project_domain_id=default # User's domain id (string value) user_domain_id=default # PEM encoded 
Certificate Authority to use when verifying # HTTPs connections. (string value) cafile=/opt/stack/data/ca-bundle.pem # The default region_name for endpoint URL discovery. (string # value) region_name = RegionOne # List of interfaces, in order of preference, for endpoint # URL. (list value) valid_interfaces=public # \u5176\u4ed6\u53c2\u8003\u914d\u7f6e [glance] endpoint_override = http://controller:9292 www_authenticate_uri = http://controller:5000 auth_url = http://controller:5000 auth_type = password username = ironic password = IRONIC_PASS project_domain_name = default user_domain_name = default region_name = RegionOne project_name = service [service_catalog] region_name = RegionOne project_domain_id = default user_domain_id = default project_name = service password = IRONIC_PASS username = ironic auth_url = http://controller:5000 auth_type = password \u9ed8\u8ba4\u60c5\u51b5\u4e0b\uff0c\u4e3a\u4e86\u4e0e\u5176\u4ed6\u670d\u52a1\u8fdb\u884c\u901a\u4fe1\uff0c\u88f8\u91d1\u5c5e\u670d\u52a1\u4f1a\u5c1d\u8bd5\u901a\u8fc7\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u7684\u670d\u52a1\u76ee\u5f55\u53d1\u73b0\u8be5\u670d\u52a1\u5408\u9002\u7684\u7aef\u70b9\u3002\u5982\u679c\u5e0c\u671b\u5bf9\u4e00\u4e2a\u7279\u5b9a\u670d\u52a1\u4f7f\u7528\u4e00\u4e2a\u4e0d\u540c\u7684\u7aef\u70b9\uff0c\u5219\u5728\u88f8\u91d1\u5c5e\u670d\u52a1\u7684\u914d\u7f6e\u6587\u4ef6\u4e2d\u901a\u8fc7endpoint_override\u9009\u9879\u8fdb\u884c\u6307\u5b9a\uff1a [neutron] endpoint_override = \u914d\u7f6e\u5141\u8bb8\u7684\u9a71\u52a8\u7a0b\u5e8f\u548c\u786c\u4ef6\u7c7b\u578b \u901a\u8fc7\u8bbe\u7f6eenabled_hardware_types\u8bbe\u7f6eironic-conductor\u670d\u52a1\u5141\u8bb8\u4f7f\u7528\u7684\u786c\u4ef6\u7c7b\u578b\uff1a [DEFAULT] enabled_hardware_types = ipmi \u914d\u7f6e\u786c\u4ef6\u63a5\u53e3\uff1a enabled_boot_interfaces = pxe enabled_deploy_interfaces = direct,iscsi enabled_inspect_interfaces = inspector enabled_management_interfaces = ipmitool enabled_power_interfaces = ipmitool \u914d\u7f6e\u63a5\u53e3\u9ed8\u8ba4\u503c\uff1a [DEFAULT] default_deploy_interface = direct default_network_interface = neutron \u5982\u679c\u542f\u7528\u4e86\u4efb\u4f55\u4f7f\u7528Direct deploy\u7684\u9a71\u52a8\uff0c\u5fc5\u987b\u5b89\u88c5\u548c\u914d\u7f6e\u955c\u50cf\u670d\u52a1\u7684Swift\u540e\u7aef\u3002Ceph\u5bf9\u8c61\u7f51\u5173(RADOS\u7f51\u5173)\u4e5f\u652f\u6301\u4f5c\u4e3a\u955c\u50cf\u670d\u52a1\u7684\u540e\u7aef\u3002 \u91cd\u542fironic-conductor\u670d\u52a1 sudo systemctl restart openstack-ironic-conductor \u914d\u7f6eironic-inspector\u670d\u52a1 \u5b89\u88c5\u7ec4\u4ef6 dnf install openstack-ironic-inspector \u521b\u5efa\u6570\u636e\u5e93 # mysql -u root -p MariaDB [(none)]> CREATE DATABASE ironic_inspector CHARACTER SET utf8; MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic_inspector.* TO 'ironic_inspector'@'localhost' \\ IDENTIFIED BY 'IRONIC_INSPECTOR_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic_inspector.* TO 'ironic_inspector'@'%' \\ IDENTIFIED BY 'IRONIC_INSPECTOR_DBPASS'; MariaDB [(none)]> exit Bye \u914d\u7f6e /etc/ironic-inspector/inspector.conf \u901a\u8fc7 connection \u9009\u9879\u914d\u7f6e\u6570\u636e\u5e93\u7684\u4f4d\u7f6e\uff0c\u5982\u4e0b\u6240\u793a\uff0c\u66ff\u6362 IRONIC_INSPECTOR_DBPASS \u4e3a ironic_inspector \u7528\u6237\u7684\u5bc6\u7801 [database] backend = sqlalchemy connection = mysql+pymysql://ironic_inspector:IRONIC_INSPECTOR_DBPASS@controller/ironic_inspector min_pool_size = 100 max_pool_size = 500 pool_timeout = 30 max_retries = 5 max_overflow = 200 db_retry_interval = 2 
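Once ironic-api and ironic-conductor are both running, the enabled hardware types can be checked through the bare metal CLI; a minimal sketch, assuming the `python3-ironicclient` package installed earlier and the admin credentials file:

```shell
source ~/.admin-openrc
# "ipmi" should appear among the supported drivers
openstack baremetal driver list
# No nodes are enrolled yet, so an empty list is expected here
openstack baremetal node list
```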
db_inc_retry_interval = True db_max_retry_interval = 2 db_max_retries = 5 \u914d\u7f6e\u6d88\u606f\u961f\u5217\u901a\u4fe1\u5730\u5740 [DEFAULT] transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/ \u8bbe\u7f6ekeystone\u8ba4\u8bc1 [DEFAULT] auth_strategy = keystone timeout = 900 rootwrap_config = /etc/ironic-inspector/rootwrap.conf logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s % (user_identity)s] %(instance)s%(message)s log_dir = /var/log/ironic-inspector state_path = /var/lib/ironic-inspector use_stderr = False [ironic] api_endpoint = http://IRONIC_API_HOST_ADDRRESS:6385 auth_type = password auth_url = http://PUBLIC_IDENTITY_IP:5000 auth_strategy = keystone ironic_url = http://IRONIC_API_HOST_ADDRRESS:6385 os_region = RegionOne project_name = service project_domain_name = Default user_domain_name = Default username = IRONIC_SERVICE_USER_NAME password = IRONIC_SERVICE_USER_PASSWORD [keystone_authtoken] auth_type = password auth_url = http://controller:5000 www_authenticate_uri = http://controller:5000 project_domain_name = default user_domain_name = default project_name = service username = ironic_inspector password = IRONICPASSWD region_name = RegionOne memcache_servers = controller:11211 token_cache_time = 300 [processing] add_ports = active processing_hooks = $default_processing_hooks,local_link_connection,lldp_basic ramdisk_logs_dir = /var/log/ironic-inspector/ramdisk always_store_ramdisk_logs = true store_data =none power_off = false [pxe_filter] driver = iptables [capabilities] boot_mode=True \u914d\u7f6eironic inspector dnsmasq\u670d\u52a1 # \u914d\u7f6e\u6587\u4ef6\u5730\u5740\uff1a/etc/ironic-inspector/dnsmasq.conf port=0 interface=enp3s0 #\u66ff\u6362\u4e3a\u5b9e\u9645\u76d1\u542c\u7f51\u7edc\u63a5\u53e3 dhcp-range=192.168.0.40,192.168.0.50 #\u66ff\u6362\u4e3a\u5b9e\u9645dhcp\u5730\u5740\u8303\u56f4 bind-interfaces enable-tftp dhcp-match=set:efi,option:client-arch,7 dhcp-match=set:efi,option:client-arch,9 dhcp-match=aarch64, option:client-arch,11 dhcp-boot=tag:aarch64,grubaa64.efi dhcp-boot=tag:!aarch64,tag:efi,grubx64.efi dhcp-boot=tag:!aarch64,tag:!efi,pxelinux.0 tftp-root=/tftpboot #\u66ff\u6362\u4e3a\u5b9e\u9645tftpboot\u76ee\u5f55 log-facility=/var/log/dnsmasq.log \u5173\u95edironic provision\u7f51\u7edc\u5b50\u7f51\u7684dhcp openstack subnet set --no-dhcp 72426e89-f552-4dc4-9ac7-c4e131ce7f3c \u521d\u59cb\u5316ironic-inspector\u670d\u52a1\u7684\u6570\u636e\u5e93 ironic-inspector-dbsync --config-file /etc/ironic-inspector/inspector.conf upgrade \u542f\u52a8\u670d\u52a1 systemctl enable --now openstack-ironic-inspector.service systemctl enable --now openstack-ironic-inspector-dnsmasq.service \u914d\u7f6ehttpd\u670d\u52a1 \u521b\u5efaironic\u8981\u4f7f\u7528\u7684httpd\u7684root\u76ee\u5f55\u5e76\u8bbe\u7f6e\u5c5e\u4e3b\u5c5e\u7ec4\uff0c\u76ee\u5f55\u8def\u5f84\u8981\u548c/etc/ironic/ironic.conf\u4e2d[deploy]\u7ec4\u4e2dhttp_root \u914d\u7f6e\u9879\u6307\u5b9a\u7684\u8def\u5f84\u8981\u4e00\u81f4\u3002 mkdir -p /var/lib/ironic/httproot chown ironic.ironic /var/lib/ironic/httproot \u5b89\u88c5\u548c\u914d\u7f6ehttpd\u670d\u52a1 \u5b89\u88c5httpd\u670d\u52a1\uff0c\u5df2\u6709\u8bf7\u5ffd\u7565 dnf install httpd -y \u521b\u5efa/etc/httpd/conf.d/openstack-ironic-httpd.conf\u6587\u4ef6\uff0c\u5185\u5bb9\u5982\u4e0b\uff1a Listen 8080 ServerName ironic.openeuler.com ErrorLog \"/var/log/httpd/openstack-ironic-httpd-error_log\" CustomLog \"/var/log/httpd/openstack-ironic-httpd-access_log\" \"%h 
%l %u %t \\\"%r\\\" %>s %b\" DocumentRoot \"/var/lib/ironic/httproot\" Options Indexes FollowSymLinks Require all granted LogLevel warn AddDefaultCharset UTF-8 EnableSendfile on \u6ce8\u610f\u76d1\u542c\u7684\u7aef\u53e3\u8981\u548c/etc/ironic/ironic.conf\u91cc[deploy]\u9009\u9879\u4e2dhttp_url\u914d\u7f6e\u9879\u4e2d\u6307\u5b9a\u7684\u7aef\u53e3\u4e00\u81f4\u3002 \u91cd\u542fhttpd\u670d\u52a1\u3002 systemctl restart httpd deploy ramdisk\u955c\u50cf\u4e0b\u8f7d\u6216\u5236\u4f5c \u90e8\u7f72\u4e00\u4e2a\u88f8\u673a\u8282\u70b9\u603b\u5171\u9700\u8981\u4e24\u7ec4\u955c\u50cf\uff1adeploy ramdisk images\u548cuser images\u3002Deploy ramdisk images\u4e0a\u8fd0\u884c\u6709ironic-python-agent(IPA)\u670d\u52a1\uff0cIronic\u901a\u8fc7\u5b83\u8fdb\u884c\u88f8\u673a\u8282\u70b9\u7684\u73af\u5883\u51c6\u5907\u3002User images\u662f\u6700\u7ec8\u88ab\u5b89\u88c5\u88f8\u673a\u8282\u70b9\u4e0a\uff0c\u4f9b\u7528\u6237\u4f7f\u7528\u7684\u955c\u50cf\u3002 ramdisk\u955c\u50cf\u652f\u6301\u901a\u8fc7ironic-python-agent-builder\u6216disk-image-builder\u5de5\u5177\u5236\u4f5c\u3002\u7528\u6237\u4e5f\u53ef\u4ee5\u81ea\u884c\u9009\u62e9\u5176\u4ed6\u5de5\u5177\u5236\u4f5c\u3002\u82e5\u4f7f\u7528\u539f\u751f\u5de5\u5177\uff0c\u5219\u9700\u8981\u5b89\u88c5\u5bf9\u5e94\u7684\u8f6f\u4ef6\u5305\u3002 \u5177\u4f53\u7684\u4f7f\u7528\u65b9\u6cd5\u53ef\u4ee5\u53c2\u8003 \u5b98\u65b9\u6587\u6863 \uff0c\u540c\u65f6\u5b98\u65b9\u4e5f\u6709\u63d0\u4f9b\u5236\u4f5c\u597d\u7684deploy\u955c\u50cf\uff0c\u53ef\u5c1d\u8bd5\u4e0b\u8f7d\u3002 \u4e0b\u6587\u4ecb\u7ecd\u901a\u8fc7ironic-python-agent-builder\u6784\u5efaironic\u4f7f\u7528\u7684deploy\u955c\u50cf\u7684\u5b8c\u6574\u8fc7\u7a0b\u3002 \u5b89\u88c5 ironic-python-agent-builder dnf install python3-ironic-python-agent-builder \u6216 pip3 install ironic-python-agent-builder dnf install qemu-img git \u5236\u4f5c\u955c\u50cf \u57fa\u672c\u7528\u6cd5\uff1a usage: ironic-python-agent-builder [-h] [-r RELEASE] [-o OUTPUT] [-e ELEMENT] [-b BRANCH] [-v] [--lzma] [--extra-args EXTRA_ARGS] [--elements-path ELEMENTS_PATH] distribution positional arguments: distribution Distribution to use options: -h, --help show this help message and exit -r RELEASE, --release RELEASE Distribution release to use -o OUTPUT, --output OUTPUT Output base file name -e ELEMENT, --element ELEMENT Additional DIB element to use -b BRANCH, --branch BRANCH If set, override the branch that is used for ironic-python-agent and requirements -v, --verbose Enable verbose logging in diskimage-builder --lzma Use lzma compression for smaller images --extra-args EXTRA_ARGS Extra arguments to pass to diskimage-builder --elements-path ELEMENTS_PATH Path(s) to custom DIB elements separated by a colon \u64cd\u4f5c\u5b9e\u4f8b\uff1a # -o\u9009\u9879\u6307\u5b9a\u751f\u6210\u7684\u955c\u50cf\u540d # ubuntu\u6307\u5b9a\u751f\u6210ubuntu\u7cfb\u7edf\u7684\u955c\u50cf ironic-python-agent-builder -o my-ubuntu-ipa ubuntu \u53ef\u901a\u8fc7\u8bbe\u7f6e ARCH \u73af\u5883\u53d8\u91cf\uff08\u9ed8\u8ba4\u4e3aamd64\uff09\u6307\u5b9a\u6240\u6784\u5efa\u955c\u50cf\u7684\u67b6\u6784\u3002\u5982\u679c\u662f arm \u67b6\u6784\uff0c\u9700\u8981\u6dfb\u52a0\uff1a export ARCH=aarch64 \u5141\u8bb8ssh\u767b\u5f55 \u521d\u59cb\u5316\u73af\u5883\u53d8\u91cf,\u8bbe\u7f6e\u7528\u6237\u540d\u3001\u5bc6\u7801\uff0c\u542f\u7528 sodo \u6743\u9650\uff1b\u5e76\u6dfb\u52a0 -e \u9009\u9879\u4f7f\u7528\u76f8\u5e94\u7684DIB\u5143\u7d20\u3002\u5236\u4f5c\u955c\u50cf\u64cd\u4f5c\u5982\u4e0b\uff1a export DIB_DEV_USER_USERNAME=ipa \\ export DIB_DEV_USER_PWDLESS_SUDO=yes 
\\ export DIB_DEV_USER_PASSWORD='123' ironic-python-agent-builder -o my-ssh-ubuntu-ipa -e selinux-permissive -e devuser ubuntu \u6307\u5b9a\u4ee3\u7801\u4ed3\u5e93 \u521d\u59cb\u5316\u5bf9\u5e94\u7684\u73af\u5883\u53d8\u91cf\uff0c\u7136\u540e\u5236\u4f5c\u955c\u50cf\uff1a # \u76f4\u63a5\u4ecegerrit\u4e0aclone\u4ee3\u7801 DIB_REPOLOCATION_ironic_python_agent=https://opendev.org/openstack/ironic-python-agent DIB_REPOREF_ironic_python_agent=stable/2023.1 # \u6307\u5b9a\u672c\u5730\u4ed3\u5e93\u53ca\u5206\u652f DIB_REPOLOCATION_ironic_python_agent=/home/user/path/to/repo DIB_REPOREF_ironic_python_agent=my-test-branch ironic-python-agent-builder ubuntu \u53c2\u8003\uff1a source-repositories \u3002 \u6ce8\u610f \u539f\u751f\u7684openstack\u91cc\u7684pxe\u914d\u7f6e\u6587\u4ef6\u7684\u6a21\u7248\u4e0d\u652f\u6301arm64\u67b6\u6784\uff0c\u9700\u8981\u81ea\u5df1\u5bf9\u539f\u751fopenstack\u4ee3\u7801\u8fdb\u884c\u4fee\u6539\uff1a \u5728W\u7248\u4e2d\uff0c\u793e\u533a\u7684ironic\u4ecd\u7136\u4e0d\u652f\u6301arm64\u4f4d\u7684uefi pxe\u542f\u52a8\uff0c\u8868\u73b0\u4e3a\u751f\u6210\u7684grub.cfg\u6587\u4ef6(\u4e00\u822c\u4f4d\u4e8e/tftpboot/\u4e0b)\u683c\u5f0f\u4e0d\u5bf9\u800c\u5bfc\u81f4pxe\u542f\u52a8\u5931\u8d25\u3002 \u751f\u6210\u7684\u9519\u8bef\u914d\u7f6e\u6587\u4ef6\uff1a \u5982\u4e0a\u56fe\u6240\u793a\uff0carm\u67b6\u6784\u91cc\u5bfb\u627evmlinux\u548cramdisk\u955c\u50cf\u7684\u547d\u4ee4\u5206\u522b\u662flinux\u548cinitrd\uff0c\u4e0a\u56fe\u6240\u793a\u7684\u6807\u7ea2\u547d\u4ee4\u662fx86\u67b6\u6784\u4e0b\u7684uefi pxe\u542f\u52a8\u3002 \u9700\u8981\u7528\u6237\u5bf9\u751f\u6210grub.cfg\u7684\u4ee3\u7801\u903b\u8f91\u81ea\u884c\u4fee\u6539\u3002 ironic\u5411ipa\u53d1\u9001\u67e5\u8be2\u547d\u4ee4\u6267\u884c\u72b6\u6001\u8bf7\u6c42\u7684tls\u62a5\u9519\uff1a \u5f53\u524d\u7248\u672c\u7684ipa\u548cironic\u9ed8\u8ba4\u90fd\u4f1a\u5f00\u542ftls\u8ba4\u8bc1\u7684\u65b9\u5f0f\u5411\u5bf9\u65b9\u53d1\u9001\u8bf7\u6c42\uff0c\u8ddf\u636e\u5b98\u7f51\u7684\u8bf4\u660e\u8fdb\u884c\u5173\u95ed\u5373\u53ef\u3002 \u4fee\u6539ironic\u914d\u7f6e\u6587\u4ef6(/etc/ironic/ironic.conf)\u4e0b\u9762\u7684\u914d\u7f6e\u4e2d\u6dfb\u52a0ipa-insecure=1\uff1a [agent] verify_ca = False [pxe] pxe_append_params = nofb nomodeset vga=normal coreos.autologin ipa-insecure=1 ramdisk\u955c\u50cf\u4e2d\u6dfb\u52a0ipa\u914d\u7f6e\u6587\u4ef6/etc/ironic_python_agent/ironic_python_agent.conf\u5e76\u914d\u7f6etls\u7684\u914d\u7f6e\u5982\u4e0b\uff1a /etc/ironic_python_agent/ironic_python_agent.conf (\u9700\u8981\u63d0\u524d\u521b\u5efa/etc/ ironic_python_agent\u76ee\u5f55\uff09 [DEFAULT] enable_auto_tls = False \u8bbe\u7f6e\u6743\u9650\uff1a chown -R ipa.ipa /etc/ironic_python_agent/ ramdisk\u955c\u50cf\u4e2d\u4fee\u6539ipa\u670d\u52a1\u7684\u670d\u52a1\u542f\u52a8\u6587\u4ef6\uff0c\u6dfb\u52a0\u914d\u7f6e\u6587\u4ef6\u9009\u9879 \u7f16\u8f91/usr/lib/systemd/system/ironic-python-agent.service\u6587\u4ef6 [Unit] Description=Ironic Python Agent After=network-online.target [Service] ExecStartPre=/sbin/modprobe vfat ExecStart=/usr/local/bin/ironic-python-agent --config-file /etc/ ironic_python_agent/ironic_python_agent.conf Restart=always RestartSec=30s [Install] 
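After the deploy ramdisk has been built, it normally has to be registered in Glance so that node `driver_info` can reference it. A sketch under the assumption that the builder was run with `-o my-ubuntu-ipa` as in the example above, which produces `my-ubuntu-ipa.kernel` and `my-ubuntu-ipa.initramfs`:

```shell
source ~/.admin-openrc
openstack image create --disk-format aki --container-format aki \
    --public --file my-ubuntu-ipa.kernel deploy-kernel
openstack image create --disk-format ari --container-format ari \
    --public --file my-ubuntu-ipa.initramfs deploy-ramdisk
openstack image list
```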
WantedBy=multi-user.target","title":"Ironic"},{"location":"install/openEuler-24.03-LTS/OpenStack-antelope/#trove","text":"Trove\u662fOpenStack\u7684\u6570\u636e\u5e93\u670d\u52a1\uff0c\u5982\u679c\u7528\u6237\u4f7f\u7528OpenStack\u63d0\u4f9b\u7684\u6570\u636e\u5e93\u670d\u52a1\u5219\u63a8\u8350\u4f7f\u7528\u8be5\u7ec4\u4ef6\u3002\u5426\u5219\uff0c\u53ef\u4ee5\u4e0d\u7528\u5b89\u88c5\u3002 Controller\u8282\u70b9 \u521b\u5efa\u6570\u636e\u5e93\u3002 \u6570\u636e\u5e93\u670d\u52a1\u5728\u6570\u636e\u5e93\u4e2d\u5b58\u50a8\u4fe1\u606f\uff0c\u521b\u5efa\u4e00\u4e2atrove\u7528\u6237\u53ef\u4ee5\u8bbf\u95ee\u7684trove\u6570\u636e\u5e93\uff0c\u66ff\u6362TROVE_DBPASS\u4e3a\u5408\u9002\u7684\u5bc6\u7801\u3002 CREATE DATABASE trove CHARACTER SET utf8; GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'localhost' IDENTIFIED BY 'TROVE_DBPASS'; GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'%' IDENTIFIED BY 'TROVE_DBPASS'; \u521b\u5efa\u670d\u52a1\u51ed\u8bc1\u4ee5\u53caAPI\u7aef\u70b9\u3002 \u521b\u5efa\u670d\u52a1\u51ed\u8bc1\u3002 # \u521b\u5efatrove\u7528\u6237 openstack user create --domain default --password-prompt trove # \u6dfb\u52a0admin\u89d2\u8272 openstack role add --project service --user trove admin # \u521b\u5efadatabase\u670d\u52a1 openstack service create --name trove --description \"Database service\" database \u521b\u5efaAPI\u7aef\u70b9\u3002 openstack endpoint create --region RegionOne database public http://controller:8779/v1.0/%\\(tenant_id\\)s openstack endpoint create --region RegionOne database internal http://controller:8779/v1.0/%\\(tenant_id\\)s openstack endpoint create --region RegionOne database admin http://controller:8779/v1.0/%\\(tenant_id\\)s \u5b89\u88c5Trove\u3002 dnf install openstack-trove python-troveclient \u4fee\u6539\u914d\u7f6e\u6587\u4ef6\u3002 \u7f16\u8f91/etc/trove/trove.conf\u3002 [DEFAULT] bind_host=192.168.0.2 log_dir = /var/log/trove network_driver = trove.network.neutron.NeutronDriver network_label_regex=.* management_security_groups = nova_keypair = trove-mgmt default_datastore = mysql taskmanager_manager = trove.taskmanager.manager.Manager trove_api_workers = 5 transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/ reboot_time_out = 300 usage_timeout = 900 agent_call_high_timeout = 1200 use_syslog = False debug = True [database] connection = mysql+pymysql://trove:TROVE_DBPASS@controller/trove [keystone_authtoken] auth_url = http://controller:5000/v3/ auth_type = password project_domain_name = Default project_name = service user_domain_name = Default password = trove username = TROVE_PASS [service_credentials] auth_url = http://controller:5000/v3/ region_name = RegionOne project_name = service project_domain_name = Default user_domain_name = Default username = trove password = TROVE_PASS [mariadb] tcp_ports = 3306,4444,4567,4568 [mysql] tcp_ports = 3306 [postgresql] tcp_ports = 5432 \u89e3\u91ca\uff1a [Default] \u5206\u7ec4\u4e2d bind_host \u914d\u7f6e\u4e3aTrove\u63a7\u5236\u8282\u70b9\u7684IP\u3002\\ transport_url \u4e3a RabbitMQ \u8fde\u63a5\u4fe1\u606f\uff0c RABBIT_PASS \u66ff\u6362\u4e3aRabbitMQ\u7684\u5bc6\u7801\u3002\\ [database] \u5206\u7ec4\u4e2d\u7684 connection \u4e3a\u524d\u9762\u5728mysql\u4e2d\u4e3aTrove\u521b\u5efa\u7684\u6570\u636e\u5e93\u4fe1\u606f\u3002\\ Trove\u7684\u7528\u6237\u4fe1\u606f\u4e2d TROVE_PASSWORD \u66ff\u6362\u4e3a\u5b9e\u9645trove\u7528\u6237\u7684\u5bc6\u7801\u3002 \u7f16\u8f91/etc/trove/trove-guestagent.conf\u3002 [DEFAULT] log_file = trove-guestagent.log log_dir = /var/log/trove/ ignore_users = os_admin 
control_exchange = trove transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/ rpc_backend = rabbit command_process_timeout = 60 use_syslog = False debug = True [service_credentials] auth_url = http://controller:5000/v3/ region_name = RegionOne project_name = service password = TROVE_PASS project_domain_name = Default user_domain_name = Default username = trove [mysql] docker_image = your-registry/your-repo/mysql backup_docker_image = your-registry/your-repo/db-backup-mysql:1.1.0 \u89e3\u91ca\uff1a guestagent \u662ftrove\u4e2d\u4e00\u4e2a\u72ec\u7acb\u7ec4\u4ef6\uff0c\u9700\u8981\u9884\u5148\u5185\u7f6e\u5230Trove\u901a\u8fc7Nova\u521b\u5efa\u7684\u865a\u62df\u673a\u955c\u50cf\u4e2d\uff0c\u5728\u521b\u5efa\u597d\u6570\u636e\u5e93\u5b9e\u4f8b\u540e\uff0c\u4f1a\u8d77guestagent\u8fdb\u7a0b\uff0c\u8d1f\u8d23\u901a\u8fc7\u6d88\u606f\u961f\u5217\uff08RabbitMQ\uff09\u5411Trove\u4e0a\u62a5\u5fc3\u8df3\uff0c\u56e0\u6b64\u9700\u8981\u914d\u7f6eRabbitMQ\u7684\u7528\u6237\u548c\u5bc6\u7801\u4fe1\u606f\u3002\\ transport_url \u4e3a RabbitMQ \u8fde\u63a5\u4fe1\u606f\uff0c RABBIT_PASS \u66ff\u6362\u4e3aRabbitMQ\u7684\u5bc6\u7801\u3002\\ Trove\u7684\u7528\u6237\u4fe1\u606f\u4e2d TROVE_PASSWORD \u66ff\u6362\u4e3a\u5b9e\u9645trove\u7528\u6237\u7684\u5bc6\u7801\u3002\\ \u4eceVictoria\u7248\u5f00\u59cb\uff0cTrove\u4f7f\u7528\u4e00\u4e2a\u7edf\u4e00\u7684\u955c\u50cf\u6765\u8dd1\u4e0d\u540c\u7c7b\u578b\u7684\u6570\u636e\u5e93\uff0c\u6570\u636e\u5e93\u670d\u52a1\u8fd0\u884c\u5728Guest\u865a\u62df\u673a\u7684Docker\u5bb9\u5668\u4e2d\u3002 \u6570\u636e\u5e93\u540c\u6b65\u3002 su -s /bin/sh -c \"trove-manage db_sync\" trove \u5b8c\u6210\u5b89\u88c5\u3002 # \u914d\u7f6e\u670d\u52a1\u81ea\u542f systemctl enable openstack-trove-api.service openstack-trove-taskmanager.service \\ openstack-trove-conductor.service # \u542f\u52a8\u670d\u52a1 systemctl start openstack-trove-api.service openstack-trove-taskmanager.service \\ openstack-trove-conductor.service","title":"Trove"},{"location":"install/openEuler-24.03-LTS/OpenStack-antelope/#swift","text":"Swift \u63d0\u4f9b\u4e86\u5f39\u6027\u53ef\u4f38\u7f29\u3001\u9ad8\u53ef\u7528\u7684\u5206\u5e03\u5f0f\u5bf9\u8c61\u5b58\u50a8\u670d\u52a1\uff0c\u9002\u5408\u5b58\u50a8\u5927\u89c4\u6a21\u975e\u7ed3\u6784\u5316\u6570\u636e\u3002 Controller\u8282\u70b9 \u521b\u5efa\u670d\u52a1\u51ed\u8bc1\u4ee5\u53caAPI\u7aef\u70b9\u3002 \u521b\u5efa\u670d\u52a1\u51ed\u8bc1\u3002 # \u521b\u5efaswift\u7528\u6237 openstack user create --domain default --password-prompt swift # \u6dfb\u52a0admin\u89d2\u8272 openstack role add --project service --user swift admin # \u521b\u5efa\u5bf9\u8c61\u5b58\u50a8\u670d\u52a1 openstack service create --name swift --description \"OpenStack Object Storage\" object-store \u521b\u5efaAPI\u7aef\u70b9\u3002 openstack endpoint create --region RegionOne object-store public http://controller:8080/v1/AUTH_%\\(project_id\\)s openstack endpoint create --region RegionOne object-store internal http://controller:8080/v1/AUTH_%\\(project_id\\)s openstack endpoint create --region RegionOne object-store admin http://controller:8080/v1 \u5b89\u88c5Swift\u3002 dnf install openstack-swift-proxy python3-swiftclient python3-keystoneclient \\ python3-keystonemiddleware memcached \u914d\u7f6eproxy-server\u3002 Swift RPM\u5305\u91cc\u5df2\u7ecf\u5305\u542b\u4e86\u4e00\u4e2a\u57fa\u672c\u53ef\u7528\u7684proxy-server.conf\uff0c\u53ea\u9700\u8981\u624b\u52a8\u4fee\u6539\u5176\u4e2d\u7684ip\u548cSWIFT_PASS\u5373\u53ef\u3002 vim /etc/swift/proxy-server.conf [filter:authtoken] 
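# (annotation added for clarity, not part of the RPM default) this filter
# validates Keystone tokens for every request reaching the proxy server;
# of the settings below, normally only the controller address and
# SWIFT_PASS need to be adapted to the local environment.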
paste.filter_factory = keystonemiddleware.auth_token:filter_factory www_authenticate_uri = http://controller:5000 auth_url = http://controller:5000 memcached_servers = controller:11211 auth_type = password project_domain_id = default user_domain_id = default project_name = service username = swift password = SWIFT_PASS delay_auth_decision = True service_token_roles_required = True Storage\u8282\u70b9 \u5b89\u88c5\u652f\u6301\u7684\u7a0b\u5e8f\u5305\u3002 dnf install openstack-swift-account openstack-swift-container openstack-swift-object dnf install xfsprogs rsync \u5c06\u8bbe\u5907/dev/sdb\u548c/dev/sdc\u683c\u5f0f\u5316\u4e3aXFS\u3002 mkfs.xfs /dev/sdb mkfs.xfs /dev/sdc \u521b\u5efa\u6302\u8f7d\u70b9\u76ee\u5f55\u7ed3\u6784\u3002 mkdir -p /srv/node/sdb mkdir -p /srv/node/sdc \u627e\u5230\u65b0\u5206\u533a\u7684UUID\u3002 blkid \u7f16\u8f91/etc/fstab\u6587\u4ef6\u5e76\u5c06\u4ee5\u4e0b\u5185\u5bb9\u6dfb\u52a0\u5230\u5176\u4e2d\u3002 UUID=\"\" /srv/node/sdb xfs noatime 0 2 UUID=\"\" /srv/node/sdc xfs noatime 0 2 \u6302\u8f7d\u8bbe\u5907\u3002 mount /srv/node/sdb mount /srv/node/sdc \u6ce8\u610f \u5982\u679c\u7528\u6237\u4e0d\u9700\u8981\u5bb9\u707e\u529f\u80fd\uff0c\u4ee5\u4e0a\u6b65\u9aa4\u53ea\u9700\u8981\u521b\u5efa\u4e00\u4e2a\u8bbe\u5907\u5373\u53ef\uff0c\u540c\u65f6\u53ef\u4ee5\u8df3\u8fc7\u4e0b\u9762\u7684rsync\u914d\u7f6e\u3002 \uff08\u53ef\u9009\uff09\u521b\u5efa\u6216\u7f16\u8f91/etc/rsyncd.conf\u6587\u4ef6\u4ee5\u5305\u542b\u4ee5\u4e0b\u5185\u5bb9: [DEFAULT] uid = swift gid = swift log file = /var/log/rsyncd.log pid file = /var/run/rsyncd.pid address = MANAGEMENT_INTERFACE_IP_ADDRESS [account] max connections = 2 path = /srv/node/ read only = False lock file = /var/lock/account.lock [container] max connections = 2 path = /srv/node/ read only = False lock file = /var/lock/container.lock [object] max connections = 2 path = /srv/node/ read only = False lock file = /var/lock/object.lock \u66ff\u6362MANAGEMENT_INTERFACE_IP_ADDRESS\u4e3a\u5b58\u50a8\u8282\u70b9\u4e0a\u7ba1\u7406\u7f51\u7edc\u7684IP\u5730\u5740 \u542f\u52a8rsyncd\u670d\u52a1\u5e76\u914d\u7f6e\u5b83\u5728\u7cfb\u7edf\u542f\u52a8\u65f6\u542f\u52a8: systemctl enable rsyncd.service systemctl start rsyncd.service \u914d\u7f6e\u5b58\u50a8\u8282\u70b9\u3002 \u7f16\u8f91/etc/swift\u76ee\u5f55\u7684account-server.conf\u3001container-server.conf\u548cobject-server.conf\u6587\u4ef6\uff0c\u66ff\u6362bind_ip\u4e3a\u5b58\u50a8\u8282\u70b9\u4e0a\u7ba1\u7406\u7f51\u7edc\u7684IP\u5730\u5740\u3002 [DEFAULT] bind_ip = 192.168.0.4 \u786e\u4fdd\u6302\u8f7d\u70b9\u76ee\u5f55\u7ed3\u6784\u7684\u6b63\u786e\u6240\u6709\u6743\u3002 chown -R swift:swift /srv/node \u521b\u5efarecon\u76ee\u5f55\u5e76\u786e\u4fdd\u5176\u62e5\u6709\u6b63\u786e\u7684\u6240\u6709\u6743\u3002 mkdir -p /var/cache/swift chown -R root:swift /var/cache/swift chmod -R 775 /var/cache/swift Controller\u8282\u70b9\u521b\u5efa\u5e76\u5206\u53d1\u73af \u521b\u5efa\u8d26\u53f7\u73af\u3002 \u5207\u6362\u5230 /etc/swift \u76ee\u5f55\u3002 cd /etc/swift \u521b\u5efa\u57fa\u7840 account.builder \u6587\u4ef6\u3002 swift-ring-builder account.builder create 10 1 1 \u5c06\u6bcf\u4e2a\u5b58\u50a8\u8282\u70b9\u6dfb\u52a0\u5230\u73af\u4e2d\u3002 swift-ring-builder account.builder add --region 1 --zone 1 \\ --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS \\ --port 6202 --device DEVICE_NAME \\ --weight 100 \u66ff\u6362STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS\u4e3a\u5b58\u50a8\u8282\u70b9\u4e0a\u7ba1\u7406\u7f51\u7edc\u7684IP\u5730\u5740\u3002\\ 
\u66ff\u6362DEVICE_NAME\u4e3a\u540c\u4e00\u5b58\u50a8\u8282\u70b9\u4e0a\u7684\u5b58\u50a8\u8bbe\u5907\u540d\u79f0\u3002 \u6ce8\u610f \u5bf9\u6bcf\u4e2a\u5b58\u50a8\u8282\u70b9\u4e0a\u7684\u6bcf\u4e2a\u5b58\u50a8\u8bbe\u5907\u91cd\u590d\u6b64\u547d\u4ee4 \u9a8c\u8bc1\u8d26\u53f7\u73af\u5185\u5bb9\u3002 swift-ring-builder account.builder \u91cd\u65b0\u5e73\u8861\u8d26\u53f7\u73af\u3002 swift-ring-builder account.builder rebalance \u521b\u5efa\u5bb9\u5668\u73af\u3002 \u5207\u6362\u5230 /etc/swift \u76ee\u5f55\u3002 \u521b\u5efa\u57fa\u7840 container.builder \u6587\u4ef6\u3002 swift-ring-builder container.builder create 10 1 1 \u5c06\u6bcf\u4e2a\u5b58\u50a8\u8282\u70b9\u6dfb\u52a0\u5230\u73af\u4e2d\u3002 swift-ring-builder container.builder add --region 1 --zone 1 \\ --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6201 --device DEVICE_NAME \\ --weight 100 \u66ff\u6362STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS\u4e3a\u5b58\u50a8\u8282\u70b9\u4e0a\u7ba1\u7406\u7f51\u7edc\u7684IP\u5730\u5740\u3002\\ \u66ff\u6362DEVICE_NAME\u4e3a\u540c\u4e00\u5b58\u50a8\u8282\u70b9\u4e0a\u7684\u5b58\u50a8\u8bbe\u5907\u540d\u79f0\u3002 \u6ce8\u610f \u5bf9\u6bcf\u4e2a\u5b58\u50a8\u8282\u70b9\u4e0a\u7684\u6bcf\u4e2a\u5b58\u50a8\u8bbe\u5907\u91cd\u590d\u6b64\u547d\u4ee4 \u9a8c\u8bc1\u5bb9\u5668\u73af\u5185\u5bb9\u3002 swift-ring-builder container.builder \u91cd\u65b0\u5e73\u8861\u5bb9\u5668\u73af\u3002 swift-ring-builder container.builder rebalance \u521b\u5efa\u5bf9\u8c61\u73af\u3002 \u5207\u6362\u5230 /etc/swift \u76ee\u5f55\u3002 \u521b\u5efa\u57fa\u7840 object.builder \u6587\u4ef6\u3002 swift-ring-builder object.builder create 10 1 1 \u5c06\u6bcf\u4e2a\u5b58\u50a8\u8282\u70b9\u6dfb\u52a0\u5230\u73af\u4e2d\u3002 swift-ring-builder object.builder add --region 1 --zone 1 \\ --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS \\ --port 6200 --device DEVICE_NAME \\ --weight 100 \u66ff\u6362STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS\u4e3a\u5b58\u50a8\u8282\u70b9\u4e0a\u7ba1\u7406\u7f51\u7edc\u7684IP\u5730\u5740\u3002\\ \u66ff\u6362DEVICE_NAME\u4e3a\u540c\u4e00\u5b58\u50a8\u8282\u70b9\u4e0a\u7684\u5b58\u50a8\u8bbe\u5907\u540d\u79f0\u3002 \u6ce8\u610f \u5bf9\u6bcf\u4e2a\u5b58\u50a8\u8282\u70b9\u4e0a\u7684\u6bcf\u4e2a\u5b58\u50a8\u8bbe\u5907\u91cd\u590d\u6b64\u547d\u4ee4 \u9a8c\u8bc1\u5bf9\u8c61\u73af\u5185\u5bb9\u3002 swift-ring-builder object.builder \u91cd\u65b0\u5e73\u8861\u5bf9\u8c61\u73af\u3002 swift-ring-builder object.builder rebalance \u5206\u53d1\u73af\u914d\u7f6e\u6587\u4ef6\u3002 \u5c06 account.ring.gz \uff0c container.ring.gz \u4ee5\u53ca object.ring.gz \u6587\u4ef6\u590d\u5236\u5230\u6bcf\u4e2a\u5b58\u50a8\u8282\u70b9\u548c\u8fd0\u884c\u4ee3\u7406\u670d\u52a1\u7684\u4efb\u4f55\u5176\u4ed6\u8282\u70b9\u4e0a\u7684 /etc/swift \u76ee\u5f55\u3002 \u7f16\u8f91\u914d\u7f6e\u6587\u4ef6/etc/swift/swift.conf\u3002 [swift-hash] swift_hash_path_suffix = test-hash swift_hash_path_prefix = test-hash [storage-policy:0] name = Policy-0 default = yes \u7528\u552f\u4e00\u503c\u66ff\u6362 test-hash \u5c06swift.conf\u6587\u4ef6\u590d\u5236\u5230/etc/swift\u6bcf\u4e2a\u5b58\u50a8\u8282\u70b9\u548c\u8fd0\u884c\u4ee3\u7406\u670d\u52a1\u7684\u4efb\u4f55\u5176\u4ed6\u8282\u70b9\u4e0a\u7684\u76ee\u5f55\u3002 \u5728\u6240\u6709\u8282\u70b9\u4e0a\uff0c\u786e\u4fdd\u914d\u7f6e\u76ee\u5f55\u7684\u6b63\u786e\u6240\u6709\u6743\u3002 chown -R root:swift /etc/swift \u5b8c\u6210\u5b89\u88c5 
\u5728\u63a7\u5236\u8282\u70b9\u548c\u8fd0\u884c\u4ee3\u7406\u670d\u52a1\u7684\u4efb\u4f55\u5176\u4ed6\u8282\u70b9\u4e0a\uff0c\u542f\u52a8\u5bf9\u8c61\u5b58\u50a8\u4ee3\u7406\u670d\u52a1\u53ca\u5176\u4f9d\u8d56\u9879\uff0c\u5e76\u5c06\u5b83\u4eec\u914d\u7f6e\u4e3a\u5728\u7cfb\u7edf\u542f\u52a8\u65f6\u542f\u52a8\u3002 systemctl enable openstack-swift-proxy.service memcached.service systemctl start openstack-swift-proxy.service memcached.service \u5728\u5b58\u50a8\u8282\u70b9\u4e0a\uff0c\u542f\u52a8\u5bf9\u8c61\u5b58\u50a8\u670d\u52a1\u5e76\u5c06\u5b83\u4eec\u914d\u7f6e\u4e3a\u5728\u7cfb\u7edf\u542f\u52a8\u65f6\u542f\u52a8\u3002 systemctl enable openstack-swift-account.service \\ openstack-swift-account-auditor.service \\ openstack-swift-account-reaper.service \\ openstack-swift-account-replicator.service \\ openstack-swift-container.service \\ openstack-swift-container-auditor.service \\ openstack-swift-container-replicator.service \\ openstack-swift-container-updater.service \\ openstack-swift-object.service \\ openstack-swift-object-auditor.service \\ openstack-swift-object-replicator.service \\ openstack-swift-object-updater.service systemctl start openstack-swift-account.service \\ openstack-swift-account-auditor.service \\ openstack-swift-account-reaper.service \\ openstack-swift-account-replicator.service \\ openstack-swift-container.service \\ openstack-swift-container-auditor.service \\ openstack-swift-container-replicator.service \\ openstack-swift-container-updater.service \\ openstack-swift-object.service \\ openstack-swift-object-auditor.service \\ openstack-swift-object-replicator.service \\ openstack-swift-object-updater.service","title":"Swift"},{"location":"install/openEuler-24.03-LTS/OpenStack-antelope/#cyborg","text":"Cyborg\u4e3aOpenStack\u63d0\u4f9b\u52a0\u901f\u5668\u8bbe\u5907\u7684\u652f\u6301\uff0c\u5305\u62ec GPU, FPGA, ASIC, NP, SoCs, NVMe/NOF SSDs, ODP, DPDK/SPDK\u7b49\u7b49\u3002 Controller\u8282\u70b9 \u521d\u59cb\u5316\u5bf9\u5e94\u6570\u636e\u5e93 mysql -u root -p MariaDB [(none)]> CREATE DATABASE cyborg; MariaDB [(none)]> GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'localhost' IDENTIFIED BY 'CYBORG_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'%' IDENTIFIED BY 'CYBORG_DBPASS'; MariaDB [(none)]> exit; \u521b\u5efa\u7528\u6237\u548c\u670d\u52a1\uff0c\u5e76\u8bb0\u4f4f\u521b\u5efacybory\u7528\u6237\u65f6\u8f93\u5165\u7684\u5bc6\u7801\uff0c\u7528\u4e8e\u914d\u7f6eCYBORG_PASS source ~/.admin-openrc openstack user create --domain default --password-prompt cyborg openstack role add --project service --user cyborg admin openstack service create --name cyborg --description \"Acceleration Service\" accelerator \u4f7f\u7528uwsgi\u90e8\u7f72Cyborg api\u670d\u52a1 openstack endpoint create --region RegionOne accelerator public http://controller/accelerator/v2 openstack endpoint create --region RegionOne accelerator internal http://controller/accelerator/v2 openstack endpoint create --region RegionOne accelerator admin http://controller/accelerator/v2 \u5b89\u88c5Cyborg dnf install openstack-cyborg \u914d\u7f6eCyborg \u4fee\u6539 /etc/cyborg/cyborg.conf [DEFAULT] transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/ use_syslog = False state_path = /var/lib/cyborg debug = True [api] host_ip = 0.0.0.0 [database] connection = mysql+pymysql://cyborg:CYBORG_DBPASS@controller/cyborg [service_catalog] cafile = /opt/stack/data/ca-bundle.pem project_domain_id = default user_domain_id = default project_name = service password = CYBORG_PASS 
username = cyborg auth_url = http://controller:5000/v3/ auth_type = password [placement] project_domain_name = Default project_name = service user_domain_name = Default password = password username = PLACEMENT_PASS auth_url = http://controller:5000/v3/ auth_type = password auth_section = keystone_authtoken [nova] project_domain_name = Default project_name = service user_domain_name = Default password = NOVA_PASS username = nova auth_url = http://controller:5000/v3/ auth_type = password auth_section = keystone_authtoken [keystone_authtoken] memcached_servers = localhost:11211 signing_dir = /var/cache/cyborg/api cafile = /opt/stack/data/ca-bundle.pem project_domain_name = Default project_name = service user_domain_name = Default password = CYBORG_PASS username = cyborg auth_url = http://controller:5000/v3/ auth_type = password \u540c\u6b65\u6570\u636e\u5e93\u8868\u683c cyborg-dbsync --config-file /etc/cyborg/cyborg.conf upgrade \u542f\u52a8Cyborg\u670d\u52a1 systemctl enable openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent systemctl start openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent","title":"Cyborg"},{"location":"install/openEuler-24.03-LTS/OpenStack-antelope/#aodh","text":"Aodh\u53ef\u4ee5\u6839\u636e\u7531Ceilometer\u6216\u8005Gnocchi\u6536\u96c6\u7684\u76d1\u63a7\u6570\u636e\u521b\u5efa\u544a\u8b66\uff0c\u5e76\u8bbe\u7f6e\u89e6\u53d1\u89c4\u5219\u3002 Controller\u8282\u70b9 \u521b\u5efa\u6570\u636e\u5e93\u3002 CREATE DATABASE aodh; GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'localhost' IDENTIFIED BY 'AODH_DBPASS'; GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'%' IDENTIFIED BY 'AODH_DBPASS'; \u521b\u5efa\u670d\u52a1\u51ed\u8bc1\u4ee5\u53caAPI\u7aef\u70b9\u3002 \u521b\u5efa\u670d\u52a1\u51ed\u8bc1\u3002 openstack user create --domain default --password-prompt aodh openstack role add --project service --user aodh admin openstack service create --name aodh --description \"Telemetry\" alarming \u521b\u5efaAPI\u7aef\u70b9\u3002 openstack endpoint create --region RegionOne alarming public http://controller:8042 openstack endpoint create --region RegionOne alarming internal http://controller:8042 openstack endpoint create --region RegionOne alarming admin http://controller:8042 \u5b89\u88c5Aodh\u3002 dnf install openstack-aodh-api openstack-aodh-evaluator \\ openstack-aodh-notifier openstack-aodh-listener \\ openstack-aodh-expirer python3-aodhclient \u4fee\u6539\u914d\u7f6e\u6587\u4ef6\u3002 vim /etc/aodh/aodh.conf [database] connection = mysql+pymysql://aodh:AODH_DBPASS@controller/aodh [DEFAULT] transport_url = rabbit://openstack:RABBIT_PASS@controller auth_strategy = keystone [keystone_authtoken] www_authenticate_uri = http://controller:5000 auth_url = http://controller:5000 memcached_servers = controller:11211 auth_type = password project_domain_id = default user_domain_id = default project_name = service username = aodh password = AODH_PASS [service_credentials] auth_type = password auth_url = http://controller:5000/v3 project_domain_id = default user_domain_id = default project_name = service username = aodh password = AODH_PASS interface = internalURL region_name = RegionOne \u540c\u6b65\u6570\u636e\u5e93\u3002 aodh-dbsync \u5b8c\u6210\u5b89\u88c5\u3002 # \u914d\u7f6e\u670d\u52a1\u81ea\u542f systemctl enable openstack-aodh-api.service openstack-aodh-evaluator.service \\ openstack-aodh-notifier.service openstack-aodh-listener.service # \u542f\u52a8\u670d\u52a1 systemctl start openstack-aodh-api.service openstack-aodh-evaluator.service \\ 
openstack-aodh-notifier.service openstack-aodh-listener.service","title":"Aodh"},{"location":"install/openEuler-24.03-LTS/OpenStack-antelope/#gnocchi","text":"Gnocchi\u662f\u4e00\u4e2a\u5f00\u6e90\u7684\u65f6\u95f4\u5e8f\u5217\u6570\u636e\u5e93\uff0c\u53ef\u4ee5\u5bf9\u63a5Ceilometer\u3002 Controller\u8282\u70b9 \u521b\u5efa\u6570\u636e\u5e93\u3002 CREATE DATABASE gnocchi; GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'localhost' IDENTIFIED BY 'GNOCCHI_DBPASS'; GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'%' IDENTIFIED BY 'GNOCCHI_DBPASS'; \u521b\u5efa\u670d\u52a1\u51ed\u8bc1\u4ee5\u53caAPI\u7aef\u70b9\u3002 \u521b\u5efa\u670d\u52a1\u51ed\u8bc1\u3002 openstack user create --domain default --password-prompt gnocchi openstack role add --project service --user gnocchi admin openstack service create --name gnocchi --description \"Metric Service\" metric \u521b\u5efaAPI\u7aef\u70b9\u3002 openstack endpoint create --region RegionOne metric public http://controller:8041 openstack endpoint create --region RegionOne metric internal http://controller:8041 openstack endpoint create --region RegionOne metric admin http://controller:8041 \u5b89\u88c5Gnocchi\u3002 dnf install openstack-gnocchi-api openstack-gnocchi-metricd python3-gnocchiclient \u4fee\u6539\u914d\u7f6e\u6587\u4ef6\u3002 vim /etc/gnocchi/gnocchi.conf [api] auth_mode = keystone port = 8041 uwsgi_mode = http-socket [keystone_authtoken] auth_type = password auth_url = http://controller:5000/v3 project_domain_name = Default user_domain_name = Default project_name = service username = gnocchi password = GNOCCHI_PASS interface = internalURL region_name = RegionOne [indexer] url = mysql+pymysql://gnocchi:GNOCCHI_DBPASS@controller/gnocchi [storage] # coordination_url is not required but specifying one will improve # performance with better workload division across workers. 
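# (added note, assuming a single-controller deployment) with driver = file,
# measures are stored under file_basepath on the local disk; point
# coordination_url at a reachable Redis instance only when several
# gnocchi-metricd workers or nodes need to share the workload.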
# coordination_url = redis://controller:6379 file_basepath = /var/lib/gnocchi driver = file \u540c\u6b65\u6570\u636e\u5e93\u3002 gnocchi-upgrade \u5b8c\u6210\u5b89\u88c5\u3002 # \u914d\u7f6e\u670d\u52a1\u81ea\u542f systemctl enable openstack-gnocchi-api.service openstack-gnocchi-metricd.service # \u542f\u52a8\u670d\u52a1 systemctl start openstack-gnocchi-api.service openstack-gnocchi-metricd.service","title":"Gnocchi"},{"location":"install/openEuler-24.03-LTS/OpenStack-antelope/#ceilometer","text":"Ceilometer\u662fOpenStack\u4e2d\u8d1f\u8d23\u6570\u636e\u6536\u96c6\u7684\u670d\u52a1\u3002 Controller\u8282\u70b9 \u521b\u5efa\u670d\u52a1\u51ed\u8bc1\u3002 openstack user create --domain default --password-prompt ceilometer openstack role add --project service --user ceilometer admin openstack service create --name ceilometer --description \"Telemetry\" metering \u5b89\u88c5Ceilometer\u8f6f\u4ef6\u5305\u3002 dnf install openstack-ceilometer-notification openstack-ceilometer-central \u7f16\u8f91\u914d\u7f6e\u6587\u4ef6/etc/ceilometer/pipeline.yaml\u3002 publishers: # set address of Gnocchi # + filter out Gnocchi-related activity meters (Swift driver) # + set default archive policy - gnocchi://?filter_project=service&archive_policy=low \u7f16\u8f91\u914d\u7f6e\u6587\u4ef6/etc/ceilometer/ceilometer.conf\u3002 [DEFAULT] transport_url = rabbit://openstack:RABBIT_PASS@controller [service_credentials] auth_type = password auth_url = http://controller:5000/v3 project_domain_id = default user_domain_id = default project_name = service username = ceilometer password = CEILOMETER_PASS interface = internalURL region_name = RegionOne \u6570\u636e\u5e93\u540c\u6b65\u3002 ceilometer-upgrade \u5b8c\u6210\u63a7\u5236\u8282\u70b9Ceilometer\u5b89\u88c5\u3002 # \u914d\u7f6e\u670d\u52a1\u81ea\u542f systemctl enable openstack-ceilometer-notification.service openstack-ceilometer-central.service # \u542f\u52a8\u670d\u52a1 systemctl start openstack-ceilometer-notification.service openstack-ceilometer-central.service Compute\u8282\u70b9 \u5b89\u88c5Ceilometer\u8f6f\u4ef6\u5305\u3002 dnf install openstack-ceilometer-compute dnf install openstack-ceilometer-ipmi # \u53ef\u9009 \u7f16\u8f91\u914d\u7f6e\u6587\u4ef6/etc/ceilometer/ceilometer.conf\u3002 [DEFAULT] transport_url = rabbit://openstack:RABBIT_PASS@controller [service_credentials] auth_url = http://controller:5000 project_domain_id = default user_domain_id = default auth_type = password username = ceilometer project_name = service password = CEILOMETER_PASS interface = internalURL region_name = RegionOne \u7f16\u8f91\u914d\u7f6e\u6587\u4ef6/etc/nova/nova.conf\u3002 [DEFAULT] instance_usage_audit = True instance_usage_audit_period = hour [notifications] notify_on_state_change = vm_and_task_state [oslo_messaging_notifications] driver = messagingv2 \u5b8c\u6210\u5b89\u88c5\u3002 systemctl enable openstack-ceilometer-compute.service systemctl start openstack-ceilometer-compute.service systemctl enable openstack-ceilometer-ipmi.service # \u53ef\u9009 systemctl start openstack-ceilometer-ipmi.service # \u53ef\u9009 # \u91cd\u542fnova-compute\u670d\u52a1 systemctl restart openstack-nova-compute.service","title":"Ceilometer"},{"location":"install/openEuler-24.03-LTS/OpenStack-antelope/#heat","text":"Heat\u662f OpenStack \u81ea\u52a8\u7f16\u6392\u670d\u52a1\uff0c\u57fa\u4e8e\u63cf\u8ff0\u6027\u7684\u6a21\u677f\u6765\u7f16\u6392\u590d\u5408\u4e91\u5e94\u7528\uff0c\u4e5f\u79f0\u4e3a Orchestration Service \u3002Heat \u7684\u5404\u670d\u52a1\u4e00\u822c\u5b89\u88c5\u5728 
Controller \u8282\u70b9\u4e0a\u3002 Controller\u8282\u70b9 \u521b\u5efa heat \u6570\u636e\u5e93\uff0c\u5e76\u6388\u4e88 heat \u6570\u636e\u5e93\u6b63\u786e\u7684\u8bbf\u95ee\u6743\u9650\uff0c\u66ff\u6362 HEAT_DBPASS \u4e3a\u5408\u9002\u7684\u5bc6\u7801 mysql -u root -p MariaDB [(none)]> CREATE DATABASE heat; MariaDB [(none)]> GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'localhost' IDENTIFIED BY 'HEAT_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'%' IDENTIFIED BY 'HEAT_DBPASS'; MariaDB [(none)]> exit; \u521b\u5efa\u670d\u52a1\u51ed\u8bc1\uff0c\u521b\u5efa heat \u7528\u6237\uff0c\u5e76\u4e3a\u5176\u589e\u52a0 admin \u89d2\u8272 source ~/.admin-openrc openstack user create --domain default --password-prompt heat openstack role add --project service --user heat admin \u521b\u5efa heat \u548c heat-cfn \u670d\u52a1\u53ca\u5176\u5bf9\u5e94\u7684API\u7aef\u70b9 openstack service create --name heat --description \"Orchestration\" orchestration openstack service create --name heat-cfn --description \"Orchestration\" cloudformation openstack endpoint create --region RegionOne orchestration public http://controller:8004/v1/%\\(tenant_id\\)s openstack endpoint create --region RegionOne orchestration internal http://controller:8004/v1/%\\(tenant_id\\)s openstack endpoint create --region RegionOne orchestration admin http://controller:8004/v1/%\\(tenant_id\\)s openstack endpoint create --region RegionOne cloudformation public http://controller:8000/v1 openstack endpoint create --region RegionOne cloudformation internal http://controller:8000/v1 openstack endpoint create --region RegionOne cloudformation admin http://controller:8000/v1 \u521b\u5efastack\u7ba1\u7406\u7684\u989d\u5916\u4fe1\u606f \u521b\u5efa heat domain openstack domain create --description \"Stack projects and users\" heat \u5728 heat domain\u4e0b\u521b\u5efa heat_domain_admin \u7528\u6237\uff0c\u5e76\u8bb0\u4e0b\u8f93\u5165\u7684\u5bc6\u7801\uff0c\u7528\u4e8e\u914d\u7f6e\u4e0b\u9762\u7684 HEAT_DOMAIN_PASS openstack user create --domain heat --password-prompt heat_domain_admin \u4e3a heat_domain_admin \u7528\u6237\u589e\u52a0 admin \u89d2\u8272 openstack role add --domain heat --user-domain heat --user heat_domain_admin admin \u521b\u5efa heat_stack_owner \u89d2\u8272 openstack role create heat_stack_owner \u521b\u5efa heat_stack_user \u89d2\u8272 openstack role create heat_stack_user \u5b89\u88c5\u8f6f\u4ef6\u5305 dnf install openstack-heat-api openstack-heat-api-cfn openstack-heat-engine \u4fee\u6539\u914d\u7f6e\u6587\u4ef6 /etc/heat/heat.conf [DEFAULT] transport_url = rabbit://openstack:RABBIT_PASS@controller heat_metadata_server_url = http://controller:8000 heat_waitcondition_server_url = http://controller:8000/v1/waitcondition stack_domain_admin = heat_domain_admin stack_domain_admin_password = HEAT_DOMAIN_PASS stack_user_domain_name = heat [database] connection = mysql+pymysql://heat:HEAT_DBPASS@controller/heat [keystone_authtoken] www_authenticate_uri = http://controller:5000 auth_url = http://controller:5000 memcached_servers = controller:11211 auth_type = password project_domain_name = default user_domain_name = default project_name = service username = heat password = HEAT_PASS [trustee] auth_type = password auth_url = http://controller:5000 username = heat password = HEAT_PASS user_domain_name = default [clients_keystone] auth_uri = http://controller:5000 \u521d\u59cb\u5316 heat \u6570\u636e\u5e93\u8868 su -s /bin/sh -c \"heat-manage db_sync\" heat \u542f\u52a8\u670d\u52a1 systemctl enable 
openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service systemctl start openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service","title":"Heat"},{"location":"install/openEuler-24.03-LTS/OpenStack-antelope/#tempest","text":"Tempest\u662fOpenStack\u7684\u96c6\u6210\u6d4b\u8bd5\u670d\u52a1\uff0c\u5982\u679c\u7528\u6237\u9700\u8981\u5168\u9762\u81ea\u52a8\u5316\u6d4b\u8bd5\u5df2\u5b89\u88c5\u7684OpenStack\u73af\u5883\u7684\u529f\u80fd,\u5219\u63a8\u8350\u4f7f\u7528\u8be5\u7ec4\u4ef6\u3002\u5426\u5219\uff0c\u53ef\u4ee5\u4e0d\u7528\u5b89\u88c5\u3002 Controller\u8282\u70b9 \uff1a \u5b89\u88c5Tempest dnf install openstack-tempest \u521d\u59cb\u5316\u76ee\u5f55 tempest init mytest \u4fee\u6539\u914d\u7f6e\u6587\u4ef6\u3002 cd mytest vi etc/tempest.conf tempest.conf\u4e2d\u9700\u8981\u914d\u7f6e\u5f53\u524dOpenStack\u73af\u5883\u7684\u4fe1\u606f\uff0c\u5177\u4f53\u5185\u5bb9\u53ef\u4ee5\u53c2\u8003 \u5b98\u65b9\u793a\u4f8b \u6267\u884c\u6d4b\u8bd5 tempest run \u5b89\u88c5tempest\u6269\u5c55\uff08\u53ef\u9009\uff09 OpenStack\u5404\u4e2a\u670d\u52a1\u672c\u8eab\u4e5f\u63d0\u4f9b\u4e86\u4e00\u4e9btempest\u6d4b\u8bd5\u5305\uff0c\u7528\u6237\u53ef\u4ee5\u5b89\u88c5\u8fd9\u4e9b\u5305\u6765\u4e30\u5bcctempest\u7684\u6d4b\u8bd5\u5185\u5bb9\u3002\u5728Antelope\u4e2d\uff0c\u6211\u4eec\u63d0\u4f9b\u4e86Cinder\u3001Glance\u3001Keystone\u3001Ironic\u3001Trove\u7684\u6269\u5c55\u6d4b\u8bd5\uff0c\u7528\u6237\u53ef\u4ee5\u6267\u884c\u5982\u4e0b\u547d\u4ee4\u8fdb\u884c\u5b89\u88c5\u4f7f\u7528\uff1a dnf install python3-cinder-tempest-plugin python3-glance-tempest-plugin python3-ironic-tempest-plugin python3-keystone-tempest-plugin python3-trove-tempest-plugin","title":"Tempest"},{"location":"install/openEuler-24.03-LTS/OpenStack-antelope/#openstack-sigoos","text":"oos (openEuler OpenStack SIG)\u662fOpenStack SIG\u63d0\u4f9b\u7684\u547d\u4ee4\u884c\u5de5\u5177\u3002\u5176\u4e2d oos env \u7cfb\u5217\u547d\u4ee4\u63d0\u4f9b\u4e86\u4e00\u952e\u90e8\u7f72OpenStack \uff08 all in one \u6216\u4e09\u8282\u70b9 cluster \uff09\u7684ansible\u811a\u672c\uff0c\u7528\u6237\u53ef\u4ee5\u4f7f\u7528\u8be5\u811a\u672c\u5feb\u901f\u90e8\u7f72\u4e00\u5957\u57fa\u4e8e openEuler RPM \u7684 OpenStack \u73af\u5883\u3002 oos \u5de5\u5177\u652f\u6301\u5bf9\u63a5\u4e91provider\uff08\u76ee\u524d\u4ec5\u652f\u6301\u534e\u4e3a\u4e91provider\uff09\u548c\u4e3b\u673a\u7eb3\u7ba1\u4e24\u79cd\u65b9\u5f0f\u6765\u90e8\u7f72 OpenStack \u73af\u5883\uff0c\u4e0b\u9762\u4ee5\u5bf9\u63a5\u534e\u4e3a\u4e91\u90e8\u7f72\u4e00\u5957 all in one \u7684OpenStack\u73af\u5883\u4e3a\u4f8b\u8bf4\u660e oos \u5de5\u5177\u7684\u4f7f\u7528\u65b9\u6cd5\u3002 \u5b89\u88c5 oos \u5de5\u5177 oos\u5de5\u5177\u5728\u4e0d\u65ad\u6f14\u8fdb\uff0c\u517c\u5bb9\u6027\u3001\u53ef\u7528\u6027\u4e0d\u80fd\u65f6\u523b\u4fdd\u8bc1\uff0c\u5efa\u8bae\u4f7f\u7528\u5df2\u9a8c\u8bc1\u7684\u672c\u7248\uff0c\u8fd9\u91cc\u9009\u62e9 1.3.1 pip install openstack-sig-tool==1.3.1 \u914d\u7f6e\u5bf9\u63a5\u534e\u4e3a\u4e91provider\u7684\u4fe1\u606f \u6253\u5f00 /usr/local/etc/oos/oos.conf \u6587\u4ef6\uff0c\u4fee\u6539\u914d\u7f6e\u4e3a\u60a8\u62e5\u6709\u7684\u534e\u4e3a\u4e91\u8d44\u6e90\u4fe1\u606f\uff0cAK/SK\u662f\u7528\u6237\u7684\u534e\u4e3a\u4e91\u767b\u5f55\u5bc6\u94a5\uff0c\u5176\u4ed6\u914d\u7f6e\u4fdd\u6301\u9ed8\u8ba4\u5373\u53ef\uff08\u9ed8\u8ba4\u4f7f\u7528\u65b0\u52a0\u5761region\uff09\uff0c\u9700\u8981\u63d0\u524d\u5728\u4e91\u4e0a\u521b\u5efa\u5bf9\u5e94\u7684\u8d44\u6e90\uff0c\u5305\u62ec\uff1a 
\u4e00\u4e2a\u5b89\u5168\u7ec4\uff0c\u540d\u5b57\u9ed8\u8ba4\u662f oos \u4e00\u4e2aopenEuler\u955c\u50cf\uff0c\u540d\u79f0\u683c\u5f0f\u662fopenEuler-%(release)s-%(arch)s\uff0c\u4f8b\u5982 openEuler-24.03-arm64 \u4e00\u4e2aVPC\uff0c\u540d\u79f0\u662f oos_vpc \u8be5VPC\u4e0b\u9762\u4e24\u4e2a\u5b50\u7f51\uff0c\u540d\u79f0\u662f oos_subnet1 \u3001 oos_subnet2 [huaweicloud] ak = sk = region = ap-southeast-3 root_volume_size = 100 data_volume_size = 100 security_group_name = oos image_format = openEuler-%%(release)s-%%(arch)s vpc_name = oos_vpc subnet1_name = oos_subnet1 subnet2_name = oos_subnet2 \u914d\u7f6e OpenStack \u73af\u5883\u4fe1\u606f \u6253\u5f00 /usr/local/etc/oos/oos.conf \u6587\u4ef6\uff0c\u6839\u636e\u5f53\u524d\u673a\u5668\u73af\u5883\u548c\u9700\u6c42\u4fee\u6539\u914d\u7f6e\u3002\u5185\u5bb9\u5982\u4e0b\uff1a [environment] mysql_root_password = root mysql_project_password = root rabbitmq_password = root project_identity_password = root enabled_service = keystone,neutron,cinder,placement,nova,glance,horizon,aodh,ceilometer,cyborg,gnocchi,kolla,heat,swift,trove,tempest neutron_provider_interface_name = br-ex default_ext_subnet_range = 10.100.100.0/24 default_ext_subnet_gateway = 10.100.100.1 neutron_dataplane_interface_name = eth1 cinder_block_device = vdb swift_storage_devices = vdc swift_hash_path_suffix = ash swift_hash_path_prefix = has glance_api_workers = 2 cinder_api_workers = 2 nova_api_workers = 2 nova_metadata_api_workers = 2 nova_conductor_workers = 2 nova_scheduler_workers = 2 neutron_api_workers = 2 horizon_allowed_host = * kolla_openeuler_plugin = false \u5173\u952e\u914d\u7f6e \u914d\u7f6e\u9879 \u89e3\u91ca enabled_service \u5b89\u88c5\u670d\u52a1\u5217\u8868\uff0c\u6839\u636e\u7528\u6237\u9700\u6c42\u81ea\u884c\u5220\u51cf neutron_provider_interface_name neutron L3\u7f51\u6865\u540d\u79f0 default_ext_subnet_range neutron\u79c1\u7f51IP\u6bb5 default_ext_subnet_gateway neutron\u79c1\u7f51gateway neutron_dataplane_interface_name neutron\u4f7f\u7528\u7684\u7f51\u5361\uff0c\u63a8\u8350\u4f7f\u7528\u4e00\u5f20\u65b0\u7684\u7f51\u5361\uff0c\u4ee5\u514d\u548c\u73b0\u6709\u7f51\u5361\u51b2\u7a81\uff0c\u9632\u6b62all in one\u4e3b\u673a\u65ad\u8fde\u7684\u60c5\u51b5 cinder_block_device cinder\u4f7f\u7528\u7684\u5377\u8bbe\u5907\u540d swift_storage_devices swift\u4f7f\u7528\u7684\u5377\u8bbe\u5907\u540d kolla_openeuler_plugin \u662f\u5426\u542f\u7528kolla plugin\u3002\u8bbe\u7f6e\u4e3aTrue\uff0ckolla\u5c06\u652f\u6301\u90e8\u7f72openEuler\u5bb9\u5668(\u53ea\u5728openEuler LTS\u4e0a\u652f\u6301) \u534e\u4e3a\u4e91\u4e0a\u9762\u521b\u5efa\u4e00\u53f0openEuler 24.03 LTS\u7684x86_64\u865a\u62df\u673a\uff0c\u7528\u4e8e\u90e8\u7f72 all in one \u7684 OpenStack # sshpass\u5728`oos env create`\u8fc7\u7a0b\u4e2d\u88ab\u4f7f\u7528\uff0c\u7528\u4e8e\u914d\u7f6e\u5bf9\u76ee\u6807\u865a\u62df\u673a\u7684\u514d\u5bc6\u8bbf\u95ee dnf install sshpass oos env create -r 24.03-lts -f small -a x86 -n test-oos all_in_one \u5177\u4f53\u7684\u53c2\u6570\u53ef\u4ee5\u4f7f\u7528 oos env create --help \u547d\u4ee4\u67e5\u770b \u90e8\u7f72OpenStack all in one \u73af\u5883 oos env setup test-oos -r antelope \u5177\u4f53\u7684\u53c2\u6570\u53ef\u4ee5\u4f7f\u7528 oos env setup --help \u547d\u4ee4\u67e5\u770b \u521d\u59cb\u5316tempest\u73af\u5883 \u5982\u679c\u7528\u6237\u60f3\u4f7f\u7528\u8be5\u73af\u5883\u8fd0\u884ctempest\u6d4b\u8bd5\u7684\u8bdd\uff0c\u53ef\u4ee5\u6267\u884c\u547d\u4ee4 oos env init 
\uff0c\u4f1a\u81ea\u52a8\u628atempest\u9700\u8981\u7684OpenStack\u8d44\u6e90\u81ea\u52a8\u521b\u5efa\u597d oos env init test-oos \u6267\u884ctempest\u6d4b\u8bd5 \u7528\u6237\u53ef\u4ee5\u4f7f\u7528oos\u81ea\u52a8\u6267\u884c\uff1a oos env test test-oos \u4e5f\u53ef\u4ee5\u624b\u52a8\u767b\u5f55\u76ee\u6807\u8282\u70b9\uff0c\u8fdb\u5165\u6839\u76ee\u5f55\u4e0b\u7684 mytest \u76ee\u5f55\uff0c\u624b\u52a8\u6267\u884c tempest run \u5982\u679c\u662f\u4ee5\u4e3b\u673a\u7eb3\u7ba1\u7684\u65b9\u5f0f\u90e8\u7f72 OpenStack \u73af\u5883\uff0c\u603b\u4f53\u903b\u8f91\u4e0e\u4e0a\u6587\u5bf9\u63a5\u534e\u4e3a\u4e91\u65f6\u4e00\u81f4\uff0c1\u30013\u30015\u30016\u6b65\u64cd\u4f5c\u4e0d\u53d8\uff0c\u8df3\u8fc7\u7b2c2\u6b65\u5bf9\u534e\u4e3a\u4e91provider\u4fe1\u606f\u7684\u914d\u7f6e\uff0c\u5728\u7b2c4\u6b65\u6539\u4e3a\u7eb3\u7ba1\u4e3b\u673a\u64cd\u4f5c\u3002 \u88ab\u7eb3\u7ba1\u7684\u865a\u673a\u9700\u8981\u4fdd\u8bc1\uff1a \u81f3\u5c11\u6709\u4e00\u5f20\u7ed9oos\u4f7f\u7528\u7684\u7f51\u5361\uff0c\u540d\u79f0\u4e0e\u914d\u7f6e\u4fdd\u6301\u4e00\u81f4\uff0c\u76f8\u5173\u914d\u7f6e neutron_dataplane_interface_name \u81f3\u5c11\u6709\u4e00\u5757\u7ed9oos\u4f7f\u7528\u7684\u786c\u76d8\uff0c\u540d\u79f0\u4e0e\u914d\u7f6e\u4fdd\u6301\u4e00\u81f4\uff0c\u76f8\u5173\u914d\u7f6e cinder_block_device \u5982\u679c\u8981\u90e8\u7f72swift\u670d\u52a1\uff0c\u5219\u9700\u8981\u65b0\u589e\u4e00\u5757\u786c\u76d8\uff0c\u540d\u79f0\u4e0e\u914d\u7f6e\u4fdd\u6301\u4e00\u81f4\uff0c\u76f8\u5173\u914d\u7f6e swift_storage_devices # sshpass\u5728`oos env create`\u8fc7\u7a0b\u4e2d\u88ab\u4f7f\u7528\uff0c\u7528\u4e8e\u914d\u7f6e\u5bf9\u76ee\u6807\u4e3b\u673a\u7684\u514d\u5bc6\u8bbf\u95ee dnf install sshpass oos env manage -r 24.03-lts -i TARGET_MACHINE_IP -p TARGET_MACHINE_PASSWD -n test-oos \u66ff\u6362 TARGET_MACHINE_IP \u4e3a\u76ee\u6807\u673aip\u3001 TARGET_MACHINE_PASSWD \u4e3a\u76ee\u6807\u673a\u5bc6\u7801\u3002\u5177\u4f53\u7684\u53c2\u6570\u53ef\u4ee5\u4f7f\u7528 oos env manage --help \u547d\u4ee4\u67e5\u770b\u3002","title":"\u57fa\u4e8eOpenStack SIG\u5f00\u53d1\u5de5\u5177oos\u90e8\u7f72"},{"location":"install/openEuler-24.03-LTS/OpenStack-wallaby/","text":"OpenStack-Wallaby \u90e8\u7f72\u6307\u5357 \u00b6 OpenStack-Wallaby \u90e8\u7f72\u6307\u5357 OpenStack \u7b80\u4ecb \u7ea6\u5b9a \u51c6\u5907\u73af\u5883 \u73af\u5883\u914d\u7f6e \u5b89\u88c5 SQL DataBase \u5b89\u88c5 RabbitMQ \u5b89\u88c5 Memcached \u5b89\u88c5 OpenStack Keystone \u5b89\u88c5 Glance \u5b89\u88c5 Placement\u5b89\u88c5 Nova \u5b89\u88c5 Neutron \u5b89\u88c5 Cinder \u5b89\u88c5 horizon \u5b89\u88c5 Tempest \u5b89\u88c5 Ironic \u5b89\u88c5 Kolla \u5b89\u88c5 Trove \u5b89\u88c5 Swift \u5b89\u88c5 Cyborg \u5b89\u88c5 Aodh \u5b89\u88c5 Gnocchi \u5b89\u88c5 Ceilometer \u5b89\u88c5 Heat \u5b89\u88c5 \u57fa\u4e8eOpenStack SIG\u5f00\u53d1\u5de5\u5177oos\u5feb\u901f\u90e8\u7f72 OpenStack \u7b80\u4ecb \u00b6 OpenStack \u662f\u4e00\u4e2a\u793e\u533a\uff0c\u4e5f\u662f\u4e00\u4e2a\u9879\u76ee\u3002\u5b83\u63d0\u4f9b\u4e86\u4e00\u4e2a\u90e8\u7f72\u4e91\u7684\u64cd\u4f5c\u5e73\u53f0\u6216\u5de5\u5177\u96c6\uff0c\u4e3a\u7ec4\u7ec7\u63d0\u4f9b\u53ef\u6269\u5c55\u7684\u3001\u7075\u6d3b\u7684\u4e91\u8ba1\u7b97\u3002 \u4f5c\u4e3a\u4e00\u4e2a\u5f00\u6e90\u7684\u4e91\u8ba1\u7b97\u7ba1\u7406\u5e73\u53f0\uff0cOpenStack \u7531nova\u3001cinder\u3001neutron\u3001glance\u3001keystone\u3001horizon\u7b49\u51e0\u4e2a\u4e3b\u8981\u7684\u7ec4\u4ef6\u7ec4\u5408\u8d77\u6765\u5b8c\u6210\u5177\u4f53\u5de5\u4f5c\u3002OpenStack 
\u652f\u6301\u51e0\u4e4e\u6240\u6709\u7c7b\u578b\u7684\u4e91\u73af\u5883\uff0c\u9879\u76ee\u76ee\u6807\u662f\u63d0\u4f9b\u5b9e\u65bd\u7b80\u5355\u3001\u53ef\u5927\u89c4\u6a21\u6269\u5c55\u3001\u4e30\u5bcc\u3001\u6807\u51c6\u7edf\u4e00\u7684\u4e91\u8ba1\u7b97\u7ba1\u7406\u5e73\u53f0\u3002OpenStack \u901a\u8fc7\u5404\u79cd\u4e92\u8865\u7684\u670d\u52a1\u63d0\u4f9b\u4e86\u57fa\u7840\u8bbe\u65bd\u5373\u670d\u52a1\uff08IaaS\uff09\u7684\u89e3\u51b3\u65b9\u6848\uff0c\u6bcf\u4e2a\u670d\u52a1\u63d0\u4f9b API \u8fdb\u884c\u96c6\u6210\u3002 openEuler 24.03-LTS \u7248\u672c\u5b98\u65b9\u6e90\u5df2\u7ecf\u652f\u6301 OpenStack-Wallaby \u7248\u672c\uff0c\u7528\u6237\u53ef\u4ee5\u914d\u7f6e\u597d yum \u6e90\u540e\u6839\u636e\u6b64\u6587\u6863\u8fdb\u884c OpenStack \u90e8\u7f72\u3002 \u7ea6\u5b9a \u00b6 OpenStack \u652f\u6301\u591a\u79cd\u5f62\u6001\u90e8\u7f72\uff0c\u6b64\u6587\u6863\u652f\u6301 ALL in One \u4ee5\u53ca Distributed \u4e24\u79cd\u90e8\u7f72\u65b9\u5f0f\uff0c\u6309\u7167\u5982\u4e0b\u65b9\u5f0f\u7ea6\u5b9a\uff1a ALL in One \u6a21\u5f0f: \u5ffd\u7565\u6240\u6709\u53ef\u80fd\u7684\u540e\u7f00 Distributed \u6a21\u5f0f: \u4ee5 `(CTL)` \u4e3a\u540e\u7f00\u8868\u793a\u6b64\u6761\u914d\u7f6e\u6216\u8005\u547d\u4ee4\u4ec5\u9002\u7528`\u63a7\u5236\u8282\u70b9` \u4ee5 `(CPT)` \u4e3a\u540e\u7f00\u8868\u793a\u6b64\u6761\u914d\u7f6e\u6216\u8005\u547d\u4ee4\u4ec5\u9002\u7528`\u8ba1\u7b97\u8282\u70b9` \u4ee5 `(STG)` \u4e3a\u540e\u7f00\u8868\u793a\u6b64\u6761\u914d\u7f6e\u6216\u8005\u547d\u4ee4\u4ec5\u9002\u7528`\u5b58\u50a8\u8282\u70b9` \u9664\u6b64\u4e4b\u5916\u8868\u793a\u6b64\u6761\u914d\u7f6e\u6216\u8005\u547d\u4ee4\u540c\u65f6\u9002\u7528`\u63a7\u5236\u8282\u70b9`\u548c`\u8ba1\u7b97\u8282\u70b9` \u6ce8\u610f \u6d89\u53ca\u5230\u4ee5\u4e0a\u7ea6\u5b9a\u7684\u670d\u52a1\u5982\u4e0b\uff1a CinderSP1 Nova Neutron \u51c6\u5907\u73af\u5883 \u00b6 \u73af\u5883\u914d\u7f6e \u00b6 \u914d\u7f6e 24.03 LTS \u5b98\u65b9 yum \u6e90\uff0c\u9700\u8981\u542f\u7528 EPOL \u8f6f\u4ef6\u4ed3\u4ee5\u652f\u6301 OpenStack yum update yum install openstack-release-wallaby yum clean all && yum makecache \u6ce8\u610f \uff1a\u5982\u679c\u4f60\u7684\u73af\u5883\u7684YUM\u6e90\u6ca1\u6709\u542f\u7528EPOL\uff0c\u9700\u8981\u540c\u65f6\u914d\u7f6eEPOL\uff0c\u786e\u4fddEPOL\u5df2\u914d\u7f6e\uff0c\u5982\u4e0b\u6240\u793a\u3002 vi /etc/yum.repos.d/openEuler.repo [EPOL] name=EPOL baseurl=http://repo.openeuler.org/openEuler-24.03-LTS/EPOL/main/$basearch/ enabled=1 gpgcheck=1 gpgkey=http://repo.openeuler.org/openEuler-24.03-LTS/OS/$basearch/RPM-GPG-KEY-openEuler EOF \u4fee\u6539\u4e3b\u673a\u540d\u4ee5\u53ca\u6620\u5c04 \u8bbe\u7f6e\u5404\u4e2a\u8282\u70b9\u7684\u4e3b\u673a\u540d hostnamectl set-hostname controller (CTL) hostnamectl set-hostname compute (CPT) \u5047\u8bbecontroller\u8282\u70b9\u7684IP\u662f 10.0.0.11 ,compute\u8282\u70b9\u7684IP\u662f 10.0.0.12 \uff08\u5982\u679c\u5b58\u5728\u7684\u8bdd\uff09,\u5219\u4e8e /etc/hosts \u65b0\u589e\u5982\u4e0b\uff1a 10.0.0.11 controller 10.0.0.12 compute \u5b89\u88c5 SQL DataBase \u00b6 \u6267\u884c\u5982\u4e0b\u547d\u4ee4\uff0c\u5b89\u88c5\u8f6f\u4ef6\u5305\u3002 yum install mariadb mariadb-server python3-PyMySQL \u6267\u884c\u5982\u4e0b\u547d\u4ee4\uff0c\u521b\u5efa\u5e76\u7f16\u8f91 /etc/my.cnf.d/openstack.cnf \u6587\u4ef6\u3002 vim /etc/my.cnf.d/openstack.cnf [mysqld] bind-address = 10.0.0.11 default-storage-engine = innodb innodb_file_per_table = on max_connections = 4096 collation-server = utf8_general_ci character-set-server = utf8 \u6ce8\u610f \u5176\u4e2d bind-address 
\u8bbe\u7f6e\u4e3a\u63a7\u5236\u8282\u70b9\u7684\u7ba1\u7406IP\u5730\u5740\u3002 \u542f\u52a8 DataBase \u670d\u52a1\uff0c\u5e76\u4e3a\u5176\u914d\u7f6e\u5f00\u673a\u81ea\u542f\u52a8\uff1a systemctl enable mariadb.service systemctl start mariadb.service \u914d\u7f6eDataBase\u7684\u9ed8\u8ba4\u5bc6\u7801\uff08\u53ef\u9009\uff09 mysql_secure_installation \u6ce8\u610f \u6839\u636e\u63d0\u793a\u8fdb\u884c\u5373\u53ef \u5b89\u88c5 RabbitMQ \u00b6 \u6267\u884c\u5982\u4e0b\u547d\u4ee4\uff0c\u5b89\u88c5\u8f6f\u4ef6\u5305\u3002 yum install rabbitmq-server \u542f\u52a8 RabbitMQ \u670d\u52a1\uff0c\u5e76\u4e3a\u5176\u914d\u7f6e\u5f00\u673a\u81ea\u542f\u52a8\u3002 systemctl enable rabbitmq-server.service systemctl start rabbitmq-server.service \u6dfb\u52a0 OpenStack\u7528\u6237\u3002 rabbitmqctl add_user openstack RABBIT_PASS \u6ce8\u610f \u66ff\u6362 RABBIT_PASS \uff0c\u4e3a OpenStack \u7528\u6237\u8bbe\u7f6e\u5bc6\u7801 \u8bbe\u7f6eopenstack\u7528\u6237\u6743\u9650\uff0c\u5141\u8bb8\u8fdb\u884c\u914d\u7f6e\u3001\u5199\u3001\u8bfb\uff1a rabbitmqctl set_permissions openstack \".*\" \".*\" \".*\" \u5b89\u88c5 Memcached \u00b6 \u6267\u884c\u5982\u4e0b\u547d\u4ee4\uff0c\u5b89\u88c5\u4f9d\u8d56\u8f6f\u4ef6\u5305\u3002 yum install memcached python3-memcached \u7f16\u8f91 /etc/sysconfig/memcached \u6587\u4ef6\u3002 vim /etc/sysconfig/memcached OPTIONS=\"-l 127.0.0.1,::1,controller\" \u6267\u884c\u5982\u4e0b\u547d\u4ee4\uff0c\u542f\u52a8 Memcached \u670d\u52a1\uff0c\u5e76\u4e3a\u5176\u914d\u7f6e\u5f00\u673a\u542f\u52a8\u3002 systemctl enable memcached.service systemctl start memcached.service \u6ce8\u610f \u670d\u52a1\u542f\u52a8\u540e\uff0c\u53ef\u4ee5\u901a\u8fc7\u547d\u4ee4 memcached-tool controller stats \u786e\u4fdd\u542f\u52a8\u6b63\u5e38\uff0c\u670d\u52a1\u53ef\u7528\uff0c\u5176\u4e2d\u53ef\u4ee5\u5c06 controller \u66ff\u6362\u4e3a\u63a7\u5236\u8282\u70b9\u7684\u7ba1\u7406IP\u5730\u5740\u3002 \u5b89\u88c5 OpenStack \u00b6 Keystone \u5b89\u88c5 \u00b6 \u521b\u5efa keystone \u6570\u636e\u5e93\u5e76\u6388\u6743\u3002 mysql -u root -p MariaDB [(none)]> CREATE DATABASE keystone; MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \\ IDENTIFIED BY 'KEYSTONE_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \\ IDENTIFIED BY 'KEYSTONE_DBPASS'; MariaDB [(none)]> exit \u6ce8\u610f \u66ff\u6362 KEYSTONE_DBPASS \uff0c\u4e3a Keystone \u6570\u636e\u5e93\u8bbe\u7f6e\u5bc6\u7801 \u5b89\u88c5\u8f6f\u4ef6\u5305\u3002 yum install openstack-keystone httpd mod_wsgi \u914d\u7f6ekeystone\u76f8\u5173\u914d\u7f6e vim /etc/keystone/keystone.conf [database] connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone [token] provider = fernet \u89e3\u91ca [database]\u90e8\u5206\uff0c\u914d\u7f6e\u6570\u636e\u5e93\u5165\u53e3 [token]\u90e8\u5206\uff0c\u914d\u7f6etoken provider \u6ce8\u610f\uff1a \u66ff\u6362 KEYSTONE_DBPASS \u4e3a Keystone \u6570\u636e\u5e93\u7684\u5bc6\u7801 \u540c\u6b65\u6570\u636e\u5e93\u3002 su -s /bin/sh -c \"keystone-manage db_sync\" keystone \u521d\u59cb\u5316Fernet\u5bc6\u94a5\u4ed3\u5e93\u3002 keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone keystone-manage credential_setup --keystone-user keystone --keystone-group keystone \u542f\u52a8\u670d\u52a1\u3002 keystone-manage bootstrap --bootstrap-password ADMIN_PASS \\ --bootstrap-admin-url http://controller:5000/v3/ \\ --bootstrap-internal-url http://controller:5000/v3/ \\ --bootstrap-public-url http://controller:5000/v3/ \\ --bootstrap-region-id RegionOne 
\u6ce8\u610f \u66ff\u6362 ADMIN_PASS \uff0c\u4e3a admin \u7528\u6237\u8bbe\u7f6e\u5bc6\u7801 \u914d\u7f6eApache HTTP server vim /etc/httpd/conf/httpd.conf ServerName controller ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/ \u89e3\u91ca \u914d\u7f6e ServerName \u9879\u5f15\u7528\u63a7\u5236\u8282\u70b9 \u6ce8\u610f \u5982\u679c ServerName \u9879\u4e0d\u5b58\u5728\u5219\u9700\u8981\u521b\u5efa \u542f\u52a8Apache HTTP\u670d\u52a1\u3002 systemctl enable httpd.service systemctl start httpd.service \u521b\u5efa\u73af\u5883\u53d8\u91cf\u914d\u7f6e\u3002 cat << EOF >> ~/.admin-openrc export OS_PROJECT_DOMAIN_NAME=Default export OS_USER_DOMAIN_NAME=Default export OS_PROJECT_NAME=admin export OS_USERNAME=admin export OS_PASSWORD=ADMIN_PASS export OS_AUTH_URL=http://controller:5000/v3 export OS_IDENTITY_API_VERSION=3 export OS_IMAGE_API_VERSION=2 EOF \u6ce8\u610f \u66ff\u6362 ADMIN_PASS \u4e3a admin \u7528\u6237\u7684\u5bc6\u7801 \u4f9d\u6b21\u521b\u5efadomain, projects, users, roles\uff0c\u9700\u8981\u5148\u5b89\u88c5\u597dpython3-openstackclient\uff1a yum install python3-openstackclient \u5bfc\u5165\u73af\u5883\u53d8\u91cf source ~/.admin-openrc \u521b\u5efaproject service \uff0c\u5176\u4e2d domain default \u5728 keystone-manage bootstrap \u65f6\u5df2\u521b\u5efa openstack domain create --description \"An Example Domain\" example openstack project create --domain default --description \"Service Project\" service \u521b\u5efa\uff08non-admin\uff09project myproject \uff0cuser myuser \u548c role myrole \uff0c\u4e3a myproject \u548c myuser \u6dfb\u52a0\u89d2\u8272 myrole openstack project create --domain default --description \"Demo Project\" myproject openstack user create --domain default --password-prompt myuser openstack role create myrole openstack role add --project myproject --user myuser myrole \u9a8c\u8bc1 \u53d6\u6d88\u4e34\u65f6\u73af\u5883\u53d8\u91cfOS_AUTH_URL\u548cOS_PASSWORD\uff1a source ~/.admin-openrc unset OS_AUTH_URL OS_PASSWORD \u4e3aadmin\u7528\u6237\u8bf7\u6c42token\uff1a openstack --os-auth-url http://controller:5000/v3 \\ --os-project-domain-name Default --os-user-domain-name Default \\ --os-project-name admin --os-username admin token issue \u4e3amyuser\u7528\u6237\u8bf7\u6c42token\uff1a openstack --os-auth-url http://controller:5000/v3 \\ --os-project-domain-name Default --os-user-domain-name Default \\ --os-project-name myproject --os-username myuser token issue Glance \u5b89\u88c5 \u00b6 \u521b\u5efa\u6570\u636e\u5e93\u3001\u670d\u52a1\u51ed\u8bc1\u548c API \u7aef\u70b9 \u521b\u5efa\u6570\u636e\u5e93\uff1a mysql -u root -p MariaDB [(none)]> CREATE DATABASE glance; MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \\ IDENTIFIED BY 'GLANCE_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \\ IDENTIFIED BY 'GLANCE_DBPASS'; MariaDB [(none)]> exit \u6ce8\u610f: \u66ff\u6362 GLANCE_DBPASS \uff0c\u4e3a glance \u6570\u636e\u5e93\u8bbe\u7f6e\u5bc6\u7801 \u521b\u5efa\u670d\u52a1\u51ed\u8bc1 source ~/.admin-openrc openstack user create --domain default --password-prompt glance openstack role add --project service --user glance admin openstack service create --name glance --description \"OpenStack Image\" image \u521b\u5efa\u955c\u50cf\u670d\u52a1API\u7aef\u70b9\uff1a openstack endpoint create --region RegionOne image public http://controller:9292 openstack endpoint create --region RegionOne image internal http://controller:9292 openstack endpoint create --region RegionOne image admin http://controller:9292 
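The three endpoint-create calls above differ only in the interface name. A minimal shell sketch of the same step, looping over the interfaces (functionally equivalent to the commands above, shown only as a compact alternative):

for iface in public internal admin; do
    # register the Image service API at the same URL for each interface
    openstack endpoint create --region RegionOne image $iface http://controller:9292
done

Afterwards, openstack endpoint list --service image should list the three records just created.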
\u5b89\u88c5\u8f6f\u4ef6\u5305 yum install openstack-glance \u914d\u7f6eglance\u76f8\u5173\u914d\u7f6e\uff1a vim /etc/glance/glance-api.conf [database] connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance [keystone_authtoken] www_authenticate_uri = http://controller:5000 auth_url = http://controller:5000 memcached_servers = controller:11211 auth_type = password project_domain_name = Default user_domain_name = Default project_name = service username = glance password = GLANCE_PASS [paste_deploy] flavor = keystone [glance_store] stores = file,http default_store = file filesystem_store_datadir = /var/lib/glance/images/ \u89e3\u91ca: [database]\u90e8\u5206\uff0c\u914d\u7f6e\u6570\u636e\u5e93\u5165\u53e3 [keystone_authtoken] [paste_deploy]\u90e8\u5206\uff0c\u914d\u7f6e\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u5165\u53e3 [glance_store]\u90e8\u5206\uff0c\u914d\u7f6e\u672c\u5730\u6587\u4ef6\u7cfb\u7edf\u5b58\u50a8\u548c\u955c\u50cf\u6587\u4ef6\u7684\u4f4d\u7f6e \u6ce8\u610f \u66ff\u6362 GLANCE_DBPASS \u4e3a glance \u6570\u636e\u5e93\u7684\u5bc6\u7801 \u66ff\u6362 GLANCE_PASS \u4e3a glance \u7528\u6237\u7684\u5bc6\u7801 \u540c\u6b65\u6570\u636e\u5e93\uff1a su -s /bin/sh -c \"glance-manage db_sync\" glance \u542f\u52a8\u670d\u52a1\uff1a systemctl enable openstack-glance-api.service systemctl start openstack-glance-api.service \u9a8c\u8bc1 \u4e0b\u8f7d\u955c\u50cf source ~/.admin-openrc wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img \u6ce8\u610f \u5982\u679c\u60a8\u4f7f\u7528\u7684\u73af\u5883\u662f\u9cb2\u9e4f\u67b6\u6784\uff0c\u8bf7\u4e0b\u8f7daarch64\u7248\u672c\u7684\u955c\u50cf\uff1b\u5df2\u5bf9\u955c\u50cfcirros-0.5.2-aarch64-disk.img\u8fdb\u884c\u6d4b\u8bd5\u3002 \u5411Image\u670d\u52a1\u4e0a\u4f20\u955c\u50cf\uff1a openstack image create --disk-format qcow2 --container-format bare \\ --file cirros-0.4.0-x86_64-disk.img --public cirros \u786e\u8ba4\u955c\u50cf\u4e0a\u4f20\u5e76\u9a8c\u8bc1\u5c5e\u6027\uff1a openstack image list Placement\u5b89\u88c5 \u00b6 \u521b\u5efa\u6570\u636e\u5e93\u3001\u670d\u52a1\u51ed\u8bc1\u548c API \u7aef\u70b9 \u521b\u5efa\u6570\u636e\u5e93\uff1a \u4f5c\u4e3a root \u7528\u6237\u8bbf\u95ee\u6570\u636e\u5e93\uff0c\u521b\u5efa placement \u6570\u636e\u5e93\u5e76\u6388\u6743\u3002 mysql -u root -p MariaDB [(none)]> CREATE DATABASE placement; MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' \\ IDENTIFIED BY 'PLACEMENT_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' \\ IDENTIFIED BY 'PLACEMENT_DBPASS'; MariaDB [(none)]> exit \u6ce8\u610f \u66ff\u6362 PLACEMENT_DBPASS \u4e3a placement \u6570\u636e\u5e93\u8bbe\u7f6e\u5bc6\u7801 source ~/.admin-openrc \u6267\u884c\u5982\u4e0b\u547d\u4ee4\uff0c\u521b\u5efa placement \u670d\u52a1\u51ed\u8bc1\u3001\u521b\u5efa placement \u7528\u6237\u4ee5\u53ca\u6dfb\u52a0\u2018admin\u2019\u89d2\u8272\u5230\u7528\u6237\u2018placement\u2019\u3002 \u521b\u5efaPlacement API\u670d\u52a1 openstack user create --domain default --password-prompt placement openstack role add --project service --user placement admin openstack service create --name placement --description \"Placement API\" placement \u521b\u5efaplacement\u670d\u52a1API\u7aef\u70b9\uff1a openstack endpoint create --region RegionOne placement public http://controller:8778 openstack endpoint create --region RegionOne placement internal http://controller:8778 openstack endpoint create --region RegionOne placement admin http://controller:8778 \u5b89\u88c5\u548c\u914d\u7f6e 
\u5b89\u88c5\u8f6f\u4ef6\u5305\uff1a yum install openstack-placement-api \u914d\u7f6eplacement\uff1a \u7f16\u8f91 /etc/placement/placement.conf \u6587\u4ef6\uff1a \u5728[placement_database]\u90e8\u5206\uff0c\u914d\u7f6e\u6570\u636e\u5e93\u5165\u53e3 \u5728[api] [keystone_authtoken]\u90e8\u5206\uff0c\u914d\u7f6e\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u5165\u53e3 # vim /etc/placement/placement.conf [placement_database] # ... connection = mysql+pymysql://placement:PLACEMENT_DBPASS@controller/placement [api] # ... auth_strategy = keystone [keystone_authtoken] # ... auth_url = http://controller:5000/v3 memcached_servers = controller:11211 auth_type = password project_domain_name = Default user_domain_name = Default project_name = service username = placement password = PLACEMENT_PASS \u5176\u4e2d\uff0c\u66ff\u6362 PLACEMENT_DBPASS \u4e3a placement \u6570\u636e\u5e93\u7684\u5bc6\u7801\uff0c\u66ff\u6362 PLACEMENT_PASS \u4e3a placement \u7528\u6237\u7684\u5bc6\u7801\u3002 \u540c\u6b65\u6570\u636e\u5e93\uff1a su -s /bin/sh -c \"placement-manage db sync\" placement \u542f\u52a8httpd\u670d\u52a1\uff1a systemctl restart httpd \u9a8c\u8bc1 \u6267\u884c\u5982\u4e0b\u547d\u4ee4\uff0c\u6267\u884c\u72b6\u6001\u68c0\u67e5\uff1a source ~/.admin-openrc placement-status upgrade check \u5b89\u88c5osc-placement\uff0c\u5217\u51fa\u53ef\u7528\u7684\u8d44\u6e90\u7c7b\u522b\u53ca\u7279\u6027\uff1a yum install python3-osc-placement openstack --os-placement-api-version 1.2 resource class list --sort-column name openstack --os-placement-api-version 1.6 trait list --sort-column name Nova \u5b89\u88c5 \u00b6 \u521b\u5efa\u6570\u636e\u5e93\u3001\u670d\u52a1\u51ed\u8bc1\u548c API \u7aef\u70b9 \u521b\u5efa\u6570\u636e\u5e93\uff1a mysql -u root -p (CTL) MariaDB [(none)]> CREATE DATABASE nova_api; MariaDB [(none)]> CREATE DATABASE nova; MariaDB [(none)]> CREATE DATABASE nova_cell0; MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \\ IDENTIFIED BY 'NOVA_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \\ IDENTIFIED BY 'NOVA_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \\ IDENTIFIED BY 'NOVA_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \\ IDENTIFIED BY 'NOVA_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \\ IDENTIFIED BY 'NOVA_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \\ IDENTIFIED BY 'NOVA_DBPASS'; MariaDB [(none)]> exit \u6ce8\u610f \u66ff\u6362NOVA_DBPASS\uff0c\u4e3anova\u6570\u636e\u5e93\u8bbe\u7f6e\u5bc6\u7801 source ~/.admin-openrc (CTL) \u521b\u5efanova\u670d\u52a1\u51ed\u8bc1: openstack user create --domain default --password-prompt nova (CTL) openstack role add --project service --user nova admin (CTL) openstack service create --name nova --description \"OpenStack Compute\" compute (CTL) \u521b\u5efanova API\u7aef\u70b9\uff1a openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1 (CTL) openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1 (CTL) openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1 (CTL) \u5b89\u88c5\u8f6f\u4ef6\u5305 yum install openstack-nova-api openstack-nova-conductor \\ (CTL) openstack-nova-novncproxy openstack-nova-scheduler yum install openstack-nova-compute (CPT) \u6ce8\u610f \u5982\u679c\u4e3aarm64\u7ed3\u6784\uff0c\u8fd8\u9700\u8981\u6267\u884c\u4ee5\u4e0b\u547d\u4ee4 yum install edk2-aarch64 (CPT) 
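The (CTL)/(CPT) suffixes above mean the two package sets are installed on different nodes. A minimal sketch, assuming a hypothetical NODE_ROLE variable set by the operator, that runs the same installation commands per role:

# NODE_ROLE is an assumed variable: set it to "controller" or "compute" beforehand
if [ "$NODE_ROLE" = "controller" ]; then
    yum install -y openstack-nova-api openstack-nova-conductor \
        openstack-nova-novncproxy openstack-nova-scheduler
elif [ "$NODE_ROLE" = "compute" ]; then
    yum install -y openstack-nova-compute
    # the extra firmware package is only needed on aarch64 compute nodes
    [ "$(uname -m)" = "aarch64" ] && yum install -y edk2-aarch64
fi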
Configure nova by editing /etc/nova/nova.conf:

```
vim /etc/nova/nova.conf

[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
my_ip = 10.0.0.1
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver
compute_driver=libvirt.LibvirtDriver                                        (CPT)
instances_path = /var/lib/nova/instances/                                   (CPT)
lock_path = /var/lib/nova/tmp                                               (CPT)

[api_database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api           (CTL)

[database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova               (CTL)

[api]
auth_strategy = keystone

[keystone_authtoken]
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = NOVA_PASS

[vnc]
enabled = true
server_listen = $my_ip
server_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html                  (CPT)

[libvirt]
virt_type = qemu                                                            (CPT)
cpu_mode = custom                                                           (CPT)
cpu_model = cortex-a72                                                      (CPT)

[glance]
api_servers = http://controller:9292

[oslo_concurrency]
lock_path = /var/lib/nova/tmp                                               (CTL)

[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = PLACEMENT_PASS

[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
service_metadata_proxy = true                                               (CTL)
metadata_proxy_shared_secret = METADATA_SECRET                              (CTL)
```

Explanation:

- [DEFAULT] section: enable the compute and metadata APIs, configure the RabbitMQ message queue entry, configure my_ip, and enable the neutron network service.
- [api_database] and [database] sections: configure the database entries.
- [api] and [keystone_authtoken] sections: configure the Identity service entry.
- [vnc] section: enable and configure the remote console entry.
- [glance] section: configure the address of the Image service API.
- [oslo_concurrency] section: configure the lock path.
- [placement] section: configure the entry of the Placement service.

Note

- Replace RABBIT_PASS with the password of the openstack account in RabbitMQ.
- Set my_ip to the management IP address of the controller node.
- Replace NOVA_DBPASS with the password of the nova database.
- Replace NOVA_PASS with the password of the nova user.
- Replace PLACEMENT_PASS with the password of the placement user.
- Replace NEUTRON_PASS with the password of the neutron user.
- Replace METADATA_SECRET with a suitable metadata proxy secret.

Additional step: determine whether virtual machine hardware acceleration is supported (x86 architecture):

```shell
egrep -c '(vmx|svm)' /proc/cpuinfo                                          (CPT)
```

If the return value is 0, hardware acceleration is not supported and libvirt must be configured to use QEMU instead of KVM:

```shell
vim /etc/nova/nova.conf                                                     (CPT)

[libvirt]
virt_type = qemu
```
If the return value is 1 or greater, hardware acceleration is supported and no extra configuration is needed.

Note

On arm64, the following additional configuration is required:

```shell
vim /etc/libvirt/qemu.conf

nvram = ["/usr/share/AAVMF/AAVMF_CODE.fd: \
  /usr/share/AAVMF/AAVMF_VARS.fd", \
  "/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw: \
  /usr/share/edk2/aarch64/vars-template-pflash.raw"]

vim /etc/qemu/firmware/edk2-aarch64.json

{
    "description": "UEFI firmware for ARM64 virtual machines",
    "interface-types": [
        "uefi"
    ],
    "mapping": {
        "device": "flash",
        "executable": {
            "filename": "/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw",
            "format": "raw"
        },
        "nvram-template": {
            "filename": "/usr/share/edk2/aarch64/vars-template-pflash.raw",
            "format": "raw"
        }
    },
    "targets": [
        {
            "architecture": "aarch64",
            "machines": [
                "virt-*"
            ]
        }
    ],
    "features": [],
    "tags": []
}
                                                                            (CPT)
```

Synchronize the databases.

Synchronize the nova-api database:

```shell
su -s /bin/sh -c "nova-manage api_db sync" nova                             (CTL)
```

Register the cell0 database:

```shell
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova                       (CTL)
```

Create the cell1 cell:

```shell
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova    (CTL)
```

Synchronize the nova database:

```shell
su -s /bin/sh -c "nova-manage db sync" nova                                 (CTL)
```

Verify that cell0 and cell1 are registered correctly:

```shell
su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova                      (CTL)
```

Add the compute node to the OpenStack cluster:

```shell
su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova        (CPT)
```

Start the services:

```shell
systemctl enable \                                                          (CTL)
    openstack-nova-api.service \
    openstack-nova-scheduler.service \
    openstack-nova-conductor.service \
    openstack-nova-novncproxy.service
systemctl start \                                                           (CTL)
    openstack-nova-api.service \
    openstack-nova-scheduler.service \
    openstack-nova-conductor.service \
    openstack-nova-novncproxy.service

systemctl enable libvirtd.service openstack-nova-compute.service            (CPT)
systemctl start libvirtd.service openstack-nova-compute.service             (CPT)
```

Verification

```shell
source ~/.admin-openrc                                                      (CTL)
```

List the service components to verify that every process started and registered successfully:

```shell
openstack compute service list                                              (CTL)
```

List the API endpoints in the Identity service to verify connectivity with the Identity service:

```shell
openstack catalog list                                                      (CTL)
```

List the images in the Image service to verify connectivity with the Image service:

```shell
openstack image list                                                        (CTL)
```

Check whether the cells are working correctly and whether the other prerequisites are in place:

```shell
nova-status upgrade check                                                   (CTL)
```

## Neutron Installation

Create the database, service credentials, and API endpoints.

Create the database:

```shell
mysql -u root -p                                                            (CTL)

MariaDB [(none)]> CREATE DATABASE neutron;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
  IDENTIFIED BY 'NEUTRON_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
  IDENTIFIED BY 'NEUTRON_DBPASS';
MariaDB [(none)]> exit
```

Note

Replace NEUTRON_DBPASS with the password to set for the neutron database.

```shell
source ~/.admin-openrc                                                      (CTL)
```
Create the neutron service credentials:

```shell
openstack user create --domain default --password-prompt neutron           (CTL)
openstack role add --project service --user neutron admin                  (CTL)
openstack service create --name neutron --description "OpenStack Networking" network    (CTL)
```

Create the Neutron service API endpoints:

```shell
openstack endpoint create --region RegionOne network public http://controller:9696      (CTL)
openstack endpoint create --region RegionOne network internal http://controller:9696    (CTL)
openstack endpoint create --region RegionOne network admin http://controller:9696       (CTL)
```

Install the packages:

```shell
yum install openstack-neutron openstack-neutron-linuxbridge ebtables ipset \     (CTL)
            openstack-neutron-ml2

yum install openstack-neutron-linuxbridge ebtables ipset                   (CPT)
```

Configure neutron.

Main configuration:

```
vim /etc/neutron/neutron.conf

[database]
connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron     (CTL)

[DEFAULT]
core_plugin = ml2                                                           (CTL)
service_plugins = router                                                    (CTL)
allow_overlapping_ips = true                                                (CTL)
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = true                                   (CTL)
notify_nova_on_port_data_changes = true                                     (CTL)
api_workers = 3                                                             (CTL)

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = neutron
password = NEUTRON_PASS

[nova]
auth_url = http://controller:5000                                           (CTL)
auth_type = password                                                        (CTL)
project_domain_name = Default                                               (CTL)
user_domain_name = Default                                                  (CTL)
region_name = RegionOne                                                     (CTL)
project_name = service                                                      (CTL)
username = nova                                                             (CTL)
password = NOVA_PASS                                                        (CTL)

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
```

Explanation:

- [database] section: configure the database entry.
- [DEFAULT] section: enable the ml2 and router plugins, allow overlapping IP addresses, and configure the RabbitMQ message queue entry.
- [keystone_authtoken] section: configure the Identity service entry.
- [nova] section: configure networking to notify Compute of network topology changes.
- [oslo_concurrency] section: configure the lock path.

Note

- Replace NEUTRON_DBPASS with the password of the neutron database.
- Replace RABBIT_PASS with the password of the openstack account in RabbitMQ.
- Replace NEUTRON_PASS with the password of the neutron user.
- Replace NOVA_PASS with the password of the nova user.

Configure the ML2 plugin:

```
vim /etc/neutron/plugins/ml2/ml2_conf.ini

[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security

[ml2_type_flat]
flat_networks = provider

[ml2_type_vxlan]
vni_ranges = 1:1000

[securitygroup]
enable_ipset = true
```

Create a symbolic link to /etc/neutron/plugin.ini:

```shell
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
```

Note

- [ml2] section: enable flat, vlan, and vxlan networks, enable the linuxbridge and l2population mechanisms, and enable the port security extension driver.
- [ml2_type_flat] section: configure the flat network as a provider virtual network.
- [ml2_type_vxlan] section: configure the VXLAN network identifier range.
- [securitygroup] section: allow ipset.

Supplement: the concrete L2 configuration can be adjusted to the user's needs; this document uses a provider network with linuxbridge.

Configure the Linux bridge agent:

```
vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini

[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME

[vxlan]
enable_vxlan = true
local_ip = OVERLAY_INTERFACE_IP_ADDRESS
l2_population = true

[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
```

Explanation:

- [linux_bridge] section: map the provider virtual network to the physical network interface.
- [vxlan] section: enable the VXLAN overlay network, configure the IP address of the physical network interface that handles the overlay network, and enable layer-2 population.
- [securitygroup] section: allow security groups and configure the linux bridge iptables firewall driver.

Note

- Replace PROVIDER_INTERFACE_NAME with the physical network interface.
- Replace OVERLAY_INTERFACE_IP_ADDRESS with the management IP address of the controller node.

Configure the Layer-3 agent:

```
vim /etc/neutron/l3_agent.ini                                               (CTL)

[DEFAULT]
interface_driver = linuxbridge
```

Explanation: in the [DEFAULT] section, set the interface driver to linuxbridge.

Configure the DHCP agent:

```
vim /etc/neutron/dhcp_agent.ini                                             (CTL)

[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
```

Explanation: in the [DEFAULT] section, configure the linuxbridge interface driver and the Dnsmasq DHCP driver, and enable isolated metadata.

Configure the metadata agent:

```
vim /etc/neutron/metadata_agent.ini                                         (CTL)

[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = METADATA_SECRET
```

Explanation: in the [DEFAULT] section, configure the metadata host and the shared secret.

Note

Replace METADATA_SECRET with a suitable metadata proxy secret.

Configure the related nova settings:

```
vim /etc/nova/nova.conf

[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = Default
user_domain_name = Default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
service_metadata_proxy = true                                               (CTL)
metadata_proxy_shared_secret = METADATA_SECRET                              (CTL)
```

Explanation: in the [neutron] section, configure the access parameters, enable the metadata proxy, and configure the secret.

Note

- Replace NEUTRON_PASS with the password of the neutron user.
- Replace METADATA_SECRET with a suitable metadata proxy secret.

Synchronize the database:

```shell
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
    --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
```

Restart the Compute API service:

```shell
systemctl restart openstack-nova-api.service
```

Start the network services:
```shell
systemctl enable neutron-server.service neutron-linuxbridge-agent.service \     (CTL)
    neutron-dhcp-agent.service neutron-metadata-agent.service
systemctl enable neutron-l3-agent.service
systemctl restart openstack-nova-api.service neutron-server.service \           (CTL)
    neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
    neutron-metadata-agent.service neutron-l3-agent.service

systemctl enable neutron-linuxbridge-agent.service                              (CPT)
systemctl restart neutron-linuxbridge-agent.service openstack-nova-compute.service    (CPT)
```

Verification

Verify that the neutron agents started successfully:

```shell
openstack network agent list
```

## Cinder Installation

Create the database, service credentials, and API endpoints.

Create the database:

```shell
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE cinder;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \
  IDENTIFIED BY 'CINDER_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \
  IDENTIFIED BY 'CINDER_DBPASS';
MariaDB [(none)]> exit
```

Note

Replace CINDER_DBPASS with the password to set for the cinder database.

```shell
source ~/.admin-openrc
```

Create the cinder service credentials:

```shell
openstack user create --domain default --password-prompt cinder
openstack role add --project service --user cinder admin
openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
```

Create the Block Storage service API endpoints:

```shell
openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s
```

Install the packages:

```shell
yum install openstack-cinder-api openstack-cinder-scheduler                 (CTL)

yum install lvm2 device-mapper-persistent-data scsi-target-utils rpcbind nfs-utils \    (STG)
            openstack-cinder-volume openstack-cinder-backup
```

Prepare the storage device (the following is only an example):

```shell
pvcreate /dev/vdb
vgcreate cinder-volumes /dev/vdb
```
```
vim /etc/lvm/lvm.conf

devices {
    ...
    filter = [ "a/vdb/", "r/.*/"]
}
```

Explanation: in the devices section, add a filter that accepts the /dev/vdb device and rejects all other devices.

Prepare NFS:

```shell
mkdir -p /root/cinder/backup

cat << EOF >> /etc/exports
/root/cinder/backup 192.168.1.0/24(rw,sync,no_root_squash,no_all_squash)
EOF
```

Configure cinder:

```
vim /etc/cinder/cinder.conf

[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone
my_ip = 10.0.0.11
enabled_backends = lvm                                                      (STG)
backup_driver=cinder.backup.drivers.nfs.NFSBackupDriver                     (STG)
backup_share=HOST:PATH                                                      (STG)

[database]
connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = cinder
password = CINDER_PASS

[oslo_concurrency]
lock_path = /var/lib/cinder/tmp

[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver                   (STG)
volume_group = cinder-volumes                                               (STG)
iscsi_protocol = iscsi                                                      (STG)
iscsi_helper = tgtadm                                                       (STG)
```

Explanation:

- [database] section: configure the database entry.
- [DEFAULT] section: configure the RabbitMQ message queue entry and my_ip.
- [keystone_authtoken] section: configure the Identity service entry.
- [oslo_concurrency] section: configure the lock path.

Note

- Replace CINDER_DBPASS with the password of the cinder database.
- Replace RABBIT_PASS with the password of the openstack account in RabbitMQ.
- Set my_ip to the management IP address of the controller node.
- Replace CINDER_PASS with the password of the cinder user.
- Replace HOST:PATH with the NFS host IP and shared path.

Synchronize the database:

```shell
su -s /bin/sh -c "cinder-manage db sync" cinder                             (CTL)
```

Configure nova:

```
vim /etc/nova/nova.conf                                                     (CTL)

[cinder]
os_region_name = RegionOne
```

Restart the Compute API service:

```shell
systemctl restart openstack-nova-api.service
```

Start the cinder services:

```shell
systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service     (CTL)
systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service      (CTL)

systemctl enable rpcbind.service nfs-server.service tgtd.service iscsid.service \    (STG)
                 openstack-cinder-volume.service \
                 openstack-cinder-backup.service
systemctl start rpcbind.service nfs-server.service tgtd.service iscsid.service \     (STG)
                openstack-cinder-volume.service \
                openstack-cinder-backup.service
```

Note

When cinder attaches volumes with tgtadm, /etc/tgt/tgtd.conf has to be modified with the following content so that tgtd can discover the iSCSI targets of cinder-volume:

```
include /var/lib/cinder/volumes/*
```

Verification

```shell
source ~/.admin-openrc
openstack volume service list
```
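As an optional quick functional check (the volume name below is only an example), create a small test volume and confirm it becomes available:

```shell
# Assumes the admin credentials have been sourced as above.
openstack volume create --size 1 test-volume
openstack volume list
```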
## Horizon Installation

Install the package:

```shell
yum install openstack-dashboard
```

Modify the configuration file and its variables:

```
vim /etc/openstack-dashboard/local_settings

OPENSTACK_HOST = "controller"
ALLOWED_HOSTS = ['*', ]

SESSION_ENGINE = 'django.contrib.sessions.backends.cache'

CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'controller:11211',
    }
}

OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "member"

WEBROOT = '/dashboard'
POLICY_FILES_PATH = "/etc/openstack-dashboard"

OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 3,
}
```

Restart the httpd service:

```shell
systemctl restart httpd.service memcached.service
```

Verification

Open a browser, enter the URL http://HOSTIP/dashboard/, and log in to horizon.

Note

Replace HOSTIP with the management-plane IP address of the controller node.

## Tempest Installation

Tempest is the integration test service of OpenStack. If you need to fully and automatically test the functionality of the installed OpenStack environment, this component is recommended; otherwise it does not need to be installed.

Install Tempest:

```shell
yum install openstack-tempest
```

Initialize a working directory:

```shell
tempest init mytest
```

Modify the configuration file:

```shell
cd mytest
vi etc/tempest.conf
```

tempest.conf has to be filled with the information of the current OpenStack environment; refer to the official sample for details (a minimal sketch is also given after this section).

Run the tests:

```shell
tempest run
```

Install the tempest extensions (optional).

The OpenStack services themselves also provide tempest test packages, which can be installed to enrich the tempest test content. In Wallaby we provide extension tests for Cinder, Glance, Keystone, Ironic, and Trove, which can be installed with the following command:

```shell
yum install python3-cinder-tempest-plugin python3-glance-tempest-plugin python3-ironic-tempest-plugin python3-keystone-tempest-plugin python3-trove-tempest-plugin
```
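For orientation only, a minimal tempest.conf sketch for the environment described in this guide; the option names follow upstream Tempest, while the password, image, flavor, and network values are placeholders that must be replaced with IDs from your own deployment:

```
[auth]
admin_username = admin
admin_password = ADMIN_PASS
admin_project_name = admin
admin_domain_name = Default

[identity]
uri_v3 = http://controller:5000/v3

[compute]
image_ref = <ID of the uploaded cirros image>
flavor_ref = <ID of a small flavor>

[network]
public_network_id = <ID of the external/provider network>
```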
## Ironic Installation

Ironic is the bare metal service of OpenStack. If you need bare metal provisioning, this component is recommended; otherwise it does not need to be installed.

Set up the database.

The Bare Metal service stores information in a database. Create an ironic database that the ironic user can access, replacing IRONIC_DBPASSWORD with a suitable password:

```shell
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE ironic CHARACTER SET utf8;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'localhost' \
  IDENTIFIED BY 'IRONIC_DBPASSWORD';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'%' \
  IDENTIFIED BY 'IRONIC_DBPASSWORD';
```

Create the service user authentication.

1. Create the Bare Metal service users:

```shell
openstack user create --password IRONIC_PASSWORD \
    --email ironic@example.com ironic
openstack role add --project service --user ironic admin
openstack service create --name ironic --description "Ironic baremetal provisioning service" baremetal

openstack service create --name ironic-inspector --description "Ironic inspector baremetal provisioning service" baremetal-introspection
openstack user create --password IRONIC_INSPECTOR_PASSWORD \
    --email ironic_inspector@example.com ironic_inspector
openstack role add --project service --user ironic-inspector admin
```

2. Create the Bare Metal service access entries:

```shell
openstack endpoint create --region RegionOne baremetal admin http://$IRONIC_NODE:6385
openstack endpoint create --region RegionOne baremetal public http://$IRONIC_NODE:6385
openstack endpoint create --region RegionOne baremetal internal http://$IRONIC_NODE:6385
openstack endpoint create --region RegionOne baremetal-introspection internal http://172.20.19.13:5050/v1
openstack endpoint create --region RegionOne baremetal-introspection public http://172.20.19.13:5050/v1
openstack endpoint create --region RegionOne baremetal-introspection admin http://172.20.19.13:5050/v1
```

Configure the ironic-api service.

The configuration file path is /etc/ironic/ironic.conf.

1. Configure the location of the database via the connection option, as shown below, replacing IRONIC_DBPASSWORD with the password of the ironic user and DB_IP with the IP address of the DB server:

```
[database]

# The SQLAlchemy connection string used to connect to the
# database (string value)
connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic
```

2. Configure the ironic-api service to use the RabbitMQ message broker with the following option, replacing RPC_* with the RabbitMQ address details and credentials:

```
[DEFAULT]

# A URL representing the messaging driver to use and its full
# configuration. (string value)
transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
```

Users can also replace rabbitmq with the json-rpc mechanism on their own.

3. Configure the ironic-api service to use the Identity service credentials, replacing PUBLIC_IDENTITY_IP with the public IP of the Identity server, PRIVATE_IDENTITY_IP with the private IP of the Identity server, and IRONIC_PASSWORD with the password of the ironic user in the Identity service:

```
[DEFAULT]

# Authentication strategy used by ironic-api: one of
# "keystone" or "noauth". "noauth" should not be used in a
# production environment because all authentication will be
# disabled. (string value)
auth_strategy=keystone

host = controller
memcache_servers = controller:11211
enabled_network_interfaces = flat,noop,neutron
default_network_interface = noop
transport_url = rabbit://openstack:RABBITPASSWD@controller:5672/
enabled_hardware_types = ipmi
enabled_boot_interfaces = pxe
enabled_deploy_interfaces = direct
default_deploy_interface = direct
enabled_inspect_interfaces = inspector
enabled_management_interfaces = ipmitool
enabled_power_interfaces = ipmitool
enabled_rescue_interfaces = no-rescue,agent
isolinux_bin = /usr/share/syslinux/isolinux.bin
logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s

[keystone_authtoken]
# Authentication type to load (string value)
auth_type=password
# Complete public Identity API endpoint (string value)
www_authenticate_uri=http://PUBLIC_IDENTITY_IP:5000
# Complete admin Identity API endpoint. (string value)
auth_url=http://PRIVATE_IDENTITY_IP:5000
# Service username. (string value)
username=ironic
# Service account password. (string value)
password=IRONIC_PASSWORD
# Service tenant name. (string value)
project_name=service
# Domain name containing project (string value)
project_domain_name=Default
# User's domain name (string value)
user_domain_name=Default

[agent]
deploy_logs_collect = always
deploy_logs_local_path = /var/log/ironic/deploy
deploy_logs_storage_backend = local
image_download_source = http
stream_raw_images = false
force_raw_images = false
verify_ca = False

[oslo_concurrency]

[oslo_messaging_notifications]
transport_url = rabbit://openstack:123456@172.20.19.25:5672/
topics = notifications
driver = messagingv2

[oslo_messaging_rabbit]
amqp_durable_queues = True
rabbit_ha_queues = True

[pxe]
ipxe_enabled = false
pxe_append_params = nofb nomodeset vga=normal coreos.autologin ipa-insecure=1
image_cache_size = 204800
tftp_root=/var/lib/tftpboot/cephfs/
tftp_master_path=/var/lib/tftpboot/cephfs/master_images

[dhcp]
dhcp_provider = none
```

4. Create the Bare Metal service database tables:

```shell
ironic-dbsync --config-file /etc/ironic/ironic.conf create_schema
```

5. Restart the ironic-api service:

```shell
sudo systemctl restart openstack-ironic-api
```

Configure the ironic-conductor service.

1. Replace HOST_IP with the IP of the conductor host:

```
[DEFAULT]

# IP address of this host. If unset, will determine the IP
# programmatically. If unable to do so, will use "127.0.0.1".
# (string value)
my_ip=HOST_IP
```

2. Configure the location of the database. ironic-conductor should use the same configuration as ironic-api. Replace IRONIC_DBPASSWORD with the password of the ironic user and DB_IP with the IP address of the DB server:

```
[database]

# The SQLAlchemy connection string to use to connect to the
# database. (string value)
connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic
```

3. Configure the service to use the RabbitMQ message broker with the following option. ironic-conductor should use the same configuration as ironic-api. Replace RPC_* with the RabbitMQ address details and credentials:

```
[DEFAULT]

# A URL representing the messaging driver to use and its full
# configuration. (string value)
transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
```

Users can also replace rabbitmq with the json-rpc mechanism on their own.

4. Configure credentials for accessing other OpenStack services.

To communicate with other OpenStack services, the Bare Metal service needs to authenticate with the OpenStack Identity service using service users when requesting those services. The credentials of these users must be configured in each configuration section associated with the corresponding service:

- [neutron] - access the OpenStack Networking service
- [glance] - access the OpenStack Image service
- [swift] - access the OpenStack Object Storage service
- [cinder] - access the OpenStack Block Storage service
- [inspector] - access the OpenStack Bare Metal introspection service
- [service_catalog] - a special entry that holds the credentials the Bare Metal service uses to discover its own API URL endpoint registered in the OpenStack Identity service catalog

For simplicity, the same service user can be used for all services. For backward compatibility, this should be the same user configured in the [keystone_authtoken] section of the ironic-api service. This is not mandatory, however; a different service user can be created and configured for each service.

In the example below, the authentication information for accessing the OpenStack Networking service is configured as follows:

- the Networking service is deployed in the Identity service region named RegionOne, with only the public endpoint interface registered in the service catalog;
- requests use a specific CA SSL certificate for HTTPS connections;
- the same service user as configured for the ironic-api service is used;
- the dynamic password authentication plugin discovers a suitable Identity service API version based on the other options.

```
[neutron]

# Authentication type to load (string value)
auth_type = password
# Authentication URL (string value)
auth_url=https://IDENTITY_IP:5000/
# Username (string value)
username=ironic
# User's password (string value)
password=IRONIC_PASSWORD
# Project name to scope to (string value)
project_name=service
# Domain ID containing project (string value)
project_domain_id=default
# User's domain id (string value)
user_domain_id=default
# PEM encoded Certificate Authority to use when verifying
# HTTPs connections. (string value)
cafile=/opt/stack/data/ca-bundle.pem
# The default region_name for endpoint URL discovery. (string
# value)
region_name = RegionOne
# List of interfaces, in order of preference, for endpoint
# URL. (list value)
valid_interfaces=public
```

By default, to communicate with other services, the Bare Metal service attempts to discover a suitable endpoint for each service through the service catalog of the Identity service. If you want to use a different endpoint for a specific service, specify it through the endpoint_override option in the Bare Metal service configuration file:

```
[neutron]
...
endpoint_override = 
```

5. Configure the allowed drivers and hardware types.

Set the hardware types that the ironic-conductor service is allowed to use through enabled_hardware_types:

```
[DEFAULT]
enabled_hardware_types = ipmi
```

Configure the hardware interfaces:

```
enabled_boot_interfaces = pxe
enabled_deploy_interfaces = direct,iscsi
enabled_inspect_interfaces = inspector
enabled_management_interfaces = ipmitool
enabled_power_interfaces = ipmitool
```

Configure the interface defaults:

```
[DEFAULT]
default_deploy_interface = direct
default_network_interface = neutron
```

If any driver using direct deploy is enabled, the Swift backend of the Image service must be installed and configured. The Ceph Object Gateway (RADOS Gateway) is also supported as an Image service backend.

6. Restart the ironic-conductor service:

```shell
sudo systemctl restart openstack-ironic-conductor
```

Configure the ironic-inspector service.

The configuration file path is /etc/ironic-inspector/inspector.conf.

1. Create the database:

```shell
# mysql -u root -p

MariaDB [(none)]> CREATE DATABASE ironic_inspector CHARACTER SET utf8;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic_inspector.* TO 'ironic_inspector'@'localhost' \
  IDENTIFIED BY 'IRONIC_INSPECTOR_DBPASSWORD';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic_inspector.* TO 'ironic_inspector'@'%' \
  IDENTIFIED BY 'IRONIC_INSPECTOR_DBPASSWORD';
```

2. Configure the location of the database via the connection option, as shown below, replacing IRONIC_INSPECTOR_DBPASSWORD with the password of the ironic_inspector user and DB_IP with the IP address of the DB server:

```
[database]
backend = sqlalchemy
connection = mysql+pymysql://ironic_inspector:IRONIC_INSPECTOR_DBPASSWORD@DB_IP/ironic_inspector
min_pool_size = 100
max_pool_size = 500
pool_timeout = 30
max_retries = 5
max_overflow = 200
db_retry_interval = 2
db_inc_retry_interval = True
db_max_retry_interval = 2
db_max_retries = 5
```

3. Configure the message queue communication address:

```
[DEFAULT]
transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
```

4. Configure keystone authentication:

```
[DEFAULT]
auth_strategy = keystone
timeout = 900
rootwrap_config = /etc/ironic-inspector/rootwrap.conf
logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s
log_dir = /var/log/ironic-inspector
state_path = /var/lib/ironic-inspector
use_stderr = False

[ironic]
api_endpoint = http://IRONIC_API_HOST_ADDRRESS:6385
auth_type = password
auth_url = http://PUBLIC_IDENTITY_IP:5000
auth_strategy = keystone
ironic_url = http://IRONIC_API_HOST_ADDRRESS:6385
os_region = RegionOne
project_name = service
project_domain_name = Default
user_domain_name = Default
username = IRONIC_SERVICE_USER_NAME
password = IRONIC_SERVICE_USER_PASSWORD

[keystone_authtoken]
auth_type = password
auth_url = http://control:5000
www_authenticate_uri = http://control:5000
project_domain_name = default
user_domain_name = default
project_name = service
username = ironic_inspector
password = IRONICPASSWD
region_name = RegionOne
memcache_servers = control:11211
token_cache_time = 300

[processing]
add_ports = active
processing_hooks = $default_processing_hooks,local_link_connection,lldp_basic
ramdisk_logs_dir = /var/log/ironic-inspector/ramdisk
always_store_ramdisk_logs = true
store_data = none
power_off = false

[pxe_filter]
driver = iptables

[capabilities]
boot_mode=True
```

5. Configure the ironic-inspector dnsmasq service:

```
# Configuration file: /etc/ironic-inspector/dnsmasq.conf
port=0
interface=enp3s0                           # replace with the actual listening network interface
dhcp-range=172.20.19.100,172.20.19.110     # replace with the actual DHCP address range
bind-interfaces
enable-tftp
dhcp-match=set:efi,option:client-arch,7
dhcp-match=set:efi,option:client-arch,9
dhcp-match=aarch64, option:client-arch,11
dhcp-boot=tag:aarch64,grubaa64.efi
dhcp-boot=tag:!aarch64,tag:efi,grubx64.efi
dhcp-boot=tag:!aarch64,tag:!efi,pxelinux.0
tftp-root=/tftpboot                        # replace with the actual tftpboot directory
log-facility=/var/log/dnsmasq.log
```

6. Disable DHCP on the subnet of the ironic provision network:

```shell
openstack subnet set --no-dhcp 72426e89-f552-4dc4-9ac7-c4e131ce7f3c
```

7. Initialize the database of the ironic-inspector service.

Run on the controller node:

```shell
ironic-inspector-dbsync --config-file /etc/ironic-inspector/inspector.conf upgrade
```

8. Start the services:

```shell
systemctl enable --now openstack-ironic-inspector.service
systemctl enable --now openstack-ironic-inspector-dnsmasq.service
```

6. Configure the httpd service.

Create the httpd root directory to be used by ironic and set its owner and group. The directory path must match the path specified by the http_root option in the [deploy] section of /etc/ironic/ironic.conf (see the sketch after this step):

```shell
mkdir -p /var/lib/ironic/httproot
chown ironic.ironic /var/lib/ironic/httproot
```

Install and configure the httpd service.

Install the httpd service (skip if it is already installed):

```shell
yum install httpd -y
```

Create the /etc/httpd/conf.d/openstack-ironic-httpd.conf file with the following content:

```
Listen 8080

<VirtualHost *:8080>
    ServerName ironic.openeuler.com

    ErrorLog "/var/log/httpd/openstack-ironic-httpd-error_log"
    CustomLog "/var/log/httpd/openstack-ironic-httpd-access_log" "%h %l %u %t \"%r\" %>s %b"

    DocumentRoot "/var/lib/ironic/httproot"
    <Directory "/var/lib/ironic/httproot">
        Options Indexes FollowSymLinks
        Require all granted
    </Directory>
    LogLevel warn
    AddDefaultCharset UTF-8
    EnableSendfile on
</VirtualHost>
```

Note that the listening port must match the port specified by the http_url option in the [deploy] section of /etc/ironic/ironic.conf.

Restart the httpd service:

```shell
systemctl restart httpd
```
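For reference, a minimal sketch of the matching [deploy] section in /etc/ironic/ironic.conf; the IP address is a placeholder, and the values are assumptions that simply have to agree with the httpd directory and port configured above:

```
[deploy]
# Must match the DocumentRoot created for httpd above.
http_root = /var/lib/ironic/httproot
# Must match the Listen port of the ironic httpd vhost (8080 in this guide).
http_url = http://IRONIC_NODE_IP:8080
```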
7. Build the deploy ramdisk image.

The Wallaby ramdisk image can be built with the ironic-python-agent service or the disk-image-builder tool, or with the latest ironic-python-agent-builder from the community. Users can also choose other tools on their own. To use the native Wallaby tools, install the corresponding packages:

```shell
yum install openstack-ironic-python-agent
```

or

```shell
yum install diskimage-builder
```

For detailed usage, refer to the official documentation.

The following describes the complete process of building the deploy image used by ironic with ironic-python-agent-builder.

Install ironic-python-agent-builder.

1. Install the tool:

   ```shell
   pip install ironic-python-agent-builder
   ```

2. Modify the python interpreter in the following files:

   ```shell
   /usr/bin/yum /usr/libexec/urlgrabber-ext-down
   ```

3. Install the other required tools:

   ```shell
   yum install git
   ```

   Since `DIB` depends on the `semanage` command, make sure the command is available before building the image by running `semanage --help`. If the command is reported as missing, install it:

   ```shell
   # First find out which package needs to be installed
   [root@localhost ~]# yum provides /usr/sbin/semanage
   Loaded plugins: fastestmirror
   Loading mirror speeds from cached hostfile
    * base: mirror.vcu.edu
    * extras: mirror.vcu.edu
    * updates: mirror.math.princeton.edu
   policycoreutils-python-2.5-34.el7.aarch64 : SELinux policy core python utilities
   Repo        : base
   Matched from:
   Filename    : /usr/sbin/semanage
   # Install it
   [root@localhost ~]# yum install policycoreutils-python
   ```

Build the image.

For the `arm` architecture, the following has to be added:

```shell
export ARCH=aarch64
```

Basic usage:

```shell
usage: ironic-python-agent-builder [-h] [-r RELEASE] [-o OUTPUT] [-e ELEMENT]
                                   [-b BRANCH] [-v] [--extra-args EXTRA_ARGS]
                                   distribution

positional arguments:
  distribution          Distribution to use

optional arguments:
  -h, --help            show this help message and exit
  -r RELEASE, --release RELEASE
                        Distribution release to use
  -o OUTPUT, --output OUTPUT
                        Output base file name
  -e ELEMENT, --element ELEMENT
                        Additional DIB element to use
  -b BRANCH, --branch BRANCH
                        If set, override the branch that is used for ironic-
                        python-agent and requirements
  -v, --verbose         Enable verbose logging in diskimage-builder
  --extra-args EXTRA_ARGS
                        Extra arguments to pass to diskimage-builder
```

Example:

```shell
ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky
```

Allow SSH login.

Initialize the environment variables and then build the image:

```shell
export DIB_DEV_USER_USERNAME=ipa
export DIB_DEV_USER_PWDLESS_SUDO=yes
export DIB_DEV_USER_PASSWORD='123'
ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky -e selinux-permissive -e devuser
```

Specify a code repository.

Initialize the corresponding environment variables and then build the image:

```shell
# Specify the repository address and version
DIB_REPOLOCATION_ironic_python_agent=git@172.20.2.149:liuzz/ironic-python-agent.git
DIB_REPOREF_ironic_python_agent=origin/develop

# Clone the code directly from gerrit
DIB_REPOLOCATION_ironic_python_agent=https://review.opendev.org/openstack/ironic-python-agent
DIB_REPOREF_ironic_python_agent=refs/changes/43/701043/1
```

Reference: [source-repositories](https://docs.openstack.org/diskimage-builder/latest/elements/source-repositories/README.html).

Specifying the repository address and version has been verified to work.

Note

The PXE configuration file templates in native OpenStack do not support the arm64 architecture, so the native OpenStack code has to be modified by the user:

In Wallaby, the community version of ironic still does not support UEFI PXE boot on arm64. The symptom is that the generated grub.cfg file (usually located under /tftpboot/) has the wrong format, which causes PXE boot to fail, as shown below.

The incorrectly generated configuration file:

![ironic-err](../../img/install/ironic-err.png)

As shown in the figure above, on the arm architecture the commands used to load the vmlinux and ramdisk images are linux and initrd respectively; the commands highlighted in red in the figure are the ones used for UEFI PXE boot on x86. Users need to modify the code logic that generates grub.cfg themselves.

TLS errors when ironic sends requests to ipa to query the command execution status:

In Wallaby, both ipa and ironic enable TLS authentication by default when sending requests to each other; disable it as described in the official documentation.

1. Modify the ironic configuration file (/etc/ironic/ironic.conf), adding ipa-insecure=1 to the following configuration:

   ```
   [agent]
   verify_ca = False

   [pxe]
   pxe_append_params = nofb nomodeset vga=normal coreos.autologin ipa-insecure=1
   ```

2. Add the ipa configuration file /etc/ironic_python_agent/ironic_python_agent.conf to the ramdisk image and configure TLS as follows (the /etc/ironic_python_agent directory has to be created in advance):

   ```
   [DEFAULT]
   enable_auto_tls = False
   ```

   Set the permissions:

   ```
   chown -R ipa.ipa /etc/ironic_python_agent/
   ```
3. Modify the startup file of the ipa service, adding the configuration file option:

   ```shell
   vim /usr/lib/systemd/system/ironic-python-agent.service
   ```

   ```
   [Unit]
   Description=Ironic Python Agent
   After=network-online.target

   [Service]
   ExecStartPre=/sbin/modprobe vfat
   ExecStart=/usr/local/bin/ironic-python-agent --config-file /etc/ironic_python_agent/ironic_python_agent.conf
   Restart=always
   RestartSec=30s

   [Install]
   WantedBy=multi-user.target
   ```

## Kolla Installation

Kolla provides production-ready containerized deployment for OpenStack services. openEuler 24.03 LTS introduces the Kolla and Kolla-ansible services.

Installing Kolla is very simple; only the corresponding RPM packages need to be installed:

```shell
yum install openstack-kolla openstack-kolla-ansible
```

After the installation, commands such as kolla-ansible, kolla-build, kolla-genpwd, and kolla-mergepwd are available.
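As orientation only, a typical kolla-ansible flow looks roughly like the following sketch; the inventory path and the contents of /etc/kolla/globals.yml depend entirely on your deployment and are not covered by this guide:

```shell
# Generate random service passwords into /etc/kolla/passwords.yml
kolla-genpwd

# Edit /etc/kolla/globals.yml and prepare an inventory file for your hosts, then:
kolla-ansible -i /path/to/inventory bootstrap-servers
kolla-ansible -i /path/to/inventory prechecks
kolla-ansible -i /path/to/inventory deploy
```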
## Trove Installation

Trove is the Database service of OpenStack. If you want to use the database service provided by OpenStack, this component is recommended; otherwise it does not need to be installed.

1. Set up the database.

The Database service stores information in a database. Create a trove database that the trove user can access, replacing TROVE_DBPASSWORD with a suitable password:

```shell
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE trove CHARACTER SET utf8;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'localhost' \
  IDENTIFIED BY 'TROVE_DBPASSWORD';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'%' \
  IDENTIFIED BY 'TROVE_DBPASSWORD';
```

2. Create the service user authentication.

1) Create the Trove service user:

```shell
openstack user create --password TROVE_PASSWORD \
    --email trove@example.com trove
openstack role add --project service --user trove admin
openstack service create --name trove --description "Database service" database
```

Explanation: replace TROVE_PASSWORD with the password of the trove user.

2) Create the Database service access entries:

```shell
openstack endpoint create --region RegionOne database public http://controller:8779/v1.0/%\(tenant_id\)s
openstack endpoint create --region RegionOne database internal http://controller:8779/v1.0/%\(tenant_id\)s
openstack endpoint create --region RegionOne database admin http://controller:8779/v1.0/%\(tenant_id\)s
```

3. Install and configure the Trove components.

1) Install the Trove packages:

```shell
yum install openstack-trove python-troveclient
```

2) Configure trove.conf:

```
vim /etc/trove/trove.conf

[DEFAULT]
bind_host=TROVE_NODE_IP
log_dir = /var/log/trove
network_driver = trove.network.neutron.NeutronDriver
management_security_groups =
nova_keypair = trove-mgmt
default_datastore = mysql
taskmanager_manager = trove.taskmanager.manager.Manager
trove_api_workers = 5
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
reboot_time_out = 300
usage_timeout = 900
agent_call_high_timeout = 1200
use_syslog = False
debug = True

# Set these if using Neutron Networking
network_driver=trove.network.neutron.NeutronDriver
network_label_regex=.*

transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/

[database]
connection = mysql+pymysql://trove:TROVE_DBPASS@controller/trove

[keystone_authtoken]
project_domain_name = Default
project_name = service
user_domain_name = Default
password = trove
username = trove
auth_url = http://controller:5000/v3/
auth_type = password

[service_credentials]
auth_url = http://controller:5000/v3/
region_name = RegionOne
project_name = service
password = trove
project_domain_name = Default
user_domain_name = Default
username = trove

[mariadb]
tcp_ports = 3306,4444,4567,4568

[mysql]
tcp_ports = 3306

[postgresql]
tcp_ports = 5432
```

Explanation:

- In the [DEFAULT] group, bind_host is the IP of the node where Trove is deployed.
- nova_compute_url and cinder_url are the endpoints created for Nova and Cinder in Keystone.
- nova_proxy_XXX is the information of a user that can access the Nova service; the example above uses the admin user.
- transport_url is the RabbitMQ connection information; replace RABBIT_PASS with the RabbitMQ password.
- connection in the [database] group is the information of the database created for Trove in mysql earlier.
- In the Trove user information, replace TROVE_PASS with the actual password of the trove user.

3) Configure trove-guestagent.conf:

```
vim /etc/trove/trove-guestagent.conf

[DEFAULT]
log_file = trove-guestagent.log
log_dir = /var/log/trove/
ignore_users = os_admin
control_exchange = trove
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
rpc_backend = rabbit
command_process_timeout = 60
use_syslog = False
debug = True

[service_credentials]
auth_url = http://controller:5000/v3/
region_name = RegionOne
project_name = service
password = TROVE_PASS
project_domain_name = Default
user_domain_name = Default
username = trove

[mysql]
docker_image = your-registry/your-repo/mysql
backup_docker_image = your-registry/your-repo/db-backup-mysql:1.1.0
```

Explanation:

- guestagent is an independent component of Trove; it must be built into the virtual machine image that Trove creates through Nova in advance. After a database instance is created, the guestagent process starts and reports heartbeats to Trove through the message queue (RabbitMQ), so the RabbitMQ user and password information has to be configured.
- Starting from the Victoria release, Trove uses a single unified image to run different types of databases; the database services run in Docker containers inside the guest virtual machine.
- transport_url is the RabbitMQ connection information; replace RABBIT_PASS with the RabbitMQ password.
- In the Trove user information, replace TROVE_PASS with the actual password of the trove user.

4) Create the Trove database tables:

```shell
su -s /bin/sh -c "trove-manage db_sync" trove
```

4. Finish the installation and configuration.

Configure the Trove services to start automatically:

```shell
systemctl enable openstack-trove-api.service \
    openstack-trove-taskmanager.service \
    openstack-trove-conductor.service
```

Start the services:

```shell
systemctl start openstack-trove-api.service \
    openstack-trove-taskmanager.service \
    openstack-trove-conductor.service
```

## Swift Installation

Swift provides an elastic, scalable, and highly available distributed object storage service, suitable for storing large-scale unstructured data.

Create the service credentials and API endpoints.

Create the service credentials:

```shell
# Create the swift user:
openstack user create --domain default --password-prompt swift
# Add the admin role to the swift user:
openstack role add --project service --user swift admin
# Create the swift service entity:
openstack service create --name swift --description "OpenStack Object Storage" object-store
```

Create the swift API endpoints:

```shell
openstack endpoint create --region RegionOne object-store public http://controller:8080/v1/AUTH_%\(project_id\)s
openstack endpoint create --region RegionOne object-store internal http://controller:8080/v1/AUTH_%\(project_id\)s
openstack endpoint create --region RegionOne object-store admin http://controller:8080/v1
```

Install the packages:

```shell
yum install openstack-swift-proxy python3-swiftclient python3-keystoneclient python3-keystonemiddleware memcached    (CTL)
```

Configure proxy-server.

The Swift RPM package already contains a basically usable proxy-server.conf; only the IP and the swift password in it need to be changed manually.

***Note***

**Replace password with the password you chose for the swift user in the Identity service.**

4. Install and configure the storage nodes. (STG)

Install the supporting packages:

```shell
yum install xfsprogs rsync
```

Format the /dev/vdb and /dev/vdc devices as XFS:

```shell
mkfs.xfs /dev/vdb
mkfs.xfs /dev/vdc
```

Create the mount point directory structure:

```shell
mkdir -p /srv/node/vdb
mkdir -p /srv/node/vdc
```

Find the UUIDs of the new partitions:

```shell
blkid
```

Edit the /etc/fstab file and add the following to it:

```shell
UUID="" /srv/node/vdb xfs noatime 0 2
UUID="" /srv/node/vdc xfs noatime 0 2
```

Mount the devices:

```shell
mount /srv/node/vdb
mount /srv/node/vdc
```

***Note***

**If you do not need the disaster recovery function, only one device needs to be created in the steps above, and the rsync configuration below can be skipped.**

(Optional) Create or edit the /etc/rsyncd.conf file to contain the following:

```shell
[DEFAULT]
uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
address = MANAGEMENT_INTERFACE_IP_ADDRESS

[account]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/account.lock

[container]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/container.lock

[object]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/object.lock
```

**Replace MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network on the storage node.**

Start the rsyncd service and configure it to start at system boot:

```shell
systemctl enable rsyncd.service
systemctl start rsyncd.service
```

5. Install and configure the components on the storage nodes. (STG)

Install the packages:

```shell
yum install openstack-swift-account openstack-swift-container openstack-swift-object
```

Edit the account-server.conf, container-server.conf, and object-server.conf files in the /etc/swift directory, replacing bind_ip with the IP address of the management network on the storage node.

Ensure proper ownership of the mount point directory structure:

```shell
chown -R swift:swift /srv/node
```

Create the recon directory and ensure proper ownership of it:

```shell
mkdir -p /var/cache/swift
chown -R root:swift /var/cache/swift
chmod -R 775 /var/cache/swift
```

6. Create the account ring. (CTL)

Change to the /etc/swift directory:

```shell
cd /etc/swift
```

Create the base account.builder file:

```shell
swift-ring-builder account.builder create 10 1 1
```

Add each storage node to the ring:

```shell
swift-ring-builder account.builder add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6202 --device DEVICE_NAME --weight DEVICE_WEIGHT
```

**Replace STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network on the storage node. Replace DEVICE_NAME with the name of a storage device on the same storage node.**

***Note***

**Repeat this command for every storage device on every storage node.**

Verify the ring contents:

```shell
swift-ring-builder account.builder
```

Rebalance the ring:

```shell
swift-ring-builder account.builder rebalance
```

7. Create the container ring. (CTL)

Change to the `/etc/swift` directory.

Create the base `container.builder` file:

```shell
swift-ring-builder container.builder create 10 1 1
```

Add each storage node to the ring:

```shell
swift-ring-builder container.builder \
    add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6201 \
    --device DEVICE_NAME --weight 100
```

**Replace STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network on the storage node. Replace DEVICE_NAME with the name of a storage device on the same storage node.**

***Note***

**Repeat this command for every storage device on every storage node.**

Verify the ring contents:

```shell
swift-ring-builder container.builder
```

Rebalance the ring:
8. Create the object ring (CTL)

Change to the `/etc/swift` directory.

Create the base `object.builder` file:

```shell
swift-ring-builder object.builder create 10 1 1
```

Add each storage node to the ring:

```shell
swift-ring-builder object.builder \
    add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6200 \
    --device DEVICE_NAME --weight 100
```

**Replace STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network on the storage node, and DEVICE_NAME with the name of a storage device on that same node.**

***Note*** **Repeat this command for every storage device on every storage node.**

Verify the ring contents:

```shell
swift-ring-builder object.builder
```

Rebalance the ring:

```shell
swift-ring-builder object.builder rebalance
```

Distribute the ring configuration files:

Copy the `account.ring.gz`, `container.ring.gz` and `object.ring.gz` files to the `/etc/swift` directory on every storage node and on any other node running the proxy service.

9. Finalize the installation

Edit the /etc/swift/swift.conf file:

```
[swift-hash]
swift_hash_path_suffix = test-hash
swift_hash_path_prefix = test-hash

[storage-policy:0]
name = Policy-0
default = yes
```

Replace test-hash with unique values.

Copy the swift.conf file to the /etc/swift directory on every storage node and on any other node running the proxy service.

On all nodes, ensure proper ownership of the configuration directory:

```shell
chown -R root:swift /etc/swift
```

On the controller node and on any other node running the proxy service, start the Object Storage proxy service and its dependencies, and configure them to start at boot:

```shell
systemctl enable openstack-swift-proxy.service memcached.service
systemctl start openstack-swift-proxy.service memcached.service
```

On the storage nodes, start the Object Storage services and configure them to start at boot:

```shell
systemctl enable openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service
systemctl start openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service
systemctl enable openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service
systemctl start openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service
systemctl enable openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service
systemctl start openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service
```
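As an optional smoke test (not part of the original steps), you can create a container and upload a small object with the admin credentials, for example the `~/.admin-openrc` file created during the Keystone installation; this assumes the proxy and all storage services above are running:

```shell
source ~/.admin-openrc
openstack container create demo-container      # create a test container
echo "hello swift" > hello.txt
openstack object create demo-container hello.txt   # upload a test object
openstack object list demo-container           # the uploaded object should be listed
```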
## Cyborg Installation

Cyborg provides accelerator device support for OpenStack, including GPU, FPGA, ASIC, NP, SoCs, NVMe/NOF SSDs, ODP, DPDK/SPDK and so on.

1. Initialize the database

```
CREATE DATABASE cyborg;
GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'localhost' IDENTIFIED BY 'CYBORG_DBPASS';
GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'%' IDENTIFIED BY 'CYBORG_DBPASS';
```

2. Create the corresponding Keystone resources

```shell
$ openstack user create --domain default --password-prompt cyborg
$ openstack role add --project service --user cyborg admin
$ openstack service create --name cyborg --description "Acceleration Service" accelerator
$ openstack endpoint create --region RegionOne \
  accelerator public http://:6666/v1
$ openstack endpoint create --region RegionOne \
  accelerator internal http://:6666/v1
$ openstack endpoint create --region RegionOne \
  accelerator admin http://:6666/v1
```

3. Install Cyborg

```shell
yum install openstack-cyborg
```

4. Configure Cyborg

Modify /etc/cyborg/cyborg.conf:

```
[DEFAULT]
transport_url = rabbit://%RABBITMQ_USER%:%RABBITMQ_PASSWORD%@%OPENSTACK_HOST_IP%:5672/
use_syslog = False
state_path = /var/lib/cyborg
debug = True

[database]
connection = mysql+pymysql://%DATABASE_USER%:%DATABASE_PASSWORD%@%OPENSTACK_HOST_IP%/cyborg

[service_catalog]
project_domain_id = default
user_domain_id = default
project_name = service
password = PASSWORD
username = cyborg
auth_url = http://%OPENSTACK_HOST_IP%/identity
auth_type = password

[placement]
project_domain_name = Default
project_name = service
user_domain_name = Default
password = PASSWORD
username = placement
auth_url = http://%OPENSTACK_HOST_IP%/identity
auth_type = password

[keystone_authtoken]
memcached_servers = localhost:11211
project_domain_name = Default
project_name = service
user_domain_name = Default
password = PASSWORD
username = cyborg
auth_url = http://%OPENSTACK_HOST_IP%/identity
auth_type = password
```

Replace the usernames, passwords, IP addresses and other placeholders with your own values.

5. Sync the database tables

```shell
cyborg-dbsync --config-file /etc/cyborg/cyborg.conf upgrade
```

6. Start the Cyborg services

```shell
systemctl enable openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent
systemctl start openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent
```
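As an optional sanity check, assuming the cyborg OpenStack CLI plugin (python3-cyborgclient) is available, an empty device list (rather than an error) indicates that the API and conductor are reachable:

```shell
source ~/.admin-openrc
# An empty table simply means no accelerator drivers have reported devices yet.
openstack accelerator device list
```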
## Aodh Installation

1. Create the database

```
CREATE DATABASE aodh;
GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'localhost' IDENTIFIED BY 'AODH_DBPASS';
GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'%' IDENTIFIED BY 'AODH_DBPASS';
```

2. Create the corresponding Keystone resources

```shell
openstack user create --domain default --password-prompt aodh
openstack role add --project service --user aodh admin
openstack service create --name aodh --description "Telemetry" alarming
openstack endpoint create --region RegionOne alarming public http://controller:8042
openstack endpoint create --region RegionOne alarming internal http://controller:8042
openstack endpoint create --region RegionOne alarming admin http://controller:8042
```

3. Install Aodh

```shell
yum install openstack-aodh-api openstack-aodh-evaluator openstack-aodh-notifier openstack-aodh-listener openstack-aodh-expirer python3-aodhclient
```

**Note**: the python3-pyparsing package that aodh depends on in the openEuler OS repository is not compatible; it must be overridden with the version built for OpenStack. Get the version with `yum list | grep pyparsing | grep OpenStack | awk '{print $2}'` (call it VERSION), then run `yum install -y python3-pyparsing-VERSION` to install the compatible pyparsing over it.

4. Modify the configuration file

```
[database]
connection = mysql+pymysql://aodh:AODH_DBPASS@controller/aodh

[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = aodh
password = AODH_PASS

[service_credentials]
auth_type = password
auth_url = http://controller:5000/v3
project_domain_id = default
user_domain_id = default
project_name = service
username = aodh
password = AODH_PASS
interface = internalURL
region_name = RegionOne
```

5. Initialize the database

```shell
aodh-dbsync
```

6. Start the Aodh services

```shell
systemctl enable openstack-aodh-api.service openstack-aodh-evaluator.service openstack-aodh-notifier.service openstack-aodh-listener.service
systemctl start openstack-aodh-api.service openstack-aodh-evaluator.service openstack-aodh-notifier.service openstack-aodh-listener.service
```
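Optionally, since python3-aodhclient is installed above, you can confirm that the alarming API responds; on a fresh install the list is expected to be empty:

```shell
source ~/.admin-openrc
openstack alarm list
```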
## Gnocchi Installation

1. Create the database

```
CREATE DATABASE gnocchi;
GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'localhost' IDENTIFIED BY 'GNOCCHI_DBPASS';
GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'%' IDENTIFIED BY 'GNOCCHI_DBPASS';
```

2. Create the corresponding Keystone resources

```shell
openstack user create --domain default --password-prompt gnocchi
openstack role add --project service --user gnocchi admin
openstack service create --name gnocchi --description "Metric Service" metric
openstack endpoint create --region RegionOne metric public http://controller:8041
openstack endpoint create --region RegionOne metric internal http://controller:8041
openstack endpoint create --region RegionOne metric admin http://controller:8041
```

3. Install Gnocchi

```shell
yum install openstack-gnocchi-api openstack-gnocchi-metricd python3-gnocchiclient
```

4. Modify the configuration file /etc/gnocchi/gnocchi.conf

```
[api]
auth_mode = keystone
port = 8041
uwsgi_mode = http-socket

[keystone_authtoken]
auth_type = password
auth_url = http://controller:5000/v3
project_domain_name = Default
user_domain_name = Default
project_name = service
username = gnocchi
password = GNOCCHI_PASS
interface = internalURL
region_name = RegionOne

[indexer]
url = mysql+pymysql://gnocchi:GNOCCHI_DBPASS@controller/gnocchi

[storage]
# coordination_url is not required but specifying one will improve
# performance with better workload division across workers.
coordination_url = redis://controller:6379
file_basepath = /var/lib/gnocchi
driver = file
```

5. Initialize the database

```shell
gnocchi-upgrade
```

6. Start the Gnocchi services

```shell
systemctl enable openstack-gnocchi-api.service openstack-gnocchi-metricd.service
systemctl start openstack-gnocchi-api.service openstack-gnocchi-metricd.service
```

## Ceilometer Installation

1. Create the corresponding Keystone resources

```shell
openstack user create --domain default --password-prompt ceilometer
openstack role add --project service --user ceilometer admin
openstack service create --name ceilometer --description "Telemetry" metering
```

2. Install Ceilometer

```shell
yum install openstack-ceilometer-notification openstack-ceilometer-central
```

3. Modify the configuration file /etc/ceilometer/pipeline.yaml

```
publishers:
    # set address of Gnocchi
    # + filter out Gnocchi-related activity meters (Swift driver)
    # + set default archive policy
    - gnocchi://?filter_project=service&archive_policy=low
```

4. Modify the configuration file /etc/ceilometer/ceilometer.conf

```
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller

[service_credentials]
auth_type = password
auth_url = http://controller:5000/v3
project_domain_id = default
user_domain_id = default
project_name = service
username = ceilometer
password = CEILOMETER_PASS
interface = internalURL
region_name = RegionOne
```

5. Initialize the database

```shell
ceilometer-upgrade
```

6. Start the Ceilometer services

```shell
systemctl enable openstack-ceilometer-notification.service openstack-ceilometer-central.service
systemctl start openstack-ceilometer-notification.service openstack-ceilometer-central.service
```
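Once Ceilometer starts publishing, resources and metrics should begin to appear in Gnocchi. A quick optional check, assuming the gnocchi CLI (installed above with python3-gnocchiclient) picks up the keystone credentials from the environment:

```shell
source ~/.admin-openrc
gnocchi resource list   # resources appear once Ceilometer publishes samples
gnocchi metric list     # metrics created under the low archive policy
```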
## Heat Installation

1. Create the heat database and grant it the proper access privileges; replace HEAT_DBPASS with a suitable password

```
CREATE DATABASE heat;
GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'localhost' IDENTIFIED BY 'HEAT_DBPASS';
GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'%' IDENTIFIED BY 'HEAT_DBPASS';
```

2. Create the service credentials: create the heat user and add the admin role to it

```shell
openstack user create --domain default --password-prompt heat
openstack role add --project service --user heat admin
```

3. Create the heat and heat-cfn services and their API endpoints

```shell
openstack service create --name heat --description "Orchestration" orchestration
openstack service create --name heat-cfn --description "Orchestration" cloudformation
openstack endpoint create --region RegionOne orchestration public http://controller:8004/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne orchestration internal http://controller:8004/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne orchestration admin http://controller:8004/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne cloudformation public http://controller:8000/v1
openstack endpoint create --region RegionOne cloudformation internal http://controller:8000/v1
openstack endpoint create --region RegionOne cloudformation admin http://controller:8000/v1
```

4. Create the additional information required for stack management, including the heat domain, the admin user heat_domain_admin for that domain, the heat_stack_owner role and the heat_stack_user role

```shell
openstack user create --domain heat --password-prompt heat_domain_admin
openstack role add --domain heat --user-domain heat --user heat_domain_admin admin
openstack role create heat_stack_owner
openstack role create heat_stack_user
```

5. Install the packages

```shell
yum install openstack-heat-api openstack-heat-api-cfn openstack-heat-engine
```

6. Modify the configuration file /etc/heat/heat.conf

```
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
heat_metadata_server_url = http://controller:8000
heat_waitcondition_server_url = http://controller:8000/v1/waitcondition
stack_domain_admin = heat_domain_admin
stack_domain_admin_password = HEAT_DOMAIN_PASS
stack_user_domain_name = heat

[database]
connection = mysql+pymysql://heat:HEAT_DBPASS@controller/heat

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = heat
password = HEAT_PASS

[trustee]
auth_type = password
auth_url = http://controller:5000
username = heat
password = HEAT_PASS
user_domain_name = default

[clients_keystone]
auth_uri = http://controller:5000
```

7. Initialize the heat database tables

```shell
su -s /bin/sh -c "heat-manage db_sync" heat
```

8. Start the services

```shell
systemctl enable openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service
systemctl start openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service
```
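To confirm that orchestration works end to end, one optional check (not part of the original steps) is to create a trivial stack. The template below is a hypothetical minimal example: it defines no resources and a single static output, so it needs no images or networks:

```shell
source ~/.admin-openrc
# Hypothetical minimal HOT template: no resources, one static output.
cat > /tmp/minimal.yaml << 'EOF'
heat_template_version: 2018-08-31
outputs:
  hello:
    value: "heat is working"
EOF
openstack stack create --wait -t /tmp/minimal.yaml demo-stack
openstack stack output show demo-stack hello
openstack stack delete demo-stack --yes
```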
## Quick Deployment with the OpenStack SIG Development Tool oos

oos (openEuler OpenStack SIG) is the command-line tool provided by the OpenStack SIG. Its `oos env` sub-commands wrap Ansible playbooks for one-click OpenStack deployment (all-in-one or a three-node cluster), which users can run to quickly deploy an OpenStack environment based on openEuler RPM packages.

The oos tool can deploy an OpenStack environment either through a cloud provider (currently only the Huawei Cloud provider is supported) or onto hosts it manages directly. The following walks through deploying an all-in-one OpenStack environment on Huawei Cloud to illustrate how oos is used.

Install the oos tool

The oos tool is evolving constantly, so compatibility and usability cannot be guaranteed at every point in time; using a verified version is recommended. Version 1.3.1 is used here:

```shell
pip install openstack-sig-tool==1.3.1
```

Configure the Huawei Cloud provider information

Open the /usr/local/etc/oos/oos.conf file and change the configuration to match your own Huawei Cloud resources:

```
[huaweicloud]
ak =
sk =
region = ap-southeast-3
root_volume_size = 100
data_volume_size = 100
security_group_name = oos
image_format = openEuler-%%(release)s-%%(arch)s
vpc_name = oos_vpc
subnet1_name = oos_subnet1
subnet2_name = oos_subnet2
```

Configure the OpenStack environment information

Open the /usr/local/etc/oos/oos.conf file and adjust the configuration to the current machine environment and your needs. The content is as follows:

```
[environment]
mysql_root_password = root
mysql_project_password = root
rabbitmq_password = root
project_identity_password = root
enabled_service = keystone,neutron,cinder,placement,nova,glance,horizon,aodh,ceilometer,cyborg,gnocchi,kolla,heat,swift,trove,tempest
neutron_provider_interface_name = br-ex
default_ext_subnet_range = 10.100.100.0/24
default_ext_subnet_gateway = 10.100.100.1
neutron_dataplane_interface_name = eth1
cinder_block_device = vdb
swift_storage_devices = vdc
swift_hash_path_suffix = ash
swift_hash_path_prefix = has
glance_api_workers = 2
cinder_api_workers = 2
nova_api_workers = 2
nova_metadata_api_workers = 2
nova_conductor_workers = 2
nova_scheduler_workers = 2
neutron_api_workers = 2
horizon_allowed_host = *
kolla_openeuler_plugin = false
```

Key configuration items:

| Configuration item | Explanation |
|---|---|
| enabled_service | List of services to install; trim it according to your needs |
| neutron_provider_interface_name | Name of the neutron L3 bridge |
| default_ext_subnet_range | IP range of the neutron private network |
| default_ext_subnet_gateway | Gateway of the neutron private network |
| neutron_dataplane_interface_name | NIC used by neutron; a new NIC is recommended to avoid conflicting with existing NICs and disconnecting the all-in-one host |
| cinder_block_device | Name of the block device used by cinder |
| swift_storage_devices | Name of the block device(s) used by swift |
| kolla_openeuler_plugin | Whether to enable the kolla plugin; when set to True, kolla supports deploying openEuler containers |

Create an openEuler 24.03-LTS x86_64 virtual machine on Huawei Cloud for the all-in-one OpenStack deployment:

```shell
# sshpass is used during `oos env create` to set up passwordless access to the target VM
dnf install sshpass
oos env create -r 24.03-lts -f small -a x86 -n test-oos all_in_one
```

The detailed parameters can be viewed with the `oos env create --help` command.

Deploy the OpenStack all-in-one environment:

```shell
oos env setup test-oos -r wallaby
```

The detailed parameters can be viewed with the `oos env setup --help` command.

Initialize the tempest environment

If you want to run tempest tests in this environment, run `oos env init`, which automatically creates the OpenStack resources tempest needs:

```shell
oos env init test-oos
```

After the command succeeds, a mytest directory is generated in the user's home directory; enter it and you can run the `tempest run` command.

If the OpenStack environment is deployed by managing existing hosts instead, the overall procedure is the same as for Huawei Cloud above: steps 1, 3, 5 and 6 are unchanged, step 2 (configuring the Huawei Cloud provider information) is dropped, and step 4 changes from creating a VM on Huawei Cloud to taking over the target host:

```shell
# sshpass is used during `oos env create` to set up passwordless access to the target host
dnf install sshpass
oos env manage -r 24.03-lts -i TARGET_MACHINE_IP -p TARGET_MACHINE_PASSWD -n test-oos
```

Replace TARGET_MACHINE_IP with the target machine's IP address and TARGET_MACHINE_PASSWD with the target machine's password. The detailed parameters can be viewed with the `oos env manage --help` command.
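As a usage example, once `oos env init` has generated the mytest workspace mentioned above, a typical way to exercise the environment is to run the tempest smoke suite from inside it; this sketch assumes the workspace sits in the user's home directory:

```shell
cd ~/mytest
tempest run --smoke    # run only the smoke-tagged tests; use --regex for a narrower selection
```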
# OpenStack-Wallaby Deployment Guide

Contents: OpenStack introduction, conventions, preparing the environment, environment configuration, installing the SQL Database, installing RabbitMQ, installing Memcached, installing OpenStack (Keystone, Glance, Placement, Nova, Neutron, Cinder, Horizon, Tempest, Ironic, Kolla, Trove, Swift, Cyborg, Aodh, Gnocchi, Ceilometer and Heat), and quick deployment with the OpenStack SIG development tool oos.

## Introduction to OpenStack

OpenStack is both a community and a project. It provides an operating platform and a toolkit for deploying clouds, giving organizations scalable and flexible cloud computing. As an open source cloud management platform, OpenStack combines several main components — nova, cinder, neutron, glance, keystone, horizon and others — to do the actual work. OpenStack supports almost every type of cloud environment; the project aims to deliver a cloud management platform that is simple to implement, massively scalable, feature-rich and based on unified standards. OpenStack provides an Infrastructure-as-a-Service (IaaS) solution through a set of complementary services, each of which exposes an API for integration.

The official repository of openEuler 24.03-LTS already supports OpenStack Wallaby. After configuring the yum repository, users can deploy OpenStack by following this document.

## Conventions

OpenStack supports several deployment topologies. This document covers both the All-in-One and the Distributed deployment modes, with the following conventions:

ALL in One mode: ignore all suffixes that may appear.

Distributed mode:

- a `(CTL)` suffix means the configuration or command applies only to the `control node`;
- a `(CPT)` suffix means the configuration or command applies only to `compute nodes`;
- a `(STG)` suffix means the configuration or command applies only to `storage nodes`;
- anything without a suffix applies to both the `control node` and `compute nodes`.
**Note** The services covered by the conventions above are: Cinder, Nova, Neutron.

## Preparing the Environment

## Environment Configuration

Configure the official openEuler 24.03 LTS yum repository; the EPOL repository needs to be enabled to support OpenStack:

```shell
yum update
yum install openstack-release-wallaby
yum clean all && yum makecache
```

**Note**: if the yum configuration in your environment does not have EPOL enabled, configure EPOL as well and make sure it is present, as shown below.

```shell
vi /etc/yum.repos.d/openEuler.repo

[EPOL]
name=EPOL
baseurl=http://repo.openeuler.org/openEuler-24.03-LTS/EPOL/main/$basearch/
enabled=1
gpgcheck=1
gpgkey=http://repo.openeuler.org/openEuler-24.03-LTS/OS/$basearch/RPM-GPG-KEY-openEuler
```

Modify the hostnames and their mapping.

Set the hostname of each node:

```shell
hostnamectl set-hostname controller (CTL)
hostnamectl set-hostname compute (CPT)
```

Assuming the IP address of the controller node is 10.0.0.11 and the IP address of the compute node (if it exists) is 10.0.0.12, add the following to /etc/hosts:

```
10.0.0.11 controller
10.0.0.12 compute
```

## Installing the SQL Database

Run the following command to install the packages:

```shell
yum install mariadb mariadb-server python3-PyMySQL
```

Run the following command to create and edit the /etc/my.cnf.d/openstack.cnf file:

```shell
vim /etc/my.cnf.d/openstack.cnf

[mysqld]
bind-address = 10.0.0.11
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
```

**Note**: set bind-address to the management IP address of the control node.

Start the database service and configure it to start at boot:

```shell
systemctl enable mariadb.service
systemctl start mariadb.service
```

Configure the default database password (optional):

```shell
mysql_secure_installation
```

**Note**: just follow the prompts.

## Installing RabbitMQ

Run the following command to install the packages:

```shell
yum install rabbitmq-server
```

Start the RabbitMQ service and configure it to start at boot:

```shell
systemctl enable rabbitmq-server.service
systemctl start rabbitmq-server.service
```

Add the openstack user:

```shell
rabbitmqctl add_user openstack RABBIT_PASS
```

**Note**: replace RABBIT_PASS with the password you want to set for the openstack user.

Grant the openstack user permission to configure, write and read:

```shell
rabbitmqctl set_permissions openstack ".*" ".*" ".*"
```
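If you want to double-check the result (an optional step), the user and its permissions can be listed:

```shell
rabbitmqctl list_users            # the openstack user should appear
rabbitmqctl list_permissions -p / # openstack should have ".*" for configure, write and read
```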
## Installing Memcached

Run the following command to install the dependency packages:

```shell
yum install memcached python3-memcached
```

Edit the /etc/sysconfig/memcached file:

```shell
vim /etc/sysconfig/memcached

OPTIONS="-l 127.0.0.1,::1,controller"
```

Run the following commands to start the Memcached service and configure it to start at boot:

```shell
systemctl enable memcached.service
systemctl start memcached.service
```

**Note**: after the service starts, you can run `memcached-tool controller stats` to make sure it started correctly and is available; `controller` can be replaced with the management IP address of the control node.

## Installing OpenStack

## Keystone Installation

Create the keystone database and grant privileges on it:

```shell
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE keystone;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
IDENTIFIED BY 'KEYSTONE_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
IDENTIFIED BY 'KEYSTONE_DBPASS';
MariaDB [(none)]> exit
```

**Note**: replace KEYSTONE_DBPASS with the password you want to set for the Keystone database.

Install the packages:

```shell
yum install openstack-keystone httpd mod_wsgi
```

Configure keystone:

```shell
vim /etc/keystone/keystone.conf

[database]
connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone

[token]
provider = fernet
```

Explanation: the [database] section configures the database entry point; the [token] section configures the token provider.

**Note**: replace KEYSTONE_DBPASS with the password of the Keystone database.

Sync the database:

```shell
su -s /bin/sh -c "keystone-manage db_sync" keystone
```

Initialize the Fernet key repositories:

```shell
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
```

Bootstrap the service:

```shell
keystone-manage bootstrap --bootstrap-password ADMIN_PASS \
--bootstrap-admin-url http://controller:5000/v3/ \
--bootstrap-internal-url http://controller:5000/v3/ \
--bootstrap-public-url http://controller:5000/v3/ \
--bootstrap-region-id RegionOne
```

**Note**: replace ADMIN_PASS with the password you want to set for the admin user.

Configure the Apache HTTP server:

```shell
vim /etc/httpd/conf/httpd.conf

ServerName controller
```

```shell
ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
```

Explanation: the ServerName entry must reference the control node.

**Note**: if the ServerName entry does not exist, it needs to be created.

Start the Apache HTTP service:

```shell
systemctl enable httpd.service
systemctl start httpd.service
```

Create the environment variable configuration:

```shell
cat << EOF >> ~/.admin-openrc
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
EOF
```

**Note**: replace ADMIN_PASS with the password of the admin user.

Create the domain, projects, users and roles in turn; python3-openstackclient must be installed first:

```shell
yum install python3-openstackclient
```

Import the environment variables:
```shell
source ~/.admin-openrc
```

Create the project service; the domain default was already created during keystone-manage bootstrap:

```shell
openstack domain create --description "An Example Domain" example
openstack project create --domain default --description "Service Project" service
```

Create the (non-admin) project myproject, the user myuser and the role myrole, and add the role myrole to myproject and myuser:

```shell
openstack project create --domain default --description "Demo Project" myproject
openstack user create --domain default --password-prompt myuser
openstack role create myrole
openstack role add --project myproject --user myuser myrole
```

Verification

Unset the temporary environment variables OS_AUTH_URL and OS_PASSWORD:

```shell
source ~/.admin-openrc
unset OS_AUTH_URL OS_PASSWORD
```

Request a token for the admin user:

```shell
openstack --os-auth-url http://controller:5000/v3 \
--os-project-domain-name Default --os-user-domain-name Default \
--os-project-name admin --os-username admin token issue
```

Request a token for the myuser user:

```shell
openstack --os-auth-url http://controller:5000/v3 \
--os-project-domain-name Default --os-user-domain-name Default \
--os-project-name myproject --os-username myuser token issue
```

## Glance Installation

Create the database, service credentials, and API endpoints.

Create the database:

```shell
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE glance;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
IDENTIFIED BY 'GLANCE_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
IDENTIFIED BY 'GLANCE_DBPASS';
MariaDB [(none)]> exit
```

**Note**: replace GLANCE_DBPASS with the password you want to set for the glance database.

Create the service credentials:

```shell
source ~/.admin-openrc

openstack user create --domain default --password-prompt glance
openstack role add --project service --user glance admin
openstack service create --name glance --description "OpenStack Image" image
```

Create the Image service API endpoints:

```shell
openstack endpoint create --region RegionOne image public http://controller:9292
openstack endpoint create --region RegionOne image internal http://controller:9292
openstack endpoint create --region RegionOne image admin http://controller:9292
```

Install the packages:

```shell
yum install openstack-glance
```

Configure glance:

```shell
vim /etc/glance/glance-api.conf

[database]
connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = GLANCE_PASS

[paste_deploy]
flavor = keystone

[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
```

Explanation: the [database] section configures the database entry point; the [keystone_authtoken] and [paste_deploy] sections configure the identity service entry point; the [glance_store] section configures the local file system store and the location of image files.
**Note**: replace GLANCE_DBPASS with the password of the glance database, and GLANCE_PASS with the password of the glance user.

Sync the database:

```shell
su -s /bin/sh -c "glance-manage db_sync" glance
```

Start the service:

```shell
systemctl enable openstack-glance-api.service
systemctl start openstack-glance-api.service
```

Verification

Download an image:

```shell
source ~/.admin-openrc
wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
```

**Note**: if your environment uses the Kunpeng (aarch64) architecture, download the aarch64 version of the image; the cirros-0.5.2-aarch64-disk.img image has been tested.

Upload the image to the Image service:

```shell
openstack image create --disk-format qcow2 --container-format bare \
--file cirros-0.4.0-x86_64-disk.img --public cirros
```

Confirm the image was uploaded and verify its attributes:

```shell
openstack image list
```

## Placement Installation

Create the database, service credentials, and API endpoints.

Create the database:

Access the database as the root user, create the placement database, and grant privileges on it:

```shell
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE placement;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' \
IDENTIFIED BY 'PLACEMENT_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' \
IDENTIFIED BY 'PLACEMENT_DBPASS';
MariaDB [(none)]> exit
```

**Note**: replace PLACEMENT_DBPASS with the password you want to set for the placement database.

```shell
source ~/.admin-openrc
```

Run the following commands to create the placement service credentials, create the placement user, and add the admin role to the placement user.

Create the Placement API service:

```shell
openstack user create --domain default --password-prompt placement
openstack role add --project service --user placement admin
openstack service create --name placement --description "Placement API" placement
```

Create the placement service API endpoints:

```shell
openstack endpoint create --region RegionOne placement public http://controller:8778
openstack endpoint create --region RegionOne placement internal http://controller:8778
openstack endpoint create --region RegionOne placement admin http://controller:8778
```

Installation and configuration

Install the packages:

```shell
yum install openstack-placement-api
```

Configure placement by editing the /etc/placement/placement.conf file:

- in the [placement_database] section, configure the database entry point;
- in the [api] and [keystone_authtoken] sections, configure the identity service entry point.

```shell
# vim /etc/placement/placement.conf

[placement_database]
# ...
connection = mysql+pymysql://placement:PLACEMENT_DBPASS@controller/placement

[api]
# ...
auth_strategy = keystone

[keystone_authtoken]
# ...
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = placement
password = PLACEMENT_PASS
```
Replace PLACEMENT_DBPASS with the password of the placement database, and PLACEMENT_PASS with the password of the placement user.

Sync the database:

```shell
su -s /bin/sh -c "placement-manage db sync" placement
```

Restart the httpd service:

```shell
systemctl restart httpd
```

Verification

Run the following commands to perform a status check:

```shell
source ~/.admin-openrc
placement-status upgrade check
```

Install osc-placement and list the available resource classes and traits:

```shell
yum install python3-osc-placement
openstack --os-placement-api-version 1.2 resource class list --sort-column name
openstack --os-placement-api-version 1.6 trait list --sort-column name
```

## Nova Installation

Create the database, service credentials, and API endpoints.

Create the databases:

```shell
mysql -u root -p (CTL)

MariaDB [(none)]> CREATE DATABASE nova_api;
MariaDB [(none)]> CREATE DATABASE nova;
MariaDB [(none)]> CREATE DATABASE nova_cell0;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> exit
```

**Note**: replace NOVA_DBPASS with the password you want to set for the nova databases.

```shell
source ~/.admin-openrc (CTL)
```

Create the nova service credentials:

```shell
openstack user create --domain default --password-prompt nova (CTL)
openstack role add --project service --user nova admin (CTL)
openstack service create --name nova --description "OpenStack Compute" compute (CTL)
```

Create the nova API endpoints:

```shell
openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1 (CTL)
openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1 (CTL)
openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1 (CTL)
```

Install the packages:

```shell
yum install openstack-nova-api openstack-nova-conductor \ (CTL)
            openstack-nova-novncproxy openstack-nova-scheduler

yum install openstack-nova-compute (CPT)
```

**Note**: on the arm64 architecture, the following command also needs to be run:

```shell
yum install edk2-aarch64 (CPT)
```
Configure nova:

```shell
vim /etc/nova/nova.conf

[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
my_ip = 10.0.0.1
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver
compute_driver=libvirt.LibvirtDriver (CPT)
instances_path = /var/lib/nova/instances/ (CPT)
lock_path = /var/lib/nova/tmp (CPT)

[api_database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api (CTL)

[database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova (CTL)

[api]
auth_strategy = keystone

[keystone_authtoken]
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = NOVA_PASS

[vnc]
enabled = true
server_listen = $my_ip
server_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html (CPT)

[libvirt]
virt_type = qemu (CPT)
cpu_mode = custom (CPT)
cpu_model = cortex-a72 (CPT)

[glance]
api_servers = http://controller:9292

[oslo_concurrency]
lock_path = /var/lib/nova/tmp (CTL)

[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = PLACEMENT_PASS

[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
service_metadata_proxy = true (CTL)
metadata_proxy_shared_secret = METADATA_SECRET (CTL)
```

Explanation: the [DEFAULT] section enables the compute and metadata APIs, configures the RabbitMQ message queue entry point, configures my_ip and enables the neutron network service; the [api_database] and [database] sections configure the database entry points; the [api] and [keystone_authtoken] sections configure the identity service entry point; the [vnc] section enables and configures the remote console entry point; the [glance] section configures the address of the Image service API; the [oslo_concurrency] section configures the lock path; the [placement] section configures the entry point of the placement service.

**Note**: replace RABBIT_PASS with the password of the openstack account in RabbitMQ; set my_ip to the management IP address of the control node; replace NOVA_DBPASS with the password of the nova database; replace NOVA_PASS with the password of the nova user; replace PLACEMENT_PASS with the password of the placement user; replace NEUTRON_PASS with the password of the neutron user; replace METADATA_SECRET with a suitable metadata proxy secret.

Extra: determine whether the host supports VM hardware acceleration (x86 architecture):

```shell
egrep -c '(vmx|svm)' /proc/cpuinfo (CPT)
```

If the return value is 0, hardware acceleration is not supported and libvirt needs to be configured to use QEMU instead of KVM:

```shell
vim /etc/nova/nova.conf (CPT)

[libvirt]
virt_type = qemu
```

If the return value is 1 or greater, hardware acceleration is supported and no additional configuration is needed.

**Note**: on the arm64 architecture, the following also needs to be done:

```shell
vim /etc/libvirt/qemu.conf

nvram = ["/usr/share/AAVMF/AAVMF_CODE.fd: \
         /usr/share/AAVMF/AAVMF_VARS.fd", \
         "/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw: \
         /usr/share/edk2/aarch64/vars-template-pflash.raw"]

vim /etc/qemu/firmware/edk2-aarch64.json
```
{ \"description\": \"UEFI firmware for ARM64 virtual machines\", \"interface-types\": [ \"uefi\" ], \"mapping\": { \"device\": \"flash\", \"executable\": { \"filename\": \"/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw\", \"format\": \"raw\" }, \"nvram-template\": { \"filename\": \"/usr/share/edk2/aarch64/vars-template-pflash.raw\", \"format\": \"raw\" } }, \"targets\": [ { \"architecture\": \"aarch64\", \"machines\": [ \"virt-*\" ] } ], \"features\": [ ], \"tags\": [ ] } (CPT) \u540c\u6b65\u6570\u636e\u5e93 \u540c\u6b65nova-api\u6570\u636e\u5e93\uff1a su -s /bin/sh -c \"nova-manage api_db sync\" nova (CTL) \u6ce8\u518ccell0\u6570\u636e\u5e93\uff1a su -s /bin/sh -c \"nova-manage cell_v2 map_cell0\" nova (CTL) \u521b\u5efacell1 cell\uff1a su -s /bin/sh -c \"nova-manage cell_v2 create_cell --name=cell1 --verbose\" nova (CTL) \u540c\u6b65nova\u6570\u636e\u5e93\uff1a su -s /bin/sh -c \"nova-manage db sync\" nova (CTL) \u9a8c\u8bc1cell0\u548ccell1\u6ce8\u518c\u6b63\u786e\uff1a su -s /bin/sh -c \"nova-manage cell_v2 list_cells\" nova (CTL) \u6dfb\u52a0\u8ba1\u7b97\u8282\u70b9\u5230openstack\u96c6\u7fa4 su -s /bin/sh -c \"nova-manage cell_v2 discover_hosts --verbose\" nova (CPT) \u542f\u52a8\u670d\u52a1 systemctl enable \\ (CTL) openstack-nova-api.service \\ openstack-nova-scheduler.service \\ openstack-nova-conductor.service \\ openstack-nova-novncproxy.service systemctl start \\ (CTL) openstack-nova-api.service \\ openstack-nova-scheduler.service \\ openstack-nova-conductor.service \\ openstack-nova-novncproxy.service systemctl enable libvirtd.service openstack-nova-compute.service (CPT) systemctl start libvirtd.service openstack-nova-compute.service (CPT) \u9a8c\u8bc1 source ~/.admin-openrc (CTL) \u5217\u51fa\u670d\u52a1\u7ec4\u4ef6\uff0c\u9a8c\u8bc1\u6bcf\u4e2a\u6d41\u7a0b\u90fd\u6210\u529f\u542f\u52a8\u548c\u6ce8\u518c\uff1a openstack compute service list (CTL) \u5217\u51fa\u8eab\u4efd\u670d\u52a1\u4e2d\u7684API\u7aef\u70b9\uff0c\u9a8c\u8bc1\u4e0e\u8eab\u4efd\u670d\u52a1\u7684\u8fde\u63a5\uff1a openstack catalog list (CTL) \u5217\u51fa\u955c\u50cf\u670d\u52a1\u4e2d\u7684\u955c\u50cf\uff0c\u9a8c\u8bc1\u4e0e\u955c\u50cf\u670d\u52a1\u7684\u8fde\u63a5\uff1a openstack image list (CTL) \u68c0\u67e5cells\u662f\u5426\u8fd0\u4f5c\u6210\u529f\uff0c\u4ee5\u53ca\u5176\u4ed6\u5fc5\u8981\u6761\u4ef6\u662f\u5426\u5df2\u5177\u5907\u3002 nova-status upgrade check (CTL)","title":"Nova \u5b89\u88c5"},{"location":"install/openEuler-24.03-LTS/OpenStack-wallaby/#neutron","text":"\u521b\u5efa\u6570\u636e\u5e93\u3001\u670d\u52a1\u51ed\u8bc1\u548c API \u7aef\u70b9 \u521b\u5efa\u6570\u636e\u5e93\uff1a mysql -u root -p (CTL) MariaDB [(none)]> CREATE DATABASE neutron; MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \\ IDENTIFIED BY 'NEUTRON_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \\ IDENTIFIED BY 'NEUTRON_DBPASS'; MariaDB [(none)]> exit \u6ce8\u610f \u66ff\u6362 NEUTRON_DBPASS \u4e3a neutron \u6570\u636e\u5e93\u8bbe\u7f6e\u5bc6\u7801\u3002 source ~/.admin-openrc (CTL) \u521b\u5efaneutron\u670d\u52a1\u51ed\u8bc1 openstack user create --domain default --password-prompt neutron (CTL) openstack role add --project service --user neutron admin (CTL) openstack service create --name neutron --description \"OpenStack Networking\" network (CTL) \u521b\u5efaNeutron\u670d\u52a1API\u7aef\u70b9\uff1a openstack endpoint create --region RegionOne network public http://controller:9696 (CTL) openstack endpoint create --region RegionOne network internal 
## Neutron Installation

Create the database, service credentials, and API endpoints.

Create the database:

```shell
mysql -u root -p (CTL)

MariaDB [(none)]> CREATE DATABASE neutron;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
IDENTIFIED BY 'NEUTRON_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
IDENTIFIED BY 'NEUTRON_DBPASS';
MariaDB [(none)]> exit
```

**Note**: replace NEUTRON_DBPASS with the password you want to set for the neutron database.

```shell
source ~/.admin-openrc (CTL)
```

Create the neutron service credentials:

```shell
openstack user create --domain default --password-prompt neutron (CTL)
openstack role add --project service --user neutron admin (CTL)
openstack service create --name neutron --description "OpenStack Networking" network (CTL)
```

Create the Neutron service API endpoints:

```shell
openstack endpoint create --region RegionOne network public http://controller:9696 (CTL)
openstack endpoint create --region RegionOne network internal http://controller:9696 (CTL)
openstack endpoint create --region RegionOne network admin http://controller:9696 (CTL)
```

Install the packages:

```shell
yum install openstack-neutron openstack-neutron-linuxbridge ebtables ipset \ (CTL)
            openstack-neutron-ml2

yum install openstack-neutron-linuxbridge ebtables ipset (CPT)
```

Configure neutron.

Configure the main configuration:

```shell
vim /etc/neutron/neutron.conf

[database]
connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron (CTL)

[DEFAULT]
core_plugin = ml2 (CTL)
service_plugins = router (CTL)
allow_overlapping_ips = true (CTL)
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = true (CTL)
notify_nova_on_port_data_changes = true (CTL)
api_workers = 3 (CTL)

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = neutron
password = NEUTRON_PASS

[nova]
auth_url = http://controller:5000 (CTL)
auth_type = password (CTL)
project_domain_name = Default (CTL)
user_domain_name = Default (CTL)
region_name = RegionOne (CTL)
project_name = service (CTL)
username = nova (CTL)
password = NOVA_PASS (CTL)

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
```

Explanation: the [database] section configures the database entry point; the [DEFAULT] section enables the ml2 and router plugins, allows overlapping IP addresses, and configures the RabbitMQ message queue entry point; the [DEFAULT] and [keystone_authtoken] sections configure the identity service entry point; the [DEFAULT] and [nova] sections configure networking to notify compute of network topology changes; the [oslo_concurrency] section configures the lock path.

**Note**: replace NEUTRON_DBPASS with the password of the neutron database; replace RABBIT_PASS with the password of the openstack account in RabbitMQ; replace NEUTRON_PASS with the password of the neutron user; replace NOVA_PASS with the password of the nova user.

Configure the ML2 plugin:

```shell
vim /etc/neutron/plugins/ml2/ml2_conf.ini

[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security

[ml2_type_flat]
flat_networks = provider

[ml2_type_vxlan]
vni_ranges = 1:1000

[securitygroup]
enable_ipset = true
```

Create the symbolic link /etc/neutron/plugin.ini:

```shell
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
```

**Note**: the [ml2] section enables flat, vlan and vxlan networks, enables the linuxbridge and l2population mechanisms, and enables the port security extension driver; the [ml2_type_flat] section configures the flat network as the provider virtual network; the [ml2_type_vxlan] section configures the VXLAN network identifier range; the [securitygroup] section allows ipset.

Additional note: the exact l2 configuration can be adapted to your needs; this document uses a provider network with linuxbridge.

Configure the Linux bridge agent:
```shell
vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini

[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME

[vxlan]
enable_vxlan = true
local_ip = OVERLAY_INTERFACE_IP_ADDRESS
l2_population = true

[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
```

Explanation: the [linux_bridge] section maps the provider virtual network to the physical network interface; the [vxlan] section enables VXLAN overlay networks, configures the IP address of the physical interface that handles overlay networks, and enables layer-2 population; the [securitygroup] section enables security groups and configures the Linux bridge iptables firewall driver.

**Note**: replace PROVIDER_INTERFACE_NAME with the physical network interface; replace OVERLAY_INTERFACE_IP_ADDRESS with the management IP address of the control node.

Configure the Layer-3 agent:

```shell
vim /etc/neutron/l3_agent.ini (CTL)

[DEFAULT]
interface_driver = linuxbridge
```

Explanation: the [DEFAULT] section sets the interface driver to linuxbridge.

Configure the DHCP agent:

```shell
vim /etc/neutron/dhcp_agent.ini (CTL)

[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
```

Explanation: the [DEFAULT] section configures the linuxbridge interface driver and the Dnsmasq DHCP driver, and enables isolated metadata.

Configure the metadata agent:

```shell
vim /etc/neutron/metadata_agent.ini (CTL)

[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = METADATA_SECRET
```

Explanation: the [DEFAULT] section configures the metadata host and the shared secret.

**Note**: replace METADATA_SECRET with a suitable metadata proxy secret.

Configure nova:

```shell
vim /etc/nova/nova.conf

[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = Default
user_domain_name = Default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
service_metadata_proxy = true (CTL)
metadata_proxy_shared_secret = METADATA_SECRET (CTL)
```

Explanation: the [neutron] section configures the access parameters, enables the metadata proxy, and configures the secret.

**Note**: replace NEUTRON_PASS with the password of the neutron user; replace METADATA_SECRET with a suitable metadata proxy secret.

Sync the database:

```shell
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
--config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
```

Restart the Compute API service:

```shell
systemctl restart openstack-nova-api.service
```

Start the networking services:

```shell
systemctl enable neutron-server.service neutron-linuxbridge-agent.service \ (CTL)
neutron-dhcp-agent.service neutron-metadata-agent.service
systemctl enable neutron-l3-agent.service
systemctl restart openstack-nova-api.service neutron-server.service \ (CTL)
neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
neutron-metadata-agent.service neutron-l3-agent.service

systemctl enable neutron-linuxbridge-agent.service (CPT)
systemctl restart neutron-linuxbridge-agent.service openstack-nova-compute.service (CPT)
```

Verification

Verify that the neutron agents started successfully:

```shell
openstack network agent list
```
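As an optional follow-up, a provider network matching the flat `provider` mapping configured above can be created to confirm the data path. The subnet values below are placeholders and must be aligned with the physical network behind PROVIDER_INTERFACE_NAME:

```shell
source ~/.admin-openrc
# Illustrative addresses only; adjust the allocation pool, gateway and range
# to the physical network reachable through PROVIDER_INTERFACE_NAME.
openstack network create --share --external \
  --provider-physical-network provider \
  --provider-network-type flat provider
openstack subnet create --network provider \
  --allocation-pool start=203.0.113.101,end=203.0.113.200 \
  --gateway 203.0.113.1 --subnet-range 203.0.113.0/24 provider-subnet
```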
## Cinder Installation

Create the database, service credentials, and API endpoints.

Create the database:

```shell
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE cinder;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \
IDENTIFIED BY 'CINDER_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \
IDENTIFIED BY 'CINDER_DBPASS';
MariaDB [(none)]> exit
```

**Note**: replace CINDER_DBPASS with the password you want to set for the cinder database.

```shell
source ~/.admin-openrc
```

Create the cinder service credentials:

```shell
openstack user create --domain default --password-prompt cinder
openstack role add --project service --user cinder admin
openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
```

Create the Block Storage service API endpoints:

```shell
openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s
```

Install the packages:

```shell
yum install openstack-cinder-api openstack-cinder-scheduler (CTL)

yum install lvm2 device-mapper-persistent-data scsi-target-utils rpcbind nfs-utils \ (STG)
            openstack-cinder-volume openstack-cinder-backup
```

Prepare the storage devices; the following is only an example:

```shell
pvcreate /dev/vdb
vgcreate cinder-volumes /dev/vdb

vim /etc/lvm/lvm.conf

devices {
...
filter = [ "a/vdb/", "r/.*/"]
```
filter = [ \"a/vdb/\", \"r/.*/\"] \u89e3\u91ca \u5728devices\u90e8\u5206\uff0c\u6dfb\u52a0\u8fc7\u6ee4\u4ee5\u63a5\u53d7/dev/vdb\u8bbe\u5907\u62d2\u7edd\u5176\u4ed6\u8bbe\u5907\u3002 \u51c6\u5907NFS mkdir -p /root/cinder/backup cat << EOF >> /etc/export /root/cinder/backup 192.168.1.0/24(rw,sync,no_root_squash,no_all_squash) EOF \u914d\u7f6ecinder\u76f8\u5173\u914d\u7f6e\uff1a vim /etc/cinder/cinder.conf [DEFAULT] transport_url = rabbit://openstack:RABBIT_PASS@controller auth_strategy = keystone my_ip = 10.0.0.11 enabled_backends = lvm (STG) backup_driver=cinder.backup.drivers.nfs.NFSBackupDriver (STG) backup_share=HOST:PATH (STG) [database] connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder [keystone_authtoken] www_authenticate_uri = http://controller:5000 auth_url = http://controller:5000 memcached_servers = controller:11211 auth_type = password project_domain_name = Default user_domain_name = Default project_name = service username = cinder password = CINDER_PASS [oslo_concurrency] lock_path = /var/lib/cinder/tmp [lvm] volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver (STG) volume_group = cinder-volumes (STG) iscsi_protocol = iscsi (STG) iscsi_helper = tgtadm (STG) \u89e3\u91ca [database]\u90e8\u5206\uff0c\u914d\u7f6e\u6570\u636e\u5e93\u5165\u53e3\uff1b [DEFAULT]\u90e8\u5206\uff0c\u914d\u7f6eRabbitMQ\u6d88\u606f\u961f\u5217\u5165\u53e3\uff0c\u914d\u7f6emy_ip\uff1b [DEFAULT] [keystone_authtoken]\u90e8\u5206\uff0c\u914d\u7f6e\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u5165\u53e3\uff1b [oslo_concurrency]\u90e8\u5206\uff0c\u914d\u7f6elock path\u3002 \u6ce8\u610f \u66ff\u6362 CINDER_DBPASS \u4e3a cinder \u6570\u636e\u5e93\u7684\u5bc6\u7801\uff1b \u66ff\u6362 RABBIT_PASS \u4e3a RabbitMQ \u4e2d openstack \u8d26\u6237\u7684\u5bc6\u7801\uff1b \u914d\u7f6e my_ip \u4e3a\u63a7\u5236\u8282\u70b9\u7684\u7ba1\u7406 IP \u5730\u5740\uff1b \u66ff\u6362 CINDER_PASS \u4e3a cinder \u7528\u6237\u7684\u5bc6\u7801\uff1b \u66ff\u6362 HOST:PATH \u4e3a NFS \u7684HOSTIP\u548c\u5171\u4eab\u8def\u5f84\uff1b \u540c\u6b65\u6570\u636e\u5e93\uff1a su -s /bin/sh -c \"cinder-manage db sync\" cinder (CTL) \u914d\u7f6enova\uff1a vim /etc/nova/nova.conf (CTL) [cinder] os_region_name = RegionOne \u91cd\u542f\u8ba1\u7b97API\u670d\u52a1 systemctl restart openstack-nova-api.service \u542f\u52a8cinder\u670d\u52a1 systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service (CTL) systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service (CTL) systemctl enable rpcbind.service nfs-server.service tgtd.service iscsid.service \\ (STG) openstack-cinder-volume.service \\ openstack-cinder-backup.service systemctl start rpcbind.service nfs-server.service tgtd.service iscsid.service \\ (STG) openstack-cinder-volume.service \\ openstack-cinder-backup.service \u6ce8\u610f \u5f53cinder\u4f7f\u7528tgtadm\u7684\u65b9\u5f0f\u6302\u5377\u7684\u65f6\u5019\uff0c\u8981\u4fee\u6539/etc/tgt/tgtd.conf\uff0c\u5185\u5bb9\u5982\u4e0b\uff0c\u4fdd\u8bc1tgtd\u53ef\u4ee5\u53d1\u73b0cinder-volume\u7684iscsi target\u3002 include /var/lib/cinder/volumes/* \u9a8c\u8bc1 source ~/.admin-openrc openstack volume service list","title":"Cinder \u5b89\u88c5"},{"location":"install/openEuler-24.03-LTS/OpenStack-wallaby/#horizon","text":"\u5b89\u88c5\u8f6f\u4ef6\u5305 yum install openstack-dashboard \u4fee\u6539\u6587\u4ef6 \u4fee\u6539\u53d8\u91cf vim /etc/openstack-dashboard/local_settings OPENSTACK_HOST = \"controller\" ALLOWED_HOSTS = ['*', ] SESSION_ENGINE = 'django.contrib.sessions.backends.cache' CACHES = { 
## Horizon Installation

Install the package:

```shell
yum install openstack-dashboard
```

Edit the configuration file and modify the variables:

vim /etc/openstack-dashboard/local_settings

```
OPENSTACK_HOST = "controller"
ALLOWED_HOSTS = ['*', ]

SESSION_ENGINE = 'django.contrib.sessions.backends.cache'

CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'controller:11211',
    }
}

OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "member"
WEBROOT = '/dashboard'
POLICY_FILES_PATH = "/etc/openstack-dashboard"

OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 3,
}
```

Restart the httpd service:

```shell
systemctl restart httpd.service memcached.service
```

Verification: open a browser, go to http://HOSTIP/dashboard/ , and log in to horizon.

Note: replace HOSTIP with the management-plane IP address of the controller node.

## Tempest Installation

Tempest is the OpenStack integration test service. It is recommended if you need comprehensive, automated functional testing of an installed OpenStack environment; otherwise it can be skipped.

Install Tempest:

```shell
yum install openstack-tempest
```

Initialize a workspace:

```shell
tempest init mytest
```

Edit the configuration file:

```shell
cd mytest
vi etc/tempest.conf
```

tempest.conf must describe the current OpenStack environment; see the official sample for the details.

Run the tests:

```shell
tempest run
```

Install tempest plugins (optional). The individual OpenStack services also ship tempest test packages, which can be installed to extend tempest's coverage. In Wallaby, plugin packages for Cinder, Glance, Keystone, Ironic, and Trove are provided and can be installed as follows:

```shell
yum install python3-cinder-tempest-plugin python3-glance-tempest-plugin python3-ironic-tempest-plugin python3-keystone-tempest-plugin python3-trove-tempest-plugin
```
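For orientation only, returning to the tempest.conf edit step above, a minimal sketch of the kind of values it needs; the option names follow the upstream tempest sample, and every value (passwords, flavor and image IDs) is a hypothetical placeholder, so consult the official example for the authoritative list:

```
[auth]
admin_username = admin
admin_password = ADMIN_PASS
admin_project_name = admin
admin_domain_name = Default

[identity]
uri_v3 = http://controller:5000/v3

[compute]
image_ref = <glance image UUID>
flavor_ref = <nova flavor ID>
```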
## Ironic Installation

Ironic is the OpenStack bare metal service. It is recommended if you need bare metal provisioning; otherwise it can be skipped.

Set up the database. The Bare Metal service stores its information in a database; create an ironic database that the ironic user can access, replacing IRONIC_DBPASSWORD with a suitable password:

```shell
mysql -u root -p
MariaDB [(none)]> CREATE DATABASE ironic CHARACTER SET utf8;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'localhost' \
  IDENTIFIED BY 'IRONIC_DBPASSWORD';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'%' \
  IDENTIFIED BY 'IRONIC_DBPASSWORD';
```

Create the service user authentication.

1. Create the Bare Metal service users:

```shell
openstack user create --password IRONIC_PASSWORD \
  --email ironic@example.com ironic
openstack role add --project service --user ironic admin
openstack service create --name ironic --description "Ironic baremetal provisioning service" baremetal
openstack service create --name ironic-inspector --description "Ironic inspector baremetal provisioning service" baremetal-introspection
openstack user create --password IRONIC_INSPECTOR_PASSWORD --email ironic_inspector@example.com ironic_inspector
openstack role add --project service --user ironic_inspector admin
```

2. Create the Bare Metal service endpoints:

```shell
openstack endpoint create --region RegionOne baremetal admin http://$IRONIC_NODE:6385
openstack endpoint create --region RegionOne baremetal public http://$IRONIC_NODE:6385
openstack endpoint create --region RegionOne baremetal internal http://$IRONIC_NODE:6385
openstack endpoint create --region RegionOne baremetal-introspection internal http://172.20.19.13:5050/v1
openstack endpoint create --region RegionOne baremetal-introspection public http://172.20.19.13:5050/v1
openstack endpoint create --region RegionOne baremetal-introspection admin http://172.20.19.13:5050/v1
```

Configure the ironic-api service. The configuration file is /etc/ironic/ironic.conf.

1. Configure the location of the database with the connection option as shown below, replacing IRONIC_DBPASSWORD with the password of the ironic user and DB_IP with the IP address of the DB server:

```
[database]
# The SQLAlchemy connection string used to connect to the
# database (string value)
connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic
```

2. Configure ironic-api to use the RabbitMQ message broker with the following options, replacing RPC_* with the RabbitMQ address details and credentials:

```
[DEFAULT]
# A URL representing the messaging driver to use and its full
# configuration. (string value)
transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
```

RabbitMQ can also be replaced with json-rpc if preferred.
3. Configure ironic-api to use the Identity service credentials, replacing PUBLIC_IDENTITY_IP with the public IP of the Identity server, PRIVATE_IDENTITY_IP with its private IP, and IRONIC_PASSWORD with the password of the ironic user in the Identity service:

```
[DEFAULT]
# Authentication strategy used by ironic-api: one of
# "keystone" or "noauth". "noauth" should not be used in a
# production environment because all authentication will be
# disabled. (string value)
auth_strategy=keystone
host = controller
memcache_servers = controller:11211
enabled_network_interfaces = flat,noop,neutron
default_network_interface = noop
transport_url = rabbit://openstack:RABBITPASSWD@controller:5672/
enabled_hardware_types = ipmi
enabled_boot_interfaces = pxe
enabled_deploy_interfaces = direct
default_deploy_interface = direct
enabled_inspect_interfaces = inspector
enabled_management_interfaces = ipmitool
enabled_power_interfaces = ipmitool
enabled_rescue_interfaces = no-rescue,agent
isolinux_bin = /usr/share/syslinux/isolinux.bin
logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s

[keystone_authtoken]
# Authentication type to load (string value)
auth_type=password
# Complete public Identity API endpoint (string value)
www_authenticate_uri=http://PUBLIC_IDENTITY_IP:5000
# Complete admin Identity API endpoint. (string value)
auth_url=http://PRIVATE_IDENTITY_IP:5000
# Service username. (string value)
username=ironic
# Service account password. (string value)
password=IRONIC_PASSWORD
# Service tenant name. (string value)
project_name=service
# Domain name containing project (string value)
project_domain_name=Default
# User's domain name (string value)
user_domain_name=Default

[agent]
deploy_logs_collect = always
deploy_logs_local_path = /var/log/ironic/deploy
deploy_logs_storage_backend = local
image_download_source = http
stream_raw_images = false
force_raw_images = false
verify_ca = False

[oslo_concurrency]

[oslo_messaging_notifications]
transport_url = rabbit://openstack:123456@172.20.19.25:5672/
topics = notifications
driver = messagingv2

[oslo_messaging_rabbit]
amqp_durable_queues = True
rabbit_ha_queues = True

[pxe]
ipxe_enabled = false
pxe_append_params = nofb nomodeset vga=normal coreos.autologin ipa-insecure=1
image_cache_size = 204800
tftp_root=/var/lib/tftpboot/cephfs/
tftp_master_path=/var/lib/tftpboot/cephfs/master_images

[dhcp]
dhcp_provider = none
```

4. Create the Bare Metal service database tables:

```shell
ironic-dbsync --config-file /etc/ironic/ironic.conf create_schema
```

5. Restart the ironic-api service:

```shell
sudo systemctl restart openstack-ironic-api
```

Configure the ironic-conductor service.

1. Replace HOST_IP with the IP of the conductor host:

```
[DEFAULT]
# IP address of this host. If unset, will determine the IP
# programmatically. If unable to do so, will use "127.0.0.1".
# (string value)
my_ip=HOST_IP
```

2. Configure the location of the database. ironic-conductor should use the same configuration as ironic-api; replace IRONIC_DBPASSWORD with the password of the ironic user and DB_IP with the IP address of the DB server:

```
[database]
# The SQLAlchemy connection string to use to connect to the
# database. (string value)
connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic
```
3. Configure the RabbitMQ message broker with the following options. ironic-conductor should use the same configuration as ironic-api; replace RPC_* with the RabbitMQ address details and credentials:

```
[DEFAULT]
# A URL representing the messaging driver to use and its full
# configuration. (string value)
transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
```

RabbitMQ can also be replaced with json-rpc if preferred.

4. Configure credentials for accessing other OpenStack services.

To communicate with other OpenStack services, the Bare Metal service needs to authenticate with the OpenStack Identity service using service user credentials when it calls them. These credentials must be configured in each configuration section associated with the corresponding service:

- [neutron] - access to the OpenStack Networking service
- [glance] - access to the OpenStack Image service
- [swift] - access to the OpenStack Object Storage service
- [cinder] - access to the OpenStack Block Storage service
- [inspector] - access to the OpenStack bare metal introspection service
- [service_catalog] - a special entry that stores the credentials the Bare Metal service uses to discover its own API URL endpoint as registered in the OpenStack Identity service catalog

For simplicity, the same service user can be used for all services. For backward compatibility it should be the same user configured in the [keystone_authtoken] section of ironic-api, but this is not mandatory; a separate service user can be created and configured for each service.

In the example below, the authentication information for accessing the OpenStack Networking service is configured so that:

- the Networking service is deployed in the Identity service region named RegionOne, with only the public endpoint interface registered in the service catalog;
- requests use a specific CA SSL certificate for HTTPS connections;
- the same service user as in the ironic-api configuration is used;
- the dynamic password authentication plugin discovers a suitable Identity service API version based on the other options.

```
[neutron]
# Authentication type to load (string value)
auth_type = password
# Authentication URL (string value)
auth_url=https://IDENTITY_IP:5000/
# Username (string value)
username=ironic
# User's password (string value)
password=IRONIC_PASSWORD
# Project name to scope to (string value)
project_name=service
# Domain ID containing project (string value)
project_domain_id=default
# User's domain id (string value)
user_domain_id=default
# PEM encoded Certificate Authority to use when verifying
# HTTPs connections. (string value)
cafile=/opt/stack/data/ca-bundle.pem
# The default region_name for endpoint URL discovery. (string
# value)
region_name = RegionOne
# List of interfaces, in order of preference, for endpoint
# URL. (list value)
valid_interfaces=public
```
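If the per-service sections are filled in the same way, a [glance] entry would look analogous to the [neutron] example above. This is a sketch only, reusing the same hypothetical service user and Identity endpoint:

```
[glance]
auth_type = password
auth_url = https://IDENTITY_IP:5000/
username = ironic
password = IRONIC_PASSWORD
project_name = service
project_domain_id = default
user_domain_id = default
region_name = RegionOne
valid_interfaces = public
```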
By default, to communicate with another service, the Bare Metal service tries to discover a suitable endpoint for that service through the Identity service catalog. To use a different endpoint for a particular service, specify it with the endpoint_override option in the Bare Metal service configuration file:

```
[neutron]
...
endpoint_override =
```

5. Configure the allowed drivers and hardware types.

Set the hardware types allowed by ironic-conductor with enabled_hardware_types:

```
[DEFAULT]
enabled_hardware_types = ipmi
```

Configure the hardware interfaces:

```
enabled_boot_interfaces = pxe
enabled_deploy_interfaces = direct,iscsi
enabled_inspect_interfaces = inspector
enabled_management_interfaces = ipmitool
enabled_power_interfaces = ipmitool
```

Configure the interface defaults:

```
[DEFAULT]
default_deploy_interface = direct
default_network_interface = neutron
```

If any driver that uses Direct deploy is enabled, the Swift backend of the Image service must be installed and configured. The Ceph Object Gateway (RADOS Gateway) is also supported as an Image service backend.

6. Restart the ironic-conductor service:

```shell
sudo systemctl restart openstack-ironic-conductor
```

Configure the ironic-inspector service. The configuration file is /etc/ironic-inspector/inspector.conf.

1. Create the database:

```shell
# mysql -u root -p
MariaDB [(none)]> CREATE DATABASE ironic_inspector CHARACTER SET utf8;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic_inspector.* TO 'ironic_inspector'@'localhost' \
  IDENTIFIED BY 'IRONIC_INSPECTOR_DBPASSWORD';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic_inspector.* TO 'ironic_inspector'@'%' \
  IDENTIFIED BY 'IRONIC_INSPECTOR_DBPASSWORD';
```

2. Configure the location of the database with the connection option as shown below, replacing IRONIC_INSPECTOR_DBPASSWORD with the password of the ironic_inspector user and DB_IP with the IP address of the DB server:

```
[database]
backend = sqlalchemy
connection = mysql+pymysql://ironic_inspector:IRONIC_INSPECTOR_DBPASSWORD@DB_IP/ironic_inspector
min_pool_size = 100
max_pool_size = 500
pool_timeout = 30
max_retries = 5
max_overflow = 200
db_retry_interval = 2
db_inc_retry_interval = True
db_max_retry_interval = 2
db_max_retries = 5
```

3. Configure the message queue address:

```
[DEFAULT]
transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
```
4. Configure keystone authentication:

```
[DEFAULT]
auth_strategy = keystone
timeout = 900
rootwrap_config = /etc/ironic-inspector/rootwrap.conf
logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s
log_dir = /var/log/ironic-inspector
state_path = /var/lib/ironic-inspector
use_stderr = False

[ironic]
api_endpoint = http://IRONIC_API_HOST_ADDRESS:6385
auth_type = password
auth_url = http://PUBLIC_IDENTITY_IP:5000
auth_strategy = keystone
ironic_url = http://IRONIC_API_HOST_ADDRESS:6385
os_region = RegionOne
project_name = service
project_domain_name = Default
user_domain_name = Default
username = IRONIC_SERVICE_USER_NAME
password = IRONIC_SERVICE_USER_PASSWORD

[keystone_authtoken]
auth_type = password
auth_url = http://control:5000
www_authenticate_uri = http://control:5000
project_domain_name = default
user_domain_name = default
project_name = service
username = ironic_inspector
password = IRONICPASSWD
region_name = RegionOne
memcache_servers = control:11211
token_cache_time = 300

[processing]
add_ports = active
processing_hooks = $default_processing_hooks,local_link_connection,lldp_basic
ramdisk_logs_dir = /var/log/ironic-inspector/ramdisk
always_store_ramdisk_logs = true
store_data = none
power_off = false

[pxe_filter]
driver = iptables

[capabilities]
boot_mode=True
```

5. Configure the ironic-inspector dnsmasq service:

```
# Configuration file: /etc/ironic-inspector/dnsmasq.conf
port=0
interface=enp3s0                          # replace with the actual listening interface
dhcp-range=172.20.19.100,172.20.19.110    # replace with the actual DHCP address range
bind-interfaces
enable-tftp
dhcp-match=set:efi,option:client-arch,7
dhcp-match=set:efi,option:client-arch,9
dhcp-match=aarch64, option:client-arch,11
dhcp-boot=tag:aarch64,grubaa64.efi
dhcp-boot=tag:!aarch64,tag:efi,grubx64.efi
dhcp-boot=tag:!aarch64,tag:!efi,pxelinux.0
tftp-root=/tftpboot                       # replace with the actual tftpboot directory
log-facility=/var/log/dnsmasq.log
```

6. Disable DHCP on the subnet of the ironic provision network:

```shell
openstack subnet set --no-dhcp 72426e89-f552-4dc4-9ac7-c4e131ce7f3c
```

7. Initialize the ironic-inspector database. Run on the controller node:

```shell
ironic-inspector-dbsync --config-file /etc/ironic-inspector/inspector.conf upgrade
```

8. Start the services:

```shell
systemctl enable --now openstack-ironic-inspector.service
systemctl enable --now openstack-ironic-inspector-dnsmasq.service
```

6. Configure the httpd service.

Create the httpd root directory that ironic will use and set its owner and group. The directory path must match the http_root option in the [deploy] section of /etc/ironic/ironic.conf:

```shell
mkdir -p /var/lib/ironic/httproot
chown ironic.ironic /var/lib/ironic/httproot
```

Install and configure the httpd service. Install httpd (skip if already installed):

```shell
yum install httpd -y
```

Create the /etc/httpd/conf.d/openstack-ironic-httpd.conf file with the following directives:

```
Listen 8080
ServerName ironic.openeuler.com
ErrorLog "/var/log/httpd/openstack-ironic-httpd-error_log"
CustomLog "/var/log/httpd/openstack-ironic-httpd-access_log" "%h %l %u %t \"%r\" %>s %b"
DocumentRoot "/var/lib/ironic/httproot"
Options Indexes FollowSymLinks
Require all granted
LogLevel warn
AddDefaultCharset UTF-8
EnableSendfile on
```

Note that the listening port must match the port specified in the http_url option of the [deploy] section in /etc/ironic/ironic.conf.

Restart the httpd service:

```shell
systemctl restart httpd
```
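The directives above appear to have lost their enclosing Apache context blocks when the page was indexed; a plausible reconstruction that wraps them in VirtualHost and Directory blocks looks like the following. The wrapper tags are an assumption, not part of the original file:

```
Listen 8080
<VirtualHost *:8080>
    ServerName ironic.openeuler.com
    ErrorLog "/var/log/httpd/openstack-ironic-httpd-error_log"
    CustomLog "/var/log/httpd/openstack-ironic-httpd-access_log" "%h %l %u %t \"%r\" %>s %b"
    DocumentRoot "/var/lib/ironic/httproot"
    <Directory "/var/lib/ironic/httproot">
        Options Indexes FollowSymLinks
        Require all granted
    </Directory>
    LogLevel warn
    AddDefaultCharset UTF-8
    EnableSendfile on
</VirtualHost>
```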
7. Building the deploy ramdisk image.

The Wallaby ramdisk image can be built with the ironic-python-agent service or the disk-image-builder tool, or with the community's latest ironic-python-agent-builder. Other tools may also be used.

To use the native Wallaby tools, install the corresponding package:

```shell
yum install openstack-ironic-python-agent
```

or

```shell
yum install diskimage-builder
```

See the official documentation for detailed usage.

The following describes the complete process of building the deploy image for ironic with ironic-python-agent-builder.

Install ironic-python-agent-builder.

1. Install the tool:

```shell
pip install ironic-python-agent-builder
```

2. Fix the python interpreter in the following files:

```shell
/usr/bin/yum /usr/libexec/urlgrabber-ext-down
```

3. Install the other required tools:

```shell
yum install git
```

Because DIB depends on the `semanage` command, make sure it is available before building the image: run `semanage --help`; if the command is not found, install it:

```shell
# First find out which package provides it
[root@localhost ~]# yum provides /usr/sbin/semanage
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirror.vcu.edu
 * extras: mirror.vcu.edu
 * updates: mirror.math.princeton.edu
policycoreutils-python-2.5-34.el7.aarch64 : SELinux policy core python utilities
Repo        : base
Matched from:
Filename    : /usr/sbin/semanage
# Install it
[root@localhost ~]# yum install policycoreutils-python
```

Build the image.

For the arm architecture, additionally set:

```shell
export ARCH=aarch64
```

Basic usage:

```shell
usage: ironic-python-agent-builder [-h] [-r RELEASE] [-o OUTPUT] [-e ELEMENT]
                                   [-b BRANCH] [-v] [--extra-args EXTRA_ARGS]
                                   distribution

positional arguments:
  distribution          Distribution to use

optional arguments:
  -h, --help            show this help message and exit
  -r RELEASE, --release RELEASE
                        Distribution release to use
  -o OUTPUT, --output OUTPUT
                        Output base file name
  -e ELEMENT, --element ELEMENT
                        Additional DIB element to use
  -b BRANCH, --branch BRANCH
                        If set, override the branch that is used for ironic-
                        python-agent and requirements
  -v, --verbose         Enable verbose logging in diskimage-builder
  --extra-args EXTRA_ARGS
                        Extra arguments to pass to diskimage-builder
```

Example:

```shell
ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky
```

Allow ssh login: initialize the environment variables, then build the image:

```shell
export DIB_DEV_USER_USERNAME=ipa
export DIB_DEV_USER_PWDLESS_SUDO=yes
export DIB_DEV_USER_PASSWORD='123'
ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky -e selinux-permissive -e devuser
```
Specify a code repository: initialize the corresponding environment variables, then build the image:

```shell
# Specify the repository address and the ref to build
DIB_REPOLOCATION_ironic_python_agent=git@172.20.2.149:liuzz/ironic-python-agent.git
DIB_REPOREF_ironic_python_agent=origin/develop

# Or clone the code directly from gerrit
DIB_REPOLOCATION_ironic_python_agent=https://review.opendev.org/openstack/ironic-python-agent
DIB_REPOREF_ironic_python_agent=refs/changes/43/701043/1
```

Reference: [source-repositories](https://docs.openstack.org/diskimage-builder/latest/elements/source-repositories/README.html).

Specifying the repository address and ref has been verified to work.

Note

The PXE configuration file template in native OpenStack does not support the arm64 architecture, so the native OpenStack code has to be modified by the user:

In Wallaby, the community ironic still does not support arm64 UEFI PXE boot. The symptom is that the generated grub.cfg file (usually under /tftpboot/) has the wrong format, so PXE boot fails, as shown below.

The generated, incorrect configuration file:

![ironic-err](../../img/install/ironic-err.png)

As the image shows, on the arm architecture the commands that load the vmlinux and ramdisk images are `linux` and `initrd` respectively; the highlighted commands in the image are the x86 UEFI PXE boot variants. Users need to modify the code that generates grub.cfg themselves.
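For orientation, a corrected grub.cfg stanza on aarch64 would use the `linux` and `initrd` commands. The menu entry name and the kernel/ramdisk file names below are hypothetical placeholders, and the kernel arguments should match the pxe_append_params configured earlier:

```
set default=deploy
set timeout=5

menuentry 'deploy' {
    linux  deploy_kernel nofb nomodeset vga=normal coreos.autologin ipa-insecure=1
    initrd deploy_ramdisk
}
```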
TLS errors when ironic sends command status query requests to IPA:

In Wallaby, IPA and ironic both enable TLS authentication by default when sending requests to each other; disable it as described in the official documentation.

1. Edit the ironic configuration file (/etc/ironic/ironic.conf), adding ipa-insecure=1 in the following configuration:

```
[agent]
verify_ca = False

[pxe]
pxe_append_params = nofb nomodeset vga=normal coreos.autologin ipa-insecure=1
```

2. In the ramdisk image, add the IPA configuration file /etc/ironic_python_agent/ironic_python_agent.conf with the following TLS settings (the /etc/ironic_python_agent directory has to be created first):

```
[DEFAULT]
enable_auto_tls = False
```

Set the permissions:

```
chown -R ipa.ipa /etc/ironic_python_agent/
```

3. Edit the service file of the IPA service to add the configuration file option:

vim /usr/lib/systemd/system/ironic-python-agent.service

```
[Unit]
Description=Ironic Python Agent
After=network-online.target

[Service]
ExecStartPre=/sbin/modprobe vfat
ExecStart=/usr/local/bin/ironic-python-agent --config-file /etc/ironic_python_agent/ironic_python_agent.conf
Restart=always
RestartSec=30s

[Install]
WantedBy=multi-user.target
```

## Kolla Installation

Kolla provides production-ready containerized deployment for OpenStack services. Kolla and Kolla-ansible are included in openEuler 24.03 LTS.

Installing Kolla is very simple; just install the corresponding RPM packages:

```shell
yum install openstack-kolla openstack-kolla-ansible
```

After installation, commands such as kolla-ansible, kolla-build, kolla-genpwd, and kolla-mergepwd are available.
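As a rough illustration of what usually comes next (not covered by this guide), a typical kolla-ansible all-in-one flow looks like the sketch below; the inventory path shown is the upstream default and may differ with openEuler packaging:

```shell
# Sketch only: generate service passwords, then deploy on the local host.
kolla-genpwd
kolla-ansible -i /usr/share/kolla-ansible/ansible/inventory/all-in-one bootstrap-servers
kolla-ansible -i /usr/share/kolla-ansible/ansible/inventory/all-in-one prechecks
kolla-ansible -i /usr/share/kolla-ansible/ansible/inventory/all-in-one deploy
```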
## Trove Installation

Trove is the OpenStack database service. It is recommended if you want to use the database service provided by OpenStack; otherwise it can be skipped.

1. Set up the database.

The Database service stores its information in a database; create a trove database that the trove user can access, replacing TROVE_DBPASSWORD with a suitable password:

```shell
mysql -u root -p
MariaDB [(none)]> CREATE DATABASE trove CHARACTER SET utf8;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'localhost' \
  IDENTIFIED BY 'TROVE_DBPASSWORD';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'%' \
  IDENTIFIED BY 'TROVE_DBPASSWORD';
```

2. Create the service user authentication.

1) Create the Trove service user:

```shell
openstack user create --password TROVE_PASSWORD \
  --email trove@example.com trove
openstack role add --project service --user trove admin
openstack service create --name trove --description "Database service" database
```

Explanation: replace TROVE_PASSWORD with the password of the trove user.

2) Create the Database service endpoints:

```shell
openstack endpoint create --region RegionOne database public http://controller:8779/v1.0/%\(tenant_id\)s
openstack endpoint create --region RegionOne database internal http://controller:8779/v1.0/%\(tenant_id\)s
openstack endpoint create --region RegionOne database admin http://controller:8779/v1.0/%\(tenant_id\)s
```

3. Install and configure the Trove components.

1) Install the Trove packages:

```shell
yum install openstack-trove python-troveclient
```

2) Configure trove.conf:

vim /etc/trove/trove.conf

```
[DEFAULT]
bind_host=TROVE_NODE_IP
log_dir = /var/log/trove
network_driver = trove.network.neutron.NeutronDriver
management_security_groups =
nova_keypair = trove-mgmt
default_datastore = mysql
taskmanager_manager = trove.taskmanager.manager.Manager
trove_api_workers = 5
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
reboot_time_out = 300
usage_timeout = 900
agent_call_high_timeout = 1200
use_syslog = False
debug = True

# Set these if using Neutron Networking
network_driver=trove.network.neutron.NeutronDriver
network_label_regex=.*

transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/

[database]
connection = mysql+pymysql://trove:TROVE_DBPASS@controller/trove

[keystone_authtoken]
project_domain_name = Default
project_name = service
user_domain_name = Default
password = trove
username = trove
auth_url = http://controller:5000/v3/
auth_type = password

[service_credentials]
auth_url = http://controller:5000/v3/
region_name = RegionOne
project_name = service
password = trove
project_domain_name = Default
user_domain_name = Default
username = trove

[mariadb]
tcp_ports = 3306,4444,4567,4568

[mysql]
tcp_ports = 3306

[postgresql]
tcp_ports = 5432
```

Explanation:

- In the [DEFAULT] group, bind_host is set to the IP of the node where Trove is deployed.
- nova_compute_url and cinder_url, if set, are the endpoints created for Nova and Cinder in Keystone.
- nova_proxy_XXX, if set, describes a user that can access the Nova service; the example uses the admin user.
- transport_url is the RabbitMQ connection information; replace RABBIT_PASS with the RabbitMQ password.
- In the [database] group, connection points to the database created for Trove in MySQL above.
- In the Trove user credentials, replace TROVE_PASS with the actual password of the trove user.
3) Configure trove-guestagent.conf:

vim /etc/trove/trove-guestagent.conf

```
[DEFAULT]
log_file = trove-guestagent.log
log_dir = /var/log/trove/
ignore_users = os_admin
control_exchange = trove
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
rpc_backend = rabbit
command_process_timeout = 60
use_syslog = False
debug = True

[service_credentials]
auth_url = http://controller:5000/v3/
region_name = RegionOne
project_name = service
password = TROVE_PASS
project_domain_name = Default
user_domain_name = Default
username = trove

[mysql]
docker_image = your-registry/your-repo/mysql
backup_docker_image = your-registry/your-repo/db-backup-mysql:1.1.0
```

Explanation: the guestagent is a standalone Trove component that must be built into the virtual machine image Trove boots through Nova. After a database instance is created, the guestagent process starts and reports heartbeats to Trove through the message queue (RabbitMQ), so the RabbitMQ user and password must be configured here. Starting from the Victoria release, Trove uses a single unified image for the different database types; the database service itself runs in a Docker container inside the guest VM.

- transport_url is the RabbitMQ connection information; replace RABBIT_PASS with the RabbitMQ password.
- In the Trove user credentials, replace TROVE_PASS with the actual password of the trove user.

4. Populate the Trove database:

```shell
su -s /bin/sh -c "trove-manage db_sync" trove
```

5. Finish the installation.

Enable the Trove services to start on boot:

```shell
systemctl enable openstack-trove-api.service \
  openstack-trove-taskmanager.service \
  openstack-trove-conductor.service
```

Start the services:

```shell
systemctl start openstack-trove-api.service \
  openstack-trove-taskmanager.service \
  openstack-trove-conductor.service
```

## Swift Installation

Swift provides an elastic, scalable, and highly available distributed object storage service suitable for storing large amounts of unstructured data.

Create the service credentials and API endpoints.

Create the service credentials:

```shell
# Create the swift user:
openstack user create --domain default --password-prompt swift
# Add the admin role to the swift user:
openstack role add --project service --user swift admin
# Create the swift service entity:
openstack service create --name swift --description "OpenStack Object Storage" object-store
```

Create the swift API endpoints:

```shell
openstack endpoint create --region RegionOne object-store public http://controller:8080/v1/AUTH_%\(project_id\)s
openstack endpoint create --region RegionOne object-store internal http://controller:8080/v1/AUTH_%\(project_id\)s
openstack endpoint create --region RegionOne object-store admin http://controller:8080/v1
```

Install the packages:

```shell
yum install openstack-swift-proxy python3-swiftclient python3-keystoneclient python3-keystonemiddleware memcached   (CTL)
```

Configure proxy-server. The Swift RPM package already ships a basically usable proxy-server.conf; only the IP and the swift password in it need to be changed manually.

***Note***
**Replace the password with the one you chose for the swift user in the Identity service.**

4. Install and configure the storage nodes. (STG)

Install the supporting packages:

```shell
yum install xfsprogs rsync
```

Format the /dev/vdb and /dev/vdc devices as XFS:

```shell
mkfs.xfs /dev/vdb
mkfs.xfs /dev/vdc
```

Create the mount point directory structure:

```shell
mkdir -p /srv/node/vdb
mkdir -p /srv/node/vdc
```

Find the UUIDs of the new partitions:

```shell
blkid
```

Edit the /etc/fstab file and add the following entries:

```shell
UUID="" /srv/node/vdb xfs noatime 0 2
UUID="" /srv/node/vdc xfs noatime 0 2
```

Mount the devices:

```shell
mount /srv/node/vdb
mount /srv/node/vdc
```

***Note***
**If disaster recovery is not needed, only one device has to be created in the steps above, and the rsync configuration below can be skipped.**

(Optional) Create or edit the /etc/rsyncd.conf file to contain the following:
```shell
[DEFAULT]
uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
address = MANAGEMENT_INTERFACE_IP_ADDRESS

[account]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/account.lock

[container]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/container.lock

[object]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/object.lock
```

**Replace MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network on the storage node.**

Start the rsyncd service and configure it to start on boot:

```shell
systemctl enable rsyncd.service
systemctl start rsyncd.service
```

5. Install and configure the components on the storage nodes. (STG)

Install the packages:

```shell
yum install openstack-swift-account openstack-swift-container openstack-swift-object
```

Edit the account-server.conf, container-server.conf, and object-server.conf files in /etc/swift, replacing bind_ip with the IP address of the management network on the storage node.

Ensure proper ownership of the mount point directory structure:

```shell
chown -R swift:swift /srv/node
```

Create the recon directory and ensure it has the correct ownership:

```shell
mkdir -p /var/cache/swift
chown -R root:swift /var/cache/swift
chmod -R 775 /var/cache/swift
```

6. Create the account ring. (CTL)

Change to the /etc/swift directory:

```shell
cd /etc/swift
```

Create the base account.builder file:

```shell
swift-ring-builder account.builder create 10 1 1
```

Add each storage node to the ring:

```shell
swift-ring-builder account.builder add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6202 --device DEVICE_NAME --weight DEVICE_WEIGHT
```

**Replace STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network on the storage node, and DEVICE_NAME with the name of a storage device on that node.**

***Note***
**Repeat this command for every storage device on every storage node.**

Verify the ring contents:

```shell
swift-ring-builder account.builder
```

Rebalance the ring:

```shell
swift-ring-builder account.builder rebalance
```
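For illustration (the same pattern applies to the container and object rings below), a filled-in version of the add command for one device; the IP 10.0.0.51 and the device name vdb are hypothetical example values matching the earlier device layout:

```shell
swift-ring-builder account.builder add \
  --region 1 --zone 1 --ip 10.0.0.51 --port 6202 \
  --device vdb --weight 100
```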
7. Create the container ring. (CTL)

Change to the `/etc/swift` directory.

Create the base `container.builder` file:

```shell
swift-ring-builder container.builder create 10 1 1
```

Add each storage node to the ring:

```shell
swift-ring-builder container.builder \
  add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6201 \
  --device DEVICE_NAME --weight 100
```

**Replace STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network on the storage node, and DEVICE_NAME with the name of a storage device on that node.**

***Note***
**Repeat this command for every storage device on every storage node.**

Verify the ring contents:

```shell
swift-ring-builder container.builder
```

Rebalance the ring:

```shell
swift-ring-builder container.builder rebalance
```

8. Create the object ring. (CTL)

Change to the `/etc/swift` directory.

Create the base `object.builder` file:

```shell
swift-ring-builder object.builder create 10 1 1
```

Add each storage node to the ring:

```shell
swift-ring-builder object.builder \
  add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6200 \
  --device DEVICE_NAME --weight 100
```

**Replace STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network on the storage node, and DEVICE_NAME with the name of a storage device on that node.**

***Note***
**Repeat this command for every storage device on every storage node.**

Verify the ring contents:

```shell
swift-ring-builder object.builder
```

Rebalance the ring:

```shell
swift-ring-builder object.builder rebalance
```

Distribute the ring configuration files: copy the `account.ring.gz`, `container.ring.gz`, and `object.ring.gz` files to the `/etc/swift` directory on every storage node and on any other node running the proxy service.

9. Finish the installation.

Edit the /etc/swift/swift.conf file:

```
[swift-hash]
swift_hash_path_suffix = test-hash
swift_hash_path_prefix = test-hash

[storage-policy:0]
name = Policy-0
default = yes
```

Replace test-hash with unique values.
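The prefix and suffix only need to be unique, secret strings that are kept identical across the cluster; one way to generate them (not mandated by this guide) is:

```shell
openssl rand -hex 16   # run twice, once for the suffix and once for the prefix
```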
Copy the swift.conf file to the /etc/swift directory on every storage node and on any other node running the proxy service.

On all nodes, ensure proper ownership of the configuration directory:

```shell
chown -R root:swift /etc/swift
```

On the controller node and on any other node running the proxy service, start the Object Storage proxy service and its dependencies, and configure them to start on boot:

```shell
systemctl enable openstack-swift-proxy.service memcached.service
systemctl start openstack-swift-proxy.service memcached.service
```

On the storage nodes, start the Object Storage services and configure them to start on boot:

```shell
systemctl enable openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service
systemctl start openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service
systemctl enable openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service
systemctl start openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service
systemctl enable openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service
systemctl start openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service
```

## Cyborg Installation

Cyborg provides accelerator device support for OpenStack, including GPU, FPGA, ASIC, NP, SoCs, NVMe/NOF SSDs, ODP, DPDK/SPDK, and so on.

1. Initialize the database:

```
CREATE DATABASE cyborg;
GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'localhost' IDENTIFIED BY 'CYBORG_DBPASS';
GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'%' IDENTIFIED BY 'CYBORG_DBPASS';
```

2. Create the corresponding Keystone resource objects:

```shell
$ openstack user create --domain default --password-prompt cyborg
$ openstack role add --project service --user cyborg admin
$ openstack service create --name cyborg --description "Acceleration Service" accelerator
$ openstack endpoint create --region RegionOne \
  accelerator public http://:6666/v1
$ openstack endpoint create --region RegionOne \
  accelerator internal http://:6666/v1
$ openstack endpoint create --region RegionOne \
  accelerator admin http://:6666/v1
```

3. Install Cyborg:

```shell
yum install openstack-cyborg
```

4. Configure Cyborg by editing /etc/cyborg/cyborg.conf:

```
[DEFAULT]
transport_url = rabbit://%RABBITMQ_USER%:%RABBITMQ_PASSWORD%@%OPENSTACK_HOST_IP%:5672/
use_syslog = False
state_path = /var/lib/cyborg
debug = True

[database]
connection = mysql+pymysql://%DATABASE_USER%:%DATABASE_PASSWORD%@%OPENSTACK_HOST_IP%/cyborg

[service_catalog]
project_domain_id = default
user_domain_id = default
project_name = service
password = PASSWORD
username = cyborg
auth_url = http://%OPENSTACK_HOST_IP%/identity
auth_type = password

[placement]
project_domain_name = Default
project_name = service
user_domain_name = Default
password = PASSWORD
username = placement
auth_url = http://%OPENSTACK_HOST_IP%/identity
auth_type = password

[keystone_authtoken]
memcached_servers = localhost:11211
project_domain_name = Default
project_name = service
user_domain_name = Default
password = PASSWORD
username = cyborg
auth_url = http://%OPENSTACK_HOST_IP%/identity
auth_type = password
```

Adjust the usernames, passwords, IP addresses, and similar values to your environment.

5. Synchronize the database tables:

```shell
cyborg-dbsync --config-file /etc/cyborg/cyborg.conf upgrade
```

6. Start the Cyborg services:

```shell
systemctl enable openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent
systemctl start openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent
```
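As a quick sanity check, not part of the original steps, and assuming the OpenStack client plugin shipped with python3-cyborgclient is installed, listing the accelerator devices should return without error once the services are up; verify the exact subcommand name against your installed client:

```shell
openstack accelerator device list
```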
## Aodh Installation

1. Create the database:

```
CREATE DATABASE aodh;
GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'localhost' IDENTIFIED BY 'AODH_DBPASS';
GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'%' IDENTIFIED BY 'AODH_DBPASS';
```

2. Create the corresponding Keystone resource objects:

```shell
openstack user create --domain default --password-prompt aodh
openstack role add --project service --user aodh admin
openstack service create --name aodh --description "Telemetry" alarming
openstack endpoint create --region RegionOne alarming public http://controller:8042
openstack endpoint create --region RegionOne alarming internal http://controller:8042
openstack endpoint create --region RegionOne alarming admin http://controller:8042
```

3. Install Aodh:

```shell
yum install openstack-aodh-api openstack-aodh-evaluator openstack-aodh-notifier openstack-aodh-listener openstack-aodh-expirer python3-aodhclient
```

Note: the python3-pyparsing package in the openEuler OS repository does not fit aodh's dependency; install the OpenStack-specific version over it. Use `yum list |grep pyparsing |grep OpenStack | awk '{print $2}'` to obtain the matching VERSION, then run `yum install -y python3-pyparsing-VERSION` to install the compatible pyparsing.

4. Edit the configuration file:

```
[database]
connection = mysql+pymysql://aodh:AODH_DBPASS@controller/aodh

[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = aodh
password = AODH_PASS

[service_credentials]
auth_type = password
auth_url = http://controller:5000/v3
project_domain_id = default
user_domain_id = default
project_name = service
username = aodh
password = AODH_PASS
interface = internalURL
region_name = RegionOne
```

5. Initialize the database:

```shell
aodh-dbsync
```

6. Start the Aodh services:

```shell
systemctl enable openstack-aodh-api.service openstack-aodh-evaluator.service openstack-aodh-notifier.service openstack-aodh-listener.service
systemctl start openstack-aodh-api.service openstack-aodh-evaluator.service openstack-aodh-notifier.service openstack-aodh-listener.service
```

## Gnocchi Installation

1. Create the database:

```
CREATE DATABASE gnocchi;
GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'localhost' IDENTIFIED BY 'GNOCCHI_DBPASS';
GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'%' IDENTIFIED BY 'GNOCCHI_DBPASS';
```

2. Create the corresponding Keystone resource objects:

```shell
openstack user create --domain default --password-prompt gnocchi
openstack role add --project service --user gnocchi admin
openstack service create --name gnocchi --description "Metric Service" metric
openstack endpoint create --region RegionOne metric public http://controller:8041
openstack endpoint create --region RegionOne metric internal http://controller:8041
openstack endpoint create --region RegionOne metric admin http://controller:8041
```

3. Install Gnocchi:

```shell
yum install openstack-gnocchi-api openstack-gnocchi-metricd python3-gnocchiclient
```
4. Edit the configuration file /etc/gnocchi/gnocchi.conf:

```
[api]
auth_mode = keystone
port = 8041
uwsgi_mode = http-socket

[keystone_authtoken]
auth_type = password
auth_url = http://controller:5000/v3
project_domain_name = Default
user_domain_name = Default
project_name = service
username = gnocchi
password = GNOCCHI_PASS
interface = internalURL
region_name = RegionOne

[indexer]
url = mysql+pymysql://gnocchi:GNOCCHI_DBPASS@controller/gnocchi

[storage]
# coordination_url is not required but specifying one will improve
# performance with better workload division across workers.
coordination_url = redis://controller:6379
file_basepath = /var/lib/gnocchi
driver = file
```

5. Initialize the database:

```shell
gnocchi-upgrade
```

6. Start the Gnocchi services:

```shell
systemctl enable openstack-gnocchi-api.service openstack-gnocchi-metricd.service
systemctl start openstack-gnocchi-api.service openstack-gnocchi-metricd.service
```

## Ceilometer Installation

1. Create the corresponding Keystone resource objects:

```shell
openstack user create --domain default --password-prompt ceilometer
openstack role add --project service --user ceilometer admin
openstack service create --name ceilometer --description "Telemetry" metering
```

2. Install Ceilometer:

```shell
yum install openstack-ceilometer-notification openstack-ceilometer-central
```

3. Edit the configuration file /etc/ceilometer/pipeline.yaml:

```
publishers:
    # set address of Gnocchi
    # + filter out Gnocchi-related activity meters (Swift driver)
    # + set default archive policy
    - gnocchi://?filter_project=service&archive_policy=low
```

4. Edit the configuration file /etc/ceilometer/ceilometer.conf:

```
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller

[service_credentials]
auth_type = password
auth_url = http://controller:5000/v3
project_domain_id = default
user_domain_id = default
project_name = service
username = ceilometer
password = CEILOMETER_PASS
interface = internalURL
region_name = RegionOne
```

5. Initialize the database:

```shell
ceilometer-upgrade
```

6. Start the Ceilometer services:

```shell
systemctl enable openstack-ceilometer-notification.service openstack-ceilometer-central.service
systemctl start openstack-ceilometer-notification.service openstack-ceilometer-central.service
```

## Heat Installation

1. Create the heat database and grant it the proper access permissions, replacing HEAT_DBPASS with a suitable password:

```
CREATE DATABASE heat;
GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'localhost' IDENTIFIED BY 'HEAT_DBPASS';
GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'%' IDENTIFIED BY 'HEAT_DBPASS';
```

2. Create the service credentials: create the heat user and add the admin role to it:

```shell
openstack user create --domain default --password-prompt heat
openstack role add --project service --user heat admin
```

3. Create the heat and heat-cfn services and their API endpoints:

```shell
openstack service create --name heat --description "Orchestration" orchestration
openstack service create --name heat-cfn --description "Orchestration" cloudformation
openstack endpoint create --region RegionOne orchestration public http://controller:8004/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne orchestration internal http://controller:8004/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne orchestration admin http://controller:8004/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne cloudformation public http://controller:8000/v1
openstack endpoint create --region RegionOne cloudformation internal http://controller:8000/v1
openstack endpoint create --region RegionOne cloudformation admin http://controller:8000/v1
```
\u89d2\u8272\uff0c heat_stack_user \u89d2\u8272 openstack user create --domain heat --password-prompt heat_domain_admin openstack role add --domain heat --user-domain heat --user heat_domain_admin admin openstack role create heat_stack_owner openstack role create heat_stack_user 5.\u5b89\u88c5\u8f6f\u4ef6\u5305 yum install openstack-heat-api openstack-heat-api-cfn openstack-heat-engine 6.\u4fee\u6539\u914d\u7f6e\u6587\u4ef6 /etc/heat/heat.conf [DEFAULT] transport_url = rabbit://openstack:RABBIT_PASS@controller heat_metadata_server_url = http://controller:8000 heat_waitcondition_server_url = http://controller:8000/v1/waitcondition stack_domain_admin = heat_domain_admin stack_domain_admin_password = HEAT_DOMAIN_PASS stack_user_domain_name = heat [database] connection = mysql+pymysql://heat:HEAT_DBPASS@controller/heat [keystone_authtoken] www_authenticate_uri = http://controller:5000 auth_url = http://controller:5000 memcached_servers = controller:11211 auth_type = password project_domain_name = default user_domain_name = default project_name = service username = heat password = HEAT_PASS [trustee] auth_type = password auth_url = http://controller:5000 username = heat password = HEAT_PASS user_domain_name = default [clients_keystone] auth_uri = http://controller:5000 7.\u521d\u59cb\u5316 heat \u6570\u636e\u5e93\u8868 su -s /bin/sh -c \"heat-manage db_sync\" heat 8.\u542f\u52a8\u670d\u52a1 systemctl enable openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service systemctl start openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service","title":"Heat \u5b89\u88c5"},{"location":"install/openEuler-24.03-LTS/OpenStack-wallaby/#openstack-sigoos","text":"oos (openEuler OpenStack SIG)\u662fOpenStack SIG\u63d0\u4f9b\u7684\u547d\u4ee4\u884c\u5de5\u5177\u3002\u5176\u4e2d oos env \u7cfb\u5217\u547d\u4ee4\u63d0\u4f9b\u4e86\u4e00\u952e\u90e8\u7f72OpenStack \uff08 all in one \u6216\u4e09\u8282\u70b9 cluster \uff09\u7684ansible\u811a\u672c\uff0c\u7528\u6237\u53ef\u4ee5\u4f7f\u7528\u8be5\u811a\u672c\u5feb\u901f\u90e8\u7f72\u4e00\u5957\u57fa\u4e8e openEuler RPM \u7684 OpenStack \u73af\u5883\u3002 oos \u5de5\u5177\u652f\u6301\u5bf9\u63a5\u4e91provider\uff08\u76ee\u524d\u4ec5\u652f\u6301\u534e\u4e3a\u4e91provider\uff09\u548c\u4e3b\u673a\u7eb3\u7ba1\u4e24\u79cd\u65b9\u5f0f\u6765\u90e8\u7f72 OpenStack \u73af\u5883\uff0c\u4e0b\u9762\u4ee5\u5bf9\u63a5\u534e\u4e3a\u4e91\u90e8\u7f72\u4e00\u5957 all in one \u7684OpenStack\u73af\u5883\u4e3a\u4f8b\u8bf4\u660e oos \u5de5\u5177\u7684\u4f7f\u7528\u65b9\u6cd5\u3002 \u5b89\u88c5 oos \u5de5\u5177 oos\u5de5\u5177\u5728\u4e0d\u65ad\u6f14\u8fdb\uff0c\u517c\u5bb9\u6027\u3001\u53ef\u7528\u6027\u4e0d\u80fd\u65f6\u523b\u4fdd\u8bc1\uff0c\u5efa\u8bae\u4f7f\u7528\u5df2\u9a8c\u8bc1\u7684\u672c\u7248\uff0c\u8fd9\u91cc\u9009\u62e9 1.3.1 pip install openstack-sig-tool==1.3.1 \u914d\u7f6e\u5bf9\u63a5\u534e\u4e3a\u4e91provider\u7684\u4fe1\u606f \u6253\u5f00 /usr/local/etc/oos/oos.conf \u6587\u4ef6\uff0c\u4fee\u6539\u914d\u7f6e\u4e3a\u60a8\u62e5\u6709\u7684\u534e\u4e3a\u4e91\u8d44\u6e90\u4fe1\u606f\uff1a [huaweicloud] ak = sk = region = ap-southeast-3 root_volume_size = 100 data_volume_size = 100 security_group_name = oos image_format = openEuler-%%(release)s-%%(arch)s vpc_name = oos_vpc subnet1_name = oos_subnet1 subnet2_name = oos_subnet2 \u914d\u7f6e OpenStack \u73af\u5883\u4fe1\u606f \u6253\u5f00 /usr/local/etc/oos/oos.conf 
\u6587\u4ef6\uff0c\u6839\u636e\u5f53\u524d\u673a\u5668\u73af\u5883\u548c\u9700\u6c42\u4fee\u6539\u914d\u7f6e\u3002\u5185\u5bb9\u5982\u4e0b\uff1a [environment] mysql_root_password = root mysql_project_password = root rabbitmq_password = root project_identity_password = root enabled_service = keystone,neutron,cinder,placement,nova,glance,horizon,aodh,ceilometer,cyborg,gnocchi,kolla,heat,swift,trove,tempest neutron_provider_interface_name = br-ex default_ext_subnet_range = 10.100.100.0/24 default_ext_subnet_gateway = 10.100.100.1 neutron_dataplane_interface_name = eth1 cinder_block_device = vdb swift_storage_devices = vdc swift_hash_path_suffix = ash swift_hash_path_prefix = has glance_api_workers = 2 cinder_api_workers = 2 nova_api_workers = 2 nova_metadata_api_workers = 2 nova_conductor_workers = 2 nova_scheduler_workers = 2 neutron_api_workers = 2 horizon_allowed_host = * kolla_openeuler_plugin = false \u5173\u952e\u914d\u7f6e \u914d\u7f6e\u9879 \u89e3\u91ca enabled_service \u5b89\u88c5\u670d\u52a1\u5217\u8868\uff0c\u6839\u636e\u7528\u6237\u9700\u6c42\u81ea\u884c\u5220\u51cf neutron_provider_interface_name neutron L3\u7f51\u6865\u540d\u79f0 default_ext_subnet_range neutron\u79c1\u7f51IP\u6bb5 default_ext_subnet_gateway neutron\u79c1\u7f51gateway neutron_dataplane_interface_name neutron\u4f7f\u7528\u7684\u7f51\u5361\uff0c\u63a8\u8350\u4f7f\u7528\u4e00\u5f20\u65b0\u7684\u7f51\u5361\uff0c\u4ee5\u514d\u548c\u73b0\u6709\u7f51\u5361\u51b2\u7a81\uff0c\u9632\u6b62all in one\u4e3b\u673a\u65ad\u8fde\u7684\u60c5\u51b5 cinder_block_device cinder\u4f7f\u7528\u7684\u5377\u8bbe\u5907\u540d swift_storage_devices swift\u4f7f\u7528\u7684\u5377\u8bbe\u5907\u540d kolla_openeuler_plugin \u662f\u5426\u542f\u7528kolla plugin\u3002\u8bbe\u7f6e\u4e3aTrue\uff0ckolla\u5c06\u652f\u6301\u90e8\u7f72openEuler\u5bb9\u5668 \u534e\u4e3a\u4e91\u4e0a\u9762\u521b\u5efa\u4e00\u53f0openEuler 24.03-LTS\u7684x86_64\u865a\u62df\u673a\uff0c\u7528\u4e8e\u90e8\u7f72 all in one \u7684 OpenStack # sshpass\u5728`oos env create`\u8fc7\u7a0b\u4e2d\u88ab\u4f7f\u7528\uff0c\u7528\u4e8e\u914d\u7f6e\u5bf9\u76ee\u6807\u865a\u62df\u673a\u7684\u514d\u5bc6\u8bbf\u95ee dnf install sshpass oos env create -r 24.03-lts -f small -a x86 -n test-oos all_in_one \u5177\u4f53\u7684\u53c2\u6570\u53ef\u4ee5\u4f7f\u7528 oos env create --help \u547d\u4ee4\u67e5\u770b \u90e8\u7f72OpenStack all in one \u73af\u5883 oos env setup test-oos -r wallaby \u5177\u4f53\u7684\u53c2\u6570\u53ef\u4ee5\u4f7f\u7528 oos env setup --help \u547d\u4ee4\u67e5\u770b \u521d\u59cb\u5316tempest\u73af\u5883 \u5982\u679c\u7528\u6237\u60f3\u4f7f\u7528\u8be5\u73af\u5883\u8fd0\u884ctempest\u6d4b\u8bd5\u7684\u8bdd\uff0c\u53ef\u4ee5\u6267\u884c\u547d\u4ee4 oos env init \uff0c\u4f1a\u81ea\u52a8\u628atempest\u9700\u8981\u7684OpenStack\u8d44\u6e90\u81ea\u52a8\u521b\u5efa\u597d oos env init test-oos \u547d\u4ee4\u6267\u884c\u6210\u529f\u540e\uff0c\u5728\u7528\u6237\u7684\u6839\u76ee\u5f55\u4e0b\u4f1a\u751f\u6210mytest\u76ee\u5f55\uff0c\u8fdb\u5165\u5176\u4e2d\u5c31\u53ef\u4ee5\u6267\u884ctempest run\u547d\u4ee4\u4e86\u3002 \u5982\u679c\u662f\u4ee5\u4e3b\u673a\u7eb3\u7ba1\u7684\u65b9\u5f0f\u90e8\u7f72 OpenStack 
\u73af\u5883\uff0c\u603b\u4f53\u903b\u8f91\u4e0e\u4e0a\u6587\u5bf9\u63a5\u534e\u4e3a\u4e91\u65f6\u4e00\u81f4\uff0c1\u30013\u30015\u30016\u6b65\u64cd\u4f5c\u4e0d\u53d8\uff0c\u53bb\u9664\u7b2c2\u6b65\u5bf9\u534e\u4e3a\u4e91provider\u4fe1\u606f\u7684\u914d\u7f6e\uff0c\u7b2c4\u6b65\u7531\u5728\u534e\u4e3a\u4e91\u4e0a\u521b\u5efa\u865a\u62df\u673a\u6539\u4e3a\u7eb3\u7ba1\u4e3b\u673a\u64cd\u4f5c\u3002 # sshpass\u5728`oos env create`\u8fc7\u7a0b\u4e2d\u88ab\u4f7f\u7528\uff0c\u7528\u4e8e\u914d\u7f6e\u5bf9\u76ee\u6807\u4e3b\u673a\u7684\u514d\u5bc6\u8bbf\u95ee dnf install sshpass oos env manage -r 24.03-lts -i TARGET_MACHINE_IP -p TARGET_MACHINE_PASSWD -n test-oos \u66ff\u6362 TARGET_MACHINE_IP \u4e3a\u76ee\u6807\u673aip\u3001 TARGET_MACHINE_PASSWD \u4e3a\u76ee\u6807\u673a\u5bc6\u7801\u3002\u5177\u4f53\u7684\u53c2\u6570\u53ef\u4ee5\u4f7f\u7528 oos env manage --help \u547d\u4ee4\u67e5\u770b\u3002","title":"\u57fa\u4e8eOpenStack SIG\u5f00\u53d1\u5de5\u5177oos\u5feb\u901f\u90e8\u7f72"},{"location":"install/openEuler-24.03-LTS-SP1/OpenStack-antelope/","text":"OpenStack Antelope \u90e8\u7f72\u6307\u5357 \u00b6 OpenStack Antelope \u90e8\u7f72\u6307\u5357 \u57fa\u4e8eRPM\u90e8\u7f72 \u73af\u5883\u51c6\u5907 \u65f6\u949f\u540c\u6b65 \u5b89\u88c5\u6570\u636e\u5e93 \u5b89\u88c5\u6d88\u606f\u961f\u5217 \u5b89\u88c5\u7f13\u5b58\u670d\u52a1 \u90e8\u7f72\u670d\u52a1 Keystone Glance Placement Nova Neutron Cinder Horizon Ironic Trove Swift Cyborg Aodh Gnocchi Ceilometer Heat Tempest \u57fa\u4e8eOpenStack SIG\u5f00\u53d1\u5de5\u5177oos\u90e8\u7f72 \u672c\u6587\u6863\u662f openEuler OpenStack SIG \u7f16\u5199\u7684\u57fa\u4e8e |openEuler 24.03 LTS SP1 \u7684 OpenStack \u90e8\u7f72\u6307\u5357\uff0c\u5185\u5bb9\u7531 SIG \u8d21\u732e\u8005\u63d0\u4f9b\u3002\u5728\u9605\u8bfb\u8fc7\u7a0b\u4e2d\uff0c\u5982\u679c\u60a8\u6709\u4efb\u4f55\u7591\u95ee\u6216\u8005\u53d1\u73b0\u4efb\u4f55\u95ee\u9898\uff0c\u8bf7 \u8054\u7cfb SIG\u7ef4\u62a4\u4eba\u5458\uff0c\u6216\u8005\u76f4\u63a5 \u63d0\u4ea4issue \u7ea6\u5b9a \u672c\u7ae0\u8282\u63cf\u8ff0\u6587\u6863\u4e2d\u7684\u4e00\u4e9b\u901a\u7528\u7ea6\u5b9a\u3002 \u540d\u79f0 \u5b9a\u4e49 RABBIT_PASS rabbitmq\u7684\u5bc6\u7801\uff0c\u7531\u7528\u6237\u8bbe\u7f6e\uff0c\u5728OpenStack\u5404\u4e2a\u670d\u52a1\u914d\u7f6e\u4e2d\u4f7f\u7528 CINDER_PASS cinder\u670d\u52a1keystone\u7528\u6237\u7684\u5bc6\u7801\uff0c\u5728cinder\u914d\u7f6e\u4e2d\u4f7f\u7528 CINDER_DBPASS cinder\u670d\u52a1\u6570\u636e\u5e93\u5bc6\u7801\uff0c\u5728cinder\u914d\u7f6e\u4e2d\u4f7f\u7528 KEYSTONE_DBPASS keystone\u670d\u52a1\u6570\u636e\u5e93\u5bc6\u7801\uff0c\u5728keystone\u914d\u7f6e\u4e2d\u4f7f\u7528 GLANCE_PASS glance\u670d\u52a1keystone\u7528\u6237\u7684\u5bc6\u7801\uff0c\u5728glance\u914d\u7f6e\u4e2d\u4f7f\u7528 GLANCE_DBPASS glance\u670d\u52a1\u6570\u636e\u5e93\u5bc6\u7801\uff0c\u5728glance\u914d\u7f6e\u4e2d\u4f7f\u7528 HEAT_PASS \u5728keystone\u6ce8\u518c\u7684heat\u7528\u6237\u5bc6\u7801\uff0c\u5728heat\u914d\u7f6e\u4e2d\u4f7f\u7528 HEAT_DBPASS heat\u670d\u52a1\u6570\u636e\u5e93\u5bc6\u7801\uff0c\u5728heat\u914d\u7f6e\u4e2d\u4f7f\u7528 CYBORG_PASS \u5728keystone\u6ce8\u518c\u7684cyborg\u7528\u6237\u5bc6\u7801\uff0c\u5728cyborg\u914d\u7f6e\u4e2d\u4f7f\u7528 CYBORG_DBPASS cyborg\u670d\u52a1\u6570\u636e\u5e93\u5bc6\u7801\uff0c\u5728cyborg\u914d\u7f6e\u4e2d\u4f7f\u7528 NEUTRON_PASS \u5728keystone\u6ce8\u518c\u7684neutron\u7528\u6237\u5bc6\u7801\uff0c\u5728neutron\u914d\u7f6e\u4e2d\u4f7f\u7528 NEUTRON_DBPASS neutron\u670d\u52a1\u6570\u636e\u5e93\u5bc6\u7801\uff0c\u5728neutron\u914d\u7f6e\u4e2d\u4f7f\u7528 
PROVIDER_INTERFACE_NAME \u7269\u7406\u7f51\u7edc\u63a5\u53e3\u7684\u540d\u79f0\uff0c\u5728neutron\u914d\u7f6e\u4e2d\u4f7f\u7528 OVERLAY_INTERFACE_IP_ADDRESS Controller\u63a7\u5236\u8282\u70b9\u7684\u7ba1\u7406ip\u5730\u5740\uff0c\u5728neutron\u914d\u7f6e\u4e2d\u4f7f\u7528 METADATA_SECRET metadata proxy\u7684secret\u5bc6\u7801\uff0c\u5728nova\u548cneutron\u914d\u7f6e\u4e2d\u4f7f\u7528 PLACEMENT_DBPASS placement\u670d\u52a1\u6570\u636e\u5e93\u5bc6\u7801\uff0c\u5728placement\u914d\u7f6e\u4e2d\u4f7f\u7528 PLACEMENT_PASS \u5728keystone\u6ce8\u518c\u7684placement\u7528\u6237\u5bc6\u7801\uff0c\u5728placement\u914d\u7f6e\u4e2d\u4f7f\u7528 NOVA_DBPASS nova\u670d\u52a1\u6570\u636e\u5e93\u5bc6\u7801\uff0c\u5728nova\u914d\u7f6e\u4e2d\u4f7f\u7528 NOVA_PASS \u5728keystone\u6ce8\u518c\u7684nova\u7528\u6237\u5bc6\u7801\uff0c\u5728nova,cyborg,neutron\u7b49\u914d\u7f6e\u4e2d\u4f7f\u7528 IRONIC_DBPASS ironic\u670d\u52a1\u6570\u636e\u5e93\u5bc6\u7801\uff0c\u5728ironic\u914d\u7f6e\u4e2d\u4f7f\u7528 IRONIC_PASS \u5728keystone\u6ce8\u518c\u7684ironic\u7528\u6237\u5bc6\u7801\uff0c\u5728ironic\u914d\u7f6e\u4e2d\u4f7f\u7528 IRONIC_INSPECTOR_DBPASS ironic-inspector\u670d\u52a1\u6570\u636e\u5e93\u5bc6\u7801\uff0c\u5728ironic-inspector\u914d\u7f6e\u4e2d\u4f7f\u7528 IRONIC_INSPECTOR_PASS \u5728keystone\u6ce8\u518c\u7684ironic-inspector\u7528\u6237\u5bc6\u7801\uff0c\u5728ironic-inspector\u914d\u7f6e\u4e2d\u4f7f\u7528 OpenStack SIG \u63d0\u4f9b\u4e86\u591a\u79cd\u57fa\u4e8e openEuler \u90e8\u7f72 OpenStack \u7684\u65b9\u6cd5\uff0c\u4ee5\u6ee1\u8db3\u4e0d\u540c\u7684\u7528\u6237\u573a\u666f\uff0c\u8bf7\u6309\u9700\u9009\u62e9\u3002 \u57fa\u4e8eRPM\u90e8\u7f72 \u00b6 \u73af\u5883\u51c6\u5907 \u00b6 \u672c\u6587\u6863\u57fa\u4e8eOpenStack\u7ecf\u5178\u7684\u4e09\u8282\u70b9\u73af\u5883\u8fdb\u884c\u90e8\u7f72\uff0c\u4e09\u4e2a\u8282\u70b9\u5206\u522b\u662f\u63a7\u5236\u8282\u70b9(Controller)\u3001\u8ba1\u7b97\u8282\u70b9(Compute)\u3001\u5b58\u50a8\u8282\u70b9(Storage)\uff0c\u5176\u4e2d\u5b58\u50a8\u8282\u70b9\u4e00\u822c\u53ea\u90e8\u7f72\u5b58\u50a8\u670d\u52a1\uff0c\u5728\u8d44\u6e90\u6709\u9650\u7684\u60c5\u51b5\u4e0b\uff0c\u53ef\u4ee5\u4e0d\u5355\u72ec\u90e8\u7f72\u8be5\u8282\u70b9\uff0c\u628a\u5b58\u50a8\u8282\u70b9\u4e0a\u7684\u670d\u52a1\u90e8\u7f72\u5230\u8ba1\u7b97\u8282\u70b9\u5373\u53ef\u3002 \u9996\u5148\u51c6\u5907\u4e09\u4e2a|openEuler 24.03 LTS SP1\u73af\u5883\uff0c\u6839\u636e\u60a8\u7684\u73af\u5883\uff0c\u4e0b\u8f7d\u5bf9\u5e94\u7684\u955c\u50cf\u5e76\u5b89\u88c5\u5373\u53ef\uff1a ISO\u955c\u50cf \u3001 qcow2\u955c\u50cf \u3002 \u4e0b\u9762\u7684\u5b89\u88c5\u6309\u7167\u5982\u4e0b\u62d3\u6251\u8fdb\u884c\uff1a controller\uff1a192.168.0.2 compute\uff1a 192.168.0.3 storage\uff1a 192.168.0.4 \u5982\u679c\u60a8\u7684\u73af\u5883IP\u4e0d\u540c\uff0c\u8bf7\u6309\u7167\u60a8\u7684\u73af\u5883IP\u4fee\u6539\u76f8\u5e94\u7684\u914d\u7f6e\u6587\u4ef6\u3002 \u672c\u6587\u6863\u7684\u4e09\u8282\u70b9\u670d\u52a1\u62d3\u6251\u5982\u4e0b\u56fe\u6240\u793a(\u53ea\u5305\u542bKeystone\u3001Glance\u3001Nova\u3001Cinder\u3001Neutron\u8fd9\u51e0\u4e2a\u6838\u5fc3\u670d\u52a1\uff0c\u5176\u4ed6\u670d\u52a1\u8bf7\u53c2\u8003\u5177\u4f53\u90e8\u7f72\u7ae0\u8282)\uff1a \u5728\u6b63\u5f0f\u90e8\u7f72\u4e4b\u524d\uff0c\u9700\u8981\u5bf9\u6bcf\u4e2a\u8282\u70b9\u505a\u5982\u4e0b\u914d\u7f6e\u548c\u68c0\u67e5\uff1a \u914d\u7f6e |openEuler 24.03 LTS SP1 \u5b98\u65b9 yum \u6e90\uff0c\u9700\u8981\u542f\u7528 EPOL \u8f6f\u4ef6\u4ed3\u4ee5\u652f\u6301 OpenStack yum update yum install openstack-release-antelope yum clean all && yum 
makecache \u6ce8\u610f \uff1a\u5982\u679c\u4f60\u7684\u73af\u5883\u7684YUM\u6e90\u6ca1\u6709\u542f\u7528EPOL\uff0c\u9700\u8981\u540c\u65f6\u914d\u7f6eEPOL\uff0c\u786e\u4fddEPOL\u5df2\u914d\u7f6e\uff0c\u5982\u4e0b\u6240\u793a\u3002 vi /etc/yum.repos.d/openEuler.repo [EPOL] name=EPOL baseurl=http://repo.openeuler.org/openEuler-24.03-LTS-SP1/EPOL/main/$basearch/ enabled=1 gpgcheck=1 gpgkey=http://repo.openeuler.org/openEuler-24.03-LTS-SP1/OS/$basearch/RPM-GPG-KEY-openEuler EOF \u4fee\u6539\u4e3b\u673a\u540d\u4ee5\u53ca\u6620\u5c04 \u6bcf\u4e2a\u8282\u70b9\u5206\u522b\u4fee\u6539\u4e3b\u673a\u540d\uff0c\u4ee5controller\u4e3a\u4f8b\uff1a hostnamectl set-hostname controller vi /etc/hostname \u5185\u5bb9\u4fee\u6539\u4e3acontroller \u7136\u540e\u4fee\u6539\u6bcf\u4e2a\u8282\u70b9\u7684 /etc/hosts \u6587\u4ef6\uff0c\u65b0\u589e\u5982\u4e0b\u5185\u5bb9: 192.168.0.2 controller 192.168.0.3 compute 192.168.0.4 storage \u65f6\u949f\u540c\u6b65 \u00b6 \u96c6\u7fa4\u73af\u5883\u65f6\u523b\u8981\u6c42\u6bcf\u4e2a\u8282\u70b9\u7684\u65f6\u95f4\u4e00\u81f4\uff0c\u4e00\u822c\u7531\u65f6\u949f\u540c\u6b65\u8f6f\u4ef6\u4fdd\u8bc1\u3002\u672c\u6587\u4f7f\u7528 chrony \u8f6f\u4ef6\u3002\u6b65\u9aa4\u5982\u4e0b\uff1a Controller\u8282\u70b9 \uff1a \u5b89\u88c5\u670d\u52a1 dnf install chrony \u4fee\u6539 /etc/chrony.conf \u914d\u7f6e\u6587\u4ef6\uff0c\u65b0\u589e\u4e00\u884c # \u8868\u793a\u5141\u8bb8\u54ea\u4e9bIP\u4ece\u672c\u8282\u70b9\u540c\u6b65\u65f6\u949f allow 192.168.0.0/24 \u91cd\u542f\u670d\u52a1 systemctl restart chronyd \u5176\u4ed6\u8282\u70b9 \u5b89\u88c5\u670d\u52a1 dnf install chrony \u4fee\u6539 /etc/chrony.conf \u914d\u7f6e\u6587\u4ef6\uff0c\u65b0\u589e\u4e00\u884c # NTP_SERVER\u662fcontroller IP\uff0c\u8868\u793a\u4ece\u8fd9\u4e2a\u673a\u5668\u83b7\u53d6\u65f6\u95f4\uff0c\u8fd9\u91cc\u6211\u4eec\u586b192.168.0.2\uff0c\u6216\u8005\u5728`/etc/hosts`\u91cc\u914d\u7f6e\u597d\u7684controller\u540d\u5b57\u5373\u53ef\u3002 server NTP_SERVER iburst \u540c\u65f6\uff0c\u8981\u628a pool pool.ntp.org iburst \u8fd9\u4e00\u884c\u6ce8\u91ca\u6389\uff0c\u8868\u793a\u4e0d\u4ece\u516c\u7f51\u540c\u6b65\u65f6\u949f\u3002 \u91cd\u542f\u670d\u52a1 systemctl restart chronyd \u914d\u7f6e\u5b8c\u6210\u540e\uff0c\u68c0\u67e5\u4e00\u4e0b\u7ed3\u679c\uff0c\u5728\u5176\u4ed6\u975econtroller\u8282\u70b9\u6267\u884c chronyc sources \uff0c\u8fd4\u56de\u7ed3\u679c\u7c7b\u4f3c\u5982\u4e0b\u5185\u5bb9\uff0c\u8868\u793a\u6210\u529f\u4ececontroller\u540c\u6b65\u65f6\u949f\u3002 MS Name/IP address Stratum Poll Reach LastRx Last sample =============================================================================== ^* 192.168.0.2 4 6 7 0 -1406ns[ +55us] +/- 16ms \u5b89\u88c5\u6570\u636e\u5e93 \u00b6 \u6570\u636e\u5e93\u5b89\u88c5\u5728\u63a7\u5236\u8282\u70b9\uff0c\u8fd9\u91cc\u63a8\u8350\u4f7f\u7528mariadb\u3002 \u5b89\u88c5\u8f6f\u4ef6\u5305 dnf install mariadb-config mariadb mariadb-server python3-PyMySQL \u65b0\u589e\u914d\u7f6e\u6587\u4ef6 /etc/my.cnf.d/openstack.cnf \uff0c\u5185\u5bb9\u5982\u4e0b [mysqld] bind-address = 192.168.0.2 default-storage-engine = innodb innodb_file_per_table = on max_connections = 4096 collation-server = utf8_general_ci character-set-server = utf8 \u542f\u52a8\u670d\u52a1\u5668 systemctl start mariadb \u521d\u59cb\u5316\u6570\u636e\u5e93\uff0c\u6839\u636e\u63d0\u793a\u8fdb\u884c\u5373\u53ef mysql_secure_installation \u793a\u4f8b\u5982\u4e0b\uff1a NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MariaDB SERVERS IN PRODUCTION USE! PLEASE READ EACH STEP CAREFULLY! 
In order to log into MariaDB to secure it, we'll need the current password for the root user. If you've just installed MariaDB, and haven't set the root password yet, you should just press enter here. Enter current password for root (enter for none): #\u8fd9\u91cc\u8f93\u5165\u5bc6\u7801\uff0c\u7531\u4e8e\u6211\u4eec\u662f\u521d\u59cb\u5316DB\uff0c\u76f4\u63a5\u56de\u8f66\u5c31\u884c OK, successfully used password, moving on... Setting the root password or using the unix_socket ensures that nobody can log into the MariaDB root user without the proper authorisation. You already have your root account protected, so you can safely answer 'n'. # \u8fd9\u91cc\u6839\u636e\u63d0\u793a\u8f93\u5165N Switch to unix_socket authentication [Y/n] N Enabled successfully! Reloading privilege tables.. ... Success! You already have your root account protected, so you can safely answer 'n'. # \u8f93\u5165Y\uff0c\u4fee\u6539\u5bc6\u7801 Change the root password? [Y/n] Y New password: Re-enter new password: Password updated successfully! Reloading privilege tables.. ... Success! By default, a MariaDB installation has an anonymous user, allowing anyone to log into MariaDB without having to have a user account created for them. This is intended only for testing, and to make the installation go a bit smoother. You should remove them before moving into a production environment. # \u8f93\u5165Y\uff0c\u5220\u9664\u533f\u540d\u7528\u6237 Remove anonymous users? [Y/n] Y ... Success! Normally, root should only be allowed to connect from 'localhost'. This ensures that someone cannot guess at the root password from the network. # \u8f93\u5165Y\uff0c\u5173\u95edroot\u8fdc\u7a0b\u767b\u5f55\u6743\u9650 Disallow root login remotely? [Y/n] Y ... Success! By default, MariaDB comes with a database named 'test' that anyone can access. This is also intended only for testing, and should be removed before moving into a production environment. # \u8f93\u5165Y\uff0c\u5220\u9664test\u6570\u636e\u5e93 Remove test database and access to it? [Y/n] Y - Dropping test database... ... Success! - Removing privileges on test database... ... Success! Reloading the privilege tables will ensure that all changes made so far will take effect immediately. # \u8f93\u5165Y\uff0c\u91cd\u8f7d\u914d\u7f6e Reload privilege tables now? [Y/n] Y ... Success! Cleaning up... All done! If you've completed all of the above steps, your MariaDB installation should now be secure. 
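Optional sanity check (a sketch, not part of the original steps): after `mysql_secure_installation` finishes, a quick non-interactive query can confirm that the root password works and that the settings from `/etc/my.cnf.d/openstack.cnf` took effect.

```bash
# Assumes the root password chosen above; prompts for it interactively.
mysql -u root -p -e "SHOW VARIABLES LIKE 'character_set_server'; SHOW VARIABLES LIKE 'max_connections';"
```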
\u9a8c\u8bc1\uff0c\u6839\u636e\u7b2c\u56db\u6b65\u8bbe\u7f6e\u7684\u5bc6\u7801\uff0c\u68c0\u67e5\u662f\u5426\u80fd\u767b\u5f55mariadb mysql -uroot -p \u5b89\u88c5\u6d88\u606f\u961f\u5217 \u00b6 \u6d88\u606f\u961f\u5217\u5b89\u88c5\u5728\u63a7\u5236\u8282\u70b9\uff0c\u8fd9\u91cc\u63a8\u8350\u4f7f\u7528rabbitmq\u3002 \u5b89\u88c5\u8f6f\u4ef6\u5305 dnf install rabbitmq-server \u542f\u52a8\u670d\u52a1 systemctl start rabbitmq-server \u914d\u7f6eopenstack\u7528\u6237\uff0c RABBIT_PASS \u662fopenstack\u670d\u52a1\u767b\u5f55\u6d88\u606f\u961f\u91cc\u7684\u5bc6\u7801\uff0c\u9700\u8981\u548c\u540e\u9762\u5404\u4e2a\u670d\u52a1\u7684\u914d\u7f6e\u4fdd\u6301\u4e00\u81f4\u3002 rabbitmqctl add_user openstack RABBIT_PASS rabbitmqctl set_permissions openstack \".*\" \".*\" \".*\" \u5b89\u88c5\u7f13\u5b58\u670d\u52a1 \u00b6 \u6d88\u606f\u961f\u5217\u5b89\u88c5\u5728\u63a7\u5236\u8282\u70b9\uff0c\u8fd9\u91cc\u63a8\u8350\u4f7f\u7528Memcached\u3002 \u5b89\u88c5\u8f6f\u4ef6\u5305 dnf install memcached python3-memcached \u4fee\u6539\u914d\u7f6e\u6587\u4ef6 /etc/sysconfig/memcached OPTIONS=\"-l 127.0.0.1,::1,controller\" \u542f\u52a8\u670d\u52a1 systemctl start memcached \u90e8\u7f72\u670d\u52a1 \u00b6 Keystone \u00b6 Keystone\u662fOpenStack\u63d0\u4f9b\u7684\u9274\u6743\u670d\u52a1\uff0c\u662f\u6574\u4e2aOpenStack\u7684\u5165\u53e3\uff0c\u63d0\u4f9b\u4e86\u79df\u6237\u9694\u79bb\u3001\u7528\u6237\u8ba4\u8bc1\u3001\u670d\u52a1\u53d1\u73b0\u7b49\u529f\u80fd\uff0c\u5fc5\u987b\u5b89\u88c5\u3002 \u521b\u5efa keystone \u6570\u636e\u5e93\u5e76\u6388\u6743 mysql -u root -p MariaDB [(none)]> CREATE DATABASE keystone; MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \\ IDENTIFIED BY 'KEYSTONE_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \\ IDENTIFIED BY 'KEYSTONE_DBPASS'; MariaDB [(none)]> exit \u6ce8\u610f \u66ff\u6362 KEYSTONE_DBPASS \uff0c\u4e3a Keystone \u6570\u636e\u5e93\u8bbe\u7f6e\u5bc6\u7801 \u5b89\u88c5\u8f6f\u4ef6\u5305 dnf install openstack-keystone httpd mod_wsgi \u914d\u7f6ekeystone\u76f8\u5173\u914d\u7f6e vim /etc/keystone/keystone.conf [database] connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone [token] provider = fernet \u89e3\u91ca [database]\u90e8\u5206\uff0c\u914d\u7f6e\u6570\u636e\u5e93\u5165\u53e3 [token]\u90e8\u5206\uff0c\u914d\u7f6etoken provider \u540c\u6b65\u6570\u636e\u5e93 su -s /bin/sh -c \"keystone-manage db_sync\" keystone \u521d\u59cb\u5316Fernet\u5bc6\u94a5\u4ed3\u5e93 keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone keystone-manage credential_setup --keystone-user keystone --keystone-group keystone \u542f\u52a8\u670d\u52a1 keystone-manage bootstrap --bootstrap-password ADMIN_PASS \\ --bootstrap-admin-url http://controller:5000/v3/ \\ --bootstrap-internal-url http://controller:5000/v3/ \\ --bootstrap-public-url http://controller:5000/v3/ \\ --bootstrap-region-id RegionOne \u6ce8\u610f \u66ff\u6362 ADMIN_PASS \uff0c\u4e3a admin \u7528\u6237\u8bbe\u7f6e\u5bc6\u7801 \u914d\u7f6eApache HTTP server \u6253\u5f00httpd.conf\u5e76\u914d\u7f6e #\u9700\u8981\u4fee\u6539\u7684\u914d\u7f6e\u6587\u4ef6\u8def\u5f84 vim /etc/httpd/conf/httpd.conf #\u4fee\u6539\u4ee5\u4e0b\u9879\uff0c\u5982\u679c\u6ca1\u6709\u5219\u65b0\u6dfb\u52a0 ServerName controller \u521b\u5efa\u8f6f\u94fe\u63a5 ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/ \u89e3\u91ca \u914d\u7f6e ServerName \u9879\u5f15\u7528\u63a7\u5236\u8282\u70b9 \u6ce8\u610f \u5982\u679c ServerName 
\u9879\u4e0d\u5b58\u5728\u5219\u9700\u8981\u521b\u5efa \u542f\u52a8Apache HTTP\u670d\u52a1 systemctl enable httpd.service systemctl start httpd.service \u521b\u5efa\u73af\u5883\u53d8\u91cf\u914d\u7f6e cat << EOF >> ~/.admin-openrc export OS_PROJECT_DOMAIN_NAME=Default export OS_USER_DOMAIN_NAME=Default export OS_PROJECT_NAME=admin export OS_USERNAME=admin export OS_PASSWORD=ADMIN_PASS export OS_AUTH_URL=http://controller:5000/v3 export OS_IDENTITY_API_VERSION=3 export OS_IMAGE_API_VERSION=2 EOF \u6ce8\u610f \u66ff\u6362 ADMIN_PASS \u4e3a admin \u7528\u6237\u7684\u5bc6\u7801 \u4f9d\u6b21\u521b\u5efadomain, projects, users, roles \u9700\u8981\u5148\u5b89\u88c5python3-openstackclient dnf install python3-openstackclient \u5bfc\u5165\u73af\u5883\u53d8\u91cf source ~/.admin-openrc \u521b\u5efaproject service \uff0c\u5176\u4e2d domain default \u5728 keystone-manage bootstrap \u65f6\u5df2\u521b\u5efa openstack domain create --description \"An Example Domain\" example openstack project create --domain default --description \"Service Project\" service \u521b\u5efa\uff08non-admin\uff09project myproject \uff0cuser myuser \u548c role myrole \uff0c\u4e3a myproject \u548c myuser \u6dfb\u52a0\u89d2\u8272 myrole openstack project create --domain default --description \"Demo Project\" myproject openstack user create --domain default --password-prompt myuser openstack role create myrole openstack role add --project myproject --user myuser myrole \u9a8c\u8bc1 \u53d6\u6d88\u4e34\u65f6\u73af\u5883\u53d8\u91cfOS_AUTH_URL\u548cOS_PASSWORD\uff1a source ~/.admin-openrc unset OS_AUTH_URL OS_PASSWORD \u4e3aadmin\u7528\u6237\u8bf7\u6c42token\uff1a openstack --os-auth-url http://controller:5000/v3 \\ --os-project-domain-name Default --os-user-domain-name Default \\ --os-project-name admin --os-username admin token issue \u4e3amyuser\u7528\u6237\u8bf7\u6c42token\uff1a openstack --os-auth-url http://controller:5000/v3 \\ --os-project-domain-name Default --os-user-domain-name Default \\ --os-project-name myproject --os-username myuser token issue Glance \u00b6 Glance\u662fOpenStack\u63d0\u4f9b\u7684\u955c\u50cf\u670d\u52a1\uff0c\u8d1f\u8d23\u865a\u62df\u673a\u3001\u88f8\u673a\u955c\u50cf\u7684\u4e0a\u4f20\u4e0e\u4e0b\u8f7d\uff0c\u5fc5\u987b\u5b89\u88c5\u3002 Controller\u8282\u70b9 \uff1a \u521b\u5efa glance \u6570\u636e\u5e93\u5e76\u6388\u6743 mysql -u root -p MariaDB [(none)]> CREATE DATABASE glance; MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \\ IDENTIFIED BY 'GLANCE_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \\ IDENTIFIED BY 'GLANCE_DBPASS'; MariaDB [(none)]> exit \u6ce8\u610f: \u66ff\u6362 GLANCE_DBPASS \uff0c\u4e3a glance \u6570\u636e\u5e93\u8bbe\u7f6e\u5bc6\u7801 \u521d\u59cb\u5316 glance \u8d44\u6e90\u5bf9\u8c61 \u5bfc\u5165\u73af\u5883\u53d8\u91cf source ~/.admin-openrc \u521b\u5efa\u7528\u6237\u65f6\uff0c\u547d\u4ee4\u884c\u4f1a\u63d0\u793a\u8f93\u5165\u5bc6\u7801\uff0c\u8bf7\u8f93\u5165\u81ea\u5b9a\u4e49\u7684\u5bc6\u7801\uff0c\u4e0b\u6587\u6d89\u53ca\u5230 GLANCE_PASS \u7684\u5730\u65b9\u66ff\u6362\u6210\u8be5\u5bc6\u7801\u5373\u53ef\u3002 openstack user create --domain default --password-prompt glance User Password: Repeat User Password: \u6dfb\u52a0glance\u7528\u6237\u5230service project\u5e76\u6307\u5b9aadmin\u89d2\u8272\uff1a openstack role add --project service --user glance admin \u521b\u5efaglance\u670d\u52a1\u5b9e\u4f53\uff1a openstack service create --name glance --description \"OpenStack Image\" image \u521b\u5efaglance 
API\u670d\u52a1\uff1a openstack endpoint create --region RegionOne image public http://controller:9292 openstack endpoint create --region RegionOne image internal http://controller:9292 openstack endpoint create --region RegionOne image admin http://controller:9292 \u5b89\u88c5\u8f6f\u4ef6\u5305 dnf install openstack-glance \u4fee\u6539 glance \u914d\u7f6e\u6587\u4ef6 vim /etc/glance/glance-api.conf [database] connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance [keystone_authtoken] www_authenticate_uri = http://controller:5000 auth_url = http://controller:5000 memcached_servers = controller:11211 auth_type = password project_domain_name = Default user_domain_name = Default project_name = service username = glance password = GLANCE_PASS [paste_deploy] flavor = keystone [glance_store] stores = file,http default_store = file filesystem_store_datadir = /var/lib/glance/images/ \u89e3\u91ca: [database]\u90e8\u5206\uff0c\u914d\u7f6e\u6570\u636e\u5e93\u5165\u53e3 [keystone_authtoken] [paste_deploy]\u90e8\u5206\uff0c\u914d\u7f6e\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u5165\u53e3 [glance_store]\u90e8\u5206\uff0c\u914d\u7f6e\u672c\u5730\u6587\u4ef6\u7cfb\u7edf\u5b58\u50a8\u548c\u955c\u50cf\u6587\u4ef6\u7684\u4f4d\u7f6e \u540c\u6b65\u6570\u636e\u5e93 su -s /bin/sh -c \"glance-manage db_sync\" glance \u542f\u52a8\u670d\u52a1\uff1a systemctl enable openstack-glance-api.service systemctl start openstack-glance-api.service \u9a8c\u8bc1 \u5bfc\u5165\u73af\u5883\u53d8\u91cf sorce ~/.admin-openrcu \u4e0b\u8f7d\u955c\u50cf x86\u955c\u50cf\u4e0b\u8f7d\uff1a wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img arm\u955c\u50cf\u4e0b\u8f7d\uff1a wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-aarch64-disk.img \u6ce8\u610f \u5982\u679c\u60a8\u4f7f\u7528\u7684\u73af\u5883\u662f\u9cb2\u9e4f\u67b6\u6784\uff0c\u8bf7\u4e0b\u8f7daarch64\u7248\u672c\u7684\u955c\u50cf\uff1b\u5df2\u5bf9\u955c\u50cfcirros-0.5.2-aarch64-disk.img\u8fdb\u884c\u6d4b\u8bd5\u3002 \u5411Image\u670d\u52a1\u4e0a\u4f20\u955c\u50cf\uff1a openstack image create --disk-format qcow2 --container-format bare \\ --file cirros-0.4.0-x86_64-disk.img --public cirros \u786e\u8ba4\u955c\u50cf\u4e0a\u4f20\u5e76\u9a8c\u8bc1\u5c5e\u6027\uff1a openstack image list Placement \u00b6 Placement\u662fOpenStack\u63d0\u4f9b\u7684\u8d44\u6e90\u8c03\u5ea6\u7ec4\u4ef6\uff0c\u4e00\u822c\u4e0d\u9762\u5411\u7528\u6237\uff0c\u7531Nova\u7b49\u7ec4\u4ef6\u8c03\u7528\uff0c\u5b89\u88c5\u5728\u63a7\u5236\u8282\u70b9\u3002 \u5b89\u88c5\u3001\u914d\u7f6ePlacement\u670d\u52a1\u524d\uff0c\u9700\u8981\u5148\u521b\u5efa\u76f8\u5e94\u7684\u6570\u636e\u5e93\u3001\u670d\u52a1\u51ed\u8bc1\u548cAPI endpoints\u3002 \u521b\u5efa\u6570\u636e\u5e93 \u4f7f\u7528root\u7528\u6237\u8bbf\u95ee\u6570\u636e\u5e93\u670d\u52a1\uff1a mysql -u root -p \u521b\u5efaplacement\u6570\u636e\u5e93\uff1a MariaDB [(none)]> CREATE DATABASE placement; \u6388\u6743\u6570\u636e\u5e93\u8bbf\u95ee\uff1a MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' \\ IDENTIFIED BY 'PLACEMENT_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' \\ IDENTIFIED BY 'PLACEMENT_DBPASS'; \u66ff\u6362 PLACEMENT_DBPASS \u4e3aplacement\u6570\u636e\u5e93\u8bbf\u95ee\u5bc6\u7801\u3002 \u9000\u51fa\u6570\u636e\u5e93\u8bbf\u95ee\u5ba2\u6237\u7aef\uff1a exit \u914d\u7f6e\u7528\u6237\u548cEndpoints source admin\u51ed\u8bc1\uff0c\u4ee5\u83b7\u53d6admin\u547d\u4ee4\u884c\u6743\u9650\uff1a source ~/.admin-openrc 
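A quick optional check (a sketch, not required by the guide): before creating the placement user and endpoints, confirm that the admin credentials just sourced can actually authenticate against Keystone.

```bash
# Request a token with the admin credentials from ~/.admin-openrc;
# a token table in the output means Keystone authentication works.
source ~/.admin-openrc
openstack token issue
```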
\u521b\u5efaplacement\u7528\u6237\u5e76\u8bbe\u7f6e\u7528\u6237\u5bc6\u7801\uff1a openstack user create --domain default --password-prompt placement User Password: Repeat User Password: \u6dfb\u52a0placement\u7528\u6237\u5230service project\u5e76\u6307\u5b9aadmin\u89d2\u8272\uff1a openstack role add --project service --user placement admin \u521b\u5efaplacement\u670d\u52a1\u5b9e\u4f53\uff1a openstack service create --name placement \\ --description \"Placement API\" placement \u521b\u5efaPlacement API\u670d\u52a1endpoints\uff1a openstack endpoint create --region RegionOne \\ placement public http://controller:8778 openstack endpoint create --region RegionOne \\ placement internal http://controller:8778 openstack endpoint create --region RegionOne \\ placement admin http://controller:8778 \u5b89\u88c5\u53ca\u914d\u7f6e\u7ec4\u4ef6 \u5b89\u88c5\u8f6f\u4ef6\u5305\uff1a dnf install openstack-placement-api \u7f16\u8f91 /etc/placement/placement.conf \u914d\u7f6e\u6587\u4ef6\uff0c\u5b8c\u6210\u5982\u4e0b\u64cd\u4f5c\uff1a \u5728 [placement_database] \u90e8\u5206\uff0c\u914d\u7f6e\u6570\u636e\u5e93\u5165\u53e3\uff1a [placement_database] connection = mysql+pymysql://placement:PLACEMENT_DBPASS@controller/placement \u66ff\u6362 PLACEMENT_DBPASS \u4e3aplacement\u6570\u636e\u5e93\u7684\u5bc6\u7801\u3002 \u5728 [api] \u548c [keystone_authtoken] \u90e8\u5206\uff0c\u914d\u7f6e\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u5165\u53e3\uff1a [api] auth_strategy = keystone [keystone_authtoken] auth_url = http://controller:5000/v3 memcached_servers = controller:11211 auth_type = password project_domain_name = Default user_domain_name = Default project_name = service username = placement password = PLACEMENT_PASS \u66ff\u6362 PLACEMENT_PASS \u4e3aplacement\u7528\u6237\u7684\u5bc6\u7801\u3002 \u6570\u636e\u5e93\u540c\u6b65\uff0c\u586b\u5145Placement\u6570\u636e\u5e93\uff1a su -s /bin/sh -c \"placement-manage db sync\" placement \u542f\u52a8\u670d\u52a1 \u91cd\u542fhttpd\u670d\u52a1\uff1a systemctl restart httpd \u9a8c\u8bc1 source admin\u51ed\u8bc1\uff0c\u4ee5\u83b7\u53d6admin\u547d\u4ee4\u884c\u6743\u9650 source ~/.admin-openrc \u6267\u884c\u72b6\u6001\u68c0\u67e5\uff1a placement-status upgrade check +----------------------------------------------------------------------+ | Upgrade Check Results | +----------------------------------------------------------------------+ | Check: Missing Root Provider IDs | | Result: Success | | Details: None | +----------------------------------------------------------------------+ | Check: Incomplete Consumers | | Result: Success | | Details: None | +----------------------------------------------------------------------+ | Check: Policy File JSON to YAML Migration | | Result: Failure | | Details: Your policy file is JSON-formatted which is deprecated. You | | need to switch to YAML-formatted file. Use the | | ``oslopolicy-convert-json-to-yaml`` tool to convert the | | existing JSON-formatted files to YAML in a backwards- | | compatible manner: https://docs.openstack.org/oslo.policy/ | | latest/cli/oslopolicy-convert-json-to-yaml.html. 
| +----------------------------------------------------------------------+ \u8fd9\u91cc\u53ef\u4ee5\u770b\u5230 Policy File JSON to YAML Migration \u7684\u7ed3\u679c\u4e3aFailure\u3002\u8fd9\u662f\u56e0\u4e3a\u5728Placement\u4e2d\uff0cJSON\u683c\u5f0f\u7684policy\u6587\u4ef6\u4eceWallaby\u7248\u672c\u5f00\u59cb\u5df2\u5904\u4e8e deprecated \u72b6\u6001\u3002\u53ef\u4ee5\u53c2\u8003\u63d0\u793a\uff0c\u4f7f\u7528 oslopolicy-convert-json-to-yaml \u5de5\u5177 \u5c06\u73b0\u6709\u7684JSON\u683c\u5f0fpolicy\u6587\u4ef6\u8f6c\u5316\u4e3aYAML\u683c\u5f0f\u3002 oslopolicy-convert-json-to-yaml --namespace placement \\ --policy-file /etc/placement/policy.json \\ --output-file /etc/placement/policy.yaml mv /etc/placement/policy.json{,.bak} \u6ce8\uff1a\u5f53\u524d\u73af\u5883\u4e2d\u6b64\u95ee\u9898\u53ef\u5ffd\u7565\uff0c\u4e0d\u5f71\u54cd\u8fd0\u884c\u3002 \u9488\u5bf9placement API\u8fd0\u884c\u547d\u4ee4\uff1a \u5b89\u88c5osc-placement\u63d2\u4ef6\uff1a dnf install python3-osc-placement \u5217\u51fa\u53ef\u7528\u7684\u8d44\u6e90\u7c7b\u522b\u53ca\u7279\u6027\uff1a openstack --os-placement-api-version 1.2 resource class list --sort-column name +----------------------------+ | name | +----------------------------+ | DISK_GB | | FPGA | | ... | openstack --os-placement-api-version 1.6 trait list --sort-column name +---------------------------------------+ | name | +---------------------------------------+ | COMPUTE_ACCELERATORS | | COMPUTE_ARCH_AARCH64 | | ... | Nova \u00b6 Nova\u662fOpenStack\u7684\u8ba1\u7b97\u670d\u52a1\uff0c\u8d1f\u8d23\u865a\u62df\u673a\u7684\u521b\u5efa\u3001\u53d1\u653e\u7b49\u529f\u80fd\u3002 Controller\u8282\u70b9 \u5728\u63a7\u5236\u8282\u70b9\u6267\u884c\u4ee5\u4e0b\u64cd\u4f5c\u3002 \u521b\u5efa\u6570\u636e\u5e93 \u4f7f\u7528root\u7528\u6237\u8bbf\u95ee\u6570\u636e\u5e93\u670d\u52a1\uff1a mysql -u root -p \u521b\u5efa nova_api \u3001 nova \u548c nova_cell0 \u6570\u636e\u5e93\uff1a MariaDB [(none)]> CREATE DATABASE nova_api; MariaDB [(none)]> CREATE DATABASE nova; MariaDB [(none)]> CREATE DATABASE nova_cell0; \u6388\u6743\u6570\u636e\u5e93\u8bbf\u95ee\uff1a MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \\ IDENTIFIED BY 'NOVA_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \\ IDENTIFIED BY 'NOVA_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \\ IDENTIFIED BY 'NOVA_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \\ IDENTIFIED BY 'NOVA_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \\ IDENTIFIED BY 'NOVA_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \\ IDENTIFIED BY 'NOVA_DBPASS'; \u66ff\u6362 NOVA_DBPASS \u4e3anova\u76f8\u5173\u6570\u636e\u5e93\u8bbf\u95ee\u5bc6\u7801\u3002 \u9000\u51fa\u6570\u636e\u5e93\u8bbf\u95ee\u5ba2\u6237\u7aef\uff1a exit \u914d\u7f6e\u7528\u6237\u548cEndpoints source admin\u51ed\u8bc1\uff0c\u4ee5\u83b7\u53d6admin\u547d\u4ee4\u884c\u6743\u9650\uff1a source ~/.admin-openrc \u521b\u5efanova\u7528\u6237\u5e76\u8bbe\u7f6e\u7528\u6237\u5bc6\u7801\uff1a openstack user create --domain default --password-prompt nova User Password: Repeat User Password: \u6dfb\u52a0nova\u7528\u6237\u5230service project\u5e76\u6307\u5b9aadmin\u89d2\u8272\uff1a openstack role add --project service --user nova admin \u521b\u5efanova\u670d\u52a1\u5b9e\u4f53\uff1a openstack service create --name nova \\ --description \"OpenStack Compute\" compute \u521b\u5efaNova API\u670d\u52a1endpoints\uff1a openstack 
endpoint create --region RegionOne \\ compute public http://controller:8774/v2.1 openstack endpoint create --region RegionOne \\ compute internal http://controller:8774/v2.1 openstack endpoint create --region RegionOne \\ compute admin http://controller:8774/v2.1 \u5b89\u88c5\u53ca\u914d\u7f6e\u7ec4\u4ef6 \u5b89\u88c5\u8f6f\u4ef6\u5305\uff1a dnf install openstack-nova-api openstack-nova-conductor \\ openstack-nova-novncproxy openstack-nova-scheduler \u7f16\u8f91 /etc/nova/nova.conf \u914d\u7f6e\u6587\u4ef6\uff0c\u5b8c\u6210\u5982\u4e0b\u64cd\u4f5c\uff1a \u5728 [default] \u90e8\u5206\uff0c\u542f\u7528\u8ba1\u7b97\u548c\u5143\u6570\u636e\u7684API\uff0c\u914d\u7f6eRabbitMQ\u6d88\u606f\u961f\u5217\u5165\u53e3\uff0c\u4f7f\u7528controller\u8282\u70b9\u7ba1\u7406IP\u914d\u7f6emy_ip\uff0c\u663e\u5f0f\u5b9a\u4e49log_dir\uff1a [DEFAULT] enabled_apis = osapi_compute,metadata transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/ my_ip = 192.168.0.2 log_dir = /var/log/nova state_path = /var/lib/nova \u66ff\u6362 RABBIT_PASS \u4e3aRabbitMQ\u4e2dopenstack\u8d26\u6237\u7684\u5bc6\u7801\u3002 \u5728 [api_database] \u548c [database] \u90e8\u5206\uff0c\u914d\u7f6e\u6570\u636e\u5e93\u5165\u53e3\uff1a [api_database] connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api [database] connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova \u66ff\u6362 NOVA_DBPASS \u4e3anova\u76f8\u5173\u6570\u636e\u5e93\u7684\u5bc6\u7801\u3002 \u5728 [api] \u548c [keystone_authtoken] \u90e8\u5206\uff0c\u914d\u7f6e\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u5165\u53e3\uff1a [api] auth_strategy = keystone [keystone_authtoken] auth_url = http://controller:5000/v3 memcached_servers = controller:11211 auth_type = password project_domain_name = Default user_domain_name = Default project_name = service username = nova password = NOVA_PASS \u66ff\u6362 NOVA_PASS \u4e3anova\u7528\u6237\u7684\u5bc6\u7801\u3002 \u5728 [vnc] \u90e8\u5206\uff0c\u542f\u7528\u5e76\u914d\u7f6e\u8fdc\u7a0b\u63a7\u5236\u53f0\u5165\u53e3\uff1a [vnc] enabled = true server_listen = $my_ip server_proxyclient_address = $my_ip \u5728 [glance] \u90e8\u5206\uff0c\u914d\u7f6e\u955c\u50cf\u670d\u52a1API\u7684\u5730\u5740\uff1a [glance] api_servers = http://controller:9292 \u5728 [oslo_concurrency] \u90e8\u5206\uff0c\u914d\u7f6elock path\uff1a [oslo_concurrency] lock_path = /var/lib/nova/tmp [placement]\u90e8\u5206\uff0c\u914d\u7f6eplacement\u670d\u52a1\u7684\u5165\u53e3\uff1a [placement] region_name = RegionOne project_domain_name = Default project_name = service auth_type = password user_domain_name = Default auth_url = http://controller:5000/v3 username = placement password = PLACEMENT_PASS \u66ff\u6362 PLACEMENT_PASS \u4e3aplacement\u7528\u6237\u7684\u5bc6\u7801\u3002 \u6570\u636e\u5e93\u540c\u6b65\uff1a \u540c\u6b65nova-api\u6570\u636e\u5e93\uff1a su -s /bin/sh -c \"nova-manage api_db sync\" nova \u6ce8\u518ccell0\u6570\u636e\u5e93\uff1a su -s /bin/sh -c \"nova-manage cell_v2 map_cell0\" nova \u521b\u5efacell1 cell\uff1a su -s /bin/sh -c \"nova-manage cell_v2 create_cell --name=cell1 --verbose\" nova \u540c\u6b65nova\u6570\u636e\u5e93\uff1a su -s /bin/sh -c \"nova-manage db sync\" nova \u9a8c\u8bc1cell0\u548ccell1\u6ce8\u518c\u6b63\u786e\uff1a su -s /bin/sh -c \"nova-manage cell_v2 list_cells\" nova \u542f\u52a8\u670d\u52a1 systemctl enable \\ openstack-nova-api.service \\ openstack-nova-scheduler.service \\ openstack-nova-conductor.service \\ openstack-nova-novncproxy.service systemctl start \\ openstack-nova-api.service \\ 
openstack-nova-scheduler.service \\ openstack-nova-conductor.service \\ openstack-nova-novncproxy.service Compute\u8282\u70b9 \u5728\u8ba1\u7b97\u8282\u70b9\u6267\u884c\u4ee5\u4e0b\u64cd\u4f5c\u3002 \u5b89\u88c5\u8f6f\u4ef6\u5305 dnf install openstack-nova-compute \u7f16\u8f91 /etc/nova/nova.conf \u914d\u7f6e\u6587\u4ef6 \u5728 [default] \u90e8\u5206\uff0c\u542f\u7528\u8ba1\u7b97\u548c\u5143\u6570\u636e\u7684API\uff0c\u914d\u7f6eRabbitMQ\u6d88\u606f\u961f\u5217\u5165\u53e3\uff0c\u4f7f\u7528Compute\u8282\u70b9\u7ba1\u7406IP\u914d\u7f6emy_ip\uff0c\u663e\u5f0f\u5b9a\u4e49compute_driver\u3001instances_path\u3001log_dir\uff1a [DEFAULT] enabled_apis = osapi_compute,metadata transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/ my_ip = 192.168.0.3 compute_driver = libvirt.LibvirtDriver instances_path = /var/lib/nova/instances log_dir = /var/log/nova \u66ff\u6362 RABBIT_PASS \u4e3aRabbitMQ\u4e2dopenstack\u8d26\u6237\u7684\u5bc6\u7801\u3002 \u5728 [api] \u548c [keystone_authtoken] \u90e8\u5206\uff0c\u914d\u7f6e\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u5165\u53e3\uff1a [api] auth_strategy = keystone [keystone_authtoken] auth_url = http://controller:5000/v3 memcached_servers = controller:11211 auth_type = password project_domain_name = Default user_domain_name = Default project_name = service username = nova password = NOVA_PASS \u66ff\u6362 NOVA_PASS \u4e3anova\u7528\u6237\u7684\u5bc6\u7801\u3002 \u5728 [vnc] \u90e8\u5206\uff0c\u542f\u7528\u5e76\u914d\u7f6e\u8fdc\u7a0b\u63a7\u5236\u53f0\u5165\u53e3\uff1a [vnc] enabled = true server_listen = $my_ip server_proxyclient_address = $my_ip novncproxy_base_url = http://controller:6080/vnc_auto.html \u5728 [glance] \u90e8\u5206\uff0c\u914d\u7f6e\u955c\u50cf\u670d\u52a1API\u7684\u5730\u5740\uff1a [glance] api_servers = http://controller:9292 \u5728 [oslo_concurrency] \u90e8\u5206\uff0c\u914d\u7f6elock path\uff1a [oslo_concurrency] lock_path = /var/lib/nova/tmp [placement]\u90e8\u5206\uff0c\u914d\u7f6eplacement\u670d\u52a1\u7684\u5165\u53e3\uff1a [placement] region_name = RegionOne project_domain_name = Default project_name = service auth_type = password user_domain_name = Default auth_url = http://controller:5000/v3 username = placement password = PLACEMENT_PASS \u66ff\u6362 PLACEMENT_PASS \u4e3aplacement\u7528\u6237\u7684\u5bc6\u7801\u3002 \u786e\u8ba4\u8ba1\u7b97\u8282\u70b9\u662f\u5426\u652f\u6301\u865a\u62df\u673a\u786c\u4ef6\u52a0\u901f\uff08x86_64\uff09 \u5904\u7406\u5668\u4e3ax86_64\u67b6\u6784\u65f6\uff0c\u53ef\u901a\u8fc7\u8fd0\u884c\u5982\u4e0b\u547d\u4ee4\u786e\u8ba4\u662f\u5426\u652f\u6301\u786c\u4ef6\u52a0\u901f\uff1a egrep -c '(vmx|svm)' /proc/cpuinfo \u5982\u679c\u8fd4\u56de\u503c\u4e3a0\u5219\u4e0d\u652f\u6301\u786c\u4ef6\u52a0\u901f\uff0c\u9700\u8981\u914d\u7f6elibvirt\u4f7f\u7528QEMU\u800c\u4e0d\u662f\u9ed8\u8ba4\u7684KVM\u3002\u7f16\u8f91 /etc/nova/nova.conf \u7684 [libvirt] \u90e8\u5206\uff1a [libvirt] virt_type = qemu \u5982\u679c\u8fd4\u56de\u503c\u4e3a1\u6216\u66f4\u5927\u7684\u503c\uff0c\u5219\u652f\u6301\u786c\u4ef6\u52a0\u901f\uff0c\u4e0d\u9700\u8981\u8fdb\u884c\u989d\u5916\u7684\u914d\u7f6e\u3002 \u786e\u8ba4\u8ba1\u7b97\u8282\u70b9\u662f\u5426\u652f\u6301\u865a\u62df\u673a\u786c\u4ef6\u52a0\u901f\uff08arm64\uff09 \u5904\u7406\u5668\u4e3aarm64\u67b6\u6784\u65f6\uff0c\u53ef\u901a\u8fc7\u8fd0\u884c\u5982\u4e0b\u547d\u4ee4\u786e\u8ba4\u662f\u5426\u652f\u6301\u786c\u4ef6\u52a0\u901f\uff1a virt-host-validate # 
\u8be5\u547d\u4ee4\u7531libvirt\u63d0\u4f9b\uff0c\u6b64\u65f6libvirt\u5e94\u5df2\u4f5c\u4e3aopenstack-nova-compute\u4f9d\u8d56\u88ab\u5b89\u88c5\uff0c\u73af\u5883\u4e2d\u5df2\u6709\u6b64\u547d\u4ee4 \u663e\u793aFAIL\u65f6\uff0c\u8868\u793a\u4e0d\u652f\u6301\u786c\u4ef6\u52a0\u901f\uff0c\u9700\u8981\u914d\u7f6elibvirt\u4f7f\u7528QEMU\u800c\u4e0d\u662f\u9ed8\u8ba4\u7684KVM\u3002 QEMU: Checking if device /dev/kvm exists: FAIL (Check that CPU and firmware supports virtualization and kvm module is loaded) \u7f16\u8f91 /etc/nova/nova.conf \u7684 [libvirt] \u90e8\u5206\uff1a [libvirt] virt_type = qemu \u663e\u793aPASS\u65f6\uff0c\u8868\u793a\u652f\u6301\u786c\u4ef6\u52a0\u901f\uff0c\u4e0d\u9700\u8981\u8fdb\u884c\u989d\u5916\u7684\u914d\u7f6e\u3002 QEMU: Checking if device /dev/kvm exists: PASS \u914d\u7f6eqemu\uff08\u4ec5arm64\uff09 \u4ec5\u5f53\u5904\u7406\u5668\u4e3aarm64\u67b6\u6784\u65f6\u9700\u8981\u6267\u884c\u6b64\u64cd\u4f5c\u3002 \u7f16\u8f91 /etc/libvirt/qemu.conf : nvram = [\"/usr/share/AAVMF/AAVMF_CODE.fd: \\ /usr/share/AAVMF/AAVMF_VARS.fd\", \\ \"/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw: \\ /usr/share/edk2/aarch64/vars-template-pflash.raw\"] \u7f16\u8f91 /etc/qemu/firmware/edk2-aarch64.json { \"description\": \"UEFI firmware for ARM64 virtual machines\", \"interface-types\": [ \"uefi\" ], \"mapping\": { \"device\": \"flash\", \"executable\": { \"filename\": \"/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw\", \"format\": \"raw\" }, \"nvram-template\": { \"filename\": \"/usr/share/edk2/aarch64/vars-template-pflash.raw\", \"format\": \"raw\" } }, \"targets\": [ { \"architecture\": \"aarch64\", \"machines\": [ \"virt-*\" ] } ], \"features\": [ ], \"tags\": [ ] } \u542f\u52a8\u670d\u52a1 systemctl enable libvirtd.service openstack-nova-compute.service systemctl start libvirtd.service openstack-nova-compute.service Controller\u8282\u70b9 \u5728\u63a7\u5236\u8282\u70b9\u6267\u884c\u4ee5\u4e0b\u64cd\u4f5c\u3002 \u6dfb\u52a0\u8ba1\u7b97\u8282\u70b9\u5230openstack\u96c6\u7fa4 source admin\u51ed\u8bc1\uff0c\u4ee5\u83b7\u53d6admin\u547d\u4ee4\u884c\u6743\u9650\uff1a source ~/.admin-openrc \u786e\u8ba4nova-compute\u670d\u52a1\u5df2\u8bc6\u522b\u5230\u6570\u636e\u5e93\u4e2d\uff1a openstack compute service list --service nova-compute \u53d1\u73b0\u8ba1\u7b97\u8282\u70b9\uff0c\u5c06\u8ba1\u7b97\u8282\u70b9\u6dfb\u52a0\u5230cell\u6570\u636e\u5e93\uff1a su -s /bin/sh -c \"nova-manage cell_v2 discover_hosts --verbose\" nova \u7ed3\u679c\u5982\u4e0b\uff1a Modules with known eventlet monkey patching issues were imported prior to eventlet monkey patching: urllib3. This warning can usually be ignored if the caller is only importing and not executing nova code. Found 2 cell mappings. Skipping cell0 since it does not contain hosts. 
Getting computes from cell 'cell1': 6dae034e-b2d9-4a6c-b6f0-60ada6a6ddc2 Checking host mapping for compute host 'compute': 6286a86f-09d7-4786-9137-1185654c9e2e Creating host mapping for compute host 'compute': 6286a86f-09d7-4786-9137-1185654c9e2e Found 1 unmapped computes in cell: 6dae034e-b2d9-4a6c-b6f0-60ada6a6ddc2 \u9a8c\u8bc1 \u5217\u51fa\u670d\u52a1\u7ec4\u4ef6\uff0c\u9a8c\u8bc1\u6bcf\u4e2a\u6d41\u7a0b\u90fd\u6210\u529f\u542f\u52a8\u548c\u6ce8\u518c\uff1a openstack compute service list \u5217\u51fa\u8eab\u4efd\u670d\u52a1\u4e2d\u7684API\u7aef\u70b9\uff0c\u9a8c\u8bc1\u4e0e\u8eab\u4efd\u670d\u52a1\u7684\u8fde\u63a5\uff1a openstack catalog list \u5217\u51fa\u955c\u50cf\u670d\u52a1\u4e2d\u7684\u955c\u50cf\uff0c\u9a8c\u8bc1\u4e0e\u955c\u50cf\u670d\u52a1\u7684\u8fde\u63a5\uff1a openstack image list \u68c0\u67e5cells\u662f\u5426\u8fd0\u4f5c\u6210\u529f\uff0c\u4ee5\u53ca\u5176\u4ed6\u5fc5\u8981\u6761\u4ef6\u662f\u5426\u5df2\u5177\u5907\u3002 nova-status upgrade check Neutron \u00b6 Neutron\u662fOpenStack\u7684\u7f51\u7edc\u670d\u52a1\uff0c\u63d0\u4f9b\u865a\u62df\u4ea4\u6362\u673a\u3001IP\u8def\u7531\u3001DHCP\u7b49\u529f\u80fd\u3002 Controller\u8282\u70b9 \u521b\u5efa\u6570\u636e\u5e93\u3001\u670d\u52a1\u51ed\u8bc1\u548c API \u670d\u52a1\u7aef\u70b9 \u521b\u5efa\u6570\u636e\u5e93\uff1a mysql -u root -p MariaDB [(none)]> CREATE DATABASE neutron; MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'NEUTRON_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'NEUTRON_DBPASS'; MariaDB [(none)]> exit; \u521b\u5efa\u7528\u6237\u548c\u670d\u52a1\uff0c\u5e76\u8bb0\u4f4f\u521b\u5efaneutron\u7528\u6237\u65f6\u8f93\u5165\u7684\u5bc6\u7801\uff0c\u7528\u4e8e\u914d\u7f6eNEUTRON_PASS\uff1a source ~/.admin-openrc openstack user create --domain default --password-prompt neutron openstack role add --project service --user neutron admin openstack service create --name neutron --description \"OpenStack Networking\" network \u90e8\u7f72 Neutron API \u670d\u52a1\uff1a openstack endpoint create --region RegionOne network public http://controller:9696 openstack endpoint create --region RegionOne network internal http://controller:9696 openstack endpoint create --region RegionOne network admin http://controller:9696 \u5b89\u88c5\u8f6f\u4ef6\u5305 dnf install -y openstack-neutron openstack-neutron-linuxbridge ebtables ipset openstack-neutron-ml2 3. 
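The METADATA_SECRET referenced in the metadata agent and nova configuration below is a user-chosen shared secret; one possible way to generate it (a suggestion, not mandated by this guide) is shown here.

```bash
# Generate a random shared secret for the metadata proxy; use the same
# value for metadata_proxy_shared_secret in both neutron and nova configs.
openssl rand -hex 16
```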
\u914d\u7f6eNeutron \u4fee\u6539/etc/neutron/neutron.conf [database] connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron [DEFAULT] core_plugin = ml2 service_plugins = router allow_overlapping_ips = true transport_url = rabbit://openstack:RABBIT_PASS@controller auth_strategy = keystone notify_nova_on_port_status_changes = true notify_nova_on_port_data_changes = true [keystone_authtoken] www_authenticate_uri = http://controller:5000 auth_url = http://controller:5000 memcached_servers = controller:11211 auth_type = password project_domain_name = Default user_domain_name = Default project_name = service username = neutron password = NEUTRON_PASS [nova] auth_url = http://controller:5000 auth_type = password project_domain_name = Default user_domain_name = Default region_name = RegionOne project_name = service username = nova password = NOVA_PASS [oslo_concurrency] lock_path = /var/lib/neutron/tmp [experimental] linuxbridge = true \u914d\u7f6eML2\uff0cML2\u5177\u4f53\u914d\u7f6e\u53ef\u4ee5\u6839\u636e\u7528\u6237\u9700\u6c42\u81ea\u884c\u4fee\u6539\uff0c\u672c\u6587\u4f7f\u7528\u7684\u662fprovider network + linuxbridge** \u4fee\u6539/etc/neutron/plugins/ml2/ml2_conf.ini [ml2] type_drivers = flat,vlan,vxlan tenant_network_types = vxlan mechanism_drivers = linuxbridge,l2population extension_drivers = port_security [ml2_type_flat] flat_networks = provider [ml2_type_vxlan] vni_ranges = 1:1000 [securitygroup] enable_ipset = true \u4fee\u6539/etc/neutron/plugins/ml2/linuxbridge_agent.ini [linux_bridge] physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME [vxlan] enable_vxlan = true local_ip = OVERLAY_INTERFACE_IP_ADDRESS l2_population = true [securitygroup] enable_security_group = true firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver \u914d\u7f6eLayer-3\u4ee3\u7406 \u4fee\u6539/etc/neutron/l3_agent.ini [DEFAULT] interface_driver = linuxbridge \u914d\u7f6eDHCP\u4ee3\u7406 \u4fee\u6539/etc/neutron/dhcp_agent.ini [DEFAULT] interface_driver = linuxbridge dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq enable_isolated_metadata = true \u914d\u7f6emetadata\u4ee3\u7406 \u4fee\u6539/etc/neutron/metadata_agent.ini [DEFAULT] nova_metadata_host = controller metadata_proxy_shared_secret = METADATA_SECRET \u914d\u7f6enova\u670d\u52a1\u4f7f\u7528neutron\uff0c\u4fee\u6539/etc/nova/nova.conf [neutron] auth_url = http://controller:5000 auth_type = password project_domain_name = default user_domain_name = default region_name = RegionOne project_name = service username = neutron password = NEUTRON_PASS service_metadata_proxy = true metadata_proxy_shared_secret = METADATA_SECRET \u521b\u5efa/etc/neutron/plugin.ini\u7684\u7b26\u53f7\u94fe\u63a5 ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini \u540c\u6b65\u6570\u636e\u5e93 su -s /bin/sh -c \"neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head\" neutron \u91cd\u542fnova api\u670d\u52a1 systemctl restart openstack-nova-api \u542f\u52a8\u7f51\u7edc\u670d\u52a1 systemctl enable neutron-server.service neutron-linuxbridge-agent.service \\ neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service systemctl start neutron-server.service neutron-linuxbridge-agent.service \\ neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service Compute\u8282\u70b9 \u5b89\u88c5\u8f6f\u4ef6\u5305 dnf install openstack-neutron-linuxbridge ebtables ipset -y \u914d\u7f6eNeutron 
\u4fee\u6539/etc/neutron/neutron.conf [DEFAULT] transport_url = rabbit://openstack:RABBIT_PASS@controller auth_strategy = keystone [keystone_authtoken] www_authenticate_uri = http://controller:5000 auth_url = http://controller:5000 memcached_servers = controller:11211 auth_type = password project_domain_name = Default user_domain_name = Default project_name = service username = neutron password = NEUTRON_PASS [oslo_concurrency] lock_path = /var/lib/neutron/tmp \u4fee\u6539/etc/neutron/plugins/ml2/linuxbridge_agent.ini [linux_bridge] physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME [vxlan] enable_vxlan = true local_ip = OVERLAY_INTERFACE_IP_ADDRESS l2_population = true [securitygroup] enable_security_group = true firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver \u914d\u7f6enova compute\u670d\u52a1\u4f7f\u7528neutron\uff0c\u4fee\u6539/etc/nova/nova.conf [neutron] auth_url = http://controller:5000 auth_type = password project_domain_name = default user_domain_name = default region_name = RegionOne project_name = service username = neutron password = NEUTRON_PASS \u91cd\u542fnova-compute\u670d\u52a1 systemctl restart openstack-nova-compute.service \u542f\u52a8Neutron linuxbridge agent\u670d\u52a1 systemctl enable neutron-linuxbridge-agent systemctl start neutron-linuxbridge-agent Cinder \u00b6 Cinder\u662fOpenStack\u7684\u5b58\u50a8\u670d\u52a1\uff0c\u63d0\u4f9b\u5757\u8bbe\u5907\u7684\u521b\u5efa\u3001\u53d1\u653e\u3001\u5907\u4efd\u7b49\u529f\u80fd\u3002 Controller\u8282\u70b9 \uff1a \u521d\u59cb\u5316\u6570\u636e\u5e93 CINDER_DBPASS \u662f\u7528\u6237\u81ea\u5b9a\u4e49\u7684cinder\u6570\u636e\u5e93\u5bc6\u7801\u3002 mysql -u root -p MariaDB [(none)]> CREATE DATABASE cinder; MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'CINDER_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'CINDER_DBPASS'; MariaDB [(none)]> exit \u521d\u59cb\u5316Keystone\u8d44\u6e90\u5bf9\u8c61 source ~/.admin-openrc #\u521b\u5efa\u7528\u6237\u65f6\uff0c\u547d\u4ee4\u884c\u4f1a\u63d0\u793a\u8f93\u5165\u5bc6\u7801\uff0c\u8bf7\u8f93\u5165\u81ea\u5b9a\u4e49\u7684\u5bc6\u7801\uff0c\u4e0b\u6587\u6d89\u53ca\u5230`CINDER_PASS`\u7684\u5730\u65b9\u66ff\u6362\u6210\u8be5\u5bc6\u7801\u5373\u53ef\u3002 openstack user create --domain default --password-prompt cinder openstack role add --project service --user cinder admin openstack service create --name cinderv3 --description \"OpenStack Block Storage\" volumev3 openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\\(project_id\\)s openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\\(project_id\\)s openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\\(project_id\\)s 3. 
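Optional verification (a sketch, assuming `~/.admin-openrc` is still sourced): before installing the Cinder packages, list the volumev3 service and the three endpoints created above to confirm they were registered.

```bash
# Lists the Block Storage (volumev3) endpoints and the cinderv3 service entry.
openstack endpoint list --service volumev3
openstack service show cinderv3
```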
\u5b89\u88c5\u8f6f\u4ef6\u5305 dnf install openstack-cinder-api openstack-cinder-scheduler \u4fee\u6539cinder\u914d\u7f6e\u6587\u4ef6 /etc/cinder/cinder.conf [DEFAULT] transport_url = rabbit://openstack:RABBIT_PASS@controller auth_strategy = keystone my_ip = 192.168.0.2 [database] connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder [keystone_authtoken] www_authenticate_uri = http://controller:5000 auth_url = http://controller:5000 memcached_servers = controller:11211 auth_type = password project_domain_name = Default user_domain_name = Default project_name = service username = cinder password = CINDER_PASS [oslo_concurrency] lock_path = /var/lib/cinder/tmp \u6570\u636e\u5e93\u540c\u6b65 su -s /bin/sh -c \"cinder-manage db sync\" cinder \u4fee\u6539nova\u914d\u7f6e /etc/nova/nova.conf [cinder] os_region_name = RegionOne \u542f\u52a8\u670d\u52a1 systemctl restart openstack-nova-api systemctl start openstack-cinder-api openstack-cinder-scheduler Storage\u8282\u70b9 \uff1a Storage\u8282\u70b9\u8981\u63d0\u524d\u51c6\u5907\u81f3\u5c11\u4e00\u5757\u786c\u76d8\uff0c\u4f5c\u4e3acinder\u7684\u5b58\u50a8\u540e\u7aef\uff0c\u4e0b\u6587\u9ed8\u8ba4storage\u8282\u70b9\u5df2\u7ecf\u5b58\u5728\u4e00\u5757\u672a\u4f7f\u7528\u7684\u786c\u76d8\uff0c\u8bbe\u5907\u540d\u79f0\u4e3a /dev/sdb \uff0c\u7528\u6237\u5728\u914d\u7f6e\u8fc7\u7a0b\u4e2d\uff0c\u8bf7\u6309\u7167\u771f\u5b9e\u73af\u5883\u4fe1\u606f\u8fdb\u884c\u540d\u79f0\u66ff\u6362\u3002 Cinder\u652f\u6301\u5f88\u591a\u7c7b\u578b\u7684\u540e\u7aef\u5b58\u50a8\uff0c\u672c\u6307\u5bfc\u4f7f\u7528\u6700\u7b80\u5355\u7684lvm\u4e3a\u53c2\u8003\uff0c\u5982\u679c\u60a8\u60f3\u4f7f\u7528\u5982ceph\u7b49\u5176\u4ed6\u540e\u7aef\uff0c\u8bf7\u81ea\u884c\u914d\u7f6e\u3002 \u5b89\u88c5\u8f6f\u4ef6\u5305 dnf install lvm2 device-mapper-persistent-data scsi-target-utils rpcbind nfs-utils openstack-cinder-volume openstack-cinder-backup \u914d\u7f6elvm\u5377\u7ec4 pvcreate /dev/sdb vgcreate cinder-volumes /dev/sdb \u4fee\u6539cinder\u914d\u7f6e /etc/cinder/cinder.conf [DEFAULT] transport_url = rabbit://openstack:RABBIT_PASS@controller auth_strategy = keystone my_ip = 192.168.0.4 enabled_backends = lvm glance_api_servers = http://controller:9292 [keystone_authtoken] www_authenticate_uri = http://controller:5000 auth_url = http://controller:5000 memcached_servers = controller:11211 auth_type = password project_domain_name = default user_domain_name = default project_name = service username = cinder password = CINDER_PASS [database] connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder [lvm] volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver volume_group = cinder-volumes target_protocol = iscsi target_helper = lioadm [oslo_concurrency] lock_path = /var/lib/cinder/tmp \u914d\u7f6ecinder backup \uff08\u53ef\u9009\uff09 cinder-backup\u662f\u53ef\u9009\u7684\u5907\u4efd\u670d\u52a1\uff0ccinder\u540c\u6837\u652f\u6301\u5f88\u591a\u79cd\u5907\u4efd\u540e\u7aef\uff0c\u672c\u6587\u4f7f\u7528swift\u5b58\u50a8\uff0c\u5982\u679c\u60a8\u60f3\u4f7f\u7528\u5982NFS\u7b49\u540e\u7aef\uff0c\u8bf7\u81ea\u884c\u914d\u7f6e\uff0c\u4f8b\u5982\u53ef\u4ee5\u53c2\u8003 OpenStack\u5b98\u65b9\u6587\u6863 \u5bf9NFS\u7684\u914d\u7f6e\u8bf4\u660e\u3002 \u4fee\u6539 /etc/cinder/cinder.conf \uff0c\u5728 [DEFAULT] \u4e2d\u65b0\u589e [DEFAULT] backup_driver = cinder.backup.drivers.swift.SwiftBackupDriver backup_swift_url = SWIFT_URL \u8fd9\u91cc\u7684 SWIFT_URL 
Start the services:

```
systemctl start openstack-cinder-volume target
systemctl start openstack-cinder-backup   # optional
```

At this point the deployment of the Cinder service is complete. You can run a simple check from the controller with the following commands:

```
source ~/.admin-openrc
openstack volume service list
openstack volume list
```
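As an optional smoke test, assuming the LVM backend above and a reachable API, you can create and delete a small test volume; the volume name and size here are arbitrary examples:

```
# Create a 1 GB test volume and check that it becomes "available"
openstack volume create --size 1 test-volume
openstack volume show test-volume -f value -c status
# Clean up
openstack volume delete test-volume
```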
### Horizon

Horizon is the web front end provided by OpenStack. It lets users control the OpenStack cluster from a web page with mouse operations instead of lengthy CLI commands. Horizon is usually deployed on the controller node.

Install the package:

```
dnf install openstack-dashboard
```

Modify the configuration file /etc/openstack-dashboard/local_settings:

```
OPENSTACK_HOST = "controller"
ALLOWED_HOSTS = ['*', ]

OPENSTACK_KEYSTONE_URL = "http://controller:5000/v3"

SESSION_ENGINE = 'django.contrib.sessions.backends.cache'

CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'controller:11211',
    }
}

OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "member"

WEBROOT = '/dashboard'
POLICY_FILES_PATH = "/etc/openstack-dashboard"

OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 3,
}
```

Restart the service:

```
systemctl restart httpd
```

At this point the deployment of the Horizon service is complete. Open a browser and enter http://192.168.0.2/dashboard to reach the Horizon login page.

### Ironic

Ironic is the OpenStack bare metal service. It is recommended if you need bare metal provisioning; otherwise it does not need to be installed.

Perform the following operations on the controller node.

Set up the database. The Bare Metal service stores information in a database. Create an ironic database that the ironic user can access, replacing IRONIC_DBPASS with a suitable password:

```
mysql -u root -p
MariaDB [(none)]> CREATE DATABASE ironic CHARACTER SET utf8;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'localhost' \
  IDENTIFIED BY 'IRONIC_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'%' \
  IDENTIFIED BY 'IRONIC_DBPASS';
MariaDB [(none)]> exit
Bye
```

Create the service user credentials. Create the Bare Metal service users, replacing IRONIC_PASS with the password of the ironic user and IRONIC_INSPECTOR_PASS with the password of the ironic-inspector user:

```
openstack user create --password IRONIC_PASS \
  --email ironic@example.com ironic
openstack role add --project service --user ironic admin
openstack service create --name ironic \
  --description "Ironic baremetal provisioning service" baremetal
openstack service create --name ironic-inspector --description "Ironic inspector baremetal provisioning service" baremetal-introspection
openstack user create --password IRONIC_INSPECTOR_PASS --email ironic_inspector@example.com ironic-inspector
openstack role add --project service --user ironic-inspector admin
```

Create the Bare Metal service endpoints:

```
openstack endpoint create --region RegionOne baremetal admin http://192.168.0.2:6385
openstack endpoint create --region RegionOne baremetal public http://192.168.0.2:6385
openstack endpoint create --region RegionOne baremetal internal http://192.168.0.2:6385
openstack endpoint create --region RegionOne baremetal-introspection internal http://192.168.0.2:5050/v1
openstack endpoint create --region RegionOne baremetal-introspection public http://192.168.0.2:5050/v1
openstack endpoint create --region RegionOne baremetal-introspection admin http://192.168.0.2:5050/v1
```

Install the components:

```
dnf install openstack-ironic-api openstack-ironic-conductor python3-ironicclient
```

Configure the ironic-api service. The configuration file path is /etc/ironic/ironic.conf.

Configure the location of the database through the `connection` option, as shown below, replacing IRONIC_DBPASS with the password of the ironic user and DB_IP with the IP address of the DB server:

```
[database]
# The SQLAlchemy connection string used to connect to the
# database (string value)
# connection = mysql+pymysql://ironic:IRONIC_DBPASS@DB_IP/ironic
connection = mysql+pymysql://ironic:IRONIC_DBPASS@controller/ironic
```

Configure the ironic-api service to use the RabbitMQ message broker through the following option, replacing RPC_* with the RabbitMQ address details and credentials:

```
[DEFAULT]
# A URL representing the messaging driver to use and its full
# configuration. (string value)
# transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
```

You can also replace RabbitMQ with json-rpc on your own.

Configure the ironic-api service with the Identity service credentials. Replace PUBLIC_IDENTITY_IP with the public IP of the identity server, PRIVATE_IDENTITY_IP with the private IP of the identity server, IRONIC_PASS with the password of the ironic user in the Identity service, and RABBIT_PASS with the password of the openstack account in RabbitMQ:
```
[DEFAULT]
# Authentication strategy used by ironic-api: one of
# "keystone" or "noauth". "noauth" should not be used in a
# production environment because all authentication will be
# disabled. (string value)
auth_strategy=keystone

host = controller
memcache_servers = controller:11211
enabled_network_interfaces = flat,noop,neutron
default_network_interface = noop
enabled_hardware_types = ipmi
enabled_boot_interfaces = pxe
enabled_deploy_interfaces = direct
default_deploy_interface = direct
enabled_inspect_interfaces = inspector
enabled_management_interfaces = ipmitool
enabled_power_interfaces = ipmitool
enabled_rescue_interfaces = no-rescue,agent
isolinux_bin = /usr/share/syslinux/isolinux.bin
logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s

[keystone_authtoken]
# Authentication type to load (string value)
auth_type=password
# Complete public Identity API endpoint (string value)
# www_authenticate_uri=http://PUBLIC_IDENTITY_IP:5000
www_authenticate_uri=http://controller:5000
# Complete admin Identity API endpoint. (string value)
# auth_url=http://PRIVATE_IDENTITY_IP:5000
auth_url=http://controller:5000
# Service username. (string value)
username=ironic
# Service account password. (string value)
password=IRONIC_PASS
# Service tenant name. (string value)
project_name=service
# Domain name containing project (string value)
project_domain_name=Default
# User's domain name (string value)
user_domain_name=Default

[agent]
deploy_logs_collect = always
deploy_logs_local_path = /var/log/ironic/deploy
deploy_logs_storage_backend = local
image_download_source = http
stream_raw_images = false
force_raw_images = false
verify_ca = False

[oslo_concurrency]

[oslo_messaging_notifications]
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
topics = notifications
driver = messagingv2

[oslo_messaging_rabbit]
amqp_durable_queues = True
rabbit_ha_queues = True

[pxe]
ipxe_enabled = false
pxe_append_params = nofb nomodeset vga=normal coreos.autologin ipa-insecure=1
image_cache_size = 204800
tftp_root=/var/lib/tftpboot/cephfs/
tftp_master_path=/var/lib/tftpboot/cephfs/master_images

[dhcp]
dhcp_provider = none
```

Create the Bare Metal service database tables:

```
ironic-dbsync --config-file /etc/ironic/ironic.conf create_schema
```

Restart the ironic-api service:

```
sudo systemctl restart openstack-ironic-api
```

Configure the ironic-conductor service. The following is the standard configuration of the ironic-conductor service itself. ironic-conductor can be deployed on a different node from ironic-api; in this guide both are deployed on the controller node, so duplicate configuration items can be skipped.

Configure my_ip with the IP of the host running the conductor service:

```
[DEFAULT]
# IP address of this host. If unset, will determine the IP
# programmatically. If unable to do so, will use "127.0.0.1".
# (string value)
# my_ip=HOST_IP
my_ip = 192.168.0.2
```

Configure the location of the database. ironic-conductor should use the same settings as ironic-api. Replace IRONIC_DBPASS with the password of the ironic user:
```
[database]
# The SQLAlchemy connection string to use to connect to the
# database. (string value)
connection = mysql+pymysql://ironic:IRONIC_DBPASS@controller/ironic
```

Configure the RabbitMQ message broker through the following option; ironic-conductor should use the same settings as ironic-api. Replace RABBIT_PASS with the password of the openstack account in RabbitMQ:

```
[DEFAULT]
# A URL representing the messaging driver to use and its full
# configuration. (string value)
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
```

You can also replace RabbitMQ with json-rpc on your own.

Configure credentials for accessing other OpenStack services. To communicate with other OpenStack services, the Bare Metal service authenticates with the OpenStack Identity service using service users when making requests. The credentials of these users must be configured in each configuration section associated with the corresponding service:

- [neutron] - to access the OpenStack Networking service
- [glance] - to access the OpenStack Image service
- [swift] - to access the OpenStack Object Storage service
- [cinder] - to access the OpenStack Block Storage service
- [inspector] - to access the OpenStack bare metal introspection service
- [service_catalog] - a special entry that stores the credentials the Bare Metal service uses to discover its own API URL endpoint as registered in the OpenStack Identity service catalog

For simplicity, the same service user can be used for all services. For backward compatibility, it should be the same user configured in the [keystone_authtoken] section of the ironic-api service. This is not mandatory, however; a different service user can be created and configured for each service.

In the example below, the authentication information for accessing the OpenStack Networking service is configured as follows:

- the Networking service is deployed in the Identity service region named RegionOne, with only the public endpoint interface registered in the service catalog;
- a specific CA SSL certificate is used for HTTPS connections when making requests;
- the same service user as configured for the ironic-api service is used;
- the dynamic password authentication plugin discovers a suitable Identity service API version based on the other options.

Replace IRONIC_PASS with the ironic user's password:
```
[neutron]
# Authentication type to load (string value)
auth_type = password
# Authentication URL (string value)
auth_url=https://IDENTITY_IP:5000/
# Username (string value)
username=ironic
# User's password (string value)
password=IRONIC_PASS
# Project name to scope to (string value)
project_name=service
# Domain ID containing project (string value)
project_domain_id=default
# User's domain id (string value)
user_domain_id=default
# PEM encoded Certificate Authority to use when verifying
# HTTPs connections. (string value)
cafile=/opt/stack/data/ca-bundle.pem
# The default region_name for endpoint URL discovery. (string
# value)
region_name = RegionOne
# List of interfaces, in order of preference, for endpoint
# URL. (list value)
valid_interfaces=public
```

Other reference configurations:

```
[glance]
endpoint_override = http://controller:9292
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
auth_type = password
username = ironic
password = IRONIC_PASS
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service

[service_catalog]
region_name = RegionOne
project_domain_id = default
user_domain_id = default
project_name = service
password = IRONIC_PASS
username = ironic
auth_url = http://controller:5000
auth_type = password
```

By default, in order to communicate with another service, the Bare Metal service tries to discover a suitable endpoint for that service through the Identity service catalog. If you want to use a different endpoint for a specific service, specify it in the Bare Metal service configuration file through the endpoint_override option:

```
[neutron]
endpoint_override =
```

Configure the allowed drivers and hardware types. Set the hardware types allowed by the ironic-conductor service through enabled_hardware_types:

```
[DEFAULT]
enabled_hardware_types = ipmi
```

Configure the hardware interfaces:

```
enabled_boot_interfaces = pxe
enabled_deploy_interfaces = direct,iscsi
enabled_inspect_interfaces = inspector
enabled_management_interfaces = ipmitool
enabled_power_interfaces = ipmitool
```

Configure the interface defaults:

```
[DEFAULT]
default_deploy_interface = direct
default_network_interface = neutron
```

If any driver that uses Direct deploy is enabled, the Swift backend of the Image service must be installed and configured. The Ceph Object Gateway (RADOS Gateway) is also supported as an Image service backend.

Restart the ironic-conductor service:

```
sudo systemctl restart openstack-ironic-conductor
```

Configure the ironic-inspector service.

Install the components:

```
dnf install openstack-ironic-inspector
```

Create the database:

```
# mysql -u root -p
MariaDB [(none)]> CREATE DATABASE ironic_inspector CHARACTER SET utf8;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic_inspector.* TO 'ironic_inspector'@'localhost' \
  IDENTIFIED BY 'IRONIC_INSPECTOR_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic_inspector.* TO 'ironic_inspector'@'%' \
  IDENTIFIED BY 'IRONIC_INSPECTOR_DBPASS';
MariaDB [(none)]> exit
Bye
```

Configure /etc/ironic-inspector/inspector.conf. Configure the location of the database through the `connection` option, as shown below, replacing IRONIC_INSPECTOR_DBPASS with the password of the ironic_inspector user:

```
[database]
backend = sqlalchemy
connection = mysql+pymysql://ironic_inspector:IRONIC_INSPECTOR_DBPASS@controller/ironic_inspector
min_pool_size = 100
max_pool_size = 500
pool_timeout = 30
max_retries = 5
max_overflow = 200
db_retry_interval = 2
db_inc_retry_interval = True
db_max_retry_interval = 2
db_max_retries = 5
```
Configure the message queue address:

```
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
```

Set up Keystone authentication:

```
[DEFAULT]
auth_strategy = keystone
timeout = 900
rootwrap_config = /etc/ironic-inspector/rootwrap.conf
logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s
log_dir = /var/log/ironic-inspector
state_path = /var/lib/ironic-inspector
use_stderr = False

[ironic]
api_endpoint = http://IRONIC_API_HOST_ADDRESS:6385
auth_type = password
auth_url = http://PUBLIC_IDENTITY_IP:5000
auth_strategy = keystone
ironic_url = http://IRONIC_API_HOST_ADDRESS:6385
os_region = RegionOne
project_name = service
project_domain_name = Default
user_domain_name = Default
username = IRONIC_SERVICE_USER_NAME
password = IRONIC_SERVICE_USER_PASSWORD

[keystone_authtoken]
auth_type = password
auth_url = http://controller:5000
www_authenticate_uri = http://controller:5000
project_domain_name = default
user_domain_name = default
project_name = service
username = ironic_inspector
password = IRONIC_INSPECTOR_PASS
region_name = RegionOne
memcache_servers = controller:11211
token_cache_time = 300

[processing]
add_ports = active
processing_hooks = $default_processing_hooks,local_link_connection,lldp_basic
ramdisk_logs_dir = /var/log/ironic-inspector/ramdisk
always_store_ramdisk_logs = true
store_data = none
power_off = false

[pxe_filter]
driver = iptables

[capabilities]
boot_mode=True
```

Configure the ironic-inspector dnsmasq service:

```
# Configuration file: /etc/ironic-inspector/dnsmasq.conf
port=0
interface=enp3s0                        # replace with the actual listening network interface
dhcp-range=192.168.0.40,192.168.0.50    # replace with the actual DHCP address range
bind-interfaces
enable-tftp
dhcp-match=set:efi,option:client-arch,7
dhcp-match=set:efi,option:client-arch,9
dhcp-match=aarch64, option:client-arch,11
dhcp-boot=tag:aarch64,grubaa64.efi
dhcp-boot=tag:!aarch64,tag:efi,grubx64.efi
dhcp-boot=tag:!aarch64,tag:!efi,pxelinux.0
tftp-root=/tftpboot                     # replace with the actual tftpboot directory
log-facility=/var/log/dnsmasq.log
```

Disable DHCP on the subnet of the ironic provision network:

```
openstack subnet set --no-dhcp 72426e89-f552-4dc4-9ac7-c4e131ce7f3c
```

Initialize the database of the ironic-inspector service:

```
ironic-inspector-dbsync --config-file /etc/ironic-inspector/inspector.conf upgrade
```

Start the services:

```
systemctl enable --now openstack-ironic-inspector.service
systemctl enable --now openstack-ironic-inspector-dnsmasq.service
```

Configure the httpd service.

Create the httpd root directory used by ironic and set its owner and group. The directory path must match the path specified by the http_root configuration item in the [deploy] section of /etc/ironic/ironic.conf.

```
mkdir -p /var/lib/ironic/httproot
chown ironic.ironic /var/lib/ironic/httproot
```

Install and configure the httpd service.

Install the httpd service (skip if already installed):

```
dnf install httpd -y
```

Create the file /etc/httpd/conf.d/openstack-ironic-httpd.conf with the following content:
```
Listen 8080
ServerName ironic.openeuler.com

ErrorLog "/var/log/httpd/openstack-ironic-httpd-error_log"
CustomLog "/var/log/httpd/openstack-ironic-httpd-access_log" "%h %l %u %t \"%r\" %>s %b"

DocumentRoot "/var/lib/ironic/httproot"
Options Indexes FollowSymLinks
Require all granted
LogLevel warn
AddDefaultCharset UTF-8
EnableSendfile on
```

Note that the listening port must match the port specified in the http_url configuration item in the [deploy] section of /etc/ironic/ironic.conf.

Restart the httpd service:

```
systemctl restart httpd
```

Downloading or building the deploy ramdisk image.

Deploying a bare metal node requires two sets of images in total: deploy ramdisk images and user images. The deploy ramdisk images run the ironic-python-agent (IPA) service, through which Ironic prepares the environment of the bare metal node. The user images are what finally gets installed on the bare metal node for the user to use.

The ramdisk image can be built with the ironic-python-agent-builder or disk-image-builder tools; you may also choose other tools. If you use the native tools, you need to install the corresponding packages.

For detailed usage, refer to the official documentation; the upstream project also provides pre-built deploy images that you can try downloading.

The following describes the complete process of building the deploy image used by Ironic with ironic-python-agent-builder.

Install ironic-python-agent-builder:

```
dnf install python3-ironic-python-agent-builder
```

or

```
pip3 install ironic-python-agent-builder
dnf install qemu-img git
```

Build the image. Basic usage:

```
usage: ironic-python-agent-builder [-h] [-r RELEASE] [-o OUTPUT] [-e ELEMENT]
                                   [-b BRANCH] [-v] [--lzma]
                                   [--extra-args EXTRA_ARGS]
                                   [--elements-path ELEMENTS_PATH]
                                   distribution

positional arguments:
  distribution          Distribution to use

options:
  -h, --help            show this help message and exit
  -r RELEASE, --release RELEASE
                        Distribution release to use
  -o OUTPUT, --output OUTPUT
                        Output base file name
  -e ELEMENT, --element ELEMENT
                        Additional DIB element to use
  -b BRANCH, --branch BRANCH
                        If set, override the branch that is used for
                        ironic-python-agent and requirements
  -v, --verbose         Enable verbose logging in diskimage-builder
  --lzma                Use lzma compression for smaller images
  --extra-args EXTRA_ARGS
                        Extra arguments to pass to diskimage-builder
  --elements-path ELEMENTS_PATH
                        Path(s) to custom DIB elements separated by a colon
```

Example:

```
# The -o option specifies the name of the generated image.
# "ubuntu" builds an Ubuntu-based image.
ironic-python-agent-builder -o my-ubuntu-ipa ubuntu
```

The architecture of the built image can be specified by setting the ARCH environment variable (amd64 by default). For the arm architecture, add:

```
export ARCH=aarch64
```

Allow SSH login. Initialize the environment variables to set the user name and password, enable passwordless sudo, and add the -e option to use the corresponding DIB elements. Build the image as follows:

```
export DIB_DEV_USER_USERNAME=ipa
export DIB_DEV_USER_PWDLESS_SUDO=yes
export DIB_DEV_USER_PASSWORD='123'
ironic-python-agent-builder -o my-ssh-ubuntu-ipa -e selinux-permissive -e devuser ubuntu
```
Specify the code repository. Initialize the corresponding environment variables and then build the image:

```
# Clone the code directly from gerrit
DIB_REPOLOCATION_ironic_python_agent=https://opendev.org/openstack/ironic-python-agent
DIB_REPOREF_ironic_python_agent=stable/2023.1

# Or specify a local repository and branch
DIB_REPOLOCATION_ironic_python_agent=/home/user/path/to/repo
DIB_REPOREF_ironic_python_agent=my-test-branch

ironic-python-agent-builder ubuntu
```

Reference: source-repositories.

Note:

The PXE configuration file template in native OpenStack does not support the arm64 architecture, so you need to modify the native OpenStack code yourself. In the Wallaby release the community's Ironic still does not support UEFI PXE boot on arm64: the generated grub.cfg file (usually located under /tftpboot/) has the wrong format, which causes PXE boot to fail. In the incorrectly generated configuration file, the kernel and ramdisk are loaded with the x86 UEFI PXE commands, whereas on the arm architecture the commands to load the vmlinux and ramdisk images should be `linux` and `initrd` respectively. You need to modify the code logic that generates grub.cfg yourself.

TLS errors when Ironic sends command-status query requests to IPA: in the current versions, both IPA and Ironic enable TLS by default when sending requests to each other; disable it as described in the official documentation.

Modify the Ironic configuration file (/etc/ironic/ironic.conf), adding ipa-insecure=1 to the following configuration:

```
[agent]
verify_ca = False

[pxe]
pxe_append_params = nofb nomodeset vga=normal coreos.autologin ipa-insecure=1
```

In the ramdisk image, add the IPA configuration file /etc/ironic_python_agent/ironic_python_agent.conf and configure TLS as follows (the /etc/ironic_python_agent directory must be created in advance):

```
[DEFAULT]
enable_auto_tls = False
```

Set the permissions:

```
chown -R ipa.ipa /etc/ironic_python_agent/
```

In the ramdisk image, modify the service startup file of the IPA service to add the configuration file option. Edit the /usr/lib/systemd/system/ironic-python-agent.service file:

```
[Unit]
Description=Ironic Python Agent
After=network-online.target

[Service]
ExecStartPre=/sbin/modprobe vfat
ExecStart=/usr/local/bin/ironic-python-agent --config-file /etc/ironic_python_agent/ironic_python_agent.conf
Restart=always
RestartSec=30s

[Install]
WantedBy=multi-user.target
```
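With the services above running and a deploy ramdisk image uploaded to Glance, a bare metal node is typically enrolled and made schedulable with the ironic CLI. The following is only an illustrative sketch; the node name, IPMI address and credentials, MAC address, and image UUIDs are placeholders for your environment:

```
# Enroll a node that uses the ipmi hardware type (all values are placeholders)
openstack baremetal node create --driver ipmi --name bm-node-01 \
  --driver-info ipmi_address=10.0.0.10 \
  --driver-info ipmi_username=admin \
  --driver-info ipmi_password=IPMI_PASS \
  --driver-info deploy_kernel=DEPLOY_KERNEL_UUID \
  --driver-info deploy_ramdisk=DEPLOY_RAMDISK_UUID

# Register the node's PXE NIC, then move the node to the "available" state
openstack baremetal port create 52:54:00:12:34:56 --node bm-node-01
openstack baremetal node manage bm-node-01
openstack baremetal node provide bm-node-01
```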
### Trove

Trove is the OpenStack database service. It is recommended if you want to use the database service provided by OpenStack; otherwise it does not need to be installed.

Controller node:

Create the database. The database service stores information in a database. Create a trove database that the trove user can access, replacing TROVE_DBPASS with a suitable password:

```
CREATE DATABASE trove CHARACTER SET utf8;
GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'localhost' IDENTIFIED BY 'TROVE_DBPASS';
GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'%' IDENTIFIED BY 'TROVE_DBPASS';
```

Create the service credentials and API endpoints.

Create the service credentials:

```
# Create the trove user
openstack user create --domain default --password-prompt trove
# Add the admin role
openstack role add --project service --user trove admin
# Create the database service
openstack service create --name trove --description "Database service" database
```

Create the API endpoints:

```
openstack endpoint create --region RegionOne database public http://controller:8779/v1.0/%\(tenant_id\)s
openstack endpoint create --region RegionOne database internal http://controller:8779/v1.0/%\(tenant_id\)s
openstack endpoint create --region RegionOne database admin http://controller:8779/v1.0/%\(tenant_id\)s
```

Install Trove:

```
dnf install openstack-trove python-troveclient
```

Modify the configuration files.

Edit /etc/trove/trove.conf:

```
[DEFAULT]
bind_host = 192.168.0.2
log_dir = /var/log/trove
network_driver = trove.network.neutron.NeutronDriver
network_label_regex = .*
management_security_groups =
nova_keypair = trove-mgmt
default_datastore = mysql
taskmanager_manager = trove.taskmanager.manager.Manager
trove_api_workers = 5
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
reboot_time_out = 300
usage_timeout = 900
agent_call_high_timeout = 1200
use_syslog = False
debug = True

[database]
connection = mysql+pymysql://trove:TROVE_DBPASS@controller/trove

[keystone_authtoken]
auth_url = http://controller:5000/v3/
auth_type = password
project_domain_name = Default
project_name = service
user_domain_name = Default
username = trove
password = TROVE_PASS

[service_credentials]
auth_url = http://controller:5000/v3/
region_name = RegionOne
project_name = service
project_domain_name = Default
user_domain_name = Default
username = trove
password = TROVE_PASS

[mariadb]
tcp_ports = 3306,4444,4567,4568

[mysql]
tcp_ports = 3306

[postgresql]
tcp_ports = 5432
```

Explanation: in the [DEFAULT] section, bind_host is the IP of the Trove controller node. transport_url is the RabbitMQ connection information; replace RABBIT_PASS with the RabbitMQ password. connection in the [database] section is the database information created for Trove in MySQL earlier. In the Trove user information, replace TROVE_PASS with the actual password of the trove user.

Edit /etc/trove/trove-guestagent.conf:

```
[DEFAULT]
log_file = trove-guestagent.log
log_dir = /var/log/trove/
ignore_users = os_admin
control_exchange = trove
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
rpc_backend = rabbit
command_process_timeout = 60
use_syslog = False
debug = True

[service_credentials]
auth_url = http://controller:5000/v3/
region_name = RegionOne
project_name = service
project_domain_name = Default
user_domain_name = Default
username = trove
password = TROVE_PASS

[mysql]
docker_image = your-registry/your-repo/mysql
backup_docker_image = your-registry/your-repo/db-backup-mysql:1.1.0
```
Explanation: guestagent is a standalone component of Trove that must be built into the VM image Trove launches through Nova in advance. After a database instance is created, the guestagent process starts and reports heartbeats to Trove through the message queue (RabbitMQ), so the RabbitMQ user and password must be configured. transport_url is the RabbitMQ connection information; replace RABBIT_PASS with the RabbitMQ password. In the Trove user information, replace TROVE_PASS with the actual password of the trove user. Starting from the Victoria release, Trove uses a single unified image to run different types of databases; the database service runs in a Docker container inside the guest VM.

Synchronize the database:

```
su -s /bin/sh -c "trove-manage db_sync" trove
```

Complete the installation:

```
# Enable the services to start on boot
systemctl enable openstack-trove-api.service openstack-trove-taskmanager.service \
  openstack-trove-conductor.service
# Start the services
systemctl start openstack-trove-api.service openstack-trove-taskmanager.service \
  openstack-trove-conductor.service
```
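Once a guest image containing the guest agent has been uploaded to Glance, a datastore version is usually registered before instances can be created. The following is only an illustrative sketch; the image ID is a placeholder, and the exact trove-manage arguments may differ between releases:

```
# Register the guest image as a MySQL datastore version (argument order may vary by release)
su -s /bin/sh -c "trove-manage datastore_update mysql ''" trove
su -s /bin/sh -c "trove-manage datastore_version_update mysql 5.7 mysql GLANCE_IMAGE_ID '' 1" trove

# Confirm the datastore is visible; database instances can then be created from it
openstack datastore list
openstack database instance list
```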
### Swift

Swift provides elastic, scalable, and highly available distributed object storage, suitable for storing large amounts of unstructured data.

Controller node:

Create the service credentials and API endpoints.

Create the service credentials:

```
# Create the swift user
openstack user create --domain default --password-prompt swift
# Add the admin role
openstack role add --project service --user swift admin
# Create the object storage service
openstack service create --name swift --description "OpenStack Object Storage" object-store
```

Create the API endpoints:

```
openstack endpoint create --region RegionOne object-store public http://controller:8080/v1/AUTH_%\(project_id\)s
openstack endpoint create --region RegionOne object-store internal http://controller:8080/v1/AUTH_%\(project_id\)s
openstack endpoint create --region RegionOne object-store admin http://controller:8080/v1
```

Install Swift:

```
dnf install openstack-swift-proxy python3-swiftclient python3-keystoneclient \
  python3-keystonemiddleware memcached
```

Configure proxy-server. The Swift RPM package already contains a basically usable proxy-server.conf; you only need to modify the IP and SWIFT_PASS in it.

```
vim /etc/swift/proxy-server.conf

[filter:authtoken]
paste.filter_factory = keystonemiddleware.auth_token:filter_factory
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = swift
password = SWIFT_PASS
delay_auth_decision = True
service_token_roles_required = True
```

Storage node:

Install the supporting packages:

```
dnf install openstack-swift-account openstack-swift-container openstack-swift-object
dnf install xfsprogs rsync
```

Format the /dev/sdb and /dev/sdc devices as XFS:

```
mkfs.xfs /dev/sdb
mkfs.xfs /dev/sdc
```

Create the mount point directory structure:

```
mkdir -p /srv/node/sdb
mkdir -p /srv/node/sdc
```

Find the UUIDs of the new partitions:

```
blkid
```

Edit the /etc/fstab file and add the following to it:

```
UUID="" /srv/node/sdb xfs noatime 0 2
UUID="" /srv/node/sdc xfs noatime 0 2
```

Mount the devices:

```
mount /srv/node/sdb
mount /srv/node/sdc
```

Note: if you do not need disaster recovery, the steps above only need one device, and the rsync configuration below can be skipped.

(Optional) Create or edit the /etc/rsyncd.conf file to contain the following:

```
[DEFAULT]
uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
address = MANAGEMENT_INTERFACE_IP_ADDRESS

[account]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/account.lock

[container]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/container.lock

[object]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/object.lock
```

Replace MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network on the storage node.

Start the rsyncd service and configure it to start when the system boots:

```
systemctl enable rsyncd.service
systemctl start rsyncd.service
```

Configure the storage node. Edit the account-server.conf, container-server.conf, and object-server.conf files in the /etc/swift directory, replacing bind_ip with the IP address of the management network on the storage node:

```
[DEFAULT]
bind_ip = 192.168.0.4
```

Ensure correct ownership of the mount point directory structure:

```
chown -R swift:swift /srv/node
```

Create the recon directory and ensure it has correct ownership:

```
mkdir -p /var/cache/swift
chown -R root:swift /var/cache/swift
chmod -R 775 /var/cache/swift
```

Create and distribute the rings on the controller node.

Create the account ring. Change to the /etc/swift directory:

```
cd /etc/swift
```

Create the base account.builder file:

```
swift-ring-builder account.builder create 10 1 1
```

Add each storage node to the ring:

```
swift-ring-builder account.builder add --region 1 --zone 1 \
  --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS \
  --port 6202 --device DEVICE_NAME \
  --weight 100
```

Replace STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network on the storage node. Replace DEVICE_NAME with the name of the storage device on the same storage node.

Note: repeat this command for every storage device on every storage node.

Verify the account ring contents:

```
swift-ring-builder account.builder
```

Rebalance the account ring:

```
swift-ring-builder account.builder rebalance
```
Create the container ring. Change to the /etc/swift directory. Create the base container.builder file:

```
swift-ring-builder container.builder create 10 1 1
```

Add each storage node to the ring:

```
swift-ring-builder container.builder add --region 1 --zone 1 \
  --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6201 --device DEVICE_NAME \
  --weight 100
```

Replace STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network on the storage node. Replace DEVICE_NAME with the name of the storage device on the same storage node.

Note: repeat this command for every storage device on every storage node.

Verify the container ring contents:

```
swift-ring-builder container.builder
```

Rebalance the container ring:

```
swift-ring-builder container.builder rebalance
```

Create the object ring. Change to the /etc/swift directory. Create the base object.builder file:

```
swift-ring-builder object.builder create 10 1 1
```

Add each storage node to the ring:

```
swift-ring-builder object.builder add --region 1 --zone 1 \
  --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS \
  --port 6200 --device DEVICE_NAME \
  --weight 100
```

Replace STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network on the storage node. Replace DEVICE_NAME with the name of the storage device on the same storage node.

Note: repeat this command for every storage device on every storage node.

Verify the object ring contents:

```
swift-ring-builder object.builder
```

Rebalance the object ring:

```
swift-ring-builder object.builder rebalance
```

Distribute the ring configuration files. Copy the account.ring.gz, container.ring.gz, and object.ring.gz files to the /etc/swift directory on every storage node and on any other node running the proxy service.

Edit the configuration file /etc/swift/swift.conf:

```
[swift-hash]
swift_hash_path_suffix = test-hash
swift_hash_path_prefix = test-hash

[storage-policy:0]
name = Policy-0
default = yes
```

Replace test-hash with unique values.

Copy the swift.conf file to the /etc/swift directory on every storage node and on any other node running the proxy service.

On all nodes, ensure correct ownership of the configuration directory:

```
chown -R root:swift /etc/swift
```

Complete the installation.

On the controller node and on any other node running the proxy service, start the object storage proxy service and its dependencies, and configure them to start when the system boots:

```
systemctl enable openstack-swift-proxy.service memcached.service
systemctl start openstack-swift-proxy.service memcached.service
```

On the storage nodes, start the object storage services and configure them to start when the system boots:
```
systemctl enable openstack-swift-account.service \
  openstack-swift-account-auditor.service \
  openstack-swift-account-reaper.service \
  openstack-swift-account-replicator.service \
  openstack-swift-container.service \
  openstack-swift-container-auditor.service \
  openstack-swift-container-replicator.service \
  openstack-swift-container-updater.service \
  openstack-swift-object.service \
  openstack-swift-object-auditor.service \
  openstack-swift-object-replicator.service \
  openstack-swift-object-updater.service

systemctl start openstack-swift-account.service \
  openstack-swift-account-auditor.service \
  openstack-swift-account-reaper.service \
  openstack-swift-account-replicator.service \
  openstack-swift-container.service \
  openstack-swift-container-auditor.service \
  openstack-swift-container-replicator.service \
  openstack-swift-container-updater.service \
  openstack-swift-object.service \
  openstack-swift-object-auditor.service \
  openstack-swift-object-replicator.service \
  openstack-swift-object-updater.service
```
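As an optional check, assuming the proxy endpoints above are reachable and admin credentials are loaded, uploading and listing a small object exercises the proxy, the rings, and the storage services end to end; the container and file names here are arbitrary examples:

```
source ~/.admin-openrc
# Create a container, upload a tiny object, and list it back
openstack container create demo-container
echo "hello swift" > /tmp/hello.txt
openstack object create demo-container /tmp/hello.txt
openstack object list demo-container
```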
### Cyborg

Cyborg provides support for accelerator devices in OpenStack, including GPU, FPGA, ASIC, NP, SoCs, NVMe/NOF SSDs, ODP, DPDK/SPDK, and so on.

Controller node:

Initialize the corresponding database:

```
mysql -u root -p
MariaDB [(none)]> CREATE DATABASE cyborg;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'localhost' IDENTIFIED BY 'CYBORG_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'%' IDENTIFIED BY 'CYBORG_DBPASS';
MariaDB [(none)]> exit;
```

Create the user and the service, and remember the password entered when creating the cyborg user; it is used to configure CYBORG_PASS.

```
source ~/.admin-openrc
openstack user create --domain default --password-prompt cyborg
openstack role add --project service --user cyborg admin
openstack service create --name cyborg --description "Acceleration Service" accelerator
```

Deploy the Cyborg API service with uwsgi:

```
openstack endpoint create --region RegionOne accelerator public http://controller/accelerator/v2
openstack endpoint create --region RegionOne accelerator internal http://controller/accelerator/v2
openstack endpoint create --region RegionOne accelerator admin http://controller/accelerator/v2
```

Install Cyborg:

```
dnf install openstack-cyborg
```

Configure Cyborg. Modify /etc/cyborg/cyborg.conf:

```
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
use_syslog = False
state_path = /var/lib/cyborg
debug = True

[api]
host_ip = 0.0.0.0

[database]
connection = mysql+pymysql://cyborg:CYBORG_DBPASS@controller/cyborg

[service_catalog]
cafile = /opt/stack/data/ca-bundle.pem
project_domain_id = default
user_domain_id = default
project_name = service
username = cyborg
password = CYBORG_PASS
auth_url = http://controller:5000/v3/
auth_type = password

[placement]
project_domain_name = Default
project_name = service
user_domain_name = Default
username = placement
password = PLACEMENT_PASS
auth_url = http://controller:5000/v3/
auth_type = password
auth_section = keystone_authtoken

[nova]
project_domain_name = Default
project_name = service
user_domain_name = Default
username = nova
password = NOVA_PASS
auth_url = http://controller:5000/v3/
auth_type = password
auth_section = keystone_authtoken

[keystone_authtoken]
memcached_servers = localhost:11211
signing_dir = /var/cache/cyborg/api
cafile = /opt/stack/data/ca-bundle.pem
project_domain_name = Default
project_name = service
user_domain_name = Default
username = cyborg
password = CYBORG_PASS
auth_url = http://controller:5000/v3/
auth_type = password
```

Synchronize the database tables:

```
cyborg-dbsync --config-file /etc/cyborg/cyborg.conf upgrade
```

Start the Cyborg services:

```
systemctl enable openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent
systemctl start openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent
```
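If an accelerator driver has been enabled on a compute node, cyborg-agent reports the discovered devices, which can then be listed from the controller. This is only a minimal sketch, assuming the python-cyborgclient OpenStack CLI plugin is available:

```
source ~/.admin-openrc
# List accelerator devices reported by cyborg-agent
# (an empty list simply means no accelerator hardware has been discovered yet)
openstack accelerator device list
openstack accelerator device profile list
```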
### Aodh

Aodh can create alarms based on the monitoring data collected by Ceilometer or Gnocchi and set trigger rules.

Controller node:

Create the database:

```
CREATE DATABASE aodh;
GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'localhost' IDENTIFIED BY 'AODH_DBPASS';
GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'%' IDENTIFIED BY 'AODH_DBPASS';
```

Create the service credentials and API endpoints.

Create the service credentials:

```
openstack user create --domain default --password-prompt aodh
openstack role add --project service --user aodh admin
openstack service create --name aodh --description "Telemetry" alarming
```

Create the API endpoints:

```
openstack endpoint create --region RegionOne alarming public http://controller:8042
openstack endpoint create --region RegionOne alarming internal http://controller:8042
openstack endpoint create --region RegionOne alarming admin http://controller:8042
```

Install Aodh:

```
dnf install openstack-aodh-api openstack-aodh-evaluator \
  openstack-aodh-notifier openstack-aodh-listener \
  openstack-aodh-expirer python3-aodhclient
```

Modify the configuration file:

```
vim /etc/aodh/aodh.conf

[database]
connection = mysql+pymysql://aodh:AODH_DBPASS@controller/aodh

[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = aodh
password = AODH_PASS

[service_credentials]
auth_type = password
auth_url = http://controller:5000/v3
project_domain_id = default
user_domain_id = default
project_name = service
username = aodh
password = AODH_PASS
interface = internalURL
region_name = RegionOne
```

Synchronize the database:

```
aodh-dbsync
```

Complete the installation:

```
# Enable the services to start on boot
systemctl enable openstack-aodh-api.service openstack-aodh-evaluator.service \
  openstack-aodh-notifier.service openstack-aodh-listener.service
# Start the services
systemctl start openstack-aodh-api.service openstack-aodh-evaluator.service \
  openstack-aodh-notifier.service openstack-aodh-listener.service
```

### Gnocchi

Gnocchi is an open source time series database that can be used as a backend for Ceilometer.

Controller node:

Create the database:

```
CREATE DATABASE gnocchi;
GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'localhost' IDENTIFIED BY 'GNOCCHI_DBPASS';
GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'%' IDENTIFIED BY 'GNOCCHI_DBPASS';
```

Create the service credentials and API endpoints.

Create the service credentials:

```
openstack user create --domain default --password-prompt gnocchi
openstack role add --project service --user gnocchi admin
openstack service create --name gnocchi --description "Metric Service" metric
```

Create the API endpoints:

```
openstack endpoint create --region RegionOne metric public http://controller:8041
openstack endpoint create --region RegionOne metric internal http://controller:8041
openstack endpoint create --region RegionOne metric admin http://controller:8041
```

Install Gnocchi:

```
dnf install openstack-gnocchi-api openstack-gnocchi-metricd python3-gnocchiclient
```

Modify the configuration file:

```
vim /etc/gnocchi/gnocchi.conf

[api]
auth_mode = keystone
port = 8041
uwsgi_mode = http-socket

[keystone_authtoken]
auth_type = password
auth_url = http://controller:5000/v3
project_domain_name = Default
user_domain_name = Default
project_name = service
username = gnocchi
password = GNOCCHI_PASS
interface = internalURL
region_name = RegionOne

[indexer]
url = mysql+pymysql://gnocchi:GNOCCHI_DBPASS@controller/gnocchi

[storage]
# coordination_url is not required but specifying one will improve
# performance with better workload division across workers.
# coordination_url = redis://controller:6379
file_basepath = /var/lib/gnocchi
driver = file
```

Synchronize the database:

```
gnocchi-upgrade
```

Complete the installation:

```
# Enable the services to start on boot
systemctl enable openstack-gnocchi-api.service openstack-gnocchi-metricd.service
# Start the services
systemctl start openstack-gnocchi-api.service openstack-gnocchi-metricd.service
```
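A quick way to confirm that the metric API and gnocchi-metricd are healthy is the `gnocchi` CLI that ships with python3-gnocchiclient. This is only a minimal check; the metric list stays empty until Ceilometer starts publishing measures:

```
source ~/.admin-openrc
# Show the measure-processing backlog; a reachable API and a near-zero backlog indicate a healthy setup
gnocchi status
gnocchi metric list
```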
### Ceilometer

Ceilometer is the data collection service in OpenStack.

Controller node:

Create the service credentials:

```
openstack user create --domain default --password-prompt ceilometer
openstack role add --project service --user ceilometer admin
openstack service create --name ceilometer --description "Telemetry" metering
```

Install the Ceilometer packages:

```
dnf install openstack-ceilometer-notification openstack-ceilometer-central
```

Edit the configuration file /etc/ceilometer/pipeline.yaml:

```
publishers:
    # set address of Gnocchi
    # + filter out Gnocchi-related activity meters (Swift driver)
    # + set default archive policy
    - gnocchi://?filter_project=service&archive_policy=low
```

Edit the configuration file /etc/ceilometer/ceilometer.conf:

```
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller

[service_credentials]
auth_type = password
auth_url = http://controller:5000/v3
project_domain_id = default
user_domain_id = default
project_name = service
username = ceilometer
password = CEILOMETER_PASS
interface = internalURL
region_name = RegionOne
```

Synchronize the database:

```
ceilometer-upgrade
```

Complete the Ceilometer installation on the controller node:

```
# Enable the services to start on boot
systemctl enable openstack-ceilometer-notification.service openstack-ceilometer-central.service
# Start the services
systemctl start openstack-ceilometer-notification.service openstack-ceilometer-central.service
```

Compute node:

Install the Ceilometer packages:

```
dnf install openstack-ceilometer-compute
dnf install openstack-ceilometer-ipmi   # optional
```

Edit the configuration file /etc/ceilometer/ceilometer.conf:

```
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller

[service_credentials]
auth_url = http://controller:5000
project_domain_id = default
user_domain_id = default
auth_type = password
username = ceilometer
project_name = service
password = CEILOMETER_PASS
interface = internalURL
region_name = RegionOne
```

Edit the configuration file /etc/nova/nova.conf:

```
[DEFAULT]
instance_usage_audit = True
instance_usage_audit_period = hour

[notifications]
notify_on_state_change = vm_and_task_state

[oslo_messaging_notifications]
driver = messagingv2
```

Complete the installation:

```
systemctl enable openstack-ceilometer-compute.service
systemctl start openstack-ceilometer-compute.service
systemctl enable openstack-ceilometer-ipmi.service   # optional
systemctl start openstack-ceilometer-ipmi.service    # optional
# Restart the nova-compute service
systemctl restart openstack-nova-compute.service
```
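With Ceilometer publishing measures into Gnocchi and Aodh running, alarms can be defined on those metrics. The following is only an illustrative sketch; the metric name, threshold, and resource ID are placeholders, and the exact alarm type and options available depend on your release:

```
source ~/.admin-openrc
# Alarm when the mean CPU metric of one instance exceeds a threshold (all values are placeholders)
openstack alarm create --name demo-high-cpu \
  --type gnocchi_resources_threshold \
  --metric cpu_util --threshold 80 --aggregation-method mean \
  --resource-type instance --resource-id INSTANCE_UUID \
  --comparison-operator gt --granularity 300
openstack alarm list
```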
### Heat

Heat is the OpenStack orchestration service, also known as the Orchestration Service. It orchestrates composite cloud applications based on descriptive templates. The Heat services are generally installed on the controller node.

Controller node:

Create the heat database and grant it the correct access privileges, replacing HEAT_DBPASS with a suitable password:

```
mysql -u root -p
MariaDB [(none)]> CREATE DATABASE heat;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'localhost' IDENTIFIED BY 'HEAT_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'%' IDENTIFIED BY 'HEAT_DBPASS';
MariaDB [(none)]> exit;
```

Create the service credentials: create the heat user and add the admin role to it:

```
source ~/.admin-openrc
openstack user create --domain default --password-prompt heat
openstack role add --project service --user heat admin
```

Create the heat and heat-cfn services and their corresponding API endpoints:

```
openstack service create --name heat --description "Orchestration" orchestration
openstack service create --name heat-cfn --description "Orchestration" cloudformation
openstack endpoint create --region RegionOne orchestration public http://controller:8004/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne orchestration internal http://controller:8004/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne orchestration admin http://controller:8004/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne cloudformation public http://controller:8000/v1
openstack endpoint create --region RegionOne cloudformation internal http://controller:8000/v1
openstack endpoint create --region RegionOne cloudformation admin http://controller:8000/v1
```

Create additional information for stack management.

Create the heat domain:

```
openstack domain create --description "Stack projects and users" heat
```

Create the heat_domain_admin user in the heat domain and note the password you enter; it is used for HEAT_DOMAIN_PASS below:

```
openstack user create --domain heat --password-prompt heat_domain_admin
```

Add the admin role to the heat_domain_admin user:

```
openstack role add --domain heat --user-domain heat --user heat_domain_admin admin
```

Create the heat_stack_owner role:

```
openstack role create heat_stack_owner
```

Create the heat_stack_user role:

```
openstack role create heat_stack_user
```

Install the packages:

```
dnf install openstack-heat-api openstack-heat-api-cfn openstack-heat-engine
```

Modify the configuration file /etc/heat/heat.conf:

```
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
heat_metadata_server_url = http://controller:8000
heat_waitcondition_server_url = http://controller:8000/v1/waitcondition
stack_domain_admin = heat_domain_admin
stack_domain_admin_password = HEAT_DOMAIN_PASS
stack_user_domain_name = heat

[database]
connection = mysql+pymysql://heat:HEAT_DBPASS@controller/heat

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = heat
password = HEAT_PASS

[trustee]
auth_type = password
auth_url = http://controller:5000
username = heat
password = HEAT_PASS
user_domain_name = default

[clients_keystone]
auth_uri = http://controller:5000
```

Initialize the heat database tables:

```
su -s /bin/sh -c "heat-manage db_sync" heat
```

Start the services:

```
systemctl enable openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service
systemctl start openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service
```
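As an optional check, a trivial HOT template that creates no real resources can confirm that heat-api and heat-engine work together; the file name and stack name below are arbitrary examples:

```
cat > /tmp/test-stack.yaml << 'EOF'
heat_template_version: 2018-08-31
description: Minimal smoke-test template
resources:
  hello:
    type: OS::Heat::None
outputs:
  message:
    value: "heat is working"
EOF

openstack stack create -t /tmp/test-stack.yaml demo-stack
openstack stack list
openstack stack delete --yes demo-stack
```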
Deploying with the OpenStack SIG development tool oos ¶
oos (openEuler OpenStack SIG) is the command-line tool provided by the OpenStack SIG. Its oos env sub-commands provide Ansible scripts for one-click deployment of OpenStack (all in one, or a three-node cluster), so that users can quickly deploy an OpenStack environment based on openEuler RPMs. The oos tool can deploy an OpenStack environment either through a cloud provider (currently only the Huawei Cloud provider is supported) or by managing existing hosts. The following uses an all-in-one deployment on Huawei Cloud as an example of how to use oos.
Install the oos tool:
yum install openstack-sig-tool
Configure the Huawei Cloud provider information.
Open the /usr/local/etc/oos/oos.conf file and change the configuration to match the Huawei Cloud resources you own. AK/SK is your Huawei Cloud access key pair; the other settings can be left at their defaults (the Singapore region is used by default). The following resources must be created on the cloud in advance:
a security group, named oos by default;
an openEuler image, whose name follows the format openEuler-%(release)s-%(arch)s, for example openEuler-24.03-sp1-arm64;
a VPC named oos_vpc;
two subnets under that VPC, named oos_subnet1 and oos_subnet2.
[huaweicloud]
ak =
sk =
region = ap-southeast-3
root_volume_size = 100
data_volume_size = 100
security_group_name = oos
image_format = openEuler-%%(release)s-%%(arch)s
vpc_name = oos_vpc
subnet1_name = oos_subnet1
subnet2_name = oos_subnet2
Configure the OpenStack environment information.
Open the /usr/local/etc/oos/oos.conf file and adjust the configuration to the current machine environment and your needs. The content is as follows:
[environment]
mysql_root_password = root
mysql_project_password = root
rabbitmq_password = root
project_identity_password = root
enabled_service = keystone,neutron,cinder,placement,nova,glance,horizon,aodh,ceilometer,cyborg,gnocchi,kolla,heat,swift,trove,tempest
neutron_provider_interface_name = br-ex
default_ext_subnet_range = 10.100.100.0/24
default_ext_subnet_gateway = 10.100.100.1
neutron_dataplane_interface_name = eth1
cinder_block_device = vdb
swift_storage_devices = vdc
swift_hash_path_suffix = ash
swift_hash_path_prefix = has
glance_api_workers = 2
cinder_api_workers = 2
nova_api_workers = 2
nova_metadata_api_workers = 2
nova_conductor_workers = 2
nova_scheduler_workers = 2
neutron_api_workers = 2
horizon_allowed_host = *
kolla_openeuler_plugin = false
Key options:
enabled_service: the list of services to install; trim it to your needs.
neutron_provider_interface_name: the name of the neutron L3 bridge.
default_ext_subnet_range: the neutron private network IP range.
default_ext_subnet_gateway: the neutron private network gateway.
neutron_dataplane_interface_name: the NIC used by neutron; a dedicated new NIC is recommended to avoid conflicts with the existing NIC and to prevent the all-in-one host from losing connectivity.
cinder_block_device: the name of the volume device used by cinder.
swift_storage_devices: the name of the volume device used by swift.
kolla_openeuler_plugin: whether to enable the kolla plugin; when set to True, kolla can deploy openEuler containers (supported only on openEuler LTS).
Create an openEuler 24.03 LTS SP1 x86_64 virtual machine on Huawei Cloud for the all-in-one OpenStack deployment:
# sshpass is used by `oos env create` to set up passwordless access to the target virtual machine
dnf install sshpass
oos env create -r 24.03-lts-sp1 -f small -a x86 -n test-oos all_in_one
The full list of parameters can be viewed with the oos env create --help command.
Deploy the OpenStack all-in-one environment:
oos env setup test-oos -r antelope
The full list of parameters can be viewed with the oos env setup --help command.
Initialize the tempest environment. If you want to run tempest tests against this environment, run oos env init, which automatically creates the OpenStack resources that tempest needs:
oos env init test-oos
Run tempest tests. You can let oos run them automatically:
oos env test test-oos
or log in to the target node manually, enter the mytest directory under the root directory, and run tempest run yourself.
If you deploy the OpenStack environment by managing an existing host instead, the overall flow is the same as for Huawei Cloud above: steps 1, 3, 5 and 6 are unchanged, step 2 (configuring the Huawei Cloud provider information) is skipped, and step 4 becomes a host-management operation.
The managed virtual machine must have:
at least one NIC for oos to use, with a name matching the configuration (see neutron_dataplane_interface_name);
at least one disk for oos to use, with a name matching the configuration (see cinder_block_device);
an additional disk if the swift service is to be deployed, with a name matching the configuration (see swift_storage_devices).
# sshpass is used by `oos env create` to set up passwordless access to the target host
dnf install sshpass
oos env manage -r 24.03-lts-sp1 -i TARGET_MACHINE_IP -p TARGET_MACHINE_PASSWD -n test-oos
Replace TARGET_MACHINE_IP with the target machine's IP and TARGET_MACHINE_PASSWD with the target machine's password. The full list of parameters can be viewed with the oos env manage --help command.
","title":"openEuler-24.03-LTS-SP1_Antelope"},{"location":"install/openEuler-24.03-LTS-SP1/OpenStack-antelope/#openstack-antelope","text":"
OpenStack Antelope Deployment Guide
RPM-based deployment: environment preparation, clock synchronization, installing the database, installing the message queue, installing the cache service, deploying the services (Keystone, Glance, Placement, Nova, Neutron, Cinder, Horizon, Ironic, Trove, Swift, Cyborg, Aodh, Gnocchi, Ceilometer, Heat, Tempest).
Deploying with the OpenStack SIG development tool oos.
This document is the OpenStack deployment guide for openEuler 24.03 LTS SP1 written by the openEuler OpenStack SIG; its content is provided by SIG contributors. If you have any questions or find any problems while reading it, please contact the SIG maintainers or submit an issue directly.
Conventions
This section describes some general conventions used throughout the document. The placeholders and their definitions are:
RABBIT_PASS: the rabbitmq password, set by the user, used in the configuration of every OpenStack service.
CINDER_PASS: the password of the cinder service's keystone user, used in the cinder configuration.
CINDER_DBPASS: the cinder database password, used in the cinder configuration.
KEYSTONE_DBPASS: the keystone database password, used in the keystone configuration.
GLANCE_PASS: the password of the glance service's keystone user, used in the glance configuration.
GLANCE_DBPASS: the glance database password, used in the glance configuration.
HEAT_PASS: the password of the heat user registered in keystone, used in the heat configuration.
HEAT_DBPASS: the heat database password, used in the heat configuration.
CYBORG_PASS: the password of the cyborg user registered in keystone, used in the cyborg configuration.
CYBORG_DBPASS: the cyborg database password, used in the cyborg configuration.
NEUTRON_PASS: the password of the neutron user registered in keystone, used in the neutron configuration.
NEUTRON_DBPASS: the neutron database password, used in the neutron configuration.
PROVIDER_INTERFACE_NAME: the name of the physical network interface, used in the neutron configuration.
OVERLAY_INTERFACE_IP_ADDRESS: the management IP address of the Controller node, used in the neutron configuration.
METADATA_SECRET: the secret for the metadata proxy, used in the nova and neutron configuration.
PLACEMENT_DBPASS: the placement database password, used in the placement configuration.
PLACEMENT_PASS: the password of the placement user registered in keystone, used in the placement configuration.
NOVA_DBPASS: the nova database password, used in the nova configuration.
NOVA_PASS: the password of the nova user registered in keystone, used in the nova, cyborg, neutron and related configurations.
IRONIC_DBPASS: the ironic database password, used in the ironic configuration.
IRONIC_PASS: the password of the ironic user registered in keystone, used in the ironic configuration.
IRONIC_INSPECTOR_DBPASS: the ironic-inspector database password, used in the ironic-inspector configuration.
IRONIC_INSPECTOR_PASS: the password of the ironic-inspector user registered in keystone, used in the ironic-inspector configuration.
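All of these placeholders are values you choose yourself. As a suggestion that is not part of the original conventions, a random value can be generated for each of them, for example:
openssl rand -hex 10
Whatever values you pick, keep a record of them; the same placeholder must be replaced with the same value everywhere it appears in the configuration files below.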
The OpenStack SIG provides several ways to deploy OpenStack on openEuler to cover different user scenarios; choose the one that fits your needs.
","title":"OpenStack Antelope Deployment Guide"},{"location":"install/openEuler-24.03-LTS-SP1/OpenStack-antelope/#rpm","text":"","title":"RPM-based deployment"},{"location":"install/openEuler-24.03-LTS-SP1/OpenStack-antelope/#_1","text":"
This document deploys OpenStack on the classic three-node topology: a control node (Controller), a compute node (Compute) and a storage node (Storage). The storage node usually runs only the storage services; when resources are limited it does not have to be deployed separately, and its services can be placed on the compute node instead.
First prepare three openEuler 24.03 LTS SP1 environments; download the image that matches your environment and install it: the ISO image or the qcow2 image.
The installation below uses the following topology:
controller: 192.168.0.2
compute: 192.168.0.3
storage: 192.168.0.4
If the IP addresses in your environment differ, adjust the corresponding configuration files accordingly.
The three-node service topology used by this document is shown in the accompanying figure (it covers only the core services Keystone, Glance, Nova, Cinder and Neutron; refer to the individual deployment sections for the other services).
Before the actual deployment, perform the following configuration and checks on every node:
Configure the official openEuler 24.03 LTS SP1 yum repositories; the EPOL repository must be enabled to provide the OpenStack packages:
yum update
yum install openstack-release-antelope
yum clean all && yum makecache
Note: if EPOL is not enabled in your YUM configuration, configure it as well and make sure it is present, as shown below.
vi /etc/yum.repos.d/openEuler.repo
[EPOL]
name=EPOL
baseurl=http://repo.openeuler.org/openEuler-24.03-LTS-SP1/EPOL/main/$basearch/
enabled=1
gpgcheck=1
gpgkey=http://repo.openeuler.org/openEuler-24.03-LTS-SP1/OS/$basearch/RPM-GPG-KEY-openEuler
Set the hostname and the name mapping.
Set the hostname on each node; taking controller as an example:
hostnamectl set-hostname controller
vi /etc/hostname # change the content to controller
Then edit the /etc/hosts file on every node and add the following entries:
192.168.0.2 controller
192.168.0.3 compute
192.168.0.4 storage
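A quick sanity check of the name resolution configured above (a small sketch; run it on each node):
getent hosts controller compute storage
ping -c 1 controller
ping -c 1 compute
ping -c 1 storage
Each name should resolve to the IP address listed in /etc/hosts.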
","title":"Environment preparation"},{"location":"install/openEuler-24.03-LTS-SP1/OpenStack-antelope/#_2","text":"
In a cluster environment the time on all nodes must stay consistent at all times, which is normally guaranteed by time-synchronization software. This document uses chrony. The steps are as follows:
Controller node:
Install the service:
dnf install chrony
Edit the /etc/chrony.conf configuration file and add the following line:
# specifies which IPs are allowed to synchronize their clocks from this node
allow 192.168.0.0/24
Restart the service:
systemctl restart chronyd
Other nodes:
Install the service:
dnf install chrony
Edit the /etc/chrony.conf configuration file and add the following line:
# NTP_SERVER is the controller IP, meaning time is obtained from that machine; use 192.168.0.2 here, or the controller name already configured in `/etc/hosts`.
server NTP_SERVER iburst
At the same time, comment out the line pool pool.ntp.org iburst so that the clock is not synchronized from the public internet.
Restart the service:
systemctl restart chronyd
After the configuration is done, check the result: on the non-controller nodes run chronyc sources. Output similar to the following indicates the clock is being synchronized from the controller successfully.
MS Name/IP address Stratum Poll Reach LastRx Last sample
===============================================================================
^* 192.168.0.2 4 6 7 0 -1406ns[ +55us] +/- 16ms
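As optional extra checks that are not part of the original steps, on the controller itself you can confirm its own clock state and, running as root, see which nodes are pulling time from it:
chronyc tracking
chronyc clients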
","title":"Clock synchronization"},{"location":"install/openEuler-24.03-LTS-SP1/OpenStack-antelope/#_3","text":"
The database is installed on the control node; mariadb is recommended here.
Install the packages:
dnf install mariadb-config mariadb mariadb-server python3-PyMySQL
Add the configuration file /etc/my.cnf.d/openstack.cnf with the following content:
[mysqld]
bind-address = 192.168.0.2
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
Start the server:
systemctl start mariadb
Initialize the database, following the prompts:
mysql_secure_installation
An example session:
NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MariaDB SERVERS IN PRODUCTION USE! PLEASE READ EACH STEP CAREFULLY!
In order to log into MariaDB to secure it, we'll need the current password for the root user. If you've just installed MariaDB, and haven't set the root password yet, you should just press enter here.
Enter current password for root (enter for none): # enter the password here; since we are initializing the DB, just press Enter
OK, successfully used password, moving on...
Setting the root password or using the unix_socket ensures that nobody can log into the MariaDB root user without the proper authorisation.
You already have your root account protected, so you can safely answer 'n'.
# enter N here, as the prompt suggests
Switch to unix_socket authentication [Y/n] N
Enabled successfully!
Reloading privilege tables..
... Success!
You already have your root account protected, so you can safely answer 'n'.
# enter Y to change the root password
Change the root password? [Y/n] Y
New password:
Re-enter new password:
Password updated successfully!
Reloading privilege tables..
... Success!
By default, a MariaDB installation has an anonymous user, allowing anyone to log into MariaDB without having to have a user account created for them. This is intended only for testing, and to make the installation go a bit smoother. You should remove them before moving into a production environment.
# enter Y to remove the anonymous users
Remove anonymous users? [Y/n] Y
... Success!
Normally, root should only be allowed to connect from 'localhost'. This ensures that someone cannot guess at the root password from the network.
# enter Y to disable remote root login
Disallow root login remotely? [Y/n] Y
... Success!
By default, MariaDB comes with a database named 'test' that anyone can access. This is also intended only for testing, and should be removed before moving into a production environment.
# enter Y to remove the test database
Remove test database and access to it? [Y/n] Y
- Dropping test database...
... Success!
- Removing privileges on test database...
... Success!
Reloading the privilege tables will ensure that all changes made so far will take effect immediately.
# enter Y to reload the configuration
Reload privilege tables now? [Y/n] Y
... Success!
Cleaning up...
All done! If you've completed all of the above steps, your MariaDB installation should now be secure.
Verify: using the password set in step 4, check that you can log in to mariadb:
mysql -uroot -p
","title":"Installing the database"},{"location":"install/openEuler-24.03-LTS-SP1/OpenStack-antelope/#_4","text":"
The message queue is installed on the control node; rabbitmq is recommended here.
Install the packages:
dnf install rabbitmq-server
Start the service:
systemctl start rabbitmq-server
Configure the openstack user. RABBIT_PASS is the password the OpenStack services use to log in to the message queue, and it must be consistent with the configuration of each service later on:
rabbitmqctl add_user openstack RABBIT_PASS
rabbitmqctl set_permissions openstack ".*" ".*" ".*"
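To confirm that the openstack account and its permissions were created as expected, the standard rabbitmqctl listing commands can be used (a small optional check, not part of the original steps):
rabbitmqctl list_users
rabbitmqctl list_permissions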
","title":"Installing the message queue"},{"location":"install/openEuler-24.03-LTS-SP1/OpenStack-antelope/#_5","text":"
The cache service is installed on the control node; Memcached is recommended here.
Install the packages:
dnf install memcached python3-memcached
Edit the configuration file /etc/sysconfig/memcached:
OPTIONS="-l 127.0.0.1,::1,controller"
Start the service:
systemctl start memcached
","title":"Installing the cache service"},{"location":"install/openEuler-24.03-LTS-SP1/OpenStack-antelope/#_6","text":"","title":"Deploying the services"},{"location":"install/openEuler-24.03-LTS-SP1/OpenStack-antelope/#keystone","text":"
Keystone is the identity service provided by OpenStack and the entry point to the whole cloud; it provides tenant isolation, user authentication, service discovery and related functions, and must be installed.
Create the keystone database and grant access:
mysql -u root -p
MariaDB [(none)]> CREATE DATABASE keystone;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \ IDENTIFIED BY 'KEYSTONE_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \ IDENTIFIED BY 'KEYSTONE_DBPASS';
MariaDB [(none)]> exit
Note: replace KEYSTONE_DBPASS with the password you set for the Keystone database.
Install the packages:
dnf install openstack-keystone httpd mod_wsgi
Configure keystone:
vim /etc/keystone/keystone.conf
[database]
connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone
[token]
provider = fernet
Explanation: the [database] section configures the database entry point; the [token] section configures the token provider.
Synchronize the database:
su -s /bin/sh -c "keystone-manage db_sync" keystone
Initialize the Fernet key repositories:
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
Bootstrap the identity service:
keystone-manage bootstrap --bootstrap-password ADMIN_PASS \
--bootstrap-admin-url http://controller:5000/v3/ \
--bootstrap-internal-url http://controller:5000/v3/ \
--bootstrap-public-url http://controller:5000/v3/ \
--bootstrap-region-id RegionOne
Note: replace ADMIN_PASS with the password you set for the admin user.
Configure the Apache HTTP server. Open httpd.conf and configure it:
# path of the configuration file to edit
vim /etc/httpd/conf/httpd.conf
# change the following entry, or add it if it does not exist
ServerName controller
Create the symbolic link:
ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
Explanation: the ServerName entry must refer to the control node. Note: if the ServerName entry does not exist, it has to be created.
Start the Apache HTTP service:
systemctl enable httpd.service
systemctl start httpd.service
Create the environment-variable configuration:
cat << EOF >> ~/.admin-openrc
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
EOF
Note: replace ADMIN_PASS with the admin user's password.
Create the domain, projects, users and roles in turn. python3-openstackclient must be installed first:
dnf install python3-openstackclient
Load the environment variables:
source ~/.admin-openrc
Create the project service (the domain default was already created during keystone-manage bootstrap):
openstack domain create --description "An Example Domain" example
openstack project create --domain default --description "Service Project" service
Create the (non-admin) project myproject, the user myuser and the role myrole, then add the role myrole to myproject and myuser:
openstack project create --domain default --description "Demo Project" myproject
openstack user create --domain default --password-prompt myuser
openstack role create myrole
openstack role add --project myproject --user myuser myrole
Verification
Unset the temporary environment variables OS_AUTH_URL and OS_PASSWORD:
source ~/.admin-openrc
unset OS_AUTH_URL OS_PASSWORD
Request a token for the admin user:
openstack --os-auth-url http://controller:5000/v3 \
--os-project-domain-name Default --os-user-domain-name Default \
--os-project-name admin --os-username admin token issue
Request a token for the myuser user:
openstack --os-auth-url http://controller:5000/v3 \
--os-project-domain-name Default --os-user-domain-name Default \
--os-project-name myproject --os-username myuser token issue
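As a side note that is not part of the original guide, the OpenStack client can also read credentials from a clouds.yaml file instead of sourcing ~/.admin-openrc. A sketch follows, where the cloud name openeuler-admin is an arbitrary example and ADMIN_PASS is the admin password from above:
mkdir -p ~/.config/openstack
cat << EOF > ~/.config/openstack/clouds.yaml
clouds:
  openeuler-admin:
    auth:
      auth_url: http://controller:5000/v3
      project_name: admin
      username: admin
      password: ADMIN_PASS
      project_domain_name: Default
      user_domain_name: Default
    region_name: RegionOne
    identity_api_version: 3
EOF
openstack --os-cloud openeuler-admin token issue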
","title":"Keystone"},{"location":"install/openEuler-24.03-LTS-SP1/OpenStack-antelope/#glance","text":"
Glance is the image service provided by OpenStack; it handles uploading and downloading virtual machine and bare-metal images, and must be installed.
Controller node:
Create the glance database and grant access:
mysql -u root -p
MariaDB [(none)]> CREATE DATABASE glance;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \ IDENTIFIED BY 'GLANCE_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \ IDENTIFIED BY 'GLANCE_DBPASS';
MariaDB [(none)]> exit
Note: replace GLANCE_DBPASS with the password you set for the glance database.
Initialize the glance resource objects.
Load the environment variables:
source ~/.admin-openrc
When creating the user, the command line prompts for a password; enter a password of your own choosing, and replace GLANCE_PASS with it wherever it appears below.
openstack user create --domain default --password-prompt glance
User Password:
Repeat User Password:
Add the glance user to the service project and give it the admin role:
openstack role add --project service --user glance admin
Create the glance service entity:
openstack service create --name glance --description "OpenStack Image" image
Create the glance API endpoints:
openstack endpoint create --region RegionOne image public http://controller:9292
openstack endpoint create --region RegionOne image internal http://controller:9292
openstack endpoint create --region RegionOne image admin http://controller:9292
Install the packages:
dnf install openstack-glance
Edit the glance configuration file:
vim /etc/glance/glance-api.conf
[database]
connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = GLANCE_PASS
[paste_deploy]
flavor = keystone
[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
Explanation: the [database] section configures the database entry point; the [keystone_authtoken] and [paste_deploy] sections configure the identity service entry point; the [glance_store] section configures local filesystem storage and the location of the image files.
Synchronize the database:
su -s /bin/sh -c "glance-manage db_sync" glance
Start the service:
systemctl enable openstack-glance-api.service
systemctl start openstack-glance-api.service
Verification
Load the environment variables:
source ~/.admin-openrc
Download an image.
x86 image download:
wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
arm image download:
wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-aarch64-disk.img
Note: if your environment is a Kunpeng (aarch64) architecture, download the aarch64 version of the image; the image cirros-0.5.2-aarch64-disk.img has been tested.
Upload the image to the Image service:
openstack image create --disk-format qcow2 --container-format bare \
--file cirros-0.4.0-x86_64-disk.img --public cirros
Confirm the upload and verify the image attributes:
openstack image list
","title":"Glance"},{"location":"install/openEuler-24.03-LTS-SP1/OpenStack-antelope/#placement","text":"
Placement is the resource-scheduling component provided by OpenStack. It is usually not user-facing but is called by components such as Nova, and it is installed on the control node.
Before installing and configuring the Placement service, the corresponding database, service credentials and API endpoints must be created.
Create the database.
Access the database service as root:
mysql -u root -p
Create the placement database:
MariaDB [(none)]> CREATE DATABASE placement;
Grant database access:
MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' \ IDENTIFIED BY 'PLACEMENT_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' \ IDENTIFIED BY 'PLACEMENT_DBPASS';
Replace PLACEMENT_DBPASS with the placement database access password.
Exit the database client:
exit
Configure the user and the endpoints.
Source the admin credentials to get admin command-line access:
source ~/.admin-openrc
Create the placement user and set its password:
openstack user create --domain default --password-prompt placement
User Password:
Repeat User Password:
Add the placement user to the service project and give it the admin role:
openstack role add --project service --user placement admin
Create the placement service entity:
openstack service create --name placement \
--description "Placement API" placement
Create the Placement API service endpoints:
openstack endpoint create --region RegionOne \
placement public http://controller:8778
openstack endpoint create --region RegionOne \
placement internal http://controller:8778
openstack endpoint create --region RegionOne \
placement admin http://controller:8778
Install and configure the components.
Install the packages:
dnf install openstack-placement-api
Edit the /etc/placement/placement.conf configuration file and complete the following.
In the [placement_database] section, configure the database entry point:
[placement_database]
connection = mysql+pymysql://placement:PLACEMENT_DBPASS@controller/placement
Replace PLACEMENT_DBPASS with the placement database password.
In the [api] and [keystone_authtoken] sections, configure the identity service entry point:
[api]
auth_strategy = keystone
[keystone_authtoken]
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = placement
password = PLACEMENT_PASS
Replace PLACEMENT_PASS with the placement user's password.
Synchronize the database, populating the Placement database:
su -s /bin/sh -c "placement-manage db sync" placement
Start the service.
Restart the httpd service:
systemctl restart httpd
Verification
Source the admin credentials to get admin command-line access:
source ~/.admin-openrc
Run the status check:
placement-status upgrade check
+----------------------------------------------------------------------+
| Upgrade Check Results                                                |
+----------------------------------------------------------------------+
| Check: Missing Root Provider IDs                                     |
| Result: Success                                                      |
| Details: None                                                        |
+----------------------------------------------------------------------+
| Check: Incomplete Consumers                                          |
| Result: Success                                                      |
| Details: None                                                        |
+----------------------------------------------------------------------+
| Check: Policy File JSON to YAML Migration                            |
| Result: Failure                                                      |
| Details: Your policy file is JSON-formatted which is deprecated. You |
|   need to switch to YAML-formatted file. Use the                     |
|   ``oslopolicy-convert-json-to-yaml`` tool to convert the            |
|   existing JSON-formatted files to YAML in a backwards-              |
|   compatible manner: https://docs.openstack.org/oslo.policy/         |
|   latest/cli/oslopolicy-convert-json-to-yaml.html.                   |
+----------------------------------------------------------------------+
Here the result of Policy File JSON to YAML Migration is Failure. This is because JSON-formatted policy files have been deprecated in Placement since the Wallaby release. As the hint suggests, the oslopolicy-convert-json-to-yaml tool can be used to convert the existing JSON-formatted policy file to YAML:
oslopolicy-convert-json-to-yaml --namespace placement \
--policy-file /etc/placement/policy.json \
--output-file /etc/placement/policy.yaml
mv /etc/placement/policy.json{,.bak}
Note: in the current environment this issue can be ignored; it does not affect operation.
Run commands against the placement API.
Install the osc-placement plugin:
dnf install python3-osc-placement
List the available resource classes and traits:
openstack --os-placement-api-version 1.2 resource class list --sort-column name
+----------------------------+
| name                       |
+----------------------------+
| DISK_GB                    |
| FPGA                       |
| ...                        |
openstack --os-placement-api-version 1.6 trait list --sort-column name
+---------------------------------------+
| name                                  |
+---------------------------------------+
| COMPUTE_ACCELERATORS                  |
| COMPUTE_ARCH_AARCH64                  |
| ...                                   |
","title":"Placement"},{"location":"install/openEuler-24.03-LTS-SP1/OpenStack-antelope/#nova","text":"
Nova is the OpenStack compute service, responsible for creating and provisioning virtual machines.
Controller node. Perform the following operations on the control node.
Create the databases.
Access the database service as root:
mysql -u root -p
Create the nova_api, nova and nova_cell0 databases:
MariaDB [(none)]> CREATE DATABASE nova_api;
MariaDB [(none)]> CREATE DATABASE nova;
MariaDB [(none)]> CREATE DATABASE nova_cell0;
Grant database access:
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \ IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \ IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \ IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \ IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \ IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \ IDENTIFIED BY 'NOVA_DBPASS';
Replace NOVA_DBPASS with the access password for the nova-related databases.
Exit the database client:
exit
Configure the user and the endpoints.
Source the admin credentials to get admin command-line access:
source ~/.admin-openrc
Create the nova user and set its password:
openstack user create --domain default --password-prompt nova
User Password:
Repeat User Password:
Add the nova user to the service project and give it the admin role:
openstack role add --project service --user nova admin
Create the nova service entity:
openstack service create --name nova \
--description "OpenStack Compute" compute
Create the Nova API service endpoints:
openstack endpoint create --region RegionOne \
compute public http://controller:8774/v2.1
openstack endpoint create --region RegionOne \
compute internal http://controller:8774/v2.1
openstack endpoint create --region RegionOne \
compute admin http://controller:8774/v2.1
Install and configure the components.
Install the packages:
dnf install openstack-nova-api openstack-nova-conductor \
openstack-nova-novncproxy openstack-nova-scheduler
Edit the /etc/nova/nova.conf configuration file and complete the following.
In the [DEFAULT] section, enable the compute and metadata APIs, configure the RabbitMQ message-queue entry point, set my_ip to the controller node's management IP, and explicitly define log_dir:
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
my_ip = 192.168.0.2
log_dir = /var/log/nova
state_path = /var/lib/nova
Replace RABBIT_PASS with the password of the openstack account in RabbitMQ.
In the [api_database] and [database] sections, configure the database entry points:
[api_database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api
[database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova
Replace NOVA_DBPASS with the password of the nova-related databases.
In the [api] and [keystone_authtoken] sections, configure the identity service entry point:
[api]
auth_strategy = keystone
[keystone_authtoken]
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = NOVA_PASS
Replace NOVA_PASS with the nova user's password.
In the [vnc] section, enable and configure the remote console entry point:
[vnc]
enabled = true
server_listen = $my_ip
server_proxyclient_address = $my_ip
In the [glance] section, configure the address of the image service API:
[glance]
api_servers = http://controller:9292
In the [oslo_concurrency] section, configure the lock path:
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
In the [placement] section, configure the placement service entry point:
[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = PLACEMENT_PASS
Replace PLACEMENT_PASS with the placement user's password.
Synchronize the databases.
Synchronize the nova-api database:
su -s /bin/sh -c "nova-manage api_db sync" nova
Register the cell0 database:
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
Create the cell1 cell:
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
Synchronize the nova database:
su -s /bin/sh -c "nova-manage db sync" nova
Verify that cell0 and cell1 are registered correctly:
su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova
Start the services:
systemctl enable \
openstack-nova-api.service \
openstack-nova-scheduler.service \
openstack-nova-conductor.service \
openstack-nova-novncproxy.service
systemctl start \
openstack-nova-api.service \
openstack-nova-scheduler.service \
openstack-nova-conductor.service \
openstack-nova-novncproxy.service
Compute node. Perform the following operations on the compute node.
Install the packages:
dnf install openstack-nova-compute
Edit the /etc/nova/nova.conf configuration file.
In the [DEFAULT] section, enable the compute and metadata APIs, configure the RabbitMQ message-queue entry point, set my_ip to the compute node's management IP, and explicitly define compute_driver, instances_path and log_dir:
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
my_ip = 192.168.0.3
compute_driver = libvirt.LibvirtDriver
instances_path = /var/lib/nova/instances
log_dir = /var/log/nova
Replace RABBIT_PASS with the password of the openstack account in RabbitMQ.
In the [api] and [keystone_authtoken] sections, configure the identity service entry point:
[api]
auth_strategy = keystone
[keystone_authtoken]
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = NOVA_PASS
Replace NOVA_PASS with the nova user's password.
In the [vnc] section, enable and configure the remote console entry point:
[vnc]
enabled = true
server_listen = $my_ip
server_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html
In the [glance] section, configure the address of the image service API:
[glance]
api_servers = http://controller:9292
In the [oslo_concurrency] section, configure the lock path:
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
In the [placement] section, configure the placement service entry point:
[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = PLACEMENT_PASS
Replace PLACEMENT_PASS with the placement user's password.
Check whether the compute node supports virtual machine hardware acceleration (x86_64).
When the processor is of x86_64 architecture, run the following command to check whether hardware acceleration is supported:
egrep -c '(vmx|svm)' /proc/cpuinfo
If the return value is 0, hardware acceleration is not supported, and libvirt must be configured to use QEMU instead of the default KVM. Edit the [libvirt] section of /etc/nova/nova.conf:
[libvirt]
virt_type = qemu
If the return value is 1 or greater, hardware acceleration is supported and no extra configuration is needed.
Check whether the compute node supports virtual machine hardware acceleration (arm64).
When the processor is of arm64 architecture, run the following command to check whether hardware acceleration is supported:
virt-host-validate # this command is provided by libvirt, which at this point has already been installed as a dependency of openstack-nova-compute, so the command is available in the environment
When FAIL is shown, hardware acceleration is not supported and libvirt must be configured to use QEMU instead of the default KVM.
QEMU: Checking if device /dev/kvm exists: FAIL (Check that CPU and firmware supports virtualization and kvm module is loaded)
Edit the [libvirt] section of /etc/nova/nova.conf:
[libvirt]
virt_type = qemu
When PASS is shown, hardware acceleration is supported and no extra configuration is needed.
QEMU: Checking if device /dev/kvm exists: PASS
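The x86_64 check above and the resulting decision can be combined into one small shell sketch (it only prints guidance and does not edit nova.conf itself):
count=$(egrep -c '(vmx|svm)' /proc/cpuinfo)
if [ "$count" -eq 0 ]; then
    echo "No hardware acceleration: set virt_type = qemu in the [libvirt] section of /etc/nova/nova.conf"
else
    echo "Hardware acceleration available on $count CPUs: the default KVM virt_type can be kept"
fi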
Configure qemu (arm64 only).
This operation is required only when the processor is of arm64 architecture.
Edit /etc/libvirt/qemu.conf:
nvram = ["/usr/share/AAVMF/AAVMF_CODE.fd: \
/usr/share/AAVMF/AAVMF_VARS.fd", \
"/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw: \
/usr/share/edk2/aarch64/vars-template-pflash.raw"]
Edit /etc/qemu/firmware/edk2-aarch64.json:
{
  "description": "UEFI firmware for ARM64 virtual machines",
  "interface-types": [ "uefi" ],
  "mapping": {
    "device": "flash",
    "executable": {
      "filename": "/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw",
      "format": "raw"
    },
    "nvram-template": {
      "filename": "/usr/share/edk2/aarch64/vars-template-pflash.raw",
      "format": "raw"
    }
  },
  "targets": [ { "architecture": "aarch64", "machines": [ "virt-*" ] } ],
  "features": [ ],
  "tags": [ ]
}
Start the services:
systemctl enable libvirtd.service openstack-nova-compute.service
systemctl start libvirtd.service openstack-nova-compute.service
Controller node. Perform the following operations on the control node.
Add the compute node to the OpenStack cluster.
Source the admin credentials to get admin command-line access:
source ~/.admin-openrc
Confirm that the nova-compute service has been recognized in the database:
openstack compute service list --service nova-compute
Discover the compute node and add it to the cell database:
su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
The result looks like this:
Modules with known eventlet monkey patching issues were imported prior to eventlet monkey patching: urllib3. This warning can usually be ignored if the caller is only importing and not executing nova code.
Found 2 cell mappings.
Skipping cell0 since it does not contain hosts.
Getting computes from cell 'cell1': 6dae034e-b2d9-4a6c-b6f0-60ada6a6ddc2
Checking host mapping for compute host 'compute': 6286a86f-09d7-4786-9137-1185654c9e2e
Creating host mapping for compute host 'compute': 6286a86f-09d7-4786-9137-1185654c9e2e
Found 1 unmapped computes in cell: 6dae034e-b2d9-4a6c-b6f0-60ada6a6ddc2
Verification
List the service components to verify that every process started and registered successfully:
openstack compute service list
List the API endpoints in the identity service to verify connectivity with the identity service:
openstack catalog list
List the images in the image service to verify connectivity with the image service:
openstack image list
Check that the cells are working and that the other prerequisites are in place:
nova-status upgrade check
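If you plan to add more compute nodes later, nova can also discover them periodically instead of requiring a manual discover_hosts run each time. This is a standard nova option rather than a step from the original guide; a sketch for the [scheduler] section of the controller's /etc/nova/nova.conf:
[scheduler]
discover_hosts_in_cells_interval = 300
With this set, the scheduler checks for unmapped compute hosts every 300 seconds.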
","title":"Nova"},{"location":"install/openEuler-24.03-LTS-SP1/OpenStack-antelope/#neutron","text":"
Neutron is the OpenStack networking service, providing virtual switches, IP routing, DHCP and related functions.
Controller node
Create the database, service credentials and API service endpoints.
Create the database:
mysql -u root -p
MariaDB [(none)]> CREATE DATABASE neutron;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'NEUTRON_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'NEUTRON_DBPASS';
MariaDB [(none)]> exit;
Create the user and the service, and remember the password entered when creating the neutron user; it is used below as NEUTRON_PASS:
source ~/.admin-openrc
openstack user create --domain default --password-prompt neutron
openstack role add --project service --user neutron admin
openstack service create --name neutron --description "OpenStack Networking" network
Deploy the Neutron API service endpoints:
openstack endpoint create --region RegionOne network public http://controller:9696
openstack endpoint create --region RegionOne network internal http://controller:9696
openstack endpoint create --region RegionOne network admin http://controller:9696
Install the packages:
dnf install -y openstack-neutron openstack-neutron-linuxbridge ebtables ipset openstack-neutron-ml2
Configure Neutron.
Edit /etc/neutron/neutron.conf:
[database]
connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron
[DEFAULT]
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = neutron
password = NEUTRON_PASS
[nova]
auth_url = http://controller:5000
auth_type = password
project_domain_name = Default
user_domain_name = Default
region_name = RegionOne
project_name = service
username = nova
password = NOVA_PASS
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
[experimental]
linuxbridge = true
Configure ML2. The exact ML2 configuration can be adapted to your needs; this document uses a provider network with linuxbridge.
Edit /etc/neutron/plugins/ml2/ml2_conf.ini:
[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security
[ml2_type_flat]
flat_networks = provider
[ml2_type_vxlan]
vni_ranges = 1:1000
[securitygroup]
enable_ipset = true
Edit /etc/neutron/plugins/ml2/linuxbridge_agent.ini:
[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME
[vxlan]
enable_vxlan = true
local_ip = OVERLAY_INTERFACE_IP_ADDRESS
l2_population = true
[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
Configure the Layer-3 agent.
Edit /etc/neutron/l3_agent.ini:
[DEFAULT]
interface_driver = linuxbridge
Configure the DHCP agent.
Edit /etc/neutron/dhcp_agent.ini:
[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
Configure the metadata agent.
Edit /etc/neutron/metadata_agent.ini:
[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = METADATA_SECRET
Configure the nova service to use neutron; edit /etc/nova/nova.conf:
[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
service_metadata_proxy = true
metadata_proxy_shared_secret = METADATA_SECRET
Create the symbolic link /etc/neutron/plugin.ini:
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
Synchronize the database:
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
Restart the nova api service:
systemctl restart openstack-nova-api
Start the networking services:
systemctl enable neutron-server.service neutron-linuxbridge-agent.service \
neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service
systemctl start neutron-server.service neutron-linuxbridge-agent.service \
neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service
Compute node
Install the packages:
dnf install openstack-neutron-linuxbridge ebtables ipset -y
Configure Neutron.
Edit /etc/neutron/neutron.conf:
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = neutron
password = NEUTRON_PASS
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
Edit /etc/neutron/plugins/ml2/linuxbridge_agent.ini:
[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME
[vxlan]
enable_vxlan = true
local_ip = OVERLAY_INTERFACE_IP_ADDRESS
l2_population = true
[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
Configure the nova compute service to use neutron; edit /etc/nova/nova.conf:
[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
Restart the nova-compute service:
systemctl restart openstack-nova-compute.service
Start the Neutron linuxbridge agent service:
systemctl enable neutron-linuxbridge-agent
systemctl start neutron-linuxbridge-agent
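A quick check that the agents on both nodes registered correctly (run on the controller after sourcing ~/.admin-openrc):
openstack network agent list
You should see the Linux bridge agent on both the controller and the compute node, plus the L3, DHCP and metadata agents on the controller, all reported as alive.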
","title":"Neutron"},{"location":"install/openEuler-24.03-LTS-SP1/OpenStack-antelope/#cinder","text":"
Cinder is the OpenStack block storage service, providing creation, provisioning and backup of block devices.
Controller node:
Initialize the database. CINDER_DBPASS is the user-defined password for the cinder database.
mysql -u root -p
MariaDB [(none)]> CREATE DATABASE cinder;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'CINDER_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'CINDER_DBPASS';
MariaDB [(none)]> exit
Initialize the Keystone resource objects:
source ~/.admin-openrc
# When creating the user, the command line prompts for a password; enter a password of your own choosing, and replace `CINDER_PASS` with it wherever it appears below.
openstack user create --domain default --password-prompt cinder
openstack role add --project service --user cinder admin
openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s
Install the packages:
dnf install openstack-cinder-api openstack-cinder-scheduler
Edit the cinder configuration file /etc/cinder/cinder.conf:
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone
my_ip = 192.168.0.2
[database]
connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = cinder
password = CINDER_PASS
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
Synchronize the database:
su -s /bin/sh -c "cinder-manage db sync" cinder
Edit the nova configuration /etc/nova/nova.conf:
[cinder]
os_region_name = RegionOne
Start the services:
systemctl restart openstack-nova-api
systemctl start openstack-cinder-api openstack-cinder-scheduler
Storage node:
The storage node must have at least one disk prepared in advance as the cinder storage backend. The text below assumes the storage node already has an unused disk with the device name /dev/sdb; during configuration, replace the name according to your real environment.
Cinder supports many types of backend storage; this guide uses the simplest, lvm, as a reference. If you want to use another backend such as ceph, configure it yourself.
Install the packages:
dnf install lvm2 device-mapper-persistent-data scsi-target-utils rpcbind nfs-utils openstack-cinder-volume openstack-cinder-backup
Configure the lvm volume group:
pvcreate /dev/sdb
vgcreate cinder-volumes /dev/sdb
Edit the cinder configuration /etc/cinder/cinder.conf:
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone
my_ip = 192.168.0.4
enabled_backends = lvm
glance_api_servers = http://controller:9292
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = CINDER_PASS
[database]
connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder
[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
target_protocol = iscsi
target_helper = lioadm
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
Configure cinder backup (optional).
cinder-backup is an optional backup service. Cinder likewise supports many backup backends; this document uses swift storage. If you want to use another backend such as NFS, configure it yourself, for example by following the NFS configuration notes in the official OpenStack documentation.
Edit /etc/cinder/cinder.conf and add the following to [DEFAULT]:
[DEFAULT]
backup_driver = cinder.backup.drivers.swift.SwiftBackupDriver
backup_swift_url = SWIFT_URL
Here SWIFT_URL is the URL of the swift service in your environment; after the swift service has been deployed, obtain it by running the openstack catalog show object-store command.
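Optionally, for the LVM backend configured on the storage node above, the upstream install guides also suggest restricting the LVM device scan so that only the cinder disk is probed. A sketch for the devices section of /etc/lvm/lvm.conf, assuming the operating system lives on /dev/sda and /dev/sdb is the cinder disk; adjust the filter to your real disk layout:
devices {
    filter = [ "a/sdb/", "r/.*/" ]
}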
### Storage node

Prepare at least one spare disk on the storage node in advance to serve as the Cinder storage backend. The text below assumes the storage node already has an unused disk named `/dev/sdb`; replace the name with your real environment during configuration. Cinder supports many backend types; this guide uses the simplest one, LVM, as the reference. If you want another backend such as Ceph, configure it yourself.

1. Install the packages:

```
dnf install lvm2 device-mapper-persistent-data scsi-target-utils rpcbind nfs-utils openstack-cinder-volume openstack-cinder-backup
```

2. Configure the LVM volume group:

```
pvcreate /dev/sdb
vgcreate cinder-volumes /dev/sdb
```

3. Edit the Cinder configuration `/etc/cinder/cinder.conf`:

```
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone
my_ip = 192.168.0.4
enabled_backends = lvm
glance_api_servers = http://controller:9292

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = CINDER_PASS

[database]
connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder

[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
target_protocol = iscsi
target_helper = lioadm

[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
```

4. Configure cinder-backup (optional).

cinder-backup is an optional backup service. Cinder supports several backup backends; this document uses Swift. If you want another backend such as NFS, configure it yourself, for example following the NFS notes in the official OpenStack documentation. Add the following to the `[DEFAULT]` section of `/etc/cinder/cinder.conf`:

```
[DEFAULT]
backup_driver = cinder.backup.drivers.swift.SwiftBackupDriver
backup_swift_url = SWIFT_URL
```

`SWIFT_URL` is the URL of the Swift service in your environment. After Swift is deployed, obtain it with `openstack catalog show object-store`.

5. Start the services:

```
systemctl start openstack-cinder-volume target
systemctl start openstack-cinder-backup   # optional
```

The Cinder deployment is now complete. A simple verification can be run on the controller:

```
source ~/.admin-openrc
openstack volume service list
openstack volume list
```
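If you want to go one step beyond listing the services, this optional example (not in the original guide) creates and deletes a small test volume, which exercises the scheduler and the LVM backend end to end:

```
source ~/.admin-openrc
# Create a 1 GB volume on the LVM backend and wait for it to become "available".
openstack volume create --size 1 test-volume
openstack volume list
# Clean up afterwards.
openstack volume delete test-volume
```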
## Horizon

Horizon is the web front end of OpenStack. It lets users control the OpenStack cluster through a browser instead of the more tedious CLI. Horizon is usually deployed on the controller node.

1. Install the package:

```
dnf install openstack-dashboard
```

2. Edit the configuration file `/etc/openstack-dashboard/local_settings`:

```
OPENSTACK_HOST = "controller"
ALLOWED_HOSTS = ['*', ]
OPENSTACK_KEYSTONE_URL = "http://controller:5000/v3"
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'controller:11211',
    }
}
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "member"
WEBROOT = '/dashboard'
POLICY_FILES_PATH = "/etc/openstack-dashboard"
OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 3,
}
```

3. Restart the service:

```
systemctl restart httpd
```

The Horizon deployment is now complete. Open a browser, enter `http://192.168.0.2/dashboard`, and the Horizon login page appears.
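A quick reachability check from the command line (an optional addition; it only verifies that httpd serves the dashboard URL, not that login works):

```
# An HTTP 200 response or a redirect to the login page indicates the dashboard is served.
curl -I http://192.168.0.2/dashboard
```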
## Ironic

Ironic is the OpenStack bare metal service. It is recommended if you need bare metal provisioning; otherwise it does not need to be installed. Perform the following operations on the controller node.

1. Set up the database.

The bare metal service stores information in a database. Create an `ironic` database that the `ironic` user can access, replacing `IRONIC_DBPASS` with a suitable password:

```
mysql -u root -p
MariaDB [(none)]> CREATE DATABASE ironic CHARACTER SET utf8;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'localhost' \
  IDENTIFIED BY 'IRONIC_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'%' \
  IDENTIFIED BY 'IRONIC_DBPASS';
MariaDB [(none)]> exit
```

2. Create the service users.

Replace `IRONIC_PASS` with the password of the ironic user and `IRONIC_INSPECTOR_PASS` with the password of the ironic-inspector user:

```
openstack user create --password IRONIC_PASS \
  --email ironic@example.com ironic
openstack role add --project service --user ironic admin
openstack service create --name ironic \
  --description "Ironic baremetal provisioning service" baremetal
openstack service create --name ironic-inspector \
  --description "Ironic inspector baremetal provisioning service" baremetal-introspection
openstack user create --password IRONIC_INSPECTOR_PASS \
  --email ironic_inspector@example.com ironic-inspector
openstack role add --project service --user ironic-inspector admin
```

3. Create the Bare Metal service endpoints:

```
openstack endpoint create --region RegionOne baremetal admin http://192.168.0.2:6385
openstack endpoint create --region RegionOne baremetal public http://192.168.0.2:6385
openstack endpoint create --region RegionOne baremetal internal http://192.168.0.2:6385
openstack endpoint create --region RegionOne baremetal-introspection internal http://192.168.0.2:5050/v1
openstack endpoint create --region RegionOne baremetal-introspection public http://192.168.0.2:5050/v1
openstack endpoint create --region RegionOne baremetal-introspection admin http://192.168.0.2:5050/v1
```

4. Install the components:

```
dnf install openstack-ironic-api openstack-ironic-conductor python3-ironicclient
```

5. Configure the ironic-api service. The configuration file is `/etc/ironic/ironic.conf`.

Configure the database location through the `connection` option as shown below, replacing `IRONIC_DBPASS` with the password of the `ironic` user and `DB_IP` with the IP address of the DB server:

```
[database]
# The SQLAlchemy connection string used to connect to the
# database (string value)
# connection = mysql+pymysql://ironic:IRONIC_DBPASS@DB_IP/ironic
connection = mysql+pymysql://ironic:IRONIC_DBPASS@controller/ironic
```

Configure ironic-api to use the RabbitMQ message broker through the following option, replacing `RPC_*` with the RabbitMQ address and credentials (json-rpc can also be used instead of RabbitMQ):

```
[DEFAULT]
# A URL representing the messaging driver to use and its full
# configuration. (string value)
# transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
```

Configure ironic-api to use the Identity service credentials. Replace `PUBLIC_IDENTITY_IP` with the public IP of the Identity server, `PRIVATE_IDENTITY_IP` with its private IP, `IRONIC_PASS` with the password of the `ironic` user in the Identity service, and `RABBIT_PASS` with the password of the openstack account in RabbitMQ:
```
[DEFAULT]
# Authentication strategy used by ironic-api: one of
# "keystone" or "noauth". "noauth" should not be used in a
# production environment because all authentication will be
# disabled. (string value)
auth_strategy=keystone
host = controller
memcache_servers = controller:11211
enabled_network_interfaces = flat,noop,neutron
default_network_interface = noop
enabled_hardware_types = ipmi
enabled_boot_interfaces = pxe
enabled_deploy_interfaces = direct
default_deploy_interface = direct
enabled_inspect_interfaces = inspector
enabled_management_interfaces = ipmitool
enabled_power_interfaces = ipmitool
enabled_rescue_interfaces = no-rescue,agent
isolinux_bin = /usr/share/syslinux/isolinux.bin
logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s

[keystone_authtoken]
# Authentication type to load (string value)
auth_type=password
# Complete public Identity API endpoint (string value)
# www_authenticate_uri=http://PUBLIC_IDENTITY_IP:5000
www_authenticate_uri=http://controller:5000
# Complete admin Identity API endpoint. (string value)
# auth_url=http://PRIVATE_IDENTITY_IP:5000
auth_url=http://controller:5000
# Service username. (string value)
username=ironic
# Service account password. (string value)
password=IRONIC_PASS
# Service tenant name. (string value)
project_name=service
# Domain name containing project (string value)
project_domain_name=Default
# User's domain name (string value)
user_domain_name=Default

[agent]
deploy_logs_collect = always
deploy_logs_local_path = /var/log/ironic/deploy
deploy_logs_storage_backend = local
image_download_source = http
stream_raw_images = false
force_raw_images = false
verify_ca = False

[oslo_concurrency]

[oslo_messaging_notifications]
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
topics = notifications
driver = messagingv2

[oslo_messaging_rabbit]
amqp_durable_queues = True
rabbit_ha_queues = True

[pxe]
ipxe_enabled = false
pxe_append_params = nofb nomodeset vga=normal coreos.autologin ipa-insecure=1
image_cache_size = 204800
tftp_root=/var/lib/tftpboot/cephfs/
tftp_master_path=/var/lib/tftpboot/cephfs/master_images

[dhcp]
dhcp_provider = none
```

6. Create the bare metal service database tables:

```
ironic-dbsync --config-file /etc/ironic/ironic.conf create_schema
```

7. Restart the ironic-api service:

```
sudo systemctl restart openstack-ironic-api
```
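An optional check, not part of the original guide: the API should now answer on port 6385, and its root URL returns version information as JSON:

```
# If ironic-api is listening, this prints the supported API versions.
curl -s http://controller:6385 | python3 -m json.tool
```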
8. Configure the ironic-conductor service.

The following is the standard configuration of the ironic-conductor service itself. ironic-conductor can run on a different node from ironic-api; in this guide both are deployed on the controller node, so duplicated options can be skipped.

Set `my_ip` to the IP of the host running the conductor service:

```
[DEFAULT]
# IP address of this host. If unset, will determine the IP
# programmatically. If unable to do so, will use "127.0.0.1".
# (string value)
# my_ip=HOST_IP
my_ip = 192.168.0.2
```

Configure the database location. ironic-conductor should use the same configuration as ironic-api. Replace `IRONIC_DBPASS` with the password of the `ironic` user:

```
[database]
# The SQLAlchemy connection string to use to connect to the
# database. (string value)
connection = mysql+pymysql://ironic:IRONIC_DBPASS@controller/ironic
```

Configure the RabbitMQ message broker. ironic-conductor should use the same configuration as ironic-api. Replace `RABBIT_PASS` with the password of the openstack account in RabbitMQ (json-rpc can also be used instead of RabbitMQ):

```
[DEFAULT]
# A URL representing the messaging driver to use and its full
# configuration. (string value)
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
```

9. Configure credentials for accessing other OpenStack services.

To communicate with other OpenStack services, the bare metal service authenticates against the OpenStack Identity service with service users when it calls them. The credentials of these users must be configured in each configuration section related to the corresponding service:

- `[neutron]` - access to the OpenStack networking service
- `[glance]` - access to the OpenStack image service
- `[swift]` - access to the OpenStack object storage service
- `[cinder]` - access to the OpenStack block storage service
- `[inspector]` - access to the OpenStack bare metal introspection service
- `[service_catalog]` - a special entry that holds the credentials the bare metal service uses to discover its own API URL endpoint registered in the OpenStack Identity service catalog

For simplicity, the same service user can be used for all services. For backward compatibility, this should be the same user configured in `[keystone_authtoken]` of the ironic-api service. This is not mandatory; a different service user can be created and configured for each service.

In the following example, the authentication information for accessing the OpenStack networking service is configured as follows:

- the networking service is deployed in the Identity service region named RegionOne, and only the public endpoint is registered in the service catalog
- requests use a specific CA SSL certificate for HTTPS connections
- the same service user as configured for ironic-api
- the dynamic password authentication plugin discovers a suitable Identity service API version based on the other options

Replace `IRONIC_PASS` with the password of the ironic user.

```
[neutron]
# Authentication type to load (string value)
auth_type = password
# Authentication URL (string value)
auth_url=https://IDENTITY_IP:5000/
# Username (string value)
username=ironic
# User's password (string value)
password=IRONIC_PASS
# Project name to scope to (string value)
project_name=service
# Domain ID containing project (string value)
project_domain_id=default
# User's domain id (string value)
user_domain_id=default
# PEM encoded Certificate Authority to use when verifying
# HTTPs connections. (string value)
cafile=/opt/stack/data/ca-bundle.pem
# The default region_name for endpoint URL discovery. (string
# value)
region_name = RegionOne
# List of interfaces, in order of preference, for endpoint
# URL. (list value)
valid_interfaces=public

# Other reference configuration
[glance]
endpoint_override = http://controller:9292
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
auth_type = password
username = ironic
password = IRONIC_PASS
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service

[service_catalog]
region_name = RegionOne
project_domain_id = default
user_domain_id = default
project_name = service
password = IRONIC_PASS
username = ironic
auth_url = http://controller:5000
auth_type = password
```

By default, to communicate with another service, the bare metal service tries to discover a suitable endpoint for that service through the Identity service catalog. If you want to use a different endpoint for a particular service, specify it with the `endpoint_override` option in the bare metal service configuration file:

```
[neutron]
endpoint_override =
```

10. Configure the allowed drivers and hardware types.

Set the hardware types allowed by the ironic-conductor service through `enabled_hardware_types`:

```
[DEFAULT]
enabled_hardware_types = ipmi
```

Configure the hardware interfaces:

```
enabled_boot_interfaces = pxe
enabled_deploy_interfaces = direct,iscsi
enabled_inspect_interfaces = inspector
enabled_management_interfaces = ipmitool
enabled_power_interfaces = ipmitool
```

Configure the interface defaults:

```
[DEFAULT]
default_deploy_interface = direct
default_network_interface = neutron
```

If any driver that uses direct deploy is enabled, the Swift backend of the image service must be installed and configured. The Ceph Object Gateway (RADOS Gateway) is also supported as an image service backend.

11. Restart the ironic-conductor service:

```
sudo systemctl restart openstack-ironic-conductor
```
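Optional verification, added here and not part of the original text: once both ironic-api and ironic-conductor are running, the enabled hardware types should be visible through the CLI:

```
source ~/.admin-openrc
# With the configuration above, the list should contain the "ipmi" driver.
openstack baremetal driver list
```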
12. Configure the ironic-inspector service.

Install the component:

```
dnf install openstack-ironic-inspector
```

Create the database:

```
# mysql -u root -p
MariaDB [(none)]> CREATE DATABASE ironic_inspector CHARACTER SET utf8;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic_inspector.* TO 'ironic_inspector'@'localhost' \
  IDENTIFIED BY 'IRONIC_INSPECTOR_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic_inspector.* TO 'ironic_inspector'@'%' \
  IDENTIFIED BY 'IRONIC_INSPECTOR_DBPASS';
MariaDB [(none)]> exit
```

Configure `/etc/ironic-inspector/inspector.conf`. Configure the database location through the `connection` option, replacing `IRONIC_INSPECTOR_DBPASS` with the password of the `ironic_inspector` user:

```
[database]
backend = sqlalchemy
connection = mysql+pymysql://ironic_inspector:IRONIC_INSPECTOR_DBPASS@controller/ironic_inspector
min_pool_size = 100
max_pool_size = 500
pool_timeout = 30
max_retries = 5
max_overflow = 200
db_retry_interval = 2
db_inc_retry_interval = True
db_max_retry_interval = 2
db_max_retries = 5
```

Configure the message queue address:

```
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
```

Configure Keystone authentication:

```
[DEFAULT]
auth_strategy = keystone
timeout = 900
rootwrap_config = /etc/ironic-inspector/rootwrap.conf
logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s
log_dir = /var/log/ironic-inspector
state_path = /var/lib/ironic-inspector
use_stderr = False

[ironic]
api_endpoint = http://IRONIC_API_HOST_ADDRESS:6385
auth_type = password
auth_url = http://PUBLIC_IDENTITY_IP:5000
auth_strategy = keystone
ironic_url = http://IRONIC_API_HOST_ADDRESS:6385
os_region = RegionOne
project_name = service
project_domain_name = Default
user_domain_name = Default
username = IRONIC_SERVICE_USER_NAME
password = IRONIC_SERVICE_USER_PASSWORD

[keystone_authtoken]
auth_type = password
auth_url = http://controller:5000
www_authenticate_uri = http://controller:5000
project_domain_name = default
user_domain_name = default
project_name = service
username = ironic_inspector
password = IRONICPASSWD
region_name = RegionOne
memcache_servers = controller:11211
token_cache_time = 300

[processing]
add_ports = active
processing_hooks = $default_processing_hooks,local_link_connection,lldp_basic
ramdisk_logs_dir = /var/log/ironic-inspector/ramdisk
always_store_ramdisk_logs = true
store_data = none
power_off = false

[pxe_filter]
driver = iptables

[capabilities]
boot_mode=True
```

Configure the ironic-inspector dnsmasq service:

```
# configuration file: /etc/ironic-inspector/dnsmasq.conf
port=0
interface=enp3s0                       # replace with the actual listening interface
dhcp-range=192.168.0.40,192.168.0.50   # replace with the actual DHCP address range
bind-interfaces
enable-tftp
dhcp-match=set:efi,option:client-arch,7
dhcp-match=set:efi,option:client-arch,9
dhcp-match=aarch64, option:client-arch,11
dhcp-boot=tag:aarch64,grubaa64.efi
dhcp-boot=tag:!aarch64,tag:efi,grubx64.efi
dhcp-boot=tag:!aarch64,tag:!efi,pxelinux.0
tftp-root=/tftpboot                    # replace with the actual tftpboot directory
log-facility=/var/log/dnsmasq.log
```

Disable DHCP on the subnet of the ironic provisioning network:

```
openstack subnet set --no-dhcp 72426e89-f552-4dc4-9ac7-c4e131ce7f3c
```

Initialize the ironic-inspector database:

```
ironic-inspector-dbsync --config-file /etc/ironic-inspector/inspector.conf upgrade
```

Start the services:

```
systemctl enable --now openstack-ironic-inspector.service
systemctl enable --now openstack-ironic-inspector-dnsmasq.service
```
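To confirm that both inspector units came up (a small check added here, not in the original guide):

```
# Both units should report "active".
systemctl is-active openstack-ironic-inspector.service
systemctl is-active openstack-ironic-inspector-dnsmasq.service
```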
13. Configure the httpd service.

Create the httpd root directory used by Ironic and set its owner and group. The directory path must match the path specified by the `http_root` option in the `[deploy]` section of `/etc/ironic/ironic.conf`:

```
mkdir -p /var/lib/ironic/httproot
chown ironic.ironic /var/lib/ironic/httproot
```

Install and configure the httpd service (skip the installation if it is already present):

```
dnf install httpd -y
```

Create the file `/etc/httpd/conf.d/openstack-ironic-httpd.conf` with the following content:

```
Listen 8080

<VirtualHost *:8080>
    ServerName ironic.openeuler.com
    ErrorLog "/var/log/httpd/openstack-ironic-httpd-error_log"
    CustomLog "/var/log/httpd/openstack-ironic-httpd-access_log" "%h %l %u %t \"%r\" %>s %b"
    DocumentRoot "/var/lib/ironic/httproot"
    <Directory "/var/lib/ironic/httproot">
        Options Indexes FollowSymLinks
        Require all granted
    </Directory>
    LogLevel warn
    AddDefaultCharset UTF-8
    EnableSendfile on
</VirtualHost>
```

Note that the listening port must match the port specified by the `http_url` option in the `[deploy]` section of `/etc/ironic/ironic.conf`.

Restart the httpd service:

```
systemctl restart httpd
```
14. Download or build the deploy ramdisk images.

Deploying a bare metal node requires two sets of images in total: deploy ramdisk images and user images. The deploy ramdisk images run the ironic-python-agent (IPA) service, through which Ironic prepares the environment of the bare metal node. The user images are what finally gets installed on the bare metal node for the user.

The ramdisk images can be built with ironic-python-agent-builder or disk-image-builder; other tools can also be used. If the native tools are used, the corresponding packages need to be installed. For detailed usage, refer to the official documentation; prebuilt deploy images are also provided upstream and can be downloaded instead.

The following describes the complete process of building the deploy image used by Ironic with ironic-python-agent-builder.

Install ironic-python-agent-builder:

```
dnf install python3-ironic-python-agent-builder
```

or

```
pip3 install ironic-python-agent-builder
dnf install qemu-img git
```

Build the image. Basic usage:

```
usage: ironic-python-agent-builder [-h] [-r RELEASE] [-o OUTPUT] [-e ELEMENT]
                                   [-b BRANCH] [-v] [--lzma]
                                   [--extra-args EXTRA_ARGS]
                                   [--elements-path ELEMENTS_PATH]
                                   distribution

positional arguments:
  distribution          Distribution to use

options:
  -h, --help            show this help message and exit
  -r RELEASE, --release RELEASE
                        Distribution release to use
  -o OUTPUT, --output OUTPUT
                        Output base file name
  -e ELEMENT, --element ELEMENT
                        Additional DIB element to use
  -b BRANCH, --branch BRANCH
                        If set, override the branch that is used for
                        ironic-python-agent and requirements
  -v, --verbose         Enable verbose logging in diskimage-builder
  --lzma                Use lzma compression for smaller images
  --extra-args EXTRA_ARGS
                        Extra arguments to pass to diskimage-builder
  --elements-path ELEMENTS_PATH
                        Path(s) to custom DIB elements separated by a colon
```

Example:

```
# -o specifies the name of the generated image
# "ubuntu" builds an Ubuntu-based image
ironic-python-agent-builder -o my-ubuntu-ipa ubuntu
```

The architecture of the built image can be selected through the `ARCH` environment variable (default `amd64`). For an arm build, add:

```
export ARCH=aarch64
```

Allow SSH login. Initialize the environment variables to set the user name and password and enable passwordless sudo, and add the corresponding DIB elements with `-e`:

```
export DIB_DEV_USER_USERNAME=ipa
export DIB_DEV_USER_PWDLESS_SUDO=yes
export DIB_DEV_USER_PASSWORD='123'
ironic-python-agent-builder -o my-ssh-ubuntu-ipa -e selinux-permissive -e devuser ubuntu
```

Specify the code repository. Initialize the corresponding environment variables, then build the image:

```
# clone the code directly from gerrit
DIB_REPOLOCATION_ironic_python_agent=https://opendev.org/openstack/ironic-python-agent
DIB_REPOREF_ironic_python_agent=stable/2023.1

# or use a local repository and branch
DIB_REPOLOCATION_ironic_python_agent=/home/user/path/to/repo
DIB_REPOREF_ironic_python_agent=my-test-branch

ironic-python-agent-builder ubuntu
```

Reference: source-repositories.

Notes:

- The PXE configuration file template in upstream OpenStack does not support the arm64 architecture and has to be modified by the user. As of the Wallaby release, the community Ironic still does not support arm64 UEFI PXE boot: the generated grub.cfg file (usually under /tftpboot/) has the wrong format, so PXE boot fails. (The screenshot of the incorrectly generated configuration file is not reproduced here.) On arm, the commands used to load the kernel and ramdisk images are `linux` and `initrd`, whereas the generated file uses the x86 UEFI PXE commands; the code that generates grub.cfg has to be adapted by the user.
- TLS errors when Ironic sends status query requests to IPA: in the current versions, both IPA and Ironic send requests to each other with TLS enabled by default; disable it as described in the upstream documentation.

Add `ipa-insecure=1` to the following options in the Ironic configuration file (`/etc/ironic/ironic.conf`):

```
[agent]
verify_ca = False

[pxe]
pxe_append_params = nofb nomodeset vga=normal coreos.autologin ipa-insecure=1
```

In the ramdisk image, add the IPA configuration file `/etc/ironic_python_agent/ironic_python_agent.conf` (the `/etc/ironic_python_agent` directory has to be created first) with the following TLS configuration:

```
[DEFAULT]
enable_auto_tls = False
```

Set the permissions:

```
chown -R ipa.ipa /etc/ironic_python_agent/
```

In the ramdisk image, modify the service file of the IPA service to add the configuration file option. Edit `/usr/lib/systemd/system/ironic-python-agent.service`:

```
[Unit]
Description=Ironic Python Agent
After=network-online.target

[Service]
ExecStartPre=/sbin/modprobe vfat
ExecStart=/usr/local/bin/ironic-python-agent --config-file /etc/ironic_python_agent/ironic_python_agent.conf
Restart=always
RestartSec=30s

[Install]
WantedBy=multi-user.target
```
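With the services configured and a deploy ramdisk available, a node is normally enrolled with the baremetal CLI. The commands below are only an illustrative sketch added here (not part of the original guide); the IPMI address, credentials, MAC address, node UUID, and image locations are placeholders:

```
source ~/.admin-openrc
# Enroll a node with the ipmi hardware type (all values below are placeholders).
openstack baremetal node create --driver ipmi --name bm-node-0 \
  --driver-info ipmi_address=10.0.0.100 \
  --driver-info ipmi_username=admin \
  --driver-info ipmi_password=IPMI_PASS \
  --driver-info deploy_kernel=file:///var/lib/ironic/httproot/ipa.kernel \
  --driver-info deploy_ramdisk=file:///var/lib/ironic/httproot/ipa.initramfs
# Register the node's provisioning NIC (placeholder MAC and node UUID) and check the state.
openstack baremetal port create 52:54:00:12:34:56 --node <node-uuid>
openstack baremetal node list
```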
## Trove

Trove is the OpenStack database service. It is recommended if users want the database service provided by OpenStack; otherwise it does not need to be installed.

### Controller node

1. Create the database.

The database service stores information in a database. Create a `trove` database that the trove user can access, replacing `TROVE_DBPASS` with a suitable password:

```
CREATE DATABASE trove CHARACTER SET utf8;
GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'localhost' IDENTIFIED BY 'TROVE_DBPASS';
GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'%' IDENTIFIED BY 'TROVE_DBPASS';
```

2. Create the service credentials and API endpoints.

Create the service credentials:

```
# create the trove user
openstack user create --domain default --password-prompt trove
# add the admin role
openstack role add --project service --user trove admin
# create the database service
openstack service create --name trove --description "Database service" database
```

Create the API endpoints:

```
openstack endpoint create --region RegionOne database public http://controller:8779/v1.0/%\(tenant_id\)s
openstack endpoint create --region RegionOne database internal http://controller:8779/v1.0/%\(tenant_id\)s
openstack endpoint create --region RegionOne database admin http://controller:8779/v1.0/%\(tenant_id\)s
```

3. Install Trove:

```
dnf install openstack-trove python-troveclient
```

4. Modify the configuration files.

Edit /etc/trove/trove.conf:

```
[DEFAULT]
bind_host=192.168.0.2
log_dir = /var/log/trove
network_driver = trove.network.neutron.NeutronDriver
network_label_regex=.*
management_security_groups =
nova_keypair = trove-mgmt
default_datastore = mysql
taskmanager_manager = trove.taskmanager.manager.Manager
trove_api_workers = 5
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
reboot_time_out = 300
usage_timeout = 900
agent_call_high_timeout = 1200
use_syslog = False
debug = True

[database]
connection = mysql+pymysql://trove:TROVE_DBPASS@controller/trove

[keystone_authtoken]
auth_url = http://controller:5000/v3/
auth_type = password
project_domain_name = Default
project_name = service
user_domain_name = Default
username = trove
password = TROVE_PASS

[service_credentials]
auth_url = http://controller:5000/v3/
region_name = RegionOne
project_name = service
project_domain_name = Default
user_domain_name = Default
username = trove
password = TROVE_PASS

[mariadb]
tcp_ports = 3306,4444,4567,4568

[mysql]
tcp_ports = 3306

[postgresql]
tcp_ports = 5432
```

Explanation:

- In the `[DEFAULT]` section, `bind_host` is the IP of the Trove controller node.
- `transport_url` is the RabbitMQ connection information; replace `RABBIT_PASS` with the RabbitMQ password.
- `connection` in the `[database]` section is the database information created for Trove in MySQL above.
- In the Trove user credentials, replace `TROVE_PASS` with the actual password of the trove user.

Edit /etc/trove/trove-guestagent.conf:

```
[DEFAULT]
log_file = trove-guestagent.log
log_dir = /var/log/trove/
ignore_users = os_admin
control_exchange = trove
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
rpc_backend = rabbit
command_process_timeout = 60
use_syslog = False
debug = True

[service_credentials]
auth_url = http://controller:5000/v3/
region_name = RegionOne
project_name = service
password = TROVE_PASS
project_domain_name = Default
user_domain_name = Default
username = trove

[mysql]
docker_image = your-registry/your-repo/mysql
backup_docker_image = your-registry/your-repo/db-backup-mysql:1.1.0
```

Explanation:

- The guest agent is a separate Trove component that has to be built into the virtual machine image Trove creates through Nova. After a database instance is created, the guest agent process starts and reports heartbeats to Trove through the message queue (RabbitMQ), so the RabbitMQ user and password must be configured here.
- `transport_url` is the RabbitMQ connection information; replace `RABBIT_PASS` with the RabbitMQ password.
- In the Trove user credentials, replace `TROVE_PASS` with the actual password of the trove user.
- Starting from the Victoria release, Trove uses a single unified image to run different kinds of databases; the database service runs in a Docker container inside the guest virtual machine.

5. Synchronize the database:

```
su -s /bin/sh -c "trove-manage db_sync" trove
```

6. Finish the installation:

```
# enable the services at boot
systemctl enable openstack-trove-api.service openstack-trove-taskmanager.service \
  openstack-trove-conductor.service
# start the services
systemctl start openstack-trove-api.service openstack-trove-taskmanager.service \
  openstack-trove-conductor.service
```
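As an optional smoke test, not in the original text, the CLI shipped with python-troveclient can confirm that the Trove API answers:

```
source ~/.admin-openrc
# Lists the datastores known to Trove (empty until datastores and guest images are registered).
openstack datastore list
# Lists database instances (also empty on a fresh deployment).
openstack database instance list
```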
## Swift

Swift provides an elastic, scalable, and highly available distributed object storage service, suitable for storing large amounts of unstructured data.

### Controller node

1. Create the service credentials and API endpoints.

Create the service credentials:

```
# create the swift user
openstack user create --domain default --password-prompt swift
# add the admin role
openstack role add --project service --user swift admin
# create the object storage service
openstack service create --name swift --description "OpenStack Object Storage" object-store
```

Create the API endpoints:

```
openstack endpoint create --region RegionOne object-store public http://controller:8080/v1/AUTH_%\(project_id\)s
openstack endpoint create --region RegionOne object-store internal http://controller:8080/v1/AUTH_%\(project_id\)s
openstack endpoint create --region RegionOne object-store admin http://controller:8080/v1
```

2. Install Swift:

```
dnf install openstack-swift-proxy python3-swiftclient python3-keystoneclient \
  python3-keystonemiddleware memcached
```

3. Configure the proxy-server.

The Swift RPM package already ships a basically usable proxy-server.conf; only the IP and `SWIFT_PASS` in it need to be adjusted manually.

```
vim /etc/swift/proxy-server.conf

[filter:authtoken]
paste.filter_factory = keystonemiddleware.auth_token:filter_factory
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = swift
password = SWIFT_PASS
delay_auth_decision = True
service_token_roles_required = True
```
### Storage node

1. Install the supporting packages:

```
dnf install openstack-swift-account openstack-swift-container openstack-swift-object
dnf install xfsprogs rsync
```

2. Format the devices /dev/sdb and /dev/sdc as XFS:

```
mkfs.xfs /dev/sdb
mkfs.xfs /dev/sdc
```

3. Create the mount point directory structure:

```
mkdir -p /srv/node/sdb
mkdir -p /srv/node/sdc
```

4. Find the UUIDs of the new partitions:

```
blkid
```

5. Edit the /etc/fstab file and add the following to it:

```
UUID="" /srv/node/sdb xfs noatime 0 2
UUID="" /srv/node/sdc xfs noatime 0 2
```

6. Mount the devices:

```
mount /srv/node/sdb
mount /srv/node/sdc
```

Note: if disaster recovery is not required, only one device needs to be created in the steps above, and the rsync configuration below can be skipped.

7. (Optional) Create or edit the /etc/rsyncd.conf file to contain the following:

```
[DEFAULT]
uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
address = MANAGEMENT_INTERFACE_IP_ADDRESS

[account]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/account.lock

[container]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/container.lock

[object]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/object.lock
```

Replace `MANAGEMENT_INTERFACE_IP_ADDRESS` with the IP address of the management network on the storage node.

Start the rsyncd service and configure it to start at boot:

```
systemctl enable rsyncd.service
systemctl start rsyncd.service
```

8. Configure the storage node.

Edit the account-server.conf, container-server.conf, and object-server.conf files in the /etc/swift directory, replacing `bind_ip` with the IP address of the management network on the storage node:

```
[DEFAULT]
bind_ip = 192.168.0.4
```

Ensure correct ownership of the mount point directory structure:

```
chown -R swift:swift /srv/node
```

Create the recon directory and ensure it has the correct ownership:

```
mkdir -p /var/cache/swift
chown -R root:swift /var/cache/swift
chmod -R 775 /var/cache/swift
```
### Create and distribute the rings on the controller node

1. Create the account ring.

Change to the /etc/swift directory:

```
cd /etc/swift
```

Create the base account.builder file:

```
swift-ring-builder account.builder create 10 1 1
```

Add each storage node to the ring:

```
swift-ring-builder account.builder add --region 1 --zone 1 \
  --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS \
  --port 6202 --device DEVICE_NAME \
  --weight 100
```

Replace `STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS` with the IP address of the management network on the storage node and `DEVICE_NAME` with the name of a storage device on the same node. Note: repeat this command for every storage device on every storage node.

Verify the account ring contents:

```
swift-ring-builder account.builder
```

Rebalance the account ring:

```
swift-ring-builder account.builder rebalance
```

2. Create the container ring.

Change to the /etc/swift directory and create the base container.builder file:

```
swift-ring-builder container.builder create 10 1 1
```

Add each storage node to the ring:

```
swift-ring-builder container.builder add --region 1 --zone 1 \
  --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6201 --device DEVICE_NAME \
  --weight 100
```

Replace `STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS` and `DEVICE_NAME` as above. Note: repeat this command for every storage device on every storage node.

Verify the container ring contents:

```
swift-ring-builder container.builder
```

Rebalance the container ring:

```
swift-ring-builder container.builder rebalance
```

3. Create the object ring.

Change to the /etc/swift directory and create the base object.builder file:

```
swift-ring-builder object.builder create 10 1 1
```

Add each storage node to the ring:

```
swift-ring-builder object.builder add --region 1 --zone 1 \
  --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS \
  --port 6200 --device DEVICE_NAME \
  --weight 100
```

Replace `STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS` and `DEVICE_NAME` as above. Note: repeat this command for every storage device on every storage node.

Verify the object ring contents:

```
swift-ring-builder object.builder
```

Rebalance the object ring:

```
swift-ring-builder object.builder rebalance
```

4. Distribute the ring configuration files.

Copy account.ring.gz, container.ring.gz, and object.ring.gz to the /etc/swift directory on every storage node and on any other node running the proxy service.

5. Edit the configuration file /etc/swift/swift.conf:

```
[swift-hash]
swift_hash_path_suffix = test-hash
swift_hash_path_prefix = test-hash

[storage-policy:0]
name = Policy-0
default = yes
```

Replace `test-hash` with unique values.

Copy the swift.conf file to the /etc/swift directory on every storage node and on any other node running the proxy service.

On all nodes, ensure correct ownership of the configuration directory:

```
chown -R root:swift /etc/swift
```
6. Finish the installation.

On the controller node and on any other node running the proxy service, start the object storage proxy service and its dependencies and configure them to start at boot:

```
systemctl enable openstack-swift-proxy.service memcached.service
systemctl start openstack-swift-proxy.service memcached.service
```

On the storage nodes, start the object storage services and configure them to start at boot:

```
systemctl enable openstack-swift-account.service \
  openstack-swift-account-auditor.service \
  openstack-swift-account-reaper.service \
  openstack-swift-account-replicator.service \
  openstack-swift-container.service \
  openstack-swift-container-auditor.service \
  openstack-swift-container-replicator.service \
  openstack-swift-container-updater.service \
  openstack-swift-object.service \
  openstack-swift-object-auditor.service \
  openstack-swift-object-replicator.service \
  openstack-swift-object-updater.service

systemctl start openstack-swift-account.service \
  openstack-swift-account-auditor.service \
  openstack-swift-account-reaper.service \
  openstack-swift-account-replicator.service \
  openstack-swift-container.service \
  openstack-swift-container-auditor.service \
  openstack-swift-container-replicator.service \
  openstack-swift-container-updater.service \
  openstack-swift-object.service \
  openstack-swift-object-auditor.service \
  openstack-swift-object-replicator.service \
  openstack-swift-object-updater.service
```
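A quick functional check of the object store can then be run from the controller; this example is an addition to the original steps:

```
source ~/.admin-openrc
# Create a container, upload a small file into it, and list the result.
openstack container create demo-container
echo "hello swift" > /tmp/hello.txt
openstack object create demo-container /tmp/hello.txt
openstack object list demo-container
```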
## Cyborg

Cyborg provides support for accelerator devices in OpenStack, including GPU, FPGA, ASIC, NP, SoCs, NVMe/NOF SSDs, ODP, DPDK/SPDK, and so on.

### Controller node

1. Initialize the database:

```
mysql -u root -p
MariaDB [(none)]> CREATE DATABASE cyborg;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'localhost' IDENTIFIED BY 'CYBORG_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'%' IDENTIFIED BY 'CYBORG_DBPASS';
MariaDB [(none)]> exit;
```

2. Create the user and the service. Remember the password entered when creating the cyborg user; it is used for `CYBORG_PASS`:

```
source ~/.admin-openrc
openstack user create --domain default --password-prompt cyborg
openstack role add --project service --user cyborg admin
openstack service create --name cyborg --description "Acceleration Service" accelerator
```

3. Deploy the Cyborg API service with uwsgi:

```
openstack endpoint create --region RegionOne accelerator public http://controller/accelerator/v2
openstack endpoint create --region RegionOne accelerator internal http://controller/accelerator/v2
openstack endpoint create --region RegionOne accelerator admin http://controller/accelerator/v2
```

4. Install Cyborg:

```
dnf install openstack-cyborg
```

5. Configure Cyborg. Edit /etc/cyborg/cyborg.conf:

```
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
use_syslog = False
state_path = /var/lib/cyborg
debug = True

[api]
host_ip = 0.0.0.0

[database]
connection = mysql+pymysql://cyborg:CYBORG_DBPASS@controller/cyborg

[service_catalog]
cafile = /opt/stack/data/ca-bundle.pem
project_domain_id = default
user_domain_id = default
project_name = service
password = CYBORG_PASS
username = cyborg
auth_url = http://controller:5000/v3/
auth_type = password

[placement]
project_domain_name = Default
project_name = service
user_domain_name = Default
password = PLACEMENT_PASS
username = placement
auth_url = http://controller:5000/v3/
auth_type = password
auth_section = keystone_authtoken

[nova]
project_domain_name = Default
project_name = service
user_domain_name = Default
password = NOVA_PASS
username = nova
auth_url = http://controller:5000/v3/
auth_type = password
auth_section = keystone_authtoken

[keystone_authtoken]
memcached_servers = localhost:11211
signing_dir = /var/cache/cyborg/api
cafile = /opt/stack/data/ca-bundle.pem
project_domain_name = Default
project_name = service
user_domain_name = Default
password = CYBORG_PASS
username = cyborg
auth_url = http://controller:5000/v3/
auth_type = password
```

6. Synchronize the database tables:

```
cyborg-dbsync --config-file /etc/cyborg/cyborg.conf upgrade
```

7. Start the Cyborg services:

```
systemctl enable openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent
systemctl start openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent
```
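The three Cyborg services can be checked quickly (an optional addition to the original steps):

```
# All three units should report "active".
systemctl is-active openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent
```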
## Aodh

Aodh can create alarms based on the monitoring data collected by Ceilometer or Gnocchi and set trigger rules.

### Controller node

1. Create the database:

```
CREATE DATABASE aodh;
GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'localhost' IDENTIFIED BY 'AODH_DBPASS';
GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'%' IDENTIFIED BY 'AODH_DBPASS';
```

2. Create the service credentials and API endpoints.

Create the service credentials:

```
openstack user create --domain default --password-prompt aodh
openstack role add --project service --user aodh admin
openstack service create --name aodh --description "Telemetry" alarming
```

Create the API endpoints:

```
openstack endpoint create --region RegionOne alarming public http://controller:8042
openstack endpoint create --region RegionOne alarming internal http://controller:8042
openstack endpoint create --region RegionOne alarming admin http://controller:8042
```

3. Install Aodh:

```
dnf install openstack-aodh-api openstack-aodh-evaluator \
  openstack-aodh-notifier openstack-aodh-listener \
  openstack-aodh-expirer python3-aodhclient
```

4. Modify the configuration file:

```
vim /etc/aodh/aodh.conf

[database]
connection = mysql+pymysql://aodh:AODH_DBPASS@controller/aodh

[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = aodh
password = AODH_PASS

[service_credentials]
auth_type = password
auth_url = http://controller:5000/v3
project_domain_id = default
user_domain_id = default
project_name = service
username = aodh
password = AODH_PASS
interface = internalURL
region_name = RegionOne
```

5. Synchronize the database:

```
aodh-dbsync
```

6. Finish the installation:

```
# enable the services at boot
systemctl enable openstack-aodh-api.service openstack-aodh-evaluator.service \
  openstack-aodh-notifier.service openstack-aodh-listener.service
# start the services
systemctl start openstack-aodh-api.service openstack-aodh-evaluator.service \
  openstack-aodh-notifier.service openstack-aodh-listener.service
```
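python3-aodhclient adds alarm commands to the unified CLI, so a quick check (added here, not in the original steps) is:

```
source ~/.admin-openrc
# An empty list (rather than an error) indicates the alarming API is reachable.
openstack alarm list
```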
## Gnocchi

Gnocchi is an open source time series database that can be used as the backend for Ceilometer.

### Controller node

1. Create the database:

```
CREATE DATABASE gnocchi;
GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'localhost' IDENTIFIED BY 'GNOCCHI_DBPASS';
GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'%' IDENTIFIED BY 'GNOCCHI_DBPASS';
```

2. Create the service credentials and API endpoints.

Create the service credentials:

```
openstack user create --domain default --password-prompt gnocchi
openstack role add --project service --user gnocchi admin
openstack service create --name gnocchi --description "Metric Service" metric
```

Create the API endpoints:

```
openstack endpoint create --region RegionOne metric public http://controller:8041
openstack endpoint create --region RegionOne metric internal http://controller:8041
openstack endpoint create --region RegionOne metric admin http://controller:8041
```

3. Install Gnocchi:

```
dnf install openstack-gnocchi-api openstack-gnocchi-metricd python3-gnocchiclient
```

4. Modify the configuration file:

```
vim /etc/gnocchi/gnocchi.conf

[api]
auth_mode = keystone
port = 8041
uwsgi_mode = http-socket

[keystone_authtoken]
auth_type = password
auth_url = http://controller:5000/v3
project_domain_name = Default
user_domain_name = Default
project_name = service
username = gnocchi
password = GNOCCHI_PASS
interface = internalURL
region_name = RegionOne

[indexer]
url = mysql+pymysql://gnocchi:GNOCCHI_DBPASS@controller/gnocchi

[storage]
# coordination_url is not required but specifying one will improve
# performance with better workload division across workers.
# coordination_url = redis://controller:6379
file_basepath = /var/lib/gnocchi
driver = file
```

5. Synchronize the database:

```
gnocchi-upgrade
```

6. Finish the installation:

```
# enable the services at boot
systemctl enable openstack-gnocchi-api.service openstack-gnocchi-metricd.service
# start the services
systemctl start openstack-gnocchi-api.service openstack-gnocchi-metricd.service
```
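python3-gnocchiclient ships a standalone `gnocchi` CLI that can be used for a quick check; this is an optional addition, and the output stays empty until Ceilometer starts publishing measures:

```
source ~/.admin-openrc
# Lists the metrics known to Gnocchi; an empty table means the API and indexer work.
gnocchi metric list
```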
## Ceilometer

Ceilometer is the data collection service of OpenStack.

### Controller node

1. Create the service credentials:

```
openstack user create --domain default --password-prompt ceilometer
openstack role add --project service --user ceilometer admin
openstack service create --name ceilometer --description "Telemetry" metering
```

2. Install the Ceilometer packages:

```
dnf install openstack-ceilometer-notification openstack-ceilometer-central
```

3. Edit the configuration file /etc/ceilometer/pipeline.yaml:

```
publishers:
    # set address of Gnocchi
    # + filter out Gnocchi-related activity meters (Swift driver)
    # + set default archive policy
    - gnocchi://?filter_project=service&archive_policy=low
```

4. Edit the configuration file /etc/ceilometer/ceilometer.conf:

```
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller

[service_credentials]
auth_type = password
auth_url = http://controller:5000/v3
project_domain_id = default
user_domain_id = default
project_name = service
username = ceilometer
password = CEILOMETER_PASS
interface = internalURL
region_name = RegionOne
```

5. Synchronize the database:

```
ceilometer-upgrade
```

6. Finish the Ceilometer installation on the controller node:

```
# enable the services at boot
systemctl enable openstack-ceilometer-notification.service openstack-ceilometer-central.service
# start the services
systemctl start openstack-ceilometer-notification.service openstack-ceilometer-central.service
```

### Compute node

1. Install the Ceilometer packages:

```
dnf install openstack-ceilometer-compute
dnf install openstack-ceilometer-ipmi   # optional
```

2. Edit the configuration file /etc/ceilometer/ceilometer.conf:

```
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller

[service_credentials]
auth_url = http://controller:5000
project_domain_id = default
user_domain_id = default
auth_type = password
username = ceilometer
project_name = service
password = CEILOMETER_PASS
interface = internalURL
region_name = RegionOne
```

3. Edit the configuration file /etc/nova/nova.conf:

```
[DEFAULT]
instance_usage_audit = True
instance_usage_audit_period = hour

[notifications]
notify_on_state_change = vm_and_task_state

[oslo_messaging_notifications]
driver = messagingv2
```

4. Finish the installation:

```
systemctl enable openstack-ceilometer-compute.service
systemctl start openstack-ceilometer-compute.service
systemctl enable openstack-ceilometer-ipmi.service   # optional
systemctl start openstack-ceilometer-ipmi.service    # optional
# restart the nova-compute service
systemctl restart openstack-nova-compute.service
```
Start the services.

```
systemctl enable openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service
systemctl start openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service
```

## Tempest

Tempest is the OpenStack integration test service. It is recommended if you need comprehensive, automated functional testing of the installed OpenStack environment; otherwise it can be skipped.

**Controller node:**

Install Tempest.

```
dnf install openstack-tempest
```

Initialize a working directory.

```
tempest init mytest
```

Edit the configuration file.

```
cd mytest
vi etc/tempest.conf
```

tempest.conf must describe the current OpenStack environment; refer to the official example for the details.

Run the tests.

```
tempest run
```

Install the Tempest plugins (optional).

The OpenStack services also ship their own Tempest test packages, which can be installed to extend the test coverage. For Antelope we provide extension tests for Cinder, Glance, Keystone, Ironic, and Trove; install them with:

```
dnf install python3-cinder-tempest-plugin python3-glance-tempest-plugin python3-ironic-tempest-plugin python3-keystone-tempest-plugin python3-trove-tempest-plugin
```
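Running the full suite can take a long time; `tempest run` also accepts a regular expression to limit the scope. The pattern below is only an illustrative example.

```bash
cd mytest
# Run only the Keystone identity API tests
tempest run --regex tempest.api.identity
# Optionally keep the raw subunit stream for later inspection
tempest run --regex tempest.api.identity --subunit > identity.subunit
```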
## Deployment with the OpenStack SIG Development Tool oos

oos (openEuler OpenStack SIG) is the command-line tool provided by the OpenStack SIG. Its `oos env` sub-commands wrap ansible scripts that deploy OpenStack in one step (all in one or a three-node cluster), so a user can quickly bring up an OpenStack environment based on openEuler RPMs. oos can deploy either against a cloud provider (currently only the Huawei Cloud provider is supported) or onto machines you manage yourself. The example below deploys an all-in-one OpenStack environment on Huawei Cloud to illustrate how oos is used.

Install the oos tool.

```
yum install openstack-sig-tool
```

Configure the Huawei Cloud provider information.

Open /usr/local/etc/oos/oos.conf and fill in the Huawei Cloud resources you own. AK/SK are your Huawei Cloud access keys; the other options can keep their defaults (the Singapore region is used by default). The following resources must be created on the cloud in advance:

- a security group, named oos by default
- an openEuler image, named openEuler-%(release)s-%(arch)s, for example openEuler-24.03-sp1-arm64
- a VPC named oos_vpc
- two subnets in that VPC, named oos_subnet1 and oos_subnet2

```
[huaweicloud]
ak =
sk =
region = ap-southeast-3
root_volume_size = 100
data_volume_size = 100
security_group_name = oos
image_format = openEuler-%%(release)s-%%(arch)s
vpc_name = oos_vpc
subnet1_name = oos_subnet1
subnet2_name = oos_subnet2
```

Configure the OpenStack environment information.

Open /usr/local/etc/oos/oos.conf and adjust the configuration for the target machine and your needs:

```
[environment]
mysql_root_password = root
mysql_project_password = root
rabbitmq_password = root
project_identity_password = root
enabled_service = keystone,neutron,cinder,placement,nova,glance,horizon,aodh,ceilometer,cyborg,gnocchi,kolla,heat,swift,trove,tempest
neutron_provider_interface_name = br-ex
default_ext_subnet_range = 10.100.100.0/24
default_ext_subnet_gateway = 10.100.100.1
neutron_dataplane_interface_name = eth1
cinder_block_device = vdb
swift_storage_devices = vdc
swift_hash_path_suffix = ash
swift_hash_path_prefix = has
glance_api_workers = 2
cinder_api_workers = 2
nova_api_workers = 2
nova_metadata_api_workers = 2
nova_conductor_workers = 2
nova_scheduler_workers = 2
neutron_api_workers = 2
horizon_allowed_host = *
kolla_openeuler_plugin = false
```

Key options:

| Option | Meaning |
|---|---|
| enabled_service | list of services to install; trim it to your needs |
| neutron_provider_interface_name | name of the neutron L3 bridge |
| default_ext_subnet_range | neutron private network IP range |
| default_ext_subnet_gateway | neutron private network gateway |
| neutron_dataplane_interface_name | NIC used by neutron; a dedicated new NIC is recommended to avoid conflicts with the existing NIC and to prevent the all-in-one host from losing connectivity |
| cinder_block_device | volume device name used by cinder |
| swift_storage_devices | volume device name used by swift |
| kolla_openeuler_plugin | whether to enable the kolla plugin; when set to True, kolla can deploy openEuler containers (supported on openEuler LTS only) |

Create an openEuler 24.03 LTS SP1 x86_64 virtual machine on Huawei Cloud for the all-in-one OpenStack deployment.

```
# sshpass is used during `oos env create` to set up password-less access to the target VM
dnf install sshpass
oos env create -r 24.03-lts-sp1 -f small -a x86 -n test-oos all_in_one
```

See `oos env create --help` for the full list of parameters.

Deploy the all-in-one OpenStack environment.

```
oos env setup test-oos -r antelope
```

See `oos env setup --help` for the full list of parameters.

Initialize the tempest environment. If you want to run tempest tests against this environment, run `oos env init`, which automatically creates the OpenStack resources tempest needs.

```
oos env init test-oos
```

Run the tempest tests. oos can run them automatically:

```
oos env test test-oos
```
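Independently of tempest, it can be useful to look at the deployed services themselves once `oos env setup` has finished. The commands below are only a rough illustration; the node address and the location of the generated credentials file are assumptions, not something fixed by oos.

```bash
ssh root@<all-in-one-node-ip>     # hypothetical placeholder for the node created above
source ~/.admin-openrc            # assumed location of the admin credentials on that node
openstack service list            # every service in enabled_service should be registered
openstack compute service list    # nova services should report state "up"
```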
Alternatively, log in to the target node manually, enter the mytest directory under the root home directory, and run `tempest run` by hand.

If the OpenStack environment is deployed in host-managed mode, the overall flow is the same as for Huawei Cloud above: steps 1, 3, 5, and 6 are unchanged, step 2 (configuring the Huawei Cloud provider information) is skipped, and step 4 becomes taking over the target host instead of creating a VM.

The managed machine must have:

- at least one NIC dedicated to oos, whose name matches the configuration (see neutron_dataplane_interface_name)
- at least one disk dedicated to oos, whose name matches the configuration (see cinder_block_device)
- an additional disk if the swift service is to be deployed, whose name matches the configuration (see swift_storage_devices)

```
# sshpass is used during `oos env create` to set up password-less access to the target host
dnf install sshpass
oos env manage -r 24.03-lts-sp1 -i TARGET_MACHINE_IP -p TARGET_MACHINE_PASSWD -n test-oos
```

Replace TARGET_MACHINE_IP with the IP of the target machine and TARGET_MACHINE_PASSWD with its password. See `oos env manage --help` for the full list of parameters.

# OpenStack-Wallaby Deployment Guide

Contents:

- OpenStack Introduction
- Conventions
- Preparing the Environment
  - Environment Configuration
  - Installing the SQL Database
  - Installing RabbitMQ
  - Installing Memcached
- Installing OpenStack
  - Keystone Installation
  - Glance Installation
  - Placement Installation
  - Nova Installation
  - Neutron Installation
  - Cinder Installation
  - Horizon Installation
  - Tempest Installation
  - Ironic Installation
  - Kolla Installation
  - Trove Installation
  - Swift Installation
  - Cyborg Installation
  - Aodh Installation
  - Gnocchi Installation
  - Ceilometer Installation
  - Heat Installation
  - Quick Deployment with the OpenStack SIG Development Tool oos
## OpenStack Introduction

OpenStack is both a community and a project. It provides an operating platform and a tool set for deploying clouds, giving organizations scalable and flexible cloud computing. As an open-source cloud computing management platform, OpenStack combines several major components, such as nova, cinder, neutron, glance, keystone, and horizon, to do the actual work. OpenStack supports almost every type of cloud environment; the project aims to deliver a cloud management platform that is simple to deploy, massively scalable, feature-rich, and standardized. OpenStack provides an Infrastructure-as-a-Service (IaaS) solution through a set of complementary services, each of which offers an API for integration.

The official openEuler 24.03-LTS-SP1 repositories already include OpenStack-Wallaby. Users can configure the yum repositories and then deploy OpenStack by following this document.

## Conventions

OpenStack supports several deployment topologies. This document covers the ALL in One and the Distributed deployments, with the following conventions:

- ALL in One mode: ignore all suffixes.
- Distributed mode:
  - a `(CTL)` suffix means the configuration or command applies only to the `controller node`
  - a `(CPT)` suffix means the configuration or command applies only to the `compute node`
  - a `(STG)` suffix means the configuration or command applies only to the `storage node`
  - anything else applies to both the `controller node` and the `compute node`

Note: the services affected by these conventions are Cinder, Nova, and Neutron.

## Preparing the Environment

### Environment Configuration

Configure the official 24.03 LTS SP1 yum repositories; the EPOL repository must be enabled to provide OpenStack.

```
yum update
yum install openstack-release-wallaby
yum clean all && yum makecache
```

Note: if EPOL is not enabled in your yum configuration, configure it as well and make sure it looks like the following.

```
vi /etc/yum.repos.d/openEuler.repo

[EPOL]
name=EPOL
baseurl=http://repo.openeuler.org/openEuler-24.03-LTS-SP1/EPOL/main/$basearch/
enabled=1
gpgcheck=1
gpgkey=http://repo.openeuler.org/openEuler-24.03-LTS-SP1/OS/$basearch/RPM-GPG-KEY-openEuler
```

Set the host names and the host mapping.

Set the host name of each node:

```
hostnamectl set-hostname controller                                          (CTL)
hostnamectl set-hostname compute                                             (CPT)
```

Assuming the controller node's IP is 10.0.0.11 and the compute node's IP (if it exists) is 10.0.0.12, add the following to /etc/hosts:

```
10.0.0.11   controller
10.0.0.12   compute
```

### Installing the SQL Database

Install the packages.

```
yum install mariadb mariadb-server python3-PyMySQL
```

Create and edit the /etc/my.cnf.d/openstack.cnf file.

```
vim /etc/my.cnf.d/openstack.cnf

[mysqld]
bind-address = 10.0.0.11
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
```

Note: set bind-address to the management IP address of the controller node.

Start the database service and enable it at boot:

```
systemctl enable mariadb.service
systemctl start mariadb.service
```
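As a quick optional check (not part of the original steps), confirm that MariaDB is listening on the management address configured above before moving on; the second command assumes the root password has already been set (next step).

```bash
ss -tlnp | grep 3306                                    # mariadbd should be bound to 10.0.0.11:3306
mysql -h 10.0.0.11 -u root -p -e "SELECT VERSION();"    # connectivity and version check
```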
Configure the default database password (optional).

```
mysql_secure_installation
```

Note: follow the prompts.

### Installing RabbitMQ

Install the packages.

```
yum install rabbitmq-server
```

Start the RabbitMQ service and enable it at boot.

```
systemctl enable rabbitmq-server.service
systemctl start rabbitmq-server.service
```

Add the openstack user.

```
rabbitmqctl add_user openstack RABBIT_PASS
```

Note: replace RABBIT_PASS with the password you want for the openstack user.

Grant the openstack user permission to configure, write, and read:

```
rabbitmqctl set_permissions openstack ".*" ".*" ".*"
```

### Installing Memcached

Install the dependency packages.

```
yum install memcached python3-memcached
```

Edit the /etc/sysconfig/memcached file.

```
vim /etc/sysconfig/memcached

OPTIONS="-l 127.0.0.1,::1,controller"
```

Start the Memcached service and enable it at boot.

```
systemctl enable memcached.service
systemctl start memcached.service
```

Note: after the service starts, you can run `memcached-tool controller stats` to make sure it started correctly and is usable; controller can be replaced with the management IP address of the controller node.
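Optionally (beyond the original steps), confirm the message queue user and permissions before continuing:

```bash
rabbitmqctl list_users             # the "openstack" user should be listed
rabbitmqctl list_permissions       # it should have ".*" for configure, write, and read
memcached-tool controller stats    # memcached statistics confirm the cache is reachable
```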
## Installing OpenStack

### Keystone Installation

Create the keystone database and grant privileges.

```
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE keystone;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
IDENTIFIED BY 'KEYSTONE_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
IDENTIFIED BY 'KEYSTONE_DBPASS';
MariaDB [(none)]> exit
```

Note: replace KEYSTONE_DBPASS with the password you want for the keystone database.

Install the packages.

```
yum install openstack-keystone httpd mod_wsgi
```

Configure keystone.

```
vim /etc/keystone/keystone.conf

[database]
connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone

[token]
provider = fernet
```

Explanation: the [database] section configures the database entry point; the [token] section configures the token provider.

Note: replace KEYSTONE_DBPASS with the password of the keystone database.

Synchronize the database.

```
su -s /bin/sh -c "keystone-manage db_sync" keystone
```

Initialize the Fernet key repositories.

```
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
```

Bootstrap the service.

```
keystone-manage bootstrap --bootstrap-password ADMIN_PASS \
  --bootstrap-admin-url http://controller:5000/v3/ \
  --bootstrap-internal-url http://controller:5000/v3/ \
  --bootstrap-public-url http://controller:5000/v3/ \
  --bootstrap-region-id RegionOne
```

Note: replace ADMIN_PASS with the password you want for the admin user.

Configure the Apache HTTP server.

```
vim /etc/httpd/conf/httpd.conf

ServerName controller
```

```
ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
```

Explanation: the ServerName entry must refer to the controller node.

Note: if the ServerName entry does not exist, create it.

Start the Apache HTTP service.

```
systemctl enable httpd.service
systemctl start httpd.service
```

Create the environment variable file.

```
cat << EOF >> ~/.admin-openrc
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
EOF
```

Note: replace ADMIN_PASS with the password of the admin user.

Create the domain, projects, users, and roles in turn; python3-openstackclient must be installed first:

```
yum install python3-openstackclient
```

Load the environment variables:

```
source ~/.admin-openrc
```

Create the project service; the domain default was already created by keystone-manage bootstrap.

```
openstack domain create --description "An Example Domain" example
openstack project create --domain default --description "Service Project" service
```

Create the (non-admin) project myproject, the user myuser, and the role myrole, and add the role myrole to myproject and myuser.

```
openstack project create --domain default --description "Demo Project" myproject
openstack user create --domain default --password-prompt myuser
openstack role create myrole
openstack role add --project myproject --user myuser myrole
```

Verification.

Unset the temporary environment variables OS_AUTH_URL and OS_PASSWORD:

```
source ~/.admin-openrc
unset OS_AUTH_URL OS_PASSWORD
```

Request a token for the admin user:

```
openstack --os-auth-url http://controller:5000/v3 \
  --os-project-domain-name Default --os-user-domain-name Default \
  --os-project-name admin --os-username admin token issue
```

Request a token for the myuser user:

```
openstack --os-auth-url http://controller:5000/v3 \
  --os-project-domain-name Default --os-user-domain-name Default \
  --os-project-name myproject --os-username myuser token issue
```
### Glance Installation

Create the database, service credentials, and API endpoints.

Create the database:

```
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE glance;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
IDENTIFIED BY 'GLANCE_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
IDENTIFIED BY 'GLANCE_DBPASS';
MariaDB [(none)]> exit
```

Note: replace GLANCE_DBPASS with the password you want for the glance database.

Create the service credentials:

```
source ~/.admin-openrc

openstack user create --domain default --password-prompt glance
openstack role add --project service --user glance admin
openstack service create --name glance --description "OpenStack Image" image
```

Create the image service API endpoints:

```
openstack endpoint create --region RegionOne image public http://controller:9292
openstack endpoint create --region RegionOne image internal http://controller:9292
openstack endpoint create --region RegionOne image admin http://controller:9292
```

Install the packages.

```
yum install openstack-glance
```

Configure glance:

```
vim /etc/glance/glance-api.conf

[database]
connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = GLANCE_PASS

[paste_deploy]
flavor = keystone

[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
```

Explanation: the [database] section configures the database entry point; the [keystone_authtoken] and [paste_deploy] sections configure the identity service entry point; the [glance_store] section configures the local file system store and the location of the image files.

Note: replace GLANCE_DBPASS with the password of the glance database and GLANCE_PASS with the password of the glance user.

Synchronize the database:

```
su -s /bin/sh -c "glance-manage db_sync" glance
```

Start the service:

```
systemctl enable openstack-glance-api.service
systemctl start openstack-glance-api.service
```

Verification.

Download an image:

```
source ~/.admin-openrc

wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
```

Note: if your environment is on the Kunpeng (aarch64) architecture, download the aarch64 version of the image; the image cirros-0.5.2-aarch64-disk.img has been tested.

Upload the image to the Image service:

```
openstack image create --disk-format qcow2 --container-format bare \
  --file cirros-0.4.0-x86_64-disk.img --public cirros
```

Confirm the upload and verify its attributes:

```
openstack image list
```
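Optionally, and beyond what the guide itself requires, the image can be inspected a little more closely both before and after the upload:

```bash
qemu-img info cirros-0.4.0-x86_64-disk.img   # confirm the local file really is qcow2
openstack image show cirros                  # check size, checksum, and visibility in Glance
```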
### Placement Installation

Create the database, service credentials, and API endpoints.

Create the database. As the root user, access the database, create the placement database, and grant privileges:

```
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE placement;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' \
IDENTIFIED BY 'PLACEMENT_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' \
IDENTIFIED BY 'PLACEMENT_DBPASS';
MariaDB [(none)]> exit
```

Note: replace PLACEMENT_DBPASS with the password you want for the placement database.

```
source ~/.admin-openrc
```

Run the following commands to create the placement service credentials, create the placement user, and add the admin role to the placement user.

Create the Placement API service:

```
openstack user create --domain default --password-prompt placement
openstack role add --project service --user placement admin
openstack service create --name placement --description "Placement API" placement
```

Create the placement service API endpoints:

```
openstack endpoint create --region RegionOne placement public http://controller:8778
openstack endpoint create --region RegionOne placement internal http://controller:8778
openstack endpoint create --region RegionOne placement admin http://controller:8778
```

Install and configure.

Install the packages:

```
yum install openstack-placement-api
```

Configure placement by editing /etc/placement/placement.conf:

- in the [placement_database] section, configure the database entry point
- in the [api] and [keystone_authtoken] sections, configure the identity service entry point

```
# vim /etc/placement/placement.conf

[placement_database]
# ...
connection = mysql+pymysql://placement:PLACEMENT_DBPASS@controller/placement

[api]
# ...
auth_strategy = keystone

[keystone_authtoken]
# ...
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = placement
password = PLACEMENT_PASS
```

Replace PLACEMENT_DBPASS with the password of the placement database and PLACEMENT_PASS with the password of the placement user.

Synchronize the database:

```
su -s /bin/sh -c "placement-manage db sync" placement
```

Restart the httpd service:

```
systemctl restart httpd
```

Verification.

Run a status check:

```
source ~/.admin-openrc
placement-status upgrade check
```

Install osc-placement and list the available resource classes and traits:

```
yum install python3-osc-placement

openstack --os-placement-api-version 1.2 resource class list --sort-column name
openstack --os-placement-api-version 1.6 trait list --sort-column name
```
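Once compute nodes have been registered (Nova section below), they appear in Placement as resource providers. The following optional check is not part of the original steps and can be run at that point:

```bash
openstack resource provider list                     # one provider per registered hypervisor
# <uuid> below is a hypothetical placeholder taken from the list above
openstack resource provider inventory list <uuid>    # VCPU / MEMORY_MB / DISK_GB inventories
```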
### Nova Installation

Create the database, service credentials, and API endpoints.

Create the databases:

```
mysql -u root -p                                                             (CTL)

MariaDB [(none)]> CREATE DATABASE nova_api;
MariaDB [(none)]> CREATE DATABASE nova;
MariaDB [(none)]> CREATE DATABASE nova_cell0;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> exit
```

Note: replace NOVA_DBPASS with the password you want for the nova databases.

```
source ~/.admin-openrc                                                       (CTL)
```

Create the nova service credentials:

```
openstack user create --domain default --password-prompt nova               (CTL)
openstack role add --project service --user nova admin                      (CTL)
openstack service create --name nova --description "OpenStack Compute" compute   (CTL)
```

Create the nova API endpoints:

```
openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1     (CTL)
openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1   (CTL)
openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1      (CTL)
```

Install the packages.

```
yum install openstack-nova-api openstack-nova-conductor \                   (CTL)
            openstack-nova-novncproxy openstack-nova-scheduler

yum install openstack-nova-compute                                           (CPT)
```

Note: on arm64, the following command is also required:

```
yum install edk2-aarch64                                                     (CPT)
```

Configure nova.

```
vim /etc/nova/nova.conf

[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
my_ip = 10.0.0.11
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver
compute_driver = libvirt.LibvirtDriver                                       (CPT)
instances_path = /var/lib/nova/instances/                                    (CPT)
lock_path = /var/lib/nova/tmp                                                (CPT)

[api_database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api            (CTL)

[database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova                (CTL)

[api]
auth_strategy = keystone

[keystone_authtoken]
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = NOVA_PASS

[vnc]
enabled = true
server_listen = $my_ip
server_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html                   (CPT)

[libvirt]
virt_type = qemu                                                             (CPT)
cpu_mode = custom                                                            (CPT)
cpu_model = cortex-a72                                                       (CPT)

[glance]
api_servers = http://controller:9292

[oslo_concurrency]
lock_path = /var/lib/nova/tmp                                                (CTL)

[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = PLACEMENT_PASS

[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
service_metadata_proxy = true                                                (CTL)
metadata_proxy_shared_secret = METADATA_SECRET                               (CTL)
```

Explanation:

- [DEFAULT] section: enable the compute and metadata APIs, configure the RabbitMQ message queue entry point, configure my_ip, and enable the neutron network service;
- [api_database] and [database] sections: configure the database entry points;
- [api] and [keystone_authtoken] sections: configure the identity service entry point;
- [vnc] section: enable and configure the remote console entry point;
- [glance] section: configure the address of the image service API;
- [oslo_concurrency] section: configure the lock path;
- [placement] section: configure the entry point of the placement service.

Note:

- replace RABBIT_PASS with the password of the openstack account in RabbitMQ;
- set my_ip to the management IP address of the controller node;
- replace NOVA_DBPASS with the password of the nova database;
- replace NOVA_PASS with the password of the nova user;
- replace PLACEMENT_PASS with the password of the placement user;
- replace NEUTRON_PASS with the password of the neutron user;
- replace METADATA_SECRET with a suitable metadata proxy secret.

Additional: determine whether virtual machine hardware acceleration is supported (x86 architecture):

```
egrep -c '(vmx|svm)' /proc/cpuinfo                                           (CPT)
```

If the return value is 0, hardware acceleration is not supported, and libvirt must be configured to use QEMU instead of KVM:

```
vim /etc/nova/nova.conf                                                      (CPT)

[libvirt]
virt_type = qemu
```

If the return value is 1 or greater, hardware acceleration is supported and no extra configuration is needed.
[\"/usr/share/AAVMF/AAVMF_CODE.fd: \\ /usr/share/AAVMF/AAVMF_VARS.fd\", \\ \"/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw: \\ /usr/share/edk2/aarch64/vars-template-pflash.raw\"] vim /etc/qemu/firmware/edk2-aarch64.json { \"description\": \"UEFI firmware for ARM64 virtual machines\", \"interface-types\": [ \"uefi\" ], \"mapping\": { \"device\": \"flash\", \"executable\": { \"filename\": \"/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw\", \"format\": \"raw\" }, \"nvram-template\": { \"filename\": \"/usr/share/edk2/aarch64/vars-template-pflash.raw\", \"format\": \"raw\" } }, \"targets\": [ { \"architecture\": \"aarch64\", \"machines\": [ \"virt-*\" ] } ], \"features\": [ ], \"tags\": [ ] } (CPT) \u540c\u6b65\u6570\u636e\u5e93 \u540c\u6b65nova-api\u6570\u636e\u5e93\uff1a su -s /bin/sh -c \"nova-manage api_db sync\" nova (CTL) \u6ce8\u518ccell0\u6570\u636e\u5e93\uff1a su -s /bin/sh -c \"nova-manage cell_v2 map_cell0\" nova (CTL) \u521b\u5efacell1 cell\uff1a su -s /bin/sh -c \"nova-manage cell_v2 create_cell --name=cell1 --verbose\" nova (CTL) \u540c\u6b65nova\u6570\u636e\u5e93\uff1a su -s /bin/sh -c \"nova-manage db sync\" nova (CTL) \u9a8c\u8bc1cell0\u548ccell1\u6ce8\u518c\u6b63\u786e\uff1a su -s /bin/sh -c \"nova-manage cell_v2 list_cells\" nova (CTL) \u6dfb\u52a0\u8ba1\u7b97\u8282\u70b9\u5230openstack\u96c6\u7fa4 su -s /bin/sh -c \"nova-manage cell_v2 discover_hosts --verbose\" nova (CPT) \u542f\u52a8\u670d\u52a1 systemctl enable \\ (CTL) openstack-nova-api.service \\ openstack-nova-scheduler.service \\ openstack-nova-conductor.service \\ openstack-nova-novncproxy.service systemctl start \\ (CTL) openstack-nova-api.service \\ openstack-nova-scheduler.service \\ openstack-nova-conductor.service \\ openstack-nova-novncproxy.service systemctl enable libvirtd.service openstack-nova-compute.service (CPT) systemctl start libvirtd.service openstack-nova-compute.service (CPT) \u9a8c\u8bc1 source ~/.admin-openrc (CTL) \u5217\u51fa\u670d\u52a1\u7ec4\u4ef6\uff0c\u9a8c\u8bc1\u6bcf\u4e2a\u6d41\u7a0b\u90fd\u6210\u529f\u542f\u52a8\u548c\u6ce8\u518c\uff1a openstack compute service list (CTL) \u5217\u51fa\u8eab\u4efd\u670d\u52a1\u4e2d\u7684API\u7aef\u70b9\uff0c\u9a8c\u8bc1\u4e0e\u8eab\u4efd\u670d\u52a1\u7684\u8fde\u63a5\uff1a openstack catalog list (CTL) \u5217\u51fa\u955c\u50cf\u670d\u52a1\u4e2d\u7684\u955c\u50cf\uff0c\u9a8c\u8bc1\u4e0e\u955c\u50cf\u670d\u52a1\u7684\u8fde\u63a5\uff1a openstack image list (CTL) \u68c0\u67e5cells\u662f\u5426\u8fd0\u4f5c\u6210\u529f\uff0c\u4ee5\u53ca\u5176\u4ed6\u5fc5\u8981\u6761\u4ef6\u662f\u5426\u5df2\u5177\u5907\u3002 nova-status upgrade check (CTL) Neutron \u5b89\u88c5 \u00b6 \u521b\u5efa\u6570\u636e\u5e93\u3001\u670d\u52a1\u51ed\u8bc1\u548c API \u7aef\u70b9 \u521b\u5efa\u6570\u636e\u5e93\uff1a mysql -u root -p (CTL) MariaDB [(none)]> CREATE DATABASE neutron; MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \\ IDENTIFIED BY 'NEUTRON_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \\ IDENTIFIED BY 'NEUTRON_DBPASS'; MariaDB [(none)]> exit \u6ce8\u610f \u66ff\u6362 NEUTRON_DBPASS \u4e3a neutron \u6570\u636e\u5e93\u8bbe\u7f6e\u5bc6\u7801\u3002 source ~/.admin-openrc (CTL) \u521b\u5efaneutron\u670d\u52a1\u51ed\u8bc1 openstack user create --domain default --password-prompt neutron (CTL) openstack role add --project service --user neutron admin (CTL) openstack service create --name neutron --description \"OpenStack Networking\" network (CTL) \u521b\u5efaNeutron\u670d\u52a1API\u7aef\u70b9\uff1a openstack endpoint create 
### Neutron Installation

Create the database, service credentials, and API endpoints.

Create the database:

```
mysql -u root -p                                                             (CTL)

MariaDB [(none)]> CREATE DATABASE neutron;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
IDENTIFIED BY 'NEUTRON_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
IDENTIFIED BY 'NEUTRON_DBPASS';
MariaDB [(none)]> exit
```

Note: replace NEUTRON_DBPASS with the password you want for the neutron database.

```
source ~/.admin-openrc                                                       (CTL)
```

Create the neutron service credentials:

```
openstack user create --domain default --password-prompt neutron            (CTL)
openstack role add --project service --user neutron admin                   (CTL)
openstack service create --name neutron --description "OpenStack Networking" network   (CTL)
```

Create the Neutron service API endpoints:

```
openstack endpoint create --region RegionOne network public http://controller:9696     (CTL)
openstack endpoint create --region RegionOne network internal http://controller:9696   (CTL)
openstack endpoint create --region RegionOne network admin http://controller:9696      (CTL)
```

Install the packages:

```
yum install openstack-neutron openstack-neutron-linuxbridge ebtables ipset \  (CTL)
            openstack-neutron-ml2

yum install openstack-neutron-linuxbridge ebtables ipset                     (CPT)
```

Configure neutron.

Main configuration:

```
vim /etc/neutron/neutron.conf

[database]
connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron       (CTL)

[DEFAULT]
core_plugin = ml2                                                            (CTL)
service_plugins = router                                                     (CTL)
allow_overlapping_ips = true                                                 (CTL)
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = true                                    (CTL)
notify_nova_on_port_data_changes = true                                      (CTL)
api_workers = 3                                                              (CTL)

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = neutron
password = NEUTRON_PASS

[nova]
auth_url = http://controller:5000                                            (CTL)
auth_type = password                                                         (CTL)
project_domain_name = Default                                                (CTL)
user_domain_name = Default                                                   (CTL)
region_name = RegionOne                                                      (CTL)
project_name = service                                                       (CTL)
username = nova                                                              (CTL)
password = NOVA_PASS                                                         (CTL)

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
```

Explanation:

- [database] section: configure the database entry point;
- [DEFAULT] section: enable the ml2 and router plugins, allow overlapping IP addresses, and configure the RabbitMQ message queue entry point;
- [DEFAULT] and [keystone_authtoken] sections: configure the identity service entry point;
- [DEFAULT] and [nova] sections: configure networking to notify compute of network topology changes;
- [oslo_concurrency] section: configure the lock path.

Note:

- replace NEUTRON_DBPASS with the password of the neutron database;
- replace RABBIT_PASS with the password of the openstack account in RabbitMQ;
- replace NEUTRON_PASS with the password of the neutron user;
- replace NOVA_PASS with the password of the nova user.

Configure the ML2 plugin:

```
vim /etc/neutron/plugins/ml2/ml2_conf.ini

[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security

[ml2_type_flat]
flat_networks = provider

[ml2_type_vxlan]
vni_ranges = 1:1000

[securitygroup]
enable_ipset = true
```

Create the symbolic link /etc/neutron/plugin.ini:

```
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
```

Note:

- [ml2] section: enable flat, vlan, and vxlan networks, enable the linuxbridge and l2population mechanisms, and enable the port security extension driver;
- [ml2_type_flat] section: configure the flat network as the provider virtual network;
- [ml2_type_vxlan] section: configure the VXLAN network identifier range;
- [securitygroup] section: allow ipset.
Supplement: the specific L2 configuration can be adapted to your needs; this document uses provider network + linuxbridge.

Configure the Linux bridge agent:

```
vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini

[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME

[vxlan]
enable_vxlan = true
local_ip = OVERLAY_INTERFACE_IP_ADDRESS
l2_population = true

[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
```

Explanation:

- [linux_bridge] section: map the provider virtual network to a physical network interface;
- [vxlan] section: enable the vxlan overlay network, configure the IP address of the physical interface that handles the overlay network, and enable layer-2 population;
- [securitygroup] section: allow security groups and configure the linux bridge iptables firewall driver.

Note: replace PROVIDER_INTERFACE_NAME with the physical network interface; replace OVERLAY_INTERFACE_IP_ADDRESS with the management IP address of the controller node.

Configure the Layer-3 agent:

```
vim /etc/neutron/l3_agent.ini                                                (CTL)

[DEFAULT]
interface_driver = linuxbridge
```

Explanation: in the [DEFAULT] section, set the interface driver to linuxbridge.

Configure the DHCP agent:

```
vim /etc/neutron/dhcp_agent.ini                                              (CTL)

[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
```

Explanation: in the [DEFAULT] section, configure the linuxbridge interface driver and the Dnsmasq DHCP driver, and enable isolated metadata.

Configure the metadata agent:

```
vim /etc/neutron/metadata_agent.ini                                          (CTL)

[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = METADATA_SECRET
```

Explanation: in the [DEFAULT] section, configure the metadata host and the shared secret.

Note: replace METADATA_SECRET with a suitable metadata proxy secret.

Configure nova:

```
vim /etc/nova/nova.conf

[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = Default
user_domain_name = Default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
service_metadata_proxy = true                                                (CTL)
metadata_proxy_shared_secret = METADATA_SECRET                               (CTL)
```

Explanation: in the [neutron] section, configure the access parameters, enable the metadata proxy, and configure the secret.

Note: replace NEUTRON_PASS with the password of the neutron user; replace METADATA_SECRET with a suitable metadata proxy secret.

Synchronize the database:

```
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
  --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
```

Restart the compute API service:

```
systemctl restart openstack-nova-api.service
```
Start the network services:

```
systemctl enable neutron-server.service neutron-linuxbridge-agent.service \  (CTL)
                 neutron-dhcp-agent.service neutron-metadata-agent.service
systemctl enable neutron-l3-agent.service
systemctl restart openstack-nova-api.service neutron-server.service \        (CTL)
                  neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
                  neutron-metadata-agent.service neutron-l3-agent.service

systemctl enable neutron-linuxbridge-agent.service                           (CPT)
systemctl restart neutron-linuxbridge-agent.service openstack-nova-compute.service   (CPT)
```

Verification.

Verify that the neutron agents started successfully:

```
openstack network agent list
```
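With the agents up, you would typically create a first network before booting instances. The commands below are only an illustrative sketch: the network and subnet names and the address range are placeholders, and the physical network name must match the flat_networks / physical_interface_mappings settings configured above.

```bash
source ~/.admin-openrc
openstack network create --external --share \
  --provider-physical-network provider --provider-network-type flat provider-net
openstack subnet create --network provider-net --subnet-range 192.0.2.0/24 \
  --gateway 192.0.2.1 --allocation-pool start=192.0.2.100,end=192.0.2.200 provider-subnet
```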
filter = [ \"a/vdb/\", \"r/.*/\"] \u89e3\u91ca \u5728devices\u90e8\u5206\uff0c\u6dfb\u52a0\u8fc7\u6ee4\u4ee5\u63a5\u53d7/dev/vdb\u8bbe\u5907\u62d2\u7edd\u5176\u4ed6\u8bbe\u5907\u3002 \u51c6\u5907NFS mkdir -p /root/cinder/backup cat << EOF >> /etc/export /root/cinder/backup 192.168.1.0/24(rw,sync,no_root_squash,no_all_squash) EOF \u914d\u7f6ecinder\u76f8\u5173\u914d\u7f6e\uff1a vim /etc/cinder/cinder.conf [DEFAULT] transport_url = rabbit://openstack:RABBIT_PASS@controller auth_strategy = keystone my_ip = 10.0.0.11 enabled_backends = lvm (STG) backup_driver=cinder.backup.drivers.nfs.NFSBackupDriver (STG) backup_share=HOST:PATH (STG) [database] connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder [keystone_authtoken] www_authenticate_uri = http://controller:5000 auth_url = http://controller:5000 memcached_servers = controller:11211 auth_type = password project_domain_name = Default user_domain_name = Default project_name = service username = cinder password = CINDER_PASS [oslo_concurrency] lock_path = /var/lib/cinder/tmp [lvm] volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver (STG) volume_group = cinder-volumes (STG) iscsi_protocol = iscsi (STG) iscsi_helper = tgtadm (STG) \u89e3\u91ca [database]\u90e8\u5206\uff0c\u914d\u7f6e\u6570\u636e\u5e93\u5165\u53e3\uff1b [DEFAULT]\u90e8\u5206\uff0c\u914d\u7f6eRabbitMQ\u6d88\u606f\u961f\u5217\u5165\u53e3\uff0c\u914d\u7f6emy_ip\uff1b [DEFAULT] [keystone_authtoken]\u90e8\u5206\uff0c\u914d\u7f6e\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u5165\u53e3\uff1b [oslo_concurrency]\u90e8\u5206\uff0c\u914d\u7f6elock path\u3002 \u6ce8\u610f \u66ff\u6362 CINDER_DBPASS \u4e3a cinder \u6570\u636e\u5e93\u7684\u5bc6\u7801\uff1b \u66ff\u6362 RABBIT_PASS \u4e3a RabbitMQ \u4e2d openstack \u8d26\u6237\u7684\u5bc6\u7801\uff1b \u914d\u7f6e my_ip \u4e3a\u63a7\u5236\u8282\u70b9\u7684\u7ba1\u7406 IP \u5730\u5740\uff1b \u66ff\u6362 CINDER_PASS \u4e3a cinder \u7528\u6237\u7684\u5bc6\u7801\uff1b \u66ff\u6362 HOST:PATH \u4e3a NFS \u7684HOSTIP\u548c\u5171\u4eab\u8def\u5f84\uff1b \u540c\u6b65\u6570\u636e\u5e93\uff1a su -s /bin/sh -c \"cinder-manage db sync\" cinder (CTL) \u914d\u7f6enova\uff1a vim /etc/nova/nova.conf (CTL) [cinder] os_region_name = RegionOne \u91cd\u542f\u8ba1\u7b97API\u670d\u52a1 systemctl restart openstack-nova-api.service \u542f\u52a8cinder\u670d\u52a1 systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service (CTL) systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service (CTL) systemctl enable rpcbind.service nfs-server.service tgtd.service iscsid.service \\ (STG) openstack-cinder-volume.service \\ openstack-cinder-backup.service systemctl start rpcbind.service nfs-server.service tgtd.service iscsid.service \\ (STG) openstack-cinder-volume.service \\ openstack-cinder-backup.service \u6ce8\u610f \u5f53cinder\u4f7f\u7528tgtadm\u7684\u65b9\u5f0f\u6302\u5377\u7684\u65f6\u5019\uff0c\u8981\u4fee\u6539/etc/tgt/tgtd.conf\uff0c\u5185\u5bb9\u5982\u4e0b\uff0c\u4fdd\u8bc1tgtd\u53ef\u4ee5\u53d1\u73b0cinder-volume\u7684iscsi target\u3002 include /var/lib/cinder/volumes/* \u9a8c\u8bc1 source ~/.admin-openrc openstack volume service list horizon \u5b89\u88c5 \u00b6 \u5b89\u88c5\u8f6f\u4ef6\u5305 yum install openstack-dashboard \u4fee\u6539\u6587\u4ef6 \u4fee\u6539\u53d8\u91cf vim /etc/openstack-dashboard/local_settings OPENSTACK_HOST = \"controller\" ALLOWED_HOSTS = ['*', ] SESSION_ENGINE = 'django.contrib.sessions.backends.cache' CACHES = { 'default': { 'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache', 
### Horizon Installation

Install the packages:

```
yum install openstack-dashboard
```

Modify the file /etc/openstack-dashboard/local_settings, changing the following variables:

```
vim /etc/openstack-dashboard/local_settings

OPENSTACK_HOST = "controller"
ALLOWED_HOSTS = ['*', ]

SESSION_ENGINE = 'django.contrib.sessions.backends.cache'

CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'controller:11211',
    }
}

OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "member"

WEBROOT = '/dashboard'
POLICY_FILES_PATH = "/etc/openstack-dashboard"

OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 3,
}
```

Restart the httpd service:

```
systemctl restart httpd.service memcached.service
```

Verification: open a browser, go to http://HOSTIP/dashboard/, and log in to horizon.

Note: replace HOSTIP with the management-plane IP address of the controller node.

### Tempest Installation

Tempest is the OpenStack integration test service. It is recommended if you need comprehensive, automated functional testing of the installed OpenStack environment; otherwise it can be skipped.

Install Tempest:

```
yum install openstack-tempest
```

Initialize a working directory:

```
tempest init mytest
```

Edit the configuration file:

```
cd mytest
vi etc/tempest.conf
```

tempest.conf must describe the current OpenStack environment; refer to the official example for the details.

Run the tests:

```
tempest run
```

Install the Tempest plugins (optional).

The OpenStack services also ship their own Tempest test packages, which can be installed to extend the test coverage. For Wallaby we provide extension tests for Cinder, Glance, Keystone, Ironic, and Trove; install them with:

```
yum install python3-cinder-tempest-plugin python3-glance-tempest-plugin python3-ironic-tempest-plugin python3-keystone-tempest-plugin python3-trove-tempest-plugin
```
### Ironic Installation

Ironic is the OpenStack bare metal service. It is recommended if you need bare metal provisioning; otherwise it can be skipped.

Set up the database.

The bare metal service stores its information in a database. Create an ironic database that the ironic user can access, replacing IRONIC_DBPASSWORD with a suitable password:

```
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE ironic CHARACTER SET utf8;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'localhost' \
IDENTIFIED BY 'IRONIC_DBPASSWORD';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'%' \
IDENTIFIED BY 'IRONIC_DBPASSWORD';
```

Create the service user authentication.

1. Create the Bare Metal service users:

```
openstack user create --password IRONIC_PASSWORD \
  --email ironic@example.com ironic
openstack role add --project service --user ironic admin
openstack service create --name ironic --description "Ironic baremetal provisioning service" baremetal
openstack service create --name ironic-inspector --description "Ironic inspector baremetal provisioning service" baremetal-introspection
openstack user create --password IRONIC_INSPECTOR_PASSWORD \
  --email ironic_inspector@example.com ironic_inspector
openstack role add --project service --user ironic_inspector admin
```

2. Create the Bare Metal service access entries:

```
openstack endpoint create --region RegionOne baremetal admin http://$IRONIC_NODE:6385
openstack endpoint create --region RegionOne baremetal public http://$IRONIC_NODE:6385
openstack endpoint create --region RegionOne baremetal internal http://$IRONIC_NODE:6385
openstack endpoint create --region RegionOne baremetal-introspection internal http://172.20.19.13:5050/v1
openstack endpoint create --region RegionOne baremetal-introspection public http://172.20.19.13:5050/v1
openstack endpoint create --region RegionOne baremetal-introspection admin http://172.20.19.13:5050/v1
```

Configure the ironic-api service.

The configuration file path is /etc/ironic/ironic.conf.

1. Configure the location of the database via the connection option, as shown below, replacing IRONIC_DBPASSWORD with the password of the ironic user and DB_IP with the IP address of the DB server:

```
[database]

# The SQLAlchemy connection string used to connect to the
# database (string value)
connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic
```

2. Configure the ironic-api service to use the RabbitMQ message broker via the following options, replacing RPC_* with the RabbitMQ address details and credentials:

```
[DEFAULT]

# A URL representing the messaging driver to use and its full
# configuration. (string value)
transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
```

You can also replace rabbitmq with json-rpc if you prefer.
3. Configure the ironic-api service to use the credentials of the identity service, replacing PUBLIC_IDENTITY_IP with the public IP of the identity server, PRIVATE_IDENTITY_IP with its private IP, and IRONIC_PASSWORD with the password of the ironic user in the identity service:

```
[DEFAULT]

# Authentication strategy used by ironic-api: one of
# "keystone" or "noauth". "noauth" should not be used in a
# production environment because all authentication will be
# disabled. (string value)
auth_strategy=keystone

host = controller
memcache_servers = controller:11211
enabled_network_interfaces = flat,noop,neutron
default_network_interface = noop
transport_url = rabbit://openstack:RABBITPASSWD@controller:5672/
enabled_hardware_types = ipmi
enabled_boot_interfaces = pxe
enabled_deploy_interfaces = direct
default_deploy_interface = direct
enabled_inspect_interfaces = inspector
enabled_management_interfaces = ipmitool
enabled_power_interfaces = ipmitool
enabled_rescue_interfaces = no-rescue,agent
isolinux_bin = /usr/share/syslinux/isolinux.bin
logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s

[keystone_authtoken]
# Authentication type to load (string value)
auth_type=password
# Complete public Identity API endpoint (string value)
www_authenticate_uri=http://PUBLIC_IDENTITY_IP:5000
# Complete admin Identity API endpoint. (string value)
auth_url=http://PRIVATE_IDENTITY_IP:5000
# Service username. (string value)
username=ironic
# Service account password. (string value)
password=IRONIC_PASSWORD
# Service tenant name. (string value)
project_name=service
# Domain name containing project (string value)
project_domain_name=Default
# User's domain name (string value)
user_domain_name=Default

[agent]
deploy_logs_collect = always
deploy_logs_local_path = /var/log/ironic/deploy
deploy_logs_storage_backend = local
image_download_source = http
stream_raw_images = false
force_raw_images = false
verify_ca = False

[oslo_concurrency]

[oslo_messaging_notifications]
transport_url = rabbit://openstack:123456@172.20.19.25:5672/
topics = notifications
driver = messagingv2

[oslo_messaging_rabbit]
amqp_durable_queues = True
rabbit_ha_queues = True

[pxe]
ipxe_enabled = false
pxe_append_params = nofb nomodeset vga=normal coreos.autologin ipa-insecure=1
image_cache_size = 204800
tftp_root=/var/lib/tftpboot/cephfs/
tftp_master_path=/var/lib/tftpboot/cephfs/master_images

[dhcp]
dhcp_provider = none
```

4. Create the bare metal service database tables:

```
ironic-dbsync --config-file /etc/ironic/ironic.conf create_schema
```

5. Restart the ironic-api service:

```
sudo systemctl restart openstack-ironic-api
```

Configure the ironic-conductor service.

1. Replace HOST_IP with the IP of the conductor host:

```
[DEFAULT]

# IP address of this host. If unset, will determine the IP
# programmatically. If unable to do so, will use "127.0.0.1".
# (string value)
my_ip=HOST_IP
```

2. Configure the location of the database. ironic-conductor should use the same configuration as ironic-api; replace IRONIC_DBPASSWORD with the password of the ironic user and DB_IP with the IP address of the DB server:

```
[database]

# The SQLAlchemy connection string to use to connect to the
# database. (string value)
connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic
```
(string value) transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/ \u7528\u6237\u4e5f\u53ef\u81ea\u884c\u4f7f\u7528json-rpc\u65b9\u5f0f\u66ff\u6362rabbitmq 4\u3001\u914d\u7f6e\u51ed\u8bc1\u8bbf\u95ee\u5176\u4ed6OpenStack\u670d\u52a1 \u4e3a\u4e86\u4e0e\u5176\u4ed6OpenStack\u670d\u52a1\u8fdb\u884c\u901a\u4fe1\uff0c\u88f8\u91d1\u5c5e\u670d\u52a1\u5728\u8bf7\u6c42\u5176\u4ed6\u670d\u52a1\u65f6\u9700\u8981\u4f7f\u7528\u670d\u52a1\u7528\u6237\u4e0eOpenStack Identity\u670d\u52a1\u8fdb\u884c\u8ba4\u8bc1\u3002\u8fd9\u4e9b\u7528\u6237\u7684\u51ed\u636e\u5fc5\u987b\u5728\u4e0e\u76f8\u5e94\u670d\u52a1\u76f8\u5173\u7684\u6bcf\u4e2a\u914d\u7f6e\u6587\u4ef6\u4e2d\u8fdb\u884c\u914d\u7f6e\u3002 [neutron] - \u8bbf\u95eeOpenStack\u7f51\u7edc\u670d\u52a1 [glance] - \u8bbf\u95eeOpenStack\u955c\u50cf\u670d\u52a1 [swift] - \u8bbf\u95eeOpenStack\u5bf9\u8c61\u5b58\u50a8\u670d\u52a1 [cinder] - \u8bbf\u95eeOpenStack\u5757\u5b58\u50a8\u670d\u52a1 [inspector] - \u8bbf\u95eeOpenStack\u88f8\u91d1\u5c5eintrospection\u670d\u52a1 [service_catalog] - \u4e00\u4e2a\u7279\u6b8a\u9879\u7528\u4e8e\u4fdd\u5b58\u88f8\u91d1\u5c5e\u670d\u52a1\u4f7f\u7528\u7684\u51ed\u8bc1\uff0c\u8be5\u51ed\u8bc1\u7528\u4e8e\u53d1\u73b0\u6ce8\u518c\u5728OpenStack\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u76ee\u5f55\u4e2d\u7684\u81ea\u5df1\u7684API URL\u7aef\u70b9 \u7b80\u5355\u8d77\u89c1\uff0c\u53ef\u4ee5\u5bf9\u6240\u6709\u670d\u52a1\u4f7f\u7528\u540c\u4e00\u4e2a\u670d\u52a1\u7528\u6237\u3002\u4e3a\u4e86\u5411\u540e\u517c\u5bb9\uff0c\u8be5\u7528\u6237\u5e94\u8be5\u548cironic-api\u670d\u52a1\u7684[keystone_authtoken]\u6240\u914d\u7f6e\u7684\u4e3a\u540c\u4e00\u4e2a\u7528\u6237\u3002\u4f46\u8fd9\u4e0d\u662f\u5fc5\u987b\u7684\uff0c\u4e5f\u53ef\u4ee5\u4e3a\u6bcf\u4e2a\u670d\u52a1\u521b\u5efa\u5e76\u914d\u7f6e\u4e0d\u540c\u7684\u670d\u52a1\u7528\u6237\u3002 \u5728\u4e0b\u9762\u7684\u793a\u4f8b\u4e2d\uff0c\u7528\u6237\u8bbf\u95eeOpenStack\u7f51\u7edc\u670d\u52a1\u7684\u8eab\u4efd\u9a8c\u8bc1\u4fe1\u606f\u914d\u7f6e\u4e3a\uff1a \u7f51\u7edc\u670d\u52a1\u90e8\u7f72\u5728\u540d\u4e3aRegionOne\u7684\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u57df\u4e2d\uff0c\u4ec5\u5728\u670d\u52a1\u76ee\u5f55\u4e2d\u6ce8\u518c\u516c\u5171\u7aef\u70b9\u63a5\u53e3 \u8bf7\u6c42\u65f6\u4f7f\u7528\u7279\u5b9a\u7684CA SSL\u8bc1\u4e66\u8fdb\u884cHTTPS\u8fde\u63a5 \u4e0eironic-api\u670d\u52a1\u914d\u7f6e\u76f8\u540c\u7684\u670d\u52a1\u7528\u6237 \u52a8\u6001\u5bc6\u7801\u8ba4\u8bc1\u63d2\u4ef6\u57fa\u4e8e\u5176\u4ed6\u9009\u9879\u53d1\u73b0\u5408\u9002\u7684\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1API\u7248\u672c [neutron] # Authentication type to load (string value) auth_type = password # Authentication URL (string value) auth_url=https://IDENTITY_IP:5000/ # Username (string value) username=ironic # User's password (string value) password=IRONIC_PASSWORD # Project name to scope to (string value) project_name=service # Domain ID containing project (string value) project_domain_id=default # User's domain id (string value) user_domain_id=default # PEM encoded Certificate Authority to use when verifying # HTTPs connections. (string value) cafile=/opt/stack/data/ca-bundle.pem # The default region_name for endpoint URL discovery. (string # value) region_name = RegionOne # List of interfaces, in order of preference, for endpoint # URL. 
(list value) valid_interfaces=public \u9ed8\u8ba4\u60c5\u51b5\u4e0b\uff0c\u4e3a\u4e86\u4e0e\u5176\u4ed6\u670d\u52a1\u8fdb\u884c\u901a\u4fe1\uff0c\u88f8\u91d1\u5c5e\u670d\u52a1\u4f1a\u5c1d\u8bd5\u901a\u8fc7\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u7684\u670d\u52a1\u76ee\u5f55\u53d1\u73b0\u8be5\u670d\u52a1\u5408\u9002\u7684\u7aef\u70b9\u3002\u5982\u679c\u5e0c\u671b\u5bf9\u4e00\u4e2a\u7279\u5b9a\u670d\u52a1\u4f7f\u7528\u4e00\u4e2a\u4e0d\u540c\u7684\u7aef\u70b9\uff0c\u5219\u5728\u88f8\u91d1\u5c5e\u670d\u52a1\u7684\u914d\u7f6e\u6587\u4ef6\u4e2d\u901a\u8fc7endpoint_override\u9009\u9879\u8fdb\u884c\u6307\u5b9a\uff1a [neutron] ... endpoint_override = 5\u3001\u914d\u7f6e\u5141\u8bb8\u7684\u9a71\u52a8\u7a0b\u5e8f\u548c\u786c\u4ef6\u7c7b\u578b \u901a\u8fc7\u8bbe\u7f6eenabled_hardware_types\u8bbe\u7f6eironic-conductor\u670d\u52a1\u5141\u8bb8\u4f7f\u7528\u7684\u786c\u4ef6\u7c7b\u578b\uff1a [DEFAULT] enabled_hardware_types = ipmi \u914d\u7f6e\u786c\u4ef6\u63a5\u53e3\uff1a enabled_boot_interfaces = pxe enabled_deploy_interfaces = direct,iscsi enabled_inspect_interfaces = inspector enabled_management_interfaces = ipmitool enabled_power_interfaces = ipmitool \u914d\u7f6e\u63a5\u53e3\u9ed8\u8ba4\u503c\uff1a [DEFAULT] default_deploy_interface = direct default_network_interface = neutron \u5982\u679c\u542f\u7528\u4e86\u4efb\u4f55\u4f7f\u7528Direct deploy\u7684\u9a71\u52a8\uff0c\u5fc5\u987b\u5b89\u88c5\u548c\u914d\u7f6e\u955c\u50cf\u670d\u52a1\u7684Swift\u540e\u7aef\u3002Ceph\u5bf9\u8c61\u7f51\u5173(RADOS\u7f51\u5173)\u4e5f\u652f\u6301\u4f5c\u4e3a\u955c\u50cf\u670d\u52a1\u7684\u540e\u7aef\u3002 6\u3001\u91cd\u542fironic-conductor\u670d\u52a1 sudo systemctl restart openstack-ironic-conductor \u914d\u7f6eironic-inspector\u670d\u52a1 \u914d\u7f6e\u6587\u4ef6\u8def\u5f84/etc/ironic-inspector/inspector.conf 1\u3001\u521b\u5efa\u6570\u636e\u5e93 # mysql -u root -p MariaDB [(none)]> CREATE DATABASE ironic_inspector CHARACTER SET utf8; MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic_inspector.* TO 'ironic_inspector'@'localhost' \\ IDENTIFIED BY 'IRONIC_INSPECTOR_DBPASSWORD'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic_inspector.* TO 'ironic_inspector'@'%' \\ IDENTIFIED BY 'IRONIC_INSPECTOR_DBPASSWORD'; 2\u3001\u901a\u8fc7 connection \u9009\u9879\u914d\u7f6e\u6570\u636e\u5e93\u7684\u4f4d\u7f6e\uff0c\u5982\u4e0b\u6240\u793a\uff0c\u66ff\u6362 IRONIC_INSPECTOR_DBPASSWORD \u4e3a ironic_inspector \u7528\u6237\u7684\u5bc6\u7801\uff0c\u66ff\u6362 DB_IP \u4e3aDB\u670d\u52a1\u5668\u6240\u5728\u7684IP\u5730\u5740\uff1a [database] backend = sqlalchemy connection = mysql+pymysql://ironic_inspector:IRONIC_INSPECTOR_DBPASSWORD@DB_IP/ironic_inspector min_pool_size = 100 max_pool_size = 500 pool_timeout = 30 max_retries = 5 max_overflow = 200 db_retry_interval = 2 db_inc_retry_interval = True db_max_retry_interval = 2 db_max_retries = 5 3\u3001\u914d\u7f6e\u6d88\u606f\u5ea6\u5217\u901a\u4fe1\u5730\u5740 [DEFAULT] transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/ 4\u3001\u8bbe\u7f6ekeystone\u8ba4\u8bc1 [DEFAULT] auth_strategy = keystone timeout = 900 rootwrap_config = /etc/ironic-inspector/rootwrap.conf logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_dir = /var/log/ironic-inspector state_path = /var/lib/ironic-inspector use_stderr = False [ironic] api_endpoint = http://IRONIC_API_HOST_ADDRRESS:6385 auth_type = password auth_url = http://PUBLIC_IDENTITY_IP:5000 auth_strategy = keystone 
ironic_url = http://IRONIC_API_HOST_ADDRRESS:6385 os_region = RegionOne project_name = service project_domain_name = Default user_domain_name = Default username = IRONIC_SERVICE_USER_NAME password = IRONIC_SERVICE_USER_PASSWORD [keystone_authtoken] auth_type = password auth_url = http://control:5000 www_authenticate_uri = http://control:5000 project_domain_name = default user_domain_name = default project_name = service username = ironic_inspector password = IRONICPASSWD region_name = RegionOne memcache_servers = control:11211 token_cache_time = 300 [processing] add_ports = active processing_hooks = $default_processing_hooks,local_link_connection,lldp_basic ramdisk_logs_dir = /var/log/ironic-inspector/ramdisk always_store_ramdisk_logs = true store_data =none power_off = false [pxe_filter] driver = iptables [capabilities] boot_mode=True 5\u3001\u914d\u7f6eironic inspector dnsmasq\u670d\u52a1 # \u914d\u7f6e\u6587\u4ef6\u5730\u5740\uff1a/etc/ironic-inspector/dnsmasq.conf port=0 interface=enp3s0 #\u66ff\u6362\u4e3a\u5b9e\u9645\u76d1\u542c\u7f51\u7edc\u63a5\u53e3 dhcp-range=172.20.19.100,172.20.19.110 #\u66ff\u6362\u4e3a\u5b9e\u9645dhcp\u5730\u5740\u8303\u56f4 bind-interfaces enable-tftp dhcp-match=set:efi,option:client-arch,7 dhcp-match=set:efi,option:client-arch,9 dhcp-match=aarch64, option:client-arch,11 dhcp-boot=tag:aarch64,grubaa64.efi dhcp-boot=tag:!aarch64,tag:efi,grubx64.efi dhcp-boot=tag:!aarch64,tag:!efi,pxelinux.0 tftp-root=/tftpboot #\u66ff\u6362\u4e3a\u5b9e\u9645tftpboot\u76ee\u5f55 log-facility=/var/log/dnsmasq.log 6\u3001\u5173\u95edironic provision\u7f51\u7edc\u5b50\u7f51\u7684dhcp openstack subnet set --no-dhcp 72426e89-f552-4dc4-9ac7-c4e131ce7f3c 7\u3001\u521d\u59cb\u5316ironic-inspector\u670d\u52a1\u7684\u6570\u636e\u5e93 \u5728\u63a7\u5236\u8282\u70b9\u6267\u884c\uff1a ironic-inspector-dbsync --config-file /etc/ironic-inspector/inspector.conf upgrade 8\u3001\u542f\u52a8\u670d\u52a1 systemctl enable --now openstack-ironic-inspector.service systemctl enable --now openstack-ironic-inspector-dnsmasq.service 6.\u914d\u7f6ehttpd\u670d\u52a1 \u521b\u5efaironic\u8981\u4f7f\u7528\u7684httpd\u7684root\u76ee\u5f55\u5e76\u8bbe\u7f6e\u5c5e\u4e3b\u5c5e\u7ec4\uff0c\u76ee\u5f55\u8def\u5f84\u8981\u548c/etc/ironic/ironic.conf\u4e2d[deploy]\u7ec4\u4e2dhttp_root \u914d\u7f6e\u9879\u6307\u5b9a\u7684\u8def\u5f84\u8981\u4e00\u81f4\u3002 mkdir -p /var/lib/ironic/httproot ``chown ironic.ironic /var/lib/ironic/httproot \u5b89\u88c5\u548c\u914d\u7f6ehttpd\u670d\u52a1 \u5b89\u88c5httpd\u670d\u52a1\uff0c\u5df2\u6709\u8bf7\u5ffd\u7565 yum install httpd -y \u521b\u5efa/etc/httpd/conf.d/openstack-ironic-httpd.conf\u6587\u4ef6\uff0c\u5185\u5bb9\u5982\u4e0b\uff1a Listen 8080 ServerName ironic.openeuler.com ErrorLog \"/var/log/httpd/openstack-ironic-httpd-error_log\" CustomLog \"/var/log/httpd/openstack-ironic-httpd-access_log\" \"%h %l %u %t \\\"%r\\\" %>s %b\" DocumentRoot \"/var/lib/ironic/httproot\" Options Indexes FollowSymLinks Require all granted LogLevel warn AddDefaultCharset UTF-8 EnableSendfile on \u6ce8\u610f\u76d1\u542c\u7684\u7aef\u53e3\u8981\u548c/etc/ironic/ironic.conf\u91cc[deploy]\u9009\u9879\u4e2dhttp_url\u914d\u7f6e\u9879\u4e2d\u6307\u5b9a\u7684\u7aef\u53e3\u4e00\u81f4\u3002 \u91cd\u542fhttpd\u670d\u52a1\u3002 systemctl restart httpd 7.deploy ramdisk\u955c\u50cf\u5236\u4f5c 
The Wallaby ramdisk image can be built with the ironic-python-agent service or the disk-image-builder tool, or with the community's latest ironic-python-agent-builder; users are also free to choose another tool.

If you use the native Wallaby tools, install the corresponding package:

yum install openstack-ironic-python-agent

or

yum install diskimage-builder

For detailed usage, refer to the official documentation.

The following walks through the complete process of building the deploy image used by ironic with ironic-python-agent-builder.

Install ironic-python-agent-builder

1. Install the tool:

```shell
pip install ironic-python-agent-builder
```

2. Modify the python interpreter in the following files:

```shell
/usr/bin/yum /usr/libexec/urlgrabber-ext-down
```

3. Install the other required tools:

```shell
yum install git
```

Because `DIB` depends on the `semanage` command, make sure that command is available before building the image: `semanage --help`. If it reports that the command does not exist, simply install it:

```shell
# First find out which package needs to be installed
[root@localhost ~]# yum provides /usr/sbin/semanage
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirror.vcu.edu
 * extras: mirror.vcu.edu
 * updates: mirror.math.princeton.edu
policycoreutils-python-2.5-34.el7.aarch64 : SELinux policy core python utilities
Repo        : base
Matched from:
Filename    : /usr/sbin/semanage

# Install it
[root@localhost ~]# yum install policycoreutils-python
```

Build the image

On the `arm` architecture, additionally export:

```shell
export ARCH=aarch64
```

Basic usage:

```shell
usage: ironic-python-agent-builder [-h] [-r RELEASE] [-o OUTPUT] [-e ELEMENT]
                                   [-b BRANCH] [-v] [--extra-args EXTRA_ARGS]
                                   distribution

positional arguments:
  distribution          Distribution to use

optional arguments:
  -h, --help            show this help message and exit
  -r RELEASE, --release RELEASE
                        Distribution release to use
  -o OUTPUT, --output OUTPUT
                        Output base file name
  -e ELEMENT, --element ELEMENT
                        Additional DIB element to use
  -b BRANCH, --branch BRANCH
                        If set, override the branch that is used for ironic-
                        python-agent and requirements
  -v, --verbose         Enable verbose logging in diskimage-builder
  --extra-args EXTRA_ARGS
                        Extra arguments to pass to diskimage-builder
```

Example:

```shell
ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky
```

Allow SSH login

Initialize the environment variables, then build the image:

```shell
export DIB_DEV_USER_USERNAME=ipa
export DIB_DEV_USER_PWDLESS_SUDO=yes
export DIB_DEV_USER_PASSWORD='123'
ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky -e selinux-permissive -e devuser
```

Specify the code repository

Initialize the corresponding environment variables, then build the image:

```shell
# Specify the repository address and ref
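# (The DIB_REPOLOCATION_*/DIB_REPOREF_* variables are read by diskimage-builder's
# source-repositories element; the repository URL and ref shown below are examples
# only -- replace them with the repository and branch you actually want to build from.)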
DIB_REPOLOCATION_ironic_python_agent=git@172.20.2.149:liuzz/ironic-python-agent.git
DIB_REPOREF_ironic_python_agent=origin/develop

# Or clone the code directly from gerrit
DIB_REPOLOCATION_ironic_python_agent=https://review.opendev.org/openstack/ironic-python-agent
DIB_REPOREF_ironic_python_agent=refs/changes/43/701043/1
```

Reference: [source-repositories](https://docs.openstack.org/diskimage-builder/latest/elements/source-repositories/README.html).

Specifying the repository address and ref in this way has been verified to work.

Note

The PXE configuration file template in native OpenStack does not support the arm64 architecture; you need to modify the native OpenStack code yourself:

In the Wallaby release the community ironic still does not support UEFI PXE boot on arm64. The symptom is that the generated grub.cfg file (usually located under /tftpboot/) has the wrong format, so PXE boot fails.

The incorrectly generated configuration file:

![ironic-err](../../img/install/ironic-err.png)

As shown in the figure above, on the arm architecture the commands that load the vmlinux and ramdisk images are linux and initrd respectively; the commands highlighted in red in the figure are the x86 UEFI PXE boot form.

You need to modify the code that generates grub.cfg yourself.

TLS errors when ironic sends command-status query requests to IPA:

In the Wallaby release, IPA and ironic both send requests to each other with TLS verification enabled by default. Disable it as described in the official documentation:

1. Modify the ironic configuration file (/etc/ironic/ironic.conf) and add ipa-insecure=1 to the following configuration:

```
[agent]
verify_ca = False
[pxe]
pxe_append_params = nofb nomodeset vga=normal coreos.autologin ipa-insecure=1
```

2. In the ramdisk image, add the IPA configuration file /etc/ironic_python_agent/ironic_python_agent.conf and configure TLS as follows (the /etc/ironic_python_agent directory must be created first):

```
[DEFAULT]
enable_auto_tls = False
```

Set the permissions:

```
chown -R ipa.ipa /etc/ironic_python_agent/
```

3.
Modify the systemd unit file of the IPA service to add the configuration-file option:

vim /usr/lib/systemd/system/ironic-python-agent.service

```
[Unit]
Description=Ironic Python Agent
After=network-online.target

[Service]
ExecStartPre=/sbin/modprobe vfat
ExecStart=/usr/local/bin/ironic-python-agent --config-file /etc/ironic_python_agent/ironic_python_agent.conf
Restart=always
RestartSec=30s

[Install]
WantedBy=multi-user.target
```

Kolla Installation ¶

Kolla provides production-ready containerized deployment for OpenStack services. openEuler 24.03 LTS SP1 introduces the Kolla and Kolla-ansible services.

Installing Kolla is very simple: just install the corresponding RPM packages.

yum install openstack-kolla openstack-kolla-ansible

After installation, the kolla-ansible, kolla-build, kolla-genpwd, kolla-mergepwd and related commands are available.

Trove Installation ¶

Trove is the OpenStack Database service. It is recommended if you want to use the database service provided by OpenStack; otherwise it does not need to be installed.

1. Set up the database

The Database service stores information in a database. Create a trove database that the trove user can access, replacing TROVE_DBPASSWORD with a suitable password:

mysql -u root -p
MariaDB [(none)]> CREATE DATABASE trove CHARACTER SET utf8;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'localhost' \
IDENTIFIED BY 'TROVE_DBPASSWORD';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'%' \
IDENTIFIED BY 'TROVE_DBPASSWORD';

2. Create the service user credentials

1. Create the Trove service user:

openstack user create --password TROVE_PASSWORD \
--email trove@example.com trove
openstack role add --project service --user trove admin
openstack service create --name trove --description "Database service" database

Explanation: replace TROVE_PASSWORD with the password of the trove user.

2. Create the Database service endpoints:

openstack endpoint create --region RegionOne database public http://controller:8779/v1.0/%\(tenant_id\)s
openstack endpoint create --region RegionOne database internal http://controller:8779/v1.0/%\(tenant_id\)s
openstack endpoint create --region RegionOne database admin http://controller:8779/v1.0/%\(tenant_id\)s

3. Install and configure the Trove components

1. Install the Trove packages:

yum install openstack-trove python-troveclient

2.
\u914d\u7f6e trove.conf vim /etc/trove/trove.conf [DEFAULT] bind_host=TROVE_NODE_IP log_dir = /var/log/trove network_driver = trove.network.neutron.NeutronDriver management_security_groups = nova_keypair = trove-mgmt default_datastore = mysql taskmanager_manager = trove.taskmanager.manager.Manager trove_api_workers = 5 transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/ reboot_time_out = 300 usage_timeout = 900 agent_call_high_timeout = 1200 use_syslog = False debug = True # Set these if using Neutron Networking network_driver=trove.network.neutron.NeutronDriver network_label_regex=.* transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/ [database] connection = mysql+pymysql://trove:TROVE_DBPASS@controller/trove [keystone_authtoken] project_domain_name = Default project_name = service user_domain_name = Default password = trove username = trove auth_url = http://controller:5000/v3/ auth_type = password [service_credentials] auth_url = http://controller:5000/v3/ region_name = RegionOne project_name = service password = trove project_domain_name = Default user_domain_name = Default username = trove [mariadb] tcp_ports = 3306,4444,4567,4568 [mysql] tcp_ports = 3306 [postgresql] tcp_ports = 5432 \u89e3\u91ca\uff1a [Default] \u5206\u7ec4\u4e2d bind_host \u914d\u7f6e\u4e3aTrove\u90e8\u7f72\u8282\u70b9\u7684IP nova_compute_url \u548c cinder_url \u4e3aNova\u548cCinder\u5728Keystone\u4e2d\u521b\u5efa\u7684endpoint nova_proxy_XXX \u4e3a\u4e00\u4e2a\u80fd\u8bbf\u95eeNova\u670d\u52a1\u7684\u7528\u6237\u4fe1\u606f\uff0c\u4e0a\u4f8b\u4e2d\u4f7f\u7528 admin \u7528\u6237\u4e3a\u4f8b transport_url \u4e3a RabbitMQ \u8fde\u63a5\u4fe1\u606f\uff0c RABBIT_PASS \u66ff\u6362\u4e3aRabbitMQ\u7684\u5bc6\u7801 [database] \u5206\u7ec4\u4e2d\u7684 connection \u4e3a\u524d\u9762\u5728mysql\u4e2d\u4e3aTrove\u521b\u5efa\u7684\u6570\u636e\u5e93\u4fe1\u606f Trove\u7684\u7528\u6237\u4fe1\u606f\u4e2d TROVE_PASS \u66ff\u6362\u4e3a\u5b9e\u9645trove\u7528\u6237\u7684\u5bc6\u7801 3.\u914d\u7f6e trove-guestagent.conf vim /etc/trove/trove-guestagent.conf [DEFAULT] log_file = trove-guestagent.log log_dir = /var/log/trove/ ignore_users = os_admin control_exchange = trove transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/ rpc_backend = rabbit command_process_timeout = 60 use_syslog = False debug = True [service_credentials] auth_url = http://controller:5000/v3/ region_name = RegionOne project_name = service password = TROVE_PASS project_domain_name = Default user_domain_name = Default username = trove [mysql] docker_image = your-registry/your-repo/mysql backup_docker_image = your-registry/your-repo/db-backup-mysql:1.1.0 \u89e3\u91ca\uff1a guestagent \u662ftrove\u4e2d\u4e00\u4e2a\u72ec\u7acb\u7ec4\u4ef6\uff0c\u9700\u8981\u9884\u5148\u5185\u7f6e\u5230Trove\u901a\u8fc7Nova\u521b\u5efa\u7684\u865a\u62df \u673a\u955c\u50cf\u4e2d\uff0c\u5728\u521b\u5efa\u597d\u6570\u636e\u5e93\u5b9e\u4f8b\u540e\uff0c\u4f1a\u8d77guestagent\u8fdb\u7a0b\uff0c\u8d1f\u8d23\u901a\u8fc7\u6d88\u606f\u961f\u5217\uff08RabbitMQ\uff09\u5411Trove\u4e0a \u62a5\u5fc3\u8df3\uff0c\u56e0\u6b64\u9700\u8981\u914d\u7f6eRabbitMQ\u7684\u7528\u6237\u548c\u5bc6\u7801\u4fe1\u606f\u3002 \u4eceVictoria\u7248\u5f00\u59cb\uff0cTrove\u4f7f\u7528\u4e00\u4e2a\u7edf\u4e00\u7684\u955c\u50cf\u6765\u8dd1\u4e0d\u540c\u7c7b\u578b\u7684\u6570\u636e\u5e93\uff0c\u6570\u636e\u5e93\u670d\u52a1\u8fd0\u884c\u5728Guest\u865a\u62df\u673a\u7684Docker\u5bb9\u5668\u4e2d\u3002 transport_url \u4e3a RabbitMQ \u8fde\u63a5\u4fe1\u606f\uff0c RABBIT_PASS 
\u66ff\u6362\u4e3aRabbitMQ\u7684\u5bc6\u7801 Trove\u7684\u7528\u6237\u4fe1\u606f\u4e2d TROVE_PASS \u66ff\u6362\u4e3a\u5b9e\u9645trove\u7528\u6237\u7684\u5bc6\u7801 4.\u751f\u6210\u6570\u636e Trove \u6570\u636e\u5e93\u8868 su -s /bin/sh -c \"trove-manage db_sync\" trove 4.\u5b8c\u6210\u5b89\u88c5\u914d\u7f6e \u914d\u7f6e Trove \u670d\u52a1\u81ea\u542f\u52a8 systemctl enable openstack-trove-api.service \\ openstack-trove-taskmanager.service \\ openstack-trove-conductor.service \u542f\u52a8\u670d\u52a1 systemctl start openstack-trove-api.service \\ openstack-trove-taskmanager.service \\ openstack-trove-conductor.service Swift \u5b89\u88c5 \u00b6 Swift \u63d0\u4f9b\u4e86\u5f39\u6027\u53ef\u4f38\u7f29\u3001\u9ad8\u53ef\u7528\u7684\u5206\u5e03\u5f0f\u5bf9\u8c61\u5b58\u50a8\u670d\u52a1\uff0c\u9002\u5408\u5b58\u50a8\u5927\u89c4\u6a21\u975e\u7ed3\u6784\u5316\u6570\u636e\u3002 \u521b\u5efa\u670d\u52a1\u51ed\u8bc1\u3001API\u7aef\u70b9\u3002 \u521b\u5efa\u670d\u52a1\u51ed\u8bc1 #\u521b\u5efaswift\u7528\u6237\uff1a openstack user create --domain default --password-prompt swift #\u4e3aswift\u7528\u6237\u6dfb\u52a0admin\u89d2\u8272\uff1a openstack role add --project service --user swift admin #\u521b\u5efaswift\u670d\u52a1\u5b9e\u4f53\uff1a openstack service create --name swift --description \"OpenStack Object Storage\" object-store \u521b\u5efaswift API \u7aef\u70b9: openstack endpoint create --region RegionOne object-store public http://controller:8080/v1/AUTH_%\\(project_id\\)s openstack endpoint create --region RegionOne object-store internal http://controller:8080/v1/AUTH_%\\(project_id\\)s openstack endpoint create --region RegionOne object-store admin http://controller:8080/v1 \u5b89\u88c5\u8f6f\u4ef6\u5305\uff1a yum install openstack-swift-proxy python3-swiftclient python3-keystoneclient python3-keystonemiddleware memcached \uff08CTL\uff09 \u914d\u7f6eproxy-server\u76f8\u5173\u914d\u7f6e Swift RPM\u5305\u91cc\u5df2\u7ecf\u5305\u542b\u4e86\u4e00\u4e2a\u57fa\u672c\u53ef\u7528\u7684proxy-server.conf\uff0c\u53ea\u9700\u8981\u624b\u52a8\u4fee\u6539\u5176\u4e2d\u7684ip\u548cswift password\u5373\u53ef\u3002 ***\u6ce8\u610f*** **\u6ce8\u610f\u66ff\u6362password\u4e3a\u60a8\u5728\u8eab\u4efd\u670d\u52a1\u4e2d\u4e3aswift\u7528\u6237\u9009\u62e9\u7684\u5bc6\u7801** 4.\u5b89\u88c5\u548c\u914d\u7f6e\u5b58\u50a8\u8282\u70b9 \uff08STG\uff09 \u5b89\u88c5\u652f\u6301\u7684\u7a0b\u5e8f\u5305: ```shell yum install xfsprogs rsync ``` \u5c06/dev/vdb\u548c/dev/vdc\u8bbe\u5907\u683c\u5f0f\u5316\u4e3a XFS ```shell mkfs.xfs /dev/vdb mkfs.xfs /dev/vdc ``` \u521b\u5efa\u6302\u8f7d\u70b9\u76ee\u5f55\u7ed3\u6784: ```shell mkdir -p /srv/node/vdb mkdir -p /srv/node/vdc ``` \u627e\u5230\u65b0\u5206\u533a\u7684 UUID: ```shell blkid ``` \u7f16\u8f91/etc/fstab\u6587\u4ef6\u5e76\u5c06\u4ee5\u4e0b\u5185\u5bb9\u6dfb\u52a0\u5230\u5176\u4e2d: ```shell UUID=\"\" /srv/node/vdb xfs noatime 0 2 UUID=\"\" /srv/node/vdc xfs noatime 0 2 ``` \u6302\u8f7d\u8bbe\u5907\uff1a ```shell mount /srv/node/vdb mount /srv/node/vdc ``` ***\u6ce8\u610f*** **\u5982\u679c\u7528\u6237\u4e0d\u9700\u8981\u5bb9\u707e\u529f\u80fd\uff0c\u4ee5\u4e0a\u6b65\u9aa4\u53ea\u9700\u8981\u521b\u5efa\u4e00\u4e2a\u8bbe\u5907\u5373\u53ef\uff0c\u540c\u65f6\u53ef\u4ee5\u8df3\u8fc7\u4e0b\u9762\u7684rsync\u914d\u7f6e** \uff08\u53ef\u9009\uff09\u521b\u5efa\u6216\u7f16\u8f91/etc/rsyncd.conf\u6587\u4ef6\u4ee5\u5305\u542b\u4ee5\u4e0b\u5185\u5bb9: ```shell [DEFAULT] uid = swift gid = swift log file = /var/log/rsyncd.log pid file = /var/run/rsyncd.pid address = 
MANAGEMENT_INTERFACE_IP_ADDRESS [account] max connections = 2 path = /srv/node/ read only = False lock file = /var/lock/account.lock [container] max connections = 2 path = /srv/node/ read only = False lock file = /var/lock/container.lock [object] max connections = 2 path = /srv/node/ read only = False lock file = /var/lock/object.lock ``` **\u66ff\u6362MANAGEMENT_INTERFACE_IP_ADDRESS\u4e3a\u5b58\u50a8\u8282\u70b9\u4e0a\u7ba1\u7406\u7f51\u7edc\u7684IP\u5730\u5740** \u542f\u52a8rsyncd\u670d\u52a1\u5e76\u914d\u7f6e\u5b83\u5728\u7cfb\u7edf\u542f\u52a8\u65f6\u542f\u52a8: ```shell systemctl enable rsyncd.service systemctl start rsyncd.service ``` 5.\u5728\u5b58\u50a8\u8282\u70b9\u5b89\u88c5\u548c\u914d\u7f6e\u7ec4\u4ef6 \uff08STG\uff09 \u5b89\u88c5\u8f6f\u4ef6\u5305: ```shell yum install openstack-swift-account openstack-swift-container openstack-swift-object ``` \u7f16\u8f91/etc/swift\u76ee\u5f55\u7684account-server.conf\u3001container-server.conf\u548cobject-server.conf\u6587\u4ef6\uff0c\u66ff\u6362bind_ip\u4e3a\u5b58\u50a8\u8282\u70b9\u4e0a\u7ba1\u7406\u7f51\u7edc\u7684IP\u5730\u5740\u3002 \u786e\u4fdd\u6302\u8f7d\u70b9\u76ee\u5f55\u7ed3\u6784\u7684\u6b63\u786e\u6240\u6709\u6743: ```shell chown -R swift:swift /srv/node ``` \u521b\u5efarecon\u76ee\u5f55\u5e76\u786e\u4fdd\u5176\u62e5\u6709\u6b63\u786e\u7684\u6240\u6709\u6743\uff1a ```shell mkdir -p /var/cache/swift chown -R root:swift /var/cache/swift chmod -R 775 /var/cache/swift ``` 6.\u521b\u5efa\u8d26\u53f7\u73af (CTL) \u5207\u6362\u5230/etc/swift\u76ee\u5f55\u3002 ```shell cd /etc/swift ``` \u521b\u5efa\u57fa\u7840account.builder\u6587\u4ef6: ```shell swift-ring-builder account.builder create 10 1 1 ``` \u5c06\u6bcf\u4e2a\u5b58\u50a8\u8282\u70b9\u6dfb\u52a0\u5230\u73af\u4e2d\uff1a ```shell swift-ring-builder account.builder add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6202 --device DEVICE_NAME --weight DEVICE_WEIGHT ``` **\u66ff\u6362STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS\u4e3a\u5b58\u50a8\u8282\u70b9\u4e0a\u7ba1\u7406\u7f51\u7edc\u7684IP\u5730\u5740\u3002\u66ff\u6362DEVICE_NAME\u4e3a\u540c\u4e00\u5b58\u50a8\u8282\u70b9\u4e0a\u7684\u5b58\u50a8\u8bbe\u5907\u540d\u79f0** ***\u6ce8\u610f *** **\u5bf9\u6bcf\u4e2a\u5b58\u50a8\u8282\u70b9\u4e0a\u7684\u6bcf\u4e2a\u5b58\u50a8\u8bbe\u5907\u91cd\u590d\u6b64\u547d\u4ee4** \u9a8c\u8bc1\u6212\u6307\u5185\u5bb9\uff1a ```shell swift-ring-builder account.builder ``` \u91cd\u65b0\u5e73\u8861\u6212\u6307\uff1a ```shell swift-ring-builder account.builder rebalance ``` 7.\u521b\u5efa\u5bb9\u5668\u73af (CTL) \u5207\u6362\u5230`/etc/swift`\u76ee\u5f55\u3002 \u521b\u5efa\u57fa\u7840`container.builder`\u6587\u4ef6\uff1a ```shell swift-ring-builder container.builder create 10 1 1 ``` \u5c06\u6bcf\u4e2a\u5b58\u50a8\u8282\u70b9\u6dfb\u52a0\u5230\u73af\u4e2d\uff1a ```shell swift-ring-builder container.builder \\ add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6201 \\ --device DEVICE_NAME --weight 100 ``` **\u66ff\u6362STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS\u4e3a\u5b58\u50a8\u8282\u70b9\u4e0a\u7ba1\u7406\u7f51\u7edc\u7684IP\u5730\u5740\u3002\u66ff\u6362DEVICE_NAME\u4e3a\u540c\u4e00\u5b58\u50a8\u8282\u70b9\u4e0a\u7684\u5b58\u50a8\u8bbe\u5907\u540d\u79f0** ***\u6ce8\u610f*** **\u5bf9\u6bcf\u4e2a\u5b58\u50a8\u8282\u70b9\u4e0a\u7684\u6bcf\u4e2a\u5b58\u50a8\u8bbe\u5907\u91cd\u590d\u6b64\u547d\u4ee4** \u9a8c\u8bc1\u6212\u6307\u5185\u5bb9\uff1a ```shell swift-ring-builder container.builder ``` \u91cd\u65b0\u5e73\u8861\u6212\u6307\uff1a ```shell 
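# Rebalancing is what actually assigns ring partitions to the devices added above;
# run it once after every device for this ring has been added.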
swift-ring-builder container.builder rebalance ``` 8.\u521b\u5efa\u5bf9\u8c61\u73af (CTL) \u5207\u6362\u5230`/etc/swift`\u76ee\u5f55\u3002 \u521b\u5efa\u57fa\u7840`object.builder`\u6587\u4ef6\uff1a ```shell swift-ring-builder object.builder create 10 1 1 ``` \u5c06\u6bcf\u4e2a\u5b58\u50a8\u8282\u70b9\u6dfb\u52a0\u5230\u73af\u4e2d ```shell swift-ring-builder object.builder \\ add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6200 \\ --device DEVICE_NAME --weight 100 ``` **\u66ff\u6362STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS\u4e3a\u5b58\u50a8\u8282\u70b9\u4e0a\u7ba1\u7406\u7f51\u7edc\u7684IP\u5730\u5740\u3002\u66ff\u6362DEVICE_NAME\u4e3a\u540c\u4e00\u5b58\u50a8\u8282\u70b9\u4e0a\u7684\u5b58\u50a8\u8bbe\u5907\u540d\u79f0** ***\u6ce8\u610f *** **\u5bf9\u6bcf\u4e2a\u5b58\u50a8\u8282\u70b9\u4e0a\u7684\u6bcf\u4e2a\u5b58\u50a8\u8bbe\u5907\u91cd\u590d\u6b64\u547d\u4ee4** \u9a8c\u8bc1\u6212\u6307\u5185\u5bb9\uff1a ```shell swift-ring-builder object.builder ``` \u91cd\u65b0\u5e73\u8861\u6212\u6307\uff1a ```shell swift-ring-builder object.builder rebalance ``` \u5206\u53d1\u73af\u914d\u7f6e\u6587\u4ef6\uff1a \u5c06`account.ring.gz`\uff0c`container.ring.gz`\u4ee5\u53ca `object.ring.gz`\u6587\u4ef6\u590d\u5236\u5230\u6bcf\u4e2a\u5b58\u50a8\u8282\u70b9\u548c\u8fd0\u884c\u4ee3\u7406\u670d\u52a1\u7684\u4efb\u4f55\u5176\u4ed6\u8282\u70b9\u4e0a\u7684`/etc/swift`\u76ee\u5f55\u3002 9.\u5b8c\u6210\u5b89\u88c5 \u7f16\u8f91 /etc/swift/swift.conf \u6587\u4ef6 [swift-hash] swift_hash_path_suffix = test-hash swift_hash_path_prefix = test-hash [storage-policy:0] name = Policy-0 default = yes \u7528\u552f\u4e00\u503c\u66ff\u6362 test-hash \u5c06swift.conf\u6587\u4ef6\u590d\u5236\u5230/etc/swift\u6bcf\u4e2a\u5b58\u50a8\u8282\u70b9\u548c\u8fd0\u884c\u4ee3\u7406\u670d\u52a1\u7684\u4efb\u4f55\u5176\u4ed6\u8282\u70b9\u4e0a\u7684\u76ee\u5f55\u3002 \u5728\u6240\u6709\u8282\u70b9\u4e0a\uff0c\u786e\u4fdd\u914d\u7f6e\u76ee\u5f55\u7684\u6b63\u786e\u6240\u6709\u6743\uff1a chown -R root:swift /etc/swift \u5728\u63a7\u5236\u5668\u8282\u70b9\u548c\u8fd0\u884c\u4ee3\u7406\u670d\u52a1\u7684\u4efb\u4f55\u5176\u4ed6\u8282\u70b9\u4e0a\uff0c\u542f\u52a8\u5bf9\u8c61\u5b58\u50a8\u4ee3\u7406\u670d\u52a1\u53ca\u5176\u4f9d\u8d56\u9879\uff0c\u5e76\u5c06\u5b83\u4eec\u914d\u7f6e\u4e3a\u5728\u7cfb\u7edf\u542f\u52a8\u65f6\u542f\u52a8\uff1a systemctl enable openstack-swift-proxy.service memcached.service systemctl start openstack-swift-proxy.service memcached.service \u5728\u5b58\u50a8\u8282\u70b9\u4e0a\uff0c\u542f\u52a8\u5bf9\u8c61\u5b58\u50a8\u670d\u52a1\u5e76\u5c06\u5b83\u4eec\u914d\u7f6e\u4e3a\u5728\u7cfb\u7edf\u542f\u52a8\u65f6\u542f\u52a8\uff1a systemctl enable openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service systemctl start openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service systemctl enable openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service systemctl start openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service systemctl enable openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service systemctl 
start openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service

Cyborg Installation ¶

Cyborg provides support for accelerator devices in OpenStack, including GPUs, FPGAs, ASICs, NPs, SoCs, NVMe/NOF SSDs, ODP, DPDK/SPDK, and so on.

1. Initialize the corresponding database

CREATE DATABASE cyborg;
GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'localhost' IDENTIFIED BY 'CYBORG_DBPASS';
GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'%' IDENTIFIED BY 'CYBORG_DBPASS';

2. Create the corresponding Keystone resources

$ openstack user create --domain default --password-prompt cyborg
$ openstack role add --project service --user cyborg admin
$ openstack service create --name cyborg --description "Acceleration Service" accelerator
$ openstack endpoint create --region RegionOne \
  accelerator public http://:6666/v1
$ openstack endpoint create --region RegionOne \
  accelerator internal http://:6666/v1
$ openstack endpoint create --region RegionOne \
  accelerator admin http://:6666/v1

3. Install Cyborg

yum install openstack-cyborg

4. Configure Cyborg

Modify /etc/cyborg/cyborg.conf

[DEFAULT]
transport_url = rabbit://%RABBITMQ_USER%:%RABBITMQ_PASSWORD%@%OPENSTACK_HOST_IP%:5672/
use_syslog = False
state_path = /var/lib/cyborg
debug = True

[database]
connection = mysql+pymysql://%DATABASE_USER%:%DATABASE_PASSWORD%@%OPENSTACK_HOST_IP%/cyborg

[service_catalog]
project_domain_id = default
user_domain_id = default
project_name = service
password = PASSWORD
username = cyborg
auth_url = http://%OPENSTACK_HOST_IP%/identity
auth_type = password

[placement]
project_domain_name = Default
project_name = service
user_domain_name = Default
password = PASSWORD
username = placement
auth_url = http://%OPENSTACK_HOST_IP%/identity
auth_type = password

[keystone_authtoken]
memcached_servers = localhost:11211
project_domain_name = Default
project_name = service
user_domain_name = Default
password = PASSWORD
username = cyborg
auth_url = http://%OPENSTACK_HOST_IP%/identity
auth_type = password

Adjust the user names, passwords, IP addresses, and similar values to match your environment.

5. Synchronize the database tables

cyborg-dbsync --config-file /etc/cyborg/cyborg.conf upgrade

6. Start the Cyborg services

systemctl enable openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent
systemctl start openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent

Aodh Installation ¶

1. Create the database

CREATE DATABASE aodh;
GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'localhost' IDENTIFIED BY 'AODH_DBPASS';
GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'%' IDENTIFIED BY 'AODH_DBPASS';

2. Create the corresponding Keystone resources

openstack user create --domain default --password-prompt aodh
openstack role add --project service --user aodh admin
openstack service create --name aodh --description "Telemetry" alarming
openstack endpoint create --region RegionOne alarming public http://controller:8042
openstack endpoint create --region RegionOne alarming internal http://controller:8042
openstack endpoint create --region RegionOne alarming admin http://controller:8042

3. Install Aodh

yum install openstack-aodh-api openstack-aodh-evaluator openstack-aodh-notifier openstack-aodh-listener openstack-aodh-expirer python3-aodhclient

Note

The python3-pyparsing package that aodh depends on is not compatible in the openEuler OS repository and must be overridden with the OpenStack-specific build. You can use yum list | grep pyparsing | grep OpenStack | awk '{print $2}' to obtain the matching version VERSION, then run yum install -y python3-pyparsing-VERSION to install the compatible pyparsing over it.

4. Modify the configuration file

[database]
connection = mysql+pymysql://aodh:AODH_DBPASS@controller/aodh

[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = aodh
password = AODH_PASS

[service_credentials]
auth_type = password
auth_url = http://controller:5000/v3
project_domain_id = default
user_domain_id = default
project_name = service
username = aodh
password = AODH_PASS
interface = internalURL
region_name = RegionOne

5. Initialize the database

aodh-dbsync

6. Start the Aodh services

systemctl enable openstack-aodh-api.service openstack-aodh-evaluator.service openstack-aodh-notifier.service openstack-aodh-listener.service
systemctl start openstack-aodh-api.service openstack-aodh-evaluator.service openstack-aodh-notifier.service openstack-aodh-listener.service

Gnocchi Installation ¶

1. Create the database

CREATE DATABASE gnocchi;
GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'localhost' IDENTIFIED BY 'GNOCCHI_DBPASS';
GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'%' IDENTIFIED BY 'GNOCCHI_DBPASS';

2. Create the corresponding Keystone resources

openstack user create --domain default --password-prompt gnocchi
openstack role add --project service --user gnocchi admin
openstack service create --name gnocchi --description "Metric Service" metric
openstack endpoint create --region RegionOne metric public http://controller:8041
openstack endpoint create --region RegionOne metric internal http://controller:8041
openstack endpoint create --region RegionOne metric admin http://controller:8041

3. Install Gnocchi

yum install openstack-gnocchi-api openstack-gnocchi-metricd python3-gnocchiclient

4. Modify the configuration file /etc/gnocchi/gnocchi.conf

[api]
auth_mode = keystone
port = 8041
uwsgi_mode = http-socket

[keystone_authtoken]
auth_type = password
auth_url = http://controller:5000/v3
project_domain_name = Default
user_domain_name = Default
project_name = service
username = gnocchi
password = GNOCCHI_PASS
interface = internalURL
region_name = RegionOne

[indexer]
url = mysql+pymysql://gnocchi:GNOCCHI_DBPASS@controller/gnocchi

[storage]
# coordination_url is not required but specifying one will improve
# performance with better workload division across workers.
coordination_url = redis://controller:6379 file_basepath = /var/lib/gnocchi driver = file 5.\u521d\u59cb\u5316\u6570\u636e\u5e93 gnocchi-upgrade 6.\u542f\u52a8Gnocchi\u670d\u52a1 systemctl enable openstack-gnocchi-api.service openstack-gnocchi-metricd.service systemctl start openstack-gnocchi-api.service openstack-gnocchi-metricd.service Ceilometer \u5b89\u88c5 \u00b6 1.\u521b\u5efa\u5bf9\u5e94Keystone\u8d44\u6e90\u5bf9\u8c61 openstack user create --domain default --password-prompt ceilometer openstack role add --project service --user ceilometer admin openstack service create --name ceilometer --description \"Telemetry\" metering 2.\u5b89\u88c5Ceilometer yum install openstack-ceilometer-notification openstack-ceilometer-central 3.\u4fee\u6539\u914d\u7f6e\u6587\u4ef6 /etc/ceilometer/pipeline.yaml publishers: # set address of Gnocchi # + filter out Gnocchi-related activity meters (Swift driver) # + set default archive policy - gnocchi://?filter_project=service&archive_policy=low 4.\u4fee\u6539\u914d\u7f6e\u6587\u4ef6 /etc/ceilometer/ceilometer.conf [DEFAULT] transport_url = rabbit://openstack:RABBIT_PASS@controller [service_credentials] auth_type = password auth_url = http://controller:5000/v3 project_domain_id = default user_domain_id = default project_name = service username = ceilometer password = CEILOMETER_PASS interface = internalURL region_name = RegionOne 5.\u521d\u59cb\u5316\u6570\u636e\u5e93 ceilometer-upgrade 6.\u542f\u52a8Ceilometer\u670d\u52a1 systemctl enable openstack-ceilometer-notification.service openstack-ceilometer-central.service systemctl start openstack-ceilometer-notification.service openstack-ceilometer-central.service Heat \u5b89\u88c5 \u00b6 1.\u521b\u5efa heat \u6570\u636e\u5e93\uff0c\u5e76\u6388\u4e88 heat \u6570\u636e\u5e93\u6b63\u786e\u7684\u8bbf\u95ee\u6743\u9650\uff0c\u66ff\u6362 HEAT_DBPASS \u4e3a\u5408\u9002\u7684\u5bc6\u7801 CREATE DATABASE heat; GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'localhost' IDENTIFIED BY 'HEAT_DBPASS'; GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'%' IDENTIFIED BY 'HEAT_DBPASS'; 2.\u521b\u5efa\u670d\u52a1\u51ed\u8bc1\uff0c\u521b\u5efa heat \u7528\u6237\uff0c\u5e76\u4e3a\u5176\u589e\u52a0 admin \u89d2\u8272 openstack user create --domain default --password-prompt heat openstack role add --project service --user heat admin 3.\u521b\u5efa heat \u548c heat-cfn \u670d\u52a1\u53ca\u5176\u5bf9\u5e94\u7684API\u7aef\u70b9 openstack service create --name heat --description \"Orchestration\" orchestration openstack service create --name heat-cfn --description \"Orchestration\" cloudformation openstack endpoint create --region RegionOne orchestration public http://controller:8004/v1/%\\(tenant_id\\)s openstack endpoint create --region RegionOne orchestration internal http://controller:8004/v1/%\\(tenant_id\\)s openstack endpoint create --region RegionOne orchestration admin http://controller:8004/v1/%\\(tenant_id\\)s openstack endpoint create --region RegionOne cloudformation public http://controller:8000/v1 openstack endpoint create --region RegionOne cloudformation internal http://controller:8000/v1 openstack endpoint create --region RegionOne cloudformation admin http://controller:8000/v1 4.\u521b\u5efastack\u7ba1\u7406\u7684\u989d\u5916\u4fe1\u606f\uff0c\u5305\u62ec heat domain\u53ca\u5176\u5bf9\u5e94domain\u7684admin\u7528\u6237 heat_domain_admin \uff0c heat_stack_owner \u89d2\u8272\uff0c heat_stack_user \u89d2\u8272 openstack user create --domain heat --password-prompt heat_domain_admin openstack role add --domain heat --user-domain heat 
--user heat_domain_admin admin openstack role create heat_stack_owner openstack role create heat_stack_user 5.\u5b89\u88c5\u8f6f\u4ef6\u5305 yum install openstack-heat-api openstack-heat-api-cfn openstack-heat-engine 6.\u4fee\u6539\u914d\u7f6e\u6587\u4ef6 /etc/heat/heat.conf [DEFAULT] transport_url = rabbit://openstack:RABBIT_PASS@controller heat_metadata_server_url = http://controller:8000 heat_waitcondition_server_url = http://controller:8000/v1/waitcondition stack_domain_admin = heat_domain_admin stack_domain_admin_password = HEAT_DOMAIN_PASS stack_user_domain_name = heat [database] connection = mysql+pymysql://heat:HEAT_DBPASS@controller/heat [keystone_authtoken] www_authenticate_uri = http://controller:5000 auth_url = http://controller:5000 memcached_servers = controller:11211 auth_type = password project_domain_name = default user_domain_name = default project_name = service username = heat password = HEAT_PASS [trustee] auth_type = password auth_url = http://controller:5000 username = heat password = HEAT_PASS user_domain_name = default [clients_keystone] auth_uri = http://controller:5000 7.\u521d\u59cb\u5316 heat \u6570\u636e\u5e93\u8868 su -s /bin/sh -c \"heat-manage db_sync\" heat 8.\u542f\u52a8\u670d\u52a1 systemctl enable openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service systemctl start openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service \u57fa\u4e8eOpenStack SIG\u5f00\u53d1\u5de5\u5177oos\u5feb\u901f\u90e8\u7f72 \u00b6 oos (openEuler OpenStack SIG)\u662fOpenStack SIG\u63d0\u4f9b\u7684\u547d\u4ee4\u884c\u5de5\u5177\u3002\u5176\u4e2d oos env \u7cfb\u5217\u547d\u4ee4\u63d0\u4f9b\u4e86\u4e00\u952e\u90e8\u7f72OpenStack \uff08 all in one \u6216\u4e09\u8282\u70b9 cluster \uff09\u7684ansible\u811a\u672c\uff0c\u7528\u6237\u53ef\u4ee5\u4f7f\u7528\u8be5\u811a\u672c\u5feb\u901f\u90e8\u7f72\u4e00\u5957\u57fa\u4e8e openEuler RPM \u7684 OpenStack \u73af\u5883\u3002 oos \u5de5\u5177\u652f\u6301\u5bf9\u63a5\u4e91provider\uff08\u76ee\u524d\u4ec5\u652f\u6301\u534e\u4e3a\u4e91provider\uff09\u548c\u4e3b\u673a\u7eb3\u7ba1\u4e24\u79cd\u65b9\u5f0f\u6765\u90e8\u7f72 OpenStack \u73af\u5883\uff0c\u4e0b\u9762\u4ee5\u5bf9\u63a5\u534e\u4e3a\u4e91\u90e8\u7f72\u4e00\u5957 all in one \u7684OpenStack\u73af\u5883\u4e3a\u4f8b\u8bf4\u660e oos \u5de5\u5177\u7684\u4f7f\u7528\u65b9\u6cd5\u3002 \u5b89\u88c5 oos \u5de5\u5177 yum install openstack-sig-tool \u914d\u7f6e\u5bf9\u63a5\u534e\u4e3a\u4e91provider\u7684\u4fe1\u606f \u6253\u5f00 /usr/local/etc/oos/oos.conf \u6587\u4ef6\uff0c\u4fee\u6539\u914d\u7f6e\u4e3a\u60a8\u62e5\u6709\u7684\u534e\u4e3a\u4e91\u8d44\u6e90\u4fe1\u606f\uff1a [huaweicloud] ak = sk = region = ap-southeast-3 root_volume_size = 100 data_volume_size = 100 security_group_name = oos image_format = openEuler-%%(release)s-%%(arch)s vpc_name = oos_vpc subnet1_name = oos_subnet1 subnet2_name = oos_subnet2 \u914d\u7f6e OpenStack \u73af\u5883\u4fe1\u606f \u6253\u5f00 /usr/local/etc/oos/oos.conf \u6587\u4ef6\uff0c\u6839\u636e\u5f53\u524d\u673a\u5668\u73af\u5883\u548c\u9700\u6c42\u4fee\u6539\u914d\u7f6e\u3002\u5185\u5bb9\u5982\u4e0b\uff1a [environment] mysql_root_password = root mysql_project_password = root rabbitmq_password = root project_identity_password = root enabled_service = keystone,neutron,cinder,placement,nova,glance,horizon,aodh,ceilometer,cyborg,gnocchi,kolla,heat,swift,trove,tempest neutron_provider_interface_name = br-ex default_ext_subnet_range = 10.100.100.0/24 default_ext_subnet_gateway = 10.100.100.1 
neutron_dataplane_interface_name = eth1 cinder_block_device = vdb swift_storage_devices = vdc swift_hash_path_suffix = ash swift_hash_path_prefix = has glance_api_workers = 2 cinder_api_workers = 2 nova_api_workers = 2 nova_metadata_api_workers = 2 nova_conductor_workers = 2 nova_scheduler_workers = 2 neutron_api_workers = 2 horizon_allowed_host = * kolla_openeuler_plugin = false \u5173\u952e\u914d\u7f6e \u914d\u7f6e\u9879 \u89e3\u91ca enabled_service \u5b89\u88c5\u670d\u52a1\u5217\u8868\uff0c\u6839\u636e\u7528\u6237\u9700\u6c42\u81ea\u884c\u5220\u51cf neutron_provider_interface_name neutron L3\u7f51\u6865\u540d\u79f0 default_ext_subnet_range neutron\u79c1\u7f51IP\u6bb5 default_ext_subnet_gateway neutron\u79c1\u7f51gateway neutron_dataplane_interface_name neutron\u4f7f\u7528\u7684\u7f51\u5361\uff0c\u63a8\u8350\u4f7f\u7528\u4e00\u5f20\u65b0\u7684\u7f51\u5361\uff0c\u4ee5\u514d\u548c\u73b0\u6709\u7f51\u5361\u51b2\u7a81\uff0c\u9632\u6b62all in one\u4e3b\u673a\u65ad\u8fde\u7684\u60c5\u51b5 cinder_block_device cinder\u4f7f\u7528\u7684\u5377\u8bbe\u5907\u540d swift_storage_devices swift\u4f7f\u7528\u7684\u5377\u8bbe\u5907\u540d kolla_openeuler_plugin \u662f\u5426\u542f\u7528kolla plugin\u3002\u8bbe\u7f6e\u4e3aTrue\uff0ckolla\u5c06\u652f\u6301\u90e8\u7f72openEuler\u5bb9\u5668 \u534e\u4e3a\u4e91\u4e0a\u9762\u521b\u5efa\u4e00\u53f0openEuler 24.03-LTS-SP1\u7684x86_64\u865a\u62df\u673a\uff0c\u7528\u4e8e\u90e8\u7f72 all in one \u7684 OpenStack # sshpass\u5728`oos env create`\u8fc7\u7a0b\u4e2d\u88ab\u4f7f\u7528\uff0c\u7528\u4e8e\u914d\u7f6e\u5bf9\u76ee\u6807\u865a\u62df\u673a\u7684\u514d\u5bc6\u8bbf\u95ee dnf install sshpass oos env create -r 24.03-lts-sp1 -f small -a x86 -n test-oos all_in_one \u5177\u4f53\u7684\u53c2\u6570\u53ef\u4ee5\u4f7f\u7528 oos env create --help \u547d\u4ee4\u67e5\u770b \u90e8\u7f72OpenStack all in one \u73af\u5883 oos env setup test-oos -r wallaby \u5177\u4f53\u7684\u53c2\u6570\u53ef\u4ee5\u4f7f\u7528 oos env setup --help \u547d\u4ee4\u67e5\u770b \u521d\u59cb\u5316tempest\u73af\u5883 \u5982\u679c\u7528\u6237\u60f3\u4f7f\u7528\u8be5\u73af\u5883\u8fd0\u884ctempest\u6d4b\u8bd5\u7684\u8bdd\uff0c\u53ef\u4ee5\u6267\u884c\u547d\u4ee4 oos env init \uff0c\u4f1a\u81ea\u52a8\u628atempest\u9700\u8981\u7684OpenStack\u8d44\u6e90\u81ea\u52a8\u521b\u5efa\u597d oos env init test-oos \u547d\u4ee4\u6267\u884c\u6210\u529f\u540e\uff0c\u5728\u7528\u6237\u7684\u6839\u76ee\u5f55\u4e0b\u4f1a\u751f\u6210mytest\u76ee\u5f55\uff0c\u8fdb\u5165\u5176\u4e2d\u5c31\u53ef\u4ee5\u6267\u884ctempest run\u547d\u4ee4\u4e86\u3002 \u5982\u679c\u662f\u4ee5\u4e3b\u673a\u7eb3\u7ba1\u7684\u65b9\u5f0f\u90e8\u7f72 OpenStack \u73af\u5883\uff0c\u603b\u4f53\u903b\u8f91\u4e0e\u4e0a\u6587\u5bf9\u63a5\u534e\u4e3a\u4e91\u65f6\u4e00\u81f4\uff0c1\u30013\u30015\u30016\u6b65\u64cd\u4f5c\u4e0d\u53d8\uff0c\u53bb\u9664\u7b2c2\u6b65\u5bf9\u534e\u4e3a\u4e91provider\u4fe1\u606f\u7684\u914d\u7f6e\uff0c\u7b2c4\u6b65\u7531\u5728\u534e\u4e3a\u4e91\u4e0a\u521b\u5efa\u865a\u62df\u673a\u6539\u4e3a\u7eb3\u7ba1\u4e3b\u673a\u64cd\u4f5c\u3002 # sshpass\u5728`oos env create`\u8fc7\u7a0b\u4e2d\u88ab\u4f7f\u7528\uff0c\u7528\u4e8e\u914d\u7f6e\u5bf9\u76ee\u6807\u4e3b\u673a\u7684\u514d\u5bc6\u8bbf\u95ee dnf install sshpass oos env manage -r 24.03-lts-sp1 -i TARGET_MACHINE_IP -p TARGET_MACHINE_PASSWD -n test-oos \u66ff\u6362 TARGET_MACHINE_IP \u4e3a\u76ee\u6807\u673aip\u3001 TARGET_MACHINE_PASSWD \u4e3a\u76ee\u6807\u673a\u5bc6\u7801\u3002\u5177\u4f53\u7684\u53c2\u6570\u53ef\u4ee5\u4f7f\u7528 oos env manage --help 
\u547d\u4ee4\u67e5\u770b\u3002","title":"openEuler-24.03-LTS-SP1_Wallaby"},{"location":"install/openEuler-24.03-LTS-SP1/OpenStack-wallaby/#openstack-wallaby","text":"OpenStack-Wallaby \u90e8\u7f72\u6307\u5357 OpenStack \u7b80\u4ecb \u7ea6\u5b9a \u51c6\u5907\u73af\u5883 \u73af\u5883\u914d\u7f6e \u5b89\u88c5 SQL DataBase \u5b89\u88c5 RabbitMQ \u5b89\u88c5 Memcached \u5b89\u88c5 OpenStack Keystone \u5b89\u88c5 Glance \u5b89\u88c5 Placement\u5b89\u88c5 Nova \u5b89\u88c5 Neutron \u5b89\u88c5 Cinder \u5b89\u88c5 horizon \u5b89\u88c5 Tempest \u5b89\u88c5 Ironic \u5b89\u88c5 Kolla \u5b89\u88c5 Trove \u5b89\u88c5 Swift \u5b89\u88c5 Cyborg \u5b89\u88c5 Aodh \u5b89\u88c5 Gnocchi \u5b89\u88c5 Ceilometer \u5b89\u88c5 Heat \u5b89\u88c5 \u57fa\u4e8eOpenStack SIG\u5f00\u53d1\u5de5\u5177oos\u5feb\u901f\u90e8\u7f72","title":"OpenStack-Wallaby \u90e8\u7f72\u6307\u5357"},{"location":"install/openEuler-24.03-LTS-SP1/OpenStack-wallaby/#openstack","text":"OpenStack \u662f\u4e00\u4e2a\u793e\u533a\uff0c\u4e5f\u662f\u4e00\u4e2a\u9879\u76ee\u3002\u5b83\u63d0\u4f9b\u4e86\u4e00\u4e2a\u90e8\u7f72\u4e91\u7684\u64cd\u4f5c\u5e73\u53f0\u6216\u5de5\u5177\u96c6\uff0c\u4e3a\u7ec4\u7ec7\u63d0\u4f9b\u53ef\u6269\u5c55\u7684\u3001\u7075\u6d3b\u7684\u4e91\u8ba1\u7b97\u3002 \u4f5c\u4e3a\u4e00\u4e2a\u5f00\u6e90\u7684\u4e91\u8ba1\u7b97\u7ba1\u7406\u5e73\u53f0\uff0cOpenStack \u7531nova\u3001cinder\u3001neutron\u3001glance\u3001keystone\u3001horizon\u7b49\u51e0\u4e2a\u4e3b\u8981\u7684\u7ec4\u4ef6\u7ec4\u5408\u8d77\u6765\u5b8c\u6210\u5177\u4f53\u5de5\u4f5c\u3002OpenStack \u652f\u6301\u51e0\u4e4e\u6240\u6709\u7c7b\u578b\u7684\u4e91\u73af\u5883\uff0c\u9879\u76ee\u76ee\u6807\u662f\u63d0\u4f9b\u5b9e\u65bd\u7b80\u5355\u3001\u53ef\u5927\u89c4\u6a21\u6269\u5c55\u3001\u4e30\u5bcc\u3001\u6807\u51c6\u7edf\u4e00\u7684\u4e91\u8ba1\u7b97\u7ba1\u7406\u5e73\u53f0\u3002OpenStack \u901a\u8fc7\u5404\u79cd\u4e92\u8865\u7684\u670d\u52a1\u63d0\u4f9b\u4e86\u57fa\u7840\u8bbe\u65bd\u5373\u670d\u52a1\uff08IaaS\uff09\u7684\u89e3\u51b3\u65b9\u6848\uff0c\u6bcf\u4e2a\u670d\u52a1\u63d0\u4f9b API \u8fdb\u884c\u96c6\u6210\u3002 openEuler 24.03-LTS-SP1 \u7248\u672c\u5b98\u65b9\u6e90\u5df2\u7ecf\u652f\u6301 OpenStack-Wallaby \u7248\u672c\uff0c\u7528\u6237\u53ef\u4ee5\u914d\u7f6e\u597d yum \u6e90\u540e\u6839\u636e\u6b64\u6587\u6863\u8fdb\u884c OpenStack \u90e8\u7f72\u3002","title":"OpenStack \u7b80\u4ecb"},{"location":"install/openEuler-24.03-LTS-SP1/OpenStack-wallaby/#_1","text":"OpenStack \u652f\u6301\u591a\u79cd\u5f62\u6001\u90e8\u7f72\uff0c\u6b64\u6587\u6863\u652f\u6301 ALL in One \u4ee5\u53ca Distributed \u4e24\u79cd\u90e8\u7f72\u65b9\u5f0f\uff0c\u6309\u7167\u5982\u4e0b\u65b9\u5f0f\u7ea6\u5b9a\uff1a ALL in One \u6a21\u5f0f: \u5ffd\u7565\u6240\u6709\u53ef\u80fd\u7684\u540e\u7f00 Distributed \u6a21\u5f0f: \u4ee5 `(CTL)` \u4e3a\u540e\u7f00\u8868\u793a\u6b64\u6761\u914d\u7f6e\u6216\u8005\u547d\u4ee4\u4ec5\u9002\u7528`\u63a7\u5236\u8282\u70b9` \u4ee5 `(CPT)` \u4e3a\u540e\u7f00\u8868\u793a\u6b64\u6761\u914d\u7f6e\u6216\u8005\u547d\u4ee4\u4ec5\u9002\u7528`\u8ba1\u7b97\u8282\u70b9` \u4ee5 `(STG)` \u4e3a\u540e\u7f00\u8868\u793a\u6b64\u6761\u914d\u7f6e\u6216\u8005\u547d\u4ee4\u4ec5\u9002\u7528`\u5b58\u50a8\u8282\u70b9` \u9664\u6b64\u4e4b\u5916\u8868\u793a\u6b64\u6761\u914d\u7f6e\u6216\u8005\u547d\u4ee4\u540c\u65f6\u9002\u7528`\u63a7\u5236\u8282\u70b9`\u548c`\u8ba1\u7b97\u8282\u70b9` \u6ce8\u610f \u6d89\u53ca\u5230\u4ee5\u4e0a\u7ea6\u5b9a\u7684\u670d\u52a1\u5982\u4e0b\uff1a CinderSP1 Nova 
Neutron","title":"\u7ea6\u5b9a"},{"location":"install/openEuler-24.03-LTS-SP1/OpenStack-wallaby/#_2","text":"","title":"\u51c6\u5907\u73af\u5883"},{"location":"install/openEuler-24.03-LTS-SP1/OpenStack-wallaby/#_3","text":"\u914d\u7f6e 24.03 LTS SP1 \u5b98\u65b9 yum \u6e90\uff0c\u9700\u8981\u542f\u7528 EPOL \u8f6f\u4ef6\u4ed3\u4ee5\u652f\u6301 OpenStack yum update yum install openstack-release-wallaby yum clean all && yum makecache \u6ce8\u610f \uff1a\u5982\u679c\u4f60\u7684\u73af\u5883\u7684YUM\u6e90\u6ca1\u6709\u542f\u7528EPOL\uff0c\u9700\u8981\u540c\u65f6\u914d\u7f6eEPOL\uff0c\u786e\u4fddEPOL\u5df2\u914d\u7f6e\uff0c\u5982\u4e0b\u6240\u793a\u3002 vi /etc/yum.repos.d/openEuler.repo [EPOL] name=EPOL baseurl=http://repo.openeuler.org/openEuler-24.03-LTS-SP1/EPOL/main/$basearch/ enabled=1 gpgcheck=1 gpgkey=http://repo.openeuler.org/openEuler-24.03-LTS-SP1/OS/$basearch/RPM-GPG-KEY-openEuler EOF \u4fee\u6539\u4e3b\u673a\u540d\u4ee5\u53ca\u6620\u5c04 \u8bbe\u7f6e\u5404\u4e2a\u8282\u70b9\u7684\u4e3b\u673a\u540d hostnamectl set-hostname controller (CTL) hostnamectl set-hostname compute (CPT) \u5047\u8bbecontroller\u8282\u70b9\u7684IP\u662f 10.0.0.11 ,compute\u8282\u70b9\u7684IP\u662f 10.0.0.12 \uff08\u5982\u679c\u5b58\u5728\u7684\u8bdd\uff09,\u5219\u4e8e /etc/hosts \u65b0\u589e\u5982\u4e0b\uff1a 10.0.0.11 controller 10.0.0.12 compute","title":"\u73af\u5883\u914d\u7f6e"},{"location":"install/openEuler-24.03-LTS-SP1/OpenStack-wallaby/#sql-database","text":"\u6267\u884c\u5982\u4e0b\u547d\u4ee4\uff0c\u5b89\u88c5\u8f6f\u4ef6\u5305\u3002 yum install mariadb mariadb-server python3-PyMySQL \u6267\u884c\u5982\u4e0b\u547d\u4ee4\uff0c\u521b\u5efa\u5e76\u7f16\u8f91 /etc/my.cnf.d/openstack.cnf \u6587\u4ef6\u3002 vim /etc/my.cnf.d/openstack.cnf [mysqld] bind-address = 10.0.0.11 default-storage-engine = innodb innodb_file_per_table = on max_connections = 4096 collation-server = utf8_general_ci character-set-server = utf8 \u6ce8\u610f \u5176\u4e2d bind-address \u8bbe\u7f6e\u4e3a\u63a7\u5236\u8282\u70b9\u7684\u7ba1\u7406IP\u5730\u5740\u3002 \u542f\u52a8 DataBase \u670d\u52a1\uff0c\u5e76\u4e3a\u5176\u914d\u7f6e\u5f00\u673a\u81ea\u542f\u52a8\uff1a systemctl enable mariadb.service systemctl start mariadb.service \u914d\u7f6eDataBase\u7684\u9ed8\u8ba4\u5bc6\u7801\uff08\u53ef\u9009\uff09 mysql_secure_installation \u6ce8\u610f \u6839\u636e\u63d0\u793a\u8fdb\u884c\u5373\u53ef","title":"\u5b89\u88c5 SQL DataBase"},{"location":"install/openEuler-24.03-LTS-SP1/OpenStack-wallaby/#rabbitmq","text":"\u6267\u884c\u5982\u4e0b\u547d\u4ee4\uff0c\u5b89\u88c5\u8f6f\u4ef6\u5305\u3002 yum install rabbitmq-server \u542f\u52a8 RabbitMQ \u670d\u52a1\uff0c\u5e76\u4e3a\u5176\u914d\u7f6e\u5f00\u673a\u81ea\u542f\u52a8\u3002 systemctl enable rabbitmq-server.service systemctl start rabbitmq-server.service \u6dfb\u52a0 OpenStack\u7528\u6237\u3002 rabbitmqctl add_user openstack RABBIT_PASS \u6ce8\u610f \u66ff\u6362 RABBIT_PASS \uff0c\u4e3a OpenStack \u7528\u6237\u8bbe\u7f6e\u5bc6\u7801 \u8bbe\u7f6eopenstack\u7528\u6237\u6743\u9650\uff0c\u5141\u8bb8\u8fdb\u884c\u914d\u7f6e\u3001\u5199\u3001\u8bfb\uff1a rabbitmqctl set_permissions openstack \".*\" \".*\" \".*\"","title":"\u5b89\u88c5 RabbitMQ"},{"location":"install/openEuler-24.03-LTS-SP1/OpenStack-wallaby/#memcached","text":"\u6267\u884c\u5982\u4e0b\u547d\u4ee4\uff0c\u5b89\u88c5\u4f9d\u8d56\u8f6f\u4ef6\u5305\u3002 yum install memcached python3-memcached \u7f16\u8f91 /etc/sysconfig/memcached \u6587\u4ef6\u3002 vim /etc/sysconfig/memcached OPTIONS=\"-l 127.0.0.1,::1,controller\" 
## Installing Memcached

Install the dependencies and edit /etc/sysconfig/memcached:

```shell
yum install memcached python3-memcached

vim /etc/sysconfig/memcached
OPTIONS="-l 127.0.0.1,::1,controller"
```

Start the Memcached service and enable it at boot:

```shell
systemctl enable memcached.service
systemctl start memcached.service
```

Note: after the service starts, run `memcached-tool controller stats` to confirm it is up and reachable; `controller` can be replaced with the control node's management IP.

## Installing OpenStack

### Installing Keystone

Create the keystone database and grant privileges (replace KEYSTONE_DBPASS with the keystone database password):

```shell
mysql -u root -p
CREATE DATABASE keystone;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'KEYSTONE_DBPASS';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'KEYSTONE_DBPASS';
exit
```

Install the packages and configure keystone ([database]: the database connection; [token]: the token provider). Replace KEYSTONE_DBPASS with the keystone database password.

```shell
yum install openstack-keystone httpd mod_wsgi

vim /etc/keystone/keystone.conf
[database]
connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone
[token]
provider = fernet
```

Populate the database, initialize the Fernet key repositories, and bootstrap the service (replace ADMIN_PASS with the admin user's password):

```shell
su -s /bin/sh -c "keystone-manage db_sync" keystone
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
keystone-manage bootstrap --bootstrap-password ADMIN_PASS \
    --bootstrap-admin-url http://controller:5000/v3/ \
    --bootstrap-internal-url http://controller:5000/v3/ \
    --bootstrap-public-url http://controller:5000/v3/ \
    --bootstrap-region-id RegionOne
```

Configure the Apache HTTP server: set `ServerName controller` in /etc/httpd/conf/httpd.conf (create the entry if it does not exist), link the keystone WSGI configuration, and start httpd:

```shell
ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
systemctl enable httpd.service
systemctl start httpd.service
```

Create the admin environment file (replace ADMIN_PASS with the admin user's password):

```shell
cat << EOF >> ~/.admin-openrc
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
EOF
```

Create the domain, projects, users, and roles; python3-openstackclient must be installed first. The domain `default` already exists from `keystone-manage bootstrap`. Create the service project, then the (non-admin) project myproject, user myuser, and role myrole, and add myrole to myproject and myuser:

```shell
yum install python3-openstackclient
source ~/.admin-openrc

openstack domain create --description "An Example Domain" example
openstack project create --domain default --description "Service Project" service
openstack project create --domain default --description "Demo Project" myproject
openstack user create --domain default --password-prompt myuser
openstack role create myrole
openstack role add --project myproject --user myuser myrole
```

Verification: unset the temporary OS_AUTH_URL and OS_PASSWORD variables, then request a token for the admin user and for myuser:

```shell
source ~/.admin-openrc
unset OS_AUTH_URL OS_PASSWORD

openstack --os-auth-url http://controller:5000/v3 \
    --os-project-domain-name Default --os-user-domain-name Default \
    --os-project-name admin --os-username admin token issue

openstack --os-auth-url http://controller:5000/v3 \
    --os-project-domain-name Default --os-user-domain-name Default \
    --os-project-name myproject --os-username myuser token issue
```
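The guide only creates ~/.admin-openrc. If you also want a client environment file for the non-admin account, a minimal sketch following the same pattern might look like this; MYUSER_PASS is a placeholder for whatever password you gave myuser:

```shell
cat << EOF >> ~/.myuser-openrc
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=myproject
export OS_USERNAME=myuser
export OS_PASSWORD=MYUSER_PASS
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
EOF
```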
### Installing Glance

Create the glance database and grant all privileges on it to `'glance'@'localhost'` and `'glance'@'%'` identified by `GLANCE_DBPASS`, following the same pattern as the keystone database.

Create the service credentials and the image service API endpoints:

```shell
source ~/.admin-openrc
openstack user create --domain default --password-prompt glance
openstack role add --project service --user glance admin
openstack service create --name glance --description "OpenStack Image" image
openstack endpoint create --region RegionOne image public http://controller:9292
openstack endpoint create --region RegionOne image internal http://controller:9292
openstack endpoint create --region RegionOne image admin http://controller:9292
```

Install the package and configure glance ([database]: database connection; [keystone_authtoken] and [paste_deploy]: identity service access; [glance_store]: local file store and image location). Replace GLANCE_DBPASS with the glance database password and GLANCE_PASS with the glance user's password.

```shell
yum install openstack-glance

vim /etc/glance/glance-api.conf
[database]
connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = GLANCE_PASS
[paste_deploy]
flavor = keystone
[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
```

Populate the database and start the service:

```shell
su -s /bin/sh -c "glance-manage db_sync" glance
systemctl enable openstack-glance-api.service
systemctl start openstack-glance-api.service
```

Verification: download a test image, upload it to the Image service, and confirm the upload. Note: on AArch64 (Kunpeng) hosts, download an aarch64 image instead; cirros-0.5.2-aarch64-disk.img has been tested.

```shell
source ~/.admin-openrc
wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
openstack image create --disk-format qcow2 --container-format bare \
    --file cirros-0.4.0-x86_64-disk.img --public cirros
openstack image list
```
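Two optional extra checks, not required by the guide: inspect the downloaded file before uploading, and confirm the image reaches the active state afterwards.

```shell
# The format reported here should match the --disk-format used above (qcow2).
qemu-img info cirros-0.4.0-x86_64-disk.img

# After the upload, status should be "active".
openstack image show cirros -c status -c disk_format
```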
### Installing Placement

As root, create the placement database and grant all privileges on it to `'placement'@'localhost'` and `'placement'@'%'` identified by `PLACEMENT_DBPASS`.

Create the placement service credentials and the placement API endpoints:

```shell
source ~/.admin-openrc
openstack user create --domain default --password-prompt placement
openstack role add --project service --user placement admin
openstack service create --name placement --description "Placement API" placement
openstack endpoint create --region RegionOne placement public http://controller:8778
openstack endpoint create --region RegionOne placement internal http://controller:8778
openstack endpoint create --region RegionOne placement admin http://controller:8778
```

Install and configure the package ([placement_database]: database connection; [api] and [keystone_authtoken]: identity service access). Replace PLACEMENT_DBPASS with the placement database password and PLACEMENT_PASS with the placement user's password.

```shell
yum install openstack-placement-api

vim /etc/placement/placement.conf
[placement_database]
connection = mysql+pymysql://placement:PLACEMENT_DBPASS@controller/placement
[api]
auth_strategy = keystone
[keystone_authtoken]
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = placement
password = PLACEMENT_PASS
```

Populate the database and restart httpd:

```shell
su -s /bin/sh -c "placement-manage db sync" placement
systemctl restart httpd
```

Verification: run the status check, then install osc-placement and list the available resource classes and traits:

```shell
source ~/.admin-openrc
placement-status upgrade check

yum install python3-osc-placement
openstack --os-placement-api-version 1.2 resource class list --sort-column name
openstack --os-placement-api-version 1.6 trait list --sort-column name
```
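With python3-osc-placement installed you can also list the resource providers; as an informal extra check this should return without errors, and it will stay empty until nova-compute hosts register themselves in a later step.

```shell
openstack resource provider list
```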
### Installing Nova

Create the nova_api, nova, and nova_cell0 databases and grant all privileges on each of them to `'nova'@'localhost'` and `'nova'@'%'` identified by `NOVA_DBPASS` (CTL).

Create the nova service credentials and the compute API endpoints:

```shell
source ~/.admin-openrc                                                                        (CTL)
openstack user create --domain default --password-prompt nova                                (CTL)
openstack role add --project service --user nova admin                                       (CTL)
openstack service create --name nova --description "OpenStack Compute" compute               (CTL)
openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1      (CTL)
openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1    (CTL)
openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1       (CTL)
```

Install the packages:

```shell
yum install openstack-nova-api openstack-nova-conductor \          (CTL)
            openstack-nova-novncproxy openstack-nova-scheduler
yum install openstack-nova-compute                                 (CPT)
```

Note: on arm64 machines, also run `yum install edk2-aarch64` (CPT).
Configure nova ([DEFAULT]: enable the compute and metadata APIs, the RabbitMQ transport, my_ip, and neutron networking; [api_database]/[database]: database connections; [api]/[keystone_authtoken]: identity service access; [vnc]: remote console; [glance]: image service API; [oslo_concurrency]: lock path; [placement]/[neutron]: service access):

```shell
vim /etc/nova/nova.conf

[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
my_ip = 10.0.0.11
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver
compute_driver = libvirt.LibvirtDriver                               (CPT)
instances_path = /var/lib/nova/instances/                            (CPT)
lock_path = /var/lib/nova/tmp                                        (CPT)

[api_database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api    (CTL)

[database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova        (CTL)

[api]
auth_strategy = keystone

[keystone_authtoken]
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = NOVA_PASS

[vnc]
enabled = true
server_listen = $my_ip
server_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html           (CPT)

[libvirt]
virt_type = qemu                                                     (CPT)
cpu_mode = custom                                                    (CPT)
cpu_model = cortex-a72                                               (CPT)

[glance]
api_servers = http://controller:9292

[oslo_concurrency]
lock_path = /var/lib/nova/tmp                                        (CTL)

[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = PLACEMENT_PASS

[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
service_metadata_proxy = true                                        (CTL)
metadata_proxy_shared_secret = METADATA_SECRET                       (CTL)
```

Note: replace RABBIT_PASS with the openstack account's RabbitMQ password, set my_ip to the node's management IP, and replace NOVA_DBPASS, NOVA_PASS, PLACEMENT_PASS, NEUTRON_PASS, and METADATA_SECRET with the nova database password, the nova, placement, and neutron user passwords, and a suitable metadata proxy secret.

Additional step: check whether the compute node supports hardware acceleration for virtual machines (x86):

```shell
egrep -c '(vmx|svm)' /proc/cpuinfo    (CPT)
```

If the command returns 0, hardware acceleration is not available and libvirt must use QEMU instead of KVM (`[libvirt] virt_type = qemu` in /etc/nova/nova.conf on the compute node). If it returns 1 or more, hardware acceleration is available and no extra configuration is needed.

Note: on arm64 machines, also configure the UEFI firmware for QEMU: add an `nvram` entry listing /usr/share/AAVMF/AAVMF_CODE.fd, /usr/share/AAVMF/AAVMF_VARS.fd, /usr/share/edk2/aarch64/QEMU_EFI-pflash.raw, and /usr/share/edk2/aarch64/vars-template-pflash.raw in /etc/libvirt/qemu.conf, and create /etc/qemu/firmware/edk2-aarch64.json as follows (CPT):
{ \"description\": \"UEFI firmware for ARM64 virtual machines\", \"interface-types\": [ \"uefi\" ], \"mapping\": { \"device\": \"flash\", \"executable\": { \"filename\": \"/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw\", \"format\": \"raw\" }, \"nvram-template\": { \"filename\": \"/usr/share/edk2/aarch64/vars-template-pflash.raw\", \"format\": \"raw\" } }, \"targets\": [ { \"architecture\": \"aarch64\", \"machines\": [ \"virt-*\" ] } ], \"features\": [ ], \"tags\": [ ] } (CPT) \u540c\u6b65\u6570\u636e\u5e93 \u540c\u6b65nova-api\u6570\u636e\u5e93\uff1a su -s /bin/sh -c \"nova-manage api_db sync\" nova (CTL) \u6ce8\u518ccell0\u6570\u636e\u5e93\uff1a su -s /bin/sh -c \"nova-manage cell_v2 map_cell0\" nova (CTL) \u521b\u5efacell1 cell\uff1a su -s /bin/sh -c \"nova-manage cell_v2 create_cell --name=cell1 --verbose\" nova (CTL) \u540c\u6b65nova\u6570\u636e\u5e93\uff1a su -s /bin/sh -c \"nova-manage db sync\" nova (CTL) \u9a8c\u8bc1cell0\u548ccell1\u6ce8\u518c\u6b63\u786e\uff1a su -s /bin/sh -c \"nova-manage cell_v2 list_cells\" nova (CTL) \u6dfb\u52a0\u8ba1\u7b97\u8282\u70b9\u5230openstack\u96c6\u7fa4 su -s /bin/sh -c \"nova-manage cell_v2 discover_hosts --verbose\" nova (CPT) \u542f\u52a8\u670d\u52a1 systemctl enable \\ (CTL) openstack-nova-api.service \\ openstack-nova-scheduler.service \\ openstack-nova-conductor.service \\ openstack-nova-novncproxy.service systemctl start \\ (CTL) openstack-nova-api.service \\ openstack-nova-scheduler.service \\ openstack-nova-conductor.service \\ openstack-nova-novncproxy.service systemctl enable libvirtd.service openstack-nova-compute.service (CPT) systemctl start libvirtd.service openstack-nova-compute.service (CPT) \u9a8c\u8bc1 source ~/.admin-openrc (CTL) \u5217\u51fa\u670d\u52a1\u7ec4\u4ef6\uff0c\u9a8c\u8bc1\u6bcf\u4e2a\u6d41\u7a0b\u90fd\u6210\u529f\u542f\u52a8\u548c\u6ce8\u518c\uff1a openstack compute service list (CTL) \u5217\u51fa\u8eab\u4efd\u670d\u52a1\u4e2d\u7684API\u7aef\u70b9\uff0c\u9a8c\u8bc1\u4e0e\u8eab\u4efd\u670d\u52a1\u7684\u8fde\u63a5\uff1a openstack catalog list (CTL) \u5217\u51fa\u955c\u50cf\u670d\u52a1\u4e2d\u7684\u955c\u50cf\uff0c\u9a8c\u8bc1\u4e0e\u955c\u50cf\u670d\u52a1\u7684\u8fde\u63a5\uff1a openstack image list (CTL) \u68c0\u67e5cells\u662f\u5426\u8fd0\u4f5c\u6210\u529f\uff0c\u4ee5\u53ca\u5176\u4ed6\u5fc5\u8981\u6761\u4ef6\u662f\u5426\u5df2\u5177\u5907\u3002 nova-status upgrade check (CTL)","title":"Nova \u5b89\u88c5"},{"location":"install/openEuler-24.03-LTS-SP1/OpenStack-wallaby/#neutron","text":"\u521b\u5efa\u6570\u636e\u5e93\u3001\u670d\u52a1\u51ed\u8bc1\u548c API \u7aef\u70b9 \u521b\u5efa\u6570\u636e\u5e93\uff1a mysql -u root -p (CTL) MariaDB [(none)]> CREATE DATABASE neutron; MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \\ IDENTIFIED BY 'NEUTRON_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \\ IDENTIFIED BY 'NEUTRON_DBPASS'; MariaDB [(none)]> exit \u6ce8\u610f \u66ff\u6362 NEUTRON_DBPASS \u4e3a neutron \u6570\u636e\u5e93\u8bbe\u7f6e\u5bc6\u7801\u3002 source ~/.admin-openrc (CTL) \u521b\u5efaneutron\u670d\u52a1\u51ed\u8bc1 openstack user create --domain default --password-prompt neutron (CTL) openstack role add --project service --user neutron admin (CTL) openstack service create --name neutron --description \"OpenStack Networking\" network (CTL) \u521b\u5efaNeutron\u670d\u52a1API\u7aef\u70b9\uff1a openstack endpoint create --region RegionOne network public http://controller:9696 (CTL) openstack endpoint create --region RegionOne network internal 
### Installing Neutron

Create the neutron database and grant all privileges on it to `'neutron'@'localhost'` and `'neutron'@'%'` identified by `NEUTRON_DBPASS` (CTL).

Create the neutron service credentials and the network API endpoints:

```shell
source ~/.admin-openrc                                                                   (CTL)
openstack user create --domain default --password-prompt neutron                        (CTL)
openstack role add --project service --user neutron admin                               (CTL)
openstack service create --name neutron --description "OpenStack Networking" network    (CTL)
openstack endpoint create --region RegionOne network public http://controller:9696      (CTL)
openstack endpoint create --region RegionOne network internal http://controller:9696    (CTL)
openstack endpoint create --region RegionOne network admin http://controller:9696       (CTL)
```

Install the packages:

```shell
yum install openstack-neutron openstack-neutron-linuxbridge ebtables ipset \    (CTL)
            openstack-neutron-ml2
yum install openstack-neutron-linuxbridge ebtables ipset                        (CPT)
```

Configure the main neutron settings ([database]: database connection; [DEFAULT]: enable the ML2 and router plugins, allow overlapping IPs, configure the RabbitMQ transport and nova notifications; [keystone_authtoken] and [nova]: service credentials; [oslo_concurrency]: lock path). Replace NEUTRON_DBPASS, RABBIT_PASS, NEUTRON_PASS, and NOVA_PASS with the corresponding passwords.

```shell
vim /etc/neutron/neutron.conf

[database]
connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron    (CTL)

[DEFAULT]
core_plugin = ml2                            (CTL)
service_plugins = router                     (CTL)
allow_overlapping_ips = true                 (CTL)
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = true    (CTL)
notify_nova_on_port_data_changes = true      (CTL)
api_workers = 3                              (CTL)

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = neutron
password = NEUTRON_PASS

[nova]
auth_url = http://controller:5000            (CTL)
auth_type = password                         (CTL)
project_domain_name = Default                (CTL)
user_domain_name = Default                   (CTL)
region_name = RegionOne                      (CTL)
project_name = service                       (CTL)
username = nova                              (CTL)
password = NOVA_PASS                         (CTL)

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
```

Configure the ML2 plugin ([ml2]: enable flat, vlan, and vxlan networks, the linuxbridge and l2population mechanisms, and the port security extension driver; [ml2_type_flat]: map the flat network to the provider virtual network; [ml2_type_vxlan]: the VXLAN identifier range; [securitygroup]: allow ipset), and create the /etc/neutron/plugin.ini symlink. The L2 details can be adapted to your needs; this document uses a provider network with linuxbridge.

```shell
vim /etc/neutron/plugins/ml2/ml2_conf.ini

[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security

[ml2_type_flat]
flat_networks = provider

[ml2_type_vxlan]
vni_ranges = 1:1000

[securitygroup]
enable_ipset = true
```

```shell
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
```
Configure the Linux bridge agent ([linux_bridge]: map the provider virtual network to a physical interface; [vxlan]: enable VXLAN overlay networks, set the local overlay IP, enable layer-2 population; [securitygroup]: enable security groups with the linux bridge iptables firewall driver). Replace PROVIDER_INTERFACE_NAME with the physical network interface and OVERLAY_INTERFACE_IP_ADDRESS with the node's management IP.

```shell
vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini

[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME

[vxlan]
enable_vxlan = true
local_ip = OVERLAY_INTERFACE_IP_ADDRESS
l2_population = true

[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
```

Configure the Layer-3, DHCP, and metadata agents (CTL); replace METADATA_SECRET with a suitable metadata proxy secret:

```shell
vim /etc/neutron/l3_agent.ini          (CTL)
[DEFAULT]
interface_driver = linuxbridge

vim /etc/neutron/dhcp_agent.ini        (CTL)
[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true

vim /etc/neutron/metadata_agent.ini    (CTL)
[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = METADATA_SECRET
```

Configure nova to use neutron and the metadata proxy (replace NEUTRON_PASS and METADATA_SECRET):

```shell
vim /etc/nova/nova.conf

[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = Default
user_domain_name = Default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
service_metadata_proxy = true                     (CTL)
metadata_proxy_shared_secret = METADATA_SECRET    (CTL)
```

Populate the database, restart the compute API service, and start the network services:

```shell
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
    --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
systemctl restart openstack-nova-api.service

systemctl enable neutron-server.service neutron-linuxbridge-agent.service \     (CTL)
                 neutron-dhcp-agent.service neutron-metadata-agent.service
systemctl enable neutron-l3-agent.service
systemctl restart openstack-nova-api.service neutron-server.service \           (CTL)
                  neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
                  neutron-metadata-agent.service neutron-l3-agent.service

systemctl enable neutron-linuxbridge-agent.service                              (CPT)
systemctl restart neutron-linuxbridge-agent.service openstack-nova-compute.service    (CPT)
```

Verification: confirm that the neutron agents started successfully:

```shell
openstack network agent list
```
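As an optional smoke test beyond the agent list, you can create a provider network and subnet matching the flat/linuxbridge configuration above; the addresses below are placeholders and must be adapted to your physical provider network.

```shell
openstack network create --share --external \
    --provider-physical-network provider \
    --provider-network-type flat provider

openstack subnet create --network provider \
    --allocation-pool start=203.0.113.101,end=203.0.113.250 \
    --dns-nameserver 8.8.8.8 --gateway 203.0.113.1 \
    --subnet-range 203.0.113.0/24 provider-subnet
```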
### Installing Cinder

Create the cinder database and grant all privileges on it to `'cinder'@'localhost'` and `'cinder'@'%'` identified by `CINDER_DBPASS`.

Create the cinder service credentials (the cinder user with the admin role, plus the cinderv2 and cinderv3 services) and the block storage API endpoints:

```shell
source ~/.admin-openrc
openstack user create --domain default --password-prompt cinder
openstack role add --project service --user cinder admin
openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3

openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s
```

Install the packages:

```shell
yum install openstack-cinder-api openstack-cinder-scheduler                            (CTL)
yum install lvm2 device-mapper-persistent-data scsi-target-utils rpcbind nfs-utils \   (STG)
            openstack-cinder-volume openstack-cinder-backup
```

Prepare the storage device (example only): create the LVM physical volume and the cinder-volumes volume group, and add an lvm.conf filter that accepts the /dev/vdb device and rejects all others:

```shell
pvcreate /dev/vdb
vgcreate cinder-volumes /dev/vdb

vim /etc/lvm/lvm.conf
devices {
    filter = [ "a/vdb/", "r/.*/"]
}
```
filter = [ \"a/vdb/\", \"r/.*/\"] \u89e3\u91ca \u5728devices\u90e8\u5206\uff0c\u6dfb\u52a0\u8fc7\u6ee4\u4ee5\u63a5\u53d7/dev/vdb\u8bbe\u5907\u62d2\u7edd\u5176\u4ed6\u8bbe\u5907\u3002 \u51c6\u5907NFS mkdir -p /root/cinder/backup cat << EOF >> /etc/export /root/cinder/backup 192.168.1.0/24(rw,sync,no_root_squash,no_all_squash) EOF \u914d\u7f6ecinder\u76f8\u5173\u914d\u7f6e\uff1a vim /etc/cinder/cinder.conf [DEFAULT] transport_url = rabbit://openstack:RABBIT_PASS@controller auth_strategy = keystone my_ip = 10.0.0.11 enabled_backends = lvm (STG) backup_driver=cinder.backup.drivers.nfs.NFSBackupDriver (STG) backup_share=HOST:PATH (STG) [database] connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder [keystone_authtoken] www_authenticate_uri = http://controller:5000 auth_url = http://controller:5000 memcached_servers = controller:11211 auth_type = password project_domain_name = Default user_domain_name = Default project_name = service username = cinder password = CINDER_PASS [oslo_concurrency] lock_path = /var/lib/cinder/tmp [lvm] volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver (STG) volume_group = cinder-volumes (STG) iscsi_protocol = iscsi (STG) iscsi_helper = tgtadm (STG) \u89e3\u91ca [database]\u90e8\u5206\uff0c\u914d\u7f6e\u6570\u636e\u5e93\u5165\u53e3\uff1b [DEFAULT]\u90e8\u5206\uff0c\u914d\u7f6eRabbitMQ\u6d88\u606f\u961f\u5217\u5165\u53e3\uff0c\u914d\u7f6emy_ip\uff1b [DEFAULT] [keystone_authtoken]\u90e8\u5206\uff0c\u914d\u7f6e\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u5165\u53e3\uff1b [oslo_concurrency]\u90e8\u5206\uff0c\u914d\u7f6elock path\u3002 \u6ce8\u610f \u66ff\u6362 CINDER_DBPASS \u4e3a cinder \u6570\u636e\u5e93\u7684\u5bc6\u7801\uff1b \u66ff\u6362 RABBIT_PASS \u4e3a RabbitMQ \u4e2d openstack \u8d26\u6237\u7684\u5bc6\u7801\uff1b \u914d\u7f6e my_ip \u4e3a\u63a7\u5236\u8282\u70b9\u7684\u7ba1\u7406 IP \u5730\u5740\uff1b \u66ff\u6362 CINDER_PASS \u4e3a cinder \u7528\u6237\u7684\u5bc6\u7801\uff1b \u66ff\u6362 HOST:PATH \u4e3a NFS \u7684HOSTIP\u548c\u5171\u4eab\u8def\u5f84\uff1b \u540c\u6b65\u6570\u636e\u5e93\uff1a su -s /bin/sh -c \"cinder-manage db sync\" cinder (CTL) \u914d\u7f6enova\uff1a vim /etc/nova/nova.conf (CTL) [cinder] os_region_name = RegionOne \u91cd\u542f\u8ba1\u7b97API\u670d\u52a1 systemctl restart openstack-nova-api.service \u542f\u52a8cinder\u670d\u52a1 systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service (CTL) systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service (CTL) systemctl enable rpcbind.service nfs-server.service tgtd.service iscsid.service \\ (STG) openstack-cinder-volume.service \\ openstack-cinder-backup.service systemctl start rpcbind.service nfs-server.service tgtd.service iscsid.service \\ (STG) openstack-cinder-volume.service \\ openstack-cinder-backup.service \u6ce8\u610f \u5f53cinder\u4f7f\u7528tgtadm\u7684\u65b9\u5f0f\u6302\u5377\u7684\u65f6\u5019\uff0c\u8981\u4fee\u6539/etc/tgt/tgtd.conf\uff0c\u5185\u5bb9\u5982\u4e0b\uff0c\u4fdd\u8bc1tgtd\u53ef\u4ee5\u53d1\u73b0cinder-volume\u7684iscsi target\u3002 include /var/lib/cinder/volumes/* \u9a8c\u8bc1 source ~/.admin-openrc openstack volume service list","title":"Cinder \u5b89\u88c5"},{"location":"install/openEuler-24.03-LTS-SP1/OpenStack-wallaby/#horizon","text":"\u5b89\u88c5\u8f6f\u4ef6\u5305 yum install openstack-dashboard \u4fee\u6539\u6587\u4ef6 \u4fee\u6539\u53d8\u91cf vim /etc/openstack-dashboard/local_settings OPENSTACK_HOST = \"controller\" ALLOWED_HOSTS = ['*', ] SESSION_ENGINE = 'django.contrib.sessions.backends.cache' CACHES 
### Installing Horizon

Install the package and edit the dashboard settings:

```shell
yum install openstack-dashboard

vim /etc/openstack-dashboard/local_settings
OPENSTACK_HOST = "controller"
ALLOWED_HOSTS = ['*', ]
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'controller:11211',
    }
}
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "member"
WEBROOT = '/dashboard'
POLICY_FILES_PATH = "/etc/openstack-dashboard"
OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 3,
}
```

Restart the services:

```shell
systemctl restart httpd.service memcached.service
```

Verification: open http://HOSTIP/dashboard/ in a browser and log in to horizon. Note: replace HOSTIP with the control node's management IP.

### Installing Tempest

Tempest is OpenStack's integration test suite. Install it if you want to run full automated functional tests against the deployed environment; otherwise it can be skipped.

Install Tempest, initialize a working directory, edit the configuration, and run the tests. tempest.conf must describe the current OpenStack environment; see the official sample for the details.

```shell
yum install openstack-tempest
tempest init mytest
cd mytest
vi etc/tempest.conf
tempest run
```

Optionally install the tempest extensions. Individual OpenStack services ship their own tempest test packages to enrich the test coverage; in Wallaby, extension tests are provided for Cinder, Glance, Keystone, Ironic, and Trove:

```shell
yum install python3-cinder-tempest-plugin python3-glance-tempest-plugin \
            python3-ironic-tempest-plugin python3-keystone-tempest-plugin \
            python3-trove-tempest-plugin
```
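A full `tempest run` can take a long time; as an optional first pass you can limit the run to a subset of tests with a regular expression, for example the identity API tests:

```shell
cd mytest
tempest run --regex '^tempest.api.identity'
```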
--description \"Ironic baremetal provisioning service\" baremetal openstack service create --name ironic-inspector --description \"Ironic inspector baremetal provisioning service\" baremetal-introspection openstack user create --password IRONIC_INSPECTOR_PASSWORD --email ironic_inspector@example.com ironic_inspector openstack role add --project service --user ironic-inspector admin 2\u3001\u521b\u5efaBare Metal\u670d\u52a1\u8bbf\u95ee\u5165\u53e3 openstack endpoint create --region RegionOne baremetal admin http://$IRONIC_NODE:6385 openstack endpoint create --region RegionOne baremetal public http://$IRONIC_NODE:6385 openstack endpoint create --region RegionOne baremetal internal http://$IRONIC_NODE:6385 openstack endpoint create --region RegionOne baremetal-introspection internal http://172.20.19.13:5050/v1 openstack endpoint create --region RegionOne baremetal-introspection public http://172.20.19.13:5050/v1 openstack endpoint create --region RegionOne baremetal-introspection admin http://172.20.19.13:5050/v1 \u914d\u7f6eironic-api\u670d\u52a1 \u914d\u7f6e\u6587\u4ef6\u8def\u5f84/etc/ironic/ironic.conf 1\u3001\u901a\u8fc7 connection \u9009\u9879\u914d\u7f6e\u6570\u636e\u5e93\u7684\u4f4d\u7f6e\uff0c\u5982\u4e0b\u6240\u793a\uff0c\u66ff\u6362 IRONIC_DBPASSWORD \u4e3a ironic \u7528\u6237\u7684\u5bc6\u7801\uff0c\u66ff\u6362 DB_IP \u4e3aDB\u670d\u52a1\u5668\u6240\u5728\u7684IP\u5730\u5740\uff1a [database] # The SQLAlchemy connection string used to connect to the # database (string value) connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic 2\u3001\u901a\u8fc7\u4ee5\u4e0b\u9009\u9879\u914d\u7f6eironic-api\u670d\u52a1\u4f7f\u7528RabbitMQ\u6d88\u606f\u4ee3\u7406\uff0c\u66ff\u6362 RPC_* \u4e3aRabbitMQ\u7684\u8be6\u7ec6\u5730\u5740\u548c\u51ed\u8bc1 [DEFAULT] # A URL representing the messaging driver to use and its full # configuration. (string value) transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/ \u7528\u6237\u4e5f\u53ef\u81ea\u884c\u4f7f\u7528json-rpc\u65b9\u5f0f\u66ff\u6362rabbitmq 3\u3001\u914d\u7f6eironic-api\u670d\u52a1\u4f7f\u7528\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u7684\u51ed\u8bc1\uff0c\u66ff\u6362 PUBLIC_IDENTITY_IP \u4e3a\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u5668\u7684\u516c\u5171IP\uff0c\u66ff\u6362 PRIVATE_IDENTITY_IP \u4e3a\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u5668\u7684\u79c1\u6709IP\uff0c\u66ff\u6362 IRONIC_PASSWORD \u4e3a\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u4e2d ironic \u7528\u6237\u7684\u5bc6\u7801\uff1a [DEFAULT] # Authentication strategy used by ironic-api: one of # \"keystone\" or \"noauth\". \"noauth\" should not be used in a # production environment because all authentication will be # disabled. 
Configure the ironic-api service (configuration file /etc/ironic/ironic.conf).

1. Point the `connection` option at the database, replacing IRONIC_DBPASSWORD with the ironic user's password and DB_IP with the database server's IP:

```shell
[database]
connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic
```

2. Configure ironic-api to use the RabbitMQ message broker, replacing the RPC_* placeholders with the RabbitMQ address and credentials (json-rpc may be used instead of RabbitMQ):

```shell
[DEFAULT]
transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
```

3. Configure ironic-api to use the identity service credentials, replacing PUBLIC_IDENTITY_IP and PRIVATE_IDENTITY_IP with the public and private IPs of the identity server and IRONIC_PASSWORD with the ironic user's password. In addition, the [DEFAULT] section sets `auth_strategy=keystone`, `host = controller`, `memcache_servers = controller:11211`, the enabled network interfaces (`flat,noop,neutron` with `noop` as the default), the enabled hardware types and interfaces (`ipmi` hardware, `pxe` boot, `direct` deploy as the default, `inspector` inspection, `ipmitool` management and power, `no-rescue,agent` rescue), `isolinux_bin = /usr/share/syslinux/isolinux.bin`, and the logging context format; [agent] enables deploy log collection to /var/log/ironic/deploy with the local backend, HTTP image download, raw image streaming and conversion disabled, and `verify_ca = False`; [oslo_messaging_notifications] and [oslo_messaging_rabbit] configure the notification transport with durable, HA queues; [pxe] disables iPXE, sets `pxe_append_params = nofb nomodeset vga=normal coreos.autologin ipa-insecure=1`, the image cache size, and the tftp root and master path under /var/lib/tftpboot/cephfs/; [dhcp] sets `dhcp_provider = none`.

```shell
[keystone_authtoken]
auth_type=password
www_authenticate_uri=http://PUBLIC_IDENTITY_IP:5000
auth_url=http://PRIVATE_IDENTITY_IP:5000
username=ironic
password=IRONIC_PASSWORD
project_name=service
project_domain_name=Default
user_domain_name=Default
```

4. Create the bare metal service database tables:

```shell
ironic-dbsync --config-file /etc/ironic/ironic.conf create_schema
```

5. Restart the ironic-api service:

```shell
sudo systemctl restart openstack-ironic-api
```
Configure the ironic-conductor service.

1. Replace HOST_IP with the conductor host's IP:

```shell
[DEFAULT]
my_ip=HOST_IP
```

2. Configure the database location; ironic-conductor should use the same `[database] connection` string as ironic-api (replace IRONIC_DBPASSWORD and DB_IP as before).

3. Configure the RabbitMQ message broker; ironic-conductor should use the same `transport_url` as ironic-api (json-rpc may be used instead).

4. Configure credentials for accessing other OpenStack services. To communicate with the rest of OpenStack, the bare metal service authenticates with the Identity service when calling other services. The credentials must be configured in the section associated with each service: [neutron] for the Networking service, [glance] for the Image service, [swift] for Object Storage, [cinder] for Block Storage, [inspector] for bare metal introspection, and [service_catalog], a special entry holding the credentials the bare metal service uses to discover its own API URL in the Identity service catalog. For simplicity the same service user can be used for all services; for backward compatibility it should be the same user as configured in ironic-api's [keystone_authtoken], but this is not required and a separate service user can be created per service. In the example below, the credentials for accessing the Networking service assume that neutron is deployed in the RegionOne identity domain with only the public endpoint registered in the catalog, that HTTPS requests are verified against a specific CA certificate, that the same service user as ironic-api is used, and that the password auth plugin discovers a suitable identity API version from the other options:

```shell
[neutron]
auth_type = password
auth_url = https://IDENTITY_IP:5000/
username = ironic
password = IRONIC_PASSWORD
project_name = service
project_domain_id = default
user_domain_id = default
cafile = /opt/stack/data/ca-bundle.pem
region_name = RegionOne
valid_interfaces = public
```
By default, the bare metal service discovers a suitable endpoint for each service through the identity service catalog. To use a different endpoint for a particular service, set `endpoint_override` in that service's section of the bare metal configuration file, for example `[neutron] endpoint_override = <NEUTRON_API_URL>`.

5. Configure the allowed drivers and hardware types. Set the hardware types allowed by ironic-conductor, the hardware interfaces, and their defaults:

```shell
[DEFAULT]
enabled_hardware_types = ipmi
enabled_boot_interfaces = pxe
enabled_deploy_interfaces = direct,iscsi
enabled_inspect_interfaces = inspector
enabled_management_interfaces = ipmitool
enabled_power_interfaces = ipmitool
default_deploy_interface = direct
default_network_interface = neutron
```

If any driver using Direct deploy is enabled, a Swift backend for the Image service must be installed and configured; the Ceph Object Gateway (RADOS Gateway) is also supported as an Image service backend.

6. Restart the ironic-conductor service:

```shell
sudo systemctl restart openstack-ironic-conductor
```

Configure the ironic-inspector service (configuration file /etc/ironic-inspector/inspector.conf).

1. Create the database, replacing IRONIC_INSPECTOR_DBPASSWORD with a suitable password:

```shell
mysql -u root -p
CREATE DATABASE ironic_inspector CHARACTER SET utf8;
GRANT ALL PRIVILEGES ON ironic_inspector.* TO 'ironic_inspector'@'localhost' IDENTIFIED BY 'IRONIC_INSPECTOR_DBPASSWORD';
GRANT ALL PRIVILEGES ON ironic_inspector.* TO 'ironic_inspector'@'%' IDENTIFIED BY 'IRONIC_INSPECTOR_DBPASSWORD';
```

2. Point the `[database] connection` option at the database as shown for the other services, replacing IRONIC_INSPECTOR_DBPASSWORD with the ironic_inspector user's password and DB_IP with the database server's IP; the [database] section also tunes the SQLAlchemy pool size, timeouts, and retry settings.

3. Configure the message queue `transport_url` in [DEFAULT] as for the other services.

4. Configure keystone authentication.
The [DEFAULT] section sets `auth_strategy = keystone`, a 900-second timeout, the rootwrap configuration, the logging context format, /var/log/ironic-inspector and /var/lib/ironic-inspector as the log and state paths; the [ironic] and [keystone_authtoken] sections carry the ironic API endpoint (http://IRONIC_API_HOST_ADDRRESS:6385), the identity service auth_url/www_authenticate_uri, RegionOne, the service project, the ironic and ironic_inspector service users and passwords, the memcached servers, and the token cache time; [processing] adds the `local_link_connection` and `lldp_basic` hooks to the defaults, stores ramdisk logs under /var/log/ironic-inspector/ramdisk, and leaves nodes powered on; [pxe_filter] uses the iptables driver; [capabilities] sets `boot_mode=True`.

5. Configure the ironic-inspector dnsmasq service (configuration file /etc/ironic-inspector/dnsmasq.conf):

```shell
port=0
interface=enp3s0                           # replace with the actual listening interface
dhcp-range=172.20.19.100,172.20.19.110     # replace with the actual DHCP address range
bind-interfaces
enable-tftp
dhcp-match=set:efi,option:client-arch,7
dhcp-match=set:efi,option:client-arch,9
dhcp-match=aarch64, option:client-arch,11
dhcp-boot=tag:aarch64,grubaa64.efi
dhcp-boot=tag:!aarch64,tag:efi,grubx64.efi
dhcp-boot=tag:!aarch64,tag:!efi,pxelinux.0
tftp-root=/tftpboot                        # replace with the actual tftpboot directory
log-facility=/var/log/dnsmasq.log
```

6. Disable DHCP on the ironic provisioning subnet:

```shell
openstack subnet set --no-dhcp 72426e89-f552-4dc4-9ac7-c4e131ce7f3c
```

7. Initialize the ironic-inspector database (run on the control node):

```shell
ironic-inspector-dbsync --config-file /etc/ironic-inspector/inspector.conf upgrade
```

8. Start the services:

```shell
systemctl enable --now openstack-ironic-inspector.service
systemctl enable --now openstack-ironic-inspector-dnsmasq.service
```

Configure the httpd service used for deployment. Create the HTTP root directory for ironic and set its owner and group; the path must match the `http_root` option in the [deploy] section of /etc/ironic/ironic.conf:

```shell
mkdir -p /var/lib/ironic/httproot
chown ironic.ironic /var/lib/ironic/httproot
```

Install httpd (skip if already installed) and create /etc/httpd/conf.d/openstack-ironic-httpd.conf listening on port 8080 with `DocumentRoot "/var/lib/ironic/httproot"`, its own error and access logs under /var/log/httpd/, UTF-8 as the default charset, and sendfile enabled. Note that the listening port must match the port in the `http_url` option of the [deploy] section of /etc/ironic/ironic.conf. Then restart httpd:

```shell
yum install httpd -y
systemctl restart httpd
```
Build the deploy ramdisk image. The Wallaby ramdisk image can be built with the ironic-python-agent service or the disk-image-builder tool, or with the latest community ironic-python-agent-builder; users may also choose other tools. To use the native Wallaby tools, install the corresponding package and see the official documentation for usage:

```shell
yum install openstack-ironic-python-agent
# or
yum install diskimage-builder
```

The complete procedure for building the deploy image with ironic-python-agent-builder follows.

Install ironic-python-agent-builder, adjust the python interpreter used by /usr/bin/yum and /usr/libexec/urlgrabber-ext-down if necessary, and install the other required tools:

```shell
pip install ironic-python-agent-builder
yum install git
```

Because DIB depends on the `semanage` command, make sure it is available before building the image (`semanage --help`); if the command is missing, find and install the package that provides it:

```shell
yum provides /usr/sbin/semanage
yum install policycoreutils-python
```

Build the image. On arm architectures, export `ARCH=aarch64` first. Basic usage:

```shell
usage: ironic-python-agent-builder [-h] [-r RELEASE] [-o OUTPUT] [-e ELEMENT]
                                   [-b BRANCH] [-v] [--extra-args EXTRA_ARGS]
                                   distribution

positional arguments:
  distribution          Distribution to use

optional arguments:
  -h, --help            show this help message and exit
  -r RELEASE, --release RELEASE
                        Distribution release to use
  -o OUTPUT, --output OUTPUT
                        Output base file name
  -e ELEMENT, --element ELEMENT
                        Additional DIB element to use
  -b BRANCH, --branch BRANCH
                        If set, override the branch that is used for
                        ironic-python-agent and requirements
  -v, --verbose         Enable verbose logging in diskimage-builder
  --extra-args EXTRA_ARGS
                        Extra arguments to pass to diskimage-builder
```

For example:

```shell
ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky
```

To allow SSH login into the ramdisk, initialize the environment variables and then build the image:

```shell
export DIB_DEV_USER_USERNAME=ipa
export DIB_DEV_USER_PWDLESS_SUDO=yes
export DIB_DEV_USER_PASSWORD='123'
ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky -e selinux-permissive -e devuser
```

To build from a specific code repository, initialize the corresponding environment variables before building:
```shell
# Specify the repository address and version
DIB_REPOLOCATION_ironic_python_agent=git@172.20.2.149:liuzz/ironic-python-agent.git
DIB_REPOREF_ironic_python_agent=origin/develop

# Clone the code directly from gerrit
DIB_REPOLOCATION_ironic_python_agent=https://review.opendev.org/openstack/ironic-python-agent
DIB_REPOREF_ironic_python_agent=refs/changes/43/701043/1
```

Reference: [source-repositories](https://docs.openstack.org/diskimage-builder/latest/elements/source-repositories/README.html).

Specifying the repository address and version has been verified to work.

Note

The PXE configuration file template in native OpenStack does not support the arm64 architecture, so the native OpenStack code has to be modified by yourself:

In the Wallaby release the community ironic still does not support UEFI PXE boot on arm64. The symptom is that the generated grub.cfg file (usually located under /tftpboot/) has the wrong format, which causes the PXE boot to fail. The incorrectly generated configuration file looks like this:

![ironic-err](../../img/install/ironic-err.png)

As shown in the figure above, on the arm architecture the commands that load the vmlinux and ramdisk images are `linux` and `initrd` respectively; the highlighted commands in the figure are the UEFI PXE boot commands for the x86 architecture. Users need to modify the code logic that generates grub.cfg themselves.
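For reference, a working aarch64 entry looks roughly like the sketch below. This is illustrative only: the node directory, file names and kernel parameters are placeholders and must match what ironic actually writes under /tftpboot for your node.

```
set default=deploy
set timeout=5

menuentry "deploy" {
    # aarch64 UEFI PXE uses the plain `linux` and `initrd` commands
    linux  /tftpboot/NODE_UUID/deploy_kernel text ipa-insecure=1
    initrd /tftpboot/NODE_UUID/deploy_ramdisk
}
```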
TLS error when ironic sends requests to IPA to query command execution status:

In the Wallaby release, both IPA and ironic enable TLS authentication by default when sending requests to each other. Disable it as described in the official documentation:

1. Modify the ironic configuration file (/etc/ironic/ironic.conf) and add ipa-insecure=1 to the following configuration:

```
[agent]
verify_ca = False

[pxe]
pxe_append_params = nofb nomodeset vga=normal coreos.autologin ipa-insecure=1
```

2. In the ramdisk image, add the IPA configuration file /etc/ironic_python_agent/ironic_python_agent.conf and configure TLS as follows (the /etc/ironic_python_agent directory has to be created in advance):

```
[DEFAULT]
enable_auto_tls = False
```

Set the permissions:

```
chown -R ipa.ipa /etc/ironic_python_agent/
```

3. Modify the service unit file of the IPA service and add the configuration file option:

```shell
vim /usr/lib/systemd/system/ironic-python-agent.service
```

```
[Unit]
Description=Ironic Python Agent
After=network-online.target

[Service]
ExecStartPre=/sbin/modprobe vfat
ExecStart=/usr/local/bin/ironic-python-agent --config-file /etc/ironic_python_agent/ironic_python_agent.conf
Restart=always
RestartSec=30s

[Install]
WantedBy=multi-user.target
```

## Kolla Installation

Kolla provides production-ready containerized deployment for OpenStack services. openEuler 24.03 LTS SP1 introduces the Kolla and Kolla-ansible services.

Installing Kolla is very simple; just install the corresponding RPM packages:

```shell
yum install openstack-kolla openstack-kolla-ansible
```

After the installation, the `kolla-ansible`, `kolla-build`, `kolla-genpwd`, `kolla-mergepwd` and related commands are available, as sketched below.
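As a rough illustration of how these commands are typically strung together for an all-in-one deployment (the inventory path and the contents of /etc/kolla/globals.yml depend on your environment and on how the package lays out the example files, so treat this as a sketch rather than a prescribed procedure):

```shell
kolla-genpwd                                    # generate /etc/kolla/passwords.yml
kolla-ansible -i /usr/share/kolla-ansible/ansible/inventory/all-in-one bootstrap-servers
kolla-ansible -i /usr/share/kolla-ansible/ansible/inventory/all-in-one prechecks
kolla-ansible -i /usr/share/kolla-ansible/ansible/inventory/all-in-one deploy
```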
## Trove Installation

Trove is the database service of OpenStack. It is recommended if users consume the database service provided by OpenStack; otherwise it does not need to be installed.

1. Set up the database

The database service stores information in a database. Create a `trove` database that the `trove` user can access, replacing `TROVE_DBPASS` with a suitable password:

```shell
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE trove CHARACTER SET utf8;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'localhost' \
IDENTIFIED BY 'TROVE_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'%' \
IDENTIFIED BY 'TROVE_DBPASS';
```

2. Create the service user credentials

1) Create the Trove service user:

```shell
openstack user create --password TROVE_PASSWORD \
  --email trove@example.com trove
openstack role add --project service --user trove admin
openstack service create --name trove --description "Database service" database
```

Explanation: replace `TROVE_PASSWORD` with the password of the `trove` user.

2) Create the Database service endpoints:

```shell
openstack endpoint create --region RegionOne database public http://controller:8779/v1.0/%\(tenant_id\)s
openstack endpoint create --region RegionOne database internal http://controller:8779/v1.0/%\(tenant_id\)s
openstack endpoint create --region RegionOne database admin http://controller:8779/v1.0/%\(tenant_id\)s
```

3. Install and configure the Trove components

1) Install the Trove packages:

```shell
yum install openstack-trove python-troveclient
```

2) Configure trove.conf:

```shell
vim /etc/trove/trove.conf
```

```
[DEFAULT]
bind_host = TROVE_NODE_IP
log_dir = /var/log/trove
network_driver = trove.network.neutron.NeutronDriver
management_security_groups =
nova_keypair = trove-mgmt
default_datastore = mysql
taskmanager_manager = trove.taskmanager.manager.Manager
trove_api_workers = 5
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
reboot_time_out = 300
usage_timeout = 900
agent_call_high_timeout = 1200
use_syslog = False
debug = True

# Set these if using Neutron Networking
network_driver = trove.network.neutron.NeutronDriver
network_label_regex = .*

transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/

[database]
connection = mysql+pymysql://trove:TROVE_DBPASS@controller/trove

[keystone_authtoken]
project_domain_name = Default
project_name = service
user_domain_name = Default
password = trove
username = trove
auth_url = http://controller:5000/v3/
auth_type = password

[service_credentials]
auth_url = http://controller:5000/v3/
region_name = RegionOne
project_name = service
password = trove
project_domain_name = Default
user_domain_name = Default
username = trove

[mariadb]
tcp_ports = 3306,4444,4567,4568

[mysql]
tcp_ports = 3306

[postgresql]
tcp_ports = 5432
```

Explanation:

- In the `[DEFAULT]` section, `bind_host` is set to the IP of the node where Trove is deployed.
- `nova_compute_url` and `cinder_url` are the endpoints created in Keystone for Nova and Cinder.
- `nova_proxy_XXX` is the information of a user that can access the Nova service; the example above uses the `admin` user.
- `transport_url` is the RabbitMQ connection information; replace `RABBIT_PASS` with the RabbitMQ password.
- `connection` in the `[database]` section is the database information created for Trove in mysql earlier.
- In the Trove user information, replace `TROVE_PASS` with the actual password of the trove user.

3) Configure trove-guestagent.conf:

```shell
vim /etc/trove/trove-guestagent.conf
```

```
[DEFAULT]
log_file = trove-guestagent.log
log_dir = /var/log/trove/
ignore_users = os_admin
control_exchange = trove
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
rpc_backend = rabbit
command_process_timeout = 60
use_syslog = False
debug = True

[service_credentials]
auth_url = http://controller:5000/v3/
region_name = RegionOne
project_name = service
password = TROVE_PASS
project_domain_name = Default
user_domain_name = Default
username = trove

[mysql]
docker_image = your-registry/your-repo/mysql
backup_docker_image = your-registry/your-repo/db-backup-mysql:1.1.0
```

Explanation:

- The guestagent is an independent component of Trove that must be pre-installed in the virtual machine image that Trove boots through Nova. After a database instance is created, the guestagent process starts and reports heartbeats to Trove through the message queue (RabbitMQ), so the RabbitMQ user and password information must be configured here.
- Starting from the Victoria release, Trove uses a single unified image to run the different database types; the database service itself runs in a Docker container inside the guest virtual machine.
- `transport_url` is the RabbitMQ connection information; replace `RABBIT_PASS` with the RabbitMQ password.
- In the Trove user information, replace `TROVE_PASS` with the actual password of the trove user.

4) Populate the Trove database tables:

```shell
su -s /bin/sh -c "trove-manage db_sync" trove
```

4. Finish the installation

Configure the Trove services to start on boot:

```shell
systemctl enable openstack-trove-api.service \
  openstack-trove-taskmanager.service \
  openstack-trove-conductor.service
```

Start the services:

```shell
systemctl start openstack-trove-api.service \
  openstack-trove-taskmanager.service \
  openstack-trove-conductor.service
```

## Swift Installation

Swift provides an elastic, scalable and highly available distributed object storage service, suitable for storing large amounts of unstructured data.

Create the service credentials and API endpoints.

Create the service credentials:

```shell
# Create the swift user:
openstack user create --domain default --password-prompt swift
# Add the admin role to the swift user:
openstack role add --project service --user swift admin
# Create the swift service entity:
openstack service create --name swift --description "OpenStack Object Storage" object-store
```

Create the swift API endpoints:

```shell
openstack endpoint create --region RegionOne object-store public http://controller:8080/v1/AUTH_%\(project_id\)s
openstack endpoint create --region RegionOne object-store internal http://controller:8080/v1/AUTH_%\(project_id\)s
openstack endpoint create --region RegionOne object-store admin http://controller:8080/v1
```

Install the packages (CTL):

```shell
yum install openstack-swift-proxy python3-swiftclient python3-keystoneclient python3-keystonemiddleware memcached
```

Configure proxy-server.

The Swift RPM package already contains a basically usable proxy-server.conf; only the IP and the swift password in it need to be changed manually.

***Note***

**Replace the password with the password you chose for the swift user in the Identity service.**

4. Install and configure the storage nodes (STG)

Install the supporting packages:

```shell
yum install xfsprogs rsync
```

Format the /dev/vdb and /dev/vdc devices as XFS:

```shell
mkfs.xfs /dev/vdb
mkfs.xfs /dev/vdc
```

Create the mount point directory structure:

```shell
mkdir -p /srv/node/vdb
mkdir -p /srv/node/vdc
```

Find the UUIDs of the new partitions:

```shell
blkid
```

Edit the /etc/fstab file and add the following entries to it:

```shell
UUID="" /srv/node/vdb xfs noatime 0 2
UUID="" /srv/node/vdc xfs noatime 0 2
```

Mount the devices:

```shell
mount /srv/node/vdb
mount /srv/node/vdc
```

***Note***

**If you do not need replication for resiliency, only one device needs to be created in the steps above, and the rsync configuration below can be skipped.**

(Optional) Create or edit the /etc/rsyncd.conf file so that it contains the following:
```
[DEFAULT]
uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
address = MANAGEMENT_INTERFACE_IP_ADDRESS

[account]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/account.lock

[container]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/container.lock

[object]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/object.lock
```

**Replace MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network on the storage node.**

Start the rsyncd service and configure it to start at boot:

```shell
systemctl enable rsyncd.service
systemctl start rsyncd.service
```

5. Install and configure the components on the storage nodes (STG)

Install the packages:

```shell
yum install openstack-swift-account openstack-swift-container openstack-swift-object
```

Edit the account-server.conf, container-server.conf and object-server.conf files in the /etc/swift directory and replace bind_ip with the IP address of the management network on the storage node.

Ensure correct ownership of the mount point directory structure:

```shell
chown -R swift:swift /srv/node
```

Create the recon directory and ensure it has the correct ownership:

```shell
mkdir -p /var/cache/swift
chown -R root:swift /var/cache/swift
chmod -R 775 /var/cache/swift
```

6. Create the account ring (CTL)

Change to the /etc/swift directory:

```shell
cd /etc/swift
```

Create the base account.builder file:

```shell
swift-ring-builder account.builder create 10 1 1
```

Add each storage node to the ring:

```shell
swift-ring-builder account.builder add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6202 --device DEVICE_NAME --weight DEVICE_WEIGHT
```

**Replace STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network on the storage node. Replace DEVICE_NAME with the name of a storage device on the same storage node.**

***Note***

**Repeat this command for every storage device on every storage node.**

Verify the ring contents:

```shell
swift-ring-builder account.builder
```

Rebalance the ring:

```shell
swift-ring-builder account.builder rebalance
```

7. Create the container ring (CTL)

Change to the `/etc/swift` directory.

Create the base `container.builder` file:

```shell
swift-ring-builder container.builder create 10 1 1
```

Add each storage node to the ring:

```shell
swift-ring-builder container.builder \
  add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6201 \
  --device DEVICE_NAME --weight 100
```

**Replace STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network on the storage node. Replace DEVICE_NAME with the name of a storage device on the same storage node.**

***Note***

**Repeat this command for every storage device on every storage node.**

Verify the ring contents:
```shell
swift-ring-builder container.builder
```

Rebalance the ring:

```shell
swift-ring-builder container.builder rebalance
```

8. Create the object ring (CTL)

Change to the `/etc/swift` directory.

Create the base `object.builder` file:

```shell
swift-ring-builder object.builder create 10 1 1
```

Add each storage node to the ring:

```shell
swift-ring-builder object.builder \
  add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6200 \
  --device DEVICE_NAME --weight 100
```

**Replace STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network on the storage node. Replace DEVICE_NAME with the name of a storage device on the same storage node.**

***Note***

**Repeat this command for every storage device on every storage node.**

Verify the ring contents:

```shell
swift-ring-builder object.builder
```

Rebalance the ring:

```shell
swift-ring-builder object.builder rebalance
```

Distribute the ring configuration files:

Copy the `account.ring.gz`, `container.ring.gz` and `object.ring.gz` files to the `/etc/swift` directory on every storage node and on any other node that runs the proxy service.
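One way to do this, assuming root SSH access between the nodes and that the proxy runs on the controller, is a simple copy such as the following (repeat for every storage node and any additional proxy node):

```shell
# Illustrative only: replace "storage" with each target node's hostname or IP.
scp /etc/swift/{account,container,object}.ring.gz root@storage:/etc/swift/
```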
9. Finish the installation

Edit the /etc/swift/swift.conf file:

```
[swift-hash]
swift_hash_path_suffix = test-hash
swift_hash_path_prefix = test-hash

[storage-policy:0]
name = Policy-0
default = yes
```

Replace test-hash with unique values.

Copy the swift.conf file to the /etc/swift directory on every storage node and on any other node that runs the proxy service.

On all nodes, ensure correct ownership of the configuration directory:

```shell
chown -R root:swift /etc/swift
```

On the controller node and on any other node that runs the proxy service, start the object storage proxy service and its dependencies, and configure them to start at boot:

```shell
systemctl enable openstack-swift-proxy.service memcached.service
systemctl start openstack-swift-proxy.service memcached.service
```

On the storage nodes, start the object storage services and configure them to start at boot:

```shell
systemctl enable openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service
systemctl start openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service
systemctl enable openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service
systemctl start openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service
systemctl enable openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service
systemctl start openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service
```

## Cyborg Installation

Cyborg provides support for accelerator devices in OpenStack, including GPU, FPGA, ASIC, NP, SoCs, NVMe/NOF SSDs, ODP, DPDK/SPDK and so on.

1. Initialize the corresponding database

```
CREATE DATABASE cyborg;
GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'localhost' IDENTIFIED BY 'CYBORG_DBPASS';
GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'%' IDENTIFIED BY 'CYBORG_DBPASS';
```

2. Create the corresponding Keystone resource objects

```shell
$ openstack user create --domain default --password-prompt cyborg
$ openstack role add --project service --user cyborg admin
$ openstack service create --name cyborg --description "Acceleration Service" accelerator
$ openstack endpoint create --region RegionOne \
  accelerator public http://:6666/v1
$ openstack endpoint create --region RegionOne \
  accelerator internal http://:6666/v1
$ openstack endpoint create --region RegionOne \
  accelerator admin http://:6666/v1
```

3. Install Cyborg

```shell
yum install openstack-cyborg
```

4. Configure Cyborg

Modify /etc/cyborg/cyborg.conf:

```
[DEFAULT]
transport_url = rabbit://%RABBITMQ_USER%:%RABBITMQ_PASSWORD%@%OPENSTACK_HOST_IP%:5672/
use_syslog = False
state_path = /var/lib/cyborg
debug = True

[database]
connection = mysql+pymysql://%DATABASE_USER%:%DATABASE_PASSWORD%@%OPENSTACK_HOST_IP%/cyborg

[service_catalog]
project_domain_id = default
user_domain_id = default
project_name = service
password = PASSWORD
username = cyborg
auth_url = http://%OPENSTACK_HOST_IP%/identity
auth_type = password

[placement]
project_domain_name = Default
project_name = service
user_domain_name = Default
password = PASSWORD
username = placement
auth_url = http://%OPENSTACK_HOST_IP%/identity
auth_type = password

[keystone_authtoken]
memcached_servers = localhost:11211
project_domain_name = Default
project_name = service
user_domain_name = Default
password = PASSWORD
username = cyborg
auth_url = http://%OPENSTACK_HOST_IP%/identity
auth_type = password
```

Adjust the corresponding usernames, passwords, IPs and other information as appropriate.

5. Synchronize the database tables

```shell
cyborg-dbsync --config-file /etc/cyborg/cyborg.conf upgrade
```

6. Start the Cyborg services

```shell
systemctl enable openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent
systemctl start openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent
```
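As an optional sanity check, the accelerator API can be queried once the services are up. This is a minimal sketch and assumes the python3-cyborgclient OSC plugin is installed and admin credentials are loaded in the shell:

```shell
openstack accelerator device list   # lists accelerator devices discovered by the cyborg agent
```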
## Aodh Installation

1. Create the database

```
CREATE DATABASE aodh;
GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'localhost' IDENTIFIED BY 'AODH_DBPASS';
GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'%' IDENTIFIED BY 'AODH_DBPASS';
```

2. Create the corresponding Keystone resource objects

```shell
openstack user create --domain default --password-prompt aodh
openstack role add --project service --user aodh admin
openstack service create --name aodh --description "Telemetry" alarming
openstack endpoint create --region RegionOne alarming public http://controller:8042
openstack endpoint create --region RegionOne alarming internal http://controller:8042
openstack endpoint create --region RegionOne alarming admin http://controller:8042
```

3. Install Aodh

```shell
yum install openstack-aodh-api openstack-aodh-evaluator openstack-aodh-notifier openstack-aodh-listener openstack-aodh-expirer python3-aodhclient
```

Note

The python3-pyparsing package that aodh depends on in the openEuler OS repository is not suitable; the OpenStack-specific version must be installed over it. You can use

```shell
yum list | grep pyparsing | grep OpenStack | awk '{print $2}'
```

to obtain the corresponding version VERSION, and then run

```shell
yum install -y python3-pyparsing-VERSION
```

to overwrite it with the suitable pyparsing.

4. Modify the configuration file

```
[database]
connection = mysql+pymysql://aodh:AODH_DBPASS@controller/aodh

[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = aodh
password = AODH_PASS

[service_credentials]
auth_type = password
auth_url = http://controller:5000/v3
project_domain_id = default
user_domain_id = default
project_name = service
username = aodh
password = AODH_PASS
interface = internalURL
region_name = RegionOne
```

5. Initialize the database

```shell
aodh-dbsync
```

6. Start the Aodh services

```shell
systemctl enable openstack-aodh-api.service openstack-aodh-evaluator.service openstack-aodh-notifier.service openstack-aodh-listener.service
systemctl start openstack-aodh-api.service openstack-aodh-evaluator.service openstack-aodh-notifier.service openstack-aodh-listener.service
```

## Gnocchi Installation

1. Create the database

```
CREATE DATABASE gnocchi;
GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'localhost' IDENTIFIED BY 'GNOCCHI_DBPASS';
GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'%' IDENTIFIED BY 'GNOCCHI_DBPASS';
```

2. Create the corresponding Keystone resource objects

```shell
openstack user create --domain default --password-prompt gnocchi
openstack role add --project service --user gnocchi admin
openstack service create --name gnocchi --description "Metric Service" metric
openstack endpoint create --region RegionOne metric public http://controller:8041
openstack endpoint create --region RegionOne metric internal http://controller:8041
openstack endpoint create --region RegionOne metric admin http://controller:8041
```

3. Install Gnocchi

```shell
yum install openstack-gnocchi-api openstack-gnocchi-metricd python3-gnocchiclient
```

4. Modify the configuration file /etc/gnocchi/gnocchi.conf

```
[api]
auth_mode = keystone
port = 8041
uwsgi_mode = http-socket

[keystone_authtoken]
auth_type = password
auth_url = http://controller:5000/v3
project_domain_name = Default
user_domain_name = Default
project_name = service
username = gnocchi
password = GNOCCHI_PASS
interface = internalURL
region_name = RegionOne

[indexer]
url = mysql+pymysql://gnocchi:GNOCCHI_DBPASS@controller/gnocchi

[storage]
# coordination_url is not required but specifying one will improve
# performance with better workload division across workers.
coordination_url = redis://controller:6379
file_basepath = /var/lib/gnocchi
driver = file
```

5. Initialize the database

```shell
gnocchi-upgrade
```

6. Start the Gnocchi services

```shell
systemctl enable openstack-gnocchi-api.service openstack-gnocchi-metricd.service
systemctl start openstack-gnocchi-api.service openstack-gnocchi-metricd.service
```

## Ceilometer Installation

1. Create the corresponding Keystone resource objects

```shell
openstack user create --domain default --password-prompt ceilometer
openstack role add --project service --user ceilometer admin
openstack service create --name ceilometer --description "Telemetry" metering
```

2. Install Ceilometer

```shell
yum install openstack-ceilometer-notification openstack-ceilometer-central
```

3. Modify the configuration file /etc/ceilometer/pipeline.yaml

```
publishers:
  # set address of Gnocchi
  # + filter out Gnocchi-related activity meters (Swift driver)
  # + set default archive policy
  - gnocchi://?filter_project=service&archive_policy=low
```

4. Modify the configuration file /etc/ceilometer/ceilometer.conf

```
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller

[service_credentials]
auth_type = password
auth_url = http://controller:5000/v3
project_domain_id = default
user_domain_id = default
project_name = service
username = ceilometer
password = CEILOMETER_PASS
interface = internalURL
region_name = RegionOne
```

5. Initialize the database

```shell
ceilometer-upgrade
```

6. Start the Ceilometer services

```shell
systemctl enable openstack-ceilometer-notification.service openstack-ceilometer-central.service
systemctl start openstack-ceilometer-notification.service openstack-ceilometer-central.service
```
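As an optional sanity check of the telemetry chain, the clients installed above (python3-gnocchiclient and python3-aodhclient) can be used once admin credentials are loaded. This is a minimal sketch; on a fresh deployment the lists may simply be empty:

```shell
gnocchi resource list     # resources recorded by Ceilometer through Gnocchi
openstack alarm list      # Aodh alarms
```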
## Heat Installation

1. Create the heat database and grant it the correct access privileges, replacing HEAT_DBPASS with a suitable password

```
CREATE DATABASE heat;
GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'localhost' IDENTIFIED BY 'HEAT_DBPASS';
GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'%' IDENTIFIED BY 'HEAT_DBPASS';
```

2. Create the service credentials: create the heat user and add the admin role to it

```shell
openstack user create --domain default --password-prompt heat
openstack role add --project service --user heat admin
```

3. Create the heat and heat-cfn services and their API endpoints

```shell
openstack service create --name heat --description "Orchestration" orchestration
openstack service create --name heat-cfn --description "Orchestration" cloudformation
openstack endpoint create --region RegionOne orchestration public http://controller:8004/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne orchestration internal http://controller:8004/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne orchestration admin http://controller:8004/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne cloudformation public http://controller:8000/v1
openstack endpoint create --region RegionOne cloudformation internal http://controller:8000/v1
openstack endpoint create --region RegionOne cloudformation admin http://controller:8000/v1
```

4. Create the additional information required for stack management, including the heat domain, its domain admin user heat_domain_admin, the heat_stack_owner role and the heat_stack_user role

```shell
openstack user create --domain heat --password-prompt heat_domain_admin
openstack role add --domain heat --user-domain heat --user heat_domain_admin admin
openstack role create heat_stack_owner
openstack role create heat_stack_user
```

5. Install the packages

```shell
yum install openstack-heat-api openstack-heat-api-cfn openstack-heat-engine
```

6. Modify the configuration file /etc/heat/heat.conf

```
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
heat_metadata_server_url = http://controller:8000
heat_waitcondition_server_url = http://controller:8000/v1/waitcondition
stack_domain_admin = heat_domain_admin
stack_domain_admin_password = HEAT_DOMAIN_PASS
stack_user_domain_name = heat

[database]
connection = mysql+pymysql://heat:HEAT_DBPASS@controller/heat

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = heat
password = HEAT_PASS

[trustee]
auth_type = password
auth_url = http://controller:5000
username = heat
password = HEAT_PASS
user_domain_name = default

[clients_keystone]
auth_uri = http://controller:5000
```

7. Initialize the heat database tables

```shell
su -s /bin/sh -c "heat-manage db_sync" heat
```

8. Start the services

```shell
systemctl enable openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service
systemctl start openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service
```
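To confirm that the heat API and engine respond, a minimal smoke test such as the following can be used. This is only a sketch: it assumes admin credentials are loaded, and it deliberately uses a built-in Heat resource so that no image or network is required; the template path and stack name are arbitrary examples.

```shell
cat > /tmp/test-stack.yaml << 'EOF'
heat_template_version: 2018-08-31
resources:
  demo:
    type: OS::Heat::RandomString
    properties:
      length: 8
EOF
openstack stack create -t /tmp/test-stack.yaml demo-stack
openstack stack list
```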
## Rapid Deployment with the OpenStack SIG Development Tool oos

oos (openEuler OpenStack SIG) is the command-line tool provided by the OpenStack SIG. Its `oos env` family of commands provides ansible scripts for one-click deployment of OpenStack (either all in one or a three-node cluster), which users can use to quickly deploy an openEuler RPM based OpenStack environment.

The oos tool supports two ways of deploying an OpenStack environment: connecting to a cloud provider (currently only the Huawei Cloud provider is supported) and managing existing hosts. The following uses an all-in-one OpenStack deployment on Huawei Cloud as an example to explain how to use the oos tool.

Install the oos tool:

```shell
yum install openstack-sig-tool
```

Configure the information for the Huawei Cloud provider.

Open the /usr/local/etc/oos/oos.conf file and modify the configuration to match the Huawei Cloud resources you own:

```
[huaweicloud]
ak = 
sk = 
region = ap-southeast-3
root_volume_size = 100
data_volume_size = 100
security_group_name = oos
image_format = openEuler-%%(release)s-%%(arch)s
vpc_name = oos_vpc
subnet1_name = oos_subnet1
subnet2_name = oos_subnet2
```

Configure the OpenStack environment information.

Open the /usr/local/etc/oos/oos.conf file and modify the configuration according to the current machine environment and your requirements. The content is as follows:

```
[environment]
mysql_root_password = root
mysql_project_password = root
rabbitmq_password = root
project_identity_password = root
enabled_service = keystone,neutron,cinder,placement,nova,glance,horizon,aodh,ceilometer,cyborg,gnocchi,kolla,heat,swift,trove,tempest
neutron_provider_interface_name = br-ex
default_ext_subnet_range = 10.100.100.0/24
default_ext_subnet_gateway = 10.100.100.1
neutron_dataplane_interface_name = eth1
cinder_block_device = vdb
swift_storage_devices = vdc
swift_hash_path_suffix = ash
swift_hash_path_prefix = has
glance_api_workers = 2
cinder_api_workers = 2
nova_api_workers = 2
nova_metadata_api_workers = 2
nova_conductor_workers = 2
nova_scheduler_workers = 2
neutron_api_workers = 2
horizon_allowed_host = *
kolla_openeuler_plugin = false
```

Key configuration options:

| Option | Description |
|--------|-------------|
| enabled_service | list of services to install; trim it according to your needs |
| neutron_provider_interface_name | name of the neutron L3 bridge |
| default_ext_subnet_range | IP range of the neutron private network |
| default_ext_subnet_gateway | gateway of the neutron private network |
| neutron_dataplane_interface_name | NIC used by neutron; a new, dedicated NIC is recommended to avoid conflicts with existing NICs and to prevent the all-in-one host from losing connectivity |
| cinder_block_device | name of the block device used by cinder |
| swift_storage_devices | name of the block device used by swift |
| kolla_openeuler_plugin | whether to enable the kolla plugin; when set to True, kolla supports deploying openEuler containers |

Create an openEuler 24.03-LTS-SP1 x86_64 virtual machine on Huawei Cloud, which will be used for the all-in-one OpenStack deployment:

```shell
# sshpass is used during `oos env create` to set up password-less access to the target virtual machine
dnf install sshpass
oos env create -r 24.03-lts-sp1 -f small -a x86 -n test-oos all_in_one
```

The detailed parameters can be viewed with the `oos env create --help` command.

Deploy the OpenStack all-in-one environment:

```shell
oos env setup test-oos -r wallaby
```

The detailed parameters can be viewed with the `oos env setup --help` command.

Initialize the tempest environment.

If you want to use this environment to run tempest tests, you can execute the `oos env init` command, which automatically creates the OpenStack resources that tempest needs:

```shell
oos env init test-oos
```

After the command succeeds, a mytest directory is generated in the user's home directory; enter it and you can run the tempest run command, as sketched below.
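As a rough illustration (the directory name comes from the step above; the exact location depends on the user that ran `oos env init`, and `--smoke` is just one commonly used subset):

```shell
cd ~/mytest
tempest run --smoke
```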
If the OpenStack environment is deployed by managing an existing host instead, the overall flow is the same as when connecting to Huawei Cloud above: steps 1, 3, 5 and 6 are unchanged, step 2 (configuring the Huawei Cloud provider information) is removed, and step 4 changes from creating a virtual machine on Huawei Cloud to managing the host:

```shell
# sshpass is used during `oos env create` to set up password-less access to the target host
dnf install sshpass
oos env manage -r 24.03-lts-sp1 -i TARGET_MACHINE_IP -p TARGET_MACHINE_PASSWD -n test-oos
```

Replace TARGET_MACHINE_IP with the IP of the target machine and TARGET_MACHINE_PASSWD with the password of the target machine. The detailed parameters can be viewed with the `oos env manage --help` command.

# OpenStack Antelope Deployment Guide

- RPM-based deployment
    - Environment preparation
    - Clock synchronization
    - Installing the database
    - Installing the message queue
    - Installing the cache service
    - Deploying the services: Keystone, Glance, Placement, Nova, Neutron, Cinder, Horizon, Ironic, Trove, Swift, Cyborg, Aodh, Gnocchi, Ceilometer, Heat, Tempest
- Deployment with the OpenStack SIG development tool oos

This document is the OpenStack deployment guide for openEuler 24.03 LTS SP2 written by the openEuler OpenStack SIG; the content is provided by SIG contributors. If you have any questions or find any problems while reading it, please contact the SIG maintainers or submit an issue directly.

Conventions

This section describes some conventions used throughout the document.

| Name | Definition |
|------|------------|
| RABBIT_PASS | rabbitmq password, set by the user, used in the configuration of each OpenStack service |
| CINDER_PASS | password of the cinder keystone user, used in the cinder configuration |
| CINDER_DBPASS | cinder database password, used in the cinder configuration |
| KEYSTONE_DBPASS | keystone database password, used in the keystone configuration |
| GLANCE_PASS | password of the glance keystone user, used in the glance configuration |
| GLANCE_DBPASS | glance database password, used in the glance configuration |
| HEAT_PASS | password of the heat user registered in keystone, used in the heat configuration |
| HEAT_DBPASS | heat database password, used in the heat configuration |
| CYBORG_PASS | password of the cyborg user registered in keystone, used in the cyborg configuration |
| CYBORG_DBPASS | cyborg database password, used in the cyborg configuration |
| NEUTRON_PASS | password of the neutron user registered in keystone, used in the neutron configuration |
| NEUTRON_DBPASS | neutron database password, used in the neutron configuration |
| PROVIDER_INTERFACE_NAME | name of the physical network interface, used in the neutron configuration |
| OVERLAY_INTERFACE_IP_ADDRESS | management IP address of the Controller node, used in the neutron configuration |
| METADATA_SECRET | secret for the metadata proxy, used in the nova and neutron configurations |
| PLACEMENT_DBPASS | placement database password, used in the placement configuration |
| PLACEMENT_PASS | password of the placement user registered in keystone, used in the placement configuration |
| NOVA_DBPASS | nova database password, used in the nova configuration |
| NOVA_PASS | password of the nova user registered in keystone, used in the nova, cyborg, neutron and other configurations |
| IRONIC_DBPASS | ironic database password, used in the ironic configuration |
| IRONIC_PASS | password of the ironic user registered in keystone, used in the ironic configuration |
| IRONIC_INSPECTOR_DBPASS | ironic-inspector database password, used in the ironic-inspector configuration |
| IRONIC_INSPECTOR_PASS | password of the ironic-inspector user registered in keystone, used in the ironic-inspector configuration |

The OpenStack SIG provides several ways of deploying OpenStack on openEuler to cover different user scenarios; choose the one that suits your needs.

## RPM-based Deployment

### Environment Preparation

This document deploys OpenStack on the classic three-node topology. The three nodes are the control node (Controller), the compute node (Compute) and the storage node (Storage). The storage node usually runs only the storage services; with limited resources, you can skip the dedicated storage node and deploy its services on the compute node instead.

First prepare three openEuler 24.03 LTS SP2 environments. Depending on your environment, download the corresponding image and install it: ISO image, qcow2 image.

The installation below follows this topology:

- controller: 192.168.0.2
- compute: 192.168.0.3
- storage: 192.168.0.4

If the IPs of your environment differ, modify the corresponding configuration files according to your environment.

The three-node service topology of this document is shown in the figure below (it only contains the core services Keystone, Glance, Nova, Cinder and Neutron; for the other services refer to the corresponding deployment sections).

Before the actual deployment, perform the following configuration and checks on every node:

Configure the official openEuler 24.03 LTS SP2 yum repositories; the EPOL repository has to be enabled to support OpenStack:

```shell
yum update
yum install openstack-release-antelope
yum clean all && yum makecache
```

Note: if the yum repositories of your environment do not have EPOL enabled, configure EPOL as well and make sure it is present, as shown below.

```shell
vi /etc/yum.repos.d/openEuler.repo
```

```
[EPOL]
name=EPOL
baseurl=http://repo.openeuler.org/openEuler-24.03-LTS-SP2/EPOL/main/$basearch/
enabled=1
gpgcheck=1
gpgkey=http://repo.openeuler.org/openEuler-24.03-LTS-SP2/OS/$basearch/RPM-GPG-KEY-openEuler
```
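As an optional sanity check before continuing, you can confirm that the repositories resolve and that the Antelope packages are visible (openstack-keystone is just one example package from the SIG repositories):

```shell
dnf repolist                          # EPOL should appear in the list
dnf info openstack-keystone | head -n 5
```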
Modify the hostnames and the host mappings.

Modify the hostname on each node; taking controller as an example:

```shell
hostnamectl set-hostname controller
```

```shell
vi /etc/hostname
```

Change the content to controller.

Then modify the /etc/hosts file on every node and add the following entries:

```
192.168.0.2 controller
192.168.0.3 compute
192.168.0.4 storage
```

### Clock Synchronization

A cluster environment requires the time on every node to be consistent, which is usually guaranteed by clock synchronization software. This document uses chrony. The steps are as follows:

Controller node:

Install the service:

```shell
dnf install chrony
```

Modify the /etc/chrony.conf configuration file and add one line:

```
# which IPs are allowed to synchronize their clocks from this node
allow 192.168.0.0/24
```

Restart the service:

```shell
systemctl restart chronyd
```

Other nodes:

Install the service:

```shell
dnf install chrony
```

Modify the /etc/chrony.conf configuration file and add one line:

```
# NTP_SERVER is the controller IP, meaning the time is obtained from that machine. Here we use 192.168.0.2,
# or the controller name configured in /etc/hosts.
server NTP_SERVER iburst
```

At the same time, comment out the line `pool pool.ntp.org iburst`, so that the clock is not synchronized from the public internet.

Restart the service:

```shell
systemctl restart chronyd
```

After the configuration is done, check the result: run `chronyc sources` on the non-controller nodes. Output similar to the following means the clock is successfully synchronized from the controller:

```
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^* 192.168.0.2                   4   6     7     0  -1406ns[  +55us] +/-   16ms
```

### Install the Database

The database is installed on the control node; mariadb is recommended here.

Install the packages:

```shell
dnf install mariadb-config mariadb mariadb-server python3-PyMySQL
```

Add the configuration file /etc/my.cnf.d/openstack.cnf with the following content:

```
[mysqld]
bind-address = 192.168.0.2
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
```

Start the server:

```shell
systemctl start mariadb
```

Initialize the database and follow the prompts:

```shell
mysql_secure_installation
```

An example session follows:
```
NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MariaDB
      SERVERS IN PRODUCTION USE!  PLEASE READ EACH STEP CAREFULLY!

In order to log into MariaDB to secure it, we'll need the current
password for the root user. If you've just installed MariaDB, and
haven't set the root password yet, you should just press enter here.

# Enter the password here; since we are initializing the DB, just press Enter
Enter current password for root (enter for none):
OK, successfully used password, moving on...

Setting the root password or using the unix_socket ensures that nobody
can log into the MariaDB root user without the proper authorisation.

You already have your root account protected, so you can safely answer 'n'.

# Enter N here as prompted
Switch to unix_socket authentication [Y/n] N
Enabled successfully!
Reloading privilege tables..
 ... Success!

You already have your root account protected, so you can safely answer 'n'.

# Enter Y to change the password
Change the root password? [Y/n] Y
New password:
Re-enter new password:
Password updated successfully!
Reloading privilege tables..
 ... Success!

By default, a MariaDB installation has an anonymous user, allowing anyone
to log into MariaDB without having to have a user account created for
them.  This is intended only for testing, and to make the installation
go a bit smoother.  You should remove them before moving into a
production environment.

# Enter Y to remove the anonymous users
Remove anonymous users? [Y/n] Y
 ... Success!

Normally, root should only be allowed to connect from 'localhost'.  This
ensures that someone cannot guess at the root password from the network.

# Enter Y to disable remote root login
Disallow root login remotely? [Y/n] Y
 ... Success!

By default, MariaDB comes with a database named 'test' that anyone can
access.  This is also intended only for testing, and should be removed
before moving into a production environment.

# Enter Y to remove the test database
Remove test database and access to it? [Y/n] Y
 - Dropping test database...
 ... Success!
 - Removing privileges on test database...
 ... Success!

Reloading the privilege tables will ensure that all changes made so far
will take effect immediately.

# Enter Y to reload the configuration
Reload privilege tables now? [Y/n] Y
 ... Success!

Cleaning up...

All done!  If you've completed all of the above steps, your MariaDB
installation should now be secure.
```

Verify: using the password set in the fourth step, check that you can log in to mariadb:

```shell
mysql -uroot -p
```
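Optionally, you can also confirm that the settings from /etc/my.cnf.d/openstack.cnf are in effect; for example:

```shell
mysql -uroot -p -e "SHOW VARIABLES LIKE 'max_connections';"   # should report 4096
```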
### Install the Message Queue

The message queue is installed on the control node; rabbitmq is recommended here.

Install the packages:

```shell
dnf install rabbitmq-server
```

Start the service:

```shell
systemctl start rabbitmq-server
```

Configure the openstack user. `RABBIT_PASS` is the password the OpenStack services use to log in to the message queue; it must match the configuration of each service later on.

```shell
rabbitmqctl add_user openstack RABBIT_PASS
rabbitmqctl set_permissions openstack ".*" ".*" ".*"
```

### Install the Cache Service

The cache service is installed on the control node; Memcached is recommended here.

Install the packages:

```shell
dnf install memcached python3-memcached
```

Modify the configuration file /etc/sysconfig/memcached:

```
OPTIONS="-l 127.0.0.1,::1,controller"
```

Start the service:

```shell
systemctl start memcached
```

### Deploy the Services

#### Keystone

Keystone is the identity service provided by OpenStack and the entry point of the whole OpenStack deployment, providing tenant isolation, user authentication, service discovery and more. It must be installed.

Create the keystone database and grant privileges:

```shell
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE keystone;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
IDENTIFIED BY 'KEYSTONE_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
IDENTIFIED BY 'KEYSTONE_DBPASS';
MariaDB [(none)]> exit
```

Note: replace KEYSTONE_DBPASS to set the password for the Keystone database.

Install the packages:

```shell
dnf install openstack-keystone httpd mod_wsgi
```

Configure keystone:

```shell
vim /etc/keystone/keystone.conf
```

```
[database]
connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone

[token]
provider = fernet
```

Explanation:

- the [database] section configures the database entry point
- the [token] section configures the token provider

Synchronize the database:

```shell
su -s /bin/sh -c "keystone-manage db_sync" keystone
```

Initialize the Fernet key repositories:

```shell
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
```

Bootstrap the service:

```shell
keystone-manage bootstrap --bootstrap-password ADMIN_PASS \
  --bootstrap-admin-url http://controller:5000/v3/ \
  --bootstrap-internal-url http://controller:5000/v3/ \
  --bootstrap-public-url http://controller:5000/v3/ \
  --bootstrap-region-id RegionOne
```

Note: replace ADMIN_PASS to set the password for the admin user.

Configure the Apache HTTP server.

Open httpd.conf and configure it:

```shell
# path of the configuration file that needs to be modified
vim /etc/httpd/conf/httpd.conf
```

```
# modify the following entry; add it if it does not exist
ServerName controller
```

Create a symbolic link:

```shell
ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
```

Explanation: the ServerName entry is configured to refer to the control node.

Note: if the ServerName entry does not exist, it needs to be added.
Start the Apache HTTP service:

```shell
systemctl enable httpd.service
systemctl start httpd.service
```

Create the environment variable configuration:

```shell
cat << EOF >> ~/.admin-openrc
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
EOF
```

Note: replace ADMIN_PASS with the password of the admin user.

Create the domain, projects, users and roles in turn.

python3-openstackclient has to be installed first:

```shell
dnf install python3-openstackclient
```

Import the environment variables:

```shell
source ~/.admin-openrc
```

Create the project service; the domain default was already created during keystone-manage bootstrap:

```shell
openstack domain create --description "An Example Domain" example
openstack project create --domain default --description "Service Project" service
```

Create a (non-admin) project myproject, user myuser and role myrole, and add the role myrole to myproject and myuser:

```shell
openstack project create --domain default --description "Demo Project" myproject
openstack user create --domain default --password-prompt myuser
openstack role create myrole
openstack role add --project myproject --user myuser myrole
```

Verify.

Unset the temporary environment variables OS_AUTH_URL and OS_PASSWORD:

```shell
source ~/.admin-openrc
unset OS_AUTH_URL OS_PASSWORD
```

Request a token for the admin user:

```shell
openstack --os-auth-url http://controller:5000/v3 \
  --os-project-domain-name Default --os-user-domain-name Default \
  --os-project-name admin --os-username admin token issue
```

Request a token for the myuser user:

```shell
openstack --os-auth-url http://controller:5000/v3 \
  --os-project-domain-name Default --os-user-domain-name Default \
  --os-project-name myproject --os-username myuser token issue
```

#### Glance

Glance is the image service provided by OpenStack, responsible for uploading and downloading virtual machine and bare-metal images. It must be installed.

Controller node:

Create the glance database and grant privileges:

```shell
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE glance;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
IDENTIFIED BY 'GLANCE_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
IDENTIFIED BY 'GLANCE_DBPASS';
MariaDB [(none)]> exit
```

Note: replace GLANCE_DBPASS to set the password for the glance database.

Initialize the glance resource objects.

Import the environment variables:

```shell
source ~/.admin-openrc
```

When creating the user, the command line prompts for a password; enter a password of your choice and substitute it wherever GLANCE_PASS appears below.

```shell
openstack user create --domain default --password-prompt glance

User Password:
Repeat User Password:
```

Add the glance user to the service project and assign the admin role:

```shell
openstack role add --project service --user glance admin
```

Create the glance service entity:

```shell
openstack service create --name glance --description "OpenStack Image" image
```
Create the glance API endpoints:

```shell
openstack endpoint create --region RegionOne image public http://controller:9292
openstack endpoint create --region RegionOne image internal http://controller:9292
openstack endpoint create --region RegionOne image admin http://controller:9292
```

Install the packages:

```shell
dnf install openstack-glance
```

Modify the glance configuration file:

```shell
vim /etc/glance/glance-api.conf
```

```
[database]
connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = GLANCE_PASS

[paste_deploy]
flavor = keystone

[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
```

Explanation:

- the [database] section configures the database entry point
- the [keystone_authtoken] and [paste_deploy] sections configure the identity service entry point
- the [glance_store] section configures the local filesystem store and the location of the image files

Synchronize the database:

```shell
su -s /bin/sh -c "glance-manage db_sync" glance
```

Start the service:

```shell
systemctl enable openstack-glance-api.service
systemctl start openstack-glance-api.service
```

Verify.

Import the environment variables:

```shell
source ~/.admin-openrc
```

Download an image.

x86 image download:

```shell
wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
```

arm image download:

```shell
wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-aarch64-disk.img
```

Note: if your environment is based on the Kunpeng architecture, download the aarch64 version of the image; the image cirros-0.5.2-aarch64-disk.img has been tested.

Upload the image to the Image service:

```shell
openstack image create --disk-format qcow2 --container-format bare \
  --file cirros-0.4.0-x86_64-disk.img --public cirros
```

Confirm the image upload and verify its attributes:

```shell
openstack image list
```

#### Placement

Placement is the resource scheduling component provided by OpenStack. It is generally not user-facing and is called by Nova and other components. It is installed on the control node.

Before installing and configuring the Placement service, the corresponding database, service credentials and API endpoints have to be created.

Create the database.

Access the database service as the root user:

```shell
mysql -u root -p
```

Create the placement database:

```
MariaDB [(none)]> CREATE DATABASE placement;
```

Grant access to the database:

```
MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' \
IDENTIFIED BY 'PLACEMENT_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' \
IDENTIFIED BY 'PLACEMENT_DBPASS';
```

Replace PLACEMENT_DBPASS with the access password of the placement database.

Exit the database client:

```shell
exit
```

Configure the user and Endpoints.

Source the admin credentials to obtain admin command-line privileges:

```shell
source ~/.admin-openrc
```
## Placement

Placement is the OpenStack resource scheduling component. It is generally not user-facing and is called by components such as Nova. It is installed on the controller node.

Before installing and configuring the Placement service, create its database, service credentials, and API endpoints.

Create the database.

Access the database service as the root user:

```shell
mysql -u root -p
```

Create the placement database:

```shell
MariaDB [(none)]> CREATE DATABASE placement;
```

Grant access to the database:

```shell
MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' \
  IDENTIFIED BY 'PLACEMENT_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' \
  IDENTIFIED BY 'PLACEMENT_DBPASS';
```

Replace `PLACEMENT_DBPASS` with the password for the placement database.

Exit the database client:

```shell
exit
```

Configure users and endpoints.

Source the admin credentials to gain admin CLI access:

```shell
source ~/.admin-openrc
```

Create the placement user and set its password:

```shell
openstack user create --domain default --password-prompt placement
User Password:
Repeat User Password:
```

Add the placement user to the service project with the admin role:

```shell
openstack role add --project service --user placement admin
```

Create the placement service entity:

```shell
openstack service create --name placement \
  --description "Placement API" placement
```

Create the Placement API service endpoints:

```shell
openstack endpoint create --region RegionOne \
  placement public http://controller:8778
openstack endpoint create --region RegionOne \
  placement internal http://controller:8778
openstack endpoint create --region RegionOne \
  placement admin http://controller:8778
```

Install and configure the component.

Install the package:

```shell
dnf install openstack-placement-api
```

Edit the /etc/placement/placement.conf configuration file and complete the following steps.

In the `[placement_database]` section, configure the database connection:

```ini
[placement_database]
connection = mysql+pymysql://placement:PLACEMENT_DBPASS@controller/placement
```

Replace `PLACEMENT_DBPASS` with the password of the placement database.

In the `[api]` and `[keystone_authtoken]` sections, configure the identity service access:

```ini
[api]
auth_strategy = keystone

[keystone_authtoken]
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = placement
password = PLACEMENT_PASS
```

Replace `PLACEMENT_PASS` with the password of the placement user.

Synchronize the database to populate the Placement database:

```shell
su -s /bin/sh -c "placement-manage db sync" placement
```

Start the service.

Restart the httpd service:

```shell
systemctl restart httpd
```

**Verification**

Source the admin credentials to gain admin CLI access:

```shell
source ~/.admin-openrc
```

Run the status check:

```shell
placement-status upgrade check
```

```
+----------------------------------------------------------------------+
| Upgrade Check Results                                                 |
+----------------------------------------------------------------------+
| Check: Missing Root Provider IDs                                      |
| Result: Success                                                       |
| Details: None                                                         |
+----------------------------------------------------------------------+
| Check: Incomplete Consumers                                           |
| Result: Success                                                       |
| Details: None                                                         |
+----------------------------------------------------------------------+
| Check: Policy File JSON to YAML Migration                             |
| Result: Failure                                                       |
| Details: Your policy file is JSON-formatted which is deprecated. You  |
|   need to switch to YAML-formatted file. Use the                      |
|   ``oslopolicy-convert-json-to-yaml`` tool to convert the             |
|   existing JSON-formatted files to YAML in a backwards-               |
|   compatible manner: https://docs.openstack.org/oslo.policy/          |
|   latest/cli/oslopolicy-convert-json-to-yaml.html.                    |
+----------------------------------------------------------------------+
```
Here, the result of the Policy File JSON to YAML Migration check is Failure. This is because JSON-formatted policy files have been deprecated in Placement since the Wallaby release. As the message suggests, use the `oslopolicy-convert-json-to-yaml` tool to convert the existing JSON-formatted policy file to YAML:

```shell
oslopolicy-convert-json-to-yaml --namespace placement \
  --policy-file /etc/placement/policy.json \
  --output-file /etc/placement/policy.yaml
mv /etc/placement/policy.json{,.bak}
```

Note: in the current environment this issue can be ignored and does not affect operation.

Run commands against the placement API.

Install the osc-placement plugin:

```shell
dnf install python3-osc-placement
```

List the available resource classes and traits:

```shell
openstack --os-placement-api-version 1.2 resource class list --sort-column name
+----------------------------+
| name                       |
+----------------------------+
| DISK_GB                    |
| FPGA                       |
| ...                        |

openstack --os-placement-api-version 1.6 trait list --sort-column name
+---------------------------------------+
| name                                  |
+---------------------------------------+
| COMPUTE_ACCELERATORS                  |
| COMPUTE_ARCH_AARCH64                  |
| ...                                   |
```

## Nova

Nova is the OpenStack compute service, responsible for creating and releasing virtual machines.

### Controller node

Perform the following operations on the controller node.

Create the databases.

Access the database service as the root user:

```shell
mysql -u root -p
```

Create the `nova_api`, `nova`, and `nova_cell0` databases:

```shell
MariaDB [(none)]> CREATE DATABASE nova_api;
MariaDB [(none)]> CREATE DATABASE nova;
MariaDB [(none)]> CREATE DATABASE nova_cell0;
```

Grant access to the databases:

```shell
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \
  IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \
  IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
  IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
  IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \
  IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \
  IDENTIFIED BY 'NOVA_DBPASS';
```

Replace `NOVA_DBPASS` with the password for the nova-related databases.

Exit the database client:

```shell
exit
```

Configure users and endpoints.

Source the admin credentials to gain admin CLI access:

```shell
source ~/.admin-openrc
```

Create the nova user and set its password:

```shell
openstack user create --domain default --password-prompt nova
User Password:
Repeat User Password:
```

Add the nova user to the service project with the admin role:

```shell
openstack role add --project service --user nova admin
```

Create the nova service entity:

```shell
openstack service create --name nova \
  --description "OpenStack Compute" compute
```

Create the Nova API service endpoints:
```shell
openstack endpoint create --region RegionOne \
  compute public http://controller:8774/v2.1
openstack endpoint create --region RegionOne \
  compute internal http://controller:8774/v2.1
openstack endpoint create --region RegionOne \
  compute admin http://controller:8774/v2.1
```

Install and configure the components.

Install the packages:

```shell
dnf install openstack-nova-api openstack-nova-conductor \
  openstack-nova-novncproxy openstack-nova-scheduler
```

Edit the /etc/nova/nova.conf configuration file and complete the following steps.

In the `[DEFAULT]` section, enable the compute and metadata APIs, configure the RabbitMQ message queue access, set `my_ip` to the management IP of the controller node, and explicitly define `log_dir`:

```ini
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
my_ip = 192.168.0.2
log_dir = /var/log/nova
state_path = /var/lib/nova
```

Replace `RABBIT_PASS` with the password of the openstack account in RabbitMQ.

In the `[api_database]` and `[database]` sections, configure the database connections:

```ini
[api_database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api

[database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova
```

Replace `NOVA_DBPASS` with the password of the nova-related databases.

In the `[api]` and `[keystone_authtoken]` sections, configure the identity service access:

```ini
[api]
auth_strategy = keystone

[keystone_authtoken]
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = NOVA_PASS
```

Replace `NOVA_PASS` with the password of the nova user.

In the `[vnc]` section, enable and configure the remote console access:

```ini
[vnc]
enabled = true
server_listen = $my_ip
server_proxyclient_address = $my_ip
```

In the `[glance]` section, configure the address of the Image service API:

```ini
[glance]
api_servers = http://controller:9292
```

In the `[oslo_concurrency]` section, configure the lock path:

```ini
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
```

In the `[placement]` section, configure the access to the Placement service:

```ini
[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = PLACEMENT_PASS
```

Replace `PLACEMENT_PASS` with the password of the placement user.

Synchronize the databases.

Synchronize the nova-api database:

```shell
su -s /bin/sh -c "nova-manage api_db sync" nova
```

Register the cell0 database:

```shell
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
```

Create the cell1 cell:

```shell
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
```

Synchronize the nova database:

```shell
su -s /bin/sh -c "nova-manage db sync" nova
```

Verify that cell0 and cell1 are registered correctly:

```shell
su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova
```

Start the services:

```shell
systemctl enable \
  openstack-nova-api.service \
  openstack-nova-scheduler.service \
  openstack-nova-conductor.service \
  openstack-nova-novncproxy.service
systemctl start \
  openstack-nova-api.service \
  openstack-nova-scheduler.service \
  openstack-nova-conductor.service \
  openstack-nova-novncproxy.service
```
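Optionally, before moving on to the compute node, a quick sanity check that the four controller services are up; a minimal sketch using systemd (not part of the original steps):

```shell
# Each unit should report "active"; "failed" usually points at a nova.conf mistake.
systemctl is-active \
  openstack-nova-api.service \
  openstack-nova-scheduler.service \
  openstack-nova-conductor.service \
  openstack-nova-novncproxy.service
```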
### Compute node

Perform the following operations on the compute node.

Install the package:

```shell
dnf install openstack-nova-compute
```

Edit the /etc/nova/nova.conf configuration file.

In the `[DEFAULT]` section, enable the compute and metadata APIs, configure the RabbitMQ message queue access, set `my_ip` to the management IP of the compute node, and explicitly define `compute_driver`, `instances_path`, and `log_dir`:

```ini
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
my_ip = 192.168.0.3
compute_driver = libvirt.LibvirtDriver
instances_path = /var/lib/nova/instances
log_dir = /var/log/nova
```

Replace `RABBIT_PASS` with the password of the openstack account in RabbitMQ.

In the `[api]` and `[keystone_authtoken]` sections, configure the identity service access:

```ini
[api]
auth_strategy = keystone

[keystone_authtoken]
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = NOVA_PASS
```

Replace `NOVA_PASS` with the password of the nova user.

In the `[vnc]` section, enable and configure the remote console access:

```ini
[vnc]
enabled = true
server_listen = $my_ip
server_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html
```

In the `[glance]` section, configure the address of the Image service API:

```ini
[glance]
api_servers = http://controller:9292
```

In the `[oslo_concurrency]` section, configure the lock path:

```ini
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
```

In the `[placement]` section, configure the access to the Placement service:

```ini
[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = PLACEMENT_PASS
```

Replace `PLACEMENT_PASS` with the password of the placement user.

Check whether the compute node supports hardware acceleration for virtual machines (x86_64).

On an x86_64 processor, run the following command to check whether hardware acceleration is supported:

```shell
egrep -c '(vmx|svm)' /proc/cpuinfo
```

If the command returns 0, hardware acceleration is not supported and libvirt must be configured to use QEMU instead of the default KVM. Edit the `[libvirt]` section of /etc/nova/nova.conf:

```ini
[libvirt]
virt_type = qemu
```

If the command returns 1 or more, hardware acceleration is supported and no extra configuration is needed.

Check whether the compute node supports hardware acceleration for virtual machines (arm64).

On an arm64 processor, run the following command to check whether hardware acceleration is supported:

```shell
virt-host-validate
```
This command is provided by libvirt, which should already have been installed as a dependency of openstack-nova-compute, so the command is available in the environment.

If FAIL is shown, hardware acceleration is not supported and libvirt must be configured to use QEMU instead of the default KVM:

```
QEMU: Checking if device /dev/kvm exists  : FAIL (Check that CPU and firmware supports virtualization and kvm module is loaded)
```

Edit the `[libvirt]` section of /etc/nova/nova.conf:

```ini
[libvirt]
virt_type = qemu
```

If PASS is shown, hardware acceleration is supported and no extra configuration is needed:

```
QEMU: Checking if device /dev/kvm exists  : PASS
```

Configure qemu (arm64 only). This step is required only on arm64 processors.

Edit /etc/libvirt/qemu.conf:

```
nvram = ["/usr/share/AAVMF/AAVMF_CODE.fd: \
  /usr/share/AAVMF/AAVMF_VARS.fd", \
  "/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw: \
  /usr/share/edk2/aarch64/vars-template-pflash.raw"]
```

Edit /etc/qemu/firmware/edk2-aarch64.json:

```json
{
    "description": "UEFI firmware for ARM64 virtual machines",
    "interface-types": [ "uefi" ],
    "mapping": {
        "device": "flash",
        "executable": {
            "filename": "/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw",
            "format": "raw"
        },
        "nvram-template": {
            "filename": "/usr/share/edk2/aarch64/vars-template-pflash.raw",
            "format": "raw"
        }
    },
    "targets": [
        {
            "architecture": "aarch64",
            "machines": [ "virt-*" ]
        }
    ],
    "features": [ ],
    "tags": [ ]
}
```

Start the services:

```shell
systemctl enable libvirtd.service openstack-nova-compute.service
systemctl start libvirtd.service openstack-nova-compute.service
```

### Controller node

Perform the following operations on the controller node.

Add the compute node to the OpenStack cluster.

Source the admin credentials to gain admin CLI access:

```shell
source ~/.admin-openrc
```

Confirm that the nova-compute service is recognized in the database:

```shell
openstack compute service list --service nova-compute
```

Discover the compute node and add it to the cell database:

```shell
su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
```

The output looks like the following:

```
Modules with known eventlet monkey patching issues were imported prior to
eventlet monkey patching: urllib3. This warning can usually be ignored if
the caller is only importing and not executing nova code.
Found 2 cell mappings.
Skipping cell0 since it does not contain hosts.
Getting computes from cell 'cell1': 6dae034e-b2d9-4a6c-b6f0-60ada6a6ddc2
Checking host mapping for compute host 'compute': 6286a86f-09d7-4786-9137-1185654c9e2e
Creating host mapping for compute host 'compute': 6286a86f-09d7-4786-9137-1185654c9e2e
Found 1 unmapped computes in cell: 6dae034e-b2d9-4a6c-b6f0-60ada6a6ddc2
```
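Whenever a new compute node is added later, `nova-manage cell_v2 discover_hosts` has to be run again. Alternatively, nova can discover new hosts periodically; a minimal sketch of the relevant option in /etc/nova/nova.conf on the controller node (an optional setting, not part of the steps above):

```ini
[scheduler]
# Interval in seconds for periodic discovery of new compute hosts;
# a negative value (the default) disables periodic discovery.
discover_hosts_in_cells_interval = 300
```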
**Verification**

List the service components to verify that each process started and registered successfully:

```shell
openstack compute service list
```

List the API endpoints in the identity service to verify connectivity to the identity service:

```shell
openstack catalog list
```

List the images in the image service to verify connectivity to the image service:

```shell
openstack image list
```

Check whether the cells are working and whether the other prerequisites are in place:

```shell
nova-status upgrade check
```

## Neutron

Neutron is the OpenStack networking service. It provides functions such as virtual switches, IP routing, and DHCP.

### Controller node

Create the database, service credentials, and API endpoints.

Create the database:

```shell
mysql -u root -p
MariaDB [(none)]> CREATE DATABASE neutron;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'NEUTRON_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'NEUTRON_DBPASS';
MariaDB [(none)]> exit;
```

Create the user and service, and remember the password entered when creating the neutron user; it is used as `NEUTRON_PASS` in the configuration:

```shell
source ~/.admin-openrc
openstack user create --domain default --password-prompt neutron
openstack role add --project service --user neutron admin
openstack service create --name neutron --description "OpenStack Networking" network
```

Deploy the Neutron API service endpoints:

```shell
openstack endpoint create --region RegionOne network public http://controller:9696
openstack endpoint create --region RegionOne network internal http://controller:9696
openstack endpoint create --region RegionOne network admin http://controller:9696
```

Install the packages:

```shell
dnf install -y openstack-neutron openstack-neutron-linuxbridge ebtables ipset openstack-neutron-ml2
```
Configure Neutron.

Modify /etc/neutron/neutron.conf:

```ini
[database]
connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron

[DEFAULT]
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = neutron
password = NEUTRON_PASS

[nova]
auth_url = http://controller:5000
auth_type = password
project_domain_name = Default
user_domain_name = Default
region_name = RegionOne
project_name = service
username = nova
password = NOVA_PASS

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp

[experimental]
linuxbridge = true
```

Configure ML2. The ML2 configuration can be adjusted to your needs; this guide uses a provider network with linuxbridge.

Modify /etc/neutron/plugins/ml2/ml2_conf.ini:

```ini
[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security

[ml2_type_flat]
flat_networks = provider

[ml2_type_vxlan]
vni_ranges = 1:1000

[securitygroup]
enable_ipset = true
```

Modify /etc/neutron/plugins/ml2/linuxbridge_agent.ini:

```ini
[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME

[vxlan]
enable_vxlan = true
local_ip = OVERLAY_INTERFACE_IP_ADDRESS
l2_population = true

[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
```

Configure the Layer-3 agent. Modify /etc/neutron/l3_agent.ini:

```ini
[DEFAULT]
interface_driver = linuxbridge
```

Configure the DHCP agent. Modify /etc/neutron/dhcp_agent.ini:

```ini
[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
```

Configure the metadata agent. Modify /etc/neutron/metadata_agent.ini:

```ini
[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = METADATA_SECRET
```

Configure the nova service to use neutron. Modify /etc/nova/nova.conf:

```ini
[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
service_metadata_proxy = true
metadata_proxy_shared_secret = METADATA_SECRET
```

Create a symbolic link for /etc/neutron/plugin.ini:

```shell
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
```

Synchronize the database:

```shell
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
```

Restart the nova API service:

```shell
systemctl restart openstack-nova-api
```

Start the networking services:

```shell
systemctl enable neutron-server.service neutron-linuxbridge-agent.service \
  neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service
systemctl start neutron-server.service neutron-linuxbridge-agent.service \
  neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service
```
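To confirm that the controller-side agents started successfully, the agent list can be checked; a minimal sketch, assuming the admin credentials are sourced (not part of the original steps):

```shell
source ~/.admin-openrc
# The DHCP, metadata, L3, and linuxbridge agents should show ":-)" in the Alive column.
openstack network agent list
```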
### Compute node

Install the packages:

```shell
dnf install openstack-neutron-linuxbridge ebtables ipset -y
```

Configure Neutron.

Modify /etc/neutron/neutron.conf:

```ini
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = neutron
password = NEUTRON_PASS

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
```

Modify /etc/neutron/plugins/ml2/linuxbridge_agent.ini:

```ini
[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME

[vxlan]
enable_vxlan = true
local_ip = OVERLAY_INTERFACE_IP_ADDRESS
l2_population = true

[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
```

Configure the nova compute service to use neutron. Modify /etc/nova/nova.conf:

```ini
[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
```

Restart the nova-compute service:

```shell
systemctl restart openstack-nova-compute.service
```

Start the Neutron linuxbridge agent service:

```shell
systemctl enable neutron-linuxbridge-agent
systemctl start neutron-linuxbridge-agent
```

## Cinder

Cinder is the OpenStack storage service. It provides functions such as creating, releasing, and backing up block devices.

### Controller node

Initialize the database. `CINDER_DBPASS` is the user-defined password for the cinder database.

```shell
mysql -u root -p
MariaDB [(none)]> CREATE DATABASE cinder;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'CINDER_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'CINDER_DBPASS';
MariaDB [(none)]> exit
```

Initialize the Keystone resource objects:

```shell
source ~/.admin-openrc
# When creating the user, the command line prompts for a password. Enter a password
# of your choice and replace CINDER_PASS with it wherever it appears below.
openstack user create --domain default --password-prompt cinder
openstack role add --project service --user cinder admin
openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s
```
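If you want to confirm the endpoints just created before continuing, they can be listed; a minimal sketch (optional, not part of the original steps):

```shell
# Should show the public, internal, and admin endpoints created above.
openstack endpoint list --service volumev3
```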
Install the packages:

```shell
dnf install openstack-cinder-api openstack-cinder-scheduler
```

Modify the cinder configuration file /etc/cinder/cinder.conf:

```ini
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone
my_ip = 192.168.0.2

[database]
connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = cinder
password = CINDER_PASS

[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
```

Synchronize the database:

```shell
su -s /bin/sh -c "cinder-manage db sync" cinder
```

Modify the nova configuration file /etc/nova/nova.conf:

```ini
[cinder]
os_region_name = RegionOne
```

Start the services:

```shell
systemctl restart openstack-nova-api
systemctl start openstack-cinder-api openstack-cinder-scheduler
```

### Storage node

The storage node must have at least one spare disk prepared in advance as the cinder storage backend. The following assumes that the storage node already has an unused disk named /dev/sdb; replace the device name according to your actual environment.

Cinder supports many types of backend storage. This guide uses the simplest, LVM, as a reference. If you want to use another backend such as Ceph, configure it yourself.

Install the packages:

```shell
dnf install lvm2 device-mapper-persistent-data scsi-target-utils rpcbind nfs-utils \
  openstack-cinder-volume openstack-cinder-backup
```

Configure the LVM volume group:

```shell
pvcreate /dev/sdb
vgcreate cinder-volumes /dev/sdb
```

Modify the cinder configuration file /etc/cinder/cinder.conf:

```ini
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone
my_ip = 192.168.0.4
enabled_backends = lvm
glance_api_servers = http://controller:9292

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = CINDER_PASS

[database]
connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder

[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
target_protocol = iscsi
target_helper = lioadm

[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
```
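Before starting the services, it can help to confirm that the volume group created above is visible to LVM; a minimal check using the lvm2 tools installed above (optional):

```shell
# Should list the cinder-volumes volume group created with vgcreate above.
vgs cinder-volumes
```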
Configure cinder backup (optional).

cinder-backup is an optional backup service. Cinder likewise supports many kinds of backup backends; this guide uses swift storage. If you want to use another backend such as NFS, configure it yourself, for example by referring to the NFS configuration instructions in the OpenStack official documentation.

Modify /etc/cinder/cinder.conf and add the following to the `[DEFAULT]` section:

```ini
[DEFAULT]
backup_driver = cinder.backup.drivers.swift.SwiftBackupDriver
backup_swift_url = SWIFT_URL
```

Here, `SWIFT_URL` refers to the URL of the swift service in the environment. After the swift service has been deployed, obtain it by running `openstack catalog show object-store`.

Start the services:

```shell
systemctl start openstack-cinder-volume target
systemctl start openstack-cinder-backup   # optional
```

At this point, the deployment of the Cinder service is complete. It can be verified on the controller node with the following commands:

```shell
source ~/.admin-openrc
openstack volume service list
openstack volume list
```

## Horizon

Horizon is the web front end provided by OpenStack. It lets users control the OpenStack cluster with mouse clicks in a web page instead of tedious CLI commands. Horizon is usually deployed on the controller node.

Install the package:

```shell
dnf install openstack-dashboard
```

Modify the configuration file /etc/openstack-dashboard/local_settings:

```python
OPENSTACK_HOST = "controller"
ALLOWED_HOSTS = ['*', ]
OPENSTACK_KEYSTONE_URL = "http://controller:5000/v3"

SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'controller:11211',
    }
}

OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "member"

WEBROOT = '/dashboard'
POLICY_FILES_PATH = "/etc/openstack-dashboard"

OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 3,
}
```

Restart the service:

```shell
systemctl restart httpd
```

At this point, the deployment of the horizon service is complete. Open a browser, enter http://192.168.0.2/dashboard, and the horizon login page appears.

## Ironic

Ironic is the OpenStack bare-metal service. It is recommended if you need bare-metal provisioning; otherwise it does not need to be installed.

Perform the following operations on the controller node.

Set up the database.

The Bare Metal service stores its information in a database. Create an `ironic` database that the `ironic` user can access, and replace `IRONIC_DBPASS` with a suitable password:

```shell
mysql -u root -p
MariaDB [(none)]> CREATE DATABASE ironic CHARACTER SET utf8;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'localhost' \
  IDENTIFIED BY 'IRONIC_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'%' \
  IDENTIFIED BY 'IRONIC_DBPASS';
MariaDB [(none)]> exit
Bye
```

Create the service user authentication.

Create the Bare Metal service users. Replace `IRONIC_PASS` with the password for the ironic user and `IRONIC_INSPECTOR_PASS` with the password for the ironic-inspector user:

```shell
openstack user create --password IRONIC_PASS \
  --email ironic@example.com ironic
openstack role add --project service --user ironic admin
openstack service create --name ironic \
  --description "Ironic baremetal provisioning service" baremetal
openstack service create --name ironic-inspector \
  --description "Ironic inspector baremetal provisioning service" baremetal-introspection
openstack user create --password IRONIC_INSPECTOR_PASS \
  --email ironic_inspector@example.com ironic-inspector
```
```shell
openstack role add --project service --user ironic-inspector admin
```

Create the Bare Metal service endpoints:

```shell
openstack endpoint create --region RegionOne baremetal admin http://192.168.0.2:6385
openstack endpoint create --region RegionOne baremetal public http://192.168.0.2:6385
openstack endpoint create --region RegionOne baremetal internal http://192.168.0.2:6385
openstack endpoint create --region RegionOne baremetal-introspection internal http://192.168.0.2:5050/v1
openstack endpoint create --region RegionOne baremetal-introspection public http://192.168.0.2:5050/v1
openstack endpoint create --region RegionOne baremetal-introspection admin http://192.168.0.2:5050/v1
```

Install the components:

```shell
dnf install openstack-ironic-api openstack-ironic-conductor python3-ironicclient
```

Configure the ironic-api service. The configuration file path is /etc/ironic/ironic.conf.

Configure the location of the database via the `connection` option as shown below, replacing `IRONIC_DBPASS` with the password of the `ironic` user and `DB_IP` with the IP address of the DB server:

```ini
[database]
# The SQLAlchemy connection string used to connect to the
# database (string value)
# connection = mysql+pymysql://ironic:IRONIC_DBPASS@DB_IP/ironic
connection = mysql+pymysql://ironic:IRONIC_DBPASS@controller/ironic
```

Configure the ironic-api service to use the RabbitMQ message broker with the following option, replacing the `RPC_*` placeholders with the RabbitMQ address details and credentials:

```ini
[DEFAULT]
# A URL representing the messaging driver to use and its full
# configuration. (string value)
# transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
```

You can also use json-rpc instead of RabbitMQ.

Configure the ironic-api service to use the credentials of the identity service. Replace `PUBLIC_IDENTITY_IP` with the public IP of the identity server, `PRIVATE_IDENTITY_IP` with the private IP of the identity server, `IRONIC_PASS` with the password of the `ironic` user in the identity service, and `RABBIT_PASS` with the password of the openstack account in RabbitMQ:
```ini
[DEFAULT]
# Authentication strategy used by ironic-api: one of
# "keystone" or "noauth". "noauth" should not be used in a
# production environment because all authentication will be
# disabled. (string value)
auth_strategy=keystone
host = controller
memcache_servers = controller:11211
enabled_network_interfaces = flat,noop,neutron
default_network_interface = noop
enabled_hardware_types = ipmi
enabled_boot_interfaces = pxe
enabled_deploy_interfaces = direct
default_deploy_interface = direct
enabled_inspect_interfaces = inspector
enabled_management_interfaces = ipmitool
enabled_power_interfaces = ipmitool
enabled_rescue_interfaces = no-rescue,agent
isolinux_bin = /usr/share/syslinux/isolinux.bin
logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s

[keystone_authtoken]
# Authentication type to load (string value)
auth_type=password
# Complete public Identity API endpoint (string value)
# www_authenticate_uri=http://PUBLIC_IDENTITY_IP:5000
www_authenticate_uri=http://controller:5000
# Complete admin Identity API endpoint. (string value)
# auth_url=http://PRIVATE_IDENTITY_IP:5000
auth_url=http://controller:5000
# Service username. (string value)
username=ironic
# Service account password. (string value)
password=IRONIC_PASS
# Service tenant name. (string value)
project_name=service
# Domain name containing project (string value)
project_domain_name=Default
# User's domain name (string value)
user_domain_name=Default

[agent]
deploy_logs_collect = always
deploy_logs_local_path = /var/log/ironic/deploy
deploy_logs_storage_backend = local
image_download_source = http
stream_raw_images = false
force_raw_images = false
verify_ca = False

[oslo_concurrency]

[oslo_messaging_notifications]
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
topics = notifications
driver = messagingv2

[oslo_messaging_rabbit]
amqp_durable_queues = True
rabbit_ha_queues = True

[pxe]
ipxe_enabled = false
pxe_append_params = nofb nomodeset vga=normal coreos.autologin ipa-insecure=1
image_cache_size = 204800
tftp_root=/var/lib/tftpboot/cephfs/
tftp_master_path=/var/lib/tftpboot/cephfs/master_images

[dhcp]
dhcp_provider = none
```

Create the Bare Metal service database tables:

```shell
ironic-dbsync --config-file /etc/ironic/ironic.conf create_schema
```

Restart the ironic-api service:

```shell
sudo systemctl restart openstack-ironic-api
```

Configure the ironic-conductor service.

The following is the standard configuration of the ironic-conductor service itself. ironic-conductor can be deployed on a different node from ironic-api; in this guide both are deployed on the controller node, so duplicated configuration items can be skipped.

Set `my_ip` to the IP of the host running the conductor service:

```ini
[DEFAULT]
# IP address of this host. If unset, will determine the IP
# programmatically. If unable to do so, will use "127.0.0.1".
# (string value)
# my_ip=HOST_IP
my_ip = 192.168.0.2
```

Configure the location of the database. ironic-conductor should use the same configuration as ironic-api. Replace `IRONIC_DBPASS` with the password of the `ironic` user:

```ini
[database]
# The SQLAlchemy connection string to use to connect to the
# database. (string value)
connection = mysql+pymysql://ironic:IRONIC_DBPASS@controller/ironic
```
Configure the ironic-conductor service to use the RabbitMQ message broker with the following option. ironic-conductor should use the same configuration as ironic-api. Replace `RABBIT_PASS` with the password of the openstack account in RabbitMQ:

```ini
[DEFAULT]
# A URL representing the messaging driver to use and its full
# configuration. (string value)
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
```

You can also use json-rpc instead of RabbitMQ.

Configure credentials for accessing other OpenStack services.

To communicate with other OpenStack services, the Bare Metal service needs to authenticate with the OpenStack Identity service using service users when requesting those services. The credentials of these users must be configured in each configuration section associated with the corresponding service:

- `[neutron]` - accessing the OpenStack networking service
- `[glance]` - accessing the OpenStack image service
- `[swift]` - accessing the OpenStack object storage service
- `[cinder]` - accessing the OpenStack block storage service
- `[inspector]` - accessing the OpenStack bare metal introspection service
- `[service_catalog]` - a special entry holding the credentials the Bare Metal service uses to discover its own API URL endpoint registered in the OpenStack Identity service catalog

For simplicity, the same service user can be used for all services. For backward compatibility, it should be the same user configured in the `[keystone_authtoken]` section of the ironic-api service. This is not mandatory, however; a different service user can be created and configured for each service.

In the following example, the user's authentication information for accessing the OpenStack networking service is configured so that:

- the networking service is deployed in the identity service region named RegionOne, with only the public endpoint interface registered in the service catalog
- requests use a specific CA SSL certificate for HTTPS connections
- the same service user as configured for ironic-api is used
- the dynamic password authentication plugin discovers a suitable identity service API version based on the other options

Replace `IRONIC_PASS` with the password of the ironic user.

```ini
[neutron]
# Authentication type to load (string value)
auth_type = password
# Authentication URL (string value)
auth_url=https://IDENTITY_IP:5000/
# Username (string value)
username=ironic
# User's password (string value)
password=IRONIC_PASS
# Project name to scope to (string value)
project_name=service
# Domain ID containing project (string value)
project_domain_id=default
# User's domain id (string value)
user_domain_id=default
# PEM encoded Certificate Authority to use when verifying
# HTTPs connections. (string value)
cafile=/opt/stack/data/ca-bundle.pem
# The default region_name for endpoint URL discovery. (string
# value)
region_name = RegionOne
# List of interfaces, in order of preference, for endpoint
# URL. (list value)
valid_interfaces=public
```
Other reference configuration:

```ini
[glance]
endpoint_override = http://controller:9292
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
auth_type = password
username = ironic
password = IRONIC_PASS
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service

[service_catalog]
region_name = RegionOne
project_domain_id = default
user_domain_id = default
project_name = service
password = IRONIC_PASS
username = ironic
auth_url = http://controller:5000
auth_type = password
```

By default, to communicate with other services, the Bare Metal service attempts to discover a suitable endpoint for each service through the service catalog of the identity service. If you want to use a different endpoint for a particular service, specify it with the `endpoint_override` option in the Bare Metal service configuration file:

```ini
[neutron]
endpoint_override =
```

Configure the allowed drivers and hardware types.

Set the hardware types allowed by the ironic-conductor service with `enabled_hardware_types`:

```ini
[DEFAULT]
enabled_hardware_types = ipmi
```

Configure the hardware interfaces:

```ini
enabled_boot_interfaces = pxe
enabled_deploy_interfaces = direct,iscsi
enabled_inspect_interfaces = inspector
enabled_management_interfaces = ipmitool
enabled_power_interfaces = ipmitool
```

Configure the interface defaults:

```ini
[DEFAULT]
default_deploy_interface = direct
default_network_interface = neutron
```

If any driver that uses Direct deploy is enabled, the Swift backend of the image service must be installed and configured. The Ceph Object Gateway (RADOS Gateway) is also supported as an image service backend.

Restart the ironic-conductor service:

```shell
sudo systemctl restart openstack-ironic-conductor
```

Configure the ironic-inspector service.

Install the component:

```shell
dnf install openstack-ironic-inspector
```

Create the database:

```shell
mysql -u root -p
MariaDB [(none)]> CREATE DATABASE ironic_inspector CHARACTER SET utf8;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic_inspector.* TO 'ironic_inspector'@'localhost' \
  IDENTIFIED BY 'IRONIC_INSPECTOR_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic_inspector.* TO 'ironic_inspector'@'%' \
  IDENTIFIED BY 'IRONIC_INSPECTOR_DBPASS';
MariaDB [(none)]> exit
Bye
```

Configure /etc/ironic-inspector/inspector.conf.

Configure the location of the database via the `connection` option as shown below, replacing `IRONIC_INSPECTOR_DBPASS` with the password of the `ironic_inspector` user:

```ini
[database]
backend = sqlalchemy
connection = mysql+pymysql://ironic_inspector:IRONIC_INSPECTOR_DBPASS@controller/ironic_inspector
min_pool_size = 100
max_pool_size = 500
pool_timeout = 30
max_retries = 5
max_overflow = 200
db_retry_interval = 2
db_inc_retry_interval = True
db_max_retry_interval = 2
db_max_retries = 5
```
Configure the message queue access:

```ini
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
```

Set up keystone authentication:

```ini
[DEFAULT]
auth_strategy = keystone
timeout = 900
rootwrap_config = /etc/ironic-inspector/rootwrap.conf
logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s
log_dir = /var/log/ironic-inspector
state_path = /var/lib/ironic-inspector
use_stderr = False

[ironic]
api_endpoint = http://IRONIC_API_HOST_ADDRESS:6385
auth_type = password
auth_url = http://PUBLIC_IDENTITY_IP:5000
auth_strategy = keystone
ironic_url = http://IRONIC_API_HOST_ADDRESS:6385
os_region = RegionOne
project_name = service
project_domain_name = Default
user_domain_name = Default
username = IRONIC_SERVICE_USER_NAME
password = IRONIC_SERVICE_USER_PASSWORD

[keystone_authtoken]
auth_type = password
auth_url = http://controller:5000
www_authenticate_uri = http://controller:5000
project_domain_name = default
user_domain_name = default
project_name = service
username = ironic_inspector
password = IRONICPASSWD
region_name = RegionOne
memcache_servers = controller:11211
token_cache_time = 300

[processing]
add_ports = active
processing_hooks = $default_processing_hooks,local_link_connection,lldp_basic
ramdisk_logs_dir = /var/log/ironic-inspector/ramdisk
always_store_ramdisk_logs = true
store_data = none
power_off = false

[pxe_filter]
driver = iptables

[capabilities]
boot_mode=True
```

Configure the ironic-inspector dnsmasq service. The configuration file is /etc/ironic-inspector/dnsmasq.conf:

```
port=0
interface=enp3s0                        # replace with the actual listening network interface
dhcp-range=192.168.0.40,192.168.0.50    # replace with the actual DHCP address range
bind-interfaces
enable-tftp
dhcp-match=set:efi,option:client-arch,7
dhcp-match=set:efi,option:client-arch,9
dhcp-match=aarch64, option:client-arch,11
dhcp-boot=tag:aarch64,grubaa64.efi
dhcp-boot=tag:!aarch64,tag:efi,grubx64.efi
dhcp-boot=tag:!aarch64,tag:!efi,pxelinux.0
tftp-root=/tftpboot                     # replace with the actual tftpboot directory
log-facility=/var/log/dnsmasq.log
```

Disable DHCP on the subnet of the ironic provision network:

```shell
openstack subnet set --no-dhcp 72426e89-f552-4dc4-9ac7-c4e131ce7f3c
```

Initialize the database of the ironic-inspector service:

```shell
ironic-inspector-dbsync --config-file /etc/ironic-inspector/inspector.conf upgrade
```

Start the services:

```shell
systemctl enable --now openstack-ironic-inspector.service
systemctl enable --now openstack-ironic-inspector-dnsmasq.service
```

Configure the httpd service.

Create the httpd root directory used by ironic and set its owner and group. The directory path must match the `http_root` configuration item in the `[deploy]` group of /etc/ironic/ironic.conf:

```shell
mkdir -p /var/lib/ironic/httproot
chown ironic.ironic /var/lib/ironic/httproot
```

Install and configure the httpd service.

Install the httpd service (skip if it is already installed):

```shell
dnf install httpd -y
```

Create the /etc/httpd/conf.d/openstack-ironic-httpd.conf file with the following content:
```apache
Listen 8080

<VirtualHost *:8080>
    ServerName ironic.openeuler.com
    ErrorLog "/var/log/httpd/openstack-ironic-httpd-error_log"
    CustomLog "/var/log/httpd/openstack-ironic-httpd-access_log" "%h %l %u %t \"%r\" %>s %b"
    DocumentRoot "/var/lib/ironic/httproot"
    <Directory "/var/lib/ironic/httproot">
        Options Indexes FollowSymLinks
        Require all granted
    </Directory>
    LogLevel warn
    AddDefaultCharset UTF-8
    EnableSendfile on
</VirtualHost>
```

Note that the listening port must match the port specified by the `http_url` configuration item in the `[deploy]` section of /etc/ironic/ironic.conf.

Restart the httpd service:

```shell
systemctl restart httpd
```

Download or build the deploy ramdisk image.

Deploying a bare-metal node requires two sets of images: deploy ramdisk images and user images. The deploy ramdisk images run the ironic-python-agent (IPA) service, through which Ironic prepares the environment of the bare-metal node. The user images are the images that are finally installed on the bare-metal node for users to use.

The ramdisk images can be built with the ironic-python-agent-builder or disk-image-builder tools; you can also choose other tools. If you use the native tools, install the corresponding packages. For detailed usage, refer to the official documentation; the community also provides pre-built deploy images that you can try to download.

The following describes the complete process of building the deploy image used by ironic with ironic-python-agent-builder.

Install ironic-python-agent-builder:

```shell
dnf install python3-ironic-python-agent-builder
# or
pip3 install ironic-python-agent-builder
dnf install qemu-img git
```

Build the image.

Basic usage:

```
usage: ironic-python-agent-builder [-h] [-r RELEASE] [-o OUTPUT] [-e ELEMENT]
                                   [-b BRANCH] [-v] [--lzma]
                                   [--extra-args EXTRA_ARGS]
                                   [--elements-path ELEMENTS_PATH]
                                   distribution

positional arguments:
  distribution          Distribution to use

options:
  -h, --help            show this help message and exit
  -r RELEASE, --release RELEASE
                        Distribution release to use
  -o OUTPUT, --output OUTPUT
                        Output base file name
  -e ELEMENT, --element ELEMENT
                        Additional DIB element to use
  -b BRANCH, --branch BRANCH
                        If set, override the branch that is used for
                        ironic-python-agent and requirements
  -v, --verbose         Enable verbose logging in diskimage-builder
  --lzma                Use lzma compression for smaller images
  --extra-args EXTRA_ARGS
                        Extra arguments to pass to diskimage-builder
  --elements-path ELEMENTS_PATH
                        Path(s) to custom DIB elements separated by a colon
```

Example:

```shell
# -o specifies the name of the generated image
# "ubuntu" builds an image of the ubuntu system
ironic-python-agent-builder -o my-ubuntu-ipa ubuntu
```

The architecture of the built image can be specified with the `ARCH` environment variable (amd64 by default). For the arm architecture, add:

```shell
export ARCH=aarch64
```
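For example, combining the two commands above, an aarch64 deploy image could be built as follows (the output name my-ubuntu-ipa-aarch64 is arbitrary):

```shell
# Assumption: building an aarch64 image; set ARCH before invoking the builder.
export ARCH=aarch64
ironic-python-agent-builder -o my-ubuntu-ipa-aarch64 ubuntu
```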
Allow ssh login.

Initialize the environment variables to set the username and password and enable passwordless sudo, and add `-e` options to use the corresponding DIB elements. Build the image as follows:

```shell
export DIB_DEV_USER_USERNAME=ipa
export DIB_DEV_USER_PWDLESS_SUDO=yes
export DIB_DEV_USER_PASSWORD='123'
ironic-python-agent-builder -o my-ssh-ubuntu-ipa -e selinux-permissive -e devuser ubuntu
```

Specify the code repository.

Initialize the corresponding environment variables and then build the image:

```shell
# clone the code directly from gerrit
DIB_REPOLOCATION_ironic_python_agent=https://opendev.org/openstack/ironic-python-agent
DIB_REPOREF_ironic_python_agent=stable/2023.1

# specify a local repository and branch
DIB_REPOLOCATION_ironic_python_agent=/home/user/path/to/repo
DIB_REPOREF_ironic_python_agent=my-test-branch

ironic-python-agent-builder ubuntu
```

Reference: source-repositories.

Note: the PXE configuration file template in native OpenStack does not support the arm64 architecture, so the native OpenStack code has to be modified:

In the W release, the community ironic still does not support uefi pxe boot on arm64. This manifests as the generated grub.cfg file (usually located under /tftpboot/) having the wrong format, which causes pxe boot to fail. In the incorrectly generated configuration file, the boot commands are the ones used for uefi pxe boot on x86, while on the arm architecture the commands that load the vmlinux and ramdisk images are `linux` and `initrd` respectively. The code logic that generates grub.cfg must be modified by the user.

TLS errors when ironic sends requests to IPA to query the command execution status: the current versions of IPA and ironic both enable TLS authentication by default when sending requests to each other. Disable it as described in the official documentation.

Modify the ironic configuration file (/etc/ironic/ironic.conf) and add `ipa-insecure=1` to the following configuration:

```ini
[agent]
verify_ca = False

[pxe]
pxe_append_params = nofb nomodeset vga=normal coreos.autologin ipa-insecure=1
```

In the ramdisk image, add the IPA configuration file /etc/ironic_python_agent/ironic_python_agent.conf and configure TLS as follows (the /etc/ironic_python_agent directory must be created in advance):

```ini
[DEFAULT]
enable_auto_tls = False
```

Set the permissions:

```shell
chown -R ipa.ipa /etc/ironic_python_agent/
```

In the ramdisk image, modify the service startup file of the IPA service to add the configuration file option.

Edit the /usr/lib/systemd/system/ironic-python-agent.service file:

```ini
[Unit]
Description=Ironic Python Agent
After=network-online.target

[Service]
ExecStartPre=/sbin/modprobe vfat
ExecStart=/usr/local/bin/ironic-python-agent --config-file /etc/ironic_python_agent/ironic_python_agent.conf
Restart=always
RestartSec=30s

[Install]
WantedBy=multi-user.target
```
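Once the ironic services are running, a quick check with the bare metal client installed earlier can confirm that the API answers and the expected drivers are loaded; a minimal sketch, assuming the admin credentials are sourced (not part of the original steps):

```shell
source ~/.admin-openrc
# Lists the enabled hardware types/drivers (ipmi was enabled above).
openstack baremetal driver list
# Lists registered bare-metal nodes (empty until nodes are enrolled).
openstack baremetal node list
```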
## Trove

Trove is the OpenStack database service. It is recommended if you want to use the database service provided by OpenStack; otherwise it does not need to be installed.

### Controller node

Create the database.

The Database service stores its information in a database. Create a `trove` database that the `trove` user can access, and replace `TROVE_DBPASS` with a suitable password:

```shell
CREATE DATABASE trove CHARACTER SET utf8;
GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'localhost' IDENTIFIED BY 'TROVE_DBPASS';
GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'%' IDENTIFIED BY 'TROVE_DBPASS';
```

Create the service credentials and API endpoints.

Create the service credentials:

```shell
# create the trove user
openstack user create --domain default --password-prompt trove
# add the admin role
openstack role add --project service --user trove admin
# create the database service
openstack service create --name trove --description "Database service" database
```

Create the API endpoints:

```shell
openstack endpoint create --region RegionOne database public http://controller:8779/v1.0/%\(tenant_id\)s
openstack endpoint create --region RegionOne database internal http://controller:8779/v1.0/%\(tenant_id\)s
openstack endpoint create --region RegionOne database admin http://controller:8779/v1.0/%\(tenant_id\)s
```

Install Trove:

```shell
dnf install openstack-trove python-troveclient
```

Modify the configuration files.

Edit /etc/trove/trove.conf:

```ini
[DEFAULT]
bind_host = 192.168.0.2
log_dir = /var/log/trove
network_driver = trove.network.neutron.NeutronDriver
network_label_regex = .*
management_security_groups =
nova_keypair = trove-mgmt
default_datastore = mysql
taskmanager_manager = trove.taskmanager.manager.Manager
trove_api_workers = 5
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
reboot_time_out = 300
usage_timeout = 900
agent_call_high_timeout = 1200
use_syslog = False
debug = True

[database]
connection = mysql+pymysql://trove:TROVE_DBPASS@controller/trove

[keystone_authtoken]
auth_url = http://controller:5000/v3/
auth_type = password
project_domain_name = Default
project_name = service
user_domain_name = Default
username = trove
password = TROVE_PASS

[service_credentials]
auth_url = http://controller:5000/v3/
region_name = RegionOne
project_name = service
project_domain_name = Default
user_domain_name = Default
username = trove
password = TROVE_PASS

[mariadb]
tcp_ports = 3306,4444,4567,4568

[mysql]
tcp_ports = 3306

[postgresql]
tcp_ports = 5432
```

Explanation:

- In the `[DEFAULT]` group, `bind_host` is set to the IP of the Trove controller node.
- `transport_url` is the RabbitMQ connection information; replace `RABBIT_PASS` with the RabbitMQ password.
- `connection` in `[database]` points to the database created for Trove in mysql above.
- In the Trove user information, replace `TROVE_PASS` with the actual password of the trove user.

Edit /etc/trove/trove-guestagent.conf:

```ini
[DEFAULT]
log_file = trove-guestagent.log
log_dir = /var/log/trove/
ignore_users = os_admin
control_exchange = trove
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
rpc_backend = rabbit
command_process_timeout = 60
use_syslog = False
debug = True

[service_credentials]
auth_url = http://controller:5000/v3/
region_name = RegionOne
project_name = service
project_domain_name = Default
user_domain_name = Default
username = trove
password = TROVE_PASS

[mysql]
docker_image = your-registry/your-repo/mysql
backup_docker_image = your-registry/your-repo/db-backup-mysql:1.1.0
```
your-registry/your-repo/db-backup-mysql:1.1.0 \u89e3\u91ca\uff1a guestagent \u662ftrove\u4e2d\u4e00\u4e2a\u72ec\u7acb\u7ec4\u4ef6\uff0c\u9700\u8981\u9884\u5148\u5185\u7f6e\u5230Trove\u901a\u8fc7Nova\u521b\u5efa\u7684\u865a\u62df\u673a\u955c\u50cf\u4e2d\uff0c\u5728\u521b\u5efa\u597d\u6570\u636e\u5e93\u5b9e\u4f8b\u540e\uff0c\u4f1a\u8d77guestagent\u8fdb\u7a0b\uff0c\u8d1f\u8d23\u901a\u8fc7\u6d88\u606f\u961f\u5217\uff08RabbitMQ\uff09\u5411Trove\u4e0a\u62a5\u5fc3\u8df3\uff0c\u56e0\u6b64\u9700\u8981\u914d\u7f6eRabbitMQ\u7684\u7528\u6237\u548c\u5bc6\u7801\u4fe1\u606f\u3002\\ transport_url \u4e3a RabbitMQ \u8fde\u63a5\u4fe1\u606f\uff0c RABBIT_PASS \u66ff\u6362\u4e3aRabbitMQ\u7684\u5bc6\u7801\u3002\\ Trove\u7684\u7528\u6237\u4fe1\u606f\u4e2d TROVE_PASSWORD \u66ff\u6362\u4e3a\u5b9e\u9645trove\u7528\u6237\u7684\u5bc6\u7801\u3002\\ \u4eceVictoria\u7248\u5f00\u59cb\uff0cTrove\u4f7f\u7528\u4e00\u4e2a\u7edf\u4e00\u7684\u955c\u50cf\u6765\u8dd1\u4e0d\u540c\u7c7b\u578b\u7684\u6570\u636e\u5e93\uff0c\u6570\u636e\u5e93\u670d\u52a1\u8fd0\u884c\u5728Guest\u865a\u62df\u673a\u7684Docker\u5bb9\u5668\u4e2d\u3002 \u6570\u636e\u5e93\u540c\u6b65\u3002 su -s /bin/sh -c \"trove-manage db_sync\" trove \u5b8c\u6210\u5b89\u88c5\u3002 # \u914d\u7f6e\u670d\u52a1\u81ea\u542f systemctl enable openstack-trove-api.service openstack-trove-taskmanager.service \\ openstack-trove-conductor.service # \u542f\u52a8\u670d\u52a1 systemctl start openstack-trove-api.service openstack-trove-taskmanager.service \\ openstack-trove-conductor.service Swift \u00b6 Swift \u63d0\u4f9b\u4e86\u5f39\u6027\u53ef\u4f38\u7f29\u3001\u9ad8\u53ef\u7528\u7684\u5206\u5e03\u5f0f\u5bf9\u8c61\u5b58\u50a8\u670d\u52a1\uff0c\u9002\u5408\u5b58\u50a8\u5927\u89c4\u6a21\u975e\u7ed3\u6784\u5316\u6570\u636e\u3002 Controller\u8282\u70b9 \u521b\u5efa\u670d\u52a1\u51ed\u8bc1\u4ee5\u53caAPI\u7aef\u70b9\u3002 \u521b\u5efa\u670d\u52a1\u51ed\u8bc1\u3002 # \u521b\u5efaswift\u7528\u6237 openstack user create --domain default --password-prompt swift # \u6dfb\u52a0admin\u89d2\u8272 openstack role add --project service --user swift admin # \u521b\u5efa\u5bf9\u8c61\u5b58\u50a8\u670d\u52a1 openstack service create --name swift --description \"OpenStack Object Storage\" object-store \u521b\u5efaAPI\u7aef\u70b9\u3002 openstack endpoint create --region RegionOne object-store public http://controller:8080/v1/AUTH_%\\(project_id\\)s openstack endpoint create --region RegionOne object-store internal http://controller:8080/v1/AUTH_%\\(project_id\\)s openstack endpoint create --region RegionOne object-store admin http://controller:8080/v1 \u5b89\u88c5Swift\u3002 dnf install openstack-swift-proxy python3-swiftclient python3-keystoneclient \\ python3-keystonemiddleware memcached \u914d\u7f6eproxy-server\u3002 Swift RPM\u5305\u91cc\u5df2\u7ecf\u5305\u542b\u4e86\u4e00\u4e2a\u57fa\u672c\u53ef\u7528\u7684proxy-server.conf\uff0c\u53ea\u9700\u8981\u624b\u52a8\u4fee\u6539\u5176\u4e2d\u7684ip\u548cSWIFT_PASS\u5373\u53ef\u3002 vim /etc/swift/proxy-server.conf [filter:authtoken] paste.filter_factory = keystonemiddleware.auth_token:filter_factory www_authenticate_uri = http://controller:5000 auth_url = http://controller:5000 memcached_servers = controller:11211 auth_type = password project_domain_id = default user_domain_id = default project_name = service username = swift password = SWIFT_PASS delay_auth_decision = True service_token_roles_required = True Storage\u8282\u70b9 \u5b89\u88c5\u652f\u6301\u7684\u7a0b\u5e8f\u5305\u3002 dnf install openstack-swift-account openstack-swift-container 
openstack-swift-object dnf install xfsprogs rsync \u5c06\u8bbe\u5907/dev/sdb\u548c/dev/sdc\u683c\u5f0f\u5316\u4e3aXFS\u3002 mkfs.xfs /dev/sdb mkfs.xfs /dev/sdc \u521b\u5efa\u6302\u8f7d\u70b9\u76ee\u5f55\u7ed3\u6784\u3002 mkdir -p /srv/node/sdb mkdir -p /srv/node/sdc \u627e\u5230\u65b0\u5206\u533a\u7684UUID\u3002 blkid \u7f16\u8f91/etc/fstab\u6587\u4ef6\u5e76\u5c06\u4ee5\u4e0b\u5185\u5bb9\u6dfb\u52a0\u5230\u5176\u4e2d\u3002 UUID=\"\" /srv/node/sdb xfs noatime 0 2 UUID=\"\" /srv/node/sdc xfs noatime 0 2 \u6302\u8f7d\u8bbe\u5907\u3002 mount /srv/node/sdb mount /srv/node/sdc \u6ce8\u610f \u5982\u679c\u7528\u6237\u4e0d\u9700\u8981\u5bb9\u707e\u529f\u80fd\uff0c\u4ee5\u4e0a\u6b65\u9aa4\u53ea\u9700\u8981\u521b\u5efa\u4e00\u4e2a\u8bbe\u5907\u5373\u53ef\uff0c\u540c\u65f6\u53ef\u4ee5\u8df3\u8fc7\u4e0b\u9762\u7684rsync\u914d\u7f6e\u3002 \uff08\u53ef\u9009\uff09\u521b\u5efa\u6216\u7f16\u8f91/etc/rsyncd.conf\u6587\u4ef6\u4ee5\u5305\u542b\u4ee5\u4e0b\u5185\u5bb9: [DEFAULT] uid = swift gid = swift log file = /var/log/rsyncd.log pid file = /var/run/rsyncd.pid address = MANAGEMENT_INTERFACE_IP_ADDRESS [account] max connections = 2 path = /srv/node/ read only = False lock file = /var/lock/account.lock [container] max connections = 2 path = /srv/node/ read only = False lock file = /var/lock/container.lock [object] max connections = 2 path = /srv/node/ read only = False lock file = /var/lock/object.lock \u66ff\u6362MANAGEMENT_INTERFACE_IP_ADDRESS\u4e3a\u5b58\u50a8\u8282\u70b9\u4e0a\u7ba1\u7406\u7f51\u7edc\u7684IP\u5730\u5740 \u542f\u52a8rsyncd\u670d\u52a1\u5e76\u914d\u7f6e\u5b83\u5728\u7cfb\u7edf\u542f\u52a8\u65f6\u542f\u52a8: systemctl enable rsyncd.service systemctl start rsyncd.service \u914d\u7f6e\u5b58\u50a8\u8282\u70b9\u3002 \u7f16\u8f91/etc/swift\u76ee\u5f55\u7684account-server.conf\u3001container-server.conf\u548cobject-server.conf\u6587\u4ef6\uff0c\u66ff\u6362bind_ip\u4e3a\u5b58\u50a8\u8282\u70b9\u4e0a\u7ba1\u7406\u7f51\u7edc\u7684IP\u5730\u5740\u3002 [DEFAULT] bind_ip = 192.168.0.4 \u786e\u4fdd\u6302\u8f7d\u70b9\u76ee\u5f55\u7ed3\u6784\u7684\u6b63\u786e\u6240\u6709\u6743\u3002 chown -R swift:swift /srv/node \u521b\u5efarecon\u76ee\u5f55\u5e76\u786e\u4fdd\u5176\u62e5\u6709\u6b63\u786e\u7684\u6240\u6709\u6743\u3002 mkdir -p /var/cache/swift chown -R root:swift /var/cache/swift chmod -R 775 /var/cache/swift Controller\u8282\u70b9\u521b\u5efa\u5e76\u5206\u53d1\u73af \u521b\u5efa\u8d26\u53f7\u73af\u3002 \u5207\u6362\u5230 /etc/swift \u76ee\u5f55\u3002 cd /etc/swift \u521b\u5efa\u57fa\u7840 account.builder \u6587\u4ef6\u3002 swift-ring-builder account.builder create 10 1 1 \u5c06\u6bcf\u4e2a\u5b58\u50a8\u8282\u70b9\u6dfb\u52a0\u5230\u73af\u4e2d\u3002 swift-ring-builder account.builder add --region 1 --zone 1 \\ --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS \\ --port 6202 --device DEVICE_NAME \\ --weight 100 \u66ff\u6362STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS\u4e3a\u5b58\u50a8\u8282\u70b9\u4e0a\u7ba1\u7406\u7f51\u7edc\u7684IP\u5730\u5740\u3002\\ \u66ff\u6362DEVICE_NAME\u4e3a\u540c\u4e00\u5b58\u50a8\u8282\u70b9\u4e0a\u7684\u5b58\u50a8\u8bbe\u5907\u540d\u79f0\u3002 \u6ce8\u610f \u5bf9\u6bcf\u4e2a\u5b58\u50a8\u8282\u70b9\u4e0a\u7684\u6bcf\u4e2a\u5b58\u50a8\u8bbe\u5907\u91cd\u590d\u6b64\u547d\u4ee4 \u9a8c\u8bc1\u8d26\u53f7\u73af\u5185\u5bb9\u3002 swift-ring-builder account.builder \u91cd\u65b0\u5e73\u8861\u8d26\u53f7\u73af\u3002 swift-ring-builder account.builder rebalance \u521b\u5efa\u5bb9\u5668\u73af\u3002 \u5207\u6362\u5230 /etc/swift \u76ee\u5f55\u3002 \u521b\u5efa\u57fa\u7840 container.builder 
\u6587\u4ef6\u3002 swift-ring-builder container.builder create 10 1 1 \u5c06\u6bcf\u4e2a\u5b58\u50a8\u8282\u70b9\u6dfb\u52a0\u5230\u73af\u4e2d\u3002 swift-ring-builder container.builder add --region 1 --zone 1 \\ --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6201 --device DEVICE_NAME \\ --weight 100 \u66ff\u6362STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS\u4e3a\u5b58\u50a8\u8282\u70b9\u4e0a\u7ba1\u7406\u7f51\u7edc\u7684IP\u5730\u5740\u3002\\ \u66ff\u6362DEVICE_NAME\u4e3a\u540c\u4e00\u5b58\u50a8\u8282\u70b9\u4e0a\u7684\u5b58\u50a8\u8bbe\u5907\u540d\u79f0\u3002 \u6ce8\u610f \u5bf9\u6bcf\u4e2a\u5b58\u50a8\u8282\u70b9\u4e0a\u7684\u6bcf\u4e2a\u5b58\u50a8\u8bbe\u5907\u91cd\u590d\u6b64\u547d\u4ee4 \u9a8c\u8bc1\u5bb9\u5668\u73af\u5185\u5bb9\u3002 swift-ring-builder container.builder \u91cd\u65b0\u5e73\u8861\u5bb9\u5668\u73af\u3002 swift-ring-builder container.builder rebalance \u521b\u5efa\u5bf9\u8c61\u73af\u3002 \u5207\u6362\u5230 /etc/swift \u76ee\u5f55\u3002 \u521b\u5efa\u57fa\u7840 object.builder \u6587\u4ef6\u3002 swift-ring-builder object.builder create 10 1 1 \u5c06\u6bcf\u4e2a\u5b58\u50a8\u8282\u70b9\u6dfb\u52a0\u5230\u73af\u4e2d\u3002 swift-ring-builder object.builder add --region 1 --zone 1 \\ --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS \\ --port 6200 --device DEVICE_NAME \\ --weight 100 \u66ff\u6362STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS\u4e3a\u5b58\u50a8\u8282\u70b9\u4e0a\u7ba1\u7406\u7f51\u7edc\u7684IP\u5730\u5740\u3002\\ \u66ff\u6362DEVICE_NAME\u4e3a\u540c\u4e00\u5b58\u50a8\u8282\u70b9\u4e0a\u7684\u5b58\u50a8\u8bbe\u5907\u540d\u79f0\u3002 \u6ce8\u610f \u5bf9\u6bcf\u4e2a\u5b58\u50a8\u8282\u70b9\u4e0a\u7684\u6bcf\u4e2a\u5b58\u50a8\u8bbe\u5907\u91cd\u590d\u6b64\u547d\u4ee4 \u9a8c\u8bc1\u5bf9\u8c61\u73af\u5185\u5bb9\u3002 swift-ring-builder object.builder \u91cd\u65b0\u5e73\u8861\u5bf9\u8c61\u73af\u3002 swift-ring-builder object.builder rebalance \u5206\u53d1\u73af\u914d\u7f6e\u6587\u4ef6\u3002 \u5c06 account.ring.gz \uff0c container.ring.gz \u4ee5\u53ca object.ring.gz \u6587\u4ef6\u590d\u5236\u5230\u6bcf\u4e2a\u5b58\u50a8\u8282\u70b9\u548c\u8fd0\u884c\u4ee3\u7406\u670d\u52a1\u7684\u4efb\u4f55\u5176\u4ed6\u8282\u70b9\u4e0a\u7684 /etc/swift \u76ee\u5f55\u3002 \u7f16\u8f91\u914d\u7f6e\u6587\u4ef6/etc/swift/swift.conf\u3002 [swift-hash] swift_hash_path_suffix = test-hash swift_hash_path_prefix = test-hash [storage-policy:0] name = Policy-0 default = yes \u7528\u552f\u4e00\u503c\u66ff\u6362 test-hash \u5c06swift.conf\u6587\u4ef6\u590d\u5236\u5230/etc/swift\u6bcf\u4e2a\u5b58\u50a8\u8282\u70b9\u548c\u8fd0\u884c\u4ee3\u7406\u670d\u52a1\u7684\u4efb\u4f55\u5176\u4ed6\u8282\u70b9\u4e0a\u7684\u76ee\u5f55\u3002 \u5728\u6240\u6709\u8282\u70b9\u4e0a\uff0c\u786e\u4fdd\u914d\u7f6e\u76ee\u5f55\u7684\u6b63\u786e\u6240\u6709\u6743\u3002 chown -R root:swift /etc/swift \u5b8c\u6210\u5b89\u88c5 \u5728\u63a7\u5236\u8282\u70b9\u548c\u8fd0\u884c\u4ee3\u7406\u670d\u52a1\u7684\u4efb\u4f55\u5176\u4ed6\u8282\u70b9\u4e0a\uff0c\u542f\u52a8\u5bf9\u8c61\u5b58\u50a8\u4ee3\u7406\u670d\u52a1\u53ca\u5176\u4f9d\u8d56\u9879\uff0c\u5e76\u5c06\u5b83\u4eec\u914d\u7f6e\u4e3a\u5728\u7cfb\u7edf\u542f\u52a8\u65f6\u542f\u52a8\u3002 systemctl enable openstack-swift-proxy.service memcached.service systemctl start openstack-swift-proxy.service memcached.service \u5728\u5b58\u50a8\u8282\u70b9\u4e0a\uff0c\u542f\u52a8\u5bf9\u8c61\u5b58\u50a8\u670d\u52a1\u5e76\u5c06\u5b83\u4eec\u914d\u7f6e\u4e3a\u5728\u7cfb\u7edf\u542f\u52a8\u65f6\u542f\u52a8\u3002 systemctl enable openstack-swift-account.service \\ 
openstack-swift-account-auditor.service \\ openstack-swift-account-reaper.service \\ openstack-swift-account-replicator.service \\ openstack-swift-container.service \\ openstack-swift-container-auditor.service \\ openstack-swift-container-replicator.service \\ openstack-swift-container-updater.service \\ openstack-swift-object.service \\ openstack-swift-object-auditor.service \\ openstack-swift-object-replicator.service \\ openstack-swift-object-updater.service systemctl start openstack-swift-account.service \\ openstack-swift-account-auditor.service \\ openstack-swift-account-reaper.service \\ openstack-swift-account-replicator.service \\ openstack-swift-container.service \\ openstack-swift-container-auditor.service \\ openstack-swift-container-replicator.service \\ openstack-swift-container-updater.service \\ openstack-swift-object.service \\ openstack-swift-object-auditor.service \\ openstack-swift-object-replicator.service \\ openstack-swift-object-updater.service Cyborg \u00b6 Cyborg\u4e3aOpenStack\u63d0\u4f9b\u52a0\u901f\u5668\u8bbe\u5907\u7684\u652f\u6301\uff0c\u5305\u62ec GPU, FPGA, ASIC, NP, SoCs, NVMe/NOF SSDs, ODP, DPDK/SPDK\u7b49\u7b49\u3002 Controller\u8282\u70b9 \u521d\u59cb\u5316\u5bf9\u5e94\u6570\u636e\u5e93 mysql -u root -p MariaDB [(none)]> CREATE DATABASE cyborg; MariaDB [(none)]> GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'localhost' IDENTIFIED BY 'CYBORG_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'%' IDENTIFIED BY 'CYBORG_DBPASS'; MariaDB [(none)]> exit; \u521b\u5efa\u7528\u6237\u548c\u670d\u52a1\uff0c\u5e76\u8bb0\u4f4f\u521b\u5efacybory\u7528\u6237\u65f6\u8f93\u5165\u7684\u5bc6\u7801\uff0c\u7528\u4e8e\u914d\u7f6eCYBORG_PASS source ~/.admin-openrc openstack user create --domain default --password-prompt cyborg openstack role add --project service --user cyborg admin openstack service create --name cyborg --description \"Acceleration Service\" accelerator \u4f7f\u7528uwsgi\u90e8\u7f72Cyborg api\u670d\u52a1 openstack endpoint create --region RegionOne accelerator public http://controller/accelerator/v2 openstack endpoint create --region RegionOne accelerator internal http://controller/accelerator/v2 openstack endpoint create --region RegionOne accelerator admin http://controller/accelerator/v2 \u5b89\u88c5Cyborg dnf install openstack-cyborg \u914d\u7f6eCyborg \u4fee\u6539 /etc/cyborg/cyborg.conf [DEFAULT] transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/ use_syslog = False state_path = /var/lib/cyborg debug = True [api] host_ip = 0.0.0.0 [database] connection = mysql+pymysql://cyborg:CYBORG_DBPASS@controller/cyborg [service_catalog] cafile = /opt/stack/data/ca-bundle.pem project_domain_id = default user_domain_id = default project_name = service password = CYBORG_PASS username = cyborg auth_url = http://controller:5000/v3/ auth_type = password [placement] project_domain_name = Default project_name = service user_domain_name = Default password = password username = PLACEMENT_PASS auth_url = http://controller:5000/v3/ auth_type = password auth_section = keystone_authtoken [nova] project_domain_name = Default project_name = service user_domain_name = Default password = NOVA_PASS username = nova auth_url = http://controller:5000/v3/ auth_type = password auth_section = keystone_authtoken [keystone_authtoken] memcached_servers = localhost:11211 signing_dir = /var/cache/cyborg/api cafile = /opt/stack/data/ca-bundle.pem project_domain_name = Default project_name = service user_domain_name = Default password = CYBORG_PASS username = 
cyborg auth_url = http://controller:5000/v3/ auth_type = password \u540c\u6b65\u6570\u636e\u5e93\u8868\u683c cyborg-dbsync --config-file /etc/cyborg/cyborg.conf upgrade \u542f\u52a8Cyborg\u670d\u52a1 systemctl enable openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent systemctl start openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent Aodh \u00b6 Aodh\u53ef\u4ee5\u6839\u636e\u7531Ceilometer\u6216\u8005Gnocchi\u6536\u96c6\u7684\u76d1\u63a7\u6570\u636e\u521b\u5efa\u544a\u8b66\uff0c\u5e76\u8bbe\u7f6e\u89e6\u53d1\u89c4\u5219\u3002 Controller\u8282\u70b9 \u521b\u5efa\u6570\u636e\u5e93\u3002 CREATE DATABASE aodh; GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'localhost' IDENTIFIED BY 'AODH_DBPASS'; GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'%' IDENTIFIED BY 'AODH_DBPASS'; \u521b\u5efa\u670d\u52a1\u51ed\u8bc1\u4ee5\u53caAPI\u7aef\u70b9\u3002 \u521b\u5efa\u670d\u52a1\u51ed\u8bc1\u3002 openstack user create --domain default --password-prompt aodh openstack role add --project service --user aodh admin openstack service create --name aodh --description \"Telemetry\" alarming \u521b\u5efaAPI\u7aef\u70b9\u3002 openstack endpoint create --region RegionOne alarming public http://controller:8042 openstack endpoint create --region RegionOne alarming internal http://controller:8042 openstack endpoint create --region RegionOne alarming admin http://controller:8042 \u5b89\u88c5Aodh\u3002 dnf install openstack-aodh-api openstack-aodh-evaluator \\ openstack-aodh-notifier openstack-aodh-listener \\ openstack-aodh-expirer python3-aodhclient \u4fee\u6539\u914d\u7f6e\u6587\u4ef6\u3002 vim /etc/aodh/aodh.conf [database] connection = mysql+pymysql://aodh:AODH_DBPASS@controller/aodh [DEFAULT] transport_url = rabbit://openstack:RABBIT_PASS@controller auth_strategy = keystone [keystone_authtoken] www_authenticate_uri = http://controller:5000 auth_url = http://controller:5000 memcached_servers = controller:11211 auth_type = password project_domain_id = default user_domain_id = default project_name = service username = aodh password = AODH_PASS [service_credentials] auth_type = password auth_url = http://controller:5000/v3 project_domain_id = default user_domain_id = default project_name = service username = aodh password = AODH_PASS interface = internalURL region_name = RegionOne \u540c\u6b65\u6570\u636e\u5e93\u3002 aodh-dbsync \u5b8c\u6210\u5b89\u88c5\u3002 # \u914d\u7f6e\u670d\u52a1\u81ea\u542f systemctl enable openstack-aodh-api.service openstack-aodh-evaluator.service \\ openstack-aodh-notifier.service openstack-aodh-listener.service # \u542f\u52a8\u670d\u52a1 systemctl start openstack-aodh-api.service openstack-aodh-evaluator.service \\ openstack-aodh-notifier.service openstack-aodh-listener.service Gnocchi \u00b6 Gnocchi\u662f\u4e00\u4e2a\u5f00\u6e90\u7684\u65f6\u95f4\u5e8f\u5217\u6570\u636e\u5e93\uff0c\u53ef\u4ee5\u5bf9\u63a5Ceilometer\u3002 Controller\u8282\u70b9 \u521b\u5efa\u6570\u636e\u5e93\u3002 CREATE DATABASE gnocchi; GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'localhost' IDENTIFIED BY 'GNOCCHI_DBPASS'; GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'%' IDENTIFIED BY 'GNOCCHI_DBPASS'; \u521b\u5efa\u670d\u52a1\u51ed\u8bc1\u4ee5\u53caAPI\u7aef\u70b9\u3002 \u521b\u5efa\u670d\u52a1\u51ed\u8bc1\u3002 openstack user create --domain default --password-prompt gnocchi openstack role add --project service --user gnocchi admin openstack service create --name gnocchi --description \"Metric Service\" metric \u521b\u5efaAPI\u7aef\u70b9\u3002 openstack endpoint create --region RegionOne 
metric public http://controller:8041 openstack endpoint create --region RegionOne metric internal http://controller:8041 openstack endpoint create --region RegionOne metric admin http://controller:8041 \u5b89\u88c5Gnocchi\u3002 dnf install openstack-gnocchi-api openstack-gnocchi-metricd python3-gnocchiclient \u4fee\u6539\u914d\u7f6e\u6587\u4ef6\u3002 vim /etc/gnocchi/gnocchi.conf [api] auth_mode = keystone port = 8041 uwsgi_mode = http-socket [keystone_authtoken] auth_type = password auth_url = http://controller:5000/v3 project_domain_name = Default user_domain_name = Default project_name = service username = gnocchi password = GNOCCHI_PASS interface = internalURL region_name = RegionOne [indexer] url = mysql+pymysql://gnocchi:GNOCCHI_DBPASS@controller/gnocchi [storage] # coordination_url is not required but specifying one will improve # performance with better workload division across workers. # coordination_url = redis://controller:6379 file_basepath = /var/lib/gnocchi driver = file \u540c\u6b65\u6570\u636e\u5e93\u3002 gnocchi-upgrade \u5b8c\u6210\u5b89\u88c5\u3002 # \u914d\u7f6e\u670d\u52a1\u81ea\u542f systemctl enable openstack-gnocchi-api.service openstack-gnocchi-metricd.service # \u542f\u52a8\u670d\u52a1 systemctl start openstack-gnocchi-api.service openstack-gnocchi-metricd.service Ceilometer \u00b6 Ceilometer\u662fOpenStack\u4e2d\u8d1f\u8d23\u6570\u636e\u6536\u96c6\u7684\u670d\u52a1\u3002 Controller\u8282\u70b9 \u521b\u5efa\u670d\u52a1\u51ed\u8bc1\u3002 openstack user create --domain default --password-prompt ceilometer openstack role add --project service --user ceilometer admin openstack service create --name ceilometer --description \"Telemetry\" metering \u5b89\u88c5Ceilometer\u8f6f\u4ef6\u5305\u3002 dnf install openstack-ceilometer-notification openstack-ceilometer-central \u7f16\u8f91\u914d\u7f6e\u6587\u4ef6/etc/ceilometer/pipeline.yaml\u3002 publishers: # set address of Gnocchi # + filter out Gnocchi-related activity meters (Swift driver) # + set default archive policy - gnocchi://?filter_project=service&archive_policy=low \u7f16\u8f91\u914d\u7f6e\u6587\u4ef6/etc/ceilometer/ceilometer.conf\u3002 [DEFAULT] transport_url = rabbit://openstack:RABBIT_PASS@controller [service_credentials] auth_type = password auth_url = http://controller:5000/v3 project_domain_id = default user_domain_id = default project_name = service username = ceilometer password = CEILOMETER_PASS interface = internalURL region_name = RegionOne \u6570\u636e\u5e93\u540c\u6b65\u3002 ceilometer-upgrade \u5b8c\u6210\u63a7\u5236\u8282\u70b9Ceilometer\u5b89\u88c5\u3002 # \u914d\u7f6e\u670d\u52a1\u81ea\u542f systemctl enable openstack-ceilometer-notification.service openstack-ceilometer-central.service # \u542f\u52a8\u670d\u52a1 systemctl start openstack-ceilometer-notification.service openstack-ceilometer-central.service Compute\u8282\u70b9 \u5b89\u88c5Ceilometer\u8f6f\u4ef6\u5305\u3002 dnf install openstack-ceilometer-compute dnf install openstack-ceilometer-ipmi # \u53ef\u9009 \u7f16\u8f91\u914d\u7f6e\u6587\u4ef6/etc/ceilometer/ceilometer.conf\u3002 [DEFAULT] transport_url = rabbit://openstack:RABBIT_PASS@controller [service_credentials] auth_url = http://controller:5000 project_domain_id = default user_domain_id = default auth_type = password username = ceilometer project_name = service password = CEILOMETER_PASS interface = internalURL region_name = RegionOne \u7f16\u8f91\u914d\u7f6e\u6587\u4ef6/etc/nova/nova.conf\u3002 [DEFAULT] instance_usage_audit = True instance_usage_audit_period = hour [notifications] 
notify_on_state_change = vm_and_task_state [oslo_messaging_notifications] driver = messagingv2 \u5b8c\u6210\u5b89\u88c5\u3002 systemctl enable openstack-ceilometer-compute.service systemctl start openstack-ceilometer-compute.service systemctl enable openstack-ceilometer-ipmi.service # \u53ef\u9009 systemctl start openstack-ceilometer-ipmi.service # \u53ef\u9009 # \u91cd\u542fnova-compute\u670d\u52a1 systemctl restart openstack-nova-compute.service Heat \u00b6 Heat\u662f OpenStack \u81ea\u52a8\u7f16\u6392\u670d\u52a1\uff0c\u57fa\u4e8e\u63cf\u8ff0\u6027\u7684\u6a21\u677f\u6765\u7f16\u6392\u590d\u5408\u4e91\u5e94\u7528\uff0c\u4e5f\u79f0\u4e3a Orchestration Service \u3002Heat \u7684\u5404\u670d\u52a1\u4e00\u822c\u5b89\u88c5\u5728 Controller \u8282\u70b9\u4e0a\u3002 Controller\u8282\u70b9 \u521b\u5efa heat \u6570\u636e\u5e93\uff0c\u5e76\u6388\u4e88 heat \u6570\u636e\u5e93\u6b63\u786e\u7684\u8bbf\u95ee\u6743\u9650\uff0c\u66ff\u6362 HEAT_DBPASS \u4e3a\u5408\u9002\u7684\u5bc6\u7801 mysql -u root -p MariaDB [(none)]> CREATE DATABASE heat; MariaDB [(none)]> GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'localhost' IDENTIFIED BY 'HEAT_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'%' IDENTIFIED BY 'HEAT_DBPASS'; MariaDB [(none)]> exit; \u521b\u5efa\u670d\u52a1\u51ed\u8bc1\uff0c\u521b\u5efa heat \u7528\u6237\uff0c\u5e76\u4e3a\u5176\u589e\u52a0 admin \u89d2\u8272 source ~/.admin-openrc openstack user create --domain default --password-prompt heat openstack role add --project service --user heat admin \u521b\u5efa heat \u548c heat-cfn \u670d\u52a1\u53ca\u5176\u5bf9\u5e94\u7684API\u7aef\u70b9 openstack service create --name heat --description \"Orchestration\" orchestration openstack service create --name heat-cfn --description \"Orchestration\" cloudformation openstack endpoint create --region RegionOne orchestration public http://controller:8004/v1/%\\(tenant_id\\)s openstack endpoint create --region RegionOne orchestration internal http://controller:8004/v1/%\\(tenant_id\\)s openstack endpoint create --region RegionOne orchestration admin http://controller:8004/v1/%\\(tenant_id\\)s openstack endpoint create --region RegionOne cloudformation public http://controller:8000/v1 openstack endpoint create --region RegionOne cloudformation internal http://controller:8000/v1 openstack endpoint create --region RegionOne cloudformation admin http://controller:8000/v1 \u521b\u5efastack\u7ba1\u7406\u7684\u989d\u5916\u4fe1\u606f \u521b\u5efa heat domain openstack domain create --description \"Stack projects and users\" heat \u5728 heat domain\u4e0b\u521b\u5efa heat_domain_admin \u7528\u6237\uff0c\u5e76\u8bb0\u4e0b\u8f93\u5165\u7684\u5bc6\u7801\uff0c\u7528\u4e8e\u914d\u7f6e\u4e0b\u9762\u7684 HEAT_DOMAIN_PASS openstack user create --domain heat --password-prompt heat_domain_admin \u4e3a heat_domain_admin \u7528\u6237\u589e\u52a0 admin \u89d2\u8272 openstack role add --domain heat --user-domain heat --user heat_domain_admin admin \u521b\u5efa heat_stack_owner \u89d2\u8272 openstack role create heat_stack_owner \u521b\u5efa heat_stack_user \u89d2\u8272 openstack role create heat_stack_user \u5b89\u88c5\u8f6f\u4ef6\u5305 dnf install openstack-heat-api openstack-heat-api-cfn openstack-heat-engine \u4fee\u6539\u914d\u7f6e\u6587\u4ef6 /etc/heat/heat.conf [DEFAULT] transport_url = rabbit://openstack:RABBIT_PASS@controller heat_metadata_server_url = http://controller:8000 heat_waitcondition_server_url = http://controller:8000/v1/waitcondition stack_domain_admin = heat_domain_admin stack_domain_admin_password 
= HEAT_DOMAIN_PASS stack_user_domain_name = heat [database] connection = mysql+pymysql://heat:HEAT_DBPASS@controller/heat [keystone_authtoken] www_authenticate_uri = http://controller:5000 auth_url = http://controller:5000 memcached_servers = controller:11211 auth_type = password project_domain_name = default user_domain_name = default project_name = service username = heat password = HEAT_PASS [trustee] auth_type = password auth_url = http://controller:5000 username = heat password = HEAT_PASS user_domain_name = default [clients_keystone] auth_uri = http://controller:5000 \u521d\u59cb\u5316 heat \u6570\u636e\u5e93\u8868 su -s /bin/sh -c \"heat-manage db_sync\" heat \u542f\u52a8\u670d\u52a1 systemctl enable openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service systemctl start openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service Tempest \u00b6 Tempest\u662fOpenStack\u7684\u96c6\u6210\u6d4b\u8bd5\u670d\u52a1\uff0c\u5982\u679c\u7528\u6237\u9700\u8981\u5168\u9762\u81ea\u52a8\u5316\u6d4b\u8bd5\u5df2\u5b89\u88c5\u7684OpenStack\u73af\u5883\u7684\u529f\u80fd,\u5219\u63a8\u8350\u4f7f\u7528\u8be5\u7ec4\u4ef6\u3002\u5426\u5219\uff0c\u53ef\u4ee5\u4e0d\u7528\u5b89\u88c5\u3002 Controller\u8282\u70b9 \uff1a \u5b89\u88c5Tempest dnf install openstack-tempest \u521d\u59cb\u5316\u76ee\u5f55 tempest init mytest \u4fee\u6539\u914d\u7f6e\u6587\u4ef6\u3002 cd mytest vi etc/tempest.conf tempest.conf\u4e2d\u9700\u8981\u914d\u7f6e\u5f53\u524dOpenStack\u73af\u5883\u7684\u4fe1\u606f\uff0c\u5177\u4f53\u5185\u5bb9\u53ef\u4ee5\u53c2\u8003 \u5b98\u65b9\u793a\u4f8b \u6267\u884c\u6d4b\u8bd5 tempest run \u5b89\u88c5tempest\u6269\u5c55\uff08\u53ef\u9009\uff09 OpenStack\u5404\u4e2a\u670d\u52a1\u672c\u8eab\u4e5f\u63d0\u4f9b\u4e86\u4e00\u4e9btempest\u6d4b\u8bd5\u5305\uff0c\u7528\u6237\u53ef\u4ee5\u5b89\u88c5\u8fd9\u4e9b\u5305\u6765\u4e30\u5bcctempest\u7684\u6d4b\u8bd5\u5185\u5bb9\u3002\u5728Antelope\u4e2d\uff0c\u6211\u4eec\u63d0\u4f9b\u4e86Cinder\u3001Glance\u3001Keystone\u3001Ironic\u3001Trove\u7684\u6269\u5c55\u6d4b\u8bd5\uff0c\u7528\u6237\u53ef\u4ee5\u6267\u884c\u5982\u4e0b\u547d\u4ee4\u8fdb\u884c\u5b89\u88c5\u4f7f\u7528\uff1a dnf install python3-cinder-tempest-plugin python3-glance-tempest-plugin python3-ironic-tempest-plugin python3-keystone-tempest-plugin python3-trove-tempest-plugin \u57fa\u4e8eOpenStack SIG\u5f00\u53d1\u5de5\u5177oos\u90e8\u7f72 \u00b6 oos (openEuler OpenStack SIG)\u662fOpenStack SIG\u63d0\u4f9b\u7684\u547d\u4ee4\u884c\u5de5\u5177\u3002\u5176\u4e2d oos env \u7cfb\u5217\u547d\u4ee4\u63d0\u4f9b\u4e86\u4e00\u952e\u90e8\u7f72OpenStack \uff08 all in one \u6216\u4e09\u8282\u70b9 cluster \uff09\u7684ansible\u811a\u672c\uff0c\u7528\u6237\u53ef\u4ee5\u4f7f\u7528\u8be5\u811a\u672c\u5feb\u901f\u90e8\u7f72\u4e00\u5957\u57fa\u4e8e openEuler RPM \u7684 OpenStack \u73af\u5883\u3002 oos \u5de5\u5177\u652f\u6301\u5bf9\u63a5\u4e91provider\uff08\u76ee\u524d\u4ec5\u652f\u6301\u534e\u4e3a\u4e91provider\uff09\u548c\u4e3b\u673a\u7eb3\u7ba1\u4e24\u79cd\u65b9\u5f0f\u6765\u90e8\u7f72 OpenStack \u73af\u5883\uff0c\u4e0b\u9762\u4ee5\u5bf9\u63a5\u534e\u4e3a\u4e91\u90e8\u7f72\u4e00\u5957 all in one \u7684OpenStack\u73af\u5883\u4e3a\u4f8b\u8bf4\u660e oos \u5de5\u5177\u7684\u4f7f\u7528\u65b9\u6cd5\u3002 \u5b89\u88c5 oos \u5de5\u5177 yum install openstack-sig-tool \u914d\u7f6e\u5bf9\u63a5\u534e\u4e3a\u4e91provider\u7684\u4fe1\u606f \u6253\u5f00 /usr/local/etc/oos/oos.conf 
\u6587\u4ef6\uff0c\u4fee\u6539\u914d\u7f6e\u4e3a\u60a8\u62e5\u6709\u7684\u534e\u4e3a\u4e91\u8d44\u6e90\u4fe1\u606f\uff0cAK/SK\u662f\u7528\u6237\u7684\u534e\u4e3a\u4e91\u767b\u5f55\u5bc6\u94a5\uff0c\u5176\u4ed6\u914d\u7f6e\u4fdd\u6301\u9ed8\u8ba4\u5373\u53ef\uff08\u9ed8\u8ba4\u4f7f\u7528\u65b0\u52a0\u5761region\uff09\uff0c\u9700\u8981\u63d0\u524d\u5728\u4e91\u4e0a\u521b\u5efa\u5bf9\u5e94\u7684\u8d44\u6e90\uff0c\u5305\u62ec\uff1a \u4e00\u4e2a\u5b89\u5168\u7ec4\uff0c\u540d\u5b57\u9ed8\u8ba4\u662f oos \u4e00\u4e2aopenEuler\u955c\u50cf\uff0c\u540d\u79f0\u683c\u5f0f\u662fopenEuler-%(release)s-%(arch)s\uff0c\u4f8b\u5982 openEuler-24.03-SP2-arm64 \u4e00\u4e2aVPC\uff0c\u540d\u79f0\u662f oos_vpc \u8be5VPC\u4e0b\u9762\u4e24\u4e2a\u5b50\u7f51\uff0c\u540d\u79f0\u662f oos_subnet1 \u3001 oos_subnet2 [huaweicloud] ak = sk = region = ap-southeast-3 root_volume_size = 100 data_volume_size = 100 security_group_name = oos image_format = openEuler-%%(release)s-%%(arch)s vpc_name = oos_vpc subnet1_name = oos_subnet1 subnet2_name = oos_subnet2 \u914d\u7f6e OpenStack \u73af\u5883\u4fe1\u606f \u6253\u5f00 /usr/local/etc/oos/oos.conf \u6587\u4ef6\uff0c\u6839\u636e\u5f53\u524d\u673a\u5668\u73af\u5883\u548c\u9700\u6c42\u4fee\u6539\u914d\u7f6e\u3002\u5185\u5bb9\u5982\u4e0b\uff1a [environment] mysql_root_password = root mysql_project_password = root rabbitmq_password = root project_identity_password = root enabled_service = keystone,neutron,cinder,placement,nova,glance,horizon,aodh,ceilometer,cyborg,gnocchi,kolla,heat,swift,trove,tempest neutron_provider_interface_name = br-ex default_ext_subnet_range = 10.100.100.0/24 default_ext_subnet_gateway = 10.100.100.1 neutron_dataplane_interface_name = eth1 cinder_block_device = vdb swift_storage_devices = vdc swift_hash_path_suffix = ash swift_hash_path_prefix = has glance_api_workers = 2 cinder_api_workers = 2 nova_api_workers = 2 nova_metadata_api_workers = 2 nova_conductor_workers = 2 nova_scheduler_workers = 2 neutron_api_workers = 2 horizon_allowed_host = * kolla_openeuler_plugin = false \u5173\u952e\u914d\u7f6e \u914d\u7f6e\u9879 \u89e3\u91ca enabled_service \u5b89\u88c5\u670d\u52a1\u5217\u8868\uff0c\u6839\u636e\u7528\u6237\u9700\u6c42\u81ea\u884c\u5220\u51cf neutron_provider_interface_name neutron L3\u7f51\u6865\u540d\u79f0 default_ext_subnet_range neutron\u79c1\u7f51IP\u6bb5 default_ext_subnet_gateway neutron\u79c1\u7f51gateway neutron_dataplane_interface_name neutron\u4f7f\u7528\u7684\u7f51\u5361\uff0c\u63a8\u8350\u4f7f\u7528\u4e00\u5f20\u65b0\u7684\u7f51\u5361\uff0c\u4ee5\u514d\u548c\u73b0\u6709\u7f51\u5361\u51b2\u7a81\uff0c\u9632\u6b62all in one\u4e3b\u673a\u65ad\u8fde\u7684\u60c5\u51b5 cinder_block_device cinder\u4f7f\u7528\u7684\u5377\u8bbe\u5907\u540d swift_storage_devices swift\u4f7f\u7528\u7684\u5377\u8bbe\u5907\u540d kolla_openeuler_plugin \u662f\u5426\u542f\u7528kolla plugin\u3002\u8bbe\u7f6e\u4e3aTrue\uff0ckolla\u5c06\u652f\u6301\u90e8\u7f72openEuler\u5bb9\u5668(\u53ea\u5728openEuler LTS\u4e0a\u652f\u6301) \u534e\u4e3a\u4e91\u4e0a\u9762\u521b\u5efa\u4e00\u53f0|openEuler 24.03 LTS SP2\u7684x86_64\u865a\u62df\u673a\uff0c\u7528\u4e8e\u90e8\u7f72 all in one \u7684 OpenStack # sshpass\u5728`oos env create`\u8fc7\u7a0b\u4e2d\u88ab\u4f7f\u7528\uff0c\u7528\u4e8e\u914d\u7f6e\u5bf9\u76ee\u6807\u865a\u62df\u673a\u7684\u514d\u5bc6\u8bbf\u95ee dnf install sshpass oos env create -r 24.03-lts-SP2 -f small -a x86 -n test-oos all_in_one \u5177\u4f53\u7684\u53c2\u6570\u53ef\u4ee5\u4f7f\u7528 oos env create --help \u547d\u4ee4\u67e5\u770b \u90e8\u7f72OpenStack all in one 
\u73af\u5883 oos env setup test-oos -r antelope \u5177\u4f53\u7684\u53c2\u6570\u53ef\u4ee5\u4f7f\u7528 oos env setup --help \u547d\u4ee4\u67e5\u770b \u521d\u59cb\u5316tempest\u73af\u5883 \u5982\u679c\u7528\u6237\u60f3\u4f7f\u7528\u8be5\u73af\u5883\u8fd0\u884ctempest\u6d4b\u8bd5\u7684\u8bdd\uff0c\u53ef\u4ee5\u6267\u884c\u547d\u4ee4 oos env init \uff0c\u4f1a\u81ea\u52a8\u628atempest\u9700\u8981\u7684OpenStack\u8d44\u6e90\u81ea\u52a8\u521b\u5efa\u597d oos env init test-oos \u6267\u884ctempest\u6d4b\u8bd5 \u7528\u6237\u53ef\u4ee5\u4f7f\u7528oos\u81ea\u52a8\u6267\u884c\uff1a oos env test test-oos \u4e5f\u53ef\u4ee5\u624b\u52a8\u767b\u5f55\u76ee\u6807\u8282\u70b9\uff0c\u8fdb\u5165\u6839\u76ee\u5f55\u4e0b\u7684 mytest \u76ee\u5f55\uff0c\u624b\u52a8\u6267\u884c tempest run \u5982\u679c\u662f\u4ee5\u4e3b\u673a\u7eb3\u7ba1\u7684\u65b9\u5f0f\u90e8\u7f72 OpenStack \u73af\u5883\uff0c\u603b\u4f53\u903b\u8f91\u4e0e\u4e0a\u6587\u5bf9\u63a5\u534e\u4e3a\u4e91\u65f6\u4e00\u81f4\uff0c1\u30013\u30015\u30016\u6b65\u64cd\u4f5c\u4e0d\u53d8\uff0c\u8df3\u8fc7\u7b2c2\u6b65\u5bf9\u534e\u4e3a\u4e91provider\u4fe1\u606f\u7684\u914d\u7f6e\uff0c\u5728\u7b2c4\u6b65\u6539\u4e3a\u7eb3\u7ba1\u4e3b\u673a\u64cd\u4f5c\u3002 \u88ab\u7eb3\u7ba1\u7684\u865a\u673a\u9700\u8981\u4fdd\u8bc1\uff1a \u81f3\u5c11\u6709\u4e00\u5f20\u7ed9oos\u4f7f\u7528\u7684\u7f51\u5361\uff0c\u540d\u79f0\u4e0e\u914d\u7f6e\u4fdd\u6301\u4e00\u81f4\uff0c\u76f8\u5173\u914d\u7f6e neutron_dataplane_interface_name \u81f3\u5c11\u6709\u4e00\u5757\u7ed9oos\u4f7f\u7528\u7684\u786c\u76d8\uff0c\u540d\u79f0\u4e0e\u914d\u7f6e\u4fdd\u6301\u4e00\u81f4\uff0c\u76f8\u5173\u914d\u7f6e cinder_block_device \u5982\u679c\u8981\u90e8\u7f72swift\u670d\u52a1\uff0c\u5219\u9700\u8981\u65b0\u589e\u4e00\u5757\u786c\u76d8\uff0c\u540d\u79f0\u4e0e\u914d\u7f6e\u4fdd\u6301\u4e00\u81f4\uff0c\u76f8\u5173\u914d\u7f6e swift_storage_devices # sshpass\u5728`oos env create`\u8fc7\u7a0b\u4e2d\u88ab\u4f7f\u7528\uff0c\u7528\u4e8e\u914d\u7f6e\u5bf9\u76ee\u6807\u4e3b\u673a\u7684\u514d\u5bc6\u8bbf\u95ee dnf install sshpass oos env manage -r 24.03-lts-SP2 -i TARGET_MACHINE_IP -p TARGET_MACHINE_PASSWD -n test-oos \u66ff\u6362 TARGET_MACHINE_IP \u4e3a\u76ee\u6807\u673aip\u3001 TARGET_MACHINE_PASSWD \u4e3a\u76ee\u6807\u673a\u5bc6\u7801\u3002\u5177\u4f53\u7684\u53c2\u6570\u53ef\u4ee5\u4f7f\u7528 oos env manage --help \u547d\u4ee4\u67e5\u770b\u3002","title":"openEuler-24.03-LTS-SP2_Antelope"},{"location":"install/openEuler-24.03-LTS-SP2/OpenStack-antelope/#openstack-antelope","text":"OpenStack Antelope \u90e8\u7f72\u6307\u5357 \u57fa\u4e8eRPM\u90e8\u7f72 \u73af\u5883\u51c6\u5907 \u65f6\u949f\u540c\u6b65 \u5b89\u88c5\u6570\u636e\u5e93 \u5b89\u88c5\u6d88\u606f\u961f\u5217 \u5b89\u88c5\u7f13\u5b58\u670d\u52a1 \u90e8\u7f72\u670d\u52a1 Keystone Glance Placement Nova Neutron Cinder Horizon Ironic Trove Swift Cyborg Aodh Gnocchi Ceilometer Heat Tempest \u57fa\u4e8eOpenStack SIG\u5f00\u53d1\u5de5\u5177oos\u90e8\u7f72 \u672c\u6587\u6863\u662f openEuler OpenStack SIG \u7f16\u5199\u7684\u57fa\u4e8e |openEuler 24.03 LTS SP2 \u7684 OpenStack \u90e8\u7f72\u6307\u5357\uff0c\u5185\u5bb9\u7531 SIG \u8d21\u732e\u8005\u63d0\u4f9b\u3002\u5728\u9605\u8bfb\u8fc7\u7a0b\u4e2d\uff0c\u5982\u679c\u60a8\u6709\u4efb\u4f55\u7591\u95ee\u6216\u8005\u53d1\u73b0\u4efb\u4f55\u95ee\u9898\uff0c\u8bf7 \u8054\u7cfb SIG\u7ef4\u62a4\u4eba\u5458\uff0c\u6216\u8005\u76f4\u63a5 \u63d0\u4ea4issue \u7ea6\u5b9a \u672c\u7ae0\u8282\u63cf\u8ff0\u6587\u6863\u4e2d\u7684\u4e00\u4e9b\u901a\u7528\u7ea6\u5b9a\u3002 \u540d\u79f0 \u5b9a\u4e49 RABBIT_PASS 
rabbitmq\u7684\u5bc6\u7801\uff0c\u7531\u7528\u6237\u8bbe\u7f6e\uff0c\u5728OpenStack\u5404\u4e2a\u670d\u52a1\u914d\u7f6e\u4e2d\u4f7f\u7528 CINDER_PASS cinder\u670d\u52a1keystone\u7528\u6237\u7684\u5bc6\u7801\uff0c\u5728cinder\u914d\u7f6e\u4e2d\u4f7f\u7528 CINDER_DBPASS cinder\u670d\u52a1\u6570\u636e\u5e93\u5bc6\u7801\uff0c\u5728cinder\u914d\u7f6e\u4e2d\u4f7f\u7528 KEYSTONE_DBPASS keystone\u670d\u52a1\u6570\u636e\u5e93\u5bc6\u7801\uff0c\u5728keystone\u914d\u7f6e\u4e2d\u4f7f\u7528 GLANCE_PASS glance\u670d\u52a1keystone\u7528\u6237\u7684\u5bc6\u7801\uff0c\u5728glance\u914d\u7f6e\u4e2d\u4f7f\u7528 GLANCE_DBPASS glance\u670d\u52a1\u6570\u636e\u5e93\u5bc6\u7801\uff0c\u5728glance\u914d\u7f6e\u4e2d\u4f7f\u7528 HEAT_PASS \u5728keystone\u6ce8\u518c\u7684heat\u7528\u6237\u5bc6\u7801\uff0c\u5728heat\u914d\u7f6e\u4e2d\u4f7f\u7528 HEAT_DBPASS heat\u670d\u52a1\u6570\u636e\u5e93\u5bc6\u7801\uff0c\u5728heat\u914d\u7f6e\u4e2d\u4f7f\u7528 CYBORG_PASS \u5728keystone\u6ce8\u518c\u7684cyborg\u7528\u6237\u5bc6\u7801\uff0c\u5728cyborg\u914d\u7f6e\u4e2d\u4f7f\u7528 CYBORG_DBPASS cyborg\u670d\u52a1\u6570\u636e\u5e93\u5bc6\u7801\uff0c\u5728cyborg\u914d\u7f6e\u4e2d\u4f7f\u7528 NEUTRON_PASS \u5728keystone\u6ce8\u518c\u7684neutron\u7528\u6237\u5bc6\u7801\uff0c\u5728neutron\u914d\u7f6e\u4e2d\u4f7f\u7528 NEUTRON_DBPASS neutron\u670d\u52a1\u6570\u636e\u5e93\u5bc6\u7801\uff0c\u5728neutron\u914d\u7f6e\u4e2d\u4f7f\u7528 PROVIDER_INTERFACE_NAME \u7269\u7406\u7f51\u7edc\u63a5\u53e3\u7684\u540d\u79f0\uff0c\u5728neutron\u914d\u7f6e\u4e2d\u4f7f\u7528 OVERLAY_INTERFACE_IP_ADDRESS Controller\u63a7\u5236\u8282\u70b9\u7684\u7ba1\u7406ip\u5730\u5740\uff0c\u5728neutron\u914d\u7f6e\u4e2d\u4f7f\u7528 METADATA_SECRET metadata proxy\u7684secret\u5bc6\u7801\uff0c\u5728nova\u548cneutron\u914d\u7f6e\u4e2d\u4f7f\u7528 PLACEMENT_DBPASS placement\u670d\u52a1\u6570\u636e\u5e93\u5bc6\u7801\uff0c\u5728placement\u914d\u7f6e\u4e2d\u4f7f\u7528 PLACEMENT_PASS \u5728keystone\u6ce8\u518c\u7684placement\u7528\u6237\u5bc6\u7801\uff0c\u5728placement\u914d\u7f6e\u4e2d\u4f7f\u7528 NOVA_DBPASS nova\u670d\u52a1\u6570\u636e\u5e93\u5bc6\u7801\uff0c\u5728nova\u914d\u7f6e\u4e2d\u4f7f\u7528 NOVA_PASS \u5728keystone\u6ce8\u518c\u7684nova\u7528\u6237\u5bc6\u7801\uff0c\u5728nova,cyborg,neutron\u7b49\u914d\u7f6e\u4e2d\u4f7f\u7528 IRONIC_DBPASS ironic\u670d\u52a1\u6570\u636e\u5e93\u5bc6\u7801\uff0c\u5728ironic\u914d\u7f6e\u4e2d\u4f7f\u7528 IRONIC_PASS \u5728keystone\u6ce8\u518c\u7684ironic\u7528\u6237\u5bc6\u7801\uff0c\u5728ironic\u914d\u7f6e\u4e2d\u4f7f\u7528 IRONIC_INSPECTOR_DBPASS ironic-inspector\u670d\u52a1\u6570\u636e\u5e93\u5bc6\u7801\uff0c\u5728ironic-inspector\u914d\u7f6e\u4e2d\u4f7f\u7528 IRONIC_INSPECTOR_PASS \u5728keystone\u6ce8\u518c\u7684ironic-inspector\u7528\u6237\u5bc6\u7801\uff0c\u5728ironic-inspector\u914d\u7f6e\u4e2d\u4f7f\u7528 OpenStack SIG \u63d0\u4f9b\u4e86\u591a\u79cd\u57fa\u4e8e openEuler \u90e8\u7f72 OpenStack \u7684\u65b9\u6cd5\uff0c\u4ee5\u6ee1\u8db3\u4e0d\u540c\u7684\u7528\u6237\u573a\u666f\uff0c\u8bf7\u6309\u9700\u9009\u62e9\u3002","title":"OpenStack Antelope 
\u90e8\u7f72\u6307\u5357"},{"location":"install/openEuler-24.03-LTS-SP2/OpenStack-antelope/#rpm","text":"","title":"\u57fa\u4e8eRPM\u90e8\u7f72"},{"location":"install/openEuler-24.03-LTS-SP2/OpenStack-antelope/#_1","text":"\u672c\u6587\u6863\u57fa\u4e8eOpenStack\u7ecf\u5178\u7684\u4e09\u8282\u70b9\u73af\u5883\u8fdb\u884c\u90e8\u7f72\uff0c\u4e09\u4e2a\u8282\u70b9\u5206\u522b\u662f\u63a7\u5236\u8282\u70b9(Controller)\u3001\u8ba1\u7b97\u8282\u70b9(Compute)\u3001\u5b58\u50a8\u8282\u70b9(Storage)\uff0c\u5176\u4e2d\u5b58\u50a8\u8282\u70b9\u4e00\u822c\u53ea\u90e8\u7f72\u5b58\u50a8\u670d\u52a1\uff0c\u5728\u8d44\u6e90\u6709\u9650\u7684\u60c5\u51b5\u4e0b\uff0c\u53ef\u4ee5\u4e0d\u5355\u72ec\u90e8\u7f72\u8be5\u8282\u70b9\uff0c\u628a\u5b58\u50a8\u8282\u70b9\u4e0a\u7684\u670d\u52a1\u90e8\u7f72\u5230\u8ba1\u7b97\u8282\u70b9\u5373\u53ef\u3002 \u9996\u5148\u51c6\u5907\u4e09\u4e2a|openEuler 24.03 LTS SP2\u73af\u5883\uff0c\u6839\u636e\u60a8\u7684\u73af\u5883\uff0c\u4e0b\u8f7d\u5bf9\u5e94\u7684\u955c\u50cf\u5e76\u5b89\u88c5\u5373\u53ef\uff1a ISO\u955c\u50cf \u3001 qcow2\u955c\u50cf \u3002 \u4e0b\u9762\u7684\u5b89\u88c5\u6309\u7167\u5982\u4e0b\u62d3\u6251\u8fdb\u884c\uff1a controller\uff1a192.168.0.2 compute\uff1a 192.168.0.3 storage\uff1a 192.168.0.4 \u5982\u679c\u60a8\u7684\u73af\u5883IP\u4e0d\u540c\uff0c\u8bf7\u6309\u7167\u60a8\u7684\u73af\u5883IP\u4fee\u6539\u76f8\u5e94\u7684\u914d\u7f6e\u6587\u4ef6\u3002 \u672c\u6587\u6863\u7684\u4e09\u8282\u70b9\u670d\u52a1\u62d3\u6251\u5982\u4e0b\u56fe\u6240\u793a(\u53ea\u5305\u542bKeystone\u3001Glance\u3001Nova\u3001Cinder\u3001Neutron\u8fd9\u51e0\u4e2a\u6838\u5fc3\u670d\u52a1\uff0c\u5176\u4ed6\u670d\u52a1\u8bf7\u53c2\u8003\u5177\u4f53\u90e8\u7f72\u7ae0\u8282)\uff1a \u5728\u6b63\u5f0f\u90e8\u7f72\u4e4b\u524d\uff0c\u9700\u8981\u5bf9\u6bcf\u4e2a\u8282\u70b9\u505a\u5982\u4e0b\u914d\u7f6e\u548c\u68c0\u67e5\uff1a \u914d\u7f6e |openEuler 24.03 LTS SP2 \u5b98\u65b9 yum \u6e90\uff0c\u9700\u8981\u542f\u7528 EPOL \u8f6f\u4ef6\u4ed3\u4ee5\u652f\u6301 OpenStack yum update yum install openstack-release-antelope yum clean all && yum makecache \u6ce8\u610f \uff1a\u5982\u679c\u4f60\u7684\u73af\u5883\u7684YUM\u6e90\u6ca1\u6709\u542f\u7528EPOL\uff0c\u9700\u8981\u540c\u65f6\u914d\u7f6eEPOL\uff0c\u786e\u4fddEPOL\u5df2\u914d\u7f6e\uff0c\u5982\u4e0b\u6240\u793a\u3002 vi /etc/yum.repos.d/openEuler.repo [EPOL] name=EPOL baseurl=http://repo.openeuler.org/openEuler-24.03-LTS-SP2/EPOL/main/$basearch/ enabled=1 gpgcheck=1 gpgkey=http://repo.openeuler.org/openEuler-24.03-LTS-SP2/OS/$basearch/RPM-GPG-KEY-openEuler EOF \u4fee\u6539\u4e3b\u673a\u540d\u4ee5\u53ca\u6620\u5c04 \u6bcf\u4e2a\u8282\u70b9\u5206\u522b\u4fee\u6539\u4e3b\u673a\u540d\uff0c\u4ee5controller\u4e3a\u4f8b\uff1a hostnamectl set-hostname controller vi /etc/hostname \u5185\u5bb9\u4fee\u6539\u4e3acontroller \u7136\u540e\u4fee\u6539\u6bcf\u4e2a\u8282\u70b9\u7684 /etc/hosts \u6587\u4ef6\uff0c\u65b0\u589e\u5982\u4e0b\u5185\u5bb9: 192.168.0.2 controller 192.168.0.3 compute 192.168.0.4 storage","title":"\u73af\u5883\u51c6\u5907"},{"location":"install/openEuler-24.03-LTS-SP2/OpenStack-antelope/#_2","text":"\u96c6\u7fa4\u73af\u5883\u65f6\u523b\u8981\u6c42\u6bcf\u4e2a\u8282\u70b9\u7684\u65f6\u95f4\u4e00\u81f4\uff0c\u4e00\u822c\u7531\u65f6\u949f\u540c\u6b65\u8f6f\u4ef6\u4fdd\u8bc1\u3002\u672c\u6587\u4f7f\u7528 chrony \u8f6f\u4ef6\u3002\u6b65\u9aa4\u5982\u4e0b\uff1a Controller\u8282\u70b9 \uff1a \u5b89\u88c5\u670d\u52a1 dnf install chrony \u4fee\u6539 /etc/chrony.conf \u914d\u7f6e\u6587\u4ef6\uff0c\u65b0\u589e\u4e00\u884c # 
\u8868\u793a\u5141\u8bb8\u54ea\u4e9bIP\u4ece\u672c\u8282\u70b9\u540c\u6b65\u65f6\u949f allow 192.168.0.0/24 \u91cd\u542f\u670d\u52a1 systemctl restart chronyd \u5176\u4ed6\u8282\u70b9 \u5b89\u88c5\u670d\u52a1 dnf install chrony \u4fee\u6539 /etc/chrony.conf \u914d\u7f6e\u6587\u4ef6\uff0c\u65b0\u589e\u4e00\u884c # NTP_SERVER\u662fcontroller IP\uff0c\u8868\u793a\u4ece\u8fd9\u4e2a\u673a\u5668\u83b7\u53d6\u65f6\u95f4\uff0c\u8fd9\u91cc\u6211\u4eec\u586b192.168.0.2\uff0c\u6216\u8005\u5728`/etc/hosts`\u91cc\u914d\u7f6e\u597d\u7684controller\u540d\u5b57\u5373\u53ef\u3002 server NTP_SERVER iburst \u540c\u65f6\uff0c\u8981\u628a pool pool.ntp.org iburst \u8fd9\u4e00\u884c\u6ce8\u91ca\u6389\uff0c\u8868\u793a\u4e0d\u4ece\u516c\u7f51\u540c\u6b65\u65f6\u949f\u3002 \u91cd\u542f\u670d\u52a1 systemctl restart chronyd \u914d\u7f6e\u5b8c\u6210\u540e\uff0c\u68c0\u67e5\u4e00\u4e0b\u7ed3\u679c\uff0c\u5728\u5176\u4ed6\u975econtroller\u8282\u70b9\u6267\u884c chronyc sources \uff0c\u8fd4\u56de\u7ed3\u679c\u7c7b\u4f3c\u5982\u4e0b\u5185\u5bb9\uff0c\u8868\u793a\u6210\u529f\u4ececontroller\u540c\u6b65\u65f6\u949f\u3002 MS Name/IP address Stratum Poll Reach LastRx Last sample =============================================================================== ^* 192.168.0.2 4 6 7 0 -1406ns[ +55us] +/- 16ms","title":"\u65f6\u949f\u540c\u6b65"},{"location":"install/openEuler-24.03-LTS-SP2/OpenStack-antelope/#_3","text":"\u6570\u636e\u5e93\u5b89\u88c5\u5728\u63a7\u5236\u8282\u70b9\uff0c\u8fd9\u91cc\u63a8\u8350\u4f7f\u7528mariadb\u3002 \u5b89\u88c5\u8f6f\u4ef6\u5305 dnf install mariadb-config mariadb mariadb-server python3-PyMySQL \u65b0\u589e\u914d\u7f6e\u6587\u4ef6 /etc/my.cnf.d/openstack.cnf \uff0c\u5185\u5bb9\u5982\u4e0b [mysqld] bind-address = 192.168.0.2 default-storage-engine = innodb innodb_file_per_table = on max_connections = 4096 collation-server = utf8_general_ci character-set-server = utf8 \u542f\u52a8\u670d\u52a1\u5668 systemctl start mariadb \u521d\u59cb\u5316\u6570\u636e\u5e93\uff0c\u6839\u636e\u63d0\u793a\u8fdb\u884c\u5373\u53ef mysql_secure_installation \u793a\u4f8b\u5982\u4e0b\uff1a NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MariaDB SERVERS IN PRODUCTION USE! PLEASE READ EACH STEP CAREFULLY! In order to log into MariaDB to secure it, we'll need the current password for the root user. If you've just installed MariaDB, and haven't set the root password yet, you should just press enter here. Enter current password for root (enter for none): #\u8fd9\u91cc\u8f93\u5165\u5bc6\u7801\uff0c\u7531\u4e8e\u6211\u4eec\u662f\u521d\u59cb\u5316DB\uff0c\u76f4\u63a5\u56de\u8f66\u5c31\u884c OK, successfully used password, moving on... Setting the root password or using the unix_socket ensures that nobody can log into the MariaDB root user without the proper authorisation. You already have your root account protected, so you can safely answer 'n'. # \u8fd9\u91cc\u6839\u636e\u63d0\u793a\u8f93\u5165N Switch to unix_socket authentication [Y/n] N Enabled successfully! Reloading privilege tables.. ... Success! You already have your root account protected, so you can safely answer 'n'. # \u8f93\u5165Y\uff0c\u4fee\u6539\u5bc6\u7801 Change the root password? [Y/n] Y New password: Re-enter new password: Password updated successfully! Reloading privilege tables.. ... Success! By default, a MariaDB installation has an anonymous user, allowing anyone to log into MariaDB without having to have a user account created for them. This is intended only for testing, and to make the installation go a bit smoother. 
You should remove them before moving into a production environment. # \u8f93\u5165Y\uff0c\u5220\u9664\u533f\u540d\u7528\u6237 Remove anonymous users? [Y/n] Y ... Success! Normally, root should only be allowed to connect from 'localhost'. This ensures that someone cannot guess at the root password from the network. # \u8f93\u5165Y\uff0c\u5173\u95edroot\u8fdc\u7a0b\u767b\u5f55\u6743\u9650 Disallow root login remotely? [Y/n] Y ... Success! By default, MariaDB comes with a database named 'test' that anyone can access. This is also intended only for testing, and should be removed before moving into a production environment. # \u8f93\u5165Y\uff0c\u5220\u9664test\u6570\u636e\u5e93 Remove test database and access to it? [Y/n] Y - Dropping test database... ... Success! - Removing privileges on test database... ... Success! Reloading the privilege tables will ensure that all changes made so far will take effect immediately. # \u8f93\u5165Y\uff0c\u91cd\u8f7d\u914d\u7f6e Reload privilege tables now? [Y/n] Y ... Success! Cleaning up... All done! If you've completed all of the above steps, your MariaDB installation should now be secure. \u9a8c\u8bc1\uff0c\u6839\u636e\u7b2c\u56db\u6b65\u8bbe\u7f6e\u7684\u5bc6\u7801\uff0c\u68c0\u67e5\u662f\u5426\u80fd\u767b\u5f55mariadb mysql -uroot -p","title":"\u5b89\u88c5\u6570\u636e\u5e93"},{"location":"install/openEuler-24.03-LTS-SP2/OpenStack-antelope/#_4","text":"\u6d88\u606f\u961f\u5217\u5b89\u88c5\u5728\u63a7\u5236\u8282\u70b9\uff0c\u8fd9\u91cc\u63a8\u8350\u4f7f\u7528rabbitmq\u3002 \u5b89\u88c5\u8f6f\u4ef6\u5305 dnf install rabbitmq-server \u542f\u52a8\u670d\u52a1 systemctl start rabbitmq-server \u914d\u7f6eopenstack\u7528\u6237\uff0c RABBIT_PASS \u662fopenstack\u670d\u52a1\u767b\u5f55\u6d88\u606f\u961f\u91cc\u7684\u5bc6\u7801\uff0c\u9700\u8981\u548c\u540e\u9762\u5404\u4e2a\u670d\u52a1\u7684\u914d\u7f6e\u4fdd\u6301\u4e00\u81f4\u3002 rabbitmqctl add_user openstack RABBIT_PASS rabbitmqctl set_permissions openstack \".*\" \".*\" \".*\"","title":"\u5b89\u88c5\u6d88\u606f\u961f\u5217"},{"location":"install/openEuler-24.03-LTS-SP2/OpenStack-antelope/#_5","text":"\u6d88\u606f\u961f\u5217\u5b89\u88c5\u5728\u63a7\u5236\u8282\u70b9\uff0c\u8fd9\u91cc\u63a8\u8350\u4f7f\u7528Memcached\u3002 \u5b89\u88c5\u8f6f\u4ef6\u5305 dnf install memcached python3-memcached \u4fee\u6539\u914d\u7f6e\u6587\u4ef6 /etc/sysconfig/memcached OPTIONS=\"-l 127.0.0.1,::1,controller\" \u542f\u52a8\u670d\u52a1 systemctl start memcached","title":"\u5b89\u88c5\u7f13\u5b58\u670d\u52a1"},{"location":"install/openEuler-24.03-LTS-SP2/OpenStack-antelope/#_6","text":"","title":"\u90e8\u7f72\u670d\u52a1"},{"location":"install/openEuler-24.03-LTS-SP2/OpenStack-antelope/#keystone","text":"Keystone\u662fOpenStack\u63d0\u4f9b\u7684\u9274\u6743\u670d\u52a1\uff0c\u662f\u6574\u4e2aOpenStack\u7684\u5165\u53e3\uff0c\u63d0\u4f9b\u4e86\u79df\u6237\u9694\u79bb\u3001\u7528\u6237\u8ba4\u8bc1\u3001\u670d\u52a1\u53d1\u73b0\u7b49\u529f\u80fd\uff0c\u5fc5\u987b\u5b89\u88c5\u3002 \u521b\u5efa keystone \u6570\u636e\u5e93\u5e76\u6388\u6743 mysql -u root -p MariaDB [(none)]> CREATE DATABASE keystone; MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \\ IDENTIFIED BY 'KEYSTONE_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \\ IDENTIFIED BY 'KEYSTONE_DBPASS'; MariaDB [(none)]> exit \u6ce8\u610f \u66ff\u6362 KEYSTONE_DBPASS \uff0c\u4e3a Keystone \u6570\u636e\u5e93\u8bbe\u7f6e\u5bc6\u7801 \u5b89\u88c5\u8f6f\u4ef6\u5305 dnf install openstack-keystone httpd mod_wsgi 
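Many of the remaining steps substitute placeholder credentials such as KEYSTONE_DBPASS, ADMIN_PASS, or GLANCE_PASS (see the conventions table in this guide). Before editing the configuration files it can be convenient to generate all of them up front. The sketch below is only an illustration and assumes the openssl command is available; the output file name and the exact list of placeholders are examples, not part of the official procedure.

```shell
# Generate one random value per placeholder and keep them in a root-only file.
# Adjust the list to the services you actually plan to deploy.
for name in KEYSTONE_DBPASS ADMIN_PASS GLANCE_DBPASS GLANCE_PASS \
            PLACEMENT_DBPASS PLACEMENT_PASS NOVA_DBPASS NOVA_PASS \
            NEUTRON_DBPASS NEUTRON_PASS CINDER_DBPASS CINDER_PASS RABBIT_PASS; do
    printf '%s=%s\n' "${name}" "$(openssl rand -hex 16)"
done > ~/openstack-passwords.txt
chmod 600 ~/openstack-passwords.txt
```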
\u914d\u7f6ekeystone\u76f8\u5173\u914d\u7f6e vim /etc/keystone/keystone.conf [database] connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone [token] provider = fernet \u89e3\u91ca [database]\u90e8\u5206\uff0c\u914d\u7f6e\u6570\u636e\u5e93\u5165\u53e3 [token]\u90e8\u5206\uff0c\u914d\u7f6etoken provider \u540c\u6b65\u6570\u636e\u5e93 su -s /bin/sh -c \"keystone-manage db_sync\" keystone \u521d\u59cb\u5316Fernet\u5bc6\u94a5\u4ed3\u5e93 keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone keystone-manage credential_setup --keystone-user keystone --keystone-group keystone \u542f\u52a8\u670d\u52a1 keystone-manage bootstrap --bootstrap-password ADMIN_PASS \\ --bootstrap-admin-url http://controller:5000/v3/ \\ --bootstrap-internal-url http://controller:5000/v3/ \\ --bootstrap-public-url http://controller:5000/v3/ \\ --bootstrap-region-id RegionOne \u6ce8\u610f \u66ff\u6362 ADMIN_PASS \uff0c\u4e3a admin \u7528\u6237\u8bbe\u7f6e\u5bc6\u7801 \u914d\u7f6eApache HTTP server \u6253\u5f00httpd.conf\u5e76\u914d\u7f6e #\u9700\u8981\u4fee\u6539\u7684\u914d\u7f6e\u6587\u4ef6\u8def\u5f84 vim /etc/httpd/conf/httpd.conf #\u4fee\u6539\u4ee5\u4e0b\u9879\uff0c\u5982\u679c\u6ca1\u6709\u5219\u65b0\u6dfb\u52a0 ServerName controller \u521b\u5efa\u8f6f\u94fe\u63a5 ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/ \u89e3\u91ca \u914d\u7f6e ServerName \u9879\u5f15\u7528\u63a7\u5236\u8282\u70b9 \u6ce8\u610f \u5982\u679c ServerName \u9879\u4e0d\u5b58\u5728\u5219\u9700\u8981\u521b\u5efa \u542f\u52a8Apache HTTP\u670d\u52a1 systemctl enable httpd.service systemctl start httpd.service \u521b\u5efa\u73af\u5883\u53d8\u91cf\u914d\u7f6e cat << EOF >> ~/.admin-openrc export OS_PROJECT_DOMAIN_NAME=Default export OS_USER_DOMAIN_NAME=Default export OS_PROJECT_NAME=admin export OS_USERNAME=admin export OS_PASSWORD=ADMIN_PASS export OS_AUTH_URL=http://controller:5000/v3 export OS_IDENTITY_API_VERSION=3 export OS_IMAGE_API_VERSION=2 EOF \u6ce8\u610f \u66ff\u6362 ADMIN_PASS \u4e3a admin \u7528\u6237\u7684\u5bc6\u7801 \u4f9d\u6b21\u521b\u5efadomain, projects, users, roles \u9700\u8981\u5148\u5b89\u88c5python3-openstackclient dnf install python3-openstackclient \u5bfc\u5165\u73af\u5883\u53d8\u91cf source ~/.admin-openrc \u521b\u5efaproject service \uff0c\u5176\u4e2d domain default \u5728 keystone-manage bootstrap \u65f6\u5df2\u521b\u5efa openstack domain create --description \"An Example Domain\" example openstack project create --domain default --description \"Service Project\" service \u521b\u5efa\uff08non-admin\uff09project myproject \uff0cuser myuser \u548c role myrole \uff0c\u4e3a myproject \u548c myuser \u6dfb\u52a0\u89d2\u8272 myrole openstack project create --domain default --description \"Demo Project\" myproject openstack user create --domain default --password-prompt myuser openstack role create myrole openstack role add --project myproject --user myuser myrole \u9a8c\u8bc1 \u53d6\u6d88\u4e34\u65f6\u73af\u5883\u53d8\u91cfOS_AUTH_URL\u548cOS_PASSWORD\uff1a source ~/.admin-openrc unset OS_AUTH_URL OS_PASSWORD \u4e3aadmin\u7528\u6237\u8bf7\u6c42token\uff1a openstack --os-auth-url http://controller:5000/v3 \\ --os-project-domain-name Default --os-user-domain-name Default \\ --os-project-name admin --os-username admin token issue \u4e3amyuser\u7528\u6237\u8bf7\u6c42token\uff1a openstack --os-auth-url http://controller:5000/v3 \\ --os-project-domain-name Default --os-user-domain-name Default \\ --os-project-name myproject --os-username myuser token 
issue","title":"Keystone"},{"location":"install/openEuler-24.03-LTS-SP2/OpenStack-antelope/#glance","text":"Glance\u662fOpenStack\u63d0\u4f9b\u7684\u955c\u50cf\u670d\u52a1\uff0c\u8d1f\u8d23\u865a\u62df\u673a\u3001\u88f8\u673a\u955c\u50cf\u7684\u4e0a\u4f20\u4e0e\u4e0b\u8f7d\uff0c\u5fc5\u987b\u5b89\u88c5\u3002 Controller\u8282\u70b9 \uff1a \u521b\u5efa glance \u6570\u636e\u5e93\u5e76\u6388\u6743 mysql -u root -p MariaDB [(none)]> CREATE DATABASE glance; MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \\ IDENTIFIED BY 'GLANCE_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \\ IDENTIFIED BY 'GLANCE_DBPASS'; MariaDB [(none)]> exit \u6ce8\u610f: \u66ff\u6362 GLANCE_DBPASS \uff0c\u4e3a glance \u6570\u636e\u5e93\u8bbe\u7f6e\u5bc6\u7801 \u521d\u59cb\u5316 glance \u8d44\u6e90\u5bf9\u8c61 \u5bfc\u5165\u73af\u5883\u53d8\u91cf source ~/.admin-openrc \u521b\u5efa\u7528\u6237\u65f6\uff0c\u547d\u4ee4\u884c\u4f1a\u63d0\u793a\u8f93\u5165\u5bc6\u7801\uff0c\u8bf7\u8f93\u5165\u81ea\u5b9a\u4e49\u7684\u5bc6\u7801\uff0c\u4e0b\u6587\u6d89\u53ca\u5230 GLANCE_PASS \u7684\u5730\u65b9\u66ff\u6362\u6210\u8be5\u5bc6\u7801\u5373\u53ef\u3002 openstack user create --domain default --password-prompt glance User Password: Repeat User Password: \u6dfb\u52a0glance\u7528\u6237\u5230service project\u5e76\u6307\u5b9aadmin\u89d2\u8272\uff1a openstack role add --project service --user glance admin \u521b\u5efaglance\u670d\u52a1\u5b9e\u4f53\uff1a openstack service create --name glance --description \"OpenStack Image\" image \u521b\u5efaglance API\u670d\u52a1\uff1a openstack endpoint create --region RegionOne image public http://controller:9292 openstack endpoint create --region RegionOne image internal http://controller:9292 openstack endpoint create --region RegionOne image admin http://controller:9292 \u5b89\u88c5\u8f6f\u4ef6\u5305 dnf install openstack-glance \u4fee\u6539 glance \u914d\u7f6e\u6587\u4ef6 vim /etc/glance/glance-api.conf [database] connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance [keystone_authtoken] www_authenticate_uri = http://controller:5000 auth_url = http://controller:5000 memcached_servers = controller:11211 auth_type = password project_domain_name = Default user_domain_name = Default project_name = service username = glance password = GLANCE_PASS [paste_deploy] flavor = keystone [glance_store] stores = file,http default_store = file filesystem_store_datadir = /var/lib/glance/images/ \u89e3\u91ca: [database]\u90e8\u5206\uff0c\u914d\u7f6e\u6570\u636e\u5e93\u5165\u53e3 [keystone_authtoken] [paste_deploy]\u90e8\u5206\uff0c\u914d\u7f6e\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u5165\u53e3 [glance_store]\u90e8\u5206\uff0c\u914d\u7f6e\u672c\u5730\u6587\u4ef6\u7cfb\u7edf\u5b58\u50a8\u548c\u955c\u50cf\u6587\u4ef6\u7684\u4f4d\u7f6e \u540c\u6b65\u6570\u636e\u5e93 su -s /bin/sh -c \"glance-manage db_sync\" glance \u542f\u52a8\u670d\u52a1\uff1a systemctl enable openstack-glance-api.service systemctl start openstack-glance-api.service \u9a8c\u8bc1 \u5bfc\u5165\u73af\u5883\u53d8\u91cf sorce ~/.admin-openrcu \u4e0b\u8f7d\u955c\u50cf x86\u955c\u50cf\u4e0b\u8f7d\uff1a wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img arm\u955c\u50cf\u4e0b\u8f7d\uff1a wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-aarch64-disk.img \u6ce8\u610f 
\u5982\u679c\u60a8\u4f7f\u7528\u7684\u73af\u5883\u662f\u9cb2\u9e4f\u67b6\u6784\uff0c\u8bf7\u4e0b\u8f7daarch64\u7248\u672c\u7684\u955c\u50cf\uff1b\u5df2\u5bf9\u955c\u50cfcirros-0.5.2-aarch64-disk.img\u8fdb\u884c\u6d4b\u8bd5\u3002 \u5411Image\u670d\u52a1\u4e0a\u4f20\u955c\u50cf\uff1a openstack image create --disk-format qcow2 --container-format bare \\ --file cirros-0.4.0-x86_64-disk.img --public cirros \u786e\u8ba4\u955c\u50cf\u4e0a\u4f20\u5e76\u9a8c\u8bc1\u5c5e\u6027\uff1a openstack image list","title":"Glance"},{"location":"install/openEuler-24.03-LTS-SP2/OpenStack-antelope/#placement","text":"Placement\u662fOpenStack\u63d0\u4f9b\u7684\u8d44\u6e90\u8c03\u5ea6\u7ec4\u4ef6\uff0c\u4e00\u822c\u4e0d\u9762\u5411\u7528\u6237\uff0c\u7531Nova\u7b49\u7ec4\u4ef6\u8c03\u7528\uff0c\u5b89\u88c5\u5728\u63a7\u5236\u8282\u70b9\u3002 \u5b89\u88c5\u3001\u914d\u7f6ePlacement\u670d\u52a1\u524d\uff0c\u9700\u8981\u5148\u521b\u5efa\u76f8\u5e94\u7684\u6570\u636e\u5e93\u3001\u670d\u52a1\u51ed\u8bc1\u548cAPI endpoints\u3002 \u521b\u5efa\u6570\u636e\u5e93 \u4f7f\u7528root\u7528\u6237\u8bbf\u95ee\u6570\u636e\u5e93\u670d\u52a1\uff1a mysql -u root -p \u521b\u5efaplacement\u6570\u636e\u5e93\uff1a MariaDB [(none)]> CREATE DATABASE placement; \u6388\u6743\u6570\u636e\u5e93\u8bbf\u95ee\uff1a MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' \\ IDENTIFIED BY 'PLACEMENT_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' \\ IDENTIFIED BY 'PLACEMENT_DBPASS'; \u66ff\u6362 PLACEMENT_DBPASS \u4e3aplacement\u6570\u636e\u5e93\u8bbf\u95ee\u5bc6\u7801\u3002 \u9000\u51fa\u6570\u636e\u5e93\u8bbf\u95ee\u5ba2\u6237\u7aef\uff1a exit \u914d\u7f6e\u7528\u6237\u548cEndpoints source admin\u51ed\u8bc1\uff0c\u4ee5\u83b7\u53d6admin\u547d\u4ee4\u884c\u6743\u9650\uff1a source ~/.admin-openrc \u521b\u5efaplacement\u7528\u6237\u5e76\u8bbe\u7f6e\u7528\u6237\u5bc6\u7801\uff1a openstack user create --domain default --password-prompt placement User Password: Repeat User Password: \u6dfb\u52a0placement\u7528\u6237\u5230service project\u5e76\u6307\u5b9aadmin\u89d2\u8272\uff1a openstack role add --project service --user placement admin \u521b\u5efaplacement\u670d\u52a1\u5b9e\u4f53\uff1a openstack service create --name placement \\ --description \"Placement API\" placement \u521b\u5efaPlacement API\u670d\u52a1endpoints\uff1a openstack endpoint create --region RegionOne \\ placement public http://controller:8778 openstack endpoint create --region RegionOne \\ placement internal http://controller:8778 openstack endpoint create --region RegionOne \\ placement admin http://controller:8778 \u5b89\u88c5\u53ca\u914d\u7f6e\u7ec4\u4ef6 \u5b89\u88c5\u8f6f\u4ef6\u5305\uff1a dnf install openstack-placement-api \u7f16\u8f91 /etc/placement/placement.conf \u914d\u7f6e\u6587\u4ef6\uff0c\u5b8c\u6210\u5982\u4e0b\u64cd\u4f5c\uff1a \u5728 [placement_database] \u90e8\u5206\uff0c\u914d\u7f6e\u6570\u636e\u5e93\u5165\u53e3\uff1a [placement_database] connection = mysql+pymysql://placement:PLACEMENT_DBPASS@controller/placement \u66ff\u6362 PLACEMENT_DBPASS \u4e3aplacement\u6570\u636e\u5e93\u7684\u5bc6\u7801\u3002 \u5728 [api] \u548c [keystone_authtoken] \u90e8\u5206\uff0c\u914d\u7f6e\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u5165\u53e3\uff1a [api] auth_strategy = keystone [keystone_authtoken] auth_url = http://controller:5000/v3 memcached_servers = controller:11211 auth_type = password project_domain_name = Default user_domain_name = Default project_name = service username = placement password = PLACEMENT_PASS \u66ff\u6362 
PLACEMENT_PASS \u4e3aplacement\u7528\u6237\u7684\u5bc6\u7801\u3002 \u6570\u636e\u5e93\u540c\u6b65\uff0c\u586b\u5145Placement\u6570\u636e\u5e93\uff1a su -s /bin/sh -c \"placement-manage db sync\" placement \u542f\u52a8\u670d\u52a1 \u91cd\u542fhttpd\u670d\u52a1\uff1a systemctl restart httpd \u9a8c\u8bc1 source admin\u51ed\u8bc1\uff0c\u4ee5\u83b7\u53d6admin\u547d\u4ee4\u884c\u6743\u9650 source ~/.admin-openrc \u6267\u884c\u72b6\u6001\u68c0\u67e5\uff1a placement-status upgrade check +----------------------------------------------------------------------+ | Upgrade Check Results | +----------------------------------------------------------------------+ | Check: Missing Root Provider IDs | | Result: Success | | Details: None | +----------------------------------------------------------------------+ | Check: Incomplete Consumers | | Result: Success | | Details: None | +----------------------------------------------------------------------+ | Check: Policy File JSON to YAML Migration | | Result: Failure | | Details: Your policy file is JSON-formatted which is deprecated. You | | need to switch to YAML-formatted file. Use the | | ``oslopolicy-convert-json-to-yaml`` tool to convert the | | existing JSON-formatted files to YAML in a backwards- | | compatible manner: https://docs.openstack.org/oslo.policy/ | | latest/cli/oslopolicy-convert-json-to-yaml.html. | +----------------------------------------------------------------------+ \u8fd9\u91cc\u53ef\u4ee5\u770b\u5230 Policy File JSON to YAML Migration \u7684\u7ed3\u679c\u4e3aFailure\u3002\u8fd9\u662f\u56e0\u4e3a\u5728Placement\u4e2d\uff0cJSON\u683c\u5f0f\u7684policy\u6587\u4ef6\u4eceWallaby\u7248\u672c\u5f00\u59cb\u5df2\u5904\u4e8e deprecated \u72b6\u6001\u3002\u53ef\u4ee5\u53c2\u8003\u63d0\u793a\uff0c\u4f7f\u7528 oslopolicy-convert-json-to-yaml \u5de5\u5177 \u5c06\u73b0\u6709\u7684JSON\u683c\u5f0fpolicy\u6587\u4ef6\u8f6c\u5316\u4e3aYAML\u683c\u5f0f\u3002 oslopolicy-convert-json-to-yaml --namespace placement \\ --policy-file /etc/placement/policy.json \\ --output-file /etc/placement/policy.yaml mv /etc/placement/policy.json{,.bak} \u6ce8\uff1a\u5f53\u524d\u73af\u5883\u4e2d\u6b64\u95ee\u9898\u53ef\u5ffd\u7565\uff0c\u4e0d\u5f71\u54cd\u8fd0\u884c\u3002 \u9488\u5bf9placement API\u8fd0\u884c\u547d\u4ee4\uff1a \u5b89\u88c5osc-placement\u63d2\u4ef6\uff1a dnf install python3-osc-placement \u5217\u51fa\u53ef\u7528\u7684\u8d44\u6e90\u7c7b\u522b\u53ca\u7279\u6027\uff1a openstack --os-placement-api-version 1.2 resource class list --sort-column name +----------------------------+ | name | +----------------------------+ | DISK_GB | | FPGA | | ... | openstack --os-placement-api-version 1.6 trait list --sort-column name +---------------------------------------+ | name | +---------------------------------------+ | COMPUTE_ACCELERATORS | | COMPUTE_ARCH_AARCH64 | | ... 
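Once the Nova compute nodes have been registered later in this guide, the same osc-placement plugin can also be used to confirm that each hypervisor appears as a resource provider with the expected inventory. A minimal sketch, assuming the python3-osc-placement plugin installed above; `<provider-uuid>` is a placeholder for a UUID taken from the list output in your environment:

```shell
# List all resource providers known to Placement; one entry per compute node is expected
openstack --os-placement-api-version 1.2 resource provider list

# Show the inventory (VCPU, MEMORY_MB, DISK_GB) of a single provider;
# replace <provider-uuid> with a UUID from the previous command
openstack --os-placement-api-version 1.2 resource provider inventory list <provider-uuid>
```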
|","title":"Placement"},{"location":"install/openEuler-24.03-LTS-SP2/OpenStack-antelope/#nova","text":"Nova\u662fOpenStack\u7684\u8ba1\u7b97\u670d\u52a1\uff0c\u8d1f\u8d23\u865a\u62df\u673a\u7684\u521b\u5efa\u3001\u53d1\u653e\u7b49\u529f\u80fd\u3002 Controller\u8282\u70b9 \u5728\u63a7\u5236\u8282\u70b9\u6267\u884c\u4ee5\u4e0b\u64cd\u4f5c\u3002 \u521b\u5efa\u6570\u636e\u5e93 \u4f7f\u7528root\u7528\u6237\u8bbf\u95ee\u6570\u636e\u5e93\u670d\u52a1\uff1a mysql -u root -p \u521b\u5efa nova_api \u3001 nova \u548c nova_cell0 \u6570\u636e\u5e93\uff1a MariaDB [(none)]> CREATE DATABASE nova_api; MariaDB [(none)]> CREATE DATABASE nova; MariaDB [(none)]> CREATE DATABASE nova_cell0; \u6388\u6743\u6570\u636e\u5e93\u8bbf\u95ee\uff1a MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \\ IDENTIFIED BY 'NOVA_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \\ IDENTIFIED BY 'NOVA_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \\ IDENTIFIED BY 'NOVA_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \\ IDENTIFIED BY 'NOVA_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \\ IDENTIFIED BY 'NOVA_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \\ IDENTIFIED BY 'NOVA_DBPASS'; \u66ff\u6362 NOVA_DBPASS \u4e3anova\u76f8\u5173\u6570\u636e\u5e93\u8bbf\u95ee\u5bc6\u7801\u3002 \u9000\u51fa\u6570\u636e\u5e93\u8bbf\u95ee\u5ba2\u6237\u7aef\uff1a exit \u914d\u7f6e\u7528\u6237\u548cEndpoints source admin\u51ed\u8bc1\uff0c\u4ee5\u83b7\u53d6admin\u547d\u4ee4\u884c\u6743\u9650\uff1a source ~/.admin-openrc \u521b\u5efanova\u7528\u6237\u5e76\u8bbe\u7f6e\u7528\u6237\u5bc6\u7801\uff1a openstack user create --domain default --password-prompt nova User Password: Repeat User Password: \u6dfb\u52a0nova\u7528\u6237\u5230service project\u5e76\u6307\u5b9aadmin\u89d2\u8272\uff1a openstack role add --project service --user nova admin \u521b\u5efanova\u670d\u52a1\u5b9e\u4f53\uff1a openstack service create --name nova \\ --description \"OpenStack Compute\" compute \u521b\u5efaNova API\u670d\u52a1endpoints\uff1a openstack endpoint create --region RegionOne \\ compute public http://controller:8774/v2.1 openstack endpoint create --region RegionOne \\ compute internal http://controller:8774/v2.1 openstack endpoint create --region RegionOne \\ compute admin http://controller:8774/v2.1 \u5b89\u88c5\u53ca\u914d\u7f6e\u7ec4\u4ef6 \u5b89\u88c5\u8f6f\u4ef6\u5305\uff1a dnf install openstack-nova-api openstack-nova-conductor \\ openstack-nova-novncproxy openstack-nova-scheduler \u7f16\u8f91 /etc/nova/nova.conf \u914d\u7f6e\u6587\u4ef6\uff0c\u5b8c\u6210\u5982\u4e0b\u64cd\u4f5c\uff1a \u5728 [default] \u90e8\u5206\uff0c\u542f\u7528\u8ba1\u7b97\u548c\u5143\u6570\u636e\u7684API\uff0c\u914d\u7f6eRabbitMQ\u6d88\u606f\u961f\u5217\u5165\u53e3\uff0c\u4f7f\u7528controller\u8282\u70b9\u7ba1\u7406IP\u914d\u7f6emy_ip\uff0c\u663e\u5f0f\u5b9a\u4e49log_dir\uff1a [DEFAULT] enabled_apis = osapi_compute,metadata transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/ my_ip = 192.168.0.2 log_dir = /var/log/nova state_path = /var/lib/nova \u66ff\u6362 RABBIT_PASS \u4e3aRabbitMQ\u4e2dopenstack\u8d26\u6237\u7684\u5bc6\u7801\u3002 \u5728 [api_database] \u548c [database] \u90e8\u5206\uff0c\u914d\u7f6e\u6570\u636e\u5e93\u5165\u53e3\uff1a [api_database] connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api [database] connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova \u66ff\u6362 
NOVA_DBPASS \u4e3anova\u76f8\u5173\u6570\u636e\u5e93\u7684\u5bc6\u7801\u3002 \u5728 [api] \u548c [keystone_authtoken] \u90e8\u5206\uff0c\u914d\u7f6e\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u5165\u53e3\uff1a [api] auth_strategy = keystone [keystone_authtoken] auth_url = http://controller:5000/v3 memcached_servers = controller:11211 auth_type = password project_domain_name = Default user_domain_name = Default project_name = service username = nova password = NOVA_PASS \u66ff\u6362 NOVA_PASS \u4e3anova\u7528\u6237\u7684\u5bc6\u7801\u3002 \u5728 [vnc] \u90e8\u5206\uff0c\u542f\u7528\u5e76\u914d\u7f6e\u8fdc\u7a0b\u63a7\u5236\u53f0\u5165\u53e3\uff1a [vnc] enabled = true server_listen = $my_ip server_proxyclient_address = $my_ip \u5728 [glance] \u90e8\u5206\uff0c\u914d\u7f6e\u955c\u50cf\u670d\u52a1API\u7684\u5730\u5740\uff1a [glance] api_servers = http://controller:9292 \u5728 [oslo_concurrency] \u90e8\u5206\uff0c\u914d\u7f6elock path\uff1a [oslo_concurrency] lock_path = /var/lib/nova/tmp [placement]\u90e8\u5206\uff0c\u914d\u7f6eplacement\u670d\u52a1\u7684\u5165\u53e3\uff1a [placement] region_name = RegionOne project_domain_name = Default project_name = service auth_type = password user_domain_name = Default auth_url = http://controller:5000/v3 username = placement password = PLACEMENT_PASS \u66ff\u6362 PLACEMENT_PASS \u4e3aplacement\u7528\u6237\u7684\u5bc6\u7801\u3002 \u6570\u636e\u5e93\u540c\u6b65\uff1a \u540c\u6b65nova-api\u6570\u636e\u5e93\uff1a su -s /bin/sh -c \"nova-manage api_db sync\" nova \u6ce8\u518ccell0\u6570\u636e\u5e93\uff1a su -s /bin/sh -c \"nova-manage cell_v2 map_cell0\" nova \u521b\u5efacell1 cell\uff1a su -s /bin/sh -c \"nova-manage cell_v2 create_cell --name=cell1 --verbose\" nova \u540c\u6b65nova\u6570\u636e\u5e93\uff1a su -s /bin/sh -c \"nova-manage db sync\" nova \u9a8c\u8bc1cell0\u548ccell1\u6ce8\u518c\u6b63\u786e\uff1a su -s /bin/sh -c \"nova-manage cell_v2 list_cells\" nova \u542f\u52a8\u670d\u52a1 systemctl enable \\ openstack-nova-api.service \\ openstack-nova-scheduler.service \\ openstack-nova-conductor.service \\ openstack-nova-novncproxy.service systemctl start \\ openstack-nova-api.service \\ openstack-nova-scheduler.service \\ openstack-nova-conductor.service \\ openstack-nova-novncproxy.service Compute\u8282\u70b9 \u5728\u8ba1\u7b97\u8282\u70b9\u6267\u884c\u4ee5\u4e0b\u64cd\u4f5c\u3002 \u5b89\u88c5\u8f6f\u4ef6\u5305 dnf install openstack-nova-compute \u7f16\u8f91 /etc/nova/nova.conf \u914d\u7f6e\u6587\u4ef6 \u5728 [default] \u90e8\u5206\uff0c\u542f\u7528\u8ba1\u7b97\u548c\u5143\u6570\u636e\u7684API\uff0c\u914d\u7f6eRabbitMQ\u6d88\u606f\u961f\u5217\u5165\u53e3\uff0c\u4f7f\u7528Compute\u8282\u70b9\u7ba1\u7406IP\u914d\u7f6emy_ip\uff0c\u663e\u5f0f\u5b9a\u4e49compute_driver\u3001instances_path\u3001log_dir\uff1a [DEFAULT] enabled_apis = osapi_compute,metadata transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/ my_ip = 192.168.0.3 compute_driver = libvirt.LibvirtDriver instances_path = /var/lib/nova/instances log_dir = /var/log/nova \u66ff\u6362 RABBIT_PASS \u4e3aRabbitMQ\u4e2dopenstack\u8d26\u6237\u7684\u5bc6\u7801\u3002 \u5728 [api] \u548c [keystone_authtoken] \u90e8\u5206\uff0c\u914d\u7f6e\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u5165\u53e3\uff1a [api] auth_strategy = keystone [keystone_authtoken] auth_url = http://controller:5000/v3 memcached_servers = controller:11211 auth_type = password project_domain_name = Default user_domain_name = Default project_name = service username = nova password = NOVA_PASS \u66ff\u6362 NOVA_PASS 
\u4e3anova\u7528\u6237\u7684\u5bc6\u7801\u3002 \u5728 [vnc] \u90e8\u5206\uff0c\u542f\u7528\u5e76\u914d\u7f6e\u8fdc\u7a0b\u63a7\u5236\u53f0\u5165\u53e3\uff1a [vnc] enabled = true server_listen = $my_ip server_proxyclient_address = $my_ip novncproxy_base_url = http://controller:6080/vnc_auto.html \u5728 [glance] \u90e8\u5206\uff0c\u914d\u7f6e\u955c\u50cf\u670d\u52a1API\u7684\u5730\u5740\uff1a [glance] api_servers = http://controller:9292 \u5728 [oslo_concurrency] \u90e8\u5206\uff0c\u914d\u7f6elock path\uff1a [oslo_concurrency] lock_path = /var/lib/nova/tmp [placement]\u90e8\u5206\uff0c\u914d\u7f6eplacement\u670d\u52a1\u7684\u5165\u53e3\uff1a [placement] region_name = RegionOne project_domain_name = Default project_name = service auth_type = password user_domain_name = Default auth_url = http://controller:5000/v3 username = placement password = PLACEMENT_PASS \u66ff\u6362 PLACEMENT_PASS \u4e3aplacement\u7528\u6237\u7684\u5bc6\u7801\u3002 \u786e\u8ba4\u8ba1\u7b97\u8282\u70b9\u662f\u5426\u652f\u6301\u865a\u62df\u673a\u786c\u4ef6\u52a0\u901f\uff08x86_64\uff09 \u5904\u7406\u5668\u4e3ax86_64\u67b6\u6784\u65f6\uff0c\u53ef\u901a\u8fc7\u8fd0\u884c\u5982\u4e0b\u547d\u4ee4\u786e\u8ba4\u662f\u5426\u652f\u6301\u786c\u4ef6\u52a0\u901f\uff1a egrep -c '(vmx|svm)' /proc/cpuinfo \u5982\u679c\u8fd4\u56de\u503c\u4e3a0\u5219\u4e0d\u652f\u6301\u786c\u4ef6\u52a0\u901f\uff0c\u9700\u8981\u914d\u7f6elibvirt\u4f7f\u7528QEMU\u800c\u4e0d\u662f\u9ed8\u8ba4\u7684KVM\u3002\u7f16\u8f91 /etc/nova/nova.conf \u7684 [libvirt] \u90e8\u5206\uff1a [libvirt] virt_type = qemu \u5982\u679c\u8fd4\u56de\u503c\u4e3a1\u6216\u66f4\u5927\u7684\u503c\uff0c\u5219\u652f\u6301\u786c\u4ef6\u52a0\u901f\uff0c\u4e0d\u9700\u8981\u8fdb\u884c\u989d\u5916\u7684\u914d\u7f6e\u3002 \u786e\u8ba4\u8ba1\u7b97\u8282\u70b9\u662f\u5426\u652f\u6301\u865a\u62df\u673a\u786c\u4ef6\u52a0\u901f\uff08arm64\uff09 \u5904\u7406\u5668\u4e3aarm64\u67b6\u6784\u65f6\uff0c\u53ef\u901a\u8fc7\u8fd0\u884c\u5982\u4e0b\u547d\u4ee4\u786e\u8ba4\u662f\u5426\u652f\u6301\u786c\u4ef6\u52a0\u901f\uff1a virt-host-validate # \u8be5\u547d\u4ee4\u7531libvirt\u63d0\u4f9b\uff0c\u6b64\u65f6libvirt\u5e94\u5df2\u4f5c\u4e3aopenstack-nova-compute\u4f9d\u8d56\u88ab\u5b89\u88c5\uff0c\u73af\u5883\u4e2d\u5df2\u6709\u6b64\u547d\u4ee4 \u663e\u793aFAIL\u65f6\uff0c\u8868\u793a\u4e0d\u652f\u6301\u786c\u4ef6\u52a0\u901f\uff0c\u9700\u8981\u914d\u7f6elibvirt\u4f7f\u7528QEMU\u800c\u4e0d\u662f\u9ed8\u8ba4\u7684KVM\u3002 QEMU: Checking if device /dev/kvm exists: FAIL (Check that CPU and firmware supports virtualization and kvm module is loaded) \u7f16\u8f91 /etc/nova/nova.conf \u7684 [libvirt] \u90e8\u5206\uff1a [libvirt] virt_type = qemu \u663e\u793aPASS\u65f6\uff0c\u8868\u793a\u652f\u6301\u786c\u4ef6\u52a0\u901f\uff0c\u4e0d\u9700\u8981\u8fdb\u884c\u989d\u5916\u7684\u914d\u7f6e\u3002 QEMU: Checking if device /dev/kvm exists: PASS \u914d\u7f6eqemu\uff08\u4ec5arm64\uff09 \u4ec5\u5f53\u5904\u7406\u5668\u4e3aarm64\u67b6\u6784\u65f6\u9700\u8981\u6267\u884c\u6b64\u64cd\u4f5c\u3002 \u7f16\u8f91 /etc/libvirt/qemu.conf : nvram = [\"/usr/share/AAVMF/AAVMF_CODE.fd: \\ /usr/share/AAVMF/AAVMF_VARS.fd\", \\ \"/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw: \\ /usr/share/edk2/aarch64/vars-template-pflash.raw\"] \u7f16\u8f91 /etc/qemu/firmware/edk2-aarch64.json { \"description\": \"UEFI firmware for ARM64 virtual machines\", \"interface-types\": [ \"uefi\" ], \"mapping\": { \"device\": \"flash\", \"executable\": { \"filename\": \"/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw\", \"format\": \"raw\" }, \"nvram-template\": { 
\"filename\": \"/usr/share/edk2/aarch64/vars-template-pflash.raw\", \"format\": \"raw\" } }, \"targets\": [ { \"architecture\": \"aarch64\", \"machines\": [ \"virt-*\" ] } ], \"features\": [ ], \"tags\": [ ] } \u542f\u52a8\u670d\u52a1 systemctl enable libvirtd.service openstack-nova-compute.service systemctl start libvirtd.service openstack-nova-compute.service Controller\u8282\u70b9 \u5728\u63a7\u5236\u8282\u70b9\u6267\u884c\u4ee5\u4e0b\u64cd\u4f5c\u3002 \u6dfb\u52a0\u8ba1\u7b97\u8282\u70b9\u5230openstack\u96c6\u7fa4 source admin\u51ed\u8bc1\uff0c\u4ee5\u83b7\u53d6admin\u547d\u4ee4\u884c\u6743\u9650\uff1a source ~/.admin-openrc \u786e\u8ba4nova-compute\u670d\u52a1\u5df2\u8bc6\u522b\u5230\u6570\u636e\u5e93\u4e2d\uff1a openstack compute service list --service nova-compute \u53d1\u73b0\u8ba1\u7b97\u8282\u70b9\uff0c\u5c06\u8ba1\u7b97\u8282\u70b9\u6dfb\u52a0\u5230cell\u6570\u636e\u5e93\uff1a su -s /bin/sh -c \"nova-manage cell_v2 discover_hosts --verbose\" nova \u7ed3\u679c\u5982\u4e0b\uff1a Modules with known eventlet monkey patching issues were imported prior to eventlet monkey patching: urllib3. This warning can usually be ignored if the caller is only importing and not executing nova code. Found 2 cell mappings. Skipping cell0 since it does not contain hosts. Getting computes from cell 'cell1': 6dae034e-b2d9-4a6c-b6f0-60ada6a6ddc2 Checking host mapping for compute host 'compute': 6286a86f-09d7-4786-9137-1185654c9e2e Creating host mapping for compute host 'compute': 6286a86f-09d7-4786-9137-1185654c9e2e Found 1 unmapped computes in cell: 6dae034e-b2d9-4a6c-b6f0-60ada6a6ddc2 \u9a8c\u8bc1 \u5217\u51fa\u670d\u52a1\u7ec4\u4ef6\uff0c\u9a8c\u8bc1\u6bcf\u4e2a\u6d41\u7a0b\u90fd\u6210\u529f\u542f\u52a8\u548c\u6ce8\u518c\uff1a openstack compute service list \u5217\u51fa\u8eab\u4efd\u670d\u52a1\u4e2d\u7684API\u7aef\u70b9\uff0c\u9a8c\u8bc1\u4e0e\u8eab\u4efd\u670d\u52a1\u7684\u8fde\u63a5\uff1a openstack catalog list \u5217\u51fa\u955c\u50cf\u670d\u52a1\u4e2d\u7684\u955c\u50cf\uff0c\u9a8c\u8bc1\u4e0e\u955c\u50cf\u670d\u52a1\u7684\u8fde\u63a5\uff1a openstack image list \u68c0\u67e5cells\u662f\u5426\u8fd0\u4f5c\u6210\u529f\uff0c\u4ee5\u53ca\u5176\u4ed6\u5fc5\u8981\u6761\u4ef6\u662f\u5426\u5df2\u5177\u5907\u3002 nova-status upgrade check","title":"Nova"},{"location":"install/openEuler-24.03-LTS-SP2/OpenStack-antelope/#neutron","text":"Neutron\u662fOpenStack\u7684\u7f51\u7edc\u670d\u52a1\uff0c\u63d0\u4f9b\u865a\u62df\u4ea4\u6362\u673a\u3001IP\u8def\u7531\u3001DHCP\u7b49\u529f\u80fd\u3002 Controller\u8282\u70b9 \u521b\u5efa\u6570\u636e\u5e93\u3001\u670d\u52a1\u51ed\u8bc1\u548c API \u670d\u52a1\u7aef\u70b9 \u521b\u5efa\u6570\u636e\u5e93\uff1a mysql -u root -p MariaDB [(none)]> CREATE DATABASE neutron; MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'NEUTRON_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'NEUTRON_DBPASS'; MariaDB [(none)]> exit; \u521b\u5efa\u7528\u6237\u548c\u670d\u52a1\uff0c\u5e76\u8bb0\u4f4f\u521b\u5efaneutron\u7528\u6237\u65f6\u8f93\u5165\u7684\u5bc6\u7801\uff0c\u7528\u4e8e\u914d\u7f6eNEUTRON_PASS\uff1a source ~/.admin-openrc openstack user create --domain default --password-prompt neutron openstack role add --project service --user neutron admin openstack service create --name neutron --description \"OpenStack Networking\" network \u90e8\u7f72 Neutron API \u670d\u52a1\uff1a openstack endpoint create --region RegionOne network public http://controller:9696 openstack endpoint create --region RegionOne 
network internal http://controller:9696 openstack endpoint create --region RegionOne network admin http://controller:9696 \u5b89\u88c5\u8f6f\u4ef6\u5305 dnf install -y openstack-neutron openstack-neutron-linuxbridge ebtables ipset openstack-neutron-ml2 3. \u914d\u7f6eNeutron \u4fee\u6539/etc/neutron/neutron.conf [database] connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron [DEFAULT] core_plugin = ml2 service_plugins = router allow_overlapping_ips = true transport_url = rabbit://openstack:RABBIT_PASS@controller auth_strategy = keystone notify_nova_on_port_status_changes = true notify_nova_on_port_data_changes = true [keystone_authtoken] www_authenticate_uri = http://controller:5000 auth_url = http://controller:5000 memcached_servers = controller:11211 auth_type = password project_domain_name = Default user_domain_name = Default project_name = service username = neutron password = NEUTRON_PASS [nova] auth_url = http://controller:5000 auth_type = password project_domain_name = Default user_domain_name = Default region_name = RegionOne project_name = service username = nova password = NOVA_PASS [oslo_concurrency] lock_path = /var/lib/neutron/tmp [experimental] linuxbridge = true \u914d\u7f6eML2\uff0cML2\u5177\u4f53\u914d\u7f6e\u53ef\u4ee5\u6839\u636e\u7528\u6237\u9700\u6c42\u81ea\u884c\u4fee\u6539\uff0c\u672c\u6587\u4f7f\u7528\u7684\u662fprovider network + linuxbridge** \u4fee\u6539/etc/neutron/plugins/ml2/ml2_conf.ini [ml2] type_drivers = flat,vlan,vxlan tenant_network_types = vxlan mechanism_drivers = linuxbridge,l2population extension_drivers = port_security [ml2_type_flat] flat_networks = provider [ml2_type_vxlan] vni_ranges = 1:1000 [securitygroup] enable_ipset = true \u4fee\u6539/etc/neutron/plugins/ml2/linuxbridge_agent.ini [linux_bridge] physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME [vxlan] enable_vxlan = true local_ip = OVERLAY_INTERFACE_IP_ADDRESS l2_population = true [securitygroup] enable_security_group = true firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver \u914d\u7f6eLayer-3\u4ee3\u7406 \u4fee\u6539/etc/neutron/l3_agent.ini [DEFAULT] interface_driver = linuxbridge \u914d\u7f6eDHCP\u4ee3\u7406 \u4fee\u6539/etc/neutron/dhcp_agent.ini [DEFAULT] interface_driver = linuxbridge dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq enable_isolated_metadata = true \u914d\u7f6emetadata\u4ee3\u7406 \u4fee\u6539/etc/neutron/metadata_agent.ini [DEFAULT] nova_metadata_host = controller metadata_proxy_shared_secret = METADATA_SECRET \u914d\u7f6enova\u670d\u52a1\u4f7f\u7528neutron\uff0c\u4fee\u6539/etc/nova/nova.conf [neutron] auth_url = http://controller:5000 auth_type = password project_domain_name = default user_domain_name = default region_name = RegionOne project_name = service username = neutron password = NEUTRON_PASS service_metadata_proxy = true metadata_proxy_shared_secret = METADATA_SECRET \u521b\u5efa/etc/neutron/plugin.ini\u7684\u7b26\u53f7\u94fe\u63a5 ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini \u540c\u6b65\u6570\u636e\u5e93 su -s /bin/sh -c \"neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head\" neutron \u91cd\u542fnova api\u670d\u52a1 systemctl restart openstack-nova-api \u542f\u52a8\u7f51\u7edc\u670d\u52a1 systemctl enable neutron-server.service neutron-linuxbridge-agent.service \\ neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service systemctl start neutron-server.service 
neutron-linuxbridge-agent.service \\ neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service Compute\u8282\u70b9 \u5b89\u88c5\u8f6f\u4ef6\u5305 dnf install openstack-neutron-linuxbridge ebtables ipset -y \u914d\u7f6eNeutron \u4fee\u6539/etc/neutron/neutron.conf [DEFAULT] transport_url = rabbit://openstack:RABBIT_PASS@controller auth_strategy = keystone [keystone_authtoken] www_authenticate_uri = http://controller:5000 auth_url = http://controller:5000 memcached_servers = controller:11211 auth_type = password project_domain_name = Default user_domain_name = Default project_name = service username = neutron password = NEUTRON_PASS [oslo_concurrency] lock_path = /var/lib/neutron/tmp \u4fee\u6539/etc/neutron/plugins/ml2/linuxbridge_agent.ini [linux_bridge] physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME [vxlan] enable_vxlan = true local_ip = OVERLAY_INTERFACE_IP_ADDRESS l2_population = true [securitygroup] enable_security_group = true firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver \u914d\u7f6enova compute\u670d\u52a1\u4f7f\u7528neutron\uff0c\u4fee\u6539/etc/nova/nova.conf [neutron] auth_url = http://controller:5000 auth_type = password project_domain_name = default user_domain_name = default region_name = RegionOne project_name = service username = neutron password = NEUTRON_PASS \u91cd\u542fnova-compute\u670d\u52a1 systemctl restart openstack-nova-compute.service \u542f\u52a8Neutron linuxbridge agent\u670d\u52a1 systemctl enable neutron-linuxbridge-agent systemctl start neutron-linuxbridge-agent","title":"Neutron"},{"location":"install/openEuler-24.03-LTS-SP2/OpenStack-antelope/#cinder","text":"Cinder\u662fOpenStack\u7684\u5b58\u50a8\u670d\u52a1\uff0c\u63d0\u4f9b\u5757\u8bbe\u5907\u7684\u521b\u5efa\u3001\u53d1\u653e\u3001\u5907\u4efd\u7b49\u529f\u80fd\u3002 Controller\u8282\u70b9 \uff1a \u521d\u59cb\u5316\u6570\u636e\u5e93 CINDER_DBPASS \u662f\u7528\u6237\u81ea\u5b9a\u4e49\u7684cinder\u6570\u636e\u5e93\u5bc6\u7801\u3002 mysql -u root -p MariaDB [(none)]> CREATE DATABASE cinder; MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'CINDER_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'CINDER_DBPASS'; MariaDB [(none)]> exit \u521d\u59cb\u5316Keystone\u8d44\u6e90\u5bf9\u8c61 source ~/.admin-openrc #\u521b\u5efa\u7528\u6237\u65f6\uff0c\u547d\u4ee4\u884c\u4f1a\u63d0\u793a\u8f93\u5165\u5bc6\u7801\uff0c\u8bf7\u8f93\u5165\u81ea\u5b9a\u4e49\u7684\u5bc6\u7801\uff0c\u4e0b\u6587\u6d89\u53ca\u5230`CINDER_PASS`\u7684\u5730\u65b9\u66ff\u6362\u6210\u8be5\u5bc6\u7801\u5373\u53ef\u3002 openstack user create --domain default --password-prompt cinder openstack role add --project service --user cinder admin openstack service create --name cinderv3 --description \"OpenStack Block Storage\" volumev3 openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\\(project_id\\)s openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\\(project_id\\)s openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\\(project_id\\)s 3. 
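Before continuing with the Cinder packages, it is worth checking from the controller that the Neutron agents started in the previous section have registered and report as alive. A minimal sketch; the exact agent hosts and counts depend on your topology, but with the layout used in this guide you should see linuxbridge agents on both nodes plus the DHCP, L3 and metadata agents on the controller:

```shell
source ~/.admin-openrc
# Every agent should show Alive = ":-)" and State = UP
openstack network agent list
```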
\u5b89\u88c5\u8f6f\u4ef6\u5305 dnf install openstack-cinder-api openstack-cinder-scheduler \u4fee\u6539cinder\u914d\u7f6e\u6587\u4ef6 /etc/cinder/cinder.conf [DEFAULT] transport_url = rabbit://openstack:RABBIT_PASS@controller auth_strategy = keystone my_ip = 192.168.0.2 [database] connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder [keystone_authtoken] www_authenticate_uri = http://controller:5000 auth_url = http://controller:5000 memcached_servers = controller:11211 auth_type = password project_domain_name = Default user_domain_name = Default project_name = service username = cinder password = CINDER_PASS [oslo_concurrency] lock_path = /var/lib/cinder/tmp \u6570\u636e\u5e93\u540c\u6b65 su -s /bin/sh -c \"cinder-manage db sync\" cinder \u4fee\u6539nova\u914d\u7f6e /etc/nova/nova.conf [cinder] os_region_name = RegionOne \u542f\u52a8\u670d\u52a1 systemctl restart openstack-nova-api systemctl start openstack-cinder-api openstack-cinder-scheduler Storage\u8282\u70b9 \uff1a Storage\u8282\u70b9\u8981\u63d0\u524d\u51c6\u5907\u81f3\u5c11\u4e00\u5757\u786c\u76d8\uff0c\u4f5c\u4e3acinder\u7684\u5b58\u50a8\u540e\u7aef\uff0c\u4e0b\u6587\u9ed8\u8ba4storage\u8282\u70b9\u5df2\u7ecf\u5b58\u5728\u4e00\u5757\u672a\u4f7f\u7528\u7684\u786c\u76d8\uff0c\u8bbe\u5907\u540d\u79f0\u4e3a /dev/sdb \uff0c\u7528\u6237\u5728\u914d\u7f6e\u8fc7\u7a0b\u4e2d\uff0c\u8bf7\u6309\u7167\u771f\u5b9e\u73af\u5883\u4fe1\u606f\u8fdb\u884c\u540d\u79f0\u66ff\u6362\u3002 Cinder\u652f\u6301\u5f88\u591a\u7c7b\u578b\u7684\u540e\u7aef\u5b58\u50a8\uff0c\u672c\u6307\u5bfc\u4f7f\u7528\u6700\u7b80\u5355\u7684lvm\u4e3a\u53c2\u8003\uff0c\u5982\u679c\u60a8\u60f3\u4f7f\u7528\u5982ceph\u7b49\u5176\u4ed6\u540e\u7aef\uff0c\u8bf7\u81ea\u884c\u914d\u7f6e\u3002 \u5b89\u88c5\u8f6f\u4ef6\u5305 dnf install lvm2 device-mapper-persistent-data scsi-target-utils rpcbind nfs-utils openstack-cinder-volume openstack-cinder-backup \u914d\u7f6elvm\u5377\u7ec4 pvcreate /dev/sdb vgcreate cinder-volumes /dev/sdb \u4fee\u6539cinder\u914d\u7f6e /etc/cinder/cinder.conf [DEFAULT] transport_url = rabbit://openstack:RABBIT_PASS@controller auth_strategy = keystone my_ip = 192.168.0.4 enabled_backends = lvm glance_api_servers = http://controller:9292 [keystone_authtoken] www_authenticate_uri = http://controller:5000 auth_url = http://controller:5000 memcached_servers = controller:11211 auth_type = password project_domain_name = default user_domain_name = default project_name = service username = cinder password = CINDER_PASS [database] connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder [lvm] volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver volume_group = cinder-volumes target_protocol = iscsi target_helper = lioadm [oslo_concurrency] lock_path = /var/lib/cinder/tmp \u914d\u7f6ecinder backup \uff08\u53ef\u9009\uff09 cinder-backup\u662f\u53ef\u9009\u7684\u5907\u4efd\u670d\u52a1\uff0ccinder\u540c\u6837\u652f\u6301\u5f88\u591a\u79cd\u5907\u4efd\u540e\u7aef\uff0c\u672c\u6587\u4f7f\u7528swift\u5b58\u50a8\uff0c\u5982\u679c\u60a8\u60f3\u4f7f\u7528\u5982NFS\u7b49\u540e\u7aef\uff0c\u8bf7\u81ea\u884c\u914d\u7f6e\uff0c\u4f8b\u5982\u53ef\u4ee5\u53c2\u8003 OpenStack\u5b98\u65b9\u6587\u6863 \u5bf9NFS\u7684\u914d\u7f6e\u8bf4\u660e\u3002 \u4fee\u6539 /etc/cinder/cinder.conf \uff0c\u5728 [DEFAULT] \u4e2d\u65b0\u589e [DEFAULT] backup_driver = cinder.backup.drivers.swift.SwiftBackupDriver backup_swift_url = SWIFT_URL \u8fd9\u91cc\u7684 SWIFT_URL 
\u662f\u6307\u73af\u5883\u4e2dswift\u670d\u52a1\u7684URL\uff0c\u5728\u90e8\u7f72\u5b8cswift\u670d\u52a1\u540e\uff0c\u6267\u884c openstack catalog show object-store \u547d\u4ee4\u83b7\u53d6\u3002 \u542f\u52a8\u670d\u52a1 systemctl start openstack-cinder-volume target systemctl start openstack-cinder-backup (\u53ef\u9009) \u81f3\u6b64\uff0cCinder\u670d\u52a1\u7684\u90e8\u7f72\u5df2\u5168\u90e8\u5b8c\u6210\uff0c\u53ef\u4ee5\u5728controller\u901a\u8fc7\u4ee5\u4e0b\u547d\u4ee4\u8fdb\u884c\u7b80\u5355\u7684\u9a8c\u8bc1 source ~/.admin-openrc openstack storage service list openstack volume list","title":"Cinder"},{"location":"install/openEuler-24.03-LTS-SP2/OpenStack-antelope/#horizon","text":"Horizon\u662fOpenStack\u63d0\u4f9b\u7684\u524d\u7aef\u9875\u9762\uff0c\u53ef\u4ee5\u8ba9\u7528\u6237\u901a\u8fc7\u7f51\u9875\u9f20\u6807\u7684\u64cd\u4f5c\u6765\u63a7\u5236OpenStack\u96c6\u7fa4\uff0c\u800c\u4e0d\u7528\u7e41\u7410\u7684CLI\u547d\u4ee4\u884c\u3002Horizon\u4e00\u822c\u90e8\u7f72\u5728\u63a7\u5236\u8282\u70b9\u3002 \u5b89\u88c5\u8f6f\u4ef6\u5305 dnf install openstack-dashboard \u4fee\u6539\u914d\u7f6e\u6587\u4ef6 /etc/openstack-dashboard/local_settings OPENSTACK_HOST = \"controller\" ALLOWED_HOSTS = ['*', ] OPENSTACK_KEYSTONE_URL = \"http://controller:5000/v3\" SESSION_ENGINE = 'django.contrib.sessions.backends.cache' CACHES = { 'default': { 'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache', 'LOCATION': 'controller:11211', } } OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = \"Default\" OPENSTACK_KEYSTONE_DEFAULT_ROLE = \"member\" WEBROOT = '/dashboard' POLICY_FILES_PATH = \"/etc/openstack-dashboard\" OPENSTACK_API_VERSIONS = { \"identity\": 3, \"image\": 2, \"volume\": 3, } \u91cd\u542f\u670d\u52a1 systemctl restart httpd \u81f3\u6b64\uff0chorizon\u670d\u52a1\u7684\u90e8\u7f72\u5df2\u5168\u90e8\u5b8c\u6210\uff0c\u6253\u5f00\u6d4f\u89c8\u5668\uff0c\u8f93\u5165 http://192.168.0.2/dashboard \uff0c\u6253\u5f00horizon\u767b\u5f55\u9875\u9762\u3002","title":"Horizon"},{"location":"install/openEuler-24.03-LTS-SP2/OpenStack-antelope/#ironic","text":"Ironic\u662fOpenStack\u7684\u88f8\u91d1\u5c5e\u670d\u52a1\uff0c\u5982\u679c\u7528\u6237\u9700\u8981\u8fdb\u884c\u88f8\u673a\u90e8\u7f72\u5219\u63a8\u8350\u4f7f\u7528\u8be5\u7ec4\u4ef6\u3002\u5426\u5219\uff0c\u53ef\u4ee5\u4e0d\u7528\u5b89\u88c5\u3002 \u5728\u63a7\u5236\u8282\u70b9\u6267\u884c\u4ee5\u4e0b\u64cd\u4f5c\u3002 \u8bbe\u7f6e\u6570\u636e\u5e93 \u88f8\u91d1\u5c5e\u670d\u52a1\u5728\u6570\u636e\u5e93\u4e2d\u5b58\u50a8\u4fe1\u606f\uff0c\u521b\u5efa\u4e00\u4e2a ironic \u7528\u6237\u53ef\u4ee5\u8bbf\u95ee\u7684 ironic \u6570\u636e\u5e93\uff0c\u66ff\u6362 IRONIC_DBPASS \u4e3a\u5408\u9002\u7684\u5bc6\u7801 mysql -u root -p MariaDB [(none)]> CREATE DATABASE ironic CHARACTER SET utf8; MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'localhost' \\ IDENTIFIED BY 'IRONIC_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'%' \\ IDENTIFIED BY 'IRONIC_DBPASS'; MariaDB [(none)]> exit Bye \u521b\u5efa\u670d\u52a1\u7528\u6237\u8ba4\u8bc1 \u521b\u5efaBare Metal\u670d\u52a1\u7528\u6237 \u66ff\u6362 IRONIC_PASS \u4e3aironic\u7528\u6237\u5bc6\u7801\uff0c IRONIC_INSPECTOR_PASS \u4e3aironic_inspector\u7528\u6237\u5bc6\u7801\u3002 openstack user create --password IRONIC_PASS \\ --email ironic@example.com ironic openstack role add --project service --user ironic admin openstack service create --name ironic \\ --description \"Ironic baremetal provisioning service\" baremetal openstack service 
create --name ironic-inspector --description \"Ironic inspector baremetal provisioning service\" baremetal-introspection openstack user create --password IRONIC_INSPECTOR_PASS --email ironic_inspector@example.com ironic-inspector openstack role add --project service --user ironic-inspector admin \u521b\u5efaBare Metal\u670d\u52a1\u8bbf\u95ee\u5165\u53e3 openstack endpoint create --region RegionOne baremetal admin http://192.168.0.2:6385 openstack endpoint create --region RegionOne baremetal public http://192.168.0.2:6385 openstack endpoint create --region RegionOne baremetal internal http://192.168.0.2:6385 openstack endpoint create --region RegionOne baremetal-introspection internal http://192.168.0.2:5050/v1 openstack endpoint create --region RegionOne baremetal-introspection public http://192.168.0.2:5050/v1 openstack endpoint create --region RegionOne baremetal-introspection admin http://192.168.0.2:5050/v1 \u5b89\u88c5\u7ec4\u4ef6 dnf install openstack-ironic-api openstack-ironic-conductor python3-ironicclient \u914d\u7f6eironic-api\u670d\u52a1 \u914d\u7f6e\u6587\u4ef6\u8def\u5f84/etc/ironic/ironic.conf \u901a\u8fc7 connection \u9009\u9879\u914d\u7f6e\u6570\u636e\u5e93\u7684\u4f4d\u7f6e\uff0c\u5982\u4e0b\u6240\u793a\uff0c\u66ff\u6362 IRONIC_DBPASS \u4e3a ironic \u7528\u6237\u7684\u5bc6\u7801\uff0c\u66ff\u6362 DB_IP \u4e3aDB\u670d\u52a1\u5668\u6240\u5728\u7684IP\u5730\u5740\uff1a [database] # The SQ LAlchemy connection string used to connect to the # database (string value) # connection = mysql+pymysql://ironic:IRONIC_DBPASS@DB_IP/ironic connection = mysql+pymysql://ironic:IRONIC_DBPASS@controller/ironic \u901a\u8fc7\u4ee5\u4e0b\u9009\u9879\u914d\u7f6eironic-api\u670d\u52a1\u4f7f\u7528RabbitMQ\u6d88\u606f\u4ee3\u7406\uff0c\u66ff\u6362 RPC_* \u4e3aRabbitMQ\u7684\u8be6\u7ec6\u5730\u5740\u548c\u51ed\u8bc1 [DEFAULT] # A URL representing the messaging driver to use and its full # configuration. (string value) # transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/ transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/ \u7528\u6237\u4e5f\u53ef\u81ea\u884c\u4f7f\u7528json-rpc\u65b9\u5f0f\u66ff\u6362rabbitmq \u914d\u7f6eironic-api\u670d\u52a1\u4f7f\u7528\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u7684\u51ed\u8bc1\uff0c\u66ff\u6362 PUBLIC_IDENTITY_IP \u4e3a\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u5668\u7684\u516c\u5171IP\uff0c\u66ff\u6362 PRIVATE_IDENTITY_IP \u4e3a\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u5668\u7684\u79c1\u6709IP\uff0c\u66ff\u6362 IRONIC_PASS \u4e3a\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u4e2d ironic \u7528\u6237\u7684\u5bc6\u7801\uff0c\u66ff\u6362 RABBIT_PASS \u4e3aRabbitMQ\u4e2dopenstack\u8d26\u6237\u7684\u5bc6\u7801\u3002\uff1a [DEFAULT] # Authentication strategy used by ironic-api: one of # \"keystone\" or \"noauth\". \"noauth\" should not be used in a # production environment because all authentication will be # disabled. 
(string value) auth_strategy=keystone host = controller memcache_servers = controller:11211 enabled_network_interfaces = flat,noop,neutron default_network_interface = noop enabled_hardware_types = ipmi enabled_boot_interfaces = pxe enabled_deploy_interfaces = direct default_deploy_interface = direct enabled_inspect_interfaces = inspector enabled_management_interfaces = ipmitool enabled_power_interfaces = ipmitool enabled_rescue_interfaces = no-rescue,agent isolinux_bin = /usr/share/syslinux/isolinux.bin logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s % (user_identity)s] %(instance)s%(message)s [keystone_authtoken] # Authentication type to load (string value) auth_type=password # Complete public Identity API endpoint (string value) # www_authenticate_uri=http://PUBLIC_IDENTITY_IP:5000 www_authenticate_uri=http://controller:5000 # Complete admin Identity API endpoint. (string value) # auth_url=http://PRIVATE_IDENTITY_IP:5000 auth_url=http://controller:5000 # Service username. (string value) username=ironic # Service account password. (string value) password=IRONIC_PASS # Service tenant name. (string value) project_name=service # Domain name containing project (string value) project_domain_name=Default # User's domain name (string value) user_domain_name=Default [agent] deploy_logs_collect = always deploy_logs_local_path = /var/log/ironic/deploy deploy_logs_storage_backend = local image_download_source = http stream_raw_images = false force_raw_images = false verify_ca = False [oslo_concurrency] [oslo_messaging_notifications] transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/ topics = notifications driver = messagingv2 [oslo_messaging_rabbit] amqp_durable_queues = True rabbit_ha_queues = True [pxe] ipxe_enabled = false pxe_append_params = nofb nomodeset vga=normal coreos.autologin ipa-insecure=1 image_cache_size = 204800 tftp_root=/var/lib/tftpboot/cephfs/ tftp_master_path=/var/lib/tftpboot/cephfs/master_images [dhcp] dhcp_provider = none \u521b\u5efa\u88f8\u91d1\u5c5e\u670d\u52a1\u6570\u636e\u5e93\u8868 ironic-dbsync --config-file /etc/ironic/ironic.conf create_schema \u91cd\u542fironic-api\u670d\u52a1 sudo systemctl restart openstack-ironic-api \u914d\u7f6eironic-conductor\u670d\u52a1 \u5982\u4e0b\u4e3aironic-conductor\u670d\u52a1\u81ea\u8eab\u7684\u6807\u51c6\u914d\u7f6e\uff0cironic-conductor\u670d\u52a1\u53ef\u4ee5\u4e0eironic-api\u670d\u52a1\u5206\u5e03\u4e8e\u4e0d\u540c\u8282\u70b9\uff0c\u672c\u6307\u5357\u4e2d\u5747\u90e8\u7f72\u4e0e\u63a7\u5236\u8282\u70b9\uff0c\u6240\u4ee5\u91cd\u590d\u7684\u914d\u7f6e\u9879\u53ef\u8df3\u8fc7\u3002 \u66ff\u6362\u4f7f\u7528conductor\u670d\u52a1\u6240\u5728host\u7684IP\u914d\u7f6emy_ip\uff1a [DEFAULT] # IP address of this host. If unset, will determine the IP # programmatically. If unable to do so, will use \"127.0.0.1\". # (string value) # my_ip=HOST_IP my_ip = 192.168.0.2 \u914d\u7f6e\u6570\u636e\u5e93\u7684\u4f4d\u7f6e\uff0cironic-conductor\u5e94\u8be5\u4f7f\u7528\u548cironic-api\u76f8\u540c\u7684\u914d\u7f6e\u3002\u66ff\u6362 IRONIC_DBPASS \u4e3a ironic \u7528\u6237\u7684\u5bc6\u7801\uff1a [database] # The SQLAlchemy connection string to use to connect to the # database. 
(string value) connection = mysql+pymysql://ironic:IRONIC_DBPASS@controller/ironic \u901a\u8fc7\u4ee5\u4e0b\u9009\u9879\u914d\u7f6eironic-api\u670d\u52a1\u4f7f\u7528RabbitMQ\u6d88\u606f\u4ee3\u7406\uff0cironic-conductor\u5e94\u8be5\u4f7f\u7528\u548cironic-api\u76f8\u540c\u7684\u914d\u7f6e\uff0c\u66ff\u6362 RABBIT_PASS \u4e3aRabbitMQ\u4e2dopenstack\u8d26\u6237\u7684\u5bc6\u7801\uff1a [DEFAULT] # A URL representing the messaging driver to use and its full # configuration. (string value) transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/ \u7528\u6237\u4e5f\u53ef\u81ea\u884c\u4f7f\u7528json-rpc\u65b9\u5f0f\u66ff\u6362rabbitmq \u914d\u7f6e\u51ed\u8bc1\u8bbf\u95ee\u5176\u4ed6OpenStack\u670d\u52a1 \u4e3a\u4e86\u4e0e\u5176\u4ed6OpenStack\u670d\u52a1\u8fdb\u884c\u901a\u4fe1\uff0c\u88f8\u91d1\u5c5e\u670d\u52a1\u5728\u8bf7\u6c42\u5176\u4ed6\u670d\u52a1\u65f6\u9700\u8981\u4f7f\u7528\u670d\u52a1\u7528\u6237\u4e0eOpenStack Identity\u670d\u52a1\u8fdb\u884c\u8ba4\u8bc1\u3002\u8fd9\u4e9b\u7528\u6237\u7684\u51ed\u636e\u5fc5\u987b\u5728\u4e0e\u76f8\u5e94\u670d\u52a1\u76f8\u5173\u7684\u6bcf\u4e2a\u914d\u7f6e\u6587\u4ef6\u4e2d\u8fdb\u884c\u914d\u7f6e\u3002 [neutron] - \u8bbf\u95eeOpenStack\u7f51\u7edc\u670d\u52a1 [glance] - \u8bbf\u95eeOpenStack\u955c\u50cf\u670d\u52a1 [swift] - \u8bbf\u95eeOpenStack\u5bf9\u8c61\u5b58\u50a8\u670d\u52a1 [cinder] - \u8bbf\u95eeOpenStack\u5757\u5b58\u50a8\u670d\u52a1 [inspector] - \u8bbf\u95eeOpenStack\u88f8\u91d1\u5c5eintrospection\u670d\u52a1 [service_catalog] - \u4e00\u4e2a\u7279\u6b8a\u9879\u7528\u4e8e\u4fdd\u5b58\u88f8\u91d1\u5c5e\u670d\u52a1\u4f7f\u7528\u7684\u51ed\u8bc1\uff0c\u8be5\u51ed\u8bc1\u7528\u4e8e\u53d1\u73b0\u6ce8\u518c\u5728OpenStack\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u76ee\u5f55\u4e2d\u7684\u81ea\u5df1\u7684API URL\u7aef\u70b9 \u7b80\u5355\u8d77\u89c1\uff0c\u53ef\u4ee5\u5bf9\u6240\u6709\u670d\u52a1\u4f7f\u7528\u540c\u4e00\u4e2a\u670d\u52a1\u7528\u6237\u3002\u4e3a\u4e86\u5411\u540e\u517c\u5bb9\uff0c\u8be5\u7528\u6237\u5e94\u8be5\u548cironic-api\u670d\u52a1\u7684[keystone_authtoken]\u6240\u914d\u7f6e\u7684\u4e3a\u540c\u4e00\u4e2a\u7528\u6237\u3002\u4f46\u8fd9\u4e0d\u662f\u5fc5\u987b\u7684\uff0c\u4e5f\u53ef\u4ee5\u4e3a\u6bcf\u4e2a\u670d\u52a1\u521b\u5efa\u5e76\u914d\u7f6e\u4e0d\u540c\u7684\u670d\u52a1\u7528\u6237\u3002 \u5728\u4e0b\u9762\u7684\u793a\u4f8b\u4e2d\uff0c\u7528\u6237\u8bbf\u95eeOpenStack\u7f51\u7edc\u670d\u52a1\u7684\u8eab\u4efd\u9a8c\u8bc1\u4fe1\u606f\u914d\u7f6e\u4e3a\uff1a \u7f51\u7edc\u670d\u52a1\u90e8\u7f72\u5728\u540d\u4e3aRegionOne\u7684\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u57df\u4e2d\uff0c\u4ec5\u5728\u670d\u52a1\u76ee\u5f55\u4e2d\u6ce8\u518c\u516c\u5171\u7aef\u70b9\u63a5\u53e3 \u8bf7\u6c42\u65f6\u4f7f\u7528\u7279\u5b9a\u7684CA SSL\u8bc1\u4e66\u8fdb\u884cHTTPS\u8fde\u63a5 \u4e0eironic-api\u670d\u52a1\u914d\u7f6e\u76f8\u540c\u7684\u670d\u52a1\u7528\u6237 \u52a8\u6001\u5bc6\u7801\u8ba4\u8bc1\u63d2\u4ef6\u57fa\u4e8e\u5176\u4ed6\u9009\u9879\u53d1\u73b0\u5408\u9002\u7684\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1API\u7248\u672c \u66ff\u6362IRONIC_PASS\u4e3aironic\u7528\u6237\u5bc6\u7801\u3002 [neutron] # Authentication type to load (string value) auth_type = password # Authentication URL (string value) auth_url=https://IDENTITY_IP:5000/ # Username (string value) username=ironic # User's password (string value) password=IRONIC_PASS # Project name to scope to (string value) project_name=service # Domain ID containing project (string value) project_domain_id=default # User's domain id (string value) user_domain_id=default # PEM encoded 
Certificate Authority to use when verifying # HTTPs connections. (string value) cafile=/opt/stack/data/ca-bundle.pem # The default region_name for endpoint URL discovery. (string # value) region_name = RegionOne # List of interfaces, in order of preference, for endpoint # URL. (list value) valid_interfaces=public # \u5176\u4ed6\u53c2\u8003\u914d\u7f6e [glance] endpoint_override = http://controller:9292 www_authenticate_uri = http://controller:5000 auth_url = http://controller:5000 auth_type = password username = ironic password = IRONIC_PASS project_domain_name = default user_domain_name = default region_name = RegionOne project_name = service [service_catalog] region_name = RegionOne project_domain_id = default user_domain_id = default project_name = service password = IRONIC_PASS username = ironic auth_url = http://controller:5000 auth_type = password \u9ed8\u8ba4\u60c5\u51b5\u4e0b\uff0c\u4e3a\u4e86\u4e0e\u5176\u4ed6\u670d\u52a1\u8fdb\u884c\u901a\u4fe1\uff0c\u88f8\u91d1\u5c5e\u670d\u52a1\u4f1a\u5c1d\u8bd5\u901a\u8fc7\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u7684\u670d\u52a1\u76ee\u5f55\u53d1\u73b0\u8be5\u670d\u52a1\u5408\u9002\u7684\u7aef\u70b9\u3002\u5982\u679c\u5e0c\u671b\u5bf9\u4e00\u4e2a\u7279\u5b9a\u670d\u52a1\u4f7f\u7528\u4e00\u4e2a\u4e0d\u540c\u7684\u7aef\u70b9\uff0c\u5219\u5728\u88f8\u91d1\u5c5e\u670d\u52a1\u7684\u914d\u7f6e\u6587\u4ef6\u4e2d\u901a\u8fc7endpoint_override\u9009\u9879\u8fdb\u884c\u6307\u5b9a\uff1a [neutron] endpoint_override = \u914d\u7f6e\u5141\u8bb8\u7684\u9a71\u52a8\u7a0b\u5e8f\u548c\u786c\u4ef6\u7c7b\u578b \u901a\u8fc7\u8bbe\u7f6eenabled_hardware_types\u8bbe\u7f6eironic-conductor\u670d\u52a1\u5141\u8bb8\u4f7f\u7528\u7684\u786c\u4ef6\u7c7b\u578b\uff1a [DEFAULT] enabled_hardware_types = ipmi \u914d\u7f6e\u786c\u4ef6\u63a5\u53e3\uff1a enabled_boot_interfaces = pxe enabled_deploy_interfaces = direct,iscsi enabled_inspect_interfaces = inspector enabled_management_interfaces = ipmitool enabled_power_interfaces = ipmitool \u914d\u7f6e\u63a5\u53e3\u9ed8\u8ba4\u503c\uff1a [DEFAULT] default_deploy_interface = direct default_network_interface = neutron \u5982\u679c\u542f\u7528\u4e86\u4efb\u4f55\u4f7f\u7528Direct deploy\u7684\u9a71\u52a8\uff0c\u5fc5\u987b\u5b89\u88c5\u548c\u914d\u7f6e\u955c\u50cf\u670d\u52a1\u7684Swift\u540e\u7aef\u3002Ceph\u5bf9\u8c61\u7f51\u5173(RADOS\u7f51\u5173)\u4e5f\u652f\u6301\u4f5c\u4e3a\u955c\u50cf\u670d\u52a1\u7684\u540e\u7aef\u3002 \u91cd\u542fironic-conductor\u670d\u52a1 sudo systemctl restart openstack-ironic-conductor \u914d\u7f6eironic-inspector\u670d\u52a1 \u5b89\u88c5\u7ec4\u4ef6 dnf install openstack-ironic-inspector \u521b\u5efa\u6570\u636e\u5e93 # mysql -u root -p MariaDB [(none)]> CREATE DATABASE ironic_inspector CHARACTER SET utf8; MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic_inspector.* TO 'ironic_inspector'@'localhost' \\ IDENTIFIED BY 'IRONIC_INSPECTOR_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic_inspector.* TO 'ironic_inspector'@'%' \\ IDENTIFIED BY 'IRONIC_INSPECTOR_DBPASS'; MariaDB [(none)]> exit Bye \u914d\u7f6e /etc/ironic-inspector/inspector.conf \u901a\u8fc7 connection \u9009\u9879\u914d\u7f6e\u6570\u636e\u5e93\u7684\u4f4d\u7f6e\uff0c\u5982\u4e0b\u6240\u793a\uff0c\u66ff\u6362 IRONIC_INSPECTOR_DBPASS \u4e3a ironic_inspector \u7528\u6237\u7684\u5bc6\u7801 [database] backend = sqlalchemy connection = mysql+pymysql://ironic_inspector:IRONIC_INSPECTOR_DBPASS@controller/ironic_inspector min_pool_size = 100 max_pool_size = 500 pool_timeout = 30 max_retries = 5 max_overflow = 200 db_retry_interval = 2 
db_inc_retry_interval = True db_max_retry_interval = 2 db_max_retries = 5 \u914d\u7f6e\u6d88\u606f\u961f\u5217\u901a\u4fe1\u5730\u5740 [DEFAULT] transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/ \u8bbe\u7f6ekeystone\u8ba4\u8bc1 [DEFAULT] auth_strategy = keystone timeout = 900 rootwrap_config = /etc/ironic-inspector/rootwrap.conf logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s % (user_identity)s] %(instance)s%(message)s log_dir = /var/log/ironic-inspector state_path = /var/lib/ironic-inspector use_stderr = False [ironic] api_endpoint = http://IRONIC_API_HOST_ADDRRESS:6385 auth_type = password auth_url = http://PUBLIC_IDENTITY_IP:5000 auth_strategy = keystone ironic_url = http://IRONIC_API_HOST_ADDRRESS:6385 os_region = RegionOne project_name = service project_domain_name = Default user_domain_name = Default username = IRONIC_SERVICE_USER_NAME password = IRONIC_SERVICE_USER_PASSWORD [keystone_authtoken] auth_type = password auth_url = http://controller:5000 www_authenticate_uri = http://controller:5000 project_domain_name = default user_domain_name = default project_name = service username = ironic_inspector password = IRONICPASSWD region_name = RegionOne memcache_servers = controller:11211 token_cache_time = 300 [processing] add_ports = active processing_hooks = $default_processing_hooks,local_link_connection,lldp_basic ramdisk_logs_dir = /var/log/ironic-inspector/ramdisk always_store_ramdisk_logs = true store_data =none power_off = false [pxe_filter] driver = iptables [capabilities] boot_mode=True \u914d\u7f6eironic inspector dnsmasq\u670d\u52a1 # \u914d\u7f6e\u6587\u4ef6\u5730\u5740\uff1a/etc/ironic-inspector/dnsmasq.conf port=0 interface=enp3s0 #\u66ff\u6362\u4e3a\u5b9e\u9645\u76d1\u542c\u7f51\u7edc\u63a5\u53e3 dhcp-range=192.168.0.40,192.168.0.50 #\u66ff\u6362\u4e3a\u5b9e\u9645dhcp\u5730\u5740\u8303\u56f4 bind-interfaces enable-tftp dhcp-match=set:efi,option:client-arch,7 dhcp-match=set:efi,option:client-arch,9 dhcp-match=aarch64, option:client-arch,11 dhcp-boot=tag:aarch64,grubaa64.efi dhcp-boot=tag:!aarch64,tag:efi,grubx64.efi dhcp-boot=tag:!aarch64,tag:!efi,pxelinux.0 tftp-root=/tftpboot #\u66ff\u6362\u4e3a\u5b9e\u9645tftpboot\u76ee\u5f55 log-facility=/var/log/dnsmasq.log \u5173\u95edironic provision\u7f51\u7edc\u5b50\u7f51\u7684dhcp openstack subnet set --no-dhcp 72426e89-f552-4dc4-9ac7-c4e131ce7f3c \u521d\u59cb\u5316ironic-inspector\u670d\u52a1\u7684\u6570\u636e\u5e93 ironic-inspector-dbsync --config-file /etc/ironic-inspector/inspector.conf upgrade \u542f\u52a8\u670d\u52a1 systemctl enable --now openstack-ironic-inspector.service systemctl enable --now openstack-ironic-inspector-dnsmasq.service \u914d\u7f6ehttpd\u670d\u52a1 \u521b\u5efaironic\u8981\u4f7f\u7528\u7684httpd\u7684root\u76ee\u5f55\u5e76\u8bbe\u7f6e\u5c5e\u4e3b\u5c5e\u7ec4\uff0c\u76ee\u5f55\u8def\u5f84\u8981\u548c/etc/ironic/ironic.conf\u4e2d[deploy]\u7ec4\u4e2dhttp_root \u914d\u7f6e\u9879\u6307\u5b9a\u7684\u8def\u5f84\u8981\u4e00\u81f4\u3002 mkdir -p /var/lib/ironic/httproot chown ironic.ironic /var/lib/ironic/httproot \u5b89\u88c5\u548c\u914d\u7f6ehttpd\u670d\u52a1 \u5b89\u88c5httpd\u670d\u52a1\uff0c\u5df2\u6709\u8bf7\u5ffd\u7565 dnf install httpd -y \u521b\u5efa/etc/httpd/conf.d/openstack-ironic-httpd.conf\u6587\u4ef6\uff0c\u5185\u5bb9\u5982\u4e0b\uff1a Listen 8080 ServerName ironic.openeuler.com ErrorLog \"/var/log/httpd/openstack-ironic-httpd-error_log\" CustomLog \"/var/log/httpd/openstack-ironic-httpd-access_log\" \"%h 
%l %u %t \\\"%r\\\" %>s %b\" DocumentRoot \"/var/lib/ironic/httproot\" Options Indexes FollowSymLinks Require all granted LogLevel warn AddDefaultCharset UTF-8 EnableSendfile on \u6ce8\u610f\u76d1\u542c\u7684\u7aef\u53e3\u8981\u548c/etc/ironic/ironic.conf\u91cc[deploy]\u9009\u9879\u4e2dhttp_url\u914d\u7f6e\u9879\u4e2d\u6307\u5b9a\u7684\u7aef\u53e3\u4e00\u81f4\u3002 \u91cd\u542fhttpd\u670d\u52a1\u3002 systemctl restart httpd deploy ramdisk\u955c\u50cf\u4e0b\u8f7d\u6216\u5236\u4f5c \u90e8\u7f72\u4e00\u4e2a\u88f8\u673a\u8282\u70b9\u603b\u5171\u9700\u8981\u4e24\u7ec4\u955c\u50cf\uff1adeploy ramdisk images\u548cuser images\u3002Deploy ramdisk images\u4e0a\u8fd0\u884c\u6709ironic-python-agent(IPA)\u670d\u52a1\uff0cIronic\u901a\u8fc7\u5b83\u8fdb\u884c\u88f8\u673a\u8282\u70b9\u7684\u73af\u5883\u51c6\u5907\u3002User images\u662f\u6700\u7ec8\u88ab\u5b89\u88c5\u88f8\u673a\u8282\u70b9\u4e0a\uff0c\u4f9b\u7528\u6237\u4f7f\u7528\u7684\u955c\u50cf\u3002 ramdisk\u955c\u50cf\u652f\u6301\u901a\u8fc7ironic-python-agent-builder\u6216disk-image-builder\u5de5\u5177\u5236\u4f5c\u3002\u7528\u6237\u4e5f\u53ef\u4ee5\u81ea\u884c\u9009\u62e9\u5176\u4ed6\u5de5\u5177\u5236\u4f5c\u3002\u82e5\u4f7f\u7528\u539f\u751f\u5de5\u5177\uff0c\u5219\u9700\u8981\u5b89\u88c5\u5bf9\u5e94\u7684\u8f6f\u4ef6\u5305\u3002 \u5177\u4f53\u7684\u4f7f\u7528\u65b9\u6cd5\u53ef\u4ee5\u53c2\u8003 \u5b98\u65b9\u6587\u6863 \uff0c\u540c\u65f6\u5b98\u65b9\u4e5f\u6709\u63d0\u4f9b\u5236\u4f5c\u597d\u7684deploy\u955c\u50cf\uff0c\u53ef\u5c1d\u8bd5\u4e0b\u8f7d\u3002 \u4e0b\u6587\u4ecb\u7ecd\u901a\u8fc7ironic-python-agent-builder\u6784\u5efaironic\u4f7f\u7528\u7684deploy\u955c\u50cf\u7684\u5b8c\u6574\u8fc7\u7a0b\u3002 \u5b89\u88c5 ironic-python-agent-builder dnf install python3-ironic-python-agent-builder \u6216 pip3 install ironic-python-agent-builder dnf install qemu-img git \u5236\u4f5c\u955c\u50cf \u57fa\u672c\u7528\u6cd5\uff1a usage: ironic-python-agent-builder [-h] [-r RELEASE] [-o OUTPUT] [-e ELEMENT] [-b BRANCH] [-v] [--lzma] [--extra-args EXTRA_ARGS] [--elements-path ELEMENTS_PATH] distribution positional arguments: distribution Distribution to use options: -h, --help show this help message and exit -r RELEASE, --release RELEASE Distribution release to use -o OUTPUT, --output OUTPUT Output base file name -e ELEMENT, --element ELEMENT Additional DIB element to use -b BRANCH, --branch BRANCH If set, override the branch that is used for ironic-python-agent and requirements -v, --verbose Enable verbose logging in diskimage-builder --lzma Use lzma compression for smaller images --extra-args EXTRA_ARGS Extra arguments to pass to diskimage-builder --elements-path ELEMENTS_PATH Path(s) to custom DIB elements separated by a colon \u64cd\u4f5c\u5b9e\u4f8b\uff1a # -o\u9009\u9879\u6307\u5b9a\u751f\u6210\u7684\u955c\u50cf\u540d # ubuntu\u6307\u5b9a\u751f\u6210ubuntu\u7cfb\u7edf\u7684\u955c\u50cf ironic-python-agent-builder -o my-ubuntu-ipa ubuntu \u53ef\u901a\u8fc7\u8bbe\u7f6e ARCH \u73af\u5883\u53d8\u91cf\uff08\u9ed8\u8ba4\u4e3aamd64\uff09\u6307\u5b9a\u6240\u6784\u5efa\u955c\u50cf\u7684\u67b6\u6784\u3002\u5982\u679c\u662f arm \u67b6\u6784\uff0c\u9700\u8981\u6dfb\u52a0\uff1a export ARCH=aarch64 \u5141\u8bb8ssh\u767b\u5f55 \u521d\u59cb\u5316\u73af\u5883\u53d8\u91cf,\u8bbe\u7f6e\u7528\u6237\u540d\u3001\u5bc6\u7801\uff0c\u542f\u7528 sodo \u6743\u9650\uff1b\u5e76\u6dfb\u52a0 -e \u9009\u9879\u4f7f\u7528\u76f8\u5e94\u7684DIB\u5143\u7d20\u3002\u5236\u4f5c\u955c\u50cf\u64cd\u4f5c\u5982\u4e0b\uff1a export DIB_DEV_USER_USERNAME=ipa \\ export DIB_DEV_USER_PWDLESS_SUDO=yes 
\\ export DIB_DEV_USER_PASSWORD='123' ironic-python-agent-builder -o my-ssh-ubuntu-ipa -e selinux-permissive -e devuser ubuntu \u6307\u5b9a\u4ee3\u7801\u4ed3\u5e93 \u521d\u59cb\u5316\u5bf9\u5e94\u7684\u73af\u5883\u53d8\u91cf\uff0c\u7136\u540e\u5236\u4f5c\u955c\u50cf\uff1a # \u76f4\u63a5\u4ecegerrit\u4e0aclone\u4ee3\u7801 DIB_REPOLOCATION_ironic_python_agent=https://opendev.org/openstack/ironic-python-agent DIB_REPOREF_ironic_python_agent=stable/2023.1 # \u6307\u5b9a\u672c\u5730\u4ed3\u5e93\u53ca\u5206\u652f DIB_REPOLOCATION_ironic_python_agent=/home/user/path/to/repo DIB_REPOREF_ironic_python_agent=my-test-branch ironic-python-agent-builder ubuntu \u53c2\u8003\uff1a source-repositories \u3002 \u6ce8\u610f \u539f\u751f\u7684openstack\u91cc\u7684pxe\u914d\u7f6e\u6587\u4ef6\u7684\u6a21\u7248\u4e0d\u652f\u6301arm64\u67b6\u6784\uff0c\u9700\u8981\u81ea\u5df1\u5bf9\u539f\u751fopenstack\u4ee3\u7801\u8fdb\u884c\u4fee\u6539\uff1a \u5728W\u7248\u4e2d\uff0c\u793e\u533a\u7684ironic\u4ecd\u7136\u4e0d\u652f\u6301arm64\u4f4d\u7684uefi pxe\u542f\u52a8\uff0c\u8868\u73b0\u4e3a\u751f\u6210\u7684grub.cfg\u6587\u4ef6(\u4e00\u822c\u4f4d\u4e8e/tftpboot/\u4e0b)\u683c\u5f0f\u4e0d\u5bf9\u800c\u5bfc\u81f4pxe\u542f\u52a8\u5931\u8d25\u3002 \u751f\u6210\u7684\u9519\u8bef\u914d\u7f6e\u6587\u4ef6\uff1a \u5982\u4e0a\u56fe\u6240\u793a\uff0carm\u67b6\u6784\u91cc\u5bfb\u627evmlinux\u548cramdisk\u955c\u50cf\u7684\u547d\u4ee4\u5206\u522b\u662flinux\u548cinitrd\uff0c\u4e0a\u56fe\u6240\u793a\u7684\u6807\u7ea2\u547d\u4ee4\u662fx86\u67b6\u6784\u4e0b\u7684uefi pxe\u542f\u52a8\u3002 \u9700\u8981\u7528\u6237\u5bf9\u751f\u6210grub.cfg\u7684\u4ee3\u7801\u903b\u8f91\u81ea\u884c\u4fee\u6539\u3002 ironic\u5411ipa\u53d1\u9001\u67e5\u8be2\u547d\u4ee4\u6267\u884c\u72b6\u6001\u8bf7\u6c42\u7684tls\u62a5\u9519\uff1a \u5f53\u524d\u7248\u672c\u7684ipa\u548cironic\u9ed8\u8ba4\u90fd\u4f1a\u5f00\u542ftls\u8ba4\u8bc1\u7684\u65b9\u5f0f\u5411\u5bf9\u65b9\u53d1\u9001\u8bf7\u6c42\uff0c\u8ddf\u636e\u5b98\u7f51\u7684\u8bf4\u660e\u8fdb\u884c\u5173\u95ed\u5373\u53ef\u3002 \u4fee\u6539ironic\u914d\u7f6e\u6587\u4ef6(/etc/ironic/ironic.conf)\u4e0b\u9762\u7684\u914d\u7f6e\u4e2d\u6dfb\u52a0ipa-insecure=1\uff1a [agent] verify_ca = False [pxe] pxe_append_params = nofb nomodeset vga=normal coreos.autologin ipa-insecure=1 ramdisk\u955c\u50cf\u4e2d\u6dfb\u52a0ipa\u914d\u7f6e\u6587\u4ef6/etc/ironic_python_agent/ironic_python_agent.conf\u5e76\u914d\u7f6etls\u7684\u914d\u7f6e\u5982\u4e0b\uff1a /etc/ironic_python_agent/ironic_python_agent.conf (\u9700\u8981\u63d0\u524d\u521b\u5efa/etc/ ironic_python_agent\u76ee\u5f55\uff09 [DEFAULT] enable_auto_tls = False \u8bbe\u7f6e\u6743\u9650\uff1a chown -R ipa.ipa /etc/ironic_python_agent/ ramdisk\u955c\u50cf\u4e2d\u4fee\u6539ipa\u670d\u52a1\u7684\u670d\u52a1\u542f\u52a8\u6587\u4ef6\uff0c\u6dfb\u52a0\u914d\u7f6e\u6587\u4ef6\u9009\u9879 \u7f16\u8f91/usr/lib/systemd/system/ironic-python-agent.service\u6587\u4ef6 [Unit] Description=Ironic Python Agent After=network-online.target [Service] ExecStartPre=/sbin/modprobe vfat ExecStart=/usr/local/bin/ironic-python-agent --config-file /etc/ ironic_python_agent/ironic_python_agent.conf Restart=always RestartSec=30s [Install] 
WantedBy=multi-user.target","title":"Ironic"},{"location":"install/openEuler-24.03-LTS-SP2/OpenStack-antelope/#trove","text":"Trove\u662fOpenStack\u7684\u6570\u636e\u5e93\u670d\u52a1\uff0c\u5982\u679c\u7528\u6237\u4f7f\u7528OpenStack\u63d0\u4f9b\u7684\u6570\u636e\u5e93\u670d\u52a1\u5219\u63a8\u8350\u4f7f\u7528\u8be5\u7ec4\u4ef6\u3002\u5426\u5219\uff0c\u53ef\u4ee5\u4e0d\u7528\u5b89\u88c5\u3002 Controller\u8282\u70b9 \u521b\u5efa\u6570\u636e\u5e93\u3002 \u6570\u636e\u5e93\u670d\u52a1\u5728\u6570\u636e\u5e93\u4e2d\u5b58\u50a8\u4fe1\u606f\uff0c\u521b\u5efa\u4e00\u4e2atrove\u7528\u6237\u53ef\u4ee5\u8bbf\u95ee\u7684trove\u6570\u636e\u5e93\uff0c\u66ff\u6362TROVE_DBPASS\u4e3a\u5408\u9002\u7684\u5bc6\u7801\u3002 CREATE DATABASE trove CHARACTER SET utf8; GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'localhost' IDENTIFIED BY 'TROVE_DBPASS'; GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'%' IDENTIFIED BY 'TROVE_DBPASS'; \u521b\u5efa\u670d\u52a1\u51ed\u8bc1\u4ee5\u53caAPI\u7aef\u70b9\u3002 \u521b\u5efa\u670d\u52a1\u51ed\u8bc1\u3002 # \u521b\u5efatrove\u7528\u6237 openstack user create --domain default --password-prompt trove # \u6dfb\u52a0admin\u89d2\u8272 openstack role add --project service --user trove admin # \u521b\u5efadatabase\u670d\u52a1 openstack service create --name trove --description \"Database service\" database \u521b\u5efaAPI\u7aef\u70b9\u3002 openstack endpoint create --region RegionOne database public http://controller:8779/v1.0/%\\(tenant_id\\)s openstack endpoint create --region RegionOne database internal http://controller:8779/v1.0/%\\(tenant_id\\)s openstack endpoint create --region RegionOne database admin http://controller:8779/v1.0/%\\(tenant_id\\)s \u5b89\u88c5Trove\u3002 dnf install openstack-trove python-troveclient \u4fee\u6539\u914d\u7f6e\u6587\u4ef6\u3002 \u7f16\u8f91/etc/trove/trove.conf\u3002 [DEFAULT] bind_host=192.168.0.2 log_dir = /var/log/trove network_driver = trove.network.neutron.NeutronDriver network_label_regex=.* management_security_groups = nova_keypair = trove-mgmt default_datastore = mysql taskmanager_manager = trove.taskmanager.manager.Manager trove_api_workers = 5 transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/ reboot_time_out = 300 usage_timeout = 900 agent_call_high_timeout = 1200 use_syslog = False debug = True [database] connection = mysql+pymysql://trove:TROVE_DBPASS@controller/trove [keystone_authtoken] auth_url = http://controller:5000/v3/ auth_type = password project_domain_name = Default project_name = service user_domain_name = Default password = trove username = TROVE_PASS [service_credentials] auth_url = http://controller:5000/v3/ region_name = RegionOne project_name = service project_domain_name = Default user_domain_name = Default username = trove password = TROVE_PASS [mariadb] tcp_ports = 3306,4444,4567,4568 [mysql] tcp_ports = 3306 [postgresql] tcp_ports = 5432 \u89e3\u91ca\uff1a [Default] \u5206\u7ec4\u4e2d bind_host \u914d\u7f6e\u4e3aTrove\u63a7\u5236\u8282\u70b9\u7684IP\u3002\\ transport_url \u4e3a RabbitMQ \u8fde\u63a5\u4fe1\u606f\uff0c RABBIT_PASS \u66ff\u6362\u4e3aRabbitMQ\u7684\u5bc6\u7801\u3002\\ [database] \u5206\u7ec4\u4e2d\u7684 connection \u4e3a\u524d\u9762\u5728mysql\u4e2d\u4e3aTrove\u521b\u5efa\u7684\u6570\u636e\u5e93\u4fe1\u606f\u3002\\ Trove\u7684\u7528\u6237\u4fe1\u606f\u4e2d TROVE_PASSWORD \u66ff\u6362\u4e3a\u5b9e\u9645trove\u7528\u6237\u7684\u5bc6\u7801\u3002 \u7f16\u8f91/etc/trove/trove-guestagent.conf\u3002 [DEFAULT] log_file = trove-guestagent.log log_dir = /var/log/trove/ ignore_users = os_admin 
control_exchange = trove transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/ rpc_backend = rabbit command_process_timeout = 60 use_syslog = False debug = True [service_credentials] auth_url = http://controller:5000/v3/ region_name = RegionOne project_name = service password = TROVE_PASS project_domain_name = Default user_domain_name = Default username = trove [mysql] docker_image = your-registry/your-repo/mysql backup_docker_image = your-registry/your-repo/db-backup-mysql:1.1.0 \u89e3\u91ca\uff1a guestagent \u662ftrove\u4e2d\u4e00\u4e2a\u72ec\u7acb\u7ec4\u4ef6\uff0c\u9700\u8981\u9884\u5148\u5185\u7f6e\u5230Trove\u901a\u8fc7Nova\u521b\u5efa\u7684\u865a\u62df\u673a\u955c\u50cf\u4e2d\uff0c\u5728\u521b\u5efa\u597d\u6570\u636e\u5e93\u5b9e\u4f8b\u540e\uff0c\u4f1a\u8d77guestagent\u8fdb\u7a0b\uff0c\u8d1f\u8d23\u901a\u8fc7\u6d88\u606f\u961f\u5217\uff08RabbitMQ\uff09\u5411Trove\u4e0a\u62a5\u5fc3\u8df3\uff0c\u56e0\u6b64\u9700\u8981\u914d\u7f6eRabbitMQ\u7684\u7528\u6237\u548c\u5bc6\u7801\u4fe1\u606f\u3002\\ transport_url \u4e3a RabbitMQ \u8fde\u63a5\u4fe1\u606f\uff0c RABBIT_PASS \u66ff\u6362\u4e3aRabbitMQ\u7684\u5bc6\u7801\u3002\\ Trove\u7684\u7528\u6237\u4fe1\u606f\u4e2d TROVE_PASSWORD \u66ff\u6362\u4e3a\u5b9e\u9645trove\u7528\u6237\u7684\u5bc6\u7801\u3002\\ \u4eceVictoria\u7248\u5f00\u59cb\uff0cTrove\u4f7f\u7528\u4e00\u4e2a\u7edf\u4e00\u7684\u955c\u50cf\u6765\u8dd1\u4e0d\u540c\u7c7b\u578b\u7684\u6570\u636e\u5e93\uff0c\u6570\u636e\u5e93\u670d\u52a1\u8fd0\u884c\u5728Guest\u865a\u62df\u673a\u7684Docker\u5bb9\u5668\u4e2d\u3002 \u6570\u636e\u5e93\u540c\u6b65\u3002 su -s /bin/sh -c \"trove-manage db_sync\" trove \u5b8c\u6210\u5b89\u88c5\u3002 # \u914d\u7f6e\u670d\u52a1\u81ea\u542f systemctl enable openstack-trove-api.service openstack-trove-taskmanager.service \\ openstack-trove-conductor.service # \u542f\u52a8\u670d\u52a1 systemctl start openstack-trove-api.service openstack-trove-taskmanager.service \\ openstack-trove-conductor.service","title":"Trove"},{"location":"install/openEuler-24.03-LTS-SP2/OpenStack-antelope/#swift","text":"Swift \u63d0\u4f9b\u4e86\u5f39\u6027\u53ef\u4f38\u7f29\u3001\u9ad8\u53ef\u7528\u7684\u5206\u5e03\u5f0f\u5bf9\u8c61\u5b58\u50a8\u670d\u52a1\uff0c\u9002\u5408\u5b58\u50a8\u5927\u89c4\u6a21\u975e\u7ed3\u6784\u5316\u6570\u636e\u3002 Controller\u8282\u70b9 \u521b\u5efa\u670d\u52a1\u51ed\u8bc1\u4ee5\u53caAPI\u7aef\u70b9\u3002 \u521b\u5efa\u670d\u52a1\u51ed\u8bc1\u3002 # \u521b\u5efaswift\u7528\u6237 openstack user create --domain default --password-prompt swift # \u6dfb\u52a0admin\u89d2\u8272 openstack role add --project service --user swift admin # \u521b\u5efa\u5bf9\u8c61\u5b58\u50a8\u670d\u52a1 openstack service create --name swift --description \"OpenStack Object Storage\" object-store \u521b\u5efaAPI\u7aef\u70b9\u3002 openstack endpoint create --region RegionOne object-store public http://controller:8080/v1/AUTH_%\\(project_id\\)s openstack endpoint create --region RegionOne object-store internal http://controller:8080/v1/AUTH_%\\(project_id\\)s openstack endpoint create --region RegionOne object-store admin http://controller:8080/v1 \u5b89\u88c5Swift\u3002 dnf install openstack-swift-proxy python3-swiftclient python3-keystoneclient \\ python3-keystonemiddleware memcached \u914d\u7f6eproxy-server\u3002 Swift RPM\u5305\u91cc\u5df2\u7ecf\u5305\u542b\u4e86\u4e00\u4e2a\u57fa\u672c\u53ef\u7528\u7684proxy-server.conf\uff0c\u53ea\u9700\u8981\u624b\u52a8\u4fee\u6539\u5176\u4e2d\u7684ip\u548cSWIFT_PASS\u5373\u53ef\u3002 vim /etc/swift/proxy-server.conf [filter:authtoken] 
paste.filter_factory = keystonemiddleware.auth_token:filter_factory www_authenticate_uri = http://controller:5000 auth_url = http://controller:5000 memcached_servers = controller:11211 auth_type = password project_domain_id = default user_domain_id = default project_name = service username = swift password = SWIFT_PASS delay_auth_decision = True service_token_roles_required = True Storage\u8282\u70b9 \u5b89\u88c5\u652f\u6301\u7684\u7a0b\u5e8f\u5305\u3002 dnf install openstack-swift-account openstack-swift-container openstack-swift-object dnf install xfsprogs rsync \u5c06\u8bbe\u5907/dev/sdb\u548c/dev/sdc\u683c\u5f0f\u5316\u4e3aXFS\u3002 mkfs.xfs /dev/sdb mkfs.xfs /dev/sdc \u521b\u5efa\u6302\u8f7d\u70b9\u76ee\u5f55\u7ed3\u6784\u3002 mkdir -p /srv/node/sdb mkdir -p /srv/node/sdc \u627e\u5230\u65b0\u5206\u533a\u7684UUID\u3002 blkid \u7f16\u8f91/etc/fstab\u6587\u4ef6\u5e76\u5c06\u4ee5\u4e0b\u5185\u5bb9\u6dfb\u52a0\u5230\u5176\u4e2d\u3002 UUID=\"\" /srv/node/sdb xfs noatime 0 2 UUID=\"\" /srv/node/sdc xfs noatime 0 2 \u6302\u8f7d\u8bbe\u5907\u3002 mount /srv/node/sdb mount /srv/node/sdc \u6ce8\u610f \u5982\u679c\u7528\u6237\u4e0d\u9700\u8981\u5bb9\u707e\u529f\u80fd\uff0c\u4ee5\u4e0a\u6b65\u9aa4\u53ea\u9700\u8981\u521b\u5efa\u4e00\u4e2a\u8bbe\u5907\u5373\u53ef\uff0c\u540c\u65f6\u53ef\u4ee5\u8df3\u8fc7\u4e0b\u9762\u7684rsync\u914d\u7f6e\u3002 \uff08\u53ef\u9009\uff09\u521b\u5efa\u6216\u7f16\u8f91/etc/rsyncd.conf\u6587\u4ef6\u4ee5\u5305\u542b\u4ee5\u4e0b\u5185\u5bb9: [DEFAULT] uid = swift gid = swift log file = /var/log/rsyncd.log pid file = /var/run/rsyncd.pid address = MANAGEMENT_INTERFACE_IP_ADDRESS [account] max connections = 2 path = /srv/node/ read only = False lock file = /var/lock/account.lock [container] max connections = 2 path = /srv/node/ read only = False lock file = /var/lock/container.lock [object] max connections = 2 path = /srv/node/ read only = False lock file = /var/lock/object.lock \u66ff\u6362MANAGEMENT_INTERFACE_IP_ADDRESS\u4e3a\u5b58\u50a8\u8282\u70b9\u4e0a\u7ba1\u7406\u7f51\u7edc\u7684IP\u5730\u5740 \u542f\u52a8rsyncd\u670d\u52a1\u5e76\u914d\u7f6e\u5b83\u5728\u7cfb\u7edf\u542f\u52a8\u65f6\u542f\u52a8: systemctl enable rsyncd.service systemctl start rsyncd.service \u914d\u7f6e\u5b58\u50a8\u8282\u70b9\u3002 \u7f16\u8f91/etc/swift\u76ee\u5f55\u7684account-server.conf\u3001container-server.conf\u548cobject-server.conf\u6587\u4ef6\uff0c\u66ff\u6362bind_ip\u4e3a\u5b58\u50a8\u8282\u70b9\u4e0a\u7ba1\u7406\u7f51\u7edc\u7684IP\u5730\u5740\u3002 [DEFAULT] bind_ip = 192.168.0.4 \u786e\u4fdd\u6302\u8f7d\u70b9\u76ee\u5f55\u7ed3\u6784\u7684\u6b63\u786e\u6240\u6709\u6743\u3002 chown -R swift:swift /srv/node \u521b\u5efarecon\u76ee\u5f55\u5e76\u786e\u4fdd\u5176\u62e5\u6709\u6b63\u786e\u7684\u6240\u6709\u6743\u3002 mkdir -p /var/cache/swift chown -R root:swift /var/cache/swift chmod -R 775 /var/cache/swift Controller\u8282\u70b9\u521b\u5efa\u5e76\u5206\u53d1\u73af \u521b\u5efa\u8d26\u53f7\u73af\u3002 \u5207\u6362\u5230 /etc/swift \u76ee\u5f55\u3002 cd /etc/swift \u521b\u5efa\u57fa\u7840 account.builder \u6587\u4ef6\u3002 swift-ring-builder account.builder create 10 1 1 \u5c06\u6bcf\u4e2a\u5b58\u50a8\u8282\u70b9\u6dfb\u52a0\u5230\u73af\u4e2d\u3002 swift-ring-builder account.builder add --region 1 --zone 1 \\ --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS \\ --port 6202 --device DEVICE_NAME \\ --weight 100 \u66ff\u6362STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS\u4e3a\u5b58\u50a8\u8282\u70b9\u4e0a\u7ba1\u7406\u7f51\u7edc\u7684IP\u5730\u5740\u3002\\ 
\u66ff\u6362DEVICE_NAME\u4e3a\u540c\u4e00\u5b58\u50a8\u8282\u70b9\u4e0a\u7684\u5b58\u50a8\u8bbe\u5907\u540d\u79f0\u3002 \u6ce8\u610f \u5bf9\u6bcf\u4e2a\u5b58\u50a8\u8282\u70b9\u4e0a\u7684\u6bcf\u4e2a\u5b58\u50a8\u8bbe\u5907\u91cd\u590d\u6b64\u547d\u4ee4 \u9a8c\u8bc1\u8d26\u53f7\u73af\u5185\u5bb9\u3002 swift-ring-builder account.builder \u91cd\u65b0\u5e73\u8861\u8d26\u53f7\u73af\u3002 swift-ring-builder account.builder rebalance \u521b\u5efa\u5bb9\u5668\u73af\u3002 \u5207\u6362\u5230 /etc/swift \u76ee\u5f55\u3002 \u521b\u5efa\u57fa\u7840 container.builder \u6587\u4ef6\u3002 swift-ring-builder container.builder create 10 1 1 \u5c06\u6bcf\u4e2a\u5b58\u50a8\u8282\u70b9\u6dfb\u52a0\u5230\u73af\u4e2d\u3002 swift-ring-builder container.builder add --region 1 --zone 1 \\ --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6201 --device DEVICE_NAME \\ --weight 100 \u66ff\u6362STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS\u4e3a\u5b58\u50a8\u8282\u70b9\u4e0a\u7ba1\u7406\u7f51\u7edc\u7684IP\u5730\u5740\u3002\\ \u66ff\u6362DEVICE_NAME\u4e3a\u540c\u4e00\u5b58\u50a8\u8282\u70b9\u4e0a\u7684\u5b58\u50a8\u8bbe\u5907\u540d\u79f0\u3002 \u6ce8\u610f \u5bf9\u6bcf\u4e2a\u5b58\u50a8\u8282\u70b9\u4e0a\u7684\u6bcf\u4e2a\u5b58\u50a8\u8bbe\u5907\u91cd\u590d\u6b64\u547d\u4ee4 \u9a8c\u8bc1\u5bb9\u5668\u73af\u5185\u5bb9\u3002 swift-ring-builder container.builder \u91cd\u65b0\u5e73\u8861\u5bb9\u5668\u73af\u3002 swift-ring-builder container.builder rebalance \u521b\u5efa\u5bf9\u8c61\u73af\u3002 \u5207\u6362\u5230 /etc/swift \u76ee\u5f55\u3002 \u521b\u5efa\u57fa\u7840 object.builder \u6587\u4ef6\u3002 swift-ring-builder object.builder create 10 1 1 \u5c06\u6bcf\u4e2a\u5b58\u50a8\u8282\u70b9\u6dfb\u52a0\u5230\u73af\u4e2d\u3002 swift-ring-builder object.builder add --region 1 --zone 1 \\ --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS \\ --port 6200 --device DEVICE_NAME \\ --weight 100 \u66ff\u6362STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS\u4e3a\u5b58\u50a8\u8282\u70b9\u4e0a\u7ba1\u7406\u7f51\u7edc\u7684IP\u5730\u5740\u3002\\ \u66ff\u6362DEVICE_NAME\u4e3a\u540c\u4e00\u5b58\u50a8\u8282\u70b9\u4e0a\u7684\u5b58\u50a8\u8bbe\u5907\u540d\u79f0\u3002 \u6ce8\u610f \u5bf9\u6bcf\u4e2a\u5b58\u50a8\u8282\u70b9\u4e0a\u7684\u6bcf\u4e2a\u5b58\u50a8\u8bbe\u5907\u91cd\u590d\u6b64\u547d\u4ee4 \u9a8c\u8bc1\u5bf9\u8c61\u73af\u5185\u5bb9\u3002 swift-ring-builder object.builder \u91cd\u65b0\u5e73\u8861\u5bf9\u8c61\u73af\u3002 swift-ring-builder object.builder rebalance \u5206\u53d1\u73af\u914d\u7f6e\u6587\u4ef6\u3002 \u5c06 account.ring.gz \uff0c container.ring.gz \u4ee5\u53ca object.ring.gz \u6587\u4ef6\u590d\u5236\u5230\u6bcf\u4e2a\u5b58\u50a8\u8282\u70b9\u548c\u8fd0\u884c\u4ee3\u7406\u670d\u52a1\u7684\u4efb\u4f55\u5176\u4ed6\u8282\u70b9\u4e0a\u7684 /etc/swift \u76ee\u5f55\u3002 \u7f16\u8f91\u914d\u7f6e\u6587\u4ef6/etc/swift/swift.conf\u3002 [swift-hash] swift_hash_path_suffix = test-hash swift_hash_path_prefix = test-hash [storage-policy:0] name = Policy-0 default = yes \u7528\u552f\u4e00\u503c\u66ff\u6362 test-hash \u5c06swift.conf\u6587\u4ef6\u590d\u5236\u5230/etc/swift\u6bcf\u4e2a\u5b58\u50a8\u8282\u70b9\u548c\u8fd0\u884c\u4ee3\u7406\u670d\u52a1\u7684\u4efb\u4f55\u5176\u4ed6\u8282\u70b9\u4e0a\u7684\u76ee\u5f55\u3002 \u5728\u6240\u6709\u8282\u70b9\u4e0a\uff0c\u786e\u4fdd\u914d\u7f6e\u76ee\u5f55\u7684\u6b63\u786e\u6240\u6709\u6743\u3002 chown -R root:swift /etc/swift \u5b8c\u6210\u5b89\u88c5 
\u5728\u63a7\u5236\u8282\u70b9\u548c\u8fd0\u884c\u4ee3\u7406\u670d\u52a1\u7684\u4efb\u4f55\u5176\u4ed6\u8282\u70b9\u4e0a\uff0c\u542f\u52a8\u5bf9\u8c61\u5b58\u50a8\u4ee3\u7406\u670d\u52a1\u53ca\u5176\u4f9d\u8d56\u9879\uff0c\u5e76\u5c06\u5b83\u4eec\u914d\u7f6e\u4e3a\u5728\u7cfb\u7edf\u542f\u52a8\u65f6\u542f\u52a8\u3002 systemctl enable openstack-swift-proxy.service memcached.service systemctl start openstack-swift-proxy.service memcached.service \u5728\u5b58\u50a8\u8282\u70b9\u4e0a\uff0c\u542f\u52a8\u5bf9\u8c61\u5b58\u50a8\u670d\u52a1\u5e76\u5c06\u5b83\u4eec\u914d\u7f6e\u4e3a\u5728\u7cfb\u7edf\u542f\u52a8\u65f6\u542f\u52a8\u3002 systemctl enable openstack-swift-account.service \\ openstack-swift-account-auditor.service \\ openstack-swift-account-reaper.service \\ openstack-swift-account-replicator.service \\ openstack-swift-container.service \\ openstack-swift-container-auditor.service \\ openstack-swift-container-replicator.service \\ openstack-swift-container-updater.service \\ openstack-swift-object.service \\ openstack-swift-object-auditor.service \\ openstack-swift-object-replicator.service \\ openstack-swift-object-updater.service systemctl start openstack-swift-account.service \\ openstack-swift-account-auditor.service \\ openstack-swift-account-reaper.service \\ openstack-swift-account-replicator.service \\ openstack-swift-container.service \\ openstack-swift-container-auditor.service \\ openstack-swift-container-replicator.service \\ openstack-swift-container-updater.service \\ openstack-swift-object.service \\ openstack-swift-object-auditor.service \\ openstack-swift-object-replicator.service \\ openstack-swift-object-updater.service","title":"Swift"},{"location":"install/openEuler-24.03-LTS-SP2/OpenStack-antelope/#cyborg","text":"Cyborg\u4e3aOpenStack\u63d0\u4f9b\u52a0\u901f\u5668\u8bbe\u5907\u7684\u652f\u6301\uff0c\u5305\u62ec GPU, FPGA, ASIC, NP, SoCs, NVMe/NOF SSDs, ODP, DPDK/SPDK\u7b49\u7b49\u3002 Controller\u8282\u70b9 \u521d\u59cb\u5316\u5bf9\u5e94\u6570\u636e\u5e93 mysql -u root -p MariaDB [(none)]> CREATE DATABASE cyborg; MariaDB [(none)]> GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'localhost' IDENTIFIED BY 'CYBORG_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'%' IDENTIFIED BY 'CYBORG_DBPASS'; MariaDB [(none)]> exit; \u521b\u5efa\u7528\u6237\u548c\u670d\u52a1\uff0c\u5e76\u8bb0\u4f4f\u521b\u5efacybory\u7528\u6237\u65f6\u8f93\u5165\u7684\u5bc6\u7801\uff0c\u7528\u4e8e\u914d\u7f6eCYBORG_PASS source ~/.admin-openrc openstack user create --domain default --password-prompt cyborg openstack role add --project service --user cyborg admin openstack service create --name cyborg --description \"Acceleration Service\" accelerator \u4f7f\u7528uwsgi\u90e8\u7f72Cyborg api\u670d\u52a1 openstack endpoint create --region RegionOne accelerator public http://controller/accelerator/v2 openstack endpoint create --region RegionOne accelerator internal http://controller/accelerator/v2 openstack endpoint create --region RegionOne accelerator admin http://controller/accelerator/v2 \u5b89\u88c5Cyborg dnf install openstack-cyborg \u914d\u7f6eCyborg \u4fee\u6539 /etc/cyborg/cyborg.conf [DEFAULT] transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/ use_syslog = False state_path = /var/lib/cyborg debug = True [api] host_ip = 0.0.0.0 [database] connection = mysql+pymysql://cyborg:CYBORG_DBPASS@controller/cyborg [service_catalog] cafile = /opt/stack/data/ca-bundle.pem project_domain_id = default user_domain_id = default project_name = service password = CYBORG_PASS 
username = cyborg auth_url = http://controller:5000/v3/ auth_type = password [placement] project_domain_name = Default project_name = service user_domain_name = Default password = password username = PLACEMENT_PASS auth_url = http://controller:5000/v3/ auth_type = password auth_section = keystone_authtoken [nova] project_domain_name = Default project_name = service user_domain_name = Default password = NOVA_PASS username = nova auth_url = http://controller:5000/v3/ auth_type = password auth_section = keystone_authtoken [keystone_authtoken] memcached_servers = localhost:11211 signing_dir = /var/cache/cyborg/api cafile = /opt/stack/data/ca-bundle.pem project_domain_name = Default project_name = service user_domain_name = Default password = CYBORG_PASS username = cyborg auth_url = http://controller:5000/v3/ auth_type = password \u540c\u6b65\u6570\u636e\u5e93\u8868\u683c cyborg-dbsync --config-file /etc/cyborg/cyborg.conf upgrade \u542f\u52a8Cyborg\u670d\u52a1 systemctl enable openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent systemctl start openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent","title":"Cyborg"},{"location":"install/openEuler-24.03-LTS-SP2/OpenStack-antelope/#aodh","text":"Aodh\u53ef\u4ee5\u6839\u636e\u7531Ceilometer\u6216\u8005Gnocchi\u6536\u96c6\u7684\u76d1\u63a7\u6570\u636e\u521b\u5efa\u544a\u8b66\uff0c\u5e76\u8bbe\u7f6e\u89e6\u53d1\u89c4\u5219\u3002 Controller\u8282\u70b9 \u521b\u5efa\u6570\u636e\u5e93\u3002 CREATE DATABASE aodh; GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'localhost' IDENTIFIED BY 'AODH_DBPASS'; GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'%' IDENTIFIED BY 'AODH_DBPASS'; \u521b\u5efa\u670d\u52a1\u51ed\u8bc1\u4ee5\u53caAPI\u7aef\u70b9\u3002 \u521b\u5efa\u670d\u52a1\u51ed\u8bc1\u3002 openstack user create --domain default --password-prompt aodh openstack role add --project service --user aodh admin openstack service create --name aodh --description \"Telemetry\" alarming \u521b\u5efaAPI\u7aef\u70b9\u3002 openstack endpoint create --region RegionOne alarming public http://controller:8042 openstack endpoint create --region RegionOne alarming internal http://controller:8042 openstack endpoint create --region RegionOne alarming admin http://controller:8042 \u5b89\u88c5Aodh\u3002 dnf install openstack-aodh-api openstack-aodh-evaluator \\ openstack-aodh-notifier openstack-aodh-listener \\ openstack-aodh-expirer python3-aodhclient \u4fee\u6539\u914d\u7f6e\u6587\u4ef6\u3002 vim /etc/aodh/aodh.conf [database] connection = mysql+pymysql://aodh:AODH_DBPASS@controller/aodh [DEFAULT] transport_url = rabbit://openstack:RABBIT_PASS@controller auth_strategy = keystone [keystone_authtoken] www_authenticate_uri = http://controller:5000 auth_url = http://controller:5000 memcached_servers = controller:11211 auth_type = password project_domain_id = default user_domain_id = default project_name = service username = aodh password = AODH_PASS [service_credentials] auth_type = password auth_url = http://controller:5000/v3 project_domain_id = default user_domain_id = default project_name = service username = aodh password = AODH_PASS interface = internalURL region_name = RegionOne \u540c\u6b65\u6570\u636e\u5e93\u3002 aodh-dbsync \u5b8c\u6210\u5b89\u88c5\u3002 # \u914d\u7f6e\u670d\u52a1\u81ea\u542f systemctl enable openstack-aodh-api.service openstack-aodh-evaluator.service \\ openstack-aodh-notifier.service openstack-aodh-listener.service # \u542f\u52a8\u670d\u52a1 systemctl start openstack-aodh-api.service openstack-aodh-evaluator.service \\ 
openstack-aodh-notifier.service openstack-aodh-listener.service","title":"Aodh"},{"location":"install/openEuler-24.03-LTS-SP2/OpenStack-antelope/#gnocchi","text":"Gnocchi\u662f\u4e00\u4e2a\u5f00\u6e90\u7684\u65f6\u95f4\u5e8f\u5217\u6570\u636e\u5e93\uff0c\u53ef\u4ee5\u5bf9\u63a5Ceilometer\u3002 Controller\u8282\u70b9 \u521b\u5efa\u6570\u636e\u5e93\u3002 CREATE DATABASE gnocchi; GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'localhost' IDENTIFIED BY 'GNOCCHI_DBPASS'; GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'%' IDENTIFIED BY 'GNOCCHI_DBPASS'; \u521b\u5efa\u670d\u52a1\u51ed\u8bc1\u4ee5\u53caAPI\u7aef\u70b9\u3002 \u521b\u5efa\u670d\u52a1\u51ed\u8bc1\u3002 openstack user create --domain default --password-prompt gnocchi openstack role add --project service --user gnocchi admin openstack service create --name gnocchi --description \"Metric Service\" metric \u521b\u5efaAPI\u7aef\u70b9\u3002 openstack endpoint create --region RegionOne metric public http://controller:8041 openstack endpoint create --region RegionOne metric internal http://controller:8041 openstack endpoint create --region RegionOne metric admin http://controller:8041 \u5b89\u88c5Gnocchi\u3002 dnf install openstack-gnocchi-api openstack-gnocchi-metricd python3-gnocchiclient \u4fee\u6539\u914d\u7f6e\u6587\u4ef6\u3002 vim /etc/gnocchi/gnocchi.conf [api] auth_mode = keystone port = 8041 uwsgi_mode = http-socket [keystone_authtoken] auth_type = password auth_url = http://controller:5000/v3 project_domain_name = Default user_domain_name = Default project_name = service username = gnocchi password = GNOCCHI_PASS interface = internalURL region_name = RegionOne [indexer] url = mysql+pymysql://gnocchi:GNOCCHI_DBPASS@controller/gnocchi [storage] # coordination_url is not required but specifying one will improve # performance with better workload division across workers. 
# coordination_url = redis://controller:6379 file_basepath = /var/lib/gnocchi driver = file \u540c\u6b65\u6570\u636e\u5e93\u3002 gnocchi-upgrade \u5b8c\u6210\u5b89\u88c5\u3002 # \u914d\u7f6e\u670d\u52a1\u81ea\u542f systemctl enable openstack-gnocchi-api.service openstack-gnocchi-metricd.service # \u542f\u52a8\u670d\u52a1 systemctl start openstack-gnocchi-api.service openstack-gnocchi-metricd.service","title":"Gnocchi"},{"location":"install/openEuler-24.03-LTS-SP2/OpenStack-antelope/#ceilometer","text":"Ceilometer\u662fOpenStack\u4e2d\u8d1f\u8d23\u6570\u636e\u6536\u96c6\u7684\u670d\u52a1\u3002 Controller\u8282\u70b9 \u521b\u5efa\u670d\u52a1\u51ed\u8bc1\u3002 openstack user create --domain default --password-prompt ceilometer openstack role add --project service --user ceilometer admin openstack service create --name ceilometer --description \"Telemetry\" metering \u5b89\u88c5Ceilometer\u8f6f\u4ef6\u5305\u3002 dnf install openstack-ceilometer-notification openstack-ceilometer-central \u7f16\u8f91\u914d\u7f6e\u6587\u4ef6/etc/ceilometer/pipeline.yaml\u3002 publishers: # set address of Gnocchi # + filter out Gnocchi-related activity meters (Swift driver) # + set default archive policy - gnocchi://?filter_project=service&archive_policy=low \u7f16\u8f91\u914d\u7f6e\u6587\u4ef6/etc/ceilometer/ceilometer.conf\u3002 [DEFAULT] transport_url = rabbit://openstack:RABBIT_PASS@controller [service_credentials] auth_type = password auth_url = http://controller:5000/v3 project_domain_id = default user_domain_id = default project_name = service username = ceilometer password = CEILOMETER_PASS interface = internalURL region_name = RegionOne \u6570\u636e\u5e93\u540c\u6b65\u3002 ceilometer-upgrade \u5b8c\u6210\u63a7\u5236\u8282\u70b9Ceilometer\u5b89\u88c5\u3002 # \u914d\u7f6e\u670d\u52a1\u81ea\u542f systemctl enable openstack-ceilometer-notification.service openstack-ceilometer-central.service # \u542f\u52a8\u670d\u52a1 systemctl start openstack-ceilometer-notification.service openstack-ceilometer-central.service Compute\u8282\u70b9 \u5b89\u88c5Ceilometer\u8f6f\u4ef6\u5305\u3002 dnf install openstack-ceilometer-compute dnf install openstack-ceilometer-ipmi # \u53ef\u9009 \u7f16\u8f91\u914d\u7f6e\u6587\u4ef6/etc/ceilometer/ceilometer.conf\u3002 [DEFAULT] transport_url = rabbit://openstack:RABBIT_PASS@controller [service_credentials] auth_url = http://controller:5000 project_domain_id = default user_domain_id = default auth_type = password username = ceilometer project_name = service password = CEILOMETER_PASS interface = internalURL region_name = RegionOne \u7f16\u8f91\u914d\u7f6e\u6587\u4ef6/etc/nova/nova.conf\u3002 [DEFAULT] instance_usage_audit = True instance_usage_audit_period = hour [notifications] notify_on_state_change = vm_and_task_state [oslo_messaging_notifications] driver = messagingv2 \u5b8c\u6210\u5b89\u88c5\u3002 systemctl enable openstack-ceilometer-compute.service systemctl start openstack-ceilometer-compute.service systemctl enable openstack-ceilometer-ipmi.service # \u53ef\u9009 systemctl start openstack-ceilometer-ipmi.service # \u53ef\u9009 # \u91cd\u542fnova-compute\u670d\u52a1 systemctl restart openstack-nova-compute.service","title":"Ceilometer"},{"location":"install/openEuler-24.03-LTS-SP2/OpenStack-antelope/#heat","text":"Heat\u662f OpenStack \u81ea\u52a8\u7f16\u6392\u670d\u52a1\uff0c\u57fa\u4e8e\u63cf\u8ff0\u6027\u7684\u6a21\u677f\u6765\u7f16\u6392\u590d\u5408\u4e91\u5e94\u7528\uff0c\u4e5f\u79f0\u4e3a Orchestration Service \u3002Heat 
\u7684\u5404\u670d\u52a1\u4e00\u822c\u5b89\u88c5\u5728 Controller \u8282\u70b9\u4e0a\u3002 Controller\u8282\u70b9 \u521b\u5efa heat \u6570\u636e\u5e93\uff0c\u5e76\u6388\u4e88 heat \u6570\u636e\u5e93\u6b63\u786e\u7684\u8bbf\u95ee\u6743\u9650\uff0c\u66ff\u6362 HEAT_DBPASS \u4e3a\u5408\u9002\u7684\u5bc6\u7801 mysql -u root -p MariaDB [(none)]> CREATE DATABASE heat; MariaDB [(none)]> GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'localhost' IDENTIFIED BY 'HEAT_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'%' IDENTIFIED BY 'HEAT_DBPASS'; MariaDB [(none)]> exit; \u521b\u5efa\u670d\u52a1\u51ed\u8bc1\uff0c\u521b\u5efa heat \u7528\u6237\uff0c\u5e76\u4e3a\u5176\u589e\u52a0 admin \u89d2\u8272 source ~/.admin-openrc openstack user create --domain default --password-prompt heat openstack role add --project service --user heat admin \u521b\u5efa heat \u548c heat-cfn \u670d\u52a1\u53ca\u5176\u5bf9\u5e94\u7684API\u7aef\u70b9 openstack service create --name heat --description \"Orchestration\" orchestration openstack service create --name heat-cfn --description \"Orchestration\" cloudformation openstack endpoint create --region RegionOne orchestration public http://controller:8004/v1/%\\(tenant_id\\)s openstack endpoint create --region RegionOne orchestration internal http://controller:8004/v1/%\\(tenant_id\\)s openstack endpoint create --region RegionOne orchestration admin http://controller:8004/v1/%\\(tenant_id\\)s openstack endpoint create --region RegionOne cloudformation public http://controller:8000/v1 openstack endpoint create --region RegionOne cloudformation internal http://controller:8000/v1 openstack endpoint create --region RegionOne cloudformation admin http://controller:8000/v1 \u521b\u5efastack\u7ba1\u7406\u7684\u989d\u5916\u4fe1\u606f \u521b\u5efa heat domain openstack domain create --description \"Stack projects and users\" heat \u5728 heat domain\u4e0b\u521b\u5efa heat_domain_admin \u7528\u6237\uff0c\u5e76\u8bb0\u4e0b\u8f93\u5165\u7684\u5bc6\u7801\uff0c\u7528\u4e8e\u914d\u7f6e\u4e0b\u9762\u7684 HEAT_DOMAIN_PASS openstack user create --domain heat --password-prompt heat_domain_admin \u4e3a heat_domain_admin \u7528\u6237\u589e\u52a0 admin \u89d2\u8272 openstack role add --domain heat --user-domain heat --user heat_domain_admin admin \u521b\u5efa heat_stack_owner \u89d2\u8272 openstack role create heat_stack_owner \u521b\u5efa heat_stack_user \u89d2\u8272 openstack role create heat_stack_user \u5b89\u88c5\u8f6f\u4ef6\u5305 dnf install openstack-heat-api openstack-heat-api-cfn openstack-heat-engine \u4fee\u6539\u914d\u7f6e\u6587\u4ef6 /etc/heat/heat.conf [DEFAULT] transport_url = rabbit://openstack:RABBIT_PASS@controller heat_metadata_server_url = http://controller:8000 heat_waitcondition_server_url = http://controller:8000/v1/waitcondition stack_domain_admin = heat_domain_admin stack_domain_admin_password = HEAT_DOMAIN_PASS stack_user_domain_name = heat [database] connection = mysql+pymysql://heat:HEAT_DBPASS@controller/heat [keystone_authtoken] www_authenticate_uri = http://controller:5000 auth_url = http://controller:5000 memcached_servers = controller:11211 auth_type = password project_domain_name = default user_domain_name = default project_name = service username = heat password = HEAT_PASS [trustee] auth_type = password auth_url = http://controller:5000 username = heat password = HEAT_PASS user_domain_name = default [clients_keystone] auth_uri = http://controller:5000 \u521d\u59cb\u5316 heat \u6570\u636e\u5e93\u8868 su -s /bin/sh -c \"heat-manage db_sync\" heat 
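Before moving on to starting the services, it can be worth a quick check that `heat-manage db_sync` actually created the schema. The sketch below assumes the `heat` database, user, and `HEAT_DBPASS` from the steps above; it is only a sanity check, not part of the official procedure.

```shell
# list a few of the tables that "heat-manage db_sync" should have created,
# using the heat DB user and HEAT_DBPASS defined earlier in this section
mysql -u heat -pHEAT_DBPASS -h controller -e 'SHOW TABLES;' heat | head
```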
\u542f\u52a8\u670d\u52a1 systemctl enable openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service systemctl start openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service","title":"Heat"},{"location":"install/openEuler-24.03-LTS-SP2/OpenStack-antelope/#tempest","text":"Tempest\u662fOpenStack\u7684\u96c6\u6210\u6d4b\u8bd5\u670d\u52a1\uff0c\u5982\u679c\u7528\u6237\u9700\u8981\u5168\u9762\u81ea\u52a8\u5316\u6d4b\u8bd5\u5df2\u5b89\u88c5\u7684OpenStack\u73af\u5883\u7684\u529f\u80fd,\u5219\u63a8\u8350\u4f7f\u7528\u8be5\u7ec4\u4ef6\u3002\u5426\u5219\uff0c\u53ef\u4ee5\u4e0d\u7528\u5b89\u88c5\u3002 Controller\u8282\u70b9 \uff1a \u5b89\u88c5Tempest dnf install openstack-tempest \u521d\u59cb\u5316\u76ee\u5f55 tempest init mytest \u4fee\u6539\u914d\u7f6e\u6587\u4ef6\u3002 cd mytest vi etc/tempest.conf tempest.conf\u4e2d\u9700\u8981\u914d\u7f6e\u5f53\u524dOpenStack\u73af\u5883\u7684\u4fe1\u606f\uff0c\u5177\u4f53\u5185\u5bb9\u53ef\u4ee5\u53c2\u8003 \u5b98\u65b9\u793a\u4f8b \u6267\u884c\u6d4b\u8bd5 tempest run \u5b89\u88c5tempest\u6269\u5c55\uff08\u53ef\u9009\uff09 OpenStack\u5404\u4e2a\u670d\u52a1\u672c\u8eab\u4e5f\u63d0\u4f9b\u4e86\u4e00\u4e9btempest\u6d4b\u8bd5\u5305\uff0c\u7528\u6237\u53ef\u4ee5\u5b89\u88c5\u8fd9\u4e9b\u5305\u6765\u4e30\u5bcctempest\u7684\u6d4b\u8bd5\u5185\u5bb9\u3002\u5728Antelope\u4e2d\uff0c\u6211\u4eec\u63d0\u4f9b\u4e86Cinder\u3001Glance\u3001Keystone\u3001Ironic\u3001Trove\u7684\u6269\u5c55\u6d4b\u8bd5\uff0c\u7528\u6237\u53ef\u4ee5\u6267\u884c\u5982\u4e0b\u547d\u4ee4\u8fdb\u884c\u5b89\u88c5\u4f7f\u7528\uff1a dnf install python3-cinder-tempest-plugin python3-glance-tempest-plugin python3-ironic-tempest-plugin python3-keystone-tempest-plugin python3-trove-tempest-plugin","title":"Tempest"},{"location":"install/openEuler-24.03-LTS-SP2/OpenStack-antelope/#openstack-sigoos","text":"oos (openEuler OpenStack SIG)\u662fOpenStack SIG\u63d0\u4f9b\u7684\u547d\u4ee4\u884c\u5de5\u5177\u3002\u5176\u4e2d oos env \u7cfb\u5217\u547d\u4ee4\u63d0\u4f9b\u4e86\u4e00\u952e\u90e8\u7f72OpenStack \uff08 all in one \u6216\u4e09\u8282\u70b9 cluster \uff09\u7684ansible\u811a\u672c\uff0c\u7528\u6237\u53ef\u4ee5\u4f7f\u7528\u8be5\u811a\u672c\u5feb\u901f\u90e8\u7f72\u4e00\u5957\u57fa\u4e8e openEuler RPM \u7684 OpenStack \u73af\u5883\u3002 oos \u5de5\u5177\u652f\u6301\u5bf9\u63a5\u4e91provider\uff08\u76ee\u524d\u4ec5\u652f\u6301\u534e\u4e3a\u4e91provider\uff09\u548c\u4e3b\u673a\u7eb3\u7ba1\u4e24\u79cd\u65b9\u5f0f\u6765\u90e8\u7f72 OpenStack \u73af\u5883\uff0c\u4e0b\u9762\u4ee5\u5bf9\u63a5\u534e\u4e3a\u4e91\u90e8\u7f72\u4e00\u5957 all in one \u7684OpenStack\u73af\u5883\u4e3a\u4f8b\u8bf4\u660e oos \u5de5\u5177\u7684\u4f7f\u7528\u65b9\u6cd5\u3002 \u5b89\u88c5 oos \u5de5\u5177 yum install openstack-sig-tool \u914d\u7f6e\u5bf9\u63a5\u534e\u4e3a\u4e91provider\u7684\u4fe1\u606f \u6253\u5f00 /usr/local/etc/oos/oos.conf \u6587\u4ef6\uff0c\u4fee\u6539\u914d\u7f6e\u4e3a\u60a8\u62e5\u6709\u7684\u534e\u4e3a\u4e91\u8d44\u6e90\u4fe1\u606f\uff0cAK/SK\u662f\u7528\u6237\u7684\u534e\u4e3a\u4e91\u767b\u5f55\u5bc6\u94a5\uff0c\u5176\u4ed6\u914d\u7f6e\u4fdd\u6301\u9ed8\u8ba4\u5373\u53ef\uff08\u9ed8\u8ba4\u4f7f\u7528\u65b0\u52a0\u5761region\uff09\uff0c\u9700\u8981\u63d0\u524d\u5728\u4e91\u4e0a\u521b\u5efa\u5bf9\u5e94\u7684\u8d44\u6e90\uff0c\u5305\u62ec\uff1a \u4e00\u4e2a\u5b89\u5168\u7ec4\uff0c\u540d\u5b57\u9ed8\u8ba4\u662f oos \u4e00\u4e2aopenEuler\u955c\u50cf\uff0c\u540d\u79f0\u683c\u5f0f\u662fopenEuler-%(release)s-%(arch)s\uff0c\u4f8b\u5982 openEuler-24.03-SP2-arm64 
\u4e00\u4e2aVPC\uff0c\u540d\u79f0\u662f oos_vpc \u8be5VPC\u4e0b\u9762\u4e24\u4e2a\u5b50\u7f51\uff0c\u540d\u79f0\u662f oos_subnet1 \u3001 oos_subnet2 [huaweicloud] ak = sk = region = ap-southeast-3 root_volume_size = 100 data_volume_size = 100 security_group_name = oos image_format = openEuler-%%(release)s-%%(arch)s vpc_name = oos_vpc subnet1_name = oos_subnet1 subnet2_name = oos_subnet2 \u914d\u7f6e OpenStack \u73af\u5883\u4fe1\u606f \u6253\u5f00 /usr/local/etc/oos/oos.conf \u6587\u4ef6\uff0c\u6839\u636e\u5f53\u524d\u673a\u5668\u73af\u5883\u548c\u9700\u6c42\u4fee\u6539\u914d\u7f6e\u3002\u5185\u5bb9\u5982\u4e0b\uff1a [environment] mysql_root_password = root mysql_project_password = root rabbitmq_password = root project_identity_password = root enabled_service = keystone,neutron,cinder,placement,nova,glance,horizon,aodh,ceilometer,cyborg,gnocchi,kolla,heat,swift,trove,tempest neutron_provider_interface_name = br-ex default_ext_subnet_range = 10.100.100.0/24 default_ext_subnet_gateway = 10.100.100.1 neutron_dataplane_interface_name = eth1 cinder_block_device = vdb swift_storage_devices = vdc swift_hash_path_suffix = ash swift_hash_path_prefix = has glance_api_workers = 2 cinder_api_workers = 2 nova_api_workers = 2 nova_metadata_api_workers = 2 nova_conductor_workers = 2 nova_scheduler_workers = 2 neutron_api_workers = 2 horizon_allowed_host = * kolla_openeuler_plugin = false \u5173\u952e\u914d\u7f6e \u914d\u7f6e\u9879 \u89e3\u91ca enabled_service \u5b89\u88c5\u670d\u52a1\u5217\u8868\uff0c\u6839\u636e\u7528\u6237\u9700\u6c42\u81ea\u884c\u5220\u51cf neutron_provider_interface_name neutron L3\u7f51\u6865\u540d\u79f0 default_ext_subnet_range neutron\u79c1\u7f51IP\u6bb5 default_ext_subnet_gateway neutron\u79c1\u7f51gateway neutron_dataplane_interface_name neutron\u4f7f\u7528\u7684\u7f51\u5361\uff0c\u63a8\u8350\u4f7f\u7528\u4e00\u5f20\u65b0\u7684\u7f51\u5361\uff0c\u4ee5\u514d\u548c\u73b0\u6709\u7f51\u5361\u51b2\u7a81\uff0c\u9632\u6b62all in one\u4e3b\u673a\u65ad\u8fde\u7684\u60c5\u51b5 cinder_block_device cinder\u4f7f\u7528\u7684\u5377\u8bbe\u5907\u540d swift_storage_devices swift\u4f7f\u7528\u7684\u5377\u8bbe\u5907\u540d kolla_openeuler_plugin \u662f\u5426\u542f\u7528kolla plugin\u3002\u8bbe\u7f6e\u4e3aTrue\uff0ckolla\u5c06\u652f\u6301\u90e8\u7f72openEuler\u5bb9\u5668(\u53ea\u5728openEuler LTS\u4e0a\u652f\u6301) \u534e\u4e3a\u4e91\u4e0a\u9762\u521b\u5efa\u4e00\u53f0|openEuler 24.03 LTS SP2\u7684x86_64\u865a\u62df\u673a\uff0c\u7528\u4e8e\u90e8\u7f72 all in one \u7684 OpenStack # sshpass\u5728`oos env create`\u8fc7\u7a0b\u4e2d\u88ab\u4f7f\u7528\uff0c\u7528\u4e8e\u914d\u7f6e\u5bf9\u76ee\u6807\u865a\u62df\u673a\u7684\u514d\u5bc6\u8bbf\u95ee dnf install sshpass oos env create -r 24.03-lts-SP2 -f small -a x86 -n test-oos all_in_one \u5177\u4f53\u7684\u53c2\u6570\u53ef\u4ee5\u4f7f\u7528 oos env create --help \u547d\u4ee4\u67e5\u770b \u90e8\u7f72OpenStack all in one \u73af\u5883 oos env setup test-oos -r antelope \u5177\u4f53\u7684\u53c2\u6570\u53ef\u4ee5\u4f7f\u7528 oos env setup --help \u547d\u4ee4\u67e5\u770b \u521d\u59cb\u5316tempest\u73af\u5883 \u5982\u679c\u7528\u6237\u60f3\u4f7f\u7528\u8be5\u73af\u5883\u8fd0\u884ctempest\u6d4b\u8bd5\u7684\u8bdd\uff0c\u53ef\u4ee5\u6267\u884c\u547d\u4ee4 oos env init \uff0c\u4f1a\u81ea\u52a8\u628atempest\u9700\u8981\u7684OpenStack\u8d44\u6e90\u81ea\u52a8\u521b\u5efa\u597d oos env init test-oos \u6267\u884ctempest\u6d4b\u8bd5 \u7528\u6237\u53ef\u4ee5\u4f7f\u7528oos\u81ea\u52a8\u6267\u884c\uff1a oos env test test-oos 
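Taken together, the Huawei Cloud based flow described above boils down to the following command sequence. This is only a recap sketch using the example values from this section (environment name `test-oos`, release `24.03-lts-SP2`, x86 architecture, OpenStack Antelope).

```shell
# recap of the oos workflow above; adjust names and flags to your environment
yum install openstack-sig-tool              # install the oos tool
vi /usr/local/etc/oos/oos.conf              # fill in the [huaweicloud] and [environment] sections
dnf install sshpass                         # used by oos for password-free access to the target
oos env create -r 24.03-lts-SP2 -f small -a x86 -n test-oos all_in_one
oos env setup test-oos -r antelope          # deploy the all-in-one OpenStack environment
oos env init test-oos                       # prepare tempest resources (optional)
oos env test test-oos                       # run the tempest tests via oos
```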
Alternatively, you can log in to the target node manually, change into the `mytest` directory under the root directory, and run `tempest run` by hand.

If the OpenStack environment is deployed by managing existing hosts instead, the overall flow is the same as the Huawei Cloud case above: steps 1, 3, 5, and 6 are unchanged, step 2 (configuring the Huawei Cloud provider information) is skipped, and step 4 is replaced by the host-management operation.

The managed machine must provide:

- at least one NIC reserved for oos, named consistently with the configuration; the related option is `neutron_dataplane_interface_name`
- at least one disk reserved for oos, named consistently with the configuration; the related option is `cinder_block_device`
- if the Swift service is to be deployed, one additional disk, named consistently with the configuration; the related option is `swift_storage_devices`

```shell
# sshpass is used by oos to set up password-free access to the target host
dnf install sshpass
oos env manage -r 24.03-lts-SP2 -i TARGET_MACHINE_IP -p TARGET_MACHINE_PASSWD -n test-oos
```

Replace `TARGET_MACHINE_IP` with the target machine's IP address and `TARGET_MACHINE_PASSWD` with its password. Run `oos env manage --help` for the full list of parameters.

# OpenStack-Wallaby Deployment Guide

Contents:

- OpenStack Overview
- Conventions
- Preparing the Environment
    - Environment Configuration
    - Installing the SQL Database
    - Installing RabbitMQ
    - Installing Memcached
- Installing OpenStack
    - Keystone
    - Glance
    - Placement
    - Nova
    - Neutron
    - Cinder
    - Horizon
    - Tempest
    - Ironic
    - Kolla
    - Trove
    - Swift
    - Cyborg
    - Aodh
    - Gnocchi
    - Ceilometer
    - Heat
- Quick deployment with the OpenStack SIG tool oos

## OpenStack Overview

OpenStack is both a community and a project. It provides an operating platform and toolset for deploying clouds, giving organizations scalable and flexible cloud computing.

As an open source cloud management platform, OpenStack combines several major components, including Nova, Cinder, Neutron, Glance, Keystone, and Horizon, to get its work done. OpenStack supports almost every type of cloud environment; the project's goal is a cloud management platform that is simple to deploy, massively scalable, feature-rich, and standardized. OpenStack delivers an Infrastructure-as-a-Service (IaaS) solution through a set of complementary services, each of which exposes an API for integration.

The official openEuler 24.03-LTS-SP2 repositories already provide the OpenStack Wallaby release; after configuring the yum repositories, users can deploy OpenStack by following this document.

## Conventions

OpenStack supports several deployment topologies. This document covers both the ALL in One and the Distributed modes, with the following conventions:

- ALL in One mode: ignore all suffixes.
- Distributed mode:
    - a `(CTL)` suffix means the configuration or command applies only to the `controller node`;
    - a `(CPT)` suffix means it applies only to the `compute node`;
    - a `(STG)` suffix means it applies only to the `storage node`;
    - anything else means it applies to both the `controller node` and the `compute node`.

Note: the services affected by these conventions are Cinder, Nova, and Neutron.

## Preparing the Environment

### Environment Configuration

Configure the official 24.03 LTS SP2 yum repositories; the EPOL repository must be enabled to provide the OpenStack packages.

```shell
yum update
yum install openstack-release-wallaby
yum clean all && yum makecache
```

Note: if the yum configuration in your environment does not enable EPOL, configure EPOL as well and make sure it is present, as shown below.

```shell
vi /etc/yum.repos.d/openEuler.repo

[EPOL]
name=EPOL
baseurl=http://repo.openeuler.org/openEuler-24.03-LTS-SP2/EPOL/main/$basearch/
enabled=1
gpgcheck=1
gpgkey=http://repo.openeuler.org/openEuler-24.03-LTS-SP2/OS/$basearch/RPM-GPG-KEY-openEuler
```

Configure host names and name resolution.

Set the host name on each node:

```shell
hostnamectl set-hostname controller    (CTL)
hostnamectl set-hostname compute       (CPT)
```

Assuming the controller node's IP is `10.0.0.11` and the compute node's IP (if it exists) is `10.0.0.12`, add the following to `/etc/hosts`:

```text
10.0.0.11 controller
10.0.0.12 compute
```

### Installing the SQL Database

Run the following command to install the packages:

```shell
yum install mariadb mariadb-server python3-PyMySQL
```

Run the following command to create and edit the `/etc/my.cnf.d/openstack.cnf` file:

```shell
vim /etc/my.cnf.d/openstack.cnf

[mysqld]
bind-address = 10.0.0.11
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
```

Note: set `bind-address` to the management IP address of the controller node.

Start the database service and enable it to start at boot:

```shell
systemctl enable mariadb.service
systemctl start mariadb.service
```
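Before creating the individual service databases in the next steps, it can help to confirm that MariaDB is actually up and listening on the management address configured above. This is a minimal sanity-check sketch assuming the example controller IP `10.0.0.11`.

```shell
# MariaDB should be listening on the controller management IP (10.0.0.11 in this guide)
ss -tnlp | grep 3306        # expect mariadbd/mysqld bound to 10.0.0.11:3306
mysqladmin -u root -p ping  # local check over the socket; expects "mysqld is alive"
```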
\u914d\u7f6eDataBase\u7684\u9ed8\u8ba4\u5bc6\u7801\uff08\u53ef\u9009\uff09 mysql_secure_installation \u6ce8\u610f \u6839\u636e\u63d0\u793a\u8fdb\u884c\u5373\u53ef \u5b89\u88c5 RabbitMQ \u00b6 \u6267\u884c\u5982\u4e0b\u547d\u4ee4\uff0c\u5b89\u88c5\u8f6f\u4ef6\u5305\u3002 yum install rabbitmq-server \u542f\u52a8 RabbitMQ \u670d\u52a1\uff0c\u5e76\u4e3a\u5176\u914d\u7f6e\u5f00\u673a\u81ea\u542f\u52a8\u3002 systemctl enable rabbitmq-server.service systemctl start rabbitmq-server.service \u6dfb\u52a0 OpenStack\u7528\u6237\u3002 rabbitmqctl add_user openstack RABBIT_PASS \u6ce8\u610f \u66ff\u6362 RABBIT_PASS \uff0c\u4e3a OpenStack \u7528\u6237\u8bbe\u7f6e\u5bc6\u7801 \u8bbe\u7f6eopenstack\u7528\u6237\u6743\u9650\uff0c\u5141\u8bb8\u8fdb\u884c\u914d\u7f6e\u3001\u5199\u3001\u8bfb\uff1a rabbitmqctl set_permissions openstack \".*\" \".*\" \".*\" \u5b89\u88c5 Memcached \u00b6 \u6267\u884c\u5982\u4e0b\u547d\u4ee4\uff0c\u5b89\u88c5\u4f9d\u8d56\u8f6f\u4ef6\u5305\u3002 yum install memcached python3-memcached \u7f16\u8f91 /etc/sysconfig/memcached \u6587\u4ef6\u3002 vim /etc/sysconfig/memcached OPTIONS=\"-l 127.0.0.1,::1,controller\" \u6267\u884c\u5982\u4e0b\u547d\u4ee4\uff0c\u542f\u52a8 Memcached \u670d\u52a1\uff0c\u5e76\u4e3a\u5176\u914d\u7f6e\u5f00\u673a\u542f\u52a8\u3002 systemctl enable memcached.service systemctl start memcached.service \u6ce8\u610f \u670d\u52a1\u542f\u52a8\u540e\uff0c\u53ef\u4ee5\u901a\u8fc7\u547d\u4ee4 memcached-tool controller stats \u786e\u4fdd\u542f\u52a8\u6b63\u5e38\uff0c\u670d\u52a1\u53ef\u7528\uff0c\u5176\u4e2d\u53ef\u4ee5\u5c06 controller \u66ff\u6362\u4e3a\u63a7\u5236\u8282\u70b9\u7684\u7ba1\u7406IP\u5730\u5740\u3002 \u5b89\u88c5 OpenStack \u00b6 Keystone \u5b89\u88c5 \u00b6 \u521b\u5efa keystone \u6570\u636e\u5e93\u5e76\u6388\u6743\u3002 mysql -u root -p MariaDB [(none)]> CREATE DATABASE keystone; MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \\ IDENTIFIED BY 'KEYSTONE_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \\ IDENTIFIED BY 'KEYSTONE_DBPASS'; MariaDB [(none)]> exit \u6ce8\u610f \u66ff\u6362 KEYSTONE_DBPASS \uff0c\u4e3a Keystone \u6570\u636e\u5e93\u8bbe\u7f6e\u5bc6\u7801 \u5b89\u88c5\u8f6f\u4ef6\u5305\u3002 yum install openstack-keystone httpd mod_wsgi \u914d\u7f6ekeystone\u76f8\u5173\u914d\u7f6e vim /etc/keystone/keystone.conf [database] connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone [token] provider = fernet \u89e3\u91ca [database]\u90e8\u5206\uff0c\u914d\u7f6e\u6570\u636e\u5e93\u5165\u53e3 [token]\u90e8\u5206\uff0c\u914d\u7f6etoken provider \u6ce8\u610f\uff1a \u66ff\u6362 KEYSTONE_DBPASS \u4e3a Keystone \u6570\u636e\u5e93\u7684\u5bc6\u7801 \u540c\u6b65\u6570\u636e\u5e93\u3002 su -s /bin/sh -c \"keystone-manage db_sync\" keystone \u521d\u59cb\u5316Fernet\u5bc6\u94a5\u4ed3\u5e93\u3002 keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone keystone-manage credential_setup --keystone-user keystone --keystone-group keystone \u542f\u52a8\u670d\u52a1\u3002 keystone-manage bootstrap --bootstrap-password ADMIN_PASS \\ --bootstrap-admin-url http://controller:5000/v3/ \\ --bootstrap-internal-url http://controller:5000/v3/ \\ --bootstrap-public-url http://controller:5000/v3/ \\ --bootstrap-region-id RegionOne \u6ce8\u610f \u66ff\u6362 ADMIN_PASS \uff0c\u4e3a admin \u7528\u6237\u8bbe\u7f6e\u5bc6\u7801 \u914d\u7f6eApache HTTP server vim /etc/httpd/conf/httpd.conf ServerName controller ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/ \u89e3\u91ca 
\u914d\u7f6e ServerName \u9879\u5f15\u7528\u63a7\u5236\u8282\u70b9 \u6ce8\u610f \u5982\u679c ServerName \u9879\u4e0d\u5b58\u5728\u5219\u9700\u8981\u521b\u5efa \u542f\u52a8Apache HTTP\u670d\u52a1\u3002 systemctl enable httpd.service systemctl start httpd.service \u521b\u5efa\u73af\u5883\u53d8\u91cf\u914d\u7f6e\u3002 cat << EOF >> ~/.admin-openrc export OS_PROJECT_DOMAIN_NAME=Default export OS_USER_DOMAIN_NAME=Default export OS_PROJECT_NAME=admin export OS_USERNAME=admin export OS_PASSWORD=ADMIN_PASS export OS_AUTH_URL=http://controller:5000/v3 export OS_IDENTITY_API_VERSION=3 export OS_IMAGE_API_VERSION=2 EOF \u6ce8\u610f \u66ff\u6362 ADMIN_PASS \u4e3a admin \u7528\u6237\u7684\u5bc6\u7801 \u4f9d\u6b21\u521b\u5efadomain, projects, users, roles\uff0c\u9700\u8981\u5148\u5b89\u88c5\u597dpython3-openstackclient\uff1a yum install python3-openstackclient \u5bfc\u5165\u73af\u5883\u53d8\u91cf source ~/.admin-openrc \u521b\u5efaproject service \uff0c\u5176\u4e2d domain default \u5728 keystone-manage bootstrap \u65f6\u5df2\u521b\u5efa openstack domain create --description \"An Example Domain\" example openstack project create --domain default --description \"Service Project\" service \u521b\u5efa\uff08non-admin\uff09project myproject \uff0cuser myuser \u548c role myrole \uff0c\u4e3a myproject \u548c myuser \u6dfb\u52a0\u89d2\u8272 myrole openstack project create --domain default --description \"Demo Project\" myproject openstack user create --domain default --password-prompt myuser openstack role create myrole openstack role add --project myproject --user myuser myrole \u9a8c\u8bc1 \u53d6\u6d88\u4e34\u65f6\u73af\u5883\u53d8\u91cfOS_AUTH_URL\u548cOS_PASSWORD\uff1a source ~/.admin-openrc unset OS_AUTH_URL OS_PASSWORD \u4e3aadmin\u7528\u6237\u8bf7\u6c42token\uff1a openstack --os-auth-url http://controller:5000/v3 \\ --os-project-domain-name Default --os-user-domain-name Default \\ --os-project-name admin --os-username admin token issue \u4e3amyuser\u7528\u6237\u8bf7\u6c42token\uff1a openstack --os-auth-url http://controller:5000/v3 \\ --os-project-domain-name Default --os-user-domain-name Default \\ --os-project-name myproject --os-username myuser token issue Glance \u5b89\u88c5 \u00b6 \u521b\u5efa\u6570\u636e\u5e93\u3001\u670d\u52a1\u51ed\u8bc1\u548c API \u7aef\u70b9 \u521b\u5efa\u6570\u636e\u5e93\uff1a mysql -u root -p MariaDB [(none)]> CREATE DATABASE glance; MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \\ IDENTIFIED BY 'GLANCE_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \\ IDENTIFIED BY 'GLANCE_DBPASS'; MariaDB [(none)]> exit \u6ce8\u610f: \u66ff\u6362 GLANCE_DBPASS \uff0c\u4e3a glance \u6570\u636e\u5e93\u8bbe\u7f6e\u5bc6\u7801 \u521b\u5efa\u670d\u52a1\u51ed\u8bc1 source ~/.admin-openrc openstack user create --domain default --password-prompt glance openstack role add --project service --user glance admin openstack service create --name glance --description \"OpenStack Image\" image \u521b\u5efa\u955c\u50cf\u670d\u52a1API\u7aef\u70b9\uff1a openstack endpoint create --region RegionOne image public http://controller:9292 openstack endpoint create --region RegionOne image internal http://controller:9292 openstack endpoint create --region RegionOne image admin http://controller:9292 \u5b89\u88c5\u8f6f\u4ef6\u5305 yum install openstack-glance \u914d\u7f6eglance\u76f8\u5173\u914d\u7f6e\uff1a vim /etc/glance/glance-api.conf [database] connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance [keystone_authtoken] www_authenticate_uri = 
http://controller:5000 auth_url = http://controller:5000 memcached_servers = controller:11211 auth_type = password project_domain_name = Default user_domain_name = Default project_name = service username = glance password = GLANCE_PASS [paste_deploy] flavor = keystone [glance_store] stores = file,http default_store = file filesystem_store_datadir = /var/lib/glance/images/ \u89e3\u91ca: [database]\u90e8\u5206\uff0c\u914d\u7f6e\u6570\u636e\u5e93\u5165\u53e3 [keystone_authtoken] [paste_deploy]\u90e8\u5206\uff0c\u914d\u7f6e\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u5165\u53e3 [glance_store]\u90e8\u5206\uff0c\u914d\u7f6e\u672c\u5730\u6587\u4ef6\u7cfb\u7edf\u5b58\u50a8\u548c\u955c\u50cf\u6587\u4ef6\u7684\u4f4d\u7f6e \u6ce8\u610f \u66ff\u6362 GLANCE_DBPASS \u4e3a glance \u6570\u636e\u5e93\u7684\u5bc6\u7801 \u66ff\u6362 GLANCE_PASS \u4e3a glance \u7528\u6237\u7684\u5bc6\u7801 \u540c\u6b65\u6570\u636e\u5e93\uff1a su -s /bin/sh -c \"glance-manage db_sync\" glance \u542f\u52a8\u670d\u52a1\uff1a systemctl enable openstack-glance-api.service systemctl start openstack-glance-api.service \u9a8c\u8bc1 \u4e0b\u8f7d\u955c\u50cf source ~/.admin-openrc wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img \u6ce8\u610f \u5982\u679c\u60a8\u4f7f\u7528\u7684\u73af\u5883\u662f\u9cb2\u9e4f\u67b6\u6784\uff0c\u8bf7\u4e0b\u8f7daarch64\u7248\u672c\u7684\u955c\u50cf\uff1b\u5df2\u5bf9\u955c\u50cfcirros-0.5.2-aarch64-disk.img\u8fdb\u884c\u6d4b\u8bd5\u3002 \u5411Image\u670d\u52a1\u4e0a\u4f20\u955c\u50cf\uff1a openstack image create --disk-format qcow2 --container-format bare \\ --file cirros-0.4.0-x86_64-disk.img --public cirros \u786e\u8ba4\u955c\u50cf\u4e0a\u4f20\u5e76\u9a8c\u8bc1\u5c5e\u6027\uff1a openstack image list Placement\u5b89\u88c5 \u00b6 \u521b\u5efa\u6570\u636e\u5e93\u3001\u670d\u52a1\u51ed\u8bc1\u548c API \u7aef\u70b9 \u521b\u5efa\u6570\u636e\u5e93\uff1a \u4f5c\u4e3a root \u7528\u6237\u8bbf\u95ee\u6570\u636e\u5e93\uff0c\u521b\u5efa placement \u6570\u636e\u5e93\u5e76\u6388\u6743\u3002 mysql -u root -p MariaDB [(none)]> CREATE DATABASE placement; MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' \\ IDENTIFIED BY 'PLACEMENT_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' \\ IDENTIFIED BY 'PLACEMENT_DBPASS'; MariaDB [(none)]> exit \u6ce8\u610f \u66ff\u6362 PLACEMENT_DBPASS \u4e3a placement \u6570\u636e\u5e93\u8bbe\u7f6e\u5bc6\u7801 source ~/.admin-openrc \u6267\u884c\u5982\u4e0b\u547d\u4ee4\uff0c\u521b\u5efa placement \u670d\u52a1\u51ed\u8bc1\u3001\u521b\u5efa placement \u7528\u6237\u4ee5\u53ca\u6dfb\u52a0\u2018admin\u2019\u89d2\u8272\u5230\u7528\u6237\u2018placement\u2019\u3002 \u521b\u5efaPlacement API\u670d\u52a1 openstack user create --domain default --password-prompt placement openstack role add --project service --user placement admin openstack service create --name placement --description \"Placement API\" placement \u521b\u5efaplacement\u670d\u52a1API\u7aef\u70b9\uff1a openstack endpoint create --region RegionOne placement public http://controller:8778 openstack endpoint create --region RegionOne placement internal http://controller:8778 openstack endpoint create --region RegionOne placement admin http://controller:8778 \u5b89\u88c5\u548c\u914d\u7f6e \u5b89\u88c5\u8f6f\u4ef6\u5305\uff1a yum install openstack-placement-api \u914d\u7f6eplacement\uff1a \u7f16\u8f91 /etc/placement/placement.conf \u6587\u4ef6\uff1a \u5728[placement_database]\u90e8\u5206\uff0c\u914d\u7f6e\u6570\u636e\u5e93\u5165\u53e3 \u5728[api] 
[keystone_authtoken]\u90e8\u5206\uff0c\u914d\u7f6e\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u5165\u53e3 # vim /etc/placement/placement.conf [placement_database] # ... connection = mysql+pymysql://placement:PLACEMENT_DBPASS@controller/placement [api] # ... auth_strategy = keystone [keystone_authtoken] # ... auth_url = http://controller:5000/v3 memcached_servers = controller:11211 auth_type = password project_domain_name = Default user_domain_name = Default project_name = service username = placement password = PLACEMENT_PASS \u5176\u4e2d\uff0c\u66ff\u6362 PLACEMENT_DBPASS \u4e3a placement \u6570\u636e\u5e93\u7684\u5bc6\u7801\uff0c\u66ff\u6362 PLACEMENT_PASS \u4e3a placement \u7528\u6237\u7684\u5bc6\u7801\u3002 \u540c\u6b65\u6570\u636e\u5e93\uff1a su -s /bin/sh -c \"placement-manage db sync\" placement \u542f\u52a8httpd\u670d\u52a1\uff1a systemctl restart httpd \u9a8c\u8bc1 \u6267\u884c\u5982\u4e0b\u547d\u4ee4\uff0c\u6267\u884c\u72b6\u6001\u68c0\u67e5\uff1a source ~/.admin-openrc placement-status upgrade check \u5b89\u88c5osc-placement\uff0c\u5217\u51fa\u53ef\u7528\u7684\u8d44\u6e90\u7c7b\u522b\u53ca\u7279\u6027\uff1a yum install python3-osc-placement openstack --os-placement-api-version 1.2 resource class list --sort-column name openstack --os-placement-api-version 1.6 trait list --sort-column name Nova \u5b89\u88c5 \u00b6 \u521b\u5efa\u6570\u636e\u5e93\u3001\u670d\u52a1\u51ed\u8bc1\u548c API \u7aef\u70b9 \u521b\u5efa\u6570\u636e\u5e93\uff1a mysql -u root -p (CTL) MariaDB [(none)]> CREATE DATABASE nova_api; MariaDB [(none)]> CREATE DATABASE nova; MariaDB [(none)]> CREATE DATABASE nova_cell0; MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \\ IDENTIFIED BY 'NOVA_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \\ IDENTIFIED BY 'NOVA_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \\ IDENTIFIED BY 'NOVA_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \\ IDENTIFIED BY 'NOVA_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \\ IDENTIFIED BY 'NOVA_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \\ IDENTIFIED BY 'NOVA_DBPASS'; MariaDB [(none)]> exit \u6ce8\u610f \u66ff\u6362NOVA_DBPASS\uff0c\u4e3anova\u6570\u636e\u5e93\u8bbe\u7f6e\u5bc6\u7801 source ~/.admin-openrc (CTL) \u521b\u5efanova\u670d\u52a1\u51ed\u8bc1: openstack user create --domain default --password-prompt nova (CTL) openstack role add --project service --user nova admin (CTL) openstack service create --name nova --description \"OpenStack Compute\" compute (CTL) \u521b\u5efanova API\u7aef\u70b9\uff1a openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1 (CTL) openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1 (CTL) openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1 (CTL) \u5b89\u88c5\u8f6f\u4ef6\u5305 yum install openstack-nova-api openstack-nova-conductor \\ (CTL) openstack-nova-novncproxy openstack-nova-scheduler yum install openstack-nova-compute (CPT) \u6ce8\u610f \u5982\u679c\u4e3aarm64\u7ed3\u6784\uff0c\u8fd8\u9700\u8981\u6267\u884c\u4ee5\u4e0b\u547d\u4ee4 yum install edk2-aarch64 (CPT) \u914d\u7f6enova\u76f8\u5173\u914d\u7f6e vim /etc/nova/nova.conf [DEFAULT] enabled_apis = osapi_compute,metadata transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/ my_ip = 10.0.0.1 use_neutron = true firewall_driver = nova.virt.firewall.NoopFirewallDriver 
compute_driver=libvirt.LibvirtDriver (CPT) instances_path = /var/lib/nova/instances/ (CPT) lock_path = /var/lib/nova/tmp (CPT) [api_database] connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api (CTL) [database] connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova (CTL) [api] auth_strategy = keystone [keystone_authtoken] www_authenticate_uri = http://controller:5000/ auth_url = http://controller:5000/ memcached_servers = controller:11211 auth_type = password project_domain_name = Default user_domain_name = Default project_name = service username = nova password = NOVA_PASS [vnc] enabled = true server_listen = $my_ip server_proxyclient_address = $my_ip novncproxy_base_url = http://controller:6080/vnc_auto.html (CPT) [libvirt] virt_type = qemu (CPT) cpu_mode = custom (CPT) cpu_model = cortex-a72 (CPT) [glance] api_servers = http://controller:9292 [oslo_concurrency] lock_path = /var/lib/nova/tmp (CTL) [placement] region_name = RegionOne project_domain_name = Default project_name = service auth_type = password user_domain_name = Default auth_url = http://controller:5000/v3 username = placement password = PLACEMENT_PASS [neutron] auth_url = http://controller:5000 auth_type = password project_domain_name = default user_domain_name = default region_name = RegionOne project_name = service username = neutron password = NEUTRON_PASS service_metadata_proxy = true (CTL) metadata_proxy_shared_secret = METADATA_SECRET (CTL) \u89e3\u91ca [default]\u90e8\u5206\uff0c\u542f\u7528\u8ba1\u7b97\u548c\u5143\u6570\u636e\u7684API\uff0c\u914d\u7f6eRabbitMQ\u6d88\u606f\u961f\u5217\u5165\u53e3\uff0c\u914d\u7f6emy_ip\uff0c\u542f\u7528\u7f51\u7edc\u670d\u52a1neutron\uff1b [api_database] [database]\u90e8\u5206\uff0c\u914d\u7f6e\u6570\u636e\u5e93\u5165\u53e3\uff1b [api] [keystone_authtoken]\u90e8\u5206\uff0c\u914d\u7f6e\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u5165\u53e3\uff1b [vnc]\u90e8\u5206\uff0c\u542f\u7528\u5e76\u914d\u7f6e\u8fdc\u7a0b\u63a7\u5236\u53f0\u5165\u53e3\uff1b [glance]\u90e8\u5206\uff0c\u914d\u7f6e\u955c\u50cf\u670d\u52a1API\u7684\u5730\u5740\uff1b [oslo_concurrency]\u90e8\u5206\uff0c\u914d\u7f6elock path\uff1b [placement]\u90e8\u5206\uff0c\u914d\u7f6eplacement\u670d\u52a1\u7684\u5165\u53e3\u3002 \u6ce8\u610f \u66ff\u6362 RABBIT_PASS \u4e3a RabbitMQ \u4e2d openstack \u8d26\u6237\u7684\u5bc6\u7801\uff1b \u914d\u7f6e my_ip \u4e3a\u63a7\u5236\u8282\u70b9\u7684\u7ba1\u7406IP\u5730\u5740\uff1b \u66ff\u6362 NOVA_DBPASS \u4e3anova\u6570\u636e\u5e93\u7684\u5bc6\u7801\uff1b \u66ff\u6362 NOVA_PASS \u4e3anova\u7528\u6237\u7684\u5bc6\u7801\uff1b \u66ff\u6362 PLACEMENT_PASS \u4e3aplacement\u7528\u6237\u7684\u5bc6\u7801\uff1b \u66ff\u6362 NEUTRON_PASS \u4e3aneutron\u7528\u6237\u7684\u5bc6\u7801\uff1b \u66ff\u6362 METADATA_SECRET \u4e3a\u5408\u9002\u7684\u5143\u6570\u636e\u4ee3\u7406secret\u3002 \u989d\u5916 \u786e\u5b9a\u662f\u5426\u652f\u6301\u865a\u62df\u673a\u786c\u4ef6\u52a0\u901f\uff08x86\u67b6\u6784\uff09\uff1a egrep -c '(vmx|svm)' /proc/cpuinfo (CPT) \u5982\u679c\u8fd4\u56de\u503c\u4e3a0\u5219\u4e0d\u652f\u6301\u786c\u4ef6\u52a0\u901f\uff0c\u9700\u8981\u914d\u7f6elibvirt\u4f7f\u7528QEMU\u800c\u4e0d\u662fKVM\uff1a vim /etc/nova/nova.conf (CPT) [libvirt] virt_type = qemu \u5982\u679c\u8fd4\u56de\u503c\u4e3a1\u6216\u66f4\u5927\u7684\u503c\uff0c\u5219\u652f\u6301\u786c\u4ef6\u52a0\u901f\uff0c\u4e0d\u9700\u8981\u8fdb\u884c\u989d\u5916\u7684\u914d\u7f6e \u6ce8\u610f \u5982\u679c\u4e3aarm64\u7ed3\u6784\uff0c\u8fd8\u9700\u8981\u6267\u884c\u4ee5\u4e0b\u547d\u4ee4 vim /etc/libvirt/qemu.conf nvram = 
[\"/usr/share/AAVMF/AAVMF_CODE.fd: \\ /usr/share/AAVMF/AAVMF_VARS.fd\", \\ \"/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw: \\ /usr/share/edk2/aarch64/vars-template-pflash.raw\"] vim /etc/qemu/firmware/edk2-aarch64.json { \"description\": \"UEFI firmware for ARM64 virtual machines\", \"interface-types\": [ \"uefi\" ], \"mapping\": { \"device\": \"flash\", \"executable\": { \"filename\": \"/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw\", \"format\": \"raw\" }, \"nvram-template\": { \"filename\": \"/usr/share/edk2/aarch64/vars-template-pflash.raw\", \"format\": \"raw\" } }, \"targets\": [ { \"architecture\": \"aarch64\", \"machines\": [ \"virt-*\" ] } ], \"features\": [ ], \"tags\": [ ] } (CPT) \u540c\u6b65\u6570\u636e\u5e93 \u540c\u6b65nova-api\u6570\u636e\u5e93\uff1a su -s /bin/sh -c \"nova-manage api_db sync\" nova (CTL) \u6ce8\u518ccell0\u6570\u636e\u5e93\uff1a su -s /bin/sh -c \"nova-manage cell_v2 map_cell0\" nova (CTL) \u521b\u5efacell1 cell\uff1a su -s /bin/sh -c \"nova-manage cell_v2 create_cell --name=cell1 --verbose\" nova (CTL) \u540c\u6b65nova\u6570\u636e\u5e93\uff1a su -s /bin/sh -c \"nova-manage db sync\" nova (CTL) \u9a8c\u8bc1cell0\u548ccell1\u6ce8\u518c\u6b63\u786e\uff1a su -s /bin/sh -c \"nova-manage cell_v2 list_cells\" nova (CTL) \u6dfb\u52a0\u8ba1\u7b97\u8282\u70b9\u5230openstack\u96c6\u7fa4 su -s /bin/sh -c \"nova-manage cell_v2 discover_hosts --verbose\" nova (CPT) \u542f\u52a8\u670d\u52a1 systemctl enable \\ (CTL) openstack-nova-api.service \\ openstack-nova-scheduler.service \\ openstack-nova-conductor.service \\ openstack-nova-novncproxy.service systemctl start \\ (CTL) openstack-nova-api.service \\ openstack-nova-scheduler.service \\ openstack-nova-conductor.service \\ openstack-nova-novncproxy.service systemctl enable libvirtd.service openstack-nova-compute.service (CPT) systemctl start libvirtd.service openstack-nova-compute.service (CPT) \u9a8c\u8bc1 source ~/.admin-openrc (CTL) \u5217\u51fa\u670d\u52a1\u7ec4\u4ef6\uff0c\u9a8c\u8bc1\u6bcf\u4e2a\u6d41\u7a0b\u90fd\u6210\u529f\u542f\u52a8\u548c\u6ce8\u518c\uff1a openstack compute service list (CTL) \u5217\u51fa\u8eab\u4efd\u670d\u52a1\u4e2d\u7684API\u7aef\u70b9\uff0c\u9a8c\u8bc1\u4e0e\u8eab\u4efd\u670d\u52a1\u7684\u8fde\u63a5\uff1a openstack catalog list (CTL) \u5217\u51fa\u955c\u50cf\u670d\u52a1\u4e2d\u7684\u955c\u50cf\uff0c\u9a8c\u8bc1\u4e0e\u955c\u50cf\u670d\u52a1\u7684\u8fde\u63a5\uff1a openstack image list (CTL) \u68c0\u67e5cells\u662f\u5426\u8fd0\u4f5c\u6210\u529f\uff0c\u4ee5\u53ca\u5176\u4ed6\u5fc5\u8981\u6761\u4ef6\u662f\u5426\u5df2\u5177\u5907\u3002 nova-status upgrade check (CTL) Neutron \u5b89\u88c5 \u00b6 \u521b\u5efa\u6570\u636e\u5e93\u3001\u670d\u52a1\u51ed\u8bc1\u548c API \u7aef\u70b9 \u521b\u5efa\u6570\u636e\u5e93\uff1a mysql -u root -p (CTL) MariaDB [(none)]> CREATE DATABASE neutron; MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \\ IDENTIFIED BY 'NEUTRON_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \\ IDENTIFIED BY 'NEUTRON_DBPASS'; MariaDB [(none)]> exit \u6ce8\u610f \u66ff\u6362 NEUTRON_DBPASS \u4e3a neutron \u6570\u636e\u5e93\u8bbe\u7f6e\u5bc6\u7801\u3002 source ~/.admin-openrc (CTL) \u521b\u5efaneutron\u670d\u52a1\u51ed\u8bc1 openstack user create --domain default --password-prompt neutron (CTL) openstack role add --project service --user neutron admin (CTL) openstack service create --name neutron --description \"OpenStack Networking\" network (CTL) \u521b\u5efaNeutron\u670d\u52a1API\u7aef\u70b9\uff1a openstack endpoint create 
## Neutron Installation

Create the database, service credentials, and API endpoints.

Create the database:

```shell
mysql -u root -p    (CTL)

MariaDB [(none)]> CREATE DATABASE neutron;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
  IDENTIFIED BY 'NEUTRON_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
  IDENTIFIED BY 'NEUTRON_DBPASS';
MariaDB [(none)]> exit
```

Note: replace `NEUTRON_DBPASS` with the password you want to set for the neutron database.

```shell
source ~/.admin-openrc    (CTL)
```

Create the neutron service credentials:

```shell
openstack user create --domain default --password-prompt neutron    (CTL)
openstack role add --project service --user neutron admin    (CTL)
openstack service create --name neutron --description "OpenStack Networking" network    (CTL)
```

Create the Neutron API endpoints:

```shell
openstack endpoint create --region RegionOne network public http://controller:9696    (CTL)
openstack endpoint create --region RegionOne network internal http://controller:9696    (CTL)
openstack endpoint create --region RegionOne network admin http://controller:9696    (CTL)
```

Install the packages:

```shell
yum install openstack-neutron openstack-neutron-linuxbridge ebtables ipset \    (CTL)
            openstack-neutron-ml2
yum install openstack-neutron-linuxbridge ebtables ipset    (CPT)
```

Configure Neutron.

Edit the main configuration file:

```shell
vim /etc/neutron/neutron.conf

[database]
connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron    (CTL)

[DEFAULT]
core_plugin = ml2    (CTL)
service_plugins = router    (CTL)
allow_overlapping_ips = true    (CTL)
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = true    (CTL)
notify_nova_on_port_data_changes = true    (CTL)
api_workers = 3    (CTL)

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = neutron
password = NEUTRON_PASS

[nova]
auth_url = http://controller:5000    (CTL)
auth_type = password    (CTL)
project_domain_name = Default    (CTL)
user_domain_name = Default    (CTL)
region_name = RegionOne    (CTL)
project_name = service    (CTL)
username = nova    (CTL)
password = NOVA_PASS    (CTL)

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
```

Explanation:

- `[database]` section: configure the database connection.
- `[DEFAULT]` section: enable the ml2 and router plugins, allow overlapping IP addresses, and configure the RabbitMQ message queue entry.
- `[DEFAULT]` and `[keystone_authtoken]` sections: configure the Identity service endpoint.
- `[DEFAULT]` and `[nova]` sections: configure Networking to notify Compute of network topology changes.
- `[oslo_concurrency]` section: configure the lock path.

Note:

- Replace `NEUTRON_DBPASS` with the password of the neutron database.
- Replace `RABBIT_PASS` with the password of the openstack account in RabbitMQ.
- Replace `NEUTRON_PASS` with the password of the neutron user.
- Replace `NOVA_PASS` with the password of the nova user.

Configure the ML2 plugin:

```shell
vim /etc/neutron/plugins/ml2/ml2_conf.ini

[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security

[ml2_type_flat]
flat_networks = provider

[ml2_type_vxlan]
vni_ranges = 1:1000

[securitygroup]
enable_ipset = true
```

Create the symbolic link /etc/neutron/plugin.ini:

```shell
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
```

Note:

- `[ml2]` section: enable flat, vlan, and vxlan networks, enable the linuxbridge and l2population mechanisms, and enable the port security extension driver.
- `[ml2_type_flat]` section: configure the flat network as the provider virtual network.
- `[ml2_type_vxlan]` section: configure the VXLAN network identifier range.
- `[securitygroup]` section: enable ipset.
Supplement: the exact L2 configuration can be adjusted to your needs; this document uses a provider network with linuxbridge.

Configure the Linux bridge agent:

```shell
vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini

[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME

[vxlan]
enable_vxlan = true
local_ip = OVERLAY_INTERFACE_IP_ADDRESS
l2_population = true

[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
```

Explanation:

- `[linux_bridge]` section: map the provider virtual network to a physical network interface.
- `[vxlan]` section: enable the VXLAN overlay network, configure the IP address of the physical network interface that handles overlay traffic, and enable layer-2 population.
- `[securitygroup]` section: enable security groups and configure the Linux bridge iptables firewall driver.

Note:

- Replace `PROVIDER_INTERFACE_NAME` with the physical network interface.
- Replace `OVERLAY_INTERFACE_IP_ADDRESS` with the management IP address of the controller node.

Configure the Layer-3 agent:

```shell
vim /etc/neutron/l3_agent.ini    (CTL)

[DEFAULT]
interface_driver = linuxbridge
```

Explanation: in the `[DEFAULT]` section, set the interface driver to linuxbridge.

Configure the DHCP agent:

```shell
vim /etc/neutron/dhcp_agent.ini    (CTL)

[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
```

Explanation: in the `[DEFAULT]` section, configure the linuxbridge interface driver and the Dnsmasq DHCP driver, and enable isolated metadata.

Configure the metadata agent:

```shell
vim /etc/neutron/metadata_agent.ini    (CTL)

[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = METADATA_SECRET
```

Explanation: in the `[DEFAULT]` section, configure the metadata host and the shared secret.

Note: replace `METADATA_SECRET` with a suitable metadata proxy secret.

Configure Nova to use Networking:

```shell
vim /etc/nova/nova.conf

[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = Default
user_domain_name = Default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
service_metadata_proxy = true    (CTL)
metadata_proxy_shared_secret = METADATA_SECRET    (CTL)
```

Explanation: in the `[neutron]` section, configure the access parameters, enable the metadata proxy, and configure the secret.

Note: replace `NEUTRON_PASS` with the password of the neutron user and `METADATA_SECRET` with a suitable metadata proxy secret.

Synchronize the database:

```shell
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
  --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
```

Restart the Compute API service:

```shell
systemctl restart openstack-nova-api.service
```

Start the networking services:

```shell
systemctl enable neutron-server.service neutron-linuxbridge-agent.service \    (CTL)
    neutron-dhcp-agent.service neutron-metadata-agent.service
systemctl enable neutron-l3-agent.service
systemctl restart openstack-nova-api.service neutron-server.service \    (CTL)
    neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
    neutron-metadata-agent.service neutron-l3-agent.service
```
```shell
systemctl enable neutron-linuxbridge-agent.service    (CPT)
systemctl restart neutron-linuxbridge-agent.service openstack-nova-compute.service    (CPT)
```

**Verification**

Verify that the neutron agents started successfully:

```shell
openstack network agent list
```
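As an optional functional check once the agents are up, you can create a provider network and subnet and confirm they appear in the network list. This is only an illustration; the physical network name `provider` matches the ML2/linuxbridge configuration above, while the network name, CIDR, and gateway are placeholders to adapt to your environment.

```shell
source ~/.admin-openrc
# Create a flat provider network mapped to the physical interface configured earlier.
openstack network create --share --external \
  --provider-physical-network provider \
  --provider-network-type flat provider-net
# Attach a subnet; adjust the allocation pool, gateway, and CIDR to your environment.
openstack subnet create --network provider-net \
  --allocation-pool start=192.0.2.100,end=192.0.2.200 \
  --gateway 192.0.2.1 --subnet-range 192.0.2.0/24 provider-subnet
openstack network list
```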
filter = [ \"a/vdb/\", \"r/.*/\"] \u89e3\u91ca \u5728devices\u90e8\u5206\uff0c\u6dfb\u52a0\u8fc7\u6ee4\u4ee5\u63a5\u53d7/dev/vdb\u8bbe\u5907\u62d2\u7edd\u5176\u4ed6\u8bbe\u5907\u3002 \u51c6\u5907NFS mkdir -p /root/cinder/backup cat << EOF >> /etc/export /root/cinder/backup 192.168.1.0/24(rw,sync,no_root_squash,no_all_squash) EOF \u914d\u7f6ecinder\u76f8\u5173\u914d\u7f6e\uff1a vim /etc/cinder/cinder.conf [DEFAULT] transport_url = rabbit://openstack:RABBIT_PASS@controller auth_strategy = keystone my_ip = 10.0.0.11 enabled_backends = lvm (STG) backup_driver=cinder.backup.drivers.nfs.NFSBackupDriver (STG) backup_share=HOST:PATH (STG) [database] connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder [keystone_authtoken] www_authenticate_uri = http://controller:5000 auth_url = http://controller:5000 memcached_servers = controller:11211 auth_type = password project_domain_name = Default user_domain_name = Default project_name = service username = cinder password = CINDER_PASS [oslo_concurrency] lock_path = /var/lib/cinder/tmp [lvm] volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver (STG) volume_group = cinder-volumes (STG) iscsi_protocol = iscsi (STG) iscsi_helper = tgtadm (STG) \u89e3\u91ca [database]\u90e8\u5206\uff0c\u914d\u7f6e\u6570\u636e\u5e93\u5165\u53e3\uff1b [DEFAULT]\u90e8\u5206\uff0c\u914d\u7f6eRabbitMQ\u6d88\u606f\u961f\u5217\u5165\u53e3\uff0c\u914d\u7f6emy_ip\uff1b [DEFAULT] [keystone_authtoken]\u90e8\u5206\uff0c\u914d\u7f6e\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u5165\u53e3\uff1b [oslo_concurrency]\u90e8\u5206\uff0c\u914d\u7f6elock path\u3002 \u6ce8\u610f \u66ff\u6362 CINDER_DBPASS \u4e3a cinder \u6570\u636e\u5e93\u7684\u5bc6\u7801\uff1b \u66ff\u6362 RABBIT_PASS \u4e3a RabbitMQ \u4e2d openstack \u8d26\u6237\u7684\u5bc6\u7801\uff1b \u914d\u7f6e my_ip \u4e3a\u63a7\u5236\u8282\u70b9\u7684\u7ba1\u7406 IP \u5730\u5740\uff1b \u66ff\u6362 CINDER_PASS \u4e3a cinder \u7528\u6237\u7684\u5bc6\u7801\uff1b \u66ff\u6362 HOST:PATH \u4e3a NFS \u7684HOSTIP\u548c\u5171\u4eab\u8def\u5f84\uff1b \u540c\u6b65\u6570\u636e\u5e93\uff1a su -s /bin/sh -c \"cinder-manage db sync\" cinder (CTL) \u914d\u7f6enova\uff1a vim /etc/nova/nova.conf (CTL) [cinder] os_region_name = RegionOne \u91cd\u542f\u8ba1\u7b97API\u670d\u52a1 systemctl restart openstack-nova-api.service \u542f\u52a8cinder\u670d\u52a1 systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service (CTL) systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service (CTL) systemctl enable rpcbind.service nfs-server.service tgtd.service iscsid.service \\ (STG) openstack-cinder-volume.service \\ openstack-cinder-backup.service systemctl start rpcbind.service nfs-server.service tgtd.service iscsid.service \\ (STG) openstack-cinder-volume.service \\ openstack-cinder-backup.service \u6ce8\u610f \u5f53cinder\u4f7f\u7528tgtadm\u7684\u65b9\u5f0f\u6302\u5377\u7684\u65f6\u5019\uff0c\u8981\u4fee\u6539/etc/tgt/tgtd.conf\uff0c\u5185\u5bb9\u5982\u4e0b\uff0c\u4fdd\u8bc1tgtd\u53ef\u4ee5\u53d1\u73b0cinder-volume\u7684iscsi target\u3002 include /var/lib/cinder/volumes/* \u9a8c\u8bc1 source ~/.admin-openrc openstack volume service list horizon \u5b89\u88c5 \u00b6 \u5b89\u88c5\u8f6f\u4ef6\u5305 yum install openstack-dashboard \u4fee\u6539\u6587\u4ef6 \u4fee\u6539\u53d8\u91cf vim /etc/openstack-dashboard/local_settings OPENSTACK_HOST = \"controller\" ALLOWED_HOSTS = ['*', ] SESSION_ENGINE = 'django.contrib.sessions.backends.cache' CACHES = { 'default': { 'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache', 
## Horizon Installation

Install the package:

```shell
yum install openstack-dashboard
```

Edit the configuration file and set the following variables:

```shell
vim /etc/openstack-dashboard/local_settings

OPENSTACK_HOST = "controller"
ALLOWED_HOSTS = ['*', ]

SESSION_ENGINE = 'django.contrib.sessions.backends.cache'

CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'controller:11211',
    }
}

OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "member"
WEBROOT = '/dashboard'
POLICY_FILES_PATH = "/etc/openstack-dashboard"

OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 3,
}
```

Restart the httpd service:

```shell
systemctl restart httpd.service memcached.service
```

**Verification**

Open a browser, navigate to http://HOSTIP/dashboard/, and log in to Horizon.

Note: replace HOSTIP with the management-plane IP address of the controller node.

## Tempest Installation

Tempest is the integration test suite of OpenStack. It is recommended if you want comprehensive, automated functional testing of the installed OpenStack environment; otherwise it does not need to be installed.

Install Tempest:

```shell
yum install openstack-tempest
```

Initialize a workspace:

```shell
tempest init mytest
```

Edit the configuration file:

```shell
cd mytest
vi etc/tempest.conf
```

tempest.conf must describe the current OpenStack environment; for the details, refer to the official sample configuration.

Run the tests:

```shell
tempest run
```

Install tempest extensions (optional): the individual OpenStack services also provide their own tempest test packages, which can be installed to extend the test coverage. In Wallaby, extension tests are provided for Cinder, Glance, Keystone, Ironic, and Trove; install them with:

```shell
yum install python3-cinder-tempest-plugin python3-glance-tempest-plugin python3-ironic-tempest-plugin python3-keystone-tempest-plugin python3-trove-tempest-plugin
```
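For a quicker sanity check than the full suite, tempest can run only the tests tagged as smoke tests, or those matching a regular expression. A small illustration, run from the workspace created above:

```shell
cd mytest
# Run only smoke tests.
tempest run --smoke
# Or run a subset selected by regular expression, for example the Identity API tests.
tempest run --regex '^tempest.api.identity'
```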
## Ironic Installation

Ironic is the OpenStack Bare Metal service. It is recommended if you need bare-metal provisioning; otherwise it does not need to be installed.

Set up the database. The Bare Metal service stores its information in a database; create an `ironic` database accessible by an `ironic` user, replacing `IRONIC_DBPASSWORD` with a suitable password:

```shell
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE ironic CHARACTER SET utf8;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'localhost' \
  IDENTIFIED BY 'IRONIC_DBPASSWORD';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'%' \
  IDENTIFIED BY 'IRONIC_DBPASSWORD';
```

Create the service user credentials.

1. Create the Bare Metal service users:

```shell
openstack user create --password IRONIC_PASSWORD \
  --email ironic@example.com ironic
openstack role add --project service --user ironic admin
openstack service create --name ironic --description "Ironic baremetal provisioning service" baremetal
openstack service create --name ironic-inspector --description "Ironic inspector baremetal provisioning service" baremetal-introspection
openstack user create --password IRONIC_INSPECTOR_PASSWORD \
  --email ironic_inspector@example.com ironic_inspector
openstack role add --project service --user ironic_inspector admin
```

2. Create the Bare Metal service endpoints:

```shell
openstack endpoint create --region RegionOne baremetal admin http://$IRONIC_NODE:6385
openstack endpoint create --region RegionOne baremetal public http://$IRONIC_NODE:6385
openstack endpoint create --region RegionOne baremetal internal http://$IRONIC_NODE:6385
openstack endpoint create --region RegionOne baremetal-introspection internal http://172.20.19.13:5050/v1
openstack endpoint create --region RegionOne baremetal-introspection public http://172.20.19.13:5050/v1
openstack endpoint create --region RegionOne baremetal-introspection admin http://172.20.19.13:5050/v1
```

Configure the ironic-api service. The configuration file is /etc/ironic/ironic.conf.

1. Configure the location of the database via the `connection` option as shown below, replacing `IRONIC_DBPASSWORD` with the password of the ironic user and `DB_IP` with the IP address of the database server:

```shell
[database]
# The SQLAlchemy connection string used to connect to the
# database (string value)
connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic
```

2. Configure ironic-api to use the RabbitMQ message broker via the following option, replacing `RPC_*` with the RabbitMQ address details and credentials:

```shell
[DEFAULT]
# A URL representing the messaging driver to use and its full
# configuration. (string value)
transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
```

Alternatively, json-rpc can be used instead of RabbitMQ.

3. Configure ironic-api to use the Identity service credentials, replacing `PUBLIC_IDENTITY_IP` with the public IP of the Identity server, `PRIVATE_IDENTITY_IP` with the private IP of the Identity server, and `IRONIC_PASSWORD` with the password of the ironic user in the Identity service:
```shell
[DEFAULT]
# Authentication strategy used by ironic-api: one of
# "keystone" or "noauth". "noauth" should not be used in a
# production environment because all authentication will be
# disabled. (string value)
auth_strategy=keystone
host = controller
memcache_servers = controller:11211
enabled_network_interfaces = flat,noop,neutron
default_network_interface = noop
transport_url = rabbit://openstack:RABBITPASSWD@controller:5672/
enabled_hardware_types = ipmi
enabled_boot_interfaces = pxe
enabled_deploy_interfaces = direct
default_deploy_interface = direct
enabled_inspect_interfaces = inspector
enabled_management_interfaces = ipmitool
enabled_power_interfaces = ipmitool
enabled_rescue_interfaces = no-rescue,agent
isolinux_bin = /usr/share/syslinux/isolinux.bin
logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s

[keystone_authtoken]
# Authentication type to load (string value)
auth_type=password
# Complete public Identity API endpoint (string value)
www_authenticate_uri=http://PUBLIC_IDENTITY_IP:5000
# Complete admin Identity API endpoint. (string value)
auth_url=http://PRIVATE_IDENTITY_IP:5000
# Service username. (string value)
username=ironic
# Service account password. (string value)
password=IRONIC_PASSWORD
# Service tenant name. (string value)
project_name=service
# Domain name containing project (string value)
project_domain_name=Default
# User's domain name (string value)
user_domain_name=Default

[agent]
deploy_logs_collect = always
deploy_logs_local_path = /var/log/ironic/deploy
deploy_logs_storage_backend = local
image_download_source = http
stream_raw_images = false
force_raw_images = false
verify_ca = False

[oslo_concurrency]

[oslo_messaging_notifications]
transport_url = rabbit://openstack:123456@172.20.19.25:5672/
topics = notifications
driver = messagingv2

[oslo_messaging_rabbit]
amqp_durable_queues = True
rabbit_ha_queues = True

[pxe]
ipxe_enabled = false
pxe_append_params = nofb nomodeset vga=normal coreos.autologin ipa-insecure=1
image_cache_size = 204800
tftp_root=/var/lib/tftpboot/cephfs/
tftp_master_path=/var/lib/tftpboot/cephfs/master_images

[dhcp]
dhcp_provider = none
```

4. Create the Bare Metal service database tables:

```shell
ironic-dbsync --config-file /etc/ironic/ironic.conf create_schema
```

5. Restart the ironic-api service:

```shell
sudo systemctl restart openstack-ironic-api
```
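As an optional quick check that the freshly restarted API answers requests (before the conductor is configured), its version document can be queried over HTTP. This is only an illustration; replace `IRONIC_API_IP` with the address the service listens on.

```shell
# The Ironic API root returns a small JSON version document when the service is healthy.
curl -s http://IRONIC_API_IP:6385/ | python3 -m json.tool
```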
Configure the ironic-conductor service.

1. Replace `HOST_IP` with the IP address of the conductor host:

```shell
[DEFAULT]
# IP address of this host. If unset, will determine the IP
# programmatically. If unable to do so, will use "127.0.0.1".
# (string value)
my_ip=HOST_IP
```

2. Configure the location of the database. ironic-conductor should use the same configuration as ironic-api. Replace `IRONIC_DBPASSWORD` with the password of the ironic user and `DB_IP` with the IP address of the database server:

```shell
[database]
# The SQLAlchemy connection string to use to connect to the
# database. (string value)
connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic
```

3. Configure ironic-conductor to use the RabbitMQ message broker via the following option. It should use the same configuration as ironic-api; replace `RPC_*` with the RabbitMQ address details and credentials:

```shell
[DEFAULT]
# A URL representing the messaging driver to use and its full
# configuration. (string value)
transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
```

Alternatively, json-rpc can be used instead of RabbitMQ.

4. Configure the credentials for accessing other OpenStack services.

To communicate with other OpenStack services, the Bare Metal service needs to authenticate against the OpenStack Identity service with service users when making requests. The credentials of these users must be configured in the configuration section associated with each respective service:

- `[neutron]` - access to the OpenStack Networking service
- `[glance]` - access to the OpenStack Image service
- `[swift]` - access to the OpenStack Object Storage service
- `[cinder]` - access to the OpenStack Block Storage service
- `[inspector]` - access to the OpenStack Bare Metal introspection service
- `[service_catalog]` - a special entry that stores the credentials the Bare Metal service uses to discover its own API URL endpoint as registered in the OpenStack Identity service catalog

For simplicity, the same service user can be used for all services. For backward compatibility, it should be the same user as the one configured in the `[keystone_authtoken]` section of ironic-api, but this is not mandatory; a different service user can be created and configured for each service.

In the example below, the authentication information for accessing the OpenStack Networking service is configured so that:

- the Networking service is deployed in the Identity service region named RegionOne, and only the public endpoint interface is registered in the service catalog;
- requests use a specific CA SSL certificate for HTTPS connections;
- the same service user as configured for ironic-api is used;
- the dynamic password authentication plugin discovers a suitable Identity service API version based on the other options.
```shell
[neutron]
# Authentication type to load (string value)
auth_type = password
# Authentication URL (string value)
auth_url=https://IDENTITY_IP:5000/
# Username (string value)
username=ironic
# User's password (string value)
password=IRONIC_PASSWORD
# Project name to scope to (string value)
project_name=service
# Domain ID containing project (string value)
project_domain_id=default
# User's domain id (string value)
user_domain_id=default
# PEM encoded Certificate Authority to use when verifying
# HTTPs connections. (string value)
cafile=/opt/stack/data/ca-bundle.pem
# The default region_name for endpoint URL discovery. (string
# value)
region_name = RegionOne
# List of interfaces, in order of preference, for endpoint
# URL. (list value)
valid_interfaces=public
```

By default, in order to communicate with another service, the Bare Metal service tries to discover a suitable endpoint for that service through the Identity service catalog. If you want to use a different endpoint for a particular service, specify it in the Bare Metal service configuration file via the `endpoint_override` option:

```shell
[neutron]
...
endpoint_override =
```

5. Configure the allowed drivers and hardware types.

Set the hardware types allowed by the ironic-conductor service with `enabled_hardware_types`:

```shell
[DEFAULT]
enabled_hardware_types = ipmi
```

Configure the hardware interfaces:

```shell
enabled_boot_interfaces = pxe
enabled_deploy_interfaces = direct,iscsi
enabled_inspect_interfaces = inspector
enabled_management_interfaces = ipmitool
enabled_power_interfaces = ipmitool
```

Configure the interface defaults:

```shell
[DEFAULT]
default_deploy_interface = direct
default_network_interface = neutron
```

If any driver that uses direct deploy is enabled, the Swift backend of the Image service must be installed and configured. The Ceph Object Gateway (RADOS Gateway) is also supported as an Image service backend.

6. Restart the ironic-conductor service:

```shell
sudo systemctl restart openstack-ironic-conductor
```

Configure the ironic-inspector service. The configuration file is /etc/ironic-inspector/inspector.conf.

1. Create the database:

```shell
# mysql -u root -p

MariaDB [(none)]> CREATE DATABASE ironic_inspector CHARACTER SET utf8;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic_inspector.* TO 'ironic_inspector'@'localhost' \
  IDENTIFIED BY 'IRONIC_INSPECTOR_DBPASSWORD';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic_inspector.* TO 'ironic_inspector'@'%' \
  IDENTIFIED BY 'IRONIC_INSPECTOR_DBPASSWORD';
```

2. Configure the location of the database via the `connection` option as shown below, replacing `IRONIC_INSPECTOR_DBPASSWORD` with the password of the ironic_inspector user and `DB_IP` with the IP address of the database server:

```shell
[database]
backend = sqlalchemy
connection = mysql+pymysql://ironic_inspector:IRONIC_INSPECTOR_DBPASSWORD@DB_IP/ironic_inspector
min_pool_size = 100
max_pool_size = 500
pool_timeout = 30
max_retries = 5
max_overflow = 200
db_retry_interval = 2
db_inc_retry_interval = True
db_max_retry_interval = 2
db_max_retries = 5
```

3. Configure the message queue transport address:

```shell
[DEFAULT]
transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
```
4. Set up Keystone authentication:

```shell
[DEFAULT]
auth_strategy = keystone
timeout = 900
rootwrap_config = /etc/ironic-inspector/rootwrap.conf
logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s
log_dir = /var/log/ironic-inspector
state_path = /var/lib/ironic-inspector
use_stderr = False

[ironic]
api_endpoint = http://IRONIC_API_HOST_ADDRRESS:6385
auth_type = password
auth_url = http://PUBLIC_IDENTITY_IP:5000
auth_strategy = keystone
ironic_url = http://IRONIC_API_HOST_ADDRRESS:6385
os_region = RegionOne
project_name = service
project_domain_name = Default
user_domain_name = Default
username = IRONIC_SERVICE_USER_NAME
password = IRONIC_SERVICE_USER_PASSWORD

[keystone_authtoken]
auth_type = password
auth_url = http://control:5000
www_authenticate_uri = http://control:5000
project_domain_name = default
user_domain_name = default
project_name = service
username = ironic_inspector
password = IRONICPASSWD
region_name = RegionOne
memcache_servers = control:11211
token_cache_time = 300

[processing]
add_ports = active
processing_hooks = $default_processing_hooks,local_link_connection,lldp_basic
ramdisk_logs_dir = /var/log/ironic-inspector/ramdisk
always_store_ramdisk_logs = true
store_data = none
power_off = false

[pxe_filter]
driver = iptables

[capabilities]
boot_mode=True
```

5. Configure the ironic-inspector dnsmasq service:

```shell
# Configuration file: /etc/ironic-inspector/dnsmasq.conf
port=0
interface=enp3s0                          # replace with the actual listening network interface
dhcp-range=172.20.19.100,172.20.19.110    # replace with the actual DHCP address range
bind-interfaces
enable-tftp
dhcp-match=set:efi,option:client-arch,7
dhcp-match=set:efi,option:client-arch,9
dhcp-match=aarch64, option:client-arch,11
dhcp-boot=tag:aarch64,grubaa64.efi
dhcp-boot=tag:!aarch64,tag:efi,grubx64.efi
dhcp-boot=tag:!aarch64,tag:!efi,pxelinux.0
tftp-root=/tftpboot                       # replace with the actual tftpboot directory
log-facility=/var/log/dnsmasq.log
```

6. Disable DHCP on the subnet of the ironic provisioning network:

```shell
openstack subnet set --no-dhcp 72426e89-f552-4dc4-9ac7-c4e131ce7f3c
```

7. Initialize the ironic-inspector database. Run on the controller node:

```shell
ironic-inspector-dbsync --config-file /etc/ironic-inspector/inspector.conf upgrade
```

8. Start the services:

```shell
systemctl enable --now openstack-ironic-inspector.service
systemctl enable --now openstack-ironic-inspector-dnsmasq.service
```

6. Configure the httpd service.

Create the httpd root directory used by ironic and set its owner and group. The directory path must match the path specified by the `http_root` option in the `[deploy]` section of /etc/ironic/ironic.conf.

```shell
mkdir -p /var/lib/ironic/httproot
chown ironic.ironic /var/lib/ironic/httproot
```

Install and configure the httpd service.

Install the httpd service (skip if it is already installed):

```shell
yum install httpd -y
```

Create the file /etc/httpd/conf.d/openstack-ironic-httpd.conf with the following content:

```
Listen 8080

<VirtualHost *:8080>
    ServerName ironic.openeuler.com

    ErrorLog "/var/log/httpd/openstack-ironic-httpd-error_log"
    CustomLog "/var/log/httpd/openstack-ironic-httpd-access_log" "%h %l %u %t \"%r\" %>s %b"

    DocumentRoot "/var/lib/ironic/httproot"
    <Directory "/var/lib/ironic/httproot">
        Options Indexes FollowSymLinks
        Require all granted
    </Directory>
    LogLevel warn
    AddDefaultCharset UTF-8
    EnableSendfile on
</VirtualHost>
```

Note that the listening port must match the port specified by the `http_url` option in the `[deploy]` section of /etc/ironic/ironic.conf.

Restart the httpd service:

```shell
systemctl restart httpd
```
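At this point the API, conductor, and inspector are configured, so an optional check is to confirm that a conductor has registered the enabled hardware type. This is only an illustration and assumes the python3-ironicclient package is installed on the node where you run it:

```shell
source ~/.admin-openrc
openstack baremetal driver list   # the ipmi hardware type should be reported by a conductor
openstack baremetal node list     # empty until nodes are enrolled
```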
7. Build the deploy ramdisk image.

In the Wallaby release, the ramdisk image can be built with the ironic-python-agent service or the disk-image-builder tool, or with the community's latest ironic-python-agent-builder. You may also choose another tool of your own.

If you use the native Wallaby tools, install the corresponding package:

```shell
yum install openstack-ironic-python-agent
```

or

```shell
yum install diskimage-builder
```

For the detailed usage, refer to the official documentation.

The following describes the complete process of building a deploy image for ironic with ironic-python-agent-builder.

Install ironic-python-agent-builder.

1. Install the tool:

```shell
pip install ironic-python-agent-builder
```

2. Change the python interpreter in the following files:

```shell
/usr/bin/yum /usr/libexec/urlgrabber-ext-down
```

3. Install the other required tools:

```shell
yum install git
```

Because `DIB` depends on the `semanage` command, make sure it is available before building the image: run `semanage --help`, and if the command is not found, install it:

```shell
# First find out which package provides it.
[root@localhost ~]# yum provides /usr/sbin/semanage
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirror.vcu.edu
 * extras: mirror.vcu.edu
 * updates: mirror.math.princeton.edu
policycoreutils-python-2.5-34.el7.aarch64 : SELinux policy core python utilities
Repo        : base
Matched from:
Filename    : /usr/sbin/semanage
# Install it.
[root@localhost ~]# yum install policycoreutils-python
```

Build the image.

On `arm` architectures, also export:

```shell
export ARCH=aarch64
```

Basic usage:

```shell
usage: ironic-python-agent-builder [-h] [-r RELEASE] [-o OUTPUT] [-e ELEMENT]
                                   [-b BRANCH] [-v] [--extra-args EXTRA_ARGS]
                                   distribution

positional arguments:
  distribution          Distribution to use

optional arguments:
  -h, --help            show this help message and exit
  -r RELEASE, --release RELEASE
                        Distribution release to use
  -o OUTPUT, --output OUTPUT
                        Output base file name
  -e ELEMENT, --element ELEMENT
                        Additional DIB element to use
  -b BRANCH, --branch BRANCH
                        If set, override the branch that is used for ironic-
                        python-agent and requirements
  -v, --verbose         Enable verbose logging in diskimage-builder
  --extra-args EXTRA_ARGS
                        Extra arguments to pass to diskimage-builder
```

For example:

```shell
ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky
```

Allow SSH login.

Initialize the environment variables, then build the image:

```shell
export DIB_DEV_USER_USERNAME=ipa
export DIB_DEV_USER_PWDLESS_SUDO=yes
export DIB_DEV_USER_PASSWORD='123'
ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky -e selinux-permissive -e devuser
```
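The builder produces a kernel and an initramfs (for the example above, typically /mnt/ironic-agent-ssh.kernel and /mnt/ironic-agent-ssh.initramfs). Before they can be referenced from a node's driver-info they are usually uploaded to the Image service; a minimal sketch, assuming the file names from the example above:

```shell
source ~/.admin-openrc
openstack image create deploy-kernel --public \
  --disk-format aki --container-format aki \
  --file /mnt/ironic-agent-ssh.kernel
openstack image create deploy-ramdisk --public \
  --disk-format ari --container-format ari \
  --file /mnt/ironic-agent-ssh.initramfs
```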
Specify the code repository.

Initialize the corresponding environment variables, then build the image:

```shell
# Specify the repository location and version.
DIB_REPOLOCATION_ironic_python_agent=git@172.20.2.149:liuzz/ironic-python-agent.git
DIB_REPOREF_ironic_python_agent=origin/develop

# Clone the code directly from gerrit.
DIB_REPOLOCATION_ironic_python_agent=https://review.opendev.org/openstack/ironic-python-agent
DIB_REPOREF_ironic_python_agent=refs/changes/43/701043/1
```

Reference: [source-repositories](https://docs.openstack.org/diskimage-builder/latest/elements/source-repositories/README.html).

Specifying the repository location and version has been verified to work.

Note:

The PXE configuration file templates in native OpenStack do not support the arm64 architecture, so the native OpenStack code has to be modified by the user:

In the Wallaby release, the community ironic still does not support UEFI PXE boot on arm64. The symptom is that the generated grub.cfg file (usually under /tftpboot/) has the wrong format, which causes PXE boot to fail, as shown below.

The incorrectly generated configuration file:

![ironic-err](../../img/install/ironic-err.png)

As shown in the figure, on arm the commands that look up the vmlinux and ramdisk images are `linux` and `initrd` respectively; the commands highlighted in red are for x86 UEFI PXE boot. Users need to modify the code that generates grub.cfg themselves.

TLS errors when ironic sends command-status query requests to ipa:

In the Wallaby release, both ipa and ironic send requests to each other with TLS authentication enabled by default. Disable it as described in the official documentation:

1. Modify the ironic configuration file (/etc/ironic/ironic.conf), adding ipa-insecure=1 to the following options:

```
[agent]
verify_ca = False

[pxe]
pxe_append_params = nofb nomodeset vga=normal coreos.autologin ipa-insecure=1
```

2. In the ramdisk image, add the ipa configuration file /etc/ironic_python_agent/ironic_python_agent.conf (the /etc/ironic_python_agent directory must be created first) with the following TLS configuration:

```
[DEFAULT]
enable_auto_tls = False
```

Set the permissions:

```
chown -R ipa.ipa /etc/ironic_python_agent/
```
3. Modify the service unit file of the ipa service to add the config-file option:

```shell
vim /usr/lib/systemd/system/ironic-python-agent.service
```

```
[Unit]
Description=Ironic Python Agent
After=network-online.target

[Service]
ExecStartPre=/sbin/modprobe vfat
ExecStart=/usr/local/bin/ironic-python-agent --config-file /etc/ironic_python_agent/ironic_python_agent.conf
Restart=always
RestartSec=30s

[Install]
WantedBy=multi-user.target
```

## Kolla Installation

Kolla provides production-ready containerized deployment for OpenStack services. The Kolla and Kolla-ansible services were introduced in openEuler 24.03 LTS SP2.

Installing Kolla is simple: just install the corresponding RPM packages:

```shell
yum install openstack-kolla openstack-kolla-ansible
```

After the installation, commands such as `kolla-ansible`, `kolla-build`, `kolla-genpwd`, and `kolla-mergepwd` are available.
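As a minimal illustration of the workflow these commands enable (not a full deployment guide), an all-in-one deployment typically looks like the following. The inventory path and /etc/kolla/globals.yml are the kolla-ansible defaults, and the steps assume the prerequisites from the upstream kolla-ansible documentation are met:

```shell
# Generate random passwords for all services into /etc/kolla/passwords.yml.
kolla-genpwd
# After editing /etc/kolla/globals.yml, deploy using the bundled all-in-one inventory.
kolla-ansible -i /usr/share/kolla-ansible/ansible/inventory/all-in-one bootstrap-servers
kolla-ansible -i /usr/share/kolla-ansible/ansible/inventory/all-in-one prechecks
kolla-ansible -i /usr/share/kolla-ansible/ansible/inventory/all-in-one deploy
```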
## Trove Installation

Trove is the OpenStack Database service. It is recommended if you want to offer the database service provided by OpenStack; otherwise it does not need to be installed.

1. Set up the database.

The Database service stores its information in a database; create a `trove` database accessible by a `trove` user, replacing `TROVE_DBPASSWORD` with a suitable password:

```shell
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE trove CHARACTER SET utf8;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'localhost' \
  IDENTIFIED BY 'TROVE_DBPASSWORD';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'%' \
  IDENTIFIED BY 'TROVE_DBPASSWORD';
```

2. Create the service user credentials.

1. Create the Trove service user:

```shell
openstack user create --password TROVE_PASSWORD \
  --email trove@example.com trove
openstack role add --project service --user trove admin
openstack service create --name trove --description "Database service" database
```

Explanation: replace `TROVE_PASSWORD` with the password of the trove user.

2. Create the Database service endpoints:

```shell
openstack endpoint create --region RegionOne database public http://controller:8779/v1.0/%\(tenant_id\)s
openstack endpoint create --region RegionOne database internal http://controller:8779/v1.0/%\(tenant_id\)s
openstack endpoint create --region RegionOne database admin http://controller:8779/v1.0/%\(tenant_id\)s
```

3. Install and configure the Trove components.

1. Install the Trove packages:

```shell
yum install openstack-trove python-troveclient
```

2. Configure trove.conf:

```shell
vim /etc/trove/trove.conf

[DEFAULT]
bind_host=TROVE_NODE_IP
log_dir = /var/log/trove
network_driver = trove.network.neutron.NeutronDriver
management_security_groups =
nova_keypair = trove-mgmt
default_datastore = mysql
taskmanager_manager = trove.taskmanager.manager.Manager
trove_api_workers = 5
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
reboot_time_out = 300
usage_timeout = 900
agent_call_high_timeout = 1200
use_syslog = False
debug = True

# Set these if using Neutron Networking
network_driver=trove.network.neutron.NeutronDriver
network_label_regex=.*

transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/

[database]
connection = mysql+pymysql://trove:TROVE_DBPASS@controller/trove

[keystone_authtoken]
project_domain_name = Default
project_name = service
user_domain_name = Default
password = trove
username = trove
auth_url = http://controller:5000/v3/
auth_type = password

[service_credentials]
auth_url = http://controller:5000/v3/
region_name = RegionOne
project_name = service
password = trove
project_domain_name = Default
user_domain_name = Default
username = trove

[mariadb]
tcp_ports = 3306,4444,4567,4568

[mysql]
tcp_ports = 3306

[postgresql]
tcp_ports = 5432
```

Explanation:

- In the `[DEFAULT]` section, `bind_host` is set to the IP of the node where Trove is deployed.
- `nova_compute_url` and `cinder_url` are the endpoints created in Keystone for Nova and Cinder.
- `nova_proxy_XXX` describes a user that can access the Nova service; the example above uses the admin user.
- `transport_url` is the RabbitMQ connection information; replace `RABBIT_PASS` with the RabbitMQ password.
- In the `[database]` section, `connection` is the database information created for Trove in MySQL earlier.
- In the Trove user information, replace `TROVE_PASS` with the actual password of the trove user.

3. Configure trove-guestagent.conf:

```shell
vim /etc/trove/trove-guestagent.conf

[DEFAULT]
log_file = trove-guestagent.log
log_dir = /var/log/trove/
ignore_users = os_admin
control_exchange = trove
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
rpc_backend = rabbit
command_process_timeout = 60
use_syslog = False
debug = True

[service_credentials]
auth_url = http://controller:5000/v3/
region_name = RegionOne
project_name = service
password = TROVE_PASS
project_domain_name = Default
user_domain_name = Default
username = trove

[mysql]
docker_image = your-registry/your-repo/mysql
backup_docker_image = your-registry/your-repo/db-backup-mysql:1.1.0
```

Explanation:

- The guestagent is an independent component of Trove that must be pre-installed in the virtual machine image Trove boots through Nova. After a database instance is created, the guestagent process starts and reports heartbeats to Trove over the message queue (RabbitMQ), so the RabbitMQ user and password must be configured.
- Starting with the Victoria release, Trove uses a single unified image to run the different database types; the database service runs in a Docker container inside the guest virtual machine.
- `transport_url` is the RabbitMQ connection information; replace `RABBIT_PASS` with the RabbitMQ password.
- In the Trove user information, replace `TROVE_PASS` with the actual password of the trove user.
4. Generate the Trove database tables:

```shell
su -s /bin/sh -c "trove-manage db_sync" trove
```

5. Complete the installation.

Enable the Trove services to start on boot:

```shell
systemctl enable openstack-trove-api.service \
    openstack-trove-taskmanager.service \
    openstack-trove-conductor.service
```

Start the services:

```shell
systemctl start openstack-trove-api.service \
    openstack-trove-taskmanager.service \
    openstack-trove-conductor.service
```

## Swift Installation

Swift provides an elastically scalable, highly available distributed object storage service, suitable for storing large-scale unstructured data.

Create the service credentials and API endpoints.

Create the service credentials:

```shell
# Create the swift user.
openstack user create --domain default --password-prompt swift
# Add the admin role to the swift user.
openstack role add --project service --user swift admin
# Create the swift service entity.
openstack service create --name swift --description "OpenStack Object Storage" object-store
```

Create the swift API endpoints:

```shell
openstack endpoint create --region RegionOne object-store public http://controller:8080/v1/AUTH_%\(project_id\)s
openstack endpoint create --region RegionOne object-store internal http://controller:8080/v1/AUTH_%\(project_id\)s
openstack endpoint create --region RegionOne object-store admin http://controller:8080/v1
```

Install the packages:

```shell
yum install openstack-swift-proxy python3-swiftclient python3-keystoneclient python3-keystonemiddleware memcached    (CTL)
```

Configure the proxy-server.

The Swift RPM package already ships a basically usable proxy-server.conf; only the IP and the swift password in it need to be edited manually.

***Note*** **Replace the password with the one you chose for the swift user in the Identity service.**

4. Install and configure the storage nodes (STG).

Install the supporting packages:

```shell
yum install xfsprogs rsync
```

Format the /dev/vdb and /dev/vdc devices as XFS:

```shell
mkfs.xfs /dev/vdb
mkfs.xfs /dev/vdc
```

Create the mount point directory structure:

```shell
mkdir -p /srv/node/vdb
mkdir -p /srv/node/vdc
```

Find the UUIDs of the new partitions:

```shell
blkid
```

Edit the /etc/fstab file and add the following to it:

```shell
UUID="" /srv/node/vdb xfs noatime 0 2
UUID="" /srv/node/vdc xfs noatime 0 2
```

Mount the devices:

```shell
mount /srv/node/vdb
mount /srv/node/vdc
```

***Note*** **If disaster tolerance is not needed, only one device has to be created in the steps above, and the rsync configuration below can be skipped.**
(Optional) Create or edit the /etc/rsyncd.conf file to contain the following:

```shell
[DEFAULT]
uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
address = MANAGEMENT_INTERFACE_IP_ADDRESS

[account]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/account.lock

[container]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/container.lock

[object]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/object.lock
```

**Replace MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network on the storage node.**

Start the rsyncd service and configure it to start on boot:

```shell
systemctl enable rsyncd.service
systemctl start rsyncd.service
```

5. Install and configure the components on the storage nodes (STG).

Install the packages:

```shell
yum install openstack-swift-account openstack-swift-container openstack-swift-object
```

Edit the account-server.conf, container-server.conf, and object-server.conf files in the /etc/swift directory, replacing `bind_ip` with the IP address of the management network on the storage node.

Ensure correct ownership of the mount point directory structure:

```shell
chown -R swift:swift /srv/node
```

Create the recon directory and ensure it has the correct ownership:

```shell
mkdir -p /var/cache/swift
chown -R root:swift /var/cache/swift
chmod -R 775 /var/cache/swift
```

6. Create the account ring (CTL).

Change to the /etc/swift directory:

```shell
cd /etc/swift
```

Create the base account.builder file:

```shell
swift-ring-builder account.builder create 10 1 1
```

Add each storage node to the ring:

```shell
swift-ring-builder account.builder add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6202 --device DEVICE_NAME --weight DEVICE_WEIGHT
```

**Replace STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network on the storage node. Replace DEVICE_NAME with a storage device name on the same storage node.**

***Note*** **Repeat this command for every storage device on every storage node.**

Verify the ring contents:

```shell
swift-ring-builder account.builder
```

Rebalance the ring:

```shell
swift-ring-builder account.builder rebalance
```
7. Create the container ring (CTL).

Change to the `/etc/swift` directory.

Create the base `container.builder` file:

```shell
swift-ring-builder container.builder create 10 1 1
```

Add each storage node to the ring:

```shell
swift-ring-builder container.builder \
  add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6201 \
  --device DEVICE_NAME --weight 100
```

**Replace STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network on the storage node. Replace DEVICE_NAME with a storage device name on the same storage node.**

***Note*** **Repeat this command for every storage device on every storage node.**

Verify the ring contents:

```shell
swift-ring-builder container.builder
```

Rebalance the ring:

```shell
swift-ring-builder container.builder rebalance
```

8. Create the object ring (CTL).

Change to the `/etc/swift` directory.

Create the base `object.builder` file:

```shell
swift-ring-builder object.builder create 10 1 1
```

Add each storage node to the ring:

```shell
swift-ring-builder object.builder \
  add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6200 \
  --device DEVICE_NAME --weight 100
```

**Replace STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network on the storage node. Replace DEVICE_NAME with a storage device name on the same storage node.**

***Note*** **Repeat this command for every storage device on every storage node.**

Verify the ring contents:

```shell
swift-ring-builder object.builder
```

Rebalance the ring:

```shell
swift-ring-builder object.builder rebalance
```

Distribute the ring configuration files:

Copy the `account.ring.gz`, `container.ring.gz`, and `object.ring.gz` files to the `/etc/swift` directory on every storage node and on any other node running the proxy service.

9. Finalize the installation.

Edit the /etc/swift/swift.conf file:

```shell
[swift-hash]
swift_hash_path_suffix = test-hash
swift_hash_path_prefix = test-hash

[storage-policy:0]
name = Policy-0
default = yes
```

Replace test-hash with unique values.

Copy the swift.conf file to the /etc/swift directory on every storage node and on any other node running the proxy service.

On all nodes, ensure correct ownership of the configuration directory:

```shell
chown -R root:swift /etc/swift
```

On the controller node and on any other node running the proxy service, start the Object Storage proxy service and its dependencies, and configure them to start on boot:

```shell
systemctl enable openstack-swift-proxy.service memcached.service
systemctl start openstack-swift-proxy.service memcached.service
```

On the storage nodes, start the Object Storage services and configure them to start on boot:

```shell
systemctl enable openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service
systemctl start openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service
systemctl enable openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service
systemctl start openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service
systemctl enable openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service
systemctl start openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service
```
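The guide has no dedicated Swift verification step; a minimal sketch to confirm that the proxy and storage services work end to end is to show the account statistics and create a test container (the container name is arbitrary):

```shell
source ~/.admin-openrc
openstack object store account show
openstack container create test-container
openstack container list
```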
start openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service Cyborg \u5b89\u88c5 \u00b6 Cyborg\u4e3aOpenStack\u63d0\u4f9b\u52a0\u901f\u5668\u8bbe\u5907\u7684\u652f\u6301\uff0c\u5305\u62ec GPU, FPGA, ASIC, NP, SoCs, NVMe/NOF SSDs, ODP, DPDK/SPDK\u7b49\u7b49\u3002 1.\u521d\u59cb\u5316\u5bf9\u5e94\u6570\u636e\u5e93 CREATE DATABASE cyborg; GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'localhost' IDENTIFIED BY 'CYBORG_DBPASS'; GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'%' IDENTIFIED BY 'CYBORG_DBPASS'; 2.\u521b\u5efa\u5bf9\u5e94Keystone\u8d44\u6e90\u5bf9\u8c61 $ openstack user create --domain default --password-prompt cyborg $ openstack role add --project service --user cyborg admin $ openstack service create --name cyborg --description \"Acceleration Service\" accelerator $ openstack endpoint create --region RegionOne \\ accelerator public http://:6666/v1 $ openstack endpoint create --region RegionOne \\ accelerator internal http://:6666/v1 $ openstack endpoint create --region RegionOne \\ accelerator admin http://:6666/v1 3.\u5b89\u88c5Cyborg yum install openstack-cyborg 4.\u914d\u7f6eCyborg \u4fee\u6539 /etc/cyborg/cyborg.conf [DEFAULT] transport_url = rabbit://%RABBITMQ_USER%:%RABBITMQ_PASSWORD%@%OPENSTACK_HOST_IP%:5672/ use_syslog = False state_path = /var/lib/cyborg debug = True [database] connection = mysql+pymysql://%DATABASE_USER%:%DATABASE_PASSWORD%@%OPENSTACK_HOST_IP%/cyborg [service_catalog] project_domain_id = default user_domain_id = default project_name = service password = PASSWORD username = cyborg auth_url = http://%OPENSTACK_HOST_IP%/identity auth_type = password [placement] project_domain_name = Default project_name = service user_domain_name = Default password = PASSWORD username = placement auth_url = http://%OPENSTACK_HOST_IP%/identity auth_type = password [keystone_authtoken] memcached_servers = localhost:11211 project_domain_name = Default project_name = service user_domain_name = Default password = PASSWORD username = cyborg auth_url = http://%OPENSTACK_HOST_IP%/identity auth_type = password \u81ea\u884c\u4fee\u6539\u5bf9\u5e94\u7684\u7528\u6237\u540d\u3001\u5bc6\u7801\u3001IP\u7b49\u4fe1\u606f 5.\u540c\u6b65\u6570\u636e\u5e93\u8868\u683c cyborg-dbsync --config-file /etc/cyborg/cyborg.conf upgrade 6.\u542f\u52a8Cyborg\u670d\u52a1 systemctl enable openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent systemctl start openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent Aodh \u5b89\u88c5 \u00b6 1.\u521b\u5efa\u6570\u636e\u5e93 CREATE DATABASE aodh; GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'localhost' IDENTIFIED BY 'AODH_DBPASS'; GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'%' IDENTIFIED BY 'AODH_DBPASS'; 2.\u521b\u5efa\u5bf9\u5e94Keystone\u8d44\u6e90\u5bf9\u8c61 openstack user create --domain default --password-prompt aodh openstack role add --project service --user aodh admin openstack service create --name aodh --description \"Telemetry\" alarming openstack endpoint create --region RegionOne alarming public http://controller:8042 openstack endpoint create --region RegionOne alarming internal http://controller:8042 openstack endpoint create --region RegionOne alarming admin http://controller:8042 3.\u5b89\u88c5Aodh yum install openstack-aodh-api openstack-aodh-evaluator openstack-aodh-notifier openstack-aodh-listener openstack-aodh-expirer python3-aodhclient \u6ce8\u610f 
aodh依赖的软件包python3-pyparsing在openEuler的OS仓不适配，需要覆盖安装OpenStack对应版本，可以使用 yum list |grep pyparsing |grep OpenStack | awk '{print $2}' 获取对应的版本 VERSION，然后再 yum install -y python3-pyparsing-VERSION 覆盖安装适配的pyparsing 4.修改配置文件 [database] connection = mysql+pymysql://aodh:AODH_DBPASS@controller/aodh [DEFAULT] transport_url = rabbit://openstack:RABBIT_PASS@controller auth_strategy = keystone [keystone_authtoken] www_authenticate_uri = http://controller:5000 auth_url = http://controller:5000 memcached_servers = controller:11211 auth_type = password project_domain_id = default user_domain_id = default project_name = service username = aodh password = AODH_PASS [service_credentials] auth_type = password auth_url = http://controller:5000/v3 project_domain_id = default user_domain_id = default project_name = service username = aodh password = AODH_PASS interface = internalURL region_name = RegionOne 5.初始化数据库 aodh-dbsync 6.启动Aodh服务 systemctl enable openstack-aodh-api.service openstack-aodh-evaluator.service openstack-aodh-notifier.service openstack-aodh-listener.service systemctl start openstack-aodh-api.service openstack-aodh-evaluator.service openstack-aodh-notifier.service openstack-aodh-listener.service Gnocchi 安装 ¶ 1.创建数据库 CREATE DATABASE gnocchi; GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'localhost' IDENTIFIED BY 'GNOCCHI_DBPASS'; GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'%' IDENTIFIED BY 'GNOCCHI_DBPASS'; 2.创建对应Keystone资源对象 openstack user create --domain default --password-prompt gnocchi openstack role add --project service --user gnocchi admin openstack service create --name gnocchi --description "Metric Service" metric openstack endpoint create --region RegionOne metric public http://controller:8041 openstack endpoint create --region RegionOne metric internal http://controller:8041 openstack endpoint create --region RegionOne metric admin http://controller:8041 3.安装Gnocchi yum install openstack-gnocchi-api openstack-gnocchi-metricd python3-gnocchiclient 4.修改配置文件 /etc/gnocchi/gnocchi.conf [api] auth_mode = keystone port = 8041 uwsgi_mode = http-socket [keystone_authtoken] auth_type = password auth_url = http://controller:5000/v3 project_domain_name = Default user_domain_name = Default project_name = service username = gnocchi password = GNOCCHI_PASS interface = internalURL region_name = RegionOne [indexer] url = mysql+pymysql://gnocchi:GNOCCHI_DBPASS@controller/gnocchi [storage] # coordination_url is not required but specifying one will improve # performance with better workload division across workers.
coordination_url = redis://controller:6379 file_basepath = /var/lib/gnocchi driver = file 5.\u521d\u59cb\u5316\u6570\u636e\u5e93 gnocchi-upgrade 6.\u542f\u52a8Gnocchi\u670d\u52a1 systemctl enable openstack-gnocchi-api.service openstack-gnocchi-metricd.service systemctl start openstack-gnocchi-api.service openstack-gnocchi-metricd.service Ceilometer \u5b89\u88c5 \u00b6 1.\u521b\u5efa\u5bf9\u5e94Keystone\u8d44\u6e90\u5bf9\u8c61 openstack user create --domain default --password-prompt ceilometer openstack role add --project service --user ceilometer admin openstack service create --name ceilometer --description \"Telemetry\" metering 2.\u5b89\u88c5Ceilometer yum install openstack-ceilometer-notification openstack-ceilometer-central 3.\u4fee\u6539\u914d\u7f6e\u6587\u4ef6 /etc/ceilometer/pipeline.yaml publishers: # set address of Gnocchi # + filter out Gnocchi-related activity meters (Swift driver) # + set default archive policy - gnocchi://?filter_project=service&archive_policy=low 4.\u4fee\u6539\u914d\u7f6e\u6587\u4ef6 /etc/ceilometer/ceilometer.conf [DEFAULT] transport_url = rabbit://openstack:RABBIT_PASS@controller [service_credentials] auth_type = password auth_url = http://controller:5000/v3 project_domain_id = default user_domain_id = default project_name = service username = ceilometer password = CEILOMETER_PASS interface = internalURL region_name = RegionOne 5.\u521d\u59cb\u5316\u6570\u636e\u5e93 ceilometer-upgrade 6.\u542f\u52a8Ceilometer\u670d\u52a1 systemctl enable openstack-ceilometer-notification.service openstack-ceilometer-central.service systemctl start openstack-ceilometer-notification.service openstack-ceilometer-central.service Heat \u5b89\u88c5 \u00b6 1.\u521b\u5efa heat \u6570\u636e\u5e93\uff0c\u5e76\u6388\u4e88 heat \u6570\u636e\u5e93\u6b63\u786e\u7684\u8bbf\u95ee\u6743\u9650\uff0c\u66ff\u6362 HEAT_DBPASS \u4e3a\u5408\u9002\u7684\u5bc6\u7801 CREATE DATABASE heat; GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'localhost' IDENTIFIED BY 'HEAT_DBPASS'; GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'%' IDENTIFIED BY 'HEAT_DBPASS'; 2.\u521b\u5efa\u670d\u52a1\u51ed\u8bc1\uff0c\u521b\u5efa heat \u7528\u6237\uff0c\u5e76\u4e3a\u5176\u589e\u52a0 admin \u89d2\u8272 openstack user create --domain default --password-prompt heat openstack role add --project service --user heat admin 3.\u521b\u5efa heat \u548c heat-cfn \u670d\u52a1\u53ca\u5176\u5bf9\u5e94\u7684API\u7aef\u70b9 openstack service create --name heat --description \"Orchestration\" orchestration openstack service create --name heat-cfn --description \"Orchestration\" cloudformation openstack endpoint create --region RegionOne orchestration public http://controller:8004/v1/%\\(tenant_id\\)s openstack endpoint create --region RegionOne orchestration internal http://controller:8004/v1/%\\(tenant_id\\)s openstack endpoint create --region RegionOne orchestration admin http://controller:8004/v1/%\\(tenant_id\\)s openstack endpoint create --region RegionOne cloudformation public http://controller:8000/v1 openstack endpoint create --region RegionOne cloudformation internal http://controller:8000/v1 openstack endpoint create --region RegionOne cloudformation admin http://controller:8000/v1 4.\u521b\u5efastack\u7ba1\u7406\u7684\u989d\u5916\u4fe1\u606f\uff0c\u5305\u62ec heat domain\u53ca\u5176\u5bf9\u5e94domain\u7684admin\u7528\u6237 heat_domain_admin \uff0c heat_stack_owner \u89d2\u8272\uff0c heat_stack_user \u89d2\u8272 openstack user create --domain heat --password-prompt heat_domain_admin openstack role add --domain heat --user-domain heat 
--user heat_domain_admin admin openstack role create heat_stack_owner openstack role create heat_stack_user 5.\u5b89\u88c5\u8f6f\u4ef6\u5305 yum install openstack-heat-api openstack-heat-api-cfn openstack-heat-engine 6.\u4fee\u6539\u914d\u7f6e\u6587\u4ef6 /etc/heat/heat.conf [DEFAULT] transport_url = rabbit://openstack:RABBIT_PASS@controller heat_metadata_server_url = http://controller:8000 heat_waitcondition_server_url = http://controller:8000/v1/waitcondition stack_domain_admin = heat_domain_admin stack_domain_admin_password = HEAT_DOMAIN_PASS stack_user_domain_name = heat [database] connection = mysql+pymysql://heat:HEAT_DBPASS@controller/heat [keystone_authtoken] www_authenticate_uri = http://controller:5000 auth_url = http://controller:5000 memcached_servers = controller:11211 auth_type = password project_domain_name = default user_domain_name = default project_name = service username = heat password = HEAT_PASS [trustee] auth_type = password auth_url = http://controller:5000 username = heat password = HEAT_PASS user_domain_name = default [clients_keystone] auth_uri = http://controller:5000 7.\u521d\u59cb\u5316 heat \u6570\u636e\u5e93\u8868 su -s /bin/sh -c \"heat-manage db_sync\" heat 8.\u542f\u52a8\u670d\u52a1 systemctl enable openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service systemctl start openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service \u57fa\u4e8eOpenStack SIG\u5f00\u53d1\u5de5\u5177oos\u5feb\u901f\u90e8\u7f72 \u00b6 oos (openEuler OpenStack SIG)\u662fOpenStack SIG\u63d0\u4f9b\u7684\u547d\u4ee4\u884c\u5de5\u5177\u3002\u5176\u4e2d oos env \u7cfb\u5217\u547d\u4ee4\u63d0\u4f9b\u4e86\u4e00\u952e\u90e8\u7f72OpenStack \uff08 all in one \u6216\u4e09\u8282\u70b9 cluster \uff09\u7684ansible\u811a\u672c\uff0c\u7528\u6237\u53ef\u4ee5\u4f7f\u7528\u8be5\u811a\u672c\u5feb\u901f\u90e8\u7f72\u4e00\u5957\u57fa\u4e8e openEuler RPM \u7684 OpenStack \u73af\u5883\u3002 oos \u5de5\u5177\u652f\u6301\u5bf9\u63a5\u4e91provider\uff08\u76ee\u524d\u4ec5\u652f\u6301\u534e\u4e3a\u4e91provider\uff09\u548c\u4e3b\u673a\u7eb3\u7ba1\u4e24\u79cd\u65b9\u5f0f\u6765\u90e8\u7f72 OpenStack \u73af\u5883\uff0c\u4e0b\u9762\u4ee5\u5bf9\u63a5\u534e\u4e3a\u4e91\u90e8\u7f72\u4e00\u5957 all in one \u7684OpenStack\u73af\u5883\u4e3a\u4f8b\u8bf4\u660e oos \u5de5\u5177\u7684\u4f7f\u7528\u65b9\u6cd5\u3002 \u5b89\u88c5 oos \u5de5\u5177 yum install openstack-sig-tool \u914d\u7f6e\u5bf9\u63a5\u534e\u4e3a\u4e91provider\u7684\u4fe1\u606f \u6253\u5f00 /usr/local/etc/oos/oos.conf \u6587\u4ef6\uff0c\u4fee\u6539\u914d\u7f6e\u4e3a\u60a8\u62e5\u6709\u7684\u534e\u4e3a\u4e91\u8d44\u6e90\u4fe1\u606f\uff1a [huaweicloud] ak = sk = region = ap-southeast-3 root_volume_size = 100 data_volume_size = 100 security_group_name = oos image_format = openEuler-%%(release)s-%%(arch)s vpc_name = oos_vpc subnet1_name = oos_subnet1 subnet2_name = oos_subnet2 \u914d\u7f6e OpenStack \u73af\u5883\u4fe1\u606f \u6253\u5f00 /usr/local/etc/oos/oos.conf \u6587\u4ef6\uff0c\u6839\u636e\u5f53\u524d\u673a\u5668\u73af\u5883\u548c\u9700\u6c42\u4fee\u6539\u914d\u7f6e\u3002\u5185\u5bb9\u5982\u4e0b\uff1a [environment] mysql_root_password = root mysql_project_password = root rabbitmq_password = root project_identity_password = root enabled_service = keystone,neutron,cinder,placement,nova,glance,horizon,aodh,ceilometer,cyborg,gnocchi,kolla,heat,swift,trove,tempest neutron_provider_interface_name = br-ex default_ext_subnet_range = 10.100.100.0/24 default_ext_subnet_gateway = 10.100.100.1 
neutron_dataplane_interface_name = eth1 cinder_block_device = vdb swift_storage_devices = vdc swift_hash_path_suffix = ash swift_hash_path_prefix = has glance_api_workers = 2 cinder_api_workers = 2 nova_api_workers = 2 nova_metadata_api_workers = 2 nova_conductor_workers = 2 nova_scheduler_workers = 2 neutron_api_workers = 2 horizon_allowed_host = * kolla_openeuler_plugin = false \u5173\u952e\u914d\u7f6e \u914d\u7f6e\u9879 \u89e3\u91ca enabled_service \u5b89\u88c5\u670d\u52a1\u5217\u8868\uff0c\u6839\u636e\u7528\u6237\u9700\u6c42\u81ea\u884c\u5220\u51cf neutron_provider_interface_name neutron L3\u7f51\u6865\u540d\u79f0 default_ext_subnet_range neutron\u79c1\u7f51IP\u6bb5 default_ext_subnet_gateway neutron\u79c1\u7f51gateway neutron_dataplane_interface_name neutron\u4f7f\u7528\u7684\u7f51\u5361\uff0c\u63a8\u8350\u4f7f\u7528\u4e00\u5f20\u65b0\u7684\u7f51\u5361\uff0c\u4ee5\u514d\u548c\u73b0\u6709\u7f51\u5361\u51b2\u7a81\uff0c\u9632\u6b62all in one\u4e3b\u673a\u65ad\u8fde\u7684\u60c5\u51b5 cinder_block_device cinder\u4f7f\u7528\u7684\u5377\u8bbe\u5907\u540d swift_storage_devices swift\u4f7f\u7528\u7684\u5377\u8bbe\u5907\u540d kolla_openeuler_plugin \u662f\u5426\u542f\u7528kolla plugin\u3002\u8bbe\u7f6e\u4e3aTrue\uff0ckolla\u5c06\u652f\u6301\u90e8\u7f72openEuler\u5bb9\u5668 \u534e\u4e3a\u4e91\u4e0a\u9762\u521b\u5efa\u4e00\u53f0openEuler 24.03-LTS-SP2\u7684x86_64\u865a\u62df\u673a\uff0c\u7528\u4e8e\u90e8\u7f72 all in one \u7684 OpenStack # sshpass\u5728`oos env create`\u8fc7\u7a0b\u4e2d\u88ab\u4f7f\u7528\uff0c\u7528\u4e8e\u914d\u7f6e\u5bf9\u76ee\u6807\u865a\u62df\u673a\u7684\u514d\u5bc6\u8bbf\u95ee dnf install sshpass oos env create -r 24.03-lts-SP2 -f small -a x86 -n test-oos all_in_one \u5177\u4f53\u7684\u53c2\u6570\u53ef\u4ee5\u4f7f\u7528 oos env create --help \u547d\u4ee4\u67e5\u770b \u90e8\u7f72OpenStack all in one \u73af\u5883 oos env setup test-oos -r wallaby \u5177\u4f53\u7684\u53c2\u6570\u53ef\u4ee5\u4f7f\u7528 oos env setup --help \u547d\u4ee4\u67e5\u770b \u521d\u59cb\u5316tempest\u73af\u5883 \u5982\u679c\u7528\u6237\u60f3\u4f7f\u7528\u8be5\u73af\u5883\u8fd0\u884ctempest\u6d4b\u8bd5\u7684\u8bdd\uff0c\u53ef\u4ee5\u6267\u884c\u547d\u4ee4 oos env init \uff0c\u4f1a\u81ea\u52a8\u628atempest\u9700\u8981\u7684OpenStack\u8d44\u6e90\u81ea\u52a8\u521b\u5efa\u597d oos env init test-oos \u547d\u4ee4\u6267\u884c\u6210\u529f\u540e\uff0c\u5728\u7528\u6237\u7684\u6839\u76ee\u5f55\u4e0b\u4f1a\u751f\u6210mytest\u76ee\u5f55\uff0c\u8fdb\u5165\u5176\u4e2d\u5c31\u53ef\u4ee5\u6267\u884ctempest run\u547d\u4ee4\u4e86\u3002 \u5982\u679c\u662f\u4ee5\u4e3b\u673a\u7eb3\u7ba1\u7684\u65b9\u5f0f\u90e8\u7f72 OpenStack \u73af\u5883\uff0c\u603b\u4f53\u903b\u8f91\u4e0e\u4e0a\u6587\u5bf9\u63a5\u534e\u4e3a\u4e91\u65f6\u4e00\u81f4\uff0c1\u30013\u30015\u30016\u6b65\u64cd\u4f5c\u4e0d\u53d8\uff0c\u53bb\u9664\u7b2c2\u6b65\u5bf9\u534e\u4e3a\u4e91provider\u4fe1\u606f\u7684\u914d\u7f6e\uff0c\u7b2c4\u6b65\u7531\u5728\u534e\u4e3a\u4e91\u4e0a\u521b\u5efa\u865a\u62df\u673a\u6539\u4e3a\u7eb3\u7ba1\u4e3b\u673a\u64cd\u4f5c\u3002 # sshpass\u5728`oos env create`\u8fc7\u7a0b\u4e2d\u88ab\u4f7f\u7528\uff0c\u7528\u4e8e\u914d\u7f6e\u5bf9\u76ee\u6807\u4e3b\u673a\u7684\u514d\u5bc6\u8bbf\u95ee dnf install sshpass oos env manage -r 24.03-lts-SP2 -i TARGET_MACHINE_IP -p TARGET_MACHINE_PASSWD -n test-oos \u66ff\u6362 TARGET_MACHINE_IP \u4e3a\u76ee\u6807\u673aip\u3001 TARGET_MACHINE_PASSWD \u4e3a\u76ee\u6807\u673a\u5bc6\u7801\u3002\u5177\u4f53\u7684\u53c2\u6570\u53ef\u4ee5\u4f7f\u7528 oos env manage --help 
\u547d\u4ee4\u67e5\u770b\u3002","title":"openEuler-24.03-LTS-SP2_Wallaby"},{"location":"install/openEuler-24.03-LTS-SP2/OpenStack-wallaby/#openstack-wallaby","text":"OpenStack-Wallaby \u90e8\u7f72\u6307\u5357 OpenStack \u7b80\u4ecb \u7ea6\u5b9a \u51c6\u5907\u73af\u5883 \u73af\u5883\u914d\u7f6e \u5b89\u88c5 SQL DataBase \u5b89\u88c5 RabbitMQ \u5b89\u88c5 Memcached \u5b89\u88c5 OpenStack Keystone \u5b89\u88c5 Glance \u5b89\u88c5 Placement\u5b89\u88c5 Nova \u5b89\u88c5 Neutron \u5b89\u88c5 Cinder \u5b89\u88c5 horizon \u5b89\u88c5 Tempest \u5b89\u88c5 Ironic \u5b89\u88c5 Kolla \u5b89\u88c5 Trove \u5b89\u88c5 Swift \u5b89\u88c5 Cyborg \u5b89\u88c5 Aodh \u5b89\u88c5 Gnocchi \u5b89\u88c5 Ceilometer \u5b89\u88c5 Heat \u5b89\u88c5 \u57fa\u4e8eOpenStack SIG\u5f00\u53d1\u5de5\u5177oos\u5feb\u901f\u90e8\u7f72","title":"OpenStack-Wallaby \u90e8\u7f72\u6307\u5357"},{"location":"install/openEuler-24.03-LTS-SP2/OpenStack-wallaby/#openstack","text":"OpenStack \u662f\u4e00\u4e2a\u793e\u533a\uff0c\u4e5f\u662f\u4e00\u4e2a\u9879\u76ee\u3002\u5b83\u63d0\u4f9b\u4e86\u4e00\u4e2a\u90e8\u7f72\u4e91\u7684\u64cd\u4f5c\u5e73\u53f0\u6216\u5de5\u5177\u96c6\uff0c\u4e3a\u7ec4\u7ec7\u63d0\u4f9b\u53ef\u6269\u5c55\u7684\u3001\u7075\u6d3b\u7684\u4e91\u8ba1\u7b97\u3002 \u4f5c\u4e3a\u4e00\u4e2a\u5f00\u6e90\u7684\u4e91\u8ba1\u7b97\u7ba1\u7406\u5e73\u53f0\uff0cOpenStack \u7531nova\u3001cinder\u3001neutron\u3001glance\u3001keystone\u3001horizon\u7b49\u51e0\u4e2a\u4e3b\u8981\u7684\u7ec4\u4ef6\u7ec4\u5408\u8d77\u6765\u5b8c\u6210\u5177\u4f53\u5de5\u4f5c\u3002OpenStack \u652f\u6301\u51e0\u4e4e\u6240\u6709\u7c7b\u578b\u7684\u4e91\u73af\u5883\uff0c\u9879\u76ee\u76ee\u6807\u662f\u63d0\u4f9b\u5b9e\u65bd\u7b80\u5355\u3001\u53ef\u5927\u89c4\u6a21\u6269\u5c55\u3001\u4e30\u5bcc\u3001\u6807\u51c6\u7edf\u4e00\u7684\u4e91\u8ba1\u7b97\u7ba1\u7406\u5e73\u53f0\u3002OpenStack \u901a\u8fc7\u5404\u79cd\u4e92\u8865\u7684\u670d\u52a1\u63d0\u4f9b\u4e86\u57fa\u7840\u8bbe\u65bd\u5373\u670d\u52a1\uff08IaaS\uff09\u7684\u89e3\u51b3\u65b9\u6848\uff0c\u6bcf\u4e2a\u670d\u52a1\u63d0\u4f9b API \u8fdb\u884c\u96c6\u6210\u3002 openEuler 24.03-LTS-SP2 \u7248\u672c\u5b98\u65b9\u6e90\u5df2\u7ecf\u652f\u6301 OpenStack-Wallaby \u7248\u672c\uff0c\u7528\u6237\u53ef\u4ee5\u914d\u7f6e\u597d yum \u6e90\u540e\u6839\u636e\u6b64\u6587\u6863\u8fdb\u884c OpenStack \u90e8\u7f72\u3002","title":"OpenStack \u7b80\u4ecb"},{"location":"install/openEuler-24.03-LTS-SP2/OpenStack-wallaby/#_1","text":"OpenStack \u652f\u6301\u591a\u79cd\u5f62\u6001\u90e8\u7f72\uff0c\u6b64\u6587\u6863\u652f\u6301 ALL in One \u4ee5\u53ca Distributed \u4e24\u79cd\u90e8\u7f72\u65b9\u5f0f\uff0c\u6309\u7167\u5982\u4e0b\u65b9\u5f0f\u7ea6\u5b9a\uff1a ALL in One \u6a21\u5f0f: \u5ffd\u7565\u6240\u6709\u53ef\u80fd\u7684\u540e\u7f00 Distributed \u6a21\u5f0f: \u4ee5 `(CTL)` \u4e3a\u540e\u7f00\u8868\u793a\u6b64\u6761\u914d\u7f6e\u6216\u8005\u547d\u4ee4\u4ec5\u9002\u7528`\u63a7\u5236\u8282\u70b9` \u4ee5 `(CPT)` \u4e3a\u540e\u7f00\u8868\u793a\u6b64\u6761\u914d\u7f6e\u6216\u8005\u547d\u4ee4\u4ec5\u9002\u7528`\u8ba1\u7b97\u8282\u70b9` \u4ee5 `(STG)` \u4e3a\u540e\u7f00\u8868\u793a\u6b64\u6761\u914d\u7f6e\u6216\u8005\u547d\u4ee4\u4ec5\u9002\u7528`\u5b58\u50a8\u8282\u70b9` \u9664\u6b64\u4e4b\u5916\u8868\u793a\u6b64\u6761\u914d\u7f6e\u6216\u8005\u547d\u4ee4\u540c\u65f6\u9002\u7528`\u63a7\u5236\u8282\u70b9`\u548c`\u8ba1\u7b97\u8282\u70b9` \u6ce8\u610f \u6d89\u53ca\u5230\u4ee5\u4e0a\u7ea6\u5b9a\u7684\u670d\u52a1\u5982\u4e0b\uff1a CinderSP2 Nova 
Neutron","title":"\u7ea6\u5b9a"},{"location":"install/openEuler-24.03-LTS-SP2/OpenStack-wallaby/#_2","text":"","title":"\u51c6\u5907\u73af\u5883"},{"location":"install/openEuler-24.03-LTS-SP2/OpenStack-wallaby/#_3","text":"\u914d\u7f6e 24.03 LTS SP2 \u5b98\u65b9 yum \u6e90\uff0c\u9700\u8981\u542f\u7528 EPOL \u8f6f\u4ef6\u4ed3\u4ee5\u652f\u6301 OpenStack yum update yum install openstack-release-wallaby yum clean all && yum makecache \u6ce8\u610f \uff1a\u5982\u679c\u4f60\u7684\u73af\u5883\u7684YUM\u6e90\u6ca1\u6709\u542f\u7528EPOL\uff0c\u9700\u8981\u540c\u65f6\u914d\u7f6eEPOL\uff0c\u786e\u4fddEPOL\u5df2\u914d\u7f6e\uff0c\u5982\u4e0b\u6240\u793a\u3002 vi /etc/yum.repos.d/openEuler.repo [EPOL] name=EPOL baseurl=http://repo.openeuler.org/openEuler-24.03-LTS-SP2/EPOL/main/$basearch/ enabled=1 gpgcheck=1 gpgkey=http://repo.openeuler.org/openEuler-24.03-LTS-SP2/OS/$basearch/RPM-GPG-KEY-openEuler EOF \u4fee\u6539\u4e3b\u673a\u540d\u4ee5\u53ca\u6620\u5c04 \u8bbe\u7f6e\u5404\u4e2a\u8282\u70b9\u7684\u4e3b\u673a\u540d hostnamectl set-hostname controller (CTL) hostnamectl set-hostname compute (CPT) \u5047\u8bbecontroller\u8282\u70b9\u7684IP\u662f 10.0.0.11 ,compute\u8282\u70b9\u7684IP\u662f 10.0.0.12 \uff08\u5982\u679c\u5b58\u5728\u7684\u8bdd\uff09,\u5219\u4e8e /etc/hosts \u65b0\u589e\u5982\u4e0b\uff1a 10.0.0.11 controller 10.0.0.12 compute","title":"\u73af\u5883\u914d\u7f6e"},{"location":"install/openEuler-24.03-LTS-SP2/OpenStack-wallaby/#sql-database","text":"\u6267\u884c\u5982\u4e0b\u547d\u4ee4\uff0c\u5b89\u88c5\u8f6f\u4ef6\u5305\u3002 yum install mariadb mariadb-server python3-PyMySQL \u6267\u884c\u5982\u4e0b\u547d\u4ee4\uff0c\u521b\u5efa\u5e76\u7f16\u8f91 /etc/my.cnf.d/openstack.cnf \u6587\u4ef6\u3002 vim /etc/my.cnf.d/openstack.cnf [mysqld] bind-address = 10.0.0.11 default-storage-engine = innodb innodb_file_per_table = on max_connections = 4096 collation-server = utf8_general_ci character-set-server = utf8 \u6ce8\u610f \u5176\u4e2d bind-address \u8bbe\u7f6e\u4e3a\u63a7\u5236\u8282\u70b9\u7684\u7ba1\u7406IP\u5730\u5740\u3002 \u542f\u52a8 DataBase \u670d\u52a1\uff0c\u5e76\u4e3a\u5176\u914d\u7f6e\u5f00\u673a\u81ea\u542f\u52a8\uff1a systemctl enable mariadb.service systemctl start mariadb.service \u914d\u7f6eDataBase\u7684\u9ed8\u8ba4\u5bc6\u7801\uff08\u53ef\u9009\uff09 mysql_secure_installation \u6ce8\u610f \u6839\u636e\u63d0\u793a\u8fdb\u884c\u5373\u53ef","title":"\u5b89\u88c5 SQL DataBase"},{"location":"install/openEuler-24.03-LTS-SP2/OpenStack-wallaby/#rabbitmq","text":"\u6267\u884c\u5982\u4e0b\u547d\u4ee4\uff0c\u5b89\u88c5\u8f6f\u4ef6\u5305\u3002 yum install rabbitmq-server \u542f\u52a8 RabbitMQ \u670d\u52a1\uff0c\u5e76\u4e3a\u5176\u914d\u7f6e\u5f00\u673a\u81ea\u542f\u52a8\u3002 systemctl enable rabbitmq-server.service systemctl start rabbitmq-server.service \u6dfb\u52a0 OpenStack\u7528\u6237\u3002 rabbitmqctl add_user openstack RABBIT_PASS \u6ce8\u610f \u66ff\u6362 RABBIT_PASS \uff0c\u4e3a OpenStack \u7528\u6237\u8bbe\u7f6e\u5bc6\u7801 \u8bbe\u7f6eopenstack\u7528\u6237\u6743\u9650\uff0c\u5141\u8bb8\u8fdb\u884c\u914d\u7f6e\u3001\u5199\u3001\u8bfb\uff1a rabbitmqctl set_permissions openstack \".*\" \".*\" \".*\"","title":"\u5b89\u88c5 RabbitMQ"},{"location":"install/openEuler-24.03-LTS-SP2/OpenStack-wallaby/#memcached","text":"\u6267\u884c\u5982\u4e0b\u547d\u4ee4\uff0c\u5b89\u88c5\u4f9d\u8d56\u8f6f\u4ef6\u5305\u3002 yum install memcached python3-memcached \u7f16\u8f91 /etc/sysconfig/memcached \u6587\u4ef6\u3002 vim /etc/sysconfig/memcached OPTIONS=\"-l 127.0.0.1,::1,controller\" 
\u6267\u884c\u5982\u4e0b\u547d\u4ee4\uff0c\u542f\u52a8 Memcached \u670d\u52a1\uff0c\u5e76\u4e3a\u5176\u914d\u7f6e\u5f00\u673a\u542f\u52a8\u3002 systemctl enable memcached.service systemctl start memcached.service \u6ce8\u610f \u670d\u52a1\u542f\u52a8\u540e\uff0c\u53ef\u4ee5\u901a\u8fc7\u547d\u4ee4 memcached-tool controller stats \u786e\u4fdd\u542f\u52a8\u6b63\u5e38\uff0c\u670d\u52a1\u53ef\u7528\uff0c\u5176\u4e2d\u53ef\u4ee5\u5c06 controller \u66ff\u6362\u4e3a\u63a7\u5236\u8282\u70b9\u7684\u7ba1\u7406IP\u5730\u5740\u3002","title":"\u5b89\u88c5 Memcached"},{"location":"install/openEuler-24.03-LTS-SP2/OpenStack-wallaby/#openstack_1","text":"","title":"\u5b89\u88c5 OpenStack"},{"location":"install/openEuler-24.03-LTS-SP2/OpenStack-wallaby/#keystone","text":"\u521b\u5efa keystone \u6570\u636e\u5e93\u5e76\u6388\u6743\u3002 mysql -u root -p MariaDB [(none)]> CREATE DATABASE keystone; MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \\ IDENTIFIED BY 'KEYSTONE_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \\ IDENTIFIED BY 'KEYSTONE_DBPASS'; MariaDB [(none)]> exit \u6ce8\u610f \u66ff\u6362 KEYSTONE_DBPASS \uff0c\u4e3a Keystone \u6570\u636e\u5e93\u8bbe\u7f6e\u5bc6\u7801 \u5b89\u88c5\u8f6f\u4ef6\u5305\u3002 yum install openstack-keystone httpd mod_wsgi \u914d\u7f6ekeystone\u76f8\u5173\u914d\u7f6e vim /etc/keystone/keystone.conf [database] connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone [token] provider = fernet \u89e3\u91ca [database]\u90e8\u5206\uff0c\u914d\u7f6e\u6570\u636e\u5e93\u5165\u53e3 [token]\u90e8\u5206\uff0c\u914d\u7f6etoken provider \u6ce8\u610f\uff1a \u66ff\u6362 KEYSTONE_DBPASS \u4e3a Keystone \u6570\u636e\u5e93\u7684\u5bc6\u7801 \u540c\u6b65\u6570\u636e\u5e93\u3002 su -s /bin/sh -c \"keystone-manage db_sync\" keystone \u521d\u59cb\u5316Fernet\u5bc6\u94a5\u4ed3\u5e93\u3002 keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone keystone-manage credential_setup --keystone-user keystone --keystone-group keystone \u542f\u52a8\u670d\u52a1\u3002 keystone-manage bootstrap --bootstrap-password ADMIN_PASS \\ --bootstrap-admin-url http://controller:5000/v3/ \\ --bootstrap-internal-url http://controller:5000/v3/ \\ --bootstrap-public-url http://controller:5000/v3/ \\ --bootstrap-region-id RegionOne \u6ce8\u610f \u66ff\u6362 ADMIN_PASS \uff0c\u4e3a admin \u7528\u6237\u8bbe\u7f6e\u5bc6\u7801 \u914d\u7f6eApache HTTP server vim /etc/httpd/conf/httpd.conf ServerName controller ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/ \u89e3\u91ca \u914d\u7f6e ServerName \u9879\u5f15\u7528\u63a7\u5236\u8282\u70b9 \u6ce8\u610f \u5982\u679c ServerName \u9879\u4e0d\u5b58\u5728\u5219\u9700\u8981\u521b\u5efa \u542f\u52a8Apache HTTP\u670d\u52a1\u3002 systemctl enable httpd.service systemctl start httpd.service \u521b\u5efa\u73af\u5883\u53d8\u91cf\u914d\u7f6e\u3002 cat << EOF >> ~/.admin-openrc export OS_PROJECT_DOMAIN_NAME=Default export OS_USER_DOMAIN_NAME=Default export OS_PROJECT_NAME=admin export OS_USERNAME=admin export OS_PASSWORD=ADMIN_PASS export OS_AUTH_URL=http://controller:5000/v3 export OS_IDENTITY_API_VERSION=3 export OS_IMAGE_API_VERSION=2 EOF \u6ce8\u610f \u66ff\u6362 ADMIN_PASS \u4e3a admin \u7528\u6237\u7684\u5bc6\u7801 \u4f9d\u6b21\u521b\u5efadomain, projects, users, roles\uff0c\u9700\u8981\u5148\u5b89\u88c5\u597dpython3-openstackclient\uff1a yum install python3-openstackclient \u5bfc\u5165\u73af\u5883\u53d8\u91cf source ~/.admin-openrc \u521b\u5efaproject service 
\uff0c\u5176\u4e2d domain default \u5728 keystone-manage bootstrap \u65f6\u5df2\u521b\u5efa openstack domain create --description \"An Example Domain\" example openstack project create --domain default --description \"Service Project\" service \u521b\u5efa\uff08non-admin\uff09project myproject \uff0cuser myuser \u548c role myrole \uff0c\u4e3a myproject \u548c myuser \u6dfb\u52a0\u89d2\u8272 myrole openstack project create --domain default --description \"Demo Project\" myproject openstack user create --domain default --password-prompt myuser openstack role create myrole openstack role add --project myproject --user myuser myrole \u9a8c\u8bc1 \u53d6\u6d88\u4e34\u65f6\u73af\u5883\u53d8\u91cfOS_AUTH_URL\u548cOS_PASSWORD\uff1a source ~/.admin-openrc unset OS_AUTH_URL OS_PASSWORD \u4e3aadmin\u7528\u6237\u8bf7\u6c42token\uff1a openstack --os-auth-url http://controller:5000/v3 \\ --os-project-domain-name Default --os-user-domain-name Default \\ --os-project-name admin --os-username admin token issue \u4e3amyuser\u7528\u6237\u8bf7\u6c42token\uff1a openstack --os-auth-url http://controller:5000/v3 \\ --os-project-domain-name Default --os-user-domain-name Default \\ --os-project-name myproject --os-username myuser token issue","title":"Keystone \u5b89\u88c5"},{"location":"install/openEuler-24.03-LTS-SP2/OpenStack-wallaby/#glance","text":"\u521b\u5efa\u6570\u636e\u5e93\u3001\u670d\u52a1\u51ed\u8bc1\u548c API \u7aef\u70b9 \u521b\u5efa\u6570\u636e\u5e93\uff1a mysql -u root -p MariaDB [(none)]> CREATE DATABASE glance; MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \\ IDENTIFIED BY 'GLANCE_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \\ IDENTIFIED BY 'GLANCE_DBPASS'; MariaDB [(none)]> exit \u6ce8\u610f: \u66ff\u6362 GLANCE_DBPASS \uff0c\u4e3a glance \u6570\u636e\u5e93\u8bbe\u7f6e\u5bc6\u7801 \u521b\u5efa\u670d\u52a1\u51ed\u8bc1 source ~/.admin-openrc openstack user create --domain default --password-prompt glance openstack role add --project service --user glance admin openstack service create --name glance --description \"OpenStack Image\" image \u521b\u5efa\u955c\u50cf\u670d\u52a1API\u7aef\u70b9\uff1a openstack endpoint create --region RegionOne image public http://controller:9292 openstack endpoint create --region RegionOne image internal http://controller:9292 openstack endpoint create --region RegionOne image admin http://controller:9292 \u5b89\u88c5\u8f6f\u4ef6\u5305 yum install openstack-glance \u914d\u7f6eglance\u76f8\u5173\u914d\u7f6e\uff1a vim /etc/glance/glance-api.conf [database] connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance [keystone_authtoken] www_authenticate_uri = http://controller:5000 auth_url = http://controller:5000 memcached_servers = controller:11211 auth_type = password project_domain_name = Default user_domain_name = Default project_name = service username = glance password = GLANCE_PASS [paste_deploy] flavor = keystone [glance_store] stores = file,http default_store = file filesystem_store_datadir = /var/lib/glance/images/ \u89e3\u91ca: [database]\u90e8\u5206\uff0c\u914d\u7f6e\u6570\u636e\u5e93\u5165\u53e3 [keystone_authtoken] [paste_deploy]\u90e8\u5206\uff0c\u914d\u7f6e\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u5165\u53e3 [glance_store]\u90e8\u5206\uff0c\u914d\u7f6e\u672c\u5730\u6587\u4ef6\u7cfb\u7edf\u5b58\u50a8\u548c\u955c\u50cf\u6587\u4ef6\u7684\u4f4d\u7f6e \u6ce8\u610f \u66ff\u6362 GLANCE_DBPASS \u4e3a glance \u6570\u636e\u5e93\u7684\u5bc6\u7801 \u66ff\u6362 GLANCE_PASS \u4e3a glance 
\u7528\u6237\u7684\u5bc6\u7801 \u540c\u6b65\u6570\u636e\u5e93\uff1a su -s /bin/sh -c \"glance-manage db_sync\" glance \u542f\u52a8\u670d\u52a1\uff1a systemctl enable openstack-glance-api.service systemctl start openstack-glance-api.service \u9a8c\u8bc1 \u4e0b\u8f7d\u955c\u50cf source ~/.admin-openrc wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img \u6ce8\u610f \u5982\u679c\u60a8\u4f7f\u7528\u7684\u73af\u5883\u662f\u9cb2\u9e4f\u67b6\u6784\uff0c\u8bf7\u4e0b\u8f7daarch64\u7248\u672c\u7684\u955c\u50cf\uff1b\u5df2\u5bf9\u955c\u50cfcirros-0.5.2-aarch64-disk.img\u8fdb\u884c\u6d4b\u8bd5\u3002 \u5411Image\u670d\u52a1\u4e0a\u4f20\u955c\u50cf\uff1a openstack image create --disk-format qcow2 --container-format bare \\ --file cirros-0.4.0-x86_64-disk.img --public cirros \u786e\u8ba4\u955c\u50cf\u4e0a\u4f20\u5e76\u9a8c\u8bc1\u5c5e\u6027\uff1a openstack image list","title":"Glance \u5b89\u88c5"},{"location":"install/openEuler-24.03-LTS-SP2/OpenStack-wallaby/#placement","text":"\u521b\u5efa\u6570\u636e\u5e93\u3001\u670d\u52a1\u51ed\u8bc1\u548c API \u7aef\u70b9 \u521b\u5efa\u6570\u636e\u5e93\uff1a \u4f5c\u4e3a root \u7528\u6237\u8bbf\u95ee\u6570\u636e\u5e93\uff0c\u521b\u5efa placement \u6570\u636e\u5e93\u5e76\u6388\u6743\u3002 mysql -u root -p MariaDB [(none)]> CREATE DATABASE placement; MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' \\ IDENTIFIED BY 'PLACEMENT_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' \\ IDENTIFIED BY 'PLACEMENT_DBPASS'; MariaDB [(none)]> exit \u6ce8\u610f \u66ff\u6362 PLACEMENT_DBPASS \u4e3a placement \u6570\u636e\u5e93\u8bbe\u7f6e\u5bc6\u7801 source ~/.admin-openrc \u6267\u884c\u5982\u4e0b\u547d\u4ee4\uff0c\u521b\u5efa placement \u670d\u52a1\u51ed\u8bc1\u3001\u521b\u5efa placement \u7528\u6237\u4ee5\u53ca\u6dfb\u52a0\u2018admin\u2019\u89d2\u8272\u5230\u7528\u6237\u2018placement\u2019\u3002 \u521b\u5efaPlacement API\u670d\u52a1 openstack user create --domain default --password-prompt placement openstack role add --project service --user placement admin openstack service create --name placement --description \"Placement API\" placement \u521b\u5efaplacement\u670d\u52a1API\u7aef\u70b9\uff1a openstack endpoint create --region RegionOne placement public http://controller:8778 openstack endpoint create --region RegionOne placement internal http://controller:8778 openstack endpoint create --region RegionOne placement admin http://controller:8778 \u5b89\u88c5\u548c\u914d\u7f6e \u5b89\u88c5\u8f6f\u4ef6\u5305\uff1a yum install openstack-placement-api \u914d\u7f6eplacement\uff1a \u7f16\u8f91 /etc/placement/placement.conf \u6587\u4ef6\uff1a \u5728[placement_database]\u90e8\u5206\uff0c\u914d\u7f6e\u6570\u636e\u5e93\u5165\u53e3 \u5728[api] [keystone_authtoken]\u90e8\u5206\uff0c\u914d\u7f6e\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u5165\u53e3 # vim /etc/placement/placement.conf [placement_database] # ... connection = mysql+pymysql://placement:PLACEMENT_DBPASS@controller/placement [api] # ... auth_strategy = keystone [keystone_authtoken] # ... 
auth_url = http://controller:5000/v3 memcached_servers = controller:11211 auth_type = password project_domain_name = Default user_domain_name = Default project_name = service username = placement password = PLACEMENT_PASS \u5176\u4e2d\uff0c\u66ff\u6362 PLACEMENT_DBPASS \u4e3a placement \u6570\u636e\u5e93\u7684\u5bc6\u7801\uff0c\u66ff\u6362 PLACEMENT_PASS \u4e3a placement \u7528\u6237\u7684\u5bc6\u7801\u3002 \u540c\u6b65\u6570\u636e\u5e93\uff1a su -s /bin/sh -c \"placement-manage db sync\" placement \u542f\u52a8httpd\u670d\u52a1\uff1a systemctl restart httpd \u9a8c\u8bc1 \u6267\u884c\u5982\u4e0b\u547d\u4ee4\uff0c\u6267\u884c\u72b6\u6001\u68c0\u67e5\uff1a source ~/.admin-openrc placement-status upgrade check \u5b89\u88c5osc-placement\uff0c\u5217\u51fa\u53ef\u7528\u7684\u8d44\u6e90\u7c7b\u522b\u53ca\u7279\u6027\uff1a yum install python3-osc-placement openstack --os-placement-api-version 1.2 resource class list --sort-column name openstack --os-placement-api-version 1.6 trait list --sort-column name","title":"Placement\u5b89\u88c5"},{"location":"install/openEuler-24.03-LTS-SP2/OpenStack-wallaby/#nova","text":"\u521b\u5efa\u6570\u636e\u5e93\u3001\u670d\u52a1\u51ed\u8bc1\u548c API \u7aef\u70b9 \u521b\u5efa\u6570\u636e\u5e93\uff1a mysql -u root -p (CTL) MariaDB [(none)]> CREATE DATABASE nova_api; MariaDB [(none)]> CREATE DATABASE nova; MariaDB [(none)]> CREATE DATABASE nova_cell0; MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \\ IDENTIFIED BY 'NOVA_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \\ IDENTIFIED BY 'NOVA_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \\ IDENTIFIED BY 'NOVA_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \\ IDENTIFIED BY 'NOVA_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \\ IDENTIFIED BY 'NOVA_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \\ IDENTIFIED BY 'NOVA_DBPASS'; MariaDB [(none)]> exit \u6ce8\u610f \u66ff\u6362NOVA_DBPASS\uff0c\u4e3anova\u6570\u636e\u5e93\u8bbe\u7f6e\u5bc6\u7801 source ~/.admin-openrc (CTL) \u521b\u5efanova\u670d\u52a1\u51ed\u8bc1: openstack user create --domain default --password-prompt nova (CTL) openstack role add --project service --user nova admin (CTL) openstack service create --name nova --description \"OpenStack Compute\" compute (CTL) \u521b\u5efanova API\u7aef\u70b9\uff1a openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1 (CTL) openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1 (CTL) openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1 (CTL) \u5b89\u88c5\u8f6f\u4ef6\u5305 yum install openstack-nova-api openstack-nova-conductor \\ (CTL) openstack-nova-novncproxy openstack-nova-scheduler yum install openstack-nova-compute (CPT) \u6ce8\u610f \u5982\u679c\u4e3aarm64\u7ed3\u6784\uff0c\u8fd8\u9700\u8981\u6267\u884c\u4ee5\u4e0b\u547d\u4ee4 yum install edk2-aarch64 (CPT) \u914d\u7f6enova\u76f8\u5173\u914d\u7f6e vim /etc/nova/nova.conf [DEFAULT] enabled_apis = osapi_compute,metadata transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/ my_ip = 10.0.0.1 use_neutron = true firewall_driver = nova.virt.firewall.NoopFirewallDriver compute_driver=libvirt.LibvirtDriver (CPT) instances_path = /var/lib/nova/instances/ (CPT) lock_path = /var/lib/nova/tmp (CPT) [api_database] connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api (CTL) 
[database] connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova (CTL) [api] auth_strategy = keystone [keystone_authtoken] www_authenticate_uri = http://controller:5000/ auth_url = http://controller:5000/ memcached_servers = controller:11211 auth_type = password project_domain_name = Default user_domain_name = Default project_name = service username = nova password = NOVA_PASS [vnc] enabled = true server_listen = $my_ip server_proxyclient_address = $my_ip novncproxy_base_url = http://controller:6080/vnc_auto.html (CPT) [libvirt] virt_type = qemu (CPT) cpu_mode = custom (CPT) cpu_model = cortex-a72 (CPT) [glance] api_servers = http://controller:9292 [oslo_concurrency] lock_path = /var/lib/nova/tmp (CTL) [placement] region_name = RegionOne project_domain_name = Default project_name = service auth_type = password user_domain_name = Default auth_url = http://controller:5000/v3 username = placement password = PLACEMENT_PASS [neutron] auth_url = http://controller:5000 auth_type = password project_domain_name = default user_domain_name = default region_name = RegionOne project_name = service username = neutron password = NEUTRON_PASS service_metadata_proxy = true (CTL) metadata_proxy_shared_secret = METADATA_SECRET (CTL) \u89e3\u91ca [default]\u90e8\u5206\uff0c\u542f\u7528\u8ba1\u7b97\u548c\u5143\u6570\u636e\u7684API\uff0c\u914d\u7f6eRabbitMQ\u6d88\u606f\u961f\u5217\u5165\u53e3\uff0c\u914d\u7f6emy_ip\uff0c\u542f\u7528\u7f51\u7edc\u670d\u52a1neutron\uff1b [api_database] [database]\u90e8\u5206\uff0c\u914d\u7f6e\u6570\u636e\u5e93\u5165\u53e3\uff1b [api] [keystone_authtoken]\u90e8\u5206\uff0c\u914d\u7f6e\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u5165\u53e3\uff1b [vnc]\u90e8\u5206\uff0c\u542f\u7528\u5e76\u914d\u7f6e\u8fdc\u7a0b\u63a7\u5236\u53f0\u5165\u53e3\uff1b [glance]\u90e8\u5206\uff0c\u914d\u7f6e\u955c\u50cf\u670d\u52a1API\u7684\u5730\u5740\uff1b [oslo_concurrency]\u90e8\u5206\uff0c\u914d\u7f6elock path\uff1b [placement]\u90e8\u5206\uff0c\u914d\u7f6eplacement\u670d\u52a1\u7684\u5165\u53e3\u3002 \u6ce8\u610f \u66ff\u6362 RABBIT_PASS \u4e3a RabbitMQ \u4e2d openstack \u8d26\u6237\u7684\u5bc6\u7801\uff1b \u914d\u7f6e my_ip \u4e3a\u63a7\u5236\u8282\u70b9\u7684\u7ba1\u7406IP\u5730\u5740\uff1b \u66ff\u6362 NOVA_DBPASS \u4e3anova\u6570\u636e\u5e93\u7684\u5bc6\u7801\uff1b \u66ff\u6362 NOVA_PASS \u4e3anova\u7528\u6237\u7684\u5bc6\u7801\uff1b \u66ff\u6362 PLACEMENT_PASS \u4e3aplacement\u7528\u6237\u7684\u5bc6\u7801\uff1b \u66ff\u6362 NEUTRON_PASS \u4e3aneutron\u7528\u6237\u7684\u5bc6\u7801\uff1b \u66ff\u6362 METADATA_SECRET \u4e3a\u5408\u9002\u7684\u5143\u6570\u636e\u4ee3\u7406secret\u3002 \u989d\u5916 \u786e\u5b9a\u662f\u5426\u652f\u6301\u865a\u62df\u673a\u786c\u4ef6\u52a0\u901f\uff08x86\u67b6\u6784\uff09\uff1a egrep -c '(vmx|svm)' /proc/cpuinfo (CPT) \u5982\u679c\u8fd4\u56de\u503c\u4e3a0\u5219\u4e0d\u652f\u6301\u786c\u4ef6\u52a0\u901f\uff0c\u9700\u8981\u914d\u7f6elibvirt\u4f7f\u7528QEMU\u800c\u4e0d\u662fKVM\uff1a vim /etc/nova/nova.conf (CPT) [libvirt] virt_type = qemu \u5982\u679c\u8fd4\u56de\u503c\u4e3a1\u6216\u66f4\u5927\u7684\u503c\uff0c\u5219\u652f\u6301\u786c\u4ef6\u52a0\u901f\uff0c\u4e0d\u9700\u8981\u8fdb\u884c\u989d\u5916\u7684\u914d\u7f6e \u6ce8\u610f \u5982\u679c\u4e3aarm64\u7ed3\u6784\uff0c\u8fd8\u9700\u8981\u6267\u884c\u4ee5\u4e0b\u547d\u4ee4 vim /etc/libvirt/qemu.conf nvram = [\"/usr/share/AAVMF/AAVMF_CODE.fd: \\ /usr/share/AAVMF/AAVMF_VARS.fd\", \\ \"/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw: \\ /usr/share/edk2/aarch64/vars-template-pflash.raw\"] vim /etc/qemu/firmware/edk2-aarch64.json 
{ \"description\": \"UEFI firmware for ARM64 virtual machines\", \"interface-types\": [ \"uefi\" ], \"mapping\": { \"device\": \"flash\", \"executable\": { \"filename\": \"/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw\", \"format\": \"raw\" }, \"nvram-template\": { \"filename\": \"/usr/share/edk2/aarch64/vars-template-pflash.raw\", \"format\": \"raw\" } }, \"targets\": [ { \"architecture\": \"aarch64\", \"machines\": [ \"virt-*\" ] } ], \"features\": [ ], \"tags\": [ ] } (CPT) \u540c\u6b65\u6570\u636e\u5e93 \u540c\u6b65nova-api\u6570\u636e\u5e93\uff1a su -s /bin/sh -c \"nova-manage api_db sync\" nova (CTL) \u6ce8\u518ccell0\u6570\u636e\u5e93\uff1a su -s /bin/sh -c \"nova-manage cell_v2 map_cell0\" nova (CTL) \u521b\u5efacell1 cell\uff1a su -s /bin/sh -c \"nova-manage cell_v2 create_cell --name=cell1 --verbose\" nova (CTL) \u540c\u6b65nova\u6570\u636e\u5e93\uff1a su -s /bin/sh -c \"nova-manage db sync\" nova (CTL) \u9a8c\u8bc1cell0\u548ccell1\u6ce8\u518c\u6b63\u786e\uff1a su -s /bin/sh -c \"nova-manage cell_v2 list_cells\" nova (CTL) \u6dfb\u52a0\u8ba1\u7b97\u8282\u70b9\u5230openstack\u96c6\u7fa4 su -s /bin/sh -c \"nova-manage cell_v2 discover_hosts --verbose\" nova (CPT) \u542f\u52a8\u670d\u52a1 systemctl enable \\ (CTL) openstack-nova-api.service \\ openstack-nova-scheduler.service \\ openstack-nova-conductor.service \\ openstack-nova-novncproxy.service systemctl start \\ (CTL) openstack-nova-api.service \\ openstack-nova-scheduler.service \\ openstack-nova-conductor.service \\ openstack-nova-novncproxy.service systemctl enable libvirtd.service openstack-nova-compute.service (CPT) systemctl start libvirtd.service openstack-nova-compute.service (CPT) \u9a8c\u8bc1 source ~/.admin-openrc (CTL) \u5217\u51fa\u670d\u52a1\u7ec4\u4ef6\uff0c\u9a8c\u8bc1\u6bcf\u4e2a\u6d41\u7a0b\u90fd\u6210\u529f\u542f\u52a8\u548c\u6ce8\u518c\uff1a openstack compute service list (CTL) \u5217\u51fa\u8eab\u4efd\u670d\u52a1\u4e2d\u7684API\u7aef\u70b9\uff0c\u9a8c\u8bc1\u4e0e\u8eab\u4efd\u670d\u52a1\u7684\u8fde\u63a5\uff1a openstack catalog list (CTL) \u5217\u51fa\u955c\u50cf\u670d\u52a1\u4e2d\u7684\u955c\u50cf\uff0c\u9a8c\u8bc1\u4e0e\u955c\u50cf\u670d\u52a1\u7684\u8fde\u63a5\uff1a openstack image list (CTL) \u68c0\u67e5cells\u662f\u5426\u8fd0\u4f5c\u6210\u529f\uff0c\u4ee5\u53ca\u5176\u4ed6\u5fc5\u8981\u6761\u4ef6\u662f\u5426\u5df2\u5177\u5907\u3002 nova-status upgrade check (CTL)","title":"Nova \u5b89\u88c5"},{"location":"install/openEuler-24.03-LTS-SP2/OpenStack-wallaby/#neutron","text":"\u521b\u5efa\u6570\u636e\u5e93\u3001\u670d\u52a1\u51ed\u8bc1\u548c API \u7aef\u70b9 \u521b\u5efa\u6570\u636e\u5e93\uff1a mysql -u root -p (CTL) MariaDB [(none)]> CREATE DATABASE neutron; MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \\ IDENTIFIED BY 'NEUTRON_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \\ IDENTIFIED BY 'NEUTRON_DBPASS'; MariaDB [(none)]> exit \u6ce8\u610f \u66ff\u6362 NEUTRON_DBPASS \u4e3a neutron \u6570\u636e\u5e93\u8bbe\u7f6e\u5bc6\u7801\u3002 source ~/.admin-openrc (CTL) \u521b\u5efaneutron\u670d\u52a1\u51ed\u8bc1 openstack user create --domain default --password-prompt neutron (CTL) openstack role add --project service --user neutron admin (CTL) openstack service create --name neutron --description \"OpenStack Networking\" network (CTL) \u521b\u5efaNeutron\u670d\u52a1API\u7aef\u70b9\uff1a openstack endpoint create --region RegionOne network public http://controller:9696 (CTL) openstack endpoint create --region RegionOne network internal 
http://controller:9696 (CTL) openstack endpoint create --region RegionOne network admin http://controller:9696 (CTL) \u5b89\u88c5\u8f6f\u4ef6\u5305\uff1a yum install openstack-neutron openstack-neutron-linuxbridge ebtables ipset \\ (CTL) openstack-neutron-ml2 yum install openstack-neutron-linuxbridge ebtables ipset (CPT) \u914d\u7f6eneutron\u76f8\u5173\u914d\u7f6e\uff1a \u914d\u7f6e\u4e3b\u4f53\u914d\u7f6e vim /etc/neutron/neutron.conf [database] connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron (CTL) [DEFAULT] core_plugin = ml2 (CTL) service_plugins = router (CTL) allow_overlapping_ips = true (CTL) transport_url = rabbit://openstack:RABBIT_PASS@controller auth_strategy = keystone notify_nova_on_port_status_changes = true (CTL) notify_nova_on_port_data_changes = true (CTL) api_workers = 3 (CTL) [keystone_authtoken] www_authenticate_uri = http://controller:5000 auth_url = http://controller:5000 memcached_servers = controller:11211 auth_type = password project_domain_name = Default user_domain_name = Default project_name = service username = neutron password = NEUTRON_PASS [nova] auth_url = http://controller:5000 (CTL) auth_type = password (CTL) project_domain_name = Default (CTL) user_domain_name = Default (CTL) region_name = RegionOne (CTL) project_name = service (CTL) username = nova (CTL) password = NOVA_PASS (CTL) [oslo_concurrency] lock_path = /var/lib/neutron/tmp \u89e3\u91ca [database]\u90e8\u5206\uff0c\u914d\u7f6e\u6570\u636e\u5e93\u5165\u53e3\uff1b [default]\u90e8\u5206\uff0c\u542f\u7528ml2\u63d2\u4ef6\u548crouter\u63d2\u4ef6\uff0c\u5141\u8bb8ip\u5730\u5740\u91cd\u53e0\uff0c\u914d\u7f6eRabbitMQ\u6d88\u606f\u961f\u5217\u5165\u53e3\uff1b [default] [keystone]\u90e8\u5206\uff0c\u914d\u7f6e\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u5165\u53e3\uff1b [default] [nova]\u90e8\u5206\uff0c\u914d\u7f6e\u7f51\u7edc\u6765\u901a\u77e5\u8ba1\u7b97\u7f51\u7edc\u62d3\u6251\u7684\u53d8\u5316\uff1b [oslo_concurrency]\u90e8\u5206\uff0c\u914d\u7f6elock path\u3002 \u6ce8\u610f \u66ff\u6362 NEUTRON_DBPASS \u4e3a neutron \u6570\u636e\u5e93\u7684\u5bc6\u7801\uff1b \u66ff\u6362 RABBIT_PASS \u4e3a RabbitMQ\u4e2dopenstack \u8d26\u6237\u7684\u5bc6\u7801\uff1b \u66ff\u6362 NEUTRON_PASS \u4e3a neutron \u7528\u6237\u7684\u5bc6\u7801\uff1b \u66ff\u6362 NOVA_PASS \u4e3a nova \u7528\u6237\u7684\u5bc6\u7801\u3002 \u914d\u7f6eML2\u63d2\u4ef6\uff1a vim /etc/neutron/plugins/ml2/ml2_conf.ini [ml2] type_drivers = flat,vlan,vxlan tenant_network_types = vxlan mechanism_drivers = linuxbridge,l2population extension_drivers = port_security [ml2_type_flat] flat_networks = provider [ml2_type_vxlan] vni_ranges = 1:1000 [securitygroup] enable_ipset = true \u521b\u5efa/etc/neutron/plugin.ini\u7684\u7b26\u53f7\u94fe\u63a5 ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini \u6ce8\u610f [ml2]\u90e8\u5206\uff0c\u542f\u7528 flat\u3001vlan\u3001vxlan \u7f51\u7edc\uff0c\u542f\u7528 linuxbridge \u53ca l2population \u673a\u5236\uff0c\u542f\u7528\u7aef\u53e3\u5b89\u5168\u6269\u5c55\u9a71\u52a8\uff1b [ml2_type_flat]\u90e8\u5206\uff0c\u914d\u7f6e flat \u7f51\u7edc\u4e3a provider \u865a\u62df\u7f51\u7edc\uff1b [ml2_type_vxlan]\u90e8\u5206\uff0c\u914d\u7f6e VXLAN \u7f51\u7edc\u6807\u8bc6\u7b26\u8303\u56f4\uff1b [securitygroup]\u90e8\u5206\uff0c\u914d\u7f6e\u5141\u8bb8 ipset\u3002 \u8865\u5145 l2 \u7684\u5177\u4f53\u914d\u7f6e\u53ef\u4ee5\u6839\u636e\u7528\u6237\u9700\u6c42\u81ea\u884c\u4fee\u6539\uff0c\u672c\u6587\u4f7f\u7528\u7684\u662fprovider network + linuxbridge \u914d\u7f6e Linux bridge \u4ee3\u7406\uff1a vim 
/etc/neutron/plugins/ml2/linuxbridge_agent.ini [linux_bridge] physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME [vxlan] enable_vxlan = true local_ip = OVERLAY_INTERFACE_IP_ADDRESS l2_population = true [securitygroup] enable_security_group = true firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver \u89e3\u91ca [linux_bridge]\u90e8\u5206\uff0c\u6620\u5c04 provider \u865a\u62df\u7f51\u7edc\u5230\u7269\u7406\u7f51\u7edc\u63a5\u53e3\uff1b [vxlan]\u90e8\u5206\uff0c\u542f\u7528 vxlan \u8986\u76d6\u7f51\u7edc\uff0c\u914d\u7f6e\u5904\u7406\u8986\u76d6\u7f51\u7edc\u7684\u7269\u7406\u7f51\u7edc\u63a5\u53e3 IP \u5730\u5740\uff0c\u542f\u7528 layer-2 population\uff1b [securitygroup]\u90e8\u5206\uff0c\u5141\u8bb8\u5b89\u5168\u7ec4\uff0c\u914d\u7f6e linux bridge iptables \u9632\u706b\u5899\u9a71\u52a8\u3002 \u6ce8\u610f \u66ff\u6362 PROVIDER_INTERFACE_NAME \u4e3a\u7269\u7406\u7f51\u7edc\u63a5\u53e3\uff1b \u66ff\u6362 OVERLAY_INTERFACE_IP_ADDRESS \u4e3a\u63a7\u5236\u8282\u70b9\u7684\u7ba1\u7406IP\u5730\u5740\u3002 \u914d\u7f6eLayer-3\u4ee3\u7406\uff1a vim /etc/neutron/l3_agent.ini (CTL) [DEFAULT] interface_driver = linuxbridge \u89e3\u91ca \u5728[default]\u90e8\u5206\uff0c\u914d\u7f6e\u63a5\u53e3\u9a71\u52a8\u4e3alinuxbridge \u914d\u7f6eDHCP\u4ee3\u7406\uff1a vim /etc/neutron/dhcp_agent.ini (CTL) [DEFAULT] interface_driver = linuxbridge dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq enable_isolated_metadata = true \u89e3\u91ca [default]\u90e8\u5206\uff0c\u914d\u7f6elinuxbridge\u63a5\u53e3\u9a71\u52a8\u3001Dnsmasq DHCP\u9a71\u52a8\uff0c\u542f\u7528\u9694\u79bb\u7684\u5143\u6570\u636e\u3002 \u914d\u7f6emetadata\u4ee3\u7406\uff1a vim /etc/neutron/metadata_agent.ini (CTL) [DEFAULT] nova_metadata_host = controller metadata_proxy_shared_secret = METADATA_SECRET \u89e3\u91ca [default]\u90e8\u5206\uff0c\u914d\u7f6e\u5143\u6570\u636e\u4e3b\u673a\u548cshared secret\u3002 \u6ce8\u610f \u66ff\u6362 METADATA_SECRET \u4e3a\u5408\u9002\u7684\u5143\u6570\u636e\u4ee3\u7406secret\u3002 \u914d\u7f6enova\u76f8\u5173\u914d\u7f6e vim /etc/nova/nova.conf [neutron] auth_url = http://controller:5000 auth_type = password project_domain_name = Default user_domain_name = Default region_name = RegionOne project_name = service username = neutron password = NEUTRON_PASS service_metadata_proxy = true (CTL) metadata_proxy_shared_secret = METADATA_SECRET (CTL) \u89e3\u91ca [neutron]\u90e8\u5206\uff0c\u914d\u7f6e\u8bbf\u95ee\u53c2\u6570\uff0c\u542f\u7528\u5143\u6570\u636e\u4ee3\u7406\uff0c\u914d\u7f6esecret\u3002 \u6ce8\u610f \u66ff\u6362 NEUTRON_PASS \u4e3a neutron \u7528\u6237\u7684\u5bc6\u7801\uff1b \u66ff\u6362 METADATA_SECRET \u4e3a\u5408\u9002\u7684\u5143\u6570\u636e\u4ee3\u7406secret\u3002 \u540c\u6b65\u6570\u636e\u5e93\uff1a su -s /bin/sh -c \"neutron-db-manage --config-file /etc/neutron/neutron.conf \\ --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head\" neutron \u91cd\u542f\u8ba1\u7b97API\u670d\u52a1\uff1a systemctl restart openstack-nova-api.service \u542f\u52a8\u7f51\u7edc\u670d\u52a1 systemctl enable neutron-server.service neutron-linuxbridge-agent.service \\ (CTL) neutron-dhcp-agent.service neutron-metadata-agent.service systemctl enable neutron-l3-agent.service systemctl restart openstack-nova-api.service neutron-server.service (CTL) neutron-linuxbridge-agent.service neutron-dhcp-agent.service \\ neutron-metadata-agent.service neutron-l3-agent.service systemctl enable neutron-linuxbridge-agent.service (CPT) systemctl restart neutron-linuxbridge-agent.service 
openstack-nova-compute.service (CPT) \u9a8c\u8bc1 \u9a8c\u8bc1 neutron \u4ee3\u7406\u542f\u52a8\u6210\u529f\uff1a openstack network agent list","title":"Neutron \u5b89\u88c5"},{"location":"install/openEuler-24.03-LTS-SP2/OpenStack-wallaby/#cinder","text":"\u521b\u5efa\u6570\u636e\u5e93\u3001\u670d\u52a1\u51ed\u8bc1\u548c API \u7aef\u70b9 \u521b\u5efa\u6570\u636e\u5e93\uff1a mysql -u root -p MariaDB [(none)]> CREATE DATABASE cinder; MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \\ IDENTIFIED BY 'CINDER_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \\ IDENTIFIED BY 'CINDER_DBPASS'; MariaDB [(none)]> exit \u6ce8\u610f \u66ff\u6362 CINDER_DBPASS \u4e3acinder\u6570\u636e\u5e93\u8bbe\u7f6e\u5bc6\u7801\u3002 source ~/.admin-openrc \u521b\u5efacinder\u670d\u52a1\u51ed\u8bc1\uff1a openstack user create --domain default --password-prompt cinder openstack role add --project service --user cinder admin openstack service create --name cinderv2 --description \"OpenStack Block Storage\" volumev2 openstack service create --name cinderv3 --description \"OpenStack Block Storage\" volumev3 \u521b\u5efa\u5757\u5b58\u50a8\u670d\u52a1API\u7aef\u70b9\uff1a openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\\(project_id\\)s openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\\(project_id\\)s openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\\(project_id\\)s openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\\(project_id\\)s openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\\(project_id\\)s openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\\(project_id\\)s \u5b89\u88c5\u8f6f\u4ef6\u5305\uff1a yum install openstack-cinder-api openstack-cinder-scheduler (CTL) yum install lvm2 device-mapper-persistent-data scsi-target-utils rpcbind nfs-utils \\ (STG) openstack-cinder-volume openstack-cinder-backup \u51c6\u5907\u5b58\u50a8\u8bbe\u5907\uff0c\u4ee5\u4e0b\u4ec5\u4e3a\u793a\u4f8b\uff1a pvcreate /dev/vdb vgcreate cinder-volumes /dev/vdb vim /etc/lvm/lvm.conf devices { ... 
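# 只接受 /dev/vdb 作为 LVM 物理卷，拒绝其余所有块设备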
filter = [ \"a/vdb/\", \"r/.*/\"] \u89e3\u91ca \u5728devices\u90e8\u5206\uff0c\u6dfb\u52a0\u8fc7\u6ee4\u4ee5\u63a5\u53d7/dev/vdb\u8bbe\u5907\u62d2\u7edd\u5176\u4ed6\u8bbe\u5907\u3002 \u51c6\u5907NFS mkdir -p /root/cinder/backup cat << EOF >> /etc/export /root/cinder/backup 192.168.1.0/24(rw,sync,no_root_squash,no_all_squash) EOF \u914d\u7f6ecinder\u76f8\u5173\u914d\u7f6e\uff1a vim /etc/cinder/cinder.conf [DEFAULT] transport_url = rabbit://openstack:RABBIT_PASS@controller auth_strategy = keystone my_ip = 10.0.0.11 enabled_backends = lvm (STG) backup_driver=cinder.backup.drivers.nfs.NFSBackupDriver (STG) backup_share=HOST:PATH (STG) [database] connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder [keystone_authtoken] www_authenticate_uri = http://controller:5000 auth_url = http://controller:5000 memcached_servers = controller:11211 auth_type = password project_domain_name = Default user_domain_name = Default project_name = service username = cinder password = CINDER_PASS [oslo_concurrency] lock_path = /var/lib/cinder/tmp [lvm] volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver (STG) volume_group = cinder-volumes (STG) iscsi_protocol = iscsi (STG) iscsi_helper = tgtadm (STG) \u89e3\u91ca [database]\u90e8\u5206\uff0c\u914d\u7f6e\u6570\u636e\u5e93\u5165\u53e3\uff1b [DEFAULT]\u90e8\u5206\uff0c\u914d\u7f6eRabbitMQ\u6d88\u606f\u961f\u5217\u5165\u53e3\uff0c\u914d\u7f6emy_ip\uff1b [DEFAULT] [keystone_authtoken]\u90e8\u5206\uff0c\u914d\u7f6e\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u5165\u53e3\uff1b [oslo_concurrency]\u90e8\u5206\uff0c\u914d\u7f6elock path\u3002 \u6ce8\u610f \u66ff\u6362 CINDER_DBPASS \u4e3a cinder \u6570\u636e\u5e93\u7684\u5bc6\u7801\uff1b \u66ff\u6362 RABBIT_PASS \u4e3a RabbitMQ \u4e2d openstack \u8d26\u6237\u7684\u5bc6\u7801\uff1b \u914d\u7f6e my_ip \u4e3a\u63a7\u5236\u8282\u70b9\u7684\u7ba1\u7406 IP \u5730\u5740\uff1b \u66ff\u6362 CINDER_PASS \u4e3a cinder \u7528\u6237\u7684\u5bc6\u7801\uff1b \u66ff\u6362 HOST:PATH \u4e3a NFS \u7684HOSTIP\u548c\u5171\u4eab\u8def\u5f84\uff1b \u540c\u6b65\u6570\u636e\u5e93\uff1a su -s /bin/sh -c \"cinder-manage db sync\" cinder (CTL) \u914d\u7f6enova\uff1a vim /etc/nova/nova.conf (CTL) [cinder] os_region_name = RegionOne \u91cd\u542f\u8ba1\u7b97API\u670d\u52a1 systemctl restart openstack-nova-api.service \u542f\u52a8cinder\u670d\u52a1 systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service (CTL) systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service (CTL) systemctl enable rpcbind.service nfs-server.service tgtd.service iscsid.service \\ (STG) openstack-cinder-volume.service \\ openstack-cinder-backup.service systemctl start rpcbind.service nfs-server.service tgtd.service iscsid.service \\ (STG) openstack-cinder-volume.service \\ openstack-cinder-backup.service \u6ce8\u610f \u5f53cinder\u4f7f\u7528tgtadm\u7684\u65b9\u5f0f\u6302\u5377\u7684\u65f6\u5019\uff0c\u8981\u4fee\u6539/etc/tgt/tgtd.conf\uff0c\u5185\u5bb9\u5982\u4e0b\uff0c\u4fdd\u8bc1tgtd\u53ef\u4ee5\u53d1\u73b0cinder-volume\u7684iscsi target\u3002 include /var/lib/cinder/volumes/* \u9a8c\u8bc1 source ~/.admin-openrc openstack volume service list","title":"Cinder \u5b89\u88c5"},{"location":"install/openEuler-24.03-LTS-SP2/OpenStack-wallaby/#horizon","text":"\u5b89\u88c5\u8f6f\u4ef6\u5305 yum install openstack-dashboard \u4fee\u6539\u6587\u4ef6 \u4fee\u6539\u53d8\u91cf vim /etc/openstack-dashboard/local_settings OPENSTACK_HOST = \"controller\" ALLOWED_HOSTS = ['*', ] SESSION_ENGINE = 'django.contrib.sessions.backends.cache' CACHES 
## Horizon Installation

Install the package:

```shell
yum install openstack-dashboard
```

Modify the configuration file `/etc/openstack-dashboard/local_settings`:

```shell
vim /etc/openstack-dashboard/local_settings

OPENSTACK_HOST = "controller"
ALLOWED_HOSTS = ['*', ]

SESSION_ENGINE = 'django.contrib.sessions.backends.cache'

CACHES = {
    'default': {
         'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
         'LOCATION': 'controller:11211',
    }
}

OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "member"

WEBROOT = '/dashboard'
POLICY_FILES_PATH = "/etc/openstack-dashboard"

OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 3,
}
```

Restart the httpd service:

```shell
systemctl restart httpd.service memcached.service
```

Verification: open a browser, go to `http://HOSTIP/dashboard/`, and log in to Horizon.

Note: replace `HOSTIP` with the management-plane IP address of the controller node.

## Tempest Installation

Tempest is the integration test suite of OpenStack. It is recommended if you need comprehensive automated functional testing of the installed OpenStack environment; otherwise it can be skipped.

Install Tempest:

```shell
yum install openstack-tempest
```

Initialize a workspace:

```shell
tempest init mytest
```

Modify the configuration file:

```shell
cd mytest
vi etc/tempest.conf
```

`tempest.conf` must describe the current OpenStack environment; see the official sample configuration for details.

Run the tests:

```shell
tempest run
```

Install Tempest extensions (optional)

The OpenStack services also provide their own Tempest test plugins, which can be installed to enrich the test coverage. In Wallaby, extension tests are provided for Cinder, Glance, Keystone, Ironic, and Trove and can be installed as follows:

```shell
yum install python3-cinder-tempest-plugin python3-glance-tempest-plugin python3-ironic-tempest-plugin python3-keystone-tempest-plugin python3-trove-tempest-plugin
```
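If a full `tempest run` is too heavy for a first check, it can be narrowed down. The following is only a sketch (run inside the `mytest` workspace); `--smoke`, `--regex`, and `--concurrency` are standard Tempest options, and the regular expression shown is just an example.

```shell
cd mytest

# Run only the smoke tests to validate the deployment quickly
tempest run --smoke --concurrency 2

# Run a specific subset, e.g. the Compute API tests
tempest run --regex '^tempest\.api\.compute'
```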
--description \"Ironic baremetal provisioning service\" baremetal openstack service create --name ironic-inspector --description \"Ironic inspector baremetal provisioning service\" baremetal-introspection openstack user create --password IRONIC_INSPECTOR_PASSWORD --email ironic_inspector@example.com ironic_inspector openstack role add --project service --user ironic-inspector admin 2\u3001\u521b\u5efaBare Metal\u670d\u52a1\u8bbf\u95ee\u5165\u53e3 openstack endpoint create --region RegionOne baremetal admin http://$IRONIC_NODE:6385 openstack endpoint create --region RegionOne baremetal public http://$IRONIC_NODE:6385 openstack endpoint create --region RegionOne baremetal internal http://$IRONIC_NODE:6385 openstack endpoint create --region RegionOne baremetal-introspection internal http://172.20.19.13:5050/v1 openstack endpoint create --region RegionOne baremetal-introspection public http://172.20.19.13:5050/v1 openstack endpoint create --region RegionOne baremetal-introspection admin http://172.20.19.13:5050/v1 \u914d\u7f6eironic-api\u670d\u52a1 \u914d\u7f6e\u6587\u4ef6\u8def\u5f84/etc/ironic/ironic.conf 1\u3001\u901a\u8fc7 connection \u9009\u9879\u914d\u7f6e\u6570\u636e\u5e93\u7684\u4f4d\u7f6e\uff0c\u5982\u4e0b\u6240\u793a\uff0c\u66ff\u6362 IRONIC_DBPASSWORD \u4e3a ironic \u7528\u6237\u7684\u5bc6\u7801\uff0c\u66ff\u6362 DB_IP \u4e3aDB\u670d\u52a1\u5668\u6240\u5728\u7684IP\u5730\u5740\uff1a [database] # The SQLAlchemy connection string used to connect to the # database (string value) connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic 2\u3001\u901a\u8fc7\u4ee5\u4e0b\u9009\u9879\u914d\u7f6eironic-api\u670d\u52a1\u4f7f\u7528RabbitMQ\u6d88\u606f\u4ee3\u7406\uff0c\u66ff\u6362 RPC_* \u4e3aRabbitMQ\u7684\u8be6\u7ec6\u5730\u5740\u548c\u51ed\u8bc1 [DEFAULT] # A URL representing the messaging driver to use and its full # configuration. (string value) transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/ \u7528\u6237\u4e5f\u53ef\u81ea\u884c\u4f7f\u7528json-rpc\u65b9\u5f0f\u66ff\u6362rabbitmq 3\u3001\u914d\u7f6eironic-api\u670d\u52a1\u4f7f\u7528\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u7684\u51ed\u8bc1\uff0c\u66ff\u6362 PUBLIC_IDENTITY_IP \u4e3a\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u5668\u7684\u516c\u5171IP\uff0c\u66ff\u6362 PRIVATE_IDENTITY_IP \u4e3a\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u5668\u7684\u79c1\u6709IP\uff0c\u66ff\u6362 IRONIC_PASSWORD \u4e3a\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u4e2d ironic \u7528\u6237\u7684\u5bc6\u7801\uff1a [DEFAULT] # Authentication strategy used by ironic-api: one of # \"keystone\" or \"noauth\". \"noauth\" should not be used in a # production environment because all authentication will be # disabled. 
Configure the ironic-api service

Configuration file path: `/etc/ironic/ironic.conf`.

1. Configure the database location via the `connection` option, replacing `IRONIC_DBPASSWORD` with the password of the `ironic` user and `DB_IP` with the IP address of the DB server:

```shell
[database]
# The SQLAlchemy connection string used to connect to the database (string value)
connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic
```

2. Configure ironic-api to use the RabbitMQ message broker, replacing `RPC_*` with the RabbitMQ address and credentials (json-rpc can also be used instead of RabbitMQ):

```shell
[DEFAULT]
# A URL representing the messaging driver to use and its full configuration. (string value)
transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
```

3. Configure the credentials ironic-api uses with the Identity service. Replace `PUBLIC_IDENTITY_IP` with the public IP of the Identity server, `PRIVATE_IDENTITY_IP` with its private IP, and `IRONIC_PASSWORD` with the password of the `ironic` user in the Identity service:

```shell
[DEFAULT]
# Authentication strategy used by ironic-api: one of "keystone" or "noauth".
# "noauth" should not be used in a production environment because all
# authentication will be disabled. (string value)
auth_strategy=keystone
host = controller
memcache_servers = controller:11211
enabled_network_interfaces = flat,noop,neutron
default_network_interface = noop
transport_url = rabbit://openstack:RABBITPASSWD@controller:5672/
enabled_hardware_types = ipmi
enabled_boot_interfaces = pxe
enabled_deploy_interfaces = direct
default_deploy_interface = direct
enabled_inspect_interfaces = inspector
enabled_management_interfaces = ipmitool
enabled_power_interfaces = ipmitool
enabled_rescue_interfaces = no-rescue,agent
isolinux_bin = /usr/share/syslinux/isolinux.bin
logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s

[keystone_authtoken]
# Authentication type to load (string value)
auth_type=password
# Complete public Identity API endpoint (string value)
www_authenticate_uri=http://PUBLIC_IDENTITY_IP:5000
# Complete admin Identity API endpoint. (string value)
auth_url=http://PRIVATE_IDENTITY_IP:5000
# Service username. (string value)
username=ironic
# Service account password. (string value)
password=IRONIC_PASSWORD
# Service tenant name. (string value)
project_name=service
# Domain name containing project (string value)
project_domain_name=Default
# User's domain name (string value)
user_domain_name=Default

[agent]
deploy_logs_collect = always
deploy_logs_local_path = /var/log/ironic/deploy
deploy_logs_storage_backend = local
image_download_source = http
stream_raw_images = false
force_raw_images = false
verify_ca = False

[oslo_concurrency]

[oslo_messaging_notifications]
transport_url = rabbit://openstack:123456@172.20.19.25:5672/
topics = notifications
driver = messagingv2

[oslo_messaging_rabbit]
amqp_durable_queues = True
rabbit_ha_queues = True

[pxe]
ipxe_enabled = false
pxe_append_params = nofb nomodeset vga=normal coreos.autologin ipa-insecure=1
image_cache_size = 204800
tftp_root=/var/lib/tftpboot/cephfs/
tftp_master_path=/var/lib/tftpboot/cephfs/master_images

[dhcp]
dhcp_provider = none
```

4. Create the Bare Metal service database tables:

```shell
ironic-dbsync --config-file /etc/ironic/ironic.conf create_schema
```

5. Restart the ironic-api service:

```shell
sudo systemctl restart openstack-ironic-api
```
Configure the ironic-conductor service

1. Replace `HOST_IP` with the IP of the conductor host:

```shell
[DEFAULT]
# IP address of this host. If unset, will determine the IP programmatically.
# If unable to do so, will use "127.0.0.1". (string value)
my_ip=HOST_IP
```

2. Configure the database location. ironic-conductor should use the same configuration as ironic-api. Replace `IRONIC_DBPASSWORD` with the password of the `ironic` user and `DB_IP` with the IP address of the DB server:

```shell
[database]
# The SQLAlchemy connection string to use to connect to the database. (string value)
connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic
```

3. Configure the RabbitMQ message broker; ironic-conductor should use the same configuration as ironic-api. Replace `RPC_*` with the RabbitMQ address and credentials (json-rpc can also be used instead of RabbitMQ):

```shell
[DEFAULT]
# A URL representing the messaging driver to use and its full configuration. (string value)
transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
```

4. Configure credentials for accessing other OpenStack services.

To communicate with other OpenStack services, the Bare Metal service needs to authenticate against the OpenStack Identity service when requesting them. The credentials for each service must be configured in the corresponding section of the configuration file:

- `[neutron]` - access to the OpenStack Networking service
- `[glance]` - access to the OpenStack Image service
- `[swift]` - access to the OpenStack Object Storage service
- `[cinder]` - access to the OpenStack Block Storage service
- `[inspector]` - access to the OpenStack bare metal introspection service
- `[service_catalog]` - a special entry that holds the credentials the Bare Metal service uses to discover its own API URL endpoint as registered in the OpenStack Identity service catalog

For simplicity, the same service user can be used for all services. For backward compatibility it should be the same user as configured in the `[keystone_authtoken]` section of the ironic-api service, but this is not mandatory; a different service user can be created and configured for each service.

In the following example, the credentials for accessing the OpenStack Networking service are configured such that:

- the Networking service is deployed in the Identity service region named `RegionOne`, with only the public endpoint registered in the service catalog;
- requests use HTTPS connections verified with a specific CA SSL certificate;
- the same service user as configured for ironic-api is used;
- the dynamic `password` authentication plugin discovers a suitable Identity service API version based on the other options.
```shell
[neutron]
# Authentication type to load (string value)
auth_type = password
# Authentication URL (string value)
auth_url=https://IDENTITY_IP:5000/
# Username (string value)
username=ironic
# User's password (string value)
password=IRONIC_PASSWORD
# Project name to scope to (string value)
project_name=service
# Domain ID containing project (string value)
project_domain_id=default
# User's domain id (string value)
user_domain_id=default
# PEM encoded Certificate Authority to use when verifying HTTPs connections. (string value)
cafile=/opt/stack/data/ca-bundle.pem
# The default region_name for endpoint URL discovery. (string value)
region_name = RegionOne
# List of interfaces, in order of preference, for endpoint URL. (list value)
valid_interfaces=public
```

By default, in order to communicate with another service, the Bare Metal service tries to discover a suitable endpoint for that service through the Identity service catalog. If you want to use a different endpoint for a particular service, specify it with the `endpoint_override` option in the Bare Metal service configuration file:

```shell
[neutron]
...
endpoint_override = <NEUTRON_API_ADDRESS>
```

5. Configure the allowed drivers and hardware types.

Set the hardware types allowed by the ironic-conductor service via `enabled_hardware_types`:

```shell
[DEFAULT]
enabled_hardware_types = ipmi
```

Configure the hardware interfaces:

```shell
enabled_boot_interfaces = pxe
enabled_deploy_interfaces = direct,iscsi
enabled_inspect_interfaces = inspector
enabled_management_interfaces = ipmitool
enabled_power_interfaces = ipmitool
```

Configure the interface defaults:

```shell
[DEFAULT]
default_deploy_interface = direct
default_network_interface = neutron
```

If any driver that uses Direct deploy is enabled, the Swift backend of the Image service must be installed and configured. The Ceph Object Gateway (RADOS Gateway) is also supported as an Image service backend.

6. Restart the ironic-conductor service:

```shell
sudo systemctl restart openstack-ironic-conductor
```
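Once ironic-api and ironic-conductor are both running, you can optionally confirm that the conductor registered the hardware types enabled above. This is a small sketch using the standard ironic OpenStack CLI plugin commands:

```shell
source ~/.admin-openrc

# The enabled hardware types (e.g. ipmi) should be listed, served by at least one conductor
openstack baremetal driver list
openstack baremetal conductor list
```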
Configure the ironic-inspector service

Configuration file path: `/etc/ironic-inspector/inspector.conf`.

1. Create the database:

```shell
# mysql -u root -p

MariaDB [(none)]> CREATE DATABASE ironic_inspector CHARACTER SET utf8;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic_inspector.* TO 'ironic_inspector'@'localhost' \
    IDENTIFIED BY 'IRONIC_INSPECTOR_DBPASSWORD';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic_inspector.* TO 'ironic_inspector'@'%' \
    IDENTIFIED BY 'IRONIC_INSPECTOR_DBPASSWORD';
```

2. Configure the database location via the `connection` option, replacing `IRONIC_INSPECTOR_DBPASSWORD` with the password of the `ironic_inspector` user and `DB_IP` with the IP address of the DB server:

```shell
[database]
backend = sqlalchemy
connection = mysql+pymysql://ironic_inspector:IRONIC_INSPECTOR_DBPASSWORD@DB_IP/ironic_inspector
min_pool_size = 100
max_pool_size = 500
pool_timeout = 30
max_retries = 5
max_overflow = 200
db_retry_interval = 2
db_inc_retry_interval = True
db_max_retry_interval = 2
db_max_retries = 5
```

3. Configure the message queue address:

```shell
[DEFAULT]
transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
```

4. Configure Keystone authentication:

```shell
[DEFAULT]
auth_strategy = keystone
timeout = 900
rootwrap_config = /etc/ironic-inspector/rootwrap.conf
logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s
log_dir = /var/log/ironic-inspector
state_path = /var/lib/ironic-inspector
use_stderr = False

[ironic]
api_endpoint = http://IRONIC_API_HOST_ADDRESS:6385
auth_type = password
auth_url = http://PUBLIC_IDENTITY_IP:5000
auth_strategy = keystone
ironic_url = http://IRONIC_API_HOST_ADDRESS:6385
os_region = RegionOne
project_name = service
project_domain_name = Default
user_domain_name = Default
username = IRONIC_SERVICE_USER_NAME
password = IRONIC_SERVICE_USER_PASSWORD

[keystone_authtoken]
auth_type = password
auth_url = http://control:5000
www_authenticate_uri = http://control:5000
project_domain_name = default
user_domain_name = default
project_name = service
username = ironic_inspector
password = IRONICPASSWD
region_name = RegionOne
memcache_servers = control:11211
token_cache_time = 300

[processing]
add_ports = active
processing_hooks = $default_processing_hooks,local_link_connection,lldp_basic
ramdisk_logs_dir = /var/log/ironic-inspector/ramdisk
always_store_ramdisk_logs = true
store_data = none
power_off = false

[pxe_filter]
driver = iptables

[capabilities]
boot_mode=True
```

5. Configure the ironic-inspector dnsmasq service:

```shell
# Configuration file: /etc/ironic-inspector/dnsmasq.conf
port=0
interface=enp3s0                        # replace with the actual listening network interface
dhcp-range=172.20.19.100,172.20.19.110  # replace with the actual DHCP address range
bind-interfaces
enable-tftp
dhcp-match=set:efi,option:client-arch,7
dhcp-match=set:efi,option:client-arch,9
dhcp-match=aarch64, option:client-arch,11
dhcp-boot=tag:aarch64,grubaa64.efi
dhcp-boot=tag:!aarch64,tag:efi,grubx64.efi
dhcp-boot=tag:!aarch64,tag:!efi,pxelinux.0
tftp-root=/tftpboot                     # replace with the actual tftpboot directory
log-facility=/var/log/dnsmasq.log
```

6. Disable DHCP on the subnet of the ironic provision network:

```shell
openstack subnet set --no-dhcp 72426e89-f552-4dc4-9ac7-c4e131ce7f3c
```

7. Initialize the ironic-inspector database (on the controller node):

```shell
ironic-inspector-dbsync --config-file /etc/ironic-inspector/inspector.conf upgrade
```

8. Start the services:

```shell
systemctl enable --now openstack-ironic-inspector.service
systemctl enable --now openstack-ironic-inspector-dnsmasq.service
```

Configure the httpd service

Create the httpd root directory used by ironic and set its owner and group. The path must match the `http_root` option in the `[deploy]` section of `/etc/ironic/ironic.conf`:

```shell
mkdir -p /var/lib/ironic/httproot
chown ironic.ironic /var/lib/ironic/httproot
```

Install and configure the httpd service.

Install the httpd service (skip if it is already installed):

```shell
yum install httpd -y
```

Create the file `/etc/httpd/conf.d/openstack-ironic-httpd.conf` with the following content:

```
Listen 8080

<VirtualHost *:8080>
    ServerName ironic.openeuler.com

    ErrorLog "/var/log/httpd/openstack-ironic-httpd-error_log"
    CustomLog "/var/log/httpd/openstack-ironic-httpd-access_log" "%h %l %u %t \"%r\" %>s %b"

    DocumentRoot "/var/lib/ironic/httproot"
    <Directory "/var/lib/ironic/httproot">
        Options Indexes FollowSymLinks
        Require all granted
    </Directory>

    LogLevel warn
    AddDefaultCharset UTF-8
    EnableSendfile on
</VirtualHost>
```

Note that the listening port must match the port specified in the `http_url` option of the `[deploy]` section in `/etc/ironic/ironic.conf`.

Restart the httpd service:

```shell
systemctl restart httpd
```
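A quick way to confirm that the inspector stack came up is to check the two units and poke the inspector API, which answers with its supported versions on port 5050. This is only a sketch and assumes you run it on the node hosting ironic-inspector:

```shell
systemctl is-active openstack-ironic-inspector.service openstack-ironic-inspector-dnsmasq.service

# The API root returns a JSON document describing the available API versions
curl http://127.0.0.1:5050
```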
Build the deploy ramdisk image

The Wallaby ramdisk image can be built with the ironic-python-agent service or the disk-image-builder tool, or with the community's latest ironic-python-agent-builder. Users may also choose other tools.

To use the native Wallaby tools, install the corresponding package:

```shell
yum install openstack-ironic-python-agent
# or
yum install diskimage-builder
```

See the official documentation for how to use them.

The following describes the complete process of building the deploy image used by ironic with ironic-python-agent-builder.

Install ironic-python-agent-builder

1. Install the tool:

```shell
pip install ironic-python-agent-builder
```

2. Adjust the python interpreter in the following files:

```shell
/usr/bin/yum
/usr/libexec/urlgrabber-ext-down
```

3. Install the other required tools:

```shell
yum install git
```

Since `DIB` depends on the `semanage` command, make sure it is available before building the image (`semanage --help`). If the command is missing, install it:

```shell
# First find out which package provides it
[root@localhost ~]# yum provides /usr/sbin/semanage
policycoreutils-python-2.5-34.el7.aarch64 : SELinux policy core python utilities

# Install it
[root@localhost ~]# yum install policycoreutils-python
```

Build the image

For the `arm` architecture, additionally set:

```shell
export ARCH=aarch64
```

Basic usage:

```shell
usage: ironic-python-agent-builder [-h] [-r RELEASE] [-o OUTPUT] [-e ELEMENT]
                                   [-b BRANCH] [-v] [--extra-args EXTRA_ARGS]
                                   distribution

positional arguments:
  distribution          Distribution to use

optional arguments:
  -h, --help            show this help message and exit
  -r RELEASE, --release RELEASE
                        Distribution release to use
  -o OUTPUT, --output OUTPUT
                        Output base file name
  -e ELEMENT, --element ELEMENT
                        Additional DIB element to use
  -b BRANCH, --branch BRANCH
                        If set, override the branch that is used for
                        ironic-python-agent and requirements
  -v, --verbose         Enable verbose logging in diskimage-builder
  --extra-args EXTRA_ARGS
                        Extra arguments to pass to diskimage-builder
```

Example:

```shell
ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky
```

Allow SSH login

Initialize the environment variables and build the image:

```shell
export DIB_DEV_USER_USERNAME=ipa
export DIB_DEV_USER_PWDLESS_SUDO=yes
export DIB_DEV_USER_PASSWORD='123'

ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky -e selinux-permissive -e devuser
```
Specify the source code repository

Initialize the corresponding environment variables and build the image:

```shell
# Specify the repository location and revision
DIB_REPOLOCATION_ironic_python_agent=git@172.20.2.149:liuzz/ironic-python-agent.git
DIB_REPOREF_ironic_python_agent=origin/develop

# Clone the code directly from gerrit
DIB_REPOLOCATION_ironic_python_agent=https://review.opendev.org/openstack/ironic-python-agent
DIB_REPOREF_ironic_python_agent=refs/changes/43/701043/1
```

Reference: [source-repositories](https://docs.openstack.org/diskimage-builder/latest/elements/source-repositories/README.html).

Specifying the repository location and revision has been verified to work.

Note

The PXE configuration file template in native OpenStack does not support the arm64 architecture; the native OpenStack code has to be modified by the user:

In the Wallaby release, the community ironic still does not support UEFI PXE boot on arm64. This shows up as a malformed generated `grub.cfg` (usually under `/tftpboot/`), which makes the PXE boot fail. An example of the faulty generated configuration file:

![ironic-err](../../img/install/ironic-err.png)

As shown above, on the arm architecture the commands that load the vmlinux and ramdisk images are `linux` and `initrd` respectively; the highlighted commands in the figure are for UEFI PXE boot on x86. Users need to modify the code that generates `grub.cfg` accordingly.

TLS errors when ironic queries the command execution status from IPA:

In the Wallaby release, IPA and ironic both enable TLS authentication by default when sending requests to each other. Disable it as described in the official documentation:

1. Modify the following options in the ironic configuration file (`/etc/ironic/ironic.conf`), adding `ipa-insecure=1`:

```
[agent]
verify_ca = False

[pxe]
pxe_append_params = nofb nomodeset vga=normal coreos.autologin ipa-insecure=1
```

2. Add the IPA configuration file `/etc/ironic_python_agent/ironic_python_agent.conf` to the ramdisk image (the `/etc/ironic_python_agent` directory must be created first) with the following TLS configuration:

```
[DEFAULT]
enable_auto_tls = False
```

Set the permissions:

```
chown -R ipa.ipa /etc/ironic_python_agent/
```
3. Modify the systemd unit of the IPA service to add the configuration file option:

```shell
vim /usr/lib/systemd/system/ironic-python-agent.service
```

```
[Unit]
Description=Ironic Python Agent
After=network-online.target

[Service]
ExecStartPre=/sbin/modprobe vfat
ExecStart=/usr/local/bin/ironic-python-agent --config-file /etc/ironic_python_agent/ironic_python_agent.conf
Restart=always
RestartSec=30s

[Install]
WantedBy=multi-user.target
```
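After the deploy kernel and ramdisk have been built, a typical next step is to upload them to Glance and enroll a first node that uses the `ipmi` hardware type enabled above. The sketch below is not part of this guide's verified steps: the image file names assume the `-o /mnt/ironic-agent-ssh` output prefix used earlier, and `BMC_IP`, `BMC_USER`, and `BMC_PASS` are placeholders for your BMC credentials.

```shell
source ~/.admin-openrc

# Upload the deploy kernel/ramdisk produced by ironic-python-agent-builder
openstack image create deploy-kernel --public \
    --disk-format aki --container-format aki --file /mnt/ironic-agent-ssh.kernel
openstack image create deploy-ramdisk --public \
    --disk-format ari --container-format ari --file /mnt/ironic-agent-ssh.initramfs

# Enroll a bare metal node driven over IPMI and point it at the deploy images
openstack baremetal node create --driver ipmi --name bm-node-0 \
    --driver-info ipmi_address=BMC_IP \
    --driver-info ipmi_username=BMC_USER \
    --driver-info ipmi_password=BMC_PASS \
    --driver-info deploy_kernel=$(openstack image show deploy-kernel -f value -c id) \
    --driver-info deploy_ramdisk=$(openstack image show deploy-ramdisk -f value -c id)

openstack baremetal node list
```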
## Kolla Installation

Kolla provides production-ready containerized deployment of OpenStack services. openEuler 24.03 LTS SP2 introduces the Kolla and Kolla-ansible services.

Installing Kolla is very simple; just install the corresponding RPM packages:

```shell
yum install openstack-kolla openstack-kolla-ansible
```

After the installation, the `kolla-ansible`, `kolla-build`, `kolla-genpwd`, `kolla-mergepwd`, and related commands are available.
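The guide stops at installing the packages; what follows is only a rough sketch of the usual upstream Kolla-Ansible all-in-one workflow. It assumes you have copied the sample `globals.yml`, `passwords.yml`, and the `all-in-one` inventory shipped with kolla-ansible into place and edited `globals.yml` for your network; refer to the upstream Kolla-Ansible quick start for the authoritative steps.

```shell
# Fill /etc/kolla/passwords.yml with randomly generated passwords
kolla-genpwd

# Prepare the host, run sanity checks, then deploy the containers
kolla-ansible -i all-in-one bootstrap-servers
kolla-ansible -i all-in-one prechecks
kolla-ansible -i all-in-one deploy
```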
## Trove Installation

Trove is the Database service of OpenStack. It is recommended if you want to use the database service provided by OpenStack; otherwise it can be skipped.

1. Set up the database

The Database service stores its information in a database. Create a `trove` database accessible by a `trove` user, replacing `TROVE_DBPASSWORD` with a suitable password:

```shell
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE trove CHARACTER SET utf8;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'localhost' \
    IDENTIFIED BY 'TROVE_DBPASSWORD';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'%' \
    IDENTIFIED BY 'TROVE_DBPASSWORD';
```

2. Create the service user credentials

1) Create the Trove service user:

```shell
openstack user create --password TROVE_PASSWORD \
    --email trove@example.com trove
openstack role add --project service --user trove admin
openstack service create --name trove --description "Database service" database
```

Explanation: replace `TROVE_PASSWORD` with the password of the `trove` user.

2) Create the Database service endpoints:

```shell
openstack endpoint create --region RegionOne database public http://controller:8779/v1.0/%\(tenant_id\)s
openstack endpoint create --region RegionOne database internal http://controller:8779/v1.0/%\(tenant_id\)s
openstack endpoint create --region RegionOne database admin http://controller:8779/v1.0/%\(tenant_id\)s
```

3. Install and configure the Trove components

1) Install the Trove packages:

```shell
yum install openstack-trove python-troveclient
```

2) Configure `trove.conf`:

```shell
vim /etc/trove/trove.conf

[DEFAULT]
bind_host=TROVE_NODE_IP
log_dir = /var/log/trove
network_driver = trove.network.neutron.NeutronDriver
management_security_groups =
nova_keypair = trove-mgmt
default_datastore = mysql
taskmanager_manager = trove.taskmanager.manager.Manager
trove_api_workers = 5
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
reboot_time_out = 300
usage_timeout = 900
agent_call_high_timeout = 1200
use_syslog = False
debug = True

# Set these if using Neutron Networking
network_driver=trove.network.neutron.NeutronDriver
network_label_regex=.*
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/

[database]
connection = mysql+pymysql://trove:TROVE_DBPASS@controller/trove

[keystone_authtoken]
project_domain_name = Default
project_name = service
user_domain_name = Default
password = trove
username = trove
auth_url = http://controller:5000/v3/
auth_type = password

[service_credentials]
auth_url = http://controller:5000/v3/
region_name = RegionOne
project_name = service
password = trove
project_domain_name = Default
user_domain_name = Default
username = trove

[mariadb]
tcp_ports = 3306,4444,4567,4568

[mysql]
tcp_ports = 3306

[postgresql]
tcp_ports = 5432
```

Explanation:

- In the `[DEFAULT]` section:
  - `bind_host` is the IP of the node where Trove is deployed.
  - `nova_compute_url` and `cinder_url` are the endpoints created for Nova and Cinder in Keystone.
  - `nova_proxy_XXX` is the information of a user that can access the Nova service; the example uses the `admin` user.
  - `transport_url` is the RabbitMQ connection information; replace `RABBIT_PASS` with the RabbitMQ password.
- In the `[database]` section, `connection` is the information of the database created for Trove in MySQL earlier.
- In the Trove user credentials, replace `TROVE_PASS` with the actual password of the `trove` user.

3) Configure `trove-guestagent.conf`:

```shell
vim /etc/trove/trove-guestagent.conf

[DEFAULT]
log_file = trove-guestagent.log
log_dir = /var/log/trove/
ignore_users = os_admin
control_exchange = trove
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
rpc_backend = rabbit
command_process_timeout = 60
use_syslog = False
debug = True

[service_credentials]
auth_url = http://controller:5000/v3/
region_name = RegionOne
project_name = service
password = TROVE_PASS
project_domain_name = Default
user_domain_name = Default
username = trove

[mysql]
docker_image = your-registry/your-repo/mysql
backup_docker_image = your-registry/your-repo/db-backup-mysql:1.1.0
```

Explanation:

- The guestagent is an independent Trove component that must be pre-installed in the virtual machine image Trove boots through Nova. After a database instance is created, the guestagent process starts inside the guest and reports heartbeats to Trove through the message queue (RabbitMQ), so the RabbitMQ user and password must be configured here.
- Starting from the Victoria release, Trove uses a single unified image to run different types of databases; the database service runs in a Docker container inside the guest VM.
- `transport_url` is the RabbitMQ connection information; replace `RABBIT_PASS` with the RabbitMQ password.
- In the Trove user credentials, replace `TROVE_PASS` with the actual password of the `trove` user.

4) Generate the Trove database tables:

```shell
su -s /bin/sh -c "trove-manage db_sync" trove
```

4. Finish the installation

Configure the Trove services to start at boot:

```shell
systemctl enable openstack-trove-api.service \
    openstack-trove-taskmanager.service \
    openstack-trove-conductor.service
```

Start the services:

```shell
systemctl start openstack-trove-api.service \
    openstack-trove-taskmanager.service \
    openstack-trove-conductor.service
```
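Once the services are up, the typical next step is to register a guest image and datastore version and then launch a database instance. The commands below are only an illustration of that flow, not part of this guide: the flavor, network, and datastore version are placeholders, and the exact flags of `openstack database instance create` vary slightly between releases, so check `--help` on your installation.

```shell
source ~/.admin-openrc

# Datastores known to Trove (mysql is the default configured above)
openstack datastore list

# Launch a 5 GB MySQL instance (flavor, network, and version are placeholders)
openstack database instance create mysql-demo m1.small --size 5 \
    --datastore mysql --datastore-version 5.7 \
    --nic net-id=$(openstack network show demo-net -f value -c id)

openstack database instance list
```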
## Swift Installation

Swift provides an elastic, scalable, and highly available distributed object storage service, suitable for storing large amounts of unstructured data.

Create the service credentials and API endpoints.

Create the service credentials:

```shell
# Create the swift user:
openstack user create --domain default --password-prompt swift
# Add the admin role to the swift user:
openstack role add --project service --user swift admin
# Create the swift service entity:
openstack service create --name swift --description "OpenStack Object Storage" object-store
```

Create the Swift API endpoints:

```shell
openstack endpoint create --region RegionOne object-store public http://controller:8080/v1/AUTH_%\(project_id\)s
openstack endpoint create --region RegionOne object-store internal http://controller:8080/v1/AUTH_%\(project_id\)s
openstack endpoint create --region RegionOne object-store admin http://controller:8080/v1
```

Install the packages:

```shell
yum install openstack-swift-proxy python3-swiftclient python3-keystoneclient python3-keystonemiddleware memcached    (CTL)
```

Configure the proxy-server

The Swift RPM package already ships a basically usable `proxy-server.conf`; only the IP address and the swift password in it need to be changed manually.

Note: replace the password with the one you chose for the `swift` user in the Identity service.

4. Install and configure the storage nodes (STG)

Install the supporting packages:

```shell
yum install xfsprogs rsync
```

Format the /dev/vdb and /dev/vdc devices as XFS:

```shell
mkfs.xfs /dev/vdb
mkfs.xfs /dev/vdc
```

Create the mount point directory structure:

```shell
mkdir -p /srv/node/vdb
mkdir -p /srv/node/vdc
```

Find the UUIDs of the new partitions:

```shell
blkid
```

Edit the /etc/fstab file and add the following to it:

```shell
UUID="" /srv/node/vdb xfs noatime 0 2
UUID="" /srv/node/vdc xfs noatime 0 2
```

Mount the devices:

```shell
mount /srv/node/vdb
mount /srv/node/vdc
```

Note: if you do not need the replication capability, the steps above only need to create one device, and the rsync configuration below can be skipped.

(Optional) Create or edit the /etc/rsyncd.conf file to contain the following:

```shell
[DEFAULT]
uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
address = MANAGEMENT_INTERFACE_IP_ADDRESS

[account]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/account.lock

[container]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/container.lock

[object]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/object.lock
```

Replace `MANAGEMENT_INTERFACE_IP_ADDRESS` with the IP address of the management network on the storage node.

Start the rsyncd service and configure it to start at boot:

```shell
systemctl enable rsyncd.service
systemctl start rsyncd.service
```

5. Install and configure the components on the storage nodes (STG)

Install the packages:

```shell
yum install openstack-swift-account openstack-swift-container openstack-swift-object
```

Edit the `account-server.conf`, `container-server.conf`, and `object-server.conf` files in the /etc/swift directory and replace `bind_ip` with the IP address of the management network on the storage node.

Ensure proper ownership of the mount point directory structure:

```shell
chown -R swift:swift /srv/node
```

Create the recon directory and ensure proper ownership of it:

```shell
mkdir -p /var/cache/swift
chown -R root:swift /var/cache/swift
chmod -R 775 /var/cache/swift
```

6. Create the account ring (CTL)

Change to the /etc/swift directory:

```shell
cd /etc/swift
```

Create the base account.builder file:

```shell
swift-ring-builder account.builder create 10 1 1
```

Add each storage node to the ring:

```shell
swift-ring-builder account.builder add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6202 --device DEVICE_NAME --weight DEVICE_WEIGHT
```

Replace `STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS` with the IP address of the management network on the storage node, and `DEVICE_NAME` with the name of a storage device on that storage node.

Note: repeat this command for every storage device on every storage node.

Verify the ring contents:

```shell
swift-ring-builder account.builder
```

Rebalance the ring:

```shell
swift-ring-builder account.builder rebalance
```
7. Create the container ring (CTL)

Change to the `/etc/swift` directory.

Create the base `container.builder` file:

```shell
swift-ring-builder container.builder create 10 1 1
```

Add each storage node to the ring:

```shell
swift-ring-builder container.builder \
    add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6201 \
    --device DEVICE_NAME --weight 100
```

Replace `STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS` with the IP address of the management network on the storage node, and `DEVICE_NAME` with the name of a storage device on that storage node.

Note: repeat this command for every storage device on every storage node.

Verify the ring contents:

```shell
swift-ring-builder container.builder
```

Rebalance the ring:

```shell
swift-ring-builder container.builder rebalance
```

8. Create the object ring (CTL)

Change to the `/etc/swift` directory.

Create the base `object.builder` file:

```shell
swift-ring-builder object.builder create 10 1 1
```

Add each storage node to the ring:

```shell
swift-ring-builder object.builder \
    add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6200 \
    --device DEVICE_NAME --weight 100
```

Replace `STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS` with the IP address of the management network on the storage node, and `DEVICE_NAME` with the name of a storage device on that storage node.

Note: repeat this command for every storage device on every storage node.

Verify the ring contents:

```shell
swift-ring-builder object.builder
```

Rebalance the ring:

```shell
swift-ring-builder object.builder rebalance
```

Distribute the ring configuration files:

Copy the `account.ring.gz`, `container.ring.gz`, and `object.ring.gz` files to the `/etc/swift` directory on every storage node and on any other node running the proxy service.

9. Finish the installation

Edit the `/etc/swift/swift.conf` file:

```shell
[swift-hash]
swift_hash_path_suffix = test-hash
swift_hash_path_prefix = test-hash

[storage-policy:0]
name = Policy-0
default = yes
```

Replace `test-hash` with unique values.

Copy the `swift.conf` file to the `/etc/swift` directory on every storage node and on any other node running the proxy service.

On all nodes, ensure proper ownership of the configuration directory:

```shell
chown -R root:swift /etc/swift
```
On the controller node and on any other node running the proxy service, start the Object Storage proxy service and its dependencies and configure them to start at boot:

```shell
systemctl enable openstack-swift-proxy.service memcached.service
systemctl start openstack-swift-proxy.service memcached.service
```

On the storage nodes, start the Object Storage services and configure them to start at boot:

```shell
systemctl enable openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service
systemctl start openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service

systemctl enable openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service
systemctl start openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service

systemctl enable openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service
systemctl start openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service
```
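To check the object store end to end, you can optionally create a container and upload a small object through the proxy. This is a minimal sketch; the container name and test file are arbitrary.

```shell
source ~/.admin-openrc

# Create a container, upload a small object, and list it back
openstack container create demo-container
echo "hello swift" > /tmp/hello.txt
openstack object create demo-container /tmp/hello.txt
openstack object list demo-container
```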
## Cyborg Installation

Cyborg provides accelerator device support for OpenStack, including GPUs, FPGAs, ASICs, NPs, SoCs, NVMe/NOF SSDs, ODP, DPDK/SPDK, and so on.

1. Initialize the database:

```shell
CREATE DATABASE cyborg;
GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'localhost' IDENTIFIED BY 'CYBORG_DBPASS';
GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'%' IDENTIFIED BY 'CYBORG_DBPASS';
```

2. Create the corresponding Keystone resource objects (replace `<cyborg-ip>` with the address of the Cyborg API node):

```shell
$ openstack user create --domain default --password-prompt cyborg
$ openstack role add --project service --user cyborg admin
$ openstack service create --name cyborg --description "Acceleration Service" accelerator

$ openstack endpoint create --region RegionOne \
    accelerator public http://<cyborg-ip>:6666/v1
$ openstack endpoint create --region RegionOne \
    accelerator internal http://<cyborg-ip>:6666/v1
$ openstack endpoint create --region RegionOne \
    accelerator admin http://<cyborg-ip>:6666/v1
```

3. Install Cyborg:

```shell
yum install openstack-cyborg
```

4. Configure Cyborg:

Modify `/etc/cyborg/cyborg.conf`:

```shell
[DEFAULT]
transport_url = rabbit://%RABBITMQ_USER%:%RABBITMQ_PASSWORD%@%OPENSTACK_HOST_IP%:5672/
use_syslog = False
state_path = /var/lib/cyborg
debug = True

[database]
connection = mysql+pymysql://%DATABASE_USER%:%DATABASE_PASSWORD%@%OPENSTACK_HOST_IP%/cyborg

[service_catalog]
project_domain_id = default
user_domain_id = default
project_name = service
password = PASSWORD
username = cyborg
auth_url = http://%OPENSTACK_HOST_IP%/identity
auth_type = password

[placement]
project_domain_name = Default
project_name = service
user_domain_name = Default
password = PASSWORD
username = placement
auth_url = http://%OPENSTACK_HOST_IP%/identity
auth_type = password

[keystone_authtoken]
memcached_servers = localhost:11211
project_domain_name = Default
project_name = service
user_domain_name = Default
password = PASSWORD
username = cyborg
auth_url = http://%OPENSTACK_HOST_IP%/identity
auth_type = password
```

Set the usernames, passwords, IP addresses, and other values according to your environment.

5. Synchronize the database tables:

```shell
cyborg-dbsync --config-file /etc/cyborg/cyborg.conf upgrade
```

6. Start the Cyborg services:

```shell
systemctl enable openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent
systemctl start openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent
```

## Aodh Installation

1. Create the database:

```shell
CREATE DATABASE aodh;
GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'localhost' IDENTIFIED BY 'AODH_DBPASS';
GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'%' IDENTIFIED BY 'AODH_DBPASS';
```

2. Create the corresponding Keystone resource objects:

```shell
openstack user create --domain default --password-prompt aodh
openstack role add --project service --user aodh admin
openstack service create --name aodh --description "Telemetry" alarming

openstack endpoint create --region RegionOne alarming public http://controller:8042
openstack endpoint create --region RegionOne alarming internal http://controller:8042
openstack endpoint create --region RegionOne alarming admin http://controller:8042
```

3. Install Aodh:

```shell
yum install openstack-aodh-api openstack-aodh-evaluator openstack-aodh-notifier openstack-aodh-listener openstack-aodh-expirer python3-aodhclient
```

Note: the python3-pyparsing package in the openEuler OS repository is not suitable for aodh; the OpenStack version must be installed over it. Use

```shell
yum list | grep pyparsing | grep OpenStack | awk '{print $2}'
```

to obtain the corresponding version `VERSION`, and then run

```shell
yum install -y python3-pyparsing-VERSION
```

to overwrite-install the suitable pyparsing.

4. Modify the configuration file `/etc/aodh/aodh.conf`:

```shell
[database]
connection = mysql+pymysql://aodh:AODH_DBPASS@controller/aodh

[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = aodh
password = AODH_PASS

[service_credentials]
auth_type = password
auth_url = http://controller:5000/v3
project_domain_id = default
user_domain_id = default
project_name = service
username = aodh
password = AODH_PASS
interface = internalURL
region_name = RegionOne
```

5. Initialize the database:

```shell
aodh-dbsync
```

6. Start the Aodh services:

```shell
systemctl enable openstack-aodh-api.service openstack-aodh-evaluator.service openstack-aodh-notifier.service openstack-aodh-listener.service
systemctl start openstack-aodh-api.service openstack-aodh-evaluator.service openstack-aodh-notifier.service openstack-aodh-listener.service
```
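As a quick functional check of the alarming service, you can list the alarms and create a throw-away event alarm with the aodh client installed above. This is only a sketch; the alarm name is arbitrary and `log://` simply logs the alarm action.

```shell
source ~/.admin-openrc

openstack alarm list

# Create a simple event alarm and confirm it shows up
openstack alarm create --name test-alarm --type event --alarm-action 'log://'
openstack alarm list
```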
## Gnocchi Installation

1. Create the database:

```shell
CREATE DATABASE gnocchi;
GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'localhost' IDENTIFIED BY 'GNOCCHI_DBPASS';
GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'%' IDENTIFIED BY 'GNOCCHI_DBPASS';
```

2. Create the corresponding Keystone resource objects:

```shell
openstack user create --domain default --password-prompt gnocchi
openstack role add --project service --user gnocchi admin
openstack service create --name gnocchi --description "Metric Service" metric

openstack endpoint create --region RegionOne metric public http://controller:8041
openstack endpoint create --region RegionOne metric internal http://controller:8041
openstack endpoint create --region RegionOne metric admin http://controller:8041
```

3. Install Gnocchi:

```shell
yum install openstack-gnocchi-api openstack-gnocchi-metricd python3-gnocchiclient
```

4. Modify the configuration file `/etc/gnocchi/gnocchi.conf`:

```shell
[api]
auth_mode = keystone
port = 8041
uwsgi_mode = http-socket

[keystone_authtoken]
auth_type = password
auth_url = http://controller:5000/v3
project_domain_name = Default
user_domain_name = Default
project_name = service
username = gnocchi
password = GNOCCHI_PASS
interface = internalURL
region_name = RegionOne

[indexer]
url = mysql+pymysql://gnocchi:GNOCCHI_DBPASS@controller/gnocchi

[storage]
# coordination_url is not required but specifying one will improve
# performance with better workload division across workers.
coordination_url = redis://controller:6379
file_basepath = /var/lib/gnocchi
driver = file
```

5. Initialize the database:

```shell
gnocchi-upgrade
```

6. Start the Gnocchi services:

```shell
systemctl enable openstack-gnocchi-api.service openstack-gnocchi-metricd.service
systemctl start openstack-gnocchi-api.service openstack-gnocchi-metricd.service
```

## Ceilometer Installation

1. Create the corresponding Keystone resource objects:

```shell
openstack user create --domain default --password-prompt ceilometer
openstack role add --project service --user ceilometer admin
openstack service create --name ceilometer --description "Telemetry" metering
```

2. Install Ceilometer:

```shell
yum install openstack-ceilometer-notification openstack-ceilometer-central
```

3. Modify the configuration file `/etc/ceilometer/pipeline.yaml`:

```shell
publishers:
    # set address of Gnocchi
    # + filter out Gnocchi-related activity meters (Swift driver)
    # + set default archive policy
    - gnocchi://?filter_project=service&archive_policy=low
```

4. Modify the configuration file `/etc/ceilometer/ceilometer.conf`:

```shell
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller

[service_credentials]
auth_type = password
auth_url = http://controller:5000/v3
project_domain_id = default
user_domain_id = default
project_name = service
username = ceilometer
password = CEILOMETER_PASS
interface = internalURL
region_name = RegionOne
```

5. Initialize the database:

```shell
ceilometer-upgrade
```

6. Start the Ceilometer services:

```shell
systemctl enable openstack-ceilometer-notification.service openstack-ceilometer-central.service
systemctl start openstack-ceilometer-notification.service openstack-ceilometer-central.service
```
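Once Ceilometer is publishing into Gnocchi, the easiest sanity check is to ask Gnocchi what it has recorded. A minimal sketch with the gnocchiclient installed above (it may take a polling interval before resources appear):

```shell
source ~/.admin-openrc

# Resources and metrics pushed by Ceilometer should start to show up here
gnocchi resource list
gnocchi metric list
```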
## Heat Installation

1. Create the `heat` database and grant it the proper access rights, replacing `HEAT_DBPASS` with a suitable password:

```shell
CREATE DATABASE heat;
GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'localhost' IDENTIFIED BY 'HEAT_DBPASS';
GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'%' IDENTIFIED BY 'HEAT_DBPASS';
```

2. Create the service credentials: create the `heat` user and add the `admin` role to it:

```shell
openstack user create --domain default --password-prompt heat
openstack role add --project service --user heat admin
```

3. Create the `heat` and `heat-cfn` services and their API endpoints:

```shell
openstack service create --name heat --description "Orchestration" orchestration
openstack service create --name heat-cfn --description "Orchestration" cloudformation

openstack endpoint create --region RegionOne orchestration public http://controller:8004/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne orchestration internal http://controller:8004/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne orchestration admin http://controller:8004/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne cloudformation public http://controller:8000/v1
openstack endpoint create --region RegionOne cloudformation internal http://controller:8000/v1
openstack endpoint create --region RegionOne cloudformation admin http://controller:8000/v1
```

4. Create the additional information required for stack management, including the `heat` domain, the `heat_domain_admin` admin user of that domain, the `heat_stack_owner` role, and the `heat_stack_user` role:

```shell
openstack domain create --description "Stack projects and users" heat
openstack user create --domain heat --password-prompt heat_domain_admin
openstack role add --domain heat --user-domain heat --user heat_domain_admin admin
openstack role create heat_stack_owner
openstack role create heat_stack_user
```

5. Install the packages:

```shell
yum install openstack-heat-api openstack-heat-api-cfn openstack-heat-engine
```

6. Modify the configuration file `/etc/heat/heat.conf`:

```shell
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
heat_metadata_server_url = http://controller:8000
heat_waitcondition_server_url = http://controller:8000/v1/waitcondition
stack_domain_admin = heat_domain_admin
stack_domain_admin_password = HEAT_DOMAIN_PASS
stack_user_domain_name = heat

[database]
connection = mysql+pymysql://heat:HEAT_DBPASS@controller/heat

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = heat
password = HEAT_PASS

[trustee]
auth_type = password
auth_url = http://controller:5000
username = heat
password = HEAT_PASS
user_domain_name = default

[clients_keystone]
auth_uri = http://controller:5000
```

7. Initialize the heat database tables:

```shell
su -s /bin/sh -c "heat-manage db_sync" heat
```

8. Start the services:

```shell
systemctl enable openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service
systemctl start openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service
```
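To verify the orchestration service without touching any real resources, you can create a stack from a minimal template that only uses the no-op `OS::Heat::None` resource. This is a small sketch; the template and stack names are arbitrary.

```shell
source ~/.admin-openrc

cat > /tmp/heat-smoke.yaml << 'EOF'
heat_template_version: 2016-10-14
description: Minimal template used to verify the Heat deployment
resources:
  noop:
    type: OS::Heat::None
EOF

# Create the stack, confirm it reaches CREATE_COMPLETE, then clean it up
openstack stack create -t /tmp/heat-smoke.yaml heat-smoke
openstack stack list
openstack stack delete --yes heat-smoke
```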
### Quick Deployment with the OpenStack SIG Tool oos

oos (openEuler OpenStack SIG) is the command-line tool provided by the OpenStack SIG. Its `oos env` sub-commands provide Ansible scripts for one-click deployment of OpenStack, either all-in-one or as a three-node cluster, which users can use to quickly deploy an OpenStack environment based on openEuler RPM packages. The oos tool supports two ways of deploying an OpenStack environment: through a cloud provider (currently only the Huawei Cloud provider is supported) or by taking over existing hosts. The following uses an all-in-one deployment on Huawei Cloud as an example to illustrate how to use oos.

1. Install the oos tool:

        yum install openstack-sig-tool

2. Configure the Huawei Cloud provider information. Open the `/usr/local/etc/oos/oos.conf` file and fill in the information of the Huawei Cloud resources you own:

        [huaweicloud]
        ak =
        sk =
        region = ap-southeast-3
        root_volume_size = 100
        data_volume_size = 100
        security_group_name = oos
        image_format = openEuler-%%(release)s-%%(arch)s
        vpc_name = oos_vpc
        subnet1_name = oos_subnet1
        subnet2_name = oos_subnet2

3. Configure the OpenStack environment information. Open the `/usr/local/etc/oos/oos.conf` file and adjust the configuration according to your machine environment and requirements. The content is as follows:

        [environment]
        mysql_root_password = root
        mysql_project_password = root
        rabbitmq_password = root
        project_identity_password = root
        enabled_service = keystone,neutron,cinder,placement,nova,glance,horizon,aodh,ceilometer,cyborg,gnocchi,kolla,heat,swift,trove,tempest
        neutron_provider_interface_name = br-ex
        default_ext_subnet_range = 10.100.100.0/24
        default_ext_subnet_gateway = 10.100.100.1
        neutron_dataplane_interface_name = eth1
        cinder_block_device = vdb
        swift_storage_devices = vdc
        swift_hash_path_suffix = ash
        swift_hash_path_prefix = has
        glance_api_workers = 2
        cinder_api_workers = 2
        nova_api_workers = 2
        nova_metadata_api_workers = 2
        nova_conductor_workers = 2
        nova_scheduler_workers = 2
        neutron_api_workers = 2
        horizon_allowed_host = *
        kolla_openeuler_plugin = false

    Key configuration options:

    | Option | Description |
    |---|---|
    | enabled_service | List of services to install; trim it according to your needs |
    | neutron_provider_interface_name | Name of the neutron L3 bridge |
    | default_ext_subnet_range | IP range of the neutron private network |
    | default_ext_subnet_gateway | Gateway of the neutron private network |
    | neutron_dataplane_interface_name | NIC used by neutron; a dedicated new NIC is recommended to avoid conflicts with the existing NIC and losing the connection to the all-in-one host |
    | cinder_block_device | Name of the block device used by cinder |
    | swift_storage_devices | Names of the block devices used by swift |
    | kolla_openeuler_plugin | Whether to enable the kolla plugin; when set to True, kolla supports deploying openEuler containers |

4. Create an openEuler 24.03-LTS-SP2 x86_64 virtual machine on Huawei Cloud for the all-in-one OpenStack deployment:

        # sshpass is used during `oos env create` to set up password-free access to the target virtual machine
        dnf install sshpass
        oos env create -r 24.03-lts-SP2 -f small -a x86 -n test-oos all_in_one

    Run `oos env create --help` for the detailed parameters.

5. Deploy the OpenStack all-in-one environment:

        oos env setup test-oos -r wallaby

    Run `oos env setup --help` for the detailed parameters.

6. Initialize the tempest environment. If you want to run tempest tests in this environment, run `oos env init`, which automatically creates the OpenStack resources that tempest needs:

        oos env init test-oos

    After the command succeeds, a `mytest` directory is generated under the user's home directory; enter it and you can run the `tempest run` command.

If you deploy the OpenStack environment on managed hosts instead, the overall flow is the same as the Huawei Cloud case above: steps 1, 3, 5, and 6 are unchanged, step 2 (configuring the Huawei Cloud provider) is removed, and step 4 changes from creating a virtual machine on Huawei Cloud to taking over an existing host:

        # sshpass is used during `oos env create` to set up password-free access to the target host
        dnf install sshpass
        oos env manage -r 24.03-lts-SP2 -i TARGET_MACHINE_IP -p TARGET_MACHINE_PASSWD -n test-oos

Replace `TARGET_MACHINE_IP` with the target machine's IP and `TARGET_MACHINE_PASSWD` with the target machine's password. Run `oos env manage --help` for the detailed parameters.
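The host-managed flow above is just a handful of commands; the sketch below strings them together for convenience. It is not part of the oos tool itself: the release (`24.03-lts-SP2`), environment name (`test-oos`), OpenStack release (`wallaby`), and the `TARGET_MACHINE_IP`/`TARGET_MACHINE_PASSWD` placeholders are simply the same values used in the steps above, and it assumes `/usr/local/etc/oos/oos.conf` has already been edited as described in step 3.

```bash
#!/bin/bash
# Illustrative wrapper around the oos host-managed deployment flow described above.
# Edit the [environment] section of /usr/local/etc/oos/oos.conf before running.
set -e

TARGET_MACHINE_IP="192.0.2.10"        # replace with the target machine IP
TARGET_MACHINE_PASSWD="changeme"      # replace with the target machine password
ENV_NAME="test-oos"

dnf install -y openstack-sig-tool sshpass

# Take over the existing host, then deploy an all-in-one Wallaby environment on it.
oos env manage -r 24.03-lts-SP2 -i "$TARGET_MACHINE_IP" -p "$TARGET_MACHINE_PASSWD" -n "$ENV_NAME"
oos env setup "$ENV_NAME" -r wallaby

# Optional: prepare the resources needed by tempest.
oos env init "$ENV_NAME"
```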
## OpenStack Antelope Deployment Guide

Contents:

- RPM-based deployment
    - Environment preparation
    - Clock synchronization
    - Installing the database
    - Installing the message queue
    - Installing the cache service
    - Deploying the services: Keystone, Glance, Placement, Nova, Neutron, Cinder, Horizon, Ironic, Trove, Swift, Cyborg, Aodh, Gnocchi, Ceilometer, Heat, Tempest
- Deployment with the OpenStack SIG development tool oos

This document is the deployment guide for OpenStack on openEuler 25.03 written by the openEuler OpenStack SIG; its content is provided by SIG contributors. If you have any questions or find any problems while reading it, please contact the SIG maintainers or submit an issue directly.

### Conventions

This section describes some common conventions used throughout the document.

| Name | Definition |
|---|---|
| RABBIT_PASS | Password of rabbitmq, set by the user; used in the configuration of every OpenStack service |
| CINDER_PASS | Password of the cinder service keystone user; used in the cinder configuration |
| CINDER_DBPASS | Password of the cinder service database; used in the cinder configuration |
| KEYSTONE_DBPASS | Password of the keystone service database; used in the keystone configuration |
| GLANCE_PASS | Password of the glance service keystone user; used in the glance configuration |
| GLANCE_DBPASS | Password of the glance service database; used in the glance configuration |
| HEAT_PASS | Password of the heat user registered in keystone; used in the heat configuration |
| HEAT_DBPASS | Password of the heat service database; used in the heat configuration |
| CYBORG_PASS | Password of the cyborg user registered in keystone; used in the cyborg configuration |
| CYBORG_DBPASS | Password of the cyborg service database; used in the cyborg configuration |
| NEUTRON_PASS | Password of the neutron user registered in keystone; used in the neutron configuration |
| NEUTRON_DBPASS | Password of the neutron service database; used in the neutron configuration |
| PROVIDER_INTERFACE_NAME | Name of the physical network interface; used in the neutron configuration |
| OVERLAY_INTERFACE_IP_ADDRESS | Management IP address of the controller node; used in the neutron configuration |
| METADATA_SECRET | Secret of the metadata proxy; used in the nova and neutron configurations |
| PLACEMENT_DBPASS | Password of the placement service database; used in the placement configuration |
| PLACEMENT_PASS | Password of the placement user registered in keystone; used in the placement configuration |
| NOVA_DBPASS | Password of the nova service database; used in the nova configuration |
| NOVA_PASS | Password of the nova user registered in keystone; used in the nova, cyborg, neutron, and other configurations |
| IRONIC_DBPASS | Password of the ironic service database; used in the ironic configuration |
| IRONIC_PASS | Password of the ironic user registered in keystone; used in the ironic configuration |
| IRONIC_INSPECTOR_DBPASS | Password of the ironic-inspector service database; used in the ironic-inspector configuration |
| IRONIC_INSPECTOR_PASS | Password of the ironic-inspector user registered in keystone; used in the ironic-inspector configuration |
The OpenStack SIG provides several ways to deploy OpenStack on openEuler to suit different user scenarios; please choose the one that fits your needs.

### RPM-Based Deployment

#### Environment Preparation

This document deploys OpenStack on the classic three-node OpenStack environment. The three nodes are the control node (Controller), the compute node (Compute), and the storage node (Storage). The storage node usually runs only the storage services; if resources are limited, it does not have to be deployed separately, and its services can be deployed on the compute node instead.

First prepare three openEuler 25.03 environments; download the image that matches your environment and install it: [ISO image] https://repo.openeuler.org/openEuler-24.03-LTS-SP1/ISO/ , [qcow2 image] https://repo.openeuler.org/openEuler-24.03-LTS-SP1/virtual_machine_img/ .

The installation below follows this topology:

- controller: 192.168.0.2
- compute: 192.168.0.3
- storage: 192.168.0.4

If the IPs in your environment differ, modify the corresponding configuration files according to your environment's IPs.

The three-node service topology of this document is shown in the accompanying figure (it covers only the core services Keystone, Glance, Nova, Cinder, and Neutron; for the other services refer to their specific deployment sections).

Before the formal deployment, the following configuration and checks are required on every node.

1. Configure the official openEuler 25.03 yum repositories; the EPOL repository must be enabled to support OpenStack:

        yum update
        yum install openstack-release-antelope
        yum clean all && yum makecache

    Note: if the yum repositories of your environment do not have EPOL enabled, EPOL must be configured as well. Make sure it is configured as shown below:

        vi /etc/yum.repos.d/openEuler.repo

        [EPOL]
        name=EPOL
        baseurl=http://repo.openeuler.org/openEuler-24.03-LTS-SP1/EPOL/main/$basearch/
        enabled=1
        gpgcheck=1
        gpgkey=http://repo.openeuler.org/openEuler-24.03-LTS-SP1/OS/$basearch/RPM-GPG-KEY-openEuler
2. Modify the hostnames and the host mapping.

    Set the hostname on each node; taking the controller as an example:

        hostnamectl set-hostname controller

    Then edit `/etc/hostname` and make sure its content is `controller`.

    Next, modify the `/etc/hosts` file on every node and append the following content:

        192.168.0.2 controller
        192.168.0.3 compute
        192.168.0.4 storage

#### Clock Synchronization

In a cluster environment the time on every node must be consistent, which is usually guaranteed by clock synchronization software. This document uses chrony. The steps are as follows.

**Controller node**:

1. Install the service:

        dnf install chrony

2. Modify the `/etc/chrony.conf` configuration file and add one line; it specifies which IPs are allowed to synchronize their clocks from this node:

        allow 192.168.0.0/24

3. Restart the service:

        systemctl restart chronyd

**Other nodes**:

1. Install the service:

        dnf install chrony

2. Modify the `/etc/chrony.conf` configuration file and add one line. `NTP_SERVER` is the controller IP and means the time is obtained from that machine; here we use 192.168.0.2, or the controller name already configured in `/etc/hosts`:

        server NTP_SERVER iburst

    At the same time, comment out the line `pool pool.ntp.org iburst`, so that the clock is not synchronized from the public network.

3. Restart the service:

        systemctl restart chronyd

After the configuration is done, check the result: run `chronyc sources` on the non-controller nodes. Output similar to the following indicates that the clock is successfully synchronized from the controller:

        MS Name/IP address         Stratum Poll Reach LastRx Last sample
        ===============================================================================
        ^* 192.168.0.2                   4   6     7     0  -1406ns[  +55us] +/-   16ms

#### Installing the Database

The database is installed on the control node; mariadb is recommended.

1. Install the packages:

        dnf install mysql-config mariadb mariadb-server python3-PyMySQL

2. Create the configuration file `/etc/my.cnf.d/openstack.cnf` with the following content:

        [mysqld]
        bind-address = 192.168.0.2
        default-storage-engine = innodb
        innodb_file_per_table = on
        max_connections = 4096
        collation-server = utf8_general_ci
        character-set-server = utf8

3. Start the server:

        systemctl start mariadb

4. Initialize the database, following the prompts:

        mysql_secure_installation

    An example session follows:
        NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MariaDB
              SERVERS IN PRODUCTION USE!  PLEASE READ EACH STEP CAREFULLY!

        # Enter the current root password; since we are initializing the DB, just press Enter
        Enter current password for root (enter for none):
        OK, successfully used password, moving on...

        # Answer N as prompted
        Switch to unix_socket authentication [Y/n] N
        Enabled successfully!
        Reloading privilege tables..
         ... Success!

        # Answer Y to change the root password
        Change the root password? [Y/n] Y
        New password:
        Re-enter new password:
        Password updated successfully!
        Reloading privilege tables..
         ... Success!

        # Answer Y to remove the anonymous users
        Remove anonymous users? [Y/n] Y
         ... Success!

        # Answer Y to disable remote root login
        Disallow root login remotely? [Y/n] Y
         ... Success!

        # Answer Y to remove the test database
        Remove test database and access to it? [Y/n] Y
         - Dropping test database...
         ... Success!
         - Removing privileges on test database...
         ... Success!

        # Answer Y to reload the privilege tables
        Reload privilege tables now? [Y/n] Y
         ... Success!

        Cleaning up...
        All done!  If you've completed all of the above steps, your MariaDB
        installation should now be secure.

5. Verify: using the password set in step 4, check that you can log in to mariadb:

        mysql -uroot -p
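Each service section later in this guide creates its own database and grants by hand. If you prefer to prepare them all at once right after securing MariaDB, a sketch along the following lines can be used. It is an optional convenience, not part of the original guide: the service list and the `*_DBPASS` placeholders simply mirror the per-service steps below, and in a real deployment each placeholder should be replaced with a distinct password.

```bash
#!/bin/bash
# Optional sketch: pre-create the core OpenStack service databases and grants
# that the per-service sections below would otherwise create one by one.
# Replace every *_DBPASS placeholder with a real, distinct password first.
set -e

mysql -u root -p << 'EOF'
CREATE DATABASE IF NOT EXISTS keystone;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'KEYSTONE_DBPASS';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'KEYSTONE_DBPASS';

CREATE DATABASE IF NOT EXISTS glance;
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'GLANCE_DBPASS';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'GLANCE_DBPASS';

CREATE DATABASE IF NOT EXISTS placement;
GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' IDENTIFIED BY 'PLACEMENT_DBPASS';
GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' IDENTIFIED BY 'PLACEMENT_DBPASS';

CREATE DATABASE IF NOT EXISTS nova_api;
CREATE DATABASE IF NOT EXISTS nova;
CREATE DATABASE IF NOT EXISTS nova_cell0;
GRANT ALL PRIVILEGES ON nova_api.*   TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova_api.*   TO 'nova'@'%'         IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova.*       TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova.*       TO 'nova'@'%'         IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%'         IDENTIFIED BY 'NOVA_DBPASS';

CREATE DATABASE IF NOT EXISTS neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'NEUTRON_DBPASS';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'NEUTRON_DBPASS';

CREATE DATABASE IF NOT EXISTS cinder;
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'CINDER_DBPASS';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'CINDER_DBPASS';
EOF
```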
#### Installing the Message Queue

The message queue is installed on the control node; rabbitmq is recommended.

1. Install the packages:

        dnf install rabbitmq-server

2. Start the service:

        systemctl start rabbitmq-server

3. Configure the openstack user. `RABBIT_PASS` is the password the OpenStack services use to log in to the message queue; it must be consistent with the configuration of each service later on:

        rabbitmqctl add_user openstack RABBIT_PASS
        rabbitmqctl set_permissions openstack ".*" ".*" ".*"

#### Installing the Cache Service

The cache service is installed on the control node; Memcached is recommended.

1. Install the packages:

        dnf install memcached python3-memcached

2. Modify the configuration file `/etc/sysconfig/memcached`:

        OPTIONS="-l 127.0.0.1,::1,controller"

3. Start the service:

        systemctl start memcached
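Before moving on to the OpenStack services it can save time to confirm that the two infrastructure services are reachable under the names the later configuration files use. The short check below is not part of the original guide; it assumes it is run on the controller and that an `nc` implementation (for example nmap-ncat) is installed, and it only inspects state without changing anything.

```bash
#!/bin/bash
# Quick sanity check of the infrastructure services configured above.
# Assumes it is run on the controller node and that `nc` is installed.

# The openstack user created with rabbitmqctl should be listed and should
# have full permissions on the default vhost.
rabbitmqctl list_users
rabbitmqctl list_permissions

# memcached was told to listen on 127.0.0.1, ::1, and "controller";
# the stats command should return immediately on port 11211.
echo stats | nc -w 2 controller 11211 | head -n 5
```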
#### Deploying the Services

##### Keystone

Keystone is the authentication service provided by OpenStack and the entry point of the whole OpenStack cluster; it provides tenant isolation, user authentication, service discovery, and other functions, and must be installed.

1. Create the `keystone` database and grant privileges:

        mysql -u root -p
        MariaDB [(none)]> CREATE DATABASE keystone;
        MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
        IDENTIFIED BY 'KEYSTONE_DBPASS';
        MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
        IDENTIFIED BY 'KEYSTONE_DBPASS';
        MariaDB [(none)]> exit

    Note: replace `KEYSTONE_DBPASS` to set the password for the Keystone database.

2. Install the packages:

        dnf install openstack-keystone httpd mod_wsgi

3. Configure keystone:

        vim /etc/keystone/keystone.conf

        [database]
        connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone

        [token]
        provider = fernet

    Explanation: the `[database]` section configures the database entry; the `[token]` section configures the token provider.

4. Synchronize the database:

        su -s /bin/sh -c "keystone-manage db_sync" keystone

5. Initialize the Fernet key repositories:

        keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
        keystone-manage credential_setup --keystone-user keystone --keystone-group keystone

6. Bootstrap the service:

        keystone-manage bootstrap --bootstrap-password ADMIN_PASS \
        --bootstrap-admin-url http://controller:5000/v3/ \
        --bootstrap-internal-url http://controller:5000/v3/ \
        --bootstrap-public-url http://controller:5000/v3/ \
        --bootstrap-region-id RegionOne

    Note: replace `ADMIN_PASS` to set the password for the `admin` user.

7. Configure the Apache HTTP server. Open `httpd.conf` and configure it:

        # Configuration file that needs to be modified
        vim /etc/httpd/conf/httpd.conf

        # Modify the following item; add it if it does not exist
        ServerName controller

    Create the symbolic link:

        ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/

    Explanation: the `ServerName` item refers to the control node. Note: if the `ServerName` item does not exist, it needs to be created.

8. Start the Apache HTTP service:

        systemctl enable httpd.service
        systemctl start httpd.service

9. Create the environment variable configuration:

        cat << EOF >> ~/.admin-openrc
        export OS_PROJECT_DOMAIN_NAME=Default
        export OS_USER_DOMAIN_NAME=Default
        export OS_PROJECT_NAME=admin
        export OS_USERNAME=admin
        export OS_PASSWORD=ADMIN_PASS
        export OS_AUTH_URL=http://controller:5000/v3
        export OS_IDENTITY_API_VERSION=3
        export OS_IMAGE_API_VERSION=2
        EOF

    Note: replace `ADMIN_PASS` with the password of the `admin` user.

10. Create the domain, projects, users, and roles in turn. `python3-openstackclient` needs to be installed first:

        dnf install python3-openstackclient

    Import the environment variables:

        source ~/.admin-openrc

    Create the project `service`; the domain `default` was already created during `keystone-manage bootstrap`:

        openstack domain create --description "An Example Domain" example
        openstack project create --domain default --description "Service Project" service

    Create the (non-admin) project `myproject`, the user `myuser`, and the role `myrole`, and add the role `myrole` to `myproject` and `myuser`:

        openstack project create --domain default --description "Demo Project" myproject
        openstack user create --domain default --password-prompt myuser
        openstack role create myrole
        openstack role add --project myproject --user myuser myrole

11. Verify. Unset the temporary environment variables OS_AUTH_URL and OS_PASSWORD:

        source ~/.admin-openrc
        unset OS_AUTH_URL OS_PASSWORD

    Request a token for the `admin` user:

        openstack --os-auth-url http://controller:5000/v3 \
        --os-project-domain-name Default --os-user-domain-name Default \
        --os-project-name admin --os-username admin token issue

    Request a token for the `myuser` user:

        openstack --os-auth-url http://controller:5000/v3 \
        --os-project-domain-name Default --os-user-domain-name Default \
        --os-project-name myproject --os-username myuser token issue
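The guide only creates an openrc file for the admin account. If you plan to exercise the cloud as the non-admin `myuser` created above, a second credentials file in the same style is convenient. This is a sketch, not part of the original guide: the file name `~/.myuser-openrc` is arbitrary, and `MYUSER_PASS` stands for whatever password you entered at the `--password-prompt` above.

```bash
#!/bin/bash
# Optional sketch: an openrc file for the non-admin user created above,
# mirroring ~/.admin-openrc. Replace MYUSER_PASS with the password you chose.
cat << EOF >> ~/.myuser-openrc
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=myproject
export OS_USERNAME=myuser
export OS_PASSWORD=MYUSER_PASS
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
EOF

# Usage: source it, then request a token without spelling out every --os-* option.
source ~/.myuser-openrc
openstack token issue
```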
##### Glance

Glance is the image service of OpenStack; it is responsible for uploading and downloading virtual machine and bare metal images and must be installed.

**Controller node**:

1. Create the `glance` database and grant privileges:

        mysql -u root -p
        MariaDB [(none)]> CREATE DATABASE glance;
        MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
        IDENTIFIED BY 'GLANCE_DBPASS';
        MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
        IDENTIFIED BY 'GLANCE_DBPASS';
        MariaDB [(none)]> exit

    Note: replace `GLANCE_DBPASS` to set the password for the glance database.

2. Initialize the glance Keystone resource objects.

    Import the environment variables:

        source ~/.admin-openrc

    When creating the user, the command line prompts for a password; enter a password of your choice and replace `GLANCE_PASS` below with it:

        openstack user create --domain default --password-prompt glance
        User Password:
        Repeat User Password:

    Add the glance user to the service project and assign the admin role:

        openstack role add --project service --user glance admin

    Create the glance service entity:

        openstack service create --name glance --description "OpenStack Image" image

    Create the glance API service endpoints:

        openstack endpoint create --region RegionOne image public http://controller:9292
        openstack endpoint create --region RegionOne image internal http://controller:9292
        openstack endpoint create --region RegionOne image admin http://controller:9292

3. Install the packages:

        dnf install openstack-glance

4. Modify the glance configuration file:

        vim /etc/glance/glance-api.conf

        [database]
        connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance

        [keystone_authtoken]
        www_authenticate_uri = http://controller:5000
        auth_url = http://controller:5000
        memcached_servers = controller:11211
        auth_type = password
        project_domain_name = Default
        user_domain_name = Default
        project_name = service
        username = glance
        password = GLANCE_PASS

        [paste_deploy]
        flavor = keystone

        [glance_store]
        stores = file,http
        default_store = file
        filesystem_store_datadir = /var/lib/glance/images/

    Explanation: the `[database]` section configures the database entry; the `[keystone_authtoken]` and `[paste_deploy]` sections configure the identity service entry; the `[glance_store]` section configures the local file system store and the location of the image files.

5. Synchronize the database:

        su -s /bin/sh -c "glance-manage db_sync" glance

6. Start the service:

        systemctl enable openstack-glance-api.service
        systemctl start openstack-glance-api.service

7. Verify.

    Import the environment variables:

        source ~/.admin-openrc

    Download an image:

        # x86 image
        wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
        # arm image
        wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-aarch64-disk.img

    Note: if your environment uses the Kunpeng architecture, download the aarch64 version of the image; the image cirros-0.5.2-aarch64-disk.img has been tested.

    Upload the image to the Image service:

        openstack image create --disk-format qcow2 --container-format bare \
        --file cirros-0.4.0-x86_64-disk.img --public cirros

    Confirm the image upload and verify its attributes:

        openstack image list
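The verification step above uses different cirros images for x86_64 and aarch64 hosts. The small sketch below simply automates that choice; it is illustrative rather than part of the guide, uses only the two download URLs already listed above, and assumes `~/.admin-openrc` exists and the host has internet access.

```bash
#!/bin/bash
# Illustrative sketch: pick the cirros image matching the local architecture,
# download it, and upload it to Glance as a public image named "cirros".
set -e
source ~/.admin-openrc

case "$(uname -m)" in
    x86_64)  IMG=cirros-0.4.0-x86_64-disk.img ;;
    aarch64) IMG=cirros-0.4.0-aarch64-disk.img ;;
    *) echo "unsupported architecture: $(uname -m)" >&2; exit 1 ;;
esac

wget -N "http://download.cirros-cloud.net/0.4.0/${IMG}"
openstack image create --disk-format qcow2 --container-format bare \
    --file "${IMG}" --public cirros
openstack image list
```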
##### Placement

Placement is the resource scheduling component of OpenStack. It is generally not user-facing and is called by Nova and other components; it is installed on the control node.

Before installing and configuring the Placement service, the corresponding database, service credentials, and API endpoints must be created.

1. Create the database.

    Access the database service as the root user:

        mysql -u root -p

    Create the placement database:

        MariaDB [(none)]> CREATE DATABASE placement;

    Grant access to the database:

        MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' \
        IDENTIFIED BY 'PLACEMENT_DBPASS';
        MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' \
        IDENTIFIED BY 'PLACEMENT_DBPASS';

    Replace `PLACEMENT_DBPASS` with the password for accessing the placement database.

    Exit the database client:

        exit

2. Configure the user and endpoints.

    Source the admin credentials to gain admin command-line permissions:

        source ~/.admin-openrc

    Create the placement user and set its password:

        openstack user create --domain default --password-prompt placement
        User Password:
        Repeat User Password:

    Add the placement user to the service project and assign the admin role:

        openstack role add --project service --user placement admin

    Create the placement service entity:

        openstack service create --name placement \
        --description "Placement API" placement

    Create the Placement API service endpoints:

        openstack endpoint create --region RegionOne \
        placement public http://controller:8778
        openstack endpoint create --region RegionOne \
        placement internal http://controller:8778
        openstack endpoint create --region RegionOne \
        placement admin http://controller:8778

3. Install and configure the components.

    Install the packages:

        dnf install openstack-placement-api

    Edit the `/etc/placement/placement.conf` configuration file and complete the following.

    In the `[placement_database]` section, configure the database entry:

        [placement_database]
        connection = mysql+pymysql://placement:PLACEMENT_DBPASS@controller/placement

    Replace `PLACEMENT_DBPASS` with the password of the placement database.

    In the `[api]` and `[keystone_authtoken]` sections, configure the identity service entry:

        [api]
        auth_strategy = keystone

        [keystone_authtoken]
        auth_url = http://controller:5000/v3
        memcached_servers = controller:11211
        auth_type = password
        project_domain_name = Default
        user_domain_name = Default
        project_name = service
        username = placement
        password = PLACEMENT_PASS

    Replace `PLACEMENT_PASS` with the password of the placement user.

4. Synchronize the database to populate the Placement database:

        su -s /bin/sh -c "placement-manage db sync" placement

5. Start the service by restarting httpd:

        systemctl restart httpd

6. Verify.

    Source the admin credentials to gain admin command-line permissions:

        source ~/.admin-openrc

    Run the status check:

        placement-status upgrade check
    The output is similar to:

        +----------------------------------------------------------------------+
        | Upgrade Check Results                                                |
        +----------------------------------------------------------------------+
        | Check: Missing Root Provider IDs                                     |
        | Result: Success                                                      |
        | Details: None                                                        |
        +----------------------------------------------------------------------+
        | Check: Incomplete Consumers                                          |
        | Result: Success                                                      |
        | Details: None                                                        |
        +----------------------------------------------------------------------+
        | Check: Policy File JSON to YAML Migration                            |
        | Result: Failure                                                      |
        | Details: Your policy file is JSON-formatted which is deprecated. You |
        |   need to switch to YAML-formatted file. Use the                     |
        |   ``oslopolicy-convert-json-to-yaml`` tool to convert the            |
        |   existing JSON-formatted files to YAML in a backwards-              |
        |   compatible manner: https://docs.openstack.org/oslo.policy/         |
        |   latest/cli/oslopolicy-convert-json-to-yaml.html.                   |
        +----------------------------------------------------------------------+

    Here the result of `Policy File JSON to YAML Migration` is Failure. This is because, in Placement, JSON-format policy files have been deprecated since the Wallaby release. As the hint suggests, you can use the `oslopolicy-convert-json-to-yaml` tool to convert the existing JSON-format policy file to YAML:

        oslopolicy-convert-json-to-yaml --namespace placement \
        --policy-file /etc/placement/policy.json \
        --output-file /etc/placement/policy.yaml
        mv /etc/placement/policy.json{,.bak}

    Note: in the current environment this issue can be ignored; it does not affect operation.

7. Run commands against the placement API.

    Install the osc-placement plugin:

        dnf install python3-osc-placement

    List the available resource classes and traits:

        openstack --os-placement-api-version 1.2 resource class list --sort-column name
        +----------------------------+
        | name                       |
        +----------------------------+
        | DISK_GB                    |
        | FPGA                       |
        | ...                        |

        openstack --os-placement-api-version 1.6 trait list --sort-column name
        +---------------------------------------+
        | name                                  |
        +---------------------------------------+
        | COMPUTE_ACCELERATORS                  |
        | COMPUTE_ARCH_AARCH64                  |
        | ...                                   |

##### Nova

Nova is the compute service of OpenStack; it is responsible for creating and releasing virtual machines, among other functions.

**Controller node**

Perform the following operations on the control node.

1. Create the databases.

    Access the database service as the root user:

        mysql -u root -p

    Create the `nova_api`, `nova`, and `nova_cell0` databases:

        MariaDB [(none)]> CREATE DATABASE nova_api;
        MariaDB [(none)]> CREATE DATABASE nova;
        MariaDB [(none)]> CREATE DATABASE nova_cell0;

    Grant access to the databases:

        MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \
        IDENTIFIED BY 'NOVA_DBPASS';
        MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \
        IDENTIFIED BY 'NOVA_DBPASS';
        MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
        IDENTIFIED BY 'NOVA_DBPASS';
        MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
        IDENTIFIED BY 'NOVA_DBPASS';
        MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \
        IDENTIFIED BY 'NOVA_DBPASS';
        MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \
        IDENTIFIED BY 'NOVA_DBPASS';

    Replace `NOVA_DBPASS` with the password for accessing the nova-related databases.

    Exit the database client:

        exit

2. Configure the user and endpoints.

    Source the admin credentials to gain admin command-line permissions:

        source ~/.admin-openrc

    Create the nova user and set its password:

        openstack user create --domain default --password-prompt nova
        User Password:
        Repeat User Password:

    Add the nova user to the service project and assign the admin role:

        openstack role add --project service --user nova admin

    Create the nova service entity:

        openstack service create --name nova \
        --description "OpenStack Compute" compute

    Create the Nova API service endpoints:

        openstack endpoint create --region RegionOne \
        compute public http://controller:8774/v2.1
        openstack endpoint create --region RegionOne \
        compute internal http://controller:8774/v2.1
        openstack endpoint create --region RegionOne \
        compute admin http://controller:8774/v2.1
3. Install and configure the components.

    Install the packages:

        dnf install openstack-nova-api openstack-nova-conductor \
        openstack-nova-novncproxy openstack-nova-scheduler

    Edit the `/etc/nova/nova.conf` configuration file and complete the following.

    In the `[DEFAULT]` section, enable the compute and metadata APIs, configure the RabbitMQ message queue entry, set `my_ip` to the management IP of the controller node, and explicitly define `log_dir`:

        [DEFAULT]
        enabled_apis = osapi_compute,metadata
        transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
        my_ip = 192.168.0.2
        log_dir = /var/log/nova
        state_path = /var/lib/nova

    Replace `RABBIT_PASS` with the password of the openstack account in RabbitMQ.

    In the `[api_database]` and `[database]` sections, configure the database entries:

        [api_database]
        connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api

        [database]
        connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova

    Replace `NOVA_DBPASS` with the password of the nova-related databases.

    In the `[api]` and `[keystone_authtoken]` sections, configure the identity service entry:

        [api]
        auth_strategy = keystone

        [keystone_authtoken]
        auth_url = http://controller:5000/v3
        memcached_servers = controller:11211
        auth_type = password
        project_domain_name = Default
        user_domain_name = Default
        project_name = service
        username = nova
        password = NOVA_PASS

    Replace `NOVA_PASS` with the password of the nova user.

    In the `[vnc]` section, enable and configure the remote console entry:

        [vnc]
        enabled = true
        server_listen = $my_ip
        server_proxyclient_address = $my_ip

    In the `[glance]` section, configure the address of the image service API:

        [glance]
        api_servers = http://controller:9292

    In the `[oslo_concurrency]` section, configure the lock path:

        [oslo_concurrency]
        lock_path = /var/lib/nova/tmp

    In the `[placement]` section, configure the entry of the placement service:

        [placement]
        region_name = RegionOne
        project_domain_name = Default
        project_name = service
        auth_type = password
        user_domain_name = Default
        auth_url = http://controller:5000/v3
        username = placement
        password = PLACEMENT_PASS

    Replace `PLACEMENT_PASS` with the password of the placement user.

4. Synchronize the databases.

    Synchronize the nova-api database:

        su -s /bin/sh -c "nova-manage api_db sync" nova

    Register the cell0 database:

        su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova

    Create the cell1 cell:

        su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova

    Synchronize the nova database:

        su -s /bin/sh -c "nova-manage db sync" nova

    Verify that cell0 and cell1 are registered correctly:

        su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova

5. Start the services:

        systemctl enable \
        openstack-nova-api.service \
        openstack-nova-scheduler.service \
        openstack-nova-conductor.service \
        openstack-nova-novncproxy.service
        systemctl start \
        openstack-nova-api.service \
        openstack-nova-scheduler.service \
        openstack-nova-conductor.service \
        openstack-nova-novncproxy.service
**Compute node**

Perform the following operations on the compute node.

1. Install the packages:

        dnf install openstack-nova-compute

2. Edit the `/etc/nova/nova.conf` configuration file.

    In the `[DEFAULT]` section, enable the compute and metadata APIs, configure the RabbitMQ message queue entry, set `my_ip` to the management IP of the compute node, and explicitly define `compute_driver`, `instances_path`, and `log_dir`:

        [DEFAULT]
        enabled_apis = osapi_compute,metadata
        transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
        my_ip = 192.168.0.3
        compute_driver = libvirt.LibvirtDriver
        instances_path = /var/lib/nova/instances
        log_dir = /var/log/nova

    Replace `RABBIT_PASS` with the password of the openstack account in RabbitMQ.

    In the `[api]` and `[keystone_authtoken]` sections, configure the identity service entry:

        [api]
        auth_strategy = keystone

        [keystone_authtoken]
        auth_url = http://controller:5000/v3
        memcached_servers = controller:11211
        auth_type = password
        project_domain_name = Default
        user_domain_name = Default
        project_name = service
        username = nova
        password = NOVA_PASS

    Replace `NOVA_PASS` with the password of the nova user.

    In the `[vnc]` section, enable and configure the remote console entry:

        [vnc]
        enabled = true
        server_listen = $my_ip
        server_proxyclient_address = $my_ip
        novncproxy_base_url = http://controller:6080/vnc_auto.html

    In the `[glance]` section, configure the address of the image service API:

        [glance]
        api_servers = http://controller:9292

    In the `[oslo_concurrency]` section, configure the lock path:

        [oslo_concurrency]
        lock_path = /var/lib/nova/tmp

    In the `[placement]` section, configure the entry of the placement service:

        [placement]
        region_name = RegionOne
        project_domain_name = Default
        project_name = service
        auth_type = password
        user_domain_name = Default
        auth_url = http://controller:5000/v3
        username = placement
        password = PLACEMENT_PASS

    Replace `PLACEMENT_PASS` with the password of the placement user.

3. Confirm whether the compute node supports hardware acceleration for virtual machines (x86_64).

    When the processor is of the x86_64 architecture, run the following command to check whether hardware acceleration is supported:

        egrep -c '(vmx|svm)' /proc/cpuinfo

    If the return value is 0, hardware acceleration is not supported, and libvirt must be configured to use QEMU instead of the default KVM. Edit the `[libvirt]` section of `/etc/nova/nova.conf`:

        [libvirt]
        virt_type = qemu

    If the return value is 1 or greater, hardware acceleration is supported and no extra configuration is needed.

4. Confirm whether the compute node supports hardware acceleration for virtual machines (arm64).

    When the processor is of the arm64 architecture, run the following command to check whether hardware acceleration is supported:

        virt-host-validate
    This command is provided by libvirt; at this point libvirt should already have been installed as a dependency of openstack-nova-compute, so the command is available in the environment.

    When FAIL is shown, hardware acceleration is not supported, and libvirt must be configured to use QEMU instead of the default KVM:

        QEMU: Checking if device /dev/kvm exists: FAIL (Check that CPU and firmware supports virtualization and kvm module is loaded)

    Edit the `[libvirt]` section of `/etc/nova/nova.conf`:

        [libvirt]
        virt_type = qemu

    When PASS is shown, hardware acceleration is supported and no extra configuration is needed:

        QEMU: Checking if device /dev/kvm exists: PASS

    A helper that wraps these checks for both architectures is sketched after the service start step below.

5. Configure qemu (arm64 only). This operation is only required when the processor is of the arm64 architecture.

    Edit `/etc/libvirt/qemu.conf`:

        nvram = ["/usr/share/AAVMF/AAVMF_CODE.fd: \
        /usr/share/AAVMF/AAVMF_VARS.fd", \
        "/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw: \
        /usr/share/edk2/aarch64/vars-template-pflash.raw"]

    Edit `/etc/qemu/firmware/edk2-aarch64.json`:

        {
            "description": "UEFI firmware for ARM64 virtual machines",
            "interface-types": [
                "uefi"
            ],
            "mapping": {
                "device": "flash",
                "executable": {
                    "filename": "/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw",
                    "format": "raw"
                },
                "nvram-template": {
                    "filename": "/usr/share/edk2/aarch64/vars-template-pflash.raw",
                    "format": "raw"
                }
            },
            "targets": [
                {
                    "architecture": "aarch64",
                    "machines": [
                        "virt-*"
                    ]
                }
            ],
            "features": [
            ],
            "tags": [
            ]
        }

6. Start the services:

        systemctl enable libvirtd.service openstack-nova-compute.service
        systemctl start libvirtd.service openstack-nova-compute.service
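As referenced in the hardware-acceleration steps above, the following is a small sketch (not part of the guide) that combines the x86_64 and arm64 checks into one script. It only prints what it finds; whether to set `virt_type = qemu` in `/etc/nova/nova.conf` is still a manual decision, and it assumes `virt-host-validate` is present on aarch64 hosts as noted above.

```bash
#!/bin/bash
# Illustrative helper: report whether this compute node supports hardware
# virtualization, using the same checks as the steps above.
arch="$(uname -m)"

case "$arch" in
    x86_64)
        count="$(egrep -c '(vmx|svm)' /proc/cpuinfo || true)"
        if [ "$count" -ge 1 ]; then
            echo "KVM acceleration available (vmx/svm flag count: $count); keep the default virt_type."
        else
            echo "No vmx/svm flags found; set virt_type = qemu in the [libvirt] section of /etc/nova/nova.conf."
        fi
        ;;
    aarch64)
        if virt-host-validate qemu | grep -q 'FAIL'; then
            echo "virt-host-validate reported FAIL; set virt_type = qemu in the [libvirt] section of /etc/nova/nova.conf."
        else
            echo "virt-host-validate passed; keep the default virt_type."
        fi
        ;;
    *)
        echo "Unhandled architecture: $arch" >&2
        ;;
esac
```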
**Controller node**

Perform the following operations on the control node.

1. Add the compute node to the OpenStack cluster.

    Source the admin credentials to gain admin command-line permissions:

        source ~/.admin-openrc

    Confirm that the nova-compute service has been recognized in the database:

        openstack compute service list --service nova-compute

    Discover the compute nodes and add the compute node to the cell database:

        su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova

    The result looks like this:

        Modules with known eventlet monkey patching issues were imported prior to eventlet monkey patching: urllib3. This warning can usually be ignored if the caller is only importing and not executing nova code.
        Found 2 cell mappings.
        Skipping cell0 since it does not contain hosts.
        Getting computes from cell 'cell1': 6dae034e-b2d9-4a6c-b6f0-60ada6a6ddc2
        Checking host mapping for compute host 'compute': 6286a86f-09d7-4786-9137-1185654c9e2e
        Creating host mapping for compute host 'compute': 6286a86f-09d7-4786-9137-1185654c9e2e
        Found 1 unmapped computes in cell: 6dae034e-b2d9-4a6c-b6f0-60ada6a6ddc2

2. Verify.

    List the service components to verify that every process started and registered successfully:

        openstack compute service list

    List the API endpoints in the identity service to verify the connection to the identity service:

        openstack catalog list

    List the images in the image service to verify the connection to the image service:

        openstack image list

    Check whether the cells are working properly and whether the other necessary prerequisites are in place:

        nova-status upgrade check

##### Neutron

Neutron is the network service of OpenStack; it provides virtual switches, IP routing, DHCP, and other functions.

**Controller node**

1. Create the database, service credentials, and API endpoints.

    Create the database:

        mysql -u root -p
        MariaDB [(none)]> CREATE DATABASE neutron;
        MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'NEUTRON_DBPASS';
        MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'NEUTRON_DBPASS';
        MariaDB [(none)]> exit;

    Create the user and service, and remember the password entered when creating the neutron user; it is used for `NEUTRON_PASS` in the configuration:

        source ~/.admin-openrc
        openstack user create --domain default --password-prompt neutron
        openstack role add --project service --user neutron admin
        openstack service create --name neutron --description "OpenStack Networking" network

    Create the Neutron API service endpoints:

        openstack endpoint create --region RegionOne network public http://controller:9696
        openstack endpoint create --region RegionOne network internal http://controller:9696
        openstack endpoint create --region RegionOne network admin http://controller:9696

2. Install the packages:

        dnf install -y openstack-neutron openstack-neutron-linuxbridge ebtables ipset openstack-neutron-ml2

3. Configure Neutron.

    Modify `/etc/neutron/neutron.conf`:

        [database]
        connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron

        [DEFAULT]
        core_plugin = ml2
        service_plugins = router
        allow_overlapping_ips = true
        transport_url = rabbit://openstack:RABBIT_PASS@controller
        auth_strategy = keystone
        notify_nova_on_port_status_changes = true
        notify_nova_on_port_data_changes = true

        [keystone_authtoken]
        www_authenticate_uri = http://controller:5000
        auth_url = http://controller:5000
        memcached_servers = controller:11211
        auth_type = password
        project_domain_name = Default
        user_domain_name = Default
        project_name = service
        username = neutron
        password = NEUTRON_PASS

        [nova]
        auth_url = http://controller:5000
        auth_type = password
        project_domain_name = Default
        user_domain_name = Default
        region_name = RegionOne
        project_name = service
        username = nova
        password = NOVA_PASS

        [oslo_concurrency]
        lock_path = /var/lib/neutron/tmp

        [experimental]
        linuxbridge = true
4. Configure ML2. The specific ML2 configuration can be adjusted according to your needs; this document uses a provider network with linuxbridge.

    Modify `/etc/neutron/plugins/ml2/ml2_conf.ini`:

        [ml2]
        type_drivers = flat,vlan,vxlan
        tenant_network_types = vxlan
        mechanism_drivers = linuxbridge,l2population
        extension_drivers = port_security

        [ml2_type_flat]
        flat_networks = provider

        [ml2_type_vxlan]
        vni_ranges = 1:1000

        [securitygroup]
        enable_ipset = true

    Modify `/etc/neutron/plugins/ml2/linuxbridge_agent.ini`:

        [linux_bridge]
        physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME

        [vxlan]
        enable_vxlan = true
        local_ip = OVERLAY_INTERFACE_IP_ADDRESS
        l2_population = true

        [securitygroup]
        enable_security_group = true
        firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

5. Configure the Layer-3 agent. Modify `/etc/neutron/l3_agent.ini`:

        [DEFAULT]
        interface_driver = linuxbridge

6. Configure the DHCP agent. Modify `/etc/neutron/dhcp_agent.ini`:

        [DEFAULT]
        interface_driver = linuxbridge
        dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
        enable_isolated_metadata = true

7. Configure the metadata agent. Modify `/etc/neutron/metadata_agent.ini`:

        [DEFAULT]
        nova_metadata_host = controller
        metadata_proxy_shared_secret = METADATA_SECRET

8. Configure the nova service to use neutron. Modify `/etc/nova/nova.conf`:

        [neutron]
        auth_url = http://controller:5000
        auth_type = password
        project_domain_name = default
        user_domain_name = default
        region_name = RegionOne
        project_name = service
        username = neutron
        password = NEUTRON_PASS
        service_metadata_proxy = true
        metadata_proxy_shared_secret = METADATA_SECRET

9. Create the symbolic link for `/etc/neutron/plugin.ini`:

        ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

10. Synchronize the database:

        su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

11. Restart the nova api service:

        systemctl restart openstack-nova-api

12. Start the network services (a quick agent check is sketched right after this step):

        systemctl enable neutron-server.service neutron-linuxbridge-agent.service \
        neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service
        systemctl start neutron-server.service neutron-linuxbridge-agent.service \
        neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service
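The guide does not include an explicit verification step for Neutron, so the following quick check is a suggestion rather than part of the original text. Run on the controller after the services above are up, `openstack network agent list` should show the controller's DHCP, metadata, L3, and Linux bridge agents as alive; the compute node's Linux bridge agent only appears after the compute-node configuration in the next subsection.

```bash
#!/bin/bash
# Optional check: confirm the Neutron agents started above have registered.
# Assumes ~/.admin-openrc exists (see the Keystone section).
source ~/.admin-openrc

# Each agent should report ":-)" / "UP" in the Alive and State columns.
openstack network agent list
```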
**Compute node**

1. Install the packages:

        dnf install openstack-neutron-linuxbridge ebtables ipset -y

2. Configure Neutron.

    Modify `/etc/neutron/neutron.conf`:

        [DEFAULT]
        transport_url = rabbit://openstack:RABBIT_PASS@controller
        auth_strategy = keystone

        [keystone_authtoken]
        www_authenticate_uri = http://controller:5000
        auth_url = http://controller:5000
        memcached_servers = controller:11211
        auth_type = password
        project_domain_name = Default
        user_domain_name = Default
        project_name = service
        username = neutron
        password = NEUTRON_PASS

        [oslo_concurrency]
        lock_path = /var/lib/neutron/tmp

    Modify `/etc/neutron/plugins/ml2/linuxbridge_agent.ini`:

        [linux_bridge]
        physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME

        [vxlan]
        enable_vxlan = true
        local_ip = OVERLAY_INTERFACE_IP_ADDRESS
        l2_population = true

        [securitygroup]
        enable_security_group = true
        firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

3. Configure the nova compute service to use neutron. Modify `/etc/nova/nova.conf`:

        [neutron]
        auth_url = http://controller:5000
        auth_type = password
        project_domain_name = default
        user_domain_name = default
        region_name = RegionOne
        project_name = service
        username = neutron
        password = NEUTRON_PASS

4. Restart the nova-compute service:

        systemctl restart openstack-nova-compute.service

5. Start the Neutron linuxbridge agent service:

        systemctl enable neutron-linuxbridge-agent
        systemctl start neutron-linuxbridge-agent

##### Cinder

Cinder is the storage service of OpenStack; it provides creation, release, backup, and other functions for block devices.

**Controller node**:

1. Initialize the database. `CINDER_DBPASS` is the user-defined password of the cinder database:

        mysql -u root -p
        MariaDB [(none)]> CREATE DATABASE cinder;
        MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'CINDER_DBPASS';
        MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'CINDER_DBPASS';
        MariaDB [(none)]> exit

2. Initialize the Keystone resource objects:

        source ~/.admin-openrc
        # When creating the user, the command line prompts for a password; enter a password of your choice
        # and replace `CINDER_PASS` below with it.
        openstack user create --domain default --password-prompt cinder
        openstack role add --project service --user cinder admin
        openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
        openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s
        openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s
        openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s

3. Install the packages:

        dnf install openstack-cinder-api openstack-cinder-scheduler

4. Modify the cinder configuration file `/etc/cinder/cinder.conf`:

        [DEFAULT]
        transport_url = rabbit://openstack:RABBIT_PASS@controller
        auth_strategy = keystone
        my_ip = 192.168.0.2

        [database]
        connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder

        [keystone_authtoken]
        www_authenticate_uri = http://controller:5000
        auth_url = http://controller:5000
        memcached_servers = controller:11211
        auth_type = password
        project_domain_name = Default
        user_domain_name = Default
        project_name = service
        username = cinder
        password = CINDER_PASS

        [oslo_concurrency]
        lock_path = /var/lib/cinder/tmp

5. Synchronize the database:

        su -s /bin/sh -c "cinder-manage db sync" cinder

6. Modify the nova configuration `/etc/nova/nova.conf`:

        [cinder]
        os_region_name = RegionOne

7. Start the services:

        systemctl restart openstack-nova-api
        systemctl start openstack-cinder-api openstack-cinder-scheduler
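Before switching to the storage node it can be worth confirming that the scheduler started above has registered itself. This check is not in the original guide; it uses the standard `openstack volume service list` command and assumes `~/.admin-openrc` is available. At this point only `cinder-scheduler` is expected to appear; `cinder-volume` and `cinder-backup` show up once the storage node below is configured.

```bash
#!/bin/bash
# Optional check on the controller: the cinder-scheduler service should be
# listed as up before the storage node is configured.
source ~/.admin-openrc
openstack volume service list
```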
**Storage node**:

The storage node must have at least one spare hard disk prepared in advance as the cinder storage backend. The text below assumes that the storage node already has an unused disk with the device name `/dev/sdb`; during the configuration, replace the name according to your real environment.

Cinder supports many types of backend storage; this guide uses the simplest one, lvm, as the reference. If you want to use another backend such as ceph, configure it yourself.

1. Install the packages:

        dnf install lvm2 device-mapper-persistent-data scsi-target-utils rpcbind nfs-utils openstack-cinder-volume openstack-cinder-backup

2. Configure the lvm volume group:

        pvcreate /dev/sdb
        vgcreate cinder-volumes /dev/sdb

3. Modify the cinder configuration file `/etc/cinder/cinder.conf`:

        [DEFAULT]
        transport_url = rabbit://openstack:RABBIT_PASS@controller
        auth_strategy = keystone
        my_ip = 192.168.0.4
        enabled_backends = lvm
        glance_api_servers = http://controller:9292

        [keystone_authtoken]
        www_authenticate_uri = http://controller:5000
        auth_url = http://controller:5000
        memcached_servers = controller:11211
        auth_type = password
        project_domain_name = default
        user_domain_name = default
        project_name = service
        username = cinder
        password = CINDER_PASS

        [database]
        connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder

        [lvm]
        volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
        volume_group = cinder-volumes
        target_protocol = iscsi
        target_helper = lioadm

        [oslo_concurrency]
        lock_path = /var/lib/cinder/tmp

4. Configure cinder backup (optional).

    cinder-backup is an optional backup service. Cinder likewise supports many backup backends; this document uses swift storage. If you want to use another backend such as NFS, configure it yourself; for example, refer to the OpenStack official documentation for the NFS configuration.

    Modify `/etc/cinder/cinder.conf` and add the following in `[DEFAULT]`:

        [DEFAULT]
        backup_driver = cinder.backup.drivers.swift.SwiftBackupDriver
        backup_swift_url = SWIFT_URL

    Here `SWIFT_URL` is the URL of the swift service in the environment. After the swift service has been deployed, run the `openstack catalog show object-store` command to obtain it.

5. Start the services:

        systemctl start openstack-cinder-volume target
        systemctl start openstack-cinder-backup   # optional

At this point the deployment of the Cinder service is complete. A simple verification can be run on the controller with the following commands:

        source ~/.admin-openrc
        openstack volume service list
        openstack volume list
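Beyond listing the services, a quick end-to-end check is to let cinder actually carve a small volume out of the `cinder-volumes` group. This sketch is not part of the original guide; the volume name `smoke-test-vol` and the 1 GB size are arbitrary, and it assumes the LVM backend above is in place and `~/.admin-openrc` has been sourced.

```bash
#!/bin/bash
# Optional end-to-end check: create, inspect, and delete a small test volume.
set -e
source ~/.admin-openrc

openstack volume create --size 1 smoke-test-vol
# The volume should reach the "available" status within a few seconds.
openstack volume show smoke-test-vol -c name -c status -c size
openstack volume delete smoke-test-vol
```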
\"/etc/openstack-dashboard\" OPENSTACK_API_VERSIONS = { \"identity\": 3, \"image\": 2, \"volume\": 3, } \u91cd\u542f\u670d\u52a1 systemctl restart httpd \u81f3\u6b64\uff0chorizon\u670d\u52a1\u7684\u90e8\u7f72\u5df2\u5168\u90e8\u5b8c\u6210\uff0c\u6253\u5f00\u6d4f\u89c8\u5668\uff0c\u8f93\u5165 http://192.168.0.2/dashboard \uff0c\u6253\u5f00horizon\u767b\u5f55\u9875\u9762\u3002 Ironic \u00b6 Ironic\u662fOpenStack\u7684\u88f8\u91d1\u5c5e\u670d\u52a1\uff0c\u5982\u679c\u7528\u6237\u9700\u8981\u8fdb\u884c\u88f8\u673a\u90e8\u7f72\u5219\u63a8\u8350\u4f7f\u7528\u8be5\u7ec4\u4ef6\u3002\u5426\u5219\uff0c\u53ef\u4ee5\u4e0d\u7528\u5b89\u88c5\u3002 \u5728\u63a7\u5236\u8282\u70b9\u6267\u884c\u4ee5\u4e0b\u64cd\u4f5c\u3002 \u8bbe\u7f6e\u6570\u636e\u5e93 \u88f8\u91d1\u5c5e\u670d\u52a1\u5728\u6570\u636e\u5e93\u4e2d\u5b58\u50a8\u4fe1\u606f\uff0c\u521b\u5efa\u4e00\u4e2a ironic \u7528\u6237\u53ef\u4ee5\u8bbf\u95ee\u7684 ironic \u6570\u636e\u5e93\uff0c\u66ff\u6362 IRONIC_DBPASS \u4e3a\u5408\u9002\u7684\u5bc6\u7801 mysql -u root -p MariaDB [(none)]> CREATE DATABASE ironic CHARACTER SET utf8; MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'localhost' \\ IDENTIFIED BY 'IRONIC_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'%' \\ IDENTIFIED BY 'IRONIC_DBPASS'; MariaDB [(none)]> exit Bye \u521b\u5efa\u670d\u52a1\u7528\u6237\u8ba4\u8bc1 \u521b\u5efaBare Metal\u670d\u52a1\u7528\u6237 \u66ff\u6362 IRONIC_PASS \u4e3aironic\u7528\u6237\u5bc6\u7801\uff0c IRONIC_INSPECTOR_PASS \u4e3aironic_inspector\u7528\u6237\u5bc6\u7801\u3002 openstack user create --password IRONIC_PASS \\ --email ironic@example.com ironic openstack role add --project service --user ironic admin openstack service create --name ironic \\ --description \"Ironic baremetal provisioning service\" baremetal openstack service create --name ironic-inspector --description \"Ironic inspector baremetal provisioning service\" baremetal-introspection openstack user create --password IRONIC_INSPECTOR_PASS --email ironic_inspector@example.com ironic-inspector openstack role add --project service --user ironic-inspector admin \u521b\u5efaBare Metal\u670d\u52a1\u8bbf\u95ee\u5165\u53e3 openstack endpoint create --region RegionOne baremetal admin http://192.168.0.2:6385 openstack endpoint create --region RegionOne baremetal public http://192.168.0.2:6385 openstack endpoint create --region RegionOne baremetal internal http://192.168.0.2:6385 openstack endpoint create --region RegionOne baremetal-introspection internal http://192.168.0.2:5050/v1 openstack endpoint create --region RegionOne baremetal-introspection public http://192.168.0.2:5050/v1 openstack endpoint create --region RegionOne baremetal-introspection admin http://192.168.0.2:5050/v1 \u5b89\u88c5\u7ec4\u4ef6 dnf install openstack-ironic-api openstack-ironic-conductor python3-ironicclient \u914d\u7f6eironic-api\u670d\u52a1 \u914d\u7f6e\u6587\u4ef6\u8def\u5f84/etc/ironic/ironic.conf \u901a\u8fc7 connection \u9009\u9879\u914d\u7f6e\u6570\u636e\u5e93\u7684\u4f4d\u7f6e\uff0c\u5982\u4e0b\u6240\u793a\uff0c\u66ff\u6362 IRONIC_DBPASS \u4e3a ironic \u7528\u6237\u7684\u5bc6\u7801\uff0c\u66ff\u6362 DB_IP \u4e3aDB\u670d\u52a1\u5668\u6240\u5728\u7684IP\u5730\u5740\uff1a [database] # The SQ LAlchemy connection string used to connect to the # database (string value) # connection = mysql+pymysql://ironic:IRONIC_DBPASS@DB_IP/ironic connection = mysql+pymysql://ironic:IRONIC_DBPASS@controller/ironic 
Configure the ironic-api service to use the RabbitMQ message broker via the following options, replacing `RPC_*` with the RabbitMQ address and credentials:

```ini
[DEFAULT]
# A URL representing the messaging driver to use and its full
# configuration. (string value)
# transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
```

Users may also replace RabbitMQ with json-rpc on their own.

Configure the ironic-api service with the Identity service credentials, replacing `PUBLIC_IDENTITY_IP` with the public IP of the identity server, `PRIVATE_IDENTITY_IP` with the private IP of the identity server, `IRONIC_PASS` with the password of the `ironic` user in the Identity service, and `RABBIT_PASS` with the password of the openstack account in RabbitMQ:
```ini
[DEFAULT]
# Authentication strategy used by ironic-api: one of
# "keystone" or "noauth". "noauth" should not be used in a
# production environment because all authentication will be
# disabled. (string value)
auth_strategy=keystone
host = controller
memcache_servers = controller:11211
enabled_network_interfaces = flat,noop,neutron
default_network_interface = noop
enabled_hardware_types = ipmi
enabled_boot_interfaces = pxe
enabled_deploy_interfaces = direct
default_deploy_interface = direct
enabled_inspect_interfaces = inspector
enabled_management_interfaces = ipmitool
enabled_power_interfaces = ipmitool
enabled_rescue_interfaces = no-rescue,agent
isolinux_bin = /usr/share/syslinux/isolinux.bin
logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s

[keystone_authtoken]
# Authentication type to load (string value)
auth_type=password
# Complete public Identity API endpoint (string value)
# www_authenticate_uri=http://PUBLIC_IDENTITY_IP:5000
www_authenticate_uri=http://controller:5000
# Complete admin Identity API endpoint. (string value)
# auth_url=http://PRIVATE_IDENTITY_IP:5000
auth_url=http://controller:5000
# Service username. (string value)
username=ironic
# Service account password. (string value)
password=IRONIC_PASS
# Service tenant name. (string value)
project_name=service
# Domain name containing project (string value)
project_domain_name=Default
# User's domain name (string value)
user_domain_name=Default

[agent]
deploy_logs_collect = always
deploy_logs_local_path = /var/log/ironic/deploy
deploy_logs_storage_backend = local
image_download_source = http
stream_raw_images = false
force_raw_images = false
verify_ca = False

[oslo_concurrency]

[oslo_messaging_notifications]
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
topics = notifications
driver = messagingv2

[oslo_messaging_rabbit]
amqp_durable_queues = True
rabbit_ha_queues = True

[pxe]
ipxe_enabled = false
pxe_append_params = nofb nomodeset vga=normal coreos.autologin ipa-insecure=1
image_cache_size = 204800
tftp_root=/var/lib/tftpboot/cephfs/
tftp_master_path=/var/lib/tftpboot/cephfs/master_images

[dhcp]
dhcp_provider = none
```

Create the Bare Metal service database tables:

```shell
ironic-dbsync --config-file /etc/ironic/ironic.conf create_schema
```

Restart the ironic-api service:

```shell
sudo systemctl restart openstack-ironic-api
```

Configure the ironic-conductor service

The following is the standard configuration of the ironic-conductor service itself. ironic-conductor can be deployed on a different node than ironic-api; in this guide both run on the controller node, so duplicated options may be skipped.

Configure `my_ip` with the IP of the host where the conductor service runs:

```ini
[DEFAULT]
# IP address of this host. If unset, will determine the IP
# programmatically. If unable to do so, will use "127.0.0.1".
# (string value)
# my_ip=HOST_IP
my_ip = 192.168.0.2
```

Configure the location of the database. ironic-conductor should use the same configuration as ironic-api. Replace `IRONIC_DBPASS` with the password of the `ironic` user:

```ini
[database]
# The SQLAlchemy connection string to use to connect to the
# database. (string value)
connection = mysql+pymysql://ironic:IRONIC_DBPASS@controller/ironic
```

Configure the RabbitMQ message broker via the following option. ironic-conductor should use the same configuration as ironic-api. Replace `RABBIT_PASS` with the password of the openstack account in RabbitMQ:

```ini
[DEFAULT]
# A URL representing the messaging driver to use and its full
# configuration. (string value)
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
```
Users may also replace RabbitMQ with json-rpc on their own.

Configure credentials for accessing other OpenStack services

To communicate with other OpenStack services, the Bare Metal service needs to authenticate with the OpenStack Identity service using service user credentials when it calls them. These credentials must be configured in the configuration section associated with each service:

- `[neutron]` - accessing the OpenStack Networking service
- `[glance]` - accessing the OpenStack Image service
- `[swift]` - accessing the OpenStack Object Storage service
- `[cinder]` - accessing the OpenStack Block Storage service
- `[inspector]` - accessing the OpenStack Bare Metal introspection service
- `[service_catalog]` - a special entry that stores the credentials the Bare Metal service uses to discover its own API URL endpoint registered in the Identity service catalog

For simplicity, the same service user can be used for all services. For backward compatibility it should be the same user configured in `[keystone_authtoken]` of the ironic-api service, but this is not required; a different service user can be created and configured for each service.

In the example below, the credentials for accessing the OpenStack Networking service are configured so that:

- the Networking service is deployed in the Identity service region named RegionOne, with only the public endpoint registered in the service catalog
- requests use a specific CA SSL certificate for HTTPS connections
- the same service user as configured for ironic-api is used
- the dynamic password authentication plugin discovers a suitable Identity service API version based on the other options

Replace `IRONIC_PASS` with the ironic user password:

```ini
[neutron]
# Authentication type to load (string value)
auth_type = password
# Authentication URL (string value)
auth_url=https://IDENTITY_IP:5000/
# Username (string value)
username=ironic
# User's password (string value)
password=IRONIC_PASS
# Project name to scope to (string value)
project_name=service
# Domain ID containing project (string value)
project_domain_id=default
# User's domain id (string value)
user_domain_id=default
# PEM encoded Certificate Authority to use when verifying
# HTTPs connections. (string value)
cafile=/opt/stack/data/ca-bundle.pem
# The default region_name for endpoint URL discovery. (string
# value)
region_name = RegionOne
# List of interfaces, in order of preference, for endpoint
# URL. (list value)
valid_interfaces=public
```
Other reference configuration:

```ini
[glance]
endpoint_override = http://controller:9292
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
auth_type = password
username = ironic
password = IRONIC_PASS
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service

[service_catalog]
region_name = RegionOne
project_domain_id = default
user_domain_id = default
project_name = service
password = IRONIC_PASS
username = ironic
auth_url = http://controller:5000
auth_type = password
```

By default, in order to communicate with another service, the Bare Metal service tries to discover a suitable endpoint for that service through the Identity service catalog. To use a different endpoint for a particular service, specify it via the `endpoint_override` option in the Bare Metal service configuration file:

```ini
[neutron]
endpoint_override =
```

Configure the enabled drivers and hardware types

Set the hardware types allowed by the ironic-conductor service via `enabled_hardware_types`:

```ini
[DEFAULT]
enabled_hardware_types = ipmi
```

Configure the hardware interfaces:

```ini
enabled_boot_interfaces = pxe
enabled_deploy_interfaces = direct,iscsi
enabled_inspect_interfaces = inspector
enabled_management_interfaces = ipmitool
enabled_power_interfaces = ipmitool
```

Configure the interface defaults:

```ini
[DEFAULT]
default_deploy_interface = direct
default_network_interface = neutron
```

If any driver that uses Direct deploy is enabled, the Swift backend of the Image service must be installed and configured. The Ceph Object Gateway (RADOS Gateway) is also supported as an Image service backend.

Restart the ironic-conductor service:

```shell
sudo systemctl restart openstack-ironic-conductor
```

Configure the ironic-inspector service

Install the component:

```shell
dnf install openstack-ironic-inspector
```

Create the database:

```shell
# mysql -u root -p

MariaDB [(none)]> CREATE DATABASE ironic_inspector CHARACTER SET utf8;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic_inspector.* TO 'ironic_inspector'@'localhost' \
  IDENTIFIED BY 'IRONIC_INSPECTOR_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic_inspector.* TO 'ironic_inspector'@'%' \
  IDENTIFIED BY 'IRONIC_INSPECTOR_DBPASS';
MariaDB [(none)]> exit
Bye
```

Configure `/etc/ironic-inspector/inspector.conf`.

Configure the location of the database via the `connection` option, replacing `IRONIC_INSPECTOR_DBPASS` with the password of the `ironic_inspector` user:

```ini
[database]
backend = sqlalchemy
connection = mysql+pymysql://ironic_inspector:IRONIC_INSPECTOR_DBPASS@controller/ironic_inspector
min_pool_size = 100
max_pool_size = 500
pool_timeout = 30
max_retries = 5
max_overflow = 200
db_retry_interval = 2
db_inc_retry_interval = True
db_max_retry_interval = 2
db_max_retries = 5
```

Configure the message queue address:

```ini
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
```

Set up Keystone authentication:
```ini
[DEFAULT]
auth_strategy = keystone
timeout = 900
rootwrap_config = /etc/ironic-inspector/rootwrap.conf
logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s
log_dir = /var/log/ironic-inspector
state_path = /var/lib/ironic-inspector
use_stderr = False

[ironic]
api_endpoint = http://IRONIC_API_HOST_ADDRESS:6385
auth_type = password
auth_url = http://PUBLIC_IDENTITY_IP:5000
auth_strategy = keystone
ironic_url = http://IRONIC_API_HOST_ADDRESS:6385
os_region = RegionOne
project_name = service
project_domain_name = Default
user_domain_name = Default
username = IRONIC_SERVICE_USER_NAME
password = IRONIC_SERVICE_USER_PASSWORD

[keystone_authtoken]
auth_type = password
auth_url = http://controller:5000
www_authenticate_uri = http://controller:5000
project_domain_name = default
user_domain_name = default
project_name = service
username = ironic_inspector
password = IRONIC_INSPECTOR_PASS
region_name = RegionOne
memcache_servers = controller:11211
token_cache_time = 300

[processing]
add_ports = active
processing_hooks = $default_processing_hooks,local_link_connection,lldp_basic
ramdisk_logs_dir = /var/log/ironic-inspector/ramdisk
always_store_ramdisk_logs = true
store_data = none
power_off = false

[pxe_filter]
driver = iptables

[capabilities]
boot_mode=True
```

Configure the ironic-inspector dnsmasq service:

```ini
# Configuration file path: /etc/ironic-inspector/dnsmasq.conf
port=0
interface=enp3s0                      # replace with the actual listening interface
dhcp-range=192.168.0.40,192.168.0.50  # replace with the actual DHCP address range
bind-interfaces
enable-tftp
dhcp-match=set:efi,option:client-arch,7
dhcp-match=set:efi,option:client-arch,9
dhcp-match=aarch64,option:client-arch,11
dhcp-boot=tag:aarch64,grubaa64.efi
dhcp-boot=tag:!aarch64,tag:efi,grubx64.efi
dhcp-boot=tag:!aarch64,tag:!efi,pxelinux.0
tftp-root=/tftpboot                   # replace with the actual tftpboot directory
log-facility=/var/log/dnsmasq.log
```

Disable DHCP on the subnet of the ironic provision network:

```shell
openstack subnet set --no-dhcp 72426e89-f552-4dc4-9ac7-c4e131ce7f3c
```

Initialize the ironic-inspector database:

```shell
ironic-inspector-dbsync --config-file /etc/ironic-inspector/inspector.conf upgrade
```

Start the services:

```shell
systemctl enable --now openstack-ironic-inspector.service
systemctl enable --now openstack-ironic-inspector-dnsmasq.service
```

Configure the httpd service

Create the httpd root directory used by ironic and set its owner and group. The directory path must match the path specified by the `http_root` option in the `[deploy]` section of `/etc/ironic/ironic.conf`:

```shell
mkdir -p /var/lib/ironic/httproot
chown ironic.ironic /var/lib/ironic/httproot
```

Install the httpd service (skip if it is already installed):

```shell
dnf install httpd -y
```

Create the file `/etc/httpd/conf.d/openstack-ironic-httpd.conf` with the following content:

```
Listen 8080

<VirtualHost *:8080>
    ServerName ironic.openeuler.com
    ErrorLog "/var/log/httpd/openstack-ironic-httpd-error_log"
    CustomLog "/var/log/httpd/openstack-ironic-httpd-access_log" "%h %l %u %t \"%r\" %>s %b"
    DocumentRoot "/var/lib/ironic/httproot"
    <Directory "/var/lib/ironic/httproot">
        Options Indexes FollowSymLinks
        Require all granted
    </Directory>
    LogLevel warn
    AddDefaultCharset UTF-8
    EnableSendfile on
</VirtualHost>
```
Note that the listening port must match the port specified by the `http_url` option in the `[deploy]` section of `/etc/ironic/ironic.conf`.

Restart the httpd service:

```shell
systemctl restart httpd
```

Download or build the deploy ramdisk images

Deploying a bare metal node requires two sets of images in total: deploy ramdisk images and user images. The deploy ramdisk images run the ironic-python-agent (IPA) service, through which Ironic prepares the bare metal node. The user images are what is finally installed on the bare metal node for the user.

The ramdisk image can be built with ironic-python-agent-builder or disk-image-builder; users may also choose other tools. When using the native tools, the corresponding packages need to be installed. For detailed usage refer to the official documentation; the community also provides pre-built deploy images that can be downloaded.

The following describes the complete process of building the deploy image used by ironic with ironic-python-agent-builder.

Install ironic-python-agent-builder:

```shell
dnf install python3-ironic-python-agent-builder
# or
pip3 install ironic-python-agent-builder
dnf install qemu-img git
```

Build the image. Basic usage:

```
usage: ironic-python-agent-builder [-h] [-r RELEASE] [-o OUTPUT] [-e ELEMENT]
                                   [-b BRANCH] [-v] [--lzma]
                                   [--extra-args EXTRA_ARGS]
                                   [--elements-path ELEMENTS_PATH]
                                   distribution

positional arguments:
  distribution          Distribution to use

options:
  -h, --help            show this help message and exit
  -r RELEASE, --release RELEASE
                        Distribution release to use
  -o OUTPUT, --output OUTPUT
                        Output base file name
  -e ELEMENT, --element ELEMENT
                        Additional DIB element to use
  -b BRANCH, --branch BRANCH
                        If set, override the branch that is used for
                        ironic-python-agent and requirements
  -v, --verbose         Enable verbose logging in diskimage-builder
  --lzma                Use lzma compression for smaller images
  --extra-args EXTRA_ARGS
                        Extra arguments to pass to diskimage-builder
  --elements-path ELEMENTS_PATH
                        Path(s) to custom DIB elements separated by a colon
```

Example:

```shell
# -o specifies the name of the generated image
# "ubuntu" builds an Ubuntu-based image
ironic-python-agent-builder -o my-ubuntu-ipa ubuntu
```

The architecture of the built image can be specified via the `ARCH` environment variable (default `amd64`). For an arm build, add:

```shell
export ARCH=aarch64
```

Allow SSH login

Initialize the environment variables to set the username and password and enable passwordless sudo, then add the `-e` option to use the corresponding DIB elements. Build the image as follows:

```shell
export DIB_DEV_USER_USERNAME=ipa
export DIB_DEV_USER_PWDLESS_SUDO=yes
export DIB_DEV_USER_PASSWORD='123'
ironic-python-agent-builder -o my-ssh-ubuntu-ipa -e selinux-permissive -e devuser ubuntu
```

Specify the code repository

Initialize the corresponding environment variables, then build the image:

```shell
# Clone the code directly from gerrit
DIB_REPOLOCATION_ironic_python_agent=https://opendev.org/openstack/ironic-python-agent
DIB_REPOREF_ironic_python_agent=stable/2023.1

# Or specify a local repository and branch
DIB_REPOLOCATION_ironic_python_agent=/home/user/path/to/repo
DIB_REPOREF_ironic_python_agent=my-test-branch

ironic-python-agent-builder ubuntu
```

Reference: source-repositories.

Note

The PXE configuration file templates in upstream OpenStack do not support the arm64 architecture, so the upstream code has to be modified by the user:

In the Wallaby release, the community ironic still does not support UEFI PXE boot on arm64. This shows up as an incorrectly formatted generated grub.cfg file (usually located under /tftpboot/), which causes PXE boot to fail. The incorrectly generated configuration file (screenshot not reproduced here) uses the x86 UEFI PXE boot commands, whereas on arm the commands that load the vmlinux and ramdisk images are `linux` and `initrd` respectively. Users need to modify the code that generates grub.cfg themselves; a sketch of the expected shape of the entry is given below.
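As a rough, hedged illustration of the difference, a grub.cfg menu entry that boots on aarch64 would be expected to look roughly like the following. The paths, kernel parameters and node UUID are placeholders, and the entry Ironic actually generates depends on your PXE templates and configuration:

```
# Hypothetical aarch64 entry (placeholders): uses "linux"/"initrd",
# not the x86 UEFI commands "linuxefi"/"initrdefi".
set default=deploy
set timeout=5

menuentry "deploy" {
    linux  /tftpboot/<node-uuid>/deploy_kernel selinux=0 text ipa-insecure=1
    initrd /tftpboot/<node-uuid>/deploy_ramdisk
}
```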
TLS errors when ironic sends command-status query requests to IPA:

The current versions of IPA and ironic both enable TLS by default when sending requests to each other. Disable it as described in the official documentation:

1. Modify the ironic configuration file (`/etc/ironic/ironic.conf`), adding `ipa-insecure=1` to the following configuration:

```ini
[agent]
verify_ca = False

[pxe]
pxe_append_params = nofb nomodeset vga=normal coreos.autologin ipa-insecure=1
```

2. In the ramdisk image, add the IPA configuration file `/etc/ironic_python_agent/ironic_python_agent.conf` (the `/etc/ironic_python_agent` directory needs to be created first) and configure TLS as follows:

```ini
[DEFAULT]
enable_auto_tls = False
```

Set the permissions:

```shell
chown -R ipa.ipa /etc/ironic_python_agent/
```

3. In the ramdisk image, modify the service file of the IPA service to add the configuration file option. Edit `/usr/lib/systemd/system/ironic-python-agent.service`:

```ini
[Unit]
Description=Ironic Python Agent
After=network-online.target

[Service]
ExecStartPre=/sbin/modprobe vfat
ExecStart=/usr/local/bin/ironic-python-agent --config-file /etc/ironic_python_agent/ironic_python_agent.conf
Restart=always
RestartSec=30s

[Install]
WantedBy=multi-user.target
```
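The steps above install and configure the Ironic services but do not enroll any bare metal node. As a hedged sketch of what enrollment typically looks like with the `ipmi` hardware type enabled above, the following commands are illustrative only; the IPMI address and credentials, the MAC address, and the image UUIDs are placeholders for your environment:

```shell
source ~/.admin-openrc

# Enroll a node using the ipmi hardware type (placeholder values)
openstack baremetal node create --driver ipmi \
    --name bm-node-0 \
    --driver-info ipmi_address=192.168.1.100 \
    --driver-info ipmi_username=admin \
    --driver-info ipmi_password=IPMI_PASS \
    --driver-info deploy_kernel=<deploy-kernel-image-uuid> \
    --driver-info deploy_ramdisk=<deploy-ramdisk-image-uuid>

# Register the node's NIC and move the node into the "available" state
openstack baremetal port create <node-mac-address> --node <node-uuid>
openstack baremetal node manage <node-uuid>
openstack baremetal node provide <node-uuid>

# Check the node state
openstack baremetal node list
```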
### Trove

Trove is the Database service of OpenStack. If users rely on the database service provided by OpenStack, this component is recommended; otherwise it does not need to be installed.

Controller node

1. Create the database.

The Database service stores information in a database. Create a `trove` database that the `trove` user can access, replacing `TROVE_DBPASS` with a suitable password:

```sql
CREATE DATABASE trove CHARACTER SET utf8;
GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'localhost' IDENTIFIED BY 'TROVE_DBPASS';
GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'%' IDENTIFIED BY 'TROVE_DBPASS';
```

2. Create the service credentials and API endpoints.

Create the service credentials:

```shell
# Create the trove user
openstack user create --domain default --password-prompt trove
# Add the admin role
openstack role add --project service --user trove admin
# Create the database service
openstack service create --name trove --description "Database service" database
```

Create the API endpoints:

```shell
openstack endpoint create --region RegionOne database public http://controller:8779/v1.0/%\(tenant_id\)s
openstack endpoint create --region RegionOne database internal http://controller:8779/v1.0/%\(tenant_id\)s
openstack endpoint create --region RegionOne database admin http://controller:8779/v1.0/%\(tenant_id\)s
```

3. Install Trove:

```shell
dnf install openstack-trove python-troveclient
```

4. Modify the configuration files.

Edit `/etc/trove/trove.conf`:

```ini
[DEFAULT]
bind_host=192.168.0.2
log_dir = /var/log/trove
network_driver = trove.network.neutron.NeutronDriver
network_label_regex=.*
management_security_groups =
nova_keypair = trove-mgmt
default_datastore = mysql
taskmanager_manager = trove.taskmanager.manager.Manager
trove_api_workers = 5
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
reboot_time_out = 300
usage_timeout = 900
agent_call_high_timeout = 1200
use_syslog = False
debug = True

[database]
connection = mysql+pymysql://trove:TROVE_DBPASS@controller/trove

[keystone_authtoken]
auth_url = http://controller:5000/v3/
auth_type = password
project_domain_name = Default
project_name = service
user_domain_name = Default
username = trove
password = TROVE_PASS

[service_credentials]
auth_url = http://controller:5000/v3/
region_name = RegionOne
project_name = service
project_domain_name = Default
user_domain_name = Default
username = trove
password = TROVE_PASS

[mariadb]
tcp_ports = 3306,4444,4567,4568

[mysql]
tcp_ports = 3306

[postgresql]
tcp_ports = 5432
```

Explanation:

- In `[DEFAULT]`, `bind_host` is the IP of the Trove controller node.
- `transport_url` is the RabbitMQ connection information; replace `RABBIT_PASS` with the RabbitMQ password.
- `connection` in `[database]` is the database information created for Trove in MySQL above.
- In the Trove user credentials, replace `TROVE_PASS` with the actual password of the `trove` user.

Edit `/etc/trove/trove-guestagent.conf`:

```ini
[DEFAULT]
log_file = trove-guestagent.log
log_dir = /var/log/trove/
ignore_users = os_admin
control_exchange = trove
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
rpc_backend = rabbit
command_process_timeout = 60
use_syslog = False
debug = True

[service_credentials]
auth_url = http://controller:5000/v3/
region_name = RegionOne
project_name = service
project_domain_name = Default
user_domain_name = Default
username = trove
password = TROVE_PASS

[mysql]
docker_image = your-registry/your-repo/mysql
backup_docker_image = your-registry/your-repo/db-backup-mysql:1.1.0
```

Explanation: the guestagent is a separate Trove component that must be built into the virtual machine images Trove creates through Nova. After a database instance is created, the guestagent process starts and reports heartbeats to Trove via the message queue (RabbitMQ), so the RabbitMQ user and password must be configured here. `transport_url` is the RabbitMQ connection information; replace `RABBIT_PASS` with the RabbitMQ password, and replace `TROVE_PASS` with the actual password of the `trove` user. Since the Victoria release, Trove uses a single unified image to run different database types; the database service runs in a Docker container inside the guest virtual machine.

5. Synchronize the database:

```shell
su -s /bin/sh -c "trove-manage db_sync" trove
```

6. Finish the installation:

```shell
# Enable the services at boot
systemctl enable openstack-trove-api.service openstack-trove-taskmanager.service \
    openstack-trove-conductor.service
# Start the services
systemctl start openstack-trove-api.service openstack-trove-taskmanager.service \
    openstack-trove-conductor.service
```
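Before instances can be launched, a guest image containing the guestagent has to be uploaded to Glance and registered as a datastore version. The following is only a hedged sketch under the assumption that such an image already exists; the version number, image UUID and command arguments are placeholders and the exact `trove-manage` invocation may differ between releases, so check `trove-manage --help` in your environment:

```shell
# Register a "mysql" datastore and one version backed by a pre-built guest image
su -s /bin/sh -c "trove-manage datastore_update mysql ''" trove
su -s /bin/sh -c "trove-manage datastore_version_update mysql 5.7.29 mysql <glance-image-uuid> '' 1" trove

# Verify the registration
openstack datastore list
openstack datastore version list mysql
```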
### Swift

Swift provides an elastic, scalable, highly available distributed object storage service, suitable for storing large amounts of unstructured data.

Controller node

1. Create the service credentials and API endpoints.

Create the service credentials:

```shell
# Create the swift user
openstack user create --domain default --password-prompt swift
# Add the admin role
openstack role add --project service --user swift admin
# Create the object store service
openstack service create --name swift --description "OpenStack Object Storage" object-store
```

Create the API endpoints:

```shell
openstack endpoint create --region RegionOne object-store public http://controller:8080/v1/AUTH_%\(project_id\)s
openstack endpoint create --region RegionOne object-store internal http://controller:8080/v1/AUTH_%\(project_id\)s
openstack endpoint create --region RegionOne object-store admin http://controller:8080/v1
```

2. Install Swift:

```shell
dnf install openstack-swift-proxy python3-swiftclient python3-keystoneclient \
    python3-keystonemiddleware memcached
```

3. Configure the proxy-server. The Swift RPM package already ships a basically usable proxy-server.conf; only the IP and `SWIFT_PASS` need to be changed manually.

```ini
# vim /etc/swift/proxy-server.conf

[filter:authtoken]
paste.filter_factory = keystonemiddleware.auth_token:filter_factory
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = swift
password = SWIFT_PASS
delay_auth_decision = True
service_token_roles_required = True
```

Storage node

1. Install the supporting packages:
```shell
dnf install openstack-swift-account openstack-swift-container openstack-swift-object
dnf install xfsprogs rsync
```

2. Format the /dev/sdb and /dev/sdc devices as XFS:

```shell
mkfs.xfs /dev/sdb
mkfs.xfs /dev/sdc
```

3. Create the mount point directory structure:

```shell
mkdir -p /srv/node/sdb
mkdir -p /srv/node/sdc
```

4. Find the UUIDs of the new partitions:

```shell
blkid
```

5. Edit the /etc/fstab file and add the following to it:

```
UUID="" /srv/node/sdb xfs noatime 0 2
UUID="" /srv/node/sdc xfs noatime 0 2
```

6. Mount the devices:

```shell
mount /srv/node/sdb
mount /srv/node/sdc
```

Note: if disaster recovery is not required, only one device needs to be created in the steps above, and the rsync configuration below can be skipped.

7. (Optional) Create or edit the /etc/rsyncd.conf file to contain the following, replacing `MANAGEMENT_INTERFACE_IP_ADDRESS` with the IP address of the management network on the storage node:

```ini
[DEFAULT]
uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
address = MANAGEMENT_INTERFACE_IP_ADDRESS

[account]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/account.lock

[container]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/container.lock

[object]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/object.lock
```

Start the rsyncd service and configure it to start at boot:

```shell
systemctl enable rsyncd.service
systemctl start rsyncd.service
```

8. Configure the storage node. Edit the account-server.conf, container-server.conf and object-server.conf files in the /etc/swift directory, replacing `bind_ip` with the IP address of the management network on the storage node:

```ini
[DEFAULT]
bind_ip = 192.168.0.4
```

9. Ensure the correct ownership of the mount point directory structure:

```shell
chown -R swift:swift /srv/node
```

10. Create the recon directory and ensure it has the correct ownership:

```shell
mkdir -p /var/cache/swift
chown -R root:swift /var/cache/swift
chmod -R 775 /var/cache/swift
```

Create and distribute the rings on the controller node

1. Create the account ring.

Change to the /etc/swift directory:

```shell
cd /etc/swift
```

Create the base account.builder file:

```shell
swift-ring-builder account.builder create 10 1 1
```

Add each storage node to the ring:

```shell
swift-ring-builder account.builder add --region 1 --zone 1 \
  --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS \
  --port 6202 --device DEVICE_NAME \
  --weight 100
```

Replace `STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS` with the IP address of the management network on the storage node, and `DEVICE_NAME` with the name of a storage device on that storage node.

Note: repeat this command for every storage device on every storage node.

Verify the account ring contents:

```shell
swift-ring-builder account.builder
```

Rebalance the account ring:

```shell
swift-ring-builder account.builder rebalance
```
2. Create the container ring.

Change to the /etc/swift directory and create the base container.builder file:

```shell
swift-ring-builder container.builder create 10 1 1
```

Add each storage node to the ring:

```shell
swift-ring-builder container.builder add --region 1 --zone 1 \
  --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6201 --device DEVICE_NAME \
  --weight 100
```

Replace `STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS` with the IP address of the management network on the storage node, and `DEVICE_NAME` with the name of a storage device on that storage node.

Note: repeat this command for every storage device on every storage node.

Verify the container ring contents:

```shell
swift-ring-builder container.builder
```

Rebalance the container ring:

```shell
swift-ring-builder container.builder rebalance
```

3. Create the object ring.

Change to the /etc/swift directory and create the base object.builder file:

```shell
swift-ring-builder object.builder create 10 1 1
```

Add each storage node to the ring:

```shell
swift-ring-builder object.builder add --region 1 --zone 1 \
  --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS \
  --port 6200 --device DEVICE_NAME \
  --weight 100
```

Replace `STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS` with the IP address of the management network on the storage node, and `DEVICE_NAME` with the name of a storage device on that storage node.

Note: repeat this command for every storage device on every storage node.

Verify the object ring contents:

```shell
swift-ring-builder object.builder
```

Rebalance the object ring:

```shell
swift-ring-builder object.builder rebalance
```

4. Distribute the ring configuration files.

Copy the account.ring.gz, container.ring.gz and object.ring.gz files to the /etc/swift directory on every storage node and on any other node running the proxy service.

5. Edit the /etc/swift/swift.conf configuration file:

```ini
[swift-hash]
swift_hash_path_suffix = test-hash
swift_hash_path_prefix = test-hash

[storage-policy:0]
name = Policy-0
default = yes
```

Replace `test-hash` with unique values.

Copy the swift.conf file to the /etc/swift directory on every storage node and on any other node running the proxy service.

On all nodes, ensure the correct ownership of the configuration directory:

```shell
chown -R root:swift /etc/swift
```

Finish the installation

On the controller node and on any other node running the proxy service, start the Object Storage proxy service and its dependencies and configure them to start at boot:

```shell
systemctl enable openstack-swift-proxy.service memcached.service
systemctl start openstack-swift-proxy.service memcached.service
```

On the storage nodes, start the object storage services and configure them to start at boot:
```shell
systemctl enable openstack-swift-account.service \
    openstack-swift-account-auditor.service \
    openstack-swift-account-reaper.service \
    openstack-swift-account-replicator.service \
    openstack-swift-container.service \
    openstack-swift-container-auditor.service \
    openstack-swift-container-replicator.service \
    openstack-swift-container-updater.service \
    openstack-swift-object.service \
    openstack-swift-object-auditor.service \
    openstack-swift-object-replicator.service \
    openstack-swift-object-updater.service

systemctl start openstack-swift-account.service \
    openstack-swift-account-auditor.service \
    openstack-swift-account-reaper.service \
    openstack-swift-account-replicator.service \
    openstack-swift-container.service \
    openstack-swift-container-auditor.service \
    openstack-swift-container-replicator.service \
    openstack-swift-container-updater.service \
    openstack-swift-object.service \
    openstack-swift-object-auditor.service \
    openstack-swift-object-replicator.service \
    openstack-swift-object-updater.service
```
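The guide does not include a functional check for Swift. As a minimal, hedged verification sketch (the container and file names are arbitrary placeholders), object operations can be exercised from the controller once the proxy and storage services are running:

```shell
source ~/.admin-openrc

# Upload and retrieve a small test object through the proxy
cd /tmp && echo "hello swift" > hello.txt
openstack container create demo-container
openstack object create demo-container hello.txt
openstack object list demo-container
openstack object save demo-container hello.txt --file hello.out
```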
### Cyborg

Cyborg provides acceleration device support for OpenStack, covering GPU, FPGA, ASIC, NP, SoCs, NVMe/NOF SSDs, ODP, DPDK/SPDK and so on.

Controller node

1. Initialize the database:

```shell
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE cyborg;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'localhost' IDENTIFIED BY 'CYBORG_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'%' IDENTIFIED BY 'CYBORG_DBPASS';
MariaDB [(none)]> exit;
```

2. Create the user and service, and remember the password entered when creating the cyborg user; it is used for `CYBORG_PASS`:

```shell
source ~/.admin-openrc
openstack user create --domain default --password-prompt cyborg
openstack role add --project service --user cyborg admin
openstack service create --name cyborg --description "Acceleration Service" accelerator
```

3. Create the API endpoints (the Cyborg API service is deployed with uwsgi):

```shell
openstack endpoint create --region RegionOne accelerator public http://controller/accelerator/v2
openstack endpoint create --region RegionOne accelerator internal http://controller/accelerator/v2
openstack endpoint create --region RegionOne accelerator admin http://controller/accelerator/v2
```

4. Install Cyborg:

```shell
dnf install openstack-cyborg
```

5. Configure Cyborg. Edit `/etc/cyborg/cyborg.conf`:

```ini
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
use_syslog = False
state_path = /var/lib/cyborg
debug = True

[api]
host_ip = 0.0.0.0

[database]
connection = mysql+pymysql://cyborg:CYBORG_DBPASS@controller/cyborg

[service_catalog]
cafile = /opt/stack/data/ca-bundle.pem
project_domain_id = default
user_domain_id = default
project_name = service
password = CYBORG_PASS
username = cyborg
auth_url = http://controller:5000/v3/
auth_type = password

[placement]
project_domain_name = Default
project_name = service
user_domain_name = Default
password = PLACEMENT_PASS
username = placement
auth_url = http://controller:5000/v3/
auth_type = password
auth_section = keystone_authtoken

[nova]
project_domain_name = Default
project_name = service
user_domain_name = Default
password = NOVA_PASS
username = nova
auth_url = http://controller:5000/v3/
auth_type = password
auth_section = keystone_authtoken

[keystone_authtoken]
memcached_servers = localhost:11211
signing_dir = /var/cache/cyborg/api
cafile = /opt/stack/data/ca-bundle.pem
project_domain_name = Default
project_name = service
user_domain_name = Default
password = CYBORG_PASS
username = cyborg
auth_url = http://controller:5000/v3/
auth_type = password
```

6. Synchronize the database tables:

```shell
cyborg-dbsync --config-file /etc/cyborg/cyborg.conf upgrade
```

7. Start the Cyborg services:

```shell
systemctl enable openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent
systemctl start openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent
```

### Aodh

Aodh can create alarms based on the monitoring data collected by Ceilometer or Gnocchi and define their trigger rules.

Controller node

1. Create the database:

```sql
CREATE DATABASE aodh;
GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'localhost' IDENTIFIED BY 'AODH_DBPASS';
GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'%' IDENTIFIED BY 'AODH_DBPASS';
```

2. Create the service credentials and API endpoints.

Create the service credentials:

```shell
openstack user create --domain default --password-prompt aodh
openstack role add --project service --user aodh admin
openstack service create --name aodh --description "Telemetry" alarming
```

Create the API endpoints:

```shell
openstack endpoint create --region RegionOne alarming public http://controller:8042
openstack endpoint create --region RegionOne alarming internal http://controller:8042
openstack endpoint create --region RegionOne alarming admin http://controller:8042
```

3. Install Aodh:

```shell
dnf install openstack-aodh-api openstack-aodh-evaluator \
    openstack-aodh-notifier openstack-aodh-listener \
    openstack-aodh-expirer python3-aodhclient
```

4. Modify the configuration file:

```ini
# vim /etc/aodh/aodh.conf

[database]
connection = mysql+pymysql://aodh:AODH_DBPASS@controller/aodh

[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = aodh
password = AODH_PASS

[service_credentials]
auth_type = password
auth_url = http://controller:5000/v3
project_domain_id = default
user_domain_id = default
project_name = service
username = aodh
password = AODH_PASS
interface = internalURL
region_name = RegionOne
```

5. Synchronize the database:

```shell
aodh-dbsync
```

6. Finish the installation:

```shell
# Enable the services at boot
systemctl enable openstack-aodh-api.service openstack-aodh-evaluator.service \
    openstack-aodh-notifier.service openstack-aodh-listener.service
# Start the services
systemctl start openstack-aodh-api.service openstack-aodh-evaluator.service \
    openstack-aodh-notifier.service openstack-aodh-listener.service
```
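Aodh ships no alarms by default. As a hedged illustration of what an alarm definition looks like once Gnocchi and Ceilometer (described below) are also deployed, the following creates a CPU threshold alarm on a single instance; the metric name, threshold and resource ID are placeholders, and the exact options available depend on the aodhclient version in use:

```shell
source ~/.admin-openrc

# Alarm when the mean cpu_util of one instance exceeds 80% (placeholder values)
openstack alarm create \
    --name cpu-high-demo \
    --type gnocchi_resources_threshold \
    --metric cpu_util --threshold 80 --aggregation-method mean \
    --comparison-operator gt --granularity 300 \
    --resource-type instance --resource-id <instance-uuid>

openstack alarm list
```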
### Gnocchi

Gnocchi is an open-source time series database that can be integrated with Ceilometer.

Controller node

1. Create the database:

```sql
CREATE DATABASE gnocchi;
GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'localhost' IDENTIFIED BY 'GNOCCHI_DBPASS';
GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'%' IDENTIFIED BY 'GNOCCHI_DBPASS';
```

2. Create the service credentials and API endpoints.

Create the service credentials:

```shell
openstack user create --domain default --password-prompt gnocchi
openstack role add --project service --user gnocchi admin
openstack service create --name gnocchi --description "Metric Service" metric
```

Create the API endpoints:

```shell
openstack endpoint create --region RegionOne metric public http://controller:8041
openstack endpoint create --region RegionOne metric internal http://controller:8041
openstack endpoint create --region RegionOne metric admin http://controller:8041
```

3. Install Gnocchi:

```shell
dnf install openstack-gnocchi-api openstack-gnocchi-metricd python3-gnocchiclient
```

4. Modify the configuration file:

```ini
# vim /etc/gnocchi/gnocchi.conf

[api]
auth_mode = keystone
port = 8041
uwsgi_mode = http-socket

[keystone_authtoken]
auth_type = password
auth_url = http://controller:5000/v3
project_domain_name = Default
user_domain_name = Default
project_name = service
username = gnocchi
password = GNOCCHI_PASS
interface = internalURL
region_name = RegionOne

[indexer]
url = mysql+pymysql://gnocchi:GNOCCHI_DBPASS@controller/gnocchi

[storage]
# coordination_url is not required but specifying one will improve
# performance with better workload division across workers.
# coordination_url = redis://controller:6379
file_basepath = /var/lib/gnocchi
driver = file
```

5. Synchronize the database:

```shell
gnocchi-upgrade
```

6. Finish the installation:

```shell
# Enable the services at boot
systemctl enable openstack-gnocchi-api.service openstack-gnocchi-metricd.service
# Start the services
systemctl start openstack-gnocchi-api.service openstack-gnocchi-metricd.service
```

### Ceilometer

Ceilometer is the service responsible for data collection in OpenStack.

Controller node

1. Create the service credentials:

```shell
openstack user create --domain default --password-prompt ceilometer
openstack role add --project service --user ceilometer admin
openstack service create --name ceilometer --description "Telemetry" metering
```

2. Install the Ceilometer packages:

```shell
dnf install openstack-ceilometer-notification openstack-ceilometer-central
```

3. Edit the configuration file /etc/ceilometer/pipeline.yaml:

```yaml
publishers:
    # set address of Gnocchi
    # + filter out Gnocchi-related activity meters (Swift driver)
    # + set default archive policy
    - gnocchi://?filter_project=service&archive_policy=low
```

4. Edit the configuration file /etc/ceilometer/ceilometer.conf:

```ini
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller

[service_credentials]
auth_type = password
auth_url = http://controller:5000/v3
project_domain_id = default
user_domain_id = default
project_name = service
username = ceilometer
password = CEILOMETER_PASS
interface = internalURL
region_name = RegionOne
```

5. Synchronize the database:

```shell
ceilometer-upgrade
```

6. Finish the Ceilometer installation on the controller node:

```shell
# Enable the services at boot
systemctl enable openstack-ceilometer-notification.service openstack-ceilometer-central.service
# Start the services
systemctl start openstack-ceilometer-notification.service openstack-ceilometer-central.service
```
Compute node

1. Install the Ceilometer packages:

```shell
dnf install openstack-ceilometer-compute
dnf install openstack-ceilometer-ipmi   # optional
```

2. Edit the configuration file /etc/ceilometer/ceilometer.conf:

```ini
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller

[service_credentials]
auth_url = http://controller:5000
project_domain_id = default
user_domain_id = default
auth_type = password
username = ceilometer
project_name = service
password = CEILOMETER_PASS
interface = internalURL
region_name = RegionOne
```

3. Edit the configuration file /etc/nova/nova.conf:

```ini
[DEFAULT]
instance_usage_audit = True
instance_usage_audit_period = hour

[notifications]
notify_on_state_change = vm_and_task_state

[oslo_messaging_notifications]
driver = messagingv2
```

4. Finish the installation:

```shell
systemctl enable openstack-ceilometer-compute.service
systemctl start openstack-ceilometer-compute.service
systemctl enable openstack-ceilometer-ipmi.service   # optional
systemctl start openstack-ceilometer-ipmi.service    # optional
# Restart the nova-compute service
systemctl restart openstack-nova-compute.service
```

### Heat

Heat is the OpenStack orchestration service, also known as the Orchestration Service; it orchestrates composite cloud applications based on declarative templates. The Heat services are generally installed on the controller node.

Controller node

1. Create the heat database and grant the correct access rights to it, replacing `HEAT_DBPASS` with a suitable password:

```shell
mysql -u root -p

MariaDB [(none)]> CREATE DATABASE heat;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'localhost' IDENTIFIED BY 'HEAT_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'%' IDENTIFIED BY 'HEAT_DBPASS';
MariaDB [(none)]> exit;
```

2. Create the service credentials: create the heat user and add the admin role to it:

```shell
source ~/.admin-openrc
openstack user create --domain default --password-prompt heat
openstack role add --project service --user heat admin
```

3. Create the heat and heat-cfn services and their API endpoints:

```shell
openstack service create --name heat --description "Orchestration" orchestration
openstack service create --name heat-cfn --description "Orchestration" cloudformation
openstack endpoint create --region RegionOne orchestration public http://controller:8004/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne orchestration internal http://controller:8004/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne orchestration admin http://controller:8004/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne cloudformation public http://controller:8000/v1
openstack endpoint create --region RegionOne cloudformation internal http://controller:8000/v1
openstack endpoint create --region RegionOne cloudformation admin http://controller:8000/v1
```

4. Create the additional information required for stack management.

Create the heat domain:

```shell
openstack domain create --description "Stack projects and users" heat
```

Create the heat_domain_admin user in the heat domain and note the password you enter; it is used for `HEAT_DOMAIN_PASS` below:

```shell
openstack user create --domain heat --password-prompt heat_domain_admin
```

Add the admin role to the heat_domain_admin user:

```shell
openstack role add --domain heat --user-domain heat --user heat_domain_admin admin
```

Create the heat_stack_owner role:

```shell
openstack role create heat_stack_owner
```

Create the heat_stack_user role:

```shell
openstack role create heat_stack_user
```

5. Install the packages:

```shell
dnf install openstack-heat-api openstack-heat-api-cfn openstack-heat-engine
```

6. Modify the configuration file /etc/heat/heat.conf:

```ini
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
heat_metadata_server_url = http://controller:8000
heat_waitcondition_server_url = http://controller:8000/v1/waitcondition
stack_domain_admin = heat_domain_admin
stack_domain_admin_password = HEAT_DOMAIN_PASS
stack_user_domain_name = heat

[database]
connection = mysql+pymysql://heat:HEAT_DBPASS@controller/heat

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = heat
password = HEAT_PASS

[trustee]
auth_type = password
auth_url = http://controller:5000
username = heat
password = HEAT_PASS
user_domain_name = default

[clients_keystone]
auth_uri = http://controller:5000
```

7. Initialize the heat database tables:

```shell
su -s /bin/sh -c "heat-manage db_sync" heat
```

8. Start the services:

```shell
systemctl enable openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service
systemctl start openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service
```
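To see Heat working end to end, a stack can be created from a small HOT template. The following is a hedged sketch only: the image, flavor and network names are placeholders that must already exist in your environment, and a non-admin user launching stacks would normally also need the heat_stack_owner role created above.

```yaml
# demo-stack.yaml, a minimal HOT template (all parameter values are placeholders)
heat_template_version: 2018-08-31
description: Single-server demo stack

parameters:
  image:
    type: string
  flavor:
    type: string
    default: m1.small
  network:
    type: string

resources:
  demo_server:
    type: OS::Nova::Server
    properties:
      image: { get_param: image }
      flavor: { get_param: flavor }
      networks:
        - network: { get_param: network }

outputs:
  server_ip:
    value: { get_attr: [demo_server, first_address] }
```

```shell
# Launch and inspect the stack (names and IDs are placeholders)
openstack stack create -t demo-stack.yaml \
    --parameter image=cirros --parameter network=demo-net demo-stack
openstack stack list
openstack stack output show demo-stack server_ip
```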
### Tempest

Tempest is the integration test service of OpenStack. It is recommended if users need to run comprehensive automated functional tests against an installed OpenStack environment; otherwise it does not need to be installed.

Controller node

1. Install Tempest:

```shell
dnf install openstack-tempest
```

2. Initialize a workspace:

```shell
tempest init mytest
```

3. Modify the configuration file:

```shell
cd mytest
vi etc/tempest.conf
```

tempest.conf must be filled in with information about the current OpenStack environment; for the details refer to the official example.

4. Run the tests:

```shell
tempest run
```

5. Install tempest extensions (optional).

The individual OpenStack services also provide their own tempest test packages; users can install these packages to enrich the tempest test suite. In Antelope, extension tests are provided for Cinder, Glance, Keystone, Ironic and Trove, which can be installed with the following command:

```shell
dnf install python3-cinder-tempest-plugin python3-glance-tempest-plugin python3-ironic-tempest-plugin python3-keystone-tempest-plugin python3-trove-tempest-plugin
```
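The official example referenced above covers every option. As a rough, hedged sketch of the minimum that `etc/tempest.conf` usually needs for an environment like the one in this guide, the following fragment uses placeholder credentials and IDs that are not defined elsewhere in this document and must be replaced with real values:

```ini
# Hypothetical minimal tempest.conf fragment (placeholder credentials and IDs)
[auth]
admin_username = admin
admin_password = ADMIN_PASS
admin_project_name = admin
admin_domain_name = Default

[identity]
uri_v3 = http://controller:5000/v3
auth_version = v3

[compute]
image_ref = <glance-image-uuid>
flavor_ref = <nova-flavor-id>

[network]
public_network_id = <external-network-uuid>
```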
## Deploying with the OpenStack SIG development tool oos

oos (openEuler OpenStack SIG) is the command-line tool provided by the OpenStack SIG. Its `oos env` family of commands provides ansible scripts for one-click deployment of OpenStack (all in one, or a three-node cluster), which users can use to quickly deploy an OpenStack environment based on openEuler RPM packages. The oos tool supports two deployment modes: through a cloud provider (currently only the Huawei Cloud provider is supported) and by managing existing hosts. The following uses deploying an all-in-one OpenStack environment on Huawei Cloud as an example to explain how to use the oos tool.

1. Install the oos tool:

```shell
yum install openstack-sig-tool
```

2. Configure the Huawei Cloud provider information.

Open the `/usr/local/etc/oos/oos.conf` file and change the configuration to the Huawei Cloud resources you own. AK/SK are the user's Huawei Cloud access keys; the other settings can keep their defaults (the Singapore region is used by default). The corresponding resources need to be created on the cloud in advance, including:

- a security group, named `oos` by default
- an openEuler image whose name follows the format openEuler-%(release)s-%(arch)s, for example `openEuler-25.03-arm64`
- a VPC named `oos_vpc`
- two subnets under that VPC, named `oos_subnet1` and `oos_subnet2`

```ini
[huaweicloud]
ak =
sk =
region = ap-southeast-3
root_volume_size = 100
data_volume_size = 100
security_group_name = oos
image_format = openEuler-%%(release)s-%%(arch)s
vpc_name = oos_vpc
subnet1_name = oos_subnet1
subnet2_name = oos_subnet2
```

3. Configure the OpenStack environment information.

Open the `/usr/local/etc/oos/oos.conf` file and adjust the configuration to the current machine environment and requirements. The content is as follows:

```ini
[environment]
mysql_root_password = root
mysql_project_password = root
rabbitmq_password = root
project_identity_password = root
enabled_service = keystone,neutron,cinder,placement,nova,glance,horizon,aodh,ceilometer,cyborg,gnocchi,kolla,heat,swift,trove,tempest
neutron_provider_interface_name = br-ex
default_ext_subnet_range = 10.100.100.0/24
default_ext_subnet_gateway = 10.100.100.1
neutron_dataplane_interface_name = eth1
cinder_block_device = vdb
swift_storage_devices = vdc
swift_hash_path_suffix = ash
swift_hash_path_prefix = has
glance_api_workers = 2
cinder_api_workers = 2
nova_api_workers = 2
nova_metadata_api_workers = 2
nova_conductor_workers = 2
nova_scheduler_workers = 2
neutron_api_workers = 2
horizon_allowed_host = *
kolla_openeuler_plugin = false
```

Key configuration items:

| Item | Description |
|:--|:--|
| enabled_service | list of services to install; trim it as needed |
| neutron_provider_interface_name | Neutron L3 bridge name |
| default_ext_subnet_range | Neutron private network IP range |
| default_ext_subnet_gateway | Neutron private network gateway |
| neutron_dataplane_interface_name | NIC used by Neutron; a new, dedicated NIC is recommended to avoid conflicts with the existing NIC and losing the connection to the all-in-one host |
| cinder_block_device | block device name used by Cinder |
| swift_storage_devices | block device names used by Swift |
| kolla_openeuler_plugin | whether to enable the kolla plugin; when set to True, kolla supports deploying openEuler containers (only supported on openEuler LTS) |

4. Create an openEuler 25.03 x86_64 virtual machine on Huawei Cloud, used for deploying the all-in-one OpenStack:

```shell
# sshpass is used during `oos env create` to set up passwordless access to the target virtual machine
dnf install sshpass
oos env create -r 25.03 -f small -a x86 -n test-oos all_in_one
```

The detailed parameters can be viewed with the `oos env create --help` command.
        oos env setup test-oos -r antelope

    Run `oos env setup --help` to see all parameters.

6. Initialize the Tempest environment (optional). If you want to run Tempest tests against this environment, run `oos env init`, which automatically creates the OpenStack resources Tempest needs:

        oos env init test-oos

7. Run the Tempest tests. You can let oos run them automatically:

        oos env test test-oos

    or log in to the target node manually, enter the `mytest` directory under the root directory, and run `tempest run`.

If you deploy OpenStack by taking over an existing host instead, the overall flow is the same as the Huawei Cloud case above: steps 1, 3, 5, 6 are unchanged, step 2 (the Huawei Cloud provider configuration) is skipped, and step 4 is replaced by managing the host. The managed machine must have:

- at least one NIC dedicated to oos, with a name matching the `neutron_dataplane_interface_name` setting
- at least one disk dedicated to oos, with a name matching the `cinder_block_device` setting
- one additional disk if the swift service is to be deployed, with a name matching the `swift_storage_devices` setting

Then take over the host:

    # sshpass is used by `oos env create` to set up password-free access to the target host
    dnf install sshpass
    oos env manage -r 25.03 -i TARGET_MACHINE_IP -p TARGET_MACHINE_PASSWD -n test-oos

Replace TARGET_MACHINE_IP with the IP of the target machine and TARGET_MACHINE_PASSWD with its password. Run `oos env manage --help` to see all parameters.

## OpenStack Antelope Deployment Guide

- RPM-based deployment
  - Environment preparation
  - Clock synchronization
  - Installing the database
  - Installing the message queue
  - Installing the cache service
  - Deploying the services: Keystone, Glance, Placement, Nova, Neutron, Cinder, Horizon, Ironic, Trove, Swift, Cyborg, Aodh, Gnocchi, Ceilometer, Heat, Tempest
- Deployment with the OpenStack SIG tool oos

This document is the openEuler OpenStack SIG's OpenStack deployment guide for openEuler 25.03; its content is provided by SIG contributors. If you have any questions or find any problems while reading it, please contact the SIG maintainers or submit an issue directly.

### Conventions

This section describes the common conventions used throughout the document.
| Name | Definition |
|---|---|
| RABBIT_PASS | Password for rabbitmq, set by the user; used in the configuration of every OpenStack service |
| CINDER_PASS | Password of the cinder keystone user; used in the cinder configuration |
| CINDER_DBPASS | Password of the cinder database; used in the cinder configuration |
| KEYSTONE_DBPASS | Password of the keystone database; used in the keystone configuration |
| GLANCE_PASS | Password of the glance keystone user; used in the glance configuration |
| GLANCE_DBPASS | Password of the glance database; used in the glance configuration |
| HEAT_PASS | Password of the heat user registered in keystone; used in the heat configuration |
| HEAT_DBPASS | Password of the heat database; used in the heat configuration |
| CYBORG_PASS | Password of the cyborg user registered in keystone; used in the cyborg configuration |
| CYBORG_DBPASS | Password of the cyborg database; used in the cyborg configuration |
| NEUTRON_PASS | Password of the neutron user registered in keystone; used in the neutron configuration |
| NEUTRON_DBPASS | Password of the neutron database; used in the neutron configuration |
| PROVIDER_INTERFACE_NAME | Name of the physical network interface; used in the neutron configuration |
| OVERLAY_INTERFACE_IP_ADDRESS | Management IP address of the controller node; used in the neutron configuration |
| METADATA_SECRET | Secret of the metadata proxy; used in the nova and neutron configurations |
| PLACEMENT_DBPASS | Password of the placement database; used in the placement configuration |
| PLACEMENT_PASS | Password of the placement user registered in keystone; used in the placement configuration |
| NOVA_DBPASS | Password of the nova databases; used in the nova configuration |
| NOVA_PASS | Password of the nova user registered in keystone; used in the nova, cyborg, neutron and other configurations |
| IRONIC_DBPASS | Password of the ironic database; used in the ironic configuration |
| IRONIC_PASS | Password of the ironic user registered in keystone; used in the ironic configuration |
| IRONIC_INSPECTOR_DBPASS | Password of the ironic-inspector database; used in the ironic-inspector configuration |
| IRONIC_INSPECTOR_PASS | Password of the ironic-inspector user registered in keystone; used in the ironic-inspector configuration |

The OpenStack SIG provides several ways to deploy OpenStack on openEuler to suit different user scenarios; choose the one that fits your needs.
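All of the placeholders above must be replaced with real values before deployment. Purely as a convenience (not part of the original guide; `openssl` is assumed to be available), the following sketch generates one random value per placeholder so they can be pasted into the configuration files:

```bash
# Hedged helper sketch: print a random password for each placeholder used in this guide.
for name in RABBIT_PASS CINDER_PASS CINDER_DBPASS KEYSTONE_DBPASS GLANCE_PASS GLANCE_DBPASS \
            HEAT_PASS HEAT_DBPASS NEUTRON_PASS NEUTRON_DBPASS PLACEMENT_PASS PLACEMENT_DBPASS \
            NOVA_PASS NOVA_DBPASS IRONIC_PASS IRONIC_DBPASS METADATA_SECRET ADMIN_PASS; do
    echo "${name}=$(openssl rand -hex 12)"
done
```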
\u90e8\u7f72\u6307\u5357"},{"location":"install/openEuler-25.03/OpenStack-antelope/#rpm","text":"","title":"\u57fa\u4e8eRPM\u90e8\u7f72"},{"location":"install/openEuler-25.03/OpenStack-antelope/#_1","text":"\u672c\u6587\u6863\u57fa\u4e8eOpenStack\u7ecf\u5178\u7684\u4e09\u8282\u70b9\u73af\u5883\u8fdb\u884c\u90e8\u7f72\uff0c\u4e09\u4e2a\u8282\u70b9\u5206\u522b\u662f\u63a7\u5236\u8282\u70b9(Controller)\u3001\u8ba1\u7b97\u8282\u70b9(Compute)\u3001\u5b58\u50a8\u8282\u70b9(Storage)\uff0c\u5176\u4e2d\u5b58\u50a8\u8282\u70b9\u4e00\u822c\u53ea\u90e8\u7f72\u5b58\u50a8\u670d\u52a1\uff0c\u5728\u8d44\u6e90\u6709\u9650\u7684\u60c5\u51b5\u4e0b\uff0c\u53ef\u4ee5\u4e0d\u5355\u72ec\u90e8\u7f72\u8be5\u8282\u70b9\uff0c\u628a\u5b58\u50a8\u8282\u70b9\u4e0a\u7684\u670d\u52a1\u90e8\u7f72\u5230\u8ba1\u7b97\u8282\u70b9\u5373\u53ef\u3002 \u9996\u5148\u51c6\u5907\u4e09\u4e2a|openEuler 25.03\u73af\u5883\uff0c\u6839\u636e\u60a8\u7684\u73af\u5883\uff0c\u4e0b\u8f7d\u5bf9\u5e94\u7684\u955c\u50cf\u5e76\u5b89\u88c5\u5373\u53ef\uff1a[ISO\u955c\u50cf] https://repo.openeuler.org/openEuler-24.03-LTS-SP1/ISO/ \u3001[qcow2\u955c\u50cf] https://repo.openeuler.org/openEuler-24.03-LTS-SP1/virtual_machine_img/ \u3002 \u4e0b\u9762\u7684\u5b89\u88c5\u6309\u7167\u5982\u4e0b\u62d3\u6251\u8fdb\u884c\uff1a controller\uff1a192.168.0.2 compute\uff1a 192.168.0.3 storage\uff1a 192.168.0.4 \u5982\u679c\u60a8\u7684\u73af\u5883IP\u4e0d\u540c\uff0c\u8bf7\u6309\u7167\u60a8\u7684\u73af\u5883IP\u4fee\u6539\u76f8\u5e94\u7684\u914d\u7f6e\u6587\u4ef6\u3002 \u672c\u6587\u6863\u7684\u4e09\u8282\u70b9\u670d\u52a1\u62d3\u6251\u5982\u4e0b\u56fe\u6240\u793a(\u53ea\u5305\u542bKeystone\u3001Glance\u3001Nova\u3001Cinder\u3001Neutron\u8fd9\u51e0\u4e2a\u6838\u5fc3\u670d\u52a1\uff0c\u5176\u4ed6\u670d\u52a1\u8bf7\u53c2\u8003\u5177\u4f53\u90e8\u7f72\u7ae0\u8282)\uff1a \u5728\u6b63\u5f0f\u90e8\u7f72\u4e4b\u524d\uff0c\u9700\u8981\u5bf9\u6bcf\u4e2a\u8282\u70b9\u505a\u5982\u4e0b\u914d\u7f6e\u548c\u68c0\u67e5\uff1a \u914d\u7f6e |openEuler 25.03 \u5b98\u65b9 yum \u6e90\uff0c\u9700\u8981\u542f\u7528 EPOL \u8f6f\u4ef6\u4ed3\u4ee5\u652f\u6301 OpenStack yum update yum install openstack-release-antelope yum clean all && yum makecache \u6ce8\u610f \uff1a\u5982\u679c\u4f60\u7684\u73af\u5883\u7684YUM\u6e90\u6ca1\u6709\u542f\u7528EPOL\uff0c\u9700\u8981\u540c\u65f6\u914d\u7f6eEPOL\uff0c\u786e\u4fddEPOL\u5df2\u914d\u7f6e\uff0c\u5982\u4e0b\u6240\u793a\u3002 vi /etc/yum.repos.d/openEuler.repo [EPOL] name=EPOL baseurl=http://repo.openeuler.org/openEuler-24.03-LTS-SP1/EPOL/main/$basearch/ enabled=1 gpgcheck=1 gpgkey=http://repo.openeuler.org/openEuler-24.03-LTS-SP1/OS/$basearch/RPM-GPG-KEY-openEuler EOF \u4fee\u6539\u4e3b\u673a\u540d\u4ee5\u53ca\u6620\u5c04 \u6bcf\u4e2a\u8282\u70b9\u5206\u522b\u4fee\u6539\u4e3b\u673a\u540d\uff0c\u4ee5controller\u4e3a\u4f8b\uff1a hostnamectl set-hostname controller vi /etc/hostname \u5185\u5bb9\u4fee\u6539\u4e3acontroller \u7136\u540e\u4fee\u6539\u6bcf\u4e2a\u8282\u70b9\u7684 /etc/hosts \u6587\u4ef6\uff0c\u65b0\u589e\u5982\u4e0b\u5185\u5bb9: 192.168.0.2 controller 192.168.0.3 compute 192.168.0.4 storage","title":"\u73af\u5883\u51c6\u5907"},{"location":"install/openEuler-25.03/OpenStack-antelope/#_2","text":"\u96c6\u7fa4\u73af\u5883\u65f6\u523b\u8981\u6c42\u6bcf\u4e2a\u8282\u70b9\u7684\u65f6\u95f4\u4e00\u81f4\uff0c\u4e00\u822c\u7531\u65f6\u949f\u540c\u6b65\u8f6f\u4ef6\u4fdd\u8bc1\u3002\u672c\u6587\u4f7f\u7528 chrony \u8f6f\u4ef6\u3002\u6b65\u9aa4\u5982\u4e0b\uff1a Controller\u8282\u70b9 \uff1a \u5b89\u88c5\u670d\u52a1 dnf install chrony \u4fee\u6539 
### Clock synchronization

A cluster requires the clocks of all nodes to stay consistent, which is normally guaranteed by a time-synchronization service. This document uses chrony. The steps are as follows.

**Controller node**:

1. Install the service:

        dnf install chrony

2. Edit the /etc/chrony.conf configuration file and add the following line:

        # IP range that is allowed to synchronize its clock from this node
        allow 192.168.0.0/24

3. Restart the service:

        systemctl restart chronyd

**Other nodes**:

1. Install the service:

        dnf install chrony

2. Edit the /etc/chrony.conf configuration file and add the following line:

        # NTP_SERVER is the controller IP, i.e. the machine to take the time from.
        # Use 192.168.0.2 here, or the controller name configured in /etc/hosts.
        server NTP_SERVER iburst

    At the same time, comment out the `pool pool.ntp.org iburst` line so that the node does not synchronize with public NTP servers.

3. Restart the service:

        systemctl restart chronyd

After the configuration is done, verify the result by running `chronyc sources` on the non-controller nodes. Output similar to the following shows that the node synchronizes its clock from the controller:

    MS Name/IP address         Stratum Poll Reach LastRx Last sample
    ===============================================================================
    ^* 192.168.0.2                   4   6     7     0  -1406ns[  +55us] +/-   16ms

### Installing the database

The database is installed on the controller node; MariaDB is recommended.

1. Install the packages:

        dnf install mysql-config mariadb mariadb-server python3-PyMySQL

2. Create the configuration file /etc/my.cnf.d/openstack.cnf with the following content:

        [mysqld]
        bind-address = 192.168.0.2
        default-storage-engine = innodb
        innodb_file_per_table = on
        max_connections = 4096
        collation-server = utf8_general_ci
        character-set-server = utf8

3. Start the server:

        systemctl start mariadb

4. Initialize the database and follow the prompts:

        mysql_secure_installation

    An example session (the recommended answers are shown after the `#` comments):

        Enter current password for root (enter for none):   # press Enter; the database has just been initialized
        Switch to unix_socket authentication [Y/n] N        # answer N
        Change the root password? [Y/n] Y                   # answer Y and set a new password
        New password:
        Re-enter new password:
        Remove anonymous users? [Y/n] Y                 # answer Y to remove the anonymous users
        Disallow root login remotely? [Y/n] Y           # answer Y to disable remote root login
        Remove test database and access to it? [Y/n] Y  # answer Y to drop the test database
        Reload privilege tables now? [Y/n] Y            # answer Y to reload the privilege tables
        All done!  If you've completed all of the above steps, your MariaDB installation should now be secure.

5. Verify that you can log in to MariaDB with the password set in the previous step:

        mysql -uroot -p

### Installing the message queue

The message queue is installed on the controller node; RabbitMQ is recommended.

1. Install the packages:

        dnf install rabbitmq-server

2. Start the service:

        systemctl start rabbitmq-server

3. Configure the openstack user. RABBIT_PASS is the password the OpenStack services use to log in to the message queue; it must match the value used later in each service's configuration:

        rabbitmqctl add_user openstack RABBIT_PASS
        rabbitmqctl set_permissions openstack ".*" ".*" ".*"

### Installing the cache service

The cache service is installed on the controller node; Memcached is recommended.

1. Install the packages:

        dnf install memcached python3-memcached

2. Edit the configuration file /etc/sysconfig/memcached:

        OPTIONS="-l 127.0.0.1,::1,controller"

3. Start the service (a quick status check is sketched after this section):

        systemctl start memcached
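Before moving on to the OpenStack services, it may be worth confirming that the infrastructure pieces installed above respond. A small hedged check (standard `rabbitmqctl`/`systemctl` invocations; the expected output is indicative only):

```bash
# Confirm the openstack user exists in RabbitMQ and the base services are running.
rabbitmqctl list_users                                  # should list "openstack"
systemctl is-active mariadb rabbitmq-server memcached   # each should print "active"
```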
### Deploying the services

### Keystone

Keystone is OpenStack's identity service and the entry point to the whole cloud. It provides tenant isolation, user authentication and service discovery, and must be installed.

1. Create the keystone database and grant access to it:

        mysql -u root -p
        MariaDB [(none)]> CREATE DATABASE keystone;
        MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
        IDENTIFIED BY 'KEYSTONE_DBPASS';
        MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
        IDENTIFIED BY 'KEYSTONE_DBPASS';
        MariaDB [(none)]> exit

    Note: replace KEYSTONE_DBPASS with the password you choose for the keystone database.

2. Install the packages:

        dnf install openstack-keystone httpd mod_wsgi

3. Configure keystone:

        vim /etc/keystone/keystone.conf

        [database]
        connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone

        [token]
        provider = fernet

    Explanation: the [database] section configures the database connection; the [token] section configures the token provider.

4. Synchronize the database:

        su -s /bin/sh -c "keystone-manage db_sync" keystone

5. Initialize the Fernet key repositories:

        keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
        keystone-manage credential_setup --keystone-user keystone --keystone-group keystone

6. Bootstrap the service:

        keystone-manage bootstrap --bootstrap-password ADMIN_PASS \
          --bootstrap-admin-url http://controller:5000/v3/ \
          --bootstrap-internal-url http://controller:5000/v3/ \
          --bootstrap-public-url http://controller:5000/v3/ \
          --bootstrap-region-id RegionOne

    Note: replace ADMIN_PASS with the password you choose for the admin user.

7. Configure the Apache HTTP server:

        # configuration file to edit
        vim /etc/httpd/conf/httpd.conf
        # set the following directive, adding it if it does not exist
        ServerName controller

        # create the symbolic link
        ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/

    Explanation: the ServerName directive must point at the controller node; note that it must be created if it does not exist.

8. Start the Apache HTTP service:

        systemctl enable httpd.service
        systemctl start httpd.service

9. Create an environment-variable file with the admin credentials:

        cat << EOF >> ~/.admin-openrc
        export OS_PROJECT_DOMAIN_NAME=Default
        export OS_USER_DOMAIN_NAME=Default
        export OS_PROJECT_NAME=admin
        export OS_USERNAME=admin
        export OS_PASSWORD=ADMIN_PASS
        export OS_AUTH_URL=http://controller:5000/v3
        export OS_IDENTITY_API_VERSION=3
        export OS_IMAGE_API_VERSION=2
        EOF

    Note: replace ADMIN_PASS with the admin user's password.

10. Create the domain, projects, users and roles (a similar credentials file for the demo user is sketched after this section). The python3-openstackclient package must be installed first:

        dnf install python3-openstackclient

    Import the environment variables:

        source ~/.admin-openrc

    Create the project `service`; the domain `default` was already created by `keystone-manage bootstrap`:

        openstack domain create --description "An Example Domain" example
        openstack project create --domain default --description "Service Project" service

    Create the (non-admin) project `myproject`, the user `myuser` and the role `myrole`, then assign `myrole` to `myuser` in `myproject`:

        openstack project create --domain default --description "Demo Project" myproject
        openstack user create --domain default --password-prompt myuser
        openstack role create myrole
        openstack role add --project myproject --user myuser myrole

11. Verify the installation. Unset the temporary OS_AUTH_URL and OS_PASSWORD environment variables:

        source ~/.admin-openrc
        unset OS_AUTH_URL OS_PASSWORD

    Request a token for the admin user:

        openstack --os-auth-url http://controller:5000/v3 \
          --os-project-domain-name Default --os-user-domain-name Default \
          --os-project-name admin --os-username admin token issue

    Request a token for the myuser user:

        openstack --os-auth-url http://controller:5000/v3 \
          --os-project-domain-name Default --os-user-domain-name Default \
          --os-project-name myproject --os-username myuser token issue
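It can also be convenient to keep a second credentials file for the unprivileged `myuser` account created above, analogous to `~/.admin-openrc`. This is only a suggested convenience; the file name `~/.demo-openrc` and the MYUSER_PASS placeholder are illustrative, not part of the original guide:

```bash
cat << EOF >> ~/.demo-openrc
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=myproject
export OS_USERNAME=myuser
export OS_PASSWORD=MYUSER_PASS   # the password entered at "openstack user create ... myuser"
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
EOF
```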
### Glance

Glance is OpenStack's image service. It handles uploading and downloading virtual-machine and bare-metal images, and must be installed.

**Controller node**:

1. Create the glance database and grant access to it:

        mysql -u root -p
        MariaDB [(none)]> CREATE DATABASE glance;
        MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
        IDENTIFIED BY 'GLANCE_DBPASS';
        MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
        IDENTIFIED BY 'GLANCE_DBPASS';
        MariaDB [(none)]> exit

    Note: replace GLANCE_DBPASS with the password you choose for the glance database.

2. Initialize the glance Keystone resources. Import the environment variables:

        source ~/.admin-openrc

    When creating the user, the command prompts for a password; enter a password of your choice and use it wherever GLANCE_PASS appears below.

        openstack user create --domain default --password-prompt glance
        User Password:
        Repeat User Password:

    Add the glance user to the service project with the admin role:

        openstack role add --project service --user glance admin

    Create the glance service entity:

        openstack service create --name glance --description "OpenStack Image" image

    Create the glance API endpoints:

        openstack endpoint create --region RegionOne image public http://controller:9292
        openstack endpoint create --region RegionOne image internal http://controller:9292
        openstack endpoint create --region RegionOne image admin http://controller:9292

3. Install the packages:

        dnf install openstack-glance

4. Edit the glance configuration file:

        vim /etc/glance/glance-api.conf

        [database]
        connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance

        [keystone_authtoken]
        www_authenticate_uri = http://controller:5000
        auth_url = http://controller:5000
        memcached_servers = controller:11211
        auth_type = password
        project_domain_name = Default
        user_domain_name = Default
        project_name = service
        username = glance
        password = GLANCE_PASS

        [paste_deploy]
        flavor = keystone

        [glance_store]
        stores = file,http
        default_store = file
        filesystem_store_datadir = /var/lib/glance/images/

    Explanation: the [database] section configures the database connection; the [keystone_authtoken] and [paste_deploy] sections configure the identity-service connection; the [glance_store] section configures local file-system storage and the image directory.

5. Synchronize the database:

        su -s /bin/sh -c "glance-manage db_sync" glance

6. Start the service:

        systemctl enable openstack-glance-api.service
        systemctl start openstack-glance-api.service

7. Verify the installation. Import the environment variables:

        source ~/.admin-openrc

    Download an image. For x86:

        wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img

    For arm:

        wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-aarch64-disk.img
    Note: if your environment runs on the Kunpeng (aarch64) architecture, download the aarch64 image; the image cirros-0.5.2-aarch64-disk.img has been tested.

    Upload the image to the Image service:

        openstack image create --disk-format qcow2 --container-format bare \
          --file cirros-0.4.0-x86_64-disk.img --public cirros

    Confirm the upload and check the image attributes:

        openstack image list

### Placement

Placement is OpenStack's resource tracking and scheduling component. It is usually not user-facing but is called by Nova and other services, and is installed on the controller node.

Before installing and configuring the Placement service, create its database, service credentials and API endpoints.

1. Create the database. Access the database server as root:

        mysql -u root -p

    Create the placement database:

        MariaDB [(none)]> CREATE DATABASE placement;

    Grant access to it:

        MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' \
        IDENTIFIED BY 'PLACEMENT_DBPASS';
        MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' \
        IDENTIFIED BY 'PLACEMENT_DBPASS';

    Replace PLACEMENT_DBPASS with the password you choose for the placement database, then leave the database client:

        exit

2. Configure the user and endpoints. Source the admin credentials to gain admin CLI access:

        source ~/.admin-openrc

    Create the placement user and set its password:

        openstack user create --domain default --password-prompt placement
        User Password:
        Repeat User Password:

    Add the placement user to the service project with the admin role:

        openstack role add --project service --user placement admin

    Create the placement service entity:

        openstack service create --name placement \
          --description "Placement API" placement

    Create the Placement API endpoints:

        openstack endpoint create --region RegionOne \
          placement public http://controller:8778
        openstack endpoint create --region RegionOne \
          placement internal http://controller:8778
        openstack endpoint create --region RegionOne \
          placement admin http://controller:8778

3. Install and configure the components. Install the package:

        dnf install openstack-placement-api

    Edit the /etc/placement/placement.conf configuration file as follows.

    In the [placement_database] section, configure the database connection:

        [placement_database]
        connection = mysql+pymysql://placement:PLACEMENT_DBPASS@controller/placement

    Replace PLACEMENT_DBPASS with the placement database password.

    In the [api] and [keystone_authtoken] sections, configure the identity-service connection:

        [api]
        auth_strategy = keystone

        [keystone_authtoken]
        auth_url = http://controller:5000/v3
        memcached_servers = controller:11211
        auth_type = password
        project_domain_name = Default
        user_domain_name = Default
        project_name = service
        username = placement
        password = PLACEMENT_PASS

    Replace PLACEMENT_PASS with the placement user's password.
4. Synchronize the database to populate the placement schema:

        su -s /bin/sh -c "placement-manage db sync" placement

5. Start the service by restarting httpd:

        systemctl restart httpd

6. Verify the installation. Source the admin credentials to gain admin CLI access:

        source ~/.admin-openrc

    Run the status checks:

        placement-status upgrade check
        +----------------------------------------------------------------------+
        | Upgrade Check Results                                                |
        +----------------------------------------------------------------------+
        | Check: Missing Root Provider IDs                                     |
        | Result: Success                                                      |
        +----------------------------------------------------------------------+
        | Check: Incomplete Consumers                                          |
        | Result: Success                                                      |
        +----------------------------------------------------------------------+
        | Check: Policy File JSON to YAML Migration                            |
        | Result: Failure                                                      |
        | Details: Your policy file is JSON-formatted which is deprecated. ... |
        +----------------------------------------------------------------------+

    The Policy File JSON to YAML Migration check reports Failure because JSON-formatted policy files have been deprecated in Placement since the Wallaby release. As the message suggests, use the oslopolicy-convert-json-to-yaml tool to convert the existing JSON policy file to YAML:

        oslopolicy-convert-json-to-yaml --namespace placement \
          --policy-file /etc/placement/policy.json \
          --output-file /etc/placement/policy.yaml
        mv /etc/placement/policy.json{,.bak}

    Note: in the current environment this failure can be ignored; it does not affect operation.

7. Run commands against the placement API (an additional check is sketched after this section). Install the osc-placement plugin:

        dnf install python3-osc-placement

    List the available resource classes and traits:

        openstack --os-placement-api-version 1.2 resource class list --sort-column name
        +----------------------------+
        | name                       |
        +----------------------------+
        | DISK_GB                    |
        | FPGA                       |
        | ...                        |

        openstack --os-placement-api-version 1.6 trait list --sort-column name
        +---------------------------------------+
        | name                                  |
        +---------------------------------------+
        | COMPUTE_ACCELERATORS                  |
        | COMPUTE_ARCH_AARCH64                  |
        | ...                                   |
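Once compute nodes have been added (see the Nova section below), Placement should report one resource provider per hypervisor. As an additional hedged check, the osc-placement plugin installed above can list them (the output shape is indicative only):

```bash
# After nova-compute hosts are registered, each should appear as a resource provider.
openstack resource provider list
```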
|","title":"Placement"},{"location":"install/openEuler-25.03/OpenStack-antelope/#nova","text":"Nova\u662fOpenStack\u7684\u8ba1\u7b97\u670d\u52a1\uff0c\u8d1f\u8d23\u865a\u62df\u673a\u7684\u521b\u5efa\u3001\u53d1\u653e\u7b49\u529f\u80fd\u3002 Controller\u8282\u70b9 \u5728\u63a7\u5236\u8282\u70b9\u6267\u884c\u4ee5\u4e0b\u64cd\u4f5c\u3002 \u521b\u5efa\u6570\u636e\u5e93 \u4f7f\u7528root\u7528\u6237\u8bbf\u95ee\u6570\u636e\u5e93\u670d\u52a1\uff1a mysql -u root -p \u521b\u5efa nova_api \u3001 nova \u548c nova_cell0 \u6570\u636e\u5e93\uff1a MariaDB [(none)]> CREATE DATABASE nova_api; MariaDB [(none)]> CREATE DATABASE nova; MariaDB [(none)]> CREATE DATABASE nova_cell0; \u6388\u6743\u6570\u636e\u5e93\u8bbf\u95ee\uff1a MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \\ IDENTIFIED BY 'NOVA_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \\ IDENTIFIED BY 'NOVA_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \\ IDENTIFIED BY 'NOVA_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \\ IDENTIFIED BY 'NOVA_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \\ IDENTIFIED BY 'NOVA_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \\ IDENTIFIED BY 'NOVA_DBPASS'; \u66ff\u6362 NOVA_DBPASS \u4e3anova\u76f8\u5173\u6570\u636e\u5e93\u8bbf\u95ee\u5bc6\u7801\u3002 \u9000\u51fa\u6570\u636e\u5e93\u8bbf\u95ee\u5ba2\u6237\u7aef\uff1a exit \u914d\u7f6e\u7528\u6237\u548cEndpoints source admin\u51ed\u8bc1\uff0c\u4ee5\u83b7\u53d6admin\u547d\u4ee4\u884c\u6743\u9650\uff1a source ~/.admin-openrc \u521b\u5efanova\u7528\u6237\u5e76\u8bbe\u7f6e\u7528\u6237\u5bc6\u7801\uff1a openstack user create --domain default --password-prompt nova User Password: Repeat User Password: \u6dfb\u52a0nova\u7528\u6237\u5230service project\u5e76\u6307\u5b9aadmin\u89d2\u8272\uff1a openstack role add --project service --user nova admin \u521b\u5efanova\u670d\u52a1\u5b9e\u4f53\uff1a openstack service create --name nova \\ --description \"OpenStack Compute\" compute \u521b\u5efaNova API\u670d\u52a1endpoints\uff1a openstack endpoint create --region RegionOne \\ compute public http://controller:8774/v2.1 openstack endpoint create --region RegionOne \\ compute internal http://controller:8774/v2.1 openstack endpoint create --region RegionOne \\ compute admin http://controller:8774/v2.1 \u5b89\u88c5\u53ca\u914d\u7f6e\u7ec4\u4ef6 \u5b89\u88c5\u8f6f\u4ef6\u5305\uff1a dnf install openstack-nova-api openstack-nova-conductor \\ openstack-nova-novncproxy openstack-nova-scheduler \u7f16\u8f91 /etc/nova/nova.conf \u914d\u7f6e\u6587\u4ef6\uff0c\u5b8c\u6210\u5982\u4e0b\u64cd\u4f5c\uff1a \u5728 [default] \u90e8\u5206\uff0c\u542f\u7528\u8ba1\u7b97\u548c\u5143\u6570\u636e\u7684API\uff0c\u914d\u7f6eRabbitMQ\u6d88\u606f\u961f\u5217\u5165\u53e3\uff0c\u4f7f\u7528controller\u8282\u70b9\u7ba1\u7406IP\u914d\u7f6emy_ip\uff0c\u663e\u5f0f\u5b9a\u4e49log_dir\uff1a [DEFAULT] enabled_apis = osapi_compute,metadata transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/ my_ip = 192.168.0.2 log_dir = /var/log/nova state_path = /var/lib/nova \u66ff\u6362 RABBIT_PASS \u4e3aRabbitMQ\u4e2dopenstack\u8d26\u6237\u7684\u5bc6\u7801\u3002 \u5728 [api_database] \u548c [database] \u90e8\u5206\uff0c\u914d\u7f6e\u6570\u636e\u5e93\u5165\u53e3\uff1a [api_database] connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api [database] connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova \u66ff\u6362 NOVA_DBPASS 
    In the [api] and [keystone_authtoken] sections, configure the identity-service connection:

        [api]
        auth_strategy = keystone

        [keystone_authtoken]
        auth_url = http://controller:5000/v3
        memcached_servers = controller:11211
        auth_type = password
        project_domain_name = Default
        user_domain_name = Default
        project_name = service
        username = nova
        password = NOVA_PASS

    Replace NOVA_PASS with the nova user's password.

    In the [vnc] section, enable and configure remote-console access:

        [vnc]
        enabled = true
        server_listen = $my_ip
        server_proxyclient_address = $my_ip

    In the [glance] section, configure the image-service API address:

        [glance]
        api_servers = http://controller:9292

    In the [oslo_concurrency] section, configure the lock path:

        [oslo_concurrency]
        lock_path = /var/lib/nova/tmp

    In the [placement] section, configure the placement-service connection:

        [placement]
        region_name = RegionOne
        project_domain_name = Default
        project_name = service
        auth_type = password
        user_domain_name = Default
        auth_url = http://controller:5000/v3
        username = placement
        password = PLACEMENT_PASS

    Replace PLACEMENT_PASS with the placement user's password.

4. Synchronize the databases. Synchronize the nova-api database:

        su -s /bin/sh -c "nova-manage api_db sync" nova

    Register the cell0 database:

        su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova

    Create the cell1 cell:

        su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova

    Synchronize the nova database:

        su -s /bin/sh -c "nova-manage db sync" nova

    Verify that cell0 and cell1 are registered correctly:

        su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova

5. Start the services:

        systemctl enable \
          openstack-nova-api.service \
          openstack-nova-scheduler.service \
          openstack-nova-conductor.service \
          openstack-nova-novncproxy.service
        systemctl start \
          openstack-nova-api.service \
          openstack-nova-scheduler.service \
          openstack-nova-conductor.service \
          openstack-nova-novncproxy.service

**Compute node** — perform the following steps on the compute node.

1. Install the packages:

        dnf install openstack-nova-compute

2. Edit the /etc/nova/nova.conf configuration file.

    In the [DEFAULT] section, enable the compute and metadata APIs, configure the RabbitMQ connection, set my_ip to the compute node's management IP, and define compute_driver, instances_path and log_dir explicitly:

        [DEFAULT]
        enabled_apis = osapi_compute,metadata
        transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
        my_ip = 192.168.0.3
        compute_driver = libvirt.LibvirtDriver
        instances_path = /var/lib/nova/instances
        log_dir = /var/log/nova

    Replace RABBIT_PASS with the password of the openstack account in RabbitMQ.

    In the [api] and [keystone_authtoken] sections, configure the identity-service connection:

        [api]
        auth_strategy = keystone

        [keystone_authtoken]
        auth_url = http://controller:5000/v3
        memcached_servers = controller:11211
        auth_type = password
        project_domain_name = Default
        user_domain_name = Default
        project_name = service
        username = nova
        password = NOVA_PASS

    Replace NOVA_PASS with the nova user's password.
    In the [vnc] section, enable and configure remote-console access:

        [vnc]
        enabled = true
        server_listen = $my_ip
        server_proxyclient_address = $my_ip
        novncproxy_base_url = http://controller:6080/vnc_auto.html

    In the [glance] section, configure the image-service API address:

        [glance]
        api_servers = http://controller:9292

    In the [oslo_concurrency] section, configure the lock path:

        [oslo_concurrency]
        lock_path = /var/lib/nova/tmp

    In the [placement] section, configure the placement-service connection:

        [placement]
        region_name = RegionOne
        project_domain_name = Default
        project_name = service
        auth_type = password
        user_domain_name = Default
        auth_url = http://controller:5000/v3
        username = placement
        password = PLACEMENT_PASS

    Replace PLACEMENT_PASS with the placement user's password.

    Check whether the compute node supports hardware-accelerated virtualization (x86_64). On an x86_64 processor, run:

        egrep -c '(vmx|svm)' /proc/cpuinfo

    If the command returns 0, hardware acceleration is not supported and libvirt must be configured to use QEMU instead of the default KVM. Edit the [libvirt] section of /etc/nova/nova.conf:

        [libvirt]
        virt_type = qemu

    If the command returns 1 or more, hardware acceleration is supported and no extra configuration is needed.

    Check whether the compute node supports hardware-accelerated virtualization (arm64). On an arm64 processor, run:

        # virt-host-validate is provided by libvirt, which has already been installed
        # as a dependency of openstack-nova-compute
        virt-host-validate

    Output containing FAIL means hardware acceleration is not supported and libvirt must be configured to use QEMU instead of the default KVM:

        QEMU: Checking if device /dev/kvm exists      : FAIL (Check that CPU and firmware supports virtualization and kvm module is loaded)

        [libvirt]
        virt_type = qemu

    Output containing PASS means hardware acceleration is supported and no extra configuration is needed:

        QEMU: Checking if device /dev/kvm exists      : PASS

    Configure qemu (arm64 only). This step is required only on arm64 processors.

    Edit /etc/libvirt/qemu.conf:

        nvram = ["/usr/share/AAVMF/AAVMF_CODE.fd: \
          /usr/share/AAVMF/AAVMF_VARS.fd", \
          "/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw: \
          /usr/share/edk2/aarch64/vars-template-pflash.raw"]

    Edit /etc/qemu/firmware/edk2-aarch64.json:

        {
            "description": "UEFI firmware for ARM64 virtual machines",
            "interface-types": [ "uefi" ],
            "mapping": {
                "device": "flash",
                "executable": {
                    "filename": "/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw",
                    "format": "raw"
                },
                "nvram-template": {
                    "filename": "/usr/share/edk2/aarch64/vars-template-pflash.raw",
                    "format": "raw"
                }
            },
            "targets": [ { "architecture": "aarch64", "machines": [ "virt-*" ] } ],
            "features": [ ],
            "tags": [ ]
        }
\"filename\": \"/usr/share/edk2/aarch64/vars-template-pflash.raw\", \"format\": \"raw\" } }, \"targets\": [ { \"architecture\": \"aarch64\", \"machines\": [ \"virt-*\" ] } ], \"features\": [ ], \"tags\": [ ] } \u542f\u52a8\u670d\u52a1 systemctl enable libvirtd.service openstack-nova-compute.service systemctl start libvirtd.service openstack-nova-compute.service Controller\u8282\u70b9 \u5728\u63a7\u5236\u8282\u70b9\u6267\u884c\u4ee5\u4e0b\u64cd\u4f5c\u3002 \u6dfb\u52a0\u8ba1\u7b97\u8282\u70b9\u5230openstack\u96c6\u7fa4 source admin\u51ed\u8bc1\uff0c\u4ee5\u83b7\u53d6admin\u547d\u4ee4\u884c\u6743\u9650\uff1a source ~/.admin-openrc \u786e\u8ba4nova-compute\u670d\u52a1\u5df2\u8bc6\u522b\u5230\u6570\u636e\u5e93\u4e2d\uff1a openstack compute service list --service nova-compute \u53d1\u73b0\u8ba1\u7b97\u8282\u70b9\uff0c\u5c06\u8ba1\u7b97\u8282\u70b9\u6dfb\u52a0\u5230cell\u6570\u636e\u5e93\uff1a su -s /bin/sh -c \"nova-manage cell_v2 discover_hosts --verbose\" nova \u7ed3\u679c\u5982\u4e0b\uff1a Modules with known eventlet monkey patching issues were imported prior to eventlet monkey patching: urllib3. This warning can usually be ignored if the caller is only importing and not executing nova code. Found 2 cell mappings. Skipping cell0 since it does not contain hosts. Getting computes from cell 'cell1': 6dae034e-b2d9-4a6c-b6f0-60ada6a6ddc2 Checking host mapping for compute host 'compute': 6286a86f-09d7-4786-9137-1185654c9e2e Creating host mapping for compute host 'compute': 6286a86f-09d7-4786-9137-1185654c9e2e Found 1 unmapped computes in cell: 6dae034e-b2d9-4a6c-b6f0-60ada6a6ddc2 \u9a8c\u8bc1 \u5217\u51fa\u670d\u52a1\u7ec4\u4ef6\uff0c\u9a8c\u8bc1\u6bcf\u4e2a\u6d41\u7a0b\u90fd\u6210\u529f\u542f\u52a8\u548c\u6ce8\u518c\uff1a openstack compute service list \u5217\u51fa\u8eab\u4efd\u670d\u52a1\u4e2d\u7684API\u7aef\u70b9\uff0c\u9a8c\u8bc1\u4e0e\u8eab\u4efd\u670d\u52a1\u7684\u8fde\u63a5\uff1a openstack catalog list \u5217\u51fa\u955c\u50cf\u670d\u52a1\u4e2d\u7684\u955c\u50cf\uff0c\u9a8c\u8bc1\u4e0e\u955c\u50cf\u670d\u52a1\u7684\u8fde\u63a5\uff1a openstack image list \u68c0\u67e5cells\u662f\u5426\u8fd0\u4f5c\u6210\u529f\uff0c\u4ee5\u53ca\u5176\u4ed6\u5fc5\u8981\u6761\u4ef6\u662f\u5426\u5df2\u5177\u5907\u3002 nova-status upgrade check","title":"Nova"},{"location":"install/openEuler-25.03/OpenStack-antelope/#neutron","text":"Neutron\u662fOpenStack\u7684\u7f51\u7edc\u670d\u52a1\uff0c\u63d0\u4f9b\u865a\u62df\u4ea4\u6362\u673a\u3001IP\u8def\u7531\u3001DHCP\u7b49\u529f\u80fd\u3002 Controller\u8282\u70b9 \u521b\u5efa\u6570\u636e\u5e93\u3001\u670d\u52a1\u51ed\u8bc1\u548c API \u670d\u52a1\u7aef\u70b9 \u521b\u5efa\u6570\u636e\u5e93\uff1a mysql -u root -p MariaDB [(none)]> CREATE DATABASE neutron; MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'NEUTRON_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'NEUTRON_DBPASS'; MariaDB [(none)]> exit; \u521b\u5efa\u7528\u6237\u548c\u670d\u52a1\uff0c\u5e76\u8bb0\u4f4f\u521b\u5efaneutron\u7528\u6237\u65f6\u8f93\u5165\u7684\u5bc6\u7801\uff0c\u7528\u4e8e\u914d\u7f6eNEUTRON_PASS\uff1a source ~/.admin-openrc openstack user create --domain default --password-prompt neutron openstack role add --project service --user neutron admin openstack service create --name neutron --description \"OpenStack Networking\" network \u90e8\u7f72 Neutron API \u670d\u52a1\uff1a openstack endpoint create --region RegionOne network public http://controller:9696 openstack endpoint create --region RegionOne network 
### Neutron

Neutron is OpenStack's networking service, providing virtual switches, IP routing, DHCP and related functions.

**Controller node**:

1. Create the database, service credentials and API endpoints. Create the database:

        mysql -u root -p
        MariaDB [(none)]> CREATE DATABASE neutron;
        MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'NEUTRON_DBPASS';
        MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'NEUTRON_DBPASS';
        MariaDB [(none)]> exit;

    Create the user and service; remember the password entered when creating the neutron user, as it is the value of NEUTRON_PASS:

        source ~/.admin-openrc
        openstack user create --domain default --password-prompt neutron
        openstack role add --project service --user neutron admin
        openstack service create --name neutron --description "OpenStack Networking" network

    Create the Neutron API endpoints:

        openstack endpoint create --region RegionOne network public http://controller:9696
        openstack endpoint create --region RegionOne network internal http://controller:9696
        openstack endpoint create --region RegionOne network admin http://controller:9696

2. Install the packages:

        dnf install -y openstack-neutron openstack-neutron-linuxbridge ebtables ipset openstack-neutron-ml2

3. Configure Neutron. Edit /etc/neutron/neutron.conf:

        [database]
        connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron

        [DEFAULT]
        core_plugin = ml2
        service_plugins = router
        allow_overlapping_ips = true
        transport_url = rabbit://openstack:RABBIT_PASS@controller
        auth_strategy = keystone
        notify_nova_on_port_status_changes = true
        notify_nova_on_port_data_changes = true

        [keystone_authtoken]
        www_authenticate_uri = http://controller:5000
        auth_url = http://controller:5000
        memcached_servers = controller:11211
        auth_type = password
        project_domain_name = Default
        user_domain_name = Default
        project_name = service
        username = neutron
        password = NEUTRON_PASS

        [nova]
        auth_url = http://controller:5000
        auth_type = password
        project_domain_name = Default
        user_domain_name = Default
        region_name = RegionOne
        project_name = service
        username = nova
        password = NOVA_PASS

        [oslo_concurrency]
        lock_path = /var/lib/neutron/tmp

        [experimental]
        linuxbridge = true

4. Configure ML2. The ML2 settings can be adapted to your needs; this document uses a provider network with linuxbridge. Edit /etc/neutron/plugins/ml2/ml2_conf.ini:

        [ml2]
        type_drivers = flat,vlan,vxlan
        tenant_network_types = vxlan
        mechanism_drivers = linuxbridge,l2population
        extension_drivers = port_security

        [ml2_type_flat]
        flat_networks = provider

        [ml2_type_vxlan]
        vni_ranges = 1:1000

        [securitygroup]
        enable_ipset = true

    Edit /etc/neutron/plugins/ml2/linuxbridge_agent.ini:

        [linux_bridge]
        physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME

        [vxlan]
        enable_vxlan = true
        local_ip = OVERLAY_INTERFACE_IP_ADDRESS
        l2_population = true

        [securitygroup]
        enable_security_group = true
        firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

5. Configure the Layer-3 agent. Edit /etc/neutron/l3_agent.ini:

        [DEFAULT]
        interface_driver = linuxbridge

6. Configure the DHCP agent. Edit /etc/neutron/dhcp_agent.ini:

        [DEFAULT]
        interface_driver = linuxbridge
        dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
        enable_isolated_metadata = true

7. Configure the metadata agent. Edit /etc/neutron/metadata_agent.ini:

        [DEFAULT]
        nova_metadata_host = controller
        metadata_proxy_shared_secret = METADATA_SECRET

8. Configure the nova service to use neutron. Edit /etc/nova/nova.conf:

        [neutron]
        auth_url = http://controller:5000
        auth_type = password
        project_domain_name = default
        user_domain_name = default
        region_name = RegionOne
        project_name = service
        username = neutron
        password = NEUTRON_PASS
        service_metadata_proxy = true
        metadata_proxy_shared_secret = METADATA_SECRET

9. Create the /etc/neutron/plugin.ini symbolic link:

        ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

10. Synchronize the database:

        su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

11. Restart the nova API service:

        systemctl restart openstack-nova-api

12. Start the networking services:

        systemctl enable neutron-server.service neutron-linuxbridge-agent.service \
          neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service
        systemctl start neutron-server.service neutron-linuxbridge-agent.service \
          neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service
**Compute node**:

1. Install the packages:

        dnf install openstack-neutron-linuxbridge ebtables ipset -y

2. Configure Neutron. Edit /etc/neutron/neutron.conf:

        [DEFAULT]
        transport_url = rabbit://openstack:RABBIT_PASS@controller
        auth_strategy = keystone

        [keystone_authtoken]
        www_authenticate_uri = http://controller:5000
        auth_url = http://controller:5000
        memcached_servers = controller:11211
        auth_type = password
        project_domain_name = Default
        user_domain_name = Default
        project_name = service
        username = neutron
        password = NEUTRON_PASS

        [oslo_concurrency]
        lock_path = /var/lib/neutron/tmp

    Edit /etc/neutron/plugins/ml2/linuxbridge_agent.ini:

        [linux_bridge]
        physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME

        [vxlan]
        enable_vxlan = true
        local_ip = OVERLAY_INTERFACE_IP_ADDRESS
        l2_population = true

        [securitygroup]
        enable_security_group = true
        firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

3. Configure the nova compute service to use neutron. Edit /etc/nova/nova.conf:

        [neutron]
        auth_url = http://controller:5000
        auth_type = password
        project_domain_name = default
        user_domain_name = default
        region_name = RegionOne
        project_name = service
        username = neutron
        password = NEUTRON_PASS

4. Restart the nova-compute service:

        systemctl restart openstack-nova-compute.service

5. Start the Neutron linuxbridge agent service (a quick agent check is sketched after this section):

        systemctl enable neutron-linuxbridge-agent
        systemctl start neutron-linuxbridge-agent
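After both the controller and compute agents are up, a quick hedged sanity check is to list the agents and confirm they report as alive (the exact set of agents depends on which ones you enabled):

```bash
source ~/.admin-openrc
openstack network agent list   # linuxbridge, DHCP, metadata and L3 agents should show State "UP"
```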
### Cinder

Cinder is OpenStack's block-storage service, providing creation, release and backup of block devices.

**Controller node**:

1. Initialize the database. CINDER_DBPASS is the password you choose for the cinder database:

        mysql -u root -p
        MariaDB [(none)]> CREATE DATABASE cinder;
        MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'CINDER_DBPASS';
        MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'CINDER_DBPASS';
        MariaDB [(none)]> exit

2. Initialize the Keystone resources:

        source ~/.admin-openrc
        # When creating the user, the command prompts for a password; enter a password of your
        # choice and use it wherever CINDER_PASS appears below.
        openstack user create --domain default --password-prompt cinder
        openstack role add --project service --user cinder admin
        openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
        openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s
        openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s
        openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s

3. Install the packages:

        dnf install openstack-cinder-api openstack-cinder-scheduler

4. Edit the cinder configuration file /etc/cinder/cinder.conf:

        [DEFAULT]
        transport_url = rabbit://openstack:RABBIT_PASS@controller
        auth_strategy = keystone
        my_ip = 192.168.0.2

        [database]
        connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder

        [keystone_authtoken]
        www_authenticate_uri = http://controller:5000
        auth_url = http://controller:5000
        memcached_servers = controller:11211
        auth_type = password
        project_domain_name = Default
        user_domain_name = Default
        project_name = service
        username = cinder
        password = CINDER_PASS

        [oslo_concurrency]
        lock_path = /var/lib/cinder/tmp

5. Synchronize the database:

        su -s /bin/sh -c "cinder-manage db sync" cinder

6. Edit the nova configuration file /etc/nova/nova.conf:

        [cinder]
        os_region_name = RegionOne

7. Start the services:

        systemctl restart openstack-nova-api
        systemctl start openstack-cinder-api openstack-cinder-scheduler

**Storage node**:

The storage node needs at least one disk prepared in advance as cinder's storage backend. The text below assumes the storage node already has an unused disk named /dev/sdb; replace the device name according to your actual environment.

Cinder supports many kinds of backend storage. This guide uses the simplest, LVM, as a reference; if you want another backend such as Ceph, configure it yourself.

1. Install the packages:

        dnf install lvm2 device-mapper-persistent-data scsi-target-utils rpcbind nfs-utils \
          openstack-cinder-volume openstack-cinder-backup

2. Configure the LVM volume group:

        pvcreate /dev/sdb
        vgcreate cinder-volumes /dev/sdb

3. Edit the cinder configuration file /etc/cinder/cinder.conf:

        [DEFAULT]
        transport_url = rabbit://openstack:RABBIT_PASS@controller
        auth_strategy = keystone
        my_ip = 192.168.0.4
        enabled_backends = lvm
        glance_api_servers = http://controller:9292

        [keystone_authtoken]
        www_authenticate_uri = http://controller:5000
        auth_url = http://controller:5000
        memcached_servers = controller:11211
        auth_type = password
        project_domain_name = default
        user_domain_name = default
        project_name = service
        username = cinder
        password = CINDER_PASS

        [database]
        connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder

        [lvm]
        volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
        volume_group = cinder-volumes
        target_protocol = iscsi
        target_helper = lioadm

        [oslo_concurrency]
        lock_path = /var/lib/cinder/tmp

4. Configure cinder backup (optional). cinder-backup is an optional backup service, and cinder supports many backup backends. This document uses swift; for other backends such as NFS, configure them yourself, for example by following the OpenStack official documentation for NFS.

    Edit /etc/cinder/cinder.conf and add to the [DEFAULT] section:

        [DEFAULT]
        backup_driver = cinder.backup.drivers.swift.SwiftBackupDriver
        backup_swift_url = SWIFT_URL

    SWIFT_URL is the URL of the swift service in your environment; after swift has been deployed, obtain it with `openstack catalog show object-store`.

5. Start the services:

        systemctl start openstack-cinder-volume target
        systemctl start openstack-cinder-backup   # optional
At this point the deployment of the Cinder service is complete. A simple verification can be run on the controller:

    source ~/.admin-openrc
    openstack volume service list
    openstack volume list

### Horizon

Horizon is OpenStack's web frontend. It lets users control the OpenStack cluster from a browser instead of the CLI. Horizon is normally deployed on the controller node.

1. Install the packages:

        dnf install openstack-dashboard

2. Edit the configuration file /etc/openstack-dashboard/local_settings:

        OPENSTACK_HOST = "controller"
        ALLOWED_HOSTS = ['*', ]
        OPENSTACK_KEYSTONE_URL = "http://controller:5000/v3"

        SESSION_ENGINE = 'django.contrib.sessions.backends.cache'

        CACHES = {
            'default': {
                'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
                'LOCATION': 'controller:11211',
            }
        }

        OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
        OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
        OPENSTACK_KEYSTONE_DEFAULT_ROLE = "member"
        WEBROOT = '/dashboard'
        POLICY_FILES_PATH = "/etc/openstack-dashboard"

        OPENSTACK_API_VERSIONS = {
            "identity": 3,
            "image": 2,
            "volume": 3,
        }

3. Restart the service:

        systemctl restart httpd

At this point the Horizon deployment is complete. Open a browser and go to http://192.168.0.2/dashboard to reach the Horizon login page.

### Ironic

Ironic is OpenStack's bare-metal service. It is recommended if you need to provision bare-metal machines; otherwise it can be skipped.

Perform the following steps on the controller node.

1. Set up the database. The Bare Metal service stores its information in a database; create an `ironic` database that the `ironic` user can access, replacing IRONIC_DBPASS with a suitable password:

        mysql -u root -p
        MariaDB [(none)]> CREATE DATABASE ironic CHARACTER SET utf8;
        MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'localhost' \
        IDENTIFIED BY 'IRONIC_DBPASS';
        MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'%' \
        IDENTIFIED BY 'IRONIC_DBPASS';
        MariaDB [(none)]> exit

2. Create the service user credentials. Create the Bare Metal service users, replacing IRONIC_PASS with the ironic user's password and IRONIC_INSPECTOR_PASS with the ironic-inspector user's password:

        openstack user create --password IRONIC_PASS \
          --email ironic@example.com ironic
        openstack role add --project service --user ironic admin
        openstack service create --name ironic \
          --description "Ironic baremetal provisioning service" baremetal
        openstack service create --name ironic-inspector --description "Ironic inspector baremetal provisioning service" baremetal-introspection
        openstack user create --password IRONIC_INSPECTOR_PASS --email ironic_inspector@example.com ironic-inspector
        openstack role add --project service --user ironic-inspector admin
Metal\u670d\u52a1\u8bbf\u95ee\u5165\u53e3 openstack endpoint create --region RegionOne baremetal admin http://192.168.0.2:6385 openstack endpoint create --region RegionOne baremetal public http://192.168.0.2:6385 openstack endpoint create --region RegionOne baremetal internal http://192.168.0.2:6385 openstack endpoint create --region RegionOne baremetal-introspection internal http://192.168.0.2:5050/v1 openstack endpoint create --region RegionOne baremetal-introspection public http://192.168.0.2:5050/v1 openstack endpoint create --region RegionOne baremetal-introspection admin http://192.168.0.2:5050/v1 \u5b89\u88c5\u7ec4\u4ef6 dnf install openstack-ironic-api openstack-ironic-conductor python3-ironicclient \u914d\u7f6eironic-api\u670d\u52a1 \u914d\u7f6e\u6587\u4ef6\u8def\u5f84/etc/ironic/ironic.conf \u901a\u8fc7 connection \u9009\u9879\u914d\u7f6e\u6570\u636e\u5e93\u7684\u4f4d\u7f6e\uff0c\u5982\u4e0b\u6240\u793a\uff0c\u66ff\u6362 IRONIC_DBPASS \u4e3a ironic \u7528\u6237\u7684\u5bc6\u7801\uff0c\u66ff\u6362 DB_IP \u4e3aDB\u670d\u52a1\u5668\u6240\u5728\u7684IP\u5730\u5740\uff1a [database] # The SQ LAlchemy connection string used to connect to the # database (string value) # connection = mysql+pymysql://ironic:IRONIC_DBPASS@DB_IP/ironic connection = mysql+pymysql://ironic:IRONIC_DBPASS@controller/ironic \u901a\u8fc7\u4ee5\u4e0b\u9009\u9879\u914d\u7f6eironic-api\u670d\u52a1\u4f7f\u7528RabbitMQ\u6d88\u606f\u4ee3\u7406\uff0c\u66ff\u6362 RPC_* \u4e3aRabbitMQ\u7684\u8be6\u7ec6\u5730\u5740\u548c\u51ed\u8bc1 [DEFAULT] # A URL representing the messaging driver to use and its full # configuration. (string value) # transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/ transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/ \u7528\u6237\u4e5f\u53ef\u81ea\u884c\u4f7f\u7528json-rpc\u65b9\u5f0f\u66ff\u6362rabbitmq \u914d\u7f6eironic-api\u670d\u52a1\u4f7f\u7528\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u7684\u51ed\u8bc1\uff0c\u66ff\u6362 PUBLIC_IDENTITY_IP \u4e3a\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u5668\u7684\u516c\u5171IP\uff0c\u66ff\u6362 PRIVATE_IDENTITY_IP \u4e3a\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u5668\u7684\u79c1\u6709IP\uff0c\u66ff\u6362 IRONIC_PASS \u4e3a\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u4e2d ironic \u7528\u6237\u7684\u5bc6\u7801\uff0c\u66ff\u6362 RABBIT_PASS \u4e3aRabbitMQ\u4e2dopenstack\u8d26\u6237\u7684\u5bc6\u7801\u3002\uff1a [DEFAULT] # Authentication strategy used by ironic-api: one of # \"keystone\" or \"noauth\". \"noauth\" should not be used in a # production environment because all authentication will be # disabled. (string value) auth_strategy=keystone host = controller memcache_servers = controller:11211 enabled_network_interfaces = flat,noop,neutron default_network_interface = noop enabled_hardware_types = ipmi enabled_boot_interfaces = pxe enabled_deploy_interfaces = direct default_deploy_interface = direct enabled_inspect_interfaces = inspector enabled_management_interfaces = ipmitool enabled_power_interfaces = ipmitool enabled_rescue_interfaces = no-rescue,agent isolinux_bin = /usr/share/syslinux/isolinux.bin logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s % (user_identity)s] %(instance)s%(message)s [keystone_authtoken] # Authentication type to load (string value) auth_type=password # Complete public Identity API endpoint (string value) # www_authenticate_uri=http://PUBLIC_IDENTITY_IP:5000 www_authenticate_uri=http://controller:5000 # Complete admin Identity API endpoint. 
(string value) # auth_url=http://PRIVATE_IDENTITY_IP:5000 auth_url=http://controller:5000 # Service username. (string value) username=ironic # Service account password. (string value) password=IRONIC_PASS # Service tenant name. (string value) project_name=service # Domain name containing project (string value) project_domain_name=Default # User's domain name (string value) user_domain_name=Default [agent] deploy_logs_collect = always deploy_logs_local_path = /var/log/ironic/deploy deploy_logs_storage_backend = local image_download_source = http stream_raw_images = false force_raw_images = false verify_ca = False [oslo_concurrency] [oslo_messaging_notifications] transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/ topics = notifications driver = messagingv2 [oslo_messaging_rabbit] amqp_durable_queues = True rabbit_ha_queues = True [pxe] ipxe_enabled = false pxe_append_params = nofb nomodeset vga=normal coreos.autologin ipa-insecure=1 image_cache_size = 204800 tftp_root=/var/lib/tftpboot/cephfs/ tftp_master_path=/var/lib/tftpboot/cephfs/master_images [dhcp] dhcp_provider = none \u521b\u5efa\u88f8\u91d1\u5c5e\u670d\u52a1\u6570\u636e\u5e93\u8868 ironic-dbsync --config-file /etc/ironic/ironic.conf create_schema \u91cd\u542fironic-api\u670d\u52a1 sudo systemctl restart openstack-ironic-api \u914d\u7f6eironic-conductor\u670d\u52a1 \u5982\u4e0b\u4e3aironic-conductor\u670d\u52a1\u81ea\u8eab\u7684\u6807\u51c6\u914d\u7f6e\uff0cironic-conductor\u670d\u52a1\u53ef\u4ee5\u4e0eironic-api\u670d\u52a1\u5206\u5e03\u4e8e\u4e0d\u540c\u8282\u70b9\uff0c\u672c\u6307\u5357\u4e2d\u5747\u90e8\u7f72\u4e0e\u63a7\u5236\u8282\u70b9\uff0c\u6240\u4ee5\u91cd\u590d\u7684\u914d\u7f6e\u9879\u53ef\u8df3\u8fc7\u3002 \u66ff\u6362\u4f7f\u7528conductor\u670d\u52a1\u6240\u5728host\u7684IP\u914d\u7f6emy_ip\uff1a [DEFAULT] # IP address of this host. If unset, will determine the IP # programmatically. If unable to do so, will use \"127.0.0.1\". # (string value) # my_ip=HOST_IP my_ip = 192.168.0.2 \u914d\u7f6e\u6570\u636e\u5e93\u7684\u4f4d\u7f6e\uff0cironic-conductor\u5e94\u8be5\u4f7f\u7528\u548cironic-api\u76f8\u540c\u7684\u914d\u7f6e\u3002\u66ff\u6362 IRONIC_DBPASS \u4e3a ironic \u7528\u6237\u7684\u5bc6\u7801\uff1a [database] # The SQLAlchemy connection string to use to connect to the # database. (string value) connection = mysql+pymysql://ironic:IRONIC_DBPASS@controller/ironic \u901a\u8fc7\u4ee5\u4e0b\u9009\u9879\u914d\u7f6eironic-api\u670d\u52a1\u4f7f\u7528RabbitMQ\u6d88\u606f\u4ee3\u7406\uff0cironic-conductor\u5e94\u8be5\u4f7f\u7528\u548cironic-api\u76f8\u540c\u7684\u914d\u7f6e\uff0c\u66ff\u6362 RABBIT_PASS \u4e3aRabbitMQ\u4e2dopenstack\u8d26\u6237\u7684\u5bc6\u7801\uff1a [DEFAULT] # A URL representing the messaging driver to use and its full # configuration. 
(string value) transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/ \u7528\u6237\u4e5f\u53ef\u81ea\u884c\u4f7f\u7528json-rpc\u65b9\u5f0f\u66ff\u6362rabbitmq \u914d\u7f6e\u51ed\u8bc1\u8bbf\u95ee\u5176\u4ed6OpenStack\u670d\u52a1 \u4e3a\u4e86\u4e0e\u5176\u4ed6OpenStack\u670d\u52a1\u8fdb\u884c\u901a\u4fe1\uff0c\u88f8\u91d1\u5c5e\u670d\u52a1\u5728\u8bf7\u6c42\u5176\u4ed6\u670d\u52a1\u65f6\u9700\u8981\u4f7f\u7528\u670d\u52a1\u7528\u6237\u4e0eOpenStack Identity\u670d\u52a1\u8fdb\u884c\u8ba4\u8bc1\u3002\u8fd9\u4e9b\u7528\u6237\u7684\u51ed\u636e\u5fc5\u987b\u5728\u4e0e\u76f8\u5e94\u670d\u52a1\u76f8\u5173\u7684\u6bcf\u4e2a\u914d\u7f6e\u6587\u4ef6\u4e2d\u8fdb\u884c\u914d\u7f6e\u3002 [neutron] - \u8bbf\u95eeOpenStack\u7f51\u7edc\u670d\u52a1 [glance] - \u8bbf\u95eeOpenStack\u955c\u50cf\u670d\u52a1 [swift] - \u8bbf\u95eeOpenStack\u5bf9\u8c61\u5b58\u50a8\u670d\u52a1 [cinder] - \u8bbf\u95eeOpenStack\u5757\u5b58\u50a8\u670d\u52a1 [inspector] - \u8bbf\u95eeOpenStack\u88f8\u91d1\u5c5eintrospection\u670d\u52a1 [service_catalog] - \u4e00\u4e2a\u7279\u6b8a\u9879\u7528\u4e8e\u4fdd\u5b58\u88f8\u91d1\u5c5e\u670d\u52a1\u4f7f\u7528\u7684\u51ed\u8bc1\uff0c\u8be5\u51ed\u8bc1\u7528\u4e8e\u53d1\u73b0\u6ce8\u518c\u5728OpenStack\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u76ee\u5f55\u4e2d\u7684\u81ea\u5df1\u7684API URL\u7aef\u70b9 \u7b80\u5355\u8d77\u89c1\uff0c\u53ef\u4ee5\u5bf9\u6240\u6709\u670d\u52a1\u4f7f\u7528\u540c\u4e00\u4e2a\u670d\u52a1\u7528\u6237\u3002\u4e3a\u4e86\u5411\u540e\u517c\u5bb9\uff0c\u8be5\u7528\u6237\u5e94\u8be5\u548cironic-api\u670d\u52a1\u7684[keystone_authtoken]\u6240\u914d\u7f6e\u7684\u4e3a\u540c\u4e00\u4e2a\u7528\u6237\u3002\u4f46\u8fd9\u4e0d\u662f\u5fc5\u987b\u7684\uff0c\u4e5f\u53ef\u4ee5\u4e3a\u6bcf\u4e2a\u670d\u52a1\u521b\u5efa\u5e76\u914d\u7f6e\u4e0d\u540c\u7684\u670d\u52a1\u7528\u6237\u3002 \u5728\u4e0b\u9762\u7684\u793a\u4f8b\u4e2d\uff0c\u7528\u6237\u8bbf\u95eeOpenStack\u7f51\u7edc\u670d\u52a1\u7684\u8eab\u4efd\u9a8c\u8bc1\u4fe1\u606f\u914d\u7f6e\u4e3a\uff1a \u7f51\u7edc\u670d\u52a1\u90e8\u7f72\u5728\u540d\u4e3aRegionOne\u7684\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u57df\u4e2d\uff0c\u4ec5\u5728\u670d\u52a1\u76ee\u5f55\u4e2d\u6ce8\u518c\u516c\u5171\u7aef\u70b9\u63a5\u53e3 \u8bf7\u6c42\u65f6\u4f7f\u7528\u7279\u5b9a\u7684CA SSL\u8bc1\u4e66\u8fdb\u884cHTTPS\u8fde\u63a5 \u4e0eironic-api\u670d\u52a1\u914d\u7f6e\u76f8\u540c\u7684\u670d\u52a1\u7528\u6237 \u52a8\u6001\u5bc6\u7801\u8ba4\u8bc1\u63d2\u4ef6\u57fa\u4e8e\u5176\u4ed6\u9009\u9879\u53d1\u73b0\u5408\u9002\u7684\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1API\u7248\u672c \u66ff\u6362IRONIC_PASS\u4e3aironic\u7528\u6237\u5bc6\u7801\u3002 [neutron] # Authentication type to load (string value) auth_type = password # Authentication URL (string value) auth_url=https://IDENTITY_IP:5000/ # Username (string value) username=ironic # User's password (string value) password=IRONIC_PASS # Project name to scope to (string value) project_name=service # Domain ID containing project (string value) project_domain_id=default # User's domain id (string value) user_domain_id=default # PEM encoded Certificate Authority to use when verifying # HTTPs connections. (string value) cafile=/opt/stack/data/ca-bundle.pem # The default region_name for endpoint URL discovery. (string # value) region_name = RegionOne # List of interfaces, in order of preference, for endpoint # URL. 
(list value) valid_interfaces=public # \u5176\u4ed6\u53c2\u8003\u914d\u7f6e [glance] endpoint_override = http://controller:9292 www_authenticate_uri = http://controller:5000 auth_url = http://controller:5000 auth_type = password username = ironic password = IRONIC_PASS project_domain_name = default user_domain_name = default region_name = RegionOne project_name = service [service_catalog] region_name = RegionOne project_domain_id = default user_domain_id = default project_name = service password = IRONIC_PASS username = ironic auth_url = http://controller:5000 auth_type = password \u9ed8\u8ba4\u60c5\u51b5\u4e0b\uff0c\u4e3a\u4e86\u4e0e\u5176\u4ed6\u670d\u52a1\u8fdb\u884c\u901a\u4fe1\uff0c\u88f8\u91d1\u5c5e\u670d\u52a1\u4f1a\u5c1d\u8bd5\u901a\u8fc7\u8eab\u4efd\u8ba4\u8bc1\u670d\u52a1\u7684\u670d\u52a1\u76ee\u5f55\u53d1\u73b0\u8be5\u670d\u52a1\u5408\u9002\u7684\u7aef\u70b9\u3002\u5982\u679c\u5e0c\u671b\u5bf9\u4e00\u4e2a\u7279\u5b9a\u670d\u52a1\u4f7f\u7528\u4e00\u4e2a\u4e0d\u540c\u7684\u7aef\u70b9\uff0c\u5219\u5728\u88f8\u91d1\u5c5e\u670d\u52a1\u7684\u914d\u7f6e\u6587\u4ef6\u4e2d\u901a\u8fc7endpoint_override\u9009\u9879\u8fdb\u884c\u6307\u5b9a\uff1a [neutron] endpoint_override = \u914d\u7f6e\u5141\u8bb8\u7684\u9a71\u52a8\u7a0b\u5e8f\u548c\u786c\u4ef6\u7c7b\u578b \u901a\u8fc7\u8bbe\u7f6eenabled_hardware_types\u8bbe\u7f6eironic-conductor\u670d\u52a1\u5141\u8bb8\u4f7f\u7528\u7684\u786c\u4ef6\u7c7b\u578b\uff1a [DEFAULT] enabled_hardware_types = ipmi \u914d\u7f6e\u786c\u4ef6\u63a5\u53e3\uff1a enabled_boot_interfaces = pxe enabled_deploy_interfaces = direct,iscsi enabled_inspect_interfaces = inspector enabled_management_interfaces = ipmitool enabled_power_interfaces = ipmitool \u914d\u7f6e\u63a5\u53e3\u9ed8\u8ba4\u503c\uff1a [DEFAULT] default_deploy_interface = direct default_network_interface = neutron \u5982\u679c\u542f\u7528\u4e86\u4efb\u4f55\u4f7f\u7528Direct deploy\u7684\u9a71\u52a8\uff0c\u5fc5\u987b\u5b89\u88c5\u548c\u914d\u7f6e\u955c\u50cf\u670d\u52a1\u7684Swift\u540e\u7aef\u3002Ceph\u5bf9\u8c61\u7f51\u5173(RADOS\u7f51\u5173)\u4e5f\u652f\u6301\u4f5c\u4e3a\u955c\u50cf\u670d\u52a1\u7684\u540e\u7aef\u3002 \u91cd\u542fironic-conductor\u670d\u52a1 sudo systemctl restart openstack-ironic-conductor \u914d\u7f6eironic-inspector\u670d\u52a1 \u5b89\u88c5\u7ec4\u4ef6 dnf install openstack-ironic-inspector \u521b\u5efa\u6570\u636e\u5e93 # mysql -u root -p MariaDB [(none)]> CREATE DATABASE ironic_inspector CHARACTER SET utf8; MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic_inspector.* TO 'ironic_inspector'@'localhost' \\ IDENTIFIED BY 'IRONIC_INSPECTOR_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic_inspector.* TO 'ironic_inspector'@'%' \\ IDENTIFIED BY 'IRONIC_INSPECTOR_DBPASS'; MariaDB [(none)]> exit Bye \u914d\u7f6e /etc/ironic-inspector/inspector.conf \u901a\u8fc7 connection \u9009\u9879\u914d\u7f6e\u6570\u636e\u5e93\u7684\u4f4d\u7f6e\uff0c\u5982\u4e0b\u6240\u793a\uff0c\u66ff\u6362 IRONIC_INSPECTOR_DBPASS \u4e3a ironic_inspector \u7528\u6237\u7684\u5bc6\u7801 [database] backend = sqlalchemy connection = mysql+pymysql://ironic_inspector:IRONIC_INSPECTOR_DBPASS@controller/ironic_inspector min_pool_size = 100 max_pool_size = 500 pool_timeout = 30 max_retries = 5 max_overflow = 200 db_retry_interval = 2 db_inc_retry_interval = True db_max_retry_interval = 2 db_max_retries = 5 \u914d\u7f6e\u6d88\u606f\u961f\u5217\u901a\u4fe1\u5730\u5740 [DEFAULT] transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/ \u8bbe\u7f6ekeystone\u8ba4\u8bc1 [DEFAULT] auth_strategy = keystone timeout = 900 
rootwrap_config = /etc/ironic-inspector/rootwrap.conf logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s % (user_identity)s] %(instance)s%(message)s log_dir = /var/log/ironic-inspector state_path = /var/lib/ironic-inspector use_stderr = False [ironic] api_endpoint = http://IRONIC_API_HOST_ADDRRESS:6385 auth_type = password auth_url = http://PUBLIC_IDENTITY_IP:5000 auth_strategy = keystone ironic_url = http://IRONIC_API_HOST_ADDRRESS:6385 os_region = RegionOne project_name = service project_domain_name = Default user_domain_name = Default username = IRONIC_SERVICE_USER_NAME password = IRONIC_SERVICE_USER_PASSWORD [keystone_authtoken] auth_type = password auth_url = http://controller:5000 www_authenticate_uri = http://controller:5000 project_domain_name = default user_domain_name = default project_name = service username = ironic_inspector password = IRONICPASSWD region_name = RegionOne memcache_servers = controller:11211 token_cache_time = 300 [processing] add_ports = active processing_hooks = $default_processing_hooks,local_link_connection,lldp_basic ramdisk_logs_dir = /var/log/ironic-inspector/ramdisk always_store_ramdisk_logs = true store_data =none power_off = false [pxe_filter] driver = iptables [capabilities] boot_mode=True \u914d\u7f6eironic inspector dnsmasq\u670d\u52a1 # \u914d\u7f6e\u6587\u4ef6\u5730\u5740\uff1a/etc/ironic-inspector/dnsmasq.conf port=0 interface=enp3s0 #\u66ff\u6362\u4e3a\u5b9e\u9645\u76d1\u542c\u7f51\u7edc\u63a5\u53e3 dhcp-range=192.168.0.40,192.168.0.50 #\u66ff\u6362\u4e3a\u5b9e\u9645dhcp\u5730\u5740\u8303\u56f4 bind-interfaces enable-tftp dhcp-match=set:efi,option:client-arch,7 dhcp-match=set:efi,option:client-arch,9 dhcp-match=aarch64, option:client-arch,11 dhcp-boot=tag:aarch64,grubaa64.efi dhcp-boot=tag:!aarch64,tag:efi,grubx64.efi dhcp-boot=tag:!aarch64,tag:!efi,pxelinux.0 tftp-root=/tftpboot #\u66ff\u6362\u4e3a\u5b9e\u9645tftpboot\u76ee\u5f55 log-facility=/var/log/dnsmasq.log \u5173\u95edironic provision\u7f51\u7edc\u5b50\u7f51\u7684dhcp openstack subnet set --no-dhcp 72426e89-f552-4dc4-9ac7-c4e131ce7f3c \u521d\u59cb\u5316ironic-inspector\u670d\u52a1\u7684\u6570\u636e\u5e93 ironic-inspector-dbsync --config-file /etc/ironic-inspector/inspector.conf upgrade \u542f\u52a8\u670d\u52a1 systemctl enable --now openstack-ironic-inspector.service systemctl enable --now openstack-ironic-inspector-dnsmasq.service \u914d\u7f6ehttpd\u670d\u52a1 \u521b\u5efaironic\u8981\u4f7f\u7528\u7684httpd\u7684root\u76ee\u5f55\u5e76\u8bbe\u7f6e\u5c5e\u4e3b\u5c5e\u7ec4\uff0c\u76ee\u5f55\u8def\u5f84\u8981\u548c/etc/ironic/ironic.conf\u4e2d[deploy]\u7ec4\u4e2dhttp_root \u914d\u7f6e\u9879\u6307\u5b9a\u7684\u8def\u5f84\u8981\u4e00\u81f4\u3002 mkdir -p /var/lib/ironic/httproot chown ironic.ironic /var/lib/ironic/httproot \u5b89\u88c5\u548c\u914d\u7f6ehttpd\u670d\u52a1 \u5b89\u88c5httpd\u670d\u52a1\uff0c\u5df2\u6709\u8bf7\u5ffd\u7565 dnf install httpd -y \u521b\u5efa/etc/httpd/conf.d/openstack-ironic-httpd.conf\u6587\u4ef6\uff0c\u5185\u5bb9\u5982\u4e0b\uff1a Listen 8080 ServerName ironic.openeuler.com ErrorLog \"/var/log/httpd/openstack-ironic-httpd-error_log\" CustomLog \"/var/log/httpd/openstack-ironic-httpd-access_log\" \"%h %l %u %t \\\"%r\\\" %>s %b\" DocumentRoot \"/var/lib/ironic/httproot\" Options Indexes FollowSymLinks Require all granted LogLevel warn AddDefaultCharset UTF-8 EnableSendfile on 
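The httpd directives above appear in flattened form; the sketch below shows one plausible way to assemble them into the vhost file. The `<VirtualHost>`/`<Directory>` wrappers are an assumption of this sketch rather than part of the original file, and `DocumentRoot` must match the `http_root` path configured under `[deploy]` in /etc/ironic/ironic.conf.

```bash
# Hypothetical layout for /etc/httpd/conf.d/openstack-ironic-httpd.conf,
# assembled from the directives listed above. The <VirtualHost>/<Directory>
# wrappers are an assumption of this sketch, not taken from the original file.
cat > /etc/httpd/conf.d/openstack-ironic-httpd.conf <<'EOF'
Listen 8080

<VirtualHost *:8080>
    ServerName ironic.openeuler.com
    ErrorLog "/var/log/httpd/openstack-ironic-httpd-error_log"
    CustomLog "/var/log/httpd/openstack-ironic-httpd-access_log" "%h %l %u %t \"%r\" %>s %b"
    DocumentRoot "/var/lib/ironic/httproot"
    <Directory "/var/lib/ironic/httproot">
        Options Indexes FollowSymLinks
        Require all granted
    </Directory>
    LogLevel warn
    AddDefaultCharset UTF-8
    EnableSendfile on
</VirtualHost>
EOF
```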
\u6ce8\u610f\u76d1\u542c\u7684\u7aef\u53e3\u8981\u548c/etc/ironic/ironic.conf\u91cc[deploy]\u9009\u9879\u4e2dhttp_url\u914d\u7f6e\u9879\u4e2d\u6307\u5b9a\u7684\u7aef\u53e3\u4e00\u81f4\u3002 \u91cd\u542fhttpd\u670d\u52a1\u3002 systemctl restart httpd deploy ramdisk\u955c\u50cf\u4e0b\u8f7d\u6216\u5236\u4f5c \u90e8\u7f72\u4e00\u4e2a\u88f8\u673a\u8282\u70b9\u603b\u5171\u9700\u8981\u4e24\u7ec4\u955c\u50cf\uff1adeploy ramdisk images\u548cuser images\u3002Deploy ramdisk images\u4e0a\u8fd0\u884c\u6709ironic-python-agent(IPA)\u670d\u52a1\uff0cIronic\u901a\u8fc7\u5b83\u8fdb\u884c\u88f8\u673a\u8282\u70b9\u7684\u73af\u5883\u51c6\u5907\u3002User images\u662f\u6700\u7ec8\u88ab\u5b89\u88c5\u88f8\u673a\u8282\u70b9\u4e0a\uff0c\u4f9b\u7528\u6237\u4f7f\u7528\u7684\u955c\u50cf\u3002 ramdisk\u955c\u50cf\u652f\u6301\u901a\u8fc7ironic-python-agent-builder\u6216disk-image-builder\u5de5\u5177\u5236\u4f5c\u3002\u7528\u6237\u4e5f\u53ef\u4ee5\u81ea\u884c\u9009\u62e9\u5176\u4ed6\u5de5\u5177\u5236\u4f5c\u3002\u82e5\u4f7f\u7528\u539f\u751f\u5de5\u5177\uff0c\u5219\u9700\u8981\u5b89\u88c5\u5bf9\u5e94\u7684\u8f6f\u4ef6\u5305\u3002 \u5177\u4f53\u7684\u4f7f\u7528\u65b9\u6cd5\u53ef\u4ee5\u53c2\u8003 \u5b98\u65b9\u6587\u6863 \uff0c\u540c\u65f6\u5b98\u65b9\u4e5f\u6709\u63d0\u4f9b\u5236\u4f5c\u597d\u7684deploy\u955c\u50cf\uff0c\u53ef\u5c1d\u8bd5\u4e0b\u8f7d\u3002 \u4e0b\u6587\u4ecb\u7ecd\u901a\u8fc7ironic-python-agent-builder\u6784\u5efaironic\u4f7f\u7528\u7684deploy\u955c\u50cf\u7684\u5b8c\u6574\u8fc7\u7a0b\u3002 \u5b89\u88c5 ironic-python-agent-builder dnf install python3-ironic-python-agent-builder \u6216 pip3 install ironic-python-agent-builder dnf install qemu-img git \u5236\u4f5c\u955c\u50cf \u57fa\u672c\u7528\u6cd5\uff1a usage: ironic-python-agent-builder [-h] [-r RELEASE] [-o OUTPUT] [-e ELEMENT] [-b BRANCH] [-v] [--lzma] [--extra-args EXTRA_ARGS] [--elements-path ELEMENTS_PATH] distribution positional arguments: distribution Distribution to use options: -h, --help show this help message and exit -r RELEASE, --release RELEASE Distribution release to use -o OUTPUT, --output OUTPUT Output base file name -e ELEMENT, --element ELEMENT Additional DIB element to use -b BRANCH, --branch BRANCH If set, override the branch that is used for ironic-python-agent and requirements -v, --verbose Enable verbose logging in diskimage-builder --lzma Use lzma compression for smaller images --extra-args EXTRA_ARGS Extra arguments to pass to diskimage-builder --elements-path ELEMENTS_PATH Path(s) to custom DIB elements separated by a colon \u64cd\u4f5c\u5b9e\u4f8b\uff1a # -o\u9009\u9879\u6307\u5b9a\u751f\u6210\u7684\u955c\u50cf\u540d # ubuntu\u6307\u5b9a\u751f\u6210ubuntu\u7cfb\u7edf\u7684\u955c\u50cf ironic-python-agent-builder -o my-ubuntu-ipa ubuntu \u53ef\u901a\u8fc7\u8bbe\u7f6e ARCH \u73af\u5883\u53d8\u91cf\uff08\u9ed8\u8ba4\u4e3aamd64\uff09\u6307\u5b9a\u6240\u6784\u5efa\u955c\u50cf\u7684\u67b6\u6784\u3002\u5982\u679c\u662f arm \u67b6\u6784\uff0c\u9700\u8981\u6dfb\u52a0\uff1a export ARCH=aarch64 \u5141\u8bb8ssh\u767b\u5f55 \u521d\u59cb\u5316\u73af\u5883\u53d8\u91cf,\u8bbe\u7f6e\u7528\u6237\u540d\u3001\u5bc6\u7801\uff0c\u542f\u7528 sodo \u6743\u9650\uff1b\u5e76\u6dfb\u52a0 -e \u9009\u9879\u4f7f\u7528\u76f8\u5e94\u7684DIB\u5143\u7d20\u3002\u5236\u4f5c\u955c\u50cf\u64cd\u4f5c\u5982\u4e0b\uff1a export DIB_DEV_USER_USERNAME=ipa \\ export DIB_DEV_USER_PWDLESS_SUDO=yes \\ export DIB_DEV_USER_PASSWORD='123' ironic-python-agent-builder -o my-ssh-ubuntu-ipa -e selinux-permissive -e devuser ubuntu \u6307\u5b9a\u4ee3\u7801\u4ed3\u5e93 
\u521d\u59cb\u5316\u5bf9\u5e94\u7684\u73af\u5883\u53d8\u91cf\uff0c\u7136\u540e\u5236\u4f5c\u955c\u50cf\uff1a # \u76f4\u63a5\u4ecegerrit\u4e0aclone\u4ee3\u7801 DIB_REPOLOCATION_ironic_python_agent=https://opendev.org/openstack/ironic-python-agent DIB_REPOREF_ironic_python_agent=stable/2023.1 # \u6307\u5b9a\u672c\u5730\u4ed3\u5e93\u53ca\u5206\u652f DIB_REPOLOCATION_ironic_python_agent=/home/user/path/to/repo DIB_REPOREF_ironic_python_agent=my-test-branch ironic-python-agent-builder ubuntu \u53c2\u8003\uff1a source-repositories \u3002 \u6ce8\u610f \u539f\u751f\u7684openstack\u91cc\u7684pxe\u914d\u7f6e\u6587\u4ef6\u7684\u6a21\u7248\u4e0d\u652f\u6301arm64\u67b6\u6784\uff0c\u9700\u8981\u81ea\u5df1\u5bf9\u539f\u751fopenstack\u4ee3\u7801\u8fdb\u884c\u4fee\u6539\uff1a \u5728W\u7248\u4e2d\uff0c\u793e\u533a\u7684ironic\u4ecd\u7136\u4e0d\u652f\u6301arm64\u4f4d\u7684uefi pxe\u542f\u52a8\uff0c\u8868\u73b0\u4e3a\u751f\u6210\u7684grub.cfg\u6587\u4ef6(\u4e00\u822c\u4f4d\u4e8e/tftpboot/\u4e0b)\u683c\u5f0f\u4e0d\u5bf9\u800c\u5bfc\u81f4pxe\u542f\u52a8\u5931\u8d25\u3002 \u751f\u6210\u7684\u9519\u8bef\u914d\u7f6e\u6587\u4ef6\uff1a \u5982\u4e0a\u56fe\u6240\u793a\uff0carm\u67b6\u6784\u91cc\u5bfb\u627evmlinux\u548cramdisk\u955c\u50cf\u7684\u547d\u4ee4\u5206\u522b\u662flinux\u548cinitrd\uff0c\u4e0a\u56fe\u6240\u793a\u7684\u6807\u7ea2\u547d\u4ee4\u662fx86\u67b6\u6784\u4e0b\u7684uefi pxe\u542f\u52a8\u3002 \u9700\u8981\u7528\u6237\u5bf9\u751f\u6210grub.cfg\u7684\u4ee3\u7801\u903b\u8f91\u81ea\u884c\u4fee\u6539\u3002 ironic\u5411ipa\u53d1\u9001\u67e5\u8be2\u547d\u4ee4\u6267\u884c\u72b6\u6001\u8bf7\u6c42\u7684tls\u62a5\u9519\uff1a \u5f53\u524d\u7248\u672c\u7684ipa\u548cironic\u9ed8\u8ba4\u90fd\u4f1a\u5f00\u542ftls\u8ba4\u8bc1\u7684\u65b9\u5f0f\u5411\u5bf9\u65b9\u53d1\u9001\u8bf7\u6c42\uff0c\u8ddf\u636e\u5b98\u7f51\u7684\u8bf4\u660e\u8fdb\u884c\u5173\u95ed\u5373\u53ef\u3002 \u4fee\u6539ironic\u914d\u7f6e\u6587\u4ef6(/etc/ironic/ironic.conf)\u4e0b\u9762\u7684\u914d\u7f6e\u4e2d\u6dfb\u52a0ipa-insecure=1\uff1a [agent] verify_ca = False [pxe] pxe_append_params = nofb nomodeset vga=normal coreos.autologin ipa-insecure=1 ramdisk\u955c\u50cf\u4e2d\u6dfb\u52a0ipa\u914d\u7f6e\u6587\u4ef6/etc/ironic_python_agent/ironic_python_agent.conf\u5e76\u914d\u7f6etls\u7684\u914d\u7f6e\u5982\u4e0b\uff1a /etc/ironic_python_agent/ironic_python_agent.conf (\u9700\u8981\u63d0\u524d\u521b\u5efa/etc/ ironic_python_agent\u76ee\u5f55\uff09 [DEFAULT] enable_auto_tls = False \u8bbe\u7f6e\u6743\u9650\uff1a chown -R ipa.ipa /etc/ironic_python_agent/ ramdisk\u955c\u50cf\u4e2d\u4fee\u6539ipa\u670d\u52a1\u7684\u670d\u52a1\u542f\u52a8\u6587\u4ef6\uff0c\u6dfb\u52a0\u914d\u7f6e\u6587\u4ef6\u9009\u9879 \u7f16\u8f91/usr/lib/systemd/system/ironic-python-agent.service\u6587\u4ef6 [Unit] Description=Ironic Python Agent After=network-online.target [Service] ExecStartPre=/sbin/modprobe vfat ExecStart=/usr/local/bin/ironic-python-agent --config-file /etc/ ironic_python_agent/ironic_python_agent.conf Restart=always RestartSec=30s [Install] WantedBy=multi-user.target","title":"Ironic"},{"location":"install/openEuler-25.03/OpenStack-antelope/#trove","text":"Trove\u662fOpenStack\u7684\u6570\u636e\u5e93\u670d\u52a1\uff0c\u5982\u679c\u7528\u6237\u4f7f\u7528OpenStack\u63d0\u4f9b\u7684\u6570\u636e\u5e93\u670d\u52a1\u5219\u63a8\u8350\u4f7f\u7528\u8be5\u7ec4\u4ef6\u3002\u5426\u5219\uff0c\u53ef\u4ee5\u4e0d\u7528\u5b89\u88c5\u3002 Controller\u8282\u70b9 \u521b\u5efa\u6570\u636e\u5e93\u3002 
\u6570\u636e\u5e93\u670d\u52a1\u5728\u6570\u636e\u5e93\u4e2d\u5b58\u50a8\u4fe1\u606f\uff0c\u521b\u5efa\u4e00\u4e2atrove\u7528\u6237\u53ef\u4ee5\u8bbf\u95ee\u7684trove\u6570\u636e\u5e93\uff0c\u66ff\u6362TROVE_DBPASS\u4e3a\u5408\u9002\u7684\u5bc6\u7801\u3002 CREATE DATABASE trove CHARACTER SET utf8; GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'localhost' IDENTIFIED BY 'TROVE_DBPASS'; GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'%' IDENTIFIED BY 'TROVE_DBPASS'; \u521b\u5efa\u670d\u52a1\u51ed\u8bc1\u4ee5\u53caAPI\u7aef\u70b9\u3002 \u521b\u5efa\u670d\u52a1\u51ed\u8bc1\u3002 # \u521b\u5efatrove\u7528\u6237 openstack user create --domain default --password-prompt trove # \u6dfb\u52a0admin\u89d2\u8272 openstack role add --project service --user trove admin # \u521b\u5efadatabase\u670d\u52a1 openstack service create --name trove --description \"Database service\" database \u521b\u5efaAPI\u7aef\u70b9\u3002 openstack endpoint create --region RegionOne database public http://controller:8779/v1.0/%\\(tenant_id\\)s openstack endpoint create --region RegionOne database internal http://controller:8779/v1.0/%\\(tenant_id\\)s openstack endpoint create --region RegionOne database admin http://controller:8779/v1.0/%\\(tenant_id\\)s \u5b89\u88c5Trove\u3002 dnf install openstack-trove python-troveclient \u4fee\u6539\u914d\u7f6e\u6587\u4ef6\u3002 \u7f16\u8f91/etc/trove/trove.conf\u3002 [DEFAULT] bind_host=192.168.0.2 log_dir = /var/log/trove network_driver = trove.network.neutron.NeutronDriver network_label_regex=.* management_security_groups = nova_keypair = trove-mgmt default_datastore = mysql taskmanager_manager = trove.taskmanager.manager.Manager trove_api_workers = 5 transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/ reboot_time_out = 300 usage_timeout = 900 agent_call_high_timeout = 1200 use_syslog = False debug = True [database] connection = mysql+pymysql://trove:TROVE_DBPASS@controller/trove [keystone_authtoken] auth_url = http://controller:5000/v3/ auth_type = password project_domain_name = Default project_name = service user_domain_name = Default password = trove username = TROVE_PASS [service_credentials] auth_url = http://controller:5000/v3/ region_name = RegionOne project_name = service project_domain_name = Default user_domain_name = Default username = trove password = TROVE_PASS [mariadb] tcp_ports = 3306,4444,4567,4568 [mysql] tcp_ports = 3306 [postgresql] tcp_ports = 5432 \u89e3\u91ca\uff1a [Default] \u5206\u7ec4\u4e2d bind_host \u914d\u7f6e\u4e3aTrove\u63a7\u5236\u8282\u70b9\u7684IP\u3002\\ transport_url \u4e3a RabbitMQ \u8fde\u63a5\u4fe1\u606f\uff0c RABBIT_PASS \u66ff\u6362\u4e3aRabbitMQ\u7684\u5bc6\u7801\u3002\\ [database] \u5206\u7ec4\u4e2d\u7684 connection \u4e3a\u524d\u9762\u5728mysql\u4e2d\u4e3aTrove\u521b\u5efa\u7684\u6570\u636e\u5e93\u4fe1\u606f\u3002\\ Trove\u7684\u7528\u6237\u4fe1\u606f\u4e2d TROVE_PASSWORD \u66ff\u6362\u4e3a\u5b9e\u9645trove\u7528\u6237\u7684\u5bc6\u7801\u3002 \u7f16\u8f91/etc/trove/trove-guestagent.conf\u3002 [DEFAULT] log_file = trove-guestagent.log log_dir = /var/log/trove/ ignore_users = os_admin control_exchange = trove transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/ rpc_backend = rabbit command_process_timeout = 60 use_syslog = False debug = True [service_credentials] auth_url = http://controller:5000/v3/ region_name = RegionOne project_name = service password = TROVE_PASS project_domain_name = Default user_domain_name = Default username = trove [mysql] docker_image = your-registry/your-repo/mysql backup_docker_image = 
your-registry/your-repo/db-backup-mysql:1.1.0 \u89e3\u91ca\uff1a guestagent \u662ftrove\u4e2d\u4e00\u4e2a\u72ec\u7acb\u7ec4\u4ef6\uff0c\u9700\u8981\u9884\u5148\u5185\u7f6e\u5230Trove\u901a\u8fc7Nova\u521b\u5efa\u7684\u865a\u62df\u673a\u955c\u50cf\u4e2d\uff0c\u5728\u521b\u5efa\u597d\u6570\u636e\u5e93\u5b9e\u4f8b\u540e\uff0c\u4f1a\u8d77guestagent\u8fdb\u7a0b\uff0c\u8d1f\u8d23\u901a\u8fc7\u6d88\u606f\u961f\u5217\uff08RabbitMQ\uff09\u5411Trove\u4e0a\u62a5\u5fc3\u8df3\uff0c\u56e0\u6b64\u9700\u8981\u914d\u7f6eRabbitMQ\u7684\u7528\u6237\u548c\u5bc6\u7801\u4fe1\u606f\u3002\\ transport_url \u4e3a RabbitMQ \u8fde\u63a5\u4fe1\u606f\uff0c RABBIT_PASS \u66ff\u6362\u4e3aRabbitMQ\u7684\u5bc6\u7801\u3002\\ Trove\u7684\u7528\u6237\u4fe1\u606f\u4e2d TROVE_PASSWORD \u66ff\u6362\u4e3a\u5b9e\u9645trove\u7528\u6237\u7684\u5bc6\u7801\u3002\\ \u4eceVictoria\u7248\u5f00\u59cb\uff0cTrove\u4f7f\u7528\u4e00\u4e2a\u7edf\u4e00\u7684\u955c\u50cf\u6765\u8dd1\u4e0d\u540c\u7c7b\u578b\u7684\u6570\u636e\u5e93\uff0c\u6570\u636e\u5e93\u670d\u52a1\u8fd0\u884c\u5728Guest\u865a\u62df\u673a\u7684Docker\u5bb9\u5668\u4e2d\u3002 \u6570\u636e\u5e93\u540c\u6b65\u3002 su -s /bin/sh -c \"trove-manage db_sync\" trove \u5b8c\u6210\u5b89\u88c5\u3002 # \u914d\u7f6e\u670d\u52a1\u81ea\u542f systemctl enable openstack-trove-api.service openstack-trove-taskmanager.service \\ openstack-trove-conductor.service # \u542f\u52a8\u670d\u52a1 systemctl start openstack-trove-api.service openstack-trove-taskmanager.service \\ openstack-trove-conductor.service","title":"Trove"},{"location":"install/openEuler-25.03/OpenStack-antelope/#swift","text":"Swift \u63d0\u4f9b\u4e86\u5f39\u6027\u53ef\u4f38\u7f29\u3001\u9ad8\u53ef\u7528\u7684\u5206\u5e03\u5f0f\u5bf9\u8c61\u5b58\u50a8\u670d\u52a1\uff0c\u9002\u5408\u5b58\u50a8\u5927\u89c4\u6a21\u975e\u7ed3\u6784\u5316\u6570\u636e\u3002 Controller\u8282\u70b9 \u521b\u5efa\u670d\u52a1\u51ed\u8bc1\u4ee5\u53caAPI\u7aef\u70b9\u3002 \u521b\u5efa\u670d\u52a1\u51ed\u8bc1\u3002 # \u521b\u5efaswift\u7528\u6237 openstack user create --domain default --password-prompt swift # \u6dfb\u52a0admin\u89d2\u8272 openstack role add --project service --user swift admin # \u521b\u5efa\u5bf9\u8c61\u5b58\u50a8\u670d\u52a1 openstack service create --name swift --description \"OpenStack Object Storage\" object-store \u521b\u5efaAPI\u7aef\u70b9\u3002 openstack endpoint create --region RegionOne object-store public http://controller:8080/v1/AUTH_%\\(project_id\\)s openstack endpoint create --region RegionOne object-store internal http://controller:8080/v1/AUTH_%\\(project_id\\)s openstack endpoint create --region RegionOne object-store admin http://controller:8080/v1 \u5b89\u88c5Swift\u3002 dnf install openstack-swift-proxy python3-swiftclient python3-keystoneclient \\ python3-keystonemiddleware memcached \u914d\u7f6eproxy-server\u3002 Swift RPM\u5305\u91cc\u5df2\u7ecf\u5305\u542b\u4e86\u4e00\u4e2a\u57fa\u672c\u53ef\u7528\u7684proxy-server.conf\uff0c\u53ea\u9700\u8981\u624b\u52a8\u4fee\u6539\u5176\u4e2d\u7684ip\u548cSWIFT_PASS\u5373\u53ef\u3002 vim /etc/swift/proxy-server.conf [filter:authtoken] paste.filter_factory = keystonemiddleware.auth_token:filter_factory www_authenticate_uri = http://controller:5000 auth_url = http://controller:5000 memcached_servers = controller:11211 auth_type = password project_domain_id = default user_domain_id = default project_name = service username = swift password = SWIFT_PASS delay_auth_decision = True service_token_roles_required = True Storage\u8282\u70b9 \u5b89\u88c5\u652f\u6301\u7684\u7a0b\u5e8f\u5305\u3002 
dnf install openstack-swift-account openstack-swift-container openstack-swift-object dnf install xfsprogs rsync \u5c06\u8bbe\u5907/dev/sdb\u548c/dev/sdc\u683c\u5f0f\u5316\u4e3aXFS\u3002 mkfs.xfs /dev/sdb mkfs.xfs /dev/sdc \u521b\u5efa\u6302\u8f7d\u70b9\u76ee\u5f55\u7ed3\u6784\u3002 mkdir -p /srv/node/sdb mkdir -p /srv/node/sdc \u627e\u5230\u65b0\u5206\u533a\u7684UUID\u3002 blkid \u7f16\u8f91/etc/fstab\u6587\u4ef6\u5e76\u5c06\u4ee5\u4e0b\u5185\u5bb9\u6dfb\u52a0\u5230\u5176\u4e2d\u3002 UUID=\"\" /srv/node/sdb xfs noatime 0 2 UUID=\"\" /srv/node/sdc xfs noatime 0 2 \u6302\u8f7d\u8bbe\u5907\u3002 mount /srv/node/sdb mount /srv/node/sdc \u6ce8\u610f \u5982\u679c\u7528\u6237\u4e0d\u9700\u8981\u5bb9\u707e\u529f\u80fd\uff0c\u4ee5\u4e0a\u6b65\u9aa4\u53ea\u9700\u8981\u521b\u5efa\u4e00\u4e2a\u8bbe\u5907\u5373\u53ef\uff0c\u540c\u65f6\u53ef\u4ee5\u8df3\u8fc7\u4e0b\u9762\u7684rsync\u914d\u7f6e\u3002 \uff08\u53ef\u9009\uff09\u521b\u5efa\u6216\u7f16\u8f91/etc/rsyncd.conf\u6587\u4ef6\u4ee5\u5305\u542b\u4ee5\u4e0b\u5185\u5bb9: [DEFAULT] uid = swift gid = swift log file = /var/log/rsyncd.log pid file = /var/run/rsyncd.pid address = MANAGEMENT_INTERFACE_IP_ADDRESS [account] max connections = 2 path = /srv/node/ read only = False lock file = /var/lock/account.lock [container] max connections = 2 path = /srv/node/ read only = False lock file = /var/lock/container.lock [object] max connections = 2 path = /srv/node/ read only = False lock file = /var/lock/object.lock \u66ff\u6362MANAGEMENT_INTERFACE_IP_ADDRESS\u4e3a\u5b58\u50a8\u8282\u70b9\u4e0a\u7ba1\u7406\u7f51\u7edc\u7684IP\u5730\u5740 \u542f\u52a8rsyncd\u670d\u52a1\u5e76\u914d\u7f6e\u5b83\u5728\u7cfb\u7edf\u542f\u52a8\u65f6\u542f\u52a8: systemctl enable rsyncd.service systemctl start rsyncd.service \u914d\u7f6e\u5b58\u50a8\u8282\u70b9\u3002 \u7f16\u8f91/etc/swift\u76ee\u5f55\u7684account-server.conf\u3001container-server.conf\u548cobject-server.conf\u6587\u4ef6\uff0c\u66ff\u6362bind_ip\u4e3a\u5b58\u50a8\u8282\u70b9\u4e0a\u7ba1\u7406\u7f51\u7edc\u7684IP\u5730\u5740\u3002 [DEFAULT] bind_ip = 192.168.0.4 \u786e\u4fdd\u6302\u8f7d\u70b9\u76ee\u5f55\u7ed3\u6784\u7684\u6b63\u786e\u6240\u6709\u6743\u3002 chown -R swift:swift /srv/node \u521b\u5efarecon\u76ee\u5f55\u5e76\u786e\u4fdd\u5176\u62e5\u6709\u6b63\u786e\u7684\u6240\u6709\u6743\u3002 mkdir -p /var/cache/swift chown -R root:swift /var/cache/swift chmod -R 775 /var/cache/swift Controller\u8282\u70b9\u521b\u5efa\u5e76\u5206\u53d1\u73af \u521b\u5efa\u8d26\u53f7\u73af\u3002 \u5207\u6362\u5230 /etc/swift \u76ee\u5f55\u3002 cd /etc/swift \u521b\u5efa\u57fa\u7840 account.builder \u6587\u4ef6\u3002 swift-ring-builder account.builder create 10 1 1 \u5c06\u6bcf\u4e2a\u5b58\u50a8\u8282\u70b9\u6dfb\u52a0\u5230\u73af\u4e2d\u3002 swift-ring-builder account.builder add --region 1 --zone 1 \\ --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS \\ --port 6202 --device DEVICE_NAME \\ --weight 100 \u66ff\u6362STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS\u4e3a\u5b58\u50a8\u8282\u70b9\u4e0a\u7ba1\u7406\u7f51\u7edc\u7684IP\u5730\u5740\u3002\\ \u66ff\u6362DEVICE_NAME\u4e3a\u540c\u4e00\u5b58\u50a8\u8282\u70b9\u4e0a\u7684\u5b58\u50a8\u8bbe\u5907\u540d\u79f0\u3002 \u6ce8\u610f \u5bf9\u6bcf\u4e2a\u5b58\u50a8\u8282\u70b9\u4e0a\u7684\u6bcf\u4e2a\u5b58\u50a8\u8bbe\u5907\u91cd\u590d\u6b64\u547d\u4ee4 \u9a8c\u8bc1\u8d26\u53f7\u73af\u5185\u5bb9\u3002 swift-ring-builder account.builder \u91cd\u65b0\u5e73\u8861\u8d26\u53f7\u73af\u3002 swift-ring-builder account.builder rebalance \u521b\u5efa\u5bb9\u5668\u73af\u3002 \u5207\u6362\u5230 /etc/swift 
\u76ee\u5f55\u3002 \u521b\u5efa\u57fa\u7840 container.builder \u6587\u4ef6\u3002 swift-ring-builder container.builder create 10 1 1 \u5c06\u6bcf\u4e2a\u5b58\u50a8\u8282\u70b9\u6dfb\u52a0\u5230\u73af\u4e2d\u3002 swift-ring-builder container.builder add --region 1 --zone 1 \\ --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6201 --device DEVICE_NAME \\ --weight 100 \u66ff\u6362STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS\u4e3a\u5b58\u50a8\u8282\u70b9\u4e0a\u7ba1\u7406\u7f51\u7edc\u7684IP\u5730\u5740\u3002\\ \u66ff\u6362DEVICE_NAME\u4e3a\u540c\u4e00\u5b58\u50a8\u8282\u70b9\u4e0a\u7684\u5b58\u50a8\u8bbe\u5907\u540d\u79f0\u3002 \u6ce8\u610f \u5bf9\u6bcf\u4e2a\u5b58\u50a8\u8282\u70b9\u4e0a\u7684\u6bcf\u4e2a\u5b58\u50a8\u8bbe\u5907\u91cd\u590d\u6b64\u547d\u4ee4 \u9a8c\u8bc1\u5bb9\u5668\u73af\u5185\u5bb9\u3002 swift-ring-builder container.builder \u91cd\u65b0\u5e73\u8861\u5bb9\u5668\u73af\u3002 swift-ring-builder container.builder rebalance \u521b\u5efa\u5bf9\u8c61\u73af\u3002 \u5207\u6362\u5230 /etc/swift \u76ee\u5f55\u3002 \u521b\u5efa\u57fa\u7840 object.builder \u6587\u4ef6\u3002 swift-ring-builder object.builder create 10 1 1 \u5c06\u6bcf\u4e2a\u5b58\u50a8\u8282\u70b9\u6dfb\u52a0\u5230\u73af\u4e2d\u3002 swift-ring-builder object.builder add --region 1 --zone 1 \\ --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS \\ --port 6200 --device DEVICE_NAME \\ --weight 100 \u66ff\u6362STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS\u4e3a\u5b58\u50a8\u8282\u70b9\u4e0a\u7ba1\u7406\u7f51\u7edc\u7684IP\u5730\u5740\u3002\\ \u66ff\u6362DEVICE_NAME\u4e3a\u540c\u4e00\u5b58\u50a8\u8282\u70b9\u4e0a\u7684\u5b58\u50a8\u8bbe\u5907\u540d\u79f0\u3002 \u6ce8\u610f \u5bf9\u6bcf\u4e2a\u5b58\u50a8\u8282\u70b9\u4e0a\u7684\u6bcf\u4e2a\u5b58\u50a8\u8bbe\u5907\u91cd\u590d\u6b64\u547d\u4ee4 \u9a8c\u8bc1\u5bf9\u8c61\u73af\u5185\u5bb9\u3002 swift-ring-builder object.builder \u91cd\u65b0\u5e73\u8861\u5bf9\u8c61\u73af\u3002 swift-ring-builder object.builder rebalance \u5206\u53d1\u73af\u914d\u7f6e\u6587\u4ef6\u3002 \u5c06 account.ring.gz \uff0c container.ring.gz \u4ee5\u53ca object.ring.gz \u6587\u4ef6\u590d\u5236\u5230\u6bcf\u4e2a\u5b58\u50a8\u8282\u70b9\u548c\u8fd0\u884c\u4ee3\u7406\u670d\u52a1\u7684\u4efb\u4f55\u5176\u4ed6\u8282\u70b9\u4e0a\u7684 /etc/swift \u76ee\u5f55\u3002 \u7f16\u8f91\u914d\u7f6e\u6587\u4ef6/etc/swift/swift.conf\u3002 [swift-hash] swift_hash_path_suffix = test-hash swift_hash_path_prefix = test-hash [storage-policy:0] name = Policy-0 default = yes \u7528\u552f\u4e00\u503c\u66ff\u6362 test-hash \u5c06swift.conf\u6587\u4ef6\u590d\u5236\u5230/etc/swift\u6bcf\u4e2a\u5b58\u50a8\u8282\u70b9\u548c\u8fd0\u884c\u4ee3\u7406\u670d\u52a1\u7684\u4efb\u4f55\u5176\u4ed6\u8282\u70b9\u4e0a\u7684\u76ee\u5f55\u3002 \u5728\u6240\u6709\u8282\u70b9\u4e0a\uff0c\u786e\u4fdd\u914d\u7f6e\u76ee\u5f55\u7684\u6b63\u786e\u6240\u6709\u6743\u3002 chown -R root:swift /etc/swift \u5b8c\u6210\u5b89\u88c5 \u5728\u63a7\u5236\u8282\u70b9\u548c\u8fd0\u884c\u4ee3\u7406\u670d\u52a1\u7684\u4efb\u4f55\u5176\u4ed6\u8282\u70b9\u4e0a\uff0c\u542f\u52a8\u5bf9\u8c61\u5b58\u50a8\u4ee3\u7406\u670d\u52a1\u53ca\u5176\u4f9d\u8d56\u9879\uff0c\u5e76\u5c06\u5b83\u4eec\u914d\u7f6e\u4e3a\u5728\u7cfb\u7edf\u542f\u52a8\u65f6\u542f\u52a8\u3002 systemctl enable openstack-swift-proxy.service memcached.service systemctl start openstack-swift-proxy.service memcached.service \u5728\u5b58\u50a8\u8282\u70b9\u4e0a\uff0c\u542f\u52a8\u5bf9\u8c61\u5b58\u50a8\u670d\u52a1\u5e76\u5c06\u5b83\u4eec\u914d\u7f6e\u4e3a\u5728\u7cfb\u7edf\u542f\u52a8\u65f6\u542f\u52a8\u3002 systemctl 
enable openstack-swift-account.service \\ openstack-swift-account-auditor.service \\ openstack-swift-account-reaper.service \\ openstack-swift-account-replicator.service \\ openstack-swift-container.service \\ openstack-swift-container-auditor.service \\ openstack-swift-container-replicator.service \\ openstack-swift-container-updater.service \\ openstack-swift-object.service \\ openstack-swift-object-auditor.service \\ openstack-swift-object-replicator.service \\ openstack-swift-object-updater.service systemctl start openstack-swift-account.service \\ openstack-swift-account-auditor.service \\ openstack-swift-account-reaper.service \\ openstack-swift-account-replicator.service \\ openstack-swift-container.service \\ openstack-swift-container-auditor.service \\ openstack-swift-container-replicator.service \\ openstack-swift-container-updater.service \\ openstack-swift-object.service \\ openstack-swift-object-auditor.service \\ openstack-swift-object-replicator.service \\ openstack-swift-object-updater.service","title":"Swift"},{"location":"install/openEuler-25.03/OpenStack-antelope/#cyborg","text":"Cyborg\u4e3aOpenStack\u63d0\u4f9b\u52a0\u901f\u5668\u8bbe\u5907\u7684\u652f\u6301\uff0c\u5305\u62ec GPU, FPGA, ASIC, NP, SoCs, NVMe/NOF SSDs, ODP, DPDK/SPDK\u7b49\u7b49\u3002 Controller\u8282\u70b9 \u521d\u59cb\u5316\u5bf9\u5e94\u6570\u636e\u5e93 mysql -u root -p MariaDB [(none)]> CREATE DATABASE cyborg; MariaDB [(none)]> GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'localhost' IDENTIFIED BY 'CYBORG_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'%' IDENTIFIED BY 'CYBORG_DBPASS'; MariaDB [(none)]> exit; \u521b\u5efa\u7528\u6237\u548c\u670d\u52a1\uff0c\u5e76\u8bb0\u4f4f\u521b\u5efacybory\u7528\u6237\u65f6\u8f93\u5165\u7684\u5bc6\u7801\uff0c\u7528\u4e8e\u914d\u7f6eCYBORG_PASS source ~/.admin-openrc openstack user create --domain default --password-prompt cyborg openstack role add --project service --user cyborg admin openstack service create --name cyborg --description \"Acceleration Service\" accelerator \u4f7f\u7528uwsgi\u90e8\u7f72Cyborg api\u670d\u52a1 openstack endpoint create --region RegionOne accelerator public http://controller/accelerator/v2 openstack endpoint create --region RegionOne accelerator internal http://controller/accelerator/v2 openstack endpoint create --region RegionOne accelerator admin http://controller/accelerator/v2 \u5b89\u88c5Cyborg dnf install openstack-cyborg \u914d\u7f6eCyborg \u4fee\u6539 /etc/cyborg/cyborg.conf [DEFAULT] transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/ use_syslog = False state_path = /var/lib/cyborg debug = True [api] host_ip = 0.0.0.0 [database] connection = mysql+pymysql://cyborg:CYBORG_DBPASS@controller/cyborg [service_catalog] cafile = /opt/stack/data/ca-bundle.pem project_domain_id = default user_domain_id = default project_name = service password = CYBORG_PASS username = cyborg auth_url = http://controller:5000/v3/ auth_type = password [placement] project_domain_name = Default project_name = service user_domain_name = Default password = password username = PLACEMENT_PASS auth_url = http://controller:5000/v3/ auth_type = password auth_section = keystone_authtoken [nova] project_domain_name = Default project_name = service user_domain_name = Default password = NOVA_PASS username = nova auth_url = http://controller:5000/v3/ auth_type = password auth_section = keystone_authtoken [keystone_authtoken] memcached_servers = localhost:11211 signing_dir = /var/cache/cyborg/api cafile = 
/opt/stack/data/ca-bundle.pem project_domain_name = Default project_name = service user_domain_name = Default password = CYBORG_PASS username = cyborg auth_url = http://controller:5000/v3/ auth_type = password \u540c\u6b65\u6570\u636e\u5e93\u8868\u683c cyborg-dbsync --config-file /etc/cyborg/cyborg.conf upgrade \u542f\u52a8Cyborg\u670d\u52a1 systemctl enable openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent systemctl start openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent","title":"Cyborg"},{"location":"install/openEuler-25.03/OpenStack-antelope/#aodh","text":"Aodh\u53ef\u4ee5\u6839\u636e\u7531Ceilometer\u6216\u8005Gnocchi\u6536\u96c6\u7684\u76d1\u63a7\u6570\u636e\u521b\u5efa\u544a\u8b66\uff0c\u5e76\u8bbe\u7f6e\u89e6\u53d1\u89c4\u5219\u3002 Controller\u8282\u70b9 \u521b\u5efa\u6570\u636e\u5e93\u3002 CREATE DATABASE aodh; GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'localhost' IDENTIFIED BY 'AODH_DBPASS'; GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'%' IDENTIFIED BY 'AODH_DBPASS'; \u521b\u5efa\u670d\u52a1\u51ed\u8bc1\u4ee5\u53caAPI\u7aef\u70b9\u3002 \u521b\u5efa\u670d\u52a1\u51ed\u8bc1\u3002 openstack user create --domain default --password-prompt aodh openstack role add --project service --user aodh admin openstack service create --name aodh --description \"Telemetry\" alarming \u521b\u5efaAPI\u7aef\u70b9\u3002 openstack endpoint create --region RegionOne alarming public http://controller:8042 openstack endpoint create --region RegionOne alarming internal http://controller:8042 openstack endpoint create --region RegionOne alarming admin http://controller:8042 \u5b89\u88c5Aodh\u3002 dnf install openstack-aodh-api openstack-aodh-evaluator \\ openstack-aodh-notifier openstack-aodh-listener \\ openstack-aodh-expirer python3-aodhclient \u4fee\u6539\u914d\u7f6e\u6587\u4ef6\u3002 vim /etc/aodh/aodh.conf [database] connection = mysql+pymysql://aodh:AODH_DBPASS@controller/aodh [DEFAULT] transport_url = rabbit://openstack:RABBIT_PASS@controller auth_strategy = keystone [keystone_authtoken] www_authenticate_uri = http://controller:5000 auth_url = http://controller:5000 memcached_servers = controller:11211 auth_type = password project_domain_id = default user_domain_id = default project_name = service username = aodh password = AODH_PASS [service_credentials] auth_type = password auth_url = http://controller:5000/v3 project_domain_id = default user_domain_id = default project_name = service username = aodh password = AODH_PASS interface = internalURL region_name = RegionOne \u540c\u6b65\u6570\u636e\u5e93\u3002 aodh-dbsync \u5b8c\u6210\u5b89\u88c5\u3002 # \u914d\u7f6e\u670d\u52a1\u81ea\u542f systemctl enable openstack-aodh-api.service openstack-aodh-evaluator.service \\ openstack-aodh-notifier.service openstack-aodh-listener.service # \u542f\u52a8\u670d\u52a1 systemctl start openstack-aodh-api.service openstack-aodh-evaluator.service \\ openstack-aodh-notifier.service openstack-aodh-listener.service","title":"Aodh"},{"location":"install/openEuler-25.03/OpenStack-antelope/#gnocchi","text":"Gnocchi\u662f\u4e00\u4e2a\u5f00\u6e90\u7684\u65f6\u95f4\u5e8f\u5217\u6570\u636e\u5e93\uff0c\u53ef\u4ee5\u5bf9\u63a5Ceilometer\u3002 Controller\u8282\u70b9 \u521b\u5efa\u6570\u636e\u5e93\u3002 CREATE DATABASE gnocchi; GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'localhost' IDENTIFIED BY 'GNOCCHI_DBPASS'; GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'%' IDENTIFIED BY 'GNOCCHI_DBPASS'; \u521b\u5efa\u670d\u52a1\u51ed\u8bc1\u4ee5\u53caAPI\u7aef\u70b9\u3002 
\u521b\u5efa\u670d\u52a1\u51ed\u8bc1\u3002 openstack user create --domain default --password-prompt gnocchi openstack role add --project service --user gnocchi admin openstack service create --name gnocchi --description \"Metric Service\" metric \u521b\u5efaAPI\u7aef\u70b9\u3002 openstack endpoint create --region RegionOne metric public http://controller:8041 openstack endpoint create --region RegionOne metric internal http://controller:8041 openstack endpoint create --region RegionOne metric admin http://controller:8041 \u5b89\u88c5Gnocchi\u3002 dnf install openstack-gnocchi-api openstack-gnocchi-metricd python3-gnocchiclient \u4fee\u6539\u914d\u7f6e\u6587\u4ef6\u3002 vim /etc/gnocchi/gnocchi.conf [api] auth_mode = keystone port = 8041 uwsgi_mode = http-socket [keystone_authtoken] auth_type = password auth_url = http://controller:5000/v3 project_domain_name = Default user_domain_name = Default project_name = service username = gnocchi password = GNOCCHI_PASS interface = internalURL region_name = RegionOne [indexer] url = mysql+pymysql://gnocchi:GNOCCHI_DBPASS@controller/gnocchi [storage] # coordination_url is not required but specifying one will improve # performance with better workload division across workers. # coordination_url = redis://controller:6379 file_basepath = /var/lib/gnocchi driver = file \u540c\u6b65\u6570\u636e\u5e93\u3002 gnocchi-upgrade \u5b8c\u6210\u5b89\u88c5\u3002 # \u914d\u7f6e\u670d\u52a1\u81ea\u542f systemctl enable openstack-gnocchi-api.service openstack-gnocchi-metricd.service # \u542f\u52a8\u670d\u52a1 systemctl start openstack-gnocchi-api.service openstack-gnocchi-metricd.service","title":"Gnocchi"},{"location":"install/openEuler-25.03/OpenStack-antelope/#ceilometer","text":"Ceilometer\u662fOpenStack\u4e2d\u8d1f\u8d23\u6570\u636e\u6536\u96c6\u7684\u670d\u52a1\u3002 Controller\u8282\u70b9 \u521b\u5efa\u670d\u52a1\u51ed\u8bc1\u3002 openstack user create --domain default --password-prompt ceilometer openstack role add --project service --user ceilometer admin openstack service create --name ceilometer --description \"Telemetry\" metering \u5b89\u88c5Ceilometer\u8f6f\u4ef6\u5305\u3002 dnf install openstack-ceilometer-notification openstack-ceilometer-central \u7f16\u8f91\u914d\u7f6e\u6587\u4ef6/etc/ceilometer/pipeline.yaml\u3002 publishers: # set address of Gnocchi # + filter out Gnocchi-related activity meters (Swift driver) # + set default archive policy - gnocchi://?filter_project=service&archive_policy=low \u7f16\u8f91\u914d\u7f6e\u6587\u4ef6/etc/ceilometer/ceilometer.conf\u3002 [DEFAULT] transport_url = rabbit://openstack:RABBIT_PASS@controller [service_credentials] auth_type = password auth_url = http://controller:5000/v3 project_domain_id = default user_domain_id = default project_name = service username = ceilometer password = CEILOMETER_PASS interface = internalURL region_name = RegionOne \u6570\u636e\u5e93\u540c\u6b65\u3002 ceilometer-upgrade \u5b8c\u6210\u63a7\u5236\u8282\u70b9Ceilometer\u5b89\u88c5\u3002 # \u914d\u7f6e\u670d\u52a1\u81ea\u542f systemctl enable openstack-ceilometer-notification.service openstack-ceilometer-central.service # \u542f\u52a8\u670d\u52a1 systemctl start openstack-ceilometer-notification.service openstack-ceilometer-central.service Compute\u8282\u70b9 \u5b89\u88c5Ceilometer\u8f6f\u4ef6\u5305\u3002 dnf install openstack-ceilometer-compute dnf install openstack-ceilometer-ipmi # \u53ef\u9009 \u7f16\u8f91\u914d\u7f6e\u6587\u4ef6/etc/ceilometer/ceilometer.conf\u3002 [DEFAULT] transport_url = rabbit://openstack:RABBIT_PASS@controller 
[service_credentials] auth_url = http://controller:5000 project_domain_id = default user_domain_id = default auth_type = password username = ceilometer project_name = service password = CEILOMETER_PASS interface = internalURL region_name = RegionOne \u7f16\u8f91\u914d\u7f6e\u6587\u4ef6/etc/nova/nova.conf\u3002 [DEFAULT] instance_usage_audit = True instance_usage_audit_period = hour [notifications] notify_on_state_change = vm_and_task_state [oslo_messaging_notifications] driver = messagingv2 \u5b8c\u6210\u5b89\u88c5\u3002 systemctl enable openstack-ceilometer-compute.service systemctl start openstack-ceilometer-compute.service systemctl enable openstack-ceilometer-ipmi.service # \u53ef\u9009 systemctl start openstack-ceilometer-ipmi.service # \u53ef\u9009 # \u91cd\u542fnova-compute\u670d\u52a1 systemctl restart openstack-nova-compute.service","title":"Ceilometer"},{"location":"install/openEuler-25.03/OpenStack-antelope/#heat","text":"Heat\u662f OpenStack \u81ea\u52a8\u7f16\u6392\u670d\u52a1\uff0c\u57fa\u4e8e\u63cf\u8ff0\u6027\u7684\u6a21\u677f\u6765\u7f16\u6392\u590d\u5408\u4e91\u5e94\u7528\uff0c\u4e5f\u79f0\u4e3a Orchestration Service \u3002Heat \u7684\u5404\u670d\u52a1\u4e00\u822c\u5b89\u88c5\u5728 Controller \u8282\u70b9\u4e0a\u3002 Controller\u8282\u70b9 \u521b\u5efa heat \u6570\u636e\u5e93\uff0c\u5e76\u6388\u4e88 heat \u6570\u636e\u5e93\u6b63\u786e\u7684\u8bbf\u95ee\u6743\u9650\uff0c\u66ff\u6362 HEAT_DBPASS \u4e3a\u5408\u9002\u7684\u5bc6\u7801 mysql -u root -p MariaDB [(none)]> CREATE DATABASE heat; MariaDB [(none)]> GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'localhost' IDENTIFIED BY 'HEAT_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'%' IDENTIFIED BY 'HEAT_DBPASS'; MariaDB [(none)]> exit; \u521b\u5efa\u670d\u52a1\u51ed\u8bc1\uff0c\u521b\u5efa heat \u7528\u6237\uff0c\u5e76\u4e3a\u5176\u589e\u52a0 admin \u89d2\u8272 source ~/.admin-openrc openstack user create --domain default --password-prompt heat openstack role add --project service --user heat admin \u521b\u5efa heat \u548c heat-cfn \u670d\u52a1\u53ca\u5176\u5bf9\u5e94\u7684API\u7aef\u70b9 openstack service create --name heat --description \"Orchestration\" orchestration openstack service create --name heat-cfn --description \"Orchestration\" cloudformation openstack endpoint create --region RegionOne orchestration public http://controller:8004/v1/%\\(tenant_id\\)s openstack endpoint create --region RegionOne orchestration internal http://controller:8004/v1/%\\(tenant_id\\)s openstack endpoint create --region RegionOne orchestration admin http://controller:8004/v1/%\\(tenant_id\\)s openstack endpoint create --region RegionOne cloudformation public http://controller:8000/v1 openstack endpoint create --region RegionOne cloudformation internal http://controller:8000/v1 openstack endpoint create --region RegionOne cloudformation admin http://controller:8000/v1 \u521b\u5efastack\u7ba1\u7406\u7684\u989d\u5916\u4fe1\u606f \u521b\u5efa heat domain openstack domain create --description \"Stack projects and users\" heat \u5728 heat domain\u4e0b\u521b\u5efa heat_domain_admin \u7528\u6237\uff0c\u5e76\u8bb0\u4e0b\u8f93\u5165\u7684\u5bc6\u7801\uff0c\u7528\u4e8e\u914d\u7f6e\u4e0b\u9762\u7684 HEAT_DOMAIN_PASS openstack user create --domain heat --password-prompt heat_domain_admin \u4e3a heat_domain_admin \u7528\u6237\u589e\u52a0 admin \u89d2\u8272 openstack role add --domain heat --user-domain heat --user heat_domain_admin admin \u521b\u5efa heat_stack_owner \u89d2\u8272 openstack role create heat_stack_owner \u521b\u5efa 
heat_stack_user \u89d2\u8272 openstack role create heat_stack_user \u5b89\u88c5\u8f6f\u4ef6\u5305 dnf install openstack-heat-api openstack-heat-api-cfn openstack-heat-engine \u4fee\u6539\u914d\u7f6e\u6587\u4ef6 /etc/heat/heat.conf [DEFAULT] transport_url = rabbit://openstack:RABBIT_PASS@controller heat_metadata_server_url = http://controller:8000 heat_waitcondition_server_url = http://controller:8000/v1/waitcondition stack_domain_admin = heat_domain_admin stack_domain_admin_password = HEAT_DOMAIN_PASS stack_user_domain_name = heat [database] connection = mysql+pymysql://heat:HEAT_DBPASS@controller/heat [keystone_authtoken] www_authenticate_uri = http://controller:5000 auth_url = http://controller:5000 memcached_servers = controller:11211 auth_type = password project_domain_name = default user_domain_name = default project_name = service username = heat password = HEAT_PASS [trustee] auth_type = password auth_url = http://controller:5000 username = heat password = HEAT_PASS user_domain_name = default [clients_keystone] auth_uri = http://controller:5000 \u521d\u59cb\u5316 heat \u6570\u636e\u5e93\u8868 su -s /bin/sh -c \"heat-manage db_sync\" heat \u542f\u52a8\u670d\u52a1 systemctl enable openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service systemctl start openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service","title":"Heat"},{"location":"install/openEuler-25.03/OpenStack-antelope/#tempest","text":"Tempest\u662fOpenStack\u7684\u96c6\u6210\u6d4b\u8bd5\u670d\u52a1\uff0c\u5982\u679c\u7528\u6237\u9700\u8981\u5168\u9762\u81ea\u52a8\u5316\u6d4b\u8bd5\u5df2\u5b89\u88c5\u7684OpenStack\u73af\u5883\u7684\u529f\u80fd,\u5219\u63a8\u8350\u4f7f\u7528\u8be5\u7ec4\u4ef6\u3002\u5426\u5219\uff0c\u53ef\u4ee5\u4e0d\u7528\u5b89\u88c5\u3002 Controller\u8282\u70b9 \uff1a \u5b89\u88c5Tempest dnf install openstack-tempest \u521d\u59cb\u5316\u76ee\u5f55 tempest init mytest \u4fee\u6539\u914d\u7f6e\u6587\u4ef6\u3002 cd mytest vi etc/tempest.conf tempest.conf\u4e2d\u9700\u8981\u914d\u7f6e\u5f53\u524dOpenStack\u73af\u5883\u7684\u4fe1\u606f\uff0c\u5177\u4f53\u5185\u5bb9\u53ef\u4ee5\u53c2\u8003 \u5b98\u65b9\u793a\u4f8b \u6267\u884c\u6d4b\u8bd5 tempest run \u5b89\u88c5tempest\u6269\u5c55\uff08\u53ef\u9009\uff09 OpenStack\u5404\u4e2a\u670d\u52a1\u672c\u8eab\u4e5f\u63d0\u4f9b\u4e86\u4e00\u4e9btempest\u6d4b\u8bd5\u5305\uff0c\u7528\u6237\u53ef\u4ee5\u5b89\u88c5\u8fd9\u4e9b\u5305\u6765\u4e30\u5bcctempest\u7684\u6d4b\u8bd5\u5185\u5bb9\u3002\u5728Antelope\u4e2d\uff0c\u6211\u4eec\u63d0\u4f9b\u4e86Cinder\u3001Glance\u3001Keystone\u3001Ironic\u3001Trove\u7684\u6269\u5c55\u6d4b\u8bd5\uff0c\u7528\u6237\u53ef\u4ee5\u6267\u884c\u5982\u4e0b\u547d\u4ee4\u8fdb\u884c\u5b89\u88c5\u4f7f\u7528\uff1a dnf install python3-cinder-tempest-plugin python3-glance-tempest-plugin python3-ironic-tempest-plugin python3-keystone-tempest-plugin python3-trove-tempest-plugin","title":"Tempest"},{"location":"install/openEuler-25.03/OpenStack-antelope/#openstack-sigoos","text":"oos (openEuler OpenStack SIG)\u662fOpenStack SIG\u63d0\u4f9b\u7684\u547d\u4ee4\u884c\u5de5\u5177\u3002\u5176\u4e2d oos env \u7cfb\u5217\u547d\u4ee4\u63d0\u4f9b\u4e86\u4e00\u952e\u90e8\u7f72OpenStack \uff08 all in one \u6216\u4e09\u8282\u70b9 cluster \uff09\u7684ansible\u811a\u672c\uff0c\u7528\u6237\u53ef\u4ee5\u4f7f\u7528\u8be5\u811a\u672c\u5feb\u901f\u90e8\u7f72\u4e00\u5957\u57fa\u4e8e openEuler RPM \u7684 OpenStack \u73af\u5883\u3002 oos 
\u5de5\u5177\u652f\u6301\u5bf9\u63a5\u4e91provider\uff08\u76ee\u524d\u4ec5\u652f\u6301\u534e\u4e3a\u4e91provider\uff09\u548c\u4e3b\u673a\u7eb3\u7ba1\u4e24\u79cd\u65b9\u5f0f\u6765\u90e8\u7f72 OpenStack \u73af\u5883\uff0c\u4e0b\u9762\u4ee5\u5bf9\u63a5\u534e\u4e3a\u4e91\u90e8\u7f72\u4e00\u5957 all in one \u7684OpenStack\u73af\u5883\u4e3a\u4f8b\u8bf4\u660e oos \u5de5\u5177\u7684\u4f7f\u7528\u65b9\u6cd5\u3002 \u5b89\u88c5 oos \u5de5\u5177 yum install openstack-sig-tool \u914d\u7f6e\u5bf9\u63a5\u534e\u4e3a\u4e91provider\u7684\u4fe1\u606f \u6253\u5f00 /usr/local/etc/oos/oos.conf \u6587\u4ef6\uff0c\u4fee\u6539\u914d\u7f6e\u4e3a\u60a8\u62e5\u6709\u7684\u534e\u4e3a\u4e91\u8d44\u6e90\u4fe1\u606f\uff0cAK/SK\u662f\u7528\u6237\u7684\u534e\u4e3a\u4e91\u767b\u5f55\u5bc6\u94a5\uff0c\u5176\u4ed6\u914d\u7f6e\u4fdd\u6301\u9ed8\u8ba4\u5373\u53ef\uff08\u9ed8\u8ba4\u4f7f\u7528\u65b0\u52a0\u5761region\uff09\uff0c\u9700\u8981\u63d0\u524d\u5728\u4e91\u4e0a\u521b\u5efa\u5bf9\u5e94\u7684\u8d44\u6e90\uff0c\u5305\u62ec\uff1a \u4e00\u4e2a\u5b89\u5168\u7ec4\uff0c\u540d\u5b57\u9ed8\u8ba4\u662f oos \u4e00\u4e2aopenEuler\u955c\u50cf\uff0c\u540d\u79f0\u683c\u5f0f\u662fopenEuler-%(release)s-%(arch)s\uff0c\u4f8b\u5982 openEuler-25.03-arm64 \u4e00\u4e2aVPC\uff0c\u540d\u79f0\u662f oos_vpc \u8be5VPC\u4e0b\u9762\u4e24\u4e2a\u5b50\u7f51\uff0c\u540d\u79f0\u662f oos_subnet1 \u3001 oos_subnet2 [huaweicloud] ak = sk = region = ap-southeast-3 root_volume_size = 100 data_volume_size = 100 security_group_name = oos image_format = openEuler-%%(release)s-%%(arch)s vpc_name = oos_vpc subnet1_name = oos_subnet1 subnet2_name = oos_subnet2 \u914d\u7f6e OpenStack \u73af\u5883\u4fe1\u606f \u6253\u5f00 /usr/local/etc/oos/oos.conf \u6587\u4ef6\uff0c\u6839\u636e\u5f53\u524d\u673a\u5668\u73af\u5883\u548c\u9700\u6c42\u4fee\u6539\u914d\u7f6e\u3002\u5185\u5bb9\u5982\u4e0b\uff1a [environment] mysql_root_password = root mysql_project_password = root rabbitmq_password = root project_identity_password = root enabled_service = keystone,neutron,cinder,placement,nova,glance,horizon,aodh,ceilometer,cyborg,gnocchi,kolla,heat,swift,trove,tempest neutron_provider_interface_name = br-ex default_ext_subnet_range = 10.100.100.0/24 default_ext_subnet_gateway = 10.100.100.1 neutron_dataplane_interface_name = eth1 cinder_block_device = vdb swift_storage_devices = vdc swift_hash_path_suffix = ash swift_hash_path_prefix = has glance_api_workers = 2 cinder_api_workers = 2 nova_api_workers = 2 nova_metadata_api_workers = 2 nova_conductor_workers = 2 nova_scheduler_workers = 2 neutron_api_workers = 2 horizon_allowed_host = * kolla_openeuler_plugin = false \u5173\u952e\u914d\u7f6e \u914d\u7f6e\u9879 \u89e3\u91ca enabled_service \u5b89\u88c5\u670d\u52a1\u5217\u8868\uff0c\u6839\u636e\u7528\u6237\u9700\u6c42\u81ea\u884c\u5220\u51cf neutron_provider_interface_name neutron L3\u7f51\u6865\u540d\u79f0 default_ext_subnet_range neutron\u79c1\u7f51IP\u6bb5 default_ext_subnet_gateway neutron\u79c1\u7f51gateway neutron_dataplane_interface_name neutron\u4f7f\u7528\u7684\u7f51\u5361\uff0c\u63a8\u8350\u4f7f\u7528\u4e00\u5f20\u65b0\u7684\u7f51\u5361\uff0c\u4ee5\u514d\u548c\u73b0\u6709\u7f51\u5361\u51b2\u7a81\uff0c\u9632\u6b62all in one\u4e3b\u673a\u65ad\u8fde\u7684\u60c5\u51b5 cinder_block_device cinder\u4f7f\u7528\u7684\u5377\u8bbe\u5907\u540d swift_storage_devices swift\u4f7f\u7528\u7684\u5377\u8bbe\u5907\u540d kolla_openeuler_plugin \u662f\u5426\u542f\u7528kolla plugin\u3002\u8bbe\u7f6e\u4e3aTrue\uff0ckolla\u5c06\u652f\u6301\u90e8\u7f72openEuler\u5bb9\u5668(\u53ea\u5728openEuler 
4. Create an openEuler 25.03 x86_64 virtual machine on Huawei Cloud to host the all-in-one OpenStack deployment:

```shell
# sshpass is used during `oos env create` to set up password-less access to the target VM
dnf install sshpass
oos env create -r 25.03 -f small -a x86 -n test-oos all_in_one
```

   See `oos env create --help` for the full set of parameters.

5. Deploy the all-in-one OpenStack environment:

```shell
oos env setup test-oos -r antelope
```

   See `oos env setup --help` for the full set of parameters.

6. Initialize the tempest environment. If you want to run tempest against this environment, run `oos env init`, which automatically creates the OpenStack resources that tempest needs:

```shell
oos env init test-oos
```

7. Run the tempest tests. Either let oos run them automatically:

```shell
oos env test test-oos
```

   or log in to the target node, enter the `mytest` directory under the root directory, and run `tempest run` manually.

When OpenStack is deployed onto managed hosts instead of a cloud provider, the overall flow is the same as the Huawei Cloud case above: steps 1, 3, 5 and 6 are unchanged, step 2 (configuring the Huawei Cloud provider) is skipped, and step 4 becomes taking over the host. The managed machine must have:

- at least one NIC dedicated to oos, whose name matches the configuration (see `neutron_dataplane_interface_name`);
- at least one disk dedicated to oos, whose name matches the configuration (see `cinder_block_device`);
- one additional disk if the Swift service is to be deployed, whose name matches the configuration (see `swift_storage_devices`).

```shell
# sshpass is used during `oos env create` to set up password-less access to the target host
dnf install sshpass
oos env manage -r 25.03 -i TARGET_MACHINE_IP -p TARGET_MACHINE_PASSWD -n test-oos
```

Replace `TARGET_MACHINE_IP` with the IP address of the target machine and `TARGET_MACHINE_PASSWD` with its password. See `oos env manage --help` for the full set of parameters.
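Putting the Huawei Cloud path together, the all-in-one flow described above boils down to a short sequence of commands (shown here only as a recap of the steps explained earlier; all parameters carry the same meaning as in those steps):

```shell
yum install openstack-sig-tool   # step 1: install the oos tool
dnf install sshpass              # used by `oos env create` for password-less access

# steps 2 and 3: fill in /usr/local/etc/oos/oos.conf as shown above, then:
oos env create -r 25.03 -f small -a x86 -n test-oos all_in_one   # step 4: create the VM
oos env setup test-oos -r antelope                               # step 5: deploy OpenStack
oos env init test-oos                                            # step 6: create tempest resources
oos env test test-oos                                            # step 7: run the tempest suite
```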
## OpenStack Security Guide

This document is a translation of the upstream OpenStack Security Guide.
### Abstract

This book provides best-practice and conceptual information about securing an OpenStack cloud. It was last updated during the Train release cycle and documents the OpenStack Train, Stein, and Rocky releases; it may not apply to EOL releases (for example, Newton). We recommend reading the guide yourself when planning the security measures for your OpenStack cloud; it is provided for reference only. The OpenStack security team is based on voluntary contributions from the OpenStack community. You can contact the security community directly in the #OpenStack-Security channel on OFTC IRC, or by sending mail to the OpenStack-Discussion mailing list with the [Security] prefix in the subject line.

### Contents

- Conventions: Notices; Command prompts
- Introduction: Acknowledgements; Why and how we wrote this book; Introduction to OpenStack; Security boundaries and threats; Selecting supporting software; System documentation (system documentation requirements)
- Management: Continuous systems management; Integrity life-cycle; Management interfaces
- Secure communication: Introduction to TLS and SSL; TLS proxies and HTTP services; Secure reference architectures; API endpoints; API endpoint configuration recommendations
- Identity: Authentication; Authentication methods; Authorization; Policies; Tokens; Domains; Federated keystone; Checklist
- Dashboard: Domain names, dashboard upgrades, and basic web server configuration; HTTPS, HSTS, XSS, and SSRF; Front-end caching and session back ends; Static media; Passwords; Secret key; Cookies; Cross Origin Resource Sharing (CORS); Debug; Checklist
- Compute: Hypervisor selection; Hardening the virtualization layers; Hardening Compute deployments; Vulnerability awareness; How to select virtual consoles; Checklist
- Block Storage: Volume wiping; Checklist
- Image Storage: Checklist
- Shared File Systems: Introduction; Network and security models; Security services; Share access control; Share type access control; Policies; Checklist
- Networking: Networking architecture; Networking services; Networking services security best practices; Securing OpenStack Networking services; Checklist
- Object Storage: Network security; General service security; Securing storage services; Securing proxy services; Object Storage authentication; Other notable items
- Secrets management: Summary of existing technologies; Related OpenStack projects; Use cases; Key management service; Key management interface; Frequently asked questions; Checklist
- Messaging: Message security
- Data processing: Introduction to Data processing; Deployment; Configuration and hardening
- Databases: Database back-end considerations; Database access control; Database transport security
- Tenant data privacy: Data privacy concerns; Data encryption; Key management
- Instance security management: Security services for instances
- Monitoring and logging
- Forensics and incident response
- Compliance: Compliance overview; Understanding the audit process; Compliance activities; Certification and compliance statements
- Privacy
- Security review: Architecture page guidance
- Security checklists
- Appendix: Community support; Glossary

### Conventions

The OpenStack documentation uses several typesetting conventions.

#### Notices

- **Note** – a comment with additional information that explains a part of the text.
- **Important** – something you must be aware of before proceeding.
- **Tip** – an extra but helpful piece of practical advice.
- **Caution** – helpful information that prevents the user from making mistakes.
- **Warning** – critical information about the risk of data loss or security issues.

#### Command prompts

```console
$ command
```

Any user, including the root user, can run commands that are prefixed with the `$` prompt.

```console
# command
```

The root user must run commands that are prefixed with the `#` prompt. You can also prefix these commands with `sudo`, if it is available, to run them.
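As a concrete illustration of this convention, using two commands that appear earlier in this document: the first may be issued by any user, while the second must be run as root (or via `sudo`):

```console
$ tempest run
# dnf install openstack-tempest
```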
### Introduction

The OpenStack Security Guide is the result of a five-day collaboration by many people. The document aims to provide best-practice guidance for deploying a secure OpenStack cloud. It is intended to reflect the current security state of the OpenStack community and to provide a framework for decisions where specific security controls cannot be listed because of complexity or other environment-specific details. This chapter covers: acknowledgements; why and how we wrote this book; objectives; how we wrote it; an introduction to OpenStack; cloud types; an overview of OpenStack services; security boundaries and threats; security domains; bridging security domains; threat classification, actors and attack vectors; selecting supporting software; team expertise; product or project maturity; Common Criteria; and hardware concerns.

#### Acknowledgements

The OpenStack security group would like to thank the organizations that contributed to the publication of this book.

#### Why and how we wrote this book

As OpenStack has gained adoption and matured as a product, security has become a top priority. The OpenStack security group recognized the need for a comprehensive, authoritative security guide. The OpenStack Security Guide sets out security best practices, guidelines, and recommendations for improving the security of an OpenStack deployment. The authors bring their expertise from deploying and securing OpenStack in a wide variety of environments.

The guide complements the OpenStack Operations Guide and can be used to harden an existing OpenStack deployment or to evaluate the security controls of an OpenStack cloud provider.

#### Objectives

- Identify the security domains in OpenStack
- Provide guidance for securing an OpenStack deployment
- Highlight security concerns in OpenStack today and potential mitigations
- Discuss upcoming security features
- Provide a community-driven facility for knowledge capture and dissemination

#### How we wrote this book

As with the OpenStack Operations Guide, we followed the book-sprint approach. The book-sprint process allows rapid development and production of a large body of written work. Coordinators from the OpenStack security group re-enlisted Adam Hyde as the facilitator. The project was formally announced at the OpenStack summit in Portland, Oregon.

Since several key members of the group were located nearby, the team gathered in Annapolis, Maryland. It was a remarkable collaboration between public-sector intelligence-community members, Silicon Valley startups, and some large, well-known technology companies. The sprint took place during the last week of June 2013, and the first edition was completed in five days.

The team included:

**Bryan D. Payne, Nebula**
Dr. Payne is the Director of Security Research at Nebula and a co-founder of the OpenStack Security Group (OSSG). Before joining Nebula, he worked at Sandia National Labs, the National Security Agency, BAE Systems, and IBM Research. He graduated with a Ph.D. in Computer Science from the Georgia Tech College of Computing, specializing in systems security. Bryan has been the editor and lead for the OpenStack Security Guide, responsible for its continued growth in the two years since it was written.

**Robert Clark, HP**

Robert Clark is the Lead Security Architect for HP Cloud Services and a co-founder of the OpenStack Security Group (OSSG). Prior to being recruited by HP, he worked in the UK intelligence community. Robert has a strong background in threat modeling, security architecture, and virtualization technology. Robert has a master's degree in Software Engineering from the University of Wales.

**Keith Basil, Red Hat**

Keith Basil is a Principal Product Manager for Red Hat OpenStack and is focused on Red Hat's OpenStack product management, development, and strategy. Within the U.S. public sector, Basil brings experience in designing authorized, secure, high-performance cloud architectures for federal civilian agencies and contractors.

**Cody Bunch, Rackspace**

Cody Bunch is a private cloud architect at Rackspace. Cody has co-authored an updated edition of The OpenStack Cookbook as well as books on VMware automation.

**Malini Bhandaru, Intel**

Malini Bhandaru is a security architect at Intel. She has a varied background, having worked on platform features and performance at Intel, speech products at Nuance, remote monitoring and management at ComBrio, and web commerce at Verizon. She has a Ph.D. in Artificial Intelligence from the University of Massachusetts, Amherst.

**Gregg Tally, Johns Hopkins University Applied Physics Laboratory**

Gregg Tally is the Chief Engineer of the Asymmetric Operations group within the JHU/APL Cyber Systems department. He works primarily in systems security engineering. Previously, he worked at SPARTA, McAfee, and Trusted Information Systems on cyber security research projects.

**Eric Lopez, VMware**
Eric Lopez is a Senior Solutions Architect in VMware's Networking and Security Business Unit, where he helps customers implement OpenStack and VMware NSX (formerly known as Nicira's Network Virtualization Platform). Before joining VMware through the company's acquisition of Nicira, he worked at Q1 Labs, Symantec, Vontu, and Brightmail. He has a bachelor's degree in Electrical Engineering/Computer Science and Nuclear Engineering from U.C. Berkeley and an MBA from the University of San Francisco.

**Shawn Wells, Red Hat**

Shawn Wells is the Director of Innovation Programs at Red Hat, focused on improving the processes for adopting, promoting, and governing open-source technology within the U.S. government. Additionally, Shawn is the upstream maintainer of the SCAP Security Guide project, which develops virtualization and operating-system hardening policy together with the U.S. military, the NSA, and DISA. Shawn was formerly a civilian at the NSA, where he developed SIGINT collection systems using large distributed computing infrastructures.

**Ben de Bont, HP**

Ben de Bont is the chief security officer for HP Cloud Services. Prior to his current role, Ben led the information security group at MySpace and the incident response team at MSN Security. Ben holds a master's degree in Computer Science from the Queensland University of Technology.

**Nathanael Burton, National Security Agency**

Nathanael Burton is a Computer Scientist at the National Security Agency. He has worked for the agency for over 10 years on distributed systems, large-scale hosting, open-source initiatives, operating systems, security, storage, and virtualization technology. He has a bachelor's degree in Computer Science from Virginia Tech.

**Vibha Fauver**

Vibha Fauver, GWEB, CISSP, PMP, has over fifteen years of experience in information technology. Her areas of specialization include software engineering, project management, and information security. She has a B.S. in Computer & Information Science and an M.S. in Engineering Management with a specialization and certificate in Systems Engineering.

**Eric Windisch, Cloudscaling**

Eric Windisch is a Principal Engineer at Cloudscaling, where he has been contributing to OpenStack
for over two years. Eric has been in the trenches of hostile environments, building tenant isolation and infrastructure security, through more than a decade of experience in the web-hosting industry. He has been building cloud computing infrastructure and automation since 2007.

**Andrew Hay, CloudPassage**

Andrew Hay is the Director of Applied Security Research at CloudPassage, Inc., where he leads the security research efforts for the company and its server security products purpose-built for dynamic public, private, and hybrid cloud hosting environments.

**Adam Hyde**

Adam facilitated this Book Sprint. He also founded the Book Sprint methodology and is the most experienced Book Sprint facilitator. Adam founded FLOSS Manuals, a community of some 3,000 individuals developing free manuals about free software. He is also the founder and project manager of Booktype, an open-source project for writing, editing, and publishing books online and in print.

During the sprint we also had help from Anne Gentle, Warren Wang, Paul McMillan, Brian Schott, and Lorin Hochstein.

This book was produced in a five-day Book Sprint. A Book Sprint is a facilitated, intensely collaborative process that brings a group together to produce a book in three to five days. It is a strongly facilitated process with a specific methodology founded and developed by Adam Hyde. For more information, visit the Book Sprint page on the BookSprints website.

#### How to contribute to this book

The initial work on this book was done in an overly air-conditioned room that served as our group office for the entirety of the documentation sprint.

To learn more about how to contribute to the OpenStack documentation, see the OpenStack Documentation Contributor Guide.

### Introduction to OpenStack
This guide provides security insights into OpenStack deployments. The intended audience is cloud architects, deployers, and administrators. In addition, cloud users will find the guide both educational and helpful in provider selection, while auditors will find it useful as a reference document to support their compliance certification efforts. This guide is also recommended for anyone interested in cloud security.

Each OpenStack deployment embraces a wide variety of technologies, spanning Linux distributions, database systems, messaging queues, the OpenStack components themselves, access-control policies, logging services, security monitoring tools, and much more. It should come as no surprise that the security issues involved are equally diverse, and their in-depth analysis requires some guidance. We strive to find the right balance, providing enough context to understand OpenStack security issues and how they are handled, and pointing to external references for further information. The guide can be read from start to finish or used like a reference.

We briefly introduce the kinds of clouds (private, public, and hybrid) before presenting an overview of the OpenStack components and their related security concerns in the remainder of the chapter.

Throughout this book, we refer to several types of OpenStack cloud users: administrator, operator, and user. We use these terms to identify the level of security access each role has, even though, in reality, we know that different roles are often held by the same individual.

### Cloud types

OpenStack is a key enabler in the adoption of cloud technology and has several common deployment use cases. These are commonly known as public, private, and hybrid models. The following sections use the National Institute of Standards and Technology (NIST) definition of cloud to introduce these different types of cloud as they apply to OpenStack.

#### Public cloud
According to NIST, a public cloud is one in which the infrastructure is open to the general public for consumption. OpenStack public clouds are typically run by a service provider and can be consumed by individuals, corporations, or any paying customer. A public-cloud provider might expose a full set of features, such as software-defined networking or block storage, in addition to multiple instance types.

By their nature, public clouds are exposed to a higher degree of risk. As a consumer of a public cloud, you should validate that your chosen provider has the necessary certifications, attestations, and other regulatory considerations. As a public-cloud provider, depending on your target customers, you might be subject to one or more regulations. Additionally, even if not required to meet regulatory requirements, a provider should ensure tenant isolation and protect the management infrastructure from external attacks.

#### Private cloud

At the opposite end of the spectrum is the private cloud. As NIST defines it, a private cloud is provisioned for exclusive use by a single organization comprising multiple consumers, such as business units. The cloud may be owned, managed, and operated by the organization, a third party, or some combination of them, and it may exist on or off premises. Private-cloud use cases are diverse and, as such, their individual security concerns vary.

#### Community cloud

NIST defines a community cloud as one whose infrastructure is provisioned for exclusive use by a specific community of consumers from organizations that have shared concerns (for example, mission, security requirements, policy, or compliance considerations). The cloud may be owned, managed, and operated by one or more of the organizations in the community, a third party, or some combination of them, and it may exist on or off premises.

#### Hybrid cloud
NIST defines a hybrid cloud as a composition of two or more distinct cloud infrastructures, such as private, community, or public clouds, that remain unique entities but are bound together by standardized or proprietary technology that enables data and application portability, for example cloud bursting for load balancing between clouds. For instance, an online retailer might present its advertising and catalogue on a public cloud that allows for elastic provisioning. This lets it handle seasonal loads in a flexible, cost-effective fashion. Once a customer begins to process their order, they are transferred to a more secure private cloud that is PCI compliant.

In this document, we treat community and hybrid clouds similarly, dealing explicitly only with the extremes of public and private clouds from a security standpoint. Your security measures depend on where your deployment falls on the private-public continuum.

### OpenStack service overview

OpenStack embraces a modular architecture to provide a set of core services that facilitate scalability and elasticity as core design tenets. This chapter briefly reviews the OpenStack components, their use cases, and security considerations.

#### Compute

The OpenStack Compute service (nova) provides services to support the management of virtual machine instances at scale: instances that host multi-tiered applications, development or test environments, "big data" crunching Hadoop clusters, or high-performance computing.

The Compute service facilitates this management through an abstraction layer that interfaces with supported hypervisors (we address this in more detail later on).

Later in the guide, we focus on the virtualization stack as it relates to hypervisors.

For information about the current state of feature support, see the OpenStack Hypervisor Support Matrix.

The security of Compute is critical for an OpenStack deployment. Hardening techniques should include support for strong instance isolation, secure communication between Compute sub-components, and resiliency of the public-facing API endpoints.

#### Object Storage
The OpenStack Object Storage service (swift) supports the storage and retrieval of arbitrary data in the cloud. The Object Storage service provides both a native API and an Amazon Web Services S3-compatible API. The service provides a high degree of resiliency through data replication and can handle petabytes of data.

It is important to understand that object storage differs from traditional file-system storage. Object storage is best used for static data such as media files (MP3s, images, or videos), virtual machine images, and backup files.

Object security should focus on access control and encryption of data in transit and at rest. Other concerns might include system abuse, illegal or malicious content storage, and cross-authentication attack vectors.

#### Block Storage

The OpenStack Block Storage service (cinder) provides persistent block storage for compute instances. The Block Storage service is responsible for managing the life cycle of block devices, from the creation and attachment of volumes to instances, to their release.

Security considerations for block storage are similar to those for object storage.

#### Shared File Systems

The Shared File Systems service (manila) provides a set of services for managing shared file systems in a multi-tenant cloud environment, similar to how OpenStack provides block-based storage management through the OpenStack Block Storage service project. With the Shared File Systems service, you can create a remote file system, mount the file system on your instances, and then read and write data from your instances to and from the file system.

#### Networking

The OpenStack Networking service (neutron, previously called quantum) provides various networking services to cloud users (tenants), such as IP address management, DNS, DHCP, load balancing, and security groups (network access rules, like firewall policies). This service provides a framework for software-defined networking (SDN) that allows for pluggable integration with various networking solutions.

OpenStack Networking allows cloud tenants to manage their guest network configurations. Security concerns with the networking service include network traffic isolation, availability, integrity, and confidentiality.

#### Dashboard
\uff08horizon\uff09 \u4e3a\u4e91\u7ba1\u7406\u5458\u548c\u4e91\u79df\u6237\u63d0\u4f9b\u4e86\u4e00\u4e2a\u57fa\u4e8e Web \u7684\u754c\u9762\u3002\u4f7f\u7528\u6b64\u754c\u9762\uff0c\u7ba1\u7406\u5458\u548c\u79df\u6237\u53ef\u4ee5\u9884\u914d\u3001\u7ba1\u7406\u548c\u76d1\u89c6\u4e91\u8d44\u6e90\u3002\u4eea\u8868\u677f\u901a\u5e38\u4ee5\u9762\u5411\u516c\u4f17\u7684\u65b9\u5f0f\u90e8\u7f72\uff0c\u5177\u6709\u516c\u5171 Web \u95e8\u6237\u7684\u6240\u6709\u5e38\u89c1\u5b89\u5168\u95ee\u9898\u3002 \u8eab\u4efd\u9274\u522b\u670d\u52a1 \u00b6 OpenStack Identity \u670d\u52a1 \uff08keystone\uff09 \u662f\u4e00\u9879\u5171\u4eab\u670d\u52a1\uff0c\u53ef\u5728\u6574\u4e2a\u4e91\u57fa\u7840\u67b6\u6784\u4e2d\u63d0\u4f9b\u8eab\u4efd\u9a8c\u8bc1\u548c\u6388\u6743\u670d\u52a1\u3002Identity \u670d\u52a1\u5177\u6709\u5bf9\u591a\u79cd\u8eab\u4efd\u9a8c\u8bc1\u5f62\u5f0f\u7684\u53ef\u63d2\u5165\u652f\u6301\u3002 Identity \u670d\u52a1\u7684\u5b89\u5168\u95ee\u9898\u5305\u62ec\u5bf9\u8eab\u4efd\u9a8c\u8bc1\u7684\u4fe1\u4efb\u3001\u6388\u6743\u4ee4\u724c\u7684\u7ba1\u7406\u4ee5\u53ca\u5b89\u5168\u901a\u4fe1\u3002 \u955c\u50cf\u670d\u52a1 \u00b6 OpenStack \u955c\u50cf\u670d\u52a1\uff08glance\uff09\u63d0\u4f9b\u78c1\u76d8\u955c\u50cf\u7ba1\u7406\u670d\u52a1\uff0c\u5305\u62ec\u955c\u50cf\u53d1\u73b0\u3001\u6ce8\u518c\u548c\u6839\u636e\u9700\u8981\u5411\u8ba1\u7b97\u670d\u52a1\u4ea4\u4ed8\u670d\u52a1\u3002 \u9700\u8981\u53d7\u4fe1\u4efb\u7684\u8fdb\u7a0b\u6765\u7ba1\u7406\u78c1\u76d8\u6620\u50cf\u7684\u751f\u547d\u5468\u671f\uff0c\u4ee5\u53ca\u524d\u9762\u63d0\u5230\u7684\u4e0e\u6570\u636e\u5b89\u5168\u6709\u5173\u7684\u6240\u6709\u95ee\u9898\u3002 \u6570\u636e\u5904\u7406\u670d\u52a1 \u00b6 \u6570\u636e\u5904\u7406\u670d\u52a1 \uff08sahara\uff09 \u63d0\u4f9b\u4e86\u4e00\u4e2a\u5e73\u53f0\uff0c\u7528\u4e8e\u914d\u7f6e\u3001\u7ba1\u7406\u548c\u4f7f\u7528\u8fd0\u884c\u5e38\u7528\u5904\u7406\u6846\u67b6\u7684\u7fa4\u96c6\u3002 \u6570\u636e\u5904\u7406\u7684\u5b89\u5168\u6ce8\u610f\u4e8b\u9879\u5e94\u4fa7\u91cd\u4e8e\u6570\u636e\u9690\u79c1\u548c\u4e0e\u9884\u7f6e\u96c6\u7fa4\u7684\u5b89\u5168\u901a\u4fe1\u3002 \u5176\u4ed6\u914d\u5957\u6280\u672f \u00b6 \u6d88\u606f\u4f20\u9012\u7528\u4e8e\u591a\u4e2a OpenStack \u670d\u52a1\u4e4b\u95f4\u7684\u5185\u90e8\u901a\u4fe1\u3002\u9ed8\u8ba4\u60c5\u51b5\u4e0b\uff0cOpenStack \u4f7f\u7528\u57fa\u4e8e AMQP \u7684\u6d88\u606f\u961f\u5217\u3002\u4e0e\u5927\u591a\u6570 OpenStack \u670d\u52a1\u4e00\u6837\uff0cAMQP \u652f\u6301\u53ef\u63d2\u62d4\u7ec4\u4ef6\u3002\u73b0\u5728\uff0c\u5b9e\u73b0\u540e\u7aef\u53ef\u4ee5\u662f RabbitMQ\u3001Qpid \u6216 ZeroMQ\u3002 \u7531\u4e8e\u5927\u591a\u6570\u7ba1\u7406\u547d\u4ee4\u90fd\u6d41\u7ecf\u6d88\u606f\u961f\u5217\u7cfb\u7edf\uff0c\u56e0\u6b64\u6d88\u606f\u961f\u5217\u5b89\u5168\u6027\u662f\u4efb\u4f55 OpenStack \u90e8\u7f72\u7684\u4e3b\u8981\u5b89\u5168\u95ee\u9898\uff0c\u672c\u6307\u5357\u7a0d\u540e\u5c06\u5bf9\u6b64\u8fdb\u884c\u8be6\u7ec6\u8ba8\u8bba\u3002 \u6709\u51e0\u4e2a\u7ec4\u4ef6\u4f7f\u7528\u6570\u636e\u5e93\uff0c\u5c3d\u7ba1\u5b83\u6ca1\u6709\u663e\u5f0f\u8c03\u7528\u3002\u4fdd\u62a4\u6570\u636e\u5e93\u8bbf\u95ee\u662f\u53e6\u4e00\u4e2a\u5b89\u5168\u95ee\u9898\uff0c\u56e0\u6b64\u5728\u672c\u6307\u5357\u540e\u9762\u5c06\u66f4\u8be6\u7ec6\u5730\u8ba8\u8bba\u3002 \u5b89\u5168\u8fb9\u754c\u548c\u5a01\u80c1 \u00b6 
A cloud can be abstracted as a collection of logical components grouped by function, users, and shared security concerns; we call these groups security domains. Threat actors and vectors are classified by their motivation and their access to resources. Our goal is to give you a sense of the security concerns in each domain, relative to your risk and vulnerability protection objectives.

### Security domains

A security domain comprises users, applications, servers, or networks that share common trust requirements and expectations within a system. Typically they share the same authentication and authorization (AuthN/Z) requirements and users. Although you may want to break these domains down further (we discuss where that may be appropriate later), we generally refer to four distinct security domains that form the bare minimum needed to deploy any OpenStack cloud securely:

- Public
- Guest
- Management
- Data

We chose these domains because they can be mapped independently or combined to represent the majority of trust areas in a given OpenStack deployment. For example, some deployment topologies combine the guest and data domains on one physical network, while others keep these networks physically separate. In either case the cloud operator should be aware of the relevant security concerns. Security domains should be mapped against your specific OpenStack deployment topology; the domains and their trust requirements depend on whether the cloud instance is public, private, or hybrid.

#### Public

The public security domain is an entirely untrusted area of the cloud infrastructure. It can refer to the internet as a whole or simply to networks over which you have no authority. Any data with confidentiality or integrity requirements that traverses this domain should be protected with compensating controls. This domain should always be considered untrusted.

#### Guest

Typically used for compute instance-to-instance traffic, the guest security domain handles compute data generated by instances on the cloud, but not services that support the operation of the cloud, such as API calls. Public and private cloud providers that do not place stringent controls on instance use, or that allow unrestricted internet access to virtual machines, should consider this domain untrusted. Private cloud providers may treat this network as internal, and therefore trusted, only if controls are in place to assert that the instances and all associated tenants can be trusted.

#### Management

The management security domain is where services interact. Sometimes called the "control plane", the networks in this domain carry confidential data such as configuration parameters, user names, and passwords. Command and control traffic typically resides here, which imposes strong integrity requirements. Access to this domain should be highly restricted and monitored, and the domain should still employ all of the security best practices described in this guide. In most deployments this domain is considered trusted. However, many systems in an OpenStack deployment bridge this domain with others, which can lower the level of trust you can place in it. See Bridging security domains for more information.

#### Data

The data security domain is concerned primarily with information pertaining to the storage services within OpenStack. Most of the data transmitted across this network requires high levels of integrity and confidentiality, and in some cases, depending on the type of deployment, strong availability as well. The trust level of this network depends heavily on other deployment decisions, so we do not assign it any default level of trust.

#### Bridging security domains

A bridge is a component that exists in more than one security domain. Any component that bridges domains with different trust levels or authentication requirements must be configured carefully; these bridges are often weak points in the network architecture. A bridge should always be configured to meet the security requirements of the highest trust level among the domains it bridges. In many cases, the security controls for bridges should be a primary concern because of the likelihood of attack.
The figure above shows a compute node bridging the data and management domains; the compute node should therefore be configured to meet the security requirements of the management domain. Similarly, the API endpoint in that figure bridges the untrusted public domain and the management domain, and should be configured to prevent attacks from the public domain propagating into the management domain. In some cases, deployers may want to secure a bridge to a higher standard than any of the domains it resides in. Given the API endpoint example, an adversary could target the endpoint from the public domain and leverage it to compromise or gain access to the management domain. The design of OpenStack makes strict separation of security domains difficult: because core services usually bridge at least two domains, special consideration must be given when applying security controls to them.

### Threat classification, actors, and attack vectors

Most types of cloud deployment, public or private, are exposed to some form of attack. This section categorizes attackers and summarizes the potential types of attack in each security domain.

#### Threat actors

A threat actor is an abstract way to refer to a class of adversary that you may attempt to defend against. The more capable the actor, the more rigorous (and expensive) the security controls needed for successful mitigation and prevention. Security is a trade-off between cost, usability, and defense. In some cases it will not be possible to secure a cloud deployment against all of the threat actors described here; those deploying an OpenStack cloud must decide where the balance lies for their deployment and usage.

**Intelligence services.** Considered by this guide to be the most capable adversary. Intelligence services and other state actors can bring enormous resources to bear on a target and have capabilities beyond those of any other actor. Without extremely stringent controls, both human and technical, these actors are very difficult to defend against.

**Serious organized crime.** Highly capable, financially driven groups of attackers, able to fund in-house exploit development and target research. In recent years the rise of organizations such as the Russian Business Network, a massive cyber-criminal enterprise, has shown how cyber attacks have become a commodity. Industrial espionage also falls within this group.

**Highly capable groups.** "Hacktivist"-type organizations that are typically not commercially funded but can pose a serious threat to service providers and cloud operators.

**Motivated individuals.** Attackers acting alone, in many guises: rogue or malicious employees, disaffected customers, or small-scale industrial espionage.

**Script kiddies.** Automated vulnerability scanning and exploitation; non-targeted attacks. Often only a nuisance, but compromise by one of these actors still presents a major risk to an organization's reputation.

#### Public and private cloud considerations

Private clouds are typically deployed by enterprises or institutions inside their own networks and behind their firewalls. Enterprises have strict policies about which data is allowed to leave the network and may even operate different clouds for specific purposes. Users of a private cloud are typically employees of the organization that owns the cloud and can be held accountable for their actions; they often attend training before gaining access and may take part in regularly scheduled security awareness training. Public clouds, by contrast, cannot make any assertions about their users, use cases, or motivations, which immediately pushes the guest security domain into a completely untrusted state for public cloud providers. A notable difference in the attack surface of public clouds is that they must provide internet access to their services: instance connectivity, access to files over the internet, and the ability to interact with the cloud control fabric, such as API endpoints and the dashboard, are requirements for a public cloud.
Privacy concerns for public and private cloud users are typically diametrically opposed. Data generated and stored in a private cloud is normally owned by the cloud operator, who can deploy technologies such as data loss prevention (DLP), file inspection, deep packet inspection, and prescriptive firewalling. In contrast, privacy is one of the main barriers to adopting public cloud infrastructure, because many of those controls do not exist there.

#### Outbound attacks and reputational risk

Careful consideration should be given to potential outbound abuse from a cloud deployment. Whether public or private, clouds tend to have a large amount of available resources. An attacker who establishes a point of presence in the cloud, through compromise or through entitled access such as a rogue employee, can turn those resources against the internet at large. Clouds with compute services make ideal DDoS and brute-force engines. The issue is more pressing for public clouds, because their users are largely unaccountable and can quickly spin up large numbers of disposable instances for outbound attacks. A company's reputation can suffer serious damage if it becomes known for hosting malware or launching attacks on other networks. Prevention methods include egress security groups, outbound traffic inspection, customer education and awareness, and fraud and abuse mitigation strategies.

#### Attack types

The diagram referenced here shows the typical types of attack that may be expected from the actors described in the previous section; note that there will always be attack types it does not anticipate.

[Figure: Attack types]

Prescriptive defenses against each form of attack are beyond the scope of this document. The diagram can help you make an informed decision about which types of threat and threat actor to protect against. For commercial public cloud deployments this might include protection against serious organized crime. For those deploying private clouds for government use, more stringent protective mechanisms should be in place, including carefully protected facilities and supply chains. Those standing up basic development or test environments will likely need less restrictive controls (the middle of the spectrum).

## Selecting supporting software

The supporting software you choose, such as messaging and load balancing, can have serious security impacts on your cloud, so it is important to make the right choices for your organization. This section provides some general guidelines. When selecting supporting software, consider the following factors:

- team expertise
- product or project maturity
- Common Criteria
- hardware concerns

### Team expertise

The more familiar your team is with a given product, its configuration, and its eccentricities, the fewer configuration mistakes will be made. Spreading expertise across the organization also increases the availability of your systems, allows segregation of duties, and mitigates problems when a team member is unavailable.

### Product or project maturity

The maturity of a given product or project is also critical to your security posture. Once the cloud is deployed, maturity affects:

- availability of expertise
- active developer and user communities
- timeliness and availability of updates
- incident response

### Common Criteria

Common Criteria is an internationally standardized software evaluation process used by governments and commercial companies to validate that software technologies perform as advertised.

### Hardware concerns

Consider the supportability of the hardware on which the software will run, as well as any additional features available in the hardware and how those features are supported by the software you choose.

## System documentation
System documentation for an OpenStack cloud deployment should follow the templates and best practices used for enterprise IT systems in your organization. Organizations often have compliance requirements that call for an overall system security plan to inventory and document the architecture of a given system. Documenting a dynamic cloud infrastructure and keeping that information current is a challenge shared across the industry. The topics covered here are:

- system documentation requirements
- system roles and types
- system inventory
- network topology
- services, protocols, and ports

### System documentation requirements

#### System roles and types

The two broadly defined types of nodes that generally make up an OpenStack installation are:

- **Infrastructure nodes**, which run cloud-related services such as the OpenStack Identity service, the message queuing service, storage, networking, and other services required to support the operation of the cloud.
- **Compute, storage, or other resource nodes**, which provide storage capacity or virtual machines for the cloud.

#### System inventory

Documentation should provide a general description of the OpenStack environment and cover all systems used (for example production, development, and test). Documenting system components, networks, services, and software typically provides the bird's-eye view needed to thoroughly consider security concerns, attack vectors, and possible security-domain bridging points. A system inventory may need to capture ephemeral resources, such as virtual machines or virtual disk volumes, that would be persistent resources in a traditional IT system.

**Hardware inventory.** Clouds without stringent compliance requirements for written documentation may still benefit from a configuration management database (CMDB). CMDBs are normally used for hardware asset tracking and overall life-cycle management. By leveraging a CMDB, an organization can quickly identify cloud infrastructure hardware such as compute nodes, storage nodes, and network devices. A CMDB can also help identify assets on the network that may be vulnerable because of inadequate maintenance, inadequate protection, or having been displaced and forgotten. An OpenStack provisioning system can provide some basic CMDB functions if the underlying hardware supports the necessary auto-discovery features.

**Software inventory.** As with hardware, all software components in the OpenStack deployment should be documented. Examples include:

- system databases, such as MySQL or mongoDB
- OpenStack software components, such as Identity or Compute
- supporting components, such as load balancers, reverse proxies, DNS, or DHCP services

An authoritative list of software components can be critical when assessing the impact of a compromise or vulnerability in a library, application, or class of software.

#### Network topology

A network topology should be provided, with highlights specifically calling out the data flows and bridging points between security domains. Network ingress and egress points should be identified, along with any OpenStack logical system boundaries. Multiple diagrams may be needed to provide complete visual coverage of the system. The topology documentation should include the virtual networks created on behalf of tenants by the system, along with the virtual machine instances and gateways created by OpenStack.

#### Services, protocols, and ports

Knowing information about organizational assets is generally a best practice. An asset table helps validate security requirements and maintain standard security components such as firewall configuration, service port conflicts, security remediation areas, and compliance. It can also help clarify the relationships between OpenStack components. The table might include:

- the services, protocols, and ports used in the OpenStack deployment
- an overview of all services running within the cloud infrastructure

It is strongly recommended that OpenStack deployments keep a record of information like this. The table can be generated from information in a CMDB or constructed manually. An example:

| Service | Protocol | Port | Purpose | Used by | Security domain |
|---------|----------|------|---------|---------|-----------------|
| beam.smp | AMQP | 5672/tcp | AMQP messaging service | RabbitMQ | Management |
| tgtd | iSCSI | 3260/tcp | iSCSI target service | iSCSI | Private (data network) |
| sshd | ssh | 22/tcp | Secure login to nodes and guest VMs | Various | Management, public, and guest, as configured |
| mysqld | mysql | 3306/tcp | Database service | Various | Management |
| apache2 | http | 443/tcp | Dashboard | Tenants | Public |
| dnsmasq | dns | 53/tcp | DNS services | Guest VMs | Guest |
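As a lightweight illustration of how such an asset table can be kept machine-readable and queried automatically, the sketch below encodes the example rows above as plain Python data and lists the services reachable from the public security domain. The entries and the `services_in_domain` helper are illustrative assumptions; a real deployment would derive this data from its CMDB.

```python
"""Minimal sketch: keep the services/protocols/ports table as data and query it.

The rows mirror the example table above; adapt them to your own deployment.
"""

SERVICES = [
    # (service, protocol, port, purpose, used_by, security_domains)
    ("beam.smp", "AMQP",  "5672/tcp", "AMQP messaging service",        "RabbitMQ",  {"management"}),
    ("tgtd",     "iSCSI", "3260/tcp", "iSCSI target service",          "iSCSI",     {"data"}),
    ("sshd",     "ssh",   "22/tcp",   "Secure login to nodes and VMs", "Various",   {"management", "public", "guest"}),
    ("mysqld",   "mysql", "3306/tcp", "Database service",              "Various",   {"management"}),
    ("apache2",  "http",  "443/tcp",  "Dashboard",                     "Tenants",   {"public"}),
    ("dnsmasq",  "dns",   "53/tcp",   "DNS services",                  "Guest VMs", {"guest"}),
]


def services_in_domain(domain):
    """Return (service, port) pairs exposed in the given security domain."""
    return [(name, port) for name, _proto, port, *_rest, domains in SERVICES
            if domain in domains]


if __name__ == "__main__":
    # Services exposed to the untrusted public domain deserve the closest review.
    for name, port in services_in_domain("public"):
        print(f"public-facing: {name} on {port}")
```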
## Management

A cloud deployment is a constantly changing system. Machines age and fail, software becomes outdated, and vulnerabilities are discovered. When configuration errors or omissions occur, or when software fixes must be applied, the changes must be made in a secure but convenient way; this is typically handled through configuration management. It is important to protect the cloud deployment from being configured or manipulated by malicious entities. Because many systems in a cloud employ compute and network virtualization, OpenStack faces distinct challenges that must be addressed through integrity life-cycle management. Administrators must exercise command and control over the cloud for various operational functions, and it is important that these command and control facilities are understood and secured. The topics covered are:

- Continuous systems management
  - Vulnerability management
  - Configuration management
  - Secure backup and recovery
  - Security auditing tools
- Integrity life-cycle
  - Secure bootstrapping
  - Runtime verification
  - Server hardening
- Management interfaces
  - Dashboard
  - OpenStack API
  - Secure shell (SSH)
  - Management utilities
  - Out-of-band management interface

### Continuous systems management

A cloud will always have bugs, and some of them will be security problems. For this reason it is critically important to be prepared to apply security updates and general software updates. This involves smart use of configuration management tools, discussed below, and knowing when an upgrade is necessary.

#### Vulnerability management

For announcements about security-relevant changes, subscribe to the OpenStack Announce mailing list. Security notifications are also delivered through downstream packages, for example via the Linux distributions you may subscribe to as part of package updates. The OpenStack components are only a small fraction of the software in a cloud, and it is just as important to keep all of the other components up to date. While some data sources are deployment specific, cloud administrators must subscribe to the necessary mailing lists to receive notification of any security updates applicable to their environment; often this is as simple as tracking an upstream Linux distribution.

Note: OpenStack releases security information through two channels.

- OpenStack Security Advisories (OSSA) are created by the OpenStack Vulnerability Management Team (VMT) and concern security vulnerabilities in core OpenStack services. For more information on the VMT, see the Vulnerability Management Process.
- OpenStack Security Notes (OSSN) are created by the OpenStack Security Group (OSSG) to support the work of the VMT. They address issues in supporting software and common deployment configurations and are referenced throughout this guide. Security Notes are archived at OSSN.

#### Triage

After you are notified of a security update, the next step is to determine how critical the update is for a given cloud deployment. A pre-defined policy is useful here. Existing vulnerability rating systems, such as the Common Vulnerability Scoring System (CVSS), do not account correctly for cloud deployments. In this example we introduce a scoring matrix that places vulnerabilities in three categories: privilege escalation, denial of service, and information disclosure. Understanding the type of vulnerability and where it occurs in your infrastructure lets you make reasoned response decisions.

Privilege escalation describes the ability of a user to act with the privileges of some other user in the system, bypassing appropriate authorization checks; a guest user performing operations that let them act with administrator privileges is an example of this class. Denial of service refers to an exploited vulnerability that can disrupt a service or system, covering both distributed attacks that overwhelm network resources and single-user attacks typically caused by resource-allocation bugs or input-induced system failures. Information disclosure vulnerabilities reveal information about your system or operations, ranging from leaked debugging information to exposure of critical security data such as authentication credentials and passwords.

The columns of the matrix give the attacker's position and privilege level:

| | External | Cloud user | Cloud admin | Control plane |
|---|---|---|---|---|
| Privilege escalation (3 levels) | Critical | n/a | n/a | n/a |
| Privilege escalation (2 levels) | Critical | Critical | n/a | n/a |
| Privilege escalation (1 level) | Critical | Critical | Critical | n/a |
| Denial of service | High | Medium | Low | Low |
| Information disclosure | Critical / High | Critical / High | Medium / Low | Low |

This table illustrates a generic approach to measuring the impact of a vulnerability based on where it occurs in your deployment and the effect it has. For example, a single-level privilege escalation on a Compute API node could allow a standard API user to gain the same privileges as the root user on that node. We suggest that cloud administrators use this table as a model to help define which actions to take for each severity level; for example, a critical-level security update might require the cloud to be upgraded quickly, whereas a low-level update might be allowed to take longer.
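As a small illustration of how such a policy matrix can be encoded for consistent triage, the sketch below maps a vulnerability class and attacker position to the severities in the table above. The severity strings and the `triage` helper are illustrative only; adjust the matrix to your own policy.

```python
"""Minimal sketch: encode the triage matrix above as a lookup table."""

# Attacker positions, from least to most privileged.
POSITIONS = ("external", "cloud user", "cloud admin", "control plane")

# Severity per (vulnerability class, attacker position), mirroring the table above.
TRIAGE_MATRIX = {
    "privilege escalation (3 levels)": ("Critical", "n/a", "n/a", "n/a"),
    "privilege escalation (2 levels)": ("Critical", "Critical", "n/a", "n/a"),
    "privilege escalation (1 level)":  ("Critical", "Critical", "Critical", "n/a"),
    "denial of service":               ("High", "Medium", "Low", "Low"),
    "information disclosure":          ("Critical/High", "Critical/High", "Medium/Low", "Low"),
}


def triage(vuln_class: str, attacker_position: str) -> str:
    """Return the severity this policy assigns to a vulnerability."""
    row = TRIAGE_MATRIX[vuln_class.lower()]
    return row[POSITIONS.index(attacker_position.lower())]


if __name__ == "__main__":
    # A one-level privilege escalation reachable by any cloud user rates as critical.
    print(triage("privilege escalation (1 level)", "cloud user"))
```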
#### Testing the updates

Any update should be tested before it is deployed to a production environment. Typically this requires a separate test cloud that receives updates first; it should be as close to the production cloud as possible in terms of software and hardware. Updates should be tested thoroughly for performance impact, stability, application impact, and more. It is especially important to verify that the problem the update is supposed to address, such as a specific vulnerability, is actually fixed.

#### Deploying the updates

Once an update has been fully tested, it can be deployed to the production environment. The deployment should be fully automated using the configuration management tools described below.

#### Configuration management

A production-quality cloud should always use tools to automate configuration and deployment. This eliminates human error and allows the cloud to scale much more rapidly; automation also helps with continuous integration and testing. When building an OpenStack cloud, it is strongly recommended to approach the design and implementation with a configuration management tool or framework in mind. Configuration management lets you avoid many of the pitfalls inherent in building, managing, and maintaining an infrastructure as complex as OpenStack. By producing the manifests, cookbooks, or templates required by a configuration management utility you can also satisfy a number of documentation and regulatory reporting requirements. Configuration management can additionally function as part of your business continuity plan (BCP) and disaster recovery (DR) plan, allowing a node or service to be rebuilt back to a known state, or to a given acceptable state, after a DR event or a compromise.

When combined with a version control system such as Git or SVN, configuration management also lets you track changes to your environment over time and remediate unauthorized changes. For example, if a nova.conf or other configuration file drifts out of compliance with your standards, the configuration management tool can revert or replace the file and return your configuration to a known state (a sketch of this kind of drift check follows below). Finally, configuration management tools can also be used to deploy updates, simplifying the security patch process. These tools have a broad range of capabilities that are useful in this space; the key point for securing your cloud is to choose one configuration management tool and use it. There are many configuration management solutions; at the time of writing, two with particularly strong support for OpenStack environments are Chef and Puppet. A non-exhaustive list of tools in this space:

- Chef
- Puppet
- Salt Stack
- Ansible
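To make the nova.conf example above concrete, here is a minimal, hedged sketch of the kind of drift check a configuration management tool performs: it compares the hash of a deployed file against a known-good baseline copy and restores the baseline when they differ. The paths and the `enforce_baseline` helper are hypothetical; real deployments should rely on their configuration management tool rather than an ad-hoc script.

```python
"""Minimal sketch of configuration drift detection and remediation.

Assumes a known-good baseline copy of each managed file is kept under
/var/lib/config-baseline (a hypothetical path used only for illustration).
"""
import hashlib
import shutil
from pathlib import Path

BASELINE_DIR = Path("/var/lib/config-baseline")   # assumed location of trusted copies
MANAGED_FILES = [Path("/etc/nova/nova.conf")]     # files under configuration management


def sha256(path: Path) -> str:
    """Return the SHA-256 digest of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()


def enforce_baseline(managed: Path) -> bool:
    """Restore the baseline copy if the managed file has drifted; return True on drift."""
    baseline = BASELINE_DIR / managed.name
    if sha256(managed) == sha256(baseline):
        return False
    shutil.copy2(baseline, managed)   # revert to the known-good configuration
    return True


if __name__ == "__main__":
    for path in MANAGED_FILES:
        drifted = enforce_baseline(path)
        print(f"{path}: {'restored from baseline' if drifted else 'in compliance'}")
```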
#### Policy changes

Whenever a policy or configuration is changed, it is good practice to log the activity and back up a copy of the new set. Such policies and configurations are often stored in a version-controlled repository such as Git.

#### Secure backup and recovery

It is important to include backup procedures and policies in the overall system security plan. For an overview of OpenStack backup and recovery capabilities and procedures, see the OpenStack Operations Guide section on backup and recovery.

- Ensure that only authenticated users and backup clients have access to the backup server.
- Use data encryption options for the storage and transmission of backups.
- Use a dedicated and hardened backup server. Its logs must be monitored daily and be accessible to only a small number of individuals.
- Test data recovery options regularly, including the images stored in secure backups; this is a key part of disaster recovery preparedness.

When a security vulnerability or compromise occurs, it is best practice to terminate running instances and relaunch them from known-good image backups. This helps ensure that compromised instances are removed and that clean, trusted versions can be redeployed quickly from the backed-up images.

#### Security auditing tools

Security auditing tools complement configuration management tools by automating the process of verifying that a large number of security controls are satisfied for a given system configuration. They help bridge the gap between security configuration guidance documents (for example, the STIGs and NSA guides) and a specific system installation. For example, SCAP can compare a running system against a pre-defined profile and produce a report detailing which controls in the profile are met, which fail, and which were not selected.

Combining configuration management with security auditing is a powerful approach: the auditing tools highlight deployment concerns, and the configuration management tools simplify the process of changing each system to address them. Used together, they help maintain a cloud that satisfies security requirements ranging from basic hardening to compliance validation. Configuration management and security auditing tools do introduce another layer of complexity into the cloud, and with it additional security concerns; given their security benefits, we consider this an acceptable risk trade-off. Securing the operational use of these tools is beyond the scope of this guide.
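As an example of the SCAP workflow described above, the sketch below drives the `oscap` scanner (from the OpenSCAP project) from Python and reports whether any selected rules failed. The datastream path and profile identifier are assumptions for illustration; they vary by distribution and by the SCAP content installed.

```python
"""Minimal sketch: run an OpenSCAP evaluation and summarize the result.

Assumes the `oscap` CLI and SCAP Security Guide content are installed; the
datastream path and profile ID below are examples, not fixed values.
"""
import subprocess

DATASTREAM = "/usr/share/xml/scap/ssg/content/ssg-rhel7-ds.xml"   # example content path
PROFILE = "xccdf_org.ssgproject.content_profile_stig"             # example profile ID


def run_scan(results_file: str = "results.xml") -> int:
    """Evaluate this host against the profile and write XML results."""
    cmd = [
        "oscap", "xccdf", "eval",
        "--profile", PROFILE,
        "--results", results_file,
        DATASTREAM,
    ]
    return subprocess.run(cmd).returncode


if __name__ == "__main__":
    rc = run_scan()
    # oscap exits 0 when all selected rules pass and 2 when at least one rule fails.
    print({0: "all selected rules passed", 2: "some rules failed"}.get(rc, f"oscap error ({rc})"))
```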
### Integrity life-cycle

We define the integrity life cycle as a deliberate process that provides assurance that we are always running the expected software with the expected configurations throughout the cloud. The process begins with secure bootstrapping and is maintained through configuration management and security monitoring. This section provides recommendations on how to approach the integrity life-cycle process.

#### Secure bootstrapping

Nodes in the cloud, including compute, storage, network, service, and hybrid nodes, should have an automated provisioning process. This ensures that nodes are provisioned consistently and correctly and makes security patching, upgrading, bug fixing, and other critical changes easier. Because this process installs new software that runs at the highest privilege levels in the cloud, it is important to verify that the correct software is installed, including during the earliest stages of the boot process.

A variety of technologies can verify these early boot stages. They typically require hardware support, such as the Trusted Platform Module (TPM), Intel Trusted Execution Technology (TXT), dynamic root of trust measurement (DRTM), and Unified Extensible Firmware Interface (UEFI) secure boot. In this guide we refer to all of these collectively as secure boot technologies. We recommend using secure boot, while acknowledging that many of the pieces required to deploy it demand advanced technical skill to customize the tooling for each environment; compared with many other recommendations in this guide, secure boot requires deeper integration and customization. TPM technology has been common in business-class laptops and desktops for several years and is now also available in servers together with supporting BIOSes. Proper planning is essential to a successful secure boot deployment.

A complete tutorial on secure boot deployment is beyond the scope of this book. Instead, we provide a framework for integrating secure boot technologies with a typical node provisioning process; for further detail, cloud architects should consult the relevant specifications and software configuration manuals.

#### Node provisioning

Nodes should be provisioned using the Preboot eXecution Environment (PXE), which greatly reduces the effort required to redeploy a node. The typical process involves the node receiving successive boot stages, that is, progressively more complex software to execute, from a server. We recommend using a separate, isolated network within the management security domain for provisioning; this network handles all PXE traffic and the subsequent boot-stage downloads. Note that the node boot process begins with two insecure operations, DHCP and TFTP; the boot process should then use TLS to download the remaining information needed to deploy the node, which may be an operating system installer, a basic install managed by Chef or Puppet, or even a complete file-system image written directly to disk.

Using TLS during the PXE boot process is somewhat more challenging, but common PXE firmware projects, such as iPXE, provide this support. Typically this involves building the PXE firmware with knowledge of the allowed TLS certificate chain so that it can properly validate the server certificate. This raises the bar for an attacker by limiting the number of insecure, plain-text network operations.

#### Verified boot

In general, there are two strategies for verifying the boot process. Traditional secure boot validates the code run at each step and stops the boot if the code is incorrect. Boot attestation records which code is run at each step and provides this information to another machine as proof that the boot process completed as expected. In both cases, the first step is to measure each piece of code before it is run; in this context a measurement is effectively a SHA-1 hash of the code, taken before execution, and the hash is stored in a platform configuration register (PCR) of the TPM.

Note: SHA-1 is used here because that is what the TPM chips support.

Each TPM has at least 24 PCRs. The TCG Generic Server Specification v1.0 (March 2005) defines the PCR assignments for boot-time integrity measurements.
The table below shows a typical PCR configuration. The context indicates whether the values are determined by the node hardware (firmware) or by the software provisioned onto the node. Some values are affected by firmware versions, disk sizes, and other low-level details, so good configuration management practices are essential to ensure that every deployed system is configured exactly as intended.

| Register | What is measured | Context |
|---|---|---|
| PCR-00 | Core Root of Trust Measurement (CRTM), BIOS code, host platform extensions | Hardware |
| PCR-01 | Host platform configuration | Hardware |
| PCR-02 | Option ROM code | Hardware |
| PCR-03 | Option ROM configuration and data | Hardware |
| PCR-04 | Initial Program Loader (IPL) code, for example the master boot record | Software |
| PCR-05 | IPL code configuration and data | Software |
| PCR-06 | State transition and wake events | Software |
| PCR-07 | Host platform manufacturer control | Software |
| PCR-08 | Platform specific, often the kernel, kernel extensions, and drivers | Software |
| PCR-09 | Platform specific, often the initramfs | Software |
| PCR-10 to PCR-23 | Platform specific | Software |

Secure boot may be an option when building your cloud, but it requires careful planning around hardware selection. For example, ensure that you have TPM and Intel TXT support, then verify how the node hardware vendor populates the PCR values, such as which values will be available for validation. Typically the PCR values listed with a software context in the table above are the ones a cloud architect can directly control, but even these may change as the software in the cloud is upgraded. Configuration management should be linked into the PCR policy engine to ensure that the validation stays up to date.

Each manufacturer must provide the BIOS and firmware code for its servers, and different servers, hypervisors, and operating systems will choose to populate different PCRs. In most real-world deployments it is impossible to validate every PCR against a known good quantity (a "golden measurement"); experience shows that even within a single vendor's product line, the measurement process for a given PCR may not be consistent. We recommend establishing a baseline for each server and monitoring the PCR values for unexpected changes. Third-party software may be available to assist with TPM provisioning and monitoring, depending on the hypervisor solution you choose.
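The following sketch shows one way such per-server baselining could look in practice: it reads the PCR values that recent Linux kernels expose under sysfs for a TPM 2.0 device and compares them with a previously recorded JSON baseline. The sysfs path and the baseline file location are assumptions; many deployments will instead use tpm2-tools or a remote-attestation service.

```python
"""Minimal sketch: compare current TPM PCR values against a recorded baseline.

Assumes a TPM 2.0 device whose SHA-256 PCR bank is exposed by the kernel under
/sys/class/tpm/tpm0/pcr-sha256/<index>; the baseline file path is arbitrary.
"""
import json
from pathlib import Path

PCR_DIR = Path("/sys/class/tpm/tpm0/pcr-sha256")    # assumed sysfs PCR bank
BASELINE_FILE = Path("/var/lib/pcr-baseline.json")   # hypothetical baseline location
MONITORED_PCRS = range(0, 10)                        # PCR-00 .. PCR-09 from the table above


def read_pcrs() -> dict:
    """Read the monitored PCR values as hex strings keyed by register index."""
    return {str(i): (PCR_DIR / str(i)).read_text().strip() for i in MONITORED_PCRS}


def check_against_baseline() -> list:
    """Return the PCR indexes whose current value differs from the baseline."""
    baseline = json.loads(BASELINE_FILE.read_text())
    current = read_pcrs()
    return [i for i in MONITORED_PCRS if current[str(i)] != baseline.get(str(i))]


if __name__ == "__main__":
    if not BASELINE_FILE.exists():
        BASELINE_FILE.write_text(json.dumps(read_pcrs()))
        print("PCR baseline recorded")
    else:
        changed = check_against_baseline()
        print("PCRs differing from baseline:", changed or "none")
```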
\u503c\u4ee5\u67e5\u627e\u610f\u5916\u66f4\u6539\u3002\u7b2c\u4e09\u65b9\u8f6f\u4ef6\u53ef\u80fd\u53ef\u7528\u4e8e\u534f\u52a9 TPM \u9884\u914d\u548c\u76d1\u89c6\u8fc7\u7a0b\uff0c\u5177\u4f53\u53d6\u51b3\u4e8e\u6240\u9009\u7684\u865a\u62df\u673a\u76d1\u63a7\u7a0b\u5e8f\u89e3\u51b3\u65b9\u6848\u3002 \u521d\u59cb\u7a0b\u5e8f\u52a0\u8f7d\u7a0b\u5e8f \uff08IPL\uff09 \u4ee3\u7801\u5f88\u53ef\u80fd\u662f PXE \u56fa\u4ef6\uff0c\u5047\u8bbe\u91c7\u7528\u4e0a\u8ff0\u8282\u70b9\u90e8\u7f72\u7b56\u7565\u3002\u56e0\u6b64\uff0c\u5b89\u5168\u542f\u52a8\u6216\u542f\u52a8\u8bc1\u660e\u8fc7\u7a0b\u53ef\u4ee5\u6d4b\u91cf\u6240\u6709\u65e9\u671f\u542f\u52a8\u4ee3\u7801\uff0c\u4f8b\u5982 BIOS\u3001\u56fa\u4ef6\u3001PXE \u56fa\u4ef6\u548c\u5185\u6838\u6620\u50cf\u3002\u786e\u4fdd\u6bcf\u4e2a\u8282\u70b9\u90fd\u5b89\u88c5\u4e86\u8fd9\u4e9b\u90e8\u4ef6\u7684\u6b63\u786e\u7248\u672c\uff0c\u4e3a\u6784\u5efa\u8282\u70b9\u8f6f\u4ef6\u5806\u6808\u7684\u5176\u4f59\u90e8\u5206\u5960\u5b9a\u4e86\u575a\u5b9e\u7684\u57fa\u7840\u3002 \u6839\u636e\u6240\u9009\u7684\u7b56\u7565\uff0c\u5728\u53d1\u751f\u6545\u969c\u65f6\uff0c\u8282\u70b9\u5c06\u65e0\u6cd5\u542f\u52a8\uff0c\u6216\u8005\u5b83\u53ef\u4ee5\u5c06\u6545\u969c\u62a5\u544a\u7ed9\u4e91\u4e2d\u7684\u53e6\u4e00\u4e2a\u5b9e\u4f53\u3002\u4e3a\u4e86\u5b9e\u73b0\u5b89\u5168\u5f15\u5bfc\uff0c\u8282\u70b9\u5c06\u65e0\u6cd5\u5f15\u5bfc\uff0c\u7ba1\u7406\u5b89\u5168\u57df\u4e2d\u7684\u7f6e\u5907\u670d\u52a1\u5fc5\u987b\u8bc6\u522b\u8fd9\u4e00\u70b9\u5e76\u8bb0\u5f55\u4e8b\u4ef6\u3002\u5bf9\u4e8e\u542f\u52a8\u8bc1\u660e\uff0c\u5f53\u68c0\u6d4b\u5230\u6545\u969c\u65f6\uff0c\u8282\u70b9\u5c06\u5df2\u7ecf\u5728\u8fd0\u884c\u3002\u5728\u8fd9\u79cd\u60c5\u51b5\u4e0b\uff0c\u5e94\u901a\u8fc7\u7981\u7528\u8282\u70b9\u7684\u7f51\u7edc\u8bbf\u95ee\u6765\u7acb\u5373\u9694\u79bb\u8282\u70b9\u3002\u7136\u540e\uff0c\u5e94\u5206\u6790\u4e8b\u4ef6\u7684\u6839\u672c\u539f\u56e0\u3002\u65e0\u8bba\u54ea\u79cd\u60c5\u51b5\uff0c\u7b56\u7565\u90fd\u5e94\u89c4\u5b9a\u5728\u5931\u8d25\u540e\u5982\u4f55\u7ee7\u7eed\u3002\u4e91\u53ef\u80fd\u4f1a\u81ea\u52a8\u5c1d\u8bd5\u91cd\u65b0\u914d\u7f6e\u8282\u70b9\u4e00\u5b9a\u6b21\u6570\u3002\u6216\u8005\uff0c\u5b83\u53ef\u80fd\u4f1a\u7acb\u5373\u901a\u77e5\u4e91\u7ba1\u7406\u5458\u8c03\u67e5\u95ee\u9898\u3002\u6b64\u5904\u7684\u6b63\u786e\u7b56\u7565\u662f\u7279\u5b9a\u4e8e\u90e8\u7f72\u548c\u6545\u969c\u6a21\u5f0f\u7684\u3002 \u8282\u70b9\u52a0\u56fa \u00b6 \u6b64\u65f6\uff0c\u6211\u4eec\u77e5\u9053\u8282\u70b9\u5df2\u4f7f\u7528\u6b63\u786e\u7684\u5185\u6838\u548c\u5e95\u5c42\u7ec4\u4ef6\u542f\u52a8\u3002\u4e0b\u4e00\u6b65\u662f\u5f3a\u5316\u64cd\u4f5c\u7cfb\u7edf\uff0c\u5b83\u4ece\u4e00\u7ec4\u884c\u4e1a\u516c\u8ba4\u7684\u5f3a\u5316\u63a7\u4ef6\u5f00\u59cb\u3002\u4ee5\u4e0b\u6307\u5357\u662f\u5f88\u597d\u7684\u793a\u4f8b\uff1a \u5b89\u5168\u6280\u672f\u5b9e\u65bd\u6307\u5357 \uff08STIG\uff09 \u56fd\u9632\u4fe1\u606f\u7cfb\u7edf\u5c40 \uff08DISA\uff09\uff08\u96b6\u5c5e\u4e8e\u7f8e\u56fd\u56fd\u9632\u90e8\uff09\u53d1\u5e03\u9002\u7528\u4e8e\u5404\u79cd\u64cd\u4f5c\u7cfb\u7edf\u3001\u5e94\u7528\u7a0b\u5e8f\u548c\u786c\u4ef6\u7684 STIG \u5185\u5bb9\u3002\u8fd9\u4e9b\u63a7\u4ef6\u5728\u672a\u9644\u52a0\u4efb\u4f55\u8bb8\u53ef\u8bc1\u7684\u60c5\u51b5\u4e0b\u53d1\u5e03\u3002 \u4e92\u8054\u7f51\u5b89\u5168\u4e2d\u5fc3 \uff08CIS\uff09 \u57fa\u51c6\u6d4b\u8bd5 CIS 
These controls are best applied through automation. Automation ensures that the controls are applied to every system in the same way every time, and it also provides a fast way to audit existing systems. Several options exist:

- **OpenSCAP**: an open source tool that consumes SCAP content (XML files describing security controls) and applies it to a variety of systems. Most of the content currently available targets Red Hat Enterprise Linux and CentOS, but the tooling works on any Linux or Windows system.
- **ansible-hardening**: the ansible-hardening project provides an Ansible role that applies security controls to a range of Linux operating systems, and it can also be used to audit existing systems. Review each control carefully to determine whether it could disrupt a production system. The controls are based on the Red Hat Enterprise Linux 7 STIG.

Fully hardening a system is a challenging process and may require extensive changes to some systems, and some of those changes can affect production workloads. If a system cannot be fully hardened, the following two changes are strongly recommended because they improve security without causing major disruption:

#### Mandatory access controls (MAC)

Mandatory access control applies to all users on the system, including root. The kernel's job is to vet every action against the currently loaded security policy; actions outside the allowed policy are blocked, even for the root user. See the discussion of sVirt, SELinux, and AppArmor below for more detail.
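As a quick sanity check, a minimal sketch of confirming that MAC is actually enforced on a node, assuming an SELinux-based distribution (AppArmor systems would use `aa-status` instead):

```
# Confirm the running kernel enforces the SELinux policy.
getenforce                 # should print "Enforcing"
sudo setenforce 1          # switch the running system to enforcing mode if it is not

# Make the setting persistent across reboots.
grep '^SELINUX=' /etc/selinux/config   # should show SELINUX=enforcing
```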
#### Remove packages and stop services

Keep the number of installed packages, and the number of running services, as small as possible. Removing unneeded packages makes patching easier and reduces the number of items on the system that could lead to a compromise; stopping unneeded services shrinks the attack surface and makes attacks more difficult.

We also recommend the following additional steps for production nodes:

#### Read-only file systems

Use read-only file systems where possible, and ensure that writable file systems do not permit execution. This can be handled with the `noexec`, `nosuid`, and `nodev` mount options in `/etc/fstab`.
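A minimal sketch of applying and verifying these mount options on a writable file system; the mount point is illustrative, and the options should also be persisted in `/etc/fstab` so they survive a reboot:

```
# Remount a writable filesystem without exec/suid/device-node support.
sudo mount -o remount,noexec,nosuid,nodev /tmp

# Verify the active mount options include what we asked for.
findmnt -no OPTIONS /tmp    # output should contain noexec,nosuid,nodev
```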
#### System verification

Finally, the node's kernel should have a mechanism to validate that the rest of the node starts in a known good state. This provides the necessary link from the boot-validation process to validating the entire system. The steps are deployment specific; for example, a kernel module could verify a hash over the blocks comprising a file system before mounting it using dm-verity.

### Runtime verification

Once a node is running, we need to ensure that it remains in a good state over time. Broadly speaking, this includes configuration management and security monitoring. The goals of each of these areas differ; by checking both, we achieve better assurance that the system is operating as expected. Configuration management is discussed in the management section; security monitoring is covered below.

#### Intrusion detection systems

Host-based intrusion detection tools are also useful for automated validation of the cloud internals. A wide variety of host-based intrusion detection tools is available: some are free open source projects, others are commercial products. Typically these tools analyze data from a variety of sources and produce security alerts based on rule sets and/or training. Typical functionality includes log analysis, file integrity checking, policy monitoring, and rootkit detection. More advanced, often custom-built, tools can validate that in-memory process images match the executables on disk and verify the execution state of running processes.

One critical policy decision for a cloud architect is what to do with the output from a security monitoring tool. There are effectively two options. The first is to alert a human to investigate and/or take corrective action; this can be done by including the security alert in a log or event stream that cloud administrators watch. The second option is to have the cloud take some form of remediation automatically, in addition to logging the event. Remediation can range from re-installing a node to performing a minor service reconfiguration. However, automated remediation can be challenging due to the possibility of false positives.

A false positive occurs when the security monitoring tool raises a security alert for a benign event. Given the nature of security monitoring tools, false positives will certainly occur from time to time. Typically a cloud administrator can tune the tools to reduce the false positives, but this may also reduce the overall detection rate. These classic trade-offs must be understood and accounted for when setting up a security monitoring system in the cloud.

The selection and configuration of a host-based intrusion detection tool is highly deployment specific. We recommend starting by exploring the following open source projects, which implement a variety of host-based intrusion detection and file monitoring features:

- OSSEC
- Samhain
- Tripwire
- AIDE
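As an illustration of the file-integrity-checking functionality mentioned above, a minimal AIDE workflow might look like the following; the database paths follow common Linux packaging defaults and may differ on your distribution:

```
# Build the initial file-integrity database from the current (trusted) state.
sudo aide --init
sudo mv /var/lib/aide/aide.db.new.gz /var/lib/aide/aide.db.gz

# Periodically (for example from cron) report files that changed since the baseline.
sudo aide --check
```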
Network intrusion detection tools complement the host-based tools. OpenStack does not ship a specific network IDS, but OpenStack Networking provides a plug-in mechanism to enable different technologies through the Networking API. This plug-in architecture allows tenants to develop API extensions to insert and configure their own advanced network services, such as firewalls, intrusion detection systems, or VPNs between virtual machines. As with the host-based tools, the selection and configuration of a network-based intrusion detection tool is deployment specific. Snort is the leading open source network intrusion detection tool and a good starting point for learning more.

There are some important security considerations for both network- and host-based intrusion detection systems:

- Consider where in the cloud to place the network IDS, for example adding it to the network perimeter and/or around sensitive networks. The placement depends on your network environment, but be sure to monitor the impact the IDS may have on your services, depending on where you choose to add it. A network IDS usually cannot inspect the contents of encrypted traffic such as TLS, but it may still provide value in identifying anomalous unencrypted traffic on the network.
- In some deployments it may be required to add host-based IDS on sensitive components on security-domain bridges. A host-based IDS may detect anomalous activity from compromised or unauthorized processes on the component. The IDS should transmit alert and log information on the management network.

### Server hardening

Servers in the cloud, including the undercloud and overcloud infrastructure, should implement hardening best practices. As operating-system and server hardening is common knowledge, the applicable best practices including, but not limited to, logging, user account restrictions, and regular updates are not covered here, but they should be applied to all infrastructure.

#### File integrity management (FIM)

File integrity management (FIM) is the method of ensuring that files such as sensitive system or application configuration files are not corrupted or changed to allow unauthorized access or malicious behavior. This can be done with a utility such as Samhain, which creates a checksum hash of a specified resource and then periodically verifies that hash, or with a tool such as dm-verity, which computes hashes over a block device and validates those hashes as the data is accessed by the system, before it is presented to the user.
These should be put in place to monitor and report on changes to system, hypervisor, and application configuration files such as `/etc/pam.d/system-auth` and `/etc/keystone/keystone.conf`, as well as kernel modules such as virtio. As a best practice, use the `lsmod` command to show what is regularly loaded on a system, to help determine what should or should not be included in FIM checks.

### Management interfaces

Administrators need to issue command and control over the cloud to carry out various operational functions, so it is important to understand and secure these command and control facilities. OpenStack provides several management interfaces for operators and tenants:

- OpenStack dashboard (horizon)
- OpenStack API
- Secure shell (SSH)
- OpenStack management utilities such as nova-manage and glance-manage
- Out-of-band management interfaces, such as IPMI

#### Dashboard

The OpenStack dashboard (horizon) provides administrators and tenants with a web-based graphical interface to provision and access cloud-based resources. The dashboard communicates with the back-end services through calls to the OpenStack API.

##### Capabilities

- As a cloud administrator, the dashboard provides an overall view of the size and state of your cloud. You can create users and tenants/projects, assign users to tenants/projects, and set limits on the resources available to them.
- The dashboard provides tenant users a self-service portal to provision their own resources within the limits set by administrators.
- The dashboard provides GUI support for routers and load balancers; for example, the dashboard now implements all of the core networking features.
- It is an extensible Django web application that allows easy plug-in of third-party products and services, such as billing, monitoring, and additional management tools.
- The dashboard can also be branded for service providers and other commercial vendors.

##### Security considerations

- The dashboard requires cookies and JavaScript to be enabled in the web browser.
- The web server that hosts the dashboard should be configured for TLS to ensure data is encrypted.
- Both the horizon web service and the OpenStack API it uses to communicate with the back end are susceptible to web attack vectors such as denial of service and must be monitored.
- It is now possible (though there are numerous deployment/security implications) to upload an image file directly from a user's hard disk to the OpenStack Image service through the dashboard. For multi-gigabyte images it is still strongly recommended that the upload be done using the glance CLI.
- Create and manage security groups through the dashboard. Security groups allow L3-L4 packet filtering for security policies to protect virtual machines.

##### Bibliography

- OpenStack.org, ReleaseNotes/Liberty, 2015. OpenStack Liberty release notes.

#### OpenStack API

The OpenStack API is a RESTful web service endpoint to access, provision, and automate cloud-based resources. Operators and users typically access the API through command-line utilities (for example, nova or glance), language-specific libraries, or third-party tools.

##### Capabilities

- To the cloud administrator, the API provides an overall view of the size and state of the cloud deployment and allows the creation of users and tenants/projects, assigning users to tenants/projects, and specifying resource quotas on a per tenant/project basis.
- The API provides a tenant interface for provisioning, managing, and accessing their resources.
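As a concrete sketch of those administrative capabilities exercised through the unified `python-openstackclient` CLI (which talks to the same API endpoints); project, user, role, and quota values here are illustrative examples:

```
# Create a project, a user in it, and grant that user a role.
openstack project create --description "Demo tenant" demo
openstack user create --project demo --password-prompt demo-user
openstack role add --project demo --user demo-user member

# Cap the resources the project may consume.
openstack quota set --instances 10 --cores 20 --ram 51200 demo
```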
##### Security considerations

- The API service should be configured for TLS to ensure data is encrypted.
- As a web service, the OpenStack API is susceptible to familiar web-site attack vectors such as denial-of-service attacks.

#### Secure shell (SSH)

It has become industry practice to use secure shell (SSH) access for the management of Linux and Unix systems. SSH uses secure cryptographic primitives for communication. Given the scope and importance of SSH in typical OpenStack deployments, it is important to understand best practices for deploying it.

##### Host key fingerprints

Often overlooked is the need for key management for SSH hosts. Because most or all hosts in an OpenStack deployment will provide an SSH service, it is important to have confidence in connections to these hosts. It cannot be understated that failing to provide a reasonably secure and accessible method to verify SSH host key fingerprints is ripe for abuse and exploitation.

All SSH daemons have a private host key and, upon connection, offer a host key fingerprint. This host key fingerprint is the hash of an unsigned public key. It is important that these fingerprints be known in advance of establishing SSH connections to those hosts; verifying host key fingerprints is instrumental in detecting man-in-the-middle attacks.

Typically, when an SSH daemon is installed, host keys are generated. It is necessary that the hosts have sufficient entropy during host key generation; insufficient entropy during host key generation can make it possible to eavesdrop on SSH sessions.

Once the SSH host key is generated, the host key fingerprint should be stored in a secure and queryable location. One particularly convenient solution is DNS using SSHFP resource records as defined in RFC 4255. For this to be secure, it is necessary that DNSSEC be deployed.
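A minimal sketch of publishing and consuming SSHFP records; the hostname is a placeholder, the key file is one of the standard OpenSSH host keys, and the DNS zone is assumed to be protected by DNSSEC as noted above:

```
# On the host: print SSHFP resource records for a host key, ready to paste into the zone.
ssh-keygen -r compute01.example.com -f /etc/ssh/ssh_host_ed25519_key.pub

# On clients: have ssh consult DNS for the fingerprint instead of trusting it blindly.
ssh -o VerifyHostKeyDNS=yes compute01.example.com
```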
#### Management utilities

The OpenStack management utilities are open source Python command-line clients that make API calls. There is a client for each OpenStack service (for example, nova, glance). In addition to the standard CLI clients, most of the services have a management command-line utility that makes direct calls to the database. These dedicated management utilities are slowly being deprecated.

##### Security considerations

- The dedicated management utilities (*-manage) in some cases use direct database connections.
- Ensure that the .rc file, which contains your credential information, is secured.

##### Bibliography

- OpenStack.org, "OpenStack End User Guide" section, 2016. OpenStack command-line clients overview.
- OpenStack.org, Set environment variables using the OpenStack RC file, 2016. Download and source the OpenStack RC file.

#### Out-of-band management interface

OpenStack management relies on out-of-band management interfaces such as the IPMI protocol to access the nodes running OpenStack components. IPMI is a very popular specification to remotely manage, diagnose, and reboot servers, whether the operating system is running or the system has crashed.

##### Security considerations

- Use strong passwords and safeguard them, or use client-side TLS authentication.
- Ensure that the network interfaces are on their own private (management or a separate) network. Segregate management domains with firewalls or other network gear.
- If you use a web interface to interact with the BMC/IPMI, always use the TLS interface, such as HTTPS on port 443. This TLS interface should not use self-signed certificates, as is often the default, but should have trusted certificates using the correctly defined fully qualified domain names (FQDNs).
- Monitor the traffic on the management network. The anomalies may be easier to track than on the busier compute nodes.
- Out-of-band management interfaces also often include graphical machine console access. It is often possible, but not necessarily the default, that these interfaces are encrypted. Consult with your system software documentation for encrypting these interfaces.

##### Bibliography

- SANS Technology Institute, InfoSec Handlers Diary Blog, 2012. Hacking servers that are turned off.
### Secure communication

Inter-device communication is a serious security concern. Between major project errors such as Heartbleed and more advanced attacks such as BEAST and CRIME, secure methods of communication over a network are becoming more important. It should be remembered, however, that encryption should be applied as one part of a larger security strategy: the compromise of an endpoint means that an attacker no longer needs to break the encryption used, but is able to view and manipulate messages as they are processed by the system.

This chapter reviews several features around configuring TLS to secure both internal and external resources, and calls out specific categories of systems that should be given specific attention:

- Introduction to TLS and SSL
- Certification authorities
- TLS libraries
- Cryptographic algorithms, cipher modes, and protocols
- Summary
- TLS proxies and HTTP services
- Examples
- HTTP Strict Transport Security
- Perfect forward secrecy
- Secure reference architectures
- SSL/TLS proxy in front
- SSL/TLS on the same physical hosts as the API endpoints
- SSL/TLS over load balancer
- Cryptographic separation of external and internal environments

#### Introduction to TLS and SSL

There are situations where security is required to ensure the confidentiality or integrity of network traffic in an OpenStack deployment. This is generally achieved with cryptographic measures, such as the Transport Layer Security (TLS) protocol.

In a typical deployment all traffic transmitted over public networks is secured, but security best practice dictates that internal traffic must also be secured. It is insufficient to rely on security-domain separation alone for protection. If an attacker gains access to the hypervisor or host resources, or compromises an API endpoint or any other service, they must not be able to easily inject or capture messages, commands, or otherwise affect the management capabilities of the cloud.

All domains should be secured with TLS, including the management domain services and intra-service communications. TLS provides the mechanisms to ensure authentication, non-repudiation, confidentiality, and integrity of user communications to the OpenStack services and between the OpenStack services themselves.
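To confirm that a given endpoint actually negotiates TLS with a modern protocol and a strong cipher, a quick probe with `openssl s_client` can be used; the host and port below are placeholders for one of your API endpoints:

```
# Print the negotiated protocol version and cipher for a TLS 1.2 handshake.
openssl s_client -connect keystone.example.com:5000 -tls1_2 </dev/null 2>/dev/null \
  | grep -E 'Protocol|Cipher'
```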
Due to the published vulnerabilities in the Secure Sockets Layer (SSL) protocols, we strongly recommend that TLS be used in preference to SSL, and that SSL be disabled in all cases unless compatibility with obsolete browsers or libraries is required.

Public Key Infrastructure (PKI) is the framework for securing communication in a network. It consists of a set of systems and processes to ensure traffic can be sent securely while validating the identity of the parties. The PKI profile described here is the Internet Engineering Task Force (IETF) Public Key Infrastructure (PKIX) profile developed by the PKIX working group. The core components of PKI are:

- **Digital certificates**: Signed public key certificates are data structures that contain verifiable data about an entity, its public key, and some other attributes. These certificates are issued by a Certificate Authority (CA). Because the certificate is signed by a CA that is trusted, once verified, the public key associated with the entity is guaranteed to be associated with that entity. The most common standard used to define these certificates is X.509; X.509 v3 is the current standard and is described in detail in RFC 5280. Certificates are issued by CAs as a mechanism for proving the identity of online entities. The CA digitally signs the certificate by creating a message digest from the certificate and encrypting the digest with its private key.
- **End entity**: The user, process, or system that is the subject of a certificate. The end entity sends its certificate request to a Registration Authority (RA) for approval. If approved, the RA forwards the request to a Certificate Authority (CA). The CA verifies the request and, if the information is correct, a certificate is generated and signed. This signed certificate is then sent to a certificate repository.
- **Relying party**: The endpoint that receives a digitally signed certificate that is verifiable by reference to the public key listed on the certificate. The relying party should be able to verify the certificate up the chain, ensure that it is not present in a CRL, and must also be able to verify that the certificate has not expired.
- **Certificate Authority (CA)**: A trusted entity, both by the end party and the party that relies upon the certificate, for certificate policies, management handling, and certificate issuance.
- **Registration Authority (RA)**: An optional system to which a CA delegates certain management functions, including functions such as authenticating end entities before they are issued a certificate by the CA.
- **Certificate Revocation List (CRL)**: A list of the serial numbers of certificates that have been revoked. End entities presenting these certificates should not be trusted in a PKI model. Revocation can happen for several reasons, for example key compromise or CA compromise.
- **CRL issuer**: An optional system to which a CA delegates the publication of certificate revocation lists.
- **Certificate repository**: The location where end-entity certificates and certificate revocation lists are stored and looked up, sometimes referred to as the certificate bundle.

PKI builds the framework on which to provide the cryptographic algorithms, cipher modes, and protocols for securing data and authentication. We strongly recommend securing all services with Public Key Infrastructure (PKI), including the use of TLS for API endpoints. It is impossible for the encryption or signing of transports or messages alone to solve all of these problems: hosts themselves must be secure and implement policies, namespaces, and other controls to protect their private credentials and keys. However, the challenges of key management and protection do not reduce the necessity of these controls, nor lessen their importance.

#### Certification authorities

Many organizations have an established Public Key Infrastructure with their own Certificate Authority (CA), certificate policies, and management, and they should use these to issue certificates for internal OpenStack users or services. Organizations in which the public security domain is Internet-facing additionally need certificates signed by a widely recognized public CA. For cryptographic communications over the management network, it is recommended that a public CA not be used; instead, we expect and recommend that most deployments deploy their own internal CA.
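When verifying that a service certificate was really issued by the intended (internal or public) CA, OpenSSL's inspection commands are a convenient sketch; the file paths below are placeholders for your own certificate and CA bundle:

```
# Show who a certificate was issued to and by, and its validity window.
openssl x509 -noout -subject -issuer -dates -in /etc/pki/tls/certs/keystone.crt

# Verify that the certificate chains up to the expected CA bundle.
openssl verify -CAfile /etc/pki/tls/certs/internal-ca.pem /etc/pki/tls/certs/keystone.crt
```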
It is recommended that OpenStack cloud architects consider using separate PKI deployments for internal systems and customer-facing services. This allows the cloud deployer to maintain control of its PKI infrastructure and makes requesting, signing, and deploying certificates for internal systems easier. Advanced configurations may use separate PKI deployments for different security domains, which allows deployers to maintain cryptographic separation of environments and ensures that certificates issued to one are not recognized by another.

Certificates used to support TLS on Internet-facing cloud endpoints (or customer interfaces where the customer is not expected to have installed anything other than standard operating-system-provided certificate bundles) should be provisioned using Certificate Authorities that are installed in the operating-system certificate bundle. Typical well-known vendors include Let's Encrypt, Verisign, and Thawte, but many others exist.

There are management, policy, and technical challenges around creating and signing certificates. This is an area where cloud architects or operators may wish to seek the advice of industry leaders and vendors, in addition to the guidance recommended here.

#### TLS libraries

Components, services, and applications within the OpenStack ecosystem, or dependencies of OpenStack, are implemented with, or can be configured to use, TLS libraries. The TLS and HTTP services within OpenStack are typically implemented using OpenSSL, which has modules that have been validated for FIPS 140-2. However, keep in mind that each application or service can still introduce weaknesses in how it uses the OpenSSL libraries.

#### Cryptographic algorithms, cipher modes, and protocols

We recommend using at least TLS 1.2. Older versions such as TLS 1.0, 1.1, and all versions of SSL (the predecessor of TLS) are vulnerable to multiple publicly known attacks and must not be used. TLS 1.1 may be used for broad client compatibility, but exercise caution when enabling it: only enable TLS version 1.1 if there is a mandatory compatibility requirement and if you are aware of the risks involved.

When you are using TLS 1.2 and control both the clients and the server, the cipher suite should be limited to `ECDHE-ECDSA-AES256-GCM-SHA384`. In circumstances where you do not control both endpoints and are using TLS 1.1 or 1.2, the more general `HIGH:!aNULL:!eNULL:!DES:!3DES:!SSLv3:!TLSv1:!CAMELLIA` is a reasonable cipher selection.
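Before rolling a cipher string like the one above into a server configuration, it can be useful to expand it and see exactly which suites it allows; a minimal check with the OpenSSL command-line tool (assuming OpenSSL is installed on the host) looks like this:

```
# List every cipher suite the candidate string would permit, one per line.
openssl ciphers -v 'HIGH:!aNULL:!eNULL:!DES:!3DES:!SSLv3:!TLSv1:!CAMELLIA'
```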
However, as this book does not intend to be a thorough reference on cryptography, we do not wish to be prescriptive about which specific algorithms or cipher modes you should enable or disable in your OpenStack services. Instead, we recommend some authoritative references for more information:

- National Security Agency, Suite B Cryptography
- OWASP Guide to Cryptography
- OWASP Transport Layer Protection Cheat Sheet
- SoK: SSL and HTTPS: Revisiting past challenges and evaluating certificate trust model enhancements
- The Most Dangerous Code in the World: Validating SSL Certificates in Non-Browser Software
- OpenSSL and FIPS 140-2

#### Summary

Given the complexity of the OpenStack components and the number of deployment possibilities, you must take care to ensure that each component gets the appropriate configuration of TLS certificates, keys, and CAs. Subsequent sections discuss the following services:

- Compute API endpoints
- Identity API endpoints
- Networking API endpoints
- Storage API endpoints
- Messaging server
- Database server
- Dashboard

#### TLS proxies and HTTP services

OpenStack endpoints are HTTP services providing APIs to both end users on public networks and to other OpenStack services on the management network. It is highly recommended that all of these requests, both internal and external, operate over TLS. To achieve this goal, API services must be deployed behind a TLS proxy that can establish and terminate TLS sessions. The following table offers a non-exhaustive list of open source software that can be used for this purpose:

- Pound
- Stud
- Nginx
- Apache httpd

In cases where software termination offers insufficient performance, hardware accelerators may be worth exploring as an alternative option. It is important to be mindful of the size of requests that will be processed by any chosen TLS proxy.

#### Examples

Below we provide sample recommended configuration settings for enabling TLS in some of the more popular web servers/TLS terminators. Before we delve into the configurations, we briefly discuss the ciphers' configuration element and its format. For a more exhaustive treatment of the available ciphers and the OpenSSL cipher list format, see ciphers.

```
ciphers = "HIGH:!RC4:!MD5:!aNULL:!eNULL:!EXP:!LOW:!MEDIUM"
```

or

```
ciphers = "kEECDH:kEDH:kRSA:HIGH:!RC4:!MD5:!aNULL:!eNULL:!EXP:!LOW:!MEDIUM"
```
Cipher string options are separated by ":", while "!" provides negation of the element that immediately follows it. The order of the elements indicates preference unless overridden by qualifiers such as HIGH. Let us take a closer look at the elements in the example strings above.

- **kEECDH:kEDH**: Ephemeral Elliptic Curve Diffie-Hellman (abbreviated EECDH or ECDHE) and Ephemeral Diffie-Hellman (abbreviated EDH or DHE), which uses prime field groups. Both methods provide Perfect Forward Secrecy (PFS); see Perfect forward secrecy for more discussion on how to configure PFS properly. Ephemeral elliptic curves require the server to be configured with a named curve, and provide better security at lower computational cost than prime field groups; prime field groups are, however, more widely implemented, so both are typically included in the list.
- **kRSA**: Cipher suites using the RSA exchange, authentication, or either respectively.
- **HIGH**: Selects the highest-possible security cipher in the negotiation phase. These typically have keys of length 128 bits or longer.
- **!RC4**: No RC4. RC4 has flaws in the context of TLS; see On the Security of RC4 in TLS and WPA.
- **!MD5**: No MD5. MD5 is not collision resistant and is therefore not acceptable for message authentication codes (MAC) or signatures.
- **!aNULL:!eNULL**: Disallows clear text.
- **!EXP**: Disallows export encryption algorithms, which by design tend to be weak, typically using 40- and 56-bit keys. US export restrictions on cryptographic systems have been lifted, so these no longer need to be supported.
- **!LOW:!MEDIUM**: Disallows low- (56- or 64-bit keys) and medium- (128-bit keys) strength ciphers because of their vulnerability to brute-force attacks (for example, 2-DES). This rule still permits Triple Data Encryption Standard (Triple DES), also known as the Triple Data Encryption Algorithm (TDEA), and the Advanced Encryption Standard (AES), each of which has keys greater than or equal to 128 bits and is thus more secure.
- **Protocols**: Protocols are enabled/disabled through SSL_CTX_set_options. We recommend disabling SSLv2/v3 and enabling TLS.

##### Pound

This Pound example enables AES-NI acceleration, which helps performance on systems with processors that support this feature. The default configuration file is located at /etc/pound/pound.cfg on Ubuntu, RHEL, and CentOS, and at /etc/pound.cfg on openSUSE and SUSE Linux Enterprise.

```
## see pound(8) for details
daemon 1
######################################################################
## global options:
User "swift"
Group "swift"
#RootJail "/chroot/pound"
## Logging: (goes to syslog by default)
## 0 no logging
## 1 normal
## 2 extended
## 3 Apache-style (common log format)
LogLevel 0
## turn on dynamic scaling (off by default)
# Dyn Scale 1
## check backend every X secs:
Alive 30
## client timeout
#Client 10
## allow 10 second proxy connect time
ConnTO 10
## use hardware-acceleration card supported by openssl(1):
SSLEngine "aesni"
# poundctl control socket
Control "/var/run/pound/poundctl.socket"
######################################################################
## listen, redirect and ... to:
## redirect all swift requests on port 443 to local swift proxy
ListenHTTPS
    Address 0.0.0.0
    Port    443
    Cert    "/etc/pound/cert.pem"
    ## Certs to accept from clients
    ## CAlist "CA_file"
    ## Certs to use for client verification
    ## VerifyList "Verify_file"
    ## Request client cert - don't verify
    ## Ciphers "AES256-SHA"
    ## allow PUT and DELETE also (by default only GET, POST and HEAD)?:
    NoHTTPS11 0
    ## allow PUT and DELETE also (by default only GET, POST and HEAD)?:
    xHTTP 1
    Service
        BackEnd
            Address 127.0.0.1
            Port    80
        End
    End
End
```
##### Stud

The ciphers line can be tweaked based on your needs, but this is a reasonable starting point. The default configuration file is located in the /etc/stud directory; it is not, however, provided by default. Values in angle brackets are placeholders to fill in for your deployment.

```
# SSL x509 certificate file.
pem-file = "<certificate file>"
# SSL protocol.
tls = on
ssl = off
# List of allowed SSL ciphers.
# OpenSSL's high-strength ciphers which require authentication
# NOTE: forbids clear text, use of RC4 or MD5 or LOW and MEDIUM strength ciphers
ciphers = "HIGH:!RC4:!MD5:!aNULL:!eNULL:!EXP:!LOW:!MEDIUM"
# Enforce server cipher list order
prefer-server-ciphers = on
# Number of worker processes
workers = 4
# Listen backlog size
backlog = 1000
# TCP socket keepalive interval in seconds
keepalive = 3600
# Chroot directory
chroot = ""
# Set uid after binding a socket
user = "www-data"
# Set gid after binding a socket
group = "www-data"
# Quiet execution, report only error messages
quiet = off
# Use syslog for logging
syslog = on
# Syslog facility to use
syslog-facility = "daemon"
# Run as daemon
daemon = off
# Report client address using SENDPROXY protocol for haproxy
# Disabling this until we upgrade to HAProxy 1.5
write-proxy = off
```

##### Nginx

This Nginx example requires TLS v1.1 or v1.2 for maximum security. The ssl_ciphers line can be tweaked based on your needs, but this is a reasonable starting point. The default configuration file is /etc/nginx/nginx.conf.

```
server {
    listen <port> ssl;
    ssl_certificate <path to server certificate>;
    ssl_certificate_key <path to private key>;
    ssl_protocols TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!RC4:!MD5:!aNULL:!eNULL:!EXP:!LOW:!MEDIUM;
    ssl_session_tickets off;
    server_name _;
    keepalive_timeout 5;

    location / {
    }
}
```

##### Apache

The default configuration file is located at /etc/apache2/apache2.conf on Ubuntu, at /etc/httpd/conf/httpd.conf on RHEL and CentOS, and at /etc/apache2/httpd.conf on openSUSE and SUSE Linux Enterprise.

```
<VirtualHost <ip address>:80>
  ServerName <site FQDN>
  RedirectPermanent / https://<site FQDN>/
</VirtualHost>
<VirtualHost <ip address>:443>
  ServerName <site FQDN>
  SSLEngine On
  SSLProtocol +TLSv1 +TLSv1.1 +TLSv1.2
  SSLCipherSuite HIGH:!RC4:!MD5:!aNULL:!eNULL:!EXP:!LOW:!MEDIUM
  SSLCertificateFile    /path/<site FQDN>.crt
  SSLCACertificateFile  /path/<site FQDN>.crt
  SSLCertificateKeyFile /path/<site FQDN>.key
  WSGIScriptAlias / <WSGI script location>
  WSGIDaemonProcess horizon user=<user> group=<group> processes=3 threads=10
  Alias /static <static files location>
  <Directory <WSGI dir>>
    # For http server 2.2 and earlier:
    Order allow,deny
    Allow from all

    # Or, in Apache http server 2.4 and later:
    # Require all granted
  </Directory>
</VirtualHost>
```

The Compute API SSL endpoint in Apache, which must be paired with a short WSGI script:

```
<VirtualHost <ip address>:8447>
  ServerName <site FQDN>
  SSLEngine On
  SSLProtocol +TLSv1 +TLSv1.1 +TLSv1.2
  SSLCipherSuite HIGH:!RC4:!MD5:!aNULL:!eNULL:!EXP:!LOW:!MEDIUM
  SSLCertificateFile    /path/<site FQDN>.crt
  SSLCACertificateFile  /path/<site FQDN>.crt
  SSLCertificateKeyFile /path/<site FQDN>.key
  SSLSessionTickets Off
  WSGIScriptAlias / <WSGI script location>
  WSGIDaemonProcess osapi user=<user> group=<group> processes=3 threads=10
  <Directory <WSGI dir>>
    # For http server 2.2 and earlier:
    Order allow,deny
    Allow from all

    # Or, in Apache http server 2.4 and later:
    # Require all granted
  </Directory>
</VirtualHost>
```

#### HTTP Strict Transport Security

We recommend that all production deployments use HTTP Strict Transport Security (HSTS). This header prevents browsers from making insecure connections after they have made a single secure one, and is especially important if you have deployed your HTTP services on a public or an untrusted domain. To enable HSTS, configure your web server to send a header such as the following with all requests:

```
Strict-Transport-Security: max-age=31536000; includeSubDomains
```
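A quick way to confirm that the deployed web server actually emits the HSTS header is a headers-only request with curl; the URL below is a placeholder for your dashboard or API endpoint:

```
# Fetch only the response headers and check for the HSTS policy.
curl -sI https://dashboard.example.com/ | grep -i strict-transport-security
```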
Start with a short timeout, such as one day, during testing, and raise it to one year once testing has shown that it does not cause problems for users. Note that once this header is set to a large timeout, it is (by design) very difficult to disable.

#### Perfect forward secrecy

Configuring TLS servers for perfect forward secrecy requires careful planning around key size, session IDs, and session tickets. In addition, for multi-server deployments, shared state is also an important consideration. The Apache and Nginx example configurations above disable the session ticket options to help mitigate some of these concerns. Real-world deployments may wish to enable this feature for improved performance. This can be done securely, but it requires special consideration around key management, and such configurations are beyond the scope of this guide. We suggest reading ImperialViolet's How to botch TLS forward secrecy as a starting point for understanding the problem space.

#### Secure reference architectures

We recommend using SSL/TLS on both the public networks and the management networks for TLS proxies and HTTP services. However, if actually deploying SSL/TLS everywhere is too difficult, we recommend evaluating your OpenStack SSL/TLS needs and following one of the architectures discussed here.

The first thing to do when evaluating your OpenStack SSL/TLS needs is to identify the threats. You can divide these threats into external and internal attacker categories, but the lines tend to get blurred because certain components of OpenStack operate on both the public and management networks.

For publicly facing services, the threats are pretty straightforward. Users will be authenticating against Horizon and Keystone with their username and password, and will also be accessing the API endpoints of other services using their Keystone tokens. If this network traffic is unencrypted, an attacker can intercept the passwords and tokens using a man-in-the-middle attack and then use those valid credentials to perform malicious operations. All real deployments should use SSL/TLS to protect publicly facing services.
For services that are deployed on management networks, the threats are not as clear-cut, due to the bridging of security domains with network security. There is always the chance that an administrator with access to the management network decides to do something malicious, and in that situation SSL/TLS is not going to help if the attacker is allowed to access the private key. Not everyone on the management network would be allowed to access the private key, of course, so there is still value in using SSL/TLS to protect yourself from internal attackers. Even if everyone who is allowed to access your management network is 100% trusted, there is still a threat that an unauthorized user gains access to your internal network by exploiting a misconfiguration or software vulnerability. One must keep in mind that users run their own code on instances in the OpenStack Compute nodes, which are deployed on the management network. If a vulnerability allows them to break out of the hypervisor, they will have access to your management network. Using SSL/TLS on the management network minimizes the damage that an attacker can cause.

#### SSL/TLS proxy in front

It is generally accepted that it is best to encrypt sensitive data as early as possible and decrypt it as late as possible. Despite this best practice, it is common to use an SSL/TLS proxy in front of the OpenStack services and to use clear communication afterwards. Some of the reasons this architecture is chosen are:

- Native SSL/TLS in the OpenStack services does not perform or scale as well as SSL proxies, particularly for Python implementations such as Eventlet.
- Native SSL/TLS in the OpenStack services is not as well scrutinized or audited as more proven solutions.
- Native SSL/TLS configuration is difficult: it is not well documented or tested, and is inconsistent across services.
- Privilege separation: OpenStack service processes should not have direct access to the private keys used for SSL/TLS.
- Traffic inspection is needed for load balancing.

All of these are valid concerns, but none of them prevent SSL/TLS from being used on the management network. Let's consider the next deployment model.
### SSL/TLS over the same physical hosts as the API endpoints

This is very similar to an SSL/TLS proxy in front, except that the SSL/TLS proxy is on the same physical system as the API endpoint. The API endpoint is configured to listen only on the local network interface, and all remote communication with the API endpoint goes through the SSL/TLS proxy. This deployment model addresses a number of the bullet points above: a proven, well-performing SSL implementation is used; the same SSL proxy software is used for all services, so SSL configuration for the API endpoints is consistent; and the OpenStack service processes do not have direct access to the private keys used for SSL/TLS, because the SSL proxies run as a different user and access is restricted using permissions (plus additional mandatory access controls such as SELinux). Ideally the API endpoints would listen on a Unix socket so that access could also be restricted with permissions and mandatory access controls; unfortunately, based on our testing, this does not currently seem to work in Eventlet. It is a good future development goal.

### SSL/TLS over load balancers

What about high-availability or load-balanced deployments that need to inspect traffic? The previous deployment model (SSL/TLS over the same physical hosts as the API endpoints) does not allow for deep packet inspection because the traffic is encrypted. If the traffic only needs to be inspected for basic routing purposes, it may not be necessary for the load balancer to have access to the unencrypted traffic. HAProxy can extract the SSL/TLS session ID during the handshake, which can then be used to achieve session affinity (session ID configuration details here). HAProxy can also use the TLS Server Name Indication (SNI) extension to determine where traffic should be routed (SNI configuration details here). These features likely cover some of the most common load balancer needs, and HAProxy can then pass the HTTPS traffic straight through to the API endpoint systems.
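A minimal sketch of such an HAProxy pass-through, routing on SNI without terminating TLS, might look like this; the host names and back-end addresses are illustrative assumptions only:

```haproxy
# Illustrative TCP pass-through that routes on SNI while leaving TLS intact
frontend public_https
    bind *:443
    mode tcp
    tcp-request inspect-delay 5s
    tcp-request content accept if { req.ssl_hello_type 1 }
    use_backend keystone_api if { req.ssl_sni -i identity.example.org }
    use_backend nova_api     if { req.ssl_sni -i compute.example.org }

backend keystone_api
    mode tcp
    server keystone1 192.0.2.10:443 check

backend nova_api
    mode tcp
    server nova1 192.0.2.20:443 check
```

Because the proxy never decrypts the stream, it cannot inspect payloads; it only reads the ClientHello, which is exactly the trade-off described above.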
### Cryptographic separation of external and internal environments

What if you want cryptographic separation of your external and internal environments? A public cloud provider would likely want its public-facing services (or proxies) to use certificates issued by a CA that chains up to a trusted root CA distributed in popular web browser SSL/TLS software. For internal services, the provider might instead want to use its own PKI to issue SSL/TLS certificates. This cryptographic separation can be achieved by terminating SSL at the network boundary and then re-encrypting using internally issued certificates. The traffic is unencrypted for a brief period on the public-facing SSL/TLS proxy, but it is never transmitted over the network in the clear. The same re-encryption approach can also be used if deep packet inspection is really needed on a load balancer.

As with most things, there are trade-offs. The main trade-off is between security and performance: encryption has a cost, but so does being compromised. Security and performance requirements differ for every deployment, so how SSL/TLS is used is ultimately an individual decision.

## API endpoints

The process of engaging an OpenStack cloud begins with querying an API endpoint. While public and private endpoints present different challenges, they are high-value assets that can pose significant risk if compromised.

This chapter recommends security hardening for both public and private-facing API endpoints:

- API endpoint configuration recommendations
- Internal API communications
- Paste and middleware
- API endpoint process isolation and policy
- API endpoint rate limiting

### API endpoint configuration recommendations

#### Internal API communications

OpenStack provides both public-facing and private API endpoints. By default, OpenStack components use the publicly defined endpoints. We recommend configuring these components to use the API endpoint within the proper security domain.
Services select their respective API endpoints based on the OpenStack service catalog, and they might not obey the listed public or internal API endpoint values. This can lead to internal management traffic being routed to external API endpoints.

#### Configure internal URLs in the Identity service catalog

The Identity service catalog should be aware of your internal URLs. While this feature is not used by default, it may be enabled through configuration. In addition, it should be forward-compatible with the expected changes once this behavior becomes the default.

To register an internal URL for an endpoint:

```console
$ openstack endpoint create identity \
  --region RegionOne internal \
  https://MANAGEMENT_IP:5000/v3
```

Replace MANAGEMENT_IP with the management IP address of your controller node.

#### Configure applications for internal URLs

You can force some services to use specific API endpoints. Therefore, we recommend that every OpenStack service that communicates with the API of another service be explicitly configured to access the correct internal API endpoint.

Each project may present an inconsistent way of defining target API endpoints. Future releases of OpenStack seek to resolve these inconsistencies through consistent use of the Identity service catalog.

Configuration example #1: nova

```ini
cinder_catalog_info='volume:cinder:internalURL'
glance_protocol='https'
neutron_url='https://neutron-host:9696'
neutron_admin_auth_url='https://neutron-host:9696'
s3_host='s3-host'
s3_use_ssl=True
```

Configuration example #2: cinder

```ini
glance_host = 'https://glance-server'
```

### Paste and middleware

Most API endpoints and other HTTP services in OpenStack use the Python Paste Deploy library. From a security perspective, this library allows the request filter pipeline to be manipulated through the application's configuration. Each element in this chain is referred to as middleware. Changing the order of filters in the pipeline or adding additional middleware can have unpredictable security consequences.

Commonly, implementers add middleware to extend OpenStack's base functionality. We recommend that implementers carefully weigh the risk introduced by adding non-standard software components to their HTTP request pipeline.

For more information about Paste Deploy, see the Python Paste Deployment documentation.
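To make the pipeline idea concrete, a hypothetical api-paste.ini fragment patterned on common OpenStack services could look like the following; the filter names, factories, and the application entry point are illustrative, not a drop-in configuration for any specific service:

```ini
# Hypothetical Paste Deploy pipeline; filter order is security relevant
[pipeline:main]
pipeline = request_id sizelimit authtoken service_app

[filter:request_id]
paste.filter_factory = oslo_middleware:RequestId.factory

[filter:sizelimit]
paste.filter_factory = oslo_middleware:RequestBodySizeLimiter.factory

[filter:authtoken]
paste.filter_factory = keystonemiddleware.auth_token:filter_factory

[app:service_app]
# Placeholder: the real application factory depends on the service
paste.app_factory = example_service.wsgi:app_factory
```

Anything placed before authtoken sees every request prior to authentication, which is why auditing non-standard middleware and its position in the chain matters.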
### API endpoint process isolation and policy

You should isolate API endpoint processes, especially those that reside within the public security domain, as much as possible. Where deployments allow, API endpoints should be deployed on separate hosts for increased isolation.

#### Namespaces

Many operating systems now provide compartmentalization support. Linux supports namespaces to assign processes into independent domains. Other parts of this guide cover system compartmentalization in more detail.

#### Network policy

Because API endpoints typically bridge multiple security domains, you must pay particular attention to the compartmentalization of the API processes. See the guidance on bridging security domains for additional information in this area.

With careful modeling, you can use network ACLs and IDS technologies to enforce explicit point-to-point communication between network services. As a critical cross-domain service, this type of explicit enforcement works well for OpenStack's message queue service.

To enforce policies, you can configure services, host-based firewalls (such as iptables), local policy (SELinux or AppArmor), and optionally global network policy.

#### Mandatory access controls

You should isolate API endpoint processes from each other and from other processes on the machine. The configuration of those processes should be restricted not only by discretionary access controls but also by mandatory access controls. The goal of these enhanced access controls is to aid in the containment of API endpoint security breaches: with mandatory access controls, such breaches severely limit access to resources and provide earlier alerting on such events.

### API endpoint rate limiting

Rate limiting is a means to control the frequency of events received by a network-based application. When robust rate limiting is not present, an application can become susceptible to various denial-of-service attacks. This is especially true for APIs, which by their nature are designed to accept a high frequency of similar request types and operations.

Within OpenStack, it is recommended that all endpoints, especially public ones, be given an extra layer of protection by means of either a rate-limiting proxy or a web application firewall.

It is critical that operators carefully plan and consider the individual performance needs of users and services within their OpenStack cloud when configuring and implementing any rate-limiting functionality. Common solutions for providing rate limiting are Nginx, HAProxy, OpenRepose, or Apache modules such as mod_ratelimit, mod_qos, or mod_security.
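As one hedged example of the proxy-based approach, an Nginx front end can apply a per-client request limit before traffic reaches an API endpoint; the zone size, rate, burst, and upstream address below are placeholders to be tuned per deployment:

```nginx
# Illustrative per-client rate limiting in front of an API endpoint (values are examples)
limit_req_zone $binary_remote_addr zone=api_limit:10m rate=10r/s;

server {
    listen 443 ssl;
    server_name api.example.org;
    # TLS certificate directives omitted for brevity; see the secure communication guidance

    location / {
        # Allow short bursts, reject sustained floods (HTTP 503 by default)
        limit_req zone=api_limit burst=20 nodelay;
        proxy_pass http://127.0.0.1:5000;
    }
}
```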
## Identity

The OpenStack Identity service (keystone) provides identity, token, catalog, and policy services for use specifically by services in the OpenStack family. Identity is organized as a group of internal services exposed through one or more endpoints. Many of these services are used in a combined fashion by the front end. For example, an authenticate call validates user and project credentials with the Identity service and, on success, creates and returns a token with the token service. More information can be found in the Keystone developer documentation.

- Authentication
  - Invalid login attempts
  - Multi-factor authentication
  - Authentication methods
    - Internally implemented authentication methods
    - External authentication methods
- Authorization
  - Establish formal access control policies
  - Service authorization
  - Administrative users
  - End users
- Policies
- Tokens
  - Fernet tokens
  - JWT tokens
- Domains
- Federated keystone
  - Why use Federated Identity?
- Checklist
  - Check-Identity-01: Is user/group ownership of config files set to keystone?
  - Check-Identity-02: Are strict permissions set for Identity configuration files?
  - Check-Identity-03: Is TLS enabled for Identity?
  - Check-Identity-04: (deprecated)
  - Check-Identity-05: Is max_request_body_size set to the default (114688)?
  - Check-Identity-06: Disable admin token in /etc/keystone/keystone.conf
  - Check-Identity-07: insecure_debug is false in /etc/keystone/keystone.conf
  - Check-Identity-08: Use fernet tokens in /etc/keystone/keystone.conf

### Authentication

Authentication is an integral part of any real-world OpenStack deployment, so careful thought should be given to this aspect of system design. A complete treatment of this topic is beyond the scope of this guide, but the following sections present some key topics.

Fundamentally, authentication is the process of confirming identity: that a user is actually who they claim to be. A familiar example is providing a username and password when logging in to a system.

The OpenStack Identity service (keystone) supports multiple methods of authentication, including user name and password, LDAP, and external authentication methods.
Upon successful authentication, the Identity service provides the user with an authorization token used for subsequent service requests.

Transport Layer Security (TLS) provides authentication between services and persons using X.509 certificates. Although the default mode for TLS is server-side only authentication, certificates may also be used for client authentication.

#### Invalid login attempts

Starting with the Newton release, the Identity service can limit access to an account after repeated failed login attempts. A pattern of repeated failed login attempts is usually an indicator of brute-force attacks (refer to attack types). This type of attack is more prevalent in public cloud deployments.

For older deployments that need this capability, prevention is possible by using an external authentication system that blocks an account after a configured number of failed login attempts. The account can then only be unblocked with further side-channel intervention.

If prevention is not an option, detection can be used to mitigate damage. Detection involves frequent review of access control logs to identify unauthorized attempts to access accounts. Possible remediation includes reviewing the strength of the user's password, or blocking the network source of the attack through firewall rules. Firewall rules on the keystone server that restrict the number of connections can be used to reduce the attack's effectiveness and thus dissuade the attacker.

In addition, it is useful to examine account activity for unusual login times and suspicious actions, and to take corrective action such as disabling the account. This approach is commonly taken by credit card providers for fraud detection and alerting.

#### Multi-factor authentication

Employ multi-factor authentication for network access to privileged user accounts. The Identity service supports external authentication services through the Apache web server that can provide this functionality. Servers may also enforce client-side authentication using certificates.

This recommendation provides insulation from brute force, social engineering, and both spear and mass phishing attacks that may compromise administrator passwords.
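Following up on the invalid-login-attempt limiting described above (available since Newton), keystone exposes lockout options under [security_compliance] in keystone.conf; the numbers below are illustrative, not recommended values:

```ini
# Illustrative keystone.conf fragment: lock an account after repeated failures
[security_compliance]
# Failed authentication attempts allowed before the account is locked
lockout_failure_attempts = 5
# Lockout period in seconds
lockout_duration = 1800
```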
#### Authentication methods

##### Internally implemented authentication methods

The Identity service can store user credentials in an SQL database, or it may use an LDAP-compliant directory server. The Identity database may be separate from databases used by other OpenStack services to reduce the risk of a compromise of the stored credentials.

When you use a user name and password to authenticate, Identity does not enforce policies on password strength, expiration, or failed authentication attempts as recommended by NIST Special Publication 800-118 (draft). Organizations that want to enforce stronger password policies should consider using Identity extensions or external authentication services.

LDAP simplifies integration of Identity authentication into an organization's existing directory service and user account management processes.

Authentication and authorization policy in OpenStack may be delegated to another service. A typical use case is an organization that seeks to deploy a private cloud and already has a database of employees and users in an LDAP system. Using this as the authentication authority, requests to the Identity service are delegated to the LDAP system, which then authorizes or denies based on its policies. Upon successful authentication, the Identity service generates a token that is used for access to the authorized services.

Note that if the LDAP system has attributes defined for the user, such as admin, finance, HR, and so on, these attributes must be mapped into roles and groups within Identity for use by the various OpenStack services. The /etc/keystone/keystone.conf file maps LDAP attributes to Identity attributes.

The Identity service must not be allowed to write to LDAP services used for authentication outside of the OpenStack deployment, as this would allow a sufficiently privileged keystone user to make changes to the LDAP directory. This would enable privilege escalation within the wider organization or facilitate unauthorized access to other information and resources. In such a deployment, user provisioning is outside the scope of the OpenStack deployment.

Note: There is an OpenStack Security Note (OSSN) regarding keystone.conf permissions. There is also an OpenStack Security Note (OSSN) regarding a potential DoS attack.
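For the LDAP-backed approach discussed above, a minimal read-only identity back end sketch might look like this; the server address, suffix, and bind account are assumptions for illustration, and the bind account should have no write access to the directory:

```ini
# Illustrative keystone.conf fragment for a read-only LDAP identity back end
[identity]
driver = ldap

[ldap]
url = ldap://ldap.example.org
# Read-only bind account; keystone must not be able to write to the directory
user = cn=keystone-readonly,ou=service,dc=example,dc=org
password = REPLACE_ME
suffix = dc=example,dc=org
user_tree_dn = ou=users,dc=example,dc=org
user_objectclass = inetOrgPerson
```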
##### External authentication methods

Organizations may desire to implement external authentication for compatibility with existing authentication services or to enforce stronger authentication policy requirements. Although passwords are the most common form of authentication, they can be compromised through numerous methods, including keystroke logging and password compromise. External authentication services can provide alternative forms of authentication that minimize the risk from weak passwords.

These include:

- Password policy enforcement: requires user passwords to conform to minimum standards for length, diversity of characters, expiration, or failed login attempts. In an external authentication scenario, this would be the password policy on the original identity store.
- Multi-factor authentication: the authentication service requires the user to provide information based on something they have, such as a one-time password token or X.509 certificate, and something they know, such as a password.
- Kerberos: a network protocol that uses "tickets" for mutual authentication and protects communication between clients and servers. A Kerberos ticket-granting ticket can securely provide tickets for specific services.

### Authorization

The Identity service supports the notion of groups and roles. Users belong to groups, and a group has a list of roles. OpenStack services reference the roles of the user attempting to access the service. The OpenStack policy enforcer middleware takes into account the policy rules associated with each resource, then the user's group/role associations, to determine whether access to the requested resource is allowed.

The policy enforcement middleware enables fine-grained access control to OpenStack resources. The behavior of policies is discussed in depth in Policies.

#### Establish formal access control policies

Prior to configuring roles, groups, and users, document your required access control policies for the OpenStack installation.
The policies should be consistent with any regulatory or legal requirements of the organization. Future modifications to the access control configuration should be consistent with the formal policies. The policies should include the conditions and processes for creating, deleting, disabling, and enabling accounts, and for assigning privileges to accounts. Review the policies periodically and ensure that the configuration complies with the approved policies.

#### Service authorization

Cloud administrators must define a user with the admin role for each service, as described in the OpenStack Administrator Guide. This service account provides the service with the authorization to authenticate users.

The Compute and Object Storage services can be configured to use the Identity service to store authentication information. Other options for storing authentication information include the use of the "tempAuth" file; however, this should not be deployed in a production environment because the password is displayed in plain text.

The Identity service supports client authentication for TLS, which may be enabled. In addition to the user name and password, TLS client authentication provides an extra authentication factor that gives greater confidence in user identification. It reduces the risk of unauthorized access when user names and passwords may have been compromised. However, issuing certificates to users creates additional administrative overhead and cost, which may not be feasible in every deployment.

Note: We recommend that you use client authentication with TLS for the authentication of services to the Identity service.

The cloud administrator should protect sensitive configuration files from unauthorized modification. This can be achieved with mandatory access control frameworks such as SELinux, covering /etc/keystone/keystone.conf as well as X.509 certificates.

Client authentication with TLS requires certificates to be issued to services. These certificates can be signed by an external or internal certificate authority. By default, OpenStack services check the validity of certificate signatures against trusted CAs, and the connection fails if the signature is invalid or the CA is untrusted.
Cloud deployers may use self-signed certificates. In this case, the validity check must be disabled or the certificate should be marked as trusted. To disable validation of self-signed certificates, set insecure=True in the [filter:authtoken] section of the /etc/nova/api.paste.ini file. This setting also disables certificate validation for other components.

#### Administrative users

We recommend that administrative users authenticate using the Identity service together with an external authentication service that supports two-factor authentication, such as a certificate. This reduces the risk from passwords that may be compromised. This recommendation is in line with NIST 800-53 IA-2(1) guidance on the use of multi-factor authentication for network access to privileged accounts.

#### End users

The Identity service can directly provide end-user authentication, or it can be configured to use external authentication methods to conform to an organization's security policies and requirements.

### Policies

Each OpenStack service defines the access policies for its resources in an associated policy file. A resource, for example, could be API access, the ability to attach to a volume, or the ability to fire up instances. The policy rules are specified in JSON format and the file is called policy.json. The syntax and format of this file are discussed in the Configuration Reference.

Cloud administrators can modify or update these policies to control access to the various resources. Ensure that any changes to the access control policies do not unintentionally weaken the security of any resource. Also note that changes to the policy.json file take effect immediately and do not require the service to be restarted.

The following example shows how a service restricts access to create, update, and delete resources to only those users that have the role cloud_admin, which is defined as the combination of role = admin and domain_id = admin_domain_id, while get and list resources are available to users with the cloud_admin or admin role:

```json
{
    "admin_required": "role:admin",
    "cloud_admin": "rule:admin_required and domain_id:admin_domain_id",
    "service_role": "role:service",
    "service_or_admin": "rule:admin_required or rule:service_role",
    "owner": "user_id:%(user_id)s or user_id:%(target.token.user_id)s",
    "admin_or_owner": "(rule:admin_required and domain_id:%(target.token.user.domain.id)s) or rule:owner",
    "admin_or_cloud_admin": "rule:admin_required or rule:cloud_admin",
    "admin_and_matching_domain_id": "rule:admin_required and domain_id:%(domain_id)s",
    "service_admin_or_owner": "rule:service_or_admin or rule:owner",

    "default": "rule:admin_required",

    "identity:get_service": "rule:admin_or_cloud_admin",
    "identity:list_services": "rule:admin_or_cloud_admin",
    "identity:create_service": "rule:cloud_admin",
    "identity:update_service": "rule:cloud_admin",
    "identity:delete_service": "rule:cloud_admin",

    "identity:get_endpoint": "rule:admin_or_cloud_admin",
    "identity:list_endpoints": "rule:admin_or_cloud_admin",
    "identity:create_endpoint": "rule:cloud_admin",
    "identity:update_endpoint": "rule:cloud_admin",
    "identity:delete_endpoint": "rule:cloud_admin"
}
```

### Tokens

Once a user is authenticated, a token is generated and used for authorization and access to an OpenStack environment. A token can have a variable lifespan; however, the default expiry value is one hour. It is recommended that the expiry be set to a lower value that still allows internal services enough time to complete tasks. If the token expires before tasks complete, the cloud may become unresponsive or stop providing services. An example of time expended during use is the time needed by the Compute service to transfer a disk image onto the hypervisor for local caching. Fetching an expired token is allowed when a valid service token is used.

The token is often passed within the structure of a larger context of an Identity service response. These responses also provide a catalog of the various OpenStack services. Each service is listed with its name and access endpoints for internal, admin, and public access. Tokens can be revoked using the Identity API.

As of the Stein release, there are two supported token types: fernet and JWT.

Neither fernet nor JWT tokens require persistence. The keystone token database no longer suffers bloat as a side effect of authentication; pruning of expired tokens happens automatically, and replication across multiple nodes is no longer needed. As long as each keystone node shares the same key repository, tokens can be created and validated instantly across nodes.

#### Fernet tokens

Fernet tokens are the supported token provider for Stein (the default). Fernet is a secure messaging format explicitly designed for use in API tokens. They are lightweight (within a range of 180 to 240 bytes) and reduce the operational overhead required to run a cloud. Authentication and authorization metadata is neatly bundled into a message-packed payload, which is then encrypted and signed as a fernet token.
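Fernet tokens depend on a key repository present on every keystone node; a typical setup and rotation sequence (adjust the user and group names to your installation) is:

```console
# Create the fernet key repository (by default under /etc/keystone/fernet-keys)
$ keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone

# Rotate keys periodically; the repository must then be synchronized to all keystone nodes
$ keystone-manage fernet_rotate --keystone-user keystone --keystone-group keystone
```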
#### JWT tokens

JSON Web Signature (JWS) tokens were introduced in the Stein release. Compared with fernet, JWS offers a potential benefit to operators by limiting the number of hosts that must share the token encryption keys. This helps prevent a malicious actor who has already gained a foothold in the deployment from spreading to other nodes.

For more details on the differences between these token providers, see https://docs.openstack.org/keystone/stein/admin/tokens-overview.html#token-providers

### Domains

Domains are high-level containers for projects, users, and groups. As such, they can be used to centrally manage all keystone-based identity components. With the introduction of account domains, server, storage, and other resources can now be logically grouped into multiple projects (previously called tenants), which can themselves be grouped under a master account-like container. In addition, multiple users can be managed within an account domain and assigned roles that vary per project.

The Identity v3 API supports multiple domains. Users of different domains may be represented in different authentication back ends and may even have different attributes that must be mapped to a set of roles and privileges used in the policy definitions to access the various service resources.

Where a rule specifies access only to admin users and users belonging to the tenant, the mapping may be trivial. In other scenarios, the cloud administrator may need to approve the mapping routines per tenant.

Domain-specific authentication drivers allow the Identity service to be configured for multiple domains using domain-specific configuration files. Enabling the drivers and setting the domain-specific configuration file location is done in the [identity] section of the keystone.conf file:

```ini
[identity]
domain_specific_drivers_enabled = True
domain_config_dir = /etc/keystone/domains
```
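Each domain that needs its own back end then gets a file named keystone.<domain_name>.conf in that directory; a hypothetical example for a domain called corp might be:

```ini
# Illustrative /etc/keystone/domains/keystone.corp.conf (domain name and LDAP values are examples)
[identity]
driver = ldap

[ldap]
url = ldap://corp-ldap.example.org
suffix = dc=corp,dc=example,dc=org
```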
Any domain without a domain-specific configuration file will use the options in the primary keystone.conf file.

### Federated keystone

Important definitions:

- Service Provider (SP): a system entity that provides services to principals or other system entities; in this case, OpenStack Identity is the Service Provider.
- Identity Provider (IdP): a directory service, such as LDAP, RADIUS, or Active Directory, which allows users to log in with a user name and password; it is a typical source of authentication tokens (for example, passwords) at an identity provider.

Federated Identity is a mechanism to establish trust between IdPs and SPs, in this case between Identity Providers and the services provided by an OpenStack cloud. It provides a secure way to use existing credentials to access cloud resources such as servers, volumes, and databases across multiple endpoints. The credentials are maintained by the user's IdP.

#### Why use Federated Identity?

Two fundamental reasons:

- It reduces complexity, making deployments easier to secure.
- It saves time for you and your users:
  - Centralize account management and avoid duplicated effort inside the OpenStack infrastructure.
  - Reduce the burden on users. Single sign-on lets a single authentication method be used to access many different services and environments.
  - Shift responsibility for the password recovery process to the IdP.

Further rationale and details can be found in the keystone documentation on federation.

### Checklist

#### Check-Identity-01: Is user/group ownership of config files set to keystone?

Configuration files contain critical parameters and information required for smooth functioning of the component. If an unprivileged user, either intentionally or accidentally, modifies or deletes any of the parameters or the file itself, it causes severe availability issues resulting in a denial of service to the other end users. Thus, user and group ownership of such critical configuration files must be set to that component owner. In addition, the containing directory should have the same ownership to ensure that new files are owned correctly.

Run the following commands:

```console
$ stat -L -c "%U %G" /etc/keystone/keystone.conf | egrep "keystone keystone"
$ stat -L -c "%U %G" /etc/keystone/keystone-paste.ini | egrep "keystone keystone"
$ stat -L -c "%U %G" /etc/keystone/policy.json | egrep "keystone keystone"
$ stat -L -c "%U %G" /etc/keystone/logging.conf | egrep "keystone keystone"
$ stat -L -c "%U %G" /etc/keystone/ssl/certs/signing_cert.pem | egrep "keystone keystone"
$ stat -L -c "%U %G" /etc/keystone/ssl/private/signing_key.pem | egrep "keystone keystone"
$ stat -L -c "%U %G" /etc/keystone/ssl/certs/ca.pem | egrep "keystone keystone"
$ stat -L -c "%U %G" /etc/keystone | egrep "keystone keystone"
```

Pass: if user and group ownership of all these configuration files is set to keystone; the commands above print keystone keystone.

Fail: if the above commands return no output, because user or group ownership has been set to a user other than keystone.

Recommended in: Internally implemented authentication methods.

#### Check-Identity-02: Are strict permissions set for Identity configuration files?

Similar to the previous check, we recommend setting strict access permissions on such configuration files.

Run the following commands:

```console
$ stat -L -c "%a" /etc/keystone/keystone.conf
$ stat -L -c "%a" /etc/keystone/keystone-paste.ini
$ stat -L -c "%a" /etc/keystone/policy.json
$ stat -L -c "%a" /etc/keystone/logging.conf
$ stat -L -c "%a" /etc/keystone/ssl/certs/signing_cert.pem
$ stat -L -c "%a" /etc/keystone/ssl/private/signing_key.pem
$ stat -L -c "%a" /etc/keystone/ssl/certs/ca.pem
$ stat -L -c "%a" /etc/keystone
```

A broader restriction is also possible: if the containing directory is set to 750, files newly created in that directory are guaranteed to have the required permissions.

Pass: if the permissions are set to 640 or stricter, or the containing directory is set to 750.

Fail: if the permissions are not set to at least 640/750.

Recommended in: Internally implemented authentication methods.
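If either check fails, ownership and permissions can be brought in line with the values above along these lines (repeat for each file that exists in your deployment):

```console
$ sudo chown keystone:keystone /etc/keystone/keystone.conf
$ sudo chmod 640 /etc/keystone/keystone.conf
$ sudo chown keystone:keystone /etc/keystone
$ sudo chmod 750 /etc/keystone
```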
#### Check-Identity-03: Is TLS enabled for Identity?

OpenStack components communicate with each other using various protocols, and the communication might involve sensitive or confidential data. An attacker may try to eavesdrop on the channel to gain access to sensitive information. Therefore, all components must communicate with each other using a secured communication protocol such as HTTPS.

If an HTTP/WSGI server is used for Identity, TLS should be enabled on that HTTP/WSGI server.

Pass: if TLS is enabled on the HTTP server.

Fail: if TLS is not enabled on the HTTP server.

Recommended in: Secure communication.

#### Check-Identity-04: (deprecated)

#### Check-Identity-05: Is max_request_body_size set to the default (114688)?

The parameter max_request_body_size defines the maximum body size per request, in bytes. If the maximum size is not defined, an attacker can craft an arbitrarily large request, causing the service to crash and ultimately resulting in a denial of service attack. Assigning a maximum value ensures that any malicious oversized request is blocked, maintaining continued availability of the component.

Pass: if the value of parameter max_request_body_size in /etc/keystone/keystone.conf is set to the default (114688), or to some reasonable value based on your environment.

Fail: if the parameter max_request_body_size is not set.

#### Check-Identity-06: Disable admin token in /etc/keystone/keystone.conf

The admin token is generally used to bootstrap Identity. This token is the most valuable identity asset and could be used to gain cloud administrator privileges.

Pass: if admin_token under the [DEFAULT] section in /etc/keystone/keystone.conf is disabled, and AdminTokenAuthMiddleware under [filter:admin_token_auth] is removed from /etc/keystone/keystone-paste.ini.

Fail: if admin_token is set under the [DEFAULT] section and AdminTokenAuthMiddleware is present in keystone-paste.ini.

Tip: disabling `admin_token` means its value is left empty (``).

#### Check-Identity-07: insecure_debug is false in /etc/keystone/keystone.conf

If insecure_debug is set to true, the server returns information in HTTP responses that may allow unauthenticated or authenticated users to obtain more information than normal, such as additional details about why authentication failed.

Pass: if insecure_debug under the [DEFAULT] section in /etc/keystone/keystone.conf is false.

Fail: if insecure_debug under the [DEFAULT] section in /etc/keystone/keystone.conf is true.

#### Check-Identity-08: Use fernet tokens in /etc/keystone/keystone.conf

The OpenStack Identity service provides uuid and fernet as token providers. uuid tokens must be persisted and are considered insecure.

Pass: if the value of parameter provider under the [token] section in /etc/keystone/keystone.conf is set to fernet.

Fail: if the value of parameter provider under the [token] section is set to uuid.
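Taken together, checks 05 through 08 correspond to a keystone.conf fragment along the following lines; the section that holds max_request_body_size can vary between releases, so treat this as a sketch rather than an exact template:

```ini
# Illustrative keystone.conf values matching Check-Identity-05 through 08
[DEFAULT]
# admin_token is left unset (check 06); verbose auth failure details are disabled (check 07)
insecure_debug = false

[oslo_middleware]
# Default request body limit in bytes (check 05)
max_request_body_size = 114688

[token]
# fernet token provider (check 08)
provider = fernet
```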
## Dashboard

The Dashboard (horizon) is the OpenStack dashboard that provides users a self-service portal to provision their own resources within the limits set by administrators. These include provisioning users, defining instance flavors, uploading virtual machine (VM) images, managing networks, setting up security groups, starting instances, and accessing the instances through a console.

The Dashboard is based on the Django web framework, so secure deployment practices for Django apply directly to horizon. This guide provides a set of Django security recommendations; further information can be found by reading the Django documentation.

The Dashboard ships with default security settings and has deployment and configuration documentation.

- Domain names, dashboard upgrades, and basic web server configuration
  - Domain names
  - Basic web server configuration
  - Allowed hosts
  - Image upload
- HTTPS, HSTS, XSS, and SSRF
  - Cross Site Scripting (XSS)
  - Cross Site Request Forgery (CSRF)
  - Cross-Frame Scripting (XFS)
  - HTTPS
  - HTTP Strict Transport Security (HSTS)
- Front-end caching and session back end
  - Front-end caching
  - Session back end
- Static media
- Passwords
- Secret key
- Cookies
- Cross Origin Resource Sharing (CORS)
- Debug
- Checklist
  - Check-Dashboard-01: Is user/group ownership of config files set to root/horizon?
  - Check-Dashboard-02: Are strict permissions set for horizon configuration files?
  - Check-Dashboard-03: Is DISALLOW_IFRAME_EMBED set to True?
  - Check-Dashboard-04: Is CSRF_COOKIE_SECURE set to True?
  - Check-Dashboard-05: Is SESSION_COOKIE_SECURE set to True?
  - Check-Dashboard-06: Is SESSION_COOKIE_HTTPONLY set to True?
  - Check-Dashboard-07: Is PASSWORD_AUTOCOMPLETE set to False?
  - Check-Dashboard-08: Is DISABLE_PASSWORD_REVEAL set to True?
  - Check-Dashboard-09: Is ENFORCE_PASSWORD_CHECK set to True?
  - Check-Dashboard-10: Is PASSWORD_VALIDATOR configured?
  - Check-Dashboard-11: Is SECURE_PROXY_SSL_HEADER configured?

### Domain names, dashboard upgrades, and basic web server configuration

#### Domain names

Many organizations typically deploy web applications at subdomains of an overarching organization domain. It is natural for users to expect a domain of the form openstack.example.org. In this context, there are frequently other applications deployed in the same second-level namespace. This name structure is convenient and simplifies name server maintenance.
We strongly recommend deploying the dashboard to a second-level domain, such as https://example.com, rather than on a shared subdomain at any level, for example https://openstack.example.org or https://horizon.openstack.example.org. We also advise against deploying to bare internal domains such as https://horizon/. These recommendations are based on the limitations of the browser same-origin policy.

The recommendations in this guide cannot effectively protect against known attacks if the dashboard is deployed on a domain that also hosts user-generated content, even if that content resides on a separate subdomain. User-generated content can include scripts, images, or uploads of any kind. Most major web presences, including googleusercontent.com, fbcdn.com, github.io, and twimg.co, use this approach to isolate user-generated content from cookies and security tokens.

If you do not follow the recommendation regarding second-level domains, avoid a cookie-backed session store and employ HTTP Strict Transport Security (HSTS). When deployed on a subdomain, the dashboard's security is only as strong as the weakest application deployed on the same second-level domain.

#### Basic web server configuration

The dashboard should be deployed as a Web Services Gateway Interface (WSGI) application behind an HTTPS proxy such as Apache or Nginx. If Apache is not already in use, we recommend Nginx since it is lighter weight and easier to configure correctly.

When using Nginx, we recommend gunicorn as the WSGI host with an appropriate number of synchronous workers. When using Apache, we recommend mod_wsgi to host the dashboard.

#### Allowed hosts

Configure the ALLOWED_HOSTS setting with the fully qualified host name(s) served by the OpenStack dashboard. Once this setting is provided, if the value in the "Host:" header of an incoming HTTP request does not match any value in this list, an error is raised and the requester cannot proceed. Failing to configure this option, or using a wildcard in the specified host names, leaves the dashboard vulnerable to security issues associated with fake HTTP Host headers.
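A minimal sketch of this setting in horizon's local_settings.py (the file location and host name are deployment-specific assumptions):

```python
# Illustrative ALLOWED_HOSTS entry in horizon's local_settings.py
# Only requests whose Host header matches an entry in this list are served
ALLOWED_HOSTS = ['dashboard.example.org']
```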
For further details on ALLOWED_HOSTS, see the Django documentation.

Horizon image upload

We recommend that implementers disable HORIZON_IMAGES_ALLOW_UPLOAD unless they have a plan in place to prevent resource exhaustion and denial of service.

HTTPS, HSTS, XSS, and SSRF

Cross-site scripting (XSS)

Unlike many similar systems, the OpenStack Dashboard allows the entire Unicode character set in most fields. This means developers have less latitude to make escaping mistakes, mistakes that open attack vectors for cross-site scripting (XSS). The Dashboard provides tools that help developers avoid creating XSS vulnerabilities, but they only work if developers use them correctly. Audit any custom dashboards, paying particular attention to use of the mark_safe function, use of is_safe with custom template tags, use of the safe template tag, anywhere auto-escaping is turned off, and any JavaScript that might evaluate improperly escaped data.

Cross-site request forgery (CSRF)

Django has dedicated middleware for cross-site request forgery (CSRF); for further details, see the Django documentation. The OpenStack Dashboard is designed to discourage developers from introducing cross-site scripting vulnerabilities through custom dashboards, since such dashboards can easily introduce new attack vectors. Dashboards that use multiple instances of JavaScript should be audited for vulnerabilities such as inappropriate use of the @csrf_exempt decorator. Any dashboard that does not follow these recommended security settings should be carefully evaluated before restrictions are relaxed.

Cross-frame scripting (XFS)

Legacy browsers are still vulnerable to cross-frame scripting (XFS), so the OpenStack Dashboard provides the DISALLOW_IFRAME_EMBED option, which allows extra security hardening in deployments that do not use iframes.
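Both hardening options mentioned above are ordinary entries in local_settings.py; a minimal sketch (the values shown are the hardened choices discussed here, not defaults you are required to keep):

```python
# /etc/openstack-dashboard/local_settings.py -- illustrative hardening choices
HORIZON_IMAGES_ALLOW_UPLOAD = False   # disable unless quota/DoS protections exist
DISALLOW_IFRAME_EMBED = True          # refuse to be embedded in an <iframe>
```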
HTTPS

Deploy the Dashboard behind a secure HTTPS server, using a valid, trusted certificate from a recognized certificate authority (CA). Certificates issued by a private organization are only appropriate when the root of trust is pre-installed in all user browsers. Configure HTTP requests to the Dashboard domain to redirect to the fully qualified HTTPS URL.

HTTP Strict Transport Security (HSTS)

The use of HTTP Strict Transport Security (HSTS) is strongly recommended.

Note: if you are using an HTTPS proxy in front of your web server, rather than an HTTP server with HTTPS functionality, modify the SECURE_PROXY_SSL_HEADER variable. Refer to the Django documentation for information about modifying SECURE_PROXY_SSL_HEADER.

For more specific recommendations and server configuration for HTTPS, including HSTS configuration, see the Secure communication chapter.
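As a sketch, and assuming a TLS-terminating proxy that sets the X-Forwarded-Proto header, the setting looks like this; checklist item Check-Dashboard-11 below expects exactly this value:

```python
# Only trust this header if the proxy always strips or rewrites it itself.
SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')
```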
Front-end caching and session back end

Front-end caching

We do not recommend using front-end caching tools with the Dashboard. The Dashboard renders dynamic content produced directly by OpenStack API requests, and a front-end caching layer such as varnish can prevent the correct content from being displayed. In Django, static media is served directly from Apache or nginx and already benefits from web host caching.

Session back end

Horizon's default session back end, django.contrib.sessions.backends.signed_cookies, stores user data in signed but unencrypted cookies held in the browser. Because every Dashboard instance is stateless, this approach provides the simplest way to scale the session back end. It should be noted that with this kind of implementation, sensitive access tokens are stored in the browser and transmitted with every request. The back end guarantees the integrity of the session data, but the data in transit is encrypted only by HTTPS.

If your architecture allows shared storage and you have configured caching correctly, we recommend setting SESSION_ENGINE to django.contrib.sessions.backends.cache, that is, a cache-based session back end, with memcached as the cache. Memcached is an efficient in-memory key-value store for chunks of data; it can be used in high-availability and distributed environments and is easy to configure. However, you need to make sure there is no data leakage. Memcached uses spare RAM to store frequently accessed blocks of data, acting as an in-memory cache for repeatedly accessed information. Because memcached uses local memory, it avoids database and file-system overhead, and data is served directly from RAM rather than from disk.

We recommend memcached over local-memory caching because it is fast, retains data for longer, is multi-process safe, and can share a cache across multiple servers while still treating it as a single cache. To enable memcached, use settings along these lines (the CACHES entry is shown in standard Django form, with the 'default' alias and a placeholder LOCATION that must be adapted to your deployment):

SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': '127.0.0.1:11211',  # placeholder; point at your memcached servers
    }
}

For more details, see the Django documentation.

Static media

The Dashboard's static media should be deployed to a subdomain of the Dashboard domain and served by the web server. The use of an external content delivery network (CDN) is also acceptable. This subdomain should not set cookies or serve user-provided content, and the media should also be served over HTTPS. Django media settings are documented in the Django documentation.

The Dashboard's default configuration uses django_compressor to compress and minify CSS and JavaScript content before serving it. This process should be done statically before the Dashboard is deployed, rather than with the default in-request dynamic compression, and the generated files should be copied to the CDN server along with the deployed code. Compression should be done in a non-production build environment. If this is not practical, we recommend disabling resource compression entirely. On-line compression dependencies (less, Node.js) should not be installed on production machines.
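If django_compressor is kept enabled, one way to move the work out of the request path is its offline mode; a sketch, where the setting name and the build-time command are assumptions about the django_compressor version in use:

```python
# Pre-generate compressed assets at build time (for example with
# "python manage.py compress"), so nothing is compiled per request in production.
COMPRESS_OFFLINE = True
```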
Passwords

Password management should be an integral part of the cloud administration plan. An authoritative tutorial on passwords is beyond the scope of this book; however, cloud administrators should refer to the best practices recommended in Chapter 4 of the NIST Special Publication Guide to Enterprise Password Management.

Browser-based access to the OpenStack cloud, whether through the Dashboard or another application, introduces additional considerations. Modern browsers support some form of password storage and will auto-fill credentials for remembered sites. This is useful when strong passwords that are hard to remember or type are in use, but it can make the browser a weak point if the physical security of the client is compromised. If the browser's password store is not itself protected by a strong password, or if the password store is allowed to remain unlocked for the duration of a session, unauthorized access to the system is easily obtained.

Password manager applications such as KeePassX and Password Safe are useful, since most of them support generating strong passwords and issuing periodic reminders to generate new ones. Most importantly, the password store remains unlocked only briefly, which reduces the risk of password disclosure and of unauthorized resource access through a compromised browser or system.

Secret key

The Dashboard depends on a shared SECRET_KEY setting for some security functions. The secret key should be a randomly generated string at least 64 characters long and must be shared across all active Dashboard instances. Compromise of this key may allow a remote attacker to execute arbitrary code. Rotating this key invalidates existing user sessions and caching. Do not commit this key to public repositories.

Cookies

Session cookies should be set to HTTPONLY:

SESSION_COOKIE_HTTPONLY = True

Never configure CSRF or session cookies to have a wildcard domain with a leading dot. Horizon's session and CSRF cookies should also be secured when deployed with HTTPS:

CSRF_COOKIE_SECURE = True
SESSION_COOKIE_SECURE = True

Cross-Origin Resource Sharing (CORS)

Configure your web server to send a restrictive CORS header with each response, allowing only the Dashboard domain and protocol:

Access-Control-Allow-Origin: https://example.com/

Never allow the wildcard origin.

Debug

It is recommended that the DEBUG setting be set to False in production. If DEBUG is set to True, Django will display stack traces and sensitive web server state information whenever an exception is raised.
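A minimal sketch of how these two settings commonly appear in local_settings.py; the secret_key helper and the key-file path reflect a typical Horizon packaging and are assumptions, not requirements of this guide:

```python
DEBUG = False  # never expose tracebacks or server state in production

# Load (or create once) a random key from a file readable only by the web
# server user, instead of hard-coding it in a file that may end up in git.
from horizon.utils import secret_key
SECRET_KEY = secret_key.generate_or_read_from_file(
    '/var/lib/openstack-dashboard/secret_key')  # path is an assumption
```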
Checklist

Check-Dashboard-01: Is user/group ownership of the configuration files set to root/horizon?

Configuration files contain critical parameters and information required for the smooth functioning of the component. If an unprivileged user, intentionally or accidentally, modifies or deletes any of the parameters or the file itself, it causes severe availability issues and results in a denial of service to the other end users. Thus, user ownership of such critical configuration files must be set to root and group ownership must be set to horizon.

Run the following command:

$ stat -L -c "%U %G" /etc/openstack-dashboard/local_settings.py | egrep "root horizon"

Pass: if user and group ownership of the configuration file are set to root and horizon respectively; the command above then prints root horizon.
Fail: if the command above returns no output, because user or group ownership may have been set to a user other than root or a group other than horizon.

Check-Dashboard-02: Are strict permissions set for the Horizon configuration files?

Similar to the previous check, it is recommended to set strict access permissions for such configuration files.

Run the following command:

$ stat -L -c "%a" /etc/openstack-dashboard/local_settings.py

Pass: if the permissions are set to 640 or stricter. 640 translates to owner r/w, group r, and no permissions for others, that is, "u=rw,g=r,o=". Note that with the ownership from Check-Dashboard-01 (user/group set to root/horizon) and permissions of 640, the root user has read/write access and the horizon group has read access to these configuration files. Access rights can also be validated with the following command, which is only available on your system if it supports ACLs:

$ getfacl --tabular -a /etc/openstack-dashboard/local_settings.py
getfacl: Removing leading '/' from absolute path names
# file: etc/openstack-dashboard/local_settings.py
USER   root     rw-
GROUP  horizon  r--
mask            r--
other           ---

Fail: if the permissions are not set to at least 640.

Check-Dashboard-03: Is DISALLOW_IFRAME_EMBED set to True?

DISALLOW_IFRAME_EMBED can be used to prevent the OpenStack Dashboard from being embedded within an iframe. Legacy browsers are still vulnerable to cross-frame scripting (XFS), so this option allows extra security hardening in deployments that do not use iframes. The default setting is True.

Pass: if the value of DISALLOW_IFRAME_EMBED in /etc/openstack-dashboard/local_settings.py is set to True.
Fail: if the value of DISALLOW_IFRAME_EMBED in /etc/openstack-dashboard/local_settings.py is set to False.

Recommended in: HTTPS, HSTS, XSS, and SSRF.
Check-Dashboard-04: Is CSRF_COOKIE_SECURE set to True?

CSRF (cross-site request forgery) is an attack that forces an end user to execute unauthorized commands on a web application in which he or she is currently authenticated. A successful CSRF exploit can compromise end-user data and operations; if the targeted end user has administrative privileges, it can compromise the entire web application.

Pass: if the value of CSRF_COOKIE_SECURE in /etc/openstack-dashboard/local_settings.py is set to True.
Fail: if the value of CSRF_COOKIE_SECURE in /etc/openstack-dashboard/local_settings.py is set to False.

Recommended in: Cookies.

Check-Dashboard-05: Is SESSION_COOKIE_SECURE set to True?

The "SECURE" cookie attribute instructs web browsers to send the cookie only through an encrypted HTTPS (SSL/TLS) connection. This session-protection mechanism is mandatory to prevent disclosure of the session ID through a MitM (man-in-the-middle) attack. It ensures that an attacker cannot simply capture the session ID from web browser traffic.

Pass: if the value of SESSION_COOKIE_SECURE in /etc/openstack-dashboard/local_settings.py is set to True.
Fail: if the value of SESSION_COOKIE_SECURE in /etc/openstack-dashboard/local_settings.py is set to False.

Recommended in: Cookies.

Check-Dashboard-06: Is SESSION_COOKIE_HTTPONLY set to True?

The "HTTPONLY" cookie attribute instructs web browsers not to allow scripts (for example JavaScript or VBScript) to access the cookie via the DOM document.cookie object. This session ID protection is mandatory to prevent session ID theft through XSS attacks.

Pass: if the value of SESSION_COOKIE_HTTPONLY in /etc/openstack-dashboard/local_settings.py is set to True.
Fail: if the value of SESSION_COOKIE_HTTPONLY in /etc/openstack-dashboard/local_settings.py is set to False.

Recommended in: Cookies.
Check-Dashboard-07: Is PASSWORD_AUTOCOMPLETE set to False?

A common convenience feature is for the application to cache the password locally in the browser (on the client machine) and "pre-type" it in all subsequent requests. While this is friendly to ordinary users, it introduces a weakness: anyone using the same account on the client machine can easily access the user's account, potentially compromising it.

Pass: if the value of PASSWORD_AUTOCOMPLETE in /etc/openstack-dashboard/local_settings.py is set to off.
Fail: if the value of PASSWORD_AUTOCOMPLETE in /etc/openstack-dashboard/local_settings.py is set to on.

Check-Dashboard-08: Is DISABLE_PASSWORD_REVEAL set to True?

Similar to the previous check, it is recommended not to reveal password fields.

Pass: if the value of DISABLE_PASSWORD_REVEAL in /etc/openstack-dashboard/local_settings.py is set to True.
Fail: if the value of DISABLE_PASSWORD_REVEAL in /etc/openstack-dashboard/local_settings.py is set to False.

Note: this option was introduced in the Kilo release.

Check-Dashboard-09: Is ENFORCE_PASSWORD_CHECK set to True?

Setting ENFORCE_PASSWORD_CHECK to True displays an "Admin Password" field on the Change Password form, to verify that it is indeed the administrator who is logged in and wants to change the password.

Pass: if the value of ENFORCE_PASSWORD_CHECK in /etc/openstack-dashboard/local_settings.py is set to True.
Fail: if the value of ENFORCE_PASSWORD_CHECK in /etc/openstack-dashboard/local_settings.py is set to False.

Check-Dashboard-10: Is PASSWORD_VALIDATOR configured?

A regular expression can be used to enforce complexity requirements when validating user passwords.

Pass: if the value of PASSWORD_VALIDATOR in /etc/openstack-dashboard/local_settings.py is set to anything other than the default, which allows everything with "regex": '.*'.
Fail: if the value of PASSWORD_VALIDATOR in /etc/openstack-dashboard/local_settings.py is left at the allow-everything "regex": '.*'.
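A sketch of a stricter validator follows. The dict layout inside HORIZON_CONFIG and the particular regular expression are illustrative assumptions (the exact shape of this setting varies between Horizon releases), not values mandated by this checklist:

```python
# Illustrative policy only: require at least 8 characters including letters and digits.
HORIZON_CONFIG["password_validator"] = {
    "regex": r'^(?=.*[A-Za-z])(?=.*\d).{8,}$',
    "help_text": "Password must be at least 8 characters and contain letters and digits.",
}
```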
Check-Dashboard-11: Is SECURE_PROXY_SSL_HEADER configured?

If the OpenStack Dashboard is deployed behind a proxy, and the proxy either strips the X-Forwarded-Proto header from all incoming requests, or sets the X-Forwarded-Proto header and forwards it to the Dashboard only for requests that originally arrived over HTTPS, then you should consider configuring SECURE_PROXY_SSL_HEADER. More information can be found in the Django documentation.

Pass: if the value of SECURE_PROXY_SSL_HEADER in /etc/openstack-dashboard/local_settings.py is set to 'HTTP_X_FORWARDED_PROTO', 'https'.
Fail: if the value of SECURE_PROXY_SSL_HEADER in /etc/openstack-dashboard/local_settings.py is not set to 'HTTP_X_FORWARDED_PROTO', 'https', or is commented out.

Compute

The OpenStack Compute service (nova) runs in many locations throughout the cloud and interacts with a variety of internal services. It offers a wide variety of configuration options, many of which are deployment specific. In this chapter we cover general best practices for compute security as well as specific known configurations that can lead to security issues. The nova.conf file and the /var/lib/nova location should be protected. Controls such as centralized logging, the policy.json file, and a mandatory access control framework should be implemented.

This chapter covers:

- Hypervisor selection
- Hypervisors in OpenStack
- Selection criteria
- Team expertise
- Product or project maturity
- Certifications and attestations
- Common Criteria
- Cryptography standards
- FIPS 140-2
- Hardware concerns
- Hypervisor vs. bare metal
- Hypervisor memory optimization
- KVM Kernel Samepage Merging
- Xen transparent page sharing
- Security considerations for memory optimization
- Additional security features
- Bibliography
- Hardening the virtualization layers
- Physical hardware (PCI passthrough)
- Virtual hardware (QEMU)
- Minimizing the QEMU code base
- Compiler hardening
- Secure Encrypted Virtualization
- Mandatory access controls
- sVirt: SELinux and virtualization
- Labels and categories
- SELinux users and roles
- Booleans
- Hardening the compute deployment
- OpenStack vulnerability management team
- OpenStack security notes
- OpenStack-dev mailing list
- Hypervisor mailing lists
- Vulnerability awareness
- OpenStack vulnerability management team
- OpenStack security notes
- OpenStack-discuss mailing list
- Hypervisor mailing lists
- How to select virtual consoles
- Virtual Network Computing (VNC)
- Simple Protocol for Independent Computing Environments (SPICE)
- Checklist
  - Check-Compute-01: Is user/group ownership of the configuration files set to root/nova?
  - Check-Compute-02: Are strict permissions set for the configuration files?
  - Check-Compute-03: Is keystone used for authentication?
  - Check-Compute-04: Is a secure protocol used for authentication?
  - Check-Compute-05: Does Nova communicate with Glance securely?

Hypervisor selection

Hypervisors in OpenStack

Whether OpenStack is deployed within private data centers or as a public cloud service, the underlying virtualization technology provides enterprise-level capabilities in the realms of scalability, resource efficiency, and uptime. While such high-level benefits are generally available across many OpenStack-supported hypervisor technologies, there are significant differences in the security architecture and features of each hypervisor, particularly when considering the security threat vectors that are unique to elastic OpenStack environments. As applications consolidate onto single Infrastructure-as-a-Service (IaaS) platforms, instance isolation at the hypervisor level becomes paramount. The requirement for secure isolation holds true across commercial, government, and military communities.

Within the OpenStack framework, you can choose among many hypervisor platforms and corresponding OpenStack plug-ins to optimize your cloud environment. In the context of this guide, hypervisor selection considerations are highlighted as they pertain to feature sets that are critical to security. However, these considerations are not meant to be an exhaustive investigation into the pros and cons of particular hypervisors. NIST provides additional guidance in Special Publication 800-125, Guide to Security for Full Virtualization Technologies.

Selection criteria

As part of your hypervisor selection process, you must consider a number of important factors that help improve your security posture. Specifically, you should become familiar with these areas:

- Team expertise
- Product or project maturity
- Common Criteria
- Certifications and attestations
- Hardware concerns
- Hypervisor vs. bare metal
- Additional security features

Additionally, we strongly recommend evaluating the following security-related criteria when selecting a hypervisor for OpenStack deployments:

- Has the hypervisor undergone Common Criteria certification? If so, to what levels?
- Is the underlying cryptography certified by a third party?
Team expertise

Most likely, the most important aspect of hypervisor selection is the expertise of your staff in managing and maintaining a particular hypervisor platform. The more familiar your team is with a given product, its configuration, and its eccentricities, the fewer configuration mistakes there will be. Additionally, spreading staff expertise on a given hypervisor across the organization increases the availability of your systems, allows segregation of duties, and mitigates problems when a team member is unavailable.

Product or project maturity

The maturity of a given hypervisor product or project is also critical to your security posture. Product maturity has a number of effects once you have deployed your cloud:

- Availability of expertise
- Active developer and user communities
- Timeliness and availability of updates
- Incident response

One of the biggest indicators of a hypervisor's maturity is the size and vibrancy of the community that surrounds it. As far as security is concerned, the quality of the community affects the availability of expertise if you need additional cloud operators. It is also an indication of how widely deployed the hypervisor is, and in turn of the battle readiness of any reference architectures and best practices.

Further, the quality of the community surrounding an open source hypervisor such as KVM or Xen has a direct impact on the timeliness of bug fixes and security updates. When investigating both commercial and open source hypervisors, you must look into their release and support cycles as well as the time delta between the announcement of a bug or security issue and a patch or response. Lastly, the supported capabilities of OpenStack Compute vary depending on the hypervisor chosen; refer to the OpenStack Hypervisor Support Matrix for the compute features supported by each hypervisor.
Certifications and attestations

Another consideration when selecting a hypervisor is the availability of various formal certifications and attestations. While they may not be requirements for your specific organization, these certifications and attestations speak to the maturity, production readiness, and thoroughness of the testing a particular hypervisor platform has been subjected to.

Common Criteria

Common Criteria is an internationally standardized software evaluation process, used by governments and commercial companies to validate that software technologies perform as advertised. In the government sector, NSTISSP No. 11 mandates that U.S. government agencies only procure software that has been Common Criteria certified, a policy that has been in place since July 2002.

Note: OpenStack has not undergone Common Criteria certification, but many of the available hypervisors have.

In addition to validating a technology's capabilities, the Common Criteria process evaluates how the technology is developed:

- How is source code management performed?
- How are users granted access to build systems?
- Is the technology cryptographically signed before distribution?

The KVM hypervisor has been Common Criteria certified through the U.S. government and commercial distributions. These certifications validate the separation of the runtime environments of virtual machines from each other, providing foundational technology to enforce instance isolation. In addition to virtual machine isolation, KVM has been Common Criteria certified to:

"...provide system-inherent separation mechanisms to the resources of virtual machines. This separation ensures that large software component used for virtualizing and simulating devices executing for each virtual machine cannot interfere with each other. Using the SELinux multi-category mechanism, the virtualization and simulation software instances are isolated. The virtual machine management framework configures SELinux multi-category settings transparently to the administrator."
While many hypervisor vendors, such as Red Hat, Microsoft, and VMware, have achieved Common Criteria certification, their underlying certified feature sets differ. We recommend evaluating vendor claims to ensure they minimally satisfy the following requirements:

Audit: The system provides the capability to audit a large number of events, including individual system calls and events generated by trusted processes. Audit data is collected in regular files in ASCII format. The system provides a program for searching the audit records. System administrators can define a rule base to restrict auditing to the events they are interested in, including the ability to restrict auditing to specific events, specific users, specific objects, or any combination of these. Audit records can be transferred to a remote audit daemon.

Discretionary access control: Discretionary access control (DAC) restricts access to file system objects based on ACLs that include the standard UNIX permissions for user, group, and others. Access control mechanisms also protect IPC objects from unauthorized access. The system includes the ext4 file system, which supports POSIX ACLs. This allows defining access rights to files within this type of file system down to the granularity of a single user.

Mandatory access control: Mandatory access control (MAC) restricts access to objects based on labels assigned to subjects and objects. Sensitivity labels are automatically attached to processes and objects. The access control policy enforced using these labels is derived from the Bell-LaPadula model. SELinux categories are attached to virtual machines and their resources. The access control policy enforced using these categories grants virtual machines access to resources only if the category of the virtual machine is identical to the category of the accessed resource. The TOE implements non-hierarchical categories to control access to virtual machines.

Role-based access control: Role-based access control (RBAC) allows separation of roles to eliminate the need for an all-powerful system administrator.
Object reuse: File system objects, memory, and IPC objects are cleared before they can be reused by a process belonging to a different user.

Security management: The management of the security-critical parameters of the system is performed by administrative users. A set of commands requiring root privileges (or specific roles when RBAC is used) is used for system management. Security parameters are stored in specific files that are protected by the access control mechanisms of the system against unauthorized access by non-administrative users.

Secure communication: The system supports the definition of trusted channels using SSH. Password-based authentication is supported. In the evaluated configuration, only a restricted number of cipher suites are supported for these protocols.

Storage encryption: The system supports encrypted block devices to provide storage confidentiality via dm_crypt.

TSF protection: While in operation, the kernel software and data are protected by the hardware memory protection mechanisms. The memory and process management components of the kernel ensure that a user process cannot access kernel storage or storage belonging to other processes. Non-kernel TSF software and data are protected by DAC and process isolation mechanisms. In the evaluated configuration, the reserved user ID root owns the directories and files that define the TSF configuration. In general, files and directories containing internal TSF data, such as configuration files and batch job queues, are also protected from reading by DAC permissions. The system, together with its hardware and firmware components, must be physically protected from unauthorized access. The system kernel mediates all access to the hardware mechanisms themselves, other than program-visible CPU instruction functions. In addition, mechanisms for protection against stack overflow attacks are provided.

Cryptography standards

Several cryptographic algorithms are available within OpenStack for identification and authorization, data transfer, and protection of data at rest. When selecting a hypervisor, we recommend the following algorithms and implementation standards:
| Algorithm | Key length | Intended purpose | Security function | Implementation standard |
|:---------:|:----------:|:----------------:|:-----------------:|:-----------------------:|
| AES | 128, 192, or 256 bits | Encryption / decryption | Protected data transfer, protection of data at rest | RFC 4253 |
| TDES | 168 bits | Encryption / decryption | Protected data transfer | RFC 4253 |
| RSA | 1024, 2048, or 3072 bits | Authentication, key exchange | Identification and authentication, protected data transfer | U.S. NIST FIPS PUB 186-3 |
| DSA | L=1024, N=160 bits | Authentication, key exchange | Identification and authentication, protected data transfer | U.S. NIST FIPS PUB 186-3 |
| Serpent | 128, 192, or 256 bits | Encryption / decryption | Protection of data at rest | http://www.cl.cam.ac.uk/~rja14/Papers/serpent.pdf |
| Twofish | 128, 192, or 256 bits | Encryption / decryption | Protection of data at rest | https://www.schneier.com/paper-twofish-paper.html |
| SHA-1 | - | Message digest | Protection of data at rest, protected data transfer | U.S. NIST FIPS PUB 180-3 |
| SHA-2 (224, 256, 384, or 512 bits) | - | Message digest | Protection of data at rest, identification and authentication | U.S. NIST FIPS PUB 180-3 |

FIPS 140-2

In the United States, the National Institute of Standards and Technology (NIST) certifies cryptographic algorithms through a process known as the Cryptographic Module Validation Program. NIST certifies algorithms for conformance against Federal Information Processing Standard 140-2 (FIPS 140-2), which ensures:

"... Products validated as conforming to FIPS 140-2 are accepted by the Federal agencies of both countries [United States and Canada] for the protection of sensitive information (United States) or Designated Information (Canada). The goal of the CMVP is to promote the use of validated cryptographic modules and provide Federal agencies with a security metric to use in procuring equipment containing validated cryptographic modules."

When evaluating base hypervisor technologies, consider whether the hypervisor has been certified against FIPS 140-2. Not only is conformance against FIPS 140-2 mandated per U.S. government policy, formal certification also indicates that a given implementation of a cryptographic algorithm has been reviewed for conformance against module specification; cryptographic module ports and interfaces; roles, services, and authentication; finite state model; physical security; operational environment; cryptographic key management; electromagnetic interference/electromagnetic compatibility (EMI/EMC); self-tests; design assurance; and mitigation of other attacks.

Hardware concerns

When evaluating a hypervisor platform, consider the supportability of the hardware on which the hypervisor will run. Additionally, consider the extra features available in the hardware and how those features are supported by the hypervisor you chose for your OpenStack deployment. To that end, each hypervisor has its own hardware compatibility list (HCL).
When selecting compatible hardware, it is important to know in advance which hardware-based virtualization technologies matter from a security perspective.

| Description | Technology | Explanation |
|:-----------:|:----------:|:-----------:|
| I/O MMU | VT-d / AMD-Vi | Required for protecting PCI passthrough |
| Intel Trusted Execution Technology | Intel TXT / SEM | Required for dynamic attestation services |
| PCI-SIG I/O virtualization | SR-IOV, MR-IOV, ATS | Required to allow secure sharing of PCI Express devices |
| Network virtualization | VT-c | Improves performance of network I/O on hypervisors |

Hypervisor vs. bare metal

It is important to recognize the difference between using Linux Containers (LXC) or bare metal systems and using a hypervisor such as KVM. Specifically, the focus of this security guide is largely based on having a hypervisor and virtualization platform. However, should your implementation require the use of a bare metal or LXC environment, you must pay attention to the particular differences in deploying that environment.

Ensure that end users have properly sanitized a node's data before it is re-provisioned. Additionally, before a node is reused, you must provide assurance that the hardware has not been tampered with or otherwise compromised.

Note: while OpenStack has a bare metal project, a discussion of the particular security implications of running bare metal is beyond the scope of this book.

Due to the time constraints around a book sprint, the team chose to use KVM as the hypervisor in the example implementations and architectures.

Note: there is an OpenStack security note regarding the use of LXC in Compute.

Hypervisor memory optimization

Many hypervisors use memory optimization techniques to overcommit memory to guest virtual machines. This is a useful feature that allows the deployment of very dense compute clusters. One way to achieve this is through de-duplication or sharing of memory pages: when two virtual machines have identical data in memory, there are advantages to having them reference the same memory. Typically this is achieved through copy-on-write (COW) mechanisms.
These mechanisms have been shown to be vulnerable to side-channel attacks in which one VM can infer the state of another, and they may not be appropriate for multi-tenant environments where not all tenants are trusted or share the same levels of trust.

KVM Kernel Samepage Merging

Introduced into the Linux kernel in version 2.6.32, Kernel Samepage Merging (KSM) consolidates identical memory pages between Linux processes. Because each guest virtual machine under the KVM hypervisor runs in its own process, KSM can be used to optimize memory use between virtual machines.

Xen transparent page sharing

XenServer 5.6 includes a memory overcommitment feature named Transparent Page Sharing (TPS). TPS scans memory in 4 KB chunks for duplicates. When a duplicate is found, the Xen Virtual Machine Monitor (VMM) discards one of the copies and records a reference to the remaining one.

Security considerations for memory optimization

Traditionally, memory de-duplication systems are vulnerable to side-channel attacks. Both KSM and TPS have been demonstrated to be vulnerable to some form of attack. In academic studies, attackers were able to identify the software packages and versions running on neighboring virtual machines, as well as software downloads and other sensitive information, by analyzing memory access times on the attacker's own VM.

If a cloud deployment requires strong tenant separation, as is the case for public clouds and some private clouds, deployers should consider disabling the TPS and KSM memory optimizations.

Additional security features

Another thing to look into when selecting a hypervisor platform is the availability of specific security features, for example Xen Server's XSM (Xen Security Modules), sVirt, Intel TXT, or AppArmor. The following table lists these features by common hypervisor platform:

|         | XSM | sVirt | TXT | AppArmor | cgroups | MAC policy |
|:-------:|:---:|:-----:|:---:|:--------:|:-------:|:----------:|
| KVM     |     | X     | X   | X        | X       | X          |
| Xen     | X   |       | X   |          |         |            |
| ESXi    |     |       | X   |          |         |            |
| Hyper-V |     |       |     |          |         |            |

Note: features in this table may not be applicable to all hypervisors and may not map directly between hypervisors.
Bibliography

- Sunar, Eisenbarth, Inci, Gorka Irazoqui Apecechea. Fine Grain Cross-VM Attacks on Xen and VMware are possible!, 2014. https://eprint.iacr.org/2014/248.pdf
- Artho, Yagi, Iijima, Kuniyasu Suzaki. Memory Deduplication as a Threat to the Guest OS, 2011. https://staff.aist.go.jp/c.artho/papers/EuroSec2011-suzaki.pdf
- KVM: Kernel Based Virtual Machine. Kernel Samepage Merging, 2010. http://www.linux-kvm.org/page/KSM
- Xen Project, Xen Security Modules: XSM-FLASK, 2014. http://wiki.xen.org/wiki/Xen_Security_Modules_:_XSM-FLASK
- SELinux Project, SVirt, 2011. http://selinuxproject.org/page/SVirt
- Intel.com, Trusted Compute Pools with Intel Trusted Execution Technology (Intel TXT). http://www.intel.com/txt
- AppArmor.net, AppArmor Main Page, 2011. http://wiki.apparmor.net/index.php/Main_Page
- Kernel.org, CGroups, 2004. https://www.kernel.org/doc/Documentation/cgroup-v1/cgroups.txt
- Computer Security Resource Center. Guide to Security for Full Virtualization Technologies, 2011. http://csrc.nist.gov/publications/nistpubs/800-125/SP800-125-final.pdf
- National Information Assurance Partnership, National Security Telecommunications and Information Systems Security Policy, 2003. http://www.niap-ccevs.org/cc-scheme/nstissp_11_revised_factsheet.pdf

Hardening the virtualization layers

At the beginning of this chapter we discuss the use of both physical and virtual hardware by instances, the associated security risks, and some recommendations for mitigating those risks. We then discuss how the Secure Encrypted Virtualization technology can be used to encrypt the memory of virtual machines on AMD-based machines that support it. We conclude the chapter with a discussion of sVirt, an open source project for integrating SELinux mandatory access controls with the virtualization components.

Physical hardware (PCI passthrough)

Many hypervisors offer a functionality known as PCI passthrough. It allows an instance to have direct access to a piece of hardware on the node. For example, this can be used to give instances access to video cards or GPUs offering the Compute Unified Device Architecture (CUDA) for high performance computation. This feature carries two types of security risks: direct memory access and hardware infection.
Direct memory access (DMA) is a feature that permits certain hardware devices to access arbitrary physical memory addresses in the host computer. Video cards often have this capability. However, an instance should not be given arbitrary physical memory access, because this would give it a full view of both the host system and other instances running on the same node. Hardware vendors use an input/output memory management unit (IOMMU) to manage DMA access in these situations. We recommend that cloud architects ensure the hypervisor is configured to utilize this hardware feature.

- KVM: How to assign devices with VT-d in KVM
- Xen: Xen VTd Howto

Note: The IOMMU feature is marketed as VT-d by Intel and as AMD-Vi by AMD.

A hardware infection occurs when an instance makes a malicious modification to the firmware or some other part of a device. As this device is used by other instances or the host operating system, the malicious code can spread into those systems. The end result is that one instance can run code outside of its security domain. This is a significant breach, because the state of physical hardware is harder to reset than that of virtual hardware, and it can lead to additional exposure such as access to the management network.

Solutions to the hardware infection problem are domain specific. The strategy is to identify how an instance can modify hardware state, then determine how to reset any modifications when the instance is done using the hardware. For example, one option could be to re-flash the firmware after use. There is a need to balance hardware longevity with security, as some firmware will fail after a large number of writes. TPM technology, described in the secure boot section, is one solution for detecting unauthorized firmware changes. Regardless of the strategy selected, it is important to understand the risks associated with this kind of hardware sharing so that they can be properly mitigated for a given deployment scenario.

Because of the risk and complexity associated with PCI passthrough, it should be disabled by default. If it is enabled for a specific need, appropriate processes must be in place to ensure the hardware is clean before it is reissued.
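Before enabling passthrough for such a need, it is also worth confirming that the host IOMMU is actually active; a minimal check, assuming a Linux compute node:

```console
# On Intel hosts the kernel command line should contain intel_iommu=on
$ grep -o "intel_iommu=on" /proc/cmdline

# IOMMU groups only appear once the kernel has initialized the (VT-d or AMD-Vi) IOMMU
$ ls /sys/kernel/iommu_groups/
```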
### Virtual hardware (QEMU)

When running a virtual machine, virtual hardware is the software layer that provides the hardware interface for the virtual machine. Instances use this functionality to provide network, storage, video, and other devices that may be needed. With this in mind, most instances in your environment will exclusively use virtual hardware, with a minority requiring direct hardware access. The major open source hypervisors use QEMU for this functionality. While QEMU fills an important need for virtualization platforms, it has proven to be a very challenging software project to secure. Much of the functionality in QEMU is implemented with low-level code that is difficult for most developers to comprehend. The hardware virtualized by QEMU includes many legacy devices that have their own set of quirks. Putting all of this together, QEMU has been the source of many security problems, including hypervisor breakout attacks.

It is important to take proactive steps to harden QEMU. We recommend three specific steps:

- Minimize the code base.
- Use compiler hardening.
- Use mandatory access controls such as sVirt, SELinux, or AppArmor.

Ensure that your iptables have a default policy that filters network traffic, and consider examining the existing rule set to understand each rule and to determine whether the policy needs to be expanded.

### Minimizing the QEMU code base

We recommend minimizing the QEMU code base by removing unused components from the system. QEMU provides support for many different virtual hardware devices, however only a small number of devices are needed for a given instance. The most common hardware devices are the virtio devices. Some legacy instances will need access to specific hardware, which can be specified using glance metadata:

```console
$ glance image-update \
    --property hw_disk_bus=ide \
    --property hw_cdrom_bus=ide \
    --property hw_vif_model=e1000 \
    f16-x86_64-openstack-sda
```
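The standalone glance CLI shown above has largely been superseded by the unified client; an equivalent sketch with `openstack image set`, reusing the same image name and properties as the example above:

```console
$ openstack image set \
    --property hw_disk_bus=ide \
    --property hw_cdrom_bus=ide \
    --property hw_vif_model=e1000 \
    f16-x86_64-openstack-sda
```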
A cloud architect should decide what devices to make available to cloud users. Anything that is not needed should be removed from QEMU. This step requires recompiling QEMU after modifying the options passed to the QEMU configure script. For a complete list of up-to-date options, run `./configure --help` from within the QEMU source directory. Decide what is needed for your deployment and disable the remaining options.

### Compiler hardening

Harden QEMU using compiler hardening options. Modern compilers provide a variety of compile-time options that improve the security of the resulting binaries. These features include relocation read-only (RELRO), stack canaries, never execute (NX), position independent executables (PIE), and address space layout randomization (ASLR).

Many modern Linux distributions already build QEMU with compiler hardening enabled, so we recommend verifying your existing executable before proceeding. One tool that can assist with this verification is called checksec.sh.

- RELocation Read-Only (RELRO): hardens the data sections of an executable. gcc supports full and partial RELRO modes; for QEMU, full RELRO is your best choice. This makes the global offset table read-only and places various internal data sections before the program data section in the resulting executable.
- Stack canaries: place values on the stack and verify their presence to help prevent buffer overflow attacks.
- Never eXecute (NX): also known as Data Execution Prevention (DEP), ensures that the data sections of the executable cannot be executed.
- Position Independent Executable (PIE): produces a position-independent executable, which is necessary for ASLR.
- Address Space Layout Randomization (ASLR): ensures that the placement of both code and data regions is randomized. It is enabled by the kernel (all modern Linux kernels support ASLR) when the executable is built with PIE.

The following compiler options are recommended for GCC when compiling QEMU:

```console
CFLAGS="-arch x86_64 -fstack-protector-all -Wstack-protector \
    --param ssp-buffer-size=4 -pie -fPIE -ftrapv -D_FORTIFY_SOURCE=2 -O2 \
    -Wl,-z,relro,-z,now"
```

We recommend testing your QEMU executable file after it is compiled to ensure that the compiler hardening worked properly.
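A minimal verification sketch using the checksec tool mentioned above; the binary path and the flag syntax are assumptions that vary between distributions and checksec versions:

```console
# Expect "Full RELRO", "Canary found", "NX enabled" and "PIE enabled" in the report
$ checksec --file=/usr/bin/qemu-system-x86_64
```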
Most cloud deployments will not build software such as QEMU by hand. It is better to use packaging to ensure that the process is repeatable and that the end result can be easily deployed throughout the cloud. The references below provide some additional details on applying compiler hardening options to existing packages.

- DEB packages: Hardening Walkthrough
- RPM packages: How to create an RPM package

### Secure Encrypted Virtualization

Secure Encrypted Virtualization (SEV) is a technology from AMD which enables the memory of a VM to be encrypted with a key unique to the VM. SEV is available in the Train release as a technology preview, providing KVM guests on certain AMD-based machines for the purpose of evaluating the technology.

The KVM hypervisor section of the nova configuration guide contains the information needed to configure the machine and the hypervisor, and lists several limitations of SEV.
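Once a compute host advertises SEV, guests request memory encryption through a flavor extra spec or an image property. A minimal sketch using the property names documented for nova's SEV support since Train (the flavor and image names are placeholders, and SEV guests also require UEFI firmware and the q35 machine type; verify the details against your release):

```console
# Request memory encryption via the flavor…
$ openstack flavor set m1.sev --property hw:mem_encryption=True

# …or via the image, together with the firmware/machine-type properties SEV needs
$ openstack image set sev-capable-image \
    --property hw_firmware_type=uefi \
    --property hw_machine_type=q35 \
    --property hw_mem_encryption=True
```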
SEV provides protection for data in the memory of a running VM. However, while the first phase of SEV integration with OpenStack supports encrypting VM memory, it is important to note that it does not provide the LAUNCH_MEASURE or LAUNCH_SECRET capabilities that are available in the SEV firmware. This means that data used by an SEV-protected VM may still be vulnerable to a motivated adversary who controls the hypervisor. For example, a malicious administrator of the hypervisor machine could provide a tenant with a VM image containing a back door and spyware capable of stealing secrets, or could replace the VNC server process to snoop data sent to or from the VM console, including passwords that unlock full disk encryption solutions.

To reduce the chance of a malicious administrator gaining unauthorized access to data, the following security practices should accompany the use of SEV:

- The VM should use a full disk encryption solution.
- A bootloader password should be used on the VM.

Additionally, standard security best practices should be applied to the VM, including the following:

- The VM should be well maintained, including regular security scanning and patching, to ensure it continuously maintains a strong security posture.
- Connections to the VM should use encrypted and authenticated protocols such as HTTPS and SSH.
- Additional security tools and processes should be considered and used as appropriate for the sensitivity level of the data in the VM.

### Mandatory access controls

Compiler hardening makes it more difficult to attack the QEMU process. However, if an attacker does succeed, you want to limit the impact of the attack. Mandatory access controls accomplish this by restricting the privileges on the QEMU process to only what is needed. This can be accomplished by using sVirt, SELinux, or AppArmor. When using sVirt, SELinux is configured to run each QEMU process under a separate security context. AppArmor can be configured to provide similar functionality. More details on sVirt and instance isolation are provided in the following section, sVirt: SELinux and virtualization.

Specific SELinux policies are available for many OpenStack services. CentOS users can review these policies by installing the selinux-policy source package. The most up-to-date policies appear in Fedora's selinux-policy repository. The rawhide-contrib branch has files that end in `.te`, such as `cinder.te`, that can be used on systems running SELinux.

AppArmor profiles for OpenStack services do not currently exist, but the OpenStack-Ansible project handles this by applying AppArmor profiles to each container that runs an OpenStack service.

### sVirt: SELinux and virtualization

With a unique kernel-level architecture and security mechanisms developed by the National Security Agency (NSA), KVM provides foundational isolation technologies for multi-tenancy. With developmental origins dating back to 2002, the Secure Virtualization (sVirt) technology is the application of SELinux to modern virtualization. SELinux, which was designed to apply separation control based on labels, has been extended to provide isolation between virtual machine processes, devices, data files, and the system processes acting on their behalf.
OpenStack's sVirt implementation aims to protect hypervisor hosts and virtual machines against two primary threat vectors:

- Hypervisor threats: a compromised application running within a virtual machine attacks the hypervisor to access underlying resources, for example when a virtual machine is able to access the hypervisor operating system, physical devices, or other applications. This threat vector represents considerable risk, because a compromise of the hypervisor can infect the physical hardware and expose other virtual machines and network segments.
- Virtual machine (multi-tenant) threats: a compromised application running within a VM attacks the hypervisor to access or control another virtual machine and its resources. This is a threat vector unique to virtualization and represents considerable risk, because a multitude of virtual machine file images could be compromised due to a vulnerability in a single application. This kind of virtual network attack is a major concern, because the administrative techniques used to protect real networks do not apply directly to the virtual environment.

Each KVM-based virtual machine is a process that is labeled by SELinux, effectively establishing a security boundary around each virtual machine. This security boundary is monitored and enforced by the Linux kernel, restricting the virtual machine's access to resources outside of its boundary, such as host machine data files or other VMs.

sVirt isolation is provided regardless of the guest operating system running inside the virtual machine; Linux or Windows VMs can be used. Additionally, many Linux distributions provide SELinux within the operating system, allowing the virtual machine to protect internal virtual resources from threats.
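One way to see this boundary on a running compute node is to list the QEMU processes with their SELinux contexts; a minimal sketch, where the process names, PIDs, and category pairs shown are purely illustrative:

```console
$ ps -eZ | grep qemu
system_u:system_r:svirt_t:s0:c87,c520   2451 ?  00:02:18 qemu-kvm
system_u:system_r:svirt_t:s0:c419,c172  2562 ?  00:01:04 qemu-kvm
```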
#### Labels and categories

KVM-based virtual machine instances are labelled with their own SELinux data type, known as `svirt_image_t`. Kernel-level protections prevent unauthorized system processes, such as malware, from manipulating the virtual machine image files on disk. When virtual machines are powered off, images are stored as `svirt_image_t` as shown below:

```
system_u:object_r:svirt_image_t:SystemLow image1
system_u:object_r:svirt_image_t:SystemLow image2
system_u:object_r:svirt_image_t:SystemLow image3
system_u:object_r:svirt_image_t:SystemLow image4
```

The `svirt_image_t` label uniquely identifies image files on disk, allowing the SELinux policy to restrict access. When a KVM-based compute image is powered on, sVirt appends a random numerical identifier to the image. sVirt is capable of assigning numeric identifiers to a maximum of 524,288 virtual machines per hypervisor node, although most OpenStack deployments are highly unlikely to encounter this limitation.

This example shows the sVirt category identifiers:

```
system_u:object_r:svirt_image_t:s0:c87,c520 image1
system_u:object_r:svirt_image_t:s0:c419,c172 image2
```

#### SELinux users and roles

SELinux manages user roles. These can be viewed through the `-Z` flag or with the `semanage` command. On the hypervisor, only administrators should be able to access the system, and there should be appropriate contexts around both the administrative users and any other users that are on the system. For more information, see the SELinux users documentation.

#### Booleans

To ease the administrative burden of managing SELinux, many enterprise Linux platforms utilize SELinux booleans to quickly change the security posture of sVirt.

Red Hat Enterprise Linux-based KVM deployments utilize the following sVirt booleans:

| sVirt SELinux boolean | Description |
|-----------------------|-------------|
| virt_use_common       | Allow virt to use serial or parallel communication ports. |
| virt_use_fusefs       | Allow virt to read FUSE mounted files. |
| virt_use_nfs          | Allow virt to manage NFS mounted files. |
| virt_use_samba        | Allow virt to manage CIFS mounted files. |
| virt_use_sanlock      | Allow confined virtual guests to interact with sanlock. |
| virt_use_sysfs        | Allow virt to manage device configuration (PCI). |
| virt_use_usb          | Allow virt to use USB devices. |
| virt_use_xserver      | Allow virtual machines to interact with the X Window System. |
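These booleans are toggled with the standard SELinux tooling; a minimal sketch follows (only enable a boolean such as `virt_use_nfs` if the deployment actually uses that storage path):

```console
# Show the current state of the virtualization-related booleans
$ getsebool -a | grep virt_use

# Persistently allow guests to use NFS-backed storage
$ sudo setsebool -P virt_use_nfs on
```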
## Hardening Compute deployments

One of the main security concerns with any OpenStack deployment is the security of and control over sensitive files, such as the `nova.conf` file. Normally contained in the `/etc` directory, this configuration file contains many sensitive options including configuration details and service passwords. All such sensitive files should be given strict file-level permissions and be monitored for changes through file integrity monitoring (FIM) tools such as iNotify or Samhain. These utilities take a hash of the target file while it is in a known good state, then periodically take a new hash of the file and compare it to the known good hash. An alert can be created if the file is found to have been modified unexpectedly.

The permissions of a file can be examined by moving into the directory that contains the file and running the `ls -lh` command. This shows the permissions, owner, and group that have access to the file, as well as other information such as the last time the file was modified and when it was created.

The `/var/lib/nova` directory holds details about the instances on a given compute host. This directory should also be considered sensitive, with strictly enforced file permissions. In addition, it should be backed up regularly, as it contains information and metadata for the instances associated with that host.

If your deployment does not require full VM backups, we recommend excluding the `/var/lib/nova/instances` directory, as it will be as large as the combined space of all VMs running on that node. If your deployment does require full VM backups, you will need to ensure this directory is backed up successfully.

Monitoring is a critical component of IT infrastructure, and we recommend that the Compute log files be monitored and analyzed so that meaningful alerts can be created.
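As a lightweight complement to a full FIM tool such as Samhain, a minimal sketch that watches the sensitive nova configuration files for modification (this assumes the inotify-tools package is installed):

```console
$ inotifywait -m -e modify,attrib,move,delete \
      /etc/nova/nova.conf /etc/nova/api-paste.ini /etc/nova/rootwrap.conf
```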
## Vulnerability awareness

### OpenStack vulnerability management team

We recommend keeping up to date on security issues and advisories as they are published. The OpenStack Security Portal is the central portal where advisories, notices, meetings, and processes can be coordinated. Additionally, the OpenStack Vulnerability Management Team (VMT) portal coordinates remediation within the OpenStack project, as well as the process of investigating reported bugs which are responsibly disclosed (privately) to the VMT, by marking the bug as "This bug is a security vulnerability". Further details are outlined in the VMT process page, and the result is an OpenStack Security Advisory (OSSA). The OSSA outlines the issue and the fix, and links to both the original bug and the location where the patch is hosted.

### OpenStack security notes

Reported security bugs that are found to be the result of a misconfiguration, or that are not strictly part of OpenStack, are drafted into OpenStack Security Notes (OSSNs). These include configuration issues, such as ensuring identity provider mappings, as well as issues that are not part of OpenStack but are nevertheless critical, such as the Bashbug/Ghost or Venom vulnerabilities that affect the platform OpenStack runs on. The current set of OSSNs is in the Security Note wiki.

### OpenStack-discuss mailing list

All bugs, OSSAs, and OSSNs are publicly disseminated through the openstack-discuss mailing list with the [security] topic in the subject line. We recommend subscribing to this list as well as configuring mail filtering rules to ensure that OSSNs, OSSAs, and other important advisories are not missed. The openstack-discuss mailing list is managed through http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss. openstack-discuss uses the tags defined in the Project Team Guide.

### Hypervisor mailing lists

When implementing OpenStack, one of the core decisions is which hypervisor to use. We recommend being aware of advisories pertaining to the hypervisor you have chosen. Several common hypervisor security lists are below:

- Xen: http://xenbits.xen.org/xsa/
- VMWare: http://blogs.vmware.com/security/
- Others (KVM, and more): http://seclists.org/oss-sec

## How to select virtual consoles

One decision a cloud architect will need to make regarding Compute service configuration is whether to use VNC or SPICE.

### Virtual Network Computing (VNC)

OpenStack can be configured to provide remote desktop console access to instances for tenants and administrators using the Virtual Network Computing (VNC) protocol.

#### Capabilities

- The OpenStack Dashboard (horizon) can provide a VNC console for instances directly on the web page using the HTML5 noVNC client. This requires the nova-novncproxy service to bridge from the public network to the management network.
- The nova command-line utility can return a URL for the VNC console for access by the nova Java VNC client. This requires the nova-xvpvncproxy service to bridge from the public network to the management network.

#### Security considerations

- By default, the nova-novncproxy and nova-xvpvncproxy services open public-facing ports that are token authenticated.
- By default, the remote desktop traffic is not encrypted. TLS can be enabled to encrypt the VNC traffic. Refer to the introduction to TLS and SSL for appropriate recommendations.
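A hedged sketch of the proxy-side options for encrypting the compute-to-proxy leg of the VNC traffic with VeNCrypt; the option names come from the `[vnc]` section of recent nova releases, the certificate paths are placeholders, and the compute-node QEMU/libvirt side needs matching configuration:

```ini
[vnc]
auth_schemes = vencrypt
vencrypt_client_key = /etc/pki/nova-novncproxy/client-key.pem
vencrypt_client_cert = /etc/pki/nova-novncproxy/client-cert.pem
vencrypt_ca_certs = /etc/pki/nova-novncproxy/ca-cert.pem
```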
#### References

- blog.malchuk.ru, OpenStack VNC Security. 2013. Secure Connections to VNC ports
- OpenStack Mailing List, [OpenStack] nova-novnc SSL configuration - Havana. 2014. OpenStack nova-novnc SSL Configuration
- Redhat.com/solutions, Using SSL Encryption with OpenStack nova-novncproxy. 2014. OpenStack nova-novncproxy SSL encryption

### Simple Protocol for Independent Computing Environments (SPICE)

As an alternative to VNC, OpenStack provides remote desktop access to guest virtual machines using the Simple Protocol for Independent Computing Environments (SPICE) protocol.

#### Capabilities

- The OpenStack Dashboard (horizon) supports SPICE directly on the instance web page. This requires the nova-spicehtml5proxy service.
- The nova command-line utility can return a URL for the SPICE console for access by a SPICE-html client.

#### Limitations

Although SPICE has many advantages over VNC, the spice-html5 browser integration currently does not allow administrators to take advantage of them. To take advantage of SPICE features such as multi-monitor support and USB passthrough, we recommend that administrators use a standalone SPICE client within the management network.

#### Security considerations

- By default, the nova-spicehtml5proxy service opens a public-facing port that is token authenticated.
- The functionality and integration are still evolving; these features will be revisited, and recommendations made, in a future release.
- As with VNC, we currently recommend using SPICE from the management network, in addition to limiting its use to a small set of individuals.
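A hedged sketch of switching instance consoles from VNC to SPICE on a compute node; the option names are from nova's `[vnc]` and `[spice]` sections, and the proxy URL is a placeholder:

```ini
[vnc]
enabled = False

[spice]
enabled = True
agent_enabled = True
html5proxy_base_url = https://spice.example.com:6082/spice_auto.html
server_listen = 127.0.0.1
server_proxyclient_address = 127.0.0.1
```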
#### References

- OpenStack Administrator Guide. SPICE Console.
- bugzilla.redhat.com, Bug 913607 - RFE: Support tunnelling SPICE over websockets. 2013. RedHat bug 913607.

## Checklist

### Check-Compute-01: Is user/group ownership of config files set to root/nova?

Configuration files contain critical parameters and information required for the smooth functioning of the component. If an unprivileged user, either intentionally or accidentally, modifies or deletes any of the parameters or the file itself, it would cause severe availability issues resulting in a denial of service to the other end users. Thus, user ownership of such critical configuration files must be set to root and group ownership must be set to nova. Additionally, the containing directory should have the same ownership to ensure that new files are owned correctly.

Run the following commands:

```console
$ stat -L -c "%U %G" /etc/nova/nova.conf | egrep "root nova"
$ stat -L -c "%U %G" /etc/nova/api-paste.ini | egrep "root nova"
$ stat -L -c "%U %G" /etc/nova/policy.json | egrep "root nova"
$ stat -L -c "%U %G" /etc/nova/rootwrap.conf | egrep "root nova"
$ stat -L -c "%U %G" /etc/nova | egrep "root nova"
```

Pass: If user and group ownership of all these configuration files is set to root and nova respectively. The above commands show output of `root nova`.

Fail: If the above commands do not return any output, as the user or group ownership might have been set to a user other than root or a group other than nova.

Recommended in: Compute.
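If the check fails, a minimal remediation sketch (adjust the file list to what your distribution actually installs):

```console
$ sudo chown root:nova /etc/nova /etc/nova/nova.conf /etc/nova/api-paste.ini \
      /etc/nova/policy.json /etc/nova/rootwrap.conf
```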
### Check-Compute-02: Are strict permissions set for configuration files?

Similar to the previous check, we recommend setting strict access permissions for such configuration files.

Run the following commands:

```console
$ stat -L -c "%a" /etc/nova/nova.conf
$ stat -L -c "%a" /etc/nova/api-paste.ini
$ stat -L -c "%a" /etc/nova/policy.json
$ stat -L -c "%a" /etc/nova/rootwrap.conf
```

A broader restriction is also possible: if the containing directory is set to 750, newly created files in this directory are guaranteed to have the desired permissions.

Pass: If permissions are set to 640 or stricter, or the containing directory is set to 750. The permissions of 640/750 translate to owner r/w, group r, and no permissions for others, for example "u=rw,g=r,o=".

Note: With the ownership from Check-Compute-01: Is user/group ownership of config files set to root/nova? and permissions set to 640, root has read/write access and nova has read access to these configuration files. The access rights can also be validated using the following command. This command is only available on your system if it supports ACLs.

```console
$ getfacl --tabular -a /etc/nova/nova.conf
getfacl: Removing leading '/' from absolute path names
# file: etc/nova/nova.conf
USER   root  rw-
GROUP  nova  r--
mask         r--
other        ---
```

Fail: If permissions are not set to at least 640/750.

Recommended in: Compute.

### Check-Compute-03: Is keystone used for authentication?

Note: This item applies only to OpenStack releases Rocky and earlier, since `auth_strategy` was deprecated in Stein.

OpenStack supports various authentication strategies such as noauth and keystone. If the noauth strategy is used, users can interact with OpenStack services without any authentication. This is a potential risk, since an attacker might gain unauthorized access to the OpenStack components. We strongly recommend that all services must be authenticated with keystone using their service accounts.

Before Ocata:

- Pass: If the value of parameter `auth_strategy` under the `[DEFAULT]` section in `/etc/nova/nova.conf` is set to `keystone`.
- Fail: If the value of parameter `auth_strategy` under the `[DEFAULT]` section is set to `noauth` or `noauth2`.

Ocata and later:

- Pass: If the value of parameter `auth_strategy` under the `[api]` or `[DEFAULT]` section in `/etc/nova/nova.conf` is set to `keystone`.
- Fail: If the value of parameter `auth_strategy` under the `[api]` or `[DEFAULT]` section is set to `noauth` or `noauth2`.

### Check-Compute-04: Is a secure protocol used for authentication?

OpenStack components communicate with each other using various protocols, and the communication might involve sensitive or confidential data. An attacker may try to eavesdrop on the channel in order to get access to sensitive information. All components must communicate with each other using a secured communication protocol.

Pass: If the value of parameter `www_authenticate_uri` under the `[keystone_authtoken]` section in `/etc/nova/nova.conf` is set to an Identity API endpoint that starts with `https://`, and the value of parameter `insecure` under the same `[keystone_authtoken]` section in the same `/etc/nova/nova.conf` is set to `False`.

Fail: If the value of parameter `www_authenticate_uri` under the `[keystone_authtoken]` section in `/etc/nova/nova.conf` is not set to an Identity API endpoint that starts with `https://`, or the value of parameter `insecure` under the same `[keystone_authtoken]` section in the same `/etc/nova/nova.conf` is set to `True`.
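For reference, a hedged sketch of a passing configuration for Check-Compute-03 and Check-Compute-04 in `/etc/nova/nova.conf` (the endpoint URL is a placeholder):

```ini
[api]
auth_strategy = keystone

[keystone_authtoken]
www_authenticate_uri = https://keystone.example.com:5000/v3
insecure = False
```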
### Check-Compute-05: Does Nova communicate with Glance securely?

OpenStack components communicate with each other using various protocols, and the communication might involve sensitive or confidential data. An attacker may try to eavesdrop on the channel in order to get access to sensitive information. All components must communicate with each other using a secured communication protocol.

Pass: If the value of parameter `api_insecure` under the `[glance]` section in `/etc/nova/nova.conf` is set to `False`, and the value of parameter `api_servers` under the `[glance]` section in `/etc/nova/nova.conf` is set to a value that starts with `https://`.

Fail: If the value of parameter `api_insecure` under the `[glance]` section in `/etc/nova/nova.conf` is set to `True`, or the value of parameter `api_servers` under the `[glance]` section in `/etc/nova/nova.conf` is set to a value that does not start with `https://`.

## Block Storage

OpenStack Block Storage (cinder) is a service that provides software (services and libraries) to self-service manage persistent block-level storage devices. This creates on-demand access to Block Storage resources for use with OpenStack Compute (nova) instances. It creates software-defined storage via abstraction, by virtualizing pools of block storage onto a variety of back-end storage devices, which can be either software implementations or traditional hardware storage products. The primary functions of the service are to manage the creation, attaching, and detaching of block devices. The consumer requires no knowledge of the type of back-end storage equipment or of where it is located.

Compute instances store and retrieve block storage via industry-standard storage protocols such as iSCSI, ATA over Ethernet, or Fibre Channel. These resources are managed and configured via the OpenStack native standard HTTP RESTful API. For more details on the API, see the OpenStack Block Storage documentation.
Note: While this chapter is currently sparse on specific guidance, it is expected that standard hardening practices will be followed. This section will be expanded with relevant information.

### Volume wiping

There are several ways to wipe a block storage device. The traditional approach is to set `lvm_type` to `thin` and then use the `volume_clear` parameter if the LVM backend is used. Alternatively, if the volume encryption feature is used, volume wiping is not necessary as long as the volume encryption key is deleted. See the volume encryption section of the OpenStack Configuration Reference documentation for details on the setup, and the Castellan usage documentation regarding key deletion.

Note: In older OpenStack releases, `lvm_type=default` was used to signify a wipe. While this method still works, `lvm_type=default` is not recommended for setting secure delete.

The `volume_clear` parameter can accept `zero`. The `zero` argument writes a single pass of zeroes to the device.

For more information about the `lvm_type` parameter, see the LVM and over-subscription section of the thin provisioning documentation in the cinder project documentation. For details about the `volume_clear` parameter, see the Cinder configuration options section of the cinder project documentation.
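A hedged sketch of these options for an LVM backend in `/etc/cinder/cinder.conf`; the backend section name `lvm-1` is a placeholder:

```ini
[lvm-1]
lvm_type = thin
volume_clear = zero
```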
### Checklist

#### Check-Block-01: Is user/group ownership of config files set to root/cinder?

Configuration files contain critical parameters and information required for the smooth functioning of the component. If an unprivileged user, either intentionally or accidentally, modifies or deletes any of the parameters or the file itself, it would cause severe availability issues resulting in a denial of service to the other end users. Thus, user ownership of such critical configuration files must be set to root and group ownership must be set to cinder. Additionally, the containing directory should have the same ownership to ensure that new files are owned correctly.

Run the following commands:

```console
$ stat -L -c "%U %G" /etc/cinder/cinder.conf | egrep "root cinder"
$ stat -L -c "%U %G" /etc/cinder/api-paste.ini | egrep "root cinder"
$ stat -L -c "%U %G" /etc/cinder/policy.json | egrep "root cinder"
$ stat -L -c "%U %G" /etc/cinder/rootwrap.conf | egrep "root cinder"
$ stat -L -c "%U %G" /etc/cinder | egrep "root cinder"
```

Pass: If user and group ownership of all these configuration files is set to root and cinder respectively. The above commands show output of `root cinder`.

Fail: If the above commands do not return any output, as the user or group ownership might have been set to a user other than root or a group other than cinder.

#### Check-Block-02: Are strict permissions set for configuration files?

Similar to the previous check, we recommend setting strict access permissions for such configuration files.

Run the following commands:

```console
$ stat -L -c "%a" /etc/cinder/cinder.conf
$ stat -L -c "%a" /etc/cinder/api-paste.ini
$ stat -L -c "%a" /etc/cinder/policy.json
$ stat -L -c "%a" /etc/cinder/rootwrap.conf
$ stat -L -c "%a" /etc/cinder
```

A broader restriction is also possible: if the containing directory is set to 750, newly created files in this directory are guaranteed to have the desired permissions.

Pass: If permissions are set to 640 or stricter, or the containing directory is set to 750. The permissions of 640/750 translate to owner r/w, group r, and no permissions for others, that is "u=rw,g=r,o=". Note that with the ownership from Check-Block-01: Is user/group ownership of config files set to root/cinder? and permissions set to 640, root has read/write access and cinder has read access to these configuration files. The access rights can also be validated using the following command. This command is only available on your system if it supports ACLs.

```console
$ getfacl --tabular -a /etc/cinder/cinder.conf
getfacl: Removing leading '/' from absolute path names
# file: etc/cinder/cinder.conf
USER   root    rw-
GROUP  cinder  r--
mask           r--
other          ---
```

Fail: If permissions are not set to at least 640.

#### Check-Block-03: Is keystone used for authentication?

Note: This item applies only to OpenStack releases Rocky and earlier, since `auth_strategy` was deprecated in Stein.

OpenStack supports various authentication strategies such as noauth and keystone. If the noauth strategy is used, users can interact with OpenStack services without any authentication. This is a potential risk, since an attacker might gain unauthorized access to the OpenStack components. We strongly recommend that all services must be authenticated with keystone using their service accounts.

Pass: If the value of parameter `auth_strategy` under the `[DEFAULT]` section in `/etc/cinder/cinder.conf` is set to `keystone`.

Fail: If the value of parameter `auth_strategy` under the `[DEFAULT]` section is set to `noauth`.
#### Check-Block-04: Is TLS enabled for authentication?

OpenStack components communicate with each other using various protocols, and the communication might involve sensitive or confidential data. An attacker may try to eavesdrop on the channel in order to get access to sensitive information. All components must therefore communicate with each other using a secured communication protocol.

Pass: If the value of parameter `www_authenticate_uri` under the `[keystone_authtoken]` section in `/etc/cinder/cinder.conf` is set to an Identity API endpoint that starts with `https://`, and the value of parameter `insecure` under the same `[keystone_authtoken]` section in the same `/etc/cinder/cinder.conf` is set to `False`.

Fail: If the value of parameter `www_authenticate_uri` under the `[keystone_authtoken]` section in `/etc/cinder/cinder.conf` is not set to an Identity API endpoint that starts with `https://`, or the value of parameter `insecure` under the same `[keystone_authtoken]` section in the same `/etc/cinder/cinder.conf` is set to `True`.

#### Check-Block-05: Does cinder communicate with nova over TLS?

OpenStack components communicate with each other using various protocols, and the communication might involve sensitive or confidential data. An attacker may try to eavesdrop on the channel in order to get access to sensitive information. All components must therefore communicate with each other using a secured communication protocol.

Pass: If the value of parameter `nova_api_insecure` under the `[DEFAULT]` section in `/etc/cinder/cinder.conf` is set to `False`.

Fail: If the value of parameter `nova_api_insecure` under the `[DEFAULT]` section in `/etc/cinder/cinder.conf` is set to `True`.

#### Check-Block-06: Does cinder communicate with glance over TLS?

Similar to the previous check (Check-Block-05: Does cinder communicate with nova over TLS?), we recommend that all components communicate with each other using a secured communication protocol.

Pass: If the value of parameter `glance_api_insecure` under the `[DEFAULT]` section in `/etc/cinder/cinder.conf` is set to `False`, and the value of parameter `glance_api_servers` under the `[DEFAULT]` section in `/etc/cinder/cinder.conf` is set to a value that starts with `https://`.

Fail: If the value of parameter `glance_api_insecure` under the `[DEFAULT]` section in `/etc/cinder/cinder.conf` is set to `True`, or the value of parameter `glance_api_servers` under the `[DEFAULT]` section in `/etc/cinder/cinder.conf` is set to a value that does not start with `https://`.
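For reference, a hedged sketch of a passing configuration for Check-Block-04, 05, and 06 in `/etc/cinder/cinder.conf` (the endpoint URLs are placeholders):

```ini
[keystone_authtoken]
www_authenticate_uri = https://keystone.example.com:5000/v3
insecure = False

[DEFAULT]
nova_api_insecure = False
glance_api_insecure = False
glance_api_servers = https://glance.example.com:9292
```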
Check-Block-07: Is NAS operated in a secure environment?

Cinder supports an NFS driver, which works differently from traditional block storage drivers. The NFS driver does not actually give instances access to a storage device at the block level. Instead, files are created on an NFS share and mapped to instances, emulating a block device. Cinder supports secure configuration of such files by controlling the file permissions when a cinder volume is created. The cinder configuration can also control whether file operations are run as the root user or as the current OpenStack process user.

Pass: If the parameter nas_secure_file_permissions under the [DEFAULT] section in /etc/cinder/cinder.conf is set to auto. When set to auto, a check is performed during cinder startup to determine whether any cinder volumes already exist; if there are no volumes, the option is set to True and secure file permissions are used, while the detection of existing volumes sets the option to False and keeps the current, insecure method of handling file permissions. And if the parameter nas_secure_file_operations under the [DEFAULT] section in /etc/cinder/cinder.conf is set to auto. When set to auto, a check is performed during cinder startup to determine whether any cinder volumes already exist; if there are no volumes, the option is set to True, which is secure and does not run operations as the root user, while the detection of existing volumes sets the option to False and keeps the current method of running operations as the root user. For new installations, a "marker file" is written so that subsequent restarts of cinder know what the original determination was.

Fail: If the parameter nas_secure_file_permissions under the [DEFAULT] section in /etc/cinder/cinder.conf is set to False and the parameter nas_secure_file_operations under the [DEFAULT] section in /etc/cinder/cinder.conf is set to False.
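A [DEFAULT] fragment that satisfies this check simply keeps both options at the auto behavior described above:

[DEFAULT]
nas_secure_file_permissions = auto
nas_secure_file_operations = auto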
Check-Block-08: Is the maximum request body size set to the default (114688)?

If the maximum body size per request is not defined, an attacker can craft arbitrarily large osapi requests that crash the service, ultimately resulting in a denial-of-service attack. Assigning a maximum value ensures that maliciously oversized requests are blocked, preserving the continued availability of the service.

Pass: If the parameter osapi_max_request_body_size under the [DEFAULT] section in /etc/cinder/cinder.conf is set to 114688, or the parameter max_request_body_size under the [oslo_middleware] section in /etc/cinder/cinder.conf is set to 114688.

Fail: If the parameter osapi_max_request_body_size under the [DEFAULT] section is not set to 114688, or the parameter max_request_body_size under the [oslo_middleware] section is not set to 114688.

Check-Block-09: Is volume encryption enabled?

Unencrypted volume data makes volume-hosting platforms especially high-value targets for attackers, because it allows an attacker to read the data of many different VMs. In addition, the physical storage medium could be stolen, remounted, and accessed from another machine. Encrypting volume data mitigates these risks and provides defense in depth for the volume-hosting platform. Block Storage (cinder) is able to encrypt volume data before it is written to disk, so enabling volume encryption is recommended; for instructions, see the volume encryption section of the OpenStack Cinder service configuration documentation.

Pass: If 1) the parameter backend under the [key_manager] section in /etc/cinder/cinder.conf is set, 2) the parameter backend under the [key_manager] section in /etc/nova/nova.conf is set, and 3) the instructions in the documentation referenced above are correctly followed.

For further verification, run these steps after completing the volume encryption setup and creating a volume type for LUKS, as described in the documentation above.

Create a VM:

$ openstack server create --image cirros-0.3.1-x86_64-disk --flavor m1.tiny TESTVM

Create an encrypted volume and attach it to the VM:

$ openstack volume create --size 1 --type LUKS 'encrypted volume'
$ openstack volume list
$ openstack server add volume --device /dev/vdb TESTVM 'encrypted volume'

On the VM, write some text to the newly attached volume and sync it:

# echo "Hello, world (encrypted /dev/vdb)" >> /dev/vdb
# sync && sleep 2

On the system hosting the cinder volume service, sync to flush the I/O caches, then test whether the string can be found:

# sync && sleep 2
# strings /dev/stack-volumes/volume-* | grep "Hello"

The search should not return the string written to the encrypted volume.

Fail: If the parameter backend under the [key_manager] section in /etc/cinder/cinder.conf is not set, or the parameter backend under the [key_manager] section in /etc/nova/nova.conf is not set, or the instructions in the documentation above are not correctly followed.
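As a minimal sketch of a passing configuration, and assuming Barbican is the deployed key manager (the backend value depends on your key-management service), both /etc/cinder/cinder.conf and /etc/nova/nova.conf would contain:

[key_manager]
backend = barbican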
Image Storage

OpenStack Image Storage (glance) is a service where users can upload and discover data assets that are meant to be used with other services. This currently includes images and metadata definitions.

The Image service includes discovering, registering, and retrieving virtual machine images. Glance has a RESTful API that allows querying of VM image metadata as well as retrieval of the actual image.

For more details on the service, see the OpenStack Glance documentation.

Checklist
Check-Image-01: Is user/group ownership of config files set to root/glance?
Check-Image-02: Are strict permissions set for configuration files?
Check-Image-03: Is Keystone used for authentication?
Check-Image-04: Is TLS enabled for authentication?
Check-Image-05: Are masked port scans prevented?

Note: While this chapter currently offers little in the way of specific guidance, standard hardening practices are expected to be followed. This section will be expanded with relevant information.

Checklist

Check-Image-01: Is user/group ownership of config files set to root/glance?

Configuration files contain critical parameters and information required for the component to function smoothly. If an unprivileged user, intentionally or accidentally, modifies or deletes any of the parameters or the file itself, it causes severe availability issues resulting in a denial of service to the other end users. User ownership of such critical configuration files must therefore be set to root, group ownership must be set to glance, and the containing directory should have the same ownership to ensure that new files are owned correctly.

Run the following commands:

$ stat -L -c "%U %G" /etc/glance/glance-api-paste.ini | egrep "root glance"
$ stat -L -c "%U %G" /etc/glance/glance-api.conf | egrep "root glance"
$ stat -L -c "%U %G" /etc/glance/glance-cache.conf | egrep "root glance"
$ stat -L -c "%U %G" /etc/glance/glance-manage.conf | egrep "root glance"
$ stat -L -c "%U %G" /etc/glance/glance-registry-paste.ini | egrep "root glance"
$ stat -L -c "%U %G" /etc/glance/glance-registry.conf | egrep "root glance"
$ stat -L -c "%U %G" /etc/glance/glance-scrubber.conf | egrep "root glance"
$ stat -L -c "%U %G" /etc/glance/glance-swift-store.conf | egrep "root glance"
$ stat -L -c "%U %G" /etc/glance/policy.json | egrep "root glance"
$ stat -L -c "%U %G" /etc/glance/schema-image.json | egrep "root glance"
$ stat -L -c "%U %G" /etc/glance/schema.json | egrep "root glance"
$ stat -L -c "%U %G" /etc/glance | egrep "root glance"
Pass: If user and group ownership of all these configuration files is set to root and glance respectively; the commands above should print "root glance" for each file.

Fail: If the commands above return no output.

Check-Image-02: Are strict permissions set for configuration files?

As with the previous check, we recommend setting strict access permissions on these configuration files.

Run the following commands:

$ stat -L -c "%a" /etc/glance/glance-api-paste.ini
$ stat -L -c "%a" /etc/glance/glance-api.conf
$ stat -L -c "%a" /etc/glance/glance-cache.conf
$ stat -L -c "%a" /etc/glance/glance-manage.conf
$ stat -L -c "%a" /etc/glance/glance-registry-paste.ini
$ stat -L -c "%a" /etc/glance/glance-registry.conf
$ stat -L -c "%a" /etc/glance/glance-scrubber.conf
$ stat -L -c "%a" /etc/glance/glance-swift-store.conf
$ stat -L -c "%a" /etc/glance/policy.json
$ stat -L -c "%a" /etc/glance/schema-image.json
$ stat -L -c "%a" /etc/glance/schema.json
$ stat -L -c "%a" /etc/glance

A broader restriction is also possible: if the containing directory is set to 750, newly created files in that directory are guaranteed to have the desired permissions.

Pass: If the permissions are set to 640 or stricter, or the containing directory is set to 750. Permissions of 640/750 translate to owner read/write, group read, and no access for others, that is, u=rw,g=r,o=.
Note: With Check-Image-01 (Is user/group ownership of config files set to root/glance?) and permissions of 640, root has read/write access and glance has read access to these configuration files. Access rights can also be verified with the following command, which is only available on your system if it supports ACLs:

$ getfacl --tabular -a /etc/glance/glance-api.conf
getfacl: Removing leading '/' from absolute path names
# file: /etc/glance/glance-api.conf
USER   root    rw-
GROUP  glance  r--
mask           r--
other          ---

Fail: If the permissions are not set to at least 640.

Check-Image-03: Is Keystone used for authentication?

Note: This item only applies to OpenStack releases Rocky and earlier, because `auth_strategy` was deprecated in Stein.

OpenStack supports several authentication strategies, including noauth and keystone. With the noauth strategy, users can interact with OpenStack services without any authentication. This is a potential risk, since an attacker could gain unauthorized access to the OpenStack components. We strongly recommend that all services authenticate with keystone using their service accounts.

Pass: If the parameter auth_strategy under the [DEFAULT] section in /etc/glance/glance-api.conf is set to keystone and the parameter auth_strategy under the [DEFAULT] section in /etc/glance/glance-registry.conf is set to keystone.

Fail: If the parameter auth_strategy under the [DEFAULT] section in /etc/glance/glance-api.conf, or the parameter auth_strategy under the [DEFAULT] section in /etc/glance/glance-registry.conf, is set to noauth.
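As a minimal sketch of a passing configuration, both /etc/glance/glance-api.conf and /etc/glance/glance-registry.conf would contain:

[DEFAULT]
auth_strategy = keystone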
Check-Image-04: Is TLS enabled for authentication?

OpenStack components communicate with each other using various protocols, and the communication may carry sensitive or confidential data. An attacker may try to eavesdrop on the channel to gain access to sensitive information. All components must communicate with each other over a secure protocol.

Pass: If the parameter www_authenticate_uri under the [keystone_authtoken] section in /etc/glance/glance-api.conf is set to an Identity API endpoint that starts with https://, and the parameter insecure under the same [keystone_authtoken] section, in /etc/glance/glance-api.conf and /etc/glance/glance-registry.conf, is set to False.

Fail: If the parameter www_authenticate_uri under the [keystone_authtoken] section in /etc/glance/glance-api.conf is not set to an Identity API endpoint starting with https://, or the parameter insecure under the same [keystone_authtoken] section in /etc/glance/glance-api.conf is set to True.

Check-Image-05: Are masked port scans prevented?

The copy_from feature in the Image service API v1 provided by glance can allow an attacker to perform masked network port scans. If the v1 API is enabled, this policy should be set to a restricted value.

Pass: If the value of the copy_from parameter in /etc/glance/policy.json is set to a restricted value, for example role:admin.

Fail: If the value of the copy_from parameter in /etc/glance/policy.json is not set.
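A policy.json fragment that restricts copy_from to administrators could look like the following sketch (shown in isolation for brevity; a real policy file contains many other rules):

{
    "copy_from": "role:admin"
}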
Shared File Systems

The Shared File Systems service (manila) provides a set of services for managing shared file systems in a multi-tenant cloud environment, similar to how OpenStack provides block-based storage management through the OpenStack Block Storage service (cinder) project. With the Shared File Systems service, you can create a shared file system and manage its properties, such as visibility, accessibility, and usage quotas.

The Shared File Systems service works with various storage providers that use the following shared file system protocols: NFS, CIFS, GlusterFS, and HDFS.

The Shared File Systems service serves the same purpose as Amazon Elastic File System (EFS).

Introduction
General security information
Network and security model
Share back-end modes
Flat and segmented networking
Network plug-ins
Security services
Introduction to security services
Security services management
Share access control
Share type access control
Policies

Checklist
Check-Shared-01: Is user/group ownership of config files set to root/manila?
Check-Shared-02: Are strict permissions set for configuration files?
Check-Shared-03: Is OpenStack Identity used for authentication?
Check-Shared-04: Is TLS enabled for authentication?
Check-Shared-05: Does Shared File Systems contact Compute over TLS?
Check-Shared-06: Does Shared File Systems contact Networking over TLS?
Check-Shared-07: Does Shared File Systems contact Block Storage over TLS?
Check-Shared-08: Is the maximum request body size set to the default (114688)?

Introduction

The Shared File Systems service (manila) is intended to run on one or more nodes. It consists of four main services, which are similar to those of the Block Storage service:

manila-api
manila-scheduler
manila-share
manila-data

manila-api is the service that provides a stable RESTful API. It authenticates and routes requests throughout the Shared File Systems service. There is python-manilaclient to interact with the API. For more details on the Shared File Systems API, see the OpenStack Shared File Systems API.

manila-share is responsible for managing Shared File Service devices, specifically the back-end devices.

manila-scheduler is responsible for scheduling requests and routing them to the appropriate manila-share service. It does so by picking one back end while filtering out all the others.

manila-data is responsible for managing data operations which, if not handled separately, might take a long time to complete and block other services.

The Shared File Systems service uses an SQL-based central database that is shared by all Shared File Systems services in the system. It can use any SQL dialect supported by the SQLAlchemy ORM, but it is tested only with the MySQL and PostgreSQL databases.

Because it uses SQL, the Shared File Systems service is similar to other OpenStack services and can be used with any OpenStack deployment. For more details on the API, see the OpenStack Shared File Systems API description. For more details on CLI usage and configuration, see the Shared File Systems Cloud Administrative Guide.

The original guide includes a diagram showing how the different parts of the Shared File Systems service interact with each other. Besides the services already described, that diagram shows two more entities: python-manilaclient and the storage controller.

python-manilaclient is the command-line interface for interacting with the Shared File Systems service via manila-api, as well as a Python module for interacting with the service programmatically.
A storage controller is typically a metal box with spinning disks, Ethernet ports, and some kind of software that allows network clients to read and write files on the disks. There are also software-only storage controllers that run on arbitrary hardware, clustered controllers that may allow multiple physical devices to appear as a single storage controller, and purely virtual storage controllers.

A share is a remote, mountable file system. You can mount a share to several hosts at a time, and it can be accessed from several hosts by several users.

The Shared File Systems service can work with different network types: flat, VLAN, VXLAN, or GRE, and it supports segmented networking. There are also different network plug-ins that provide a variety of integration approaches with the networking services available in OpenStack.

There are a large number of share drivers created by different vendors that support different hardware storage solutions, for example the NetApp Clustered Data ONTAP (cDOT) driver, the Huawei NAS driver, or the GlusterFS driver. Each share driver is a Python class that can be set for a back end and run there to manage share operations, some of which may be vendor-specific. The back end is an instance of the manila-share service.

Client configuration data for authentication and authorization can be stored by security services. Protocols such as LDAP, Kerberos, or the Microsoft Active Directory authentication service can be configured and used.

Unless it is explicitly changed in policy.json, either an administrator or the tenant that owns a share is able to manage access to it. Access management is done by creating access rules that authenticate through an IP address, user, group, or TLS certificates. The available authentication methods depend on which share driver and security service you configure and use.

Note: Different drivers support different access options, depending on the shared file system protocol in use. The supported shared file system protocols are NFS, CIFS, GlusterFS, and HDFS.
For example, the generic (block-storage-as-a-back-end) driver does not support the user and certificate authentication methods, nor does it support any security services such as LDAP, Kerberos, or Active Directory. For details of the features supported by different drivers, see the manila share features support mapping.

As an administrator, you can create share types that enable the scheduler to filter back ends before a share is created. Share types have extra specifications that you can set for the scheduler to filter and weigh back ends, so that an appropriate one is selected for a user's request to create a share. Shares and share types can be created as public or private. This level of visibility defines whether other tenants are able to see these objects and operate with them. An administrator can add access to a private share type for specific users or tenants in the Identity service. Users that you have granted access to can therefore see the available share types and create shares with them.

Permissions for API calls for different users and their roles are determined by policies, as in other OpenStack services.

The Identity service can be used for authentication in the Shared File Systems service. See the Identity section for details on Identity service security.

General security information

Similar to other OpenStack projects, the Shared File Systems service is registered with the Identity service, so you can find the API endpoints of the share service v1 and v2 with the manila endpoints command:

$ manila endpoints
+-------------+-----------------------------------------+
| manila      | Value                                   |
+-------------+-----------------------------------------+
| adminURL    | http://172.18.198.55:8786/v1/20787a7b...|
| region      | RegionOne                               |
| publicURL   | http://172.18.198.55:8786/v1/20787a7b...|
| internalURL | http://172.18.198.55:8786/v1/20787a7b...|
| id          | 82cc5535aa444632b64585f138cb9b61        |
+-------------+-----------------------------------------+

+-------------+-----------------------------------------+
| manilav2    | Value                                   |
+-------------+-----------------------------------------+
| adminURL    | http://172.18.198.55:8786/v2/20787a7b...|
| region      | RegionOne                               |
| publicURL   | http://172.18.198.55:8786/v2/20787a7b...|
| internalURL | http://172.18.198.55:8786/v2/20787a7b...|
| id          | 2e8591bfcac4405fa7e5dc3fd61a2b85        |
+-------------+-----------------------------------------+
By default, the Shared File Systems API service listens only on port 8786 with type tcp6, which supports both IPv4 and IPv6.

Note: 8786 is the default port for the Shared File Systems service. It can be changed to any other port, but this change should also be made in the configuration file through the osapi_share_listen_port option, which defaults to 8786.

In the /etc/manila/ directory you can find several configuration files:

api-paste.ini
manila.conf
policy.json
rootwrap.conf
rootwrap.d

./rootwrap.d:
share.filters

It is recommended to configure the Shared File Systems service to run under a non-root service account and to change the file permissions so that only the system administrator can modify them. The Shared File Systems service expects that only administrators can write to the configuration files, while the services can only read them through their membership in the manila group. Others must not be able to read these files, because they contain admin passwords for different services.

Apply checks Check-Shared-01: Is user/group ownership of config files set to root/manila? and Check-Shared-02: Are strict permissions set for configuration files? from the checklist to verify that the permissions are set correctly.

Note: The manila-rootwrap configuration in the `rootwrap.conf` file and the manila-rootwrap command filters for the share nodes in the `rootwrap.d/share.filters` file should be owned by, and writable only by, the root user.

Recommendation: The manila configuration file `manila.conf` may be placed anywhere; the path `/etc/manila/manila.conf` is expected by default.
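Analogous to the cinder and glance checks earlier in this guide, Check-Shared-01 and Check-Shared-02 can be verified with the same kind of commands; a minimal sketch, assuming the default /etc/manila layout and a root/manila ownership convention, is:

$ stat -L -c "%U %G" /etc/manila/manila.conf | egrep "root manila"
$ stat -L -c "%a" /etc/manila/manila.conf
$ stat -L -c "%U %G" /etc/manila | egrep "root manila"
$ stat -L -c "%a" /etc/manila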
Network and security model

A share driver in the Shared File Systems service is a Python class that can be set for a back end and run there to manage share operations, some of which are vendor-specific. The back end is an instance of the manila-share service. There are many share drivers created by different vendors in the Shared File Systems service. Each share driver supports one or more back-end modes: share servers and no share servers. An administrator chooses which mode to use by specifying it in the manila.conf configuration file, through the driver_handles_share_servers option.

The share servers mode can be configured with a flat network or with segmented networks; this depends on the network provider.

If you want to use different configurations, it is possible to use separate drivers for different modes on the same hardware. Depending on the mode chosen, an administrator may need to provide more configuration details through the configuration file.

Share back-end modes

Each share driver supports at least one of the possible driver modes:

Share servers mode
No share servers mode

The manila.conf configuration option that selects the share servers or no share servers mode is driver_handles_share_servers. It indicates whether the driver handles share servers itself or expects the Shared File Systems service to do so.

Mode: Share servers
Config option: driver_handles_share_servers = True
Description: The share driver creates share servers and manages, or handles, the share server life cycle.

Mode: No share servers
Config option: driver_handles_share_servers = False
Description: An administrator, rather than the share driver, manages the bare-metal storage with some network interface, instead of relying on the presence of share servers.

No share servers mode. In this mode the driver has basically no network requirements at all. It is assumed that the storage controller managed by the driver has all of the network interfaces it needs. The Shared File Systems service expects the driver to provision shares directly, without creating any share server beforehand. This mode corresponds to what some existing drivers were already doing, but it makes the choice explicit for the administrator. In this mode, a share network is not required for share creation and must not be provided.

Note: In the no share servers mode, the Shared File Systems service assumes that the network interfaces through which any share is exported are already reachable by all tenants.
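As a sketch of how the option is set per back end, a manila.conf fragment for a back end running in the no share servers mode might look like the following (the back-end name is illustrative, and the share_driver option for your chosen vendor driver would also go in this section):

[DEFAULT]
enabled_share_backends = backend1

[backend1]
share_backend_name = BACKEND1
driver_handles_share_servers = False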
In the no share servers mode, the share driver does not handle the storage life cycle. An administrator is expected to handle the storage, the network interfaces, and other host configuration. In this mode an administrator can set up the storage as a host that exports shares. The main characteristic of this mode is that the storage is not handled by the Shared File Systems service. Users in a tenant share a common network, hosts, processing, and network pipes; they can hinder each other if the storage configured beforehand by an administrator or a proxy does not have correct balancing adjustments. In public clouds it is possible that all network capacity is used by one client, so an administrator should take care that this does not happen. Balancing adjustment can be done by any means, not necessarily with OpenStack tools.

Share servers mode. In this mode the driver is able to create share servers and plug them into existing networks. When a new share server is provisioned, the driver needs an IP address and subnet from the Shared File Systems service.

Unlike the no share servers mode, in the share servers mode users have a share network and a share server created for each share network. Thus all users have separate CPUs, CPU time, networks, capacity, and throughput.

You can also configure security services in both the share servers and no share servers back-end modes. However, with the no share servers back-end mode, an administrator should set up the required authentication services manually on the host. In the share servers mode, the Shared File Systems service can be configured automatically with any existing security service supported by the share driver.

Flat and segmented networking

The Shared File Systems service allows the use of different network types:

flat
GRE
VLAN
VXLAN
Note: The Shared File Systems service only stores the information about the networks in its database; the real networks are provided by the network provider. In OpenStack this can be the Legacy networking (nova-network) or Networking (neutron) service, but the Shared File Systems service can even work outside of OpenStack. This is possible with the `StandaloneNetworkPlugin`, which can be used with any network platform and does not require specific network services in OpenStack such as the Networking or Legacy networking services. You can set the network parameters in its configuration file.

In the share servers back-end mode, the share driver creates and manages a share server for each share network. This mode can be divided into two variants:

Flat network in share servers back-end mode
Segmented network in share servers back-end mode

Initially, when you create a share network, you can set either a network and subnet of OpenStack Networking (neutron) or a network of the Legacy networking (nova-network) service. The third approach is to configure the network without the Legacy networking and Networking services: the StandaloneNetworkPlugin can be used with any network platform, and you can set the network parameters in its configuration file.

Recommendation: Network plug-ins are not used by the share drivers that rely on the OpenStack Compute service; in the Mitaka release these are the Windows driver and the generic driver. These share drivers have other options and use a different approach.

After a share network is created, the Shared File Systems service retrieves the network information determined by the network provider: the network type, the segmentation identifier (if the network uses segmentation), and the IP block in CIDR notation from which to allocate the network.

Flat network in share servers back-end mode. In this mode, some storage controllers can create share servers, but because of various restrictions in the physical or logical network, all of the share servers must be on a flat network.
In this mode the share driver needs a way to provision IP addresses for the share servers, but the IP addresses will all come from the same subnet, and it is assumed that all tenants can reach that subnet.

The security service part of a share network specifies security requirements such as an AD or LDAP domain or a Kerberos realm. The Shared File Systems service assumes that any hosts referred to in the security service are reachable from the subnet where the share servers are created, which limits the number of situations in which this mode can be used.

Segmented network in share servers back-end mode. In this mode the share driver is able to create share servers and plug them into an existing segmented network. The share driver expects the Shared File Systems service to provide a subnet definition for each new share server. This definition should include the segmentation type, the segmentation ID, and any other information relevant to the segmentation type.

Note: Some share drivers may not support all types of segmentation; for details, see the specification of the driver you are using.

Network plug-ins

The Shared File Systems service architecture defines an abstraction layer for network resource provisioning. It allows administrators to choose from different options for how network resources are assigned to their tenants' networked storage. There are several network plug-ins that provide a variety of integration approaches with the networking services available with OpenStack.

Network plug-ins allow using any functionality and configuration of the OpenStack Networking and Legacy networking services. You can use any network segmentation supported by the Networking service, use flat networks or VLAN-segmented networks with the Legacy networking (nova-network) service, or use a plug-in to specify networks independently of the OpenStack networking services. For details on how to use the different network plug-ins, see the Shared File Systems service network plug-ins documentation.

Security services

For authentication and authorization of clients, the Shared File Systems storage service can optionally be configured with different network authentication protocols. The supported authentication protocols are LDAP, Kerberos, and the Microsoft Active Directory authentication service.

Introduction to security services
After a share is created and its export location is obtained, users have no permissions to mount it and operate with files. The Shared File Systems service requires access to a new share to be granted explicitly.

Client configuration data for authentication and authorization (AuthN/AuthZ) can be stored by security services. LDAP, Kerberos, or Microsoft Active Directory can be used by the Shared File Systems service if they are supported by the driver and back end in use. An authentication service can also be configured without the Shared File Systems service.

Note: In some cases it is required to explicitly specify one of the security services; for example, the NetApp, EMC, and Windows drivers require Active Directory for the creation of shares with the CIFS protocol.

Security services management

A security service is a Shared File Systems service (manila) entity that abstracts a set of options defining a security domain for a particular shared file system protocol, such as an Active Directory domain or a Kerberos realm. The security service contains all of the information necessary for the Shared File Systems service to create a server that joins the given domain.

Using the API, users can create, update, view, and delete security services. Security services are designed on the following assumptions:

Tenants provide the details for the security services.
Administrators care about security services: they configure the server side of such security services.
Inside the Shared File Systems API, a security_service is associated with share_networks.
The share driver uses the data in the security service to configure a newly created share server.

When creating a security service, you can choose one of the following authentication services:

Authentication service: LDAP
Description: The Lightweight Directory Access Protocol. An application protocol for accessing and maintaining distributed directory information services over an IP network.

Authentication service: Kerberos
Description: A network authentication protocol that works on the basis of tickets to allow nodes communicating over a non-secure network to prove their identity to one another in a secure manner.

Authentication service: Active Directory
Description: A directory service that Microsoft developed for Windows domain networks. It uses LDAP, Microsoft's version of Kerberos, and DNS.
The Shared File Systems service allows you to configure a security service with the following options:

A DNS IP address that is used inside the tenant network.
An IP address or host name of the security service.
A domain of the security service.
A user or group name that is used by the tenant.
A user password, if a user name is specified.

An existing security service entity can be associated with share network entities, which inform the Shared File Systems service about the security and network configuration for a group of shares. You can also see the list of all security services for a specified share network and disassociate them from the share network.

For details on managing security services via the API, see the security services API. You can also manage security services via python-manilaclient; see security services CLI management.
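As a sketch of the CLI workflow, the options above map onto a manila security-service-create call that is then associated with a share network (the name, domain, and addresses are illustrative, and the example assumes a driver and back end that support Active Directory):

$ manila security-service-create active_directory --dns-ip 10.0.0.10 --domain example.com --user Administrator --password secret --name ad_example
$ manila share-network-security-service-add my_share_network ad_example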
Administrators and users who own a share can manage access to shares by creating access rules, with authentication through an IP address, user, group, or TLS certificates. The authentication methods depend on which share driver and security service you configure and use.

Thus, as an administrator, you can configure a back end to use a specific authentication service over the network, and it will store the users. The authentication service can work on a client host that has neither the Shared File Systems service nor the Identity service.

Note: Different share drivers support different authentication services. For details of the features supported by different drivers, see the manila share features support mapping. Support for a specific authentication service by a driver does not mean that it can be configured with any shared file system protocol. The supported shared file system protocols are NFS, CIFS, GlusterFS, and HDFS. See the driver vendor's documentation for information on a specific driver and its configuration for security services.

Some drivers support security services, and others do not support any of the security services mentioned above. For example, the generic driver with the NFS or CIFS shared file system protocol supports only the IP address authentication method.

Recommendations:
- In most cases, drivers that support the CIFS shared file system protocol can be configured to use Active Directory and manage access through user authentication.
- Drivers that support the GlusterFS protocol can be used with authentication via TLS certificates.
- With drivers that support the NFS protocol, authentication via IP address is the only supported option.
- Because the HDFS shared file system protocol uses NFS access, it can also be configured to authenticate via IP address.

Bear in mind, however, that authentication by IP is the least secure type of authentication.

The recommended configuration for real usage of the Shared File Systems service is to create a share with the CIFS share protocol and add the Microsoft Active Directory directory service to it. In this configuration you get a centralized database and a service that combines the Kerberos and LDAP approaches. This is a real use case that is convenient for production shared file systems.

Share access control

The Shared File Systems service allows granting or denying access to different entities of the service to other clients.

Since a share is a remotely mountable instance of a file system, you can manage access to a specified share and list the permissions for it.

Shares can be public or private. This is the visibility level of a share, and it defines whether other tenants are able to see the share. By default, all shares are created as private. When creating a share, use the --public key to make your share public, so that other tenants can see it in the list of shares and view its detailed information.

According to the policy.json file, administrators and users who own a share can manage access to it by creating access rules. Using the manila access-allow, manila access-deny, and manila access-list commands, you can grant, deny, and list access to a specified share accordingly.
## Share access control

The Shared File Systems service allows granting or denying access to different entities of the service for other clients.

Having a share as a remotely mountable instance of a file system, you can manage access to a specified share and list the permissions set on it. Shares can be public or private. This is the visibility level of a share and defines whether other tenants can see it. By default, all shares are created private. When creating a share, use the --public flag to make it public, so that other tenants can see it in the list of shares and view its details.

According to the policy.json file, administrators and users who are share owners can manage access to shares by creating access rules. With the manila access-allow, manila access-deny, and manila access-list commands you can grant, deny, and list access to a specified share, respectively.

Recommendation: By default, when a share is created and has its export location, the Shared File Systems service expects that nobody can access the share by mounting it. Note that the share driver you use may change this configuration, or it may be changed directly on the share storage. To be sure, check the mount configuration of the export protocol. When a share has just been created, there are no default access rules and no mount permissions associated with it. This can be seen in the mount configuration of the export protocol in use. For example, on the storage there is the NFS exportfs command or the /etc/exports file, which controls each remote share and defines the hosts that can access it; it is empty if nobody can mount the share. On a remote CIFS server, the net conf list command shows the configuration. The hosts deny parameter should be set by the share driver to 0.0.0.0/0, which means that any host is denied from mounting the share.

Using the Shared File Systems service, you can grant or deny access to a share by specifying one of the supported share access levels:

- rw. Read and write (RW) access. This is the default.
- ro. Read-only (RO) access.

Recommendation: The RO access level is helpful on public shares when the administrator gives read/write (RW) access to some specific editors or contributors and read-only (RO) access to the rest of the users (the viewers).

You must also specify one of the supported authentication methods:

- ip. Authenticates an instance by its IP address. Valid formats are XX.XX.XX.XX or XX.XX.XX.XX/XX, for example 0.0.0.0/0.
- cert. Authenticates an instance by a TLS certificate. Specify the TLS identity as IDENTKEY. A valid value is any string up to 64 characters long in the certificate common name (CN).
- user. Authenticates by the specified user or group name. A valid value is an alphanumeric string, which may contain some special characters, from 4 to 32 characters long.

Note: The supported authentication methods depend on the share driver, security service, and shared file system protocol you configure and use. The supported shared file system protocols are NFS, CIFS, GlusterFS, and HDFS. The supported security services are LDAP, the Kerberos protocol, and the Microsoft Active Directory service. See the Manila share feature support mapping for details of the features each driver supports.

Below is an example of an NFS share created with the generic driver. After the share is created, it has the export location 10.254.0.3:/shares/share-b2874f8d-d428-4a5c-b056-e6af80a995de. If you try to mount it on a host with the IP address 10.254.0.4, you get a "Permission denied" message:

```
# mount.nfs -v 10.254.0.3:/shares/share-b2874f8d-d428-4a5c-b056-e6af80a995de /mnt
mount.nfs: timeout set for Mon Oct 12 13:07:47 2015
mount.nfs: trying text-based options 'vers=4,addr=10.254.0.3,clientaddr=10.254.0.4'
mount.nfs: mount(2): Permission denied
mount.nfs: access denied by server while mounting 10.254.0.3:/shares/share-b2874f8d-...
```

As an administrator, you can connect over SSH to the host with the IP address 10.254.0.3, check the /etc/exports file there, and see that it is empty:

```
# cat /etc/exports
#
```

The generic driver used in this example does not support any security service, so with the NFS shared file system protocol access can only be granted by IP address:

```
$ manila access-allow Share_demo2 ip 10.254.0.4
+--------------+--------------------------------------+
| Property     | Value                                |
+--------------+--------------------------------------+
| share_id     | e57c25a8-0392-444f-9ffc-5daadb9f756c |
| access_type  | ip                                   |
| access_to    | 10.254.0.4                           |
| access_level | rw                                   |
| state        | new                                  |
| id           | 62b8e453-d712-4074-8410-eab6227ba267 |
+--------------+--------------------------------------+
```

After the rule reaches the active state, connect to the 10.254.0.3 host again, check the /etc/exports file, and verify that a line with the rule has been added:

```
# cat /etc/exports
/shares/share-b2874f8d-d428-4a5c-b056-e6af80a995de 10.254.0.4(rw,sync,wdelay,hide,nocrossmnt,secure,root_squash,no_all_squash,no_subtree_check,secure_locks,acl,anonuid=65534,anongid=65534,sec=sys,rw,root_squash,no_all_squash)
```

Now the share can be mounted on the host with the IP address 10.254.0.4 with rw permissions:

```
# mount.nfs -v 10.254.0.3:/shares/share-b2874f8d-d428-4a5c-b056-e6af80a995de /mnt
# ls -a /mnt
.  ..  lost+found
# echo "Hello!" > /mnt/1.txt
# ls -a /mnt
.  ..  1.txt  lost+found
#
```
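The rule granted above can also be verified from the service side with manila access-list. The following output is only a sketch of what such a listing might look like; the exact columns depend on the client version:

```
$ manila access-list Share_demo2
+--------------------------------------+-------------+------------+--------------+--------+
| id                                   | access_type | access_to  | access_level | state  |
+--------------------------------------+-------------+------------+--------------+--------+
| 62b8e453-d712-4074-8410-eab6227ba267 | ip          | 10.254.0.4 | rw           | active |
+--------------------------------------+-------------+------------+--------------+--------+
```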
## Share type access control

A share type is an administrator-defined "type of service", composed of a tenant-visible description and a tenant-invisible list of key-value pairs, called extra specifications. The manila-scheduler uses the extra specifications to make scheduling decisions, and drivers control share creation.

Administrators can create and delete share types, and can manage the extra specifications that give them meaning inside the Shared File Systems service. Tenants can list the share types and use them to create new shares. For details about managing share types, see the Shared File Systems API and the share type management documentation.

Share types can be created as public or private. This is the visibility level of a share type and defines whether other tenants can see it in the share type list and use it to create new shares.

By default, share types are created as public. When creating a share type, set the --is_public parameter to False to make it private, which prevents other tenants from seeing it in the list of share types and from creating new shares with it. Public share types, on the other hand, are available to every tenant in the cloud.

The Shared File Systems service allows an administrator to grant or deny access to a private share type for specific tenants. It is also possible to get information about who has been granted access to a specified private share type.

Recommendation: Because share types, through their extra specifications, help to filter or choose back ends before a user creates a share, access to share types can be used to limit which clients may pick specific back ends.
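A private share type such as the my_type used in the example below might be created like this. This is only a sketch; the driver_handles_share_servers value is illustrative and depends on the back end:

```
# Create a private share type; "false" is the driver_handles_share_servers spec
$ manila type-create my_type false --is_public false
```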
For example, an admin user in the admin tenant can create a private share type named my_type and see it in the list. In the following console examples, logging in and out is omitted, and the environment variables are shown to indicate the user who is currently logged in.

```
$ env | grep OS_
...
OS_USERNAME=admin
OS_TENANT_NAME=admin
...
$ manila type-list --all
+----+---------+------------+------------+------------------------------------+-----------------------+
| ID | Name    | Visibility | is_default | required_extra_specs               | optional_extra_specs  |
+----+---------+------------+------------+------------------------------------+-----------------------+
| 4..| my_type | private    | -          | driver_handles_share_servers:False | snapshot_support:True |
| 5..| default | public     | YES        | driver_handles_share_servers:True  | snapshot_support:True |
+----+---------+------------+------------+------------------------------------+-----------------------+
```

The demo user in the demo tenant can list the types, but the private share type named my_type is not visible to that user:

```
$ env | grep OS_
...
OS_USERNAME=demo
OS_TENANT_NAME=demo
...
$ manila type-list --all
+----+---------+------------+------------+------------------------------------+-----------------------+
| ID | Name    | Visibility | is_default | required_extra_specs               | optional_extra_specs  |
+----+---------+------------+------------+------------------------------------+-----------------------+
| 5..| default | public     | YES        | driver_handles_share_servers:True  | snapshot_support:True |
+----+---------+------------+------------+------------------------------------+-----------------------+
```

The administrator can grant access to the private share type to the demo tenant, whose tenant ID is df29a37db5ae48d19b349fe947fada46:

```
$ env | grep OS_
...
OS_USERNAME=admin
OS_TENANT_NAME=admin
...
$ openstack project list
+----------------------------------+------+
| ID                               | Name |
+----------------------------------+------+
| ...                              | ...  |
| df29a37db5ae48d19b349fe947fada46 | demo |
+----------------------------------+------+
$ manila type-access-add my_type df29a37db5ae48d19b349fe947fada46
```

As a result, users in the demo tenant can now see the private share type and use it when creating shares:

```
$ env | grep OS_
...
OS_USERNAME=demo
OS_TENANT_NAME=demo
...
$ manila type-list --all
+----+---------+------------+------------+------------------------------------+-----------------------+
| ID | Name    | Visibility | is_default | required_extra_specs               | optional_extra_specs  |
+----+---------+------------+------------+------------------------------------+-----------------------+
| 4..| my_type | private    | -          | driver_handles_share_servers:False | snapshot_support:True |
| 5..| default | public     | YES        | driver_handles_share_servers:True  | snapshot_support:True |
+----+---------+------------+------------+------------------------------------+-----------------------+
```

To deny access for a specified project, use the manila type-access-remove command.

Recommendation: A real production use case for share types and the access to them is a deployment with two back ends: cheap LVM as public storage and expensive Ceph as private storage. In this case you can grant access to certain tenants and make the access possible with the user/group authentication method.

## Policies

The Shared File Systems service has its own role-based access policies. They determine which user can access which objects and in which way, and are defined in the service's policy.json file.

Recommendation: The policy.json configuration file can be placed anywhere; the path /etc/manila/policy.json is expected by default.

Whenever an API call is made to the Shared File Systems service, the policy engine uses the appropriate policy definitions to determine whether the call can be accepted.

A policy rule determines under which circumstances the API call is permitted. The rules in /etc/manila/policy.json can take one of these forms: an empty string, "", which always allows the action; a rule based on user roles or on other rules; a rule with a Boolean expression. Below is a snippet of the policy.json file for the Shared File Systems service. It can change from one OpenStack release to another.

```
{
    "context_is_admin": "role:admin",
    "admin_or_owner": "is_admin:True or project_id:%(project_id)s",
    "default": "rule:admin_or_owner",
    "share_extension:quotas:show": "",
    "share_extension:quotas:update": "rule:admin_api",
    "share_extension:quotas:delete": "rule:admin_api",
    "share_extension:quota_classes": "",
}
```

Users must be assigned to the groups and roles that are referenced in the policies. This is done automatically by the service when the user management commands are used.
Note: Any changes to /etc/manila/policy.json take effect immediately, which allows new policies to be implemented while the Shared File Systems service is running. Manual modification of the policy can have unexpected side effects and is not encouraged. For details, see the policy.json file documentation.

## Checklist

### Check-Shared-01: Is user/group ownership of the configuration files set to root/manila?

Configuration files contain critical parameters and information required for the smooth functioning of the component. If an unprivileged user, either intentionally or accidentally, modifies or deletes any of the parameters or the file itself, it causes severe availability issues and a denial of service to the other end users. Therefore, user ownership of such critical configuration files must be set to root, and group ownership must be set to manila. In addition, the containing directory should have the same ownership to ensure that new files are owned correctly.

Run the following commands:

```
$ stat -L -c "%U %G" /etc/manila/manila.conf | egrep "root manila"
$ stat -L -c "%U %G" /etc/manila/api-paste.ini | egrep "root manila"
$ stat -L -c "%U %G" /etc/manila/policy.json | egrep "root manila"
$ stat -L -c "%U %G" /etc/manila/rootwrap.conf | egrep "root manila"
$ stat -L -c "%U %G" /etc/manila | egrep "root manila"
```

Pass: if user and group ownership of all of these configuration files is set to root and manila respectively. The commands above then print "root manila" as their output.

Fail: if the commands above return no output, because the user or group ownership may have been set to a user other than root or to a group other than manila.

### Check-Shared-02: Are strict permissions set on the configuration files?

Similar to the previous check, it is recommended to set strict access permissions on such configuration files.

Run the following commands:

```
$ stat -L -c "%a" /etc/manila/manila.conf
$ stat -L -c "%a" /etc/manila/api-paste.ini
$ stat -L -c "%a" /etc/manila/policy.json
$ stat -L -c "%a" /etc/manila/rootwrap.conf
$ stat -L -c "%a" /etc/manila
```

A broader restriction is also possible: if the containing directory is set to 750, newly created files in that directory are guaranteed to have the desired permissions.
Pass: if the permissions are set to 640 or stricter, or if the containing directory is set to 750. Permissions of 640 translate to owner r/w, group r, and no permissions for others, that is "u=rw, g=r, o=". Note that with Check-Shared-01 in place and permissions set to 640, root has read/write access and manila has read access to these configuration files. Access can also be verified with the following command, which is only available on your system if it supports ACLs:

```
$ getfacl --tabular -a /etc/manila/manila.conf
getfacl: Removing leading '/' from absolute path names
# file: etc/manila/manila.conf
USER   root    rw-
GROUP  manila  r--
mask           r--
other          ---
```

Fail: if the permissions are not set to at least 640.

### Check-Shared-03: Is OpenStack Identity used for authentication?

Note: This item only applies to OpenStack releases Rocky and earlier, because auth_strategy was deprecated in Stein.

OpenStack supports various authentication strategies, such as noauth and keystone. If the noauth strategy is used, users can interact with OpenStack services without any authentication. This is a potential risk, because an attacker might gain unauthorized access to the OpenStack components. Therefore, it is strongly recommended that all services authenticate with keystone using their service accounts.

Pass: if the auth_strategy parameter under the [DEFAULT] section of manila.conf is set to keystone.

Fail: if the auth_strategy parameter under the [DEFAULT] section is set to noauth.

### Check-Shared-04: Is TLS enabled for authentication?

OpenStack components communicate with each other using various protocols, and the communication might involve sensitive or confidential data. An attacker may try to eavesdrop on the channel to access sensitive information. All components must therefore communicate with each other using a secure communication protocol.

Pass: if the www_authenticate_uri parameter under the [keystone_authtoken] section of /etc/manila/manila.conf is set to an Identity API endpoint that starts with https://, and the insecure parameter under the same [keystone_authtoken] section of the same /etc/manila/manila.conf is set to False.

Fail: if the www_authenticate_uri parameter under the [keystone_authtoken] section of /etc/manila/manila.conf is not set to an Identity API endpoint starting with https://, or if the insecure parameter under the same [keystone_authtoken] section of the same /etc/manila/manila.conf is set to True.
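Taken together, a manila.conf that satisfies Check-Shared-03 and Check-Shared-04 would contain lines along the following pattern. The endpoint URL is a placeholder for your deployment's Identity API endpoint:

```
[DEFAULT]
auth_strategy = keystone

[keystone_authtoken]
# Identity API endpoint must use HTTPS; host name is illustrative
www_authenticate_uri = https://controller:5000/
insecure = False
```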
### Check-Shared-05: Does the Shared File Systems service contact Compute over TLS?

Note: This item only applies to OpenStack releases Train and earlier, because this option was deprecated in Ussuri.

OpenStack components communicate with each other using various protocols, and the communication might involve sensitive or confidential data. An attacker may try to eavesdrop on the channel to access sensitive information. All components must therefore communicate with each other using a secure communication protocol.

Pass: if the nova_api_insecure parameter under the [DEFAULT] section of manila.conf is set to False.

Fail: if the nova_api_insecure parameter under the [DEFAULT] section of manila.conf is set to True.

### Check-Shared-06: Does the Shared File Systems service contact Networking over TLS?

Note: This item only applies to OpenStack releases Train and earlier, because this option was deprecated in Ussuri.

Similar to the previous check (Check-Shared-05), it is recommended that all components communicate with each other using a secure communication protocol.

Pass: if the neutron_api_insecure parameter under the [DEFAULT] section of manila.conf is set to False.

Fail: if the neutron_api_insecure parameter under the [DEFAULT] section of manila.conf is set to True.

### Check-Shared-07: Does the Shared File Systems service contact Block Storage over TLS?

Note: This item only applies to OpenStack releases Train and earlier, because this option was deprecated in Ussuri.

Similar to the previous check (Check-Shared-05), it is recommended that all components communicate with each other using a secure communication protocol.

Pass: if the cinder_api_insecure parameter under the [DEFAULT] section of manila.conf is set to False.

Fail: if the cinder_api_insecure parameter under the [DEFAULT] section of manila.conf is set to True.

### Check-Shared-08: Is the maximum size of the request body set to the default (114688)?

If the maximum body size per request is not defined, an attacker can craft arbitrarily large OSAPI requests, leading to service crashes and ultimately to a denial-of-service attack. Assigning the maximum value ensures that any malicious oversized request is blocked, ensuring continued availability of the service.

Pass: if the max_request_body_size parameter under the [oslo_middleware] section of manila.conf is set to 114688, or if the osapi_max_request_body_size parameter under the [DEFAULT] section of manila.conf is set to 114688. The [DEFAULT] parameter osapi_max_request_body_size is deprecated, and it is preferable to use [oslo_middleware]/max_request_body_size.

Fail: if neither the max_request_body_size parameter under the [oslo_middleware] section nor the osapi_max_request_body_size parameter under the [DEFAULT] section of manila.conf is set to 114688.
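For Check-Shared-08, the preferred, non-deprecated setting corresponds to the following manila.conf fragment:

```
[oslo_middleware]
# Default maximum request body size in bytes
max_request_body_size = 114688
```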
## Networking

The OpenStack Networking service (neutron) enables end users or tenants to define, utilize, and consume networking resources. OpenStack Networking provides a tenant-facing API for defining network connectivity and IP addressing for instances in the cloud, as well as for orchestrating the network configuration. With the transition to an API-centric networking service, cloud architects and administrators should take into consideration best practices to secure the physical and virtual network infrastructure and services.

OpenStack Networking was designed with a plug-in architecture that provides extensibility of the API through open source community or third-party services. As you evaluate your architectural design requirements, it is important to determine which features are available in the OpenStack Networking core services, which additional services are provided by third-party products, and which supplemental services need to be implemented in the physical infrastructure.

This section gives a high-level overview of the processes and best practices to consider when implementing OpenStack Networking:

- Network architecture
- Placement of OpenStack Networking services on physical servers
- Network services
- L2 isolation using VLANs and tunneling
- Network services
- Network services extensions
- Network services limitations
- Network services security best practices
- OpenStack Networking service configuration
- Securing OpenStack Networking services
- Project network services workflow
- Network resource policy engine
- Security groups
- Quotas
- Mitigating ARP spoofing
- Checklist
  - Check-Neutron-01: Is user/group ownership of the configuration files set to root/neutron?
  - Check-Neutron-02: Are strict permissions set on the configuration files?
  - Check-Neutron-03: Is keystone used for authentication?
  - Check-Neutron-04: Is a secure protocol used for authentication?
  - Check-Neutron-05: Is TLS enabled on the Neutron API server?
### Network architecture

OpenStack Networking is a standalone service that typically deploys several processes across a number of nodes. These processes interact with each other and with other OpenStack services. The main process of the OpenStack Networking service is neutron-server, a Python daemon that exposes the OpenStack Networking API and passes tenant requests to a suite of plug-ins for additional processing.

The OpenStack Networking components are:

- Neutron server (neutron-server and neutron-*-plugin): runs on the network node to service the Networking API and its extensions. It also enforces the network model and IP addressing of each port. The neutron-server requires indirect access to a persistent database. This is accomplished through plug-ins, which communicate with the database using AMQP (Advanced Message Queuing Protocol).
- Plug-in agent (neutron-*-agent): runs on each compute node to manage the local virtual switch (vswitch) configuration. The plug-in that you use determines which agents run. This service requires message queue access and depends on the plug-in used. Some plug-ins, such as OpenDaylight (ODL) and Open Virtual Network (OVN), do not require any Python agents on the compute nodes.
- DHCP agent (neutron-dhcp-agent): provides DHCP services to tenant networks. This agent is the same across all plug-ins and is responsible for maintaining the DHCP configuration. The neutron-dhcp-agent requires message queue access. Optional, depending on the plug-in.
- L3 agent (neutron-l3-agent): provides L3/NAT forwarding for virtual machines on tenant networks. Requires message queue access. Optional, depending on the plug-in.
- Network provider services (SDN server/services): provide additional networking services to tenant networks. These SDN services may interact with neutron-server, neutron-plugin, and plugin-agents through communication channels such as REST APIs.

The following figure shows an architectural and networking flow diagram of the OpenStack Networking components.

### Placement of OpenStack Networking services on physical servers

This guide focuses on a standard architecture that includes a cloud controller host, a network host, and a set of compute hypervisors for running VMs.
### Network connectivity of physical servers

A standard OpenStack Networking setup has up to four distinct physical data center networks:

- Management network: used for internal communication between OpenStack components. The IP addresses on this network should be reachable only within the data center; it is considered the management security domain.
- Guest network: used for VM data communication within the cloud deployment. The IP addressing requirements of this network depend on the OpenStack Networking plug-in in use and the network configuration choices that tenants make for their virtual networks. This network is considered the guest security domain.
- External network: used to provide VMs with Internet access in some deployment scenarios. Anyone on the Internet can reach the IP addresses on this network. This network is considered to belong to the public security domain.
- API network: exposes all OpenStack APIs, including the OpenStack Networking API, to tenants. Anyone on the Internet can reach the IP addresses on this network. This may be the same network as the external network, because it is possible to create a subnet for the external network that uses only a portion of the full IP allocation range. This network is considered the public security domain.

For more information, see the OpenStack Administrator Guide.

### Network services

In the initial architectural phases of designing your OpenStack network infrastructure, it is important to ensure that appropriate expertise is available to assist with the design of the physical networking infrastructure, to identify proper security controls, and to identify auditing mechanisms.

OpenStack Networking adds a layer of virtualized network services, giving tenants the capability to architect their own virtual networks. Currently, these virtualized services are not as mature as their traditional networking counterparts. Consider the current state of these virtualized services before adopting them, as it dictates which controls you may need to implement at the boundary between virtualized and traditional networks.

### L2 isolation using VLANs and tunneling

OpenStack Networking can employ two different mechanisms for traffic segregation on a per tenant/network combination: VLANs (IEEE 802.1Q tagging) or L2 tunnels using GRE encapsulation.
The scope and scale of your OpenStack deployment determine which method you should use for traffic segregation or isolation.

#### VLANs

VLANs are realized as packets on a specific physical network containing IEEE 802.1Q headers with a specific VLAN ID (VID) field value. VLAN networks sharing the same physical network are isolated from each other at L2 and can even have overlapping IP address spaces. Each distinct physical network supporting VLAN networks is treated as a separate VLAN trunk, with a distinct space of VID values. Valid VID values are 1 through 4094.

The complexity of VLAN configuration depends on your OpenStack design requirements. To allow OpenStack Networking to use VLANs efficiently, you must allocate a VLAN range (one per tenant) and turn each compute node physical switch port into a VLAN trunk port.

Note: If you intend for your network to support more than 4094 tenants, VLANs are probably not the right choice for you, because multiple workarounds are required to extend VLAN tagging beyond 4094 tenants.

#### L2 tunneling

Network tunneling encapsulates each tenant/network combination with a unique "tunnel-id" that is used to identify the network traffic belonging to that combination. The tenant's L2 network connectivity is independent of physical locality or of the underlying network design. By encapsulating traffic inside IP packets, that traffic can cross Layer-3 boundaries, removing the need for pre-configured VLANs and VLAN trunking. Tunneling adds a layer of obfuscation to network data traffic, reducing the visibility of individual tenant traffic from a monitoring point of view.

OpenStack Networking currently supports both GRE and VXLAN encapsulation.

The choice of technology to provide L2 isolation depends on the scope and size of the tenant networks that will be created in your deployment. If your environment has limited VLAN ID availability or will have a large number of L2 networks, it is recommended that you use tunneling.
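How the isolation mechanism is selected is ultimately a plug-in configuration choice. For the ML2 plug-in, it might look roughly like the following /etc/neutron/plugins/ml2/ml2_conf.ini fragment; this is only a sketch, and physnet1 and the ranges shown are illustrative assumptions:

```
[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan

[ml2_type_vlan]
# VLAN-based isolation: one VID range per physical network (name and range are examples)
network_vlan_ranges = physnet1:1000:2000

[ml2_type_vxlan]
# Tunnel-based isolation: range of VXLAN network identifiers (example range)
vni_ranges = 1:1000
```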
### Network services

The choice of tenant network isolation affects how the network security and control boundaries of tenant services are implemented. The following additional network services are either available or currently under development to enhance the security posture of the OpenStack network architecture.

#### Access control lists

OpenStack Compute supports tenant network traffic access controls directly when deployed with the legacy nova-network service, or it may defer access control to the OpenStack Networking service. Note that the legacy nova-network security groups are applied to all virtual interface ports on an instance using iptables.

Security groups allow administrators and tenants to specify the type of traffic, and the direction (ingress/egress), that is allowed to pass through a virtual interface port. Security group rules are stateful L2-L4 traffic filters.

When using the Networking service, it is recommended to enable security groups in that service and to disable them in the Compute service.

#### L3 routing and NAT

OpenStack Networking routers can connect multiple L2 networks, and can also provide a gateway that connects one or more private L2 networks to a shared external network, such as a public network used for Internet access.

The L3 router provides basic Network Address Translation (NAT) capabilities on the gateway ports that uplink the router to external networks. By default, this router SNATs (source NAT) all traffic and supports floating IPs, which create a static one-to-one mapping from a public IP on the external network to a private IP on one of the other subnets attached to the router.

It is recommended to leverage per-tenant L3 routing and floating IPs for more granular connectivity of tenant VMs.

#### Quality of Service (QoS)

By default, Quality of Service (QoS) policies and rules are managed by the cloud administrator, which means that tenants can neither create specific QoS rules nor attach specific ports to policies. In some use cases, such as some telecommunications applications, the administrator may trust the tenants and therefore let them create and attach their own policies to ports. This can be done by modifying the policy.json file; specific documentation is to be released along with the extension.

The Networking service (neutron) supports a bandwidth-limiting QoS rule in Liberty and later releases. This QoS rule is named QosBandwidthLimitRule and accepts two non-negative integers measured in kilobits per second:

- max-kbps: the bandwidth
- max-burst-kbps: the burst buffer
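As an illustration of the bandwidth-limit rule described above, with the legacy neutron CLI a policy might be created and attached to a port roughly as follows. This is a hedged sketch only; the policy name, values, and port ID are placeholders:

```
# Create a QoS policy, add a bandwidth-limit rule, and attach it to a port (illustrative)
$ neutron qos-policy-create bw-limiter
$ neutron qos-bandwidth-limit-rule-create bw-limiter --max-kbps 3000 --max-burst-kbps 300
$ neutron port-update <port-id> --qos-policy bw-limiter
```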
The QosBandwidthLimitRule is implemented in the neutron Open vSwitch, Linux bridge, and Single Root I/O Virtualization (SR-IOV) drivers.

In Newton, the QosDscpMarkingRule rule was added. This rule marks the Differentiated Services Code Point (DSCP) value in the type-of-service header on IPv4 (RFC 2474) and in the traffic-class header on IPv6, for all traffic of the virtual machines to which the rule is applied. This is a 6-bit header with 21 valid values that denote the drop priority of a packet as it crosses the network when congestion is encountered. It can also be used by firewalls to match valid or invalid traffic against their access control lists.

Port mirroring involves sending a copy of packets entering or leaving one port to another port, which is usually different from the original destination of the packets being mirrored. Tap-as-a-Service (TaaS) is an extension to the OpenStack Networking service (neutron). It provides remote port mirroring capability for tenant virtual networks. This service is mainly intended to help tenants (or the cloud administrator) debug complex virtual networks and gain visibility into their VMs by monitoring the network traffic associated with them. TaaS honors tenant boundaries, and its mirror sessions are able to span multiple compute and network nodes. It is an essential infrastructure component that can be used to feed data to various network analytics and security applications.

#### Load balancing

Another feature of OpenStack Networking is Load-Balancer-as-a-Service (LBaaS). The LBaaS reference implementation is based on HAProxy. There are third-party plug-ins in development as extensions to OpenStack Networking to provide extensive L4-L7 functionality for virtual interface ports.

#### Firewalls

FW-as-a-Service (FWaaS) is considered an experimental feature of the Kilo release of OpenStack Networking. FWaaS addresses the need to manage and leverage the rich set of security features provided by typical firewall products, which are usually far more comprehensive than what security groups currently provide. Both Freescale and Intel developed third-party plug-ins as extensions to OpenStack Networking to support this component in the Kilo release. For more details on the administration of FWaaS, see the Firewall-as-a-Service (FWaaS) overview in the OpenStack Administrator Guide.
When designing your OpenStack Networking infrastructure, it is important to understand the current features and limitations of the available network services. Understanding the boundaries of your virtual and physical networks will help you add the required security controls to your environment.

### Network services extensions

A list of known plug-ins provided by the open source community or by SDN companies that work with OpenStack Networking is available on the OpenStack neutron plug-ins and drivers wiki page.

### Network services limitations

OpenStack Networking has the following known limitations:

- Overlapping IP addresses: If nodes that run either neutron-l3-agent or neutron-dhcp-agent use overlapping IP addresses, those nodes must use Linux network namespaces. By default, the DHCP and L3 agents use Linux network namespaces and run in their own namespaces. However, if the host does not support multiple namespaces, the DHCP and L3 agents should run on different hosts, because there is no isolation between the IP addresses created by the L3 agent and those created by the DHCP agent. If network namespace support is not present, a further limitation of the L3 agent is that only a single logical router is supported.
- Multi-host DHCP agent: OpenStack Networking supports multiple L3 and DHCP agents with load balancing. However, tight coupling of the virtual machine location is not supported. In other words, the default virtual machine scheduler does not take the location of the agents into account when creating virtual machines.
- No IPv6 support for L3 agents: The neutron-l3-agent, used by many plug-ins to implement L3 forwarding, supports only IPv4 forwarding.

### Network services security best practices

To secure OpenStack Networking, you must understand how the workflow process for tenant instance creation maps to security domains.

There are four main services that interact with OpenStack Networking. In a typical OpenStack deployment, these services map to the following security domains:

- OpenStack dashboard: public and management
- OpenStack Identity: management
- OpenStack compute node: management and guest
- OpenStack network node: management, guest, and possibly public, depending on the neutron-plugin in use
- SDN services node: management, guest, and possibly public, depending on the product used

To isolate sensitive data communication between the OpenStack Networking services and other OpenStack core services, configure these communication channels to only allow communication over an isolated management network.

### OpenStack Networking service configuration

#### Restrict the bind address of the API server: neutron-server

To restrict the interface or IP address on which the OpenStack Networking API service binds a network socket for incoming client connections, specify bind_host and bind_port in the neutron.conf file, as shown below:

```
# Address to bind the API server
bind_host = IP ADDRESS OF SERVER

# Port to bind the API server to
bind_port = 9696
```

#### Restrict the DB and RPC communication of the OpenStack Networking services

The various components of the OpenStack Networking services use either the messaging queue or database connections to communicate with other components in OpenStack Networking.

For all components that require direct database connections, it is recommended that you follow the guidelines provided in the database authentication and access control documentation. For all components that require RPC communication, it is recommended that you follow the guidelines provided in the queue authentication and access control documentation.

### Securing OpenStack Networking services

This section discusses OpenStack Networking configuration best practices as they apply to project network security within your OpenStack deployment.

#### Project network services workflow

OpenStack Networking provides users with self-service configuration of network resources. It is important that cloud architects and operators evaluate their design use cases when providing users the ability to create, update, and destroy available network resources.

#### Network resource policy engine

The policy engine and its configuration file, policy.json, within OpenStack Networking provide a method for finer-grained authorization of users on project networking methods and objects. The OpenStack Networking policy definitions affect network availability, network security, and overall OpenStack security. Cloud architects and operators should carefully evaluate their policy towards user and project access to the administration of network resources. For a more detailed explanation of the OpenStack Networking policy definitions, see the Authentication and authorization section of the OpenStack Administrator Guide.
Note: It is important to review the default networking resource policies, as they can be modified to suit your security posture.

If your deployment of OpenStack provides multiple external access points into different security domains, it is important that you limit the project's ability to attach multiple vNICs to multiple external access points; doing so would bridge these security domains and could lead to unforeseen security compromises. This risk can be mitigated by utilizing the host aggregates functionality provided by OpenStack Compute, or by splitting the project virtual machines into multiple projects with different virtual network configurations.

#### Security groups

The OpenStack Networking service provides security group functionality using a mechanism that is more flexible and powerful than the security group capabilities built into OpenStack Compute. Therefore, when using OpenStack Networking, the built-in security groups should always be disabled in nova.conf, and all security group calls should be proxied to the OpenStack Networking API. Failure to do so results in conflicting security policies being applied simultaneously by both services. To proxy security groups to OpenStack Networking, use the following configuration values:

- firewall_driver must be set to nova.virt.firewall.NoopFirewallDriver so that nova-compute does not perform iptables-based filtering itself.
- security_group_api must be set to neutron so that all security group requests are proxied to the OpenStack Networking service.

A security group is a container for security group rules. Security groups and their rules allow administrators and projects to specify the type of traffic and the direction (ingress/egress) that is allowed to pass through a virtual interface port. When a virtual interface port is created in OpenStack Networking, it is associated with a security group. For more details on the default behavior of port security groups, see the Networking security group behavior documentation. Rules can be added to the default security group in order to change the behavior on a per-deployment basis.

When the OpenStack Compute API is used to modify security groups, the updated security group is applied to all virtual interface ports on an instance. This is because the OpenStack Compute security group APIs are instance-based rather than port-based, as found in OpenStack Networking.
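The two settings described above amount to the following nova.conf fragment, shown here as a minimal sketch:

```
[DEFAULT]
# Disable nova's own iptables filtering and proxy security groups to neutron
firewall_driver = nova.virt.firewall.NoopFirewallDriver
security_group_api = neutron
```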
### Quotas

Quotas provide the ability to limit the number of network resources available to projects. You can enforce default quotas for all projects. The `/etc/neutron/neutron.conf` file includes the following quota options:

```
[QUOTAS]
# resource name(s) that are supported in quota features
quota_items = network,subnet,port

# default number of resources allowed per tenant, minus for unlimited
#default_quota = -1

# number of networks allowed per tenant, and minus means unlimited
quota_network = 10

# number of subnets allowed per tenant, and minus means unlimited
quota_subnet = 10

# number of ports allowed per tenant, and minus means unlimited
quota_port = 50

# number of security groups allowed per tenant, and minus means unlimited
quota_security_group = 10

# number of security group rules allowed per tenant, and minus means unlimited
quota_security_group_rule = 100

# default driver to use for quota checks
quota_driver = neutron.quota.ConfDriver
```

OpenStack Networking also supports per-project quotas via a quota extension API. To enable per-project quotas, you must set the `quota_driver` option in `neutron.conf`:

```
quota_driver = neutron.db.quota.driver.DbQuotaDriver
```

### Mitigate ARP spoofing

When using flat networking, projects that share the same layer 2 network (or broadcast domain) cannot be assumed to be fully isolated from each other. These projects may be vulnerable to ARP spoofing attacks, which raises the possibility of man-in-the-middle attacks.

If you are using a version of Open vSwitch that supports matching on ARP fields, this risk can be reduced by enabling the `prevent_arp_spoofing` option of the Open vSwitch agent. This option prevents instances from performing spoofing attacks; it does not protect them from being the target of such attacks. Note that this setting is expected to be removed in Ocata, with the behavior becoming permanently active.

For example, in `/etc/neutron/plugins/ml2/openvswitch_agent.ini`:

```
prevent_arp_spoofing = True
```

Plugins other than Open vSwitch may include similar mitigations; it is recommended that you enable them where appropriate.

Note: Even with `prevent_arp_spoofing` enabled, flat networks do not provide complete project isolation, because all project traffic is still sent to the same VLAN.

## Checklist

### Check-Neutron-01: Is the user/group ownership of config files set to root/neutron?
Configuration files contain critical parameters and information required for smooth functioning of the component. If an unprivileged user, either intentionally or accidentally, modifies or deletes any of the parameters or the file itself, it would cause severe availability issues, resulting in a denial of service to the other end users. Thus, user ownership of such critical configuration files must be set to root and group ownership must be set to neutron. Additionally, the containing directory should have the same ownership to ensure that new files are owned correctly.

Run the following commands:

```
$ stat -L -c "%U %G" /etc/neutron/neutron.conf | egrep "root neutron"
$ stat -L -c "%U %G" /etc/neutron/api-paste.ini | egrep "root neutron"
$ stat -L -c "%U %G" /etc/neutron/policy.json | egrep "root neutron"
$ stat -L -c "%U %G" /etc/neutron/rootwrap.conf | egrep "root neutron"
$ stat -L -c "%U %G" /etc/neutron | egrep "root neutron"
```

Pass: if user and group ownership of all these configuration files is set to root and neutron respectively. The commands above print "root neutron" for each file.

Fail: if the above commands return no output, as the user or group ownership may have been set to a user other than root or a group other than neutron.

### Check-Neutron-02: Are strict permissions set for configuration files?

Similar to the previous check, it is recommended to set strict access permissions for such configuration files.

Run the following commands:

```
$ stat -L -c "%a" /etc/neutron/neutron.conf
$ stat -L -c "%a" /etc/neutron/api-paste.ini
$ stat -L -c "%a" /etc/neutron/policy.json
$ stat -L -c "%a" /etc/neutron/rootwrap.conf
$ stat -L -c "%a" /etc/neutron
```

A broader restriction is also possible: if the containing directory is set to 750, files newly created in that directory are guaranteed to have the desired permissions.

Pass: if permissions are set to 640 or stricter, or the containing directory is set to 750. Permissions of 640 translate to owner r/w, group r, and no permissions for others, that is "u=rw, g=r, o=". Note that with Check-Neutron-01 satisfied and permissions set to 640, root has read/write access and neutron has read access to these configuration files. The access rights can also be validated using the following command, which is only available on your system if it supports ACLs:

```
$ getfacl --tabular -a /etc/neutron/neutron.conf
getfacl: Removing leading '/' from absolute path names
# file: etc/neutron/neutron.conf
USER   root      rw-
GROUP  neutron   r--
mask             r--
other            ---
```

Fail: if permissions are not set to at least 640.
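If either check fails, ownership and permissions can be corrected with commands along the following lines. This is a hedged sketch assuming the standard `/etc/neutron` layout and a local `neutron` group; adjust the file list to your packaging:

```
# chown root:neutron /etc/neutron/neutron.conf /etc/neutron/api-paste.ini \
    /etc/neutron/policy.json /etc/neutron/rootwrap.conf
# chown root:neutron /etc/neutron
# chmod 640 /etc/neutron/*.conf /etc/neutron/*.ini /etc/neutron/policy.json
# chmod 750 /etc/neutron
```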
### Check-Neutron-03: Is keystone used for authentication?

Note: This check only applies to OpenStack releases Rocky and earlier, as `auth_strategy` is deprecated in Stein.

OpenStack supports various authentication strategies such as noauth and keystone. If the "noauth" strategy is used, users can interact with OpenStack services without any authentication. This is a potential risk, since an attacker might gain unauthorized access to the OpenStack components. It is therefore strongly recommended that all services authenticate with keystone using their service accounts.

Pass: if the `auth_strategy` parameter under the `[DEFAULT]` section in `/etc/neutron/neutron.conf` is set to `keystone`.

Fail: if the `auth_strategy` parameter under the `[DEFAULT]` section is set to `noauth` or `noauth2`.

### Check-Neutron-04: Is a secure protocol used for authentication?

OpenStack components communicate with each other using various protocols, and the communication may involve sensitive or confidential data. An attacker may try to eavesdrop on the channel in order to access sensitive information. All components must therefore communicate with each other using a secure communication protocol.

Pass: if the `www_authenticate_uri` parameter under the `[keystone_authtoken]` section in `/etc/neutron/neutron.conf` is set to an Identity API endpoint that starts with `https://`, and the `insecure` parameter under the same `[keystone_authtoken]` section is set to `False`.

Fail: if the `www_authenticate_uri` parameter under the `[keystone_authtoken]` section in `/etc/neutron/neutron.conf` is not set to an Identity API endpoint starting with `https://`, or the `insecure` parameter under the same `[keystone_authtoken]` section is set to `True`.

### Check-Neutron-05: Is TLS enabled on the Neutron API server?

Similar to the previous check, it is recommended to enable secure communication on the API server.

Pass: if the `use_ssl` parameter under the `[DEFAULT]` section in `/etc/neutron/neutron.conf` is set to `True`.

Fail: if the `use_ssl` parameter under the `[DEFAULT]` section in `/etc/neutron/neutron.conf` is set to `False`.
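For reference, a minimal, hedged `/etc/neutron/neutron.conf` fragment that would satisfy Check-Neutron-03 through Check-Neutron-05 could look like the following; the endpoint URL is a placeholder and the TLS certificate options are omitted:

```
# /etc/neutron/neutron.conf (illustrative fragment)
[DEFAULT]
auth_strategy = keystone
# Serve the Neutron API over TLS (certificate/key options not shown here).
use_ssl = True

[keystone_authtoken]
# Placeholder Identity endpoint; must use https://
www_authenticate_uri = https://identity.cloud.example.org:5000/
insecure = False
```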
## Object Storage

The OpenStack Object Storage (swift) service provides software that stores and retrieves data over HTTP. Objects (blobs of data) are stored in an organizational hierarchy that offers anonymous read-only access, ACL-defined access, or even temporary access. Object Storage supports multiple token-based authentication mechanisms implemented via middleware.

Applications store and retrieve data in Object Storage via an industry-standard HTTP RESTful API. Back-end components of Object Storage follow the same RESTful model, although some APIs (such as those managing durability) are kept private to the cluster. For more details on the API, see the OpenStack Storage API.

The components of Object Storage fall into the following primary groups:

- Proxy services
- Auth services
- Storage services
  - Account service
  - Container service
  - Object service

(Figure: example architecture diagram from the OpenStack Object Storage Administration Guide, 2013.)

Note: An Object Storage installation does not have to be on the Internet; it could also be a private cloud, with the public switch being part of the organization's internal network infrastructure.

### Network security

Securing the Object Storage service begins with securing the networking components. If you skipped the networking chapter, return to the networking section.

The rsync protocol is used between storage service nodes to replicate data for high availability. In addition, the proxy service communicates with the storage services when relaying data back and forth between the client endpoint and the cloud environment.

Warning: Object Storage does not employ encryption or authentication for inter-node communications. This is why you see a private switch or private network ([V]LAN) in architecture diagrams. This data domain should also be separate from other OpenStack data networks. For further discussion on security domains, see Security boundaries and threats.

Recommendation: use a private (V)LAN network segment for your storage nodes in the data domain. This requires the proxy nodes to have dual interfaces (physical or virtual):

- one as a public interface for consumer access, and
- another as a private interface with access to the storage nodes.
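Because rsync traffic between storage nodes is neither encrypted nor authenticated (see the warning above), one common hardening step is to bind the rsync daemon on each storage node to the private replication network only. The following is a hedged `/etc/rsyncd.conf` sketch; `STORAGE_NET_IP` is a placeholder for the node's address on the dedicated (V)LAN, and the module layout follows typical swift installation guides rather than any mandated standard:

```
# /etc/rsyncd.conf (illustrative sketch)
uid = swift
gid = swift
# Listen only on the private replication/storage network, never on a public interface.
address = STORAGE_NET_IP

[object]
path      = /srv/node
read only = false
lock file = /var/lock/object.lock
```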
The following diagram demonstrates one possible network architecture.

(Figure: Object Storage network architecture with a management node, OSAM.)

### General service security

#### Run services as non-root user

It is recommended that you configure the Object Storage services to run under a non-root (UID 0) service account. One recommendation is a username of `swift` with a primary group of `swift`. Object Storage services include, for example, `proxy-server`, `container-server`, and `account-server`. Detailed steps for setup and configuration can be found in the "Add Object Storage" chapter of the Installation Guide in the OpenStack documentation index.

Note: The link above defaults to the Ubuntu version.

#### File permissions

The `/etc/swift` directory contains information about the ring topology and environment configuration. The following permissions are recommended:

```
# chown -R root:swift /etc/swift/*
# find /etc/swift/ -type f -exec chmod 640 {} \;
# find /etc/swift/ -type d -exec chmod 750 {} \;
```

This restricts only root to be able to modify configuration files, while allowing the services to read them through their membership in the `swift` group.

### Securing storage services

The following are the default listening ports for the various storage services:

| Service name      | Port | Type |
|-------------------|------|------|
| Account service   | 6002 | TCP  |
| Container service | 6001 | TCP  |
| Object service    | 6000 | TCP  |
| Rsync [1]         | 873  | TCP  |

[1] If ssync is used instead of rsync, the object service port is used for maintaining durability.

Important: Authentication does not take place at the storage nodes. If you are able to connect to a storage node on one of these ports, you can access or modify data without authentication. To prevent this, you should follow the earlier recommendation about using a private storage network.

### Object Storage account terminology

An Object Storage account is not a user account or credential. The following explains the relations:

- Object Storage account: a collection of containers; not a user account or authentication. Which users are associated with the account, and how they may access it, depends on the authentication system used. See Object Storage authentication.
- Object Storage container: a collection of objects. Metadata on the container is available for ACLs. The meaning of the ACLs depends on the authentication system used.
- Object Storage object: the actual data objects. ACLs at the object level are also available with metadata and depend on the authentication system used.

At each level, you have ACLs that dictate who has what type of access. ACLs are interpreted based on the authentication system in use. The two most common types of authentication providers are the Identity service (keystone) and TempAuth. Custom authentication providers are also possible. See Object Storage authentication for more information.

### Securing proxy services

A proxy node should have at least two interfaces (physical or virtual): one public and one private. Firewalls or service binding might protect the public interface. The public-facing service is an HTTP web server that processes end-point client requests, authenticates them, and performs the appropriate action. The private interface does not require any listening services; it is instead used to establish outgoing connections to storage nodes on the private storage network.

#### HTTP listening port

As described earlier, you should configure your web service to run as a non-root (no UID 0) user such as `swift`. Using a port greater than 1024 makes this easy and avoids running any part of the web container as root. Normally, clients that use the HTTP REST API and perform authentication automatically retrieve the full REST API URL they require from the authentication response. OpenStack's REST API allows a client to authenticate against one URL and then be told to use a completely different URL for the actual service. For example, a client can authenticate against https://identity.cloud.example.org:55443/v1/auth and receive a response containing its authentication key and a storage URL (the URL of the proxy nodes or load balancer) of https://swift.cloud.example.org:44443/v1/AUTH_8980.

The method for configuring your web server to start and run as a non-root user varies by web server and operating system.
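As a hedged illustration of the advice above, a proxy server can be bound to an unprivileged port and dropped to the `swift` user in its configuration; the fragment below is a sketch, and the middleware pipeline and TLS termination details of a real deployment are omitted:

```
# /etc/swift/proxy-server.conf (illustrative fragment)
[DEFAULT]
# Unprivileged port so no part of the web container needs to run as root.
bind_port = 8080
bind_ip = 0.0.0.0
# Drop privileges to the dedicated service account.
user = swift
```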
#### Load balancer

If the option of using Apache is not feasible, or if you want to offload TLS work for better performance, you may employ a dedicated network device load balancer. This is a common way to provide redundancy and load balancing when using more than one proxy node.

If you choose to offload TLS, ensure that the network link between the load balancer and the proxy nodes is on a private (V)LAN segment so that other nodes on the network (which may have been compromised) cannot wiretap (sniff) the unencrypted traffic. If such a breach were to occur, the attacker could gain access to end-point client or cloud administrator credentials and access the cloud data.

The authentication service you use (such as the Identity service (keystone) or TempAuth) determines how you configure a different URL in the responses to end-point clients so that they use your load balancer instead of an individual proxy node.

### Object Storage authentication

Object Storage uses a WSGI model to provide middleware capability that not only offers general extensibility but is also used to authenticate end-point clients. The authentication provider defines what roles and user types exist. Some use traditional username and password credentials, while others may leverage API key tokens or even client-side x.509 certificates. Custom providers can be integrated using custom middleware.

Object Storage ships with two authentication middleware modules by default, either of which can be used as sample code for developing a custom authentication middleware.

#### TempAuth

TempAuth is the default authentication for Object Storage. In contrast to Identity, it stores the user accounts, credentials, and metadata in Object Storage itself. More information can be found in the auth system section of the Object Storage (swift) documentation.

#### Keystone

Keystone is the commonly used Identity provider in OpenStack. It may also be used for authentication in Object Storage. Coverage of securing keystone is already provided in the Identity chapter.

#### Other notable items

In `/etc/swift`, on every node, there is a `swift_hash_path_prefix` setting and a `swift_hash_path_suffix` setting.
These are provided to reduce the chance of hash collisions for objects being stored and to avoid one user overwriting another user's data.

These values should initially be set with a cryptographically secure random number generator and kept consistent across all nodes. Ensure that they are protected with proper ACLs and that you have a backup copy to avoid data loss.
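A hedged sketch of how these values are typically generated and recorded; the `[swift-hash]` section in `/etc/swift/swift.conf` is where they usually live, the `openssl rand` invocation is just one way to obtain cryptographically secure random data, and the values shown are placeholders only:

```
# /etc/swift/swift.conf (illustrative fragment; keep identical on all nodes)
# Each value generated once with a CSPRNG, for example: openssl rand -hex 32
[swift-hash]
swift_hash_path_prefix = 9f2c51c4f6a0...   # placeholder, not a real value
swift_hash_path_suffix = 1b7e03acd981...   # placeholder, not a real value
```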
## Secrets management

Operators protect sensitive information in a cloud deployment by using various encryption applications, for example encrypting data at rest or signing images to prove that they have not been tampered with. In all of these cases, the encryption feature requires some kind of key material in order to operate.

Secrets management describes the set of technologies intended to protect key material in software systems. Traditionally, key management has involved the deployment of Hardware Security Modules (HSMs), devices that are physically hardened against tampering.

As technology has progressed, the number of secrets that need to be protected has grown from key material to include certificate pairs, API keys, system passwords, signing keys, and more. This growth has created the need for more scalable approaches to key management and has led to the creation of many software services that provide scalable, dynamic key management. This chapter introduces the services that exist today, focusing on those that can be integrated into an OpenStack cloud. It covers:

- Summary of existing technologies
- Related OpenStack projects
- Use cases: image signature verification, volume encryption, ephemeral disk encryption, Sahara, Magnum, Octavia/LBaaS, Swift, passwords in configuration files
- Barbican: overview, crypto plugins (Simple Crypto plugin, PKCS#11 crypto plugin), secret store plugins (KMIP plugin, Dogtag plugin, Vault plugin)
- Castellan: overview
- Frequently asked questions
- Checklist: Check-Key-Manager-01 (Is ownership of config files set to root/barbican?), Check-Key-Manager-02 (Are strict permissions set for configuration files?), Check-Key-Manager-03 (Is OpenStack Identity used for authentication?), Check-Key-Manager-04 (Is TLS enabled for authentication?)

### Summary of existing technologies

In OpenStack there are two recommended solutions for secrets management, Barbican and Castellan. This chapter provides an overview of the different options to help operators choose which key manager to use.

A third, unsupported approach is fixed or hard-coded keys. It is well known that some OpenStack services offer the option to specify keys in their configuration files. This is the least secure way to operate and is not recommended for any kind of production environment.

Other solutions include KeyWhiz, Confidant, Conjur, EJSON, Knox, and Red October, but they fall outside the scope of this document; it is not possible to cover every available key manager.

For the storage of secrets, the use of a Hardware Security Module (HSM) is strongly encouraged. HSMs come in several forms; the traditional device is a rack-mounted appliance, as shown in the referenced blog post.

### Related OpenStack projects

Castellan is a library that provides a simple, generic interface for storing, generating, and retrieving secrets. It is used by most OpenStack services for secrets management. As a library, Castellan does not itself provide secret storage; instead, a back-end implementation needs to be deployed.

Note that Castellan does not provide any authentication itself. It simply passes authentication credentials (such as a keystone token) through to the back end.

Barbican is an OpenStack service that provides a back end for Castellan. Barbican requires and validates keystone authentication tokens to identify the user and project that is accessing or storing a secret. It then applies policy to decide whether access is allowed. It also provides a number of additional useful features to improve key management, including quotas, per-secret ACLs, tracking of secret consumers, and grouping of secrets in secret containers. For example, Octavia integrates directly with Barbican (rather than Castellan) to take advantage of some of these features.

Barbican has a number of back-end plugins that can be used to store secrets securely in a local database or in an HSM.

At this time, Barbican is the only back end available for Castellan. Several other back ends are under development, however, including KMIP, Dogtag, Hashicorp Vault, and Custodia.
For deployers who do not wish to deploy Barbican and whose key management needs are relatively simple, using one of these back ends may be a viable alternative. What is missing, however, is the enforcement of multi-tenancy and tenant policy when retrieving secrets, as well as any of the additional features mentioned above.

### Use cases

#### Image signature verification

Verifying image signatures ensures that an image has not been replaced or changed since its original upload. The image signature verification feature uses Castellan as its key manager to store cryptographic signatures. The image signature and certificate UUID are uploaded along with the image to the Image (glance) service. Glance verifies the signature after retrieving the certificate from the key manager. When booting the image, the Compute (nova) service verifies the signature after retrieving the certificate from the key manager.

For more details, see the Trusted Images documentation.

#### Volume encryption

The volume encryption feature provides encryption of data at rest using Castellan. When a user creates an encrypted volume type and uses it to create a volume, the Block Storage (cinder) service requests the key manager to create a key to be associated with that volume. When the volume is attached to an instance, nova retrieves the key.

For more details, see the data encryption section and volume encryption documentation.

#### Ephemeral disk encryption

The ephemeral disk encryption feature addresses data privacy. The ephemeral disk is a temporary work space used by the virtual host operating system. Without encryption, sensitive user information could be accessed on this disk, and vestigial information could remain after the disk is unmounted.

The ephemeral disk encryption feature interfaces with the key management service through a secure wrapper and supports data isolation by providing ephemeral disk encryption keys on a per-tenant basis. Back-end key storage is recommended for enhanced security (for example, an HSM or a KMIP server can be used as a barbican back-end secret store).

For more details, see the ephemeral disk encryption documentation.
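As a concrete illustration of the volume encryption use case described above, an operator might define a LUKS-encrypted volume type along the following lines; this is a hedged sketch, and the exact flag names and values should be confirmed against the CLI and cinder release in use:

```
$ openstack volume type create --encryption-provider luks \
    --encryption-cipher aes-xts-plain64 --encryption-key-size 256 \
    --encryption-control-location front-end LUKS
$ openstack volume create --size 10 --type LUKS encrypted_volume
```

When the volume is created, cinder asks the configured key manager (Castellan, typically backed by Barbican) for a key to associate with it; nova later retrieves that key when the volume is attached.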
#### Sahara

Sahara generates and stores several passwords during the course of operation. To harden Sahara's usage of passwords, it can be instructed to use an external key manager for the storage and retrieval of these secrets. To enable this feature, an OpenStack Key Manager service must first be deployed within the stack.

With a key manager service deployed on the stack, Sahara must then be configured to enable external storage of secrets. Sahara uses the Castellan library to interface with the OpenStack Key Manager service. This library provides configurable access to a key manager.

For more details, see the Sahara advanced configuration guide.

#### Magnum

Magnum uses TLS certificates to provide access to Docker Swarm or Kubernetes via the native clients (`docker` or `kubectl`, respectively). For certificate storage, using Barbican or the Magnum database (`x509keypair`) is recommended. A local directory (`local`) can also be used, but it is considered insecure and is not suitable for a production environment.

For more details on setting up the certificate manager for Magnum, see the Container Infrastructure Management service documentation.

#### Octavia/LBaaS

The LBaaS (Load Balancer as a Service) features of the Neutron and Octavia projects require certificates and their private keys to provide load balancing for TLS connections. Barbican can be used to store this sensitive information.

For more details, see How to create a TLS load balancer and Deploy a TLS-terminated HTTPS load balancer.

#### Swift

Symmetric keys can be used to encrypt Swift containers, to mitigate the risk of users' data being read if an unauthorized party were to gain physical access to a disk.

For more details, see the object encryption section of the official swift documentation.

#### Passwords in configuration files

The configuration files of OpenStack services contain numerous plain-text passwords. These include, for example, the passwords used by service users to authenticate to keystone in order to validate keystone tokens.

There is currently no solution for obfuscating these passwords; it is recommended to protect the files appropriately through file permissions. There is ongoing work to store these secrets in a Castellan back end and then have oslo.config use Castellan to retrieve them.
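Tying the Octavia/LBaaS use case above to Barbican, a TLS-terminated listener is typically created by storing the certificate bundle as a secret and referencing it from the listener. The sketch below is hedged: the names are placeholders, a PKCS#12 bundle is assumed to already exist as `server.p12`, and the exact CLI flags should be checked against the Octavia cookbook for your release:

```
# Store the (placeholder) certificate bundle in Barbican as a secret.
$ openstack secret store --name='tls_secret1' \
    -t 'application/octet-stream' -e 'base64' \
    --payload="$(base64 < server.p12)"

# Reference the secret when creating the TLS-terminated listener.
$ openstack loadbalancer listener create --name listener1 \
    --protocol TERMINATED_HTTPS --protocol-port 443 \
    --default-tls-container-ref <secret href from the previous command> \
    lb1
```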
### Barbican

#### Overview

Barbican is a REST API designed for the secure storage, provisioning, and management of secrets such as passwords, encryption keys, and X.509 certificates. It is aimed at being useful for all environments, including large ephemeral clouds.

Barbican is integrated with several OpenStack features, either directly or as a back end for Castellan. Barbican is often used as a key management system to enable use cases such as image signature verification and volume encryption. These are outlined in the Use cases section above.

#### Role-based access control

To be determined.

#### Secret store back ends

The Key Manager service has a plugin architecture that allows the deployer to store secrets in one or more secret stores. Secret stores can be software-based (such as a software token) or hardware devices (such as a Hardware Security Module, HSM). This section describes the plugins that are currently available and discusses the security posture of each one. Plugins are enabled and configured with settings in the `/etc/barbican/barbican.conf` configuration file.

There are two types of plugins: crypto plugins and secret store plugins.

#### Crypto plugins

Crypto plugins store secrets as encrypted blobs within the Barbican database. The plugin is invoked to encrypt the secret when it is stored and to decrypt it when it is retrieved. Two crypto plugins are currently available: the Simple Crypto plugin and the PKCS#11 crypto plugin.

##### Simple Crypto plugin

The Simple Crypto plugin is configured in `barbican.conf` by default. This plugin uses a single symmetric key (the KEK, or "key encryption key"), stored in plain text in the `barbican.conf` file, to encrypt and decrypt all secrets. This plugin is considered a less secure option, suitable only for development and testing, because the master key is stored in plain text in the configuration file; it is not recommended for production deployments.
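For development and testing only, a hedged `barbican.conf` fragment enabling the Simple Crypto plugin might look like the following; the `kek` value is a placeholder, and the fact that it sits in plain text in this file is exactly why the plugin is unsuitable for production:

```
# /etc/barbican/barbican.conf (illustrative fragment, dev/test only)
[secretstore]
enabled_secretstore_plugins = store_crypto

[crypto]
enabled_crypto_plugins = simple_crypto

[simple_crypto_plugin]
# Placeholder base64 key; the master KEK is stored in plain text here.
kek = 'PLACEHOLDER_BASE64_32_BYTE_KEY='
```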
##### PKCS#11 crypto plugin

The PKCS#11 crypto plugin can be used to interface with a Hardware Security Module (HSM) using the PKCS#11 protocol. Secrets are encrypted (and decrypted on retrieval) by a project-specific key encryption key (KEK). Each KEK is protected (encrypted) by a master KEK (MKEK), and the MKEK resides in the HSM together with an HMAC key. Because each project uses a different KEK, and because the KEKs are stored in the database in encrypted form (rather than in plain text in a configuration file), the PKCS#11 plugin is much more secure than the Simple Crypto plugin. It is the most popular back end for Barbican deployments.

#### Secret store plugins

Secret store plugins interface with secure storage systems to store the secrets within those systems. There are three types of secret store plugins: the KMIP plugin, the Dogtag plugin, and the Vault plugin.

##### KMIP plugin

The Key Management Interoperability Protocol (KMIP) secret store plugin is used to communicate with a KMIP-enabled device, such as a Hardware Security Module (HSM). The secret is stored securely in the KMIP-enabled device directly, rather than in the Barbican database. The Barbican database maintains a reference to the secret's location for later retrieval. The plugin can be configured to authenticate to the KMIP-enabled device using either a username and password or a client certificate. This information is stored in the Barbican configuration file.

##### Dogtag plugin

The Dogtag secret store plugin is used to communicate with Dogtag. Dogtag is the upstream project corresponding to the Red Hat Certificate System, a Common Criteria/FIPS certified PKI solution that contains a Certificate Manager (CA) and a Key Recovery Authority (KRA), which is used to securely store secrets. The KRA stores secrets as encrypted blobs in its internal database, with the master encryption keys stored either in a software-based NSS security database or in a Hardware Security Module (HSM). The software-based NSS database configuration provides a secure option for deployments that do not wish to use an HSM. The KRA is a component of FreeIPA, so it is possible to configure the plugin with a FreeIPA server. A more detailed description of how to set up Barbican with FreeIPA is provided in the referenced blog post.

##### Vault plugin

Vault is a secret store developed by Hashicorp for securely accessing secrets and other objects, such as API
keys, passwords, or certificates. Vault provides a unified interface to any secret, while providing tight access control and recording a detailed audit log. Vault Enterprise additionally allows integration with an HSM for auto-unsealing, FIPS-certified key storage, and entropy augmentation. A drawback of the Vault plugin, however, is that it does not support multi-tenancy, so all secrets are stored under the same key/value secret engine mount point.

#### Threat analysis

The Barbican team worked with the OpenStack Security Project to conduct a security review of a best-practice Barbican deployment. The purpose of a security review is to identify weaknesses and defects in the design and architecture of a service and to suggest controls or fixes to address them.

The Barbican threat analysis identified eight security findings and two recommendations to improve the security of a Barbican deployment. These results can be viewed in the security analysis repository, along with the Barbican architecture diagram and architecture description pages.

### Castellan

#### Overview

Castellan is a generic key manager interface developed by the Barbican team. It enables projects to use a configurable key manager that can be deployment-specific.
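To make the interface concrete, the sketch below shows roughly how a service stores and retrieves a secret through Castellan. It is a hedged example based on the library's documented usage: the token and project values are placeholders, and the configured back end (typically Barbican) performs the actual storage, since Castellan itself only passes the credential through.

```python
# Hedged sketch of the Castellan key manager interface (not a complete service).
from castellan.common.objects import passphrase
from castellan import key_manager
from oslo_context import context

# Placeholder credentials; Castellan performs no authentication of its own,
# it forwards this context to the configured back end (for example, Barbican).
request_context = context.RequestContext(auth_token='<keystone token>',
                                         project_id='<project id>')

manager = key_manager.API()  # resolves to the configured key manager back end

secret = passphrase.Passphrase('super_secret_password')
secret_ref = manager.store(request_context, secret)   # store, get back a reference
retrieved = manager.get(request_context, secret_ref)  # later retrieval by reference
```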
### Frequently asked questions

1. What is the recommended way to store secrets securely in OpenStack?

The recommended way to store and manage secrets securely in OpenStack is to use Barbican.

2. Why should I use Barbican?

Barbican is an OpenStack service that supports multi-tenancy and authenticates using keystone tokens. This means that access to secrets is controlled through OpenStack policy by tenant and RBAC role. Barbican has multiple pluggable back ends that can communicate with software- and hardware-based security modules using PKCS#11 or KMIP.

3. What if I do not want to use Barbican?

In an OpenStack context, there are two types of secrets to be managed: secrets that require a keystone token in order to be accessed, and secrets that do not.

An example of secrets that require keystone authentication are passwords and keys owned by a particular project, for instance the encryption keys for a project's encrypted cinder volumes or the signing keys for a project's glance images.

Examples of secrets that do not require a keystone token include the passwords for service users in service configuration files, or encryption keys that do not belong to any particular project.

Secrets that require a keystone token should be stored using Barbican. Secrets that do not require keystone authentication can be stored in any secret store that implements the simple secret store API exposed through Castellan. This includes Barbican.

4. How do I use Vault, Keywhiz, Custodia, and so on?

The key manager of your choice can be used if a Castellan plugin has been written for it. Once such a plugin exists, using it directly or behind Barbican is relatively trivial. Currently, Vault and Custodia plugins are being developed for the Queens cycle.

### Checklist

#### Check-Key-Manager-01: Is ownership of config files set to root/barbican?

Configuration files contain critical parameters and information required for smooth functioning of the component. If an unprivileged user, either intentionally or accidentally, modifies or deletes any of the parameters or the file itself, it would cause severe availability issues, resulting in a denial of service to the other end users. Thus, user ownership of such critical configuration files must be set to root and group ownership must be set to barbican. Additionally, the containing directory should have the same ownership to ensure that new files are owned correctly.

Run the following commands:

```
$ stat -L -c "%U %G" /etc/barbican/barbican.conf | egrep "root barbican"
$ stat -L -c "%U %G" /etc/barbican/barbican-api-paste.ini | egrep "root barbican"
$ stat -L -c "%U %G" /etc/barbican/policy.json | egrep "root barbican"
$ stat -L -c "%U %G" /etc/barbican | egrep "root barbican"
```

Pass: if user and group ownership of all these configuration files is set to root and barbican respectively. The commands above print "root barbican" for each file.
Fail: if the above commands return no output, as the user or group ownership may have been set to a user other than root or a group other than barbican.

#### Check-Key-Manager-02: Are strict permissions set for configuration files?

Similar to the previous check, we recommend setting strict access permissions for such configuration files.

Run the following commands:

```
$ stat -L -c "%a" /etc/barbican/barbican.conf
$ stat -L -c "%a" /etc/barbican/barbican-api-paste.ini
$ stat -L -c "%a" /etc/barbican/policy.json
$ stat -L -c "%a" /etc/barbican
```

A broader restriction is also possible: if the containing directory is set to 750, files newly created in that directory are guaranteed to have the desired permissions.

Pass: if permissions are set to 640 or stricter, or the containing directory is set to 750. Permissions of 640 translate to owner r/w, group r, and no permissions for others, that is "u=rw, g=r, o=".

Note: With Check-Key-Manager-01 satisfied and permissions set to 640, root has read/write access and barbican has read access to these configuration files. The access rights can also be validated using the following command, which is only available on your system if it supports ACLs:

```
$ getfacl --tabular -a /etc/barbican/barbican.conf
getfacl: Removing leading '/' from absolute path names
# file: etc/barbican/barbican.conf
USER   root       rw-
GROUP  barbican   r--
mask              r--
other             ---
```

Fail: if permissions are set more permissively than 640.

#### Check-Key-Manager-03: Is OpenStack Identity used for authentication?

OpenStack supports various authentication strategies such as noauth and keystone. If the noauth strategy is used, users can interact with OpenStack services without any authentication. This is a potential risk, since an attacker might gain unauthorized access to the OpenStack components. We strongly recommend that all services authenticate with keystone using their service accounts.

Pass: if the `authtoken` filter is listed in the `[pipeline:barbican-api-keystone]` section of `barbican-api-paste.ini`.

Fail: if `authtoken` is missing from the `[pipeline:barbican-api-keystone]` section of `barbican-api-paste.ini`.
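A hedged illustration of a passing paste pipeline follows; the exact set of neighboring filters varies between releases, the key point being that `authtoken` appears in the keystone pipeline:

```
# /etc/barbican/barbican-api-paste.ini (illustrative fragment)
[pipeline:barbican-api-keystone]
pipeline = cors authtoken context apiapp
```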
#### Check-Key-Manager-04: Is TLS enabled for authentication?

OpenStack components communicate with each other using various protocols, and the communication may involve sensitive or confidential data. An attacker may try to eavesdrop on the channel in order to access sensitive information. All components must therefore communicate with each other using a secure communication protocol.

Pass: if the `www_authenticate_uri` parameter under the `[keystone_authtoken]` section in `/etc/barbican/barbican.conf` is set to an Identity API endpoint that starts with `https://`, and the `insecure` parameter under the same `[keystone_authtoken]` section is set to `False`.

Fail: if the `www_authenticate_uri` parameter under the `[keystone_authtoken]` section in `/etc/barbican/barbican.conf` is not set to an Identity API endpoint starting with `https://`, or the `insecure` parameter under the same `[keystone_authtoken]` section is set to `True`.

## Message queuing

Message queuing services facilitate inter-process communication in OpenStack. OpenStack supports the following message queuing service back ends:

- RabbitMQ
- Qpid
- ZeroMQ or 0MQ

Both RabbitMQ and Qpid are Advanced Message Queuing Protocol (AMQP) frameworks that provide message queues for peer-to-peer communication. Queue implementations are typically deployed as a centralized or decentralized pool of queue servers. ZeroMQ provides direct peer-to-peer communication through TCP sockets.

Message queues effectively facilitate command and control functions across OpenStack deployments. Once access to the queue is permitted, no further authorization checks are performed. Services accessible through the queue do validate the contexts and tokens within the actual message payload. However, you must note the expiration date of tokens, because tokens are potentially re-playable and can authorize other services in the infrastructure.

OpenStack does not support message-level security such as message signing. Consequently, you must secure and authenticate the message transport itself. For high-availability (HA) configurations, you must perform queue-to-queue authentication and encryption.

With ZeroMQ messaging, IPC sockets are used on individual machines. Because these sockets are vulnerable to attack, ensure that the cloud operator has secured them.

This section covers:

- Messaging security
- Messaging transport security
- Queue authentication and access control
- Message queue process isolation and policies
### Message queues

Message queuing services facilitate inter-process communication in OpenStack. OpenStack supports the following message queuing service back ends:

- RabbitMQ
- Qpid
- ZeroMQ (0MQ)

Both RabbitMQ and Qpid are Advanced Message Queuing Protocol (AMQP) frameworks which provide message queues for peer-to-peer communication. Queue implementations are typically deployed as a centralized or decentralized pool of queue servers. ZeroMQ provides direct peer-to-peer communication through TCP sockets.

Message queues effectively facilitate command and control functions across OpenStack deployments. Once access to the queue is permitted, no further authorization checks are performed. Services accessible through the queue do validate the contexts and tokens within the actual message payload; however, you must pay attention to the token's expiration date, because tokens are potentially replayable and can authorize other services in the infrastructure.

OpenStack does not support message-level security such as message signing. Consequently, you must secure and authenticate the message transport itself. For high-availability (HA) configurations, you must perform queue-to-queue authentication and encryption.

With ZeroMQ messaging, IPC sockets are used on individual machines. Because these sockets are vulnerable to attack, ensure that the cloud operator has secured them.

The following aspects are covered below:

- Messaging security
- Messaging transport security
- Queue authentication and access control
- Message queue process isolation and policy

#### Messaging security

This section discusses security hardening approaches for the three most common message queuing solutions used in OpenStack: RabbitMQ, Qpid, and ZeroMQ.

#### Messaging transport security

AMQP-based solutions (Qpid and RabbitMQ) support transport-level security using TLS. ZeroMQ messaging does not natively support TLS, but transport-level security is possible using labelled IPsec or CIPSO network labels.

We highly recommend enabling transport-level encryption for your message queue. Using TLS for the messaging client connections protects the communications from tampering and eavesdropping in transit to the messaging server. Below is guidance on how TLS is typically configured for the two popular messaging servers, Qpid and RabbitMQ. When configuring the trusted certificate authority (CA) bundle that the messaging server uses to verify client connections, it is recommended that this be limited to only the CA used for your nodes, preferably an internally managed CA. The trusted CA bundle determines which client certificates will be authorized and pass the client-server verification step when the TLS connection is set up. Note that when installing the certificate and key files, ensure that the file permissions are restricted, for example using chmod 0600, and that ownership is restricted to the messaging server daemon user, to prevent unauthorized access by other processes and users on the messaging server.

#### RabbitMQ server SSL configuration

The following lines should be added to the system-wide RabbitMQ configuration file, typically /etc/rabbitmq/rabbitmq.config:

```erlang
[
  {rabbit, [
     {tcp_listeners, [] },
     {ssl_listeners, [{"<ip address of the management network interface>", 5671}] },
     {ssl_options, [{cacertfile,"/etc/ssl/cacert.pem"},
                    {certfile,"/etc/ssl/rabbit-server-cert.pem"},
                    {keyfile,"/etc/ssl/rabbit-server-key.pem"},
                    {verify,verify_peer},
                    {fail_if_no_peer_cert,true}]}
   ]}
].
```
Note that the tcp_listeners option is set to [] to prevent the server from listening on a non-SSL port. The ssl_listeners option should be restricted to listen only on the management network.

For more information on RabbitMQ SSL configuration see:

- RabbitMQ Configuration
- RabbitMQ SSL

#### Qpid server SSL configuration

The Apache Foundation provides a messaging security guide for Qpid. See:

- Apache Qpid SSL

#### Queue authentication and access control

RabbitMQ and Qpid offer authentication and access control mechanisms for controlling access to queues. ZeroMQ offers no such mechanisms.

Simple Authentication and Security Layer (SASL) is a framework for authentication and data security in Internet protocols. Both RabbitMQ and Qpid offer SASL and other pluggable authentication mechanisms beyond simple user names and passwords, which allow for increased authentication security. While RabbitMQ supports SASL, support in OpenStack does not currently allow requesting a specific SASL authentication mechanism. RabbitMQ support in OpenStack allows either user name and password authentication over an unencrypted connection, or user name and password in conjunction with X.509 client certificates to establish a secure TLS connection.

We recommend configuring X.509 client certificates on all OpenStack service nodes for client connections to the message queue, and where possible (currently only Qpid) performing authentication with X.509 client certificates. When using user names and passwords, accounts should be created per service and per node so that access to the queue can be audited at a finer granularity.

Before deployment, consider the TLS libraries that the queuing servers use. Qpid uses Mozilla's NSS library, whereas RabbitMQ uses Erlang's TLS module, which in turn uses OpenSSL.

#### Authentication configuration example: RabbitMQ

On the RabbitMQ server, delete the default guest user:

```console
# rabbitmqctl delete_user guest
```

On the RabbitMQ server, for each OpenStack service or node that communicates with the message queue, set up a user account and permissions:

```console
# rabbitmqctl add_user compute01 RABBIT_PASS
# rabbitmqctl set_permissions compute01 ".*" ".*" ".*"
```

Replace RABBIT_PASS with a suitable password.
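A quick way to confirm that such a per-service account can only reach the broker over TLS is to attempt a connection from the node. The following is a minimal sketch using the pika client library as an illustration (pika is not referenced by this guide); the certificate paths and the RABBIT_PASS placeholder mirror the examples in this section, and the broker address is a placeholder.

```python
# Minimal sketch (assumes the pika 1.x client): connect to the broker on the TLS
# port 5671 as the per-service account created above, presenting the node's
# client certificate. Host name and file paths are placeholders.
import ssl
import pika

context = ssl.create_default_context(cafile="/etc/ssl/cacert.pem")
context.load_cert_chain("/etc/ssl/node-cert.pem", "/etc/ssl/node-key.pem")

params = pika.ConnectionParameters(
    host="rabbit.management.example.net",      # broker address on the management network
    port=5671,                                 # SSL listener configured above
    credentials=pika.PlainCredentials("compute01", "RABBIT_PASS"),
    ssl_options=pika.SSLOptions(context),
)

connection = pika.BlockingConnection(params)   # raises if TLS or authentication fails
print("TLS connection to the message queue succeeded")
connection.close()
```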
For additional configuration information see:

- RabbitMQ Access Control
- RabbitMQ Authentication
- RabbitMQ Plugins
- RabbitMQ SASL External Auth

#### OpenStack service configuration: RabbitMQ

```ini
[DEFAULT]
rpc_backend = nova.openstack.common.rpc.impl_kombu
rabbit_use_ssl = True
rabbit_host = RABBIT_HOST
rabbit_port = 5671
rabbit_user = compute01
rabbit_password = RABBIT_PASS
kombu_ssl_keyfile = /etc/ssl/node-key.pem
kombu_ssl_certfile = /etc/ssl/node-cert.pem
kombu_ssl_ca_certs = /etc/ssl/cacert.pem
```

#### Authentication configuration example: Qpid

For configuration information see:

- Apache Qpid Authentication
- Apache Qpid Authorization

#### OpenStack service configuration: Qpid

```ini
[DEFAULT]
rpc_backend = nova.openstack.common.rpc.impl_qpid
qpid_protocol = ssl
qpid_hostname = <hostname or IP of the messaging server>
qpid_port = 5671
qpid_username = compute01
qpid_password = QPID_PASS
```

Optionally, if using SASL with Qpid, specify the SASL mechanisms in use by adding:

```ini
qpid_sasl_mechanisms = <space-separated list of SASL mechanisms>
```

#### Message queue process isolation and policy

Each project provides a number of services which send and consume messages. Each binary that sends messages to the queue is also expected to consume messages from it, if only to receive replies.

Message queue service processes should be isolated from each other and from other processes on a machine.

#### Namespaces

We strongly recommend network namespaces for all services running on OpenStack Compute hypervisors. This helps prevent bridging of network traffic between VM guests and the management network.

When using ZeroMQ messaging, each host must run at least one ZeroMQ message receiver to receive messages from the network and forward them to local processes through IPC. It is possible and advisable to run an independent message receiver per project within an IPC namespace, along with the other services of the same project.

#### Network policy

Queue servers should only accept connections from the management network. This applies to all implementations. It should be implemented through service configuration and may optionally be enforced through global network policy.

When using ZeroMQ messaging, each project should run a separate ZeroMQ receiver process on a port dedicated to services belonging to that project. This is equivalent to the AMQP concept of control exchanges.
#### Mandatory access controls

Use mandatory access controls (MAC) in addition to discretionary access controls (DAC) to restrict the configuration for these processes to only those processes. This restriction keeps the message queue processes isolated from the other processes running on the same machine.

### Data processing

The Data processing service (sahara) provides a platform for provisioning and managing instance clusters using processing frameworks such as Hadoop and Spark. Through the OpenStack Dashboard or the REST API, users are able to upload and execute framework applications which may access data in object storage or external providers. The data processing controller uses the Orchestration service (heat) to create clusters of instances, which may exist as long-running groups that can grow and shrink on request, or as transient groups created for a single workload.

The following topics are covered below:

- Introduction to the Data processing service
- Architecture
- Technologies involved
- User access to resources
- Deployment
- Controller network access to clusters
- Configuration and hardening
- TLS
- Role-based access control policies
- Security groups
- Proxy domains
- Custom network topologies
- Indirect access
- Rootwrap
- Logging
- Bibliography

#### Introduction to the Data processing service

The data processing service controller is responsible for creating, maintaining, and destroying any instances created for its clusters. The controller uses the Networking service to establish network paths between itself and the cluster instances. It also manages the deployment and life cycle of user applications that are to be run on the clusters. The instances within a cluster contain the core of a framework's processing engine, and the data processing service provides several options for creating and managing the connections to these instances.

Data processing resources (clusters, jobs, and data sources) are segregated by projects as defined in the Identity service. These resources are shared within a project, and it is important to understand the access needs of those who use the service. Activities within projects (for example, launching clusters, uploading jobs, and so on) can be restricted further through the use of role-based access controls.
In this chapter we discuss how to evaluate the needs of data processing users with respect to their applications, the data they use, and their expected capabilities within a project. We also demonstrate a number of hardening techniques for the service controller and its clusters, and provide examples of various controller configurations and user management approaches to ensure an adequate level of security and privacy.

#### Architecture

(Figure: conceptual view of how the Data processing service fits into the greater OpenStack ecosystem.)

The Data processing service makes heavy use of the Compute, Orchestration, Image, and Block Storage services during cluster provisioning. It also uses one or more networks, created by the Networking service and provided during cluster creation, to manage the instances. While users run framework applications, the controller and the clusters access the Object Storage service. Given these service usages, we recommend cataloging all the components of an installation, following the instructions in the system documentation.

#### Technologies involved

The data processing service is responsible for the deployment and management of several applications. For a complete understanding of the security options provided, we recommend that operators have a general familiarity with these applications. The list of highlighted technologies is split into two sections: first, high-priority applications with a larger impact on security, and second, supporting applications with a smaller impact.

Higher impact:

- Hadoop (see the Hadoop secure mode documentation)
- HDFS
- Spark (see Spark Security)
- Storm
- Zookeeper

Lower impact:

- Oozie
- Hive
- Pig

These technologies comprise the core of the frameworks that are deployed with the Data processing service. In addition to these technologies, the service also includes bundled frameworks provided by third-party vendors. These bundles are built from the same core pieces described above plus configurations and applications that the vendors include. For more information on the third-party framework bundles, please see the following links:

- Cloudera CDH
- Hortonworks Data Platform
- MapR

#### User access to resources
The resources of the Data processing service (clusters, jobs, and data sources) are shared within the scope of a project. Although a single controller installation may manage several sets of resources, these resources are each scoped to a single project. Given this constraint, we recommend that user membership in projects is monitored closely to maintain proper segregation of resources.

Because the security requirements of organizations deploying this service will vary based on their specific needs, we recommend that operators focus on data privacy, cluster management, and end-user applications as a starting point for evaluating the needs of their users. These decisions will help guide the process of configuring user access to the service. For an expanded discussion on data privacy, see Tenant data privacy.

The default assumption for a data processing installation is that users have access to all functionality within their project. If more granular control is required, the Data processing service provides a policy file (as described in Policies). These configurations are highly dependent on the needs of the installing organization, so there is no general advice on their usage; for details see Role-based access control policies below.

#### Deployment

Like many other OpenStack services, the Data processing service is deployed as an application running on a host connected to the stack. As of the Kilo release, it can be deployed in a distributed manner with multiple redundant controllers. Like other services, it also requires a database to store information about its resources; see Databases. Be aware that the data processing controller needs to manage several Identity service trusts, communicates directly with the Orchestration and Networking services, and may create users in a proxy domain. For these reasons the controller requires access to the control plane, and we recommend installing it alongside the other service controllers.

Data processing interacts directly with several OpenStack services:

- Compute
- Identity
- Networking
- Object Storage
- Orchestration
- Block Storage (optional)
We recommend documenting all the data flows and bridging points between these services and the data processing controller; see the system documentation.

The Data processing service uses the Object Storage service to store job binaries and data sources. Users who wish to access the full data processing functionality need to store objects in the project they are using.

The Networking service plays an important role in the provisioning of clusters. Prior to provisioning, the user is expected to provide one or more networks for the cluster instances. Associating networks is similar to the process of assigning networks when launching instances through the dashboard. The controller uses these networks for management access to the instances and frameworks of its clusters.

Also of note is the Identity service. Users of the Data processing service need appropriate roles in their projects to allow the provisioning of instances for their clusters. Installations that use the proxy domain configuration require special attention; see Proxy domains. Specifically, the Data processing service needs the ability to create users within the proxy domain.

#### Controller network access to clusters

One of the primary tasks of the data processing controller is to communicate with the instances it spawns. These instances are provisioned and then configured depending on the framework being used. Communication between the controller and the instances uses the secure shell (SSH) and HTTP protocols.

When provisioning a cluster, each instance is given an IP address in the networks provided by the user. The first network is often referred to as the data processing management network, and instances can use the fixed IP address assigned by the Networking service on this network. The controller can also be configured to use floating IP addresses for the instances in addition to their fixed addresses. When communicating with the instances, the controller prefers the floating address if it is enabled.

For situations where fixed or floating IP addresses cannot provide the required functionality, the controller can provide access through two alternative methods: custom network topologies and indirect access. The custom network topologies feature allows the controller to reach the instances through a shell command specified in the configuration file.
Indirect access is used to designate instances that users may use as proxy gateways during cluster provisioning. These options are discussed with usage examples under Configuration and hardening below.

#### Configuration and hardening

Several configuration options and deployment strategies can improve the security of the Data processing service. The service controller is configured through a main configuration file and one or more policy files. Installations that use the data-locality features also have two additional files to specify the physical location of Compute and Object Storage nodes.

#### TLS

The data processing service controller, like many other OpenStack controllers, can be configured to require TLS connections.

Pre-Kilo releases require a TLS proxy, as the controller does not allow direct TLS connections. Configuring TLS proxies is covered in TLS proxies and HTTP services, and we recommend following the advice there for creating this type of installation.

From the Kilo release onward, the data processing controller allows direct TLS connections, and we recommend this approach. Enabling this behavior requires some small adjustments to the controller configuration file.

Example. Configuring TLS access to the controller:

```ini
[ssl]
ca_file = cafile.pem
cert_file = certfile.crt
key_file = keyfile.key
```

#### Role-based access control policies

The Data processing service uses a policy file, as described in Policies, to configure role-based access controls. Using the policy file, an operator can restrict a group's access to specific data processing functionality.

The reasons for doing this will change depending on the organizational requirements of the installation. In general, these fine-grained controls are used when an operator needs to restrict the creation, deletion, and retrieval of Data processing service resources. Operators who need to restrict access within a project should be fully aware that there must be alternative means for their users to gain access to the core functionality of the service (for example, provisioning clusters).

Example. Allowing all methods to all users (the default policy):

```json
{
    "default": ""
}
```

Example. Disallowing image registry manipulations to non-admin users:

```json
{
    "default": "",
    "data-processing:images:register": "role:admin",
    "data-processing:images:unregister": "role:admin",
    "data-processing:images:add_tags": "role:admin",
    "data-processing:images:remove_tags": "role:admin"
}
```

#### Security groups

The Data processing service allows security groups to be associated with the instances provisioned for its clusters. With no additional configuration, the service uses the default security group for any project that provisions clusters. A different security group may be used if requested, and there is also an automated option that instructs the service to create a security group based on the ports specified by the framework being accessed.

For production environments we recommend controlling the security groups manually and creating a set of group rules that is appropriate for the installation. In this manner the operator can ensure that the default security group contains all the appropriate rules. For an expanded discussion of security groups, please see Security groups.

#### Proxy domains

When using the Object Storage service in conjunction with data processing, it is necessary to add credentials for store access. With proxy domains, the Data processing service can instead use a delegated trust from the Identity service to allow store access via a temporary user created in the domain. For this delegation mechanism to work, the Data processing service must be configured to use proxy domains, and the operator must configure an identity domain for the proxy users.

The data processing controller retains temporary storage of the user name and password provided for object store access. When using proxy domains, the controller generates this pair for the proxy user, and the user's access is limited to that of the identity trust. We recommend using proxy domains on any installation where the controller, or its database, has routes to or from public networks.

Example. Configuring a proxy domain named "dp_proxy":

```ini
[DEFAULT]
use_domain_for_proxy_users = true
proxy_user_domain_name = dp_proxy
proxy_user_role_names = Member
```

#### Custom network topologies

The data processing controller can be configured to use proxy commands for accessing its cluster instances. In this manner, custom network topologies can be created for installations that will not use the networks provided directly by the Networking service. We recommend this option for installations that need to restrict access between the controller and the instances.
Example. Accessing instances through a specified relay machine:

```ini
[DEFAULT]
proxy_command='ssh relay-machine-{tenant_id} nc {host} {port}'
```

Example. Accessing instances through a custom network namespace:

```ini
[DEFAULT]
proxy_command='ip netns exec ns_for_{network_id} nc {host} {port}'
```

#### Indirect access

For installations in which the controller has limited access to all the instances of a cluster, due to limits on floating IP addresses or security rules, indirect access may be configured. This allows some instances to be designated as proxy gateways to the other instances of the cluster.

This configuration can only be enabled while defining the node group templates that will make up the data processing clusters. It is provided as a run-time option to be enabled during the cluster provisioning process.

#### Rootwrap

When creating custom topologies for network access, it may be necessary to allow non-root users to run the proxy commands. For these situations, the oslo rootwrap package is used to provide a facility for non-root users to run privileged commands. This configuration requires the user associated with the data processing controller application to be in the sudoers list, and for the option to be enabled in the configuration file. Optionally, an alternative rootwrap command can be provided.

Example. Enabling rootwrap usage, showing the default command:

```ini
[DEFAULT]
use_rootwrap=True
rootwrap_command='sudo sahara-rootwrap /etc/sahara/rootwrap.conf'
```

For more information on the rootwrap project, please see the official documentation: https://wiki.openstack.org/wiki/Rootwrap

#### Logging

Monitoring the output of the service controller is a powerful forensic tool, as described more thoroughly in Monitoring and logging. The data processing service controller offers a few options for setting the location and level of logging.

Example. Setting the log level higher than warning and specifying an output file:

```ini
[DEFAULT]
verbose = true
log_file = /var/log/data-processing.log
```

#### Bibliography

- OpenStack.org, Welcome to Sahara!. 2016. Sahara project documentation
- The Apache Software Foundation, Welcome to Apache Hadoop!. 2016. Apache Hadoop project
- The Apache Software Foundation, Hadoop in Secure Mode. 2016. Hadoop secure mode documentation
- The Apache Software Foundation, HDFS User Guide. 2016. Hadoop HDFS documentation
- The Apache Software Foundation, Spark. 2016. Spark project
- The Apache Software Foundation, Spark Security. 2016. Spark security documentation
- The Apache Software Foundation, Apache Storm. 2016. Storm project
- The Apache Software Foundation, Apache Zookeeper. 2016. Zookeeper project
- The Apache Software Foundation, Apache Oozie Workflow Scheduler for Hadoop. 2016. Oozie project
- The Apache Software Foundation, Apache Hive. 2016. Hive
- The Apache Software Foundation, Welcome to Apache Pig!. 2016. Pig
- Cloudera, Cloudera Product Documentation. 2016. Cloudera CDH documentation
- Hortonworks, Hortonworks. 2016. Hortonworks Data Platform documentation
- MapR Technologies, Apache Hadoop for the MapR Converged Data Platform. 2016. MapR project

### Databases

The choice of database server is an important consideration in the security of an OpenStack deployment. Multiple factors should be considered when deciding on a database server; however, for the scope of this book only security considerations are discussed. OpenStack supports a variety of database types; see the OpenStack Administrator Guide for more information.

The Security Guide currently focuses on PostgreSQL and MySQL.

The following topics are covered below:

- Database backend considerations
- Security references for database backends
- Database access control
- OpenStack database access model
- Database authentication and access control
- Requiring user accounts to use SSL transport
- Authentication with X.509 certificates
- OpenStack service database configuration
- Nova-conductor
- Database transport security
- Database server IP address binding
- Database transport
- MySQL SSL configuration
- PostgreSQL SSL configuration

#### Database backend considerations

PostgreSQL has a number of desirable security features, such as Kerberos authentication, object-level security, and encryption support. The PostgreSQL community has done well to provide solid guidance, documentation, and tooling to promote positive security practices.

MySQL has a large community, widespread adoption, and provides high-availability options. MySQL can also provide enhanced client authentication by way of plug-in authentication mechanisms. Forked distributions in the MySQL community provide many options for consideration. It is important to choose a specific implementation of MySQL based on a thorough evaluation of its security posture and the level of support provided for the given distribution.

#### Security references for database backends

Those deploying MySQL or PostgreSQL are advised to refer to existing security guidance. Some references are listed below:
MySQL:

- OWASP MySQL Hardening
- MySQL Pluggable Authentication
- Security in MySQL

PostgreSQL:

- OWASP PostgreSQL Hardening
- Total security in a PostgreSQL database

#### Database access control

Each of the core OpenStack services (Compute, Identity, Networking, Block Storage) stores state and configuration information in databases. In this chapter, we discuss how databases are used currently in OpenStack. We also explore security concerns and the security ramifications of database backend choices.

#### OpenStack database access model

All of the services within an OpenStack project access a single database. There are presently no reference policies for creating table- or row-based access restrictions to the database.

There are no general provisions for granular control of database operations in OpenStack. Access and privileges are granted simply based on whether a node has access to the database or not. In this scenario, nodes with access to the database may have full privileges to DROP, INSERT, or UPDATE data.

#### Granular access control

By default, each of the OpenStack services and their processes access the database using a shared set of credentials. This makes auditing database operations, and revoking access privileges from a service and its processes, particularly difficult.

#### Nova-conductor

The compute nodes are the least trusted of the services in OpenStack because they host tenant instances. The nova-conductor service was introduced to serve as a database proxy, acting as an intermediary between the compute nodes and the database. Its ramifications are discussed later in this chapter.

We strongly recommend:

- Isolating all database communications to a management network
- Securing communications using TLS
- Creating a unique database user account per OpenStack service endpoint
#### Database authentication and access control

Given the risks around access to the database, we strongly recommend that a unique database user account be created for every node needing access to the database. Doing this facilitates better analysis and auditing for ensuring compliance, and in the event of a compromise allows you to isolate the compromised host by removing that node's access to the database upon detection. When creating these per-service-endpoint database user accounts, take care to ensure that they are configured to require TLS. Alternatively, for increased security, it is recommended that the database accounts be configured using X.509 certificate authentication in addition to user names and passwords.

#### Privileges

A separate database administrator (DBA) account should be created and protected, with full privileges for creating and dropping databases, creating user accounts, and updating user privileges. This simple separation of responsibility helps prevent accidental misconfiguration, lowers risk, and limits the scope of compromise.

The database user accounts created for the OpenStack services and for each node should have privileges limited to just the database relevant to the service to which the node belongs.

#### Requiring user accounts to use SSL transport

Configuration example #1 (MySQL):

```sql
GRANT ALL ON dbname.* TO 'compute01'@'hostname' IDENTIFIED BY 'NOVA_DBPASS' REQUIRE SSL;
```

Configuration example #2 (PostgreSQL), in pg_hba.conf:

```
hostssl dbname compute01 hostname md5
```

Note that this entry only adds the ability to communicate over SSL and is non-exclusive. Other access methods that may allow unencrypted transport should be disabled so that SSL is the sole access method.

The md5 parameter defines the authentication method as a hashed password. A secure authentication example is provided in the section below.

#### OpenStack service database configuration

If your database server is configured to require TLS transport, you need to specify the certificate authority information for use with the initial connection string in the SQLAlchemy query.

Example of an :sql_connection string for MySQL:

```ini
sql_connection = mysql://compute01:NOVA_DBPASS@localhost/nova?charset=utf8&ssl_ca=/etc/mysql/cacert.pem
```
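The same connection can be exercised from Python. Below is a minimal sketch using SQLAlchemy with the PyMySQL driver (an assumption, as the guide only shows the oslo connection string); the account, password, and CA path mirror the placeholder values from the example above.

```python
# Minimal sketch: open a TLS-protected connection to the Nova database with
# SQLAlchemy + PyMySQL. Host name, account, password, and CA path are the
# placeholder values from the example above, not real credentials.
from sqlalchemy import create_engine, text

engine = create_engine(
    "mysql+pymysql://compute01:NOVA_DBPASS@localhost/nova?charset=utf8",
    connect_args={"ssl": {"ca": "/etc/mysql/cacert.pem"}},  # handed to the driver's TLS layer
)

with engine.connect() as conn:
    # A trivial round trip proves that the TLS handshake and authentication succeeded.
    print(conn.execute(text("SELECT VERSION()")).scalar())
```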
#### Authentication with X.509 certificates

Security may be enhanced by requiring X.509 client certificates for authentication. Authenticating to the database in this manner provides greater identity assurance of the client making the connection to the database, and ensures that the communications are encrypted.

Configuration example #1 (MySQL):

```sql
GRANT ALL ON dbname.* TO 'compute01'@'hostname' IDENTIFIED BY 'NOVA_DBPASS'
  REQUIRE SUBJECT '/C=XX/ST=YYY/L=ZZZZ/O=cloudycloud/CN=compute01'
  AND ISSUER '/C=XX/ST=YYY/L=ZZZZ/O=cloudycloud/CN=cloud-ca';
```

Configuration example #2 (PostgreSQL):

```
hostssl dbname compute01 hostname cert
```

#### OpenStack service database configuration

If your database server is configured to require X.509 certificates for authentication, you need to specify the appropriate SQLAlchemy query parameters for the database backend. These parameters specify the certificate, private key, and certificate authority information to be used with the initial connection string.

Example of an :sql_connection string for X.509 certificate authentication to MySQL:

```ini
sql_connection = mysql://compute01:NOVA_DBPASS@localhost/nova?charset=utf8&ssl_ca=/etc/mysql/cacert.pem&ssl_cert=/etc/mysql/server-cert.pem&ssl_key=/etc/mysql/server-key.pem
```

#### Nova-conductor

OpenStack Compute offers a sub-service called nova-conductor which proxies database connections. Its primary purpose is to have the nova compute nodes interface with nova-conductor to meet data persistence needs, rather than communicating directly with the database.

Nova-conductor receives requests over RPC and performs actions on behalf of the calling service without granting granular access to the database, its tables, or the data within them. Nova-conductor essentially abstracts direct database access away from the compute nodes.

This abstraction offers the advantage of restricting services to executing methods with parameters, similar to stored procedures, preventing a large number of systems from directly accessing or modifying database data. This is accomplished without having these procedures stored or executed within the context or scope of the database itself, which is a frequent criticism of typical stored procedures.
Unfortunately, this solution complicates the task of more fine-grained access control and the ability to audit data access. Because the nova-conductor service receives requests over RPC, it highlights the importance of securing the messaging layer. Any node with access to the message queue may execute the methods provided by nova-conductor and effectively modify the database.

Note that because nova-conductor only applies to OpenStack Compute, direct database access from compute hosts may still be necessary for the operation of other OpenStack components such as Telemetry (ceilometer), Networking, and Block Storage.

To disable nova-conductor, place the following into your nova.conf file (on your compute hosts):

```ini
[conductor]
use_local = true
```

#### Database transport security

This section covers issues related to network communications to and from the database server. This includes IP address bindings and encrypting network traffic with TLS.

#### Database server IP address binding

To isolate sensitive database communications between the services and the database, we strongly recommend that the database server(s) be configured to only allow communications to and from the database over an isolated management network. This is achieved by restricting the interface or IP address on which the database server binds a network socket for incoming client connections.
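Once the bind address is restricted (see the configuration snippets that follow), a quick probe from another host can confirm that the listener is reachable only over the management network. Below is a minimal sketch; the two addresses and the port are hypothetical examples, not values from this guide.

```python
# Minimal sketch: confirm that the database port is reachable only via the
# management network. The addresses and the MySQL port are hypothetical examples.
import socket

def reachable(host, port=3306, timeout=3):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Expected result: True for the management address, False for the public address.
print("management:", reachable("192.0.2.10"))
print("public:    ", reachable("203.0.113.10"))
```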
Restricting the bind address for MySQL, in my.cnf:

```ini
[mysqld]
...
bind-address = <ip address or hostname of the management network interface>
```

Restricting the listen address for PostgreSQL, in postgresql.conf:

```ini
listen_addresses = <ip address or hostname of the management network interface>
```

#### Database transport

In addition to restricting database communications to the management network, we also strongly recommend that the cloud administrator configure the database backend to require TLS. Using TLS for the database client connections protects the communications from tampering and eavesdropping. As discussed in the previous section, using TLS also provides the framework for doing database user authentication through X.509 certificates (commonly referred to as PKI). Below is guidance on how TLS is typically configured for the two popular database backends, MySQL and PostgreSQL.

Note: When installing the certificate and key files, ensure that the file permissions are restricted, for example using `chmod 0600`, and that ownership is restricted to the database daemon user, to prevent unauthorized access by other processes and users on the database server.

#### MySQL SSL configuration

The following lines should be added to the system-wide MySQL configuration file, my.cnf:

```ini
[mysqld]
...
ssl-ca = /path/to/ssl/cacert.pem
ssl-cert = /path/to/ssl/server-cert.pem
ssl-key = /path/to/ssl/server-key.pem
```

Optionally, if you wish to restrict the set of SSL ciphers used for the encrypted connection, see the cipher documentation for the list of ciphers and the syntax for specifying the cipher string:

```ini
ssl-cipher = 'cipher:list'
```

#### PostgreSQL SSL configuration

The following lines should be added to the system-wide PostgreSQL configuration file, postgresql.conf:

```ini
ssl = true
```

Optionally, if you wish to restrict the set of SSL ciphers used for the encrypted connection, see the cipher documentation for the list of ciphers and the syntax for specifying the cipher string:

```ini
ssl-ciphers = 'cipher:list'
```

The server certificate, key, and certificate authority (CA) files should be placed in the $PGDATA directory as the following files:

- $PGDATA/server.crt - server certificate
- $PGDATA/server.key - private key corresponding to server.crt
- $PGDATA/root.crt - trusted certificate authorities
- $PGDATA/root.crl - certificate revocation list
### Tenant data privacy

OpenStack is designed to support multitenancy, and those tenants will most probably have different data requirements. As a cloud builder and operator, you need to ensure your OpenStack environment addresses data privacy concerns and regulations. In this chapter we discuss data residency and disposal as they pertain to OpenStack implementations.

The following topics are covered below:

- Data privacy concerns
- Data residency
- Data disposal
- Data encryption
- Volume encryption
- Ephemeral disk encryption
- Object Storage objects
- Block Storage performance and back ends
- Network data
- Key management
- Bibliography

#### Data privacy concerns

#### Data residency

The privacy and isolation of data has consistently been cited as the primary barrier to cloud adoption over the past few years. Concerns over who owns data in the cloud, and whether the cloud operator can ultimately be trusted as a custodian of this data, have been significant issues in the past.

Numerous OpenStack services maintain data and metadata belonging to tenants, or reference tenant information.

Tenant data stored in an OpenStack cloud may include the following items:

- Object Storage objects
- Compute instance ephemeral filesystem storage
- Compute instance memory
- Block Storage volume data
- Public keys for Compute access
- Virtual machine images in the Image service
- Machine snapshots
- Data passed to the OpenStack Compute configuration-drive extension

Metadata stored by an OpenStack cloud includes the following non-exhaustive items:

- Organization name
- User's "real name"
- Number or size of running instances, buckets, objects, volumes, and other quota-related items
- Number of hours running instances or storing data
- IP addresses of users
- Internally generated private keys for compute image bundling

#### Data disposal

OpenStack operators should strive to provide a certain level of tenant data disposal assurance. Best practices suggest that the operator sanitize cloud system media (digital and non-digital) prior to disposal, prior to release out of organizational control, or prior to release for reuse. Sanitization methods should implement an appropriate level of strength and integrity given the specific security domain and the sensitivity of the information.

"Sanitization is the process used to remove information from system media such that the information cannot be retrieved or reconstructed. Sanitization techniques, including clearing, purging, cryptographic erase, and destruction, prevent the disclosure of information to unauthorized individuals when such media is reused or released for disposal." (NIST Special Publication 800-53 Revision 4)
Cloud operators should adopt the general data disposal and sanitization guidelines employed in the NIST recommended security controls. Cloud operators should:

1. Track, document, and verify media sanitization and disposal actions.
2. Test sanitization equipment and procedures to verify proper performance.
3. Sanitize portable, removable storage devices prior to connecting such devices to the cloud infrastructure.
4. Destroy cloud system media that cannot be sanitized.

In an OpenStack deployment, you will need to address the following:

- Secure data erasure
- Instance memory scrubbing
- Block Storage volume data
- Compute instance ephemeral storage
- Bare metal server sanitization

#### Data not securely erased

Within OpenStack, some data may be deleted but not securely erased in the context of the NIST standards outlined above. This is generally applicable to most or all of the above-defined metadata and information stored in the database. It may be remediated with database and/or system configuration for auto vacuuming and periodic free-space wiping.

#### Instance memory scrubbing

The treatment of instance memory is specific to each hypervisor. This behavior is not defined in OpenStack Compute, although it is generally expected that hypervisors will make a best effort to scrub memory upon deletion of an instance, upon creation of an instance, or both.

Xen explicitly assigns dedicated memory regions to instances and scrubs the data upon destruction of the instance (or domain, in Xen parlance). KVM depends more greatly on Linux page management; a complex set of rules related to KVM paging is defined in the KVM documentation.

It is important to note that use of the Xen memory balloon feature is likely to result in information disclosure. We strongly recommend avoiding this feature.

For these and other hypervisors, we recommend referring to the hypervisor-specific documentation.
This feature is discussed below in the Data encryption section under Volume encryption. When it is used, data destruction is accomplished by securely deleting the encryption key. End users can select the feature when creating a volume, but note that an administrator must first perform a one-time setup of volume encryption; instructions for that setup are under Volume encryption in the Block Storage section of the Configuration Reference.

If the OpenStack volume encryption feature is not used, other approaches are generally harder to enable. If a back-end plug-in is used, there may be independent ways of doing encryption or non-standard overwrite solutions. Plug-ins to OpenStack Block Storage store data in a variety of ways: many are specific to a vendor or technology, while others are more DIY solutions built around filesystems such as LVM or ZFS. Methods for securely destroying data vary between plug-ins, between vendor solutions, and between filesystems.

Some back ends, such as ZFS, support copy-on-write to prevent data exposure; in these cases, reads from unwritten blocks always return zero. Other back ends, such as LVM, may not natively support this, in which case the Block Storage plug-in is responsible for overwriting previously written blocks before handing them to users. It is important to review which assurances your chosen volume back end provides, and which mitigations are available for the assurances it does not provide.

### Image service delayed delete

The OpenStack Image service has a delayed-delete feature, which pends the deletion of an image for a defined time period. If this is a security concern, it is recommended to disable the feature by editing the etc/glance/glance-api.conf file and setting the delayed_delete option to False.

### Compute soft delete

OpenStack Compute has a soft-delete feature, which keeps a deleted instance in a soft-deleted state for a defined time period; the instance can be restored during that period. To disable soft delete, edit the etc/nova/nova.conf file and leave the reclaim_instance_interval option empty. A minimal sketch of both options follows.
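As a minimal sketch of the two options just discussed (assuming the default configuration file paths; defaults may differ between releases):

```ini
# /etc/glance/glance-api.conf : disable the delayed-delete feature
[DEFAULT]
delayed_delete = False

# /etc/nova/nova.conf : a reclaim interval of 0 disables soft delete,
# so deleted instances are reclaimed immediately
[DEFAULT]
reclaim_instance_interval = 0
```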
### Compute instance ephemeral storage

Note that the OpenStack ephemeral disk encryption feature provides a means of improving ephemeral storage privacy and isolation, both during active use and when the data is destroyed. As with encrypted block storage, simply deleting the encryption key effectively destroys the data.

Alternative measures for providing data privacy during the creation and destruction of ephemeral storage depend somewhat on the chosen hypervisor and the OpenStack Compute plug-in.

The libvirt plug-in for Compute may maintain ephemeral storage directly on a filesystem or in LVM. Filesystem storage generally does not overwrite data when it is removed, although there is a guarantee that dirty extents are not provisioned to users.

When using LVM-backed, block-based ephemeral storage, the OpenStack Compute software must securely erase blocks to prevent information disclosure. There have in the past been information-disclosure vulnerabilities related to improperly erased ephemeral block storage devices.

Filesystem storage is a more secure solution for ephemeral block storage devices than LVM, since dirty extents cannot be provisioned to users. However, note that the user data is not destroyed, so it is recommended to encrypt the backing filesystem.

### Bare-metal server sanitization

A bare-metal server driver for Compute was under development and has since moved into a separate project called ironic. At the time of this writing, somewhat ironically, it does not appear to address sanitization of tenant data residing on physical hardware.

Additionally, a tenant of a bare-metal system could modify the system firmware. The TPM technology described under secure boot provides a solution for detecting unauthorized firmware changes.

### Data encryption

The option exists for implementers to encrypt tenant data wherever it is stored, on disk or transported over a network, such as with the OpenStack volume encryption feature described below. This is above and beyond the general recommendation that users encrypt their own data before sending it to their provider.
The importance of encrypting data on behalf of tenants is largely related to the risk, assumed by the provider, that an attacker could access tenant data. There may be requirements from government, per-policy requirements, private contracts, or even case law regarding private contracts with public cloud providers. A risk assessment and legal counsel are advisable before choosing a tenant encryption policy.

Per-instance or per-object encryption is preferable over, in descending order, per-project, per-tenant, per-host, and per-cloud aggregations. This recommendation is inverse to the complexity and difficulty of implementation. Presently, in some projects it is difficult or impossible to implement encryption at even a granularity as loose as per-tenant. We recommend that implementers make a best effort to encrypt tenant data.

Often, data encryption relates positively to the ability to reliably destroy tenant and per-instance data simply by discarding the keys. It should be noted that, in doing so, it becomes critically important to destroy those keys in a reliable and secure manner.

Opportunities to encrypt data for users are present in:

- Object Storage objects
- Network data

### Volume encryption

The volume encryption feature in OpenStack supports privacy on a per-tenant basis. As of the Kilo release, the following features are supported:

- Creation and use of encrypted volume types, initiated through the dashboard or the command-line interface
- Enabling encryption and selecting parameters such as encryption algorithm and key size
- Volume data contained within iSCSI packets is encrypted
- Encrypted backups are supported if the original volume is encrypted
- The dashboard indicates volume encryption status, including an indication that a volume is encrypted and the encryption parameters such as algorithm and key size
- Interaction with the key management service through a secure wrapper
- Volume encryption is supported by back-end key storage for enhanced security (for example, a Hardware Security Module (HSM) or a KMIP server can be used as a barbican back-end key store)

A hedged command-line sketch of creating and using an encrypted volume type follows the list.
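As an illustrative, hedged sketch only (the encryption provider name varies between releases, and `LUKS` and `encrypted-volume` are placeholder names chosen here), an administrator might define an encrypted volume type and a user might then create a volume of that type like this:

```console
$ openstack volume type create \
    --encryption-provider luks \
    --encryption-cipher aes-xts-plain64 \
    --encryption-key-size 256 \
    --encryption-control-location front-end \
    LUKS
$ openstack volume create --size 1 --type LUKS encrypted-volume
```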
### Ephemeral disk encryption

The ephemeral disk encryption feature addresses data privacy. The ephemeral disk is a temporary work space used by the virtual host operating system. Without encryption, sensitive user information could be accessed on this disk, and vestigial information could remain after the disk is unmounted. As of the Kilo release, the following ephemeral disk encryption features are supported:

- Creation and use of encrypted LVM ephemeral disks (note: at present the OpenStack Compute service only supports encrypting ephemeral disks in the LVM format)
- Compute configuration: nova.conf has the following default parameters in the `[ephemeral_storage_encryption]` section:
  - Option `cipher = aes-xts-plain64`: this field sets the cipher and mode used to encrypt ephemeral storage. AES-XTS is recommended by NIST specifically for disk storage, and the name is shorthand for AES encryption using the XTS encryption mode. Available ciphers depend on kernel support; at the command line, type `cryptsetup benchmark` to determine the available options (and see benchmark results), or consult /proc/crypto.
  - Option `enabled = false`: to use ephemeral disk encryption, set `enabled = true`.
  - Option `key_size = 512`: note that the back-end key manager may impose a key-size limitation requiring `key_size = 256`, which only provides an AES key size of 128 bits. In addition to the encryption key required by AES, XTS requires its own "tweak key"; the two are usually expressed as a single large key. With the 512-bit setting, AES uses 256 bits and XTS uses 256 bits (see NIST).
- Interaction with the key management service through a secure wrapper
- The key management service supports data isolation by providing ephemeral disk encryption keys on a per-tenant basis
- Ephemeral disk encryption is supported by back-end key storage for enhanced security (for example, an HSM or a KMIP server can be used as a barbican back-end key store)
- When the key management service is used, deleting the key once an ephemeral disk is no longer needed may take the place of overwriting the ephemeral disk storage area

A minimal configuration sketch follows the list.
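A minimal sketch of the `[ephemeral_storage_encryption]` options discussed above; the values simply restate the documented defaults, with encryption switched on:

```ini
# /etc/nova/nova.conf
[ephemeral_storage_encryption]
enabled = true
cipher = aes-xts-plain64
key_size = 512
```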
### Object Storage objects

Object Storage (swift) supports optional encryption of object data at rest on storage nodes. Encrypting object data is intended to mitigate the risk of user data being read if an unauthorized party gains physical access to a disk.

Encryption of data at rest is implemented by middleware that may be included in the proxy-server WSGI pipeline. The feature is internal to a swift cluster and not exposed through the API. Clients are unaware that data is encrypted by this feature internally to the swift service; internally encrypted data should never be returned to clients through the swift API.

The following data is encrypted while at rest in swift:

- Object content, for example the content of an object PUT request's body
- The entity tag (ETag) of objects with non-zero content
- All custom user object metadata values, for example metadata sent using X-Object-Meta- prefixed headers with PUT or POST requests

Any data or metadata not included in the list above is not encrypted, including:

- Account, container, and object names
- Account and container custom user metadata values
- All custom user metadata names
- Object Content-Type values
- Object size
- System metadata

For more information about the deployment, operation, or implementation of Object Storage encryption, see the swift developer documentation on object encryption; a hedged sketch of the proxy-server pipeline configuration follows.
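As a hedged sketch of what enabling at-rest encryption in the proxy-server WSGI pipeline can look like (consult the swift object-encryption documentation for the authoritative middleware ordering and key-master options; the pipeline shown here is simplified):

```ini
# /etc/swift/proxy-server.conf
[pipeline:main]
# keymaster and encryption sit near the end of the pipeline,
# after authentication and before the proxy-server application
pipeline = catch_errors proxy-logging cache authtoken keymaster encryption proxy-logging proxy-server

[filter:keymaster]
use = egg:swift#keymaster
# root secret from which per-object keys are derived; keep it out of
# version control and back it up securely
encryption_root_secret = <base64-encoded 32-byte value>

[filter:encryption]
use = egg:swift#encryption
```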
### Block Storage performance and back ends

OpenStack volume encryption performance can be enhanced by using the hardware acceleration features currently available in both Intel and AMD processors, provided the operating system enables them. Both the OpenStack volume encryption feature and the OpenStack ephemeral disk encryption feature use dm-crypt to secure volume data; dm-crypt is a transparent disk encryption capability in Linux kernel versions 2.6 and later. With volume encryption enabled, encrypted data is sent over iSCSI to Block Storage, protecting data both in transit and at rest. With hardware acceleration, the performance impact of both encryption features is minimized.

While we recommend using the OpenStack volume encryption feature, Block Storage supports a variety of alternative back ends for providing mountable volumes, some of which may also provide volume encryption. Because there are so many back ends, and because the relevant information would have to be obtained from each vendor, prescribing how to implement encryption in any one of them is outside the scope of this guide.

### Network data

Tenant data for Compute could be encrypted over IPsec or other tunnels. This is not common or standard in OpenStack, but it is an option for motivated and interested implementers.

Likewise, encrypted data remains encrypted while it is transferred over the network.

### Key management

To address the often-mentioned concerns about tenant data privacy, and to limit cloud provider liability, there is growing interest within the OpenStack community in making data encryption more ubiquitous. It is relatively easy for an end user to encrypt data before saving it to the cloud, and this is a viable path for tenant objects such as media files and database archives. In some cases, client-side encryption is used to encrypt data held by the virtualization technologies, which requires client interaction (such as presenting keys) to decrypt the data for later use. To secure data seamlessly, keeping it accessible without burdening the client with key management, OpenStack provides a key management service. Offering encryption and key management services as part of OpenStack eases the adoption of data-at-rest security, addresses customer concerns about privacy or misuse of data, and limits cloud provider liability. This can help reduce a provider's liability when handling tenant data during an incident investigation in a multi-tenant public cloud.

The volume encryption and ephemeral disk encryption features rely on a key management service (for example, barbican) to create and securely store keys. The key manager is pluggable to accommodate deployments that need a third-party Hardware Security Module (HSM) or the Key Management Interoperability Protocol (KMIP), which is supported by an open-source project called PyKMIP. A hedged configuration sketch follows.
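As a minimal, hedged sketch (section and option names follow recent releases; older releases use a `[keymgr]` section with an `api_class` option instead), pointing Compute and Block Storage at barbican for key management can look like this:

```ini
# /etc/nova/nova.conf and /etc/cinder/cinder.conf
[key_manager]
backend = barbican
```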
### Bibliography

- OpenStack.org, Welcome to barbican's developer documentation! 2014. Barbican developer documentation.
- oasis-open.org, OASIS Key Management Interoperability Protocol (KMIP). 2014. KMIP.
- PyKMIP library.

## Instance security management

One of the virtues of running instances in a virtualized environment is that it opens up new opportunities for security controls that are not typically available when deploying onto bare metal. Several technologies can be applied to the virtualization stack to bring improved information assurance for cloud tenants.

Deployers or users of OpenStack with strong security requirements may want to consider deploying these technologies. Not all of them are applicable in every situation; in some cases, technologies may be ruled out for use in a cloud because of prescriptive business requirements. Similarly, some technologies inspect instance data such as run state, which may be undesirable to the users of the system.

In this chapter we explore these technologies and describe the situations in which they can be used to enhance security for instances or the underlying nodes. We also seek to highlight where privacy concerns may exist, including data pass-through, introspection, and entropy sources. This section focuses on the following additional security services for instances:

- Entropy in instances
- Scheduling instances to nodes
- Trusted images
- Instance migrations
- Monitoring, alerting, and reporting
- Updates and patches
- Firewalls and other host-based security controls

### Entropy in instances
We consider entropy to refer to the quality and source of random data available to an instance. Cryptographic technologies typically rely heavily on randomness and require drawing from a high-quality pool of entropy. It is typically hard for a virtual machine to get enough entropy to support these operations, a condition referred to as entropy starvation. Entropy starvation can manifest in instances as seemingly unrelated symptoms; for example, slow boot times may be caused by the instance waiting for SSH key generation. It can also tempt users to use poor-quality entropy sources from within the instance, making applications running in the cloud less secure overall.

Fortunately, cloud architects can address these issues by providing a high-quality source of entropy to cloud instances. This can be done by having enough hardware random number generators (HRNGs) in the cloud to support the instances. Here, "enough" is somewhat domain specific: for everyday operations, a modern HRNG is likely to produce enough entropy to support 50-100 compute nodes, and high-bandwidth HRNGs such as the RdRand instruction available with Intel Ivy Bridge and newer processors can potentially handle more. For a given cloud, an architect needs to understand the application requirements to ensure that sufficient entropy is available.

Virtio RNG is a random number generator that uses /dev/random as its source of entropy by default, but it can be configured to use a hardware RNG or a tool such as the entropy gathering daemon (EGD) to distribute entropy fairly and securely through a distributed system. Virtio RNG is enabled using the hw_rng property of the metadata used to create the instance; a hedged example follows this paragraph.
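A hedged sketch of exposing a virtio RNG device to instances (the property names follow current Glance image-property and Nova flavor extra-spec documentation and may differ on older releases; `<image>` and `<flavor>` are placeholders):

```console
# Allow instances booted from this image to receive a virtio RNG device
$ openstack image set --property hw_rng_model=virtio <image>

# Permit, and optionally rate-limit, the RNG device on a flavor
$ openstack flavor set <flavor> \
    --property hw_rng:allowed=True \
    --property hw_rng:rate_bytes=24 \
    --property hw_rng:rate_period=5000
```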
### Scheduling instances to nodes

Before an instance is created, a host for the image instantiation must be selected. This selection is performed by nova-scheduler, which determines how to dispatch compute and volume requests.

The FilterScheduler is the default scheduler for OpenStack Compute, although other schedulers exist (see the Scheduling section of the OpenStack Configuration Reference). It works in collaboration with filter hints to decide where an instance should be started. This host-selection process allows administrators to fulfill many different security and compliance requirements. For example, depending on the cloud deployment type, one could choose to have tenant instances reside on the same hosts whenever possible if data isolation is a primary concern; conversely, one could attempt to spread a tenant's instances across as many different hosts as possible for availability or fault-tolerance reasons.

The scheduler filters fall into four broad categories:

- Resource-based filters: these create an instance based on the utilization of the hypervisor host set, and can trigger on free or used properties such as RAM, I/O, or CPU utilization.
- Image-based filters: these delegate instance creation based on the image used, such as the operating system of the VM or the type of image.
- Environment-based filters: these create an instance based on external details, such as being within a specific IP range, across availability zones, or on the same host as another instance.
- Custom criteria: these delegate instance creation based on user- or administrator-provided criteria, such as trust or metadata parsing.

Multiple filters can be applied at once, for example the ServerGroupAffinity filter to ensure an instance is created on a member of a specific set of hosts, and the ServerGroupAntiAffinity filter to ensure that the same instance is not created on another specific set of hosts. These filters should be analyzed carefully to ensure they do not conflict with each other and produce rules that prevent instances from being created. The GroupAffinity and GroupAntiAffinity filters conflict and should not both be enabled at the same time.

The DiskFilter filter is capable of oversubscribing disk space. While this is not normally a problem, it can be a concern on thinly provisioned storage devices, and this filter should be used with well-tested quotas applied.

We recommend disabling filters that parse content provided by users or that can be manipulated, such as metadata. The filter list is configured in nova.conf; a hedged sketch follows.
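A hedged sketch of pinning down an explicit filter list (the option name and section vary by release; older releases use `scheduler_default_filters` under `[DEFAULT]`, and the filters listed here are only an illustration, not a recommendation):

```ini
# /etc/nova/nova.conf
[filter_scheduler]
enabled_filters = AvailabilityZoneFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter
```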
### Trusted images

In a cloud environment, users work with either pre-installed images or images they upload themselves. In both cases, users should be able to ensure that the image they are using has not been tampered with. The ability to verify images is a fundamental security requirement. A chain of trust is needed from the source of the image to the destination where it is used. This can be accomplished by signing images obtained from trusted sources and verifying the signature prior to use. Various ways to obtain and create verified images are discussed below, followed by a description of the image signature verification feature.

#### Image creation process

The OpenStack documentation provides guidance on how to create an image and upload it to the Image service. Additionally, it is assumed that you have a process for installing and hardening operating systems. The following items therefore provide additional guidance on how to ensure your images are transferred securely into OpenStack. There are a variety of options for obtaining images; each has specific steps that help validate the image's provenance.

The first option is to obtain boot media from a trusted source:

```console
$ mkdir -p /tmp/download_directory
$ cd /tmp/download_directory
$ wget http://mirror.anl.gov/pub/ubuntu-iso/CDs/precise/ubuntu-12.04.2-server-amd64.iso
$ wget http://mirror.anl.gov/pub/ubuntu-iso/CDs/precise/SHA256SUMS
$ wget http://mirror.anl.gov/pub/ubuntu-iso/CDs/precise/SHA256SUMS.gpg
$ gpg --keyserver hkp://keyserver.ubuntu.com --recv-keys 0xFBB75451
$ gpg --verify SHA256SUMS.gpg SHA256SUMS
$ sha256sum -c SHA256SUMS 2>&1 | grep OK
```

The second option is to use the OpenStack Virtual Machine Image Guide. In this case, you will want to follow your organization's OS hardening guidelines or those provided by a trusted third party, such as the Linux STIGs.

The final option is to use an automated image builder. The following example uses the Oz image builder. The OpenStack community has recently created a newer tool worth investigating, disk-image-builder; we have not evaluated it from a security perspective. An RHEL 6 CCE-26976-1 example helps implement NIST 800-53 Section AC-19(d) within Oz.
It is recommended to avoid the manual image-building process, as it is complex and prone to error. Additionally, using an automated system such as Oz for image building, or a configuration management utility such as Chef or Puppet for post-boot image hardening, allows you to produce consistent images and to track your base images' compliance with their respective hardening guidelines over time.

If you subscribe to a public cloud service, check with the cloud provider for an outline of the process used to produce its default images. If the provider allows you to upload your own images, ensure that you can verify that your image was not modified before using it to create an instance. To do this, refer to the section on image signature verification below, or to the following paragraph if signatures cannot be used.

Images are transferred from the Image service to the Compute service on a node. This transfer should be protected by running it over TLS. Once the image is on the node, it is verified with a basic checksum and its disk is then expanded based on the size of the instance being launched. If the same image is later launched with the same instance size on this node, it is launched from the same expanded image. Because this expanded image is not re-verified by default before launch, it may have been tampered with, and the user would not be aware of the tampering unless a manual inspection of the files in the resulting image is performed.

#### Image signature verification

Several features related to image signing are now available in OpenStack. As of the Mitaka release, the Image service can verify these signed images and, to provide a full chain of trust, the Compute service has the option to perform image signature verification prior to image boot. Successful signature validation before booting ensures the signed image has not changed. With this feature enabled, unauthorized modification of images (for example, modifying an image to include malware or rootkits) can be detected.

Administrators can enable instance signature verification by setting the verify_glance_signatures flag to True in the /etc/nova/nova.conf file, as sketched below.
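A hedged sketch; in recent releases the option lives under the `[glance]` section, but check the configuration reference for the release in use:

```ini
# /etc/nova/nova.conf
[glance]
verify_glance_signatures = True
```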
When enabled, the Compute service automatically validates signed instances when they are retrieved from the Image service. If this validation fails, the boot does not proceed. The OpenStack Operations Guide provides guidance on how to create and upload a signed image and how to use this feature; for more information, see Adding Signed Images in the Operations Guide.

### Instance migrations

OpenStack and the underlying virtualization layers provide live migration of images between OpenStack nodes, allowing you to seamlessly perform rolling upgrades of your OpenStack compute nodes without instance downtime. However, live migration also carries significant risk. To understand the risks involved, these are the high-level steps performed during a live migration:

1. Start the instance on the destination host
2. Transfer memory
3. Stop the guest and sync disks
4. Transfer the state
5. Start the guest

#### Live migration risks

At various stages of the live migration process, the contents of an instance's run-time memory and disk are transmitted over the network in plain text. Consequently, several risks need to be addressed when using live migration. The following non-exhaustive list details some of them:

- Denial of service (DoS): if something fails during the migration process, the instance could be lost.
- Data exposure: memory or disk transfers must be handled securely.
- Data manipulation: if memory or disk transfers are not handled securely, an attacker could manipulate user data during the migration.
- Code injection: if memory or disk transfers are not handled securely, an attacker could manipulate executables on disk or in memory during the migration.

#### Live migration mitigations

There are several methods available to mitigate some of the risks associated with live migration; the following list details some of them:

- Disable live migration
- Isolated migration network
- Encrypted live migration

#### Disable live migration

At this time, live migration is enabled in OpenStack by default. It can be disabled by adding the following lines to the nova policy.json file:
\"compute_extension:admin_actions:migrate\": \"!\", \"compute_extension:admin_actions:migrateLive\": \"!\", } \u8fc1\u79fb\u7f51\u7edc \u00b6 \u4e00\u822c\u505a\u6cd5\u662f\uff0c\u5b9e\u65f6\u8fc1\u79fb\u6d41\u91cf\u5e94\u9650\u5236\u5728\u7ba1\u7406\u5b89\u5168\u57df\u5185\uff0c\u8bf7\u53c2\u9605\u5b89\u5168\u8fb9\u754c\u548c\u5a01\u80c1\u3002\u5bf9\u4e8e\u5b9e\u65f6\u8fc1\u79fb\u6d41\u91cf\uff0c\u7531\u4e8e\u5176\u7eaf\u6587\u672c\u6027\u8d28\u4ee5\u53ca\u60a8\u6b63\u5728\u4f20\u8f93\u6b63\u5728\u8fd0\u884c\u7684\u5b9e\u4f8b\u7684\u78c1\u76d8\u548c\u5185\u5b58\u5185\u5bb9\uff0c\u56e0\u6b64\u5efa\u8bae\u60a8\u8fdb\u4e00\u6b65\u5c06\u5b9e\u65f6\u8fc1\u79fb\u6d41\u91cf\u5206\u79bb\u5230\u4e13\u7528\u7f51\u7edc\u4e0a\u3002\u5c06\u6d41\u91cf\u9694\u79bb\u5230\u4e13\u7528\u7f51\u7edc\u53ef\u4ee5\u964d\u4f4e\u66b4\u9732\u98ce\u9669\u3002 \u52a0\u5bc6\u5b9e\u65f6\u8fc1\u79fb \u00b6 \u5982\u679c\u6709\u8db3\u591f\u7684\u4e1a\u52a1\u6848\u4f8b\u6765\u4fdd\u6301\u5b9e\u65f6\u8fc1\u79fb\u7684\u542f\u7528\u72b6\u6001\uff0c\u5219 libvirtd \u53ef\u4ee5\u4e3a\u5b9e\u65f6\u8fc1\u79fb\u63d0\u4f9b\u52a0\u5bc6\u96a7\u9053\u3002\u4f46\u662f\uff0c\u6b64\u529f\u80fd\u76ee\u524d\u5c1a\u672a\u5728 OpenStack Dashboard \u6216 nova-client \u547d\u4ee4\u4e2d\u516c\u5f00\uff0c\u53ea\u80fd\u901a\u8fc7\u624b\u52a8\u914d\u7f6e libvirtd \u6765\u8bbf\u95ee\u3002\u7136\u540e\uff0c\u5b9e\u65f6\u8fc1\u79fb\u8fc7\u7a0b\u5c06\u66f4\u6539\u4e3a\u4ee5\u4e0b\u9ad8\u7ea7\u6b65\u9aa4\uff1a \u5b9e\u4f8b\u6570\u636e\u4ece\u865a\u62df\u673a\u7ba1\u7406\u7a0b\u5e8f\u590d\u5236\u5230 libvirtd\u3002 \u5728\u6e90\u4e3b\u673a\u548c\u76ee\u6807\u4e3b\u673a\u4e0a\u7684 libvirtd \u8fdb\u7a0b\u4e4b\u95f4\u521b\u5efa\u52a0\u5bc6\u96a7\u9053\u3002 \u76ee\u6807 libvirtd \u4e3b\u673a\u5c06\u5b9e\u4f8b\u590d\u5236\u56de\u5e95\u5c42\u865a\u62df\u673a\u7ba1\u7406\u7a0b\u5e8f\u3002 \u76d1\u63a7\u3001\u544a\u8b66\u548c\u62a5\u544a \u00b6 \u7531\u4e8e OpenStack \u865a\u62df\u673a\u662f\u80fd\u591f\u8de8\u4e3b\u673a\u590d\u5236\u7684\u670d\u52a1\u5668\u6620\u50cf\uff0c\u56e0\u6b64\u65e5\u5fd7\u8bb0\u5f55\u7684\u6700\u4f73\u5b9e\u8df5\u540c\u6837\u9002\u7528\u4e8e\u7269\u7406\u4e3b\u673a\u548c\u865a\u62df\u4e3b\u673a\u3002\u5e94\u8bb0\u5f55\u64cd\u4f5c\u7cfb\u7edf\u7ea7\u548c\u5e94\u7528\u7a0b\u5e8f\u7ea7\u4e8b\u4ef6\uff0c\u5305\u62ec\u5bf9\u4e3b\u673a\u548c\u6570\u636e\u7684\u8bbf\u95ee\u4e8b\u4ef6\u3001\u7528\u6237\u6dfb\u52a0\u548c\u5220\u9664\u3001\u6743\u9650\u66f4\u6539\u4ee5\u53ca\u73af\u5883\u89c4\u5b9a\u7684\u5176\u4ed6\u4e8b\u4ef6\u3002\u7406\u60f3\u60c5\u51b5\u4e0b\uff0c\u60a8\u53ef\u4ee5\u5c06\u8fd9\u4e9b\u65e5\u5fd7\u914d\u7f6e\u4e3a\u5bfc\u51fa\u5230\u65e5\u5fd7\u805a\u5408\u5668\uff0c\u8be5\u805a\u5408\u5668\u6536\u96c6\u65e5\u5fd7\u4e8b\u4ef6\uff0c\u5c06\u5b83\u4eec\u5173\u8054\u8d77\u6765\u8fdb\u884c\u5206\u6790\uff0c\u5e76\u5b58\u50a8\u5b83\u4eec\u4ee5\u4f9b\u53c2\u8003\u6216\u8fdb\u4e00\u6b65\u64cd\u4f5c\u3002\u5b9e\u73b0\u6b64\u76ee\u7684\u7684\u4e00\u4e2a\u5e38\u89c1\u5de5\u5177\u662f ELK \u5806\u6808\uff0c\u5373 Elasticsearch\u3001Logstash \u548c Kibana\u3002 \u5e94\u5b9a\u671f\u67e5\u770b\u8fd9\u4e9b\u65e5\u5fd7\uff0c\u4f8b\u5982\u7531\u7f51\u7edc\u8fd0\u8425\u4e2d\u5fc3 \uff08NOC\uff09 \u5b9e\u65f6\u67e5\u770b\uff0c\u6216\u8005\u5982\u679c\u73af\u5883\u4e0d\u591f\u5927\u800c\u4e0d\u9700\u8981 NOC\uff0c\u5219\u65e5\u5fd7\u5e94\u5b9a\u671f\u8fdb\u884c\u65e5\u5fd7\u5ba1\u67e5\u8fc7\u7a0b\u3002 
Often, interesting events trigger an alert that is sent to a responder for action. Frequently this alert takes the form of an email containing the messages of interest. An interesting event could be a significant failure or a known health indicator of a pending failure. Two common utilities for managing alerts are Nagios and Zabbix.

### Updates and patches

A hypervisor runs independent virtual machines. The hypervisor can run within an operating system or directly on the hardware (called bare metal). Updates to the hypervisor are not propagated down to the virtual machines. For example, if a deployment is using XenServer and has a set of Debian virtual machines, an update to XenServer will not update anything running on the Debian virtual machines.

Therefore, we recommend that clear ownership of virtual machines be assigned, and that those owners be responsible for the hardening, deployment, and continued functionality of the virtual machines. We also recommend that updates be deployed on a regular schedule. These patches should be tested in an environment resembling production as closely as possible, to ensure both stability and resolution of the issue behind the patch.

### Firewalls and other host-based security controls

Most common operating systems include host-based firewalls for additional security. While we recommend that virtual machines run as few applications as possible (to the point of being single-purpose instances, if possible), all applications running on a virtual machine should be profiled to determine which system resources the application needs access to, the lowest level of privilege required for it to run, and the expected network traffic going into and out of the virtual machine. This expected traffic should be added to the host-based firewall as allowed (whitelisted) traffic, along with any necessary logging and management communication such as SSH or RDP. All other traffic should be explicitly denied in the firewall configuration.

On Linux virtual machines, the application profile above can be used in conjunction with a tool such as audit2allow
to build an SELinux policy that further protects sensitive system information on most Linux distributions. SELinux uses a combination of users, policies, and security contexts to compartmentalize the resources needed for an application to run and to segregate them from other system resources that are not needed.

OpenStack provides security groups for both hosts and the network to add defense in depth to the virtual machines in a given project. These are similar to host-based firewalls in that they allow or deny incoming traffic based on port, protocol, and address; however, security group rules apply to incoming traffic only, while host-based firewall rules can be applied to both incoming and outgoing traffic. It is also possible for host and network security group rules to conflict and deny legitimate traffic. We recommend ensuring that security groups are configured correctly for the networking in use; for details, see the Security groups section of this guide and the hedged example below.
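A hedged example of adding an explicit allow rule for expected management traffic (the CIDR is a placeholder for an administrative network, and the rule is added to the project's default security group):

```console
$ openstack security group rule create --protocol tcp --dst-port 22 \
    --remote-ip 192.0.2.0/24 default
```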
## Monitoring and logging

A cloud environment mixes hardware, operating systems, hypervisors, OpenStack services, cloud-user activity (such as creating instances and attaching storage), networking, and the end users of the applications running on the various instances.

The basics of logging, including configuration, setting log levels, the location of log files, how to use and customize logs, and how to collect logs centrally, are well covered in the OpenStack Operations Guide. This chapter covers forensics and incident response, monitoring use cases, and a bibliography.

### Forensics and incident response

The generation and collection of logs is an important component of securely monitoring an OpenStack infrastructure. Logs provide visibility into the day-to-day actions of administrators, tenants, and guests, as well as the activity in compute, networking, storage, and the other components that comprise an OpenStack deployment.

Logs are valuable not only for proactive security and continuous compliance activities, but also as an information source for investigating and responding to incidents.

For instance, analyzing the access logs of the Identity service or its replacement authentication system would alert us to failed logins, their frequency, the source IPs, whether the events are restricted to selected accounts, and other pertinent information. Log analysis supports detection.

Actions can then be taken to mitigate potentially malicious activity, such as blacklisting an IP address, recommending that user passwords be strengthened, or de-activating a user account that is deemed dormant.

### Monitoring use cases

Event monitoring is a more proactive approach to securing an environment, providing real-time detection and response. Several tools exist that can aid in monitoring.

For an OpenStack cloud, we need to monitor the hardware, the OpenStack services, and cloud resource usage. The last stems from the desire to be elastic and scale to the dynamic needs of users.

The following are a few important use cases to consider when implementing log aggregation, analysis, and monitoring. These use cases can be implemented and monitored through various applications, tools, or scripts; there are open-source and commercial solutions, and some operators develop their own in-house solutions. These tools and scripts can generate events that are then sent to administrators by email or viewed in an integrated dashboard. It is important to also consider additional use cases that may apply to your specific network and whatever you would consider anomalous behavior.

Detecting the absence of log generation is an event of high value. Such an event would indicate a service failure, or even an intruder who has temporarily switched off logging or modified the log level to hide their tracks.

Application events, such as unscheduled start or stop events, are also events to monitor and examine for possible security implications.

Operating system events on the OpenStack service machines, such as user logins or restarts, also provide valuable insight into proper and improper use of the systems.

Being able to detect the load on the OpenStack servers also enables a response, such as introducing additional servers for load balancing, to ensure high availability.
Other actionable events include network bridges going down, IP tables being flushed on compute nodes, and the consequential loss of access to instances, resulting in unhappy customers.

To reduce the security risk of orphaned instances when a user, tenant, or domain is deleted from the Identity service, there is discussion around generating notifications in the system and having OpenStack components respond appropriately to these events, for example by terminating instances, disconnecting attached volumes, and reclaiming CPU and storage resources.

A cloud hosts many virtual instances, and monitoring these instances goes beyond hardware monitoring and log files, which may simply contain CRUD events.

Security monitoring controls such as intrusion detection software, antivirus software, and spyware detection and removal utilities can generate logs that show when and how an attack or intrusion took place. Deploying these tools on the cloud machines provides value and protection. Cloud users, that is, those running instances on the cloud, may also want to run such tools on their instances.

### Bibliography

- Siwczak, Piotr, Some Practical Considerations for Monitoring in the OpenStack Cloud. 2012.
- blog.sflow.com, sFlow: Host sFlow distributed agent. 2012.
- blog.sflow.com, sFlow: LAN and WAN. 2009.
- blog.sflow.com, sFlow: Rapidly detecting large flows, sFlow vs. NetFlow/IPFIX. 2013.
## Compliance

An OpenStack deployment may require compliance activities for many purposes, such as regulatory and legal requirements, customer need, privacy considerations, and security best practices. The compliance function is important for the business and its customers. Compliance means adhering to regulations, specifications, standards, and laws; it is also used to describe an organization's status with respect to assessments, audits, and certifications. Compliance, when done correctly, unifies and strengthens the other security topics discussed in this guide.

This chapter has several objectives:

- Review common security principles.
- Discuss common control frameworks and certification resources for achieving industry certifications or regulator attestations.
- Act as a reference for auditors when evaluating OpenStack deployments.
- Introduce privacy considerations specific to OpenStack and cloud environments.

The topics covered are: compliance overview; security principles; common control frameworks; audit reference; understanding the audit process; determining audit scope; audit phases; internal audit; preparing for an external audit; external audit; compliance maintenance; compliance activities; information security management system (ISMS); risk assessment; access and log reviews; backup and disaster recovery; security training; security reviews; vulnerability management; data classification; exception process; certification and compliance statements; commercial standards; government standards; privacy.

### Compliance overview

#### Security principles

Industry-standard security principles provide a baseline for compliance certifications and attestations. If these principles are considered and referenced throughout an OpenStack deployment, certification activities may be simplified.

#### Layered defenses

Identify where risks exist in a cloud architecture and apply controls to mitigate them. In areas of significant concern, layered defenses provide multiple complementary controls to manage risk down to an acceptable level. For example, to ensure adequate isolation between cloud tenants, we recommend hardening QEMU, using a hypervisor with SELinux support, enforcing mandatory access control policies, and reducing the overall attack surface. The foundational principle is to harden an area of concern with multiple layers of defense, so that if any one layer is compromised, the other layers still exist to offer protection and minimize exposure.
Fail securely

In the case of failure, systems should be configured to fail into a closed, secure state. For example, if TLS certificate verification fails, that is, the CNAME does not match the server's DNS name, the connection should fail securely by severing the network connection. Software often fails open in this situation, allowing the connection to proceed without a CNAME match, which is less secure and not recommended.
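As a concrete illustration of the fail-securely principle (a generic sketch, not part of the original guidance), the client below requires certificate and hostname verification and aborts the connection on any validation error instead of falling back to an unverified connection:

```python
# Minimal fail-closed TLS client sketch (illustrative only).
import socket
import ssl

def probe_tls(host: str, port: int = 443) -> None:
    context = ssl.create_default_context()  # verifies the certificate chain
    context.check_hostname = True           # and requires a hostname/SAN match
    context.verify_mode = ssl.CERT_REQUIRED
    try:
        with socket.create_connection((host, port), timeout=10) as sock:
            with context.wrap_socket(sock, server_hostname=host) as tls:
                print(f"Negotiated {tls.version()} with {host}")
    except ssl.SSLError as exc:
        # Fail securely: sever the connection and surface the error;
        # never retry without verification.
        raise SystemExit(f"TLS verification failed for {host}: {exc}")

if __name__ == "__main__":
    probe_tls("example.com")
```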
Least privilege

Only grant users and system services the minimal level of access required. This access is based upon role, responsibility, and job function. This security principle of least privilege is written into several international government security policies, such as NIST 800-53 Section AC-6 within the United States.

Compartmentalize

Systems should be segregated in such a way that if one machine or system-level service is compromised, the security of the other systems remains intact. Practically, the enablement and proper use of SELinux helps accomplish this goal.

Promote privacy

The amount of information that can be gathered about a system and its users should be minimized.

Logging capability

Appropriate logging should be implemented to monitor for unauthorized use, incident response, and forensics. We highly recommend that selected audit subsystems be Common Criteria certified, which provides non-attestable event records in most countries.

Common control frameworks

The following is a list of control frameworks that an organization can use to build its security controls.

Cloud Security Alliance (CSA) Cloud Controls Matrix (CCM)

The CSA CCM is specifically designed to provide fundamental security principles to guide cloud vendors and to assist prospective cloud customers in assessing the overall security risk of a cloud provider. The CSA CCM provides a controls framework that is aligned across 16 security domains. The foundation of the Cloud Controls Matrix rests on its tailored relationship to other industry standards, regulations, and controls frameworks such as ISO 27001:2013, COBIT 5.0, PCI DSS v3, and the AICPA 2014 Trust Services Principles and Criteria, and it augments internal control direction for Service Organization Control report attestations. The CSA CCM strengthens existing information security control environments by reducing security threats and vulnerabilities in the cloud, provides standardized security and operational risk management, and seeks to normalize security expectations, cloud taxonomy and terminology, and security measures implemented in the cloud.

ISO 27001/2:2013

The ISO 27001 information security standard and certification has been used for many years to evaluate and distinguish an organization's alignment with information security best practices. The standard is comprised of two parts: mandatory clauses that define the Information Security Management System (ISMS), and Annex A, which contains a list of controls organized by domain. The information security management system preserves the confidentiality, integrity, and availability of information by applying a risk management process, giving interested parties confidence that risks are adequately managed.

Trusted Security Principles

Trust Services are a set of professional attestation and advisory services based on a core set of principles and criteria that address the risks and opportunities of IT systems and privacy programs. Commonly known as SOC audits, the principles define what the requirements are; it is the organization's responsibility to define the controls that satisfy those requirements.

Audit reference

OpenStack is innovative in many ways, but the process used to audit an OpenStack deployment is a fairly common one. Auditors evaluate a process by two criteria: is the control designed effectively, and is the control operating effectively. An understanding of how an auditor evaluates whether a control is designed and operating effectively is discussed in the section Understanding the audit process.

The most common frameworks for auditing and evaluating a cloud deployment include the previously mentioned ISO 27001/2 information security standard, ISACA's Control Objectives for Information and Related Technology (COBIT) framework, the Committee of Sponsoring Organizations of the Treadway Commission (COSO), and the Information Technology Infrastructure Library (ITIL).
Audits typically include focus areas from one or more of these frameworks. Fortunately there is a lot of overlap between the frameworks, so an organization that adopts one will be in a good position come audit time.

Understanding the audit process

Information system security compliance is reliant on the completion of two foundational processes:

Implementation and operation of security controls. Aligning the information system with in-scope standards and regulations involves internal tasks which must be conducted before a formal assessment. Auditors may be involved at this state to conduct gap analysis, provide guidance, and increase the likelihood of successful certification.

Independent verification and validation. Demonstration to a neutral third party that system security controls are implemented and operating effectively, in compliance with in-scope standards and regulations. Many certifications require periodic audits to ensure continued certification, considered part of an overarching continuous monitoring practice.

Determining audit scope

Determining audit scope, specifically which controls are needed and how to design or modify the OpenStack deployment to satisfy them, should be the initial planning step.

When scoping an OpenStack deployment for compliance purposes, prioritize controls around sensitive services, such as command and control functions and the base virtualization technology. Compromises of these facilities may impact an OpenStack environment in its entirety.

Scope reduction helps ensure OpenStack architects establish high-quality security controls that are tailored to a particular deployment, but it is imperative to ensure these practices do not omit areas or features from security hardening. A common example is the PCI-DSS guidelines, where payment-related infrastructure may be scrutinized for security issues while supporting services are left ignored and remain vulnerable to attack.
When addressing compliance, you can increase efficiency and reduce work effort by identifying common areas and criteria that apply across multiple certifications. Much of the audit principles and guidelines discussed in this book will assist in identifying these controls; additionally, a number of external entities provide comprehensive lists. The following are some examples:

The Cloud Security Alliance Cloud Controls Matrix (CCM) assists both cloud providers and consumers in assessing the overall security of a cloud provider. The CSA CCM provides a controls framework that maps to many industry-accepted standards and regulations, including ISO 27001/2, ISACA, COBIT, PCI, NIST, Jericho Forum, and NERC CIP.

The SCAP Security Guide is another useful reference. It is still an emerging source, but we anticipate that it will grow into a tool with control mappings that are more focused on U.S. federal government certifications and recommendations. For example, the SCAP Security Guide currently has some mappings for Security Technical Implementation Guides (STIGs) and NIST-800-53.

These control mappings help identify common control criteria across certifications, and they give auditors and auditees visibility into problem areas within the control sets for particular compliance certifications and attestations.

Phases of an audit

An audit has four distinct phases, although most stakeholders and control owners will only participate in one or two. The four phases are planning, fieldwork, reporting, and wrap-up. Each of these phases is discussed below.

The planning phase typically occurs two weeks to six months before fieldwork begins. In this phase, audit items such as the timeframe, timeline, controls to be evaluated, and control owners are discussed and finalized. Concerns about resource availability, impartiality, and cost are also addressed.
The fieldwork phase is the most visible portion of the audit. This is when the auditor is on site, interviewing the control owners, documenting the controls that are in place, and identifying any issues. Note that the auditor uses a two-part process for evaluating the controls in place. The first part is evaluating the design effectiveness of the control: the auditor assesses whether the control is capable of effectively preventing, or detecting and correcting, weaknesses and deficiencies. A control must pass this test to be evaluated in the second phase, because for a control that is designed ineffectively there is no point in considering whether it operates effectively. The second part is operational effectiveness. Operational effectiveness testing determines how the control was applied, the consistency with which it was applied, and by whom or by what means it was applied. A control may depend on other controls (indirect controls); where it does, additional evidence demonstrating the operating effectiveness of those indirect controls may be required for the auditor to determine the overall operating effectiveness of the control.

In the reporting phase, any issues identified during the fieldwork phase are vetted by management. For logistical purposes, some activities such as issue vetting may be performed during the fieldwork phase. Management also needs to provide remediation plans to address the issues and ensure that they do not recur. A draft of the overall report is circulated to stakeholders and management for review. Agreed-upon changes are incorporated, and the updated draft is sent to senior management for review and approval. Once senior management approves the report, it is finalized and distributed to executive management. Any issues are entered into the issue-tracking or risk-tracking mechanism used by the organization.

The wrap-up phase is where the audit formally concludes. At this point management begins remediation activities. Processes and notifications are used to ensure that any audit-related information is moved to a secure repository.
Internal audit

Once the cloud is deployed, it is time for an internal audit. This is the time to compare the controls identified above with the design, features, and deployment strategies used in the cloud. The goal is to understand how each control is handled and where gaps exist. Document all of the findings for future reference.

When auditing an OpenStack cloud it is important to appreciate the multi-tenant environment inherent in the OpenStack architecture. Some critical areas for concern include data disposal, hypervisor security, node hardening, and authentication mechanisms.

Preparing for an external audit

Once the internal audit results look good, it is time to prepare for an external audit. Several key actions to take at this stage are outlined below:

- Maintain good records from your internal audit. These will prove useful during the external audit, so you can be prepared to answer questions about mapping the compliance controls to the particular deployment.
- Deploy automated testing tools to ensure that the cloud remains compliant over time.
- Select an auditor.

Selecting an auditor can be challenging. Ideally, you are looking for someone with experience in cloud compliance audits; OpenStack experience is another big plus. Often it is best to consult people who have been through this process for referrals. Cost can vary greatly depending on the scope of the engagement and the audit firm being considered.

External audit

This is the formal audit process. Auditors will test security controls in scope for a specific certification and demand evidence proving that these controls were also in place during the audit window (for example, SOC 2 audits generally evaluate security controls over a 6-12 month period). Any control failures are noted and will be documented in the external auditor's final report. Depending on the type of OpenStack deployment, these reports may be viewed by customers, so it is important to avoid control failures. This is why audit preparation is so important.
Compliance maintenance

The process does not end with a single external audit. Most certifications require continual compliance activities, which means repeating the audit process periodically. We recommend integrating automated compliance verification tools into the cloud to ensure that it remains compliant at all times. This should be done in addition to other security monitoring tools. Remember that the goal is both security and compliance; failing on either of these fronts will significantly complicate future audits.

Compliance activities

There are a number of standard activities that greatly assist with the compliance process. This chapter outlines some of the most common compliance activities. These are not specific to OpenStack, but references are provided to relevant sections in this book as useful context.

Information Security Management System (ISMS)

An Information Security Management System (ISMS) is a comprehensive set of policies and processes that an organization creates and maintains to manage risk to information assets. The most common ISMS for cloud deployments is ISO/IEC 27001/2, which creates a solid foundation of security controls and practices for achieving more stringent compliance certifications. The standard was updated in 2013 to reflect the growing use of cloud services and places more emphasis on measuring and evaluating how well an organization's ISMS is performing.

Risk assessment

A risk assessment framework identifies risks within an organization or service, specifies ownership of these risks, and lays out implementation and mitigation strategies. Risks apply to all areas of the service, from technical controls to environmental disaster scenarios and human elements, for example a malicious insider. Risks can be rated using a variety of mechanisms, for example likelihood versus impact. An OpenStack deployment risk assessment can include control gaps.

Access and log reviews

Periodic access and log reviews are required to ensure authentication, authorization, and accountability in a service deployment. Specific guidance for OpenStack on these topics is discussed in depth in the monitoring and logging content of this guide.

The OpenStack Identity service supports Cloud Auditing Data Federation (CADF) notifications, providing auditing data in support of security, operational, and business processes. For more information, see the Keystone developer documentation.
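As an illustration only (not taken from the guide, and option names may vary by release), CADF-formatted audit notifications are typically enabled through keystone's configuration; a minimal keystone.conf excerpt might look like this:

```ini
# Illustrative keystone.conf excerpt; check the Keystone documentation
# for the options supported by your release.
[DEFAULT]
# Emit audit events in the Cloud Auditing Data Federation (CADF) format.
notification_format = cadf

[oslo_messaging_notifications]
# Publish the notifications onto the message bus so an auditing
# consumer can collect them.
driver = messagingv2
topics = notifications
```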
Backup and disaster recovery

Disaster recovery (DR) and business continuity planning (BCP) plans are common requirements for ISMS and compliance activities. These plans must be periodically tested as well as documented. In OpenStack, key areas are found in the management security domain, and anywhere that a single point of failure (SPOF) can be identified.

Security training

Annual, role-specific security training is a mandatory requirement for almost all compliance certifications and attestations. To optimize the effectiveness of security training, a common method is to provide role-specific training, for example to developers, operational personnel, and non-technical employees. Additional cloud security or OpenStack security training based on this hardening guide would be ideal.

Security reviews

As OpenStack is a popular open source project, much of the codebase and architecture has been scrutinized by individual contributors, organizations, and enterprises. This can be advantageous from a security perspective, but the need for security reviews is still a critical consideration for service providers, as deployments vary and security is not always the primary concern of contributors. A comprehensive security review process may include architectural review, threat modeling, source code analysis, and penetration testing. Many techniques and recommendations for conducting security reviews can be found publicly posted; a well-tested example is the Microsoft SDL, created as part of the Microsoft Trustworthy Computing Initiative.

Vulnerability management

Security updates are critical to any IaaS deployment, whether private or public.
Vulnerable systems expand attack surfaces and are obvious targets for attackers. Common scanning technologies and vulnerability notification services can help mitigate this threat. It is important that scans are authenticated and that mitigation strategies extend beyond simple perimeter hardening. Multi-tenant architectures such as OpenStack are particularly prone to hypervisor vulnerabilities, making this a critical part of the vulnerability management system.

Data classification

Data classification defines a method for classifying and handling information, often to protect customer information from accidental or deliberate theft, loss, or inappropriate disclosure. Most commonly this involves classifying information as sensitive or non-sensitive, or as personally identifiable information (PII). Depending on the context of the deployment, various other classifying criteria may be used (government, health care). The underlying principle is that data classifications are clearly defined and in use. The most common protective mechanisms include industry-standard encryption technologies.

Exception process

An exception process is an important component of an ISMS. When certain actions are not compliant with security policies that an organization has defined, they must be logged. Appropriate justification, description, and mitigation details need to be included and signed off by appropriate authorities. OpenStack default configurations may vary in meeting various compliance criteria; areas that fail to meet compliance requirements should be documented, with potential fixes considered for contribution back to the community.

Certification and compliance statements

Compliance and security are not exclusive and must be addressed together. OpenStack deployments are unlikely to satisfy compliance requirements without security hardening. The listing below provides an OpenStack architect with foundational knowledge and guidance for achieving compliance against commercial and government certifications and standards.

Commercial standards

For commercial deployments of OpenStack, we recommend SOC 1/2 combined with ISO 27001/2 as a starting point for OpenStack certification activities.
The required security activities mandated by these certifications help lay a foundation of security best practices and common control criteria, which in turn assist in achieving more stringent compliance activities, including government attestations and certifications.

After completing these initial certifications, the remaining certifications are more deployment specific. For example, clouds processing credit card transactions need PCI-DSS, clouds storing health care information need HIPAA, and clouds within the federal government may require FedRAMP/FISMA and ITAR certifications.

SOC 1 (SSAE 16) / ISAE 3402

Service Organization Controls (SOC) criteria are defined by the American Institute of Certified Public Accountants (AICPA). SOC controls assess relevant financial statements and attestations of a service provider, such as compliance with the Sarbanes-Oxley Act. SOC 1 replaces Statement on Auditing Standards No. 70 (SAS 70) Type II reports. These controls commonly include physical data centers in scope.

There are two types of SOC 1 reports:

- Type 1 - reports on the fairness of the presentation of management's description of the service organization's system and the suitability of the design of the controls to achieve the related control objectives included in the description as of a specified date.
- Type 2 - reports on the fairness of the presentation of management's description of the service organization's system and the suitability of the design and operating effectiveness of the controls to achieve the related control objectives included in the description throughout a specified period.

For more details, see the AICPA Report on Controls at a Service Organization Relevant to User Entities' Internal Control over Financial Reporting.

SOC 2

Service Organization Controls (SOC) 2 is a self-attestation of the controls that affect the security, availability, and processing integrity of the systems a service organization uses to process users' data, and the confidentiality and privacy of the information processed by these systems. Examples of users are those responsible for governance of the service organization, customers of the service organization, regulators, business partners, suppliers, and others who have an understanding of the service organization and its controls.

There are two types of SOC 2 reports:

- Type 1 - reports on the fairness of the presentation of management's description of the service organization's system and the suitability of the design of the controls to achieve the related control objectives included in the description as of a specified date.
- Type 2 - reports on the fairness of the presentation of management's description of the service organization's system and the suitability of the design and operating effectiveness of the controls to achieve the related control objectives included in the description throughout a specified period.

For more details, see the AICPA Report on Controls at a Service Organization Relevant to Security, Availability, Processing Integrity, Confidentiality, or Privacy.

SOC 3

Service Organization Controls (SOC) 3 is a trust services report for service organizations. These reports are designed to meet the needs of users who want assurance about the controls at a service organization related to security, availability, processing integrity, confidentiality, or privacy, but who do not have the knowledge necessary to make effective use of a SOC 2 report. These reports are prepared using the AICPA/Canadian Institute of Chartered Accountants (CICA) Trust Services Principles, Criteria, and Illustrations for Security, Availability, Processing Integrity, Confidentiality, and Privacy. Because SOC 3 reports are general-use reports, they can be freely distributed or posted on a website as a seal.

For more details, see the AICPA Trust Services Report for Service Organizations.

ISO 27001/2 certification

The ISO/IEC 27001/2 standards replace BS7799-2 and are specifications for an Information Security Management System (ISMS). An ISMS is a comprehensive set of policies and processes that an organization creates and maintains to manage risk to information assets. These risks are based upon the confidentiality, integrity, and availability (CIA) of user information. The CIA security triad has been used as the foundation for much of this book.

For more details, see ISO 27001.

HIPAA / HITECH
The Health Insurance Portability and Accountability Act (HIPAA) is a United States congressional act that governs the collection, storage, use, and destruction of patient health records. The act states that Protected Health Information (PHI) must be rendered "unusable, unreadable, or indecipherable" to unauthorized persons, and that encryption for data "at rest" and "in flight" should be addressed.

HIPAA is not a certification; rather, it is a guide for protecting health care data. Similar to the PCI-DSS, the most important issue with both PCI and HIPAA is that a breach of credit card information or health data does not occur. In the instance of a breach, the cloud provider will be scrutinized for compliance with PCI and HIPAA controls. If proven compliant, the provider can expect to immediately implement remedial controls, take on breach notification responsibilities, and make significant expenditures on additional compliance activities. If found non-compliant, the cloud provider can expect on-site audit teams, fines, potential loss of merchant ID (PCI), and massive reputational impact.

Users or organizations that possess PHI must support HIPAA requirements and are HIPAA covered entities. If an entity intends to use a service, or in this case an OpenStack cloud that might use, store, or have access to that PHI, then a Business Associate Agreement (BAA) must be executed. The BAA is a contract between the HIPAA covered entity and the OpenStack service provider that requires the provider to handle that PHI in accordance with HIPAA requirements. If the service provider does not handle the PHI appropriately, for example through security controls and hardening, then they are subject to HIPAA fines and penalties.

OpenStack architects interpret and respond to HIPAA statements, with data encryption remaining a core practice. Currently this would require any protected health information contained within an OpenStack deployment to be encrypted with industry-standard encryption algorithms. Potential future OpenStack projects such as object encryption will facilitate HIPAA guideline compliance.

For more details, see the Health Insurance Portability And Accountability Act.

PCI-DSS
The Payment Card Industry Data Security Standard (PCI-DSS) is defined by the Payment Card Industry Standards Council and is intended to strengthen controls around card-holder data in order to reduce credit card fraud. Annual compliance validation is assessed either by an external Qualified Security Assessor (QSA), who creates a Report on Compliance (ROC), or by a Self-Assessment Questionnaire (SAQ), depending on the volume of card-holder transactions.

OpenStack deployments that store, process, or transmit payment card details are in scope for PCI-DSS. All OpenStack components that are not properly segmented from systems or networks that handle payment data fall under the guidelines of PCI-DSS. Segmentation in the context of PCI-DSS does not support multi-tenancy, but rather physical separation (host/network).

For more details, see the PCI security standards.

Government standards

FedRAMP

"The Federal Risk and Authorization Management Program (FedRAMP) is a government-wide program that provides a standardized approach to security assessment, authorization, and continuous monitoring for cloud products and services." NIST 800-53 is the basis for both FISMA and FedRAMP, the latter of which mandates security controls specifically selected to provide protection in cloud environments. FedRAMP can be extremely intensive, due to the specificity of the security controls and the volume of documentation required to meet government standards.

For more details, see FedRAMP.

ITAR

The International Traffic in Arms Regulations (ITAR) is a set of United States government regulations that control the export and import of defense-related articles and services on the United States Munitions List (USML) and related technical data. ITAR is often treated by cloud providers as an "operational alignment" rather than a formal certification. This typically involves implementing a segregated cloud environment following practices based on the NIST 800-53 framework, as per FISMA requirements, complemented with additional controls restricting access to "U.S. persons" only, together with background screening.

For more details, see the International Traffic in Arms Regulations (ITAR).

FISMA

The Federal Information Security Management Act requires that government agencies create a comprehensive plan to implement numerous government security standards, and was enacted within the E-Government Act of 2002.
FISMA outlines a process that utilizes multiple NIST publications to prepare an information system to store and process government data.

This process is broken down into three primary categories:

- System categorization: the information system receives a security category as defined in Federal Information Processing Standards Publication 199 (FIPS 199). These categories reflect the potential impact of a system compromise.
- Control selection: based upon the system security category defined in FIPS 199, an organization utilizes FIPS 200 to identify specific security control requirements for the information system. For example, if a system is categorized as "moderate", a requirement may be introduced to mandate "secure passwords".
- Control tailoring: once the system security controls are identified, an OpenStack architect utilizes NIST 800-53 to extract tailored control selections, for example specifying what constitutes a "secure password".

Privacy

Privacy is an increasingly important element of a compliance program. Businesses are being scrutinized more closely by their customers, who have a growing interest in understanding how their data is treated from a privacy perspective.

An OpenStack deployment will likely need to demonstrate compliance with an organization's privacy policy, with the U.S.-E.U. Safe Harbor framework, with the ISO/IEC 29100:2011 privacy framework, or with other privacy-specific guidelines. In the U.S., the American Institute of Certified Public Accountants (AICPA) has defined 10 privacy areas of focus; OpenStack deployments within a commercial environment may wish to attest to some or all of these principles.

To aid OpenStack architects in protecting personal data, we recommend that OpenStack architects review NIST publication 800-122, titled "Guide to Protecting the Confidentiality of Personally Identifiable Information (PII)". This guide steps through the process of protecting PII, which it defines as:
"...any information about an individual maintained by an agency, including (1) any information that can be used to distinguish or trace an individual's identity, such as name, social security number, date and place of birth, mother's maiden name, or biometric records; (2) any other information that is linked or linkable to an individual, such as medical, educational, financial, and employment information..."

Comprehensive privacy management requires significant preparation, thought, and investment. Additional complications are introduced when building global OpenStack clouds, for example navigating the differences between U.S. and the more restrictive E.U. privacy laws. In addition, extra care needs to be taken when dealing with sensitive PII, which may include information such as credit card numbers or medical records. This sensitive data is subject not only to privacy laws but also to regulatory and governmental regulations. By following established best practices, including those published by governments, a holistic privacy management policy may be created and practiced for OpenStack deployments.

Security review

The goal of security review in the OpenStack community is to identify weaknesses in design or implementation of OpenStack projects. While these weaknesses are rare, they could potentially have catastrophic effects on the security of an OpenStack deployment, so work should be undertaken to minimize the likelihood of these defects in released projects. Through the security review process, the following should be understood and documented:

- All entry points into the system
- Assets at risk
- Where data is persisted
- How data travels between components of the system
- Data formats and transformations
- External dependencies of the project
- An agreed set of findings and/or defects
- How the project interacts with external dependencies

A common reason to perform a security review on an OpenStack deliverable repository is to assist with vulnerability management team (VMT) oversight. The OpenStack VMT lists the repositories it oversees; for these, the receipt of vulnerability reports and their disclosure are managed by the VMT. While not a strict requirement, some form of security review, audit, or threat analysis makes it significantly easier for everyone to pinpoint the areas of a system that are more prone to vulnerabilities and to address them before they become problems for users.
The OpenStack VMT recommends an architectural review of a project's recommended deployment as an appropriate form of security review, balancing the need for review against the resource requirements for a project of the scale of OpenStack. Security architecture reviews are also often referred to as threat analysis, security analysis, or threat modeling. In the context of OpenStack security review, these terms are synonymous with an architectural security review, which can identify defects in the design of a project or reference architecture and may lead to further investigative work to verify parts of the implementation.

Security review is expected to be the normal route for new projects, and for cases where third parties have not performed security reviews or are unable to share their results. Information for projects that require a security review will be provided in the forthcoming security review process.

Where a security review has already been performed by a third party, or where a project prefers to use a third party to perform its review, information on how to obtain the output of that third-party review and submit it for validation will be provided in the forthcoming third-party security review process.

In either case, the requirements for documentation artifacts are similar: the project must provide an architecture diagram for a best-practice deployment. Although strongly recommended as part of every team's development cycle, vulnerability scanning and static analysis scanning are not sufficient evidence for a third-party review.

The architecture page guidance covers:

- Title, version information, contact information
- Project description and purpose
- Primary users and use cases
- External dependencies and associated security assumptions
- Components
- Service architecture diagram
- Data assets
- Data asset impact analysis
- Interfaces
- Resources

Architecture page guidance

The purpose of the architecture page is to document the architecture, purpose, and security controls of a service or project. It should document the best-practice deployment of that project.

The architecture page has a number of key sections, explained in more detail below:

- Title, version information, contact information
- Project description and purpose
- Primary users and use cases
- External dependencies and associated security assumptions
- Components
- Architecture diagram
- Data assets
- Data asset impact analysis
- Interfaces
Title, version information, contact information

This section gives the architecture page a title, provides its review state (draft, ready for review, reviewed), and captures the release and version of the project where relevant. It also records the project's PTL, the project architect responsible for producing the architecture page and diagrams and for completing the review (this may or may not be the PTL), and the security reviewers.

Project description and purpose

This section contains a brief description of the project to introduce the service to a third party. It should be one or two paragraphs long and can be cut and pasted from a wiki or other documentation. Include links to relevant presentation decks and further documentation if available.

For example: "Anchor is a public key infrastructure (PKI) service that uses automated certificate request validation to make issuing decisions automatically. Certificates are issued for short periods (typically 12-48 hours) to avoid the flawed revocation issues associated with CRLs and OCSP."

Primary users and use cases

A list of the expected primary users of the implemented architecture and their use cases. A "user" can be an actor within OpenStack or another service.

For example:

- End users will use the system to store sensitive data such as passwords, encryption keys, and so on.
- Cloud administrators will use the administrative API to manage resource quotas.

External dependencies and associated security assumptions

External dependencies are items outside of the project's control that are required for the service to operate, and that could affect the service if they were compromised or became unavailable. These items are typically outside the developers' control but within the deployers' control, or they may be operated by third parties. Appliances should be treated as external dependencies.

For example:

- The Nova compute service relies on an external authentication and authorization service. In a typical deployment this dependency is fulfilled by the keystone service.
- Barbican relies on the use of a Hardware Security Module (HSM) appliance.
\u5df2\u90e8\u7f72\u9879\u76ee\u7684\u7ec4\u4ef6\u5217\u8868\uff0c\u4e0d\u5305\u62ec\u5916\u90e8\u5b9e\u4f53\u3002\u6bcf\u4e2a\u7ec4\u4ef6\u90fd\u5e94\u547d\u540d\u5e76\u7b80\u8981\u63cf\u8ff0\u5176\u7528\u9014\uff0c\u5e76\u4f7f\u7528\u4f7f\u7528\u7684\u4e3b\u8981\u6280\u672f\uff08\u4f8b\u5982 Python\u3001MySQL\u3001RabbitMQ\uff09\u8fdb\u884c\u6807\u8bb0\u3002 \u4f8b\u5982\uff1a keystone \u76d1\u542c\u5668\u8fdb\u7a0b \uff08Python\uff09\uff1a\u4f7f\u7528 keystone \u670d\u52a1\u53d1\u5e03\u7684 keystone \u4e8b\u4ef6\u7684 Python \u8fdb\u7a0b\u3002 \u6570\u636e\u5e93 \uff08MySQL\uff09\uff1aMySQL \u6570\u636e\u5e93\uff0c\u7528\u4e8e\u5b58\u50a8\u4e0e\u5176\u6258\u7ba1\u5b9e\u4f53\u53ca\u5176\u5143\u6570\u636e\u76f8\u5173\u7684\u5df4\u6bd4\u80af\u72b6\u6001\u6570\u636e\u3002 \u670d\u52a1\u67b6\u6784\u56fe \u00b6 \u67b6\u6784\u56fe\u663e\u793a\u4e86\u7cfb\u7edf\u7684\u903b\u8f91\u5e03\u5c40\uff0c\u4ee5\u4fbf\u5b89\u5168\u5ba1\u9605\u8005\u53ef\u4ee5\u4e0e\u9879\u76ee\u56e2\u961f\u4e00\u8d77\u9010\u6b65\u5b8c\u6210\u67b6\u6784\u3002\u5b83\u662f\u4e00\u4e2a\u903b\u8f91\u56fe\uff0c\u663e\u793a\u7ec4\u4ef6\u5982\u4f55\u4ea4\u4e92\u3001\u5b83\u4eec\u5982\u4f55\u8fde\u63a5\u5230\u5916\u90e8\u5b9e\u4f53\u4ee5\u53ca\u901a\u4fe1\u8de8\u8d8a\u4fe1\u4efb\u8fb9\u754c\u7684\u4f4d\u7f6e\u3002\u6709\u5173\u67b6\u6784\u56fe\u7684\u66f4\u591a\u4fe1\u606f\uff0c\u5305\u62ec\u7b26\u53f7\u952e\uff0c\u5c06\u5728\u5373\u5c06\u53d1\u5e03\u7684\u67b6\u6784\u56fe\u6307\u5357\u4e2d\u7ed9\u51fa\u3002\u53ef\u4ee5\u5728\u4efb\u4f55\u53ef\u4ee5\u751f\u6210\u4f7f\u7528\u952e\u4e2d\u7b26\u53f7\u7684\u56fe\u8868\u7684\u5de5\u5177\u4e2d\u7ed8\u5236\u56fe\u8868\uff0c\u4f46\u5f3a\u70c8\u5efa\u8bae draw.io\u3002 \u6b64\u793a\u4f8b\u663e\u793a\u4e86 barbican \u67b6\u6784\u56fe\uff1a \u6570\u636e\u8d44\u4ea7 \u00b6 \u6570\u636e\u8d44\u4ea7\u662f\u653b\u51fb\u8005\u53ef\u80fd\u9488\u5bf9\u7684\u7528\u6237\u6570\u636e\u3001\u9ad8\u4ef7\u503c\u6570\u636e\u3001\u914d\u7f6e\u9879\u3001\u6388\u6743\u4ee4\u724c\u6216\u5176\u4ed6\u9879\u3002\u6570\u636e\u9879\u96c6\u56e0\u9879\u76ee\u800c\u5f02\uff0c\u4f46\u4e00\u822c\u800c\u8a00\uff0c\u5e94\u5c06\u5176\u89c6\u4e3a\u5bf9\u9879\u76ee\u9884\u671f\u64cd\u4f5c\u81f3\u5173\u91cd\u8981\u7684\u7c7b\u522b\u3002\u6240\u9700\u7684\u8be6\u7ec6\u7a0b\u5ea6\u5728\u67d0\u79cd\u7a0b\u5ea6\u4e0a\u53d6\u51b3\u4e8e\u4e0a\u4e0b\u6587\u3002\u6570\u636e\u901a\u5e38\u53ef\u4ee5\u5206\u7ec4\uff0c\u4f8b\u5982\u201c\u7528\u6237\u6570\u636e\u201d\u3001\u201c\u673a\u5bc6\u6570\u636e\u201d\u6216\u201c\u914d\u7f6e\u6587\u4ef6\u201d\uff0c\u4f46\u4e5f\u53ef\u4ee5\u662f\u5355\u6570\uff0c\u4f8b\u5982\u201c\u7ba1\u7406\u5458\u8eab\u4efd\u4ee4\u724c\u201d\u6216\u201c\u7528\u6237\u8eab\u4efd\u4ee4\u724c\u201d\u6216\u201c\u6570\u636e\u5e93\u914d\u7f6e\u6587\u4ef6\u201d\u3002 \u6570\u636e\u8d44\u4ea7\u5e94\u5305\u62ec\u8be5\u8d44\u4ea7\u6301\u4e45\u5316\u4f4d\u7f6e\u7684\u58f0\u660e\u3002 \u4f8b\u5982\uff1a \u673a\u5bc6\u6570\u636e - \u5bc6\u7801\u3001\u52a0\u5bc6\u5bc6\u94a5\u3001RSA \u5bc6\u94a5 - \u4fdd\u7559\u5728\u6570\u636e\u5e93 [PKCS#11] \u6216 HSM [KMIP] \u6216 [KMIP\u3001Dogtag] \u4e2d RBAC \u89c4\u5219\u96c6 - \u4fdd\u7559\u5728 policy.json \u4e2d RabbitMQ \u51ed\u8bc1 - \u4fdd\u7559\u5728 barbican.conf \u4e2d keystone \u4e8b\u4ef6\u961f\u5217\u51ed\u636e - \u4fdd\u7559\u5728 barbican.conf \u4e2d \u4e2d\u95f4\u4ef6\u914d\u7f6e - \u4fdd\u7559\u5728\u7c98\u8d34 .ini \u4e2d \u6570\u636e\u8d44\u4ea7\u5f71\u54cd\u5206\u6790 \u00b6 
### Data asset impact analysis

The data asset impact analysis breaks down the impact of a loss of confidentiality, integrity, or availability for each data asset. The project architects should attempt this work, since they know their project in the most detail, but the OpenStack Security Project (OSSP) will work through it with the project during the security review and may add or update impact details.

For example:

RabbitMQ credentials:

- Integrity failure impact: barbican and the workers can no longer access the queue. Denial of service.
- Confidentiality failure impact: an attacker could add new tasks to the queue, which would be executed by the workers. The attacker could potentially exhaust user quotas. Denial of service. Users would be unable to create genuine secrets.
- Availability failure impact: without access to the queue, barbican can no longer create new secrets.

Keystone credentials:

- Integrity failure impact: barbican would be unable to validate user credentials and would fail. Denial of service.
- Confidentiality failure impact: a malicious user could abuse other OpenStack services (depending on keystone role configuration), but barbican would be unaffected. If the service account used for token validation also has barbican admin privileges, a malicious user could manipulate barbican admin functions.
- Availability failure impact: barbican would be unable to validate user credentials and would fail. Denial of service.

### Interfaces

The interface list captures the interfaces that are in scope for the review. This includes connections on the architecture diagram between modules that cross a trust boundary or that do not use an industry-standard encrypted protocol such as TLS or SSH. For each interface, the following will be captured:

- the protocol used
- any data assets that transit the interface
- information about the authentication used for connections to the interface
- a brief description of the purpose of the interface

The record format is as follows:

From > to [transport]:
- Assets in flight
- Authentication?
- Description

For example:

Client > API process [TLS]:
- Assets in flight: user keystone credentials, plaintext secrets, HTTP verbs, secret IDs, paths
- Access to keystone credentials or plaintext secrets is considered a complete security failure of the system - this interface must have strong confidentiality and integrity controls.
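A further, purely hypothetical record in the same format may help illustrate how an internal interface could be captured (the component names and assets below are invented for illustration and are not taken from the barbican example above):

```none
API process > database [TLS]:
- Assets in flight: database credentials, secret metadata, SQL queries
- Authentication: database username and password read from the service configuration file
- Description: the API process stores and retrieves secret metadata in the backing database.
```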
### Resources

List resources related to the project, for example wiki pages describing its deployment and usage, and links to the code repository and relevant presentations.

## Security checklists

- Identity service checklist
- Dashboard checklist
- Compute service checklist
- Block Storage service checklist
- Shared File Systems service checklist
- Networking service checklist

## Appendix

- Community support
- Glossary

## Community support

The following resources can help you run and use OpenStack. The OpenStack community constantly improves and adds to the main features of OpenStack, but if you have any questions, do not hesitate to ask. Use the following resources to get OpenStack support and to troubleshoot your installation.

### Documentation

For the available OpenStack documentation, see docs.openstack.org.

The following guides explain how to install a proof-of-concept OpenStack cloud and its associated components:

- Rocky Installation Guides

The following books explain how to configure and run an OpenStack cloud:

- Architecture Design Guide
- Rocky Administrator Guides
- Rocky Configuration Guides
- Rocky Networking Guide
- High Availability Guide
- Security Guide
- Virtual Machine Image Guide

The following book explains how to use the command-line clients:

- Rocky API Bindings

The following documentation provides reference and guidance information for the OpenStack APIs:

- API Documentation

The following guide provides information on how to contribute to OpenStack documentation:

- Documentation Contributor Guide

### OpenStack wiki

The OpenStack wiki contains a broad range of topics, but some of the information can be difficult to find or is a few pages deep. Fortunately, the wiki search feature enables you to search by title or content. If you search for specific information, such as networking or OpenStack Compute, you can find a large amount of relevant material. More is being added all the time, so be sure to check back often. You can find the search box in the upper right corner of any OpenStack wiki page.

### Launchpad bugs area
The OpenStack community values your setup and testing efforts and wants your feedback. To log a bug, you must sign up for a Launchpad account. You can view existing bugs and report bugs in the Launchpad bugs area. Use the search feature to determine whether the bug has already been reported or already been fixed. If it still seems like your bug is unreported, fill out a bug report.

Some tips:

- Give a clear, concise summary.
- Provide as much detail as possible in the description. Paste in your command output or stack traces, links to screenshots, and any other information that might be useful.
- Be sure to include the software and package versions that you are using, especially if you are using a development branch, such as "Kilo release" vs git commit bc79c3ecc55929bac585d04a03475b72e06a3208.
- Any deployment-specific information is helpful, such as whether you are using Ubuntu 14.04 or are performing a multi-node installation.

The following Launchpad bugs areas are available:

- Bugs: OpenStack Block Storage (cinder)
- Bugs: OpenStack Compute (nova)
- Bugs: OpenStack Dashboard (horizon)
- Bugs: OpenStack Identity (keystone)
- Bugs: OpenStack Image service (glance)
- Bugs: OpenStack Networking (neutron)
- Bugs: OpenStack Object Storage (swift)
- Bugs: Application catalog (murano)
- Bugs: Bare metal service (ironic)
- Bugs: Clustering service (senlin)
- Bugs: Container Infrastructure Management service (magnum)
- Bugs: Data processing service (sahara)
- Bugs: Database service (trove)
- Bugs: DNS service (designate)
- Bugs: Key Manager service (barbican)
- Bugs: Monitoring (monasca)
- Bugs: Orchestration (heat)
- Bugs: Rating (cloudkitty)
- Bugs: Shared file systems (manila)
- Bugs: Telemetry (ceilometer)
- Bugs: Telemetry v3 (gnocchi)
- Bugs: Workflow service (mistral)
- Bugs: Messaging service (zaqar)
- Bugs: Container service (zun)
- Bugs: OpenStack API Documentation (developer.openstack.org)
- Bugs: OpenStack Documentation (docs.openstack.org)

### Documentation feedback

To provide feedback on documentation, join our IRC channel #openstack-doc on the OFTC IRC network, or report a bug in Launchpad and choose the particular project that the documentation belongs to.
### OpenStack IRC channel

The OpenStack community lives in the #openstack IRC channel on the OFTC network. You can hang out there, ask questions, get immediate feedback, and resolve urgent issues. To install an IRC client or use a browser-based client, go to https://webchat.oftc.net/. You can also use Colloquy (Mac OS X), mIRC (Windows), or XChat (Linux). When you are in the IRC channel and want to share code or command output, the generally accepted method is to use a paste bin; the OpenStack project has a Paste site for this. Just paste your longer amounts of text or logs into the web form and you get a URL that you can paste into the channel. The OpenStack IRC channel is #openstack on irc.oftc.net. You can find a list of all OpenStack IRC channels on the IRC page of the wiki.

### OpenStack mailing lists

A great way to get answers and insights is to post your question or problematic scenario to the OpenStack mailing list. You can learn from and help others who might have similar issues. To subscribe or view the archives, go to the general OpenStack mailing list. If you are interested in the other mailing lists for specific projects or development, refer to Mailing Lists.

### OpenStack distribution packages

The following Linux distributions provide community-supported packages for OpenStack:

- CentOS, Fedora, and Red Hat Enterprise Linux: https://www.rdoproject.org/
- openSUSE and SUSE Linux Enterprise Server: https://en.opensuse.org/Portal:OpenStack
- Ubuntu: https://wiki.ubuntu.com/OpenStack/CloudArchive

## Glossary

This glossary offers a list of terms and definitions for OpenStack-related concepts.

To add to the OpenStack glossary, clone the openstack/openstack-manuals repository and update the source file doc/common/glossary.rst through the OpenStack contribution process.

### 0-9

- **2023.1 Antelope**: The code name for the twenty-seventh release of OpenStack. This is the first release after the new release identification process based on "year.release count within the year" was adopted. The antelope is an agile and friendly animal, and also a type of steam locomotive.
- **2023.2 Bobcat**: The code name for the twenty-eighth release of OpenStack.
- **2024.1 Caracal**: The code name for the twenty-ninth release of OpenStack.
- **6to4**: A mechanism that allows IPv6 packets to be transmitted over an IPv4 network, providing a strategy for migrating to IPv6.
### A

- **absolute limit**: Impassable limits for guest VMs. Settings include total RAM size, maximum number of vCPUs, and maximum disk size.
- **access control list (ACL)**: A list of permissions attached to an object. An ACL specifies which users or system processes have access to the object and defines which operations can be performed on it. Each entry in a typical ACL specifies a subject and an operation; for instance, the ACL entry (Alice, delete) for a file gives Alice permission to delete the file.
- **access key**: Alternative term for an Amazon EC2 access key. See EC2 access key.
- **account**: The Object Storage context of an account. Do not confuse with a user account from an authentication service, such as Active Directory, /etc/passwd, OpenLDAP, OpenStack Identity, and so on.
- **account auditor**: Checks for missing replicas and incorrect or corrupted objects in a specified Object Storage account by running queries against the back-end SQLite database.
- **account database**: An SQLite database that contains Object Storage accounts and related metadata and that the account server accesses.
- **account reaper**: An Object Storage worker that scans for and deletes account databases that the account server has marked for deletion.
- **account server**: Lists containers in Object Storage and stores container information in the account database.
- **account service**: An Object Storage component that provides account services such as list, create, modify, and audit. Do not confuse with the OpenStack Identity service, OpenLDAP, or similar user-account services.
- **accounting**: The Compute service provides accounting information through the event notification and system usage data facilities.
- **Active Directory**: Authentication and identity service by Microsoft, based on LDAP. Supported in OpenStack.
- **active/active configuration**: In a high-availability setup with an active/active configuration, several systems share the load together and, if one fails, the load is distributed to the remaining systems.
- **active/passive configuration**: In a high-availability setup with an active/passive configuration, systems are set up to bring additional resources online to replace those that have failed.
- **address pool**: A group of fixed and/or floating IP addresses that are assigned to a project and can be used by or assigned to the VM instances in the project.
- **Address Resolution Protocol (ARP)**: The protocol by which layer-3 IP addresses are resolved into layer-2 link-local addresses.
- **admin API**: A subset of API calls that are accessible to authorized administrators and are generally not accessible to end users or the public Internet. They can exist as a separate service (keystone) or can be a subset of another API (nova).
- **admin server**: In the context of the Identity service, the worker process that provides access to the admin API.
- **administrator**: The person responsible for installing, configuring, and managing an OpenStack cloud.
- **Advanced Message Queuing Protocol (AMQP)**: The open standard messaging protocol used by OpenStack components for intra-service communications, provided by RabbitMQ, Qpid, or ZeroMQ.
- **Advanced RISC Machine (ARM)**: Low-power CPU often found in mobile and embedded devices. Supported by OpenStack.
- **alert**: The Compute service can send alerts through its notification system, which includes a facility for creating custom notification drivers. Alerts can be sent to and displayed on the dashboard.
- **allocate**: The process of taking a floating IP address from the address pool so that it can be associated with a fixed IP on a guest VM instance.
- **Amazon Kernel Image (AKI)**: Both a VM container format and a disk format. Supported by the Image service.
- **Amazon Machine Image (AMI)**: Both a VM container format and a disk format. Supported by the Image service.
- **Amazon Ramdisk Image (ARI)**: Both a VM container format and a disk format. Supported by the Image service.
- **Anvil**: A project that ported the shell-script-based project named DevStack to Python.
- **aodh**: Part of the OpenStack Telemetry service; provides alarming functionality.
- **Apache**: The Apache Software Foundation supports the Apache community of open-source software projects. These projects provide software products for the public good.
- **Apache License 2.0**: All OpenStack core projects are provided under the terms of the Apache License 2.0 license.
- **Apache Web Server**: The most commonly used web server software currently on the Internet.
- **API endpoint**: The daemon, worker, or service that a client communicates with to access an API. API endpoints can provide any number of services, such as authentication, sales data, performance meters, Compute VM commands, census data, and so on.
- **API extension**: Custom modules that extend some OpenStack core APIs.
- **API extension plug-in**: Alternative term for a Networking plug-in or Networking API extension.
- **API key**: Alternative term for an API token.
- **API server**: Any node running a daemon or worker that provides an API endpoint.
- **API token**: Passed to API requests and used by OpenStack to verify that the client is authorized to run the requested operation.
- **API version**: In OpenStack, the API version of a project is part of the URL. For example, example.com/nova/v1/foobar.
- **applet**: A Java program that can be embedded into a web page.
- **Application Catalog service (murano)**: The project that provides an application catalog service so that users can compose and deploy composite environments on an application abstraction level while managing the application lifecycle.
- **application programming interface (API)**: A collection of specifications used to access a service, application, or program. Includes service calls, the required parameters for each call, and the expected return values.
- **application server**: A piece of software that makes another piece of software available over a network.
- **Application Service Provider (ASP)**: A company that rents out specialized applications that help businesses and organizations provide additional services at lower cost.
- **arptables**: Tool used for maintaining Address Resolution Protocol packet filter rules in the Linux kernel firewall modules. Used along with iptables, ebtables, and ip6tables in Compute to provide firewall services for VMs.
- **associate**: The process of associating a Compute floating IP address with a fixed IP address.
- **Asynchronous JavaScript and XML (AJAX)**: A group of interrelated web development techniques used on the client side to create asynchronous web applications. Used extensively in horizon.
- **ATA over Ethernet (AoE)**: A disk storage protocol tunneled within Ethernet.
- **attach**: The process of connecting a VIF or vNIC to an L2 network in Networking. In the context of Compute, this process connects a storage volume to an instance.
- **attachment (network)**: Association of an interface ID to a logical port. Plugs an interface into a port.
- **auditing**: Provided in Compute through the system usage data facility.
- **auditor**: A worker process that verifies the integrity of Object Storage objects, containers, and accounts. Auditors is the collective term for the Object Storage account auditor, container auditor, and object auditor.
- **Austin**: The code name for the initial release of OpenStack. The first design summit took place in Austin, Texas, US.
- **auth node**: Alternative term for an Object Storage authorization node.
- **authentication**: The process that confirms that the user, process, or client really is who they say they are through a private key, secret token, password, fingerprint, or similar method.
- **authentication token**: A string of text provided to the client after authentication. Must be provided by the user or process in subsequent requests to the API endpoint.
- **AuthN**: The Identity service component that provides authentication services.
- **authorization**: The act of verifying that a user, process, or client is authorized to perform an action.
- **authorization node**: An Object Storage node that provides authorization services.
- **AuthZ**: The Identity component that provides high-level authorization services.
- **auto ACK**: A configuration setting in RabbitMQ that enables or disables message acknowledgment. Enabled by default.
- **auto declare**: A Compute RabbitMQ setting that determines whether a message exchange is automatically created when the program starts.
- **availability zone**: An Amazon EC2 concept of an isolated area that is used for fault tolerance. Do not confuse with an OpenStack Compute zone or cell.
- **AWS CloudFormation template**: AWS CloudFormation allows Amazon Web Services (AWS) users to create and manage a collection of related resources. The Orchestration service supports a CloudFormation-compatible format (CFN).

### B

- **back end**: Interactions and processes that are obfuscated from the user, such as Compute volume mount, data transmission to an iSCSI target by a daemon, or Object Storage object integrity checks.
- **back-end catalog**: The storage method used by the Identity service catalog service to store and retrieve information about API endpoints that are available to the client. Examples include an SQL database, an LDAP database, or a KVS back end.
- **back-end store**: The persistent data store used to save and retrieve information for a service, such as lists of Object Storage objects, the current state of guest VMs, lists of user names, and so on. Also, the method that the Image service uses to get and store VM images. Options include Object Storage, locally mounted file systems, RADOS block devices, VMware datastores, and HTTP.
- **Backup, Restore, and Disaster Recovery service (freezer)**: The project that provides integrated tooling for backing up, restoring, and recovering file systems, instances, or database backups.
- **bandwidth**: The amount of available data used by communication resources such as the Internet. Represents the amount of data that is used to download things or the amount of data available to download.
- **barbican**: Code name of the Key Manager service.
- **bare**: An Image service container format that indicates that no container exists for the VM image.
- **Bare Metal service (ironic)**: The OpenStack service that provides a service and associated libraries capable of managing and provisioning physical machines in a security-aware and fault-tolerant manner.
- **base image**: An OpenStack-provided image.
- **Bell-LaPadula model**: A security model that focuses on data confidentiality and controlled access to classified information. This model divides entities into subjects and objects. The clearance of a subject is compared to the classification of the object to determine whether the subject is authorized for a specific access mode. The clearance or classification scheme is expressed in terms of a lattice.
- **Benchmark service (rally)**: The OpenStack project that provides a framework for performance analysis and benchmarking of individual OpenStack components as well as full production OpenStack cloud deployments.
- **Bexar**: A grouped release of projects related to OpenStack that came out in February of 2011. It included only Compute (nova) and Object Storage (swift). Bexar is the code name for the second release of OpenStack. The design summit took place in San Antonio, Texas, US, which is the county seat of Bexar county.
- **binary**: Information that consists solely of ones and zeroes, which is the language of computers.
- **bit**: A bit is a single-digit number in base 2 (either a zero or a one). Bandwidth usage is measured in bits per second.
- **bits per second (BPS)**: The universal measurement of how quickly data is transferred from place to place.
- **block device**: A device that moves data in the form of blocks. These device nodes interface the devices, such as hard disks, CD-ROM drives, flash drives, and other addressable regions of memory.
- **block migration**: A method of VM live migration used by KVM to evacuate instances from one host to another with very little downtime during a user-initiated switchover. Does not require shared storage. Supported by Compute.
- **Block Storage API**: An API on a separate endpoint for attaching, detaching, and creating block storage for compute VMs.
- **Block Storage service (cinder)**: The OpenStack service that implements services and libraries to provide on-demand, self-service access to Block Storage resources via abstraction and automation on top of other block storage devices.
- **BMC (Baseboard Management Controller)**: The intelligence in the IPMI architecture, which is a specialized micro-controller that is embedded on the motherboard of a computer and acts as a server. Manages the interface between system management software and platform hardware.
- **bootable disk image**: A type of VM image that exists as a single, bootable file.
- **Bootstrap Protocol (BOOTP)**: A network protocol used by a network client to obtain an IP address from a configuration server. Provided in Compute through the dnsmasq daemon when using either the FlatDHCP manager or the VLAN manager network manager.
- **Border Gateway Protocol (BGP)**: The Border Gateway Protocol is a dynamic routing protocol that connects autonomous systems. Considered the backbone of the Internet, this protocol connects disparate networks to form a larger network.
- **browser**: Any client software that enables a computer or device to access the Internet.
- **builder file**: Contains configuration information that Object Storage uses to reconfigure a ring or to re-create it from scratch after a serious failure.
- **bursting**: The practice of utilizing a secondary environment to elastically build instances on demand when the primary environment is resource constrained.
- **button class**: A group of related button types within horizon. Buttons to start, stop, and suspend VMs are in one class; buttons to associate and disassociate floating IP addresses are in another class, and so on.
- **byte**: The set of bits that make up a single character; there are usually 8 bits in a byte.

### C

- **cache pruner**: A program that keeps the Image service VM image cache at or below its configured maximum size.
- **Cactus**: An OpenStack grouped release of projects that came out in the spring of 2011. It included Compute (nova), Object Storage (swift), and the Image service (glance). Cactus is a city in Texas, US and is the code name for the third release of OpenStack. When OpenStack releases went from three to six months long, the code name of the release changed to match a geography nearest the previous summit.
- **call**: One of the RPC primitives used by the OpenStack message queue software. Sends a message and waits for a response.
- **capability**: Defines resources for a cell, including CPU, storage, and networking. Can apply to the specific services within a cell or to a whole cell.
- **capacity cache**: A Compute back-end database table that contains the current workload, amount of free RAM, and number of VMs running on each host. Used to determine on which host a VM starts.
- **capacity updater**: A notification driver that monitors VM instances and updates the capacity cache as needed.
- **cast**: One of the RPC primitives used by the OpenStack message queue software. Sends a message and does not wait for a response.
- **catalog**: A list of API endpoints that are available to a user after authentication with the Identity service.
- **catalog service**: An Identity service that lists API endpoints that are available to a user after authentication with the Identity service.
- **ceilometer**: Part of the OpenStack Telemetry service; gathers and stores metrics from other OpenStack services.
- **cell**: Provides logical partitioning of Compute resources in a child and parent relationship. Requests are passed from parent cells to child cells if the parent cannot provide the requested resource.
- **cell forwarding**: A Compute option that enables parent cells to pass resource requests to child cells if the parent cannot provide the requested resource.
- **cell manager**: The Compute component that contains a list of the current capabilities of each host within the cell and routes requests as appropriate.
- **CentOS**: A Linux distribution that is compatible with OpenStack.
- **Ceph**: Massively scalable distributed storage system that consists of an object store, block store, and POSIX-compatible distributed file system. Compatible with OpenStack.
- **CephFS**: The POSIX-compliant file system provided by Ceph.
- **certificate authority (CA)**: In cryptography, an entity that issues digital certificates. The digital certificate certifies the ownership of a public key by the named subject of the certificate. This enables others (relying parties) to rely upon signatures or assertions made by the private key that corresponds to the certified public key. In this model of trust relationships, a CA is a trusted third party for both the subject (owner) of the certificate and the party relying upon the certificate. CAs are characteristic of many public key infrastructure (PKI) schemes. In OpenStack, Compute provides a simple certificate authority for cloudpipe VPNs and VM image decryption.
- **Challenge-Handshake Authentication Protocol (CHAP)**: An iSCSI authentication method supported by Compute.
- **chance scheduler**: A scheduling method used by Compute that randomly chooses an available host from the pool.
- **changes since**: A Compute API parameter that allows downloading the changes to the requested item since your last request, instead of downloading a new, fresh set of data and comparing it against the old data.
- **Chef**: An operating system configuration management tool supporting OpenStack deployments.
- **child cell**: If a requested resource, such as CPU time, disk storage, or memory, is not available in the parent cell, the request is forwarded to its associated child cells. If the child cell can fulfill the request, it does. Otherwise, it attempts to pass the request to any of its children.
- **cinder**: Code name for the Block Storage service.
- **CirrOS**: A minimal Linux distribution that is designed for use as a test image on clouds such as OpenStack.
- **Cisco neutron plug-in**: A Networking plug-in for Cisco devices and technologies, including UCS and Nexus.
- **cloud architect**: A person who plans, designs, and oversees the creation of clouds.
- **Cloud Auditing Data Federation (CADF)**: Cloud Auditing Data Federation (CADF) is a specification for audit event data. CADF is supported by OpenStack Identity.
- **cloud computing**: A model that enables access to a shared pool of configurable computing resources, such as networks, servers, storage, applications, and services, that can be rapidly provisioned and released with minimal management effort or service provider interaction.
- **cloud computing infrastructure**: The hardware and software components, such as servers, storage, networking, and virtualization software, that are needed to support the computing requirements of a cloud computing model.
- **cloud computing platform software**: Provides various services through the Internet. These resources include tools and applications such as data storage, servers, databases, networking, and software. As long as an electronic device has access to the web, it can access the data and the software programs to run it.
- **cloud computing service architecture**: Cloud service architecture defines the overall cloud computing services and solutions that are implemented in and across the boundaries of an enterprise business network. It considers the core business requirements and matches them with a possible cloud solution.
- **cloud controller**: The collection of Compute components that represent the global state of the cloud; talks to services such as Identity authentication, Object Storage, and node/storage workers through a queue.
- **cloud controller node**: A node that runs network, volume, API, scheduler, and image services. Each service may be broken out into separate nodes for scalability or availability.
- **Cloud Data Management Interface (CDMI)**: SINA standard that defines a RESTful API for managing objects in the cloud. Currently unsupported in OpenStack.
- **Cloud Infrastructure Management Interface (CIMI)**: An in-progress specification for cloud management. Currently unsupported in OpenStack.
- **cloud technology**: Clouds are tools of virtual resources orchestrated by management and automation software. This includes raw processing power, memory, networking, and storage for cloud-based applications.
- **cloud-init**: A package commonly installed in VM images that performs initialization of an instance after boot using information that it retrieves from the metadata service, such as the SSH public key and user data.
- **cloudadmin**: One of the default roles in the Compute RBAC system. Grants complete system access.
- **Cloudbase-Init**: A Windows project providing guest initialization features, similar to cloud-init.
- **cloudpipe**: A Compute service that creates a VPN on a per-project basis.
- **cloudpipe image**: A pre-made VM image that serves as a cloudpipe server. Essentially, OpenVPN running on Linux.
- **Clustering service (senlin)**: The project that implements clustering services and libraries for the management of groups of homogeneous objects exposed by other OpenStack services.
- **command filter**: Lists the commands that the Compute rootwrap facility allows.
- **command-line interface (CLI)**: A text-based client that helps you create scripts to interact with OpenStack clouds.
- **Common Internet File System (CIFS)**: A file-sharing protocol. It is a public or open variation of the original Server Message Block (SMB) protocol developed and used by Microsoft. Like the SMB protocol, CIFS runs at a higher level and uses the TCP/IP protocol.
- **common libraries (oslo)**: The project that produces a set of Python libraries containing code shared by OpenStack projects. The APIs provided by these libraries should be high quality, stable, consistent, documented, and generally applicable.
- **community project**: A project that is not officially endorsed by the OpenStack Technical Committee. If the project is successful enough, it might be elevated to an incubated project and then to a core project, or it might be merged with the main code trunk.
- **compression**: Reducing the size of files by special encoding; the file can be decompressed again to its original content. OpenStack supports compression at the Linux file system level, but does not support compression for things such as Object Storage objects or Image service VM images.
- **Compute API (nova API)**: The nova-api daemon provides access to nova services. Can communicate with other APIs, such as the Amazon EC2 API.
- **compute controller**: The Compute component that chooses suitable hosts on which to start VM instances.
- **compute host**: A physical host dedicated to running compute nodes.
- **compute node**: A node that runs the nova-compute daemon, which manages VM instances that provide a wide range of services, such as web applications and analytics.
- **Compute service (nova)**: The OpenStack core project that implements services and associated libraries to provide massively scalable, on-demand, self-service access to compute resources, including bare metal, virtual machines, and containers.
- **compute worker**: The Compute component that runs on each compute node and manages the VM instance lifecycle, including run, reboot, terminate, attach/detach volumes, and so on. Provided by the nova-compute daemon.
- **concatenated object**: A set of segment objects that Object Storage combines and sends to the client.
- **conductor**: In Compute, conductor is the process that proxies database requests from the compute process. Using conductor improves security because compute nodes do not need direct access to the database.
- **congress**: Code name for the Governance service.
- **consistency window**: The amount of time it takes for a new Object Storage object to become accessible to all clients.
- **console log**: Contains the output from a Linux VM console in Compute.
- **container**: Organizes and stores objects in Object Storage. Similar in concept to a Linux directory, but it cannot be nested. Alternative term for an Image service container format.
- **container auditor**: Checks for missing replicas or incorrect objects in specified Object Storage containers through queries to the SQLite back-end database.
- **container database**: An SQLite database that stores Object Storage containers and container metadata. The container server accesses this database.
- **container format**: A wrapper used by the Image service that contains a VM image and its associated metadata, such as machine state, OS disk size, and so on.
- **Container Infrastructure Management service (magnum)**: The project that provides a set of services for provisioning, scaling, and managing container orchestration engines.
- **container server**: An Object Storage server that manages containers.
- **container service**: The Object Storage component that provides container services, such as create, delete, list, and so on.
- **content delivery network (CDN)**: A content delivery network is a specialized network that is used to distribute content to clients, typically located close to the clients for increased performance.
- **continuous delivery**: A software engineering approach in which teams produce software in short cycles, ensuring that the software can be reliably released at any time and, when releasing the software, doing so manually.
- **continuous deployment**: A software release process that uses automated testing to validate whether changes to a codebase are correct and stable enough for immediate autonomous deployment to a production environment.
- **continuous integration**: The practice of merging all developers' working copies to a shared mainline several times a day.
- **controller node**: Alternative term for a cloud controller node.
- **core API**: Depending on context, the core API is either the OpenStack API or the main API of a specific core project, such as Compute, Networking, Image service, and so on.
- **core service**: An official OpenStack service defined as core by the Interop Working Group. Currently, it consists of the Block Storage service (cinder), Compute service (nova), Identity service (keystone), Image service (glance), Networking service (neutron), and Object Storage service (swift).
- **cost**: Under the Compute distributed scheduler, this is calculated by looking at the capabilities of each host relative to the flavor of the VM instance being requested.
- **credentials**: Data that is only known to or accessible by a user and is used to verify that the user is who they say they are. Credentials are presented to the server during authentication. Examples include a password, secret key, digital certificate, and fingerprint.
- **CRL**: A Certificate Revocation List (CRL) in a PKI model is a list of certificates that have been revoked. End entities presenting those certificates should not be trusted.
- **cross-origin resource sharing (CORS)**: A mechanism that allows many resources (for example, fonts or JavaScript) on a web page to be requested from a domain other than the one the resource originated from. In particular, JavaScript AJAX calls can use the XMLHttpRequest mechanism.
- **Crowbar**: An open source community project by SUSE that aims to provide all necessary services to quickly deploy and manage clouds.
- **current workload**: An element of the Compute capacity cache, calculated from the number of build, snapshot, migrate, and resize operations currently in progress on a given host.
- **customer**: Alternative term for project.
- **customization module**: A user-created Python module that is loaded by horizon to change the look and feel of the dashboard.

### D

- **daemon**: A process that runs in the background and waits for requests. It may or may not listen on a TCP or UDP port. Not to be confused with a worker.
- **Dashboard (horizon)**: OpenStack project that provides an extensible, unified, web-based user interface for all OpenStack services.
- **data encryption**: Both the Image service and Compute support encrypted virtual machine (VM) images (but not instances). OpenStack supports in-transit data encryption with technologies such as HTTPS, SSL, TLS, and SSH. Object Storage does not support object encryption at the application level but may support storage that uses disk encryption.
- **Data loss prevention (DLP) software**: Software that protects sensitive information and prevents it from leaking outside a network boundary by detecting and denying data transfers.
- **Data Processing service (sahara)**: OpenStack project that provides a scalable data-processing stack and associated management interfaces.
- **data store**: A database engine supported by the Database service.
- **database ID**: A unique ID given to each replica of an Object Storage database.
- **database replicator**: An Object Storage component that copies changes in the account, container, and object databases to other nodes.
- **Database service (trove)**: An integrated project that provides scalable and reliable cloud database-as-a-service functionality for both relational and non-relational database engines.
- **deallocate**: The process of removing the association between a floating IP address and a fixed IP address; once removed, the floating IP returns to the address pool.
- **Debian**: A Linux distribution that is compatible with OpenStack.
- **deduplication**: The process of finding duplicate data at the disk block, file, and/or object level to minimize storage use; currently unsupported within OpenStack.
- **default panel**: The panel that is displayed when a user accesses the dashboard.
- **default project**: New users are assigned to this project if no project is specified when the user is created.
- **default token**: An Identity service token that is not associated with a specific project and is exchanged for a scoped token.
- **delayed delete**: An option within the Image service to delete an image after a predefined number of seconds instead of immediately.
- **delivery mode**: Setting for the Compute RabbitMQ message delivery mode; can be set to either transient or persistent.
- **denial of service (DoS)**: Short for denial-of-service attack, a malicious attempt to prevent legitimate users from using a service.
- **deprecated auth**: An option within Compute that enables administrators to create and manage users through the nova-manage command rather than the Identity service.
- **designate**: Code name for the DNS service.
- **Desktop-as-a-Service**: A platform that provides a suite of desktop environments that users can access from any location to receive a desktop experience. It can provide general-purpose, development, or even homogeneous testing environments.
- **developer**: One of the default roles in the Compute RBAC system and the default role assigned to a new user.
- **device ID**: Maps Object Storage partitions to physical storage devices.
- **device weight**: Distributes partitions proportionately across Object Storage devices based on each device's storage capacity.
- **DevStack**: Community project that uses shell scripts to quickly build complete OpenStack development environments.
- **DHCP agent**: OpenStack Networking agent that provides DHCP services for virtual networks.
- **Diablo**: A grouped release of OpenStack-related projects that came out in the fall of 2011, the fourth release of OpenStack. It included Compute (nova 2011.3), Object Storage (swift 1.4.3), and the Image service (glance). The design summit took place in the Bay Area near Santa Clara, California, US, and Diablo is a nearby city.
- **direct consumer**: An element of the Compute RabbitMQ that comes to life when an RPC call is executed. It connects to a direct exchange through a unique exclusive queue, sends the message, and terminates.
- **direct exchange**: A routing table created within the Compute RabbitMQ during RPC calls; one is created for each RPC call that is invoked.
- **direct publisher**: Element of RabbitMQ that provides a response to an incoming MQ message.
- **disassociate**: The process of removing the association between a floating IP address and a fixed IP, returning the floating IP address to the address pool.
- **discretionary access control (DAC)**: Governs the ability of subjects to access objects while enabling users to make policy decisions and assign security attributes. The traditional UNIX system of users, groups, and read-write-execute permissions is an example of DAC.
- **disk encryption**: The ability to encrypt data at the file-system, disk-partition, or whole-disk level. Supported within Compute VMs.
- **disk format**: The underlying format in which a VM disk image is stored in the Image service back-end store; for example, AMI, ISO, QCOW2, or VMDK.
- **dispersion**: In Object Storage, tools used to test and ensure dispersion of objects and containers for fault tolerance.
- **distributed virtual router (DVR)**: Mechanism for highly available multi-host routing when using OpenStack Networking (neutron).
- **Django**: A web framework used extensively in horizon.
- **DNS record**: A record that specifies information about a particular domain and belongs to that domain.
- **DNS service (designate)**: OpenStack project that provides scalable, on-demand, self-service access to authoritative DNS services in a technology-agnostic manner.
- **dnsmasq**: Daemon that provides DNS, DHCP, BOOTP, and TFTP services for virtual networks.
- **domain**: An Identity API v3 entity that represents a collection of projects, groups, and users and defines administrative boundaries for managing OpenStack Identity entities. On the Internet, a domain separates a website from other sites; domain names usually have two or more parts separated by dots, such as yahoo.com, usa.gov, harvard.edu, or mail.yahoo.com. A domain is also the entity or container of all DNS-related information containing one or more records.
- **Domain Name System (DNS)**: The system by which Internet domain name-to-address and address-to-name resolutions are determined. DNS helps navigate the Internet by translating IP addresses into names that are easier to remember; for example, translating 111.111.111.1 into www.yahoo.com. All domains and their components, such as mail servers, use DNS to resolve to the appropriate locations. DNS servers are usually set up in a master-slave relationship so that failure of the master invokes the slave, and they can also be clustered or replicated so that changes made to one server automatically propagate to the other active servers. In Compute, DNS entries can be associated with floating IP addresses, nodes, or cells so that hostnames remain consistent across reboots.
- **download**: The transfer of data, usually in the form of files, from one computer to another.
- **durable exchange**: A Compute RabbitMQ message exchange that remains active when the server restarts.
- **durable queue**: A Compute RabbitMQ message queue that remains active when the server restarts.
- **Dynamic Host Configuration Protocol (DHCP)**: A network protocol that configures devices connected to a network so that they can communicate on that network by using the Internet Protocol (IP). The protocol is implemented in a client-server model where DHCP clients request configuration data, such as an IP address, a default route, and one or more DNS server addresses, from a DHCP server. A method to automatically configure networking for a host at boot time, provided by both Networking and Compute.
- **Dynamic HyperText Markup Language (DHTML)**: Pages that use HTML, JavaScript, and Cascading Style Sheets to enable users to interact with a web page or to show simple animation.

### E

- **east-west traffic**: Network traffic between servers in the same cloud or data center. See also north-south traffic.
- **EBS boot volume**: An Amazon EBS storage volume that contains a bootable VM image; currently unsupported in OpenStack.
- **ebtables**: Filtering tool for a Linux bridging firewall, enabling filtering of network traffic passing through a Linux bridge. Used in Compute along with arptables, iptables, and ip6tables to ensure isolation of network communications.
- **EC2**: The Amazon commercial compute product, similar to Compute.
- **EC2 access key**: Used along with an EC2 secret key to access the Compute EC2 API.
- **EC2 API**: OpenStack supports accessing the Amazon EC2 API through Compute.
- **EC2 Compatibility API**: A Compute component that enables OpenStack to communicate with Amazon EC2.
- **EC2 secret key**: Used along with an EC2 access key when communicating with the Compute EC2 API; used to digitally sign each request.
- **edge computing**: Running fewer processes in the cloud and moving those processes to local locations instead.
- **Elastic Block Storage (EBS)**: The Amazon commercial block storage product.
- **encapsulation**: The practice of placing one packet type within another for the purpose of abstracting or protecting the data. Examples include GRE, MPLS, and IPsec.
- **encryption**: OpenStack supports encryption technologies such as HTTPS, SSH, SSL, TLS, digital certificates, and data encryption.
- **endpoint**: See API endpoint.
- **endpoint registry**: Alternative term for an Identity service catalog.
- **endpoint template**: A list of URL and port number endpoints that indicates where a service, such as Object Storage, Compute, or Identity, can be accessed.
- **enterprise cloud computing**: A computing environment residing behind a firewall that delivers software, infrastructure, and platform services to an enterprise.
- **entity**: Any piece of hardware or software that wants to connect to the network services provided by Networking, the network connectivity service. An entity can use Networking by implementing a VIF.
- **ephemeral image**: A VM image that does not save changes made to its volumes and reverts them to their original state after the instance is terminated.
- **ephemeral volume**: A volume that does not save the changes made to it and reverts to its original state when the current user relinquishes control.
- **Essex**: A grouped release of OpenStack-related projects that came out in April 2012, the fifth release of OpenStack. It included Compute (nova 2012.1), Object Storage (swift 1.4.8), Image (glance), Identity (keystone), and Dashboard (horizon). The design summit took place in Boston, Massachusetts, US, and Essex is a nearby city.
- **ESXi**: An OpenStack-supported hypervisor.
- **ETag**: MD5 hash of an object within Object Storage, used to ensure data integrity.
- **euca2ools**: A collection of command-line tools for managing VMs; most are compatible with OpenStack.
- **Eucalyptus Kernel Image (EKI)**: Used along with an ERI to create an EMI.
- **Eucalyptus Machine Image (EMI)**: VM image container format supported by the Image service.
- **Eucalyptus Ramdisk Image (ERI)**: Used along with an EKI to create an EMI.
- **evacuate**: The process of migrating one or all virtual machine (VM) instances from one host to another; compatible with both shared-storage live migration and block migration.
- **exchange**: Alternative term for a RabbitMQ message exchange.
- **exchange type**: A routing algorithm in the Compute RabbitMQ.
- **exclusive queue**: Connected to by a direct consumer in RabbitMQ within Compute; the message can be consumed only by the current connection.
- **extended attributes (xattr)**: File-system option that enables storage of additional information beyond owner, group, permissions, modification time, and so on. The underlying Object Storage file system must support extended attributes.
- **extension**: Alternative term for an API extension or plug-in. In the context of the Identity service, this is a call that is specific to the implementation, such as adding support for OpenID.
- **external network**: A network segment typically used for instance Internet access.
- **extra specs**: Specifies additional requirements when Compute determines where to start a new instance. Examples include a minimum amount of network bandwidth or a GPU.

### F

- **FakeLDAP**: An easy method to create a local LDAP directory for testing Identity and Compute. Requires Redis.
- **fan-out exchange**: Within RabbitMQ and Compute, the messaging interface that the scheduler service uses to receive capability messages from the compute, volume, and network nodes.
- **federated identity**: A method to establish trust between identity providers and the OpenStack cloud.
- **Fedora**: A Linux distribution compatible with OpenStack.
- **Fibre Channel**: Storage protocol similar in concept to TCP/IP; encapsulates SCSI commands and data.
- **Fibre Channel over Ethernet (FCoE)**: The Fibre Channel protocol tunneled within Ethernet.
- **fill-first scheduler**: The Compute scheduling method that attempts to fill a host with VMs rather than starting new VMs on a variety of hosts.
- **filter**: The step in the Compute scheduling process when hosts that cannot run the VM are eliminated and not chosen.
- **firewall**: Used to restrict communications between hosts and/or nodes; implemented in Compute using iptables, arptables, ip6tables, and ebtables.
- **FireWall-as-a-Service (FWaaS)**: A Networking extension that provides perimeter firewall capability.
- **fixed IP address**: An IP address that is associated with the same instance each time that instance boots; generally not accessible to end users or the public Internet, and used for management of the instance.
- **Flat Manager**: The Compute component that gives IP addresses to authorized nodes and assumes that DHCP, DNS, and routing configuration and services are provided by something else.
- **flat mode injection**: A Compute networking method where the OS network configuration information is injected into the VM image before the instance starts.
- **flat network**: Virtual network type that uses neither VLANs nor tunnels to segregate project traffic. Each flat network typically requires a separate underlying physical interface defined by bridge mappings; however, a flat network can contain multiple subnets.
- **FlatDHCP Manager**: The Compute component that provides dnsmasq (DHCP, DNS, BOOTP, TFTP) and radvd (routing) services.
- **flavor**: Alternative term for a VM instance type.
- **flavor ID**: UUID for each Compute or Image service VM flavor or instance type.
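To make the flavor and flavor ID entries above concrete, the sketch below lists flavors through openstacksdk, reusing a connection like the one shown earlier; the cloud name is a placeholder clouds.yaml entry, and the printed attributes follow the openstacksdk flavor resource.

```python
import openstack

# "mycloud" is a placeholder clouds.yaml entry, not something defined in this document.
conn = openstack.connect(cloud="mycloud")

# Each flavor describes the CPU, RAM, and disk parameters of an instance type.
for flavor in conn.compute.flavors():
    print(flavor.id, flavor.name, flavor.vcpus, flavor.ram, flavor.disk)
```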
- **floating IP address**: An IP address that a project can associate with a VM so that the instance has the same public IP address each time it boots. You create a pool of floating IP addresses and assign them to instances as they are launched to maintain a consistent IP address for DNS assignment.
- **Folsom**: A grouped release of OpenStack-related projects that came out in the fall of 2012, the sixth release of OpenStack. It includes Compute (nova), Object Storage (swift), Identity (keystone), Networking (neutron), Image service (glance), and Volumes or Block Storage (cinder). The design summit took place in San Francisco, California, US, and Folsom is a nearby city.
- **FormPost**: Object Storage middleware that uploads (posts) an image through a form on a web page.
- **freezer**: Code name for the Backup, Restore, and Disaster Recovery service.
- **front end**: The point where a user interacts with a service; can be an API endpoint, the dashboard, or a command-line tool.

### G

- **gateway**: An IP address, typically assigned to a router, that passes network traffic between different networks.
- **generic receive offload (GRO)**: Feature of certain network interface drivers that combines many smaller received packets into a large packet before delivery to the kernel IP stack.
- **generic routing encapsulation (GRE)**: Protocol that encapsulates a wide variety of network-layer protocols inside virtual point-to-point links.
- **glance**: Code name for the Image service.
- **glance API server**: Alternative name for the Image API.
- **glance registry**: Alternative term for the Image service image registry.
- **global endpoint template**: The Identity service endpoint template that contains services available to all projects.
- **GlusterFS**: A file system designed to aggregate NAS hosts; compatible with OpenStack.
- **gnocchi**: Part of the OpenStack Telemetry service; provides an indexer and time-series database.
- **golden image**: A method of operating system installation where a finalized disk image is created and then used by all nodes without modification.
- **Governance service (congress)**: The project that provides governance-as-a-service across any collection of cloud services in order to monitor, enforce, and audit policy over dynamic infrastructure.
- **Graphic Interchange Format (GIF)**: A type of image file that is commonly used for animated images on web pages.
- **Graphics Processing Unit (GPU)**: Choosing a host based on the presence of a GPU is currently unsupported in OpenStack.
- **Green Threads**: The cooperative threading model used by Python; reduces race conditions and only context switches when specific library calls are made. Each OpenStack service is its own thread.
- **Grizzly**: The code name for the seventh release of OpenStack. The design summit took place in San Diego, California, US, and Grizzly is an element of the state flag of California.
- **group**: An Identity v3 API entity that represents a collection of users owned by a specific domain.
- **guest OS**: An operating system instance running under the control of a hypervisor.

### H

- **Hadoop**: Apache Hadoop is an open source software framework that supports data-intensive distributed applications.
- **Hadoop Distributed File System (HDFS)**: A distributed, highly fault-tolerant file system designed to run on low-cost commodity hardware.
- **handover**: An object state in Object Storage where a new replica of the object is automatically created because of a drive failure.
- **HAProxy**: Provides a load balancer for TCP- and HTTP-based applications that spreads requests across multiple servers.
- **hard reboot**: A type of reboot where a physical or virtual power button is pressed, as opposed to a graceful, proper shutdown of the operating system.
- **Havana**: The code name for the eighth release of OpenStack. The design summit took place in Portland, Oregon, US, and Havana is an unincorporated community in Oregon.
- **health monitor**: Determines whether back-end members of a VIP pool can process a request. A pool can have several health monitors associated with it; when it does, all monitors check each member of the pool, and all monitors must declare a member healthy for it to stay active.
- **heat**: Code name for the Orchestration service.
- **Heat Orchestration Template (HOT)**: Heat input in the format native to OpenStack.
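As a small illustration of the Heat Orchestration Template (HOT) entry above, the following Python sketch builds and prints a minimal HOT document; the image, flavor, and network names are placeholders, "2018-08-31" is just one valid template version, and a real template would normally be written directly in YAML.

```python
import yaml  # PyYAML

# A minimal HOT skeleton booting one server; all property values are placeholders.
hot_template = {
    "heat_template_version": "2018-08-31",
    "description": "Boot a single Compute instance",
    "resources": {
        "my_server": {
            "type": "OS::Nova::Server",
            "properties": {
                "image": "cirros",
                "flavor": "m1.small",
                "networks": [{"network": "private"}],
            },
        }
    },
}

print(yaml.safe_dump(hot_template, sort_keys=False))
```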
- **high availability (HA)**: A high-availability system design approach and associated service implementation that ensures a prearranged level of operational performance will be met during a contractual measurement period. High-availability systems seek to minimize system downtime and data loss.
- **horizon**: Code name for the Dashboard.
- **horizon plug-in**: A plug-in for the OpenStack Dashboard (horizon).
- **host**: A physical computer, as opposed to a VM instance (node).
- **host aggregate**: A method to further subdivide availability zones into hypervisor pools, which are collections of common hosts.
- **Host Bus Adapter (HBA)**: A device plugged into a PCI slot, such as a Fibre Channel or network card.
- **hybrid cloud**: A composition of two or more clouds (private, community, or public) that remain distinct entities but are bound together, offering the benefits of multiple deployment models. Hybrid cloud can also mean the ability to connect colocation, managed, and/or dedicated services with cloud resources.
- **hybrid cloud computing**: A mix of on-premises, private cloud, and third-party public cloud services, with orchestration between the platforms.
- **Hyper-V**: One of the hypervisors supported by OpenStack.
- **hyperlink**: Any kind of text that contains a link to some other site, commonly found in documents where clicking on a word or words opens up a different website.
- **Hypertext Transfer Protocol (HTTP)**: An application protocol for distributed, collaborative, hypermedia information systems. It is the foundation of data communication for the World Wide Web. Hypertext is structured text that uses logical links (hyperlinks) between nodes containing text; HTTP is the protocol to exchange or transfer hypertext.
- **Hypertext Transfer Protocol Secure (HTTPS)**: An encrypted communications protocol for secure communication over a computer network, with especially wide deployment on the Internet. Technically, it is not a protocol in and of itself; rather, it is the result of layering the Hypertext Transfer Protocol (HTTP) on top of the TLS or SSL protocol, thus adding the security capabilities of TLS or SSL to standard HTTP communications. Most OpenStack API endpoints and much inter-component communication support HTTPS.
- **hypervisor**: Software that arbitrates and controls VM access to the actual underlying hardware.
- **hypervisor pool**: A collection of hypervisors grouped together through host aggregates.

### I

- **Icehouse**: The code name for the ninth release of OpenStack. The design summit took place in Hong Kong, and Ice House is a street in that city.
- **ID number**: Unique numeric ID associated with each user in Identity, conceptually similar to a Linux or LDAP UID.
- **Identity API**: Alternative term for the Identity service API.
- **Identity back end**: The source that the Identity service uses to retrieve user information; an OpenLDAP server, for example.
- **identity provider**: A directory service that allows users to log in with a user name and password. It is a typical source of authentication tokens.
- **Identity service (keystone)**: The project that facilitates API client authentication, service discovery, distributed multi-project authorization, and auditing. It provides a central directory of users mapped to the OpenStack services they can access, registers endpoints for OpenStack services, and acts as a common authentication system.
- **Identity service API**: The API used to access the OpenStack Identity service, provided through keystone.
- **IETF**: The Internet Engineering Task Force (IETF) is an open standards organization that develops Internet standards, particularly the standards pertaining to TCP/IP.
- **image**: A collection of files for a specific operating system (OS) that you use to create or rebuild a server. OpenStack provides pre-built images. You can also create custom images, or snapshots, from servers that you have launched; custom images can be used for data backups or as "gold" images for additional servers.
- **Image API**: The Image service API endpoint for management of VM images. It processes client requests for VMs, updates Image service metadata on the registry server, and communicates with the store adapter to upload VM images from the back-end store.
- **image cache**: Used by the Image service to obtain images on the local host rather than re-downloading them from the image server each time one is requested.
- **image ID**: Combination of a URI and UUID used to access Image service VM images through the image API.
- **image member**: A list of projects that can access a given VM image within the Image service.
- **image owner**: The project that owns an Image service virtual machine image.
- **image registry**: A list of VM images that are available through the Image service.
- **Image service (glance)**: The OpenStack service that provides services and associated libraries to store, browse, share, distribute, and manage bootable disk images, other data closely associated with initializing compute resources, and metadata definitions.
- **image status**: The current status of a VM image in the Image service, not to be confused with the status of a running instance.
- **image store**: The back-end store used by the Image service to store VM images; options include Object Storage, a locally mounted file system, RADOS block devices, VMware datastore, or HTTP.
- **image UUID**: UUID used by the Image service to uniquely identify each VM image.
- **incubated project**: A community project may be elevated to this status and then promoted to a core project.
- **Infrastructure Optimization service (watcher)**: OpenStack project that aims to provide a flexible and scalable resource optimization service for multi-project OpenStack-based clouds.
- **Infrastructure-as-a-Service (IaaS)**: A provisioning model in which an organization outsources physical components of a data center, such as storage, hardware, servers, and networking components. The service provider owns the equipment and is responsible for housing, operating, and maintaining it, and the client typically pays on a per-use basis. IaaS is a model for providing cloud services.
- **ingress filtering**: The process of filtering incoming network traffic. Supported by Compute.
- **INI format**: OpenStack configuration files use an INI format to describe options and their values; it consists of sections and key-value pairs.
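As a quick illustration of the INI format entry above, the sketch below parses a nova.conf-style fragment with Python's standard configparser module; the option values are placeholders, and real OpenStack services read their configuration through oslo.config rather than configparser.

```python
import configparser

# A nova.conf-style fragment; the credentials and host names are placeholders.
sample = """
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller

[api_database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api
"""

cfg = configparser.ConfigParser()
cfg.read_string(sample)

# Sections map to dictionaries of key-value pairs.
print(cfg["DEFAULT"]["transport_url"])
print(cfg["api_database"]["connection"])
```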
- **injection**: The process of putting a file into a virtual machine image before the instance is started.
- **Input/Output Operations Per Second (IOPS)**: A common performance measurement used to benchmark computer storage devices such as hard disk drives, solid-state drives, and storage area networks.
- **instance**: A running VM, or a VM in a known state such as suspended, that can be used like a hardware server.
- **instance ID**: Alternative term for instance UUID.
- **instance state**: The current state of a guest VM image.
- **instance tunnels network**: A network segment used for instance traffic tunnels between compute nodes and the network node.
- **instance type**: Describes the parameters of the various virtual machine images that are available to users, including parameters such as CPU, storage, and memory. Alternative term for flavor.
- **instance type ID**: Alternative term for a flavor ID.
- **instance UUID**: Unique ID assigned to each guest VM instance.
- **Intelligent Platform Management Interface (IPMI)**: A standardized computer system interface used by system administrators for out-of-band management of computer systems and monitoring of their operation. In layman's terms, it is a way to manage a computer using a direct network connection, whether it is turned on or not, connecting to the hardware rather than an operating system or login shell.
- **interface**: A physical or virtual device that provides connectivity to another device or medium.
- **interface ID**: Unique ID for a Networking VIF or vNIC in the form of a UUID.
- **Internet Control Message Protocol (ICMP)**: A network protocol used by network devices for control messages; for example, ping uses ICMP to test connectivity.
- **Internet Protocol (IP)**: Principal communications protocol in the Internet protocol suite for relaying datagrams across network boundaries.
- **Internet Service Provider (ISP)**: Any business that provides Internet access to individuals or businesses.
- **Internet Small Computer System Interface (iSCSI)**: Storage protocol that encapsulates SCSI frames for transport over IP networks. Supported by Compute, Object Storage, and the Image service.
- **IO**: The abbreviation for input and output.
- **IP address**: Number that is unique to every computer system on the Internet. Two versions of the Internet Protocol (IP) are in use for addresses: IPv4 and IPv6.
- **IP Address Management (IPAM)**: The process of automating IP address allocation, deallocation, and management. Currently provided by Compute, melange, and Networking.
- **ip6tables**: Tool used to set up, maintain, and inspect the tables of IPv6 packet filter rules in the Linux kernel. In OpenStack Compute, ip6tables is used along with arptables, ebtables, and iptables to create firewalls for both nodes and VMs.
- **ipset**: Extension to iptables that allows creation of firewall rules that match entire "sets" of IP addresses simultaneously. These sets reside in indexed data structures to increase efficiency, particularly on systems with a large quantity of rules.
- **iptables**: Used along with arptables and ebtables, iptables creates firewalls in Compute. iptables are the tables provided by the Linux kernel firewall (implemented as different Netfilter modules) and the chains and rules it stores. Different kernel modules and programs are currently used for different protocols: iptables applies to IPv4, ip6tables to IPv6, arptables to ARP, and ebtables to Ethernet frames. Requires root privilege to manipulate.
- **ironic**: Code name for the Bare Metal service.
- **iSCSI Qualified Name (IQN)**: The format most commonly used for iSCSI names, which uniquely identify nodes in an iSCSI network. All IQNs follow the pattern iqn.yyyy-mm.domain:identifier, where 'yyyy-mm' is the year and month in which the domain was registered, 'domain' is the reversed domain name of the issuing organization, and 'identifier' is an optional string that makes each IQN under the same domain unique. For example, 'iqn.2015-10.org.openstack.408ae959bce1'.
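To make the IQN naming pattern above concrete, the sketch below checks candidate names against a rough regular expression; this is only an illustration of the iqn.yyyy-mm.domain:identifier layout, not a full RFC 3720 validator.

```python
import re

# Rough shape of iqn.yyyy-mm.domain[:identifier]; intentionally permissive.
IQN_RE = re.compile(r"^iqn\.\d{4}-\d{2}\.[a-z0-9.\-]+(:\S+)?$", re.IGNORECASE)

for name in ("iqn.2015-10.org.openstack:408ae959bce1",
             "iqn.2015-10.org.openstack.408ae959bce1",
             "not-an-iqn"):
    print(name, "->", bool(IQN_RE.match(name)))
```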
- **ISO9660**: One of the VM image disk formats supported by the Image service.
- **itsec**: A default role in the Compute RBAC system that can quarantine an instance in any project.

### J

- **Java**: A programming language that is used to create systems that involve more than one computer by way of a network.
- **JavaScript**: A scripting language that is used to build web pages.
- **JavaScript Object Notation (JSON)**: One of the supported response formats in OpenStack.
- **jumbo frame**: Feature in modern Ethernet networks that supports frames up to approximately 9000 bytes.
- **Juno**: The code name for the tenth release of OpenStack. The design summit took place in Atlanta, Georgia, US, and Juno is an unincorporated community in Georgia.

### K

- **Kerberos**: A network authentication protocol that works on the basis of tickets. Kerberos allows nodes communicating over a non-secure network to prove their identity to one another in a secure manner.
- **kernel-based VM (KVM)**: An OpenStack-supported hypervisor. KVM is a full virtualization solution for Linux on x86 hardware containing virtualization extensions (Intel VT or AMD-V), ARM, IBM Power, and IBM zSeries. It consists of a loadable kernel module that provides the core virtualization infrastructure and a processor-specific module.
- **Key Manager service (barbican)**: The project that produces a secret storage and generation system capable of providing key management for services wishing to enable encryption features.
- **keystone**: Code name of the Identity service.
- **Kickstart**: A tool to automate system configuration and installation on Red Hat, Fedora, and CentOS-based Linux distributions.
- **Kilo**: The code name for the eleventh release of OpenStack. The design summit took place in Paris, France. Because of a delay in name selection, the release was known only as K for a while; since "k" is the unit symbol for kilo and the kilogram reference artifact is stored in the Pavillon de Breteuil in Sèvres near Paris, the community chose Kilo as the release name.

### L

- **large object**: An object within Object Storage that is larger than 5 GB.
- **Launchpad**: The collaboration site for OpenStack.
- **Layer-2 (L2) agent**: OpenStack Networking agent that provides layer-2 connectivity for virtual networks.
- **Layer-2 network**: Term used in the OSI network architecture for the data link layer. The data link layer is responsible for media access control, flow control, and detecting and possibly correcting errors that may occur in the physical layer.
- **Layer-3 (L3) agent**: OpenStack Networking agent that provides layer-3 (routing) services for virtual networks.
- **Layer-3 network**: Term used in the OSI network architecture for the network layer. The network layer is responsible for packet forwarding, including routing from one node to another.
- **Liberty**: The code name for the twelfth release of OpenStack. The design summit took place in Vancouver, Canada, and Liberty is the name of a village in the Canadian province of Saskatchewan.
- **libvirt**: Virtualization API library used by OpenStack to interact with many of its supported hypervisors.
- **Lightweight Directory Access Protocol (LDAP)**: An application protocol for accessing and maintaining distributed directory information services over an IP network.
- **Linux**: Unix-like computer operating system assembled under the model of free and open source software development and distribution.
- **Linux bridge**: Software that enables multiple VMs to share a single physical NIC within Compute.
- **Linux Bridge neutron plug-in**: Enables a Linux bridge to understand a Networking port, interface attachment, and other abstractions.
- **Linux containers (LXC)**: An OpenStack-supported hypervisor.
- **live migration**: The ability within Compute to move running virtual machine instances from one host to another with only a small service interruption during switchover.
- **load balancer**: A logical device that belongs to a cloud account. It is used to distribute workloads between multiple back-end systems or services, based on the criteria defined as part of its configuration.
- **load balancing**: The process of spreading client requests between two or more nodes to improve performance and availability.
- **Load-Balancer-as-a-Service (LBaaS)**: Enables Networking to distribute incoming requests evenly between designated instances.
- **Load-balancing service (octavia)**: The project that aims to provide scalable, on-demand, self-service access to load-balancer services in a technology-agnostic manner.
- **Logical Volume Manager (LVM)**: Provides a method of allocating space on mass-storage devices that is more flexible than conventional partitioning schemes.

### M

- **magnum**: Code name for the Container Infrastructure Management service.
- **management API**: Alternative term for an admin API.
- **management network**: A network segment used for administration, not accessible to the public Internet.
- **manager**: Logical groupings of related code, such as the Block Storage volume manager or the network manager.
- **manifest**: Used to track segments of a large object within Object Storage.
- **manifest object**: A special Object Storage object that contains the manifest for a large object.
- **manila**: Code name for the OpenStack Shared File Systems service.
- **manila-share**: Responsible for managing Shared File Systems service devices, specifically the back-end devices.
- **maximum transmission unit (MTU)**: Maximum frame or packet size for a particular network medium; typically 1500 bytes for Ethernet networks.
- **mechanism driver**: A driver for the Modular Layer 2 (ML2) neutron plug-in that provides layer-2 connectivity for virtual instances. A single OpenStack installation can use multiple mechanism drivers.
- **melange**: Project name for the OpenStack Network Information Service; to be merged with Networking.
- **membership**: The association between an Image service VM image and a project. Enables images to be shared with specified projects.
- **membership list**: A list of projects that can access a given VM image within the Image service.
- **memcached**: A distributed memory object caching system that is used by Object Storage for caching.
- **memory overcommit**: The ability to start new VM instances based on the actual memory usage of a host, as opposed to basing the decision on the amount of RAM each running instance thinks it has available. Also known as RAM overcommit.
- **message broker**: The software package used to provide AMQP messaging capabilities within Compute. The default package is RabbitMQ.
- **message bus**: The main virtual communication line used by all AMQP messages for inter-cloud communications within Compute.
- **message queue**: Passes requests from clients to the appropriate workers and returns the output to the client after the job completes.
- **Messaging service (zaqar)**: The project that provides a messaging service affording a variety of distributed application patterns in an efficient, scalable, and highly available manner, and that creates and maintains the associated Python libraries and documentation.
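As a small, concrete view of the message broker, exchange, and queue entries above, the sketch below publishes and reads one message on a RabbitMQ broker with the pika client; the broker address and queue name are placeholders, and OpenStack services themselves use oslo.messaging on top of such a broker rather than raw pika.

```python
import pika  # RabbitMQ client; RabbitMQ is the default Compute message broker

# "localhost" and the queue name are placeholders for a test broker.
connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()

# A transient (non-durable) queue bound to the default direct exchange.
channel.queue_declare(queue="demo", durable=False)
channel.basic_publish(exchange="", routing_key="demo", body=b"hello from the queue")

# Fetch the message back, acknowledging it automatically.
method, properties, body = channel.basic_get(queue="demo", auto_ack=True)
print(body)

connection.close()
```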
- **monasca**: Codename for OpenStack Monitoring.
- **Monitor (LBaaS)**: An LBaaS feature that provides availability monitoring using the ping command, TCP, and HTTP/HTTPS GET.
- **Monitor (Mon)**: A Ceph component that communicates with external clients, checks data state and consistency, and performs quorum functions.
- **Monitoring (monasca)**: The OpenStack service that provides a multi-project, highly scalable, performant, fault-tolerant monitoring-as-a-service solution for metrics, complex event processing, and logging. It builds an extensible platform for advanced monitoring services that both operators and projects can use to gain operational insight and visibility, ensuring availability and stability.
- **multi-cloud computing**: The use of multiple cloud computing and storage services in a single network architecture.
- **multi-cloud SDKs**: SDKs that provide a multi-cloud abstraction layer and include support for OpenStack. These SDKs are well suited for writing applications that need to consume more than one type of cloud provider, but they may expose a more limited set of features.
- **multi-factor authentication**: An authentication method that uses two or more credentials, such as a password and a private key. Currently not supported in Identity.
- **multi-host**: A high-availability mode for legacy (nova) networking. Each compute node handles NAT and DHCP and acts as a gateway for all of the VMs on it. A networking failure on one compute node does not affect VMs on other compute nodes.
- **multinic**: A facility in Compute that allows each virtual machine instance to have more than one VIF connected to it.
- **murano**: Codename for the Application Catalog service.

### N

- **Nebula**: Released as open source by NASA in 2010; the basis for Compute.
- **netadmin**: One of the default roles in the Compute RBAC system. Enables the user to allocate publicly accessible IP addresses to instances and change firewall rules.
- **NetApp volume driver**: Enables Compute to communicate with NetApp storage devices through the NetApp OnCommand Provisioning Manager.
- **network**: A virtual network that provides connectivity between entities, for example, a collection of virtual ports that share network connectivity. In Networking terminology, a network is always a layer-2 network.
- **Network Address Translation (NAT)**: The process of modifying IP address information while in transit. Supported by Compute and Networking.
- **network controller**: A Compute daemon that orchestrates the network configuration of nodes, including IP addresses, VLANs, and bridging. Also manages routing for both public and private networks.
- **Network File System (NFS)**: A method for making file systems available over the network. Supported by OpenStack.
- **network ID**: The unique ID assigned to each network segment within Networking. Same as network UUID.
- **network manager**: The Compute component that manages various network components, such as firewall rules, IP address allocation, and so on.
- **network namespace**: A Linux kernel feature that provides independent virtual networking instances on a single host, with separate routing tables and interfaces. Similar to virtual routing and forwarding (VRF) services on physical network equipment.
- **network node**: Any compute node that runs the network worker daemon.
- **network segment**: Represents a virtual, isolated OSI layer-2 subnet in Networking.
- **Network Service Header (NSH)**: Provides a mechanism for metadata exchange along the instantiated service path.
- **Network Time Protocol (NTP)**: A method of keeping the clock of a host or node correct by communicating with a trusted, accurate time source.
- **network UUID**: The unique ID of a Networking network segment.
- **network worker**: The nova-network worker daemon; provides services such as giving an IP address to a booting nova instance.
- **Networking API (Neutron API)**: The API used to access OpenStack Networking. Provides an extensible architecture to enable custom plug-in creation (a short client-side example follows below).
- **Networking service (neutron)**: The OpenStack project that implements services and associated libraries to provide on-demand, scalable, and technology-agnostic network abstraction.
- **neutron**: Codename for the OpenStack Networking service.
- **neutron API**: An alternative name for the Networking API.
- **neutron manager**: Enables Compute and Networking integration, which enables Networking to perform network management for guest VMs.
- **Neutron plug-in**: An interface within Networking that enables organizations to create custom plug-ins for advanced features, such as QoS, ACLs, or IDS.
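
To make the Networking API entry above more concrete, the following minimal sketch lists networks with the openstacksdk Python library. It assumes a reachable cloud and a `clouds.yaml` entry named `mycloud` holding the credentials; the cloud name is a placeholder for this example.

```python
import openstack

# Connect using credentials from a clouds.yaml entry named "mycloud"
# (placeholder name for this example).
conn = openstack.connect(cloud="mycloud")

# List the networks visible to the authenticated project via the
# Networking (neutron) API.
for network in conn.network.networks():
    print(network.id, network.name)
```
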
- **Newton**: The code name for the fourteenth release of OpenStack. The design summit took place in Austin, Texas, US. The release is named after "Newton House," located at 1013 E. Ninth Street, Austin, Texas, which is listed on the National Register of Historic Places.
- **Nexenta volume driver**: Provides support for NexentaStor devices in Compute.
- **NFV Orchestration service (tacker)**: The OpenStack service that aims to implement Network Function Virtualization (NFV) orchestration services and libraries for end-to-end life-cycle management of network services and Virtual Network Functions (VNFs).
- **Nginx**: An HTTP and reverse proxy server, a mail proxy server, and a generic TCP/UDP proxy server.
- **No ACK**: Disables server-side message acknowledgment in the Compute RabbitMQ. Increases performance but decreases reliability.
- **node**: A VM instance that runs on a host.
- **non-durable exchange**: A message exchange that is cleared when the service restarts. Its data is not written to persistent storage.
- **non-durable queue**: A message queue that is cleared when the service restarts. Its data is not written to persistent storage.
- **non-persistent volume**: An alternative term for an ephemeral volume.
- **north-south traffic**: Network traffic between a user or client (north) and a server (south), or traffic into the cloud (south) and out of the cloud (north). See also east-west traffic.
- **nova**: Codename for the OpenStack Compute service.
- **Nova API**: An alternative term for the Compute API.
- **nova-network**: A Compute component that manages IP address allocation, firewalls, and other network-related tasks. The legacy networking option and an alternative to Networking.

### O

- **object**: A BLOB of data held by Object Storage; can be in any format.
- **object auditor**: Opens all objects for an object server and verifies the MD5 hash, size, and metadata of each object.
- **object expiration**: A configurable option within Object Storage to automatically delete objects after a specified amount of time has passed or a certain date is reached.
- **object hash**: The unique ID of an Object Storage object.
- **object path hash**: Used by Object Storage to determine the location of an object in the ring; maps objects to partitions (a simplified sketch of this mapping follows below).
- **object replicator**: An Object Storage component that copies objects to remote partitions for fault tolerance.
- **object server**: The Object Storage component that is responsible for managing objects.
- **Object Storage API**: The API used to access OpenStack Object Storage.
- **Object Storage Device (OSD)**: The Ceph storage daemon.
- **Object Storage service (swift)**: The OpenStack core project that provides eventually consistent and redundant storage and retrieval of fixed digital content.
- **object versioning**: Allows a user to set a flag on an Object Storage container so that all objects within the container are versioned.
- **Ocata**: The code name for the fifteenth release of OpenStack. The design summit took place in Barcelona, Spain; Ocata is a beach north of Barcelona.
- **Octavia**: Codename for the Load-balancing service.
- **Oldie**: A term for an Object Storage process that has been running for a long time; can indicate a hung process.
- **Open Cloud Computing Interface (OCCI)**: A standardized interface for managing compute, data, and network resources; currently unsupported in OpenStack.
- **Open Virtualization Format (OVF)**: A standard for packaging VM images. Supported in OpenStack.
- **Open vSwitch**: A production-quality, multilayer virtual switch licensed under the open source Apache 2.0 license. It is designed to enable massive network automation through programmatic extension, while still supporting standard management interfaces and protocols (for example NetFlow, sFlow, SPAN, RSPAN, CLI, LACP, 802.1ag).
- **Open vSwitch (OVS) agent**: Provides an interface to the underlying Open vSwitch service for the Networking plug-in.
- **Open vSwitch neutron plug-in**: Provides support for Open vSwitch in Networking.
- **OpenDev**: A space for collaborative open source software development. OpenDev's mission is to provide project hosting, continuous-integration tooling, and virtual collaboration spaces for open source software projects. OpenDev is itself self-hosted on this set of tools, including code review, continuous integration, etherpad, wiki, and code browsing, which means OpenDev itself is run like an open source project: you can join in and help run the system. Additionally, all of the services it runs are themselves open source software. The OpenStack project is the largest project using OpenDev.
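
The object path hash and partition entries describe how Object Storage decides where data lives: the object path is hashed and the top bits of the hash select a partition in the ring. The snippet below is a deliberately simplified illustration of that idea in plain Python, not swift's actual ring code; the path, partition power, and hash suffix are made-up example values.

```python
import hashlib

PART_POWER = 10            # example ring with 2**10 = 1024 partitions
HASH_SUFFIX = "changeme"   # stands in for swift's per-cluster hash suffix

def partition_for(account: str, container: str, obj: str) -> int:
    """Map an object path to a partition number (simplified illustration)."""
    path = f"/{account}/{container}/{obj}"
    digest = hashlib.md5((path + HASH_SUFFIX).encode()).digest()
    # Use the top bits of the 128-bit hash to pick one of 2**PART_POWER
    # partitions, mirroring the "partition shift value" idea.
    top32 = int.from_bytes(digest[:4], "big")
    return top32 >> (32 - PART_POWER)

print(partition_for("AUTH_demo", "photos", "cat.jpg"))
```
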
- **OpenLDAP**: An open source LDAP server. Supported by both Compute and Identity.
- **OpenStack**: A cloud operating system that controls large pools of compute, storage, and networking resources throughout a data center, all managed through a dashboard that gives administrators control while empowering their users to provision resources through a web interface. OpenStack is an open source project licensed under the Apache License 2.0.
- **OpenStack code name**: Each OpenStack release has a code name. Code names ascend in alphabetical order: Austin, Bexar, Cactus, Diablo, Essex, Folsom, Grizzly, Havana, Icehouse, Juno, Kilo, Liberty, Mitaka, Newton, Ocata, Pike, Queens, Rocky, Stein, Train, Ussuri, Victoria, Wallaby, Xena, Yoga, Zed. Wallaby was the first code name chosen under the new policy: code names are chosen by the community following the alphabet; for details, see the release name criteria. Up to Victoria, code names were cities or counties near where the corresponding OpenStack design summit took place, with one exception, called the Waldon exception, granted to elements of the state flag that sound especially cool; code names were chosen by popular vote. With the alphabet for OpenStack releases exhausted, the Technical Committee changed the naming process to use a release number together with a release name as identifiers. The release number, "year.count-of-release-within-the-year", is the primary identifier, and the name is used mainly for marketing purposes. The first such release is 2023.1 Antelope, followed by 2023.2 Bobcat and 2024.1 Caracal.
- **openSUSE**: A Linux distribution that is compatible with OpenStack.
- **operator**: The person responsible for planning and maintaining an OpenStack installation.
- **optional service**: An official OpenStack service defined as optional by the Interop Working Group. Currently consists of Dashboard (horizon), Telemetry service (telemetry), Orchestration service (heat), Database service (trove), Bare Metal service (ironic), and so on.
- **Orchestration service (heat)**: The OpenStack service that orchestrates composite cloud applications using a declarative template format through an OpenStack-native REST API.
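
To illustrate the declarative template format that the Orchestration service consumes, here is the shape of a minimal Heat Orchestration Template (HOT), expressed as a Python dict purely for illustration; the image, flavor, and network values are placeholders, and in practice the template is written in YAML and submitted to heat.

```python
import yaml  # PyYAML, used only to show the YAML form heat normally receives

# A minimal HOT-style template expressed as a Python dict for illustration;
# image/flavor/network values are placeholders.
minimal_hot_template = {
    "heat_template_version": "2018-08-31",
    "description": "Single server example",
    "resources": {
        "demo_server": {
            "type": "OS::Nova::Server",
            "properties": {
                "image": "cirros-0.5.2",
                "flavor": "m1.tiny",
                "networks": [{"network": "private"}],
            },
        },
    },
}

print(yaml.safe_dump(minimal_hot_template, sort_keys=False))
```
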
- **orphan**: In the context of Object Storage, a process that is not terminated after an upgrade, restart, or reload of the service.
- **Oslo**: Codename for the Common Libraries project.

### P

- **panko**: Part of the OpenStack Telemetry service; provides event storage.
- **parent cell**: If a requested resource, such as CPU time, disk storage, or memory, is not available in the parent cell, the request is forwarded to the parent cell's associated child cells.
- **partition**: A unit of storage within Object Storage used to store objects. It exists on top of devices and is replicated for fault tolerance.
- **partition index**: Contains the locations of all Object Storage partitions within the ring.
- **partition shift value**: Used by Object Storage to determine which partition data should reside on.
- **Path MTU Discovery (PMTUD)**: A mechanism in IP networks to detect the end-to-end MTU and adjust the packet size accordingly.
- **pause**: A VM state in which no changes occur (no changes in memory, network communications stop, and so on); the VM is frozen but not shut down.
- **PCI passthrough**: Gives a guest VM exclusive access to a PCI device. Currently supported in OpenStack Havana and later releases.
- **persistent message**: A message that is stored both in memory and on disk. The message is not lost after a failure or restart.
- **persistent volume**: Changes to these types of disk volumes are saved.
- **personality file**: A file used to customize a Compute instance. It can be used to inject SSH keys or a specific network configuration.
- **Pike**: The code name for the sixteenth release of OpenStack. The OpenStack summit took place in Boston, Massachusetts, US. The release is named after the Massachusetts Turnpike, commonly abbreviated as the Mass Pike, which is the easternmost stretch of Interstate 90.
- **Platform-as-a-Service (PaaS)**: Provides to the consumer an operating system and, often, a language runtime and libraries (collectively, the "platform") on which they can run their own application code, without providing any control over the underlying infrastructure. Examples of Platform-as-a-Service providers include Cloud Foundry and OpenShift.
- **plug-in**: A software component that provides the actual implementation for Networking APIs, or for Compute APIs, depending on the context.
- **policy service**: A component of Identity that provides a rule-management interface and a rule-based authorization engine.
- **policy-based routing (PBR)**: Provides a mechanism to implement packet forwarding and routing according to the policies defined by the network administrator.
- **pool**: A logical set of devices, such as web servers, grouped together to receive and process traffic. The load-balancing function chooses which member of the pool handles new requests or connections received on the VIP address. Each VIP has one pool.
- **pool member**: An application that runs on a back-end server in a load-balancing system.
- **port**: A virtual network port within Networking; VIFs / vNICs are connected to a port.
- **port UUID**: The unique ID of a Networking port.
- **preseed**: A tool to automate system configuration and installation on Debian-based Linux distributions.
- **private cloud**: Computing resources used exclusively by one business or organization.
- **private image**: An Image service VM image that is only available to specified projects.
- **private IP address**: An IP address used for management and administration, not available to the public Internet.
- **private network**: The Network Controller provides virtual networks to enable compute servers to interact with each other and with the public network. All machines must have a public and private network interface. A private network interface can be a flat or VLAN network interface; a flat network interface is controlled by the flat_interface option with flat managers, and a VLAN network interface is controlled by the vlan_interface option with VLAN managers.
- **project**: Projects represent the base unit of "ownership" in OpenStack, in that all resources in OpenStack should be owned by a specific project. In OpenStack Identity, a project must be owned by a specific domain.
- **project ID**: The unique ID assigned to each project by the Identity service.
- **project VPN**: An alternative term for a cloudpipe.
- **promiscuous mode**: Causes the network interface to pass all traffic it receives to the host rather than passing only the frames addressed to it.
- **protected property**: Generally, an extra property on an Image service image to which only cloud administrators have access. Limits which user roles can perform CRUD operations on that property. The cloud administrator can configure any image property as protected.
- **provider**: An administrator who has access to all hosts and instances.
- **proxy node**: A node that provides the Object Storage proxy service.
- **proxy server**: Users of Object Storage interact with the service through the proxy server, which in turn looks up the location of the requested data within the ring and returns the results to the user.
- **public API**: An API endpoint used for both service-to-service communication and end-user interactions.
- **public cloud**: A data center available to many users over the Internet.
- **public image**: An Image service VM image that is available to all projects.
- **public IP address**: An IP address that is accessible to end users.
- **public key authentication**: An authentication method that uses keys rather than passwords.
- **public network**: The Network Controller provides virtual networks to enable compute servers to interact with each other and with the public network. All machines must have a public and private network interface. The public network interface is controlled by the public_interface option.
- **Puppet**: An operating-system configuration-management tool supported by OpenStack.
- **Python**: A programming language used extensively in OpenStack.

### Q

- **QEMU Copy On Write 2 (QCOW2)**: One of the VM image disk formats supported by the Image service.
- **Qpid**: Message queue software supported by OpenStack; an alternative to RabbitMQ.
- **Quality of Service (QoS)**: The ability to guarantee certain network or storage requirements to satisfy a Service Level Agreement (SLA) between an application provider and end users. Typically includes performance requirements such as networking bandwidth, latency, jitter correction, and reliability, as well as storage performance in Input/Output Operations Per Second (IOPS), throttling agreements, and performance expectations at peak load.
- **quarantine**: If Object Storage finds objects, containers, or accounts that are corrupted, they are placed in this state: they are not replicated, cannot be read by clients, and a correct copy is re-replicated.
- **Queens**: The code name for the seventeenth release of OpenStack. The OpenStack summit took place in Sydney, Australia. The release is named after the Queens Pound river in the South Coast region of New South Wales.
- **Quick EMUlator (QEMU)**: A generic and open source machine emulator and virtualizer. One of the hypervisors supported by OpenStack, generally used for development purposes.
- **quota**: In Compute and Block Storage, the ability to set resource limits on a per-project basis.

### R

- **RabbitMQ**: The default message queue software used by OpenStack.
- **Rackspace Cloud Files**: Released as open source by Rackspace in 2010; the basis for Object Storage.
- **RADOS Block Device (RBD)**: A Ceph component that enables a Linux block device to be striped over multiple distributed data stores.
- **radvd**: The router advertisement daemon, used by the Compute VLAN manager and FlatDHCP manager to provide routing services for VM instances.
- **rally**: Codename for the Benchmark service.
- **RAM filter**: The Compute setting that enables or disables RAM overcommitment.
- **RAM overcommit**: The ability to start new VM instances based on the actual memory usage of a host, as opposed to basing the decision on the amount of RAM each running instance thinks it has available. Also known as memory overcommit (a small worked example follows below).
- **rate limit**: A configurable option within Object Storage that limits database writes on a per-account and/or per-container basis.
- **raw**: One of the VM image disk formats supported by the Image service; an unstructured disk image.
- **rebalance**: The process of distributing Object Storage partitions across all drives in the ring; used during initial ring creation and after ring reconfiguration.
- **reboot**: Either a soft or hard reboot of a server. With a soft reboot, the operating system is signaled to restart, which allows a graceful shutdown of all processes. A hard reboot is the equivalent of power cycling the server. The virtualization platform should ensure that the reboot action has completed successfully, even in cases where the underlying domain/VM is paused or halted/stopped.
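
As a small worked example of the RAM overcommit idea above (illustrative numbers only, not defaults of any particular release): with 64 GiB of physical RAM, 4 GiB flavors, and an assumed overcommit ratio of 1.5, the scheduler would treat the host as having 96 GiB of schedulable memory.

```python
# Illustrative RAM overcommit arithmetic; values are example numbers,
# not defaults of any particular OpenStack release.
physical_ram_gib = 64
ram_allocation_ratio = 1.5      # overcommit factor assumed for this example
flavor_ram_gib = 4              # RAM requested by each instance

schedulable_ram_gib = physical_ram_gib * ram_allocation_ratio
max_instances = int(schedulable_ram_gib // flavor_ram_gib)

print(f"Schedulable RAM: {schedulable_ram_gib} GiB")   # 96.0 GiB
print(f"Max 4 GiB instances: {max_instances}")         # 24
```
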
- **rebuild**: Removes all data on the server and replaces it with the specified image. The server ID and IP addresses remain the same.
- **recon**: An Object Storage component that collects meters.
- **record**: Belongs to a particular domain and is used to specify information about the domain. There are several types of DNS records; each record type contains particular information used to describe the purpose of that record. Examples include mail exchange (MX) records, which specify the mail server for a particular domain, and name server (NS) records, which specify the authoritative name servers for a domain.
- **record ID**: A number within a database that is incremented each time a change is made. Used by Object Storage when replicating.
- **Red Hat Enterprise Linux (RHEL)**: A Linux distribution that is compatible with OpenStack.
- **reference architecture**: A recommended architecture for an OpenStack cloud.
- **region**: A discrete OpenStack environment with dedicated API endpoints that typically shares only the Identity service (keystone) with other regions.
- **registry**: An alternative term for the Image service registry.
- **registry server**: An Image service that provides VM image metadata information to clients.
- **Reliable, Autonomic Distributed Object Store (RADOS)**: A collection of components that provides object storage within Ceph. Similar to OpenStack Object Storage.
- **Remote Procedure Call (RPC)**: The method used by the Compute RabbitMQ for intra-service communications.
- **replica**: Provides data redundancy and fault tolerance by creating copies of Object Storage objects, accounts, and containers so that they are not lost when the underlying storage fails.
- **replica count**: The number of replicas of the data in an Object Storage ring.
- **replication**: The process of copying data to a separate physical device for fault tolerance and performance.
- **replicator**: The Object Storage back-end process that creates and manages object replicas.
- **request ID**: The unique ID assigned to each request sent to Compute.
- **rescue image**: A special type of VM image that is booted when an instance is placed into rescue mode. Allows an administrator to mount the instance's file systems to correct the problem.
- **resize**: Converts an existing server to a different flavor, which scales the server up or down. The original server is saved to enable rollback if a problem occurs. All resizes must be tested and explicitly confirmed, at which time the original server is removed.
- **RESTful**: A kind of web service API that uses REST, or Representational State Transfer. REST is the style of architecture for hypermedia systems used for the World Wide Web.
- **ring**: An entity that maps Object Storage data to partitions. A separate ring exists for each service, such as account, object, and container.
- **ring builder**: Builds and manages rings within Object Storage, assigns partitions to devices, and pushes the configuration to other storage nodes.
- **Rocky**: The code name for the eighteenth release of OpenStack. The OpenStack summit took place in Vancouver, Canada. The release is named after the Rocky Mountains.
- **role**: A personality that a user assumes to perform a specific set of operations. A role includes a set of rights and privileges; a user assuming that role inherits those rights and privileges.
- **Role Based Access Control (RBAC)**: Provides a predefined list of actions that the user can perform, such as starting or stopping VMs, resetting passwords, and so on. Supported in both Identity and Compute and can be configured using the dashboard.
- **role ID**: An alphanumeric ID assigned to each Identity service role.
- **Root Cause Analysis (RCA) service (vitrage)**: An OpenStack project that aims to organize, analyze, and visualize OpenStack alarms and events, yield insights regarding the root cause of problems, and deduce their existence before they are directly detected.
- **rootwrap**: A feature of Compute that allows the unprivileged "nova" user to run a specified list of commands as the Linux root user.
- **round-robin scheduler**: A type of Compute scheduler that evenly distributes instances among available hosts.
- **router**: A physical or virtual network device that passes network traffic between different networks.
- **routing key**: The Compute direct, fanout, and topic exchanges use this key to determine how to process a message; processing varies depending on the exchange type.
- **RPC driver**: A modular system that allows the underlying message queue software of Compute to be changed, for example from RabbitMQ to ZeroMQ or Qpid.
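
The RPC-related entries above (Remote Procedure Call, routing key, RPC driver) describe how OpenStack services call each other over the message bus. The sketch below shows the general shape of a topic-based RPC server and client using the oslo.messaging library; it is only a sketch, assuming a RabbitMQ broker reachable at the transport URL shown, and the topic, server name, and method are made-up example values.

```python
# Sketch of topic-based RPC with oslo.messaging (assumes a RabbitMQ broker
# reachable at the transport URL below; names are examples).
import oslo_messaging
from oslo_config import cfg

transport = oslo_messaging.get_rpc_transport(
    cfg.CONF, url="rabbit://guest:guest@127.0.0.1:5672/")
target = oslo_messaging.Target(topic="demo_topic", server="server1")

class DemoEndpoint(object):
    def add(self, ctxt, a, b):
        return a + b

server = oslo_messaging.get_rpc_server(
    transport, target, [DemoEndpoint()], executor="threading")
server.start()

client = oslo_messaging.RPCClient(
    transport, oslo_messaging.Target(topic="demo_topic"))
print(client.call({}, "add", a=1, b=2))   # -> 3

server.stop()
server.wait()
```
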
- **rsync**: Used by Object Storage to push object replicas.
- **RXTX cap**: An absolute limit on the amount of network traffic a Compute VM instance can send and receive.
- **RXTX quota**: A soft limit on the amount of network traffic a Compute VM instance can send and receive.

### S

- **sahara**: Codename for the Data Processing service.
- **SAML assertion**: Contains information about a user as provided by the identity provider. It is an indication that a user has been authenticated.
- **sandbox**: A virtual space in which new or untested software can be run securely.
- **scheduler manager**: A Compute component that determines where VM instances should start. Uses a modular design to support a variety of scheduler types.
- **scoped token**: An Identity service API access token that is associated with a specific project.
- **scrubber**: Checks for and deletes unused VMs; the component of the Image service that implements delayed delete.
- **secret key**: A string of text known only by the user; used along with an access key to make requests to the Compute API.
- **secure boot**: The process whereby the system firmware validates the authenticity of the code involved in the boot process.
- **secure shell (SSH)**: An open source tool used to access remote hosts through an encrypted communications channel; SSH key injection is supported by Compute.
- **security group**: A set of network traffic filtering rules that are applied to a Compute instance.
- **segmented object**: An Object Storage large object that has been broken up into pieces. The re-assembled object is called a concatenated object.
- **self-service**: For IaaS, the ability for a regular (non-privileged) account to manage a virtual infrastructure component such as networks without involving an administrator.
- **SELinux**: A Linux kernel security module that provides the mechanism for supporting access control policies.
- **senlin**: Codename for the Clustering service.
- **server**: A computer that provides explicit services to the client software running on that system, often managing a variety of computer operations. A server is a VM instance in the Compute system; flavor and image are requisite elements when creating a server.
- **server image**: An alternative term for a VM image.
- **server UUID**: The unique ID assigned to each guest VM instance.
- **service**: An OpenStack service, such as Compute, Object Storage, or Image service. Provides one or more endpoints through which users can access resources and perform operations.
- **service catalog**: An alternative term for the Identity service catalog.
- **Service Function Chain (SFC)**: For a given service, SFC is the abstracted view of the required service functions and the order in which they are to be applied.
- **service ID**: The unique ID assigned to each service that is available in the Identity service catalog.
- **Service Level Agreement (SLA)**: Contractual obligations that ensure the availability of a service.
- **service project**: A special project that contains all services that are listed in the catalog.
- **service provider**: A system that provides services to other system entities. In the case of federated identity, OpenStack Identity is the service provider.
- **service registration**: An Identity service feature that enables services, such as Compute, to automatically register with the catalog.
- **service token**: An administrator-defined token used by Compute to communicate securely with the Identity service.
- **session back end**: The method of storage used by horizon to track client sessions, such as local memory, cookies, a database, or memcached.
- **session persistence**: A feature of the load-balancing service. It attempts to force subsequent connections to a service to be redirected to the same node as long as it is online.
- **session storage**: A horizon component that stores and tracks client session information. Implemented through the Django sessions framework.
- **share**: A remote, mountable file system in the context of the Shared File Systems service. You can mount a share to, and access a share from, several hosts by several users at a time.
- **share network**: An entity in the context of the Shared File Systems service that encapsulates interaction with the Networking service. If the selected driver runs in a mode requiring such interaction, you must specify the share network when creating a share.
- **Shared File Systems API**: A Shared File Systems service that provides a stable RESTful API. The service authenticates and routes requests throughout the Shared File Systems service. python-manilaclient is available to interact with the API.
- **Shared File Systems service (manila)**: The service that provides a set of services for management of shared file systems in a multi-project cloud environment, similar to how OpenStack provides block-based storage management through the OpenStack Block Storage service project. With the Shared File Systems service, you can create a remote file system and mount the file system on your instances. You can also read and write data from your instances to and from your file system.
- **shared IP address**: An IP address that can be assigned to a VM instance within the shared IP group. Public IP addresses can be shared across multiple servers for use in various high-availability scenarios. When an IP address is shared to another server, the cloud network restrictions are modified to enable each server to listen to and respond on that IP address. You can optionally specify that the target server's network configuration be modified. Shared IP addresses can be used with many standard heartbeat facilities, such as keepalive, that monitor for failure and manage IP failover.
- **shared IP group**: A collection of servers that can share IPs with other members of the group. Any server in a group can share one or more public IPs with any other server in the group. With the exception of the first server in a shared IP group, servers must be launched into shared IP groups. A server may be a member of only one shared IP group.
- **shared storage**: Block storage that is simultaneously accessible by multiple clients, for example, NFS.
- **Sheepdog**: A distributed block storage system for QEMU, supported by OpenStack.
- **Simple Cloud Identity Management (SCIM)**: A specification for managing identity in the cloud, currently unsupported by OpenStack.
- **Simple Protocol for Independent Computing Environments (SPICE)**: SPICE provides remote desktop access to guest virtual machines. It is an alternative to VNC and is supported by OpenStack.
- **single-root I/O virtualization (SR-IOV)**: A specification that, when implemented by a physical PCIe device, enables it to appear as multiple separate PCIe devices. This enables multiple virtualized guests to share direct access to the physical device, offering improved performance over an equivalent virtual device. Currently supported in OpenStack Havana and later releases.
- **SmokeStack**: Runs automated tests against the core OpenStack API; written in Rails.
- **snapshot**: A point-in-time copy of an OpenStack storage volume or image. Use storage volume snapshots to back up volumes. Use image snapshots to back up data, or as "gold" images for additional servers.
- **soft reboot**: A controlled reboot where a VM instance is properly restarted through operating system commands.
- **Software Development Kit (SDK)**: Contains code, examples, and documentation that you use to create applications in the language of your choice.
- **Software Development Lifecycle Automation service (solum)**: The OpenStack project that aims to make cloud services easier to consume and integrate with application development processes by automating the source-to-image process and simplifying app-centric deployment.
- **software-defined networking (SDN)**: Provides an approach for network administrators to manage computer network services through abstraction of lower-level functionality.
- **SolidFire volume driver**: The Block Storage driver for the SolidFire iSCSI storage appliance.
- **solum**: Codename for the Software Development Lifecycle Automation service.
- **spread-first scheduler**: The Compute VM scheduling algorithm that attempts to start a new VM on the host with the least amount of load.
- **SQLAlchemy**: An open source SQL toolkit for Python, used in OpenStack.
- **SQLite**: A lightweight SQL database, used as the default persistent storage method in many OpenStack services.
- **stack**: A set of OpenStack resources created and managed by the Orchestration service according to a given template (either an AWS CloudFormation template or a Heat Orchestration Template (HOT)).
- **StackTach**: A community project that captures Compute AMQP communications; useful for debugging.
- **static IP address**: An alternative term for a fixed IP address.
- **static web**: A WSGI middleware component of Object Storage that serves container data as a static web page.
- **Stein**: The code name for the nineteenth release of OpenStack. The OpenStack summit took place in Berlin, Germany. The release is named after the street Steinstraße in Berlin.
- **storage back end**: The method that a service uses for persistent storage, such as iSCSI, NFS, or local disk.
- **storage manager**: A XenAPI component that provides a pluggable interface to support a wide variety of persistent storage back ends.
- **storage manager back end**: A persistent storage method supported by XenAPI, such as iSCSI or NFS.
- **storage node**: An Object Storage node that provides container services, account services, and object services; controls the account databases, container databases, and object storage.
- **storage services**: Collective name for the Object Storage object services, container services, and account services.
- **strategy**: Specifies the authentication source used by the Image service or Identity. In the Database service, it refers to the extensions implemented for a data store.
- **subdomain**: A domain within a parent domain. Subdomains cannot be registered. Subdomains enable you to delegate domains. Subdomains can themselves have subdomains, so third-level, fourth-level, fifth-level, and deeper levels of nesting are possible.
- **subnet**: A logical subdivision of an IP network.
- **SUSE Linux Enterprise Server (SLES)**: A Linux distribution that is compatible with OpenStack.
- **suspend**: The VM instance is paused and its state is saved to disk on the host.
- **swap**: Disk-based virtual memory used by operating systems to provide more memory than is actually available on the system.
- **swift**: Codename for the OpenStack Object Storage service.
- **swift All in One (SAIO)**
- **swift middleware**: A collective term for Object Storage components that provide additional functionality.
- **swift proxy server**: Acts as the gatekeeper to Object Storage and is responsible for authenticating the user.
- **swift storage node**: A node that runs Object Storage account, container, and object services.
- **sync point**: The point in time since the last container and accounts database sync among nodes within Object Storage.
- **sysadmin**: One of the default roles in the Compute RBAC system. Enables a user to add other users to a project, interact with VM images that are associated with the project, and start and stop VM instances.
- **system usage**: A Compute component that, along with the notification system, collects meters and usage information. This information can be used for billing.

### T

- **Tacker**: Codename for the NFV Orchestration service.
- **Telemetry service (telemetry)**: The OpenStack project that collects measurements of the utilization of the physical and virtual resources comprising deployed clouds, persists this data for subsequent retrieval and analysis, and triggers actions when defined criteria are met.
- **TempAuth**: An authentication facility within Object Storage that enables Object Storage itself to perform authentication and authorization. Frequently used in testing and development.
- **Tempest**: An automated software test suite designed to run against the trunk of the OpenStack core project.
- **TempURL**: An Object Storage middleware component that enables creation of URLs for temporary object access.
- **tenant**: A group of users; used to isolate access to Compute resources. An alternative term for a project.
- **tenant API**: An API that is accessible to projects.
- **tenant endpoint**: An Identity service API endpoint that is associated with one or more projects.
- **tenant ID**: An alternative term for project ID.
- **token**: An alphanumeric string of text used to access OpenStack APIs and resources.
- **token services**: An Identity service component that manages and validates tokens after a user or project has been authenticated.
- **tombstone**: Used to mark Object Storage objects that have been deleted; ensures that the object is not updated on another node after it has been deleted.
- **topic publisher**: A process that is created when an RPC call is executed; used to push the message to the topic exchange.
- **Torpedo**: A community project used to run automated tests against the OpenStack API.
- **Train**: The code name for the twentieth release of OpenStack. The OpenStack Infrastructure Summit took place in Denver, Colorado, US. Two Project Team Gathering events in Denver were held at a hotel next to the train line that runs from downtown to the airport. The crossing signals there had malfunctioned in the past, so trains had to blow their horns when passing through the area. Staying in a hotel beside trains blowing their horns around the clock was less than ideal, and many jokes about Denver and trains followed; hence this release is named Train.
\u4e39\u4f5b\u7684\u4e24\u6b21\u9879\u76ee\u56e2\u961f\u805a\u4f1a\u4f1a\u8bae\u5728\u4ece\u5e02\u4e2d\u5fc3\u5230\u673a\u573a\u7684\u706b\u8f66\u7ebf\u65c1\u8fb9\u7684\u4e00\u5bb6\u9152\u5e97\u4e3e\u884c\u3002\u90a3\u91cc\u7684\u4ea4\u53c9\u4fe1\u53f7\u706f\u8fc7\u53bb\u66fe\u51fa\u73b0\u8fc7\u67d0\u79cd\u6545\u969c\uff0c\u5bfc\u81f4\u5b83\u4eec\u5728\u706b\u8f66\u6b63\u5e38\u9a76\u6765\u65f6\u6ca1\u6709\u505c\u4e0b\u8f66\u53a2\u3002\u56e0\u6b64\uff0c\u706b\u8f66\u5728\u7ecf\u8fc7\u8be5\u5730\u533a\u65f6\u5fc5\u987b\u9e23\u5587\u53ed\u3002\u663e\u7136\uff0c\u4f4f\u5728\u9152\u5e97\u91cc\uff0c\u4e58\u5750\u706b\u8f6624/7\u5439\u5587\u53ed\uff0c\u4e0d\u592a\u7406\u60f3\u3002\u7ed3\u679c\uff0c\u51fa\u73b0\u4e86\u8bb8\u591a\u5173\u4e8e\u4e39\u4f5b\u548c\u706b\u8f66\u7684\u7b11\u8bdd\u2014\u2014\u56e0\u6b64\u8fd9\u4e2a\u7248\u672c\u88ab\u79f0\u4e3a\u706b\u8f66\u3002 \u4ea4\u6613 ID \u5206\u914d\u7ed9\u6bcf\u4e2a\u5bf9\u8c61\u5b58\u50a8\u8bf7\u6c42\u7684\u552f\u4e00 ID;\u7528\u4e8e\u8c03\u8bd5\u548c\u8ddf\u8e2a\u3002 \u77ac\u6001 \u975e\u8010\u7528\u54c1\u7684\u66ff\u4ee3\u672f\u8bed\u3002 \u77ac\u6001\u4ea4\u6362 \u975e\u6301\u4e45\u4ea4\u6362\u7684\u66ff\u4ee3\u672f\u8bed\u3002 \u77ac\u6001\u6d88\u606f \u5b58\u50a8\u5728\u5185\u5b58\u4e2d\u5e76\u5728\u670d\u52a1\u5668\u91cd\u65b0\u542f\u52a8\u540e\u4e22\u5931\u7684\u6d88\u606f\u3002 \u77ac\u6001\u961f\u5217 \u975e\u6301\u4e45\u961f\u5217\u7684\u66ff\u4ee3\u672f\u8bed\u3002 TripleO OpenStack-on-OpenStack \u7a0b\u5e8f\u3002OpenStack Deployment \u7a0b\u5e8f\u7684\u4ee3\u53f7\u3002 Trove OpenStack \u6570\u636e\u5e93\u670d\u52a1\u7684\u4ee3\u53f7\u3002 \u53ef\u4fe1\u5e73\u53f0\u6a21\u5757\uff08TPM\uff09 \u4e13\u7528\u5fae\u5904\u7406\u5668\uff0c\u7528\u4e8e\u5c06\u52a0\u5bc6\u5bc6\u94a5\u6574\u5408\u5230\u8bbe\u5907\u4e2d\uff0c\u4ee5\u9a8c\u8bc1\u548c\u4fdd\u62a4\u786c\u4ef6\u5e73\u53f0\u3002 U \u00b6 Ubuntu \u57fa\u4e8e Debian \u7684 Linux \u53d1\u884c\u7248\u3002 \u65e0\u4f5c\u7528\u57df\u4ee4\u724c Identity \u670d\u52a1\u9ed8\u8ba4\u4ee4\u724c\u7684\u66ff\u4ee3\u672f\u8bed\u3002 \u66f4\u65b0\u5668 \u4e00\u7ec4\u5bf9\u8c61\u5b58\u50a8\u7ec4\u4ef6\u7684\u7edf\u79f0\uff0c\u7528\u4e8e\u5904\u7406\u5bb9\u5668\u548c\u5bf9\u8c61\u7684\u6392\u961f\u548c\u5931\u8d25\u7684\u66f4\u65b0\u3002 \u7528\u6237 \u5728 OpenStack Identity \u4e2d\uff0c\u5b9e\u4f53\u4ee3\u8868\u5355\u4e2a API \u4f7f\u7528\u8005\uff0c\u5e76\u7531\u7279\u5b9a\u57df\u62e5\u6709\u3002\u5728 OpenStack \u8ba1\u7b97\u4e2d\uff0c\u7528\u6237\u53ef\u4ee5\u4e0e\u89d2\u8272\u548c/\u6216\u9879\u76ee\u76f8\u5173\u8054\u3002 \u7528\u6237\u6570\u636e \u7528\u6237\u5728\u542f\u52a8\u5b9e\u4f8b\u65f6\u53ef\u4ee5\u6307\u5b9a\u7684\u6570\u636e Blob\u3002\u5b9e\u4f8b\u53ef\u4ee5\u901a\u8fc7\u5143\u6570\u636e\u670d\u52a1\u6216\u914d\u7f6e\u9a71\u52a8\u5668\u8bbf\u95ee\u6b64\u6570\u636e\u3002\u901a\u5e38\u7528\u4e8e\u4f20\u9012\u5b9e\u4f8b\u5728\u542f\u52a8\u65f6\u8fd0\u884c\u7684 shell \u811a\u672c\u3002 \u7528\u6237\u6a21\u5f0f Linux \uff08UML\uff09 \u652f\u6301 OpenStack \u7684\u865a\u62df\u673a\u7ba1\u7406\u7a0b\u5e8f\u3002 Ussuri OpenStack \u7b2c 21 \u7248\u7684\u4ee3\u53f7\u3002OpenStack\u57fa\u7840\u8bbe\u65bd\u5cf0\u4f1a\u5728\u4e2d\u534e\u4eba\u6c11\u5171\u548c\u56fd\u4e0a\u6d77\u4e3e\u884c\u3002\u8be5\u7248\u672c\u4ee5\u4e4c\u82cf\u91cc\u6cb3\u547d\u540d\u3002 V \u00b6 Victoria OpenStack \u7b2c 22 \u7248\u7684\u4ee3\u53f7\u3002OpenDev + PTG 
\u8ba1\u5212\u5728\u52a0\u62ff\u5927\u4e0d\u5217\u98a0\u54e5\u4f26\u6bd4\u4e9a\u7701\u6e29\u54e5\u534e\u4e3e\u884c\u3002\u8be5\u7248\u672c\u4ee5\u4e0d\u5217\u98a0\u54e5\u4f26\u6bd4\u4e9a\u7701\u9996\u5e9c\u7ef4\u591a\u5229\u4e9a\u547d\u540d\u3002 \u7531\u4e8e COVID-19\uff0c\u73b0\u573a\u6d3b\u52a8\u88ab\u53d6\u6d88\u3002\u8be5\u4e8b\u4ef6\u6b63\u5728\u865a\u62df\u5316\u3002 VIF UUID \u5206\u914d\u7ed9\u6bcf\u4e2a\u7f51\u7edc VIF \u7684\u552f\u4e00 ID\u3002 \u865a\u62df\u4e2d\u592e\u5904\u7406\u5668 \uff08vCPU\uff09 \u7ec6\u5206\u7269\u7406 CPU\u3002\u7136\u540e\uff0c\u5b9e\u4f8b\u53ef\u4ee5\u4f7f\u7528\u8fd9\u4e9b\u5206\u533a\u3002 \u865a\u62df\u78c1\u76d8\u6620\u50cf \uff08VDI\uff09 \u6620\u50cf\u670d\u52a1\u652f\u6301\u7684\u865a\u62df\u673a\u6620\u50cf\u78c1\u76d8\u683c\u5f0f\u4e4b\u4e00\u3002 \u865a\u62df\u53ef\u6269\u5c55\u5c40\u57df\u7f51 \uff08VXLAN\uff09 \u4e00\u79cd\u7f51\u7edc\u865a\u62df\u5316\u6280\u672f\uff0c\u8bd5\u56fe\u51cf\u5c11\u4e0e\u5927\u578b\u4e91\u8ba1\u7b97\u90e8\u7f72\u76f8\u5173\u7684\u53ef\u4f38\u7f29\u6027\u95ee\u9898\u3002\u5b83\u4f7f\u7528\u7c7b\u4f3c VLAN \u7684\u5c01\u88c5\u6280\u672f\u5c06\u4ee5\u592a\u7f51\u5e27\u5c01\u88c5\u5728 UDP \u6570\u636e\u5305\u4e2d\u3002 \u865a\u62df\u786c\u76d8 \uff08VHD\uff09 \u955c\u50cf\u670d\u52a1\u652f\u6301\u7684\u865a\u62df\u673a\u955c\u50cf\u78c1\u76d8\u683c\u5f0f\u4e4b\u4e00\u3002 \u865a\u62df IP \u5730\u5740 \uff08VIP\uff09 \u5728\u8d1f\u8f7d\u5e73\u8861\u5668\u4e0a\u914d\u7f6e\u7684 Internet \u534f\u8bae \uff08IP\uff09 \u5730\u5740\uff0c\u4f9b\u8fde\u63a5\u5230\u8d1f\u8f7d\u5e73\u8861\u670d\u52a1\u7684\u5ba2\u6237\u7aef\u4f7f\u7528\u3002\u4f20\u5165\u8fde\u63a5\u5c06\u6839\u636e\u8d1f\u8f7d\u5747\u8861\u5668\u7684\u914d\u7f6e\u5206\u53d1\u5230\u540e\u7aef\u8282\u70b9\u3002 \u865a\u62df\u673a \uff08VM\uff09 \u5728\u865a\u62df\u673a\u76d1\u63a7\u7a0b\u5e8f\u4e0a\u8fd0\u884c\u7684\u64cd\u4f5c\u7cfb\u7edf\u5b9e\u4f8b\u3002\u591a\u4e2a VM \u53ef\u4ee5\u5728\u540c\u4e00\u7269\u7406\u4e3b\u673a\u4e0a\u540c\u65f6\u8fd0\u884c\u3002 \u865a\u62df\u7f51\u7edc \u7f51\u7edc\u4e2d\u7684 L2 \u7f51\u6bb5\u3002 \u865a\u62df\u7f51\u7edc\u8ba1\u7b97 \uff08VNC\uff09 \u7528\u4e8e\u8fdc\u7a0b\u63a7\u5236\u53f0\u8bbf\u95ee VM \u7684\u5f00\u6e90 GUI \u548c CLI \u5de5\u5177\u3002 \u865a\u62df\u7f51\u7edc\u63a5\u53e3 \uff08VIF\uff09 \u63d2\u5165\u7f51\u7edc\u7f51\u7edc\u4e2d\u7684\u7aef\u53e3\u7684\u63a5\u53e3\u3002\u901a\u5e38\u5c5e\u4e8e VM \u7684\u865a\u62df\u7f51\u7edc\u63a5\u53e3\u3002 \u865a\u62df\u7f51\u7edc \u4f7f\u7528\u7269\u7406\u7f51\u7edc\u57fa\u7840\u67b6\u6784\u4e0a\u7684\u865a\u62df\u673a\u548c\u8986\u76d6\u7f51\u7edc\u7ec4\u5408\u5b9e\u73b0\u7f51\u7edc\u529f\u80fd\u865a\u62df\u5316\uff08\u5982\u4ea4\u6362\u3001\u8def\u7531\u3001\u8d1f\u8f7d\u5e73\u8861\u548c\u5b89\u5168\u6027\uff09\u7684\u901a\u7528\u672f\u8bed\u3002 \u865a\u62df\u7aef\u53e3 \u865a\u62df\u63a5\u53e3\u8fde\u63a5\u5230\u865a\u62df\u7f51\u7edc\u7684\u8fde\u63a5\u70b9\u3002 \u865a\u62df\u4e13\u7528\u7f51\u7edc \uff08VPN\uff09 \u7531 Compute \u4ee5 cloudpipes \u7684\u5f62\u5f0f\u63d0\u4f9b\uff0c\u8fd9\u4e9b\u4e13\u7528\u5b9e\u4f8b\u7528\u4e8e\u6309\u9879\u76ee\u521b\u5efa VPN\u3002 \u865a\u62df\u670d\u52a1\u5668 VM \u6216\u6765\u5bbe\u7684\u66ff\u4ee3\u672f\u8bed\u3002 \u865a\u62df\u4ea4\u6362\u673a \uff08vSwitch\uff09 \u5728\u4e3b\u673a\u6216\u8282\u70b9\u4e0a\u8fd0\u884c\u5e76\u63d0\u4f9b\u57fa\u4e8e\u786c\u4ef6\u7684\u7f51\u7edc\u4ea4\u6362\u673a\u7684\u7279\u6027\u548c\u529f\u80fd\u7684\u8f6f\u4ef6\u3002 \u865a\u62df VLAN 
\u865a\u62df\u7f51\u7edc\u7684\u66ff\u4ee3\u672f\u8bed\u3002 VirtualBox \u652f\u6301 OpenStack \u7684\u865a\u62df\u673a\u7ba1\u7406\u7a0b\u5e8f\u3002 Vitrage Root Cause Analysis\u670d\u52a1\u7684\u4ee3\u7801\u540d\u79f0\u3002 VLAN \u7ba1\u7406\u5668 \u4e00\u4e2a Compute \u7ec4\u4ef6\uff0c\u5b83\u63d0\u4f9b dnsmasq \u548c radvd\uff0c\u5e76\u8bbe\u7f6e\u4e0e cloudpipe \u5b9e\u4f8b\u4e4b\u95f4\u7684\u8f6c\u53d1\u3002 VLAN \u7f51\u7edc \u7f51\u7edc\u63a7\u5236\u5668\u63d0\u4f9b\u865a\u62df\u7f51\u7edc\uff0c\u4f7f\u8ba1\u7b97\u670d\u52a1\u5668\u80fd\u591f\u76f8\u4e92\u4ea4\u4e92\u4ee5\u53ca\u4e0e\u516c\u7528\u7f51\u7edc\u4ea4\u4e92\u3002\u6240\u6709\u8ba1\u7b97\u673a\u90fd\u5fc5\u987b\u5177\u6709\u516c\u5171\u548c\u4e13\u7528\u7f51\u7edc\u63a5\u53e3\u3002VLAN \u7f51\u7edc\u662f\u4e00\u4e2a\u4e13\u7528\u7f51\u7edc\u63a5\u53e3\uff0c\u7531 VLAN \u7ba1\u7406\u5668 vlan_interface \u9009\u9879\u63a7\u5236\u3002 \u865a\u62df\u673a\u78c1\u76d8\uff08VMDK\uff09 \u955c\u50cf\u670d\u52a1\u652f\u6301\u7684\u865a\u62df\u673a\u955c\u50cf\u78c1\u76d8\u683c\u5f0f\u4e4b\u4e00\u3002 \u865a\u62df\u673a\u6620\u50cf \u6620\u50cf\u7684\u66ff\u4ee3\u672f\u8bed\u3002 \u865a\u62df\u673a\u8fdc\u7a0b\u63a7\u5236 \uff08VMRC\uff09 \u4f7f\u7528 Web \u6d4f\u89c8\u5668\u8bbf\u95ee VM \u5b9e\u4f8b\u63a7\u5236\u53f0\u7684\u65b9\u6cd5\u3002\u7531\u8ba1\u7b97\u652f\u6301\u3002 VMware API \u63a5\u53e3 \u652f\u6301\u5728\u8ba1\u7b97\u4e2d\u4e0e VMware \u4ea7\u54c1\u8fdb\u884c\u4ea4\u4e92\u3002 VMware NSX Neutron \u63d2\u4ef6 \u5728 Neutron \u4e2d\u63d0\u4f9b\u5bf9 VMware NSX \u7684\u652f\u6301\u3002 VNC \u4ee3\u7406 \u4e00\u4e2a\u8ba1\u7b97\u7ec4\u4ef6\uff0c\u5141\u8bb8\u7528\u6237\u901a\u8fc7 VNC \u6216 VMRC \u8bbf\u95ee\u5176 VM \u5b9e\u4f8b\u7684\u63a7\u5236\u53f0\u3002 \u5377 \u57fa\u4e8e\u78c1\u76d8\u7684\u6570\u636e\u5b58\u50a8\u901a\u5e38\u8868\u793a\u4e3a\u5177\u6709\u652f\u6301\u6269\u5c55\u5c5e\u6027\u7684\u6587\u4ef6\u7cfb\u7edf\u7684 iSCSI \u76ee\u6807;\u53ef\u4ee5\u662f\u6301\u4e45\u7684\uff0c\u4e5f\u53ef\u4ee5\u662f\u77ed\u6682\u7684\u3002 \u5377 API \u5757\u5b58\u50a8 API \u7684\u66ff\u4ee3\u540d\u79f0\u3002 \u5377\u63a7\u5236\u5668 \u4e00\u4e2a\u5757\u5b58\u50a8\u7ec4\u4ef6\uff0c\u7528\u4e8e\u76d1\u7763\u548c\u534f\u8c03\u5b58\u50a8\u5377\u64cd\u4f5c\u3002 \u5377\u9a71\u52a8\u7a0b\u5e8f \u5377\u63d2\u4ef6\u7684\u66ff\u4ee3\u672f\u8bed\u3002 \u5377 ID \u5e94\u7528\u4e8e\u5757\u5b58\u50a8\u63a7\u5236\u4e0b\u6bcf\u4e2a\u5b58\u50a8\u5377\u7684\u552f\u4e00 ID\u3002 \u5377\u7ba1\u7406\u5668 \u7528\u4e8e\u521b\u5efa\u3001\u9644\u52a0\u548c\u5206\u79bb\u6301\u4e45\u6027\u5b58\u50a8\u5377\u7684\u5757\u5b58\u50a8\u7ec4\u4ef6\u3002 \u5377\u8282\u70b9 \u8fd0\u884c cinder-volume \u5b88\u62a4\u7a0b\u5e8f\u7684\u5757\u5b58\u50a8\u8282\u70b9\u3002 \u5377\u63d2\u4ef6 \u4e3a\u5757\u5b58\u50a8\u5377\u7ba1\u7406\u5668\u63d0\u4f9b\u5bf9\u65b0\u578b\u548c\u4e13\u7528\u540e\u7aef\u5b58\u50a8\u7c7b\u578b\u7684\u652f\u6301\u3002 \u5377\u5de5\u4f5c\u5668 \u4e00\u4e2a cinder \u7ec4\u4ef6\uff0c\u5b83\u4e0e\u540e\u7aef\u5b58\u50a8\u4ea4\u4e92\uff0c\u4ee5\u7ba1\u7406\u5377\u7684\u521b\u5efa\u548c\u5220\u9664\u4ee5\u53ca\u8ba1\u7b97\u5377\u7684\u521b\u5efa\uff0c\u7531 cinder-volume \u5b88\u62a4\u7a0b\u5e8f\u63d0\u4f9b\u3002 vSphere \u652f\u6301 OpenStack \u7684\u865a\u62df\u673a\u7ba1\u7406\u7a0b\u5e8f\u3002 W \u00b6 Wallaby OpenStack \u7b2c 23 
\u7248\u7684\u4ee3\u53f7\u3002\u5c0f\u888b\u9f20\u539f\u4ea7\u4e8e\u6fb3\u5927\u5229\u4e9a\uff0c\u5728\u8fd9\u4e2a\u547d\u540d\u671f\u5f00\u59cb\u65f6\uff0c\u6fb3\u5927\u5229\u4e9a\u6b63\u5728\u7ecf\u5386\u524d\u6240\u672a\u6709\u7684\u91ce\u706b\u3002 Watcher \u57fa\u7840\u7ed3\u6784\u4f18\u5316\u670d\u52a1\u7684\u4ee3\u53f7\u3002 \u6743\u91cd \u5bf9\u8c61\u5b58\u50a8\u8bbe\u5907\u7528\u4e8e\u786e\u5b9a\u54ea\u4e9b\u5b58\u50a8\u8bbe\u5907\u9002\u5408\u4f5c\u4e1a\u3002\u8bbe\u5907\u6309\u5927\u5c0f\u52a0\u6743\u3002 \u52a0\u6743\u6210\u672c \u51b3\u5b9a\u5728\u8ba1\u7b97\u4e2d\u542f\u52a8\u65b0 VM \u5b9e\u4f8b\u7684\u4f4d\u7f6e\u65f6\u6240\u4f7f\u7528\u7684\u6bcf\u4e2a\u6210\u672c\u7684\u603b\u548c\u3002 \u52a0\u6743 \u4e00\u4e2a\u8ba1\u7b97\u8fc7\u7a0b\uff0c\u7528\u4e8e\u786e\u5b9a VM \u5b9e\u4f8b\u662f\u5426\u9002\u5408\u7279\u5b9a\u4e3b\u673a\u7684\u4f5c\u4e1a\u3002\u4f8b\u5982\uff0c\u4e3b\u673a\u4e0a\u7684 RAM \u4e0d\u8db3\u3001\u4e3b\u673a\u4e0a\u7684 CPU \u8fc7\u591a\u7b49\u3002 \u5de5\u4f5c\u8005 \u4fa6\u542c\u961f\u5217\u5e76\u6267\u884c\u4efb\u52a1\u4ee5\u54cd\u5e94\u6d88\u606f\u7684\u5b88\u62a4\u7a0b\u5e8f\u3002\u4f8b\u5982\uff0c cinder-volume worker \u7ba1\u7406\u5b58\u50a8\u9635\u5217\u4e0a\u7684\u5377\u521b\u5efa\u548c\u5220\u9664\u3002 \u5de5\u4f5c\u6d41\u670d\u52a1 \uff08mistral\uff09 OpenStack\u670d\u52a1\u63d0\u4f9b\u4e86\u4e00\u79cd\u57fa\u4e8eYAML\u7684\u7b80\u5355\u8bed\u8a00\u6765\u7f16\u5199\u5de5\u4f5c\u6d41\uff08\u4efb\u52a1\u548c\u8f6c\u6362\u89c4\u5219\uff09\uff0c\u4ee5\u53ca\u4e00\u79cd\u5141\u8bb8\u4e0a\u4f20\u3001\u4fee\u6539\u3001\u5927\u89c4\u6a21\u548c\u9ad8\u5ea6\u53ef\u7528\u7684\u65b9\u5f0f\u8fd0\u884c\u5b83\u4eec\u3001\u7ba1\u7406\u548c\u76d1\u63a7\u5de5\u4f5c\u6d41\u6267\u884c\u72b6\u6001\u548c\u5355\u4e2a\u4efb\u52a1\u72b6\u6001\u7684\u670d\u52a1\u3002 X \u00b6 X.509 X.509 \u662f\u5b9a\u4e49\u6570\u5b57\u8bc1\u4e66\u7684\u6700\u5e7f\u6cdb\u4f7f\u7528\u7684\u6807\u51c6\u3002\u5b83\u662f\u4e00\u79cd\u6570\u636e\u7ed3\u6784\uff0c\u5305\u542b\u4e3b\u9898\uff08\u5b9e\u4f53\uff09\u53ef\u8bc6\u522b\u4fe1\u606f\uff0c\u4f8b\u5982\u5176\u540d\u79f0\u53ca\u5176\u516c\u94a5\u3002\u8bc1\u4e66\u8fd8\u53ef\u4ee5\u5305\u542b\u4e00\u4e9b\u5176\u4ed6\u5c5e\u6027\uff0c\u5177\u4f53\u53d6\u51b3\u4e8e\u7248\u672c\u3002X.509 \u7684\u6700\u65b0\u6807\u51c6\u7248\u672c\u662f v3\u3002 Xen Xen \u662f\u4e00\u4e2a\u4f7f\u7528\u5fae\u5185\u6838\u8bbe\u8ba1\u7684\u865a\u62df\u673a\u7ba1\u7406\u7a0b\u5e8f\uff0c\u5b83\u63d0\u4f9b\u7684\u670d\u52a1\u5141\u8bb8\u591a\u4e2a\u8ba1\u7b97\u673a\u64cd\u4f5c\u7cfb\u7edf\u5728\u540c\u4e00\u8ba1\u7b97\u673a\u786c\u4ef6\u4e0a\u540c\u65f6\u6267\u884c\u3002 Xen API Xen \u7ba1\u7406 API\uff0c\u53d7 Compute \u652f\u6301\u3002 Xen \u4e91\u5e73\u53f0 \uff08XCP\uff09 \u652f\u6301 OpenStack \u7684\u865a\u62df\u673a\u7ba1\u7406\u7a0b\u5e8f\u3002 Xen Storage Manager \u5377\u9a71\u52a8\u7a0b\u5e8f \u652f\u6301\u4e0e Xen Storage Manager API \u8fdb\u884c\u901a\u4fe1\u7684\u5757\u5b58\u50a8\u5377\u63d2\u4ef6\u3002 Xena OpenStack \u7b2c 24 \u7248\u7684\u4ee3\u53f7\u3002\u8be5\u7248\u672c\u4ee5\u865a\u6784\u7684\u6218\u58eb\u516c\u4e3b\u547d\u540d\u3002 XenServer An OpenStack-supported hypervisor. 
\u652f\u6301 OpenStack \u7684\u865a\u62df\u673a\u7ba1\u7406\u7a0b\u5e8f\u3002 XFS \u51fd\u6570 \u7531 Silicon Graphics \u521b\u5efa\u7684\u9ad8\u6027\u80fd 64 \u4f4d\u6587\u4ef6\u7cfb\u7edf\u3002\u5728\u5e76\u884c I/O \u64cd\u4f5c\u548c\u6570\u636e\u4e00\u81f4\u6027\u65b9\u9762\u8868\u73b0\u51fa\u8272\u3002 Y \u00b6 Yoga OpenStack \u7b2c 25 \u7248\u7684\u4ee3\u53f7\u3002\u8be5\u7248\u672c\u4ee5\u6765\u81ea\u5370\u5ea6\u7684\u4e00\u6240\u54f2\u5b66\u5b66\u6821\u547d\u540d\uff0c\u8be5\u5b66\u6821\u5177\u6709\u5fc3\u7406\u548c\u8eab\u4f53\u5b9e\u8df5\u3002 Z \u00b6 Yoga \u6d88\u606f\u670d\u52a1\u7684\u4ee3\u53f7\u3002 Zed OpenStack \u7b2c 26 \u7248\u7684\u4ee3\u53f7\u3002\u8be5\u7248\u672c\u4ee5\u5b57\u6bcd Z \u7684\u53d1\u97f3\u547d\u540d\u3002 ZeroMQ OpenStack \u652f\u6301\u7684\u6d88\u606f\u961f\u5217\u8f6f\u4ef6\u3002RabbitMQ \u7684\u66ff\u4ee3\u54c1\u3002\u4e5f\u62fc\u5199\u4e3a 0MQ\u3002 Zuul Zuul \u662f\u4e00\u4e2a\u5f00\u6e90 CI/CD \u5e73\u53f0\uff0c\u4e13\u95e8\u7528\u4e8e\u5728\u767b\u9646\u5355\u4e2a\u8865\u4e01\u4e4b\u524d\u8de8\u591a\u4e2a\u7cfb\u7edf\u548c\u5e94\u7528\u7a0b\u5e8f\u8fdb\u884c\u95e8\u63a7\u66f4\u6539\u3002 Zuul \u7528\u4e8e OpenStack \u5f00\u53d1\uff0c\u4ee5\u786e\u4fdd\u53ea\u6709\u7ecf\u8fc7\u6d4b\u8bd5\u7684\u4ee3\u7801\u624d\u4f1a\u88ab\u5408\u5e76\u3002","title":"\u5b89\u5168\u6307\u5357"},{"location":"security/security-guide/#openstack","text":"\u672c\u6587\u7ffb\u8bd1\u81ea \u4e0a\u6e38\u5b89\u5168\u6307\u5357 OpenStack\u5b89\u5168\u6307\u5357 \u6458\u8981 \u5185\u5bb9 \u7ea6\u5b9a \u6ce8\u610f\u4e8b\u9879 \u547d\u4ee4\u63d0\u793a\u7b26 \u4ecb\u7ecd \u81f4\u8c22 \u6211\u4eec\u4e3a\u4ec0\u4e48\u4ee5\u53ca\u5982\u4f55\u5199\u8fd9\u672c\u4e66 \u76ee\u6807 \u5199\u4f5c\u8bb0\u5f55 \u5982\u4f55\u4e3a\u672c\u4e66\u505a\u8d21\u732e OpenStack \u7b80\u4ecb \u4e91\u7c7b\u578b \u516c\u6709\u4e91 \u79c1\u6709\u4e91 \u793e\u533a\u4e91 \u6df7\u5408\u4e91 OpenStack \u670d\u52a1\u6982\u8ff0 \u8ba1\u7b97 \u5bf9\u8c61\u5b58\u50a8 \u5757\u5b58\u50a8 \u5171\u4eab\u6587\u4ef6\u7cfb\u7edf \u7f51\u7edc \u4eea\u8868\u677f \u8eab\u4efd\u9274\u522b\u670d\u52a1 \u955c\u50cf\u670d\u52a1 \u6570\u636e\u5904\u7406\u670d\u52a1 \u5176\u4ed6\u914d\u5957\u6280\u672f \u5b89\u5168\u8fb9\u754c\u548c\u5a01\u80c1 \u5b89\u5168\u57df \u516c\u5171 \u8bbf\u5ba2 \u7ba1\u7406 \u6570\u636e \u6865\u63a5\u5b89\u5168\u57df \u5a01\u80c1\u5206\u7c7b\u3001\u53c2\u4e0e\u8005\u548c\u653b\u51fb\u5411\u91cf \u5a01\u80c1\u53c2\u4e0e\u8005 \u60c5\u62a5\u673a\u6784 \u4e25\u91cd\u6709\u7ec4\u7ec7\u72af\u7f6a \u9ad8\u80fd\u529b\u7684\u56e2\u961f \u6709\u52a8\u673a\u7684\u4e2a\u4eba \u811a\u672c\u653b\u51fb\u8005 \u516c\u6709\u4e91\u548c\u79c1\u6709\u4e91\u6ce8\u610f\u4e8b\u9879 \u51fa\u7ad9\u653b\u51fb\u548c\u58f0\u8a89\u98ce\u9669 \u653b\u51fb\u7c7b\u578b \u9009\u62e9\u652f\u6301\u8f6f\u4ef6 \u56e2\u961f\u4e13\u4e1a\u77e5\u8bc6 \u4ea7\u54c1\u6216\u9879\u76ee\u6210\u719f\u5ea6 \u901a\u7528\u6807\u51c6 \u786c\u4ef6\u95ee\u9898 \u7cfb\u7edf\u6587\u6863 \u7cfb\u7edf\u6587\u6863\u8981\u6c42 \u7cfb\u7edf\u89d2\u8272\u548c\u7c7b\u578b \u57fa\u7840\u8bbe\u65bd\u8282\u70b9 \u8ba1\u7b97\u3001\u5b58\u50a8\u6216\u5176\u4ed6\u8d44\u6e90\u8282\u70b9 \u7cfb\u7edf\u6e05\u5355 \u786c\u4ef6\u6e05\u5355 \u8f6f\u4ef6\u6e05\u5355 \u7f51\u7edc\u62d3\u6251 \u670d\u52a1\u3001\u534f\u8bae\u548c\u7aef\u53e3 \u7ba1\u7406 \u6301\u7eed\u7684\u7cfb\u7edf\u7ba1\u7406 \u6f0f\u6d1e\u7ba1\u7406 \u5206\u7c7b \u6d4b\u8bd5\u66f4\u65b0 \u90e8\u7f72\u66f4\u65b0 \u914d\u7f6e\u7ba1\u7406 \u7b56\u7565\u66f4\u6539 \u5b89\u5168\u5907\u4efd\u548c\u6062\u590d 
\u5b89\u5168\u5ba1\u8ba1\u5de5\u5177 \u5b8c\u6574\u6027\u751f\u547d\u5468\u671f \u5b89\u5168\u5f15\u5bfc \u8282\u70b9\u914d\u7f6e \u9a8c\u8bc1\u542f\u52a8 \u8282\u70b9\u52a0\u56fa \u5f3a\u5236\u8bbf\u95ee\u63a7\u5236 \uff08MAC\uff09 \u5220\u9664\u8f6f\u4ef6\u5305\u5e76\u505c\u6b62\u670d\u52a1 \u53ea\u8bfb\u6587\u4ef6\u7cfb\u7edf \u7cfb\u7edf\u9a8c\u8bc1 \u8fd0\u884c\u65f6\u9a8c\u8bc1 \u5165\u4fb5\u68c0\u6d4b\u7cfb\u7edf \u670d\u52a1\u5668\u52a0\u56fa \u6587\u4ef6\u5b8c\u6574\u6027\u7ba1\u7406\uff08FIM\uff09 \u7ba1\u7406\u754c\u9762 \u4eea\u8868\u677f \u529f\u80fd \u5b89\u5168\u6ce8\u610f\u4e8b\u9879 \u53c2\u8003\u4e66\u76ee OpenStack \u63a5\u53e3 \u529f\u80fd \u5b89\u5168\u6ce8\u610f\u4e8b\u9879 \u5b89\u5168\u5916\u58f3 \uff08SSH\uff09 \u4e3b\u673a\u5bc6\u94a5\u6307\u7eb9 \u7ba1\u7406\u5b9e\u7528\u7a0b\u5e8f \u5b89\u5168\u6ce8\u610f\u4e8b\u9879 \u53c2\u8003\u4e66\u76ee \u5e26\u5916\u7ba1\u7406\u63a5\u53e3 \u5b89\u5168\u6ce8\u610f\u4e8b\u9879 \u53c2\u8003\u4e66\u76ee \u5b89\u5168\u901a\u4fe1 TLS \u548c SSL \u7b80\u4ecb \u8bc1\u4e66\u9881\u53d1\u673a\u6784 TLS \u5e93 \u52a0\u5bc6\u7b97\u6cd5\u3001\u5bc6\u7801\u6a21\u5f0f\u548c\u534f\u8bae \u603b\u7ed3 TLS \u4ee3\u7406\u548c HTTP \u670d\u52a1 \u793a\u4f8b Pound Stud Nginx Apache HTTP \u4e25\u683c\u4f20\u8f93\u5b89\u5168 \u5b8c\u5168\u524d\u5411\u4fdd\u5bc6 \u5b89\u5168\u53c2\u8003\u67b6\u6784 SSL/TLS \u4ee3\u7406\u5728\u524d\u9762 \u4e0e API \u7aef\u70b9\u4f4d\u4e8e\u540c\u4e00\u7269\u7406\u4e3b\u673a\u4e0a\u7684 SSL/TLS SSL/TLS\u8d1f\u8f7d\u5e73\u8861\u5668 \u5916\u90e8\u548c\u5185\u90e8\u73af\u5883\u7684\u52a0\u5bc6\u5206\u79bb API \u7aef\u70b9 API \u7aef\u70b9\u914d\u7f6e\u5efa\u8bae \u5185\u90e8 API \u901a\u4fe1 \u5728\u8eab\u4efd\u670d\u52a1\u76ee\u5f55\u4e2d\u914d\u7f6e\u5185\u90e8 URL \u4e3a\u5185\u90e8 URL \u914d\u7f6e\u5e94\u7528\u7a0b\u5e8f \u7c98\u8d34\u548c\u4e2d\u95f4\u4ef6 API \u7aef\u70b9\u8fdb\u7a0b\u9694\u79bb\u548c\u7b56\u7565 \u547d\u540d\u7a7a\u95f4 \u7f51\u7edc\u7b56\u7565 \u5f3a\u5236\u8bbf\u95ee\u63a7\u5236 API \u7aef\u70b9\u901f\u7387\u9650\u5236 \u8eab\u4efd\u9274\u522b \u8ba4\u8bc1 \u65e0\u6548\u7684\u767b\u5f55\u5c1d\u8bd5 \u591a\u56e0\u7d20\u8eab\u4efd\u9a8c\u8bc1 \u8eab\u4efd\u9a8c\u8bc1\u65b9\u6cd5 \u5185\u90e8\u5b9e\u73b0\u7684\u8ba4\u8bc1\u65b9\u5f0f \u5916\u90e8\u8ba4\u8bc1\u65b9\u5f0f \u6388\u6743 \u5efa\u7acb\u6b63\u5f0f\u7684\u8bbf\u95ee\u63a7\u5236\u7b56\u7565 \u670d\u52a1\u6388\u6743 \u7ba1\u7406\u5458\u7528\u6237 \u7ec8\u7aef\u7528\u6237 \u653f\u7b56 \u4ee4\u724c Fernet \u4ee4\u724c JWT \u4ee4\u724c \u57df \u8054\u5408\u9274\u6743 \u4e3a\u4ec0\u4e48\u8981\u4f7f\u7528\u8054\u5408\u8eab\u4efd\uff1f \u68c0\u67e5\u8868 Check-Identity-01\uff1a\u914d\u7f6e\u6587\u4ef6\u7684\u7528\u6237/\u7ec4\u6240\u6709\u6743\u662f\u5426\u8bbe\u7f6e\u4e3a keystone\uff1f Check-Identity-02\uff1a\u662f\u5426\u4e3a Identity \u914d\u7f6e\u6587\u4ef6\u8bbe\u7f6e\u4e86\u4e25\u683c\u6743\u9650\uff1f Check-Identity-03\uff1a\u662f\u5426\u4e3a Identity \u542f\u7528\u4e86 TLS\uff1f Check-Identity-04\uff1a\uff08\u5df2\u8fc7\u65f6\uff09 Check-Identity-05\uff1a\u662f\u5426 max_request_body_size \u8bbe\u7f6e\u4e3a\u9ed8\u8ba4\u503c \uff08114688\uff09\uff1f check-identity-06:\u7981\u7528/etc/keystone/keystone.conf\u4e2d\u7684\u7ba1\u7406\u4ee4\u724c check-identity-07:/etc/keystone/keystone.conf\u4e2d\u7684\u4e0d\u5b89\u5168_\u8c03\u8bd5\u4e3a\u5047 check-identity-08:\u4f7f\u7528/etc/keystone/keystone.conf\u4e2d\u7684Fernet\u4ee4\u724c \u4eea\u8868\u677f \u57df\u540d\u3001\u4eea\u8868\u677f\u5347\u7ea7\u548c\u57fa\u672c Web 
\u670d\u52a1\u5668\u914d\u7f6e \u57df\u540d \u57fa\u672c\u7684 Web \u670d\u52a1\u5668\u914d\u7f6e \u5141\u8bb8\u7684\u4e3b\u673a Horizon \u955c\u50cf\u4e0a\u4f20 HTTPS\u3001HSTS\u3001XSS \u548c SSRF \u8de8\u7ad9\u811a\u672c \uff08XSS\uff09 \u8de8\u7ad9\u8bf7\u6c42\u4f2a\u9020 \uff08CSRF\uff09 \u8de8\u5e27\u811a\u672c \uff08XFS\uff09 HTTPS \u51fd\u6570 HTTP \u4e25\u683c\u4f20\u8f93\u5b89\u5168 \uff08HSTS\uff09 \u524d\u7aef\u7f13\u5b58\u548c\u4f1a\u8bdd\u540e\u7aef \u524d\u7aef\u7f13\u5b58 \u4f1a\u8bdd\u540e\u7aef \u9759\u6001\u5a92\u4f53 \u5bc6\u7801 \u5bc6\u94a5 Cookies \u8de8\u57df\u8d44\u6e90\u5171\u4eab \uff08CORS\uff09 \u8c03\u8bd5 \u68c0\u67e5\u8868 Check-Dashboard-01\uff1a\u7528\u6237/\u914d\u7f6e\u6587\u4ef6\u7ec4\u662f\u5426\u8bbe\u7f6e\u4e3a root/horizon\uff1f Check-Dashboard-02\uff1a\u662f\u5426\u4e3a Horizon \u914d\u7f6e\u6587\u4ef6\u8bbe\u7f6e\u4e86\u4e25\u683c\u6743\u9650\uff1f Check-Dashboard-03\uff1a\u53c2\u6570\u662f\u5426 DISALLOW_IFRAME_EMBED \u8bbe\u7f6e\u4e3a True \uff1f Check-Dashboard-04\uff1a\u53c2\u6570\u662f\u5426 CSRF_COOKIE_SECURE \u8bbe\u7f6e\u4e3a True \uff1f Check-Dashboard-05\uff1a\u53c2\u6570\u662f\u5426 SESSION_COOKIE_SECURE \u8bbe\u7f6e\u4e3a True \uff1f Check-Dashboard-06\uff1a\u53c2\u6570\u662f\u5426 SESSION_COOKIE_HTTPONLY \u8bbe\u7f6e\u4e3a True \uff1f Check-Dashboard-07\uff1a PASSWORD_AUTOCOMPLETE \u8bbe\u7f6e\u4e3a False \uff1f Check-Dashboard-08\uff1a DISABLE_PASSWORD_REVEAL \u8bbe\u7f6e\u4e3a True \uff1f Check-Dashboard-09\uff1a ENFORCE_PASSWORD_CHECK \u8bbe\u7f6e\u4e3a True \uff1f Check-Dashboard-10\uff1a\u662f\u5426 PASSWORD_VALIDATOR \u5df2\u914d\u7f6e\uff1f Check-Dashboard-11\uff1a\u662f\u5426 SECURE_PROXY_SSL_HEADER \u5df2\u914d\u7f6e\uff1f \u8ba1\u7b97 \u865a\u62df\u673a\u7ba1\u7406\u7a0b\u5e8f\u9009\u62e9 OpenStack \u4e2d\u7684\u865a\u62df\u673a\u7ba1\u7406\u7a0b\u5e8f \u9009\u62e9\u6807\u51c6 \u56e2\u961f\u4e13\u957f \u4ea7\u54c1\u6216\u9879\u76ee\u6210\u719f\u5ea6 \u8ba4\u8bc1\u548c\u8bc1\u660e \u901a\u7528\u6807\u51c6 \u5bc6\u7801\u5b66\u6807\u51c6 FIPS 140-2 \u786c\u4ef6\u95ee\u9898 \u865a\u62df\u673a\u7ba1\u7406\u7a0b\u5e8f\u4e0e\u88f8\u673a Hypervisor \u5185\u5b58\u4f18\u5316 KVM \u5185\u6838\u540c\u9875\u5408\u5e76 XEN \u900f\u660e\u9875\u9762\u5171\u4eab \u5185\u5b58\u4f18\u5316\u7684\u5b89\u5168\u6ce8\u610f\u4e8b\u9879 \u5176\u4ed6\u5b89\u5168\u529f\u80fd \u53c2\u8003\u4e66\u76ee \u52a0\u56fa\u865a\u62df\u5316\u5c42 \u7269\u7406\u786c\u4ef6\uff08PCI\u76f4\u901a\uff09 \u865a\u62df\u786c\u4ef6 \uff08QEMU\uff09 \u6700\u5c0f\u5316 QEMU \u4ee3\u7801\u5e93 \u7f16\u8bd1\u5668\u52a0\u56fa \u5b89\u5168\u52a0\u5bc6\u865a\u62df\u5316 \u5f3a\u5236\u8bbf\u95ee\u63a7\u5236 sVirt\uff1aSELinux \u548c\u865a\u62df\u5316 \u6807\u7b7e\u548c\u7c7b\u522b SELinux \u7528\u6237\u548c\u89d2\u8272 \u5e03\u5c14\u503c \u52a0\u56fa\u8ba1\u7b97\u90e8\u7f72 OpenStack \u6f0f\u6d1e\u7ba1\u7406\u56e2\u961f OpenStack \u5b89\u5168\u6ce8\u610f\u4e8b\u9879 OpenStack-dev \u90ae\u4ef6\u5217\u8868 \u865a\u62df\u673a\u7ba1\u7406\u7a0b\u5e8f\u90ae\u4ef6\u5217\u8868 \u6f0f\u6d1e\u610f\u8bc6 OpenStack \u6f0f\u6d1e\u7ba1\u7406\u56e2\u961f OpenStack \u5b89\u5168\u6ce8\u610f\u4e8b\u9879 OpenStack-discuss \u90ae\u4ef6\u5217\u8868 \u865a\u62df\u673a\u7ba1\u7406\u7a0b\u5e8f\u90ae\u4ef6\u5217\u8868 \u5982\u4f55\u9009\u62e9\u865a\u62df\u63a7\u5236\u53f0 \u865a\u62df\u7f51\u7edc\u8ba1\u7b97\u673a \uff08VNC\uff09 \u529f\u80fd \u5b89\u5168\u6ce8\u610f\u4e8b\u9879 \u53c2\u8003\u4e66\u76ee \u72ec\u7acb\u8ba1\u7b97\u73af\u5883\u7684\u7b80\u5355\u534f\u8bae \uff08SPICE\uff09 \u529f\u80fd 
\u9650\u5236 \u5b89\u5168\u6ce8\u610f\u4e8b\u9879 \u53c2\u8003\u4e66\u76ee \u68c0\u67e5\u8868 Check-Compute-01\uff1a\u914d\u7f6e\u6587\u4ef6\u7684\u7528\u6237/\u7ec4\u6240\u6709\u6743\u662f\u5426\u8bbe\u7f6e\u4e3a root/nova\uff1f Check-Compute-02\uff1a\u662f\u5426\u4e3a\u914d\u7f6e\u6587\u4ef6\u8bbe\u7f6e\u4e86\u4e25\u683c\u7684\u6743\u9650\uff1f Check-Compute-03\uff1aKeystone \u662f\u5426\u7528\u4e8e\u8eab\u4efd\u9a8c\u8bc1\uff1f Check-Compute-04\uff1a\u662f\u5426\u4f7f\u7528\u5b89\u5168\u534f\u8bae\u8fdb\u884c\u8eab\u4efd\u9a8c\u8bc1\uff1f Check-Compute-05\uff1aNova \u4e0e Glance \u7684\u901a\u4fe1\u662f\u5426\u5b89\u5168\uff1f \u5757\u5b58\u50a8 \u5377\u64e6\u9664 \u68c0\u67e5\u8868 Check-Block-01\uff1a\u914d\u7f6e\u6587\u4ef6\u7684\u7528\u6237/\u7ec4\u6240\u6709\u6743\u662f\u5426\u8bbe\u7f6e\u4e3a root/cinder\uff1f Check-Block-02\uff1a\u662f\u5426\u4e3a\u914d\u7f6e\u6587\u4ef6\u8bbe\u7f6e\u4e86\u4e25\u683c\u7684\u6743\u9650\uff1f Check-Block-03\uff1aKeystone \u662f\u5426\u7528\u4e8e\u8eab\u4efd\u9a8c\u8bc1\uff1f Check-Block-04\uff1a\u662f\u5426\u542f\u7528\u4e86 TLS \u8fdb\u884c\u8eab\u4efd\u9a8c\u8bc1\uff1f Check-Block-05\uff1acinder \u662f\u5426\u901a\u8fc7 TLS \u4e0e nova \u901a\u4fe1\uff1f Check-Block-06\uff1acinder \u662f\u5426\u901a\u8fc7 TLS \u4e0e glance \u901a\u4fe1\uff1f Check-Block-07\uff1a NAS \u662f\u5426\u5728\u5b89\u5168\u7684\u73af\u5883\u4e2d\u8fd0\u884c\uff1f Check-Block-08\uff1a\u8bf7\u6c42\u6b63\u6587\u7684\u6700\u5927\u5927\u5c0f\u662f\u5426\u8bbe\u7f6e\u4e3a\u9ed8\u8ba4\u503c \uff08114688\uff09\uff1f Check-Block-09\uff1a\u662f\u5426\u542f\u7528\u4e86\u5377\u52a0\u5bc6\u529f\u80fd\uff1f \u56fe\u50cf\u5b58\u50a8 \u68c0\u67e5\u8868 Check-Image-01\uff1a\u914d\u7f6e\u6587\u4ef6\u7684\u7528\u6237/\u7ec4\u6240\u6709\u6743\u662f\u5426\u8bbe\u7f6e\u4e3a root/glance\uff1f Check-Image-02\uff1a\u662f\u5426\u4e3a\u914d\u7f6e\u6587\u4ef6\u8bbe\u7f6e\u4e86\u4e25\u683c\u7684\u6743\u9650\uff1f Check-Image-03\uff1aKeystone \u662f\u5426\u7528\u4e8e\u8eab\u4efd\u9a8c\u8bc1\uff1f Check-Image-04\uff1a\u662f\u5426\u542f\u7528\u4e86 TLS \u8fdb\u884c\u8eab\u4efd\u9a8c\u8bc1\uff1f Check-Image-05\uff1a\u662f\u5426\u963b\u6b62\u4e86\u5c4f\u853d\u7aef\u53e3\u626b\u63cf\uff1f \u5171\u4eab\u6587\u4ef6\u7cfb\u7edf \u4ecb\u7ecd \u4e00\u822c\u5b89\u5168\u4fe1\u606f \u7f51\u7edc\u548c\u5b89\u5168\u6a21\u578b \u5171\u4eab\u540e\u7aef\u6a21\u5f0f \u6241\u5e73\u5316\u4e0e\u5206\u6bb5\u5316\u7f51\u7edc \u7f51\u7edc\u63d2\u4ef6 \u5b89\u5168\u670d\u52a1 \u5b89\u5168\u670d\u52a1\u4ecb\u7ecd \u5b89\u5168\u670d\u52a1\u7ba1\u7406 \u5171\u4eab\u8bbf\u95ee\u63a7\u5236 \u5171\u4eab\u7c7b\u578b\u8bbf\u95ee\u63a7\u5236 \u653f\u7b56 \u68c0\u67e5\u8868 Check-Shared-01\uff1a\u914d\u7f6e\u6587\u4ef6\u7684\u7528\u6237/\u7ec4\u6240\u6709\u6743\u662f\u5426\u8bbe\u7f6e\u4e3a root/manila\uff1f Check-Shared-02\uff1a\u662f\u5426\u4e3a\u914d\u7f6e\u6587\u4ef6\u8bbe\u7f6e\u4e86\u4e25\u683c\u7684\u6743\u9650\uff1f Check-Shared-03\uff1aOpenStack Identity \u662f\u5426\u7528\u4e8e\u8eab\u4efd\u9a8c\u8bc1\uff1f Check-Shared-04\uff1a\u662f\u5426\u542f\u7528\u4e86 TLS \u8fdb\u884c\u8eab\u4efd\u9a8c\u8bc1\uff1f Check-Shared-05\uff1a\u5171\u4eab\u6587\u4ef6\u7cfb\u7edf\u662f\u5426\u901a\u8fc7 TLS \u4e0e\u8ba1\u7b97\u8054\u7cfb\uff1f Check-Shared-06\uff1a\u5171\u4eab\u6587\u4ef6\u7cfb\u7edf\u662f\u5426\u901a\u8fc7 TLS \u4e0e\u7f51\u7edc\u8054\u7cfb\uff1f Check-Shared-07\uff1a\u5171\u4eab\u6587\u4ef6\u7cfb\u7edf\u662f\u5426\u901a\u8fc7 TLS \u4e0e\u5757\u5b58\u50a8\u8054\u7cfb\uff1f 
Check-Shared-08\uff1a\u8bf7\u6c42\u6b63\u6587\u7684\u6700\u5927\u5927\u5c0f\u662f\u5426\u8bbe\u7f6e\u4e3a\u9ed8\u8ba4\u503c \uff08114688\uff09\uff1f \u8054\u7f51 \u7f51\u7edc\u67b6\u6784 OpenStack Networking \u670d\u52a1\u5728\u7269\u7406\u670d\u52a1\u5668\u4e0a\u7684\u653e\u7f6e \u7269\u7406\u670d\u52a1\u5668\u7684\u7f51\u7edc\u8fde\u63a5 \u7f51\u7edc\u670d\u52a1 \u4f7f\u7528 VLAN \u548c\u96a7\u9053\u7684 L2 \u9694\u79bb VLANs L2 \u96a7\u9053 \u7f51\u7edc\u670d\u52a1 \u8bbf\u95ee\u63a7\u5236\u5217\u8868 L3 \u8def\u7531\u548c NAT \u670d\u52a1\u8d28\u91cf \uff08QoS\uff09 \u8d1f\u8f7d\u5747\u8861 \u9632\u706b\u5899 \u7f51\u7edc\u670d\u52a1\u6269\u5c55 \u7f51\u7edc\u670d\u52a1\u9650\u5236 \u7f51\u7edc\u670d\u52a1\u5b89\u5168\u6700\u4f73\u505a\u6cd5 OpenStack Networking \u670d\u52a1\u914d\u7f6e \u9650\u5236 API \u670d\u52a1\u5668\u7684\u7ed1\u5b9a\u5730\u5740\uff1aneutron-server \u9650\u5236 OpenStack Networking \u670d\u52a1\u7684 DB \u548c RPC \u901a\u4fe1 \u4fdd\u62a4 OpenStack \u7f51\u7edc\u670d\u52a1 \u9879\u76ee\u7f51\u7edc\u670d\u52a1\u5de5\u4f5c\u6d41 \u7f51\u7edc\u8d44\u6e90\u7b56\u7565\u5f15\u64ce \u5b89\u5168\u7ec4 \u914d\u989d \u7f13\u89e3 ARP \u6b3a\u9a97 \u68c0\u67e5\u8868 Check-Neutron-01\uff1a\u914d\u7f6e\u6587\u4ef6\u7684\u7528\u6237/\u7ec4\u6240\u6709\u6743\u662f\u5426\u8bbe\u7f6e\u4e3a root/neutron\uff1f Check-Neutron-02\uff1a\u662f\u5426\u4e3a\u914d\u7f6e\u6587\u4ef6\u8bbe\u7f6e\u4e86\u4e25\u683c\u7684\u6743\u9650\uff1f Check-Neutron-03\uff1aKeystone\u662f\u5426\u7528\u4e8e\u8eab\u4efd\u9a8c\u8bc1\uff1f Check-Neutron-04\uff1a\u662f\u5426\u4f7f\u7528\u5b89\u5168\u534f\u8bae\u8fdb\u884c\u8eab\u4efd\u9a8c\u8bc1\uff1f Check-Neutron-05\uff1aNeutron API \u670d\u52a1\u5668\u4e0a\u662f\u5426\u542f\u7528\u4e86 TLS\uff1f \u5bf9\u8c61\u5b58\u50a8 \u7f51\u7edc\u5b89\u5168 \u4e00\u822c\u670d\u52a1\u5b89\u5168 \u4ee5\u975e root \u7528\u6237\u8eab\u4efd\u8fd0\u884c\u670d\u52a1 \u6587\u4ef6\u6743\u9650 \u4fdd\u62a4\u5b58\u50a8\u670d\u52a1 \u5bf9\u8c61\u5b58\u50a8\u5e10\u6237\u672f\u8bed \u4fdd\u62a4\u4ee3\u7406\u670d\u52a1 HTTP \u76d1\u542c\u7aef\u53e3 \u8d1f\u8f7d\u5747\u8861\u5668 \u5bf9\u8c61\u5b58\u50a8\u8eab\u4efd\u9a8c\u8bc1 TempAuth \u51fd\u6570 Keystone \u5176\u4ed6\u503c\u5f97\u6ce8\u610f\u7684\u4e8b\u9879 \u673a\u5bc6\u7ba1\u7406 \u73b0\u6709\u6280\u672f\u6458\u8981 \u76f8\u5173 Openstack \u9879\u76ee \u4f7f\u7528\u6848\u4f8b \u955c\u50cf\u7b7e\u540d\u9a8c\u8bc1 \u5377\u52a0\u5bc6 \u4e34\u65f6\u78c1\u76d8\u52a0\u5bc6 Sahara Magnum Octavia/LBaaS Swift \u914d\u7f6e\u6587\u4ef6\u4e2d\u7684\u5bc6\u7801 Barbican \u6982\u8ff0 Barbican \u57fa\u4e8e\u89d2\u8272\u7684\u8bbf\u95ee\u63a7\u5236 \u673a\u5bc6\u5b58\u50a8\u540e\u7aef \u52a0\u5bc6\u63d2\u4ef6 \u7b80\u5355\u7684\u52a0\u5bc6\u63d2\u4ef6 PKCS#11 \u52a0\u5bc6\u63d2\u4ef6 \u673a\u5bc6\u5b58\u50a8\u63d2\u4ef6 KMIP \u63d2\u4ef6 Dogtag \u63d2\u4ef6 Vault \u63d2\u4ef6 \u5a01\u80c1\u5206\u6790 Castellan \u6982\u8ff0 \u5e38\u89c1\u95ee\u9898\u89e3\u7b54 \u68c0\u67e5\u8868 Check-Key-Manager-01\uff1a\u914d\u7f6e\u6587\u4ef6\u7684\u6240\u6709\u6743\u662f\u5426\u8bbe\u7f6e\u4e3a root/barbican\uff1f Check-Key-Manager-02\uff1a\u662f\u5426\u4e3a\u914d\u7f6e\u6587\u4ef6\u8bbe\u7f6e\u4e86\u4e25\u683c\u7684\u6743\u9650\uff1f Check-Key-Manager-03\uff1aOpenStack Identity \u662f\u5426\u7528\u4e8e\u8eab\u4efd\u9a8c\u8bc1\uff1f Check-Key-Manager-04\uff1a\u662f\u5426\u542f\u7528\u4e86 TLS \u8fdb\u884c\u8eab\u4efd\u9a8c\u8bc1\uff1f \u6d88\u606f\u961f\u5217 \u6d88\u606f\u5b89\u5168 \u6d88\u606f\u4f20\u8f93\u5b89\u5168 RabbitMQ \u670d\u52a1\u5668 SSL 
\u914d\u7f6e Qpid \u670d\u52a1\u5668 SSL \u914d\u7f6e \u961f\u5217\u8ba4\u8bc1\u548c\u8bbf\u95ee\u63a7\u5236 \u8eab\u4efd\u9a8c\u8bc1\u914d\u7f6e\u793a\u4f8b\uff1aRabbitMQ OpenStack \u670d\u52a1\u914d\u7f6e\uff1aRabbitMQ \u8eab\u4efd\u9a8c\u8bc1\u914d\u7f6e\u793a\u4f8b\uff1aQpid OpenStack \u670d\u52a1\u914d\u7f6e\uff1aQpid \u6d88\u606f\u961f\u5217\u8fdb\u7a0b\u9694\u79bb\u548c\u7b56\u7565 \u547d\u540d\u7a7a\u95f4 \u7f51\u7edc\u7b56\u7565 \u5f3a\u5236\u8bbf\u95ee\u63a7\u5236 \u6570\u636e\u5904\u7406 \u6570\u636e\u5904\u7406\u7b80\u4ecb \u67b6\u6784 \u6d89\u53ca\u7684\u6280\u672f \u7528\u6237\u8bbf\u95ee\u8d44\u6e90 \u90e8\u7f72 \u63a7\u5236\u5668\u5bf9\u96c6\u7fa4\u7684\u7f51\u7edc\u8bbf\u95ee \u914d\u7f6e\u548c\u5f3a\u5316 TLS\u7cfb\u7edf \u57fa\u4e8e\u89d2\u8272\u7684\u8bbf\u95ee\u63a7\u5236\u7b56\u7565 \u5b89\u5168\u7ec4 \u4ee3\u7406\u57df \u81ea\u5b9a\u4e49\u7f51\u7edc\u62d3\u6251 \u95f4\u63a5\u8bbf\u95ee Rootwrap \u65e5\u5fd7 \u53c2\u8003\u4e66\u76ee \u6570\u636e\u5e93 \u6570\u636e\u5e93\u540e\u7aef\u6ce8\u610f\u4e8b\u9879 \u6570\u636e\u5e93\u540e\u7aef\u7684\u5b89\u5168\u53c2\u8003 \u6570\u636e\u5e93\u8bbf\u95ee\u63a7\u5236 OpenStack \u6570\u636e\u5e93\u8bbf\u95ee\u6a21\u578b \u7cbe\u7ec6\u8bbf\u95ee\u63a7\u5236 Nova-conductor \u6570\u636e\u5e93\u8ba4\u8bc1\u548c\u8bbf\u95ee\u63a7\u5236 \u6743\u9650 \u8981\u6c42\u7528\u6237\u5e10\u6237\u9700\u8981 SSL \u4f20\u8f93 \u914d\u7f6e\u793a\u4f8b #1\uff1a\uff08MySQL\uff09 \u914d\u7f6e\u793a\u4f8b #2\uff1a\uff08PostgreSQL\uff09 OpenStack \u670d\u52a1\u6570\u636e\u5e93\u914d\u7f6e MySQL :sql_connection \u7684\u5b57\u7b26\u4e32\u793a\u4f8b\uff1a \u4f7f\u7528 X.509 \u8bc1\u4e66\u8fdb\u884c\u8eab\u4efd\u9a8c\u8bc1 \u914d\u7f6e\u793a\u4f8b #1\uff1a\uff08MySQL\uff09 \u914d\u7f6e\u793a\u4f8b #2\uff1a\uff08PostgreSQL\uff09 OpenStack \u670d\u52a1\u6570\u636e\u5e93\u914d\u7f6e Nova-conductor \u6570\u636e\u5e93\u4f20\u8f93\u5b89\u5168\u6027 \u6570\u636e\u5e93\u670d\u52a1\u5668 IP \u5730\u5740\u7ed1\u5b9a \u9650\u5236 MySQL \u7684\u7ed1\u5b9a\u5730\u5740 \u9650\u5236 PostgreSQL \u7684\u76d1\u542c\u5730\u5740 \u6570\u636e\u5e93\u4f20\u8f93 MySQL SSL\u914d\u7f6e PostgreSQL SSL \u914d\u7f6e \u79df\u6237\u6570\u636e\u9690\u79c1 \u6570\u636e\u9690\u79c1\u95ee\u9898 \u6570\u636e\u9a7b\u7559 \u6570\u636e\u5904\u7f6e \u6570\u636e\u672a\u5b89\u5168\u5220\u9664 \u5b9e\u4f8b\u5185\u5b58\u6e05\u7406 Cinder \u5377\u6570\u636e \u955c\u50cf\u670d\u52a1\u5ef6\u65f6\u5220\u9664\u529f\u80fd \u8ba1\u7b97\u8f6f\u5220\u9664\u529f\u80fd \u8ba1\u7b97\u5b9e\u4f8b\u4e34\u65f6\u5b58\u50a8 \u88f8\u673a\u670d\u52a1\u5668\u6e05\u7406 \u6570\u636e\u52a0\u5bc6 \u5377\u52a0\u5bc6 \u4e34\u65f6\u78c1\u76d8\u52a0\u5bc6 \u5bf9\u8c61\u5b58\u50a8\u5bf9\u8c61 \u5757\u5b58\u50a8\u6027\u80fd\u548c\u540e\u7aef \u7f51\u7edc\u6570\u636e \u5bc6\u94a5\u7ba1\u7406 \u53c2\u8003\u4e66\u76ee\uff1a \u5b9e\u4f8b\u5b89\u5168\u7ba1\u7406 \u5b9e\u4f8b\u7684\u5b89\u5168\u670d\u52a1 \u5b9e\u4f8b\u7684\u71b5 \u5c06\u5b9e\u4f8b\u8c03\u5ea6\u5230\u8282\u70b9 \u53ef\u4fe1\u955c\u50cf \u955c\u50cf\u521b\u5efa\u8fc7\u7a0b \u6620\u50cf\u7b7e\u540d\u9a8c\u8bc1 \u5b9e\u4f8b\u8fc1\u79fb \u5b9e\u65f6\u8fc1\u79fb\u98ce\u9669 \u5b9e\u65f6\u8fc1\u79fb\u7f13\u89e3\u63aa\u65bd \u7981\u7528\u5b9e\u65f6\u8fc1\u79fb \u8fc1\u79fb\u7f51\u7edc \u52a0\u5bc6\u5b9e\u65f6\u8fc1\u79fb \u76d1\u63a7\u3001\u544a\u8b66\u548c\u62a5\u544a \u66f4\u65b0\u548c\u8865\u4e01 \u9632\u706b\u5899\u548c\u5176\u4ed6\u57fa\u4e8e\u4e3b\u673a\u7684\u5b89\u5168\u63a7\u5236 \u76d1\u89c6\u548c\u65e5\u5fd7\u8bb0\u5f55 \u53d6\u8bc1\u548c\u4e8b\u4ef6\u54cd\u5e94 
\u76d1\u63a7\u7528\u4f8b \u53c2\u8003\u4e66\u76ee \u5408\u89c4 \u5408\u89c4\u6027\u6982\u8ff0 \u5b89\u5168\u539f\u5219 \u5206\u5c42\u9632\u5fa1 \u5b89\u5168\u5931\u8d25 \u6700\u5c0f\u6743\u9650 \u5206\u9694 \u4fc3\u8fdb\u9690\u79c1 \u65e5\u5fd7\u8bb0\u5f55\u80fd\u529b \u5e38\u7528\u63a7\u5236\u6846\u67b6 \u5ba1\u8ba1\u53c2\u8003 \u4e86\u89e3\u5ba1\u6838\u6d41\u7a0b \u786e\u5b9a\u5ba1\u8ba1\u8303\u56f4 \u5ba1\u8ba1\u7684\u9636\u6bb5 \u5185\u90e8\u5ba1\u8ba1 \u51c6\u5907\u5916\u90e8\u5ba1\u8ba1 \u5916\u90e8\u5ba1\u8ba1 \u5408\u89c4\u6027\u7ef4\u62a4 \u5408\u89c4\u6d3b\u52a8 \u4fe1\u606f\u5b89\u5168\u7ba1\u7406\u7cfb\u7edf \uff08ISMS\uff09 \u98ce\u9669\u8bc4\u4f30 \u8bbf\u95ee\u548c\u65e5\u5fd7\u5ba1\u67e5 \u5907\u4efd\u548c\u707e\u96be\u6062\u590d \u5b89\u5168\u57f9\u8bad \u5b89\u5168\u5ba1\u67e5 \u6f0f\u6d1e\u7ba1\u7406 \u6570\u636e\u5206\u7c7b \u5f02\u5e38\u8fc7\u7a0b \u8ba4\u8bc1\u548c\u5408\u89c4\u58f0\u660e \u5546\u4e1a\u6807\u51c6 SOC 1 \uff08SSAE 16\uff09 / ISAE 3402 SOC 2 \u51fd\u6570 SOC 3 \u51fd\u6570 ISO 27001/2 \u8ba4\u8bc1 HIPAA / HITECH PCI-DSS \u653f\u5e9c\u6807\u51c6 FedRAMP ITAR FISMA \u9690\u79c1 \u5b89\u5168\u5ba1\u67e5 \u67b6\u6784\u9875\u9762\u6307\u5357 \u6807\u9898\u3001\u7248\u672c\u4fe1\u606f\u3001\u8054\u7cfb\u65b9\u5f0f \u9879\u76ee\u63cf\u8ff0\u548c\u76ee\u7684 \u4e3b\u8981\u7528\u6237\u548c\u7528\u4f8b \u5916\u90e8\u4f9d\u8d56\u548c\u76f8\u5173\u7684\u5b89\u5168\u5047\u8bbe \u7ec4\u4ef6 \u670d\u52a1\u67b6\u6784\u56fe \u6570\u636e\u8d44\u4ea7 \u6570\u636e\u8d44\u4ea7\u5f71\u54cd\u5206\u6790 \u63a5\u53e3 \u8d44\u6e90 \u5b89\u5168\u68c0\u67e5\u8868 \u9644\u5f55 \u793e\u533a\u652f\u6301 \u6587\u6863 OpenStack wiki Launchpad bug \u533a\u57df \u6587\u6863\u53cd\u9988 OpenStack IRC \u9891\u9053 OpenStack \u90ae\u4ef6\u5217\u8868 OpenStack \u53d1\u884c\u5305 \u8bcd\u6c47\u8868 0-9 A B C D E F G H I J K M N O P Q R S T U V W X Y Z","title":"OpenStack\u5b89\u5168\u6307\u5357"},{"location":"security/security-guide/#_1","text":"\u672c\u4e66\u63d0\u4f9b\u4e86\u6709\u5173\u4fdd\u62a4OpenStack\u4e91\u7684\u6700\u4f73\u5b9e\u8df5\u548c\u6982\u5ff5\u4fe1\u606f\u3002 \u672c\u6307\u5357\u6700\u540e\u4e00\u6b21\u66f4\u65b0\u662f\u5728Train\u53d1\u5e03\u671f\u95f4\uff0c\u8bb0\u5f55\u4e86OpenStack Train\u3001Stein\u548cRocky\u7248\u672c\u3002\u5b83\u53ef\u80fd\u4e0d\u9002\u7528\u4e8eEOL\u7248\u672c\uff08\u4f8b\u5982Newton\uff09\u3002\u6211\u4eec\u5efa\u8bae\u60a8\u5728\u8ba1\u5212\u4e3a\u60a8\u7684OpenStack\u4e91\u5b9e\u65bd\u5b89\u5168\u63aa\u65bd\u65f6\uff0c\u81ea\u884c\u9605\u8bfb\u672c\u6587\u3002\u672c\u6307\u5357\u4ec5\u4f9b\u53c2\u8003\u3002OpenStack\u5b89\u5168\u56e2\u961f\u57fa\u4e8eOpenStack\u793e\u533a\u7684\u81ea\u613f\u8d21\u732e\u3002\u60a8\u53ef\u4ee5\u5728OFTC IRC\u4e0a\u7684#OpenStack-Security\u9891\u9053\u4e2d\u76f4\u63a5\u8054\u7cfb\u5b89\u5168\u793e\u533a\uff0c\u6216\u8005\u901a\u8fc7\u5411OpenStack-Discussion\u90ae\u4ef6\u5217\u8868\u53d1\u9001\u4e3b\u9898\u6807\u9898\u4e2d\u5e26\u6709[Security]\u524d\u7f00\u7684\u90ae\u4ef6\u6765\u8054\u7cfb\u3002","title":"\u6458\u8981"},{"location":"security/security-guide/#_2","text":"\u7ea6\u5b9a \u901a\u77e5 \u547d\u4ee4\u63d0\u793a\u7b26 \u4ecb\u7ecd \u786e\u5b9a \u6211\u4eec\u4e3a\u4ec0\u4e48\u4ee5\u53ca\u5982\u4f55\u5199\u8fd9\u672c\u4e66 OpenStack\u7b80\u4ecb \u5b89\u5168\u8fb9\u754c\u548c\u5a01\u80c1 \u9009\u62e9\u652f\u6301\u8f6f\u4ef6 \u7cfb\u7edf\u6587\u6863 \u7cfb\u7edf\u6587\u6863\u8981\u6c42 \u7ba1\u7406 \u6301\u7eed\u7684\u7cfb\u7edf\u7ba1\u7406 \u5b8c\u6574\u6027\u751f\u547d\u5468\u671f \u7ba1\u7406\u754c\u9762 
\u5b89\u5168\u901a\u4fe1 TLS\u548cSSL\u7b80\u4ecb TLS\u4ee3\u7406\u548cHTTP\u670d\u52a1 \u5b89\u5168\u53c2\u8003\u67b6\u6784 \u7aef\u70b9 APL\u7aef\u70b9\u914d\u7f6e\u5efa\u8bae \u8eab\u4efd \u8ba4\u8bc1 \u8eab\u4efd\u9a8c\u8bc1\u65b9\u6cd5 \u6388\u6743 \u653f\u7b56 \u4ee4\u724c \u57df \u8054\u5408\u68af\u5f62\u5931\u771f \u6e05\u5355 \u4eea\u8868\u677f \u57df\u540d\u3001\u4eea\u8868\u677f\u5347\u7ea7\u548c\u57fa\u672cWeb\u670d\u52a1\u5668\u914d\u7f6e HTTPS\u3001HSTS\u3001XSS\u548cSSRF \u524d\u7aef\u7f13\u5b58\u548c\u4f1a\u8bdd\u540e\u7aef \u9759\u6001\u5a92\u4f53 \u5bc6\u7801 \u5bc6\u94a5 \u7f51\u7ad9\u6570\u636e \u8de8\u57df\u8d44\u6e90\u5171\u4eab \uff08CORS\uff09 \u8c03\u8bd5 \u68c0\u67e5\u8868 \u8ba1\u7b97 \u865a\u62df\u673a\u7ba1\u7406\u7a0b\u5e8f\u9009\u62e9 \u5f3a\u5316\u865a\u62df\u5316\u5c42 \u5f3a\u5316\u8ba1\u7b97\u90e8\u7f72 \u6f0f\u6d1e\u610f\u8bc6 \u5982\u4f55\u9009\u62e9\u865a\u62df\u63a7\u5236\u53f0 \u68c0\u67e5\u8868 \u5757\u5b58\u50a8 \u97f3\u91cf\u64e6\u9664 \u68c0\u67e5\u8868 \u56fe\u50cf\u5b58\u50a8 \u68c0\u67e5\u8868 \u5171\u4eab\u6587\u4ef6\u7cfb\u7edf \u4ecb\u7ecd \u7f51\u7edc\u548c\u5b89\u5168\u6a21\u578b \u5b89\u5168\u670d\u52a1 \u5171\u4eab\u8bbf\u95ee\u63a7\u5236 \u5171\u4eab\u7c7b\u578b\u8bbf\u95ee\u63a7\u5236 \u653f\u7b56 \u68c0\u67e5\u8868 \u8054\u7f51 \u7f51\u7edc\u67b6\u6784 \u7f51\u7edc\u670d\u52a1 \u7f51\u7edc\u670d\u52a1\u5b89\u5168\u6700\u4f73\u505a\u6cd5 \u4fdd\u62a4 OpenStack \u7f51\u7edc\u670d\u52a1 \u68c0\u67e5\u8868 \u5bf9\u8c61\u5b58\u50a8 \u7f51\u7edc\u5b89\u5168 \u4e00\u822c\u4e8b\u52a1\u5b89\u5168 \u4fdd\u62a4\u5b58\u50a8\u670d\u52a1 \u4fdd\u62a4\u4ee3\u7406\u670d\u52a1 \u5bf9\u8c61\u5b58\u50a8\u8eab\u4efd\u9a8c\u8bc1 \u5176\u4ed6\u503c\u5f97\u6ce8\u610f\u7684\u9879\u76ee \u673a\u5bc6\u7ba1\u7406 \u73b0\u6709\u6280\u672f\u6458\u8981 \u76f8\u5173 Openstack \u9879\u76ee \u4f7f\u7528\u6848\u4f8b \u5bc6\u94a5\u7ba1\u7406\u670d\u52a1 \u5bc6\u94a5\u7ba1\u7406\u63a5\u53e3 \u5e38\u89c1\u95ee\u9898\u89e3\u7b54 \u68c0\u67e5\u8868 \u6d88\u606f\u961f\u5217 \u90ae\u4ef6\u5b89\u5168 \u6570\u636e\u5904\u7406 \u6570\u636e\u5904\u7406\u7b80\u4ecb \u90e8\u7f72 \u914d\u7f6e\u548c\u5f3a\u5316 \u6570\u636e\u5e93 \u6570\u636e\u5e93\u540e\u7aef\u6ce8\u610f\u4e8b\u9879 \u6570\u636e\u5e93\u8bbf\u95ee\u63a7\u5236 \u6570\u636e\u5e93\u4f20\u8f93\u5b89\u5168\u6027 \u79df\u6237\u6570\u636e\u9690\u79c1 \u6570\u636e\u9690\u79c1\u95ee\u9898 \u6570\u636e\u52a0\u5bc6 \u5bc6\u94a5\u7ba1\u7406 \u5b9e\u4f8b\u5b89\u5168\u7ba1\u7406 \u5b9e\u4f8b\u7684\u5b89\u5168\u670d\u52a1 \u76d1\u89c6\u548c\u65e5\u5fd7\u8bb0\u5f55 \u53d6\u8bc1\u548c\u4e8b\u4ef6\u54cd\u5e94 \u5408\u89c4 \u5408\u89c4\u6027\u6982\u8ff0 \u4e86\u89e3\u5ba1\u6838\u6d41\u7a0b \u5408\u89c4\u6d3b\u52a8 \u8ba4\u8bc1\u548c\u5408\u89c4\u58f0\u660e \u9690\u79c1 \u5b89\u5168\u5ba1\u67e5 \u4f53\u7cfb\u7ed3\u6784\u9875\u9762\u6307\u5357 \u5b89\u5168\u68c0\u67e5\u8868 \u9644\u5f55 \u793e\u533a\u652f\u6301 \u8bcd\u6c47\u8868","title":"\u5185\u5bb9"},{"location":"security/security-guide/#_3","text":"OpenStack \u6587\u6863\u4f7f\u7528\u4e86\u51e0\u79cd\u6392\u7248\u7ea6\u5b9a\u3002","title":"\u7ea6\u5b9a"},{"location":"security/security-guide/#_4","text":"\u6ce8\u610f \u5e26\u6709\u9644\u52a0\u4fe1\u606f\u7684\u6ce8\u91ca\uff0c\u7528\u4e8e\u89e3\u91ca\u6587\u672c\u7684\u67d0\u4e00\u90e8\u5206\u3002 \u91cd\u8981 \u5728\u7ee7\u7eed\u4e4b\u524d\uff0c\u60a8\u5fc5\u987b\u6ce8\u610f\u8fd9\u4e00\u70b9\u3002 \u63d0\u793a \u4e00\u4e2a\u989d\u5916\u4f46\u6709\u7528\u7684\u5b9e\u7528\u5efa\u8bae\u3002 \u8b66\u793a 
\u9632\u6b62\u7528\u6237\u72af\u9519\u8bef\u7684\u6709\u7528\u4fe1\u606f\u3002 \u8b66\u544a \u6709\u5173\u6570\u636e\u4e22\u5931\u98ce\u9669\u6216\u5b89\u5168\u95ee\u9898\u7684\u5173\u952e\u4fe1\u606f\u3002","title":"\u6ce8\u610f\u4e8b\u9879"},{"location":"security/security-guide/#_5","text":"$ command \u4efb\u4f55\u7528\u6237\uff08\u5305\u62ecroot\u7528\u6237\uff09\u90fd\u53ef\u4ee5\u8fd0\u884c\u4ee5$\u63d0\u793a\u7b26\u4e3a\u524d\u7f00\u7684\u547d\u4ee4\u3002 # command root\u7528\u6237\u5fc5\u987b\u8fd0\u884c\u524d\u7f00\u4e3a#\u63d0\u793a\u7b26\u7684\u547d\u4ee4\u3002\u60a8\u8fd8\u53ef\u4ee5\u5728\u8fd9\u4e9b\u547d\u4ee4\u524d\u9762\u52a0\u4e0asudo\u547d\u4ee4\uff08\u5982\u679c\u53ef\u7528\uff09\uff0c\u4ee5\u8fd0\u884c\u8fd9\u4e9b\u547d\u4ee4\u3002","title":"\u547d\u4ee4\u63d0\u793a\u7b26"},{"location":"security/security-guide/#_6","text":"\u300aOpenStack \u5b89\u5168\u6307\u5357\u300b\u662f\u8bb8\u591a\u4eba\u7ecf\u8fc7\u4e94\u5929\u534f\u4f5c\u7684\u6210\u679c\u3002\u672c\u6587\u6863\u65e8\u5728\u63d0\u4f9b\u90e8\u7f72\u5b89\u5168 OpenStack \u4e91\u7684\u6700\u4f73\u5b9e\u8df5\u6307\u5357\u3002\u5b83\u65e8\u5728\u53cd\u6620OpenStack\u793e\u533a\u7684\u5f53\u524d\u5b89\u5168\u72b6\u6001\uff0c\u5e76\u4e3a\u7531\u4e8e\u590d\u6742\u6027\u6216\u5176\u4ed6\u7279\u5b9a\u4e8e\u73af\u5883\u7684\u7ec6\u8282\u800c\u65e0\u6cd5\u5217\u51fa\u7279\u5b9a\u5b89\u5168\u63a7\u5236\u63aa\u65bd\u7684\u51b3\u7b56\u63d0\u4f9b\u6846\u67b6\u3002 \u81f4\u8c22 \u6211\u4eec\u4e3a\u4ec0\u4e48\u4ee5\u53ca\u5982\u4f55\u5199\u8fd9\u672c\u4e66 \u76ee\u6807 \u5982\u4f55 OpenStack \u7b80\u4ecb \u4e91\u7c7b\u578b OpenStack \u670d\u52a1\u6982\u8ff0 \u5b89\u5168\u8fb9\u754c\u548c\u5a01\u80c1 \u5b89\u5168\u57df \u6865\u63a5\u5b89\u5168\u57df \u5a01\u80c1\u5206\u7c7b\u3001\u53c2\u4e0e\u8005\u548c\u653b\u51fb\u5a92\u4ecb \u9009\u62e9\u652f\u6301\u8f6f\u4ef6 \u56e2\u961f\u4e13\u957f \u4ea7\u54c1\u6216\u9879\u76ee\u6210\u719f\u5ea6 \u901a\u7528\u6807\u51c6 \u786c\u4ef6\u95ee\u9898","title":"\u4ecb\u7ecd"},{"location":"security/security-guide/#_7","text":"OpenStack \u5b89\u5168\u7ec4\u8981\u611f\u8c22\u4ee5\u4e0b\u7ec4\u7ec7\u7684\u8d21\u732e\uff0c\u4ed6\u4eec\u4e3a\u672c\u4e66\u7684\u51fa\u7248\u505a\u51fa\u4e86\u8d21\u732e\u3002\u8fd9\u4e9b\u7ec4\u7ec7\u662f\uff1a","title":"\u81f4\u8c22"},{"location":"security/security-guide/#_8","text":"\u968f\u7740 OpenStack \u7684\u666e\u53ca\u548c\u4ea7\u54c1\u6210\u719f\uff0c\u5b89\u5168\u6027\u5df2\u6210\u4e3a\u91cd\u4e2d\u4e4b\u91cd\u3002OpenStack \u5b89\u5168\u7ec4\u5df2\u7ecf\u8ba4\u8bc6\u5230\u9700\u8981\u4e00\u4e2a\u5168\u9762\u800c\u6743\u5a01\u7684\u5b89\u5168\u6307\u5357\u3002\u300aOpenStack \u5b89\u5168\u6307\u5357\u300b\u65e8\u5728\u6982\u8ff0\u63d0\u9ad8 OpenStack \u90e8\u7f72\u5b89\u5168\u6027\u7684\u5b89\u5168\u6700\u4f73\u5b9e\u8df5\u3001\u6307\u5357\u548c\u5efa\u8bae\u3002\u4f5c\u8005\u5e26\u6765\u4e86\u4ed6\u4eec\u5728\u5404\u79cd\u73af\u5883\u4e2d\u90e8\u7f72\u548c\u4fdd\u62a4 OpenStack \u7684\u4e13\u4e1a\u77e5\u8bc6\u3002 \u672c\u6307\u5357\u662f\u5bf9\u300aOpenStack \u64cd\u4f5c\u6307\u5357\u300b\u7684\u8865\u5145\uff0c\u53ef\u7528\u4e8e\u5f3a\u5316\u73b0\u6709\u7684 OpenStack \u90e8\u7f72\u6216\u8bc4\u4f30 OpenStack \u4e91\u63d0\u4f9b\u5546\u7684\u5b89\u5168\u63a7\u5236\u3002","title":"\u6211\u4eec\u4e3a\u4ec0\u4e48\u4ee5\u53ca\u5982\u4f55\u5199\u8fd9\u672c\u4e66"},{"location":"security/security-guide/#_9","text":"\u8bc6\u522b OpenStack \u4e2d\u7684\u5b89\u5168\u57df \u63d0\u4f9b\u4fdd\u62a4 OpenStack \u90e8\u7f72\u7684\u6307\u5bfc \u5f3a\u8c03\u5f53\u4eca OpenStack 
\u4e2d\u7684\u5b89\u5168\u95ee\u9898\u548c\u6f5c\u5728\u7684\u7f13\u89e3\u63aa\u65bd \u8ba8\u8bba\u5373\u5c06\u63a8\u51fa\u7684\u5b89\u5168\u529f\u80fd \u4e3a\u77e5\u8bc6\u83b7\u53d6\u548c\u4f20\u64ad\u63d0\u4f9b\u793e\u533a\u9a71\u52a8\u7684\u8bbe\u65bd","title":"\u76ee\u6807"},{"location":"security/security-guide/#_10","text":"\u4e0e\u300aOpenStack \u64cd\u4f5c\u6307\u5357\u300b\u4e00\u6837\uff0c\u6211\u4eec\u9075\u5faa\u4e86\u672c\u4e66\u7684\u51b2\u523a\u65b9\u6cd5\u3002\u4e66\u7c4d\u51b2\u523a\u8fc7\u7a0b\u5141\u8bb8\u5feb\u901f\u5f00\u53d1\u548c\u5236\u4f5c\u5927\u91cf\u4e66\u9762\u4f5c\u54c1\u3002OpenStack \u5b89\u5168\u7ec4\u7684\u534f\u8c03\u5458\u91cd\u65b0\u9080\u8bf7\u4e86 Adam Hyde \u4f5c\u4e3a\u534f\u8c03\u4eba\u3002\u8be5\u9879\u76ee\u5728\u4fc4\u52d2\u5188\u5dde\u6ce2\u7279\u5170\u5e02\u7684OpenStack\u5cf0\u4f1a\u4e0a\u6b63\u5f0f\u5ba3\u5e03\u3002 \u7531\u4e8e\u8be5\u5c0f\u7ec4\u7684\u4e00\u4e9b\u5173\u952e\u6210\u5458\u79bb\u5f97\u5f88\u8fd1\uff0c\u8be5\u56e2\u961f\u805a\u96c6\u5728\u9a6c\u91cc\u5170\u5dde\u5b89\u7eb3\u6ce2\u5229\u65af\u3002\u8fd9\u662f\u516c\u5171\u90e8\u95e8\u60c5\u62a5\u754c\u6210\u5458\u3001\u7845\u8c37\u521d\u521b\u516c\u53f8\u548c\u4e00\u4e9b\u5927\u578b\u77e5\u540d\u79d1\u6280\u516c\u53f8\u4e4b\u95f4\u7684\u975e\u51e1\u5408\u4f5c\u3002\u8be5\u4e66\u7684\u51b2\u523a\u57282013\u5e746\u6708\u7684\u6700\u540e\u4e00\u5468\u8fdb\u884c\uff0c\u7b2c\u4e00\u7248\u5728\u4e94\u5929\u5185\u5b8c\u6210\u3002 \u8be5\u56e2\u961f\u5305\u62ec\uff1a Bryan D. Payne\uff0c\u661f\u4e91 Bryan D. Payne \u535a\u58eb\u662f Nebula \u7684\u5b89\u5168\u7814\u7a76\u603b\u76d1\uff0c\u4e5f\u662f OpenStack \u5b89\u5168\u7ec4\u7ec7 \uff08OSSG\uff09 \u7684\u8054\u5408\u521b\u59cb\u4eba\u3002\u5728\u52a0\u5165 Nebula \u4e4b\u524d\uff0c\u4ed6\u66fe\u5728\u6851\u8fea\u4e9a\u56fd\u5bb6\u5b9e\u9a8c\u5ba4\u3001\u56fd\u5bb6\u5b89\u5168\u5c40\u3001BAE Systems \u548c IBM \u7814\u7a76\u9662\u5de5\u4f5c\u3002\u4ed6\u6bd5\u4e1a\u4e8e\u4f50\u6cbb\u4e9a\u7406\u5de5\u5b66\u9662\u8ba1\u7b97\u673a\u5b66\u9662\uff0c\u83b7\u5f97\u8ba1\u7b97\u673a\u79d1\u5b66\u535a\u58eb\u5b66\u4f4d\uff0c\u4e13\u653b\u7cfb\u7edf\u5b89\u5168\u3002Bryan \u662f\u300aOpenStack \u5b89\u5168\u6307\u5357\u300b\u7684\u7f16\u8f91\u548c\u8d1f\u8d23\u4eba\uff0c\u8d1f\u8d23\u8be5\u6307\u5357\u5728\u7f16\u5199\u540e\u7684\u4e24\u5e74\u4e2d\u6301\u7eed\u589e\u957f\u3002 Robert Clark\uff0c\u60e0\u666e Robert Clark \u662f\u60e0\u666e\u4e91\u670d\u52a1\u7684\u9996\u5e2d\u5b89\u5168\u67b6\u6784\u5e08\uff0c\u4e5f\u662f OpenStack \u5b89\u5168\u7ec4\u7ec7 \uff08OSSG\uff09 \u7684\u8054\u5408\u521b\u59cb\u4eba\u3002\u5728\u88ab\u60e0\u666e\u62db\u52df\u4e4b\u524d\uff0c\u4ed6\u66fe\u5728\u82f1\u56fd\u60c5\u62a5\u754c\u5de5\u4f5c\u3002Robert \u5728\u5a01\u80c1\u5efa\u6a21\u3001\u5b89\u5168\u67b6\u6784\u548c\u865a\u62df\u5316\u6280\u672f\u65b9\u9762\u62e5\u6709\u6df1\u539a\u7684\u80cc\u666f\u3002Robert \u62e5\u6709\u5a01\u5c14\u58eb\u5927\u5b66\u7684\u8f6f\u4ef6\u5de5\u7a0b\u7855\u58eb\u5b66\u4f4d\u3002 Keith Basil \uff0c\u7ea2\u5e3d Keith Basil \u662f\u7ea2\u5e3d OpenStack \u7684\u9996\u5e2d\u4ea7\u54c1\u7ecf\u7406\uff0c\u4e13\u6ce8\u4e8e\u7ea2\u5e3d\u7684 OpenStack \u4ea7\u54c1\u7ba1\u7406\u3001\u5f00\u53d1\u548c\u6218\u7565\u3002\u5728\u7f8e\u56fd\u516c\u5171\u90e8\u95e8\uff0cBasil \u5e26\u6765\u4e86\u4e3a\u8054\u90a6\u6c11\u7528\u673a\u6784\u548c\u627f\u5305\u5546\u8bbe\u8ba1\u6388\u6743\u3001\u5b89\u5168\u3001\u9ad8\u6027\u80fd\u4e91\u67b6\u6784\u7684\u7ecf\u9a8c\u3002 Cody Bunch\uff0c\u62c9\u514b\u7a7a\u95f4 Cody Bunch \u662f Rackspace 
\u7684\u79c1\u6709\u4e91\u67b6\u6784\u5e08\u3002Cody \u4e0e\u4eba\u5408\u8457\u4e86\u300aThe OpenStack Cookbook\u300b\u7684\u66f4\u65b0\u4ee5\u53ca\u6709\u5173 VMware \u81ea\u52a8\u5316\u7684\u4e66\u7c4d\u3002 Malini Bhandaru\uff0c\u82f1\u7279\u5c14 Malini Bhandaru \u662f\u82f1\u7279\u5c14\u7684\u4e00\u540d\u5b89\u5168\u67b6\u6784\u5e08\u3002\u5979\u62e5\u6709\u591a\u5143\u5316\u7684\u80cc\u666f\uff0c\u66fe\u5728\u82f1\u7279\u5c14\u4ece\u4e8b\u5e73\u53f0\u529f\u80fd\u548c\u6027\u80fd\u65b9\u9762\u7684\u5de5\u4f5c\uff0c\u5728 Nuance \u4ece\u4e8b\u8bed\u97f3\u4ea7\u54c1\u65b9\u9762\u7684\u5de5\u4f5c\uff0c\u5728 ComBrio \u4ece\u4e8b\u8fdc\u7a0b\u76d1\u63a7\u548c\u7ba1\u7406\u5de5\u4f5c\uff0c\u5728 Verizon \u4ece\u4e8b\u7f51\u7edc\u5546\u52a1\u5de5\u4f5c\u3002\u5979\u62e5\u6709\u9a6c\u8428\u8bf8\u585e\u5927\u5b66\u963f\u9ed8\u65af\u7279\u5206\u6821\u7684\u4eba\u5de5\u667a\u80fd\u535a\u58eb\u5b66\u4f4d\u3002 Gregg Tally\uff0c\u7ea6\u7ff0\u970d\u666e\u91d1\u65af\u5927\u5b66\u5e94\u7528\u7269\u7406\u5b9e\u9a8c\u5ba4 Gregg Tally \u662f JHU/APL \u7f51\u7edc\u7cfb\u7edf\u90e8\u95e8\u975e\u5bf9\u79f0\u8fd0\u8425\u90e8\u7684\u603b\u5de5\u7a0b\u5e08\u3002\u4ed6\u4e3b\u8981\u4ece\u4e8b\u7cfb\u7edf\u5b89\u5168\u5de5\u7a0b\u65b9\u9762\u7684\u5de5\u4f5c\u3002\u6b64\u524d\uff0c\u4ed6\u66fe\u5728\u65af\u5df4\u8fbe\u3001\u8fc8\u514b\u83f2\u548c\u53ef\u4fe1\u4fe1\u606f\u7cfb\u7edf\u516c\u53f8\u5de5\u4f5c\uff0c\u53c2\u4e0e\u7f51\u7edc\u5b89\u5168\u7814\u7a76\u9879\u76ee\u3002 Eric Lopez, \u5a01\u777f Eric Lopez \u662f VMware \u7f51\u7edc\u548c\u5b89\u5168\u4e1a\u52a1\u90e8\u95e8\u7684\u9ad8\u7ea7\u89e3\u51b3\u65b9\u6848\u67b6\u6784\u5e08\uff0c\u4ed6\u5e2e\u52a9\u5ba2\u6237\u5b9e\u65bd OpenStack \u548c VMware NSX\uff08\u4ee5\u524d\u79f0\u4e3a Nicira \u7684\u7f51\u7edc\u865a\u62df\u5316\u5e73\u53f0\uff09\u3002\u5728\u52a0\u5165 VMware\uff08\u901a\u8fc7\u516c\u53f8\u6536\u8d2d Nicira\uff09\u4e4b\u524d\uff0c\u4ed6\u66fe\u5728 Q1 Labs\u3001Symantec\u3001Vontu \u548c Brightmail \u5de5\u4f5c\u3002\u4ed6\u62e5\u6709\u52a0\u5dde\u5927\u5b66\u4f2f\u514b\u5229\u5206\u6821\u7684\u7535\u6c14\u5de5\u7a0b/\u8ba1\u7b97\u673a\u79d1\u5b66\u548c\u6838\u5de5\u7a0b\u5b66\u58eb\u5b66\u4f4d\u548c\u65e7\u91d1\u5c71\u5927\u5b66\u7684\u5de5\u5546\u7ba1\u7406\u7855\u58eb\u5b66\u4f4d\u3002 Shawn Wells\uff0c\u7ea2\u5e3d Shawn Wells \u662f\u7ea2\u5e3d\u521b\u65b0\u9879\u76ee\u603b\u76d1\uff0c\u4e13\u6ce8\u4e8e\u6539\u8fdb\u7f8e\u56fd\u653f\u5e9c\u5185\u90e8\u91c7\u7528\u3001\u4fc3\u8fdb\u548c\u7ba1\u7406\u5f00\u6e90\u6280\u672f\u7684\u6d41\u7a0b\u3002\u6b64\u5916\uff0cShawn \u8fd8\u662f SCAP \u5b89\u5168\u6307\u5357\u9879\u76ee\u7684\u4e0a\u6e38\u7ef4\u62a4\u8005\uff0c\u8be5\u9879\u76ee\u4e0e\u7f8e\u56fd\u519b\u65b9\u3001NSA \u548c DISA \u4e00\u8d77\u5236\u5b9a\u865a\u62df\u5316\u548c\u64cd\u4f5c\u7cfb\u7edf\u5f3a\u5316\u7b56\u7565\u3002Shawn\u66fe\u662fNSA\u7684\u5e73\u6c11\uff0c\u5229\u7528\u5927\u578b\u5206\u5e03\u5f0f\u8ba1\u7b97\u57fa\u7840\u8bbe\u65bd\u5f00\u53d1\u4e86SIGINT\u6536\u96c6\u7cfb\u7edf\u3002 Ben de Bont\uff0c\u60e0\u666e Ben de Bont \u662f\u60e0\u666e\u4e91\u670d\u52a1\u7684\u9996\u5e2d\u6218\u7565\u5b98\u3002\u5728\u62c5\u4efb\u73b0\u804c\u4e4b\u524d\uff0cBen \u9886\u5bfc MySpace \u7684\u4fe1\u606f\u5b89\u5168\u5c0f\u7ec4\u548c MSN Security \u7684\u4e8b\u4ef6\u54cd\u5e94\u56e2\u961f\u3002Ben \u62e5\u6709\u6606\u58eb\u5170\u79d1\u6280\u5927\u5b66\u7684\u8ba1\u7b97\u673a\u79d1\u5b66\u7855\u58eb\u5b66\u4f4d\u3002 Nathanael Burton\uff0c\u56fd\u5bb6\u5b89\u5168\u5c40 \u7eb3\u5854\u5185\u5c14\u00b7\u4f2f\u987f\uff08Nathanael 
…Burton is a computer scientist at the National Security Agency. He has worked at the Agency for more than ten years on distributed systems, large-scale hosting, open source initiatives, operating systems, security, storage, and virtualization technology. He holds a Bachelor of Science in computer science from Virginia Tech.

Vibha Fauver. Vibha Fauver, GWEB, CISSP, PMP, has over fifteen years of experience in information technology. Her areas of specialization include software engineering, project management, and information security. She holds a Bachelor of Science in computer and information science and a Master of Science in engineering management, with a specialization and certificate in systems engineering.

Eric Windisch, Cloudscaling. Eric Windisch is a principal engineer at Cloudscaling, where he has been contributing to OpenStack for more than two years. Eric has spent over ten years in the trenches of hostile environments in the web hosting industry, building tenant isolation and infrastructure security. He has been building cloud computing infrastructure and automation since 2007.

Andrew Hay, CloudPassage. Andrew Hay is the Director of Applied Security Research at CloudPassage, Inc., where he leads the security research efforts for the company and its server security products purpose-built for dynamic public, private, and hybrid cloud hosting environments.

Adam Hyde facilitated this Book Sprint. He also founded the Book Sprint methodology and is the most experienced Book Sprint facilitator. Adam founded FLOSS Manuals, a community of some 3,000 contributors developing free manuals about free software. He is also the founder and project manager of Booktype, an open source project for writing, editing, and publishing books online and in print.

During the sprint we also received help from Anne Gentle, Warren Wang, Paul McMillan, Brian Schott, and Lorin Hochstein.

### How this book was written

This book was produced in a five-day Book Sprint. A Book Sprint is a highly collaborative, facilitated process that brings a small group together to produce a book in three to five days. It is a strongly facilitated process built on a methodology created and developed by Adam Hyde. For more information, see the Book Sprint page at BookSprints. The initial work on this book was done in an overly air-conditioned room that served as the group office for the entirety of the documentation sprint.

### How to contribute to this book

To learn more about how to contribute to OpenStack documentation, see the OpenStack Documentation Contributor Guide.

### Introduction to OpenStack

This guide provides security insight into OpenStack deployments. The intended audience is cloud architects, deployers, and administrators. In addition, cloud users will find the guide both educational and helpful in provider selection, while auditors will find it useful as a reference document to support their compliance certification efforts. The guide is also recommended to anyone interested in cloud security.

Each OpenStack deployment embraces a wide variety of technologies: Linux distributions, database systems, message queues, the OpenStack components themselves, access control policies, logging services, security monitoring tools, and much more. It should come as no surprise that the security issues involved are equally diverse, and an in-depth analysis of all of them would require several guides. We strive to strike a balance, providing enough background to understand OpenStack security issues and how to address them, and providing external references for further information. The guide can be read from start to finish or used as a reference.

We briefly introduce the kinds of clouds (private, public, and hybrid) before presenting an overview of the OpenStack components and their related security concerns in the remainder of this chapter.

Throughout this book, we refer to several types of OpenStack cloud users: administrator, operator, and user. We use these terms to identify the level of security access each role has, even though, in reality, the different roles are often held by the same individual.

### Cloud types

OpenStack is a key enabler in the adoption of cloud technology and has several common deployment use cases, commonly known as the public, private, and hybrid models. The following sections use the National Institute of Standards and Technology (NIST) definition of cloud to introduce these different types of cloud as they apply to OpenStack.
#### Public cloud

According to NIST, a public cloud is one in which the infrastructure is open to the general public for consumption. OpenStack public clouds are typically run by a service provider and can be consumed by individuals, corporations, or any paying customer. In addition to a variety of instance types, a public cloud provider may expose a full set of features such as software-defined networking or block storage.

By their nature, public clouds are exposed to a higher degree of risk. As a consumer of a public cloud, you should validate that your selected provider has the necessary certifications, attestations, and other regulatory considerations. As a public cloud provider, depending on your target customers, you may be subject to one or more regulations. Additionally, even if not required to meet regulatory requirements, a provider should ensure tenant isolation as well as protect the management infrastructure from external attacks.

#### Private cloud

At the opposite end of the spectrum is the private cloud. As NIST defines it, a private cloud is provisioned for exclusive use by a single organization comprising multiple consumers, such as business units. The cloud may be owned, managed, and operated by the organization, a third party, or some combination of them, and it may exist on or off premises. Private cloud use cases are diverse and, as such, their individual security concerns vary.

#### Community cloud

NIST defines a community cloud as one whose infrastructure is provisioned for the exclusive use of a specific community of consumers from organizations that have shared concerns (for example, mission, security requirements, policy, or compliance considerations). The cloud might be owned, managed, and operated by one or more of the organizations in the community, a third party, or some combination of them, and it may exist on or off premises.

#### Hybrid cloud

NIST defines a hybrid cloud as a composition of two or more distinct cloud infrastructures, such as private, community, or public, that remain unique entities but are bound together by standardized or proprietary technology that enables data and application portability, such as cloud bursting for load balancing between clouds. For example, an online retailer might present its advertising and catalogue on a public cloud that allows for elastic provisioning, enabling it to handle seasonal loads in a flexible, cost-effective fashion. Once a customer begins to process an order, they are transferred to a more secure private cloud that is PCI compliant.

In this document, we treat community and hybrid clouds similarly, dealing explicitly only with the extremes of public and private clouds from a security perspective. Your security measures depend on where your deployment falls on the private-public continuum.

### OpenStack service overview

OpenStack embraces a modular architecture to provide a set of core services, with scalability and elasticity as core design tenets. This chapter briefly reviews the OpenStack components, their use cases, and their security considerations.

#### Compute

The OpenStack Compute service (nova) provides services to support the management of virtual machine instances at scale: instances that host multi-tiered applications, development or test environments, "big data" crunching Hadoop clusters, or high-performance computing. The Compute service facilitates this management through an abstraction layer that interfaces with supported hypervisors (we address this in more detail later on). Later in the guide, we focus on the virtualization stack as it relates to hypervisors. For information about the current state of feature support, see the OpenStack Hypervisor Support Matrix.

The security of Compute is critical for an OpenStack deployment. Hardening techniques should include support for strong instance isolation, secure communication between Compute sub-components, and resilience of public-facing API endpoints.
#### Object Storage

The OpenStack Object Storage service (swift) provides support for storing and retrieving arbitrary data in the cloud. The Object Storage service provides both a native API and an Amazon Web Services S3-compatible API. The service provides a high degree of resiliency through data replication and can handle petabytes of data.

It is important to understand that object storage differs from traditional file system storage. Object storage is best used for static data such as media files (MP3s, images, or videos), virtual machine images, and backup files.

Object security should focus on access control and encryption of data in transit and at rest. Other concerns might include system abuse, illegal or malicious content storage, and cross-authentication attack vectors.

#### Block Storage

The OpenStack Block Storage service (cinder) provides persistent block storage for compute instances. The Block Storage service is responsible for managing the life cycle of block devices, from the creation and attachment of volumes to instances, to their release.

Security considerations for block storage are similar to those for object storage.

#### Shared File Systems

The Shared File Systems service (manila) provides a set of services for managing shared file systems in a multi-tenant cloud environment, similar to how OpenStack provides block-based storage management through the Block Storage service project. With the Shared File Systems service, you can create a remote file system, mount the file system on your instances, and then read and write data from your instances to and from your file system.

#### Networking

The OpenStack Networking service (neutron, previously called quantum) provides various networking services to cloud users (tenants), such as IP address management, DNS, DHCP, load balancing, and security groups (network access rules, like firewall policies). This service provides a framework for software-defined networking (SDN) that allows pluggable integration with various networking solutions.

OpenStack Networking allows cloud tenants to manage their guest network configurations. Security concerns with the networking service include network traffic isolation, availability, integrity, and confidentiality.

#### Dashboard

The OpenStack Dashboard (horizon) provides a web-based interface for both cloud administrators and cloud tenants. Using this interface, administrators and tenants can provision, manage, and monitor cloud resources. The dashboard is commonly deployed in a public-facing manner, with all the usual security concerns of public web portals.

#### Identity service

The OpenStack Identity service (keystone) is a shared service that provides authentication and authorization services throughout the entire cloud infrastructure. The Identity service has pluggable support for multiple forms of authentication.

Security concerns with the Identity service include trust in authentication, the management of authorization tokens, and secure communication.

#### Image service

The OpenStack Image service (glance) provides disk image management services, including image discovery, registration, and delivery to the Compute service as needed.

Trusted processes for managing the life cycle of disk images are required, as are all the previously mentioned issues with respect to data security.

#### Data processing service

The Data Processing service (sahara) provides a platform for the provisioning, management, and usage of clusters running popular processing frameworks.

Security considerations for data processing should focus on data privacy and secure communication with provisioned clusters.

#### Other supporting technology

Messaging is used for internal communication between several OpenStack services. By default, OpenStack uses message queues based on the AMQP standard. Like most OpenStack services, AMQP supports pluggable components; today the implementation back end could be RabbitMQ, Qpid, or ZeroMQ. Because most management commands flow through the message queuing system, message-queue security is a primary security concern for any OpenStack deployment and is discussed in detail later in this guide.

Several of the components use databases, though this is not explicitly called out. Securing database access is yet another security concern and is discussed in more detail later in this guide.

### Security boundaries and threats

A cloud can be abstracted as a collection of logical components grouped by their function, users, and shared security concerns, which we call security domains. Threat actors and vectors are classified based on their motivation and access to resources. Our goal is to give you a sense of the security concerns for each domain, according to your risk and vulnerability protection objectives.
#### Security domains

A security domain comprises users, applications, servers, or networks that share common trust requirements and expectations within a system. Typically they share the same authentication and authorization (AuthN/Z) requirements and users.

Although you may wish to break these domains down further (we discuss later where this may be appropriate), we generally refer to four distinct security domains that form the bare minimum needed to deploy any OpenStack cloud securely:

- Public
- Guest
- Management
- Data

We selected these security domains because they can be mapped independently or combined to represent the majority of the possible areas of trust within a given OpenStack deployment. For example, some deployment topologies combine both the guest and data domains onto one physical network, whereas other topologies separate these networks. In each case, the cloud operator should be aware of the appropriate security concerns. Security domains should be mapped against your specific OpenStack deployment topology. The domains and their trust requirements depend on whether the cloud instance is public, private, or hybrid.

##### Public

The public security domain is an entirely untrusted area of the cloud infrastructure. It can refer to the internet as a whole or simply to networks over which you have no authority. Any data with confidentiality or integrity requirements that transits this domain should be protected using compensating controls.

This domain should always be considered untrusted.

##### Guest

Typically used for compute instance-to-instance traffic, the guest security domain handles compute data generated by instances on the cloud, but not services that support the operation of the cloud, such as API calls.

Public and private cloud providers that do not have stringent controls on instance use, or that allow unrestricted internet access to virtual machines, should consider this domain untrusted. Private cloud providers may want to consider this network as internal, and therefore trusted, only if appropriate controls are implemented to assert that the instances and all associated tenants can be trusted.

##### Management

The management security domain is where services interact. Sometimes referred to as the "control plane", the networks in this domain transport confidential data such as configuration parameters, user names, and passwords. Command and control traffic typically resides in this domain, which necessitates strong integrity requirements. Access to this domain should be highly restricted and monitored. At the same time, this domain should still employ all of the security best practices described in this guide.

In most deployments this domain is considered trusted. However, when considering an OpenStack deployment, there are many systems that bridge this domain with others, potentially reducing the level of trust you can place on this domain. See Bridging security domains for more information.

##### Data

The data security domain is concerned primarily with information pertaining to the storage services within OpenStack. Much of the data that crosses this network has high integrity and confidentiality requirements and, in some cases depending on the type of deployment, strong availability requirements as well.

The trust level of this network is heavily dependent on deployment decisions, so we do not assign it a default level of trust.

#### Bridging security domains

A bridge is a component that exists in more than one security domain. Any component that bridges security domains with different trust levels or authentication requirements must be carefully configured. These bridges are often the weak points in a network architecture. A bridge should always be configured to meet the security requirements of the highest trust level of any of the domains it is bridging. In many cases the security controls for bridges should be a primary concern, due to the likelihood of attack.

The diagram above shows a compute node bridging the data and management domains; the compute node should therefore be configured to meet the security requirements of the management domain. Similarly, the API endpoint in this diagram bridges the untrusted public domain and the management domain, and should be configured to protect against attacks from the public domain propagating through to the management domain.

In some cases deployers may want to consider hardening a bridge to a higher standard than any of the domains in which it resides. Given the API endpoint example above, an adversary could target the API endpoint from the public domain, leveraging it in the hope of compromising or gaining access to the management domain.

The design of OpenStack makes the separation of security domains difficult. Because core services usually bridge at least two domains, special consideration must be given when applying security controls to them.

#### Threat classification, actors, and attack vectors

Most types of cloud deployment, public or private, are exposed to some form of attack. In this chapter we categorize attackers and summarize the potential types of attack in each security domain.

##### Threat actors

A threat actor is an abstract way to refer to a class of adversary that you may attempt to defend against. The more capable the actor, the more expensive the security controls required for successful attack mitigation and prevention. Security is a trade-off between cost, usability, and defense. In some cases it will not be possible to secure a cloud deployment against all of the threat actors described here. Those deploying an OpenStack cloud will have to decide where the balance lies for their deployment and usage.

##### Intelligence services

Considered by this guide to be the most capable adversary. Intelligence services and other state actors can bring tremendous resources to bear on a target. They have capabilities beyond those of any other actor. It is very difficult to defend against these actors without incredibly stringent controls in place, both human and technical.
##### Serious organized crime

Highly capable and financially driven groups of attackers, able to fund in-house exploit development and target research. In recent years, the rise of organizations such as the Russian Business Network, a massive cyber-criminal enterprise, has demonstrated how cyber attacks have become a commodity. Industrial espionage falls within the serious organized crime group.

##### Highly capable groups

This refers to "hacktivist" type organizations that are not typically commercially funded but can pose a serious threat to service providers and cloud operators.

##### Motivated individuals

These attackers act alone and come in many guises, such as rogue or malicious employees, disaffected customers, or small-scale industrial espionage.

##### Script kiddies

Automated vulnerability scanning and exploitation; non-targeted attacks. Often only a nuisance, yet compromise by one of these actors still presents a major risk to an organization's reputation.

#### Public and private cloud considerations

Private clouds are typically deployed by enterprises or institutions inside their networks and behind their firewalls. Enterprises will have strict policies on what data is allowed to exit their network and may even have different clouds for specific purposes. Users of a private cloud are typically employees of the organization that owns the cloud and can be held accountable for their actions. Employees often attend training sessions before accessing the cloud and will likely take part in regularly scheduled security awareness training. Public clouds, by contrast, cannot make any assertions about their users, cloud use cases, or user motivations. This immediately pushes the guest security domain into a fully untrusted state for public cloud providers.

A notable difference in the attack surface of public clouds is that they must provide internet access to their services. Instance connectivity, access to files over the internet, and the ability to interact with the cloud controlling fabric, such as the API endpoints and the dashboard, are must-haves for a public cloud.

Privacy concerns for public and private cloud users are typically diametrically opposed. The data generated and stored in private clouds is normally owned by the operator of the cloud, who can deploy technologies such as data loss prevention (DLP) protection, file inspection, deep packet inspection, and prescriptive firewalls. In contrast, privacy is one of the primary barriers to the adoption of public cloud infrastructure, because many of the aforementioned controls do not exist there.

#### Outbound attacks and reputational risk

Careful consideration should be given to potential outbound abuse from a cloud deployment. Whether public or private, clouds tend to have lots of resources available. An attacker who has established a point of presence within the cloud, either through hacking or through entitled access such as a rogue employee, can bring these resources to bear against the internet at large. Clouds with compute services make for ideal DDoS and brute-force engines. The issue is more pressing for public clouds, as their users are largely unaccountable and can quickly spin up numerous disposable instances for outbound attacks. If a company becomes known for hosting malicious software or launching attacks on other networks, major damage may be inflicted on its reputation. Methods of prevention include egress security groups, outbound traffic inspection, customer education and awareness, and fraud and abuse mitigation strategies.

#### Attack types

The diagram shows the typical types of attacks that may be expected from the actors described in the previous section. Note that there will always be exceptions to this diagram.

Prescriptive defense against each form of attack is outside the scope of this document. The diagram above can assist you in making an informed decision about which types of threats and threat actors should be protected against. For commercial public cloud deployments this might include prevention against serious crime. For those deploying private clouds for government use, more stringent protective mechanisms should be in place, including carefully protected facilities and supply chains. In contrast, those standing up basic development or test environments will likely require less restrictive controls, in the middle of the spectrum.

### Selecting supporting software

The selection of supporting software, such as messaging and load balancing, can have serious security implications for your cloud. It is important that the right choices are made for the organization. This section provides some general guidelines for selecting supporting software.

To choose the best supporting software, consider the following factors:

- Team expertise
- Product or project maturity
- Common Criteria
- Hardware concerns

#### Team expertise

The more familiar your team is with a given product, its configuration, and its eccentricities, the fewer the configuration mistakes. Additionally, having staff expertise spread across the organization increases the availability of your systems, allows for segregation of duties, and mitigates problems in the event that a team member becomes unavailable.
#### Product or project maturity

The maturity of a given product or project is critical to your security posture as well. Product maturity has a number of effects once you have deployed your cloud:

- Availability of expertise
- Active developer and user communities
- Timeliness and availability of updates
- Incident response

#### Common Criteria

Common Criteria is an internationally standardized software evaluation process, used by governments and commercial companies to validate that software technologies perform as advertised.

#### Hardware concerns

Consider the supportability of the hardware on which the software will run. Additionally, consider the additional features available in the hardware and how those features are supported by the software you choose.

### System documentation

System documentation for an OpenStack cloud deployment should follow the templates and best practices of the enterprise IT systems in your organization. Organizations often have compliance requirements which may demand an overall System Security Plan to inventory and document the architecture of a given system. There are common challenges across the industry related to documenting the dynamic cloud infrastructure and keeping the information current.

System documentation requirements:

- System roles and types
- System inventory
- Network topology
- Services, protocols, and ports

#### System roles and types

The two broad types of nodes that generally make up an OpenStack installation are:

- Infrastructure nodes, which run the cloud-related services such as the OpenStack Identity service, the message queuing service, storage, networking, and other services required to support the operation of the cloud.
- Compute, storage, or other resource nodes, which provide storage capacity or virtual machines for your cloud.

#### System inventory

Documentation should provide a general description of the OpenStack environment and cover all systems used (for example, production, development, or test). Documenting system components, networks, services, and software often provides the bird's-eye view needed to thoroughly cover and consider security concerns, attack vectors, and possible security-domain bridging points. A system inventory may need to capture ephemeral resources, such as virtual machines or virtual disk volumes, that would otherwise be persistent resources in a traditional IT system.

#### Hardware inventory

Clouds without stringent compliance requirements for written documentation might benefit from a configuration management database (CMDB). CMDBs are normally used for hardware asset tracking and overall life-cycle management. By leveraging a CMDB, an organization can quickly identify cloud infrastructure hardware such as compute nodes, storage nodes, or network devices. A CMDB can assist in identifying assets that exist on a network which may have vulnerabilities due to inadequate maintenance, inadequate protection, or having been displaced and forgotten. An OpenStack provisioning system can provide some basic CMDB functions if the underlying hardware supports the necessary auto-discovery features.

#### Software inventory

As with hardware, all software components within the OpenStack deployment should be documented. Examples include:

- System databases, such as MySQL or mongoDB
- OpenStack software components, such as Identity or Compute
- Supporting components, such as load balancers, reverse proxies, DNS, or DHCP services

An authoritative list of software components may be critical when assessing the impact of a compromise or vulnerability in a library, application, or class of software.

#### Network topology

A network topology should be provided, with highlights specifically calling out the data flows and bridging points between the security domains. Network ingress and egress points should be identified, along with any OpenStack logical system boundaries. Multiple diagrams may be needed to provide complete visual coverage of the system. A network topology document should include the virtual networks created on behalf of tenants by the system, along with virtual machine instances and gateways created by OpenStack.

#### Services, protocols, and ports

It is generally good practice to have information on your organization's assets. An asset table can assist in verifying security requirements and helps to maintain standard security components such as firewall configuration, service port conflicts, security remediation areas, and compliance. Additionally, the table assists in understanding the relationship between OpenStack components. The table might include:

- Services, protocols, and ports being used in the OpenStack deployment
- An overview of all services running within the cloud infrastructure

It is strongly recommended that OpenStack deployments keep a record of information like this. The table can be created from information derived from a CMDB or constructed manually. An example table is provided below:

| Service | Protocol | Port | Purpose | Used by | Security domain |
|---------|----------|------|---------|---------|-----------------|
| beam.smp | AMQP | 5672/tcp | AMQP message service | RabbitMQ | Management |
| tgtd | iSCSI | 3260/tcp | iSCSI target service | iSCSI | Private (data network) |
| sshd | ssh | 22/tcp | Secure login to nodes and guest virtual machines | Various | Management, public, and guest domains, as configured |
| mysqld | mysql | 3306/tcp | Database service | Various | Management |
| apache2 | http | 443/tcp | Dashboard | Tenants | Public |
| dnsmasq | dns | 53/tcp | DNS services | Guest VMs | Guest |
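
Keeping this inventory in a machine-readable form makes it easy to check a node's actual listeners against what is documented. The sketch below is only an illustration of that idea under assumptions, not part of the guide or of any OpenStack tool: the entries mirror the example table above, and the `observed` list stands in for whatever port-scan or `ss`/`netstat` output you collect yourself.

```python
# Hypothetical sketch: compare observed listening services on a node against the
# documented service/protocol/port inventory from the table above.

DOCUMENTED = {
    ("beam.smp", 5672): {"protocol": "AMQP", "domain": "management"},
    ("tgtd", 3260): {"protocol": "iSCSI", "domain": "data"},
    ("sshd", 22): {"protocol": "ssh", "domain": "management/public/guest"},
    ("mysqld", 3306): {"protocol": "mysql", "domain": "management"},
    ("apache2", 443): {"protocol": "http", "domain": "public"},
    ("dnsmasq", 53): {"protocol": "dns", "domain": "guest"},
}

def audit_listeners(observed):
    """Split a list of (process, port) pairs into undocumented and documented ones."""
    undocumented = [entry for entry in observed if entry not in DOCUMENTED]
    documented = [entry for entry in observed if entry in DOCUMENTED]
    return undocumented, documented

if __name__ == "__main__":
    # 'observed' would normally be filled from a port scan of the node.
    observed = [("sshd", 22), ("mysqld", 3306), ("nc", 8080)]
    unknown, known = audit_listeners(observed)
    for process, port in unknown:
        print(f"WARNING: undocumented listener {process} on port {port}")
```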
### Management

A cloud deployment is a living system. Machines age and fail, software becomes outdated, vulnerabilities are discovered. When errors or omissions are made in configuration, or when software fixes must be applied, these changes must be made in a secure but convenient fashion. These changes are typically addressed through configuration management.

It is important to protect the cloud deployment from being configured or manipulated by malicious entities. With many systems in a cloud employing compute and network virtualization, there are distinct challenges applicable to OpenStack that must be addressed through integrity life-cycle management.

Administrators must perform command and control over the cloud for various operational functions. It is important that these command and control facilities are understood and secured. The areas covered in this chapter are:

- Continuous systems management
  - Vulnerability management
  - Configuration management
  - Secure backup and recovery
  - Security auditing tools
- Integrity life-cycle
  - Secure bootstrapping
  - Runtime verification
  - Server hardening
- Management interfaces
  - Dashboard
  - OpenStack API
  - Secure shell (SSH)
  - Management utilities
  - Out-of-band management interfaces

#### Continuous systems management

A cloud will always have bugs, some of which may be security issues. For this reason, it is critically important to be prepared to apply security updates and general software updates. This involves smart use of configuration management tools, which are discussed below. It also involves knowing when an upgrade is necessary.

##### Vulnerability management

For announcements regarding security-relevant changes, subscribe to the OpenStack Announce mailing list. Security notifications are also posted through the downstream packages, for example through Linux distributions that you may be subscribed to as part of your package updates.

The OpenStack components are only a small fraction of the software in a cloud, and it is important to keep up to date with all of these other components, too. While some data sources are deployment-specific, a cloud administrator must subscribe to the necessary mailing lists to receive notification of any security updates applicable to the organization's environment. Often this is as simple as tracking an upstream Linux distribution.

Note: OpenStack releases security information through two channels.

- OpenStack Security Advisories (OSSA) are created by the OpenStack Vulnerability Management Team (VMT). They pertain to security holes in core OpenStack services. More information on the VMT can be found in the Vulnerability Management Process.
- OpenStack Security Notes (OSSN) are created by the OpenStack Security Group (OSSG) to support the work of the VMT. OSSN address issues in supporting software and common deployment configurations. They are referenced throughout this guide. Security Notes are archived at OSSN.

##### Triage

Once you are notified of a security update, the next step is to determine how critical the update is for a given cloud deployment. It is useful to have a pre-defined policy for this. Existing vulnerability rating systems, such as the Common Vulnerability Scoring System (CVSS), do not properly account for cloud deployments.

In this example we introduce a scoring matrix that places vulnerabilities in three categories: privilege escalation, denial of service, and information disclosure. Understanding the type of vulnerability and where it occurs in your infrastructure will enable you to make reasoned response decisions:

- Privilege escalation describes the ability of a user to act with the privileges of some other user in a system, bypassing appropriate authorization checks. A guest user performing an operation that allows them to conduct unauthorized operations with the privileges of an administrator is an example of this type of vulnerability.
- Denial of service refers to an exploited vulnerability that may cause service or system disruption. This includes both distributed attacks to overwhelm network resources and single-user attacks typically caused through resource allocation bugs or input-induced system failure flaws.
- Information disclosure vulnerabilities reveal information about your system or operations. These vulnerabilities range from debugging information disclosure to exposure of critical security data, such as authentication credentials and passwords.

| Vulnerability type | External attacker | Cloud user | Cloud admin | Control plane |
|--------------------|-------------------|------------|-------------|---------------|
| Privilege elevation (3 levels) | Critical | n/a | n/a | n/a |
| Privilege elevation (2 levels) | Critical | Critical | n/a | n/a |
| Privilege elevation (1 level) | Critical | Critical | Critical | n/a |
| Denial of service | High | Medium | Low | Low |
| Information disclosure | Critical / High | Critical / High | Medium / Low | Low |

This table illustrates a generic approach to measuring the impact of a vulnerability based on where it occurs in your deployment and its effect. For example, a single-level privilege escalation on a Compute API node potentially allows a standard user of the API to escalate to having the same privileges as the root user on the node.

We suggest that cloud administrators use this table as a model to help define which actions to take for the various security levels. For example, a critical-level security update might require the cloud to be upgraded quickly, whereas a low-level update might take longer to complete.
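
If you adopt a matrix like the one above, it helps to encode it somewhere your triage tooling can query. The snippet below is a minimal, hypothetical sketch of that: the severity values simply restate the example matrix (with "Critical / High" collapsed to "critical"), and `triage()` is an illustrative helper, not part of any OpenStack tooling.

```python
# Hypothetical sketch: encode the example severity matrix so a triage script
# can look up how urgent a reported vulnerability is for this deployment.

SEVERITY = {
    # (vulnerability type, attacker position) -> severity from the matrix above
    ("privilege_escalation_3", "external"): "critical",
    ("privilege_escalation_2", "external"): "critical",
    ("privilege_escalation_2", "cloud_user"): "critical",
    ("privilege_escalation_1", "external"): "critical",
    ("privilege_escalation_1", "cloud_user"): "critical",
    ("privilege_escalation_1", "cloud_admin"): "critical",
    ("denial_of_service", "external"): "high",
    ("denial_of_service", "cloud_user"): "medium",
    ("denial_of_service", "cloud_admin"): "low",
    ("denial_of_service", "control_plane"): "low",
    ("information_disclosure", "external"): "critical",       # Critical / High
    ("information_disclosure", "cloud_user"): "critical",     # Critical / High
    ("information_disclosure", "cloud_admin"): "medium",      # Medium / Low
    ("information_disclosure", "control_plane"): "low",
}

def triage(vuln_type: str, attacker_position: str) -> str:
    """Return the severity for a vulnerability, or 'n/a' where the matrix
    marks the combination as not applicable."""
    return SEVERITY.get((vuln_type, attacker_position), "n/a")

print(triage("denial_of_service", "cloud_user"))  # -> medium
```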
##### Testing the updates

Any update should be fully tested before being deployed in a production environment. Typically this requires a separate test cloud that receives the update first. This cloud should be as close to the production cloud as possible, in terms of both software and hardware. Updates should be tested thoroughly in terms of performance impact, stability, application impact, and more. It is especially important to verify that the problem theoretically addressed by the update, such as a specific vulnerability, is actually fixed.

##### Deploying the updates

Once an update has been fully tested, it can be deployed to the production environment. This deployment should be fully automated using the configuration management tools described below.

#### Configuration management

A production-quality cloud should always use tools to automate configuration and deployment. This eliminates human error and allows the cloud to scale much more rapidly. Automation also helps with continuous integration and testing.

When building an OpenStack cloud, it is strongly recommended to approach your design and implementation with a configuration management tool or framework in mind. Configuration management allows you to avoid the many pitfalls inherent in building, managing, and maintaining an infrastructure as complex as OpenStack. By producing the manifests, cookbooks, or templates required by a configuration management utility, you are also able to satisfy a number of documentation and regulatory reporting requirements. Further, configuration management can function as part of your business continuity plan (BCP) and disaster recovery (DR) plans, allowing you to rebuild a node or service back to a known state in a DR event or after a compromise.

Additionally, when combined with a version control system such as Git or SVN, you can track changes to your environment over time and remediate unauthorized changes that may occur. For example, if a nova.conf file or another configuration file falls out of compliance with your standard, your configuration management tool can revert or replace the file and bring your configuration back to a known state (a minimal sketch of this drift-and-restore idea follows the tool list below). Finally, a configuration management tool can also be used to deploy updates, simplifying the security patch process. These tools have a broad range of capabilities that are useful in this space; the key point for securing your cloud is to choose a configuration management tool and use it.

There are many configuration management solutions; at the time of this writing there are two in the marketplace that are robust in their support of OpenStack environments, Chef and Puppet. A non-exhaustive list of tools in this space includes Chef, Puppet, Salt Stack, and Ansible.
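
As a small illustration of the drift-and-restore idea above, and independent of any particular configuration management tool, the following sketch compares a configuration file against a known-good copy kept under version control and restores it when they diverge. The file paths and the restore behaviour are assumptions made only for the example.

```python
# Hypothetical sketch: detect drift in a managed configuration file and restore
# it from a known-good baseline, mirroring what tools such as Chef or Puppet
# do continuously and at scale.
import hashlib
import shutil
from pathlib import Path

BASELINE = Path("/srv/config-baseline/nova.conf")  # assumed: checked out from Git
DEPLOYED = Path("/etc/nova/nova.conf")

def sha256(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def enforce_baseline() -> bool:
    """Return True if the deployed file already matches the baseline; otherwise
    restore the baseline copy and return False."""
    if sha256(DEPLOYED) == sha256(BASELINE):
        return True
    shutil.copy2(BASELINE, DEPLOYED)  # revert unauthorized or accidental changes
    return False

if __name__ == "__main__":
    if not enforce_baseline():
        print("nova.conf drifted from the baseline and was restored")
```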
### Security auditing tools

Security auditing tools can complement the configuration management tools. They automate the process of verifying that a given system configuration satisfies a large set of security controls, and they help bridge the gap between security configuration guidance documents (for example, the STIGs and NSA guides) and a specific system installation. For example, SCAP can compare a running system against a predefined profile and produce a report detailing which controls in the profile were satisfied, which failed, and which were not checked or not selected.

Combining configuration management and security auditing tools creates a powerful pairing: the auditing tool highlights deployment concerns, and the configuration management tool simplifies the process of changing each system to address them. Used together in this way, these tools help maintain a cloud that satisfies security requirements ranging from basic hardening to compliance validation.

Configuration management and security auditing tools do add another layer of complexity to the cloud, and that complexity brings additional security concerns with it. We consider this an acceptable risk trade-off, given their security benefits. Securing the operation of these tools themselves is beyond the scope of this guide.
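For instance, an OpenSCAP evaluation against a published profile can be run as sketched below; the profile ID and data-stream path are distribution-specific examples and will differ on other systems.

```bash
# Evaluate this host against a DISA STIG profile shipped with scap-security-guide
# (profile ID and content path are distribution-specific examples).
oscap xccdf eval \
  --profile xccdf_org.ssgproject.content_profile_stig \
  --results /var/tmp/oscap-results.xml \
  --report  /var/tmp/oscap-report.html \
  /usr/share/xml/scap/ssg/content/ssg-rhel7-ds.xml
```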
### Integrity life-cycle

We define the integrity life cycle as a deliberate process that ensures we are always running the expected software with the expected configuration throughout the cloud. The process begins with secure bootstrapping and is maintained through configuration management and security monitoring. This chapter provides recommendations on how to approach the integrity life-cycle process.

### Secure bootstrapping

Nodes in the cloud, including compute, storage, network, service, and hybrid nodes, should have an automated provisioning process. This ensures that nodes are provisioned consistently and correctly, and it also facilitates security patching, upgrading, bug fixing, and other critical changes. Because this process installs new software that runs at the highest privilege levels in the cloud, it is important to verify that the correct software is installed, and that includes the earliest stages of the boot process.

There are a variety of technologies for verifying these early boot stages. They typically require hardware support such as the Trusted Platform Module (TPM), Intel Trusted Execution Technology (TXT), dynamic root of trust measurement (DRTM), and Unified Extensible Firmware Interface (UEFI) secure boot. In this guide we refer to all of these collectively as secure boot technologies. We recommend using secure boot, while acknowledging that many of the pieces required to deploy it demand advanced technical skill in order to customize the tools for each environment. Compared with most of the other recommendations in this guide, secure boot requires deeper integration and customization. TPM technology, while common in most business-class laptops and desktops for years, is now also available in servers together with supporting BIOSes. Proper planning is essential to a successful secure boot deployment.

A complete tutorial on secure boot deployment is beyond the scope of this book. Instead, we provide here a framework for integrating secure boot technologies with the typical node provisioning process. For further details, cloud architects should consult the relevant specifications and software configuration manuals.
### Node provisioning

Nodes should use the Preboot eXecution Environment (PXE) for provisioning. This significantly reduces the effort required to re-deploy nodes. The typical process involves the node receiving various boot stages, that is, progressively more complex pieces of software to execute, from a server.

We recommend using a separate, isolated network within the management security domain for provisioning. This network handles all PXE traffic along with the subsequent boot-stage downloads described above. Note that the node boot process begins with two insecure operations: DHCP and TFTP. The boot process then uses TLS to download the remaining information required to deploy the node. This may be an operating system installer, a basic install managed by Chef or Puppet, or even a complete file system image that is written directly to disk.

While using TLS during the PXE boot process is more challenging, common PXE firmware projects such as iPXE provide this support. Typically this involves building the PXE firmware with knowledge of the allowed TLS certificate chain(s) so that it can properly validate the server certificate. This raises the bar for attackers by limiting the number of insecure, clear-text network operations.
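A minimal sketch of that approach with iPXE: an embedded script that chainloads the next boot stage over HTTPS from the provisioning network. The URLs and host names below are placeholders, and the exact build options should be checked against the iPXE documentation for the version in use.

```
#!ipxe
# boot.ipxe -- embedded provisioning script; URL and host name are placeholders
dhcp
chain https://provision.mgmt.example.internal/${net0/mac}/next-stage.ipxe
```

The firmware would then be built with this script embedded and with only the internal provisioning CA trusted, for example `make bin/undionly.kpxe EMBED=boot.ipxe TRUST=provisioning-ca.crt` (EMBED and TRUST are iPXE build-time options; confirm them against your iPXE version).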
### Verified boot

In general there are two strategies for validating the boot process. Traditional secure boot validates the code run at each step of the process and stops the boot if the code is incorrect. Boot attestation records the code run at each step and provides that information to another machine as proof that the boot process completed as expected. In both cases, the first step is to measure each piece of code before it runs. In this context, a measurement is effectively a SHA-1 hash of the code, taken before it is executed. The hash is stored in a platform configuration register (PCR) of the TPM.

Note: SHA-1 is used here because that is what the TPM chips support.

Each TPM has at least 24 PCRs. The TCG Generic Server Specification, v1.0, March 2005, defines the PCR assignments for boot-time integrity measurements. The table below shows a typical PCR configuration. The context indicates whether the values are determined by the node hardware (firmware) or by the software provisioned onto the node. Some values are influenced by the firmware version, disk sizes, and other low-level information; it is therefore important to have good configuration management practices in place, to ensure that every deployed system is configured exactly as expected.

| Register | Measured content | Context |
|:--------:|:-----------------|:-------:|
| PCR-00 | Core root of trust measurement (CRTM), BIOS code, host platform extensions | Hardware |
| PCR-01 | Host platform configuration | Hardware |
| PCR-02 | Option ROM code | Hardware |
| PCR-03 | Option ROM configuration and data | Hardware |
| PCR-04 | Initial program loader (IPL) code, for example the master boot record | Software |
| PCR-05 | IPL code configuration and data | Software |
| PCR-06 | State transition and wake events | Software |
| PCR-07 | Host platform manufacturer control | Software |
| PCR-08 | Platform specific, often the kernel, kernel extensions, and drivers | Software |
| PCR-09 | Platform specific, often the initramfs | Software |
| PCR-10 to PCR-23 | Platform specific | Software |
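To establish and later monitor per-server baselines, the current PCR values can be read on a running node, for example with the tpm2-tools utilities sketched below. Tool names differ between tpm2-tools releases and TPM 1.2 software stacks, and which banks are populated depends on the platform, so treat this as an illustrative assumption rather than a fixed recipe.

```bash
# Dump the SHA-1 bank of the boot-relevant PCRs for baselining
# (tpm2_pcrread is from tpm2-tools 4.x+; older releases used tpm2_pcrlist;
#  bank and register selection depend on the platform).
tpm2_pcrread sha1:0,1,2,3,4,5,8,9 > /var/lib/pcr-baseline/$(hostname).txt
```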
Secure boot may be an option when building your cloud, but it requires careful planning in terms of hardware selection. For example, ensure that you have TPM and Intel TXT support, and then verify how the node hardware vendor populates the PCR values, that is, which values will actually be available for validation. Typically the PCR values listed under the software context in the table above are the ones a cloud architect can control directly, but even these can change as the software in the cloud is upgraded. Configuration management should be linked into the PCR policy engine to ensure that validation is always up to date.

Each manufacturer must provide the BIOS and firmware code for its servers. Different servers, hypervisors, and operating systems choose to populate different PCRs. In most real-world deployments it is impossible to validate every PCR against a known good quantity (a "golden measurement"). Experience has shown that, even within a single vendor's product line, the measurement process for a given PCR may not be consistent. We recommend establishing a baseline for each server and monitoring the PCR values for unexpected changes. Third-party software may be available to assist with the TPM provisioning and monitoring process, depending on the chosen hypervisor solution.

The initial program loader (IPL) code will most likely be the PXE firmware, assuming the node provisioning strategy outlined above. The secure boot or boot attestation process can therefore measure all of the early-stage boot code, such as the BIOS, firmware, PXE firmware, and kernel image. Ensuring that each node runs the correct versions of these components provides a solid foundation on which to build the rest of the node's software stack.
Depending on the strategy chosen, in the event of a failure the node either will not boot, or it will report the failure to another entity in the cloud. For secure boot, the node fails to boot and a provisioning service within the management security domain must recognize this and log the event. For boot attestation, the node will already be running when the failure is detected; in that case the node should be isolated immediately by disabling its network access, and the root cause of the event should then be analyzed. In either case, policy should dictate how to proceed after a failure. The cloud may automatically attempt to re-provision the node a certain number of times, or it may immediately notify a cloud administrator to investigate the problem. The appropriate policy here is specific to the deployment and to the failure mode.
### Node hardening

At this point we know that the node has booted with the correct kernel and underlying components. The next step is to harden the operating system, starting from a set of industry-accepted hardening controls. The following are good examples:

- Security Technical Implementation Guides (STIGs). The Defense Information Systems Agency (DISA), part of the US Department of Defense, publishes STIG content for a variety of operating systems, applications, and hardware. The controls are published without any license attached.
- Center for Internet Security (CIS) benchmarks. CIS regularly publishes security benchmarks as well as automated tools that apply those controls. The benchmarks are published under a knowledge-sharing license with some restrictions.

These security controls are best applied in an automated fashion. Automation ensures that the controls are applied to every system in the same way, every time, and it also provides a quick way to audit existing systems. There are several automation options:

- OpenSCAP. An open-source tool that consumes SCAP content (XML files describing security controls) and applies that content to various systems. Most of the content available today targets Red Hat Enterprise Linux and CentOS, but the tools work on any Linux or Windows system.
- ansible-hardening. The ansible-hardening project provides an Ansible role that applies security controls to a variety of Linux operating systems; it can also be used to audit existing systems. Review each control carefully to determine whether it could harm a production system. The controls are based on the Red Hat Enterprise Linux 7 STIG.
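As a rough sketch of the ansible-hardening option, the play below applies the role to a group of nodes; the role installation command, inventory group, and file name are assumptions to adapt to your environment.

```yaml
# harden.yml -- illustrative only; install the role first, for example:
#   ansible-galaxy install git+https://opendev.org/openstack/ansible-hardening
- hosts: overcloud_nodes        # placeholder inventory group
  become: true
  roles:
    - ansible-hardening
```

A dry run such as `ansible-playbook --check harden.yml` lets you review the proposed changes control by control before enforcing them on production systems.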
\u3002","title":"\u53ea\u8bfb\u6587\u4ef6\u7cfb\u7edf"},{"location":"security/security-guide/#_76","text":"\u6700\u540e\uff0c\u8282\u70b9\u5185\u6838\u5e94\u8be5\u6709\u4e00\u79cd\u673a\u5236\u6765\u9a8c\u8bc1\u8282\u70b9\u7684\u5176\u4f59\u90e8\u5206\u662f\u5426\u4ee5\u5df2\u77e5\u7684\u826f\u597d\u72b6\u6001\u542f\u52a8\u3002\u8fd9\u63d0\u4f9b\u4e86\u4ece\u5f15\u5bfc\u9a8c\u8bc1\u8fc7\u7a0b\u5230\u9a8c\u8bc1\u6574\u4e2a\u7cfb\u7edf\u7684\u5fc5\u8981\u94fe\u63a5\u3002\u6267\u884c\u6b64\u64cd\u4f5c\u7684\u6b65\u9aa4\u5c06\u7279\u5b9a\u4e8e\u90e8\u7f72\u3002\u4f8b\u5982\uff0c\u5185\u6838\u6a21\u5757\u53ef\u4ee5\u5728\u4f7f\u7528 dm-verity \u6302\u8f7d\u6587\u4ef6\u7cfb\u7edf\u4e4b\u524d\u9a8c\u8bc1\u7ec4\u6210\u6587\u4ef6\u7cfb\u7edf\u7684\u5757\u7684\u54c8\u5e0c\u503c\u3002","title":"\u7cfb\u7edf\u9a8c\u8bc1"},{"location":"security/security-guide/#_77","text":"\u4e00\u65e6\u8282\u70b9\u8fd0\u884c\uff0c\u6211\u4eec\u9700\u8981\u786e\u4fdd\u5b83\u968f\u7740\u65f6\u95f4\u7684\u63a8\u79fb\u4fdd\u6301\u826f\u597d\u7684\u72b6\u6001\u3002\u4ece\u5e7f\u4e49\u4e0a\u8bb2\uff0c\u8fd9\u5305\u62ec\u914d\u7f6e\u7ba1\u7406\u548c\u5b89\u5168\u76d1\u63a7\u3002\u8fd9\u4e9b\u9886\u57df\u4e2d\u6bcf\u4e2a\u9886\u57df\u7684\u76ee\u6807\u90fd\u4e0d\u540c\u3002\u901a\u8fc7\u68c0\u67e5\u8fd9\u4e24\u8005\uff0c\u6211\u4eec\u53ef\u4ee5\u66f4\u597d\u5730\u786e\u4fdd\u7cfb\u7edf\u6309\u9884\u671f\u8fd0\u884c\u3002\u6211\u4eec\u5c06\u5728\u7ba1\u7406\u90e8\u5206\u8ba8\u8bba\u914d\u7f6e\u7ba1\u7406\uff0c\u5e76\u5728\u4e0b\u9762\u8ba8\u8bba\u5b89\u5168\u76d1\u63a7\u3002","title":"\u8fd0\u884c\u65f6\u9a8c\u8bc1"},{"location":"security/security-guide/#_78","text":"\u57fa\u4e8e\u4e3b\u673a\u7684\u5165\u4fb5\u68c0\u6d4b\u5de5\u5177\u5bf9\u4e8e\u81ea\u52a8\u9a8c\u8bc1\u4e91\u5185\u90e8\u4e5f\u5f88\u6709\u7528\u3002\u6709\u5404\u79cd\u5404\u6837\u7684\u57fa\u4e8e\u4e3b\u673a\u7684\u5165\u4fb5\u68c0\u6d4b\u5de5\u5177\u53ef\u7528\u3002\u6709\u4e9b\u662f\u514d\u8d39\u63d0\u4f9b\u7684\u5f00\u6e90\u9879\u76ee\uff0c\u800c\u53e6\u4e00\u4e9b\u5219\u662f\u5546\u4e1a\u9879\u76ee\u3002\u901a\u5e38\uff0c\u8fd9\u4e9b\u5de5\u5177\u4f1a\u5206\u6790\u6765\u81ea\u5404\u79cd\u6765\u6e90\u7684\u6570\u636e\uff0c\u5e76\u6839\u636e\u89c4\u5219\u96c6\u548c/\u6216\u8bad\u7ec3\u751f\u6210\u5b89\u5168\u8b66\u62a5\u3002\u5178\u578b\u529f\u80fd\u5305\u62ec\u65e5\u5fd7\u5206\u6790\u3001\u6587\u4ef6\u5b8c\u6574\u6027\u68c0\u67e5\u3001\u7b56\u7565\u76d1\u63a7\u548c rootkit \u68c0\u6d4b\u3002\u66f4\u9ad8\u7ea7\uff08\u901a\u5e38\u662f\u81ea\u5b9a\u4e49\uff09\u5de5\u5177\u53ef\u4ee5\u9a8c\u8bc1\u5185\u5b58\u4e2d\u8fdb\u7a0b\u6620\u50cf\u662f\u5426\u4e0e\u78c1\u76d8\u4e0a\u7684\u53ef\u6267\u884c\u6587\u4ef6\u5339\u914d\uff0c\u5e76\u9a8c\u8bc1\u6b63\u5728\u8fd0\u884c\u7684\u8fdb\u7a0b\u7684\u6267\u884c\u72b6\u6001\u3002 
### System validation

Finally, the node kernel should have a mechanism to validate that the rest of the node booted into a known good state. This provides the necessary link from the boot validation process to validating the complete system. The steps for doing this are deployment specific. As an example, a kernel module could verify a hash over the blocks comprising a file system before mounting it, using dm-verity.

### Runtime verification

Once the node is running, we need to ensure that it remains in a good state over time. Broadly speaking, this includes both configuration management and security monitoring. The goals of each are different; by checking both, we gain better assurance that the system is operating as intended. Configuration management is discussed in the management section; security monitoring is discussed below.

### Intrusion detection systems

Host-based intrusion detection tools are also useful for automated validation of the cloud internals. A wide variety of host-based intrusion detection tools are available; some are open source and free of charge, while others are commercial. Typically these tools analyze data from a variety of sources and produce security alerts based on rule sets and/or training. Typical capabilities include log analysis, file integrity checking, policy monitoring, and rootkit detection. More advanced, often custom, tools can validate that in-memory process images match the on-disk executable and can validate the execution state of a running process.

One critical policy decision for a cloud architect is what to do with the output from a security monitoring tool. There are effectively two options. The first is to alert a human to investigate and/or take corrective action; this can be done by including the security alert in a log or event source available to cloud administrators. The second is to have the cloud take some form of remediation automatically, in addition to logging the event. Remediation could include anything from re-provisioning a node to making a minor service configuration change. However, automated remediation can be challenging because of the possibility of false positives.

False positives occur when the security monitoring tool produces an alert for a benign event. Given the nature of security monitoring tools, false positives will certainly occur from time to time. Typically a cloud administrator can tune the monitoring tools to reduce false positives, but doing so may also reduce the overall detection rate. These classic trade-offs must be understood and accounted for when setting up a security monitoring system in the cloud.
The selection and configuration of a host-based intrusion detection tool is highly deployment specific. We recommend starting by exploring the following open-source projects, which implement a variety of host-based intrusion detection and file monitoring features:

- OSSEC
- Samhain
- Tripwire
- AIDE

Network intrusion detection tools complement the host-based tools. OpenStack does not have a specific network IDS built in, but OpenStack Networking provides a plug-in mechanism that enables different technologies through the Networking API. This plug-in architecture allows tenants to develop API extensions to insert and configure their own advanced networking services, such as firewalls, intrusion detection systems, or VPNs between the VMs. As with the host-based tools, the selection and configuration of a network-based IDS is deployment specific. Snort is the leading open-source network IDS and a good starting point for learning more.

There are a few important security considerations for network- and host-based intrusion detection systems. It is important to consider where the network IDS is placed in the cloud, for example at the network boundary and/or around sensitive networks. The placement depends on your network environment, but be sure to monitor the impact the IDS may have on your services, which depends on where it is added. A network IDS usually cannot inspect the contents of encrypted traffic such as TLS; it may, however, still provide some benefit by identifying anomalous unencrypted traffic on the network. In some deployments it may be necessary to add a host-based IDS on sensitive components that sit on security domain bridges; a host-based IDS can detect anomalous activity caused by a compromised or unauthorized process on the component. The IDS should transmit alert and log information over the management network.
### Server hardening

Servers in the cloud, including the undercloud and overcloud infrastructure, should implement hardening best practices. Because operating-system and server hardening is common practice, the applicable best practices, including but not limited to logging, user account restrictions, and regular updates, are not covered here, but they should be applied to all of the infrastructure.

### File integrity management (FIM)

File integrity management (FIM) is the method of ensuring that files such as sensitive system or application configuration files are not corrupted or changed in a way that allows unauthorized access or malicious behavior. It can be implemented with a utility such as Samhain, which creates a checksum hash of a specified resource and then validates that hash at regular intervals, or with a tool such as dm-verity, which takes a hash of a block device and validates those hashes as the blocks are accessed by the system, before they are presented to the user.

FIM should be put in place to monitor and report on changes to system, hypervisor, and application configuration files, such as /etc/pam.d/system-auth and /etc/keystone/keystone.conf, as well as kernel modules (such as virtio). A good practice is to use the lsmod command to show what is regularly being loaded on a system, which helps determine what should or should not be included in the FIM checks.
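As one concrete FIM option, AIDE can be initialized with a baseline database and checked periodically. The rule and paths shown below follow the stock Red Hat style configuration and are illustrative; adjust them for your distribution and policy.

```bash
# Add a file of interest to AIDE's policy (NORMAL is a rule group defined in
# the stock configuration on some distributions), then record the baseline.
echo '/etc/keystone/keystone.conf NORMAL' >> /etc/aide.conf
aide --init && mv /var/lib/aide/aide.db.new.gz /var/lib/aide/aide.db.gz

# Run periodically (for example from cron) to report changes against the baseline:
aide --check
```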
### Management interfaces

Administrators need to perform command and control over the cloud for various operational functions, and it is important that these command-and-control facilities are understood and secured. OpenStack provides several management interfaces for operators and tenants:

- OpenStack dashboard (horizon)
- OpenStack API
- Secure shell (SSH)
- OpenStack management utilities, such as nova-manage and glance-manage
- Out-of-band management interfaces, such as IPMI

### Dashboard

The OpenStack dashboard (horizon) provides administrators and tenants with a web-based graphical interface to provision and access cloud-based resources. The dashboard communicates with the back-end services through calls to the OpenStack API.

#### Capabilities

- As a cloud administrator, the dashboard provides an overall view of the size and state of your cloud. You can create users and tenants/projects, assign users to tenants/projects, and set limits on the resources available to them.
- The dashboard provides tenant users with a self-service portal for provisioning their own resources within the limits set by administrators.
- The dashboard provides GUI support for routers and load balancers; for example, it now implements all of the main Networking features.
- It is an extensible Django web application that allows easy plug-in of third-party products and services, such as billing, monitoring, and additional management tools.
- The dashboard can also be branded for service providers and other commercial vendors.

#### Security considerations

- The dashboard requires cookies and JavaScript to be enabled in the web browser.
- The web server that hosts the dashboard should be configured for TLS to ensure that data is encrypted.
- Both the horizon web service and the OpenStack API it uses to communicate with the back end are susceptible to web attack vectors such as denial of service and must be monitored.
- It is now possible, though there are a number of deployment and security implications, to upload an image file directly from a user's hard disk to the OpenStack Image service through the dashboard. For multi-gigabyte images it is still strongly recommended to perform the upload using the glance CLI.
- Create and manage security groups through the dashboard. Security groups allow L3-L4 packet filtering for security policies that protect virtual machines.
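The same L3-L4 filtering can also be driven from the CLI; a small sketch follows, in which the group name and CIDR are placeholders.

```bash
# Allow SSH to instances in the "web" group only from the admin network
# (group name and CIDR are placeholders).
openstack security group create web
openstack security group rule create --proto tcp --dst-port 22 \
  --remote-ip 192.0.2.0/24 web
```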
#### Bibliography

OpenStack.org, ReleaseNotes/Liberty. 2015. OpenStack Liberty release notes.

### OpenStack API

The OpenStack API is a RESTful web service endpoint used to access, provision, and automate cloud-based resources. Operators and users typically access the API through command-line utilities (for example, nova or glance), language-specific libraries, or third-party tools.

#### Capabilities

To the cloud administrator, the API provides an overall view of the size and state of the cloud deployment and allows the creation of users and tenants/projects, assigning users to tenants/projects, and specifying resource quotas on a per-tenant/project basis. The API also provides a tenant interface for provisioning, managing, and accessing their resources.

#### Security considerations

- The API service should be configured for TLS to ensure that data is encrypted.
- As a web service, the OpenStack API is susceptible to familiar web attack vectors, such as denial-of-service attacks.

### Secure shell (SSH)

It has become industry practice to use secure shell (SSH) access for the management of Linux and UNIX systems. SSH uses secure cryptographic primitives for communication. Given the scope and importance of SSH in a typical OpenStack deployment, it is important to understand the best practices for deploying it.
#### Host key fingerprints

Often overlooked is the need for key management for SSH hosts. Because most or all hosts in an OpenStack deployment provide an SSH service, it is important to have confidence in connections to these hosts. It cannot be understated that failing to provide a reasonably secure and accessible method for verifying SSH host key fingerprints is ripe for abuse and exploitation.

All SSH daemons have private host keys and, upon connection, offer a host key fingerprint. This fingerprint is the hash of an unsigned public key. These host key fingerprints should be known in advance of making SSH connections to the hosts; verifying host key fingerprints helps detect man-in-the-middle attacks.

Typically, host keys are generated when the SSH daemon is installed. The host must have sufficient entropy during host key generation; insufficient entropy at that point can make it possible to eavesdrop on SSH sessions.

Once the SSH host key is generated, the host key fingerprint should be stored in a secure and queryable location. One particularly convenient solution is DNS, using SSHFP resource records as defined in RFC-4255. For this to be secure, DNSSEC must be deployed.
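For example, the SSHFP records for a host can be generated directly from its public host keys and then published in the DNSSEC-signed zone; the host name below is a placeholder.

```bash
# Generate SSHFP resource records from this host's public host keys
# (host name is a placeholder); publish the output in a DNSSEC-signed zone.
ssh-keygen -r compute01.example.internal

# Clients can then verify fingerprints against DNS automatically:
#   ssh -o VerifyHostKeyDNS=yes compute01.example.internal
```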
### Management utilities

The OpenStack management utilities are open-source Python command-line clients that make API calls. There is a client for each OpenStack service (for example, nova, glance). In addition to the standard CLI clients, most of the services have a management command-line utility that makes direct calls to the database. These dedicated management utilities are slowly being deprecated.

#### Security considerations

- The dedicated management utilities (*-manage) in some cases use a direct database connection.
- Ensure that the .rc file containing your credential information is secured.
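A small precaution that follows from the last point is keeping the RC file readable only by its owner, for example as sketched below (the file name is a placeholder).

```bash
# Restrict the credential-bearing RC file to its owner before sourcing it.
chmod 600 ~/admin-openrc.sh
source ~/admin-openrc.sh
```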
#### Bibliography

- OpenStack.org, "OpenStack End User Guide" section. 2016. OpenStack command-line clients overview.
- OpenStack.org, Set environment variables using the OpenStack RC file. 2016. Download and source the OpenStack RC file.

### Out-of-band management interface

OpenStack management relies on out-of-band management interfaces, such as the IPMI protocol, to access the nodes running OpenStack components. IPMI is a very popular specification for remotely managing, diagnosing, and rebooting servers, whether the operating system is running or the system has crashed.

#### Security considerations

- Use strong passwords and safeguard them, or use client-side TLS authentication.
- Ensure that the network interfaces are on their own private (management or separate) network. Segregate management domains with firewalls or other network gear.
- If you use a web interface to interact with the BMC/IPMI, always use the TLS interface, such as HTTPS on port 443. This TLS interface should not use self-signed certificates, as is often the default, but should use trusted certificates with correctly defined fully qualified domain names (FQDNs).
- Monitor the traffic on the management network. Anomalies may be easier to track there than on the busier compute nodes.
- Out-of-band management interfaces often also include graphical machine console access. It is often possible, although not necessarily the default, to encrypt these interfaces. Consult your system software documentation for encrypting them.

#### Bibliography

SANS Technology Institute, InfoSec Handlers Diary Blog. 2012. Hacking servers that are turned off.
### Secure communication

Inter-device communication is a serious security concern. Between major project flaws such as Heartbleed and more advanced attacks such as BEAST and CRIME, secure methods of communication over a network are becoming more important. It should be remembered, however, that encryption should be applied as one part of a larger security strategy: the compromise of an endpoint means that an attacker no longer needs to break the encryption in use, but is able to view and manipulate messages as they are processed by the system.

This chapter reviews several features around configuring TLS to secure both internal and external resources, and calls out specific categories of systems that deserve particular attention:

- Introduction to TLS and SSL
- Certification authorities
- TLS libraries
- Cryptographic algorithms, cipher modes, and protocols
- Summary
- TLS proxies and HTTP services
- Examples
- HTTP strict transport security
- Perfect forward secrecy
- Secure reference architectures
- SSL/TLS proxy in front
- SSL/TLS on the same physical hosts as the API endpoints
- SSL/TLS over load balancer
- Cryptographic separation of external and internal environments

### Introduction to TLS and SSL

There are situations where there is a security requirement to assure the confidentiality or integrity of network traffic in an OpenStack deployment. This is generally achieved using cryptographic measures, such as the Transport Layer Security (TLS) protocol.

In a typical deployment, all traffic transmitted over public networks is secured, but security best practice dictates that internal traffic must also be secured. It is insufficient to rely on security domain separation alone for protection: if an attacker gains access to the hypervisor or host resources, or compromises an API endpoint or any other service, they must not be able to easily inject or capture messages or commands, or otherwise affect the management capabilities of the cloud.

All domains should be secured with TLS, including the management domain services and intra-service communications. TLS provides the mechanisms to ensure authentication, non-repudiation, confidentiality, and integrity of user communications to the OpenStack services and between the OpenStack services themselves.

Because of the published vulnerabilities in the Secure Sockets Layer (SSL) protocols, we strongly recommend that TLS is used in preference to SSL and that SSL is disabled in all cases, unless compatibility with obsolete browsers or libraries is required.
Public key infrastructure (PKI) is the framework for securing communication in a network. It consists of a set of systems and processes that ensure traffic can be sent securely while validating the identity of the parties. The PKI profile described here is the Internet Engineering Task Force (IETF) Public Key Infrastructure (PKIX) profile developed by the PKIX working group. The core components of PKI are:

- Digital certificates: signed public key certificates are data structures containing verifiable data about an entity, its public key, and some other attributes. These certificates are issued by a certification authority (CA). Because the certificates are signed by a trusted CA, once verified, the public key associated with an entity is guaranteed to belong to that entity. The most common standard used to define these certificates is X.509; the current version, X.509 v3, is described in detail in RFC5280. Certificates are issued by CAs as a mechanism for proving the identity of online entities. The CA digitally signs a certificate by creating a message digest from the certificate and encrypting that digest with its private key.
- End entity: the user, process, or system that is the subject of a certificate. The end entity sends its certificate request to a registration authority (RA) for approval. If approved, the RA forwards the request to a certification authority (CA), which verifies the request and, if the information is correct, generates and signs the certificate. The signed certificate is then sent to the certificate repository.
- Relying party: the endpoint that receives the digitally signed certificate, which is verifiable with reference to the public key listed on the certificate. The relying party should be able to verify the certificate up the chain, ensure that it is not present in the CRL, and verify the certificate's expiry date.
- Certification authority (CA): a trusted entity, both to the end party and to the party relying on the certificate, for certification policies, management handling, and certificate issuance.
- Registration authority (RA): an optional system to which a CA delegates certain management functions, such as authenticating end entities before a certificate is issued to them.
- Certificate revocation list (CRL): a list of the serial numbers of certificates that have been revoked. End entities presenting these certificates should not be trusted in a PKI model. Revocation can happen for several reasons, for example key compromise or CA compromise.
- CRL issuer: an optional system to which a CA delegates the publication of certificate revocation lists.
- Certificate repository: the location where end-entity certificates and certificate revocation lists are stored and queried, sometimes referred to as the certificate bundle.

PKI builds the framework on which to provide encryption algorithms, cipher modes, and protocols for securing data and authentication. We strongly recommend securing all services with public key infrastructure, including the use of TLS for API endpoints. Encryption or signing of transports or messages alone cannot solve all of these problems: the hosts themselves must be secure and must implement policy, namespaces, and other controls to protect their private credentials and keys. However, the challenges of key management and protection do not reduce the necessity of these controls, or lessen their importance.
### Summary

Given the complexity of the OpenStack components and the number of deployment possibilities, you must take care to ensure that each component gets the appropriate configuration of TLS certificates, keys, and CAs. Subsequent sections discuss the following services:

- Compute API endpoints
- Identity API endpoints
- Networking API endpoints
- Storage API endpoints
- Messaging server
- Database server
- Dashboard

### TLS proxies and HTTP services

OpenStack endpoints are HTTP services that provide APIs to end users on public networks and to other OpenStack services on the management network. We strongly recommend that all of these requests, both internal and external, operate over TLS. To achieve this, API services must be deployed behind a TLS proxy that can establish and terminate TLS sessions. The following is a non-exhaustive list of open source software that can be used for this purpose:

- Pound
- Stud
- Nginx
- Apache httpd

In cases where software termination offers insufficient performance, hardware accelerators may be worth exploring as an alternative. Be sure to review the size of the requests that any chosen TLS proxy will be expected to handle.
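As one illustration of that last point, a minimal sketch of capping the accepted request size on an Nginx terminator; the value shown is only an example and should be tuned to the APIs behind it:

```
# Inside the http {} or server {} block of /etc/nginx/nginx.conf
# Reject request bodies larger than 1 MiB before they reach the API service.
client_max_body_size 1m;
```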
### Examples

Below are recommended configuration settings for enabling TLS in some of the more popular web servers and TLS terminators.

Before diving into the configurations, we briefly discuss the configuration elements of ciphers and their format. For a more thorough treatment of the available ciphers and the OpenSSL cipher list format, see the OpenSSL ciphers documentation.

```
ciphers = "HIGH:!RC4:!MD5:!aNULL:!eNULL:!EXP:!LOW:!MEDIUM"
```

or

```
ciphers = "kEECDH:kEDH:kRSA:HIGH:!RC4:!MD5:!aNULL:!eNULL:!EXP:!LOW:!MEDIUM"
```

Cipher string options are separated by ":", and "!" negates the element immediately following it. Element order indicates preference unless overridden by a qualifier such as HIGH. Let us look more closely at the elements in the example strings above.

- **kEECDH:kEDH**: Ephemeral Elliptic Curve Diffie-Hellman (abbreviated EECDH or ECDHE) and Ephemeral Diffie-Hellman (abbreviated EDH or DHE), which uses prime field groups. Both methods provide perfect forward secrecy (PFS); see the perfect forward secrecy section for more discussion of configuring PFS correctly. Ephemeral elliptic curves require the server to be configured with a named curve, and provide better security at lower computational cost than prime field groups; prime field groups, however, are more widely implemented, so both are usually included in the list.
- **kRSA**: cipher suites using the RSA exchange, RSA authentication, or either.
- **HIGH**: selects the highest-security ciphers possible during negotiation. These typically have keys 128 bits long or longer.
- **!RC4**: no RC4. RC4 has flaws in the context of TLS; see "On the Security of RC4 in TLS and WPA".
- **!MD5**: no MD5. MD5 is not collision resistant and is therefore not acceptable for message authentication codes (MAC) or signatures.
- **!aNULL:!eNULL**: disallows clear text.
- **!EXP**: disallows export encryption algorithms, which by design tend to be weak, typically using 40-bit and 56-bit keys. US export restrictions on cryptographic systems have been lifted, so supporting them is no longer needed.
- **!LOW:!MEDIUM**: disallows low (56-bit or 64-bit key) and medium (128-bit key) ciphers because of their vulnerability to brute-force attacks (for example 2-DES). This rule still permits Triple Data Encryption Standard (Triple DES), also known as the Triple Data Encryption Algorithm (TDEA), and the Advanced Encryption Standard (AES), each of which has keys of 128 bits or more and is therefore more secure.
- **Protocols**: protocols are enabled and disabled through SSL_CTX_set_options. We recommend disabling SSLv2/v3 and enabling TLS.

#### Pound

This Pound example enables AES-NI acceleration, which helps performance on systems whose processors support it. The default configuration file is /etc/pound/pound.cfg on Ubuntu, RHEL, and CentOS, and /etc/pound.cfg on openSUSE and SUSE Linux Enterprise.

```
## see pound(8) for details
daemon 1
######################################################################
## global options:
User "swift"
Group "swift"
#RootJail "/chroot/pound"
## Logging: (goes to syslog by default)
## 0 no logging
## 1 normal
## 2 extended
## 3 Apache-style (common log format)
LogLevel 0
## turn on dynamic scaling (off by default)
# Dyn Scale 1
## check backend every X secs:
Alive 30
## client timeout
#Client 10
## allow 10 second proxy connect time
ConnTO 10
## use hardware-acceleration card supported by openssl(1):
SSLEngine "aesni"
# poundctl control socket
Control "/var/run/pound/poundctl.socket"
######################################################################
## listen, redirect and ... to:
## redirect all swift requests on port 443 to local swift proxy
ListenHTTPS
    Address 0.0.0.0
    Port 443
    Cert "/etc/pound/cert.pem"
    ## Certs to accept from clients
    ## CAlist "CA_file"
    ## Certs to use for client verification
    ## VerifyList "Verify_file"
    ## Request client cert - don't verify
    ## Ciphers "AES256-SHA"
    ## allow PUT and DELETE also (by default only GET, POST and HEAD)?:
    NoHTTPS11 0
    ## allow PUT and DELETE also (by default only GET, POST and HEAD)?:
    xHTTP 1
    Service
        BackEnd
            Address 127.0.0.1
            Port 80
        End
    End
End
```

#### Stud

The ciphers line can be tweaked based on your needs, but this is a reasonable starting point. The default configuration file lives in the /etc/stud directory; it is not, however, provided by default. (The pem-file value is a placeholder and must point to your certificate.)

```
# SSL x509 certificate file.
pem-file = ""
# SSL protocol.
tls = on
ssl = off
# List of allowed SSL ciphers.
# OpenSSL's high-strength ciphers which require authentication
# NOTE: forbids clear text, use of RC4 or MD5 or LOW and MEDIUM strength ciphers
ciphers = "HIGH:!RC4:!MD5:!aNULL:!eNULL:!EXP:!LOW:!MEDIUM"
# Enforce server cipher list order
prefer-server-ciphers = on
# Number of worker processes
workers = 4
# Listen backlog size
backlog = 1000
# TCP socket keepalive interval in seconds
keepalive = 3600
# Chroot directory
chroot = ""
# Set uid after binding a socket
user = "www-data"
# Set gid after binding a socket
group = "www-data"
# Quiet execution, report only error messages
quiet = off
# Use syslog for logging
syslog = on
# Syslog facility to use
syslog-facility = "daemon"
# Run as daemon
daemon = off
# Report client address using SENDPROXY protocol for haproxy
# Disabling this until we upgrade to HAProxy 1.5
write-proxy = off
```
#### Nginx

This Nginx example requires TLS v1.1 or v1.2 for maximum security. The ssl_ciphers line can be tweaked based on your needs, but this is a reasonable starting point. The default configuration file is /etc/nginx/nginx.conf. (The listen port and the certificate paths were placeholders in the original and must be filled in.)

```
server {
    listen : ssl;
    ssl_certificate ;
    ssl_certificate_key ;
    ssl_protocols TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!RC4:!MD5:!aNULL:!eNULL:!EXP:!LOW:!MEDIUM;
    ssl_session_tickets off;
    server_name _;
    keepalive_timeout 5;
    location / {
    }
}
```

#### Apache

The default configuration file is /etc/apache2/apache2.conf on Ubuntu, /etc/httpd/conf/httpd.conf on RHEL and CentOS, and /etc/apache2/httpd.conf on openSUSE and SUSE Linux Enterprise. (Values in angle brackets are placeholders to be replaced with site-specific settings.)

```
<VirtualHost <ip address>:80>
  ServerName <site FQDN>
  RedirectPermanent / https://<site FQDN>/
</VirtualHost>
<VirtualHost <ip address>:443>
  ServerName <site FQDN>
  SSLEngine On
  SSLProtocol +TLSv1 +TLSv1.1 +TLSv1.2
  SSLCipherSuite HIGH:!RC4:!MD5:!aNULL:!eNULL:!EXP:!LOW:!MEDIUM
  SSLCertificateFile    /path/<site FQDN>.crt
  SSLCACertificateFile  /path/<site FQDN>.crt
  SSLCertificateKeyFile /path/<site FQDN>.key
  WSGIScriptAlias / <WSGI script location>
  WSGIDaemonProcess horizon user=<user> group=<group> processes=3 threads=10
  Alias /static <static files location>
  <Directory <WSGI dir>>
    # For http server 2.2 and earlier:
    Order allow,deny
    Allow from all

    # Or, in Apache http server 2.4 and later:
    # Require all granted
  </Directory>
</VirtualHost>
```

A Compute API SSL endpoint in Apache, which must be paired with a short WSGI script:

```
<VirtualHost <ip address>:8447>
  ServerName <site FQDN>
  SSLEngine On
  SSLProtocol +TLSv1 +TLSv1.1 +TLSv1.2
  SSLCipherSuite HIGH:!RC4:!MD5:!aNULL:!eNULL:!EXP:!LOW:!MEDIUM
  SSLCertificateFile    /path/<site FQDN>.crt
  SSLCACertificateFile  /path/<site FQDN>.crt
  SSLCertificateKeyFile /path/<site FQDN>.key
  SSLSessionTickets Off
  WSGIScriptAlias / <WSGI script location>
  WSGIDaemonProcess osapi user=<user> group=<group> processes=3 threads=10
  <Directory <WSGI dir>>
    # For http server 2.2 and earlier:
    Order allow,deny
    Allow from all

    # Or, in Apache http server 2.4 and later:
    # Require all granted
  </Directory>
</VirtualHost>
```
### HTTP Strict Transport Security

We recommend that all production deployments use HTTP Strict Transport Security (HSTS). This header prevents browsers from making insecure connections once they have established a single secure one, and it is especially important if your HTTP services are deployed on a public or untrusted domain. To enable HSTS, configure your web server to send a header like the following with all requests:

```
Strict-Transport-Security: max-age=31536000; includeSubDomains
```

Start with a short timeout of one day during testing, and raise it to one year once testing has shown that you have not introduced problems for users. Note that once this header is set to a large timeout it is (by design) very difficult to disable.
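As a minimal sketch, the header can be added in the Nginx server block shown earlier; the one-day max-age matches the testing advice above and can be raised later (the value and placement are illustrative):

```
# Inside the HTTPS server {} block
add_header Strict-Transport-Security "max-age=86400; includeSubDomains" always;
```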
### Perfect forward secrecy

Configuring TLS servers for perfect forward secrecy requires careful planning around key size, session IDs, and session tickets. For multi-server deployments, shared state is also an important consideration. The Apache and Nginx examples above disable the session ticket options to help mitigate some of these concerns. Real-world deployments may wish to enable this feature for better performance; this can be done securely, but it requires special consideration around key management and such configurations are beyond the scope of this guide. We suggest reading ImperialViolet's "How to botch TLS forward secrecy" as a starting point for understanding the problem space.

### Secure reference architectures

We recommend using SSL/TLS on both the public and management networks for TLS proxies and HTTP services. However, if deploying SSL/TLS everywhere is genuinely too difficult, we recommend evaluating your OpenStack SSL/TLS needs and following one of the architectures discussed here.

The first thing to do when evaluating your OpenStack SSL/TLS needs is to identify the threats. You can divide these into external and internal attacker categories, but the line tends to blur because certain components of OpenStack operate on both the public and management networks.

For publicly facing services the threats are straightforward. Users authenticate against Horizon and Keystone with their username and password, and access the API endpoints of other services using their Keystone tokens. If this network traffic is unencrypted, an attacker can intercept passwords and tokens with a man-in-the-middle attack and then use those valid credentials to perform malicious operations. All real deployments should use SSL/TLS to protect publicly facing services.

For services deployed on management networks, the threats are less clear-cut because the security domains are bridged by the network itself. An administrator with access to the management network could always decide to act maliciously, and SSL/TLS will not help if the attacker is allowed to access the private keys. Of course, not everyone on the management network is allowed to access the private keys, so there is still value in using SSL/TLS to protect against internal attackers. Even if everyone allowed onto your management network is fully trusted, there remains the threat of an unauthorized user gaining access to your internal network by exploiting a misconfiguration or software vulnerability. Remember that users run their own code on instances on the OpenStack Compute nodes, which are deployed on the management network; if a vulnerability lets them break out of the hypervisor, they will have access to your management network. Using SSL/TLS on the management network minimizes the damage an attacker can cause.

### SSL/TLS proxy in front

It is generally accepted that sensitive data should be encrypted as early as possible and decrypted as late as possible. Despite this best practice, it is common to put an SSL/TLS proxy in front of the OpenStack services and use clear communication behind it. The concerns usually cited for this pattern, all of them about terminating TLS natively in the services, are:

- Native SSL/TLS in OpenStack services does not perform or scale as well as SSL proxies (particularly for Python implementations such as Eventlet).
- Native SSL/TLS in OpenStack services has not been as carefully scrutinized or audited as more mature solutions.
- Native SSL/TLS configuration is difficult (not well documented, tested, or consistent across services).
- Privilege separation (OpenStack service processes should not have direct access to the private keys used for SSL/TLS).
- Traffic inspection needs for load balancing.

All of the above are valid concerns, but none of them prevent SSL/TLS from being used on the management network. Let us consider the next deployment model.

### SSL/TLS on the same physical hosts as the API endpoints

This is very similar to the SSL/TLS proxy in front, except that the SSL/TLS proxy runs on the same physical system as the API endpoint. The API endpoint is configured to listen only on the local network interface, and all remote communication with it goes through the SSL/TLS proxy. With this deployment model we address many of the concerns above: a proven, well-performing SSL implementation is used; all services use the same SSL proxy software, so the SSL configuration of the API endpoints is consistent; and the OpenStack service processes have no direct access to the private keys used for SSL/TLS, because the SSL proxies run as a different user with access restricted by permissions (plus additional mandatory access controls such as SELinux). Ideally the API endpoints would listen on a Unix socket so that access could also be restricted with permissions and mandatory access controls; unfortunately, based on our testing this does not currently work with Eventlet, and it remains a good future development goal.

### SSL/TLS over load balancers

What about deployments that need high availability or load balancing and must inspect traffic? The previous deployment model (SSL/TLS on the same physical hosts as the API endpoints) does not allow deep packet inspection, since the traffic is encrypted. If traffic only needs to be inspected for basic routing purposes, the load balancer may not need access to the unencrypted traffic at all. HAProxy can extract the SSL/TLS session ID during the handshake and use it to achieve session affinity (see the HAProxy documentation for session ID configuration details), and it can also use the TLS Server Name Indication (SNI) extension to decide where traffic should be routed (again, see the HAProxy documentation for SNI configuration details). These features likely cover the most common load balancer needs, and in that case HAProxy can pass the HTTPS traffic straight through to the API endpoint systems.
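A minimal sketch of SNI-based routing in HAProxy; the hostnames, addresses, and backend names are placeholders, and a real deployment would add health checking policy and the session-affinity configuration referenced above:

```
frontend public_tls
    bind 0.0.0.0:443
    mode tcp
    # Wait for the TLS ClientHello so the SNI value can be inspected
    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_hello_type 1 }
    use_backend identity_api if { req_ssl_sni -i identity.example.com }
    default_backend dashboard

backend identity_api
    mode tcp
    server keystone01 192.0.2.10:443 check

backend dashboard
    mode tcp
    server horizon01 192.0.2.20:443 check
```

Because the traffic is passed through in TCP mode, the private keys stay on the API endpoint systems and the load balancer never sees plaintext.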
### Cryptographic separation of external and internal environments

What if you want cryptographic separation of your external and internal environments? A public cloud provider will likely want its public-facing services (or proxies) to use certificates issued by a CA that chains up to a trusted root CA distributed in popular SSL/TLS web browser software. For internal services, it may instead want to use its own PKI to issue SSL/TLS certificates. This cryptographic separation can be accomplished by terminating SSL at the network boundary and then re-encrypting with internally issued certificates. The traffic is unencrypted for a brief period on the public-facing SSL/TLS proxy, but it is never transmitted over the network in the clear. The same re-encryption approach can be used if deep packet inspection really is needed on a load balancer.

As with most things, there are trade-offs. The main trade-off is between security and performance: encryption has a cost, but so does being hacked. Security and performance requirements differ for every deployment, so how SSL/TLS is used is ultimately an individual decision.

### API endpoints

The process of engaging an OpenStack cloud begins by querying an API endpoint. While public and private endpoints face different challenges, both are high-value assets that can pose significant risk if compromised.

This chapter recommends security hardening for both public- and private-facing API endpoints:

- API endpoint configuration recommendations
- Internal API communications
- Paste and middleware
- API endpoint process isolation and policy
- API endpoint rate-limiting

### API endpoint configuration recommendations

### Internal API communications

OpenStack provides both public-facing and private API endpoints. By default, OpenStack components use the publicly defined endpoints. We recommend configuring these components to use the API endpoint within the appropriate security domain.

Services select their respective API endpoints based on the OpenStack service catalog, and they may not obey the listed public or internal API endpoint values. This can lead to internal management traffic being routed to external API endpoints.

### Configure internal URLs in the Identity service catalog

The Identity service catalog should be aware of your internal URLs. While this feature is not used by default, it can be leveraged through configuration. Additionally, it should be forward-compatible with expected changes once this behaviour becomes the default.

To register an internal URL for an endpoint:

```
$ openstack endpoint create identity \
  --region RegionOne internal \
  https://MANAGEMENT_IP:5000/v3
```

Replace MANAGEMENT_IP with the management IP address of your controller node.

### Configure applications for internal URLs

You can force some services to use specific API endpoints. We therefore recommend that every OpenStack service communicating with the API of another service be explicitly configured to access the correct internal API endpoint.

Each project may present an inconsistent way of defining target API endpoints; future releases of OpenStack seek to resolve these inconsistencies through consistent use of the Identity service catalog.

Configuration example #1: nova

```
cinder_catalog_info='volume:cinder:internalURL'
glance_protocol='https'
neutron_url='https://neutron-host:9696'
neutron_admin_auth_url='https://neutron-host:9696'
s3_host='s3-host'
s3_use_ssl=True
```

Configuration example #2: cinder

```
glance_host = 'https://glance-server'
```
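To confirm which endpoints services will actually resolve from the catalog, the registered interfaces can be listed with the standard OpenStack client (the service name below is just an example):

```
# Show only the internal endpoints registered in the service catalog
$ openstack endpoint list --interface internal

# Or inspect a single service's endpoints, e.g. the Identity service
$ openstack endpoint list --service identity
```

Reviewing this output after registration helps catch services that would otherwise fall back to the public endpoint for internal traffic.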
### Paste and middleware

Most API endpoints and other HTTP services in OpenStack use the Python Paste Deploy library. From a security perspective, this library allows the request filter pipeline to be manipulated through the application's configuration. Each element in this chain is referred to as middleware. Changing the order of filters in the pipeline, or adding additional middleware, may have unpredictable security consequences.

Implementers commonly add middleware to extend OpenStack's base functionality. We recommend that implementers carefully consider the risk of adding non-standard software components to their HTTP request pipeline.

For more information about Paste deployment, see the Python Paste Deployment documentation.

### API endpoint process isolation and policy

You should isolate API endpoint processes, especially those that reside within the public security domain, as much as possible. Where deployments allow, API endpoints should be deployed on separate hosts for increased isolation.

### Namespaces

Many operating systems now provide compartmentalization support. Linux supports namespaces to assign processes into independent domains. Other parts of this guide cover system compartmentalization in more detail.

### Network policy

Because API endpoints typically bridge multiple security domains, you must pay particular attention to the compartmentalization of the API processes. See the section on bridging security domains for additional information.

With careful modeling, you can use network ACLs and IDS technologies to enforce explicit point-to-point communication between network services. As a critical cross-domain service, this type of explicit enforcement works well for OpenStack's message queue service.

To enforce policies, you can configure services, host-based firewalls (such as iptables), local policy (SELinux or AppArmor), and, optionally, global network policy.

### Mandatory access controls

You should isolate API endpoint processes from each other and from other processes on a machine. The configuration for those processes should be restricted not only by discretionary access controls but also by mandatory access controls. The goal of these enhanced controls is to aid in the containment of API endpoint security breaches and their escalation: with mandatory access controls, such breaches severely limit access to resources and provide earlier alerting on such events.

### API endpoint rate-limiting

Rate limiting is a means of controlling the frequency of events received by a network-based application. Without robust rate limiting, an application can be susceptible to various denial-of-service attacks. This is especially true for APIs, which by their nature are designed to accept a high frequency of similar request types and operations.

Within OpenStack, it is recommended that all endpoints, and especially public ones, be given an extra layer of protection by means of either a rate-limiting proxy or a web application firewall.

It is critical that operators carefully plan and consider the individual performance needs of the users and services within their OpenStack cloud when configuring and implementing any rate-limiting functionality.

Common solutions for providing rate limiting are Nginx, HAProxy, Repose, or Apache modules such as mod_ratelimit, mod_qos, or mod_security.
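As a minimal sketch of the proxy-level approach, Nginx's built-in request limiting could be applied in front of a public endpoint; the zone name, rate, and burst values are illustrative only and must be sized for your own APIs:

```
# /etc/nginx/nginx.conf (fragment)
http {
    # Track clients by address; allow on average 10 requests/second each.
    limit_req_zone $binary_remote_addr zone=api_limit:10m rate=10r/s;

    server {
        listen 443 ssl;
        # ssl_certificate / ssl_certificate_key directives as in the Nginx example above

        location / {
            # Permit short bursts of 20 requests, then reject with 503.
            limit_req zone=api_limit burst=20 nodelay;
            proxy_pass http://127.0.0.1:5000;
        }
    }
}
```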
### Identity

The Keystone identity service provides identity, token, catalog, and policy services for use specifically by services in the OpenStack family. Identity is organized as a group of internal services exposed on one or more endpoints, many of which are used in a combined fashion by the front end. For example, an authenticate call validates user and project credentials with the identity service and, on success, creates and returns a token with the token service. Further information can be found in the Keystone developer documentation.

Topics covered in this chapter:

- Authentication
- Invalid login attempts
- Multi-factor authentication
- Authentication methods: internally implemented and external
- Authorization
- Establish formal access control policies
- Service authorization
- Administrative users
- End users
- Policies
- Tokens: Fernet and JWT
- Domains
- Federated keystone, and why to use federated identity

Checklist:

- Check-Identity-01: Is user/group ownership of the configuration files set to keystone?
- Check-Identity-02: Are strict permissions set for the Identity configuration files?
- Check-Identity-03: Is TLS enabled for Identity?
- Check-Identity-04: (obsolete)
- Check-Identity-05: Is max_request_body_size set to the default (114688)?
- Check-Identity-06: Is the admin token disabled in /etc/keystone/keystone.conf?
- Check-Identity-07: Is insecure_debug set to false in /etc/keystone/keystone.conf?
- Check-Identity-08: Are Fernet tokens used in /etc/keystone/keystone.conf?

### Authentication

Authentication is an integral part of any real-world OpenStack deployment, so careful thought should be given to this aspect of system design. A complete treatment of the topic is beyond the scope of this guide, but the following sections introduce some key points.

Fundamentally, authentication is the process of confirming identity: that a user actually is who they claim to be. A familiar example is providing a username and password when logging into a system.

The OpenStack identity service (Keystone) supports multiple methods of authentication, including username and password, LDAP, and external authentication methods. Upon successful authentication, the identity service provides the user with an authorization token used for subsequent service requests.

Transport Layer Security (TLS) provides authentication between services and persons using X.509 certificates. Although the default mode for TLS is server-side authentication only, certificates may also be used for client authentication.

### Invalid login attempts

Starting with the Newton release, the identity service can limit access to accounts after repeated unsuccessful login attempts. A pattern of repeated failed login attempts is generally an indicator of a brute-force attack (refer to the section on attack types). This type of attack is more prevalent in public cloud deployments.
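Because this ships with keystone itself from Newton onwards, a minimal sketch of the relevant keystone.conf options is shown below; the thresholds are examples only and should follow your own security policy:

```
# /etc/keystone/keystone.conf
[security_compliance]
# Lock an account after 6 consecutive failed authentication attempts...
lockout_failure_attempts = 6
# ...and keep it locked for 30 minutes (value in seconds).
lockout_duration = 1800
```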
For older deployments that need this capability, prevention is possible with an external authentication system that locks an account after a configured number of failed login attempts; the account can then only be unlocked through further side-channel intervention.

If prevention is not an option, detection can be used to mitigate damage. Detection involves frequent review of access control logs to identify unauthorized attempts to access accounts. Possible remediations include reviewing the strength of the user's password, or blocking the network source of the attack through firewall rules. Firewall rules on the Keystone server that restrict the number of connections can reduce the effectiveness of an attack and thus dissuade the attacker.

In addition, it is useful to examine account activity for unusual login times and suspicious actions, and to take corrective measures such as disabling the account. This approach is commonly used by credit card providers for fraud detection and alerting.

### Multi-factor authentication

Employ multi-factor authentication for network access to privileged user accounts. The identity service supports external authentication services through the Apache web server that can provide this functionality. Servers may also enforce client-side authentication using certificates.

This recommendation helps counter brute force, social engineering, and both spear and mass phishing attacks that might compromise administrator passwords.

### Authentication methods

### Internally implemented authentication methods

The identity service can store user credentials in an SQL database or use an LDAP-compliant directory server. The Identity database may be kept separate from the databases used by other OpenStack services to reduce the risk of a compromise of the stored credentials.

When you authenticate with a username and password, Identity does not enforce policies on password strength, expiration, or failed authentication attempts as recommended by NIST Special Publication 800-118 (draft). Organizations that want stronger password policies should consider identity service extensions or external authentication services.

LDAP simplifies integration of Identity authentication with an organization's existing directory services and user account management processes.

Authentication and authorization policy in OpenStack may be delegated to other services. A typical use case is an organization that wants to deploy a private cloud and already has a database of employees and users in an LDAP system. Using this as the authentication authority, requests to the Identity service are delegated to the LDAP system, which then authorizes or denies based on its own policies. Upon successful authentication, the identity service generates a token that is used for access to authorized services.

Note that if the LDAP system defines attributes for the user such as admin, finance, HR, and so on, these must be mapped into roles and groups within Identity for use by the various OpenStack services. The /etc/keystone/keystone.conf file maps LDAP attributes to Identity attributes.

The identity service must not be allowed to write to LDAP services used for authentication outside of the OpenStack deployment, as this would allow a sufficiently privileged keystone user to make changes to the LDAP directory. That would allow privilege escalation within the wider organization or facilitate unauthorized access to other information and resources. In such a deployment, user provisioning is out of the scope of the OpenStack deployment.

Note: there is an OpenStack Security Note (OSSN) regarding keystone.conf permissions, and an OpenStack Security Note (OSSN) regarding a potential DoS attack.
### External authentication methods

Organizations may wish to implement external authentication for compatibility with existing authentication services or to enforce stronger authentication policy requirements. Although passwords are the most common form of authentication, they can be compromised through numerous methods, including keystroke logging and password compromise. External authentication services can provide alternative forms of authentication that minimize the risk from weak passwords. These include:

- Password policy enforcement: requires user passwords to conform to minimum standards for length, diversity of characters, expiration, or failed login attempts. In an external authentication scenario this would be the password policy on the original identity store.
- Multi-factor authentication: the authentication service requires the user to provide information based on something they have, such as a one-time password token or X.509 certificate, and something they know, such as a password.
- Kerberos: a network protocol that uses "tickets" for mutual authentication, protecting communication between client and server. A Kerberos ticket-granting ticket can securely supply tickets for particular services.

### Authorization

The identity service supports the notions of groups and roles. Users belong to groups, and a group has a list of roles. OpenStack services reference the roles of the user attempting to access the service. The OpenStack policy enforcer middleware considers the policy rule associated with each resource, together with the user's group/role associations, to determine whether access to the requested resource is allowed.

The policy enforcement middleware enables fine-grained access control of OpenStack resources. The behaviour of policies is discussed in depth in the policies section.

### Establish formal access control policies

Before configuring roles, groups, and users, document the access control policies required for your OpenStack installation. These policies should be consistent with any regulatory or legal requirements of the organization, and future modifications to the access control configuration should remain consistent with the formal policies. The policies should include the conditions and processes for creating, deleting, disabling, and enabling accounts, and for assigning privileges to accounts. Review the policies periodically and ensure that the configuration complies with the approved policies.

### Service authorization

Cloud administrators must define a user with the admin role for each service, as described in the OpenStack Administrator Guide. This service account provides the service with the authorization to authenticate users.

The Compute and Object Storage services can be configured to use the Identity service to store authentication information. Other options for storing authentication information include the "tempAuth" file, but this should not be deployed in a production environment because the password is stored in plain text.

The identity service supports client authentication for TLS, which may be enabled. In addition to the username and password, TLS client authentication provides an extra authentication factor that gives greater assurance of the user's identity, and it reduces the risk of unauthorized access when usernames and passwords may have been compromised. However, issuing certificates to users creates additional administrative overhead and cost, which may not be feasible in every deployment.

Note: we recommend using client authentication with TLS for authenticating services to the identity service.

The cloud administrator should protect sensitive configuration files from unauthorized modification. This can be achieved with a mandatory access control framework such as SELinux, covering /etc/keystone/keystone.conf as well as X.509 certificates.
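Tying this back to Check-Identity-01 and Check-Identity-02 from the checklist above, the discretionary-access-control side can be sketched with the usual packaging defaults (verify the user, group, and paths for your distribution):

```
# Ensure the Identity configuration is owned by the keystone service account
chown -R keystone:keystone /etc/keystone/
# Remove access for other users; 640 keeps group read for the service itself
chmod 640 /etc/keystone/keystone.conf
```

Mandatory access control policies (SELinux or AppArmor) then layer on top of these file permissions.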
Client authentication with TLS requires certificates to be issued to services. These certificates can be signed by an external or an internal certificate authority. OpenStack services check the validity of certificate signatures against trusted CAs by default, and the connection fails if the signature is invalid or the CA is untrusted. Cloud deployers may use self-signed certificates; in that case the validity check must be disabled or the certificate must be marked as trusted. To disable validation of self-signed certificates, set insecure=True in the [filter:authtoken] section of the /etc/nova/api.paste.ini file. Note that this setting also disables certificate validation for other components.

### Administrative users

We recommend that administrative users authenticate using the Identity service together with an external authentication service that supports two-factor authentication, such as a certificate. This reduces the risk from compromised passwords, and it complies with the NIST 800-53 IA-2(1) guidance of using multi-factor authentication for network access to privileged accounts.

### End users

The identity service can directly provide end-user authentication, or it can be configured to use external authentication methods to conform to an organization's security policies and requirements.

### Policies

Each OpenStack service defines the access policies for its resources in an associated policy file. A resource could be, for example, API access, the ability to attach a volume, or the ability to fire up instances. The policy rules are specified in JSON format, in a file called policy.json. The syntax and format of this file are discussed in the Configuration Reference.

Cloud administrators can modify or update these policies to control access to the various resources. Ensure that any change to the access control policies does not unintentionally weaken the security of any resource. Also note that changes to the policy.json file take effect immediately and do not require the service to be restarted.

The following example shows how a service restricts access to create, update, and delete resources to only those users with the role cloud_admin, which is defined as the combination of role = admin and domain_id = admin_domain_id, while get and list resources are available to users with the cloud_admin or admin role:
\"cloud_admin\": \"rule:admin_required and domain_id:admin_domain_id\", \"service_role\": \"role:service\", \"service_or_admin\": \"rule:admin_required or rule:service_role\", \"owner\" : \"user_id:%(user_id)s or user_id:%(target.token.user_id)s\", \"admin_or_owner\": \"(rule:admin_required and domain_id:%(target.token.user.domain.id)s) or rule:owner\", \"admin_or_cloud_admin\": \"rule:admin_required or rule:cloud_admin\", \"admin_and_matching_domain_id\": \"rule:admin_required and domain_id:%(domain_id)s\", \"service_admin_or_owner\": \"rule:service_or_admin or rule:owner\", \"default\": \"rule:admin_required\", \"identity:get_service\": \"rule:admin_or_cloud_admin\", \"identity:list_services\": \"rule:admin_or_cloud_admin\", \"identity:create_service\": \"rule:cloud_admin\", \"identity:update_service\": \"rule:cloud_admin\", \"identity:delete_service\": \"rule:cloud_admin\", \"identity:get_endpoint\": \"rule:admin_or_cloud_admin\", \"identity:list_endpoints\": \"rule:admin_or_cloud_admin\", \"identity:create_endpoint\": \"rule:cloud_admin\", \"identity:update_endpoint\": \"rule:cloud_admin\", \"identity:delete_endpoint\": \"rule:cloud_admin\", }","title":"\u653f\u7b56"},{"location":"security/security-guide/#_119","text":"\u7528\u6237\u901a\u8fc7\u8eab\u4efd\u9a8c\u8bc1\u540e\uff0c\u5c06\u751f\u6210\u4e00\u4e2a\u4ee4\u724c\uff0c\u7528\u4e8e\u6388\u6743\u548c\u8bbf\u95ee OpenStack \u73af\u5883\u3002\u4ee3\u5e01\u53ef\u4ee5\u5177\u6709\u53ef\u53d8\u7684\u751f\u547d\u5468\u671f;\u4f46\u662f\uff0cexpiry \u7684\u9ed8\u8ba4\u503c\u4e3a 1 \u5c0f\u65f6\u3002\u5efa\u8bae\u7684\u8fc7\u671f\u503c\u5e94\u8bbe\u7f6e\u4e3a\u8f83\u4f4e\u7684\u503c\uff0c\u4ee5\u4fbf\u5185\u90e8\u670d\u52a1\u6709\u8db3\u591f\u7684\u65f6\u95f4\u5b8c\u6210\u4efb\u52a1\u3002\u5982\u679c\u4ee4\u724c\u5728\u4efb\u52a1\u5b8c\u6210\u4e4b\u524d\u8fc7\u671f\uff0c\u4e91\u53ef\u80fd\u4f1a\u53d8\u5f97\u65e0\u54cd\u5e94\u6216\u505c\u6b62\u63d0\u4f9b\u670d\u52a1\u3002\u4f8b\u5982\uff0c\u8ba1\u7b97\u670d\u52a1\u5c06\u78c1\u76d8\u6620\u50cf\u4f20\u8f93\u5230\u865a\u62df\u673a\u76d1\u63a7\u7a0b\u5e8f\u4ee5\u8fdb\u884c\u672c\u5730\u7f13\u5b58\u6240\u9700\u7684\u65f6\u95f4\u3002\u5141\u8bb8\u5728\u4f7f\u7528\u6709\u6548\u7684\u670d\u52a1\u4ee4\u724c\u65f6\u63d0\u53d6\u8fc7\u671f\u7684\u4ee4\u724c\u3002 \u4ee4\u724c\u901a\u5e38\u5728 Identity \u670d\u52a1\u54cd\u5e94\u7684\u8f83\u5927\u4e0a\u4e0b\u6587\u7684\u7ed3\u6784\u4e2d\u4f20\u9012\u3002\u8fd9\u4e9b\u54cd\u5e94\u8fd8\u63d0\u4f9b\u4e86\u5404\u79cd OpenStack \u670d\u52a1\u7684\u76ee\u5f55\u3002\u5217\u51fa\u4e86\u6bcf\u4e2a\u670d\u52a1\u7684\u540d\u79f0\u3001\u5185\u90e8\u8bbf\u95ee\u3001\u7ba1\u7406\u5458\u8bbf\u95ee\u548c\u516c\u5171\u8bbf\u95ee\u7684\u8bbf\u95ee\u7ec8\u7ed3\u70b9\u3002 \u53ef\u4ee5\u4f7f\u7528\u6807\u8bc6 API \u540a\u9500\u4ee4\u724c\u3002 \u5728 Stein \u7248\u672c\u4e2d\uff0c\u6709\u4e24\u79cd\u53d7\u652f\u6301\u7684\u4ee4\u724c\u7c7b\u578b\uff1afernet \u548c JWT\u3002 fernet \u548c JWT \u4ee4\u724c\u90fd\u4e0d\u9700\u8981\u6301\u4e45\u6027\u3002Keystone \u4ee4\u724c\u6570\u636e\u5e93\u4e0d\u518d\u56e0\u8eab\u4efd\u9a8c\u8bc1\u7684\u526f\u4f5c\u7528\u800c\u906d\u53d7\u81a8\u80c0\u3002\u8fc7\u671f\u4ee4\u724c\u7684\u4fee\u526a\u4f1a\u81ea\u52a8\u8fdb\u884c\u3002\u4e5f\u4e0d\u518d\u9700\u8981\u8de8\u591a\u4e2a\u8282\u70b9\u8fdb\u884c\u590d\u5236\u3002\u53ea\u8981\u6bcf\u4e2a keystone 
#### Fernet tokens

Fernet tokens are the supported (and default) token provider in Stein. Fernet is a secure messaging format designed specifically for API tokens. Fernet tokens are lightweight (in the range of 180 to 240 bytes) and reduce the operational overhead of running a cloud. Authentication and authorization metadata is neatly bundled into a message-packed payload, which is then encrypted and signed as a fernet token.

#### JWT tokens

JSON Web Signature (JWS) tokens were introduced in the Stein release. Compared with fernet, JWS offers a potential benefit to operators by limiting the number of hosts that need to share symmetric encryption keys. This helps contain a malicious actor who may already have gained a foothold in the deployment from spreading to other nodes.

For more detail on the differences between these token providers, see https://docs.openstack.org/keystone/stein/admin/tokens-overview.html#token-providers
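Fernet tokens rely on a key repository that must be initialized and rotated on every keystone node. A minimal sketch using the standard `keystone-manage` commands; the user/group names and the default repository location are the usual defaults and may differ in your deployment:

```console
# Initialize the fernet key repository (run once per deployment)
$ keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone

# Rotate keys periodically (for example from cron) and distribute the repository
# (/etc/keystone/fernet-keys by default) to all keystone nodes
$ keystone-manage fernet_rotate --keystone-user keystone --keystone-group keystone
```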
### Domains

Domains are high-level containers for projects, users, and groups, and as such can be used to centrally manage all keystone-based identity components. With the introduction of account domains, servers, storage, and other resources can be logically grouped into multiple projects (previously called tenants), which can themselves be grouped under a master account-like container. In addition, multiple users can be managed within an account domain and assigned different roles per project.

The Identity V3 API supports multiple domains. Users of different domains may be represented in different authentication back ends and may even have different attributes, which must be mapped to a set of roles and privileges that the policy definitions use to grant access to the various service resources.

Where a rule only needs to distinguish between admin users and users belonging to the tenant, the mapping can be trivial. In other scenarios the cloud administrator may need to approve the mapping routines per tenant.

Domain-specific authentication drivers allow the Identity service to be configured for multiple domains using domain-specific configuration files. Enabling the drivers and setting the location of the domain-specific configuration files happens in the `[identity]` section of the keystone.conf file:

```ini
[identity]
domain_specific_drivers_enabled = True
domain_config_dir = /etc/keystone/domains
```

Any domain without a domain-specific configuration file uses the options in the primary keystone.conf file.
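For illustration, a domain-specific file in that directory is conventionally named after the domain it configures, for example `keystone.<domain_name>.conf`. The sketch below assumes a hypothetical `corp` domain backed by LDAP, with placeholder connection values:

```ini
# /etc/keystone/domains/keystone.corp.conf -- illustrative only
[identity]
driver = ldap

[ldap]
url = ldap://ldap.example.com            # placeholder LDAP server
user_tree_dn = ou=Users,dc=example,dc=com
user_objectclass = inetOrgPerson
```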
### Federated identity

Important definitions:

- Service Provider (SP): a system entity that provides services to principals or other system entities. In this case, OpenStack Identity is the Service Provider.
- Identity Provider (IdP): a directory service such as LDAP, RADIUS, or Active Directory, which allows users to log in with a user name and password, and is a typical source of authentication tokens (for example passwords).

Federation is a mechanism for establishing trust between an IdP and an SP, in this case between Identity Providers and the services provided by an OpenStack cloud. It provides a secure way to use existing credentials to access cloud resources such as servers, volumes, and databases across multiple endpoints. The credentials are maintained by the user's IdP.

### Why use federated identity?

Two fundamental reasons:

- Reducing complexity makes deployments easier to secure, and it saves time for you and your users.
- Accounts are managed centrally, avoiding duplicated work inside the OpenStack infrastructure.
- The burden on users is reduced: single sign-on allows a single authentication method to be used to access many different services and environments.
- Responsibility for the password recovery process is shifted to the IdP.

Further justification and details can be found in the Keystone documentation on federation.

### Checklist

#### Check-Identity-01: Is user/group ownership of the configuration files set to keystone?

Configuration files contain critical parameters and information required for the smooth functioning of the component. If an unprivileged user, either intentionally or accidentally, modifies or deletes any of the parameters or the file itself, severe availability issues result, leading to a denial of service for the other end users. User and group ownership of such critical configuration files must therefore be set to the component owner. In addition, the containing directory should have the same ownership to ensure that newly created files are owned correctly.

Run the following commands:

```console
$ stat -L -c "%U %G" /etc/keystone/keystone.conf | egrep "keystone keystone"
$ stat -L -c "%U %G" /etc/keystone/keystone-paste.ini | egrep "keystone keystone"
$ stat -L -c "%U %G" /etc/keystone/policy.json | egrep "keystone keystone"
$ stat -L -c "%U %G" /etc/keystone/logging.conf | egrep "keystone keystone"
$ stat -L -c "%U %G" /etc/keystone/ssl/certs/signing_cert.pem | egrep "keystone keystone"
$ stat -L -c "%U %G" /etc/keystone/ssl/private/signing_key.pem | egrep "keystone keystone"
$ stat -L -c "%U %G" /etc/keystone/ssl/certs/ca.pem | egrep "keystone keystone"
$ stat -L -c "%U %G" /etc/keystone | egrep "keystone keystone"
```

Pass: if user and group ownership of all these configuration files is set to keystone. The commands above should show output of `keystone keystone`.

Fail: if the commands above return no output, since user or group ownership may have been set to a user other than keystone.

Recommended in: internally implemented authentication methods.
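If the ownership check above fails, it can typically be remediated with standard shell commands; a minimal sketch assuming the stock keystone user and group and the file layout shown above:

```console
# Restore expected ownership on the keystone configuration directory and files
$ sudo chown -R keystone:keystone /etc/keystone
```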
#### Check-Identity-02: Are strict permissions set for Identity configuration files?

Similar to the previous check, we recommend setting strict access permissions for such configuration files.

Run the following commands:

```console
$ stat -L -c "%a" /etc/keystone/keystone.conf
$ stat -L -c "%a" /etc/keystone/keystone-paste.ini
$ stat -L -c "%a" /etc/keystone/policy.json
$ stat -L -c "%a" /etc/keystone/logging.conf
$ stat -L -c "%a" /etc/keystone/ssl/certs/signing_cert.pem
$ stat -L -c "%a" /etc/keystone/ssl/private/signing_key.pem
$ stat -L -c "%a" /etc/keystone/ssl/certs/ca.pem
$ stat -L -c "%a" /etc/keystone
```

A broader restriction is also possible: if the containing directory is set to 750, newly created files in this directory are guaranteed to have the desired permissions.

Pass: if the permissions are set to 640 or stricter, or the containing directory is set to 750.

Fail: if the permissions are not set to at least 640/750.

Recommended in: internally implemented authentication methods.

#### Check-Identity-03: Is TLS enabled for Identity?

OpenStack components communicate with each other using various protocols, and the communication may involve sensitive or confidential data. An attacker may try to eavesdrop on the channel to gain access to sensitive information. All components must therefore communicate with each other using a secure communication protocol such as HTTPS.

If an HTTP/WSGI server is used for Identity, TLS should be enabled on that HTTP/WSGI server.

Pass: if TLS is enabled on the HTTP server.

Fail: if TLS is not enabled on the HTTP server.

Recommended in: secure communication.

#### Check-Identity-04: (obsolete)

#### Check-Identity-05: Is max_request_body_size set to the default (114688)?

The `max_request_body_size` parameter defines the maximum body size per request, in bytes. If no maximum is defined, an attacker can craft arbitrarily large requests that crash the service, ultimately resulting in a denial-of-service attack. Assigning a maximum value ensures that malicious oversized requests are blocked, preserving the availability of the component.

Pass: if the value of `max_request_body_size` in /etc/keystone/keystone.conf is set to the default (114688) or to another value that is reasonable for your environment.

Fail: if the `max_request_body_size` value is not set.
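A minimal keystone.conf sketch for the request-size check above; the option is provided by oslo.middleware and normally lives in its `[oslo_middleware]` section (the section name is an assumption to verify against your keystone release):

```ini
# /etc/keystone/keystone.conf -- illustrative only
[oslo_middleware]
# Maximum request body size in bytes (default 114688)
max_request_body_size = 114688
```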
#### Check-Identity-06: Is the admin token disabled in /etc/keystone/keystone.conf?

The admin token is generally used only to bootstrap Identity. This token is the most valuable Identity asset and could be used to gain cloud administrator privileges.

Pass: if `admin_token` under the `[DEFAULT]` section in /etc/keystone/keystone.conf is disabled, and `AdminTokenAuthMiddleware` under `[filter:admin_token_auth]` is removed from /etc/keystone/keystone-paste.ini.

Fail: if `admin_token` is set under the `[DEFAULT]` section and `AdminTokenAuthMiddleware` is present in keystone-paste.ini.

Note: disabling `admin_token` means leaving its value unset.

#### Check-Identity-07: Is insecure_debug set to false in /etc/keystone/keystone.conf?

If `insecure_debug` is set to true, the server returns information in HTTP responses that may allow unauthenticated or authenticated users to obtain more information than they normally would, such as additional details about why authentication failed.

Pass: if `insecure_debug` under the `[DEFAULT]` section in /etc/keystone/keystone.conf is false.

Fail: if `insecure_debug` under the `[DEFAULT]` section in /etc/keystone/keystone.conf is true.

#### Check-Identity-08: Is the fernet token provider used in /etc/keystone/keystone.conf?

The OpenStack Identity service offers uuid and fernet as token providers. uuid tokens must be persisted and are considered insecure.

Pass: if the `provider` parameter under the `[token]` section in /etc/keystone/keystone.conf is set to fernet.

Fail: if the `provider` parameter under the `[token]` section is set to uuid.
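A combined keystone.conf sketch that satisfies Check-Identity-06 through Check-Identity-08 could look like the following (illustrative only; leave `admin_token` unset rather than inventing a value):

```ini
# /etc/keystone/keystone.conf -- illustrative hardening excerpt
[DEFAULT]
# admin_token is intentionally left unset (disabled)
insecure_debug = false

[token]
provider = fernet
```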
## Dashboard

The Dashboard (horizon) is the OpenStack dashboard. It provides users with a self-service portal for provisioning their own resources within the limits set by administrators: provisioning users, defining instance flavors, uploading virtual machine (VM) images, managing networks, setting up security groups, launching instances, and accessing instances through a console.

The Dashboard is based on the Django web framework, so Django's secure-deployment practices apply directly to horizon. This guide provides a set of Django security recommendations; more information can be found by reading the Django documentation.

The Dashboard ships with reasonable default security settings and has deployment and configuration documentation. This chapter covers the following topics:

- Domain names, dashboard upgrades, and basic web server configuration
  - Domain names
  - Basic web server configuration
  - Allowed hosts
  - Image upload
- HTTPS, HSTS, XSS, and SSRF
  - Cross-site scripting (XSS)
  - Cross-site request forgery (CSRF)
  - Cross-frame scripting (XFS)
  - HTTPS
  - HTTP Strict Transport Security (HSTS)
- Front-end caching and session back end
  - Front-end caching
  - Session back end
- Static media
- Passwords
- Secret key
- Cookies
- Cross Origin Resource Sharing (CORS)
- Debug
- Checklist
  - Check-Dashboard-01: Is user/group ownership of the configuration file set to root/horizon?
  - Check-Dashboard-02: Are strict permissions set for horizon configuration files?
  - Check-Dashboard-03: Is DISALLOW_IFRAME_EMBED set to True?
  - Check-Dashboard-04: Is CSRF_COOKIE_SECURE set to True?
  - Check-Dashboard-05: Is SESSION_COOKIE_SECURE set to True?
  - Check-Dashboard-06: Is SESSION_COOKIE_HTTPONLY set to True?
  - Check-Dashboard-07: Is PASSWORD_AUTOCOMPLETE set to False?
  - Check-Dashboard-08: Is DISABLE_PASSWORD_REVEAL set to True?
  - Check-Dashboard-09: Is ENFORCE_PASSWORD_CHECK set to True?
  - Check-Dashboard-10: Is PASSWORD_VALIDATOR configured?
  - Check-Dashboard-11: Is SECURE_PROXY_SSL_HEADER configured?

### Domain names, dashboard upgrades, and basic web server configuration
#### Domain names

Many organizations typically deploy web applications in subdomains of their overall organization domain, so users naturally expect something like openstack.example.org. In this context, other applications are often deployed in the same second-level namespace. This naming structure is convenient and simplifies name server maintenance.

We strongly recommend deploying the dashboard to a second-level domain, such as https://example.com, rather than on a shared subdomain at any level, for example https://openstack.example.org or https://horizon.openstack.example.org. We also advise against deploying to bare internal domains such as https://horizon/. These recommendations are based on the limitations of the browser same-origin policy.

If the dashboard is deployed on a domain that also hosts user-generated content, the recommendations in this guide cannot effectively protect against known attacks, even when that content lives on a separate subdomain. User-generated content can include scripts, images, or uploads of any kind. Most major web presences, including googleusercontent.com, fbcdn.com, github.io, and twimg.co, use this approach to isolate user-generated content from cookies and security tokens.

If you do not follow the recommendation regarding second-level domains, avoid cookie-backed session storage and employ HTTP Strict Transport Security (HSTS). When deployed on a subdomain, the dashboard's security is only as strong as the least secure application deployed on the same second-level domain.

#### Basic web server configuration

The dashboard should be deployed as a Web Services Gateway Interface (WSGI) application behind an HTTPS proxy such as Apache or nginx. If Apache is not already in use, we recommend nginx, since it is lightweight and easier to configure correctly.

When using nginx, we recommend gunicorn as the WSGI host with an appropriate number of synchronous workers. When using Apache, we recommend mod_wsgi to host the dashboard.

#### Allowed hosts

Configure the `ALLOWED_HOSTS` setting with the fully qualified host name(s) served by the OpenStack dashboard. Once this setting is provided, any incoming HTTP request whose `Host:` header does not match one of the values in the list raises an error and the requester cannot proceed. Failing to configure this option, or using wildcards in the specified host names, leaves the dashboard vulnerable to security issues associated with fake HTTP Host headers.

For further details, see the Django documentation.
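A minimal local_settings.py sketch for the allowed-hosts recommendation; the host name is a placeholder for your dashboard's fully qualified domain:

```python
# /etc/openstack-dashboard/local_settings.py -- illustrative only
ALLOWED_HOSTS = ['dashboard.example.com']   # never use '*' or other wildcards
```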
#### Horizon image upload

We recommend that implementers disable `HORIZON_IMAGES_ALLOW_UPLOAD` unless they have a plan in place to prevent resource exhaustion and denial of service.

### HTTPS, HSTS, XSS, and SSRF

#### Cross-site scripting (XSS)

Unlike many similar systems, the OpenStack dashboard allows the entire Unicode character set in most fields. That means developers have less latitude to make mistakes that open the door to cross-site scripting (XSS) attacks.

The Dashboard provides developers with tools to avoid creating XSS vulnerabilities, but they only work if developers use them correctly. Audit any custom dashboards, paying particular attention to use of the `mark_safe` function, use of `is_safe` with custom template tags, use of the `safe` template tag, any place where auto-escaping is turned off, and any JavaScript that might improperly evaluate unescaped data.

#### Cross-site request forgery (CSRF)

Django has dedicated middleware for cross-site request forgery (CSRF); see the Django documentation for details.

The OpenStack dashboard is designed to discourage developers from introducing cross-site scripting vulnerabilities through custom dashboards. Dashboards that use multiple instances of JavaScript should be audited for vulnerabilities such as inappropriate use of the `@csrf_exempt` decorator. Any dashboard that does not follow these recommended security settings should be carefully evaluated before the restrictions are relaxed.

#### Cross-frame scripting (XFS)

Legacy browsers remain vulnerable to cross-frame scripting (XFS), so the OpenStack dashboard provides the `DISALLOW_IFRAME_EMBED` option, which allows extra security hardening in deployments that do not use iframes.

#### HTTPS

Deploy the dashboard behind a secure HTTPS server using a valid, trusted certificate from a recognized certificate authority (CA). Certificates issued by a private organizational CA are only appropriate when the trust root is pre-installed in all user browsers.

Configure HTTP requests to the dashboard domain to redirect to the fully qualified HTTPS URL.

#### HTTP Strict Transport Security (HSTS)

We strongly recommend using HTTP Strict Transport Security (HSTS).

Note: if you use an HTTPS proxy in front of your web server, rather than an HTTP server with HTTPS functionality, modify the `SECURE_PROXY_SSL_HEADER` variable; see the Django documentation for information about modifying `SECURE_PROXY_SSL_HEADER`.

For more specific recommendations and server configurations for HTTPS, including HSTS configuration, see the "Secure communication" chapter.
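As an illustration of the HSTS recommendation, the header can be added at the HTTPS proxy; a sketch for Apache with mod_headers enabled (the max-age value is only an example):

```apacheconf
# Apache virtual host serving the dashboard over HTTPS -- illustrative only
Header always set Strict-Transport-Security "max-age=31536000; includeSubDomains"
```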
### Front-end caching and session back end

#### Front-end caching

We do not recommend using front-end caching tools with the dashboard. The dashboard renders dynamic content that comes directly from OpenStack API requests, and a front-end caching layer such as varnish can prevent the correct content from being displayed. In Django, static media is served directly from Apache or nginx and already benefits from web host caching.

#### Session back end

Horizon's default session back end, `django.contrib.sessions.backends.signed_cookies`, stores user data in signed but unencrypted cookies kept in the browser. Because each dashboard instance is stateless, this approach is the simplest way to scale out the session back end.

Note that with this kind of implementation, sensitive access tokens are stored in the browser and transmitted with every request. The back end guarantees the integrity of session data, even though the transmitted data is only encrypted by means of HTTPS.

If your architecture allows shared storage and you have caching configured correctly, we recommend setting `SESSION_ENGINE` to `django.contrib.sessions.backends.cache` as a cache-based session back end, with memcached as the cache. Memcached is an efficient in-memory key-value store for chunks of data that can be used in high-availability and distributed environments and is easy to configure. You must, however, ensure that there is no data leakage. Memcached uses spare RAM to store frequently accessed blocks of data, acting like a memory cache for repeatedly accessed information. Because memcached uses local memory, there is no database or file system overhead, and data is accessed directly from RAM rather than from disk.
We recommend memcached rather than local-memory caching because it is fast, retains data longer, is multi-process safe, and can share a cache across multiple servers while still treating it as a single cache.

To enable memcached, configure the following settings:

```python
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        # LOCATION should point at your memcached service, for example 'controller:11211'
    }
}
```

See the Django documentation for more details.

### Static media

The dashboard's static media should be deployed to a subdomain of the dashboard domain and served by the web server. Using an external content delivery network (CDN) is also acceptable. This subdomain should not set cookies or serve user-provided content, and the media should be served over HTTPS. Django media settings are documented in the Django documentation.

The dashboard's default configuration uses django_compressor to compress and minify CSS and JavaScript content before serving it. This process should be done statically before deploying the dashboard, rather than using the default in-request dynamic compression, and the resulting files should be copied to the CDN server along with the deployed code. Compression should be done in a non-production build environment. If this is not practical, we recommend disabling resource compression entirely. Online compression dependencies (LESS, Node.js) should not be installed on production machines.
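A sketch of the "compress statically before deployment" step, assuming the usual Django and django_compressor management commands are available in the dashboard's build environment (command availability may differ per distribution):

```console
# Run in the build environment, not on production hosts -- illustrative only
$ python manage.py collectstatic --noinput
$ python manage.py compress --force
```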
### Passwords

Password management should be an integral part of your cloud administration plan. An authoritative treatment of passwords is beyond the scope of this book; however, cloud administrators should consult the best practices recommended in Chapter 4 of the NIST Special Publication guide to enterprise password management.

Browser-based access to the OpenStack cloud, whether through the dashboard or another application, introduces additional considerations. Modern browsers all support some form of password storage and will auto-fill credentials for remembered sites. This is very useful with strong passwords that are hard to remember or type, but it can turn the browser into a weak link if the physical security of the client is compromised. If the browser's password store is not itself protected by a strong password, or if the password store is allowed to remain unlocked for the duration of a session, unauthorized access to your systems is easily obtained.

Password manager applications such as KeePassX and Password Safe are very useful, since most of them support generating strong passwords and periodic reminders to generate new ones. Most importantly, the password store remains unlocked only briefly, which lowers the risk of password disclosure and of unauthorized access to resources through a compromised browser or system.

### Secret key

The dashboard depends on a shared `SECRET_KEY` setting for some of its security functions. The secret key should be a randomly generated string at least 64 characters long and must be shared across all active dashboard instances. Compromise of this key may allow a remote attacker to execute arbitrary code. Rotating this key invalidates existing user sessions and caches. Do not commit this key to public repositories; a generation sketch follows the CORS section below.

### Cookies

Session cookies should be set as HTTPONLY:

```python
SESSION_COOKIE_HTTPONLY = True
```

Never configure CSRF or session cookies to have a wildcard domain with a leading dot. When deploying over HTTPS, protect horizon's session and CSRF cookies:

```python
CSRF_COOKIE_SECURE = True
SESSION_COOKIE_SECURE = True
```

### Cross Origin Resource Sharing (CORS)

Configure your web server to send a restrictive CORS header with every response, allowing only the dashboard domain and protocol:

```
Access-Control-Allow-Origin: https://example.com/
```

Never allow a wildcard origin.
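Returning to the secret key recommendation above, a small illustration of generating a suitably long random key outside of version control; the output would then be referenced from local_settings.py (the file path is only an example):

```python
# generate_secret_key.py -- illustrative only
import secrets

# 64 hexadecimal characters of cryptographically secure randomness
key = secrets.token_hex(32)
with open('/etc/openstack-dashboard/.secret_key', 'w') as f:
    f.write(key)
print('SECRET_KEY written; reference it from local_settings.py and do not commit it')
```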
### Debug

We recommend setting `DEBUG` to `False` in production environments. If `DEBUG` is set to `True`, Django displays stack traces and sensitive web server state information whenever an exception is thrown.

### Checklist

#### Check-Dashboard-01: Is user/group ownership of the configuration file set to root/horizon?

Configuration files contain critical parameters and information required for the smooth functioning of the component. If an unprivileged user, either intentionally or accidentally, modifies or deletes any of the parameters or the file itself, severe availability issues result, leading to a denial of service for the other end users. User ownership of such critical configuration files must therefore be set to root and group ownership to horizon.

Run the following command:

```console
$ stat -L -c "%U %G" /etc/openstack-dashboard/local_settings.py | egrep "root horizon"
```

Pass: if user and group ownership of the configuration file is set to root and horizon respectively. The command above should show output of `root horizon`.

Fail: if the command above returns no output, since user or group ownership may have been set to a user other than root or a group other than horizon.

#### Check-Dashboard-02: Are strict permissions set for horizon configuration files?

Similar to the previous check, we recommend setting strict access permissions for such configuration files.

Run the following command:

```console
$ stat -L -c "%a" /etc/openstack-dashboard/local_settings.py
```

Pass: if the permissions are set to 640 or stricter. A permission of 640 translates to owner r/w, group r, and no permissions for others, that is "u=rw, g=r, o=". Note that with Check-Dashboard-01 in place and permissions set to 640, the root user has read/write access and horizon has read access to these configuration files. The access rights can also be validated with the following command, which is only available if your system supports ACLs:

```console
$ getfacl --tabular -a /etc/openstack-dashboard/local_settings.py
getfacl: Removing leading '/' from absolute path names
# file: etc/openstack-dashboard/local_settings.py
USER   root     rw-
GROUP  horizon  r--
mask            r--
other           ---
```

Fail: if the permissions are not set to at least 640.
#### Check-Dashboard-03: Is DISALLOW_IFRAME_EMBED set to True?

`DISALLOW_IFRAME_EMBED` can be used to prevent the OpenStack Dashboard from being embedded within an iframe. Legacy browsers remain vulnerable to cross-frame scripting (XFS), so this option allows extra security hardening in deployments that do not use iframes. The default setting is True.

Pass: if the value of `DISALLOW_IFRAME_EMBED` in /etc/openstack-dashboard/local_settings.py is set to `True`.

Fail: if the value of `DISALLOW_IFRAME_EMBED` in /etc/openstack-dashboard/local_settings.py is set to `False`.

Recommended in: HTTPS, HSTS, XSS, and SSRF.
#### Check-Dashboard-04: Is CSRF_COOKIE_SECURE set to True?

CSRF (cross-site request forgery) is an attack that forces an end user to execute unauthorized commands on a web application in which they are currently authenticated. A successful CSRF exploit can compromise end-user data and operations; if the targeted end user has administrator privileges, it can compromise the entire web application.

Pass: if the value of `CSRF_COOKIE_SECURE` in /etc/openstack-dashboard/local_settings.py is set to `True`.

Fail: if the value of `CSRF_COOKIE_SECURE` in /etc/openstack-dashboard/local_settings.py is set to `False`.

Recommended in: Cookies.

#### Check-Dashboard-05: Is SESSION_COOKIE_SECURE set to True?

The "SECURE" cookie attribute instructs web browsers to send the cookie only over an encrypted HTTPS (SSL/TLS) connection. This session protection mechanism is mandatory to prevent session ID disclosure through man-in-the-middle (MitM) attacks. It ensures that an attacker cannot simply capture the session ID from web browser traffic.

Pass: if the value of `SESSION_COOKIE_SECURE` in /etc/openstack-dashboard/local_settings.py is set to `True`.

Fail: if the value of `SESSION_COOKIE_SECURE` in /etc/openstack-dashboard/local_settings.py is set to `False`.

Recommended in: Cookies.

#### Check-Dashboard-06: Is SESSION_COOKIE_HTTPONLY set to True?

The "HTTPONLY" cookie attribute instructs web browsers not to allow scripts (for example JavaScript or VBScript) to access the cookie via the DOM `document.cookie` object. This session ID protection is required to prevent session ID theft through XSS attacks.

Pass: if the value of `SESSION_COOKIE_HTTPONLY` in /etc/openstack-dashboard/local_settings.py is set to `True`.

Fail: if the value of `SESSION_COOKIE_HTTPONLY` in /etc/openstack-dashboard/local_settings.py is set to `False`.

Recommended in: Cookies.

#### Check-Dashboard-07: Is PASSWORD_AUTOCOMPLETE set to False?

A common convenience feature is for the application to cache the password locally in the browser (on the client machine) and "pre-type" it in all subsequent requests. While convenient for ordinary users, this introduces a flaw: anyone using the same account on the client machine can easily access the user's account, potentially compromising it.

Pass: if the value of `PASSWORD_AUTOCOMPLETE` in /etc/openstack-dashboard/local_settings.py is set to `off`.

Fail: if the value of `PASSWORD_AUTOCOMPLETE` in /etc/openstack-dashboard/local_settings.py is set to `on`.

#### Check-Dashboard-08: Is DISABLE_PASSWORD_REVEAL set to True?

Similar to the previous check, we recommend not revealing password fields.

Pass: if the value of `DISABLE_PASSWORD_REVEAL` in /etc/openstack-dashboard/local_settings.py is set to `True`.

Fail: if the value of `DISABLE_PASSWORD_REVEAL` in /etc/openstack-dashboard/local_settings.py is set to `False`.

Note: this option was introduced in the Kilo release.

#### Check-Dashboard-09: Is ENFORCE_PASSWORD_CHECK set to True?

Setting `ENFORCE_PASSWORD_CHECK` to `True` displays an "Admin Password" field on the Change Password form, which verifies that it is indeed the administrator who is logged in and requesting the password change.

Pass: if the value of `ENFORCE_PASSWORD_CHECK` in /etc/openstack-dashboard/local_settings.py is set to `True`.

Fail: if the value of `ENFORCE_PASSWORD_CHECK` in /etc/openstack-dashboard/local_settings.py is set to `False`.
#### Check-Dashboard-10: Is PASSWORD_VALIDATOR configured?

A regular expression can be used to validate the complexity of user passwords.

Pass: if the value of `PASSWORD_VALIDATOR` in /etc/openstack-dashboard/local_settings.py is set to anything other than the default allow-all regex `'.*'`.

Fail: if the value of `PASSWORD_VALIDATOR` in /etc/openstack-dashboard/local_settings.py is left at the allow-all regex `'.*'`.

#### Check-Dashboard-11: Is SECURE_PROXY_SSL_HEADER configured?

If the OpenStack Dashboard is deployed behind a proxy, and the proxy either strips the `X-Forwarded-Proto` header from all incoming requests, or sets the `X-Forwarded-Proto` header and sends it to the Dashboard only for requests that originally came in via HTTPS, you should consider configuring `SECURE_PROXY_SSL_HEADER`. More information can be found in the Django documentation.

Pass: if the value of `SECURE_PROXY_SSL_HEADER` in /etc/openstack-dashboard/local_settings.py is set to `'HTTP_X_FORWARDED_PROTO', 'https'`.

Fail: if the value of `SECURE_PROXY_SSL_HEADER` in /etc/openstack-dashboard/local_settings.py is not set to `'HTTP_X_FORWARDED_PROTO', 'https'` or is commented out.
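A consolidated local_settings.py sketch covering the dashboard checks above; all values are illustrative, and the `HORIZON_CONFIG` keys are assumptions that should be verified against your horizon release and proxy setup:

```python
# /etc/openstack-dashboard/local_settings.py -- illustrative hardening excerpt
DISALLOW_IFRAME_EMBED = True
CSRF_COOKIE_SECURE = True
SESSION_COOKIE_SECURE = True
SESSION_COOKIE_HTTPONLY = True
DISABLE_PASSWORD_REVEAL = True
ENFORCE_PASSWORD_CHECK = True

# Assumes HORIZON_CONFIG is already defined earlier in local_settings.py;
# these keys correspond to the PASSWORD_AUTOCOMPLETE and PASSWORD_VALIDATOR checks
HORIZON_CONFIG['password_autocomplete'] = 'off'
HORIZON_CONFIG['password_validator'] = {
    'regex': '.{8,}',   # example policy only: at least 8 characters
    'help_text': 'Your password must be at least 8 characters long.',
}

# Only when the dashboard sits behind an HTTPS-terminating proxy
SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')
```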
## Compute

The OpenStack Compute service (nova) runs in many locations throughout the cloud and interacts with a variety of internal services. The Compute service offers a wide range of configuration options, some of which are deployment specific.

This chapter covers general best practices for Compute security as well as specific known configurations that can lead to security issues. The nova.conf file and the /var/lib/nova location should be protected, and controls such as centralized logging, the policy.json file, and a mandatory access control framework should be implemented.

This chapter covers the following topics:

- Hypervisor selection
  - Hypervisors in OpenStack
  - Selection criteria: team expertise, product or project maturity, certifications and attestations, Common Criteria, cryptographic standards (FIPS 140-2), hardware concerns, hypervisors versus bare metal, additional security features
  - Hypervisor memory optimization: KVM Kernel Samepage Merging, Xen transparent page sharing, security considerations for memory optimization
  - Bibliography
- Hardening the virtualization layer
  - Physical hardware (PCI passthrough)
  - Virtual hardware (QEMU): minimizing the QEMU code base, compiler hardening, Secure Encrypted Virtualization
  - Mandatory access controls: sVirt (SELinux and virtualization), labels and categories, SELinux users and roles, booleans
- Hardening Compute deployments and vulnerability awareness
  - OpenStack Vulnerability Management Team, OpenStack Security Notes, the OpenStack-dev and openstack-discuss mailing lists, hypervisor mailing lists
- How to select virtual consoles
  - Virtual Network Computing (VNC)
  - Simple Protocol for Independent Computing Environments (SPICE)
- Checklist
  - Check-Compute-01: Is user/group ownership of the configuration files set to root/nova?
  - Check-Compute-02: Are strict permissions set for the configuration files?
  - Check-Compute-03: Is keystone used for authentication?
  - Check-Compute-04: Is a secure protocol used for authentication?
  - Check-Compute-05: Does nova communicate with glance securely?

### Hypervisor selection
#### Hypervisors in OpenStack

Whether OpenStack is deployed within private data centers or as a public cloud service, the underlying virtualization technology provides enterprise-level capabilities in scalability, resource efficiency, and uptime. While these high-level benefits are generally available across many OpenStack-supported hypervisor technologies, there are significant differences in the security architecture and features of each hypervisor, particularly when considering the security threat vectors that are specific to elastic OpenStack environments. As applications consolidate onto single infrastructure-as-a-service (IaaS) platforms, instance isolation at the hypervisor level becomes critical. The requirement for secure isolation applies across commercial, government, and military communities.

Within the OpenStack framework, you can choose among many hypervisor platforms and corresponding OpenStack plug-ins to optimize your cloud environment. In the context of this guide, hypervisor selection considerations are highlighted as they relate to feature sets that are critical to security. These considerations are not, however, an exhaustive investigation into the pros and cons of particular hypervisors. NIST provides additional guidance in Special Publication 800-125, "Guide to Security for Full Virtualization Technologies".

#### Selection criteria

As part of your hypervisor selection process, you must consider a number of important factors that help improve your security posture. Specifically, you should become familiar with the following areas:

- Team expertise
- Product or project maturity
- Common Criteria
- Certifications and attestations
- Hardware concerns
- Hypervisors versus bare metal
- Additional security features

In addition, we strongly recommend evaluating the following security-related criteria when selecting a hypervisor for OpenStack deployments:

- Has the hypervisor undergone Common Criteria certification? If so, to what level?
- Is the underlying cryptography certified by a third party?

#### Team expertise

Most likely, the most important aspect of hypervisor selection is the expertise of your staff in managing and maintaining a particular hypervisor platform. The more familiar your team is with a given product, its configuration, and its eccentricities, the fewer configuration mistakes there will be. In addition, spreading staff expertise on a given hypervisor across the organization increases the availability of your systems, allows for segregation of duties, and mitigates problems when a team member becomes unavailable.

#### Product or project maturity

The maturity of a given hypervisor product or project is also critical to your security posture. Product maturity has a number of effects once you have deployed your cloud:

- Availability of expertise
- Active developer and user communities
- Timeliness and availability of updates
- Incident response

One of the biggest indicators of a hypervisor's maturity is the size and vibrancy of the community that surrounds it. As far as security is concerned, the quality of that community affects the availability of expertise should you need additional cloud operators. It is also an indication of how widely the hypervisor is deployed, and in turn of the battle readiness of any reference architectures and best practices.

Further, the quality of the community surrounding an open source hypervisor such as KVM or Xen has a direct impact on the timeliness of bug fixes and security updates. When investigating both commercial and open source hypervisors, look into their release and support cycles as well as the time delta between the announcement of a bug or security issue and the corresponding patch or response. Lastly, the supported capabilities of OpenStack Compute vary depending on the hypervisor chosen. See the OpenStack Hypervisor Support Matrix for Compute feature support by hypervisor.
\u8ba1\u7b97\u529f\u80fd\u7684\u652f\u6301\u3002","title":"\u4ea7\u54c1\u6216\u9879\u76ee\u6210\u719f\u5ea6"},{"location":"security/security-guide/#_140","text":"\u9009\u62e9\u865a\u62df\u673a\u7ba1\u7406\u7a0b\u5e8f\u65f6\uff0c\u53e6\u4e00\u4e2a\u8003\u8651\u56e0\u7d20\u662f\u5404\u79cd\u6b63\u5f0f\u8ba4\u8bc1\u548c\u8bc1\u660e\u7684\u53ef\u7528\u6027\u3002\u867d\u7136\u5b83\u4eec\u53ef\u80fd\u4e0d\u662f\u7279\u5b9a\u7ec4\u7ec7\u7684\u8981\u6c42\uff0c\u4f46\u8fd9\u4e9b\u8ba4\u8bc1\u548c\u8bc1\u660e\u8bf4\u660e\u4e86\u7279\u5b9a\u865a\u62df\u673a\u7ba1\u7406\u7a0b\u5e8f\u5e73\u53f0\u6240\u7ecf\u8fc7\u7684\u6d4b\u8bd5\u7684\u6210\u719f\u5ea6\u3001\u751f\u4ea7\u51c6\u5907\u60c5\u51b5\u548c\u5f7b\u5e95\u6027\u3002","title":"\u8ba4\u8bc1\u548c\u8bc1\u660e"},{"location":"security/security-guide/#_141","text":"\u901a\u7528\u6807\u51c6\u662f\u4e00\u4e2a\u56fd\u9645\u6807\u51c6\u5316\u7684\u8f6f\u4ef6\u8bc4\u4f30\u8fc7\u7a0b\uff0c\u653f\u5e9c\u548c\u5546\u4e1a\u516c\u53f8\u4f7f\u7528\u5b83\u6765\u9a8c\u8bc1\u8f6f\u4ef6\u6280\u672f\u662f\u5426\u5982\u5ba3\u4f20\u7684\u90a3\u6837\u3002\u5728\u653f\u5e9c\u90e8\u95e8\uff0cNSTISSP \u7b2c 11 \u53f7\u89c4\u5b9a\u7f8e\u56fd\u653f\u5e9c\u673a\u6784\u53ea\u80fd\u91c7\u8d2d\u5df2\u901a\u8fc7\u901a\u7528\u6807\u51c6\u8ba4\u8bc1\u7684\u8f6f\u4ef6\uff0c\u8be5\u653f\u7b56\u81ea 2002 \u5e74 7 \u6708\u8d77\u5b9e\u65bd\u3002 \u6ce8\u610f OpenStack\u5c1a\u672a\u901a\u8fc7\u901a\u7528\u6807\u51c6\u8ba4\u8bc1\uff0c\u4f46\u8bb8\u591a\u53ef\u7528\u7684\u865a\u62df\u673a\u7ba1\u7406\u7a0b\u5e8f\u90fd\u7ecf\u8fc7\u4e86\u8ba4\u8bc1\u3002 \u9664\u4e86\u9a8c\u8bc1\u6280\u672f\u80fd\u529b\u5916\uff0c\u901a\u7528\u6807\u51c6\u6d41\u7a0b\u8fd8\u8bc4\u4f30\u6280\u672f\u7684\u5f00\u53d1\u65b9\u5f0f\u3002 \u5982\u4f55\u8fdb\u884c\u6e90\u4ee3\u7801\u7ba1\u7406\uff1f \u5982\u4f55\u6388\u4e88\u7528\u6237\u5bf9\u6784\u5efa\u7cfb\u7edf\u7684\u8bbf\u95ee\u6743\u9650\uff1f \u8be5\u6280\u672f\u5728\u5206\u53d1\u524d\u662f\u5426\u7ecf\u8fc7\u52a0\u5bc6\u7b7e\u540d\uff1f KVM \u865a\u62df\u673a\u7ba1\u7406\u7a0b\u5e8f\u5df2\u901a\u8fc7\u7f8e\u56fd\u653f\u5e9c\u548c\u5546\u4e1a\u53d1\u884c\u7248\u7684\u901a\u7528\u6807\u51c6\u8ba4\u8bc1\u3002\u8fd9\u4e9b\u5df2\u7ecf\u8fc7\u9a8c\u8bc1\uff0c\u53ef\u4ee5\u5c06\u865a\u62df\u673a\u7684\u8fd0\u884c\u65f6\u73af\u5883\u5f7c\u6b64\u5206\u79bb\uff0c\u4ece\u800c\u63d0\u4f9b\u57fa\u7840\u6280\u672f\u6765\u5b9e\u65bd\u5b9e\u4f8b\u9694\u79bb\u3002\u9664\u4e86\u865a\u62df\u673a\u9694\u79bb\u4e4b\u5916\uff0cKVM \u8fd8\u901a\u8fc7\u4e86\u901a\u7528\u6807\u51c6\u8ba4\u8bc1\uff1a \"...provide system-inherent separation mechanisms to the resources of virtual machines. This separation ensures that large software component used for virtualizing and simulating devices executing for each virtual machine cannot interfere with each other. Using the SELinux multi-category mechanism, the virtualization and simulation software instances are isolated. 
The virtual machine management framework configures SELinux multi-category settings transparently to the administrator.\" \u867d\u7136\u8bb8\u591a\u865a\u62df\u673a\u7ba1\u7406\u7a0b\u5e8f\u4f9b\u5e94\u5546\uff08\u5982 Red Hat\u3001Microsoft \u548c VMware\uff09\u5df2\u83b7\u5f97\u901a\u7528\u6807\u51c6\u8ba4\u8bc1\uff0c\u4f46\u5176\u57fa\u7840\u8ba4\u8bc1\u529f\u80fd\u96c6\u6709\u6240\u4e0d\u540c\uff0c\u4f46\u6211\u4eec\u5efa\u8bae\u8bc4\u4f30\u4f9b\u5e94\u5546\u58f0\u660e\uff0c\u4ee5\u786e\u4fdd\u5b83\u4eec\u81f3\u5c11\u6ee1\u8db3\u4ee5\u4e0b\u8981\u6c42\uff1a \u5ba1\u8ba1 \u8be5\u7cfb\u7edf\u63d0\u4f9b\u4e86\u5ba1\u6838\u5927\u91cf\u4e8b\u4ef6\u7684\u529f\u80fd\uff0c\u5305\u62ec\u5355\u4e2a\u7cfb\u7edf\u8c03\u7528\u548c\u53d7\u4fe1\u4efb\u8fdb\u7a0b\u751f\u6210\u7684\u4e8b\u4ef6\u3002\u5ba1\u8ba1\u6570\u636e\u4ee5 ASCII \u683c\u5f0f\u6536\u96c6\u5728\u5e38\u89c4\u6587\u4ef6\u4e2d\u3002\u7cfb\u7edf\u63d0\u4f9b\u4e86\u4e00\u4e2a\u7528\u4e8e\u641c\u7d22\u5ba1\u8ba1\u8bb0\u5f55\u7684\u7a0b\u5e8f\u3002\u7cfb\u7edf\u7ba1\u7406\u5458\u53ef\u4ee5\u5b9a\u4e49\u4e00\u4e2a\u89c4\u5219\u5e93\uff0c\u4ee5\u5c06\u5ba1\u6838\u9650\u5236\u4e3a\u4ed6\u4eec\u611f\u5174\u8da3\u7684\u4e8b\u4ef6\u3002\u8fd9\u5305\u62ec\u5c06\u5ba1\u6838\u9650\u5236\u4e3a\u7279\u5b9a\u4e8b\u4ef6\u3001\u7279\u5b9a\u7528\u6237\u3001\u7279\u5b9a\u5bf9\u8c61\u6216\u6240\u6709\u8fd9\u4e9b\u7684\u7ec4\u5408\u7684\u80fd\u529b\u3002\u5ba1\u8ba1\u8bb0\u5f55\u53ef\u4ee5\u4f20\u8f93\u5230\u8fdc\u7a0b\u5ba1\u8ba1\u5b88\u62a4\u7a0b\u5e8f\u3002 \u81ea\u4e3b\u8bbf\u95ee\u63a7\u5236 \u81ea\u4e3b\u8bbf\u95ee\u63a7\u5236 \uff08DAC\uff09 \u9650\u5236\u5bf9\u57fa\u4e8e ACL \u7684\u6587\u4ef6\u7cfb\u7edf\u5bf9\u8c61\u7684\u8bbf\u95ee\uff0c\u8fd9\u4e9b\u5bf9\u8c61\u5305\u62ec\u7528\u6237\u3001\u7ec4\u548c\u5176\u4ed6\u4eba\u5458\u7684\u6807\u51c6 UNIX \u6743\u9650\u3002\u8bbf\u95ee\u63a7\u5236\u673a\u5236\u8fd8\u53ef\u4ee5\u4fdd\u62a4 IPC \u5bf9\u8c61\u514d\u53d7\u672a\u7ecf\u6388\u6743\u7684\u8bbf\u95ee\u3002\u8be5\u7cfb\u7edf\u5305\u62ec ext4 \u6587\u4ef6\u7cfb\u7edf\uff0c\u5b83\u652f\u6301 POSIX ACL\u3002\u8fd9\u5141\u8bb8\u5b9a\u4e49\u5bf9\u6b64\u7c7b\u6587\u4ef6\u7cfb\u7edf\u4e2d\u6587\u4ef6\u7684\u8bbf\u95ee\u6743\u9650\uff0c\u7cbe\u786e\u5230\u5355\u4e2a\u7528\u6237\u7684\u7c92\u5ea6\u3002 \u5f3a\u5236\u8bbf\u95ee\u63a7\u5236 \u5f3a\u5236\u8bbf\u95ee\u63a7\u5236 \uff08MAC\uff09 \u6839\u636e\u5206\u914d\u7ed9\u4e3b\u4f53\u548c\u5bf9\u8c61\u7684\u6807\u7b7e\u6765\u9650\u5236\u5bf9\u5bf9\u8c61\u7684\u8bbf\u95ee\u3002\u654f\u611f\u5ea6\u6807\u7b7e\u4f1a\u81ea\u52a8\u9644\u52a0\u5230\u8fdb\u7a0b\u548c\u5bf9\u8c61\u3002\u4f7f\u7528\u8fd9\u4e9b\u6807\u7b7e\u5f3a\u5236\u5b9e\u65bd\u7684\u8bbf\u95ee\u63a7\u5236\u7b56\u7565\u6d3e\u751f\u81ea Bell-LaPadula \u6a21\u578b\u3002SELinux \u7c7b\u522b\u9644\u52a0\u5230\u865a\u62df\u673a\u53ca\u5176\u8d44\u6e90\u3002\u5982\u679c\u865a\u62df\u673a\u7684\u7c7b\u522b\u4e0e\u6240\u8bbf\u95ee\u8d44\u6e90\u7684\u7c7b\u522b\u76f8\u540c\uff0c\u5219\u4f7f\u7528\u8fd9\u4e9b\u7c7b\u522b\u5f3a\u5236\u5b9e\u65bd\u7684\u8bbf\u95ee\u63a7\u5236\u7b56\u7565\u5c06\u6388\u4e88\u865a\u62df\u673a\u5bf9\u8d44\u6e90\u7684\u8bbf\u95ee\u6743\u9650\u3002TOE \u5b9e\u73b0\u975e\u5206\u5c42\u7c7b\u522b\u6765\u63a7\u5236\u5bf9\u865a\u62df\u673a\u7684\u8bbf\u95ee\u3002 \u57fa\u4e8e\u89d2\u8272\u7684\u8bbf\u95ee\u63a7\u5236 \u57fa\u4e8e\u89d2\u8272\u7684\u8bbf\u95ee\u63a7\u5236 \uff08RBAC\uff09 \u5141\u8bb8\u89d2\u8272\u5206\u79bb\uff0c\u65e0\u9700\u5168\u80fd\u7684\u7cfb\u7edf\u7ba1\u7406\u5458\u3002 \u5bf9\u8c61\u91cd\u7528 
\u6587\u4ef6\u7cfb\u7edf\u5bf9\u8c61\u3001\u5185\u5b58\u548c IPC \u5bf9\u8c61\u5728\u88ab\u5c5e\u4e8e\u5176\u4ed6\u7528\u6237\u7684\u8fdb\u7a0b\u91cd\u7528\u4e4b\u524d\u4f1a\u88ab\u6e05\u9664\u3002 \u5b89\u5168\u7ba1\u7406 \u7cfb\u7edf\u5b89\u5168\u5173\u952e\u53c2\u6570\u7684\u7ba1\u7406\u7531\u7ba1\u7406\u7528\u6237\u6267\u884c\u3002\u4e00\u7ec4\u9700\u8981 root \u6743\u9650\uff08\u6216\u4f7f\u7528 RBAC \u65f6\u9700\u8981\u7279\u5b9a\u89d2\u8272\uff09\u7684\u547d\u4ee4\u7528\u4e8e\u7cfb\u7edf\u7ba1\u7406\u3002\u5b89\u5168\u53c2\u6570\u5b58\u50a8\u5728\u7279\u5b9a\u6587\u4ef6\u4e2d\uff0c\u8fd9\u4e9b\u6587\u4ef6\u53d7\u7cfb\u7edf\u7684\u8bbf\u95ee\u63a7\u5236\u673a\u5236\u4fdd\u62a4\uff0c\u9632\u6b62\u975e\u7ba1\u7406\u7528\u6237\u672a\u7ecf\u6388\u6743\u7684\u8bbf\u95ee\u3002 \u5b89\u5168\u901a\u4fe1 \u7cfb\u7edf\u652f\u6301\u4f7f\u7528 SSH \u5b9a\u4e49\u53ef\u4fe1\u901a\u9053\u3002\u652f\u6301\u57fa\u4e8e\u5bc6\u7801\u7684\u8eab\u4efd\u9a8c\u8bc1\u3002\u5728\u8bc4\u4f30\u7684\u914d\u7f6e\u4e2d\uff0c\u8fd9\u4e9b\u534f\u8bae\u4ec5\u652f\u6301\u6709\u9650\u6570\u91cf\u7684\u5bc6\u7801\u5957\u4ef6\u3002 \u5b58\u50a8\u52a0\u5bc6 \u7cfb\u7edf\u652f\u6301\u52a0\u5bc6\u5757\u8bbe\u5907\uff0c\u901a\u8fc7 dm_crypt \u63d0\u4f9b\u5b58\u50a8\u673a\u5bc6\u6027\u3002 TSF \u4fdd\u62a4 \u5728\u8fd0\u884c\u65f6\uff0c\u5185\u6838\u8f6f\u4ef6\u548c\u6570\u636e\u53d7\u5230\u786c\u4ef6\u5185\u5b58\u4fdd\u62a4\u673a\u5236\u7684\u4fdd\u62a4\u3002\u5185\u6838\u7684\u5185\u5b58\u548c\u8fdb\u7a0b\u7ba1\u7406\u7ec4\u4ef6\u786e\u4fdd\u7528\u6237\u8fdb\u7a0b\u65e0\u6cd5\u8bbf\u95ee\u5185\u6838\u5b58\u50a8\u6216\u5c5e\u4e8e\u5176\u4ed6\u8fdb\u7a0b\u7684\u5b58\u50a8\u3002\u975e\u5185\u6838 TSF \u8f6f\u4ef6\u548c\u6570\u636e\u53d7 DAC \u548c\u8fdb\u7a0b\u9694\u79bb\u673a\u5236\u4fdd\u62a4\u3002\u5728\u8bc4\u4f30\u7684\u914d\u7f6e\u4e2d\uff0c\u4fdd\u7559\u7528\u6237 ID root \u62e5\u6709\u5b9a\u4e49 TSF \u914d\u7f6e\u7684\u76ee\u5f55\u548c\u6587\u4ef6\u3002\u901a\u5e38\uff0c\u5305\u542b\u5185\u90e8 TSF \u6570\u636e\u7684\u6587\u4ef6\u548c\u76ee\u5f55\uff08\u5982\u914d\u7f6e\u6587\u4ef6\u548c\u6279\u5904\u7406\u4f5c\u4e1a\u961f\u5217\uff09\u4e5f\u53d7\u5230 DAC \u6743\u9650\u7684\u4fdd\u62a4\uff0c\u4e0d\u4f1a\u88ab\u8bfb\u53d6\u3002\u7cfb\u7edf\u4ee5\u53ca\u786c\u4ef6\u548c\u56fa\u4ef6\u7ec4\u4ef6\u9700\u8981\u53d7\u5230\u7269\u7406\u4fdd\u62a4\uff0c\u4ee5\u9632\u6b62\u672a\u7ecf\u6388\u6743\u7684\u8bbf\u95ee\u3002\u7cfb\u7edf\u5185\u6838\u8c03\u89e3\u5bf9\u786c\u4ef6\u673a\u5236\u672c\u8eab\u7684\u6240\u6709\u8bbf\u95ee\uff0c\u4f46\u7a0b\u5e8f\u53ef\u89c1\u7684 CPU \u6307\u4ee4\u51fd\u6570\u9664\u5916\u3002\u6b64\u5916\uff0c\u8fd8\u63d0\u4f9b\u4e86\u9632\u6b62\u5806\u6808\u6ea2\u51fa\u653b\u51fb\u7684\u673a\u5236\u3002","title":"\u901a\u7528\u6807\u51c6"},{"location":"security/security-guide/#_142","text":"OpenStack \u4e2d\u63d0\u4f9b\u4e86\u591a\u79cd\u52a0\u5bc6\u7b97\u6cd5\uff0c\u7528\u4e8e\u8bc6\u522b\u548c\u6388\u6743\u3001\u6570\u636e\u4f20\u8f93\u548c\u9759\u6001\u6570\u636e\u4fdd\u62a4\u3002\u9009\u62e9\u865a\u62df\u673a\u7ba1\u7406\u7a0b\u5e8f\u65f6\uff0c\u6211\u4eec\u5efa\u8bae\u91c7\u7528\u4ee5\u4e0b\u7b97\u6cd5\u548c\u5b9e\u73b0\u6807\u51c6\uff1a \u7b97\u6cd5 \u5bc6\u94a5\u957f\u5ea6 \u9884\u671f\u76ee\u7684 \u5b89\u5168\u529f\u80fd \u6267\u884c\u6807\u51c6 AES 128\u3001192 \u6216 256 \u4f4d \u52a0\u5bc6/\u89e3\u5bc6 \u53d7\u4fdd\u62a4\u7684\u6570\u636e\u4f20\u8f93\uff0c\u4fdd\u62a4\u9759\u6001\u6570\u636e RFC 4253 TDES 168 \u4f4d \u52a0\u5bc6/\u89e3\u5bc6 \u53d7\u4fdd\u62a4\u7684\u6570\u636e\u4f20\u8f93 RFC 
4253 RSA 1024\u30012048 \u6216 3072 \u4f4d \u8eab\u4efd\u9a8c\u8bc1\u3001\u5bc6\u94a5\u4ea4\u6362 \u8bc6\u522b\u548c\u8eab\u4efd\u9a8c\u8bc1\uff0c\u53d7\u4fdd\u62a4\u7684\u6570\u636e\u4f20\u8f93 U.S. NIST FIPS PUB 186-3 DSA L=1024\uff0cN=160\u4f4d \u8eab\u4efd\u9a8c\u8bc1\u3001\u5bc6\u94a5\u4ea4\u6362 \u8bc6\u522b\u548c\u8eab\u4efd\u9a8c\u8bc1\uff0c\u53d7\u4fdd\u62a4\u7684\u6570\u636e\u4f20\u8f93 U.S. NIST FIPS PUB 186-3 Serpent 128\u3001192 \u6216 256 \u4f4d \u52a0\u5bc6/\u89e3\u5bc6 \u9759\u6001\u6570\u636e\u4fdd\u62a4 http://www.cl.cam.ac.uk/~rja14/Papers/serpent.pdf Twofish 128\u3001192 \u6216 256 \u4f4d \u52a0\u5bc6/\u89e3\u5bc6 \u9759\u6001\u6570\u636e\u4fdd\u62a4 https://www.schneier.com/paper-twofish-paper.html SHA-1 \u6d88\u606f\u6458\u8981 \u4fdd\u62a4\u9759\u6001\u6570\u636e\uff0c\u53d7\u4fdd\u62a4\u7684\u6570\u636e\u4f20\u8f93 U.S. NIST FIPS PUB 180-3 SHA-2\uff08224\u3001256\u3001384 \u6216 512 \u4f4d\uff09 \u6d88\u606f\u6458\u8981 Protection for data at rest, identification and authentication \u4fdd\u62a4\u9759\u6001\u6570\u636e\u3001\u8bc6\u522b\u548c\u8eab\u4efd\u9a8c\u8bc1 U.S. NIST FIPS PUB 180-3","title":"\u5bc6\u7801\u5b66\u6807\u51c6"},{"location":"security/security-guide/#fips-140-2","text":"\u5728\u7f8e\u56fd\uff0c\u7f8e\u56fd\u56fd\u5bb6\u79d1\u5b66\u6280\u672f\u7814\u7a76\u9662 \uff08NIST\uff09 \u901a\u8fc7\u79f0\u4e3a\u52a0\u5bc6\u6a21\u5757\u9a8c\u8bc1\u8ba1\u5212\u7684\u8fc7\u7a0b\u5bf9\u52a0\u5bc6\u7b97\u6cd5\u8fdb\u884c\u8ba4\u8bc1\u3002NIST \u8ba4\u8bc1\u7b97\u6cd5\u7b26\u5408\u8054\u90a6\u4fe1\u606f\u5904\u7406\u6807\u51c6 140-2 \uff08FIPS 140-2\uff09\uff0c\u786e\u4fdd\uff1a \"... Products validated as conforming to FIPS 140-2 are accepted by the Federal agencies of both countries [United States and Canada] for the protection of sensitive information (United States) or Designated Information (Canada). 
The goal of the CMVP is to promote the use of validated cryptographic modules and provide Federal agencies with a security metric to use in procuring equipment containing validated cryptographic modules.\" \u5728\u8bc4\u4f30\u57fa\u672c\u865a\u62df\u673a\u7ba1\u7406\u7a0b\u5e8f\u6280\u672f\u65f6\uff0c\u8bf7\u8003\u8651\u865a\u62df\u673a\u7ba1\u7406\u7a0b\u5e8f\u662f\u5426\u5df2\u901a\u8fc7 FIPS 140-2 \u8ba4\u8bc1\u3002\u6839\u636e\u7f8e\u56fd\u653f\u5e9c\u653f\u7b56\uff0c\u4e0d\u4ec5\u5f3a\u5236\u8981\u6c42\u7b26\u5408 FIPS 140-2\uff0c\u800c\u4e14\u6b63\u5f0f\u8ba4\u8bc1\u8868\u660e\u5df2\u5bf9\u52a0\u5bc6\u7b97\u6cd5\u7684\u7ed9\u5b9a\u5b9e\u73b0\u8fdb\u884c\u4e86\u5ba1\u67e5\uff0c\u4ee5\u786e\u4fdd\u7b26\u5408\u6a21\u5757\u89c4\u8303\u3001\u52a0\u5bc6\u6a21\u5757\u7aef\u53e3\u548c\u63a5\u53e3;\u89d2\u8272\u3001\u670d\u52a1\u548c\u8eab\u4efd\u9a8c\u8bc1;\u6709\u9650\u72b6\u6001\u6a21\u578b;\u4eba\u8eab\u5b89\u5168;\u64cd\u4f5c\u73af\u5883;\u52a0\u5bc6\u5bc6\u94a5\u7ba1\u7406;\u7535\u78c1\u5e72\u6270/\u7535\u78c1\u517c\u5bb9\u6027\uff08EMI/EMC\uff09;\u81ea\u68c0;\u8bbe\u8ba1\u4fdd\u8bc1;\u4ee5\u53ca\u7f13\u89e3\u5176\u4ed6\u653b\u51fb\u3002","title":"FIPS 140-2"},{"location":"security/security-guide/#_143","text":"\u5728\u8bc4\u4f30\u865a\u62df\u673a\u7ba1\u7406\u7a0b\u5e8f\u5e73\u53f0\u65f6\uff0c\u8bf7\u8003\u8651\u8fd0\u884c\u865a\u62df\u673a\u7ba1\u7406\u7a0b\u5e8f\u7684\u786c\u4ef6\u7684\u53ef\u652f\u6301\u6027\u3002\u6b64\u5916\uff0c\u8bf7\u8003\u8651\u786c\u4ef6\u4e2d\u53ef\u7528\u7684\u5176\u4ed6\u529f\u80fd\uff0c\u4ee5\u53ca\u60a8\u5728 OpenStack \u90e8\u7f72\u4e2d\u9009\u62e9\u7684\u865a\u62df\u673a\u7ba1\u7406\u7a0b\u5e8f\u5982\u4f55\u652f\u6301\u8fd9\u4e9b\u529f\u80fd\u3002\u4e3a\u6b64\uff0c\u6bcf\u4e2a\u865a\u62df\u673a\u7ba1\u7406\u7a0b\u5e8f\u90fd\u6709\u81ea\u5df1\u7684\u786c\u4ef6\u517c\u5bb9\u6027\u5217\u8868 \uff08HCL\uff09\u3002\u5728\u9009\u62e9\u517c\u5bb9\u7684\u786c\u4ef6\u65f6\uff0c\u4ece\u5b89\u5168\u89d2\u5ea6\u6765\u770b\uff0c\u63d0\u524d\u4e86\u89e3\u54ea\u4e9b\u57fa\u4e8e\u786c\u4ef6\u7684\u865a\u62df\u5316\u6280\u672f\u662f\u91cd\u8981\u7684\uff0c\u8fd9\u4e00\u70b9\u5f88\u91cd\u8981\u3002 \u63cf\u8ff0 \u79d1\u6280 \u89e3\u91ca I/O MMU VT-d / AMD-Vi \u4fdd\u62a4 PCI \u76f4\u901a\u6240\u5fc5\u9700\u7684 \u82f1\u7279\u5c14\u53ef\u4fe1\u6267\u884c\u6280\u672f Intel TXT / SEM \u52a8\u6001\u8bc1\u660e\u670d\u52a1\u662f\u5fc5\u9700\u7684 PCI-SIG I/O \u865a\u62df\u5316 SR-IOV, MR-IOV, ATS \u9700\u8981\u5141\u8bb8\u5b89\u5168\u5171\u4eab PCI Express \u8bbe\u5907 \u7f51\u7edc\u865a\u62df\u5316 VT-c \u63d0\u9ad8\u865a\u62df\u673a\u7ba1\u7406\u7a0b\u5e8f\u4e0a\u7684\u7f51\u7edc I/O \u6027\u80fd","title":"\u786c\u4ef6\u95ee\u9898"},{"location":"security/security-guide/#_144","text":"\u91cd\u8981\u7684\u662f\u8981\u8ba4\u8bc6\u5230\u4f7f\u7528 Linux \u5bb9\u5668 \uff08LXC\uff09 \u6216\u88f8\u673a\u7cfb\u7edf\u4e0e\u4f7f\u7528 KVM \u7b49\u865a\u62df\u673a\u7ba1\u7406\u7a0b\u5e8f\u4e4b\u95f4\u7684\u533a\u522b\u3002\u5177\u4f53\u6765\u8bf4\uff0c\u672c\u5b89\u5168\u6307\u5357\u7684\u91cd\u70b9\u4e3b\u8981\u57fa\u4e8e\u62e5\u6709\u865a\u62df\u673a\u7ba1\u7406\u7a0b\u5e8f\u548c\u865a\u62df\u5316\u5e73\u53f0\u3002\u4f46\u662f\uff0c\u5982\u679c\u60a8\u7684\u5b9e\u73b0\u9700\u8981\u4f7f\u7528\u88f8\u673a\u6216 LXC \u73af\u5883\uff0c\u5219\u5fc5\u987b\u6ce8\u610f\u8be5\u73af\u5883\u90e8\u7f72\u65b9\u9762\u7684\u7279\u6b8a\u5dee\u5f02\u3002 
\u5728\u91cd\u65b0\u9884\u914d\u4e4b\u524d\uff0c\u8bf7\u786e\u4fdd\u6700\u7ec8\u7528\u6237\u5df2\u6b63\u786e\u6e05\u7406\u8282\u70b9\u7684\u6570\u636e\u3002\u6b64\u5916\uff0c\u5728\u91cd\u7528\u8282\u70b9\u4e4b\u524d\uff0c\u5fc5\u987b\u4fdd\u8bc1\u786c\u4ef6\u672a\u88ab\u7be1\u6539\u6216\u4ee5\u5176\u4ed6\u65b9\u5f0f\u53d7\u5230\u635f\u5bb3\u3002 \u6ce8\u610f \u867d\u7136OpenStack\u6709\u4e00\u4e2a\u88f8\u673a\u9879\u76ee\uff0c\u4f46\u5bf9\u8fd0\u884c\u88f8\u673a\u7684\u7279\u6b8a\u5b89\u5168\u5f71\u54cd\u7684\u8ba8\u8bba\u8d85\u51fa\u4e86\u672c\u4e66\u7684\u8303\u56f4\u3002 \u7531\u4e8e\u4e66\u672c\u51b2\u523a\u7684\u65f6\u95f4\u9650\u5236\uff0c\u8be5\u56e2\u961f\u9009\u62e9\u5728\u6211\u4eec\u7684\u793a\u4f8b\u5b9e\u73b0\u548c\u67b6\u6784\u4e2d\u4f7f\u7528 KVM \u4f5c\u4e3a\u865a\u62df\u673a\u7ba1\u7406\u7a0b\u5e8f\u3002 \u6ce8\u610f \u6709\u4e00\u4e2a\u5173\u4e8e\u5728\u8ba1\u7b97\u4e2d\u4f7f\u7528 LXC \u7684 OpenStack \u5b89\u5168\u8bf4\u660e\u3002","title":"\u865a\u62df\u673a\u7ba1\u7406\u7a0b\u5e8f\u4e0e\u88f8\u673a"},{"location":"security/security-guide/#hypervisor","text":"\u8bb8\u591a\u865a\u62df\u673a\u76d1\u63a7\u7a0b\u5e8f\u4f7f\u7528\u5185\u5b58\u4f18\u5316\u6280\u672f\u5c06\u5185\u5b58\u8fc7\u91cf\u4f7f\u7528\u5230\u6765\u5bbe\u865a\u62df\u673a\u3002\u8fd9\u662f\u4e00\u9879\u6709\u7528\u7684\u529f\u80fd\uff0c\u53ef\u7528\u4e8e\u90e8\u7f72\u975e\u5e38\u5bc6\u96c6\u7684\u8ba1\u7b97\u7fa4\u96c6\u3002\u5b9e\u73b0\u6b64\u76ee\u7684\u7684\u4e00\u79cd\u65b9\u6cd5\u662f\u901a\u8fc7\u91cd\u590d\u6570\u636e\u6d88\u9664\u6216\u5171\u4eab\u5185\u5b58\u9875\u3002\u5f53\u4e24\u4e2a\u865a\u62df\u673a\u5728\u5185\u5b58\u4e2d\u5177\u6709\u76f8\u540c\u7684\u6570\u636e\u65f6\uff0c\u8ba9\u5b83\u4eec\u5f15\u7528\u76f8\u540c\u7684\u5185\u5b58\u662f\u6709\u597d\u5904\u7684\u3002 \u901a\u5e38\uff0c\u8fd9\u662f\u901a\u8fc7\u5199\u5165\u65f6\u590d\u5236 \uff08COW\uff09 \u673a\u5236\u5b9e\u73b0\u7684\u3002\u8fd9\u4e9b\u673a\u5236\u5df2\u88ab\u8bc1\u660e\u5bb9\u6613\u53d7\u5230\u4fa7\u4fe1\u9053\u653b\u51fb\uff0c\u5176\u4e2d\u4e00\u4e2a VM \u53ef\u4ee5\u63a8\u65ad\u51fa\u53e6\u4e00\u4e2a VM \u7684\u72b6\u6001\uff0c\u5e76\u4e14\u53ef\u80fd\u4e0d\u9002\u7528\u4e8e\u5e76\u975e\u6240\u6709\u79df\u6237\u90fd\u53d7\u4fe1\u4efb\u6216\u5171\u4eab\u76f8\u540c\u4fe1\u4efb\u7ea7\u522b\u7684\u591a\u79df\u6237\u73af\u5883\u3002","title":"Hypervisor \u5185\u5b58\u4f18\u5316"},{"location":"security/security-guide/#kvm","text":"\u5728\u7248\u672c 2.6.32 \u4e2d\u5f15\u5165\u5230 Linux \u5185\u6838\u4e2d\uff0c\u5185\u6838\u76f8\u540c\u9875\u5408\u5e76 \uff08KSM\uff09 \u5728 Linux \u8fdb\u7a0b\u4e4b\u95f4\u6574\u5408\u4e86\u76f8\u540c\u7684\u5185\u5b58\u9875\u3002\u7531\u4e8e KVM \u865a\u62df\u673a\u7ba1\u7406\u7a0b\u5e8f\u4e0b\u7684\u6bcf\u4e2a\u5ba2\u6237\u673a\u865a\u62df\u673a\u90fd\u5728\u81ea\u5df1\u7684\u8fdb\u7a0b\u4e2d\u8fd0\u884c\uff0c\u56e0\u6b64 KSM \u53ef\u7528\u4e8e\u4f18\u5316\u865a\u62df\u673a\u4e4b\u95f4\u7684\u5185\u5b58\u4f7f\u7528\u3002","title":"KVM \u5185\u6838\u540c\u9875\u5408\u5e76"},{"location":"security/security-guide/#xen","text":"XenServer 5.6 \u5305\u542b\u4e00\u4e2a\u540d\u4e3a\u900f\u660e\u9875\u9762\u5171\u4eab \uff08TPS\uff09 \u7684\u5185\u5b58\u8fc7\u91cf\u4f7f\u7528\u529f\u80fd\u3002TPS \u626b\u63cf 4 KB \u533a\u5757\u4e2d\u7684\u5185\u5b58\u4ee5\u67e5\u627e\u4efb\u4f55\u91cd\u590d\u9879\u3002\u627e\u5230\u540e\uff0cXen \u865a\u62df\u673a\u76d1\u89c6\u5668 \uff08VMM\uff09 
\u5c06\u4e22\u5f03\u5176\u4e2d\u4e00\u4e2a\u91cd\u590d\u9879\uff0c\u5e76\u8bb0\u5f55\u7b2c\u4e8c\u4e2a\u526f\u672c\u7684\u5f15\u7528\u3002","title":"XEN \u900f\u660e\u9875\u9762\u5171\u4eab"},{"location":"security/security-guide/#_145","text":"\u4f20\u7edf\u4e0a\uff0c\u5185\u5b58\u91cd\u590d\u6570\u636e\u6d88\u9664\u7cfb\u7edf\u5bb9\u6613\u53d7\u5230\u4fa7\u4fe1\u9053\u653b\u51fb\u3002KSM \u548c TPS \u90fd\u5df2\u88ab\u8bc1\u660e\u5bb9\u6613\u53d7\u5230\u67d0\u79cd\u5f62\u5f0f\u7684\u653b\u51fb\u3002\u5728\u5b66\u672f\u7814\u7a76\u4e2d\uff0c\u653b\u51fb\u8005\u80fd\u591f\u901a\u8fc7\u5206\u6790\u653b\u51fb\u8005\u865a\u62df\u673a\u4e0a\u7684\u5185\u5b58\u8bbf\u95ee\u65f6\u95f4\u6765\u8bc6\u522b\u76f8\u90bb\u865a\u62df\u673a\u4e0a\u8fd0\u884c\u7684\u8f6f\u4ef6\u5305\u548c\u7248\u672c\uff0c\u4ee5\u53ca\u8f6f\u4ef6\u4e0b\u8f7d\u548c\u5176\u4ed6\u654f\u611f\u4fe1\u606f\u3002 \u5982\u679c\u4e91\u90e8\u7f72\u9700\u8981\u5f3a\u79df\u6237\u5206\u79bb\uff08\u5982\u516c\u6709\u4e91\u548c\u67d0\u4e9b\u79c1\u6709\u4e91\u7684\u60c5\u51b5\uff09\uff0c\u90e8\u7f72\u4eba\u5458\u5e94\u8003\u8651\u7981\u7528 TPS \u548c KSM \u5185\u5b58\u4f18\u5316\u3002","title":"\u5185\u5b58\u4f18\u5316\u7684\u5b89\u5168\u6ce8\u610f\u4e8b\u9879"},{"location":"security/security-guide/#_146","text":"\u9009\u62e9\u865a\u62df\u673a\u7ba1\u7406\u7a0b\u5e8f\u5e73\u53f0\u65f6\u8981\u8003\u8651\u7684\u53e6\u4e00\u4ef6\u4e8b\u662f\u7279\u5b9a\u5b89\u5168\u529f\u80fd\u7684\u53ef\u7528\u6027\u3002\u7279\u522b\u662f\u529f\u80fd\u3002\u4f8b\u5982\uff0cXen Server \u7684 XSM \u6216 Xen \u5b89\u5168\u6a21\u5757\u3001sVirt\u3001Intel TXT \u6216 AppArmor\u3002 \u4e0b\u8868\u6309\u5e38\u89c1\u865a\u62df\u673a\u7ba1\u7406\u7a0b\u5e8f\u5e73\u53f0\u5217\u51fa\u4e86\u8fd9\u4e9b\u529f\u80fd\u3002 XSM sVirt TXT AppArmor cgroups MAC \u7b56\u7565 KVM X X X X X Xen X X ESXi X Hyper-V \u6ce8\u610f \u6b64\u8868\u4e2d\u7684\u529f\u80fd\u53ef\u80fd\u4e0d\u9002\u7528\u4e8e\u6240\u6709\u865a\u62df\u673a\u7ba1\u7406\u7a0b\u5e8f\uff0c\u4e5f\u53ef\u80fd\u65e0\u6cd5\u5728\u865a\u62df\u673a\u7ba1\u7406\u7a0b\u5e8f\u4e4b\u95f4\u76f4\u63a5\u6620\u5c04\u3002","title":"\u5176\u4ed6\u5b89\u5168\u529f\u80fd"},{"location":"security/security-guide/#_147","text":"Sunar\u3001Eisenbarth\u3001Inci\u3001Gorka Irazoqui Apecechea\u3002\u5bf9 Xen \u548c VMware \u8fdb\u884c\u7ec6\u7c92\u5ea6\u8de8\u865a\u62df\u673a\u653b\u51fb\u662f\u53ef\u80fd\u7684\uff012014\u3002 https://eprint.iacr.org/2014/248.pfd Artho\u3001Yagi\u3001Iijima\u3001Kuniyasu Suzaki\u3002\u5185\u5b58\u91cd\u590d\u6570\u636e\u5220\u9664\u5bf9\u5ba2\u6237\u673a\u64cd\u4f5c\u7cfb\u7edf\u7684\u5a01\u80c1\u30022011 \u5e74\u3002https://staff.aist.go.jp/c.artho/papers/EuroSec2011-suzaki.pdf KVM\uff1a\u57fa\u4e8e\u5185\u6838\u7684\u865a\u62df\u673a\u3002\u5185\u6838\u76f8\u540c\u9875\u5408\u5e76\u30022010\u3002http://www.linux-kvm.org/page/KSM Xen \u9879\u76ee\uff0cXen \u5b89\u5168\u6a21\u5757\uff1aXSM-FLASK\u30022014\u3002 http://wiki.xen.org/wiki/Xen_Security_Modules_:_XSM-FLASK SELinux \u9879\u76ee\uff0cSVirt\u30022011\u3002 http://selinuxproject.org/page/SVirt Intel.com\uff0c\u91c7\u7528\u82f1\u7279\u5c14\u53ef\u4fe1\u6267\u884c\u6280\u672f \uff08Intel TXT\uff09 \u7684\u53ef\u4fe1\u8ba1\u7b97\u6c60\u3002http://www.intel.com/txt AppArmor.net\uff0cAppArmor \u4e3b\u9875\u30022011\u3002 http://wiki.apparmor.net/index.php/Main_Page Kernel.org\uff0cCGroups\u30022004\u3002https://www.kernel.org/doc/Documentation/cgroup-v1/cgroups.txt 
\u8ba1\u7b97\u673a\u5b89\u5168\u8d44\u6e90\u4e2d\u5fc3\u3002\u5b8c\u6574\u865a\u62df\u5316\u6280\u672f\u5b89\u5168\u6307\u5357\u30022011\u3002 http://csrc.nist.gov/publications/nistpubs/800-125/SP800-125-final.pdf \u56fd\u5bb6\u4fe1\u606f\u4fdd\u969c\u4f19\u4f34\u5173\u7cfb\uff0c\u56fd\u5bb6\u5b89\u5168\u7535\u4fe1\u548c\u4fe1\u606f\u7cfb\u7edf\u5b89\u5168\u653f\u7b56\u30022003\u3002http://www.niap-ccevs.org/cc-scheme/nstissp_11_revised_factsheet.pdf","title":"\u53c2\u8003\u4e66\u76ee"},{"location":"security/security-guide/#_148","text":"\u5728\u672c\u7ae0\u7684\u5f00\u5934\uff0c\u6211\u4eec\u5c06\u8ba8\u8bba\u5b9e\u4f8b\u5bf9\u7269\u7406\u548c\u865a\u62df\u786c\u4ef6\u7684\u4f7f\u7528\u3001\u76f8\u5173\u7684\u5b89\u5168\u98ce\u9669\u4ee5\u53ca\u7f13\u89e3\u8fd9\u4e9b\u98ce\u9669\u7684\u4e00\u4e9b\u5efa\u8bae\u3002\u7136\u540e\uff0c\u6211\u4eec\u5c06\u8ba8\u8bba\u5982\u4f55\u4f7f\u7528\u5b89\u5168\u52a0\u5bc6\u865a\u62df\u5316\u6280\u672f\u6765\u52a0\u5bc6\u652f\u6301\u8be5\u6280\u672f\u7684\u57fa\u4e8e AMD \u7684\u673a\u5668\u4e0a\u7684\u865a\u62df\u673a\u7684\u5185\u5b58\u3002\u5728\u672c\u7ae0\u7684\u6700\u540e\uff0c\u6211\u4eec\u5c06\u8ba8\u8bba sVirt\uff0c\u8fd9\u662f\u4e00\u4e2a\u5f00\u6e90\u9879\u76ee\uff0c\u7528\u4e8e\u5c06 SELinux \u5f3a\u5236\u8bbf\u95ee\u63a7\u5236\u4e0e\u865a\u62df\u5316\u7ec4\u4ef6\u96c6\u6210\u3002","title":"\u52a0\u56fa\u865a\u62df\u5316\u5c42"},{"location":"security/security-guide/#pci","text":"\u8bb8\u591a\u865a\u62df\u673a\u7ba1\u7406\u7a0b\u5e8f\u90fd\u63d0\u4f9b\u4e00\u79cd\u79f0\u4e3a PCI \u76f4\u901a\u7684\u529f\u80fd\u3002\u8fd9\u5141\u8bb8\u5b9e\u4f8b\u76f4\u63a5\u8bbf\u95ee\u8282\u70b9\u4e0a\u7684\u786c\u4ef6\u3002\u4f8b\u5982\uff0c\u8fd9\u53ef\u7528\u4e8e\u5141\u8bb8\u5b9e\u4f8b\u8bbf\u95ee\u63d0\u4f9b\u8ba1\u7b97\u7edf\u4e00\u8bbe\u5907\u67b6\u6784 \uff08CUDA\uff09 \u4ee5\u5b9e\u73b0\u9ad8\u6027\u80fd\u8ba1\u7b97\u7684\u89c6\u9891\u5361\u6216 GPU\u3002\u6b64\u529f\u80fd\u5b58\u5728\u4e24\u79cd\u7c7b\u578b\u7684\u5b89\u5168\u98ce\u9669\uff1a\u76f4\u63a5\u5185\u5b58\u8bbf\u95ee\u548c\u786c\u4ef6\u611f\u67d3\u3002 \u76f4\u63a5\u5185\u5b58\u8bbf\u95ee \uff08DMA\uff09 \u662f\u4e00\u79cd\u529f\u80fd\uff0c\u5b83\u5141\u8bb8\u67d0\u4e9b\u786c\u4ef6\u8bbe\u5907\u8bbf\u95ee\u4e3b\u673a\u4e2d\u7684\u4efb\u610f\u7269\u7406\u5185\u5b58\u5730\u5740\u3002\u89c6\u9891\u5361\u901a\u5e38\u5177\u6709\u6b64\u529f\u80fd\u3002\u4f46\u662f\uff0c\u4e0d\u5e94\u5411\u5b9e\u4f8b\u6388\u4e88\u4efb\u610f\u7269\u7406\u5185\u5b58\u8bbf\u95ee\u6743\u9650\uff0c\u56e0\u4e3a\u8fd9\u5c06\u4f7f\u5176\u80fd\u591f\u5168\u9762\u4e86\u89e3\u4e3b\u673a\u7cfb\u7edf\u548c\u5728\u540c\u4e00\u8282\u70b9\u4e0a\u8fd0\u884c\u7684\u5176\u4ed6\u5b9e\u4f8b\u3002\u5728\u8fd9\u4e9b\u60c5\u51b5\u4e0b\uff0c\u786c\u4ef6\u4f9b\u5e94\u5546\u4f7f\u7528\u8f93\u5165/\u8f93\u51fa\u5185\u5b58\u7ba1\u7406\u5355\u5143 \uff08IOMMU\uff09 \u6765\u7ba1\u7406 DMA \u8bbf\u95ee\u3002\u6211\u4eec\u5efa\u8bae\u4e91\u67b6\u6784\u5e08\u5e94\u786e\u4fdd\u865a\u62df\u673a\u76d1\u63a7\u7a0b\u5e8f\u914d\u7f6e\u4e3a\u4f7f\u7528\u6b64\u786c\u4ef6\u529f\u80fd\u3002 KVM: KVM\uff1a \u5982\u4f55\u5728 KVM \u4e2d\u4f7f\u7528 VT-d \u5206\u914d\u8bbe\u5907 Xen: Xen\uff1a Xen VTd Howto Xen VTd \u8d34\u58eb\u6307\u5357 \u6ce8\u610f IOMMU \u529f\u80fd\u7531 Intel \u4f5c\u4e3a VT-d \u9500\u552e\uff0c\u7531 AMD \u4ee5 AMD-Vi \u9500\u552e\u3002 
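Since the PCI passthrough guidance recommends that the hypervisor be configured to use the IOMMU (VT-d / AMD-Vi) so that DMA from assigned devices is contained, a short sketch of enabling and verifying this on a KVM compute host may help. The kernel parameters and sysfs path below are standard Linux; the grub file location and regeneration command are distribution-specific assumptions, not fixed requirements.

```console
# Hedged sketch: enabling and verifying the IOMMU on a KVM compute node
# before allowing PCI passthrough. Grub paths/commands vary by distribution.

# 1. Add the vendor-appropriate parameter to the kernel command line,
#    for example in /etc/default/grub:
#      GRUB_CMDLINE_LINUX="... intel_iommu=on"   # Intel VT-d
#      GRUB_CMDLINE_LINUX="... amd_iommu=on"     # AMD-Vi
#    then regenerate the grub configuration and reboot the node.

# 2. After reboot, confirm the kernel enabled DMA remapping.
$ dmesg | grep -iE 'DMAR|IOMMU'

# 3. Devices should now appear in isolated IOMMU groups; an empty directory
#    usually means the IOMMU is still disabled.
$ ls /sys/kernel/iommu_groups/
```

Only hosts that pass these checks should have PCI passthrough enabled, and, per the guidance above, passthrough should remain disabled everywhere else by default.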
\u5f53\u5b9e\u4f8b\u5bf9\u56fa\u4ef6\u6216\u8bbe\u5907\u7684\u67d0\u4e9b\u5176\u4ed6\u90e8\u5206\u8fdb\u884c\u6076\u610f\u4fee\u6539\u65f6\uff0c\u5c31\u4f1a\u53d1\u751f\u786c\u4ef6\u611f\u67d3\u3002\u7531\u4e8e\u6b64\u8bbe\u5907\u7531\u5176\u4ed6\u5b9e\u4f8b\u6216\u4e3b\u673a\u64cd\u4f5c\u7cfb\u7edf\u4f7f\u7528\uff0c\u56e0\u6b64\u6076\u610f\u4ee3\u7801\u53ef\u80fd\u4f1a\u4f20\u64ad\u5230\u8fd9\u4e9b\u7cfb\u7edf\u4e2d\u3002\u6700\u7ec8\u7ed3\u679c\u662f\uff0c\u4e00\u4e2a\u5b9e\u4f8b\u53ef\u4ee5\u5728\u5176\u5b89\u5168\u57df\u4e4b\u5916\u8fd0\u884c\u4ee3\u7801\u3002\u8fd9\u662f\u4e00\u4e2a\u91cd\u5927\u7684\u6f0f\u6d1e\uff0c\u56e0\u4e3a\u91cd\u7f6e\u7269\u7406\u786c\u4ef6\u7684\u72b6\u6001\u6bd4\u91cd\u7f6e\u865a\u62df\u786c\u4ef6\u66f4\u96be\uff0c\u5e76\u4e14\u53ef\u80fd\u5bfc\u81f4\u989d\u5916\u7684\u66b4\u9732\uff0c\u4f8b\u5982\u8bbf\u95ee\u7ba1\u7406\u7f51\u7edc\u3002 \u786c\u4ef6\u611f\u67d3\u95ee\u9898\u7684\u89e3\u51b3\u65b9\u6848\u662f\u7279\u5b9a\u4e8e\u57df\u7684\u3002\u8be5\u7b56\u7565\u662f\u786e\u5b9a\u5b9e\u4f8b\u5982\u4f55\u4fee\u6539\u786c\u4ef6\u72b6\u6001\uff0c\u7136\u540e\u786e\u5b9a\u5728\u4f7f\u7528\u786c\u4ef6\u5b8c\u6210\u5b9e\u4f8b\u65f6\u5982\u4f55\u91cd\u7f6e\u4efb\u4f55\u4fee\u6539\u3002\u4f8b\u5982\uff0c\u4e00\u79cd\u9009\u62e9\u53ef\u80fd\u662f\u5728\u4f7f\u7528\u540e\u91cd\u65b0\u5237\u65b0\u56fa\u4ef6\u3002\u9700\u8981\u5e73\u8861\u786c\u4ef6\u5bff\u547d\u548c\u5b89\u5168\u6027\uff0c\u56e0\u4e3a\u67d0\u4e9b\u56fa\u4ef6\u5728\u5927\u91cf\u5199\u5165\u540e\u4f1a\u51fa\u73b0\u6545\u969c\u3002\u5b89\u5168\u5f15\u5bfc\u4e2d\u6240\u8ff0\u7684 TPM \u6280\u672f\u662f\u4e00\u79cd\u7528\u4e8e\u68c0\u6d4b\u672a\u7ecf\u6388\u6743\u7684\u56fa\u4ef6\u66f4\u6539\u7684\u89e3\u51b3\u65b9\u6848\u3002\u65e0\u8bba\u9009\u62e9\u54ea\u79cd\u7b56\u7565\uff0c\u90fd\u5fc5\u987b\u4e86\u89e3\u4e0e\u6b64\u7c7b\u786c\u4ef6\u5171\u4eab\u76f8\u5173\u7684\u98ce\u9669\uff0c\u4ee5\u4fbf\u9488\u5bf9\u7ed9\u5b9a\u7684\u90e8\u7f72\u65b9\u6848\u9002\u5f53\u7f13\u89e3\u8fd9\u4e9b\u98ce\u9669\u3002 \u7531\u4e8e\u4e0e PCI \u76f4\u901a\u76f8\u5173\u7684\u98ce\u9669\u548c\u590d\u6742\u6027\uff0c\u9ed8\u8ba4\u60c5\u51b5\u4e0b\u5e94\u7981\u7528\u5b83\u3002\u5982\u679c\u4e3a\u7279\u5b9a\u9700\u6c42\u542f\u7528\uff0c\u5219\u9700\u8981\u5236\u5b9a\u9002\u5f53\u7684\u6d41\u7a0b\uff0c\u4ee5\u786e\u4fdd\u786c\u4ef6\u5728\u91cd\u65b0\u53d1\u884c\u4e4b\u524d\u662f\u5e72\u51c0\u7684\u3002","title":"\u7269\u7406\u786c\u4ef6\uff08PCI\u76f4\u901a\uff09"},{"location":"security/security-guide/#qemu","text":"\u8fd0\u884c\u865a\u62df\u673a\u65f6\uff0c\u865a\u62df\u786c\u4ef6\u662f\u4e3a\u865a\u62df\u673a\u63d0\u4f9b\u786c\u4ef6\u63a5\u53e3\u7684\u8f6f\u4ef6\u5c42\u3002\u5b9e\u4f8b\u4f7f\u7528\u6b64\u529f\u80fd\u63d0\u4f9b\u53ef\u80fd\u9700\u8981\u7684\u7f51\u7edc\u3001\u5b58\u50a8\u3001\u89c6\u9891\u548c\u5176\u4ed6\u8bbe\u5907\u3002\u8003\u8651\u5230\u8fd9\u4e00\u70b9\uff0c\u73af\u5883\u4e2d\u7684\u5927\u591a\u6570\u5b9e\u4f8b\u5c06\u4e13\u95e8\u4f7f\u7528\u865a\u62df\u786c\u4ef6\uff0c\u5c11\u6570\u5b9e\u4f8b\u9700\u8981\u76f4\u63a5\u786c\u4ef6\u8bbf\u95ee\u3002\u4e3b\u8981\u7684\u5f00\u6e90\u865a\u62df\u673a\u7ba1\u7406\u7a0b\u5e8f\u4f7f\u7528 QEMU \u6765\u5b9e\u73b0\u6b64\u529f\u80fd\u3002\u867d\u7136 QEMU \u6ee1\u8db3\u4e86\u5bf9\u865a\u62df\u5316\u5e73\u53f0\u7684\u91cd\u8981\u9700\u6c42\uff0c\u4f46\u5b83\u5df2\u88ab\u8bc1\u660e\u662f\u4e00\u4e2a\u975e\u5e38\u5177\u6709\u6311\u6218\u6027\u7684\u8f6f\u4ef6\u9879\u76ee\u3002QEMU 
\u4e2d\u7684\u8bb8\u591a\u529f\u80fd\u90fd\u662f\u901a\u8fc7\u5927\u591a\u6570\u5f00\u53d1\u4eba\u5458\u96be\u4ee5\u7406\u89e3\u7684\u4f4e\u7ea7\u4ee3\u7801\u5b9e\u73b0\u7684\u3002QEMU \u865a\u62df\u5316\u7684\u786c\u4ef6\u5305\u62ec\u8bb8\u591a\u4f20\u7edf\u8bbe\u5907\uff0c\u8fd9\u4e9b\u8bbe\u5907\u6709\u81ea\u5df1\u7684\u4e00\u5957\u602a\u7656\u3002\u7efc\u4e0a\u6240\u8ff0\uff0cQEMU \u4e00\u76f4\u662f\u8bb8\u591a\u5b89\u5168\u95ee\u9898\u7684\u6839\u6e90\uff0c\u5305\u62ec\u865a\u62df\u673a\u7ba1\u7406\u7a0b\u5e8f\u7a81\u7834\u653b\u51fb\u3002 \u91c7\u53d6\u79ef\u6781\u4e3b\u52a8\u7684\u63aa\u65bd\u6765\u5f3a\u5316 QEMU \u975e\u5e38\u91cd\u8981\u3002\u6211\u4eec\u5efa\u8bae\u6267\u884c\u4e09\u4e2a\u5177\u4f53\u6b65\u9aa4\uff1a \u6700\u5c0f\u5316\u4ee3\u7801\u5e93\u3002 \u4f7f\u7528\u7f16\u8bd1\u5668\u5f3a\u5316\u3002 \u4f7f\u7528\u5f3a\u5236\u8bbf\u95ee\u63a7\u5236\uff0c\u4f8b\u5982 sVirt\u3001SELinux \u6216 AppArmor\u3002 \u786e\u4fdd\u60a8\u7684 iptables \u5177\u6709\u8fc7\u6ee4\u7f51\u7edc\u6d41\u91cf\u7684\u9ed8\u8ba4\u7b56\u7565\uff0c\u5e76\u8003\u8651\u68c0\u67e5\u73b0\u6709\u89c4\u5219\u96c6\u4ee5\u4e86\u89e3\u6bcf\u4e2a\u89c4\u5219\u5e76\u786e\u5b9a\u662f\u5426\u9700\u8981\u6269\u5c55\u8be5\u7b56\u7565\u3002","title":"\u865a\u62df\u786c\u4ef6 \uff08QEMU\uff09"},{"location":"security/security-guide/#qemu_1","text":"\u6211\u4eec\u5efa\u8bae\u901a\u8fc7\u4ece\u7cfb\u7edf\u4e2d\u5220\u9664\u672a\u4f7f\u7528\u7684\u7ec4\u4ef6\u6765\u6700\u5c0f\u5316 QEMU \u4ee3\u7801\u5e93\u3002QEMU \u4e3a\u8bb8\u591a\u4e0d\u540c\u7684\u865a\u62df\u786c\u4ef6\u8bbe\u5907\u63d0\u4f9b\u652f\u6301\uff0c\u4f46\u7ed9\u5b9a\u5b9e\u4f8b\u53ea\u9700\u8981\u5c11\u91cf\u8bbe\u5907\u3002\u6700\u5e38\u89c1\u7684\u786c\u4ef6\u8bbe\u5907\u662f virtio \u8bbe\u5907\u3002\u67d0\u4e9b\u65e7\u5b9e\u4f8b\u5c06\u9700\u8981\u8bbf\u95ee\u7279\u5b9a\u786c\u4ef6\uff0c\u8fd9\u4e9b\u786c\u4ef6\u53ef\u4ee5\u4f7f\u7528 glance \u5143\u6570\u636e\u6307\u5b9a\uff1a $ glance image-update \\ --property hw_disk_bus=ide \\ --property hw_cdrom_bus=ide \\ --property hw_vif_model=e1000 \\ f16-x86_64-openstack-sda \u4e91\u67b6\u6784\u5e08\u5e94\u51b3\u5b9a\u5411\u4e91\u7528\u6237\u63d0\u4f9b\u54ea\u4e9b\u8bbe\u5907\u3002\u4efb\u4f55\u4e0d\u9700\u8981\u7684\u4e1c\u897f\u90fd\u5e94\u8be5\u4ece QEMU \u4e2d\u5220\u9664\u3002\u6b64\u6b65\u9aa4\u9700\u8981\u5728\u4fee\u6539\u4f20\u9012\u7ed9 QEMU \u914d\u7f6e\u811a\u672c\u7684\u9009\u9879\u540e\u91cd\u65b0\u7f16\u8bd1 QEMU\u3002\u8981\u83b7\u5f97\u6700\u65b0\u9009\u9879\u7684\u5b8c\u6574\u5217\u8868\uff0c\u53ea\u9700\u4ece QEMU \u6e90\u76ee\u5f55\u4e2d\u8fd0\u884c ./configure --help\u3002\u786e\u5b9a\u90e8\u7f72\u6240\u9700\u7684\u5185\u5bb9\uff0c\u5e76\u7981\u7528\u5176\u4f59\u9009\u9879\u3002","title":"\u6700\u5c0f\u5316 QEMU \u4ee3\u7801\u5e93"},{"location":"security/security-guide/#_149","text":"\u4f7f\u7528\u7f16\u8bd1\u5668\u5f3a\u5316\u9009\u9879\u5f3a\u5316 QEMU\u3002\u73b0\u4ee3\u7f16\u8bd1\u5668\u63d0\u4f9b\u4e86\u591a\u79cd\u7f16\u8bd1\u65f6\u9009\u9879\uff0c\u4ee5\u63d0\u9ad8\u751f\u6210\u7684\u4e8c\u8fdb\u5236\u6587\u4ef6\u7684\u5b89\u5168\u6027\u3002\u8fd9\u4e9b\u529f\u80fd\u5305\u62ec\u53ea\u8bfb\u91cd\u5b9a\u4f4d \uff08RELRO\uff09\u3001\u5806\u6808\u91d1\u4e1d\u96c0\u3001\u4ece\u4e0d\u6267\u884c \uff08NX\uff09\u3001\u4f4d\u7f6e\u65e0\u5173\u53ef\u6267\u884c\u6587\u4ef6 \uff08PIE\uff09 \u548c\u5730\u5740\u7a7a\u95f4\u5e03\u5c40\u968f\u673a\u5316 \uff08ASLR\uff09\u3002 \u8bb8\u591a\u73b0\u4ee3 Linux 
\u53d1\u884c\u7248\u5df2\u7ecf\u5728\u6784\u5efa\u542f\u7528\u7f16\u8bd1\u5668\u5f3a\u5316\u7684 QEMU\uff0c\u6211\u4eec\u5efa\u8bae\u5728\u7ee7\u7eed\u64cd\u4f5c\u4e4b\u524d\u9a8c\u8bc1\u73b0\u6709\u7684\u53ef\u6267\u884c\u6587\u4ef6\u3002\u53ef\u4ee5\u5e2e\u52a9\u60a8\u8fdb\u884c\u6b64\u9a8c\u8bc1\u7684\u4e00\u79cd\u5de5\u5177\u79f0\u4e3a checksec.sh RELocation \u53ea\u8bfb \uff08RELRO\uff09 \u5f3a\u5316\u53ef\u6267\u884c\u6587\u4ef6\u7684\u6570\u636e\u90e8\u5206\u3002gcc \u652f\u6301\u5b8c\u6574\u548c\u90e8\u5206 RELRO \u6a21\u5f0f\u3002\u5bf9\u4e8eQEMU\u6765\u8bf4\uff0c\u5b8c\u6574\u7684RELLO\u662f\u60a8\u7684\u6700\u4f73\u9009\u62e9\u3002\u8fd9\u5c06\u4f7f\u5168\u5c40\u504f\u79fb\u8868\u6210\u4e3a\u53ea\u8bfb\u7684\uff0c\u5e76\u5728\u751f\u6210\u7684\u53ef\u6267\u884c\u6587\u4ef6\u4e2d\u5c06\u5404\u79cd\u5185\u90e8\u6570\u636e\u90e8\u5206\u653e\u5728\u7a0b\u5e8f\u6570\u636e\u90e8\u5206\u4e4b\u524d\u3002 \u6808\u4fdd\u62a4 \u5c06\u503c\u653e\u5728\u5806\u6808\u4e0a\u5e76\u9a8c\u8bc1\u5176\u662f\u5426\u5b58\u5728\uff0c\u4ee5\u5e2e\u52a9\u9632\u6b62\u7f13\u51b2\u533a\u6ea2\u51fa\u653b\u51fb\u3002 \u4ece\u4e0d\u6267\u884c \uff08NX\uff09 \u4e5f\u79f0\u4e3a\u6570\u636e\u6267\u884c\u4fdd\u62a4 \uff08DEP\uff09\uff0c\u786e\u4fdd\u65e0\u6cd5\u6267\u884c\u53ef\u6267\u884c\u6587\u4ef6\u7684\u6570\u636e\u90e8\u5206\u3002 \u4f4d\u7f6e\u65e0\u5173\u53ef\u6267\u884c\u6587\u4ef6 \uff08PIE\uff09 \u751f\u6210\u4e00\u4e2a\u72ec\u7acb\u4e8e\u4f4d\u7f6e\u7684\u53ef\u6267\u884c\u6587\u4ef6\uff0c\u8fd9\u662f ASLR \u6240\u5fc5\u9700\u7684\u3002 \u5730\u5740\u7a7a\u95f4\u5e03\u5c40\u968f\u673a\u5316 \uff08ASLR\uff09 \u8fd9\u786e\u4fdd\u4e86\u4ee3\u7801\u548c\u6570\u636e\u533a\u57df\u7684\u653e\u7f6e\u90fd\u662f\u968f\u673a\u7684\u3002\u5f53\u4f7f\u7528 PIE \u6784\u5efa\u53ef\u6267\u884c\u6587\u4ef6\u65f6\uff0c\u7531\u5185\u6838\u542f\u7528\uff08\u6240\u6709\u73b0\u4ee3 Linux \u5185\u6838\u90fd\u652f\u6301 ASLR\uff09\u3002 \u7f16\u8bd1 QEMU \u65f6\uff0c\u5efa\u8bae\u5bf9 GCC \u4f7f\u7528\u4ee5\u4e0b\u7f16\u8bd1\u5668\u9009\u9879\uff1a CFLAGS=\"-arch x86_64 -fstack-protector-all -Wstack-protector \\ --param ssp-buffer-size=4 -pie -fPIE -ftrapv -D_FORTIFY_SOURCE=2 -O2 \\ -Wl,-z,relro,-z,now\" \u6211\u4eec\u5efa\u8bae\u5728\u7f16\u8bd1 QEMU \u53ef\u6267\u884c\u6587\u4ef6\u540e\u5bf9\u5176\u8fdb\u884c\u6d4b\u8bd5\uff0c\u4ee5\u786e\u4fdd\u7f16\u8bd1\u5668\u5f3a\u5316\u6b63\u5e38\u5de5\u4f5c\u3002 \u5927\u591a\u6570\u4e91\u90e8\u7f72\u4e0d\u4f1a\u624b\u52a8\u6784\u5efa\u8f6f\u4ef6\uff0c\u4f8b\u5982 QEMU\u3002\u6700\u597d\u4f7f\u7528\u6253\u5305\u6765\u786e\u4fdd\u8be5\u8fc7\u7a0b\u662f\u53ef\u91cd\u590d\u7684\uff0c\u5e76\u786e\u4fdd\u6700\u7ec8\u7ed3\u679c\u53ef\u4ee5\u8f7b\u677e\u5730\u90e8\u7f72\u5728\u6574\u4e2a\u4e91\u4e2d\u3002\u4e0b\u9762\u7684\u53c2\u8003\u8d44\u6599\u63d0\u4f9b\u4e86\u6709\u5173\u5c06\u7f16\u8bd1\u5668\u5f3a\u5316\u9009\u9879\u5e94\u7528\u4e8e\u73b0\u6709\u5305\u7684\u4e00\u4e9b\u5176\u4ed6\u8be6\u7ec6\u4fe1\u606f\u3002 DEB \u5c01\u88c5\uff1a \u786c\u5316\u6307\u5357 RPM \u5305\uff1a \u5982\u4f55\u521b\u5efa RPM \u5305","title":"\u7f16\u8bd1\u5668\u52a0\u56fa"},{"location":"security/security-guide/#_150","text":"\u5b89\u5168\u52a0\u5bc6\u865a\u62df\u5316 \uff08SEV\uff09 \u662f AMD \u7684\u4e00\u9879\u6280\u672f\uff0c\u5b83\u5141\u8bb8\u4f7f\u7528 VM \u552f\u4e00\u7684\u5bc6\u94a5\u5bf9 VM \u7684\u5185\u5b58\u8fdb\u884c\u52a0\u5bc6\u3002SEV \u5728 Train \u7248\u672c\u4e2d\u4f5c\u4e3a\u6280\u672f\u9884\u89c8\u7248\u63d0\u4f9b\uff0c\u5728\u67d0\u4e9b\u57fa\u4e8e AMD 
\u7684\u673a\u5668\u4e0a\u63d0\u4f9b KVM \u5ba2\u6237\u673a\uff0c\u7528\u4e8e\u8bc4\u4f30\u6280\u672f\u3002 nova \u914d\u7f6e\u6307\u5357\u7684 KVM \u865a\u62df\u673a\u7ba1\u7406\u7a0b\u5e8f\u90e8\u5206\u5305\u542b\u914d\u7f6e\u8ba1\u7b97\u673a\u548c\u865a\u62df\u673a\u7ba1\u7406\u7a0b\u5e8f\u6240\u9700\u7684\u4fe1\u606f\uff0c\u5e76\u5217\u51fa\u4e86 SEV \u7684\u51e0\u4e2a\u9650\u5236\u3002 SEV \u4e3a\u6b63\u5728\u8fd0\u884c\u7684 VM \u4f7f\u7528\u7684\u5185\u5b58\u4e2d\u7684\u6570\u636e\u63d0\u4f9b\u4fdd\u62a4\u3002\u4f46\u662f\uff0c\u867d\u7136 SEV \u4e0e OpenStack \u96c6\u6210\u7684\u7b2c\u4e00\u9636\u6bb5\u652f\u6301\u865a\u62df\u673a\u52a0\u5bc6\u5185\u5b58\uff0c\u4f46\u91cd\u8981\u7684\u662f\u5b83\u4e0d\u63d0\u4f9b SEV \u56fa\u4ef6\u63d0\u4f9b\u7684 LAUNCH_MEASURE or LAUNCH_SECRET \u529f\u80fd\u3002\u8fd9\u610f\u5473\u7740\u53d7 SEV \u4fdd\u62a4\u7684 VM \u4f7f\u7528\u7684\u6570\u636e\u53ef\u80fd\u4f1a\u53d7\u5230\u63a7\u5236\u865a\u62df\u673a\u76d1\u63a7\u7a0b\u5e8f\u7684\u6709\u52a8\u673a\u7684\u5bf9\u624b\u7684\u653b\u51fb\u3002\u4f8b\u5982\uff0c\u865a\u62df\u673a\u76d1\u63a7\u7a0b\u5e8f\u8ba1\u7b97\u673a\u4e0a\u7684\u6076\u610f\u7ba1\u7406\u5458\u53ef\u4ee5\u4e3a\u5177\u6709\u540e\u95e8\u548c\u95f4\u8c0d\u8f6f\u4ef6\u7684\u79df\u6237\u63d0\u4f9b VM \u6620\u50cf\uff0c\u8fd9\u4e9b\u540e\u95e8\u548c\u95f4\u8c0d\u8f6f\u4ef6\u80fd\u591f\u7a83\u53d6\u673a\u5bc6\uff0c\u6216\u8005\u66ff\u6362 VNC \u670d\u52a1\u5668\u8fdb\u7a0b\u4ee5\u7aa5\u63a2\u53d1\u9001\u5230 VM \u63a7\u5236\u53f0\u6216\u4ece VM \u63a7\u5236\u53f0\u53d1\u9001\u7684\u6570\u636e\uff0c\u5305\u62ec\u89e3\u9501\u5168\u78c1\u76d8\u52a0\u5bc6\u89e3\u51b3\u65b9\u6848\u7684\u5bc6\u7801\u3002 \u4e3a\u4e86\u51cf\u5c11\u6076\u610f\u7ba1\u7406\u5458\u672a\u7ecf\u6388\u6743\u8bbf\u95ee\u6570\u636e\u7684\u673a\u4f1a\uff0c\u4f7f\u7528 SEV \u65f6\u5e94\u9075\u5faa\u4ee5\u4e0b\u5b89\u5168\u505a\u6cd5\uff1a VM \u5e94\u4f7f\u7528\u5b8c\u6574\u78c1\u76d8\u52a0\u5bc6\u89e3\u51b3\u65b9\u6848\u3002 \u5e94\u5728 VM \u4e0a\u4f7f\u7528\u5f15\u5bfc\u52a0\u8f7d\u7a0b\u5e8f\u5bc6\u7801\u3002 \u6b64\u5916\uff0c\u5e94\u5c06\u6807\u51c6\u5b89\u5168\u6700\u4f73\u505a\u6cd5\u7528\u4e8e VM\uff0c\u5305\u62ec\u4ee5\u4e0b\u5185\u5bb9\uff1a VM \u5e94\u5f97\u5230\u826f\u597d\u7684\u7ef4\u62a4\uff0c\u5305\u62ec\u5b9a\u671f\u8fdb\u884c\u5b89\u5168\u626b\u63cf\u548c\u4fee\u8865\uff0c\u4ee5\u786e\u4fdd VM \u6301\u7eed\u4fdd\u6301\u5f3a\u5927\u7684\u5b89\u5168\u6001\u52bf\u3002 \u4e0e VM \u7684\u8fde\u63a5\u5e94\u4f7f\u7528\u52a0\u5bc6\u548c\u7ecf\u8fc7\u8eab\u4efd\u9a8c\u8bc1\u7684\u534f\u8bae\uff0c\u4f8b\u5982 HTTPS \u548c SSH\u3002 \u5e94\u8003\u8651\u4f7f\u7528\u5176\u4ed6\u5b89\u5168\u5de5\u5177\u548c\u6d41\u7a0b\uff0c\u5e76\u5c06\u5176\u7528\u4e8e\u9002\u5408\u6570\u636e\u654f\u611f\u5ea6\u7ea7\u522b\u7684 VM\u3002","title":"\u5b89\u5168\u52a0\u5bc6\u865a\u62df\u5316"},{"location":"security/security-guide/#_151","text":"\u7f16\u8bd1\u5668\u52a0\u56fa\u4f7f\u653b\u51fb QEMU \u8fdb\u7a0b\u53d8\u5f97\u66f4\u52a0\u56f0\u96be\u3002\u4f46\u662f\uff0c\u5982\u679c\u653b\u51fb\u8005\u5f97\u901e\uff0c\u5219\u9700\u8981\u9650\u5236\u653b\u51fb\u7684\u5f71\u54cd\u3002\u5f3a\u5236\u8bbf\u95ee\u63a7\u5236\u901a\u8fc7\u5c06 QEMU \u8fdb\u7a0b\u4e0a\u7684\u6743\u9650\u9650\u5236\u4e3a\u4ec5\u9700\u8981\u7684\u6743\u9650\u6765\u5b9e\u73b0\u6b64\u76ee\u7684\u3002\u8fd9\u53ef\u4ee5\u901a\u8fc7\u4f7f\u7528 sVirt\u3001SELinux \u6216 AppArmor \u6765\u5b9e\u73b0\u3002\u4f7f\u7528 sVirt \u65f6\uff0cSELinux 
\u914d\u7f6e\u4e3a\u5728\u5355\u72ec\u7684\u5b89\u5168\u4e0a\u4e0b\u6587\u4e0b\u8fd0\u884c\u6bcf\u4e2a QEMU \u8fdb\u7a0b\u3002AppArmor \u53ef\u4ee5\u914d\u7f6e\u4e3a\u63d0\u4f9b\u7c7b\u4f3c\u7684\u529f\u80fd\u3002\u6211\u4eec\u5728\u4ee5\u4e0b sVirt \u548c\u5b9e\u4f8b\u9694\u79bb\u90e8\u5206\u4e2d\u63d0\u4f9b\u4e86\u6709\u5173 sVirt \u548c\u5b9e\u4f8b\u9694\u79bb\u7684\u66f4\u591a\u8be6\u7ec6\u4fe1\u606f\uff1aSELinux \u548c\u865a\u62df\u5316\u3002 \u7279\u5b9a\u7684 SELinux \u7b56\u7565\u53ef\u7528\u4e8e\u8bb8\u591a OpenStack \u670d\u52a1\u3002CentOS \u7528\u6237\u53ef\u4ee5\u901a\u8fc7\u5b89\u88c5 selinux-policy \u6e90\u7801\u5305\u6765\u67e5\u770b\u8fd9\u4e9b\u7b56\u7565\u3002\u6700\u65b0\u7684\u7b56\u7565\u51fa\u73b0\u5728 Fedora \u7684 selinux-policy \u5b58\u50a8\u5e93\u4e2d\u3002rawhide-contrib \u5206\u652f\u5305\u542b\u4ee5 .te \u7ed3\u5c3e\u7684\u6587\u4ef6\uff0c\u4f8b\u5982 cinder.te \uff0c\u8fd9\u4e9b\u6587\u4ef6\u53ef\u4ee5\u5728\u8fd0\u884c SELinux \u7684\u7cfb\u7edf\u4e0a\u4f7f\u7528\u3002 OpenStack \u670d\u52a1\u7684 AppArmor \u914d\u7f6e\u6587\u4ef6\u5f53\u524d\u4e0d\u5b58\u5728\uff0c\u4f46 OpenStack-Ansible \u9879\u76ee\u901a\u8fc7\u5c06 AppArmor \u914d\u7f6e\u6587\u4ef6\u5e94\u7528\u4e8e\u8fd0\u884c OpenStack \u670d\u52a1\u7684\u6bcf\u4e2a\u5bb9\u5668\u6765\u5904\u7406\u6b64\u95ee\u9898\u3002","title":"\u5f3a\u5236\u8bbf\u95ee\u63a7\u5236"},{"location":"security/security-guide/#svirtselinux","text":"\u51ed\u501f\u72ec\u7279\u7684\u5185\u6838\u7ea7\u67b6\u6784\u548c\u56fd\u5bb6\u5b89\u5168\u5c40 \uff08NSA\uff09 \u5f00\u53d1\u7684\u5b89\u5168\u673a\u5236\uff0cKVM \u4e3a\u591a\u79df\u6237\u63d0\u4f9b\u4e86\u57fa\u7840\u9694\u79bb\u6280\u672f\u3002\u5b89\u5168\u865a\u62df\u5316 \uff08sVirt\uff09 \u6280\u672f\u7684\u53d1\u5c55\u8d77\u6e90\u4e8e 2002 \u5e74\uff0c\u662f SELinux \u5bf9\u73b0\u4ee3\u865a\u62df\u5316\u7684\u5e94\u7528\u3002SELinux \u65e8\u5728\u5e94\u7528\u57fa\u4e8e\u6807\u7b7e\u7684\u5206\u79bb\u63a7\u5236\uff0c\u73b0\u5df2\u6269\u5c55\u4e3a\u5728\u865a\u62df\u673a\u8fdb\u7a0b\u3001\u8bbe\u5907\u3001\u6570\u636e\u6587\u4ef6\u548c\u4ee3\u8868\u5b83\u4eec\u6267\u884c\u64cd\u4f5c\u7684\u7cfb\u7edf\u8fdb\u7a0b\u4e4b\u95f4\u63d0\u4f9b\u9694\u79bb\u3002 OpenStack \u7684 sVirt \u5b9e\u73b0\u65e8\u5728\u4fdd\u62a4\u865a\u62df\u673a\u7ba1\u7406\u7a0b\u5e8f\u4e3b\u673a\u548c\u865a\u62df\u673a\u514d\u53d7\u4e24\u4e2a\u4e3b\u8981\u5a01\u80c1\u5a92\u4ecb\u7684\u4fb5\u5bb3\uff1a \u865a\u62df\u673a\u76d1\u63a7\u7a0b\u5e8f\u5a01\u80c1 \u5728\u865a\u62df\u673a\u4e2d\u8fd0\u884c\u7684\u53d7\u635f\u5e94\u7528\u7a0b\u5e8f\u4f1a\u653b\u51fb\u865a\u62df\u673a\u76d1\u63a7\u7a0b\u5e8f\u4ee5\u8bbf\u95ee\u5e95\u5c42\u8d44\u6e90\u3002\u4f8b\u5982\uff0c\u5f53\u865a\u62df\u673a\u80fd\u591f\u8bbf\u95ee\u865a\u62df\u673a\u76d1\u63a7\u7a0b\u5e8f\u64cd\u4f5c\u7cfb\u7edf\u3001\u7269\u7406\u8bbe\u5907\u6216\u5176\u4ed6\u5e94\u7528\u7a0b\u5e8f\u65f6\u3002\u6b64\u5a01\u80c1\u5411\u91cf\u5b58\u5728\u76f8\u5f53\u5927\u7684\u98ce\u9669\uff0c\u56e0\u4e3a\u865a\u62df\u673a\u76d1\u63a7\u7a0b\u5e8f\u4e0a\u7684\u5165\u4fb5\u53ef\u80fd\u4f1a\u611f\u67d3\u7269\u7406\u786c\u4ef6\u5e76\u66b4\u9732\u5176\u4ed6\u865a\u62df\u673a\u548c\u7f51\u6bb5\u3002 \u865a\u62df\u673a\uff08\u591a\u79df\u6237\uff09\u5a01\u80c1 \u5728 VM 
\u4e2d\u8fd0\u884c\u7684\u53d7\u635f\u5e94\u7528\u7a0b\u5e8f\u4f1a\u653b\u51fb\u865a\u62df\u673a\u76d1\u63a7\u7a0b\u5e8f\uff0c\u4ee5\u8bbf\u95ee\u6216\u63a7\u5236\u53e6\u4e00\u4e2a\u865a\u62df\u673a\u53ca\u5176\u8d44\u6e90\u3002\u8fd9\u662f\u865a\u62df\u5316\u7279\u6709\u7684\u5a01\u80c1\u5411\u91cf\uff0c\u5b58\u5728\u76f8\u5f53\u5927\u7684\u98ce\u9669\uff0c\u56e0\u4e3a\u5927\u91cf\u865a\u62df\u673a\u6587\u4ef6\u6620\u50cf\u53ef\u80fd\u56e0\u5355\u4e2a\u5e94\u7528\u7a0b\u5e8f\u4e2d\u7684\u6f0f\u6d1e\u800c\u53d7\u5230\u635f\u5bb3\u3002\u8fd9\u79cd\u865a\u62df\u7f51\u7edc\u653b\u51fb\u662f\u4e00\u4e2a\u4e3b\u8981\u95ee\u9898\uff0c\u56e0\u4e3a\u7528\u4e8e\u4fdd\u62a4\u771f\u5b9e\u7f51\u7edc\u7684\u7ba1\u7406\u6280\u672f\u5e76\u4e0d\u76f4\u63a5\u9002\u7528\u4e8e\u865a\u62df\u73af\u5883\u3002 \u6bcf\u4e2a\u57fa\u4e8e KVM \u7684\u865a\u62df\u673a\u90fd\u662f\u4e00\u4e2a\u7531 SELinux \u6807\u8bb0\u7684\u8fdb\u7a0b\uff0c\u4ece\u800c\u6709\u6548\u5730\u5728\u6bcf\u4e2a\u865a\u62df\u673a\u5468\u56f4\u5efa\u7acb\u5b89\u5168\u8fb9\u754c\u3002\u6b64\u5b89\u5168\u8fb9\u754c\u7531 Linux \u5185\u6838\u76d1\u89c6\u548c\u5f3a\u5236\u6267\u884c\uff0c\u4ece\u800c\u9650\u5236\u865a\u62df\u673a\u8bbf\u95ee\u5176\u8fb9\u754c\u4e4b\u5916\u7684\u8d44\u6e90\uff0c\u4f8b\u5982\u4e3b\u673a\u6570\u636e\u6587\u4ef6\u6216\u5176\u4ed6 VM\u3002 \u65e0\u8bba\u865a\u62df\u673a\u5185\u8fd0\u884c\u7684\u5ba2\u6237\u673a\u64cd\u4f5c\u7cfb\u7edf\u5982\u4f55\uff0c\u90fd\u4f1a\u63d0\u4f9b sVirt \u9694\u79bb\u3002\u53ef\u4ee5\u4f7f\u7528 Linux \u6216 Windows VM\u3002\u6b64\u5916\uff0c\u8bb8\u591a Linux \u53d1\u884c\u7248\u5728\u64cd\u4f5c\u7cfb\u7edf\u4e2d\u63d0\u4f9b SELinux\uff0c\u4f7f\u865a\u62df\u673a\u80fd\u591f\u4fdd\u62a4\u5185\u90e8\u865a\u62df\u8d44\u6e90\u514d\u53d7\u5a01\u80c1\u3002","title":"sVirt\uff1aSELinux \u548c\u865a\u62df\u5316"},{"location":"security/security-guide/#_152","text":"\u57fa\u4e8e KVM \u7684\u865a\u62df\u673a\u5b9e\u4f8b\u4f7f\u7528\u5176\u81ea\u5df1\u7684 SELinux \u6570\u636e\u7c7b\u578b\u8fdb\u884c\u6807\u8bb0\uff0c\u79f0\u4e3a svirt_image_t \u3002\u5185\u6838\u7ea7\u4fdd\u62a4\u53ef\u9632\u6b62\u672a\u7ecf\u6388\u6743\u7684\u7cfb\u7edf\u8fdb\u7a0b\uff08\u5982\u6076\u610f\u8f6f\u4ef6\uff09\u64cd\u7eb5\u78c1\u76d8\u4e0a\u7684\u865a\u62df\u673a\u6620\u50cf\u6587\u4ef6\u3002\u5173\u95ed\u865a\u62df\u673a\u7535\u6e90\u540e\uff0c\u6620\u50cf\u7684\u5b58\u50a8 svirt_image_t \u65b9\u5f0f\u5982\u4e0b\u6240\u793a\uff1a system_u:object_r:svirt_image_t:SystemLow image1 system_u:object_r:svirt_image_t:SystemLow image2 system_u:object_r:svirt_image_t:SystemLow image3 system_u:object_r:svirt_image_t:SystemLow image4 \u8be5 svirt_image_t \u6807\u7b7e\u552f\u4e00\u6807\u8bc6\u78c1\u76d8\u4e0a\u7684\u56fe\u50cf\u6587\u4ef6\uff0c\u5141\u8bb8 SELinux \u7b56\u7565\u9650\u5236\u8bbf\u95ee\u3002\u5f53\u57fa\u4e8e KVM \u7684\u8ba1\u7b97\u6620\u50cf\u901a\u7535\u65f6\uff0csVirt \u4f1a\u5c06\u968f\u673a\u6570\u5b57\u6807\u8bc6\u7b26\u9644\u52a0\u5230\u6620\u50cf\u4e2d\u3002sVirt \u80fd\u591f\u4e3a\u6bcf\u4e2a\u865a\u62df\u673a\u7ba1\u7406\u7a0b\u5e8f\u8282\u70b9\u6700\u591a\u5206\u914d 524,288 \u4e2a\u865a\u62df\u673a\u7684\u6570\u5b57\u6807\u8bc6\u7b26\uff0c\u4f46\u5927\u591a\u6570 OpenStack \u90e8\u7f72\u6781\u4e0d\u53ef\u80fd\u9047\u5230\u6b64\u9650\u5236\u3002 \u6b64\u793a\u4f8b\u663e\u793a\u4e86 sVirt \u7c7b\u522b\u6807\u8bc6\u7b26\uff1a system_u:object_r:svirt_image_t:s0:c87,c520 image1 system_u:object_r:svirt_image_t:s0:419,c172 
image2","title":"\u6807\u7b7e\u548c\u7c7b\u522b"},{"location":"security/security-guide/#selinux","text":"SELinux \u7ba1\u7406\u7528\u6237\u89d2\u8272\u3002\u53ef\u4ee5\u901a\u8fc7 -Z \u6807\u5fd7\u6216\u4f7f\u7528 semanage \u547d\u4ee4\u67e5\u770b\u8fd9\u4e9b\u5185\u5bb9\u3002\u5728\u865a\u62df\u673a\u7ba1\u7406\u7a0b\u5e8f\u4e0a\uff0c\u53ea\u6709\u7ba1\u7406\u5458\u624d\u80fd\u8bbf\u95ee\u7cfb\u7edf\uff0c\u5e76\u4e14\u5e94\u8be5\u56f4\u7ed5\u7ba1\u7406\u7528\u6237\u548c\u7cfb\u7edf\u4e0a\u7684\u4efb\u4f55\u5176\u4ed6\u7528\u6237\u5177\u6709\u9002\u5f53\u7684\u4e0a\u4e0b\u6587\u3002\u6709\u5173\u66f4\u591a\u4fe1\u606f\uff0c\u8bf7\u53c2\u9605 SELinux \u7528\u6237\u6587\u6863\u3002","title":"SELinux \u7528\u6237\u548c\u89d2\u8272"},{"location":"security/security-guide/#_153","text":"\u4e3a\u4e86\u51cf\u8f7b\u7ba1\u7406 SELinux \u7684\u7ba1\u7406\u8d1f\u62c5\uff0c\u8bb8\u591a\u4f01\u4e1a Linux \u5e73\u53f0\u5229\u7528 SELinux \u5e03\u5c14\u503c\u6765\u5feb\u901f\u6539\u53d8 sVirt \u7684\u5b89\u5168\u6001\u52bf\u3002 \u57fa\u4e8e Red Hat Enterprise Linux \u7684 KVM \u90e8\u7f72\u4f7f\u7528\u4ee5\u4e0b sVirt \u5e03\u5c14\u503c\uff1a sVirt SELinux \u5e03\u5c14\u503c \u63cf\u8ff0 virt_use_common \u5141\u8bb8 virt \u4f7f\u7528\u4e32\u884c\u6216\u5e76\u884c\u901a\u4fe1\u7aef\u53e3\u3002 virt_use_fusefs \u5141\u8bb8 virt \u8bfb\u53d6 FUSE \u6302\u8f7d\u7684\u6587\u4ef6\u3002 virt_use_nfs \u5141\u8bb8 virt \u7ba1\u7406 NFS \u6302\u8f7d\u7684\u6587\u4ef6\u3002 virt_use_samba \u5141\u8bb8 virt \u7ba1\u7406 CIFS \u6302\u8f7d\u7684\u6587\u4ef6\u3002 virt_use_sanlock \u5141\u8bb8\u53d7\u9650\u7684\u865a\u62df\u8bbf\u5ba2\u4e0e sanlock \u4ea4\u4e92\u3002 virt_use_sysfs \u5141\u8bb8 virt \u7ba1\u7406\u8bbe\u5907\u914d\u7f6e \uff08PCI\uff09\u3002 virt_use_usb \u5141\u8bb8 virt \u4f7f\u7528 USB \u8bbe\u5907\u3002 virt_use_xserver \u5141\u8bb8\u865a\u62df\u673a\u4e0e X Window \u7cfb\u7edf\u4ea4\u4e92\u3002","title":"\u5e03\u5c14\u503c"},{"location":"security/security-guide/#_154","text":"\u4efb\u4f55OpenStack\u90e8\u7f72\u7684\u4e3b\u8981\u5b89\u5168\u95ee\u9898\u4e4b\u4e00\u662f\u56f4\u7ed5\u654f\u611f\u6587\u4ef6\uff08\u5982 nova.conf \u6587\u4ef6\uff09\u7684\u5b89\u5168\u6027\u548c\u63a7\u5236\u3002\u6b64\u914d\u7f6e\u6587\u4ef6\u901a\u5e38\u5305\u542b\u5728 /etc \u76ee\u5f55\u4e2d\uff0c\u5305\u542b\u8bb8\u591a\u654f\u611f\u9009\u9879\uff0c\u5305\u62ec\u914d\u7f6e\u8be6\u7ec6\u4fe1\u606f\u548c\u670d\u52a1\u5bc6\u7801\u3002\u5e94\u4e3a\u6240\u6709\u6b64\u7c7b\u654f\u611f\u6587\u4ef6\u6388\u4e88\u4e25\u683c\u7684\u6587\u4ef6\u7ea7\u6743\u9650\uff0c\u5e76\u901a\u8fc7\u6587\u4ef6\u5b8c\u6574\u6027\u76d1\u89c6 \uff08FIM\uff09 \u5de5\u5177\uff08\u5982 iNotify \u6216 Samhain\uff09\u76d1\u89c6\u66f4\u6539\u3002\u8fd9\u4e9b\u5b9e\u7528\u7a0b\u5e8f\u5c06\u83b7\u53d6\u5904\u4e8e\u5df2\u77e5\u826f\u597d\u72b6\u6001\u7684\u76ee\u6807\u6587\u4ef6\u7684\u54c8\u5e0c\u503c\uff0c\u7136\u540e\u5b9a\u671f\u83b7\u53d6\u8be5\u6587\u4ef6\u7684\u65b0\u54c8\u5e0c\u503c\uff0c\u5e76\u5c06\u5176\u4e0e\u5df2\u77e5\u826f\u597d\u7684\u54c8\u5e0c\u503c\u8fdb\u884c\u6bd4\u8f83\u3002\u5982\u679c\u53d1\u73b0\u8b66\u62a5\u88ab\u610f\u5916\u4fee\u6539\uff0c\u5219\u53ef\u4ee5\u521b\u5efa\u8b66\u62a5\u3002 \u53ef\u4ee5\u68c0\u67e5\u6587\u4ef6\u7684\u6743\u9650\uff0c\u6211\u79fb\u52a8\u5230\u6587\u4ef6\u6240\u5728\u7684\u76ee\u5f55\u5e76\u8fd0\u884c ls -lh 
\u547d\u4ee4\u3002\u8fd9\u5c06\u663e\u793a\u6709\u6743\u8bbf\u95ee\u6587\u4ef6\u7684\u6743\u9650\u3001\u6240\u6709\u8005\u548c\u7ec4\uff0c\u4ee5\u53ca\u5176\u4ed6\u4fe1\u606f\uff0c\u4f8b\u5982\u4e0a\u6b21\u4fee\u6539\u6587\u4ef6\u7684\u65f6\u95f4\u548c\u521b\u5efa\u65f6\u95f4\u3002 \u8be5 /var/lib/nova \u76ee\u5f55\u7528\u4e8e\u4fdd\u5b58\u6709\u5173\u7ed9\u5b9a\u8ba1\u7b97\u4e3b\u673a\u4e0a\u7684\u5b9e\u4f8b\u7684\u8be6\u7ec6\u4fe1\u606f\u3002\u6b64\u76ee\u5f55\u4e5f\u5e94\u88ab\u89c6\u4e3a\u654f\u611f\u76ee\u5f55\uff0c\u5e76\u5177\u6709\u4e25\u683c\u5f3a\u5236\u6267\u884c\u7684\u6587\u4ef6\u6743\u9650\u3002\u6b64\u5916\uff0c\u5e94\u5b9a\u671f\u5907\u4efd\u5b83\uff0c\u56e0\u4e3a\u5b83\u5305\u542b\u4e0e\u8be5\u4e3b\u673a\u5173\u8054\u7684\u5b9e\u4f8b\u7684\u4fe1\u606f\u548c\u5143\u6570\u636e\u3002 \u5982\u679c\u90e8\u7f72\u4e0d\u9700\u8981\u5b8c\u6574\u7684\u865a\u62df\u673a\u5907\u4efd\uff0c\u5efa\u8bae\u6392\u9664\u8be5 /var/lib/nova/instances \u76ee\u5f55\uff0c\u56e0\u4e3a\u5b83\u7684\u5927\u5c0f\u5c06\u4e0e\u8be5\u8282\u70b9\u4e0a\u8fd0\u884c\u7684\u6bcf\u4e2a VM \u7684\u603b\u7a7a\u95f4\u4e00\u6837\u5927\u3002\u5982\u679c\u90e8\u7f72\u786e\u5b9e\u9700\u8981\u5b8c\u6574 VM \u5907\u4efd\uff0c\u5219\u9700\u8981\u786e\u4fdd\u6210\u529f\u5907\u4efd\u6b64\u76ee\u5f55\u3002 \u76d1\u89c6\u662f IT \u57fa\u7840\u7ed3\u6784\u7684\u5173\u952e\u7ec4\u4ef6\uff0c\u6211\u4eec\u5efa\u8bae\u76d1\u89c6\u548c\u5206\u6790\u8ba1\u7b97\u65e5\u5fd7\u6587\u4ef6\uff0c\u4ee5\u4fbf\u53ef\u4ee5\u521b\u5efa\u6709\u610f\u4e49\u7684\u8b66\u62a5\u3002","title":"\u52a0\u56fa\u8ba1\u7b97\u90e8\u7f72"},{"location":"security/security-guide/#openstack_5","text":"\u6211\u4eec\u5efa\u8bae\u5728\u53d1\u5e03\u5b89\u5168\u95ee\u9898\u548c\u5efa\u8bae\u65f6\u53ca\u65f6\u4e86\u89e3\u5b83\u4eec\u3002OpenStack \u5b89\u5168\u95e8\u6237\u662f\u4e00\u4e2a\u4e2d\u592e\u95e8\u6237\uff0c\u53ef\u4ee5\u5728\u8fd9\u91cc\u534f\u8c03\u5efa\u8bae\u3001\u901a\u77e5\u3001\u4f1a\u8bae\u548c\u6d41\u7a0b\u3002\u6b64\u5916\uff0cOpenStack \u6f0f\u6d1e\u7ba1\u7406\u56e2\u961f \uff08VMT\uff09 \u95e8\u6237\u901a\u8fc7\u5c06 Bug \u6807\u8bb0\u4e3a\u201c\u6b64 bug \u662f\u5b89\u5168\u6f0f\u6d1e\u201d\u6765\u534f\u8c03 OpenStack \u9879\u76ee\u5185\u7684\u8865\u6551\u63aa\u65bd\uff0c\u4ee5\u53ca\u8c03\u67e5\u8d1f\u8d23\u4efb\u5730\uff08\u79c1\u4e0b\uff09\u5411 VMT \u62ab\u9732\u7684\u62a5\u544a bug \u7684\u8fc7\u7a0b\u3002VMT \u6d41\u7a0b\u9875\u9762\u4e2d\u6982\u8ff0\u4e86\u66f4\u591a\u8be6\u7ec6\u4fe1\u606f\uff0c\u5e76\u751f\u6210\u4e86 OpenStack \u5b89\u5168\u516c\u544a \uff08OSSA\uff09\u3002\u6b64 OSSA \u6982\u8ff0\u4e86\u95ee\u9898\u548c\u4fee\u590d\u7a0b\u5e8f\uff0c\u5e76\u94fe\u63a5\u5230\u539f\u59cb\u9519\u8bef\u548c\u8865\u4e01\u6258\u7ba1\u4f4d\u7f6e\u3002","title":"OpenStack \u6f0f\u6d1e\u7ba1\u7406\u56e2\u961f"},{"location":"security/security-guide/#openstack_6","text":"\u62a5\u544a\u7684\u5b89\u5168\u6f0f\u6d1e\u88ab\u53d1\u73b0\u662f\u914d\u7f6e\u9519\u8bef\u7684\u7ed3\u679c\uff0c\u6216\u8005\u4e0d\u662f\u4e25\u683c\u610f\u4e49\u4e0a\u7684 OpenStack \u7684\u4e00\u90e8\u5206\uff0c\u8fd9\u4e9b\u6f0f\u6d1e\u5c06\u88ab\u8d77\u8349\u5230 OpenStack \u5b89\u5168\u8bf4\u660e \uff08OSSN\uff09 \u4e2d\u3002\u8fd9\u4e9b\u95ee\u9898\u5305\u62ec\u914d\u7f6e\u95ee\u9898\uff0c\u4f8b\u5982\u786e\u4fdd\u8eab\u4efd\u63d0\u4f9b\u7a0b\u5e8f\u6620\u5c04\u4ee5\u53ca\u975e OpenStack\uff0c\u4f46\u5173\u952e\u95ee\u9898\uff08\u4f8b\u5982\u5f71\u54cd OpenStack \u4f7f\u7528\u7684\u5e73\u53f0\u7684 Bashbug/Ghost \u6216 Venom \u6f0f\u6d1e\uff09\u3002\u5f53\u524d\u7684 
OSSN \u96c6\u4f4d\u4e8e\u5b89\u5168\u8bf4\u660e wiki \u4e2d\u3002","title":"OpenStack \u5b89\u5168\u6ce8\u610f\u4e8b\u9879"},{"location":"security/security-guide/#openstack-dev","text":"\u6240\u6709\u9519\u8bef\u3001OSSA \u548c OSSN \u90fd\u901a\u8fc7 openstack-discuss \u90ae\u4ef6\u5217\u8868\u516c\u5f00\u53d1\u5e03\uff0c\u4e3b\u9898\u884c\u4e2d\u5e26\u6709 [security] \u4e3b\u9898\u3002\u6211\u4eec\u5efa\u8bae\u8ba2\u9605\u6b64\u5217\u8868\u4ee5\u53ca\u90ae\u4ef6\u8fc7\u6ee4\u89c4\u5219\uff0c\u4ee5\u786e\u4fdd\u4e0d\u4f1a\u9057\u6f0f OSSN\u3001OSSA \u548c\u5176\u4ed6\u91cd\u8981\u516c\u544a\u3002openstack-discuss \u90ae\u4ef6\u5217\u8868\u901a\u8fc7 OpenStack Development Mailing List \u8fdb\u884c\u7ba1\u7406\u3002openstack-discuss \u4f7f\u7528\u300a\u9879\u76ee\u56e2\u961f\u6307\u5357\u300b\u4e2d\u5b9a\u4e49\u7684\u6807\u8bb0\u3002","title":"OpenStack-dev \u90ae\u4ef6\u5217\u8868"},{"location":"security/security-guide/#_155","text":"\u5728\u5b9e\u65bdOpenStack\u65f6\uff0c\u6838\u5fc3\u51b3\u7b56\u4e4b\u4e00\u662f\u4f7f\u7528\u54ea\u4e2a\u865a\u62df\u673a\u7ba1\u7406\u7a0b\u5e8f\u3002\u6211\u4eec\u5efa\u8bae\u60a8\u4e86\u89e3\u4e0e\u60a8\u9009\u62e9\u7684\u865a\u62df\u673a\u7ba1\u7406\u7a0b\u5e8f\u76f8\u5173\u7684\u516c\u544a\u3002\u4ee5\u4e0b\u662f\u51e0\u4e2a\u5e38\u89c1\u7684\u865a\u62df\u673a\u7ba1\u7406\u7a0b\u5e8f\u5b89\u5168\u5217\u8868\uff1a Xen\uff1a http://xenbits.xen.org/xsa/ VMWare\uff1a http://blogs.vmware.com/security/ \u5176\u4ed6\uff08KVM \u7b49\uff09\uff1a http://seclists.org/oss-sec","title":"\u865a\u62df\u673a\u7ba1\u7406\u7a0b\u5e8f\u90ae\u4ef6\u5217\u8868"},{"location":"security/security-guide/#_156","text":"","title":"\u6f0f\u6d1e\u610f\u8bc6"},{"location":"security/security-guide/#openstack_7","text":"\u6211\u4eec\u5efa\u8bae\u5728\u53d1\u5e03\u5b89\u5168\u95ee\u9898\u548c\u5efa\u8bae\u65f6\u53ca\u65f6\u4e86\u89e3\u5b83\u4eec\u3002OpenStack \u5b89\u5168\u95e8\u6237\u662f\u4e00\u4e2a\u4e2d\u592e\u95e8\u6237\uff0c\u53ef\u4ee5\u5728\u8fd9\u91cc\u534f\u8c03\u5efa\u8bae\u3001\u901a\u77e5\u3001\u4f1a\u8bae\u548c\u6d41\u7a0b\u3002\u6b64\u5916\uff0cOpenStack \u6f0f\u6d1e\u7ba1\u7406\u56e2\u961f \uff08VMT\uff09 \u95e8\u6237\u534f\u8c03 OpenStack \u5185\u90e8\u7684\u8865\u6551\u63aa\u65bd\uff0c\u4ee5\u53ca\u8c03\u67e5\u8d1f\u8d23\u4efb\u5730\uff08\u79c1\u4e0b\uff09\u5411 VMT \u62ab\u9732\u7684\u62a5\u544a\u9519\u8bef\u7684\u8fc7\u7a0b\uff0c\u65b9\u6cd5\u662f\u5c06\u9519\u8bef\u6807\u8bb0\u4e3a\u201c\u6b64\u9519\u8bef\u662f\u5b89\u5168\u6f0f\u6d1e\u201d\u3002VMT \u6d41\u7a0b\u9875\u9762\u4e2d\u6982\u8ff0\u4e86\u66f4\u591a\u8be6\u7ec6\u4fe1\u606f\uff0c\u5e76\u751f\u6210\u4e86 OpenStack \u5b89\u5168\u516c\u544a \uff08OSSA\uff09\u3002\u6b64 OSSA \u6982\u8ff0\u4e86\u95ee\u9898\u548c\u4fee\u590d\u7a0b\u5e8f\uff0c\u5e76\u94fe\u63a5\u5230\u539f\u59cb\u9519\u8bef\u548c\u8865\u4e01\u6258\u7ba1\u4f4d\u7f6e\u3002","title":"OpenStack \u6f0f\u6d1e\u7ba1\u7406\u56e2\u961f"},{"location":"security/security-guide/#openstack_8","text":"\u62a5\u544a\u7684\u5b89\u5168\u6f0f\u6d1e\u88ab\u53d1\u73b0\u662f\u914d\u7f6e\u9519\u8bef\u7684\u7ed3\u679c\uff0c\u6216\u8005\u4e0d\u662f\u4e25\u683c\u610f\u4e49\u4e0a\u7684 OpenStack \u7684\u4e00\u90e8\u5206\uff0c\u5c06\u88ab\u8d77\u8349\u5230 OpenStack \u5b89\u5168\u8bf4\u660e \uff08OSSN\uff09 \u4e2d\u3002\u8fd9\u4e9b\u95ee\u9898\u5305\u62ec\u914d\u7f6e\u95ee\u9898\uff0c\u4f8b\u5982\u786e\u4fdd\u8eab\u4efd\u63d0\u4f9b\u5546\u6620\u5c04\uff0c\u4ee5\u53ca\u975e OpenStack \u4f46\u5173\u952e\u7684\u95ee\u9898\uff0c\u4f8b\u5982\u5f71\u54cd OpenStack 
\u4f7f\u7528\u7684\u5e73\u53f0\u7684 Bashbug/Ghost \u6216 Venom \u6f0f\u6d1e\u3002\u5f53\u524d\u7684 OSSN \u96c6\u4f4d\u4e8e\u5b89\u5168\u8bf4\u660e wiki \u4e2d\u3002","title":"OpenStack \u5b89\u5168\u6ce8\u610f\u4e8b\u9879"},{"location":"security/security-guide/#openstack-discuss","text":"\u6240\u6709 bug\u3001OSSA \u548c OSSN \u90fd\u901a\u8fc7 openstack-discuss \u90ae\u4ef6\u5217\u8868\u516c\u5f00\u53d1\u5e03\uff0c\u4e3b\u9898\u884c\u4e2d\u5305\u542b [security] \u4e3b\u9898\u3002\u6211\u4eec\u5efa\u8bae\u8ba2\u9605\u6b64\u5217\u8868\u4ee5\u53ca\u90ae\u4ef6\u8fc7\u6ee4\u89c4\u5219\uff0c\u4ee5\u786e\u4fdd\u4e0d\u4f1a\u9057\u6f0f OSSN\u3001OSSA \u548c\u5176\u4ed6\u91cd\u8981\u516c\u544a\u3002openstack-discuss \u90ae\u4ef6\u5217\u8868\u901a\u8fc7 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss \u8fdb\u884c\u7ba1\u7406\u3002openstack-discuss \u4f7f\u7528\u300a\u9879\u76ee\u56e2\u961f\u6307\u5357\u300b\u4e2d\u5b9a\u4e49\u7684\u6807\u8bb0\u3002","title":"OpenStack-discuss \u90ae\u4ef6\u5217\u8868"},{"location":"security/security-guide/#_157","text":"\u5728\u5b9e\u65bdOpenStack\u65f6\uff0c\u6838\u5fc3\u51b3\u7b56\u4e4b\u4e00\u662f\u4f7f\u7528\u54ea\u4e2a\u865a\u62df\u673a\u7ba1\u7406\u7a0b\u5e8f\u3002\u6211\u4eec\u5efa\u8bae\u60a8\u4e86\u89e3\u4e0e\u60a8\u9009\u62e9\u7684\u865a\u62df\u673a\u7ba1\u7406\u7a0b\u5e8f\u76f8\u5173\u7684\u516c\u544a\u3002\u4ee5\u4e0b\u662f\u51e0\u4e2a\u5e38\u89c1\u7684\u865a\u62df\u673a\u7ba1\u7406\u7a0b\u5e8f\u5b89\u5168\u5217\u8868\uff1a Xen\uff1a http://xenbits.xen.org/xsa/ VMWare\uff1a http://blogs.vmware.com/security/ \u5176\u4ed6\uff08KVM \u7b49\uff09\uff1a http://seclists.org/oss-sec","title":"\u865a\u62df\u673a\u7ba1\u7406\u7a0b\u5e8f\u90ae\u4ef6\u5217\u8868"},{"location":"security/security-guide/#_158","text":"\u4e91\u67b6\u6784\u5e08\u9700\u8981\u505a\u51fa\u7684\u6709\u5173\u8ba1\u7b97\u670d\u52a1\u914d\u7f6e\u7684\u4e00\u4e2a\u51b3\u5b9a\u662f\u4f7f\u7528 VNC \u8fd8\u662f SPICE\u3002","title":"\u5982\u4f55\u9009\u62e9\u865a\u62df\u63a7\u5236\u53f0"},{"location":"security/security-guide/#vnc","text":"OpenStack \u53ef\u4ee5\u914d\u7f6e\u4e3a\u4f7f\u7528\u865a\u62df\u7f51\u7edc\u8ba1\u7b97\u673a \uff08VNC\uff09 \u534f\u8bae\u4e3a\u79df\u6237\u548c\u7ba1\u7406\u5458\u63d0\u4f9b\u5bf9\u5b9e\u4f8b\u7684\u8fdc\u7a0b\u684c\u9762\u63a7\u5236\u53f0\u8bbf\u95ee\u3002","title":"\u865a\u62df\u7f51\u7edc\u8ba1\u7b97\u673a \uff08VNC\uff09"},{"location":"security/security-guide/#_159","text":"OpenStack Dashboard \uff08horizon\uff09 \u53ef\u4ee5\u4f7f\u7528 HTML5 noVNC \u5ba2\u6237\u7aef\u76f4\u63a5\u5728\u7f51\u9875\u4e0a\u4e3a\u5b9e\u4f8b\u63d0\u4f9b VNC \u63a7\u5236\u53f0\u3002\u8fd9\u8981\u6c42 nova-novncproxy \u670d\u52a1\u4ece\u516c\u7528\u7f51\u7edc\u6865\u63a5\u5230\u7ba1\u7406\u7f51\u7edc\u3002 nova \u547d\u4ee4\u884c\u5b9e\u7528\u7a0b\u5e8f\u53ef\u4ee5\u8fd4\u56de VNC \u63a7\u5236\u53f0\u7684 URL\uff0c\u4ee5\u4f9b nova Java VNC \u5ba2\u6237\u7aef\u8bbf\u95ee\u3002\u8fd9\u8981\u6c42 nova-xvpvncproxy \u670d\u52a1\u4ece\u516c\u7528\u7f51\u7edc\u6865\u63a5\u5230\u7ba1\u7406\u7f51\u7edc\u3002","title":"\u529f\u80fd"},{"location":"security/security-guide/#_160","text":"\u9ed8\u8ba4\u60c5\u51b5\u4e0b\uff0c nova-novncproxy \u548c nova-xvpvncproxy \u670d\u52a1\u4f1a\u6253\u5f00\u7ecf\u8fc7\u4ee4\u724c\u8eab\u4efd\u9a8c\u8bc1\u7684\u9762\u5411\u516c\u4f17\u7684\u7aef\u53e3\u3002 \u9ed8\u8ba4\u60c5\u51b5\u4e0b\uff0c\u8fdc\u7a0b\u684c\u9762\u6d41\u91cf\u672a\u52a0\u5bc6\u3002\u53ef\u4ee5\u542f\u7528 TLS \u6765\u52a0\u5bc6 VNC 
\u6d41\u91cf\u3002\u8bf7\u53c2\u9605 TLS \u548c SSL \u7b80\u4ecb\u4ee5\u83b7\u53d6\u9002\u5f53\u7684\u5efa\u8bae\u3002","title":"\u5b89\u5168\u6ce8\u610f\u4e8b\u9879"},{"location":"security/security-guide/#_161","text":"blog.malchuk.ru, OpenStack VNC Security. 2013. Secure Connections to VNC ports blog.malchuk.ru\uff0cOpenStack VNC \u5b89\u5168\u6027\u30022013. \u4e0e VNC \u7aef\u53e3\u7684\u5b89\u5168\u8fde\u63a5 OpenStack Mailing List, [OpenStack] nova-novnc SSL configuration - Havana. 2014. OpenStack nova-novnc SSL Configuration OpenStack \u90ae\u4ef6\u5217\u8868\uff0c[OpenStack] nova-novnc SSL \u914d\u7f6e - \u54c8\u74e6\u90a3\u30022014. OpenStack nova-novnc SSL\u914d\u7f6e Redhat.com/solutions\uff0c\u5728 OpenStack \u4e2d\u4f7f\u7528 SSL \u52a0\u5bc6 nova-novacproxy\u30022014. OpenStack nova-novncproxy SSL\u52a0\u5bc6","title":"\u53c2\u8003\u4e66\u76ee"},{"location":"security/security-guide/#spice","text":"\u4f5c\u4e3a VNC \u7684\u66ff\u4ee3\u65b9\u6848\uff0cOpenStack \u4f7f\u7528\u72ec\u7acb\u8ba1\u7b97\u73af\u5883\u7684\u7b80\u5355\u534f\u8bae \uff08SPICE\uff09 \u534f\u8bae\u63d0\u4f9b\u5bf9\u5ba2\u6237\u673a\u865a\u62df\u673a\u7684\u8fdc\u7a0b\u684c\u9762\u8bbf\u95ee\u3002","title":"\u72ec\u7acb\u8ba1\u7b97\u73af\u5883\u7684\u7b80\u5355\u534f\u8bae \uff08SPICE\uff09"},{"location":"security/security-guide/#_162","text":"OpenStack Dashboard \uff08horizon\uff09 \u76f4\u63a5\u5728\u5b9e\u4f8b\u7f51\u9875\u4e0a\u652f\u6301 SPICE\u3002\u8fd9\u9700\u8981\u670d\u52a1 nova-spicehtml5proxy \u3002 nova \u547d\u4ee4\u884c\u5b9e\u7528\u7a0b\u5e8f\u53ef\u4ee5\u8fd4\u56de SPICE \u63a7\u5236\u53f0\u7684 URL\uff0c\u4ee5\u4f9b SPICE-html \u5ba2\u6237\u7aef\u8bbf\u95ee\u3002","title":"\u529f\u80fd"},{"location":"security/security-guide/#_163","text":"\u5c3d\u7ba1 SPICE \u4e0e VNC \u76f8\u6bd4\u5177\u6709\u8bb8\u591a\u4f18\u52bf\uff0c\u4f46 spice-html5 \u6d4f\u89c8\u5668\u96c6\u6210\u76ee\u524d\u4e0d\u5141\u8bb8\u7ba1\u7406\u5458\u5229\u7528\u8fd9\u4e9b\u4f18\u52bf\u3002\u4e3a\u4e86\u5229\u7528 \u591a\u663e\u793a\u5668\u3001USB \u76f4\u901a\u7b49 SPICE \u529f\u80fd\uff0c\u6211\u4eec\u5efa\u8bae\u7ba1\u7406\u5458\u5728\u7ba1\u7406\u7f51\u7edc\u4e2d\u4f7f\u7528\u72ec\u7acb\u7684 SPICE \u5ba2\u6237\u7aef\u3002","title":"\u9650\u5236"},{"location":"security/security-guide/#_164","text":"\u9ed8\u8ba4\u60c5\u51b5\u4e0b\uff0c\u8be5 nova-spicehtml5proxy \u670d\u52a1\u4f1a\u6253\u5f00\u7ecf\u8fc7\u4ee4\u724c\u8eab\u4efd\u9a8c\u8bc1\u7684\u9762\u5411\u516c\u4f17\u7684\u7aef\u53e3\u3002 \u529f\u80fd\u548c\u96c6\u6210\u4ecd\u5728\u4e0d\u65ad\u53d1\u5c55\u3002\u6211\u4eec\u5c06\u5728\u4e0b\u4e00\u4e2a\u7248\u672c\u4e2d\u8bbf\u95ee\u8fd9\u4e9b\u529f\u80fd\u5e76\u63d0\u51fa\u5efa\u8bae\u3002 \u4e0e VNC \u7684\u60c5\u51b5\u4e00\u6837\uff0c\u76ee\u524d\u6211\u4eec\u5efa\u8bae\u4ece\u7ba1\u7406\u7f51\u7edc\u4f7f\u7528 SPICE\uff0c\u6b64\u5916\u8fd8\u9650\u5236\u4f7f\u7528\u5c11\u6570\u4eba\u3002","title":"\u5b89\u5168\u6ce8\u610f\u4e8b\u9879"},{"location":"security/security-guide/#_165","text":"OpenStack \u7ba1\u7406\u5458\u6307\u5357\u3002SPICE\u63a7\u5236\u53f0\u3002SPICE\u63a7\u5236\u53f0\u3002 bugzilla.redhat.com\uff0c Bug 913607 - RFE\uff1a \u652f\u6301\u901a\u8fc7 websockets \u96a7\u9053\u4f20\u8f93 SPICE\u30022013. 
RedHat \u9519\u8bef913607\u3002","title":"\u53c2\u8003\u4e66\u76ee"},{"location":"security/security-guide/#_166","text":"","title":"\u68c0\u67e5\u8868"},{"location":"security/security-guide/#check-compute-01-rootnova","text":"\u914d\u7f6e\u6587\u4ef6\u5305\u542b\u7ec4\u4ef6\u5e73\u7a33\u8fd0\u884c\u6240\u9700\u7684\u5173\u952e\u53c2\u6570\u548c\u4fe1\u606f\u3002\u5982\u679c\u975e\u7279\u6743\u7528\u6237\u6709\u610f\u6216\u65e0\u610f\u5730\u4fee\u6539\u6216\u5220\u9664\u4efb\u4f55\u53c2\u6570\u6216\u6587\u4ef6\u672c\u8eab\uff0c\u5219\u4f1a\u5bfc\u81f4\u4e25\u91cd\u7684\u53ef\u7528\u6027\u95ee\u9898\uff0c\u4ece\u800c\u5bfc\u81f4\u62d2\u7edd\u5411\u5176\u4ed6\u6700\u7ec8\u7528\u6237\u63d0\u4f9b\u670d\u52a1\u3002\u6b64\u7c7b\u5173\u952e\u914d\u7f6e\u6587\u4ef6\u7684\u7528\u6237\u6240\u6709\u6743\u5fc5\u987b\u8bbe\u7f6e\u4e3a nova \uff0c root \u5e76\u4e14\u7ec4\u6240\u6709\u6743\u5fc5\u987b\u8bbe\u7f6e\u4e3a \u3002\u6b64\u5916\uff0c\u5305\u542b\u76ee\u5f55\u5e94\u5177\u6709\u76f8\u540c\u7684\u6240\u6709\u6743\uff0c\u4ee5\u786e\u4fdd\u6b63\u786e\u62e5\u6709\u65b0\u6587\u4ef6\u3002 \u8fd0\u884c\u4ee5\u4e0b\u547d\u4ee4\uff1a $ stat -L -c \"%U %G\" /etc/nova/nova.conf | egrep \"root nova\" $ stat -L -c \"%U %G\" /etc/nova/api-paste.ini | egrep \"root nova\" $ stat -L -c \"%U %G\" /etc/nova/policy.json | egrep \"root nova\" $ stat -L -c \"%U %G\" /etc/nova/rootwrap.conf | egrep \"root nova\" $ stat -L -c \"%U %G\" /etc/nova | egrep \"root nova\" \u901a\u8fc7\uff1a\u5982\u679c\u6240\u6709\u8fd9\u4e9b\u914d\u7f6e\u6587\u4ef6\u7684\u7528\u6237\u548c\u7ec4\u6240\u6709\u6743\u5206\u522b\u8bbe\u7f6e\u4e3a root \u548c nova \u3002\u4e0a\u8ff0\u547d\u4ee4\u663e\u793a \u7684 root nova \u8f93\u51fa\u3002 \u5931\u8d25\uff1a\u5982\u679c\u4e0a\u8ff0\u547d\u4ee4\u672a\u8fd4\u56de\u4efb\u4f55\u8f93\u51fa\uff0c\u5219\u7528\u6237\u548c\u7ec4\u6240\u6709\u6743\u53ef\u80fd\u5df2\u8bbe\u7f6e\u4e3a\u9664 \u4ee5\u5916\u7684\u4efb\u4f55\u7528\u6237\u6216\u9664 nova \u4ee5\u5916\u7684 root \u4efb\u4f55\u7ec4\u3002 \u63a8\u8350\u4e8e\uff1a\u8ba1\u7b97\u3002","title":"Check-Compute-01\uff1a\u914d\u7f6e\u6587\u4ef6\u7684\u7528\u6237/\u7ec4\u6240\u6709\u6743\u662f\u5426\u8bbe\u7f6e\u4e3a root/nova\uff1f"},{"location":"security/security-guide/#check-compute-02","text":"\u4e0e\u524d\u9762\u7684\u68c0\u67e5\u7c7b\u4f3c\uff0c\u6211\u4eec\u5efa\u8bae\u4e3a\u6b64\u7c7b\u914d\u7f6e\u6587\u4ef6\u8bbe\u7f6e\u4e25\u683c\u7684\u8bbf\u95ee\u6743\u9650\u3002 \u8fd0\u884c\u4ee5\u4e0b\u547d\u4ee4\uff1a $ stat -L -c \"%a\" /etc/nova/nova.conf $ stat -L -c \"%a\" /etc/nova/api-paste.ini $ stat -L -c \"%a\" /etc/nova/policy.json $ stat -L -c \"%a\" /etc/nova/rootwrap.conf \u8fd8\u53ef\u4ee5\u8fdb\u884c\u66f4\u5e7f\u6cdb\u7684\u9650\u5236\uff1a\u5982\u679c\u5305\u542b\u76ee\u5f55\u8bbe\u7f6e\u4e3a 750\uff0c\u5219\u4fdd\u8bc1\u6b64\u76ee\u5f55\u4e2d\u65b0\u521b\u5efa\u7684\u6587\u4ef6\u5177\u6709\u6240\u9700\u7684\u6743\u9650\u3002 \u901a\u8fc7\uff1a\u5982\u679c\u6743\u9650\u8bbe\u7f6e\u4e3a 640 \u6216\u66f4\u4e25\u683c\uff0c\u6216\u8005\u5305\u542b\u76ee\u5f55\u8bbe\u7f6e\u4e3a 750\u3002640/750 \u7684\u6743\u9650\u8f6c\u6362\u4e3a\u6240\u6709\u8005 r/w\u3001\u7ec4 r\uff0c\u800c\u5bf9\u5176\u4ed6\u4eba\u6ca1\u6709\u6743\u9650\u3002\u4f8b\u5982\uff0c\u201cu=rw\uff0cg=r\uff0co=\u201d\u3002 \u6ce8\u610f \u5982\u679c Check-Compute-01\uff1a\u914d\u7f6e\u6587\u4ef6\u7684\u7528\u6237/\u7ec4\u6240\u6709\u6743\u662f\u5426\u8bbe\u7f6e\u4e3a root/nova\uff1f\u6743\u9650\u8bbe\u7f6e\u4e3a 640\uff0croot 
\u5177\u6709\u8bfb/\u5199\u8bbf\u95ee\u6743\u9650\uff0cnova \u5177\u6709\u5bf9\u8fd9\u4e9b\u914d\u7f6e\u6587\u4ef6\u7684\u8bfb\u53d6\u8bbf\u95ee\u6743\u9650\u3002\u4e5f\u53ef\u4ee5\u4f7f\u7528\u4ee5\u4e0b\u547d\u4ee4\u9a8c\u8bc1\u8bbf\u95ee\u6743\u9650\u3002\u4ec5\u5f53\u6b64\u547d\u4ee4\u652f\u6301 ACL \u65f6\uff0c\u5b83\u624d\u5728\u60a8\u7684\u7cfb\u7edf\u4e0a\u53ef\u7528\u3002 $ getfacl --tabular -a /etc/nova/nova.conf getfacl: Removing leading '/' from absolute path names # file: etc/nova/nova.conf USER root rw- GROUP nova r-- mask r-- other --- \u5931\u8d25\uff1a\u5982\u679c\u6743\u9650\u672a\u8bbe\u7f6e\u4e3a\u81f3\u5c11 640/750\u3002 \u63a8\u8350\u4e8e\uff1a\u8ba1\u7b97\u3002","title":"Check-Compute-02\uff1a\u662f\u5426\u4e3a\u914d\u7f6e\u6587\u4ef6\u8bbe\u7f6e\u4e86\u4e25\u683c\u7684\u6743\u9650\uff1f"},{"location":"security/security-guide/#check-compute-03keystone","text":"\u6ce8\u610f \u6b64\u9879\u4ec5\u9002\u7528\u4e8e OpenStack \u7248\u672c Rocky \u53ca\u4e4b\u524d\u7248\u672c\uff0c\u56e0\u4e3a `auth_strategy` Stein \u4e2d\u5df2\u5f03\u7528\u3002 OpenStack \u652f\u6301\u5404\u79cd\u8eab\u4efd\u9a8c\u8bc1\u7b56\u7565\uff0c\u5982 noauth \u548c keystone\u3002\u5982\u679c\u4f7f\u7528 noauth \u7b56\u7565\uff0c\u90a3\u4e48\u7528\u6237\u65e0\u9700\u4efb\u4f55\u8eab\u4efd\u9a8c\u8bc1\u5373\u53ef\u4e0e OpenStack \u670d\u52a1\u8fdb\u884c\u4ea4\u4e92\u3002\u8fd9\u53ef\u80fd\u662f\u4e00\u4e2a\u6f5c\u5728\u7684\u98ce\u9669\uff0c\u56e0\u4e3a\u653b\u51fb\u8005\u53ef\u80fd\u4f1a\u83b7\u5f97\u5bf9 OpenStack \u7ec4\u4ef6\u7684\u672a\u7ecf\u6388\u6743\u7684\u8bbf\u95ee\u3002\u6211\u4eec\u5f3a\u70c8\u5efa\u8bae\u6240\u6709\u670d\u52a1\u90fd\u5fc5\u987b\u4f7f\u7528\u5176\u670d\u52a1\u5e10\u6237\u901a\u8fc7 keystone \u8fdb\u884c\u8eab\u4efd\u9a8c\u8bc1\u3002 \u5728Ocata\u4e4b\u524d\uff1a \u901a\u8fc7\uff1a\u5982\u679c section in \u4e0b\u7684\u53c2\u6570 auth_strategy \u8bbe\u7f6e\u4e3a keystone \u3002 [DEFAULT] /etc/nova/nova.conf \u5931\u8d25\uff1a\u5982\u679c section \u4e0b\u7684 [DEFAULT] \u53c2\u6570 auth_strategy \u503c\u8bbe\u7f6e\u4e3a noauth \u6216 noauth2 \u3002 \u5728Ocata\u4e4b\u540e\uff1a \u901a\u8fc7\uff1a\u5982\u679c under [api] \u6216 [DEFAULT] section in /etc/nova/nova.conf \u7684\u53c2\u6570 auth_strategy \u503c\u8bbe\u7f6e\u4e3a keystone \u3002 \u5931\u8d25\uff1a\u5982\u679c or [DEFAULT] \u90e8\u5206\u4e0b\u7684 [api] \u53c2\u6570 auth_strategy \u503c\u8bbe\u7f6e\u4e3a noauth \u6216 noauth2 \u3002","title":"Check-Compute-03\uff1aKeystone \u662f\u5426\u7528\u4e8e\u8eab\u4efd\u9a8c\u8bc1\uff1f"},{"location":"security/security-guide/#check-compute-04","text":"OpenStack \u7ec4\u4ef6\u4f7f\u7528\u5404\u79cd\u534f\u8bae\u76f8\u4e92\u901a\u4fe1\uff0c\u901a\u4fe1\u53ef\u80fd\u6d89\u53ca\u654f\u611f\u6216\u673a\u5bc6\u6570\u636e\u3002\u653b\u51fb\u8005\u53ef\u80fd\u4f1a\u5c1d\u8bd5\u7a83\u542c\u9891\u9053\u4ee5\u8bbf\u95ee\u654f\u611f\u4fe1\u606f\u3002\u6240\u6709\u7ec4\u4ef6\u5fc5\u987b\u4f7f\u7528\u5b89\u5168\u901a\u4fe1\u534f\u8bae\u76f8\u4e92\u901a\u4fe1\u3002 \u901a\u8fc7\uff1a\u5982\u679c section in /etc/nova/nova.conf \u4e0b\u7684\u53c2\u6570\u503c\u8bbe\u7f6e\u4e3a Identity API \u7aef\u70b9\u5f00\u5934\uff0c https:// \u5e76\u4e14 same /etc/nova/nova.conf \u4e2d\u540c\u4e00 [keystone_authtoken] \u90e8\u5206\u4e0b\u7684 [keystone_authtoken] \u53c2\u6570 www_authenticate_uri insecure \u503c\u8bbe\u7f6e\u4e3a False \u3002 \u5931\u8d25\uff1a\u5982\u679c in /etc/nova/nova.conf \u90e8\u5206\u4e0b\u7684 [keystone_authtoken] \u53c2\u6570 www_authenticate_uri 
\u503c\u672a\u8bbe\u7f6e\u4e3a\u4ee5 \u5f00\u5934\u7684\u8eab\u4efd API \u7aef\u70b9\uff0c https:// \u6216\u8005\u540c\u4e00 /etc/nova/nova.conf \u90e8\u5206\u4e2d\u7684\u53c2\u6570 insecure [keystone_authtoken] \u503c\u8bbe\u7f6e\u4e3a True \u3002","title":"Check-Compute-04\uff1a\u662f\u5426\u4f7f\u7528\u5b89\u5168\u534f\u8bae\u8fdb\u884c\u8eab\u4efd\u9a8c\u8bc1\uff1f"},{"location":"security/security-guide/#check-compute-05nova-glance","text":"OpenStack \u7ec4\u4ef6\u4f7f\u7528\u5404\u79cd\u534f\u8bae\u76f8\u4e92\u901a\u4fe1\uff0c\u901a\u4fe1\u53ef\u80fd\u6d89\u53ca\u654f\u611f\u6216\u673a\u5bc6\u6570\u636e\u3002\u653b\u51fb\u8005\u53ef\u80fd\u4f1a\u5c1d\u8bd5\u7a83\u542c\u9891\u9053\u4ee5\u8bbf\u95ee\u654f\u611f\u4fe1\u606f\u3002\u6240\u6709\u7ec4\u4ef6\u5fc5\u987b\u4f7f\u7528\u5b89\u5168\u901a\u4fe1\u534f\u8bae\u76f8\u4e92\u901a\u4fe1\u3002 \u901a\u8fc7\uff1a\u5982\u679c section in \u4e0b\u7684\u53c2\u6570\u503c\u8bbe\u7f6e\u4e3a False \uff0c\u5e76\u4e14 section in /etc/nova/nova.conf /etc/nova/nova.conf \u4e0b\u7684 [glance] [glance] \u53c2\u6570 api_insecure api_servers \u503c\u8bbe\u7f6e\u4e3a\u4ee5 https:// \u5f00\u5934\u7684\u503c\u3002 \u5931\u8d25\uff1a\u5982\u679c in /etc/nova/nova.conf \u8282\u4e0b\u7684\u53c2\u6570\u503c\u8bbe\u7f6e\u4e3a True \uff0c\u6216\u8005 in /etc/nova/nova.conf \u8282\u4e0b\u7684 [glance] [glance] \u53c2\u6570 api_insecure api_servers \u503c\u8bbe\u7f6e\u4e3a\u4e0d\u4ee5 https:// \u5f00\u5934\u7684\u503c\u3002","title":"Check-Compute-05\uff1aNova \u4e0e Glance \u7684\u901a\u4fe1\u662f\u5426\u5b89\u5168\uff1f"},{"location":"security/security-guide/#_167","text":"OpenStack Block Storage \uff08cinder\uff09 \u662f\u4e00\u9879\u670d\u52a1\uff0c\u5b83\u63d0\u4f9b\u8f6f\u4ef6\uff08\u670d\u52a1\u548c\u5e93\uff09\u6765\u81ea\u52a9\u7ba1\u7406\u6301\u4e45\u6027\u5757\u7ea7\u5b58\u50a8\u8bbe\u5907\u3002\u8fd9\u5c06\u521b\u5efa\u5bf9\u5757\u5b58\u50a8\u8d44\u6e90\u7684\u6309\u9700\u8bbf\u95ee\uff0c\u4ee5\u4fbf\u4e0e OpenStack \u8ba1\u7b97 \uff08nova\uff09 \u5b9e\u4f8b\u4e00\u8d77\u4f7f\u7528\u3002\u901a\u8fc7\u5c06\u5757\u5b58\u50a8\u6c60\u865a\u62df\u5316\u5230\u5404\u79cd\u540e\u7aef\u5b58\u50a8\u8bbe\u5907\uff08\u53ef\u4ee5\u662f\u8f6f\u4ef6\u5b9e\u73b0\u6216\u4f20\u7edf\u786c\u4ef6\u5b58\u50a8\u4ea7\u54c1\uff09\uff0c\u901a\u8fc7\u62bd\u8c61\u521b\u5efa\u8f6f\u4ef6\u5b9a\u4e49\u5b58\u50a8\u3002\u5176\u4e3b\u8981\u529f\u80fd\u662f\u7ba1\u7406\u5757\u8bbe\u5907\u7684\u521b\u5efa\u3001\u9644\u52a0\u548c\u5206\u79bb\u3002\u6d88\u8d39\u8005\u4e0d\u9700\u8981\u77e5\u9053\u540e\u7aef\u5b58\u50a8\u8bbe\u5907\u7684\u7c7b\u578b\u6216\u5b83\u7684\u4f4d\u7f6e\u3002 \u8ba1\u7b97\u5b9e\u4f8b\u901a\u8fc7\u884c\u4e1a\u6807\u51c6\u5b58\u50a8\u534f\u8bae\uff08\u5982 iSCSI\u3001\u4ee5\u592a\u7f51 ATA \u6216\u5149\u7ea4\u901a\u9053\uff09\u5b58\u50a8\u548c\u68c0\u7d22\u5757\u5b58\u50a8\u3002\u8fd9\u4e9b\u8d44\u6e90\u901a\u8fc7 OpenStack \u539f\u751f\u6807\u51c6 HTTP RESTful API \u8fdb\u884c\u7ba1\u7406\u548c\u914d\u7f6e\u3002\u6709\u5173 API \u7684\u66f4\u591a\u8be6\u7ec6\u4fe1\u606f\uff0c\u8bf7\u53c2\u9605 OpenStack \u5757\u5b58\u50a8\u6587\u6863\u3002 \u5377\u64e6\u9664 \u68c0\u67e5\u8868 Check-Block-01\uff1a\u914d\u7f6e\u6587\u4ef6\u7684\u7528\u6237/\u7ec4\u6240\u6709\u6743\u662f\u5426\u8bbe\u7f6e\u4e3a root/cinder\uff1f Check-Block-02\uff1a\u662f\u5426\u4e3a\u914d\u7f6e\u6587\u4ef6\u8bbe\u7f6e\u4e86\u4e25\u683c\u7684\u6743\u9650\uff1f Check-Block-03\uff1aKeystone \u662f\u5426\u7528\u4e8e\u8eab\u4efd\u9a8c\u8bc1\uff1f Check-Block-04\uff1a\u662f\u5426\u542f\u7528\u4e86 TLS 
\u8fdb\u884c\u8eab\u4efd\u9a8c\u8bc1\uff1f Check-Block-05\uff1acinder \u662f\u5426\u901a\u8fc7 TLS \u4e0e nova \u901a\u4fe1\uff1f Check-Block-06\uff1acinder \u662f\u5426\u901a\u8fc7 TLS \u4e0e glance \u901a\u4fe1\uff1f Check-Block-07\uff1a NAS \u662f\u5426\u5728\u5b89\u5168\u7684\u73af\u5883\u4e2d\u8fd0\u884c\uff1f Check-Block-08\uff1a\u8bf7\u6c42\u6b63\u6587\u7684\u6700\u5927\u5927\u5c0f\u662f\u5426\u8bbe\u7f6e\u4e3a\u9ed8\u8ba4\u503c \uff08114688\uff09\uff1f Check-Block-09\uff1a\u662f\u5426\u542f\u7528\u4e86\u5377\u52a0\u5bc6\u529f\u80fd\uff1f \u6ce8\u610f \u867d\u7136\u672c\u7ae0\u76ee\u524d\u5bf9\u5177\u4f53\u6307\u5357\u7684\u4ecb\u7ecd\u5f88\u5c11\uff0c\u4f46\u9884\u8ba1\u5c06\u9075\u5faa\u6807\u51c6\u7684\u5f3a\u5316\u5b9e\u8df5\u3002\u672c\u8282\u5c06\u6269\u5c55\u76f8\u5173\u4fe1\u606f\u3002","title":"\u5757\u5b58\u50a8"},{"location":"security/security-guide/#_168","text":"\u6709\u51e0\u79cd\u65b9\u6cd5\u53ef\u4ee5\u64e6\u9664\u5757\u5b58\u50a8\u8bbe\u5907\u3002\u4f20\u7edf\u7684\u65b9\u6cd5\u662f\u5c06 lvm_type \u8bbe\u7f6e\u4e3a thin \uff0c\u5982\u679c\u4f7f\u7528 LVM \u540e\u7aef\uff0c\u5219\u4f7f\u7528 volume_clear \u8be5\u53c2\u6570\u3002\u6216\u8005\uff0c\u5982\u679c\u4f7f\u7528\u5377\u52a0\u5bc6\u529f\u80fd\uff0c\u5219\u5728\u5220\u9664\u5377\u52a0\u5bc6\u5bc6\u94a5\u65f6\u4e0d\u9700\u8981\u5377\u64e6\u9664\u3002\u6709\u5173\u8bbe\u7f6e\u7684\u8be6\u7ec6\u4fe1\u606f\uff0c\u8bf7\u53c2\u9605\u5377\u52a0\u5bc6\u90e8\u5206\u4e2d\u7684 OpenStack \u914d\u7f6e\u53c2\u8003\u6587\u6863\uff0c\u4ee5\u53ca\u6709\u5173\u5bc6\u94a5\u5220\u9664\u7684 Castellan \u4f7f\u7528\u6587\u6863 \u6ce8\u610f \u5728\u8f83\u65e7\u7684 OpenStack \u7248\u672c\u4e2d\uff0c `lvm_type=default` \u7528\u4e8e\u8868\u793a\u64e6\u9664\u3002\u867d\u7136\u6b64\u65b9\u6cd5\u4ecd\u7136\u6709\u6548\uff0c\u4f46 `lvm_type=default` \u4e0d\u5efa\u8bae\u7528\u4e8e\u8bbe\u7f6e\u5b89\u5168\u5220\u9664\u3002 \u8be5 volume_clear \u53c2\u6570\u53ef\u4ee5\u8bbe\u7f6e\u4e3a zero \u3002\u8be5 zero \u53c2\u6570\u5c06\u5411\u8bbe\u5907\u5199\u5165\u4e00\u6b21\u96f6\u4f20\u9012\u3002 \u6709\u5173\u8be5 lvm_type \u53c2\u6570\u7684\u66f4\u591a\u4fe1\u606f\uff0c\u8bf7\u53c2\u9605 cinder \u9879\u76ee\u6587\u6863\u7684\u7cbe\u7b80\u7f6e\u5907\u4e2d\u7684 LVM \u548c\u8d85\u989d\u8ba2\u9605\u90e8\u5206\u3002 \u6709\u5173\u8be5 volume_clear \u53c2\u6570\u7684\u8be6\u7ec6\u4fe1\u606f\uff0c\u8bf7\u53c2\u9605 cinder \u9879\u76ee\u6587\u6863\u7684 Cinder \u914d\u7f6e\u9009\u9879\u90e8\u5206\u3002","title":"\u5377\u64e6\u9664"},{"location":"security/security-guide/#_169","text":"","title":"\u68c0\u67e5\u8868"},{"location":"security/security-guide/#check-block-01-rootcinder","text":"\u914d\u7f6e\u6587\u4ef6\u5305\u542b\u7ec4\u4ef6\u5e73\u7a33\u8fd0\u884c\u6240\u9700\u7684\u5173\u952e\u53c2\u6570\u548c\u4fe1\u606f\u3002\u5982\u679c\u975e\u7279\u6743\u7528\u6237\u6709\u610f\u6216\u65e0\u610f\u5730\u4fee\u6539\u6216\u5220\u9664\u4efb\u4f55\u53c2\u6570\u6216\u6587\u4ef6\u672c\u8eab\uff0c\u5219\u4f1a\u5bfc\u81f4\u4e25\u91cd\u7684\u53ef\u7528\u6027\u95ee\u9898\uff0c\u4ece\u800c\u5bfc\u81f4\u62d2\u7edd\u5411\u5176\u4ed6\u6700\u7ec8\u7528\u6237\u63d0\u4f9b\u670d\u52a1\u3002\u56e0\u6b64\uff0c\u6b64\u7c7b\u5173\u952e\u914d\u7f6e\u6587\u4ef6\u7684\u7528\u6237\u6240\u6709\u6743\u5fc5\u987b\u8bbe\u7f6e\u4e3a root\uff0c\u7ec4\u6240\u6709\u6743\u5fc5\u987b\u8bbe\u7f6e\u4e3a cinder\u3002\u6b64\u5916\uff0c\u5305\u542b\u76ee\u5f55\u5e94\u5177\u6709\u76f8\u540c\u7684\u6240\u6709\u6743\uff0c\u4ee5\u786e\u4fdd\u6b63\u786e\u62e5\u6709\u65b0\u6587\u4ef6\u3002 
\u8fd0\u884c\u4ee5\u4e0b\u547d\u4ee4\uff1a $ stat -L -c \"%U %G\" /etc/cinder/cinder.conf | egrep \"root cinder\" $ stat -L -c \"%U %G\" /etc/cinder/api-paste.ini | egrep \"root cinder\" $ stat -L -c \"%U %G\" /etc/cinder/policy.json | egrep \"root cinder\" $ stat -L -c \"%U %G\" /etc/cinder/rootwrap.conf | egrep \"root cinder\" $ stat -L -c \"%U %G\" /etc/cinder | egrep \"root cinder\" \u901a\u8fc7\uff1a\u5982\u679c\u6240\u6709\u8fd9\u4e9b\u914d\u7f6e\u6587\u4ef6\u7684\u7528\u6237\u548c\u7ec4\u6240\u6709\u6743\u5206\u522b\u8bbe\u7f6e\u4e3a root \u548c cinder\u3002\u4e0a\u9762\u7684\u547d\u4ee4\u663e\u793a\u4e86\u6839\u7164\u6e23\u7684\u8f93\u51fa\u3002 \u5931\u8d25\uff1a\u5982\u679c\u4e0a\u8ff0\u547d\u4ee4\u672a\u8fd4\u56de\u4efb\u4f55\u8f93\u51fa\uff0c\u56e0\u4e3a\u7528\u6237\u548c\u7ec4\u6240\u6709\u6743\u53ef\u80fd\u5df2\u8bbe\u7f6e\u4e3a\u9664 root \u4ee5\u5916\u7684\u4efb\u4f55\u7528\u6237\u6216\u9664 cinder \u4ee5\u5916\u7684\u4efb\u4f55\u7ec4\u3002","title":"Check-Block-01\uff1a\u914d\u7f6e\u6587\u4ef6\u7684\u7528\u6237/\u7ec4\u6240\u6709\u6743\u662f\u5426\u8bbe\u7f6e\u4e3a root/cinder\uff1f"},{"location":"security/security-guide/#check-block-02","text":"\u4e0e\u524d\u9762\u7684\u68c0\u67e5\u7c7b\u4f3c\uff0c\u6211\u4eec\u5efa\u8bae\u4e3a\u6b64\u7c7b\u914d\u7f6e\u6587\u4ef6\u8bbe\u7f6e\u4e25\u683c\u7684\u8bbf\u95ee\u6743\u9650\u3002 \u8fd0\u884c\u4ee5\u4e0b\u547d\u4ee4\uff1a $ stat -L -c \"%a\" /etc/cinder/cinder.conf $ stat -L -c \"%a\" /etc/cinder/api-paste.ini $ stat -L -c \"%a\" /etc/cinder/policy.json $ stat -L -c \"%a\" /etc/cinder/rootwrap.conf $ stat -L -c \"%a\" /etc/cinder \u8fd8\u53ef\u4ee5\u8fdb\u884c\u66f4\u5e7f\u6cdb\u7684\u9650\u5236\uff1a\u5982\u679c\u5305\u542b\u76ee\u5f55\u8bbe\u7f6e\u4e3a 750\uff0c\u5219\u4fdd\u8bc1\u6b64\u76ee\u5f55\u4e2d\u65b0\u521b\u5efa\u7684\u6587\u4ef6\u5177\u6709\u6240\u9700\u7684\u6743\u9650\u3002 \u901a\u8fc7\uff1a\u5982\u679c\u6743\u9650\u8bbe\u7f6e\u4e3a 640 \u6216\u66f4\u4e25\u683c\uff0c\u6216\u8005\u5305\u542b\u76ee\u5f55\u8bbe\u7f6e\u4e3a 750\u3002640/750 \u7684\u6743\u9650\u8f6c\u6362\u4e3a\u6240\u6709\u8005 r/w\u3001\u7ec4 r\uff0c\u800c\u5bf9\u5176\u4ed6\u4eba\u6ca1\u6709\u6743\u9650\uff0c\u5373\u201cu=rw\uff0cg=r\uff0co=\u201d\u3002\u8bf7\u6ce8\u610f\uff0c\u4f7f\u7528 Check-Block-01 \u65f6\uff1a\u914d\u7f6e\u6587\u4ef6\u7684\u7528\u6237/\u7ec4\u6240\u6709\u6743\u662f\u5426\u8bbe\u7f6e\u4e3a root/cinder\uff1f\u6743\u9650\u8bbe\u7f6e\u4e3a 640\uff0croot \u5177\u6709\u8bfb/\u5199\u8bbf\u95ee\u6743\u9650\uff0ccinder \u5177\u6709\u5bf9\u8fd9\u4e9b\u914d\u7f6e\u6587\u4ef6\u7684\u8bfb\u53d6\u8bbf\u95ee\u6743\u9650\u3002\u4e5f\u53ef\u4ee5\u4f7f\u7528\u4ee5\u4e0b\u547d\u4ee4\u9a8c\u8bc1\u8bbf\u95ee\u6743\u9650\u3002\u4ec5\u5f53\u6b64\u547d\u4ee4\u652f\u6301 ACL \u65f6\uff0c\u5b83\u624d\u5728\u60a8\u7684\u7cfb\u7edf\u4e0a\u53ef\u7528\u3002 $ getfacl --tabular -a /etc/cinder/cinder.conf getfacl: Removing leading '/' from absolute path names # file: etc/cinder/cinder.conf USER root rw- GROUP cinder r-- mask r-- other --- \u5931\u8d25\uff1a\u5982\u679c\u6743\u9650\u672a\u8bbe\u7f6e\u4e3a\u81f3\u5c11 640\u3002","title":"Check-Block-02\uff1a\u662f\u5426\u4e3a\u914d\u7f6e\u6587\u4ef6\u8bbe\u7f6e\u4e86\u4e25\u683c\u7684\u6743\u9650\uff1f"},{"location":"security/security-guide/#check-block-03keystone","text":"\u6ce8\u610f \u6b64\u9879\u4ec5\u9002\u7528\u4e8e OpenStack \u7248\u672c Rocky \u53ca\u4e4b\u524d\u7248\u672c\uff0c\u56e0\u4e3a `auth_strategy` Stein \u4e2d\u5df2\u5f03\u7528\u3002 OpenStack 
\u652f\u6301\u5404\u79cd\u8eab\u4efd\u9a8c\u8bc1\u7b56\u7565\uff0c\u5982 noauth\u3001keystone \u7b49\u3002\u5982\u679c\u4f7f\u7528\u201cnoauth\u201d\u7b56\u7565\uff0c\u90a3\u4e48\u7528\u6237\u65e0\u9700\u4efb\u4f55\u8eab\u4efd\u9a8c\u8bc1\u5373\u53ef\u4e0eOpenStack\u670d\u52a1\u8fdb\u884c\u4ea4\u4e92\u3002\u8fd9\u53ef\u80fd\u662f\u4e00\u4e2a\u6f5c\u5728\u7684\u98ce\u9669\uff0c\u56e0\u4e3a\u653b\u51fb\u8005\u53ef\u80fd\u4f1a\u83b7\u5f97\u5bf9 OpenStack \u7ec4\u4ef6\u7684\u672a\u7ecf\u6388\u6743\u7684\u8bbf\u95ee\u3002\u56e0\u6b64\uff0c\u6211\u4eec\u5f3a\u70c8\u5efa\u8bae\u6240\u6709\u670d\u52a1\u90fd\u5fc5\u987b\u4f7f\u7528\u5176\u670d\u52a1\u5e10\u6237\u901a\u8fc7 keystone \u8fdb\u884c\u8eab\u4efd\u9a8c\u8bc1\u3002 \u901a\u8fc7\uff1a\u5982\u679c section in \u4e0b\u7684\u53c2\u6570 auth_strategy \u8bbe\u7f6e\u4e3a keystone \u3002 [DEFAULT] /etc/cinder/cinder.conf \u5931\u8d25\uff1a\u5982\u679c section \u4e0b\u7684 [DEFAULT] \u53c2\u6570 auth_strategy \u503c\u8bbe\u7f6e\u4e3a noauth \u3002","title":"Check-Block-03\uff1aKeystone \u662f\u5426\u7528\u4e8e\u8eab\u4efd\u9a8c\u8bc1\uff1f"},{"location":"security/security-guide/#check-block-04-tls","text":"OpenStack \u7ec4\u4ef6\u4f7f\u7528\u5404\u79cd\u534f\u8bae\u76f8\u4e92\u901a\u4fe1\uff0c\u901a\u4fe1\u53ef\u80fd\u6d89\u53ca\u654f\u611f/\u673a\u5bc6\u6570\u636e\u3002\u653b\u51fb\u8005\u53ef\u80fd\u4f1a\u5c1d\u8bd5\u7a83\u542c\u9891\u9053\u4ee5\u8bbf\u95ee\u654f\u611f\u4fe1\u606f\u3002\u56e0\u6b64\uff0c\u6240\u6709\u7ec4\u4ef6\u90fd\u5fc5\u987b\u4f7f\u7528\u5b89\u5168\u7684\u901a\u4fe1\u534f\u8bae\u76f8\u4e92\u901a\u4fe1\u3002 \u901a\u8fc7\uff1a\u5982\u679c section in /etc/cinder/cinder.conf \u4e0b\u7684\u53c2\u6570\u503c\u8bbe\u7f6e\u4e3a Identity API \u7aef\u70b9\u5f00\u5934\uff0c https:// \u5e76\u4e14 same /etc/cinder/cinder.conf \u4e2d\u540c\u4e00 [keystone_authtoken] \u90e8\u5206\u4e0b\u7684 [keystone_authtoken] \u53c2\u6570 www_authenticate_uri insecure \u503c\u8bbe\u7f6e\u4e3a False \u3002 \u5931\u8d25\uff1a\u5982\u679c in /etc/cinder/cinder.conf \u90e8\u5206\u4e0b\u7684 [keystone_authtoken] \u53c2\u6570 www_authenticate_uri \u503c\u672a\u8bbe\u7f6e\u4e3a\u4ee5 \u5f00\u5934\u7684\u8eab\u4efd API \u7aef\u70b9\uff0c https:// \u6216\u8005\u540c\u4e00 /etc/cinder/cinder.conf \u90e8\u5206\u4e2d\u7684\u53c2\u6570 insecure [keystone_authtoken] \u503c\u8bbe\u7f6e\u4e3a True \u3002","title":"Check-Block-04\uff1a\u662f\u5426\u542f\u7528\u4e86 TLS \u8fdb\u884c\u8eab\u4efd\u9a8c\u8bc1\uff1f"},{"location":"security/security-guide/#check-block-05cinder-tls-nova","text":"OpenStack \u7ec4\u4ef6\u4f7f\u7528\u5404\u79cd\u534f\u8bae\u76f8\u4e92\u901a\u4fe1\uff0c\u901a\u4fe1\u53ef\u80fd\u6d89\u53ca\u654f\u611f/\u673a\u5bc6\u6570\u636e\u3002\u653b\u51fb\u8005\u53ef\u80fd\u4f1a\u5c1d\u8bd5\u7a83\u542c\u9891\u9053\u4ee5\u8bbf\u95ee\u654f\u611f\u4fe1\u606f\u3002\u56e0\u6b64\uff0c\u6240\u6709\u7ec4\u4ef6\u90fd\u5fc5\u987b\u4f7f\u7528\u5b89\u5168\u7684\u901a\u4fe1\u534f\u8bae\u76f8\u4e92\u901a\u4fe1\u3002 \u901a\u8fc7\uff1a\u5982\u679c section in \u4e0b\u7684\u53c2\u6570 nova_api_insecure \u8bbe\u7f6e\u4e3a False \u3002 [DEFAULT] /etc/cinder/cinder.conf \u5931\u8d25\uff1a\u5982\u679c section in \u4e0b\u7684\u53c2\u6570 nova_api_insecure \u8bbe\u7f6e\u4e3a True \u3002 [DEFAULT] /etc/cinder/cinder.conf","title":"Check-Block-05\uff1acinder \u662f\u5426\u901a\u8fc7 TLS \u4e0e nova \u901a\u4fe1\uff1f"},{"location":"security/security-guide/#check-block-06cinder-tls-glance","text":"\u4e0e\u4e4b\u524d\u7684\u68c0\u67e5\uff08Check-Block-05\uff1acinder 
\u662f\u5426\u901a\u8fc7 TLS \u4e0e nova \u901a\u4fe1\uff1f\uff09\u7c7b\u4f3c\uff0c\u6211\u4eec\u5efa\u8bae\u6240\u6709\u7ec4\u4ef6\u4f7f\u7528\u5b89\u5168\u901a\u4fe1\u534f\u8bae\u76f8\u4e92\u901a\u4fe1\u3002 \u901a\u8fc7\uff1a\u5982\u679c in \u90e8\u5206\u4e0b\u7684 [DEFAULT] \u53c2\u6570\u503c\u8bbe\u7f6e\u4e3a False \u5e76\u4e14\u53c2\u6570 glance_api_servers glance_api_insecure \u503c\u8bbe\u7f6e\u4e3a\u4ee5 https:// \u5f00\u5934 /etc/cinder/cinder.conf \u7684\u503c\u3002 \u5931\u8d25\uff1a\u5982\u679c\u5c06 section in \u4e0b\u7684\u53c2\u6570\u503c\u8bbe\u7f6e\u4e3a True \u6216\u53c2\u6570 glance_api_servers glance_api_insecure \u503c\u8bbe\u7f6e\u4e3a\u4e0d\u4ee5 https:// \u5f00\u5934\u7684\u503c\u3002 [DEFAULT] /etc/cinder/cinder.conf","title":"Check-Block-06\uff1acinder \u662f\u5426\u901a\u8fc7 TLS \u4e0e glance \u901a\u4fe1\uff1f"},{"location":"security/security-guide/#check-block-07-nas","text":"Cinder \u652f\u6301 NFS \u9a71\u52a8\u7a0b\u5e8f\uff0c\u5176\u5de5\u4f5c\u65b9\u5f0f\u4e0e\u4f20\u7edf\u7684\u5757\u5b58\u50a8\u9a71\u52a8\u7a0b\u5e8f\u4e0d\u540c\u3002NFS \u9a71\u52a8\u7a0b\u5e8f\u5b9e\u9645\u4e0a\u4e0d\u5141\u8bb8\u5b9e\u4f8b\u5728\u5757\u7ea7\u522b\u8bbf\u95ee\u5b58\u50a8\u8bbe\u5907\u3002\u76f8\u53cd\uff0c\u6587\u4ef6\u662f\u5728 NFS \u5171\u4eab\u4e0a\u521b\u5efa\u7684\uff0c\u5e76\u6620\u5c04\u5230\u6a21\u62df\u5757\u50a8\u5b58\u8bbe\u5907\u7684\u5b9e\u4f8b\u3002Cinder \u901a\u8fc7\u5728\u521b\u5efa Cinder \u5377\u65f6\u63a7\u5236\u6587\u4ef6\u6743\u9650\u6765\u652f\u6301\u6b64\u7c7b\u6587\u4ef6\u7684\u5b89\u5168\u914d\u7f6e\u3002Cinder \u914d\u7f6e\u8fd8\u53ef\u4ee5\u63a7\u5236\u662f\u4ee5 root \u7528\u6237\u8eab\u4efd\u8fd8\u662f\u5f53\u524d OpenStack \u8fdb\u7a0b\u7528\u6237\u8eab\u4efd\u8fd0\u884c\u6587\u4ef6\u64cd\u4f5c\u3002 \u901a\u8fc7\uff1a\u5982\u679c section in \u4e0b\u7684\u53c2\u6570 nas_secure_file_permissions \u8bbe\u7f6e\u4e3a auto \u3002 [DEFAULT] /etc/cinder/cinder.conf \u5982\u679c\u8bbe\u7f6e\u4e3a auto \uff0c\u5219\u5728 cinder \u542f\u52a8\u671f\u95f4\u8fdb\u884c\u68c0\u67e5\u4ee5\u786e\u5b9a\u662f\u5426\u5b58\u5728\u73b0\u6709\u7684 cinder \u5377\uff0c\u4efb\u4f55\u5377\u90fd\u4e0d\u4f1a\u5c06\u9009\u9879\u8bbe\u7f6e\u4e3a True \uff0c\u5e76\u4f7f\u7528\u5b89\u5168\u6587\u4ef6\u6743\u9650\u3002\u68c0\u6d4b\u73b0\u6709\u5377\u4f1a\u5c06\u9009\u9879\u8bbe\u7f6e\u4e3a False \uff0c\u5e76\u4f7f\u7528\u5f53\u524d\u4e0d\u5b89\u5168\u7684\u65b9\u6cd5\u6765\u5904\u7406\u6587\u4ef6\u6743\u9650\u3002\u5982\u679c section in \u4e0b\u7684\u53c2\u6570 nas_secure_file_operations \u8bbe\u7f6e\u4e3a auto \u3002 [DEFAULT] /etc/cinder/cinder.conf \u5f53\u8bbe\u7f6e\u4e3a\u201cauto\u201d\u65f6\uff0c\u5728 cinder \u542f\u52a8\u671f\u95f4\u8fdb\u884c\u68c0\u67e5\u4ee5\u786e\u5b9a\u662f\u5426\u5b58\u5728\u73b0\u6709\u7684 cinder \u5377\uff0c\u4efb\u4f55\u5377\u90fd\u4e0d\u4f1a\u5c06\u9009\u9879\u8bbe\u7f6e\u4e3a True \uff0c\u5b89\u5168\u4e14\u4e0d\u4ee5 root \u7528\u6237\u8eab\u4efd\u8fd0\u884c\u3002\u5bf9\u73b0\u6709\u5377\u7684\u68c0\u6d4b\u4f1a\u5c06\u9009\u9879\u8bbe\u7f6e\u4e3a False \uff0c\u5e76\u4f7f\u7528\u5f53\u524d\u65b9\u6cd5\u4ee5 root \u7528\u6237\u8eab\u4efd\u8fd0\u884c\u64cd\u4f5c\u3002\u5bf9\u4e8e\u65b0\u5b89\u88c5\uff0c\u4f1a\u7f16\u5199\u4e00\u4e2a\u201c\u6807\u8bb0\u6587\u4ef6\u201d\uff0c\u4ee5\u4fbf\u968f\u540e\u91cd\u65b0\u542f\u52a8 cinder \u5c06\u77e5\u9053\u539f\u59cb\u786e\u5b9a\u662f\u4ec0\u4e48\u3002 \u5931\u8d25\uff1a\u5982\u679c section in \u4e0b\u7684\u53c2\u6570\u503c\u8bbe\u7f6e\u4e3a False \uff0c\u5e76\u4e14 section in 
/etc/cinder/cinder.conf /etc/cinder/cinder.conf \u4e0b\u7684 [DEFAULT] [DEFAULT] \u53c2\u6570 nas_secure_file_permissions nas_secure_file_operations \u503c\u8bbe\u7f6e\u4e3a False \u3002","title":"Check-Block-07\uff1a NAS \u662f\u5426\u5728\u5b89\u5168\u7684\u73af\u5883\u4e2d\u8fd0\u884c\uff1f"},{"location":"security/security-guide/#check-block-08-114688","text":"\u5982\u679c\u672a\u5b9a\u4e49\u6bcf\u4e2a\u8bf7\u6c42\u7684\u6700\u5927\u6b63\u6587\u5927\u5c0f\uff0c\u653b\u51fb\u8005\u53ef\u4ee5\u6784\u5efa\u4efb\u610f\u8f83\u5927\u7684osapi\u8bf7\u6c42\uff0c\u5bfc\u81f4\u670d\u52a1\u5d29\u6e83\uff0c\u6700\u7ec8\u5bfc\u81f4\u62d2\u7edd\u670d\u52a1\u653b\u51fb\u3002\u5206\u914d\u6700\u5927\u503c\u53ef\u786e\u4fdd\u963b\u6b62\u4efb\u4f55\u6076\u610f\u8d85\u5927\u8bf7\u6c42\uff0c\u4ece\u800c\u786e\u4fdd\u670d\u52a1\u7684\u6301\u7eed\u53ef\u7528\u6027\u3002 \u901a\u8fc7\uff1a\u5982\u679c section in \u4e0b\u7684\u53c2\u6570\u503c\u8bbe\u7f6e\u4e3a 114688 114688 \uff0c\u6216\u8005 section in /etc/cinder/cinder.conf /etc/cinder/cinder.conf \u4e0b\u7684 [oslo_middleware] [DEFAULT] \u53c2\u6570 osapi_max_request_body_size max_request_body_size \u503c\u8bbe\u7f6e\u4e3a \u3002 \u5931\u8d25\uff1a\u5982\u679c section in \u4e0b\u7684\u53c2\u6570\u503c\u672a\u8bbe\u7f6e\u4e3a 114688 \uff0c 114688 \u6216\u8005 section in /etc/cinder/cinder.conf /etc/cinder/cinder.conf \u4e0b\u7684 [oslo_middleware] [DEFAULT] \u53c2\u6570 osapi_max_request_body_size max_request_body_size \u503c\u672a\u8bbe\u7f6e\u4e3a \u3002","title":"Check-Block-08\uff1a\u8bf7\u6c42\u6b63\u6587\u7684\u6700\u5927\u5927\u5c0f\u662f\u5426\u8bbe\u7f6e\u4e3a\u9ed8\u8ba4\u503c \uff08114688\uff09\uff1f"},{"location":"security/security-guide/#check-block-09","text":"\u672a\u52a0\u5bc6\u7684\u5377\u6570\u636e\u4f7f\u5377\u6258\u7ba1\u5e73\u53f0\u6210\u4e3a\u653b\u51fb\u8005\u7279\u522b\u9ad8\u4ef7\u503c\u7684\u76ee\u6807\uff0c\u56e0\u4e3a\u5b83\u5141\u8bb8\u653b\u51fb\u8005\u8bfb\u53d6\u8bb8\u591a\u4e0d\u540c VM \u7684\u6570\u636e\u3002\u6b64\u5916\uff0c\u7269\u7406\u5b58\u50a8\u4ecb\u8d28\u53ef\u80fd\u4f1a\u88ab\u7a83\u53d6\u3001\u91cd\u65b0\u88c5\u8f7d\u548c\u4ece\u53e6\u4e00\u53f0\u8ba1\u7b97\u673a\u8bbf\u95ee\u3002\u52a0\u5bc6\u5377\u6570\u636e\u53ef\u4ee5\u964d\u4f4e\u8fd9\u4e9b\u98ce\u9669\uff0c\u5e76\u4e3a\u5377\u6258\u7ba1\u5e73\u53f0\u63d0\u4f9b\u6df1\u5ea6\u9632\u5fa1\u3002\u5757\u5b58\u50a8 \uff08cinder\uff09 \u80fd\u591f\u5728\u5c06\u5377\u6570\u636e\u5199\u5165\u78c1\u76d8\u4e4b\u524d\u5bf9\u5176\u8fdb\u884c\u52a0\u5bc6\uff0c\u56e0\u6b64\u5efa\u8bae\u5f00\u542f\u5377\u52a0\u5bc6\u529f\u80fd\u3002\u6709\u5173\u8bf4\u660e\uff0c\u8bf7\u53c2\u9605 Openstack Cinder \u670d\u52a1\u914d\u7f6e\u6587\u6863\u7684\u5377\u52a0\u5bc6\u90e8\u5206\u3002 \u901a\u8fc7\uff1a\u5982\u679c 1\uff09 \u8bbe\u7f6e\u4e86 in [key_manager] \u90e8\u5206\u4e0b\u7684\u53c2\u6570\u503c\uff0c2\uff09 \u8bbe\u7f6e\u4e86 in \u4e0b\u7684 [key_manager] \u53c2\u6570 backend backend \u503c\uff0c\u4ee5\u53ca 3\uff09 \u5982\u679c\u6b63\u786e\u9075\u5faa\u4e86 /etc/cinder/cinder.conf /etc/nova/nova.conf \u4e0a\u8ff0\u6587\u6863\u4e2d\u7684\u8bf4\u660e\u3002 \u82e5\u8981\u8fdb\u4e00\u6b65\u9a8c\u8bc1\uff0c\u8bf7\u5728\u5b8c\u6210\u5377\u52a0\u5bc6\u8bbe\u7f6e\u5e76\u4e3a LUKS \u521b\u5efa\u5377\u7c7b\u578b\u540e\u6267\u884c\u8fd9\u4e9b\u6b65\u9aa4\uff0c\u5982\u4e0a\u8ff0\u6587\u6863\u4e2d\u6240\u8ff0\u3002 \u521b\u5efa VM\uff1a $ openstack server create --image cirros-0.3.1-x86_64-disk --flavor m1.tiny TESTVM \u521b\u5efa\u52a0\u5bc6\u5377\u5e76\u5c06\u5176\u9644\u52a0\u5230 
VM\uff1a $ openstack volume create --size 1 --type LUKS 'encrypted volume' $ openstack volume list $ openstack server add volume --device /dev/vdb TESTVM 'encrypted volume' \u5728 VM \u4e0a\uff0c\u5c06\u4e00\u4e9b\u6587\u672c\u53d1\u9001\u5230\u65b0\u9644\u52a0\u7684\u5377\u5e76\u540c\u6b65\u5b83\uff1a # echo \"Hello, world (encrypted /dev/vdb)\" >> /dev/vdb # sync && sleep 2 \u5728\u6258\u7ba1 cinder \u5377\u670d\u52a1\u7684\u7cfb\u7edf\u4e0a\uff0c\u540c\u6b65\u4ee5\u5237\u65b0 I/O \u7f13\u5b58\uff0c\u7136\u540e\u6d4b\u8bd5\u662f\u5426\u53ef\u4ee5\u627e\u5230\u5b57\u7b26\u4e32\uff1a # sync && sleep 2 # strings /dev/stack-volumes/volume-* | grep \"Hello\" \u641c\u7d22\u4e0d\u5e94\u8fd4\u56de\u5199\u5165\u52a0\u5bc6\u5377\u7684\u5b57\u7b26\u4e32\u3002 \u5931\u8d25\uff1a\u5982\u679c\u672a\u8bbe\u7f6e in \u90e8\u5206\u4e0b\u7684\u53c2\u6570\u503c\uff0c\u6216\u8005\u672a\u8bbe\u7f6e in /etc/cinder/cinder.conf /etc/nova/nova.conf \u90e8\u5206\u4e0b\u7684 [key_manager] [key_manager] \u53c2\u6570 backend backend \u503c\uff0c\u6216\u8005\u672a\u6b63\u786e\u9075\u5faa\u4e0a\u8ff0\u6587\u6863\u4e2d\u7684\u8bf4\u660e\u3002","title":"Check-Block-09\uff1a\u662f\u5426\u542f\u7528\u4e86\u5377\u52a0\u5bc6\u529f\u80fd\uff1f"},{"location":"security/security-guide/#_170","text":"OpenStack Image Storage \uff08glance\uff09 \u662f\u4e00\u9879\u670d\u52a1\uff0c\u7528\u6237\u53ef\u4ee5\u5728\u5176\u4e2d\u4e0a\u4f20\u548c\u53d1\u73b0\u65e8\u5728\u4e0e\u5176\u4ed6\u670d\u52a1\u4e00\u8d77\u4f7f\u7528\u7684\u6570\u636e\u8d44\u4ea7\u3002\u8fd9\u76ee\u524d\u5305\u62ec\u56fe\u50cf\u548c\u5143\u6570\u636e\u5b9a\u4e49\u3002 \u6620\u50cf\u670d\u52a1\u5305\u62ec\u53d1\u73b0\u3001\u6ce8\u518c\u548c\u68c0\u7d22\u865a\u62df\u673a\u6620\u50cf\u3002Glance \u6709\u4e00\u4e2a RESTful API\uff0c\u5141\u8bb8\u67e5\u8be2 VM \u6620\u50cf\u5143\u6570\u636e\u4ee5\u53ca\u68c0\u7d22\u5b9e\u9645\u6620\u50cf\u3002 \u6709\u5173\u8be5\u670d\u52a1\u7684\u66f4\u591a\u8be6\u7ec6\u4fe1\u606f\uff0c\u8bf7\u53c2\u9605 OpenStack Glance \u6587\u6863\u3002 \u68c0\u67e5\u8868 Check-Image-01\uff1a\u914d\u7f6e\u6587\u4ef6\u7684\u7528\u6237/\u7ec4\u6240\u6709\u6743\u662f\u5426\u8bbe\u7f6e\u4e3a root/glance\uff1f Check-Image-02\uff1a\u662f\u5426\u4e3a\u914d\u7f6e\u6587\u4ef6\u8bbe\u7f6e\u4e86\u4e25\u683c\u7684\u6743\u9650\uff1f Check-Image-03\uff1aKeystone \u662f\u5426\u7528\u4e8e\u8eab\u4efd\u9a8c\u8bc1\uff1f Check-Image-04\uff1a\u662f\u5426\u542f\u7528\u4e86 TLS \u8fdb\u884c\u8eab\u4efd\u9a8c\u8bc1\uff1f Check-Image-05\uff1a\u662f\u5426\u963b\u6b62\u4e86\u5c4f\u853d\u7aef\u53e3\u626b\u63cf\uff1f \u6ce8\u610f 
\u867d\u7136\u672c\u7ae0\u76ee\u524d\u5bf9\u5177\u4f53\u6307\u5357\u7684\u4ecb\u7ecd\u5f88\u5c11\uff0c\u4f46\u9884\u8ba1\u5c06\u9075\u5faa\u6807\u51c6\u7684\u5f3a\u5316\u5b9e\u8df5\u3002\u672c\u8282\u5c06\u6269\u5c55\u76f8\u5173\u4fe1\u606f\u3002","title":"\u56fe\u50cf\u5b58\u50a8"},{"location":"security/security-guide/#_171","text":"","title":"\u68c0\u67e5\u8868"},{"location":"security/security-guide/#check-image-01-rootglance","text":"\u914d\u7f6e\u6587\u4ef6\u5305\u542b\u7ec4\u4ef6\u5e73\u7a33\u8fd0\u884c\u6240\u9700\u7684\u5173\u952e\u53c2\u6570\u548c\u4fe1\u606f\u3002\u5982\u679c\u975e\u7279\u6743\u7528\u6237\u6709\u610f\u6216\u65e0\u610f\u5730\u4fee\u6539\u6216\u5220\u9664\u4efb\u4f55\u53c2\u6570\u6216\u6587\u4ef6\u672c\u8eab\uff0c\u5219\u4f1a\u5bfc\u81f4\u4e25\u91cd\u7684\u53ef\u7528\u6027\u95ee\u9898\uff0c\u4ece\u800c\u5bfc\u81f4\u62d2\u7edd\u5411\u5176\u4ed6\u6700\u7ec8\u7528\u6237\u63d0\u4f9b\u670d\u52a1\u3002\u56e0\u6b64\uff0c\u5fc5\u987b\u5c06\u6b64\u7c7b\u5173\u952e\u914d\u7f6e\u6587\u4ef6\u7684\u7528\u6237\u6240\u6709\u6743\u8bbe\u7f6e\u4e3a glance \uff0c root \u5e76\u4e14\u5fc5\u987b\u5c06\u7ec4\u6240\u6709\u6743\u8bbe\u7f6e\u4e3a \u3002\u6b64\u5916\uff0c\u5305\u542b\u76ee\u5f55\u5e94\u5177\u6709\u76f8\u540c\u7684\u6240\u6709\u6743\uff0c\u4ee5\u786e\u4fdd\u6b63\u786e\u62e5\u6709\u65b0\u6587\u4ef6\u3002 \u8fd0\u884c\u4ee5\u4e0b\u547d\u4ee4\uff1a $ stat -L -c \"%U %G\" /etc/glance/glance-api-paste.ini | egrep \"root glance\" $ stat -L -c \"%U %G\" /etc/glance/glance-api.conf | egrep \"root glance\" $ stat -L -c \"%U %G\" /etc/glance/glance-cache.conf | egrep \"root glance\" $ stat -L -c \"%U %G\" /etc/glance/glance-manage.conf | egrep \"root glance\" $ stat -L -c \"%U %G\" /etc/glance/glance-registry-paste.ini | egrep \"root glance\" $ stat -L -c \"%U %G\" /etc/glance/glance-registry.conf | egrep \"root glance\" $ stat -L -c \"%U %G\" /etc/glance/glance-scrubber.conf | egrep \"root glance\" $ stat -L -c \"%U %G\" /etc/glance/glance-swift-store.conf | egrep \"root glance\" $ stat -L -c \"%U %G\" /etc/glance/policy.json | egrep \"root glance\" $ stat -L -c \"%U %G\" /etc/glance/schema-image.json | egrep \"root glance\" $ stat -L -c \"%U %G\" /etc/glance/schema.json | egrep \"root glance\" $ stat -L -c \"%U %G\" /etc/glance | egrep \"root glance\" \u901a\u8fc7\uff1a\u5982\u679c\u6240\u6709\u8fd9\u4e9b\u914d\u7f6e\u6587\u4ef6\u7684\u7528\u6237\u548c\u7ec4\u6240\u6709\u6743\u5206\u522b\u8bbe\u7f6e\u4e3a root \u548c glance\u3002\u4e0a\u9762\u7684\u547d\u4ee4\u663e\u793a\u4e86 root glance \u7684\u8f93\u51fa\u3002 \u5931\u8d25\uff1a\u5982\u679c\u4e0a\u8ff0\u547d\u4ee4\u4e0d\u8fd4\u56de\u4efb\u4f55\u8f93\u51fa\u3002","title":"Check-Image-01\uff1a\u914d\u7f6e\u6587\u4ef6\u7684\u7528\u6237/\u7ec4\u6240\u6709\u6743\u662f\u5426\u8bbe\u7f6e\u4e3a root/glance\uff1f"},{"location":"security/security-guide/#check-image-02","text":"\u4e0e\u524d\u9762\u7684\u68c0\u67e5\u7c7b\u4f3c\uff0c\u6211\u4eec\u5efa\u8bae\u60a8\u4e3a\u6b64\u7c7b\u914d\u7f6e\u6587\u4ef6\u8bbe\u7f6e\u4e25\u683c\u7684\u8bbf\u95ee\u6743\u9650\u3002 \u8fd0\u884c\u4ee5\u4e0b\u547d\u4ee4\uff1a $ stat -L -c \"%a\" /etc/glance/glance-api-paste.ini $ stat -L -c \"%a\" /etc/glance/glance-api.conf $ stat -L -c \"%a\" /etc/glance/glance-cache.conf $ stat -L -c \"%a\" /etc/glance/glance-manage.conf $ stat -L -c \"%a\" /etc/glance/glance-registry-paste.ini $ stat -L -c \"%a\" /etc/glance/glance-registry.conf $ stat -L -c \"%a\" /etc/glance/glance-scrubber.conf $ stat -L -c \"%a\" /etc/glance/glance-swift-store.conf $ stat -L -c \"%a\" 
/etc/glance/policy.json $ stat -L -c \"%a\" /etc/glance/schema-image.json $ stat -L -c \"%a\" /etc/glance/schema.json $ stat -L -c \"%a\" /etc/glance \u8fd8\u53ef\u4ee5\u8fdb\u884c\u66f4\u5e7f\u6cdb\u7684\u9650\u5236\uff1a\u5982\u679c\u5305\u542b\u76ee\u5f55\u8bbe\u7f6e\u4e3a 750\uff0c\u5219\u4fdd\u8bc1\u6b64\u76ee\u5f55\u4e2d\u65b0\u521b\u5efa\u7684\u6587\u4ef6\u5177\u6709\u6240\u9700\u7684\u6743\u9650\u3002 \u901a\u8fc7\uff1a\u5982\u679c\u6743\u9650\u8bbe\u7f6e\u4e3a 640 \u6216\u66f4\u4e25\u683c\uff0c\u6216\u8005\u5305\u542b\u76ee\u5f55\u8bbe\u7f6e\u4e3a 750\u3002640/750 \u7684\u6743\u9650\u8f6c\u6362\u4e3a\u6240\u6709\u8005 r/w\u3001\u7ec4 r\uff0c\u800c\u5bf9\u5176\u4ed6\u4eba\u6ca1\u6709\u6743\u9650\u3002\u4f8b\u5982\uff0c u=rw,g=r,o= . \u6ce8\u610f \u4f7f\u7528 Check-Image-01\uff1a Devices / Group Ownership of config files \u662f\u5426\u8bbe\u7f6e\u4e3a root/glance\uff1f\uff0c\u6743\u9650\u8bbe\u7f6e\u4e3a 640\uff0c\u5219 root \u5177\u6709\u8bfb/\u5199\u8bbf\u95ee\u6743\u9650\uff0cglance \u5177\u6709\u5bf9\u8fd9\u4e9b\u914d\u7f6e\u6587\u4ef6\u7684\u8bfb\u53d6\u8bbf\u95ee\u6743\u9650\u3002\u4e5f\u53ef\u4ee5\u4f7f\u7528\u4ee5\u4e0b\u547d\u4ee4\u9a8c\u8bc1\u8bbf\u95ee\u6743\u9650\u3002\u4ec5\u5f53\u6b64\u547d\u4ee4\u652f\u6301 ACL \u65f6\uff0c\u5b83\u624d\u5728\u60a8\u7684\u7cfb\u7edf\u4e0a\u53ef\u7528\u3002 $ getfacl --tabular -a /etc/glance/glance-api.conf getfacl: Removing leading '/' from absolute path names # file: /etc/glance/glance-api.conf USER root rw- GROUP glance r-- mask r-- other --- \u5931\u8d25\uff1a\u5982\u679c\u6743\u9650\u672a\u8bbe\u7f6e\u4e3a\u81f3\u5c11 640\u3002","title":"Check-Image-02\uff1a\u662f\u5426\u4e3a\u914d\u7f6e\u6587\u4ef6\u8bbe\u7f6e\u4e86\u4e25\u683c\u7684\u6743\u9650\uff1f"},{"location":"security/security-guide/#check-image-03keystone","text":"\u6ce8\u610f \u6b64\u9879\u4ec5\u9002\u7528\u4e8e OpenStack \u7248\u672c Rocky \u53ca\u4e4b\u524d\u7248\u672c\uff0c\u56e0\u4e3a `auth_strategy` Stein \u4e2d\u5df2\u5f03\u7528\u3002 OpenStack \u652f\u6301\u5404\u79cd\u8eab\u4efd\u9a8c\u8bc1\u7b56\u7565\uff0c\u5305\u62ec noauth \u548c keystone\u3002\u5982\u679c\u4f7f\u7528\u8be5 noauth \u7b56\u7565\uff0c\u5219\u7528\u6237\u65e0\u9700\u4efb\u4f55\u8eab\u4efd\u9a8c\u8bc1\u5373\u53ef\u4e0e OpenStack \u670d\u52a1\u8fdb\u884c\u4ea4\u4e92\u3002\u8fd9\u53ef\u80fd\u662f\u4e00\u4e2a\u6f5c\u5728\u7684\u98ce\u9669\uff0c\u56e0\u4e3a\u653b\u51fb\u8005\u53ef\u80fd\u4f1a\u83b7\u5f97\u5bf9 OpenStack \u7ec4\u4ef6\u7684\u672a\u7ecf\u6388\u6743\u7684\u8bbf\u95ee\u3002\u6211\u4eec\u5f3a\u70c8\u5efa\u8bae\u6240\u6709\u670d\u52a1\u90fd\u5fc5\u987b\u4f7f\u7528\u5176\u670d\u52a1\u5e10\u6237\u901a\u8fc7 keystone \u8fdb\u884c\u8eab\u4efd\u9a8c\u8bc1\u3002 \u901a\u8fc7\uff1a\u5982\u679c section in \u4e0b\u7684\u53c2\u6570\u503c\u8bbe\u7f6e\u4e3a \uff0c keystone \u5e76\u4e14 section in /etc/glance/glance-api.conf /etc/glance /glance-registry.conf \u4e0b\u7684 [DEFAULT] [DEFAULT] \u53c2\u6570 auth_strategy auth_strategy \u503c\u8bbe\u7f6e\u4e3a keystone \u3002 \u5931\u8d25\uff1a\u5982\u679c section in \u4e0b\u7684\u53c2\u6570\u503c\u8bbe\u7f6e\u4e3a noauth \u6216 section in /etc/glance/glance-api.conf /etc/glance/glance- registry.conf \u4e0b\u7684 [DEFAULT] [DEFAULT] \u53c2\u6570 auth_strategy auth_strategy \u503c\u8bbe\u7f6e\u4e3a noauth \u3002","title":"Check-Image-03\uff1aKeystone \u662f\u5426\u7528\u4e8e\u8eab\u4efd\u9a8c\u8bc1\uff1f"},{"location":"security/security-guide/#check-image-04-tls","text":"OpenStack 
\u7ec4\u4ef6\u4f7f\u7528\u5404\u79cd\u534f\u8bae\u76f8\u4e92\u901a\u4fe1\uff0c\u901a\u4fe1\u53ef\u80fd\u6d89\u53ca\u654f\u611f\u6216\u673a\u5bc6\u6570\u636e\u3002\u653b\u51fb\u8005\u53ef\u80fd\u4f1a\u5c1d\u8bd5\u7a83\u542c\u9891\u9053\u4ee5\u8bbf\u95ee\u654f\u611f\u4fe1\u606f\u3002\u6240\u6709\u7ec4\u4ef6\u5fc5\u987b\u4f7f\u7528\u5b89\u5168\u7684\u901a\u4fe1\u534f\u8bae\u76f8\u4e92\u901a\u4fe1\u3002 \u901a\u8fc7\uff1a\u5982\u679c section in \u4e0b\u7684\u53c2\u6570\u503c\u8bbe\u7f6e\u4e3a\u4ee5 \u5f00\u5934\u7684 Identity API \u7aef\u70b9 https:// \uff0c\u5e76\u4e14\u8be5\u53c2\u6570 insecure www_authenticate_uri \u7684\u503c\u4f4d\u4e8e same /etc/glance/glance-registry.conf \u4e2d\u7684\u540c\u4e00 [keystone_authtoken] \u90e8\u5206\u4e0b\uff0c\u5219\u8bbe\u7f6e\u4e3a False \u3002 [keystone_authtoken] /etc/glance/glance-api.conf \u5931\u8d25\uff1a\u5982\u679c \u4e2d\u7684 /etc/glance/glance-api.conf \u90e8\u5206\u4e0b\u7684 [keystone_authtoken] \u53c2\u6570 www_authenticate_uri \u503c\u672a\u8bbe\u7f6e\u4e3a\u4ee5 https:// \u5f00\u5934\u7684\u6807\u8bc6 API \u7aef\u70b9\uff0c\u6216\u8005\u540c\u4e00 /etc/glance/glance-api.conf \u90e8\u5206\u4e2d\u7684\u53c2\u6570 insecure [keystone_authtoken] \u503c\u8bbe\u7f6e\u4e3a True \u3002","title":"Check-Image-04\uff1a\u662f\u5426\u542f\u7528\u4e86 TLS \u8fdb\u884c\u8eab\u4efd\u9a8c\u8bc1\uff1f"},{"location":"security/security-guide/#check-image-05","text":"Glance \u63d0\u4f9b\u7684\u6620\u50cf\u670d\u52a1 API v1 \u4e2d\u7684 copy_from \u529f\u80fd\u53ef\u5141\u8bb8\u653b\u51fb\u8005\u6267\u884c\u5c4f\u853d\u7684\u7f51\u7edc\u7aef\u53e3\u626b\u63cf\u3002\u5982\u679c\u542f\u7528\u4e86 v1 API\uff0c\u5219\u5e94\u5c06\u6b64\u7b56\u7565\u8bbe\u7f6e\u4e3a\u53d7\u9650\u503c\u3002 \u901a\u8fc7\uff1a\u5982\u679c\u53c2\u6570 copy_from in /etc/glance/policy.json \u7684\u503c\u8bbe\u7f6e\u4e3a\u53d7\u9650\u503c\uff0c\u4f8b\u5982 role:admin . 
\u5931\u8d25\uff1a\u672a\u8bbe\u7f6e\u53c2\u6570 copy_from in /etc/glance/policy.json \u7684\u503c\u3002","title":"Check-Image-05\uff1a\u662f\u5426\u963b\u6b62\u4e86\u5c4f\u853d\u7aef\u53e3\u626b\u63cf\uff1f"},{"location":"security/security-guide/#_172","text":"\u5171\u4eab\u6587\u4ef6\u7cfb\u7edf\u670d\u52a1\uff08manila\uff09\u63d0\u4f9b\u4e86\u4e00\u7ec4\u670d\u52a1\uff0c\u7528\u4e8e\u7ba1\u7406\u591a\u79df\u6237\u4e91\u73af\u5883\u4e2d\u7684\u5171\u4eab\u6587\u4ef6\u7cfb\u7edf\u3002\u5b83\u7c7b\u4f3c\u4e8eOpenStack\u901a\u8fc7OpenStack\u5757\u5b58\u50a8\u670d\u52a1\uff08cinder\uff09\u9879\u76ee\u63d0\u4f9b\u57fa\u4e8e\u5757\u7684\u5b58\u50a8\u7ba1\u7406\u7684\u65b9\u5f0f\u3002\u4f7f\u7528\u5171\u4eab\u6587\u4ef6\u7cfb\u7edf\u670d\u52a1\uff0c\u60a8\u53ef\u4ee5\u521b\u5efa\u5171\u4eab\u6587\u4ef6\u7cfb\u7edf\u5e76\u7ba1\u7406\u5176\u5c5e\u6027\uff0c\u4f8b\u5982\u53ef\u89c1\u6027\u3001\u53ef\u8bbf\u95ee\u6027\u548c\u4f7f\u7528\u914d\u989d\u3002 \u5171\u4eab\u6587\u4ef6\u7cfb\u7edf\u670d\u52a1\u9002\u7528\u4e8e\u4f7f\u7528\u4ee5\u4e0b\u5171\u4eab\u6587\u4ef6\u7cfb\u7edf\u534f\u8bae\u7684\u5404\u79cd\u5b58\u50a8\u63d0\u4f9b\u7a0b\u5e8f\uff1aNFS\u3001CIFS\u3001GlusterFS \u548c HDFS\u3002 \u5171\u4eab\u6587\u4ef6\u7cfb\u7edf\u670d\u52a1\u7684\u7528\u9014\u4e0e Amazon Elastic File System \uff08EFS\uff09 \u76f8\u540c\u3002 \u4ecb\u7ecd \u4e00\u822c\u5b89\u5168\u4fe1\u606f \u7f51\u7edc\u548c\u5b89\u5168\u6a21\u578b \u5171\u4eab\u540e\u7aef\u6a21\u5f0f \u6241\u5e73\u5316\u7f51\u7edc\u4e0e\u5206\u6bb5\u5316\u7f51\u7edc \u7f51\u7edc\u63d2\u4ef6 \u5b89\u5168\u670d\u52a1 \u5b89\u5168\u670d\u52a1\u7b80\u4ecb \u5b89\u5168\u670d\u52a1\u7ba1\u7406 \u5171\u4eab\u8bbf\u95ee\u63a7\u5236 \u5171\u4eab\u7c7b\u578b\u8bbf\u95ee\u63a7\u5236 \u653f\u7b56 \u68c0\u67e5\u8868 Check-Shared-01\uff1a\u914d\u7f6e\u6587\u4ef6\u7684\u7528\u6237/\u7ec4\u6240\u6709\u6743\u662f\u5426\u8bbe\u7f6e\u4e3a root/manila\uff1f Check-Shared-02\uff1a\u662f\u5426\u4e3a\u914d\u7f6e\u6587\u4ef6\u8bbe\u7f6e\u4e86\u4e25\u683c\u7684\u6743\u9650\uff1f Check-Shared-03\uff1aOpenStack Identity \u662f\u5426\u7528\u4e8e\u8eab\u4efd\u9a8c\u8bc1\uff1f Check-Shared-04\uff1a\u662f\u5426\u542f\u7528\u4e86 TLS \u8fdb\u884c\u8eab\u4efd\u9a8c\u8bc1\uff1f Check-Shared-05\uff1a\u5171\u4eab\u6587\u4ef6\u7cfb\u7edf\u662f\u5426\u901a\u8fc7 TLS \u4e0e\u8ba1\u7b97\u8054\u7cfb\uff1f Check-Shared-06\uff1a\u5171\u4eab\u6587\u4ef6\u7cfb\u7edf\u662f\u5426\u901a\u8fc7 TLS \u4e0e\u7f51\u7edc\u8054\u7cfb\uff1f Check-Shared-07\uff1a\u5171\u4eab\u6587\u4ef6\u7cfb\u7edf\u662f\u5426\u901a\u8fc7 TLS \u4e0e\u5757\u5b58\u50a8\u8054\u7cfb\uff1f Check-Shared-08\uff1a\u8bf7\u6c42\u6b63\u6587\u7684\u6700\u5927\u5927\u5c0f\u662f\u5426\u8bbe\u7f6e\u4e3a\u9ed8\u8ba4\u503c \uff08114688\uff09\uff1f","title":"\u5171\u4eab\u6587\u4ef6\u7cfb\u7edf"},{"location":"security/security-guide/#_173","text":"\u5171\u4eab\u6587\u4ef6\u7cfb\u7edf\u670d\u52a1\uff08\u9a6c\u5c3c\u62c9\uff09\u65e8\u5728\u5728\u5355\u8282\u70b9\u6216\u8de8\u591a\u4e2a\u8282\u70b9\u8fd0\u884c\u3002\u5171\u4eab\u6587\u4ef6\u7cfb\u7edf\u670d\u52a1\u7531\u56db\u4e2a\u4e3b\u8981\u670d\u52a1\u7ec4\u6210\uff0c\u5b83\u4eec\u7c7b\u4f3c\u4e8e\u5757\u5b58\u50a8\u670d\u52a1\uff1a manila-api manila-scheduler manila-share manila-data manila-api \u63d0\u4f9b\u7a33\u5b9a RESTful API \u7684\u670d\u52a1\u3002\u8be5\u670d\u52a1\u5728\u6574\u4e2a\u5171\u4eab\u6587\u4ef6\u7cfb\u7edf\u670d\u52a1\u4e2d\u5bf9\u8bf7\u6c42\u8fdb\u884c\u8eab\u4efd\u9a8c\u8bc1\u548c\u8def\u7531\u3002\u6709 python-manilaclient \u53ef\u4ee5\u4e0e API 
\u4ea4\u4e92\u3002\u6709\u5173\u5171\u4eab\u6587\u4ef6\u7cfb\u7edf API \u7684\u66f4\u591a\u8be6\u7ec6\u4fe1\u606f\uff0c\u8bf7\u53c2\u9605 OpenStack \u5171\u4eab\u6587\u4ef6\u7cfb\u7edf API\u3002 manila-share \u8d1f\u8d23\u7ba1\u7406\u5171\u4eab\u6587\u4ef6\u670d\u52a1\u8bbe\u5907\uff0c\u7279\u522b\u662f\u540e\u7aef\u8bbe\u5907\u3002 manila-scheduler \u8d1f\u8d23\u5b89\u6392\u8bf7\u6c42\u5e76\u5c06\u5176\u8def\u7531\u5230\u76f8\u5e94\u7684 manila-share \u670d\u52a1\u3002\u5b83\u901a\u8fc7\u9009\u62e9\u4e00\u4e2a\u540e\u7aef\uff0c\u540c\u65f6\u8fc7\u6ee4\u9664\u4e00\u4e2a\u540e\u7aef\u4e4b\u5916\u7684\u6240\u6709\u540e\u7aef\u6765\u5b9e\u73b0\u8fd9\u4e00\u70b9\u3002 manila-data \u6b64\u670d\u52a1\u8d1f\u8d23\u7ba1\u7406\u6570\u636e\u64cd\u4f5c\uff0c\u5982\u679c\u4e0d\u5355\u72ec\u5904\u7406\uff0c\u53ef\u80fd\u9700\u8981\u5f88\u957f\u65f6\u95f4\u624d\u80fd\u5b8c\u6210\uff0c\u5e76\u963b\u6b62\u5176\u4ed6\u670d\u52a1\u3002 \u5171\u4eab\u6587\u4ef6\u7cfb\u7edf\u670d\u52a1\u4f7f\u7528\u57fa\u4e8e SQL \u7684\u4e2d\u592e\u6570\u636e\u5e93\uff0c\u8be5\u6570\u636e\u5e93\u7531\u7cfb\u7edf\u4e2d\u7684\u6240\u6709\u5171\u4eab\u6587\u4ef6\u7cfb\u7edf\u670d\u52a1\u5171\u4eab\u3002\u5b83\u53ef\u4ee5\u4f7f\u7528 ORM SQLALcvery \u652f\u6301\u7684\u4efb\u4f55 SQL \u65b9\u8a00\uff0c\u4f46\u4ec5\u4f7f\u7528 MySQL \u548c PostgreSQL \u6570\u636e\u5e93\u8fdb\u884c\u6d4b\u8bd5\u3002 \u4f7f\u7528 SQL\uff0c\u5171\u4eab\u6587\u4ef6\u7cfb\u7edf\u670d\u52a1\u7c7b\u4f3c\u4e8e\u5176\u4ed6 OpenStack \u670d\u52a1\uff0c\u53ef\u4ee5\u4e0e\u4efb\u4f55 OpenStack \u90e8\u7f72\u4e00\u8d77\u4f7f\u7528\u3002\u6709\u5173 API \u7684\u66f4\u591a\u8be6\u7ec6\u4fe1\u606f\uff0c\u8bf7\u53c2\u9605 OpenStack \u5171\u4eab\u6587\u4ef6\u7cfb\u7edf API \u8bf4\u660e\u3002\u6709\u5173 CLI \u7528\u6cd5\u548c\u914d\u7f6e\u7684\u66f4\u591a\u8be6\u7ec6\u4fe1\u606f\uff0c\u8bf7\u53c2\u9605\u5171\u4eab\u6587\u4ef6\u7cfb\u7edf\u4e91\u7ba1\u7406\u6307\u5357\u3002 \u4e0b\u56fe\u4e2d\uff0c\u60a8\u53ef\u4ee5\u770b\u5230\u5171\u4eab\u6587\u4ef6\u7cfb\u7edf\u670d\u52a1\u7684\u4e0d\u540c\u90e8\u5206\u5982\u4f55\u76f8\u4e92\u4ea4\u4e92\u3002 \u9664\u4e86\u5df2\u7ecf\u63cf\u8ff0\u7684\u670d\u52a1\u4e4b\u5916\uff0c\u60a8\u8fd8\u53ef\u4ee5\u5728\u56fe\u50cf\u4e0a\u770b\u5230\u53e6\u5916\u4e24\u4e2a\u5b9e\u4f53\uff1a python-manilaclient \u548c storage controller \u3002 python-manilaclient \u547d\u4ee4\u884c\u754c\u9762\uff0c\u7528\u4e8e\u901a\u8fc7 manila-api \u4e0e\u5171\u4eab\u6587\u4ef6\u7cfb\u7edf\u670d\u52a1\u8fdb\u884c\u4ea4\u4e92\uff0c\u4ee5\u53ca\u7528\u4e8e\u4ee5\u7f16\u7a0b\u65b9\u5f0f\u4e0e\u5171\u4eab\u6587\u4ef6\u7cfb\u7edf\u670d\u52a1\u4ea4\u4e92\u7684 Python \u6a21\u5757\u3002 Storage controller \u901a\u5e38\u662f\u4e00\u4e2a\u91d1\u5c5e\u76d2\uff0c\u5e26\u6709\u65cb\u8f6c\u78c1\u76d8\u3001\u4ee5\u592a\u7f51\u7aef\u53e3\u548c\u67d0\u79cd\u8f6f\u4ef6\uff0c\u5141\u8bb8\u7f51\u7edc\u5ba2\u6237\u7aef\u5728\u78c1\u76d8\u4e0a\u8bfb\u53d6\u548c\u5199\u5165\u6587\u4ef6\u3002\u8fd8\u6709\u4e00\u4e9b\u5728\u4efb\u610f\u786c\u4ef6\u4e0a\u8fd0\u884c\u7684\u7eaf\u8f6f\u4ef6\u5b58\u50a8\u63a7\u5236\u5668\uff0c\u7fa4\u96c6\u63a7\u5236\u5668\u53ef\u80fd\u5141\u8bb8\u591a\u4e2a\u7269\u7406\u8bbe\u5907\u663e\u793a\u4e3a\u5355\u4e2a\u5b58\u50a8\u63a7\u5236\u5668\uff0c\u6216\u7eaf\u865a\u62df\u5b58\u50a8\u63a7\u5236\u5668\u3002 
\u5171\u4eab\u662f\u8fdc\u7a0b\u7684\u3001\u53ef\u88c5\u8f7d\u7684\u6587\u4ef6\u7cfb\u7edf\u3002\u60a8\u53ef\u4ee5\u4e00\u6b21\u5c06\u5171\u4eab\u88c5\u8f7d\u5230\u591a\u4e2a\u4e3b\u673a\uff0c\u4e5f\u53ef\u4ee5\u7531\u591a\u4e2a\u7528\u6237\u4ece\u591a\u4e2a\u4e3b\u673a\u8bbf\u95ee\u5171\u4eab\u3002 \u5171\u4eab\u6587\u4ef6\u7cfb\u7edf\u670d\u52a1\u53ef\u4ee5\u4f7f\u7528\u4e0d\u540c\u7684\u7f51\u7edc\u7c7b\u578b\uff1a\u6241\u5e73\u7f51\u7edc\u3001VLAN\u3001VXLAN \u6216 GRE\uff0c\u5e76\u652f\u6301\u5206\u6bb5\u7f51\u7edc\u3002\u6b64\u5916\uff0c\u8fd8\u6709\u4e0d\u540c\u7684\u7f51\u7edc\u63d2\u4ef6\uff0c\u5b83\u4eec\u63d0\u4f9b\u4e86\u4e0e OpenStack \u63d0\u4f9b\u7684\u7f51\u7edc\u670d\u52a1\u7684\u5404\u79cd\u96c6\u6210\u65b9\u6cd5\u3002 \u4e0d\u540c\u4f9b\u5e94\u5546\u521b\u5efa\u4e86\u5927\u91cf\u5171\u4eab\u9a71\u52a8\u7a0b\u5e8f\uff0c\u8fd9\u4e9b\u9a71\u52a8\u7a0b\u5e8f\u652f\u6301\u4e0d\u540c\u7684\u786c\u4ef6\u5b58\u50a8\u89e3\u51b3\u65b9\u6848\uff0c\u4f8b\u5982 NetApp \u96c6\u7fa4\u6a21\u5f0f Data ONTAP \uff08 cDOT \uff09\u9a71\u52a8\u7a0b\u5e8f\uff0c\u534e\u4e3a NAS \u9a71\u52a8\u7a0b\u5e8f\u6216 GlusterFS \u9a71\u52a8\u7a0b\u5e8f\u3002\u6bcf\u4e2a\u5171\u4eab\u9a71\u52a8\u7a0b\u5e8f\u90fd\u662f\u4e00\u4e2a Python \u7c7b\uff0c\u53ef\u4ee5\u4e3a\u540e\u7aef\u8bbe\u7f6e\u5e76\u5728\u540e\u7aef\u8fd0\u884c\u4ee5\u7ba1\u7406\u5171\u4eab\u64cd\u4f5c\uff0c\u5176\u4e2d\u4e00\u4e9b\u64cd\u4f5c\u53ef\u80fd\u662f\u7279\u5b9a\u4e8e\u4f9b\u5e94\u5546\u7684\u3002\u540e\u7aef\u662f manila-share \u670d\u52a1\u7684\u4e00\u4e2a\u5b9e\u4f8b\u3002 \u5ba2\u6237\u7aef\u7528\u4e8e\u8eab\u4efd\u9a8c\u8bc1\u548c\u6388\u6743\u7684\u914d\u7f6e\u6570\u636e\u53ef\u4ee5\u7531\u5b89\u5168\u670d\u52a1\u5b58\u50a8\u3002\u53ef\u4ee5\u914d\u7f6e\u548c\u4f7f\u7528 LDAP\u3001Kerberos \u6216 Microsoft Active Directory \u8eab\u4efd\u9a8c\u8bc1\u670d\u52a1\u7b49\u534f\u8bae\u3002 \u9664\u975e\u672a\u5728 policy.json \u4e2d\u663e\u5f0f\u66f4\u6539\uff0c\u5426\u5219\u7ba1\u7406\u5458\u6216\u62e5\u6709\u5171\u4eab\u7684\u79df\u6237\u90fd\u80fd\u591f\u7ba1\u7406\u5bf9\u5171\u4eab\u7684\u8bbf\u95ee\u3002\u8bbf\u95ee\u7ba1\u7406\u662f\u901a\u8fc7\u521b\u5efa\u8bbf\u95ee\u89c4\u5219\u6765\u5b8c\u6210\u7684\uff0c\u8be5\u89c4\u5219\u901a\u8fc7 IP \u5730\u5740\u3001\u7528\u6237\u3001\u7ec4\u6216 TLS \u8bc1\u4e66\u8fdb\u884c\u8eab\u4efd\u9a8c\u8bc1\u3002\u53ef\u7528\u7684\u8eab\u4efd\u9a8c\u8bc1\u65b9\u6cd5\u53d6\u51b3\u4e8e\u60a8\u914d\u7f6e\u548c\u4f7f\u7528\u7684\u5171\u4eab\u9a71\u52a8\u7a0b\u5e8f\u548c\u5b89\u5168\u670d\u52a1\u3002 \u6ce8\u610f \u4e0d\u540c\u7684\u9a71\u52a8\u7a0b\u5e8f\u652f\u6301\u4e0d\u540c\u7684\u8bbf\u95ee\u9009\u9879\uff0c\u5177\u4f53\u53d6\u51b3\u4e8e\u4f7f\u7528\u7684\u5171\u4eab\u6587\u4ef6\u7cfb\u7edf\u534f\u8bae\u3002\u652f\u6301\u7684\u5171\u4eab\u6587\u4ef6\u7cfb\u7edf\u534f\u8bae\u5305\u62ec NFS\u3001CIFS\u3001GlusterFS \u548c HDFS\u3002\u4f8b\u5982\uff0c\u901a\u7528\uff08\u5757\u5b58\u50a8\u4f5c\u4e3a\u540e\u7aef\uff09\u9a71\u52a8\u7a0b\u5e8f\u4e0d\u652f\u6301\u7528\u6237\u548c\u8bc1\u4e66\u8eab\u4efd\u9a8c\u8bc1\u65b9\u6cd5\u3002\u5b83\u8fd8\u4e0d\u652f\u6301\u4efb\u4f55\u5b89\u5168\u670d\u52a1\uff0c\u4f8b\u5982 LDAP\u3001Kerberos \u6216 Active Directory\u3002\u6709\u5173\u4e0d\u540c\u9a71\u52a8\u7a0b\u5e8f\u652f\u6301\u7684\u529f\u80fd\u7684\u8be6\u7ec6\u4fe1\u606f\uff0c\u8bf7\u53c2\u9605\u9a6c\u5c3c\u62c9\u5171\u4eab\u529f\u80fd\u652f\u6301\u6620\u5c04\u3002 
\u4f5c\u4e3a\u7ba1\u7406\u5458\uff0c\u60a8\u53ef\u4ee5\u521b\u5efa\u5171\u4eab\u7c7b\u578b\uff0c\u4f7f\u8ba1\u5212\u7a0b\u5e8f\u80fd\u591f\u5728\u521b\u5efa\u5171\u4eab\u4e4b\u524d\u7b5b\u9009\u540e\u7aef\u3002\u5171\u4eab\u7c7b\u578b\u5177\u6709\u989d\u5916\u7684\u89c4\u8303\uff0c\u60a8\u53ef\u4ee5\u4e3a\u8ba1\u5212\u7a0b\u5e8f\u8bbe\u7f6e\u8fd9\u4e9b\u89c4\u8303\uff0c\u4ee5\u7b5b\u9009\u548c\u6743\u8861\u540e\u7aef\uff0c\u4ee5\u4fbf\u4e3a\u8bf7\u6c42\u521b\u5efa\u5171\u4eab\u7684\u7528\u6237\u9009\u62e9\u9002\u5f53\u7684\u5171\u4eab\u7c7b\u578b\u3002\u5171\u4eab\u548c\u5171\u4eab\u7c7b\u578b\u53ef\u4ee5\u521b\u5efa\u4e3a\u516c\u5171\u6216\u79c1\u6709\u3002\u6b64\u53ef\u89c1\u6027\u7ea7\u522b\u5b9a\u4e49\u5176\u4ed6\u79df\u6237\u662f\u5426\u80fd\u591f\u770b\u5230\u8fd9\u4e9b\u5bf9\u8c61\u5e76\u5bf9\u5176\u8fdb\u884c\u64cd\u4f5c\u3002\u7ba1\u7406\u5458\u53ef\u4ee5\u4e3a\u8eab\u4efd\u670d\u52a1\u4e2d\u7684\u7279\u5b9a\u7528\u6237\u6216\u79df\u6237\u6dfb\u52a0\u5bf9\u4e13\u7528\u5171\u4eab\u7c7b\u578b\u7684\u8bbf\u95ee\u6743\u9650\u3002\u56e0\u6b64\uff0c\u60a8\u6388\u4e88\u8bbf\u95ee\u6743\u9650\u7684\u7528\u6237\u53ef\u4ee5\u770b\u5230\u53ef\u7528\u7684\u5171\u4eab\u7c7b\u578b\uff0c\u5e76\u4f7f\u7528\u5b83\u4eec\u521b\u5efa\u5171\u4eab\u3002 \u4e0d\u540c\u7528\u6237\u53ca\u5176\u89d2\u8272\u7684 API \u8c03\u7528\u6743\u9650\u7531\u7b56\u7565\u51b3\u5b9a\uff0c\u5c31\u50cf\u5728\u5176\u4ed6 OpenStack \u670d\u52a1\u4e2d\u4e00\u6837\u3002 \u6807\u8bc6\u670d\u52a1\u53ef\u7528\u4e8e\u5171\u4eab\u6587\u4ef6\u7cfb\u7edf\u670d\u52a1\u4e2d\u7684\u8eab\u4efd\u9a8c\u8bc1\u3002\u8bf7\u53c2\u9605\u201c\u8eab\u4efd\u201d\u90e8\u5206\u4e2d\u7684\u8eab\u4efd\u670d\u52a1\u5b89\u5168\u6027\u7684\u8be6\u7ec6\u4fe1\u606f\u3002","title":"\u4ecb\u7ecd"},{"location":"security/security-guide/#_174","text":"\u4e0e\u5176\u4ed6 OpenStack \u9879\u76ee\u7c7b\u4f3c\uff0c\u5171\u4eab\u6587\u4ef6\u7cfb\u7edf\u670d\u52a1\u5df2\u6ce8\u518c\u5230 Identity \u670d\u52a1\uff0c\u56e0\u6b64\u60a8\u53ef\u4ee5\u4f7f\u7528 manila endpoints \u547d\u4ee4\u67e5\u627e\u5171\u4eab\u670d\u52a1 v1 \u548c v2 \u7684 API \u7aef\u70b9\uff1a $ manila endpoints +-------------+-----------------------------------------+ | manila | Value | +-------------+-----------------------------------------+ | adminURL | http://172.18.198.55:8786/v1/20787a7b...| | region | RegionOne | | publicURL | http://172.18.198.55:8786/v1/20787a7b...| | internalURL | http://172.18.198.55:8786/v1/20787a7b...| | id | 82cc5535aa444632b64585f138cb9b61 | +-------------+-----------------------------------------+ +-------------+-----------------------------------------+ | manilav2 | Value | +-------------+-----------------------------------------+ | adminURL | http://172.18.198.55:8786/v2/20787a7b...| | region | RegionOne | | publicURL | http://172.18.198.55:8786/v2/20787a7b...| | internalURL | http://172.18.198.55:8786/v2/20787a7b...| | id | 2e8591bfcac4405fa7e5dc3fd61a2b85 | +-------------+-----------------------------------------+ \u9ed8\u8ba4\u60c5\u51b5\u4e0b\uff0c\u5171\u4eab\u6587\u4ef6\u7cfb\u7edf API \u670d\u52a1\u4ec5\u4fa6\u542c tcp6 \u7c7b\u578b\u540c\u65f6\u652f\u6301 IPv4 \u548c IPv6 \u7684\u7aef\u53e3 8786 \u3002 \u6ce8\u610f \u8be5\u7aef\u53e3\u662f\u5171\u4eab\u6587\u4ef6\u7cfb\u7edf\u670d\u52a1\u7684\u9ed8\u8ba4\u7aef\u53e3 8786 \u3002\u5b83\u53ef\u4ee5\u66f4\u6539\u4e3a\u4efb\u4f55\u5176\u4ed6\u7aef\u53e3\uff0c\u4f46\u6b64\u66f4\u6539\u4e5f\u5e94\u5728\u914d\u7f6e\u6587\u4ef6\u4e2d\u7684 \u9009\u9879\u4e2d\u8fdb\u884c\uff0c\u8be5\u9009\u9879 
osapi_share_listen_port \u9ed8\u8ba4\u4e3a 8786 \u3002 \u5728 /etc/manila/ \u76ee\u5f55\u4e2d\uff0c\u60a8\u53ef\u4ee5\u627e\u5230\u51e0\u4e2a\u914d\u7f6e\u6587\u4ef6\uff1a api-paste.ini manila.conf policy.json rootwrap.conf rootwrap.d ./rootwrap.d: share.filters \u5efa\u8bae\u60a8\u5c06\u5171\u4eab\u6587\u4ef6\u7cfb\u7edf\u670d\u52a1\u914d\u7f6e\u4e3a\u5728\u975e root \u670d\u52a1\u5e10\u6237\u4e0b\u8fd0\u884c\uff0c\u5e76\u66f4\u6539\u6587\u4ef6\u6743\u9650\uff0c\u4ee5\u4fbf\u53ea\u6709\u7cfb\u7edf\u7ba1\u7406\u5458\u624d\u80fd\u4fee\u6539\u5b83\u4eec\u3002\u5171\u4eab\u6587\u4ef6\u7cfb\u7edf\u670d\u52a1\u8981\u6c42\u53ea\u6709\u7ba1\u7406\u5458\u624d\u80fd\u5199\u5165\u914d\u7f6e\u6587\u4ef6\uff0c\u800c\u670d\u52a1\u53ea\u80fd\u901a\u8fc7\u5176\u5728\u7ec4\u4e2d\u7684 manila \u7ec4\u6210\u5458\u8eab\u4efd\u8bfb\u53d6\u5b83\u4eec\u3002\u5176\u4ed6\u4eba\u4e00\u5b9a\u65e0\u6cd5\u8bfb\u53d6\u8fd9\u4e9b\u6587\u4ef6\uff0c\u56e0\u4e3a\u8fd9\u4e9b\u6587\u4ef6\u5305\u542b\u4e0d\u540c\u670d\u52a1\u7684\u7ba1\u7406\u5458\u5bc6\u7801\u3002 \u5e94\u7528\u68c0\u67e5 Check-Shared-01\uff1a\u914d\u7f6e\u6587\u4ef6\u7684\u7528\u6237/\u7ec4\u6240\u6709\u6743\u662f\u5426\u8bbe\u7f6e\u4e3a root/manila\uff1f\u548c Check-Shared-02\uff1a\u662f\u5426\u4e3a\u914d\u7f6e\u6587\u4ef6\u8bbe\u7f6e\u4e86\u4e25\u683c\u7684\u6743\u9650\uff1f\u4ece\u6e05\u5355\u4e2d\u9a8c\u8bc1\u6743\u9650\u8bbe\u7f6e\u662f\u5426\u6b63\u786e\u3002 \u6ce8\u610f \u6587\u4ef6\u4e2d\u7684 manila-rootwrap \u914d\u7f6e\u548c\u6587\u4ef6\u4e2d `rootwrap.conf` `rootwrap.d/share.filters` \u5171\u4eab\u8282\u70b9\u7684 manila-rootwrap \u547d\u4ee4\u8fc7\u6ee4\u5668\u5e94\u5f52 root \u7528\u6237\u6240\u6709\uff0c\u5e76\u4e14\u53ea\u80fd\u7531 root \u7528\u6237\u5199\u5165\u3002 \u5efa\u8bae manila \u914d\u7f6e\u6587\u4ef6 `manila.conf` \u53ef\u4ee5\u653e\u7f6e\u5728\u4efb\u4f55\u4f4d\u7f6e\u3002\u9ed8\u8ba4\u60c5\u51b5\u4e0b\uff0c\u8be5\u8def\u5f84 `/etc/manila/manila.conf` \u662f\u5fc5\u9700\u7684\u3002","title":"\u4e00\u822c\u5b89\u5168\u4fe1\u606f"},{"location":"security/security-guide/#_175","text":"\u5171\u4eab\u6587\u4ef6\u7cfb\u7edf\u670d\u52a1\u4e2d\u7684\u5171\u4eab\u9a71\u52a8\u7a0b\u5e8f\u662f\u4e00\u4e2a Python \u7c7b\uff0c\u53ef\u4ee5\u4e3a\u540e\u7aef\u8bbe\u7f6e\u5e76\u5728\u5176\u4e2d\u8fd0\u884c\u4ee5\u7ba1\u7406\u5171\u4eab\u64cd\u4f5c\uff0c\u5176\u4e2d\u4e00\u4e9b\u64cd\u4f5c\u662f\u7279\u5b9a\u4e8e\u4f9b\u5e94\u5546\u7684\u3002\u540e\u7aef\u662f manila-share \u670d\u52a1\u7684\u5b9e\u4f8b\u3002\u5171\u4eab\u6587\u4ef6\u7cfb\u7edf\u670d\u52a1\u4e2d\u6709\u8bb8\u591a\u7531\u4e0d\u540c\u4f9b\u5e94\u5546\u521b\u5efa\u7684\u5171\u4eab\u9a71\u52a8\u7a0b\u5e8f\u3002\u6bcf\u4e2a\u5171\u4eab\u9a71\u52a8\u7a0b\u5e8f\u90fd\u652f\u6301\u4e00\u79cd\u6216\u591a\u79cd\u540e\u7aef\u6a21\u5f0f\uff1a\u5171\u4eab\u670d\u52a1\u5668\u548c\u65e0\u5171\u4eab\u670d\u52a1\u5668\u3002\u7ba1\u7406\u5458\u901a\u8fc7\u5728\u914d\u7f6e\u6587\u4ef6\u4e2d manila.conf \u6307\u5b9a\u6a21\u5f0f\u6765\u9009\u62e9\u4f7f\u7528\u54ea\u79cd\u6a21\u5f0f\u3002\u5b83\u4f7f\u7528\u4e86\u4e00\u4e2a\u9009\u9879 driver_handles_share_servers \u3002 \u5171\u4eab\u670d\u52a1\u5668\u6a21\u5f0f\u53ef\u4ee5\u914d\u7f6e\u4e3a\u6241\u5e73\u7f51\u7edc\uff0c\u4e5f\u53ef\u4ee5\u914d\u7f6e\u5206\u6bb5\u7f51\u7edc\u3002\u8fd9\u53d6\u51b3\u4e8e\u7f51\u7edc\u63d0\u4f9b\u5546\u3002 
If you want different configurations, you can use separate drivers for different modes on the same hardware. Depending on the mode chosen, the administrator may need to provide additional configuration details through the configuration file.

### Share back-end modes

Each share driver supports at least one of the possible driver modes:

- share servers mode
- no share servers mode

The `manila.conf` option that selects share servers or no share servers mode is the `driver_handles_share_servers` option. It indicates whether the driver handles share servers on its own or expects the Shared File Systems service to do so.

| Mode | Config option | Description |
|:---|:---|:---|
| with share servers | `driver_handles_share_servers = True` | The share driver creates share servers and manages, or handles, the share server life cycle. |
| without share servers | `driver_handles_share_servers = False` | The administrator, rather than the share driver, manages the bare-metal storage with some network interface, instead of relying on the presence of share servers. |

No share servers mode: In this mode, the driver has essentially no network requirements. The storage controller managed by the driver is assumed to have all the network interfaces it needs. The Shared File Systems service expects the driver to provision shares directly, without creating any share servers beforehand. This mode corresponds to what some existing drivers were already doing, but it makes the choice explicit for the administrator. In this mode, a share network is not required and must not be provided when a share is created.

Note: In no share servers mode, the Shared File Systems service assumes that all tenants already have network access to the network interfaces used to export any share.
In no share servers mode, the share driver does not handle the storage life cycle. The administrator is expected to handle the storage, network interfaces, and other host configuration. In this mode the administrator can set up the storage as a host that exports shares. The key characteristic of this mode is that the storage is not handled by the Shared File Systems service. Users in a tenant share a common network, hosts, processors, and network pipes. If the storage configured beforehand by the administrator or an agent is not balanced correctly, tenants can impede each other. In a public cloud, all network capacity could end up being used by a single client, so the administrator should take care that this does not happen. Balancing can be done by any means, not necessarily with OpenStack tools.

Share servers mode: In this mode, the driver is able to create share servers and plug them into existing networks. When provisioning a new share server, the driver needs an IP address and subnet from the Shared File Systems service. Unlike no share servers mode, in share servers mode users have a share network and a share server created for each share network. As a result, every user has separate CPUs, CPU time, networks, capacity, and throughput.

Security services can be configured with both the share servers and the no share servers back-end modes. However, without the share servers back-end mode, the administrator should set up the required authentication services manually on the host. In share servers mode, the Shared File Systems service can be configured automatically with any existing security service supported by the share driver.

### Flat and segmented networking

The Shared File Systems service allows the use of different network types:

- flat
- GRE
- VLAN
- VXLAN
Note: The Shared File Systems service only stores information about the networks in its database; the real networks are provided by the network provider. Within OpenStack this can be the Legacy networking (nova-network) or the Networking (neutron) service, but the Shared File Systems service can even work outside of OpenStack. This is possible because the `StandaloneNetworkPlugin` can be used with any network platform and does not require specific OpenStack network services such as Networking or Legacy networking; you can set the network parameters in its configuration file.

In the share servers back-end mode, the share driver creates and manages a share server for each share network. This mode can be divided into two variants:

- flat network in share servers back-end mode
- segmented network in share servers back-end mode

Initially, when you create a share network, you can set either a network and subnet of the OpenStack Networking (neutron) service or a network of the Legacy networking (nova-network) service. The third approach is to configure the network without the Legacy networking and Networking services: the StandaloneNetworkPlugin can be used with any network platform, and you can set the network parameters in its configuration file.

Recommendation: All share drivers that use the OpenStack Compute service do not use a network plugin. In the Mitaka release these are the Windows and Generic drivers. These share drivers have other options and use a different approach.

After a share network is created, the Shared File Systems service retrieves network information determined by the network provider: network type, segmentation identifier (if the network uses segmentation), and the IP block in CIDR notation from which to allocate the network.

Flat network in share servers back-end mode: In this mode, some storage controllers can create share servers, but because of various restrictions in the physical or logical network, all share servers must be placed on a flat network. In this mode, the share driver needs a means to provision IP addresses for the share servers,
but the IP addresses will all come from the same subnet, and that subnet itself is assumed to be reachable by all tenants.

The security service part of a share network specifies security requirements such as an AD or LDAP domain or a Kerberos realm. The Shared File Systems service assumes that any hosts referred to in the security service are reachable from the subnet where share servers are created, which limits the cases where this mode can be used.

Segmented network in share servers back-end mode: In this mode, the share driver is able to create share servers and plug them into an existing segmented network. The share driver expects the Shared File Systems service to provide a subnet definition for every new share server. This definition should include the segmentation type, the segmentation ID, and any other information relevant to that segmentation type.

Note: Some share drivers may not support all segmentation types; for details, see the specification of the driver you are using.

### Network plugins

The Shared File Systems service architecture defines an abstraction layer for network resource provisioning. It allows administrators to choose from different options for how network resources are assigned to their tenants' networked storage. Several network plugins provide a variety of integration approaches with the networking services available in OpenStack.

Network plugins allow the use of any functionality and configuration of the OpenStack Networking and Legacy networking services: any network segmentation supported by the Networking service, flat or VLAN-segmented networks of the Legacy networking (nova-network) service, or a plugin that specifies networks independently of the OpenStack networking services. For details on how to use the different network plugins, see the Shared File Systems service network plugins documentation.
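A hedged sketch of how a network plugin might be selected for a back end in `manila.conf`; the class paths and the standalone-plugin option names are assumptions based on upstream manila documentation of that era, and the back-end section name and addresses are illustrative:

```
# Use OpenStack Networking (neutron) to provision share-server networks
crudini --set /etc/manila/manila.conf generic_backend network_api_class \
    manila.network.neutron.neutron_network_plugin.NeutronNetworkPlugin

# Or, outside of OpenStack networking services, the standalone plugin with
# statically configured network parameters
crudini --set /etc/manila/manila.conf generic_backend network_api_class \
    manila.network.standalone_network_plugin.StandaloneNetworkPlugin
crudini --set /etc/manila/manila.conf generic_backend standalone_network_plugin_gateway 10.0.0.1
crudini --set /etc/manila/manila.conf generic_backend standalone_network_plugin_mask 24
```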
### Security services

For authentication and authorization of clients, the Shared File Systems storage service can be configured with different network authentication protocols. The supported authentication protocols are LDAP, Kerberos, and the Microsoft Active Directory authentication service.

### Introduction to security services

After creating a share and getting its export location, users have no permissions to mount it and operate with its files. The Shared File Systems service requires access to a new share to be granted explicitly.

Client configuration data for authentication and authorization (AuthN/AuthZ) can be stored through security services. LDAP, Kerberos, or Microsoft Active Directory can be used by the Shared File Systems service if the driver and back end in use support them. Authentication services can also be configured without the Shared File Systems service.

Note: In some cases it is required to explicitly specify one of the security services; for example, the NetApp, EMC, and Windows drivers require Active Directory in order to create shares with the CIFS protocol.

### Security services management

A security service is a Shared File Systems service (manila) entity that abstracts a set of options defining a security zone for a particular shared file system protocol, such as an Active Directory domain or a Kerberos realm. The security service contains all of the information necessary for the Shared File Systems service to create a server that joins the given domain.

Using the API, users can create, update, view, and delete security services. Security services are designed on the following assumptions:

- Tenants provide the details for the security services.
- Administrators care about security services: they configure the server side of such services.
- Inside the Shared File Systems API, a security_service is associated with share_networks.
- The share driver uses the data in the security service to configure a newly created share server.

When creating a security service, you can select one of the following authentication services:

| Authentication service | Description |
|:---|:---|
| LDAP | The Lightweight Directory Access Protocol: an application protocol for accessing and maintaining distributed directory information services over an IP network. |
| Kerberos | A network authentication protocol that works on the basis of tickets to allow nodes communicating over a non-secure network to prove their identity to one another in a secure manner. |
| Active Directory | A directory service that Microsoft developed for Windows domain networks. It uses LDAP, Microsoft's version of Kerberos, and DNS. |

The Shared File Systems service allows you to configure a security service with the following options:

- a DNS IP address that is used inside the tenant network;
- an IP address or host name of the security service;
- a domain of the security service;
- a user or group name that is used by the tenant;
- a password for the user, if a user name is specified.

An existing security service entity can be associated with share network entities that inform the Shared File Systems service about the security and network configuration for a group of shares. You can also list all security services for a specified share network and disassociate them from that share network.

For details on managing security services via the API, see the security services API documentation. You can also manage security services via python-manilaclient; see the security services CLI management documentation.
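As a hedged illustration of the CLI just mentioned, a security service for an Active Directory domain might be created and attached to a share network roughly as follows (the values and the share network name are illustrative; the option names are assumed from python-manilaclient, not taken from this guide):

```
# Create an Active Directory security service and attach it to a share network
manila security-service-create active_directory \
    --dns-ip 10.0.0.10 \
    --domain example.com \
    --user Administrator \
    --password secret \
    --name ad_example

manila share-network-security-service-add my_share_network ad_example
```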
Administrators and users who own shares can manage access to the shares by creating access rules with authentication through an IP address, user, group, or TLS certificate. Which authentication methods are available depends on the share driver and security service you configure and use.

Thus, as an administrator, you can configure a back end to use a specific authentication service over the network, and that service will store the users. The authentication service can also work on a client that has neither the Shared File Systems nor the Identity service.

Note: Different share drivers support different authentication services. For details on the features supported by different drivers, see the manila share features support mapping. Support for a particular authentication service by a driver does not mean it can be configured with every shared file system protocol. The supported shared file system protocols are NFS, CIFS, GlusterFS, and HDFS. See the driver vendor's documentation for information on a specific driver and its security service configuration.

Some drivers support security services, while other drivers do not support any of the security services mentioned above. For example, the Generic driver with the NFS or CIFS shared file system protocol supports only authentication through an IP address.

Recommendations:

- In most cases, drivers that support the CIFS shared file system protocol can be configured to use Active Directory and manage access through user authentication.
- Drivers that support the GlusterFS protocol can be used with authentication via TLS certificates.
- With drivers that support the NFS protocol, authentication via an IP address is the only supported option.
- Since the HDFS shared file system protocol uses NFS access, it can also be configured to authenticate via an IP address.

Keep in mind, however, that authentication by IP address is the least secure type of authentication.

The recommended configuration for a real-world Shared File Systems service deployment is to create a share with the CIFS share protocol and add the Microsoft Active Directory directory service to it. In this configuration you get a centralized database and a service that combines the Kerberos and LDAP approaches. This is a real use case that is convenient for production shared file systems.

### Share access control

The Shared File Systems service allows access to different entities of the service to be granted to, or denied for, other clients.

A share, as a remotely mountable instance of a file system, lets you manage access to a specified share and list the permissions set for it.

Shares can be public or private. This is the visibility level of a share and defines whether other tenants can see it. By default, all shares are created as private. When creating a share, use the `--public` key to make your share public, so that other tenants can see it in the list of shares and view its detailed information.

According to the `policy.json` file, an administrator and the users who own a share can manage access to that share by creating access rules.
Using the `manila access-allow`, `manila access-deny`, and `manila access-list` commands, you can grant, deny, and list access to a specified share, respectively.

Recommendation: By default, when a share is created and has its export location, the Shared File Systems service expects that nobody can access the share by mounting it. Note that the share driver you use may change this configuration, or it can be changed directly on the share storage. To be sure about access to a share, check the mount configuration of the export protocol.

A newly created share has no default access rules associated with it and no mount permissions. This can be seen in the mount configuration of the export protocol in use. For example, on NFS storage there is an `exportfs` command or an `/etc/exports` file that controls each remote share and defines the hosts that can access it; it is empty if nobody can mount the share. For a remote CIFS server there is a `net conf list` command that shows the configuration; the `hosts deny` parameter should be set to `0.0.0.0/0` by the share driver, which means that all hosts are denied from mounting the share.

Using the Shared File Systems service, you can grant or deny access to a share by specifying one of the following supported access levels:

- `rw`: read and write (RW) access. This is the default value.
- `ro`: read-only (RO) access.

Recommendation: The RO access level can be helpful in public shares, when the administrator gives read and write (RW) access to certain editors or contributors and read-only (RO) access to the rest of the users (viewers).

You must also specify one of the following supported authentication methods:

- `ip`: authenticates an instance by its IP address. Valid formats are XX.XX.XX.XX or XX.XX.XX.XX/XX, for example 0.0.0.0/0.
- `cert`: authenticates an instance through a TLS certificate. Specify the TLS identity as the IDENTKEY. A valid value is any string up to 64 characters long in the certificate common name (CN).
- `user`: authenticates by a specified user or group name. A valid value is an alphanumeric string that can contain some special characters and is from 4 to 32 characters long.

Note: The supported authentication methods depend on the share driver, security service, and shared file system protocol you configure and use. The supported shared file system protocols are NFS, CIFS, GlusterFS, and HDFS; the supported security services are the LDAP and Kerberos protocols and the Microsoft Active Directory service. For details on the features supported by different drivers, see the manila share features support mapping.

The following is an example of an NFS share created with the Generic driver. After the share is created, it has the export location `10.254.0.3:/shares/share-b2874f8d-d428-4a5c-b056-e6af80a995de`. If you try to mount it on a host with the IP address 10.254.0.4, you get a "Permission denied" message:

```
# mount.nfs -v 10.254.0.3:/shares/share-b2874f8d-d428-4a5c-b056-e6af80a995de /mnt
mount.nfs: timeout set for Mon Oct 12 13:07:47 2015
mount.nfs: trying text-based options 'vers=4,addr=10.254.0.3,clientaddr=10.254.0.4'
mount.nfs: mount(2): Permission denied
mount.nfs: access denied by server while mounting 10.254.0.3:/shares/share-b2874f8d-...
```
As an administrator, you can connect via SSH to the host with the IP address 10.254.0.3, check its `/etc/exports` file, and see that it is empty:

```
# cat /etc/exports
#
```

The Generic driver used in this example does not support any security services, so with the NFS shared file system protocol access can only be granted by IP address:

```
$ manila access-allow Share_demo2 ip 10.254.0.4
+--------------+--------------------------------------+
| Property     | Value                                |
+--------------+--------------------------------------+
| share_id     | e57c25a8-0392-444f-9ffc-5daadb9f756c |
| access_type  | ip                                   |
| access_to    | 10.254.0.4                           |
| access_level | rw                                   |
| state        | new                                  |
| id           | 62b8e453-d712-4074-8410-eab6227ba267 |
+--------------+--------------------------------------+
```

After the rule enters the `active` state, we can connect to the 10.254.0.3 host again, check the `/etc/exports` file, and see that a line with the rule has been added:

```
# cat /etc/exports
/shares/share-b2874f8d-d428-4a5c-b056-e6af80a995de  10.254.0.4(rw,sync,wdelay,hide,nocrossmnt,secure,root_squash,no_all_squash,no_subtree_check,secure_locks,acl,anonuid=65534,anongid=65534,sec=sys,rw,root_squash,no_all_squash)
```

Now we can mount the share on the host with the IP address 10.254.0.4 and have `rw` permissions on it:

```
# mount.nfs -v 10.254.0.3:/shares/share-b2874f8d-d428-4a5c-b056-e6af80a995de /mnt
# ls -a /mnt
.  ..  lost+found
# echo "Hello!" > /mnt/1.txt
# ls -a /mnt
.  ..  1.txt  lost+found
#
```
### Share type access control

A share type is an administrator-defined "type of service", comprised of a tenant-visible description and a list of non-tenant-visible key-value pairs, called extra specifications. The manila-scheduler uses the extra specifications to make scheduling decisions, and drivers control share creation.

An administrator can create and delete share types and manage the extra specifications that give them meaning inside the Shared File Systems service. Tenants can list the share types and use them to create new shares. For details on managing share types, see the Shared File Systems API and share type management documentation.

Share types can be created as public or private. This is the visibility level of the share type that defines whether other tenants can see it in the list of share types and use it to create a new share.

By default, share types are created as public. When creating a share type, set `--is_public` to `False` to make your share type private, which prevents other tenants from seeing it in the list of share types and creating new shares with it. Public share types, on the other hand, are available to every tenant in the cloud.

The Shared File Systems service allows an administrator to grant or deny access to private share types for tenants. It is also possible to get information about the access granted for a specified private share type.

Recommendation: Because share types, through their extra specifications, help to filter or choose back ends before users create a share, access control on share types lets you limit which clients can select specific back ends.

For example, as an administrator user in the admin tenant, you can create a private share type named `my_type` and see it in the list. In the console examples below, login and logout are omitted, and environment variables are shown to indicate the currently logged-in user:

```
$ env | grep OS_
...
OS_USERNAME=admin
OS_TENANT_NAME=admin
...
```
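For reference, a private share type like `my_type` could have been created with a command roughly like the following (a hedged sketch; the positional arguments and the `--is_public` flag are assumed from python-manilaclient, not taken from this guide):

```
# Create a private share type whose driver_handles_share_servers spec is False
manila type-create my_type false --is_public false
```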
```
$ manila type-list --all
+----+---------+------------+------------+------------------------------------+------------------------+
| ID | Name    | Visibility | is_default | required_extra_specs               | optional_extra_specs   |
+----+---------+------------+------------+------------------------------------+------------------------+
| 4..| my_type | private    | -          | driver_handles_share_servers:False | snapshot_support:True  |
| 5..| default | public     | YES        | driver_handles_share_servers:True  | snapshot_support:True  |
+----+---------+------------+------------+------------------------------------+------------------------+
```

The `demo` user in the `demo` tenant can list the types, and the private share type named `my_type` is not visible to that user:

```
$ env | grep OS_
...
OS_USERNAME=demo
OS_TENANT_NAME=demo
...

$ manila type-list --all
+----+---------+------------+------------+-----------------------------------+-----------------------+
| ID | Name    | Visibility | is_default | required_extra_specs              | optional_extra_specs  |
+----+---------+------------+------------+-----------------------------------+-----------------------+
| 5..| default | public     | YES        | driver_handles_share_servers:True | snapshot_support:True |
+----+---------+------------+------------+-----------------------------------+-----------------------+
```

The administrator can grant access to the private share type for the demo tenant, whose tenant ID equals df29a37db5ae48d19b349fe947fada46:

```
$ env | grep OS_
...
OS_USERNAME=admin
OS_TENANT_NAME=admin
...

$ openstack project list
+----------------------------------+--------------------+
| ID                               | Name               |
+----------------------------------+--------------------+
| ...                              | ...                |
| df29a37db5ae48d19b349fe947fada46 | demo               |
+----------------------------------+--------------------+

$ manila type-access-add my_type df29a37db5ae48d19b349fe947fada46
```

As a result, users in the demo tenant can now see the private share type and use it when creating shares:

```
$ env | grep OS_
...
OS_USERNAME=demo
OS_TENANT_NAME=demo
...
```
```
$ manila type-list --all
+----+---------+------------+------------+------------------------------------+------------------------+
| ID | Name    | Visibility | is_default | required_extra_specs               | optional_extra_specs   |
+----+---------+------------+------------+------------------------------------+------------------------+
| 4..| my_type | private    | -          | driver_handles_share_servers:False | snapshot_support:True  |
| 5..| default | public     | YES        | driver_handles_share_servers:True  | snapshot_support:True  |
+----+---------+------------+------------+------------------------------------+------------------------+
```

To deny access for a specified project, use the `manila type-access-remove` command.

Recommendation: A real production use case that demonstrates the purpose of share types and access to them is a configuration with two back ends: an inexpensive LVM back end as public storage and an expensive Ceph back end as private storage. In this case you can grant access to certain tenants and use the `user/group` authentication method for access.

### Policies

The Shared File Systems service has its own role-based access policies. They determine which user can access which objects in which way, and are defined in the service's `policy.json` file.

Recommendation: The `policy.json` configuration file can be placed anywhere; by default the path `/etc/manila/policy.json` is used.

Whenever an API call is made to the Shared File Systems service, the policy engine uses the appropriate policy definitions to determine whether the call can be accepted.

A policy rule determines under which circumstances the API call is permitted. The `/etc/manila/policy.json` file contains rules where an action is always permitted, expressed as an empty string `""`; rules based on the user role or on other rules; and rules with boolean expressions. Below is a snippet of the `policy.json` file for the Shared File Systems service; it can change from one OpenStack release to another.

```
{
    "context_is_admin": "role:admin",
    "admin_or_owner": "is_admin:True or project_id:%(project_id)s",
    "default": "rule:admin_or_owner",
    "share_extension:quotas:show": "",
    "share_extension:quotas:update": "rule:admin_api",
    "share_extension:quotas:delete": "rule:admin_api",
    "share_extension:quota_classes": "",
}
```

Users must be assigned to the groups and roles referenced in the policies. This is done automatically by the service when user management commands are used.
Note: Any changes to `/etc/manila/policy.json` take effect immediately, which allows new policies to be applied while the Shared File Systems service is running. Manual modification of the policy can have unexpected side effects and is not encouraged. For details, see the policy.json file documentation.

### Checklist

#### Check-Shared-01: Is user/group ownership of the configuration files set to root/manila?

Configuration files contain critical parameters and information required for the smooth functioning of the component. If an unprivileged user, either intentionally or accidentally, modifies or deletes any of the parameters or the file itself, it causes severe availability issues that result in a denial of service to the other end users. User ownership of such critical configuration files must therefore be set to root and group ownership must be set to manila. In addition, the containing directory should have the same ownership to ensure that newly created files are owned correctly.

Run the following commands:

```
$ stat -L -c "%U %G" /etc/manila/manila.conf | egrep "root manila"
$ stat -L -c "%U %G" /etc/manila/api-paste.ini | egrep "root manila"
$ stat -L -c "%U %G" /etc/manila/policy.json | egrep "root manila"
$ stat -L -c "%U %G" /etc/manila/rootwrap.conf | egrep "root manila"
$ stat -L -c "%U %G" /etc/manila | egrep "root manila"
```

Pass: if user and group ownership of all these configuration files is set to root and manila respectively. The commands above then print `root manila`.

Fail: if the commands above return no output, because the user or group ownership may have been set to a user other than root or a group other than manila.

#### Check-Shared-02: Are strict permissions set for the configuration files?

Similar to the previous check, it is recommended to set strict access permissions for such configuration files.

Run the following commands:

```
$ stat -L -c "%a" /etc/manila/manila.conf
$ stat -L -c "%a" /etc/manila/api-paste.ini
$ stat -L -c "%a" /etc/manila/policy.json
$ stat -L -c "%a" /etc/manila/rootwrap.conf
$ stat -L -c "%a" /etc/manila
```

A broader restriction is also possible: if the containing directory is set to 750, newly created files in this directory are guaranteed to have the desired permissions.
Pass: if the permissions are set to 640 or stricter, or the containing directory is set to 750. Permissions of 640 translate to owner r/w, group r, and no permissions for others, that is, "u=rw,g=r,o=". Note that with Check-Shared-01 (user/group ownership set to root/manila) in place and permissions set to 640, root has read/write access and manila has read access to these configuration files. Access rights can also be verified with the following command, which is available on your system only if it supports ACLs:

```
$ getfacl --tabular -a /etc/manila/manila.conf
getfacl: Removing leading '/' from absolute path names
# file: etc/manila/manila.conf
USER   root    rw-
GROUP  manila  r--
mask           r--
other          ---
```

Fail: if the permissions are not set to at least 640.
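If either of the two checks above fails, ownership and permissions can be corrected with commands along the following lines (a hedged sketch; adjust the file list to what is actually present on the node):

```
# Restore the recommended ownership and permissions on the manila configuration
sudo chown root:manila /etc/manila/manila.conf /etc/manila/api-paste.ini \
                       /etc/manila/policy.json /etc/manila/rootwrap.conf
sudo chmod 640 /etc/manila/manila.conf /etc/manila/api-paste.ini \
               /etc/manila/policy.json /etc/manila/rootwrap.conf
sudo chown root:manila /etc/manila
sudo chmod 750 /etc/manila
```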
#### Check-Shared-03: Is OpenStack Identity used for authentication?

Note: This item applies only to OpenStack releases Rocky and earlier, because `auth_strategy` is deprecated in Stein.

OpenStack supports various authentication strategies such as noauth and keystone. If the noauth strategy is used, users can interact with OpenStack services without any authentication. This is a potential risk, because an attacker might gain unauthorized access to the OpenStack components. It is therefore strongly recommended that all services authenticate with keystone using their service accounts.

Pass: if the `auth_strategy` parameter under the `[DEFAULT]` section in `manila.conf` is set to `keystone`.

Fail: if the `auth_strategy` parameter under the `[DEFAULT]` section in `manila.conf` is set to `noauth`.

#### Check-Shared-04: Is TLS enabled for authentication?

OpenStack components communicate with each other using various protocols, and the communication might involve sensitive or confidential data. An attacker may try to eavesdrop on the channel in order to get access to sensitive information. All components must therefore communicate with each other using a secure communication protocol.

Pass: if the `www_authenticate_uri` parameter under the `[keystone_authtoken]` section in `/etc/manila/manila.conf` is set to an Identity API endpoint starting with `https://`, and the `insecure` parameter under the same `[keystone_authtoken]` section in `/etc/manila/manila.conf` is set to `False`.

Fail: if the `www_authenticate_uri` parameter under the `[keystone_authtoken]` section in `/etc/manila/manila.conf` is not set to an Identity API endpoint starting with `https://`, or the `insecure` parameter under the same `[keystone_authtoken]` section in `/etc/manila/manila.conf` is set to `True`.

#### Check-Shared-05: Does the Shared File Systems service communicate with Compute over TLS?

Note: This item applies only to OpenStack releases Train and earlier, because the option checked here is deprecated in Ussuri.

OpenStack components communicate with each other using various protocols, and the communication might involve sensitive or confidential data. An attacker may try to eavesdrop on the channel in order to get access to sensitive information. All components must therefore communicate with each other using a secure communication protocol.

Pass: if the `nova_api_insecure` parameter under the `[DEFAULT]` section in `manila.conf` is set to `False`.

Fail: if the `nova_api_insecure` parameter under the `[DEFAULT]` section in `manila.conf` is set to `True`.

#### Check-Shared-06: Does the Shared File Systems service communicate with Networking over TLS?

Note: This item applies only to OpenStack releases Train and earlier, because the option checked here is deprecated in Ussuri.

As with the previous check (Check-Shared-05), it is recommended that all components communicate with each other using a secure communication protocol.

Pass: if the `neutron_api_insecure` parameter under the `[DEFAULT]` section in `manila.conf` is set to `False`.

Fail: if the `neutron_api_insecure` parameter under the `[DEFAULT]` section in `manila.conf` is set to `True`.

#### Check-Shared-07: Does the Shared File Systems service communicate with Block Storage over TLS?

Note: This item applies only to OpenStack releases Train and earlier, because the option checked here is deprecated in Ussuri.

As with the previous check (Check-Shared-05), it is recommended that all components communicate with each other using a secure communication protocol.

Pass: if the `cinder_api_insecure` parameter under the `[DEFAULT]` section in `manila.conf` is set to `False`.

Fail: if the `cinder_api_insecure` parameter under the `[DEFAULT]` section in `manila.conf` is set to `True`.
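Where Check-Shared-03 through Check-Shared-07 fail, the relevant options can be corrected in `manila.conf`, for example along these lines (a hedged sketch, assuming `crudini` is available and that the Identity endpoint is served over HTTPS; the URL is illustrative):

```
# Authenticate through Keystone over TLS and disable the insecure fallbacks
crudini --set /etc/manila/manila.conf DEFAULT auth_strategy keystone
crudini --set /etc/manila/manila.conf keystone_authtoken www_authenticate_uri https://keystone.example.com:5000/
crudini --set /etc/manila/manila.conf keystone_authtoken insecure False
crudini --set /etc/manila/manila.conf DEFAULT nova_api_insecure False
crudini --set /etc/manila/manila.conf DEFAULT neutron_api_insecure False
crudini --set /etc/manila/manila.conf DEFAULT cinder_api_insecure False
```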
#### Check-Shared-08: Is the maximum request body size set to the default (114688)?

If the maximum body size per request is not defined, an attacker can craft arbitrarily large OSAPI requests, causing the service to crash and ultimately leading to a denial-of-service attack. Assigning a maximum value ensures that any malicious oversized request is blocked, preserving continuous availability of the service.

Pass: if the `max_request_body_size` parameter under the `[oslo_middleware]` section in `manila.conf` is set to `114688`, or the `osapi_max_request_body_size` parameter under the `[DEFAULT]` section in `manila.conf` is set to `114688`. The `osapi_max_request_body_size` parameter under `[DEFAULT]` is deprecated; it is preferable to use `[oslo_middleware]/max_request_body_size`.

Fail: if the `max_request_body_size` parameter under the `[oslo_middleware]` section in `manila.conf` is not set to `114688`, or the `osapi_max_request_body_size` parameter under the `[DEFAULT]` section in `manila.conf` is not set to `114688`.

## Networking

The OpenStack Networking service (neutron) enables end users or tenants to define, utilize, and consume networking resources. OpenStack Networking provides a tenant-facing API for defining network connectivity and IP addressing for instances in the cloud, in addition to orchestrating the network configuration. With the transition to an API-centric networking service, cloud architects and administrators should consider best practices to secure the physical and virtual network infrastructure and services.

OpenStack Networking was designed with a plug-in architecture that provides extensibility of the API through open-source community or third-party services. As you evaluate your architectural design requirements, it is important to determine which features are available in the OpenStack Networking core services, which additional services are provided by third-party products, and which supplemental services need to be implemented in the physical infrastructure.

This section provides a high-level overview of the processes and best practices to consider when implementing OpenStack Networking. It covers:

- Networking architecture
- Placement of OpenStack Networking services on physical servers
- Networking services
- L2 isolation using VLANs and tunneling
- Networking services extensions
- Networking services limitations
- Networking services security best practices
- OpenStack Networking service configuration
- Securing OpenStack Networking services
- Project network services workflow
- Networking resource policy engine
- Security groups
- Quotas
- Mitigating ARP spoofing
- Checklist:
  - Check-Neutron-01: Is user/group ownership of the configuration files set to root/neutron?
  - Check-Neutron-02: Are strict permissions set for the configuration files?
  - Check-Neutron-03: Is Keystone used for authentication?
  - Check-Neutron-04: Is a secure protocol used for authentication?
  - Check-Neutron-05: Is TLS enabled on the Neutron API server?

### Networking architecture

OpenStack Networking is a standalone service that typically deploys several processes across a number of nodes. These processes interact with each other and with other OpenStack services. The main process of the OpenStack Networking service is neutron-server, a Python daemon that exposes the OpenStack Networking API and passes tenant requests to a suite of plug-ins for additional processing.

The OpenStack Networking components are:

- Neutron server (neutron-server and neutron-*-plugin): This service runs on the network node to service the Networking API and its extensions. It also enforces the network model and IP addressing of each port. The neutron-server requires indirect access to a persistent database; this is accomplished through plug-ins, which communicate with the database using AMQP (Advanced Message Queuing Protocol).
- Plug-in agent (neutron-*-agent): Runs on each compute node to manage the local virtual switch (vswitch) configuration. The plug-in that you use determines which agents run. This service requires message queue access and depends on the plug-in used. Some plug-ins, such as OpenDaylight (ODL) and Open Virtual Network (OVN), do not require any Python agents on the compute nodes.
- DHCP agent (neutron-dhcp-agent): Provides DHCP services to tenant networks. This agent is the same across all plug-ins and is responsible for maintaining DHCP configuration. The neutron-dhcp-agent requires message queue access. Optional, depending on the plug-in.
- L3 agent (neutron-l3-agent): Provides L3/NAT forwarding for VMs on tenant networks. Requires message queue access. Optional, depending on the plug-in.
\u670d\u52a1\u5668/\u670d\u52a1\uff09 \u4e3a\u79df\u6237\u7f51\u7edc\u63d0\u4f9b\u5176\u4ed6\u7f51\u7edc\u670d\u52a1\u3002\u8fd9\u4e9b SDN \u670d\u52a1\u53ef\u4ee5\u901a\u8fc7 REST API \u7b49\u901a\u4fe1\u901a\u9053\u4e0e neutron-server\u3001neutron-plugin \u548c plugin-agents \u8fdb\u884c\u4ea4\u4e92\u3002 \u4e0b\u56fe\u663e\u793a\u4e86 OpenStack Networking \u7ec4\u4ef6\u7684\u67b6\u6784\u548c\u7f51\u7edc\u6d41\u7a0b\u56fe\uff1a","title":"\u7f51\u7edc\u67b6\u6784"},{"location":"security/security-guide/#openstack-networking","text":"\u672c\u6307\u5357\u91cd\u70b9\u4ecb\u7ecd\u4e00\u4e2a\u6807\u51c6\u67b6\u6784\uff0c\u5176\u4e2d\u5305\u62ec\u4e00\u4e2a\u4e91\u63a7\u5236\u5668\u4e3b\u673a\u3001\u4e00\u4e2a\u7f51\u7edc\u4e3b\u673a\u548c\u4e00\u7ec4\u7528\u4e8e\u8fd0\u884c VM \u7684\u8ba1\u7b97\u865a\u62df\u673a\u76d1\u63a7\u7a0b\u5e8f\u3002","title":"OpenStack Networking \u670d\u52a1\u5728\u7269\u7406\u670d\u52a1\u5668\u4e0a\u7684\u653e\u7f6e"},{"location":"security/security-guide/#_188","text":"\u6807\u51c6\u7684 OpenStack Networking \u8bbe\u7f6e\u6700\u591a\u6709\u56db\u4e2a\u4e0d\u540c\u7684\u7269\u7406\u6570\u636e\u4e2d\u5fc3\u7f51\u7edc\uff1a \u7ba1\u7406\u7f51\u7edc \u7528\u4e8e OpenStack \u7ec4\u4ef6\u4e4b\u95f4\u7684\u5185\u90e8\u901a\u4fe1\u3002\u6b64\u7f51\u7edc\u4e0a\u7684 IP \u5730\u5740\u5e94\u53ea\u80fd\u5728\u6570\u636e\u4e2d\u5fc3\u5185\u8bbf\u95ee\uff0c\u5e76\u88ab\u89c6\u4e3a\u7ba1\u7406\u5b89\u5168\u57df\u3002 \u8bbf\u5ba2\u7f51\u7edc \u7528\u4e8e\u4e91\u90e8\u7f72\u4e2d\u7684 VM \u6570\u636e\u901a\u4fe1\u3002\u6b64\u7f51\u7edc\u7684 IP \u5bfb\u5740\u8981\u6c42\u53d6\u51b3\u4e8e\u6240\u4f7f\u7528\u7684 OpenStack Networking \u63d2\u4ef6\u4ee5\u53ca\u79df\u6237\u5bf9\u865a\u62df\u7f51\u7edc\u6240\u505a\u7684\u7f51\u7edc\u914d\u7f6e\u9009\u62e9\u3002\u6b64\u7f51\u7edc\u88ab\u89c6\u4e3a\u5ba2\u6237\u673a\u5b89\u5168\u57df\u3002 \u5916\u90e8\u7f51\u7edc \u7528\u4e8e\u5728\u67d0\u4e9b\u90e8\u7f72\u65b9\u6848\u4e2d\u4e3a VM \u63d0\u4f9b Internet \u8bbf\u95ee\u6743\u9650\u3002Internet \u4e0a\u7684\u4efb\u4f55\u4eba\u90fd\u53ef\u4ee5\u8bbf\u95ee\u6b64\u7f51\u7edc\u4e0a\u7684 IP \u5730\u5740\u3002\u6b64\u7f51\u7edc\u88ab\u89c6\u4e3a\u5c5e\u4e8e\u516c\u5171\u5b89\u5168\u57df\u3002 API\u7f51\u7edc \u5411\u79df\u6237\u516c\u5f00\u6240\u6709 OpenStack API\uff0c\u5305\u62ec OpenStack \u7f51\u7edc API\u3002Internet \u4e0a\u7684\u4efb\u4f55\u4eba\u90fd\u53ef\u4ee5\u8bbf\u95ee\u6b64\u7f51\u7edc\u4e0a\u7684 IP \u5730\u5740\u3002\u8fd9\u53ef\u80fd\u4e0e\u5916\u90e8\u7f51\u7edc\u662f\u540c\u4e00\u7f51\u7edc\uff0c\u56e0\u4e3a\u53ef\u4ee5\u4e3a\u4f7f\u7528 IP \u5206\u914d\u8303\u56f4\u7684\u5916\u90e8\u7f51\u7edc\u521b\u5efa\u4e00\u4e2a\u5b50\u7f51\uff0c\u4ee5\u4fbf\u4ec5\u4f7f\u7528 IP \u5757\u4e2d\u5c0f\u4e8e\u5168\u90e8\u8303\u56f4\u7684 IP \u5730\u5740\u3002\u6b64\u7f51\u7edc\u88ab\u89c6\u4e3a\u516c\u5171\u5b89\u5168\u57df\u3002 \u6709\u5173\u66f4\u591a\u4fe1\u606f\uff0c\u8bf7\u53c2\u9605\u300aOpenStack \u7ba1\u7406\u5458\u6307\u5357\u300b\u3002","title":"\u7269\u7406\u670d\u52a1\u5668\u7684\u7f51\u7edc\u8fde\u63a5"},{"location":"security/security-guide/#_189","text":"\u5728\u8bbe\u8ba1 OpenStack \u7f51\u7edc\u57fa\u7840\u67b6\u6784\u7684\u521d\u59cb\u67b6\u6784\u9636\u6bb5\uff0c\u786e\u4fdd\u63d0\u4f9b\u9002\u5f53\u7684\u4e13\u4e1a\u77e5\u8bc6\u6765\u534f\u52a9\u8bbe\u8ba1\u7269\u7406\u7f51\u7edc\u57fa\u7840\u67b6\u6784\uff0c\u786e\u5b9a\u9002\u5f53\u7684\u5b89\u5168\u63a7\u5236\u548c\u5ba1\u8ba1\u673a\u5236\u975e\u5e38\u91cd\u8981\u3002 OpenStack Networking 
\u589e\u52a0\u4e86\u4e00\u5c42\u865a\u62df\u5316\u7f51\u7edc\u670d\u52a1\uff0c\u4f7f\u79df\u6237\u80fd\u591f\u6784\u5efa\u81ea\u5df1\u7684\u865a\u62df\u7f51\u7edc\u3002\u76ee\u524d\uff0c\u8fd9\u4e9b\u865a\u62df\u5316\u670d\u52a1\u8fd8\u6ca1\u6709\u4f20\u7edf\u7f51\u7edc\u7684\u6210\u719f\u3002\u5728\u91c7\u7528\u8fd9\u4e9b\u865a\u62df\u5316\u670d\u52a1\u4e4b\u524d\uff0c\u8bf7\u8003\u8651\u8fd9\u4e9b\u670d\u52a1\u7684\u5f53\u524d\u72b6\u6001\uff0c\u56e0\u4e3a\u5b83\u51b3\u5b9a\u4e86\u60a8\u53ef\u80fd\u9700\u8981\u5728\u865a\u62df\u5316\u548c\u4f20\u7edf\u7f51\u7edc\u8fb9\u754c\u4e0a\u5b9e\u73b0\u54ea\u4e9b\u63a7\u5236\u3002","title":"\u7f51\u7edc\u670d\u52a1"},{"location":"security/security-guide/#vlan-l2","text":"OpenStack Networking \u53ef\u4ee5\u91c7\u7528\u4e24\u79cd\u4e0d\u540c\u7684\u673a\u5236\u5bf9\u6bcf\u4e2a\u79df\u6237/\u7f51\u7edc\u7ec4\u5408\u8fdb\u884c\u6d41\u91cf\u9694\u79bb\uff1aVLAN\uff08IEEE 802.1Q \u6807\u8bb0\uff09\u6216\u4f7f\u7528 GRE \u5c01\u88c5\u7684 L2 \u96a7\u9053\u3002OpenStack \u90e8\u7f72\u7684\u8303\u56f4\u548c\u89c4\u6a21\u51b3\u5b9a\u4e86\u60a8\u5e94\u8be5\u4f7f\u7528\u54ea\u79cd\u65b9\u6cd5\u8fdb\u884c\u6d41\u91cf\u9694\u79bb\u6216\u9694\u79bb\u3002","title":"\u4f7f\u7528 VLAN \u548c\u96a7\u9053\u7684 L2 \u9694\u79bb"},{"location":"security/security-guide/#vlans","text":"VLAN \u5728\u7279\u5b9a\u7269\u7406\u7f51\u7edc\u4e0a\u5b9e\u73b0\u4e3a\u6570\u636e\u5305\uff0c\u5176\u4e2d\u5305\u542b\u5177\u6709\u7279\u5b9a VLAN ID \uff08VID\uff09 \u5b57\u6bb5\u503c\u7684 IEEE 802.1Q \u6807\u5934\u3002\u5171\u4eab\u540c\u4e00\u7269\u7406\u7f51\u7edc\u7684 VLAN \u7f51\u7edc\u5728 L2 \u4e0a\u5f7c\u6b64\u9694\u79bb\uff0c\u751a\u81f3\u53ef\u4ee5\u6709\u91cd\u53e0\u7684 IP \u5730\u5740\u7a7a\u95f4\u3002\u6bcf\u4e2a\u652f\u6301 VLAN \u7f51\u7edc\u7684\u4e0d\u540c\u7269\u7406\u7f51\u7edc\u90fd\u88ab\u89c6\u4e3a\u4e00\u4e2a\u5355\u72ec\u7684 VLAN \u4e2d\u7ee7\uff0c\u5177\u6709\u4e0d\u540c\u7684 VID \u503c\u7a7a\u95f4\u3002\u6709\u6548\u7684 VID \u503c\u4e3a 1 \u5230 4094\u3002 VLAN \u914d\u7f6e\u7684\u590d\u6742\u6027\u53d6\u51b3\u4e8e\u60a8\u7684 OpenStack \u8bbe\u8ba1\u8981\u6c42\u3002\u4e3a\u4e86\u8ba9 OpenStack Networking \u80fd\u591f\u6709\u6548\u5730\u4f7f\u7528 VLAN\uff0c\u60a8\u5fc5\u987b\u5206\u914d\u4e00\u4e2a VLAN \u8303\u56f4\uff08\u6bcf\u4e2a\u79df\u6237\u4e00\u4e2a\uff09\uff0c\u5e76\u5c06\u6bcf\u4e2a\u8ba1\u7b97\u8282\u70b9\u7269\u7406\u4ea4\u6362\u673a\u7aef\u53e3\u8f6c\u6362\u4e3a VLAN \u4e2d\u7ee7\u7aef\u53e3\u3002 \u6ce8\u610f \u5982\u679c\u60a8\u6253\u7b97\u8ba9\u60a8\u7684\u7f51\u7edc\u652f\u6301\u8d85\u8fc7 4094 \u4e2a\u79df\u6237\uff0c\u5219 VLAN \u53ef\u80fd\u4e0d\u662f\u60a8\u7684\u6b63\u786e\u9009\u62e9\uff0c\u56e0\u4e3a\u9700\u8981\u591a\u4e2a\u201c\u9ed1\u5ba2\u201d\u624d\u80fd\u5c06 VLAN \u6807\u8bb0\u6269\u5c55\u5230\u8d85\u8fc7 4094 \u4e2a\u79df\u6237\u3002","title":"VLANs"},{"location":"security/security-guide/#l2","text":"\u7f51\u7edc\u96a7\u9053\u4f7f\u7528\u552f\u4e00\u7684\u201ctunnel-id\u201d\u5c01\u88c5\u6bcf\u4e2a\u79df\u6237/\u7f51\u7edc\u7ec4\u5408\uff0c\u8be5 ID \u7528\u4e8e\u6807\u8bc6\u5c5e\u4e8e\u8be5\u7ec4\u5408\u7684\u7f51\u7edc\u6d41\u91cf\u3002\u79df\u6237\u7684 L2 \u7f51\u7edc\u8fde\u63a5\u4e0e\u7269\u7406\u4f4d\u7f6e\u6216\u57fa\u7840\u7f51\u7edc\u8bbe\u8ba1\u65e0\u5173\u3002\u901a\u8fc7\u5c06\u6d41\u91cf\u5c01\u88c5\u5728 IP \u6570\u636e\u5305\u4e2d\uff0c\u8be5\u6d41\u91cf\u53ef\u4ee5\u8de8\u8d8a\u7b2c 3 \u5c42\u8fb9\u754c\uff0c\u65e0\u9700\u9884\u914d\u7f6e VLAN \u548c VLAN 
\u4e2d\u7ee7\u3002\u96a7\u9053\u4e3a\u7f51\u7edc\u6570\u636e\u6d41\u91cf\u589e\u52a0\u4e86\u4e00\u5c42\u6df7\u6dc6\uff0c\u4ece\u76d1\u63a7\u7684\u89d2\u5ea6\u964d\u4f4e\u4e86\u5355\u4e2a\u79df\u6237\u6d41\u91cf\u7684\u53ef\u89c1\u6027\u3002 OpenStack Networking \u76ee\u524d\u652f\u6301 GRE \u548c VXLAN \u5c01\u88c5\u3002 \u63d0\u4f9b L2 \u9694\u79bb\u7684\u6280\u672f\u9009\u62e9\u53d6\u51b3\u4e8e\u5c06\u5728\u90e8\u7f72\u4e2d\u521b\u5efa\u7684\u79df\u6237\u7f51\u7edc\u7684\u8303\u56f4\u548c\u5927\u5c0f\u3002\u5982\u679c\u60a8\u7684\u73af\u5883\u7684 VLAN ID \u53ef\u7528\u6027\u6709\u9650\u6216\u5c06\u5177\u6709\u5927\u91cf L2 \u7f51\u7edc\uff0c\u6211\u4eec\u5efa\u8bae\u60a8\u4f7f\u7528\u96a7\u9053\u3002","title":"L2 \u96a7\u9053"},{"location":"security/security-guide/#_190","text":"\u79df\u6237\u7f51\u7edc\u9694\u79bb\u7684\u9009\u62e9\u4f1a\u5f71\u54cd\u79df\u6237\u670d\u52a1\u7684\u7f51\u7edc\u5b89\u5168\u548c\u63a7\u5236\u8fb9\u754c\u7684\u5b9e\u73b0\u65b9\u5f0f\u3002\u4ee5\u4e0b\u9644\u52a0\u7f51\u7edc\u670d\u52a1\u5df2\u7ecf\u53ef\u7528\u6216\u76ee\u524d\u6b63\u5728\u5f00\u53d1\u4e2d\uff0c\u4ee5\u589e\u5f3a OpenStack \u7f51\u7edc\u67b6\u6784\u7684\u5b89\u5168\u6001\u52bf\u3002","title":"\u7f51\u7edc\u670d\u52a1"},{"location":"security/security-guide/#_191","text":"OpenStack \u8ba1\u7b97\u5728\u4e0e\u65e7\u7248 nova-network \u670d\u52a1\u4e00\u8d77\u90e8\u7f72\u65f6\u76f4\u63a5\u652f\u6301\u79df\u6237\u7f51\u7edc\u6d41\u91cf\u8bbf\u95ee\u63a7\u5236\uff0c\u6216\u8005\u53ef\u4ee5\u5c06\u8bbf\u95ee\u63a7\u5236\u63a8\u8fdf\u5230 OpenStack Networking \u670d\u52a1\u3002 \u8bf7\u6ce8\u610f\uff0c\u65e7\u7248 nova-network \u5b89\u5168\u7ec4\u4f7f\u7528 iptables \u5e94\u7528\u4e8e\u5b9e\u4f8b\u4e0a\u7684\u6240\u6709\u865a\u62df\u63a5\u53e3\u7aef\u53e3\u3002 \u5b89\u5168\u7ec4\u5141\u8bb8\u7ba1\u7406\u5458\u548c\u79df\u6237\u6307\u5b9a\u6d41\u91cf\u7c7b\u578b\u4ee5\u53ca\u5141\u8bb8\u901a\u8fc7\u865a\u62df\u63a5\u53e3\u7aef\u53e3\u7684\u65b9\u5411\uff08\u5165\u53e3/\u51fa\u53e3\uff09\u3002\u5b89\u5168\u7ec4\u89c4\u5219\u662f\u6709\u72b6\u6001\u7684 L2-L4 \u6d41\u91cf\u8fc7\u6ee4\u5668\u3002 \u4f7f\u7528\u7f51\u7edc\u670d\u52a1\u65f6\uff0c\u5efa\u8bae\u5728\u6b64\u670d\u52a1\u4e2d\u542f\u7528\u5b89\u5168\u7ec4\uff0c\u5e76\u5728\u8ba1\u7b97\u670d\u52a1\u4e2d\u7981\u7528\u5b89\u5168\u7ec4\u3002","title":"\u8bbf\u95ee\u63a7\u5236\u5217\u8868"},{"location":"security/security-guide/#l3-nat","text":"OpenStack Networking \u8def\u7531\u5668\u53ef\u4ee5\u8fde\u63a5\u591a\u4e2a L2 \u7f51\u7edc\uff0c\u5e76\u4e14\u8fd8\u53ef\u4ee5\u63d0\u4f9b\u8fde\u63a5\u4e00\u4e2a\u6216\u591a\u4e2a\u79c1\u6709 L2 \u7f51\u7edc\u5230\u5171\u4eab\u5916\u90e8\u7f51\u7edc\uff08\u4f8b\u5982\u7528\u4e8e\u8bbf\u95ee\u4e92\u8054\u7f51\u7684\u516c\u5171\u7f51\u7edc\uff09\u7684\u7f51\u5173\u3002 L3 \u8def\u7531\u5668\u5728\u5c06\u8def\u7531\u5668\u4e0a\u884c\u94fe\u8def\u5230\u5916\u90e8\u7f51\u7edc\u7684\u7f51\u5173\u7aef\u53e3\u4e0a\u63d0\u4f9b\u57fa\u672c\u7684\u7f51\u7edc\u5730\u5740\u8f6c\u6362 \uff08NAT\uff09 \u529f\u80fd\u3002\u9ed8\u8ba4\u60c5\u51b5\u4e0b\uff0c\u6b64\u8def\u7531\u5668\u4f1a SNAT\uff08\u9759\u6001 NAT\uff09\u6240\u6709\u6d41\u91cf\uff0c\u5e76\u652f\u6301\u6d6e\u52a8 IP\uff0c\u8fd9\u4f1a\u521b\u5efa\u4ece\u5916\u90e8\u7f51\u7edc\u4e0a\u7684\u516c\u5171 IP \u5230\u8fde\u63a5\u5230\u8def\u7531\u5668\u7684\u5176\u4ed6\u5b50\u7f51\u4e0a\u7684\u4e13\u7528 IP \u7684\u9759\u6001\u4e00\u5bf9\u4e00\u6620\u5c04\u3002 \u6211\u4eec\u5efa\u8bae\u5229\u7528\u6bcf\u4e2a\u79df\u6237\u7684 L3 \u8def\u7531\u548c\u6d6e\u52a8 IP 
\u6765\u5b9e\u73b0\u79df\u6237 VM \u7684\u66f4\u7cbe\u7ec6\u8fde\u63a5\u3002","title":"L3 \u8def\u7531\u548c NAT"},{"location":"security/security-guide/#qos","text":"\u9ed8\u8ba4\u60c5\u51b5\u4e0b\uff0c\u670d\u52a1\u8d28\u91cf \uff08QoS\uff09 \u7b56\u7565\u548c\u89c4\u5219\u7531\u4e91\u7ba1\u7406\u5458\u7ba1\u7406\uff0c\u8fd9\u4f1a\u5bfc\u81f4\u79df\u6237\u65e0\u6cd5\u521b\u5efa\u7279\u5b9a\u7684 QoS \u89c4\u5219\uff0c\u4e5f\u65e0\u6cd5\u5c06\u7279\u5b9a\u7aef\u53e3\u9644\u52a0\u5230\u7b56\u7565\u3002\u5728\u67d0\u4e9b\u7528\u4f8b\u4e2d\uff0c\u4f8b\u5982\u67d0\u4e9b\u7535\u4fe1\u5e94\u7528\u7a0b\u5e8f\uff0c\u7ba1\u7406\u5458\u53ef\u80fd\u4fe1\u4efb\u79df\u6237\uff0c\u56e0\u6b64\u5141\u8bb8\u4ed6\u4eec\u521b\u5efa\u81ea\u5df1\u7684\u7b56\u7565\u5e76\u5c06\u5176\u9644\u52a0\u5230\u7aef\u53e3\u3002\u8fd9\u53ef\u4ee5\u901a\u8fc7\u4fee\u6539 policy.json \u6587\u4ef6\u548c\u7279\u5b9a\u6587\u6863\u6765\u5b9e\u73b0\u3002\u5c06\u4e0e\u6269\u5c55\u4e00\u8d77\u53d1\u5e03\u3002 \u7f51\u7edc\u670d\u52a1 \uff08neutron\uff09 \u652f\u6301 Liberty \u53ca\u66f4\u9ad8\u7248\u672c\u4e2d\u7684\u5e26\u5bbd\u9650\u5236 QoS \u89c4\u5219\u3002\u6b64 QoS \u89c4\u5219\u5df2\u547d\u540d QosBandwidthLimitRule \uff0c\u5b83\u63a5\u53d7\u4e24\u4e2a\u975e\u8d1f\u6574\u6570\uff0c\u4ee5\u5343\u6bd4\u7279/\u79d2\u4e3a\u5355\u4f4d\uff1a max-kbps \uff1a\u5e26\u5bbd max-burst-kbps \uff1a\u7a81\u53d1\u7f13\u51b2\u533a \u5df2 QoSBandwidthLimitRule \u5728 neutron Open vSwitch\u3001Linux \u7f51\u6865\u548c\u5355\u6839\u8f93\u5165/\u8f93\u51fa\u865a\u62df\u5316 \uff08SR-IOV\uff09 \u9a71\u52a8\u7a0b\u5e8f\u4e2d\u5b9e\u73b0\u3002 \u5728 Newton \u4e2d\uff0c\u6dfb\u52a0\u4e86 QoS \u89c4\u5219 QosDscpMarkingRule \u3002\u6b64\u89c4\u5219\u5728 IPv4 \uff08RFC 2474\uff09 \u4e0a\u7684\u670d\u52a1\u6807\u5934\u7c7b\u578b\u548c IPv6 \u4e0a\u7684\u6d41\u91cf\u7c7b\u6807\u5934\u4e2d\u6807\u8bb0\u5dee\u5206\u670d\u52a1\u4ee3\u7801\u70b9 \uff08DSCP\uff09 \u503c\uff0c\u8fd9\u4e9b\u503c\u9002\u7528\u4e8e\u5e94\u7528\u89c4\u5219\u7684\u865a\u62df\u673a\u7684\u6240\u6709\u6d41\u91cf\u3002\u8fd9\u662f\u4e00\u4e2a 6 \u4f4d\u6807\u5934\uff0c\u5177\u6709 21 \u4e2a\u6709\u6548\u503c\uff0c\u8868\u793a\u6570\u636e\u5305\u5728\u9047\u5230\u62e5\u585e\u65f6\u7a7f\u8fc7\u7f51\u7edc\u65f6\u7684\u4e22\u5f03\u4f18\u5148\u7ea7\u3002\u9632\u706b\u5899\u8fd8\u53ef\u4ee5\u4f7f\u7528\u5b83\u6765\u5c06\u6709\u6548\u6216\u65e0\u6548\u6d41\u91cf\u4e0e\u5176\u8bbf\u95ee\u63a7\u5236\u5217\u8868\u8fdb\u884c\u5339\u914d\u3002 \u7aef\u53e3\u955c\u50cf\u670d\u52a1\u6d89\u53ca\u5c06\u8fdb\u5165\u6216\u79bb\u5f00\u4e00\u4e2a\u7aef\u53e3\u7684\u6570\u636e\u5305\u526f\u672c\u53d1\u9001\u5230\u53e6\u4e00\u4e2a\u7aef\u53e3\uff0c\u8be5\u7aef\u53e3\u901a\u5e38\u4e0e\u88ab\u955c\u50cf\u6570\u636e\u5305\u7684\u539f\u59cb\u76ee\u7684\u5730\u4e0d\u540c\u3002Tap-as-a-Service \uff08TaaS\uff09 \u662f OpenStack \u7f51\u7edc\u670d\u52a1 \uff08neutron\uff09 \u7684\u6269\u5c55\u3002\u5b83\u4e3a\u79df\u6237\u865a\u62df\u7f51\u7edc\u63d0\u4f9b\u8fdc\u7a0b\u7aef\u53e3\u955c\u50cf\u529f\u80fd\u3002\u6b64\u670d\u52a1\u4e3b\u8981\u65e8\u5728\u5e2e\u52a9\u79df\u6237\uff08\u6216\u4e91\u7ba1\u7406\u5458\uff09\u8c03\u8bd5\u590d\u6742\u7684\u865a\u62df\u7f51\u7edc\uff0c\u5e76\u901a\u8fc7\u76d1\u89c6\u4e0e\u5176\u5173\u8054\u7684\u7f51\u7edc\u6d41\u91cf\u6765\u4e86\u89e3\u5176 VM\u3002TaaS 
\u9075\u5faa\u79df\u6237\u8fb9\u754c\uff0c\u5176\u955c\u50cf\u4f1a\u8bdd\u80fd\u591f\u8de8\u8d8a\u591a\u4e2a\u8ba1\u7b97\u548c\u7f51\u7edc\u8282\u70b9\u3002\u5b83\u662f\u4e00\u4e2a\u5fc5\u4e0d\u53ef\u5c11\u7684\u57fa\u7840\u8bbe\u65bd\u7ec4\u4ef6\uff0c\u53ef\u7528\u4e8e\u5411\u5404\u79cd\u7f51\u7edc\u5206\u6790\u548c\u5b89\u5168\u5e94\u7528\u7a0b\u5e8f\u63d0\u4f9b\u6570\u636e\u3002","title":"\u670d\u52a1\u8d28\u91cf \uff08QoS\uff09"},{"location":"security/security-guide/#_192","text":"OpenStack Networking \u7684\u53e6\u4e00\u4e2a\u7279\u6027\u662f\u8d1f\u8f7d\u5747\u8861\u5668\u5373\u670d\u52a1 \uff08LBaaS\uff09\u3002LBaaS \u53c2\u8003\u5b9e\u73b0\u57fa\u4e8e HA-Proxy\u3002OpenStack Networking \u4e2d\u7684\u6269\u5c55\u6b63\u5728\u5f00\u53d1\u7b2c\u4e09\u65b9\u63d2\u4ef6\uff0c\u4ee5\u4fbf\u4e3a\u865a\u62df\u63a5\u53e3\u7aef\u53e3\u63d0\u4f9b\u5e7f\u6cdb\u7684 L4-L7 \u529f\u80fd\u3002","title":"\u8d1f\u8f7d\u5747\u8861"},{"location":"security/security-guide/#_193","text":"FW-as-a-Service\uff08FWaaS\uff09\u88ab\u8ba4\u4e3a\u662fOpenStack Networking\u7684Kilo\u7248\u672c\u7684\u5b9e\u9a8c\u6027\u529f\u80fd\u3002FWaaS \u6ee1\u8db3\u4e86\u7ba1\u7406\u548c\u5229\u7528\u5178\u578b\u9632\u706b\u5899\u4ea7\u54c1\u63d0\u4f9b\u7684\u4e30\u5bcc\u5b89\u5168\u529f\u80fd\u7684\u9700\u6c42\uff0c\u8fd9\u4e9b\u4ea7\u54c1\u901a\u5e38\u6bd4\u5f53\u524d\u5b89\u5168\u7ec4\u63d0\u4f9b\u7684\u8981\u5168\u9762\u5f97\u591a\u3002\u98de\u601d\u5361\u5c14\u548c\u82f1\u7279\u5c14\u90fd\u5f00\u53d1\u4e86\u7b2c\u4e09\u65b9\u63d2\u4ef6\u4f5c\u4e3aOpenStack Networking\u7684\u6269\u5c55\uff0c\u4ee5\u5728Kilo\u7248\u672c\u4e2d\u652f\u6301\u6b64\u7ec4\u4ef6\u3002\u6709\u5173 FWaaS \u7ba1\u7406\u7684\u66f4\u591a\u8be6\u7ec6\u4fe1\u606f\uff0c\u8bf7\u53c2\u9605\u300aOpenStack \u7ba1\u7406\u5458\u6307\u5357\u300b\u4e2d\u7684\u9632\u706b\u5899\u5373\u670d\u52a1 \uff08FWaaS\uff09 \u6982\u8ff0\u3002 \u5728\u8bbe\u8ba1 OpenStack Networking \u57fa\u7840\u67b6\u6784\u65f6\uff0c\u4e86\u89e3\u53ef\u7528\u7f51\u7edc\u670d\u52a1\u7684\u5f53\u524d\u7279\u6027\u548c\u5c40\u9650\u6027\u975e\u5e38\u91cd\u8981\u3002\u4e86\u89e3\u865a\u62df\u7f51\u7edc\u548c\u7269\u7406\u7f51\u7edc\u7684\u8fb9\u754c\u5c06\u6709\u52a9\u4e8e\u5728\u60a8\u7684\u73af\u5883\u4e2d\u6dfb\u52a0\u6240\u9700\u7684\u5b89\u5168\u63a7\u4ef6\u3002","title":"\u9632\u706b\u5899"},{"location":"security/security-guide/#_194","text":"\u5f00\u6e90\u793e\u533a\u6216\u4f7f\u7528 OpenStack Networking \u7684 SDN \u516c\u53f8\u63d0\u4f9b\u7684\u5df2\u77e5\u63d2\u4ef6\u5217\u8868\u53ef\u5728 OpenStack neutron \u63d2\u4ef6\u548c\u9a71\u52a8\u7a0b\u5e8f wiki \u9875\u9762\u4e0a\u627e\u5230\u3002","title":"\u7f51\u7edc\u670d\u52a1\u6269\u5c55"},{"location":"security/security-guide/#_195","text":"OpenStack Networking \u5177\u6709\u4ee5\u4e0b\u5df2\u77e5\u9650\u5236\uff1a \u91cd\u53e0\u7684 IP \u5730\u5740 \u5982\u679c\u8fd0\u884c neutron-l3-agent \u6216 neutron-dhcp-agent \u7684\u8282\u70b9\u4f7f\u7528\u91cd\u53e0\u7684 IP \u5730\u5740\uff0c\u5219\u8fd9\u4e9b\u8282\u70b9\u5fc5\u987b\u4f7f\u7528 Linux \u7f51\u7edc\u547d\u540d\u7a7a\u95f4\u3002\u9ed8\u8ba4\u60c5\u51b5\u4e0b\uff0cDHCP \u548c L3 \u4ee3\u7406\u4f7f\u7528 Linux \u7f51\u7edc\u547d\u540d\u7a7a\u95f4\uff0c\u5e76\u5728\u5404\u81ea\u7684\u547d\u540d\u7a7a\u95f4\u4e2d\u8fd0\u884c\u3002\u4f46\u662f\uff0c\u5982\u679c\u4e3b\u673a\u4e0d\u652f\u6301\u591a\u4e2a\u547d\u540d\u7a7a\u95f4\uff0c\u5219 DHCP \u548c L3 \u4ee3\u7406\u5e94\u5728\u4e0d\u540c\u7684\u4e3b\u673a\u4e0a\u8fd0\u884c\u3002\u8fd9\u662f\u56e0\u4e3a L3 \u4ee3\u7406\u548c DHCP 
\u4ee3\u7406\u521b\u5efa\u7684 IP \u5730\u5740\u4e4b\u95f4\u6ca1\u6709\u9694\u79bb\u3002 \u5982\u679c\u4e0d\u5b58\u5728\u7f51\u7edc\u547d\u540d\u7a7a\u95f4\u652f\u6301\uff0c\u5219 L3 \u4ee3\u7406\u7684\u53e6\u4e00\u4e2a\u9650\u5236\u662f\u4ec5\u652f\u6301\u5355\u4e2a\u903b\u8f91\u8def\u7531\u5668\u3002 \u591a\u4e3b\u673a DHCP \u4ee3\u7406 OpenStack Networking \u652f\u6301\u591a\u4e2a\u5177\u6709\u8d1f\u8f7d\u5747\u8861\u529f\u80fd\u7684 L3 \u548c DHCP \u4ee3\u7406\u3002\u4f46\u662f\uff0c\u4e0d\u652f\u6301\u865a\u62df\u673a\u4f4d\u7f6e\u7684\u7d27\u5bc6\u8026\u5408\u3002\u6362\u8a00\u4e4b\uff0c\u5728\u521b\u5efa\u865a\u62df\u673a\u65f6\uff0c\u9ed8\u8ba4\u865a\u62df\u673a\u8c03\u5ea6\u7a0b\u5e8f\u4e0d\u4f1a\u8003\u8651\u4ee3\u7406\u7684\u4f4d\u7f6e\u3002 L3 \u4ee3\u7406\u4e0d\u652f\u6301 IPv6 neutron-l3-agent \u88ab\u8bb8\u591a\u63d2\u4ef6\u7528\u4e8e\u5b9e\u73b0 L3 \u8f6c\u53d1\uff0c\u4ec5\u652f\u6301 IPv4 \u8f6c\u53d1\u3002","title":"\u7f51\u7edc\u670d\u52a1\u9650\u5236"},{"location":"security/security-guide/#_196","text":"\u8981\u4fdd\u62a4 OpenStack Networking\uff0c\u60a8\u5fc5\u987b\u4e86\u89e3\u5982\u4f55\u5c06\u79df\u6237\u5b9e\u4f8b\u521b\u5efa\u7684\u5de5\u4f5c\u6d41\u8fc7\u7a0b\u6620\u5c04\u5230\u5b89\u5168\u57df\u3002 \u6709\u56db\u4e2a\u4e3b\u8981\u670d\u52a1\u4e0e OpenStack Networking \u4ea4\u4e92\u3002\u5728\u5178\u578b\u7684 OpenStack \u90e8\u7f72\u4e2d\uff0c\u8fd9\u4e9b\u670d\u52a1\u6620\u5c04\u5230\u4ee5\u4e0b\u5b89\u5168\u57df\uff1a OpenStack \u4eea\u8868\u677f\uff1a\u516c\u5171\u548c\u7ba1\u7406 OpenStack Identity\uff1a\u7ba1\u7406 OpenStack \u8ba1\u7b97\u8282\u70b9\uff1a\u7ba1\u7406\u548c\u5ba2\u6237\u7aef OpenStack \u7f51\u7edc\u8282\u70b9\uff1a\u7ba1\u7406\u3001\u5ba2\u6237\u7aef\uff0c\u4ee5\u53ca\u53ef\u80fd\u7684\u516c\u5171\u8282\u70b9\uff0c\u5177\u4f53\u53d6\u51b3\u4e8e\u6b63\u5728\u4f7f\u7528\u7684 neutron-plugin\u3002 SDN \u670d\u52a1\u8282\u70b9\uff1a\u7ba1\u7406\u3001\u8bbf\u5ba2\u548c\u53ef\u80fd\u7684\u516c\u5171\u670d\u52a1\uff0c\u5177\u4f53\u53d6\u51b3\u4e8e\u4f7f\u7528\u7684\u4ea7\u54c1\u3002 \u8981\u9694\u79bb OpenStack Networking \u670d\u52a1\u4e0e\u5176\u4ed6 OpenStack \u6838\u5fc3\u670d\u52a1\u4e4b\u95f4\u7684\u654f\u611f\u6570\u636e\u901a\u4fe1\uff0c\u8bf7\u5c06\u8fd9\u4e9b\u901a\u4fe1\u901a\u9053\u914d\u7f6e\u4e3a\u4ec5\u5141\u8bb8\u901a\u8fc7\u9694\u79bb\u7684\u7ba1\u7406\u7f51\u7edc\u8fdb\u884c\u901a\u4fe1\u3002","title":"\u7f51\u7edc\u670d\u52a1\u5b89\u5168\u6700\u4f73\u505a\u6cd5"},{"location":"security/security-guide/#openstack-networking_1","text":"","title":"OpenStack Networking \u670d\u52a1\u914d\u7f6e"},{"location":"security/security-guide/#api-neutron-server","text":"\u8981\u9650\u5236 OpenStack Networking API \u670d\u52a1\u4e3a\u4f20\u5165\u5ba2\u6237\u7aef\u8fde\u63a5\u7ed1\u5b9a\u7f51\u7edc\u5957\u63a5\u5b57\u7684\u63a5\u53e3\u6216 IP \u5730\u5740\uff0c\u8bf7\u5728 neutron.conf \u6587\u4ef6\u4e2d\u6307\u5b9a bind_host \u548c bind_port\uff0c\u5982\u4e0b\u6240\u793a\uff1a # Address to bind the API server bind_host = IP ADDRESS OF SERVER # Port the bind the API server to bind_port = 9696","title":"\u9650\u5236 API \u670d\u52a1\u5668\u7684\u7ed1\u5b9a\u5730\u5740\uff1aneutron-server"},{"location":"security/security-guide/#openstack-networking-db-rpc","text":"OpenStack Networking \u670d\u52a1\u7684\u5404\u79cd\u7ec4\u4ef6\u4f7f\u7528\u6d88\u606f\u961f\u5217\u6216\u6570\u636e\u5e93\u8fde\u63a5\u4e0e OpenStack Networking \u4e2d\u7684\u5176\u4ed6\u7ec4\u4ef6\u8fdb\u884c\u901a\u4fe1\u3002 
\u5bf9\u4e8e\u9700\u8981\u76f4\u63a5\u6570\u636e\u5e93\u8fde\u63a5\u7684\u6240\u6709\u7ec4\u4ef6\uff0c\u5efa\u8bae\u60a8\u9075\u5faa\u6570\u636e\u5e93\u8eab\u4efd\u9a8c\u8bc1\u548c\u8bbf\u95ee\u63a7\u5236\u4e2d\u63d0\u4f9b\u7684\u51c6\u5219\u3002 \u5efa\u8bae\u60a8\u9075\u5faa\u961f\u5217\u8eab\u4efd\u9a8c\u8bc1\u548c\u8bbf\u95ee\u63a7\u5236\u4e2d\u63d0\u4f9b\u7684\u51c6\u5219\uff0c\u9002\u7528\u4e8e\u9700\u8981 RPC \u901a\u4fe1\u7684\u6240\u6709\u7ec4\u4ef6\u3002","title":"\u9650\u5236 OpenStack Networking \u670d\u52a1\u7684 DB \u548c RPC \u901a\u4fe1"},{"location":"security/security-guide/#openstack_9","text":"\u672c\u8282\u8ba8\u8bba OpenStack Networking \u914d\u7f6e\u6700\u4f73\u5b9e\u8df5\uff0c\u56e0\u4e3a\u5b83\u4eec\u9002\u7528\u4e8e OpenStack \u90e8\u7f72\u4e2d\u7684\u9879\u76ee\u7f51\u7edc\u5b89\u5168\u3002","title":"\u4fdd\u62a4 OpenStack \u7f51\u7edc\u670d\u52a1"},{"location":"security/security-guide/#_197","text":"OpenStack Networking \u4e3a\u7528\u6237\u63d0\u4f9b\u7f51\u7edc\u8d44\u6e90\u548c\u914d\u7f6e\u7684\u81ea\u52a9\u670d\u52a1\u3002\u4e91\u67b6\u6784\u5e08\u548c\u8fd0\u7ef4\u4eba\u5458\u5fc5\u987b\u8bc4\u4f30\u5176\u8bbe\u8ba1\u7528\u4f8b\uff0c\u4ee5\u4fbf\u4e3a\u7528\u6237\u63d0\u4f9b\u521b\u5efa\u3001\u66f4\u65b0\u548c\u9500\u6bc1\u53ef\u7528\u7f51\u7edc\u8d44\u6e90\u7684\u80fd\u529b\u3002","title":"\u9879\u76ee\u7f51\u7edc\u670d\u52a1\u5de5\u4f5c\u6d41"},{"location":"security/security-guide/#_198","text":"OpenStack Networking \u4e2d\u7684\u7b56\u7565\u5f15\u64ce\u53ca\u5176\u914d\u7f6e\u6587\u4ef6 policy.json \u63d0\u4f9b\u4e86\u4e00\u79cd\u65b9\u6cd5\uff0c\u53ef\u4ee5\u5bf9\u7528\u6237\u5728\u9879\u76ee\u7f51\u7edc\u65b9\u6cd5\u548c\u5bf9\u8c61\u4e0a\u63d0\u4f9b\u66f4\u7ec6\u7c92\u5ea6\u7684\u6388\u6743\u3002OpenStack Networking \u7b56\u7565\u5b9a\u4e49\u4f1a\u5f71\u54cd\u7f51\u7edc\u53ef\u7528\u6027\u3001\u7f51\u7edc\u5b89\u5168\u548c\u6574\u4f53 OpenStack \u5b89\u5168\u6027\u3002\u4e91\u67b6\u6784\u5e08\u548c\u8fd0\u7ef4\u4eba\u5458\u5e94\u4ed4\u7ec6\u8bc4\u4f30\u5176\u5bf9\u7528\u6237\u548c\u9879\u76ee\u8bbf\u95ee\u7f51\u7edc\u8d44\u6e90\u7ba1\u7406\u7684\u7b56\u7565\u3002\u6709\u5173 OpenStack Networking \u7b56\u7565\u5b9a\u4e49\u7684\u66f4\u8be6\u7ec6\u8bf4\u660e\uff0c\u8bf7\u53c2\u9605\u300aOpenStack \u7ba1\u7406\u5458\u6307\u5357\u300b\u4e2d\u7684\u201c\u8eab\u4efd\u9a8c\u8bc1\u548c\u6388\u6743\u201d\u90e8\u5206\u3002 \u6ce8\u610f \u8bf7\u52a1\u5fc5\u67e5\u770b\u9ed8\u8ba4\u7f51\u7edc\u8d44\u6e90\u7b56\u7565\uff0c\u56e0\u4e3a\u53ef\u4ee5\u4fee\u6539\u6b64\u7b56\u7565\u4ee5\u9002\u5408\u60a8\u7684\u5b89\u5168\u72b6\u51b5\u3002 \u5982\u679c\u60a8\u7684 OpenStack \u90e8\u7f72\u4e3a\u4e0d\u540c\u7684\u5b89\u5168\u57df\u63d0\u4f9b\u4e86\u591a\u4e2a\u5916\u90e8\u8bbf\u95ee\u70b9\uff0c\u90a3\u4e48\u9650\u5236\u9879\u76ee\u5c06\u591a\u4e2a vNIC \u8fde\u63a5\u5230\u591a\u4e2a\u5916\u90e8\u8bbf\u95ee\u70b9\u7684\u80fd\u529b\u975e\u5e38\u91cd\u8981\uff0c\u8fd9\u5c06\u6865\u63a5\u8fd9\u4e9b\u5b89\u5168\u57df\uff0c\u5e76\u53ef\u80fd\u5bfc\u81f4\u4e0d\u53ef\u9884\u89c1\u7684\u5b89\u5168\u5371\u5bb3\u3002\u901a\u8fc7\u5229\u7528 OpenStack Compute \u63d0\u4f9b\u7684\u4e3b\u673a\u805a\u5408\u529f\u80fd\uff0c\u6216\u8005\u5c06\u9879\u76ee\u865a\u62df\u673a\u62c6\u5206\u4e3a\u5177\u6709\u4e0d\u540c\u865a\u62df\u7f51\u7edc\u914d\u7f6e\u7684\u591a\u4e2a\u9879\u76ee\u9879\u76ee\uff0c\u53ef\u4ee5\u964d\u4f4e\u8fd9\u79cd\u98ce\u9669\u3002","title":"\u7f51\u7edc\u8d44\u6e90\u7b56\u7565\u5f15\u64ce"},{"location":"security/security-guide/#_199","text":"OpenStack Networking 
\u670d\u52a1\u4f7f\u7528\u6bd4 OpenStack Compute \u4e2d\u5185\u7f6e\u7684\u5b89\u5168\u7ec4\u529f\u80fd\u66f4\u7075\u6d3b\u3001\u66f4\u5f3a\u5927\u7684\u673a\u5236\u63d0\u4f9b\u5b89\u5168\u7ec4\u529f\u80fd\u3002\u56e0\u6b64\uff0c\u5728\u4f7f\u7528 OpenStack Network \u65f6\uff0c\u5e94\u59cb\u7ec8\u7981\u7528\u5185\u7f6e\u5b89\u5168\u7ec4\uff0c nova.conf \u5e76\u5c06\u6240\u6709\u5b89\u5168\u7ec4\u8c03\u7528\u4ee3\u7406\u5230 OpenStack Networking API\u3002\u5982\u679c\u4e0d\u8fd9\u6837\u505a\uff0c\u5c06\u5bfc\u81f4\u4e24\u4e2a\u670d\u52a1\u540c\u65f6\u5e94\u7528\u51b2\u7a81\u7684\u5b89\u5168\u7b56\u7565\u3002\u8981\u5c06\u5b89\u5168\u7ec4\u4ee3\u7406\u5230 OpenStack Networking\uff0c\u8bf7\u4f7f\u7528\u4ee5\u4e0b\u914d\u7f6e\u503c\uff1a firewall_driver \u5fc5\u987b\u8bbe\u7f6e\u4e3a nova.virt.firewall.NoopFirewallDriver \uff0c\u4ee5\u4fbf nova-compute \u672c\u8eab\u4e0d\u6267\u884c\u57fa\u4e8e iptables \u7684\u8fc7\u6ee4\u3002 security_group_api \u5fc5\u987b\u8bbe\u7f6e\u4e3a neutron \u4ee5\u4fbf\u5c06\u6240\u6709\u5b89\u5168\u7ec4\u8bf7\u6c42\u4ee3\u7406\u5230 OpenStack Networking \u670d\u52a1\u3002 \u5b89\u5168\u7ec4\u662f\u5b89\u5168\u7ec4\u89c4\u5219\u7684\u5bb9\u5668\u3002\u5b89\u5168\u7ec4\u53ca\u5176\u89c4\u5219\u5141\u8bb8\u7ba1\u7406\u5458\u548c\u9879\u76ee\u6307\u5b9a\u5141\u8bb8\u901a\u8fc7\u865a\u62df\u63a5\u53e3\u7aef\u53e3\u7684\u6d41\u91cf\u7c7b\u578b\u548c\u65b9\u5411\uff08\u5165\u53e3/\u51fa\u53e3\uff09\u3002\u5728 OpenStack Networking \u4e2d\u521b\u5efa\u865a\u62df\u63a5\u53e3\u7aef\u53e3\u65f6\uff0c\u8be5\u7aef\u53e3\u4e0e\u5b89\u5168\u7ec4\u76f8\u5173\u8054\u3002\u6709\u5173\u7aef\u53e3\u5b89\u5168\u7ec4\u9ed8\u8ba4\u884c\u4e3a\u7684\u66f4\u591a\u8be6\u7ec6\u4fe1\u606f\uff0c\u8bf7\u53c2\u9605\u7f51\u7edc\u5b89\u5168\u7ec4\u884c\u4e3a\u6587\u6863\u3002\u53ef\u4ee5\u5c06\u89c4\u5219\u6dfb\u52a0\u5230\u9ed8\u8ba4\u5b89\u5168\u7ec4\uff0c\u4ee5\u4fbf\u6839\u636e\u6bcf\u4e2a\u90e8\u7f72\u66f4\u6539\u884c\u4e3a\u3002 \u4f7f\u7528 OpenStack Compute API \u4fee\u6539\u5b89\u5168\u7ec4\u65f6\uff0c\u66f4\u65b0\u540e\u7684\u5b89\u5168\u7ec4\u5c06\u5e94\u7528\u4e8e\u5b9e\u4f8b\u4e0a\u7684\u6240\u6709\u865a\u62df\u63a5\u53e3\u7aef\u53e3\u3002\u8fd9\u662f\u56e0\u4e3a OpenStack Compute \u5b89\u5168\u7ec4 API \u662f\u57fa\u4e8e\u5b9e\u4f8b\u7684\uff0c\u800c\u4e0d\u662f\u57fa\u4e8e\u7aef\u53e3\u7684\uff0c\u5982 OpenStack Networking \u4e2d\u6240\u793a\u3002","title":"\u5b89\u5168\u7ec4"},{"location":"security/security-guide/#_200","text":"\u914d\u989d\u63d0\u4f9b\u4e86\u9650\u5236\u9879\u76ee\u53ef\u7528\u7684\u7f51\u7edc\u8d44\u6e90\u6570\u91cf\u7684\u529f\u80fd\u3002\u60a8\u53ef\u4ee5\u5bf9\u6240\u6709\u9879\u76ee\u5f3a\u5236\u5b9e\u65bd\u9ed8\u8ba4\u914d\u989d\u3002\u5305\u62ec /etc/neutron/neutron.conf \u4ee5\u4e0b\u914d\u989d\u9009\u9879\uff1a [QUOTAS] # resource name(s) that are supported in quota features quota_items = network,subnet,port # default number of resource allowed per tenant, minus for unlimited #default_quota = -1 # number of networks allowed per tenant, and minus means unlimited quota_network = 10 # number of subnets allowed per tenant, and minus means unlimited quota_subnet = 10 # number of ports allowed per tenant, and minus means unlimited quota_port = 50 # number of security groups allowed per tenant, and minus means unlimited quota_security_group = 10 # number of security group rules allowed per tenant, and minus means unlimited quota_security_group_rule = 100 # default driver to use for quota checks quota_driver = neutron.quota.ConfDriver OpenStack Networking 
\u8fd8\u901a\u8fc7\u914d\u989d\u6269\u5c55 API \u652f\u6301\u6bcf\u4e2a\u9879\u76ee\u7684\u914d\u989d\u9650\u5236\u3002\u8981\u542f\u7528\u6bcf\u4e2a\u9879\u76ee\u7684\u914d\u989d\uff0c\u5fc5\u987b\u5728 \u4e2d\u8bbe\u7f6e\u9009\u9879 quota_driver neutron.conf \u3002 quota_driver = neutron.db.quota.driver.DbQuotaDriver","title":"\u914d\u989d"},{"location":"security/security-guide/#arp","text":"\u4f7f\u7528\u6241\u5e73\u7f51\u7edc\u65f6\uff0c\u4e0d\u80fd\u5047\u5b9a\u5171\u4eab\u540c\u4e00\u7b2c 2 \u5c42\u7f51\u7edc\uff08\u6216\u5e7f\u64ad\u57df\uff09\u7684\u9879\u76ee\u5f7c\u6b64\u5b8c\u5168\u9694\u79bb\u3002\u8fd9\u4e9b\u9879\u76ee\u53ef\u80fd\u5bb9\u6613\u53d7\u5230 ARP \u6b3a\u9a97\u7684\u653b\u51fb\uff0c\u4ece\u800c\u6709\u53ef\u80fd\u906d\u53d7\u4e2d\u95f4\u4eba\u653b\u51fb\u3002 \u5982\u679c\u4f7f\u7528\u652f\u6301 ARP \u5b57\u6bb5\u5339\u914d\u7684 Open vSwitch \u7248\u672c\uff0c\u5219\u53ef\u4ee5\u901a\u8fc7\u542f\u7528 Open vSwitch \u4ee3\u7406 prevent_arp_spoofing \u9009\u9879\u6765\u5e2e\u52a9\u964d\u4f4e\u6b64\u98ce\u9669\u3002\u6b64\u9009\u9879\u53ef\u9632\u6b62\u5b9e\u4f8b\u6267\u884c\u6b3a\u9a97\u653b\u51fb;\u5b83\u4e0d\u80fd\u4fdd\u62a4\u4ed6\u4eec\u514d\u53d7\u6b3a\u9a97\u653b\u51fb\u3002\u8bf7\u6ce8\u610f\uff0c\u6b64\u8bbe\u7f6e\u9884\u8ba1\u5c06\u5728 Ocata \u4e2d\u5220\u9664\uff0c\u8be5\u884c\u4e3a\u5c06\u6c38\u4e45\u5904\u4e8e\u6d3b\u52a8\u72b6\u6001\u3002 \u4f8b\u5982\uff0c\u5728 /etc/neutron/plugins/ml2/openvswitch_agent.ini \uff1a prevent_arp_spoofing = True \u9664 Open vSwitch \u5916\uff0c\u5176\u4ed6\u63d2\u4ef6\u4e5f\u53ef\u80fd\u5305\u542b\u7c7b\u4f3c\u7684\u7f13\u89e3\u63aa\u65bd;\u5efa\u8bae\u60a8\u5728\u9002\u5f53\u7684\u60c5\u51b5\u4e0b\u542f\u7528\u6b64\u529f\u80fd\u3002 \u6ce8\u610f \u5373\u4f7f\u542f\u7528 `prevent_arp_spoofing` \u4e86\u6241\u5e73\u7f51\u7edc\uff0c\u4e5f\u65e0\u6cd5\u63d0\u4f9b\u5b8c\u6574\u7684\u9879\u76ee\u9694\u79bb\u7ea7\u522b\uff0c\u56e0\u4e3a\u6240\u6709\u9879\u76ee\u6d41\u91cf\u4ecd\u4f1a\u53d1\u9001\u5230\u540c\u4e00 VLAN\u3002","title":"\u7f13\u89e3 ARP \u6b3a\u9a97"},{"location":"security/security-guide/#_201","text":"","title":"\u68c0\u67e5\u8868"},{"location":"security/security-guide/#check-neutron-01-rootneutron","text":"\u914d\u7f6e\u6587\u4ef6\u5305\u542b\u7ec4\u4ef6\u5e73\u7a33\u8fd0\u884c\u6240\u9700\u7684\u5173\u952e\u53c2\u6570\u548c\u4fe1\u606f\u3002\u5982\u679c\u975e\u7279\u6743\u7528\u6237\u6709\u610f\u6216\u65e0\u610f\u5730\u4fee\u6539\u6216\u5220\u9664\u4efb\u4f55\u53c2\u6570\u6216\u6587\u4ef6\u672c\u8eab\uff0c\u5219\u4f1a\u5bfc\u81f4\u4e25\u91cd\u7684\u53ef\u7528\u6027\u95ee\u9898\uff0c\u4ece\u800c\u5bfc\u81f4\u5bf9\u5176\u4ed6\u6700\u7ec8\u7528\u6237\u7684\u62d2\u7edd\u670d\u52a1\u3002\u56e0\u6b64\uff0c\u6b64\u7c7b\u5173\u952e\u914d\u7f6e\u6587\u4ef6\u7684\u7528\u6237\u6240\u6709\u6743\u5fc5\u987b\u8bbe\u7f6e\u4e3a root\uff0c\u7ec4\u6240\u6709\u6743\u5fc5\u987b\u8bbe\u7f6e\u4e3a neutron\u3002\u6b64\u5916\uff0c\u5305\u542b\u76ee\u5f55\u5e94\u5177\u6709\u76f8\u540c\u7684\u6240\u6709\u6743\uff0c\u4ee5\u786e\u4fdd\u6b63\u786e\u62e5\u6709\u65b0\u6587\u4ef6\u3002 \u8fd0\u884c\u4ee5\u4e0b\u547d\u4ee4\uff1a $ stat -L -c \"%U %G\" /etc/neutron/neutron.conf | egrep \"root neutron\" $ stat -L -c \"%U %G\" /etc/neutron/api-paste.ini | egrep \"root neutron\" $ stat -L -c \"%U %G\" /etc/neutron/policy.json | egrep \"root neutron\" $ stat -L -c \"%U %G\" /etc/neutron/rootwrap.conf | egrep \"root neutron\" $ stat -L -c \"%U %G\" /etc/neutron | egrep \"root neutron\" 
\u901a\u8fc7\uff1a\u5982\u679c\u6240\u6709\u8fd9\u4e9b\u914d\u7f6e\u6587\u4ef6\u7684\u7528\u6237\u548c\u7ec4\u6240\u6709\u6743\u5206\u522b\u8bbe\u7f6e\u4e3a root \u548c neutron\u3002\u4e0a\u9762\u7684\u547d\u4ee4\u663e\u793a\u4e86\u6839\u4e2d\u5b50\u7684\u8f93\u51fa\u3002 \u5931\u8d25\uff1a\u5982\u679c\u4e0a\u8ff0\u547d\u4ee4\u672a\u8fd4\u56de\u4efb\u4f55\u8f93\u51fa\uff0c\u56e0\u4e3a\u7528\u6237\u548c\u7ec4\u6240\u6709\u6743\u53ef\u80fd\u5df2\u8bbe\u7f6e\u4e3a\u9664 root \u4ee5\u5916\u7684\u4efb\u4f55\u7528\u6237\u6216\u9664 neutron \u4ee5\u5916\u7684\u4efb\u4f55\u7ec4\u3002","title":"Check-Neutron-01\uff1a\u914d\u7f6e\u6587\u4ef6\u7684\u7528\u6237/\u7ec4\u6240\u6709\u6743\u662f\u5426\u8bbe\u7f6e\u4e3a root/neutron\uff1f"},{"location":"security/security-guide/#check-neutron-02","text":"\u4e0e\u524d\u9762\u7684\u68c0\u67e5\u7c7b\u4f3c\uff0c\u5efa\u8bae\u5bf9\u6b64\u7c7b\u914d\u7f6e\u6587\u4ef6\u8bbe\u7f6e\u4e25\u683c\u7684\u8bbf\u95ee\u6743\u9650\u3002 \u8fd0\u884c\u4ee5\u4e0b\u547d\u4ee4\uff1a $ stat -L -c \"%a\" /etc/neutron/neutron.conf $ stat -L -c \"%a\" /etc/neutron/api-paste.ini $ stat -L -c \"%a\" /etc/neutron/policy.json $ stat -L -c \"%a\" /etc/neutron/rootwrap.conf $ stat -L -c \"%a\" /etc/neutron \u8fd8\u53ef\u4ee5\u8fdb\u884c\u66f4\u5e7f\u6cdb\u7684\u9650\u5236\uff1a\u5982\u679c\u5305\u542b\u76ee\u5f55\u8bbe\u7f6e\u4e3a 750\uff0c\u5219\u4fdd\u8bc1\u6b64\u76ee\u5f55\u4e2d\u65b0\u521b\u5efa\u7684\u6587\u4ef6\u5177\u6709\u6240\u9700\u7684\u6743\u9650\u3002 \u901a\u8fc7\uff1a\u5982\u679c\u6743\u9650\u8bbe\u7f6e\u4e3a 640 \u6216\u66f4\u4e25\u683c\uff0c\u6216\u8005\u5305\u542b\u76ee\u5f55\u8bbe\u7f6e\u4e3a 750\u3002640 \u7684\u6743\u9650\u8f6c\u6362\u4e3a\u6240\u6709\u8005 r/w\u3001\u7ec4 r\uff0c\u800c\u5bf9\u5176\u4ed6\u4eba\u6ca1\u6709\u6743\u9650\uff0c\u5373\u201cu=rw\uff0cg=r\uff0co=\u201d\u3002 \u8bf7\u6ce8\u610f\uff0c\u4f7f\u7528 Check-Neutron-01\uff1a\u914d\u7f6e\u6587\u4ef6\u7684\u7528\u6237/\u7ec4\u6240\u6709\u6743\u662f\u5426\u8bbe\u7f6e\u4e3a root/neutron\uff1f\u6743\u9650\u8bbe\u7f6e\u4e3a 640\uff0croot \u5177\u6709\u8bfb/\u5199\u8bbf\u95ee\u6743\u9650\uff0cneutron \u5177\u6709\u5bf9\u8fd9\u4e9b\u914d\u7f6e\u6587\u4ef6\u7684\u8bfb\u53d6\u8bbf\u95ee\u6743\u9650\u3002\u4e5f\u53ef\u4ee5\u4f7f\u7528\u4ee5\u4e0b\u547d\u4ee4\u9a8c\u8bc1\u8bbf\u95ee\u6743\u9650\u3002\u4ec5\u5f53\u6b64\u547d\u4ee4\u652f\u6301 ACL \u65f6\uff0c\u5b83\u624d\u5728\u60a8\u7684\u7cfb\u7edf\u4e0a\u53ef\u7528\u3002 $ getfacl --tabular -a /etc/neutron/neutron.conf getfacl: Removing leading '/' from absolute path names # file: etc/neutron/neutron.conf USER root rw- GROUP neutron r-- mask r-- other --- \u5931\u8d25\uff1a\u5982\u679c\u6743\u9650\u6ca1\u6709\u8bbe\u7f6e\u81f3\u5c11\u4e3a640\u3002","title":"Check-Neutron-02\uff1a\u662f\u5426\u4e3a\u914d\u7f6e\u6587\u4ef6\u8bbe\u7f6e\u4e86\u4e25\u683c\u7684\u6743\u9650\uff1f"},{"location":"security/security-guide/#check-neutron-03keystone","text":"\u6ce8\u610f \u6b64\u9879\u4ec5\u9002\u7528\u4e8e OpenStack \u7248\u672c Rocky \u53ca\u4e4b\u524d\u7248\u672c\uff0c\u56e0\u4e3a `auth_strategy` Stein \u4e2d\u5df2\u5f03\u7528\u3002 OpenStack \u652f\u6301\u5404\u79cd\u8eab\u4efd\u9a8c\u8bc1\u7b56\u7565\uff0c\u5982 noauth\u3001keystone 
\u7b49\u3002\u5982\u679c\u4f7f\u7528\u201cnoauth\u201d\u7b56\u7565\uff0c\u90a3\u4e48\u7528\u6237\u65e0\u9700\u4efb\u4f55\u8eab\u4efd\u9a8c\u8bc1\u5373\u53ef\u4e0eOpenStack\u670d\u52a1\u8fdb\u884c\u4ea4\u4e92\u3002\u8fd9\u53ef\u80fd\u662f\u4e00\u4e2a\u6f5c\u5728\u7684\u98ce\u9669\uff0c\u56e0\u4e3a\u653b\u51fb\u8005\u53ef\u80fd\u4f1a\u83b7\u5f97\u5bf9 OpenStack \u7ec4\u4ef6\u7684\u672a\u7ecf\u6388\u6743\u7684\u8bbf\u95ee\u3002\u56e0\u6b64\uff0c\u5f3a\u70c8\u5efa\u8bae\u6240\u6709\u670d\u52a1\u90fd\u5fc5\u987b\u4f7f\u7528\u5176\u670d\u52a1\u5e10\u6237\u901a\u8fc7 keystone \u8fdb\u884c\u8eab\u4efd\u9a8c\u8bc1\u3002 \u901a\u8fc7\uff1a\u5982\u679c section in \u4e0b\u7684\u53c2\u6570 auth_strategy \u8bbe\u7f6e\u4e3a keystone \u3002 [DEFAULT] /etc/neutron/neutron.conf \u5931\u8d25\uff1a\u5982\u679c section \u4e0b\u7684 [DEFAULT] \u53c2\u6570 auth_strategy \u503c\u8bbe\u7f6e\u4e3a noauth \u6216 noauth2 \u3002","title":"Check-Neutron-03\uff1aKeystone\u662f\u5426\u7528\u4e8e\u8eab\u4efd\u9a8c\u8bc1\uff1f"},{"location":"security/security-guide/#check-neutron-04","text":"OpenStack \u7ec4\u4ef6\u4f7f\u7528\u5404\u79cd\u534f\u8bae\u76f8\u4e92\u901a\u4fe1\uff0c\u901a\u4fe1\u53ef\u80fd\u6d89\u53ca\u654f\u611f/\u673a\u5bc6\u6570\u636e\u3002\u653b\u51fb\u8005\u53ef\u80fd\u4f1a\u5c1d\u8bd5\u7a83\u542c\u9891\u9053\u4ee5\u8bbf\u95ee\u654f\u611f\u4fe1\u606f\u3002\u56e0\u6b64\uff0c\u6240\u6709\u7ec4\u4ef6\u90fd\u5fc5\u987b\u4f7f\u7528\u5b89\u5168\u7684\u901a\u4fe1\u534f\u8bae\u76f8\u4e92\u901a\u4fe1\u3002 \u901a\u8fc7\uff1a\u5982\u679c section in /etc/neutron/neutron.conf \u4e0b\u7684\u53c2\u6570\u503c\u8bbe\u7f6e\u4e3a Identity API \u7aef\u70b9\u5f00\u5934\uff0c https:// \u5e76\u4e14 same /etc/neutron/neutron.conf \u4e2d\u540c\u4e00 [keystone_authtoken] \u90e8\u5206\u4e0b\u7684 [keystone_authtoken] \u53c2\u6570 www_authenticate_uri insecure \u503c\u8bbe\u7f6e\u4e3a False \u3002 \u5931\u8d25\uff1a\u5982\u679c in /etc/neutron/neutron.conf \u90e8\u5206\u4e0b\u7684 [keystone_authtoken] \u53c2\u6570 www_authenticate_uri \u503c\u672a\u8bbe\u7f6e\u4e3a\u4ee5 \u5f00\u5934\u7684\u8eab\u4efd API \u7aef\u70b9\uff0c https:// \u6216\u8005\u540c\u4e00 /etc/neutron/neutron.conf \u90e8\u5206\u4e2d\u7684\u53c2\u6570 insecure [keystone_authtoken] \u503c\u8bbe\u7f6e\u4e3a True \u3002","title":"Check-Neutron-04\uff1a\u662f\u5426\u4f7f\u7528\u5b89\u5168\u534f\u8bae\u8fdb\u884c\u8eab\u4efd\u9a8c\u8bc1\uff1f"},{"location":"security/security-guide/#check-neutron-05neutron-api-tls","text":"\u4e0e\u4e4b\u524d\u7684\u68c0\u67e5\u7c7b\u4f3c\uff0c\u5efa\u8bae\u5728 API \u670d\u52a1\u5668\u4e0a\u542f\u7528\u5b89\u5168\u901a\u4fe1\u3002 \u901a\u8fc7\uff1a\u5982\u679c section in \u4e0b\u7684\u53c2\u6570 use_ssl \u8bbe\u7f6e\u4e3a True \u3002 [DEFAULT] /etc/neutron/neutron.conf \u5931\u8d25\uff1a\u5982\u679c section in \u4e0b\u7684\u53c2\u6570 use_ssl \u8bbe\u7f6e\u4e3a False \u3002 [DEFAULT] /etc/neutron/neutron.conf","title":"Check-Neutron-05\uff1aNeutron API \u670d\u52a1\u5668\u4e0a\u662f\u5426\u542f\u7528\u4e86 TLS\uff1f"},{"location":"security/security-guide/#_202","text":"OpenStack \u5bf9\u8c61\u5b58\u50a8 \uff08swift\uff09 \u670d\u52a1\u63d0\u4f9b\u901a\u8fc7 HTTP \u5b58\u50a8\u548c\u68c0\u7d22\u6570\u636e\u7684\u8f6f\u4ef6\u3002\u5bf9\u8c61\uff08\u6570\u636e blob\uff09\u5b58\u50a8\u5728\u7ec4\u7ec7\u5c42\u6b21\u7ed3\u6784\u4e2d\uff0c\u8be5\u5c42\u6b21\u7ed3\u6784\u63d0\u4f9b\u533f\u540d\u53ea\u8bfb\u8bbf\u95ee\u3001ACL 
\u5b9a\u4e49\u7684\u8bbf\u95ee\uff0c\u751a\u81f3\u4e34\u65f6\u8bbf\u95ee\u3002\u5bf9\u8c61\u5b58\u50a8\u652f\u6301\u901a\u8fc7\u4e2d\u95f4\u4ef6\u5b9e\u73b0\u7684\u591a\u79cd\u57fa\u4e8e\u4ee4\u724c\u7684\u8eab\u4efd\u9a8c\u8bc1\u673a\u5236\u3002 \u5e94\u7528\u7a0b\u5e8f\u901a\u8fc7\u884c\u4e1a\u6807\u51c6\u7684 HTTP RESTful API \u5728\u5bf9\u8c61\u5b58\u50a8\u4e2d\u5b58\u50a8\u548c\u68c0\u7d22\u6570\u636e\u3002\u5bf9\u8c61\u5b58\u50a8\u7684\u540e\u7aef\u7ec4\u4ef6\u9075\u5faa\u76f8\u540c\u7684 RESTful \u6a21\u578b\uff0c\u5c3d\u7ba1\u67d0\u4e9b API\uff08\u4f8b\u5982\u7ba1\u7406\u6301\u4e45\u6027\u7684 API\uff09\u5bf9\u96c6\u7fa4\u662f\u79c1\u6709\u7684\u3002\u6709\u5173 API \u7684\u66f4\u591a\u8be6\u7ec6\u4fe1\u606f\uff0c\u8bf7\u53c2\u9605 OpenStack Storage API\u3002 \u5bf9\u8c61\u5b58\u50a8\u7684\u7ec4\u4ef6\u5206\u4e3a\u4ee5\u4e0b\u4e3b\u8981\u7ec4\uff1a \u4ee3\u7406\u670d\u52a1 \u8eab\u4efd\u9a8c\u8bc1\u670d\u52a1 \u5b58\u50a8\u670d\u52a1 \u8d26\u6237\u670d\u52a1 \u5bb9\u5668\u670d\u52a1 \u5bf9\u8c61\u670d\u52a1 OpenStack \u5bf9\u8c61\u5b58\u50a8\u7ba1\u7406\u6307\u5357 \uff082013\uff09 \u4e2d\u7684\u793a\u4f8b\u56fe \u6ce8\u610f \u5bf9\u8c61\u5b58\u50a8\u5b89\u88c5\u4e0d\u5fc5\u4f4d\u4e8e Internet \u4e0a\uff0c\u4e5f\u53ef\u4ee5\u662f\u79c1\u6709\u4e91\uff0c\u5176\u4e2d\u516c\u5171\u4ea4\u6362\u673a\u662f\u7ec4\u7ec7\u5185\u90e8\u7f51\u7edc\u57fa\u7840\u67b6\u6784\u7684\u4e00\u90e8\u5206\u3002","title":"\u5bf9\u8c61\u5b58\u50a8"},{"location":"security/security-guide/#_203","text":"\u8981\u4fdd\u62a4\u5bf9\u8c61\u5b58\u50a8\u670d\u52a1\uff0c\u9996\u5148\u8981\u4fdd\u62a4\u7f51\u7edc\u7ec4\u4ef6\u3002\u5982\u679c\u60a8\u8df3\u8fc7\u4e86\u7f51\u7edc\u7ae0\u8282\uff0c\u8bf7\u8fd4\u56de\u5230\u7f51\u7edc\u90e8\u5206\u3002 rsync \u534f\u8bae\u7528\u4e8e\u5728\u5b58\u50a8\u670d\u52a1\u8282\u70b9\u4e4b\u95f4\u590d\u5236\u6570\u636e\u4ee5\u5b9e\u73b0\u9ad8\u53ef\u7528\u6027\u3002\u6b64\u5916\uff0c\u5728\u5ba2\u6237\u7aef\u7aef\u70b9\u548c\u4e91\u73af\u5883\u4e4b\u95f4\u6765\u56de\u4e2d\u7ee7\u6570\u636e\u65f6\uff0c\u4ee3\u7406\u670d\u52a1\u4f1a\u4e0e\u5b58\u50a8\u670d\u52a1\u8fdb\u884c\u901a\u4fe1\u3002 \u8b66\u544a \u5bf9\u8c61\u5b58\u50a8\u4e0d\u5bf9\u8282\u70b9\u95f4\u901a\u4fe1\u8fdb\u884c\u52a0\u5bc6\u6216\u8eab\u4efd\u9a8c\u8bc1\u3002\u8fd9\u5c31\u662f\u60a8\u5728\u4f53\u7cfb\u7ed3\u6784\u56fe\u4e2d\u770b\u5230\u4e13\u7528\u4ea4\u6362\u673a\u6216\u4e13\u7528\u7f51\u7edc \uff08[V]LAN\uff09 \u7684\u539f\u56e0\u3002\u8fd9\u4e2a\u6570\u636e\u57df\u4e5f\u5e94\u8be5\u4e0e\u5176\u4ed6OpenStack\u6570\u636e\u7f51\u7edc\u5206\u5f00\u3002\u6709\u5173\u5b89\u5168\u57df\u7684\u8fdb\u4e00\u6b65\u8ba8\u8bba\uff0c\u8bf7\u53c2\u9605\u5b89\u5168\u8fb9\u754c\u548c\u5a01\u80c1\u3002 \u5efa\u8bae \u5bf9\u6570\u636e\u57df\u4e2d\u7684\u5b58\u50a8\u8282\u70b9\u4f7f\u7528\u4e13\u7528 \uff08V\uff09LAN \u7f51\u6bb5\u3002 \u8fd9\u9700\u8981\u4ee3\u7406\u8282\u70b9\u5177\u6709\u53cc\u63a5\u53e3\uff08\u7269\u7406\u6216\u865a\u62df\uff09\uff1a \u4e00\u4e2a\u4f5c\u4e3a\u6d88\u8d39\u8005\u8bbf\u95ee\u7684\u516c\u5171\u754c\u9762\u3002 \u53e6\u4e00\u4e2a\u4f5c\u4e3a\u53ef\u4ee5\u8bbf\u95ee\u5b58\u50a8\u8282\u70b9\u7684\u4e13\u7528\u63a5\u53e3\u3002 \u4e0b\u56fe\u6f14\u793a\u4e86\u4e00\u79cd\u53ef\u80fd\u7684\u7f51\u7edc\u4f53\u7cfb\u7ed3\u6784\u3002 
\u5177\u6709\u7ba1\u7406\u8282\u70b9\uff08OSAM\uff09\u7684\u5bf9\u8c61\u5b58\u50a8\u7f51\u7edc\u67b6\u6784","title":"\u7f51\u7edc\u5b89\u5168"},{"location":"security/security-guide/#_204","text":"","title":"\u4e00\u822c\u670d\u52a1\u5b89\u5168"},{"location":"security/security-guide/#root","text":"\u6211\u4eec\u5efa\u8bae\u60a8\u5c06\u5bf9\u8c61\u5b58\u50a8\u670d\u52a1\u914d\u7f6e\u4e3a\u5728\u975e root \uff08UID 0\uff09 \u670d\u52a1\u5e10\u6237\u4e0b\u8fd0\u884c\u3002\u4e00\u4e2a\u5efa\u8bae\u662f swift \u5177\u6709\u4e3b\u7ec4 swift \u7684\u7528\u6237\u540d\u3002\u4f8b\u5982\uff0c proxy-server \u5bf9\u8c61\u5b58\u50a8\u670d\u52a1\u5305\u62ec\u3001\u3001 container-server account-server \u3002\u6709\u5173\u8bbe\u7f6e\u548c\u914d\u7f6e\u7684\u8be6\u7ec6\u6b65\u9aa4\uff0c\u8bf7\u53c2\u9605\u300a\u5b89\u88c5\u6307\u5357\u300b\u7684\u201c\u6dfb\u52a0\u5bf9\u8c61\u5b58\u50a8\u201d\u4e00\u7ae0\u7684 OpenStack \u6587\u6863\u7d22\u5f15\u3002 \u6ce8\u610f \u4e0a\u9762\u7684\u94fe\u63a5\u9ed8\u8ba4\u4e3aUbuntu\u7248\u672c\u3002","title":"\u4ee5\u975e root \u7528\u6237\u8eab\u4efd\u8fd0\u884c\u670d\u52a1"},{"location":"security/security-guide/#_205","text":"\u8be5 /etc/swift \u76ee\u5f55\u5305\u542b\u6709\u5173\u73af\u5f62\u62d3\u6251\u548c\u73af\u5883\u914d\u7f6e\u7684\u4fe1\u606f\u3002\u5efa\u8bae\u4f7f\u7528\u4ee5\u4e0b\u6743\u9650\uff1a # chown -R root:swift /etc/swift/* # find /etc/swift/ -type f -exec chmod 640 {} \\; # find /etc/swift/ -type d -exec chmod 750 {} \\; \u8fd9\u5c06\u9650\u5236\u53ea\u6709 root \u7528\u6237\u80fd\u591f\u4fee\u6539\u914d\u7f6e\u6587\u4ef6\uff0c\u540c\u65f6\u5141\u8bb8\u670d\u52a1\u901a\u8fc7\u5176 swift \u5728\u7ec4\u4e2d\u7684\u7ec4\u6210\u5458\u8eab\u4efd\u8bfb\u53d6\u5b83\u4eec\u3002","title":"\u6587\u4ef6\u6743\u9650"},{"location":"security/security-guide/#_206","text":"\u4ee5\u4e0b\u662f\u5404\u79cd\u5b58\u50a8\u670d\u52a1\u7684\u9ed8\u8ba4\u4fa6\u542c\u7aef\u53e3\uff1a \u670d\u52a1\u540d\u79f0 \u6e2f\u53e3 \u7c7b\u578b \u8d26\u6237\u670d\u52a1 6002 TCP \u5bb9\u5668\u670d\u52a1 6001 TCP \u5bf9\u8c61\u670d\u52a1 6000 TCP \u540c\u6b65 [1] 873 TCP \u5982\u679c\u4f7f\u7528 ssync \u800c\u4e0d\u662f rsync\uff0c\u5219\u4f7f\u7528\u5bf9\u8c61\u670d\u52a1\u7aef\u53e3\u6765\u7ef4\u62a4\u6301\u4e45\u6027\u3002 \u91cd\u8981 \u5728\u5b58\u50a8\u8282\u70b9\u4e0a\u4e0d\u8fdb\u884c\u8eab\u4efd\u9a8c\u8bc1\u3002\u5982\u679c\u80fd\u591f\u5728\u5176\u4e2d\u4e00\u4e2a\u7aef\u53e3\u4e0a\u8fde\u63a5\u5230\u5b58\u50a8\u8282\u70b9\uff0c\u5219\u65e0\u9700\u8eab\u4efd\u9a8c\u8bc1\u5373\u53ef\u8bbf\u95ee\u6216\u4fee\u6539\u6570\u636e\u3002\u4e3a\u4e86\u9632\u6b62\u6b64\u95ee\u9898\uff0c\u60a8\u5e94\u8be5\u9075\u5faa\u4e4b\u524d\u7ed9\u51fa\u7684\u6709\u5173\u4f7f\u7528\u4e13\u7528\u5b58\u50a8\u7f51\u7edc\u7684\u5efa\u8bae\u3002","title":"\u4fdd\u62a4\u5b58\u50a8\u670d\u52a1"},{"location":"security/security-guide/#_207","text":"\u5bf9\u8c61\u5b58\u50a8\u5e10\u6237\u4e0d\u662f\u7528\u6237\u5e10\u6237\u6216\u51ed\u636e\u3002\u4e0b\u9762\u5bf9\u8fd9\u4e9b\u5173\u7cfb\u8fdb\u884c\u8bf4\u660e\uff1a \u5bf9\u8c61\u5b58\u50a8\u5e10\u6237 \u5bb9\u5668\u7684\u6536\u96c6;\u4e0d\u662f\u7528\u6237\u5e10\u6237\u6216\u8eab\u4efd\u9a8c\u8bc1\u3002\u54ea\u4e9b\u7528\u6237\u4e0e\u8be5\u5e10\u6237\u76f8\u5173\u8054\u4ee5\u53ca\u4ed6\u4eec\u5982\u4f55\u8bbf\u95ee\u8be5\u5e10\u6237\u53d6\u51b3\u4e8e\u6240\u4f7f\u7528\u7684\u8eab\u4efd\u9a8c\u8bc1\u7cfb\u7edf\u3002\u8bf7\u53c2\u9605\u5bf9\u8c61\u5b58\u50a8\u8eab\u4efd\u9a8c\u8bc1\u3002 \u5bf9\u8c61\u5b58\u50a8\u5bb9\u5668 
\u5bf9\u8c61\u7684\u96c6\u5408\u3002\u5bb9\u5668\u4e0a\u7684\u5143\u6570\u636e\u53ef\u7528\u4e8e ACL\u3002ACL \u7684\u542b\u4e49\u53d6\u51b3\u4e8e\u6240\u4f7f\u7528\u7684\u8eab\u4efd\u9a8c\u8bc1\u7cfb\u7edf\u3002 \u5bf9\u8c61\u5b58\u50a8\u5bf9\u8c61 \u5b9e\u9645\u6570\u636e\u5bf9\u8c61\u3002\u5bf9\u8c61\u7ea7\u522b\u7684 ACL \u4e5f\u53ef\u4ee5\u4e0e\u5143\u6570\u636e\u4e00\u8d77\u4f7f\u7528\uff0c\u5e76\u4e14\u53d6\u51b3\u4e8e\u6240\u4f7f\u7528\u7684\u8eab\u4efd\u9a8c\u8bc1\u7cfb\u7edf\u3002 \u5728\u6bcf\u4e2a\u7ea7\u522b\uff0c\u60a8\u90fd\u6709 ACL\uff0c\u7528\u4e8e\u6307\u793a\u8c01\u62e5\u6709\u54ea\u79cd\u7c7b\u578b\u7684\u8bbf\u95ee\u6743\u9650\u3002ACL \u662f\u6839\u636e\u6b63\u5728\u4f7f\u7528\u7684\u8eab\u4efd\u9a8c\u8bc1\u7cfb\u7edf\u8fdb\u884c\u89e3\u91ca\u7684\u3002\u6700\u5e38\u7528\u7684\u4e24\u79cd\u8eab\u4efd\u9a8c\u8bc1\u63d0\u4f9b\u7a0b\u5e8f\u7c7b\u578b\u662f Identity service \uff08keystone\uff09 \u548c TempAuth\u3002\u81ea\u5b9a\u4e49\u8eab\u4efd\u9a8c\u8bc1\u63d0\u4f9b\u7a0b\u5e8f\u4e5f\u662f\u53ef\u80fd\u7684\u3002\u6709\u5173\u66f4\u591a\u4fe1\u606f\uff0c\u8bf7\u53c2\u9605\u5bf9\u8c61\u5b58\u50a8\u8eab\u4efd\u9a8c\u8bc1\u3002","title":"\u5bf9\u8c61\u5b58\u50a8\u5e10\u6237\u672f\u8bed"},{"location":"security/security-guide/#_208","text":"\u4ee3\u7406\u8282\u70b9\u5e94\u81f3\u5c11\u5177\u6709\u4e24\u4e2a\u63a5\u53e3\uff08\u7269\u7406\u6216\u865a\u62df\uff09\uff1a\u4e00\u4e2a\u516c\u5171\u63a5\u53e3\u548c\u4e00\u4e2a\u4e13\u7528\u63a5\u53e3\u3002\u9632\u706b\u5899\u6216\u670d\u52a1\u7ed1\u5b9a\u53ef\u80fd\u4f1a\u4fdd\u62a4\u516c\u5171\u63a5\u53e3\u3002\u9762\u5411\u516c\u4f17\u7684\u670d\u52a1\u662f\u4e00\u4e2a HTTP Web \u670d\u52a1\u5668\uff0c\u7528\u4e8e\u5904\u7406\u7aef\u70b9\u5ba2\u6237\u7aef\u8bf7\u6c42\u3001\u5bf9\u5176\u8fdb\u884c\u8eab\u4efd\u9a8c\u8bc1\u5e76\u6267\u884c\u76f8\u5e94\u7684\u64cd\u4f5c\u3002\u4e13\u7528\u63a5\u53e3\u4e0d\u9700\u8981\u4efb\u4f55\u4fa6\u542c\u670d\u52a1\uff0c\u800c\u662f\u7528\u4e8e\u5efa\u7acb\u4e0e\u4e13\u7528\u5b58\u50a8\u7f51\u7edc\u4e0a\u7684\u5b58\u50a8\u8282\u70b9\u7684\u4f20\u51fa\u8fde\u63a5\u3002","title":"\u4fdd\u62a4\u4ee3\u7406\u670d\u52a1"},{"location":"security/security-guide/#http_1","text":"\u5982\u524d\u6240\u8ff0\uff0c\u60a8\u5e94\u8be5\u5c06 Web \u670d\u52a1\u914d\u7f6e\u4e3a\u975e root\uff08\u65e0 UID 0\uff09\u7528\u6237 swift \u3002\u9700\u8981\u4f7f\u7528\u5927\u4e8e 1024 \u7684\u7aef\u53e3\u624d\u80fd\u8f7b\u677e\u5b8c\u6210\u6b64\u64cd\u4f5c\uff0c\u5e76\u907f\u514d\u4ee5 root \u8eab\u4efd\u8fd0\u884c Web \u5bb9\u5668\u7684\u4efb\u4f55\u90e8\u5206\u3002\u901a\u5e38\uff0c\u4f7f\u7528 HTTP REST API \u5e76\u6267\u884c\u8eab\u4efd\u9a8c\u8bc1\u7684\u5ba2\u6237\u7aef\u4f1a\u81ea\u52a8\u4ece\u8eab\u4efd\u9a8c\u8bc1\u54cd\u5e94\u4e2d\u68c0\u7d22\u6240\u9700\u7684\u5b8c\u6574 REST API URL\u3002OpenStack \u7684 REST API \u5141\u8bb8\u5ba2\u6237\u7aef\u5bf9\u4e00\u4e2a URL \u8fdb\u884c\u8eab\u4efd\u9a8c\u8bc1\uff0c\u7136\u540e\u88ab\u544a\u77e5\u5bf9\u5b9e\u9645\u670d\u52a1\u4f7f\u7528\u5b8c\u5168\u4e0d\u540c\u7684 URL\u3002\u4f8b\u5982\uff0c\u5ba2\u6237\u7aef\u5411 https://identity.cloud.example.org:55443/v1/auth \u8fdb\u884c\u8eab\u4efd\u9a8c\u8bc1\uff0c\u5e76\u83b7\u53d6\u5176\u8eab\u4efd\u9a8c\u8bc1\u5bc6\u94a5\u548c\u5b58\u50a8 URL\uff08\u4ee3\u7406\u8282\u70b9\u6216\u8d1f\u8f7d\u5747\u8861\u5668\u7684 URL\uff09https://swift.cloud.example.org:44443/v1/AUTH_8980 \u54cd\u5e94\u3002 \u5c06 Web \u670d\u52a1\u5668\u914d\u7f6e\u4e3a\u4ee5\u975e root 
\u7528\u6237\u8eab\u4efd\u542f\u52a8\u548c\u8fd0\u884c\u7684\u65b9\u6cd5\u56e0 Web \u670d\u52a1\u5668\u548c\u64cd\u4f5c\u7cfb\u7edf\u800c\u5f02\u3002","title":"HTTP \u76d1\u542c\u7aef\u53e3"},{"location":"security/security-guide/#_209","text":"\u5982\u679c\u4f7f\u7528 Apache \u7684\u9009\u9879\u4e0d\u53ef\u884c\uff0c\u6216\u8005\u4e3a\u4e86\u63d0\u9ad8\u6027\u80fd\uff0c\u60a8\u5e0c\u671b\u51cf\u8f7b TLS \u5de5\u4f5c\uff0c\u5219\u53ef\u4ee5\u4f7f\u7528\u4e13\u7528\u7684\u7f51\u7edc\u8bbe\u5907\u8d1f\u8f7d\u5e73\u8861\u5668\u3002\u8fd9\u662f\u5728\u4f7f\u7528\u591a\u4e2a\u4ee3\u7406\u8282\u70b9\u65f6\u63d0\u4f9b\u5197\u4f59\u548c\u8d1f\u8f7d\u5e73\u8861\u7684\u5e38\u7528\u65b9\u6cd5\u3002 \u5982\u679c\u9009\u62e9\u5378\u8f7d TLS\uff0c\u8bf7\u786e\u4fdd\u8d1f\u8f7d\u5747\u8861\u5668\u548c\u4ee3\u7406\u8282\u70b9\u4e4b\u95f4\u7684\u7f51\u7edc\u94fe\u8def\u4f4d\u4e8e\u4e13\u7528 \uff08V\uff09LAN \u7f51\u6bb5\u4e0a\uff0c\u4ee5\u4fbf\u7f51\u7edc\u4e0a\u7684\u5176\u4ed6\u8282\u70b9\uff08\u53ef\u80fd\u5df2\u6cc4\u9732\uff09\u65e0\u6cd5\u7a83\u542c\uff08\u55c5\u63a2\uff09\u672a\u52a0\u5bc6\u7684\u6d41\u91cf\u3002\u5982\u679c\u53d1\u751f\u6b64\u7c7b\u8fdd\u89c4\u884c\u4e3a\uff0c\u653b\u51fb\u8005\u53ef\u4ee5\u8bbf\u95ee\u7aef\u70b9\u5ba2\u6237\u7aef\u6216\u4e91\u7ba1\u7406\u5458\u51ed\u636e\u5e76\u8bbf\u95ee\u4e91\u6570\u636e\u3002 \u60a8\u4f7f\u7528\u7684\u8eab\u4efd\u9a8c\u8bc1\u670d\u52a1\uff08\u4f8b\u5982\u8eab\u4efd\u670d\u52a1\uff08keystone\uff09\u6216TempAuth\uff09\u5c06\u51b3\u5b9a\u5982\u4f55\u5728\u5bf9\u7aef\u70b9\u5ba2\u6237\u7aef\u7684\u54cd\u5e94\u4e2d\u914d\u7f6e\u4e0d\u540c\u7684URL\uff0c\u4ee5\u4fbf\u5b83\u4eec\u4f7f\u7528\u8d1f\u8f7d\u5e73\u8861\u5668\u800c\u4e0d\u662f\u5355\u4e2a\u4ee3\u7406\u8282\u70b9\u3002","title":"\u8d1f\u8f7d\u5747\u8861\u5668"},{"location":"security/security-guide/#_210","text":"\u5bf9\u8c61\u5b58\u50a8\u4f7f\u7528 WSGI \u6a21\u578b\u6765\u63d0\u4f9b\u4e2d\u95f4\u4ef6\u529f\u80fd\uff0c\u8be5\u529f\u80fd\u4e0d\u4ec5\u63d0\u4f9b\u901a\u7528\u53ef\u6269\u5c55\u6027\uff0c\u8fd8\u7528\u4e8e\u7aef\u70b9\u5ba2\u6237\u7aef\u7684\u8eab\u4efd\u9a8c\u8bc1\u3002\u8eab\u4efd\u9a8c\u8bc1\u63d0\u4f9b\u7a0b\u5e8f\u5b9a\u4e49\u5b58\u5728\u7684\u89d2\u8272\u548c\u7528\u6237\u7c7b\u578b\u3002\u6709\u4e9b\u4f7f\u7528\u4f20\u7edf\u7684\u7528\u6237\u540d\u548c\u5bc6\u7801\u51ed\u636e\uff0c\u800c\u53e6\u4e00\u4e9b\u5219\u53ef\u80fd\u5229\u7528 API \u5bc6\u94a5\u4ee4\u724c\u751a\u81f3\u5ba2\u6237\u7aef x.509 \u8bc1\u4e66\u3002\u81ea\u5b9a\u4e49\u63d0\u4f9b\u7a0b\u5e8f\u53ef\u4ee5\u96c6\u6210\u5230\u4f7f\u7528\u81ea\u5b9a\u4e49\u4e2d\u95f4\u4ef6\u4e2d\u3002 \u5bf9\u8c61\u5b58\u50a8\u9ed8\u8ba4\u81ea\u5e26\u4e24\u4e2a\u8ba4\u8bc1\u4e2d\u95f4\u4ef6\u6a21\u5757\uff0c\u5176\u4e2d\u4efb\u4f55\u4e00\u4e2a\u6a21\u5757\u90fd\u53ef\u4ee5\u4f5c\u4e3a\u5f00\u53d1\u81ea\u5b9a\u4e49\u8ba4\u8bc1\u4e2d\u95f4\u4ef6\u7684\u793a\u4f8b\u4ee3\u7801\u3002","title":"\u5bf9\u8c61\u5b58\u50a8\u8eab\u4efd\u9a8c\u8bc1"},{"location":"security/security-guide/#tempauth","text":"TempAuth \u662f\u5bf9\u8c61\u5b58\u50a8\u7684\u9ed8\u8ba4\u8eab\u4efd\u9a8c\u8bc1\u3002\u4e0e Identity \u76f8\u6bd4\uff0c\u5b83\u5c06\u7528\u6237\u5e10\u6237\u3001\u51ed\u636e\u548c\u5143\u6570\u636e\u5b58\u50a8\u5728\u5bf9\u8c61\u5b58\u50a8\u672c\u8eab\u4e2d\u3002\u6709\u5173\u66f4\u591a\u4fe1\u606f\uff0c\u8bf7\u53c2\u9605\u5bf9\u8c61\u5b58\u50a8 \uff08swift\uff09 \u6587\u6863\u7684\u8eab\u4efd\u9a8c\u8bc1\u7cfb\u7edf\u90e8\u5206\u3002","title":"TempAuth \u51fd\u6570"},{"location":"security/security-guide/#keystone","text":"Keystone 
\u662f OpenStack \u4e2d\u5e38\u7528\u7684\u8eab\u4efd\u63d0\u4f9b\u7a0b\u5e8f\u3002\u5b83\u8fd8\u53ef\u7528\u4e8e\u5bf9\u8c61\u5b58\u50a8\u4e2d\u7684\u8eab\u4efd\u9a8c\u8bc1\u3002Identity \u4e2d\u5df2\u63d0\u4f9b\u4fdd\u62a4 keystone \u7684\u8986\u76d6\u8303\u56f4\u3002","title":"Keystone"},{"location":"security/security-guide/#_211","text":"\u5728 \u4e2d /etc/swift \uff0c\u5728\u6bcf\u4e2a\u8282\u70b9\u4e0a\uff0c\u90fd\u6709\u4e00\u4e2a\u8bbe\u7f6e\u548c\u4e00\u4e2a swift_hash_path_prefix swift_hash_path_suffix \u8bbe\u7f6e\u3002\u63d0\u4f9b\u8fd9\u4e9b\u662f\u4e3a\u4e86\u51cf\u5c11\u5b58\u50a8\u5bf9\u8c61\u53d1\u751f\u54c8\u5e0c\u51b2\u7a81\u7684\u53ef\u80fd\u6027\uff0c\u5e76\u907f\u514d\u4e00\u4e2a\u7528\u6237\u8986\u76d6\u53e6\u4e00\u4e2a\u7528\u6237\u7684\u6570\u636e\u3002 \u6b64\u503c\u6700\u521d\u5e94\u4f7f\u7528\u52a0\u5bc6\u5b89\u5168\u7684\u968f\u673a\u6570\u751f\u6210\u5668\u8fdb\u884c\u8bbe\u7f6e\uff0c\u5e76\u5728\u6240\u6709\u8282\u70b9\u4e0a\u4fdd\u6301\u4e00\u81f4\u3002\u786e\u4fdd\u5b83\u53d7\u5230\u9002\u5f53\u7684 ACL \u4fdd\u62a4\uff0c\u5e76\u4e14\u60a8\u6709\u5907\u4efd\u526f\u672c\u4ee5\u907f\u514d\u6570\u636e\u4e22\u5931\u3002","title":"\u5176\u4ed6\u503c\u5f97\u6ce8\u610f\u7684\u4e8b\u9879"},{"location":"security/security-guide/#_212","text":"\u64cd\u4f5c\u5458\u901a\u8fc7\u4f7f\u7528\u5404\u79cd\u52a0\u5bc6\u5e94\u7528\u7a0b\u5e8f\u6765\u4fdd\u62a4\u4e91\u90e8\u7f72\u4e2d\u7684\u654f\u611f\u4fe1\u606f\u3002\u4f8b\u5982\uff0c\u5bf9\u9759\u6001\u6570\u636e\u8fdb\u884c\u52a0\u5bc6\u6216\u5bf9\u6620\u50cf\u8fdb\u884c\u7b7e\u540d\u4ee5\u8bc1\u660e\u5176\u672a\u88ab\u7be1\u6539\u3002\u5728\u6240\u6709\u60c5\u51b5\u4e0b\uff0c\u8fd9\u4e9b\u52a0\u5bc6\u529f\u80fd\u90fd\u9700\u8981\u67d0\u79cd\u5bc6\u94a5\u6750\u6599\u624d\u80fd\u8fd0\u884c\u3002 \u673a\u5bc6\u7ba1\u7406\u63cf\u8ff0\u4e86\u4e00\u7ec4\u65e8\u5728\u4fdd\u62a4\u8f6f\u4ef6\u7cfb\u7edf\u4e2d\u7684\u5173\u952e\u6750\u6599\u7684\u6280\u672f\u3002\u4f20\u7edf\u4e0a\uff0c\u5bc6\u94a5\u7ba1\u7406\u6d89\u53ca\u786c\u4ef6\u5b89\u5168\u6a21\u5757 \uff08HSM\uff09 \u7684\u90e8\u7f72\u3002\u8fd9\u4e9b\u8bbe\u5907\u5df2\u7ecf\u8fc7\u7269\u7406\u5f3a\u5316\uff0c\u53ef\u9632\u6b62\u7be1\u6539\u3002 \u968f\u7740\u6280\u672f\u7684\u8fdb\u6b65\uff0c\u9700\u8981\u4fdd\u62a4\u7684\u79d8\u5bc6\u7269\u54c1\u7684\u6570\u91cf\u5df2\u7ecf\u4ece\u5bc6\u94a5\u6750\u6599\u589e\u52a0\u5230\u5305\u62ec\u8bc1\u4e66\u5bf9\u3001API \u5bc6\u94a5\u3001\u7cfb\u7edf\u5bc6\u7801\u3001\u7b7e\u540d\u5bc6\u94a5\u7b49\u3002\u8fd9\u79cd\u589e\u957f\u4ea7\u751f\u4e86\u5bf9\u66f4\u5177\u53ef\u6269\u5c55\u6027\u7684\u5bc6\u94a5\u7ba1\u7406\u65b9\u6cd5\u7684\u9700\u6c42\uff0c\u5e76\u5bfc\u81f4\u521b\u5efa\u4e86\u8bb8\u591a\u63d0\u4f9b\u53ef\u6269\u5c55\u52a8\u6001\u5bc6\u94a5\u7ba1\u7406\u7684\u8f6f\u4ef6\u670d\u52a1\u3002\u672c\u7ae0\u4ecb\u7ecd\u4e86\u76ee\u524d\u5b58\u5728\u7684\u670d\u52a1\uff0c\u5e76\u91cd\u70b9\u4ecb\u7ecd\u4e86\u90a3\u4e9b\u80fd\u591f\u96c6\u6210\u5230OpenStack\u4e91\u4e2d\u7684\u670d\u52a1\u3002 \u73b0\u6709\u6280\u672f\u6458\u8981 \u76f8\u5173 Openstack \u9879\u76ee \u4f7f\u7528\u6848\u4f8b \u955c\u50cf\u7b7e\u540d\u9a8c\u8bc1 \u5377\u52a0\u5bc6 \u4e34\u65f6\u78c1\u76d8\u52a0\u5bc6 Sahara Magnum Octavia/LBaaS Swift \u914d\u7f6e\u6587\u4ef6\u4e2d\u7684\u5bc6\u7801 Barbican \u6982\u8ff0 \u52a0\u5bc6\u63d2\u4ef6 \u7b80\u5355\u7684\u52a0\u5bc6\u63d2\u4ef6 PKCS#11\u52a0\u5bc6\u63d2\u4ef6 \u5bc6\u94a5\u5546\u5e97\u63d2\u4ef6 KMIP\u63d2\u4ef6 Dogtag \u63d2\u4ef6 Vault \u63d2\u4ef6 Castellan \u6982\u8ff0 \u5e38\u89c1\u95ee\u9898\u89e3\u7b54 
## Secret Management

Operators protect sensitive information in a cloud deployment with a variety of cryptographic applications, for example encrypting data at rest or signing images to prove they have not been tampered with. In every case these features require some kind of key material in order to operate. Secret management describes a set of techniques intended to protect such critical material in software systems. Traditionally, key management has involved deploying hardware security modules (HSMs), devices that are physically hardened against tampering. As technology has evolved, the number of secrets that need protection has grown from key material alone to include certificate pairs, API keys, system passwords, signing keys and more. This growth has created a need for more scalable key management and has led to a number of software services providing scalable, dynamic key management. This chapter describes the services that exist today, focusing on those that can be integrated into an OpenStack cloud.

Contents:

- Summary of existing technologies
- Related OpenStack projects
- Use cases
  - Image signature verification
  - Volume encryption
  - Ephemeral disk encryption
  - Sahara
  - Magnum
  - Octavia/LBaaS
  - Swift
  - Passwords in configuration files
- Barbican
  - Overview
  - Crypto plugins
    - Simple crypto plugin
    - PKCS#11 crypto plugin
  - Secret store plugins
    - KMIP plugin
    - Dogtag plugin
    - Vault plugin
- Castellan
  - Overview
- Frequently asked questions
- Checklist
  - Check-Key-Manager-01: Is the ownership of the configuration files set to root/barbican?
  - Check-Key-Manager-02: Are strict permissions set for the configuration files?
  - Check-Key-Manager-03: Is OpenStack Identity used for authentication?
  - Check-Key-Manager-04: Is TLS enabled for authentication?

### Summary of existing technologies

There are two recommended solutions for secret management in OpenStack: Barbican and Castellan. This chapter gives an overview of both to help operators choose which key manager to use. A third, unsupported approach is fixed or hard-coded keys: some OpenStack services allow a key to be specified in their configuration files. This is the least secure way to operate and is not recommended for any kind of production environment. Other solutions include KeyWhiz, Confidant, Conjur, EJSON, Knox and Red October, but they are outside the scope of this document; it is not possible to cover every available key manager here. For the storage of secrets, a hardware security module (HSM) is strongly recommended. HSMs come in several forms; the traditional device is a rack-mounted appliance.

### Related OpenStack projects

Castellan is a library that provides a simple, generic interface for storing, generating and retrieving secrets, and most OpenStack services use it for secret management. As a library, Castellan does not itself provide secret storage; a back-end implementation must be deployed. Note also that Castellan does not provide any authentication of its own: it simply passes the caller's authentication credentials (for example a keystone token) through to the back end.

Barbican is an OpenStack service that provides a back end for Castellan. Barbican requires and validates keystone authentication tokens to identify the user and project accessing or storing a secret, and then applies policy to decide whether access is allowed. It also offers a number of additional features that improve key management, including quotas, per-secret ACLs, tracking of secret consumers, and grouping of secrets in containers. Octavia, for example, integrates directly with Barbican rather than Castellan in order to take advantage of some of these features. Barbican has many back-end plugins that can store secrets securely in a local database or in an HSM.

Currently Barbican is the only available back end for Castellan, although several others are under development, including KMIP, Dogtag, Hashicorp Vault and Custodia. For deployers who do not wish to run Barbican and whose key management needs are relatively simple, one of these back ends may be a viable alternative. What is missing, however, is multi-tenancy and the enforcement of tenant policy when retrieving secrets, as well as the additional features mentioned above.

### Use cases

#### Image signature verification

Verifying image signatures ensures that an image has not been replaced or modified since it was originally uploaded. The image signature verification feature uses Castellan as its key manager to store the cryptographic signatures. The image signature and the certificate UUID are uploaded to the Image (glance) service along with the image, and glance verifies the signature after retrieving the certificate from the key manager. When the image is booted, the Compute (nova) service verifies the signature again after retrieving the certificate from the key manager. For more details, see the trusted images documentation.
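A minimal sketch of this workflow with the OpenStack client, assuming the signing certificate has already been stored in the key manager (the certificate UUID, key file and image names are placeholders):

```shell
# Sign the image with an RSA-PSS key and base64-encode the signature
openssl dgst -sha256 -sign signing-key.pem -sigopt rsa_padding_mode:pss \
    -out image.signature image.qcow2
base64 -w 0 image.signature > image.signature.b64

# Upload the image together with the signature metadata; glance (and nova at
# boot time) fetches the certificate from the key manager and verifies it.
openstack image create \
  --property img_signature="$(cat image.signature.b64)" \
  --property img_signature_hash_method='SHA-256' \
  --property img_signature_key_type='RSA-PSS' \
  --property img_signature_certificate_uuid='<certificate-uuid-in-key-manager>' \
  --file image.qcow2 signed-image
```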
#### Volume encryption

The volume encryption feature uses Castellan to provide encryption of data at rest. When a user creates an encrypted volume type and then creates a volume of that type, the Block Storage (cinder) service asks the key manager to create a key to be associated with the volume. When the volume is attached to an instance, nova retrieves the key. For more details, see the data encryption and volume encryption documentation.

#### Ephemeral disk encryption

The ephemeral disk encryption feature addresses data privacy. An ephemeral disk is the temporary working space used by a virtual machine's operating system. Without encryption, sensitive user information could be read from this disk, and residual data may remain after the disk is detached. Ephemeral disk encryption interacts with the key management service through a secure wrapper and supports data isolation by providing per-tenant encryption keys for ephemeral disks. A back-end key store is recommended for greater security (for example, an HSM or a KMIP server can be used as the barbican back-end key store). For more details, see the ephemeral disk encryption documentation.
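A minimal sketch of how both features are typically wired up, assuming Barbican is the Castellan back end; the cipher, key size and provider values are illustrative and should be checked against the release being deployed:

```ini
# cinder.conf and nova.conf: use Barbican as the Castellan key manager
[key_manager]
backend = barbican

# nova.conf only: encrypt ephemeral disks with per-tenant keys
[ephemeral_storage_encryption]
enabled = True
cipher = aes-xts-plain64
key_size = 256
```

An encrypted volume type can then be created, for example:

```shell
openstack volume type create LUKS \
  --encryption-provider luks \
  --encryption-cipher aes-xts-plain64 \
  --encryption-key-size 256 \
  --encryption-control-location front-end
```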
#### Sahara

Sahara generates and stores several passwords during its operation. To harden sahara's handling of these passwords, it can be instructed to use an external key manager for their storage and retrieval. To enable this feature, an OpenStack Key Manager service must first be deployed in the stack, after which sahara is configured to enable external storage of secrets. Sahara uses the Castellan library to interact with the OpenStack Key Manager service; the library provides configurable access to the key manager. For details, see the Sahara advanced configuration guide.

#### Magnum

Magnum uses TLS certificates to provide access to Docker Swarm or Kubernetes through the native clients (docker and kubectl respectively). To store these certificates, either Barbican or the Magnum database (x509keypair) is recommended. A local directory (local) can also be used, but it is considered insecure and unsuitable for production. For more details on configuring a certificate manager for Magnum, see the Container Infrastructure Management service documentation.

#### Octavia/LBaaS

The LBaaS (Load Balancer as a Service) features of the Neutron and Octavia projects require a certificate and its private key in order to provide load balancing for TLS connections. Barbican can be used to store this sensitive information. For details, see the documentation on creating a TLS load balancer and on deploying an HTTPS load balancer with TLS termination.

#### Swift

Symmetric keys can be used to encrypt Swift containers, reducing the risk of user data being read if an unauthorized party gains physical access to the disks. For more details, see the object encryption section of the official swift documentation.
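A minimal sketch of enabling at-rest object encryption in the Swift proxy server, assuming the root secret is kept in the configuration file (a Barbican-backed keymaster also exists upstream); the secret shown is a placeholder, generated for example with `openssl rand -base64 32`:

```ini
# /etc/swift/proxy-server.conf (sketch)
[pipeline:main]
pipeline = ... keymaster encryption proxy-logging proxy-server

[filter:keymaster]
use = egg:swift#keymaster
# must be a base64-encoded value of at least 32 random bytes
encryption_root_secret = CHANGE_ME_32_BYTES_BASE64

[filter:encryption]
use = egg:swift#encryption
```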
#### Passwords in configuration files

The configuration files of OpenStack services contain many plain-text passwords, for example the passwords that service users present to keystone when validating keystone tokens. There is currently no solution for obfuscating these passwords; protect the files with appropriate file permissions. Work is in progress to store such secrets in a Castellan back end and to have oslo.config retrieve them through Castellan.

### Barbican

#### Overview

Barbican is a REST API designed for the secure storage, provisioning and management of secrets such as passwords, encryption keys and X.509 certificates. It aims to be useful in all environments, including large ephemeral clouds. Barbican integrates with several OpenStack features, either directly or as a back end for Castellan. It is commonly used as the key management system that enables use cases such as image signature verification and volume encryption, which are outlined in the use cases above.

#### Barbican role-based access control

To be written.

#### Secret store back ends

The Key Manager service has a plugin architecture that allows deployers to store secrets in one or more secret stores. Secret stores can be software-based, such as a software token, or hardware devices such as a hardware security module (HSM). This section describes the plugins that are currently available and discusses the security posture of each. Plugins are enabled and configured through settings in the configuration file, `/etc/barbican/barbican.conf`. There are two types of plugin: crypto plugins and secret store plugins.

#### Crypto plugins

Crypto plugins store secrets as encrypted blobs in the Barbican database. The plugin is invoked to encrypt the secret on storage and to decrypt it on retrieval. Two crypto plugins are currently available: the Simple Crypto plugin and the PKCS#11 crypto plugin.

#### Simple crypto plugin

The simple crypto plugin is configured in `barbican.conf` by default. It uses a single symmetric key (the KEK, or key encryption key), stored in plain text in the `barbican.conf` file, to encrypt and decrypt all secrets. Because the master key sits in plain text in the configuration file, this plugin is considered a less secure option suitable only for development and testing, and it is not recommended for production deployments.

#### PKCS#11 crypto plugin

The PKCS#11 crypto plugin communicates with a hardware security module (HSM) using the PKCS#11 protocol. Secrets are encrypted (and decrypted on retrieval) with a project-specific key encryption key (KEK). Each KEK is itself protected (encrypted) by a master KEK (MKEK), which resides in the HSM together with an HMAC key. Because every project uses a different KEK, and because the KEKs are stored in the database in encrypted form rather than as plain text in a configuration file, the PKCS#11 plugin is far more secure than the simple crypto plugin. It is the most popular back end for Barbican deployments.
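A minimal sketch of the relevant `barbican.conf` settings, assuming the plugin entry-point and option names from the upstream Barbican configuration guide (`store_crypto`, `simple_crypto`, `p11_crypto`); verify them against the release you deploy, and treat the simple-crypto KEK as a development-only placeholder:

```ini
# /etc/barbican/barbican.conf (sketch)
[secretstore]
enabled_secretstore_plugins = store_crypto

# Development and testing only: a single plain-text KEK in this file
[crypto]
enabled_crypto_plugins = simple_crypto

[simple_crypto_plugin]
# must be a base64-encoded 32-byte value, e.g. from: openssl rand -base64 32
kek = 'CHANGE_ME_32_BYTE_KEY_BASE64'

# Production alternative: PKCS#11 plugin backed by an HSM
# [crypto]
# enabled_crypto_plugins = p11_crypto
#
# [p11_crypto_plugin]
# library_path = '/usr/lib/libCryptoki2_64.so'
# login = 'HSM-partition-password'
# mkek_label = 'barbican_mkek'
# mkek_length = 32
# hmac_label = 'barbican_hmac'
```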
\u63d2\u4ef6\u3002","title":"\u673a\u5bc6\u5b58\u50a8\u63d2\u4ef6"},{"location":"security/security-guide/#kmip","text":"\u5bc6\u94a5\u7ba1\u7406\u4e92\u64cd\u4f5c\u6027\u534f\u8bae \uff08KMIP\uff09 \u5bc6\u94a5\u5b58\u50a8\u63d2\u4ef6\u7528\u4e8e\u4e0e\u542f\u7528\u4e86 KMIP \u7684\u8bbe\u5907\uff08\u5982\u786c\u4ef6\u5b89\u5168\u6a21\u5757 \uff08HSM\uff09\uff09\u8fdb\u884c\u901a\u4fe1\u3002\u5bc6\u94a5\u76f4\u63a5\u5b89\u5168\u5730\u5b58\u50a8\u5728\u542f\u7528\u4e86 KMIP \u7684\u8bbe\u5907\u4e2d\uff0c\u800c\u4e0d\u662f\u5b58\u50a8\u5728 Barbican \u6570\u636e\u5e93\u4e2d\u3002Barbican \u6570\u636e\u5e93\u7ef4\u62a4\u5bf9\u5bc6\u94a5\u4f4d\u7f6e\u7684\u5f15\u7528\uff0c\u4ee5\u4f9b\u4ee5\u540e\u68c0\u7d22\u3002\u8be5\u63d2\u4ef6\u53ef\u4ee5\u914d\u7f6e\u4e3a\u4f7f\u7528\u7528\u6237\u540d\u548c\u5bc6\u7801\u6216\u4f7f\u7528\u5ba2\u6237\u7aef\u8bc1\u4e66\u5411\u542f\u7528\u4e86 KMIP \u7684\u8bbe\u5907\u8fdb\u884c\u8eab\u4efd\u9a8c\u8bc1\u3002\u6b64\u4fe1\u606f\u5b58\u50a8\u5728 Barbican \u914d\u7f6e\u6587\u4ef6\u4e2d\u3002","title":"KMIP \u63d2\u4ef6"},{"location":"security/security-guide/#dogtag","text":"Dogtag \u79d8\u5bc6\u5b58\u50a8\u63d2\u4ef6\u7528\u4e8e\u4e0e Dogtag \u901a\u4fe1\u3002Dogtag \u662f\u5bf9\u5e94\u4e8e Red Hat \u8bc1\u4e66\u7cfb\u7edf\u7684\u4e0a\u6e38\u9879\u76ee\uff0cRed Hat Certificate System \u662f\u4e00\u4e2a\u901a\u7528\u6807\u51c6/FIPS \u8ba4\u8bc1\u7684 PKI \u89e3\u51b3\u65b9\u6848\uff0c\u5305\u542b\u8bc1\u4e66\u7ba1\u7406\u5668 \uff08CA\uff09 \u548c\u5bc6\u94a5\u6062\u590d\u673a\u6784 \uff08KRA\uff09\uff0c\u7528\u4e8e\u5b89\u5168\u5b58\u50a8\u673a\u5bc6\u3002KRA \u5c06\u673a\u5bc6\u4f5c\u4e3a\u52a0\u5bc6\u7684 blob \u5b58\u50a8\u5728\u5176\u5185\u90e8\u6570\u636e\u5e93\u4e2d\uff0c\u4e3b\u52a0\u5bc6\u5bc6\u94a5\u5b58\u50a8\u5728\u57fa\u4e8e\u8f6f\u4ef6\u7684 NSS \u5b89\u5168\u6570\u636e\u5e93\u4e2d\uff0c\u6216\u5b58\u50a8\u5728\u786c\u4ef6\u5b89\u5168\u6a21\u5757 \uff08HSM\uff09 \u4e2d\u3002\u57fa\u4e8e\u8f6f\u4ef6\u7684 NSS \u6570\u636e\u5e93\u914d\u7f6e\u4e3a\u4e0d\u5e0c\u671b\u4f7f\u7528 HSM \u7684\u90e8\u7f72\u63d0\u4f9b\u4e86\u5b89\u5168\u9009\u9879\u3002KRA \u662f FreeIPA \u7684\u4e00\u4e2a\u7ec4\u4ef6\uff0c\u56e0\u6b64\u53ef\u4ee5\u4f7f\u7528 FreeIPA \u670d\u52a1\u5668\u914d\u7f6e\u63d2\u4ef6\u3002\u4ee5\u4e0b\u535a\u5ba2\u6587\u7ae0\u4e2d\u63d0\u4f9b\u4e86\u6709\u5173\u5982\u4f55\u4f7f\u7528 FreeIPA \u8bbe\u7f6e Barbican \u7684\u66f4\u8be6\u7ec6\u8bf4\u660e\u3002","title":"Dogtag \u63d2\u4ef6"},{"location":"security/security-guide/#vault","text":"Vault \u662f Hashicorp \u5f00\u53d1\u7684\u79d8\u5bc6\u5b58\u50a8\uff0c\u7528\u4e8e\u5b89\u5168\u8bbf\u95ee\u673a\u5bc6\u548c\u5176\u4ed6\u5bf9\u8c61\uff0c\u4f8b\u5982 API \u5bc6\u94a5\u3001\u5bc6\u7801\u6216\u8bc1\u4e66\u3002\u4fdd\u9669\u67dc\u4e3a\u4efb\u4f55\u673a\u5bc6\u63d0\u4f9b\u7edf\u4e00\u7684\u754c\u9762\uff0c\u540c\u65f6\u63d0\u4f9b\u4e25\u683c\u7684\u8bbf\u95ee\u63a7\u5236\u5e76\u8bb0\u5f55\u8be6\u7ec6\u7684\u5ba1\u6838\u65e5\u5fd7\u3002Vault \u4f01\u4e1a\u7248\u8fd8\u5141\u8bb8\u4e0e HSM \u96c6\u6210\u4ee5\u8fdb\u884c\u81ea\u52a8\u89e3\u5c01\u3001\u63d0\u4f9b FIPS \u5bc6\u94a5\u5b58\u50a8\u548c\u71b5\u589e\u5f3a\u3002\u4f46\u662f\uff0cVault \u63d2\u4ef6\u7684\u7f3a\u70b9\u662f\u5b83\u4e0d\u652f\u6301\u591a\u79df\u6237\uff0c\u56e0\u6b64\u6240\u6709\u5bc6\u94a5\u90fd\u5c06\u5b58\u50a8\u5728\u540c\u4e00\u4e2a\u952e/\u503c\u5bc6\u94a5\u5f15\u64ce\u4e0b\u3002\u6302\u8f7d\u70b9\u3002","title":"Vault \u63d2\u4ef6"},{"location":"security/security-guide/#_224","text":"Barbican \u56e2\u961f\u4e0e OpenStack 
#### Threat analysis

The Barbican team worked with the OpenStack Security Project to carry out a security review of a best-practice Barbican deployment. The purpose of a security review is to identify weaknesses and defects in the design and architecture of a service and to propose controls or fixes for them. The Barbican threat analysis identified eight security findings and two recommendations for improving the security of a Barbican deployment. The results can be viewed in the security analysis repository, together with the Barbican architecture diagram and the architecture description page.

### Castellan

#### Overview

Castellan is a generic key manager interface developed by the Barbican team. It enables projects to use a configurable key manager that can be specific to the deployment.

### Frequently asked questions

1. What is the recommended way to store secrets securely in OpenStack?

   The recommended way to store and manage secrets securely in OpenStack is to use Barbican.

2. Why should I use Barbican?

   Barbican is an OpenStack service: it is multi-tenant and uses keystone tokens for authentication, which means that access to secrets is controlled by OpenStack policy through tenants and RBAC roles. Barbican has several pluggable back ends that can communicate with software- and hardware-based security modules using PKCS#11 or KMIP.

3. What if I do not want to use Barbican?

   In the OpenStack context there are two kinds of secrets to manage: secrets that require a keystone token for access, and secrets that do not. Examples of secrets that require keystone authentication are passwords and keys owned by a specific project, such as the encryption keys for a project's encrypted cinder volumes or the signing keys for a project's glance images. Examples of secrets that do not require a keystone token are the service users' passwords in service configuration files, or encryption keys that do not belong to any particular project. Secrets that require a keystone token should be stored using Barbican. Secrets that do not require keystone authentication can be stored in any key store that implements the simple key store API exposed through Castellan; this includes Barbican.

4. How do I use Vault, Keywhiz, Custodia, and so on?

   A key manager of your choice can be used if a Castellan plugin has been written for it. Once such a plugin exists, using it directly or behind Barbican is relatively trivial. At the time of writing, Vault and Custodia plugins were being developed during the Queens cycle.

### Checklist

#### Check-Key-Manager-01: Is the ownership of the configuration files set to root/barbican?

Configuration files contain critical parameters and information required for the smooth functioning of a component. If an unprivileged user, intentionally or accidentally, modifies or deletes any parameter or the file itself, it causes severe availability problems and denies service to other end users. User ownership of such critical configuration files must therefore be set to root, and group ownership to barbican. In addition, the containing directory should have the same ownership so that new files are owned correctly.

Run the following commands:

```shell
$ stat -L -c "%U %G" /etc/barbican/barbican.conf | egrep "root barbican"
$ stat -L -c "%U %G" /etc/barbican/barbican-api-paste.ini | egrep "root barbican"
$ stat -L -c "%U %G" /etc/barbican/policy.json | egrep "root barbican"
$ stat -L -c "%U %G" /etc/barbican | egrep "root barbican"
```

Pass: if user and group ownership of all these configuration files is set to root and barbican respectively; the commands above then print `root barbican`.

Fail: if the commands return no output, the user or group ownership may have been set to a user other than root or a group other than barbican.

#### Check-Key-Manager-02: Are strict permissions set for the configuration files?

As with the previous check, we recommend setting strict access permissions on these configuration files.

Run the following commands:

```shell
$ stat -L -c "%a" /etc/barbican/barbican.conf
$ stat -L -c "%a" /etc/barbican/barbican-api-paste.ini
$ stat -L -c "%a" /etc/barbican/policy.json
$ stat -L -c "%a" /etc/barbican
```

A broader restriction is also possible: if the containing directory is set to 750, newly created files in that directory are guaranteed to have the desired permissions.

Pass: if the permissions are set to 640 or stricter, or the containing directory is set to 750. Permissions of 640 translate to owner r/w, group r, and no permissions for others, that is "u=rw,g=r,o=".

Note: with the ownership from Check-Key-Manager-01 and permissions of 640, root has read/write access and barbican has read access to these configuration files. Access can also be verified with the following command, which is only available on systems that support ACLs:

```shell
$ getfacl --tabular -a /etc/barbican/barbican.conf
getfacl: Removing leading '/' from absolute path names
# file: etc/barbican/barbican.conf
USER   root      rw-
GROUP  barbican  r--
mask             r--
other            ---
```

Fail: if the permissions are set to anything more permissive than 640.

#### Check-Key-Manager-03: Is OpenStack Identity used for authentication?

OpenStack supports several authentication strategies, such as noauth and keystone. With the noauth strategy, users can interact with OpenStack services without any authentication. This is a potential risk, because an attacker could gain unauthorized access to OpenStack components. We strongly recommend that all services authenticate through keystone using their service accounts.

Pass: if the `authtoken` parameter is listed under the `[pipeline:barbican-api-keystone]` section of `barbican-api-paste.ini`.

Fail: if the `authtoken` parameter is missing from the `[pipeline:barbican-api-keystone]` section of `barbican-api-paste.ini`.

#### Check-Key-Manager-04: Is TLS enabled for authentication?

OpenStack components communicate with each other using various protocols, and the communication may carry sensitive or confidential data. An attacker may try to eavesdrop on the channel to gain access to that information, so all components must communicate with each other over secured protocols.

Pass: if the `www_authenticate_uri` parameter under the `[keystone_authtoken]` section of `/etc/barbican/barbican.conf` is set to an Identity API endpoint starting with `https://`, and the `insecure` parameter in the same `[keystone_authtoken]` section of `/etc/barbican/barbican.conf` is set to `False`.

Fail: if the `www_authenticate_uri` parameter under the `[keystone_authtoken]` section of `/etc/barbican/barbican.conf` is not set to an Identity API endpoint starting with `https://`, or the `insecure` parameter in the same `[keystone_authtoken]` section of `/etc/barbican/barbican.conf` is set to `True`.
## Message Queuing

Message queuing services facilitate inter-process communication in OpenStack. OpenStack supports the following message queuing back ends:

- RabbitMQ
- Qpid
- ZeroMQ (0MQ)

RabbitMQ and Qpid are Advanced Message Queuing Protocol (AMQP) frameworks that provide message queues for peer-to-peer communication. Queue implementations are typically deployed as a centralized or decentralized pool of queue servers. ZeroMQ provides direct peer-to-peer communication over TCP sockets.

Message queues effectively facilitate command and control functions across an OpenStack deployment. Once access to the queue is permitted, no further authorization checks are performed. Services accessible through the queue do validate the context and tokens within the actual message payload, but you must pay attention to the token's expiration date, because tokens may be replayable and can authorize other services in the infrastructure. OpenStack does not support message-level security such as message signing, so you must secure and authenticate the message transport itself. For high-availability (HA) configurations you must also perform queue-to-queue authentication and encryption. With ZeroMQ messaging, IPC sockets are used on individual machines; because these sockets are vulnerable to attack, ensure that the cloud operator has secured them.

This chapter covers:

- Messaging security
- Messaging transport security
- Queue authentication and access control
- Message queue process isolation and policy

### Messaging security

This section discusses security hardening approaches for the three most common message queuing solutions used in OpenStack: RabbitMQ, Qpid and ZeroMQ.

#### Messaging transport security

The AMQP-based solutions (Qpid and RabbitMQ) support transport-level security using TLS. ZeroMQ messaging does not natively support TLS, but transport-level security can be achieved with labelled IPsec or CIPSO network labels.

We strongly recommend enabling transport-level encryption for your message queue. Using TLS for the messaging client connections protects the communication from tampering and eavesdropping in transit to the messaging server. Guidance follows on configuring TLS for the two commonly used messaging servers, Qpid and RabbitMQ. When configuring the trusted certificate authority (CA) bundle that the messaging server uses to verify client connections, it is recommended to limit it to only the CA used for your nodes, preferably an internally managed CA. The trusted CA bundle determines which client certificates are authorized and is applied during the client-server verification step of setting up the TLS connection. Note that when installing the certificate and key files, the file permissions should be restricted, for example with `chmod 0600`, and ownership restricted to the messaging server daemon user, to prevent unauthorized access by other processes and users on the messaging server.

#### RabbitMQ server SSL configuration

The following lines should be added to the system-wide RabbitMQ configuration file, typically `/etc/rabbitmq/rabbitmq.config`:

```erlang
[
  {rabbit, [
     {tcp_listeners, [] },
     {ssl_listeners, [{"", 5671}] },
     {ssl_options, [{cacertfile,"/etc/ssl/cacert.pem"},
                    {certfile,"/etc/ssl/rabbit-server-cert.pem"},
                    {keyfile,"/etc/ssl/rabbit-server-key.pem"},
                    {verify,verify_peer},
                    {fail_if_no_peer_cert,true}]}
   ]}
].
```

Note that the `tcp_listeners` option is set to `[]` to prevent RabbitMQ from listening on the non-SSL port. The `ssl_listeners` option should be restricted so that the service listens only on the management network.

For more information on RabbitMQ SSL configuration, see:

- RabbitMQ configuration
- RabbitMQ SSL

#### Qpid server SSL configuration

The Apache Foundation provides a messaging security guide for Qpid. See: Apache Qpid SSL.

#### Queue authentication and access control

RabbitMQ and Qpid offer authentication and access control mechanisms for controlling access to queues; ZeroMQ offers no such mechanisms.

Simple Authentication and Security Layer (SASL) is a framework for authentication and data security in Internet protocols. Both RabbitMQ and Qpid offer SASL and other pluggable authentication mechanisms beyond simple user names and passwords, allowing for increased authentication security. While RabbitMQ supports SASL, the support in OpenStack does not currently allow requesting a specific SASL authentication mechanism. RabbitMQ support in OpenStack allows user name and password authentication either over an unencrypted connection or combined with X.509 client certificates to establish a secure TLS connection.

We recommend configuring X.509 client certificates on all OpenStack service nodes for client connections to the messaging queues and, where possible (currently only with Qpid), performing authentication with X.509 client certificates. When using user names and passwords, accounts should be created per service and per node to allow finer-grained auditability of queue access.

Before deployment, consider the TLS libraries used by the queuing servers: Qpid uses Mozilla's NSS library, whereas RabbitMQ uses Erlang's TLS module, which uses OpenSSL.

#### Authentication configuration example: RabbitMQ

On the RabbitMQ server, delete the default `guest` user:

```shell
# rabbitmqctl delete_user guest
```

On the RabbitMQ server, for each OpenStack service or node that communicates with the message queue, set up user accounts and privileges:

```shell
# rabbitmqctl add_user compute01 RABBIT_PASS
# rabbitmqctl set_permissions compute01 ".*" ".*" ".*"
```

Replace `RABBIT_PASS` with a suitable password.

For additional configuration information, see:

- RabbitMQ access control
- RabbitMQ authentication
- RabbitMQ plugins
- RabbitMQ SASL external authentication

#### OpenStack service configuration: RabbitMQ

```ini
[DEFAULT]
rpc_backend = nova.openstack.common.rpc.impl_kombu
rabbit_use_ssl = True
rabbit_host = RABBIT_HOST
rabbit_port = 5671
rabbit_user = compute01
rabbit_password = RABBIT_PASS
kombu_ssl_keyfile = /etc/ssl/node-key.pem
kombu_ssl_certfile = /etc/ssl/node-cert.pem
kombu_ssl_ca_certs = /etc/ssl/cacert.pem
```

#### Authentication configuration example: Qpid

For configuration information, see:

- Apache Qpid authentication
- Apache Qpid authorization

#### OpenStack service configuration: Qpid

```ini
[DEFAULT]
rpc_backend = nova.openstack.common.rpc.impl_qpid
qpid_protocol = ssl
qpid_hostname =
qpid_port = 5671
qpid_username = compute01
qpid_password = QPID_PASS
```

Optionally, if SASL is used with Qpid, specify the SASL mechanisms in use by adding:

```ini
qpid_sasl_mechanisms =
```

### Message queue process isolation and policy

Each project provides a number of services that send and consume messages, and each binary that sends messages is also expected to consume messages from the queue, if only replies.

Message queue service processes should be isolated from each other and from other processes on the machine.

#### Namespaces

Network namespaces are strongly recommended for all services running on OpenStack compute hypervisors. This helps prevent bridging of network traffic between VM guests and the management networks.

When using ZeroMQ messaging, each host must run at least one ZeroMQ message receiver to receive messages from the network and forward them to local processes via IPC. It is possible, and advisable, to run a separate message receiver per project within an IPC namespace, along with the other services of the same project.

#### Network policy

Queue servers should only accept connections from the management network. This applies to all implementations and should be achieved through service configuration and, optionally, enforced through global network policy.

When using ZeroMQ messaging, each project should run a separate ZeroMQ receiver process on a port dedicated to the services belonging to that project. This is analogous to the AMQP concept of control exchanges.
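As a complement to binding the listeners to the management interface, host firewall rules can enforce the same policy. A minimal sketch using iptables, where 192.0.2.0/24 stands in for the management network CIDR:

```shell
# Allow AMQP over TLS (5671) only from the management network, drop everything else
iptables -A INPUT -p tcp --dport 5671 -s 192.0.2.0/24 -j ACCEPT
iptables -A INPUT -p tcp --dport 5671 -j DROP
```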
\u53c2\u8003\u4e66\u76ee","title":"\u6570\u636e\u5904\u7406"},{"location":"security/security-guide/#_237","text":"\u6570\u636e\u5904\u7406\u670d\u52a1\u63a7\u5236\u5668\u5c06\u8d1f\u8d23\u521b\u5efa\u3001\u7ef4\u62a4\u548c\u9500\u6bc1\u4e3a\u5176\u96c6\u7fa4\u521b\u5efa\u7684\u4efb\u4f55\u5b9e\u4f8b\u3002\u63a7\u5236\u5668\u5c06\u4f7f\u7528\u7f51\u7edc\u670d\u52a1\u5728\u81ea\u8eab\u548c\u96c6\u7fa4\u5b9e\u4f8b\u4e4b\u95f4\u5efa\u7acb\u7f51\u7edc\u8def\u5f84\u3002\u5b83\u8fd8\u5c06\u7ba1\u7406\u8981\u5728\u96c6\u7fa4\u4e0a\u8fd0\u884c\u7684\u7528\u6237\u5e94\u7528\u7a0b\u5e8f\u7684\u90e8\u7f72\u548c\u751f\u547d\u5468\u671f\u3002\u96c6\u7fa4\u4e2d\u7684\u5b9e\u4f8b\u5305\u542b\u6846\u67b6\u5904\u7406\u5f15\u64ce\u7684\u6838\u5fc3\uff0c\u6570\u636e\u5904\u7406\u670d\u52a1\u63d0\u4f9b\u4e86\u591a\u4e2a\u9009\u9879\u6765\u521b\u5efa\u548c\u7ba1\u7406\u4e0e\u8fd9\u4e9b\u5b9e\u4f8b\u7684\u8fde\u63a5\u3002 \u6570\u636e\u5904\u7406\u8d44\u6e90\uff08\u7fa4\u96c6\u3001\u4f5c\u4e1a\u548c\u6570\u636e\u6e90\uff09\u6309\u8eab\u4efd\u670d\u52a1\u4e2d\u5b9a\u4e49\u7684\u9879\u76ee\u8fdb\u884c\u5206\u9694\u3002\u8fd9\u4e9b\u8d44\u6e90\u5728\u9879\u76ee\u4e2d\u5171\u4eab\uff0c\u4e86\u89e3\u4f7f\u7528\u8be5\u670d\u52a1\u7684\u4eba\u5458\u7684\u8bbf\u95ee\u9700\u6c42\u975e\u5e38\u91cd\u8981\u3002\u901a\u8fc7\u4f7f\u7528\u57fa\u4e8e\u89d2\u8272\u7684\u8bbf\u95ee\u63a7\u5236\uff0c\u53ef\u4ee5\u8fdb\u4e00\u6b65\u9650\u5236\u9879\u76ee\u4e2d\u7684\u6d3b\u52a8\uff08\u4f8b\u5982\u542f\u52a8\u96c6\u7fa4\u3001\u4e0a\u4f20\u4f5c\u4e1a\u7b49\uff09\u3002 \u5728\u672c\u7ae0\u4e2d\uff0c\u6211\u4eec\u5c06\u8ba8\u8bba\u5982\u4f55\u8bc4\u4f30\u6570\u636e\u5904\u7406\u7528\u6237\u5bf9\u5176\u5e94\u7528\u7a0b\u5e8f\u3001\u4ed6\u4eec\u4f7f\u7528\u7684\u6570\u636e\u4ee5\u53ca\u4ed6\u4eec\u5728\u9879\u76ee\u4e2d\u7684\u9884\u671f\u529f\u80fd\u7684\u9700\u6c42\u3002\u6211\u4eec\u8fd8\u5c06\u6f14\u793a\u670d\u52a1\u63a7\u5236\u5668\u53ca\u5176\u96c6\u7fa4\u7684\u4e00\u4e9b\u5f3a\u5316\u6280\u672f\uff0c\u5e76\u63d0\u4f9b\u5404\u79cd\u63a7\u5236\u5668\u914d\u7f6e\u548c\u7528\u6237\u7ba1\u7406\u65b9\u6cd5\u7684\u793a\u4f8b\uff0c\u4ee5\u786e\u4fdd\u8db3\u591f\u7684\u5b89\u5168\u548c\u9690\u79c1\u7ea7\u522b\u3002","title":"\u6570\u636e\u5904\u7406\u7b80\u4ecb"},{"location":"security/security-guide/#_238","text":"\u4e0b\u56fe\u663e\u793a\u4e86\u6570\u636e\u5904\u7406\u670d\u52a1\u5982\u4f55\u9002\u5e94\u66f4\u5927\u7684 OpenStack \u751f\u6001\u7cfb\u7edf\u7684\u6982\u5ff5\u89c6\u56fe\u3002 
\u6570\u636e\u5904\u7406\u670d\u52a1\u5728\u96c6\u7fa4\u914d\u7f6e\u8fc7\u7a0b\u4e2d\u5927\u91cf\u4f7f\u7528\u8ba1\u7b97\u3001\u7f16\u6392\u3001\u955c\u50cf\u548c\u5757\u5b58\u50a8\u670d\u52a1\u3002\u5b83\u8fd8\u5c06\u4f7f\u7528\u5728\u7fa4\u96c6\u521b\u5efa\u671f\u95f4\u63d0\u4f9b\u7684\u7531\u7f51\u7edc\u670d\u52a1\u521b\u5efa\u7684\u4e00\u4e2a\u6216\u591a\u4e2a\u7f51\u7edc\u6765\u7ba1\u7406\u5b9e\u4f8b\u3002\u5f53\u7528\u6237\u8fd0\u884c\u6846\u67b6\u5e94\u7528\u7a0b\u5e8f\u65f6\uff0c\u63a7\u5236\u5668\u548c\u96c6\u7fa4\u5c06\u8bbf\u95ee\u5bf9\u8c61\u5b58\u50a8\u670d\u52a1\u3002\u9274\u4e8e\u8fd9\u4e9b\u670d\u52a1\u7528\u6cd5\uff0c\u6211\u4eec\u5efa\u8bae\u6309\u7167\u7cfb\u7edf\u6587\u6863\u4e2d\u6982\u8ff0\u7684\u8bf4\u660e\u5bf9\u5b89\u88c5\u7684\u6240\u6709\u7ec4\u4ef6\u8fdb\u884c\u7f16\u76ee\u3002","title":"\u67b6\u6784"},{"location":"security/security-guide/#_239","text":"\u6570\u636e\u5904\u7406\u670d\u52a1\u8d1f\u8d23\u90e8\u7f72\u548c\u7ba1\u7406\u591a\u4e2a\u5e94\u7528\u7a0b\u5e8f\u3002\u4e3a\u4e86\u5168\u9762\u4e86\u89e3\u6240\u63d0\u4f9b\u7684\u5b89\u5168\u9009\u9879\uff0c\u6211\u4eec\u5efa\u8bae\u64cd\u4f5c\u5458\u5927\u81f4\u719f\u6089\u8fd9\u4e9b\u5e94\u7528\u7a0b\u5e8f\u3002\u7a81\u51fa\u663e\u793a\u7684\u6280\u672f\u5217\u8868\u5206\u4e3a\u4e24\u90e8\u5206\uff1a\u7b2c\u4e00\u90e8\u5206\uff0c\u5bf9\u5b89\u5168\u6027\u5f71\u54cd\u8f83\u5927\u7684\u9ad8\u4f18\u5148\u7ea7\u5e94\u7528\u7a0b\u5e8f\uff0c\u7b2c\u4e8c\u90e8\u5206\uff0c\u652f\u6301\u5f71\u54cd\u8f83\u5c0f\u7684\u5e94\u7528\u7a0b\u5e8f\u3002 \u66f4\u9ad8\u7684\u5f71\u54cd Hadoop Hadoop\u5b89\u5168\u6a21\u5f0f\u6587\u6863 HDFS Spark Spark \u5b89\u5168 Storm Zookeeper \u8f83\u4f4e\u7684\u5f71\u54cd Oozie Hive Pig \u8fd9\u4e9b\u6280\u672f\u6784\u6210\u4e86\u4e0e\u6570\u636e\u5904\u7406\u670d\u52a1\u4e00\u8d77\u90e8\u7f72\u7684\u6846\u67b6\u7684\u6838\u5fc3\u3002\u9664\u4e86\u8fd9\u4e9b\u6280\u672f\u4e4b\u5916\uff0c\u8be5\u670d\u52a1\u8fd8\u5305\u62ec\u7b2c\u4e09\u65b9\u4f9b\u5e94\u5546\u63d0\u4f9b\u7684\u6346\u7ed1\u6846\u67b6\u3002\u8fd9\u4e9b\u6346\u7ed1\u6846\u67b6\u662f\u4f7f\u7528\u4e0a\u8ff0\u76f8\u540c\u6838\u5fc3\u90e8\u5206\u4ee5\u53ca\u4f9b\u5e94\u5546\u5305\u542b\u7684\u914d\u7f6e\u548c\u5e94\u7528\u7a0b\u5e8f\u6784\u5efa\u7684\u3002\u6709\u5173\u7b2c\u4e09\u65b9\u6846\u67b6\u6346\u7ed1\u5305\u7684\u66f4\u591a\u4fe1\u606f\uff0c\u8bf7\u53c2\u9605\u4ee5\u4e0b\u94fe\u63a5\uff1a Cloudera CDH Hortonworks Data Platform MapR","title":"\u6d89\u53ca\u7684\u6280\u672f"},{"location":"security/security-guide/#_240","text":"\u6570\u636e\u5904\u7406\u670d\u52a1\u7684\u8d44\u6e90\uff08\u96c6\u7fa4\u3001\u4f5c\u4e1a\u548c\u6570\u636e\u6e90\uff09\u5728\u9879\u76ee\u8303\u56f4\u5185\u5171\u4eab\u3002\u5c3d\u7ba1\u5355\u4e2a\u63a7\u5236\u5668\u5b89\u88c5\u53ef\u4ee5\u7ba1\u7406\u591a\u7ec4\u8d44\u6e90\uff0c\u4f46\u8fd9\u4e9b\u8d44\u6e90\u7684\u8303\u56f4\u5c06\u9650\u5b9a\u4e3a\u5355\u4e2a\u9879\u76ee\u3002\u9274\u4e8e\u6b64\u9650\u5236\uff0c\u6211\u4eec\u5efa\u8bae\u5bc6\u5207\u76d1\u89c6\u9879\u76ee\u4e2d\u7684\u7528\u6237\u6210\u5458\u8eab\u4efd\uff0c\u4ee5\u4fdd\u6301\u8d44\u6e90\u7684\u9002\u5f53\u9694\u79bb\u3002 
\u7531\u4e8e\u90e8\u7f72\u6b64\u670d\u52a1\u7684\u7ec4\u7ec7\u7684\u5b89\u5168\u8981\u6c42\u4f1a\u6839\u636e\u5176\u7279\u5b9a\u9700\u6c42\u800c\u6709\u6240\u4e0d\u540c\uff0c\u56e0\u6b64\u6211\u4eec\u5efa\u8bae\u8fd0\u8425\u5546\u5c06\u91cd\u70b9\u653e\u5728\u6570\u636e\u9690\u79c1\u3001\u96c6\u7fa4\u7ba1\u7406\u548c\u6700\u7ec8\u7528\u6237\u5e94\u7528\u7a0b\u5e8f\u4e0a\uff0c\u4f5c\u4e3a\u8bc4\u4f30\u7528\u6237\u9700\u6c42\u7684\u8d77\u70b9\u3002\u8fd9\u4e9b\u51b3\u7b56\u5c06\u6709\u52a9\u4e8e\u6307\u5bfc\u914d\u7f6e\u7528\u6237\u5bf9\u670d\u52a1\u7684\u8bbf\u95ee\u7684\u8fc7\u7a0b\u3002\u6709\u5173\u6570\u636e\u9690\u79c1\u7684\u6269\u5c55\u8ba8\u8bba\uff0c\u8bf7\u53c2\u9605\u79df\u6237\u6570\u636e\u9690\u79c1\u3002 \u6570\u636e\u5904\u7406\u5b89\u88c5\u7684\u9ed8\u8ba4\u5047\u8bbe\u662f\u7528\u6237\u5c06\u6709\u6743\u8bbf\u95ee\u5176\u9879\u76ee\u4e2d\u7684\u6240\u6709\u529f\u80fd\u3002\u5982\u679c\u9700\u8981\u66f4\u7cbe\u7ec6\u7684\u63a7\u5236\uff0c\u6570\u636e\u5904\u7406\u670d\u52a1\u4f1a\u63d0\u4f9b\u7b56\u7565\u6587\u4ef6\uff08\u5982\u7b56\u7565\u4e2d\u6240\u8ff0\uff09\u3002\u8fd9\u4e9b\u914d\u7f6e\u5c06\u9ad8\u5ea6\u4f9d\u8d56\u4e8e\u5b89\u88c5\u7ec4\u7ec7\u7684\u9700\u6c42\uff0c\u56e0\u6b64\u6ca1\u6709\u5173\u4e8e\u5176\u4f7f\u7528\u7684\u4e00\u822c\u5efa\u8bae\uff1a\u6709\u5173\u8be6\u7ec6\u4fe1\u606f\uff0c\u8bf7\u53c2\u9605\u57fa\u4e8e\u89d2\u8272\u7684\u8bbf\u95ee\u63a7\u5236\u7b56\u7565\u3002","title":"\u7528\u6237\u8bbf\u95ee\u8d44\u6e90"},{"location":"security/security-guide/#_241","text":"\u4e0e\u8bb8\u591a\u5176\u4ed6 OpenStack \u670d\u52a1\u4e00\u6837\uff0c\u6570\u636e\u5904\u7406\u670d\u52a1\u88ab\u90e8\u7f72\u4e3a\u5728\u8fde\u63a5\u5230\u5806\u6808\u7684\u4e3b\u673a\u4e0a\u8fd0\u884c\u7684\u5e94\u7528\u7a0b\u5e8f\u3002\u4ece Kilo \u7248\u672c\u5f00\u59cb\uff0c\u5b83\u80fd\u591f\u4ee5\u5206\u5e03\u5f0f\u65b9\u5f0f\u90e8\u7f72\u591a\u4e2a\u5197\u4f59\u63a7\u5236\u5668\u3002\u4e0e\u5176\u4ed6\u670d\u52a1\u4e00\u6837\uff0c\u5b83\u4e5f\u9700\u8981\u4e00\u4e2a\u6570\u636e\u5e93\u6765\u5b58\u50a8\u6709\u5173\u5176\u8d44\u6e90\u7684\u4fe1\u606f\u3002\u8bf7\u53c2\u9605\u6570\u636e\u5e93\u3002\u8bf7\u52a1\u5fc5\u6ce8\u610f\uff0c\u6570\u636e\u5904\u7406\u670d\u52a1\u5c06\u9700\u8981\u7ba1\u7406\u591a\u4e2a\u6807\u8bc6\u670d\u52a1\u4fe1\u4efb\uff0c\u76f4\u63a5\u4e0e\u4e1a\u52a1\u6d41\u7a0b\u548c\u7f51\u7edc\u670d\u52a1\u901a\u4fe1\uff0c\u5e76\u53ef\u80fd\u5728\u4ee3\u7406\u57df\u4e2d\u521b\u5efa\u7528\u6237\u3002\u7531\u4e8e\u8fd9\u4e9b\u539f\u56e0\uff0c\u63a7\u5236\u5668\u5c06\u9700\u8981\u8bbf\u95ee\u63a7\u5236\u5e73\u9762\uff0c\u56e0\u6b64\u6211\u4eec\u5efa\u8bae\u5c06\u5176\u4e0e\u5176\u4ed6\u670d\u52a1\u63a7\u5236\u5668\u4e00\u8d77\u5b89\u88c5\u3002 \u6570\u636e\u5904\u7406\u76f4\u63a5\u4e0e\u591a\u4e2a OpenStack \u670d\u52a1\u4ea4\u4e92\uff1a \u8ba1\u7b97 \u8eab\u4efd\u9a8c\u8bc1 \u8054\u7f51 \u5bf9\u8c61\u5b58\u50a8 \u914d\u5668 \u5757\u5b58\u50a8\uff08\u53ef\u9009\uff09 \u5efa\u8bae\u8bb0\u5f55\u8fd9\u4e9b\u670d\u52a1\u4e0e\u6570\u636e\u5904\u7406\u63a7\u5236\u5668\u4e4b\u95f4\u7684\u6240\u6709\u6570\u636e\u6d41\u548c\u6865\u63a5\u70b9\u3002\u8bf7\u53c2\u9605\u7cfb\u7edf\u6587\u6863\u3002 \u6570\u636e\u5904\u7406\u670d\u52a1\u4f7f\u7528\u5bf9\u8c61\u5b58\u50a8\u670d\u52a1\u6765\u5b58\u50a8\u4f5c\u4e1a\u4e8c\u8fdb\u5236\u6587\u4ef6\u548c\u6570\u636e\u6e90\u3002\u5e0c\u671b\u8bbf\u95ee\u5b8c\u6574\u6570\u636e\u5904\u7406\u670d\u52a1\u529f\u80fd\u7684\u7528\u6237\u5c06\u9700\u8981\u5728\u4ed6\u4eec\u6b63\u5728\u4f7f\u7528\u7684\u9879\u76ee\u4e2d\u5b58\u50a8\u5bf9\u8c61\u3002 
\u7f51\u7edc\u670d\u52a1\u5728\u7fa4\u96c6\u7684\u914d\u7f6e\u4e2d\u8d77\u7740\u91cd\u8981\u4f5c\u7528\u3002\u5728\u9884\u914d\u4e4b\u524d\uff0c\u7528\u6237\u5e94\u4e3a\u7fa4\u96c6\u5b9e\u4f8b\u63d0\u4f9b\u4e00\u4e2a\u6216\u591a\u4e2a\u7f51\u7edc\u3002\u5173\u8054\u7f51\u7edc\u7684\u64cd\u4f5c\u7c7b\u4f3c\u4e8e\u901a\u8fc7\u4eea\u8868\u677f\u542f\u52a8\u5b9e\u4f8b\u65f6\u5206\u914d\u7f51\u7edc\u7684\u8fc7\u7a0b\u3002\u63a7\u5236\u5668\u4f7f\u7528\u8fd9\u4e9b\u7f51\u7edc\u5bf9\u5176\u96c6\u7fa4\u7684\u5b9e\u4f8b\u548c\u6846\u67b6\u8fdb\u884c\u7ba1\u7406\u8bbf\u95ee\u3002 \u53e6\u5916\u503c\u5f97\u6ce8\u610f\u7684\u662f\u8eab\u4efd\u670d\u52a1\u3002\u6570\u636e\u5904\u7406\u670d\u52a1\u7684\u7528\u6237\u9700\u8981\u5728\u5176\u9879\u76ee\u4e2d\u5177\u6709\u9002\u5f53\u7684\u89d2\u8272\uff0c\u4ee5\u5141\u8bb8\u4e3a\u5176\u96c6\u7fa4\u9884\u7f6e\u5b9e\u4f8b\u3002\u4f7f\u7528\u4ee3\u7406\u57df\u914d\u7f6e\u7684\u5b89\u88c5\u9700\u8981\u7279\u522b\u6ce8\u610f\u3002\u8bf7\u53c2\u9605\u4ee3\u7406\u57df\u3002\u5177\u4f53\u800c\u8a00\uff0c\u6570\u636e\u5904\u7406\u670d\u52a1\u5c06\u9700\u8981\u80fd\u591f\u5728\u4ee3\u7406\u57df\u4e2d\u521b\u5efa\u7528\u6237\u3002","title":"\u90e8\u7f72"},{"location":"security/security-guide/#_242","text":"\u6570\u636e\u5904\u7406\u63a7\u5236\u5668\u7684\u4e3b\u8981\u4efb\u52a1\u4e4b\u4e00\u662f\u4e0e\u5176\u751f\u6210\u7684\u5b9e\u4f8b\u8fdb\u884c\u901a\u4fe1\u3002\u8fd9\u4e9b\u5b9e\u4f8b\u662f\u9884\u7f6e\u7684\uff0c\u7136\u540e\u6839\u636e\u6240\u4f7f\u7528\u7684\u6846\u67b6\u8fdb\u884c\u914d\u7f6e\u3002\u63a7\u5236\u5668\u548c\u5b9e\u4f8b\u4e4b\u95f4\u7684\u901a\u4fe1\u4f7f\u7528\u5b89\u5168\u5916\u58f3 \uff08SSH\uff09 \u548c HTTP \u534f\u8bae\u3002 \u5728\u9884\u914d\u96c6\u7fa4\u65f6\uff0c\u5c06\u5728\u7528\u6237\u63d0\u4f9b\u7684\u7f51\u7edc\u4e2d\u4e3a\u6bcf\u4e2a\u5b9e\u4f8b\u63d0\u4f9b\u4e00\u4e2a IP \u5730\u5740\u3002\u7b2c\u4e00\u4e2a\u7f51\u7edc\u901a\u5e38\u79f0\u4e3a\u6570\u636e\u5904\u7406\u7ba1\u7406\u7f51\u7edc\uff0c\u5b9e\u4f8b\u53ef\u4ee5\u4f7f\u7528\u7f51\u7edc\u670d\u52a1\u4e3a\u6b64\u7f51\u7edc\u5206\u914d\u7684\u56fa\u5b9a IP \u5730\u5740\u3002\u63a7\u5236\u5668\u8fd8\u53ef\u4ee5\u914d\u7f6e\u4e3a\u9664\u4e86\u56fa\u5b9a\u5730\u5740\u4e4b\u5916\uff0c\u8fd8\u5bf9\u5b9e\u4f8b\u4f7f\u7528\u6d6e\u52a8 IP \u5730\u5740\u3002\u4e0e\u5b9e\u4f8b\u901a\u4fe1\u65f6\uff0c\u63a7\u5236\u5668\u5c06\u9996\u9009\u6d6e\u52a8\u5730\u5740\uff08\u5982\u679c\u542f\u7528\uff09\u3002 \u5bf9\u4e8e\u56fa\u5b9a\u548c\u6d6e\u52a8 IP \u5730\u5740\u65e0\u6cd5\u63d0\u4f9b\u6240\u9700\u529f\u80fd\u7684\u60c5\u51b5\uff0c\u63a7\u5236\u5668\u53ef\u4ee5\u901a\u8fc7\u4e24\u79cd\u66ff\u4ee3\u65b9\u6cd5\u63d0\u4f9b\u8bbf\u95ee\uff1a\u81ea\u5b9a\u4e49\u7f51\u7edc\u62d3\u6251\u548c\u95f4\u63a5\u8bbf\u95ee\u3002\u81ea\u5b9a\u4e49\u7f51\u7edc\u62d3\u6251\u529f\u80fd\u5141\u8bb8\u63a7\u5236\u5668\u901a\u8fc7\u914d\u7f6e\u6587\u4ef6\u4e2d\u63d0\u4f9b\u7684 shell 
\u547d\u4ee4\u8bbf\u95ee\u5b9e\u4f8b\u3002\u95f4\u63a5\u8bbf\u95ee\u7528\u4e8e\u6307\u5b9a\u7528\u6237\u5728\u96c6\u7fa4\u7f6e\u5907\u671f\u95f4\u53ef\u7528\u4f5c\u4ee3\u7406\u7f51\u5173\u7684\u5b9e\u4f8b\u3002\u8fd9\u4e9b\u9009\u9879\u901a\u8fc7\u914d\u7f6e\u548c\u5f3a\u5316\u4e2d\u7684\u7528\u6cd5\u793a\u4f8b\u8fdb\u884c\u8ba8\u8bba\u3002","title":"\u63a7\u5236\u5668\u5bf9\u96c6\u7fa4\u7684\u7f51\u7edc\u8bbf\u95ee"},{"location":"security/security-guide/#_243","text":"\u6709\u591a\u4e2a\u914d\u7f6e\u9009\u9879\u548c\u90e8\u7f72\u7b56\u7565\u53ef\u4ee5\u63d0\u9ad8\u6570\u636e\u5904\u7406\u670d\u52a1\u7684\u5b89\u5168\u6027\u3002\u670d\u52a1\u63a7\u5236\u5668\u901a\u8fc7\u4e3b\u914d\u7f6e\u6587\u4ef6\u548c\u4e00\u4e2a\u6216\u591a\u4e2a\u7b56\u7565\u6587\u4ef6\u8fdb\u884c\u914d\u7f6e\u3002\u4f7f\u7528\u6570\u636e\u5c40\u90e8\u6027\u529f\u80fd\u7684\u5b89\u88c5\u8fd8\u5c06\u5177\u6709\u4e24\u4e2a\u9644\u52a0\u6587\u4ef6\uff0c\u7528\u4e8e\u6307\u5b9a\u8ba1\u7b97\u8282\u70b9\u548c\u5bf9\u8c61\u5b58\u50a8\u8282\u70b9\u7684\u7269\u7406\u4f4d\u7f6e\u3002","title":"\u914d\u7f6e\u548c\u5f3a\u5316"},{"location":"security/security-guide/#tls_1","text":"\u4e0e\u8bb8\u591a\u5176\u4ed6 OpenStack \u63a7\u5236\u5668\u4e00\u6837\uff0c\u6570\u636e\u5904\u7406\u670d\u52a1\u63a7\u5236\u5668\u53ef\u4ee5\u914d\u7f6e\u4e3a\u9700\u8981 TLS \u8fde\u63a5\u3002 Pre-Kilo \u7248\u672c\u5c06\u9700\u8981 TLS \u4ee3\u7406\uff0c\u56e0\u4e3a\u63a7\u5236\u5668\u4e0d\u5141\u8bb8\u76f4\u63a5 TLS \u8fde\u63a5\u3002TLS \u4ee3\u7406\u548c HTTP \u670d\u52a1\u4e2d\u4ecb\u7ecd\u4e86\u5982\u4f55\u914d\u7f6e TLS \u4ee3\u7406\uff0c\u6211\u4eec\u5efa\u8bae\u6309\u7167\u5176\u4e2d\u7684\u5efa\u8bae\u521b\u5efa\u6b64\u7c7b\u5b89\u88c5\u3002 \u4ece Kilo \u7248\u672c\u5f00\u59cb\uff0c\u6570\u636e\u5904\u7406\u63a7\u5236\u5668\u5141\u8bb8\u76f4\u63a5 TLS \u8fde\u63a5\uff0c\u6211\u4eec\u5efa\u8bae\u8fd9\u6837\u505a\u3002\u542f\u7528\u6b64\u884c\u4e3a\u9700\u8981\u5bf9\u63a7\u5236\u5668\u914d\u7f6e\u6587\u4ef6\u8fdb\u884c\u4e00\u4e9b\u5c0f\u7684\u8c03\u6574\u3002 \u4f8b\u3002\u914d\u7f6e\u5bf9\u63a7\u5236\u5668\u7684 TLS \u8bbf\u95ee [ssl] ca_file = cafile.pem cert_file = certfile.crt key_file = keyfile.key","title":"TLS\u7cfb\u7edf"},{"location":"security/security-guide/#_244","text":"\u6570\u636e\u5904\u7406\u670d\u52a1\u4f7f\u7528\u7b56\u7565\u6587\u4ef6\uff08\u5982\u7b56\u7565\u4e2d\u6240\u8ff0\uff09\u6765\u914d\u7f6e\u57fa\u4e8e\u89d2\u8272\u7684\u8bbf\u95ee\u63a7\u5236\u3002\u4f7f\u7528\u7b56\u7565\u6587\u4ef6\uff0c\u64cd\u4f5c\u5458\u53ef\u4ee5\u9650\u5236\u7ec4\u5bf9\u7279\u5b9a\u6570\u636e\u5904\u7406\u529f\u80fd\u7684\u8bbf\u95ee\u3002 \u6267\u884c\u6b64\u64cd\u4f5c\u7684\u539f\u56e0\u5c06\u6839\u636e\u5b89\u88c5\u7684\u7ec4\u7ec7\u8981\u6c42\u800c\u66f4\u6539\u3002\u901a\u5e38\uff0c\u8fd9\u4e9b\u7ec6\u7c92\u5ea6\u63a7\u4ef6\u7528\u4e8e\u64cd\u4f5c\u5458\u9700\u8981\u9650\u5236\u6570\u636e\u5904\u7406\u670d\u52a1\u8d44\u6e90\u7684\u521b\u5efa\u3001\u5220\u9664\u548c\u68c0\u7d22\u7684\u60c5\u51b5\u3002\u9700\u8981\u9650\u5236\u9879\u76ee\u5185\u8bbf\u95ee\u7684\u64cd\u4f5c\u5458\u5e94\u5145\u5206\u610f\u8bc6\u5230\uff0c\u9700\u8981\u6709\u5176\u4ed6\u65b9\u6cd5\u8ba9\u7528\u6237\u8bbf\u95ee\u670d\u52a1\u7684\u6838\u5fc3\u529f\u80fd\uff08\u4f8b\u5982\uff0c\u914d\u7f6e\u96c6\u7fa4\uff09\u3002 \u4f8b\u3002\u5141\u8bb8\u6240\u6709\u7528\u6237\u4f7f\u7528\u6240\u6709\u65b9\u6cd5\uff08\u9ed8\u8ba4\u7b56\u7565\uff09 { \"default\": \"\" } 
{"location":"security/security-guide/#_244","text":"The Data Processing service uses a policy file, as described in Policies, to configure role-based access control. With a policy file an operator can restrict a group's access to specific Data Processing functions. The reasons for doing so will vary with the organizational requirements of the installation; typically, these fine-grained controls are used when an operator needs to restrict the creation, deletion, and retrieval of Data Processing service resources. Operators who need to restrict access within a project should be fully aware that there must be alternative means for users to reach the core functionality of the service (for example, provisioning clusters). Example: allowing all users to use all methods (the default policy): { \"default\": \"\" } Example: disallowing image-registry manipulations for non-admin users: { \"default\": \"\", \"data-processing:images:register\": \"role:admin\", \"data-processing:images:unregister\": \"role:admin\", \"data-processing:images:add_tags\": \"role:admin\", \"data-processing:images:remove_tags\": \"role:admin\" }","title":"Role-based access control policies"},
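The restrictive policy above, laid out as it would look in the service's policy file; the policy.json file name is an assumption, and the rule names are taken verbatim from the example:

```json
{
    "default": "",
    "data-processing:images:register": "role:admin",
    "data-processing:images:unregister": "role:admin",
    "data-processing:images:add_tags": "role:admin",
    "data-processing:images:remove_tags": "role:admin"
}
```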
Member","title":"\u4ee3\u7406\u57df"},{"location":"security/security-guide/#_247","text":"\u6570\u636e\u5904\u7406\u63a7\u5236\u5668\u53ef\u4ee5\u914d\u7f6e\u4e3a\u4f7f\u7528\u4ee3\u7406\u547d\u4ee4\u6765\u8bbf\u95ee\u5176\u96c6\u7fa4\u5b9e\u4f8b\u3002\u901a\u8fc7\u8fd9\u79cd\u65b9\u5f0f\uff0c\u53ef\u4ee5\u4e3a\u4e0d\u4f7f\u7528\u7f51\u7edc\u670d\u52a1\u76f4\u63a5\u63d0\u4f9b\u7684\u7f51\u7edc\u7684\u5b89\u88c5\u521b\u5efa\u81ea\u5b9a\u4e49\u7f51\u7edc\u62d3\u6251\u3002\u5bf9\u4e8e\u9700\u8981\u9650\u5236\u63a7\u5236\u5668\u548c\u5b9e\u4f8b\u4e4b\u95f4\u8bbf\u95ee\u7684\u5b89\u88c5\uff0c\u6211\u4eec\u5efa\u8bae\u4f7f\u7528\u6b64\u9009\u9879\u3002 \u793a\u4f8b\uff1a\u901a\u8fc7\u6307\u5b9a\u7684\u4e2d\u7ee7\u673a\u8bbf\u95ee\u5b9e\u4f8b [DEFAULT] proxy_command='ssh relay-machine-{tenant_id} nc {host} {port}' \u793a\u4f8b\uff1a\u901a\u8fc7\u81ea\u5b9a\u4e49\u7f51\u7edc\u547d\u540d\u7a7a\u95f4\u8bbf\u95ee\u5b9e\u4f8b [DEFAULT] proxy_command='ip netns exec ns_for_{network_id} nc {host} {port}'","title":"\u81ea\u5b9a\u4e49\u7f51\u7edc\u62d3\u6251"},{"location":"security/security-guide/#_248","text":"\u5bf9\u4e8e\u63a7\u5236\u5668\u5bf9\u96c6\u7fa4\u6240\u6709\u5b9e\u4f8b\u7684\u8bbf\u95ee\u6743\u9650\u6709\u9650\u7684\u5b89\u88c5\uff0c\u7531\u4e8e\u5bf9\u6d6e\u52a8 IP \u5730\u5740\u6216\u5b89\u5168\u89c4\u5219\u7684\u9650\u5236\uff0c\u53ef\u4ee5\u914d\u7f6e\u95f4\u63a5\u8bbf\u95ee\u3002\u8fd9\u5141\u8bb8\u5c06\u67d0\u4e9b\u5b9e\u4f8b\u6307\u5b9a\u4e3a\u96c6\u7fa4\u5176\u4ed6\u5b9e\u4f8b\u7684\u4ee3\u7406\u7f51\u5173\u3002 \u53ea\u6709\u5728\u5b9a\u4e49\u5c06\u6784\u6210\u6570\u636e\u5904\u7406\u96c6\u7fa4\u7684\u8282\u70b9\u7ec4\u6a21\u677f\u65f6\uff0c\u624d\u80fd\u542f\u7528\u6b64\u914d\u7f6e\u3002\u5b83\u4f5c\u4e3a\u8fd0\u884c\u65f6\u9009\u9879\u63d0\u4f9b\uff0c\u53ef\u5728\u7fa4\u96c6\u7f6e\u5907\u8fc7\u7a0b\u4e2d\u542f\u7528\u3002","title":"\u95f4\u63a5\u8bbf\u95ee"},{"location":"security/security-guide/#rootwrap","text":"\u5728\u4e3a\u7f51\u7edc\u8bbf\u95ee\u521b\u5efa\u81ea\u5b9a\u4e49\u62d3\u6251\u65f6\uff0c\u53ef\u80fd\u9700\u8981\u5141\u8bb8\u975e root \u7528\u6237\u8fd0\u884c\u4ee3\u7406\u547d\u4ee4\u3002\u5bf9\u4e8e\u8fd9\u4e9b\u60c5\u51b5\uff0coslo rootwrap \u8f6f\u4ef6\u5305\u7528\u4e8e\u4e3a\u975e root \u7528\u6237\u63d0\u4f9b\u8fd0\u884c\u7279\u6743\u547d\u4ee4\u7684\u5de5\u5177\u3002\u6b64\u914d\u7f6e\u8981\u6c42\u4e0e\u6570\u636e\u5904\u7406\u63a7\u5236\u5668\u5e94\u7528\u7a0b\u5e8f\u5173\u8054\u7684\u7528\u6237\u4f4d\u4e8e sudoers \u5217\u8868\u4e2d\uff0c\u5e76\u5728\u914d\u7f6e\u6587\u4ef6\u4e2d\u542f\u7528\u8be5\u9009\u9879\u3002\u6216\u8005\uff0c\u53ef\u4ee5\u63d0\u4f9b\u5907\u7528 rootwrap \u547d\u4ee4\u3002 \u793a\u4f8b\uff1a\u542f\u7528 rootwrap \u7528\u6cd5\u5e76\u663e\u793a\u9ed8\u8ba4\u547d\u4ee4 [DEFAULT] use_rootwrap=True rootwrap_command=\u2019sudo sahara-rootwrap /etc/sahara/rootwrap.conf\u2019 \u5173\u4e8e rootwrap \u9879\u76ee\u7684\u66f4\u591a\u4fe1\u606f\uff0c\u8bf7\u53c2\u8003\u5b98\u65b9\u6587\u6863\uff1ahttps://wiki.openstack.org/wiki/Rootwrap","title":"Rootwrap"},{"location":"security/security-guide/#_249","text":"\u76d1\u89c6\u670d\u52a1\u63a7\u5236\u5668\u7684\u8f93\u51fa\u662f\u4e00\u4e2a\u5f3a\u5927\u7684\u53d6\u8bc1\u5de5\u5177\uff0c\u5982\u76d1\u89c6\u548c\u65e5\u5fd7\u8bb0\u5f55\u4e2d\u66f4\u8be6\u7ec6\u5730\u63cf\u8ff0\u7684\u90a3\u6837\u3002\u6570\u636e\u5904\u7406\u670d\u52a1\u63a7\u5236\u5668\u63d0\u4f9b\u4e86\u51e0\u4e2a\u9009\u9879\u6765\u8bbe\u7f6e\u65e5\u5fd7\u8bb0\u5f55\u7684\u4f4d\u7f6e\u548c\u7ea7\u522b\u3002 
{"location":"security/security-guide/#_249","text":"Monitoring the output of the service controller is a powerful forensic tool, as described in more detail in Monitoring and logging. The Data Processing service controller offers a few options for setting the location and level of logging. Example: setting the log level higher than warning and specifying an output file: [DEFAULT] verbose = true log_file = /var/log/data-processing.log","title":"Logging"},
{"location":"security/security-guide/#_250","text":"OpenStack.org, Welcome to Sahara!, 2016, Sahara project documentation. Apache Software Foundation, Welcome to Apache Hadoop!, 2016, Apache Hadoop project. Apache Software Foundation, Hadoop in Secure Mode, 2016, Hadoop secure-mode documentation. Apache Software Foundation, HDFS User Guide, 2016, Hadoop HDFS documentation. Apache Software Foundation, Spark, 2016, Spark project. Apache Software Foundation, Spark Security, 2016, Spark security documentation. Apache Software Foundation, Apache Storm, 2016, Storm project. Apache Software Foundation, Apache Zookeeper, 2016, Zookeeper project. Apache Software Foundation, Apache Oozie Workflow Scheduler for Hadoop, 2016, Oozie project. Apache Software Foundation, Apache Hive, 2016, Hive. Apache Software Foundation, Welcome to Apache Pig, 2016, Pig. Apache Software Foundation, Cloudera Product Documentation, 2016, Cloudera CDH documentation. Hortonworks, Hortonworks, 2016, Hortonworks Data Platform documentation. MapR Technologies, Apache Hadoop for the MapR Converged Data Platform, 2016, MapR project.","title":"Bibliography"},
{"location":"security/security-guide/#_251","text":"The choice of database server is an important consideration in the security of an OpenStack deployment. Multiple factors should be considered when deciding on a database server, but for the scope of this book only security considerations are discussed. OpenStack supports a variety of database types; see the OpenStack Administrator Guide for more information. The Security Guide currently focuses on PostgreSQL and MySQL. Topics covered: database back-end considerations; security references for the database back ends; database access control; the OpenStack database access model; database authentication and access control; requiring user accounts to require SSL transport; authentication with X.509 certificates; OpenStack service database configuration; nova-conductor; database transport security; database server IP address binding; database transport; MySQL SSL configuration; PostgreSQL SSL configuration.","title":"Databases"},
{"location":"security/security-guide/#_252","text":"PostgreSQL has a number of desirable security features, such as Kerberos authentication, object-level security, and encryption support. The PostgreSQL community does a good job of providing solid guidance, documentation, and tooling to promote positive security practices. MySQL has a large community, is widely adopted, and provides high-availability options. MySQL also has the ability to provide enhanced client authentication by way of a plug-in authentication mechanism, and forked distributions in the MySQL community provide many options for consideration. It is important to choose a specific implementation of MySQL based on a thorough evaluation of its security posture and the level of support provided for the given distribution.","title":"Database back-end considerations"},
{"location":"security/security-guide/#_253","text":"Users who deploy MySQL or PostgreSQL are advised to refer to existing security guidance. Some references are listed below. MySQL: OWASP MySQL hardening; MySQL pluggable authentication; Security in MySQL. PostgreSQL: OWASP PostgreSQL hardening; Total security in a PostgreSQL database.","title":"Security references for the database back ends"},
{"location":"security/security-guide/#_254","text":"Each of the core OpenStack services (Compute, Identity, Networking, Block Storage) stores state and configuration information in databases. In this chapter we discuss how databases are used currently in OpenStack, and we explore the security concerns and the security ramifications of database back-end choices.","title":"Database access control"},
{"location":"security/security-guide/#openstack_11","text":"Each of the services within an OpenStack project accesses a single database. There are presently no reference policies for creating table- or row-based access restrictions to the database. There are no general provisions for granular control of database operations in OpenStack; access and privileges are granted simply based on whether a node has access to the database or not. In this scenario, nodes with access to the database may have full privileges for DROP, INSERT, or UPDATE operations.","title":"OpenStack database access model"},
{"location":"security/security-guide/#_255","text":"By default, each of the OpenStack services and their processes access the database using a shared set of credentials. This makes auditing database operations, and revoking a service's or a process's access privileges to the database, particularly difficult.","title":"Granular access control"},
{"location":"security/security-guide/#nova-conductor","text":"The compute nodes are the least trusted of the services in OpenStack because they host tenant instances. The nova-conductor service was introduced as a database proxy, acting as an intermediary between the compute nodes and the database; its ramifications are discussed later in this chapter. We strongly recommend: that all database communications be isolated to a management network; securing communications using TLS; creating unique database user accounts per OpenStack service endpoint (as illustrated in the figure below).","title":"Nova-conductor"},
{"location":"security/security-guide/#_256","text":"Given the risks around access to the database, we strongly recommend that unique database user accounts be created for each node that needs access to the database. Doing this facilitates better analysis and auditing for ensuring compliance, and in the event of a compromise it allows the compromised host to be isolated by removing its access to the database upon detection. When creating these per-service-endpoint database user accounts, care should be taken to ensure that they are configured to require TLS. Alternatively, for increased security it is recommended that the database accounts be configured using X.509 certificate authentication in addition to user names and passwords.","title":"Database authentication and access control"},
{"location":"security/security-guide/#_257","text":"A separate database administrator (DBA) account should be created and protected that has full privileges for creating and dropping databases, creating user accounts, and updating user privileges. This simple separation of responsibility helps prevent accidental misconfiguration, lowers risk, and limits the scope of compromise. The database user accounts created for the OpenStack services and for each node should have privileges limited to just the database relevant to the service that the node belongs to.","title":"Privileges"},
{"location":"security/security-guide/#ssl","text":"","title":"Require user accounts to require SSL transport"},
{"location":"security/security-guide/#1mysql","text":"GRANT ALL ON dbname.* to 'compute01'@'hostname' IDENTIFIED BY 'NOVA_DBPASS' REQUIRE SSL;","title":"Configuration example #1: (MySQL)"},
{"location":"security/security-guide/#2postgresql","text":"In the file pg_hba.conf: hostssl dbname compute01 hostname md5 Note that this entry only adds the ability to communicate over SSL and is non-exclusive; other access methods that may allow unencrypted transport should be disabled so that SSL is the only access method. The md5 parameter defines the authentication method as a hashed password; a more secure authentication example is provided in the section below.","title":"Configuration example #2: (PostgreSQL)"},
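The MySQL example above can be read as a per-node account that is only usable over TLS; a sketch with the same placeholder values (dbname, compute01, hostname, NOVA_DBPASS). On the PostgreSQL side the equivalent change is the hostssl line shown above, together with removing any plain host lines:

```sql
-- MySQL: per-node service account that must connect over TLS
GRANT ALL ON dbname.* TO 'compute01'@'hostname'
    IDENTIFIED BY 'NOVA_DBPASS'
    REQUIRE SSL;
-- Note: newer MySQL releases split this into CREATE USER ... REQUIRE SSL
-- followed by a separate GRANT; the combined form above follows the text.
```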
{"location":"security/security-guide/#openstack_12","text":"If your database server is configured for TLS transport, you will need to specify the certificate authority information for use with the initial connection string in the SQLAlchemy query.","title":"OpenStack service database configuration"},
{"location":"security/security-guide/#mysql-sql_connection","text":"sql_connection = mysql://compute01:NOVA_DBPASS@localhost/nova?charset=utf8&ssl_ca=/etc/mysql/cacert.pem","title":"Example of an :sql_connection string to MySQL:"},
{"location":"security/security-guide/#x509","text":"Security may be enhanced by requiring X.509 client certificates for authentication. Authenticating to the database in this manner provides greater identity assurance of the client making the connection to the database and ensures that the communications are encrypted.","title":"Authentication with X.509 certificates"},
{"location":"security/security-guide/#1mysql_1","text":"GRANT ALL on dbname.* to 'compute01'@'hostname' IDENTIFIED BY 'NOVA_DBPASS' REQUIRE SUBJECT '/C=XX/ST=YYY/L=ZZZZ/O=cloudycloud/CN=compute01' AND ISSUER '/C=XX/ST=YYY/L=ZZZZ/O=cloudycloud/CN=cloud-ca';","title":"Configuration example #1: (MySQL)"},
{"location":"security/security-guide/#2postgresql_1","text":"hostssl dbname compute01 hostname cert","title":"Configuration example #2: (PostgreSQL)"},
{"location":"security/security-guide/#openstack_13","text":"If your database server is configured to require X.509 certificates for authentication, you will need to specify the appropriate SQLAlchemy query parameters for the database back end. These parameters specify the certificate, private key, and certificate authority information for use with the initial connection string. Example of an :sql_connection string for X.509 certificate authentication to MySQL: sql_connection = mysql://compute01:NOVA_DBPASS@localhost/nova?charset=utf8&ssl_ca=/etc/mysql/cacert.pem&ssl_cert=/etc/mysql/server-cert.pem&ssl_key=/etc/mysql/server-key.pem","title":"OpenStack service database configuration"},
{"location":"security/security-guide/#nova-conductor_1","text":"OpenStack Compute offers a sub-service called nova-conductor which proxies database connections; its primary purpose is to have the nova compute nodes interface with nova-conductor to meet data persistence needs, rather than communicating directly with the database. Nova-conductor receives requests over RPC and performs actions on behalf of the calling service without granting granular access to the database, its tables, or the data within them. Nova-conductor essentially abstracts direct database access away from compute nodes. This abstraction offers the advantage of restricting services to executing methods with parameters, similar to stored procedures, preventing a large number of systems from directly accessing or modifying database data. This is accomplished without having these procedures stored or executed within the context or scope of the database itself, which is a frequent criticism of typical stored procedures. Unfortunately, this solution complicates the task of achieving more fine-grained access control and of auditing data access. Because the nova-conductor service receives requests over RPC, it also highlights the importance of improving the security of messaging: any node with access to the message queue may execute the methods provided by nova-conductor and effectively modify the database. Note that, as nova-conductor only applies to OpenStack Compute, direct database access from compute hosts may still be necessary for the operation of other OpenStack components such as Telemetry, Networking, and Block Storage. To disable nova-conductor, place the following into your nova.conf file (on your compute hosts): [conductor] use_local = true","title":"Nova-conductor"},
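If nova-conductor is disabled as described above, the compute host itself needs a database connection string, and it should be the TLS-protected one; a sketch of how the two settings from the text could sit together in nova.conf (the section layout is illustrative, and the legacy sql_connection option is kept as given):

```ini
# nova.conf on a compute host (illustrative layout; values from the text)
[DEFAULT]
# TLS-verified connection string for direct database access
sql_connection = mysql://compute01:NOVA_DBPASS@localhost/nova?charset=utf8&ssl_ca=/etc/mysql/cacert.pem

[conductor]
# Bypass nova-conductor; only do this if direct database access is required
use_local = true
```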
{"location":"security/security-guide/#_258","text":"This chapter covers issues related to network communications to and from the database server, including IP address bindings and encrypting network traffic with TLS.","title":"Database transport security"},
{"location":"security/security-guide/#ip","text":"To isolate sensitive database communications between the services and the database, we strongly recommend that the database server(s) be configured to only allow communications to and from the database over an isolated management network. This is achieved by restricting the interface or IP address on which the database server binds a network socket for incoming client connections.","title":"Database server IP address binding"},
{"location":"security/security-guide/#mysql","text":"In my.cnf: [mysqld] ... bind-address <management-network IP address or host name>","title":"Restricting the bind address for MySQL"},
{"location":"security/security-guide/#postgresql","text":"In postgresql.conf: listen_addresses = <management-network IP address or host name>","title":"Restricting the listen address for PostgreSQL"},
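A minimal sketch of both bind restrictions, using a hypothetical management-network address of 10.0.0.10 (the address is illustrative only; the option names come from the text):

```ini
# my.cnf -- MySQL accepts client connections only on the management network
[mysqld]
bind-address = 10.0.0.10

# postgresql.conf -- PostgreSQL listens only on the management network
listen_addresses = '10.0.0.10'
```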
{"location":"security/security-guide/#_259","text":"In addition to restricting database communications to the management network, we also strongly recommend that the cloud administrator configure the database back end to require TLS. Using TLS for database client connections protects the communications from tampering and eavesdropping. As discussed in the next section, using TLS also provides the framework for doing database user authentication through X.509 certificates (commonly referred to as PKI). Below is guidance on how TLS is typically configured for the two popular database back ends MySQL and PostgreSQL. Note: when installing the certificate and key files, ensure that the file permissions are restricted, for example with chmod 0600, and that ownership is restricted to the database daemon user, to prevent unauthorized access by other processes and users on the database server.","title":"Database transport"},
{"location":"security/security-guide/#mysql-ssl","text":"The following lines should be added to the system-wide MySQL configuration file. In my.cnf: [mysqld] ... ssl-ca = /path/to/ssl/cacert.pem ssl-cert = /path/to/ssl/server-cert.pem ssl-key = /path/to/ssl/server-key.pem Optionally, if you wish to restrict the set of SSL ciphers used for the encrypted connection (see the cipher list and the syntax for specifying the cipher string): ssl-cipher = 'cipher:list'","title":"MySQL SSL configuration"},
{"location":"security/security-guide/#postgresql-ssl","text":"The following lines should be added to the system-wide PostgreSQL configuration file, postgresql.conf: ssl = true Optionally, if you wish to restrict the set of SSL ciphers used for the encrypted connection (see the cipher list and the syntax for specifying the cipher string): ssl-ciphers = 'cipher:list' The server certificate, key, and certificate authority (CA) files should be placed in the $PGDATA directory in the following files: $PGDATA/server.crt - server certificate; $PGDATA/server.key - private key corresponding to server.crt; $PGDATA/root.crt - trusted certificate authorities; $PGDATA/root.crl - certificate revocation list.","title":"PostgreSQL SSL configuration"},
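The same server-side TLS settings, shown as they would appear in each file (paths are the placeholders used in the text):

```ini
# my.cnf -- server-side TLS material for MySQL
[mysqld]
ssl-ca   = /path/to/ssl/cacert.pem
ssl-cert = /path/to/ssl/server-cert.pem
ssl-key  = /path/to/ssl/server-key.pem
# Optional: restrict the cipher suites offered to clients
# ssl-cipher = 'cipher:list'

# postgresql.conf -- enable TLS; key material lives under $PGDATA
# ($PGDATA/server.crt, server.key, root.crt, root.crl)
ssl = true
```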
{"location":"security/security-guide/#_260","text":"OpenStack is designed to support multi-tenancy, and those tenants will most probably have different data requirements. As a cloud builder or operator, you must ensure that your OpenStack environment can address data privacy concerns and regulations. In this chapter we address data residency and disposal as they relate to OpenStack implementations. Topics covered: data privacy concerns; data residency; data disposal; data encryption; volume encryption; ephemeral disk encryption; Object Storage objects; Block Storage performance and back ends; network data; key management; bibliography.","title":"Tenant data privacy"},
{"location":"security/security-guide/#_261","text":"","title":"Data privacy concerns"},
{"location":"security/security-guide/#_262","text":"The privacy and isolation of data has consistently been cited as the primary barrier to cloud adoption over the past few years. Concerns over who owns data in the cloud, and whether the cloud operator can ultimately be trusted as a custodian of this data, have been significant issues in the past. Numerous OpenStack services maintain data and metadata belonging to tenants or reference tenant information. Tenant data stored in an OpenStack cloud may include: Object Storage objects; Compute instance ephemeral filesystem storage; Compute instance memory; Block Storage volume data; public keys for Compute access; virtual machine images in the Image service; machine snapshots; data passed to OpenStack Compute's configuration-drive extension. Metadata stored by an OpenStack cloud includes the following non-exhaustive items: organization name; user's “real name”; number or size of running instances, buckets, objects, volumes, and other quota-related items; number of hours running instances or storing data; IP addresses of users; internally generated private keys for compute image bundling.","title":"Data residency"},
{"location":"security/security-guide/#_263","text":"OpenStack operators should strive to provide a certain level of tenant data disposal assurance. Best practices suggest that the operator sanitize cloud system media (digital and non-digital) prior to disposal, prior to release out of organizational control, or prior to release for reuse. Sanitization methods should implement an appropriate level of strength and integrity given the specific security domain and sensitivity of the information. “The sanitization process removes information from the media such that the information cannot be retrieved or reconstructed. Sanitization techniques, including clearing, purging, cryptographic erase, and destruction, prevent the disclosure of information to unauthorized individuals when such media is reused or released for disposal.” (NIST Special Publication 800-53 Revision 4). Following the general data disposal and sanitization guidelines adopted in the NIST-recommended security controls, cloud operators should: track, document, and verify media sanitization and disposal actions; test sanitization equipment and procedures to verify proper performance; sanitize portable, removable storage devices prior to connecting such devices to the cloud infrastructure; destroy cloud system media that cannot be sanitized. In an OpenStack deployment you will need to address the following: secure data erasure; instance memory scrubbing; Block Storage volume data; Compute instance ephemeral storage; bare-metal server sanitization.","title":"Data disposal"},
{"location":"security/security-guide/#_264","text":"In OpenStack some data may be deleted, but not securely erased in the context of the NIST standards referenced above. This is generally applicable to most or all of the above-defined metadata and information stored in the database. It may be remedied with database and/or system configuration for automatic vacuuming and periodic free-space wiping.","title":"Data not securely erased"},
{"location":"security/security-guide/#_265","text":"Specific to the various hypervisors is the treatment of instance memory. This behavior is not defined in OpenStack Compute, although it is generally expected that hypervisors make a best effort to scrub memory upon deletion of an instance, upon creation of an instance, or both. Xen explicitly assigns dedicated memory regions to instances and scrubs the data upon destruction of the instance (or domain, in Xen terminology). KVM depends more greatly on Linux page management; a complex set of rules related to KVM paging is defined in the KVM documentation. It is important to note that use of the Xen memory-balloon feature is likely to result in information disclosure; we strongly recommend avoiding this feature. For these and other hypervisors, we recommend referring to the hypervisor-specific documentation.","title":"Instance memory scrubbing"},
{"location":"security/security-guide/#cinder","text":"Use of the OpenStack volume-encryption feature is highly encouraged; it is discussed below in the Data encryption section under Volume encryption. When this feature is used, destruction of data is accomplished by securely deleting the encryption key. The end user can select this feature while creating a volume, but note that an admin must perform a one-time set-up of the volume-encryption feature first; instructions for this set-up are in the Block Storage section of the Configuration Reference, under Volume encryption. If the OpenStack volume-encryption feature is not used, other approaches are generally harder to enable. If a back-end plug-in is being used, there may be independent ways of doing encryption or non-standard overwrite solutions. Plug-ins to OpenStack Block Storage store data in a variety of ways: many plug-ins are specific to a vendor or technology, whereas others are more do-it-yourself solutions around file systems such as LVM or ZFS. Methods for securely destroying data vary from plug-in to plug-in, from vendor solution to vendor solution, and from file system to file system. Some back ends, such as ZFS, support copy-on-write to prevent data exposure; in these cases, reads from unwritten blocks always return zero. Other back ends, such as LVM, may not natively support this, so the Block Storage plug-in is responsible for overwriting previously written blocks before handing them to users. Be sure to review what guarantees your chosen volume back end provides, and see what mitigations are available for those it does not.","title":"Cinder volume data"},
{"location":"security/security-guide/#_266","text":"The OpenStack Image service has a delayed-delete feature, which will pend the deletion of an image for a defined time period. If there are security concerns, it is recommended that this feature be disabled by editing the etc/glance/glance-api.conf file and setting the delayed_delete option to False.","title":"Image service delayed delete feature"},
{"location":"security/security-guide/#_267","text":"OpenStack Compute has a soft-delete feature, which puts a deleted instance in a soft-delete state for a defined time period; the instance can be restored during this window. To disable the soft-delete feature, edit the etc/nova/nova.conf file and leave the reclaim_instance_interval option empty.","title":"Compute soft delete feature"},
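A minimal sketch of the two clean-up settings above, assuming the stock configuration-file locations mentioned in the text:

```ini
# /etc/glance/glance-api.conf -- do not queue image deletions
[DEFAULT]
delayed_delete = False

# /etc/nova/nova.conf -- per the text, the soft-delete window is left unset
# (reclaim_instance_interval is simply not defined) so deletions are not deferred
[DEFAULT]
# reclaim_instance_interval =
```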
{"location":"security/security-guide/#_268","text":"Note that the OpenStack ephemeral-disk-encryption feature provides a means of improving ephemeral storage privacy and isolation, both during active use and when the data is to be destroyed. As with encrypted block storage, simply deleting the encryption key effectively destroys the data. Alternative measures for providing data privacy in the creation and destruction of ephemeral storage depend somewhat on the chosen hypervisor and the OpenStack Compute plug-in. The libvirt plug-in for Compute may maintain ephemeral storage directly on a file system or in LVM. File-system storage generally does not overwrite data when it is removed, although there is a guarantee that dirty extents are not provisioned to users. When using LVM-backed, block-based ephemeral storage, the OpenStack Compute software must securely erase blocks to prevent information disclosure; there have in the past been information-disclosure vulnerabilities related to improperly erased ephemeral block storage devices. File-system storage is a more secure solution for ephemeral block storage devices than LVM, since dirty extents cannot be provisioned to users. However, it is important to be mindful that user data is not destroyed, so it is suggested to encrypt the backing file system.","title":"Compute instance ephemeral storage"},
{"location":"security/security-guide/#_269","text":"The bare-metal server driver for Compute was under development and has since moved into a separate project called ironic. At the time of this writing, ironic does not appear to address sanitization of tenant data residing on physical hardware. Additionally, tenants of a bare-metal system could modify system firmware. TPM technology, described under Secure boot, provides a solution for detecting unauthorized firmware changes.","title":"Bare-metal server sanitization"},
{"location":"security/security-guide/#_270","text":"The option exists for implementers to encrypt tenant data wherever it is stored on disk or transported over a network, such as with the OpenStack volume-encryption feature described below. This is above and beyond the general recommendation that users encrypt their own data before sending it to their provider. The importance of encrypting data on behalf of tenants is largely related to the risk assumed by a provider that an attacker could access tenant data. There may be government requirements, as well as per-policy requirements, private contracts, or even case law, in regard to private contracts with public cloud providers. A risk assessment and legal counsel are advised before choosing tenant encryption policies. Per-instance or per-object encryption is preferable over, in descending order, per-project, per-tenant, per-host, and per-cloud aggregations; this recommendation is inverse to the complexity and difficulty of implementation. Presently, in some projects it is difficult or impossible to implement encryption as loose as per-tenant; we recommend that implementers make a best effort to encrypt tenant data. Often, data encryption relates positively to the ability to reliably destroy tenant and per-instance data, simply by throwing away the keys; it should be noted that, in doing so, it becomes very important to destroy those keys in a reliable and secure manner. Opportunities to encrypt data for users are present: Object Storage objects; network data.","title":"Data encryption"},
{"location":"security/security-guide/#_271","text":"The volume-encryption feature in OpenStack supports privacy on a per-tenant basis. As of the Kilo release, the following features are supported: creation and usage of encrypted volume types, initiated through the dashboard or a command-line interface; enabling encryption and selecting parameters such as encryption algorithm and key size; volume data contained within iSCSI packets is encrypted; encrypted backups are supported if the original volume is encrypted; the dashboard indicates volume encryption status, including an indication that a volume is encrypted and the encryption parameters such as algorithm and key size; interaction with the key management service through a secure wrapper; back-end key storage is supported for enhanced security (for example, a Hardware Security Module (HSM) or a KMIP server can be used as a barbican back-end secret store).","title":"Volume encryption"},
{"location":"security/security-guide/#_272","text":"The ephemeral-disk-encryption feature addresses data privacy. The ephemeral disk is a temporary work space used by the virtual host operating system; without encryption, sensitive user information could be accessed on this disk, and vestigial information could remain after the disk is unmounted. As of the Kilo release, the following ephemeral-disk-encryption features are supported: creation and usage of encrypted LVM ephemeral disks (note: at present, the OpenStack Compute service only supports encrypting ephemeral disks in the LVM format); the Compute configuration file, nova.conf, has the following default parameters within the [ephemeral_storage_encryption] section: option “cipher = aes-xts-plain64”, which sets the cipher and mode used to encrypt ephemeral storage (NIST specifically recommends AES-XTS for disk storage; the name is shorthand for AES encryption using the XTS encryption mode, and the available ciphers depend on kernel support: at the command line, type “cryptsetup benchmark” to determine the available options and see benchmark results, or look in /proc/crypto); option “enabled = false”, which must be set to “enabled = true” to use ephemeral disk encryption; option “key_size = 512” (note that there may be a key-size limitation from the back-end key manager that could require the use of “key_size = 256”, which would only provide an AES key size of 128 bits; in addition to the encryption key required by AES, XTS requires its own “tweak key”, often expressed as a single large key, so with the 512-bit setting AES uses 256 bits and XTS uses 256 bits; see the NIST guidance); interaction with the key management service through a secure wrapper, where the key management service supports data isolation by providing ephemeral-disk-encryption keys on a per-tenant basis; back-end key storage is supported for enhanced security (for example, an HSM or a KMIP server can be used as a barbican back-end secret store); with a key management service, when an ephemeral disk is no longer needed, simply deleting the key can take the place of overwriting the ephemeral-disk storage area.","title":"Ephemeral disk encryption"},
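Pulled together, the options described above form a short nova.conf stanza; a sketch using the values from the text (the cipher is written in the lowercase form cryptsetup normally uses):

```ini
# nova.conf on compute hosts -- encrypt LVM-backed ephemeral disks
[ephemeral_storage_encryption]
enabled = true              # default is false; set to true to turn the feature on
cipher = aes-xts-plain64    # AES in XTS mode, as NIST recommends for disk storage
key_size = 512              # 256-bit AES key plus 256-bit XTS tweak key; some back-end
                            # key managers may only allow key_size = 256 (128-bit AES)
```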
{"location":"security/security-guide/#_273","text":"Object Storage (swift) supports the optional encryption of object data at rest on storage nodes. Encrypting object data is intended to mitigate the risk of user data being read if an unauthorized party were to gain physical access to a disk. Encryption of data at rest is implemented by middleware that may be included in the proxy-server WSGI pipeline. The feature is internal to a swift cluster and is not exposed through the API: clients are unaware that data is encrypted by this feature internally to the swift service, and internally encrypted data should never be returned to clients via the swift API. The following data is encrypted while at rest in swift: object content, for example the content of an object PUT request's body; the entity tag (ETag) of objects that have non-zero content; all custom user object metadata values, for example metadata sent using X-Object-Meta- prefixed headers with PUT or POST requests. Any data or metadata not included in the list above is not encrypted, including: account, container, and object names; account and container custom user metadata values; all custom user metadata names; object Content-Type values; object size; system metadata. For more information about the deployment, operation, or implementation of Object Storage encryption, see the swift developer documentation on object encryption.","title":"Object Storage objects"},
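For illustration only: the at-rest-encryption middleware mentioned above is normally switched on in the proxy server's WSGI pipeline. The filter names and the encryption_root_secret option in this sketch are assumptions drawn from the swift developer documentation the text points to, not settings this guide prescribes:

```ini
# /etc/swift/proxy-server.conf (sketch; verify against the swift documentation)
[pipeline:main]
pipeline = catch_errors ... authtoken keymaster encryption proxy-logging proxy-server

[filter:keymaster]
use = egg:swift#keymaster
# Root secret from which object encryption keys are derived; keep it secret
encryption_root_secret = CHANGE_ME_BASE64_SECRET

[filter:encryption]
use = egg:swift#encryption
```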
{"location":"security/security-guide/#_274","text":"When enabled in the operating system, hardware-acceleration features currently available in Intel and AMD processors can be used to enhance the performance of OpenStack volume encryption. Both the OpenStack volume-encryption feature and the OpenStack ephemeral-disk-encryption feature use dm-crypt to secure volume data. dm-crypt is a transparent disk-encryption capability in Linux kernel versions 2.6 and later. When volume encryption is enabled, encrypted data is sent over iSCSI to Block Storage, protecting data both in transit and at rest; with hardware acceleration, the performance impact of both encryption features is minimized. Although we recommend the OpenStack volume-encryption feature, Block Storage supports a variety of alternative back ends for providing mountable volumes, and some of them may also provide volume encryption. As there are so many back ends, and information must be obtained from each vendor, prescribing how encryption should be implemented in any one of them is outside the scope of this guide.","title":"Block Storage performance and back ends"},
{"location":"security/security-guide/#_275","text":"Tenant data for Compute can be encrypted over IPsec or other tunnels. This is not common or standard in OpenStack, but it is an option available to motivated and interested implementers. Likewise, data that is already encrypted remains encrypted as it is transferred over the network.","title":"Network data"},
{"location":"security/security-guide/#_276","text":"To address the often-mentioned concerns of tenant data privacy, and of limiting cloud provider liability, there is greater interest within the OpenStack community in making data encryption more ubiquitous. It is relatively easy for an end user to encrypt data before saving it to the cloud, and this is a viable path for tenant objects such as media files, database archives, and the like. In some instances, client-side encryption is used to encrypt data held by the virtualization technologies, which requires client interaction, such as presenting keys, to decrypt the data for future use. To secure data seamlessly and keep it accessible without burdening the client with managing its keys, OpenStack provides a key management service. Providing encryption and key management services as part of OpenStack eases data-at-rest security adoption and addresses customer concerns about privacy or misuse of data, while also limiting cloud provider liability; it can help reduce a provider's liability when handling tenant data during an incident investigation in multi-tenant public clouds. The volume-encryption and ephemeral-disk-encryption features rely on a key management service (for example, barbican) for the creation and secure storage of keys. The key manager is pluggable, to facilitate deployments that need a third-party Hardware Security Module (HSM) or the Key Management Interoperability Protocol (KMIP), which is supported by an open-source project called PyKMIP.","title":"Key management"},
{"location":"security/security-guide/#_277","text":"OpenStack.org, Welcome to barbican's developer documentation! 2014. Barbican developer documentation. oasis-open.org, OASIS Key Management Interoperability Protocol (KMIP). 2014. KMIP. PyKMIP library. Secret management.","title":"Bibliography:"},{"location":"security/security-guide/#_278","text":"One of the advantages of running instances in a virtualized environment is that it opens up new opportunities for security controls that are not typically available when deploying onto bare metal. There are several technologies that can be applied to the virtualization stack that bring improved information assurance for cloud tenants. Deployers or users of OpenStack with strong security requirements may want to consider deploying these technologies. Not all are applicable in every situation; in some cases technologies may be ruled out for use in a cloud because of prescriptive business requirements. Similarly, some technologies inspect instance data such as run state, which may be undesirable to the users of the system. In this chapter we explore these technologies and describe the situations where they can be used to enhance security for instances or the underlying hosts. We also attempt to highlight where privacy concerns may exist, including data pass-through, introspection, or the provision of entropy sources. The additional security services covered in this section are: entropy to instances; scheduling instances to nodes; trusted images; instance migrations; monitoring, alerting, and reporting; updates and patches; firewalls and other host-based security controls.","title":"Instance security management"},{"location":"security/security-guide/#_279","text":"","title":"Security services for instances"},{"location":"security/security-guide/#_280","text":"We consider entropy to refer to the quality and source of random data that is available to an instance. Cryptographic technologies typically rely heavily on randomness and require a high-quality pool of entropy to draw from. It is often hard for a virtual machine to get enough entropy to support these operations, which is referred to as entropy starvation. Entropy starvation can manifest in seemingly unrelated ways; for example, slow boot times may be caused by the instance waiting for ssh
key generation. Entropy starvation may also motivate users to use poor-quality entropy sources from within the instance, making applications running in the cloud less secure overall. Fortunately, a cloud architect can address these issues by providing a high-quality source of entropy to cloud instances. This can be done by having enough hardware random number generators (HRNG) in the cloud to support the instances. In this case, enough is somewhat domain-specific: for everyday operations, a modern HRNG is likely to produce enough entropy to support 50-100 compute nodes, while high-bandwidth HRNGs, such as the RdRand instruction available with Intel Ivy Bridge and newer processors, could potentially handle more nodes. For a given cloud, an architect needs to understand the application requirements to ensure that sufficient entropy is available. Virtio RNG is a random number generator that uses /dev/random as its source of entropy by default, but it can be configured to use a hardware RNG or a tool such as the entropy gathering daemon (EGD) to provide a way to fairly and securely distribute entropy through a distributed system. Virtio RNG is enabled using the hw_rng property of the metadata used to create the instance.","title":"Entropy to instances"},
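A minimal, hedged sketch of wiring up the hw_rng metadata mentioned above, assuming the libvirt driver and illustrative image and flavor names; the property names should be checked against the Compute metadata documentation for the deployed release:

```console
# Request a virtio-rng device for guests booted from this image
$ openstack image set --property hw_rng_model=virtio openEuler-cloud-image
# Allow and rate-limit access to the host entropy source for this flavor
$ openstack flavor set m1.secure \
    --property hw_rng:allowed=True \
    --property hw_rng:rate_bytes=24 \
    --property hw_rng:rate_period=5000
```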
{"location":"security/security-guide/#_281","text":"Before an instance is created, a host must be selected for the image to be instantiated on. This selection is made by nova-scheduler, which determines how compute and volume requests are dispatched. The FilterScheduler is the default scheduler for OpenStack Compute, although other schedulers exist (see the Scheduling section of the OpenStack Configuration Reference). It works in collaboration with filter hints to decide where an instance should be started. This host selection process allows administrators to fulfil many different security and compliance requirements. For example, depending on the cloud deployment type, one could choose to have tenant instances reside on the same hosts whenever possible if data isolation is a primary concern; conversely, one could attempt to have a tenant's instances reside on as many different hosts as possible for availability or fault-tolerance reasons. The filter scheduler falls under four main categories: Resource-based filters, which place instances based on the utilization of the hypervisor host set and can trigger on free or used properties such as RAM, IO, or CPU utilization. Image-based filters, which delegate instance creation based on the image used, such as the operating system of the VM or the type of image used. Environment-based filters, which place instances based on external details, such as within a specific IP range, across availability zones, or on the same host as another instance. Custom criteria, which delegate instance creation based on user- or administrator-provided criteria such as trusts or metadata parsing. Multiple filters can be applied at once; for example, the ServerGroupAffinity filter ensures an instance is created on a member of a specific set of hosts, while the ServerGroupAntiAffinity filter ensures that the same instance is not created on another specific set of hosts. These filters should be analyzed carefully to ensure they do not conflict with each other and result in rules that prevent the creation of instances. The GroupAffinity and GroupAntiAffinity filters conflict and should not both be enabled at the same time. The DiskFilter filter is capable of oversubscribing disk space. While this is not normally an issue, it can be a concern on thinly provisioned storage devices, and this filter should be used with well-tested quotas applied. We recommend you disable filters that parse things that are provided by users or are able to be manipulated, such as metadata.","title":"Scheduling instances to nodes"},
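As an illustrative, hedged example of the recommendation above, a deployment might keep the scheduler's filter list explicit in nova.conf so that only vetted filters run. The exact list is an assumption and must be tuned per deployment; on recent releases the option lives under [filter_scheduler]:

```ini
# /etc/nova/nova.conf (illustrative excerpt)
[filter_scheduler]
# Enable only the filters the deployment needs; omit filters that parse
# user-supplied data such as instance metadata.
enabled_filters = AvailabilityZoneFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter
```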
{"location":"security/security-guide/#_282","text":"In a cloud environment, users work with either pre-installed images or images they upload themselves. In both cases, users should be able to ensure the image they are using has not been tampered with. The ability to verify images is a fundamental imperative for security. A chain of trust is needed from the source of the image to the destination where it is used. This can be accomplished by signing images obtained from trusted sources and verifying the signature prior to use. Various ways to obtain and create verified images are discussed below, followed by a description of the image signature verification feature.","title":"Trusted images"},{"location":"security/security-guide/#_283","text":"The OpenStack documentation provides guidance on how to create and upload an image to the Image service. Additionally, it is assumed that you have a process by which you install and harden operating systems. Thus, the following items provide additional guidance on how to ensure your images are transferred securely into OpenStack. There are a variety of options for obtaining images; each has specific steps that help validate the image's provenance. The first option is to obtain boot media from a trusted source: $ mkdir -p /tmp/download_directory $ cd /tmp/download_directory $ wget http://mirror.anl.gov/pub/ubuntu-iso/CDs/precise/ubuntu-12.04.2-server-amd64.iso $ wget http://mirror.anl.gov/pub/ubuntu-iso/CDs/precise/SHA256SUMS $ wget http://mirror.anl.gov/pub/ubuntu-iso/CDs/precise/SHA256SUMS.gpg $ gpg --keyserver hkp://keyserver.ubuntu.com --recv-keys 0xFBB75451 $ gpg --verify SHA256SUMS.gpg SHA256SUMS $ sha256sum -c SHA256SUMS 2>&1 | grep OK The second option is to use the OpenStack Virtual Machine Image Guide. In this case, you will want to follow your organization's OS hardening guidelines or those provided by a trusted third party, such as the Linux STIGs. The final option is to use an automated image builder. The following example uses the Oz image builder. The OpenStack community has recently created a newer tool worth investigating: disk-image-builder. We have not evaluated this tool from a security perspective.
The following is a RHEL 6 CCE-26976-1 example that helps implement NIST 800-53 Section AC-19(d) within Oz. It is recommended to avoid the manual image building process, as it is complex and prone to error. Additionally, using an automated system such as Oz for image building, or a configuration management utility such as Chef or Puppet for post-boot image hardening, gives you the ability to produce a consistent image as well as to track compliance of your base image with its respective hardening guidelines over time. If subscribing to a public cloud service, you should check with the cloud provider for an outline of the process used to produce their default images. If the provider allows you to upload your own images, you will want to ensure that you are able to verify that your image was not modified before it is used to create an instance. To do this, refer to the following section on image signature verification, or to the following paragraph if signatures cannot be used. Images come from the Image service to the Compute service on a node. This transfer should be protected by running over TLS. Once the image is on the node, it is verified with a basic checksum and then its disk is expanded based on the size of the instance being launched. If, at a later time, the same image is launched with the same instance size on this node, it is launched from the same expanded image. Since this expanded image is not re-verified by default before launch, it is possible that it has undergone tampering. The user would not be aware of tampering unless a manual inspection of the files is performed in the resulting image.","title":"Image creation process"},
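To complement the transfer-and-checksum discussion above, here is a small, hedged sketch of uploading a hardened image and comparing the checksum recorded by the Image service against the local file. The image name and file are placeholders; note that the checksum field has historically been an MD5 digest, and newer releases also expose os_hash_algo/os_hash_value:

```console
# Record a local digest of the hardened image before uploading it
$ md5sum hardened-openEuler.qcow2
# Upload the image to the Image service
$ openstack image create --disk-format qcow2 --container-format bare \
    --file hardened-openEuler.qcow2 hardened-openEuler
# Compare the checksum Glance stored with the local digest
$ openstack image show -f value -c checksum hardened-openEuler
```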
{"location":"security/security-guide/#_284","text":"Several features related to image signing are now available in OpenStack. Starting with the Mitaka release, the Image service can verify these signed images, and, to provide a full chain of trust, the Compute service has the option to perform image signature verification prior to image boot. Successful signature validation before image boot ensures the signed image has not changed. With this feature enabled, unauthorized modification of images (for example, modifying the image to include malware or rootkits) can be detected. Administrators can enable instance signature verification by setting the verify_glance_signatures flag to True in the /etc/nova/nova.conf file. When enabled, the Compute service automatically validates a signed image when it is retrieved from the Image service; if the validation fails, the boot does not occur. The OpenStack Operations Guide provides guidance on how to create and upload a signed image, and on how to use this feature. For more information, see Adding Signed Images in the Operations Guide.","title":"Image signature verification"},
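A minimal configuration sketch of the flag named above; the section placement follows current nova documentation ([glance]), but older releases may read it from a different section, so verify against the deployed version:

```ini
# /etc/nova/nova.conf (illustrative excerpt)
[glance]
# Refuse to boot an image whose signature cannot be validated
verify_glance_signatures = True
```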
{"location":"security/security-guide/#_285","text":"OpenStack and the underlying virtualization layers provide for the live migration of images between OpenStack nodes, allowing you to seamlessly perform rolling upgrades of your OpenStack compute nodes without instance downtime. However, live migrations also carry significant risk. To understand the risks involved, the following are the high-level steps performed during a live migration: start the instance on the destination host; transfer memory; stop the guest and synchronize disks; transfer state; start the guest.","title":"Instance migrations"},{"location":"security/security-guide/#_286","text":"At various stages of the live migration process, the contents of an instance's run-time memory and disk are transmitted over the network in plain text. Consequently, there are several risks that need to be addressed when using live migration. The following non-exhaustive list details some of these risks: Denial of service (DoS): if something fails during the migration process, the instance could be lost. Data exposure: memory or disk transfers must be handled securely. Data manipulation: if memory or disk transfers are not handled securely, an attacker could manipulate user data during the migration. Code injection: if memory or disk transfers are not handled securely, an attacker could manipulate executables, either on disk or in memory, during the migration.","title":"Live migration risks"},{"location":"security/security-guide/#_287","text":"There are several methods to mitigate some of the risks associated with live migrations; the following list details some of them: disable live migration; isolated migration network; encrypted live migration.","title":"Live migration mitigations"},{"location":"security/security-guide/#_288","text":"At this time, live migration is enabled in OpenStack by default. Live migrations can be disabled by adding the following lines to the nova policy.json file: { "compute_extension:admin_actions:migrate": "!", "compute_extension:admin_actions:migrateLive": "!" }","title":"Disable live migration"},{"location":"security/security-guide/#_289","text":"As a general practice, live migration traffic should be restricted to the management security domain; see Security boundaries and threats. With live migration traffic, due to its plain-text nature and the fact that you are transferring the contents of disk and memory of a running instance, it is recommended that you further separate live migration traffic onto a dedicated network. Isolating the traffic to a dedicated network can reduce the risk of exposure.","title":"Migration network"},{"location":"security/security-guide/#_290","text":"If there is a sufficient business case for keeping live migration enabled, then libvirtd can provide encrypted tunnels for the live migrations. However, this feature is not currently exposed in either the OpenStack Dashboard or the nova-client commands, and it can only be accessed through manual configuration of libvirtd. The live migration process then changes to the following high-level steps: instance data is copied from the hypervisor to libvirtd; an encrypted tunnel is created between the libvirtd processes on the source and destination hosts; the destination libvirtd host copies the instance back to the underlying hypervisor.","title":"Encrypted live migration"},
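As a hedged illustration of the manual libvirtd-based approach described above, older nova releases expose a tunnelling knob under [libvirt] that routes migration traffic through the libvirtd connection rather than a plain TCP socket; newer releases favour native TLS for the migration stream (live_migration_with_native_tls). Treat the option name and availability as assumptions to check against your release:

```ini
# /etc/nova/nova.conf (illustrative excerpt)
[libvirt]
# Tunnel live-migration traffic through the libvirtd connection so it can be
# carried over an encrypted transport instead of an unencrypted TCP stream
live_migration_tunnelled = True
```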
{"location":"security/security-guide/#_291","text":"Because an OpenStack virtual machine is a server image able to be replicated across hosts, best practices for logging apply similarly between physical and virtual hosts. Operating system and application events should be logged, including access events to hosts and data, user additions and removals, privilege changes, and others as dictated by your environment. Ideally, you can configure these logs to be exported to a log aggregator that collects log events, correlates them for analysis, and stores them for reference or further action. One common tool for this is the ELK stack: Elasticsearch, Logstash, and Kibana. These logs should be reviewed regularly, for instance in real time by a network operations center (NOC), or, if the environment is not large enough to warrant a NOC, the logs should be subject to a recurring log review process. Many times interesting events trigger an alert that is sent to a responder for action. Frequently this alert takes the form of an email with the relevant messages. An interesting event could be a significant failure, or a known health indicator of a pending failure. Two common utilities for managing alerts are Nagios and Zabbix.","title":"Monitoring, alerting, and reporting"},
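A minimal, hedged sketch of the export-to-aggregator idea above using rsyslog forwarding; the aggregator hostname and port are placeholders, and a production setup would normally add TLS and queueing on top of this:

```
# /etc/rsyslog.d/00-forward.conf (illustrative)
# Forward every local log message to a central aggregator over TCP ("@@" = TCP)
*.* @@log-aggregator.example.com:514
```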
{"location":"security/security-guide/#_292","text":"A hypervisor runs independent virtual machines. It can run within an operating system or directly on the hardware (called bare metal). Updates to the hypervisor are not propagated down to the virtual machines; for example, if a deployment is using XenServer and has a set of Debian virtual machines, an update to XenServer will not update anything running on the Debian virtual machines. Therefore, we recommend assigning clear ownership of virtual machines to owners who are responsible for the hardening, deployment, and continued functionality of those virtual machines. We also recommend that updates be deployed on a regular schedule. These patches should be tested in an environment as closely resembling production as possible to ensure both stability and resolution of the issue behind the patch.","title":"Updates and patches"},
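For openEuler-based nodes, a hedged sketch of the routine patching step described above might rely on dnf's security errata support; plugin availability and errata metadata should be verified on the target release, and changes should first be staged in a test environment as recommended:

```console
# List pending security errata on an RPM-based node, then apply them
$ dnf updateinfo list --security
$ dnf upgrade --security
```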
{"location":"security/security-guide/#_293","text":"Most common operating systems include host-based firewalls as an additional layer of security. While we recommend that virtual machines run as few applications as possible (to the point of being single-purpose instances where possible), all applications running on a virtual machine should be profiled to determine which system resources the application needs access to, the lowest level of privilege required for it to run, and the expected network traffic that will enter and leave the virtual machine. This expected traffic should be added to the host-based firewall as allowed (or white-listed) traffic, along with any necessary logging and management communication such as SSH or RDP. All other traffic should be explicitly denied in the firewall configuration. On Linux virtual machines, the application profile above can be used in conjunction with a tool such as audit2allow to build an SELinux policy that further protects sensitive system information on most Linux distributions. SELinux uses a combination of users, policies, and security contexts to compartmentalize the resources needed for an application to run and segment them from other system resources that are not needed. OpenStack provides security groups for both hosts and the network to add defense in depth to the virtual machines in a given project. These are similar to host-based firewalls in that they allow or deny incoming traffic based on port, protocol, and address; however, security group rules are applied to incoming traffic only, while host-based firewall rules can be applied to both incoming and outgoing traffic. It is also possible for host and network security group rules to conflict and deny legitimate traffic. We recommend ensuring that security groups are configured correctly for the networking being used. See Security groups in this guide for more detail.","title":"Firewalls and other host-based security controls"},
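As a small, hedged complement to the security-group discussion above, a project could whitelist only the ingress it expects with the openstack CLI; the group name, ports, and CIDRs are illustrative:

```console
# Create a group and allow only SSH from a management range and HTTPS publicly
$ openstack security group create web-tier
$ openstack security group rule create --ingress --protocol tcp \
    --dst-port 22 --remote-ip 192.0.2.0/24 web-tier
$ openstack security group rule create --ingress --protocol tcp \
    --dst-port 443 --remote-ip 0.0.0.0/0 web-tier
```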
{"location":"security/security-guide/#_294","text":"In a cloud environment there is a mix of hardware, operating systems, hypervisors, OpenStack services, cloud-user activity such as creating instances and attaching storage, the network, and the end users using the applications running on the various instances. The basics of logging: configuration, setting log levels, the location of the log files, how to use and customize logs, and how to do centralized collection of logs are all well covered in the OpenStack Operations Guide. Forensics and incident response. Monitoring use cases. Bibliography.","title":"Monitoring and logging"},{"location":"security/security-guide/#_295","text":"Log generation and collection are important components of securely monitoring an OpenStack infrastructure. Logs provide visibility into the day-to-day actions of administrators, tenants, and guests, in addition to the activity in the compute, networking, and storage components and the other components that comprise the OpenStack deployment. Logs are not only valuable for proactive security and continuous compliance activities, but they are also a valuable information source for investigating and responding to incidents. For instance, analyzing the access logs of the Identity service or its replacement authentication system would alert us to failed logins, their frequency, the source IP, whether the events are restricted to select accounts, and other pertinent information. Log analysis supports detection. Actions may be taken to mitigate potentially malicious activity, such as blacklisting an IP address, recommending the strengthening of user passwords, or de-activating a user account if it is deemed dormant.","title":"Forensics and incident response"},{"location":"security/security-guide/#_296","text":"Event monitoring is a more proactive approach to securing an environment, providing real-time detection and response. Several tools exist which can aid in monitoring. For an OpenStack cloud instance, we need to monitor the hardware, the OpenStack services, and the cloud resource usage. The last stems from wanting to be elastic, to scale to the dynamic needs of the users.
Here are a few important use cases to consider when implementing log aggregation, analysis, and monitoring. These use cases can be implemented and monitored through various applications, tools, or scripts. There are open source and commercial solutions, and some operators develop their own in-house solutions. These tools and scripts can generate events that can then be sent to administrators through email or viewed in an integrated dashboard. It is important to consider additional use cases that may apply to your specific network and what you may consider anomalous behavior. Detecting the absence of log generation is an event of high value. Such an event would indicate a service failure or even an intruder who has temporarily switched off logging or modified the log level to hide their tracks. Application events, such as start or stop events, that were unscheduled are also events to monitor and examine for possible security implications. Operating system events on the OpenStack service machines, such as user logins or restarts, also provide valuable insight into proper and improper usage of the systems. Being able to detect the load on the OpenStack servers also enables responding by way of introducing additional servers for load balancing to ensure high availability. Other actionable events include networking bridges going down and IP tables being flushed on compute nodes, with the consequent loss of access to instances resulting in unhappy customers. To reduce security risks from orphaned instances on a user, tenant, or domain deletion in the Identity service, there is discussion to generate notifications in the system and have OpenStack components respond to these events as appropriate, such as terminating instances, disconnecting attached volumes, and reclaiming CPU and storage resources. A cloud will host many virtual instances, and monitoring these instances goes beyond hardware monitoring and log files, which may only contain CRUD events. Security monitoring controls such as intrusion detection software, antivirus software, and spyware detection and removal utilities can generate logs that show when and how an attack or intrusion took place. Deploying these tools on the cloud machines provides value and protection. Cloud users, those running instances on the cloud, may also want to run such tools on their instances.","title":"Monitoring use cases"},{"location":"security/security-guide/#_297","text":"Siwczak, Piotr, Some Practical Considerations for Monitoring in the OpenStack Cloud. 2012. blog.sflow.com, sflow: Host sFlow distributed agent. 2012. blog.sflow.com, sflow: LAN and WAN. 2009.
blog.sflow.com, sflow: Rapidly detecting large flows, sFlow vs. NetFlow/IPFIX. 2013.","title":"Bibliography"},{"location":"security/security-guide/#_298","text":"An OpenStack deployment may require compliance activities for many purposes, such as regulatory and legal requirements, customer need, privacy considerations, and security best practices. Compliance, as a function, is important for the business and its customers. Compliance means adherence to regulations, specifications, standards, and laws. It is also used to describe an organization's status regarding assessments, audits, and certifications. Compliance, when done correctly, unifies and strengthens the other security topics discussed in this guide. This chapter has several objectives: review common security principles; discuss common control frameworks and certification resources to achieve industry certifications or regulator attestations; act as a reference for auditors when evaluating OpenStack deployments; introduce privacy considerations specific to OpenStack and the cloud. Compliance overview. Security principles. Common control frameworks. Audit reference. Understanding the audit process. Determining audit scope. Phases of an audit. Internal audit. Prepare for external audit. External audit. Compliance maintenance. Compliance activities. Information Security Management System (ISMS). Risk assessment. Access and log reviews. Backup and disaster recovery. Security training. Security reviews. Vulnerability management. Data classification. Exception process. Certification and compliance statements. Commercial standards. Government standards. Privacy.","title":"Compliance"},{"location":"security/security-guide/#_299","text":"","title":"Compliance overview"},{"location":"security/security-guide/#_300","text":"Industry-standard security principles provide a baseline for compliance certifications and attestations. If these principles are considered and referenced throughout an OpenStack deployment, certification activities may be simplified.","title":"Security principles"},{"location":"security/security-guide/#_301","text":"Identify where risks exist in a cloud architecture and apply controls to mitigate them. In areas of significant concern, layered defenses provide multiple complementary controls to manage risk down to an acceptable level. For example, to ensure adequate isolation between cloud tenants, we recommend hardening QEMU, using a hypervisor with SELinux
support, implementing mandatory access control policies, and reducing the overall attack surface. The foundational principle is to harden an area of concern with multiple layers of defense such that if any one layer is compromised, other layers will exist to offer protection and minimize exposure.","title":"Layered defenses"},{"location":"security/security-guide/#_302","text":"In the event of a failure, systems should be configured to fail in a closed, secure state. For example, TLS certificate verification should fail closed by severing the network connection if the CNAME does not match the server's DNS name. Software often fails open in this situation, allowing the connection to proceed without a CNAME match, which is less secure and not recommended.","title":"Fail securely"},{"location":"security/security-guide/#_303","text":"Only grant users and system services the minimal level of access required. This access is based upon role, responsibility, and job function. This security principle of least privilege is written into several international government security policies, such as NIST 800-53 Section AC-6 within the United States.","title":"Least privilege"},{"location":"security/security-guide/#_304","text":"Systems should be segregated in such a way that if one machine or system-level service is compromised, the security of the other systems remains intact. Practically, the enablement and proper usage of SELinux helps accomplish this goal.","title":"Compartmentalize"},{"location":"security/security-guide/#_305","text":"The amount of information that can be gathered about a system and its users should be minimized.","title":"Promote privacy"},{"location":"security/security-guide/#_306","text":"Appropriate logging should be implemented to monitor for unauthorized use, for incident response, and for forensics. It is highly recommended that the selected audit subsystems be Common Criteria certified, which provides non-attestable event records in most countries.","title":"Logging capability"},{"location":"security/security-guide/#_307","text":"The following is a list of control frameworks that an organization can use to build its security controls. Cloud Security Alliance (CSA) Cloud Controls Matrix (CCM): the CSA CCM
is specifically designed to provide fundamental security principles to guide cloud vendors and to assist prospective cloud customers in assessing the overall security risk of a cloud provider. The CSA CCM provides a controls framework that is aligned across 16 security domains. The foundation of the Cloud Controls Matrix rests on its tailored relationship to other industry standards, regulations, and controls frameworks such as ISO 27001:2013, COBIT 5.0, PCI DSS v3, and the AICPA 2014 Trust Services Principles and Criteria, and it augments internal control direction for service organization control report attestations. The CSA CCM strengthens existing information security control environments by reducing security threats and vulnerabilities in the cloud, provides standardized security and operational risk management, and seeks to normalize security expectations, cloud taxonomy and terminology, and the security measures implemented in the cloud. ISO 27001/2:2013: the ISO 27001 information security standard and certification has been used for many years to evaluate and distinguish an organization's alignment with information security best practices. The standard is comprised of two parts: mandatory clauses that define the information security management system (ISMS), and Annex A, which contains a list of controls organized by domain. The information security management system preserves the confidentiality, integrity, and availability of information by applying a risk management process and gives confidence to interested parties that risks are adequately managed. Trusted security principles: Trust Services is a set of professional attestation and advisory services based on a core set of principles and criteria that address the risks and opportunities of IT-enabled systems and privacy programs. Commonly known as SOC audits, the principles define what the requirements are, and it is the organization's responsibility to define the controls that satisfy those requirements.","title":"Common control frameworks"},{"location":"security/security-guide/#_308","text":"OpenStack is innovative in many ways, however the process used to audit an OpenStack deployment is fairly common. Auditors will evaluate a process by two criteria: is the control designed effectively, and is the control operating effectively. How an auditor evaluates whether a control is designed and operating effectively is discussed in the section called Understanding the audit process.
The most common frameworks for auditing and evaluating a cloud deployment include the previously mentioned ISO 27001/2 Information Security standard, ISACA's Control Objectives for Information and Related Technology (COBIT) framework, the Committee of Sponsoring Organizations of the Treadway Commission (COSO), and the Information Technology Infrastructure Library (ITIL). It is very common for audits to include areas of focus from one or more of these frameworks. Fortunately there is a lot of overlap between the frameworks, so an organization that adopts one will be in a good position come audit time.","title":"Audit reference"},{"location":"security/security-guide/#_309","text":"Information system security compliance relies on the completion of two foundational processes: Implementation and operation of security controls. Aligning the information system with in-scope standards and regulations involves internal tasks which must be conducted before a formal assessment. Auditors may be involved at this stage to conduct gap analysis, provide guidance, and increase the likelihood of successful certification. Independent verification and validation. Demonstration to a neutral third party that system security controls are implemented and operating effectively, in compliance with in-scope standards and regulations, is required before many information systems achieve certified status. Many certifications require periodic audits to ensure continued certification, considered part of an overarching continuous monitoring practice.","title":"Understanding the audit process"},{"location":"security/security-guide/#_310","text":"Determining audit scope, specifically which controls are needed and how to design or modify an OpenStack deployment to satisfy them, should be the initial planning step. When scoping OpenStack deployments for compliance purposes, prioritize controls around sensitive services, such as command and control functions and the base virtualization technology. Compromises of these facilities may impact an entire OpenStack environment. Scope reduction helps ensure OpenStack
architects establish high-quality security controls tailored to a particular deployment; however, it is important to ensure these practices do not omit areas or features from security hardening. A common example is PCI-DSS guidelines, where payment-related infrastructure may be scrutinized for security issues, but supporting services are left ignored and vulnerable to attack. When addressing compliance, you can increase efficiency and reduce work effort by identifying common areas and criteria that apply across multiple certifications. Much of the audit principle and guideline material in this book will assist in identifying these controls; additionally, a number of external entities provide comprehensive lists. The following are some examples: The Cloud Security Alliance Cloud Controls Matrix (CCM) assists both cloud providers and consumers in assessing the overall security of a cloud provider. The CSA CCM provides a controls framework that maps to many industry-accepted standards and regulations, including ISO 27001/2, ISACA, COBIT, PCI, NIST, Jericho Forum, and NERC CIP. The SCAP Security Guide is another useful reference. This is still an emerging source, but we anticipate that it will grow into a tool with controls mappings that are more focused on the US federal government certifications and recommendations. For example, the SCAP Security Guide currently has some mappings for security technical implementation guides (STIGs) and NIST-800-53. These control mappings will help identify common control criteria across certifications and provide visibility to both auditors and auditees into problem areas within control sets for particular compliance certifications and attestations.","title":"Determining audit scope"},{"location":"security/security-guide/#_311","text":"There are four distinct phases to an audit, although most stakeholders and control owners will only participate in one or two. The four phases are planning, fieldwork, reporting, and wrap-up. Each of these phases is discussed below.
The planning phase typically occurs two weeks to six months before fieldwork begins. In this phase audit items such as the timeframe, timeline, controls to be evaluated, and control owners are discussed and finalized. Concerns about resource availability, impartiality, and costs are also resolved. The fieldwork phase is the most visible portion of the audit. This is where the auditor is onsite, interviewing the control owners, documenting the controls that are in place, and identifying any issues. It is important to note that the auditor will use a two-part process to evaluate the controls in place. The first part is evaluating the design effectiveness of the control. Here the auditor evaluates whether the control is capable of effectively preventing, or detecting and correcting, weaknesses and deficiencies. A control must pass this test to be evaluated in the second phase, because for a control that is ineffectively designed there is no need to consider whether it is operating effectively. The second part is operating effectiveness. Operating effectiveness testing determines how the control was applied, the consistency with which it was applied, and by whom or by what means it was applied. A control may depend upon other controls (indirect controls), and, if it does, additional evidence demonstrating the operating effectiveness of those indirect controls may be required for the auditor to determine the overall operating effectiveness of the control.
In the reporting phase, any issues that were identified during the fieldwork phase are validated by management. For logistical purposes, some activities such as issue validation may be performed during the fieldwork phase. Management also needs to provide remediation plans to address the issues and ensure that they do not reoccur. A draft of the overall report is circulated for review to the stakeholders and management. Agreed-upon changes are incorporated and the updated draft is sent to senior management for review and approval. Once senior management approves the report, it is finalized and distributed to executive management. Any issues are entered into the issue-tracking or risk-tracking mechanism the organization uses. The wrap-up phase is where the audit is formally terminated. At this point management begins remediation activities. Processes and notifications are used to ensure that any audit-related information is moved to a secure repository.","title":"Phases of an audit"},{"location":"security/security-guide/#_312","text":"Once the cloud is deployed, it is time for an internal audit. This is the time to compare the controls identified above with the design, features, and deployment strategies utilized in the cloud. The goal is to understand how each control is handled and where gaps exist. Document all of the findings for future reference. When auditing an OpenStack cloud it is important to appreciate the multi-tenant environment inherent in the OpenStack architecture. Some critical areas for concern include data disposal, hypervisor security, node hardening, and authentication mechanisms.","title":"Internal audit"},{"location":"security/security-guide/#_313","text":"Once the internal audit results look good, it is time to prepare for an external audit. There are several key actions to take at this stage, outlined below: Maintain good records from your internal audit. These will prove useful during the external audit so you are prepared to answer questions about mapping the compliance controls to a particular deployment. Deploy automated testing tools to ensure that the cloud remains compliant over time. Select an auditor.
\u9009\u62e9\u5ba1\u8ba1\u5e08\u53ef\u80fd\u5177\u6709\u6311\u6218\u6027\u3002\u7406\u60f3\u60c5\u51b5\u4e0b\uff0c\u60a8\u6b63\u5728\u5bfb\u627e\u5177\u6709\u4e91\u5408\u89c4\u6027\u5ba1\u6838\u7ecf\u9a8c\u7684\u4eba\u3002OpenStack\u7ecf\u9a8c\u662f\u53e6\u4e00\u5927\u4f18\u52bf\u3002\u901a\u5e38\uff0c\u6700\u597d\u54a8\u8be2\u7ecf\u5386\u8fc7\u6b64\u8fc7\u7a0b\u7684\u4eba\u8fdb\u884c\u8f6c\u8bca\u3002\u6210\u672c\u53ef\u80fd\u4f1a\u56e0\u53c2\u4e0e\u8303\u56f4\u548c\u6240\u8003\u8651\u7684\u5ba1\u8ba1\u516c\u53f8\u800c\u6709\u5f88\u5927\u5dee\u5f02\u3002","title":"\u51c6\u5907\u5916\u90e8\u5ba1\u8ba1"},{"location":"security/security-guide/#_314","text":"\u8fd9\u662f\u6b63\u5f0f\u7684\u5ba1\u8ba1\u8fc7\u7a0b\u3002\u5ba1\u8ba1\u5458\u5c06\u6d4b\u8bd5\u7279\u5b9a\u8ba4\u8bc1\u8303\u56f4\u5185\u7684\u5b89\u5168\u63a7\u5236\u63aa\u65bd\uff0c\u5e76\u8981\u6c42\u63d0\u4f9b\u8bc1\u636e\u8981\u6c42\uff0c\u4ee5\u8bc1\u660e\u8fd9\u4e9b\u63a7\u5236\u63aa\u65bd\u5728\u5ba1\u8ba1\u7a97\u53e3\u5185\u4e5f\u5df2\u5230\u4f4d\uff08\u4f8b\u5982\uff0cSOC 2 \u5ba1\u8ba1\u901a\u5e38\u5728 6-12 \u4e2a\u6708\u5185\u8bc4\u4f30\u5b89\u5168\u63a7\u5236\u63aa\u65bd\uff09\u3002\u4efb\u4f55\u63a7\u5236\u5931\u8d25\u90fd\u4f1a\u88ab\u8bb0\u5f55\u4e0b\u6765\uff0c\u5e76\u5c06\u8bb0\u5f55\u5728\u5916\u90e8\u5ba1\u8ba1\u5e08\u7684\u6700\u7ec8\u62a5\u544a\u4e2d\u3002\u6839\u636e OpenStack \u90e8\u7f72\u7684\u7c7b\u578b\uff0c\u5ba2\u6237\u53ef\u80fd\u4f1a\u67e5\u770b\u8fd9\u4e9b\u62a5\u544a\uff0c\u56e0\u6b64\u907f\u514d\u63a7\u5236\u5931\u8d25\u975e\u5e38\u91cd\u8981\u3002\u8fd9\u5c31\u662f\u4e3a\u4ec0\u4e48\u5ba1\u8ba1\u51c6\u5907\u5982\u6b64\u91cd\u8981\u7684\u539f\u56e0\u3002","title":"\u5916\u90e8\u5ba1\u8ba1"},{"location":"security/security-guide/#_315","text":"\u8be5\u8fc7\u7a0b\u4e0d\u4f1a\u56e0\u5355\u4e00\u7684\u5916\u90e8\u5ba1\u8ba1\u800c\u7ed3\u675f\u3002\u5927\u591a\u6570\u8ba4\u8bc1\u90fd\u9700\u8981\u6301\u7eed\u7684\u5408\u89c4\u6d3b\u52a8\uff0c\u8fd9\u610f\u5473\u7740\u8981\u5b9a\u671f\u91cd\u590d\u5ba1\u6838\u8fc7\u7a0b\u3002\u6211\u4eec\u5efa\u8bae\u5c06\u81ea\u52a8\u5408\u89c4\u6027\u9a8c\u8bc1\u5de5\u5177\u96c6\u6210\u5230\u4e91\u4e2d\uff0c\u4ee5\u786e\u4fdd\u5176\u59cb\u7ec8\u5408\u89c4\u3002\u9664\u4e86\u5176\u4ed6\u5b89\u5168\u76d1\u63a7\u5de5\u5177\u4e4b\u5916\uff0c\u8fd8\u5e94\u8be5\u8fd9\u6837\u505a\u3002\u8bf7\u8bb0\u4f4f\uff0c\u76ee\u6807\u65e2\u662f\u5b89\u5168\u6027\uff0c\u4e5f\u662f\u5408\u89c4\u6027\u3002\u5982\u679c\u5728\u4e0a\u8ff0\u4efb\u4f55\u4e00\u9879\u65b9\u9762\u90fd\u5931\u8d25\uff0c\u5c06\u4f7f\u672a\u6765\u7684\u5ba1\u8ba1\u53d8\u5f97\u975e\u5e38\u590d\u6742\u3002","title":"\u5408\u89c4\u6027\u7ef4\u62a4"},{"location":"security/security-guide/#_316","text":"\u6709\u8bb8\u591a\u6807\u51c6\u6d3b\u52a8\u5c06\u6781\u5927\u5730\u5e2e\u52a9\u5408\u89c4\u8fc7\u7a0b\u3002\u672c\u7ae0\u6982\u8ff0\u4e86\u4e00\u4e9b\u6700\u5e38\u89c1\u7684\u5408\u89c4\u6027\u6d3b\u52a8\u3002\u8fd9\u4e9b\u5e76\u4e0d\u662fOpenStack\u6240\u7279\u6709\u7684\uff0c\u4f46\u662f\u672c\u4e66\u4e2d\u63d0\u4f9b\u4e86\u76f8\u5173\u7ae0\u8282\u7684\u53c2\u8003\u8d44\u6599\uff0c\u4f5c\u4e3a\u6709\u7528\u7684\u4e0a\u4e0b\u6587\u3002","title":"\u5408\u89c4\u6d3b\u52a8"},{"location":"security/security-guide/#isms","text":"\u4fe1\u606f\u5b89\u5168\u7ba1\u7406\u7cfb\u7edf \uff08ISMS\uff09 \u662f\u7ec4\u7ec7\u521b\u5efa\u548c\u7ef4\u62a4\u7684\u4e00\u5957\u5168\u9762\u7684\u7b56\u7565\u548c\u6d41\u7a0b\uff0c\u7528\u4e8e\u7ba1\u7406\u4fe1\u606f\u8d44\u4ea7\u7684\u98ce\u9669\u3002\u4e91\u90e8\u7f72\u6700\u5e38\u89c1\u7684 ISMS 
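The automated compliance verification mentioned above can take many forms. As one hedged illustration (not a recommendation of a specific benchmark), a SCAP scanner such as OpenSCAP can evaluate each node against a hardening profile on a schedule and archive the results as audit evidence; the datastream file and profile name below are placeholders for whatever benchmark your compliance program actually uses.

```bash
# Illustrative only: evaluate this host against a SCAP profile and keep the
# results as audit evidence. Replace the datastream file and profile ID with
# the benchmark actually adopted by your compliance program.
sudo oscap xccdf eval \
    --profile xccdf_org.ssgproject.content_profile_standard \
    --results /var/log/compliance/oscap-results-$(date +%F).xml \
    --report  /var/log/compliance/oscap-report-$(date +%F).html \
    /usr/share/xml/scap/ssg/content/ssg-example-ds.xml
```

Running such a check from cron or a CI job, and retaining the generated result files, is one straightforward way to produce the continuous-compliance evidence that external auditors ask for.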
## Compliance activities

There are a number of standard activities that will greatly assist with the compliance process. This chapter outlines some of the most common compliance activities. They are not specific to OpenStack, but references to the relevant sections of this book are provided as useful context.

### Information Security Management System (ISMS)

An Information Security Management System (ISMS) is a comprehensive set of policies and processes that an organization creates and maintains to manage risk to information assets. The most common ISMS for cloud deployments is ISO/IEC 27001/2, which creates a solid foundation of security controls and practices for achieving more stringent compliance certifications. The standard was updated in 2013 to reflect the growing use of cloud services and to place more emphasis on measuring and evaluating how well an organization's ISMS is performing.

### Risk assessment

A risk assessment framework identifies risks within an organization or service, assigns ownership of those risks, and defines implementation and mitigation strategies. Risks apply to all areas of the service, from technical controls to environmental disaster scenarios and human factors, for example a malicious insider. Risks can be rated using a variety of mechanisms, for example likelihood versus impact. An OpenStack deployment risk assessment can include control gaps.

### Access and log reviews

Periodic access and log reviews are required to ensure authentication, authorization, and accountability in a service deployment. Specific guidance for OpenStack on these topics is discussed in depth in the monitoring and logging documentation.

The OpenStack Identity service supports Cloud Auditing Data Federation (CADF) notifications, providing auditing data in support of security, operational, and business-process compliance. For more information, see the Keystone developer documentation.
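As a concrete sketch of the CADF support mentioned above, the Identity service can be configured to emit its audit notifications in CADF format on the message bus; the snippet below is a minimal illustration, and the exact option names and sections should be confirmed against the Keystone documentation for the release you deploy.

```ini
# keystone.conf (illustrative sketch): emit audit events in CADF format.
[DEFAULT]
notification_format = cadf

[oslo_messaging_notifications]
# Publish notifications on the message bus so they can be collected by a
# logging/audit pipeline and retained as compliance evidence.
driver = messagingv2
```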
### Backup and disaster recovery

Disaster recovery (DR) and business continuity planning (BCP) plans are common requirements for ISMS and compliance activities. These plans must be periodically tested as well as documented. In OpenStack, the key areas are found in the management security domain, and anywhere that a single point of failure (SPOF) can be identified.

### Security training

Annual, role-specific security training is a mandatory requirement for almost all compliance certifications and attestations. To optimize the effectiveness of security training, a common approach is to provide role-specific training, for example separate training for developers, operators, and non-technical staff. Additional cloud security or OpenStack security training based on this hardening guide would be ideal.

### Security reviews

As OpenStack is a popular open source project, much of the code base and architecture has been reviewed by individual contributors, organizations, and enterprises. This can be advantageous from a security perspective; however, the need for security reviews is still a critical consideration for service providers, as deployments vary and security is not always the primary concern of contributors. A comprehensive security review process may include architectural review, threat modeling, source code analysis, and penetration testing. There are many publicly available techniques and recommendations for carrying out security reviews. A well-tested example is the Microsoft SDL, created as part of Microsoft's Trustworthy Computing Initiative.

### Vulnerability management

Security updates are critical to any IaaS deployment, whether private or public. Vulnerable systems expand the attack surface and are obvious targets for attackers. Common scanning technologies and vulnerability notification services can help mitigate this threat. It is important that scans are authenticated and that mitigation strategies extend beyond simple perimeter hardening. Multi-tenant architectures such as OpenStack are particularly prone to hypervisor vulnerabilities, making this a critical part of the vulnerability management system.
### Data classification

Data classification defines a method for classifying and handling information, often to protect customer information from accidental or deliberate theft, loss, or inappropriate disclosure. Most commonly this involves classifying information as sensitive or non-sensitive, or as personally identifiable information (PII). Depending on the context of the deployment, various other classification criteria may apply (government, healthcare, and so on). The underlying principle is that data classifications are clearly defined and in use. The most common protective mechanisms include industry-standard encryption technologies.

### Exception process

An exception process is an important component of an ISMS. When certain actions are not compliant with the security policies that an organization has defined, they must be documented, with appropriate justification, a description of the deviation, and mitigation details, and they must be signed off by the appropriate authorities. OpenStack default configurations vary in how well they meet various compliance criteria; areas that fail to meet compliance requirements should be documented, and potential fixes should be considered for contribution back to the community.

## Certification and compliance statements

Compliance and security are not exclusive; they must be addressed together. An OpenStack deployment is unlikely to satisfy compliance requirements without security hardening. The sections below give OpenStack architects a basic understanding of, and guidance toward, achieving compliance against commercial and government certifications and standards.

### Commercial standards

For commercial deployments of OpenStack, we recommend SOC 1/2 combined with ISO 27001/2 as a starting point for OpenStack certification activities. The security activities mandated by these certifications lay a foundation of security best practices and common control criteria that can assist with achieving more stringent compliance activities, including government attestations and certifications.

After these initial certifications are complete, the remaining certifications are more deployment-specific. For example, clouds processing credit card transactions need PCI-DSS, clouds storing health care information require HIPAA, and clouds within the federal government may require FedRAMP/FISMA and ITAR certifications.
### SOC 1 (SSAE 16) / ISAE 3402

Service Organization Controls (SOC) criteria are defined by the American Institute of Certified Public Accountants (AICPA). SOC controls assess relevant financial statements and attestations of a service provider, such as compliance with the Sarbanes-Oxley Act. SOC 1 replaced the Statement on Auditing Standards No. 70 (SAS 70) Type II report. These controls commonly include physical data centers in scope.

There are two types of SOC 1 reports:

- Type 1 – reports on the fairness of the presentation of management's description of the service organization's system, and the suitability of the design of the controls to achieve the related control objectives included in the description, as of a specified date.
- Type 2 – reports on the fairness of the presentation of management's description of the service organization's system, and the suitability of the design and operating effectiveness of the controls to achieve the related control objectives included in the description, throughout a specified period.

For more details, see the AICPA Report on Controls at a Service Organization Relevant to User Entities' Internal Control over Financial Reporting.

### SOC 2

Service Organization Controls (SOC) 2 is a self-attestation of the controls that affect the security, availability, and processing integrity of the systems a service organization uses to process users' data, and the confidentiality and privacy of the information processed by those systems. Examples of users are those responsible for governance of the service organization, customers of the service organization, regulators, business partners, suppliers, and others who have an understanding of the service organization and its controls.

There are two types of SOC 2 reports:

- Type 1 – reports on the fairness of the presentation of management's description of the service organization's system, and the suitability of the design of the controls to achieve the related control objectives included in the description, as of a specified date.
- Type 2 – reports on the fairness of the presentation of management's description of the service organization's system, and the suitability of the design and operating effectiveness of the controls to achieve the related control objectives included in the description, throughout a specified period.

For more details, see the AICPA Report on Controls at a Service Organization Relevant to Security, Availability, Processing Integrity, Confidentiality, or Privacy.

### SOC 3

Service Organization Controls (SOC) 3 is a trust services report for service organizations. These reports are designed to meet the needs of users who want assurance about the controls at a service organization related to security, availability, processing integrity, confidentiality, or privacy, but who do not have the knowledge necessary to make effective use of a SOC 2 report. They are prepared using the AICPA / Canadian Institute of Chartered Accountants (CICA) Trust Services Principles, Criteria, and Illustrations for Security, Availability, Processing Integrity, Confidentiality, and Privacy. Because SOC 3 reports are general-use reports, they can be freely distributed or posted on a website as a seal.

For more details, see the AICPA Trust Services Report for Service Organizations.

### ISO 27001/2 certification

The ISO/IEC 27001/2 standards replace BS7799-2 and specify an Information Security Management System (ISMS). An ISMS is a comprehensive set of policies and processes that an organization creates and maintains to manage risk to information assets. These risks are based on the confidentiality, integrity, and availability (CIA) of user information. The CIA security triad has been used as a foundation for much of this book.

For more details, see ISO 27001.

### HIPAA / HITECH

The Health Insurance Portability and Accountability Act (HIPAA) is a United States congressional act that governs the collection, storage, use, and destruction of patient health records. The act states that Protected Health Information (PHI) must be rendered "unusable, unreadable, or indecipherable" to unauthorized persons, and that encryption of data "at rest" and "in flight" should be addressed.

HIPAA is not a certification; rather, it is a guide for protecting health care data. Similar to PCI-DSS, the most important concern with both PCI and HIPAA is that a breach of credit card information or health data does not occur. In the instance of a breach, the cloud provider will be scrutinized for compliance with PCI and HIPAA controls. If proven compliant, the provider can be expected to promptly implement remedial controls, meet its breach-notification responsibilities, and bear significant expenditure on additional compliance activities. If not compliant, the cloud provider can expect on-site audit teams, fines, potential loss of its merchant ID (PCI), and major reputational impact.

Users or organizations that possess PHI must support HIPAA requirements and are HIPAA covered entities. If an entity intends to use a service, or in this case an OpenStack cloud, that might use, store, or have access to that PHI, then a Business Associate Agreement (BAA) must be signed. The BAA is a contract between the HIPAA covered entity and the OpenStack service provider that requires the provider to handle that PHI in accordance with HIPAA requirements. If the service provider does not handle the PHI properly, for example through security controls and hardening, it is subject to HIPAA fines and penalties.

OpenStack architects interpret and respond to HIPAA statements, with data encryption remaining a core practice. Currently this requires that any protected health information contained within an OpenStack deployment be encrypted with industry-standard encryption algorithms. Potential future OpenStack projects, such as object encryption, will make it easier to follow the HIPAA guidelines.

For more details, see the Health Insurance Portability and Accountability Act.

### PCI-DSS

The Payment Card Industry Data Security Standard (PCI DSS) is defined by the Payment Card Industry Standards Council and was created to increase controls around cardholder data in order to reduce credit card fraud. Annual compliance validation is performed either by an external Qualified Security Assessor (QSA), who creates a Report on Compliance (ROC), or through a Self-Assessment Questionnaire (SAQ), depending on the volume of cardholder transactions.

OpenStack deployments that store, process, or transmit payment card details are in scope for PCI-DSS. All OpenStack components that are not properly segmented from systems or networks that handle payment data fall under the PCI-DSS guidelines. Segmentation in the context of PCI-DSS does not support multi-tenancy; it requires physical separation (host/network).

For more details, see the PCI security standards.
### Government standards

### FedRAMP

"The Federal Risk and Authorization Management Program (FedRAMP) is a government-wide program that provides a standardized approach to security assessment, authorization, and continuous monitoring for cloud products and services." NIST 800-53 is the basis for both FISMA and FedRAMP, the latter of which mandates security controls specifically selected to provide protection in cloud environments. FedRAMP can be extremely intensive, both in the specificity of its security controls and in the volume of documentation required to meet government standards.

For more details, see FedRAMP.

### ITAR

The International Traffic in Arms Regulations (ITAR) is a set of United States government regulations that control the export and import of defense-related articles and services on the United States Munitions List (USML), along with associated technical data. ITAR is often treated by cloud providers as an "operational alignment" rather than a formal certification. It typically involves implementing a segregated cloud environment following practices based on the NIST 800-53 framework, as per FISMA requirements, complemented with additional controls restricting access to "U.S. persons" only, together with background screening.

For more details, see the International Traffic in Arms Regulations (ITAR).

### FISMA

The Federal Information Security Management Act requires that government agencies create a comprehensive plan to implement numerous government security standards; it was enacted within the E-Government Act of 2002. FISMA outlines a process, drawing on multiple NIST publications, for preparing an information system to store and process government data.

This process is broken down into three primary categories:

- System categorization: the information system receives a security category as defined in Federal Information Processing Standards Publication 199 (FIPS 199). These categories reflect the potential impact of a compromise of the system.
- Control selection: based on the system security category defined in FIPS 199, the organization uses FIPS 200 to identify specific security control requirements for the information system. For example, if a system is categorized as "moderate", a requirement mandating "secure passwords" may be introduced.
- Control tailoring: once the system security controls are identified, an OpenStack architect uses NIST 800-53 to extract a tailored control selection, for example specifying what constitutes a "secure password".
### Privacy

Privacy is an increasingly important element of a compliance program. Businesses are being held to a higher standard by their customers, who have an increasing interest in understanding how their data is treated from a privacy perspective.

An OpenStack deployment will likely need to demonstrate compliance with an organization's privacy policy, and with the U.S.–E.U. Safe Harbor framework, the ISO/IEC 29100:2011 privacy framework, or other privacy-specific guidelines. In the U.S., the American Institute of Certified Public Accountants (AICPA) has defined ten privacy areas of focus; OpenStack deployments within a commercial environment may wish to attest to some or all of these principles.

To aid OpenStack architects in protecting personal data, we recommend reviewing NIST publication 800-122, "Guide to Protecting the Confidentiality of Personally Identifiable Information (PII)". This guide steps through the process of protecting: "...any information about an individual maintained by an agency, including (1) any information that can be used to distinguish or trace an individual's identity, such as name, social security number, date and place of birth, mother's maiden name, or biometric records; and (2) any other information that is linked or linkable to an individual, such as medical, educational, financial, and employment information..."

Comprehensive privacy management requires significant preparation, thought, and investment. Additional complications are introduced when building global OpenStack clouds, for example navigating the differences between U.S. privacy law and the more restrictive E.U. privacy laws. Extra care is also needed when dealing with sensitive PII, which may include information such as credit card numbers or medical records. This sensitive data is subject not only to privacy laws but also to regulatory and government requirements. By following established best practices, including those published by governments, a comprehensive privacy management policy can be created and practised for OpenStack deployments.

## Security review

The goal of the OpenStack community security review is to identify weaknesses in the design or implementation of OpenStack projects. While such weaknesses are rare, they could have disastrous effects on the security of an OpenStack deployment, so work should be undertaken to minimize the likelihood of these defects appearing in released projects. During a security review, the following should be understood and documented:

- All entry points into the system
- Assets at risk
- Where data is persisted
- How data travels between components of the system
- Data formats and transformations
- External dependencies of the project
- An agreed set of findings and/or defects
- How the project interacts with its external dependencies

A common reason for performing a security review on an OpenStack deliverable repository is to assist with Vulnerability Management Team (VMT) oversight. The OpenStack VMT lists the repositories under its oversight, for which the receipt and disclosure of vulnerability reports is managed by the VMT. While not a strict requirement, some form of security review, audit, or threat analysis helps everyone identify the areas of a system that are more prone to vulnerabilities, and address them before they become a problem for users.

The OpenStack VMT suggests that an architectural review of the project's recommended deployment is an appropriate form of security review, balancing the need for review against the resource requirements of a project at OpenStack's scale. An architectural security review is also commonly referred to as a threat analysis, security analysis, or threat modeling. In the context of an OpenStack security review, these terms are synonymous for an architectural security review, which can identify defects in the design of a project or reference architecture and may lead to further investigative work to verify parts of the implementation.

Security reviews are expected to be the normal route for new projects and for cases where a third party has not performed a security review or is unable to share its results. Information for projects that require a security review will be provided in the forthcoming security review process.

Where a third party has already performed a security review, or a project prefers to have a third party perform its review, information on how to obtain the output of that third-party review and submit it for validation will be provided in the forthcoming third-party security review process.

In either case the requirement for documentation artifacts is similar: the project must provide an architecture diagram of a best-practice deployment. While strongly advised as part of every team's development cycle, vulnerability scanning and static analysis scans are not sufficient as evidence for a third-party review.

Architecture page guidance covers:

- Title, version information, contacts
- Project description and purpose
- Primary users and use cases
- External dependencies and associated security assumptions
- Components
- Service architecture diagram
- Data assets
- Data asset impact analysis
- Interfaces
- Resources

### Architecture page guidance

The purpose of the architecture page is to document the architecture, purpose, and security controls of a service or project. It should document the best-practice deployment of that project.

The architecture page has several key sections, each of which is explained in more detail below:

- Title, version information, contacts
- Project description and purpose
- Primary users and use cases
- External dependencies and associated security assumptions
- Components
- Architecture diagram
- Data assets
- Data asset impact analysis
- Interfaces
### Title, version information, contacts

This section adds a title to the architecture page, gives its review status (draft, ready for review, reviewed), and captures the release and version of the project (if relevant). It also records the project's PTL, the project architects responsible for producing the architecture page and diagrams and completing the review (this may or may not be the PTL), and the security reviewers.

### Project description and purpose

This section contains a short description of the project to introduce the service to a third party. It should be a paragraph or two, and can be cut and pasted from the wiki or other documentation. Include links to relevant presentations and further documentation if available.

For example:

"Anchor is a public key infrastructure (PKI) service that automates the certificate issuance decision using automated certificate request validation. Certificates are issued for short periods (typically 12–48 hours), which avoids the flawed revocation process associated with CRLs and OCSP."

### Primary users and use cases

A list of the expected primary users of the implemented architecture and their use cases. A "user" can be a party within OpenStack or another service.

For example:

- End users will use the system to store sensitive data such as passwords, encryption keys, and so on.
- Cloud administrators will use the administrative API to manage resource quotas.

### External dependencies and associated security assumptions

External dependencies are items required for the operation of the service that are outside of its control, and that could affect the service if they were compromised or became unavailable. These items are typically outside the control of the developers, but within the control of the deployers, or they may be operated by third parties. Devices should be considered external dependencies.

For example:

- The Nova compute service depends on an external authentication and authorization service. In a typical deployment this dependency is fulfilled by the keystone service.
- Barbican depends on the use of Hardware Security Module (HSM) devices.

### Components

A list of the components of the deployed project, excluding external entities. Each component should be named, given a brief description of its purpose, and tagged with the primary technologies it uses (for example Python, MySQL, RabbitMQ).

For example:

- keystone listener process (Python): a Python process that consumes keystone events published by the keystone service.
- Database (MySQL): a MySQL database used to store barbican state data related to its hosted entities and their metadata.

### Service architecture diagram

The architecture diagram shows the logical layout of the system so that a security reviewer can work through the architecture with the project team. It is a logical diagram showing how components interact, how they connect to external entities, and where communication crosses trust boundaries. More information about the architecture diagram, including a symbol key, will be given in the forthcoming architecture diagram guidance. Diagrams can be drawn in any tool that can produce a diagram using the symbols from the key, but draw.io is strongly recommended.

This example shows the barbican architecture diagram.

### Data assets

Data assets are user data, high-value data, configuration items, authorization tokens, or other items that an attacker might target. The set of data items varies from project to project, but in general they should be treated as categories that are critical to the expected operation of the project. The level of detail required depends somewhat on context. Data can often be grouped, such as "user data", "secret data", or "configuration files", but can also be singular, such as "administrator identity token", "user identity token", or "database configuration file".

Data assets should include a statement of where the asset is persisted.

For example:

- Secret data (passwords, encryption keys, RSA keys) – retained in the database [PKCS#11] or in an HSM [KMIP] or [KMIP, Dogtag]
- RBAC rule set – retained in policy.json
- RabbitMQ credentials – retained in barbican.conf
- keystone event queue credentials – retained in barbican.conf
- Middleware configuration – retained in the paste .ini file
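For illustration, an RBAC rule set of the kind listed above is persisted as a small JSON policy file mapping API actions to check strings; the rule and action names below are hypothetical placeholders rather than any project's actual defaults.

```json
{
    "admin": "role:admin",
    "creator": "role:creator",
    "secret:get": "rule:admin or rule:creator",
    "secret:delete": "rule:admin"
}
```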
### Data asset impact analysis

The data asset impact analysis breaks down the impact of a loss of confidentiality, integrity, or availability for each data asset. The project architects should attempt to complete this, as they understand their project in the most detail, but the OpenStack Security Project (OSSP) will work through it with the project during the security review and may add or update impact details.

For example:

- RabbitMQ credentials:
  - Integrity failure impact: barbican and its workers could no longer access the queue. Denial of service.
  - Confidentiality failure impact: an attacker could add new jobs to the queue, which would be executed by the workers. An attacker could exhaust a user's quota. Denial of service; users would be unable to create genuine secrets.
  - Availability failure impact: without access to the queue, barbican could no longer create new keys.
- Keystone credentials:
  - Integrity failure impact: barbican would be unable to verify user credentials and would fail. Denial of service.
  - Confidentiality failure impact: a malicious user could abuse other OpenStack services (depending on the keystone role configuration), but barbican would not be affected. If the service account used for token validation also has barbican admin privileges, a malicious user could manipulate barbican admin functions.
  - Availability failure impact: barbican would be unable to verify user credentials and would fail. Denial of service.

### Interfaces

The interface list captures the interfaces that are in scope for the review. This includes connections between modules that cross trust boundaries on the architecture diagram, or that do not use industry-standard encryption protocols (such as TLS or SSH). For each interface, the following information is captured:

- The protocol used
- Any data assets transmitted over the interface
- Information about the authentication used for connections to the interface
- A brief statement of the purpose of the interface

Each record takes the form:

From > To [transport]: assets in flight, authentication?, description

For example:

Client > API process [TLS]: Assets in flight: user keystone credentials, plaintext secrets, HTTP verbs, secret IDs, paths. Access to keystone credentials or plaintext secrets is considered a complete security failure of the system – this interface must have strong confidentiality and integrity controls.
### Resources

List resources relevant to the project, such as wiki pages describing its deployment and usage, and links to the code repositories and relevant presentations.

## Security checklists

- Identity service checklist
- Dashboard checklist
- Compute service checklist
- Block Storage service checklist
- Shared File Systems service checklist
- Networking service checklist
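Most items on these checklists are of the form "verify that a configuration file or option has the expected ownership, permissions, or value". As a hedged example of how such a check is typically run (the file, owner, and mode shown follow the style of the upstream Identity service checklist, but should be confirmed against the checklist for your release):

```bash
# Identity-service-style check (illustrative): configuration files should be
# owned by the service user and group and must not be world-readable.
stat -L -c "%U %G %a" /etc/keystone/keystone.conf
# Expected output: keystone keystone 640 (or stricter)
```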
## Appendix

- Community support
- Glossary

### Community support

The following resources are available to help you run and use OpenStack. The OpenStack community constantly improves and adds to the main features of OpenStack, but if you have any questions, do not hesitate to ask. Use the following resources to get OpenStack support and troubleshoot your installations.

### Documentation

For the available OpenStack documentation, see docs.openstack.org.

The following guides explain how to install a proof-of-concept OpenStack cloud and its associated components:

- Rocky Installation Guides

The following books explain how to configure and run an OpenStack cloud:

- Architecture Design Guide
- Rocky Administrator Guides
- Rocky Configuration Guides
- Rocky Networking Guide
- High Availability Guide
- Security Guide
- Virtual Machine Image Guide

The following book explains how to use the command-line clients:

- Rocky API Bindings

The following documentation provides reference and guidance information for the OpenStack APIs:

- API Documentation

The following guide provides information on how to contribute to the OpenStack documentation:

- Documentation Contributor Guide

### OpenStack wiki

The OpenStack wiki contains a broad range of topics, but some of the information can be difficult to find or is a few pages deep. Fortunately, the wiki search feature enables you to search by title or content. If you search for specific information, such as about networking or OpenStack Compute, you can find a large amount of relevant material. More is being added all the time, so be sure to check back often. You can find the search box in the upper-right corner of any OpenStack wiki page.

### Launchpad bugs area

The OpenStack community values your set-up and testing efforts and wants your feedback. To log a bug, you must sign up for a Launchpad account. You can view existing bugs and report bugs in the Launchpad Bugs area. Use the search feature to determine whether the bug has already been reported or even already fixed. If it still seems like your bug is unreported, fill out a bug report.

Some tips:

- Give a clear, concise summary.
- Provide as much detail as possible in the description. Paste in your command output or stack traces, links to screenshots, and any other information that might be useful.
- Be sure to include the software and package versions that you are using, especially if you are using a development branch, such as "Kilo release" vs git commit bc79c3ecc55929bac585d04a03475b72e06a3208.
- Any deployment-specific information is helpful, such as whether you are using Ubuntu 14.04 or performing a multi-node installation.

The following Launchpad Bugs areas are available:

- Bugs: OpenStack Block Storage (cinder)
- Bugs: OpenStack Compute (nova)
- Bugs: OpenStack Dashboard (horizon)
- Bugs: OpenStack Identity (keystone)
- Bugs: OpenStack Image service (glance)
- Bugs: OpenStack Networking (neutron)
- Bugs: OpenStack Object Storage (swift)
- Bugs: Application Catalog (murano)
- Bugs: Bare Metal service (ironic)
- Bugs: Clustering service (senlin)
- Bugs: Container Infrastructure Management service (magnum)
- Bugs: Data Processing service (sahara)
- Bugs: Database service (trove)
- Bugs: DNS service (designate)
- Bugs: Key Manager service (barbican)
- Bugs: Monitoring (monasca)
- Bugs: Orchestration (heat)
- Bugs: Rating (cloudkitty)
- Bugs: Shared file systems (manila)
- Bugs: Telemetry (ceilometer)
- Bugs: Telemetry v3 (gnocchi)
- Bugs: Workflow service (mistral)
- Bugs: Messaging service (zaqar)
- Bugs: Container service (zun)
- Bugs: OpenStack API Documentation (developer.openstack.org)
- Bugs: OpenStack Documentation (docs.openstack.org)

### Documentation feedback

To provide feedback on the documentation, join our IRC channel, #openstack-doc, on the OFTC IRC network, or report a bug in Launchpad and choose the particular project that the documentation belongs to.
### OpenStack IRC channel

The OpenStack community lives in the #openstack IRC channel on the OFTC network. You can hang out, ask questions, or get immediate feedback for urgent and pressing issues. To install an IRC client or use a browser-based client, go to https://webchat.oftc.net/. You can also use Colloquy (Mac OS X), mIRC (Windows), or XChat (Linux). When you are in the IRC channel and want to share code or command output, the generally accepted method is to use a paste bin; the OpenStack project has a Paste site. Just paste your longer amounts of text or logs into the web form and you will receive a URL that you can paste into the channel. The OpenStack IRC channel is #openstack on irc.oftc.net. You can find a list of all OpenStack IRC channels on the IRC page on the wiki.

### OpenStack mailing lists

A great way to get answers and insights is to post your question or problematic scenario to the OpenStack mailing list. You can learn from and help others who might have similar issues. To subscribe or view the archives, go to the general OpenStack mailing list. If you are interested in other mailing lists for specific projects or development, refer to Mailing Lists.

### OpenStack distribution packages

The following Linux distributions provide community-supported packages for OpenStack:

- CentOS, Fedora, and Red Hat Enterprise Linux: https://www.rdoproject.org/
- openSUSE and SUSE Linux Enterprise Server: https://en.opensuse.org/Portal:OpenStack
- Ubuntu: https://wiki.ubuntu.com/OpenStack/CloudArchive

### Glossary

This glossary offers a list of terms and definitions for OpenStack-related concepts.

To add to the OpenStack glossary, clone the openstack/openstack-manuals repository and update the source file doc/common/glossary.rst through the OpenStack contribution process.
The glossary continues with the sections 0–9 (2023.1 Antelope, 2023.2 Bobcat, 2024.1 Caracal, 6to4), A, B, C, D, E, and F; the full definitions are maintained upstream in doc/common/glossary.rst of the openstack/openstack-manuals repository.
\u5730\u5740\uff0c\u4ee5\u4fbf\u5b9e\u4f8b\u5728\u6bcf\u6b21\u542f\u52a8\u65f6\u90fd\u5177\u6709\u76f8\u540c\u7684\u516c\u6709 IP \u5730\u5740\u3002\u60a8\u53ef\u4ee5\u521b\u5efa\u4e00\u4e2a\u6d6e\u52a8 IP \u5730\u5740\u6c60\uff0c\u5e76\u5728\u5b9e\u4f8b\u542f\u52a8\u65f6\u5c06\u5176\u5206\u914d\u7ed9\u5b9e\u4f8b\uff0c\u4ee5\u4fdd\u6301\u4e00\u81f4\u7684 IP \u5730\u5740\u4ee5\u7ef4\u62a4 DNS \u5206\u914d\u3002 Folsom 2012 \u5e74\u79cb\u5b63\u53d1\u5e03\u7684\u4e0e OpenStack \u76f8\u5173\u7684\u9879\u76ee\u7684\u5206\u7ec4\u7248\u672c\uff0c\u662f OpenStack \u7684\u7b2c\u516d\u4e2a\u7248\u672c\u3002\u5b83\u5305\u62ec\u8ba1\u7b97 \uff08nova\uff09\u3001\u5bf9\u8c61\u5b58\u50a8 \uff08swift\uff09\u3001\u8eab\u4efd \uff08keystone\uff09\u3001\u7f51\u7edc \uff08neutron\uff09\u3001\u6620\u50cf\u670d\u52a1 \uff08glance\uff09 \u4ee5\u53ca\u5377\u6216\u5757\u5b58\u50a8 \uff08cinder\uff09\u3002Folsom \u662f OpenStack \u7b2c\u516d\u4e2a\u7248\u672c\u7684\u4ee3\u53f7\u3002\u8bbe\u8ba1\u5cf0\u4f1a\u5728\u7f8e\u56fd\u52a0\u5229\u798f\u5c3c\u4e9a\u5dde\u65e7\u91d1\u5c71\u4e3e\u884c\uff0c\u798f\u5c14\u745f\u59c6\u662f\u9644\u8fd1\u7684\u57ce\u5e02\u3002 FormPost \u5bf9\u8c61\u5b58\u50a8\u4e2d\u95f4\u4ef6\uff0c\u901a\u8fc7\u7f51\u9875\u4e0a\u7684\u8868\u5355\u4e0a\u4f20\uff08\u53d1\u5e03\uff09\u56fe\u50cf\u3002 freezer \u5907\u4efd\u3001\u8fd8\u539f\u548c\u707e\u96be\u6062\u590d\u670d\u52a1\u7684\u4ee3\u53f7\u3002 \u524d\u7aef \u7528\u6237\u4e0e\u670d\u52a1\u4ea4\u4e92\u7684\u70b9;\u53ef\u4ee5\u662f API \u7aef\u70b9\u3001\u4eea\u8868\u677f\u6216\u547d\u4ee4\u884c\u5de5\u5177\u3002","title":"F"},{"location":"security/security-guide/#g","text":"\u7f51\u5173 \u901a\u5e38\u5206\u914d\u7ed9\u8def\u7531\u5668\u7684 IP \u5730\u5740\uff0c\u7528\u4e8e\u5728\u4e0d\u540c\u7f51\u7edc\u4e4b\u95f4\u4f20\u9012\u7f51\u7edc\u6d41\u91cf\u3002 \u901a\u7528\u63a5\u6536\u5378\u8f7d \uff08GRO\uff09 \u67d0\u4e9b\u7f51\u7edc\u63a5\u53e3\u9a71\u52a8\u7a0b\u5e8f\u7684\u529f\u80fd\uff0c\u5728\u4f20\u9001\u5230\u5185\u6838 IP \u5806\u6808\u4e4b\u524d\uff0c\u5c06\u8bb8\u591a\u8f83\u5c0f\u7684\u63a5\u6536\u6570\u636e\u5305\u5408\u5e76\u4e3a\u4e00\u4e2a\u5927\u6570\u636e\u5305\u3002 \u901a\u7528\u8def\u7531\u5c01\u88c5 \uff08GRE\uff09 \u5728\u865a\u62df\u70b9\u5bf9\u70b9\u94fe\u8def\u4e2d\u5c01\u88c5\u5404\u79cd\u7f51\u7edc\u5c42\u534f\u8bae\u7684\u534f\u8bae\u3002 glance \u5f71\u50cf\u670d\u52a1\u7684\u4ee3\u53f7\u3002 glance API \u670d\u52a1\u5668 \u56fe\u50cf API \u7684\u66ff\u4ee3\u540d\u79f0\u3002 glance \u6ce8\u518c\u8868 \u6620\u50cf\u670d\u52a1\u6620\u50cf\u6ce8\u518c\u8868\u7684\u66ff\u4ee3\u672f\u8bed\u3002 \u5168\u5c40\u7aef\u70b9\u6a21\u677f \u5305\u542b\u53ef\u7528\u4e8e\u6240\u6709\u9879\u76ee\u7684\u670d\u52a1\u7684\u6807\u8bc6\u670d\u52a1\u7ec8\u7ed3\u70b9\u6a21\u677f\u3002 GlusterFS \u4e00\u4e2a\u65e8\u5728\u805a\u5408 NAS \u4e3b\u673a\u7684\u6587\u4ef6\u7cfb\u7edf\uff0c\u4e0e OpenStack \u517c\u5bb9\u3002 gnocchi OpenStack Telemetry \u670d\u52a1\u7684\u4e00\u90e8\u5206;\u63d0\u4f9b\u7d22\u5f15\u5668\u548c\u65f6\u5e8f\u6570\u636e\u5e93\u3002 golden\u6620\u50cf \u4e00\u79cd\u64cd\u4f5c\u7cfb\u7edf\u5b89\u88c5\u65b9\u6cd5\uff0c\u5176\u4e2d\u521b\u5efa\u6700\u7ec8\u7684\u78c1\u76d8\u6620\u50cf\uff0c\u7136\u540e\u7531\u6240\u6709\u8282\u70b9\u4f7f\u7528\uff0c\u65e0\u9700\u4fee\u6539\u3002 \u6cbb\u7406\u670d\u52a1\uff08\u5927\u4f1a\uff09 
\u8be5\u9879\u76ee\u5728\u4efb\u4f55\u4e91\u670d\u52a1\u96c6\u5408\u4e2d\u63d0\u4f9b\u6cbb\u7406\u5373\u670d\u52a1\uff0c\u4ee5\u4fbf\u76d1\u89c6\u3001\u5b9e\u65bd\u548c\u5ba1\u6838\u52a8\u6001\u57fa\u7840\u7ed3\u6784\u4e0a\u7684\u7b56\u7565\u3002 \u56fe\u5f62\u4ea4\u6362\u683c\u5f0f \uff08GIF\uff09 \u4e00\u79cd\u901a\u5e38\u7528\u4e8e\u7f51\u9875\u4e0a\u7684\u52a8\u753b\u56fe\u50cf\u7684\u56fe\u50cf\u6587\u4ef6\u3002 \u56fe\u5f62\u5904\u7406\u5355\u5143 \uff08GPU\uff09 OpenStack \u76ee\u524d\u4e0d\u652f\u6301\u6839\u636e GPU \u7684\u5b58\u5728\u6765\u9009\u62e9\u4e3b\u673a\u3002 \u7eff\u8272\u7ebf\u7a0b Python \u4f7f\u7528\u7684\u534f\u4f5c\u7ebf\u7a0b\u6a21\u578b;\u51cf\u5c11\u4e89\u7528\u6761\u4ef6\uff0c\u5e76\u4e14\u4ec5\u5728\u8fdb\u884c\u7279\u5b9a\u5e93\u8c03\u7528\u65f6\u8fdb\u884c\u4e0a\u4e0b\u6587\u5207\u6362\u3002\u6bcf\u4e2a OpenStack \u670d\u52a1\u90fd\u662f\u5b83\u81ea\u5df1\u7684\u7ebf\u7a0b\u3002 Grizzly OpenStack \u7b2c\u4e03\u4e2a\u7248\u672c\u7684\u4ee3\u53f7\u3002\u8bbe\u8ba1\u5cf0\u4f1a\u5728\u7f8e\u56fd\u52a0\u5229\u798f\u5c3c\u4e9a\u5dde\u5723\u5730\u4e9a\u54e5\u4e3e\u884c\uff0cGrizzly\u662f\u52a0\u5229\u798f\u5c3c\u4e9a\u5dde\u5dde\u65d7\u7684\u4e00\u4e2a\u5143\u7d20\u3002 \u5206\u7ec4 Identity v3 API \u5b9e\u4f53\u3002\u8868\u793a\u7279\u5b9a\u57df\u6240\u62e5\u6709\u7684\u7528\u6237\u96c6\u5408\u3002 \u5ba2\u6237\u673a\u64cd\u4f5c\u7cfb\u7edf \u5728\u865a\u62df\u673a\u76d1\u63a7\u7a0b\u5e8f\u7684\u63a7\u5236\u4e0b\u8fd0\u884c\u7684\u64cd\u4f5c\u7cfb\u7edf\u5b9e\u4f8b\u3002","title":"G"},{"location":"security/security-guide/#h","text":"Hadoop Apache Hadoop \u662f\u4e00\u4e2a\u5f00\u6e90\u8f6f\u4ef6\u6846\u67b6\uff0c\u652f\u6301\u6570\u636e\u5bc6\u96c6\u578b\u5206\u5e03\u5f0f\u5e94\u7528\u7a0b\u5e8f\u3002 Hadoop \u5206\u5e03\u5f0f\u6587\u4ef6\u7cfb\u7edf \uff08HDFS\uff09 \u4e00\u79cd\u5206\u5e03\u5f0f\u3001\u9ad8\u5ea6\u5bb9\u9519\u7684\u6587\u4ef6\u7cfb\u7edf\uff0c\u8bbe\u8ba1\u7528\u4e8e\u5728\u4f4e\u6210\u672c\u5546\u7528\u786c\u4ef6\u4e0a\u8fd0\u884c\u3002 \u4ea4\u63a5 \u5bf9\u8c61\u5b58\u50a8\u4e2d\u7684\u4e00\u79cd\u5bf9\u8c61\u72b6\u6001\uff0c\u5176\u4e2d\u7531\u4e8e\u9a71\u52a8\u5668\u6545\u969c\u800c\u81ea\u52a8\u521b\u5efa\u5bf9\u8c61\u7684\u65b0\u526f\u672c\u3002 HAProxy \u51fd\u6570 \u4e3a\u57fa\u4e8e TCP \u548c HTTP \u7684\u5e94\u7528\u7a0b\u5e8f\u63d0\u4f9b\u8d1f\u8f7d\u5e73\u8861\u5668\uff0c\u5c06\u8bf7\u6c42\u5206\u6563\u5230\u591a\u4e2a\u670d\u52a1\u5668\u3002 \u786c\u91cd\u542f \u4e00\u79cd\u91cd\u65b0\u542f\u52a8\u7c7b\u578b\uff0c\u5176\u4e2d\u6309\u4e0b\u7269\u7406\u6216\u865a\u62df\u7535\u6e90\u6309\u94ae\uff0c\u800c\u4e0d\u662f\u6b63\u5e38\u3001\u6b63\u786e\u5730\u5173\u95ed\u64cd\u4f5c\u7cfb\u7edf\u3002 Havana OpenStack \u7b2c\u516b\u7248\u7684\u4ee3\u53f7\u3002\u8bbe\u8ba1\u5cf0\u4f1a\u5728\u7f8e\u56fd\u4fc4\u52d2\u5188\u5dde\u6ce2\u7279\u5170\u5e02\u4e3e\u884c\uff0cHavana\u662f\u4fc4\u52d2\u5188\u5dde\u7684\u4e00\u4e2a\u975e\u6cd5\u4eba\u793e\u533a\u3002 \u5065\u5eb7\u76d1\u89c6\u5668 \u786e\u5b9a VIP 
\u6c60\u7684\u540e\u7aef\u6210\u5458\u662f\u5426\u53ef\u4ee5\u5904\u7406\u8bf7\u6c42\u3002\u4e00\u4e2a\u6c60\u53ef\u4ee5\u6709\u591a\u4e2a\u4e0e\u4e4b\u5173\u8054\u7684\u8fd0\u884c\u72b6\u51b5\u76d1\u89c6\u5668\u3002\u5f53\u6c60\u6709\u591a\u4e2a\u4e0e\u4e4b\u5173\u8054\u7684\u76d1\u89c6\u5668\u65f6\uff0c\u6240\u6709\u76d1\u89c6\u5668\u90fd\u4f1a\u68c0\u67e5\u6c60\u7684\u6bcf\u4e2a\u6210\u5458\u3002\u6240\u6709\u76d1\u89c6\u5668\u90fd\u5fc5\u987b\u58f0\u660e\u6210\u5458\u8fd0\u884c\u72b6\u51b5\u826f\u597d\uff0c\u624d\u80fd\u4fdd\u6301\u6d3b\u52a8\u72b6\u6001\u3002 heat \u4e1a\u52a1\u6d41\u7a0b\u670d\u52a1\u7684\u4ee3\u53f7\u3002 Heat \u7f16\u6392\u6a21\u677f \uff08HOT\uff09 \u4ee5 OpenStack \u539f\u751f\u683c\u5f0f\u7684 Heat \u8f93\u5165\u3002 \u9ad8\u53ef\u7528\u6027 \uff08HA\uff09 \u9ad8\u53ef\u7528\u6027\u7cfb\u7edf\u8bbe\u8ba1\u65b9\u6cd5\u548c\u76f8\u5173\u670d\u52a1\u5b9e\u65bd\u53ef\u786e\u4fdd\u5728\u5408\u540c\u6d4b\u91cf\u671f\u95f4\u8fbe\u5230\u9884\u5148\u5b89\u6392\u7684\u8fd0\u8425\u7ee9\u6548\u6c34\u5e73\u3002\u9ad8\u53ef\u7528\u6027\u7cfb\u7edf\u529b\u6c42\u6700\u5927\u9650\u5ea6\u5730\u51cf\u5c11\u7cfb\u7edf\u505c\u673a\u65f6\u95f4\u548c\u6570\u636e\u4e22\u5931\u3002 horizon \u4eea\u8868\u677f\u7684\u4ee3\u53f7\u3002 Horizon \u63d2\u4ef6 OpenStack Dashboard \uff08horizon\uff09 \u7684\u63d2\u4ef6\u3002 \u4e3b\u673a \u7269\u7406\u8ba1\u7b97\u673a\uff0c\u800c\u4e0d\u662f VM \u5b9e\u4f8b\uff08\u8282\u70b9\uff09\u3002 \u4e3b\u673a\u805a\u5408 \u4e00\u79cd\u5c06\u53ef\u7528\u6027\u533a\u57df\u8fdb\u4e00\u6b65\u7ec6\u5206\u4e3a\u865a\u62df\u673a\u7ba1\u7406\u7a0b\u5e8f\u6c60\uff08\u516c\u5171\u4e3b\u673a\u7684\u96c6\u5408\uff09\u7684\u65b9\u6cd5\u3002 \u4e3b\u673a\u603b\u7ebf\u9002\u914d\u5668 \uff08HBA\uff09 \u63d2\u5165 PCI \u63d2\u69fd\uff08\u5982\u5149\u7ea4\u901a\u9053\u6216\u7f51\u5361\uff09\u7684\u8bbe\u5907\u3002 \u6df7\u5408\u4e91 \u6df7\u5408\u4e91\u662f\u7531\u4e24\u4e2a\u6216\u591a\u4e2a\u4e91\uff08\u79c1\u6709\u4e91\u3001\u793e\u533a\u4e91\u6216\u516c\u6709\u4e91\uff09\u7ec4\u6210\u7684\uff0c\u8fd9\u4e9b\u4e91\u4ecd\u7136\u662f\u4e0d\u540c\u7684\u5b9e\u4f53\uff0c\u4f46\u7ed1\u5b9a\u5728\u4e00\u8d77\uff0c\u63d0\u4f9b\u591a\u79cd\u90e8\u7f72\u6a21\u578b\u7684\u4f18\u52bf\u3002\u6df7\u5408\u4e91\u8fd8\u610f\u5473\u7740\u80fd\u591f\u5c06\u4e3b\u673a\u6258\u7ba1\u3001\u6258\u7ba1\u548c/\u6216\u4e13\u7528\u670d\u52a1\u4e0e\u4e91\u8d44\u6e90\u8fde\u63a5\u8d77\u6765\u3002 \u6df7\u5408\u4e91\u8ba1\u7b97 \u6df7\u5408\u4e86\u672c\u5730\u3001\u79c1\u6709\u4e91\u548c\u7b2c\u4e09\u65b9\u516c\u6709\u4e91\u670d\u52a1\uff0c\u5e76\u5728\u4e24\u4e2a\u5e73\u53f0\u4e4b\u95f4\u8fdb\u884c\u7f16\u6392\u3002 Hyper-V OpenStack \u652f\u6301\u7684\u865a\u62df\u673a\u7ba1\u7406\u7a0b\u5e8f\u4e4b\u4e00\u3002 \u8d85\u94fe\u63a5 \u5305\u542b\u6307\u5411\u5176\u4ed6\u7f51\u7ad9\u7684\u94fe\u63a5\u7684\u4efb\u4f55\u7c7b\u578b\u7684\u6587\u672c\uff0c\u5e38\u89c1\u4e8e\u5355\u51fb\u4e00\u4e2a\u6216\u591a\u4e2a\u5355\u8bcd\u4f1a\u6253\u5f00\u5176\u4ed6\u7f51\u7ad9\u7684\u6587\u6863\u4e2d\u3002 \u8d85\u6587\u672c\u4f20\u8f93\u534f\u8bae \uff08HTTP\uff09 
\u7528\u4e8e\u5206\u5e03\u5f0f\u3001\u534f\u4f5c\u5f0f\u3001\u8d85\u5a92\u4f53\u4fe1\u606f\u7cfb\u7edf\u7684\u5e94\u7528\u534f\u8bae\u3002\u5b83\u662f\u4e07\u7ef4\u7f51\u6570\u636e\u901a\u4fe1\u7684\u57fa\u7840\u3002\u8d85\u6587\u672c\u662f\u5728\u5305\u542b\u6587\u672c\u7684\u8282\u70b9\u4e4b\u95f4\u4f7f\u7528\u903b\u8f91\u94fe\u63a5\uff08\u8d85\u94fe\u63a5\uff09\u7684\u7ed3\u6784\u5316\u6587\u672c\u3002HTTP\u662f\u4ea4\u6362\u6216\u4f20\u8f93\u8d85\u6587\u672c\u7684\u534f\u8bae\u3002 \u5b89\u5168\u8d85\u6587\u672c\u4f20\u8f93\u534f\u8bae \uff08HTTPS\uff09\u4e00\u79cd\u52a0\u5bc6\u901a\u4fe1\u534f\u8bae\uff0c\u7528\u4e8e\u901a\u8fc7\u8ba1\u7b97\u673a\u7f51\u7edc\u8fdb\u884c\u5b89\u5168\u901a\u4fe1\uff0c\u5728 Internet \u4e0a\u7684\u90e8\u7f72\u7279\u522b\u5e7f\u6cdb\u3002\u4ece\u6280\u672f\u4e0a\u8bb2\uff0c\u5b83\u672c\u8eab\u4e0d\u662f\u4e00\u4e2a\u534f\u8bae;\u76f8\u53cd\uff0c\u5b83\u662f\u7b80\u5355\u5730\u5c06\u8d85\u6587\u672c\u4f20\u8f93\u534f\u8bae \uff08HTTP\uff09 \u5206\u5c42\u5728 TLS \u6216 SSL \u534f\u8bae\u4e4b\u4e0a\u7684\u7ed3\u679c\uff0c\u4ece\u800c\u5c06 TLS \u6216 SSL \u7684\u5b89\u5168\u529f\u80fd\u6dfb\u52a0\u5230\u6807\u51c6 HTTP \u901a\u4fe1\u4e2d\u3002\u5927\u591a\u6570 OpenStack API \u7aef\u70b9\u548c\u8bb8\u591a\u7ec4\u4ef6\u95f4\u901a\u4fe1\u90fd\u652f\u6301 HTTPS \u901a\u4fe1\u3002 \u865a\u62df\u673a\u7ba1\u7406\u7a0b\u5e8f \u4ef2\u88c1\u548c\u63a7\u5236 VM \u5bf9\u5b9e\u9645\u5e95\u5c42\u786c\u4ef6\u7684\u8bbf\u95ee\u7684\u8f6f\u4ef6\u3002 \u865a\u62df\u673a\u7ba1\u7406\u7a0b\u5e8f\u6c60 \u901a\u8fc7\u4e3b\u673a\u805a\u5408\u7ec4\u5408\u5728\u4e00\u8d77\u7684\u865a\u62df\u673a\u7ba1\u7406\u7a0b\u5e8f\u7684\u96c6\u5408\u3002","title":"H"},{"location":"security/security-guide/#i","text":"Icehouse OpenStack \u7b2c\u4e5d\u4e2a\u7248\u672c\u7684\u4ee3\u53f7\u3002\u8bbe\u8ba1\u5cf0\u4f1a\u5728\u9999\u6e2f\u4e3e\u884c\uff0cIce House\u662f\u8be5\u5e02\u7684\u4e00\u6761\u8857\u9053\u7684\u540d\u5b57\u3002 \u8eab\u4efd\u8bc1\u53f7\u7801 \u4e0e\u8eab\u4efd\u4e2d\u7684\u6bcf\u4e2a\u7528\u6237\u5173\u8054\u7684\u552f\u4e00\u6570\u5b57 ID\uff0c\u5728\u6982\u5ff5\u4e0a\u7c7b\u4f3c\u4e8e Linux \u6216 LDAP UID\u3002 \u8eab\u4efd\u9a8c\u8bc1 API Identity \u670d\u52a1 API \u7684\u66ff\u4ee3\u672f\u8bed\u3002 \u8eab\u4efd\u9a8c\u8bc1\u540e\u7aef Identity \u670d\u52a1\u7528\u4e8e\u68c0\u7d22\u7528\u6237\u4fe1\u606f\u7684\u6e90;\u4f8b\u5982\uff0cOpenLDAP \u670d\u52a1\u5668\u3002 \u8eab\u4efd\u63d0\u4f9b\u8005 \u4e00\u79cd\u76ee\u5f55\u670d\u52a1\uff0c\u5141\u8bb8\u7528\u6237\u4f7f\u7528\u7528\u6237\u540d\u548c\u5bc6\u7801\u767b\u5f55\u3002\u5b83\u662f\u8eab\u4efd\u9a8c\u8bc1\u4ee4\u724c\u7684\u5178\u578b\u6765\u6e90\u3002 \u8eab\u4efd\u670d\u52a1\uff08keystone\uff09 \u4fc3\u8fdb API \u5ba2\u6237\u7aef\u8eab\u4efd\u9a8c\u8bc1\u3001\u670d\u52a1\u53d1\u73b0\u3001\u5206\u5e03\u5f0f\u591a\u9879\u76ee\u6388\u6743\u548c\u5ba1\u8ba1\u7684\u9879\u76ee\u3002\u5b83\u63d0\u4f9b\u4e86\u4e00\u4e2a\u7528\u6237\u6620\u5c04\u5230\u4ed6\u4eec\u53ef\u4ee5\u8bbf\u95ee\u7684 OpenStack \u670d\u52a1\u7684\u4e2d\u592e\u76ee\u5f55\u3002\u5b83\u8fd8\u4e3a OpenStack \u670d\u52a1\u6ce8\u518c\u7aef\u70b9\uff0c\u5e76\u5145\u5f53\u901a\u7528\u8eab\u4efd\u9a8c\u8bc1\u7cfb\u7edf\u3002 \u8eab\u4efd\u670d\u52a1 API \u7528\u4e8e\u8bbf\u95ee\u901a\u8fc7 keystone \u63d0\u4f9b\u7684 OpenStack Identity \u670d\u52a1\u7684 API\u3002 IETF \uff08\u82f1\u8bed\uff09 Internet \u5de5\u7a0b\u4efb\u52a1\u7ec4 \uff08IETF\uff09 \u662f\u4e00\u4e2a\u5f00\u653e\u6807\u51c6\u7ec4\u7ec7\uff0c\u8d1f\u8d23\u5236\u5b9a Internet 
\u6807\u51c6\uff0c\u5c24\u5176\u662f\u4e0e TCP/IP \u76f8\u5173\u7684\u6807\u51c6\u3002 \u6620\u50cf \u7528\u4e8e\u521b\u5efa\u6216\u91cd\u5efa\u670d\u52a1\u5668\u7684\u7279\u5b9a\u64cd\u4f5c\u7cfb\u7edf \uff08OS\uff09 \u7684\u6587\u4ef6\u96c6\u5408\u3002OpenStack \u63d0\u4f9b\u9884\u6784\u5efa\u7684\u6620\u50cf\u3002\u60a8\u8fd8\u53ef\u4ee5\u4ece\u5df2\u542f\u52a8\u7684\u670d\u52a1\u5668\u521b\u5efa\u81ea\u5b9a\u4e49\u6620\u50cf\u6216\u5feb\u7167\u3002\u81ea\u5b9a\u4e49\u6620\u50cf\u53ef\u7528\u4e8e\u6570\u636e\u5907\u4efd\uff0c\u6216\u7528\u4f5c\u5176\u4ed6\u670d\u52a1\u5668\u7684\u201c\u9ec4\u91d1\u201d\u6620\u50cf\u3002 \u6620\u50cfAPI \u7528\u4e8e\u7ba1\u7406 VM \u6620\u50cf\u7684\u6620\u50cf\u670d\u52a1 API \u7ec8\u7ed3\u70b9\u3002\u5904\u7406\u5ba2\u6237\u7aef\u5bf9 VM \u7684\u8bf7\u6c42\uff0c\u66f4\u65b0\u6ce8\u518c\u8868\u670d\u52a1\u5668\u4e0a\u7684\u6620\u50cf\u670d\u52a1\u5143\u6570\u636e\uff0c\u5e76\u4e0e\u5b58\u50a8\u9002\u914d\u5668\u901a\u4fe1\u4ee5\u4ece\u540e\u7aef\u5b58\u50a8\u4e0a\u4f20 VM \u6620\u50cf\u3002 \u6620\u50cf\u7f13\u5b58 \u7531\u56fe\u50cf\u670d\u52a1\u7528\u4e8e\u83b7\u53d6\u672c\u5730\u4e3b\u673a\u4e0a\u7684\u56fe\u50cf\uff0c\u800c\u4e0d\u662f\u5728\u6bcf\u6b21\u8bf7\u6c42\u56fe\u50cf\u65f6\u4ece\u56fe\u50cf\u670d\u52a1\u5668\u91cd\u65b0\u4e0b\u8f7d\u56fe\u50cf\u3002 \u6620\u50cf ID URI \u548c UUID \u7684\u7ec4\u5408\uff0c\u7528\u4e8e\u901a\u8fc7\u955c\u50cf API \u8bbf\u95ee\u955c\u50cf\u670d\u52a1\u865a\u62df\u673a\u955c\u50cf\u3002 \u6620\u50cf\u6210\u5458 \u53ef\u4ee5\u5728\u6620\u50cf\u670d\u52a1\u4e2d\u8bbf\u95ee\u7ed9\u5b9a VM \u6620\u50cf\u7684\u9879\u76ee\u5217\u8868\u3002 \u6620\u50cf\u6240\u6709\u8005 \u62e5\u6709\u955c\u50cf\u670d\u52a1\u865a\u62df\u673a\u955c\u50cf\u7684\u9879\u76ee\u3002 \u6620\u50cf\u6ce8\u518c\u8868 \u53ef\u901a\u8fc7\u6620\u50cf\u670d\u52a1\u83b7\u53d6\u7684 VM \u6620\u50cf\u7684\u5217\u8868\u3002 \u6620\u50cf\u670d\u52a1\uff08glance\uff09 OpenStack \u670d\u52a1\uff0c\u5b83\u63d0\u4f9b\u670d\u52a1\u548c\u5173\u8054\u7684\u5e93\u6765\u5b58\u50a8\u3001\u6d4f\u89c8\u3001\u5171\u4eab\u3001\u5206\u53d1\u548c\u7ba1\u7406\u53ef\u542f\u52a8\u78c1\u76d8\u6620\u50cf\u3001\u4e0e\u521d\u59cb\u5316\u8ba1\u7b97\u8d44\u6e90\u5bc6\u5207\u76f8\u5173\u7684\u5176\u4ed6\u6570\u636e\u4ee5\u53ca\u5143\u6570\u636e\u5b9a\u4e49\u3002 \u6620\u50cf\u72b6\u6001 \u955c\u50cf\u670d\u52a1\u4e2d\u865a\u62df\u673a\u955c\u50cf\u7684\u5f53\u524d\u72b6\u6001\uff0c\u4e0d\u8981\u4e0e\u6b63\u5728\u8fd0\u884c\u7684\u5b9e\u4f8b\u7684\u72b6\u6001\u6df7\u6dc6\u3002 \u6620\u50cf\u5b58\u50a8 \u6620\u50cf\u670d\u52a1\u7528\u4e8e\u5b58\u50a8\u865a\u62df\u673a\u6620\u50cf\u7684\u540e\u7aef\u5b58\u50a8\uff0c\u9009\u9879\u5305\u62ec\u5bf9\u8c61\u5b58\u50a8\u3001\u672c\u5730\u6302\u8f7d\u7684\u6587\u4ef6\u7cfb\u7edf\u3001RADOS \u5757\u8bbe\u5907\u3001VMware \u6570\u636e\u5b58\u50a8\u6216 HTTP\u3002 \u6620\u50cf UUID \u6620\u50cf\u670d\u52a1\u7528\u4e8e\u552f\u4e00\u6807\u8bc6\u6bcf\u4e2a VM \u6620\u50cf\u7684 UUID\u3002 \u5b75\u5316\u9879\u76ee \u793e\u533a\u9879\u76ee\u53ef\u4ee5\u63d0\u5347\u5230\u6b64\u72b6\u6001\uff0c\u7136\u540e\u63d0\u5347\u4e3a\u6838\u5fc3\u9879\u76ee \u57fa\u7840\u8bbe\u65bd\u4f18\u5316\u670d\u52a1\uff08\u89c2\u5bdf\u8005\uff09 OpenStack\u9879\u76ee\uff0c\u65e8\u5728\u4e3a\u57fa\u4e8eOpenStack\u7684\u591a\u9879\u76ee\u4e91\u63d0\u4f9b\u7075\u6d3b\u4e14\u53ef\u6269\u5c55\u7684\u8d44\u6e90\u4f18\u5316\u670d\u52a1\u3002 \u57fa\u7840\u67b6\u6784\u5373\u670d\u52a1 \uff08IaaS\uff09 IaaS 
\u662f\u4e00\u79cd\u914d\u7f6e\u6a21\u578b\uff0c\u5728\u8fd9\u79cd\u6a21\u578b\u4e2d\uff0c\u7ec4\u7ec7\u5916\u5305\u6570\u636e\u4e2d\u5fc3\u7684\u7269\u7406\u7ec4\u4ef6\uff0c\u4f8b\u5982\u5b58\u50a8\u3001\u786c\u4ef6\u3001\u670d\u52a1\u5668\u548c\u7f51\u7edc\u7ec4\u4ef6\u3002\u670d\u52a1\u63d0\u4f9b\u5546\u62e5\u6709\u8bbe\u5907\uff0c\u5e76\u8d1f\u8d23\u8bbe\u5907\u7684\u5b89\u88c5\u3001\u64cd\u4f5c\u548c\u7ef4\u62a4\u3002\u5ba2\u6237\u901a\u5e38\u6309\u4f7f\u7528\u91cf\u4ed8\u8d39\u3002IaaS \u662f\u4e00\u79cd\u63d0\u4f9b\u4e91\u670d\u52a1\u7684\u6a21\u578b\u3002 Ingress \u8fc7\u6ee4 \u7b5b\u9009\u4f20\u5165\u7f51\u7edc\u6d41\u91cf\u7684\u8fc7\u7a0b\u3002\u7531\u8ba1\u7b97\u652f\u6301\u3002 INI \u683c\u5f0f OpenStack \u914d\u7f6e\u6587\u4ef6\u4f7f\u7528 INI \u683c\u5f0f\u6765\u63cf\u8ff0\u9009\u9879\u53ca\u5176\u503c\u3002\u5b83\u7531\u90e8\u5206\u548c\u952e\u503c\u5bf9\u7ec4\u6210\u3002 \u6ce8\u5165 \u5728\u542f\u52a8\u5b9e\u4f8b\u4e4b\u524d\u5c06\u6587\u4ef6\u653e\u5165\u865a\u62df\u673a\u6620\u50cf\u7684\u8fc7\u7a0b\u3002 \u6bcf\u79d2\u8f93\u5165/\u8f93\u51fa\u64cd\u4f5c\u6570 \uff08IOPS\uff09 IOPS \u662f\u4e00\u79cd\u5e38\u89c1\u7684\u6027\u80fd\u5ea6\u91cf\uff0c\u7528\u4e8e\u5bf9\u8ba1\u7b97\u673a\u5b58\u50a8\u8bbe\u5907\uff08\u5982\u786c\u76d8\u9a71\u52a8\u5668\u3001\u56fa\u6001\u9a71\u52a8\u5668\u548c\u5b58\u50a8\u533a\u57df\u7f51\u7edc\uff09\u8fdb\u884c\u57fa\u51c6\u6d4b\u8bd5\u3002 \u5b9e\u4f8b \u6b63\u5728\u8fd0\u884c\u7684 VM \u6216\u5904\u4e8e\u5df2\u77e5\u72b6\u6001\uff08\u5982\u6302\u8d77\uff09\u7684 VM\uff0c\u53ef\u4ee5\u50cf\u786c\u4ef6\u670d\u52a1\u5668\u4e00\u6837\u4f7f\u7528\u3002 \u5b9e\u4f8bID \u4f8b\u5982UUID\u7684\u66ff\u4ee3\u672f\u8bed\u3002 \u5b9e\u4f8b\u72b6\u6001 \u6765\u5bbe\u865a\u62df\u673a\u6620\u50cf\u7684\u5f53\u524d\u72b6\u6001\u3002 \u5b9e\u4f8b\u96a7\u9053\u7f51\u7edc \u7528\u4e8e\u8ba1\u7b97\u8282\u70b9\u548c\u7f51\u7edc\u8282\u70b9\u4e4b\u95f4\u7684\u5b9e\u4f8b\u6d41\u91cf\u96a7\u9053\u7684\u7f51\u6bb5\u3002 \u5b9e\u4f8b\u7c7b\u578b \u63cf\u8ff0\u53ef\u4f9b\u7528\u6237\u4f7f\u7528\u7684\u5404\u79cd\u865a\u62df\u673a\u6620\u50cf\u7684\u53c2\u6570;\u5305\u62ec CPU\u3001\u5b58\u50a8\u548c\u5185\u5b58\u7b49\u53c2\u6570\u3002\u98ce\u5473\u7684\u66ff\u4ee3\u672f\u8bed\u3002 \u5b9e\u4f8b\u7c7b\u578b ID \u7279\u5b9a\u5b9e\u4f8b ID \u7684\u66ff\u4ee3\u672f\u8bed\u3002 \u5b9e\u4f8b UUID \u5206\u914d\u7ed9\u6bcf\u4e2a\u6765\u5bbe VM \u5b9e\u4f8b\u7684\u552f\u4e00 ID\u3002 \u667a\u80fd\u5e73\u53f0\u7ba1\u7406\u63a5\u53e3\uff08IPMI\uff09 IPMI \u662f\u7cfb\u7edf\u7ba1\u7406\u5458\u7528\u4e8e\u8ba1\u7b97\u673a\u7cfb\u7edf\u5e26\u5916\u7ba1\u7406\u548c\u76d1\u63a7\u5176\u64cd\u4f5c\u7684\u6807\u51c6\u5316\u8ba1\u7b97\u673a\u7cfb\u7edf\u63a5\u53e3\u3002\u901a\u4fd7\u5730\u8bf4\uff0c\u5b83\u662f\u4e00\u79cd\u4f7f\u7528\u76f4\u63a5\u7f51\u7edc\u8fde\u63a5\u7ba1\u7406\u8ba1\u7b97\u673a\u7684\u65b9\u6cd5\uff0c\u65e0\u8bba\u5b83\u662f\u5426\u6253\u5f00;\u8fde\u63a5\u5230\u786c\u4ef6\uff0c\u800c\u4e0d\u662f\u64cd\u4f5c\u7cfb\u7edf\u6216\u767b\u5f55 shell\u3002 \u63a5\u53e3 \u63d0\u4f9b\u4e0e\u5176\u4ed6\u8bbe\u5907\u6216\u4ecb\u8d28\u7684\u8fde\u63a5\u7684\u7269\u7406\u6216\u865a\u62df\u8bbe\u5907\u3002 \u63a5\u53e3 ID UUID \u5f62\u5f0f\u7684\u7f51\u7edc VIF \u6216 vNIC \u7684\u552f\u4e00 ID\u3002 \u4e92\u8054\u7f51\u63a7\u5236\u6d88\u606f\u534f\u8bae \uff08ICMP\uff09 \u7f51\u7edc\u8bbe\u5907\u7528\u4e8e\u63a7\u5236\u6d88\u606f\u7684\u7f51\u7edc\u534f\u8bae\u3002\u4f8b\u5982\uff0cping \u4f7f\u7528 ICMP \u6765\u6d4b\u8bd5\u8fde\u63a5\u3002 \u4e92\u8054\u7f51\u534f\u8bae 
\uff08IP\uff09 Internet \u534f\u8bae\u5957\u4ef6\u4e2d\u7684\u4e3b\u8981\u901a\u4fe1\u534f\u8bae\uff0c\u7528\u4e8e\u8de8\u7f51\u7edc\u8fb9\u754c\u4e2d\u7ee7\u6570\u636e\u62a5\u3002 \u4e92\u8054\u7f51\u670d\u52a1\u63d0\u4f9b\u5546 \uff08ISP\uff09 \u4efb\u4f55\u5411\u4e2a\u4eba\u6216\u4f01\u4e1a\u63d0\u4f9b\u4e92\u8054\u7f51\u8bbf\u95ee\u7684\u4f01\u4e1a\u3002 \u4e92\u8054\u7f51\u5c0f\u578b\u8ba1\u7b97\u673a\u7cfb\u7edf\u63a5\u53e3\uff08iSCSI\uff09 \u5c01\u88c5 SCSI \u5e27\u4ee5\u901a\u8fc7 IP \u7f51\u7edc\u4f20\u8f93\u7684\u5b58\u50a8\u534f\u8bae\u3002\u53d7\u8ba1\u7b97\u3001\u5bf9\u8c61\u5b58\u50a8\u548c\u955c\u50cf\u670d\u52a1\u652f\u6301\u3002 IO \u8f93\u5165\u548c\u8f93\u51fa\u7684\u7f29\u5199\u3002 IP \u5730\u5740 Internet \u4e0a\u6bcf\u4e2a\u8ba1\u7b97\u673a\u7cfb\u7edf\u552f\u4e00\u7684\u7f16\u53f7\u3002\u5730\u5740\u4f7f\u7528\u4e86\u4e24\u4e2a\u7248\u672c\u7684 Internet \u534f\u8bae \uff08IP\uff09\uff1aIPv4 \u548c IPv6\u3002 IP \u5730\u5740\u7ba1\u7406 \uff08IPAM\uff09 \u81ea\u52a8\u6267\u884c IP \u5730\u5740\u5206\u914d\u3001\u89e3\u9664\u5206\u914d\u548c\u7ba1\u7406\u7684\u8fc7\u7a0b\u3002\u76ee\u524d\u7531 Compute\u3001melange \u548c Networking \u63d0\u4f9b\u3002 ip6tables \u7528\u4e8e\u5728 Linux \u5185\u6838\u4e2d\u8bbe\u7f6e\u3001\u7ef4\u62a4\u548c\u68c0\u67e5 IPv6 \u6570\u636e\u5305\u8fc7\u6ee4\u89c4\u5219\u8868\u7684\u5de5\u5177\u3002\u5728 OpenStack \u8ba1\u7b97\u4e2d\uff0cip6tables \u4e0e arptables\u3001ebtables \u548c iptables \u4e00\u8d77\u4f7f\u7528\uff0c\u4e3a\u8282\u70b9\u548c\u865a\u62df\u673a\u521b\u5efa\u9632\u706b\u5899\u3002 ipset \u5bf9 iptables \u7684\u6269\u5c55\uff0c\u5141\u8bb8\u521b\u5efa\u540c\u65f6\u5339\u914d\u6574\u4e2a IP \u5730\u5740\u201c\u96c6\u201d\u7684\u9632\u706b\u5899\u89c4\u5219\u3002\u8fd9\u4e9b\u96c6\u9a7b\u7559\u5728\u7d22\u5f15\u6570\u636e\u7ed3\u6784\u4e2d\u4ee5\u63d0\u9ad8\u6548\u7387\uff0c\u5c24\u5176\u662f\u5728\u5177\u6709\u5927\u91cf\u89c4\u5219\u7684\u7cfb\u7edf\u4e0a\u3002 iptables iptables \u4e0e arptables \u548c ebtables \u4e00\u8d77\u4f7f\u7528\uff0c\u53ef\u5728 Compute \u4e2d\u521b\u5efa\u9632\u706b\u5899\u3002iptables \u662f Linux \u5185\u6838\u9632\u706b\u5899\uff08\u4f5c\u4e3a\u4e0d\u540c\u7684 Netfilter \u6a21\u5757\u5b9e\u73b0\uff09\u63d0\u4f9b\u7684\u8868\u53ca\u5176\u5b58\u50a8\u7684\u94fe\u548c\u89c4\u5219\u3002\u76ee\u524d\u4e0d\u540c\u7684\u5185\u6838\u6a21\u5757\u548c\u7a0b\u5e8f\u7528\u4e8e\u4e0d\u540c\u7684\u534f\u8bae\uff1aiptables \u9002\u7528\u4e8e IPv4\uff0cip6tables \u9002\u7528\u4e8e IPv6\uff0carptables \u9002\u7528\u4e8e ARP\uff0cebtables \u7528\u4e8e\u4ee5\u592a\u7f51\u5e27\u3002\u9700\u8981 root \u6743\u9650\u624d\u80fd\u64cd\u4f5c\u3002 ironic \u88f8\u673a\u670d\u52a1\u7684\u4ee3\u53f7\u3002 iSCSI \u9650\u5b9a\u540d\u79f0 \uff08IQN\uff09 IQN \u662f\u6700\u5e38\u7528\u7684 iSCSI \u540d\u79f0\u683c\u5f0f\uff0c\u7528\u4e8e\u552f\u4e00\u6807\u8bc6 iSCSI \u7f51\u7edc\u4e2d\u7684\u8282\u70b9\u3002\u6240\u6709 IQN \u90fd\u9075\u5faa iqn.yyyy-mm.domain\uff1aidentifier \u6a21\u5f0f\uff0c\u5176\u4e2d\u201cyyyy-mm\u201d\u662f\u57df\u540d\u6ce8\u518c\u7684\u5e74\u4efd\u548c\u6708\u4efd\uff0c\u201cdomain\u201d\u662f\u9881\u53d1\u7ec4\u7ec7\u7684\u53cd\u5411\u57df\u540d\uff0c\u201cidentifier\u201d\u662f\u4e00\u4e2a\u53ef\u9009\u5b57\u7b26\u4e32\uff0c\u4f7f\u540c\u4e00\u57df\u540d\u4e0b\u7684\u6bcf\u4e2a IQN \u90fd\u662f\u552f\u4e00\u7684\u3002\u4f8b\u5982\uff0c\u201ciqn.2015-10.org.openstack.408ae959bce1\u201d\u3002 ISO9660 
\u955c\u50cf\u670d\u52a1\u652f\u6301\u7684\u865a\u62df\u673a\u955c\u50cf\u78c1\u76d8\u683c\u5f0f\u4e4b\u4e00\u3002 ITSEC \u51fd\u6570 \u8ba1\u7b97 RBAC \u7cfb\u7edf\u4e2d\u7684\u9ed8\u8ba4\u89d2\u8272\uff0c\u53ef\u4ee5\u9694\u79bb\u4efb\u4f55\u9879\u76ee\u4e2d\u7684\u5b9e\u4f8b\u3002","title":"I"},{"location":"security/security-guide/#j","text":"Java \u4e00\u79cd\u7f16\u7a0b\u8bed\u8a00\uff0c\u7528\u4e8e\u521b\u5efa\u901a\u8fc7\u7f51\u7edc\u6d89\u53ca\u591a\u53f0\u8ba1\u7b97\u673a\u7684\u7cfb\u7edf\u3002 JavaScript \u4e00\u79cd\u7528\u4e8e\u751f\u6210\u7f51\u9875\u7684\u811a\u672c\u8bed\u8a00\u3002 JavaScript \u5bf9\u8c61\u8868\u793a\u6cd5 \uff08JSON\uff09 OpenStack \u4e2d\u652f\u6301\u7684\u54cd\u5e94\u683c\u5f0f\u4e4b\u4e00\u3002 \u6846\u67b6\u7684\u5f62\u72b6 \u73b0\u4ee3\u4ee5\u592a\u7f51\u7f51\u7edc\u4e2d\u7684\u529f\u80fd\uff0c\u652f\u6301\u9ad8\u8fbe\u7ea6 9000 \u5b57\u8282\u7684\u5e27\u3002 Juno OpenStack \u7b2c\u5341\u7248\u7684\u4ee3\u53f7\u3002\u8bbe\u8ba1\u5cf0\u4f1a\u5728\u7f8e\u56fd\u4f50\u6cbb\u4e9a\u5dde\u4e9a\u7279\u5170\u5927\u4e3e\u884c\uff0cJuno\u662f\u4f50\u6cbb\u4e9a\u5dde\u7684\u4e00\u4e2a\u975e\u6cd5\u4eba\u793e\u533a\u3002","title":"J"},{"location":"security/security-guide/#k","text":"Kerberos \u4e00\u79cd\u57fa\u4e8e\u7968\u8bc1\u7684\u7f51\u7edc\u8eab\u4efd\u9a8c\u8bc1\u534f\u8bae\u3002Kerberos \u5141\u8bb8\u8282\u70b9\u901a\u8fc7\u975e\u5b89\u5168\u7f51\u7edc\u8fdb\u884c\u901a\u4fe1\uff0c\u5e76\u5141\u8bb8\u8282\u70b9\u4ee5\u5b89\u5168\u7684\u65b9\u5f0f\u76f8\u4e92\u8bc1\u660e\u5176\u8eab\u4efd\u3002 \u57fa\u4e8e\u5185\u6838\u7684\u865a\u62df\u673a \uff08KVM\uff09 \u652f\u6301 OpenStack \u7684\u865a\u62df\u673a\u7ba1\u7406\u7a0b\u5e8f\u3002KVM \u662f\u9002\u7528\u4e8e Linux on x86 \u786c\u4ef6\u7684\u5b8c\u6574\u865a\u62df\u5316\u89e3\u51b3\u65b9\u6848\uff0c\u5305\u542b\u865a\u62df\u5316\u6269\u5c55\uff08Intel VT \u6216 AMD-V\uff09\u3001ARM\u3001IBM Power \u548c IBM zSeries\u3002\u5b83\u7531\u4e00\u4e2a\u53ef\u52a0\u8f7d\u7684\u5185\u6838\u6a21\u5757\u7ec4\u6210\uff0c\u8be5\u6a21\u5757\u63d0\u4f9b\u6838\u5fc3\u865a\u62df\u5316\u57fa\u7840\u67b6\u6784\u548c\u7279\u5b9a\u4e8e\u5904\u7406\u5668\u7684\u6a21\u5757\u3002 \u5bc6\u94a5\u7ba1\u7406\u5668\u670d\u52a1\uff08barbican\uff09 \u8be5\u9879\u76ee\u4ea7\u751f\u4e00\u4e2a\u79d8\u5bc6\u5b58\u50a8\u548c\u751f\u6210\u7cfb\u7edf\uff0c\u80fd\u591f\u4e3a\u5e0c\u671b\u542f\u7528\u52a0\u5bc6\u529f\u80fd\u7684\u670d\u52a1\u63d0\u4f9b\u5bc6\u94a5\u7ba1\u7406\u3002 keystone Identity \u670d\u52a1\u7684\u4ee3\u53f7\u3002 \u5feb\u901f\u542f\u52a8 \u7528\u4e8e\u5728\u57fa\u4e8e Red Hat\u3001Fedora \u548c CentOS \u7684 Linux \u53d1\u884c\u7248\u4e0a\u81ea\u52a8\u8fdb\u884c\u7cfb\u7edf\u914d\u7f6e\u548c\u5b89\u88c5\u7684\u5de5\u5177\u3002 Kilo OpenStack \u7b2c 11 \u7248\u7684\u4ee3\u53f7\u3002\u8bbe\u8ba1\u5cf0\u4f1a\u5728\u6cd5\u56fd\u5df4\u9ece\u4e3e\u884c\u3002\u7531\u4e8e\u540d\u79f0\u9009\u62e9\u7684\u5ef6\u8fdf\uff0c\u8be5\u7248\u672c\u4ec5\u88ab\u79f0\u4e3a K\u3002\u7531\u4e8e k kilo \u662f\u5355\u4f4d\u7b26\u53f7\uff0c\u800c kilogram \u53c2\u8003\u5de5\u4ef6\u5b58\u653e\u5728\u5df4\u9ece\u9644\u8fd1\u7684\u585e\u592b\u5c14 Pavillon de Breteuil \u4e2d\uff0c\u56e0\u6b64\u793e\u533a\u9009\u62e9\u4e86 Kilo \u4f5c\u4e3a\u7248\u672c\u540d\u79f0\u3002 L \u5927\u5bf9\u8c61 Object Storage \u4e2d\u5927\u4e8e 5 GB \u7684\u5bf9\u8c61\u3002 \u542f\u52a8\u677f OpenStack \u7684\u534f\u4f5c\u7ad9\u70b9\u3002 \u4e8c\u5c42\uff08L2\uff09\u4ee3\u7406 \u4e3a\u865a\u62df\u7f51\u7edc\u63d0\u4f9b\u7b2c 2 \u5c42\u8fde\u63a5\u7684 OpenStack 
Networking \u4ee3\u7406\u3002 \u4e8c\u5c42\u7f51\u7edc OSI \u7f51\u7edc\u4f53\u7cfb\u7ed3\u6784\u4e2d\u7528\u4e8e\u6570\u636e\u94fe\u8def\u5c42\u7684\u672f\u8bed\u3002\u6570\u636e\u94fe\u8def\u5c42\u8d1f\u8d23\u5a92\u4f53\u8bbf\u95ee\u63a7\u5236\u3001\u6d41\u91cf\u63a7\u5236\u4ee5\u53ca\u68c0\u6d4b\u548c\u7ea0\u6b63\u7269\u7406\u5c42\u4e2d\u53ef\u80fd\u53d1\u751f\u7684\u9519\u8bef\u3002 \u4e09\u5c42 \uff08L3\uff09 \u4ee3\u7406 OpenStack Networking \u4ee3\u7406\uff0c\u4e3a\u865a\u62df\u7f51\u7edc\u63d0\u4f9b\u7b2c 3 \u5c42\uff08\u8def\u7531\uff09\u670d\u52a1\u3002 \u4e09\u5c42\u7f51\u7edc \u5728 OSI \u7f51\u7edc\u4f53\u7cfb\u7ed3\u6784\u4e2d\u7528\u4e8e\u7f51\u7edc\u5c42\u7684\u672f\u8bed\u3002\u7f51\u7edc\u5c42\u8d1f\u8d23\u6570\u636e\u5305\u8f6c\u53d1\uff0c\u5305\u62ec\u4ece\u4e00\u4e2a\u8282\u70b9\u5230\u53e6\u4e00\u4e2a\u8282\u70b9\u7684\u8def\u7531\u3002 Liberty OpenStack \u7b2c 12 \u7248\u7684\u4ee3\u53f7\u3002\u8bbe\u8ba1\u5cf0\u4f1a\u5728\u52a0\u62ff\u5927\u6e29\u54e5\u534e\u4e3e\u884c\uff0cLiberty\u662f\u52a0\u62ff\u5927\u8428\u65af\u5580\u5f7b\u6e29\u7701\u4e00\u4e2a\u6751\u5e84\u7684\u540d\u5b57\u3002 libvirt OpenStack \u7528\u6765\u4e0e\u8bb8\u591a\u53d7\u652f\u6301\u7684\u865a\u62df\u673a\u7ba1\u7406\u7a0b\u5e8f\u8fdb\u884c\u4ea4\u4e92\u7684\u865a\u62df\u5316 API \u5e93\u3002 \u8f7b\u91cf\u7ea7\u76ee\u5f55\u8bbf\u95ee\u534f\u8bae \uff08LDAP\uff09 \u7528\u4e8e\u901a\u8fc7 IP \u7f51\u7edc\u8bbf\u95ee\u548c\u7ef4\u62a4\u5206\u5e03\u5f0f\u76ee\u5f55\u4fe1\u606f\u670d\u52a1\u7684\u5e94\u7528\u7a0b\u5e8f\u534f\u8bae\u3002 Linux \u64cd\u4f5c\u7cfb\u7edf \u7c7bUnix\u8ba1\u7b97\u673a\u64cd\u4f5c\u7cfb\u7edf\uff0c\u5728\u81ea\u7531\u548c\u5f00\u6e90\u8f6f\u4ef6\u5f00\u53d1\u548c\u5206\u53d1\u7684\u6a21\u5f0f\u4e0b\u7ec4\u88c5\u3002 Linux\u6865\u63a5 \u4f7f\u591a\u4e2a VM \u80fd\u591f\u5728\u8ba1\u7b97\u4e2d\u5171\u4eab\u5355\u4e2a\u7269\u7406 NIC \u7684\u8f6f\u4ef6\u3002 Linux Bridge neutron \u63d2\u4ef6 \u4f7f Linux \u7f51\u6865\u80fd\u591f\u7406\u89e3\u7f51\u7edc\u7aef\u53e3\u3001\u63a5\u53e3\u8fde\u63a5\u548c\u5176\u4ed6\u62bd\u8c61\u3002 Linux \u5bb9\u5668 \uff08LXC\uff09 \u652f\u6301 OpenStack \u7684\u865a\u62df\u673a\u7ba1\u7406\u7a0b\u5e8f\u3002 \u5b9e\u65f6\u8fc1\u79fb \u8ba1\u7b97\u4e2d\u80fd\u591f\u5c06\u6b63\u5728\u8fd0\u884c\u7684\u865a\u62df\u673a\u5b9e\u4f8b\u4ece\u4e00\u53f0\u4e3b\u673a\u79fb\u52a8\u5230\u53e6\u4e00\u53f0\u4e3b\u673a\uff0c\u5728\u5207\u6362\u671f\u95f4\u4ec5\u53d1\u751f\u5c11\u91cf\u670d\u52a1\u4e2d\u65ad\u3002 \u8d1f\u8f7d\u5747\u8861\u5668 \u8d1f\u8f7d\u5747\u8861\u5668\u662f\u5c5e\u4e8e\u4e91\u5e10\u6237\u7684\u903b\u8f91\u8bbe\u5907\u3002\u5b83\u7528\u4e8e\u6839\u636e\u5b9a\u4e49\u4e3a\u5176\u914d\u7f6e\u4e00\u90e8\u5206\u7684\u6761\u4ef6\u5728\u591a\u4e2a\u540e\u7aef\u7cfb\u7edf\u6216\u670d\u52a1\u4e4b\u95f4\u5206\u914d\u5de5\u4f5c\u8d1f\u8f7d\u3002 \u8d1f\u8f7d\u5747\u8861 \u5728\u4e24\u4e2a\u6216\u591a\u4e2a\u8282\u70b9\u4e4b\u95f4\u5206\u6563\u5ba2\u6237\u7aef\u8bf7\u6c42\u4ee5\u63d0\u9ad8\u6027\u80fd\u548c\u53ef\u7528\u6027\u7684\u8fc7\u7a0b\u3002 \u8d1f\u8f7d\u5747\u8861\u5668\u5373\u670d\u52a1\uff08LBaaS\uff09 \u4f7f\u7f51\u7edc\u80fd\u591f\u5728\u6307\u5b9a\u5b9e\u4f8b\u4e4b\u95f4\u5747\u5300\u5206\u914d\u4f20\u5165\u8bf7\u6c42\u3002 \u8d1f\u8f7d\u5747\u8861\u670d\u52a1\uff08octavia\uff09 \u8be5\u9879\u76ee\u65e8\u5728\u4ee5\u4e0e\u6280\u672f\u65e0\u5173\u7684\u65b9\u5f0f\u63d0\u4f9b\u5bf9\u8d1f\u8f7d\u5747\u8861\u5668\u670d\u52a1\u7684\u53ef\u6269\u5c55\u3001\u6309\u9700\u3001\u81ea\u52a9\u670d\u52a1\u8bbf\u95ee\u3002 
\u903b\u8f91\u5377\u7ba1\u7406\u5668 \uff08LVM\uff09 \u63d0\u4f9b\u4e00\u79cd\u5728\u5927\u5bb9\u91cf\u5b58\u50a8\u8bbe\u5907\u4e0a\u5206\u914d\u7a7a\u95f4\u7684\u65b9\u6cd5\uff0c\u8be5\u65b9\u6cd5\u6bd4\u4f20\u7edf\u7684\u5206\u533a\u65b9\u6848\u66f4\u7075\u6d3b\u3002","title":"K"},{"location":"security/security-guide/#m","text":"magnum \u5bb9\u5668\u57fa\u7840\u7ed3\u6784\u7ba1\u7406\u670d\u52a1\u7684\u4ee3\u53f7\u3002 \u7ba1\u7406 API \u7ba1\u7406 API \u7684\u66ff\u4ee3\u672f\u8bed\u3002 \u7ba1\u7406\u7f51\u7edc \u7528\u4e8e\u7ba1\u7406\u7684\u7f51\u6bb5\uff0c\u516c\u5171 Internet \u65e0\u6cd5\u8bbf\u95ee\u3002 \u7ba1\u7406\u5668 \u76f8\u5173\u4ee3\u7801\u7684\u903b\u8f91\u5206\u7ec4\uff0c\u4f8b\u5982\u5757\u5b58\u50a8\u5377\u7ba1\u7406\u5668\u6216\u7f51\u7edc\u7ba1\u7406\u5668\u3002 \u6e05\u5355 \u7528\u4e8e\u8ddf\u8e2a\u5bf9\u8c61\u5b58\u50a8\u4e2d\u5927\u578b\u5bf9\u8c61\u7684\u6bb5\u3002 manifest \u5bf9\u8c61 \u4e00\u4e2a\u7279\u6b8a\u7684\u5bf9\u8c61\u5b58\u50a8\u5bf9\u8c61\uff0c\u5176\u4e2d\u5305\u542b\u5927\u578b\u5bf9\u8c61\u7684\u6e05\u5355\u3002 manila OpenStack \u5171\u4eab\u6587\u4ef6\u7cfb\u7edf\u670d\u52a1\u7684\u4ee3\u53f7\u3002 manila\u5206\u4eab \u8d1f\u8d23\u7ba1\u7406\u5171\u4eab\u6587\u4ef6\u7cfb\u7edf\u670d\u52a1\u8bbe\u5907\uff0c\u7279\u522b\u662f\u540e\u7aef\u8bbe\u5907\u3002 \u6700\u5927\u4f20\u8f93\u5355\u5143 \uff08MTU\uff09 \u7279\u5b9a\u7f51\u7edc\u4ecb\u8d28\u7684\u6700\u5927\u5e27\u6216\u6570\u636e\u5305\u5927\u5c0f\u3002\u4ee5\u592a\u7f51\u901a\u5e38\u4e3a 1500 \u5b57\u8282\u3002 \u673a\u5236\u9a71\u52a8 \u7a0b\u5e8f \u6a21\u5757\u5316\u7b2c 2 \u5c42 \uff08ML2\uff09 neutron \u63d2\u4ef6\u7684\u9a71\u52a8\u7a0b\u5e8f\uff0c\u4e3a\u865a\u62df\u5b9e\u4f8b\u63d0\u4f9b\u7b2c 2 \u5c42\u8fde\u63a5\u3002\u5355\u4e2a OpenStack \u5b89\u88c5\u53ef\u4ee5\u4f7f\u7528\u591a\u4e2a\u673a\u5236\u9a71\u52a8\u7a0b\u5e8f\u3002 melange OpenStack Network Information Service \u7684\u9879\u76ee\u540d\u79f0\u3002\u5c06\u4e0e\u7f51\u7edc\u5408\u5e76\u3002 \u6210\u5458\u5173\u7cfb \u955c\u50cf\u670d\u52a1\u865a\u62df\u673a\u955c\u50cf\u4e0e\u9879\u76ee\u4e4b\u95f4\u7684\u5173\u8054\u3002\u5141\u8bb8\u4e0e\u6307\u5b9a\u9879\u76ee\u5171\u4eab\u56fe\u50cf\u3002 \u6210\u5458\u5217\u8868 \u53ef\u4ee5\u5728\u6620\u50cf\u670d\u52a1\u4e2d\u8bbf\u95ee\u7ed9\u5b9a VM \u6620\u50cf\u7684\u9879\u76ee\u5217\u8868\u3002 \u5185\u5b58\u7f13\u5b58 \u5bf9\u8c61\u5b58\u50a8\u7528\u4e8e\u7f13\u5b58\u7684\u5206\u5e03\u5f0f\u5185\u5b58\u5bf9\u8c61\u7f13\u5b58\u7cfb\u7edf\u3002 \u5185\u5b58\u8fc7\u91cf\u5206\u914d \u80fd\u591f\u6839\u636e\u4e3b\u673a\u7684\u5b9e\u9645\u5185\u5b58\u4f7f\u7528\u60c5\u51b5\u542f\u52a8\u65b0\u7684 VM \u5b9e\u4f8b\uff0c\u800c\u4e0d\u662f\u6839\u636e\u6bcf\u4e2a\u6b63\u5728\u8fd0\u884c\u7684\u5b9e\u4f8b\u8ba4\u4e3a\u5176\u53ef\u7528\u7684 RAM \u91cf\u6765\u505a\u51fa\u51b3\u5b9a\u3002\u4e5f\u79f0\u4e3a RAM \u8fc7\u91cf\u4f7f\u7528\u3002 \u6d88\u606f\u4ee3\u7406 \u7528\u4e8e\u5728\u8ba1\u7b97\u4e2d\u63d0\u4f9b AMQP \u6d88\u606f\u4f20\u9012\u529f\u80fd\u7684\u8f6f\u4ef6\u5305\u3002\u9ed8\u8ba4\u5305\u4e3a RabbitMQ\u3002 \u6d88\u606f\u603b\u7ebf \u6240\u6709 AMQP \u6d88\u606f\u7528\u4e8e\u8ba1\u7b97\u4e2d\u7684\u4e91\u95f4\u901a\u4fe1\u7684\u4e3b\u8981\u865a\u62df\u901a\u4fe1\u7ebf\u8def\u3002 \u6d88\u606f\u961f\u5217 \u5c06\u6765\u81ea\u5ba2\u6237\u7aef\u7684\u8bf7\u6c42\u4f20\u9012\u7ed9\u76f8\u5e94\u7684\u5de5\u4f5c\u7ebf\u7a0b\uff0c\u5e76\u5728\u4f5c\u4e1a\u5b8c\u6210\u540e\u5c06\u8f93\u51fa\u8fd4\u56de\u7ed9\u5ba2\u6237\u7aef\u3002 \u6d88\u606f\u670d\u52a1 \uff08zaqar\uff09 
\u8be5\u9879\u76ee\u63d0\u4f9b\u6d88\u606f\u4f20\u9012\u670d\u52a1\uff0c\u8be5\u670d\u52a1\u4ee5\u9ad8\u6548\u3001\u53ef\u6269\u5c55\u548c\u9ad8\u5ea6\u53ef\u7528\u7684\u65b9\u5f0f\u63d0\u4f9b\u5404\u79cd\u5206\u5e03\u5f0f\u5e94\u7528\u7a0b\u5e8f\u6a21\u5f0f\uff0c\u5e76\u521b\u5efa\u548c\u7ef4\u62a4\u5173\u8054\u7684 Python \u5e93\u548c\u6587\u6863\u3002 \u5143\u6570\u636e\u670d\u52a1\u5668 \uff08MDS\uff09 \u5b58\u50a8 CephFS \u5143\u6570\u636e\u3002 \u5143\u6570\u636e\u4ee3\u7406 \u4e3a\u5b9e\u4f8b\u63d0\u4f9b\u5143\u6570\u636e\u670d\u52a1\u7684 OpenStack Networking \u4ee3\u7406\u3002 \u8fc1\u79fb \u5c06 VM \u5b9e\u4f8b\u4ece\u4e00\u53f0\u4e3b\u673a\u79fb\u52a8\u5230\u53e6\u4e00\u53f0\u4e3b\u673a\u7684\u8fc7\u7a0b\u3002 mistral \u5de5\u4f5c\u6d41\u670d\u52a1\u7684\u4ee3\u53f7\u3002 Mitaka OpenStack \u7b2c 13 \u7248\u7684\u4ee3\u53f7\u3002\u8bbe\u8ba1\u5cf0\u4f1a\u5728\u65e5\u672c\u4e1c\u4eac\u4e3e\u884c\u3002Mitaka\u662f\u4e1c\u4eac\u7684\u4e00\u5ea7\u57ce\u5e02\u3002 \u6a21\u5757\u5316\u7b2c 2 \u5c42 \uff08ML2\uff09neutron\u63d2\u4ef6 \u53ef\u4ee5\u5728\u7f51\u7edc\u4e2d\u540c\u65f6\u4f7f\u7528\u591a\u79cd\u4e8c\u5c42\u7f51\u7edc\u6280\u672f\uff0c\u5982802.1Q\u548cVXLAN\u3002 monasca OpenStack \u76d1\u63a7\u7684\u4ee3\u53f7\u3002 \u76d1\u63a7 \uff08LBaaS\uff09 LBaaS \u529f\u80fd\uff0c\u4f7f\u7528 ping \u547d\u4ee4\u3001TCP \u548c HTTP/HTTPS GET \u63d0\u4f9b\u53ef\u7528\u6027\u76d1\u63a7\u3002 \u76d1\u89c6\u5668 \uff08Mon\uff09 \u4e00\u4e2a Ceph \u7ec4\u4ef6\uff0c\u7528\u4e8e\u4e0e\u5916\u90e8\u5ba2\u6237\u7aef\u901a\u4fe1\u3001\u68c0\u67e5\u6570\u636e\u72b6\u6001\u548c\u4e00\u81f4\u6027\u4ee5\u53ca\u6267\u884c\u4ef2\u88c1\u529f\u80fd\u3002 \u76d1\u63a7 \uff08monasca\uff09 OpenStack \u670d\u52a1\uff0c\u4e3a\u6307\u6807\u3001\u590d\u6742\u4e8b\u4ef6\u5904\u7406\u548c\u65e5\u5fd7\u8bb0\u5f55\u63d0\u4f9b\u591a\u9879\u76ee\u3001\u9ad8\u5ea6\u53ef\u6269\u5c55\u3001\u9ad8\u6027\u80fd\u3001\u5bb9\u9519\u7684\u76d1\u63a7\u5373\u670d\u52a1\u89e3\u51b3\u65b9\u6848\u3002\u4e3a\u9ad8\u7ea7\u76d1\u63a7\u670d\u52a1\u6784\u5efa\u4e00\u4e2a\u53ef\u6269\u5c55\u7684\u5e73\u53f0\uff0c\u8fd0\u8425\u5546\u548c\u9879\u76ee\u90fd\u53ef\u4ee5\u4f7f\u7528\u8be5\u5e73\u53f0\u6765\u83b7\u5f97\u8fd0\u8425\u6d1e\u5bdf\u529b\u548c\u53ef\u89c1\u6027\uff0c\u786e\u4fdd\u53ef\u7528\u6027\u548c\u7a33\u5b9a\u6027\u3002 \u591a\u4e91\u8ba1\u7b97 \u5728\u5355\u4e2a\u7f51\u7edc\u67b6\u6784\u4e2d\u4f7f\u7528\u591a\u79cd\u4e91\u8ba1\u7b97\u548c\u5b58\u50a8\u670d\u52a1\u3002 \u591a\u4e91 SDK \u63d0\u4f9b\u591a\u4e91\u62bd\u8c61\u5c42\u5e76\u5305\u542b\u5bf9 OpenStack \u7684\u652f\u6301\u7684 SDK\u3002\u8fd9\u4e9b SDK \u975e\u5e38\u9002\u5408\u7f16\u5199\u9700\u8981\u4f7f\u7528\u591a\u79cd\u7c7b\u578b\u7684\u4e91\u63d0\u4f9b\u5546\u7684\u5e94\u7528\u7a0b\u5e8f\uff0c\u4f46\u53ef\u80fd\u4f1a\u516c\u5f00\u4e00\u7ec4\u66f4\u6709\u9650\u7684\u529f\u80fd\u3002 \u591a\u56e0\u7d20\u8eab\u4efd\u9a8c\u8bc1 \u4f7f\u7528\u4e24\u4e2a\u6216\u591a\u4e2a\u51ed\u636e\uff08\u5982\u5bc6\u7801\u548c\u79c1\u94a5\uff09\u7684\u8eab\u4efd\u9a8c\u8bc1\u65b9\u6cd5\u3002\u76ee\u524d\u5728 Identity \u4e2d\u4e0d\u53d7\u652f\u6301\u3002 \u591a\u4e3b\u673a \u4f20\u7edf \uff08nova\uff09 \u7f51\u7edc\u7684\u9ad8\u53ef\u7528\u6027\u6a21\u5f0f\u3002\u6bcf\u4e2a\u8ba1\u7b97\u8282\u70b9\u5904\u7406 NAT \u548c DHCP\uff0c\u5e76\u5145\u5f53\u5176\u4e0a\u6240\u6709 VM \u7684\u7f51\u5173\u3002\u4e00\u4e2a\u8ba1\u7b97\u8282\u70b9\u4e0a\u7684\u7f51\u7edc\u6545\u969c\u4e0d\u4f1a\u5f71\u54cd\u5176\u4ed6\u8ba1\u7b97\u8282\u70b9\u4e0a\u7684 VM\u3002 multinic \u51fd\u6570 
\u8ba1\u7b97\u4e2d\u7684\u5de5\u5177\uff0c\u5141\u8bb8\u6bcf\u4e2a\u865a\u62df\u673a\u5b9e\u4f8b\u8fde\u63a5\u591a\u4e2a VIF\u3002 murano \u5e94\u7528\u7a0b\u5e8f\u76ee\u5f55\u670d\u52a1\u7684\u4ee3\u53f7\u3002","title":"M"},{"location":"security/security-guide/#n","text":"Nebula NASA \u4e8e 2010 \u5e74\u4ee5\u5f00\u6e90\u5f62\u5f0f\u53d1\u5e03\uff0c\u662f Compute \u7684\u57fa\u7840\u3002 \u7f51\u7edc\u7ba1\u7406\u5458 \u8ba1\u7b97 RBAC \u7cfb\u7edf\u4e2d\u7684\u9ed8\u8ba4\u89d2\u8272\u4e4b\u4e00\u3002\u5141\u8bb8\u7528\u6237\u4e3a\u5b9e\u4f8b\u5206\u914d\u53ef\u516c\u5f00\u8bbf\u95ee\u7684 IP \u5730\u5740\u5e76\u66f4\u6539\u9632\u706b\u5899\u89c4\u5219\u3002 NetApp \u5377\u9a71\u52a8\u7a0b\u5e8f \u4f7f\u8ba1\u7b97\u80fd\u591f\u901a\u8fc7 NetApp OnCommand \u914d\u7f6e\u7ba1\u7406\u5668\u4e0e NetApp \u5b58\u50a8\u8bbe\u5907\u8fdb\u884c\u901a\u4fe1\u3002 \u7f51\u7edc \u5728\u5b9e\u4f53\u4e4b\u95f4\u63d0\u4f9b\u8fde\u63a5\u7684\u865a\u62df\u7f51\u7edc\u3002\u4f8b\u5982\uff0c\u5171\u4eab\u7f51\u7edc\u8fde\u63a5\u7684\u865a\u62df\u7aef\u53e3\u7684\u96c6\u5408\u3002\u5728\u7f51\u7edc\u672f\u8bed\u4e2d\uff0c\u7f51\u7edc\u59cb\u7ec8\u662f\u7b2c 2 \u5c42\u7f51\u7edc\u3002 \u7f51\u7edc\u5730\u5740\u8f6c\u6362 \uff08NAT\uff09 \u5728\u4f20\u8f93\u8fc7\u7a0b\u4e2d\u4fee\u6539 IP \u5730\u5740\u4fe1\u606f\u7684\u8fc7\u7a0b\u3002\u7531\u8ba1\u7b97\u548c\u7f51\u7edc\u652f\u6301\u3002 \u7f51\u7edc\u63a7\u5236\u5668 \u4e00\u4e2a\u8ba1\u7b97\u5b88\u62a4\u7a0b\u5e8f\uff0c\u7528\u4e8e\u534f\u8c03\u8282\u70b9\u7684\u7f51\u7edc\u914d\u7f6e\uff0c\u5305\u62ec IP \u5730\u5740\u3001VLAN \u548c\u6865\u63a5\u3002\u8fd8\u7ba1\u7406\u516c\u5171\u7f51\u7edc\u548c\u4e13\u7528\u7f51\u7edc\u7684\u8def\u7531\u3002 \u7f51\u7edc\u6587\u4ef6\u7cfb\u7edf \uff08NFS\uff09 \u4e00\u79cd\u4f7f\u6587\u4ef6\u7cfb\u7edf\u5728\u7f51\u7edc\u4e0a\u53ef\u7528\u7684\u65b9\u6cd5\u3002\u7531 OpenStack \u652f\u6301\u3002 \u7f51\u7edc ID \u5206\u914d\u7ed9\u7f51\u7edc\u4e2d\u6bcf\u4e2a\u7f51\u6bb5\u7684\u552f\u4e00 ID\u3002\u4e0e\u7f51\u7edc UUID \u76f8\u540c\u3002 \u7f51\u7edc\u7ba1\u7406\u5668 \u7528\u4e8e\u7ba1\u7406\u5404\u79cd\u7f51\u7edc\u7ec4\u4ef6\uff08\u5982\u9632\u706b\u5899\u89c4\u5219\u3001IP \u5730\u5740\u5206\u914d\u7b49\uff09\u7684\u8ba1\u7b97\u7ec4\u4ef6\u3002 \u7f51\u7edc\u547d\u540d\u7a7a\u95f4 Linux \u5185\u6838\u529f\u80fd\uff0c\u5728\u5355\u4e2a\u4e3b\u673a\u4e0a\u63d0\u4f9b\u72ec\u7acb\u7684\u865a\u62df\u7f51\u7edc\u5b9e\u4f8b\uff0c\u5177\u6709\u5355\u72ec\u7684\u8def\u7531\u8868\u548c\u63a5\u53e3\u3002\u7c7b\u4f3c\u4e8e\u7269\u7406\u7f51\u7edc\u8bbe\u5907\u4e0a\u7684\u865a\u62df\u8def\u7531\u548c\u8f6c\u53d1 \uff08VRF\uff09 \u670d\u52a1\u3002 \u7f51\u7edc\u8282\u70b9 \u8fd0\u884c Network Worker \u5b88\u62a4\u7a0b\u5e8f\u7684\u4efb\u4f55\u8ba1\u7b97\u8282\u70b9\u3002 \u7f51\u7edc\u6bb5 \u8868\u793a\u7f51\u7edc\u4e2d\u865a\u62df\u7684\u9694\u79bb OSI \u7b2c 2 \u5c42\u5b50\u7f51\u3002 \u7f51\u7edc\u670d\u52a1\u6807\u5934 \uff08NSH\uff09 \u63d0\u4f9b\u6cbf\u5b9e\u4f8b\u5316\u670d\u52a1\u8def\u5f84\u8fdb\u884c\u5143\u6570\u636e\u4ea4\u6362\u7684\u673a\u5236\u3002 \u7f51\u7edc\u65f6\u95f4\u534f\u8bae \uff08NTP\uff09 \u901a\u8fc7\u4e0e\u53ef\u4fe1\u3001\u51c6\u786e\u7684\u65f6\u95f4\u6e90\u901a\u4fe1\u6765\u4fdd\u6301\u4e3b\u673a\u6216\u8282\u70b9\u65f6\u949f\u6b63\u786e\u7684\u65b9\u6cd5\u3002 \u7f51\u7edc UUID \u7f51\u7edc\u7f51\u6bb5\u7684\u552f\u4e00 ID\u3002 \u7f51\u7edc\u5de5\u4f5c\u8fdb\u7a0b nova-network worker \u5b88\u62a4\u8fdb\u7a0b;\u63d0\u4f9b\u8bf8\u5982\u4e3a\u542f\u52a8\u7684 nova \u5b9e\u4f8b\u63d0\u4f9b IP 
\u5730\u5740\u7b49\u670d\u52a1\u3002 \u7f51\u7edc API\uff08Neutron API\uff09 \u7528\u4e8e\u8bbf\u95ee OpenStack Networking \u7684 API\u3002\u63d0\u4f9b\u53ef\u6269\u5c55\u7684\u4f53\u7cfb\u7ed3\u6784\u4ee5\u542f\u7528\u81ea\u5b9a\u4e49\u63d2\u4ef6\u521b\u5efa\u3002 \u7f51\u7edc\u670d\u52a1\uff08neutron\uff09 OpenStack \u9879\u76ee\uff0c\u5b83\u5b9e\u73b0\u4e86\u670d\u52a1\u548c\u76f8\u5173\u5e93\uff0c\u4ee5\u63d0\u4f9b\u6309\u9700\u3001\u53ef\u6269\u5c55\u4e14\u4e0e\u6280\u672f\u65e0\u5173\u7684\u7f51\u7edc\u62bd\u8c61\u3002 neutron OpenStack Networking \u670d\u52a1\u7684\u4ee3\u53f7\u3002 neutron API \u7f51\u7edc API \u7684\u66ff\u4ee3\u540d\u79f0\u3002 Neutron \u7ba1\u7406\u5668 \u542f\u7528\u8ba1\u7b97\u548c\u7f51\u7edc\u96c6\u6210\uff0c\u4f7f\u7f51\u7edc\u80fd\u591f\u5bf9\u6765\u5bbe VM \u6267\u884c\u7f51\u7edc\u7ba1\u7406\u3002 Neutron \u63d2\u4ef6 \u7f51\u7edc\u4e2d\u7684\u63a5\u53e3\uff0c\u4f7f\u7ec4\u7ec7\u80fd\u591f\u4e3a\u9ad8\u7ea7\u529f\u80fd\uff08\u5982 QoS\u3001ACL \u6216 IDS\uff09\u521b\u5efa\u81ea\u5b9a\u4e49\u63d2\u4ef6\u3002 Newton OpenStack \u7b2c 14 \u7248\u7684\u4ee3\u53f7\u3002\u8bbe\u8ba1\u5cf0\u4f1a\u5728\u7f8e\u56fd\u5fb7\u514b\u8428\u65af\u5dde\u5965\u65af\u6c40\u4e3e\u884c\u3002\u8be5\u7248\u672c\u4ee5\u4f4d\u4e8e\u5fb7\u514b\u8428\u65af\u5dde\u5965\u65af\u6c40\u5e02\u7b2c\u4e5d\u8857 1013 \u53f7\u7684\u201cNewton House\u201d\u547d\u540d\u3002\u88ab\u5217\u5165\u56fd\u5bb6\u53f2\u8ff9\u540d\u5f55\u3002 Nexenta \u5377\u9a71\u52a8\u7a0b\u5e8f \u4e3a\u8ba1\u7b97\u4e2d\u7684 NexentaStor \u8bbe\u5907\u63d0\u4f9b\u652f\u6301\u3002 NFV \u7f16\u6392\u670d\u52a1\uff08tacker\uff09 OpenStack \u670d\u52a1\uff0c\u65e8\u5728\u5b9e\u73b0\u7f51\u7edc\u529f\u80fd\u865a\u62df\u5316 \uff08NFV\uff09 \u7f16\u6392\u670d\u52a1\u548c\u5e93\uff0c\u7528\u4e8e\u7f51\u7edc\u670d\u52a1\u548c\u865a\u62df\u7f51\u7edc\u529f\u80fd \uff08VNF\uff09 \u7684\u7aef\u5230\u7aef\u751f\u547d\u5468\u671f\u7ba1\u7406\u3002 Nginx \u51fd\u6570 HTTP \u548c\u53cd\u5411\u4ee3\u7406\u670d\u52a1\u5668\u3001\u90ae\u4ef6\u4ee3\u7406\u670d\u52a1\u5668\u548c\u901a\u7528 TCP/UDP \u4ee3\u7406\u670d\u52a1\u5668\u3002 \u65e0 ACK \u5728 Compute RabbitMQ \u4e2d\u7981\u7528\u670d\u52a1\u5668\u7aef\u6d88\u606f\u786e\u8ba4\u3002\u63d0\u9ad8\u6027\u80fd\u4f46\u964d\u4f4e\u53ef\u9760\u6027\u3002 \u8282\u70b9 \u5728\u4e3b\u673a\u4e0a\u8fd0\u884c\u7684 VM \u5b9e\u4f8b\u3002 \u975e\u6301\u4e45\u4ea4\u6362 \u670d\u52a1\u91cd\u65b0\u542f\u52a8\u65f6\u6e05\u9664\u7684\u6d88\u606f\u4ea4\u6362\u3002\u5176\u6570\u636e\u4e0d\u4f1a\u5199\u5165\u6301\u4e45\u6027\u5b58\u50a8\u3002 \u975e\u6301\u4e45\u961f\u5217 \u670d\u52a1\u91cd\u65b0\u542f\u52a8\u65f6\u6e05\u9664\u7684\u6d88\u606f\u961f\u5217\u3002\u5176\u6570\u636e\u4e0d\u4f1a\u5199\u5165\u6301\u4e45\u6027\u5b58\u50a8\u3002 \u975e\u6301\u4e45\u5316\u5377 \u4e34\u65f6\u5377\u7684\u66ff\u4ee3\u672f\u8bed\u3002 \u5357\u5317\u5411\u6d41\u91cf \u7528\u6237\u6216\u5ba2\u6237\u7aef\uff08\u5317\uff09\u4e0e\u670d\u52a1\u5668\uff08\u5357\uff09\u4e4b\u95f4\u7684\u7f51\u7edc\u6d41\u91cf\uff0c\u6216\u8fdb\u5165\u4e91\uff08\u5357\uff09\u548c\u4e91\u5916\uff08\u5317\uff09\u7684\u6d41\u91cf\u3002\u53e6\u8bf7\u53c2\u9605\u4e1c\u897f\u5411\u6d41\u91cf\u3002 nova OpenStack \u8ba1\u7b97\u670d\u52a1\u7684\u4ee3\u53f7\u3002 Nova API \u63a5\u53e3 \u8ba1\u7b97 API \u7684\u66ff\u4ee3\u672f\u8bed\u3002 nova-network \uff08\u65b0\u661f\u7f51\u7edc\uff09 \u4e00\u4e2a\u8ba1\u7b97\u7ec4\u4ef6\uff0c\u7528\u4e8e\u7ba1\u7406 IP 
\u5730\u5740\u5206\u914d\u3001\u9632\u706b\u5899\u548c\u5176\u4ed6\u4e0e\u7f51\u7edc\u76f8\u5173\u7684\u4efb\u52a1\u3002\u8fd9\u662f\u65e7\u7248\u7f51\u7edc\u9009\u9879\uff0c\u4e5f\u662f\u7f51\u7edc\u7684\u66ff\u4ee3\u65b9\u6cd5\u3002","title":"N"},{"location":"security/security-guide/#o","text":"\u5bf9\u8c61 \u5bf9\u8c61\u5b58\u50a8\u4fdd\u5b58\u7684\u6570\u636e\u7684 BLOB;\u53ef\u4ee5\u662f\u4efb\u4f55\u683c\u5f0f\u3002 \u5bf9\u8c61\u5ba1\u8ba1\u5668 \u6253\u5f00\u5bf9\u8c61\u670d\u52a1\u5668\u7684\u6240\u6709\u5bf9\u8c61\uff0c\u5e76\u9a8c\u8bc1\u6bcf\u4e2a\u5bf9\u8c61\u7684 MD5 \u54c8\u5e0c\u3001\u5927\u5c0f\u548c\u5143\u6570\u636e\u3002 \u5bf9\u8c61\u8fc7\u671f Object Storage \u4e2d\u7684\u4e00\u4e2a\u53ef\u914d\u7f6e\u9009\u9879\uff0c\u7528\u4e8e\u5728\u7ecf\u8fc7\u6307\u5b9a\u65f6\u95f4\u6216\u8fbe\u5230\u7279\u5b9a\u65e5\u671f\u540e\u81ea\u52a8\u5220\u9664\u5bf9\u8c61\u3002 \u5bf9\u8c61\u54c8\u5e0c \u5bf9\u8c61\u5b58\u50a8\u5bf9\u8c61\u7684\u552f\u4e00 ID\u3002 \u5bf9\u8c61\u8def\u5f84\u54c8\u5e0c \u5bf9\u8c61\u5b58\u50a8\u7528\u4e8e\u786e\u5b9a\u5bf9\u8c61\u5728\u73af\u4e2d\u7684\u4f4d\u7f6e\u3002\u5c06\u5bf9\u8c61\u6620\u5c04\u5230\u5206\u533a\u3002 \u5bf9\u8c61\u590d\u5236\u5668 \u4e00\u4e2a\u5bf9\u8c61\u5b58\u50a8\u7ec4\u4ef6\uff0c\u7528\u4e8e\u5c06\u5bf9\u8c61\u590d\u5236\u5230\u8fdc\u7a0b\u5206\u533a\u4ee5\u5b9e\u73b0\u5bb9\u9519\u3002 \u5bf9\u8c61\u670d\u52a1\u5668 \u8d1f\u8d23\u7ba1\u7406\u5bf9\u8c61\u7684\u5bf9\u8c61\u5b58\u50a8\u7ec4\u4ef6\u3002 \u5bf9\u8c61\u5b58\u50a8 API \u7528\u4e8e\u8bbf\u95ee OpenStack \u5bf9\u8c61\u5b58\u50a8\u7684 API\u3002 \u5bf9\u8c61\u5b58\u50a8\u8bbe\u5907 \uff08OSD\uff09 Ceph \u5b58\u50a8\u5b88\u62a4\u8fdb\u7a0b\u3002 \u5bf9\u8c61\u5b58\u50a8\u670d\u52a1\uff08swift\uff09 OpenStack \u6838\u5fc3\u9879\u76ee\uff0c\u4e3a\u56fa\u5b9a\u6570\u5b57\u5185\u5bb9\u63d0\u4f9b\u6700\u7ec8\u4e00\u81f4\u6027\u548c\u5197\u4f59\u7684\u5b58\u50a8\u548c\u68c0\u7d22\u3002 \u5bf9\u8c61\u7248\u672c\u63a7\u5236 \u5141\u8bb8\u7528\u6237\u5728\u5bf9\u8c61\u5b58\u50a8\u5bb9\u5668\u4e0a\u8bbe\u7f6e\u6807\u5fd7\uff0c\u4ee5\u4fbf\u5bf9\u5bb9\u5668\u5185\u7684\u6240\u6709\u5bf9\u8c61\u8fdb\u884c\u7248\u672c\u63a7\u5236\u3002 Ocata OpenStack \u7b2c 15 \u7248\u7684\u4ee3\u53f7\u3002\u8bbe\u8ba1\u5cf0\u4f1a\u5728\u897f\u73ed\u7259\u5df4\u585e\u7f57\u90a3\u4e3e\u884c\u3002Ocata\u662f\u5df4\u585e\u7f57\u90a3\u5317\u90e8\u7684\u4e00\u4e2a\u6d77\u6ee9\u3002 Octavia \u8d1f\u8f7d\u5e73\u8861\u670d\u52a1\u7684\u4ee3\u53f7\u3002 Oldie \u957f\u65f6\u95f4\u8fd0\u884c\u7684\u5bf9\u8c61\u5b58\u50a8\u8fdb\u7a0b\u7684\u672f\u8bed\u3002\u53ef\u4ee5\u6307\u793a\u6302\u8d77\u7684\u8fdb\u7a0b\u3002 \u5f00\u653e\u4e91\u8ba1\u7b97\u63a5\u53e3\uff08OCCI\uff09 \u7528\u4e8e\u7ba1\u7406\u8ba1\u7b97\u3001\u6570\u636e\u548c\u7f51\u7edc\u8d44\u6e90\u7684\u6807\u51c6\u5316\u63a5\u53e3\uff0c\u76ee\u524d\u5728 OpenStack \u4e2d\u4e0d\u53d7\u652f\u6301\u3002 \u5f00\u653e\u865a\u62df\u5316\u683c\u5f0f \uff08OVF\uff09 \u6253\u5305 VM \u6620\u50cf\u7684\u6807\u51c6\u3002\u5728 OpenStack \u4e2d\u53d7\u652f\u6301\u3002 \u6253\u5f00 vSwitch Open vSwitch \u662f\u5728\u5f00\u6e90 Apache 2.0 \u8bb8\u53ef\u8bc1\u4e0b\u83b7\u5f97\u8bb8\u53ef\u7684\u751f\u4ea7\u8d28\u91cf\u7684\u591a\u5c42\u865a\u62df\u4ea4\u6362\u673a\u3002\u5b83\u65e8\u5728\u901a\u8fc7\u7f16\u7a0b\u6269\u5c55\u5b9e\u73b0\u5927\u89c4\u6a21\u7f51\u7edc\u81ea\u52a8\u5316\uff0c\u540c\u65f6\u4ecd\u652f\u6301\u6807\u51c6\u7ba1\u7406\u63a5\u53e3\u548c\u534f\u8bae\uff08\u4f8b\u5982 
NetFlow\u3001sFlow\u3001SPAN\u3001RSPAN\u3001CLI\u3001LACP\u3001802.1ag\uff09\u3002 Open vSwitch\uff08OVS\uff09\u4ee3\u7406 \u4e3a\u7f51\u7edc\u63d2\u4ef6\u63d0\u4f9b\u5e95\u5c42 Open vSwitch \u670d\u52a1\u7684\u63a5\u53e3\u3002 \u6253\u5f00 vSwitch neutron \u63d2\u4ef6 \u5728\u7f51\u7edc\u4e2d\u63d0\u4f9b\u5bf9 Open vSwitch \u7684\u652f\u6301\u3002 OpenDev OpenDev \u662f\u4e00\u4e2a\u534f\u4f5c\u5f00\u6e90\u8f6f\u4ef6\u5f00\u53d1\u7684\u7a7a\u95f4\u3002 OpenDev \u7684\u4f7f\u547d\u662f\u4e3a\u5f00\u6e90\u8f6f\u4ef6\u9879\u76ee\u63d0\u4f9b\u9879\u76ee\u6258\u7ba1\u3001\u6301\u7eed\u96c6\u6210\u5de5\u5177\u548c\u865a\u62df\u534f\u4f5c\u7a7a\u95f4\u3002OpenDev \u672c\u8eab\u662f\u81ea\u6258\u7ba1\u5728\u8fd9\u5957\u5de5\u5177\u4e0a\uff0c\u5305\u62ec\u4ee3\u7801\u5ba1\u67e5\u3001\u6301\u7eed\u96c6\u6210\u3001etherpad\u3001wiki\u3001\u4ee3\u7801\u6d4f\u89c8\u7b49\u3002\u8fd9\u610f\u5473\u7740 OpenDev \u672c\u8eab\u5c31\u50cf\u4e00\u4e2a\u5f00\u6e90\u9879\u76ee\u4e00\u6837\u8fd0\u884c\uff0c\u60a8\u53ef\u4ee5\u52a0\u5165\u6211\u4eec\u5e76\u5e2e\u52a9\u8fd0\u884c\u7cfb\u7edf\u3002\u6b64\u5916\uff0c\u8fd0\u884c\u7684\u6240\u6709\u670d\u52a1\u672c\u8eab\u90fd\u662f\u5f00\u6e90\u8f6f\u4ef6\u3002 OpenStack \u9879\u76ee\u662f\u4f7f\u7528 OpenDev \u7684\u6700\u5927\u9879\u76ee\u3002 OpenLDAP \u5f00\u6e90 LDAP \u670d\u52a1\u5668\u3002\u53d7\u8ba1\u7b97\u548c\u6807\u8bc6\u652f\u6301\u3002 OpenStack OpenStack \u662f\u4e00\u4e2a\u4e91\u64cd\u4f5c\u7cfb\u7edf\uff0c\u53ef\u63a7\u5236\u6574\u4e2a\u6570\u636e\u4e2d\u5fc3\u7684\u5927\u578b\u8ba1\u7b97\u3001\u5b58\u50a8\u548c\u7f51\u7edc\u8d44\u6e90\u6c60\uff0c\u6240\u6709\u8fd9\u4e9b\u8d44\u6e90\u90fd\u901a\u8fc7\u4eea\u8868\u677f\u8fdb\u884c\u7ba1\u7406\uff0c\u8be5\u4eea\u8868\u677f\u4f7f\u7ba1\u7406\u5458\u80fd\u591f\u8fdb\u884c\u63a7\u5236\uff0c\u540c\u65f6\u6388\u6743\u7528\u6237\u901a\u8fc7 Web \u754c\u9762\u914d\u7f6e\u8d44\u6e90\u3002OpenStack \u662f\u4e00\u4e2a\u6839\u636e Apache License 2.0 \u8bb8\u53ef\u7684\u5f00\u6e90\u9879\u76ee\u3002 OpenStack \u4ee3\u7801\u540d\u79f0 \u6bcf\u4e2a OpenStack \u7248\u672c\u90fd\u6709\u4e00\u4e2a\u4ee3\u53f7\u3002\u4ee3\u53f7\u6309\u5b57\u6bcd\u987a\u5e8f\u6392\u5217\uff1aAustin, Bexar, Cactus, Diablo, Essex, Folsom, Grizzly, Havana, Icehouse, Juno, Kilo, Liberty, Mitaka, Newton, Ocata, Pike, Queens, Rocky, Stein, Train, Ussuri, Victoria, Wallaby, Xena, Yoga, Zed\u3002 Wallaby \u662f\u65b0\u7b56\u7565\u9009\u62e9\u7684\u7b2c\u4e00\u4e2a\u4ee3\u53f7\uff1a\u4ee3\u53f7\u7531\u793e\u533a\u6309\u7167\u5b57\u6bcd\u987a\u5e8f\u9009\u62e9\uff0c\u6709\u5173\u8be6\u7ec6\u4fe1\u606f\uff0c\u8bf7\u53c2\u9605\u53d1\u5e03\u540d\u79f0\u6807\u51c6\u3002 \u7ef4\u591a\u5229\u4e9a\u7684\u540d\u5b57\u662f\u59d3\u6c0f\uff0c\u5176\u4e2d\u4ee3\u53f7\u662f\u9760\u8fd1\u76f8\u5e94OpenStack\u8bbe\u8ba1\u5cf0\u4f1a\u4e3e\u529e\u5730\u7684\u57ce\u5e02\u6216\u53bf\u3002\u4e00\u4e2a\u4f8b\u5916\uff0c\u79f0\u4e3a\u6c83\u5c14\u767b\u4f8b\u5916\uff0c\u88ab\u6388\u4e88\u5dde\u65d7\u4e2d\u542c\u8d77\u6765\u7279\u522b\u9177\u7684\u5143\u7d20\u3002\u4ee3\u53f7\u7531\u5927\u4f17\u6295\u7968\u9009\u51fa\u3002 
Meanwhile, as the alphabet of OpenStack release names was exhausted, the Technical Committee changed the naming process so that a release number plus a release name identify a release. The release number, "year.count-within-year", is the primary identifier, and the name is used mainly for marketing purposes. The first such release was 2023.1 Antelope, followed by 2023.2 Bobcat and 2024.1 Caracal.
openSUSE: A Linux distribution that is compatible with OpenStack.
operator: The person responsible for planning and maintaining an OpenStack installation.
optional service: Official OpenStack services defined as optional by the Interop working group. Currently consists of Dashboard (horizon), Telemetry service (Telemetry), Orchestration service (heat), Database service (trove), Bare Metal service (ironic), and others.
Orchestration service (heat): The OpenStack service that orchestrates composite cloud applications using a declarative template format through an OpenStack-native REST API.
orphan: In the context of Object Storage, a process that is not terminated after an upgrade, restart, or reload of the service.
Oslo: Codename for the Common Libraries project.

P

panko: Part of the OpenStack Telemetry service; provides event storage.
parent cell: If a requested resource such as CPU time, disk storage, or memory is not available in the parent cell, the request is forwarded to its associated child cells.
partition: A unit of storage within Object Storage used to store objects. It exists on top of devices and is replicated for fault tolerance.
partition index: Contains the locations of all Object Storage partitions within the ring.
partition shift value: Used by Object Storage to determine which partition data should reside on.
path MTU discovery (PMTUD): Mechanism in IP networks to detect the end-to-end MTU and adjust the packet size accordingly.
pause: A VM state where no changes occur (no changes in memory, network communications stop, and so on); the VM is frozen but not shut down.
PCI passthrough: Gives a guest VM exclusive access to a PCI device. Supported in OpenStack Havana and later releases.
persistent message: A message that is stored both in memory and on disk; the message is not lost after a failure or restart.
persistent volume: Changes to these types of disk volumes are saved.
personality file: A file used to customize a Compute instance. It can be used to inject SSH keys or a specific network configuration.
Pike: Codename of the 16th OpenStack release. The OpenStack Summit took place in Boston, Massachusetts, USA. The release is named after the Massachusetts Turnpike, commonly abbreviated to the Mass Pike, whose easternmost stretch is part of Interstate 90.
Platform-as-a-Service (PaaS): Provides to the consumer an operating system and, often, a language runtime and libraries (collectively, the "platform") on which they can run their own application code, without giving the consumer any control over the underlying infrastructure. Examples of PaaS providers include Cloud Foundry and OpenShift.
plug-in: Software component providing the actual implementation for Networking APIs, or for Compute APIs, depending on the context.
policy service: Component of Identity that provides a rule-management interface and a rule-based authorization engine.
policy-based routing (PBR): Provides a mechanism to implement packet forwarding and routing according to the policies defined by the network administrator.
pool: A logical set of devices, such as web servers, that you group together to receive and process traffic. The load balancing function chooses which member of the pool handles the new requests or connections received on the VIP
address. Each VIP has one pool.
pool member: An application that runs on a back-end server in a load-balancing system.
port: A virtual network port within Networking; VIFs / vNICs are connected to a port.
port UUID: Unique ID of a Networking port.
preseed: A tool to automate system configuration and installation on Debian-based Linux distributions.
private cloud: Computing resources used exclusively by one business or organization.
private image: An Image service VM image that is only available to specified projects.
private IP address: An IP address used for management and administration, not available to the public Internet.
private network: The Network Controller provides virtual networks to enable compute servers to interact with each other and with the public network. All machines must have a public and private network interface. A private network interface can be a flat or VLAN network interface; a flat network interface is controlled by the flat_interface option with flat managers, and a VLAN network interface is controlled by the vlan_interface option with VLAN managers.
project: Projects represent the base unit of "ownership" in OpenStack, in that all resources in OpenStack should be owned by a specific project. In OpenStack Identity, a project must be owned by a specific domain.
project ID: Unique ID assigned to each project by the Identity service.
project VPN: Alternative term for a cloudpipe.
promiscuous mode: Causes the network interface to pass all traffic it receives to the host rather than passing only the frames addressed to it.
protected property: Generally, an extra property on an Image service image to which only cloud administrators have access. Limits which user roles can perform CRUD operations on that property; the cloud administrator can configure any image property as protected.
provider: An administrator who has access to all hosts and instances.
proxy node: A node that provides the Object Storage proxy service.
proxy server: Users of Object Storage interact with the service through the proxy server, which in turn looks up the location of the requested data within the ring and returns the result to the user.
public API: An API endpoint used for both service-to-service communication and end-user interaction.
public cloud: A data center accessible by many users over the Internet.
public image: An Image service VM image that is available to all projects.
public IP address: An IP address that is accessible to end users.
public key authentication: Authentication method that uses keys instead of passwords.
public network: The Network Controller provides virtual networks to enable compute servers to interact with each other and with the public network. All machines must have a public and private network interface. The public network interface is controlled by the public_interface option.
Puppet: An operating system configuration-management tool supported by OpenStack.
Python: Programming language used extensively in OpenStack.

Q

QEMU Copy On Write 2 (QCOW2): One of the VM image disk formats supported by the Image service.
Qpid: Message queue software supported by OpenStack; an alternative to RabbitMQ.
Quality of Service (QoS): The ability to guarantee certain network or storage requirements to satisfy a Service Level Agreement (SLA) between an application provider and end users. Typically includes performance requirements such as network bandwidth, latency, jitter correction, and reliability, as well as storage performance in input/output operations per second (IOPS), throttling agreements, and performance expectations at peak load.
quarantine: If Object Storage finds objects, containers, or accounts that are corrupted, they are placed in this state, are not replicated, cannot be read by clients, and a correct copy is re-replicated.
Queens: Codename of the 17th OpenStack release. The OpenStack Summit took place in Sydney, Australia. The release is named after the Queens Pound river in the South Coast region of New South Wales.
Quick EMUlator (QEMU): QEMU is a generic, open source machine emulator and virtualizer. One of the hypervisors supported by OpenStack,
generally used for development purposes.
quota: In Compute and Block Storage, the ability to set resource limits on a per-project basis.

R

RabbitMQ: The default message queue software used by OpenStack.
Rackspace Cloud Files: Released as open source by Rackspace in 2010; the basis for Object Storage.
RADOS Block Device (RBD): Ceph component that enables a Linux block device to be striped over multiple distributed data stores.
radvd: The router advertisement daemon, used by the Compute VLAN manager and FlatDHCP manager to provide routing services for VM instances.
rally: Codename for the Benchmark service.
RAM filter: The Compute setting that enables or disables RAM overcommitment.
RAM overcommit: The ability to start new VM instances based on the actual memory usage of a host, as opposed to basing the decision on the amount of RAM each running instance thinks it has available. Also known as memory overcommit.
rate limit: Configurable option within Object Storage to limit database writes on a per-account and/or per-container basis.
raw: One of the VM image disk formats supported by the Image service; an unstructured disk image.
rebalance: The process of distributing Object Storage partitions across all drives in the ring; used during initial ring creation and after ring reconfiguration.
reboot: Either a soft or hard reboot of a server. With a soft reboot, the operating system is signaled to restart, which allows all processes to shut down gracefully; a hard reboot is the equivalent of power cycling the server. The virtualization platform should ensure that the reboot action has completed successfully even when the underlying domain/VM is paused or halted/stopped.
rebuild: Removes all data on the server and replaces it with the specified image. The server ID and IP addresses remain the same.
Recon: An Object Storage component that collects metrics.
record: Belongs to a particular domain and is used to specify information about the domain. There are several types of DNS records; each record type contains particular information used to describe the purpose of that record. Examples include mail exchange (MX) records, which specify the mail server for a particular domain, and name server (NS) records, which specify the authoritative name servers for a domain.
record ID: A number within a database that is incremented each time a change is made; used by Object Storage when replicating.
Red Hat Enterprise Linux (RHEL): A Linux distribution that is compatible with OpenStack.
reference architecture: A recommended architecture for an OpenStack cloud.
region: A discrete OpenStack environment with dedicated API endpoints that typically shares only the Identity service (keystone) with other regions.
registry: Alternative term for the Image service registry.
registry server: An Image service that provides VM image metadata information to clients.
Reliable, Autonomic Distributed Object Store (RADOS): A collection of components that provides object storage within Ceph; similar to OpenStack Object Storage.
Remote Procedure Call (RPC): The method used by Compute's RabbitMQ for intra-service communication.
replica: Provides data redundancy and fault tolerance by creating copies of Object Storage objects, accounts, and containers so that they are not lost when the underlying storage fails.
replica count: The number of replicas of the data in an Object Storage ring.
replication: The process of copying data to a separate physical device for fault tolerance and performance.
replicator: The Object Storage back-end process that creates and manages object replicas.
request ID: Unique ID assigned to each request sent to Compute.
rescue image: A special type of VM image that is booted when an instance is placed into rescue mode; allows an administrator to mount the instance's file systems to correct the problem.
resize: Converts an existing server to a different flavor, which scales the server up or down. The original server is saved to enable rollback if a problem occurs; all resizes must be tested and explicitly confirmed, at which time the original server is removed.
RESTful: A kind of web service
API that uses REST, or Representational State Transfer. REST is the architectural style for hypermedia systems that is used for the World Wide Web.
ring: An entity that maps Object Storage data to partitions. A separate ring exists for each service, such as account, object, and container.
ring builder: Builds and manages rings within Object Storage, assigns partitions to devices, and pushes the configuration to other storage nodes.
Rocky: Codename of the 18th OpenStack release. The OpenStack Summit took place in Vancouver, Canada. The release is named after the Rocky Mountains.
role: A personality that a user assumes to perform a specific set of operations. A role includes a set of rights and privileges, and a user assuming that role inherits them.
Role Based Access Control (RBAC): Provides a predefined list of actions that the user can perform, such as starting or stopping VMs or resetting passwords. Supported in both Identity and Compute, and can be configured through the dashboard.
role ID: Alphanumeric ID assigned to each Identity service role.
Root Cause Analysis (RCA) service (Vitrage): OpenStack project that aims to organize, analyze, and visualize OpenStack alarms and events, yield insights into the root cause of problems, and deduce their existence before they are directly detected.
rootwrap: A feature of Compute that allows the unprivileged "nova" user to run a specified list of commands as the Linux root user.
round-robin scheduler: Type of Compute scheduler that evenly distributes instances among available hosts.
router: A physical or virtual network device that passes network traffic between different networks.
routing key: The Compute direct exchanges, fanout exchanges, and topic exchanges use this key to determine how to process a message; processing varies depending on the exchange type.
RPC driver: Modular system that allows the underlying message queue software of Compute to be changed, for example from RabbitMQ to ZeroMQ or Qpid.
rsync: Used by Object Storage to push object replicas.
RXTX cap: Absolute limit on the amount of network traffic a Compute VM instance can send and receive.
RXTX quota: Soft limit on the amount of network traffic a Compute VM instance can send and receive.

S

sahara: Codename for the Data Processing service.
SAML assertion: Contains information about a user as provided by the identity provider; an indication that a user has been authenticated.
sandbox: A virtual space in which new or untested software can be run securely.
scheduler manager: A Compute component that determines where VM instances should start; uses a modular design to support a variety of scheduler types.
scoped token: An Identity service API access token that is associated with a specific project.
scrubber: Checks for and deletes unused VMs; the component of the Image service that implements delayed delete.
secret key: String of text known only by the user; used along with an access key to make requests to the Compute API.
secure boot: Process whereby the system firmware validates the authenticity of the code involved in the boot process.
secure shell (SSH): Open source tool used to access remote hosts through an encrypted communications channel; SSH key injection is supported by Compute.
security group: A set of network traffic filtering rules that are applied to a Compute instance.
segmented object: A large Object Storage object that has been broken up into pieces; the re-assembled object is called a concatenated object.
self-service: For IaaS, the ability for a regular (non-privileged) account to manage a virtual infrastructure component such as networks without involving an administrator.
SELinux: Linux kernel security module that provides the mechanism for supporting access control policies.
senlin: Codename for the Clustering service.
server: Computer that provides explicit services to the client software running on that system, often managing a variety of computer operations. A server is a VM instance in the Compute system; flavor and image are requisite elements when creating a server.
server image: Alternative term for a VM image.
server UUID: Unique ID assigned to each guest VM instance.
service: An OpenStack
service, such as Compute, Object Storage, or the Image service. Provides one or more endpoints through which users can access resources and perform operations.
service catalog: Alternative term for the Identity service catalog.
Service Function Chain (SFC): For a given service, SFC is the abstracted view of the required service functions and the order in which they are to be applied.
service ID: Unique ID assigned to each service that is available in the Identity service catalog.
Service Level Agreement (SLA): Contractual obligations that ensure the availability of a service.
service project: Special project that contains all services that are listed in the catalog.
service provider: A system that provides services to other system entities. In the case of federated identity, OpenStack Identity is the service provider.
service registration: An Identity service feature that enables services, such as Compute, to automatically register with the catalog.
service token: An administrator-defined token used by Compute to communicate securely with the Identity service.
session back end: The method of storage used by horizon to track client sessions, such as local memory, cookies, a database, or memcached.
session persistence: A feature of the load-balancing service. As long as a service is online, it attempts to force subsequent connections to that service to be redirected to the same node.
session storage: A horizon component that stores and tracks client session information; implemented through the Django sessions framework.
share: A remote, mountable file system in the context of the Shared File Systems service. You can mount a share to several hosts at once, and a share can be accessed from several hosts by several users.
share network: An entity in the context of the Shared File Systems service that encapsulates interaction with the Networking service. If the selected driver runs in a mode that requires such interaction, you must specify the share network when creating a share.
Shared File Systems API: The Shared File Systems service that provides a stable RESTful API; it authenticates and routes requests throughout the Shared File Systems service. python-manilaclient can be used to interact with the API.
Shared File Systems service (manila): The service that provides a set of services for managing shared file systems in a multi-project cloud environment, similar to how OpenStack provides block-based storage management through the OpenStack Block Storage service project. With the Shared File Systems service, you can create a remote file system and mount it on your instances, and read and write data in the file system from your instances.
shared IP address: An IP address that can be assigned to a VM instance within a shared IP group. Public IP addresses can be shared across multiple servers for use in various high-availability scenarios. When an IP address is shared to another server, the cloud network restrictions are modified so that each server can listen to and respond on that IP address; you can optionally specify that the target server's network configuration be modified. Shared IP addresses can be used with many standard heartbeat facilities, such as keepalive, that monitor for failure and manage IP failover.
shared IP group: A collection of servers that can share IPs with other members of the group. Any server in a group can share one or more public IPs with any other server in the group. With the exception of the first server in a shared IP group, servers must be launched into shared IP groups; a server may be a member of only one shared IP group.
shared storage: Block storage that is simultaneously accessible by multiple clients, for example NFS.
Sheepdog: Distributed block storage system for QEMU, supported by OpenStack.
Simple Cloud Identity Management (SCIM): Specification for managing identity in the cloud, currently unsupported by OpenStack.
Simple Protocol for Independent Computing Environments (SPICE): SPICE provides remote desktop access to guest virtual machines; it is an alternative to VNC. SPICE is supported by OpenStack.
Single-root I/O Virtualization (SR-IOV): A specification that, when implemented by a physical PCIe device, enables it to appear as multiple separate PCIe
devices. This enables multiple virtualized guests to share direct access to the physical device, offering improved performance over an equivalent virtual device. Supported in OpenStack Havana and later releases.
SmokeStack: Runs automated tests against the core OpenStack API; written in Rails.
snapshot: A point-in-time copy of an OpenStack storage volume or image. Use storage volume snapshots to back up volumes; use image snapshots to back up data, or as "gold" images for additional servers.
soft reboot: A controlled reboot where a VM instance is properly restarted through operating system commands.
Software Development Kit (SDK): Contains code, examples, and documentation that you use to create applications in the language of your choice.
Software Development Lifecycle Automation service (solum): OpenStack project that aims to make cloud services easier to consume and integrate with the application development process by automating the source-to-image process and simplifying app-centric deployment.
software-defined networking (SDN): Provides an approach for network administrators to manage computer network services through abstraction of lower-level functionality.
SolidFire Volume Driver: The Block Storage driver for the SolidFire iSCSI storage appliance.
solum: Codename for the Software Development Lifecycle Automation service.
spread-first scheduler: The Compute VM scheduling algorithm that attempts to start a new VM on the host with the least amount of load.
SQLAlchemy: An open source SQL toolkit for Python, used in OpenStack.
SQLite: A lightweight SQL database used as the default persistent storage method in many OpenStack services.
stack: A set of OpenStack resources created and managed by the Orchestration service according to a given template (either an AWS CloudFormation template or a Heat Orchestration Template (HOT)).
StackTach: Community project that captures Compute AMQP communications; useful for debugging.
static IP address: Alternative term for a fixed IP address.
Static Web: WSGI middleware component of Object Storage that serves container data as a static web page.
Stein: Codename of the 19th OpenStack release. The OpenStack Summit took place in Berlin, Germany. The release is named after the street Steinstraße in Berlin.
storage back end: The method that a service uses for persistent storage, such as iSCSI, NFS, or local disk.
storage manager: A XenAPI component that provides a pluggable interface to support a wide variety of persistent storage back ends.
storage manager back end: A persistent storage method supported by XenAPI, such as iSCSI or NFS.
storage node: An Object Storage node that provides container services, account services, and object services; controls the account databases, container databases, and object storage.
storage services: Collective name for the Object Storage object services, container services, and account services.
strategy: Specifies the authentication source used by the Image service or Identity. In the Database service, it refers to the extensions implemented for a data store.
subdomain: A domain within a parent domain. Subdomains cannot be registered; they enable you to delegate domains. Subdomains can themselves have subdomains, so third-level, fourth-level, fifth-level, and deeper levels of nesting are possible.
subnet: Logical subdivision of an IP network.
SUSE Linux Enterprise Server (SLES): A Linux distribution that is compatible with OpenStack.
suspend: The VM instance is paused and its state is saved to disk on the host.
swap: Disk-based virtual memory used by operating systems to provide more memory than is actually available on the system.
swift: Codename for the OpenStack Object Storage service.
swift All in One (SAIO)
swift middleware: Collective term for Object Storage components that provide additional functionality.
swift proxy server: Acts as the gatekeeper to Object Storage and is responsible for authenticating the user.
swift storage node: A node that runs Object Storage account, container, and object services.
sync point: Point in time since the last container and accounts database sync among nodes within Object Storage.
sysadmin: One of the default roles in the Compute RBAC system. Enables a user to add other users to a project, interact with VM
images that are associated with the project, and start and stop VM instances.
system usage: A Compute component that, along with the notification system, collects meters and usage information. This information can be used for billing.

T

Tacker: Codename for the NFV Orchestration service.
Telemetry service (telemetry): OpenStack project that collects measurements of the utilization of the physical and virtual resources comprising deployed clouds, persists this data for subsequent retrieval and analysis, and triggers actions when defined criteria are met.
TempAuth: An authentication facility within Object Storage that enables Object Storage itself to perform authentication and authorization; frequently used in testing and development.
Tempest: Automated software test suite designed to run against the trunk of the OpenStack core projects.
TempURL: An Object Storage middleware component that enables creation of URLs for temporary object access.
tenant: A group of users; used to isolate access to Compute resources. An alternative term for a project.
Tenant API: An API that is accessible to projects.
tenant endpoint: An Identity service API endpoint that is associated with one or more projects.
tenant ID: An alternative term for project ID.
token: An alphanumeric string of text used to access OpenStack APIs and resources.
token services: An Identity service component that manages and validates tokens after a user or project has been authenticated.
tombstone: Used to mark Object Storage objects that have been deleted; ensures that the object is not updated on another node after it has been deleted.
topic publisher: A process that is created when an RPC call is executed; used to push the message to the topic exchange.
Torpedo: Community project used to run automated tests against the OpenStack API.
Train: Codename of the 20th OpenStack release. The Open Infrastructure Summit took place in Denver, Colorado, US. Two of the Project Team Gathering meetings in Denver were held at a hotel next to the train line from downtown to the airport. The crossing signals there had previously suffered a malfunction that kept them from stopping the cars when a train approached as they normally would, so trains had to blow their horns while passing through the area. Staying in a hotel with trains blowing horns around the clock was obviously not ideal, many jokes about Denver and trains were made as a result, and this release is therefore called Train.
transaction ID: Unique ID assigned to each Object Storage request; used for debugging and tracing.
transient: Alternative term for non-durable.
transient exchange: Alternative term for a non-durable exchange.
transient message: A message that is stored in memory and is lost after the server is restarted.
transient queue: Alternative term for a non-durable queue.
TripleO: OpenStack-on-OpenStack program; the codename for the OpenStack Deployment program.
Trove: Codename for the OpenStack Database service.
Trusted Platform Module (TPM): Specialized microprocessor for incorporating cryptographic keys into devices, used to authenticate and secure a hardware platform.

U

Ubuntu: A Debian-based Linux distribution.
unscoped token: Alternative term for an Identity service default token.
updater: Collective term for a group of Object Storage components that process queued and failed updates for containers and objects.
user: In OpenStack Identity, an entity that represents an individual API consumer and is owned by a specific domain. In OpenStack Compute, a user can be associated with roles and/or projects.
user data: A blob of data that the user can specify when launching an instance. The instance can access this data through the metadata service or the config drive; commonly used to pass a shell script that the instance runs on boot.
User Mode Linux (UML): An OpenStack-supported hypervisor.
Ussuri: Codename of the 21st OpenStack release. The Open Infrastructure Summit took place in Shanghai, People's Republic of China. The release is named after the Ussuri river.

V

Victoria: Codename of the 22nd OpenStack release. The OpenDev + PTG
event was planned to be held in Vancouver, British Columbia, Canada. The release is named after Victoria, the capital city of British Columbia. Due to COVID-19 the in-person event was canceled and the event was held virtually.
VIF UUID: Unique ID assigned to each Networking VIF.
virtual CPU (vCPU): Subdivides physical CPUs; instances can then use those divisions.
Virtual Disk Image (VDI): One of the VM image disk formats supported by the Image service.
Virtual Extensible LAN (VXLAN): A network virtualization technology that attempts to reduce the scalability problems associated with large cloud computing deployments. It uses a VLAN-like encapsulation technique to encapsulate Ethernet frames within UDP packets.
Virtual Hard Disk (VHD): One of the VM image disk formats supported by the Image service.
virtual IP address (VIP): An Internet Protocol (IP) address configured on the load balancer for use by clients connecting to a service that is load balanced. Incoming connections are distributed to back-end nodes based on the configuration of the load balancer.
virtual machine (VM): An operating system instance that runs on top of a hypervisor. Multiple VMs can run at the same time on the same physical host.
virtual network: An L2 network segment within Networking.
Virtual Network Computing (VNC): Open source GUI and CLI tools used for remote console access to VMs; supported by Compute.
virtual network interface (VIF): An interface that is plugged into a port in a Networking network; typically a virtual network interface belonging to a VM.
virtual networking: A generic term for virtualization of network functions, such as switching, routing, load balancing, and security, using a combination of VMs and overlays on physical network infrastructure.
virtual port: Attachment point where a virtual interface connects to a virtual network.
virtual private network (VPN): Provided by Compute in the form of cloudpipes, specialized instances that are used to create VPNs on a per-project basis.
virtual server: Alternative term for a VM or guest.
virtual switch (vSwitch): Software that runs on a host or node and provides the features and functions of a hardware-based network switch.
virtual VLAN: Alternative term for a virtual network.
VirtualBox: An OpenStack-supported hypervisor.
Vitrage: Codename for the Root Cause Analysis service.
VLAN manager: A Compute component that provides dnsmasq and radvd and sets up forwarding to and from cloudpipe instances.
VLAN network: The Network Controller provides virtual networks to enable compute servers to interact with each other and with the public network. All machines must have a public and private network interface. A VLAN network is a private network interface, controlled by the vlan_interface option with VLAN managers.
Virtual Machine Disk (VMDK): One of the VM image disk formats supported by the Image service.
VM image: Alternative term for an image.
VM Remote Control (VMRC): Method of accessing VM instance consoles using a web browser; supported by Compute.
VMware API: Supports interaction with VMware products in Compute.
VMware NSX Neutron plug-in: Provides support for VMware NSX in Neutron.
VNC proxy: A Compute component that provides users access to the consoles of their VM instances through VNC or VMRC.
volume: Disk-based data storage, generally represented as an iSCSI target with a file system that supports extended attributes; can be persistent or ephemeral.
Volume API: Alternative name for the Block Storage API.
volume controller: A Block Storage component that oversees and coordinates storage volume actions.
volume driver: Alternative term for a volume plug-in.
volume ID: Unique ID applied to each storage volume under Block Storage control.
volume manager: A Block Storage component that creates, attaches, and detaches persistent storage volumes.
volume node: A Block Storage node that runs the cinder-volume daemon.
volume plug-in: Provides support for new and specialized types of back-end storage for the Block Storage volume manager.
volume worker: A cinder component that interacts with back-end storage to manage the creation and deletion of volumes and the creation of compute volumes; provided by the cinder-volume daemon.
vSphere: An OpenStack-supported hypervisor.

W

Wallaby: Codename of the 23rd
OpenStack release. Wallabies are native to Australia, which was experiencing unprecedented wild fires when this naming period began.
Watcher: Codename for the Infrastructure Optimization service.
weight: Used by Object Storage devices to determine which storage devices are suitable for the job; devices are weighted by size.
weighted cost: The sum of each cost used when deciding where to start a new VM instance in Compute.
weighting: A Compute process that determines the suitability of a particular host for a VM instance job, for example not enough RAM on the host, too many CPUs on the host, and so on.
worker: A daemon that listens to a queue and carries out tasks in response to messages. For example, the cinder-volume worker manages volume creation and deletion on storage arrays.
Workflow service (mistral): The OpenStack service that provides a simple YAML-based language to write workflows (tasks and transition rules), and a service that allows uploading them, modifying them, running them at scale and in a highly available manner, and managing and monitoring workflow execution state and the state of individual tasks.

X

X.509: X.509 is the most widely used standard for defining digital certificates. It is a data structure that contains the subject (entity) identifiable information, such as its name, along with its public key. Depending on the version, the certificate can also contain a few other attributes. The most recent standard version of X.509 is v3.
Xen: Xen is a hypervisor using a microkernel design, providing services that allow multiple computer operating systems to execute on the same computer hardware concurrently.
Xen API: The Xen administration API, supported by Compute.
Xen Cloud Platform (XCP): An OpenStack-supported hypervisor.
Xen Storage Manager Volume Driver: A Block Storage volume plug-in that enables communication with the Xen Storage Manager API.
Xena: Codename of the 24th OpenStack release. The release is named after a fictional warrior princess.
XenServer: An OpenStack-supported hypervisor.
XFS: High-performance 64-bit file system created by Silicon Graphics; excels at parallel I/O operations and data consistency.

Y

Yoga: Codename of the 25th OpenStack release. The release is named after a school of philosophy from India with mental and physical practices.

Z

Zaqar: Codename for the Message service.
Zed: Codename of the 26th OpenStack release. The release is named after the pronunciation of the letter Z.
ZeroMQ: Message queue software supported by OpenStack; an alternative to RabbitMQ. Also spelled 0MQ.
Zuul: Zuul is an open source CI/CD platform specializing in gating changes across multiple systems and applications before a single patch lands. Zuul is used for OpenStack development to ensure that only tested code gets merged.

Traffic Distribution

Overview

OpenStack provides compute and networking services to users. A user creates virtual machines and connects them to a Router to reach external networks, and can enable floating IP port forwarding so that devices on the external network can reach services running inside the virtual machines. At the same time, as the number of virtual machines and floating IP port forwardings grows, the load on the network node keeps increasing, so a way must be found to spread the network node's traffic and relieve its pressure. This design distributes network-node traffic in an OpenStack environment while remaining compatible with L3 HA and DVR and keeping the use of network resources to a minimum.

Background

The basic workflow for creating a virtual machine and connecting it to a Router is as follows (a CLI sketch of these steps is shown after the list):

1. Create the internal and external networks in advance.
2. When creating the Router, set its External Gateway to the external network created earlier.
3. Connect the Router to the internal network.
4. Specify the internal network when creating the virtual machine instance.
5. Create a floating IP from the external network.
6. Enable floating IP port forwarding for the virtual machine instance.
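For reference, the six steps above can be expressed with the standard openstack CLI. This is only a minimal sketch: the network, subnet, image, and flavor names (ext-net, int-net, cirros, m1.tiny) and the address ranges are illustrative assumptions, and the last command assumes the neutron port_forwarding service plug-in is enabled in the deployment.

```shell
# 1. Create the external (provider) and internal networks in advance.
openstack network create --external --provider-network-type flat \
    --provider-physical-network physnet1 ext-net
openstack subnet create --network ext-net --subnet-range 172.24.4.0/24 \
    --no-dhcp ext-subnet
openstack network create int-net
openstack subnet create --network int-net --subnet-range 10.0.0.0/24 int-subnet

# 2. Create the Router with the external network as its gateway,
# 3. and connect it to the internal subnet.
openstack router create router1
openstack router set --external-gateway ext-net router1
openstack router add subnet router1 int-subnet

# 4. Boot an instance on the internal network.
openstack server create --image cirros --flavor m1.tiny --network int-net vm1

# 5. Allocate a floating IP from the external network.
openstack floating ip create ext-net

# 6. Instead of binding the floating IP to the VM, enable port forwarding:
#    external port 2222 on the floating IP maps to port 22 on vm1's fixed IP
#    (10.0.0.10, <vm1-port-uuid>, and <floating-ip-id> are placeholders).
openstack floating ip port forwarding create \
    --internal-ip-address 10.0.0.10 \
    --internal-protocol-port 22 \
    --external-protocol-port 2222 \
    --protocol tcp \
    --port <vm1-port-uuid> \
    <floating-ip-id>
```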
After these operations the virtual machine instances can reach the external network, and devices on the external network can reach services inside the instances through the ports defined on the floating IP.

In a basic OpenStack environment the traffic of the virtual machine instances flows as shown below. After the user has created several instances, they may be spread evenly across the compute nodes, and the traffic may flow as shown in the following figure. As can be seen, both the east-west and the north-south traffic of the virtual machines passes through the Network-1 node. This clearly increases the load on that network node, and recovery is poor when the node fails.

Can the same subnet then simply be attached to several Routers? In OpenStack a subnet can indeed be attached to multiple Routers, but when a subnet is attached to a Router its gateway address is bound to that Router by default. A subnet has only one gateway address, and that address is also used by the DHCP service to hand out the next-hop gateway to the instances, so even if the subnet is attached to several Routers, the next hop inside the virtual machines is still the subnet's gateway address. Moreover, the user has no control over which network node a Router is scheduled to, so it can easily happen that, although the subnet is attached to two Routers, both of them end up on the same network node.

OpenStack does offer strategies for spreading traffic: Neutron's DVR feature can be enabled, and L3 HA can be enabled to protect against a single point of failure of the network node. Both approaches, however, have their limitations.

The traffic distribution offered by DVR is fairly limited, for the following reasons.

- DVR only applies to east-west traffic between instances on different compute nodes under the same Router and to the north-south traffic of instances that already have a floating IP bound; instances without a floating IP still have to go through the network node to reach the external network.
- In production it is impractical to bind a floating IP to every virtual machine. Port forwarding on a floating IP lets several virtual machines share a single floating IP, but in current OpenStack releases floating-IP port forwarding is always implemented in the network namespace on the network node, whether or not DVR is enabled.
- Finally, in DVR mode, so that north-south traffic can leave directly from the compute node instead of passing through the network node, a network namespace whose name starts with `fip` is created on every compute node, even if no virtual machine there has a floating IP bound. This fip namespace consumes an IP address of the external network, which increases network resource consumption.

L3 HA also has shortcomings. With L3 HA enabled, a Router uses keepalived to elect among several network nodes, and only the node whose keepalived instance is in the Master state actually carries traffic; the user has no say in which network node is chosen. Neutron does provide a default scheduling policy for Routers, least-routers, i.e. a Router is scheduled to the network node hosting the fewest Routers. Furthermore, keepalived is started in non-preemptive mode, so once the VIP has failed over it does not move back automatically even after the original master recovers, which adds to the uncertainty about which network node is actually serving the Router.

In summary, the existing mechanisms cannot achieve real traffic distribution: even with DVR enabled there is some extra consumption of network resources, and because the network node serving a Router is unpredictable, the north-south traffic of the virtual machines cannot be spread properly.

## Problems to solve

Implement traffic distribution in DVR mode, L3 HA mode and Legacy mode. The following technical problems have to be solved first:

- A Router can be pinned to a designated network node, whether or not L3 HA is enabled.
- When the same subnet is attached to multiple Routers, the DHCP service can hand out different routes to virtual machines on different compute nodes.
- When port forwarding is used, the IP address of the Router's External Gateway can serve as the externally visible address.

## Implementation

### Solving the problem of designating the L3 agent

First, the Router's underlying database table is extended with a `configurations` field that stores Router-related configuration. Without L3 HA its format is as follows, and the `preferred_agent` field designates the network node the Router lives on.

```
{
    "configurations": {
        "preferred_agent": "network-1"
    }
}
```

With L3 HA enabled the format is as follows: `master_agent` designates the network node with the Master role, and `slave_agents` is the list of network nodes with the Slave role.

```
{
    "configurations": {
        "slave_agents": ["compute-1"],
        "master_agent": "network-1"
    }
}
```

Next, the Router creation logic is changed by adding a new scheduling method. Neutron's default `router_scheduler_driver` is `LeastRoutersScheduler` (the network node with the fewest Routers); a new scheduler derived from that class can pick the designated network node from the Router's `configurations` field.
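As an illustration of the selection logic only (a minimal sketch, not actual Neutron scheduler code; in Neutron this would live in a subclass of `LeastRoutersScheduler` registered through the `router_scheduler_driver` option, and the function and key names below are assumptions):

```python
# Sketch of the agent-selection logic described above; not the real Neutron API.

def choose_l3_agent(router, candidate_agents):
    """Pick the L3 agent designated in the router's `configurations` field.

    router: dict carrying the new `configurations` field.
    candidate_agents: list of dicts with 'host' and 'router_count' keys (assumed shape).
    """
    conf = router.get('configurations') or {}
    # Without L3 HA a single node is designated; with L3 HA it is the master node.
    wanted = conf.get('preferred_agent') or conf.get('master_agent')

    for agent in candidate_agents:
        if wanted and agent['host'] == wanted:
            return agent

    # Fall back to the stock least-routers behaviour: fewest routers wins.
    return min(candidate_agents, key=lambda a: a['router_count'])


if __name__ == '__main__':
    agents = [{'host': 'network-1', 'router_count': 3},
              {'host': 'network-2', 'router_count': 1}]
    router = {'configurations': {'preferred_agent': 'network-1'}}
    print(choose_l3_agent(router, agents)['host'])  # -> network-1
```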
Finally, the Router update logic of neutron-l3-agent has to be modified. When neutron-l3-agent starts, it initialises a resource queue used for updating resource state and starts a daemon thread that consumes that queue; whenever the state of a network resource changes (create, delete or update), an entry is added to the queue, and the action to execute is then determined from the resource's type and state.

After a Router has been created, neutron-l3-agent eventually executes the `_process_added_router` method, which first calls the `initialize` method of `RouterInfo` and then its `process` method. The `initialize` method covers the initialisation of the Router's information, including creation of the network namespace, creation of ports, initialisation of the keepalived process, and so on. The `process` method performs the following operations:

- set up the internal ports that connect the internal networks;
- set up the external port that connects the external network;
- update the routing table;
- for a Router with L3 HA enabled, set up the HA port and then start the keepalived process;
- for a Router with DVR enabled, additionally set up the port in the fip namespace.

Only the case with L3 HA enabled needs extra handling here, because without L3 HA, once neutron-server has created the Router and the new scheduling method has picked a specific network node, the RPC call is sent directly to the neutron-l3-agent on that node. With L3 HA enabled, the scheduler picks the master and slave network nodes, and the RPC calls are sent to the neutron-l3-agent services on those nodes.

neutron-l3-agent starts one keepalived process per Router for L3 HA, so the keepalived start-up logic has to be adjusted during keepalived initialisation: the master and slave network nodes are read from the `configurations` field and compared with the current node to determine the node's role. Finally, because the master and slave nodes are now explicitly designated, keepalived is switched to preemptive mode, so that after a failed master network node recovers the VIP does not stay on a slave node.

### Solving the routing problem

The next problem is the routing of virtual machine instances once the same subnet is attached to multiple Routers. DHCP not only assigns DNS servers but also gateway addresses, i.e. routing information can be pushed to the instances through the DHCP protocol. In OpenStack, DHCP for virtual machine instances is provided by neutron-dhcp-agent, whose core functionality is essentially implemented by dnsmasq.

dnsmasq supports tags: a tag can be attached to a given IP address, and configuration can then be handed out per tag. The dnsmasq host configuration file looks like this:

```
fa:16:3e:28:a5:0a,host-172-16-0-1.openstacklocal,172.16.0.1,set:subnet-6a4db541-e563-43ff-891b-aa8c05c988c5
fa:16:3e:2b:dd:88,host-172-16-0-10.openstacklocal,172.16.0.10,set:subnet-6a4db541-e563-43ff-891b-aa8c05c988c5
fa:16:3e:a1:96:fc,host-172-16-0-207.openstacklocal,172.16.0.207,set:compute-1-subnet-6a4db541-e563-43ff-891b-aa8c05c988c5
fa:16:3e:45:b4:1a,host-172-16-10-1.openstacklocal,172.16.10.1,set:subnet-faeec4d1-2c0c-4f7a-bc9b-0af562694902
```

The dnsmasq option configuration file looks like this:
```
tag:subnet-faeec4d1-2c0c-4f7a-bc9b-0af562694902,option:dns-server,8.8.8.8
tag:subnet-faeec4d1-2c0c-4f7a-bc9b-0af562694902,option:classless-static-route,172.16.0.0/24,0.0.0.0,169.254.169.254/32,172.16.0.2,0.0.0.0/0,172.16.0.1
tag:subnet-faeec4d1-2c0c-4f7a-bc9b-0af562694902,249,172.16.0.0/24,0.0.0.0,169.254.169.254/32,172.16.0.2,0.0.0.0/0,172.16.0.1
tag:subnet-faeec4d1-2c0c-4f7a-bc9b-0af562694902,option:router,172.16.0.1
tag:compute-1-subnet-6a4db541-e563-43ff-891b-aa8c05c988c5,option:classless-static-route,172.16.10.0/24,0.0.0.0,169.254.169.254/32,172.16.0.2,0.0.0.0/0,172.16.0.10
tag:compute-1-subnet-6a4db541-e563-43ff-891b-aa8c05c988c5,249,172.16.0.0/24,0.0.0.0,169.254.169.254/32,172.16.0.2,0.0.0.0/0,172.16.0.10
tag:compute-1-subnet-6a4db541-e563-43ff-891b-aa8c05c988c5,option:router,172.16.0.10
```

As can be seen, the address 172.16.0.207 carries a tag starting with compute-1; once that tag matches the option file, the default gateway of the virtual machine holding 172.16.0.207 changes from 172.16.0.1 to 172.16.0.10. The precondition for all of this is, of course, that the subnet is attached to multiple Routers.

In addition, neutron-dhcp-agent is given a configuration option that administrators can edit to define the mapping between compute nodes and network nodes; the mapping can be one-to-one or many-to-one.
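The `classless-static-route` values above (DHCP option 121; the numeric 249 entries are the Microsoft variant of the same option) are flat lists of destination/gateway pairs. A small sketch that decodes the compute-1 value shown above, to make the per-tag routing explicit:

```python
# Decode a dnsmasq classless-static-route option value into (destination, gateway) pairs.
# The sample value is the compute-1 entry from the option file above.

def parse_classless_static_routes(value):
    fields = value.split(',')
    return list(zip(fields[0::2], fields[1::2]))

compute1_routes = ("172.16.10.0/24,0.0.0.0,"
                   "169.254.169.254/32,172.16.0.2,"
                   "0.0.0.0/0,172.16.0.10")

for destination, gateway in parse_classless_static_routes(compute1_routes):
    print(f"{destination:>18} via {gateway}")

# The last pair is the default route: for compute-1-tagged ports it points at
# 172.16.0.10 (the additional Router) instead of the subnet gateway 172.16.0.1.
```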
### Solving the Router Gateway port forwarding problem

Port forwarding is changed from being based on a floating IP to being based on the Router's External Gateway, for two reasons:

1. With floating-IP based port forwarding, a user who already uses the Router's External Gateway consumes one more IP of the external network; switching port forwarding to the External Gateway reduces the use of external IPs.
2. Floating-IP based port forwarding relies on the Router's network namespace for NAT. Without L3 HA, when the same subnet is attached to several Routers, the way port forwardings are created means the NAT always happens in the network namespace of the Router that holds the subnet's gateway address (one particular network node) rather than being spread over the namespaces of all Routers (every network node). Port forwarding therefore still concentrates load on a single network node.

The implementation is similar to floating-IP port forwarding, except that no Router has to be selected for the External Gateway, since an External Gateway is inherently associated with a Router; floating-IP port forwarding selects the Router that holds the subnet's gateway address.

Finally, once the three parts above are implemented, a user enables traffic distribution with the following steps.

1. Edit the neutron-dhcp-agent configuration to set the mapping between compute nodes and network nodes. For example, with three network nodes and three compute nodes, map compute-1 to the network-1 node and compute-2 and compute-3 to the network-2 node.
2. Use the Neutron API to create several Routers, each pinned to a network node, and attach them all to the same subnet.
3. Create several virtual machine instances on that subnet.

The network traffic of the virtual machine instances then flows as shown in the figure below: VM-1 reaches the external network through the network-1 node, while VM-2 and VM-3 go through the network-2 node. At the same time VM-1, VM-2 and VM-3 are on the same subnet and can reach each other.

## API

### List router gateway port forwardings

```
GET /v2.0/routers/{router_id}/gateway_port_forwardings
```

Response:

```
{
    "gateway_port_forwardings": [
        {
            "id": "67a70b09-f9e7-441e-bd49-7177fe70bb47",
            "external_port": 34203,
            "protocol": "tcp",
            "internal_port_id": "b671c61a-95c3-49cd-89f2-b7e817d1f486",
            "internal_ip_address": "172.16.0.196",
            "internal_port": 518,
            "gw_ip_address": "192.168.57.234"
        }
    ]
}
```

### Show a router gateway port forwarding

```
GET /v2.0/routers/{router_id}/gateway_port_forwardings/{port_forwarding_id}
```

Response:

```
{
    "gateway_port_forwarding": {
        "id": "67a70b09-f9e7-441e-bd49-7177fe70bb47",
        "external_port": 34203,
        "protocol": "tcp",
        "internal_port_id": "b671c61a-95c3-49cd-89f2-b7e817d1f486",
        "internal_ip_address": "172.16.0.196",
        "internal_port": 518,
        "gw_ip_address": "192.168.57.234"
    }
}
```

### Create a router gateway port forwarding

```
POST /v2.0/routers/{router_id}/gateway_port_forwardings
```

Request body:

```
{
    "gateway_port_forwarding": {
        "external_port": int,
        "internal_port": int,
        "internal_ip_address": "string",
        "protocol": "tcp",
        "internal_port_id": "string"
    }
}
```

Response:

```
{
    "gateway_port_forwarding": {
        "id": "da554833-b756-4626-9900-6256c361f94b",
        "external_port": 14122,
        "protocol": "tcp",
        "internal_port_id": "b671c61a-95c3-49cd-89f2-b7e817d1f486",
        "internal_ip_address": "172.16.0.196",
        "internal_port": 3634,
        "gw_ip_address": "192.168.57.234"
    }
}
```

### Update a router gateway port forwarding

```
PUT /v2.0/routers/{router_id}/gateway_port_forwardings/{port_forwarding_id}
```

Request body:

```
{
    "gateway_port_forwarding": {
        "external_port": int,
        "internal_port": int,
        "internal_ip_address": "string",
        "protocol": "tcp",
        "internal_port_id": "string"
    }
}
```

Response:

```
{
    "gateway_port_forwarding": {
        "id": "da554833-b756-4626-9900-6256c361f94b",
        "external_port": 14122,
        "protocol": "tcp",
        "internal_port_id": "b671c61a-95c3-49cd-89f2-b7e817d1f486",
        "internal_ip_address": "172.16.0.196",
        "internal_port": 3634,
        "gw_ip_address": "192.168.57.234"
    }
}
```

### Delete a router gateway port forwarding

```
DELETE /v2.0/routers/{router_id}/gateway_port_forwardings/{port_forwarding_id}
```
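A usage sketch of the create call above, assuming python-requests; the Neutron endpoint, token and router ID are placeholders, and the payload reuses the sample values from the response above:

```python
# Minimal sketch: create a gateway port forwarding via the API described above.
# NEUTRON_URL, TOKEN and ROUTER_ID are placeholders for a real deployment.
import requests

NEUTRON_URL = "http://controller:9696"
TOKEN = "<keystone-token>"
ROUTER_ID = "<router-uuid>"

body = {
    "gateway_port_forwarding": {
        "external_port": 34203,
        "internal_port": 518,
        "internal_ip_address": "172.16.0.196",
        "protocol": "tcp",
        "internal_port_id": "b671c61a-95c3-49cd-89f2-b7e817d1f486",
    }
}

resp = requests.post(
    f"{NEUTRON_URL}/v2.0/routers/{ROUTER_ID}/gateway_port_forwardings",
    headers={"X-Auth-Token": TOKEN},
    json=body,
)
resp.raise_for_status()

# gw_ip_address is the Router's External Gateway address that external clients
# connect to on external_port.
print(resp.json()["gateway_port_forwarding"]["gw_ip_address"])
```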
### Create a router

```
POST /v2.0/routers
```

Request body:

```
{
    "router": {
        "name": "string",
        "admin_state_up": true,
        "configurations": {
            "preferred_agent": "string",
            "master_agent": "string",
            "slave_agents": ["string"]
        }
    }
}
```

### Update a router

```
PUT /v2.0/routers/{router_id}
```

Request body:

```
{
    "router": {
        "name": "string",
        "admin_state_up": true,
        "configurations": {
            "preferred_agent": "string",
            "master_agent": "control01",
            "slave_agents": ["control01"]
        }
    }
}
```

## Development schedule

- 2023-07-28 to 2023-08-30: development
- 2023-09-01 to 2023-11-15: testing and bug fixing
- 2023-11-30: merged into openEuler 20.03 LTS SP4
- 2023-12-30: merged into openEuler 22.03 LTS SP3
\"internal_ip_address\": \"172.16.0.196\", \"internal_port\": 3634, \"gw_ip_address\": \"192.168.57.234\" } }","title":"\u66f4\u65b0\u8def\u7531\u5668\u7f51\u5173\u7aef\u53e3\u8f6c\u53d1"},{"location":"spec/distributed-traffic/#_11","text":"DELETE /v2.0/routers/{router_id}/gateway_port_forwardings/{port_forwarding_id}","title":"\u5220\u9664\u8def\u7531\u5668\u7f51\u5173\u7aef\u53e3\u8f6c\u53d1"},{"location":"spec/distributed-traffic/#_12","text":"POST /v2.0/routers Request Body { \"router\": { \"name\": \"string\", \"admin_state_up\": true, \"configurations\": { \"preferred_agent\": \"string\", \"master_agent\": \"string\", \"slave_agents\": [ \"string\" ] } } }","title":"\u65b0\u5efa\u8def\u7531\u5668"},{"location":"spec/distributed-traffic/#_13","text":"PUT /v2.0/routers/{router_id} Request Body { \"router\": { \"name\": \"string\", \"admin_state_up\": true, \"configurations\": { \"preferred_agent\": \"string\", \"master_agent\": \"control01\", \"slave_agents\": [ \"control01\" ] } } }","title":"\u66f4\u65b0\u8def\u7531\u5668"},{"location":"spec/distributed-traffic/#_14","text":"2023-07-28\u52302023-08-30 \u5b8c\u6210\u5f00\u53d1 2023-09-01\u52302023-11-15 \u6d4b\u8bd5\u3001\u95ee\u9898\u4fee\u590d 2023-11-30\u5f15\u5165openEuler 20.03 LTS SP4\u7248\u672c 2023-12-30\u5f15\u5165openEuler 22.03 LTS SP3\u7248\u672c","title":"\u5f00\u53d1\u8282\u594f"},{"location":"spec/openkite/","text":"1\u3001\u524d\u5e8f \u00b6 1.1\u3001 \u8f6f\u4ef6\u8bb8\u53ef\u534f\u8bae \u00b6 \u672c\u8f6f\u4ef6\u57fa\u4e8eLGPL V3\u534f\u8bae\uff0c\u8bf7\u7528\u6237\u548c\u5f00\u53d1\u8005\u6ce8\u610fLGPL\u534f\u8bae\u7684\u8981\u6c42\uff0c\u5176\u4e2d\u6700\u91cd\u8981\u7684\u4e00\u70b9\u662f \u4e0d\u5141\u8bb8fork\u9879\u76ee\u95ed\u6e90 \u3002 1.2\u3001 \u8f6f\u4ef6\u7528\u9014 \u00b6 1.3\u3001 \u5f00\u53d1\u4eba\u5458\u540d\u5355 \u00b6 1.4\u3001 \u751f\u547d\u5f00\u53d1\u5468\u671f \u00b6 1.5\u3001 \u529f\u80fd\u5f00\u53d1\u987a\u5e8f \u00b6 2\u3001\u5f00\u53d1\u89c4\u8303\u7ea6\u5b9a \u00b6 2.1\u3001 \u7a97\u4f53\u63a7\u4ef6\u547d\u540d\u89c4\u8303 \u00b6 \u63a7\u4ef6\u539f\u540d\u79f0_\u7a97\u4f53_\u63a7\u4ef6\u540d\u79f0\u7ec4\u5408\u4f53\u9996\u5b57\u6bcd\u5927\u5199 \u793a\u4f8b\uff1a \u6309\u94ae\u539f\u540d\u79f0\uff1apushButton \u4e3b\u7a97\u4f53 \u83dc\u5355\u6309\u94ae \u547d\u540d\u89c4\u8303\uff1apushButton_MainWindow_Menu \u6309\u94ae\u539f\u540d\u79f0\uff1atoolButton \u4e3b\u7a97\u4f53 \u4e0a\u4f20\u6309\u94ae \u547d\u540d\u89c4\u8303\uff1atoolButton_MainWindow_UpLoad 2.2\u3001 \u540e\u53f0\u529f\u80fd\u5b9e\u73b0\u547d\u540d\u89c4\u8303 \u00b6 \u53d8\u91cf\u3001\u5e38\u91cf\u3001\u51fd\u6570\u3001\u7c7b\u3001\u5bb9\u5668\u7b49 2.3\u3001 \u8f6f\u4ef6\u5305\u6587\u4ef6\u540d\u547d\u540d\u89c4\u8303 \u00b6 2.4\u3001 \u6587\u4ef6\u547d\u540d\u89c4\u8303 \u00b6 2.5\u3001 \u6807\u6ce8 \u00b6 \u5220\u9664\u3001\u79fb\u52a8\u3001\u6539\u540d\u3001\u6743\u9650\u8bbe\u7f6e 3\u3001\u7a97\u53e3\u4e3b\u4f53\u63a7\u4ef6\u540d\u79f0\u3001\u5c3a\u5bf8\u3001\u7528\u9014 \u00b6 3.1\u3001\u83dc\u5355\u529f\u80fd\u5927\u7c7b \u00b6 PushButton\u63a7\u4ef6\u7528\u4e8e\u83dc\u5355\u5927\u7c7b\u8c03\u7528\u7a97\u53e3 \u63a7\u4ef6\u5c3a\u5bf8\uff1a \u56fa\u5b9a\u5c3a\u5bf8 80*25 \u63a7\u4ef6\u4e2d\u6587\u540d \u63a7\u4ef6\u79cd\u7c7b \u63a7\u4ef6\u540d \u7528\u9014 \u83dc\u5355 PushButton pushButton_MainWindow_Menu \u8c03\u51fa\u83dc\u5355\u7a97\u53e3 \u5e2e\u52a9 PushButton pushButton_MainWindow_Help \u8c03\u51fa\u5e2e\u52a9\u7a97\u53e3 \u5de5\u5177 PushButton pushButton_MainWindow_Tool 
## 3. Main window controls: names, sizes and purposes

### 3.1 Menu groups

PushButton controls are used to open the window of each menu group.

Control size: fixed, 80*25.

| Label | Control type | Control name | Purpose |
|---|---|---|---|
| Menu | PushButton | pushButton_MainWindow_Menu | Open the menu window |
| Help | PushButton | pushButton_MainWindow_Help | Open the help window |
| Tools | PushButton | pushButton_MainWindow_Tool | Open the tools window |
| Error analysis | PushButton | pushButton_MainWindow_ErrorAnalysis | Open the error-analysis window |
| Monitoring | PushButton | pushButton_MainWindow_Monitor | Open the monitoring window |
| Operation log | PushButton | pushButton_MainWindow_OperationLog | Open the operation-log window |

#### 3.1.1 Menu subgroup

- Settings
- Software theme

#### 3.1.2 Help group

- Community
- Version updates
- User manual

#### 3.1.3 Tools group

- Plugin repository
- img image tool
- MD5 checksum tool
- OpenStack module function test
- Stress test

#### 3.1.4 Error-analysis group

- System errors (per-node error analysis)
- OpenStack errors
- K8S errors

#### 3.1.5 Monitoring group

- OPS monitoring status and performance-usage analysis
- K8S monitoring status and performance-usage analysis

#### 3.1.6 Operation-log group

- View historical operation logs
- Export logs

### 3.2 Data visualisation

#### 3.2.1 Computer hardware information

ProgressBar controls display the utilisation of the computer hardware.

Control size: minimum 116*27, height fixed.

| Label | Control type | Control name | Purpose |
|---|---|---|---|
| Local CPU | ProgressBar | progressBar_MainWindow_LocalCPU | Show local CPU usage |
| Target CPU | ProgressBar | progressBar_MainWindow_TargetCPU | Show target CPU usage |
| Local RAM | ProgressBar | progressBar_MainWindow_LocalRAM | Show local RAM usage |
| Target RAM | ProgressBar | progressBar_MainWindow_TargetRAM | Show target RAM usage |
| Local network | ProgressBar | progressBar_MainWindow_LocalNetwork | Show local network bandwidth usage |
| Target network | ProgressBar | progressBar_MainWindow_TargetNetwork | Show target network bandwidth usage |
| Local disk | ProgressBar | progressBar_MainWindow_LocalDisk | Show local disk I/O usage |
| Target disk | ProgressBar | progressBar_MainWindow_TargetDisk | Show target disk I/O usage |
#### 3.2.2 Computer software information

Label controls display the system IP and DNS.

Control size: fixed, 110*27.

| Label | Control type | Control name | Purpose |
|---|---|---|---|
| Local IP | Label | label_MainWindow_LocalIP | Show the local IP |
| Target IP | Label | label_MainWindow_TargetIP | Show the target IP |
| Local DNS | Label | label_MainWindow_LocalNDS | Show the local DNS |
| Target DNS | Label | label_MainWindow_TargetNDS | Show the target DNS |

A ListWidget control displays the essential system information.

Control size: fixed, 200*111.

| Label | Control type | Control name | Purpose |
|---|---|---|---|
| System information display | ListWidgets | listWidget_MainWidow_SystemShow | Show essential system information |

Variables used by the essential system information display (API interface):

| Label | Variable type | Variable name | Purpose |
|---|---|---|---|
| Distribution | QStringList | systemNameShow | Linux distribution name |
| Version | QStringList | systemVersion | Linux distribution version number |
| Kernel | QStringList | systemKernel | Linux distribution kernel version |
| Admin privileges | QStringList | systemAdminPower | Operating privileges of the current account |
| Service name | QStringList | systemServiceName | Name of the O&M software service currently in use |
| Service version | QStringList | systemServicVersion | Version of the O&M software currently in use |

Label and ProgressBar controls display the currently running command and its progress.

Control sizes: current-command control, minimum 500*31 with fixed height; command-progress control, minimum 171*31 with fixed height.

| Label | Control type | Control name | Purpose |
|---|---|---|---|
| Current command | Label | label_MainWindow_ShowCurrentCommand | Show the command currently running on the cluster or node |
| Command progress | ProgressBar | progressBar_MainWindow_ShowCommandProgress | Show the progress of the command currently running on the cluster or node |

### 3.3 Cluster management

#### 3.3.1 Cluster addition

A ToolButton control adds cluster and node information.

Control size: fixed, 300*31.

| Label | Control type | Control name | Purpose |
|---|---|---|---|
| Add cluster/node | ToolButton | toolButton_MainWindow_AddNode | Pop up a window to add a cluster or node |

- Add a single node
- Add nodes in batches
- Add a cluster

#### 3.3.2 Cluster display

A TreeWidget control displays cluster information.

Control size: minimum 200*438, width fixed.

| Label | Control type | Control name | Purpose |
|---|---|---|---|
| Node information | TreeWidget | treeWidget_MainWindow_ShowNode | Show cluster and node information; clicking an entry opens an SSH remote window |

- Cluster name
- Node name
- Node IP address

### 3.4 Scripts and deployment

TerrWidget pop-up controls.

Control sizes: upload and script buttons fixed at 63*31; deploy button fixed at 65*31.

| Label | Control type | Control name | Purpose |
|---|---|---|---|
| Upload | terrWidget | toolButton_MainWindow_UpLoad | Open the upload window: load.ui |
| Script | terrWidget | toolButton_MainWindow_Shell | Open the script window: shell.ui |
| Deploy | terrWidget | toolButton_MainWindow_Deploy | Open the deploy window: deploy.ui |

#### 3.4.1 Upload and download functions

- Script compilers
  - yaml compiler
  - script compiler
- Load the local policy
- Load the cluster configuration policy
- Load the node configuration policy
- Upload files to the target machine
  - single node
  - multiple nodes
- Download files to the local machine
  - single node
  - multiple nodes
- File transfer between target machines
  - point-to-point transfer
  - point-to-multipoint transfer

#### 3.4.2 Script functions

- Edit
  - edit sub-module scripts
  - edit cluster-module scripts
- View
  - view sub-module scripts
  - view cluster-module scripts
- Export
  - export sub-module scripts
  - export cluster-module scripts
  - export all scripts

#### 3.4.3 Deployment functions

- Deploy
  - deploy different function scripts to a batch of selected nodes
  - deploy different function scripts to different nodes of a cluster
  - deploy different function scripts to a single node
- Abort
  - abort the current deployment for multiple nodes, a single node, or a whole cluster

### 3.5 Feature plugins

#### 3.5.1 Basic operations and maintenance

- Change the server hostname
- Change the server username
- Change the server password
- Modify the firewall configuration
- Modify hosts
- Modify DNS
- Modify the gateway
- Modify the IP
- Deploy a time service
- Deploy a DNS service

#### 3.5.2 Other feature plugins

- OpenStack plugins
- K8S plugins
- Ceph plugins

### 3.6 SSH remote display

Commands can be copied and pasted; the combined terminal supports Chinese display.

#### 3.6.1 Cluster SSH remote display

Combined terminal display, point-to-multipoint SSH sessions.

#### 3.6.2 Single-node SSH remote display

Point-to-point SSH session.

## 4. Adding main-window feature plugins: method, conventions, API and annotations

### 4.1 Tools

- Development conventions:
- API interface:
- Feature annotations:
- How to add the panel:
- How to add the backend module:
- Folder location:

### 4.2 Feature plugins

- Development conventions:
- API interface:
- Feature annotations:
- How to add the panel:
- How to add the backend module:
- Folder location:

## 5. Backend API calls, conventions and usage

### 5.1 Computer hardware

#### 5.1.1 CPU

#### 5.1.2 RAM

### 5.2 Computer software

#### 5.2.1 Local packages

#### 5.2.2 Source packages
## 4 Adding Main-Window Functional Plugins: Method, Conventions, API, and Function Comments

### 4.1 Tool Class

Development conventions; API interfaces; function comments; panel addition method; backend functional module addition method; folder location.

### 4.2 Functional Plugin Class

Development conventions; API interfaces; function comments; panel addition method; backend functional module addition method; folder location.

## 5 Backend API Calls: Conventions and Usage

### 5.1 Computer Hardware

#### 5.1.1 CPU

#### 5.1.2 RAM

### 5.2 Computer Software

#### 5.2.1 Local Packages

#### 5.2.2 Source Packages

## 6 Development Notes

- Before any operation, check whether the local network can reach the target network.
- If the target network is unreachable, prompt: "target IP network unreachable".
- If none of the cluster's nodes are reachable, grey out the cluster node text.
- For cluster or multi-node operations, list the targets that cannot be connected and ask the user to confirm whether to continue; if the user continues, skip the unreachable nodes and proceed with the batch deployment.
- Interface refresh rates (software and hardware information):
  - CPU, memory, and other usage-ratio displays refresh every 0.5 s.
  - The SSH terminal refreshes in real time.
  - Cluster display information refreshes in real time.
  - The mandatory system information area refreshes in real time.
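As an illustration of the connectivity notes above, here is a minimal reachability pre-check sketch in Python; the `is_reachable` helper and the choice of a TCP probe on the SSH port are assumptions, not part of the design.

```python
# Hypothetical pre-check used before cluster or node operations (section 6):
# report nodes whose SSH port cannot be reached so the UI can grey them out
# or ask the user whether to continue without them.
import socket


def is_reachable(host: str, port: int = 22, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


def split_reachable(nodes: list[str]) -> tuple[list[str], list[str]]:
    """Split a node list into reachable and unreachable hosts."""
    ok = [n for n in nodes if is_reachable(n)]
    bad = [n for n in nodes if n not in ok]
    return ok, bad


if __name__ == "__main__":
    reachable, unreachable = split_reachable(["192.0.2.10", "192.0.2.11"])
    if unreachable:
        print("target IP network unreachable:", ", ".join(unreachable))
```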
# openEuler OpenStack Development Platform Requirements Specification

## Background
As the SIG continues to grow, we have clearly run into the following classes of problems:

1. OpenStack is technically complex. It touches every part of the cloud IaaS layer, including compute, networking, storage, images, and authentication. Few developers master all of it, so the logic and quality of submitted code are a real concern.
2. OpenStack is written in Python, and Python dependency management is hard to handle. Taking OpenStack Wallaby as an example, it involves more than 400 core Python packages; each package's dependency depth and version constraints are tangled, selection is difficult, and it is hard to reach a closed (self-consistent) dependency set.
3. OpenStack has a very large number of packages, so writing RPM specs is a huge effort, and as openEuler and OpenStack both keep evolving, the N:N compatibility matrix multiplies the workload and keeps driving up the labor cost.
4. The bar for testing OpenStack is too high. Developers must not only know OpenStack but also understand low-level Linux technologies such as virtualization, virtual bridges, and block storage. Deploying an OpenStack environment takes too long, and functional testing is hard. The test matrix is also large: x86 and ARM64 architectures, bare-metal and virtual machines, OVS and OVN bridges, LVM and Ceph storage, and so on, which further raises both the labor cost and the technical bar.

To address these problems, openEuler OpenStack needs a development platform that removes these pain points from the development process.

## Goal

Design and develop an OpenStack-focused open-source development platform for openEuler. Through standardization, tooling, and automation it should cover SIG developers' daily development needs, reduce development cost, labor investment, and the entry barrier, and thereby improve development efficiency, improve the quality of SIG software, grow the SIG ecosystem, and attract more developers to the SIG.

## Scope

- User scope: openEuler OpenStack SIG developers
- Business scope: daily development activities of the openEuler OpenStack SIG
- Programming languages: Python, Ansible, Jinja, JavaScript
- IT technologies: web services, RESTful conventions, CLI conventions, front-end GUI, database usage
## Features

The OpenStack development platform uses a client/server architecture overall; the SIG exposes the platform capabilities, and the client side is open to a specified user whitelist. To make it usable for people outside the whitelist, the platform also provides a CLI mode that needs no extra server-side communication and works out of the box locally.

The platform provides the following capabilities:

- Publish RPM SPEC development standards for OpenStack service software and dependency-library software; developers and reviewers must strictly follow these standards.
- Provide dependency analysis for OpenStack Python software: generate the dependency topology and result in one step, guarantee a closed dependency loop, and avoid dependency risks.
- Provide OpenStack RPM spec generation: for generic packages, generate the RPM spec in one step to shorten development time and reduce cost.
- Provide an automated deployment and test platform: deploy any specified OpenStack version on any openEuler version in one step, for fast testing and fast iteration.
- Provide automation for openEuler Gitee repositories, covering batch changes such as creating code branches, creating repositories, and submitting Pull Requests.

### SPEC development standard

- Feature points:
  1. Constrain the SPEC format and content of OpenStack service-level projects.
  2. Define the SPEC skeleton for OpenStack dependency-library-level projects.
- Prerequisites: all OpenStack SIG maintainers reach consensus; participating vendors have no objections.
- Participants: China Telecom, China Unicom, UnionTech Software
- Input: RPM SPEC writing standards
- Output: service-level and dependency-library-level SPEC templates; the package layering standard.
- Impact on other features: this is a precondition for the features below; for example, the SPEC auto-generation feature must follow this standard.

### Dependency analysis

- Feature points:
  1. Automatically generate the OpenStack dependency table for a specified openEuler version.
  2. Handle common dependency problems such as dependency cycles, missing versions, and inconsistent names.
- Prerequisites: N/A
- Participants: OpenStack SIG core developers
- Input: openEuler version, OpenStack version, target dependency scope (core / test / doc)
- Output: the full dependency-library information for the specified OpenStack version, including minimum/maximum dependency versions, owning openEuler SIG, RPM package name, dependency level, and sub-dependency tree; it can be exported as an Excel sheet.
- Impact on other features: N/A
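A minimal sketch of the kind of dependency walk this feature implies is shown below; it only inspects packages already installed in the local Python environment, and the `walk_requires` helper and its output format are illustrative assumptions, not the SIG tool's actual implementation.

```python
# Hypothetical sketch: recursively walk the declared requirements of an
# installed Python distribution, i.e. the raw data a dependency table needs.
# Uses only the standard library (importlib.metadata, Python 3.8+).
import re
from importlib.metadata import PackageNotFoundError, requires


def walk_requires(name, depth=0, seen=None):
    """Print one package per line, indented by dependency level."""
    seen = seen if seen is not None else set()
    if name.lower() in seen:          # break dependency cycles
        return
    seen.add(name.lower())
    try:
        reqs = requires(name) or []
    except PackageNotFoundError:
        print("  " * depth + f"{name}  (not installed)")
        return
    print("  " * depth + name)
    for req in reqs:
        if "extra ==" in req:         # skip optional extras
            continue
        dep = re.split(r"[ ;<>=!~\[]", req, maxsplit=1)[0]
        walk_requires(dep, depth + 1, seen)


if __name__ == "__main__":
    walk_requires("oslo.config")      # any installed OpenStack library works
```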
### Spec auto-generation

- Feature points:
  1. Generate the RPM SPEC for OpenStack dependency-library software in one step.
  2. Support the various Python build systems, such as setuptools and pyproject.
- Prerequisites: must follow the SPEC development standard above.
- Participants: OpenStack SIG core developers
- Input: the package name and target version
- Output: the RPM SPEC file for that package
- Impact on other features: the generated SPEC can be pushed to the openEuler community in one step through the code handling feature below.
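Since the scope section names Jinja as one of the platform's languages, here is a minimal sketch of template-driven spec generation. The template text, the `render_spec` helper, and the metadata fields are illustrative assumptions, not the SIG's real templates, which are defined by the SPEC development standard above.

```python
# Hypothetical sketch: render a tiny RPM spec skeleton from package metadata
# with Jinja2 (pip install jinja2). This only illustrates the mechanism.
from jinja2 import Template

SPEC_TEMPLATE = Template("""\
%global pypi_name {{ name }}
Name:           python-{{ name }}
Version:        {{ version }}
Release:        1
Summary:        {{ summary }}
License:        {{ license }}
Source0:        https://pypi.io/packages/source/{{ name[0] }}/{{ name }}/{{ name }}-{{ version }}.tar.gz
BuildArch:      noarch

%description
{{ summary }}

%changelog
* {{ date }} openEuler OpenStack SIG - {{ version }}-1
- Init package
""")


def render_spec(meta: dict) -> str:
    """Fill the template with the package metadata collected elsewhere."""
    return SPEC_TEMPLATE.render(**meta)


if __name__ == "__main__":
    print(render_spec({
        "name": "example-lib", "version": "1.0.0",
        "summary": "Example library", "license": "Apache-2.0",
        "date": "Mon Jan 01 2024",
    }))
```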
### Automated deployment and test

- Feature points:
  1. Quickly deploy, in one step, a single- or multi-node OpenStack environment with the specified OpenStack version, topology, and features.
  2. In one step, pre-provision resources and run functional tests against an already deployed OpenStack environment.
  3. Support multi-cloud and host management, and support user-defined plugins.
- Prerequisites: N/A
- Participants: OpenStack SIG core developers and developers of the related cloud platforms
- Input: target OpenStack version; the compute/network/storage driver scenario
- Output: an OpenStack environment on which the OpenStack Tempest tests can be run in one step; the Tempest test report.
- Impact on other features: N/A

### One-click code handling

- Feature points:
  1. Perform, in one step, all kinds of operations on the repos, branches, and PRs of the projects owned by openEuler OpenStack.
  2. Operations include: creating/deleting a source repository; creating/deleting an openEuler branch; submitting a package update PR; adding review comments to a PR.
- Prerequisites: the PR submission feature depends on the SPEC generation feature above.
- Participants: OpenStack SIG core developers
- Input: package name, openEuler release name, target spec file, review comment content.
- Output: a repo-creation PR; a branch-creation PR; a package-upgrade PR; a new review comment on a PR.
- Impact on other features: N/A
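The Gitee automation is specified only at the level of "submit an update PR". The sketch below shows one plausible way to do that with Gitee's REST API v5 and the `requests` library; the endpoint path, payload fields, and the `submit_update_pr` helper are assumptions to be checked against the Gitee API documentation, not part of this requirement.

```python
# Hypothetical sketch: open an update Pull Request on a Gitee repository
# through the Gitee API v5, using requests (pip install requests).
# Verify the exact endpoint and fields against Gitee's API docs before use.
import requests

GITEE_API = "https://gitee.com/api/v5"


def submit_update_pr(owner: str, repo: str, head: str, base: str,
                     title: str, token: str) -> str:
    """Create a PR from branch `head` into `base` and return its HTML URL."""
    resp = requests.post(
        f"{GITEE_API}/repos/{owner}/{repo}/pulls",
        json={
            "access_token": token,
            "title": title,
            "head": head,   # e.g. "myfork:update-python-example"
            "base": base,   # e.g. "master" or an openEuler release branch
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("html_url", "")


if __name__ == "__main__":
    print(submit_update_pr("src-openeuler", "python-example", "dev:update",
                           "master", "Update to 1.0.0", token="<gitee-token>"))
```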
## Non-functional requirements

### Testing

- The code must include unit tests with at least 80% coverage.
- End-to-end functional tests must cover all of the interfaces above as well as the core scenarios.
- A CI/CD pipeline is built on the openEuler community CI; every Pull Request must pass CI to guarantee code quality, and release versions are published regularly, no more than three months apart.

### Security

- Data security: the software never connects to the network during operation, and persistent storage contains no sensitive user information.
- Network security: in the REST architecture OOS communicates over HTTP, but the software is designed for intranet use; exposing it on a public IP is not recommended, and if that is unavoidable an access IP whitelist should be added.
- System security: based on the openEuler security mechanisms, CVE fixes and security patches are released regularly.
- Application-layer security: not applicable; no application-level security services such as password policies or access control are provided.
- Management security: the software provides log generation and periodic backup so that users can audit it regularly.

### Reliability

The software targets openEuler community OpenStack development only; it is not involved in production services or commercial deployments, and all code is open and transparent with no proprietary features or code. It therefore does not provide capabilities such as node redundancy or disaster-recovery backup.

### Open-source compliance

The platform uses the Apache 2.0 License. Closed-source or commercial downstream forks are not restricted, but downstream software must attribute the source of the code and keep the original license.

## Implementation plan

| Date | Content |
|---|---|
| 2021.06 | Complete the overall software framework and implement the CLI Built-in mechanism with at least one API usable |
| 2021.12 | Complete full functionality of the CLI Built-in mechanism |
| 2022.06 | Harden quality, guarantee functionality, and formally introduce OOS into the openEuler OpenStack community development workflow |
| 2022.12 | Keep improving OOS for usability and robustness, with automation coverage above 80%, reducing development labor |
| 2023.06 | Complete the REST framework and CI/CD pipeline, enrich the plugin mechanism, add more backend support |
| 2023.12 | Complete the front-end GUI |
# openEuler OpenStack Development Platform

The openEuler OpenStack SIG was founded in 2021. It is maintained by developers from China Unicom, China Telecom, Huawei, UnionTech, and other companies, aims to provide native OpenStack on top of openEuler and build an open and reliable cloud computing stack, and is a flagship SIG of openEuler. OpenStack itself, however, is technically complex and made up of many services; the barrier to contribution is high, the technical demands on contributors are high, and the labor cost stays stubbornly high, so all sorts of problems arise in real development and contribution. To solve the problems the SIG faces, an openEuler + OpenStack solution is urgently needed that lowers the developer barrier, reduces investment cost, improves development efficiency, and keeps the SIG active and sustainable.

## 1. Overview

### 1.1 Current situation

As the SIG grows, it runs into the same four classes of problems described in the Background of the requirements specification above:

1. OpenStack is technically complex, spanning compute, networking, storage, images, authentication, and more at the cloud IaaS layer; few developers master all of it, so the logic and quality of submitted code are a concern.
2. OpenStack is written in Python, and Python dependency handling is hard: OpenStack Wallaby, for example, involves more than 400 core Python packages whose dependency levels and versions are tangled, so selection is difficult and a closed dependency loop is hard to reach.
3. The large number of OpenStack packages makes RPM spec writing a huge effort, and the N:N compatibility matrix between openEuler and OpenStack versions multiplies the workload and labor cost.
4. The bar for testing is too high: it requires OpenStack knowledge plus low-level Linux skills (virtualization, virtual bridges, block storage), deployments take too long, and the test matrix (x86/ARM64, bare metal/VM, OVS/OVN, LVM/Ceph) further raises the cost and the technical barrier.
### 1.2 Solution

Given the problems the SIG currently faces, standardization, tooling, and automation are imperative. This design document aims to provide an end-to-end usable development solution within the openEuler OpenStack SIG, from technical standards to technical implementation, with strict requirements and a concrete design, so as to cover SIG developers' daily development needs, reduce development cost, labor investment, and the entry barrier, and thereby improve development efficiency, raise the quality of SIG software, grow the SIG ecosystem, and attract more developers to the SIG. The main actions are:

1. Publish RPM SPEC development standards for OpenStack service software and dependency-library software; developers and reviewers must strictly follow them.
2. Provide dependency analysis for OpenStack Python software: generate the dependency topology and result in one step, guarantee a closed dependency loop, and avoid dependency risks.
3. Provide OpenStack RPM spec generation: for generic packages, generate the RPM spec in one step to shorten development time and reduce cost.
4. Provide an automated deployment and test platform: deploy any specified OpenStack version on any openEuler version in one step, for fast testing and fast iteration (see the Tempest sketch after this list).
5. Provide automation for openEuler Gitee repositories, covering batch changes such as creating code branches, creating repositories, and submitting Pull Requests.
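The design does not fix how the "one-step test" of item 4 is triggered. As a rough illustration, the sketch below drives the standard Tempest CLI from Python; the `run_smoke_tests` helper and the workspace path are assumptions, and only the `tempest run` command itself comes from the Tempest project.

```python
# Hypothetical sketch: kick off a Tempest smoke run against an already
# deployed OpenStack environment and save the console output as a report.
# Assumes the `tempest` CLI is installed and a workspace has been
# initialised (e.g. with `tempest init <workspace>`).
import pathlib
import subprocess


def run_smoke_tests(workspace: str, report: str = "tempest-report.txt") -> int:
    """Run `tempest run --smoke` in the given workspace; return its exit code."""
    result = subprocess.run(
        ["tempest", "run", "--smoke"],
        cwd=workspace,
        capture_output=True,
        text=True,
    )
    pathlib.Path(report).write_text(result.stdout + result.stderr)
    return result.returncode


if __name__ == "__main__":
    rc = run_smoke_tests("/opt/tempest-workspace")
    print("tempest finished with exit code", rc)
```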
These capabilities can be unified into a single platform, which we call the OpenStack SIG Tool (oos for short), i.e. the openEuler OpenStack development platform. The architecture is as follows:

```
 ┌─────────┐              ┌─────────┐
 │   CLI   │              │   GUI   │
 └────┬────┘              └────┬────┘
      │ Built-in / REST        │ REST
      ▼                        ▼
 ┌───────────────────────────────────────────────┐
 │         OpenStack Develop Platform            │
 └───┬────────────┬─────────────┬────────────┬───┘
     ▼            ▼             ▼            ▼
 Dependency     SPEC         Deploy        Code
  Analysis    Generation    and Test      Action
```
The architecture supports two modes of operation:

1. Client/Server mode. In this mode oos is deployed as a web server, and clients call oos over REST.
   - Advantages: asynchronous calls, concurrent processing, and persistent records.
   - Disadvantages: some installation and deployment cost; a comparatively rigid way of working.
2. Built-in mode. In this mode oos needs no deployment; it exposes its features through a built-in CLI, and users invoke every function directly from the command line.
   - Advantages: no deployment needed, usable anywhere at any time.
   - Disadvantages: no persistence, no concurrency, single user and single use.
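As a rough illustration of how one code base could serve both modes, the sketch below dispatches the same operation either to a local function (Built-in mode) or to a REST endpoint (Client/Server mode). The `oos` command name comes from this document, but the `--server` flag, the `/api/v1/depcheck` path, and the stub handler are invented for this example.

```python
# Hypothetical sketch of a dual-mode entry point: call the function locally
# (Built-in mode) or forward it to an oos web server (Client/Server mode).
# The flag name, endpoint path, and payload are illustrative assumptions.
import argparse

import requests


def depcheck_local(release: str) -> dict:
    """Built-in mode: run the dependency check in-process (stubbed here)."""
    return {"release": release, "mode": "built-in", "missing": []}


def depcheck_remote(server: str, release: str) -> dict:
    """Client/Server mode: ask a running oos server to do the same work."""
    resp = requests.post(f"{server}/api/v1/depcheck",
                         json={"release": release}, timeout=60)
    resp.raise_for_status()
    return resp.json()


def main() -> None:
    parser = argparse.ArgumentParser(prog="oos")
    parser.add_argument("release", help="openEuler release, e.g. 22.03-LTS")
    parser.add_argument("--server", help="oos server URL; omit for built-in mode")
    args = parser.parse_args()
    result = (depcheck_remote(args.server, args.release)
              if args.server else depcheck_local(args.release))
    print(result)


if __name__ == "__main__":
    main()
```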
## 2. Detailed design

### 2.1 OpenStack spec standard

The spec standard consists of one or more spec templates that strictly define the expected content of every RPM spec keyword and build section. When writing a spec, developers must follow the standard, otherwise the code is not allowed to be merged. The standard is agreed on through open discussion among the SIG maintainers and is reviewed and updated regularly. Anyone may question the standard or suggest changes; the maintainers are responsible for explaining and refreshing it. The standard currently covers two categories:

1. Service software standard

   This covers OpenStack core services such as Nova, Neutron and Cinder. These packages are highly customized and differ greatly from one another, so their specs must be written by hand. The standard must clearly define the layering scheme, build method, package composition, test method, versioning rules and so on.

2. Common dependency standard

   These packages need little customization and are structurally very similar, so they are well suited to one-click generation by an automated tool. The standard only needs to define the generation rules used by the tool.

#### 2.1.1 Service software standard

An OpenStack service usually consists of several sub-services. When packaging, the service is split accordingly into several sub RPM packages. This section defines the principles the openEuler SIG follows when splitting OpenStack services into RPM packages.

##### 2.1.1.1 General principles

A layered structure is used. Taking openstack-nova as an example, the RPM packages are organized as follows:

| Level | Package | Example |
|:-----:|:--------|:--------|
| 1 | Root Package, Doc Package (optional) | openstack-nova.rpm, openstack-nova-doc.rpm |
| 2 | Service Packages | openstack-nova-compute.rpm, openstack-nova-api.rpm |
| 3 | Common Package | openstack-nova-common.rpm |
| 4 | Library Package, Library Test Package (optional) | python2-nova.rpm, python2-nova-tests.rpm |

The Root Package pulls in the Service Packages, each Service Package depends on the Common Package, the Common Package depends on the Library Package, and the Library Test Package depends on the Library Package. As shown above, the packages are split into four levels.
- The Root Package is the top-level RPM. In principle it contains no files and only serves as a collection of the services, so that users can install all sub RPM packages in one step.
- If the project ships documentation, it may also be packaged separately as a Doc Package (optional).
- A Service Package is the RPM for one sub-service; it contains that service's systemd unit file, its own configuration files, and so on.
- The Common Package holds the shared pieces: the common configuration files, system configuration files, etc. that all sub-services depend on.
- The Library Package is the Python source package and contains the project's Python code.
- If the project ships tests, they may also be packaged separately as a Library Test Package (optional).

Projects covered by this principle include: openstack-nova, openstack-cinder, openstack-glance, openstack-placement, openstack-ironic.

##### 2.1.1.2 Special cases

Some OpenStack components contain only a single service and have no notion of sub-services. Such a service only needs two levels:

| Level | Package | Example |
|:-----:|:--------|:--------|
| 1 | Root Package, Doc Package (optional) | openstack-keystone.rpm, openstack-keystone-doc.rpm |
| 2 | Library Package, Library Test Package (optional) | python2-keystone.rpm, python2-keystone-tests.rpm |

- The Root Package contains all files except the Python source code, including the service startup files, project configuration files, system configuration files and so on.
- If the project ships documentation, it may also be packaged separately (optional).
- The Library Package is the Python source package and contains the project's Python code.
- If the project ships tests, they may also be packaged separately (optional).

Projects covered by this principle include: openstack-keystone, openstack-horizon.

Other projects do have several sub RPM packages, but those sub packages are mutually exclusive. Such a service is structured as follows:
| Level | Package | Example |
|:-----:|:--------|:--------|
| 1 | Root Package, Doc Package (optional) | openstack-neutron.rpm, openstack-neutron-doc.rpm |
| 2 | Service Packages | openstack-neutron-server.rpm, openstack-neutron-openvswitch.rpm, openstack-neutron-linuxbridge.rpm |
| 3 | Common Package | openstack-neutron-common.rpm |
| 4 | Library Package, Library Test Package (optional) | python2-neutron.rpm, python2-neutron-tests.rpm |

In this example Service2 (openstack-neutron-openvswitch) and Service3 (openstack-neutron-linuxbridge) are mutually exclusive.

- The Root Package only pulls in the non-exclusive sub packages; mutually exclusive sub packages are provided separately.
- If the project ships documentation, it may also be packaged separately (optional).
- A Service Package is the RPM for one sub-service; it contains that service's systemd unit file, its own configuration files, and so on. Mutually exclusive Service Packages are not included in the Root Package and must be installed explicitly by the user.
- The Common Package holds the common configuration files, system configuration files, etc. that all sub-services depend on.
- The Library Package is the Python source package and contains the project's Python code.
- If the project ships tests, they may also be packaged separately (optional).

Projects covered by this principle include: openstack-neutron.

#### 2.1.2 Common dependency standard

A dependency library normally maps to a single RPM package and does not need to be split:
| Level | Package | Example |
|:-----:|:--------|:--------|
| 1 | Library Package, Help Package (optional) | python2-oslo-service.rpm, python2-oslo-service-help.rpm |

**NOTE**: the openEuler community has naming requirements for python2 and python3 RPM packages: python2 packages use the `python2-` prefix and python3 packages use the `python3-` prefix. OpenStack therefore requires developers to follow the openEuler community convention when building Library RPM packages as well.

### 2.2 Dependency analysis

The dependency analysis feature lets users analyze, in one step, the full Python dependency topology of a target OpenStack release together with the required versions of every package. It then automatically compares the result against a target openEuler release and outputs development suggestions for the corresponding packages. The feature consists of two sub-functions:
- Dependency analysis

  The dependency tree of the OpenStack Python packages is parsed and broken down into a dependency topology. Walking a dependency tree is essentially a graph traversal. In theory a healthy Python dependency tree is a directed acyclic graph, and any of the usual DAG traversals works; we simply use breadth-first search. In some special cases, however, the Python dependency graph contains cycles. For example, Sphinx is a documentation tool, yet building its own documentation also depends on Sphinx, which creates a dependency cycle; some test libraries behave similarly. For such cases it is enough to break the cycle manually at specific nodes. Another way to avoid the problem is to skip non-core libraries such as documentation and test dependencies altogether, which not only avoids cycles but also greatly reduces the number of packages and the development workload. Taking OpenStack Wallaby as an example, the full dependency set is above 700 packages, while without documentation and test dependencies it drops to roughly 300. We therefore introduce the notion of `core` packages, so that users can choose the scope of the analysis according to their needs. In addition, although OpenStack contains dozens of services, a user may only need a few of them, so we also introduce a `projects` filter that restricts the analysis to the selected services.

- Dependency comparison

  After the analysis, corresponding development work on openEuler still has to happen, so we also provide RPM package development suggestions based on the target openEuler release. openEuler and OpenStack releases have an N:N mapping: one openEuler release can support several OpenStack releases, and one OpenStack release can be deployed on several openEuler releases. Once the user specifies the target openEuler release and OpenStack release, this feature automatically walks the openEuler package repositories, analyzes them, and reports the action required for every package OpenStack needs, for example initializing a repository, creating an openEuler branch, or upgrading a package, giving developers guidance for the follow-up work.
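The breadth-first walk with manual cycle breaking described above can be captured in a few lines. The sketch below is illustrative only: `get_requirements(name)` is a hypothetical helper assumed to return the direct Python dependencies of a package (e.g. parsed from PyPI metadata), and `BLACKLIST` stands in for the manually broken nodes such as documentation and test tooling.

```python
from collections import deque

# Nodes at which known dependency cycles are broken by hand (illustrative set).
BLACKLIST = {"sphinx", "openstackdocstheme", "stestr"}

def walk_dependencies(roots, get_requirements, core_only=True):
    """Breadth-first walk of the Python dependency graph.

    get_requirements(name) is assumed to return the direct dependencies
    of `name`; blacklisted packages are skipped so documentation/test
    cycles cannot recurse forever.  Returns {package: depth in the tree}.
    """
    depth = {name: 0 for name in roots}
    queue = deque(roots)
    while queue:
        pkg = queue.popleft()
        for dep in get_requirements(pkg):
            if core_only and dep in BLACKLIST:
                continue            # break the cycle / skip non-core deps
            if dep not in depth:    # visit each node only once
                depth[dep] = depth[pkg] + 1
                queue.append(dep)
    return depth
```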
#### 2.2.1 Version matching rules

**Dependency analysis**

- Input: target OpenStack release, target OpenStack service list, and whether to analyze core packages only.
- Output: every package involved and its details, laid out as follows:

```
└──{OpenStack release name}_cached_file
    └──packageA.yaml
    └──packageB.yaml
    └──packageC.yaml
    ......
```

The content of each package file looks like:

```
{
    "name": "packageA",
    "version_dict": {
        "version": "0.3.7",
        "eq_version": "",
        "ge_version": "0.3.5",
        "lt_version": "",
        "ne_version": [],
        "upper_version": "0.3.7"},
    "deep": {
        "count": 1,
        "list": ["packageB", "packageC"]},
    "requires": {}
}
```

Key description:

| Key | Description |
|:---:|:------------|
| name | Package name |
| version_dict | Version requirements: equal to, greater than or equal to, less than, not equal to, and so on |
| deep | Depth of the package in the full dependency tree, and the traversal path to it |
| requires | List of the packages this package depends on |

**Dependency comparison**

- Input: the dependency analysis result, the target openEuler release, and the base comparison baseline.
- Output: a table with one row per package, containing the analysis result and the suggested action. The columns are defined as follows:

| Column | Description |
|:-------|:------------|
| Project Name | Package name |
| openEuler Repo | Name of the package's source repository on openEuler |
| Repo version | Source version on openEuler |
| Required (Min) Version | Minimum required version |
| lt Version | Version the package must stay below |
| ne Version | Versions the package must not equal |
| Upper Version | Maximum required version |
| Status | Development suggestion |
| Requires | The package's dependency list |
| Depth | Depth of the package in the dependency tree |

The suggestions that Status may contain are:

- "OK": the current version can be used as-is; nothing to do.
- "Need Create Repo": the package does not exist in openEuler; a new repository has to be created under the src-openeuler organization on Gitee.
- "Need Create Branch": the repository lacks the required branch; developers need to create and initialize it.
- "Need Init Branch": the branch exists but contains no source package of any version; developers need to initialize the branch.
- "Need Downgrade": downgrade the package.
- "Need Upgrade": upgrade the package.

Developers carry out the follow-up work according to the Status suggestion.
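As a rough illustration of how a Status value could be derived from the version fields above, the sketch below compares the version found in the openEuler repo against the collected requirements. It is a simplified assumption of the logic, not the actual oos implementation; `parse` comes from the `packaging` library commonly used for PEP 440 version comparison, and the repository-level states are only noted in the docstring.

```python
from packaging.version import parse

def suggest_status(repo_version, ge_version=None, lt_version=None, upper_version=None):
    """Map an openEuler repo version against the requirements to a Status hint.

    Simplified sketch: repository-level states ("Need Create Repo",
    "Need Create Branch", "Need Init Branch") would be decided earlier
    from Gitee metadata, and ne_version handling is omitted.
    """
    if repo_version is None:
        return "Need Create Repo"
    current = parse(repo_version)
    if ge_version and current < parse(ge_version):
        return "Need Upgrade"
    if lt_version and current >= parse(lt_version):
        return "Need Downgrade"
    if upper_version and current > parse(upper_version):
        return "Need Downgrade"
    return "OK"

print(suggest_status("0.3.4", ge_version="0.3.5", upper_version="0.3.7"))  # Need Upgrade
```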
#### 2.2.2 API and CLI definitions

Create a dependency analysis

- CLI: oos dependence analysis create
- endpoint: `/dependence/analysis`
- type: POST
- sync OR async: async
- request body:
  ```
  {
      "release"[required]: Enum("OpenStack Release"),
      "runtime"[optional][Default: "3.10"]: Enum("Python version"),
      "core"[optional][Default: False]: Boolean,
      "projects"[optional][Default: None]: List("OpenStack service")
  }
  ```
- response body:
  ```
  {
      "ID": UUID,
      "status": Enum("Running", "Error")
  }
  ```

Show dependency analyses

- CLI: oos dependence analysis show, oos dependence analysis list
- endpoint: `/dependence/analysis/{UUID}`, `/dependence/analysis`
- type: GET
- sync OR async: sync
- request body: None
- response body:
  ```
  {
      "ID": UUID,
      "status": Enum("Running", "Error", "OK")
  }
  ```

Delete a dependency analysis

- CLI: oos dependence analysis delete
- endpoint: `/dependence/analysis/{UUID}`
- type: DELETE
- sync OR async: sync
- request body: None
- response body:
  ```
  {
      "ID": UUID,
      "status": Enum("Error", "OK")
  }
  ```

Create a dependency comparison

- CLI: oos dependence generate
- endpoint: `/dependence/generate`
- type: POST
- sync OR async: async
- request body:
  ```
  {
      "analysis_id"[required]: UUID,
      "compare"[optional][Default: None]: {
          "token"[required]: GITEE_TOKEN_ID,
          "compare-from"[optional][Default: master]: Enum("openEuler project branch"),
          "compare-branch"[optional][Default: master]: Enum("openEuler project branch")
      }
  }
  ```
- response body:
  ```
  {
      "ID": UUID,
      "status": Enum("Running", "Error")
  }
  ```

Show dependency comparisons

- CLI: oos dependence generate show, oos dependence generate list
- endpoint: `/dependence/generate/{UUID}`, `/dependence/generate`
- type: GET
- sync OR async: sync
- request body: None
- response body:
  ```
  {
      "ID": UUID,
      "data": RAW(result data file)
  }
  ```

Delete a dependency comparison

- CLI: oos dependence generate delete
- endpoint: `/dependence/generate/{UUID}`
- type: DELETE
- sync OR async: sync
- request body: None
- response body:
  ```
  {
      "ID": UUID,
      "status": Enum("Error", "OK")
  }
  ```
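To make the Client/Server workflow concrete, the sketch below drives the endpoints defined above with plain HTTP: it creates an analysis, polls it until it leaves the Running state, then requests a comparison and fetches the result. The base URL, token placeholder, polling interval and the literal release/project values are assumptions for illustration; the endpoints and payload fields are the ones defined in this section.

```python
import time
import requests

BASE = "http://127.0.0.1:8080"          # assumed address of an oos web server

def wait_analysis(job_id, interval=10):
    """Poll the async analysis job until its status is no longer 'Running'."""
    while True:
        job = requests.get(f"{BASE}/dependence/analysis/{job_id}").json()
        if job["status"] != "Running":
            return job
        time.sleep(interval)

# 1. Create a dependency analysis, restricted to core packages of two services.
analysis = requests.post(f"{BASE}/dependence/analysis", json={
    "release": "wallaby",
    "core": True,
    "projects": ["nova", "neutron"],
}).json()
wait_analysis(analysis["ID"])

# 2. Create the comparison against openEuler, then fetch the raw result file.
generate = requests.post(f"{BASE}/dependence/generate", json={
    "analysis_id": analysis["ID"],
    "compare": {"token": "<GITEE_TOKEN_ID>"},
}).json()
result = requests.get(f"{BASE}/dependence/generate/{generate['ID']}").json()
# result["data"] carries the raw comparison table once the async job finishes.
```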
### 2.3 Software SPEC generation

Most of the Python libraries OpenStack depends on are developer-facing: they offer no user-visible service, only code-level APIs. Their RPM content is simple and uniformly structured, which makes them a good fit for tool-assisted generation to raise development efficiency.

#### 2.3.1 SPEC generation rules

Writing a SPEC generally goes through several stages, each with its own requirements:

1. General fields, including Name, Version, Release, Summary, License and so on. These values come from the target package's PyPI metadata.
2. Sub-package information, including the package name, build dependencies, install dependencies and description. These also come from the PyPI metadata. The package name must be visibly "pythonized", for example by using the `python3-` prefix.
3. Build-stage information, i.e. the %prep, %build, %install and %check sections. Their form is fixed, so generating the corresponding RPM macro calls is enough.
4. File packaging stage. This stage finds the built files and places bin, lib, doc and the rest into their corresponding directories.

**NOTE**: besides the general rules there are a few exceptions that deserve special mention:

1. If the package name already contains the word python, the `python-` or `python3-` prefix is not added again.
2. In the build and install stages, depending on how the package itself is installed, the macro is either `%py3_build` or `pyproject_build`; this needs human review.
3. If the package contains compiled code such as C, the `BuildArch: noarch` keyword must be removed, and the %files section must mind the difference between the RPM macros `%{python3_sitelib}` and `%{python3_sitearch}`.

#### 2.3.2 API and CLI definitions

Create a SPEC

- CLI: oos spec create
- endpoint: `/spec`
- type: POST
- sync OR async: async
- request body:
  ```
  {
      "name"[required]: String,
      "version"[optional][Default: "latest"]: String,
      "arch"[optional][Default: False]: Boolean,
      "check"[optional][Default: True]: Boolean,
      "pyproject"[optional][Default: False]: Boolean,
  }
  ```
- response body:
  ```
  {
      "ID": UUID,
      "status": Enum("Running", "Error")
  }
  ```

Show SPECs

- CLI: oos spec show, oos spec list
- endpoint: `/spec/{UUID}`, `/spec/`
- type: GET
- sync OR async: sync
- request body: None
- response body:
  ```
  {
      "ID": UUID,
      "status": Enum("Running", "Error", "OK")
  }
  ```

Update a SPEC

- CLI: oos spec update
- endpoint: `/spec/{UUID}`
- type: POST
- sync OR async: async
- request body:
  ```
  {
      "name"[required]: String,
      "version"[optional][Default: "latest"]: String,
  }
  ```
- response body:
  ```
  {
      "ID": UUID,
      "status": Enum("Running", "Error")
  }
  ```

Delete a SPEC

- CLI: oos spec delete
- endpoint: `/spec/{UUID}`
- type: DELETE
- sync OR async: sync
- request body: None
- response body:
  ```
  {
      "ID": UUID,
      "status": Enum("Error", "OK")
  }
  ```
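As a rough sketch of the "general fields from PyPI metadata" idea in 2.3.1, the snippet below pulls a package's metadata from the public PyPI JSON API and fills a minimal spec header. It is illustrative only: the template is a made-up fragment rather than the SIG's real spec template, and real generation also has to cover the sub-packages, build sections and %files stage described above.

```python
import requests

# Minimal, made-up header fragment; the real templates live under templet/.
SPEC_HEADER = """\
%global pypi_name {pypi_name}
Name:           python-{pypi_name}
Version:        {version}
Release:        1
Summary:        {summary}
License:        {license}
URL:            {home_page}
Source0:        https://pypi.io/packages/source/{initial}/{pypi_name}/{pypi_name}-{version}.tar.gz
BuildArch:      noarch
"""

def render_spec_header(name, version="latest"):
    """Fetch PyPI metadata and fill the general spec fields (stage 1 of 2.3.1)."""
    url = f"https://pypi.org/pypi/{name}/json" if version == "latest" \
        else f"https://pypi.org/pypi/{name}/{version}/json"
    info = requests.get(url).json()["info"]
    return SPEC_HEADER.format(
        pypi_name=info["name"],
        initial=info["name"][0],
        version=info["version"],
        summary=info["summary"] or info["name"],
        license=info["license"] or "ASL 2.0",
        home_page=info["home_page"] or info["package_url"],
    )

print(render_spec_header("oslo.service"))
```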
### 2.4 Automated deployment and test

OpenStack has many deployment scenarios, a complex deployment workflow and a fairly high technical bar. To address the high entry barrier, low efficiency and heavy manpower cost, the openEuler OpenStack development platform needs to provide automated deployment and test features.

**Automated deployment**

One-click deployment of OpenStack on openEuler, covering different architectures, different services and different scenarios, with the ability to quickly provision and configure openEuler environments on different infrastructures. A plugin mechanism is provided so that users can extend the supported deployment back ends and scenarios.

**Automated test**

One-click testing of OpenStack on openEuler, covering different test scenarios, with support for user-defined tests, standardized test reports, and the ability to upload and persist test results.

#### 2.4.1 Automated deployment

Automated deployment consists of two parts: openEuler environment preparation and OpenStack deployment.

**openEuler environment preparation**

Provides the ability to quickly provision openEuler environments. The supported provisioning methods are creating public cloud resources and adopting an existing environment. The design is as follows:

**NOTE**: OpenStack on openEuler is supported mainly in the RPM + systemd form; containerized deployment is not supported yet.

Creating public cloud resources

Public cloud resources are created mainly as virtual machines (bare metal is complicated to operate on public clouds and ecosystem support is insufficient, so it is not supported for now). A plugin mechanism provides multi-cloud support; Huawei Cloud is the reference implementation and is implemented first, and other clouds will be supported continuously according to user demand. Depending on the scenario, both all-in-one and three-node topologies are supported.

1. Create an environment

   - CLI: oos env create
   - endpoint: `/environment`
   - type: POST
   - sync OR async: async
   - request body:
     ```
     {
         "name"[required]: String,
         "type"[required]: Enum("all-in-one", "cluster"),
         "release"[required]: Enum("openEuler_Release"),
         "flavor"[required]: Enum("small", "medium", "large"),
         "arch"[required]: Enum("x86", "arm64"),
     }
     ```
   - response body:
     ```
     {
         "ID": UUID,
         "status": Enum("Running", "Error")
     }
     ```

2. List environments

   - CLI: oos env list
   - endpoint: `/environment`
   - type: GET
   - sync OR async: async
   - request body: None
   - response body:
     ```
     {
         "ID": UUID,
         "Provider": String,
         "Name": String,
         "IP": IP_ADDRESS,
         "Flavor": Enum("small", "medium", "large"),
         "openEuler_release": String,
         "OpenStack_release": String,
         "create_time": TIME,
     }
     ```

3. Delete an environment

   - CLI: oos env delete
   - endpoint: `/environment/{UUID}`
   - type: DELETE
   - sync OR async: sync
   - request body: None
   - response body:
     ```
     {
         "ID": UUID,
         "status": Enum("Error", "OK")
     }
     ```

Adopting an existing environment

Users can also deploy OpenStack directly on an existing openEuler environment, in which case the environment has to be adopted into the platform. Once adopted, the environment can be queried or deleted just like a created one.
1. Adopt an environment

   - CLI: oos env manage
   - endpoint: `/environment/manage`
   - type: POST
   - sync OR async: sync
   - request body:
     ```
     {
         "name"[required]: String,
         "ip"[required]: IP_ADDRESS,
         "release"[required]: Enum("openEuler_Release"),
         "password"[required]: String,
     }
     ```
   - response body:
     ```
     {
         "ID": UUID,
         "status": Enum("Error", "OK")
     }
     ```

**OpenStack deployment**

Provides the ability to deploy a given OpenStack release on a created or adopted openEuler environment.

1. Deploy OpenStack

   - CLI: oos env setup
   - endpoint: `/environment/setup`
   - type: POST
   - sync OR async: async
   - request body:
     ```
     {
         "target"[required]: UUID(environment),
         "release"[required]: Enum("OpenStack_Release"),
     }
     ```
   - response body:
     ```
     {
         "ID": UUID,
         "status": Enum("Running", "Error")
     }
     ```

2. Initialize OpenStack resources

   - CLI: oos env init
   - endpoint: `/environment/init`
   - type: POST
   - sync OR async: async
   - request body:
     ```
     {
         "target"[required]: UUID(environment),
     }
     ```
   - response body:
     ```
     {
         "ID": UUID,
         "status": Enum("Running", "Error")
     }
     ```

3. Tear down a deployed OpenStack

   - CLI: oos env clean
   - endpoint: `/environment/clean`
   - type: POST
   - sync OR async: async
   - request body:
     ```
     {
         "target"[required]: UUID(environment),
     }
     ```
   - response body:
     ```
     {
         "ID": UUID,
         "status": Enum("Running", "Error")
     }
     ```

#### Automated test

Once an environment is deployed successfully, the SIG development platform provides automated testing on top of the deployed OpenStack. The key points are:

OpenStack itself ships a complete test framework covering unit tests and functional tests. The unit tests are already covered by the RPM specs from section 2.3: the %check stage of a spec defines how each project runs its unit tests, and in most cases adding `pytest` or `stestr` there is enough. The functional tests are provided by the OpenStack Tempest service; during the `oos env init` stage of the automated deployment described above, oos installs Tempest automatically and generates a default configuration file.

- CLI: oos env test
- endpoint: `/environment/test`
- type: POST
- sync OR async: async
- request body:
  ```
  {
      "target"[required]: UUID(environment),
  }
  ```
- response body:
  ```
  {
      "ID": UUID,
      "status": Enum("Running", "Error")
  }
  ```

After the test run finishes, oos produces a test report; by default it uses the `subunit2html` tool to generate the Tempest results as an HTML file.
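The deployment and test endpoints above chain together naturally. The sketch below strings them into one workflow over HTTP; the base URL and the literal field values are assumptions, the payloads follow the definitions in 2.4.1, and the ID returned by the create call is assumed to identify the new environment.

```python
import requests

BASE = "http://127.0.0.1:8080"   # assumed address of an oos web server

# 1. Provision an all-in-one openEuler VM on the public cloud backend.
env = requests.post(f"{BASE}/environment", json={
    "name": "wallaby-ci",
    "type": "all-in-one",
    "release": "openEuler-22.03-LTS",
    "flavor": "medium",
    "arch": "arm64",
}).json()

# Each call below is asynchronous; a real script would watch GET /environment
# until the target environment is ready before moving to the next step.

# 2. Deploy the requested OpenStack release on that environment.
requests.post(f"{BASE}/environment/setup",
              json={"target": env["ID"], "release": "wallaby"})

# 3. Create the default OpenStack resources (this also sets up Tempest).
requests.post(f"{BASE}/environment/init", json={"target": env["ID"]})

# 4. Run the Tempest-based functional tests and let oos build the HTML report.
requests.post(f"{BASE}/environment/test", json={"target": env["ID"]})
```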
### 2.5 openEuler development automation

OpenStack involves a large number of packages, and as releases evolve and more services are supported, the list of packages the SIG maintains keeps growing. To cut down on repetitive development work, oos also wraps some easy-to-use automation for the code hosting platform, such as automated code submission based on Gitee. The features are grouped as follows:

```
┌───────────────────────────────────────────┐
│                Code Action                │
└──────┬──────────────┬──────────────┬──────┘
       │              │              │
┌──────▼──────┐ ┌─────▼───────┐ ┌────▼──────────────┐
│ Repo Action │ │Branch Action│ │Pull Request Action│
└─────────────┘ └─────────────┘ └───────────────────┘
```

**Repo Action**

Automation related to package repositories:

Create a repository automatically

- CLI: oos repo create
- endpoint: `/repo`
- type: POST
- sync OR async: async
- request body:
  ```
  {
      "project"[required]: String,
      "repo"[required]: String,
      "push"[optional][Default: "False"]: Boolean,
  }
  ```
- response body:
  ```
  {
      "ID": UUID,
      "status": Enum("Running", "Error")
  }
  ```

**Branch Action**

Automation related to repository branches:

Create branches automatically

- CLI: oos repo branch-create
- endpoint: `/repo/branch`
- type: POST
- sync OR async: async
- request body:
  ```
  {
      "branches"[required]: {
          "branch-name"[required]: String,
          "branch-type"[optional][Default: "None"]: Enum("protected"),
          "parent-branch"[required]: String
      }
  }
  ```
- response body:
  ```
  {
      "ID": UUID,
      "status": Enum("Running", "Error")
  }
  ```

**Pull Request Action**

Automation related to code PRs:

Add a PR comment, so that users can conveniently post routine comments such as retest or /lgtm.

- CLI: oos repo pr-comment
- endpoint: `/repo/pr/comment`
- type: POST
- sync OR async: sync
- request body:
  ```
  {
      "repo"[required]: String,
      "pr_number"[required]: Int,
      "comment"[required]: String
  }
  ```
- response body:
  ```
  {
      "ID": UUID,
      "status": Enum("OK", "Error")
  }
  ```
Enum(\"OK\", \"Error\") } \u83b7\u53d6SIG\u6240\u6709PR\uff0c\u65b9\u4fbfmaintainer\u83b7\u53d6\u5f53\u524dSIG\u7684\u5f00\u53d1\u73b0\u72b6\uff0c\u63d0\u9ad8\u8bc4\u5ba1\u6548\u7387\u3002 CLI: oos repo pr-fetch endpoint: /repo/pr/fetch type: POST sync OR async: async request body: { \"repo\"[optional][Default: \"None\"]: List[String] } response body: { \"ID\": UUID, \"status\": Enum(\"Running\", \"Error\") } 3. \u8d28\u91cf\u3001\u5b89\u5168\u4e0e\u5408\u89c4 \u00b6 SIG\u5f00\u6e90\u8f6f\u4ef6\u9700\u8981\u7b26\u5408openeEuler\u793e\u533a\u5bf9\u5176\u4e2d\u8f6f\u4ef6\u7684\u5404\u79cd\u8981\u6c42\uff0c\u5e76\u4e14\u4e5f\u8981\u7b26\u5408OpenStack\u793e\u533a\u8f6f\u4ef6\u7684\u51fa\u53e3\u6807\u51c6\u3002 3.1 \u8d28\u91cf\u4e0e\u5b89\u5168 \u00b6 \u8f6f\u4ef6\u8d28\u91cf\uff08\u53ef\u670d\u52a1\u6027\uff09 \u5bf9\u5e94\u8f6f\u4ef6\u4ee3\u7801\u9700\u5305\u542b\u5355\u5143\u6d4b\u8bd5\uff0c\u8986\u76d6\u7387\u4e0d\u4f4e\u4e8e80%\u3002 \u9700\u63d0\u4f9b\u7aef\u5230\u7aef\u529f\u80fd\u6d4b\u8bd5\uff0c\u8986\u76d6\u4e0a\u8ff0\u6240\u6709\u63a5\u53e3\uff0c\u4ee5\u53ca\u6838\u5fc3\u7684\u573a\u666f\u6d4b\u8bd5\u3002 \u57fa\u4e8eopenEuler\u793e\u533aCI\uff0c\u6784\u5efaCI/CD\u6d41\u7a0b\uff0c\u6240\u6709Pull Request\u8981\u6709CI\u4fdd\u8bc1\u4ee3\u7801\u8d28\u91cf\uff0c\u5b9a\u671f\u53d1\u5e03release\u7248\u672c\uff0c\u8f6f\u4ef6\u53d1\u5e03\u95f4\u9694\u4e0d\u5927\u4e8e3\u4e2a\u6708\u3002 \u57fa\u4e8eGitee ISSUE\u7cfb\u7edf\u5904\u7406\u7528\u6237\u53d1\u73b0\u5e76\u53cd\u9988\u7684\u95ee\u9898\uff0c\u95ed\u73af\u7387\u5927\u4e8e80%\uff0c\u95ed\u73af\u5468\u671f\u4e0d\u8d85\u8fc71\u5468\u3002 \u8f6f\u4ef6\u5b89\u5168 \u6570\u636e\u5b89\u5168\uff1a\u8f6f\u4ef6\u5168\u7a0b\u4e0d\u8054\u7f51\uff0c\u6301\u4e45\u5b58\u50a8\u4e2d\u4e0d\u5305\u542b\u7528\u6237\u654f\u611f\u4fe1\u606f\u3002 \u7f51\u7edc\u5b89\u5168\uff1aOOS\u5728REST\u67b6\u6784\u4e0b\u4f7f\u7528http\u534f\u8bae\u901a\u4fe1\uff0c\u4f46\u8f6f\u4ef6\u8bbe\u8ba1\u76ee\u6807\u5b9e\u5728\u5185\u7f51\u73af\u5883\u4e2d\u4f7f\u7528\uff0c\u4e0d\u5efa\u8bae\u66b4\u9732\u5728\u516c\u7f51IP\u4e2d\uff0c\u5982\u5fc5\u987b\u5982\u6b64\uff0c\u5efa\u8bae\u589e\u52a0\u8bbf\u95eeIP\u767d\u540d\u5355\u9650\u5236\u3002 \u7cfb\u7edf\u5b89\u5168\uff1a\u57fa\u4e8eopenEuler\u5b89\u5168\u673a\u5236\uff0c\u5b9a\u671f\u53d1\u5e03CVE\u4fee\u590d\u6216\u5b89\u5168\u8865\u4e01\u3002 \u5e94\u7528\u5c42\u5b89\u5168\uff1a\u4e0d\u6d89\u53ca\uff0c\u4e0d\u63d0\u4f9b\u5e94\u7528\u7ea7\u5b89\u5168\u670d\u52a1\uff0c\u4f8b\u5982\u5bc6\u7801\u7b56\u7565\u3001\u8bbf\u95ee\u63a7\u5236\u7b49\u3002 \u7ba1\u7406\u5b89\u5168\uff1a\u8f6f\u4ef6\u63d0\u4f9b\u65e5\u5fd7\u751f\u6210\u548c\u5468\u671f\u6027\u5907\u4efd\u673a\u5236\uff0c\u65b9\u4fbf\u7528\u6237\u5b9a\u671f\u5ba1\u8ba1\u3002 \u53ef\u9760\u6027 \u672c\u8f6f\u4ef6\u9762\u5411openEuler\u793e\u533aOpenStack\u5f00\u53d1\u884c\u4e3a\uff0c\u4e0d\u6d89\u53ca\u670d\u52a1\u4e0a\u7ebf\u6216\u8005\u5546\u4e1a\u751f\u4ea7\u843d\u5730\uff0c\u6240\u6709\u4ee3\u7801\u516c\u5f00\u900f\u660e\uff0c\u4e0d\u6d89\u53ca\u79c1\u6709\u529f\u80fd\u53ca\u4ee3\u7801\u3002\u56e0\u6b64\u4e0d\u63d0\u4f9b\u4f8b\u5982\u8282\u70b9\u5197\u4f59\u3001\u5bb9\u707e\u5907\u4efd\u80fd\u529f\u80fd\u3002 3.2 \u5408\u89c4 \u00b6 License\u5408\u89c4 \u672c\u5e73\u53f0\u91c7\u7528Apache2.0 License\uff0c\u4e0d\u9650\u5236\u4e0b\u6e38fork\u8f6f\u4ef6\u7684\u95ed\u6e90\u4e0e\u5546\u4e1a\u884c\u4e3a\uff0c\u4f46\u4e0b\u6e38\u8f6f\u4ef6\u9700\u6807\u6ce8\u4ee3\u7801\u6765\u6e90\u4ee5\u53ca\u4fdd\u7559\u539f\u6709License\u3002 \u6cd5\u52a1\u5408\u89c4 
## 3. Quality, security and compliance

Software released by the SIG must meet the openEuler community's requirements for the software it ships, and must also meet the OpenStack community's release criteria.

### 3.1 Quality and security

**Software quality (serviceability)**

- The code must include unit tests with coverage of no less than 80%.
- End-to-end functional tests must be provided, covering all of the interfaces above as well as the core scenarios.
- A CI/CD pipeline is built on top of the openEuler community CI; every Pull Request must pass CI to guarantee code quality, releases are published regularly, and the release interval is no longer than 3 months.
- Issues found and reported by users are handled through the Gitee issue system, with a closure rate above 80% and a closure cycle of no more than one week.

**Software security**

- Data security: the software never needs to go online during its workflow, and persistent storage contains no sensitive user information.
- Network security: in the REST architecture OOS communicates over plain HTTP, but the software is designed for use inside an intranet. Exposing it on a public IP is not recommended; if that is unavoidable, an IP allow-list for access is recommended.
- System security: based on the openEuler security mechanisms, CVE fixes or security patches are released regularly.
- Application security: not applicable; no application-level security services such as password policies or access control are provided.
- Management security: the software provides log generation and periodic backup mechanisms so that users can audit it regularly.

**Reliability**

The software targets OpenStack development activity in the openEuler community; it is not involved in bringing services online or in commercial production. All code is open and transparent, with no private features or code, so capabilities such as node redundancy or disaster-recovery backup are not provided.

### 3.2 Compliance

**License compliance**

The platform is licensed under the Apache 2.0 License. Downstream forks are free to go closed-source or commercial, but downstream software must credit the origin of the code and keep the original license.

**Legal compliance**

The platform is developed and maintained jointly by open source developers and involves no corporate secrets or non-public code. All contributors must follow the openEuler community contribution guidelines and make sure their own contributions are compliant and lawful; the SIG and the community themselves assume no corresponding liability.

If non-compliant source code is found, the SIG has the right and the obligation to remove it promptly without the contributor's permission, and may bar the non-compliant code or its contributor from further contributions.

Developers who want to contribute code that is not yet public must first follow their own company's open source process and rules, and then contribute the code publicly in accordance with the openEuler community's open source guidelines.

## 4. Implementation plan

| Time | Content | Status |
|:-----|:--------|:-------|
| 2021.06 | Finish the overall framework, implement the CLI built-in mechanism, make at least one API usable | Done |
| 2021.12 | Make the full feature set of the CLI built-in mechanism usable | Done |
| 2022.06 | Harden quality, guarantee functionality, formally introduce OOS into the openEuler OpenStack community development workflow | Done |
| 2022.12 | Keep improving OOS for usability and robustness, reach more than 80% automation coverage, reduce the development manpower required | Done |
| 2023.06 | Complete the REST framework and CI/CD pipeline, enrich the plugin mechanism, add more backend support | Working in progress |
| 2023.12 | Complete the front-end GUI | Planning |
\u6982\u8ff0"},{"location":"spec/openstack-sig-tool/#11","text":"\u76ee\u524d\uff0c\u968f\u7740SIG\u7684\u4e0d\u65ad\u53d1\u5c55\uff0c\u6211\u4eec\u660e\u663e\u7684\u9047\u5230\u4e86\u4ee5\u4e0b\u51e0\u7c7b\u95ee\u9898\uff1a 1. OpenStack\u6280\u672f\u590d\u6742\uff0c\u6d89\u53ca\u4e91IAAS\u5c42\u7684\u8ba1\u7b97\u3001\u7f51\u7edc\u3001\u5b58\u50a8\u3001\u955c\u50cf\u3001\u9274\u6743\u7b49\u65b9\u65b9\u9762\u9762\u7684\u6280\u672f\uff0c\u5f00\u53d1\u8005\u5f88\u96be\u5168\u77e5\u5168\u4f1a\uff0c\u63d0\u4ea4\u7684\u4ee3\u7801\u903b\u8f91\u3001\u8d28\u91cf\u582a\u5fe7\u3002 2. OpenStack\u662f\u7531python\u7f16\u5199\u7684\uff0cpython\u8f6f\u4ef6\u7684\u4f9d\u8d56\u95ee\u9898\u96be\u4ee5\u5904\u7406\uff0c\u4ee5OpenStack Wallaby\u7248\u672c\u4e3a\u4f8b\uff0c\u6d89\u53ca\u6838\u5fc3python\u8f6f\u4ef6\u5305400+\uff0c \u6bcf\u4e2a\u8f6f\u4ef6\u7684\u4f9d\u8d56\u5c42\u7ea7\u3001\u4f9d\u8d56\u7248\u672c\u9519\u7efc\u590d\u6742\uff0c\u9009\u578b\u56f0\u96be\uff0c\u96be\u4ee5\u5f62\u6210\u95ed\u73af\u3002 3. OpenStack\u8f6f\u4ef6\u5305\u4f17\u591a\uff0cRPM Spec\u7f16\u5199\u5f00\u53d1\u91cf\u5de8\u5927\uff0c\u5e76\u4e14\u968f\u7740openEuler\u3001OpenStack\u672c\u8eab\u7248\u672c\u7684\u4e0d\u65ad\u6f14\u8fdb\uff0cN:N\u7684\u9002\u914d\u5173\u7cfb\u4f1a\u5bfc\u81f4\u5de5\u4f5c\u91cf\u6210\u500d\u589e\u957f\uff0c\u4eba\u529b\u6210\u672c\u8d8a\u6765\u8d8a\u5927\u3002 4. OpenStack\u6d4b\u8bd5\u95e8\u69db\u8fc7\u9ad8\uff0c\u4e0d\u4ec5\u9700\u8981\u5f00\u53d1\u4eba\u5458\u719f\u6089OpenStack\uff0c\u8fd8\u8981\u5bf9\u865a\u62df\u5316\u3001\u865a\u62df\u7f51\u6865\u3001\u5757\u5b58\u50a8\u7b49Linux\u5e95\u5c42\u6280\u672f\u6709\u4e00\u5b9a\u4e86\u89e3\u4e0e\u638c\u63e1\uff0c\u90e8\u7f72\u4e00\u5957OpenStack\u73af\u5883\u8017\u65f6\u8fc7\u957f\uff0c\u529f\u80fd\u6d4b\u8bd5\u96be\u5ea6\u5de8\u5927\u3002\u5e76\u4e14\u6d4b\u8bd5\u573a\u666f\u591a\uff0c\u6bd4\u5982X86\u3001ARM64\u67b6\u6784\u6d4b\u8bd5\uff0c\u88f8\u673a\u3001\u865a\u673a\u79cd\u7c7b\u6d4b\u8bd5\uff0cOVS\u3001OVN\u7f51\u6865\u6d4b\u8bd5\uff0cLVM\u3001Ceph\u5b58\u50a8\u6d4b\u8bd5\u7b49\u7b49\uff0c\u66f4\u52a0\u52a0\u91cd\u4e86\u4eba\u529b\u6210\u672c\u4ee5\u53ca\u6280\u672f\u95e8\u69db\u3002","title":"1.1 \u5f53\u524d\u73b0\u72b6"},{"location":"spec/openstack-sig-tool/#12","text":"\u9488\u5bf9\u4ee5\u4e0a\u76ee\u524dSIG\u9047\u5230\u7684\u95ee\u9898\uff0c\u89c4\u8303\u5316\u3001\u5de5\u5177\u5316\u3001\u81ea\u52a8\u5316\u7684\u76ee\u6807\u52bf\u5728\u5fc5\u884c\u3002\u672c\u7bc7\u8bbe\u8ba1\u6587\u6863\u65e8\u5728\u5728openEuler OpenStack SIG\u4e2d\u63d0\u4f9b\u4e00\u4e2a\u7aef\u5230\u7aef\u53ef\u7528\u7684\u5f00\u53d1\u89e3\u51b3\u65b9\u6848\uff0c\u4ece\u6280\u672f\u89c4\u8303\u5230\u6280\u672f\u5b9e\u73b0\uff0c\u63d0\u51fa\u4e25\u683c\u7684\u6807\u51c6\u8981\u6c42\u4e0e\u8bbe\u8ba1\u65b9\u6848\uff0c\u6ee1\u8db3SIG\u5f00\u53d1\u8005\u7684\u65e5\u5e38\u5f00\u53d1\u9700\u6c42\uff0c\u964d\u4f4e\u5f00\u53d1\u6210\u672c\uff0c\u51cf\u5c11\u4eba\u529b\u6295\u5165\u6210\u672c\uff0c\u964d\u4f4e\u5f00\u53d1\u95e8\u69db\uff0c\u4ece\u800c\u63d0\u9ad8\u5f00\u53d1\u6548\u7387\u3001\u63d0\u9ad8SIG\u8f6f\u4ef6\u8d28\u91cf\u3001\u53d1\u5c55SIG\u751f\u6001\u3001\u5438\u5f15\u66f4\u591a\u5f00\u53d1\u8005\u52a0\u5165SIG\u3002\u4e3b\u8981\u52a8\u4f5c\u5982\u4e0b\uff1a 1. \u8f93\u51faOpenStack\u670d\u52a1\u7c7b\u8f6f\u4ef6\u3001\u4f9d\u8d56\u5e93\u8f6f\u4ef6\u7684RPM SPEC\u5f00\u53d1\u89c4\u8303\uff0c\u5f00\u53d1\u8005\u53caReviewer\u9700\u8981\u4e25\u683c\u9075\u5b88\u89c4\u8303\u8fdb\u884c\u5f00\u53d1\u5b9e\u65bd\u3002 2. 
2. Provide OpenStack Python dependency analysis, generating the dependency topology and result in one step, keeping the dependency chain closed and avoiding dependency risks.
3. Provide OpenStack RPM spec generation: for common packages, generate the RPM spec in one step, shortening development time and reducing investment.
4. Provide an automated deployment and test platform, able to deploy a specified OpenStack version on any openEuler version in one step for fast testing and fast iteration.
5. Provide automation for openEuler Gitee repositories to satisfy bulk operations, such as creating code branches, creating repositories, and submitting pull requests.

The solutions above can be unified into a single platform, which we call the OpenStack SIG Tool (oos for short), i.e. the openEuler OpenStack development platform. Its architecture is as follows: the CLI can call the platform directly in Built-in mode or, like the GUI, go through the REST interface; underneath, the platform exposes four function groups.

```none
┌─────┐                      ┌─────┐
│ CLI │                      │ GUI │
└──┬──┘                      └──┬──┘
   │ Built-in / REST            │ REST
   ▼                            ▼
┌───────────────────────────────────────────────────┐
│            OpenStack Develop Platform             │
└────┬─────────────┬────────────────┬───────────┬───┘
     ▼             ▼                ▼           ▼
┌──────────┐ ┌──────────┐ ┌───────────┐ ┌───────────┐
│Dependency│ │   SPEC   │ │Deploy and │ │   Code    │
│ Analysis │ │Generation│ │   Test    │ │  Action   │
└──────────┘ └──────────┘ └───────────┘ └───────────┘
```

The architecture supports two modes:

1. Client/Server mode. In this mode oos is deployed as a web server and clients call it over REST.
   - Pros: supports asynchronous calls, concurrent processing, and persistent records.
   - Cons: there is some installation and deployment cost, and the usage is less flexible.
2. Built-in mode. In this mode oos needs no deployment; it exposes its functions through the built-in CLI, and users invoke them directly.
   - Pros: no deployment, usable anytime and anywhere.
   - Cons: no persistence, no concurrency, single user and single use.

## 2. Detailed Design

### 2.1 OpenStack Spec Specification

The spec specification consists of one or more spec templates that strictly define the content of every key field and build section of an RPM spec. Developers must satisfy the specification when writing specs, otherwise the code is not allowed to be merged. The content of the specification is decided through public discussion among the SIG maintainers and reviewed and updated periodically; anyone may question the specification or propose changes, and the maintainers are responsible for explaining and refreshing it. The specification currently covers two categories:
1. Service software specification. Typical examples are OpenStack core services such as Nova, Neutron, and Cinder. They are highly customized and differ a lot in content, so the specs must be written by hand. The specification must clearly define how the software is layered, how it is built, what the packages contain, how it is tested, and how the version number is chosen.
2. Common dependency software specification. Such software is rarely customized and has a uniform structure, so it is suitable for one-click generation by an automated tool; the specification only needs to define the generation rules of the tool.

#### 2.1.1 Service Software Specification

Each OpenStack service usually consists of several sub-services. When packaging, we split it accordingly into several sub-RPMs. This chapter defines the principles the openEuler SIG uses to split the RPM packages of an OpenStack service.

##### 2.1.1.1 General Principle

A layered architecture is used. Using openstack-nova as an example, the RPM package structure is:

| Level | Package | Example |
|:-----:|---------|---------|
| 1 | Root Package, Doc Package (optional) | openstack-nova.rpm, openstack-nova-doc.rpm |
| 2 | Service Packages | openstack-nova-compute.rpm, openstack-nova-api.rpm |
| 3 | Common Package | openstack-nova-common.rpm |
| 4 | Library Package, Library Test Package (optional) | python2-nova.rpm, python2-nova-tests.rpm |
As shown above, the packages are divided into four levels:

- The Root Package is the top-level RPM. As a rule it contains no files and only serves as a collection of the services; users can install all sub-RPMs in one step through it. If the project ships doc-related files, they may also be packaged separately (optional).
- A Service Package is the RPM of a sub-service. It contains that service's systemd unit file, its own configuration files, and so on.
- The Common Package is the RPM of the shared dependencies; it contains the common configuration files and system configuration files that all sub-services depend on.
- The Library Package is the Python source package and contains the project's Python code. If the project ships test-related files, they may also be packaged separately (optional).

Projects covered by this principle: openstack-nova, openstack-cinder, openstack-glance, openstack-placement, openstack-ironic.

##### 2.1.1.2 Special Cases

Some OpenStack components contain only a single service and have no concept of sub-services. Such a service only needs two levels, using openstack-keystone as an example:

| Level | Package | Example |
|:-----:|---------|---------|
| 1 | Root Package, Doc Package (optional) | openstack-keystone.rpm, openstack-keystone-doc.rpm |
| 2 | Library Package, Library Test Package (optional) | python2-keystone.rpm, python2-keystone-tests.rpm |
- The Root Package contains all files other than the Python source code, including service unit files, project configuration files, system configuration files, and so on. If the project ships doc-related files, they may also be packaged separately (optional).
- The Library Package is the Python source package and contains the project's Python code. If the project ships test-related files, they may also be packaged separately (optional).

Projects covered by this principle: openstack-keystone, openstack-horizon.

There are also projects that have several sub-RPMs which are mutually exclusive. Such a service is structured as follows, using openstack-neutron as an example:

| Level | Package | Example |
|:-----:|---------|---------|
| 1 | Root Package, Doc Package (optional) | openstack-neutron.rpm, openstack-neutron-doc.rpm |
| 2 | Service Packages (Service2 and Service3 are mutually exclusive) | openstack-neutron-server.rpm, openstack-neutron-openvswitch.rpm, openstack-neutron-linuxbridge.rpm |
| 3 | Common Package | openstack-neutron-common.rpm |
| 4 | Library Package, Library Test Package (optional) | python2-neutron.rpm, python2-neutron-tests.rpm |

As shown above, Service2 and Service3 are mutually exclusive:

- The Root Package only pulls in the non-exclusive sub-packages; the mutually exclusive ones are provided separately. If the project ships doc-related files, they may also be packaged separately (optional).
- A Service Package is the RPM of a sub-service. It contains that service's systemd unit file, its own configuration files, and so on. Mutually exclusive Service Packages are not included in the Root Package and must be installed by the user separately.
- The Common Package is the RPM of the shared dependencies; it contains the common configuration files and system configuration files that all sub-services depend on.
- The Library Package is the Python source package and contains the project's Python code. If the project ships test-related files, they may also be packaged separately (optional).

Projects covered by this principle: openstack-neutron.
#### 2.1.2 Common Dependency Software Specification

A dependency library generally contains only one RPM and does not need to be split:

| Level | Package | Example |
|:-----:|---------|---------|
| 1 | Library Package, Help Package (optional) | python2-oslo-service.rpm, python2-oslo-service-help.rpm |

NOTE: the openEuler community has naming requirements for python2 and python3 RPMs: python2 packages are prefixed with `python2-` and python3 packages with `python3-`. OpenStack therefore requires developers to follow the openEuler community convention when packaging library RPMs.

### 2.2 Software Dependency Function

The software dependency analysis function gives users the ability to analyze, in one step, the full Python dependency topology of a target OpenStack release together with the corresponding package versions, automatically compare it against a target openEuler release, and output development recommendations for the affected packages. It consists of two sub-functions:
- Dependency analysis. Parse the dependency tree of the OpenStack Python packages and unfold the dependency topology. Walking the tree is essentially a traversal of a directed graph; in theory a healthy Python dependency tree is a directed acyclic graph, and the common breadth-first search is enough to resolve it. In some special cases, however, the tree becomes a cyclic directed graph. For example, Sphinx is a documentation project, yet generating its own documentation also depends on Sphinx, which forms a dependency cycle; some test libraries behave similarly. For such cases we simply break the offending nodes on the cycle by hand. Another way to avoid the problem is to skip non-core libraries such as documentation and test dependencies; this not only prevents cycles but also greatly reduces the number of packages and the development workload. Taking OpenStack Wallaby as an example, the full dependency set is roughly 700+ packages, and after dropping documentation and test dependencies it is roughly 300+. We therefore introduce the notion of `core`, letting users choose the analysis scope according to their needs. In addition, although OpenStack contains dozens of services, a user may only need some of them, so we also introduce a `projects` filter that lets users restrict the dependency analysis to the services they care about.
- Dependency comparison. After the dependency analysis there is still openEuler development work to do, so we also provide RPM development recommendations for the target openEuler release. openEuler and OpenStack releases have an N:N mapping: one openEuler release can support several OpenStack releases, and one OpenStack release can be deployed on several openEuler releases. Once the user specifies the target openEuler and OpenStack releases, this function automatically walks the openEuler package repositories and reports what has to be done for every package OpenStack needs, for example initializing a repository, creating an openEuler branch, or upgrading a package, giving developers guidance for the follow-up work.
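The breadth-first walk with manual cycle breaking described above can be sketched in a few lines. The snippet below is only an illustration: the `REQUIRES` table, the `SKIP` set, and the package names are placeholders, not data used by oos.

```python
from collections import deque

# Hypothetical requirement map; in oos the edges come from real package
# metadata rather than a hard-coded dict.
REQUIRES = {
    "nova": ["oslo.config", "sphinx"],
    "oslo.config": ["pbr"],
    "sphinx": ["sphinx"],   # self-cycle, like the real Sphinx docs build
    "pbr": [],
}
SKIP = {"sphinx"}           # nodes broken by hand / non-core (docs, tests)

def walk(roots):
    """Breadth-first walk of the dependency graph, skipping broken nodes."""
    seen, order = set(roots), []
    queue = deque(roots)
    while queue:
        pkg = queue.popleft()
        order.append(pkg)
        for dep in REQUIRES.get(pkg, []):
            if dep in SKIP or dep in seen:
                continue
            seen.add(dep)
            queue.append(dep)
    return order

print(walk(["nova"]))   # ['nova', 'oslo.config', 'pbr']
```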
#### 2.2.1 Version Matching Specification

Dependency analysis:

- Input: the target OpenStack release, the target list of OpenStack services, and whether to analyze core packages only.
- Output: every package involved and the corresponding content of each package, in the following layout:

```none
└── {OpenStack release}_cached_file
    └── packageA.yaml
    └── packageB.yaml
    └── packageC.yaml
    ......
```

The content of each package file has the following format:

```json
{
    "name": "packageA",
    "version_dict": {
        "version": "0.3.7",
        "eq_version": "",
        "ge_version": "0.3.5",
        "lt_version": "",
        "ne_version": [],
        "upper_version": "0.3.7"},
    "deep": {
        "count": 1,
        "list": ["packageB", "packageC"]},
    "requires": {}
}
```

Field description:

| Key | Description |
|:---:|:------------|
| name | package name |
| version_dict | version requirements, including equal to, greater than or equal to, less than, not equal to, and so on |
| deep | the depth of the package in the full dependency tree and the traversal path |
| requires | the list of packages this package depends on |

Dependency comparison:

- Input: the dependency analysis result, the target openEuler release, and the base comparison baseline.
- Output: a table with the analysis result and handling recommendation for every package, one row per package. The columns are defined as follows:

| Column | Description |
|--------|-------------|
| Project Name | package name |
| openEuler Repo | name of the package's source repository on openEuler |
| Repo version | source version on openEuler |
| Required (Min) Version | minimum required version |
| lt Version | version it must be lower than |
| ne Version | versions it must not equal |
| Upper Version | maximum required version |
| Status | development recommendation |
| Requires | the package's dependency list |
| Depth | depth of the package in the dependency tree |

The possible Status recommendations are:

- "OK": the current version is usable as-is, nothing to do.
- "Need Create Repo": openEuler does not have this package; a new repository must be created under the src-openeuler organization on Gitee.
- "Need Create Branch": the repository lacks the required branch; the developer must create and initialize it.
- "Need Init Branch": the branch exists but contains no source package of any version; the developer must initialize the branch.
- "Need Downgrade": downgrade the package.
- "Need Upgrade": upgrade the package.

Developers carry out the follow-up development actions according to the Status recommendation.
#### 2.2.2 API and CLI Definitions

- Create a dependency analysis: CLI `oos dependence analysis create`; endpoint `POST /dependence/analysis` (async); request body: `release` (required, OpenStack release), `runtime` (optional, default `"3.10"`, Python version), `core` (optional, default `False`, Boolean), `projects` (optional, default `None`, list of OpenStack services); response body: `ID` (UUID), `status` (`Running` | `Error`).
- Show dependency analyses: CLI `oos dependence analysis show` / `oos dependence analysis list`; endpoint `GET /dependence/analysis/{UUID}` / `GET /dependence/analysis` (sync); no request body; response body: `ID` (UUID), `status` (`Running` | `Error` | `OK`).
- Delete a dependency analysis: CLI `oos dependence analysis delete`; endpoint `DELETE /dependence/analysis/{UUID}` (sync); no request body; response body: `ID` (UUID), `status` (`Error` | `OK`).
- Create a dependency comparison: CLI `oos dependence generate`; endpoint `POST /dependence/generate` (async); request body: `analysis_id` (required, UUID), `compare` (optional, default `None`) with `token` (required, Gitee token), `compare-from` (optional, default `master`, openEuler project branch), `compare-branch` (optional, default `master`, openEuler project branch); response body: `ID` (UUID), `status` (`Running` | `Error`).
- Show dependency comparisons: CLI `oos dependence generate show` / `oos dependence generate list`; endpoint `GET /dependence/generate/{UUID}` / `GET /dependence/generate` (sync); no request body; response body: `ID` (UUID), `data` (raw result data file).
- Delete a dependency comparison: CLI `oos dependence generate delete`; endpoint `DELETE /dependence/generate/{UUID}` (sync); no request body; response body: `ID` (UUID), `status` (`Error` | `OK`).
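Because creation is asynchronous, a client in Client/Server mode would typically create the analysis and then poll its status. A minimal sketch using the `requests` library, assuming an oos server reachable at the placeholder `http://oos.example.com` and using an illustrative release value (neither is defined by this document):

```python
import time
import requests

BASE = "http://oos.example.com"   # placeholder; not defined by this document

# Kick off an asynchronous dependency analysis for the core packages.
analysis = requests.post(f"{BASE}/dependence/analysis",
                         json={"release": "wallaby", "core": True}).json()
analysis_id = analysis["ID"]

# Poll the documented GET endpoint until the analysis leaves "Running".
while True:
    status = requests.get(f"{BASE}/dependence/analysis/{analysis_id}").json()["status"]
    if status != "Running":
        break
    time.sleep(10)

print(f"analysis {analysis_id} finished with status {status}")
```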
### 2.3 Software SPEC Generation Function

Most of the Python libraries that OpenStack depends on are developer-facing: they provide no user-visible service, only code-level calls. Their RPM content is uniform and fixed in format, which makes them well suited to improving development efficiency through tooling.

#### 2.3.1 SPEC Generation Specification

Writing a spec generally goes through several stages, each with its own requirements:

1. Fill in the routine fields, including Name, Version, Release, Summary, License, and so on; this information comes from the PyPI metadata of the target package.
2. Fill in the sub-package information, including package names, build dependencies, install dependencies, and descriptions; this also comes from the PyPI metadata of the target package. The package name must be explicitly marked as Python, for example with the `python3-` prefix.
3. Fill in the build process, i.e. the %prep, %build, %install, and %check sections; their form is fixed, and it is enough to emit the corresponding RPM macro commands.
4. Package the RPM files: in this stage the bin, lib, doc, and other contents are located by file search and placed into the corresponding directories.

NOTE: besides the general rules, there are a few exceptions that need special mention:

1. If the package name itself already contains the word "python", there is no need to add the `python-` or `python3-` prefix again.
2. In the build and install stages, the macro differs with the package's own installation method, e.g. `%py3_build` or `pyproject_build`; this needs manual review.
3. If the package contains compiled code such as C, the `BuildArch: noarch` keyword must be removed, and in the %files stage pay attention to the difference between the RPM macros `%{python3_sitelib}` and `%{python3_sitearch}`.
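Stage 1 above pulls the routine fields from PyPI. A minimal sketch of that lookup using the public JSON API at pypi.org; the `python-` naming here is only illustrative, not the template oos actually uses:

```python
import requests

def routine_fields(package):
    """Fetch Name/Version/Summary/License for a spec header from PyPI."""
    info = requests.get(f"https://pypi.org/pypi/{package}/json").json()["info"]
    return {
        "Name": f"python-{info['name']}",   # prefix choice is illustrative
        "Version": info["version"],
        "Summary": info["summary"],
        "License": info["license"],
    }

for field, value in routine_fields("oslo.config").items():
    print(f"{field}: {value}")
```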
#### 2.3.2 API and CLI Definitions

- Create a SPEC: CLI `oos spec create`; endpoint `POST /spec` (async); request body: `name` (required, String), `version` (optional, default `"latest"`, String), `arch` (optional, default `False`, Boolean), `check` (optional, default `True`, Boolean), `pyproject` (optional, default `False`, Boolean); response body: `ID` (UUID), `status` (`Running` | `Error`).
- Show SPECs: CLI `oos spec show` / `oos spec list`; endpoint `GET /spec/{UUID}` / `GET /spec/` (sync); no request body; response body: `ID` (UUID), `status` (`Running` | `Error` | `OK`).
- Update a SPEC: CLI `oos spec update`; endpoint `POST /spec/{UUID}` (async); request body: `name` (required, String), `version` (optional, default `"latest"`, String); response body: `ID` (UUID), `status` (`Running` | `Error`).
- Delete a SPEC: CLI `oos spec delete`; endpoint `DELETE /spec/{UUID}` (sync); no request body; response body: `ID` (UUID), `status` (`Error` | `OK`).

### 2.4 Automated Deployment and Test Function

OpenStack has diverse deployment scenarios, a complex deployment process, and a high technical bar for deployment. To address the problems of a high entry barrier, low efficiency, and heavy labor, the openEuler OpenStack development platform needs to provide automated deployment and test functions.

- Automated deployment: provide one-step deployment of OpenStack on openEuler, supporting different architectures, services, and scenarios, and provide the ability to quickly provision and configure openEuler environments on demand. A plugin mechanism makes it easy for users to extend the supported deployment backends and scenarios.
- Automated testing: provide one-step testing of OpenStack on openEuler, supporting different test scenarios and user-defined tests, standardizing the test report, and supporting upload and persistence of the test results.
#### 2.4.1 Automated Deployment

Automated deployment consists of two parts: openEuler environment preparation and OpenStack deployment.

openEuler environment preparation

Provide the ability to quickly provision openEuler environments. The supported provisioning methods are creating public cloud resources and managing existing environments; the detailed design follows.

NOTE: OpenStack on openEuler is supported primarily as RPM + systemd; container-based deployment is not supported for now.

Creating public cloud resources

Creating public cloud resources mainly targets virtual machines (bare metal is operated on the cloud side and its ecosystem coverage is insufficient, so it is not supported for now). A plugin mechanism provides multi-cloud support; Huawei Cloud is the reference implementation and is implemented first, and support for other clouds will be advanced according to user demand. Depending on the scenario, both all-in-one and three-node topologies are supported.

- Create an environment: CLI `oos env create`; endpoint `POST /environment` (async); request body: `name` (required, String), `type` (required, `all-in-one` | `cluster`), `release` (required, openEuler release), `flavor` (required, `small` | `medium` | `large`), `arch` (required, `x86` | `arm64`); response body: `ID` (UUID), `status` (`Running` | `Error`).
- Query environments: CLI `oos env list`; endpoint `GET /environment` (async); no request body; response body: `ID` (UUID), `Provider` (String), `Name` (String), `IP` (IP address), `Flavor` (`small` | `medium` | `large`), `openEuler_release` (String), `OpenStack_release` (String), `create_time` (time).
- Delete an environment: CLI `oos env delete`; endpoint `DELETE /environment/{UUID}` (sync); no request body; response body: `ID` (UUID), `status` (`Error` | `OK`).

Managing existing environments

Users can also deploy OpenStack directly on an existing openEuler environment by bringing it under the platform's management. Once managed, the environment can be queried or deleted just like a created one.

- Manage an environment: CLI `oos env manage`; endpoint `POST /environment/manage` (sync); request body: `name` (required, String), `ip` (required, IP address), `release` (required, openEuler release), `password` (required, String); response body: `ID` (UUID), `status` (`Error` | `OK`).

OpenStack deployment

Provide the ability to deploy a specified OpenStack version on a created or managed openEuler environment.
- Deploy OpenStack: CLI `oos env setup`; endpoint `POST /environment/setup` (async); request body: `target` (required, environment UUID), `release` (required, OpenStack release); response body: `ID` (UUID), `status` (`Running` | `Error`).
- Initialize OpenStack resources: CLI `oos env init`; endpoint `POST /environment/init` (async); request body: `target` (required, environment UUID); response body: `ID` (UUID), `status` (`Running` | `Error`).
- Uninstall a deployed OpenStack: CLI `oos env clean`; endpoint `POST /environment/clean` (async); request body: `target` (required, environment UUID); response body: `ID` (UUID), `status` (`Running` | `Error`).

#### Automated Testing

Once an environment has been deployed successfully, the SIG development platform provides automated testing on top of the deployed OpenStack environment. The main points are:

- OpenStack itself ships a complete test framework covering unit tests and functional tests.
- Unit tests are already covered by the RPM specs described in section 2.3: the %check stage of the spec defines how each project runs its unit tests, and in most cases adding `pytest` or `stestr` is enough.
- Functional tests are provided by the OpenStack Tempest service. During the `oos env init` stage of the automated deployment described above, oos automatically installs Tempest and generates a default configuration file.
- Run tests: CLI `oos env test`; endpoint `POST /environment/test` (async); request body: `target` (required, environment UUID); response body: `ID` (UUID), `status` (`Running` | `Error`).
- After the tests finish, oos produces a test report; by default it uses the `subunit2html` tool to generate the Tempest test results as an HTML file.
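Putting the endpoints above together, an end-to-end run in Client/Server mode would create an environment, deploy a release onto it, initialize resources, and run Tempest. The sketch below only strings the documented endpoints together; the base URL, the literal release and flavor values, and the handling of asynchronous completion are illustrative assumptions:

```python
import requests

BASE = "http://oos.example.com"   # placeholder; not defined by this document

# 1. Create an all-in-one openEuler environment (asynchronous call).
env = requests.post(f"{BASE}/environment", json={
    "name": "wallaby-ci", "type": "all-in-one",
    "release": "openEuler-22.03-LTS", "flavor": "medium", "arch": "x86",
}).json()["ID"]

# 2. Deploy OpenStack, initialize resources, then run Tempest.
#    These calls are also asynchronous; a real client would wait for each
#    step to finish (e.g. by watching `oos env list`) before the next one.
for path, body in [("/environment/setup", {"target": env, "release": "wallaby"}),
                   ("/environment/init",  {"target": env}),
                   ("/environment/test",  {"target": env})]:
    requests.post(f"{BASE}{path}", json=body)
```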
### 2.5 openEuler Development Automation

OpenStack involves a large number of packages, and as releases evolve and more services are supported, the list of packages the SIG maintains keeps growing. To cut down repetitive development work, oos also wraps some easy-to-use automation for the code development platform, such as automatic code submission based on Gitee. The functions are grouped as follows:

```none
┌───────────────────────────────────────────────┐
│                  Code Action                  │
└──────┬─────────────────┬─────────────────┬────┘
       ▼                 ▼                 ▼
┌─────────────┐  ┌───────────────┐  ┌─────────────────────┐
│ Repo Action │  │ Branch Action │  │ Pull Request Action │
└─────────────┘  └───────────────┘  └─────────────────────┘
```

Repo Action provides automation related to package repositories:

- Automatic repository creation: CLI `oos repo create`; endpoint `POST /repo` (async); request body: `project` (required, String), `repo` (required, String), `push` (optional, default `False`, Boolean); response body: `ID` (UUID), `status` (`Running` | `Error`).

Branch Action provides automation related to package branches:

- Automatic branch creation: CLI `oos repo branch-create`; endpoint `POST /repo/branch` (async); request body: `branches` (required) containing `branch-name` (required, String), `branch-type` (optional, default `None`, `protected`), `parent-branch` (required, String); response body: `ID` (UUID), `status` (`Running` | `Error`).

Pull Request Action provides automation related to pull requests:

- Add a PR comment, so that users can post routine comments such as `retest` or `/lgtm`: CLI `oos repo pr-comment`; endpoint `POST /repo/pr/comment` (sync); request body: `repo` (required, String), `pr_number` (required, Int), `comment` (required, String); response body: `ID` (UUID), `status` (`OK` | `Error`).
- Fetch all PRs of the SIG, so that maintainers can see the SIG's current development status and review more efficiently: CLI `oos repo pr-fetch`; endpoint `POST /repo/pr/fetch` (async); request body: `repo` (optional, default `None`, List[String]); response body: `ID` (UUID), `status` (`Running` | `Error`).
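As a small illustration of the Pull Request Action, the snippet below posts a `/lgtm` comment through the documented endpoint; the base URL, repository name, and PR number are placeholders, not values defined here:

```python
import requests

BASE = "http://oos.example.com"   # placeholder; not defined by this document

# Post a routine review comment on a pull request via the Code Action API.
resp = requests.post(f"{BASE}/repo/pr/comment", json={
    "repo": "openstack-nova",     # illustrative repository name
    "pr_number": 42,              # illustrative PR number
    "comment": "/lgtm",
})
print(resp.json()["status"])      # "OK" or "Error"
```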
## 3. Quality, Security, and Compliance

The software the SIG open-sources must satisfy the openEuler community's requirements for the software it hosts, and must also satisfy the OpenStack community's release criteria.

### 3.1 Quality and Security

Software quality (serviceability)

- The corresponding code must include unit tests with coverage no lower than 80%.
- End-to-end functional tests must be provided, covering all the interfaces above as well as the core scenarios.
- A CI/CD pipeline is built on the openEuler community CI; every pull request must pass CI to guarantee code quality, releases are published regularly, and the interval between releases is no more than 3 months.
- Problems found and reported by users are handled through the Gitee issue system, with a closure rate above 80% and a closure cycle of no more than one week.

Software security

- Data security: the software stays offline throughout, and persistent storage contains no sensitive user information.
- Network security: in the REST architecture, OOS communicates over HTTP, but the software is designed for use inside an internal network and should not be exposed on a public IP; if that is unavoidable, an access IP whitelist is recommended.
- System security: based on the openEuler security mechanisms, CVE fixes and security patches are released regularly.
- Application-layer security: not applicable; no application-level security services such as password policies or access control are provided.
- Management security: the software provides log generation and a periodic backup mechanism so that users can audit regularly.

Reliability

- The software targets OpenStack development activity in the openEuler community and does not involve production services or commercial deployment; all code is open and transparent and contains no proprietary functionality or code. Features such as node redundancy and disaster-recovery backup are therefore not provided.

### 3.2 Compliance

License compliance

- The platform is licensed under the Apache 2.0 License, which does not restrict downstream forks from going closed-source or commercial, but downstream software must state the origin of the code and keep the original license.
If non-compliant source code is found, the SIG has both the right and the obligation to remove it promptly, without needing the contributor's permission, and may bar the non-compliant code or its author from further contributions. A developer who wants to contribute code that is not yet public must first follow their own company's open-source process and rules, and then contribute the code publicly in accordance with the openEuler community open-source guidelines.

## 4. Implementation Plan

| Time | Content | Status |
|:----:|---------|:------:|
| 2021.06 | Finish the overall software framework and the CLI built-in mechanism, with at least one API usable | Done |
| 2021.12 | All features of the CLI built-in mechanism usable | Done |
| 2022.06 | Harden quality, guarantee the features, and formally introduce OOS into the openEuler OpenStack community development workflow | Done |
| 2022.12 | Keep improving OOS usability and robustness, raise automation coverage above 80%, reduce development effort | Done |
| 2023.06 | Complete the REST framework and the CI/CD pipeline, enrich the plugin mechanism, introduce more backend support | Working in progress |
| 2023.12 | Complete the front-end GUI | Planning |

# High/Low Priority Virtual Machine Co-location

Mixed deployment of virtual machines means scheduling or migrating VMs with different CPU, IO and memory requirements onto the same compute node, so that the node's resources are fully used. On the single-host level, resource scheduling and allocation distinguish between high and low priority: when a high-priority VM and a low-priority VM compete for resources, the resources are allocated to the former first, strictly guaranteeing its QoS.

There are several mixed-deployment scenarios, for example dynamically adjusting a node's resources through dynamic resource scheduling, or redistributing VMs across nodes according to users' usage patterns. High/low priority VM scheduling is one way to implement mixed deployment.

Introducing high/low VM priority into OpenStack Nova can satisfy the mixed-deployment requirements to a certain extent. This document focuses on the VM creation path of OpenStack Nova and describes the design and implementation of high/low priority VM scheduling.

## Implementation Approach
The high/low priority concept is introduced into Nova's VM creation and migration flows, and the VM object gains a priority attribute. During scheduling, a high-priority VM is placed on a node with sufficient resources whenever possible; such a node must at least guarantee that memory is not oversubscribed and that the CPUs used by high-priority VMs are not oversubscribed.

The feature is implemented on top of OpenStack Yoga and shipped in the openEuler 22.09 innovation release; it is also introduced into the Train version of openEuler 22.03 LTS SP1.

## Overall Architecture

Users can specify the priority attribute when creating a flavor or a VM. The priority attribute does not affect Nova's existing resource model or node scheduling policy: Nova still selects compute nodes and creates VMs through its normal flow.

The high/low priority feature mainly affects the resource allocation policy on the single host after the VM has been created. When a high-priority VM and a low-priority VM compete for resources, the resources are allocated to the former first, strictly guaranteeing its QoS.

Nova changes for this feature:

1. The VM object and the flavor gain a high/low priority attribute. To match the business scenario, the high-priority attribute can only be set on pinned (dedicated-CPU) VMs and the low-priority attribute only on non-pinned VMs.
2. For a VM that carries a priority attribute, the libvirt XML is modified so that the per-host QoS management component (named Skylark) can detect it and automatically perform resource allocation and QoS management.
3. The CPU binding range of low-priority VMs changes, so that they can make full use of the resources left idle by high-priority VMs.

## Resource Model

- The VM object gains an optional attribute `priority`, which can be set to `high` or `low`.
- The flavor `extra_specs` gain a `hw:cpu_priority` field that marks a flavor as high or low priority; its value is `high` or `low`.
- Parameter restrictions and rules:
  - `priority=high` must be used together with `hw:cpu_policy=dedicated`, otherwise an error is reported.
  - `priority=low` must be used together with `hw:cpu_policy=shared` (the default), otherwise an error is reported.
- Both the VM-level and the flavor-level priority are optional. If neither is set, the VM is an ordinary VM; if both are set, the VM object's priority takes precedence.
- Ordinary VMs can coexist with priority-tagged VMs, because the priority attribute does not affect Nova's existing resource model or scheduling policy. When an ordinary VM competes with a high-priority VM for resources, the Skylark component does not intervene; when an ordinary VM competes with a low-priority VM, Skylark gives the ordinary VM's resource allocation precedence.

## API

The optional parameter `os:scheduler_hints.priority` of the server creation API can be set to `high` or `low` to set the VM object's priority.

```
POST v2/servers   (v2.1 is the default version)
{
    "OS-SCH-HNT:scheduler_hints": {"priority": "high"}
}
```
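For illustration, the same attributes can be set from the command line with the standard `openstack` client. In the sketch below the flavor, image and network names are placeholders; only the `hw:cpu_policy`/`hw:cpu_priority` properties and the `priority` scheduler hint come from this specification.

```shell
# Hypothetical names (high.prio, cirros, demo-net) are placeholders; the
# hw:cpu_priority property and the priority hint are the fields defined above.
openstack flavor create high.prio --vcpus 4 --ram 4096 --disk 10
openstack flavor set high.prio \
    --property hw:cpu_policy=dedicated \
    --property hw:cpu_priority=high

openstack server create high-prio-vm \
    --flavor high.prio \
    --image cirros \
    --network demo-net \
    --hint priority=high
```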
## Scheduler

Unchanged.

## Compute

### Resource reporting

Unchanged.

### Resource allocation and binding

When a VM with a priority is created, CPUs are assigned according to the `priority` flag:

- A high-priority VM must be a pinned (dedicated-CPU) VM; its vCPUs are bound one-to-one to the CPUs specified in `cpu_dedicated_set`.
- A low-priority VM must be a non-pinned VM; by default it floats over the CPUs specified in `cpu_shared_set`.

In addition, a new option `cpu_priority_mix_enable` is added to the `[compute]` section of `nova.conf`, with a default value of `False`. When it is set to `True`, low-priority VMs may also use the CPUs bound by high-priority VMs, i.e. a low-priority VM floats over the CPUs specified by both `cpu_shared_set` and `cpu_dedicated_set`.
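To make the binding rules concrete, here is a hedged `nova-compute` configuration sketch. `cpu_dedicated_set`, `cpu_shared_set` and `cpu_allocation_ratio` are existing Nova options, `cpu_priority_mix_enable` is the option added by this feature, and the core ranges simply match the 14-core example discussed below.

```ini
# Sketch only; the core ranges match the 14-core example below.
[compute]
# CPUs pinned one-to-one by high-priority (dedicated) VMs
cpu_dedicated_set = 0-11
# CPUs that low-priority (shared) VMs float over by default
cpu_shared_set = 12-13
# Set to True to let low-priority VMs also float over cpu_dedicated_set
cpu_priority_mix_enable = False

[DEFAULT]
# Overcommit ratio; with this feature it only matters for the shared cores
cpu_allocation_ratio = 8.0
```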
### Virtual machine XML

At creation time, VMs with a priority are labelled according to the `priority` flag. A new attribute fragment is added to the libvirt XML, with the two values `/high_prio_machine` and `/low_prio_machine` marking high- and low-priority VMs respectively. The fragment has no effect inside Nova itself; it only tells the Skylark QoS service whether the VM is high or low priority.

## Example

Assume a compute node with 14 cores, configured with `cpu_dedicated_set=0-11` (12 cores), `cpu_shared_set=12-13` (2 cores) and `cpu_allocation_ratio=8`. Then:

- High-priority VMs: the scheduler sees 12 usable cores and the compute host can bind 12 cores, consistent with Nova's original logic.
- Low-priority VMs: the scheduler sees 2 * 8 = 16 usable cores and the compute host can bind 2 cores (when `cpu_priority_mix_enable=False`), consistent with Nova's original logic.
- Low-priority VMs: the scheduler sees 2 * 8 = 16 usable cores and the compute host can bind 2 + 12 cores (when `cpu_priority_mix_enable=True`), which differs from Nova's original logic.

## Configuration Recommendations

First decide on a global overcommit ratio and an extreme overcommit ratio.

- Definition of the global overcommit ratio: the ratio of all allocatable vCPUs (high- plus low-priority) to all usable physical cores. It is a derived, theoretical value; in the example above it is (12 + 2 * 8) / 14 = 2.
- Meaning of the global overcommit ratio: in the high/low priority scenario it mainly determines the QoS of low-priority VMs under normal conditions (when the vCPUs of high-priority VMs do not all spike at the same time). A reasonable value reduces the cases where the underlying resources are sufficient but scheduling still fails.
- Definition of the extreme overcommit ratio: this is simply `cpu_allocation_ratio`; it only affects how much the shared cores can be overcommitted.
- Meaning of the extreme overcommit ratio: in the high/low priority scenario it mainly determines the QoS of low-priority VMs under extreme conditions (when the vCPUs of all high-priority VMs spike at the same time).

After choosing a global and an extreme overcommit ratio that match the workload characteristics and QoS goals, configure `cpu_dedicated_set` and `cpu_shared_set` according to the following formula:

```
desired global overcommit ratio = (extreme overcommit ratio * shared cores + dedicated cores) / total cores of the compute node
```

Taking the compute node above as an example (14 cores in total) and assuming an extreme overcommit ratio of 8:

```
with 12 dedicated cores and 2 shared cores:  desired global overcommit ratio = (8*2 + 12) / 14 = 2
with 4 dedicated cores and 10 shared cores:  desired global overcommit ratio = (8*10 + 4) / 14 = 6
```
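The formula can also be used the other way around: given a target global overcommit ratio and a chosen extreme overcommit ratio, enumerate the dedicated/shared splits of a node and pick the closest one. The short sketch below only illustrates the arithmetic above; it is not part of Nova.

```python
# Illustration of the formula above: choose a dedicated/shared split of a node's
# cores whose global overcommit ratio is closest to a target value.
def global_overcommit(dedicated: int, shared: int, extreme_ratio: float) -> float:
    return (extreme_ratio * shared + dedicated) / (dedicated + shared)


def best_split(total_cores: int, extreme_ratio: float, target: float) -> tuple[int, int]:
    """Return the (dedicated, shared) split whose global overcommit is closest to target."""
    candidates = ((d, total_cores - d) for d in range(total_cores + 1))
    return min(candidates,
               key=lambda ds: abs(global_overcommit(*ds, extreme_ratio) - target))


if __name__ == "__main__":
    # The 14-core node from the example above.
    print(best_split(total_cores=14, extreme_ratio=8, target=2))  # -> (12, 2)
    print(best_split(total_cores=14, extreme_ratio=8, target=6))  # -> (4, 10)
```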
## Development Schedule

Developers:

- Wang Xiyuan wangxiyuan1007@gmail.com
- Guo Lei guolei_yewu@cmss.chinamobile.com
- Ma Ganlin maganlin_yewu@cmss.chinamobile.com
- Han Guangyu hanguangyu@uniontech.com
- Zhang Ying zhangy1317@foxmail.com
- Zhang Fan zh.f@outlook.com

Milestones:

- 2022-04-01 to 2022-05-30: complete development
- 2022-06-01 to 2022-07-30: testing, integration and code refresh
- 2022-08-01 to 2022-08-30: complete the RPM package builds
- 2022-09-30: introduced into openEuler 22.09 (Yoga)
- 2022-12-30: introduced into openEuler 22.03 LTS SP1 (Train)
# openEuler 20.03 LTS SP2 Test Report

Copyright © 2021 openEuler Community

Your copying, use, modification and distribution of this document are governed by the Creative Commons Attribution-ShareAlike 4.0 International Public License (CC BY-SA 4.0). A summary of CC BY-SA 4.0 (which is not a substitute for the license) is available at https://creativecommons.org/licenses/by-sa/4.0/ , and the full license text at https://creativecommons.org/licenses/by-sa/4.0/legalcode .

Revision history:

| Date | Revision | Description | Author |
|------|:--------:|-------------|--------|
| 2021-6-16 | 1 | Initial draft | Wang Xiyuan |
| 2021-6-17 | 2 | Added the Rocky test report | Huang Tianhua |

Keywords: OpenStack

Abstract: openEuler 20.03 LTS SP2 provides RPM packages for OpenStack Queens and Rocky, so that users can deploy OpenStack quickly.

Abbreviations:

| Abbreviation | Full name | Meaning |
|--------------|-----------|---------|
| CLI | Command Line Interface | Command-line tool |
| ECS | Elastic Cloud Server | Elastic cloud server |

## 1 Feature Overview

The openEuler 20.03 LTS SP2 release provides OpenStack Queens and Rocky RPM packages, covering the following projects and the CLI of each project: Keystone, Glance, Nova, Neutron, Cinder, Ironic, Trove, Kolla, Horizon and Tempest.
## 2 Feature Test Information

This section describes the version under test, the test period and test rounds, and the hardware the tests depend on.

| Version | Test start | Test end |
|---------|:---------:|:--------:|
| openEuler 20.03 LTS SP2 (installation and deployment test of every OpenStack component) | 2021.6.1 | 2021.6.7 |
| openEuler 20.03 LTS SP2 (basic OpenStack functional test: create, delete, update and query of VMs, volumes and network resources) | 2021.6.8 | 2021.6.10 |
| openEuler 20.03 LTS SP2 (OpenStack Tempest integration test) | 2021.6.11 | 2021.6.15 |
| openEuler 20.03 LTS SP2 (regression test of found issues) | 2021.6.16 | 2021.6.17 |

Hardware environment used for the feature tests:

| Hardware model | Hardware configuration | Remarks |
|----------------|------------------------|---------|
| Huawei Cloud ECS | Intel Cascade Lake 3.0GHz, 8U16G | Huawei Cloud x86 VM |
| Huawei Cloud ECS | Huawei Kunpeng 920 2.6GHz, 8U16G | Huawei Cloud arm64 VM |
| TaiShan 200-2280 | Kunpeng 920, 48 Core@2.6GHz*2; 256GB DDR4 RAM | ARM server |

## 3 Test Conclusions

### 3.1 Overall conclusion

For OpenStack Queens, 1164 Tempest cases were executed in total, mainly covering API and functional tests, together with a 7*24 long-run stability test. 52 cases were skipped (all of them features or interfaces already deprecated in Queens, such as Keystone V1 and Cinder V1), 3 cases failed (problems in the test cases themselves), and the remaining 1109 cases all passed. The issues found have been resolved and the regression passed, leaving no residual risk; the overall quality is good.

For OpenStack Rocky, 1197 Tempest cases were executed in total, mainly covering API and functional tests, together with a 7*24 long-run stability test. 105 cases were skipped (features or interfaces already deprecated in Rocky, such as Keystone V1 and Cinder V1, plus the unsupported barbican project), 1 case failed, and the remaining 1091 cases all passed. The issues found have been resolved and the regression passed, leaving no residual risk; the overall quality is good.

| Test activity | Tempest integration test |
|---------------|--------------------------|
| Interface test | Full API coverage |
| Functional test | The Queens version covers all 1164 relevant Tempest cases: 52 skipped, 3 failed, all others passed. |
| Functional test | The Rocky version covers all 1197 relevant Tempest cases: 105 skipped, 1 failed, all others passed. |
| Test activity | Functional test |
|---------------|-----------------|
| Functional test | Management operations on virtual machines (KVM, QEMU), storage (LVM, NFS and Ceph backends) and network resources (linuxbridge, openvswitch) work normally. |

### 3.2 Constraints

This round of testing did not cover the features and interfaces explicitly deprecated in OpenStack Queens and Rocky, so there is no guarantee that the deprecated features and interfaces (the skipped cases mentioned above) work on openEuler 20.03 LTS SP2.

### 3.3 Residual issue analysis

#### 3.3.1 Impact of residual issues and workarounds

| No. | Description | Severity | Impact and workaround | Status |
|:---:|-------------|:--------:|-----------------------|:------:|
| 1 | The targetcli package conflicts with python2-rtslib-fb and cannot be installed | Medium | Use tgtadm instead of the lioadm command | Being resolved |
| 2 | python2-flake8 depends on an old pyflakes, which makes `yum update` emit warnings | Low | Upgrade packages with `yum update --nobest` | Being resolved |
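Applied on a deployment, the two workarounds from the table above roughly look as follows. This is only a sketch of what the report describes; the exact Cinder option for switching the iSCSI helper depends on the release, so it is left as a comment rather than a concrete setting.

```shell
# Workaround for issue 2: let yum choose a non-best candidate instead of failing
yum update --nobest

# Workaround for issue 1: since targetcli conflicts with python2-rtslib-fb and
# cannot be installed, configure the Cinder LVM backend to drive iSCSI targets
# with tgtadm instead of lioadm (the option name depends on the Cinder release).
```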
#### 3.3.2 Issue statistics

| | Total | Critical | Major | Minor | Trivial |
|:---:|:---:|:---:|:---:|:---:|:---:|
| Count | 14 | 3 | 6 | 5 | |
| Percentage | 100 | 21.4 | 42.8 | 35.8 | |

## 4 Test Execution

### 4.1 Test execution statistics

This section gives overall statistics for the feature testing based on the test cases and their actual execution; the figures can be broken down by the test rounds listed in chapter 2.

| Version | Test cases | Execution result | Issues found |
|---------|:---------:|------------------|:------------:|
| openEuler 20.03 LTS SP2, OpenStack Queens | 1164 | 1109 passed, 52 skipped, 3 failed | 7 |
| openEuler 20.03 LTS SP2, OpenStack Rocky | 1197 | 1001 passed, 101 skipped | 7 |

### 4.2 Suggestions for further testing

- Cover the main performance tests
- Cover more driver/plugin tests

## 5 Attachments

N/A
# openEuler 20.03 LTS SP3 Test Report

Copyright © 2021 openEuler Community

Your copying, use, modification and distribution of this document are governed by the Creative Commons Attribution-ShareAlike 4.0 International Public License (CC BY-SA 4.0). A summary of CC BY-SA 4.0 (which is not a substitute for the license) is available at https://creativecommons.org/licenses/by-sa/4.0/ , and the full license text at https://creativecommons.org/licenses/by-sa/4.0/legalcode .
Revision history:

| Date | Revision | Description | Author |
|------|:--------:|-------------|--------|
| 2021-12-10 | 1 | Initial draft, including the Train test results | Huang Tianhua |

Keywords: OpenStack

Abstract: openEuler 20.03 LTS SP3 provides RPM packages for OpenStack Queens, Rocky and Train, so that users can deploy OpenStack quickly.

Abbreviations:

| Abbreviation | Full name | Meaning |
|--------------|-----------|---------|
| CLI | Command Line Interface | Command-line tool |
| ECS | Elastic Cloud Server | Elastic cloud server |

## 1 Feature Overview

The openEuler 20.03 LTS SP2 release already provides OpenStack Queens and Rocky RPM packages, covering Keystone, Glance, Nova, Neutron, Cinder, Ironic, Trove, Kolla, Horizon, Tempest and the CLI of each project.

The openEuler 20.03 LTS SP3 release adds OpenStack Train RPM packages, covering Keystone, Glance, Placement, Nova, Neutron, Cinder, Ironic, Trove, Kolla, Heat, Aodh, Ceilometer, Gnocchi, Swift, Horizon, Tempest and the CLI of each project.

## 2 Feature Test Information

This section describes the version under test, the test period and test rounds, and the hardware the tests depend on.

| Version | Test start | Test end |
|---------|:---------:|:--------:|
| openEuler 20.03 LTS SP3 RC1 (installation and deployment test of every OpenStack Train component) | 2021.11.25 | 2021.11.30 |
| openEuler 20.03 LTS SP3 RC1 (basic OpenStack Train functional test: create, delete, update and query of VMs, volumes and network resources) | 2021.12.1 | 2021.12.2 |
| openEuler 20.03 LTS SP3 RC2 (OpenStack Train Tempest integration test) | 2021.12.3 | 2021.12.9 |
| openEuler 20.03 LTS SP3 RC3 (OpenStack Train issue regression test) | 2021.12.10 | 2021.12.12 |
| openEuler 20.03 LTS SP3 RC3 (installation and deployment test of every OpenStack Queens & Rocky component) | 2021.12.10 | 2021.12.13 |
| openEuler 20.03 LTS SP3 RC3 (basic OpenStack Queens & Rocky functional test: create, delete, update and query of VMs, volumes and network resources) | 2021.12.14 | 2021.12.16 |
| openEuler 20.03 LTS SP3 RC4 (OpenStack Queens & Rocky Tempest integration test) | 2021.12.17 | 2021.12.20 |
| openEuler 20.03 LTS SP3 RC4 (OpenStack Queens & Rocky issue regression test) | 2021.12.21 | 2021.12.23 |
Hardware environment used for the feature tests:

| Hardware model | Hardware configuration | Remarks |
|----------------|------------------------|---------|
| Huawei Cloud ECS | Intel Cascade Lake 3.0GHz, 8U16G | Huawei Cloud x86 VM |
| Huawei Cloud ECS | Huawei Kunpeng 920 2.6GHz, 8U16G | Huawei Cloud arm64 VM |
| TaiShan 200-2280 | Kunpeng 920, 48 Core@2.6GHz*2; 256GB DDR4 RAM | ARM server |

## 3 Test Conclusions

### 3.1 Overall conclusion

For OpenStack Queens, 1164 Tempest cases were executed in total, mainly covering API and functional tests. 52 cases were skipped (all of them features or interfaces already deprecated in Queens, such as Keystone V1 and Cinder V1), 3 cases failed (problems in the test cases themselves), and the remaining 1109 cases all passed. The issues found have been resolved and the regression passed, leaving no residual risk; the overall quality is good.

For OpenStack Rocky, 1197 Tempest cases were executed in total, mainly covering API and functional tests. 101 cases were skipped (features or interfaces already deprecated in Rocky, such as Keystone V1 and Cinder V1), and the remaining 1096 cases all passed. The issues found have been resolved and the regression passed, leaving no residual risk; the overall quality is good.

For OpenStack Train, the basic functions of every component are normal except Cyborg (installation and deployment work, but the feature is unusable). 1179 Tempest cases were executed in total, mainly covering API and functional tests. 115 cases were skipped (deprecated features or interfaces such as Keystone V1 and Cinder V1, plus some complex functions such as file injection and VM configuration), and the remaining 1064 cases all passed. A total of 14 issues were found (including 1 libvirt issue); all have been resolved and the regression passed, leaving no residual risk; the overall quality is good.

| Test activity | Tempest integration test |
|---------------|--------------------------|
| Interface test | Full API coverage |
| Functional test | The Queens version covers all 1164 relevant Tempest cases: 52 skipped, 3 failed, all others passed. |
| Functional test | The Rocky version covers all 1197 relevant Tempest cases: 101 skipped, all others passed. |
| Functional test | The Train version covers all 1179 relevant Tempest cases: 115 skipped, all others passed. |

| Test activity | Functional test |
|---------------|-----------------|
| Functional test | Management operations on virtual machines (KVM, QEMU), storage (LVM) and network resources (linuxbridge) work normally. |
### 3.2 Constraints

This round of testing did not cover the features and interfaces explicitly deprecated in OpenStack Queens and Rocky, so there is no guarantee that the deprecated features and interfaces (the skipped cases mentioned above) work on openEuler 20.03 LTS SP3; in addition, Cyborg is not usable.

### 3.3 Residual issue analysis

#### 3.3.1 Impact of Queens & Rocky residual issues and workarounds

| No. | Description | Severity | Impact and workaround | Status |
|:---:|-------------|:--------:|-----------------------|:------:|
| 1 | The targetcli package conflicts with python2-rtslib-fb and cannot be installed | Medium | Use tgtadm instead of the lioadm command | Being resolved |
| 2 | python2-flake8 depends on an old pyflakes, which makes `yum update` emit warnings | Low | Upgrade packages with `yum update --nobest` | Being resolved |

#### 3.3.2 Train issue statistics

| | Total | Critical | Major | Minor | Trivial |
|:---:|:---:|:---:|:---:|:---:|:---:|
| Count | 14 | 1 | 6 | 7 | |
| Percentage | 100 | 7.1 | 42.9 | 50 | |

## 4 Test Execution

### 4.1 Test execution statistics

This section gives overall statistics for the feature testing based on the test cases and their actual execution; the figures can be broken down by the test rounds listed in chapter 2.

| Version | Test cases | Execution result | Issues found |
|---------|:---------:|------------------|:------------:|
| openEuler 20.03 LTS SP3, OpenStack Queens | 1164 | 1109 passed, 52 skipped, 3 failed | 0 |
| openEuler 20.03 LTS SP3, OpenStack Rocky | 1197 | 1096 passed, 101 skipped | 0 |
| openEuler 20.03 LTS SP3, OpenStack Train | 1179 | 1064 passed, 115 skipped | 14 |

### 4.2 Suggestions for further testing

- Cover the main performance tests
- Cover more driver/plugin tests

## 5 Attachments

N/A
## 4 Test Execution

### 4.1 Test execution statistics

This section summarises the overall feature test statistics based on the test cases and their actual execution; the figures can also be broken down by the test rounds listed in Chapter 2.

| Release | Test cases | Execution result | Issues found |
|---|---|---|---|
| openEuler 20.03 LTS SP3, OpenStack Queens | 1164 | 1109 passed, 52 skipped, 3 failed | 0 |
| openEuler 20.03 LTS SP3, OpenStack Rocky | 1197 | 1096 passed, 101 skipped | 0 |
| openEuler 20.03 LTS SP3, OpenStack Train | 1179 | 1064 passed, 115 skipped | 14 |

### 4.2 Suggestions for follow-up testing

- Cover the main performance tests.
- Cover more driver/plugin tests.

## 5 Appendix

N/A

# openEuler 22.03 LTS SP1 Test Report

Copyright © 2021 openEuler community.

Copying, use, modification and distribution of this document are governed by the Creative Commons Attribution-ShareAlike 4.0 International Public License (CC BY-SA 4.0). A summary of CC BY-SA 4.0 (which is not a substitute for the license) is available at https://creativecommons.org/licenses/by-sa/4.0/; the full license text is available at https://creativecommons.org/licenses/by-sa/4.0/legalcode.

Revision record:

| Date | Revision | Description | Author |
|---|---|---|---|
| 2022-12-2 | 1 | Initial draft | Wang Xiyuan |

Keywords: OpenStack

Abstract: openEuler 22.03 LTS SP1 provides RPM packages for OpenStack Train and OpenStack Wallaby, so that users can deploy OpenStack quickly.

Abbreviations:

| Abbreviation | Full name | Meaning |
|---|---|---|
| CLI | Command Line Interface | Command-line tool |
| ECS | Elastic Cloud Server | Elastic cloud server |

## 1 Feature Overview

openEuler 22.03 LTS SP1 provides RPM packages for OpenStack Train and OpenStack Wallaby, covering the following projects and the CLI shipped with each of them: Keystone, Neutron, Cinder, Nova, Placement, Glance, Horizon, Aodh, Ceilometer, Cyborg, Gnocchi, Heat, Swift, Ironic, Kolla, Trove, Tempest.

## 2 Feature Test Information

This section describes the versions under test, the test periods and test rounds, and the hardware they depend on.

| Release under test | Test start | Test end |
|---|---|---|
| openEuler 22.03 LTS SP1 RC1 (installation and deployment test of each OpenStack Train component) | 2022.11.23 | 2022.11.29 |
| openEuler 22.03 LTS SP1 RC1 (OpenStack Train basic function test: create/query/update/delete of VM, volume and network resources) | 2022.11.23 | 2022.11.29 |
| openEuler 22.03 LTS SP1 RC2 (OpenStack Train Tempest integration test) | 2022.12.02 | 2022.12.08 |
| openEuler 22.03 LTS SP1 RC3 (OpenStack Train issue regression test) | 2022.12.09 | 2022.12.15 |
| openEuler 22.03 LTS SP1 RC3 (installation and deployment test of each OpenStack Wallaby component) | 2022.12.09 | 2022.12.15 |
| openEuler 22.03 LTS SP1 RC3 (OpenStack Wallaby basic function test: create/query/update/delete of VM, volume and network resources) | 2022.12.09 | 2022.12.15 |
| openEuler 22.03 LTS SP1 RC4 (OpenStack Wallaby Tempest integration test) | 2022.12.16 | 2022.12.20 |
| openEuler 22.03 LTS SP1 RC4 (OpenStack Wallaby issue regression test) | 2022.12.16 | 2022.12.20 |

Hardware environment used for the feature tests:

| Hardware model | Hardware configuration | Notes |
|---|---|---|
| Huawei Cloud ECS | Intel Cascade Lake 3.0 GHz, 8 vCPUs / 16 GB | Huawei Cloud x86 VM |
| Huawei Cloud ECS | Huawei Kunpeng 920 2.6 GHz, 8 vCPUs / 16 GB | Huawei Cloud arm64 VM |
| TaiShan 200-2280 | Kunpeng 920, 48 cores @ 2.6 GHz x 2; 256 GB DDR4 RAM | ARM server |

## 3 Test Conclusion Overview

### 3.1 Overall conclusion

- OpenStack Train: 1354 Tempest cases were executed, mainly covering API and functional tests, and the deployment passed 7x24 long-stability testing. 64 cases were skipped (all functions or interfaces deprecated in Train, such as Keystone V1 and Cinder V1) and 0 failed; the remaining 1290 cases all passed. The issues found have been resolved and verified by regression, with no residual risk. Overall quality is good.
- OpenStack Wallaby: 1164 Tempest cases were executed, mainly covering API and functional tests, and the deployment passed 7x24 long-stability testing. 70 cases were skipped (functions or interfaces deprecated in Wallaby, such as Keystone V1 and Cinder V1, plus the unsupported Barbican project) and 0 failed; the remaining 1094 cases all passed. The issues found have been resolved and verified by regression, with no residual risk. Overall quality is good.

Test activity summary:

| Test activity | Result |
|---|---|
| Tempest integration test (API) | Full API coverage |
| Functional test (Train) | 1354 Tempest cases covered: 64 skipped, 0 failed, all others passed |
| Functional test (Wallaby) | 1164 Tempest cases covered: 70 skipped, 0 failed, all others passed |
| Functional test (resources) | Management operations on VMs (KVM, QEMU), storage (LVM, NFS and Ceph backends) and network resources (Linux bridge, Open vSwitch) work normally |

### 3.2 Constraints

This test round did not cover functions and interfaces that are explicitly deprecated in OpenStack Train and Wallaby, so there is no guarantee that those deprecated functions and interfaces (the skipped cases mentioned above) work on openEuler 22.03 LTS SP1.

### 3.3 Remaining Issue Analysis

#### 3.3.1 Remaining issues, impact and workarounds

| No. | Description | Severity | Impact and workaround | Status |
|---|---|---|---|---|
| N/A | N/A | N/A | N/A | N/A |

#### 3.3.2 Issue statistics

| Total | Critical | Major | Minor | Trivial |
|---|---|---|---|---|
| 2 | 1 | 0 | 1 | 0 |
| 100% | 50% | 0% | 50% | 0% |

Issue links:

- https://gitee.com/openeuler/openstack/issues/I64OL3
- https://gitee.com/openeuler/openstack/issues/I66IEB

## 4 Test Execution

### 4.1 Test execution statistics

This section summarises the overall feature test statistics based on the test cases and their actual execution; the figures can also be broken down by the test rounds listed in Chapter 2.

| Release | Test cases | Execution result | Issues found |
|---|---|---|---|
| openEuler 22.03 LTS SP1, OpenStack Train | 1354 | 1289 passed, 64 skipped, 0 failed | 2 |
| openEuler 22.03 LTS SP1, OpenStack Wallaby | 1164 | 1088 passed, 70 skipped, 0 failed | 1 |

### 4.2 Suggestions for follow-up testing

- Cover the main performance tests.
- Cover more driver/plugin tests.

## 5 Appendix

N/A
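The Tempest figures quoted in these reports (total, skipped, failed) come from integration runs against the deployed cloud. A minimal sketch of such a run, assuming Tempest is already installed and `etc/tempest.conf` has been filled in for the environment; the workspace name and regex are illustrative, not taken from the report:

```shell
# create a Tempest workspace for the deployment under test
tempest init openeuler-verify
cd openeuler-verify
# edit etc/tempest.conf: auth endpoints, credentials, image/flavor/network IDs

# quick smoke pass
tempest run --smoke

# full API + scenario pass, the kind of run behind the case counts in this report
tempest run --regex '(tempest\.api|tempest\.scenario)'
```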
\u6d4b\u8bd5\u6267\u884c"},{"location":"test/openEuler-22.03-LTS-SP1/#41","text":"\u672c\u8282\u5185\u5bb9\u6839\u636e\u6d4b\u8bd5\u7528\u4f8b\u53ca\u5b9e\u9645\u6267\u884c\u60c5\u51b5\u8fdb\u884c\u7279\u6027\u6574\u4f53\u6d4b\u8bd5\u7684\u7edf\u8ba1\uff0c\u53ef\u6839\u636e\u7b2c\u4e8c\u7ae0\u7684\u6d4b\u8bd5\u8f6e\u6b21\u5206\u5f00\u8fdb\u884c\u7edf\u8ba1\u8bf4\u660e\u3002 \u7248\u672c\u540d\u79f0 \u6d4b\u8bd5\u7528\u4f8b\u6570 \u7528\u4f8b\u6267\u884c\u7ed3\u679c \u53d1\u73b0\u95ee\u9898\u5355\u6570 openEuler 22.03 LTS SP1 OpenStack Train 1354 \u901a\u8fc71289\u4e2a\uff0cskip 64\u4e2a\uff0cFail 0\u4e2a 2 openEuler 22.03 LTS SP1 OpenStack Wallaby 1164 \u901a\u8fc71088\u4e2a\uff0cskip 70\u4e2a\uff0cFail 0\u4e2a 1","title":"4.1 \u6d4b\u8bd5\u6267\u884c\u7edf\u8ba1\u6570\u636e"},{"location":"test/openEuler-22.03-LTS-SP1/#42","text":"\u6db5\u76d6\u4e3b\u8981\u7684\u6027\u80fd\u6d4b\u8bd5 \u8986\u76d6\u66f4\u591a\u7684driver/plugin\u6d4b\u8bd5","title":"4.2 \u540e\u7eed\u6d4b\u8bd5\u5efa\u8bae"},{"location":"test/openEuler-22.03-LTS-SP1/#5","text":"N/A","title":"5 \u9644\u4ef6"},{"location":"test/openEuler-22.03-LTS-SP2/","text":"openEuler 22.03 LTS SP2\u6d4b\u8bd5\u62a5\u544a \u00b6 \u7248\u6743\u6240\u6709 \u00a9 2023 openEuler\u793e\u533a \u60a8\u5bf9\u201c\u672c\u6587\u6863\u201d\u7684\u590d\u5236\u3001\u4f7f\u7528\u3001\u4fee\u6539\u53ca\u5206\u53d1\u53d7\u77e5\u8bc6\u5171\u4eab(Creative Commons)\u7f72\u540d\u2014\u76f8\u540c\u65b9\u5f0f\u5171\u4eab4.0\u56fd\u9645\u516c\u5171\u8bb8\u53ef\u534f\u8bae(\u4ee5\u4e0b\u7b80\u79f0\u201cCC BY-SA 4.0\u201d)\u7684\u7ea6\u675f\u3002\u4e3a\u4e86\u65b9\u4fbf\u7528\u6237\u7406\u89e3\uff0c\u60a8\u53ef\u4ee5\u901a\u8fc7\u8bbf\u95eehttps://creativecommons.org/licenses/by-sa/4.0/ \u4e86\u89e3CC BY-SA 4.0\u7684\u6982\u8981 (\u4f46\u4e0d\u662f\u66ff\u4ee3)\u3002CC BY-SA 4.0\u7684\u5b8c\u6574\u534f\u8bae\u5185\u5bb9\u60a8\u53ef\u4ee5\u8bbf\u95ee\u5982\u4e0b\u7f51\u5740\u83b7\u53d6\uff1ahttps://creativecommons.org/licenses/by-sa/4.0/legalcode\u3002 \u4fee\u8ba2\u8bb0\u5f55 \u65e5\u671f \u4fee\u8ba2\u7248\u672c \u4fee\u6539\u63cf\u8ff0 \u4f5c\u8005 2023-06-21 1 \u521d\u7a3f \u738b\u73ba\u6e90 \u5173\u952e\u8bcd\uff1a OpenStack \u6458\u8981\uff1a \u5728 openEuler 22.03 LTS SP2 \u7248\u672c\u4e2d\u63d0\u4f9b OpenStack Train \u3001 OpenStack Wallaby \u7248\u672c\u7684 RPM \u5b89\u88c5\u5305\uff0c\u65b9\u4fbf\u7528\u6237\u5feb\u901f\u90e8\u7f72 OpenStack \u3002 \u7f29\u7565\u8bed\u6e05\u5355\uff1a \u7f29\u7565\u8bed \u82f1\u6587\u5168\u540d \u4e2d\u6587\u89e3\u91ca CLI Command Line Interface \u547d\u4ee4\u884c\u5de5\u5177 ECS Elastic Cloud Server \u5f39\u6027\u4e91\u670d\u52a1\u5668 1 \u7279\u6027\u6982\u8ff0 \u00b6 \u5728 openEuler 22.03 LTS SP2 \u7248\u672c\u4e2d\u63d0\u4f9b OpenStack Train \u3001 OpenStack Wallaby \u7248\u672c\u7684 RPM \u5b89\u88c5\u5305\uff0c\u5305\u62ec\u4ee5\u4e0b\u9879\u76ee\u4ee5\u53ca\u6bcf\u4e2a\u9879\u76ee\u914d\u5957\u7684 CLI \u3002 Keystone Neutron Cinder Nova Placement Glance Horizon Aodh Ceilometer Cyborg Gnocchi Heat Swift Ironic Kolla Trove Tempest Barbican Octavia designate Manila Masakari Mistral Senlin Zaqar 2 \u7279\u6027\u6d4b\u8bd5\u4fe1\u606f \u00b6 \u672c\u8282\u63cf\u8ff0\u88ab\u6d4b\u5bf9\u8c61\u7684\u7248\u672c\u4fe1\u606f\u548c\u6d4b\u8bd5\u7684\u65f6\u95f4\u53ca\u6d4b\u8bd5\u8f6e\u6b21\uff0c\u5305\u62ec\u4f9d\u8d56\u7684\u786c\u4ef6\u3002 \u7248\u672c\u540d\u79f0 \u6d4b\u8bd5\u8d77\u59cb\u65f6\u95f4 \u6d4b\u8bd5\u7ed3\u675f\u65f6\u95f4 openEuler 22.03 LTS SP2 RC1 (OpenStack 
Train\u7248\u672c\u5404\u7ec4\u4ef6\u7684\u5b89\u88c5\u90e8\u7f72\u6d4b\u8bd5) 2023.05.17 2023.05.23 openEuler 22.03 LTS SP2 RC1 (OpenStack Train\u7248\u672c\u57fa\u672c\u529f\u80fd\u6d4b\u8bd5\uff0c\u5305\u62ec\u865a\u62df\u673a\uff0c\u5377\uff0c\u7f51\u7edc\u76f8\u5173\u8d44\u6e90\u7684\u589e\u5220\u6539\u67e5) 2023.05.17 2023.05.23 openEuler 22.03 LTS SP2 RC2 (OpenStack Train\u7248\u672ctempest\u96c6\u6210\u6d4b\u8bd5) 2023.05.24 2023.06.02 openEuler 22.03 LTS SP2 RC3 (OpenStack Train\u7248\u672c\u95ee\u9898\u56de\u5f52\u6d4b\u8bd5) 2023.06.03 2023.06.09 openEuler 22.03 LTS SP2 RC3 (OpenStack Wallaby\u7248\u672c\u5404\u7ec4\u4ef6\u7684\u5b89\u88c5\u90e8\u7f72\u6d4b\u8bd5) 2023.06.03 2023.06.09 openEuler 22.03 LTS SP2 RC3 (OpenStack Wallaby\u57fa\u7248\u672c\u672c\u529f\u80fd\u6d4b\u8bd5\uff0c\u5305\u62ec\u865a\u62df\u673a\uff0c\u5377\uff0c\u7f51\u7edc\u76f8\u5173\u8d44\u6e90\u7684\u589e\u5220\u6539\u67e5) 2023.06.03 2023.06.09 openEuler 22.03 LTS SP2 RC4 (OpenStack Wallaby\u7248\u672ctempest\u96c6\u6210\u6d4b\u8bd5) 2023.06.10 2023.06.16 openEuler 22.03 LTS SP2 RC4 (OpenStack Wallaby\u7248\u672c\u95ee\u9898\u56de\u5f52\u6d4b\u8bd5) 2023.06.10 2023.06.16 \u63cf\u8ff0\u7279\u6027\u6d4b\u8bd5\u7684\u786c\u4ef6\u73af\u5883\u4fe1\u606f \u786c\u4ef6\u578b\u53f7 \u786c\u4ef6\u914d\u7f6e\u4fe1\u606f \u5907\u6ce8 \u534e\u4e3a\u4e91ECS Intel Cascade Lake 3.0GHz 8U16G \u534e\u4e3a\u4e91x86\u865a\u62df\u673a \u534e\u4e3a\u4e91ECS Huawei Kunpeng 920 2.6GHz 8U16G \u534e\u4e3a\u4e91arm64\u865a\u62df\u673a 3 \u6d4b\u8bd5\u7ed3\u8bba\u6982\u8ff0 \u00b6 3.1 \u6d4b\u8bd5\u6574\u4f53\u7ed3\u8bba \u00b6 OpenStack Train \u7248\u672c\uff0c\u5171\u8ba1\u6267\u884c Tempest \u7528\u4f8b 1354 \u4e2a\uff0c\u4e3b\u8981\u8986\u76d6\u4e86 API \u6d4b\u8bd5\u548c\u529f\u80fd\u6d4b\u8bd5\uff0c\u901a\u8fc7 7*24 \u7684\u957f\u7a33\u6d4b\u8bd5\uff0c Skip \u7528\u4f8b 64 \u4e2a\uff08\u5168\u662f OpenStack Train \u7248\u4e2d\u5df2\u5e9f\u5f03\u7684\u529f\u80fd\u6216\u63a5\u53e3\uff0c\u5982Keystone V1\u3001Cinder V1\u7b49\uff09\uff0c\u5931\u8d25\u7528\u4f8b 0 \u4e2a\uff0c\u5176\u4ed6 1290 \u4e2a\u7528\u4f8b\u5168\u90e8\u901a\u8fc7\uff0c\u53d1\u73b0\u95ee\u9898\u5df2\u89e3\u51b3\uff0c\u56de\u5f52\u901a\u8fc7\uff0c\u65e0\u9057\u7559\u98ce\u9669\uff0c\u6574\u4f53\u8d28\u91cf\u826f\u597d\u3002 OpenStack Wallaby \u7248\u672c\uff0c\u5171\u8ba1\u6267\u884c Tempest \u7528\u4f8b 1164 \u4e2a\uff0c\u4e3b\u8981\u8986\u76d6\u4e86API\u6d4b\u8bd5\u548c\u529f\u80fd\u6d4b\u8bd5\uff0c\u901a\u8fc7 7*24 \u7684\u957f\u7a33\u6d4b\u8bd5\uff0c Skip \u7528\u4f8b 70 \u4e2a\uff08\u5168\u662f OpenStack Wallaby \u7248\u4e2d\u5df2\u5e9f\u5f03\u7684\u529f\u80fd\u6216\u63a5\u53e3\uff0c\u5982KeystoneV1\u3001Cinder V1\u7b49\uff0c\u548c\u4e0d\u652f\u6301\u7684barbican\u9879\u76ee\uff09\uff0c\u5931\u8d25\u7528\u4f8b 0 \u4e2a\uff0c\u5176\u4ed6 1094 \u4e2a\u7528\u4f8b\u5168\u90e8\u901a\u8fc7\uff0c\u53d1\u73b0\u95ee\u9898\u5df2\u89e3\u51b3\uff0c\u56de\u5f52\u901a\u8fc7\uff0c\u65e0\u9057\u7559\u98ce\u9669\uff0c\u6574\u4f53\u8d28\u91cf\u826f\u597d\u3002 \u6d4b\u8bd5\u6d3b\u52a8 tempest\u96c6\u6210\u6d4b\u8bd5 \u63a5\u53e3\u6d4b\u8bd5 API\u5168\u8986\u76d6 \u529f\u80fd\u6d4b\u8bd5 Train\u7248\u672c\u8986\u76d6Tempest\u6240\u6709\u76f8\u5173\u6d4b\u8bd5\u7528\u4f8b1354\u4e2a\uff0c\u5176\u4e2dSkip 64\u4e2a\uff0cFail 0\u4e2a\uff0c\u5176\u4ed6\u5168\u901a\u8fc7\u3002 \u529f\u80fd\u6d4b\u8bd5 Wallaby\u7248\u672c\u8986\u76d6Tempest\u6240\u6709\u76f8\u5173\u6d4b\u8bd5\u7528\u4f8b1164\u4e2a\uff0c\u5176\u4e2dSkip 70\u4e2a\uff0cFail 0\u4e2a, 
\u5176\u4ed6\u5168\u901a\u8fc7\u3002 \u6d4b\u8bd5\u6d3b\u52a8 \u529f\u80fd\u6d4b\u8bd5 \u529f\u80fd\u6d4b\u8bd5 \u865a\u62df\u673a\uff08KVM\u3001Qemu)\u3001\u5b58\u50a8\uff08lvm\u3001NFS\u3001Ceph\u540e\u7aef\uff09\u3001\u7f51\u7edc\u8d44\u6e90\uff08linuxbridge\u3001openvswitch\uff09\u7ba1\u7406\u64cd\u4f5c\u6b63\u5e38 3.2 \u7ea6\u675f\u8bf4\u660e \u00b6 \u672c\u6b21\u6d4b\u8bd5\u6ca1\u6709\u8986\u76d6 OpenStack Train \u3001 OpenStack Wallaby \u7248\u4e2d\u660e\u786e\u5e9f\u5f03\u7684\u529f\u80fd\u548c\u63a5\u53e3\uff0c\u56e0\u6b64\u4e0d\u80fd\u4fdd\u8bc1\u5df2\u5e9f\u5f03\u7684\u529f\u80fd\u548c\u63a5\u53e3\uff08\u524d\u6587\u63d0\u5230\u7684Skip\u7684\u7528\u4f8b\uff09\u5728 openEuler 22.03 LTS SP2 \u4e0a\u80fd\u6b63\u5e38\u4f7f\u7528\u3002 3.3 \u9057\u7559\u95ee\u9898\u5206\u6790 \u00b6 3.3.1 \u9057\u7559\u95ee\u9898\u5f71\u54cd\u4ee5\u53ca\u89c4\u907f\u63aa\u65bd \u00b6 \u95ee\u9898\u5355\u53f7 \u95ee\u9898\u63cf\u8ff0 \u95ee\u9898\u7ea7\u522b \u95ee\u9898\u5f71\u54cd\u548c\u89c4\u907f\u63aa\u65bd \u5f53\u524d\u72b6\u6001 N/A N/A N/A N/A N/A 3.3.2 \u95ee\u9898\u7edf\u8ba1 \u00b6 \u95ee\u9898\u603b\u6570 \u4e25\u91cd \u4e3b\u8981 \u6b21\u8981 \u4e0d\u91cd\u8981 \u6570\u76ee 12 0 5 6 1 \u767e\u5206\u6bd4 100 0 42 50 8 ISSUE Link https://gitee.com/src-openeuler/python-flask-restful/issues/I7ABYH https://gitee.com/src-openeuler/python-zVMCloudConnector/issues/I79KJO https://gitee.com/src-openeuler/openvswitch/issues/I79K23 https://gitee.com/src-openeuler/openstack-nova/issues/I79JC8 https://gitee.com/src-openeuler/python-rtslib-fb/issues/I79IXG https://gitee.com/src-openeuler/python-suds-jurko/issues/I79IQM https://gitee.com/src-openeuler/ovn/issues/I79I7O https://gitee.com/openeuler/openstack/issues/I77LN7 https://gitee.com/openeuler/openstack/issues/I77LQN https://gitee.com/openeuler/openstack/issues/I79OIL https://gitee.com/openeuler/openstack/issues/I7BQC0 https://gitee.com/openeuler/openstack/issues/I7CC2N 4 \u6d4b\u8bd5\u6267\u884c \u00b6 4.1 \u6d4b\u8bd5\u6267\u884c\u7edf\u8ba1\u6570\u636e \u00b6 \u672c\u8282\u5185\u5bb9\u6839\u636e\u6d4b\u8bd5\u7528\u4f8b\u53ca\u5b9e\u9645\u6267\u884c\u60c5\u51b5\u8fdb\u884c\u7279\u6027\u6574\u4f53\u6d4b\u8bd5\u7684\u7edf\u8ba1\uff0c\u53ef\u6839\u636e\u7b2c\u4e8c\u7ae0\u7684\u6d4b\u8bd5\u8f6e\u6b21\u5206\u5f00\u8fdb\u884c\u7edf\u8ba1\u8bf4\u660e\u3002 \u7248\u672c\u540d\u79f0 \u6d4b\u8bd5\u7528\u4f8b\u6570 \u7528\u4f8b\u6267\u884c\u7ed3\u679c \u53d1\u73b0\u95ee\u9898\u5355\u6570 openEuler 22.03 LTS SP2 OpenStack Train 1354 \u901a\u8fc71289\u4e2a\uff0cskip 64\u4e2a\uff0cFail 0\u4e2a 2 openEuler 22.03 LTS SP2 OpenStack Wallaby 1164 \u901a\u8fc71088\u4e2a\uff0cskip 70\u4e2a\uff0cFail 0\u4e2a 1 4.2 \u540e\u7eed\u6d4b\u8bd5\u5efa\u8bae \u00b6 \u6db5\u76d6\u4e3b\u8981\u7684\u6027\u80fd\u6d4b\u8bd5\u3002 \u8986\u76d6\u66f4\u591a\u7684driver/plugin\u6d4b\u8bd5\u3002 \u91cd\u70b9\u6d4b\u8bd5SP2\u65b0\u589eOpenStack\u670d\u52a1\uff0c\u5c3d\u65e9\u53d1\u73b0\u95ee\u9898\uff0c\u89e3\u51b3\u95ee\u9898\u3002 5 \u9644\u4ef6 \u00b6 N/A","title":"openEuler-22.03-LTS-SP2"},{"location":"test/openEuler-22.03-LTS-SP2/#openeuler-2203-lts-sp2","text":"\u7248\u6743\u6240\u6709 \u00a9 2023 openEuler\u793e\u533a \u60a8\u5bf9\u201c\u672c\u6587\u6863\u201d\u7684\u590d\u5236\u3001\u4f7f\u7528\u3001\u4fee\u6539\u53ca\u5206\u53d1\u53d7\u77e5\u8bc6\u5171\u4eab(Creative Commons)\u7f72\u540d\u2014\u76f8\u540c\u65b9\u5f0f\u5171\u4eab4.0\u56fd\u9645\u516c\u5171\u8bb8\u53ef\u534f\u8bae(\u4ee5\u4e0b\u7b80\u79f0\u201cCC BY-SA 
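The "basic function test" rounds listed in section 2 of each report exercise create/query/delete of VMs, volumes and networks. A minimal sketch of that flow with python-openstackclient; the image, flavor and address range are placeholders for whatever the environment actually provides:

```shell
# network and subnet
openstack network create demo-net
openstack subnet create --network demo-net --subnet-range 192.168.100.0/24 demo-subnet

# volume
openstack volume create --size 1 demo-vol

# server on the new network, then attach the volume
openstack server create --image cirros --flavor m1.tiny --network demo-net demo-vm
openstack server add volume demo-vm demo-vol

# query, then tear everything down
openstack server list
openstack server delete demo-vm
openstack volume delete demo-vol
openstack network delete demo-net
```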
# openEuler 22.03 LTS SP3 Test Report

Copyright © 2023 openEuler community.

Copying, use, modification and distribution of this document are governed by the Creative Commons Attribution-ShareAlike 4.0 International Public License (CC BY-SA 4.0). A summary of CC BY-SA 4.0 (which is not a substitute for the license) is available at https://creativecommons.org/licenses/by-sa/4.0/; the full license text is available at https://creativecommons.org/licenses/by-sa/4.0/legalcode.

Revision record:

| Date | Revision | Description | Author |
|---|---|---|---|
| 2023-12-27 | 1 | Initial draft | Zheng Ting |

Keywords: OpenStack

Abstract: openEuler 22.03 LTS SP3 provides RPM packages for OpenStack Train and OpenStack Wallaby, so that users can deploy OpenStack quickly.

Abbreviations:

| Abbreviation | Full name | Meaning |
|---|---|---|
| CLI | Command Line Interface | Command-line tool |
| ECS | Elastic Cloud Server | Elastic cloud server |

## 1 Feature Overview

openEuler 22.03 LTS SP3 provides RPM packages for OpenStack Train and OpenStack Wallaby, covering the following projects and the CLI shipped with each of them: Keystone, Neutron, Cinder, Nova, Placement, Glance, Horizon, Aodh, Ceilometer, Cyborg, Gnocchi, Heat, Swift, Ironic, Kolla, Trove, Tempest, Barbican, Octavia, designate, Manila, Masakari, Mistral, Senlin, Zaqar.

## 2 Feature Test Information

This section describes the versions under test, the test periods and test rounds, and the hardware they depend on.

| Release under test | Test start | Test end |
|---|---|---|
| openEuler 22.03 LTS SP3 RC1 (installation and deployment test of each OpenStack Train component) | 2023.11.23 | 2023.11.27 |
| openEuler 22.03 LTS SP3 RC1 (OpenStack Train basic function test: create/query/update/delete of VM, volume and network resources) | 2023.11.28 | 2023.12.1 |
| openEuler 22.03 LTS SP3 RC2 (OpenStack Train Tempest integration test) | 2023.12.2 | 2023.12.6 |
| openEuler 22.03 LTS SP3 RC3 (OpenStack Train issue regression test) | 2023.12.7 | 2023.12.11 |
| openEuler 22.03 LTS SP3 RC3 (installation and deployment test of each OpenStack Wallaby component) | 2023.12.12 | 2023.12.16 |
| openEuler 22.03 LTS SP3 RC3 (OpenStack Wallaby basic function test: create/query/update/delete of VM, volume and network resources) | 2023.12.17 | 2023.12.21 |
| openEuler 22.03 LTS SP3 RC4 (OpenStack Wallaby Tempest integration test) | 2023.12.21 | 2023.12.25 |
| openEuler 22.03 LTS SP3 RC4 (OpenStack Wallaby issue regression test) | 2023.12.25 | 2023.12.28 |

Hardware environment used for the feature tests:

| Hardware model | Hardware configuration | Notes |
|---|---|---|
| Huawei Cloud ECS | Intel Cascade Lake 3.0 GHz, 8 vCPUs / 16 GB | Huawei Cloud x86 VM |
| Huawei Cloud ECS | Huawei Kunpeng 920 2.6 GHz, 8 vCPUs / 16 GB | Huawei Cloud arm64 VM |

## 3 Test Conclusion Overview

### 3.1 Overall conclusion

- OpenStack Train: 1303 Tempest cases were executed, mainly covering API and functional tests, and the deployment passed 7x24 long-stability testing. 65 cases were skipped (all functions or interfaces deprecated in Train, such as Keystone V1 and Cinder V1) and 0 failed; the remaining 1238 cases all passed. The issues found have been resolved and verified by regression, with no residual risk. Overall quality is good.
- OpenStack Wallaby: 1263 Tempest cases were executed, mainly covering API and functional tests, and the deployment passed 7x24 long-stability testing. 93 cases were skipped (functions or interfaces deprecated in Wallaby, such as Keystone V1 and Cinder V1, plus the unsupported Barbican project) and 0 failed; the remaining 1170 cases all passed. The issues found have been resolved and verified by regression, with no residual risk. Overall quality is good.

Test activity summary:

| Test activity | Result |
|---|---|
| Tempest integration test (API) | Full API coverage |
| Functional test (Train) | 1303 Tempest cases covered: 65 skipped, 0 failed, all others passed |
| Functional test (Wallaby) | 1263 Tempest cases covered: 93 skipped, 0 failed, all others passed |
| Functional test (resources) | Management operations on VMs (KVM, QEMU), storage (LVM, NFS and Ceph backends) and network resources (Linux bridge, Open vSwitch) work normally |

### 3.2 Constraints

This test round did not cover functions and interfaces that are explicitly deprecated in OpenStack Train and Wallaby, so there is no guarantee that those deprecated functions and interfaces (the skipped cases mentioned above) work on openEuler 22.03 LTS SP3.

### 3.3 Remaining Issue Analysis

#### 3.3.1 Remaining issues, impact and workarounds

| No. | Description | Severity | Impact and workaround | Status |
|---|---|---|---|---|
| N/A | N/A | N/A | N/A | N/A |

#### 3.3.2 Issue statistics

| Total | Critical | Major | Minor | Trivial |
|---|---|---|---|---|
| 1 | 0 | 1 | 0 | 0 |
| 100% | 0% | 100% | 0% | 0% |

Issue links:

- https://gitee.com/src-openeuler/python-ndg-httpsclient/issues/I8Q6GR

## 4 Test Execution

### 4.1 Test execution statistics

This section summarises the overall feature test statistics based on the test cases and their actual execution; the figures can also be broken down by the test rounds listed in Chapter 2.

| Release | Test cases | Execution result | Issues found |
|---|---|---|---|
| openEuler 22.03 LTS SP3, OpenStack Train | 1303 | 1238 passed, 65 skipped, 0 failed | 0 |
| openEuler 22.03 LTS SP3, OpenStack Wallaby | 1263 | 1170 passed, 93 skipped, 0 failed | 1 |

### 4.2 Suggestions for follow-up testing

- Cover the main performance tests.
- Cover more driver/plugin tests.
- Focus testing on the OpenStack services newly added in SP3, so that problems are found and fixed as early as possible.

## 5 Appendix

N/A
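Each report's abstract states that the Train and Wallaby RPMs ship with the corresponding SP release. A quick, non-authoritative way to see what a given installation actually exposes; `openstack-nova` below is only an example package name, not confirmed by the report:

```shell
# list OpenStack-related packages available from the configured repositories
dnf search openstack

# inspect one component before installing it (package name assumed)
dnf info openstack-nova

# show which OpenStack packages are already installed
rpm -qa | grep -i openstack
```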
# openEuler 22.03 LTS SP4 Test Report

Copyright © 2023 openEuler community.

Copying, use, modification and distribution of this document are governed by the Creative Commons Attribution-ShareAlike 4.0 International Public License (CC BY-SA 4.0). A summary of CC BY-SA 4.0 (which is not a substitute for the license) is available at https://creativecommons.org/licenses/by-sa/4.0/; the full license text is available at https://creativecommons.org/licenses/by-sa/4.0/legalcode.

Revision record:

| Date | Revision | Description | Author |
|---|---|---|---|
| 2024-06-21 | 1 | Initial draft | Wang Jing |

Keywords: OpenStack

Abstract: openEuler 22.03 LTS SP4 provides RPM packages for OpenStack Train and OpenStack Wallaby, so that users can deploy OpenStack quickly.

Abbreviations:

| Abbreviation | Full name | Meaning |
|---|---|---|
| CLI | Command Line Interface | Command-line tool |
| ECS | Elastic Cloud Server | Elastic cloud server |

## 1 Feature Overview

openEuler 22.03 LTS SP4 provides RPM packages for OpenStack Train and OpenStack Wallaby, covering the following projects and the CLI shipped with each of them: Keystone, Neutron, Cinder, Nova, Placement, Glance, Horizon, Aodh, Ceilometer, Cyborg, Gnocchi, Heat, Swift, Ironic, Kolla, Trove, Tempest, Barbican, Octavia, designate, Manila, Masakari, Mistral, Senlin, Zaqar.

## 2 Feature Test Information

This section describes the versions under test, the test periods and test rounds, and the hardware they depend on.

| Release under test | Test start | Test end |
|---|---|---|
| openEuler 22.03 LTS SP4 RC1 (installation and deployment test of each OpenStack Train component) | 2024.04.23 | 2024.04.27 |
| openEuler 22.03 LTS SP4 RC1 (OpenStack Train basic function test: create/query/update/delete of VM, volume and network resources) | 2024.04.28 | 2024.05.09 |
| openEuler 22.03 LTS SP4 RC2 (OpenStack Train Tempest integration test) | 2024.05.09 | 2024.05.16 |
| openEuler 22.03 LTS SP4 RC3 (OpenStack Train issue regression test) | 2024.05.17 | 2024.05.21 |
| openEuler 22.03 LTS SP4 RC3 (installation and deployment test of each OpenStack Wallaby component) | 2024.05.22 | 2024.05.25 |
| openEuler 22.03 LTS SP4 RC3 (OpenStack Wallaby basic function test: create/query/update/delete of VM, volume and network resources) | 2024.05.27 | 2024.05.30 |
| openEuler 22.03 LTS SP4 RC4 (OpenStack Wallaby Tempest integration test) | 2024.06.01 | 2024.06.07 |
| openEuler 22.03 LTS SP4 RC4 (OpenStack Wallaby issue regression test) | 2024.06.08 | 2024.06.19 |

Hardware environment used for the feature tests:

| Hardware model | Hardware configuration | Notes |
|---|---|---|
| Huawei Cloud ECS | Intel Cascade Lake 3.0 GHz, 8 vCPUs / 16 GB | Huawei Cloud x86 VM |
| Huawei Cloud ECS | Huawei Kunpeng 920 2.6 GHz, 8 vCPUs / 16 GB | Huawei Cloud arm64 VM |

## 3 Test Conclusion Overview

### 3.1 Overall conclusion

- OpenStack Train: 1420 Tempest cases were executed, mainly covering API and functional tests, and the deployment passed 7x24 long-stability testing. 66 cases were skipped (all functions or interfaces deprecated in Train, such as Keystone V1 and Cinder V1) and 0 failed; the remaining 1354 cases all passed. The issues found have been resolved and verified by regression, with no residual risk. Overall quality is good.
- OpenStack Wallaby: 1436 Tempest cases were executed, mainly covering API and functional tests, and the deployment passed 7x24 long-stability testing. 95 cases were skipped (functions or interfaces deprecated in Wallaby, such as Keystone V1 and Cinder V1, plus the unsupported Barbican project) and 0 failed; the remaining 1341 cases all passed. The issues found have been resolved and verified by regression, with no residual risk. Overall quality is good.

Test activity summary:

| Test activity | Result |
|---|---|
| Tempest integration test (API) | Full API coverage |
| Functional test (Train) | 1420 Tempest cases covered: 66 skipped, 0 failed, all others passed |
| Functional test (Wallaby) | 1436 Tempest cases covered: 95 skipped, 0 failed, all others passed |
| Functional test (resources) | Management operations on VMs (KVM, QEMU), storage (LVM, NFS and Ceph backends) and network resources (Linux bridge, Open vSwitch) work normally |

### 3.2 Constraints

This test round did not cover functions and interfaces that are explicitly deprecated in OpenStack Train and Wallaby, so there is no guarantee that those deprecated functions and interfaces (the skipped cases mentioned above) work on openEuler 22.03 LTS SP4.

### 3.3 Remaining Issue Analysis

#### 3.3.1 Remaining issues, impact and workarounds

| No. | Description | Severity | Impact and workaround | Status |
|---|---|---|---|---|
| N/A | N/A | N/A | N/A | N/A |

#### 3.3.2 Issue statistics

| Total | Critical | Major | Minor | Trivial |
|---|---|---|---|---|
| 1 | 0 | 1 | 0 | 0 |
| 100% | 0% | 100% | 0% | 0% |

Issue links: none listed.

## 4 Test Execution

### 4.1 Test execution statistics

This section summarises the overall feature test statistics based on the test cases and their actual execution; the figures can also be broken down by the test rounds listed in Chapter 2.

| Release | Test cases | Execution result | Issues found |
|---|---|---|---|
| openEuler 22.03 LTS SP4, OpenStack Train | 1420 | 1354 passed, 66 skipped, 0 failed | 0 |
| openEuler 22.03 LTS SP4, OpenStack Wallaby | 1436 | 1341 passed, 95 skipped, 0 failed | 0 |

### 4.2 Suggestions for follow-up testing

- Cover the main performance tests.
- Cover more driver/plugin tests.
- Focus testing on the OpenStack services newly added in SP4, so that problems are found and fixed as early as possible.

## 5 Appendix

N/A
4.0\u201d)\u7684\u7ea6\u675f\u3002\u4e3a\u4e86\u65b9\u4fbf\u7528\u6237\u7406\u89e3\uff0c\u60a8\u53ef\u4ee5\u901a\u8fc7\u8bbf\u95eehttps://creativecommons.org/licenses/by-sa/4.0/ \u4e86\u89e3CC BY-SA 4.0\u7684\u6982\u8981 (\u4f46\u4e0d\u662f\u66ff\u4ee3)\u3002CC BY-SA 4.0\u7684\u5b8c\u6574\u534f\u8bae\u5185\u5bb9\u60a8\u53ef\u4ee5\u8bbf\u95ee\u5982\u4e0b\u7f51\u5740\u83b7\u53d6\uff1ahttps://creativecommons.org/licenses/by-sa/4.0/legalcode\u3002 \u4fee\u8ba2\u8bb0\u5f55 \u65e5\u671f \u4fee\u8ba2\u7248\u672c \u4fee\u6539\u63cf\u8ff0 \u4f5c\u8005 2024-06-21 1 \u521d\u7a3f \u738b\u9759 \u5173\u952e\u8bcd\uff1a OpenStack \u6458\u8981\uff1a \u5728 openEuler 22.03 LTS SP4 \u7248\u672c\u4e2d\u63d0\u4f9b OpenStack Train \u3001 OpenStack Wallaby \u7248\u672c\u7684 RPM \u5b89\u88c5\u5305\uff0c\u65b9\u4fbf\u7528\u6237\u5feb\u901f\u90e8\u7f72 OpenStack \u3002 \u7f29\u7565\u8bed\u6e05\u5355\uff1a \u7f29\u7565\u8bed \u82f1\u6587\u5168\u540d \u4e2d\u6587\u89e3\u91ca CLI Command Line Interface \u547d\u4ee4\u884c\u5de5\u5177 ECS Elastic Cloud Server \u5f39\u6027\u4e91\u670d\u52a1\u5668","title":"openEuler 22.03 LTS SP4\u6d4b\u8bd5\u62a5\u544a"},{"location":"test/openEuler-22.03-LTS-SP4/#1","text":"\u5728 openEuler 22.03 LTS SP4 \u7248\u672c\u4e2d\u63d0\u4f9b OpenStack Train \u3001 OpenStack Wallaby \u7248\u672c\u7684 RPM \u5b89\u88c5\u5305\uff0c\u5305\u62ec\u4ee5\u4e0b\u9879\u76ee\u4ee5\u53ca\u6bcf\u4e2a\u9879\u76ee\u914d\u5957\u7684 CLI \u3002 Keystone Neutron Cinder Nova Placement Glance Horizon Aodh Ceilometer Cyborg Gnocchi Heat Swift Ironic Kolla Trove Tempest Barbican Octavia designate Manila Masakari Mistral Senlin Zaqar","title":"1 \u7279\u6027\u6982\u8ff0"},{"location":"test/openEuler-22.03-LTS-SP4/#2","text":"\u672c\u8282\u63cf\u8ff0\u88ab\u6d4b\u5bf9\u8c61\u7684\u7248\u672c\u4fe1\u606f\u548c\u6d4b\u8bd5\u7684\u65f6\u95f4\u53ca\u6d4b\u8bd5\u8f6e\u6b21\uff0c\u5305\u62ec\u4f9d\u8d56\u7684\u786c\u4ef6\u3002 \u7248\u672c\u540d\u79f0 \u6d4b\u8bd5\u8d77\u59cb\u65f6\u95f4 \u6d4b\u8bd5\u7ed3\u675f\u65f6\u95f4 openEuler 22.03 LTS SP4 RC1 (OpenStack Train\u7248\u672c\u5404\u7ec4\u4ef6\u7684\u5b89\u88c5\u90e8\u7f72\u6d4b\u8bd5) 2024.04.23 2024.04.27 openEuler 22.03 LTS SP4 RC1 (OpenStack Train\u7248\u672c\u57fa\u672c\u529f\u80fd\u6d4b\u8bd5\uff0c\u5305\u62ec\u865a\u62df\u673a\uff0c\u5377\uff0c\u7f51\u7edc\u76f8\u5173\u8d44\u6e90\u7684\u589e\u5220\u6539\u67e5) 2024.04.28 2024.05.09 openEuler 22.03 LTS SP4 RC2 (OpenStack Train\u7248\u672ctempest\u96c6\u6210\u6d4b\u8bd5) 2024.05.09 2024.05.16 openEuler 22.03 LTS SP4 RC3 (OpenStack Train\u7248\u672c\u95ee\u9898\u56de\u5f52\u6d4b\u8bd5) 2024.05.17 2024.05.21 openEuler 22.03 LTS SP4 RC3 (OpenStack Wallaby\u7248\u672c\u5404\u7ec4\u4ef6\u7684\u5b89\u88c5\u90e8\u7f72\u6d4b\u8bd5) 2024.05.22 2024.05.25 openEuler 22.03 LTS SP4 RC3 (OpenStack Wallaby\u57fa\u7248\u672c\u672c\u529f\u80fd\u6d4b\u8bd5\uff0c\u5305\u62ec\u865a\u62df\u673a\uff0c\u5377\uff0c\u7f51\u7edc\u76f8\u5173\u8d44\u6e90\u7684\u589e\u5220\u6539\u67e5) 2024.05.27 2024.05.30 openEuler 22.03 LTS SP4 RC4 (OpenStack Wallaby\u7248\u672ctempest\u96c6\u6210\u6d4b\u8bd5) 2024.06.01 2024.06.07 openEuler 22.03 LTS SP4 RC4 (OpenStack Wallaby\u7248\u672c\u95ee\u9898\u56de\u5f52\u6d4b\u8bd5) 2024.06.08 2024.06.19 \u63cf\u8ff0\u7279\u6027\u6d4b\u8bd5\u7684\u786c\u4ef6\u73af\u5883\u4fe1\u606f \u786c\u4ef6\u578b\u53f7 \u786c\u4ef6\u914d\u7f6e\u4fe1\u606f \u5907\u6ce8 \u534e\u4e3a\u4e91ECS Intel Cascade Lake 3.0GHz 8U16G \u534e\u4e3a\u4e91x86\u865a\u62df\u673a \u534e\u4e3a\u4e91ECS Huawei Kunpeng 920 2.6GHz 8U16G 
\u534e\u4e3a\u4e91arm64\u865a\u62df\u673a","title":"2 \u7279\u6027\u6d4b\u8bd5\u4fe1\u606f"},{"location":"test/openEuler-22.03-LTS-SP4/#3","text":"","title":"3 \u6d4b\u8bd5\u7ed3\u8bba\u6982\u8ff0"},{"location":"test/openEuler-22.03-LTS-SP4/#31","text":"OpenStack Train \u7248\u672c\uff0c\u5171\u8ba1\u6267\u884c Tempest \u7528\u4f8b 1420 \u4e2a\uff0c\u4e3b\u8981\u8986\u76d6\u4e86 API \u6d4b\u8bd5\u548c\u529f\u80fd\u6d4b\u8bd5\uff0c\u901a\u8fc7 7*24 \u7684\u957f\u7a33\u6d4b\u8bd5\uff0c Skip \u7528\u4f8b 66 \u4e2a\uff08\u5168\u662f OpenStack Train \u7248\u4e2d\u5df2\u5e9f\u5f03\u7684\u529f\u80fd\u6216\u63a5\u53e3\uff0c\u5982Keystone V1\u3001Cinder V1\u7b49\uff09\uff0c\u5931\u8d25\u7528\u4f8b 0 \u4e2a\uff0c\u5176\u4ed6 1354 \u4e2a\u7528\u4f8b\u5168\u90e8\u901a\u8fc7\uff0c\u53d1\u73b0\u95ee\u9898\u5df2\u89e3\u51b3\uff0c\u56de\u5f52\u901a\u8fc7\uff0c\u65e0\u9057\u7559\u98ce\u9669\uff0c\u6574\u4f53\u8d28\u91cf\u826f\u597d\u3002 OpenStack Wallaby \u7248\u672c\uff0c\u5171\u8ba1\u6267\u884c Tempest \u7528\u4f8b 1436 \u4e2a\uff0c\u4e3b\u8981\u8986\u76d6\u4e86API\u6d4b\u8bd5\u548c\u529f\u80fd\u6d4b\u8bd5\uff0c\u901a\u8fc7 7*24 \u7684\u957f\u7a33\u6d4b\u8bd5\uff0c Skip \u7528\u4f8b 95 \u4e2a\uff08\u5168\u662f OpenStack Wallaby \u7248\u4e2d\u5df2\u5e9f\u5f03\u7684\u529f\u80fd\u6216\u63a5\u53e3\uff0c\u5982KeystoneV1\u3001Cinder V1\u7b49\uff0c\u548c\u4e0d\u652f\u6301\u7684barbican\u9879\u76ee\uff09\uff0c\u5931\u8d25\u7528\u4f8b 0 \u4e2a\uff0c\u5176\u4ed6 1341 \u4e2a\u7528\u4f8b\u5168\u90e8\u901a\u8fc7\uff0c\u53d1\u73b0\u95ee\u9898\u5df2\u89e3\u51b3\uff0c\u56de\u5f52\u901a\u8fc7\uff0c\u65e0\u9057\u7559\u98ce\u9669\uff0c\u6574\u4f53\u8d28\u91cf\u826f\u597d\u3002 \u6d4b\u8bd5\u6d3b\u52a8 tempest\u96c6\u6210\u6d4b\u8bd5 \u63a5\u53e3\u6d4b\u8bd5 API\u5168\u8986\u76d6 \u529f\u80fd\u6d4b\u8bd5 Train\u7248\u672c\u8986\u76d6Tempest\u6240\u6709\u76f8\u5173\u6d4b\u8bd5\u7528\u4f8b1420\u4e2a\uff0c\u5176\u4e2dSkip 66\u4e2a\uff0cFail 0\u4e2a\uff0c\u5176\u4ed6\u5168\u901a\u8fc7\u3002 \u529f\u80fd\u6d4b\u8bd5 Wallaby\u7248\u672c\u8986\u76d6Tempest\u6240\u6709\u76f8\u5173\u6d4b\u8bd5\u7528\u4f8b1436\u4e2a\uff0c\u5176\u4e2dSkip 95\u4e2a\uff0cFail 0\u4e2a, \u5176\u4ed6\u5168\u901a\u8fc7\u3002 \u6d4b\u8bd5\u6d3b\u52a8 \u529f\u80fd\u6d4b\u8bd5 \u529f\u80fd\u6d4b\u8bd5 \u865a\u62df\u673a\uff08KVM\u3001Qemu)\u3001\u5b58\u50a8\uff08lvm\u3001NFS\u3001Ceph\u540e\u7aef\uff09\u3001\u7f51\u7edc\u8d44\u6e90\uff08linuxbridge\u3001openvswitch\uff09\u7ba1\u7406\u64cd\u4f5c\u6b63\u5e38","title":"3.1 \u6d4b\u8bd5\u6574\u4f53\u7ed3\u8bba"},{"location":"test/openEuler-22.03-LTS-SP4/#32","text":"\u672c\u6b21\u6d4b\u8bd5\u6ca1\u6709\u8986\u76d6 OpenStack Train \u3001 OpenStack Wallaby \u7248\u4e2d\u660e\u786e\u5e9f\u5f03\u7684\u529f\u80fd\u548c\u63a5\u53e3\uff0c\u56e0\u6b64\u4e0d\u80fd\u4fdd\u8bc1\u5df2\u5e9f\u5f03\u7684\u529f\u80fd\u548c\u63a5\u53e3\uff08\u524d\u6587\u63d0\u5230\u7684Skip\u7684\u7528\u4f8b\uff09\u5728 openEuler 22.03 LTS SP4 \u4e0a\u80fd\u6b63\u5e38\u4f7f\u7528\u3002","title":"3.2 \u7ea6\u675f\u8bf4\u660e"},{"location":"test/openEuler-22.03-LTS-SP4/#33","text":"","title":"3.3 \u9057\u7559\u95ee\u9898\u5206\u6790"},{"location":"test/openEuler-22.03-LTS-SP4/#331","text":"\u95ee\u9898\u5355\u53f7 \u95ee\u9898\u63cf\u8ff0 \u95ee\u9898\u7ea7\u522b \u95ee\u9898\u5f71\u54cd\u548c\u89c4\u907f\u63aa\u65bd \u5f53\u524d\u72b6\u6001 N/A N/A N/A N/A N/A","title":"3.3.1 \u9057\u7559\u95ee\u9898\u5f71\u54cd\u4ee5\u53ca\u89c4\u907f\u63aa\u65bd"},{"location":"test/openEuler-22.03-LTS-SP4/#332","text":"\u95ee\u9898\u603b\u6570 \u4e25\u91cd 
\u4e3b\u8981 \u6b21\u8981 \u4e0d\u91cd\u8981 \u6570\u76ee 1 0 1 0 0 \u767e\u5206\u6bd4 100 0 100 0 0 ISSUE Link","title":"3.3.2 \u95ee\u9898\u7edf\u8ba1"},{"location":"test/openEuler-22.03-LTS-SP4/#4","text":"","title":"4 \u6d4b\u8bd5\u6267\u884c"},{"location":"test/openEuler-22.03-LTS-SP4/#41","text":"\u672c\u8282\u5185\u5bb9\u6839\u636e\u6d4b\u8bd5\u7528\u4f8b\u53ca\u5b9e\u9645\u6267\u884c\u60c5\u51b5\u8fdb\u884c\u7279\u6027\u6574\u4f53\u6d4b\u8bd5\u7684\u7edf\u8ba1\uff0c\u53ef\u6839\u636e\u7b2c\u4e8c\u7ae0\u7684\u6d4b\u8bd5\u8f6e\u6b21\u5206\u5f00\u8fdb\u884c\u7edf\u8ba1\u8bf4\u660e\u3002 \u7248\u672c\u540d\u79f0 \u6d4b\u8bd5\u7528\u4f8b\u6570 \u7528\u4f8b\u6267\u884c\u7ed3\u679c \u53d1\u73b0\u95ee\u9898\u5355\u6570 openEuler 22.03 LTS SP4 OpenStack Train 1420 \u901a\u8fc71354\u4e2a\uff0cskip 66\u4e2a\uff0cFail 0\u4e2a 0 openEuler 22.03 LTS SP4 OpenStack Wallaby 1436 \u901a\u8fc71431\u4e2a\uff0cskip 95\u4e2a\uff0cFail 0\u4e2a 0","title":"4.1 \u6d4b\u8bd5\u6267\u884c\u7edf\u8ba1\u6570\u636e"},{"location":"test/openEuler-22.03-LTS-SP4/#42","text":"\u6db5\u76d6\u4e3b\u8981\u7684\u6027\u80fd\u6d4b\u8bd5\u3002 \u8986\u76d6\u66f4\u591a\u7684driver/plugin\u6d4b\u8bd5\u3002 \u91cd\u70b9\u6d4b\u8bd5SP4\u65b0\u589eOpenStack\u670d\u52a1\uff0c\u5c3d\u65e9\u53d1\u73b0\u95ee\u9898\uff0c\u89e3\u51b3\u95ee\u9898\u3002","title":"4.2 \u540e\u7eed\u6d4b\u8bd5\u5efa\u8bae"},{"location":"test/openEuler-22.03-LTS-SP4/#5","text":"N/A","title":"5 \u9644\u4ef6"},{"location":"test/openEuler-22.03-LTS/","text":"openEuler 22.03 LTS \u6d4b\u8bd5\u62a5\u544a \u00b6 \u7248\u6743\u6240\u6709 \u00a9 2021 openEuler\u793e\u533a \u60a8\u5bf9\u201c\u672c\u6587\u6863\u201d\u7684\u590d\u5236\u3001\u4f7f\u7528\u3001\u4fee\u6539\u53ca\u5206\u53d1\u53d7\u77e5\u8bc6\u5171\u4eab(Creative Commons)\u7f72\u540d\u2014\u76f8\u540c\u65b9\u5f0f\u5171\u4eab4.0\u56fd\u9645\u516c\u5171\u8bb8\u53ef\u534f\u8bae(\u4ee5\u4e0b\u7b80\u79f0\u201cCC BY-SA 4.0\u201d)\u7684\u7ea6\u675f\u3002\u4e3a\u4e86\u65b9\u4fbf\u7528\u6237\u7406\u89e3\uff0c\u60a8\u53ef\u4ee5\u901a\u8fc7\u8bbf\u95ee https://creativecommons.org/licenses/by-sa/4.0/ \u4e86\u89e3CC BY-SA 4.0\u7684\u6982\u8981 (\u4f46\u4e0d\u662f\u66ff\u4ee3)\u3002CC BY-SA 4.0\u7684\u5b8c\u6574\u534f\u8bae\u5185\u5bb9\u60a8\u53ef\u4ee5\u8bbf\u95ee\u5982\u4e0b\u7f51\u5740\u83b7\u53d6\uff1a https://creativecommons.org/licenses/by-sa/4.0/legalcode\u3002 \u4fee\u8ba2\u8bb0\u5f55 \u65e5\u671f \u4fee\u8ba2\u7248\u672c \u4fee\u6539\u63cf\u8ff0 \u4f5c\u8005 2022-03-21 1 \u521d\u7a3f \u674e\u4f73\u4f1f \u5173\u952e\u8bcd\uff1a OpenStack \u6458\u8981\uff1a \u5728 openEuler 22.03 LTS \u7248\u672c\u4e2d\u63d0\u4f9b OpenStack Train \u3001 OpenStack Wallaby \u7248\u672c\u7684 RPM \u5b89\u88c5\u5305\uff0c\u65b9\u4fbf\u7528\u6237\u5feb\u901f\u90e8\u7f72 OpenStack \u3002 \u7f29\u7565\u8bed\u6e05\u5355\uff1a \u7f29\u7565\u8bed \u82f1\u6587\u5168\u540d \u4e2d\u6587\u89e3\u91ca CLI Command Line Interface \u547d\u4ee4\u884c\u5de5\u5177 ECS Elastic Cloud Server \u5f39\u6027\u4e91\u670d\u52a1\u5668 1 \u7279\u6027\u6982\u8ff0 \u00b6 \u5728 openEuler 22.03 LTS \u7248\u672c\u4e2d\u63d0\u4f9b OpenStack Train \u3001 OpenStack Wallaby \u7248\u672c\u7684 RPM \u5b89\u88c5\u5305\uff0c\u5305\u62ec\u4ee5\u4e0b\u9879\u76ee\u4ee5\u53ca\u6bcf\u4e2a\u9879\u76ee\u914d\u5957\u7684 CLI \u3002 Keystone Neutron Cinder Nova Placement Glance Horizon Aodh Ceilometer Cyborg Gnocchi Heat Swift Ironic Kolla Trove Tempest 2 \u7279\u6027\u6d4b\u8bd5\u4fe1\u606f \u00b6 
\u672c\u8282\u63cf\u8ff0\u88ab\u6d4b\u5bf9\u8c61\u7684\u7248\u672c\u4fe1\u606f\u548c\u6d4b\u8bd5\u7684\u65f6\u95f4\u53ca\u6d4b\u8bd5\u8f6e\u6b21\uff0c\u5305\u62ec\u4f9d\u8d56\u7684\u786c\u4ef6\u3002 \u7248\u672c\u540d\u79f0 \u6d4b\u8bd5\u8d77\u59cb\u65f6\u95f4 \u6d4b\u8bd5\u7ed3\u675f\u65f6\u95f4 openEuler 22.03 LTS RC1 (OpenStack Train\u7248\u672c\u5404\u7ec4\u4ef6\u7684\u5b89\u88c5\u90e8\u7f72\u6d4b\u8bd5) 2022.02.20 2022.02.27 openEuler 22.03 LTS RC1 (OpenStack Train\u7248\u672c\u57fa\u672c\u529f\u80fd\u6d4b\u8bd5\uff0c\u5305\u62ec\u865a\u62df\u673a\uff0c\u5377\uff0c\u7f51\u7edc\u76f8\u5173\u8d44\u6e90\u7684\u589e\u5220\u6539\u67e5) 2022.02.28 2022.03.03 openEuler 22.03 LTS RC2 (OpenStack Train\u7248\u672ctempest\u96c6\u6210\u6d4b\u8bd5) 2022.03.04 2022.03.07 openEuler 22.03 LTS RC3 (OpenStack Train\u7248\u672c\u95ee\u9898\u56de\u5f52\u6d4b\u8bd5) 2022.03.08 2022.03.09 openEuler 22.03 LTS RC3 (OpenStack Wallaby\u7248\u672c\u5404\u7ec4\u4ef6\u7684\u5b89\u88c5\u90e8\u7f72\u6d4b\u8bd5) 2022.03.10 2022.03.15 openEuler 22.03 LTS RC3 (OpenStack Wallaby\u57fa\u7248\u672c\u672c\u529f\u80fd\u6d4b\u8bd5\uff0c\u5305\u62ec\u865a\u62df\u673a\uff0c\u5377\uff0c\u7f51\u7edc\u76f8\u5173\u8d44\u6e90\u7684\u589e\u5220\u6539\u67e5) 2022.03.16 2022.03.19 openEuler 22.03 LTS RC4 (OpenStack Wallaby\u7248\u672ctempest\u96c6\u6210\u6d4b\u8bd5) 2022.03.20 2022.03.21 openEuler 22.03 LTS RC4 (OpenStack Wallaby\u7248\u672c\u95ee\u9898\u56de\u5f52\u6d4b\u8bd5) 2022.03.21 2022.03.22 \u63cf\u8ff0\u7279\u6027\u6d4b\u8bd5\u7684\u786c\u4ef6\u73af\u5883\u4fe1\u606f \u786c\u4ef6\u578b\u53f7 \u786c\u4ef6\u914d\u7f6e\u4fe1\u606f \u5907\u6ce8 \u534e\u4e3a\u4e91ECS Intel Cascade Lake 3.0GHz 8U16G \u534e\u4e3a\u4e91x86\u865a\u62df\u673a \u534e\u4e3a\u4e91ECS Huawei Kunpeng 920 2.6GHz 8U16G \u534e\u4e3a\u4e91arm64\u865a\u62df\u673a TaiShan 200-2280 Kunpeng 920,48 Core@2.6GHz*2; 256GB DDR4 RAM ARM\u67b6\u6784\u670d\u52a1\u5668 3 \u6d4b\u8bd5\u7ed3\u8bba\u6982\u8ff0 \u00b6 3.1 \u6d4b\u8bd5\u6574\u4f53\u7ed3\u8bba \u00b6 OpenStack Train \u7248\u672c\uff0c\u5171\u8ba1\u6267\u884c Tempest \u7528\u4f8b 1354 \u4e2a\uff0c\u4e3b\u8981\u8986\u76d6\u4e86 API \u6d4b\u8bd5\u548c\u529f\u80fd\u6d4b\u8bd5\uff0c\u901a\u8fc7 7*24 \u7684\u957f\u7a33\u6d4b\u8bd5\uff0c Skip \u7528\u4f8b 64 \u4e2a\uff08\u5168\u662f OpenStack Train \u7248\u4e2d\u5df2\u5e9f\u5f03\u7684\u529f\u80fd\u6216\u63a5\u53e3\uff0c\u5982Keystone V1\u3001Cinder V1\u7b49\uff09\uff0c\u5931\u8d25\u7528\u4f8b 1 \u4e2a\uff08\u6d4b\u8bd5\u7528\u4f8b\u672c\u8eab\u95ee\u9898\uff09\uff0c\u5176\u4ed6 1289 \u4e2a\u7528\u4f8b\u5168\u90e8\u901a\u8fc7\uff0c\u53d1\u73b0\u95ee\u9898\u5df2\u89e3\u51b3\uff0c\u56de\u5f52\u901a\u8fc7\uff0c\u65e0\u9057\u7559\u98ce\u9669\uff0c\u6574\u4f53\u8d28\u91cf\u826f\u597d\u3002 OpenStack Wallaby \u7248\u672c\uff0c\u5171\u8ba1\u6267\u884c Tempest \u7528\u4f8b 1164 \u4e2a\uff0c\u4e3b\u8981\u8986\u76d6\u4e86API\u6d4b\u8bd5\u548c\u529f\u80fd\u6d4b\u8bd5\uff0c\u901a\u8fc7 7*24 \u7684\u957f\u7a33\u6d4b\u8bd5\uff0c Skip \u7528\u4f8b 70 \u4e2a\uff08\u5168\u662f OpenStack Wallaby \u7248\u4e2d\u5df2\u5e9f\u5f03\u7684\u529f\u80fd\u6216\u63a5\u53e3\uff0c\u5982KeystoneV1\u3001Cinder V1\u7b49\uff0c\u548c\u4e0d\u652f\u6301\u7684barbican\u9879\u76ee\uff09\uff0c\u5931\u8d25\u7528\u4f8b 6 \u4e2a\uff0c\u5176\u4ed6 1088 \u4e2a\u7528\u4f8b\u5168\u90e8\u901a\u8fc7\uff0c\u53d1\u73b0\u95ee\u9898\u5df2\u89e3\u51b3\uff0c\u56de\u5f52\u901a\u8fc7\uff0c\u65e0\u9057\u7559\u98ce\u9669\uff0c\u6574\u4f53\u8d28\u91cf\u826f\u597d\u3002 \u6d4b\u8bd5\u6d3b\u52a8 tempest\u96c6\u6210\u6d4b\u8bd5 
\u63a5\u53e3\u6d4b\u8bd5 API\u5168\u8986\u76d6 \u529f\u80fd\u6d4b\u8bd5 Train\u7248\u672c\u8986\u76d6Tempest\u6240\u6709\u76f8\u5173\u6d4b\u8bd5\u7528\u4f8b1354\u4e2a\uff0c\u5176\u4e2dSkip 64\u4e2a\uff0cFail 1\u4e2a\uff0c\u5176\u4ed6\u5168\u901a\u8fc7\u3002 \u529f\u80fd\u6d4b\u8bd5 Wallaby\u7248\u672c\u8986\u76d6Tempest\u6240\u6709\u76f8\u5173\u6d4b\u8bd5\u7528\u4f8b1164\u4e2a\uff0c\u5176\u4e2dSkip 70\u4e2a\uff0cFail 6\u4e2a, \u5176\u4ed6\u5168\u901a\u8fc7\u3002 \u6d4b\u8bd5\u6d3b\u52a8 \u529f\u80fd\u6d4b\u8bd5 \u529f\u80fd\u6d4b\u8bd5 \u865a\u62df\u673a\uff08KVM\u3001Qemu)\u3001\u5b58\u50a8\uff08lvm\u3001NFS\u3001Ceph\u540e\u7aef\uff09\u3001\u7f51\u7edc\u8d44\u6e90\uff08linuxbridge\u3001openvswitch\uff09\u7ba1\u7406\u64cd\u4f5c\u6b63\u5e38 3.2 \u7ea6\u675f\u8bf4\u660e \u00b6 \u672c\u6b21\u6d4b\u8bd5\u6ca1\u6709\u8986\u76d6 OpenStack Train \u3001 OpenStack Wallaby \u7248\u4e2d\u660e\u786e\u5e9f\u5f03\u7684\u529f\u80fd\u548c\u63a5\u53e3\uff0c\u56e0\u6b64\u4e0d\u80fd\u4fdd\u8bc1\u5df2\u5e9f\u5f03\u7684\u529f\u80fd\u548c\u63a5\u53e3\uff08\u524d\u6587\u63d0\u5230\u7684Skip\u7684\u7528\u4f8b\uff09\u5728 openEuler 22.03 LTS \u4e0a\u80fd\u6b63\u5e38\u4f7f\u7528\u3002 3.3 \u9057\u7559\u95ee\u9898\u5206\u6790 \u00b6 3.3.1 \u9057\u7559\u95ee\u9898\u5f71\u54cd\u4ee5\u53ca\u89c4\u907f\u63aa\u65bd \u00b6 \u95ee\u9898\u5355\u53f7 \u95ee\u9898\u63cf\u8ff0 \u95ee\u9898\u7ea7\u522b \u95ee\u9898\u5f71\u54cd\u548c\u89c4\u907f\u63aa\u65bd \u5f53\u524d\u72b6\u6001 N/A N/A N/A N/A N/A 3.3.2 \u95ee\u9898\u7edf\u8ba1 \u00b6 \u95ee\u9898\u603b\u6570 \u4e25\u91cd \u4e3b\u8981 \u6b21\u8981 \u4e0d\u91cd\u8981 \u6570\u76ee 10 2 6 2 0 \u767e\u5206\u6bd4 100 20 60 20 0 4 \u6d4b\u8bd5\u6267\u884c \u00b6 4.1 \u6d4b\u8bd5\u6267\u884c\u7edf\u8ba1\u6570\u636e \u00b6 \u672c\u8282\u5185\u5bb9\u6839\u636e\u6d4b\u8bd5\u7528\u4f8b\u53ca\u5b9e\u9645\u6267\u884c\u60c5\u51b5\u8fdb\u884c\u7279\u6027\u6574\u4f53\u6d4b\u8bd5\u7684\u7edf\u8ba1\uff0c\u53ef\u6839\u636e\u7b2c\u4e8c\u7ae0\u7684\u6d4b\u8bd5\u8f6e\u6b21\u5206\u5f00\u8fdb\u884c\u7edf\u8ba1\u8bf4\u660e\u3002 \u7248\u672c\u540d\u79f0 \u6d4b\u8bd5\u7528\u4f8b\u6570 \u7528\u4f8b\u6267\u884c\u7ed3\u679c \u53d1\u73b0\u95ee\u9898\u5355\u6570 openEuler 22.03 LTS OpenStack Train 1354 \u901a\u8fc71289\u4e2a\uff0cskip 64\u4e2a\uff0cFail 1\u4e2a 7 openEuler 22.03 LTS OpenStack Wallaby 1164 \u901a\u8fc71088\u4e2a\uff0cskip 70\u4e2a\uff0cFail 6\u4e2a 3 4.2 \u540e\u7eed\u6d4b\u8bd5\u5efa\u8bae \u00b6 \u6db5\u76d6\u4e3b\u8981\u7684\u6027\u80fd\u6d4b\u8bd5 \u8986\u76d6\u66f4\u591a\u7684driver/plugin\u6d4b\u8bd5 5 \u9644\u4ef6 \u00b6 N/A","title":"openEuler-22.03-LTS"},{"location":"test/openEuler-22.03-LTS/#openeuler-2203-lts","text":"\u7248\u6743\u6240\u6709 \u00a9 2021 openEuler\u793e\u533a \u60a8\u5bf9\u201c\u672c\u6587\u6863\u201d\u7684\u590d\u5236\u3001\u4f7f\u7528\u3001\u4fee\u6539\u53ca\u5206\u53d1\u53d7\u77e5\u8bc6\u5171\u4eab(Creative Commons)\u7f72\u540d\u2014\u76f8\u540c\u65b9\u5f0f\u5171\u4eab4.0\u56fd\u9645\u516c\u5171\u8bb8\u53ef\u534f\u8bae(\u4ee5\u4e0b\u7b80\u79f0\u201cCC BY-SA 4.0\u201d)\u7684\u7ea6\u675f\u3002\u4e3a\u4e86\u65b9\u4fbf\u7528\u6237\u7406\u89e3\uff0c\u60a8\u53ef\u4ee5\u901a\u8fc7\u8bbf\u95ee https://creativecommons.org/licenses/by-sa/4.0/ \u4e86\u89e3CC BY-SA 4.0\u7684\u6982\u8981 (\u4f46\u4e0d\u662f\u66ff\u4ee3)\u3002CC BY-SA 4.0\u7684\u5b8c\u6574\u534f\u8bae\u5185\u5bb9\u60a8\u53ef\u4ee5\u8bbf\u95ee\u5982\u4e0b\u7f51\u5740\u83b7\u53d6\uff1a https://creativecommons.org/licenses/by-sa/4.0/legalcode\u3002 \u4fee\u8ba2\u8bb0\u5f55 \u65e5\u671f 
\u4fee\u8ba2\u7248\u672c \u4fee\u6539\u63cf\u8ff0 \u4f5c\u8005 2022-03-21 1 \u521d\u7a3f \u674e\u4f73\u4f1f \u5173\u952e\u8bcd\uff1a OpenStack \u6458\u8981\uff1a \u5728 openEuler 22.03 LTS \u7248\u672c\u4e2d\u63d0\u4f9b OpenStack Train \u3001 OpenStack Wallaby \u7248\u672c\u7684 RPM \u5b89\u88c5\u5305\uff0c\u65b9\u4fbf\u7528\u6237\u5feb\u901f\u90e8\u7f72 OpenStack \u3002 \u7f29\u7565\u8bed\u6e05\u5355\uff1a \u7f29\u7565\u8bed \u82f1\u6587\u5168\u540d \u4e2d\u6587\u89e3\u91ca CLI Command Line Interface \u547d\u4ee4\u884c\u5de5\u5177 ECS Elastic Cloud Server \u5f39\u6027\u4e91\u670d\u52a1\u5668","title":"openEuler 22.03 LTS \u6d4b\u8bd5\u62a5\u544a"},{"location":"test/openEuler-22.03-LTS/#1","text":"\u5728 openEuler 22.03 LTS \u7248\u672c\u4e2d\u63d0\u4f9b OpenStack Train \u3001 OpenStack Wallaby \u7248\u672c\u7684 RPM \u5b89\u88c5\u5305\uff0c\u5305\u62ec\u4ee5\u4e0b\u9879\u76ee\u4ee5\u53ca\u6bcf\u4e2a\u9879\u76ee\u914d\u5957\u7684 CLI \u3002 Keystone Neutron Cinder Nova Placement Glance Horizon Aodh Ceilometer Cyborg Gnocchi Heat Swift Ironic Kolla Trove Tempest","title":"1 \u7279\u6027\u6982\u8ff0"},{"location":"test/openEuler-22.03-LTS/#2","text":"\u672c\u8282\u63cf\u8ff0\u88ab\u6d4b\u5bf9\u8c61\u7684\u7248\u672c\u4fe1\u606f\u548c\u6d4b\u8bd5\u7684\u65f6\u95f4\u53ca\u6d4b\u8bd5\u8f6e\u6b21\uff0c\u5305\u62ec\u4f9d\u8d56\u7684\u786c\u4ef6\u3002 \u7248\u672c\u540d\u79f0 \u6d4b\u8bd5\u8d77\u59cb\u65f6\u95f4 \u6d4b\u8bd5\u7ed3\u675f\u65f6\u95f4 openEuler 22.03 LTS RC1 (OpenStack Train\u7248\u672c\u5404\u7ec4\u4ef6\u7684\u5b89\u88c5\u90e8\u7f72\u6d4b\u8bd5) 2022.02.20 2022.02.27 openEuler 22.03 LTS RC1 (OpenStack Train\u7248\u672c\u57fa\u672c\u529f\u80fd\u6d4b\u8bd5\uff0c\u5305\u62ec\u865a\u62df\u673a\uff0c\u5377\uff0c\u7f51\u7edc\u76f8\u5173\u8d44\u6e90\u7684\u589e\u5220\u6539\u67e5) 2022.02.28 2022.03.03 openEuler 22.03 LTS RC2 (OpenStack Train\u7248\u672ctempest\u96c6\u6210\u6d4b\u8bd5) 2022.03.04 2022.03.07 openEuler 22.03 LTS RC3 (OpenStack Train\u7248\u672c\u95ee\u9898\u56de\u5f52\u6d4b\u8bd5) 2022.03.08 2022.03.09 openEuler 22.03 LTS RC3 (OpenStack Wallaby\u7248\u672c\u5404\u7ec4\u4ef6\u7684\u5b89\u88c5\u90e8\u7f72\u6d4b\u8bd5) 2022.03.10 2022.03.15 openEuler 22.03 LTS RC3 (OpenStack Wallaby\u57fa\u7248\u672c\u672c\u529f\u80fd\u6d4b\u8bd5\uff0c\u5305\u62ec\u865a\u62df\u673a\uff0c\u5377\uff0c\u7f51\u7edc\u76f8\u5173\u8d44\u6e90\u7684\u589e\u5220\u6539\u67e5) 2022.03.16 2022.03.19 openEuler 22.03 LTS RC4 (OpenStack Wallaby\u7248\u672ctempest\u96c6\u6210\u6d4b\u8bd5) 2022.03.20 2022.03.21 openEuler 22.03 LTS RC4 (OpenStack Wallaby\u7248\u672c\u95ee\u9898\u56de\u5f52\u6d4b\u8bd5) 2022.03.21 2022.03.22 \u63cf\u8ff0\u7279\u6027\u6d4b\u8bd5\u7684\u786c\u4ef6\u73af\u5883\u4fe1\u606f \u786c\u4ef6\u578b\u53f7 \u786c\u4ef6\u914d\u7f6e\u4fe1\u606f \u5907\u6ce8 \u534e\u4e3a\u4e91ECS Intel Cascade Lake 3.0GHz 8U16G \u534e\u4e3a\u4e91x86\u865a\u62df\u673a \u534e\u4e3a\u4e91ECS Huawei Kunpeng 920 2.6GHz 8U16G \u534e\u4e3a\u4e91arm64\u865a\u62df\u673a TaiShan 200-2280 Kunpeng 920,48 Core@2.6GHz*2; 256GB DDR4 RAM ARM\u67b6\u6784\u670d\u52a1\u5668","title":"2 \u7279\u6027\u6d4b\u8bd5\u4fe1\u606f"},{"location":"test/openEuler-22.03-LTS/#3","text":"","title":"3 \u6d4b\u8bd5\u7ed3\u8bba\u6982\u8ff0"},{"location":"test/openEuler-22.03-LTS/#31","text":"OpenStack Train \u7248\u672c\uff0c\u5171\u8ba1\u6267\u884c Tempest \u7528\u4f8b 1354 \u4e2a\uff0c\u4e3b\u8981\u8986\u76d6\u4e86 API \u6d4b\u8bd5\u548c\u529f\u80fd\u6d4b\u8bd5\uff0c\u901a\u8fc7 7*24 \u7684\u957f\u7a33\u6d4b\u8bd5\uff0c Skip \u7528\u4f8b 64 
\u4e2a\uff08\u5168\u662f OpenStack Train \u7248\u4e2d\u5df2\u5e9f\u5f03\u7684\u529f\u80fd\u6216\u63a5\u53e3\uff0c\u5982Keystone V1\u3001Cinder V1\u7b49\uff09\uff0c\u5931\u8d25\u7528\u4f8b 1 \u4e2a\uff08\u6d4b\u8bd5\u7528\u4f8b\u672c\u8eab\u95ee\u9898\uff09\uff0c\u5176\u4ed6 1289 \u4e2a\u7528\u4f8b\u5168\u90e8\u901a\u8fc7\uff0c\u53d1\u73b0\u95ee\u9898\u5df2\u89e3\u51b3\uff0c\u56de\u5f52\u901a\u8fc7\uff0c\u65e0\u9057\u7559\u98ce\u9669\uff0c\u6574\u4f53\u8d28\u91cf\u826f\u597d\u3002 OpenStack Wallaby \u7248\u672c\uff0c\u5171\u8ba1\u6267\u884c Tempest \u7528\u4f8b 1164 \u4e2a\uff0c\u4e3b\u8981\u8986\u76d6\u4e86API\u6d4b\u8bd5\u548c\u529f\u80fd\u6d4b\u8bd5\uff0c\u901a\u8fc7 7*24 \u7684\u957f\u7a33\u6d4b\u8bd5\uff0c Skip \u7528\u4f8b 70 \u4e2a\uff08\u5168\u662f OpenStack Wallaby \u7248\u4e2d\u5df2\u5e9f\u5f03\u7684\u529f\u80fd\u6216\u63a5\u53e3\uff0c\u5982KeystoneV1\u3001Cinder V1\u7b49\uff0c\u548c\u4e0d\u652f\u6301\u7684barbican\u9879\u76ee\uff09\uff0c\u5931\u8d25\u7528\u4f8b 6 \u4e2a\uff0c\u5176\u4ed6 1088 \u4e2a\u7528\u4f8b\u5168\u90e8\u901a\u8fc7\uff0c\u53d1\u73b0\u95ee\u9898\u5df2\u89e3\u51b3\uff0c\u56de\u5f52\u901a\u8fc7\uff0c\u65e0\u9057\u7559\u98ce\u9669\uff0c\u6574\u4f53\u8d28\u91cf\u826f\u597d\u3002 \u6d4b\u8bd5\u6d3b\u52a8 tempest\u96c6\u6210\u6d4b\u8bd5 \u63a5\u53e3\u6d4b\u8bd5 API\u5168\u8986\u76d6 \u529f\u80fd\u6d4b\u8bd5 Train\u7248\u672c\u8986\u76d6Tempest\u6240\u6709\u76f8\u5173\u6d4b\u8bd5\u7528\u4f8b1354\u4e2a\uff0c\u5176\u4e2dSkip 64\u4e2a\uff0cFail 1\u4e2a\uff0c\u5176\u4ed6\u5168\u901a\u8fc7\u3002 \u529f\u80fd\u6d4b\u8bd5 Wallaby\u7248\u672c\u8986\u76d6Tempest\u6240\u6709\u76f8\u5173\u6d4b\u8bd5\u7528\u4f8b1164\u4e2a\uff0c\u5176\u4e2dSkip 70\u4e2a\uff0cFail 6\u4e2a, \u5176\u4ed6\u5168\u901a\u8fc7\u3002 \u6d4b\u8bd5\u6d3b\u52a8 \u529f\u80fd\u6d4b\u8bd5 \u529f\u80fd\u6d4b\u8bd5 \u865a\u62df\u673a\uff08KVM\u3001Qemu)\u3001\u5b58\u50a8\uff08lvm\u3001NFS\u3001Ceph\u540e\u7aef\uff09\u3001\u7f51\u7edc\u8d44\u6e90\uff08linuxbridge\u3001openvswitch\uff09\u7ba1\u7406\u64cd\u4f5c\u6b63\u5e38","title":"3.1 \u6d4b\u8bd5\u6574\u4f53\u7ed3\u8bba"},{"location":"test/openEuler-22.03-LTS/#32","text":"\u672c\u6b21\u6d4b\u8bd5\u6ca1\u6709\u8986\u76d6 OpenStack Train \u3001 OpenStack Wallaby \u7248\u4e2d\u660e\u786e\u5e9f\u5f03\u7684\u529f\u80fd\u548c\u63a5\u53e3\uff0c\u56e0\u6b64\u4e0d\u80fd\u4fdd\u8bc1\u5df2\u5e9f\u5f03\u7684\u529f\u80fd\u548c\u63a5\u53e3\uff08\u524d\u6587\u63d0\u5230\u7684Skip\u7684\u7528\u4f8b\uff09\u5728 openEuler 22.03 LTS \u4e0a\u80fd\u6b63\u5e38\u4f7f\u7528\u3002","title":"3.2 \u7ea6\u675f\u8bf4\u660e"},{"location":"test/openEuler-22.03-LTS/#33","text":"","title":"3.3 \u9057\u7559\u95ee\u9898\u5206\u6790"},{"location":"test/openEuler-22.03-LTS/#331","text":"\u95ee\u9898\u5355\u53f7 \u95ee\u9898\u63cf\u8ff0 \u95ee\u9898\u7ea7\u522b \u95ee\u9898\u5f71\u54cd\u548c\u89c4\u907f\u63aa\u65bd \u5f53\u524d\u72b6\u6001 N/A N/A N/A N/A N/A","title":"3.3.1 \u9057\u7559\u95ee\u9898\u5f71\u54cd\u4ee5\u53ca\u89c4\u907f\u63aa\u65bd"},{"location":"test/openEuler-22.03-LTS/#332","text":"\u95ee\u9898\u603b\u6570 \u4e25\u91cd \u4e3b\u8981 \u6b21\u8981 \u4e0d\u91cd\u8981 \u6570\u76ee 10 2 6 2 0 \u767e\u5206\u6bd4 100 20 60 20 0","title":"3.3.2 \u95ee\u9898\u7edf\u8ba1"},{"location":"test/openEuler-22.03-LTS/#4","text":"","title":"4 
\u6d4b\u8bd5\u6267\u884c"},{"location":"test/openEuler-22.03-LTS/#41","text":"\u672c\u8282\u5185\u5bb9\u6839\u636e\u6d4b\u8bd5\u7528\u4f8b\u53ca\u5b9e\u9645\u6267\u884c\u60c5\u51b5\u8fdb\u884c\u7279\u6027\u6574\u4f53\u6d4b\u8bd5\u7684\u7edf\u8ba1\uff0c\u53ef\u6839\u636e\u7b2c\u4e8c\u7ae0\u7684\u6d4b\u8bd5\u8f6e\u6b21\u5206\u5f00\u8fdb\u884c\u7edf\u8ba1\u8bf4\u660e\u3002 \u7248\u672c\u540d\u79f0 \u6d4b\u8bd5\u7528\u4f8b\u6570 \u7528\u4f8b\u6267\u884c\u7ed3\u679c \u53d1\u73b0\u95ee\u9898\u5355\u6570 openEuler 22.03 LTS OpenStack Train 1354 \u901a\u8fc71289\u4e2a\uff0cskip 64\u4e2a\uff0cFail 1\u4e2a 7 openEuler 22.03 LTS OpenStack Wallaby 1164 \u901a\u8fc71088\u4e2a\uff0cskip 70\u4e2a\uff0cFail 6\u4e2a 3","title":"4.1 \u6d4b\u8bd5\u6267\u884c\u7edf\u8ba1\u6570\u636e"},{"location":"test/openEuler-22.03-LTS/#42","text":"\u6db5\u76d6\u4e3b\u8981\u7684\u6027\u80fd\u6d4b\u8bd5 \u8986\u76d6\u66f4\u591a\u7684driver/plugin\u6d4b\u8bd5","title":"4.2 \u540e\u7eed\u6d4b\u8bd5\u5efa\u8bae"},{"location":"test/openEuler-22.03-LTS/#5","text":"N/A","title":"5 \u9644\u4ef6"},{"location":"test/openEuler-22.09/","text":"openEuler 22.09 OpenStack Yoga + OpenSD + \u865a\u62df\u673a\u9ad8\u4f4e\u4f18\u5148\u7ea7\u7279\u6027\u6d4b\u8bd5\u62a5\u544a \u00b6 \u7248\u6743\u6240\u6709 \u00a9 2022 openEuler\u793e\u533a \u60a8\u5bf9\u201c\u672c\u6587\u6863\u201d\u7684\u590d\u5236\u3001\u4f7f\u7528\u3001\u4fee\u6539\u53ca\u5206\u53d1\u53d7\u77e5\u8bc6\u5171\u4eab(Creative Commons)\u7f72\u540d\u2014\u76f8\u540c\u65b9\u5f0f\u5171\u4eab4.0\u56fd\u9645\u516c\u5171\u8bb8\u53ef\u534f\u8bae(\u4ee5\u4e0b\u7b80\u79f0\u201cCC BY-SA 4.0\u201d)\u7684\u7ea6\u675f\u3002\u4e3a\u4e86\u65b9\u4fbf\u7528\u6237\u7406\u89e3\uff0c\u60a8\u53ef\u4ee5\u901a\u8fc7\u8bbf\u95ee https://creativecommons.org/licenses/by-sa/4.0/ \u4e86\u89e3CC BY-SA 4.0\u7684\u6982\u8981 (\u4f46\u4e0d\u662f\u66ff\u4ee3)\u3002CC BY-SA 4.0\u7684\u5b8c\u6574\u534f\u8bae\u5185\u5bb9\u60a8\u53ef\u4ee5\u8bbf\u95ee\u5982\u4e0b\u7f51\u5740\u83b7\u53d6\uff1a https://creativecommons.org/licenses/by-sa/4.0/legalcode\u3002 \u4fee\u8ba2\u8bb0\u5f55 \u65e5\u671f \u4fee\u8ba2\u7248\u672c \u4fee\u6539\u63cf\u8ff0 \u4f5c\u8005 2022-09-15 1 \u521d\u7a3f \u97e9\u5149\u5b87 2022-09-16 2 \u683c\u5f0f\u6574\u6539\uff0c\u65b0\u589eopensd\u6d4b\u8bd5\u62a5\u544a,\u65b0\u589e\u865a\u62df\u673a\u9ad8\u4f4e\u4f18\u5148\u7ea7\u7279\u6027\u6d4b\u8bd5\u62a5\u544a \u738b\u73ba\u6e90 \u5173\u952e\u8bcd\uff1a OpenStack\u3001opensd \u6458\u8981\uff1a \u5728 openEuler 22.09 \u7248\u672c\u4e2d\u63d0\u4f9b OpenStack Yoga \u7248\u672c\u7684 RPM \u5b89\u88c5\u5305\uff0c\u65b9\u4fbf\u7528\u6237\u5feb\u901f\u90e8\u7f72 OpenStack \u3002 opensd\u662f\u4e2d\u56fd\u8054\u901a\u5728openEuler\u5f00\u6e90\u7684OpenStack\u90e8\u7f72\u5de5\u5177\uff0c\u5728 openEuler 22.09 \u4e2d\u63d0\u4f9b\u5bf9 OpenStack Yoga \u7684\u652f\u6301\u3002 \u865a\u62df\u673a\u9ad8\u4f4e\u4f18\u5148\u7ea7 \u7279\u6027\u662fOpenStack SIG\u81ea\u7814\u7684OpenStack\u7279\u6027\uff0c\u8be5\u7279\u6027\u5141\u8bb8\u7528\u6237\u6307\u5b9a\u865a\u62df\u673a\u7684\u4f18\u5148\u7ea7\uff0c\u57fa\u4e8e\u4e0d\u540c\u7684\u4f18\u5148\u7ea7\uff0cOpenStack\u81ea\u52a8\u5206\u914d\u4e0d\u540c\u7684\u7ed1\u6838\u7b56\u7565\uff0c\u914d\u5408openEuler\u81ea\u7814\u7684 skylark QOS\u670d\u52a1\uff0c\u5b9e\u73b0\u9ad8\u4f4e\u4f18\u5148\u7ea7\u865a\u62df\u673a\u5bf9\u8d44\u6e90\u7684\u5408\u7406\u4f7f\u7528\u3002 \u7f29\u7565\u8bed\u6e05\u5355\uff1a \u7f29\u7565\u8bed \u82f1\u6587\u5168\u540d \u4e2d\u6587\u89e3\u91ca CLI Command Line Interface 
\u547d\u4ee4\u884c\u5de5\u5177 ECS Elastic Cloud Server \u5f39\u6027\u4e91\u670d\u52a1\u5668 1 \u7279\u6027\u6982\u8ff0 \u00b6 \u5728 openEuler 22.09 \u7248\u672c\u4e2d\u63d0\u4f9b OpenStack Yoga \u7248\u672c\u7684 RPM \u5b89\u88c5\u5305\uff0c\u5305\u62ec\u4ee5\u4e0b\u9879\u76ee\u4ee5\u53ca\u6bcf\u4e2a\u9879\u76ee\u914d\u5957\u7684 CLI \u3002 Keystone Neutron Cinder Nova Placement Glance Horizon Aodh Ceilometer Cyborg Gnocchi Heat Swift Ironic Kolla Trove Tempest \u5728 openEuler 22.09 \u7248\u672c\u4e2d\u63d0\u4f9b opensd \u7684\u5b89\u88c5\u5305\u4ee5\u53ca\u5bf9 openEuler \u548c OpenStack Yoga \u7684\u652f\u6301\u80fd\u529b\u3002 \u5728 openEuler 22.09 \u7248\u672c\u4e2d\u63d0\u4f9b openstack-plugin-priority-vm \u5b89\u88c5\u5305\uff0c\u652f\u6301\u865a\u62df\u673a\u9ad8\u4f4e\u4f18\u5148\u7ea7\u7279\u6027\u3002 2 \u7279\u6027\u6d4b\u8bd5\u4fe1\u606f \u00b6 \u672c\u8282\u63cf\u8ff0\u88ab\u6d4b\u5bf9\u8c61\u7684\u7248\u672c\u4fe1\u606f\u548c\u6d4b\u8bd5\u7684\u65f6\u95f4\u53ca\u6d4b\u8bd5\u8f6e\u6b21\uff0c\u5305\u62ec\u4f9d\u8d56\u7684\u786c\u4ef6\u3002 \u7248\u672c\u540d\u79f0 \u6d4b\u8bd5\u8d77\u59cb\u65f6\u95f4 \u6d4b\u8bd5\u7ed3\u675f\u65f6\u95f4 openEuler 22.09 RC1 (OpenStack Yoga\u7248\u672c\u5404\u7ec4\u4ef6\u7684\u5b89\u88c5\u90e8\u7f72\u6d4b\u8bd5\uff1bopensd\u5b89\u88c5\u80fd\u529b\u6d4b\u8bd5\uff1b\u865a\u62df\u673a\u9ad8\u4f4e\u4f18\u5148\u7ea7\u7279\u6027\u5b89\u88c5\u6d4b\u8bd5) 2022.08.10 2022.08.17 openEuler 22.09 RC2 (OpenStack Yoga\u7248\u672c\u57fa\u672c\u529f\u80fd\u6d4b\u8bd5\uff0c\u5305\u62ec\u865a\u62df\u673a\u3001\u5377 \u7f51\u7edc\u76f8\u5173\u8d44\u6e90\u7684\u589e\u5220\u6539\u67e5\uff1bopensd\u652f\u6301openEuler\u7684\u80fd\u529b\u6d4b\u8bd5\uff1b\u865a\u62df\u673a\u9ad8\u4f4e\u4f18\u5148\u7ea7\u7279\u6027\u529f\u80fd\u6d4b\u8bd5) 2022.08.18 2022.08.23 openEuler 22.09 RC3 (OpenStack Yoga\u7248\u672ctempest\u96c6\u6210\u6d4b\u8bd5\uff1bopensd\u652f\u6301OpenStack Yoga\u7684\u80fd\u529b\u6d4b\u8bd5\uff1b\u865a\u62df\u673a\u9ad8\u4f4e\u4f18\u5148\u7ea7\u7279\u6027\u95ee\u9898\u56de\u5f52\u6d4b\u8bd5) 2022.08.24 2022.09.07 openEuler 22.09 RC4 (OpenStack Yoga\u7248\u672c\u95ee\u9898\u56de\u5f52\u6d4b\u8bd5\uff1bopensd\u95ee\u9898\u56de\u5f52\u6d4b\u8bd5) 2022.09.08 2022.09.15 \u63cf\u8ff0\u7279\u6027\u6d4b\u8bd5\u7684\u786c\u4ef6\u73af\u5883\u4fe1\u606f \u786c\u4ef6\u578b\u53f7 \u786c\u4ef6\u914d\u7f6e\u4fe1\u606f \u5907\u6ce8 \u534e\u4e3a\u4e91ECS Intel Cascade Lake 3.0GHz 8U16G x86\u865a\u62df\u673a \u8054\u901a\u4e91ECS Intel(R) Xeon(R) Silver 4114 2.20GHz 8U16G X86\u865a\u62df\u673a \u534e\u4e3a 2288H V5 Intel Xeon Gold 6146 3.20GHz 48U192G X86\u7269\u7406\u673a \u8054\u901a\u4e91ECS Huawei Kunpeng 920 2.6GHz 4U8G arm64\u865a\u62df\u673a \u98de\u817eS2500 FT-S2500 2.1GHz 8U16G arm64\u865a\u62df\u673a \u98de\u817eS2500 FT-S2500,64 Core@2.1GHz*2; 512GB DDR4 RAM arm64\u7269\u7406\u673a 3 \u6d4b\u8bd5\u7ed3\u8bba\u6982\u8ff0 \u00b6 3.1 \u6d4b\u8bd5\u6574\u4f53\u7ed3\u8bba \u00b6 OpenStack Yoga \u7248\u672c\uff0c\u5171\u8ba1\u6267\u884c Tempest \u7528\u4f8b 1452 \u4e2a\uff0c\u4e3b\u8981\u8986\u76d6\u4e86 API \u6d4b\u8bd5\u548c\u529f\u80fd\u6d4b\u8bd5\uff0c\u901a\u8fc7 7*24 \u7684\u957f\u7a33\u6d4b\u8bd5\uff0c Skip \u7528\u4f8b 95 \u4e2a\uff08 OpenStack Yoga \u7248\u4e2d\u5df2\u5e9f\u5f03\u7684\u529f\u80fd\u6216\u63a5\u53e3\uff0c\u5982Keystone V1\u3001Cinder V1\u7b49\uff09\uff0c\u5931\u8d25\u7528\u4f8b 0 \u4e2a\uff08FLAT\u7f51\u7edc\u672a\u5b9e\u9645\u8054\u901a\u53ca\u5b58\u5728\u4e00\u4e9b\u8d85\u65f6\u95ee\u9898\uff09\uff0c\u5176\u4ed6 1357 
\u4e2a\u7528\u4f8b\u901a\u8fc7\uff0c\u53d1\u73b0\u95ee\u9898\u5df2\u89e3\u51b3\uff0c\u56de\u5f52\u901a\u8fc7\uff0c\u65e0\u9057\u7559\u98ce\u9669\uff0c\u6574\u4f53\u8d28\u91cf\u826f\u597d\u3002 opensd \u652f\u6301 Yoga \u7248\u672c mariadb\u3001rabbitmq\u3001memcached\u3001ceph_client\u3001keystone\u3001glance\u3001cinder\u3001placement\u3001nova\u3001neutron \u517110\u4e2a\u9879\u76ee\u7684\u90e8\u7f72\uff0c\u53d1\u73b0\u95ee\u9898\u5df2\u89e3\u51b3\uff0c\u56de\u5f52\u901a\u8fc7\uff0c\u65e0\u9057\u7559\u98ce\u9669\u3002 \u865a\u62df\u673a\u9ad8\u4f4e\u4f18\u5148\u7ea7\u7279\u6027 \uff0c\u53d1\u73b0\u95ee\u9898\u5df2\u89e3\u51b3\uff0c\u56de\u5f52\u901a\u8fc7\uff0c\u65e0\u9057\u7559\u98ce\u9669\u3002 \u6d4b\u8bd5\u6d3b\u52a8 tempest\u96c6\u6210\u6d4b\u8bd5 \u63a5\u53e3\u6d4b\u8bd5 API\u5168\u8986\u76d6 \u529f\u80fd\u6d4b\u8bd5 Yoga\u7248\u672c\u8986\u76d6Tempest\u6240\u6709\u76f8\u5173\u6d4b\u8bd5\u7528\u4f8b1452\u4e2a\uff0c\u5176\u4e2dSkip 95\u4e2a\uff0cFail 0\u4e2a, \u5176\u4ed6\u5168\u901a\u8fc7\u3002 \u6d4b\u8bd5\u6d3b\u52a8 \u529f\u80fd\u6d4b\u8bd5 \u529f\u80fd\u6d4b\u8bd5 \u865a\u62df\u673a\uff08KVM\u3001Qemu)\u3001\u5b58\u50a8\uff08lvm\u3001NFS\u3001Ceph\u540e\u7aef\uff09\u3001\u7f51\u7edc\u8d44\u6e90\uff08linuxbridge\u3001openvswitch\uff09\u7ba1\u7406\u64cd\u4f5c\u6b63\u5e38 3.2 \u7ea6\u675f\u8bf4\u660e \u00b6 \u672c\u6b21\u6d4b\u8bd5\u6ca1\u6709\u8986\u76d6 OpenStack Yoga \u7248\u4e2d\u660e\u786e\u5e9f\u5f03\u7684\u529f\u80fd\u548c\u63a5\u53e3\uff0c\u56e0\u6b64\u4e0d\u80fd\u4fdd\u8bc1\u5df2\u5e9f\u5f03\u7684\u529f\u80fd\u548c\u63a5\u53e3\uff08\u524d\u6587\u63d0\u5230\u7684Skip\u7684\u7528\u4f8b\uff09\u5728 openEuler 22.09 \u4e0a\u80fd\u6b63\u5e38\u4f7f\u7528\u3002 opensd \u53ea\u652f\u6301\u6d4b\u8bd5\u8303\u56f4\u5185\u7684\u670d\u52a1\u90e8\u7f72\uff0c\u5176\u4ed6\u670d\u52a1\u672a\u7ecf\u8fc7\u6d4b\u8bd5\uff0c\u4e0d\u4fdd\u8bc1\u8d28\u91cf\u3002 \u865a\u62df\u673a\u9ad8\u4f4e\u4f18\u5148\u7ea7\u7279\u6027 \u9700\u8981\u914d\u5408openEuelr 22.09 skylark\u670d\u52a1\u4f7f\u7528\u3002 3.3 \u9057\u7559\u95ee\u9898\u5206\u6790 \u00b6 3.3.1 \u9057\u7559\u95ee\u9898\u5f71\u54cd\u4ee5\u53ca\u89c4\u907f\u63aa\u65bd \u00b6 \u95ee\u9898\u5355\u53f7 \u95ee\u9898\u63cf\u8ff0 \u95ee\u9898\u7ea7\u522b \u95ee\u9898\u5f71\u54cd\u548c\u89c4\u907f\u63aa\u65bd \u5f53\u524d\u72b6\u6001 N/A N/A N/A N/A N/A 3.3.2 \u95ee\u9898\u7edf\u8ba1 \u00b6 \u95ee\u9898\u603b\u6570 \u4e25\u91cd \u4e3b\u8981 \u6b21\u8981 \u4e0d\u91cd\u8981 \u6570\u76ee 4 1 2 1 0 \u767e\u5206\u6bd4 100 25 59 25 0 4 \u6d4b\u8bd5\u6267\u884c \u00b6 4.1 \u6d4b\u8bd5\u6267\u884c\u7edf\u8ba1\u6570\u636e \u00b6 \u672c\u8282\u5185\u5bb9\u6839\u636e\u6d4b\u8bd5\u7528\u4f8b\u53ca\u5b9e\u9645\u6267\u884c\u60c5\u51b5\u8fdb\u884c\u7279\u6027\u6574\u4f53\u6d4b\u8bd5\u7684\u7edf\u8ba1\uff0c\u53ef\u6839\u636e\u7b2c\u4e8c\u7ae0\u7684\u6d4b\u8bd5\u8f6e\u6b21\u5206\u5f00\u8fdb\u884c\u7edf\u8ba1\u8bf4\u660e\u3002 \u7248\u672c\u540d\u79f0 \u6d4b\u8bd5\u7528\u4f8b\u6570 \u7528\u4f8b\u6267\u884c\u7ed3\u679c \u53d1\u73b0\u95ee\u9898\u5355\u6570 openEuler 22.09 OpenStack Yoga 1452 \u901a\u8fc71357\u4e2a\uff0cskip 95\u4e2a\uff0cFail 0\u4e2a 3 4.2 \u540e\u7eed\u6d4b\u8bd5\u5efa\u8bae \u00b6 \u6db5\u76d6\u4e3b\u8981\u7684\u6027\u80fd\u6d4b\u8bd5 \u8986\u76d6\u66f4\u591a\u7684driver/plugin\u6d4b\u8bd5 opensd\u6d4b\u8bd5\u9a8c\u8bc1\u66f4\u591aOpenStack\u670d\u52a1\u3002 5 \u9644\u4ef6 \u00b6 N/A","title":"openEuler-22.09"},{"location":"test/openEuler-22.09/#openeuler-2209-openstack-yoga-opensd","text":"\u7248\u6743\u6240\u6709 \u00a9 2022 openEuler\u793e\u533a 
\u60a8\u5bf9\u201c\u672c\u6587\u6863\u201d\u7684\u590d\u5236\u3001\u4f7f\u7528\u3001\u4fee\u6539\u53ca\u5206\u53d1\u53d7\u77e5\u8bc6\u5171\u4eab(Creative Commons)\u7f72\u540d\u2014\u76f8\u540c\u65b9\u5f0f\u5171\u4eab4.0\u56fd\u9645\u516c\u5171\u8bb8\u53ef\u534f\u8bae(\u4ee5\u4e0b\u7b80\u79f0\u201cCC BY-SA 4.0\u201d)\u7684\u7ea6\u675f\u3002\u4e3a\u4e86\u65b9\u4fbf\u7528\u6237\u7406\u89e3\uff0c\u60a8\u53ef\u4ee5\u901a\u8fc7\u8bbf\u95ee https://creativecommons.org/licenses/by-sa/4.0/ \u4e86\u89e3CC BY-SA 4.0\u7684\u6982\u8981 (\u4f46\u4e0d\u662f\u66ff\u4ee3)\u3002CC BY-SA 4.0\u7684\u5b8c\u6574\u534f\u8bae\u5185\u5bb9\u60a8\u53ef\u4ee5\u8bbf\u95ee\u5982\u4e0b\u7f51\u5740\u83b7\u53d6\uff1a https://creativecommons.org/licenses/by-sa/4.0/legalcode\u3002 \u4fee\u8ba2\u8bb0\u5f55 \u65e5\u671f \u4fee\u8ba2\u7248\u672c \u4fee\u6539\u63cf\u8ff0 \u4f5c\u8005 2022-09-15 1 \u521d\u7a3f \u97e9\u5149\u5b87 2022-09-16 2 \u683c\u5f0f\u6574\u6539\uff0c\u65b0\u589eopensd\u6d4b\u8bd5\u62a5\u544a,\u65b0\u589e\u865a\u62df\u673a\u9ad8\u4f4e\u4f18\u5148\u7ea7\u7279\u6027\u6d4b\u8bd5\u62a5\u544a \u738b\u73ba\u6e90 \u5173\u952e\u8bcd\uff1a OpenStack\u3001opensd \u6458\u8981\uff1a \u5728 openEuler 22.09 \u7248\u672c\u4e2d\u63d0\u4f9b OpenStack Yoga \u7248\u672c\u7684 RPM \u5b89\u88c5\u5305\uff0c\u65b9\u4fbf\u7528\u6237\u5feb\u901f\u90e8\u7f72 OpenStack \u3002 opensd\u662f\u4e2d\u56fd\u8054\u901a\u5728openEuler\u5f00\u6e90\u7684OpenStack\u90e8\u7f72\u5de5\u5177\uff0c\u5728 openEuler 22.09 \u4e2d\u63d0\u4f9b\u5bf9 OpenStack Yoga \u7684\u652f\u6301\u3002 \u865a\u62df\u673a\u9ad8\u4f4e\u4f18\u5148\u7ea7 \u7279\u6027\u662fOpenStack SIG\u81ea\u7814\u7684OpenStack\u7279\u6027\uff0c\u8be5\u7279\u6027\u5141\u8bb8\u7528\u6237\u6307\u5b9a\u865a\u62df\u673a\u7684\u4f18\u5148\u7ea7\uff0c\u57fa\u4e8e\u4e0d\u540c\u7684\u4f18\u5148\u7ea7\uff0cOpenStack\u81ea\u52a8\u5206\u914d\u4e0d\u540c\u7684\u7ed1\u6838\u7b56\u7565\uff0c\u914d\u5408openEuler\u81ea\u7814\u7684 skylark QOS\u670d\u52a1\uff0c\u5b9e\u73b0\u9ad8\u4f4e\u4f18\u5148\u7ea7\u865a\u62df\u673a\u5bf9\u8d44\u6e90\u7684\u5408\u7406\u4f7f\u7528\u3002 \u7f29\u7565\u8bed\u6e05\u5355\uff1a \u7f29\u7565\u8bed \u82f1\u6587\u5168\u540d \u4e2d\u6587\u89e3\u91ca CLI Command Line Interface \u547d\u4ee4\u884c\u5de5\u5177 ECS Elastic Cloud Server \u5f39\u6027\u4e91\u670d\u52a1\u5668","title":"openEuler 22.09 OpenStack Yoga + OpenSD + \u865a\u62df\u673a\u9ad8\u4f4e\u4f18\u5148\u7ea7\u7279\u6027\u6d4b\u8bd5\u62a5\u544a"},{"location":"test/openEuler-22.09/#1","text":"\u5728 openEuler 22.09 \u7248\u672c\u4e2d\u63d0\u4f9b OpenStack Yoga \u7248\u672c\u7684 RPM \u5b89\u88c5\u5305\uff0c\u5305\u62ec\u4ee5\u4e0b\u9879\u76ee\u4ee5\u53ca\u6bcf\u4e2a\u9879\u76ee\u914d\u5957\u7684 CLI \u3002 Keystone Neutron Cinder Nova Placement Glance Horizon Aodh Ceilometer Cyborg Gnocchi Heat Swift Ironic Kolla Trove Tempest \u5728 openEuler 22.09 \u7248\u672c\u4e2d\u63d0\u4f9b opensd \u7684\u5b89\u88c5\u5305\u4ee5\u53ca\u5bf9 openEuler \u548c OpenStack Yoga \u7684\u652f\u6301\u80fd\u529b\u3002 \u5728 openEuler 22.09 \u7248\u672c\u4e2d\u63d0\u4f9b openstack-plugin-priority-vm \u5b89\u88c5\u5305\uff0c\u652f\u6301\u865a\u62df\u673a\u9ad8\u4f4e\u4f18\u5148\u7ea7\u7279\u6027\u3002","title":"1 \u7279\u6027\u6982\u8ff0"},{"location":"test/openEuler-22.09/#2","text":"\u672c\u8282\u63cf\u8ff0\u88ab\u6d4b\u5bf9\u8c61\u7684\u7248\u672c\u4fe1\u606f\u548c\u6d4b\u8bd5\u7684\u65f6\u95f4\u53ca\u6d4b\u8bd5\u8f6e\u6b21\uff0c\u5305\u62ec\u4f9d\u8d56\u7684\u786c\u4ef6\u3002 \u7248\u672c\u540d\u79f0 \u6d4b\u8bd5\u8d77\u59cb\u65f6\u95f4 
\u6d4b\u8bd5\u7ed3\u675f\u65f6\u95f4 openEuler 22.09 RC1 (OpenStack Yoga\u7248\u672c\u5404\u7ec4\u4ef6\u7684\u5b89\u88c5\u90e8\u7f72\u6d4b\u8bd5\uff1bopensd\u5b89\u88c5\u80fd\u529b\u6d4b\u8bd5\uff1b\u865a\u62df\u673a\u9ad8\u4f4e\u4f18\u5148\u7ea7\u7279\u6027\u5b89\u88c5\u6d4b\u8bd5) 2022.08.10 2022.08.17 openEuler 22.09 RC2 (OpenStack Yoga\u7248\u672c\u57fa\u672c\u529f\u80fd\u6d4b\u8bd5\uff0c\u5305\u62ec\u865a\u62df\u673a\u3001\u5377 \u7f51\u7edc\u76f8\u5173\u8d44\u6e90\u7684\u589e\u5220\u6539\u67e5\uff1bopensd\u652f\u6301openEuler\u7684\u80fd\u529b\u6d4b\u8bd5\uff1b\u865a\u62df\u673a\u9ad8\u4f4e\u4f18\u5148\u7ea7\u7279\u6027\u529f\u80fd\u6d4b\u8bd5) 2022.08.18 2022.08.23 openEuler 22.09 RC3 (OpenStack Yoga\u7248\u672ctempest\u96c6\u6210\u6d4b\u8bd5\uff1bopensd\u652f\u6301OpenStack Yoga\u7684\u80fd\u529b\u6d4b\u8bd5\uff1b\u865a\u62df\u673a\u9ad8\u4f4e\u4f18\u5148\u7ea7\u7279\u6027\u95ee\u9898\u56de\u5f52\u6d4b\u8bd5) 2022.08.24 2022.09.07 openEuler 22.09 RC4 (OpenStack Yoga\u7248\u672c\u95ee\u9898\u56de\u5f52\u6d4b\u8bd5\uff1bopensd\u95ee\u9898\u56de\u5f52\u6d4b\u8bd5) 2022.09.08 2022.09.15 \u63cf\u8ff0\u7279\u6027\u6d4b\u8bd5\u7684\u786c\u4ef6\u73af\u5883\u4fe1\u606f \u786c\u4ef6\u578b\u53f7 \u786c\u4ef6\u914d\u7f6e\u4fe1\u606f \u5907\u6ce8 \u534e\u4e3a\u4e91ECS Intel Cascade Lake 3.0GHz 8U16G x86\u865a\u62df\u673a \u8054\u901a\u4e91ECS Intel(R) Xeon(R) Silver 4114 2.20GHz 8U16G X86\u865a\u62df\u673a \u534e\u4e3a 2288H V5 Intel Xeon Gold 6146 3.20GHz 48U192G X86\u7269\u7406\u673a \u8054\u901a\u4e91ECS Huawei Kunpeng 920 2.6GHz 4U8G arm64\u865a\u62df\u673a \u98de\u817eS2500 FT-S2500 2.1GHz 8U16G arm64\u865a\u62df\u673a \u98de\u817eS2500 FT-S2500,64 Core@2.1GHz*2; 512GB DDR4 RAM arm64\u7269\u7406\u673a","title":"2 \u7279\u6027\u6d4b\u8bd5\u4fe1\u606f"},{"location":"test/openEuler-22.09/#3","text":"","title":"3 \u6d4b\u8bd5\u7ed3\u8bba\u6982\u8ff0"},{"location":"test/openEuler-22.09/#31","text":"OpenStack Yoga \u7248\u672c\uff0c\u5171\u8ba1\u6267\u884c Tempest \u7528\u4f8b 1452 \u4e2a\uff0c\u4e3b\u8981\u8986\u76d6\u4e86 API \u6d4b\u8bd5\u548c\u529f\u80fd\u6d4b\u8bd5\uff0c\u901a\u8fc7 7*24 \u7684\u957f\u7a33\u6d4b\u8bd5\uff0c Skip \u7528\u4f8b 95 \u4e2a\uff08 OpenStack Yoga \u7248\u4e2d\u5df2\u5e9f\u5f03\u7684\u529f\u80fd\u6216\u63a5\u53e3\uff0c\u5982Keystone V1\u3001Cinder V1\u7b49\uff09\uff0c\u5931\u8d25\u7528\u4f8b 0 \u4e2a\uff08FLAT\u7f51\u7edc\u672a\u5b9e\u9645\u8054\u901a\u53ca\u5b58\u5728\u4e00\u4e9b\u8d85\u65f6\u95ee\u9898\uff09\uff0c\u5176\u4ed6 1357 \u4e2a\u7528\u4f8b\u901a\u8fc7\uff0c\u53d1\u73b0\u95ee\u9898\u5df2\u89e3\u51b3\uff0c\u56de\u5f52\u901a\u8fc7\uff0c\u65e0\u9057\u7559\u98ce\u9669\uff0c\u6574\u4f53\u8d28\u91cf\u826f\u597d\u3002 opensd \u652f\u6301 Yoga \u7248\u672c mariadb\u3001rabbitmq\u3001memcached\u3001ceph_client\u3001keystone\u3001glance\u3001cinder\u3001placement\u3001nova\u3001neutron \u517110\u4e2a\u9879\u76ee\u7684\u90e8\u7f72\uff0c\u53d1\u73b0\u95ee\u9898\u5df2\u89e3\u51b3\uff0c\u56de\u5f52\u901a\u8fc7\uff0c\u65e0\u9057\u7559\u98ce\u9669\u3002 \u865a\u62df\u673a\u9ad8\u4f4e\u4f18\u5148\u7ea7\u7279\u6027 \uff0c\u53d1\u73b0\u95ee\u9898\u5df2\u89e3\u51b3\uff0c\u56de\u5f52\u901a\u8fc7\uff0c\u65e0\u9057\u7559\u98ce\u9669\u3002 \u6d4b\u8bd5\u6d3b\u52a8 tempest\u96c6\u6210\u6d4b\u8bd5 \u63a5\u53e3\u6d4b\u8bd5 API\u5168\u8986\u76d6 \u529f\u80fd\u6d4b\u8bd5 Yoga\u7248\u672c\u8986\u76d6Tempest\u6240\u6709\u76f8\u5173\u6d4b\u8bd5\u7528\u4f8b1452\u4e2a\uff0c\u5176\u4e2dSkip 95\u4e2a\uff0cFail 0\u4e2a, \u5176\u4ed6\u5168\u901a\u8fc7\u3002 \u6d4b\u8bd5\u6d3b\u52a8 
\u529f\u80fd\u6d4b\u8bd5 \u529f\u80fd\u6d4b\u8bd5 \u865a\u62df\u673a\uff08KVM\u3001Qemu)\u3001\u5b58\u50a8\uff08lvm\u3001NFS\u3001Ceph\u540e\u7aef\uff09\u3001\u7f51\u7edc\u8d44\u6e90\uff08linuxbridge\u3001openvswitch\uff09\u7ba1\u7406\u64cd\u4f5c\u6b63\u5e38","title":"3.1 \u6d4b\u8bd5\u6574\u4f53\u7ed3\u8bba"},{"location":"test/openEuler-22.09/#32","text":"\u672c\u6b21\u6d4b\u8bd5\u6ca1\u6709\u8986\u76d6 OpenStack Yoga \u7248\u4e2d\u660e\u786e\u5e9f\u5f03\u7684\u529f\u80fd\u548c\u63a5\u53e3\uff0c\u56e0\u6b64\u4e0d\u80fd\u4fdd\u8bc1\u5df2\u5e9f\u5f03\u7684\u529f\u80fd\u548c\u63a5\u53e3\uff08\u524d\u6587\u63d0\u5230\u7684Skip\u7684\u7528\u4f8b\uff09\u5728 openEuler 22.09 \u4e0a\u80fd\u6b63\u5e38\u4f7f\u7528\u3002 opensd \u53ea\u652f\u6301\u6d4b\u8bd5\u8303\u56f4\u5185\u7684\u670d\u52a1\u90e8\u7f72\uff0c\u5176\u4ed6\u670d\u52a1\u672a\u7ecf\u8fc7\u6d4b\u8bd5\uff0c\u4e0d\u4fdd\u8bc1\u8d28\u91cf\u3002 \u865a\u62df\u673a\u9ad8\u4f4e\u4f18\u5148\u7ea7\u7279\u6027 \u9700\u8981\u914d\u5408openEuelr 22.09 skylark\u670d\u52a1\u4f7f\u7528\u3002","title":"3.2 \u7ea6\u675f\u8bf4\u660e"},{"location":"test/openEuler-22.09/#33","text":"","title":"3.3 \u9057\u7559\u95ee\u9898\u5206\u6790"},{"location":"test/openEuler-22.09/#331","text":"\u95ee\u9898\u5355\u53f7 \u95ee\u9898\u63cf\u8ff0 \u95ee\u9898\u7ea7\u522b \u95ee\u9898\u5f71\u54cd\u548c\u89c4\u907f\u63aa\u65bd \u5f53\u524d\u72b6\u6001 N/A N/A N/A N/A N/A","title":"3.3.1 \u9057\u7559\u95ee\u9898\u5f71\u54cd\u4ee5\u53ca\u89c4\u907f\u63aa\u65bd"},{"location":"test/openEuler-22.09/#332","text":"\u95ee\u9898\u603b\u6570 \u4e25\u91cd \u4e3b\u8981 \u6b21\u8981 \u4e0d\u91cd\u8981 \u6570\u76ee 4 1 2 1 0 \u767e\u5206\u6bd4 100 25 59 25 0","title":"3.3.2 \u95ee\u9898\u7edf\u8ba1"},{"location":"test/openEuler-22.09/#4","text":"","title":"4 \u6d4b\u8bd5\u6267\u884c"},{"location":"test/openEuler-22.09/#41","text":"\u672c\u8282\u5185\u5bb9\u6839\u636e\u6d4b\u8bd5\u7528\u4f8b\u53ca\u5b9e\u9645\u6267\u884c\u60c5\u51b5\u8fdb\u884c\u7279\u6027\u6574\u4f53\u6d4b\u8bd5\u7684\u7edf\u8ba1\uff0c\u53ef\u6839\u636e\u7b2c\u4e8c\u7ae0\u7684\u6d4b\u8bd5\u8f6e\u6b21\u5206\u5f00\u8fdb\u884c\u7edf\u8ba1\u8bf4\u660e\u3002 \u7248\u672c\u540d\u79f0 \u6d4b\u8bd5\u7528\u4f8b\u6570 \u7528\u4f8b\u6267\u884c\u7ed3\u679c \u53d1\u73b0\u95ee\u9898\u5355\u6570 openEuler 22.09 OpenStack Yoga 1452 \u901a\u8fc71357\u4e2a\uff0cskip 95\u4e2a\uff0cFail 0\u4e2a 3","title":"4.1 \u6d4b\u8bd5\u6267\u884c\u7edf\u8ba1\u6570\u636e"},{"location":"test/openEuler-22.09/#42","text":"\u6db5\u76d6\u4e3b\u8981\u7684\u6027\u80fd\u6d4b\u8bd5 \u8986\u76d6\u66f4\u591a\u7684driver/plugin\u6d4b\u8bd5 opensd\u6d4b\u8bd5\u9a8c\u8bc1\u66f4\u591aOpenStack\u670d\u52a1\u3002","title":"4.2 \u540e\u7eed\u6d4b\u8bd5\u5efa\u8bae"},{"location":"test/openEuler-22.09/#5","text":"N/A","title":"5 \u9644\u4ef6"},{"location":"test/openEuler-24.03-LTS/","text":"openEuler 24.03 LTS \u6d4b\u8bd5\u62a5\u544a \u00b6 \u7248\u6743\u6240\u6709 \u00a9 2024 openEuler\u793e\u533a \u60a8\u5bf9\u201c\u672c\u6587\u6863\u201d\u7684\u590d\u5236\u3001\u4f7f\u7528\u3001\u4fee\u6539\u53ca\u5206\u53d1\u53d7\u77e5\u8bc6\u5171\u4eab(Creative Commons)\u7f72\u540d\u2014\u76f8\u540c\u65b9\u5f0f\u5171\u4eab4.0\u56fd\u9645\u516c\u5171\u8bb8\u53ef\u534f\u8bae(\u4ee5\u4e0b\u7b80\u79f0\u201cCC BY-SA 4.0\u201d)\u7684\u7ea6\u675f\u3002\u4e3a\u4e86\u65b9\u4fbf\u7528\u6237\u7406\u89e3\uff0c\u60a8\u53ef\u4ee5\u901a\u8fc7\u8bbf\u95eehttps://creativecommons.org/licenses/by-sa/4.0/ \u4e86\u89e3CC BY-SA 4.0\u7684\u6982\u8981 (\u4f46\u4e0d\u662f\u66ff\u4ee3)\u3002CC BY-SA 
4.0\u7684\u5b8c\u6574\u534f\u8bae\u5185\u5bb9\u60a8\u53ef\u4ee5\u8bbf\u95ee\u5982\u4e0b\u7f51\u5740\u83b7\u53d6\uff1ahttps://creativecommons.org/licenses/by-sa/4.0/legalcode\u3002 \u4fee\u8ba2\u8bb0\u5f55 \u65e5\u671f \u4fee\u8ba2\u7248\u672c \u4fee\u6539\u63cf\u8ff0 \u4f5c\u8005 2024-06-03 1 \u521d\u7a3f \u90d1\u633a \u5173\u952e\u8bcd\uff1a OpenStack \u6458\u8981\uff1a \u5728 openEuler 24.03 LTS \u7248\u672c\u4e2d\u63d0\u4f9b OpenStack Wallaby \u3001 OpenStack Antelope \u7248\u672c\u7684 RPM \u5b89\u88c5\u5305\uff0c\u65b9\u4fbf\u7528\u6237\u5feb\u901f\u90e8\u7f72 OpenStack \u3002 \u7f29\u7565\u8bed\u6e05\u5355\uff1a \u7f29\u7565\u8bed \u82f1\u6587\u5168\u540d \u4e2d\u6587\u89e3\u91ca CLI Command Line Interface \u547d\u4ee4\u884c\u5de5\u5177 ECS Elastic Cloud Server \u5f39\u6027\u4e91\u670d\u52a1\u5668 1 \u7279\u6027\u6982\u8ff0 \u00b6 \u5728 openEuler 24.03 LTS \u7248\u672c\u4e2d\u63d0\u4f9b OpenStack Wallaby \u3001 OpenStack Antelope \u7248\u672c\u7684 RPM \u5b89\u88c5\u5305\uff0c\u5305\u62ec\u4ee5\u4e0b\u9879\u76ee\u4ee5\u53ca\u6bcf\u4e2a\u9879\u76ee\u914d\u5957\u7684 CLI \u3002 Keystone Neutron Cinder Nova Placement Glance Horizon Aodh Ceilometer Cyborg Gnocchi Heat Swift Ironic Kolla Trove Tempest Barbican Octavia designate Manila Masakari Mistral Senlin Zaqar 2 \u7279\u6027\u6d4b\u8bd5\u4fe1\u606f \u00b6 \u672c\u8282\u63cf\u8ff0\u88ab\u6d4b\u5bf9\u8c61\u7684\u7248\u672c\u4fe1\u606f\u548c\u6d4b\u8bd5\u7684\u65f6\u95f4\u53ca\u6d4b\u8bd5\u8f6e\u6b21\uff0c\u5305\u62ec\u4f9d\u8d56\u7684\u786c\u4ef6\u3002 \u7248\u672c\u540d\u79f0 \u6d4b\u8bd5\u8d77\u59cb\u65f6\u95f4 \u6d4b\u8bd5\u7ed3\u675f\u65f6\u95f4 openEuler 24.03 LTS RC1 (OpenStack Antelope\u7248\u672c\u5404\u7ec4\u4ef6\u7684\u5b89\u88c5\u90e8\u7f72\u6d4b\u8bd5) 2024.03.31 2024.04.03 openEuler 24.03 LTS RC1 (OpenStack Antelope\u7248\u672c\u57fa\u672c\u529f\u80fd\u6d4b\u8bd5\uff0c\u5305\u62ec\u865a\u62df\u673a\uff0c\u5377\uff0c\u7f51\u7edc\u76f8\u5173\u8d44\u6e90\u7684\u589e\u5220\u6539\u67e5) 2024.04.04 2024.04.09 openEuler 24.03 LTS RC2 (OpenStack Antelope\u7248\u672ctempest\u96c6\u6210\u6d4b\u8bd5) 2024.04.10 2024.04.19 openEuler 24.03 LTS RC3 (OpenStack Antelope\u7248\u672c\u95ee\u9898\u56de\u5f52\u6d4b\u8bd5) 2024.04.20 2024.05.09 openEuler 24.03 LTS RC4 (OpenStack Wallaby\u7248\u672c\u5404\u7ec4\u4ef6\u7684\u5b89\u88c5\u90e8\u7f72\u6d4b\u8bd5) 2024.05.10 2024.05.14 openEuler 24.03 LTS RC4 (OpenStack Wallaby\u57fa\u7248\u672c\u672c\u529f\u80fd\u6d4b\u8bd5\uff0c\u5305\u62ec\u865a\u62df\u673a\uff0c\u5377\uff0c\u7f51\u7edc\u76f8\u5173\u8d44\u6e90\u7684\u589e\u5220\u6539\u67e5) 2024.05.15 2024.05.21 openEuler 24.03 LTS RC5 (OpenStack Wallaby\u7248\u672ctempest\u96c6\u6210\u6d4b\u8bd5) 2024.05.22 2024.05.28 openEuler 24.03 LTS RC5 (OpenStack Wallaby\u7248\u672c\u95ee\u9898\u56de\u5f52\u6d4b\u8bd5) 2024.05.29 2024.06.03 \u63cf\u8ff0\u7279\u6027\u6d4b\u8bd5\u7684\u786c\u4ef6\u73af\u5883\u4fe1\u606f \u786c\u4ef6\u578b\u53f7 \u786c\u4ef6\u914d\u7f6e\u4fe1\u606f \u5907\u6ce8 \u534e\u4e3a\u4e91ECS Intel Cascade Lake 3.0GHz 8U16G \u534e\u4e3a\u4e91x86\u865a\u62df\u673a \u534e\u4e3a\u4e91ECS Huawei Kunpeng 920 2.6GHz 8U16G \u534e\u4e3a\u4e91arm64\u865a\u62df\u673a 3 \u6d4b\u8bd5\u7ed3\u8bba\u6982\u8ff0 \u00b6 3.1 \u6d4b\u8bd5\u6574\u4f53\u7ed3\u8bba \u00b6 OpenStack Antelope \u7248\u672c\uff0c\u5171\u8ba1\u6267\u884c Tempest \u7528\u4f8b 1483 \u4e2a\uff0c\u4e3b\u8981\u8986\u76d6\u4e86 API \u6d4b\u8bd5\u548c\u529f\u80fd\u6d4b\u8bd5\uff0c\u901a\u8fc7 7*24 \u7684\u957f\u7a33\u6d4b\u8bd5\uff0c Skip \u7528\u4f8b 100 \u4e2a\uff08\u5168\u662f 
\ No newline at end of file diff --git a/site/search/worker.js b/site/search/worker.js new file mode 100644 index 0000000000000000000000000000000000000000..8628dbce9442638095bf6ae885651b7dec0c91ea --- /dev/null +++ b/site/search/worker.js @@ -0,0 +1,133 @@ +var base_path = 'function' === typeof importScripts ? '.'
: '/search/'; +var allowSearch = false; +var index; +var documents = {}; +var lang = ['en']; +var data; + +function getScript(script, callback) { + console.log('Loading script: ' + script); + $.getScript(base_path + script).done(function () { + callback(); + }).fail(function (jqxhr, settings, exception) { + console.log('Error: ' + exception); + }); +} + +function getScriptsInOrder(scripts, callback) { + if (scripts.length === 0) { + callback(); + return; + } + getScript(scripts[0], function() { + getScriptsInOrder(scripts.slice(1), callback); + }); +} + +function loadScripts(urls, callback) { + if( 'function' === typeof importScripts ) { + importScripts.apply(null, urls); + callback(); + } else { + getScriptsInOrder(urls, callback); + } +} + +function onJSONLoaded () { + data = JSON.parse(this.responseText); + var scriptsToLoad = ['lunr.js']; + if (data.config && data.config.lang && data.config.lang.length) { + lang = data.config.lang; + } + if (lang.length > 1 || lang[0] !== "en") { + scriptsToLoad.push('lunr.stemmer.support.js'); + if (lang.length > 1) { + scriptsToLoad.push('lunr.multi.js'); + } + if (lang.includes("ja") || lang.includes("jp")) { + scriptsToLoad.push('tinyseg.js'); + } + for (var i=0; i < lang.length; i++) { + if (lang[i] != 'en') { + scriptsToLoad.push(['lunr', lang[i], 'js'].join('.')); + } + } + } + loadScripts(scriptsToLoad, onScriptsLoaded); +} + +function onScriptsLoaded () { + console.log('All search scripts loaded, building Lunr index...'); + if (data.config && data.config.separator && data.config.separator.length) { + lunr.tokenizer.separator = new RegExp(data.config.separator); + } + + if (data.index) { + index = lunr.Index.load(data.index); + data.docs.forEach(function (doc) { + documents[doc.location] = doc; + }); + console.log('Lunr pre-built index loaded, search ready'); + } else { + index = lunr(function () { + if (lang.length === 1 && lang[0] !== "en" && lunr[lang[0]]) { + this.use(lunr[lang[0]]); + } else if (lang.length > 1) { + this.use(lunr.multiLanguage.apply(null, lang)); // spread operator not supported in all browsers: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Spread_operator#Browser_compatibility + } + this.field('title'); + this.field('text'); + this.ref('location'); + + for (var i=0; i < data.docs.length; i++) { + var doc = data.docs[i]; + this.add(doc); + documents[doc.location] = doc; + } + }); + console.log('Lunr index built, search ready'); + } + allowSearch = true; + postMessage({config: data.config}); + postMessage({allowSearch: allowSearch}); +} + +function init () { + var oReq = new XMLHttpRequest(); + oReq.addEventListener("load", onJSONLoaded); + var index_path = base_path + '/search_index.json'; + if( 'function' === typeof importScripts ){ + index_path = 'search_index.json'; + } + oReq.open("GET", index_path); + oReq.send(); +} + +function search (query) { + if (!allowSearch) { + console.error('Assets for search still loading'); + return; + } + + var resultDocuments = []; + var results = index.search(query); + for (var i=0; i < results.length; i++){ + var result = results[i]; + doc = documents[result.ref]; + doc.summary = doc.text.substring(0, 200); + resultDocuments.push(doc); + } + return resultDocuments; +} + +if( 'function' === typeof importScripts ) { + onmessage = function (e) { + if (e.data.init) { + init(); + } else if (e.data.query) { + postMessage({ results: search(e.data.query) }); + } else { + console.error("Worker - Unrecognized message: " + e); + } + }; +} diff --git 
a/site/security/security-guide/index.html b/site/security/security-guide/index.html new file mode 100644 index 0000000000000000000000000000000000000000..07684f0d612a0553336518a863cccbc8c45514e9 --- /dev/null +++ b/site/security/security-guide/index.html @@ -0,0 +1,8246 @@ + + + + + + + + 安全指南 - OpenStack SIG Doc + + + + + + + + + + + + + +

OpenStack安全指南

+

本文翻译自上游安全指南

+
+ +
+

摘要

+

本书提供了有关保护OpenStack云的最佳实践和概念信息。

+

本指南最后一次更新是在Train发布期间,记录了OpenStack Train、Stein和Rocky版本。它可能不适用于EOL版本(例如Newton)。我们建议您在计划为您的OpenStack云实施安全措施时,自行阅读本文。本指南仅供参考。OpenStack安全团队基于OpenStack社区的自愿贡献。您可以在OFTC IRC上的#OpenStack-Security频道中直接联系安全社区,或者通过向OpenStack-Discussion邮件列表发送主题标题中带有[Security]前缀的邮件来联系。

+

内容

+
    +
  • +

    约定

    +
      +
    • 通知
    • +
    • 命令提示符
    • +
    +
  • +
  • +

    介绍

    +
      +
    • 致谢
    • +
    • 我们为什么以及如何写这本书
    • +
    • OpenStack简介
    • +
    • 安全边界和威胁
    • +
    • 选择支持软件
    • +
    +
  • +
  • +

    系统文档

    +
      +
    • 系统文档要求
    • +
    +
  • +
  • +

    管理

    +
      +
    • 持续的系统管理
    • +
    • 完整性生命周期
    • +
    • 管理界面
    • +
    +
  • +
  • +

    安全通信

    +
      +
    • TLS和SSL简介
    • +
    • TLS代理和HTTP服务
    • +
    • 安全参考架构
    • +
    +
  • +
  • +

    端点

    +
      +
    • API端点配置建议
    • +
    +
  • +
  • +

    身份

    +
      +
    • 认证
    • +
    • 身份验证方法
    • +
    • 授权
    • +
    • 政策
    • +
    • 令牌
    • +
    • +
    • 联合 Keystone
    • +
    • 清单
    • +
    +
  • +
  • +

    仪表板

    +
      +
    • 域名、仪表板升级和基本Web服务器配置
    • +
    • HTTPS、HSTS、XSS和SSRF
    • +
    • 前端缓存和会话后端
    • +
    • 静态媒体
    • +
    • 密码
    • +
    • 密钥
    • +
    • 网站数据
    • +
    • 跨域资源共享 (CORS)
    • +
    • 调试
    • +
    • 检查表
    • +
    +
  • +
  • +

    计算

    +
      +
    • 虚拟机管理程序选择
    • +
    • 强化虚拟化层
    • +
    • 强化计算部署
    • +
    • 漏洞意识
    • +
    • 如何选择虚拟控制台
    • +
    • 检查表
    • +
    +
  • +
  • +

    块存储

    +
      +
    • 卷擦除
    • +
    • 检查表
    • +
    +
  • +
  • +

    图像存储

    +
      +
    • 检查表
    • +
    +
  • +
  • +

    共享文件系统

    +
      +
    • 介绍
    • +
    • 网络和安全模型
    • +
    • 安全服务
    • +
    • 共享访问控制
    • +
    • 共享类型访问控制
    • +
    • 政策
    • +
    • 检查表
    • +
    +
  • +
  • +

    联网

    +
      +
    • 网络架构
    • +
    • 网络服务
    • +
    • 网络服务安全最佳做法
    • +
    • 保护 OpenStack 网络服务
    • +
    • 检查表
    • +
    +
  • +
  • +

    对象存储

    +
      +
    • 网络安全
    • +
    • 一般事务安全
    • +
    • 保护存储服务
    • +
    • 保护代理服务
    • +
    • 对象存储身份验证
    • +
    • 其他值得注意的项目
    • +
    +
  • +
  • +

    机密管理

    +
      +
    • 现有技术摘要
    • +
    • 相关 Openstack 项目
    • +
    • 使用案例
    • +
    • 密钥管理服务
    • +
    • 密钥管理接口
    • +
    • 常见问题解答
    • +
    • 检查表
    • +
    +
  • +
  • +

    消息队列

    +
      +
    • 邮件安全
    • +
    +
  • +
  • +

    数据处理

    +
      +
    • 数据处理简介
    • +
    • 部署
    • +
    • 配置和强化
    • +
    +
  • +
  • +

    数据库

    +
      +
    • 数据库后端注意事项
    • +
    • 数据库访问控制
    • +
    • 数据库传输安全性
    • +
    +
  • +
  • +

    租户数据隐私

    +
      +
    • 数据隐私问题
    • +
    • 数据加密
    • +
    • 密钥管理
    • +
    +
  • +
  • +

    实例安全管理

    +
      +
    • 实例的安全服务
    • +
    +
  • +
  • +

    监视和日志记录

    +
      +
    • 取证和事件响应
    • +
    +
  • +
  • +

    合规

    +
      +
    • 合规性概述
    • +
    • 了解审核流程
    • +
    • 合规活动
    • +
    • 认证和合规声明
    • +
    • 隐私
    • +
    +
  • +
  • +

    安全审查

    +
      +
    • 体系结构页面指南
    • +
    +
  • +
  • +

    安全检查表

    +
  • +
  • +

    附录

    +
      +
    • 社区支持
    • +
    • 词汇表
    • +
    +
  • +
+

约定

+

OpenStack 文档使用了几种排版约定。

+

注意事项

+

注意

+
带有附加信息的注释,用于解释文本的某一部分。
+

重要

+
在继续之前,您必须注意这一点。
+

提示

+
一个额外但有用的实用建议。
+

警示

+
防止用户犯错误的有用信息。
+

警告

+
有关数据丢失风险或安全问题的关键信息。
+

命令提示符

+
$ command
+

任何用户(包括root用户)都可以运行以$提示符为前缀的命令。

+
# command
+

root用户必须运行前缀为#提示符的命令。您还可以在这些命令前面加上sudo命令(如果可用),以运行这些命令。

+

介绍

+

《OpenStack 安全指南》是许多人经过五天协作的成果。本文档旨在提供部署安全 OpenStack 云的最佳实践指南。它旨在反映OpenStack社区的当前安全状态,并为由于复杂性或其他特定于环境的细节而无法列出特定安全控制措施的决策提供框架。

+
    +
  • 致谢
  • +
  • +

    我们为什么以及如何写这本书

    +
      +
    • 目标
    • +
    • 如何
    • +
    +
  • +
  • +

    OpenStack 简介

    +
      +
    • 云类型
    • +
    • OpenStack 服务概述
    • +
    +
  • +
  • +

    安全边界和威胁

    +
      +
    • 安全域
    • +
    • 桥接安全域
    • +
    • 威胁分类、参与者和攻击媒介
    • +
    +
  • +
  • +

    选择支持软件

    +
      +
    • 团队专长
    • +
    • 产品或项目成熟度
    • +
    • 通用标准
    • +
    • 硬件问题
    • +
    +
  • +
+

致谢

+

OpenStack 安全组要感谢以下组织的贡献,他们为本书的出版做出了贡献。这些组织是:

+

../_images/book-sprint-all-logos.png

+

我们为什么以及如何写这本书

+

随着 OpenStack 的普及和产品成熟,安全性已成为重中之重。OpenStack 安全组已经认识到需要一个全面而权威的安全指南。《OpenStack 安全指南》旨在概述提高 OpenStack 部署安全性的安全最佳实践、指南和建议。作者带来了他们在各种环境中部署和保护 OpenStack 的专业知识。

+

本指南是对《OpenStack 操作指南》的补充,可用于强化现有的 OpenStack 部署或评估 OpenStack 云提供商的安全控制。

+

目标

+
    +
  • 识别 OpenStack 中的安全域
  • +
  • 提供保护 OpenStack 部署的指导
  • +
  • 强调当今 OpenStack 中的安全问题和潜在的缓解措施
  • +
  • 讨论即将推出的安全功能
  • +
  • 为知识获取和传播提供社区驱动的设施
  • +
+

写作记录

+

与《OpenStack 操作指南》一样,我们遵循了本书的冲刺方法。书籍冲刺过程允许快速开发和制作大量书面作品。OpenStack 安全组的协调员重新邀请了 Adam Hyde 作为协调人。该项目在俄勒冈州波特兰市的OpenStack峰会上正式宣布。

+

由于该小组的一些关键成员离得很近,该团队聚集在马里兰州安纳波利斯。这是公共部门情报界成员、硅谷初创公司和一些大型知名科技公司之间的非凡合作。该书的冲刺在2013年6月的最后一周进行,第一版在五天内完成。

+

该团队包括:

+
    +
  • +

    Bryan D. Payne,星云 + Bryan D. Payne 博士是 Nebula 的安全研究总监,也是 OpenStack 安全组织 (OSSG) 的联合创始人。在加入 Nebula 之前,他曾在桑迪亚国家实验室、国家安全局、BAE Systems 和 IBM 研究院工作。他毕业于佐治亚理工学院计算机学院,获得计算机科学博士学位,专攻系统安全。Bryan 是《OpenStack 安全指南》的编辑和负责人,负责该指南在编写后的两年中持续增长。

    +
  • +
  • +

    Robert Clark,惠普

    +
  • +
+

Robert Clark 是惠普云服务的首席安全架构师,也是 OpenStack 安全组织 (OSSG) 的联合创始人。在被惠普招募之前,他曾在英国情报界工作。Robert 在威胁建模、安全架构和虚拟化技术方面拥有深厚的背景。Robert 拥有威尔士大学的软件工程硕士学位。

+
    +
  • Keith Basil ,红帽
  • +
+

Keith Basil 是红帽 OpenStack 的首席产品经理,专注于红帽的 OpenStack 产品管理、开发和战略。在美国公共部门,Basil 带来了为联邦民用机构和承包商设计授权、安全、高性能云架构的经验。

+
    +
  • Cody Bunch,拉克空间
  • +
+

Cody Bunch 是 Rackspace 的私有云架构师。Cody 与人合著了《The OpenStack Cookbook》的更新以及有关 VMware 自动化的书籍。

+
    +
  • Malini Bhandaru,英特尔
  • +
+

Malini Bhandaru 是英特尔的一名安全架构师。她拥有多元化的背景,曾在英特尔从事平台功能和性能方面的工作,在 Nuance 从事语音产品方面的工作,在 ComBrio 从事远程监控和管理工作,在 Verizon 从事网络商务工作。她拥有马萨诸塞大学阿默斯特分校的人工智能博士学位。

+
    +
  • Gregg Tally,约翰霍普金斯大学应用物理实验室
  • +
+

Gregg Tally 是 JHU/APL 网络系统部门非对称运营部的总工程师。他主要从事系统安全工程方面的工作。此前,他曾在斯巴达、迈克菲和可信信息系统公司工作,参与网络安全研究项目。

+
    +
  • Eric Lopez, 威睿
  • +
+

Eric Lopez 是 VMware 网络和安全业务部门的高级解决方案架构师,他帮助客户实施 OpenStack 和 VMware NSX(以前称为 Nicira 的网络虚拟化平台)。在加入 VMware(通过公司收购 Nicira)之前,他曾在 Q1 Labs、Symantec、Vontu 和 Brightmail 工作。他拥有加州大学伯克利分校的电气工程/计算机科学和核工程学士学位和旧金山大学的工商管理硕士学位。

+
    +
  • Shawn Wells,红帽
  • +
+

Shawn Wells 是红帽创新项目总监,专注于改进美国政府内部采用、促进和管理开源技术的流程。此外,Shawn 还是 SCAP 安全指南项目的上游维护者,该项目与美国军方、NSA 和 DISA 一起制定虚拟化和操作系统强化策略。Shawn曾是NSA的平民,利用大型分布式计算基础设施开发了SIGINT收集系统。

+
    +
  • Ben de Bont,惠普
  • +
+

Ben de Bont 是惠普云服务的首席战略官。在担任现职之前,Ben 领导 MySpace 的信息安全小组和 MSN Security 的事件响应团队。Ben 拥有昆士兰科技大学的计算机科学硕士学位。

+
    +
  • Nathanael Burton,国家安全局
  • +
+

纳塔内尔·伯顿(Nathanael Burton)是美国国家安全局(National Security Agency)的计算机科学家。他在该机构工作了 10 多年,从事分布式系统、大规模托管、开源计划、操作系统、安全、存储和虚拟化技术方面的工作。他拥有弗吉尼亚理工大学的计算机科学学士学位。

+
    +
  • Vibha Fauver
  • +
+

Vibha Fauver,GWEB,CISSP,PMP,在信息技术领域拥有超过15年的经验。她的专业领域包括软件工程、项目管理和信息安全。她拥有计算机与信息科学学士学位和工程管理硕士学位,专业和系统工程证书。

+
    +
  • Eric Windisch,云缩放
  • +
+

Eric Windisch 是 Cloudscaling 的首席工程师,他为 OpenStack 贡献了两年多。埃里克(Eric)在网络托管行业拥有十多年的经验,一直在敌对环境的战壕中,建立了租户隔离和基础设施安全性。自 2007 年以来,他一直在构建云计算基础设施和自动化。

+
    +
  • Andrew Hay,云道
  • +
+

Andrew Hay 是 CloudPassage, Inc. 的应用安全研究总监,负责领导该公司及其专为动态公有云、私有云和混合云托管环境构建的服务器安全产品的安全研究工作。

+
    +
  • Adam Hyde
  • +
+

亚当促成了这个 Book Sprint。他还创立了 Book Sprint 方法论,并且是最有经验的 Book Sprint 促进者。Adam 创立了 FLOSS Manuals,这是一个由 3,000 人组成的社区,致力于开发关于自由软件的自由手册。他还是 Booktype 的创始人和项目经理,Booktype 是一个用于在线和印刷书籍编写、编辑和出版的开源项目。

+

在冲刺期间,我们还得到了 Anne Gentle、Warren Wang、Paul McMillan、Brian Schott 和 Lorin Hochstein 的帮助。

+

这本书是在为期 5 天的图书冲刺中制作的。图书冲刺是一个高度协作、促进的过程,它将一个小组聚集在一起,在 3-5 天内制作一本书。这是一个由亚当·海德(Adam Hyde)创立和发展的特定方法的有力促进过程。有关更多信息,请访问BookSprints的Book Sprint网页。

+

如何为本书做贡献

+

本书的最初工作是在一间空调过高的房间里进行的,该房间是整个文档冲刺期间的小组办公室。

+

要了解有关如何为 OpenStack 文档做出贡献的更多信息,请参阅 OpenStack 文档贡献者指南。

+

OpenStack 简介

+

本指南提供了对 OpenStack 部署的安全见解。目标受众是云架构师、部署人员和管理员。此外,云用户会发现该指南在提供商选择方面既有教育意义又有帮助,而审计人员会发现它作为参考文档很有用,可以支持他们的合规性认证工作。本指南也推荐给任何对云安全感兴趣的人。

+

每个 OpenStack 部署都包含各种各样的技术,包括 Linux 发行版、数据库系统、消息队列、OpenStack 组件本身、访问控制策略、日志记录服务、安全监控工具等等。所涉及的安全问题同样多种多样也就不足为奇了,对这些问题的深入分析需要一些指南。我们努力寻找平衡点,提供足够的背景信息来理解OpenStack安全问题及其处理,并为进一步的信息提供外部参考。该指南可以从头到尾阅读,也可以像参考一样使用。

+

我们简要介绍了云的种类(私有云、公有云和混合云),然后在本章的其余部分概述了 OpenStack 组件及其相关的安全问题。

+

在整本书中,我们提到了几种类型的OpenStack云用户:管理员、操作员和用户。我们使用这些术语来标识每个角色具有的安全访问级别,尽管实际上,我们知道不同的角色通常由同一个人担任。

+

云类型

+

OpenStack是采用云技术的关键推动因素,并具有几个常见的部署用例。这些模型通常称为公共模型、专用模型和混合模型。以下各节使用美国国家标准与技术研究院 (NIST) 对云的定义来介绍这些适用于 OpenStack 的不同类型的云。

+

公有云

+

根据NIST的说法,公共云是基础设施向公众开放供消费的云。OpenStack公有云通常由服务提供商运行,可供个人、公司或任何付费客户使用。除了多种实例类型外,公有云提供商还可能公开一整套功能,例如软件定义网络或块存储。

+

就其性质而言,公有云面临更高的风险。作为公有云的使用者,您应该验证所选提供商是否具有必要的认证、证明和其他法规注意事项。作为公有云提供商,根据您的目标客户,您可能需要遵守一项或多项法规。此外,即使不需要满足法规要求,提供商也应确保租户隔离,并保护管理基础结构免受外部攻击。

+

私有云

+

在频谱的另一端是私有云。正如NIST所定义的那样,私有云被配置为由多个消费者(如业务部门)组成的单个组织独占使用。云可能由组织、第三方或它们的某种组合拥有、管理和运营,并且可能存在于本地或外部。私有云用例多种多样,因此,它们各自的安全问题各不相同。

+

社区云

+

NIST 将社区云定义为其基础结构仅供具有共同关注点(例如,任务、安全要求、策略或合规性注意事项)的组织的特定消费者社区使用。云可能由社区中的一个或多个组织、第三方或它们的某种组合拥有、管理和运营,并且它可能存在于本地或外部。

+

混合云

+

NIST将混合云定义为两个或多个不同的云基础设施(如私有云、社区云或公共云)的组合,这些云基础设施仍然是唯一的实体,但通过标准化或专有技术绑定在一起,从而实现数据和应用程序的可移植性,例如用于云之间负载平衡的云爆发。例如,在线零售商可能会在允许弹性配置的公有云上展示其广告和目录。这将使他们能够以灵活、具有成本效益的方式处理季节性负载。一旦客户开始处理他们的订单,他们就会被转移到一个更安全的私有云中,该私有云符合PCI标准。

+

在本文档中,我们以类似的方式对待社区和混合云,仅从安全角度明确处理公有云和私有云的极端情况。安全措施取决于部署在私有公共连续体上的位置。

+

OpenStack 服务概述

+

OpenStack 采用模块化架构,提供一组核心服务,以促进可扩展性和弹性作为核心设计原则。本章简要回顾了 OpenStack 组件、它们的用例和安全注意事项。

+

../_images/marketecture-diagram.png

+

计算

+

OpenStack Compute 服务 (nova) 提供的服务支持大规模管理虚拟机实例、托管多层应用程序的实例、开发或测试环境、处理 Hadoop 集群的“大数据”或高性能计算。

+

计算服务通过与支持的虚拟机监控程序交互的抽象层来促进这种管理(我们稍后会更详细地讨论这个问题)。

+

在本指南的后面部分,我们将重点介绍虚拟化堆栈,因为它与虚拟机管理程序相关。

+

有关功能支持的当前状态的信息,请参阅 OpenStack Hypervisor 支持矩阵。

+

计算安全性对于OpenStack部署至关重要。强化技术应包括对强实例隔离的支持、计算子组件之间的安全通信以及面向公众的 API 终结点的复原能力。

+

对象存储

+

OpenStack 对象存储服务 (swift) 支持在云中存储和检索任意数据。对象存储服务提供本机 API 和亚马逊云科技 S3 兼容 API。该服务通过数据复制提供高度的复原能力,并且可以处理 PB 级的数据。

+

请务必了解对象存储不同于传统的文件系统存储。对象存储最适合用于静态数据,例如媒体文件(MP3、图像或视频)、虚拟机映像和备份文件。

+

对象安全应侧重于传输中和静态数据的访问控制和加密。其他问题可能与系统滥用、非法或恶意内容存储以及交叉身份验证攻击媒介有关。

+

块存储

+

OpenStack 块存储服务 (cinder) 为计算实例提供持久性块存储。块存储服务负责管理块设备的生命周期,从创建卷和附加到实例,再到释放。

+

块存储的安全注意事项与对象存储的安全注意事项类似。

+

共享文件系统

+

共享文件系统服务（manila）提供了一组用于管理多租户云环境中的共享文件系统的服务，类似于 OpenStack 通过 OpenStack 块存储服务项目提供基于块的存储管理的方式。使用共享文件系统服务，您可以创建远程文件系统，将文件系统挂载到实例上，然后从实例读取和写入文件系统中的数据。

+

网络

+

OpenStack 网络服务(neutron,以前称为量子)为云用户(租户)提供各种网络服务,例如 IP 地址管理、DNS、DHCP、负载均衡和安全组(网络访问规则,如防火墙策略)。此服务为软件定义网络 (SDN) 提供了一个框架,允许与各种网络解决方案进行可插拔集成。

+

OpenStack Networking 允许云租户管理其访客网络配置。网络服务的安全问题包括网络流量隔离、可用性、完整性和机密性。

+

仪表板

+

OpenStack 仪表板 (horizon) 为云管理员和云租户提供了一个基于 Web 的界面。使用此界面,管理员和租户可以预配、管理和监视云资源。仪表板通常以面向公众的方式部署,具有公共 Web 门户的所有常见安全问题。

+

身份鉴别服务

+

OpenStack Identity 服务 (keystone) 是一项共享服务,可在整个云基础架构中提供身份验证和授权服务。Identity 服务具有对多种身份验证形式的可插入支持。

+

Identity 服务的安全问题包括对身份验证的信任、授权令牌的管理以及安全通信。

+

镜像服务

+

OpenStack 镜像服务(glance)提供磁盘镜像管理服务,包括镜像发现、注册和根据需要向计算服务交付服务。

+

需要受信任的进程来管理磁盘映像的生命周期,以及前面提到的与数据安全有关的所有问题。

+

数据处理服务

+

数据处理服务 (sahara) 提供了一个平台,用于配置、管理和使用运行常用处理框架的群集。

+

数据处理的安全注意事项应侧重于数据隐私和与预置集群的安全通信。

+

其他配套技术

+

消息传递用于多个 OpenStack 服务之间的内部通信。默认情况下,OpenStack 使用基于 AMQP 的消息队列。与大多数 OpenStack 服务一样,AMQP 支持可插拔组件。现在,实现后端可以是 RabbitMQ、Qpid 或 ZeroMQ。

+

由于大多数管理命令都流经消息队列系统,因此消息队列安全性是任何 OpenStack 部署的主要安全问题,本指南稍后将对此进行详细讨论。

+

有几个组件使用数据库,尽管它没有显式调用。保护数据库访问是另一个安全问题,因此在本指南后面将更详细地讨论。

+

安全边界和威胁

+

云可以抽象为逻辑组件的集合,因为它们的功能、用户和共享的安全问题,我们称之为安全域。威胁参与者和向量根据其动机和对资源的访问进行分类。我们的目标是根据您的风险/漏洞保护目标,让您了解每个域的安全问题。

+

安全域

+

安全域包括用户、应用程序、服务器或网络,它们在系统中具有共同的信任要求和期望。通常,它们具有相同的身份验证和授权 (AuthN/Z) 要求和用户。

+

尽管您可能希望进一步细分这些域(我们稍后将讨论在哪些方面可能合适),但我们通常指的是四个不同的安全域,它们构成了安全部署任何 OpenStack 云所需的最低限度。这些安全域包括:

+
    +
  1. 公共域
  2. +
  3. 访客域
  4. +
  5. 管理域
  6. +
  7. 数据域
  8. +
+

我们之所以选择这些安全域,是因为它们可以独立映射,也可以组合起来,以表示给定 OpenStack 部署中大多数可能的信任区域。例如,某些部署拓扑可能由一个物理网络上的来宾域和数据域的组合组成,而其他拓扑则将这些域分开。在每种情况下,云操作员都应注意适当的安全问题。安全域应针对特定的 OpenStack 部署拓扑进行映射。域及其信任要求取决于云实例是公有云实例、私有云实例还是混合云实例。

+

../_images/untrusted_trusted.png

+

公共

+

公共安全域是云基础架构中完全不受信任的区域。它可以指整个互联网,也可以简单地指您无权访问的网络。任何具有机密性或完整性要求传输此域的数据都应使用补偿控制进行保护。

+

此域应始终被视为不受信任。

+

访客

+

访客安全域通常用于计算实例到实例的流量,它处理由云上的实例生成的计算数据,但不处理支持云操作的服务,例如 API 调用。

+

如果公有云和私有云提供商对实例使用没有严格控制,也不允许对虚拟机进行不受限制的 Internet 访问,则应将此域视为不受信任的域。私有云提供商可能希望将此网络视为内部网络,并且只有在实施适当的控制以断言实例和所有关联租户都是可信的时。

+

管理

+

管理安全域是服务交互的地方。有时称为“控制平面”,此域中的网络传输机密数据,例如配置参数、用户名和密码。命令和控制流量通常驻留在此域中,这需要强大的完整性要求。对此域的访问应受到高度限制和监视。同时,此域仍应采用本指南中描述的所有安全最佳做法。

+

在大多数部署中,此域被视为受信任的域。但是,在考虑 OpenStack 部署时,有许多系统将此域与其他域桥接起来,这可能会降低您可以对该域的信任级别。有关更多信息,请参阅桥接安全域。

+

数据

+

数据安全域主要关注与OpenStack中的存储服务有关的信息。通过该网络传输的大多数数据都需要高度的完整性和机密性。在某些情况下,根据部署类型,可能还会有很强的可用性要求。

+

此网络的信任级别很大程度上取决于部署决策,因此我们不会为其分配任何默认的信任级别。

+

桥接安全域

+

网桥是存在于多个安全域中的组件。必须仔细配置桥接具有不同信任级别或身份验证要求的安全域的任何组件。这些网桥通常是网络架构中的薄弱环节。桥接应始终配置为满足它所桥接的任何域的最高信任级别的安全要求。在许多情况下,由于攻击的可能性,桥接器的安全控制应该是主要关注点。

+

../_images/bridging_security_domains_1.png

+

上图显示了桥接数据和管理域的计算节点;因此,应将计算节点配置为满足管理域的安全要求。同样,此图中的 API 端点正在桥接不受信任的公共域和管理域,应将其配置为防止从公共域传播到管理域的攻击。

+

../_images/bridging_domains_clouduser.png

+

在某些情况下,部署人员可能希望考虑将网桥保护到比它所在的任何域更高的标准。鉴于上述 API 端点示例,攻击者可能会从公共域以 API 端点为目标,利用它来入侵或访问管理域。

+

OpenStack的设计使得安全域的分离是很困难的。由于核心服务通常至少桥接两个域,因此在对它们应用安全控制时必须特别考虑。

+

威胁分类、参与者和攻击向量

+

大多数类型的云部署(公有云或私有云)都会受到某种形式的攻击。在本章中,我们将对攻击者进行分类,并总结每个安全域中的潜在攻击类型。

+

威胁参与者

+

威胁参与者是一种抽象的方式,用于指代您可能尝试防御的一类对手。参与者的能力越强,成功缓解和预防攻击所需的安全控制就越昂贵。安全性是成本、可用性和防御之间的权衡。在某些情况下,不可能针对我们在此处描述的所有威胁参与者保护云部署。那些部署OpenStack云的人将不得不决定其部署/使用的平衡点在哪里。

+
情报机构
+

本指南认为是最有能力的对手。情报部门和其他国家行为者可以为目标带来巨大的资源。他们拥有超越任何其他参与者的能力。如果没有极其严格的控制措施,无论是人力还是技术,都很难防御这些行为者。

+
严重有组织犯罪
+

能力强且受经济驱动的攻击者群体。能够资助内部漏洞开发和目标研究。近年来,俄罗斯商业网络(Russian Business Network)等组织的崛起,一个庞大的网络犯罪企业,已经证明了网络攻击如何成为一种商品。工业间谍活动属于严重的有组织犯罪集团。

+
高能力的团队
+

这是指“黑客行动主义者”类型的组织,他们通常没有商业资助,但可能对服务提供商和云运营商构成严重威胁。

+
有动机的个人
+

这些攻击者单独行动,以多种形式出现,例如流氓或恶意员工、心怀不满的客户或小规模的工业间谍活动。

+
脚本攻击者
+

自动漏洞扫描/利用。非针对性攻击。通常,只有这些行为者之一的滋扰、妥协才会对组织的声誉构成重大风险。

+

../_images/threat_actors.png

+

公有云和私有云注意事项

+

私有云通常由企业或机构在其网络内部和防火墙后面部署。企业将对允许哪些数据退出其网络有严格的政策,甚至可能为特定目的使用不同的云。私有云的用户通常是拥有云的组织的员工,并且能够对其行为负责。员工通常会在访问云之前参加培训课程,并且可能会参加定期安排的安全意识培训。相比之下,公有云不能对其用户、云用例或用户动机做出任何断言。对于公有云提供商来说,这会立即将客户机安全域推入完全不受信任的状态。

+

公有云攻击面的一个显着区别是,它们必须提供对其服务的互联网访问。实例连接、通过 Internet 访问文件以及与云控制结构(如 API 端点和仪表板)交互的能力是公有云的必备条件。

+

公有云和私有云用户的隐私问题通常是截然相反的。在私有云中生成和存储的数据通常由云运营商拥有,他们能够部署数据丢失防护 (DLP) 保护、文件检查、深度数据包检查和规范性防火墙等技术。相比之下,隐私是采用公有云基础设施的主要障碍之一,因为前面提到的许多控制措施并不存在。

+

出站攻击和声誉风险

+

应仔细考虑云部署中潜在的出站滥用。无论是公有云还是私有云,云往往都有大量可用资源。通过黑客攻击或授权访问在云中建立存在点的攻击者(例如流氓员工)可以使这些资源对整个互联网产生影响。具有计算服务的云是理想的 DDoS 和暴力引擎。对于公有云来说,这个问题更为紧迫,因为它们的用户在很大程度上是不负责任的,并且可以迅速启动大量一次性实例进行出站攻击。如果一家公司因托管恶意软件或对其他网络发起攻击而闻名,可能会对公司的声誉造成重大损害。预防方法包括出口安全组、出站流量检查、客户教育和意识,以及欺诈和滥用缓解策略。

+

攻击类型

+

该图显示了上一节中描述的参与者可能预期的典型攻击类型。请注意,此图不排除有不可预期的攻击类型。

+

../_images/high-capability.png

+

攻击类型

+

每种攻击形式的规范性防御超出了本文档的范围。上图可以帮助您就应防范哪些类型的威胁和威胁参与者做出明智的决定。对于商业公有云部署,这可能包括预防严重犯罪。对于那些为政府使用部署私有云的人来说,应该建立更严格的保护机制,包括精心保护的设施和供应链。相比之下,那些建立基本开发或测试环境的人可能需要限制较少的控制(中间)。

+

选择支持软件

+

您选择的支持软件(如消息传递和负载平衡)可能会对云产生严重的安全影响。为组织做出正确的选择非常重要。本节提供了选择支持软件的一些一般准则。

+

为了选择最佳支持软件,请考虑以下因素:

+
    +
  • 团队专业知识
  • +
  • 产品或项目成熟度
  • +
  • 通用标准
  • +
  • 硬件问题
  • +
+

团队专业知识

+

团队越熟悉特定产品、其配置和特殊性,就越少会出现配置错误。此外,将员工的专业知识分散到整个组织中可以增加系统的可用性,允许分工,并在团队成员不可用时减轻问题。

+

产品或项目成熟度

+

给定产品或项目的成熟度对您的安全状况至关重要。部署云后,产品成熟度会产生许多影响:

+
    +
  • 专业知识的可用性
  • +
  • 活跃的开发人员和用户社区
  • +
  • 更新的及时性和可用性
  • +
  • 事件响应
  • +
+

通用标准

+

通用标准是一个国际标准化的软件评估过程,政府和商业公司使用它来验证软件技术的性能是否如宣传的那样。

+

硬件问题

+

考虑运行软件的硬件的可支持性。此外,请考虑硬件中可用的其他功能,以及您选择的软件如何支持这些功能。

+

系统文档

+

OpenStack 云部署的系统文档应遵循组织中企业信息技术系统的模板和最佳实践。组织通常有合规性要求,这可能需要一个整体的系统安全计划来清点和记录给定系统的架构。整个行业都面临着与记录动态云基础架构和保持信息最新相关的共同挑战。

+
    +
  • +

    系统文档要求

    +
      +
    • 系统角色和类型
    • +
    • 系统清单
    • +
    • 网络拓扑
    • +
    • 服务、协议和端口
    • +
    +
  • +
+

系统文档要求

+

系统角色和类型

+

通常构成 OpenStack 安装的两种广义节点类型是:

+
基础设施节点
+

运行与云相关的服务,例如 OpenStack Identity 服务、消息队列服务、存储、网络以及支持云运行所需的其他服务。

+
计算、存储或其他资源节点
+

为云提供存储容量或虚拟机。

+

系统清单

+

文档应提供OpenStack环境的一般描述,并涵盖使用的所有系统(例如,生产、开发或测试)。记录系统组件、网络、服务和软件通常提供全面覆盖和考虑安全问题、攻击媒介和可能的安全域桥接点所需的鸟瞰图。系统清单可能需要捕获临时资源,例如虚拟机或虚拟磁盘卷,否则这些资源将成为传统 IT 系统中的持久性资源。

+

硬件清单

+

对书面文档没有严格合规性要求的云可能会受益于配置管理数据库 (CMDB)。CMDB通常用于硬件资产跟踪和整体生命周期管理。通过利用 CMDB,组织可以快速识别云基础设施硬件,例如计算节点、存储节点或网络设备。CMDB可以帮助识别网络上存在的资产,这些资产可能由于维护不足、保护不足或被取代和遗忘而存在漏洞。如果底层硬件支持必要的自动发现功能,则 OpenStack 置备系统可以提供一些基本的 CMDB 功能。

+

软件清单

+

与硬件一样,OpenStack 部署中的所有软件组件都应记录在案。示例包括:

+
    +
  • 系统数据库,例如 MySQL 或 mongoDB
  • +
  • OpenStack 软件组件,例如 Identity 或 Compute
  • +
  • 支持组件,例如负载均衡器、反向代理、DNS 或 DHCP 服务
  • +
+

在评估库、应用程序或软件类别中泄露或漏洞的影响时,软件组件的权威列表可能至关重要。

+

网络拓扑

+

应提供网络拓扑,并突出显示安全域之间的数据流和桥接点。网络入口和出口点应与任何 OpenStack 逻辑系统边界一起标识。可能需要多个图表来提供系统的完整视觉覆盖。网络拓扑文档应包括系统代表租户创建的虚拟网络,以及 OpenStack 创建的虚拟机实例和网关。

+

服务、协议和端口

+

了解有关组织资产的信息通常是最佳做法。资产表可以帮助验证安全要求,并帮助维护标准安全组件,例如防火墙配置、服务端口冲突、安全修正区域和合规性。此外,该表还有助于理解 OpenStack 组件之间的关系。该表可能包括:

+
    +
  • OpenStack 部署中使用的服务、协议和端口。
  • +
  • 云基础架构中运行的所有服务的概述。
  • +
+

强烈建议 OpenStack 部署记录与此类似的信息。该表可以根据从 CMDB 派生的信息创建,也可以手动构建。

+

下面提供了一个表格示例:

服务 | 协议 | 端口 | 目的 | 使用者 | 安全域
beam.smp | AMQP | 5672/tcp | AMQP 消息服务 | RabbitMQ | 管理域
tgtd | iSCSI | 3260/tcp | iSCSI 发起程序服务 | iSCSI | 私有（数据网络）
sshd | ssh | 22/tcp | 允许安全登录到节点和来宾虚拟机 | Various | 按需配置，作用于管理域、公共域和访客域
mysqld | mysql | 3306/tcp | 数据库服务 | Various | 管理域
apache2 | http | 443/tcp | 仪表板 | Tenants | 公共域
dnsmasq | dns | 53/tcp | DNS 服务 | Guest VMs | 访客域
+

管理

+

云部署是一个不断变化的系统。机器老化和故障,软件过时,漏洞被发现。当配置中出现错误或遗漏时,或者必须应用软件修复时,必须以安全但方便的方式进行这些更改。这些更改通常通过配置管理来解决。

+

保护云部署不被恶意实体配置或操纵非常重要。由于云中的许多系统都采用计算和网络虚拟化,因此 OpenStack 面临着明显的挑战,必须通过完整性生命周期管理来解决这些挑战。

+

管理员必须对云执行命令和控制,以实现各种操作功能。理解和保护这些指挥和控制设施非常重要。

+
    +
  • +

    持续的系统管理

    +
      +
    • 漏洞管理
    • +
    • 配置管理
    • +
    • 安全备份和恢复
    • +
    • 安全审计工具
    • +
    +
  • +
  • +

    完整性生命周期

    +
      +
    • 安全引导
    • +
    • 运行时验证
    • +
    • 服务器加固
    • +
    +
  • +
  • +

    管理界面

    +
      +
    • 仪表板
    • +
    • OpenStack 接口
    • +
    • 安全外壳 (SSH)
    • +
    • 管理实用程序
    • +
    • 带外管理接口
    • +
    +
  • +
+

持续的系统管理

+

云系统总会存在漏洞,其中一些可能是安全问题。因此,准备好应用安全更新和常规软件更新至关重要。这涉及到配置管理工具的智能使用,下面将对此进行讨论。这还涉及了解何时需要升级。

+

漏洞管理

+

有关安全相关更改的公告,请订阅 OpenStack Announce 邮件列表。安全通知还会通过下游软件包发布,例如,通过您可能作为软件包更新的一部分订阅的 Linux 发行版。

+

OpenStack组件只是云中软件的一小部分。与所有这些其他组件保持同步也很重要。虽然某些数据源是特定于部署的,但云管理员必须订阅必要的邮件列表,以便接收适用于组织环境的任何安全更新的通知。通常,这就像跟踪上游 Linux 发行版一样简单。

+

注意

+
OpenStack 通过两个渠道发布安全信息。
+
+- OpenStack 安全公告 (OSSA) 由 OpenStack 漏洞管理团队 (VMT) 创建。它们与核心OpenStack服务中的安全漏洞有关。有关 VMT 的更多信息,请参阅漏洞管理流程。
+- OpenStack 安全说明 (OSSN) 由 OpenStack 安全组 (OSSG) 创建,以支持 VMT 的工作。OSSN解决了支持软件和常见部署配置中的问题。本指南中引用了它们。安全说明存档在OSSN上。
+
分类
+

收到安全更新通知后,下一步是确定此更新对给定云部署的重要性。在这种情况下,拥有预定义的策略很有用。现有的漏洞评级系统(如通用漏洞评分系统 (CVSS))无法正确考虑云部署。

+

在此示例中,我们引入了一个评分矩阵,该矩阵将漏洞分为三类:权限提升、拒绝服务和信息泄露。了解漏洞的类型及其在基础架构中发生的位置将使您能够做出合理的响应决策。

+

权限提升描述了用户使用系统中其他用户的权限进行操作的能力,绕过适当的授权检查。来宾用户执行的操作允许他们以管理员权限执行未经授权的操作,这是此类漏洞的一个示例。

+

拒绝服务是指被利用的漏洞,可能导致服务或系统中断。这既包括使网络资源不堪重负的分布式攻击,也包括通常由资源分配错误或输入引起的系统故障缺陷引起的单用户攻击。

+

信息泄露漏洞会泄露有关您的系统或操作的信息。这些漏洞的范围从调试信息泄露到关键安全数据(如身份验证凭据和密码)的暴露。

攻击者位置 / 权限级别 | 外部 | 云用户 | 云管理员 | 控制平面
权限提升（3 级） | 紧急 | n/a | n/a | n/a
权限提升（2 个级别） | 紧急 | 紧急 | n/a | n/a
特权提升（1 级） | 紧急 | 紧急 | 紧急 | n/a
拒绝服务 | 高 | 高 | 高 | 中
信息披露 | 紧急/高 | 紧急/高 | 中/低 | 低
+

该表说明了一种通用方法,该方法根据漏洞在部署中发生的位置和影响来衡量漏洞的影响。例如,计算 API 节点上的单级权限提升可能允许 API 的标准用户升级为具有与节点上的 root 用户相同的权限。

+

我们建议云管理员使用此表作为模型,以帮助定义要针对各种安全级别执行的操作。例如,关键级别的安全更新可能需要快速升级云,而低级别的更新可能需要更长的时间才能完成。

+
测试更新
+

在生产环境中部署任何更新之前,应对其进行测试。通常,这需要有一个单独的测试云设置,该设置首先接收更新。在软件和硬件方面,此云应尽可能接近生产云。应在性能影响、稳定性、应用程序影响等方面对更新进行全面测试。特别重要的是验证更新理论上解决的问题(例如特定漏洞)是否已实际修复。

+
部署更新
+

完全测试更新后,可以将其部署到生产环境。应使用下面所述的配置管理工具完全自动化此部署。

+

配置管理

+

生产质量的云应始终使用工具来自动执行配置和部署。这消除了人为错误,并允许云更快地扩展。自动化还有助于持续集成和测试。

+

在构建 OpenStack 云时,强烈建议在设计和实现时考虑配置管理工具或框架。通过配置管理,您可以避免在构建、管理和维护像 OpenStack 这样复杂的基础架构时固有的许多陷阱。通过生成配置管理实用程序所需的清单、说明书或模板,您可以满足许多文档和法规报告要求。此外,配置管理还可以作为业务连续性计划 (BCP) 和数据恢复 (DR) 计划的一部分,您可以在其中将节点或服务重建回 DR 事件中的已知状态或给定的妥协状态。

+

此外,当与 Git 或 SVN 等版本控制系统结合使用时,您可以跟踪环境随时间推移而发生的更改,并重新调解可能发生的未经授权的更改。例如,文件 nova.conf 或其他配置文件不符合您的标准,您的配置管理工具可以还原或替换该文件,并将您的配置恢复到已知状态。最后,配置管理工具也可用于部署更新;简化安全补丁流程。这些工具具有广泛的功能,在该领域非常有用。保护云的关键点是选择一种配置管理工具并使用它。
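下面给出一个最简化的示意示例（假设使用下文列出的 Ansible 作为配置管理工具；模板路径、主机组和服务名称均为说明用途的假设值），展示如何让 nova.conf 始终与版本库中的已知良好模板保持一致：

# enforce-nova-conf.yml —— 用版本控制中的模板强制覆盖 /etc/nova/nova.conf（示意）
- name: Enforce known-good nova.conf
  hosts: compute
  become: yes
  tasks:
    - name: Render nova.conf from the version-controlled template
      template:
        src: templates/nova.conf.j2        # 假设的模板文件，保存在 Git 仓库中
        dest: /etc/nova/nova.conf
        owner: nova
        group: nova
        mode: "0640"
      notify: restart nova-compute
  handlers:
    - name: restart nova-compute
      service:
        name: nova-compute                 # 服务名称因发行版而异
        state: restarted

每次运行该 playbook 都会把被手工改动或篡改的配置恢复到模板定义的状态。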

+

有许多配置管理解决方案;在撰写本文时,市场上有两个在支持 OpenStack 环境方面非常强大的公司:Chef 和 Puppet。下面提供了此空间中的工具的非详尽列表:

+
    +
  • Chef
  • +
  • Puppet
  • +
  • Salt Stack
  • +
  • Ansible
  • +
+
策略更改
+

每当更改策略或配置管理时,最好记录活动并备份新集的副本。通常,此类策略和配置存储在受版本控制的存储库(如 Git)中。

+

安全备份和恢复

+

在整个系统安全计划中包括备份过程和策略非常重要。有关 OpenStack 备份和恢复功能和过程的概述,请参阅有关备份和恢复的 OpenStack 操作指南。

+
    +
  • 确保只有经过身份验证的用户和备份客户端才能访问备份服务器。
  • +
  • 使用数据加密选项来存储和传输备份。
  • +
  • 使用专用且强化的备份服务器。备份服务器的日志必须每天进行监视,并且只有少数人可以访问。
  • +
  • 定期测试数据恢复选项,包括存储在安全备份中的镜像,是确保灾难恢复准备的关键部分。在发生安全漏洞或受损时,终止运行中的实例并从已知的安全镜像备份中重新启动实例确实是最佳做法。这有助于确保受损的实例被消除,并且可以迅速从备份的镜像中重新部署干净、可信赖的版本。
  • +
+

安全审计工具

+

安全审核工具可以补充配置管理工具。安全审核工具可自动执行验证给定系统配置是否满足大量安全控制的过程。这些工具有助于弥合从安全配置指南文档(例如,STIG 和 NSA 指南)到特定系统安装的差距。例如,SCAP 可以将正在运行的系统与预定义的配置文件进行比较。SCAP 输出一份报告,详细说明配置文件中的哪些控件已满足,哪些控件未通过,哪些控件未选中。
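例如，可以用 OpenSCAP 提供的 oscap 命令把正在运行的系统与某个预定义配置文件进行比对并生成报告（以下仅为示意；SCAP 数据流文件路径和 profile 名称取决于具体发行版，属于假设值）：

$ sudo oscap xccdf eval \
    --profile xccdf_org.ssgproject.content_profile_standard \
    --results /tmp/scap-results.xml \
    --report /tmp/scap-report.html \
    /usr/share/xml/scap/ssg/content/ssg-rhel7-ds.xml

生成的 HTML 报告会列出每项控制是通过、未通过还是未检查。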

+

将配置管理和安全审计工具相结合,形成了一个强大的组合。审核工具将突出显示部署问题。配置管理工具简化了更改每个系统的过程,以解决审计问题。以这种方式一起使用,这些工具有助于维护满足从基本强化到合规性验证等安全要求的云环境。

+

配置管理和安全审计工具将给云带来另一层复杂性。这种复杂性带来了额外的安全问题。考虑到其安全优势,我们认为这是一种可接受的风险权衡。对于这些工具的操作安全性保障超出了本指南的范围。

+

完整性生命周期

+

我们将完整性生命周期定义为一个深思熟虑的过程,它确保我们始终在整个云中以预期的配置运行预期的软件。此过程从安全引导开始,并通过配置管理和安全监控进行维护。本章就如何处理完整性生命周期过程提供了建议。

+

安全引导

+

云中的节点,包括计算、存储、网络、服务和混合节点,应该有一个自动化的配置过程。这确保了节点的一致和正确配置。这也便于安全补丁、升级、故障修复和其他关键变更。由于这个过程安装了在云中具有最高特权级别的新软件,因此验证安装正确的软件非常重要,包括启动过程的最早阶段。

+

有多种技术可以验证这些早期启动阶段。这些通常需要硬件支持,例如可信平台模块 (TPM)、英特尔可信执行技术 (TXT)、动态信任根测量 (DRTM) 和统一可扩展固件接口 (UEFI) 安全启动。在本书中,我们将所有这些统称为安全启动技术。我们建议使用安全启动,同时承认部署此启动所需的许多部分需要高级技术技能才能为每个环境自定义工具。与本指南中的许多其他建议相比,使用安全启动需要更深入的集成和自定义。TPM 技术虽然在大多数商务级笔记本电脑和台式机中很常见数年,但现在已与支持的 BIOS 一起在服务器中可用。正确的规划对于成功的安全启动部署至关重要。

+

有关安全启动部署的完整教程超出了本书的范围。相反,我们在这里提供了一个框架,用于将安全启动技术与典型的节点预配过程集成。有关更多详细信息,云架构师应参考相关规范和软件配置手册。

+
节点配置
+

节点应使用预引导执行环境(PXE)进行配置。这大大减少了重新部署节点所需的工作量。典型的过程涉及节点从服务器接收各种引导阶段(即执行的软件逐渐复杂)。

+

../_images/node-provisioning-pxe.png

+

我们建议在管理安全域中使用单独的隔离网络进行置备。此网络将处理所有 PXE 流量,以及上面描述的后续启动阶段下载。请注意,节点引导过程从两个不安全的操作开始:DHCP 和 TFTP。然后,引导过程使用 TLS 下载部署节点所需的其余信息。这可能是操作系统安装程序、由 Chef 或 Puppet 管理的基本安装,甚至是直接写入磁盘的完整文件系统映像。

+

虽然在 PXE 启动过程中使用 TLS 更具挑战性,但常见的 PXE 固件项目(如 iPXE)提供了这种支持。通常,这涉及在了解允许的 TLS 证书链的情况下构建 PXE 固件,以便它可以正确验证服务器证书。这通过限制不安全的纯文本网络操作的数量来提高攻击者的门槛。

+
验证启动
+

通常,有两种不同的策略来验证启动过程。传统的安全启动将验证在过程中的每个步骤运行的代码,并在代码不正确时停止启动。启动证明将记录在每个步骤中运行的代码,并将此信息提供给另一台计算机,以证明启动过程按预期完成。在这两种情况下,第一步都是在运行之前测量每段代码。在这种情况下,测量实际上是代码的 SHA-1 哈希值,在执行之前获取。哈希存储在 TPM 的平台配置寄存器 (PCR) 中。

+

注意

+
此处使用 SHA-1,因为这是 TPM 芯片支持的内容。
+

每个 TPM 至少有 24 个 PCR。2005 年 3 月的 TCG 通用服务器规范 v1.0 定义了启动时完整性测量的 PCR 分配。下表显示了典型的PCR配置。上下文指示这些值是根据节点硬件(固件)还是根据节点上置备的软件确定的。某些值受固件版本、磁盘大小和其他低级信息的影响。因此,在配置管理方面采取良好的做法非常重要,以确保部署的每个系统都完全按照预期进行配置。

寄存器 | 测量内容 | 上下文
PCR-00 | 核心信任根测量 (CRTM)、BIOS 代码、主机平台扩展 | 硬件
PCR-01 | 主机平台配置 | 硬件
PCR-02 | 选项 ROM 代码 | 硬件
PCR-03 | 选项 ROM 配置和数据 | 硬件
PCR-04 | 初始程序加载程序 (IPL) 代码，例如主引导记录 | 软件
PCR-05 | IPL 代码配置和数据 | 软件
PCR-06 | 状态转换和唤醒事件 | 软件
PCR-07 | 主机平台制造商控制 | 软件
PCR-08 | 特定于平台，通常是内核、内核扩展和驱动程序 | 软件
PCR-09 | 特定于平台，通常是 Initramfs | 软件
PCR-10 至 PCR-23 | 特定于平台 | 软件
+

安全启动可能是构建云的一个选项,但需要在硬件选择方面进行仔细规划。例如,确保您具有 TPM 和英特尔 TXT 支持。然后验证节点硬件供应商如何填充 PCR 值。例如,哪些值可用于验证。通常,上表中软件上下文下列出的 PCR 值是云架构师可以直接控制的值。但即使这些也可能随着云中软件的升级而改变。配置管理应链接到 PCR 策略引擎,以确保验证始终是最新的。

+

每个制造商都必须为其服务器提供 BIOS 和固件代码。不同的服务器、虚拟机监控程序和操作系统将选择填充不同的 PCR。在大多数实际部署中,不可能根据已知的良好数量(“黄金测量”)验证每个PCR。经验表明,即使在单个供应商的产品线中,给定PCR的测量过程也可能不一致。建议为每个服务器建立基线,并监视 PCR 值以查找意外更改。第三方软件可能可用于协助 TPM 预配和监视过程,具体取决于所选的虚拟机监控程序解决方案。

+

初始程序加载程序 (IPL) 代码很可能是 PXE 固件,假设采用上述节点部署策略。因此,安全启动或启动证明过程可以测量所有早期启动代码,例如 BIOS、固件、PXE 固件和内核映像。确保每个节点都安装了这些部件的正确版本,为构建节点软件堆栈的其余部分奠定了坚实的基础。

+

根据所选的策略,在发生故障时,节点将无法启动,或者它可以将故障报告给云中的另一个实体。为了实现安全引导,节点将无法引导,管理安全域中的置备服务必须识别这一点并记录事件。对于启动证明,当检测到故障时,节点将已经在运行。在这种情况下,应通过禁用节点的网络访问来立即隔离节点。然后,应分析事件的根本原因。无论哪种情况,策略都应规定在失败后如何继续。云可能会自动尝试重新配置节点一定次数。或者,它可能会立即通知云管理员调查问题。此处的正确策略是特定于部署和故障模式的。

+
节点加固
+

此时,我们知道节点已使用正确的内核和底层组件启动。下一步是强化操作系统,它从一组行业公认的强化控件开始。以下指南是很好的示例:

+

安全技术实施指南 (STIG)

+

国防信息系统局 (DISA)(隶属于美国国防部)发布适用于各种操作系统、应用程序和硬件的 STIG 内容。这些控件在未附加任何许可证的情况下发布。

+

互联网安全中心 (CIS) 基准测试

+

CIS 会定期发布安全基准以及自动应用这些安全控制的自动化工具。这些基准测试是在具有一些限制的知识共享许可下发布的。

+

这些安全控制最好通过自动化方法应用。自动化确保每次对每个系统都以相同的方式应用控制,并且它们还提供了一种用于审核现有系统的快速方法。自动化有多种选择:

+

OpenSCAP

+

OpenSCAP 是一个开源工具,它采用 SCAP 内容(描述安全控制的 XML 文件)并将该内容应用于各种系统。目前可用的大多数内容都适用于 Red Hat Enterprise Linux 和 CentOS,但这些工具适用于任何 Linux 或 Windows 系统。

+

ansible 加固

+

ansible-hardening 项目提供了一个 Ansible 角色,可将安全控制应用于各种 Linux 操作系统。它还可用于审核现有系统。仔细检查每个控制措施,以确定它是否可能对生产系统造成损害。这些控件基于 Red Hat Enterprise Linux 7 STIG。
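下面是应用该角色的一个最小 playbook 示意（假设角色已以 ansible-hardening 的名称安装在控制节点上）。先以 --check 模式运行可以只做审计而不修改系统：

# harden.yml —— 将 ansible-hardening 角色应用到目标主机（示意）
- hosts: all
  become: yes
  roles:
    - ansible-hardening

$ ansible-playbook -i inventory harden.yml --check
$ ansible-playbook -i inventory harden.yml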

+

完全加固的系统是一个具有挑战性的过程,可能需要对某些系统进行大量更改。其中一些更改可能会影响生产工作负载。如果系统无法完全加固,强烈建议进行以下两项更改,以便在不造成重大中断的情况下提高安全性:

+
强制访问控制 (MAC)
+

强制访问控制会影响系统上的所有用户,包括 root,内核的工作是根据当前安全策略审查活动。如果活动不在允许的策略范围内,则会被阻止,即使对于 root 用户也是如此。有关更多详细信息,请查看下面关于 sVirt、SELinux 和 AppArmor 的讨论。

+
删除软件包并停止服务
+

确保系统安装的软件包数量尽可能少,并且运行的服务数量尽可能少。删除不需要的软件包可以更轻松地进行修补,并减少系统上可能导致违规的项目数量。停止不需要的服务会缩小系统上的攻击面,并使攻击更加困难。

+

我们还建议对生产节点执行以下附加步骤:

+
只读文件系统
+

尽可能使用只读文件系统。确保可写文件系统不允许执行。这可以通过在 /etc/fstab 中使用 noexec、nosuid 和 nodev 挂载选项来处理。
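例如，/etc/fstab 中针对可写分区的条目可以写成如下形式（设备名和挂载点为示例假设值）：

# /etc/fstab 片段：可写分区禁止执行、禁止 setuid、禁止设备文件（示意）
/dev/mapper/vg0-tmp   /tmp       ext4   defaults,noexec,nosuid,nodev   0 2
/dev/mapper/vg0-vtmp  /var/tmp   ext4   defaults,noexec,nosuid,nodev   0 2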

+
系统验证
+

最后,节点内核应该有一种机制来验证节点的其余部分是否以已知的良好状态启动。这提供了从引导验证过程到验证整个系统的必要链接。执行此操作的步骤将特定于部署。例如,内核模块可以在使用 dm-verity 挂载文件系统之前验证组成文件系统的块的哈希值。

+

运行时验证

+

一旦节点运行,我们需要确保它随着时间的推移保持良好的状态。从广义上讲,这包括配置管理和安全监控。这些领域中每个领域的目标都不同。通过检查这两者,我们可以更好地确保系统按预期运行。我们将在管理部分讨论配置管理,并在下面讨论安全监控。

+
入侵检测系统
+

基于主机的入侵检测工具对于自动验证云内部也很有用。有各种各样的基于主机的入侵检测工具可用。有些是免费提供的开源项目,而另一些则是商业项目。通常,这些工具会分析来自各种来源的数据,并根据规则集和/或训练生成安全警报。典型功能包括日志分析、文件完整性检查、策略监控和 rootkit 检测。更高级(通常是自定义)工具可以验证内存中进程映像是否与磁盘上的可执行文件匹配,并验证正在运行的进程的执行状态。

+

对于云架构师来说,一个关键的策略决策是如何处理安全监控工具的输出。实际上有两种选择。首先是提醒人类进行调查和/或采取纠正措施。这可以通过在云管理员的日志或事件源中包含安全警报来完成。第二种选择是让云自动采取某种形式的补救措施,以及记录事件。补救措施可能包括从重新安装节点到执行次要服务配置的任何内容。但是,由于可能存在误报,自动补救措施可能具有挑战性。

+

当安全监视工具为良性事件生成安全警报时,会发生误报。由于安全监控工具的性质,误报肯定会不时发生。通常,云管理员可以调整安全监控工具以减少误报,但这也可能同时降低整体检测率。在云中设置安全监控系统时,必须了解并考虑这些经典的权衡。

+

基于主机的入侵检测工具的选择和配置具有高度的部署特异性。我们建议从探索以下开源项目开始,这些项目实现了各种基于主机的入侵检测和文件监控功能。

+
    +
  • OSSEC
  • +
  • Samhain
  • +
  • Tripwire
  • +
  • AIDE
  • +
+

网络入侵检测工具是对基于主机的工具的补充。OpenStack 没有内置特定的网络 IDS,但 OpenStack Networking 提供了一种插件机制,可以通过 Networking API 启用不同的技术。此插件体系结构将允许租户开发 API 扩展,以插入和配置自己的高级网络服务,例如防火墙、入侵检测系统或虚拟机之间的 VPN。

+

与基于主机的工具类似,基于网络的入侵检测工具的选择和配置是特定于部署的。Snort 是领先的开源网络入侵检测工具,也是了解更多信息的良好起点。

+

对于基于网络和主机的入侵检测系统,有一些重要的安全注意事项。

+
    +
  • 重要的是要考虑将网络 IDS 放置在云上(例如,将其添加到网络边界和/或敏感网络周围)。放置位置取决于您的网络环境,但请确保监控 IDS 可能对您的服务产生的影响,具体取决于您选择添加的位置。网络 IDS 通常无法检查加密流量(如 TLS)的内容。但是,网络 IDS 在识别网络上的异常未加密流量方面仍可能提供一些好处。
  • +
  • 在某些部署中,可能需要在安全域网桥上的敏感组件上添加基于主机的 IDS。基于主机的 IDS 可能会通过组件上遭到入侵或未经授权的进程来检测异常活动。IDS 应在管理网络上传输警报和日志信息。
  • +
+

服务器加固

+

云环境中的服务器,包括 undercloud 和 overcloud 基础架构,应实施强化最佳实践。由于操作系统和服务器强化很常见,因此此处不涵盖适用的最佳实践,包括但不限于日志记录、用户帐户限制和定期更新,但应应用于所有基础结构。

+
文件完整性管理(FIM)
+

文件完整性管理 (FIM) 是确保敏感系统或应用程序配置文件等文件不会损坏或更改以允许未经授权的访问或恶意行为的方法。这可以通过实用程序(如 Samhain)来完成,该实用程序将创建指定资源的校验和哈希,然后定期验证该哈希,或者通过 DMVerity 等工具来完成,该工具可以获取块设备的哈希值,并在系统访问这些哈希值时对其进行验证,然后再将其呈现给用户。

+

这些应该放在适当的位置，以监控和报告对系统、虚拟机管理程序和应用程序配置文件（如 /etc/pam.d/system-auth 和 /etc/keystone/keystone.conf）以及内核模块（如 virtio）的更改。最佳做法是使用 lsmod 命令来显示系统上定期加载的内容，以帮助确定 FIM 检查中应包含或不应包含的内容。
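例如，可以先为当前加载的内核模块建立一份基线，之后定期与基线比对，以便及时发现意外加载的模块（基线文件路径为示例假设值）：

# lsmod | sort > /root/lsmod-baseline.txt
# lsmod | sort | diff -u /root/lsmod-baseline.txt -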

+

管理界面

+

管理员需要对云执行命令和控制,以实现各种操作功能。理解和保护这些指挥和控制设施非常重要。

+

OpenStack 为运维人员和租户提供了多种管理界面:

+
    +
  • OpenStack 仪表板 (horizon)
  • +
  • OpenStack 接口
  • +
  • 安全外壳 (SSH)
  • +
  • OpenStack 管理实用程序,例如 nova-manage 和 glance-manage
  • +
  • 带外管理接口,如 IPMI
  • +
+

仪表板

+

OpenStack 仪表板 (horizon) 为管理员和租户提供了一个基于 Web 的图形界面,用于置备和访问基于云的资源。仪表板通过调用 OpenStack API 与后端服务进行通信。

+
功能
+
    +
  • 作为云管理员,仪表板提供云大小和状态的整体视图。您可以创建用户和租户/项目,将用户分配给租户/项目,并对可供他们使用的资源设置限制。
  • +
  • 仪表板为租户用户提供了一个自助服务门户,用于在管理员设置的限制范围内预配自己的资源。
  • +
  • 仪表板为路由器和负载平衡器提供 GUI 支持。例如,仪表板现在实现了所有主要的网络功能。
  • +
  • 它是一个可扩展的 Django Web 应用程序,允许轻松插入第三方产品和服务,例如计费、监控和其他管理工具。
  • +
  • 仪表板还可以为服务提供商和其他商业供应商打造品牌。
  • +
+
安全注意事项
+
    +
  • 仪表板要求在 Web 浏览器中启用 Cookie 和 JavaScript。
  • +
  • 托管仪表板的 Web 服务器应配置为使用 TLS,以确保数据已加密。
  • +
  • Horizon Web Service 及其用于与后端通信的 OpenStack API 都容易受到 Web 攻击媒介(如拒绝服务)的攻击,因此必须对其进行监控。
  • +
  • 现在可以通过仪表板将镜像文件直接从用户的硬盘上传到 OpenStack 镜像服务(尽管存在许多部署/安全隐患)。对于多 GB 的映像,仍强烈建议使用 glance CLI 进行上传。
  • +
  • 通过仪表盘创建和管理安全组。安全组允许对安全策略进行 L3-L4 数据包筛选,以保护虚拟机。
  • +
+
参考书目
+

OpenStack.org,ReleaseNotes/Liberty。2015. OpenStack Liberty 发行说明

+

OpenStack 接口

+

OpenStack API 是一个 RESTful Web 服务端点，用于访问、配置和自动化基于云的资源。操作员和用户通常通过命令行实用程序（例如 nova 或 glance）、特定于语言的库或第三方工具访问 API。

+
功能
+
    +
  • 对于云管理员来说，API 提供了云部署大小和状态的整体视图，并允许创建用户、租户/项目、将用户分配给租户/项目，以及为每个租户/项目指定资源配额。
  • +
  • API 提供了一个租户接口，用于预配、管理和访问其资源。
  • +
+
安全注意事项
+
    +
  • 应为 TLS 配置 API 服务,以确保数据已加密。
  • +
  • 作为 Web 服务,OpenStack API 容易受到熟悉的网站攻击媒介的影响,例如拒绝服务攻击。
  • +
+

安全外壳 (SSH)

+

使用安全外壳 (SSH) 访问来管理 Linux 和 Unix 系统已成为行业惯例。SSH 使用安全的加密原语进行通信。鉴于 SSH 在典型 OpenStack 部署中的范围和重要性,了解部署 SSH 的最佳实践非常重要。

+
主机密钥指纹
+

经常被忽视的是 SSH 主机的密钥管理需求。由于 OpenStack 部署中的大多数或所有主机都将提供 SSH 服务,因此对与这些主机的连接充满信心非常重要。不能低估的是,未能提供合理安全且可访问的方法来验证 SSH 主机密钥指纹是滥用和利用的成熟时机。

+

所有 SSH 守护程序都具有专用主机密钥,并在连接时提供主机密钥指纹。此主机密钥指纹是未签名公钥的哈希值。在与这些主机建立 SSH 连接之前,必须知道这些主机密钥指纹。验证主机密钥指纹有助于检测中间人攻击。

+

通常,在安装 SSH 守护程序时,将生成主机密钥。在主机密钥生成过程中,主机必须具有足够的熵。主机密钥生成期间的熵不足可能导致窃听 SSH 会话。

+

生成 SSH 主机密钥后,主机密钥指纹应存储在安全且可查询的位置。一个特别方便的解决方案是使用 RFC-4255 中定义的 SSHFP 资源记录的 DNS。为了安全起见,有必要部署 DNSSEC。
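例如，第一条命令在目标主机上生成对应的 SSHFP 资源记录（随后发布到启用了 DNSSEC 的 DNS 区域中），第二条命令演示客户端如何通过 DNS 校验主机密钥指纹（主机名为示例假设值）：

$ ssh-keygen -r compute01.example.com
$ ssh -o VerifyHostKeyDNS=yes compute01.example.com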

+

管理实用程序

+

OpenStack Management Utilities 是进行 API 调用的开源 Python 命令行客户端。每个 OpenStack 服务都有一个客户端(例如,nova、glance)。除了标准的 CLI 客户端之外,大多数服务都具有管理命令行实用程序,用于直接调用数据库。这些专用管理实用程序正在慢慢被弃用。

+
安全注意事项
+
    +
  • 在某些情况下,专用管理实用程序 (*-manage) 使用直接数据库连接。
  • +
  • 确保包含凭据信息的 .rc 文件是安全的。
  • +
+
参考书目
+

OpenStack.org,“OpenStack 最终用户指南”部分。2016. OpenStack 命令行客户端概述。

+

OpenStack.org,使用 OpenStack RC 文件设置环境变量。2016. 下载并获取 OpenStack RC 文件。

+

带外管理接口

+

OpenStack 管理依赖于带外管理接口(如 IPMI 协议)来访问运行 OpenStack 组件的节点。IPMI 是一种非常流行的规范,用于远程管理、诊断和重新启动服务器,无论操作系统正在运行还是系统崩溃。

+
安全注意事项
+
    +
  • 使用强密码并保护它们,或使用客户端 TLS 身份验证。
  • +
  • 确保网络接口位于其自己的专用(管理或单独的)网络上。使用防火墙或其他网络设备隔离管理域。
  • +
  • 如果您使用 Web 界面与 BMC/IPMI 交互,请始终使用 TLS 接口,例如 HTTPS 或端口 443。此 TLS 接口不应使用自签名证书(通常是默认的),但应具有使用正确定义的完全限定域名 (FQDN) 的受信任证书。
  • +
  • 监控管理网络上的流量。与繁忙的计算节点相比,异常可能更容易跟踪。
  • +
+

带外管理界面通常还包括图形计算机控制台访问。这些接口通常可以加密,但不一定是默认的。请参阅系统软件文档以加密这些接口。

+
参考书目
+

SANS 技术研究所,InfoSec Handlers 日记博客。2012. 黑客攻击已关闭的服务器。

+

安全通信

+

设备间通信是一个严重的安全问题。在大型项目错误(如 Heartbleed)或更高级的攻击(如 BEAST 和 CRIME)之间,通过网络进行安全通信的方法变得越来越重要。但是,应该记住,加密应该作为更大的安全策略的一部分来应用。端点的入侵意味着攻击者不再需要破坏所使用的加密,而是能够在系统处理消息时查看和操纵消息。

+

本章将回顾有关配置 TLS 以保护内部和外部资源的几个功能,并指出应特别注意的特定类别的系统。

+
    +
  • +

    TLS 和 SSL 简介

    +
      +
    • 证书颁发机构
    • +
    • TLS 库
    • +
    • 加密算法、密码模式和协议
    • +
    • 总结
    • +
    +
  • +
  • +

    TLS 代理和 HTTP 服务

    +
      +
    • 例子
    • +
    • HTTP 严格传输安全性
    • +
    • 完美前向保密
    • +
    +
  • +
  • +

    安全参考架构

    +
      +
    • SSL/TLS 代理在前面
    • +
    • SSL/TLS 与 API 端点位于同一物理主机上
    • +
    • 负载均衡器上的 SSL/TLS
    • +
    • 外部和内部环境的加密分离
    • +
    +
  • +
+

TLS 和 SSL 简介

+

在某些情况下,需要安全来确保 OpenStack 部署中网络流量的机密性或完整性。这通常是使用加密措施实现的,例如传输层安全性 (TLS) 协议。

+

在典型部署中,通过公共网络传输的所有流量都是安全的,但安全最佳实践要求内部流量也必须得到保护。仅仅依靠安全域分离进行保护是不够的。如果攻击者获得对虚拟机监控程序或主机资源的访问权限,破坏 API 端点或任何其他服务,则他们一定无法轻松注入或捕获消息、命令或以其他方式影响云的管理功能。

+

所有域都应使用 TLS 进行保护,包括管理域服务和服务内通信。TLS 提供了确保用户与 OpenStack 服务之间以及 OpenStack 服务本身之间通信的身份验证、不可否认性、机密性和完整性的机制。

+

由于安全套接字层 (SSL) 协议中已发布的漏洞,我们强烈建议优先使用 TLS 而不是 SSL,并且在任何情况下都禁用 SSL,除非需要与过时的浏览器或库兼容。

+

公钥基础设施 (PKI) 是用于保护网络通信的框架。它由一组系统和流程组成,以确保在验证各方身份的同时可以安全地发送流量。此处描述的 PKI 配置文件是由 PKIX 工作组开发的 Internet 工程任务组 (IETF) 公钥基础结构 (PKIX) 配置文件。PKI的核心组件包括:

+

数字证书

+

签名公钥证书是具有实体的可验证数据、其公钥以及其他一些属性的数据结构。这些证书由证书颁发机构 (CA) 颁发。由于证书由受信任的 CA 签名,因此一旦验证,与实体关联的公钥将保证与所述实体相关联。用于定义这些证书的最常见标准是 X.509 标准。X.509 v3 是当前的标准,在 RFC5280 中进行了详细描述。证书由 CA 颁发,作为证明在线实体身份的机制。CA 通过从证书创建消息摘要并使用其私钥对摘要进行加密,对证书进行数字签名。

+

结束实体

+

作为证书主题的用户、进程或系统。最终实体将其证书请求发送到注册机构 (RA) 进行审批。如果获得批准,RA 会将请求转发给证书颁发机构 (CA)。证书颁发机构验证请求,如果信息正确,则生成证书并签名。然后,此签名证书将发送到证书存储库。

+

信赖方

+

接收数字签名证书的终结点,该证书可参考证书上列出的公钥进行验证。信赖方应能够验证证书的链上,确保它不存在于 CRL 中,并且还必须能够验证证书的到期日期。

+

证书颁发机构 (CA)

+

CA 是受信任的实体,无论是最终方还是依赖证书进行证书策略、管理处理和证书颁发的一方。

+

注册机构 (RA)

+

CA 将某些管理功能委派给的可选系统,这包括在 CA 颁发证书之前对终端实体进行身份验证等功能。

+

证书吊销列表 (CRL)

+

证书吊销列表 (CRL) 是已吊销的证书序列号列表。在 PKI 模型中,不应信任提供这些证书的最终实体。吊销可能由于多种原因而发生,例如密钥泄露、CA 泄露。

+

CRL 发行人

+

CA 将证书吊销列表的发布委托给的可选系统。

+

证书存储库

+

存储和查找最终实体证书和证书吊销列表的位置 - 有时称为证书捆绑包。

+

PKI 构建了一个框架,用于提供加密算法、密码模式和协议,以保护数据和身份验证。强烈建议使用公钥基础结构 (PKI) 保护所有服务,包括对 API 终结点使用 TLS。仅靠传输或消息的加密或签名是不可能解决所有这些问题的。主机本身必须是安全的,并实施策略、命名空间和其他控制措施来保护其私有凭据和密钥。但是,密钥管理和保护的挑战并没有减少这些控制的必要性,也没有降低它们的重要性。

+

证书颁发机构

+

许多组织都建立了公钥基础设施,其中包含自己的证书颁发机构 (CA)、证书策略和管理,他们应该使用这些证书为内部 OpenStack 用户或服务颁发证书。公共安全域面向 Internet 的组织还需要由广泛认可的公共 CA 签名的证书。对于通过管理网络进行的加密通信,建议不要使用公共 CA。相反,我们期望并建议大多数部署部署自己的内部 CA。

+

建议 OpenStack 云架构师考虑对内部系统和面向客户的服务使用单独的 PKI 部署。这使云部署人员能够保持对其 PKI 基础设施的控制,并且使内部系统的证书请求、签名和部署变得更加容易。高级配置可以对不同的安全域使用单独的 PKI 部署。这允许部署人员保持环境的加密隔离,确保颁发给一个环境的证书不被另一个环境识别。

+

用于在面向 Internet 的云端点(或客户接口,其中客户预计不会安装除标准操作系统提供的证书捆绑包以外的任何内容)上支持 TLS 的证书应使用安装在操作系统证书捆绑包中的证书颁发机构进行预配。典型的知名供应商包括 Let's Encrypt、Verisign 和 Thawte,但还有许多其他供应商。

+

在创建和签署证书方面存在管理、策略和技术方面的挑战。在这个领域,云架构师或操作员可能希望寻求行业领导者和供应商的建议,以及此处推荐的指导。

+

TLS 库

+

OpenStack 生态系统中的组件、服务和应用程序或 OpenStack 的依赖项已实现或可以配置为使用 TLS 库。OpenStack 中的 TLS 和 HTTP 服务通常使用 OpenSSL 实现,OpenSSL 具有已针对 FIPS 140-2 验证的模块。但是,请记住,每个应用程序或服务在使用 OpenSSL 库的方式上仍可能引入弱点。

+

加密算法、密码模式和协议

+

建议至少使用 TLS 1.2。旧版本(如 TLS 1.0、1.1 和所有版本的 SSL(TLS 的前身)容易受到多种公开已知的攻击,因此不得使用。TLS 1.2 可用于广泛的客户端兼容性,但在启用此协议时要小心。仅当存在强制性兼容性要求并且您了解所涉及的风险时,才启用 TLS 版本 1.1。

+

使用 TLS 1.2 并同时控制客户端和服务器时，密码套件应限制为 ECDHE-ECDSA-AES256-GCM-SHA384。在不控制这两个终结点并使用 TLS 1.1 或 1.2 的情况下，更通用的 HIGH:!aNULL:!eNULL:!DES:!3DES:!SSLv3:!TLSv1:!CAMELLIA 是合理的密码选择。
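部署完成后，可以用 openssl s_client 验证某个端点实际协商出的协议版本和密码套件（主机名与端口为示例假设值，输出中的 Protocol 和 Cipher 字段即为协商结果）：

$ openssl s_client -connect keystone.example.com:5000 -tls1_2 < /dev/null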

+

但是,由于本书并不打算全面介绍密码学,因此我们不希望规定在OpenStack服务中应该启用或禁用哪些特定的算法或密码模式。我们想推荐一些权威的参考资料,以提供更多信息:

+
    +
  • 国家安全局,Suite B 密码学
  • +
  • OWASP密码学指南
  • +
  • OWASP 传输层保护备忘单
  • +
  • SoK:SSL 和 HTTPS:重温过去的挑战并评估证书信任模型增强功能
  • +
  • 世界上最危险的代码:在非浏览器软件中验证SSL证书
  • +
  • OpenSSL 和 FIPS 140-2
  • +
+

总结

+

鉴于 OpenStack 组件的复杂性和部署可能性的数量,您必须注意确保每个组件都获得 TLS 证书、密钥和 CA 的适当配置。后续部分将讨论以下服务:

+
    +
  • 计算 API 端点
  • +
  • 身份 API 端点
  • +
  • 网络 API 端点
  • +
  • 存储 API 端点
  • +
  • 消息服务器
  • +
  • 数据库服务器
  • +
  • 仪表板
  • +
+

TLS 代理和 HTTP 服务

+

OpenStack的终端是提供API给公共网络上的终端用户和管理网络上的其他OpenStack服务的HTTP服务。强烈建议所有这些请求,无论是内部还是外部,都使用TLS进行操作。为了实现这个目标,API服务必须部署在TLS代理后面,该代理能够建立和终止TLS会话。下表提供了可用于此目的的开源软件的非详尽列表:

+
    +
  • +

    Pound

    +
  • +
  • +

    Stud

    +
  • +
  • +

    Nginx

    +
  • +
  • +

    Apache httpd

    +
  • +
+

在软件终端性能不足的情况下,硬件加速器可能值得探索作为替代选项。请务必注意任何选定的 TLS 代理将处理的请求的大小。

+

示例

+

下面我们提供了一些更流行的 Web 服务器/TLS 终结器中启用 TLS 的推荐配置设置示例。

+

在深入研究配置之前,我们简要讨论密码的配置元素及其格式。有关可用密码和 OpenSSL 密码列表格式的更详尽处理,请参阅:密码。

+
ciphers = "HIGH:!RC4:!MD5:!aNULL:!eNULL:!EXP:!LOW:!MEDIUM"
+

+
ciphers = "kEECDH:kEDH:kRSA:HIGH:!RC4:!MD5:!aNULL:!eNULL:!EXP:!LOW:!MEDIUM"
+

密码字符串选项由 “:” 分隔,而 “!” 提供紧接着的元素的否定。元素顺序指示首选项,除非被限定符(如 HIGH)覆盖。让我们仔细看看上面示例字符串中的元素。

+

kEECDH:kEDH

+

临时椭圆曲线 Diffie-Hellman(缩写为 EECDH 和 ECDHE)。

+

Ephemeral Diffie-Hellman(缩写为 EDH 或 DHE)使用素数场群。

+

这两种方法都提供完全前向保密 (PFS)。有关正确配置 PFS 的更多讨论,请参阅完全前向保密。

+

临时椭圆曲线要求服务器配置命名曲线,并提供比主字段组更好的安全性和更低的计算成本。但是,主要字段组的实现范围更广,因此通常两者都包含在列表中。

+

kRSA

+

分别使用 RSA 交换、身份验证或两者之一的密码套件。

+

HIGH

+

在协商阶段选择可能的最高安全密码。这些密钥通常具有长度为 128 位或更长的密钥。

+

!RC4

+

没有 RC4。RC4 在 TLS V3 的上下文中存在缺陷。请参阅 TLS 和 WPA 中 RC4 的安全性。

+

!MD5

+

没有 MD5。MD5 不具有防冲突功能,因此不接受消息验证码 (MAC) 或签名。

+

!aNULL:!eNULL

+

不允许明文。

+

!EXP

+

不允许导出加密算法,这些算法在设计上往往很弱,通常使用 40 位和 56 位密钥。

+

美国对密码学系统的出口限制已被取消,不再需要支持。

+

!LOW:!MEDIUM

+

不允许使用低(56 或 64 位长密钥)和中等(128 位长密钥)密码,因为它们容易受到暴力攻击(示例 2-DES)。此规则仍允许三重数据加密标准 (Triple DES),也称为三重数据加密算法 (TDEA) 和高级加密标准 (AES),每个标准都具有大于等于 128 位的密钥,因此更安全。

+

Protocols

+

协议通过SSL_CTX_set_options启用/禁用。建议禁用 SSLv2/v3 并启用 TLS。

+
Pound
+

此 Pound 示例启用 AES-NI 加速，这有助于提高具有支持此功能的处理器的系统的性能。默认配置文件在 Ubuntu、RHEL、CentOS 上位于 /etc/pound/pound.cfg，在 openSUSE 和 SUSE Linux Enterprise 上位于 /etc/pound.cfg。

+
## see pound(8) for details
+daemon      1
+######################################################################
+## global options:
+User        "swift"
+Group       "swift"
+#RootJail   "/chroot/pound"
+## Logging: (goes to syslog by default)
+##  0   no logging
+##  1   normal
+##  2   extended
+##  3   Apache-style (common log format)
+LogLevel    0
+## turn on dynamic scaling (off by default)
+# Dyn Scale 1
+## check backend every X secs:
+Alive       30
+## client timeout
+#Client     10
+## allow 10 second proxy connect time
+ConnTO      10
+## use hardware-acceleration card supported by openssl(1):
+SSLEngine   "aesni"
+# poundctl control socket
+Control "/var/run/pound/poundctl.socket"
+######################################################################
+## listen, redirect and ... to:
+## redirect all swift requests on port 443 to local swift proxy
+ListenHTTPS
+    Address 0.0.0.0
+    Port    443
+    Cert    "/etc/pound/cert.pem"
+    ## Certs to accept from clients
+    ##  CAlist      "CA_file"
+    ## Certs to use for client verification
+    ##  VerifyList  "Verify_file"
+    ## Request client cert - don't verify
+    ##  Ciphers     "AES256-SHA"
+    ## allow PUT and DELETE also (by default only GET, POST and HEAD)?:
+    NoHTTPS11   0
+    ## allow PUT and DELETE also (by default only GET, POST and HEAD)?:
+    xHTTP       1
+    Service
+        BackEnd
+            Address 127.0.0.1
+            Port    80
+        End
+    End
+End
+
Stud
+

密码行可以根据您的需要进行调整，但这是一个合理的起点。默认配置文件位于 /etc/stud 目录中。但是，默认情况下不提供它。

+
# SSL x509 certificate file.
+pem-file = "<path to certificate and key PEM file>"
+# SSL protocol.
+tls = on
+ssl = off
+# List of allowed SSL ciphers.
+# OpenSSL's high-strength ciphers which require authentication
+# NOTE: forbids clear text, use of RC4 or MD5 or LOW and MEDIUM strength ciphers
+ciphers = "HIGH:!RC4:!MD5:!aNULL:!eNULL:!EXP:!LOW:!MEDIUM"
+# Enforce server cipher list order
+prefer-server-ciphers = on
+# Number of worker processes
+workers = 4
+# Listen backlog size
+backlog = 1000
+# TCP socket keepalive interval in seconds
+keepalive = 3600
+# Chroot directory
+chroot = ""
+# Set uid after binding a socket
+user = "www-data"
+# Set gid after binding a socket
+group = "www-data"
+# Quiet execution, report only error messages
+quiet = off
+# Use syslog for logging
+syslog = on
+# Syslog facility to use
+syslog-facility = "daemon"
+# Run as daemon
+daemon = off
+# Report client address using SENDPROXY protocol for haproxy
+# Disabling this until we upgrade to HAProxy 1.5
+write-proxy = off
+
Nginx
+

此 Nginx 示例需要 TLS v1.1 或 v1.2 才能获得最大的安全性。ssl_ciphers 一行可以根据您的需要进行调整，但这是一个合理的起点。缺省配置文件为 /etc/nginx/nginx.conf

+
server {
+    listen <port> ssl;
+    ssl_certificate <path to certificate>;
+    ssl_certificate_key <path to private key>;
+    ssl_protocols TLSv1.1 TLSv1.2;
+    ssl_ciphers HIGH:!RC4:!MD5:!aNULL:!eNULL:!EXP:!LOW:!MEDIUM;
+    ssl_session_tickets off;
+
+    server_name _;
+    keepalive_timeout 5;
+
+    location / {
+
+    }
+}
+
Apache
+

默认配置文件在 Ubuntu 上位于 /etc/apache2/apache2.conf，在 RHEL 和 CentOS 上位于 /etc/httpd/conf/httpd.conf，在 openSUSE 和 SUSE Linux Enterprise 上位于 /etc/apache2/httpd.conf。

+
<VirtualHost <ip address>:80>
+  ServerName <site FQDN>
+  RedirectPermanent / https://<site FQDN>/
+</VirtualHost>
+<VirtualHost <ip address>:443>
+  ServerName <site FQDN>
+  SSLEngine On
+  SSLProtocol +TLSv1 +TLSv1.1 +TLSv1.2
+  SSLCipherSuite HIGH:!RC4:!MD5:!aNULL:!eNULL:!EXP:!LOW:!MEDIUM
+  SSLCertificateFile    /path/<site FQDN>.crt
+  SSLCACertificateFile  /path/<site FQDN>.crt
+  SSLCertificateKeyFile /path/<site FQDN>.key
+  WSGIScriptAlias / <WSGI script location>
+  WSGIDaemonProcess horizon user=<user> group=<group> processes=3 threads=10
+  Alias /static <static files location>
+  <Directory <WSGI dir>>
+    # For http server 2.2 and earlier:
+    Order allow,deny
+    Allow from all
+
+    # Or, in Apache http server 2.4 and later:
+    # Require all granted
+  </Directory>
+</VirtualHost>
+

Apache 中的计算 API SSL 端点,必须与简短的 WSGI 脚本配对。

+
<VirtualHost <ip address>:8447>
+  ServerName <site FQDN>
+  SSLEngine On
+  SSLProtocol +TLSv1 +TLSv1.1 +TLSv1.2
+  SSLCipherSuite HIGH:!RC4:!MD5:!aNULL:!eNULL:!EXP:!LOW:!MEDIUM
+  SSLCertificateFile    /path/<site FQDN>.crt
+  SSLCACertificateFile  /path/<site FQDN>.crt
+  SSLCertificateKeyFile /path/<site FQDN>.key
+  SSLSessionTickets Off
+  WSGIScriptAlias / <WSGI script location>
+  WSGIDaemonProcess osapi user=<user> group=<group> processes=3 threads=10
+  <Directory <WSGI dir>>
+    # For http server 2.2 and earlier:
+    Order allow,deny
+    Allow from all
+
+    # Or, in Apache http server 2.4 and later:
+    # Require all granted
+  </Directory>
+</VirtualHost>
+

HTTP 严格传输安全

+

建议所有生产部署都使用 HTTP 严格传输安全性 (HSTS)。此标头可防止浏览器在建立单个安全连接后建立不安全的连接。如果您已将 HTTP 服务部署在公共域或不受信任的域上,则 HSTS 尤为重要。要启用 HSTS,请将 Web 服务器配置为发送包含所有请求的标头,如下所示:

+
Strict-Transport-Security: max-age=31536000; includeSubDomains
+

在测试期间从 1 天的较短超时开始，并在测试表明不会给用户带来问题后将其提高到一年。请注意，一旦此标头设置为较大的超时值，它（根据设计）就很难禁用。

+

完全前向保密

+

配置 TLS 服务器以实现完美的前向保密需要围绕密钥大小、会话 ID 和会话票证进行仔细规划。此外,对于多服务器部署,共享状态也是一个重要的考虑因素。上面的 Apache 和 Nginx 示例配置禁用了会话票证选项,以帮助缓解其中一些问题。实际部署可能希望启用此功能以提高性能。这可以安全地完成,但需要特别考虑密钥管理。此类配置超出了本指南的范围。我们建议阅读 ImperialViolet 的 How to botch TLS forward secrecy 作为理解问题空间的起点。

+

安全参考架构

+

建议在 TLS 代理和 HTTP 服务的公用网络和管理网络上使用 SSL/TLS。但是,如果实际在任何地方部署 SSL/TLS 太困难,我们建议您评估您的 OpenStack SSL/TLS 需求,并遵循此处讨论的架构之一。

+

在评估其 OpenStack SSL/TLS 需求时,应该做的第一件事是识别威胁。您可以将这些威胁分为外部攻击者和内部攻击者类别,但由于 OpenStack 的某些组件在公共和管理网络上运行,因此界限往往会变得模糊。

+

对于面向公众的服务,威胁非常简单。用户将使用其用户名和密码对 Horizon 和 Keystone 进行身份验证。用户还将使用其 keystone 令牌访问其他服务的 API 端点。如果此网络流量未加密,则攻击者可以使用中间人攻击截获密码和令牌。然后,攻击者可以使用这些有效凭据执行恶意操作。所有实际部署都应使用 SSL/TLS 来保护面向公众的服务。

+

对于部署在管理网络上的服务,由于安全域与网络安全的桥接,威胁并不那么明确。有权访问管理网络的管理员总是有可能决定执行恶意操作。在这种情况下,如果允许攻击者访问私钥,SSL/TLS 将无济于事。当然,并不是管理网络上的每个人都被允许访问私钥,因此使用 SSL/TLS 来保护自己免受内部攻击者的攻击仍然很有价值。即使允许访问您的管理网络的每个人都是 100% 受信任的,仍然存在未经授权的用户通过利用错误配置或软件漏洞访问您的内部网络的威胁。必须记住,用户在 OpenStack Compute 节点中的实例上运行自己的代码,这些节点部署在管理网络上。如果漏洞允许他们突破虚拟机管理程序,他们将可以访问您的管理网络。在管理网络上使用 SSL/TLS 可以最大程度地减少攻击者可能造成的损害。

+

SSL/TLS 代理在前面

+

人们普遍认为,最好尽早加密敏感数据,并尽可能晚地解密。尽管有这种最佳实践,但在OpenStack服务前面使用SSL / TLS代理并在之后使用清晰的通信似乎是很常见的,如下所示:

+

../_images/secure-arch-ref-1.png

+

如上图所示,使用 SSL/TLS 代理的一些问题:

+
    +
  • OpenStack 服务中的原生 SSL/TLS 的性能/扩展性不如 SSL 代理(特别是对于像 Eventlet 这样的 Python 实现)。
  • +
  • OpenStack 服务中的原生 SSL/TLS 没有像更成熟的解决方案那样经过仔细审查/审计。
  • +
  • 本机 SSL/TLS 配置很困难(没有很好的文档记录、测试或跨服务保持一致)。
  • +
  • 权限分离(OpenStack 服务进程不应直接访问用于 SSL/TLS 的私钥)。
  • +
  • 流量检查需要负载均衡。
  • +
+

以上所有问题都是有道理的,但它们都不能阻止在管理网络上使用 SSL/TLS。让我们考虑下一个部署模型。

+

与 API 端点位于同一物理主机上的 SSL/TLS

+

../_images/secure-arch-ref-2.png

+

这与前面的 SSL/TLS 代理非常相似,但 SSL/TLS 代理与 API 端点位于同一物理系统上。API 端点将配置为仅侦听本地网络接口。与 API 端点的所有远程通信都将通过 SSL/TLS 代理进行。通过此部署模型,我们将解决 SSL/TLS 代理中的许多要点:将使用性能良好的经过验证的 SSL 实现。所有服务都将使用相同的 SSL 代理软件,因此 API 端点的 SSL 配置将是一致的。OpenStack 服务进程将无法直接访问用于 SSL/TLS 的私钥,因为您将以不同的用户身份运行 SSL 代理,并使用权限限制访问(以及使用 SELinux 之类的额外强制访问控制)。理想情况下,我们会让 API 端点在 Unix 套接字上监听,这样我们就可以使用权限和强制访问控制来限制对它的访问。不幸的是,根据我们的测试,这在 Eventlet 中目前似乎不起作用。这是一个很好的未来发展目标。

+

SSL/TLS负载平衡器

+

需要检查流量的高可用性或负载均衡部署会怎样?以前的部署模型(与 API 端点位于同一物理主机上的 SSL/TLS)不允许进行深度数据包检测,因为流量是加密的。如果仅出于基本路由目的而需要检查流量,则负载均衡器可能没有必要访问未加密的流量。HAProxy 能够在握手期间提取 SSL/TLS 会话 ID,然后可以使用该 ID 来实现会话亲和性(会话 ID 配置详细信息 此处 )。HAProxy还可以使用TLS服务器名称指示(SNI)扩展来确定应将流量路由到的位置(SNI配置详细信息请在此处)。这些功能可能涵盖了一些最常见的负载均衡器需求。在这种情况下,HAProxy 将能够将 HTTPS 流量直接传递到 API 端点系统:

+

../_images/secure-arch-ref-3.png
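下面给出一个基于 SNI 做 TCP 直通（不解密流量）的 HAProxy 配置示意（frontend/backend 名称、主机名 identity.example.com 以及后端地址均为示例假设值）：

frontend tls-in
    bind *:443
    mode tcp
    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_hello_type 1 }
    use_backend keystone_api if { req_ssl_sni -i identity.example.com }
    default_backend nova_api

backend keystone_api
    mode tcp
    server keystone1 192.0.2.10:443 check

backend nova_api
    mode tcp
    server nova1 192.0.2.20:443 check

由于 TLS 在后端的 API 端点上才终止，负载均衡器本身始终接触不到明文流量。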

+

外部和内部环境的加密分离

+

如果您希望对外部和内部环境进行加密分离，该怎么办？公有云提供商可能希望其面向公众的服务（或代理）使用由 CA 颁发的证书，该证书链接到受信任的根 CA，该根 CA 分布在流行的 SSL/TLS Web 浏览器软件中。对于内部服务，可能希望改用自己的 PKI 来颁发 SSL/TLS 证书。可以通过在网络边界终止 SSL，然后使用内部颁发的证书重新加密来实现这种加密分离。流量将在面向公众的 SSL/TLS 代理上短时间内未加密，但永远不会以明文形式通过网络传输。如果负载均衡器上确实需要深度数据包检测，也可以使用用于实现加密分离的相同重新加密方法。下面是此部署模型的外观：

+

../_images/secure-arch-ref-4.png

+

与大多数事情一样,需要权衡取舍。主要的权衡是在安全性和性能之间。加密是有代价的,但被黑客入侵也是有代价的。每个部署的安全性和性能要求都会有所不同,因此如何使用 SSL/TLS 最终将由个人决定。

+

API 端点

+

使用 OpenStack 云的过程是通过查询 API 端点开始的。虽然公共和专用终结点面临不同的挑战,但这些是高价值资产,如果遭到入侵,可能会带来重大风险。

+

本章建议对面向公共和私有的 API 端点进行安全增强。

+
    +
  • +

    API 端点配置建议

    +
      +
    • 内部 API 通信
    • +
    • 粘贴件和中间件
    • +
    • API 端点进程隔离和策略
    • +
    • API 终端节点速率限制
    • +
    +
  • +
+

API 端点配置建议

+

内部 API 通信

+

OpenStack 提供面向公众和私有的 API 端点。默认情况下,OpenStack 组件使用公开定义的端点。建议将这些组件配置为在适当的安全域中使用 API 端点。

+

服务根据 OpenStack 服务目录选择各自的 API 端点。这些服务可能不遵守列出的公共或内部 API 端点值。这可能会导致内部管理流量路由到外部 API 终结点。

+
在身份服务目录中配置内部 URL
+

Identity 服务目录应了解您的内部 URL。虽然默认情况下不使用此功能,但可以通过配置来利用它。此外,一旦此行为成为默认行为,它应该与预期的更改向前兼容。

+

要为终结点注册内部 URL,请执行以下操作:

+
$ openstack endpoint create identity \
+  --region RegionOne internal \
+  https://MANAGEMENT_IP:5000/v3
+

将 MANAGEMENT_IP 替换为控制器节点的管理 IP 地址。

+
为内部 URL 配置应用程序
+

您可以强制某些服务使用特定的 API 端点。因此,建议必须将每个与另一个服务的 API 通信的 OpenStack 服务显式配置为访问正确的内部 API 端点。

+

每个项目都可能呈现定义目标 API 端点的不一致方式。OpenStack 的未来版本试图通过一致地使用身份服务目录来解决这些不一致问题。

+

配置示例 #1:nova

+
cinder_catalog_info='volume:cinder:internalURL'
+glance_protocol='https'
+neutron_url='https://neutron-host:9696'
+neutron_admin_auth_url='https://neutron-host:9696'
+s3_host='s3-host'
+s3_use_ssl=True
+

配置示例 #2:cinder

+
glance_host = 'https://glance-server'
+

粘贴和中间件

+

OpenStack 中的大多数 API 端点和其他 HTTP 服务都使用 Python Paste Deploy 库。从安全角度来看,此库允许通过应用程序的配置来操作请求筛选器管道。此链中的每个元素都称为中间件。更改管道中筛选器的顺序或添加其他中间件可能会产生不可预知的安全影响。

+

通常,实现者会添加中间件来扩展 OpenStack 的基本功能。我们建议实现者仔细考虑将非标准软件组件添加到其 HTTP 请求管道中可能带来的风险。

+

有关粘贴部署的更多信息,请参阅 Python 粘贴部署文档。

+

API 端点进程隔离和策略

+

您应该隔离 API 端点进程,尤其是那些位于公共安全域中的进程,应尽可能隔离。在部署允许的情况下,API 端点应部署在单独的主机上,以增强隔离性。

+
命名空间
+

现在,许多操作系统都提供分区化支持。Linux 支持命名空间将进程分配到独立的域中。本指南的其他部分更详细地介绍了系统区隔。

+
网络策略
+

由于 API 端点通常桥接多个安全域,因此您必须特别注意 API 进程的划分。有关此区域的其他信息,请参阅桥接安全域。

+

通过仔细建模,您可以使用网络 ACL 和 IDS 技术在网络服务之间强制实施显式点对点通信。作为一项关键的跨域服务,这种显式强制执行对 OpenStack 的消息队列服务非常有效。

+

要实施策略,您可以配置服务、基于主机的防火墙(例如 iptables)、本地策略(SELinux 或 AppArmor)以及可选的全局网络策略。
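例如，可以用基于主机的防火墙把消息队列端口限制为只允许来自管理网络的明确来源访问（网段为示例假设值，5672/tcp 为 AMQP 的默认端口）：

# iptables -A INPUT -p tcp --dport 5672 -s 10.0.1.0/24 -j ACCEPT
# iptables -A INPUT -p tcp --dport 5672 -j DROP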

+
强制访问控制
+

您应该将 API 端点进程彼此隔离,并隔离计算机上的其他进程。这些进程的配置不仅应通过任意访问控制,还应通过强制访问控制来限制这些进程。这些增强的访问控制的目标是帮助遏制和升级 API 端点安全漏洞。通过强制访问控制,此类违规行为会严重限制对资源的访问,并针对此类事件提供早期警报。

+

API 端点速率限制

+

速率限制是一种控制基于网络的应用程序接收事件频率的方法。如果不存在可靠的速率限制,则可能导致应用程序容易受到各种拒绝服务攻击。对于 API 尤其如此,因为 API 的本质是旨在接受高频率的类似请求类型和操作。

+

在 OpenStack 中,建议通过速率限制代理或 Web 应用程序防火墙为所有端点(尤其是公共端点)提供额外的保护层。

+

在配置和实现任何速率限制功能时,运营商必须仔细规划并考虑其 OpenStack 云中用户和服务的个人性能需求,这一点至关重要。

+

提供速率限制的常见解决方案是 Nginx、HAProxy、OpenRepose 或 Apache 模块，例如 mod_ratelimit、mod_qos 或 mod_security。
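以 Nginx 为例，下面是在反向代理上按客户端 IP 对 API 端点限速的一个配置示意（zone 名称、速率、突发值和后端地址均为示例假设值）：

http {
    limit_req_zone $binary_remote_addr zone=api_limit:10m rate=10r/s;

    server {
        listen 443 ssl;
        # TLS 证书等指令参见上文 Nginx 示例，此处省略
        location / {
            limit_req zone=api_limit burst=20 nodelay;
            proxy_pass http://127.0.0.1:8774;
        }
    }
}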

+

身份鉴别

+

Keystone身份服务为OpenStack系列服务专门提供身份、令牌、目录和策略服务。身份服务组织为一组内部服务,通过一个或多个端点暴露。这些服务中的许多是由前端以组合方式使用的。例如,身份验证调用通过身份服务验证用户和项目凭据。如果成功,它将使用令牌服务创建并返回令牌。更多信息可以在Keystone开发者文档中找到。

+
    +
  • +

    认证

    +
      +
    • 无效的登录尝试
    • +
    • 多因素认证
    • +
    +
  • +
  • +

    认证方法

    +
      +
    • 内部实施的认证方法
    • +
    • 外部认证方法
    • +
    +
  • +
  • +

    授权

    +
      +
    • 建立正式的访问控制策略
    • +
    • 服务授权
    • +
    • 管理员用户
    • +
    • 终端用户
    • +
    +
  • +
  • +

    策略

    +
  • +
  • +

    令牌

    +
      +
    • Fernet 令牌
    • +
    • JWT 令牌
    • +
    +
  • +
  • +

    +
  • +
  • +

    联合 Keystone

    +
  • +
  • +

    为什么要使用联合鉴别

    +
  • +
  • +

    检查表

    +
      +
    • Check-Identity-01:配置文件的用户/组所有权是否设置为 keystone?
    • +
    • Check-Identity-02:是否为身份配置文件设置了严格权限
    • +
    • Check-Identity-03:是否为 Identity 启用了 TLS?
    • +
    • Check-Identity-04:(已过时)
    • +
    • Check-Identity-05:是否 max_request_body_size 设置为默认值 (114688)?
    • +
    • Check-Identity-06:是否已禁用 /etc/keystone/keystone.conf 中的管理员令牌?
    • +
    • Check-Identity-07:/etc/keystone/keystone.conf 中的 insecure_debug 是否为 false?
    • +
    • Check-Identity-08:是否使用 /etc/keystone/keystone.conf 中的 fernet 令牌?
    • +
    +
  • +
+

认证

+

身份认证是任何实际OpenStack部署中不可或缺的一部分,因此应该仔细考虑系统设计的这一方面。本主题的完整处理超出了本指南的范围,但是以下各节介绍了一些关键主题。

+

从根本上说,身份认证是确认身份的过程 - 用户实际上是他们声称的身份。一个熟悉的示例是在登录系统时提供用户名和密码。

+

OpenStack 身份鉴别服务(keystone)支持多种身份验证方法,包括用户名和密码、LDAP 和外部身份验证方法。身份认证成功后,身份鉴别服务会向用户提供用于后续服务请求的授权令牌。

+

传输层安全性 (TLS) 使用 X.509 证书在服务和人员之间提供身份验证。尽管 TLS 的默认模式是仅服务器端身份验证,但证书也可用于客户端身份验证。

+

无效的登录尝试

+

从 Newton 版本开始,身份鉴别服务可以在多次登录尝试失败后限制对帐户的访问。重复失败登录尝试的模式通常是暴力攻击的指标(请参阅攻击类型)。这种类型的攻击在公有云部署中更为普遍。

+

对于需要此功能的旧部署,可以使用外部身份验证系统进行预防,该系统在配置的登录尝试失败次数后锁定帐户。然后,只有通过进一步的侧信道干预才能解锁该帐户。

+

如果无法预防,则可以使用检测来减轻损害。检测涉及频繁查看访问控制日志,以识别未经授权的帐户访问尝试。可能的补救措施包括检查用户密码的强度,或通过防火墙规则阻止攻击的网络源。Keystone 服务器上限制连接数的防火墙规则可用于降低攻击效率,从而劝阻攻击者。
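作为示意,下面的 iptables 规则草案利用 recent 模块限制单个来源对 keystone 5000 端口的新建连接频率(端口与阈值均为示例,需按实际环境调整):

# 记录新建连接的来源地址
iptables -A INPUT -p tcp --dport 5000 -m state --state NEW -m recent --set --name keystone
# 60 秒内同一来源超过 20 次新建连接则丢弃
iptables -A INPUT -p tcp --dport 5000 -m state --state NEW -m recent --update --seconds 60 --hitcount 20 --name keystone -j DROP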

+

此外,检查帐户活动是否存在异常登录时间和可疑操作,并采取纠正措施(如禁用帐户)也很有用。通常,信用卡提供商采用这种方法进行欺诈检测和警报。

+

多因素身份验证

+

采用多重身份验证对特权用户帐户进行网络访问。身份鉴别服务通过可提供此功能的 Apache Web 服务器支持外部身份验证服务。服务器还可以使用证书强制执行客户端身份验证。

+

此建议可防范暴力破解、社会工程,以及可能泄露管理员密码的鱼叉式和大规模网络钓鱼攻击。

+

身份验证方法

+

内部实现的认证方式

+

身份认证服务可以将用户凭据存储在 SQL 数据库中,也可以使用符合 LDAP 的目录服务器。身份数据库可以与其他 OpenStack 服务使用的数据库分开,以降低存储凭据泄露的风险。

+

当您使用用户名和密码进行身份验证时,身份服务不会强制执行 NIST Special Publication 800-118(草案)中推荐的有关密码强度、过期或失败身份验证尝试的策略。希望执行更严格密码策略的组织应考虑使用身份服务的扩展或外部认证服务。

+

LDAP 简化了身份认证与组织现有目录服务和用户帐户管理流程的集成。

+

OpenStack 中的身份验证和授权策略可以委托给其他服务。一个典型的用例是寻求部署私有云的组织,并且已经在 LDAP 系统中拥有员工和用户的数据库。使用此身份验证机构,将对身份服务的请求委托给 LDAP 系统,然后 LDAP 系统将根据其策略进行授权或拒绝。身份验证成功后,身份鉴别服务会生成一个令牌,用于访问授权服务。

+

请注意,如果 LDAP 系统具有为用户定义的属性,例如 admin、finance、HR 等,则必须将这些属性映射到身份鉴别中的角色和组,以供各种 OpenStack 服务使用。该文件 /etc/keystone/keystone.conf 将 LDAP 属性映射到身份属性。

+

不得允许身份服务写入用于 OpenStack 部署之外的身份验证的 LDAP 服务,因为这将允许具有足够权限的 keystone 用户对 LDAP 目录进行更改。这将允许在更广泛的组织内进行权限升级,或促进对其他信息和资源的未经授权的访问。在这样的部署中,用户配置将超出 OpenStack 部署的范围。
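下面是一个示意性的 keystone LDAP 驱动配置片段,演示如何以只读方式对接现有目录(连接参数均为占位值;选项名称来自 keystone 的 [ldap] 配置节):

[identity]
driver = ldap

[ldap]
url = ldap://ldap.example.com
user = cn=readonly,dc=example,dc=com
password = LDAP_BIND_PASSWORD
suffix = dc=example,dc=com
# 禁止身份服务写回 LDAP 目录
user_allow_create = false
user_allow_update = false
user_allow_delete = false
group_allow_create = false
group_allow_update = false
group_allow_delete = false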

+

注意

+
有一个关于 keystone.conf 权限的 OpenStack 安全说明 (OSSN)。
+
+有一个关于潜在 DoS 攻击的 OpenStack 安全说明 (OSSN)。
+

外部认证方式

+

本组织可能希望实现外部身份验证,以便与现有身份验证服务兼容,或强制实施更强的身份验证策略要求。尽管密码是最常见的身份验证形式,但它们可以通过多种方法泄露,包括击键记录和密码泄露。外部身份验证服务可以提供替代形式的身份验证,以最大程度地降低弱密码带来的风险。

+

这些包括:

+

密码策略实施

+

要求用户密码符合长度、字符多样性、过期或登录尝试失败的最低标准。在外部身份验证方案中,这将是原始身份存储上的密码策略。

+

多因素身份验证

+

身份验证服务要求用户根据他们拥有的内容(如一次性密码令牌或 X.509 证书)和他们知道的内容(如密码)提供信息。

+

Kerberos

+

一种使用“票证”进行双向认证的网络协议,用于保护客户端和服务器之间的通信。Kerberos 票证授予票证可安全地为特定服务提供票证。

+

授权

+

身份服务支持组和角色的概念。用户属于组,而组具有角色列表。OpenStack 服务引用尝试访问该服务的用户的角色。OpenStack 策略执行器中间件会考虑与每个资源关联的策略规则,然后考虑用户的组/角色和关联,以确定是否允许访问所请求的资源。

+

策略实施中间件支持对 OpenStack 资源进行细粒度的访问控制。策略中深入讨论了策略的行为。

+

建立正式的访问控制策略

+

在配置角色、组和用户之前,请记录 OpenStack 安装所需的访问控制策略。这些策略应与组织的任何法规或法律要求保持一致。将来对访问控制配置的修改应与正式策略保持一致。策略应包括创建、删除、禁用和启用帐户以及为帐户分配权限的条件和过程。定期查看策略,并确保配置符合批准的策略。

+

服务授权

+

云管理员必须为每个服务定义一个具有管理员角色的用户,如《OpenStack 管理员指南》中所述。此服务帐户为服务提供对用户进行身份验证的授权。

+

可以将计算和对象存储服务配置为使用身份服务来存储身份验证信息。存储身份验证信息的其他选项包括使用“tempAuth”文件,但不应将其部署在生产环境中,因为密码以纯文本形式显示。

+

身份鉴别服务支持对 TLS 进行客户端身份验证,该身份验证可能已启用。除了用户名和密码之外,TLS 客户端身份验证还提供了额外的身份验证因素,从而提高了用户标识的可靠性。当用户名和密码可能被泄露时,它降低了未经授权访问的风险。但是,向用户颁发证书会产生额外的管理开销和成本,这在每次部署中都可能不可行。

+

注意

+
我们建议您将客户端身份验证与 TLS 结合使用,以便对身份鉴别服务进行身份验证。
+

云管理员应保护敏感的配置文件免遭未经授权的修改。这可以通过 SELinux 等强制访问控制框架来实现,保护对象包括 /etc/keystone/keystone.conf 以及 X.509 证书等文件。

+

使用 TLS 的客户端身份验证需要向服务颁发证书。这些证书可以由外部或内部证书颁发机构签名。默认情况下,OpenStack 服务会根据受信任的 CA 检查证书签名的有效性,如果签名无效或 CA 不可信,连接将失败。云部署人员也可以使用自签名证书;在这种情况下,必须禁用有效性检查,或者将证书标记为受信任。若要对自签名证书禁用验证,请在 /etc/nova/api-paste.ini 文件的 [filter:authtoken] 部分中设置 insecure=True;注意,该设置同时会禁用对其他组件证书的验证。

+

管理员用户

+

我们建议管理员用户使用身份服务和支持 2 因素身份验证的外部身份验证服务(例如证书)进行身份验证。这样可以降低密码可能被泄露的风险。此建议符合 NIST 800-53 IA-2(1) 指南,即使用多重身份验证对特权帐户进行网络访问。

+

终端用户

+

身份鉴别服务可以直接提供最终用户身份验证,也可以配置为使用外部身份验证方法以符合组织的安全策略和要求。

+

策略

+

每个 OpenStack 服务都在关联的策略文件中定义其资源的访问策略。例如,资源可以是 API 访问、附加到卷或启动实例的能力。策略规则以 JSON 格式指定,文件称为 policy.json .此文件的语法和格式在配置参考中进行了讨论。

+

云管理员可以修改或更新这些策略,以控制对各种资源的访问。确保对访问控制策略的任何更改都不会无意中削弱任何资源的安全性。另请注意,对 policy.json 文件的更改会立即生效,并且不需要重新启动服务。

+

以下示例显示了身份服务如何将创建、更新和删除资源的访问权限限制为仅具有 cloud_admin 角色的用户(该角色定义为 role = admin 与 domain_id = admin_domain_id 的组合),而 get 和 list 资源可供具有 cloud_admin 或 admin 角色的用户使用。

+
{
+    "admin_required": "role:admin",
+    "cloud_admin": "rule:admin_required and domain_id:admin_domain_id",
+    "service_role": "role:service",
+    "service_or_admin": "rule:admin_required or rule:service_role",
+    "owner" : "user_id:%(user_id)s or user_id:%(target.token.user_id)s",
+    "admin_or_owner": "(rule:admin_required and domain_id:%(target.token.user.domain.id)s) or rule:owner",
+    "admin_or_cloud_admin": "rule:admin_required or rule:cloud_admin",
+    "admin_and_matching_domain_id": "rule:admin_required and domain_id:%(domain_id)s",
+    "service_admin_or_owner": "rule:service_or_admin or rule:owner",
+
+    "default": "rule:admin_required",
+
+    "identity:get_service": "rule:admin_or_cloud_admin",
+    "identity:list_services": "rule:admin_or_cloud_admin",
+    "identity:create_service": "rule:cloud_admin",
+    "identity:update_service": "rule:cloud_admin",
+    "identity:delete_service": "rule:cloud_admin",
+
+    "identity:get_endpoint": "rule:admin_or_cloud_admin",
+    "identity:list_endpoints": "rule:admin_or_cloud_admin",
+    "identity:create_endpoint": "rule:cloud_admin",
+    "identity:update_endpoint": "rule:cloud_admin",
+    "identity:delete_endpoint": "rule:cloud_admin"
+}
+

令牌

+

用户通过身份验证后,会生成一个令牌,用于授权和访问 OpenStack 环境。令牌可以具有可变的生命周期,默认过期时间为 1 小时。建议将过期时间设置为在保证内部服务有足够时间完成任务的前提下尽可能低的值:如果令牌在任务完成之前过期,云可能会变得无响应或停止提供服务。一个例子是计算服务将磁盘映像传输到虚拟机监控程序进行本地缓存所需的时间。在使用有效的服务令牌时,允许获取已过期的令牌。

+

令牌通常在 Identity 服务响应的较大上下文的结构中传递。这些响应还提供了各种 OpenStack 服务的目录。列出了每个服务的名称、内部访问、管理员访问和公共访问的访问终结点。

+

可以使用标识 API 吊销令牌。

+

在 Stein 版本中,有两种受支持的令牌类型:fernet 和 JWT。

+

fernet 和 JWT 令牌都不需要持久性。Keystone 令牌数据库不再因身份验证的副作用而遭受膨胀。过期令牌的修剪会自动进行。也不再需要跨多个节点进行复制。只要每个 keystone 节点共享相同的存储库,就可以在所有节点上立即创建和验证令牌。

+

Fernet 令牌

+

Fernet 令牌是 Stein 版本中默认且受支持的令牌提供程序。Fernet 是一种专为 API 令牌设计的安全消息传递格式。它们是轻量级的(大小在 180 到 240 字节之间),并减少了运行云所需的运营开销。身份验证和授权元数据被打包到消息负载中,然后加密并签名为 fernet 令牌。
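Fernet 令牌依赖于一个需要在各 keystone 节点之间共享并定期轮换的密钥库。下面是常见的初始化与轮换命令(用户、组与默认路径为典型取值,具体以实际部署为准):

# 初始化 fernet 密钥库(默认位于 /etc/keystone/fernet-keys/)
$ keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
# 定期轮换密钥,并将密钥库同步到所有 keystone 节点
$ keystone-manage fernet_rotate --keystone-user keystone --keystone-group keystone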

+

JWT 令牌

+

JSON Web 签名 (JWS) 令牌是在 Stein 版本中引入的。与fernet相比,JWS通过限制需要共享对称加密密钥的主机数量,为运营商提供了潜在的好处。这有助于防止可能已在部署中站稳脚跟的恶意参与者扩散到其他节点。

+

有关这些令牌提供程序之间差异的更多详细信息,请参阅此处 https://docs.openstack.org/keystone/stein/admin/tokens-overview.html#token-providers

+

域

+

域是项目、用户和组的高级容器。因此,它们可用于集中管理所有基于 keystone 的身份组件。随着帐户域的引入,服务器、存储和其他资源现在可以在逻辑上分组到多个项目(以前称为租户)中,这些项目本身可以分组到类似主帐户的容器下。此外,可以在一个帐户域中管理多个用户,并为每个项目分配不同的角色。

+

Identity V3 API 支持多个域。不同域的用户可能在不同的身份验证后端中表示,甚至具有不同的属性,这些属性必须映射到一组角色和权限,这些角色和权限在策略定义中用于访问各种服务资源。

+

如果规则可以仅指定对管理员用户和属于租户的用户的访问权限,则映射可能很简单。在其他情况下,云管理员可能需要批准每个租户的映射例程。

+

特定于域的身份验证驱动程序允许使用特定于域的配置文件为多个域配置标识服务。启用驱动程序并设置特定于域的配置文件位置发生在 keystone.conf 文件 [identity] 部分中:

+
[identity]
+domain_specific_drivers_enabled = True
+domain_config_dir = /etc/keystone/domains
+

任何没有特定于域的配置文件的域都将使用主 keystone.conf 文件中的选项。
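例如,要让名为 DOMAIN_NAME 的域使用 LDAP 后端,可在上面指定的目录下放置一个按域命名的配置文件(文件名格式为 keystone.<域名>.conf;以下内容仅为示意,连接参数为占位值):

# /etc/keystone/domains/keystone.DOMAIN_NAME.conf
[identity]
driver = ldap

[ldap]
url = ldap://ldap.example.com
suffix = dc=example,dc=com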

+

联合鉴权

+

重要定义:

+

服务提供商 (SP)

+

向委托人或其他系统实体提供服务的系统实体,在本例中,OpenStack Identity 是服务提供者。

+

身份提供商 (IdP)

+

目录服务(如 LDAP、RADIUS 和 Active Directory)允许用户使用用户名和密码登录,是身份提供商处身份验证令牌(例如密码)的典型来源。

+

联合鉴权是一种在 IdP 和 SP 之间建立信任的机制,在本例中,是在身份提供者和 OpenStack Cloud 提供的服务之间建立信任。它提供了一种安全的方法,可以使用现有凭据跨多个端点访问云资源,例如服务器、卷和数据库。凭证由用户的 IdP 维护。

+

为什么要使用联合身份?

+

两个根本原因:

+
  1. 降低复杂性,使部署更易于保护。
  2. 为您和您的用户节省时间:
     • 集中管理帐户,防止 OpenStack 基础架构内部的重复工作。
     • 减轻用户负担。单点登录允许使用单一身份验证方法来访问许多不同的服务和环境。
     • 将密码恢复过程的责任转移到 IdP。
+

进一步的理由和细节可以在 Keystone 关于联合的文档中找到。

+

检查表

+

Check-Identity-01:配置文件的用户/组所有权是否设置为 keystone?

+

配置文件包含组件平稳运行所需的关键参数和信息。如果非特权用户有意或无意地修改或删除任何参数或文件本身,则会导致严重的可用性问题,从而导致对其他最终用户的拒绝服务。因此,此类关键配置文件的用户和组所有权必须设置为该组件所有者。此外,包含目录应具有相同的所有权,以确保正确拥有新文件。

+

运行以下命令:

+
$ stat -L -c "%U %G" /etc/keystone/keystone.conf | egrep "keystone keystone"
+$ stat -L -c "%U %G" /etc/keystone/keystone-paste.ini | egrep "keystone keystone"
+$ stat -L -c "%U %G" /etc/keystone/policy.json | egrep "keystone keystone"
+$ stat -L -c "%U %G" /etc/keystone/logging.conf | egrep "keystone keystone"
+$ stat -L -c "%U %G" /etc/keystone/ssl/certs/signing_cert.pem | egrep "keystone keystone"
+$ stat -L -c "%U %G" /etc/keystone/ssl/private/signing_key.pem | egrep "keystone keystone"
+$ stat -L -c "%U %G" /etc/keystone/ssl/certs/ca.pem | egrep "keystone keystone"
+$ stat -L -c "%U %G" /etc/keystone | egrep "keystone keystone"
+

通过:如果所有这些配置文件的用户和组所有权都设置为 keystone。上述命令显示 keystone keystone 的输出。

+

失败:如果上述命令未返回任何输出,因为用户或组所有权可能已设置为除 keystone 以外的任何用户。

+

推荐于:内部实现的身份验证方法。

+

Check-Identity-02:是否为 Identity 配置文件设置了严格权限?

+

与前面的检查类似,建议对此类配置文件设置严格的访问权限。

+

运行以下命令:

+
$ stat -L -c "%a" /etc/keystone/keystone.conf
+$ stat -L -c "%a" /etc/keystone/keystone-paste.ini
+$ stat -L -c "%a" /etc/keystone/policy.json
+$ stat -L -c "%a" /etc/keystone/logging.conf
+$ stat -L -c "%a" /etc/keystone/ssl/certs/signing_cert.pem
+$ stat -L -c "%a" /etc/keystone/ssl/private/signing_key.pem
+$ stat -L -c "%a" /etc/keystone/ssl/certs/ca.pem
+$ stat -L -c "%a" /etc/keystone
+

还可以进行更广泛的限制:如果包含目录设置为 750,则保证此目录中新创建的文件具有所需的权限。

+

通过:如果权限设置为 640 或更严格,或者包含目录设置为 750。

+

失败:如果权限未设置为至少 640/750。

+

推荐于:内部实现的身份验证方法。

+

Check-Identity-03:是否为 Identity 启用了 TLS?

+

OpenStack 组件使用各种协议相互通信,通信可能涉及敏感或机密数据。攻击者可能会尝试窃听频道以访问敏感信息。因此,所有组件都必须使用安全通信协议(如 HTTPS)相互通信。

+

如果将 HTTP/WSGI 服务器用于标识,则应在 HTTP/WSGI 服务器上启用 TLS。

+

通过:如果在 HTTP 服务器上启用了 TLS。

+

失败:如果 HTTP 服务器上未启用 TLS。

+

推荐于:安全通信。

+

Check-Identity-04:(已过时)

+

Check-Identity-05:是否 max_request_body_size 设置为默认值 (114688)?

+

该参数 max_request_body_size 定义每个请求的最大正文大小(以字节为单位)。如果未定义最大大小,攻击者可以构建任意大容量请求,导致服务崩溃,最终导致拒绝服务攻击。分配最大值可确保阻止任何恶意的超大请求,从而确保组件的持续可用性。

+

通过:如果参数 max_request_body_size in /etc/keystone/keystone.conf 的值设置为默认值 (114688) 或根据您的环境设置的某个合理值。

+

失败:如果未设置参数 max_request_body_size 值。

+

Check-Identity-06:是否已禁用 /etc/keystone/keystone.conf 中的管理员令牌?

+

管理员令牌通常用于引导 Identity。此令牌是最有价值的标识资产,可用于获取云管理员权限。

+

通过:如果 /etc/keystone/keystone.conf 中 [DEFAULT] 部分下的 admin_token 已禁用,并且 /etc/keystone/keystone-paste.ini 中 [filter:admin_token_auth] 下的 AdminTokenAuthMiddleware 已删除。

+

失败:如果在 [DEFAULT] 部分下设置了 admin_token,并且 AdminTokenAuthMiddleware 仍存在于 keystone-paste.ini 中。

+

建议

+
禁用 `admin_token` 意味着它的值为 `<none>` 。
+

Check-Identity-07:/etc/keystone/keystone.conf 中的 insecure_debug 是否为 false?

+

如果 insecure_debug 设置为 true,则服务器将在 HTTP 响应中返回信息,这些信息可能允许未经身份验证或经过身份验证的用户获取比正常情况更多的信息,例如有关身份验证失败原因的其他详细信息。

+

通过:如果 insecure_debug under [DEFAULT] section in /etc/keystone/keystone.conf 为 false。

+

失败:如果 insecure_debug under [DEFAULT] section in /etc/keystone/keystone.conf 为 true。

+

Check-Identity-08:是否使用 /etc/keystone/keystone.conf 中的 fernet 令牌?

+

OpenStack Identity 服务提供 uuid 和 fernet 作为令牌提供者。uuid 令牌必须持久化存储,并被视为不安全。

+

通过:如果 /etc/keystone/keystone.conf 中 [token] 部分下的 provider 参数值设置为 fernet。

+

失败:如果 [token] 部分下的 provider 参数值设置为 uuid。

+

仪表板

+

Dashboard (horizon) 是 OpenStack 仪表板,它为用户提供了一个自助服务门户,以便在管理员设置的限制范围内配置自己的资源。其中包括预置用户、定义实例变种、上传虚拟机 (VM) 映像、管理网络、设置安全组、启动实例以及通过控制台访问实例。

+

仪表板基于 Django Web 框架,确保 Django 的安全部署实践直接应用于 Horizon。本指南提供了一组 Django 安全建议。更多信息可以通过阅读 Django 文档找到。

+

仪表板附带默认安全设置,并具有部署和配置文档。

+
    +
  • +

    域名、仪表板升级和基本 Web 服务器配置

    +
      +
    • 域名
    • +
    • 基本 Web 服务器配置
    • +
    • 允许的主机
    • +
    • 映像上传
    • +
    +
  • +
  • +

    HTTPS、HSTS、XSS 和 SSRF

    +
      +
    • 跨站点脚本 (XSS)
    • +
    • 跨站点请求伪造 (CSRF)
    • +
    • 跨帧脚本 (XFS)
    • +
    • HTTPS协议
    • +
    • HTTP 严格传输安全 (HSTS)
    • +
    +
  • +
  • +

    前端缓存和会话后端

    +
      +
    • 前端缓存
    • +
    • 会话后端
    • +
    +
  • +
  • +

    静态媒体

    +
  • +
  • 密码
  • +
  • 密钥
  • +
  • 网站数据
  • +
  • 跨域资源共享 (CORS)
  • +
  • 调试
  • +
  • +

    检查表

    +
      +
    • Check-Dashboard-01:用户/配置文件组是否设置为 root/horizon?
    • +
    • Check-Dashboard-02:是否为 Horizon 配置文件设置了严格权限?
    • +
    • Check-Dashboard-03:参数是否 DISALLOW_IFRAME_EMBED 设置为 True ?
    • +
    • Check-Dashboard-04:参数是否 CSRF_COOKIE_SECURE 设置为 True ?
    • +
    • Check-Dashboard-05:参数是否 SESSION_COOKIE_SECURE 设置为 True ?
    • +
    • Check-Dashboard-06:参数是否 SESSION_COOKIE_HTTPONLY 设置为 True ?
    • +
    • Check-Dashboard-07: PASSWORD_AUTOCOMPLETE 设置为 False ?
    • +
    • Check-Dashboard-08: DISABLE_PASSWORD_REVEAL 设置为 True ?
    • +
    • Check-Dashboard-09: ENFORCE_PASSWORD_CHECK 设置为 True ?
    • +
    • Check-Dashboard-10:是否 PASSWORD_VALIDATOR 已配置?
    • +
    • Check-Dashboard-11:是否 SECURE_PROXY_SSL_HEADER 已配置?
    • +
    +
  • +
+

域名、仪表板升级和基本 Web 服务器配置

+

域名

+

许多组织通常在总体组织域的子域中部署 Web 应用程序,用户也很自然地期望在类似 openstack.example.org 的地址找到仪表板。在此上下文中,通常还存在部署在同一个二级域命名空间中的其他应用程序。这种名称结构非常方便,并简化了名称服务器的维护。

+

我们强烈建议将仪表板部署到专用的二级域(例如 https://example.com),而不是部署在任何级别的共享子域上(例如 https://openstack.example.org 或 https://horizon.openstack.example.org)。我们还建议不要部署到裸内部域,例如 https://horizon/ 。这些建议基于浏览器同源策略的限制。

+

如果将仪表板部署在还托管用户生成内容的域中,则本指南中提供的建议无法有效防范已知攻击,即使此内容驻留在单独的子域中也是如此。用户生成的内容可以包含任何类型的脚本、图像或上传内容。大多数主要的 Web 存在(包括 googleusercontent.com、fbcdn.com、github.io 和 twimg.co)都使用这种方法将用户生成的内容与 Cookie 和安全令牌隔离开来。

+

如果您不遵循有关二级域的建议,请避免使用 Cookie 支持的会话存储,并采用 HTTP 严格传输安全 (HSTS)。当部署在子域上时,仪表板的安全性等同于部署在同一二级域上的安全性最低的应用程序。

+

基本的 Web 服务器配置

+

仪表板应部署为 HTTPS 代理(如 Apache 或 Nginx)后面的 Web 服务网关接口 (WSGI) 应用程序。如果 Apache 尚未使用,我们建议使用 Nginx,因为它是轻量级的,并且更容易正确配置。

+

使用 Nginx 时,我们建议 gunicorn 作为 WSGI 主机,并具有适当数量的同步工作线程。使用 Apache 时,我们建议 mod_wsgi 托管仪表板。

+

允许的主机

+

使用 OpenStack 仪表板提供的完全限定主机名配置设置 ALLOWED_HOSTS 。提供此设置后,如果传入 HTTP 请求的“Host:”标头中的值与此列表中的任何值都不匹配,则将引发错误,并且请求者将无法继续。如果未能配置此选项,或者在指定的主机名中使用通配符,将导致仪表板容易受到与虚假 HTTP 主机标头关联的安全漏洞的影响。
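例如,在 /etc/openstack-dashboard/local_settings.py 中明确列出仪表板对外使用的完全限定主机名(下面的域名仅为示例,请替换为实际域名):

ALLOWED_HOSTS = ['dashboard.example.com']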

+

有关更多详细信息,请参阅 Django 文档。

+

Horizon 镜像上传

+

我们建议实施者禁用HORIZON_IMAGES_ALLOW_UPLOAD,除非他们已实施防止资源耗尽和拒绝服务的计划。

+

HTTPS、HSTS、XSS 和 SSRF

+

跨站脚本 (XSS)

+

与许多类似的系统不同,OpenStack 仪表板允许在大多数字段中使用整个 Unicode 字符集。这意味着开发人员犯错误的自由度较小,这些错误为跨站点脚本 (XSS) 打开了攻击媒介。

+

Dashboard 为开发人员提供了避免创建 XSS 漏洞的工具,但只有在开发人员正确使用它们时才有效。审核任何自定义仪表板时,应特别注意 mark_safe 函数的使用、自定义模板标记中 is_safe 的使用、safe 模板标记的使用、任何关闭自动转义的位置,以及任何可能对未正确转义的数据求值的 JavaScript。

+

跨站请求伪造 (CSRF)

+

Django 有专门的中间件用于跨站请求伪造 (CSRF)。有关更多详细信息,请参阅 Django 文档。

+

OpenStack 仪表板的设计旨在阻止开发人员通过自定义仪表板引入跨站点脚本漏洞,但仍应审核自定义仪表板(尤其是使用多个 JavaScript 实例的仪表板)是否存在漏洞,例如不当使用 @csrf_exempt 装饰器。在放宽这些安全设置之前,应仔细评估任何不遵循这些建议的仪表板。

+

跨帧脚本 (XFS)

+

传统浏览器仍然容易受到跨帧脚本 (XFS) 漏洞的攻击,因此 OpenStack 仪表板提供了一个选项 DISALLOW_IFRAME_EMBED ,允许在部署中不使用 iframe 的情况下进行额外的安全强化。

+

HTTPS

+

使用来自公认的证书颁发机构 (CA) 的有效受信任证书,将仪表板部署在安全 HTTPS 服务器后面。仅当信任根预安装在所有用户浏览器中时,私有组织颁发的证书才适用。

+

配置对仪表板域的 HTTP 请求,以重定向到完全限定的 HTTPS URL。

+

HTTP 严格传输安全 (HSTS)

+

强烈建议使用 HTTP 严格传输安全 (HSTS)。
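作为示意,在 Nginx 这类 HTTPS 代理上可以通过响应头启用 HSTS(max-age 等取值仅为示例,启用前请确认全站均可通过 HTTPS 访问):

# 在 server 块中为所有响应附加 HSTS 头
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;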

+

注意

+
如果您在 Web 服务器前面使用 HTTPS 代理,而不是使用具有 HTTPS 功能的 HTTP 服务器,请修改该 `SECURE_PROXY_SSL_HEADER` 变量。有关修改 `SECURE_PROXY_SSL_HEADER` 变量的信息,请参阅 Django 文档。
+
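在这种代理拓扑下,local_settings.py 中的设置通常类似于下面的 Django 配置(请确保代理确实会覆盖客户端传入的 X-Forwarded-Proto 头,否则该头可能被伪造):

SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')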

有关 HTTPS 配置(包括 HSTS 配置)的更具体建议和服务器配置,请参阅“安全通信”一章。

+

前端缓存和会话后端

+

前端缓存

+

我们不建议在仪表板中使用前端缓存工具。仪表板正在渲染直接由 OpenStack API 请求生成的动态内容,前端缓存层(如 varnish)可能会阻止显示正确的内容。在 Django 中,静态媒体直接从 Apache 或 Nginx 提供,并且已经受益于 Web 主机缓存。

+

会话后端

+

Horizon 的默认会话后端 django.contrib.sessions.backends.signed_cookies 将用户数据保存在浏览器中存储的已签名但未加密的 Cookie 中。由于每个仪表板实例都是无状态的,因此前面提到的方法提供了实现最简单的会话后端扩展的能力。

+

应该注意的是,在这种类型的实现中,敏感的访问令牌将存储在浏览器中,并将随着每个请求的发出而传输。后端确保会话数据的完整性,即使传输的数据仅通过 HTTPS 加密。

+

如果您的架构允许共享存储,并且您正确配置了缓存,我们建议您将其设置为 SESSION_ENGINE django.contrib.sessions.backends.cache 并用作基于缓存的会话后端,并将 memcached 作为缓存。Memcached 是一种高效的内存键值存储,用于存储数据块,可在高可用性和分布式环境中使用,并且易于配置。但是,您需要确保没有数据泄漏。Memcached 利用备用 RAM 来存储经常访问的数据块,就像重复访问信息的内存缓存一样。由于 memcached 使用本地内存,因此不会产生数据库和文件系统使用开销,从而导致直接从 RAM 而不是从磁盘访问数据。

+

我们建议使用 memcached 而不是本地内存缓存,因为它速度快,数据保留时间更长,多进程安全,并且能够在多个服务器上共享缓存,但仍将其视为单个缓存。

+

要启用基于 memcached 的会话后端,请进行如下设置:

+
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
+CACHES = {
+    'default': {
+        # LOCATION 为示例值,请替换为实际的 memcached 地址
+        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
+        'LOCATION': 'memcached.example.com:11211',
+    }
+}
+

有关更多详细信息,请参阅 Django 文档。

+

静态媒体

+

仪表板的静态媒体应部署到仪表板域的子域,并由 Web 服务器提供服务。使用外部内容分发网络 (CDN) 也是可以接受的。此子域不应设置 Cookie 或提供用户提供的内容。媒体也应使用 HTTPS 提供。

+

Django 媒体设置记录在 Django 文档中。

+

Dashboard 的默认配置使用 django_compressor 在提供 CSS 和 JavaScript 内容之前对其进行压缩和缩小。此过程应在部署仪表板之前静态完成,而不是使用默认的请求时动态压缩,并将生成的文件与部署代码一起复制到 CDN 服务器。压缩应在非生产构建环境中完成。如果这不可行,我们建议完全禁用资源压缩。在线压缩依赖项(LESS、Node.js)不应安装在生产计算机上。

+

密码

+

密码管理应该是云管理计划不可或缺的一部分。关于密码的权威教程超出了本书的范围;但是,云管理员应参考 NIST 企业密码管理特别出版物指南第 4 章中推荐的最佳实践。

+

无论是通过仪表板还是其他应用程序,基于浏览器的 OpenStack 云访问都会引入额外的注意事项。现代浏览器都支持某种形式的密码存储和自动填充记住的站点的凭据。这在使用不容易记住或键入的强密码时非常有用,但如果客户端的物理安全性受到威胁,可能会导致浏览器成为薄弱环节。如果浏览器的密码存储本身不受强密码保护,或者如果允许密码存储在会话期间保持解锁状态,则很容易获得对系统的未经授权的访问。

+

KeePassX 和 Password Safe 等密码管理应用程序非常有用,因为大多数应用程序都支持生成强密码和定期提醒生成新密码。最重要的是,密码存储仅短暂保持解锁状态,从而降低了密码泄露和通过浏览器或系统入侵进行未经授权的资源访问的风险。

+

密钥

+

仪表板依赖于某些安全功能的共享 SECRET_KEY 设置。密钥应为随机生成的字符串,长度至少为 64 个字符,必须在所有活动仪表板实例之间共享。泄露此密钥可能允许远程攻击者执行任意代码。轮换此密钥会使现有用户会话和缓存失效。请勿将此密钥提交到公共存储库。
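Horizon 自带一个辅助函数,可以把密钥生成并保存到一个受严格权限保护的文件中;下面是其常见用法的示意(文件路径为示例)。多实例部署时应改为向所有实例分发同一密钥,而不是各自生成:

from horizon.utils import secret_key

# 首次运行时生成随机密钥并落盘,之后从该文件读取;该文件的权限应严格限制
SECRET_KEY = secret_key.generate_or_read_from_file('/etc/openstack-dashboard/.secret_key_store')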

+

Cookies

+

会话Cookies应设置为 HTTPONLY:

+
SESSION_COOKIE_HTTPONLY = True
+

切勿将 CSRF 或会话 Cookie 配置为具有带前导点的通配符域。使用 HTTPS 部署时,应保护 Horizon 的会话和 CSRF Cookie:

+
CSRF_COOKIE_SECURE = True
+SESSION_COOKIE_SECURE = True
+

跨域资源共享 (CORS)

+

将 Web 服务器配置为在每次响应时发送限制性 CORS 标头,仅允许仪表板域和协议:

+
Access-Control-Allow-Origin: https://example.com/
+

永远不允许通配符来源。

+

调试

+

建议在生产环境中将 DEBUG 该设置设置为 False 。如果 DEBUG 设置为 True,则当抛出异常时,Django 将显示堆栈跟踪和敏感的 Web 服务器状态信息。

+

检查表

+

Check-Dashboard-01:用户/配置文件组是否设置为 root/horizon?

+

配置文件包含组件平稳运行所需的关键参数和信息。如果非特权用户有意或无意地修改或删除任何参数或文件本身,则会导致严重的可用性问题,从而导致对其他最终用户的拒绝服务。因此,此类关键配置文件的用户所有权必须设置为 root,组所有权必须设置为 horizon。

+

运行以下命令:

+
$ stat -L -c "%U %G"  /etc/openstack-dashboard/local_settings.py | egrep "root horizon"
+

通过:如果配置文件的用户和组所有权分别设置为 root 和 horizon。上述命令应显示 root horizon 的输出。

+

失败:如果上述命令未返回任何输出,因为用户和组所有权可能已设置为除 root 以外的任何用户或除 Horizon 以外的任何组。

+

Check-Dashboard-02:是否为 Horizon 配置文件设置了严格权限?

+

与前面的检查类似,建议对此类配置文件设置严格的访问权限。

+

运行以下命令:

+
$ stat -L -c "%a" /etc/openstack-dashboard/local_settings.py
+

通过:如果权限设置为 640 或更严格。640 的权限对应所有者可读/写、组可读、其他用户无权限,即"u=rw,g=r,o="。结合 Check-Dashboard-01(配置文件的用户/组所有权是否设置为 root/horizon?),当权限设置为 640 时,root 对这些配置文件具有读/写访问权限,horizon 组具有读取访问权限。也可以使用以下命令验证访问权限;该命令仅在系统支持 ACL 时可用。

+
$ getfacl --tabular -a /etc/openstack-dashboard/local_settings.py
+getfacl: Removing leading '/' from absolute path names
+# file: etc/openstack-dashboard/local_settings.py
+USER   root     rw-
+GROUP  horizon  r--
+mask            r--
+other           ---
+

失败:如果权限未设置为至少 640。

+

Check-Dashboard-03:参数是否 DISALLOW_IFRAME_EMBED 设置为 True

+

DISALLOW_IFRAME_EMBED 可用于防止 OpenStack Dashboard 嵌入到 iframe 中。

+

旧版浏览器仍然容易受到跨帧脚本 (XFS) 漏洞的影响,因此此选项允许在部署中未使用 iframe 的情况下进行额外的安全强化。

+

默认设置为 True。

+

通过:如果参数 DISALLOW_IFRAME_EMBED in /etc/openstack-dashboard/local_settings.py 的值设置为 True

+

失败:如果参数 DISALLOW_IFRAME_EMBED in /etc/openstack-dashboard/local_settings.py 的值设置为 False

+

推荐用于:HTTPS、HSTS、XSS 和 SSRF。

+

Check-Dashboard-04:参数是否 CSRF_COOKIE_SECURE 设置为 True ?

+

CSRF(跨站点请求伪造)是一种攻击,它迫使最终用户在他/她当前经过身份验证的 Web 应用程序上执行未经授权的命令。成功的 CSRF 漏洞可能会危及最终用户的数据和操作。如果目标最终用户具有管理员权限,这可能会危及整个 Web 应用程序。

+

通过:如果参数 CSRF_COOKIE_SECURE in /etc/openstack-dashboard/local_settings.py 的值设置为 True

+

失败:如果参数 CSRF_COOKIE_SECURE in /etc/openstack-dashboard/local_settings.py 的值设置为 False

+

推荐于:Cookies。

+

Check-Dashboard-05:参数是否 SESSION_COOKIE_SECURE 设置为 True ?

+

“SECURE”cookie 属性指示 Web 浏览器仅通过加密的 HTTPS (SSL/TLS) 连接发送 cookie。此会话保护机制是强制性的,以防止通过 MitM(中间人)攻击泄露会话 ID。它确保攻击者无法简单地从 Web 浏览器流量中捕获会话 ID。

+

通过:如果参数 SESSION_COOKIE_SECURE in /etc/openstack-dashboard/local_settings.py 的值设置为 True

+

失败:如果参数 SESSION_COOKIE_SECURE in /etc/openstack-dashboard/local_settings.py 的值设置为 False

+

推荐于:Cookies。

+

Check-Dashboard-06:参数是否 SESSION_COOKIE_HTTPONLY 设置为 True ?

+

“HTTPONLY”cookie 属性指示 Web 浏览器不允许脚本(例如 JavaScript 或 VBscript)通过 DOM document.cookie 对象访问 cookie。此会话 ID 保护是必需的,以防止通过 XSS 攻击窃取会话 ID。

+

通过:如果参数 SESSION_COOKIE_HTTPONLY in /etc/openstack-dashboard/local_settings.py 的值设置为 True

+

失败:如果参数 SESSION_COOKIE_HTTPONLY in /etc/openstack-dashboard/local_settings.py 的值设置为 False

+

推荐于:Cookies。

+

Check-Dashboard-07: PASSWORD_AUTOCOMPLETE 设置为 False

+

应用程序用于为用户提供便利的常见功能是将密码本地缓存在浏览器中(在客户端计算机上),并在所有后续请求中“预先键入”。虽然此功能对普通用户来说非常友好,但同时,它引入了一个缺陷,因为在客户端计算机上使用相同帐户的任何人都可以轻松访问用户帐户,从而可能导致用户帐户受损。

+

通过:如果参数 PASSWORD_AUTOCOMPLETE in /etc/openstack-dashboard/local_settings.py 的值设置为 off

+

失败:如果参数 PASSWORD_AUTOCOMPLETE in /etc/openstack-dashboard/local_settings.py 的值设置为 on

+

Check-Dashboard-08: DISABLE_PASSWORD_REVEAL 设置为 True

+

与之前的检查类似,建议不要显示密码字段。

+

通过:如果参数 DISABLE_PASSWORD_REVEAL in /etc/openstack-dashboard/local_settings.py 的值设置为 True

+

失败:如果参数 DISABLE_PASSWORD_REVEAL in /etc/openstack-dashboard/local_settings.py 的值设置为 False

+

注意

+
此选项是在 Kilo 版本中引入的。
+

Check-Dashboard-09: ENFORCE_PASSWORD_CHECK 设置为 True

+

将 ENFORCE_PASSWORD_CHECK 设置为 True 会在"更改密码"表单上显示"管理员密码"字段,用于验证确实是管理员本人在发起密码更改。

+

通过:如果参数 ENFORCE_PASSWORD_CHECK in /etc/openstack-dashboard/local_settings.py 的值设置为 True

+

失败:如果参数 ENFORCE_PASSWORD_CHECK in /etc/openstack-dashboard/local_settings.py 的值设置为 False

+

Check-Dashboard-10:是否 PASSWORD_VALIDATOR 已配置?

+

允许正则表达式验证用户密码的复杂性。

+

通过:如果 /etc/openstack-dashboard/local_settings.py 中参数 PASSWORD_VALIDATOR 的值设置为默认值之外的任何值(默认的 "regex": '.*' 允许任何密码)。

+

失败:如果 /etc/openstack-dashboard/local_settings.py 中参数 PASSWORD_VALIDATOR 的值仍为允许任何密码的默认值 "regex": '.*'。

+

Check-Dashboard-11:是否 SECURE_PROXY_SSL_HEADER 已配置?

+

如果 OpenStack Dashboard 部署在代理后面,并且代理从所有传入请求中剥离 X-Forwarded-Proto 标头,或者设置标头 X-Forwarded-Proto 并将其发送到 Dashboard,但仅适用于最初通过 HTTPS 传入的请求,那么您应该考虑配置 SECURE_PROXY_SSL_HEADER

+

更多信息可以在 Django 文档中找到。

+

通过:如果参数 SECURE_PROXY_SSL_HEADER in /etc/openstack-dashboard/local_settings.py 的值设置为 'HTTP_X_FORWARDED_PROTO', 'https'

+

失败:如果参数 SECURE_PROXY_SSL_HEADER in /etc/openstack-dashboard/local_settings.py 的值未设置为 'HTTP_X_FORWARDED_PROTO', 'https' 或注释掉。

+

计算

+

OpenStack 计算服务 (nova) 在整个云中的许多位置运行,并与各种内部服务进行交互。OpenStack 计算服务提供了多种配置选项,这些选项可能是特定于部署的。

+

在本章中,我们将介绍有关计算安全性的一般最佳实践,以及可能导致安全问题的特定已知配置。 nova.conf 文件和 /var/lib/nova 位置应受到保护。应实施集中式日志记录、 policy.json 文件和强制访问控制框架等控制措施。

+
    +
  • +

    虚拟机管理程序选择

    +
      +
    • OpenStack 中的虚拟机管理程序
    • +
    • 纳入排除标准
    • +
    • 团队专长
    • +
    • 产品或项目成熟度
    • +
    • 认证和证明
    • +
    • 通用标准
    • +
    • 加密标准
    • +
    • FIPS 140-2
    • +
    • 硬件问题
    • +
    • 虚拟机管理程序与裸机
    • +
    • 虚拟机管理程序内存优化
    • +
    • KVM 内核 Samepage 合并
    • +
    • XEN透明页面共享
    • +
    • 内存优化的安全注意事项
    • +
    • 其他安全功能
    • +
    • 书目
    • +
    +
  • +
  • +

    强化虚拟化层

    +
      +
    • 物理硬件(PCI 直通)
    • +
    • 虚拟硬件 (QEMU)
    • +
    • 最小化 QEMU 代码库
    • +
    • 编译器强化
    • +
    • 安全加密虚拟化
    • +
    • 强制访问控制
    • +
    • sVirt:SELinux 和虚拟化
    • +
    • 标签和类别
    • +
    • SELinux 用户和角色
    • +
    • 布尔值
    • +
    +
  • +
  • +

    强化计算部署

    +
      +
    • OpenStack 漏洞管理团队
    • +
    • OpenStack 安全说明
    • +
    • OpenStack-dev 邮件列表
    • +
    • 虚拟机管理程序邮件列表
    • +
    +
  • +
  • +

    漏洞意识

    +
      +
    • OpenStack 漏洞管理团队
    • +
    • OpenStack 安全说明
    • +
    • OpenStack-讨论邮件列表
    • +
    • 虚拟机管理程序邮件列表
    • +
    +
  • +
  • +

    如何选择虚拟控制台

    +
      +
    • 虚拟网络计算机 (VNC)
    • +
    • 独立计算环境的简单协议 (SPICE)
    • +
    +
  • +
  • +

    检查表

    +
      +
    • Check-Compute-01:配置文件的用户/组所有权是否设置为 root/nova?
    • +
    • Check-Compute-02:是否为配置文件设置了严格的权限?
    • +
    • Check-Compute-03:Keystone 是否用于身份验证?
    • +
    • Check-Compute-04:是否使用安全协议进行身份验证?
    • +
    • Check-Compute-05:Nova 与 Glance 的通信是否安全?
    • +
    +
  • +
+

虚拟机管理程序选择

+

OpenStack 中的虚拟机管理程序

+

无论OpenStack是部署在私有数据中心内,还是作为公共云服务部署,底层虚拟化技术都能在可扩展性、资源效率和正常运行时间方面提供企业级功能。虽然在许多 OpenStack 支持的虚拟机管理程序技术中通常都具有这种高级优势,但每个虚拟机管理程序的安全架构和功能都存在显著差异,尤其是在考虑弹性 OpenStack 环境特有的安全威胁向量时。随着应用程序整合到单个基础架构即服务 (IaaS) 平台中,虚拟机管理程序级别的实例隔离变得至关重要。安全隔离的要求在商业、政府和军事社区中都适用。

+

在 OpenStack 框架中,您可以在众多虚拟机管理程序平台和相应的 OpenStack 插件中进行选择,以优化您的云环境。在本指南的上下文中,重点介绍了虚拟机管理程序选择注意事项,因为它们与对安全性至关重要的功能集有关。但是,这些注意事项并不意味着对特定虚拟机管理程序的优缺点进行详尽的调查。NIST 在特别出版物 800-125“完整虚拟化技术安全指南”中提供了其他指导。

+

选择标准

+

作为虚拟机管理程序选择过程的一部分,您必须考虑许多重要因素,以帮助改善您的安全状况。具体来说,您必须熟悉以下方面:

+
    +
  • 团队专长
  • +
  • 产品或项目成熟度
  • +
  • 通用标准
  • +
  • 认证和证明
  • +
  • 硬件问题
  • +
  • 虚拟机管理程序与裸机
  • +
  • 其他安全功能
  • +
+

此外,强烈建议在为 OpenStack 部署选择虚拟机管理程序时评估以下与安全相关的标准:

  • 虚拟机管理程序是否经过通用标准认证?如果是,达到什么级别?
  • 底层密码学实现是否经过第三方认证?

+

团队专长

+

最有可能的是,在选择虚拟机管理程序时,最重要的方面是您的员工在管理和维护特定虚拟机管理程序平台方面的专业知识。您的团队对给定产品、其配置及其怪癖越熟悉,配置错误就越少。此外,在给定的虚拟机管理程序上将员工专业知识分布在整个组织中可以提高系统的可用性,允许职责分离,并在团队成员不可用时缓解问题。

+

产品或项目成熟度

+

给定虚拟机管理程序产品或项目的成熟度对您的安全状况也至关重要。部署云后,产品成熟度会在多个方面产生影响:

+
    +
  • 专业知识的可用性
  • +
  • 活跃的开发人员和用户社区
  • +
  • 更新的及时性和可用性
  • +
  • 事件响应
  • +
+

虚拟机管理程序成熟度的最大指标之一是围绕它的社区的规模和活力。由于这涉及安全性,因此如果您需要额外的云操作员,社区的质量会影响专业知识的可用性。这也表明了虚拟机管理程序的广泛部署,进而导致任何参考架构和最佳实践的战备状态。

+

此外,社区的质量,因为它围绕着KVM或Xen等开源虚拟机管理程序,对错误修复和安全更新的及时性有直接影响。在调查商业和开源虚拟机管理程序时,您必须查看它们的发布和支持周期,以及发布错误或安全问题与补丁或响应之间的时间差。最后,OpenStack 计算支持的功能因所选的虚拟机管理程序而异。请参阅 OpenStack Hypervisor Support Matrix,了解 Hypervisor 对 OpenStack 计算功能的支持。

+

认证和证明

+

选择虚拟机管理程序时,另一个考虑因素是各种正式认证和证明的可用性。虽然它们可能不是特定组织的要求,但这些认证和证明说明了特定虚拟机管理程序平台所经过的测试的成熟度、生产准备情况和彻底性。

+

通用标准

+

通用标准是一个国际标准化的软件评估过程,政府和商业公司使用它来验证软件技术是否如宣传的那样。在政府部门,NSTISSP 第 11 号规定美国政府机构只能采购已通过通用标准认证的软件,该政策自 2002 年 7 月起实施。

+

注意

+

OpenStack尚未通过通用标准认证,但许多可用的虚拟机管理程序都经过了认证。

+

除了验证技术能力外,通用标准流程还评估技术的开发方式。

+
    +
  • 如何进行源代码管理?
  • +
  • 如何授予用户对构建系统的访问权限?
  • +
  • 该技术在分发前是否经过加密签名?
  • +
+

KVM 虚拟机管理程序已通过美国政府和商业发行版的通用标准认证。这些已经过验证,可以将虚拟机的运行时环境彼此分离,从而提供基础技术来实施实例隔离。除了虚拟机隔离之外,KVM 还通过了通用标准认证:

+
"...provide system-inherent separation mechanisms to the resources of virtual
+machines. This separation ensures that large software component used for
+virtualizing and simulating devices executing for each virtual machine
+cannot interfere with each other. Using the SELinux multi-category
+mechanism, the virtualization and simulation software instances are
+isolated. The virtual machine management framework configures SELinux
+multi-category settings transparently to the administrator."
+

虽然许多虚拟机管理程序供应商(如 Red Hat、Microsoft 和 VMware)已获得通用标准认证,但其基础认证功能集有所不同,但我们建议评估供应商声明,以确保它们至少满足以下要求:

+

  • 审计:该系统提供了审核大量事件的功能,包括单个系统调用和受信任进程生成的事件。审计数据以 ASCII 格式收集在常规文件中。系统提供了一个用于搜索审计记录的程序。系统管理员可以定义规则库,将审核限制为他们感兴趣的事件,包括限制为特定事件、特定用户、特定对象或三者的组合。审计记录可以传输到远程审计守护程序。
  • 自主访问控制:自主访问控制 (DAC) 基于 ACL 限制对文件系统对象的访问,其中包括用户、组和其他人员的标准 UNIX 权限。访问控制机制还可以保护 IPC 对象免受未经授权的访问。系统包括支持 POSIX ACL 的 ext4 文件系统,可以精确到单个用户的粒度定义对文件的访问权限。
  • 强制访问控制:强制访问控制 (MAC) 根据分配给主体和对象的标签来限制对对象的访问。敏感度标签会自动附加到进程和对象。使用这些标签强制实施的访问控制策略派生自 Bell-LaPadula 模型。SELinux 类别附加到虚拟机及其资源。只有当虚拟机的类别与所访问资源的类别相同时,访问控制策略才会授予访问权限。TOE 使用非分层类别来控制对虚拟机的访问。
  • 基于角色的访问控制:基于角色的访问控制 (RBAC) 允许职责分离,无需全能的系统管理员。
  • 对象重用:文件系统对象、内存和 IPC 对象在被属于其他用户的进程重用之前会被清除。
  • 安全管理:系统安全关键参数的管理由管理用户执行。一组需要 root 权限(或使用 RBAC 时需要特定角色)的命令用于系统管理。安全参数存储在特定文件中,这些文件受系统的访问控制机制保护,防止非管理用户未经授权的访问。
  • 安全通信:系统支持使用 SSH 定义可信通道,并支持基于密码的身份验证。在评估的配置中,这些协议仅支持有限数量的密码套件。
  • 存储加密:系统支持加密块设备,通过 dm_crypt 提供存储机密性。
  • TSF 保护:在运行时,内核软件和数据受到硬件内存保护机制的保护。内核的内存和进程管理组件确保用户进程无法访问内核存储或属于其他进程的存储。非内核 TSF 软件和数据受 DAC 和进程隔离机制保护。在评估的配置中,保留用户 ID root 拥有定义 TSF 配置的目录和文件。通常,包含内部 TSF 数据的文件和目录(如配置文件和批处理作业队列)也受到 DAC 权限的保护,不会被读取。系统以及硬件和固件组件需要受到物理保护,以防止未经授权的访问。系统内核调解对硬件机制本身的所有访问,但程序可见的 CPU 指令除外。此外,还提供了防止堆栈溢出攻击的机制。

+

密码学标准

+

OpenStack 中提供了多种加密算法,用于识别和授权、数据传输和静态数据保护。选择虚拟机管理程序时,我们建议采用以下算法和实现标准:

+

  • AES:密钥长度 128、192 或 256 位;用途:加密/解密;安全功能:受保护的数据传输、静态数据保护;实现标准:RFC 4253
  • TDES:密钥长度 168 位;用途:加密/解密;安全功能:受保护的数据传输;实现标准:RFC 4253
  • RSA:密钥长度 1024、2048 或 3072 位;用途:身份验证、密钥交换;安全功能:识别和身份验证、受保护的数据传输;实现标准:U.S. NIST FIPS PUB 186-3
  • DSA:密钥长度 L=1024,N=160 位;用途:身份验证、密钥交换;安全功能:识别和身份验证、受保护的数据传输;实现标准:U.S. NIST FIPS PUB 186-3
  • Serpent:密钥长度 128、192 或 256 位;用途:加密/解密;安全功能:静态数据保护;参考:http://www.cl.cam.ac.uk/~rja14/Papers/serpent.pdf
  • Twofish:密钥长度 128、192 或 256 位;用途:加密/解密;安全功能:静态数据保护;参考:https://www.schneier.com/paper-twofish-paper.html
  • SHA-1:用途:消息摘要;安全功能:静态数据保护、受保护的数据传输;实现标准:U.S. NIST FIPS PUB 180-3
  • SHA-2(224、256、384 或 512 位):用途:消息摘要;安全功能:静态数据保护、识别和身份验证;实现标准:U.S. NIST FIPS PUB 180-3

+

FIPS 140-2

+

在美国,美国国家科学技术研究院 (NIST) 通过称为加密模块验证计划的过程对加密算法进行认证。NIST 认证算法符合联邦信息处理标准 140-2 (FIPS 140-2),确保:

+
"... Products validated as conforming to FIPS 140-2 are accepted by the Federal
+agencies of both countries [United States and Canada] for the protection of
+sensitive information (United States) or Designated Information (Canada).
+The goal of the CMVP is to promote the use of validated cryptographic
+modules and provide Federal agencies with a security metric to use in
+procuring equipment containing validated cryptographic modules."
+

在评估基本虚拟机管理程序技术时,请考虑虚拟机管理程序是否已通过 FIPS 140-2 认证。根据美国政府政策,不仅强制要求符合 FIPS 140-2,而且正式认证表明已对加密算法的给定实现进行了审查,以确保符合模块规范、加密模块端口和接口;角色、服务和身份验证;有限状态模型;人身安全;操作环境;加密密钥管理;电磁干扰/电磁兼容性(EMI/EMC);自检;设计保证;以及缓解其他攻击。

+

硬件问题

+

在评估虚拟机管理程序平台时,请考虑运行虚拟机管理程序的硬件的可支持性。此外,请考虑硬件中可用的其他功能,以及您在 OpenStack 部署中选择的虚拟机管理程序如何支持这些功能。为此,每个虚拟机管理程序都有自己的硬件兼容性列表 (HCL)。在选择兼容的硬件时,从安全角度来看,提前了解哪些基于硬件的虚拟化技术是重要的,这一点很重要。

+

  • I/O MMU(VT-d / AMD-Vi):保护 PCI 直通所必需。
  • 英特尔可信执行技术(Intel TXT / SEM):动态证明服务所必需。
  • PCI-SIG I/O 虚拟化(SR-IOV、MR-IOV、ATS):安全共享 PCI Express 设备所必需。
  • 网络虚拟化(VT-c):提高虚拟机管理程序上的网络 I/O 性能。

+

虚拟机管理程序与裸机

+

重要的是要认识到使用 Linux 容器 (LXC) 或裸机系统与使用 KVM 等虚拟机管理程序之间的区别。具体来说,本安全指南的重点主要基于拥有虚拟机管理程序和虚拟化平台。但是,如果您的实现需要使用裸机或 LXC 环境,则必须注意该环境部署方面的特殊差异。

+

在重新预配之前,请确保最终用户已正确清理节点的数据。此外,在重用节点之前,必须保证硬件未被篡改或以其他方式受到损害。

+

注意

+

虽然OpenStack有一个裸机项目,但对运行裸机的特殊安全影响的讨论超出了本书的范围。

+

由于书本冲刺的时间限制,该团队选择在我们的示例实现和架构中使用 KVM 作为虚拟机管理程序。

+

注意

+

有一个关于在计算中使用 LXC 的 OpenStack 安全说明。

+

Hypervisor 内存优化

+

许多虚拟机监控程序使用内存优化技术将内存过量使用到来宾虚拟机。这是一项有用的功能,可用于部署非常密集的计算群集。实现此目的的一种方法是通过重复数据消除或共享内存页。当两个虚拟机在内存中具有相同的数据时,让它们引用相同的内存是有好处的。

+

通常,这是通过写入时复制 (COW) 机制实现的。这些机制已被证明容易受到侧信道攻击,其中一个 VM 可以推断出另一个 VM 的状态,并且可能不适用于并非所有租户都受信任或共享相同信任级别的多租户环境。

+

KVM 内核同页合并

+

在版本 2.6.32 中引入到 Linux 内核中,内核相同页合并 (KSM) 在 Linux 进程之间整合了相同的内存页。由于 KVM 虚拟机管理程序下的每个客户机虚拟机都在自己的进程中运行,因此 KSM 可用于优化虚拟机之间的内存使用。

+

XEN 透明页面共享

+

XenServer 5.6 包含一个名为透明页面共享 (TPS) 的内存过量使用功能。TPS 扫描 4 KB 区块中的内存以查找任何重复项。找到后,Xen 虚拟机监视器 (VMM) 将丢弃其中一个重复项,并记录第二个副本的引用。

+

内存优化的安全注意事项

+

传统上,内存重复数据消除系统容易受到侧信道攻击。KSM 和 TPS 都已被证明容易受到某种形式的攻击。在学术研究中,攻击者能够通过分析攻击者虚拟机上的内存访问时间来识别相邻虚拟机上运行的软件包和版本,以及软件下载和其他敏感信息。

+

如果云部署需要强租户分离(如公有云和某些私有云的情况),部署人员应考虑禁用 TPS 和 KSM 内存优化。

+

其他安全功能

+

选择虚拟机管理程序平台时要考虑的另一点是特定安全功能的可用性,例如 XenServer 的 XSM(Xen 安全模块)、sVirt、Intel TXT 或 AppArmor。

+

下表按常见虚拟机管理程序平台列出了这些功能。

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
XSMsVirtTXTAppArmorcgroupsMAC 策略
KVMXXXXX
XenXX
ESXiX
Hyper-V
+

注意

+
此表中的功能可能不适用于所有虚拟机管理程序,也可能无法在虚拟机管理程序之间直接映射。
+

参考书目

+
    +
  • Sunar、Eisenbarth、Inci、Gorka Irazoqui Apecechea。对 Xen 和 VMware 进行细粒度跨虚拟机攻击是可能的!2014。 https://eprint.iacr.org/2014/248.pdf
  • +
  • Artho、Yagi、Iijima、Kuniyasu Suzaki。内存重复数据删除对客户机操作系统的威胁。2011 年。https://staff.aist.go.jp/c.artho/papers/EuroSec2011-suzaki.pdf
  • +
  • KVM:基于内核的虚拟机。内核相同页合并。2010。http://www.linux-kvm.org/page/KSM
  • +
  • Xen 项目,Xen 安全模块:XSM-FLASK。2014。 http://wiki.xen.org/wiki/Xen_Security_Modules_:_XSM-FLASK
  • +
  • SELinux 项目,SVirt。2011。 http://selinuxproject.org/page/SVirt
  • +
  • Intel.com,采用英特尔可信执行技术 (Intel TXT) 的可信计算池。http://www.intel.com/txt
  • +
  • AppArmor.net,AppArmor 主页。2011。 http://wiki.apparmor.net/index.php/Main_Page
  • +
  • Kernel.org,CGroups。2004。https://www.kernel.org/doc/Documentation/cgroup-v1/cgroups.txt
  • +
  • 计算机安全资源中心。完整虚拟化技术安全指南。2011。 http://csrc.nist.gov/publications/nistpubs/800-125/SP800-125-final.pdf
  • +
  • 国家信息保障伙伴关系,国家安全电信和信息系统安全政策。2003。http://www.niap-ccevs.org/cc-scheme/nstissp_11_revised_factsheet.pdf
  • +
+

加固虚拟化层

+

在本章的开头,我们将讨论实例对物理和虚拟硬件的使用、相关的安全风险以及缓解这些风险的一些建议。然后,我们将讨论如何使用安全加密虚拟化技术来加密支持该技术的基于 AMD 的机器上的虚拟机的内存。在本章的最后,我们将讨论 sVirt,这是一个开源项目,用于将 SELinux 强制访问控制与虚拟化组件集成。

+

物理硬件(PCI直通)

+

许多虚拟机管理程序都提供一种称为 PCI 直通的功能。这允许实例直接访问节点上的硬件。例如,这可用于允许实例访问提供计算统一设备架构 (CUDA) 以实现高性能计算的视频卡或 GPU。此功能存在两种类型的安全风险:直接内存访问和硬件感染。

+

直接内存访问 (DMA) 是一种功能,它允许某些硬件设备访问主机中的任意物理内存地址。视频卡通常具有此功能。但是,不应向实例授予任意物理内存访问权限,因为这将使其能够全面了解主机系统和在同一节点上运行的其他实例。在这些情况下,硬件供应商使用输入/输出内存管理单元 (IOMMU) 来管理 DMA 访问。我们建议云架构师应确保虚拟机监控程序配置为使用此硬件功能。

+

KVM:

+

如何在 KVM 中使用 VT-d 分配设备

+

Xen:

+

Xen VTd Howto

+

注意

+

IOMMU 功能由 Intel 作为 VT-d 销售,由 AMD 以 AMD-Vi 销售。

+

当实例对固件或设备的某些其他部分进行恶意修改时,就会发生硬件感染。由于此设备由其他实例或主机操作系统使用,因此恶意代码可能会传播到这些系统中。最终结果是,一个实例可以在其安全域之外运行代码。这是一个重大的漏洞,因为重置物理硬件的状态比重置虚拟硬件更难,并且可能导致额外的暴露,例如访问管理网络。

+

硬件感染问题的解决方案是特定于域的。该策略是确定实例如何修改硬件状态,然后确定在使用硬件完成实例时如何重置任何修改。例如,一种选择可能是在使用后重新刷新固件。需要平衡硬件寿命和安全性,因为某些固件在大量写入后会出现故障。安全引导中所述的 TPM 技术是一种用于检测未经授权的固件更改的解决方案。无论选择哪种策略,都必须了解与此类硬件共享相关的风险,以便针对给定的部署方案适当缓解这些风险。

+

由于与 PCI 直通相关的风险和复杂性,默认情况下应禁用它。如果为特定需求启用,则需要制定适当的流程,以确保硬件在重新发行之前是干净的。

+

虚拟硬件 (QEMU)

+

运行虚拟机时,虚拟硬件是为虚拟机提供硬件接口的软件层。实例使用此功能提供可能需要的网络、存储、视频和其他设备。考虑到这一点,环境中的大多数实例将专门使用虚拟硬件,少数实例需要直接硬件访问。主要的开源虚拟机管理程序使用 QEMU 来实现此功能。虽然 QEMU 满足了对虚拟化平台的重要需求,但它已被证明是一个非常具有挑战性的软件项目。QEMU 中的许多功能都是通过大多数开发人员难以理解的低级代码实现的。QEMU 虚拟化的硬件包括许多传统设备,这些设备有自己的一套怪癖。综上所述,QEMU 一直是许多安全问题的根源,包括虚拟机管理程序突破攻击。

+

采取积极主动的措施来强化 QEMU 非常重要。我们建议执行三个具体步骤:

+
    +
  • 最小化代码库。
  • +
  • 使用编译器强化。
  • +
  • 使用强制访问控制,例如 sVirt、SELinux 或 AppArmor。
  • +
+

确保您的 iptables 具有过滤网络流量的默认策略,并考虑检查现有规则集以了解每个规则并确定是否需要扩展该策略。

+

最小化 QEMU 代码库

+

我们建议通过从系统中删除未使用的组件来最小化 QEMU 代码库。QEMU 为许多不同的虚拟硬件设备提供支持,但给定实例只需要少量设备。最常见的硬件设备是 virtio 设备。某些旧实例将需要访问特定硬件,这些硬件可以使用 glance 元数据指定:

+
$ glance image-update \
+--property hw_disk_bus=ide \
+--property hw_cdrom_bus=ide \
+--property hw_vif_model=e1000 \
+f16-x86_64-openstack-sda
+

云架构师应决定向云用户提供哪些设备。任何不需要的东西都应该从 QEMU 中删除。此步骤需要在修改传递给 QEMU 配置脚本的选项后重新编译 QEMU。要获得最新选项的完整列表,只需从 QEMU 源目录中运行 ./configure --help。确定部署所需的内容,并禁用其余选项。
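作为示意,下面的构建步骤先列出可用开关,再裁剪目标架构与可选组件(具体可用的 --disable-* 选项因 QEMU 版本而异,请以 ./configure --help 的输出为准):

$ cd qemu
$ ./configure --help | less
$ ./configure --target-list=x86_64-softmmu --disable-vnc --disable-sdl
$ make -j"$(nproc)"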

+

编译器加固

+

使用编译器强化选项强化 QEMU。现代编译器提供了多种编译时选项,以提高生成的二进制文件的安全性。这些功能包括只读重定位 (RELRO)、堆栈金丝雀、从不执行 (NX)、位置无关可执行文件 (PIE) 和地址空间布局随机化 (ASLR)。

+

许多现代 Linux 发行版已经在构建启用编译器强化的 QEMU,我们建议在继续操作之前验证现有的可执行文件。可以帮助您进行此验证的一种工具称为 checksec.sh
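例如,可以用 checksec 检查已安装的 QEMU 二进制文件是否启用了上述强化选项(命令行参数在不同 checksec 版本之间略有差异,二进制路径仅为示例):

$ checksec --file=/usr/bin/qemu-system-x86_64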

+

RELocation 只读 (RELRO)

+

强化可执行文件的数据部分。gcc 支持完整和部分 RELRO 模式。对于 QEMU 来说,完整的 RELRO 是您的最佳选择。这将使全局偏移表成为只读的,并在生成的可执行文件中将各种内部数据部分放在程序数据部分之前。

+

栈保护

+

将值放在堆栈上并验证其是否存在,以帮助防止缓冲区溢出攻击。

+

从不执行 (NX)

+

也称为数据执行保护 (DEP),确保无法执行可执行文件的数据部分。

+

位置无关可执行文件 (PIE)

+

生成一个独立于位置的可执行文件,这是 ASLR 所必需的。

+

地址空间布局随机化 (ASLR)

+

这确保了代码和数据区域的放置都是随机的。当使用 PIE 构建可执行文件时,由内核启用(所有现代 Linux 内核都支持 ASLR)。

+

编译 QEMU 时,建议对 GCC 使用以下编译器选项:

+
CFLAGS="-arch x86_64 -fstack-protector-all -Wstack-protector \
+--param ssp-buffer-size=4 -pie -fPIE -ftrapv -D_FORTIFY_SOURCE=2 -O2 \
+-Wl,-z,relro,-z,now"
+

我们建议在编译 QEMU 可执行文件后对其进行测试,以确保编译器强化正常工作。

+

大多数云部署不会手动构建软件,例如 QEMU。最好使用打包来确保该过程是可重复的,并确保最终结果可以轻松地部署在整个云中。下面的参考资料提供了有关将编译器强化选项应用于现有包的一些其他详细信息。

+

DEB 封装:

+

硬化指南

+

RPM 包:

+

如何创建 RPM 包

+

安全加密虚拟化

+

安全加密虚拟化 (SEV) 是 AMD 的一项技术,它允许使用 VM 唯一的密钥对 VM 的内存进行加密。SEV 在 Train 版本中作为技术预览版提供,在某些基于 AMD 的机器上提供 KVM 客户机,用于评估技术。

+

nova 配置指南的 KVM 虚拟机管理程序部分包含配置计算机和虚拟机管理程序所需的信息,并列出了 SEV 的几个限制。

+

SEV 为正在运行的 VM 使用的内存中的数据提供保护。但是,虽然 SEV 与 OpenStack 集成的第一阶段支持虚拟机加密内存,但重要的是它不提供 SEV 固件提供的 LAUNCH_MEASURE or LAUNCH_SECRET 功能。这意味着受 SEV 保护的 VM 使用的数据可能会受到控制虚拟机监控程序的有动机的对手的攻击。例如,虚拟机监控程序计算机上的恶意管理员可以为具有后门和间谍软件的租户提供 VM 映像,这些后门和间谍软件能够窃取机密,或者替换 VNC 服务器进程以窥探发送到 VM 控制台或从 VM 控制台发送的数据,包括解锁全磁盘加密解决方案的密码。

+

为了减少恶意管理员未经授权访问数据的机会,使用 SEV 时应遵循以下安全做法:

+
    +
  • VM 应使用完整磁盘加密解决方案。
  • +
  • 应在 VM 上使用引导加载程序密码。
  • +
+

此外,应将标准安全最佳做法用于 VM,包括以下内容:

+
    +
  • VM 应得到良好的维护,包括定期进行安全扫描和修补,以确保 VM 持续保持强大的安全态势。
  • +
  • 与 VM 的连接应使用加密和经过身份验证的协议,例如 HTTPS 和 SSH。
  • +
  • 应考虑使用其他安全工具和流程,并将其用于适合数据敏感度级别的 VM。
  • +
+

强制访问控制

+

编译器加固使攻击 QEMU 进程变得更加困难。但是,如果攻击者得逞,则需要限制攻击的影响。强制访问控制通过将 QEMU 进程上的权限限制为仅需要的权限来实现此目的。这可以通过使用 sVirt、SELinux 或 AppArmor 来实现。使用 sVirt 时,SELinux 配置为在单独的安全上下文下运行每个 QEMU 进程。AppArmor 可以配置为提供类似的功能。我们在以下 sVirt 和实例隔离部分中提供了有关 sVirt 和实例隔离的更多详细信息:SELinux 和虚拟化。

+

特定的 SELinux 策略可用于许多 OpenStack 服务。CentOS 用户可以通过安装 selinux-policy 源码包来查看这些策略。最新的策略出现在 Fedora 的 selinux-policy 存储库中。rawhide-contrib 分支包含以 .te 结尾的文件,例如 cinder.te ,这些文件可以在运行 SELinux 的系统上使用。

+

OpenStack 服务的 AppArmor 配置文件当前不存在,但 OpenStack-Ansible 项目通过将 AppArmor 配置文件应用于运行 OpenStack 服务的每个容器来处理此问题。

+

sVirt:SELinux 和虚拟化

+

凭借独特的内核级架构和国家安全局 (NSA) 开发的安全机制,KVM 为多租户提供了基础隔离技术。安全虚拟化 (sVirt) 技术的发展起源于 2002 年,是 SELinux 对现代虚拟化的应用。SELinux 旨在应用基于标签的分离控制,现已扩展为在虚拟机进程、设备、数据文件和代表它们执行操作的系统进程之间提供隔离。

+

OpenStack 的 sVirt 实现旨在保护虚拟机管理程序主机和虚拟机免受两个主要威胁媒介的侵害:

+

虚拟机监控程序威胁

+

在虚拟机中运行的受损应用程序会攻击虚拟机监控程序以访问底层资源。例如,当虚拟机能够访问虚拟机监控程序操作系统、物理设备或其他应用程序时。此威胁向量存在相当大的风险,因为虚拟机监控程序上的入侵可能会感染物理硬件并暴露其他虚拟机和网段。

+

虚拟机(多租户)威胁

+

在 VM 中运行的受损应用程序会攻击虚拟机监控程序,以访问或控制另一个虚拟机及其资源。这是虚拟化特有的威胁向量,存在相当大的风险,因为大量虚拟机文件映像可能因单个应用程序中的漏洞而受到损害。这种虚拟网络攻击是一个主要问题,因为用于保护真实网络的管理技术并不直接适用于虚拟环境。

+

每个基于 KVM 的虚拟机都是一个由 SELinux 标记的进程,从而有效地在每个虚拟机周围建立安全边界。此安全边界由 Linux 内核监视和强制执行,从而限制虚拟机访问其边界之外的资源,例如主机数据文件或其他 VM。

+

无论虚拟机内运行的客户机操作系统如何,都会提供 sVirt 隔离。可以使用 Linux 或 Windows VM。此外,许多 Linux 发行版在操作系统中提供 SELinux,使虚拟机能够保护内部虚拟资源免受威胁。

+

标签和类别

+

基于 KVM 的虚拟机实例使用其自己的 SELinux 数据类型进行标记,称为 svirt_image_t 。内核级保护可防止未经授权的系统进程(如恶意软件)操纵磁盘上的虚拟机映像文件。关闭虚拟机电源后,映像的存储 svirt_image_t 方式如下所示:

+
system_u:object_r:svirt_image_t:SystemLow image1
+system_u:object_r:svirt_image_t:SystemLow image2
+system_u:object_r:svirt_image_t:SystemLow image3
+system_u:object_r:svirt_image_t:SystemLow image4
+

svirt_image_t 标签唯一标识磁盘上的图像文件,允许 SELinux 策略限制访问。当基于 KVM 的计算映像通电时,sVirt 会将随机数字标识符附加到映像中。sVirt 能够为每个虚拟机管理程序节点最多分配 524,288 个虚拟机的数字标识符,但大多数 OpenStack 部署极不可能遇到此限制。

+

此示例显示了 sVirt 类别标识符:

+
system_u:object_r:svirt_image_t:s0:c87,c520 image1
+system_u:object_r:svirt_image_t:s0:419,c172 image2
+

SELinux 用户和角色

+

SELinux 管理用户角色。可以通过 -Z 标志或使用 semanage 命令查看这些内容。在虚拟机管理程序上,只有管理员才能访问系统,并且应该围绕管理用户和系统上的任何其他用户具有适当的上下文。有关更多信息,请参阅 SELinux 用户文档。
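作为示意,可以用带 -Z 选项的常用命令查看虚拟机进程和镜像文件当前的 SELinux 上下文:

# 查看 QEMU/KVM 进程的安全上下文
$ ps -eZ | grep qemu
# 查看实例磁盘文件的标签(路径以实际部署为准)
$ ls -Z /var/lib/nova/instances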

+

布尔值

+

为了减轻管理 SELinux 的管理负担,许多企业 Linux 平台利用 SELinux 布尔值来快速改变 sVirt 的安全态势。

+

基于 Red Hat Enterprise Linux 的 KVM 部署使用以下 sVirt 布尔值:

+

  • virt_use_common:允许 virt 使用串行或并行通信端口。
  • virt_use_fusefs:允许 virt 读取 FUSE 挂载的文件。
  • virt_use_nfs:允许 virt 管理 NFS 挂载的文件。
  • virt_use_samba:允许 virt 管理 CIFS 挂载的文件。
  • virt_use_sanlock:允许受限的虚拟客户机与 sanlock 交互。
  • virt_use_sysfs:允许 virt 管理设备配置 (PCI)。
  • virt_use_usb:允许 virt 使用 USB 设备。
  • virt_use_xserver:允许虚拟机与 X Window 系统交互。

+
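可以用 getsebool/setsebool 查看并调整这些布尔值,下面是示意(具体取值应根据后端存储等实际需求决定):

# 列出与 virt 相关的布尔值及当前状态
$ getsebool -a | grep virt_use
# 例如:允许虚拟机使用 NFS 挂载的存储,-P 表示持久化该设置
$ setsebool -P virt_use_nfs on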

加固计算部署

+

任何 OpenStack 部署的主要安全问题之一,是围绕 nova.conf 等敏感文件的安全与控制。此配置文件通常位于 /etc 目录中,包含许多敏感选项,包括配置详细信息和服务密码。应为所有此类敏感文件设置严格的文件级权限,并通过 iNotify 或 Samhain 等文件完整性监视 (FIM) 工具监视其更改。这些实用程序会在已知良好状态下获取目标文件的哈希值,然后定期重新计算该文件的哈希并与已知良好的哈希值进行比较;如果发现文件被意外修改,即可发出警报。

+

要检查某个文件的权限,可以切换到该文件所在的目录并运行 ls -lh 命令。这将显示文件的权限、所有者和组,以及上次修改时间和创建时间等其他信息。

+

/var/lib/nova 目录用于保存有关给定计算主机上的实例的详细信息。此目录也应被视为敏感目录,并具有严格强制执行的文件权限。此外,应定期备份它,因为它包含与该主机关联的实例的信息和元数据。

+

如果部署不需要完整的虚拟机备份,建议排除该 /var/lib/nova/instances 目录,因为它的大小将与该节点上运行的每个 VM 的总空间一样大。如果部署确实需要完整 VM 备份,则需要确保成功备份此目录。

+

监视是 IT 基础结构的关键组件,我们建议监视和分析计算日志文件,以便可以创建有意义的警报。

+

OpenStack 漏洞管理团队

+

我们建议在发布安全问题和建议时及时了解它们。OpenStack 安全门户是一个中央门户,可以在这里协调建议、通知、会议和流程。此外,OpenStack 漏洞管理团队 (VMT) 门户通过将 Bug 标记为“此 bug 是安全漏洞”来协调 OpenStack 项目内的补救措施,以及调查负责任地(私下)向 VMT 披露的报告 bug 的过程。VMT 流程页面中概述了更多详细信息,并生成了 OpenStack 安全公告 (OSSA)。此 OSSA 概述了问题和修复程序,并链接到原始错误和补丁托管位置。

+

OpenStack 安全注意事项

+

报告的安全漏洞被发现是配置错误的结果,或者不是严格意义上的 OpenStack 的一部分,这些漏洞将被起草到 OpenStack 安全说明 (OSSN) 中。这些问题包括配置问题,例如确保身份提供程序映射以及非 OpenStack,但关键问题(例如影响 OpenStack 使用的平台的 Bashbug/Ghost 或 Venom 漏洞)。当前的 OSSN 集位于安全说明 wiki 中。

+

OpenStack-dev 邮件列表

+

所有错误、OSSA 和 OSSN 都通过 openstack-discuss 邮件列表公开发布,主题行中带有 [security] 主题。我们建议订阅此列表以及邮件过滤规则,以确保不会遗漏 OSSN、OSSA 和其他重要公告。openstack-discuss 邮件列表通过 OpenStack Development Mailing List 进行管理。openstack-discuss 使用《项目团队指南》中定义的标记。

+

虚拟机管理程序邮件列表

+

在实施OpenStack时,核心决策之一是使用哪个虚拟机管理程序。我们建议您了解与您选择的虚拟机管理程序相关的公告。以下是几个常见的虚拟机管理程序安全列表:

+

Xen:

+

http://xenbits.xen.org/xsa/

+

VMWare:

+

http://blogs.vmware.com/security/

+

其他(KVM 等):

+

http://seclists.org/oss-sec

+

漏洞意识

+

OpenStack 漏洞管理团队

+

我们建议在发布安全问题和建议时及时了解它们。OpenStack 安全门户是一个中央门户,可以在这里协调建议、通知、会议和流程。此外,OpenStack 漏洞管理团队 (VMT) 门户协调 OpenStack 内部的补救措施,以及调查负责任地(私下)向 VMT 披露的报告错误的过程,方法是将错误标记为“此错误是安全漏洞”。VMT 流程页面中概述了更多详细信息,并生成了 OpenStack 安全公告 (OSSA)。此 OSSA 概述了问题和修复程序,并链接到原始错误和补丁托管位置。

+

OpenStack 安全注意事项

+

报告的安全漏洞被发现是配置错误的结果,或者不是严格意义上的 OpenStack 的一部分,将被起草到 OpenStack 安全说明 (OSSN) 中。这些问题包括配置问题,例如确保身份提供商映射,以及非 OpenStack 但关键的问题,例如影响 OpenStack 使用的平台的 Bashbug/Ghost 或 Venom 漏洞。当前的 OSSN 集位于安全说明 wiki 中。

+

OpenStack-discuss 邮件列表

+

所有 bug、OSSA 和 OSSN 都通过 openstack-discuss 邮件列表公开发布,主题行中包含 [security] 主题。我们建议订阅此列表以及邮件过滤规则,以确保不会遗漏 OSSN、OSSA 和其他重要公告。openstack-discuss 邮件列表通过 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss 进行管理。openstack-discuss 使用《项目团队指南》中定义的标记。

+

虚拟机管理程序邮件列表

+

在实施OpenStack时,核心决策之一是使用哪个虚拟机管理程序。我们建议您了解与您选择的虚拟机管理程序相关的公告。以下是几个常见的虚拟机管理程序安全列表:

+
    +
  • Xen:
  • +
+

http://xenbits.xen.org/xsa/

+
    +
  • VMWare:
  • +
+

http://blogs.vmware.com/security/

+
    +
  • 其他(KVM 等):
  • +
+

http://seclists.org/oss-sec

+

如何选择虚拟控制台

+

云架构师需要做出的有关计算服务配置的一个决定是使用 VNC 还是 SPICE。

+

虚拟网络计算机 (VNC)

+

OpenStack 可以配置为使用虚拟网络计算机 (VNC) 协议为租户和管理员提供对实例的远程桌面控制台访问。

+
功能
+
    +
  1. OpenStack Dashboard (horizon) 可以使用 HTML5 noVNC 客户端直接在网页上为实例提供 VNC 控制台。这要求 nova-novncproxy 服务从公用网络桥接到管理网络。
  2. +
  3. nova 命令行实用程序可以返回 VNC 控制台的 URL,以供 nova Java VNC 客户端访问。这要求 nova-xvpvncproxy 服务从公用网络桥接到管理网络。
  4. +
+
安全注意事项
+
    +
  1. 默认情况下, nova-novncproxynova-xvpvncproxy 服务会打开经过令牌身份验证的面向公众的端口。
  2. +
  3. 默认情况下,远程桌面流量未加密。可以启用 TLS 来加密 VNC 流量。请参阅 TLS 和 SSL 简介以获取适当的建议。
  4. +
+
参考书目
+
    +
  1. blog.malchuk.ru,OpenStack VNC 安全性。2013。与 VNC 端口的安全连接 (Secure Connections to VNC ports)
  2. +
  3. OpenStack 邮件列表,[OpenStack] nova-novnc SSL 配置 - Havana。2014。OpenStack nova-novnc SSL 配置
  4. +
  5. Redhat.com/solutions,在 OpenStack 中使用 SSL 加密 nova-novacproxy。2014. OpenStack nova-novncproxy SSL加密
  6. +
+

独立计算环境的简单协议 (SPICE)

+

作为 VNC 的替代方案,OpenStack 使用独立计算环境的简单协议 (SPICE) 协议提供对客户机虚拟机的远程桌面访问。

+
功能
+
    +
  1. OpenStack Dashboard (horizon) 直接在实例网页上支持 SPICE。这需要服务 nova-spicehtml5proxy
  2. +
  3. nova 命令行实用程序可以返回 SPICE 控制台的 URL,以供 SPICE-html 客户端访问。
  4. +
+
限制
+
    +
  1. 尽管 SPICE 与 VNC 相比具有许多优势,但 spice-html5 浏览器集成目前不允许管理员利用这些优势。为了利用 多显示器、USB 直通等 SPICE 功能,我们建议管理员在管理网络中使用独立的 SPICE 客户端。
  2. +
+
安全注意事项
+
    +
  1. 默认情况下,该 nova-spicehtml5proxy 服务会打开经过令牌身份验证的面向公众的端口。
  2. +
  3. 功能和集成仍在不断发展。我们将在下一个版本中访问这些功能并提出建议。
  4. +
  5. 与 VNC 的情况一样,目前我们建议从管理网络使用 SPICE,此外还限制使用少数人。
  6. +
+
参考书目
+
    +
  1. OpenStack 管理员指南,SPICE 控制台。
  2. +
  3. bugzilla.redhat.com, Bug 913607 - RFE: 支持通过 websockets 隧道传输 SPICE。2013. RedHat 错误913607。
  4. +
+

检查表

+

Check-Compute-01:配置文件的用户/组所有权是否设置为 root/nova?

+

配置文件包含组件平稳运行所需的关键参数和信息。如果非特权用户有意或无意地修改或删除任何参数或文件本身,则会导致严重的可用性问题,从而导致对其他最终用户的拒绝服务。此类关键配置文件的用户所有权必须设置为 root,组所有权必须设置为 nova。此外,包含目录应具有相同的所有权,以确保正确拥有新创建的文件。

+

运行以下命令:

+
$ stat -L -c "%U %G" /etc/nova/nova.conf | egrep "root nova"
+$ stat -L -c "%U %G" /etc/nova/api-paste.ini | egrep "root nova"
+$ stat -L -c "%U %G" /etc/nova/policy.json | egrep "root nova"
+$ stat -L -c "%U %G" /etc/nova/rootwrap.conf | egrep "root nova"
+$ stat -L -c "%U %G" /etc/nova | egrep "root nova"
+

通过:如果所有这些配置文件的用户和组所有权分别设置为 root 和 nova。上述命令应显示 root nova 的输出。

+

失败:如果上述命令未返回任何输出,则用户和组所有权可能已设置为除 root 以外的任何用户或除 nova 以外的任何组。

+

推荐于:计算。

+

Check-Compute-02:是否为配置文件设置了严格的权限?

+

与前面的检查类似,我们建议为此类配置文件设置严格的访问权限。

+

运行以下命令:

+
$ stat -L -c "%a" /etc/nova/nova.conf
+$ stat -L -c "%a" /etc/nova/api-paste.ini
+$ stat -L -c "%a" /etc/nova/policy.json
+$ stat -L -c "%a" /etc/nova/rootwrap.conf
+

还可以进行更广泛的限制:如果包含目录设置为 750,则保证此目录中新创建的文件具有所需的权限。

+

通过:如果权限设置为 640 或更严格,或者包含目录设置为 750。640/750 的权限转换为所有者 r/w、组 r,而对其他人没有权限。例如,“u=rw,g=r,o=”。

+

注意

+
如果 Check-Compute-01:配置文件的用户/组所有权是否设置为 root/nova?权限设置为 640,root  具有读/写访问权限,nova 具有对这些配置文件的读取访问权限。也可以使用以下命令验证访问权限。仅当此命令支持 ACL  时,它才在您的系统上可用。
+
$ getfacl --tabular -a /etc/nova/nova.conf
+getfacl: Removing leading '/' from absolute path names
+# file: etc/nova/nova.conf
+USER   root  rw-
+GROUP  nova  r--
+mask         r--
+other        ---
+

失败:如果权限未设置为至少 640/750。

+

推荐于:计算。

+

Check-Compute-03:Keystone 是否用于身份验证?

+

注意

+
此项仅适用于 OpenStack 版本 Rocky 及之前版本,因为 `auth_strategy` Stein 中已弃用。
+

OpenStack 支持各种身份验证策略,如 noauth 和 keystone。如果使用 noauth 策略,那么用户无需任何身份验证即可与 OpenStack 服务进行交互。这可能是一个潜在的风险,因为攻击者可能会获得对 OpenStack 组件的未经授权的访问。我们强烈建议所有服务都必须使用其服务帐户通过 keystone 进行身份验证。

+

在Ocata之前:

+

通过:如果 /etc/nova/nova.conf 中 [DEFAULT] 部分下的 auth_strategy 参数值设置为 keystone。

+

失败:如果 [DEFAULT] 部分下的 auth_strategy 参数值设置为 noauth 或 noauth2。

+

在Ocata之后:

+

通过:如果 /etc/nova/nova.conf 中 [api] 或 [DEFAULT] 部分下的 auth_strategy 参数值设置为 keystone。

+

失败:如果 [api] 或 [DEFAULT] 部分下的 auth_strategy 参数值设置为 noauth 或 noauth2。

+

Check-Compute-04:是否使用安全协议进行身份验证?

+

OpenStack 组件使用各种协议相互通信,通信可能涉及敏感或机密数据。攻击者可能会尝试窃听频道以访问敏感信息。所有组件必须使用安全通信协议相互通信。

+

通过:如果 /etc/nova/nova.conf 中 [keystone_authtoken] 部分下的 www_authenticate_uri 参数值设置为以 https:// 开头的 Identity API 端点,并且同一 [keystone_authtoken] 部分下的 insecure 参数值设置为 False。

+

失败:如果 /etc/nova/nova.conf 中 [keystone_authtoken] 部分下的 www_authenticate_uri 参数值未设置为以 https:// 开头的 Identity API 端点,或者同一 [keystone_authtoken] 部分下的 insecure 参数值设置为 True。

+

Check-Compute-05:Nova 与 Glance 的通信是否安全?

+

OpenStack 组件使用各种协议相互通信,通信可能涉及敏感或机密数据。攻击者可能会尝试窃听频道以访问敏感信息。所有组件必须使用安全通信协议相互通信。

+

通过:如果 /etc/nova/nova.conf 中 [glance] 部分下的 api_insecure 参数值设置为 False,并且同一 [glance] 部分下的 api_servers 参数值设置为以 https:// 开头的值。

+

失败:如果 /etc/nova/nova.conf 中 [glance] 部分下的 api_insecure 参数值设置为 True,或者同一 [glance] 部分下的 api_servers 参数值设置为不以 https:// 开头的值。

+

块存储

+

OpenStack Block Storage (cinder) 是一项服务,它提供软件(服务和库)来自助管理持久性块级存储设备。这将创建对块存储资源的按需访问,以便与 OpenStack 计算 (nova) 实例一起使用。通过将块存储池虚拟化到各种后端存储设备(可以是软件实现或传统硬件存储产品),通过抽象创建软件定义存储。其主要功能是管理块设备的创建、附加和分离。消费者不需要知道后端存储设备的类型或它的位置。

+

计算实例通过行业标准存储协议(如 iSCSI、以太网 ATA 或光纤通道)存储和检索块存储。这些资源通过 OpenStack 原生标准 HTTP RESTful API 进行管理和配置。有关 API 的更多详细信息,请参阅 OpenStack 块存储文档。

+
    +
  • 卷擦除
  • +
  • 检查表
  • +
  • Check-Block-01:配置文件的用户/组所有权是否设置为 root/cinder?
  • +
  • Check-Block-02:是否为配置文件设置了严格的权限?
  • +
  • Check-Block-03:Keystone 是否用于身份验证?
  • +
  • Check-Block-04:是否启用了 TLS 进行身份验证?
  • +
  • Check-Block-05:cinder 是否通过 TLS 与 nova 通信?
  • +
  • Check-Block-06:cinder 是否通过 TLS 与 glance 通信?
  • +
  • Check-Block-07: NAS 是否在安全的环境中运行?
  • +
  • Check-Block-08:请求正文的最大大小是否设置为默认值 (114688)?
  • +
  • Check-Block-09:是否启用了卷加密功能?
  • +
+

注意

+
虽然本章目前对具体指南的介绍很少,但预计将遵循标准的强化实践。本节将扩展相关信息。
+

卷擦除

+

有几种方法可以擦除块存储设备。传统的方法是将 lvm_type 设置为 thin ,如果使用 LVM 后端,则使用 volume_clear 该参数。或者,如果使用卷加密功能,则在删除卷加密密钥时不需要卷擦除。有关设置的详细信息,请参阅卷加密部分中的 OpenStack 配置参考文档,以及有关密钥删除的 Castellan 使用文档

+

注意

+
在较旧的 OpenStack 版本中, `lvm_type=default` 用于表示擦除。虽然此方法仍然有效,但 `lvm_type=default` 不建议用于设置安全删除。
+

volume_clear 参数可以设置为 zero 。该 zero 参数将向设备写入一次零传递。
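下面是一个示意性的 cinder.conf LVM 后端片段,把上述两种做法放在一起展示(后端小节名称与卷组名称均为示例,实际名称取决于 enabled_backends 等部署配置):

[lvmdriver-1]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
# 做法一:使用精简置备卷
lvm_type = thin
# 做法二:对卷在删除时写零擦除
volume_clear = zero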

+

有关该 lvm_type 参数的更多信息,请参阅 cinder 项目文档的精简置备中的 LVM 和超额订阅部分。

+

有关该 volume_clear 参数的详细信息,请参阅 cinder 项目文档的 Cinder 配置选项部分。

+

检查表

+

Check-Block-01:配置文件的用户/组所有权是否设置为 root/cinder?

+

配置文件包含组件平稳运行所需的关键参数和信息。如果非特权用户有意或无意地修改或删除任何参数或文件本身,则会导致严重的可用性问题,从而导致拒绝向其他最终用户提供服务。因此,此类关键配置文件的用户所有权必须设置为 root,组所有权必须设置为 cinder。此外,包含目录应具有相同的所有权,以确保正确拥有新文件。

+

运行以下命令:

+
$ stat -L -c "%U %G" /etc/cinder/cinder.conf | egrep "root cinder"
+$ stat -L -c "%U %G" /etc/cinder/api-paste.ini | egrep "root cinder"
+$ stat -L -c "%U %G" /etc/cinder/policy.json | egrep "root cinder"
+$ stat -L -c "%U %G" /etc/cinder/rootwrap.conf | egrep "root cinder"
+$ stat -L -c "%U %G" /etc/cinder | egrep "root cinder"
+

通过:如果所有这些配置文件的用户和组所有权分别设置为 root 和 cinder。上面的命令显示了根煤渣的输出。

+

失败:如果上述命令未返回任何输出,因为用户和组所有权可能已设置为除 root 以外的任何用户或除 cinder 以外的任何组。

+

Check-Block-02:是否为配置文件设置了严格的权限?

+

与前面的检查类似,我们建议为此类配置文件设置严格的访问权限。

+

运行以下命令:

+
$ stat -L -c "%a" /etc/cinder/cinder.conf
+$ stat -L -c "%a" /etc/cinder/api-paste.ini
+$ stat -L -c "%a" /etc/cinder/policy.json
+$ stat -L -c "%a" /etc/cinder/rootwrap.conf
+$ stat -L -c "%a" /etc/cinder
+

还可以进行更广泛的限制:如果包含目录设置为 750,则保证此目录中新创建的文件具有所需的权限。

+

通过:如果权限设置为 640 或更严格,或者包含目录设置为 750。640/750 的权限转换为所有者 r/w、组 r,而对其他人没有权限,即“u=rw,g=r,o=”。请注意,使用 Check-Block-01 时:配置文件的用户/组所有权是否设置为 root/cinder?权限设置为 640,root 具有读/写访问权限,cinder 具有对这些配置文件的读取访问权限。也可以使用以下命令验证访问权限。仅当此命令支持 ACL 时,它才在您的系统上可用。

+
$ getfacl --tabular -a /etc/cinder/cinder.conf
+getfacl: Removing leading '/' from absolute path names
+# file: etc/cinder/cinder.conf
+USER   root  rw-
+GROUP  cinder  r--
+mask         r--
+other        ---
+

失败:如果权限未设置为至少 640。

+

Check-Block-03:Keystone 是否用于身份验证?

+

注意

+
此项仅适用于 OpenStack 版本 Rocky 及之前版本,因为 `auth_strategy` Stein 中已弃用。
+

OpenStack 支持各种身份验证策略,如 noauth、keystone 等。如果使用“noauth”策略,那么用户无需任何身份验证即可与OpenStack服务进行交互。这可能是一个潜在的风险,因为攻击者可能会获得对 OpenStack 组件的未经授权的访问。因此,我们强烈建议所有服务都必须使用其服务帐户通过 keystone 进行身份验证。

+

通过:如果 section in 下的参数 auth_strategy 设置为 keystone[DEFAULT] /etc/cinder/cinder.conf

+

失败:如果 section 下的 [DEFAULT] 参数 auth_strategy 值设置为 noauth

+

Check-Block-04:是否启用了 TLS 进行身份验证?

+

OpenStack 组件使用各种协议相互通信,通信可能涉及敏感/机密数据。攻击者可能会尝试窃听频道以访问敏感信息。因此,所有组件都必须使用安全的通信协议相互通信。

+

通过:如果 section in /etc/cinder/cinder.conf 下的参数值设置为 Identity API 端点开头, https:// 并且 same /etc/cinder/cinder.conf 中同一 [keystone_authtoken] 部分下的 [keystone_authtoken] 参数 www_authenticate_uri insecure 值设置为 False

+

失败:如果 in /etc/cinder/cinder.conf 部分下的 [keystone_authtoken] 参数 www_authenticate_uri 值未设置为以 开头的身份 API 端点, https:// 或者同一 /etc/cinder/cinder.conf 部分中的参数 insecure [keystone_authtoken] 值设置为 True

+

Check-Block-05:cinder 是否通过 TLS 与 nova 通信?

+

OpenStack 组件使用各种协议相互通信,通信可能涉及敏感/机密数据。攻击者可能会尝试窃听频道以访问敏感信息。因此,所有组件都必须使用安全的通信协议相互通信。

+

通过:如果 section in 下的参数 nova_api_insecure 设置为 False[DEFAULT] /etc/cinder/cinder.conf

+

失败:如果 section in 下的参数 nova_api_insecure 设置为 True[DEFAULT] /etc/cinder/cinder.conf

+

Check-Block-06:cinder 是否通过 TLS 与 glance 通信?

+

与之前的检查(Check-Block-05:cinder 是否通过 TLS 与 nova 通信?)类似,我们建议所有组件使用安全通信协议相互通信。

+

通过:如果 in 部分下的 [DEFAULT] 参数值设置为 False 并且参数 glance_api_servers glance_api_insecure 值设置为以 https:// 开头 /etc/cinder/cinder.conf 的值。

+

失败:如果将 section in 下的参数值设置为 True 或参数 glance_api_servers glance_api_insecure 值设置为不以 https:// 开头的值。 [DEFAULT] /etc/cinder/cinder.conf

+

Check-Block-07: NAS 是否在安全的环境中运行?

+

Cinder 支持 NFS 驱动程序,其工作方式与传统的块存储驱动程序不同。NFS 驱动程序实际上不允许实例在块级别访问存储设备。相反,文件是在 NFS 共享上创建的,并映射到模拟块储存设备的实例。Cinder 通过在创建 Cinder 卷时控制文件权限来支持此类文件的安全配置。Cinder 配置还可以控制是以 root 用户身份还是当前 OpenStack 进程用户身份运行文件操作。

+

通过:如果 section in 下的参数 nas_secure_file_permissions 设置为 auto[DEFAULT] /etc/cinder/cinder.conf 如果设置为 auto ,则在 cinder 启动期间进行检查以确定是否存在现有的 cinder 卷,任何卷都不会将选项设置为 True ,并使用安全文件权限。检测现有卷会将选项设置为 False ,并使用当前不安全的方法来处理文件权限。如果 section in 下的参数 nas_secure_file_operations 设置为 auto[DEFAULT] /etc/cinder/cinder.conf 当设置为“auto”时,在 cinder 启动期间进行检查以确定是否存在现有的 cinder 卷,任何卷都不会将选项设置为 True ,安全且不以 root 用户身份运行。对现有卷的检测会将选项设置为 False ,并使用当前方法以 root 用户身份运行操作。对于新安装,会编写一个“标记文件”,以便随后重新启动 cinder 将知道原始确定是什么。

+

失败:如果 section in 下的参数值设置为 False ,并且 section in /etc/cinder/cinder.conf /etc/cinder/cinder.conf 下的 [DEFAULT] [DEFAULT] 参数 nas_secure_file_permissions nas_secure_file_operations 值设置为 False

+

Check-Block-08:请求正文的最大大小是否设置为默认值 (114688)?

+

如果未定义每个请求的最大正文大小,攻击者可以构建任意较大的osapi请求,导致服务崩溃,最终导致拒绝服务攻击。分配最大值可确保阻止任何恶意超大请求,从而确保服务的持续可用性。

+

通过:如果 /etc/cinder/cinder.conf 中 [DEFAULT] 部分下的 osapi_max_request_body_size 参数值,或 [oslo_middleware] 部分下的 max_request_body_size 参数值,设置为 114688。

+

失败:如果 /etc/cinder/cinder.conf 中 [DEFAULT] 部分下的 osapi_max_request_body_size 参数值,或 [oslo_middleware] 部分下的 max_request_body_size 参数值,未设置为 114688。

+

Check-Block-09:是否启用了卷加密功能?

+

未加密的卷数据使卷托管平台成为攻击者特别高价值的目标,因为它允许攻击者读取许多不同 VM 的数据。此外,物理存储介质可能会被窃取、重新装载和从另一台计算机访问。加密卷数据可以降低这些风险,并为卷托管平台提供深度防御。块存储 (cinder) 能够在将卷数据写入磁盘之前对其进行加密,因此建议开启卷加密功能。有关说明,请参阅 Openstack Cinder 服务配置文档的卷加密部分。

+

通过:如果 1) 设置了 in [key_manager] 部分下的参数值,2) 设置了 in 下的 [key_manager] 参数 backend backend 值,以及 3) 如果正确遵循了 /etc/cinder/cinder.conf /etc/nova/nova.conf 上述文档中的说明。

+

若要进一步验证,请在完成卷加密设置并为 LUKS 创建卷类型后执行这些步骤,如上述文档中所述。

+
    +
  1. 创建 VM:
  2. +
+
$ openstack server create --image cirros-0.3.1-x86_64-disk --flavor m1.tiny TESTVM
+
    +
  1. 创建加密卷并将其附加到 VM:
  2. +
+
$ openstack volume create --size 1 --type LUKS 'encrypted volume'
+$ openstack volume list
+$ openstack server add volume --device /dev/vdb TESTVM 'encrypted volume'
+
    +
  1. 在 VM 上,将一些文本发送到新附加的卷并同步它:
  2. +
+
# echo "Hello, world (encrypted /dev/vdb)" >> /dev/vdb
+# sync && sleep 2
+
    +
  1. 在托管 cinder 卷服务的系统上,同步以刷新 I/O 缓存,然后测试是否可以找到字符串:
  2. +
+
# sync && sleep 2
+# strings /dev/stack-volumes/volume-* | grep "Hello"
+

搜索不应返回写入加密卷的字符串。

+

失败:如果未设置 in 部分下的参数值,或者未设置 in /etc/cinder/cinder.conf /etc/nova/nova.conf 部分下的 [key_manager] [key_manager] 参数 backend backend 值,或者未正确遵循上述文档中的说明。

+

图像存储

+

OpenStack Image Storage (glance) 是一项服务,用户可以在其中上传和发现旨在与其他服务一起使用的数据资产。这目前包括图像和元数据定义。

+

映像服务包括发现、注册和检索虚拟机映像。Glance 有一个 RESTful API,允许查询 VM 映像元数据以及检索实际映像。

+

有关该服务的更多详细信息,请参阅 OpenStack Glance 文档。

+
    +
  • 检查表
  • +
  • Check-Image-01:配置文件的用户/组所有权是否设置为 root/glance?
  • +
  • Check-Image-02:是否为配置文件设置了严格的权限?
  • +
  • Check-Image-03:Keystone 是否用于身份验证?
  • +
  • Check-Image-04:是否启用了 TLS 进行身份验证?
  • +
  • Check-Image-05:是否阻止了屏蔽端口扫描?
  • +
+

注意

+
虽然本章目前对具体指南的介绍很少,但预计将遵循标准的强化实践。本节将扩展相关信息。
+

检查表

+

Check-Image-01:配置文件的用户/组所有权是否设置为 root/glance?

+

配置文件包含组件平稳运行所需的关键参数和信息。如果非特权用户有意或无意地修改或删除任何参数或文件本身,则会导致严重的可用性问题,从而导致拒绝向其他最终用户提供服务。因此,此类关键配置文件的用户所有权必须设置为 root,组所有权必须设置为 glance。此外,包含目录应具有相同的所有权,以确保正确拥有新创建的文件。

+

运行以下命令:

+
$ stat -L -c "%U %G" /etc/glance/glance-api-paste.ini | egrep "root glance"
+$ stat -L -c "%U %G" /etc/glance/glance-api.conf | egrep "root glance"
+$ stat -L -c "%U %G" /etc/glance/glance-cache.conf | egrep "root glance"
+$ stat -L -c "%U %G" /etc/glance/glance-manage.conf | egrep "root glance"
+$ stat -L -c "%U %G" /etc/glance/glance-registry-paste.ini | egrep "root glance"
+$ stat -L -c "%U %G" /etc/glance/glance-registry.conf | egrep "root glance"
+$ stat -L -c "%U %G" /etc/glance/glance-scrubber.conf | egrep "root glance"
+$ stat -L -c "%U %G" /etc/glance/glance-swift-store.conf | egrep "root glance"
+$ stat -L -c "%U %G" /etc/glance/policy.json | egrep "root glance"
+$ stat -L -c "%U %G" /etc/glance/schema-image.json | egrep "root glance"
+$ stat -L -c "%U %G" /etc/glance/schema.json | egrep "root glance"
+$ stat -L -c "%U %G" /etc/glance | egrep "root glance"
+

通过:如果所有这些配置文件的用户和组所有权分别设置为 root 和 glance。上面的命令显示了 root glance 的输出。

+

失败:如果上述命令不返回任何输出。

+

Check-Image-02:是否为配置文件设置了严格的权限?

+

与前面的检查类似,我们建议您为此类配置文件设置严格的访问权限。

+

运行以下命令:

+
$ stat -L -c "%a" /etc/glance/glance-api-paste.ini
+$ stat -L -c "%a" /etc/glance/glance-api.conf
+$ stat -L -c "%a" /etc/glance/glance-cache.conf
+$ stat -L -c "%a" /etc/glance/glance-manage.conf
+$ stat -L -c "%a" /etc/glance/glance-registry-paste.ini
+$ stat -L -c "%a" /etc/glance/glance-registry.conf
+$ stat -L -c "%a" /etc/glance/glance-scrubber.conf
+$ stat -L -c "%a" /etc/glance/glance-swift-store.conf
+$ stat -L -c "%a" /etc/glance/policy.json
+$ stat -L -c "%a" /etc/glance/schema-image.json
+$ stat -L -c "%a" /etc/glance/schema.json
+$ stat -L -c "%a" /etc/glance
+

还可以进行更广泛的限制:如果包含目录设置为 750,则保证此目录中新创建的文件具有所需的权限。

+

通过:如果权限设置为 640 或更严格,或者包含目录设置为 750。640/750 的权限转换为所有者 r/w、组 r,而对其他人没有权限。例如, u=rw,g=r,o= .

+

注意

+
使用 Check-Image-01: Devices / Group Ownership of config files 是否设置为  root/glance?,权限设置为 640,则 root 具有读/写访问权限,glance  具有对这些配置文件的读取访问权限。也可以使用以下命令验证访问权限。仅当此命令支持 ACL 时,它才在您的系统上可用。
+
$ getfacl --tabular -a /etc/glance/glance-api.conf
+getfacl: Removing leading '/' from absolute path names
+# file: /etc/glance/glance-api.conf
+USER   root  rw-
+GROUP  glance  r--
+mask         r--
+other        ---
+

失败:如果权限未设置为至少 640。

+

Check-Image-03:Keystone 是否用于身份验证?

+

注意

+
此项仅适用于 OpenStack 版本 Rocky 及之前版本,因为 `auth_strategy` Stein 中已弃用。
+

OpenStack 支持各种身份验证策略,包括 noauth 和 keystone。如果使用该 noauth 策略,则用户无需任何身份验证即可与 OpenStack 服务进行交互。这可能是一个潜在的风险,因为攻击者可能会获得对 OpenStack 组件的未经授权的访问。我们强烈建议所有服务都必须使用其服务帐户通过 keystone 进行身份验证。

+

通过:如果 section in 下的参数值设置为 , keystone 并且 section in /etc/glance/glance-api.conf /etc/glance /glance-registry.conf 下的 [DEFAULT] [DEFAULT] 参数 auth_strategy auth_strategy 值设置为 keystone

+

失败:如果 section in 下的参数值设置为 noauth 或 section in /etc/glance/glance-api.conf /etc/glance/glance- registry.conf 下的 [DEFAULT] [DEFAULT] 参数 auth_strategy auth_strategy 值设置为 noauth

+

Check-Image-04:是否启用了 TLS 进行身份验证?

+

OpenStack 组件使用各种协议相互通信,通信可能涉及敏感或机密数据。攻击者可能会尝试窃听频道以访问敏感信息。所有组件必须使用安全的通信协议相互通信。

+

通过:如果 section in 下的参数值设置为以 开头的 Identity API 端点 https:// ,并且该参数 insecure www_authenticate_uri 的值位于 same /etc/glance/glance-registry.conf 中的同一 [keystone_authtoken] 部分下,则设置为 False[keystone_authtoken] /etc/glance/glance-api.conf

+

失败:如果 中的 /etc/glance/glance-api.conf 部分下的 [keystone_authtoken] 参数 www_authenticate_uri 值未设置为以 https:// 开头的标识 API 端点,或者同一 /etc/glance/glance-api.conf 部分中的参数 insecure [keystone_authtoken] 值设置为 True

+

Check-Image-05:是否阻止了屏蔽端口扫描?

+

Glance 提供的映像服务 API v1 中的 copy_from 功能可允许攻击者执行屏蔽的网络端口扫描。如果启用了 v1 API,则应将此策略设置为受限值。

+

通过:如果 /etc/glance/policy.json 中参数 copy_from 的值设置为受限值,例如 role:admin。

+

失败:如果 /etc/glance/policy.json 中未设置参数 copy_from 的值。
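以下是 /etc/glance/policy.json 的一个片段示例(仅为示意),将 copy_from 限制为 admin 角色:

{
    "copy_from": "role:admin"
}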

+

共享文件系统

+

共享文件系统服务(manila)提供了一组服务,用于管理多租户云环境中的共享文件系统。它类似于OpenStack通过OpenStack块存储服务(cinder)项目提供基于块的存储管理的方式。使用共享文件系统服务,您可以创建共享文件系统并管理其属性,例如可见性、可访问性和使用配额。

+

共享文件系统服务适用于使用以下共享文件系统协议的各种存储提供程序:NFS、CIFS、GlusterFS 和 HDFS。

+

共享文件系统服务的用途与 Amazon Elastic File System (EFS) 相同。

+
    +
  • 介绍
  • +
  • 一般安全信息
  • +
  • 网络和安全模型
  • +
  • 共享后端模式
  • +
  • 扁平化网络与分段化网络
  • +
  • 网络插件
  • +
  • 安全服务
  • +
  • 安全服务简介
  • +
  • 安全服务管理
  • +
  • 共享访问控制
  • +
  • 共享类型访问控制
  • +
  • 政策
  • +
  • 检查表
  • +
  • Check-Shared-01:配置文件的用户/组所有权是否设置为 root/manila?
  • +
  • Check-Shared-02:是否为配置文件设置了严格的权限?
  • +
  • Check-Shared-03:OpenStack Identity 是否用于身份验证?
  • +
  • Check-Shared-04:是否启用了 TLS 进行身份验证?
  • +
  • Check-Shared-05:共享文件系统是否通过 TLS 与计算联系?
  • +
  • Check-Shared-06:共享文件系统是否通过 TLS 与网络联系?
  • +
  • Check-Shared-07:共享文件系统是否通过 TLS 与块存储联系?
  • +
  • Check-Shared-08:请求正文的最大大小是否设置为默认值 (114688)?
  • +
+

介绍

+

共享文件系统服务(manila)旨在在单节点或跨多个节点运行。共享文件系统服务由四个主要服务组成,它们类似于块存储服务:

+
    +
  • manila-api
  • +
  • manila-scheduler
  • +
  • manila-share
  • +
  • manila-data
  • +
+

manila-api

+

提供稳定 RESTful API 的服务。该服务在整个共享文件系统服务中对请求进行身份验证和路由。有 python-manilaclient 可以与 API 交互。有关共享文件系统 API 的更多详细信息,请参阅 OpenStack 共享文件系统 API。

+

manila-share

+

负责管理共享文件服务设备,特别是后端设备。

+

manila-scheduler

+

负责调度请求并将其路由到相应的 manila-share 服务。它通过过滤掉其他后端、只保留一个合适的后端来完成调度。

+

manila-data

+

此服务负责管理数据操作,如果不单独处理,可能需要很长时间才能完成,并阻止其他服务。

+

共享文件系统服务使用基于 SQL 的中央数据库,该数据库由系统中的所有共享文件系统服务共享。它可以使用 SQLAlchemy ORM 支持的任何 SQL 方言,但仅使用 MySQL 和 PostgreSQL 数据库进行过测试。

+

使用 SQL,共享文件系统服务类似于其他 OpenStack 服务,可以与任何 OpenStack 部署一起使用。有关 API 的更多详细信息,请参阅 OpenStack 共享文件系统 API 说明。有关 CLI 用法和配置的更多详细信息,请参阅共享文件系统云管理指南。

+

下图中,您可以看到共享文件系统服务的不同部分如何相互交互。

+

../_images/manila-intro.png

+

除了已经描述的服务之外,您还可以在图中看到另外两个实体:python-manilaclient 和 storage controller。

+

python-manilaclient

+

命令行界面,用于通过 manila-api 与共享文件系统服务进行交互,以及用于以编程方式与共享文件系统服务交互的 Python 模块。

+

Storage controller

+

通常是一个金属盒,带有旋转磁盘、以太网端口和某种软件,允许网络客户端在磁盘上读取和写入文件。还有一些在任意硬件上运行的纯软件存储控制器,群集控制器可能允许多个物理设备显示为单个存储控制器,或纯虚拟存储控制器。

+

共享是远程的、可装载的文件系统。您可以一次将共享装载到多个主机,也可以由多个用户从多个主机访问共享。

+

共享文件系统服务可以使用不同的网络类型:扁平网络、VLAN、VXLAN 或 GRE,并支持分段网络。此外,还有不同的网络插件,它们提供了与 OpenStack 提供的网络服务的各种集成方法。

+

不同供应商创建了大量共享驱动程序,这些驱动程序支持不同的硬件存储解决方案,例如 NetApp 集群模式 Data ONTAP ( cDOT )驱动程序,华为 NAS 驱动程序或 GlusterFS 驱动程序。每个共享驱动程序都是一个 Python 类,可以为后端设置并在后端运行以管理共享操作,其中一些操作可能是特定于供应商的。后端是 manila-share 服务的一个实例。

+

客户端用于身份验证和授权的配置数据可以由安全服务存储。可以配置和使用 LDAP、Kerberos 或 Microsoft Active Directory 身份验证服务等协议。

+

除非未在 policy.json 中显式更改,否则管理员或拥有共享的租户都能够管理对共享的访问。访问管理是通过创建访问规则来完成的,该规则通过 IP 地址、用户、组或 TLS 证书进行身份验证。可用的身份验证方法取决于您配置和使用的共享驱动程序和安全服务。

+

注意

+
不同的驱动程序支持不同的访问选项,具体取决于使用的共享文件系统协议。支持的共享文件系统协议包括 NFS、CIFS、GlusterFS 和  HDFS。例如,通用(块存储作为后端)驱动程序不支持用户和证书身份验证方法。它还不支持任何安全服务,例如 LDAP、Kerberos 或  Active Directory。有关不同驱动程序支持的功能的详细信息,请参阅马尼拉共享功能支持映射。
+

作为管理员,您可以创建共享类型,使计划程序能够在创建共享之前筛选后端。共享类型具有额外的规范,您可以为计划程序设置这些规范,以筛选和权衡后端,以便为请求创建共享的用户选择适当的共享类型。共享和共享类型可以创建为公共或私有。此可见性级别定义其他租户是否能够看到这些对象并对其进行操作。管理员可以为身份服务中的特定用户或租户添加对专用共享类型的访问权限。因此,您授予访问权限的用户可以看到可用的共享类型,并使用它们创建共享。

+

不同用户及其角色的 API 调用权限由策略决定,就像在其他 OpenStack 服务中一样。

+

标识服务可用于共享文件系统服务中的身份验证。请参阅“身份”部分中的身份服务安全性的详细信息。

+

一般安全信息

+

与其他 OpenStack 项目类似,共享文件系统服务已注册到 Identity 服务,因此您可以使用 manila endpoints 命令查找共享服务 v1 和 v2 的 API 端点:

+
$ manila endpoints
++-------------+-----------------------------------------+
+| manila      | Value                                   |
++-------------+-----------------------------------------+
+| adminURL    | http://172.18.198.55:8786/v1/20787a7b...|
+| region      | RegionOne                               |
+| publicURL   | http://172.18.198.55:8786/v1/20787a7b...|
+| internalURL | http://172.18.198.55:8786/v1/20787a7b...|
+| id          | 82cc5535aa444632b64585f138cb9b61        |
++-------------+-----------------------------------------+
+
++-------------+-----------------------------------------+
+| manilav2    | Value                                   |
++-------------+-----------------------------------------+
+| adminURL    | http://172.18.198.55:8786/v2/20787a7b...|
+| region      | RegionOne                               |
+| publicURL   | http://172.18.198.55:8786/v2/20787a7b...|
+| internalURL | http://172.18.198.55:8786/v2/20787a7b...|
+| id          | 2e8591bfcac4405fa7e5dc3fd61a2b85        |
++-------------+-----------------------------------------+
+

默认情况下,共享文件系统 API 服务仅侦听端口 8786,使用同时支持 IPv4 和 IPv6 的 tcp6 套接字类型。

+

注意

+

端口 8786 是共享文件系统服务的默认端口。它可以更改为任何其他端口,但此更改也应反映在配置文件的 osapi_share_listen_port 选项中,该选项默认为 8786。
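例如,若要显式指定该端口,可在 manila.conf 中设置(以下数值即默认值,仅作示例):

[DEFAULT]
# 共享文件系统 API 服务的侦听端口,默认 8786
osapi_share_listen_port = 8786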

+

/etc/manila/ 目录中,您可以找到几个配置文件:

+
api-paste.ini
+manila.conf
+policy.json
+rootwrap.conf
+rootwrap.d
+
+./rootwrap.d:
+share.filters
+

建议您将共享文件系统服务配置为在非 root 服务帐户下运行,并更改文件权限,以便只有系统管理员才能修改它们。共享文件系统服务要求只有管理员才能写入配置文件,而服务只能通过其在 manila 组中的成员身份读取这些文件。其他用户不得具有读取权限,因为这些文件包含不同服务的管理员密码。

+

应用检查 Check-Shared-01:配置文件的用户/组所有权是否设置为 root/manila?和 Check-Shared-02:是否为配置文件设置了严格的权限?从清单中验证权限设置是否正确。

+

注意

+
`rootwrap.conf` 文件中的 manila-rootwrap 配置以及 `rootwrap.d/share.filters` 文件中共享节点的 manila-rootwrap 命令过滤器应归 root 用户所有,并且只能由 root 用户写入。
+

建议

+
manila 配置文件 `manila.conf` 可以放置在任何位置。默认情况下,预期路径为 `/etc/manila/manila.conf`。
+

网络和安全模型

+

共享文件系统服务中的共享驱动程序是一个 Python 类,可以为后端设置并在其中运行以管理共享操作,其中一些操作是特定于供应商的。后端是 manila-share 服务的实例。共享文件系统服务中有许多由不同供应商创建的共享驱动程序。每个共享驱动程序都支持一种或多种后端模式:共享服务器和无共享服务器。管理员通过在 manila.conf 配置文件中指定模式来选择使用哪种模式,所使用的选项为 driver_handles_share_servers。

+

共享服务器模式可以配置为扁平网络,也可以配置分段网络。这取决于网络提供商。

+

如果您想使用不同的配置,则可以为不同的模式使用相同的硬件使用单独的驱动程序。根据选择的模式,管理员可能需要通过配置文件提供更多配置详细信息。

+

共享后端模式

+

每个共享驱动程序至少支持一种可能的驱动程序模式:

+
    +
  • 共享服务器模式
  • +
  • 无共享服务器模式
  • +
+

设置共享服务器模式或无共享服务器模式的 manila.conf 配置选项是 driver_handles_share_servers 选项。它指示驱动程序是自行处理共享服务器,还是期望共享文件系统服务执行此操作。

模式 | 配置选项 | 描述
共享服务器 | driver_handles_share_servers = True | 共享驱动程序创建共享服务器并管理或处理共享服务器生命周期。
无共享服务器 | driver_handles_share_servers = False | 管理员(而不是共享驱动程序)使用某些网络接口(而不是共享服务器的存在)管理裸机存储。
+
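以下是一个最小的 manila.conf 后端配置示例(后端名称仅为假设的示意值),展示如何为某个后端指定该模式:

[generic_backend]
# True 表示驱动程序自行创建并管理共享服务器;False 表示无共享服务器模式
driver_handles_share_servers = True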

无共享服务器模式

+

在这种模式下,驱动程序基本上没有任何网络要求。假定由驱动程序管理的存储控制器具有所需的所有网络接口。共享文件系统服务将期望驱动程序直接设置共享,而无需事先创建任何共享服务器。此模式对应于某些现有驱动程序已在执行的操作,但它使管理员可以明确选择。在此模式下,共享创建时不需要共享网络,也不得提供共享网络。

+

注意

+
在无共享服务器模式下,共享文件系统服务将假定所有租户都已可访问用于导出任何共享的网络接口。
+

在无共享服务器模式下,共享驱动程序不处理存储生命周期。管理员应处理存储、网络接口和其他主机配置。在此模式下,管理员可以将存储设置为导出共享的主机。此模式的主要特征是存储不由共享文件系统服务处理。租户中的用户共享公共网络、主机、处理器和网络管道。如果管理员或代理之前配置的存储没有正确的平衡调整,它们可能会相互阻碍。在公有云中,所有网络容量可能都由一个客户端使用,因此管理员应注意不要发生这种情况。平衡调整可以通过任何方式完成,而不一定是使用 OpenStack 工具。

+

共享服务器模式

+

在此模式下,驱动程序能够创建共享服务器并将其插入现有网络。提供新的共享服务器时,驱动程序需要来自共享文件系统服务的 IP 地址和子网。

+

与无共享服务器模式不同,在共享服务器模式下,用户具有一个共享网络和一个为每个共享网络创建的共享服务器。因此,所有用户都有单独的 CPU、CPU 时间、网络、容量和吞吐量。

+

您还可以在共享服务器和无共享服务器后端模式下配置安全服务。但是,如果没有共享服务器后端模式,管理员应在主机上手动设置所需的身份验证服务。在共享服务器模式下,可以使用共享驱动程序支持的任何现有安全服务自动配置共享文件系统服务。

+

扁平化与分段化网络

+

共享文件系统服务允许使用不同类型的网络:

+
    +
  • flat
  • +
  • GRE
  • +
  • VLAN
  • +
  • VXLAN
  • +
+

注意

+
共享文件系统服务只是将有关网络的信息保存在数据库中,而真正的网络则由网络提供商提供。在OpenStack中,它可以是传统网络(nova-network)或网络(neutron)服务,但共享文件系统服务甚至可以在OpenStack之外工作。这是允许的, `StandaloneNetworkPlugin` 可以与任何网络平台一起使用,并且不需要OpenStack中的某些特定网络服务,如Networking或Legacy网络服务。您可以在其配置文件中设置网络参数。
+

在共享服务器后端模式下,共享驱动程序为每个共享网络创建和管理共享服务器。此模式可分为两种变体:

+
    +
  • 共享服务器后端模式下的扁平网络
  • +
  • 共享服务器后端模式下的分段网络
  • +
+

最初,在创建共享网络时,您可以设置 OpenStack Networking (neutron) 的网络和子网,也可以设置 Legacy 网络 (nova-network) 服务网络。第三种方法是在没有旧版网络和网络服务的情况下配置网络。 StandaloneNetworkPlugin 可与任何网络平台一起使用。您可以在其配置文件中设置网络参数。

+

建议

+
所有使用 OpenStack Compute 服务的共享驱动程序都不使用网络插件。在 Mitaka 版本中,它是 Windows 和通用驱动程序。这些共享驱动器具有其他选项并使用不同的方法。
+

创建共享网络后,共享文件系统服务将检索由网络提供商确定的网络信息:网络类型、分段标识符(如果网络使用分段)和 CIDR 表示法中的 IP 块,以便从中分配网络。

+

共享服务器后端模式下的扁平网络

+

在此模式下,某些存储控制器可以创建共享服务器,但由于物理或逻辑网络的各种限制,所有共享服务器都必须位于扁平网络上。在此模式下,共享驱动程序需要一些东西来为共享服务器预配 IP 地址,但 IP 将全部来自同一子网,并且假定所有租户都可以访问该子网本身。

+

共享网络的安全服务部分指定安全要求,例如 AD 或 LDAP 域或 Kerberos 域。共享文件系统服务假定安全服务中引用的任何主机都可以从创建共享服务器的子网访问,这限制了可以使用此模式的情况数。

+

共享服务器后端模式下的分段网络

+

在此模式下,共享驱动程序能够创建共享服务器并将其插入到现有的分段网络。共享驱动程序期望共享文件系统服务为每个新的共享服务器提供子网定义。此定义应包括分段类型、分段 ID 以及与分段类型相关的任何其他信息。

+

注意

+
某些共享驱动程序可能不支持所有类型的分段,有关详细信息,请参阅正在使用的驱动程序的规范。
+

网络插件

+

共享文件系统服务体系结构定义了用于网络资源调配的抽象层。它允许管理员从不同的选项中进行选择,以决定如何将网络资源分配给其租户的网络存储。有几个网络插件提供了与OpenStack提供的网络服务的各种集成方法。

+

网络插件允许使用 OpenStack Networking 和 Legacy 网络服务的任何功能、配置。可以使用网络服务支持的任何网络分段,也可以使用传统网络 (nova-network) 服务的扁平网络或 VLAN 分段网络,也可以使用插件来独立于 OpenStack 网络服务指定网络。有关如何使用不同网络插件的详细信息,请参阅共享文件系统服务网络插件。

+

安全服务

+

对于客户端的身份验证和授权,可以选择使用不同的网络身份验证协议配置共享文件系统存储服务。支持的身份验证协议包括 LDAP、Kerberos 和 Microsoft Active Directory 身份验证服务。

+

安全服务介绍

+

创建共享并获取其导出位置后,用户无权装载该共享并处理文件。共享文件系统服务需要显式授予对新共享的访问权限。

+

用于身份验证和授权 (AuthN/AuthZ) 的客户机配置数据可以通过 存储 security services 。如果使用的驱动程序和后端支持 LDAP、Kerberos 或 Microsoft Active Directory,则共享文件系统服务可以使用它们。身份验证服务也可以在没有共享文件系统服务的情况下进行配置。

+

注意

+
在某些情况下,需要显式指定其中一项安全服务,例如,NetApp、EMC 和 Windows 驱动程序需要 Active Directory 才能创建与 CIFS 协议的共享。
+

安全服务管理

+

安全服务是共享文件系统服务(manila)实体,它抽象出一组选项,这些选项为特定共享文件系统协议(如 Active Directory 域或 Kerberos 域)定义安全域。安全服务包含共享文件系统创建加入给定域的服务器所需的所有信息。

+

使用 API,用户可以创建、更新、查看和删除安全服务。安全服务的设计基于以下假设:

+
    +
  • 租户提供安全服务的详细信息。
  • +
  • 管理员关心安全服务:他们配置此类安全服务的服务器端。
  • +
  • 在共享文件系统 API 中,security_service 与 share_networks 相关联。
  • +
  • 共享驱动程序使用安全服务中的数据来配置新创建的共享服务器。
  • +
+

创建安全服务时,可以选择以下身份验证服务之一:

身份验证服务 | 描述
LDAP | 轻量级目录访问协议。用于通过 IP 网络访问和维护分布式目录信息服务的应用程序协议。
Kerberos | 网络身份验证协议,它基于票证工作,允许通过非安全网络进行通信的节点以安全的方式相互证明其身份。
Active Directory | Microsoft 为 Windows 域网络开发的目录服务。使用 LDAP、Microsoft 的 Kerberos 版本和 DNS。
+

共享文件系统服务允许您使用以下选项配置安全服务:

+
    +
  • 租户网络内部使用的 DNS IP 地址。
  • +
  • 安全服务的 IP 地址或主机名。
  • +
  • 安全服务的域。
  • +
  • 租户使用的用户名或组名。
  • +
  • 如果指定用户名,则需要一个用户密码。
  • +
+

现有安全服务实体可以与共享网络实体相关联,这些实体通知共享文件系统服务一组共享的安全性和网络配置。您还可以查看指定共享网络的所有安全服务的列表,并取消它们与共享网络的关联。

+

有关通过 API 管理安全服务的详细信息,请参阅安全服务 API。您还可以通过 python-manilaclient 管理安全服务,请参阅安全服务 CLI 管理。

+

管理员和作为共享所有者的用户可以通过创建访问规则,并通过 IP 地址、用户、组或 TLS 证书进行身份验证来管理对共享的访问。身份验证方法取决于您配置和使用的共享驱动程序和安全服务。

+

因此,作为管理员,您可以将后端配置为通过网络使用特定的身份验证服务,它将存储用户。身份验证服务可以在没有共享文件系统和标识服务的客户端上运行。

+

注意

+
不同的共享驱动程序支持不同的身份验证服务。有关不同驱动程序支持功能的详细信息,请参阅马尼拉共享功能支持映射。驱动程序对特定身份验证服务的支持并不意味着可以使用任何共享文件系统协议对其进行配置。支持的共享文件系统协议包括 NFS、CIFS、GlusterFS 和 HDFS。有关特定驱动程序及其安全服务配置的信息,请参阅驱动程序供应商的文档。
+

某些驱动程序支持安全服务,而其他驱动程序不支持上述任何安全服务。例如,具有 NFS 或 CIFS 共享文件系统协议的通用驱动程序仅支持通过 IP 地址的身份验证方法。

+

建议

+
- 在大多数情况下,支持 CIFS 共享文件系统协议的驱动程序可以配置为使用 Active Directory 并通过用户身份验证管理访问。
+- 支持 GlusterFS 协议的驱动程序可以通过 TLS 证书进行身份验证。
+- 使用支持 NFS 协议的驱动程序,通过 IP 地址进行身份验证是唯一受支持的选项。
+- 由于 HDFS 共享文件系统协议使用 NFS 访问,因此也可以将其配置为通过 IP 地址进行身份验证。
+
+但请注意,通过 IP 进行的身份验证是最不安全的身份验证类型。
+

共享文件系统服务实际使用情况的建议配置是使用 CIFS 共享协议创建共享,并向其添加 Microsoft Active Directory 目录服务。在此配置中,您将获得集中式数据库以及将Kerberos和LDAP方法结合在一起的服务。这是一个真实的用例,对于生产共享文件系统来说很方便。

+

共享访问控制

+

共享文件系统服务允许授予或拒绝其他客户端对服务的不同实体的访问。

+

将共享作为文件系统的可远程挂载实例,可以管理对指定共享的访问,并列出指定共享的权限。

+

共享可以是公共的,也可以是私有的。这是共享的可见性级别,用于定义其他租户是否可以看到共享。默认情况下,所有共享都创建为专用共享。创建共享时,请使用 --public 参数将共享公开,以便其他租户可以在共享列表中看到该共享并查看其详细信息。

+

根据 policy.json 文件,管理员和作为共享所有者的用户可以通过创建访问规则来管理对共享的访问。使用 manila access-allow、manila access-deny 和 manila access-list 命令,您可以相应地授予、拒绝和列出对指定共享的访问权限。

+

建议

+
默认情况下,当创建共享并具有其导出位置时,共享文件系统服务期望任何人都无法通过装载共享来访问该共享。请注意,您使用的共享驱动程序可以更改此配置,也可以直接在共享存储上更改。要确保访问共享,请检查导出协议的挂载配置。
+

刚创建共享时,没有与之关联的默认访问规则和挂载权限。这可以在所使用的导出协议的挂载配置中看到。例如,对于 NFS,存储上有 exportfs 命令和 /etc/exports 文件,用于控制每个远程共享并定义可以访问它的主机;如果没有任何主机可以挂载该共享,则该文件为空。对于远程 CIFS 服务器,可以使用 net conf list 命令显示配置。共享驱动程序应将 hosts deny 参数设置为 0.0.0.0/0,这意味着拒绝任何主机挂载共享。

+

使用共享文件系统服务,可以通过指定以下支持的共享访问级别之一来授予或拒绝对共享的访问:

+
    +
  • rw。读取和写入 (RW) 访问。这是默认值。
  • +
  • ro。只读 (RO) 访问。
  • +
+

建议

+
当管理员为某些特定编辑者或贡献者提供读写 (RW) 访问权限并为其余用户(查看者)提供只读 (RO) 访问权限时,RO 访问级别在公共共享中会很有帮助。
+

您还必须指定以下受支持的身份验证方法之一:

+
    +
  • ip。通过实例的 IP 地址对实例进行身份验证。有效格式为 XX.XX.XX.XX 或 XX.XX.XX.XX/XX。例如,0.0.0.0/0。
  • +
  • cert。通过 TLS 证书对实例进行身份验证。将 TLS 标识指定为 IDENTKEY。有效值是证书公用名 (CN) 中长度不超过 64 个字符的任何字符串。
  • +
  • user。按指定的用户名或组名进行身份验证。有效值是一个字母数字字符串,可以包含一些特殊字符,长度为 4 到 32 个字符。
  • +
+

注意

+
支持的身份验证方法取决于您配置和使用的共享驱动程序、安全服务和共享文件系统协议。支持的共享文件系统协议包括 NFS、CIFS、GlusterFS 和 HDFS。支持的安全服务包括 LDAP、Kerberos 协议或 Microsoft Active  Directory 服务。有关不同驱动程序支持功能的详细信息,请参阅马尼拉共享功能支持映射。
+

下面是与通用驱动程序共享的 NFS 示例。创建共享后,它具有导出位置 10.254.0.3:/shares/share-b2874f8d-d428-4a5c-b056-e6af80a995de 。如果您尝试使用 10.254.0.4 IP 地址将其挂载到主机上,您将收到“权限被拒绝”消息。

+
# mount.nfs -v 10.254.0.3:/shares/share-b2874f8d-d428-4a5c-b056-e6af80a995de /mnt
+mount.nfs: timeout set for Mon Oct 12 13:07:47 2015
+mount.nfs: trying text-based options 'vers=4,addr=10.254.0.3,clientaddr=10.254.0.4'
+mount.nfs: mount(2): Permission denied
+mount.nfs: access denied by server while mounting 10.254.0.3:/shares/share-b2874f8d-...
+

作为管理员,您可以通过 SSH 连接到具有 IP 地址的 10.254.0.3 主机,检查其 /etc/exports 上的文件并查看它是否为空:

+
# cat /etc/exports
+#
+

我们在示例中使用的通用驱动程序不支持任何安全服务,因此使用 NFS 共享文件系统协议,我们只能通过 IP 地址授予访问权限:

+
$ manila access-allow Share_demo2 ip 10.254.0.4
++--------------+--------------------------------------+
+| Property     | Value                                |
++--------------+--------------------------------------+
+| share_id     | e57c25a8-0392-444f-9ffc-5daadb9f756c |
+| access_type  | ip                                   |
+| access_to    | 10.254.0.4                           |
+| access_level | rw                                   |
+| state        | new                                  |
+| id           | 62b8e453-d712-4074-8410-eab6227ba267 |
++--------------+--------------------------------------+
+

规则进入状态 active 后,我们可以再次连接到 10.254.0.3 主机并检查 /etc/exports 文件,并查看是否添加了带有规则的行:

+
# cat /etc/exports
+/shares/share-b2874f8d-d428-4a5c-b056-e6af80a995de     10.254.0.4(rw,sync,wdelay,hide,nocrossmnt,secure,root_squash,no_all_squash,no_subtree_check,secure_locks,acl,anonuid=65534,anongid=65534,sec=sys,rw,root_squash,no_all_squash)
+

现在,我们可以使用 IP 地址 10.254.0.4 在主机上挂载共享,并拥有 rw 共享权限:

+
# mount.nfs -v 10.254.0.3:/shares/share-b2874f8d-d428-4a5c-b056-e6af80a995de /mnt
+# ls -a /mnt
+.  ..  lost+found
+# echo "Hello!" > /mnt/1.txt
+# ls -a /mnt
+.  ..  1.txt  lost+found
+#
+

共享类型访问控制

+

共享类型是管理员定义的“服务类型”,由租户可见描述和租户不可见键值对列表(额外规范)组成。manila-scheduler 使用额外的规范来做出调度决策,驱动程序控制共享创建。

+

管理员可以创建和删除共享类型,还可以管理在共享文件系统服务中赋予它们含义的额外规范。租户可以列出共享类型,并可以使用它们创建新共享。有关管理共享类型的详细信息,请参阅共享文件系统 API 和共享类型管理文档。

+

共享类型可以创建为公共和私有。这是共享类型的可见性级别,用于定义其他租户是否可以在共享类型列表中看到它,并使用它来创建新共享。

+

默认情况下,共享类型创建为公共类型。创建共享类型时,将 --is_public 参数设置为 False 即可创建私有共享类型,这将防止其他租户在共享类型列表中看到它并使用它创建新共享。另一方面,公共共享类型可供云中的每个租户使用。

+

共享文件系统服务允许管理员授予或拒绝对租户的专用共享类型的访问权限。还可以获取有关指定专用共享类型的访问权限的信息。

+

建议

+
由于共享类型由于其额外的规范而有助于在用户创建共享之前筛选或选择后端,因此使用对共享类型的访问权限,可以限制客户端选择特定的后端。
+

例如,作为管理员租户中的管理员用户,可以创建名为 my_type 的专用共享类型,并在列表中查看它。在控制台示例中,省略了登录和注销,并提供了环境变量以显示当前登录的用户。

+
$ env | grep OS_
+...
+OS_USERNAME=admin
+OS_TENANT_NAME=admin
+...
+$ manila type-list --all
++----+--------+-----------+-----------+-----------------------------------+-----------------------+
+| ID | Name   | Visibility| is_default| required_extra_specs              | optional_extra_specs  |
++----+--------+-----------+-----------+-----------------------------------+-----------------------+
+| 4..| my_type| private   | -         | driver_handles_share_servers:False| snapshot_support:True |
+| 5..| default| public    | YES       | driver_handles_share_servers:True | snapshot_support:True |
++----+--------+-----------+-----------+-----------------------------------+-----------------------+
+

demo 租户中的 demo 用户可以列出类型,并且命名 my_type 的专用共享类型对他不可见。

+
$ env | grep OS_
+...
+OS_USERNAME=demo
+OS_TENANT_NAME=demo
+...
+$ manila type-list --all
++----+--------+-----------+-----------+----------------------------------+----------------------+
+| ID | Name   | Visibility| is_default| required_extra_specs             | optional_extra_specs |
++----+--------+-----------+-----------+----------------------------------+----------------------+
+| 5..| default| public    | YES       | driver_handles_share_servers:True| snapshot_support:True|
++----+--------+-----------+-----------+----------------------------------+----------------------+
+

管理员可以授予对租户 ID 等于 df29a37db5ae48d19b349fe947fada46 的演示租户的专用共享类型的访问权限:

+
$ env | grep OS_
+...
+OS_USERNAME=admin
+OS_TENANT_NAME=admin
+...
+$ openstack project list
++----------------------------------+--------------------+
+| ID                               | Name               |
++----------------------------------+--------------------+
+| ...                              | ...                |
+| df29a37db5ae48d19b349fe947fada46 | demo               |
++----------------------------------+--------------------+
+$ manila type-access-add my_type df29a37db5ae48d19b349fe947fada46
+

因此,现在演示租户中的用户可以看到专用共享类型,并在共享创建中使用它:

+
$ env | grep OS_
+...
+OS_USERNAME=demo
+OS_TENANT_NAME=demo
+...
+$ manila type-list --all
++----+--------+-----------+-----------+-----------------------------------+-----------------------+
+| ID | Name   | Visibility| is_default| required_extra_specs              | optional_extra_specs  |
++----+--------+-----------+-----------+-----------------------------------+-----------------------+
+| 4..| my_type| private   | -         | driver_handles_share_servers:False| snapshot_support:True |
+| 5..| default| public    | YES       | driver_handles_share_servers:True | snapshot_support:True |
++----+--------+-----------+-----------+-----------------------------------+-
+

要拒绝对指定项目的访问,请使用 manila type-access-remove 命令。

+

建议

+
一个真实的生产用例显示了共享类型的用途和对它们的访问,当你有两个后端时:廉价的 LVM 作为公共存储,昂贵的 Ceph 作为私有存储。在这种情况下,可以向某些租户授予访问权限,并使用 `user/group` 身份验证方法进行访问。
+

政策

+

共享文件系统服务有自己的基于角色的访问策略。它们确定哪个用户可以以哪种方式访问哪些对象,并在服务的 policy.json 文件中定义。

+

建议

+
配置文件 `policy.json` 可以放置在任何位置。默认情况下,该路径 `/etc/manila/policy.json` 是必需的。
+

每当对共享文件系统服务进行 API 调用时,策略引擎都会使用相应的策略定义来确定是否可以接受该调用。

+

策略规则确定在什么情况下允许 API 调用。/etc/manila/policy.json 文件中的规则有几种形式:始终允许操作的规则,即空字符串 "";基于用户角色或其他规则的规则;以及带有布尔表达式的规则。下面是共享文件系统服务 policy.json 文件的片段。不同 OpenStack 版本之间,其内容可能会有变化。

+
{
+    "context_is_admin": "role:admin",
+    "admin_or_owner": "is_admin:True or project_id:%(project_id)s",
+    "default": "rule:admin_or_owner",
+    "share_extension:quotas:show": "",
+    "share_extension:quotas:update": "rule:admin_api",
+    "share_extension:quotas:delete": "rule:admin_api",
+    "share_extension:quota_classes": "",
+}
+

必须将用户分配到策略中引用的组和角色。当使用用户管理命令时,服务会自动完成此操作。

+

注意

+
任何更改 `/etc/manila/policy.json` 都会立即生效,这允许在共享文件系统服务运行时实施新策略。手动修改策略可能会产生意想不到的副作用,因此不鼓励这样做。有关详细信息,请参阅 policy.json 文件。
+

检查表

+

Check-Shared-01:配置文件的用户/组所有权是否设置为 root/manila?

+

配置文件包含组件平稳运行所需的关键参数和信息。如果非特权用户有意或无意地修改或删除任何参数或文件本身,则会导致严重的可用性问题,从而导致拒绝向其他最终用户提供服务。因此,此类关键配置文件的用户所有权必须设置为 root,组所有权必须设置为 manila。此外,包含目录应具有相同的所有权,以确保正确拥有新文件。

+

运行以下命令:

+
$ stat -L -c "%U %G" /etc/manila/manila.conf | egrep "root manila"
+$ stat -L -c "%U %G" /etc/manila/api-paste.ini | egrep "root manila"
+$ stat -L -c "%U %G" /etc/manila/policy.json | egrep "root manila"
+$ stat -L -c "%U %G" /etc/manila/rootwrap.conf | egrep "root manila"
+$ stat -L -c "%U %G" /etc/manila | egrep "root manila"
+

通过:如果所有这些配置文件的用户和组所有权分别设置为 root 和 manila。上面的命令应显示 root manila 的输出。

+

失败:如果上述命令未返回任何输出,因为用户和组所有权可能已设置为除 root 以外的任何用户或除 manila 以外的任何组。

+

Check-Shared-02:是否为配置文件设置了严格的权限?

+

与前面的检查类似,建议对此类配置文件设置严格的访问权限。

+

运行以下命令:

+
$ stat -L -c "%a" /etc/manila/manila.conf
+$ stat -L -c "%a" /etc/manila/api-paste.ini
+$ stat -L -c "%a" /etc/manila/policy.json
+$ stat -L -c "%a" /etc/manila/rootwrap.conf
+$ stat -L -c "%a" /etc/manila
+

还可以进行更广泛的限制:如果包含目录设置为 750,则保证此目录中新创建的文件具有所需的权限。

+

通过:如果权限设置为 640 或更严格,或者包含目录设置为 750。640 的权限转换为所有者 r/w、组 r,而对其他人没有权限,即“u=rw,g=r,o=”。请注意,使用 Check-Shared-01:配置文件的用户/组所有权是否设置为 root/manila?权限设置为 640,root 具有读/写访问权限,manila 具有对这些配置文件的读取访问权限。也可以使用以下命令验证访问权限。仅当此命令支持 ACL 时,它才在您的系统上可用。

+
$ getfacl --tabular -a /etc/manila/manila.conf
+getfacl: Removing leading '/' from absolute path names
+# file: etc/manila/manila.conf
+USER   root  rw-
+GROUP  manila  r--
+mask         r--
+other        ---
+

失败:如果权限未设置为至少 640。

+

Check-Shared-03:OpenStack Identity 是否用于身份验证?

+

注意

+
此项仅适用于 OpenStack 版本 Rocky 及之前版本,因为 `auth_strategy` 在 Stein 中已弃用。
+

OpenStack 支持各种身份验证策略,如 noauth 和 keystone。如果使用 ' noauth ' 策略,则用户无需任何身份验证即可与 OpenStack 服务进行交互。这可能是一个潜在的风险,因为攻击者可能会获得对 OpenStack 组件的未经授权的访问。因此,强烈建议所有服务都必须使用其服务帐户通过 keystone 进行身份验证。

+

通过:如果 manila.conf 中 [DEFAULT] 部分下的参数 auth_strategy 设置为 keystone。

+

失败:如果 manila.conf 中 [DEFAULT] 部分下的参数 auth_strategy 的值设置为 noauth。
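满足该检查的 manila.conf 片段示例如下(仅为示意):

[DEFAULT]
auth_strategy = keystone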

+

Check-Shared-04:是否启用了 TLS 进行身份验证?

+

OpenStack 组件使用各种协议相互通信,通信可能涉及敏感或机密数据。攻击者可能会尝试窃听频道以访问敏感信息。所有组件必须使用安全通信协议相互通信。

+

通过:如果 /etc/manila/manila.conf 中 [keystone_authtoken] 部分下的参数 www_authenticate_uri 的值设置为以 https:// 开头的 Identity API 端点,并且同一 [keystone_authtoken] 部分下的参数 insecure 的值设置为 False。

+

失败:如果 /etc/manila/manila.conf 中 [keystone_authtoken] 部分下的参数 www_authenticate_uri 的值未设置为以 https:// 开头的 Identity API 端点,或者同一 [keystone_authtoken] 部分下的参数 insecure 的值设置为 True。
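以下是 /etc/manila/manila.conf 中满足该检查的示例片段(端点主机名仅为假设的示意值):

[keystone_authtoken]
www_authenticate_uri = https://identity.example.com:5000/
insecure = False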

+

Check-Shared-05:共享文件系统是否通过 TLS 与计算联系?

+

注意

+
此项仅适用于 OpenStack 版本 Train 及之前版本,因为 `nova_api_insecure` 在 Ussuri 中已弃用。
+

OpenStack 组件使用各种协议相互通信,通信可能涉及敏感或机密数据。攻击者可能会尝试窃听频道以访问敏感信息。因此,所有组件都必须使用安全的通信协议相互通信。

+

通过:如果 manila.conf 中 [DEFAULT] 部分下的参数 nova_api_insecure 设置为 False。

+

失败:如果 manila.conf 中 [DEFAULT] 部分下的参数 nova_api_insecure 设置为 True。

+

Check-Shared-06:共享文件系统是否通过 TLS 与网络联系?

+

注意

+
此项仅适用于 OpenStack 版本 Train 及之前版本,因为 `neutron_api_insecure` 在 Ussuri 中已弃用。
+

与之前的检查(Check-Shared-05:共享文件系统是否通过 TLS 与计算联系?)类似,建议所有组件必须使用安全通信协议相互通信。

+

通过:如果 manila.conf 中 [DEFAULT] 部分下的参数 neutron_api_insecure 设置为 False。

+

失败:如果 manila.conf 中 [DEFAULT] 部分下的参数 neutron_api_insecure 设置为 True。

+

Check-Shared-07:共享文件系统是否通过 TLS 与块存储联系?

+

注意

+
此项仅适用于 OpenStack 版本 Train 及之前版本,因为 `cinder_api_insecure` 在 Ussuri 中已弃用。
+

与之前的检查(Check-Shared-05:共享文件系统是否通过 TLS 与计算联系?)类似,建议所有组件必须使用安全通信协议相互通信。

+

通过:如果 manila.conf 中 [DEFAULT] 部分下的参数 cinder_api_insecure 设置为 False。

+

失败:如果 manila.conf 中 [DEFAULT] 部分下的参数 cinder_api_insecure 设置为 True。
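以下片段汇总展示了 Check-Shared-05 至 Check-Shared-07 通过时 manila.conf 中应有的设置(仅为示意):

[DEFAULT]
# 与计算、网络、块存储服务通信时启用证书校验
nova_api_insecure = False
neutron_api_insecure = False
cinder_api_insecure = False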

+

Check-Shared-08:请求正文的最大大小是否设置为默认值 (114688)?

+

如果未定义每个请求的最大正文大小,攻击者可以构建任意较大的OSAPI请求,导致服务崩溃,最终导致拒绝服务攻击。分配最大值可确保阻止任何恶意超大请求,从而确保服务的持续可用性。

+

通过:如果 manila.conf 中 [DEFAULT] 部分下的参数 osapi_max_request_body_size 的值设置为 114688,或者 manila.conf 中 [oslo_middleware] 部分下的参数 max_request_body_size 的值设置为 114688。[DEFAULT] 下的参数 osapi_max_request_body_size 已弃用,最好使用 [oslo_middleware] 下的 max_request_body_size。

+

失败:如果 manila.conf 中 [DEFAULT] 部分下的参数 osapi_max_request_body_size 的值未设置为 114688,或者 manila.conf 中 [oslo_middleware] 部分下的参数 max_request_body_size 的值未设置为 114688。
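满足该检查的 manila.conf 示例片段如下(推荐使用 [oslo_middleware] 下的新选项):

[oslo_middleware]
# 请求正文的最大字节数,默认 114688
max_request_body_size = 114688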

+

联网

+

OpenStack 网络服务 (neutron) 使最终用户或租户能够定义、利用和使用网络资源。OpenStack Networking 提供了一个面向租户的 API,用于定义云中实例的网络连接和 IP 寻址,以及编排网络配置。随着向以 API 为中心的网络服务的过渡,云架构师和管理员应考虑最佳实践来保护物理和虚拟网络基础架构和服务。

+

OpenStack Networking 采用插件架构设计,通过开源社区或第三方服务提供 API 的可扩展性。在评估架构设计要求时,确定 OpenStack Networking 核心服务中有哪些功能、第三方产品提供的任何其他服务以及需要在物理基础架构中实现哪些补充服务非常重要。

+

本节简要概述了在实现 OpenStack Networking 时应考虑哪些流程和最佳实践。

+
    +
  • 网络架构
  • +
  • 在物理服务器上放置 OpenStack Networking 服务
  • +
  • 网络服务
  • +
  • 使用 VLAN 和隧道的 L2 隔离
  • +
  • 网络服务
  • +
  • 网络服务扩展
  • +
  • 网络服务限制
  • +
  • 网络服务安全最佳做法
  • +
  • +

    OpenStack Networking 服务配置

    +
  • +
  • +

    保护 OpenStack 网络服务

    +
  • +
  • 项目网络服务工作流程
  • +
  • 网络资源策略引擎
  • +
  • 安全组
  • +
  • 配额
  • +
  • +

    缓解 ARP 欺骗

    +
  • +
  • +

    检查表

    +
  • +
  • Check-Neutron-01:配置文件的用户/组所有权是否设置为 root/neutron?
  • +
  • Check-Neutron-02:是否为配置文件设置了严格的权限?
  • +
  • Check-Neutron-03:Keystone是否用于身份验证?
  • +
  • Check-Neutron-04:是否使用安全协议进行身份验证?
  • +
  • Check-Neutron-05:Neutron API 服务器上是否启用了 TLS?
  • +
+

网络架构

+

OpenStack Networking 是一个独立的服务,通常在多个节点上部署多个进程。这些进程彼此交互,并与其他 OpenStack 服务交互。OpenStack Networking 服务的主要进程是 neutron-server,这是一个 Python 守护进程,它公开 OpenStack Networking API,并将租户请求传递给一组插件进行额外处理。

+

OpenStack Networking 组件包括:

+

neutron 服务器(neutron-server 和 neutron-*-plugin)

+

此服务在网络节点上运行,为网络 API 及其扩展提供服务。它还强制执行每个端口的网络模型和 IP 寻址。neutron-server 需要间接访问持久性数据库。这是通过插件实现的,插件使用 AMQP(高级消息队列协议)与数据库进行通信。

+

插件代理 (neutron-*-agent)

+

在每个计算节点上运行,以管理本地虚拟交换机 (vswitch) 配置。您使用的插件决定了运行哪些代理。此服务需要消息队列访问,并取决于所使用的插件。一些插件,如 OpenDaylight(ODL) 和开放虚拟网络 (OVN),在计算节点上不需要任何 python 代理。

+

DHCP 代理 (neutron-dhcp-agent)

+

为租户网络提供DHCP服务。此代理在所有插件中都是相同的,并负责维护 DHCP 配置。neutron-dhcp-agent 需要消息队列访问。可选,具体取决于插件。

+

L3 代理(neutron-L3-agent)

+

为租户网络上的虚拟机提供 L3/NAT 转发。需要消息队列访问权限。可选,具体取决于插件。

+

网络提供商服务(SDN 服务器/服务)

+

为租户网络提供其他网络服务。这些 SDN 服务可以通过 REST API 等通信通道与 neutron-server、neutron-plugin 和 plugin-agents 进行交互。

+

下图显示了 OpenStack Networking 组件的架构和网络流程图:

+

../_images/sdn-connections.png

+

OpenStack Networking 服务在物理服务器上的放置

+

本指南重点介绍一个标准架构,其中包括一个云控制器主机、一个网络主机和一组用于运行 VM 的计算虚拟机监控程序。

+
物理服务器的网络连接
+

../_images/1aa-network-domains-diagram.png

+

标准的 OpenStack Networking 设置最多有四个不同的物理数据中心网络:

+
    +
  • 管理网络
  • +
+

用于 OpenStack 组件之间的内部通信。此网络上的 IP 地址应只能在数据中心内访问,并被视为管理安全域。

+
    +
  • 访客网络
  • +
+

用于云部署中的 VM 数据通信。此网络的 IP 寻址要求取决于所使用的 OpenStack Networking 插件以及租户对虚拟网络所做的网络配置选择。此网络被视为客户机安全域。

+
    +
  • 外部网络
  • +
+

用于在某些部署方案中为 VM 提供 Internet 访问权限。Internet 上的任何人都可以访问此网络上的 IP 地址。此网络被视为属于公共安全域。

+
    +
  • API网络
  • +
+

向租户公开所有 OpenStack API,包括 OpenStack 网络 API。Internet 上的任何人都可以访问此网络上的 IP 地址。这可能与外部网络是同一网络,因为可以为使用 IP 分配范围的外部网络创建一个子网,以便仅使用 IP 块中小于全部范围的 IP 地址。此网络被视为公共安全域。

+

有关更多信息,请参阅《OpenStack 管理员指南》。

+

网络服务

+

在设计 OpenStack 网络基础架构的初始架构阶段,确保提供适当的专业知识来协助设计物理网络基础架构,确定适当的安全控制和审计机制非常重要。

+

OpenStack Networking 增加了一层虚拟化网络服务,使租户能够构建自己的虚拟网络。目前,这些虚拟化服务还没有传统网络的成熟。在采用这些虚拟化服务之前,请考虑这些服务的当前状态,因为它决定了您可能需要在虚拟化和传统网络边界上实现哪些控制。

+

使用 VLAN 和隧道的 L2 隔离

+

OpenStack Networking 可以采用两种不同的机制对每个租户/网络组合进行流量隔离:VLAN(IEEE 802.1Q 标记)或使用 GRE 封装的 L2 隧道。OpenStack 部署的范围和规模决定了您应该使用哪种方法进行流量隔离或隔离。

+
VLANs
+

VLAN 在特定物理网络上实现为数据包,其中包含具有特定 VLAN ID (VID) 字段值的 IEEE 802.1Q 标头。共享同一物理网络的 VLAN 网络在 L2 上彼此隔离,甚至可以有重叠的 IP 地址空间。每个支持 VLAN 网络的不同物理网络都被视为一个单独的 VLAN 中继,具有不同的 VID 值空间。有效的 VID 值为 1 到 4094。

+

VLAN 配置的复杂性取决于您的 OpenStack 设计要求。为了让 OpenStack Networking 能够有效地使用 VLAN,您必须分配一个 VLAN 范围(每个租户一个),并将每个计算节点物理交换机端口转换为 VLAN 中继端口。

+

注意

+
如果您打算让您的网络支持超过 4094 个租户,则 VLAN 可能不是您的正确选择,因为需要多个“黑客”才能将 VLAN 标记扩展到超过 4094 个租户。
+
L2 隧道
+

网络隧道使用唯一的“tunnel-id”封装每个租户/网络组合,该 ID 用于标识属于该组合的网络流量。租户的 L2 网络连接与物理位置或基础网络设计无关。通过将流量封装在 IP 数据包中,该流量可以跨越第 3 层边界,无需预配置 VLAN 和 VLAN 中继。隧道为网络数据流量增加了一层混淆,从监控的角度降低了单个租户流量的可见性。

+

OpenStack Networking 目前支持 GRE 和 VXLAN 封装。

+

提供 L2 隔离的技术选择取决于将在部署中创建的租户网络的范围和大小。如果您的环境的 VLAN ID 可用性有限或将具有大量 L2 网络,我们建议您使用隧道。

+

网络服务

+

租户网络隔离的选择会影响租户服务的网络安全和控制边界的实现方式。以下附加网络服务已经可用或目前正在开发中,以增强 OpenStack 网络架构的安全态势。

+
访问控制列表
+

OpenStack 计算在与旧版 nova-network 服务一起部署时直接支持租户网络流量访问控制,或者可以将访问控制推迟到 OpenStack Networking 服务。

+

请注意,旧版 nova-network 安全组使用 iptables 应用于实例上的所有虚拟接口端口。

+

安全组允许管理员和租户指定流量类型以及允许通过虚拟接口端口的方向(入口/出口)。安全组规则是有状态的 L2-L4 流量过滤器。

+

使用网络服务时,建议在此服务中启用安全组,并在计算服务中禁用安全组。

+
L3 路由和 NAT
+

OpenStack Networking 路由器可以连接多个 L2 网络,并且还可以提供连接一个或多个私有 L2 网络到共享外部网络(例如用于访问互联网的公共网络)的网关。

+

L3 路由器在将路由器上行链路到外部网络的网关端口上提供基本的网络地址转换 (NAT) 功能。默认情况下,此路由器会 SNAT(静态 NAT)所有流量,并支持浮动 IP,这会创建从外部网络上的公共 IP 到连接到路由器的其他子网上的专用 IP 的静态一对一映射。

+

我们建议利用每个租户的 L3 路由和浮动 IP 来实现租户 VM 的更精细连接。

+
服务质量 (QoS)
+

默认情况下,服务质量 (QoS) 策略和规则由云管理员管理,这会导致租户无法创建特定的 QoS 规则,也无法将特定端口附加到策略。在某些用例中,例如某些电信应用程序,管理员可能信任租户,因此允许他们创建自己的策略并将其附加到端口。这可以通过修改 policy.json 文件来实现,相关的具体文档将与该扩展一起发布。

+

网络服务 (neutron) 支持 Liberty 及更高版本中的带宽限制 QoS 规则。此 QoS 规则已命名 QosBandwidthLimitRule ,它接受两个非负整数,以千比特/秒为单位:

+
    +
  • max-kbps :带宽
  • +
  • max-burst-kbps :突发缓冲区
  • +
+

QoSBandwidthLimitRule 在 neutron Open vSwitch、Linux 网桥和单根输入/输出虚拟化 (SR-IOV) 驱动程序中实现。

+

在 Newton 中,添加了 QoS 规则 QosDscpMarkingRule 。此规则在 IPv4 (RFC 2474) 上的服务标头类型和 IPv6 上的流量类标头中标记差分服务代码点 (DSCP) 值,这些值适用于应用规则的虚拟机的所有流量。这是一个 6 位标头,具有 21 个有效值,表示数据包在遇到拥塞时穿过网络时的丢弃优先级。防火墙还可以使用它来将有效或无效流量与其访问控制列表进行匹配。

+

端口镜像服务涉及将进入或离开一个端口的数据包副本发送到另一个端口,该端口通常与被镜像数据包的原始目的地不同。Tap-as-a-Service (TaaS) 是 OpenStack 网络服务 (neutron) 的扩展。它为租户虚拟网络提供远程端口镜像功能。此服务主要旨在帮助租户(或云管理员)调试复杂的虚拟网络,并通过监视与其关联的网络流量来了解其 VM。TaaS 遵循租户边界,其镜像会话能够跨越多个计算和网络节点。它是一个必不可少的基础设施组件,可用于向各种网络分析和安全应用程序提供数据。

+
负载均衡
+

OpenStack Networking 的另一个特性是负载均衡器即服务 (LBaaS)。LBaaS 参考实现基于 HA-Proxy。OpenStack Networking 中的扩展正在开发第三方插件,以便为虚拟接口端口提供广泛的 L4-L7 功能。

+
防火墙
+

FW-as-a-Service(FWaaS)被认为是OpenStack Networking的Kilo版本的实验性功能。FWaaS 满足了管理和利用典型防火墙产品提供的丰富安全功能的需求,这些产品通常比当前安全组提供的要全面得多。飞思卡尔和英特尔都开发了第三方插件作为OpenStack Networking的扩展,以在Kilo版本中支持此组件。有关 FWaaS 管理的更多详细信息,请参阅《OpenStack 管理员指南》中的防火墙即服务 (FWaaS) 概述。

+

在设计 OpenStack Networking 基础架构时,了解可用网络服务的当前特性和局限性非常重要。了解虚拟网络和物理网络的边界将有助于在您的环境中添加所需的安全控件。

+

网络服务扩展

+

开源社区或使用 OpenStack Networking 的 SDN 公司提供的已知插件列表可在 OpenStack neutron 插件和驱动程序 wiki 页面上找到。

+

网络服务限制

+

OpenStack Networking 具有以下已知限制:

+

重叠的 IP 地址

+

如果运行 neutron-l3-agent 或 neutron-dhcp-agent 的节点使用重叠的 IP 地址,则这些节点必须使用 Linux 网络命名空间。默认情况下,DHCP 和 L3 代理使用 Linux 网络命名空间,并在各自的命名空间中运行。但是,如果主机不支持多个命名空间,则 DHCP 和 L3 代理应在不同的主机上运行。这是因为 L3 代理和 DHCP 代理创建的 IP 地址之间没有隔离。

+

如果不存在网络命名空间支持,则 L3 代理的另一个限制是仅支持单个逻辑路由器。

+

多主机 DHCP 代理

+

OpenStack Networking 支持多个具有负载均衡功能的 L3 和 DHCP 代理。但是,不支持虚拟机位置的紧密耦合。换言之,在创建虚拟机时,默认虚拟机调度程序不会考虑代理的位置。

+

L3 代理不支持 IPv6

+

neutron-l3-agent 被许多插件用于实现 L3 转发,仅支持 IPv4 转发。

+

网络服务安全最佳做法

+

要保护 OpenStack Networking,您必须了解如何将租户实例创建的工作流过程映射到安全域。

+

有四个主要服务与 OpenStack Networking 交互。在典型的 OpenStack 部署中,这些服务映射到以下安全域:

+
    +
  • OpenStack 仪表板:公共和管理
  • +
  • OpenStack Identity:管理
  • +
  • OpenStack 计算节点:管理和客户端
  • +
  • OpenStack 网络节点:管理、客户端,以及可能的公共节点,具体取决于正在使用的 neutron-plugin。
  • +
  • SDN 服务节点:管理、访客和可能的公共服务,具体取决于使用的产品。
  • +
+

../_images/1aa-logical-neutron-flow.png

+

要隔离 OpenStack Networking 服务与其他 OpenStack 核心服务之间的敏感数据通信,请将这些通信通道配置为仅允许通过隔离的管理网络进行通信。

+

OpenStack Networking 服务配置

+
限制 API 服务器的绑定地址:neutron-server
+

要限制 OpenStack Networking API 服务为传入客户端连接绑定网络套接字的接口或 IP 地址,请在 neutron.conf 文件中指定 bind_host 和 bind_port,如下所示:

+
# Address to bind the API server
+bind_host = IP ADDRESS OF SERVER
+
+# Port the bind the API server to
+bind_port = 9696
+
限制 OpenStack Networking 服务的 DB 和 RPC 通信
+

OpenStack Networking 服务的各种组件使用消息队列或数据库连接与 OpenStack Networking 中的其他组件进行通信。

+

对于需要直接数据库连接的所有组件,建议您遵循数据库身份验证和访问控制中提供的准则。

+

建议您遵循队列身份验证和访问控制中提供的准则,适用于需要 RPC 通信的所有组件。

+

保护 OpenStack 网络服务

+

本节讨论 OpenStack Networking 配置最佳实践,因为它们适用于 OpenStack 部署中的项目网络安全。

+

项目网络服务工作流

+

OpenStack Networking 为用户提供网络资源和配置的自助服务。云架构师和运维人员必须评估其设计用例,以便为用户提供创建、更新和销毁可用网络资源的能力。

+

网络资源策略引擎

+

OpenStack Networking 中的策略引擎及其配置文件 policy.json 提供了一种方法,可以对用户在项目网络方法和对象上提供更细粒度的授权。OpenStack Networking 策略定义会影响网络可用性、网络安全和整体 OpenStack 安全性。云架构师和运维人员应仔细评估其对用户和项目访问网络资源管理的策略。有关 OpenStack Networking 策略定义的更详细说明,请参阅《OpenStack 管理员指南》中的“身份验证和授权”部分。

+

注意

+
请务必查看默认网络资源策略,因为可以修改此策略以适合您的安全状况。
+

如果您的 OpenStack 部署为不同的安全域提供了多个外部访问点,那么限制项目将多个 vNIC 连接到多个外部访问点的能力非常重要,这将桥接这些安全域,并可能导致不可预见的安全危害。通过利用 OpenStack Compute 提供的主机聚合功能,或者将项目虚拟机拆分为具有不同虚拟网络配置的多个项目项目,可以降低这种风险。

+

安全组

+

OpenStack Networking 服务使用比 OpenStack Compute 中内置的安全组功能更灵活、更强大的机制提供安全组功能。因此,在使用 OpenStack Networking 时,应始终在 nova.conf 中禁用内置安全组,并将所有安全组调用代理到 OpenStack Networking API。如果不这样做,将导致两个服务同时应用冲突的安全策略。要将安全组代理到 OpenStack Networking,请使用以下配置值(参见列表后的示例配置):

+
    +
  • firewall_driver 必须设置为 nova.virt.firewall.NoopFirewallDriver ,以便 nova-compute 本身不执行基于 iptables 的过滤。
  • +
  • security_group_api 必须设置为 neutron 以便将所有安全组请求代理到 OpenStack Networking 服务。
  • +
+
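以下是 nova.conf 中满足上述两项要求的示例片段(以较早版本的 nova 配置格式为例,仅为示意):

[DEFAULT]
firewall_driver = nova.virt.firewall.NoopFirewallDriver
security_group_api = neutron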

安全组是安全组规则的容器。安全组及其规则允许管理员和项目指定允许通过虚拟接口端口的流量类型和方向(入口/出口)。在 OpenStack Networking 中创建虚拟接口端口时,该端口与安全组相关联。有关端口安全组默认行为的更多详细信息,请参阅网络安全组行为文档。可以将规则添加到默认安全组,以便根据每个部署更改行为。

+

使用 OpenStack Compute API 修改安全组时,更新后的安全组将应用于实例上的所有虚拟接口端口。这是因为 OpenStack Compute 安全组 API 是基于实例的,而不是基于端口的,如 OpenStack Networking 中所示。

+

配额

+

配额提供了限制项目可用的网络资源数量的功能。您可以对所有项目强制实施默认配额。为此,请在 /etc/neutron/neutron.conf 中包含以下配额选项:

+
[QUOTAS]
+# resource name(s) that are supported in quota features
+quota_items = network,subnet,port
+
+# default number of resource allowed per tenant, minus for unlimited
+#default_quota = -1
+
+# number of networks allowed per tenant, and minus means unlimited
+quota_network = 10
+
+# number of subnets allowed per tenant, and minus means unlimited
+quota_subnet = 10
+
+# number of ports allowed per tenant, and minus means unlimited
+quota_port = 50
+
+# number of security groups allowed per tenant, and minus means unlimited
+quota_security_group = 10
+
+# number of security group rules allowed per tenant, and minus means unlimited
+quota_security_group_rule = 100
+
+# default driver to use for quota checks
+quota_driver = neutron.quota.ConfDriver
+

OpenStack Networking 还通过配额扩展 API 支持每个项目的配额限制。要启用每个项目的配额,必须在 neutron.conf 中设置 quota_driver 选项:

+
quota_driver = neutron.db.quota.driver.DbQuotaDriver
+

缓解 ARP 欺骗

+

使用扁平网络时,不能假定共享同一第 2 层网络(或广播域)的项目彼此完全隔离。这些项目可能容易受到 ARP 欺骗的攻击,从而有可能遭受中间人攻击。

+

如果使用支持 ARP 字段匹配的 Open vSwitch 版本,则可以通过启用 Open vSwitch 代理 prevent_arp_spoofing 选项来帮助降低此风险。此选项可防止实例执行欺骗攻击;它不能保护他们免受欺骗攻击。请注意,此设置预计将在 Ocata 中删除,该行为将永久处于活动状态。

+

例如,在 /etc/neutron/plugins/ml2/openvswitch_agent.ini

+
prevent_arp_spoofing = True
+

除 Open vSwitch 外,其他插件也可能包含类似的缓解措施;建议您在适当的情况下启用此功能。

+

注意

+
即使启用 `prevent_arp_spoofing` 了扁平网络,也无法提供完整的项目隔离级别,因为所有项目流量仍会发送到同一 VLAN。
+

检查表

+

Check-Neutron-01:配置文件的用户/组所有权是否设置为 root/neutron?

+

配置文件包含组件平稳运行所需的关键参数和信息。如果非特权用户有意或无意地修改或删除任何参数或文件本身,则会导致严重的可用性问题,从而导致对其他最终用户的拒绝服务。因此,此类关键配置文件的用户所有权必须设置为 root,组所有权必须设置为 neutron。此外,包含目录应具有相同的所有权,以确保正确拥有新文件。

+

运行以下命令:

+
$ stat -L -c "%U %G" /etc/neutron/neutron.conf | egrep "root neutron"
+$ stat -L -c "%U %G" /etc/neutron/api-paste.ini | egrep "root neutron"
+$ stat -L -c "%U %G" /etc/neutron/policy.json | egrep "root neutron"
+$ stat -L -c "%U %G" /etc/neutron/rootwrap.conf | egrep "root neutron"
+$ stat -L -c "%U %G" /etc/neutron | egrep "root neutron"
+

通过:如果所有这些配置文件的用户和组所有权分别设置为 root 和 neutron。上面的命令应显示 root neutron 的输出。

+

失败:如果上述命令未返回任何输出,因为用户和组所有权可能已设置为除 root 以外的任何用户或除 neutron 以外的任何组。

+

Check-Neutron-02:是否为配置文件设置了严格的权限?

+

与前面的检查类似,建议对此类配置文件设置严格的访问权限。

+

运行以下命令:

+
$ stat -L -c "%a" /etc/neutron/neutron.conf
+$ stat -L -c "%a" /etc/neutron/api-paste.ini
+$ stat -L -c "%a" /etc/neutron/policy.json
+$ stat -L -c "%a" /etc/neutron/rootwrap.conf
+$ stat -L -c "%a" /etc/neutron
+

还可以进行更广泛的限制:如果包含目录设置为 750,则保证此目录中新创建的文件具有所需的权限。

+

通过:如果权限设置为 640 或更严格,或者包含目录设置为 750。640 的权限转换为所有者 r/w、组 r,而对其他人没有权限,即“u=rw,g=r,o=”。

+

请注意,使用 Check-Neutron-01:配置文件的用户/组所有权是否设置为 root/neutron?权限设置为 640,root 具有读/写访问权限,neutron 具有对这些配置文件的读取访问权限。也可以使用以下命令验证访问权限。仅当此命令支持 ACL 时,它才在您的系统上可用。

+
$ getfacl --tabular -a /etc/neutron/neutron.conf
+getfacl: Removing leading '/' from absolute path names
+# file: etc/neutron/neutron.conf
+USER   root     rw-
+GROUP  neutron  r--
+mask            r--
+other           ---
+

失败:如果权限未设置为至少 640。

+

Check-Neutron-03:Keystone是否用于身份验证?

+

注意

+
此项仅适用于 OpenStack 版本 Rocky 及之前版本,因为 `auth_strategy` 在 Stein 中已弃用。
+

OpenStack 支持各种身份验证策略,如 noauth、keystone 等。如果使用“noauth”策略,那么用户无需任何身份验证即可与OpenStack服务进行交互。这可能是一个潜在的风险,因为攻击者可能会获得对 OpenStack 组件的未经授权的访问。因此,强烈建议所有服务都必须使用其服务帐户通过 keystone 进行身份验证。

+

通过:如果 /etc/neutron/neutron.conf 中 [DEFAULT] 部分下的参数 auth_strategy 设置为 keystone。

+

失败:如果 [DEFAULT] 部分下的参数 auth_strategy 的值设置为 noauth 或 noauth2。
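满足该检查的 /etc/neutron/neutron.conf 片段示例(仅为示意):

[DEFAULT]
auth_strategy = keystone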

+

Check-Neutron-04:是否使用安全协议进行身份验证?

+

OpenStack 组件使用各种协议相互通信,通信可能涉及敏感/机密数据。攻击者可能会尝试窃听频道以访问敏感信息。因此,所有组件都必须使用安全的通信协议相互通信。

+

通过:如果 /etc/neutron/neutron.conf 中 [keystone_authtoken] 部分下的参数 www_authenticate_uri 的值设置为以 https:// 开头的 Identity API 端点,并且同一 [keystone_authtoken] 部分下的参数 insecure 的值设置为 False。

+

失败:如果 /etc/neutron/neutron.conf 中 [keystone_authtoken] 部分下的参数 www_authenticate_uri 的值未设置为以 https:// 开头的 Identity API 端点,或者同一 [keystone_authtoken] 部分下的参数 insecure 的值设置为 True。

+

Check-Neutron-05:Neutron API 服务器上是否启用了 TLS?

+

与之前的检查类似,建议在 API 服务器上启用安全通信。

+

通过:如果 /etc/neutron/neutron.conf 中 [DEFAULT] 部分下的参数 use_ssl 设置为 True。

+

失败:如果 /etc/neutron/neutron.conf 中 [DEFAULT] 部分下的参数 use_ssl 设置为 False。
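满足该检查的 /etc/neutron/neutron.conf 片段示例(仅为示意):

[DEFAULT]
# 在 Neutron API 服务器上启用 TLS
use_ssl = True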

+

对象存储

+

OpenStack 对象存储 (swift) 服务提供通过 HTTP 存储和检索数据的软件。对象(数据 blob)存储在组织层次结构中,该层次结构提供匿名只读访问、ACL 定义的访问,甚至临时访问。对象存储支持通过中间件实现的多种基于令牌的身份验证机制。

+

应用程序通过行业标准的 HTTP RESTful API 在对象存储中存储和检索数据。对象存储的后端组件遵循相同的 RESTful 模型,尽管某些 API(例如管理持久性的 API)对集群是私有的。有关 API 的更多详细信息,请参阅 OpenStack Storage API。

+

对象存储的组件分为以下主要组:

+
    +
  1. 代理服务
  2. +
  3. 身份验证服务
  4. +
  5. 存储服务
  6. +
  7. 账户服务
  8. +
  9. 容器服务
  10. +
  11. 对象服务
  12. +
+

_images/swift_network_diagram-1.png

+

OpenStack 对象存储管理指南 (2013) 中的示例图

+

注意

+
对象存储安装不必位于 Internet 上,也可以是私有云,其中公共交换机是组织内部网络基础架构的一部分。
+

网络安全

+

要保护对象存储服务,首先要保护网络组件。如果您跳过了网络章节,请返回到网络部分。

+

rsync 协议用于在存储服务节点之间复制数据以实现高可用性。此外,在客户端端点和云环境之间来回中继数据时,代理服务会与存储服务进行通信。

+

警告

+
对象存储不对节点间通信进行加密或身份验证。这就是您在体系结构图中看到专用交换机或专用网络 ([V]LAN) 的原因。这个数据域也应该与其他OpenStack数据网络分开。有关安全域的进一步讨论,请参阅安全边界和威胁。
+

建议

+
对数据域中的存储节点使用专用 (V)LAN 网段。
+

这需要代理节点具有双接口(物理或虚拟):

+
    +
  1. 一个作为消费者访问的公共界面。
  2. +
  3. 另一个作为可以访问存储节点的专用接口。
  4. +
+

下图演示了一种可能的网络体系结构。

+

_images/swift_network_diagram-2.png

+

具有管理节点(OSAM)的对象存储网络架构

+

一般服务安全

+

以非 root 用户身份运行服务

+

我们建议您将对象存储服务配置为在非 root(非 UID 0)服务帐户下运行。一个建议是使用用户名 swift、主组 swift。对象存储服务包括例如 proxy-server、container-server 和 account-server。有关设置和配置的详细步骤,请参阅 OpenStack 文档索引中《安装指南》的"添加对象存储"一章。

+

注意

+
上面的链接默认为Ubuntu版本。
+
文件权限
+

/etc/swift 目录包含有关环形拓扑和环境配置的信息。建议使用以下权限:

+
# chown -R root:swift /etc/swift/*
+# find /etc/swift/ -type f -exec chmod 640 {} \;
+# find /etc/swift/ -type d -exec chmod 750 {} \;
+

这将限制只有 root 用户能够修改配置文件,同时允许服务通过其在 swift 组中的成员身份读取这些文件。

+

保护存储服务

+

以下是各种存储服务的默认侦听端口:

服务名称 | 端口 | 类型
账户服务 | 6002 | TCP
容器服务 | 6001 | TCP
对象服务 | 6000 | TCP
rsync [1] | 873 | TCP
+

[1] 如果使用 ssync 而不是 rsync,则使用对象服务端口来维护持久性。

+

重要

+
在存储节点上不进行身份验证。如果能够在其中一个端口上连接到存储节点,则无需身份验证即可访问或修改数据。为了防止此问题,您应该遵循之前给出的有关使用专用存储网络的建议。
+
对象存储帐户术语
+

对象存储帐户不是用户帐户或凭据。下面对这些关系进行说明:

对象存储帐户 | 容器的集合;不是用户帐户或身份验证。哪些用户与该帐户相关联以及他们如何访问该帐户取决于所使用的身份验证系统。请参阅对象存储身份验证。
对象存储容器 | 对象的集合。容器上的元数据可用于 ACL。ACL 的含义取决于所使用的身份验证系统。
对象存储对象 | 实际数据对象。对象级别的 ACL 也可以与元数据一起使用,并且取决于所使用的身份验证系统。
+

在每个级别,您都有 ACL,用于指示谁拥有哪种类型的访问权限。ACL 是根据正在使用的身份验证系统进行解释的。最常用的两种身份验证提供程序类型是 Identity service (keystone) 和 TempAuth。自定义身份验证提供程序也是可能的。有关更多信息,请参阅对象存储身份验证。

+

保护代理服务

+

代理节点应至少具有两个接口(物理或虚拟):一个公共接口和一个专用接口。防火墙或服务绑定可能会保护公共接口。面向公众的服务是一个 HTTP Web 服务器,用于处理端点客户端请求、对其进行身份验证并执行相应的操作。专用接口不需要任何侦听服务,而是用于建立与专用存储网络上的存储节点的传出连接。

+
HTTP 监听端口
+

如前所述,您应该将 Web 服务配置为以非 root(非 UID 0)用户 swift 运行。使用大于 1024 的端口可以轻松做到这一点,并避免以 root 身份运行 Web 容器的任何部分。通常,使用 HTTP REST API 并执行身份验证的客户端会自动从身份验证响应中检索所需的完整 REST API URL。OpenStack 的 REST API 允许客户端对一个 URL 进行身份验证,然后被告知对实际服务使用完全不同的 URL。例如,客户端向 https://identity.cloud.example.org:55443/v1/auth 进行身份验证,并在响应中获取其身份验证密钥和存储 URL(代理节点或负载均衡器的 URL),例如 https://swift.cloud.example.org:44443/v1/AUTH_8980。

+

将 Web 服务器配置为以非 root 用户身份启动和运行的方法因 Web 服务器和操作系统而异。

+
负载均衡器
+

如果使用 Apache 的选项不可行,或者为了提高性能,您希望减轻 TLS 工作,则可以使用专用的网络设备负载平衡器。这是在使用多个代理节点时提供冗余和负载平衡的常用方法。

+

如果选择卸载 TLS,请确保负载均衡器和代理节点之间的网络链路位于专用 (V)LAN 网段上,以便网络上的其他节点(可能已泄露)无法窃听(嗅探)未加密的流量。如果发生此类违规行为,攻击者可以访问端点客户端或云管理员凭据并访问云数据。

+

您使用的身份验证服务(例如身份服务(keystone)或TempAuth)将决定如何在对端点客户端的响应中配置不同的URL,以便它们使用负载平衡器而不是单个代理节点。

+

对象存储身份验证

+

对象存储使用 WSGI 模型来提供中间件功能,该功能不仅提供通用可扩展性,还用于端点客户端的身份验证。身份验证提供程序定义存在的角色和用户类型。有些使用传统的用户名和密码凭据,而另一些则可能利用 API 密钥令牌甚至客户端 x.509 证书。自定义提供程序可以集成到使用自定义中间件中。

+

对象存储默认自带两个认证中间件模块,其中任何一个模块都可以作为开发自定义认证中间件的示例代码。

+
TempAuth 函数
+

TempAuth 是对象存储的默认身份验证。与 Identity 相比,它将用户帐户、凭据和元数据存储在对象存储本身中。有关更多信息,请参阅对象存储 (swift) 文档的身份验证系统部分。

+
Keystone
+

Keystone 是 OpenStack 中常用的身份提供程序。它还可用于对象存储中的身份验证。Identity 中已提供保护 keystone 的覆盖范围。

+

其他值得注意的事项

+

在每个节点的 /etc/swift 目录中,都有一个 swift_hash_path_prefix 设置和一个 swift_hash_path_suffix 设置。提供这些设置是为了减少存储对象发生哈希冲突的可能性,并避免一个用户覆盖另一个用户的数据。

+

此值最初应使用加密安全的随机数生成器进行设置,并在所有节点上保持一致。确保它受到适当的 ACL 保护,并且您有备份副本以避免数据丢失。
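以下是 /etc/swift/swift.conf 中这两个设置的示意片段(其中的取值仅为占位示例,实际值应使用加密安全的随机数生成器生成,并在所有节点上保持一致):

[swift-hash]
# 占位示例值,请勿在生产环境中直接使用
swift_hash_path_prefix = changeme_random_prefix
swift_hash_path_suffix = changeme_random_suffix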

+

机密管理

+

操作员通过使用各种加密应用程序来保护云部署中的敏感信息。例如,对静态数据进行加密或对映像进行签名以证明其未被篡改。在所有情况下,这些加密功能都需要某种密钥材料才能运行。

+

机密管理描述了一组旨在保护软件系统中的关键材料的技术。传统上,密钥管理涉及硬件安全模块 (HSM) 的部署。这些设备已经过物理强化,可防止篡改。

+

随着技术的进步,需要保护的秘密物品的数量已经从密钥材料增加到包括证书对、API 密钥、系统密码、签名密钥等。这种增长产生了对更具可扩展性的密钥管理方法的需求,并导致创建了许多提供可扩展动态密钥管理的软件服务。本章介绍了目前存在的服务,并重点介绍了那些能够集成到OpenStack云中的服务。

+
    +
  • 现有技术摘要
  • +
  • 相关 Openstack 项目
  • +
  • 使用案例
  • +
  • 镜像签名验证
  • +
  • 卷加密
  • +
  • 临时磁盘加密
  • +
  • Sahara
  • +
  • Magnum
  • +
  • Octavia/LBaaS
  • +
  • Swift
  • +
  • 配置文件中的密码
  • +
  • Barbican
  • +
  • 概述
  • +
  • 加密插件
  • +
  • 简单的加密插件
  • +
  • PKCS#11加密插件
  • +
  • 密钥商店插件
  • +
  • KMIP插件
  • +
  • Dogtag 插件
  • +
  • Vault 插件
  • +
  • Castellan
  • +
  • 概述
  • +
  • 常见问题解答
  • +
  • 检查表
  • +
  • Check-Key-Manager-01:配置文件的所有权是否设置为 root/barbican?
  • +
  • Check-Key-Manager-02:是否为配置文件设置了严格的权限?
  • +
  • Check-Key-Manager-03:OpenStack Identity 是否用于身份验证?
  • +
  • Check-Key-Manager-04:是否启用了 TLS 进行身份验证?
  • +
+

现有技术摘要

+

在OpenStack中,有两种推荐用于机密管理的解决方案,即Barbican和Castellan。本章将概述不同的方案,以帮助操作员选择使用哪个密钥管理器。

+

第三种不受支持的方法是固定/硬编码密钥。众所周知,某些 OpenStack 服务可以选择在其配置文件中指定密钥。这是最不安全的操作方式,我们不建议在任何类型的生产环境中使用。

+

其他解决方案包括 KeyWhiz、Confidant、Conjur、EJSON、Knox 和 Red October,但在本文档的讨论范围之外,无法涵盖所有可用的 Key Manager。

+

对于机密的存储,强烈建议使用硬件安全模块 (HSM) 。HSM 可以有多种形式。传统设备是机架式设备,如以下博客文章中所示。

+

相关 Openstack 项目

+

Castellan 是一个库,它提供了一个简单的通用接口来存储、生成和检索机密。大多数 OpenStack 服务都使用它进行机密管理。作为一个库,Castellan 本身并不提供机密存储;相反,需要部署一个后端实现。

+

请注意,Castellan 不提供任何身份验证。它只是将身份验证凭据(例如 Keystone 令牌)传递到后端。

+

Barbican 是一个 OpenStack 服务,为 Castellan 提供后端。Barbican 需要并验证 keystone 身份验证令牌,以识别访问或存储密钥的用户和项目。然后,它应用策略来确定是否允许访问。它还提供了许多额外的有用功能来改进密钥管理,包括配额、每个密钥的 ACL、跟踪密钥使用者以及密钥容器中的密钥分组。例如,Octavia 直接与 Barbican(而不是 Castellan)集成,以利用其中一些功能。

+

Barbican 有许多后端插件,可用于将机密安全地存储在本地数据库或 HSM 中。

+

目前,Barbican 是 Castellan 唯一可用的后端。然而,有几个后端正在开发中,包括 KMIP、Dogtag、Hashicorp Vault 和 Custodia。对于那些不希望部署 Barbican 并且密钥管理需求相对简单的部署人员来说,使用这些后端之一可能是一个可行的替代方案。但是,在检索密钥时,缺少的是多租户和租户策略的实施,以及上面提到的任何额外功能。

+

使用案例

+

镜像签名验证

+

验证镜像签名可确保镜像自原始上传以来不会被替换或更改。镜像签名验证功能使用 Castellan 作为其密钥管理器来存储加密签名。镜像签名和证书 UUID 将与镜像一起上传到镜像 (glance) 服务。Glance 在从密钥管理器检索证书后验证签名。启动镜像时,计算服务 (nova) 在从密钥管理器检索证书后验证签名。

+

有关更多详细信息,请参阅可信映像文档。

+

卷加密

+

卷加密功能使用 Castellan 提供静态数据加密。当用户创建加密卷类型并使用该类型创建卷时,块存储 (cinder) 服务会请求密钥管理器创建要与该卷关联的密钥。当卷附加到实例时,nova 会检索密钥。

+

有关详细信息,请参阅数据加密部分。和卷加密。

+

临时磁盘加密

+

临时磁盘加密功能可解决数据隐私问题。临时磁盘是虚拟主机操作系统使用的临时工作空间。如果不加密,可以在此磁盘上访问敏感的用户信息,并且在卸载磁盘后可能会保留残留信息。

+

临时磁盘加密功能可以通过安全包装器与密钥管理服务交互,并通过按租户提供临时磁盘加密密钥来支持数据隔离。建议使用后端密钥存储以增强安全性(例如,HSM 或 KMIP 服务器可用作 barbican 后端密钥存储)。

+

有关详细信息,请参阅临时磁盘加密文档。

+

Sahara

+

Sahara在操作过程中生成并存储多个密码。为了加强Sahara对密码的使用,可以指示它使用外部密钥管理器来存储和检索这些密钥。要启用此功能,必须首先在堆栈中部署一个 OpenStack Key Manager 服务。

+

在堆栈上部署密钥管理器服务后,必须将 sahara 配置为启用密钥的外部存储。Sahara 使用 Castellan 库与 OpenStack Key Manager 服务进行交互。此库提供对密钥管理器的可配置访问。

+

有关详细信息,请参阅 Sahara 高级配置指南。

+

Magnum

+

为了使用本机客户端(分别为 docker 和 kubectl)提供对 Docker Swarm 或 Kubernetes 的访问,magnum 使用 TLS 证书。要存储证书,建议使用 Barbican 或 Magnum 数据库(x509keypair)。

+

也可以使用本地目录 ( local ),但被认为是不安全的,不适合生产环境。

+

有关为 Magnum 设置证书管理器的更多详细信息,请参阅容器基础架构管理服务文档。

+

Octavia/LBaaS

+

Neutron 和 Octavia 项目的 LBaaS(负载均衡器即服务)功能需要证书及其私钥来为 TLS 连接提供负载均衡。Barbican 可用于存储此敏感信息。

+

有关详细信息,请参阅如何创建 TLS 负载均衡器和部署以 TLS 结尾的 HTTPS 负载均衡器。

+

Swift

+

对称密钥可用于加密 Swift 容器,以降低用户数据被读取的风险,如果未经授权的一方要获得对磁盘的物理访问权限。

+

有关更多详细信息,请参阅官方 swift 文档中的对象加密部分。

+

配置文件中的密码

+

OpenStack 服务的配置文件包含许多纯文本密码。例如,这些包括服务用户用于向 keystone 进行身份验证以验证 keystone 令牌的密码。

+

目前没有对这些密码进行模糊处理的解决方案。建议通过文件权限适当地保护这些文件。

+

目前正在努力将这些密钥存储在 Castellan 后端,然后让 oslo.config 使用 Castellan 来检索这些密钥。

+

Barbican

+

概述

+

Barbican 是一个 REST API,旨在安全存储、配置和管理密码、加密密钥和 X.509 证书等机密。它旨在对所有环境都有用,包括大型短暂云。

+

Barbican 与多个 OpenStack 功能集成,可以直接集成,也可以作为 Castellan 的后端集成。

+

Barbican 通常用作密钥管理系统,以实现图像签名验证、卷加密等用例。这些用例在用例中进行了概述

+
Barbican 基于角色的访问控制
+

待定

+
机密存储后端
+

Key Manager 服务具有插件架构,允许部署程序将密钥存储在一个或多个密钥存储中。机密存储可以是基于软件的(如软件令牌),也可以是基于硬件设备(如硬件安全模块 (HSM))的。本节介绍当前可用的插件,并讨论每个插件的安全状况。插件已启用并使用配置文件中的 /etc/barbican/barbican.conf 设置进行配置。

+

有两种类型的插件:加密插件和机密存储插件。

+

加密插件

+

加密插件将机密存储为 Barbican 数据库中的加密 blob。调用该插件来加密密钥存储上的密钥,并在密钥检索时解密密钥。目前有两种类型的存储插件可用:Simple Crypto 插件和 PKCS#11 加密插件。

+

简单的加密插件

+

默认情况下,barbican.conf 中配置的是简单加密插件。该插件使用单个对称密钥(KEK,即"密钥加密密钥"),该密钥以纯文本形式存储在 barbican.conf 文件中,用于加密和解密所有机密。此插件被认为是安全性较低的选项,仅适用于开发和测试,因为主密钥以纯文本形式存储在配置文件中,因此不建议在生产部署中使用。

+

PKCS#11 加密插件

+

PKCS#11 加密插件可用于与使用 PKCS#11 协议的硬件安全模块 (HSM) 连接。机密由项目特定的密钥加密密钥 (KEK) 加密 (并在检索时解密) 。KEK 受主 KEK (MKEK) 保护(加密)。MKEK 与 HMAC 一起驻留在 HSM 中。由于每个项目都使用不同的 KEK,并且由于 KEK 以加密形式(而不是配置文件中的明文)存储在数据库中,因此 PKCS#11 插件比简单的加密插件安全得多。它是 Barbican 部署中最受欢迎的后端。

+

机密存储插件

+

密钥存储插件与安全存储系统接口,以将密钥存储在这些系统中。密钥存储插件有三种类型:KMIP 插件、Dogtag 插件和 Vault 插件。

+

KMIP 插件

+

密钥管理互操作性协议 (KMIP) 密钥存储插件用于与启用了 KMIP 的设备(如硬件安全模块 (HSM))进行通信。密钥直接安全地存储在启用了 KMIP 的设备中,而不是存储在 Barbican 数据库中。Barbican 数据库维护对密钥位置的引用,以供以后检索。该插件可以配置为使用用户名和密码或使用客户端证书向启用了 KMIP 的设备进行身份验证。此信息存储在 Barbican 配置文件中。

+

Dogtag 插件

+

Dogtag 秘密存储插件用于与 Dogtag 通信。Dogtag 是对应于 Red Hat 证书系统的上游项目,Red Hat Certificate System 是一个通用标准/FIPS 认证的 PKI 解决方案,包含证书管理器 (CA) 和密钥恢复机构 (KRA),用于安全存储机密。KRA 将机密作为加密的 blob 存储在其内部数据库中,主加密密钥存储在基于软件的 NSS 安全数据库中,或存储在硬件安全模块 (HSM) 中。基于软件的 NSS 数据库配置为不希望使用 HSM 的部署提供了安全选项。KRA 是 FreeIPA 的一个组件,因此可以使用 FreeIPA 服务器配置插件。以下博客文章中提供了有关如何使用 FreeIPA 设置 Barbican 的更详细说明。

+

Vault 插件

+

Vault 是 Hashicorp 开发的机密存储,用于安全访问机密和其他对象,例如 API 密钥、密码或证书。Vault 为任何机密提供统一的接口,同时提供严格的访问控制并记录详细的审核日志。Vault 企业版还允许与 HSM 集成以进行自动解封、提供 FIPS 密钥存储和熵增强。但是,Vault 插件的缺点是它不支持多租户,因此所有密钥都将存储在同一个 key/value 密钥引擎挂载点下。

+
威胁分析
+

Barbican 团队与 OpenStack 安全项目合作,对最佳实践 Barbican 部署进行了安全审查。安全审查的目的是识别服务设计和体系结构中的弱点和缺陷,并提出解决这些问题的控制或修复措施。

+

Barbican 威胁分析确定了八项安全发现和两项建议,以提高 Barbican 部署的安全性。这些结果以及 Barbican 体系结构图和体系结构描述页都可以在安全分析存储库中查看。

+

Castellan

+

概述

+

Castellan 是由 Barbican 团队开发的通用密钥管理器界面。它使项目能够使用可配置的密钥管理器,该管理器可以特定于部署。

+

常见问题解答

+

​ 1.在 OpenStack 中安全存储密钥的推荐方法是什么?

+

在OpenStack中安全地存储和管理密钥的推荐方法是使用Barbican。

+

​ 2.我为什么要使用Barbican?

+

Barbican 是一种 OpenStack 服务,它支持多租户,并使用 Keystone 令牌进行身份验证。这意味着对密钥的访问是通过租户和 RBAC 角色的 OpenStack 策略来控制的。

+

Barbican 具有多个可插拔后端,可以使用 PKCS#11 或 KMIP 与基于软件和硬件的安全模块进行通信。

+

​ 3.如果我不想使用Barbican怎么办?

+

在 OpenStack 上下文中,需要管理两种类型的机密:需要 keystone 令牌才能访问的机密,以及不需要 keystone 令牌即可访问的机密。

+

需要 keystone 身份验证的机密的一个示例是特定项目拥有的密码和密钥,例如项目用于加密 cinder 卷的加密密钥,或项目 glance 镜像的签名密钥。

+

不需要 keystone 令牌即可访问的密钥示例包括服务配置文件中服务用户的密码或不属于任何特定项目的加密密钥。

+

需要 keystone 令牌的机密应使用 Barbican 进行存储。

+

不需要 keystone 身份验证的密钥可以存储在任何密钥存储中,该密钥存储实现了通过 Castellan 公开的简单密钥存储 API。这也包括巴比肯。

+

​ 4.如何使用 Vault、Keywhiz、Custodia 等...?

+

如果已为该密钥管理器编写了 Castellan 插件,则您选择的密钥管理器可以与该密钥管理器一起使用。一旦该插件被编写出来,直接使用该插件或在 Barbican 后面使用该插件是相对微不足道的。

+

目前,Vault 和 Custodia 插件正在为 Queens 周期开发。

+

检查表

+

Check-Key-Manager-01:配置文件的所有权是否设置为 root/barbican?

+

配置文件包含组件平稳运行所需的关键参数和信息。如果非特权用户有意或无意地修改或删除任何参数或文件本身,则会导致严重的可用性问题,从而导致拒绝向其他最终用户提供服务。此类关键配置文件的用户所有权必须设置为 root,组所有权必须设置为 barbican。此外,包含目录应具有相同的所有权,以确保正确拥有新文件。

+

运行以下命令:

+
$ stat -L -c "%U %G" /etc/barbican/barbican.conf | egrep "root barbican"
+$ stat -L -c "%U %G" /etc/barbican/barbican-api-paste.ini | egrep "root barbican"
+$ stat -L -c "%U %G" /etc/barbican/policy.json | egrep "root barbican"
+$ stat -L -c "%U %G" /etc/barbican | egrep "root barbican"
+

通过:如果所有这些配置文件的用户和组所有权分别设置为 root 和 barbican。上面的命令显示了 root / barbican 的输出。

+

失败:如果上述命令未返回任何输出,则用户和组所有权可能已设置为除 root 以外的任何用户或除 barbican 以外的任何组。

+

Check-Key-Manager-02:是否为配置文件设置了严格的权限?

+

与前面的检查类似,我们建议为此类配置文件设置严格的访问权限。

+

运行以下命令:

+
$ stat -L -c "%a" /etc/barbican/barbican.conf
+$ stat -L -c "%a" /etc/barbican/barbican-api-paste.ini
+$ stat -L -c "%a" /etc/barbican/policy.json
+$ stat -L -c "%a" /etc/barbican
+

还可以进行更广泛的限制:如果包含目录设置为 750,则保证此目录中新创建的文件具有所需的权限。

+

通过:如果权限设置为 640 或更严格,或者包含目录设置为 750。640 的权限转换为所有者 r/w、组 r,而对其他人没有权限,例如“u=rw,g=r,o=”。

+

注意

+
使用 Check-Key-Manager-01:配置文件的所有权是否设置为 root/barbican?权限设置为 640,root  具有读/写访问权限,Barbican 具有对这些配置文件的读取访问权限。也可以使用以下命令验证访问权限。仅当此命令支持 ACL  时,它才在您的系统上可用。
+
$ getfacl --tabular -a /etc/barbican/barbican.conf
+getfacl: Removing leading '/' from absolute path names
+# file: etc/barbican/barbican.conf
+USER   root  rw-
+GROUP  barbican  r--
+mask         r--
+other        ---
+

失败:如果权限设置大于 640。

+

Check-Key-Manager-03:OpenStack Identity 是否用于身份验证?

+

OpenStack 支持各种身份验证策略,如 noauthkeystone 。如果使用该 noauth 策略,则用户无需任何身份验证即可与 OpenStack 服务进行交互。这可能是一个潜在的风险,因为攻击者可能会获得对 OpenStack 组件的未经授权的访问。我们强烈建议所有服务都必须使用其服务帐户通过 keystone 进行身份验证。

+

通过:如果参数 authtoken 列在 barbican-api-paste.ini 中 [pipeline:barbican-api-keystone] 部分下。

+

失败:如果 barbican-api-paste.ini 中 [pipeline:barbican-api-keystone] 部分下缺少参数 authtoken。
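以下是 barbican-api-paste.ini 中包含 authtoken 的管道示例(中间件的具体组成和顺序可能因版本而异,此处仅为示意):

[pipeline:barbican-api-keystone]
pipeline = cors authtoken context apiapp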

+

Check-Key-Manager-04:是否启用了 TLS 进行身份验证?

+

OpenStack 组件使用各种协议相互通信,通信可能涉及敏感或机密数据。攻击者可能会尝试窃听频道以访问敏感信息。所有组件必须使用安全通信协议相互通信。

+

通过:如果 /etc/barbican/barbican.conf 中 [keystone_authtoken] 部分下的参数 www_authenticate_uri 的值设置为以 https:// 开头的 Identity API 端点,并且同一 [keystone_authtoken] 部分下的参数 insecure 的值设置为 False。

+

失败:如果 /etc/barbican/barbican.conf 中 [keystone_authtoken] 部分下的参数 www_authenticate_uri 的值未设置为以 https:// 开头的 Identity API 端点,或者同一 [keystone_authtoken] 部分下的参数 insecure 的值设置为 True。

+

消息队列

+

消息队列服务促进了 OpenStack 中的进程间通信。OpenStack 支持以下消息队列服务后端:

+
    +
  • RabbitMQ
  • +
  • Qpid
  • +
  • ZeroMQ 或 0MQ
  • +
+

RabbitMQ 和 Qpid 都是高级消息队列协议 (AMQP) 框架,它们为点对点通信提供消息队列。队列实现通常部署为集中式或分散式队列服务器池。ZeroMQ 通过 TCP 套接字提供直接的点对点通信。

+

消息队列有效地促进了跨 OpenStack 部署的命令和控制功能。一旦允许访问队列,就不会执行进一步的授权检查。可通过队列访问的服务会验证实际消息负载中的上下文和令牌。但是,您必须注意令牌的到期日期,因为令牌可能可重播,并且可以授权基础结构中的其他服务。

+

OpenStack 不支持消息级别的安全性,例如消息签名。因此,您必须对消息传输本身进行安全和身份验证。对于高可用性 (HA) 配置,您必须执行队列对队列的身份验证和加密。

+

通过 ZeroMQ 消息传递,IPC 套接字在单个机器上使用。由于这些套接字容易受到攻击,因此请确保云运营商已保护它们。

+
    +
  • 消息安全
  • +
  • 消息传输安全
  • +
  • 队列身份验证和访问控制
  • +
  • 消息队列进程隔离和策略
  • +
+

消息安全

+

本节讨论 OpenStack 中使用的三种最常见的消息队列解决方案的安全强化方法:RabbitMQ、Qpid 和 ZeroMQ。

+

消息传输安全

+

基于 AMQP 的解决方案(Qpid 和 RabbitMQ)支持使用 TLS 的传输级安全性。ZeroMQ 消息传递本身不支持 TLS,但使用标记的 IPsec 或 CIPSO 网络标签可以实现传输级安全性。

+

我们强烈建议为您的消息队列启用传输级加密。将 TLS 用于消息传递客户端连接可以保护通信在传输到消息传递服务器的过程中不被篡改和窃听。以下是有关如何为两个常用消息传递服务器 Qpid 和 RabbitMQ 配置 TLS 的指南。在配置消息传递服务器用于验证客户机连接的可信证书颁发机构 (CA) 捆绑软件时,建议仅将其限制为用于节点的 CA,最好是内部管理的 CA。受信任的 CA 捆绑包将确定哪些客户端证书将获得授权,并通过设置 TLS 连接的客户端-服务器验证步骤。请注意,在安装证书和密钥文件时,请确保文件权限受到限制,例如使用 chmod 0600 ,并且所有权限制为消息传递服务器守护程序用户,以防止消息传递服务器上的其他进程和用户进行未经授权的访问。

+
RabbitMQ 服务器 SSL 配置
+

应将以下行添加到系统范围的 RabbitMQ 配置文件中,通常 /etc/rabbitmq/rabbitmq.config

+
[
+  {rabbit, [
+     {tcp_listeners, [] },
+     {ssl_listeners, [{"<IP address or hostname of management network interface>", 5671}] },
+     {ssl_options, [{cacertfile,"/etc/ssl/cacert.pem"},
+                    {certfile,"/etc/ssl/rabbit-server-cert.pem"},
+                    {keyfile,"/etc/ssl/rabbit-server-key.pem"},
+                    {verify,verify_peer},
+                    {fail_if_no_peer_cert,true}]}
+   ]}
+].
+

请注意,该 tcp_listeners 选项设置为 [] 阻止它侦听非 SSL 端口。应将该 ssl_listeners 选项限制为仅在管理网络上侦听服务。

+

有关 RabbitMQ SSL 配置的更多信息,请参阅:

+
    +
  • RabbitMQ 配置
  • +
  • RabbitMQ SSL协议
  • +
+
Qpid 服务器 SSL 配置
+

Apache 基金会为 Qpid 提供了消息传递安全指南。请参阅:

+
    +
  • Apache Qpid SSL
  • +
+

队列认证和访问控制

+

RabbitMQ 和 Qpid 提供身份验证和访问控制机制,用于控制对队列的访问。ZeroMQ 不提供此类机制。

+

简单身份验证和安全层 (SASL) 是 Internet 协议中用于身份验证和数据安全的框架。RabbitMQ 和 Qpid 都提供 SASL 和其他可插入的身份验证机制,而不仅仅是简单的用户名和密码,从而可以提高身份验证安全性。虽然 RabbitMQ 支持 SASL,但 OpenStack 中的支持目前不允许请求特定的 SASL 身份验证机制。OpenStack 中的 RabbitMQ 支持允许通过未加密的连接进行用户名和密码身份验证,或者将用户名和密码与 X.509 客户端证书结合使用,以建立安全的 TLS 连接。

+

我们建议在所有 OpenStack 服务节点上配置 X.509 客户端证书,以便客户端连接到消息传递队列,并在可能的情况下(目前仅 Qpid)使用 X.509 客户端证书执行身份验证。使用用户名和密码时,应按服务和节点创建帐户,以便对队列的访问进行更精细的可审核性。

+

在部署之前,请考虑排队服务器使用的 TLS 库。Qpid 使用 Mozilla 的 NSS 库,而 RabbitMQ 使用 Erlang 的 TLS 模块,该模块使用 OpenSSL。

+
身份验证配置示例:RabbitMQ
+

在 RabbitMQ 服务器上,删除默认 guest 用户:

+
# rabbitmqctl delete_user guest
+

在 RabbitMQ 服务器上,对于与消息队列通信的每个 OpenStack 服务或节点,请设置用户帐户和权限:

+
# rabbitmqctl add_user compute01 RABBIT_PASS
+# rabbitmqctl set_permissions compute01 ".*" ".*" ".*"
+

将RABBIT_PASS替换为合适的密码。

+

有关其他配置信息,请参阅:

+
    +
  • RabbitMQ 访问控制
  • +
  • RabbitMQ 身份验证
  • +
  • RabbitMQ 插件
  • +
  • RabbitMQ SASL 外部身份验证
  • +
+
OpenStack 服务配置:RabbitMQ
+
[DEFAULT]
+rpc_backend = nova.openstack.common.rpc.impl_kombu
+rabbit_use_ssl = True
+rabbit_host = RABBIT_HOST
+rabbit_port = 5671
+rabbit_user = compute01
+rabbit_password = RABBIT_PASS
+kombu_ssl_keyfile = /etc/ssl/node-key.pem
+kombu_ssl_certfile = /etc/ssl/node-cert.pem
+kombu_ssl_ca_certs = /etc/ssl/cacert.pem
+
身份验证配置示例:Qpid
+

有关配置信息,请参阅:

+
    +
  • Apache Qpid 身份验证
  • +
  • Apache Qpid 授权
  • +
+
OpenStack 服务配置:Qpid
+
[DEFAULT]
+rpc_backend = nova.openstack.common.rpc.impl_qpid
+qpid_protocol = ssl
+qpid_hostname = <IP or hostname of management network interface of messaging server>
+qpid_port = 5671
+qpid_username = compute01
+qpid_password = QPID_PASS
+

(可选)如果将 SASL 与 Qpid 一起使用,请通过添加以下内容来指定正在使用的 SASL 机制:

+
qpid_sasl_mechanisms = <space separated list of SASL mechanisms to use for auth>
+

消息队列进程隔离和策略

+

每个项目都提供了许多发送和使用消息的服务。每个发送消息的二进制文件也应从队列中消费消息,即使只是为了接收回复。

+

消息队列服务进程应彼此隔离,并应与计算机上的其他进程隔离。

+
命名空间
+

强烈建议在 OpenStack Compute Hypervisor 上运行的所有服务使用网络命名空间。这将有助于防止 VM 来宾和管理网络之间的网络流量桥接。

+

使用 ZeroMQ 消息传递时,每个主机必须至少运行一个 ZeroMQ 消息接收器,以接收来自网络的消息并通过 IPC 将消息转发到本地进程。在 IPC 命名空间中为每个项目运行一个独立的消息接收器是可能的,也是可取的,以及同一项目中的其他服务。

+
网络策略
+

队列服务器应仅接受来自管理网络的连接。这适用于所有实现。这应通过服务配置来实现,并可选择通过全局网络策略强制实施。

+

使用 ZeroMQ 消息传递时,每个项目都应在专用于属于该项目的服务的端口上运行单独的 ZeroMQ 接收方进程。这相当于 AMQP 的控制交换概念。

+
强制访问控制
+

除自主访问控制 (DAC) 外,还应使用强制访问控制 (MAC) 将这些进程可访问的配置限制为仅其自身所需的内容。这种限制有助于将消息队列服务进程与同一台计算机上运行的其他进程相互隔离。

+

数据处理

+

数据处理服务(sahara)提供了一个平台,用于使用Hadoop和Spark等处理框架来配置和管理实例集群。通过 OpenStack Dashboard 或 REST API,用户能够上传和执行框架应用程序,这些应用程序可以访问对象存储或外部提供程序中的数据。数据处理控制器使用编排服务 (heat) 创建实例集群,这些集群可以作为长期运行的组存在,这些组可以根据请求进行扩展和收缩,也可以作为为单个工作负载创建的瞬态组存在。

+
    +
  • 数据处理简介
  • +
  • 架构
  • +
  • 涉及的技术
  • +
  • 用户对资源的访问权限
  • +
  • 部署
  • +
  • 控制器对集群的网络访问
  • +
  • 配置和强化
  • +
  • TLS
  • +
  • 基于角色的访问控制策略
  • +
  • 安全组
  • +
  • 代理域
  • +
  • 自定义网络拓扑
  • +
  • 间接访问
  • +
  • 根包装
  • +
  • 日志记录
  • +
  • 参考书目
  • +
+

数据处理简介

+

数据处理服务控制器将负责创建、维护和销毁为其集群创建的任何实例。控制器将使用网络服务在自身和集群实例之间建立网络路径。它还将管理要在集群上运行的用户应用程序的部署和生命周期。集群中的实例包含框架处理引擎的核心,数据处理服务提供了多个选项来创建和管理与这些实例的连接。

+

数据处理资源(群集、作业和数据源)按身份服务中定义的项目进行分隔。这些资源在项目中共享,了解使用该服务的人员的访问需求非常重要。通过使用基于角色的访问控制,可以进一步限制项目中的活动(例如启动集群、上传作业等)。

+

在本章中,我们将讨论如何评估数据处理用户对其应用程序、他们使用的数据以及他们在项目中的预期功能的需求。我们还将演示服务控制器及其集群的一些强化技术,并提供各种控制器配置和用户管理方法的示例,以确保足够的安全和隐私级别。

+

架构

+

下图显示了数据处理服务如何适应更大的 OpenStack 生态系统的概念视图。

+

../_images/data_processing_architecture.png

+

数据处理服务在集群配置过程中大量使用计算、编排、镜像和块存储服务。它还将使用在群集创建期间提供的由网络服务创建的一个或多个网络来管理实例。当用户运行框架应用程序时,控制器和集群将访问对象存储服务。鉴于这些服务用法,我们建议按照系统文档中概述的说明对安装的所有组件进行编目。

+

涉及的技术

+

数据处理服务负责部署和管理多个应用程序。为了全面了解所提供的安全选项,我们建议操作员大致熟悉这些应用程序。突出显示的技术列表分为两部分:第一部分,对安全性影响较大的高优先级应用程序,第二部分,支持影响较小的应用程序。

+

更高的影响

+
    +
  • Hadoop
  • +
  • Hadoop安全模式文档
  • +
  • HDFS
  • +
  • Spark
  • +
  • Spark 安全
  • +
  • Storm
  • +
  • Zookeeper
  • +
+

较低的影响

+
    +
  • Oozie
  • +
  • Hive
  • +
  • Pig
  • +
+

这些技术构成了与数据处理服务一起部署的框架的核心。除了这些技术之外,该服务还包括第三方供应商提供的捆绑框架。这些捆绑框架是使用上述相同核心部分以及供应商包含的配置和应用程序构建的。有关第三方框架捆绑包的更多信息,请参阅以下链接:

+
    +
  • Cloudera CDH
  • +
  • Hortonworks Data Platform
  • +
  • MapR
  • +
+

用户访问资源

+

数据处理服务的资源(集群、作业和数据源)在项目范围内共享。尽管单个控制器安装可以管理多组资源,但这些资源的范围将限定为单个项目。鉴于此限制,我们建议密切监视项目中的用户成员身份,以保持资源的适当隔离。

+

由于部署此服务的组织的安全要求会根据其特定需求而有所不同,因此我们建议运营商将重点放在数据隐私、集群管理和最终用户应用程序上,作为评估用户需求的起点。这些决策将有助于指导配置用户对服务的访问的过程。有关数据隐私的扩展讨论,请参阅租户数据隐私。

+

数据处理安装的默认假设是用户将有权访问其项目中的所有功能。如果需要更精细的控制,数据处理服务会提供策略文件(如策略中所述)。这些配置将高度依赖于安装组织的需求,因此没有关于其使用的一般建议:有关详细信息,请参阅基于角色的访问控制策略。

+

部署

+

与许多其他 OpenStack 服务一样,数据处理服务被部署为在连接到堆栈的主机上运行的应用程序。从 Kilo 版本开始,它能够以分布式方式部署多个冗余控制器。与其他服务一样,它也需要一个数据库来存储有关其资源的信息。请参阅数据库。请务必注意,数据处理服务将需要管理多个标识服务信任,直接与业务流程和网络服务通信,并可能在代理域中创建用户。由于这些原因,控制器将需要访问控制平面,因此我们建议将其与其他服务控制器一起安装。

+

数据处理直接与多个 OpenStack 服务交互:

+
    +
  • 计算
  • +
  • 身份验证
  • +
  • 联网
  • +
  • 对象存储
  • +
  • 编排
  • +
  • 块存储(可选)
  • +
+

建议记录这些服务与数据处理控制器之间的所有数据流和桥接点。请参阅系统文档。

+

数据处理服务使用对象存储服务来存储作业二进制文件和数据源。希望访问完整数据处理服务功能的用户将需要在他们正在使用的项目中存储对象。

+

网络服务在群集的配置中起着重要作用。在预配之前,用户应为群集实例提供一个或多个网络。关联网络的操作类似于通过仪表板启动实例时分配网络的过程。控制器使用这些网络对其集群的实例和框架进行管理访问。

+

另外值得注意的是身份服务。数据处理服务的用户需要在其项目中具有适当的角色,以允许为其集群预置实例。使用代理域配置的安装需要特别注意。请参阅代理域。具体而言,数据处理服务将需要能够在代理域中创建用户。

+

控制器对集群的网络访问

+

数据处理控制器的主要任务之一是与其生成的实例进行通信。这些实例是预置的,然后根据所使用的框架进行配置。控制器和实例之间的通信使用安全外壳 (SSH) 和 HTTP 协议。

+

在预配集群时,将在用户提供的网络中为每个实例提供一个 IP 地址。第一个网络通常称为数据处理管理网络,实例可以使用网络服务为此网络分配的固定 IP 地址。控制器还可以配置为除了固定地址之外,还对实例使用浮动 IP 地址。与实例通信时,控制器将首选浮动地址(如果启用)。

+

对于固定和浮动 IP 地址无法提供所需功能的情况,控制器可以通过两种替代方法提供访问:自定义网络拓扑和间接访问。自定义网络拓扑功能允许控制器通过配置文件中提供的 shell 命令访问实例。间接访问用于指定用户在集群置备期间可用作代理网关的实例。这些选项通过配置和强化中的用法示例进行讨论。

+

配置和强化

+

有多个配置选项和部署策略可以提高数据处理服务的安全性。服务控制器通过主配置文件和一个或多个策略文件进行配置。使用数据局部性功能的安装还将具有两个附加文件,用于指定计算节点和对象存储节点的物理位置。

+

TLS系统

+

与许多其他 OpenStack 控制器一样,数据处理服务控制器可以配置为需要 TLS 连接。

+

Pre-Kilo 版本将需要 TLS 代理,因为控制器不允许直接 TLS 连接。TLS 代理和 HTTP 服务中介绍了如何配置 TLS 代理,我们建议按照其中的建议创建此类安装。

+

从 Kilo 版本开始,数据处理控制器允许直接 TLS 连接,我们建议这样做。启用此行为需要对控制器配置文件进行一些小的调整。

+

例。配置对控制器的 TLS 访问

+
[ssl]
+ca_file = cafile.pem
+cert_file = certfile.crt
+key_file = keyfile.key
+

基于角色的访问控制策略

+

数据处理服务使用策略文件(如策略中所述)来配置基于角色的访问控制。使用策略文件,操作员可以限制组对特定数据处理功能的访问。

+

执行此操作的原因将根据安装的组织要求而更改。通常,这些细粒度控件用于操作员需要限制数据处理服务资源的创建、删除和检索的情况。需要限制项目内访问的操作员应充分意识到,需要有其他方法让用户访问服务的核心功能(例如,配置集群)。

+

例。允许所有用户使用所有方法(默认策略)

+
{
+    "default": ""
+}
+

例。禁止对非管理员用户进行映像注册表操作

+
{
+    "default": "",
+
+    "data-processing:images:register": "role:admin",
+    "data-processing:images:unregister": "role:admin",
+    "data-processing:images:add_tags": "role:admin",
+    "data-processing:images:remove_tags": "role:admin"
+}
+

安全组

+

数据处理服务允许将安全组与为其集群预置的实例相关联。无需其他配置,该服务将对预置集群的任何项目使用默认安全组。如果请求,可以使用不同的安全组,或者存在一个自动选项,该选项指示服务根据所访问框架指定的端口创建安全组。

+

对于生产环境,我们建议手动控制安全组,并创建一组适合安装的组规则。通过这种方式,操作员可以确保默认安全组将包含所有适当的规则。有关安全组的扩展讨论,请参阅安全组。

+

代理域

+

将对象存储服务与数据处理结合使用时,需要添加存储访问凭据。使用代理域,数据处理服务可以改用来自标识服务的委派信任,以允许通过域中创建的临时用户进行存储访问。要使此委派机制起作用,必须将数据处理服务配置为使用代理域,并且操作员必须为代理用户配置身份域。

+

数据处理控制器保留为对象存储访问提供的用户名和密码的临时存储。使用代理域时,控制器将为代理用户生成此对,并且此用户的访问将仅限于身份信任的访问。我们建议在控制器或其数据库具有与公共网络之间的路由的任何安装中使用代理域。

+

示例:为名为“dp_proxy”的代理域进行配置

+
[DEFAULT]
+use_domain_for_proxy_users = true
+proxy_user_domain_name = dp_proxy
+proxy_user_role_names = Member
+

自定义网络拓扑

+

数据处理控制器可以配置为使用代理命令来访问其集群实例。通过这种方式,可以为不使用网络服务直接提供的网络的安装创建自定义网络拓扑。对于需要限制控制器和实例之间访问的安装,我们建议使用此选项。

+

示例:通过指定的中继机访问实例

+
[DEFAULT]
+proxy_command='ssh relay-machine-{tenant_id} nc {host} {port}'
+

示例:通过自定义网络命名空间访问实例

+
[DEFAULT]
+proxy_command='ip netns exec ns_for_{network_id} nc {host} {port}'
+

间接访问

+

对于控制器对集群所有实例的访问权限有限的安装,由于对浮动 IP 地址或安全规则的限制,可以配置间接访问。这允许将某些实例指定为集群其他实例的代理网关。

+

只有在定义将构成数据处理集群的节点组模板时,才能启用此配置。它作为运行时选项提供,可在群集置备过程中启用。

+

Rootwrap

+

在为网络访问创建自定义拓扑时,可能需要允许非 root 用户运行代理命令。对于这些情况,oslo rootwrap 软件包用于为非 root 用户提供运行特权命令的工具。此配置要求与数据处理控制器应用程序关联的用户位于 sudoers 列表中,并在配置文件中启用该选项。或者,可以提供备用 rootwrap 命令。

+

示例:启用 rootwrap 用法并显示默认命令

+
[DEFAULT]
use_rootwrap=True
rootwrap_command='sudo sahara-rootwrap /etc/sahara/rootwrap.conf'
+

关于 rootwrap 项目的更多信息,请参考官方文档:https://wiki.openstack.org/wiki/Rootwrap

+

日志

+

监视服务控制器的输出是一个强大的取证工具,如监视和日志记录中更详细地描述的那样。数据处理服务控制器提供了几个选项来设置日志记录的位置和级别。

+

示例:将日志级别设置为高于警告并指定输出文件。

+
[DEFAULT]
+verbose = true
+log_file = /var/log/data-processing.log
+

参考书目

+

OpenStack.org,欢迎来到Sahara!2016.Sahara项目文档

+

Apache 软件基金会,欢迎来到 Apache Hadoop!2016. Apache Hadoop 项目

+

Apache 软件基金会,安全模式下的 Hadoop。2016. Hadoop 安全模式文档

+

Apache 软件基金会,HDFS 用户指南。2016. Hadoop HDFS 文档

+

Apache 软件基金会,Spark。2016. Spark项目

+

Apache 软件基金会,Spark Security。2016. Spark 安全文档

+

Apache 软件基金会,Apache Storm。2016. Storm 项目

+

Apache 软件基金会,Apache Zookeeper。2016. Zookeeper 项目

+

Apache 软件基金会,Apache Oozie Workflow Scheduler for Hadoop。2016. Oozie项目

+

Apache 软件基金会,Apache Hive。2016. Hive

+

Apache 软件基金会,欢迎来到 Apache Pig。2016.Pig

+

Apache 软件基金会,Cloudera 产品文档。2016. Cloudera CDH 文档

+

Hortonworks,Hortonworks。2016. Hortonworks 数据平台文档

+

MapR Technologies,用于 MapR 融合数据平台的 Apache Hadoop。2016. MapR 项目

+

数据库

+

数据库服务器的选择是 OpenStack 部署安全性的一个重要考虑因素。在决定使用何种数据库服务器时,应考虑多种因素,但在本书的范围内,将只讨论安全注意事项。OpenStack 支持多种数据库类型。有关更多信息,请参阅《OpenStack 管理员指南》。

+

《安全指南》目前主要针对 PostgreSQL 和 MySQL。

+
    +
  • 数据库后端注意事项
  • +
  • 数据库后端的安全参考
  • +
  • 数据库访问控制
  • +
  • OpenStack 数据库访问模型
  • +
  • 数据库身份验证和访问控制
  • +
  • 要求用户帐户需要 SSL 传输
  • +
  • 使用 X.509 证书进行身份验证
  • +
  • OpenStack 服务数据库配置
  • +
  • Nova-conductor
  • +
  • 数据库传输安全性
  • +
  • 数据库服务器 IP 地址绑定
  • +
  • 数据库传输
  • +
  • MySQL SSL配置
  • +
  • PostgreSQL SSL 配置
  • +
+

数据库后端注意事项

+

PostgreSQL 具有许多理想的安全功能,例如 Kerberos 身份验证、对象级安全性和加密支持。PostgreSQL 社区在提供可靠的指导、文档和工具以促进积极的安全实践方面做得很好。

+

MySQL拥有庞大的社区,被广泛采用,并提供高可用性选项。MySQL还能够通过插件身份验证机制提供增强的客户端身份验证。MySQL社区中的分叉发行版提供了许多可供考虑的选项。根据对安全态势的全面评估和为给定发行版提供的支持级别,选择MySQL的特定实现非常重要。

+

数据库后端的安全参考

+

建议部署 MySQL 或 PostgreSQL 的用户参考现有的安全指南。下面列出了一些参考资料:

+

MySQL数据库:

  • OWASP MySQL 强化
  • MySQL 可插拔身份验证
  • MySQL 中的安全性

PostgreSQL:

+
  • OWASP PostgreSQL 强化
  • PostgreSQL 数据库中的总体安全性
+

数据库访问控制

+

每个核心 OpenStack 服务(计算、身份、网络、块存储)都将状态和配置信息存储在数据库中。在本章中,我们将讨论当前在OpenStack中使用数据库的方式。我们还探讨了安全问题,以及数据库后端选择的安全后果。

+

OpenStack 数据库访问模型

+

OpenStack 项目中的所有服务都访问单个数据库。目前没有用于创建基于表或行的数据库访问限制的参考策略。

+

在OpenStack中,没有对数据库操作进行精细控制的一般规定。访问权限和特权的授予仅基于节点是否有权访问数据库。在这种情况下,有权访问数据库的节点可能具有 DROP、INSERT 或 UPDATE 函数的完全权限。

+
精细访问控制
+

默认情况下,每个 OpenStack 服务及其进程都使用一组共享凭据访问数据库。这使得审核数据库操作和撤消服务及其进程对数据库的访问权限变得特别困难。

+

../_images/databaseusername.png

+
Nova-conductor
+

计算节点是 OpenStack 中最不受信任的服务,因为它们托管租户实例。引入该 nova-conductor 服务作为数据库代理,充当计算节点和数据库之间的中介。我们将在本章后面讨论其后果。

+

我们强烈建议:

+
    +
  • 所有数据库通信都与管理网络隔离
  • +
  • 使用 TLS 保护通信
  • +
  • 为每个 OpenStack 服务端点创建唯一的数据库用户帐户(如下图所示)
  • +
+

../_images/databaseusernamessl.png

+

数据库认证和访问控制

+

考虑到访问数据库的风险,我们强烈建议为每个需要访问数据库的节点创建唯一的数据库用户帐户。这样做有助于更好地进行分析和审核,以确保合规性,或者在节点遭到入侵时,通过在检测到该节点时删除该节点对数据库的访问来隔离受感染的主机。创建这些每个服务终结点数据库用户帐户时,应注意确保将其配置为需要 TLS。或者,为了提高安全性,建议除了用户名和密码外,还使用 X.509 证书身份验证来配置数据库帐户。

+
权限
+

应创建并保护一个单独的数据库管理员 (DBA) 帐户,该帐户具有创建/删除数据库、创建用户帐户和更新用户权限的完全权限。这种简单的责任分离方法有助于防止意外配置错误,降低风险并缩小危害范围。

+

为 OpenStack 服务和每个节点创建的数据库用户帐户,其权限应仅限于该节点所属服务对应的数据库。
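
例如,下面是一个仅作示意的 MySQL 语句(账户名、主机名和密码均为假设值),为某个计算节点创建仅能访问 nova 数据库且要求 SSL 连接的专用账户:

CREATE USER 'compute01'@'hostname' IDENTIFIED BY 'NOVA_DBPASS';
GRANT SELECT, INSERT, UPDATE, DELETE ON nova.* TO 'compute01'@'hostname' REQUIRE SSL;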

+

要求用户帐户需要 SSL 传输

+
配置示例 #1:(MySQL)
+
GRANT ALL ON dbname.* to 'compute01'@'hostname' IDENTIFIED BY 'NOVA_DBPASS' REQUIRE SSL;
+
配置示例 #2:(PostgreSQL)
+

在文件中 pg_hba.conf

+
hostssl dbname compute01 hostname md5
+

请注意,此命令仅添加通过 SSL 进行通信的功能,并且是非独占的。应禁用可能允许未加密传输的其他访问方法,以便 SSL 是唯一的访问方法。
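
例如,下面是一个仅作示意的 pg_hba.conf 片段(数据库名、用户名和主机名沿用上文示例),其中第二行显式拒绝未加密连接,从而使 SSL 成为唯一的访问方式(pg_hba.conf 按行的先后顺序匹配):

hostssl   dbname  compute01  hostname  md5
hostnossl dbname  compute01  hostname  reject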

+

md5 参数将身份验证方法定义为哈希密码。我们在以下部分中提供了一个安全身份验证示例。

+
OpenStack 服务数据库配置
+

如果数据库服务器配置为使用 TLS 传输,则需要指定用于 SQLAlchemy 查询中的初始连接字符串的证书颁发机构信息。

+
MySQL :sql_connection 的字符串示例:
+
sql_connection = mysql://compute01:NOVA_DBPASS@localhost/nova?charset=utf8&ssl_ca=/etc/mysql/cacert.pem
+

使用 X.509 证书进行身份验证

+

通过要求使用 X.509 客户端证书进行身份验证,可以增强安全性。以这种方式对数据库进行身份验证可以为与数据库建立连接的客户端提供更好的身份保证,并确保通信是加密的。

+
配置示例 #1:(MySQL)
+
GRANT ALL on dbname.* to 'compute01'@'hostname' IDENTIFIED BY 'NOVA_DBPASS' REQUIRE SUBJECT
+'/C=XX/ST=YYY/L=ZZZZ/O=cloudycloud/CN=compute01' AND ISSUER
+'/C=XX/ST=YYY/L=ZZZZ/O=cloudycloud/CN=cloud-ca';
+
配置示例 #2:(PostgreSQL)
+
hostssl dbname compute01 hostname cert
+

OpenStack 服务数据库配置

+

如果数据库服务器配置为需要 X.509 证书进行身份验证,则需要为数据库后端指定相应的 SQLAlchemy 查询参数。这些参数指定用于初始连接字符串的证书、私钥和证书颁发机构信息。

+

MySQL 的 X.509 证书身份验证 :sql_connection 字符串示例:

+
sql_connection = mysql://compute01:NOVA_DBPASS@localhost/nova?charset=utf8&ssl_ca=/etc/mysql/cacert.pem&ssl_cert=/etc/mysql/server-cert.pem&ssl_key=/etc/mysql/server-key.pem
+

Nova-conductor

+

OpenStack Compute 提供了一个称为 nova-conductor 的子服务,用于代理数据库连接,其主要目的是让 nova 计算节点与 nova-conductor 连接以满足数据持久性需求,而不是直接与数据库通信。

+

Nova-conductor 通过 RPC 接收请求并代表调用服务执行操作,而无需授予对数据库、其表或其中数据的精细访问权限。Nova-conductor 实质上将直接数据库访问从计算节点中抽象出来。

+

这种抽象的优点是将服务限制为使用参数执行方法,类似于存储过程,从而防止大量系统直接访问或修改数据库数据。这是在不在数据库本身的上下文或范围内存储或执行这些过程的情况下完成的,这是对典型存储过程的常见批评。

+

../_images/novaconductor.png

+

遗憾的是,此解决方案使更细粒度的访问控制和审核数据访问的能力的任务复杂化。由于 nova-conductor 服务通过 RPC 接收请求,因此它突出了提高消息传递安全性的重要性。任何有权访问消息队列的节点都可以执行 nova-conductor 提供的这些方法,并有效地修改数据库。

+

请注意,由于 nova-conductor 仅适用于 OpenStack Compute,因此对于其他 OpenStack 组件(如遥测(Telemetry)、网络和块存储)的运行,可能仍然需要从计算主机直接访问数据库。

+

若要禁用 nova-conductor,请将以下内容放入 nova.conf 文件中(在计算主机上):

+
[conductor]
+use_local = true
+

数据库传输安全性

+

本章介绍与数据库服务器之间的网络通信相关的问题。这包括 IP 地址绑定和使用 TLS 加密网络流量。

+

数据库服务器 IP 地址绑定

+

若要隔离服务和数据库之间的敏感数据库通信,强烈建议将数据库服务器配置为仅允许通过隔离的管理网络与数据库进行通信。这是通过限制数据库服务器为传入客户端连接绑定网络套接字的接口或 IP 地址来实现的。

+
限制 MySQL 的绑定地址
+

my.cnf

+
[mysqld]
...
bind-address = <ip address or hostname of management network interface>
+
限制 PostgreSQL 的监听地址
+

postgresql.conf

+
listen_addresses = <ip address or hostname of management network interface>
+

数据库传输

+

除了将数据库通信限制为管理网络外,我们还强烈建议云管理员将其数据库后端配置为需要 TLS。将 TLS 用于数据库客户端连接可保护通信不被篡改和窃听。正如下一节将讨论的那样,使用 TLS 还提供了通过 X.509 证书(通常称为 PKI)执行数据库用户身份验证的框架。以下是有关如何为两个流行的数据库后端 MySQL 和 PostgreSQL 配置 TLS 的指南。

+

注意

+
安装证书和密钥文件时,请确保文件权限受到限制,例如 `chmod 0600` ,所有权限制为数据库守护程序用户,以防止数据库服务器上的其他进程和用户进行未经授权的访问。
+

MySQL SSL配置

+

应在系统范围的MySQL配置文件中添加以下行:

+

my.cnf

+
[mysqld]
...
ssl-ca = /path/to/ssl/cacert.pem
ssl-cert = /path/to/ssl/server-cert.pem
ssl-key = /path/to/ssl/server-key.pem
+

(可选)如果您希望限制用于加密连接的 SSL 密码套件,可以另外进行指定。有关密码套件列表以及密码字符串的语法,请参阅相关 ciphers 文档:

+
ssl-cipher = 'cipher:list'
+

PostgreSQL SSL 配置

+

应在系统范围的 PostgreSQL 配置文件中添加以下行。 postgresql.conf

+
ssl = true
+

(可选)如果您希望限制用于加密连接的 SSL 密码套件,可以另外进行指定。有关密码套件列表以及密码字符串的语法,请参阅相关 ciphers 文档:

+
ssl_ciphers = 'cipher:list'
+

服务器证书、密钥和证书颁发机构 (CA) 文件应放在以下文件的 $PGDATA 目录中:

+
    +
  • $PGDATA/server.crt - 服务器证书
  • +
  • $PGDATA/server.key - 私钥对应于 server.crt
  • +
  • $PGDATA/root.crt - 可信证书颁发机构
  • +
  • $PGDATA/root.crl - 证书撤销列表
  • +
+

租户数据隐私

+

OpenStack旨在支持多租户,这些租户很可能有不同的数据要求。作为云构建者或运营商,您必须确保您的 OpenStack 环境能够解决数据隐私问题和法规。在本章中,我们将讨论与 OpenStack 实现相关的数据驻留和处置。

+
    +
  • 数据隐私问题
  • +
  • 数据驻留
  • +
  • 数据处置
  • +
  • 数据加密
  • +
  • 卷加密
  • +
  • 临时磁盘加密
  • +
  • 对象存储对象
  • +
  • 块存储性能和后端
  • +
  • 网络数据
  • +
  • 密钥管理
  • +
  • 参考书目:
  • +
+

数据隐私问题

+

数据驻留

+

在过去几年中,数据的隐私和隔离一直被认为是采用云的主要障碍。过去,对谁拥有云中数据以及云运营商是否可以最终信任这些数据的保管人的担忧一直是重大问题。

+

许多 OpenStack 服务维护属于租户的数据和元数据或参考租户信息。

+

存储在 OpenStack 云中的租户数据可能包括以下项目:

+
    +
  • 对象存储对象
  • +
  • 计算实例临时文件系统存储
  • +
  • 计算实例内存
  • +
  • 块存储卷数据
  • +
  • 用于计算访问的公钥
  • +
  • 映像服务中的虚拟机映像
  • +
  • 计算机快照
  • +
  • 传递给 OpenStack Compute 的配置驱动器扩展的数据
  • +
+

OpenStack 云存储的元数据包括以下非详尽项目:

+
    +
  • 组织名称
  • +
  • 用户的“真实姓名”
  • +
  • 正在运行的实例、存储桶、对象、卷和其他配额相关项目的数量或大小
  • +
  • 运行实例或存储数据的小时数
  • +
  • 用户的 IP 地址
  • +
  • 内部生成的用于计算映像捆绑的私钥
  • +
+

数据处置

+

OpenStack运营商应努力提供一定程度的租户数据处置保证。最佳实践建议操作员在处置、释放组织控制或释放以供重复使用之前对云系统介质(数字和非数字)进行清理。鉴于信息的特定安全域和敏感性,清理方法应实现适当级别的强度和完整性。

+

“清理过程会从介质中删除信息,因此无法检索或重建信息。清理技术,包括清除(clearing)、净化(purging)、加密擦除和销毁,可防止在重复使用或释放处置此类介质时向未经授权的个人披露信息。”——NIST 特别出版物 800-53 修订版 4

+

以下是参照 NIST 推荐安全控制措施制定的一般数据处置和清理指南。云运营商应:

+
    +
  1. 跟踪、记录和验证介质清理和处置操作。
  2. 测试清理设备和程序以验证其性能是否正常。
  3. 在将便携式可移动存储设备连接到云基础架构之前,先对其进行清理。
  4. 销毁无法清理的云系统介质。
+

在 OpenStack 部署中,您需要解决以下问题:

+
    +
  • 安全数据擦除
  • +
  • 实例内存清理
  • +
  • 块存储卷数据
  • +
  • 计算实例临时存储
  • +
  • 裸机服务器清理
  • +
+
数据未安全删除
+

在 OpenStack 中,某些数据可能会被删除,但不会在上述 NIST 标准的意义上被安全删除。这通常适用于存储在数据库中的上述大部分或全部元数据和信息。可以通过数据库和/或系统配置中的自动清理(auto vacuuming)和定期可用空间擦除来缓解此问题。

+
实例内存清理
+

特定于各种虚拟机管理程序的是实例内存的处理。OpenStack Compute 中没有定义此行为,尽管通常期望 hypervisor 在删除实例和/或创建实例时尽最大努力清理内存。

+

Xen 显式地为实例分配专用内存区域,并在实例(或 Xen 术语中的域)销毁时清理数据。KVM 在很大程度上依赖于 Linux 页面管理;KVM 文档中定义了一组与 KVM 分页相关的复杂规则。

+

需要注意的是,使用 Xen 内存气球功能可能会导致信息泄露。我们强烈建议避免使用此功能。

+

对于这些和其他虚拟机管理程序,我们建议参考特定于虚拟机管理程序的文档。

+
Cinder 卷数据
+

强烈建议使用 OpenStack 卷加密功能。下面“卷加密”下的“数据加密”部分对此进行了讨论。使用此功能时,通过安全地删除加密密钥来完成数据销毁。最终用户可以在创建卷时选择此功能,但请注意,管理员必须先执行卷加密功能的一次性设置。有关此设置的说明,请参阅“配置参考”的“块存储”部分的“卷加密”下。

+

如果不使用 OpenStack 卷加密功能,那么其他方法通常更难启用。如果使用后端插件,则可能存在独立的加密方法或非标准覆盖解决方案。OpenStack Block Storage 的插件将以多种方式存储数据。许多插件特定于供应商或技术,而其他插件则更多地是围绕文件系统(如 LVM 或 ZFS)的 DIY 解决方案。安全销毁数据的方法因插件而异,因供应商的解决方案而异,也因文件系统而异。

+

一些后端(如 ZFS)将支持写入时复制,以防止数据泄露。在这些情况下,从未写入块中读取将始终返回零。其他后端(如 LVM)可能本身不支持此功能,因此块存储插件负责在将之前写入的块交给用户之前覆盖它们。请务必查看所选卷后端提供哪些保证,并查看哪些中介可用于未提供的保证。

+
镜像服务延时删除功能
+

OpenStack 镜像服务具有延迟删除功能,该功能将在定义的时间段内等待镜像的删除。如果存在安全问题,建议通过编辑 etc/glance/glance-api.conf 文件并将 delayed_delete 选项设置为 False 来禁用此功能。
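
对应的 glance-api.conf 片段示意如下:

[DEFAULT]
# 禁用镜像延迟删除
delayed_delete = False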

+
计算软删除功能
+

OpenStack Compute 具有软删除功能,该功能使被删除的实例在定义的时间段内处于软删除状态。实例可以在此时间段内恢复。若要禁用软删除功能,请编辑 etc/nova/nova.conf 文件并将该 reclaim_instance_interval 选项留空。
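
作为示意,nova.conf 中对应的设置大致如下(默认值 0 表示不保留软删除状态,等效于上文所说的留空):

[DEFAULT]
# 0 表示实例删除后立即回收,不进入软删除状态
reclaim_instance_interval = 0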

+
计算实例临时存储
+

请注意,OpenStack 临时磁盘加密功能提供了一种改进临时存储隐私和隔离的方法,无论是在主动使用期间还是在销毁数据时。与加密块存储一样,只需删除加密密钥即可有效地销毁数据。

+

在创建和销毁临时存储时,提供数据隐私的替代措施将在一定程度上取决于所选的虚拟机管理程序和 OpenStack 计算插件。

+

用于计算的 libvirt 插件可以直接在文件系统上或 LVM 中维护临时存储。文件系统存储通常不会在删除数据时覆盖数据,但可以保证不会向用户提供脏盘区。

+

当使用 LVM 支持的基于块的临时存储时,OpenStack 计算软件必须安全地擦除块以防止信息泄露。过去曾存在与不当擦除的临时块存储设备相关的信息泄露漏洞。

+

文件系统存储对于临时块存储设备来说是一种比 LVM 更安全的解决方案,因为无法为用户提供脏盘区。但是,需要注意的是,用户数据不会被破坏,因此建议对后备文件系统进行加密。

+
裸机服务器清理
+

用于计算的裸机服务器驱动程序正在开发中,此后已转移到一个名为 ironic 的单独项目中。在撰写本文时,ironic 似乎尚未解决驻留在物理硬件中的租户数据的清理问题。

+

此外,裸机系统的租户可以修改系统固件。安全引导中所述的 TPM 技术提供了一种用于检测未经授权的固件更改的解决方案。

+

数据加密

+

该选项可供实施者加密租户数据,无论这些数据存储在磁盘上或通过网络传输,例如下面描述的 OpenStack 卷加密功能。这超出了用户在将自己的数据发送给提供商之前加密自己的数据的一般建议。

+

代表租户加密数据的重要性很大程度上与提供商承担的攻击者可能访问租户数据的风险有关。政府可能有要求,也有每个策略的要求,私有合同,甚至与公共云提供商的私有合同有关的判例法。建议在选择租户加密策略之前进行风险评估和法律顾问。

+

按实例或按对象加密比按项目、按租户、按主机和按云聚合降序进行加密更可取。这项建议与实施的复杂性和难度相反。目前,在某些项目中,很难或不可能实现像每个租户一样松散的加密。我们建议实现者尽最大努力加密租户数据。

+

通常,数据加密与可靠地销毁租户和每个实例数据的能力呈正相关,只需丢弃密钥即可。应该指出的是,在这样做时,以可靠和安全的方式销毁这些密钥变得非常重要。

+

存在为用户加密数据的机会:

+
    +
  • 对象存储对象
  • +
  • 网络数据
  • +
+

卷加密

+

OpenStack 中的卷加密功能支持基于每个租户的隐私保护。从 Kilo 版本开始,支持以下功能:

+
    +
  • 创建和使用加密卷类型,通过仪表板或命令行界面启动
  • +
  • 启用加密并选择加密算法和密钥大小等参数
  • +
  • iSCSI 数据包中包含的卷数据已加密
  • +
  • 如果原始卷已加密,则支持加密备份
  • +
  • 仪表板指示卷加密状态。包括卷已加密的指示,并包括算法和密钥大小等加密参数
  • +
  • 通过安全包装器与密钥管理服务交互
  • +
  • 后端密钥存储支持卷加密,以增强安全性(例如,硬件安全模块 (HSM) 或 KMIP 服务器可用作 barbican 后端密钥存储)
  • +
+

临时磁盘加密

+

临时磁盘加密功能可解决数据隐私问题。临时磁盘是虚拟主机操作系统使用的临时工作空间。如果不加密,可以在此磁盘上访问敏感的用户信息,并且在卸载磁盘后可能会保留残留信息。从 Kilo 版本开始,支持以下临时磁盘加密功能:

+
    +
  • 创建和使用加密的 LVM 临时磁盘(注意:目前 OpenStack 计算服务仅支持 LVM 格式的加密临时磁盘)
  • +
  • 计算配置 , nova.conf 在“[ephemeral_storage_encryption]”部分中具有以下默认参数
      +
    • 选项:“密码 = AES-XTS-plain64”
    • +
    • 此字段设置用于加密临时存储的密码和模式。NIST建议将AES-XTS专门用于磁盘存储,该名称是使用XTS加密模式的AES加密的简写。可用的密码取决于内核支持。在命令行中,输入“cryptsetup benchmark”以确定可用选项(并查看基准测试结果),或转到 /proc/crypto
    • +
    • 选项: 'enabled = false'
    • +
    • 要使用临时磁盘加密,请设置选项:“enabled = true”
    • +
    • 选项:“key_size = 512”
    • +
    • 请注意,后端密钥管理器可能存在密钥大小限制,可能需要使用“key_size = 256”,这仅提供 128 位的 AES 密钥大小。除了 AES 所需的加密密钥外,XTS 还需要自己的“调整密钥”。这通常表示为单个大键。在这种情况下,使用 512 位设置,AES 将使用 256 位,XTS 将使用 256 位。(见NIST)
    • +
    +
  • +
  • 通过安全包装器与密钥管理服务交互
  • +
  • 密钥管理服务将通过为每个租户提供临时磁盘加密密钥来支持数据隔离
  • +
  • 后端密钥存储支持临时磁盘加密,以增强安全性(例如,HSM 或 KMIP 服务器可用作 barbican 后端密钥存储)
  • +
  • 使用密钥管理服务时,当不再需要临时磁盘时,只需删除密钥即可取代覆盖临时磁盘存储区域
  • +
+

对象存储对象

+

对象存储 (swift) 支持对存储节点上的静态对象数据进行可选加密。对象数据的加密旨在降低在未经授权的一方获得对磁盘的物理访问权限时读取用户数据的风险。

+

静态数据加密由中间件实现,中间件可能包含在代理服务器 WSGI 管道中。该功能是 swift 集群内部的,不通过 API 公开。客户端不知道 swift 服务内部的此功能对数据进行了加密;内部加密的数据不应通过 swift API 返回给客户端。
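
下面是一个仅作示意的 proxy-server.conf 片段(中间件名称与选项请以所部署 swift 版本的文档为准),展示如何在代理服务器管道中启用静态加密:

[pipeline:main]
pipeline = ... keymaster encryption proxy-logging proxy-server

[filter:keymaster]
use = egg:swift#keymaster
# 假设性示例:根密钥应以安全方式生成并妥善保管
encryption_root_secret = <base64 编码的随机密钥>

[filter:encryption]
use = egg:swift#encryption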

+

以下数据在 swift 中静态时被加密:

+
    +
  • 对象内容。例如,对象 PUT 请求正文的内容
  • +
  • 具有非零内容的对象的实体标记 (ETag)
  • +
  • 所有自定义用户对象元数据值。例如,使用 X-Object-Meta- 带有 PUT 或 POST 请求的前缀标头发送的元数据
  • +
+

上述列表中未包含的任何数据或元数据均未加密,包括:

+
    +
  • 帐户、容器和对象名称
  • +
  • 帐户和容器自定义用户元数据值
  • +
  • 所有自定义用户元数据名称
  • +
  • 对象内容类型值
  • +
  • 对象大小
  • +
  • 系统元数据
  • +
+

有关对象存储加密的部署、操作或实施的更多信息,请参阅有关对象加密的 swift 开发人员文档。

+

块存储性能和后端

+

启用操作系统时,可以使用 Intel 和 AMD 处理器中当前可用的硬件加速功能来增强 OpenStack Volume Encryption 性能。OpenStack 卷加密功能和 OpenStack 临时磁盘加密功能都用于 dm-crypt 保护卷数据。 dm-crypt 是 Linux 内核版本 2.6 及更高版本中的透明磁盘加密功能。启用卷加密后,加密数据将通过 iSCSI 发送到块存储,从而同时保护传输中的数据和静态数据。使用硬件加速时,这两种加密功能对性能的影响都会降到最低。

+

虽然我们建议使用 OpenStack 卷加密功能,但块存储支持多种替代后端来提供可挂载卷,其中一些还可能提供卷加密。由于后端如此之多,并且必须从每个供应商处获取信息,因此指定在任何一个供应商中实施加密的建议超出了本指南的范围。

+

网络数据

+

计算的租户数据可以通过 IPsec 或其他隧道进行加密。这在OpenStack中并不常见或标准,但对于有动力和感兴趣的实现者来说,这是一个选项。

+

同样,加密数据在通过网络传输时将保持加密状态。

+

密钥管理

+

为了解决经常提到的租户数据隐私问题并限制云提供商的责任,OpenStack 社区对让数据加密更加普遍的兴趣越来越大。对于最终用户来说,在将数据保存到云之前对其进行加密相对容易,这对媒体文件、数据库存档等租户对象是一条可行的路径。在某些情况下,客户端加密用于加密由虚拟化技术保存的数据,这需要客户端交互(例如提供密钥)才能解密数据以供将来使用。为了无缝地保护数据并使其可访问,同时不给客户端带来管理密钥并以交互方式提供密钥的负担,OpenStack 提供了密钥管理服务。作为 OpenStack 的一部分提供加密和密钥管理服务,可以简化静态数据保护的采用,解决客户对隐私或数据滥用的担忧,同时也限制了云提供商的责任。这有助于减少提供商在多租户公有云中的事件调查期间处理租户数据时的责任。

+

卷加密和临时磁盘加密功能依赖于密钥管理服务(例如,barbican)来创建和安全存储密钥。密钥管理器是可插入的,以方便需要第三方硬件安全模块 (HSM) 或使用密钥管理交换协议 (KMIP) 的部署,该协议由名为 PyKMIP 的开源项目支持。

+

参考书目:

+
    +
  • OpenStack.org,欢迎来到 barbican 的开发者文档!2014。Barbican 开发者文档
  • +
  • oasis-open.org,OASIS 密钥管理互操作性协议 (KMIP)。2014年。KMIP
  • +
  • PyKMIP 库
  • +
  • 机密管理 机密管理
  • +
+

实例安全管理

+

在虚拟化环境中运行实例的优点之一是,它为安全控制开辟了新的机会,而这些控制在部署到裸机上时通常不可用。有几种技术可以应用于虚拟化堆栈,为云租户带来更好的信息保障。

+

具有强烈安全要求的 OpenStack 部署人员或用户可能需要考虑部署这些技术。并非所有情况都适用。在某些情况下,由于规范性业务需求,可能会排除在云中使用技术。同样,某些技术会检查实例数据,例如运行状态,这对系统用户来说可能是不希望的。

+

在本章中,我们将探讨这些技术,并描述它们可用于增强实例或底层实例安全性的情况。我们还试图强调可能存在隐私问题的地方。这些包括数据传递、内省或提供熵源。在本节中,我们将重点介绍以下附加安全服务:

+
    +
  • 实例的熵
  • +
  • 将实例调度到节点
  • +
  • 受信任的映像
  • +
  • 实例迁移
  • +
  • 监控、警报和报告
  • +
  • 更新和补丁
  • +
  • 防火墙和其他基于主机的安全控制
  • +
  • 实例的安全服务
  • +
  • 实例的熵
  • +
  • 将实例调度到节点
  • +
  • 受信任的映像
  • +
  • 实例迁移
  • +
  • 监控、警报和报告
  • +
  • 更新和补丁
  • +
  • 防火墙和其他基于主机的安全控制
  • +
+

实例的安全服务

+

实例的熵

+

我们认为熵是指实例可用的随机数据的质量和来源。加密技术通常严重依赖随机性,需要高质量的熵池才能从中汲取。虚拟机通常很难获得足够的熵来支持这些操作,这称为熵饥饿。熵饥饿可以表现为看似无关的事情。例如,启动时间慢可能是由于实例等待 ssh 密钥生成造成的。熵饥饿还可能促使用户在实例中使用质量较差的熵源,从而使在云中运行的应用程序整体安全性降低。

+

幸运的是,云架构师可以通过为云实例提供高质量的熵源来解决这些问题。这可以通过在云中拥有足够的硬件随机数生成器 (HRNG) 来支持实例来实现。在这种情况下,“足够”在某种程度上是特定于域的。对于日常操作,现代 HRNG 可能会产生足够的熵来支持 50-100 个计算节点。高带宽 HRNG(例如英特尔 Ivy Bridge 和更新的处理器提供的 RdRand 指令)可能会处理更多节点。对于给定的云,架构师需要了解应用程序要求,以确保有足够的熵可用。

+

Virtio RNG 是一个随机数生成器,默认情况下用作 /dev/random 熵源,但可以配置为使用硬件 RNG 或熵收集守护程序 (EGD) 等工具,以提供一种通过分布式系统公平安全地分配熵的方法。Virtio RNG 是使用用于创建实例的元数据的 hw_rng 属性启用的。
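
例如,可以用类似下面的命令(仅作示意,镜像名称为假设值)在镜像元数据中声明 virtio RNG 设备:

$ openstack image set --property hw_rng_model=virtio <镜像名或 ID>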

+

将实例调度到节点

+

在创建实例之前,必须选择用于实例化映像的主机。此选择由 nova-scheduler 执行,它确定如何分派计算和卷请求。

+

FilterScheduler 是 OpenStack Compute 的默认调度程序,尽管还存在其他调度程序(请参阅 OpenStack Configuration Reference 中的 Scheduling 部分)。它与"过滤器提示"协同工作,以决定应在何处启动实例。此主机选择过程允许管理员满足许多不同的安全性和合规性要求。例如,根据云部署类型,如果数据隔离是主要关注点,则可以选择尽可能让租户实例驻留在相同的主机上;相反,出于可用性或容错原因,也可以尝试让租户的实例分布在尽可能多的不同主机上。

+

筛选器计划程序分为四大类:

+

基于资源的筛选器

+

这些筛选器将根据虚拟机监控程序主机集的利用率创建实例,并可以在可用或使用的属性(如 RAM、IO 或 CPU 利用率)上触发。

+

基于映像的过滤器

+

这将根据使用的映像(例如 VM 的操作系统或使用的映像类型)委派实例创建。

+

基于环境的过滤器

+

此筛选器将基于外部详细信息创建实例,例如在特定 IP 范围内、跨可用区或与其他实例位于同一主机上。

+

自定义条件

+

此筛选器将根据用户或管理员提供的条件(如信任或元数据分析)委派实例创建。

+

可以同时应用多个筛选器。例如,ServerGroupAffinity 筛选器用于确保实例创建在某一组主机的成员上,而 ServerGroupAntiAffinity 筛选器用于确保同一实例不会创建在另一组特定主机上。应仔细分析这些筛选器,以确保它们不会相互冲突,导致出现阻止实例创建的规则。

+

../_images/filteringWorkflow1.png

+

ServerGroupAffinity 和 ServerGroupAntiAffinity 筛选器相互冲突,不应同时启用。

+

DiskFilter 筛选器允许超额分配磁盘空间。虽然这通常不是问题,但对于精简置备的存储设备来说可能是个问题,因此此筛选器应与经过充分测试的配额一起使用。

+

我们建议禁用那些会解析用户提供的、或可被用户操纵的内容(例如元数据)的过滤器。
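
作为示意,可以在 nova.conf 中显式列出允许的调度过滤器,只保留经过审查的项(选项名称随版本而异:较新版本为 [filter_scheduler] 小节下的 enabled_filters,较早版本为 [DEFAULT] 下的 scheduler_default_filters,请以所用版本文档为准):

[filter_scheduler]
# 仅启用明确需要、且不解析用户可控数据的过滤器
enabled_filters = ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter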

+

可信镜像

+

在云环境中,用户使用预安装的映像或他们自己上传的映像。在这两种情况下,用户都应该能够确保他们正在使用的图像没有被篡改。验证图像的能力是安全性的基本要求。从映像源到使用映像的目标需要信任链。这可以通过对从受信任来源获取的映像进行签名并在使用前验证签名来实现。下面将讨论获取和创建已验证图像的各种方法,然后介绍图像签名验证功能。

+
镜像创建过程
+

OpenStack 文档提供了有关如何创建映像并将其上传到映像服务的指导。此外,假定您已有安装和强化客户机操作系统的流程。因此,以下各项将提供有关如何确保将映像安全地传入 OpenStack 的额外指导。获取映像有多种选项,每种选项都有相应的步骤来帮助验证映像的来源。

+

第一个选项是从受信任的来源获取启动媒体。

+
$ mkdir -p /tmp/download_directory
$ cd /tmp/download_directory
$ wget http://mirror.anl.gov/pub/ubuntu-iso/CDs/precise/ubuntu-12.04.2-server-amd64.iso
$ wget http://mirror.anl.gov/pub/ubuntu-iso/CDs/precise/SHA256SUMS
$ wget http://mirror.anl.gov/pub/ubuntu-iso/CDs/precise/SHA256SUMS.gpg
$ gpg --keyserver hkp://keyserver.ubuntu.com --recv-keys 0xFBB75451
$ gpg --verify SHA256SUMS.gpg SHA256SUMS
$ sha256sum -c SHA256SUMS 2>&1 | grep OK
+

第二种选择是使用 OpenStack 虚拟机映像指南。在这种情况下,您需要遵循组织的操作系统强化准则或受信任的第三方(如 Linux STIG)提供的准则。

+

最后一种选择是使用自动映像生成器。以下示例使用 Oz 映像生成器。OpenStack 社区最近创建了一个值得研究的新工具:disk-image-builder。我们尚未从安全角度评估此工具。

+

RHEL 6 CCE-26976-1 示例,这将有助于在 OZ 中实施 NIST 800-53 第 AC-19(d)节。

+
<template>
+<name>centos64</name>
+<os>
+  <name>RHEL-6</name>
+  <version>4</version>
+  <arch>x86_64</arch>
+  <install type='iso'>
+  <iso>http://trusted_local_iso_mirror/isos/x86_64/RHEL-6.4-x86_64-bin-DVD1.iso</iso>
+  </install>
+  <rootpw>CHANGE THIS TO YOUR ROOT PASSWORD</rootpw>
+</os>
+<description>RHEL 6.4 x86_64</description>
+<repositories>
+  <repository name='epel-6'>
+  <url>http://download.fedoraproject.org/pub/epel/6/$basearch</url>
+  <signed>no</signed>
+  </repository>
+</repositories>
+<packages>
+  <package name='epel-release'/>
+  <package name='cloud-utils'/>
+  <package name='cloud-init'/>
+</packages>
+<commands>
+  <command name='update'>
+  yum update
+  yum clean all
+  rm -rf /var/log/yum
+  sed -i '/^HWADDR/d' /etc/sysconfig/network-scripts/ifcfg-eth0
+  echo -n > /etc/udev/rules.d/70-persistent-net.rules
+  echo -n > /lib/udev/rules.d/75-persistent-net-generator.rules
+  chkconfig --level 0123456 autofs off
+  service autofs stop
+  </command>
+</commands>
+</template>
+

建议避免手动映像构建过程,因为它很复杂且容易出错。此外,使用 Oz 等自动化系统进行映像构建,或使用 Chef 或 Puppet 等配置管理实用程序进行启动后映像强化,使您能够生成一致的映像,并跟踪基础映像在一段时间内是否符合其各自的强化准则。

+

如果订阅公有云服务,则应与云提供商联系,了解用于生成其默认映像的过程的概述。如果提供商允许您上传自己的映像,则需要确保在使用映像创建实例之前能够验证映像是否未被修改。为此,请参阅以下有关图像签名验证的部分,如果无法使用签名,请参阅以下段落。

+

映像从节点上的映像服务传输到计算服务。应通过通过 TLS 运行来保护此传输。映像位于节点上后,将使用基本校验和对其进行验证,然后根据要启动的实例的大小扩展其磁盘。如果稍后在此节点上以相同的实例大小启动同一映像,则会从同一扩展映像启动该映像。由于此扩展映像在启动前默认不会重新验证,因此它可能已被篡改。除非在生成的映像中对文件执行手动检查,否则用户不会意识到篡改。

+
映像签名验证
+

OpenStack 中现在提供了一些与映像签名相关的功能。从 Mitaka 版本开始,映像服务可以验证这些已签名的映像,并且为了提供完整的信任链,计算服务可以选择在映像启动之前执行映像签名验证。在映像启动之前成功进行签名验证可确保已签名的映像未更改。启用此功能后,可以检测到未经授权的映像修改(例如,修改映像以包含恶意软件或 rootkit)。

+

管理员可以通过在 /etc/nova/nova.conf 文件中将 verify_glance_signatures 标志设置为 True 来启用映像签名验证。启用后,计算服务会在从映像服务检索已签名映像时自动对其进行验证;如果验证失败,则不会启动该实例。《OpenStack 操作指南》提供了有关如何创建和上传签名映像以及如何使用此功能的指导。有关更多信息,请参阅《操作指南》中的添加签名映像。
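
对应的 nova.conf 设置大致如下(在较新版本中该选项位于 [glance] 小节,具体位置请以所用版本的配置参考为准):

[glance]
# 在启动实例前验证镜像签名
verify_glance_signatures = True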

+

实例迁移

+

OpenStack 和底层虚拟化层提供在 OpenStack 节点之间实时迁移映像,使您能够无缝地执行 OpenStack 计算节点的滚动升级,而无需实例停机。但是,实时迁移也存在重大风险。若要了解所涉及的风险,以下是在实时迁移期间执行的高级步骤:

+
    +
  1. 在目标主机上启动实例
  2. 传输内存
  3. 停止客户机并同步磁盘
  4. 传输状态
  5. 启动客户机
+
实时迁移风险
+

在实时迁移过程的各个阶段,实例运行时、内存和磁盘的内容以纯文本形式通过网络传输。因此,在使用实时迁移时需要解决一些风险。以下详尽列表详细介绍了其中的一些风险:

+
    +
  • 拒绝服务 (DoS):如果在迁移过程中出现故障,实例可能会丢失。
  • +
  • 数据泄露:必须安全地处理内存或磁盘传输。
  • +
  • 数据操纵:如果内存或磁盘传输未得到安全处理,则攻击者可以在迁移过程中操纵用户数据。
  • +
  • 代码注入:如果内存或磁盘传输未得到安全处理,则攻击者可以在迁移期间操纵磁盘或内存中的可执行文件。
  • +
+
实时迁移缓解措施
+

有几种方法可以缓解与实时迁移相关的一些风险,以下列表详细介绍了其中的一些方法:

+
    +
  • 禁用实时迁移
  • +
  • 隔离的迁移网络
  • +
  • 加密实时迁移
  • +
+
禁用实时迁移
+

目前,OpenStack 中默认启用实时迁移。可以通过向 nova policy.json 文件添加以下行来禁用实时迁移:

+
{
+    "compute_extension:admin_actions:migrate": "!",
+    "compute_extension:admin_actions:migrateLive": "!",
+}
+
迁移网络
+

一般做法是,实时迁移流量应限制在管理安全域内,请参阅安全边界和威胁。对于实时迁移流量,由于其纯文本性质以及您正在传输正在运行的实例的磁盘和内存内容,因此建议您进一步将实时迁移流量分离到专用网络上。将流量隔离到专用网络可以降低暴露风险。

+
加密实时迁移
+

如果有足够的业务案例来保持实时迁移的启用状态,则 libvirtd 可以为实时迁移提供加密隧道。但是,此功能目前尚未在 OpenStack Dashboard 或 nova-client 命令中公开,只能通过手动配置 libvirtd 来访问。然后,实时迁移过程将更改为以下高级步骤:

+
  1. 实例数据从虚拟机管理程序复制到 libvirtd。
  2. 在源主机和目标主机上的 libvirtd 进程之间创建加密隧道。
  3. 目标 libvirtd 主机将实例复制回底层虚拟机管理程序。
+
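下面是一个仅作示意的配置草图(选项名称与证书部署细节请以 libvirt 及所用 OpenStack 版本的文档为准),展示如何让 libvirtd 监听 TLS,并让 nova 通过 TLS URI 执行实时迁移:

/etc/libvirt/libvirtd.conf:

# 启用 TLS 监听并关闭明文 TCP(需预先部署 CA、服务器和客户端证书)
listen_tls = 1
listen_tcp = 0

/etc/nova/nova.conf:

[libvirt]
live_migration_uri = "qemu+tls://%s/system"
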

监控、告警和报告

+

由于 OpenStack 虚拟机是能够跨主机复制的服务器映像,因此日志记录的最佳实践同样适用于物理主机和虚拟主机。应记录操作系统级和应用程序级事件,包括对主机和数据的访问事件、用户添加和删除、权限更改以及环境规定的其他事件。理想情况下,您可以将这些日志配置为导出到日志聚合器,该聚合器收集日志事件,将它们关联起来进行分析,并存储它们以供参考或进一步操作。实现此目的的一个常见工具是 ELK 堆栈,即 Elasticsearch、Logstash 和 Kibana。

+

应定期查看这些日志,例如由网络运营中心 (NOC) 实时查看,或者如果环境不够大而不需要 NOC,则日志应定期进行日志审查过程。

+

很多时候,有趣的事件会触发警报,该警报将发送给响应方以采取行动。通常,此警报采用包含相关消息的电子邮件形式。一个有趣的事件可能是重大故障,也可能是挂起故障的已知运行状况指示器。用于管理告警的两个常见实用程序是 Nagios 和 Zabbix。

+

更新和补丁

+

虚拟机监控程序运行独立的虚拟机。此虚拟机管理程序可以在操作系统中运行,也可以直接在硬件上运行(称为裸机)。对虚拟机监控程序的更新不会向下传播到虚拟机。例如,如果部署使用的是 XenServer,并且具有一组 Debian 虚拟机,则对 XenServer 的更新不会更新 Debian 虚拟机上运行的任何内容。

+

因此,我们建议分配虚拟机的明确所有权,并由这些所有者负责虚拟机的强化、部署和持续功能。我们还建议定期部署更新。这些补丁应在尽可能接近生产环境的环境中进行测试,以确保补丁背后的问题的稳定性和解决方案。

+

防火墙和其他基于主机的安全控制

+

最常见的操作系统包括基于主机的防火墙,以提高安全性。虽然我们建议虚拟机运行尽可能少的应用程序(如果可能的话,达到单一用途实例的程度),但应分析虚拟机上运行的所有应用程序,以确定应用程序需要访问哪些系统资源、运行所需的最低特权级别,以及将进出虚拟机的预期网络流量。此预期流量应作为允许的流量(或列入白名单)添加到基于主机的防火墙中,以及任何必要的日志记录和管理通信,例如 SSH 或 RDP。应在防火墙配置中明确拒绝所有其他流量。
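
下面是一个仅作示意的 iptables 规则草图,体现"只放行必要流量、默认拒绝"的思路(端口与规则需按实际应用和管理通信需求调整):

# 放行已建立连接的回包
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
# 放行管理用 SSH
iptables -A INPUT -p tcp --dport 22 -j ACCEPT
# 放行应用所需端口(示例:HTTPS)
iptables -A INPUT -p tcp --dport 443 -j ACCEPT
# 其余入站流量默认拒绝
iptables -P INPUT DROP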

+

在 Linux 虚拟机上,上述应用程序配置文件可以与 audit2allow 等工具结合使用,以构建 SELinux 策略,以进一步保护大多数 Linux 发行版上的敏感系统信息。SELinux 使用用户、策略和安全上下文的组合来划分应用程序运行所需的资源,并将其与其他不需要的系统资源区分开来。
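
作为示意,audit2allow 的典型用法大致如下(模块名为假设值,生成的策略应先经人工审查再加载):

# 从最近的 AVC 拒绝记录生成本地策略模块
$ ausearch -m avc -ts recent | audit2allow -M my_app_policy
# 审查生成的 my_app_policy.te 后再加载
$ semodule -i my_app_policy.pp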

+

OpenStack 为主机和网络提供安全组,以增加对给定项目中虚拟机的深度防御。这些规则类似于基于主机的防火墙,因为它们根据端口、协议和地址允许或拒绝传入流量,但安全组规则仅适用于传入流量,而基于主机的防火墙规则能够应用于传入和传出流量。主机和网络安全组规则也可能发生冲突并拒绝合法流量。我们建议确保为正在使用的网络正确配置安全组。有关详细信息,请参阅本指南中的安全组。

+

监视和日志记录

+

在云环境中,硬件、操作系统、虚拟机管理器、OpenStack 服务、云用户活动(例如创建实例和附加存储)、网络以及使用在各种实例上运行的应用程序的最终用户混合在一起。

+

日志记录的基础知识:配置、设置日志级别、日志文件的位置、如何使用和自定义日志,以及如何集中收集日志,这些在 OpenStack 操作指南中都有很好的介绍。

+
    +
  • 取证和事件响应
  • +
  • 监控用例
  • +
  • 参考书目
  • +
+

取证和事件响应

+

日志的生成和收集是安全监控 OpenStack 基础架构的重要组成部分。日志提供对管理员、租户和来宾日常操作的可见性,以及计算、网络和存储以及构成 OpenStack 部署的其他组件中的活动。

+

日志不仅对主动安全和持续合规性活动很有价值,而且也是调查和响应事件的宝贵信息源。

+

例如,分析身份服务或其替代身份验证系统的访问日志会提醒我们登录失败、频率、源 IP、事件是否仅限于选择帐户和其他相关信息。日志分析支持检测。

+

可以采取措施来缓解潜在的恶意活动,例如将 IP 地址列入黑名单、建议加强用户密码或停用被视为休眠的用户帐户。

+

监控用例

+

事件监控是一种更主动的方法,可以保护环境,提供实时检测和响应。有几种工具可以帮助进行监控。

+

对于OpenStack云实例,我们需要监控硬件、OpenStack服务和云资源使用情况。后者源于希望具有弹性,以适应用户的动态需求。

+

以下是在实施日志聚合、分析和监控时需要考虑的几个重要用例。这些用例可以通过各种应用程序、工具或脚本来实现和监控。有开源和商业解决方案,一些运营商开发自己的内部解决方案。这些工具和脚本可以生成事件,这些事件可以通过电子邮件发送给管理员或在集成仪表板中查看。请务必考虑可能适用于您的特定网络的其他用例,以及您可能认为的异常行为。

+
    +
  • 检测日志生成缺失是一个具有很高价值的事件。此类事件将表明服务失败,甚至表示入侵者暂时关闭了日志记录或修改了日志级别以隐藏其踪迹。
  • +
  • 应用程序事件(如计划外的启动或停止事件)也是要监视和检查可能的安全隐患的事件。
  • +
  • OpenStack 服务机器上的操作系统事件(如用户登录或重新启动)也为系统的正确和不当使用提供了有价值的见解。
  • +
  • 能够检测OpenStack服务器上的负载还可以通过引入其他服务器进行负载平衡来做出响应,以确保高可用性。
  • +
  • 其他可操作的事件包括网络网桥关闭、计算节点上的 IP 表被刷新,以及随之而来的对实例的访问丢失,导致客户不满意。
  • +
  • 为了降低在身份服务中删除用户、租户或域时孤立实例的安全风险,我们讨论了在系统中生成通知,并让 OpenStack 组件适当地响应这些事件,例如终止实例、断开连接的卷、回收 CPU 和存储资源等。
  • +
+

云将托管大量虚拟实例,对这些实例的监控超出了硬件监控和可能仅包含 CRUD 事件的日志文件的范畴。

+

安全监控控制(如入侵检测软件、防病毒软件以及间谍软件检测和删除实用程序)可以生成日志,显示攻击或入侵发生的时间和方式。在云计算机上部署这些工具可提供价值和保护。云用户,即在云上运行实例的用户,可能也希望在其实例上运行此类工具。

+

参考书目

+

Siwczak, Piotr,在 OpenStack 云中进行监控的一些实际注意事项。2012.

+

blog.sflow.com, sflow:主机 sFlow 分布式代理。2012.

+

blog.sflow.com,sflow:LAN 和 WAN。2009.

+

blog.sflow.com、sflow:快速检测大流量 sFlow 与 NetFlow/IPFIX。2013.

+

合规

+

OpenStack 部署可能需要出于多种目的进行合规性活动,例如法规和法律要求、客户需求、隐私注意事项和安全最佳实践。合规功能对企业及其客户很重要。合规意味着遵守法规、规范、标准和法律。它还用于描述有关评估、审核和认证的组织状态。如果操作得当,合规性可以统一和加强本指南中讨论的其他安全主题。

+

本章有几个目标:

+
    +
  • 查看常见的安全原则。
  • +
  • 讨论常见的控制框架和认证资源,以实现行业认证或监管机构认证。
  • +
  • 在评估 OpenStack 部署时,可作为审计人员的参考。
  • +
  • 介绍特定于 OpenStack 和云环境的隐私注意事项。
  • +
  • 合规性概述
  • +
  • 安全原则
  • +
  • 常见控制框架
  • +
  • 审核参考
  • +
  • 了解审核流程
  • +
  • 确定审计范围
  • +
  • 审计阶段
  • +
  • 内部审计
  • +
  • 准备外部审计
  • +
  • 外部审计
  • +
  • 合规性维护
  • +
  • 合规活动
  • +
  • 信息安全管理系统(ISMS)
  • +
  • 风险评估
  • +
  • 访问和日志审查
  • +
  • 备份和灾难恢复
  • +
  • 安全培训
  • +
  • 安全审查
  • +
  • 漏洞管理
  • +
  • 数据分类
  • +
  • 异常过程
  • +
  • 认证和合规声明
  • +
  • 商业标准
  • +
  • 政府标准
  • +
  • 隐私
  • +
+

合规性概述

+

安全原则

+

行业标准安全原则为合规性认证和证明提供了基准。如果在整个 OpenStack 部署过程中考虑和引用这些原则,则可以简化认证活动。

+
分层防御
+

确定云架构中存在风险的位置,并应用控制措施来降低风险。在重大关注领域,分层防御提供多种互补控制,将风险管理到可接受的水平。例如,为了确保云租户之间的充分隔离,我们建议强化 QEMU,使用支持 SELinux 的虚拟机管理程序,实施强制访问控制策略,并减少整体攻击面。基本原则是用多层防御来强化关注区域,这样,如果任何一层受到损害,其他层将存在以提供保护并最大限度地减少暴露。

+
安全失败
+

在发生故障的情况下,系统应配置为在关闭的安全状态中失败。例如,如果TLS证书验证未通过,即CNAME与服务器的DNS名称不匹配,应通过切断网络连接来安全失败。在这种情况下,软件通常会以开放方式失败,允许连接在没有CNAME匹配的情况下继续进行,这样不够安全,也不建议。

+
最小权限
+

仅授予用户和系统服务的最低访问级别。这种访问基于角色、职责和工作职能。这种最小特权安全原则已写入多个国际政府安全策略中,例如美国境内的 NIST 800-53 第 AC-6 节。

+
分隔
+

系统应以这样一种方式隔离,即如果一台计算机或系统级服务受到损害,其他系统的安全性将保持不变。实际上,SELinux 的启用和正确使用有助于实现这一目标。

+
促进隐私
+

应尽量减少可以收集的有关系统及其用户的信息量。

+
日志记录能力
+

实施适当的日志记录以监控未经授权的使用、事件响应和取证。我们强烈建议选定的审计子系统通过通用标准认证,该标准在大多数国家/地区提供不可证明的事件记录。

+

常用控制框架

+

以下是组织可用于构建其安全控制的控制框架列表。

+

云安全联盟 (CSA) 通用控制矩阵 (CCM)

+

CSA CCM 专门用于提供基本的安全原则,以指导云供应商并帮助潜在的云客户评估云提供商的整体安全风险。CSA CCM 提供了一个跨 16 个安全域保持一致的控制框架。云控制矩阵的基础在于其与其他行业标准、法规和控制框架的定制关系,例如:ISO 27001:2013、COBIT 5.0、PCI:DSS v3、AICPA 2014 信任服务原则和标准,并增强了服务组织控制报告证明的内部控制方向。

+

CSA CCM 通过减少云中的安全威胁和漏洞来加强现有的信息安全控制环境,提供标准化的安全和运营风险管理,并寻求规范化安全期望、云分类和术语以及在云中实施的安全措施。

+

ISO 27001/2:2013 ISO 27001/2:2013 认证

+

ISO 27001 信息安全标准和认证多年来一直用于评估和区分组织是否符合信息安全最佳实践。该标准由两部分组成:定义信息安全管理系统 (ISMS) 的强制性条款和包含按领域组织的控制列表的附录 A。

+

信息安全管理系统通过应用风险管理流程来保持信息的机密性、完整性和可用性,并使相关方相信风险得到充分管理。

+

可信安全原则

+

信托服务是一套基于一套核心原则和标准的专业认证和咨询服务,用于解决 IT 系统和隐私计划的风险和机遇。通常称为 SOC 审计,这些原则定义了要求是什么,组织有责任定义满足要求的控制措施。

+

审计参考

+

OpenStack在许多方面都是创新的,但是用于审计OpenStack部署的过程相当普遍。审核员将根据两个标准评估流程:控制是否有效设计以及控制是否有效运行。了解审计师如何评估控制措施是否有效设计和运行,将在“了解审计过程”一节中讨论。

+

用于审核和评估云部署的最常见框架包括前面提到的 ISO 27001/2 信息安全标准、ISACA 的信息和相关技术控制目标 (COBIT) 框架、特雷德韦委员会赞助组织委员会 (COSO) 和信息技术基础设施库 (ITIL)。审计通常包括一个或多个这些框架中的重点领域。幸运的是,这些框架之间有很多重叠,因此采用框架的组织将在审计时处于有利地位。

+

了解审核流程

+

信息系统安全合规性依赖于两个基本流程的完成:

+

安全控制的实施和操作

+

使信息系统与范围内的标准和法规保持一致涉及内部任务,这些任务必须在正式评估之前进行。审核员可能会参与此状态,以进行差距分析,提供指导,并增加成功认证的可能性。

+

独立验证和确认

+

在许多信息系统获得认证状态之前,需要向中立的第三方证明系统安全控制已实施并有效运行,符合范围内的标准和法规。许多认证需要定期审核,以确保持续认证,这被认为是总体持续监控实践的一部分。

+

确定审计范围

+

确定审计范围,特别是需要哪些控制措施,以及如何设计或修改OpenStack部署以满足这些控制措施,应该是最初的规划步骤。

+

在出于合规性目的确定 OpenStack 部署范围时,应优先考虑对敏感服务的控制,例如命令和控制功能以及基本虚拟化技术。这些设施的妥协可能会影响整个 OpenStack 环境。

+

缩小范围有助于确保 OpenStack 架构师建立针对特定部署量身定制的高质量安全控制,但最重要的是确保这些实践不会遗漏安全强化中的区域或功能。一个常见的例子是PCI-DSS准则,其中与支付相关的基础设施可能会受到安全问题的审查,但支持服务被忽视,并且容易受到攻击。

+

在解决合规性问题时,您可以通过确定适用于多个认证的常见领域和标准来提高效率并减少工作量。本书中讨论的许多审计原则和准则将有助于确定这些控制措施,此外,一些外部实体提供了全面的清单。以下是一些示例:

+

云安全联盟云控制矩阵 (CCM) 可帮助云提供商和消费者评估云提供商的整体安全性。CSA CMM 提供了一个控制框架,该框架映射到许多行业公认的标准和法规,包括 ISO 27001/2、ISACA、COBIT、PCI、NIST、Jericho Forum 和 NERC CIP。

+

《SCAP 安全指南》是另一个有用的参考。这仍然是一个新兴的来源,但我们预计这将发展成为一个工具,其控件映射更侧重于美国联邦政府的认证和建议。例如,SCAP 安全指南目前包含安全技术实施指南 (STIG) 和 NIST-800-53 的一些映射。

+

这些控制映射将有助于识别跨认证的通用控制标准,并为审核员和被审核方提供对特定合规性认证和证明的控制集中问题区域的可见性。

+

审计的阶段

+

审计有四个不同的阶段,尽管大多数利益相关者和控制所有者只会参与一两个阶段。四个阶段是规划、实地考察、报告和总结。下面将讨论这些阶段中的每一个。

+

规划阶段通常在实地工作开始前两周到六个月进行。在此阶段,将讨论并最终确定时间范围、时间表、要评估的控制措施和控制所有者等审计项目。对资源可用性、公正性和成本的担忧也得到了解决。

+

实地考察阶段是审计中最明显的部分。这是审计员在现场的地方,与控制所有者面谈,记录现有的控制措施,并确定任何问题。需要注意的是,审计师将使用两部分流程来评估现有的控制措施。第一部分是评估控制的设计有效性。在这里,审计员将评估控制是否能够有效地预防或检测和纠正弱点和缺陷。控件必须通过此测试才能在第二阶段进行评估。这是因为对于设计无效的控件,没有必要考虑它是否有效运行。第二部分是运营效率。操作有效性测试将确定如何应用控制措施,应用控制措施的一致性以及由谁或以何种方式应用控制措施。一项控制可能依赖于其他控制(间接控制),如果它们依赖于其他控制,则审计师可能需要额外的证据来证明这些间接控制的运作有效性,以确定控制的整体运作有效性。

+

在报告阶段,管理层将对在实地工作阶段发现的任何问题进行验证。出于后勤目的,一些活动(例如问题验证)可能会在实地工作阶段执行。管理层还需要提供补救计划来解决问题,并确保它们不会再次发生。将向利益攸关方和管理层分发一份总体报告草稿,供其审查。商定的修改被纳入,更新后的草案将送交高级管理层审查和批准。一旦高级管理层批准报告,该报告就会定稿并分发给执行管理层。任何问题都会输入到组织使用的问题跟踪或风险跟踪机制中。

+

总结阶段是审计正式终止的阶段。此时,管理层将开始整改活动。应使用相应的流程和通知,确保任何与审计相关的信息都被移至安全存储库。

+

内部审计

+

部署云后,就该进行内部审计了。现在是时候将上面确定的控件与云中使用的设计、功能和部署策略进行比较了。目标是了解每个控件的处理方式以及存在差距的位置。记录所有发现以备将来参考。

+

在审计OpenStack云时,了解OpenStack架构固有的多租户环境是很重要的。需要关注的一些关键领域包括数据处置、虚拟机管理程序安全性、节点强化和身份验证机制。

+

准备外部审计

+

一旦内部审计结果看起来不错,就该为外部审计做准备了。在此阶段需要采取几项关键行动,这些行动概述如下:

+
    +
  • 保持内部审计的良好记录。这些将在外部审计期间证明很有用,因此您可以准备好回答有关将合规性控制映射到特定部署的问题。
  • +
  • 部署自动化测试工具,确保云长期保持合规。
  • +
  • 选择审计员。
  • +
+

选择审计师可能具有挑战性。理想情况下,您正在寻找具有云合规性审核经验的人。OpenStack经验是另一大优势。通常,最好咨询经历过此过程的人进行转诊。成本可能会因参与范围和所考虑的审计公司而有很大差异。

+

外部审计

+

这是正式的审计过程。审计员将测试特定认证范围内的安全控制措施,并要求提供证据要求,以证明这些控制措施在审计窗口内也已到位(例如,SOC 2 审计通常在 6-12 个月内评估安全控制措施)。任何控制失败都会被记录下来,并将记录在外部审计师的最终报告中。根据 OpenStack 部署的类型,客户可能会查看这些报告,因此避免控制失败非常重要。这就是为什么审计准备如此重要的原因。

+

合规性维护

+

该过程不会因单一的外部审计而结束。大多数认证都需要持续的合规活动,这意味着要定期重复审核过程。我们建议将自动合规性验证工具集成到云中,以确保其始终合规。除了其他安全监控工具之外,还应该这样做。请记住,目标既是安全性,也是合规性。如果在上述任何一项方面都失败,将使未来的审计变得非常复杂。

+

合规活动

+

有许多标准活动将极大地帮助合规过程。本章概述了一些最常见的合规性活动。这些并不是OpenStack所特有的,但是本书中提供了相关章节的参考资料,作为有用的上下文。

+

信息安全管理系统 (ISMS)

+

信息安全管理系统 (ISMS) 是组织创建和维护的一套全面的策略和流程,用于管理信息资产的风险。云部署最常见的 ISMS 是 ISO/IEC 27001/2,它为安全控制和实践奠定了坚实的基础,以实现更严格的合规性认证。该标准于 2013 年进行了更新,以反映云服务的日益使用,并更加强调衡量和评估组织的 ISMS 性能。

+

风险评估

+

风险评估框架可识别组织或服务中的风险,并指定这些风险的所有权,以及实施和缓解策略。风险适用于服务的所有领域,从技术控制到环境灾难场景和人为因素。例如,恶意内部人员。可以使用多种机制对风险进行评级。例如,可能性与影响。OpenStack 部署风险评估可以包括控制差距。

+

访问和日志审查

+

需要定期访问和日志审查,以确保服务部署中的身份验证、授权和问责制。有关这些主题的 OpenStack 的具体指南在监控和日志记录中进行了深入讨论。

+

OpenStack Identity 服务支持云审计数据联合 (CADF) 通知,提供审计数据以符合安全性、操作和业务流程。有关更多信息,请参阅 Keystone 开发人员文档。

+

备份和灾难恢复

+

灾难恢复 (DR) 和业务连续性规划 (BCP) 计划是 ISMS 和合规性活动的常见要求。这些计划必须定期测试并记录在案。在 OpenStack 中,关键区域位于管理安全域中,以及任何可以识别单点故障 (SPOF) 的地方。

+

安全培训

+

针对特定角色的年度安全培训是几乎所有合规性认证和证明的强制性要求。为了优化安全培训的有效性,一种常见的方法是提供特定于角色的培训,例如向开发人员、操作人员和非技术人员提供培训。基于此强化指南的其他云安全或 OpenStack 安全培训将是理想的选择。

+

安全审查

+

由于OpenStack是一个流行的开源项目,因此许多代码库和架构已经过个人贡献者、组织和企业的审查。从安全角度来看,这可能是有利的,但是对于服务提供商来说,安全审查的需求仍然是一个关键的考虑因素,因为部署各不相同,而且安全性并不总是贡献者的主要关注点。全面的安全审查过程可能包括架构审查、威胁建模、源代码分析和渗透测试。有许多用于进行安全审查的技术和建议,可以在公开发布中找到。一个经过充分测试的例子是 Microsoft SDL,它是作为 Microsoft 可信计算计划的一部分创建的。

+

漏洞管理

+

安全更新对于任何 IaaS 部署(无论是私有部署还是公共部署)都至关重要。易受攻击的系统扩大了攻击面,是攻击者的明显目标。常见的扫描技术和漏洞通知服务可以帮助缓解这种威胁。重要的是,扫描要经过身份验证,并且缓解策略要超越简单的外围强化。OpenStack 等多租户架构特别容易受到虚拟机管理程序漏洞的影响,因此这是漏洞管理系统的关键部分。

+

数据分类

+

数据分类定义了一种对信息进行分类和处理的方法,通常用于保护客户信息免遭意外或故意盗窃、丢失或不当披露。最常见的情况是,这涉及将信息分类为敏感或非敏感信息,或个人身份信息 (PII)。根据部署的上下文,可以使用各种其他分类标准(政府、医疗保健)。基本原则是明确定义和使用数据分类。最常见的保护机制包括行业标准加密技术。

+

异常过程

+

异常过程是 ISMS 的重要组成部分。当某些操作不符合组织定义的安全策略时,必须记录这些操作。需要包括适当的理由、描述和缓解细节,并由有关当局签署。OpenStack 默认配置在满足各种合规性标准方面可能会有所不同,应记录不符合合规性要求的区域,并考虑潜在的修复程序以对社区做出贡献。

+

认证和合规声明

+

合规性和安全性不是排他性的,必须一起解决。如果不进行安全强化,OpenStack 部署不太可能满足合规性要求。下面的列表提供了 OpenStack 架构师的基础知识和指导,以实现对商业和政府认证和标准的合规性。

+

商业标准

+

对于OpenStack的商业部署,我们建议将SOC 1/2与ISO 2700 1/2相结合,作为OpenStack认证活动的起点。这些认证规定的所需安全活动有助于为安全最佳实践和通用控制标准奠定基础,从而有助于实现更严格的合规性活动,包括政府证明和认证。

+

完成这些初始认证后,其余认证将更加特定于部署。例如,处理信用卡交易的云需要 PCI-DSS,存储医疗保健信息的云需要 HIPAA,联邦政府内部的云可能需要 FedRAMP/FISMA 和 ITAR 认证。

+
SOC 1 (SSAE 16) / ISAE 3402
+

服务组织控制 (SOC) 标准由美国注册会计师协会 (AICPA) 定义。SOC 控制评估服务提供商的相关财务报表和断言,例如是否遵守《萨班斯-奥克斯利法案》。 SOC 1 取代了审计准则第 70 号声明 (SAS 70) II 类报告。这些控制措施通常包括范围内的物理数据中心。

+

有两种类型的 SOC 1 报告:

+
    +
  • 类型 1 - 报告管理层对服务组织系统的描述的公允性,以及控制设计是否适合实现截至指定日期的描述中包含的相关控制目标。
  • +
  • 类型 2 - 报告管理层对服务组织系统的描述的公允性,以及控制措施的设计和运营有效性是否适合在特定时期内实现描述中包含的相关控制目标
  • +
+

有关详细信息,请参阅AICPA关于与用户实体财务报告内部控制相关的服务组织控制的报告。

+
SOC 2 函数
+

服务组织控制 (SOC) 2 是对影响服务组织用于处理用户数据的系统的安全性、可用性和处理完整性以及这些系统处理的信息的机密性和隐私性的控制的自我证明。用户示例包括负责服务组织治理的人员、服务组织的客户、监管机构、业务合作伙伴、供应商以及了解服务组织及其控制措施的其他人员。

+

有两种类型的 SOC 2 报告:

+
    +
  • 类型 1 - 报告管理层对服务组织系统的描述的公允性,以及控制设计是否适合实现截至指定日期的描述中包含的相关控制目标。
  • +
  • 类型 2 - 报告管理层对服务组织系统的描述的公允性,以及控制的设计和运营有效性的适用性,以在特定时期内实现描述中包含的相关控制目标。
  • +
+

有关详细信息,请参阅 AICPA 关于服务组织中与安全性、可用性、处理完整性、机密性或隐私相关的控制的报告。

+
SOC 3 函数
+

服务组织控制 (SOC) 3 是服务组织的信任服务报告。这些报告旨在满足以下用户的需求:这些用户希望确保服务组织中与安全性、可用性、处理完整性、机密性或隐私相关的控制措施,但没有有效使用 SOC 2 报告所需的知识。这些报告是根据 AICPA/加拿大特许会计师协会 (CICA) 关于安全性、可用性、处理完整性、机密性和隐私的信托服务原则、标准和插图编写的。由于 SOC 3 报告是通用报告,因此可以作为印章自由分发或发布在网站上。

+

有关详细信息,请参阅服务组织的 AICPA 信任服务报告。

+
ISO 27001/2 认证
+

ISO/IEC 27001/2 标准取代了 BS7799-2,是信息安全管理体系 (ISMS) 的规范。ISMS 是组织为管理信息资产风险而创建和维护的一整套策略和过程。这些风险基于用户信息的机密性、完整性和可用性 (CIA)。中央情报局的安全三合会已被用作本书大部分章节的基础。

+

有关详细信息,请参阅 ISO 27001。

+
HIPAA / HITECH
+

健康保险流通与责任法案 (HIPAA) 是美国国会的一项法案,用于管理患者健康记录的收集、存储、使用和销毁。该法案规定,受保护的健康信息(PHI)必须对未经授权的人员“不可用、不可读或无法破译”,并且应解决“静态”和“动态”数据的加密问题。

+

HIPAA 不是认证,而是保护医疗保健数据的指南。与 PCI-DSS 类似,PCI 和 HIPAA 最重要的关注点是不发生信用卡信息和健康数据的泄露。一旦发生违规事件,将会仔细审查云提供商是否符合 PCI 和 HIPAA 控制措施。如果证明合规,提供商需要立即实施补救控制、承担违规通知责任,并为额外的合规活动投入大量支出;如果不合规,云提供商可能会面临现场审计团队、罚款、潜在的商户 ID (PCI) 丢失以及巨大的声誉影响。

+

拥有 PHI 的用户或组织必须支持 HIPAA 要求,并且是 HIPAA 涵盖的实体。如果实体打算使用某项服务,或者在本例中,使用可能使用、存储或访问该 PHI 的 OpenStack 云,则必须签署业务伙伴协议 (BAA)。BAA 是 HIPAA 涵盖的实体与 OpenStack 服务提供商之间的合同,要求提供商根据 HIPAA 要求处理该 PHI。如果服务提供商不处理 PHI,例如安全控制和强化,那么他们将受到 HIPAA 的罚款和处罚。

+

OpenStack 架构师解释和响应 HIPAA 声明,数据加密仍然是核心实践。目前,这将要求使用行业标准加密算法对 OpenStack 部署中包含的任何受保护的健康信息进行加密。未来潜在的OpenStack项目,如对象加密,将促进HIPAA准则的遵守。

+

有关详细信息,请参阅《健康保险流通与责任法案》。

+
PCI-DSS
+

支付卡行业数据安全标准 (PCI DSS) 由支付卡行业标准委员会定义,旨在加强对持卡人数据的控制,以减少信用卡欺诈。年度合规性验证由外部合格安全评估机构 (QSA) 进行评估,该评估机构会根据持卡人的交易量创建合规报告 (ROC),或通过自我评估问卷 (SAQ) 进行评估。

+

存储、处理或传输支付卡详细信息的 OpenStack 部署在 PCI-DSS 的范围内。所有未从处理支付数据的系统或网络中正确分割的 OpenStack 组件都属于 PCI-DSS 的准则。PCI-DSS 上下文中的分段不支持多租户,而是物理分离(主机/网络)。

+

有关详细信息,请参阅 PCI 安全标准。

+

政府标准

+
FedRAMP
+

“联邦风险和授权管理计划 (FedRAMP) 是一项政府范围的计划,它为云产品和服务的安全评估、授权和持续监控提供了一种标准化方法”。NIST 800-53 是 FISMA 和 FedRAMP 的基础,后者要求专门选择安全控制以在云环境中提供保护。由于安全控制的特殊性以及满足政府标准所需的文档量,FedRAMP 可能非常密集。

+

有关详细信息,请参阅 FedRAMP。

+
ITAR
+

《国际武器贸易条例》(ITAR) 是一套美国政府法规,用于控制美国军需品清单 (USML) 和相关技术数据中与国防相关的物品和服务的进出口。ITAR通常被云提供商视为“操作一致性”,而不是正式认证。这通常涉及按照 FISMA 要求,遵循基于 NIST 800-53 框架的做法实施隔离的云环境,并辅以限制仅访问“美国人”和背景筛选的额外控制措施。

+

有关详细信息,请参阅《国际武器贸易条例》(ITAR)。

+
FISMA
+

《联邦信息安全管理法》要求政府机构制定一项全面的计划,以实施众多政府安全标准,并在 2002 年的《电子政务法》中颁布。FISMA概述了一个过程,该过程利用多个NIST出版物,准备了一个信息系统来存储和处理政府数据。

+

此过程分为三个主要类别:

+

系统分类:

+

信息系统将收到联邦信息处理标准出版物 199 (FIPS 199) 中定义的安全类别。这些类别反映了系统入侵的潜在影响。

+

控件选择:

+

根据 FIPS 199 中定义的系统安全类别,组织利用 FIPS 200 来确定信息系统的特定安全控制要求。例如,如果系统被归类为“中等”,则可能会引入强制要求“安全密码”的要求。

+

控制定制:

+

一旦确定了系统安全控制措施,OpenStack 架构师将利用 NIST 800-53 来提取量身定制的控制措施选择。例如,规范什么是“安全密码”。

+

隐私

+

隐私是合规计划中越来越重要的元素。客户对企业的要求越来越高,他们越来越有兴趣从隐私的角度了解他们的数据是如何被处理的。

+

OpenStack 部署可能需要证明符合组织自身的隐私政策、美国-欧盟安全港框架(Safe Harbor Framework)、ISO/IEC 29100:2011 隐私框架或其他特定于隐私的准则。在美国,美国注册会计师协会(AICPA)已经定义了 10 个隐私重点领域,在商业环境中部署 OpenStack 时可能希望证明符合其中的部分或全部原则。

+

为了帮助 OpenStack 架构师保护个人数据,我们建议 OpenStack 架构师查看 NIST 出版物 800-122,标题为“保护个人身份信息 (PII) 机密性指南”。本指南逐步完成保护过程:

+
+

"...由机构维护的有关个人的任何信息,包括 (1) 可用于区分或追踪个人身份的任何信息,例如姓名、社会安全号码、出生日期和地点、母亲的婚前姓氏或生物识别记录;(2)与个人有联系或可联系的任何其他信息,如医疗、教育、财务和就业信息......”

+
+

全面的隐私管理需要大量的准备、思考和投资。在构建全球OpenStack云时,还引入了额外的复杂性,例如,在美国和更严格的欧盟隐私法之间的差异中导航。此外,在处理敏感的 PII 时需要格外小心,其中可能包括信用卡号或医疗记录等信息。这些敏感数据不仅受隐私法的约束,还受监管和政府法规的约束。通过遵循既定的最佳实践,包括政府发布的最佳实践,可以为OpenStack部署创建和实践一个全面的隐私管理政策。

+

安全审查

+

OpenStack社区安全审查的目标是识别OpenStack项目设计或实现中的弱点。虽然这些弱点很少见,但可能会对OpenStack部署的安全性产生灾难性的影响,因此应该努力将这些缺陷在已发布项目中的可能性降到最低。在安全审查过程中,应了解并记录以下内容:

+
    +
  • 系统的所有入口点
  • +
  • 风险资产
  • +
  • 数据持久化的位置
  • +
  • 数据如何在系统组件之间传输
  • +
  • 数据格式和转换
  • +
  • 项目的外部依赖项
  • +
  • 一组商定的调查结果和/或缺陷
  • +
  • 项目如何与外部依赖项交互
  • +
+

对 OpenStack 可交付存储库执行安全审查的一个常见原因是协助漏洞管理团队 (VMT) 监督。OpenStack VMT 列出了受监督的存储库,其中漏洞的报告接收和披露由 VMT 管理。虽然不是严格的要求,但某种形式的安全审查、审计或威胁分析可以帮助每个人更轻松地查明系统更容易出现漏洞的区域,并在它们成为用户问题之前解决它们。

+

OpenStack VMT 建议,对项目推荐的部署进行架构审查是一种适当的安全审查形式,在审查需求与 OpenStack 规模的项目资源需求之间取得平衡。安全架构审查通常也称为威胁分析、安全分析或威胁建模。在OpenStack安全审查的背景下,这些术语是架构安全审查的同义词,它可以识别项目或参考架构设计中的缺陷,并可能导致进一步的调查工作来验证部分实现。

+

对于新项目以及第三方未进行安全审查或无法共享其结果的情况,预计安全审查将是正常途径。需要安全审查的项目的信息将在即将到来的安全审查过程中提供。

+

如果第三方已经执行了安全审查,或者项目更喜欢使用第三方来执行审查,则在即将到来的第三方安全审查过程中将提供有关如何获取该第三方审查的输出并将其提交验证的信息。

+

无论哪种情况,对文档工件的要求都是相似的 - 项目必须提供最佳实践部署的架构图。虽然强烈建议作为所有团队开发周期的一部分,但漏洞扫描和静态分析扫描不足以作为第三方审查的证据。

+
    +
  • 架构页面指南
  • +
  • 标题、版本信息、联系方式
  • +
  • 项目描述和目的
  • +
  • 主要用户和用例
  • +
  • 外部依赖关系和关联的安全假设
  • +
  • 组件
  • +
  • 服务架构图
  • +
  • 数据资产
  • +
  • 数据资产影响分析
  • +
  • 接口
  • +
  • 资源
  • +
+

架构页面指南

+

架构页面的目的是记录服务或项目的体系结构、用途和安全控制。它应该记录该项目的最佳实践部署。

+

架构页面有一些关键部分,下面将更详细地解释这些部分:

+
    +
  • 标题、版本信息、联系方式
  • +
  • 项目描述和目的
  • +
  • 主要用户和用例
  • +
  • 外部依赖关系和关联的安全假设
  • +
  • 组件
  • +
  • 架构图
  • +
  • 数据资产
  • +
  • 数据资产影响分析
  • +
  • 接口
  • +
+

标题、版本信息、联系方式

+

本部分为架构页面添加标题,提供评审状态(草稿、准备评审、已审核),并捕获项目的发布和版本(如果相关)。它还记录了项目的 PTL、负责生成架构页面、图表和完成评审的项目架构师(这可能是也可能不是 PTL)和安全评审员。

+

项目描述和目的

+

本节将包含项目的简要说明,以向第三方介绍该服务。这应该是一两个段落,可以从 wiki 或其他文档中剪切/粘贴。包括相关演示文稿和更多文档的链接(如果有)。

+

例如:

+

“Anchor 是一种公钥基础设施 (PKI) 服务,它使用自动证书请求验证来自动做出颁发决策。证书的颁发时间很短(通常为 12-48 小时),以避免与 CRL 和 OCSP 相关的有缺陷的吊销问题。”

+

主要用户和用例

+

已实现架构的预期主要用户及其用例的列表。“用户”可以是 OpenStack 中的参与者或其他服务。

+

例如:

+
    +
  1. 最终用户将使用系统来存储敏感数据,例如密码、加密密钥等。
  2. 云管理员将使用管理 API 来管理资源配额。
+

外部依赖和相关的安全假设

+

外部依赖项是服务操作所需的不受控制的项,如果它们受到威胁或变得不可用,可能会影响服务。这些项目通常不在开发人员的控制范围内,但在部署者的控制范围内,或者它们可能由第三方操作。设备应被视为外部依赖项。

+

例如:

+
    +
  • Nova 计算服务依赖于外部身份验证和授权服务。在典型部署中,此依赖关系将由 keystone 服务实现。
  • +
  • Barbican 依赖于硬件安全模块 (HSM) 设备的使用。
  • +
+

组件

+

已部署项目的组件列表,不包括外部实体。每个组件都应命名并简要描述其用途,并使用使用的主要技术(例如 Python、MySQL、RabbitMQ)进行标记。

+

例如:

+
    +
  • keystone 监听器进程 (Python):使用 keystone 服务发布的 keystone 事件的 Python 进程。
  • +
  • 数据库 (MySQL):MySQL 数据库,用于存储与其托管实体及其元数据相关的巴比肯状态数据。
  • +
+

服务架构图

+

架构图显示了系统的逻辑布局,以便安全审阅者可以与项目团队一起逐步完成架构。它是一个逻辑图,显示组件如何交互、它们如何连接到外部实体以及通信跨越信任边界的位置。有关架构图的更多信息,包括符号键,将在即将发布的架构图指南中给出。可以在任何可以生成使用键中符号的图表的工具中绘制图表,但强烈建议 draw.io。

+

此示例显示了 barbican 架构图:

+

../_images/security_review_barbican_architecture.png

+

数据资产

+

数据资产是攻击者可能针对的用户数据、高价值数据、配置项、授权令牌或其他项。数据项集因项目而异,但一般而言,应将其视为对项目预期操作至关重要的类别。所需的详细程度在某种程度上取决于上下文。数据通常可以分组,例如“用户数据”、“机密数据”或“配置文件”,但也可以是单数,例如“管理员身份令牌”或“用户身份令牌”或“数据库配置文件”。

+

数据资产应包括该资产持久化位置的声明。

+

例如:

+
    +
  • 机密数据 - 密码、加密密钥、RSA 密钥 - 保留在数据库 [PKCS#11] 或 HSM [KMIP] 或 [KMIP、Dogtag] 中
  • +
  • RBAC 规则集 - 保留在 policy.json 中
  • +
  • RabbitMQ 凭证 - 保留在 barbican.conf 中
  • +
  • keystone 事件队列凭据 - 保留在 barbican.conf 中
  • +
  • 中间件配置 - 保留在粘贴 .ini 中
  • +
+

数据资产影响分析

+

数据资产影响分析分解了每个数据资产的机密性、完整性或可用性损失的影响。项目架构师应该尝试完成这项工作,因为他们最详细地了解他们的项目,但 OpenStack 安全项目 (OSSP) 将在安全审查期间与项目一起解决这 个问题,并可能添加或更新影响细节。

+

例如:

+
    +
  • RabbitMQ 凭据:
  • +
  • 完整性故障影响:barbican 和 Workers 无法再访问队列。拒绝服务。
  • +
  • 机密性故障影响:攻击者可以将新任务添加到队列中,这些任务将由工作人员执行。攻击者可能耗尽用户配额。拒绝服务。用户将无法创建真正的机密。
  • +
  • 可用性故障影响:如果没有对队列的访问权限,barbican 无法再创建新密钥。
  • +
  • Keystone 凭据:
  • +
  • 完整性故障影响:barbican 将无法验证用户凭据并失败。拒绝服务。
  • +
  • 机密性故障影响:恶意用户可能会滥用其他 OpenStack 服务(取决于 keystone 角色配置),但 barbican 不受影响。如果用于令牌验证的服务帐户也具有 barbican 管理员权限,则恶意用户可以操纵 barbican 管理员功能。
  • +
  • 可用性故障影响:barbican 将无法验证用户凭据并失败。拒绝服务。
  • +
+

接口

+

接口列表捕获了审查范围内的接口。这包括架构图上跨越信任边界或不使用行业标准加密协议(如 TLS 或 SSH)的模块之间的连接。对于每个接口,将捕获以下信息:

+
+
    +
  • 使用的协议
  • +
  • 通过该接口传输的任何数据资产
  • +
  • 有关用于连接到该接口的身份验证的信息
  • +
  • 接口用途的简要说明。
  • +
+
+

记录格式如下:

+

从>到[传输方式]:

+
    +
  • 动态资产
  • +
  • 身份认证?
  • +
  • 描述
  • +
+

例如:

+
    +
  1. 客户端 > API 进程 [TLS]:
  2. 传输中的资产:用户的 keystone 凭据、明文机密、HTTP 谓词、机密 ID、路径
  3. 对 keystone 凭据或明文机密的访问被视为系统的完全安全失效 - 此接口必须具有强大的机密性和完整性控制。
+

资源

+

列出与项目相关的资源,例如描述其部署和用法的 Wiki 页面,以及指向代码存储库和相关演示文稿的链接。

+

安全检查表

+
    +
  • 身份服务检查表
  • +
  • 仪表板检查表
  • +
  • 计算服务检查表
  • +
  • 块存储服务检查表
  • +
  • 共享文件系统服务检查表
  • +
  • 网络服务检查表
  • +
+

附录

+
    +
  • 社区支持
  • +
  • 词汇表
  • +
+

社区支持

+

以下资源可帮助您运行和使用 OpenStack。OpenStack社区不断改进和增加OpenStack的主要功能,但如果您有任何问题,请随时提问。使用以下资源获取 OpenStack 支持并对安装进行故障排除。

+

文档

+

有关可用的 OpenStack 文档,请参阅 docs.openstack.org。

+

以下指南解释了如何安装概念验证 OpenStack 云及其相关组件:

+
    +
  • Rocky 安装指南
  • +
+

以下书籍介绍了如何配置和运行 OpenStack 云:

+
    +
  • 架构设计指南
  • +
  • Rocky 管理员指南
  • +
  • Rocky 配置指南
  • +
  • Rocky 网络指南
  • +
  • 高可用性指南
  • +
  • 安全指南
  • +
  • 虚拟机映像指南
  • +
+

以下书籍介绍了如何使用命令行客户端:

+
    +
  • Rocky API 绑定
  • +
+

以下文档提供了 OpenStack API 的参考和指导信息:

+
    +
  • API 文档
  • +
+

以下指南提供了有关如何为 OpenStack 文档做出贡献的信息:

+
    +
  • 文档贡献者指南
  • +
+

OpenStack wiki

+

OpenStack wiki 包含广泛的主题,但有些信息可能很难找到或只有几页深。幸运的是,Wiki 搜索功能使您能够按标题或内容进行搜索。如果您搜索特定信息,例如有关网络或 OpenStack 计算的信息,您可以找到大量相关材料。更多内容一直在添加,因此请务必经常回来查看。您可以在任何 OpenStack wiki 页面的右上角找到搜索框。

+

Launchpad bug 区域

+

OpenStack 社区重视您的设置和测试工作,并希望得到您的反馈。要记录bug,您必须注册一个 Launchpad 帐户。您可以在 Launchpad bug 区域中查看现有bug并报告bug。使用搜索功能确定bug是否已报告或已修复。如果您的bug似乎仍未报告,请填写bug报告。

+

一些提示:

+
    +
  • 给出一个清晰、简洁的总结。
  • +
  • 在描述中提供尽可能多的详细信息。粘贴命令输出或堆栈跟踪、屏幕截图链接以及可能有用的任何其他信息。
  • +
  • 请务必包括您正在使用的软件和软件包版本,尤其是在使用开发分支时(例如 "Kilo release" 与 git commit bc79c3ecc55929bac585d04a03475b72e06a3208)。
  • +
  • 任何特定于部署的信息都很有用,例如您使用的是 Ubuntu 14.04 还是正在执行多节点安装。
  • +
+

以下 Launchpad Bug 区域可用:

+
    +
  • Bugs:OpenStack 块存储 (cinder)
  • +
  • Bugs:OpenStack 计算(nova)
  • +
  • Bugs:OpenStack 仪表板(horizon)
  • +
  • Bugs:OpenStack 身份认证(keystone)
  • +
  • Bugs:OpenStack 镜像服务 (glance)
  • +
  • Bugs:OpenStack 网络(neutron)
  • +
  • Bugs:OpenStack 对象存储 (swift)
  • +
  • Bugs:应用程序目录 (murano)
  • +
  • Bugs:裸机服务(ironic)
  • +
  • Bugs:集群服务(senlin)
  • +
  • Bugs:容器基础架构管理服务(magnum)
  • +
  • Bugs:数据处理服务(sahara)
  • +
  • Bugs:数据库服务 (trove)
  • +
  • Bugs:DNS服务(designate)
  • +
  • Bugs:密钥管理服务(barbican)
  • +
  • Bugs:监控 (monasca)
  • +
  • Bugs:编排 (heat)
  • +
  • Bugs:评级 (cloudkitty)
  • +
  • Bugs:共享文件系统 (manila)
  • +
  • Bugs:遥测(ceilometer)
  • +
  • Bugs:遥测v3 (gnocchi)
  • +
  • Bugs:工作流服务 (mistral)
  • +
  • Bugs:消息传递服务 (zaqar)
  • +
  • Bugs:容器服务 (zun)
  • +
  • Bugs:OpenStack API 文档 (developer.openstack.org)
  • +
  • Bugs:OpenStack 文档 (docs.openstack.org)
  • +
+

文档反馈

+

要提供有关文档的反馈,请加入我们在 OFTC IRC 网络上的 IRC 频道 #openstack-doc ,或在 Launchpad 中报告错误并选择文档所属的特定项目。

+

OpenStack IRC 频道

+

OpenStack 社区位于 OFTC 网络上的 #openstack IRC 频道中。您可以在这里提问、获取即时反馈并解决紧急问题。要安装 IRC 客户端或使用基于浏览器的客户端,请访问 https://webchat.oftc.net/。您还可以使用 Colloquy (Mac OS X)、mIRC (Windows) 或 XChat (Linux)。当您在 IRC 频道中想要共享代码或命令输出时,通常接受的做法是使用 Paste Bin;OpenStack 项目有自己的 Paste 网站。只需将较长的文本或日志粘贴到 Web 表单中,即可获得一个 URL,然后将其粘贴到频道中。OpenStack IRC 频道为 irc.oftc.net 上的 #openstack。您可以在 wiki 的 IRC 页面上找到所有 OpenStack IRC 频道的列表。

+

OpenStack 邮件列表

+

获得答案和见解的一个好方法是将您的问题或有问题的场景发布到 OpenStack 邮件列表中。您可以向可能遇到类似问题的其他人学习和提供帮助。要订阅或查看存档,请访问一般的 OpenStack 邮件列表。如果您对特定项目或开发的其他邮件列表感兴趣,请参阅邮件列表。

+

OpenStack 发行包

+

以下 Linux 发行版为 OpenStack 提供社区支持的软件包:

+
    +
  • CentOS, Fedora, and Red Hat Enterprise Linux: https://www.rdoproject.org/
  • +
  • openSUSE and SUSE Linux Enterprise Server: https://en.opensuse.org/Portal:OpenStack
  • +
  • Ubuntu: https://wiki.ubuntu.com/OpenStack/CloudArchive
  • +
+

词汇表

+

本词汇表提供了一系列术语和定义,用于定义 OpenStack 相关概念的词汇表。

+

要添加到 OpenStack 术语表,请克隆 openstack/openstack-manuals 存储库,并通过 OpenStack 贡献过程更新源文件 doc/common/glossary.rst

+

0-9

+
    +
  • 2023.1 Antelope
  • +
+

OpenStack 第 27 版的代号。此版本是采用基于"年份.年内发布序号"的新版本标识方式后的第一个版本。Antelope(羚羊)是一种敏捷而优雅的动物,也是一种蒸汽机车的类型。

+
    +
  • 2023.2 Bobcat
  • +
+

OpenStack 第 28 版的代号。

+
    +
  • 2024.1 Caracal
  • +
+

OpenStack 第 29 版的代号。

+
    +
  • 6to4
  • +
+

一种允许 IPv6 数据包通过 IPv4 网络传输的机制,提供迁移到 IPv6 的策略。

+

A

+

绝对限制

+

客户机虚拟机的不可逾越限制。 设置包括总 RAM 大小、最大 vCPU 数和最大磁盘大小。

+

访问控制列表(ACL)

+

附加到对象的权限列表。ACL 指定哪些用户或系统进程有权访问对象。它还定义可以对指定对象执行哪些操作。典型 ACL 中的每个条目都指定一个主题和一个操作。例如,文件的 ACL 条目 (Alice, delete) 授予 Alice 删除该文件的权限。

+

访问密钥

+

Amazon EC2 访问密钥的替代术语。请参阅 EC2 访问密钥。

+

账户

+

对象存储中账户的上下文。不要与身份验证服务中的用户帐户混淆,例如 Active Directory、/etc/passwd、OpenLDAP、OpenStack Identity 等。

+

账户审核员

+

通过对后端 SQLite 数据库运行查询,检查指定对象存储帐户中缺少的副本以及不正确或损坏的对象。

+

账户数据库

+

一个 SQLite 数据库,其中包含对象存储帐户和相关元数据,并且帐户服务器可以访问该数据库。

+

账户回收器

+

一个对象存储工作线程,用于扫描和删除帐户数据库,并且帐户服务器已标记为删除。

+

账户服务器

+

列出对象存储中的容器,并将容器信息存储在帐户数据库中。

+

账户服务

+

对象存储组件,提供列表、创建、修改、审计等账号服务。不要与 OpenStack Identity 服务、OpenLDAP 或类似的用户帐户服务混淆。

+

会计

+

计算服务通过事件通知和系统使用情况数据工具提供会计信息。

+

活动目录

+

Microsoft 基于 LDAP 的身份验证和身份服务。在 OpenStack 中受支持。

+

主/主配置

+

在具有主/主配置的高可用性设置中,多个系统一起分担负载,如果其中一个系统发生故障,则负载将分配给其余系统。

+

主/备配置

+

在具有主/备配置的高可用性设置中,系统设置为使其他资源联机以替换那些出现故障的资源。

+

地址池

+

分配给项目的一组固定和/或浮动 IP 地址,可由项目中的 VM 实例使用或分配给项目。

+

地址解析协议 (ARP)

+

将三层IP地址解析为二层链路本地地址的协议。

+

管理员 API

+

授权管理员可访问的 API 调用子集,最终用户或公共 Internet 通常无法访问这些调用。它们可以作为单独的服务 (keystone) 存在,也可以是另一个 API (nova) 的子集。

+

管理员服务器

+

在 Identity 服务的上下文中,提供对管理 API 的访问的工作进程。

+

管理员

+

负责安装、配置和管理 OpenStack 云的人员。

+

高级消息队列协议 (AMQP)

+

OpenStack 组件用于服务内部通信的开放标准消息传递协议,由 RabbitMQ、Qpid 或 ZeroMQ 提供。

+

高级 RISC 机器 (ARM)

+

低功耗 CPU 常见于移动和嵌入式设备中。由 OpenStack 支持。

+

警报

+

计算服务可以通过其通知系统发送警报,该系统包括用于创建自定义通知驱动程序的工具。警报可以发送到并在仪表板上显示。

+

分配

+

从地址池中获取浮动 IP 地址,以便将其与来宾 VM 实例上的固定 IP 相关联的过程。

+

Amazon 内核映像 (AKI)

+

VM 容器格式和磁盘格式。受Image服务支持。

+

Amazon 系统映像 (AMI)

+

VM 容器格式和磁盘格式。受Image服务支持。

+

Amazon Ramdisk 映像 (ARI)

+

VM 容器格式和磁盘格式。受Image服务支持。

+

Anvil

+

将名为 DevStack 的基于 shell 脚本的项目移植到 Python 的项目。

+

AODH

+

OpenStack 遥测服务的一部分;提供报警功能。

+

Apache

+

Apache 软件基金会支持 Apache 开源软件项目的 Apache 社区。这些项目为公共利益提供软件产品。

+

Apache 许可证 2.0

+

所有 OpenStack 核心项目都是根据 Apache License 2.0 许可证的条款提供的。

+

Apache Web 服务器

+

目前在 Internet 上使用的最常用的 Web 服务器软件。

+

API 端点

+

客户端为访问 API 而与之通信的守护程序、工作程序或服务。API 终结点可以提供任意数量的服务,例如身份验证、销售数据、性能指标、计算 VM 命令、人口普查数据等。

+

API 扩展

+

扩展某些 OpenStack 核心 API 的自定义模块。

+

API 扩展插件

+

网络插件或网络 API 扩展的替代术语。

+

API 密钥

+

API 令牌的替代术语。

+

API 服务器

+

运行提供 API 端点的守护程序或工作线程的任何节点。

+

API 令牌

+

传递给 API 请求并由 OpenStack 用于验证客户端是否有权运行请求的操作。

+

API 版本

+

在 OpenStack 中,项目的 API 版本是 URL 的一部分。例如, example.com/nova/v1/foobar .

+

小应用程序

+

可以嵌入到网页中的 Java 程序。

+

应用程序目录服务(murano)

+

提供应用程序目录服务的项目,以便用户可以在管理应用程序生命周期的同时,在应用程序抽象级别上编写和部署复合环境。

+

应用程序编程接口(API)

+

用于访问服务、应用程序或程序的规范集合。包括服务调用、每个调用的必需参数以及预期的返回值。

+

应用服务器

+

一种软件,它使另一种软件在网络上可用。

+

应用服务提供商(ASP)

+

租用专用应用程序的公司,这些应用程序可帮助企业和组织以更低的成本提供附加服务。

+

arptables

+

用于维护 Linux 内核防火墙模块中的地址解析协议数据包过滤规则的工具。在计算中与 iptables、ebtables 和 ip6tables 一起使用,为 VM 提供防火墙服务。

+

关联

+

将计算浮动 IP 地址与固定 IP 地址关联的过程。

+

异步 JavaScript 和 XML (AJAX)

+

一组相互关联的 Web 开发技术,用于在客户端创建异步 Web 应用程序。在地平线中广泛使用。

+

以太网 ATA (AoE)

+

在以太网中建立隧道的磁盘存储协议。

+

附加

+

在网络中将 VIF 或 vNIC 连接到 L2 网络的过程。在计算上下文中,此过程将存储卷连接到实例。

+

附件(网络)

+

接口 ID 与逻辑端口的关联。将接口插入端口。

+

审计

+

通过系统使用情况数据工具在计算中提供。

+

审计员

+

验证对象存储对象、容器和帐户完整性的工作进程。审核员是对象存储帐户审计员、容器审计员和对象审计员的统称。

+

Austin

+

OpenStack 初始版本的代号。首届设计峰会在美国德克萨斯州奥斯汀举行。

+

auth 节点

+

对象存储授权节点的替代术语。

+

身份验证

+

通过私钥、秘密令牌、密码、指纹或类似方法确认用户、进程或客户端确实是他们所说的人的过程。

+

身份验证令牌

+

身份验证后提供给客户端的文本字符串。必须由用户或进程在对 API 端点的后续请求中提供。

+

AuthN

+

提供身份验证服务的标识服务组件。

+

授权

+

验证用户、进程或客户端是否有权执行操作的行为。

+

授权节点

+

提供授权服务的对象存储节点。

+

AuthZ

+

提供高级授权服务的身份组件。

+

自动确认

+

RabbitMQ 中的配置设置,用于启用或禁用消息确认。默认启用。

+

自动声明

+

一个 Compute RabbitMQ 设置,用于确定在程序启动时是否自动创建消息交换。

+

可用区

+

用于容错的隔离区域的 Amazon EC2 概念。不要与 OpenStack Compute 区域或单元混淆。

+

AWS CloudFormation 模板

+

AWS CloudFormation 允许 Amazon Web Services (AWS) 用户创建和管理相关资源的集合。编排服务支持与 CloudFormation 兼容的格式 (CFN)。

+

B

+

后端

+

对用户进行模糊处理的交互和进程,例如计算卷挂载、守护程序向 iSCSI 目标传输数据或对象存储对象完整性检查。

+

后端目录

+

身份服务目录服务用于存储和检索有关客户端可用的 API 端点的信息的存储方法。示例包括 SQL 数据库、LDAP 数据库或 KVS 后端。

+

后端存储

+

用于保存和检索服务信息的持久性数据存储,例如对象存储对象列表、客户机虚拟机的当前状态、用户名列表等。此外,映像服务用于获取和存储 VM 映像的方法。选项包括对象存储、本地挂载的文件系统、RADOS 块设备、VMware 数据存储和 HTTP。

+

备份、恢复和灾难恢复服务(freezer)

+

提供用于备份、还原和恢复文件系统、实例或数据库备份的集成工具的项目。

+

带宽

+

通信资源(如 Internet)使用的可用数据量。表示用于下载内容的数据量或可供下载的数据量。

+

barbican

+

Key Manager 服务的代号。

+

裸机

+

映像服务容器格式,指示 VM 映像不存在容器。

+

裸机服务(ironic)

+

OpenStack 服务,它提供服务和关联的库,能够以安全感知和容错的方式管理和配置物理机。

+

基础映像

+

OpenStack 提供的映像。

+

Bell-LaPadula 模型

+

一种安全模型,侧重于数据机密性和对机密信息的受控访问。该模型将实体分为主体和客体,将主体的许可与客体的分类进行比较,以确定主体是否被授权以特定的访问模式进行访问。许可/分类方案用格(lattice)表示。

+

基准服务(rally)

+

OpenStack项目,为单个OpenStack组件的性能分析和基准测试以及完整的生产OpenStack云部署提供了一个框架。

+

Bexar

+

2011 年 2 月发布的与 OpenStack 相关的项目的分组版本。它仅包括计算 (nova) 和对象存储 (swift)。Bexar 是 OpenStack 第二个版本的代号。设计峰会在美国德克萨斯州圣安东尼奥举行,这里是贝克萨尔县的县城。

+

二进制

+

仅由 1 和 0 组成的信息,这是计算机的语言。

+

+

位是以 2 为基数的个位数(0 或 1)。带宽使用量以每秒位数为单位。

+

每秒比特数 (BPS)

+

通用测量数据从一个地方传输到另一个地方的速度。

+

块设备

+

一种以块的形式移动数据的设备。这些设备节点连接设备,例如硬盘、CD-ROM 驱动器、闪存驱动器和其他可寻址内存区域。

+

区块迁移

+

KVM 使用的一种虚拟机实时迁移方法,用于在用户启动的切换期间将实例从一台主机撤离到另一台主机,停机时间非常短。不需要共享存储。由计算支持。

+

块存储 API

+

单独终结点上的 API,用于为计算 VM 附加、分离和创建块存储。

+

块存储服务(cinder)

+

OpenStack 服务,它实现了服务和库,通过在其他块存储设备之上的抽象和自动化,提供对块存储资源的按需自助访问。

+

BMC(基板管理控制器)

+

IPMI架构中的智能,它是一种专用的微控制器,嵌入在计算机主板上并充当服务器。管理系统管理软件和平台硬件之间的接口。

+

可启动磁盘映像

+

一种 VM 映像类型,以单个可启动文件的形式存在。

+

Bootstrap 协议 (BOOTP)

+

网络客户端用于从配置服务器获取 IP 地址的网络协议。在使用 FlatDHCP 管理器或 VLAN 管理器网络管理器时,通过 dnsmasq 守护程序进行计算中提供。

+

边界网关协议 (BGP)

+

边界网关协议是一种连接自治系统的动态路由协议。该协议被认为是互联网的骨干,将不同的网络连接起来,形成一个更大的网络。

+

浏览器

+

使计算机或设备能够访问 Internet 的任何客户端软件。

+

构建器文件

+

包含对象存储用于重新配置环或在发生严重故障后从头开始重新创建环的配置信息。

+

扩展

+

在主环境资源受限时,利用辅助环境按需弹性构建实例的做法。

+

按钮类

+

地平线中的一组相关按钮类型。用于启动、停止和挂起 VM 的按钮位于一个类中。用于关联和取消关联浮动 IP 地址的按钮位于另一个类中,依此类推。

+

字节

+

构成单个字符的位集;一个字节通常有 8 位。

+

C

+

缓存修剪器

+

将映像服务虚拟机映像缓存保持在或低于其配置的最大大小的程序。

+

Cactus

+

2011 年春季发布的 OpenStack 项目分组版本。它包括计算 (nova)、对象存储 (swift) 和图像服务 (glance)。Cactus 是美国德克萨斯州的一个城市,是 OpenStack 第三个版本的代号。当OpenStack版本从3个月延长到6个月时,该版本的代号发生了变化,以匹配最接近上一次峰会的地理位置。

+

调用

+

OpenStack 消息队列软件使用的 RPC 原语之一。发送消息并等待响应。

+

能力

+

定义单元的资源,包括 CPU、存储和网络。可以应用于一个单元或整个单元内的特定服务。

+

容量缓存

+

计算后端数据库表,其中包含当前工作负载、可用 RAM 量以及每个主机上运行的 VM 数。用于确定 VM 在哪个主机上启动。

+

容量更新程序

+

监视 VM 实例并根据需要更新容量缓存的通知驱动程序。

+

投射

+

OpenStack 消息队列软件使用的 RPC 原语之一。发送消息,不等待响应。

+

目录

+

用户在使用 Identity 服务进行身份验证后可用的 API 端点列表。

+

目录服务

+

一种身份服务,列出用户在使用 Identity 服务进行身份验证后可用的 API 端点。

+

测高仪

+

OpenStack Telemetry 服务的一部分;收集和存储来自其他 OpenStack 服务的指标。

+

单元格

+

在子关系和父关系中提供计算资源的逻辑分区。如果父单元无法提供请求的资源,则请求将从父单元传递到子单元。

+

单元格转发

+

一个“计算”选项,该选项使父单元能够在父单元无法提供所请求的资源时将资源请求传递给子单元。

+

单元格管理器

+

计算组件,其中包含单元中每个主机的当前功能列表,并根据需要路由请求。

+

CentOS 操作系统

+

与 OpenStack 兼容的 Linux 发行版。

+

Ceph 函数

+

可大规模扩展的分布式存储系统,由对象存储、块存储和兼容 POSIX 的分布式文件系统组成。与OpenStack兼容。

+

CephFS

+

Ceph 提供的符合 POSIX 标准的文件系统。

+

证书颁发机构 (CA)

+

在密码学中,颁发数字证书的实体。数字证书通过证书的指定主体证明公钥的所有权。这使其他人(依赖方)能够依赖与认证公钥相对应的私钥所做的签名或断言。在这种信任关系模型中,CA 是证书主体(所有者)和依赖证书的一方的受信任第三方。CA 是许多公钥基础结构 (PKI) 方案的特征。在 OpenStack 中,Compute 为 cloudpipe VPN 和 VM 映像解密提供了一个简单的证书颁发机构。

+

挑战握手身份验证协议 (CHAP)

+

计算支持的 iSCSI 身份验证方法。

+

机会调度器

+

计算使用的一种计划方法,用于从池中随机选择可用主机。

+

自上次更改以来

+

一个计算 API 参数,该参数允许下载自上次请求以来对所请求项的更改,而不是下载一组新的数据并将其与旧数据进行比较。

+

Chef

+

支持 OpenStack 部署的操作系统配置管理工具。

+

子单元格

+

如果请求的资源(如 CPU 时间、磁盘存储或内存)在父单元中不可用,则该请求将转发到其关联的子单元。如果子单元可以满足请求,则它确实可以。否则,它会尝试将请求传递给其任何子级。

+

cinder

+

块存储服务的代号。

+

CirrOS

+

一个最小的 Linux 发行版,设计用作云(如 OpenStack)上的测试映像。

+

Cisco neutron 插件

+

适用于 Cisco 设备和技术(包括 UCS 和 Nexus)的网络插件。

+

云架构师

+

计划、设计和监督云创建的人。

+

云审计数据联邦 (CADF)

+

Cloud Auditing Data Federation (CADF) 是用于审核事件数据的规范。CADF 受 OpenStack Identity 支持。

+

云计算

+

一种模型,支持访问可配置计算资源(如网络、服务器、存储、应用程序和服务)的共享池,这些资源可以快速配置和发布,只需最少的管理工作或服务提供商交互。

+

云计算基础设施

+

支持云计算模型的计算要求所需的硬件和软件组件,例如服务器、存储、网络和虚拟化软件。

+

云计算平台软件

+

通过互联网提供不同的服务。这些资源包括数据存储、服务器、数据库、网络和软件等工具和应用程序。只要电子设备可以访问网络,它就可以访问数据和运行它的软件程序。

+

云计算服务架构

+

云服务体系结构定义了在企业业务网络边界内和跨企业业务网络边界实施的整体云计算服务和解决方案。考虑核心业务需求,并将其与可能的云解决方案相匹配。

+

云控制器

+

表示云全局状态的计算组件的集合;通过队列与服务(例如身份认证、对象存储和节点/存储工作线程)进行通信。

+

云控制器节点

+

运行网络、卷、API、调度程序和映像服务的节点。每个服务都可以分解为单独的节点,以实现可伸缩性或可用性。

+

云数据管理接口(CDMI)

+

SNIA 标准,定义了一个用于管理云中对象的 RESTful API;目前在 OpenStack 中不受支持。

+

云基础设施管理接口(CIMI)

+

正在进行的云管理规范。目前在 OpenStack 中不受支持。

+

云技术

+

云是由管理和自动化软件编排的虚拟源工具。这包括原始处理能力、内存、网络、基于云的应用程序的存储。

+

cloud-init 函数

+

通常安装在 VM 映像中的包,用于在启动后使用从元数据服务检索到的信息(如 SSH 公钥和用户数据)执行实例的初始化。

+

cloudadmin

+

计算 RBAC 系统中的默认角色之一。授予完整的系统访问权限。

+

Cloudbase-初始化

+

提供来宾初始化功能的 Windows 项目,类似于 cloud-init。

+

cloudpipe

+

一种基于每个项目创建 VPN 的计算服务。

+

CloudPipe 镜像

+

作为 cloudpipe 服务器的预制 VM 镜像。从本质上讲,OpenVPN运行在Linux上。

+

集群服务(senlin)

+

实现集群服务和库的项目,用于管理由其他 OpenStack 服务公开的同构对象组。

+

命令过滤器

+

列出计算 rootwrap 工具中允许的命令。

+

命令行界面 (CLI)

+

一个基于文本的客户端,可帮助您创建脚本以与 OpenStack 云进行交互。

+

通用 Internet 文件系统 (CIFS)

+

文件共享协议。它是 Microsoft 开发和使用的原始服务器消息块 (SMB) 协议的公共或开放变体。与 SMB 协议一样, CIFS 在更高级别运行并使用 TCP/IP 协议。

+

公共库 (oslo)

+

生成一组 python 库的项目,其中包含 OpenStack 项目共享的代码。这些库提供的 API 应该是高质量、稳定、一致、有文档记录的和普遍适用的。

+

社区项目

+

一个没有得到OpenStack技术委员会正式认可的项目。如果项目足够成功,它可能会被提升为孵化项目,然后被提升为核心项目,或者它可能与主代码主干合并。

+

压缩

+

通过特殊编码减小文件大小,文件可以再次解压缩为原始内容。OpenStack 支持 Linux 文件系统级别的压缩,但不支持对对象存储对象或镜像服务虚拟机映像等内容进行压缩。

+

计算 API (nova API)

+

nova-api 守护程序提供对 nova 服务的访问。可以与其他 API 通信,例如 Amazon EC2 API。

+

计算控制器

+

计算组件,用于选择要在其上启动 VM 实例的合适主机。

+

计算主机

+

专用于运行计算节点的物理主机。

+

计算节点

+

运行 nova-compute 守护程序的节点,该守护程序管理提供各种服务(如 Web 应用程序和分析)的 VM 实例。

+

计算服务 (nova)

+

OpenStack 核心项目,用于实现服务和相关库,以提供对计算资源(包括裸机、虚拟机和容器)的大规模可扩展、按需、自助访问。

+

计算工作进程

+

在每个计算节点上运行并管理 VM 实例生命周期的计算组件,包括运行、重新启动、终止、附加/分离卷等。由 nova-compute 守护程序提供。

+

串联对象

+

对象存储组合并发送到客户端的一组分段对象。

+

导体

+

在计算中,conductor 是代理来自计算进程的数据库请求的进程。使用 conductor 可以提高安全性,因为计算节点不需要直接访问数据库。

+

congress

+

治理服务的代码名称。

+

一致性窗口

+

所有客户端都可以访问新的对象存储对象所需的时间。

+

控制台日志

+

包含计算中 Linux VM 控制台的输出。

+

容器

+

在对象存储中组织和存储对象。类似于 Linux 目录的概念,但不能嵌套。影像服务容器格式的替代术语。

+

容器审核员

+

通过对 SQLite 后端数据库的查询,检查指定对象存储容器中缺少副本或不正确的对象。

+

容器数据库

+

存储对象存储容器和容器元数据的 SQLite 数据库。容器服务器访问此数据库。

+

容器格式

+

映像服务使用的包装器,其中包含 VM 映像及其关联的元数据,例如计算机状态、OS 磁盘大小等。

+

容器基础设施管理服务(magnum)

+

该项目提供一组用于预配、扩展和管理容器编排引擎的服务。

+

容器服务器

+

管理容器的对象存储服务器。

+

容器服务

+

提供创建、删除、列表等容器服务的对象存储组件。

+

内容分发网络 (CDN)

+

内容分发网络是用于将内容分发到客户端的专用网络,通常位于客户端附近以提高性能。

+

持续交付

+

一种软件工程方法,团队在短周期内生产软件,确保软件可以随时可靠地发布,并且在发布软件时手动发布。

+

持续部署

+

一种软件发布过程,该过程使用自动化测试来验证对代码库的更改是否正确且稳定,以便立即自主部署到生产环境。

+

持续集成

+

每天多次将所有开发人员的工作副本合并到共享主线的做法。

+

控制器节点

+

云控制器节点的替代术语。

+

核心 API

+

根据上下文,核心 API 可以是 OpenStack API 或特定核心项目的主 API,例如计算、网络、映像服务等。

+

核心服务

+

由 Interop 工作组定义为核心的官方 OpenStack 服务。目前由块存储服务(cinder)、计算服务(nova)、身份服务(keystone)、镜像服务(glance)、网络服务(neutron)和对象存储服务(swift)组成。

+

成本

+

在计算分布式计划程序下,这是通过查看每个主机相对于所请求的 VM 实例的风格的功能来计算的。

+

凭证

+

只有用户知道或可访问的数据,用于验证用户是否是他所说的人。在身份验证期间,将凭据提供给服务器。示例包括密码、密钥、数字证书和指纹。

+

CRL 函数

+

PKI 模型中的证书吊销列表 (CRL) 是已吊销的证书列表。不应信任提供这些证书的最终实体。

+

跨域资源共享 (CORS)

+

一种机制,允许从资源来源域之外的另一个域请求网页上的许多资源(例如,字体、JavaScript)。特别是,JavaScript 的 AJAX 调用可以使用 XMLHttpRequest 机制。

+

Crowbar

+

SUSE 的开源社区项目,旨在提供所有必要的服务,以快速部署和管理云。

+

当前工作负载

+

计算容量缓存的一个元素,根据给定主机上当前正在进行的生成、快照、迁移和调整大小操作的数量进行计算。

+

客户

+

项目的替代术语。

+

自定义模块

+

用户创建的 Python 模块,由 horizon 加载,用于更改仪表板的外观。

+

D

+

守护进程

+

在后台运行并等待请求的进程。可能侦听也可能不侦听 TCP 或 UDP 端口。不要与工人混淆。

+

仪表板(horizon)

+

OpenStack 项目,为所有 OpenStack 服务提供可扩展的、统一的、基于 Web 的用户界面。

+

数据加密

+

镜像服务和计算都支持加密的虚拟机 (VM) 镜像(但不支持实例)。OpenStack 支持使用 HTTPS、SSL、TLS 和 SSH 等技术进行传输中数据加密。对象存储不支持应用程序级别的对象加密,但可能支持使用磁盘加密的存储。

+

数据丢失防护(DLP) 软件

+

用于保护敏感信息并通过检测和拒绝数据传输来防止其泄漏到网络边界之外的软件程序。

+

数据处理服务(sahara)

+

OpenStack 项目,提供可扩展的数据处理堆栈和关联的管理接口。

+

数据存储

+

数据库服务支持的数据库引擎。

+

数据库 ID

+

为对象存储数据库的每个副本指定的唯一 ID。

+

数据库复制器

+

一个对象存储组件,用于将帐户、容器和对象数据库中的更改复制到其他节点。

+

数据库服务(trove)

+

一个集成项目,为关系和非关系数据库引擎提供可扩展且可靠的云数据库即服务功能。

+

解除分配

+

删除浮动 IP 地址和固定 IP 地址之间的关联的过程。删除此关联后,浮动 IP 将返回到地址池。

+

Debian

+

与 OpenStack 兼容的 Linux 发行版。

+

重复数据删除

+

在磁盘块、文件和/或对象级别查找重复数据以最大程度地减少存储使用的过程 - 目前在 OpenStack 中不受支持。

+

默认面板

+

用户访问仪表板时显示的默认面板。

+

默认项目

+

如果在创建用户时未指定任何项目,则会将新用户分配给此项目。

+

默认令牌

+

一个标识服务令牌,该令牌不与特定项目关联,并交换为作用域内令牌。

+

延迟删除

+

影像服务中的一个选项,用于在预定义的秒数后删除影像,而不是立即删除影像。

+

交付方式

+

Compute RabbitMQ消息投递模式的设置;可以设置为瞬态或持久性。

+

拒绝服务 (DoS)

+

拒绝服务 (DoS) 是拒绝服务攻击的简称。这是阻止合法用户使用服务的恶意尝试。

+

已弃用的身份验证

+

计算中的一个选项,使管理员能够通过 nova-manage 命令创建和管理用户,而不是使用标识服务。

+

designate

+

DNS 服务的代号。

+

桌面即服务

+

一个平台,它提供了一套桌面环境,用户可以通过访问这些环境从任何位置接收桌面体验。这可以提供通用、开发甚至同构测试环境。

+

开发者

+

计算 RBAC 系统中的默认角色之一,也是分配给新用户的默认角色。

+

设备 ID

+

将对象存储分区映射到物理存储设备。

+

设备权重

+

根据每个设备的存储容量,在对象存储设备之间按比例分配分区。

+

开发堆栈

+

使用 shell 脚本快速构建完整 OpenStack 开发环境的社区项目。

+

DHCP代理

+

为虚拟网络提供 DHCP 服务的 OpenStack Networking 代理。

+

Diablo

+

2011 年秋季发布的与 OpenStack 相关的项目的分组版本,是 OpenStack 的第四个版本。它包括计算 (nova 2011.3)、对象存储 (swift 1.4.3) 和镜像服务 (glance)。Diablo是OpenStack第四个版本的代号。设计峰会在美国加利福尼亚州圣克拉拉附近的湾区举行,Diablo是附近的城市。

+

直接消费者

+

Compute RabbitMQ 的一个元素,在执行 RPC 调用时生效。它通过唯一的独占队列连接到直接交换,发送消息,然后终止。

+

直接交换

+

RPC 调用期间在 Compute RabbitMQ 中创建的路由表;为每个调用的 RPC 调用创建一个。

+

直接发布者

+

RabbitMQ 的元素,用于提供对传入 MQ 消息的响应。

+

解除关联

+

删除浮动 IP 地址和固定 IP 之间的关联,从而将浮动 IP 地址返回到地址池的过程。

+

自主访问控制 (DAC)

+

控制使用者访问对象的能力,同时使用户能够做出策略决策并分配安全属性。传统的用户、组和读-写-执行权限的 UNIX 系统就是 DAC 的一个示例。

+

磁盘加密

+

能够在文件系统、磁盘分区或整个磁盘级别加密数据。在计算 VM 中受支持。

+

磁盘格式

+

VM 的磁盘映像在映像服务后端存储中存储的基础格式。例如,AMI、ISO、QCOW2、VMDK 等。

+

分散

+

在对象存储中,用于测试和确保对象和容器分散以确保容错的工具。

+

分布式虚拟路由器 (DVR)

+

使用 OpenStack Networking (neutron) 时实现高可用性多主机路由的机制。

+

Django

+

在地平线中广泛使用的 Web 框架。

+

DNS 记录

+

指定有关特定域并属于该域的信息的记录。

+

DNS服务(designate)

+

OpenStack 项目,以与技术无关的方式提供对权威 DNS 服务的可扩展、按需、自助访问。

+

dnsmasq

+

为虚拟网络提供 DNS、DHCP、BOOTP 和 TFTP 服务的守护程序。

+

+

标识 API v3 实体。表示项目、组和用户的集合,用于定义用于管理 OpenStack Identity 实体的管理边界。在 Internet 上,将网站与其他网站分开。通常,域名有两个或多个部分,用点分隔。例如,yahoo.com、usa.gov、harvard.edu 或 mail.yahoo.com。此外,域是包含一条或多条记录的所有 DNS 相关信息的实体或容器。

+

域名系统(DNS)

+

用于确定 Internet 域名到地址和地址到名称解析的系统。DNS 通过将 IP 地址转换为更易于记忆的地址来帮助浏览 Internet。例如,将 111.111.111.1 转换为 www.yahoo.com。所有域及其组件(如邮件服务器)都利用 DNS 解析到适当的位置。DNS服务器通常设置在主从关系中,以便主服务器故障调用从服务器。还可以对 DNS 服务器进行群集或复制,以便对一个 DNS 服务器所做的更改自动传播到其他活动服务器。在计算中,支持将 DNS 条目与浮动 IP 地址、节点或单元相关联,以便主机名在重新启动时保持一致。

+

下载

+

将数据(通常以文件的形式)从一台计算机传输到另一台计算机。

+

持久交换

+

服务器重新启动时保持活动状态的 Compute RabbitMQ 消息交换。

+

持久队列

+

一个 Compute RabbitMQ 消息队列,在服务器重新启动时保持活动状态。

+

动态主机配置协议 (DHCP)

+

一种网络协议,用于配置连接到网络的设备,以便它们可以使用 Internet 协议 (IP) 在该网络上进行通信。该协议在客户端-服务器模型中实现,其中 DHCP 客户端从 DHCP 服务器请求配置数据,例如 IP 地址、默认路由以及一个或多个 DNS 服务器地址。一种在引导时自动为主机配置网络的方法。由网络和计算提供。

+

动态超文本标记语言 (DHTML)

+

使用 HTML、JavaScript 和级联样式表使用户能够与网页交互或显示简单动画的页面。

+

E

+

东西向流量

+

同一云或数据中心中的服务器之间的网络流量。另请参阅南北向流量。

+

EBS 启动卷

+

包含可启动 VM 映像的 Amazon EBS 存储卷,OpenStack 目前不支持该映像。

+

ebtables

+

用于 Linux 桥接防火墙的过滤工具,支持过滤通过 Linux 桥接的网络流量。在计算中与 arptables、iptables 和 ip6tables 一起使用,以确保网络通信的隔离。

+

EC2 函数

+

Amazon 商业计算产品,类似于计算。

+

EC2 访问密钥

+

与 EC2 私有密钥一起使用以访问计算 EC2 API。

+

EC2 API

+

OpenStack 支持通过计算访问 Amazon EC2 API。

+

EC2 兼容性 API

+

使 OpenStack 能够与 Amazon EC2 通信的计算组件。

+

EC2 私有密钥

+

与计算 EC2 API 通信时与 EC2 访问密钥一起使用;用于对每个请求进行数字签名。

+

边缘计算

+

在云中运行更少的进程,并将这些进程移动到本地。

+

弹性块存储 (EBS)

+

Amazon 商业块存储产品。

+

封装

+

将一种数据包类型置于另一种数据包类型中,以提取或保护数据。示例包括 GRE、MPLS 或 IPsec。

+

加密

+

OpenStack支持HTTPS、SSH、SSL、TLS、数字证书、数据加密等加密技术。

+

端点

+

请参阅 API 端点。

+

端点注册表

+

身份服务目录的替代术语。

+

端点模板

+

URL 和端口号端点列表,指示可以访问服务(如对象存储、计算、标识等)的位置。

+

企业云计算

+

位于防火墙后面的计算环境,为企业提供软件、基础设施和平台服务。

+

实体

+

任何想要连接到网络(网络连接服务)提供的网络服务的硬件或软件。实体可以通过实现 VIF 来利用网络。

+

临时映像

+

不保存对其卷所做的更改并在实例终止后将其恢复到原始状态的 VM 映像。

+

临时卷

+

不保存对其所做的更改并在当前用户放弃控制权时恢复到其原始状态的卷。

+

Essex

+

2012 年 4 月发布的与 OpenStack 相关的项目的分组版本,是 OpenStack 的第五个版本。它包括计算(nova 2012.1)、对象存储(swift 1.4.8)、图像(glance)、身份(keystone)和仪表板(horizon)。Essex 是 OpenStack 第五个版本的代号。设计峰会在美国马萨诸塞州波士顿举行,Essex是附近的城市。

+

ESXi

+

支持 OpenStack 的虚拟机管理程序。

+

ETag 函数

+

对象存储中对象的 MD5 哈希值,用于确保数据完整性。

+

euca2ools

+

用于管理 VM 的命令行工具集合;大多数都与OpenStack兼容。

+

Eucalyptus Kernel Image (EKI)

+

与 ERI 一起使用以创建 EMI。

+

Eucalyptus机器映像 (EMI)

+

映像服务支持的虚拟机镜像容器格式。

+

Eucalyptus Ramdisk 镜像 (ERI)

+

与 EKI 一起使用以创建 EMI。

+

撤离

+

将一个或所有虚拟机 (VM) 实例从一台主机迁移到另一台主机的过程,与共享存储实时迁移和块迁移兼容。

+

交换

+

RabbitMQ 消息交换的替代术语。

+

交换类型

+

Compute RabbitMQ 中的路由算法。

+

独占队列

+

由 RabbitMQ 中的直接使用者连接到 - 计算,消息只能由当前连接使用。

+

扩展属性 (xattr)

+

文件系统选项,用于存储所有者、组、权限、修改时间等以外的其他信息。底层对象存储文件系统必须支持扩展属性。

+

扩展

+

API 扩展或插件的替代术语。在 Identity 服务的上下文中,这是特定于实现的调用,例如添加对 OpenID 的支持。

+

外部网络

+

通常用于 Internet 访问的网段。

+

额外规格

+

指定计算确定从何处开始新实例时的其他要求。示例包括最小网络带宽或 GPU 量。

+

F

+

FakeLDAP

+

创建用于测试身份和计算的本地 LDAP 目录的简单方法。需要 Redis。

+

fan-out交换

+

在 RabbitMQ 和 Compute 中,调度程序服务使用消息传递接口从计算、卷和网络节点接收功能消息。

+

联合身份

+

一种在身份提供商和 OpenStack 云之间建立信任的方法。

+

Fedora

+

与 OpenStack 兼容的 Linux 发行版。

+

光纤通道

+

存储协议在概念上类似于 TCP/IP;封装 SCSI 命令和数据。

+

以太网光纤通道 (FCoE)

+

光纤通道协议在以太网内通过隧道传输。

+

填充优先调度器

+

计算计划方法,尝试用 VM 填充主机,而不是在各种主机上启动新 VM。

+

过滤器

+

计算计划过程中的步骤,当无法运行 VM 的主机被淘汰且未被选中时。

+

防火墙

+

用于限制主机和/或节点之间的通信,在计算中使用 iptables、arptables、ip6tables 和 ebtables 实现。

+

防火墙即服务 (FWaaS)

+

提供外围防火墙功能的网络扩展。

+

固定 IP 地址

+

每次启动实例时都与同一实例关联的 IP 地址通常不对最终用户或公共 Internet 访问,并用于管理实例。

+

平面管理器

+

计算组件为授权节点提供 IP 地址,并假定 DHCP、DNS 以及路由配置和服务由其他设备提供。

+

平面模式注入

+

一种计算网络方法,在实例启动之前将操作系统网络配置信息注入到 VM 映像中。

+

平面网络

+

虚拟网络类型,不使用 VLAN 或隧道来分隔项目流量。每个平面网络通常需要定义由桥接映射定义的单独的底层物理接口。但是,平面网络可以包含多个子网。

+

FlatDHCP 管理器

+

提供 dnsmasq(DHCP、DNS、BOOTP、TFTP)和 radvd(路由)服务的计算组件。

+

规格

+

VM 实例类型的替代术语

+

规格ID

+

每种计算或映像服务虚拟机规格或实例类型的 UUID。

+

浮动 IP 地址

+

项目可以与 VM 关联的 IP 地址,以便实例在每次启动时都具有相同的公有 IP 地址。您可以创建一个浮动 IP 地址池,并在实例启动时将其分配给实例,以保持一致的 IP 地址以维护 DNS 分配。

+

Folsom

+

2012 年秋季发布的与 OpenStack 相关的项目的分组版本,是 OpenStack 的第六个版本。它包括计算 (nova)、对象存储 (swift)、身份 (keystone)、网络 (neutron)、映像服务 (glance) 以及卷或块存储 (cinder)。Folsom 是 OpenStack 第六个版本的代号。设计峰会在美国加利福尼亚州旧金山举行,福尔瑟姆是附近的城市。

+

FormPost

+

对象存储中间件,通过网页上的表单上传(发布)图像。

+

freezer

+

备份、还原和灾难恢复服务的代号。

+

前端

+

用户与服务交互的点;可以是 API 端点、仪表板或命令行工具。

+

G

+

网关

+

通常分配给路由器的 IP 地址,用于在不同网络之间传递网络流量。

+

通用接收卸载 (GRO)

+

某些网络接口驱动程序的功能,在传送到内核 IP 堆栈之前,将许多较小的接收数据包合并为一个大数据包。

+

通用路由封装 (GRE)

+

在虚拟点对点链路中封装各种网络层协议的协议。

+

glance

+

影像服务的代号。

+

glance API 服务器

+

图像 API 的替代名称。

+

glance 注册表

+

映像服务映像注册表的替代术语。

+

全局端点模板

+

包含可用于所有项目的服务的标识服务终结点模板。

+

GlusterFS

+

一个旨在聚合 NAS 主机的文件系统,与 OpenStack 兼容。

+

gnocchi

+

OpenStack Telemetry 服务的一部分;提供索引器和时序数据库。

+

golden映像

+

一种操作系统安装方法,其中创建最终的磁盘映像,然后由所有节点使用,无需修改。

+

治理服务(congress)

+

该项目在任何云服务集合中提供治理即服务,以便监视、实施和审核动态基础结构上的策略。

+

图形交换格式 (GIF)

+

一种通常用于网页上的动画图像的图像文件。

+

图形处理单元 (GPU)

+

OpenStack 目前不支持根据 GPU 的存在来选择主机。

+

绿色线程

+

Python 使用的协作线程模型;减少争用条件,并且仅在进行特定库调用时进行上下文切换。每个 OpenStack 服务都是它自己的线程。

+

Grizzly

+

OpenStack 第七个版本的代号。设计峰会在美国加利福尼亚州圣地亚哥举行,Grizzly是加利福尼亚州州旗的一个元素。

+

分组

+

Identity v3 API 实体。表示特定域所拥有的用户集合。

+

客户机操作系统

+

在虚拟机监控程序的控制下运行的操作系统实例。

+

H

+

Hadoop

+

Apache Hadoop 是一个开源软件框架,支持数据密集型分布式应用程序。

+

Hadoop 分布式文件系统 (HDFS)

+

一种分布式、高度容错的文件系统,设计用于在低成本商用硬件上运行。

+

交接

+

对象存储中的一种对象状态,其中由于驱动器故障而自动创建对象的新副本。

+

HAProxy 函数

+

为基于 TCP 和 HTTP 的应用程序提供负载平衡器,将请求分散到多个服务器。

+

硬重启

+

一种重新启动类型,其中按下物理或虚拟电源按钮,而不是正常、正确地关闭操作系统。

+

Havana

+

OpenStack 第八版的代号。设计峰会在美国俄勒冈州波特兰市举行,Havana是俄勒冈州的一个非法人社区。

+

健康监视器

+

确定 VIP 池的后端成员是否可以处理请求。一个池可以有多个与之关联的运行状况监视器。当池有多个与之关联的监视器时,所有监视器都会检查池的每个成员。所有监视器都必须声明成员运行状况良好,才能保持活动状态。

+

heat

+

业务流程服务的代号。

+

Heat 编排模板 (HOT)

+

以 OpenStack 原生格式的 Heat 输入。

+

高可用性 (HA)

+

高可用性系统设计方法和相关服务实施可确保在合同测量期间达到预先安排的运营绩效水平。高可用性系统力求最大限度地减少系统停机时间和数据丢失。

+

horizon

+

仪表板的代号。

+

Horizon 插件

+

OpenStack Dashboard (horizon) 的插件。

+

主机

+

物理计算机,而不是 VM 实例(节点)。

+

主机聚合

+

一种将可用性区域进一步细分为虚拟机管理程序池(公共主机的集合)的方法。

+

主机总线适配器 (HBA)

+

插入 PCI 插槽(如光纤通道或网卡)的设备。

+

混合云

+

混合云是由两个或多个云(私有云、社区云或公有云)组成的,这些云仍然是不同的实体,但绑定在一起,提供多种部署模型的优势。混合云还意味着能够将主机托管、托管和/或专用服务与云资源连接起来。

+

混合云计算

+

混合了本地、私有云和第三方公有云服务,并在两个平台之间进行编排。

+

Hyper-V

+

OpenStack 支持的虚拟机管理程序之一。

+

超链接

+

包含指向其他网站的链接的任何类型的文本,常见于单击一个或多个单词会打开其他网站的文档中。

+

超文本传输协议 (HTTP)

+

用于分布式、协作式、超媒体信息系统的应用协议。它是万维网数据通信的基础。超文本是在包含文本的节点之间使用逻辑链接(超链接)的结构化文本。HTTP是交换或传输超文本的协议。

+

安全超文本传输协议 (HTTPS)

+

一种加密通信协议,用于通过计算机网络进行安全通信,在 Internet 上的部署特别广泛。从技术上讲,它本身不是一个协议;相反,它是简单地将超文本传输协议 (HTTP) 分层在 TLS 或 SSL 协议之上的结果,从而将 TLS 或 SSL 的安全功能添加到标准 HTTP 通信中。大多数 OpenStack API 端点和许多组件间通信都支持 HTTPS 通信。

+

虚拟机管理程序

+

仲裁和控制 VM 对实际底层硬件的访问的软件。

+

虚拟机管理程序池

+

通过主机聚合组合在一起的虚拟机管理程序的集合。

+

I

+

Icehouse

+

OpenStack 第九个版本的代号。设计峰会在香港举行,Ice House是该市的一条街道的名字。

+

身份证号码

+

与身份中的每个用户关联的唯一数字 ID,在概念上类似于 Linux 或 LDAP UID。

+

身份验证 API

+

Identity 服务 API 的替代术语。

+

身份验证后端

+

Identity 服务用于检索用户信息的源;例如,OpenLDAP 服务器。

+

身份提供者

+

一种目录服务,允许用户使用用户名和密码登录。它是身份验证令牌的典型来源。

+

身份服务(keystone)

+

促进 API 客户端身份验证、服务发现、分布式多项目授权和审计的项目。它提供了一个用户映射到他们可以访问的 OpenStack 服务的中央目录。它还为 OpenStack 服务注册端点,并充当通用身份验证系统。

+

身份服务 API

+

用于访问通过 keystone 提供的 OpenStack Identity 服务的 API。

+

IETF

+

Internet 工程任务组 (IETF) 是一个开放标准组织,负责制定 Internet 标准,尤其是与 TCP/IP 相关的标准。

+

映像

+

用于创建或重建服务器的特定操作系统 (OS) 的文件集合。OpenStack 提供预构建的映像。您还可以从已启动的服务器创建自定义映像或快照。自定义映像可用于数据备份,或用作其他服务器的“黄金”映像。

+

映像API

+

用于管理 VM 映像的映像服务 API 终结点。处理客户端对 VM 的请求,更新注册表服务器上的映像服务元数据,并与存储适配器通信以从后端存储上传 VM 映像。

+

映像缓存

+

由图像服务用于获取本地主机上的图像,而不是在每次请求图像时从图像服务器重新下载图像。

+

映像 ID

+

URI 和 UUID 的组合,用于通过镜像 API 访问镜像服务虚拟机镜像。

+

映像成员

+

可以在映像服务中访问给定 VM 映像的项目列表。

+

映像所有者

+

拥有镜像服务虚拟机镜像的项目。

+

映像注册表

+

可通过映像服务获取的 VM 映像的列表。

+

映像服务(glance)

+

OpenStack 服务,它提供服务和关联的库来存储、浏览、共享、分发和管理可启动磁盘映像、与初始化计算资源密切相关的其他数据以及元数据定义。

+

映像状态

+

镜像服务中虚拟机镜像的当前状态,不要与正在运行的实例的状态混淆。

+

映像存储

+

映像服务用于存储虚拟机映像的后端存储,选项包括对象存储、本地挂载的文件系统、RADOS 块设备、VMware 数据存储或 HTTP。

+

映像 UUID

+

映像服务用于唯一标识每个 VM 映像的 UUID。

+

孵化项目

+

社区项目可以提升到此状态,然后提升为核心项目

+

基础设施优化服务(watcher)

+

OpenStack项目,旨在为基于OpenStack的多项目云提供灵活且可扩展的资源优化服务。

+

基础架构即服务 (IaaS)

+

IaaS 是一种配置模型,在这种模型中,组织外包数据中心的物理组件,例如存储、硬件、服务器和网络组件。服务提供商拥有设备,并负责设备的安装、操作和维护。客户通常按使用量付费。IaaS 是一种提供云服务的模型。

+

Ingress 过滤

+

筛选传入网络流量的过程。由计算支持。

+

INI 格式

+

OpenStack 配置文件使用 INI 格式来描述选项及其值。它由部分和键值对组成。

+

注入

+

在启动实例之前将文件放入虚拟机映像的过程。

+

每秒输入/输出操作数 (IOPS)

+

IOPS 是一种常见的性能度量,用于对计算机存储设备(如硬盘驱动器、固态驱动器和存储区域网络)进行基准测试。

+

实例

+

正在运行的 VM 或处于已知状态(如挂起)的 VM,可以像硬件服务器一样使用。

+

实例ID

+

例如UUID的替代术语。

+

实例状态

+

来宾虚拟机映像的当前状态。

+

实例隧道网络

+

用于计算节点和网络节点之间的实例流量隧道的网段。

+

实例类型

+

描述可供用户使用的各种虚拟机映像的参数;包括 CPU、存储和内存等参数。风味的替代术语。

+

实例类型 ID

+

特定实例 ID 的替代术语。

+

实例 UUID

+

分配给每个来宾 VM 实例的唯一 ID。

+

智能平台管理接口(IPMI)

+

IPMI 是系统管理员用于计算机系统带外管理和监控其操作的标准化计算机系统接口。通俗地说,它是一种使用直接网络连接管理计算机的方法,无论它是否打开;连接到硬件,而不是操作系统或登录 shell。

+

接口

+

提供与其他设备或介质的连接的物理或虚拟设备。

+

接口 ID

+

UUID 形式的网络 VIF 或 vNIC 的唯一 ID。

+

互联网控制消息协议 (ICMP)

+

网络设备用于控制消息的网络协议。例如,ping 使用 ICMP 来测试连接。

+

互联网协议 (IP)

+

Internet 协议套件中的主要通信协议,用于跨网络边界中继数据报。

+

互联网服务提供商 (ISP)

+

任何向个人或企业提供互联网访问的企业。

+

互联网小型计算机系统接口(iSCSI)

+

封装 SCSI 帧以通过 IP 网络传输的存储协议。受计算、对象存储和镜像服务支持。

+

IO

+

输入和输出的缩写。

+

IP 地址

+

Internet 上每个计算机系统唯一的编号。地址使用了两个版本的 Internet 协议 (IP):IPv4 和 IPv6。

+

IP 地址管理 (IPAM)

+

自动执行 IP 地址分配、解除分配和管理的过程。目前由 Compute、melange 和 Networking 提供。

+

ip6tables

+

用于在 Linux 内核中设置、维护和检查 IPv6 数据包过滤规则表的工具。在 OpenStack 计算中,ip6tables 与 arptables、ebtables 和 iptables 一起使用,为节点和虚拟机创建防火墙。

+

ipset

+

对 iptables 的扩展,允许创建同时匹配整个 IP 地址“集”的防火墙规则。这些集驻留在索引数据结构中以提高效率,尤其是在具有大量规则的系统上。

+

iptables

+

iptables 与 arptables 和 ebtables 一起使用,可在 Compute 中创建防火墙。iptables 是 Linux 内核防火墙(作为不同的 Netfilter 模块实现)提供的表及其存储的链和规则。目前不同的内核模块和程序用于不同的协议:iptables 适用于 IPv4,ip6tables 适用于 IPv6,arptables 适用于 ARP,ebtables 用于以太网帧。需要 root 权限才能操作。

+

ironic

+

裸机服务的代号。

+

iSCSI 限定名称 (IQN)

+

IQN 是最常用的 iSCSI 名称格式,用于唯一标识 iSCSI 网络中的节点。所有 IQN 都遵循 iqn.yyyy-mm.domain:identifier 模式,其中“yyyy-mm”是域名注册的年份和月份,“domain”是颁发组织的反向域名,“identifier”是一个可选字符串,使同一域名下的每个 IQN 都是唯一的。例如,“iqn.2015-10.org.openstack.408ae959bce1”。

+

ISO9660

+

镜像服务支持的虚拟机镜像磁盘格式之一。

+

ITSEC 函数

+

计算 RBAC 系统中的默认角色,可以隔离任何项目中的实例。

+

J

+

Java

+

一种编程语言,用于创建通过网络涉及多台计算机的系统。

+

JavaScript

+

一种用于生成网页的脚本语言。

+

JavaScript 对象表示法 (JSON)

+

OpenStack 中支持的响应格式之一。

+

巨型帧 (jumbo frame)

+

现代以太网网络中的功能,支持高达约 9000 字节的帧。

+

Juno

+

OpenStack 第十版的代号。设计峰会在美国佐治亚州亚特兰大举行,Juno是佐治亚州的一个非法人社区。

+

K

+

Kerberos

+

一种基于票证的网络身份验证协议。Kerberos 允许节点通过非安全网络进行通信,并允许节点以安全的方式相互证明其身份。

+

基于内核的虚拟机 (KVM)

+

支持 OpenStack 的虚拟机管理程序。KVM 是适用于 Linux on x86 硬件的完整虚拟化解决方案,包含虚拟化扩展(Intel VT 或 AMD-V)、ARM、IBM Power 和 IBM zSeries。它由一个可加载的内核模块组成,该模块提供核心虚拟化基础架构和特定于处理器的模块。

+

密钥管理器服务(barbican)

+

该项目产生一个秘密存储和生成系统,能够为希望启用加密功能的服务提供密钥管理。

+

keystone

+

Identity 服务的代号。

+

快速启动

+

用于在基于 Red Hat、Fedora 和 CentOS 的 Linux 发行版上自动进行系统配置和安装的工具。

+

Kilo

+

OpenStack 第 11 版的代号。设计峰会在法国巴黎举行。由于名称选择的延迟,该版本最初仅被称为 K。因为 k 是千(kilo)的单位符号,而千克原器保存在巴黎附近塞夫尔的 Pavillon de Breteuil 中,因此社区选择了 Kilo 作为版本名称。

+

L

+

大对象

+

Object Storage 中大于 5 GB 的对象。

+

启动板

+

OpenStack 的协作站点。

+

二层(L2)代理

+

为虚拟网络提供第 2 层连接的 OpenStack Networking 代理。

+

二层网络

+

OSI 网络体系结构中用于数据链路层的术语。数据链路层负责媒体访问控制、流量控制以及检测和纠正物理层中可能发生的错误。

+

三层 (L3) 代理

+

OpenStack Networking 代理,为虚拟网络提供第 3 层(路由)服务。

+

三层网络

+

在 OSI 网络体系结构中用于网络层的术语。网络层负责数据包转发,包括从一个节点到另一个节点的路由。

+

Liberty

+

OpenStack 第 12 版的代号。设计峰会在加拿大温哥华举行,Liberty是加拿大萨斯喀彻温省一个村庄的名字。

+

libvirt

+

OpenStack 用来与许多受支持的虚拟机管理程序进行交互的虚拟化 API 库。

+

轻量级目录访问协议 (LDAP)

+

用于通过 IP 网络访问和维护分布式目录信息服务的应用程序协议。

+

Linux 操作系统

+

类Unix计算机操作系统,在自由和开源软件开发和分发的模式下组装。

+

Linux桥接

+

使多个 VM 能够在计算中共享单个物理 NIC 的软件。

+

Linux Bridge neutron 插件

+

使 Linux 网桥能够理解网络端口、接口连接和其他抽象。

+

Linux 容器 (LXC)

+

支持 OpenStack 的虚拟机管理程序。

+

实时迁移

+

计算中能够将正在运行的虚拟机实例从一台主机移动到另一台主机,在切换期间仅发生少量服务中断。

+

负载均衡器

+

负载均衡器是属于云帐户的逻辑设备。它用于根据定义为其配置一部分的条件在多个后端系统或服务之间分配工作负载。

+

负载均衡

+

在两个或多个节点之间分散客户端请求以提高性能和可用性的过程。

+

负载均衡器即服务(LBaaS)

+

使网络能够在指定实例之间均匀分配传入请求。

+

负载均衡服务(octavia)

+

该项目旨在以与技术无关的方式提供对负载均衡器服务的可扩展、按需、自助服务访问。

+

逻辑卷管理器 (LVM)

+

提供一种在大容量存储设备上分配空间的方法,该方法比传统的分区方案更灵活。

+

M

+

magnum

+

容器基础结构管理服务的代号。

+

管理 API

+

管理 API 的替代术语。

+

管理网络

+

用于管理的网段,公共 Internet 无法访问。

+

管理器

+

相关代码的逻辑分组,例如块存储卷管理器或网络管理器。

+

清单

+

用于跟踪对象存储中大型对象的段。

+

manifest 对象

+

一个特殊的对象存储对象,其中包含大型对象的清单。

+

manila

+

OpenStack 共享文件系统服务的代号。

+

manila分享

+

负责管理共享文件系统服务设备,特别是后端设备。

+

最大传输单元 (MTU)

+

特定网络介质的最大帧或数据包大小。以太网通常为 1500 字节。

+

机制驱动 程序

+

模块化第 2 层 (ML2) neutron 插件的驱动程序,为虚拟实例提供第 2 层连接。单个 OpenStack 安装可以使用多个机制驱动程序。

+

melange

+

OpenStack Network Information Service 的项目名称。将与网络合并。

+

成员关系

+

镜像服务虚拟机镜像与项目之间的关联。允许与指定项目共享图像。

+

成员列表

+

可以在映像服务中访问给定 VM 映像的项目列表。

+

内存缓存

+

对象存储用于缓存的分布式内存对象缓存系统。

+

内存过量分配

+

能够根据主机的实际内存使用情况启动新的 VM 实例,而不是根据每个正在运行的实例认为其可用的 RAM 量来做出决定。也称为 RAM 过量使用。

+

消息代理

+

用于在计算中提供 AMQP 消息传递功能的软件包。默认包为 RabbitMQ。

+

消息总线

+

所有 AMQP 消息用于计算中的云间通信的主要虚拟通信线路。

+

消息队列

+

将来自客户端的请求传递给相应的工作线程,并在作业完成后将输出返回给客户端。

+

消息服务 (zaqar)

+

该项目提供消息传递服务,该服务以高效、可扩展和高度可用的方式提供各种分布式应用程序模式,并创建和维护关联的 Python 库和文档。

+

元数据服务器 (MDS)

+

存储 CephFS 元数据。

+

元数据代理

+

为实例提供元数据服务的 OpenStack Networking 代理。

+

迁移

+

将 VM 实例从一台主机移动到另一台主机的过程。

+

mistral

+

工作流服务的代号。

+

Mitaka

+

OpenStack 第 13 版的代号。设计峰会在日本东京举行。Mitaka是东京的一座城市。

+

模块化第 2 层 (ML2)neutron插件

+

可以在网络中同时使用多种二层网络技术,如802.1Q和VXLAN。

+

monasca

+

OpenStack 监控的代号。

+

监控 (LBaaS)

+

LBaaS 功能,使用 ping 命令、TCP 和 HTTP/HTTPS GET 提供可用性监控。

+

监视器 (Mon)

+

一个 Ceph 组件,用于与外部客户端通信、检查数据状态和一致性以及执行仲裁功能。

+

监控 (monasca)

+

OpenStack 服务,为指标、复杂事件处理和日志记录提供多项目、高度可扩展、高性能、容错的监控即服务解决方案。为高级监控服务构建一个可扩展的平台,运营商和项目都可以使用该平台来获得运营洞察力和可见性,确保可用性和稳定性。

+

多云计算

+

在单个网络架构中使用多种云计算和存储服务。

+

多云 SDK

+

提供多云抽象层并包含对 OpenStack 的支持的 SDK。这些 SDK 非常适合编写需要使用多种类型的云提供商的应用程序,但可能会公开一组更有限的功能。

+

多因素身份验证

+

使用两个或多个凭据(如密码和私钥)的身份验证方法。目前在 Identity 中不受支持。

+

多主机

+

传统 (nova) 网络的高可用性模式。每个计算节点处理 NAT 和 DHCP,并充当其上所有 VM 的网关。一个计算节点上的网络故障不会影响其他计算节点上的 VM。

+

multinic 函数

+

计算中的工具,允许每个虚拟机实例连接多个 VIF。

+

murano

+

应用程序目录服务的代号。

+

N

+

Nebula

+

NASA 于 2010 年以开源形式发布,是 Compute 的基础。

+

网络管理员

+

计算 RBAC 系统中的默认角色之一。允许用户为实例分配可公开访问的 IP 地址并更改防火墙规则。

+

NetApp 卷驱动程序

+

使计算能够通过 NetApp OnCommand 配置管理器与 NetApp 存储设备进行通信。

+

网络

+

在实体之间提供连接的虚拟网络。例如,共享网络连接的虚拟端口的集合。在网络术语中,网络始终是第 2 层网络。

+

网络地址转换 (NAT)

+

在传输过程中修改 IP 地址信息的过程。由计算和网络支持。

+

网络控制器

+

一个计算守护程序,用于协调节点的网络配置,包括 IP 地址、VLAN 和桥接。还管理公共网络和专用网络的路由。

+

网络文件系统 (NFS)

+

一种使文件系统在网络上可用的方法。由 OpenStack 支持。

+

网络 ID

+

分配给网络中每个网段的唯一 ID。与网络 UUID 相同。

+

网络管理器

+

用于管理各种网络组件(如防火墙规则、IP 地址分配等)的计算组件。

+

网络命名空间

+

Linux 内核功能,在单个主机上提供独立的虚拟网络实例,具有单独的路由表和接口。类似于物理网络设备上的虚拟路由和转发 (VRF) 服务。

+

网络节点

+

运行 Network Worker 守护程序的任何计算节点。

+

网络段

+

表示网络中虚拟的隔离 OSI 第 2 层子网。

+

网络服务标头 (NSH)

+

提供沿实例化服务路径进行元数据交换的机制。

+

网络时间协议 (NTP)

+

通过与可信、准确的时间源通信来保持主机或节点时钟正确的方法。

+

网络 UUID

+

网络网段的唯一 ID。

+

网络工作进程

+

nova-network worker 守护进程;提供诸如为启动的 nova 实例提供 IP 地址等服务。

+

网络 API(Neutron API)

+

用于访问 OpenStack Networking 的 API。提供可扩展的体系结构以启用自定义插件创建。

+

网络服务(neutron)

+

OpenStack 项目,它实现了服务和相关库,以提供按需、可扩展且与技术无关的网络抽象。

+

neutron

+

OpenStack Networking 服务的代号。

+

neutron API

+

网络 API 的替代名称。

+

Neutron 管理器

+

启用计算和网络集成,使网络能够对来宾 VM 执行网络管理。

+

Neutron 插件

+

网络中的接口,使组织能够为高级功能(如 QoS、ACL 或 IDS)创建自定义插件。

+

Newton

+

OpenStack 第 14 版的代号。设计峰会在美国德克萨斯州奥斯汀举行。该版本以位于德克萨斯州奥斯汀市第九街 1013 号的“Newton House”命名。被列入国家史迹名录。

+

Nexenta 卷驱动程序

+

为计算中的 NexentaStor 设备提供支持。

+

NFV 编排服务(tacker)

+

OpenStack 服务,旨在实现网络功能虚拟化 (NFV) 编排服务和库,用于网络服务和虚拟网络功能 (VNF) 的端到端生命周期管理。

+

Nginx 函数

+

HTTP 和反向代理服务器、邮件代理服务器和通用 TCP/UDP 代理服务器。

+

无 ACK

+

在 Compute RabbitMQ 中禁用服务器端消息确认。提高性能但降低可靠性。

+

节点

+

在主机上运行的 VM 实例。

+

非持久交换

+

服务重新启动时清除的消息交换。其数据不会写入持久性存储。

+

非持久队列

+

服务重新启动时清除的消息队列。其数据不会写入持久性存储。

+

非持久化卷

+

临时卷的替代术语。

+

南北向流量

+

用户或客户端(北)与服务器(南)之间的网络流量,或进入云(南)和云外(北)的流量。另请参阅东西向流量。

+

nova

+

OpenStack 计算服务的代号。

+

Nova API 接口

+

计算 API 的替代术语。

+

nova-network (新星网络)

+

一个计算组件,用于管理 IP 地址分配、防火墙和其他与网络相关的任务。这是旧版网络选项,也是网络的替代方法。

+

O

+

对象

+

对象存储保存的数据的 BLOB;可以是任何格式。

+

对象审计器

+

打开对象服务器的所有对象,并验证每个对象的 MD5 哈希、大小和元数据。

+

对象过期

+

Object Storage 中的一个可配置选项,用于在经过指定时间或达到特定日期后自动删除对象。

+

对象哈希

+

对象存储对象的唯一 ID。

+

对象路径哈希

+

对象存储用于确定对象在环中的位置。将对象映射到分区。

+

对象复制器

+

一个对象存储组件,用于将对象复制到远程分区以实现容错。

+

对象服务器

+

负责管理对象的对象存储组件。

+

对象存储 API

+

用于访问 OpenStack 对象存储的 API。

+

对象存储设备 (OSD)

+

Ceph 存储守护进程。

+

对象存储服务(swift)

+

OpenStack 核心项目,为固定数字内容提供最终一致性和冗余的存储和检索。

+

对象版本控制

+

允许用户在对象存储容器上设置标志,以便对容器内的所有对象进行版本控制。

+

Ocata

+

OpenStack 第 15 版的代号。设计峰会在西班牙巴塞罗那举行。Ocata是巴塞罗那北部的一个海滩。

+

Octavia

+

负载平衡服务的代号。

+

Oldie

+

长时间运行的对象存储进程的术语。可以指示挂起的进程。

+

开放云计算接口(OCCI)

+

用于管理计算、数据和网络资源的标准化接口,目前在 OpenStack 中不受支持。

+

开放虚拟化格式 (OVF)

+

打包 VM 映像的标准。在 OpenStack 中受支持。

+

打开 vSwitch

+

Open vSwitch 是在开源 Apache 2.0 许可证下获得许可的生产质量的多层虚拟交换机。它旨在通过编程扩展实现大规模网络自动化,同时仍支持标准管理接口和协议(例如 NetFlow、sFlow、SPAN、RSPAN、CLI、LACP、802.1ag)。

+

Open vSwitch(OVS)代理

+

为网络插件提供底层 Open vSwitch 服务的接口。

+

打开 vSwitch neutron 插件

+

在网络中提供对 Open vSwitch 的支持。

+

OpenDev

+

OpenDev 是一个协作开源软件开发的空间。

+

OpenDev 的使命是为开源软件项目提供项目托管、持续集成工具和虚拟协作空间。OpenDev 本身是自托管在这套工具上,包括代码审查、持续集成、etherpad、wiki、代码浏览等。这意味着 OpenDev 本身就像一个开源项目一样运行,您可以加入我们并帮助运行系统。此外,运行的所有服务本身都是开源软件。

+

OpenStack 项目是使用 OpenDev 的最大项目。

+

OpenLDAP

+

开源 LDAP 服务器。受计算和标识支持。

+

OpenStack

+

OpenStack 是一个云操作系统,可控制整个数据中心的大型计算、存储和网络资源池,所有这些资源都通过仪表板进行管理,该仪表板使管理员能够进行控制,同时授权用户通过 Web 界面配置资源。OpenStack 是一个根据 Apache License 2.0 许可的开源项目。

+

OpenStack 代码名称

+

每个 OpenStack 版本都有一个代号。代号按字母顺序排列:Austin, Bexar, Cactus, Diablo, Essex, Folsom, Grizzly, Havana, Icehouse, Juno, Kilo, Liberty, Mitaka, Newton, Ocata, Pike, Queens, Rocky, Stein, Train, Ussuri, Victoria, Wallaby, Xena, Yoga, Zed。

+

Wallaby 是新策略选择的第一个代号:代号由社区按照字母顺序选择,有关详细信息,请参阅发布名称标准。

+

Victoria 是按旧规则选出的最后一个代号,旧规则中代号是靠近相应 OpenStack 设计峰会举办地的城市或县。有一个被称为 Waldon 例外的特例,允许使用州旗中听起来特别酷的元素。代号由大众投票选出。

+

与此同时,随着 OpenStack 发行版将字母表用完,技术委员会改变了命名过程,将发行号和发行版名称共同作为识别码。版本号将是主要标识符,格式为“年份.年内发布序号”,名称将主要用于营销目的。第一个这样的版本是 2023.1 Antelope,紧随其后的是 2023.2 Bobcat、2024.1 Caracal。

+

openSUSE

+

与 OpenStack 兼容的 Linux 发行版。

+

操作员

+

负责规划和维护 OpenStack 安装的人员。

+

可选服务

+

由 Interop 工作组定义为可选的官方 OpenStack 服务。目前,由 Dashboard (horizon)、Telemetry 服务 (Telemetry)、Orchestration 服务 (heat)、Database 服务 (trove)、Bare Metal 服务 (ironic) 等组成。

+

编排服务(heat)

+

OpenStack 服务,它通过 OpenStack 原生 REST API 使用声明性模板格式编排复合云应用程序。

+

orphan

+

在对象存储的上下文中,这是一个在升级、重新启动或重新加载服务后不会终止的过程。

+

Oslo

+

Common Libraries 项目的代号。

+

P

+

panko

+

OpenStack Telemetry 服务的一部分;提供事件存储。

+

父单元格

+

如果请求的资源(如 CPU 时间、磁盘存储或内存)在父单元中不可用,则该请求将转发到关联的子单元。

+

分区

+

对象存储中用于存储对象的存储单元。它存在于设备之上,并被复制以实现容错。.

+

分区索引

+

包含环内所有对象存储分区的位置。

+

分区偏移值

+

对象存储用于确定数据应驻留在哪个分区上。

+

路径 MTU 发现 (PMTUD)

+

IP 网络中用于检测端到端 MTU 并相应地调整数据包大小的机制。

+

暂停

+

未发生任何更改(内存未更改、网络通信停止等)的 VM 状态;VM 已冻结,但未关闭。

+

PCI直通

+

为客户机虚拟机提供对 PCI 设备的独占访问权限。目前在 OpenStack Havana 及更高版本中受支持。

+

持久消息

+

存储在内存和磁盘上的消息。失败或重新启动后,消息不会丢失。

+

持久卷

+

将保存对这些类型的磁盘卷所做的更改。

+

个性文件

+

用于自定义 Compute 实例的文件。它可用于注入 SSH 密钥或特定的网络配置。

+

Pike

+

OpenStack 第 16 版的代号。OpenStack峰会在美国马萨诸塞州波士顿举行。该版本以马萨诸塞州收费公路命名,通常缩写为马萨诸塞州收费公路,这是 90 号州际公路最东端的路段。

+

平台即服务(PaaS)

+

为使用者提供操作系统,通常还为语言运行时和库(统称为“平台”)提供,消费者可以在其上运行自己的应用程序代码,而无需提供对底层基础结构的任何控制。平台即服务提供商的示例包括 Cloud Foundry 和 OpenShift。

+

插件

+

为网络 API 或计算 API 提供实际实现的软件组件,具体取决于上下文。

+

策略服务

+

标识组件,提供规则管理接口和基于规则的授权引擎。

+

基于策略的路由 (PBR)

+

提供一种机制,用于根据网络管理员定义的策略实现数据包转发和路由。

+

池

+

一组逻辑设备,例如 Web 服务器,您可以将其组合在一起以接收和处理流量。负载平衡功能选择池中的哪个成员处理在 VIP 地址上收到的新请求或连接。每个 VIP 都有一个池。

+

池成员

+

在负载平衡系统中的后端服务器上运行的应用程序。

+

端口

+

网络中的虚拟网络端口;VIF / vNIC 连接到端口。

+

端口 UUID

+

网络端口的唯一 ID。

+

预置

+

在基于 Debian 的 Linux 发行版上自动进行系统配置和安装的工具。

+

私有云

+

一个企业或组织独占使用的计算资源。

+

私有映像

+

仅对指定项目可用的映像服务虚拟机映像。

+

私有 IP 地址

+

用于管理和管理的 IP 地址,不可用于公共 Internet。

+

专用网络

+

网络控制器提供虚拟网络,使计算服务器能够相互交互以及与公用网络交互。所有计算机都必须具有公共和专用网络接口。专用网络接口可以是平面网络接口,也可以是 VLAN 网络接口。扁平化网络接口由具有扁平化管理器的flat_interface控制。VLAN 网络接口由带有 VLAN 管理器的 vlan_interface 选件控制。

+

项目

+

项目代表了OpenStack中“所有权”的基本单位,因为OpenStack中的所有资源都应该由特定项目拥有。在 OpenStack Identity 中,项目必须由特定域拥有。

+

项目 ID

+

Identity 服务分配给每个项目的唯一 ID。

+

项目 VPN

+

cloudpipe 的替代术语。

+

混杂模式

+

使网络接口将其接收的所有流量传递到主机,而不是仅传递寻址到它的帧。

+

受保护的属性

+

通常,只有云管理员才能访问的映像服务映像上的额外属性。限制哪些用户角色可以对该属性执行 CRUD 操作。云管理员可以将任何映像属性配置为受保护。

+

提供者

+

有权访问所有主机和实例的管理员。

+

代理节点

+

提供Object Storage代理服务的节点。

+

代理服务器

+

对象存储的用户通过代理服务器与服务进行交互,代理服务器又在环内查找所请求数据的位置,并将结果返回给用户。

+

公共 API

+

用于服务到服务通信和最终用户交互的 API 终结点。

+

公有云

+

许多用户可通过 Internet 访问的数据中心。

+

公共镜像

+

可供所有项目使用的镜像服务虚拟机镜像。

+

公网 IP 地址

+

最终用户可访问的 IP 地址。

+

公钥认证

+

使用密钥而不是密码的身份验证方法。

+

公网

+

网络控制器提供虚拟网络,使计算服务器能够相互交互以及与公用网络交互。所有计算机都必须具有公共和专用网络接口。公用网络接口由该 public_interface 选项控制。

+

Puppet

+

OpenStack支持的操作系统配置管理工具。

+

Python 模型

+

OpenStack中广泛使用的编程语言。

+

Q

+

QEMU 写入时复制 2 (QCOW2)

+

镜像服务支持的虚拟机镜像磁盘格式之一。

+

Qpid

+

OpenStack 支持的消息队列软件;RabbitMQ 的替代品。

+

服务质量 (QoS)

+

保证某些网络或存储要求以满足应用程序提供商和最终用户之间的服务级别协议 (SLA) 的能力。通常包括网络带宽、延迟、抖动校正和可靠性等性能要求,以及每秒输入/输出操作数 (IOPS) 中的存储性能、限制协议和峰值负载下的性能预期。

+

隔离

+

如果对象存储发现对象、容器或帐户已损坏,则会将其置于此状态,不会被复制,客户端无法读取,并且会重新复制正确的副本。

+

Queens

+

OpenStack 第 17 版的代号。OpenStack峰会在澳大利亚悉尼举行。该版本以新南威尔士州南海岸地区的皇后庞德河命名。

+

Quick EMUlator (QEMU) (快速 EMUlator)

+

QEMU 是一个通用的开源机器仿真器和虚拟化器。OpenStack 支持的虚拟机管理程序之一,通常用于开发目的。

+

配额

+

在计算和块存储中,能够基于每个项目设置资源限制。

+

R

+

RabbitMQ 模型

+

OpenStack 使用的默认消息队列软件。

+

Rackspace 云文件

+

2010 年由 Rackspace 开源发布;对象存储的基础。

+

RADOS 块设备 (RBD)

+

Ceph 组件,使 Linux 块设备能够在多个分布式数据存储上进行条带化。

+

radvd

+

路由器通告守护程序,由计算 VLAN 管理器和 FlatDHCP 管理器用于为 VM 实例提供路由服务。

+

rally

+

Benchmark 服务的代号。

+

RAM过滤器

+

启用或禁用 RAM 过量分配的计算设置。

+

RAM 过量分配

+

能够根据主机的实际内存使用情况启动新的 VM 实例,而不是根据每个正在运行的实例认为其可用的 RAM 量来做出决定。也称为内存过量使用。

+

速率限制

+

对象存储中的可配置选项,用于限制每个帐户和/或每个容器的数据库写入。

+

原始

+

映像服务支持的虚拟机映像磁盘格式之一;非结构化磁盘映像。

+

重新平衡

+

在环中的所有驱动器之间分配对象存储分区的过程;在初始环创建期间和环重新配置后使用。

+

重启

+

对服务器进行软重启或硬重启。通过软重启,操作系统会发出重新启动信号,从而可以正常关闭所有进程。硬重启相当于重启服务器。虚拟化平台应确保重新启动操作已成功完成,即使在基础域/VM 暂停或停止/停止的情况下也是如此。

+

重建

+

删除服务器上的所有数据,并将其替换为指定的映像。服务器 ID 和 IP 地址保持不变。

+

侦察

+

用于收集计量的对象存储组件。

+

记录

+

属于特定域,用于指定有关该域的信息。有几种类型的 DNS 记录。每种记录类型都包含用于描述该记录用途的特定信息。示例包括邮件交换 (MX) 记录,它指定特定域的邮件服务器;和名称服务器 (NS) 记录,用于指定域的权威名称服务器。

+

记录 ID

+

数据库中的一个数字,每次进行更改时都会递增。对象存储在复制时使用。

+

Red Hat Enterprise Linux (RHEL)

+

与 OpenStack 兼容的 Linux 发行版。

+

参考架构

+

OpenStack 云的推荐架构。

+

区域

+

具有专用 API 端点的离散 OpenStack 环境,通常仅与其他区域共享身份 (keystone)。

+

注册表

+

影像服务注册表的替代术语。

+

注册表服务器

+

向客户端提供虚拟机镜像元数据信息的镜像服务。

+

可靠、自主的分布式对象存储 (RADOS)

+

在 Ceph 中提供对象存储的组件集合。类似于 OpenStack Object Storage。

+

远程过程调用 (RPC)

+

计算RabbitMQ 用于服务内通信的方法。

+

副本

+

通过创建对象存储对象、帐户和容器的副本来提供数据冗余和容错,以便在底层存储发生故障时不会丢失它们。

+

副本数量

+

对象存储环中数据的副本数。

+

复制

+

将数据复制到单独的物理设备以实现容错和性能的过程。

+

复制器

+

对象存储后端进程,用于创建和管理对象副本。

+

请求 ID

+

分配给发送到计算的每个请求的唯一 ID。

+

救援映像

+

一种特殊类型的 VM 映像,在将实例置于救援模式时启动。允许管理员挂载实例的文件系统以更正问题。

+

调整大小

+

将现有服务器转换为其他风格,从而扩展或缩减服务器。保存原始服务器以在出现问题时启用回滚。必须测试并明确确认所有调整大小,此时将删除原始服务器。

+

RESTful

+

一种使用 REST 或具象状态传输的 Web 服务 API。REST 是用于万维网的超媒体系统的架构风格。

+

环

+

将对象存储数据映射到分区的实体。每个服务(例如帐户、对象和容器)都存在一个单独的环。

+

环构建器

+

在对象存储中构建和管理环,为设备分配分区,并将配置推送到其他存储节点。

+

Rocky

+

OpenStack 第 18 版的代号。OpenStack峰会在加拿大温哥华举行。该版本以落基山脉命名。

+

角色

+

用户为执行一组特定操作而假定的个性。角色包括一组权限和特权。担任该角色的用户将继承这些权利和特权。

+

基于角色的访问控制 (RBAC)

+

提供用户可以执行的操作的预定义列表,例如启动或停止 VM、重置密码等。在标识和计算中均受支持,可以使用仪表板进行配置。

+

角色 ID

+

分配给每个身份服务角色的字母数字 ID。

+

根本原因分析(RCA)服务(Vitrage)

+

OpenStack项目旨在组织、分析和可视化OpenStack警报和事件,深入了解问题的根本原因,并在直接检测到问题之前推断出它们的存在。

+

rootwrap

+

计算的一项功能,允许非特权“nova”用户以 Linux root 用户身份运行指定的命令列表。

+

循环调度器

+

在可用主机之间均匀分配实例的计算计划程序的类型。

+

路由器

+

在不同网络之间传递网络流量的物理或虚拟网络设备。

+

路由密钥

+

计算直接交换、扇出交换和主题交换使用此密钥来确定如何处理消息;处理方式因 Exchange 类型而异。

+

RPC 驱动程序

+

模块化系统,允许更改 Compute 的底层消息队列软件。例如,从 RabbitMQ 到 ZeroMQ 或 Qpid。

+

rsync

+

由对象存储用于推送对象副本。

+

RXTX 限 制

+

计算 VM 实例可以发送和接收的网络流量的绝对限制。

+

RXTX 配额

+

对计算 VM 实例可以发送和接收的网络流量的软限制。

+

S

+

sahara

+

数据处理服务的代号。

+

SAML 断言

+

包含标识提供者提供的有关用户的信息。这表示用户已通过身份验证。

+

沙盒

+

一个虚拟空间,可以在其中安全地运行新的或未经测试的软件。

+

调度器管理器

+

一个计算组件,用于确定 VM 实例的启动位置。采用模块化设计,支持多种调度程序类型。

+

作用域令牌

+

与特定项目关联的身份服务 API 访问令牌。

+

洗涤器

+

检查并删除未使用的虚拟机;实现延迟删除的影像服务组件。

+

密钥

+

只有用户知道的文本字符串;与访问密钥一起使用,以向计算 API 发出请求。

+

安全启动

+

系统固件验证启动过程中涉及的代码的真实性的过程。

+

安全外壳 (SSH)

+

用于通过加密通信通道访问远程主机的开源工具,计算支持 SSH 密钥注入。

+

安全组

+

应用于计算实例的一组网络流量筛选规则。

+

分段对象

+

已分解为多个部分的对象存储大型对象。重新组合的对象称为串联对象。

+

自助服务

+

对于 IaaS,常规(非特权)帐户能够在不涉及管理员的情况下管理虚拟基础架构组件(如网络)。

+

SELinux 函数

+

Linux 内核安全模块,提供用于支持访问控制策略的机制。

+

senlin

+

群集服务的代码名称。

+

服务器

+

为该系统上运行的客户端软件提供显式服务的计算机,通常管理各种计算机操作。服务器是计算系统中的 VM 实例。风格和图像是创建服务器时的必要元素。

+

服务器映像

+

VM 映像的替代术语。

+

服务器 UUID

+

分配给每个来宾 VM 实例的唯一 ID。

+

服务

+

OpenStack 服务,例如计算、对象存储或映像服务。提供一个或多个端点,用户可以通过这些端点访问资源和执行操作。

+

服务目录

+

Identity 服务目录的替代术语。

+

服务功能链 (SFC)

+

对于给定的服务,SFC 是所需服务功能及其应用顺序的抽象视图。

+

服务 ID

+

分配给 Identity 服务目录中可用的每个服务的唯一 ID。

+

服务水平协议 (SLA)

+

确保服务可用性的合同义务。

+

服务项目

+

包含目录中列出的所有服务的特殊项目。

+

服务提供者

+

向其他系统实体提供服务的系统。在联合身份的情况下,OpenStack 身份是服务提供者。

+

服务注册

+

一种身份服务功能,使服务(如计算)能够自动注册到目录。

+

服务令牌

+

管理员定义的令牌,由计算用于与身份服务进行安全通信。

+

会话后端

+

Horizon 用于跟踪客户端会话的存储方法,例如本地内存、Cookie、数据库或 memcached。

+

会话持久化

+

负载平衡服务的一项功能。只要某个服务处于联机状态,它就会尝试强制将服务的后续连接重定向到同一节点。

+

会话存储

+

用于存储和跟踪客户端会话信息的 Horizon 组件。通过 Django 会话框架实现。

+

共享

+

共享文件系统服务上下文中的远程可挂载文件系统。您可以一次将共享装载到多个主机,也可以由多个用户从多个主机访问共享。

+

共享网络

+

共享文件系统服务上下文中的实体,用于封装与网络服务的交互。如果所选驱动程序在需要此类交互的模式下运行,则需要指定共享网络以创建共享。

+

共享文件系统 API

+

提供稳定 RESTful API 的共享文件系统服务。该服务在整个共享文件系统服务中对请求进行身份验证和路由。有 python-manilaclient 可以与 API 交互。

+

共享文件系统服务(manila)

+

该服务提供一组服务,用于管理多项目云环境中的共享文件系统,类似于 OpenStack 通过 OpenStack Block Storage 服务项目提供基于块的存储管理。使用共享文件系统服务,您可以创建远程文件系统并将文件系统挂载到您的实例上。您还可以在文件系统中读取和写入实例中的数据。

+

共享 IP 地址

+

可分配给共享 IP 组中的 VM 实例的 IP 地址。公共 IP 地址可以在多个服务器之间共享,以便在各种高可用性方案中使用。当 IP 地址共享到另一台服务器时,将修改云网络限制,使每个服务器都能侦听和响应该 IP 地址。您可以选择指定修改目标服务器网络配置。共享 IP 地址可以与许多标准检测信号工具(如 keepalive)一起使用,这些工具可监视故障并管理 IP 故障转移。

+

共享 IP 组

+

可以与组的其他成员共享 IP 的服务器集合。组中的任何服务器都可以与组中的任何其他服务器共享一个或多个公共 IP。除了共享 IP 组中的第一台服务器外,服务器必须启动到共享 IP 组中。一台服务器只能是一个共享 IP 组的成员。

+

共享存储

+

可由多个客户端同时访问的块存储,例如 NFS。

+

Sheepdog

+

面向 QEMU 的分布式块存储系统,由 OpenStack 提供支持。

+

简单云身份管理 (SCIM)

+

用于在云中管理身份的规范,目前不受 OpenStack 支持。

+

独立计算环境的简单协议 (SPICE)

+

SPICE 提供对客户机虚拟机的远程桌面访问。它是 VNC 的替代品。OpenStack支持SPICE。

+

单根 I/O 虚拟化 (SR-IOV)

+

当由物理 PCIe 设备实现时,该规范使其能够显示为多个单独的 PCIe 设备。这使多个虚拟化客户机能够共享对物理设备的直接访问,从而提供比等效虚拟设备更高的性能。目前在 OpenStack Havana 及更高版本中受支持。

+

SmokeStack

+

针对核心 OpenStack API 运行自动化测试;用 Rails 编写。

+

快照

+

OpenStack 存储卷或映像的时间点副本。使用存储卷快照备份卷。使用映像快照来备份数据,或作为其他服务器的“黄金”映像。

+

软重启

+

通过操作系统命令正确重启 VM 实例的受控重启。

+

软件开发工具包 (SDK)

+

包含代码、示例和文档,您可以使用这些代码、示例和文档以所选语言创建应用程序。

+

软件开发生命周期自动化服务(solum)

+

OpenStack项目,旨在通过自动化从源到映像的过程,并简化以应用程序为中心的部署,使云服务更易于使用并与应用程序开发过程集成。

+

软件定义网络 (SDN)

+

为网络管理员提供一种方法,通过抽象较低级别的功能来管理计算机网络服务。

+

SolidFire 卷驱动程序

+

SolidFire iSCSI 存储设备的块存储驱动程序。

+

solum

+

软件开发生命周期自动化服务的代号。

+

分散优先调度器

+

计算 VM 计划算法,尝试以最小的负载在主机上启动新 VM。

+

SQLAlchemy

+

用于 Python 的开源 SQL 工具包,用于 OpenStack。

+

SQLite

+

一个轻量级的 SQL 数据库,在许多 OpenStack 服务中用作默认的持久化存储方法。

+

堆栈

+

由编排服务根据给定模板(AWS CloudFormation 模板或 Heat 编排模板 (HOT))创建和管理的一组 OpenStack 资源。

+

StackTach

+

捕获计算 AMQP 通信的社区项目;对调试很有用。

+

静态 IP 地址

+

固定 IP 地址的替代术语。

+

静态网页

+

对象存储的 WSGI 中间件组件,将容器数据作为静态网页提供。

+

Stein

+

OpenStack 第 19 版的代号。OpenStack峰会在德国柏林举行。该版本以柏林的 Steinstraße 街命名。

+

存储后端

+

服务用于持久性存储的方法,例如 iSCSI、NFS 或本地磁盘。

+

存储管理器

+

一个 XenAPI 组件,它提供可插入接口以支持各种持久性存储后端。

+

存储管理器后端

+

XenAPI 支持的持久性存储方法,例如 iSCSI 或 NFS。

+

存储节点

+

提供容器服务、账户服务和对象服务的对象存储节点;控制帐户数据库、容器数据库和对象存储。

+

存储服务

+

提供容器服务、账户服务和对象服务的对象存储节点;控制帐户数据库、容器数据库和对象存储。

+

存储服务

+

对象存储对象服务、容器服务和帐户服务的集合名称。

+

策略

+

指定镜像服务或身份使用的认证源。在数据库服务中,它是指为数据存储实现的扩展。

+

子域

+

父域中的域。无法注册子域。子域使您能够委派域。子域本身可以有子域,因此可以进行三级、四级、五级和更深级别的嵌套。

+

子网

+

IP 网络的逻辑细分。

+

SUSE Linux Enterprise Server (SLES)

+

与 OpenStack 兼容的 Linux 发行版。

+

挂起

+

虚拟机实例将暂停,其状态将保存到主机的磁盘中。

+

交换

+

操作系统使用的基于磁盘的虚拟内存,用于提供比系统上实际可用的内存更多的内存。

+

swift

+

OpenStack 对象存储服务的代号。

+

swift 多合一 (SAIO)

+

在单个虚拟机内创建完整的对象存储开发环境。

+

Swift 中间件

+

提供附加功能的对象存储组件的统称。

+

Swift 代理服务器

+

充当对象存储的网守,并负责对用户进行身份验证。

+

Swift 存储节点

+

运行对象存储帐户、容器和对象服务的节点。

+

同步点

+

自上次容器和帐户数据库在对象存储中的节点之间同步以来的时间点。

+

系统管理员

+

计算 RBAC 系统中的默认角色之一。使用户能够将其他用户添加到项目中,与与项目关联的 VM 映像进行交互,以及启动和停止 VM 实例。

+

系统使用情况

+

一个计算组件,它与通知系统一起收集计量和使用情况信息。此信息可用于计费。

+

T

+

Tacker

+

NFV 编排服务的代码名称

+

遥测服务(telemetry)

+

OpenStack项目收集包含已部署云的物理和虚拟资源利用率的测量值,保留此数据以供后续检索和分析,并在满足定义的条件时触发操作。

+

TempAuth 函数

+

Object Storage中的一种身份验证工具,使Object Storage本身能够执行身份验证和授权。经常用于测试和开发。

+

Tempest

+

自动化软件测试套件,旨在针对 OpenStack 核心项目的主干运行。

+

TempURL

+

一个对象存储中间件组件,用于创建用于临时对象访问的 URL。

+

租户

+

一组用户;用于隔离对计算资源的访问。项目的替代术语。

+

租户 API

+

项目可访问的 API。

+

租户端点

+

与一个或多个项目关联的身份服务 API 端点。

+

租户 ID

+

项目 ID 的替代术语。

+

令牌

+

用于访问 OpenStack API 和资源的字母数字文本字符串。

+

令牌服务

+

一个身份服务组件,用于在用户或项目通过身份验证后管理和验证令牌。

+

逻辑删除

+

用于标记已删除的对象存储对象;确保对象在删除后不会在另一个节点上更新。

+

主题发布者

+

执行 RPC 调用时创建的进程;用于将消息推送到主题交换。

+

Torpedo

+

用于针对 OpenStack API 运行自动化测试的社区项目。

+

Train

+

OpenStack 第 20 版的代号。OpenStack 基础架构峰会在美国科罗拉多州丹佛市举行。

+

丹佛的两次项目团队聚会会议在从市中心到机场的火车线旁边的一家酒店举行。那里的交叉信号灯过去曾出现过某种故障,导致它们在火车正常驶来时没有停下车厢。因此,火车在经过该地区时必须鸣喇叭。显然,住在酒店里,乘坐火车24/7吹喇叭,不太理想。结果,出现了许多关于丹佛和火车的笑话——因此这个版本被称为火车。

+

交易 ID

+

分配给每个对象存储请求的唯一 ID;用于调试和跟踪。

+

瞬态

+

非耐用品的替代术语。

+

瞬态交换

+

非持久交换的替代术语。

+

瞬态消息

+

存储在内存中并在服务器重新启动后丢失的消息。

+

瞬态队列

+

非持久队列的替代术语。

+

TripleO

+

OpenStack-on-OpenStack 程序。OpenStack Deployment 程序的代号。

+

Trove

+

OpenStack 数据库服务的代号。

+

可信平台模块(TPM)

+

专用微处理器,用于将加密密钥整合到设备中,以验证和保护硬件平台。

+

U

+

Ubuntu

+

基于 Debian 的 Linux 发行版。

+

无作用域令牌

+

Identity 服务默认令牌的替代术语。

+

更新器

+

一组对象存储组件的统称,用于处理容器和对象的排队和失败的更新。

+

用户

+

在 OpenStack Identity 中,实体代表单个 API 使用者,并由特定域拥有。在 OpenStack 计算中,用户可以与角色和/或项目相关联。

+

用户数据

+

用户在启动实例时可以指定的数据 Blob。实例可以通过元数据服务或配置驱动器访问此数据。通常用于传递实例在启动时运行的 shell 脚本。

+

用户模式 Linux (UML)

+

支持 OpenStack 的虚拟机管理程序。

+

Ussuri

+

OpenStack 第 21 版的代号。OpenStack基础设施峰会在中华人民共和国上海举行。该版本以乌苏里河命名。

+

V

+

Victoria

+

OpenStack 第 22 版的代号。OpenDev + PTG 计划在加拿大不列颠哥伦比亚省温哥华举行。该版本以不列颠哥伦比亚省首府维多利亚命名。

+

由于 COVID-19,现场活动被取消。该事件正在虚拟化。

+

VIF UUID

+

分配给每个网络 VIF 的唯一 ID。

+

虚拟中央处理器 (vCPU)

+

细分物理 CPU。然后,实例可以使用这些分区。

+

虚拟磁盘映像 (VDI)

+

映像服务支持的虚拟机映像磁盘格式之一。

+

虚拟可扩展局域网 (VXLAN)

+

一种网络虚拟化技术,试图减少与大型云计算部署相关的可伸缩性问题。它使用类似 VLAN 的封装技术将以太网帧封装在 UDP 数据包中。

+

虚拟硬盘 (VHD)

+

镜像服务支持的虚拟机镜像磁盘格式之一。

+

虚拟 IP 地址 (VIP)

+

在负载平衡器上配置的 Internet 协议 (IP) 地址,供连接到负载平衡服务的客户端使用。传入连接将根据负载均衡器的配置分发到后端节点。

+

虚拟机 (VM)

+

在虚拟机监控程序上运行的操作系统实例。多个 VM 可以在同一物理主机上同时运行。

+

虚拟网络

+

网络中的 L2 网段。

+

虚拟网络计算 (VNC)

+

用于远程控制台访问 VM 的开源 GUI 和 CLI 工具。

+

虚拟网络接口 (VIF)

+

插入网络网络中的端口的接口。通常属于 VM 的虚拟网络接口。

+

虚拟网络

+

使用物理网络基础架构上的虚拟机和覆盖网络组合实现网络功能虚拟化(如交换、路由、负载平衡和安全性)的通用术语。

+

虚拟端口

+

虚拟接口连接到虚拟网络的连接点。

+

虚拟专用网络 (VPN)

+

由 Compute 以 cloudpipes 的形式提供,这些专用实例用于按项目创建 VPN。

+

虚拟服务器

+

VM 或来宾的替代术语。

+

虚拟交换机 (vSwitch)

+

在主机或节点上运行并提供基于硬件的网络交换机的特性和功能的软件。

+

虚拟 VLAN

+

虚拟网络的替代术语。

+

VirtualBox

+

支持 OpenStack 的虚拟机管理程序。

+

Vitrage

+

Root Cause Analysis服务的代码名称。

+

VLAN 管理器

+

一个 Compute 组件,它提供 dnsmasq 和 radvd,并设置与 cloudpipe 实例之间的转发。

+

VLAN 网络

+

网络控制器提供虚拟网络,使计算服务器能够相互交互以及与公用网络交互。所有计算机都必须具有公共和专用网络接口。VLAN 网络是一个专用网络接口,由 VLAN 管理器 vlan_interface 选项控制。

+

虚拟机磁盘(VMDK)

+

镜像服务支持的虚拟机镜像磁盘格式之一。

+

虚拟机映像

+

映像的替代术语。

+

虚拟机远程控制 (VMRC)

+

使用 Web 浏览器访问 VM 实例控制台的方法。由计算支持。

+

VMware API 接口

+

支持在计算中与 VMware 产品进行交互。

+

VMware NSX Neutron 插件

+

在 Neutron 中提供对 VMware NSX 的支持。

+

VNC 代理

+

一个计算组件,允许用户通过 VNC 或 VMRC 访问其 VM 实例的控制台。

+

卷

+

基于磁盘的数据存储通常表示为具有支持扩展属性的文件系统的 iSCSI 目标;可以是持久的,也可以是短暂的。

+

卷 API

+

块存储 API 的替代名称。

+

卷控制器

+

一个块存储组件,用于监督和协调存储卷操作。

+

卷驱动程序

+

卷插件的替代术语。

+

卷 ID

+

应用于块存储控制下每个存储卷的唯一 ID。

+

卷管理器

+

用于创建、附加和分离持久性存储卷的块存储组件。

+

卷节点

+

运行 cinder-volume 守护程序的块存储节点。

+

卷插件

+

为块存储卷管理器提供对新型和专用后端存储类型的支持。

+

卷工作器

+

一个 cinder 组件,它与后端存储交互,以管理卷的创建和删除以及计算卷的创建,由 cinder-volume 守护程序提供。

+

vSphere

+

支持 OpenStack 的虚拟机管理程序。

+

W

+

Wallaby

+

OpenStack 第 23 版的代号。小袋鼠原产于澳大利亚,在这个命名期开始时,澳大利亚正在经历前所未有的野火。

+

Watcher

+

基础结构优化服务的代号。

+

权重

+

对象存储设备用于确定哪些存储设备适合作业。设备按大小加权。

+

加权成本

+

决定在计算中启动新 VM 实例的位置时所使用的每个成本的总和。

+

加权

+

一个计算过程,用于确定 VM 实例是否适合特定主机的作业。例如,主机上的 RAM 不足、主机上的 CPU 过多等。

+

工作者

+

侦听队列并执行任务以响应消息的守护程序。例如, cinder-volume worker 管理存储阵列上的卷创建和删除。

+

工作流服务 (mistral)

+

OpenStack服务提供了一种基于YAML的简单语言来编写工作流(任务和转换规则),以及一种允许上传、修改、大规模和高度可用的方式运行它们、管理和监控工作流执行状态和单个任务状态的服务。

+

X

+

X.509

+

X.509 是定义数字证书的最广泛使用的标准。它是一种数据结构,包含主题(实体)可识别信息,例如其名称及其公钥。证书还可以包含一些其他属性,具体取决于版本。X.509 的最新标准版本是 v3。

+

Xen

+

Xen 是一个使用微内核设计的虚拟机管理程序,它提供的服务允许多个计算机操作系统在同一计算机硬件上同时执行。

+

Xen API

+

Xen 管理 API,受 Compute 支持。

+

Xen 云平台 (XCP)

+

支持 OpenStack 的虚拟机管理程序。

+

Xen Storage Manager 卷驱动程序

+

支持与 Xen Storage Manager API 进行通信的块存储卷插件。

+

Xena

+

OpenStack 第 24 版的代号。该版本以虚构的战士公主命名。

+

XenServer

+

支持 OpenStack 的虚拟机管理程序。

+

XFS 函数

+

由 Silicon Graphics 创建的高性能 64 位文件系统。在并行 I/O 操作和数据一致性方面表现出色。

+

Y

+

Yoga

+

OpenStack 第 25 版的代号。该版本以来自印度的一所哲学学校命名,该学校具有心理和身体实践。

+

Z

+

zaqar

+

消息服务的代号。

+

Zed

+

OpenStack 第 26 版的代号。该版本以字母 Z 的发音命名。

+

ZeroMQ

+

OpenStack 支持的消息队列软件。RabbitMQ 的替代品。也拼写为 0MQ。

+

Zuul

+

Zuul 是一个开源 CI/CD 平台,专门用于在登陆单个补丁之前跨多个系统和应用程序进行门控更改。

+

Zuul 用于 OpenStack 开发,以确保只有经过测试的代码才会被合并。

diff --git a/site/sitemap.xml b/site/sitemap.xml
new file mode 100644
index 0000000000000000000000000000000000000000..0f8724efd9fecfd8e03fbb4401d666e764ce9cf5
--- /dev/null
+++ b/site/sitemap.xml
@@ -0,0 +1,3 @@
diff --git a/site/sitemap.xml.gz b/site/sitemap.xml.gz
new file mode 100644
index 0000000000000000000000000000000000000000..b3d9f09530152b1e505db81e36a45e9a83eac32b
Binary files /dev/null and b/site/sitemap.xml.gz differ
diff --git a/site/spec/distributed-traffic/index.html b/site/spec/distributed-traffic/index.html
new file mode 100644
index 0000000000000000000000000000000000000000..18e6472b6094410a86ae23d21a6e2ed6dfe03b8b
--- /dev/null
+++ b/site/spec/distributed-traffic/index.html
@@ -0,0 +1,464 @@

流量分散 - OpenStack SIG Doc

流量分散

+

概述

+

OpenStack为用户提供计算和网络服务。用户创建虚拟机并连接Router可以访问外部网络,同时可以开启浮动IP的端口映射,让外部网络的设备访问虚拟机内部的服务。但与此同时,随着虚拟机和浮动IP端口映射数量的增多,网络节点的压力也越来越大,必须找到分散网络节点流量、疏解网络节点压力的方法。本方案实现了在OpenStack环境中将网络节点流量分散,保证兼容支持L3 HA和DVR,同时又将网络资源使用最小化。

+

背景

+

用户创建虚拟机并连接Router的基本流程如下。

+
1. 用户提前创建内部网络和外部网络。
2. 创建Router时指定External Gateway为提前创建的外部网络。
3. 将Router和创建好的内部网络进行连接。
4. 创建虚拟机实例时指定内部网络。
5. 利用创建的外部网络创建浮动IP。
6. 为虚拟机实例开启浮动IP端口映射。
+

经过上面的操作,用户创建的虚拟机实例可以访问到外部网络,外部网络的设备也可以根据浮动IP指定的端口访问虚拟机实例内部的服务。

+

在一个基本的OpenStack环境中虚拟机实例的流量走向如下所示。

+

单个虚拟机网络流量

+

在用户创建完多个实例后,虚拟机实例可能会均匀分布在各个计算节点,虚拟机的流量走向可能如下图所示。

+

多个虚拟机网络流量

+

可以看到,不论虚拟机的东西流量还是南北流量都会经过Network-1节点,这无疑加大了网络节点的负载,同时当网络节点发生故障时不能很好地进行故障恢复。那么是否可以将同一子网绑定多个Router?在OpenStack中同一子网可以绑定多个Router,但是子网在绑定Router时默认会将子网的网关地址绑定到Router上,一个子网只有一个网关地址,同时这个网关地址又会在DHCP服务中用到,用于给虚拟机实例提供下一跳的网关地址。于是即使将子网绑定到多个Router上,虚拟机内部下一跳的网关地址还会是子网的网关地址,而且Router选择的网络节点用户是不可控的,难免会出现虽然子网绑定了两个Router,但是这两个Router在同一个网络节点上的尴尬场面。

+

为了分散流量,OpenStack有应对的策略:可以将neutron的DVR功能打开,为预防网络节点的单点故障也可以打开neutron的L3 HA,但是上述方法也有它们的局限性。

+

DVR的流量分散有比较大的局限性,原因有以下几点:DVR只作用于同一Router下不同计算节点的虚拟机实例之间的东西流量,以及已经绑定浮动IP的虚拟机的南北流量;对于未绑定浮动IP的虚拟机,访问外部网络依旧需要经过网络节点。

+

生产环境下,给每个虚拟机都绑定浮动IP是不切实际的,但是可以通过开启浮动IP的端口映射,让多台虚拟机对应一个浮动IP。然而在目前的OpenStack版本中,不论是否开启DVR,浮动IP的端口映射都是在网络节点的网络命名空间中完成的。最后一点,DVR模式下,为了让虚拟机的南北流量不经过网络节点,直接从计算节点走出,会在每个计算节点上生成一个fip开头的网络命名空间,即使虚拟机没有绑定浮动IP。而这个fip网络命名空间会占用一个外部网络的IP地址,这无疑会加大网络资源的消耗。

+

L3 HA也有几点不足。开启L3 HA后,Router利用keepalived会在几个网络节点之间进行选择,只有keepalived状态为Master的网络节点才会承担真正的流量转发任务,而对于网络节点的选择,用户无权干涉。虽然neutron中给出了Router的默认调度策略,也就是最少Router数,即Router会调度到Router个数最少的网络节点上,但底层keepalived开启的是非抢占模式,也就是当vip发生漂移后,即使主服务器恢复正常,也不会自动将资源从备用服务器手中抢占回来,这又增加了真正运行Router的网络节点的不确定性。

+

总结一下,现有的技术方案做不到真正的流量分发,即使在开启DVR后,一方面会有一些额外网络资源的损耗,同时又因为Router的网络节点的不确定性,导致虚拟机的南北流量无法做到很好的分发。

+

需要解决的问题

+

实现DVR模式和L3 HA模式下以及Legacy模式下网络分发。首先要解决以下几个技术问题:

+
1. Router可以指定网络节点,不论是否开启L3 HA。
2. 同一子网绑定多个Router时,DHCP服务能为不同计算节点的虚拟机提供不同的路由方式。
3. 在用户使用端口映射时,可以将Router的External Gateway的IP地址作为外部网络的地址。
+

实现方案

+

解决指定L3 agent的问题

+

首先修改Router的底层数据库为其添加一个configurations字段,用于存储Router的相关配置信息,configurations的格式如下所示。

+
{
+  "configurations": {
+    "preferred_agent": "network-1"
+  }
+}
+

在未开启L3 HA时,preferred_agent字段用于指定Router位于的网络节点。 +在开启L3 HA时,configurations的格式如下所示。

+
{
+  "configurations": {
+    "slave_agents": [
+      "compute-1"
+    ],
+    "master_agent": "network-1"
+  }
+}
+

master_agent用于指定Master角色的网络节点,slave_agents用于指定Slave角色的网络节点数组。

+

然后要修改Router的创建逻辑,需要为Router新增一个调度方法。Neutron中router_scheduler_driver默认是LeastRoutersScheduler(最少Router个数的网络节点),继承该类新增调度方法,可以根据Router的configurations字段选择指定的网络节点。

+
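
下面给出一个最小的过滤逻辑示意(仅为示意,非实际实现):真实实现需要继承 neutron 中已有的 LeastRoutersScheduler,并在候选 agent 的选择处应用类似过滤;这里用 dict 简化 Router 与 agent 的数据结构。

```python
# 仅为示意:按 configurations 中的 preferred_agent 过滤候选 L3 agent。
# Router、agent 的真实数据结构以 neutron 源码为准,此处用 dict 占位。
def filter_by_preferred_agent(router, candidate_agents):
    preferred = (router.get('configurations') or {}).get('preferred_agent')
    if not preferred:
        return candidate_agents
    matched = [a for a in candidate_agents if a['host'] == preferred]
    # 指定的 agent 不在候选列表中时退回原有调度结果,避免调度失败
    return matched or candidate_agents


router = {'configurations': {'preferred_agent': 'network-1'}}
agents = [{'host': 'network-1'}, {'host': 'network-2'}]
print(filter_by_preferred_agent(router, agents))  # 只剩 network-1 上的 agent
```

+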

L3 Scheduler

+

最后需要修改neutron-l3-agent的Router更新逻辑代码。neutron-l3-agent启动时会初始化一个资源队列用于更新资源状态,同时开启一个守护线程用于读取资源队列,每次网络资源状态有变化(创建、删除或者更新)时,就会添加到该队列中,最后根据资源的类型和状态确定将要执行的动作。这里Router创建完后,neutron-l3-agent最后会执行_process_added_router方法,先调用RouterInfo的initialize方法,再调用process方法。initialize方法主要涉及Router信息的一些初始化,包括网络命名空间的创建、port的创建、keepalived进程的初始化等等。process方法中会做下面几个操作。

+
1. 设置内部的Port,用于连接内部网络;
2. 设置外部Port,用于连接外部网络;
3. 更新路由表;
4. 对于开启L3 HA的Router,需要设置HA的Port,然后开启keepalived进程;
5. 对于开启DVR的Router,还需要设置一下fip命名空间中的Port。
+

这里只需要考虑L3 HA开启的情况,因为在未开启L3 HA时,neutron-server创建完Router后,经过新的调度方法选择特定的网络节点,RPC调用直接发送给特定网络节点的neutron-l3-agent服务;开启L3 HA时,调度方法会选择出master和slave网络节点,并且RPC调用会发送给这些网络节点上的neutron-l3-agent服务。neutron-l3-agent会为每个Router启动一个keepalived进程用于L3 HA,所以需要在keepalived初始化时修改keepalived的启动逻辑:利用configurations字段的信息获取master和slave网络节点,再结合当前网络节点的信息进行判断,确定网络节点的角色。最后,因为指定了master和slave节点,为避免出现master网络节点宕机恢复后vip依旧在slave节点的情况,要把keepalived的模式改为抢占模式。

+
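
一个角色判断的最小示意(仅为示意;master_agent、slave_agents 字段即前文 configurations 的设计,优先级数值为假设值,抢占模式对应 keepalived 不使用 nopreempt):

```python
# 仅为示意:根据 configurations 与本机主机名判断 keepalived 角色。
import socket


def keepalived_role(configurations, hostname=None):
    hostname = hostname or socket.gethostname()
    if configurations.get('master_agent') == hostname:
        return {'state': 'MASTER', 'priority': 100}
    if hostname in configurations.get('slave_agents', []):
        return {'state': 'BACKUP', 'priority': 50}
    return None  # 本节点不承载该 Router


print(keepalived_role(
    {'master_agent': 'network-1', 'slave_agents': ['compute-1']},
    hostname='network-1'))
```

+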

解决路由问题

+

解决同一子网绑定多个Router后,虚拟机实例的路由问题。DHCP协议功能不仅包括和DNS服务器分配还包括网关地址分配,也就是可以通过DHCP协议将路由信息传给虚拟机实例。在OpenStack中,虚拟机实例的DHCP由neutron-dhcp-agent提供,neutron-dhcp-agent的核心功能基本由dnsmasq完成。

+

dnsmasq中提供tag标签,可以为指定IP地址添加标签,然后可以根据标签下发配置。 +dnsmasq的host配置文件如下所示。

+
fa:16:3e:28:a5:0a,host-172-16-0-1.openstacklocal,172.16.0.1,set:subnet-6a4db541-e563-43ff-891b-aa8c05c988c5
+fa:16:3e:2b:dd:88,host-172-16-0-10.openstacklocal,172.16.0.10,set:subnet-6a4db541-e563-43ff-891b-aa8c05c988c5
+fa:16:3e:a1:96:fc,host-172-16-0-207.openstacklocal,172.16.0.207,set:compute-1-subnet-6a4db541-e563-43ff-891b-aa8c05c988c5
+fa:16:3e:45:b4:1a,host-172-16-10-1.openstacklocal,172.16.10.1,set:subnet-faeec4d1-2c0c-4f7a-bc9b-0af562694902
+

dnsmasq的option配置文件如下所示。

+
tag:subnet-faeec4d1-2c0c-4f7a-bc9b-0af562694902,option:dns-server,8.8.8.8
+tag:subnet-faeec4d1-2c0c-4f7a-bc9b-0af562694902,option:classless-static-route,172.16.0.0/24,0.0.0.0,169.254.169.254/32,172.16.0.2,0.0.0.0/0,172.16.0.1
+tag:subnet-faeec4d1-2c0c-4f7a-bc9b-0af562694902,249,172.16.0.0/24,0.0.0.0,169.254.169.254/32,172.16.0.2,0.0.0.0/0,172.16.0.1
+tag:subnet-faeec4d1-2c0c-4f7a-bc9b-0af562694902,option:router,172.16.0.1
+tag:compute-1-subnet-6a4db541-e563-43ff-891b-aa8c05c988c5,option:classless-static-route,172.16.10.0/24,0.0.0.0,169.254.169.254/32,172.16.0.2,0.0.0.0/0,172.16.0.10
+tag:compute-1-subnet-6a4db541-e563-43ff-891b-aa8c05c988c5,249,172.16.0.0/24,0.0.0.0,169.254.169.254/32,172.16.0.2,0.0.0.0/0,172.16.0.10
+tag:compute-1-subnet-6a4db541-e563-43ff-891b-aa8c05c988c5,option:router,172.16.0.10
+

可以看到IP 172.16.0.207被打上了compute-1开头的tag,匹配到option文件后,172.16.0.207对应虚拟机的默认路由网关地址就会从172.16.0.1变为172.16.0.10。当然这一切的前提是子网需要绑定多个Router。同时为neutron-dhcp-agent提供可供管理员修改的配置项,用于指定计算节点和网络节点的关系,可以是一对一,也可以是多对一。

+
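
按映射关系生成 opts 条目的最小示意(仅为示意;“计算节点 -> 默认网关”的映射与网关地址均为示例值,实际应来自 neutron-dhcp-agent 的配置项与端口数据):

```python
# 仅为示意:根据"计算节点 -> 默认网关"的映射,生成带 tag 的 dnsmasq option 行。
compute_to_gateway = {
    'compute-1': '172.16.0.10',  # compute-1 上的实例走绑定到 network-1 的 Router
    'compute-2': '172.16.0.1',   # 其余实例仍走子网默认网关
}
subnet_tag = 'subnet-6a4db541-e563-43ff-891b-aa8c05c988c5'

for host, gateway in compute_to_gateway.items():
    print(f'tag:{host}-{subnet_tag},option:router,{gateway}')
```

+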

解决Router Gateway端口转发的问题

+

将原本基于浮动IP的端口映射改为基于Router的External Gateway的方式。原因有二:

+
1. 基于浮动IP的端口映射,对于原本就要使用Router的External Gateway的用户会多占用一个外部网络的IP,为减少外部网络IP的使用,改用External Gateway的方式进行端口映射。
2. 基于浮动IP的端口映射,依赖Router的网络命名空间来做NAT。不开启L3 HA时,同一子网在绑定多个Router后,由于端口映射创建的逻辑,NAT会发生在子网网关地址所在Router的网络命名空间中(特定的网络节点),不会分散在各个Router的网络命名空间中(每个网络节点),这样在端口映射时会增加网络节点的压力。实现方式和基于浮动IP的端口映射类似,与之不同的是External Gateway不需要选择Router,因为External Gateway本来就和Router相关联;基于浮动IP的端口映射在选择Router时,选择的是子网网关地址所在的Router。
+

最后,在实现上面三个部分后,用户实现流量分散的步骤如下。

+
1. 用户修改neutron-dhcp-agent的配置文件,修改计算节点和网络节点的映射关系。例如三个网络节点、三个计算节点,配置compute-1走network-1节点,compute-2和compute-3走network-2节点。
2. 利用neutron的API创建多个Router并指定网络节点,并将Router绑定到同一子网。
3. 利用子网网络创建多个虚拟机实例。
+

虚拟机实例的网络流量的流向如下图所示。

+

网络流量

+

可以看到,VM-1访问外部网络经过的是network-1节点,VM-2和VM-3访问外部网络经过的是network-2节点。同时VM-1、VM-2和VM-3又是在同一个子网下,可以互相访问。

+

API

+

查看路由器网关端口转发列表

+
GET /v2.0/routers/{router_id}/gateway_port_forwardings
+
+Response
+{
+  "gateway_port_forwardings": [
+    {
+      "id": "67a70b09-f9e7-441e-bd49-7177fe70bb47",
+      "external_port": 34203,
+      "protocol": "tcp",
+      "internal_port_id": "b671c61a-95c3-49cd-89f2-b7e817d1f486",
+      "internal_ip_address": "172.16.0.196",
+      "internal_port": 518,
+      "gw_ip_address": "192.168.57.234"
+    }
+  ]
+}
+

查看路由器网关端口转发

+
GET /v2.0/routers/{router_id}/gateway_port_forwardings/{port_forwarding_id}
+
+Response
+{
+  "gateway_port_forwarding": {
+    "id": "67a70b09-f9e7-441e-bd49-7177fe70bb47",
+    "external_port": 34203,
+    "protocol": "tcp",
+    "internal_port_id": "b671c61a-95c3-49cd-89f2-b7e817d1f486",
+    "internal_ip_address": "172.16.0.196",
+    "internal_port": 518,
+    "gw_ip_address": "192.168.57.234"
+  }
+}
+

创建路由器网关端口转发

+
POST /v2.0/routers/{router_id}/gateway_port_forwardings
+Request Body
+{
+  "gateway_port_forwarding": {
+    "external_port": int,
+    "internal_port": int,
+    "internal_ip_address": "string",
+    "protocol": "tcp",
+    "internal_port_id": "string"
+  }
+}
+
+Response
+{
+  "gateway_port_forwarding": {
+    "id": "da554833-b756-4626-9900-6256c361f94b",
+    "external_port": 14122,
+    "protocol": "tcp",
+    "internal_port_id": "b671c61a-95c3-49cd-89f2-b7e817d1f486",
+    "internal_ip_address": "172.16.0.196",
+    "internal_port": 3634,
+    "gw_ip_address": "192.168.57.234"
+  }
+}
+

更新路由器网关端口转发

+
PUT /v2.0/routers/{router_id}/gateway_port_forwardings/{port_forwarding_id}
+Request Body
+{
+  "gateway_port_forwarding": {
+    "external_port": int,
+    "internal_port": int,
+    "internal_ip_address": "string",
+    "protocol": "tcp",
+    "internal_port_id": "string"
+  }
+}
+
+Response
+{
+  "gateway_port_forwarding": {
+    "id": "da554833-b756-4626-9900-6256c361f94b",
+    "external_port": 14122,
+    "protocol": "tcp",
+    "internal_port_id": "b671c61a-95c3-49cd-89f2-b7e817d1f486",
+    "internal_ip_address": "172.16.0.196",
+    "internal_port": 3634,
+    "gw_ip_address": "192.168.57.234"
+  }
+}
+

删除路由器网关端口转发

+
DELETE /v2.0/routers/{router_id}/gateway_port_forwardings/{port_forwarding_id}
+

新建路由器

+
POST /v2.0/routers
+Request Body
+{
+    "router": {
+        "name": "string",
+        "admin_state_up": true,
+        "configurations": {
+            "preferred_agent": "string",
+            "master_agent": "string",
+            "slave_agents": [
+                "string"
+            ]
+        }
+    }
+}
+
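
调用上述接口的一个最小示例(仅为示意;neutron_endpoint 与 token 为占位值,实际需先通过 keystone 认证获取):

```python
# 仅为示意:用 requests 调用"新建路由器"接口并指定网络节点。
import requests

neutron_endpoint = 'http://controller:9696'   # 占位值
headers = {'X-Auth-Token': '<token>', 'Content-Type': 'application/json'}
body = {
    'router': {
        'name': 'router-compute-1',
        'admin_state_up': True,
        'configurations': {'preferred_agent': 'network-1'},
    }
}

resp = requests.post(f'{neutron_endpoint}/v2.0/routers', headers=headers, json=body)
print(resp.status_code, resp.json())
```

+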

更新路由器

+
PUT /v2.0/routers/{router_id}
+Request Body
+{
+  "router": {
+    "name": "string",
+    "admin_state_up": true,
+    "configurations": {
+      "preferred_agent": "string",
+      "master_agent": "control01",
+      "slave_agents": [
+        "control01"
+      ]
+    }
+  }
+}
+

开发节奏

+
- 2023-07-28到2023-08-30 完成开发
- 2023-09-01到2023-11-15 测试、问题修复
- 2023-11-30引入openEuler 20.03 LTS SP4版本
- 2023-12-30引入openEuler 22.03 LTS SP3版本
diff --git a/site/spec/openkite/index.html b/site/spec/openkite/index.html
new file mode 100644
index 0000000000000000000000000000000000000000..c1c390fa3599ca95018cacdf4643787e0b2d0072
--- /dev/null
+++ b/site/spec/openkite/index.html
@@ -0,0 +1,751 @@

1、前序 - OpenStack SIG Doc

1、前序

+

1.1、 软件许可协议

+

本软件基于LGPL V3协议,请用户和开发者注意LGPL协议的要求,其中最重要的一点是不允许fork项目闭源

+

1.2、 软件用途

+

1.3、 开发人员名单

+

1.4、 生命开发周期

+

1.5、 功能开发顺序

+

2、开发规范约定

+

2.1、 窗体控件命名规范

+
    +
  • 控件原名称_窗体_控件名称组合体首字母大写
  • +
  • 示例: +
    按钮原名称:pushButton 主窗体 菜单按钮 
    +命名规范:pushButton_MainWindow_Menu
    +
    +按钮原名称:toolButton 主窗体 上传按钮
    +命名规范:toolButton_MainWindow_UpLoad
  • +
+
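
下面用代码演示该命名规范(仅为示意;此处以 PyQt5 书写,若项目实际为 C++/Qt,规则相同、仅语法不同):

```python
# 仅为示意:按"控件原名称_窗体_用途(首字母大写)"的规范声明控件。
from PyQt5 import QtWidgets


class MainWindow(QtWidgets.QMainWindow):
    def __init__(self):
        super().__init__()
        # 按钮:pushButton + 主窗体 MainWindow + 用途 Menu
        self.pushButton_MainWindow_Menu = QtWidgets.QPushButton("菜单", self)
        # 工具按钮:toolButton + 主窗体 MainWindow + 用途 UpLoad
        self.toolButton_MainWindow_UpLoad = QtWidgets.QToolButton(self)
        self.toolButton_MainWindow_UpLoad.setText("上传")
```

+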

2.2、 后台功能实现命名规范

+
    +
  • 变量、常量、函数、类、容器等
  • +
+

2.3、 软件包文件名命名规范

+

2.4、 文件命名规范

+

2.5、 标注

+
    +
  • 删除、移动、改名、权限设置
  • +
+

3、窗口主体控件名称、尺寸、用途

+

3.1、菜单功能大类

+
    +
  • PushButton控件用于菜单大类调用窗口
  • +
  • 控件尺寸:
  • +
  • 固定尺寸 80*25
  • +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
控件中文名控件种类控件名用途
菜单PushButtonpushButton_MainWindow_Menu调出菜单窗口
帮助PushButtonpushButton_MainWindow_Help调出帮助窗口
工具PushButtonpushButton_MainWindow_Tool调出工具窗口
报错分析PushButtonpushButton_MainWindow_ErrorAnalysis调出报错分析窗口
监控PushButtonpushButton_MainWindow_Monitor调出监控窗口
运维日志PushButtonpushButton_MainWindow_OperationLog调出运维日志窗口
+
3.1.1、菜单子类
+
    +
  • 设置
  • +
  • 软件主题
  • +
+
3.1.2、帮助类
+
    +
  • 社区
  • +
  • 版本更新
  • +
  • 使用手册
  • +
+
3.1.3、工具类
+
    +
  • 插件仓库
  • +
  • img镜像工具
  • +
  • MD5校验工具
  • +
  • OpenStack模块功能测试
  • +
  • 压力测试
  • +
+
3.1.4、报错分析类
+
    +
  • 系统报错(节点报错分析)
  • +
  • OpenStack报错
  • +
  • K8S报错
  • +
+
3.1.5、监控类
+
    +
  • OPS监控状态与性能使用分析
  • +
  • K8S监控状态与性能使用分析
  • +
+
3.1.6、运维日志类
+
    +
  • 查看历史运维日志
  • +
  • 日志导出
  • +
+

3.2、数据可视化类

+
3.2.1、计算机硬件信息类
+
    +
  • ProgressBar控件显示计算机硬件性能占用比
  • +
  • 控件尺寸:
  • +
  • 最小尺寸 116*27
  • +
  • 高度尺寸固定
  • +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
控件中文名控件种类控件名用途
本机CPUProgressBarprogressBar_MainWindow_LocalCPU显示本地CPU使用率
目标CPUProgressBarprogressBar_MainWindow_TargetCPU显示目标CPU使用率
本机RAMProgressBarprogressBar_MainWindow_LocalRAM显示本机RAM使用率
目标RAMProgressBarprogressBar_MainWindow_TargetRAM显示目标RAM使用率
本机网络ProgressBarprogressBar_MainWindow_LocalNetwork显示本机网络带宽使用率
目标网络ProgressBarprogressBar_MainWindow_TargetNetwork显示目标网络带宽使用率
本机磁盘ProgressBarprogressBar_MainWindow_LocalDisk显示本机磁盘IO使用率
目标磁盘ProgressBarprogressBar_MainWindow_TargetDisk显示目标磁盘IO使用率
+
3.2.2、计算机软件信息类
+
    +
  • Label控件显示系统IP与DNS
  • +
  • 控件尺寸:
  • +
  • 固定尺寸 110*27
  • +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
控件中文名控件种类控件名用途
本机IPLabellabel_MainWindow_LocalIP显示本机IP
目标IPLabellabel_MainWindow_TargetIP显示目标IP
本机DNSLabellabel_MainWindow_LocalNDS显示本机DNS
目标DNSLabellabel_MainWindow_TargetNDS显示目标DNS
+
    +
  • ListWidget控件显示系统必要信息项说明
  • +
  • 控件尺寸:
  • +
  • 固定尺寸 200*111
  • +
+ + + + + + + + + + + + + + + + + +
控件中文名控件种类控件名用途
系统信息显示ListWidgetslistWidget_MainWidow_SystemShow显示系统必要信息
+
    +
  • 系统必要信息显示所用变量的API接口
  • +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
中文名变量类型变量名用途
发行版QStringListsystemNameShowlinux发行版名称
版本号QStringListsystemVersionlinux发行版版本号
内核号QStringListsystemKernellinux发行版内核版本
管理权限QStringListsystemAdminPower当前账号操作权限
服务名称QStringListsystemServiceName当前运维软件服务名称
服务版本QStringListsystemServicVersion当前运维软件版本
+
    +
  • Label与ProgressBar控件显示当前运行命令及进度
  • +
  • 控件尺寸:
  • +
  • 当前运行命令控件尺寸:
      +
    • 最小尺寸:500*31
    • +
    • 高度尺寸固定
    • +
    +
  • +
  • 当前命令进度控件尺寸:
      +
    • 最小尺寸:171*31
    • +
    • 高度尺寸固定
    • +
    +
  • +
+ + + + + + + + + + + + + + + + + + + + + + + +
中文名控件种类控件名用途
当前运行命令Labellabel_MainWindow_ShowCurrentCommand显示当前集群或节点正在运行的命令
当前命令进度ProgressBarprogressBar_MainWindow_ShowCommandProgress显示当前集群或节点正在运行的命令的进度
+

3.3、添加集群类

+
3.3.1、 集群添加类
+
    +
  • ToolButton控件添加集群节点信息
  • +
  • 控件尺寸:
  • +
  • 固定尺寸:300*31
  • +
+ + + + + + + + + + + + + + + + + +
中文名控件类型控件名用途
添加集群/节点ToolButtontoolButton_MainWindow_AddNode弹出窗口添加集群或节点
+
    +
  • 单节点添加
  • +
  • 批量节点添加
  • +
  • 集群添加
  • +
+
3.3.2、集群显示类
+
    +
  • TreeWidget控件显示集群信息
  • +
  • 控件尺寸:
  • +
  • 最小尺寸:200*438
  • +
  • 宽度尺寸固定
  • +
+ + + + + + + + + + + + + + + + + +
中文名控件类型控件名用途
节点信息TreeWidgettreeWidget_MainWindow_ShowNode用于显示集群与节点信息或点击信息后创建SSH远程窗口界面
+
    +
  • 集群名称
  • +
  • 节点名称
  • +
  • 节点IP地址
  • +
+

3.4、脚本与部署类

+
    +
  • TerrWidget控件弹窗
  • +
  • 控件尺寸:
  • +
  • 上传、脚本按钮固定尺寸:63*31
  • +
  • 部署按钮固定尺寸:65*31
  • +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
中文名控件类型控件名用途
上传terrWidgettoolButton_MainWindow_UpLoad弹出上传窗体:load.ui
脚本terrWidgettoolButton_MainWindow_Shell弹出脚本窗体:shell.ui
部署terrWidgettoolButton_MainWindow_Deploy弹出部署窗体:deploy.ui
+
3.4.1、上传与下载功能类
+
    +
  • 脚本编译器
  • +
  • yaml编译器
  • +
  • 脚本编译器
  • +
  • 加载本地策略
  • +
  • 加载集群配置策略
  • +
  • 加载节点配置策略
  • +
  • 上传文件到目标计算机
  • +
  • 单节点
  • +
  • 多节点
  • +
  • 下载文件到本地计算机
  • +
  • 单节点
  • +
  • 多节点
  • +
  • 目标计算机文件互传
  • +
  • 点对点互传
  • +
  • 点对多互传
  • +
+
3.4.2、脚本类
+
    +
  • 编辑
  • +
  • 编辑子模块脚本
  • +
  • 编辑集群模块脚本
  • +
  • 查看
  • +
  • 查看子模块脚本
  • +
  • 查看集群模块脚本
  • +
  • 导出
  • +
  • 导出子模块脚本
  • +
  • 导出集群模块脚本
  • +
  • 导出所有脚本
  • +
+
3.4.3、部署类
+
    +
  • 部署
  • +
  • 可批量选择节点部署不同功能脚本
  • +
  • 可集群部署不同节点不同功能脚本
  • +
  • 可单节点部署不同功能脚本
  • +
  • 终止
  • +
  • 可批量多节点、单节点、集群终止当前部署
  • +
+

3.5、功能插件类

+
3.5.1、基础运维类
+
    +
  • 修改服务器计算机名
  • +
  • 修改服务器用户名
  • +
  • 修改服务器密码
  • +
  • 修改防火墙配置
  • +
  • 修改host
  • +
  • 修改DNS
  • +
  • 修改网关
  • +
  • 修改IP
  • +
  • 部署时间服务
  • +
  • 部署DNS服务
  • +
+
3.5.2、其他功能插件类
+
    +
  • OpenStack插件类
  • +
  • K8S插件类
  • +
  • Ceph插件类
  • +
+

3.6、ssh远程显示类

+
    +
  • 可复制粘贴命令,中文显示综合端口
  • +
+
3.6.1、集群SSH远程显示类
+
    +
  • 综合端口显示,点对多ssh远程
  • +
+
3.6.2、单节点SSH远程显示类
+
    +
  • 点对点ssh远程
  • +
+

4、窗口主体功能插件添加方式、规范、API与功能注释

+

4.1、工具类

+
    +
  • 开发规范:
  • +
  • API接口:
  • +
  • 功能注释:
  • +
  • 面板添加方式:
  • +
  • 后台功能模块添加方式:
  • +
  • 文件夹位置:
  • +
+

4.2、功能插件类

+
    +
  • 开发规范:
  • +
  • API接口:
  • +
  • 功能注释:
  • +
  • 面板添加方式:
  • +
  • 后台功能模块添加方式:
  • +
  • 文件夹位置:
  • +
+

5、后台API调用、规范与使用说明

+

5.1、计算机硬件

+
5.1.1、CPU
+
5.1.2、RAM
+

5.2、计算机软件

+
5.2.1、本地软件包
+
5.2.2、源软件包
+

6、开发思路备注

+
    +
  • 在各种操作前进行判断本地网络与目标网络是否连同
  • +
  • 在目标网络无法连通时提示:目标IP网络不通
  • +
  • 在集群节点都无法联通时,集群节点字体灰色
  • +
  • 在集群操作或多节点操作时提示无法连接的目标信息,并提示确实是否继续,如继续则屏蔽无法连接的节点去进行批量部署
  • +
  • 界面信息刷新频率
  • +
  • 软硬件信息刷新频率
      +
    • cpu、内存等占比显示信息的刷新频率为0.5s
    • +
    +
  • +
  • ssh界面刷屏频率为实时刷新
  • +
  • 集群显示信息为实时刷新
  • +
  • 系统必要信息显示区域为实时刷新
  • +
diff --git a/site/spec/openstack-sig-tool-requirement/index.html b/site/spec/openstack-sig-tool-requirement/index.html
new file mode 100644
index 0000000000000000000000000000000000000000..0db85027a182b7406d5686009686a5ceff7b75bf
--- /dev/null
+++ b/site/spec/openstack-sig-tool-requirement/index.html
@@ -0,0 +1,326 @@

openEuler OpenStack开发平台需求说明书 - OpenStack SIG Doc

openEuler OpenStack开发平台需求说明书

+

背景

+

目前,随着SIG的不断发展,我们明显遇到了以下几类问题:

1. OpenStack技术复杂,涉及云IAAS层的计算、网络、存储、镜像、鉴权等方方面面的技术,开发者很难全知全会,提交的代码逻辑、质量堪忧。
2. OpenStack是由python编写的,python软件的依赖问题难以处理。以OpenStack Wallaby版本为例,涉及核心python软件包400+,每个软件的依赖层级、依赖版本错综复杂,选型困难,难以形成闭环。
3. OpenStack软件包众多,RPM Spec编写开发量巨大,并且随着openEuler、OpenStack本身版本的不断演进,N:N的适配关系会导致工作量成倍增长,人力成本越来越大。
4. OpenStack测试门槛过高,不仅需要开发人员熟悉OpenStack,还要对虚拟化、虚拟网桥、块存储等Linux底层技术有一定了解与掌握,部署一套OpenStack环境耗时过长,功能测试难度巨大。并且测试场景多,比如X86、ARM64架构测试,裸机、虚机种类测试,OVS、OVN网桥测试,LVM、Ceph存储测试等等,更加加重了人力成本以及技术门槛。

+

针对以上问题需要在openEuler OpenStack提供一个开发平台,解决开发过程遇到的以上痛点问题。

+

目标

+

设计并开发一个OpenStack强相关的openEuler开源开发平台,通过规范化、工具化、自动化的方式,满足SIG开发者的日常开发需求,降低开发成本,减少人力投入成本,降低开发门槛,从而提高开发效率、提高SIG软件质量、发展SIG生态、吸引更多开发者加入SIG。

+

范围

+

用户范围:openEuler OpenStack SIG开发者

+

业务范围:openEuler OpenStack SIG日常开发活动

+

编程语言:Python、Ansible、Jinja、JavaScript

+

IT技术:Web服务、RestFul规范、CLI规范、前端GUI、数据库使用

+

功能

+

OpenStack开发平台整体采用C/S架构,以SIG对外提供平台能力,client端面向指定用户白名单开放。

+

为方便白名单以外用户使用,本平台还提供CLI模式,在此模式下不需要额外服务端通信,在本地即可开箱即用。

+
1. 输出OpenStack服务类软件、依赖库软件的RPM SPEC开发规范,开发者及Reviewer需要严格遵守规范进行开发实施。
2. 提供OpenStack python软件依赖分析功能,一键生成依赖拓扑与结果,保证依赖闭环,避免软件依赖风险。
3. 提供OpenStack RPM spec生成功能,针对通用性软件,提供一键生成RPM spec的功能,缩短开发时间,降低投入成本。
4. 提供自动化部署、测试平台功能,实现一键在任何openEuler版本上部署指定OpenStack版本的能力,快速测试、快速迭代。
5. 提供openEuler Gitee仓库自动化处理能力,满足批量修改软件的需求,比如创建代码分支、创建仓库、提交Pull Request等功能。
+

SPEC开发规范制定

+

【功能点】

+
1. 约束OpenStack服务级项目SPEC格式与内容规范
+2. 规定OpenStack依赖库级别项目SPEC的框架。
+

【先决条件】:OpenStack SIG全体Maintainer达成一致,参与厂商没有分歧。

+

【参与方】:中国电信、中国联通、统信软件

+

【输入】:RPM SPEC编写标准

+

【输出】:服务级、依赖库级SPEC模板;软件分层规范。

+

【对其他功能的影响】:本功能是以下软件功能的前提,下述如SPEC自动生成功能需遵循本规范执行。

+

依赖分析需求

+

【功能点】

+
1. 自动生成基于指定openEuler版本的OpenStack依赖表。
+2. 能处理依赖成环、版本缺省、名称不一致等依赖常见问题。
+

【先决条件】:N/A

+

【参与方】:OpenStack SIG核心开发者

+

【输入】:openEuler版本号、OpenStack版本号、目标依赖范围(核心/测试/文档)

+

【输出】:指定OpenStack版本的全量依赖库信息,包括最小/最大依赖版本、所属openEuler SIG、RPM包名、依赖层级、子依赖树等内容,可以以Excel表格的方式输出。

+

【对其他功能的影响】:N/A

+
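
依赖分析的一个起点示意(仅为示意,非 oos 实际实现;真实功能还需递归处理子依赖、依赖成环与 RPM 包名映射,示例中的文件路径为假设值):

```python
# 仅为示意:解析一个 OpenStack 项目的 requirements.txt,得到"包名 -> 版本约束"。
from packaging.requirements import Requirement


def parse_requirements(path):
    deps = {}
    with open(path) as f:
        for line in f:
            line = line.split('#')[0].strip()   # 去掉注释与空行
            if not line:
                continue
            req = Requirement(line)
            deps[req.name] = str(req.specifier) or 'any'
    return deps


# 用法示例:print(parse_requirements('nova/requirements.txt'))
```

+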

Spec自动生成需求

+

【功能点】

+
1. 一键生成OpenStack依赖库类软件的RPM SPEC
+2. 支持各种Python软件构建系统,比如setuptools、pyproject等。
+

【先决条件】:需遵守SPEC开发规范

+

【参与方】:OpenStack SIG核心开发者

+

【输入】:指定软件名及目标版本

+

【输出】:对应软件的RPM SPEC文件

+

【对其他功能的影响】:生成的SPEC可以通过下述代码提交功能一键push到openEuler社区。

+
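
SPEC 自动生成思路的最小示意(仅为示意;模板字段与包名、版本号均为示例值,实际内容以 SIG 的 SPEC 开发规范为准):

```python
# 仅为示意:用 Jinja2 按依赖库 SPEC 模板渲染出 spec 片段。
from jinja2 import Template

SPEC_TEMPLATE = """\
%global pypi_name {{ pypi_name }}
Name:           python-%{pypi_name}
Version:        {{ version }}
Release:        1
Summary:        {{ summary }}
License:        {{ license }}
"""


def render_spec(pypi_name, version, summary, license_):
    return Template(SPEC_TEMPLATE).render(
        pypi_name=pypi_name, version=version, summary=summary, license=license_)


print(render_spec('oslo.config', '8.7.1', 'Oslo Configuration library', 'Apache-2.0'))
```

+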

自动化部署、测试需求

+

【功能点】

+
1. 一键快速部署指定OpenStack版本、拓扑、功能的OpenStack单/多节点环境
+2. 一键基于已部署OpenStack环境进行资源预配置与功能测试。
+3. 支持多云、主机纳管功能,支持插件自定义功能。
+

【先决条件】:N/A

+

【参与方】:OpenStack SIG核心开发者、各个云平台相关开发者

+

【输入】:目标OpenStack版本、计算/网络/存储的driver场景

+

【输出】:一个可以一键执行OpenStack Tempest测试的OpenStack环境;Tempest测试报告。

+

【对其他功能的影响】: N/A

+

一键代码处理需求

+

【功能点】

+
1. 一键针对openEuler OpenStack所属项目的Repo、Branch、PR执行各种操作。
+2. 操作包括:建立/删除源码仓;建立/删除openEuler分支;提交软件Update PR;在PR中添加评审意见。
+

【先决条件】:提交PR功能依赖上述SPEC生成功能

+

【参与方】:OpenStack SIG核心开发者

+

【输入】:指定软件名、openEuler release名、目标Spec文件、评审意见内容。

+

【输出】:软件建仓PR;软件创建分支PR;软件升级PR;PR新增评审意见。

+

【对其他功能的影响】:N/A

+
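
一键提交升级 PR 的示意(仅为示意;Gitee 接口端点与字段请以 Gitee OpenAPI v5 文档为准,仓库、分支与标题均为占位值):

```python
# 仅为示意:调用 Gitee API 创建一个软件升级 Pull Request。
import requests


def create_update_pr(owner, repo, token, head, base, title):
    url = f'https://gitee.com/api/v5/repos/{owner}/{repo}/pulls'
    payload = {'access_token': token, 'title': title, 'head': head, 'base': base}
    resp = requests.post(url, json=payload)
    resp.raise_for_status()
    return resp.json()


# 用法示例(参数均为占位):
# create_update_pr('src-openeuler', 'openstack-nova', '<token>',
#                  'myfork:update-xxx', 'master', 'Update openstack-nova')
```

+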

非功能需求

+

测试需求

+
    +
  1. 对应软件代码需包含单元测试,覆盖率不低于80%。
  2. +
  3. 需提供端到端功能测试,覆盖上述所有接口,以及核心的场景测试。
  4. +
  5. 基于openEuler社区CI,构建CI/CD流程,所有Pull Request要有CI保证代码质量,定期发布release版本,软件发布间隔不大于3个月。
  6. +
+

安全

+
    +
  1. 数据安全:软件全程不联网,持久存储中不包含用户敏感信息。
  2. +
  3. 网络安全:OOS在REST架构下使用http协议通信,但软件设计目标实在内网环境中使用,不建议暴露在公网IP中,如必须如此,建议增加访问IP白名单限制。
  4. +
  5. 系统安全:基于openEuler安全机制,定期发布CVE修复或安全补丁。
  6. +
  7. 应用层安全:不涉及,不提供应用级安全服务,例如密码策略、访问控制等。
  8. +
  9. 管理安全:软件提供日志生成和周期性备份机制,方便用户定期审计。
  10. +
+

可靠性

+

本软件面向openEuler社区OpenStack开发行为,不涉及服务上线或者商业生产落地,所有代码公开透明,不涉及私有功能及代码。因此不提供例如节点冗余、容灾备份能功能。

+

开源合规

+

本平台采用Apache2.0 License,不限制下游fork软件的闭源与商业行为,但下游软件需标注代码来源以及保留原有License。

+

实施计划

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
时间内容
2021.06完成软件整体框架编写,实现CLI Built-in机制,至少一个API可用
2021.12完成CLI Built-in机制的全量功能可用
2022.06完成质量加固,保证功能,在openEuler OpenStack社区开发流程中正式引入OOS
2022.12不断完成OOS,保证易用性、健壮性,自动化覆盖度超过80%,降低开发人力投入
2023.06补齐REST框架、CI/CD流程,丰富Plugin机制,引入更多backend支持
2023.12完成前端GUI功能
diff --git a/site/spec/openstack-sig-tool/index.html b/site/spec/openstack-sig-tool/index.html
new file mode 100644
index 0000000000000000000000000000000000000000..54eed82730605994581a44f421f0cf59ac4893a2
--- /dev/null
+++ b/site/spec/openstack-sig-tool/index.html
@@ -0,0 +1,1283 @@

openEuler OpenStack 开发平台 - OpenStack SIG Doc

openEuler OpenStack 开发平台

+

openEuler OpenStack SIG成立于2021年,是由中国联通、中国电信、华为、统信等公司的开发者共同投入并维护的SIG小组,旨在openEuler之上提供原生的OpenStack,构建开放可靠的云计算技术栈,是openEuler的标杆SIG。但OpenStack本身技术复杂、包含服务众多,开发门槛较高,对贡献者的技术能力要求也较高,人力成本高居不下,在实际开发与贡献中存在各种各样的问题。为了解决SIG面临的问题,亟需一个openEuler+OpenStack解决方案,从而降低开发者门槛,降低投入成本,提高开发效率,保证SIG的持续活跃与可持续发展。

+

1. 概述

+

1.1 当前现状

+

目前,随着SIG的不断发展,我们明显遇到了以下几类问题:

1. OpenStack技术复杂,涉及云IAAS层的计算、网络、存储、镜像、鉴权等方方面面的技术,开发者很难全知全会,提交的代码逻辑、质量堪忧。
2. OpenStack是由python编写的,python软件的依赖问题难以处理。以OpenStack Wallaby版本为例,涉及核心python软件包400+,每个软件的依赖层级、依赖版本错综复杂,选型困难,难以形成闭环。
3. OpenStack软件包众多,RPM Spec编写开发量巨大,并且随着openEuler、OpenStack本身版本的不断演进,N:N的适配关系会导致工作量成倍增长,人力成本越来越大。
4. OpenStack测试门槛过高,不仅需要开发人员熟悉OpenStack,还要对虚拟化、虚拟网桥、块存储等Linux底层技术有一定了解与掌握,部署一套OpenStack环境耗时过长,功能测试难度巨大。并且测试场景多,比如X86、ARM64架构测试,裸机、虚机种类测试,OVS、OVN网桥测试,LVM、Ceph存储测试等等,更加加重了人力成本以及技术门槛。

+

1.2 解决方案

+

针对以上目前SIG遇到的问题,规范化、工具化、自动化的目标势在必行。本篇设计文档旨在在openEuler OpenStack SIG中提供一个端到端可用的开发解决方案,从技术规范到技术实现,提出严格的标准要求与设计方案,满足SIG开发者的日常开发需求,降低开发成本,减少人力投入成本,降低开发门槛,从而提高开发效率、提高SIG软件质量、发展SIG生态、吸引更多开发者加入SIG。主要动作如下:

1. 输出OpenStack服务类软件、依赖库软件的RPM SPEC开发规范,开发者及Reviewer需要严格遵守规范进行开发实施。
2. 提供OpenStack python软件依赖分析功能,一键生成依赖拓扑与结果,保证依赖闭环,避免软件依赖风险。
3. 提供OpenStack RPM spec生成功能,针对通用性软件,提供一键生成RPM spec的功能,缩短开发时间,降低投入成本。
4. 提供自动化部署、测试平台功能,实现一键在任何openEuler版本上部署指定OpenStack版本的能力,快速测试、快速迭代。
5. 提供openEuler Gitee仓库自动化处理能力,满足批量修改软件的需求,比如创建代码分支、创建仓库、提交Pull Request等功能。

+

以上解决方法可以统一到一个系统平台中,我们称作OpenStack SIG Tool(以下简称oos),也就是openEuler OpenStack开发平台,具体架构如下:

            ┌────────────────────┐        ┌─────────────────────┐
+            │         CLI        │        │         GUI         │
+            └─────┬─────────┬────┘        └──────────┬──────────┘
+                  │         │                        │
+          Built-in│         └───────────┬────────────┘
+                  │                     │REST
+┌─────────────────▼─────────────────────▼────────────────────────────────┐
+│                       OpenStack Develop Platform                       │
+└───────────────────────────────────┬────────────────────────────────────┘
+                                    │
+          ┌────────────────────┬────┴─────────────┬────────────────┐
+          │                    │                  │                │
+┌─────────▼─────────┐  ┌───────▼───────┐  ┌───────▼───────┐  ┌─────▼─────┐
+│Dependency Analysis│  │SPEC Generation│  │Deploy and Test│  │Code Action│
+└───────────────────┘  └───────────────┘  └───────────────┘  └───────────┘
该架构主要有以下两种模式:

1. Client/Server模式
   在这种模式下,oos部署成Web Server形式,Client通过REST方式调用oos。
   - 优点:提供异步调用能力,支持并发处理,支持记录持久化。
   - 缺点:有一定安装部署成本,使用方式较为死板。
2. Built-in模式
   在这种模式下,oos无需部署,以内置CLI的方式对外提供服务,用户通过cli直接调用各种功能。
   - 优点:无需部署,随时随地可用。
   - 缺点:没有持久化能力,不支持并发,单人单用。
+
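
两种模式关系的最小示意(仅为示意;analyze_dependencies 及其返回结构为占位,实际以 oos 的真实实现为准):同一能力既可在 Built-in 模式下直接调用,也可在 C/S 模式下通过 REST 暴露。

```python
# 仅为示意:同一功能同时以 Built-in 函数调用和 REST 接口两种方式对外提供。
from flask import Flask, jsonify, request


def analyze_dependencies(openstack_release, openeuler_release):
    """Built-in 模式:本地直接调用,立即返回结果(此处返回占位数据)。"""
    return {'openstack': openstack_release, 'openeuler': openeuler_release, 'packages': []}


app = Flask(__name__)


@app.route('/v1/dependencies', methods=['POST'])
def dependencies_api():
    """Client/Server 模式:同一能力通过 REST 暴露,便于并发处理与记录持久化。"""
    body = request.get_json(force=True)
    return jsonify(analyze_dependencies(body['openstack'], body['openeuler']))


if __name__ == '__main__':
    print(analyze_dependencies('wallaby', 'openEuler-22.03-LTS'))  # Built-in 调用示例
```

+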

2. 详细设计

+

2.1 OpenStack Spec规范

+

Spec规范是一个或多个spec模板,针对RPM spec的每个关键字及构建章节,严格规定相关内容,开发者在编写spec时,必须满足规范要求,否则代码不允许被合入。规范内容由SIG maintainer公开讨论后形成结论,并定期审视更新。任何人都有权利提出对规范的质疑和建议,maintainer负责解释与刷新。规范目前包括两类:

1. 服务类软件规范
   此类软件以Nova、Neutron、Cinder等OpenStack核心服务为例,它们一般定制化要求高,内容区别大,需要人工手动编写。规范需清晰规定软件的分层方法、构建方法、软件包组成内容、测试方法、版本号规则等内容。
2. 通用依赖类软件规范
   此类软件一般定制化低,内容结构区别小,适合自动化工具一键生成,我们只需要在规范中定义相关工具的生成规则即可。
+

2.1.1 服务类软件规范

+

OpenStack每个服务通常包含若干子服务,针对这些子服务,我们在打包的时候也要做拆包处理,分成若干个子RPM包。本章节规定了openEuler SIG对OpenStack服务的RPM包拆分的原则。

+
2.1.1.1 通用原则
+

采用分层架构,RPM包结构如下图所示,以openstack-nova为例:

+
Level | Package                                                                       | Example
+      |                                                                               |  
+ ┌─┐  |                       ┌──────────────┐        ┌────────────────────────┐      | ┌────────────────────┐ ┌────────────────────────┐
+ │1│  |                       │ Root Package │        │ Doc Package (Optional) │      | │ openstack-nova.rpm │ │ openstack-nova-doc.rpm │
+ └─┘  |                       └────────┬─────┘        └────────────────────────┘      | └────────────────────┘ └────────────────────────┘
+      |                                │                                              |
+      |          ┌─────────────────────┼───────────────────────────┐                  |
+      |          │                     │                           |                  |
+ ┌─┐  | ┌────────▼─────────┐ ┌─────────▼────────┐                  |                  | ┌────────────────────────────┐ ┌────────────────────────┐
+ │2│  | │ Service1 Package │ │ Service2 Package │                  |                  | │ openstack-nova-compute.rpm │ │ openstack-nova-api.rpm │
+ └─┘  | └────────┬─────────┘ └────────┬─────────┘                  |                  | └────────────────────────────┘ └────────────────────────┘
+      |          |                    |                            |                  |
+      |          └──────────┬─────────┘                            |                  |
+      |                     |                                      |                  |
+ ┌─┐  |             ┌───────▼────────┐                             |                  | ┌───────────────────────────┐
+ │3│  |             │ Common Package │                             |                  | │ openstack-nova-common.rpm │
+ └─┘  |             └───────┬────────┘                             |                  | └───────────────────────────┘
+      |                     │                                      |                  |
+      |                     │                                      |                  |
+      |                     │                                      |                  |
+ ┌─┐  |            ┌────────▼────────┐            ┌────────────────▼────────────────┐ | ┌──────────────────┐ ┌────────────────────────┐
+ │4│  |            │ Library Package ◄------------| Library Test Package (Optional) │ | │ python2-nova.rpm │ │ python2-nova-tests.rpm │
+ └─┘  |            └─────────────────┘            └─────────────────────────────────┘ | └──────────────────┘ └────────────────────────┘
+

如图所示,分为4级:

1. Root Package为总RPM包,原则上不包含任何文件,只做服务集合用。用户可以使用该RPM一键安装所有子RPM包。如果项目有doc相关的文件,也可以单独成包(可选)。
2. Service Package为子服务RPM包,包含该服务的systemd服务启动文件、自己独有的配置文件等。
3. Common Package是共用依赖的RPM包,包含各个子服务依赖的通用配置文件、系统配置文件等。
4. Library Package为python源码包,包含了该项目的python代码。如果项目有test相关的文件,也可以单独成包(可选)。

涉及本原则的项目有:

- openstack-nova
- openstack-cinder
- openstack-glance
- openstack-placement
- openstack-ironic

2.1.1.2 特殊情况

有些openstack组件本身只包含一个服务,不存在子服务的概念,这种服务则只需要分为两级:

+
 Level | Package                                                         | Example
+       |                                                                 |  
+  ┌─┐  |               ┌──────────────┐  ┌────────────────────────┐      | ┌────────────────────────┐ ┌────────────────────────────┐
+  │1│  |               │ Root Package │  │ Doc Package (Optional) │      | │ openstack-keystone.rpm │ │ openstack-keystone-doc.rpm │
+  └─┘  |               └───────┬──────┘  └────────────────────────┘      | └────────────────────────┘ └────────────────────────────┘
+       |                       |                                         |   
+       |          ┌────────────┴───────────────────┐                     |
+  ┌─┐  |  ┌───────▼─────────┐     ┌────────────────▼────────────────┐    | ┌──────────────────────┐ ┌────────────────────────────┐
+  │2│  |  │ Library Package ◄-----| Library Test Package (Optional) │    | │ python2-keystone.rpm │ │ python2-keystone-tests.rpm │
+  └─┘  |  └─────────────────┘     └─────────────────────────────────┘    | └──────────────────────┘ └────────────────────────────┘
+

1. Root Package RPM包包含了除python源码外的其他所有文件,包括服务启动文件、项目配置文件、系统配置文件等等。如果项目有doc相关的文件,也可以单独成包(可选)。
2. Library Package为python源码包,包含了该项目的python代码。如果项目有test相关的文件,也可以单独成包(可选)。

涉及本原则的项目有:


- openstack-keystone
- openstack-horizon

还有些项目虽然有若干子RPM包,但这些子RPM包是互斥的,则这种服务的结构如下:

+
Level | Package                                                                           | Example
+      |                                                                                   |  
+ ┌─┐  |                       ┌──────────────┐        ┌────────────────────────┐          | ┌───────────────────────┐ ┌───────────────────────────┐
+ │1│  |                       │ Root Package │        │ Doc Package (Optional) │          | │ openstack-neutron.rpm │ │ openstack-neutron-doc.rpm │
+ └─┘  |                       └────────┬─────┘        └────────────────────────┘          | └───────────────────────┘ └───────────────────────────┘
+      |                                │                                                  |
+      |          ┌─────────────────────┴───────────────────────────────┐                  |
+      |          │                                                     |                  |
+ ┌─┐  | ┌────────▼─────────┐ ┌──────────────────┐ ┌──────────────────┐ |                  | ┌──────────────────────────────┐ ┌───────────────────────────────────┐ ┌───────────────────────────────────┐
+ │2│  | │ Service1 Package │ │ Service2 Package │ │ Service3 Package │ |                  | │ openstack-neutron-server.rpm │ │ openstack-neutron-openvswitch.rpm │ │ openstack-neutron-linuxbridge.rpm │
+ └─┘  | └────────┬─────────┘ └────────┬─────────┘ └────────┬─────────┘ |                  | └──────────────────────────────┘ └───────────────────────────────────┘ └───────────────────────────────────┘
+      |          |                    |                    |           |                  |
+      |          └────────────────────┼────────────────────┘           |                  |
+      |                               |                                |                  |
+ ┌─┐  |                       ┌───────▼────────┐                       |                  | ┌──────────────────────────────┐
+ │3│  |                       │ Common Package │                       |                  | │ openstack-neutron-common.rpm │
+ └─┘  |                       └───────┬────────┘                       |                  | └──────────────────────────────┘
+      |                               │                                |                  |
+      |                               │                                |                  |
+      |                               │                                |                  |
+ ┌─┐  |                      ┌────────▼────────┐      ┌────────────────▼────────────────┐ | ┌─────────────────────┐ ┌───────────────────────────┐
+ │4│  |                      │ Library Package ◄------| Library Test Package (Optional) │ | │ python2-neutron.rpm │ │ python2-neutron-tests.rpm │
+ └─┘  |                      └─────────────────┘      └─────────────────────────────────┘ | └─────────────────────┘ └───────────────────────────┘
+

如图所示,Service2和Service3互斥。


1. Root包只包含不互斥的子包,互斥的子包单独提供。如果项目有doc相关的文件,也可以单独成包(可选)。
2. Service Package为子服务RPM包,包含该服务的systemd服务启动文件、自己独有的配置文件等。互斥的Service包不被Root包所包含,用户需要单独安装。
3. Common Package是共用依赖的RPM包,包含各个子服务依赖的通用配置文件、系统配置文件等。
4. Library Package为python源码包,包含了该项目的python代码。如果项目有test相关的文件,也可以单独成包(可选)。

涉及本原则的项目有:

- openstack-neutron

2.1.2 通用依赖类软件规范

+

一个依赖库一般只包含一个RPM包,不需要做拆分处理。


 Level | Package                                         | Example
       |                                                 |
  ┌─┐  |  ┌─────────────────┐ ┌────────────────────────┐ | ┌──────────────────────────┐ ┌───────────────────────────────┐
  │1│  |  │ Library Package │ │ Help Package (Optional)│ | │ python2-oslo-service.rpm │ │ python2-oslo-service-help.rpm │
  └─┘  |  └─────────────────┘ └────────────────────────┘ | └──────────────────────────┘ └───────────────────────────────┘

NOTE

+

openEuler社区对python2和python3 RPM包的命名有要求,python2的包前缀为python2-,python3的包前缀为python3-。因此,OpenStack要求开发者在打Library的RPM包时,也要遵守openEuler社区规范。

+

2.2 软件依赖功能

+

软件依赖分析功能为用户提供一键分析目标OpenStack版本包含的全量Python软件依赖拓扑及对应软件版本的能力,并自动与目标openEuler版本进行比对,输出对应的软件包开发建议。本功能包含两个子功能:

- 依赖分析

  对OpenStack Python包的依赖树进行解析,拆解依赖拓扑。依赖树的解析本质上是对有向图的遍历。理论上,一个正常的Python依赖树是一个有向无环图,有向无环图的解析方法很多,这里采用常用的广度优先搜索即可。但在某些特殊场景下,Python依赖树会变成有向有环图。例如:Sphinx是一个文档生成项目,但它自己的文档生成也依赖Sphinx,这就导致了依赖环的形成,类似的还有一些测试依赖库。针对这种问题,我们只需要把环上的特定节点手动断开即可。另一种规避方法是跳过文档、测试这类非核心库,这样不仅避免了依赖环的形成,也会极大减少软件包的数量,降低开发工作量。以OpenStack Wallaby版本为例,全量依赖包在700+以上,去掉文档、测试后,依赖包大概在300+左右。因此我们引入`core`(核心)的概念,用户根据自己的需求,选择要分析的软件范围。另外,虽然OpenStack包含几十个服务,但用户可能只需要其中的某些服务,因此我们再引入`projects`过滤器,用户可以根据自己的需求,指定分析的软件依赖范围。依赖遍历的过程可参考本节末尾的示意实现。

- 依赖比对

  依赖分析完后,还要有对应的openEuler开发动作,因此我们还要提供基于目标openEuler版本的RPM软件包开发建议。openEuler与OpenStack版本之间是N:N的映射关系:一个openEuler版本可以支持多个OpenStack版本,一个OpenStack版本也可以部署在多个openEuler版本上。用户在指定目标openEuler版本和OpenStack版本后,本功能自动遍历openEuler软件库,分析并输出OpenStack涉及的全量软件包需要进行的操作,例如初始化仓库、创建openEuler分支、升级软件包等,为开发者后续的开发提供指导。
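下面是依赖遍历的一个示意实现(非oos的真实代码)。假设已有`requires(pkg)`函数能返回某个包的直接依赖及其是否为doc/test类依赖,用广度优先搜索展开依赖拓扑,并通过`BROKEN_EDGES`手动断开已知的依赖环(如Sphinx自依赖):

```python
# 示意代码:广度优先遍历依赖树,支持断环与core模式过滤
from collections import deque

# 假设的"断环"配置:键为包名,值为需要忽略的依赖
BROKEN_EDGES = {"sphinx": {"sphinx"}}

def walk_dependencies(roots, requires, core_only=False):
    """roots: 起始软件列表; requires: callable, 返回 (依赖名, 是否doc/test依赖) 的可迭代对象."""
    depth = {name: 0 for name in roots}
    queue = deque(roots)
    while queue:
        pkg = queue.popleft()
        for dep, is_doc_or_test in requires(pkg):
            if core_only and is_doc_or_test:
                continue                      # core模式下跳过文档、测试类依赖
            if dep in BROKEN_EDGES.get(pkg, ()):
                continue                      # 手动断开依赖环
            if dep not in depth:              # 未访问过才入队,避免重复展开
                depth[dep] = depth[pkg] + 1
                queue.append(dep)
    return depth                              # 包名 -> 在依赖树中的深度
```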

2.2.1 版本匹配规范


- 依赖分析

  输入:目标OpenStack版本、目标OpenStack服务列表、是否只分析核心软件

  输出:所有涉及的软件包及每个软件包的对应内容,格式如下:

      └──{OpenStack版本名}_cached_file
           └──packageA.yaml
           └──packageB.yaml
           └──packageC.yaml
           ......

  每个软件内容格式如下:

      {
        "name": "packageA",
        "version_dict": {
           "version": "0.3.7",
           "eq_version": "",
           "ge_version": "0.3.5",
           "lt_version": "",
           "ne_version": [],
           "upper_version": "0.3.7"},
        "deep": {
           "count": 1,
           "list": ["packageB", "packageC"]},
        "requires": {}
      }

  关键字说明:

  | Key | Description |
  |:---:|:-----------|
  | name | 软件包名 |
  | version_dict | 软件版本要求,包括等于、大于等于、小于、不等于等 |
  | version_dict.deep | 表示该软件在全量依赖树的深度,以及深度遍历的路径 |
  | requires | 包含本软件的依赖软件列表 |

- 依赖比对

  输入:依赖分析结果、目标openEuler版本以及base比对基线

  输出:一个表格,包含每个软件的分析结果及处理建议,每一行表示一个软件,所有列名及定义规范如下:

  | Column | Description |
  |:------:|:-----------|
  | Project Name | 软件包名 |
  | openEuler Repo | 软件在openEuler上的源码仓库名 |
  | Repo version | openEuler上的源码版本 |
  | Required (Min) Version | 要求的最小版本 |
  | lt Version | 要求小于的版本 |
  | ne Version | 要求的不等于版本 |
  | Upper Version | 要求的最大版本 |
  | Status | 开发建议 |
  | Requires | 软件的依赖列表 |
  | Depth | 软件的依赖树深度 |

  其中Status包含的建议有:

  - "OK":当前版本直接可用,不需要处理。
  - "Need Create Repo":openEuler系统中没有此软件包,需要在Gitee的src-openeuler组织下新建仓库。
  - "Need Create Branch":仓库中没有所需分支,需要开发者创建并初始化。
  - "Need Init Branch":表明分支存在,但是里面并没有任何版本的源码包,开发者需要对此分支进行初始化。
  - "Need Downgrade":降级软件包。
  - "Need Upgrade":升级软件包。

  开发者根据Status的建议进行后续开发动作,其判定逻辑可参考本节末尾的示意实现。
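下面给出Status建议判定逻辑的一个示意实现(非oos的真实代码)。假设仓库、分支、当前版本等信息已由前一步查询得到,这里只演示版本比较到建议的映射:

```python
# 示意代码:由比对信息推导Status开发建议
from packaging import version

def suggest_status(repo_exists, branch_exists, branch_initialized,
                   repo_version, required_min, upper):
    if not repo_exists:
        return "Need Create Repo"
    if not branch_exists:
        return "Need Create Branch"
    if not branch_initialized or repo_version is None:
        return "Need Init Branch"
    current = version.parse(repo_version)
    if required_min and current < version.parse(required_min):
        return "Need Upgrade"          # 当前版本低于要求的最小版本
    if upper and current > version.parse(upper):
        return "Need Downgrade"        # 当前版本高于允许的最大版本
    return "OK"

# 用法示意
print(suggest_status(True, True, True, "0.3.6", "0.3.5", "0.3.7"))  # -> OK
```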

2.2.2 API和CLI定义


1. 创建依赖分析

   - CLI: oos dependence analysis create
   - endpoint: /dependence/analysis
   - type: POST
   - sync OR async: async
   - request body:

         {
             "release"[required]: Enum("OpenStack Release"),
             "runtime"[optional][Default: "3.10"]: Enum("Python version"),
             "core"[optional][Default: False]: Boolean,
             "projects"[optional][Default: None]: List("OpenStack service")
         }

   - response body:

         {
             "ID": UUID,
             "status": Enum("Running", "Error")
         }

2. 获取依赖分析

   - CLI: oos dependence analysis show、oos dependence analysis list
   - endpoint: /dependence/analysis/{UUID}、/dependence/analysis
   - type: GET
   - sync OR async: sync
   - request body: None
   - response body:

         {
             "ID": UUID,
             "status": Enum("Running", "Error", "OK")
         }

3. 删除依赖分析

   - CLI: oos dependence analysis delete
   - endpoint: /dependence/analysis/{UUID}
   - type: DELETE
   - sync OR async: sync
   - request body: None
   - response body:

         {
             "ID": UUID,
             "status": Enum("Error", "OK")
         }

4. 创建依赖比对

   - CLI: oos dependence generate
   - endpoint: /dependence/generate
   - type: POST
   - sync OR async: async
   - request body:

         {
             "analysis_id"[required]: UUID,
             "compare"[optional][Default: None]: {
                 "token"[required]: GITEE_TOKEN_ID,
                 "compare-from"[optional][Default: master]: Enum("openEuler project branch"),
                 "compare-branch"[optional][Default: master]: Enum("openEuler project branch")
             }
         }

   - response body:

         {
             "ID": UUID,
             "status": Enum("Running", "Error")
         }

5. 获取依赖比对

   - CLI: oos dependence generate show、oos dependence generate list
   - endpoint: /dependence/generate/{UUID}、/dependence/generate
   - type: GET
   - sync OR async: sync
   - request body: None
   - response body:

         {
             "ID": UUID,
             "data": RAW(result data file)
         }

6. 删除依赖比对

   - CLI: oos dependence generate delete
   - endpoint: /dependence/generate/{UUID}
   - type: DELETE
   - sync OR async: sync
   - request body: None
   - response body:

         {
             "ID": UUID,
             "status": Enum("Error", "OK")
         }

以Client/Server模式为例,下面给出一个通过REST接口创建并查询依赖分析任务的调用示意。
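该示意假设oos以Client/Server模式部署在 http://oos.example.com:8080(地址为示意值),并非oos的真实客户端代码:

```python
# 示意代码:创建依赖分析任务并轮询其状态
import time
import requests

BASE = "http://oos.example.com:8080"   # 假设的服务地址

resp = requests.post(f"{BASE}/dependence/analysis", json={
    "release": "wallaby",              # 目标OpenStack版本
    "runtime": "3.10",                 # 目标Python版本
    "core": True,                      # 只分析核心软件
    "projects": ["nova", "neutron"],   # 只关心这两个服务的依赖
})
task = resp.json()
print("analysis task:", task["ID"], task["status"])

# 创建接口是异步的,轮询查询接口直到任务结束
while True:
    status = requests.get(f"{BASE}/dependence/analysis/{task['ID']}").json()["status"]
    if status != "Running":
        break
    time.sleep(5)
print("final status:", status)
```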

2.3 软件SPEC生成功能

+

OpenStack依赖的大量python库是面向开发者的,这种库不对外提供用户服务,只提供代码级调用,其RPM内容构成单一、格式固定,适合使用工具化方式提高开发效率。

+

2.3.1 SPEC生成规范

+

SPEC编写一般分为几个阶段,每个阶段有对应的规范要求:

1. 常规项填写,包括Name、Version、Release、Summary、License等内容,这些内容由目标软件的PyPI信息提供。
2. 子软件包信息填写,包括软件包名、编译依赖、安装依赖、描述信息等,这些内容也由目标软件的PyPI信息提供。其中软件包名需要有明显的python化标识,比如以python3-为前缀。
3. 构建过程信息填写,包括%prep、%build、%install、%check内容,这些内容形式固定,生成对应的RPM宏命令即可。
4. RPM包文件封装阶段,本阶段通过文件搜索方式,把bin、lib、doc等内容分别放到对应目录即可。

NOTE:在通用规范外,也有一些例外情况,需要特殊说明:

1. 软件包名如果本身已包含python字眼,则不再需要添加python-或python3-前缀。
2. 软件构建和安装阶段,根据软件本身安装方式的不同,宏命令包括%py3_build、%pyproject_build等,需要人工审视。
3. 如果软件本身包含C语言等编译类代码,则需要移除BuildArch: noarch关键字,并且在%files阶段注意RPM宏%{python3_sitelib}与%{python3_sitearch}的区别。

下面给出按PyPI信息填充SPEC常规项的一个示意实现。
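该示意仅演示"由PyPI元数据生成常规项"这一步(非oos的真实实现),通过PyPI的JSON接口取元数据并填充一个极简的spec模板,模板内容与包名均为示意:

```python
# 示意代码:从PyPI拉取元数据,渲染最简SPEC常规项
import requests

SPEC_TEMPLATE = """\
%global pypi_name {pypi_name}
Name:           python-{pypi_name}
Version:        {version}
Release:        1
Summary:        {summary}
License:        {license}
URL:            {home_page}
BuildArch:      noarch

%description
{summary}
"""

def render_spec(pypi_name):
    info = requests.get(f"https://pypi.org/pypi/{pypi_name}/json").json()["info"]
    return SPEC_TEMPLATE.format(
        pypi_name=pypi_name,
        version=info["version"],
        summary=info["summary"] or pypi_name,
        license=info["license"] or "Unknown",
        home_page=info["home_page"] or info["package_url"],
    )

if __name__ == "__main__":
    print(render_spec("oslo.config"))
```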

2.3.2 API和CLI定义


1. 创建SPEC

   - CLI: oos spec create
   - endpoint: /spec
   - type: POST
   - sync OR async: async
   - request body:

         {
             "name"[required]: String,
             "version"[optional][Default: "latest"]: String,
             "arch"[optional][Default: False]: Boolean,
             "check"[optional][Default: True]: Boolean,
             "pyproject"[optional][Default: False]: Boolean,
         }

   - response body:

         {
             "ID": UUID,
             "status": Enum("Running", "Error")
         }

2. 获取SPEC

   - CLI: oos spec show、oos spec list
   - endpoint: /spec/{UUID}、/spec/
   - type: GET
   - sync OR async: sync
   - request body: None
   - response body:

         {
             "ID": UUID,
             "status": Enum("Running", "Error", "OK")
         }

3. 更新SPEC

   - CLI: oos spec update
   - endpoint: /spec/{UUID}
   - type: POST
   - sync OR async: async
   - request body:

         {
             "name"[required]: String,
             "version"[optional][Default: "latest"]: String,
         }

   - response body:

         {
             "ID": UUID,
             "status": Enum("Running", "Error")
         }

4. 删除SPEC

   - CLI: oos spec delete
   - endpoint: /spec/{UUID}
   - type: DELETE
   - sync OR async: sync
   - request body: None
   - response body:

         {
             "ID": UUID,
             "status": Enum("Error", "OK")
         }

2.4 自动化部署、测试功能

+

OpenStack的部署场景多样、部署流程复杂、部署技术门槛较高,为了解决门槛高、效率低、人力多的问题,openEuler OpenStack开发平台需要提供自动化部署、测试功能。


- 自动化部署

  提供基于openEuler的OpenStack一键部署能力,包括支持不同架构、不同服务、不同场景的部署功能,提供基于不同环境快速发放、配置openEuler环境的能力,并提供插件化能力,方便用户扩展支持的部署后端和场景。

- 自动化测试

  提供基于openEuler的OpenStack一键测试能力,包括支持不同场景的测试,提供用户自定义测试的能力,并规范测试报告,以及支持对测试结果上报和持久化的能力。

2.4.1 自动化部署

+

自动化部署主要包括两部分:openEuler环境准备和OpenStack部署。


- openEuler环境准备

  提供快速发放openEuler环境的能力,支持的发放方式包括创建公有云资源、纳管已有环境,具体设计如下:

  NOTE:openEuler的OpenStack支持以RPM + systemd的方式为主,暂不支持容器方式。

  - 创建公有云资源

    创建公有云资源以虚拟机支持为主(裸机在云上操作复杂,生态满足度不足,暂不做支持)。采用插件化方式,提供多云支持的能力,以华为云为参考实现,优先实现;其他云的支持根据用户需求持续推进。根据场景,支持all in one和三节点拓扑。

    1. 创建环境
       - CLI: oos env create
       - endpoint: /environment
       - type: POST
       - sync OR async: async
       - request body:

             {
                  "name"[required]: String,
                  "type"[required]: Enum("all-in-one", "cluster"),
                  "release"[required]: Enum("openEuler_Release"),
                  "flavor"[required]: Enum("small", "medium", "large"),
                  "arch"[required]: Enum("x86", "arm64"),
             }

       - response body:

             {
                  "ID": UUID,
                  "status": Enum("Running", "Error")
             }

    2. 查询环境
       - CLI: oos env list
       - endpoint: /environment
       - type: GET
       - sync OR async: async
       - request body: None
       - response body:

             {
                  "ID": UUID,
                  "Provider": String,
                  "Name": String,
                  "IP": IP_ADDRESS,
                  "Flavor": Enum("small", "medium", "large"),
                  "openEuler_release": String,
                  "OpenStack_release": String,
                  "create_time": TIME,
             }

    3. 删除环境
       - CLI: oos env delete
       - endpoint: /environment/{UUID}
       - type: DELETE
       - sync OR async: sync
       - request body: None
       - response body:

             {
                  "ID": UUID,
                  "status": Enum("Error", "OK")
             }

  - 纳管已有环境

    用户还可以直接使用已有的openEuler环境进行OpenStack部署,需要把已有环境纳管到平台中。纳管后的环境与创建的环境一样,可以直接查询或删除。

    1. 纳管环境
       - CLI: oos env manage
       - endpoint: /environment/manage
       - type: POST
       - sync OR async: sync
       - request body:

             {
                  "name"[required]: String,
                  "ip"[required]: IP_ADDRESS,
                  "release"[required]: Enum("openEuler_Release"),
                  "password"[required]: String,
             }

       - response body:

             {
                  "ID": UUID,
                  "status": Enum("Error", "OK")
             }

- OpenStack部署

  提供在已创建/纳管的openEuler环境上部署指定OpenStack版本的能力。

  1. 部署OpenStack
     - CLI: oos env setup
     - endpoint: /environment/setup
     - type: POST
     - sync OR async: async
     - request body:

           {
                "target"[required]: UUID(environment),
                "release"[required]: Enum("OpenStack_Release"),
           }

     - response body:

           {
                "ID": UUID,
                "status": Enum("Running", "Error")
           }

  2. 初始化OpenStack资源
     - CLI: oos env init
     - endpoint: /environment/init
     - type: POST
     - sync OR async: async
     - request body:

           {
                "target"[required]: UUID(environment),
           }

     - response body:

           {
                "ID": UUID,
                "status": Enum("Running", "Error")
           }

  3. 卸载已部署OpenStack
     - CLI: oos env clean
     - endpoint: /environment/clean
     - type: POST
     - sync OR async: async
     - request body:

           {
                "target"[required]: UUID(environment),
           }

     - response body:

           {
                "ID": UUID,
                "status": Enum("Running", "Error")
           }

自动化测试

+

环境部署成功后,SIG开发平台提供基于已部署OpenStack环境的自动化测试功能。主要包含以下几个重要内容:


OpenStack本身提供一套完善的测试框架,包括单元测试和功能测试。其中单元测试在2.3章节中已经由RPM spec包含,spec的%check阶段可以定义每个项目的单元测试方式,一般情况下只需要添加pytest或stestr即可。功能测试由OpenStack Tempest服务提供,在上文所述自动化部署的oos env init阶段,oos会自动安装Tempest并生成默认的配置文件。

- CLI: oos env test
- endpoint: /environment/test
- type: POST
- sync OR async: async
- request body:

      {
          "target"[required]: UUID(environment),
      }

- response body:

      {
          "ID": UUID,
          "status": Enum("Running", "Error")
      }

测试执行完后,oos会输出测试报告,默认情况下,oos使用subunit2html工具,生成html格式的Tempest测试结果文件。

+

2.5 openEuler自动化开发功能

+

OpenStack涉及软件包众多,随着版本不断演进、支持的服务不断完善,SIG维护的软件包列表会不断刷新。为了减少重复的开发动作,oos还封装了一些易用的代码开发平台自动化能力,比如基于Gitee的自动代码提交能力。功能如下:

     ┌───────────────────────────────────────────────────┐
     │                     Code Action                   │
     └─────────────────────┬─────────────────────────────┘
                           │
           ┌───────────────┼───────────────────┐
           │               │                   │
     ┌─────▼─────┐  ┌──────▼──────┐  ┌─────────▼─────────┐
     │Repo Action│  │Branch Action│  │Pull Request Action│
     └───────────┘  └─────────────┘  └───────────────────┘


1. Repo Action提供与软件仓相关的自动化功能:

   自动建仓
   - CLI: oos repo create
   - endpoint: /repo
   - type: POST
   - sync OR async: async
   - request body:

         {
             "project"[required]: String,
             "repo"[required]: String,
             "push"[optional][Default: "False"]: Boolean,
         }

   - response body:

         {
             "ID": UUID,
             "status": Enum("Running", "Error")
         }

2. Branch Action提供与软件分支相关的自动化功能:

   自动创建分支
   - CLI: oos repo branch-create
   - endpoint: /repo/branch
   - type: POST
   - sync OR async: async
   - request body:

         {
             "branches"[required]: {
                 "branch-name"[required]: String,
                 "branch-type"[optional][Default: "None"]: Enum("protected"),
                 "parent-branch"[required]: String
             }
         }

   - response body:

         {
             "ID": UUID,
             "status": Enum("Running", "Error")
         }

3. Pull Request Action提供与代码PR相关的自动化功能:

   (1)新增PR评论,方便用户执行类似retest、lgtm等常规化评论(基于Gitee API的调用示意见本节末尾)。
   - CLI: oos repo pr-comment
   - endpoint: /repo/pr/comment
   - type: POST
   - sync OR async: sync
   - request body:

         {
             "repo"[required]: String,
             "pr_number"[required]: Int,
             "comment"[required]: String
         }

   - response body:

         {
             "ID": UUID,
             "status": Enum("OK", "Error")
         }

   (2)获取SIG所有PR,方便maintainer获取当前SIG的开发现状,提高评审效率。
   - CLI: oos repo pr-fetch
   - endpoint: /repo/pr/fetch
   - type: POST
   - sync OR async: async
   - request body:

         {
             "repo"[optional][Default: "None"]: List[String]
         }

   - response body:

         {
             "ID": UUID,
             "status": Enum("Running", "Error")
         }
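下面是"新增PR评论"背后可能的Gitee调用方式的示意(非oos的真实实现)。这里假设使用Gitee OpenAPI v5的PR评论接口,组织名、仓库名、PR号均为示意值,接口细节以Gitee官方文档为准:

```python
# 示意代码:通过Gitee API给指定PR新增评论
import requests

GITEE_TOKEN = "your-gitee-token"      # 示意,实际应从环境变量或配置读取

def comment_pr(owner, repo, pr_number, comment):
    # 假设的Gitee OpenAPI v5端点
    url = f"https://gitee.com/api/v5/repos/{owner}/{repo}/pulls/{pr_number}/comments"
    resp = requests.post(url, json={
        "access_token": GITEE_TOKEN,
        "body": comment,
    })
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    comment_pr("src-openeuler", "openstack-nova", 1, "/lgtm")
```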

3. 质量、安全与合规

+

SIG开源软件需要符合openEuler社区对其中软件的各种要求,并且也要符合OpenStack社区软件的出口标准。

+

3.1 质量与安全


- 软件质量(可服务性)

  1. 对应软件代码需包含单元测试,覆盖率不低于80%。
  2. 需提供端到端功能测试,覆盖上述所有接口,以及核心的场景测试。
  3. 基于openEuler社区CI,构建CI/CD流程,所有Pull Request要有CI保证代码质量,定期发布release版本,软件发布间隔不大于3个月。
  4. 基于Gitee ISSUE系统处理用户发现并反馈的问题,闭环率大于80%,闭环周期不超过1周。

- 软件安全

  1. 数据安全:软件全程不联网,持久存储中不包含用户敏感信息。
  2. 网络安全:OOS在REST架构下使用HTTP协议通信,但软件设计目标是在内网环境中使用,不建议暴露在公网IP上;如必须如此,建议增加访问IP白名单限制。
  3. 系统安全:基于openEuler安全机制,定期发布CVE修复或安全补丁。
  4. 应用层安全:不涉及,不提供应用级安全服务,例如密码策略、访问控制等。
  5. 管理安全:软件提供日志生成和周期性备份机制,方便用户定期审计。

- 可靠性

  本软件面向openEuler社区OpenStack开发行为,不涉及服务上线或商业生产落地,所有代码公开透明,不涉及私有功能及代码,因此不提供节点冗余、容灾备份等功能。

3.2 合规


1. License合规

   本平台采用Apache 2.0 License,不限制下游fork软件的闭源与商业行为,但下游软件需标注代码来源以及保留原有License。

2. 法务合规

   本平台由开源开发者共同开发维护,不涉及商业公司的秘密以及非公开代码。所有贡献者需遵守openEuler社区贡献准则,确保自身的贡献合规合法,SIG及社区本身不承担相应责任。

   如发现不合规的源码,SIG无需获得贡献者的允许,有权利及义务及时删除,并有权禁止不合规代码或开发者继续贡献。

   开发者如果有非公开代码需要贡献,则要先遵守本公司的开源流程与规定,并按照openEuler社区开源规范公开贡献代码。

4. 实施计划


| 时间 | 内容 | 状态 |
|:----:|:----|:----:|
| 2021.06 | 完成软件整体框架编写,实现CLI Built-in机制,至少一个API可用 | Done |
| 2021.12 | 完成CLI Built-in机制的全量功能可用 | Done |
| 2022.06 | 完成质量加固,保证功能,在openEuler OpenStack社区开发流程中正式引入OOS | Done |
| 2022.12 | 不断完善OOS,保证易用性、健壮性,自动化覆盖度超过80%,降低开发人力投入 | Done |
| 2023.06 | 补齐REST框架、CI/CD流程,丰富Plugin机制,引入更多backend支持 | Working in progress |
| 2023.12 | 完成前端GUI功能 | Planning |
高低优先级虚拟机混部

+

虚拟机混合部署是指把对CPU、IO、Memory等资源有不同需求的虚拟机通过调度方式部署、迁移到同一个计算节点上,从而使得节点的资源得到充分利用。在单机的资源调度分配上,区分出高低优先级,即高优先级虚机和低优先级虚机发生资源竞争时,资源优先分配给前者,严格保障其QoS。

+

虚拟机混合部署的场景有多种,比如通过动态资源调度满足节点资源的动态调整;根据用户使用习惯动态调整节点虚拟机分布等等。而虚拟机高低优先级调度也是其中的一种实现方法。

+

在OpenStack Nova中引入虚拟机高低优先级技术,可以一定程度上满足虚拟机的混合部署要求。本文档主要针对OpenStack Nova虚拟机创建功能,介绍虚拟机高低优先级调度的设计与实现。

+

实现方案

+

在Nova的虚拟机创建、迁移流程中引入高低优先级概念,虚拟机对象新增高低优先级属性。高优先级虚拟机在调度的过程中,会尽可能的调度到资源充足的节点,这样的节点需要至少满足内存不超卖、高优先级虚拟机所用CPU不超卖的要求。

+

本特性的实现基于OpenStack Yoga版本,承载于openEuler 22.09创新版本中。同时引入openEuler 22.03 LTS SP1的Train版本。

+

总体架构

+

用户创建flavor或创建虚机时,可指定其优先级属性。但优先级属性不影响Nova现有的资源模型及节点调度策略,即Nova仍按正常流程选取计算节点及创建虚机。

+

虚机高低优先级特性主要影响虚机创建后单机层面的资源调度分配策略。高优先级虚机和低优先级虚机发生资源竞争时,资源优先分配给前者,严格保障其QoS。

+

Nova针对虚机高低优先级特性有以下改变:

1. VM对象和flavor新增高低优先级属性配置。同时结合业务场景,约束高优先级属性只能设置给绑核类型虚机,低优先级属性只能设置给非绑核类型虚机。
2. 对于具有优先级属性的虚机,需修改libvirt XML配置,让单机上的QoS管理组件(名为Skylark)感知,从而自动进行资源分配和QoS管理。
3. 低优先级虚机的绑核范围有改变,以充分利用高优先级虚机空闲的资源。

+

资源模型

+

- VM对象新增可选属性`priority`,`priority`可被设置成`high`或`low`,分别表示高低优先级。
- flavor extra_specs新增`hw:cpu_priority`字段,标识为高低优先级虚拟机规格,值为`high`或`low`。

参数限制及规则:


1. `priority=high`必须与`hw:cpu_policy=dedicated`配套使用,否则报错。
2. `priority=low`必须与`hw:cpu_policy=shared`(默认值)配套使用,否则报错。
3. VM对象的优先级配置和flavor的优先级配置都为可选,都不配置时代表是普通VM,都配置时以VM对象的优先级属性为准。

普通VM可与具有优先级属性的VM共存,因为优先级属性不影响Nova现有的资源模型及节点调度策略。当普通VM与高优先级VM发生资源竞争时,Skylark组件不会干预。当普通VM与低优先级VM发生资源竞争时,Skylark组件会优先保障普通VM的资源分配。

+

API

+

创建虚拟机API中可选参数`os:scheduler_hints.priority`可被设置成`high`或`low`,用于设置VM对象的优先级。

+
POST v2/servers (v2.1默认版本)

    {
        "OS-SCH-HNT:scheduler_hints": {"priority": "high"}
    }

Scheduler

+

保持不变

+

Compute

+

资源上报

+

保持不变

+

资源分配绑定

+

高低优先级机器创建按照priority标志分配CPU:


- 高优先级虚拟机只能是绑核类型虚机,一对一绑定`cpu_dedicated_set`中指定的CPU。
- 低优先级虚拟机只能是非绑核类型虚机,默认范围绑定`cpu_shared_set`中指定的CPU。

此外,nova.conf的[compute]配置块中新增配置项`cpu_priority_mix_enable`,默认值为False。设置为True后,低优先级虚拟机可使用高优先级虚拟机绑定的CPU,即低优先级虚拟机可范围绑定`cpu_shared_set`和`cpu_dedicated_set`指定的CPU。一个示意性的配置片段见下。
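下面是一个假设性的nova.conf配置片段,核数划分取自后文"举例"一节的设定,仅用于说明上述配置项的组合方式,不代表默认配置:

```ini
# /etc/nova/nova.conf —— 示意配置
[DEFAULT]
cpu_allocation_ratio = 8

[compute]
# 高优先级(绑核)虚机可用的物理核
cpu_dedicated_set = 0-11
# 低优先级(非绑核)虚机默认可用的物理核
cpu_shared_set = 12-13
# 允许低优先级虚机复用高优先级虚机绑定的CPU(本特性新增配置项)
cpu_priority_mix_enable = True
```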

虚拟机xml

+

高低优先级机器创建按照priority标志,对虚拟机进行标识。

+

- libvirt XML中新增`<resource>`片段,取值包括`/high_prio_machine`和`/low_prio_machine`两种,分别表示高低优先级虚拟机。该片段本身在Nova中没有任何作用,只是为Skylark QoS服务指明VM的高低优先级属性。

举例

+

假设一个compute节点拥有14个core,设置cpu_dedicated_set=0-11,一共12个核,cpu_shared_set=12-13,一共2个核心,cpu_allocation_ratio=8 则:


1. 高优先级VM在scheduler视角可用core为12,compute视角可绑核core也是12,与Nova原有逻辑一致。
2. 低优先级VM在scheduler视角可用core为2 * 8 = 16,compute视角可绑核core为2(当cpu_priority_mix_enable=False时),与Nova原有逻辑一致。
3. 低优先级VM在scheduler视角可用core为2 * 8 = 16,compute视角可绑核core为2+12=14(当cpu_priority_mix_enable=True时),与Nova原有逻辑有差异。

参数配置建议

+

先确定全局超分比和极端超分比。


全局超分比的定义:所有可分配vCPU数量(高和低总和)与所有可用物理core数量的比值。这是一个计算出来的理论值,比如上述举例中,全局超分比为 (12 + 2 * 8) / 14 = 2。
全局超分比的意义:在高低优先级场景下,全局超分比主要影响低优先级虚机一般条件下(高优先级虚机vCPU没有同时冲高)的QoS。设置合理的全局超分比可以减少底层资源充足但调度失败的情况出现。

极端超分比的定义:即cpu_allocation_ratio,只影响shared核心的超分能力。
极端超分比的意义:在高低优先级场景下,极端超分比主要影响低优先级虚机极端条件下(所有高优先级虚机vCPU同时冲高)的QoS。

用户结合业务特征及QoS目标,选择合适的全局超分比和极端超分比后,再按照下面的计算公式,配置合理的cpu_dedicated_set及cpu_shared_set。

计算公式:

```
用户期望的全局超分比 = (极端超分比 * shared核心数 + dedicated核心数) / compute所有核心数
```

还是以上述compute节点为例,compute所有核心数为14,假设极端超分比为8,则计算可得:

```
当dedicated核心数为12、shared核心数为2时,用户期望的全局超分比 = (8*2+12)/14 = 2

当dedicated核心数为4、shared核心数为10时,用户期望的全局超分比 = (8*10+4)/14 = 6
```

下面给出按该公式反推核划分的一个示意脚本。
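该脚本按上文给出的公式,由期望的全局超分比和极端超分比反推shared/dedicated核数划分,节点核数与超分比均为示意值:

```python
# 示意代码:枚举所有核划分,选出全局超分比最接近期望值的方案
def split_cores(total_cores, global_ratio, extreme_ratio):
    """返回 (dedicated核心数, shared核心数, 实际全局超分比)."""
    best = None
    for shared in range(total_cores + 1):
        dedicated = total_cores - shared
        ratio = (extreme_ratio * shared + dedicated) / total_cores
        if best is None or abs(ratio - global_ratio) < abs(best[2] - global_ratio):
            best = (dedicated, shared, ratio)
    return best

# 例:14核节点,期望全局超分比为2,极端超分比为8 -> (12, 2, 2.0)
print(split_cores(14, 2, 8))
```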

开发节奏

+

开发者:

+ +

时间点:


- 2022-04-01到2022-05-30 完成开发
- 2022-06-01到2022-07-30 测试、联调、刷新代码
- 2022-08-01到2022-08-30 完成RPM包构建
- 2022-09-30 引入openEuler 22.09(Yoga版本)
- 2022-12-30 引入openEuler 22.03 LTS SP1(Train版本)
openEuler 20.03 LTS SP2测试报告

openEuler ico

+

版权所有 © 2021 openEuler社区 + 您对“本文档”的复制、使用、修改及分发受知识共享(Creative Commons)署名—相同方式共享4.0国际公共许可协议(以下简称“CC BY-SA 4.0”)的约束。为了方便用户理解,您可以通过访问https://creativecommons.org/licenses/by-sa/4.0/了解CC BY-SA 4.0的概要 (但不是替代)。CC BY-SA 4.0的完整协议内容您可以访问如下网址获取:https://creativecommons.org/licenses/by-sa/4.0/legalcode。

+

修订记录

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
日期修订版本修改描述作者
2021-6-161初稿王玺源
2021-6-172增加Rocky版本测试报告黄填华
+

关键词:

+

OpenStack

+

摘要:

+

在openEuler 20.03 LTS SP2版本中提供OpenStack Queens、Rocky版本的RPM安装包。方便用户快速部署OpenStack。

+

缩略语清单:

+ + + + + + + + + + + + + + + + + + + + +
缩略语英文全名中文解释
CLICommand Line Interface命令行工具
ECSElastic Cloud Server弹性云服务器
+

1 特性概述

+

在openEuler 20.03 LTS SP2 release中提供OpenStack Queens、Rocky RPM安装包支持,包括项目:Keystone、Glance、Nova、Neutron、Cinder、Ironic、Trove、Kolla、Horizon、Tempest以及每个项目配套的CLI。

+

2 特性测试信息

+

本节描述被测对象的版本信息和测试的时间及测试轮次,包括依赖的硬件。

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
版本名称测试起始时间测试结束时间
openEuler 20.03 LTS SP2
(OpenStack各组件的安装部署测试)
2021.6.12021.6.7
openEuler 20.03 LTS SP2
(OpenStack基本功能测试,包括虚拟机,卷,网络相关资源的增删改查)
2021.6.82021.6.10
openEuler 20.03 LTS SP2
(OpenStack tempest集成测试)
2021.6.112021.6.15
openEuler 20.03 LTS SP2
(问题回归测试)
2021.6.162021.6.17
+

描述特性测试的硬件环境信息

+ + + + + + + + + + + + + + + + + + + + + + + + + +
硬件型号硬件配置信息备注
华为云ECSIntel Cascade Lake 3.0GHz 8U16G华为云x86虚拟机
华为云ECSHuawei Kunpeng 920 2.6GHz 8U16G华为云arm64虚拟机
TaiShan 200-2280Kunpeng 920,48 Core@2.6GHz*2; 256GB DDR4 RAMARM架构服务器
+

3 测试结论概述

+

3.1 测试整体结论

+

OpenStack Queens版本,共计执行Tempest用例1164个,主要覆盖了API测试和功能测试,通过7*24的长稳测试,Skip用例52个(全是openStack Queens版中已废弃的功能或接口,如Keystone V1、Cinder V1等),失败用例3个(测试用例本身问题),其他1109个用例全部通过,发现问题已解决,回归通过,无遗留风险,整体质量良好。

+

OpenStack Rocky版本,共计执行Tempest用例1197个,主要覆盖了API测试和功能测试,通过7*24的长稳测试,Skip用例105个(全是openStack Rocky版中已废弃的功能或接口,如KeystoneV1、Cinder V1等,和不支持的barbican项目),失败用例1个,其他1091个用例全部通过,发现问题已解决,回归通过,无遗留风险,整体质量良好。

+ + + + + + + + + + + + + + + + + + + + + +
测试活动tempest集成测试
接口测试API全覆盖
功能测试Queens版本覆盖Tempest所有相关测试用例1164个,其中Skip 52个,Fail 3个,其他全通过。
功能测试Rocky版本覆盖Tempest所有相关测试用例1197个,其中Skip 105个,Fail 1个, 其他全通过。
+ + + + + + + + + + + + + +
测试活动功能测试
功能测试虚拟机(KVM、Qemu)、存储(lvm、NFS、Ceph后端)、网络资源(linuxbridge、openvswitch)管理操作正常
+

3.2 约束说明

+

本次测试没有覆盖OpenStack Queens、Rocky版中明确废弃的功能和接口,因此不能保证已废弃的功能和接口(前文提到的Skip的用例)在openEuler 20.03 LTS SP2上能正常使用。

+

3.3 遗留问题分析

+

3.3.1 遗留问题影响以及规避措施

+ + + + + + + + + + + + + + + + + + + + + + + + + + +
问题单号问题描述问题级别问题影响和规避措施当前状态
1targetcli软件包与python2-rtslib-fb包冲突,无法安装使用tgtadm代替lioadm命令解决中
2python2-flake8软件包依赖低版本的pyflakes,导致yum update命令报出警告使用yum update --nobest命令升级软件包解决中
+

3.3.2 问题统计

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
问题总数严重主要次要不重要
数目14365
百分比10021.442.835.8
+

4 测试执行

+

4.1 测试执行统计数据

+

本节内容根据测试用例及实际执行情况进行特性整体测试的统计,可根据第二章的测试轮次分开进行统计说明。

+ + + + + + + + + + + + + + + + + + + + + + + +
版本名称测试用例数用例执行结果发现问题单数
openEuler 20.03 LTS SP2 OpenStack Queens1164通过1109个,skip 52个,Fail 3个7
openEuler 20.03 LTS SP2 OpenStack Rocky1197通过1001个,skip 101个7
+

4.2 后续测试建议

+
    +
  1. 涵盖主要的性能测试
  2. +
  3. 覆盖更多的driver/plugin测试
  4. +
+

5 附件

+

N/A

openEuler 20.03 LTS SP3测试报告

openEuler ico

+

版权所有 © 2021 openEuler社区 + 您对“本文档”的复制、使用、修改及分发受知识共享(Creative Commons)署名—相同方式共享4.0国际公共许可协议(以下简称“CC BY-SA 4.0”)的约束。为了方便用户理解,您可以通过访问https://creativecommons.org/licenses/by-sa/4.0/了解CC BY-SA 4.0的概要 (但不是替代)。CC BY-SA 4.0的完整协议内容您可以访问如下网址获取:https://creativecommons.org/licenses/by-sa/4.0/legalcode。

+

修订记录

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
日期修订版本修改描述作者
2021-12-101初稿及同步Train版本测试情况黄填华
+

关键词:

+

OpenStack

+

摘要:

+

在openEuler 20.03 LTS SP3版本中提供OpenStack Queens、Rocky、Train版本的RPM安装包。方便用户快速部署OpenStack。

+

缩略语清单:

+ + + + + + + + + + + + + + + + + + + + +
缩略语英文全名中文解释
CLICommand Line Interface命令行工具
ECSElastic Cloud Server弹性云服务器
+

1 特性概述

+

在openEuler 20.03 LTS SP2 release中提供OpenStack Queens、Rocky RPM安装包支持,包括项目:Keystone、Glance、Nova、Neutron、Cinder、Ironic、Trove、Kolla、Horizon、Tempest以及每个项目配套的CLI。 +openEuler 20.03 LTS SP3 release增加了OpenStack Train版本RPM安装包支持,包括项目:Keystone、Glance、Placement、Nova、Neutron、Cinder、Ironic、Trove、Kolla、Heat、Aodh、Ceilometer、Gnocchi、Swift、Horizon、Tempest以及每个项目配套的CLI。

+

2 特性测试信息

+

本节描述被测对象的版本信息和测试的时间及测试轮次,包括依赖的硬件。

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
版本名称测试起始时间测试结束时间
openEuler 20.03 LTS SP3 RC1
(OpenStack Train版本各组件的安装部署测试)
2021.11.252021.11.30
openEuler 20.03 LTS SP3 RC1
(OpenStack Train版本基本功能测试,包括虚拟机,卷,网络相关资源的增删改查)
2021.12.12021.12.2
openEuler 20.03 LTS SP3 RC2
(OpenStack Train版本tempest集成测试)
2021.12.32021.12.9
openEuler 20.03 LTS SP3 RC3
(OpenStack Train版本问题回归测试)
2021.12.102021.12.12
openEuler 20.03 LTS SP3 RC3
(OpenStack Queens&Rocky版本各组件的安装部署测试)
2021.12.102021.12.13
openEuler 20.03 LTS SP3 RC3
(OpenStack Queens&Rocky版本基本功能测试,包括虚拟机,卷,网络相关资源的增删改查)
2021.12.142021.12.16
openEuler 20.03 LTS SP3 RC4
(OpenStack Queens&Rocky版本tempest集成测试)
2021.12.172021.12.20
openEuler 20.03 LTS SP3 RC4
(OpenStack Queens&Rocky版本问题回归测试)
2021.12.212021.12.23
+

描述特性测试的硬件环境信息

+ + + + + + + + + + + + + + + + + + + + + + + + + +
硬件型号硬件配置信息备注
华为云ECSIntel Cascade Lake 3.0GHz 8U16G华为云x86虚拟机
华为云ECSHuawei Kunpeng 920 2.6GHz 8U16G华为云arm64虚拟机
TaiShan 200-2280Kunpeng 920,48 Core@2.6GHz*2; 256GB DDR4 RAMARM架构服务器
+

3 测试结论概述

+

3.1 测试整体结论

+

OpenStack Queens版本,共计执行Tempest用例1164个,主要覆盖了API测试和功能测试,Skip用例52个(全是openStack Queens版中已废弃的功能或接口,如Keystone V1、Cinder V1等),失败用例3个(测试用例本身问题),其他1109个用例全部通过,发现问题已解决,回归通过,无遗留风险,整体质量良好。

+

OpenStack Rocky版本,共计执行Tempest用例1197个,主要覆盖了API测试和功能测试,Skip用例101个(全是openStack Rocky版中已废弃的功能或接口,如KeystoneV1、Cinder V1等),其他1096个用例全部通过,发现问题已解决,回归通过,无遗留风险,整体质量良好。

+

OpenStack Train版本除了Cyborg(Cyborg安装部署正常,功能不可用)各组件基本功能正常,共计执行Tempest用例1179个,主要覆盖了API测试和功能测试,Skip用例115个(包括已废弃的功能或接口,如Keystone V1、Cinder V1等,包括一些复杂功能,比如文件注入,虚拟机配置等),其他1064个用例全部通过,共计发现问题14个(包括libvirt 1个问题),均已解决,回归通过,无遗留风险,整体质量良好。

+ + + + + + + + + + + + + + + + + + + + + + + + + +
测试活动tempest集成测试
接口测试API全覆盖
功能测试Queens版本覆盖Tempest所有相关测试用例1164个,其中Skip 52个,Fail 3个,其他全通过。
功能测试Rocky版本覆盖Tempest所有相关测试用例1197个,其中Skip 101个,其他全通过。
功能测试Train版本覆盖Tempest所有相关测试用例1179个,其中Skip 115个,其他全通过。
+ + + + + + + + + + + + + +
测试活动功能测试
功能测试虚拟机(KVM、Qemu)、存储(lvm)、网络资源(linuxbridge)管理操作正常
+

3.2 约束说明

+

本次测试没有覆盖OpenStack Queens、Rocky版中明确废弃的功能和接口,因此不能保证已废弃的功能和接口(前文提到的Skip的用例)在openEuler 20.03 LTS SP3上能正常使用,另外Cyborg功能不可用。

+

3.3 遗留问题分析

+

3.3.1 Queens&Rocky遗留问题影响以及规避措施

+ + + + + + + + + + + + + + + + + + + + + + + + + + +
问题单号问题描述问题级别问题影响和规避措施当前状态
1targetcli软件包与python2-rtslib-fb包冲突,无法安装使用tgtadm代替lioadm命令解决中
2python2-flake8软件包依赖低版本的pyflakes,导致yum update命令报出警告使用yum update --nobest命令升级软件包解决中
+

3.3.2 Train版本问题统计

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
问题总数严重主要次要不重要
数目14167
百分比1007.142.950
+

4 测试执行

+

4.1 测试执行统计数据

+

本节内容根据测试用例及实际执行情况进行特性整体测试的统计,可根据第二章的测试轮次分开进行统计说明。

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
版本名称测试用例数用例执行结果发现问题单数
openEuler 20.03 LTS SP3 OpenStack Queens1164通过1109个,skip 52个,Fail 3个0
openEuler 20.03 LTS SP3 OpenStack Rocky1197通过1096个,skip 101个0
openEuler 20.03 LTS SP3 OpenStack Train1179通过1064个,skip 115个14
+

4.2 后续测试建议

+
    +
  1. 涵盖主要的性能测试
  2. +
  3. 覆盖更多的driver/plugin测试
  4. +
+

5 附件

+

N/A


openEuler 22.03 LTS SP1测试报告

+

openEuler ico

+

版权所有 © 2021 openEuler社区 +您对“本文档”的复制、使用、修改及分发受知识共享(Creative Commons)署名—相同方式共享4.0国际公共许可协议(以下简称“CC BY-SA 4.0”)的约束。为了方便用户理解,您可以通过访问https://creativecommons.org/licenses/by-sa/4.0/了解CC BY-SA 4.0的概要 (但不是替代)。CC BY-SA 4.0的完整协议内容您可以访问如下网址获取:https://creativecommons.org/licenses/by-sa/4.0/legalcode。

+

修订记录

+ + + + + + + + + + + + + + + + + +
日期修订版本修改描述作者
2022-12-21初稿王玺源
+

关键词:

+

OpenStack

+

摘要:

+

openEuler 22.03 LTS SP1 版本中提供 OpenStack TrainOpenStack Wallaby 版本的 RPM 安装包,方便用户快速部署 OpenStack

+

缩略语清单:

+ + + + + + + + + + + + + + + + + + + + +
缩略语英文全名中文解释
CLICommand Line Interface命令行工具
ECSElastic Cloud Server弹性云服务器
+

1 特性概述

+

openEuler 22.03 LTS SP1 版本中提供 OpenStack TrainOpenStack Wallaby 版本的RPM安装包,包括以下项目以及每个项目配套的 CLI

+
    +
  • +

    Keystone

    +
  • +
  • +

    Neutron

    +
  • +
  • +

    Cinder

    +
  • +
  • +

    Nova

    +
  • +
  • +

    Placement

    +
  • +
  • +

    Glance

    +
  • +
  • +

    Horizon

    +
  • +
  • +

    Aodh

    +
  • +
  • +

    Ceilometer

    +
  • +
  • +

    Cyborg

    +
  • +
  • +

    Gnocchi

    +
  • +
  • +

    Heat

    +
  • +
  • +

    Swift

    +
  • +
  • +

    Ironic

    +
  • +
  • +

    Kolla

    +
  • +
  • +

    Trove

    +
  • +
  • +

    Tempest

    +
  • +
+

2 特性测试信息

+

本节描述被测对象的版本信息和测试的时间及测试轮次,包括依赖的硬件。

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
版本名称测试起始时间测试结束时间
openEuler 22.03 LTS SP1 RC1
(OpenStack Train版本各组件的安装部署测试)
2022.11.232022.11.29
openEuler 22.03 LTS SP1 RC1
(OpenStack Train版本基本功能测试,包括虚拟机,卷,网络相关资源的增删改查)
2022.11.232022.11.29
openEuler 22.03 LTS SP1 RC2
(OpenStack Train版本tempest集成测试)
2022.12.022022.12.08
openEuler 22.03 LTS SP1 RC3
(OpenStack Train版本问题回归测试)
2022.12.092022.12.15
openEuler 22.03 LTS SP1 RC3
(OpenStack Wallaby版本各组件的安装部署测试)
2022.12.092022.12.15
openEuler 22.03 LTS SP1 RC3
(OpenStack Wallaby基版本本功能测试,包括虚拟机,卷,网络相关资源的增删改查)
2022.12.092022.12.15
openEuler 22.03 LTS SP1 RC4
(OpenStack Wallaby版本tempest集成测试)
2022.12.162022.12.20
openEuler 22.03 LTS SP1 RC4
(OpenStack Wallaby版本问题回归测试)
2022.12.162022.12.20
+

描述特性测试的硬件环境信息

+ + + + + + + + + + + + + + + + + + + + + + + + + +
硬件型号硬件配置信息备注
华为云ECSIntel Cascade Lake 3.0GHz 8U16G华为云x86虚拟机
华为云ECSHuawei Kunpeng 920 2.6GHz 8U16G华为云arm64虚拟机
TaiShan 200-2280Kunpeng 920,48 Core@2.6GHz*2; 256GB DDR4 RAMARM架构服务器
+

3 测试结论概述

+

3.1 测试整体结论

+

OpenStack Train 版本,共计执行 Tempest 用例 1354 个,主要覆盖了 API 测试和功能测试,通过 7*24 的长稳测试,Skip 用例 64 个(全是 OpenStack Train 版中已废弃的功能或接口,如Keystone V1、Cinder V1等),失败用例 0 个,其他 1290 个用例全部通过,发现问题已解决,回归通过,无遗留风险,整体质量良好。

+

OpenStack Wallaby 版本,共计执行 Tempest 用例 1164 个,主要覆盖了API测试和功能测试,通过 7*24 的长稳测试,Skip 用例 70 个(全是 OpenStack Wallaby 版中已废弃的功能或接口,如KeystoneV1、Cinder V1等,和不支持的barbican项目),失败用例 0 个,其他 1094 个用例全部通过,发现问题已解决,回归通过,无遗留风险,整体质量良好。

+ + + + + + + + + + + + + + + + + + + + + +
测试活动tempest集成测试
接口测试API全覆盖
功能测试Train版本覆盖Tempest所有相关测试用例1354个,其中Skip 64个,Fail 0个,其他全通过。
功能测试Wallaby版本覆盖Tempest所有相关测试用例1164个,其中Skip 70个,Fail 0个, 其他全通过。
+ + + + + + + + + + + + + +
测试活动功能测试
功能测试虚拟机(KVM、Qemu)、存储(lvm、NFS、Ceph后端)、网络资源(linuxbridge、openvswitch)管理操作正常
+

3.2 约束说明

+

本次测试没有覆盖 OpenStack TrainOpenStack Wallaby 版中明确废弃的功能和接口,因此不能保证已废弃的功能和接口(前文提到的Skip的用例)在 openEuler 22.03 LTS SP1 上能正常使用。

+

3.3 遗留问题分析

+

3.3.1 遗留问题影响以及规避措施

+ + + + + + + + + + + + + + + + + + + +
问题单号问题描述问题级别问题影响和规避措施当前状态
N/AN/AN/AN/AN/A
+

3.3.2 问题统计

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
问题总数严重主要次要不重要
数目21010
百分比100500500
+ + + + + + + + + + + + + + +
ISSUE Link
https://gitee.com/openeuler/openstack/issues/I64OL3
https://gitee.com/openeuler/openstack/issues/I66IEB
+

4 测试执行

+

4.1 测试执行统计数据

+

本节内容根据测试用例及实际执行情况进行特性整体测试的统计,可根据第二章的测试轮次分开进行统计说明。

+ + + + + + + + + + + + + + + + + + + + + + + +
版本名称测试用例数用例执行结果发现问题单数
openEuler 22.03 LTS SP1 OpenStack Train1354通过1289个,skip 64个,Fail 0个2
openEuler 22.03 LTS SP1 OpenStack Wallaby1164通过1088个,skip 70个,Fail 0个1
+

4.2 后续测试建议

+
    +
  1. 涵盖主要的性能测试
  2. +
  3. 覆盖更多的driver/plugin测试
  4. +
+

5 附件

+

N/A


openEuler 22.03 LTS SP2测试报告

+

openEuler ico

+

版权所有 © 2023 openEuler社区 + 您对“本文档”的复制、使用、修改及分发受知识共享(Creative Commons)署名—相同方式共享4.0国际公共许可协议(以下简称“CC BY-SA 4.0”)的约束。为了方便用户理解,您可以通过访问https://creativecommons.org/licenses/by-sa/4.0/ 了解CC BY-SA 4.0的概要 (但不是替代)。CC BY-SA 4.0的完整协议内容您可以访问如下网址获取:https://creativecommons.org/licenses/by-sa/4.0/legalcode。

+

修订记录

+ + + + + + + + + + + + + + + + + +
日期修订版本修改描述作者
2023-06-211初稿王玺源
+

关键词:

+

OpenStack

+

摘要:

+

openEuler 22.03 LTS SP2 版本中提供 OpenStack TrainOpenStack Wallaby 版本的 RPM 安装包,方便用户快速部署 OpenStack

+

缩略语清单:

+ + + + + + + + + + + + + + + + + + + + +
缩略语英文全名中文解释
CLICommand Line Interface命令行工具
ECSElastic Cloud Server弹性云服务器
+

1 特性概述

+

openEuler 22.03 LTS SP2 版本中提供 OpenStack TrainOpenStack Wallaby 版本的RPM安装包,包括以下项目以及每个项目配套的 CLI

+
    +
  • +

    Keystone

    +
  • +
  • +

    Neutron

    +
  • +
  • +

    Cinder

    +
  • +
  • +

    Nova

    +
  • +
  • +

    Placement

    +
  • +
  • +

    Glance

    +
  • +
  • +

    Horizon

    +
  • +
  • +

    Aodh

    +
  • +
  • +

    Ceilometer

    +
  • +
  • +

    Cyborg

    +
  • +
  • +

    Gnocchi

    +
  • +
  • +

    Heat

    +
  • +
  • +

    Swift

    +
  • +
  • +

    Ironic

    +
  • +
  • +

    Kolla

    +
  • +
  • +

    Trove

    +
  • +
  • +

    Tempest

    +
  • +
  • +

    Barbican

    +
  • +
  • +

    Octavia

    +
  • +
  • +

    designate

    +
  • +
  • +

    Manila

    +
  • +
  • +

    Masakari

    +
  • +
  • +

    Mistral

    +
  • +
  • +

    Senlin

    +
  • +
  • +

    Zaqar

    +
  • +
+

2 特性测试信息

+

本节描述被测对象的版本信息和测试的时间及测试轮次,包括依赖的硬件。

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
版本名称测试起始时间测试结束时间
openEuler 22.03 LTS SP2 RC1
(OpenStack Train版本各组件的安装部署测试)
2023.05.172023.05.23
openEuler 22.03 LTS SP2 RC1
(OpenStack Train版本基本功能测试,包括虚拟机,卷,网络相关资源的增删改查)
2023.05.172023.05.23
openEuler 22.03 LTS SP2 RC2
(OpenStack Train版本tempest集成测试)
2023.05.242023.06.02
openEuler 22.03 LTS SP2 RC3
(OpenStack Train版本问题回归测试)
2023.06.032023.06.09
openEuler 22.03 LTS SP2 RC3
(OpenStack Wallaby版本各组件的安装部署测试)
2023.06.032023.06.09
openEuler 22.03 LTS SP2 RC3
(OpenStack Wallaby基版本本功能测试,包括虚拟机,卷,网络相关资源的增删改查)
2023.06.032023.06.09
openEuler 22.03 LTS SP2 RC4
(OpenStack Wallaby版本tempest集成测试)
2023.06.102023.06.16
openEuler 22.03 LTS SP2 RC4
(OpenStack Wallaby版本问题回归测试)
2023.06.102023.06.16
+

描述特性测试的硬件环境信息

+ + + + + + + + + + + + + + + + + + + + +
硬件型号硬件配置信息备注
华为云ECSIntel Cascade Lake 3.0GHz 8U16G华为云x86虚拟机
华为云ECSHuawei Kunpeng 920 2.6GHz 8U16G华为云arm64虚拟机
+

3 测试结论概述

+

3.1 测试整体结论

+

OpenStack Train 版本,共计执行 Tempest 用例 1354 个,主要覆盖了 API 测试和功能测试,通过 7*24 的长稳测试,Skip 用例 64 个(全是 OpenStack Train 版中已废弃的功能或接口,如Keystone V1、Cinder V1等),失败用例 0 个,其他 1290 个用例全部通过,发现问题已解决,回归通过,无遗留风险,整体质量良好。

+

OpenStack Wallaby 版本,共计执行 Tempest 用例 1164 个,主要覆盖了API测试和功能测试,通过 7*24 的长稳测试,Skip 用例 70 个(全是 OpenStack Wallaby 版中已废弃的功能或接口,如KeystoneV1、Cinder V1等,和不支持的barbican项目),失败用例 0 个,其他 1094 个用例全部通过,发现问题已解决,回归通过,无遗留风险,整体质量良好。

+ + + + + + + + + + + + + + + + + + + + + +
测试活动tempest集成测试
接口测试API全覆盖
功能测试Train版本覆盖Tempest所有相关测试用例1354个,其中Skip 64个,Fail 0个,其他全通过。
功能测试Wallaby版本覆盖Tempest所有相关测试用例1164个,其中Skip 70个,Fail 0个, 其他全通过。
+ + + + + + + + + + + + + +
测试活动功能测试
功能测试虚拟机(KVM、Qemu)、存储(lvm、NFS、Ceph后端)、网络资源(linuxbridge、openvswitch)管理操作正常
+

3.2 约束说明

+

本次测试没有覆盖 OpenStack TrainOpenStack Wallaby 版中明确废弃的功能和接口,因此不能保证已废弃的功能和接口(前文提到的Skip的用例)在 openEuler 22.03 LTS SP2 上能正常使用。

+

3.3 遗留问题分析

+

3.3.1 遗留问题影响以及规避措施

+ + + + + + + + + + + + + + + + + + + +
问题单号问题描述问题级别问题影响和规避措施当前状态
N/AN/AN/AN/AN/A
+

3.3.2 问题统计

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
问题总数严重主要次要不重要
数目120561
百分比100042508
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
ISSUE Link
https://gitee.com/src-openeuler/python-flask-restful/issues/I7ABYH
https://gitee.com/src-openeuler/python-zVMCloudConnector/issues/I79KJO
https://gitee.com/src-openeuler/openvswitch/issues/I79K23
https://gitee.com/src-openeuler/openstack-nova/issues/I79JC8
https://gitee.com/src-openeuler/python-rtslib-fb/issues/I79IXG
https://gitee.com/src-openeuler/python-suds-jurko/issues/I79IQM
https://gitee.com/src-openeuler/ovn/issues/I79I7O
https://gitee.com/openeuler/openstack/issues/I77LN7
https://gitee.com/openeuler/openstack/issues/I77LQN
https://gitee.com/openeuler/openstack/issues/I79OIL
https://gitee.com/openeuler/openstack/issues/I7BQC0
https://gitee.com/openeuler/openstack/issues/I7CC2N
+

4 测试执行

+

4.1 测试执行统计数据

+

本节内容根据测试用例及实际执行情况进行特性整体测试的统计,可根据第二章的测试轮次分开进行统计说明。

+ + + + + + + + + + + + + + + + + + + + + + + +
版本名称测试用例数用例执行结果发现问题单数
openEuler 22.03 LTS SP2 OpenStack Train1354通过1289个,skip 64个,Fail 0个2
openEuler 22.03 LTS SP2 OpenStack Wallaby1164通过1088个,skip 70个,Fail 0个1
+

4.2 后续测试建议

+
    +
  1. 涵盖主要的性能测试。
  2. +
  3. 覆盖更多的driver/plugin测试。
  4. +
  5. 重点测试SP2新增OpenStack服务,尽早发现问题,解决问题。
  6. +
+

5 附件

+

N/A


openEuler 22.03 LTS SP3测试报告

+

openEuler ico

+

版权所有 © 2023 openEuler社区 + 您对“本文档”的复制、使用、修改及分发受知识共享(Creative Commons)署名—相同方式共享4.0国际公共许可协议(以下简称“CC BY-SA 4.0”)的约束。为了方便用户理解,您可以通过访问https://creativecommons.org/licenses/by-sa/4.0/ 了解CC BY-SA 4.0的概要 (但不是替代)。CC BY-SA 4.0的完整协议内容您可以访问如下网址获取:https://creativecommons.org/licenses/by-sa/4.0/legalcode。

+

修订记录

+ + + + + + + + + + + + + + + + + +
日期修订版本修改描述作者
2023-12-271初稿郑挺
+

关键词:

+

OpenStack

+

摘要:

+

openEuler 22.03 LTS SP3 版本中提供 OpenStack TrainOpenStack Wallaby 版本的 RPM 安装包,方便用户快速部署 OpenStack

+

缩略语清单:

+ + + + + + + + + + + + + + + + + + + + +
缩略语英文全名中文解释
CLICommand Line Interface命令行工具
ECSElastic Cloud Server弹性云服务器
+

1 特性概述

+

openEuler 22.03 LTS SP3 版本中提供 OpenStack TrainOpenStack Wallaby 版本的RPM安装包,包括以下项目以及每个项目配套的 CLI

+
    +
  • +

    Keystone

    +
  • +
  • +

    Neutron

    +
  • +
  • +

    Cinder

    +
  • +
  • +

    Nova

    +
  • +
  • +

    Placement

    +
  • +
  • +

    Glance

    +
  • +
  • +

    Horizon

    +
  • +
  • +

    Aodh

    +
  • +
  • +

    Ceilometer

    +
  • +
  • +

    Cyborg

    +
  • +
  • +

    Gnocchi

    +
  • +
  • +

    Heat

    +
  • +
  • +

    Swift

    +
  • +
  • +

    Ironic

    +
  • +
  • +

    Kolla

    +
  • +
  • +

    Trove

    +
  • +
  • +

    Tempest

    +
  • +
  • +

    Barbican

    +
  • +
  • +

    Octavia

    +
  • +
  • +

    designate

    +
  • +
  • +

    Manila

    +
  • +
  • +

    Masakari

    +
  • +
  • +

    Mistral

    +
  • +
  • +

    Senlin

    +
  • +
  • +

    Zaqar

    +
  • +
+

2 特性测试信息

+

本节描述被测对象的版本信息和测试的时间及测试轮次,包括依赖的硬件。

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
版本名称测试起始时间测试结束时间
openEuler 22.03 LTS SP3 RC1
(OpenStack Train版本各组件的安装部署测试)
2023.11.232023.11.27
openEuler 22.03 LTS SP3 RC1
(OpenStack Train版本基本功能测试,包括虚拟机,卷,网络相关资源的增删改查)
2023.11.282023.12.1
openEuler 22.03 LTS SP3 RC2
(OpenStack Train版本tempest集成测试)
2023.12.22023.12.6
openEuler 22.03 LTS SP3 RC3
(OpenStack Train版本问题回归测试)
2023.12.72023.12.11
openEuler 22.03 LTS SP3 RC3
(OpenStack Wallaby版本各组件的安装部署测试)
2023.12.122023.12.16
openEuler 22.03 LTS SP3 RC3
(OpenStack Wallaby基版本本功能测试,包括虚拟机,卷,网络相关资源的增删改查)
2023.12.172023.12.21
openEuler 22.03 LTS SP3 RC4
(OpenStack Wallaby版本tempest集成测试)
2023.12.212023.12.25
openEuler 22.03 LTS SP3 RC4
(OpenStack Wallaby版本问题回归测试)
2023.12.252023.12.28
+

描述特性测试的硬件环境信息

+ + + + + + + + + + + + + + + + + + + + +
硬件型号硬件配置信息备注
华为云ECSIntel Cascade Lake 3.0GHz 8U16G华为云x86虚拟机
华为云ECSHuawei Kunpeng 920 2.6GHz 8U16G华为云arm64虚拟机
+

3 测试结论概述

+

3.1 测试整体结论

+

OpenStack Train 版本,共计执行 Tempest 用例 1303 个,主要覆盖了 API 测试和功能测试,通过 7*24 的长稳测试,Skip 用例 65 个(全是 OpenStack Train 版中已废弃的功能或接口,如Keystone V1、Cinder V1等),失败用例 0 个,其他 1238 个用例全部通过,发现问题已解决,回归通过,无遗留风险,整体质量良好。

+

OpenStack Wallaby 版本,共计执行 Tempest 用例 1263 个,主要覆盖了API测试和功能测试,通过 7*24 的长稳测试,Skip 用例 93 个(全是 OpenStack Wallaby 版中已废弃的功能或接口,如KeystoneV1、Cinder V1等,和不支持的barbican项目),失败用例 0 个,其他 1170 个用例全部通过,发现问题已解决,回归通过,无遗留风险,整体质量良好。

+ + + + + + + + + + + + + + + + + + + + + +
测试活动tempest集成测试
接口测试API全覆盖
功能测试Train版本覆盖Tempest所有相关测试用例1303个,其中Skip 65个,Fail 0个,其他全通过。
功能测试Wallaby版本覆盖Tempest所有相关测试用例1263个,其中Skip 93个,Fail 0个, 其他全通过。
+ + + + + + + + + + + + + +
测试活动功能测试
功能测试虚拟机(KVM、Qemu)、存储(lvm、NFS、Ceph后端)、网络资源(linuxbridge、openvswitch)管理操作正常
+

3.2 约束说明

+

本次测试没有覆盖 OpenStack TrainOpenStack Wallaby 版中明确废弃的功能和接口,因此不能保证已废弃的功能和接口(前文提到的Skip的用例)在 openEuler 22.03 LTS SP3 上能正常使用。

+

3.3 遗留问题分析

+

3.3.1 遗留问题影响以及规避措施

+ + + + + + + + + + + + + + + + + + + +
问题单号问题描述问题级别问题影响和规避措施当前状态
N/AN/AN/AN/AN/A
+

3.3.2 问题统计

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
问题总数严重主要次要不重要
数目10100
百分比100010000
+ + + + + + + + + + + +
ISSUE Link
https://gitee.com/src-openeuler/python-ndg-httpsclient/issues/I8Q6GR
+

4 测试执行

+

4.1 测试执行统计数据

+

本节内容根据测试用例及实际执行情况进行特性整体测试的统计,可根据第二章的测试轮次分开进行统计说明。

+ + + + + + + + + + + + + + + + + + + + + + + +
版本名称测试用例数用例执行结果发现问题单数
openEuler 22.03 LTS SP3 OpenStack Train1303通过1238个,skip 65个,Fail 0个0
openEuler 22.03 LTS SP3 OpenStack Wallaby1263通过1170个,skip 93个,Fail 0个1
+

4.2 后续测试建议

+
    +
  1. 涵盖主要的性能测试。
  2. +
  3. 覆盖更多的driver/plugin测试。
  4. +
  5. 重点测试SP3新增OpenStack服务,尽早发现问题,解决问题。
  6. +
+

5 附件

+

N/A


openEuler 22.03 LTS SP4测试报告

+

avatar

+

版权所有 © 2023 openEuler社区 + 您对“本文档”的复制、使用、修改及分发受知识共享(Creative Commons)署名—相同方式共享4.0国际公共许可协议(以下简称“CC BY-SA 4.0”)的约束。为了方便用户理解,您可以通过访问https://creativecommons.org/licenses/by-sa/4.0/ 了解CC BY-SA 4.0的概要 (但不是替代)。CC BY-SA 4.0的完整协议内容您可以访问如下网址获取:https://creativecommons.org/licenses/by-sa/4.0/legalcode。

+

修订记录

+ + + + + + + + + + + + + + + + + +
日期修订版本修改描述作者
2024-06-211初稿王静
+

关键词:

+

OpenStack

+

摘要:

+

openEuler 22.03 LTS SP4 版本中提供 OpenStack TrainOpenStack Wallaby 版本的 RPM 安装包,方便用户快速部署 OpenStack

+

缩略语清单:

+ + + + + + + + + + + + + + + + + + + + +
缩略语英文全名中文解释
CLICommand Line Interface命令行工具
ECSElastic Cloud Server弹性云服务器
+

1 特性概述

+

openEuler 22.03 LTS SP4 版本中提供 OpenStack TrainOpenStack Wallaby 版本的RPM安装包,包括以下项目以及每个项目配套的 CLI

+
- Keystone
- Neutron
- Cinder
- Nova
- Placement
- Glance
- Horizon
- Aodh
- Ceilometer
- Cyborg
- Gnocchi
- Heat
- Swift
- Ironic
- Kolla
- Trove
- Tempest
- Barbican
- Octavia
- designate
- Manila
- Masakari
- Mistral
- Senlin
- Zaqar
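上述 RPM 包的一个最小化安装示意如下(假设:已参照 OpenStack SIG 文档配置好对应 OpenStack 版本的 yum 源;这里仅以 Nova 为例,仓库与包名以实际发布内容为准):

```shell
# 假设对应 OpenStack 版本的 yum 源已启用;openstack-nova 为总包,会引入各子服务包
dnf install -y openstack-nova
# nova-manage 由 openstack-nova-common 提供,可用于简单验证安装是否完整
nova-manage --help
# 查看实际安装到的 Nova 相关子包
rpm -qa | grep openstack-nova
```

其余项目通常以 openstack-<项目名> 作为总包名(以各软件仓实际拆包为准),安装方式类似。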

2 特性测试信息

+

本节描述被测对象的版本信息和测试的时间及测试轮次,包括依赖的硬件。

| 版本名称 | 测试起始时间 | 测试结束时间 |
|:---------|:------------:|:------------:|
| openEuler 22.03 LTS SP4 RC1(OpenStack Train版本各组件的安装部署测试) | 2024.04.23 | 2024.04.27 |
| openEuler 22.03 LTS SP4 RC1(OpenStack Train版本基本功能测试,包括虚拟机、卷、网络相关资源的增删改查) | 2024.04.28 | 2024.05.09 |
| openEuler 22.03 LTS SP4 RC2(OpenStack Train版本tempest集成测试) | 2024.05.09 | 2024.05.16 |
| openEuler 22.03 LTS SP4 RC3(OpenStack Train版本问题回归测试) | 2024.05.17 | 2024.05.21 |
| openEuler 22.03 LTS SP4 RC3(OpenStack Wallaby版本各组件的安装部署测试) | 2024.05.22 | 2024.05.25 |
| openEuler 22.03 LTS SP4 RC3(OpenStack Wallaby版本基本功能测试,包括虚拟机、卷、网络相关资源的增删改查) | 2024.05.27 | 2024.05.30 |
| openEuler 22.03 LTS SP4 RC4(OpenStack Wallaby版本tempest集成测试) | 2024.06.01 | 2024.06.07 |
| openEuler 22.03 LTS SP4 RC4(OpenStack Wallaby版本问题回归测试) | 2024.06.08 | 2024.06.19 |

描述特性测试的硬件环境信息

| 硬件型号 | 硬件配置信息 | 备注 |
|:--------:|:-------------|:-----|
| 华为云ECS | Intel Cascade Lake 3.0GHz 8U16G | 华为云x86虚拟机 |
| 华为云ECS | Huawei Kunpeng 920 2.6GHz 8U16G | 华为云arm64虚拟机 |

3 测试结论概述

+

3.1 测试整体结论

+

OpenStack Train 版本,共计执行 Tempest 用例 1420 个,主要覆盖了 API 测试和功能测试,通过 7*24 的长稳测试,Skip 用例 66 个(全是 OpenStack Train 版中已废弃的功能或接口,如Keystone V1、Cinder V1等),失败用例 0 个,其他 1354 个用例全部通过,发现问题已解决,回归通过,无遗留风险,整体质量良好。

+

OpenStack Wallaby 版本,共计执行 Tempest 用例 1436 个,主要覆盖了API测试和功能测试,通过 7*24 的长稳测试,Skip 用例 95 个(全是 OpenStack Wallaby 版中已废弃的功能或接口,如KeystoneV1、Cinder V1等,和不支持的barbican项目),失败用例 0 个,其他 1341 个用例全部通过,发现问题已解决,回归通过,无遗留风险,整体质量良好。

| 测试活动 | tempest集成测试 |
|:--------:|:----------------|
| 接口测试 | API全覆盖 |
| 功能测试 | Train版本覆盖Tempest所有相关测试用例1420个,其中Skip 66个,Fail 0个,其他全通过。 |
| 功能测试 | Wallaby版本覆盖Tempest所有相关测试用例1436个,其中Skip 95个,Fail 0个,其他全通过。 |

| 测试活动 | 功能测试 |
|:--------:|:---------|
| 功能测试 | 虚拟机(KVM、Qemu)、存储(lvm、NFS、Ceph后端)、网络资源(linuxbridge、openvswitch)管理操作正常 |
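表中“功能测试”所覆盖的虚拟机、卷、网络资源的基本增删改查,大致可以用如下 OpenStack CLI 操作示意(仅为示例流程,demo-* 资源名、镜像 cirros 与规格 m1.tiny 均为假设,需替换为环境中实际存在的对象):

```shell
# 创建网络与子网
openstack network create demo-net
openstack subnet create --network demo-net --subnet-range 192.168.100.0/24 demo-subnet
# 创建一个 1GB 的卷
openstack volume create --size 1 demo-vol
# 基于已有镜像和规格创建虚拟机并接入上述网络
openstack server create --image cirros --flavor m1.tiny --network demo-net demo-vm
# 验证后清理资源
openstack server delete demo-vm
openstack volume delete demo-vol
openstack subnet delete demo-subnet
openstack network delete demo-net
```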

3.2 约束说明

+

本次测试没有覆盖 OpenStack Train 和 OpenStack Wallaby 版中明确废弃的功能和接口,因此不能保证已废弃的功能和接口(前文提到的Skip的用例)在 openEuler 22.03 LTS SP4 上能正常使用。

+

3.3 遗留问题分析

+

3.3.1 遗留问题影响以及规避措施

| 问题单号 | 问题描述 | 问题级别 | 问题影响和规避措施 | 当前状态 |
|:--------:|:--------:|:--------:|:------------------:|:--------:|
| N/A | N/A | N/A | N/A | N/A |

3.3.2 问题统计

|        | 问题总数 | 严重 | 主要 | 次要 | 不重要 |
|:------:|:--------:|:----:|:----:|:----:|:------:|
| 数目   | 1 | 0 | 1 | 0 | 0 |
| 百分比 | 100 | 0 | 100 | 0 | 0 |

| ISSUE Link |
|:-----------|

4 测试执行

+

4.1 测试执行统计数据

+

本节内容根据测试用例及实际执行情况进行特性整体测试的统计,可根据第二章的测试轮次分开进行统计说明。

| 版本名称 | 测试用例数 | 用例执行结果 | 发现问题单数 |
|:---------|:----------:|:-------------|:------------:|
| openEuler 22.03 LTS SP4 OpenStack Train | 1420 | 通过1354个,skip 66个,Fail 0个 | 0 |
| openEuler 22.03 LTS SP4 OpenStack Wallaby | 1436 | 通过1341个,skip 95个,Fail 0个 | 0 |

4.2 后续测试建议

+
1. 涵盖主要的性能测试。
2. 覆盖更多的driver/plugin测试。
3. 重点测试SP4新增OpenStack服务,尽早发现问题,解决问题。

5 附件

+

N/A

+ + + + + + + + + diff --git a/site/test/openEuler-22.03-LTS/index.html b/site/test/openEuler-22.03-LTS/index.html new file mode 100644 index 0000000000000000000000000000000000000000..e21970ef18f8595988a8efb8aa6a17443b842a00 --- /dev/null +++ b/site/test/openEuler-22.03-LTS/index.html @@ -0,0 +1,555 @@ + + + + + + + + openEuler-22.03-LTS - OpenStack SIG Doc + + + + + + + + + + + + + +

openEuler 22.03 LTS 测试报告

+

openEuler ico

+

版权所有 © 2021 openEuler社区 +您对“本文档”的复制、使用、修改及分发受知识共享(Creative Commons)署名—相同方式共享4.0国际公共许可协议(以下简称“CC BY-SA 4.0”)的约束。为了方便用户理解,您可以通过访问https://creativecommons.org/licenses/by-sa/4.0/了解CC BY-SA 4.0的概要 (但不是替代)。CC BY-SA 4.0的完整协议内容您可以访问如下网址获取:https://creativecommons.org/licenses/by-sa/4.0/legalcode。

+

修订记录

| 日期 | 修订版本 | 修改描述 | 作者 |
|:----:|:--------:|:--------:|:----:|
| 2022-03-21 | 1 | 初稿 | 李佳伟 |

关键词:

+

OpenStack

+

摘要:

+

openEuler 22.03 LTS 版本中提供 OpenStack Train 和 OpenStack Wallaby 版本的 RPM 安装包,方便用户快速部署 OpenStack。

+

缩略语清单:

| 缩略语 | 英文全名 | 中文解释 |
|:------:|:---------|:---------|
| CLI | Command Line Interface | 命令行工具 |
| ECS | Elastic Cloud Server | 弹性云服务器 |

1 特性概述

+

openEuler 22.03 LTS 版本中提供 OpenStack Train 和 OpenStack Wallaby 版本的RPM安装包,包括以下项目以及每个项目配套的 CLI:

+
- Keystone
- Neutron
- Cinder
- Nova
- Placement
- Glance
- Horizon
- Aodh
- Ceilometer
- Cyborg
- Gnocchi
- Heat
- Swift
- Ironic
- Kolla
- Trove
- Tempest

2 特性测试信息

+

本节描述被测对象的版本信息和测试的时间及测试轮次,包括依赖的硬件。

| 版本名称 | 测试起始时间 | 测试结束时间 |
|:---------|:------------:|:------------:|
| openEuler 22.03 LTS RC1(OpenStack Train版本各组件的安装部署测试) | 2022.02.20 | 2022.02.27 |
| openEuler 22.03 LTS RC1(OpenStack Train版本基本功能测试,包括虚拟机、卷、网络相关资源的增删改查) | 2022.02.28 | 2022.03.03 |
| openEuler 22.03 LTS RC2(OpenStack Train版本tempest集成测试) | 2022.03.04 | 2022.03.07 |
| openEuler 22.03 LTS RC3(OpenStack Train版本问题回归测试) | 2022.03.08 | 2022.03.09 |
| openEuler 22.03 LTS RC3(OpenStack Wallaby版本各组件的安装部署测试) | 2022.03.10 | 2022.03.15 |
| openEuler 22.03 LTS RC3(OpenStack Wallaby版本基本功能测试,包括虚拟机、卷、网络相关资源的增删改查) | 2022.03.16 | 2022.03.19 |
| openEuler 22.03 LTS RC4(OpenStack Wallaby版本tempest集成测试) | 2022.03.20 | 2022.03.21 |
| openEuler 22.03 LTS RC4(OpenStack Wallaby版本问题回归测试) | 2022.03.21 | 2022.03.22 |

描述特性测试的硬件环境信息

| 硬件型号 | 硬件配置信息 | 备注 |
|:--------:|:-------------|:-----|
| 华为云ECS | Intel Cascade Lake 3.0GHz 8U16G | 华为云x86虚拟机 |
| 华为云ECS | Huawei Kunpeng 920 2.6GHz 8U16G | 华为云arm64虚拟机 |
| TaiShan 200-2280 | Kunpeng 920,48 Core@2.6GHz*2; 256GB DDR4 RAM | ARM架构服务器 |

3 测试结论概述

+

3.1 测试整体结论

+

OpenStack Train 版本,共计执行 Tempest 用例 1354 个,主要覆盖了 API 测试和功能测试,通过 7*24 的长稳测试,Skip 用例 64 个(全是 OpenStack Train 版中已废弃的功能或接口,如Keystone V1、Cinder V1等),失败用例 1 个(测试用例本身问题),其他 1289 个用例全部通过,发现问题已解决,回归通过,无遗留风险,整体质量良好。

+

OpenStack Wallaby 版本,共计执行 Tempest 用例 1164 个,主要覆盖了API测试和功能测试,通过 7*24 的长稳测试,Skip 用例 70 个(全是 OpenStack Wallaby 版中已废弃的功能或接口,如KeystoneV1、Cinder V1等,和不支持的barbican项目),失败用例 6 个,其他 1088 个用例全部通过,发现问题已解决,回归通过,无遗留风险,整体质量良好。

| 测试活动 | tempest集成测试 |
|:--------:|:----------------|
| 接口测试 | API全覆盖 |
| 功能测试 | Train版本覆盖Tempest所有相关测试用例1354个,其中Skip 64个,Fail 1个,其他全通过。 |
| 功能测试 | Wallaby版本覆盖Tempest所有相关测试用例1164个,其中Skip 70个,Fail 6个,其他全通过。 |

| 测试活动 | 功能测试 |
|:--------:|:---------|
| 功能测试 | 虚拟机(KVM、Qemu)、存储(lvm、NFS、Ceph后端)、网络资源(linuxbridge、openvswitch)管理操作正常 |
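上表中的 tempest 集成测试,其执行方式大致可参考如下流程示意(工作目录名与正则过滤条件仅为示例,etc/tempest.conf 需按被测环境实际填写):

```shell
# 初始化 Tempest 工作目录(目录名仅为示例)
tempest init tempest-ws
cd tempest-ws
# 按被测环境修改 etc/tempest.conf 后,先执行冒烟用例做基本验证
tempest run --smoke
# 再执行完整的 API 与场景用例,可用正则过滤
tempest run --regex '(tempest\.api|tempest\.scenario)'
```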

3.2 约束说明

+

本次测试没有覆盖 OpenStack Train 和 OpenStack Wallaby 版中明确废弃的功能和接口,因此不能保证已废弃的功能和接口(前文提到的Skip的用例)在 openEuler 22.03 LTS 上能正常使用。

+

3.3 遗留问题分析

+

3.3.1 遗留问题影响以及规避措施

| 问题单号 | 问题描述 | 问题级别 | 问题影响和规避措施 | 当前状态 |
|:--------:|:--------:|:--------:|:------------------:|:--------:|
| N/A | N/A | N/A | N/A | N/A |

3.3.2 问题统计

|        | 问题总数 | 严重 | 主要 | 次要 | 不重要 |
|:------:|:--------:|:----:|:----:|:----:|:------:|
| 数目   | 10 | 2 | 6 | 2 | 0 |
| 百分比 | 100 | 20 | 60 | 20 | 0 |

4 测试执行

+

4.1 测试执行统计数据

+

本节内容根据测试用例及实际执行情况进行特性整体测试的统计,可根据第二章的测试轮次分开进行统计说明。

| 版本名称 | 测试用例数 | 用例执行结果 | 发现问题单数 |
|:---------|:----------:|:-------------|:------------:|
| openEuler 22.03 LTS OpenStack Train | 1354 | 通过1289个,skip 64个,Fail 1个 | 7 |
| openEuler 22.03 LTS OpenStack Wallaby | 1164 | 通过1088个,skip 70个,Fail 6个 | 3 |

4.2 后续测试建议

+
1. 涵盖主要的性能测试
2. 覆盖更多的driver/plugin测试

5 附件

+

N/A

+ + + + + + + + + diff --git a/site/test/openEuler-22.09/index.html b/site/test/openEuler-22.09/index.html new file mode 100644 index 0000000000000000000000000000000000000000..887ef3117c3d4bff9b1e7584436a8ef697eab932 --- /dev/null +++ b/site/test/openEuler-22.09/index.html @@ -0,0 +1,535 @@ + + + + + + + + openEuler-22.09 - OpenStack SIG Doc + + + + + + + + + + + + + +

openEuler 22.09 OpenStack Yoga + OpenSD + 虚拟机高低优先级特性测试报告

+

openEuler ico

+

版权所有 © 2022 openEuler社区 +您对“本文档”的复制、使用、修改及分发受知识共享(Creative Commons)署名—相同方式共享4.0国际公共许可协议(以下简称“CC BY-SA 4.0”)的约束。为了方便用户理解,您可以通过访问https://creativecommons.org/licenses/by-sa/4.0/了解CC BY-SA 4.0的概要 (但不是替代)。CC BY-SA 4.0的完整协议内容您可以访问如下网址获取:https://creativecommons.org/licenses/by-sa/4.0/legalcode。

+

修订记录

| 日期 | 修订版本 | 修改描述 | 作者 |
|:----:|:--------:|:---------|:----:|
| 2022-09-15 | 1 | 初稿 | 韩光宇 |
| 2022-09-16 | 2 | 格式整改,新增opensd测试报告,新增虚拟机高低优先级特性测试报告 | 王玺源 |

关键词:

+

OpenStack、opensd

+

摘要:

+

openEuler 22.09 版本中提供 OpenStack Yoga 版本的 RPM 安装包,方便用户快速部署 OpenStack

+

opensd是中国联通在openEuler开源的OpenStack部署工具,在openEuler 22.09中提供对OpenStack Yoga的支持。

+

虚拟机高低优先级特性是OpenStack SIG自研的OpenStack特性,该特性允许用户指定虚拟机的优先级,基于不同的优先级,OpenStack自动分配不同的绑核策略,配合openEuler自研的skylark QOS服务,实现高低优先级虚拟机对资源的合理使用。

+

缩略语清单:

| 缩略语 | 英文全名 | 中文解释 |
|:------:|:---------|:---------|
| CLI | Command Line Interface | 命令行工具 |
| ECS | Elastic Cloud Server | 弹性云服务器 |

1 特性概述

+
1. openEuler 22.09 版本中提供 OpenStack Yoga 版本的RPM安装包,包括以下项目以及每个项目配套的 CLI:
    - Keystone
    - Neutron
    - Cinder
    - Nova
    - Placement
    - Glance
    - Horizon
    - Aodh
    - Ceilometer
    - Cyborg
    - Gnocchi
    - Heat
    - Swift
    - Ironic
    - Kolla
    - Trove
    - Tempest
2. openEuler 22.09 版本中提供opensd的安装包以及对 openEuler 和 OpenStack Yoga 的支持能力。
3. openEuler 22.09 版本中提供openstack-plugin-priority-vm安装包,支持虚拟机高低优先级特性。

2 特性测试信息

+

本节描述被测对象的版本信息和测试的时间及测试轮次,包括依赖的硬件。

| 版本名称 | 测试起始时间 | 测试结束时间 |
|:---------|:------------:|:------------:|
| openEuler 22.09 RC1(OpenStack Yoga版本各组件的安装部署测试;opensd安装能力测试;虚拟机高低优先级特性安装测试) | 2022.08.10 | 2022.08.17 |
| openEuler 22.09 RC2(OpenStack Yoga版本基本功能测试,包括虚拟机、卷、网络相关资源的增删改查;opensd支持openEuler的能力测试;虚拟机高低优先级特性功能测试) | 2022.08.18 | 2022.08.23 |
| openEuler 22.09 RC3(OpenStack Yoga版本tempest集成测试;opensd支持OpenStack Yoga的能力测试;虚拟机高低优先级特性问题回归测试) | 2022.08.24 | 2022.09.07 |
| openEuler 22.09 RC4(OpenStack Yoga版本问题回归测试;opensd问题回归测试) | 2022.09.08 | 2022.09.15 |

描述特性测试的硬件环境信息

| 硬件型号 | 硬件配置信息 | 备注 |
|:--------:|:-------------|:-----|
| 华为云ECS | Intel Cascade Lake 3.0GHz 8U16G | x86虚拟机 |
| 联通云ECS | Intel(R) Xeon(R) Silver 4114 2.20GHz 8U16G | X86虚拟机 |
| 华为 2288H V5 | Intel Xeon Gold 6146 3.20GHz 48U192G | X86物理机 |
| 联通云ECS | Huawei Kunpeng 920 2.6GHz 4U8G | arm64虚拟机 |
| 飞腾S2500 | FT-S2500 2.1GHz 8U16G | arm64虚拟机 |
| 飞腾S2500 | FT-S2500,64 Core@2.1GHz*2; 512GB DDR4 RAM | arm64物理机 |

3 测试结论概述

+

3.1 测试整体结论

+

OpenStack Yoga 版本,共计执行 Tempest 用例 1452 个,主要覆盖了 API 测试和功能测试,通过 7*24 的长稳测试,Skip 用例 95 个( OpenStack Yoga 版中已废弃的功能或接口,如Keystone V1、Cinder V1等),失败用例 0 个(FLAT网络未实际联通及存在一些超时问题),其他 1357 个用例通过,发现问题已解决,回归通过,无遗留风险,整体质量良好。

+

opensd支持Yoga版本mariadb、rabbitmq、memcached、ceph_client、keystone、glance、cinder、placement、nova、neutron共10个项目的部署,发现问题已解决,回归通过,无遗留风险。

+

虚拟机高低优先级特性,发现问题已解决,回归通过,无遗留风险。

| 测试活动 | tempest集成测试 |
|:--------:|:----------------|
| 接口测试 | API全覆盖 |
| 功能测试 | Yoga版本覆盖Tempest所有相关测试用例1452个,其中Skip 95个,Fail 0个,其他全通过。 |

| 测试活动 | 功能测试 |
|:--------:|:---------|
| 功能测试 | 虚拟机(KVM、Qemu)、存储(lvm、NFS、Ceph后端)、网络资源(linuxbridge、openvswitch)管理操作正常 |

3.2 约束说明

+

本次测试没有覆盖 OpenStack Yoga 版中明确废弃的功能和接口,因此不能保证已废弃的功能和接口(前文提到的Skip的用例)在 openEuler 22.09 上能正常使用。

+

opensd 只支持测试范围内的服务部署,其他服务未经过测试,不保证质量。

+

虚拟机高低优先级特性需要配合openEuler 22.09 skylark服务使用。

+

3.3 遗留问题分析

+

3.3.1 遗留问题影响以及规避措施

| 问题单号 | 问题描述 | 问题级别 | 问题影响和规避措施 | 当前状态 |
|:--------:|:--------:|:--------:|:------------------:|:--------:|
| N/A | N/A | N/A | N/A | N/A |

3.3.2 问题统计

|        | 问题总数 | 严重 | 主要 | 次要 | 不重要 |
|:------:|:--------:|:----:|:----:|:----:|:------:|
| 数目   | 4 | 1 | 2 | 1 | 0 |
| 百分比 | 100 | 25 | 50 | 25 | 0 |

4 测试执行

+

4.1 测试执行统计数据

+

本节内容根据测试用例及实际执行情况进行特性整体测试的统计,可根据第二章的测试轮次分开进行统计说明。

| 版本名称 | 测试用例数 | 用例执行结果 | 发现问题单数 |
|:---------|:----------:|:-------------|:------------:|
| openEuler 22.09 OpenStack Yoga | 1452 | 通过1357个,skip 95个,Fail 0个 | 3 |

4.2 后续测试建议

+
1. 涵盖主要的性能测试
2. 覆盖更多的driver/plugin测试
3. opensd测试验证更多OpenStack服务。

5 附件

+

N/A

+ + + + + + + + + diff --git a/site/test/openEuler-24.03-LTS/index.html b/site/test/openEuler-24.03-LTS/index.html new file mode 100644 index 0000000000000000000000000000000000000000..8a2d55615501e32cd0829843e2bcb55ab784e2c4 --- /dev/null +++ b/site/test/openEuler-24.03-LTS/index.html @@ -0,0 +1,602 @@ + + + + + + + + openEuler-24.03-LTS - OpenStack SIG Doc + + + + + + + + + + + + + +

openEuler 24.03 LTS 测试报告

+

openEuler ico

+

版权所有 © 2024 openEuler社区 + 您对“本文档”的复制、使用、修改及分发受知识共享(Creative Commons)署名—相同方式共享4.0国际公共许可协议(以下简称“CC BY-SA 4.0”)的约束。为了方便用户理解,您可以通过访问https://creativecommons.org/licenses/by-sa/4.0/ 了解CC BY-SA 4.0的概要 (但不是替代)。CC BY-SA 4.0的完整协议内容您可以访问如下网址获取:https://creativecommons.org/licenses/by-sa/4.0/legalcode。

+

修订记录

| 日期 | 修订版本 | 修改描述 | 作者 |
|:----:|:--------:|:--------:|:----:|
| 2024-06-03 | 1 | 初稿 | 郑挺 |

关键词:

+

OpenStack

+

摘要:

+

openEuler 24.03 LTS 版本中提供 OpenStack Wallaby 和 OpenStack Antelope 版本的 RPM 安装包,方便用户快速部署 OpenStack。

+

缩略语清单:

| 缩略语 | 英文全名 | 中文解释 |
|:------:|:---------|:---------|
| CLI | Command Line Interface | 命令行工具 |
| ECS | Elastic Cloud Server | 弹性云服务器 |

1 特性概述

+

openEuler 24.03 LTS 版本中提供 OpenStack Wallaby 和 OpenStack Antelope 版本的RPM安装包,包括以下项目以及每个项目配套的 CLI:

+
- Keystone
- Neutron
- Cinder
- Nova
- Placement
- Glance
- Horizon
- Aodh
- Ceilometer
- Cyborg
- Gnocchi
- Heat
- Swift
- Ironic
- Kolla
- Trove
- Tempest
- Barbican
- Octavia
- designate
- Manila
- Masakari
- Mistral
- Senlin
- Zaqar

2 特性测试信息

+

本节描述被测对象的版本信息和测试的时间及测试轮次,包括依赖的硬件。

| 版本名称 | 测试起始时间 | 测试结束时间 |
|:---------|:------------:|:------------:|
| openEuler 24.03 LTS RC1(OpenStack Antelope版本各组件的安装部署测试) | 2024.03.31 | 2024.04.03 |
| openEuler 24.03 LTS RC1(OpenStack Antelope版本基本功能测试,包括虚拟机、卷、网络相关资源的增删改查) | 2024.04.04 | 2024.04.09 |
| openEuler 24.03 LTS RC2(OpenStack Antelope版本tempest集成测试) | 2024.04.10 | 2024.04.19 |
| openEuler 24.03 LTS RC3(OpenStack Antelope版本问题回归测试) | 2024.04.20 | 2024.05.09 |
| openEuler 24.03 LTS RC4(OpenStack Wallaby版本各组件的安装部署测试) | 2024.05.10 | 2024.05.14 |
| openEuler 24.03 LTS RC4(OpenStack Wallaby版本基本功能测试,包括虚拟机、卷、网络相关资源的增删改查) | 2024.05.15 | 2024.05.21 |
| openEuler 24.03 LTS RC5(OpenStack Wallaby版本tempest集成测试) | 2024.05.22 | 2024.05.28 |
| openEuler 24.03 LTS RC5(OpenStack Wallaby版本问题回归测试) | 2024.05.29 | 2024.06.03 |

描述特性测试的硬件环境信息

| 硬件型号 | 硬件配置信息 | 备注 |
|:--------:|:-------------|:-----|
| 华为云ECS | Intel Cascade Lake 3.0GHz 8U16G | 华为云x86虚拟机 |
| 华为云ECS | Huawei Kunpeng 920 2.6GHz 8U16G | 华为云arm64虚拟机 |

3 测试结论概述

+

3.1 测试整体结论

+

OpenStack Antelope 版本,共计执行 Tempest 用例 1483 个,主要覆盖了 API 测试和功能测试,通过 7*24 的长稳测试,Skip 用例 100 个(全是 OpenStack Antelope 版中已废弃的功能或接口,如Keystone V1、Cinder V1等),失败用例 0 个,其他 1383 个用例全部通过,发现问题已解决,回归通过,无遗留风险,整体质量良好。

+

OpenStack Wallaby 版本,共计执行 Tempest 用例 1434 个,主要覆盖了 API 测试和功能测试,通过 7*24 的长稳测试,Skip 用例 95 个(全是 OpenStack Wallaby 版中已废弃的功能或接口,如KeystoneV1、Cinder V1等,和不支持的barbican项目),失败用例 0 个,其他 1339 个用例全部通过,发现问题已解决,回归通过,无遗留风险,整体质量良好。

| 测试活动 | tempest集成测试 |
|:--------:|:----------------|
| 接口测试 | API全覆盖 |
| 功能测试 | Antelope版本覆盖Tempest所有相关测试用例1483个,其中Skip 100个,Fail 0个,其他全通过。 |
| 功能测试 | Wallaby版本覆盖Tempest所有相关测试用例1434个,其中Skip 95个,Fail 0个,其他全通过。 |

| 测试活动 | 功能测试 |
|:--------:|:---------|
| 功能测试 | 虚拟机(KVM、Qemu)、存储(lvm、NFS、Ceph后端)、网络资源(linuxbridge、openvswitch)管理操作正常 |

3.2 约束说明

+

本次测试没有覆盖 OpenStack Antelope 和 OpenStack Wallaby 版中明确废弃的功能和接口,因此不能保证已废弃的功能和接口(前文提到的Skip的用例)在 openEuler 24.03 LTS 上能正常使用。

+

3.3 遗留问题分析

+

3.3.1 遗留问题影响以及规避措施

| 问题单号 | 问题描述 | 问题级别 | 问题影响和规避措施 | 当前状态 |
|:--------:|:--------:|:--------:|:------------------:|:--------:|
| N/A | N/A | N/A | N/A | N/A |

3.3.2 问题统计

|        | 问题总数 | 严重 | 主要 | 次要 | 不重要 |
|:------:|:--------:|:----:|:----:|:----:|:------:|
| 数目   | 6 | 0 | 6 | 0 | 0 |
| 百分比 | 100 | 0 | 100 | 0 | 0 |

| ISSUE Link |
|:-----------|
| https://gitee.com/openeuler/openstack/issues/I9RUHD?from=project-issue |
| https://gitee.com/openeuler/openstack/issues/I9RKHC?from=project-issue |
| https://gitee.com/openeuler/openstack/issues/I9S2L0?from=project-issue |
| https://gitee.com/openeuler/openstack/issues/I9S2LT?from=project-issue |
| https://gitee.com/openeuler/openstack/issues/I9UF6L?from=project-issue |
| https://gitee.com/openeuler/openstack/issues/I9UFAZ?from=project-issue |

4 测试执行

+

4.1 测试执行统计数据

+

本节内容根据测试用例及实际执行情况进行特性整体测试的统计,可根据第二章的测试轮次分开进行统计说明。

| 版本名称 | 测试用例数 | 用例执行结果 | 发现问题单数 |
|:---------|:----------:|:-------------|:------------:|
| openEuler 24.03 LTS OpenStack Antelope | 1483 | 通过1383个,skip 100个,Fail 0个 | 1 |
| openEuler 24.03 LTS OpenStack Wallaby | 1434 | 通过1339个,skip 95个,Fail 0个 | 5 |

4.2 后续测试建议

+
1. 涵盖主要的性能测试。
2. 覆盖更多的driver/plugin测试。
3. 重点测试Antelope和Wallaby版本对python3.11版本的适配情况。

5 附件

+

N/A

+ + + + + + + + + diff --git a/templet/architecture.md b/templet/architecture.md deleted file mode 100644 index 409c88f58783a9b8dd0c79c4c13cc5fd3e0f87d6..0000000000000000000000000000000000000000 --- a/templet/architecture.md +++ /dev/null @@ -1,137 +0,0 @@ -# 概述 - -OpenStack每个服务通常包含若干子服务,针对这些子服务,我们在打包的时候也要做拆包处理,分成若干个子RPM包。本文档规定了openEuler SIG对OpenStack服务的RPM包拆分的原则。 - -# 原则 - -## 通用原则 - -采用分层架构,RPM包结构如下图所示,以openstack-nova为例: - -``` -Level | Package | Example - | | - ┌─┐ | ┌──────────────┐ ┌────────────────────────┐ | ┌────────────────────┐ ┌────────────────────────┐ - │1│ | │ Root Package │ │ Doc Package (Optional) │ | │ openstack-nova.rpm │ │ openstack-nova-doc.rpm │ - └─┘ | └────────┬─────┘ └────────────────────────┘ | └────────────────────┘ └────────────────────────┘ - | │ | - | ┌─────────────────────┼───────────────────────────┐ | - | │ │ | | - ┌─┐ | ┌────────▼─────────┐ ┌─────────▼────────┐ | | ┌────────────────────────────┐ ┌────────────────────────┐ - │2│ | │ Service1 Package │ │ Service2 Package │ | | │ openstack-nova-compute.rpm │ │ openstack-nova-api.rpm │ - └─┘ | └────────┬─────────┘ └────────┬─────────┘ | | └────────────────────────────┘ └────────────────────────┘ - | | | | | - | └──────────┬─────────┘ | | - | | | | - ┌─┐ | ┌───────▼────────┐ | | ┌───────────────────────────┐ - │3│ | │ Common Package │ | | │ openstack-nova-common.rpm │ - └─┘ | └───────┬────────┘ | | └───────────────────────────┘ - | │ | | - | │ | | - | │ | | - ┌─┐ | ┌────────▼────────┐ ┌────────────────▼────────────────┐ | ┌──────────────────┐ ┌────────────────────────┐ - │4│ | │ Library Package ◄------------| Library Test Package (Optional) │ | │ python2-nova.rpm │ │ python2-nova-tests.rpm │ - └─┘ | └─────────────────┘ └─────────────────────────────────┘ | └──────────────────┘ └────────────────────────┘ -``` - -如图所示,分为4级 - -1. Root Package为总RPM包,原则上不包含任何文件。只做服务集合用。用户可以使用该RPM一键安装所有子RPM包。 - 如果项目有doc相关的文件,也可以单独成包(可选) -2. Service Package为子服务RPM包,包含该服务的systemd服务启动文件、自己独有的配置文件等。 -3. Common Package是共用依赖的RPM包,包含各个子服务依赖的通用配置文件、系统配置文件等。 -4. Library Package为python源码包,包含了该项目的python代码。 - 如果项目有test相关的文件,也可以单独成包(可选) - -涉及本原则的项目有: - -* openstack-nova -* openstack-cinder -* openstack-glance -* openstack-placment -* openstack-ironic - -### 特殊情况 - -有些openstack组件本身只包含一个服务,不存在子服务的概念,这种服务则只需要分为两级: - -``` - Level | Package | Example - | | - ┌─┐ | ┌──────────────┐ ┌────────────────────────┐ | ┌────────────────────────┐ ┌────────────────────────────┐ - │1│ | │ Root Package │ │ Doc Package (Optional) │ | │ openstack-keystone.rpm │ │ openstack-keystone-doc.rpm │ - └─┘ | └───────┬──────┘ └────────────────────────┘ | └────────────────────────┘ └────────────────────────────┘ - | | | - | ┌────────────┴───────────────────┐ | - ┌─┐ | ┌───────▼─────────┐ ┌────────────────▼────────────────┐ | ┌──────────────────────┐ ┌────────────────────────────┐ - │2│ | │ Library Package ◄-----| Library Test Package (Optional) │ | │ python2-keystone.rpm │ │ python2-keystone-tests.rpm │ - └─┘ | └─────────────────┘ └─────────────────────────────────┘ | └──────────────────────┘ └────────────────────────────┘ -``` - -1. Root Package RPM包包含了除python源码外的其他所有文件,包括服务启动文件、项目配置文件、系统配置文件等等。 - 如果项目有doc相关的文件,也可以单独成包(可选) -2. 
Library Package为python源码包,包含了该项目的python代码。 - 如果项目有test相关的文件,也可以单独成包(可选) - -涉及本原则的项目有: - -* openstack-keystone -* openstack-horizon - -还有些项目虽然有若干子RPM包,但这些子RPM包是互斥的,则这种服务的结构如下: - -``` -Level | Package | Example - | | - ┌─┐ | ┌──────────────┐ ┌────────────────────────┐ | ┌───────────────────────┐ ┌───────────────────────────┐ - │1│ | │ Root Package │ │ Doc Package (Optional) │ | │ openstack-neutron.rpm │ │ openstack-neutron-doc.rpm │ - └─┘ | └────────┬─────┘ └────────────────────────┘ | └───────────────────────┘ └───────────────────────────┘ - | │ | - | ┌─────────────────────┴───────────────────────────────┐ | - | │ | | - ┌─┐ | ┌────────▼─────────┐ ┌──────────────────┐ ┌──────────────────┐ | | ┌──────────────────────────────┐ ┌───────────────────────────────────┐ ┌───────────────────────────────────┐ - │2│ | │ Service1 Package │ │ Service2 Package │ │ Service3 Package │ | | │ openstack-neutron-server.rpm │ │ openstack-neutron-openvswitch.rpm │ │ openstack-neutron-linuxbridge.rpm │ - └─┘ | └────────┬─────────┘ └────────┬─────────┘ └────────┬─────────┘ | | └──────────────────────────────┘ └───────────────────────────────────┘ └───────────────────────────────────┘ - | | | | | | - | └────────────────────┼────────────────────┘ | | - | | | | - ┌─┐ | ┌───────▼────────┐ | | ┌──────────────────────────────┐ - │3│ | │ Common Package │ | | │ openstack-neutron-common.rpm │ - └─┘ | └───────┬────────┘ | | └──────────────────────────────┘ - | │ | | - | │ | | - | │ | | - ┌─┐ | ┌────────▼────────┐ ┌────────────────▼────────────────┐ | ┌─────────────────────┐ ┌───────────────────────────┐ - │4│ | │ Library Package ◄------| Library Test Package (Optional) │ | │ python2-neutron.rpm │ │ python2-neutron-tests.rpm │ - └─┘ | └─────────────────┘ └─────────────────────────────────┘ | └─────────────────────┘ └───────────────────────────┘ -``` - -如图所示,Service2和Service3互斥。 - -1. Root包只包含不互斥的子包,互斥的子包单独提供。 - 如果项目有doc相关的文件,也可以单独成包(可选) -2. Service Package为子服务RPM包,包含该服务的systemd服务启动文件、自己独有的配置文件等。 - 互斥的Service包不被Root包所包含,用户需要单独安装。 -3. Common Package是共用依赖的RPM包,包含各个子服务依赖的通用配置文件、系统配置文件等。 -4. Library Package为python源码包,包含了该项目的python代码。 - 如果项目有test相关的文件,也可以单独成包(可选) - -涉及本原则的项目有: - -* openstack-neutron - -## 依赖库 - -一个依赖库一般只包含一个RPM包,不需要做拆分处理。 - -``` - Level | Package | Example - | | - ┌─┐ | ┌─────────────────┐ ┌────────────────────────┐ | ┌──────────────────────────┐ ┌───────────────────────────────┐ - │1│ | │ Library Package │ │ Help Package (Optional)│ | │ python2-oslo-service.rpm │ │ python2-oslo-service-help.rpm │ - └─┘ | └─────────────────┘ └────────────────────────┘ | └──────────────────────────┘ └───────────────────────────────┘ -``` - -# 其他 - -1. 
openEuler社区对python2和python3 RPM包的命名有要求,python2的包前缀为*python2-*,python3的包前缀为*python3-*。因此,OpenStack要求开发者在打Library的RPM包时,也要遵守openEuler社区规范。 diff --git a/templet/library-templet.spec b/templet/library-templet.spec deleted file mode 100644 index 0238255fc92b58b3d2493083885dd0e645078689..0000000000000000000000000000000000000000 --- a/templet/library-templet.spec +++ /dev/null @@ -1,124 +0,0 @@ -# --------------------------------------------------------- # -# 该模板以python-oslo.service.spec为例 -# ----------------------------------------------------------- # - -# openEuler OpenStack SIG提供了命令行工具oos,其中包含了RPM spec自动生成功能。 -# 地址:https://gitee.com/openeuler/openstack/tools/oos -# -# -# Spec命令示例: -# # 在当前目录生成olso.service 2.6.0 的spec文件 -# oos spec build --name oslo.service --version 2.6.0 -o python-oslo-service.spec -# # 生成最新版本oslo.service spec文件、下载对应源码包,并在自动执行rpmbuild命令 -# oos spec build -n oslo.service -b - -# 下面是使用oos生成的python-oslo-service spec -%global _empty_manifest_terminate_build 0 -Name: python-oslo-service -Version: 2.6.0 -Release: 1 -Summary: oslo.service library -License: Apache-2.0 -URL: https://docs.openstack.org/oslo.service/latest/ -Source0: https://files.pythonhosted.org/packages/bb/1f/a72c0ca35e9805704ce3cc4db704f955eb944170cb3b214cc7af03cb8851/oslo.service-2.6.0.tar.gz -BuildArch: noarch -%description - Team and repository tags .. Change things from this point on oslo.service --Library for running OpenStack services :target: . - -%package -n python3-oslo-service -Summary: oslo.service library -Provides: python-oslo-service -# Base build requires -BuildRequires: python3-devel -BuildRequires: python3-setuptools -BuildRequires: python3-pbr -BuildRequires: python3-pip -BuildRequires: python3-wheel -# General requires -BuildRequires: python3-paste -BuildRequires: python3-pastedeploy -BuildRequires: python3-routes -BuildRequires: python3-webob -BuildRequires: python3-yappi -BuildRequires: python3-debtcollector -BuildRequires: python3-eventlet -BuildRequires: python3-fixtures -BuildRequires: python3-greenlet -BuildRequires: python3-oslo-concurrency -BuildRequires: python3-oslo-config -BuildRequires: python3-oslo-i18n -BuildRequires: python3-oslo-log -BuildRequires: python3-oslo-utils -# General requires -Requires: python3-paste -Requires: python3-pastedeploy -Requires: python3-routes -Requires: python3-webob -Requires: python3-yappi -Requires: python3-debtcollector -Requires: python3-eventlet -Requires: python3-fixtures -Requires: python3-greenlet -Requires: python3-oslo-concurrency -Requires: python3-oslo-config -Requires: python3-oslo-i18n -Requires: python3-oslo-log -Requires: python3-oslo-utils -%description -n python3-oslo-service - Team and repository tags .. Change things from this point on oslo.service --Library for running OpenStack services :target: . - -%package help -Summary: oslo.service library -Provides: python3-oslo-service-doc -%description help - Team and repository tags .. Change things from this point on oslo.service --Library for running OpenStack services :target: . 
- -%prep -%autosetup -n oslo.service-2.6.0 - -%build -%py3_build - -%install -%py3_install -install -d -m755 %{buildroot}/%{_pkgdocdir} -if [ -d doc ]; then cp -arf doc %{buildroot}/%{_pkgdocdir}; fi -if [ -d docs ]; then cp -arf docs %{buildroot}/%{_pkgdocdir}; fi -if [ -d example ]; then cp -arf example %{buildroot}/%{_pkgdocdir}; fi -if [ -d examples ]; then cp -arf examples %{buildroot}/%{_pkgdocdir}; fi -pushd %{buildroot} -if [ -d usr/lib ]; then - find usr/lib -type f -printf "/%h/%f\n" >> filelist.lst -fi -if [ -d usr/lib64 ]; then - find usr/lib64 -type f -printf "/%h/%f\n" >> filelist.lst -fi -if [ -d usr/bin ]; then - find usr/bin -type f -printf "/%h/%f\n" >> filelist.lst -fi -if [ -d usr/sbin ]; then - find usr/sbin -type f -printf "/%h/%f\n" >> filelist.lst -fi -touch doclist.lst -if [ -d usr/share/man ]; then - find usr/share/man -type f -printf "/%h/%f.gz\n" >> doclist.lst -fi -popd -mv %{buildroot}/filelist.lst . -mv %{buildroot}/doclist.lst . - -%check -%{__python3} setup.py test - -%files -n python3-oslo-service -f filelist.lst -%dir %{python3_sitelib}/* - -%files help -f doclist.lst -%{_docdir}/* - -%changelog -* Tue Jul 13 2021 OpenStack_SIG - 2.6.0-1 -- Package Spec generate diff --git a/templet/service-templet.spec b/templet/service-templet.spec deleted file mode 100644 index 19c1bcc3443b5359d4831fe8cc9ba35a839fdb50..0000000000000000000000000000000000000000 --- a/templet/service-templet.spec +++ /dev/null @@ -1,273 +0,0 @@ -# ------------------------------------------------- # -# 该模板以openstack-nova为例,删减了部分重复结构与内容 # -# ------------------------------------------------- # - -# 根据需要,定义全局变量, 例如某些频繁使用的字符串、变量名等。 e.g. %global with_doc 0 -%global glocal_variable value - -# Name与gitee对应的项目保持一致 -Name: openstack-nova -# Version为openstack上游对应版本 -Version: 22.0.0 -# 初次引入时,Release为1。后续每次修改spec时,需要按数字顺序依次增长。这里的Nova spec经过了两次修改,因此为2 -# 注意,有的社区要求release的数字后面需要加?{dist}用以区分OS的版本,但openEuler社区不要求,社区的RPM构建系统会自动给RPM包名补上dist信息。 -Release: 2 -# 软件的一句话描述,Summary要简短,一般来自上游代码本身的介绍文档。 -Summary: OpenStack Compute (nova) -# 这里指定上游源码的License,openEuler社区支持的License有Apache-2.0、GPL等通用协议。 -License: Apache-2.0 -# 该项目的上游项目地址,一般指向开放的源码git仓库,如果源码地址无法提供,则可以指向项目主页 -URL: https://opendev.org/openstack/nova/ -# Source0需要指向上游软件包地址 -Source0: https://tarballs.openstack.org/nova/nova-22.0.0.tar.gz -# 新增文件按Source1、Source2...顺序依次填写 -Source1: nova-dist.conf -Source2: nova.logrotate -# 对上游源码如果需要修改,可以按顺序依次添加Git格式的Patch。例如openstack上游某些严重Bug或CVE漏洞,需要以Patch方式回合。 -Patch0001: xxx.patch -Patch0002: xxx.patch -# 软件包的目标架构,可选:x86_64、aarch64、noarch,选择不同架构会影响%install时的目录 -# BuildArch为可选项,如果不指定,则默认为与当前编译执行机架构一致。由于OpenStack是python的,因此要求指定为noarch -BuildArch: noarch - -# 编译该RPM包时需要在编译环境中提前安装的依赖,采用yum install方式安装。 -# 也可以使用yum-builddep工具进行依赖安装(yum-builddep xxx.spec) -BuildRequires: openstack-macros -BuildRequires: intltool -BuildRequires: python3-devel - -# 安装该RPM包时需要的依赖,采用yum install方式安装。 -# 注意,虽然RPM有机制自动生成python项目的依赖,但这是隐形的,不方便后期维护,因此OpenStack SIG要求开发者在spec中显示写出所有依赖项 -Requires: openstack-nova-compute = %{version}-%{release} -Requires: openstack-nova-scheduler = %{version}-%{release} - -# 该RPM包的详细描述,可以适当丰富内容,不宜过长,也不可不写。 -%description -Nova is the OpenStack project that provides a way to provision compute instances -(aka virtual servers). Nova supports creating virtual machines, baremetal servers -(through the use of ironic), and has limited support for system containers. 
- -# 该项目还提供多个子RPM包,一般是为了细化软件功能、文件组成,提供细粒度的RPM包。格式与上面内容相同。 -# 下面例子提供了openstack-nova-common子RPM包 -%package common -Summary: Components common to all OpenStack Nova services - -BuildRequires: systemd -BuildRequires: python3-castellan >= 0.16.0 - -Requires: python3-nova -# pre表示该依赖只在RPM包安装的%pre阶段依赖。 -Requires(pre): shadow-utils - -%description common -OpenStack Compute (Nova) -This package contains scripts, config and dependencies shared -between all the OpenStack nova services. - -# 下面例子提供了openstack-nova-compute子RPM包。 -# 注意,openstack-nova还提供了更多子RPM包,例如api、scheduler、conductor等。本示例做了删减。 -# -# 子RPM包的命名规则: -# 以项目名为缺省值,使用"项目名-子服务名"的方式,例如这里的openstack-nova-compute -# openstack-nova为服务名,compute为子服务名。 -%package compute -Summary: OpenStack Nova Compute Service - -Requires: openstack-nova-common = %{version}-%{release} -Requires: curl -Requires(pre): qemu - -%description compute -This package contains scripts, config for nova-compute service - -# 下面例子提供了python-nova子RPM包。 -%package -n python-nova -Summary: Nova Python libraries - -Requires: openssl -Requires: openssh -Requires: sudo -Requires: python-paramiko - -# -n表示非缺省,包的全名为python-nova,而不是openstack-nova-python-nova -%description -n python-nova -OpenStack Compute (Nova) -This package contains the nova Python library. - -# 下面例子提供了openstack-nova-doc子RPM包。 -# 目前doc和test类的子RPM不做强制要求,可以不提供。 -%package doc -Summary: Documentation for OpenStack Compute -BuildRequires: graphviz -BuildRequires: python3-openstackdocstheme - -%description doc -OpenStack Compute (Nova) -This package contains documentation files for nova. - -# 下面进入RPM包正式编译流程,按照%prep、%build、%install、%check、%clean的步骤进行。 -# %prep: 打包准备阶段执行一些命令(如,解压源码包,打补丁等),以便开始编译。一般仅包含 "%autosetup";如果源码包需要解压并切换至 NAME 目录,则输入 "%autosetup -n NAME"。 -# %build: 包含构建阶段执行的命令,构建完成后便开始后续安装。程序应该包含有如何编译的介绍。 -# %install: 包含安装阶段执行的命令。命令将文件从 %{_builddir} 目录安装至 %{buildroot} 目录。 -# %check: 包含测试阶段执行的命令。此阶段在 %install 之后执行,通常包含 "make test" 或 "make check" 命令。此阶段要与 %build 分开,以便在需要时忽略测试。 -# %clean: 可选步骤,清理安装目录的命令。一般只包含:rm -rf %{buildroot} - -# nova的prep阶段进行源码包解压、删除无用文件。 -%prep -%autosetup -n nova-%{upstream_version} -find . \( -name .gitignore -o -name .placeholder \) -delete -find nova -name \*.py -exec sed -i '/\/usr\/bin\/env python/{d;q}' {} + -%py_req_cleanup - -# build阶段使用py3_build命令快速编译,并生成、修改个别文件(如nova.conf配置文件) -%build -PYTHONPATH=. oslo-config-generator --config-file=etc/nova/nova-config-generator.conf -PYTHONPATH=. oslopolicy-sample-generator --config-file=etc/nova/nova-policy-generator.conf -%{py3_build} -%{__python3} setup.py compile_catalog -d build/lib/nova/locale -D nova -sed -i 's|group/name|group;name|; s|\[DEFAULT\]/|DEFAULT;|' etc/nova/nova.conf.sample - -# install阶段使用py3_install命令快速安装,配置文件对应权限、生成doc、systemd服务启动文件等。 -%install -%{py3_install} -export PYTHONPATH=. -sphinx-build -b html doc/source doc/build/html -rm -rf doc/build/html/.{doctrees,buildinfo} -install -p -D -m 644 doc/build/man/*.1 %{buildroot}%{_mandir}/man1/ -install -d -m 755 %{buildroot}%{_sharedstatedir}/nova -install -d -m 755 %{buildroot}%{_sharedstatedir}/nova/buckets -install -d -m 755 %{buildroot}%{_sharedstatedir}/nova/instances -cat > %{buildroot}%{_sysconfdir}/nova/release < os_xenapi/client.py </dev/null || groupadd -r nova --gid 162 -if ! 
getent passwd nova >/dev/null; then - useradd -u 162 -r -g nova -G nova,nobody -d %{_sharedstatedir}/nova -s /sbin/nologin -c "OpenStack Nova Daemons" nova -fi -exit 0 - -# openstack-nova-compute类似 -%pre compute -usermod -a -G qemu nova -usermod -a -G libvirt nova - -# openstack-nova-compute安装之后会刷新systemd配置。 -%post compute -%systemd_post %{name}-compute.service - -# openstack-nova-compute卸载之前会刷新systemd配置。 -%preun compute -%systemd_preun %{name}-compute.service - -# openstack-nova-compute卸载之后会做类似操作。 -%postun compute -%systemd_postun_with_restart %{name}-compute.service - -# spec的最后是%files部分, 此部分列出了需要被打包的文件和目录,即RPM包里包含哪些文件和目录,以及这些文件和目录会被安装到哪里。 - -# %files表示openstack-nova RPM包含的东西,这些什么都没写,表示openstack-nova RPM包是个空包, 但是rpm本身对于其他rpm有依赖。 -%files -# 下面表示openstack-nova-common RPM包里包含的文件和目录,有nova.conf配置文件、policy.json权限文件、一些nova可执行文件等等。 -%files common -f nova.lang -%license LICENSE -%doc etc/nova/policy.yaml.sample -%dir %{_datarootdir}/nova -%attr(-, root, nova) %{_datarootdir}/nova/nova-dist.conf -%{_datarootdir}/nova/interfaces.template -%dir %{_sysconfdir}/nova -%{_sysconfdir}/nova/release -%config(noreplace) %attr(-, root, nova) %{_sysconfdir}/nova/nova.conf -%config(noreplace) %attr(-, root, nova) %{_sysconfdir}/nova/api-paste.ini -%config(noreplace) %attr(-, root, nova) %{_sysconfdir}/nova/rootwrap.conf -%config(noreplace) %attr(-, root, nova) %{_sysconfdir}/nova/policy.json -%config(noreplace) %{_sysconfdir}/logrotate.d/openstack-nova -%config(noreplace) %{_sysconfdir}/sudoers.d/nova -%dir %attr(0750, nova, root) %{_localstatedir}/log/nova -%dir %attr(0755, nova, root) %{_localstatedir}/run/nova -%{_bindir}/nova-manage -%{_bindir}/nova-policy -%{_bindir}/nova-rootwrap -%{_bindir}/nova-rootwrap-daemon -%{_bindir}/nova-status -%if 0%{?with_doc} -%{_mandir}/man1/nova*.1.gz -%endif -%defattr(-, nova, nova, -) -%dir %{_sharedstatedir}/nova -%dir %{_sharedstatedir}/nova/buckets -%dir %{_sharedstatedir}/nova/instances -%dir %{_sharedstatedir}/nova/keys -%dir %{_sharedstatedir}/nova/networks -%dir %{_sharedstatedir}/nova/tmp - -# openstack-nova-compute RPM包包含nova-compute可执行文件、systemd服务配置文件、nova-compute提权文件。 -# 会分别安装到/usr/bin等系统目录。 -%files compute -%{_bindir}/nova-compute -%{_unitdir}/openstack-nova-compute.service -%{_datarootdir}/nova/rootwrap/compute.filters - -# python-nova RPM包里包含了nova的python文件,会被安装到python3_sitelib目录中,一般是/usr/lib/python3/site-packages/ -%files -n python-nova -%license LICENSE -%{python3_sitelib}/nova -%{python3_sitelib}/nova-*.egg-info -%exclude %{python3_sitelib}/nova/tests - -# openstack-nova-doc RPM包里只有doc的相关文件 -%files doc -%license LICENSE -%doc doc/build/html - -# 最后别忘了填写changelog,每次修改spec文件,都要新增changelog信息,从上到下按时间由近及远的顺序。 -%changelog -* Sat Feb 20 2021 wangxiyuan -- Fix require issue - -* Fri Jan 15 2021 joec88 -- openEuler build version - - -# 附录: -# 上面spec代码中使用了大量的RPM宏定义,更多相关宏的详细说明参考fedora官方wiki -# https://fedoraproject.org/wiki/How_to_create_an_RPM_package/zh-cn#.E5.AE.8F diff --git a/tools/docker/Dockerfile b/tools/docker/Dockerfile deleted file mode 100644 index e58c61a2da56cd3e6c951a9312cb0748db7bc34e..0000000000000000000000000000000000000000 --- a/tools/docker/Dockerfile +++ /dev/null @@ -1,24 +0,0 @@ -# NOTE: the following image name is the openEuler docker images name from dailybuild, -# may need to modify -FROM openeuler-22.03-lts-next:latest - -SHELL ["/bin/bash", "-o", "pipefail", "-c"] - -COPY openEuler.repo /etc/yum.repos.d/ - -RUN yum update -y && yum install -y \ - sudo git tar curl patch shadow-utils make cmake gcc gcc-c++ \ - rpm-build dnf-plugins-core 
make rpmdevtools wget python3-pip - -RUN yum update -y && yum install -y openssl-devel libffi-devel \ - python3-devel python3-wheel - -RUN sed -i s'/TMOUT=300/TMOUT=300000000000/' /etc/bashrc - -RUN rpmdev-setuptree - -WORKDIR /root/ -RUN git clone https://gitee.com/openeuler/openstack \ - && cd openstack/tools/oos \ - && pip3 install -r requirement.txt \ - && python3 setup.py develop diff --git a/tools/docker/README.md b/tools/docker/README.md deleted file mode 100644 index faa436a02d3c5997fd18343edacd6a171f148b3e..0000000000000000000000000000000000000000 --- a/tools/docker/README.md +++ /dev/null @@ -1,10 +0,0 @@ -## Dockerfile for basic environment for running tools - -This Dockerfile is for quickly building a environment for running the tools. - -0. Check the `openEuler.repo`, replace the content with suitable URLs. - -1. Run `build-img.sh` script to build Docker image, you need to specify the - openEuler base image URL with `BASE_IMG_URL` env variable. - -2. Use `openeuler-pkg-build` Docker image to build environment for running tools diff --git a/tools/docker/build-img.sh b/tools/docker/build-img.sh deleted file mode 100755 index 5f0383d6be42424f75cb09ac3f5c858a16d31e83..0000000000000000000000000000000000000000 --- a/tools/docker/build-img.sh +++ /dev/null @@ -1,18 +0,0 @@ -#!/bin/bash - -# please specify the daily build docker images URL, e.g. -# http://121.36.84.172/dailybuild/openEuler-20.03-LTS-SP2/openeuler-2021-07-05-12-47-12/docker_img/aarch64/openEuler-docker.aarch64.tar.xz - -if [ -z "$BASE_IMG_URL" ]; then - echo "Please specify openEuler docker image URL with BASE_IMG_URL!" - exit -fi - -cd `dirname $0` -image_ref=$(curl -L "$BASE_IMG_URL" | docker load) -image_name=${image_ref#*:} - -cp Dockerfile Dockerfile-tmp -sed -i "s/FROM.*/FROM $image_name/" Dockerfile-tmp -docker build . -t openeuler-pkg-build -f Dockerfile-tmp -rm Dockerfile-tmp diff --git a/tools/docker/openEuler.repo b/tools/docker/openEuler.repo deleted file mode 100644 index 43b71c128ba36124992887dcf16ecc5b557ca094..0000000000000000000000000000000000000000 --- a/tools/docker/openEuler.repo +++ /dev/null @@ -1,24 +0,0 @@ -[everything] -name=everything -baseurl=http://119.3.219.20:82/openEuler:/22.03:/LTS:/Next/standard_x86_64 -enabled=1 -gpgcheck=0 - -[EPOL] -name=epol -baseurl=http://119.3.219.20:82/openEuler:/22.03:/LTS:/Next:/Epol/standard_x86_64 -enabled=1 -gpgcheck=0 - -[openstack_train] -name=openstack_train -baseurl=http://119.3.219.20:82/openEuler:/22.03:/LTS:/Next:/Epol:/Multi-Version:/OpenStack:/Train/standard_x86_64 -enabled=1 -gpgcheck=0 - -#[openstack_wallay] -#name=openstack_wallay -#baseurl=http://119.3.219.20:82/openEuler:/22.03:/LTS:/Next:/Epol:/Multi-Version:/OpenStack:/Wallaby/standard_x86_64 -#enabled=1 -#gpgcheck=0 - diff --git a/tools/oos/LICENSE b/tools/oos/LICENSE deleted file mode 100644 index 261eeb9e9f8b2b4b0d119366dda99c6fd7d35c64..0000000000000000000000000000000000000000 --- a/tools/oos/LICENSE +++ /dev/null @@ -1,201 +0,0 @@ - Apache License - Version 2.0, January 2004 - http://www.apache.org/licenses/ - - TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION - - 1. Definitions. - - "License" shall mean the terms and conditions for use, reproduction, - and distribution as defined by Sections 1 through 9 of this document. - - "Licensor" shall mean the copyright owner or entity authorized by - the copyright owner that is granting the License. 
- - "Legal Entity" shall mean the union of the acting entity and all - other entities that control, are controlled by, or are under common - control with that entity. For the purposes of this definition, - "control" means (i) the power, direct or indirect, to cause the - direction or management of such entity, whether by contract or - otherwise, or (ii) ownership of fifty percent (50%) or more of the - outstanding shares, or (iii) beneficial ownership of such entity. - - "You" (or "Your") shall mean an individual or Legal Entity - exercising permissions granted by this License. - - "Source" form shall mean the preferred form for making modifications, - including but not limited to software source code, documentation - source, and configuration files. - - "Object" form shall mean any form resulting from mechanical - transformation or translation of a Source form, including but - not limited to compiled object code, generated documentation, - and conversions to other media types. - - "Work" shall mean the work of authorship, whether in Source or - Object form, made available under the License, as indicated by a - copyright notice that is included in or attached to the work - (an example is provided in the Appendix below). - - "Derivative Works" shall mean any work, whether in Source or Object - form, that is based on (or derived from) the Work and for which the - editorial revisions, annotations, elaborations, or other modifications - represent, as a whole, an original work of authorship. For the purposes - of this License, Derivative Works shall not include works that remain - separable from, or merely link (or bind by name) to the interfaces of, - the Work and Derivative Works thereof. - - "Contribution" shall mean any work of authorship, including - the original version of the Work and any modifications or additions - to that Work or Derivative Works thereof, that is intentionally - submitted to Licensor for inclusion in the Work by the copyright owner - or by an individual or Legal Entity authorized to submit on behalf of - the copyright owner. For the purposes of this definition, "submitted" - means any form of electronic, verbal, or written communication sent - to the Licensor or its representatives, including but not limited to - communication on electronic mailing lists, source code control systems, - and issue tracking systems that are managed by, or on behalf of, the - Licensor for the purpose of discussing and improving the Work, but - excluding communication that is conspicuously marked or otherwise - designated in writing by the copyright owner as "Not a Contribution." - - "Contributor" shall mean Licensor and any individual or Legal Entity - on behalf of whom a Contribution has been received by Licensor and - subsequently incorporated within the Work. - - 2. Grant of Copyright License. Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - copyright license to reproduce, prepare Derivative Works of, - publicly display, publicly perform, sublicense, and distribute the - Work and such Derivative Works in Source or Object form. - - 3. Grant of Patent License. 
Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - (except as stated in this section) patent license to make, have made, - use, offer to sell, sell, import, and otherwise transfer the Work, - where such license applies only to those patent claims licensable - by such Contributor that are necessarily infringed by their - Contribution(s) alone or by combination of their Contribution(s) - with the Work to which such Contribution(s) was submitted. If You - institute patent litigation against any entity (including a - cross-claim or counterclaim in a lawsuit) alleging that the Work - or a Contribution incorporated within the Work constitutes direct - or contributory patent infringement, then any patent licenses - granted to You under this License for that Work shall terminate - as of the date such litigation is filed. - - 4. Redistribution. You may reproduce and distribute copies of the - Work or Derivative Works thereof in any medium, with or without - modifications, and in Source or Object form, provided that You - meet the following conditions: - - (a) You must give any other recipients of the Work or - Derivative Works a copy of this License; and - - (b) You must cause any modified files to carry prominent notices - stating that You changed the files; and - - (c) You must retain, in the Source form of any Derivative Works - that You distribute, all copyright, patent, trademark, and - attribution notices from the Source form of the Work, - excluding those notices that do not pertain to any part of - the Derivative Works; and - - (d) If the Work includes a "NOTICE" text file as part of its - distribution, then any Derivative Works that You distribute must - include a readable copy of the attribution notices contained - within such NOTICE file, excluding those notices that do not - pertain to any part of the Derivative Works, in at least one - of the following places: within a NOTICE text file distributed - as part of the Derivative Works; within the Source form or - documentation, if provided along with the Derivative Works; or, - within a display generated by the Derivative Works, if and - wherever such third-party notices normally appear. The contents - of the NOTICE file are for informational purposes only and - do not modify the License. You may add Your own attribution - notices within Derivative Works that You distribute, alongside - or as an addendum to the NOTICE text from the Work, provided - that such additional attribution notices cannot be construed - as modifying the License. - - You may add Your own copyright statement to Your modifications and - may provide additional or different license terms and conditions - for use, reproduction, or distribution of Your modifications, or - for any such Derivative Works as a whole, provided Your use, - reproduction, and distribution of the Work otherwise complies with - the conditions stated in this License. - - 5. Submission of Contributions. Unless You explicitly state otherwise, - any Contribution intentionally submitted for inclusion in the Work - by You to the Licensor shall be under the terms and conditions of - this License, without any additional terms or conditions. - Notwithstanding the above, nothing herein shall supersede or modify - the terms of any separate license agreement you may have executed - with Licensor regarding such Contributions. - - 6. Trademarks. 
This License does not grant permission to use the trade - names, trademarks, service marks, or product names of the Licensor, - except as required for reasonable and customary use in describing the - origin of the Work and reproducing the content of the NOTICE file. - - 7. Disclaimer of Warranty. Unless required by applicable law or - agreed to in writing, Licensor provides the Work (and each - Contributor provides its Contributions) on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or - implied, including, without limitation, any warranties or conditions - of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A - PARTICULAR PURPOSE. You are solely responsible for determining the - appropriateness of using or redistributing the Work and assume any - risks associated with Your exercise of permissions under this License. - - 8. Limitation of Liability. In no event and under no legal theory, - whether in tort (including negligence), contract, or otherwise, - unless required by applicable law (such as deliberate and grossly - negligent acts) or agreed to in writing, shall any Contributor be - liable to You for damages, including any direct, indirect, special, - incidental, or consequential damages of any character arising as a - result of this License or out of the use or inability to use the - Work (including but not limited to damages for loss of goodwill, - work stoppage, computer failure or malfunction, or any and all - other commercial damages or losses), even if such Contributor - has been advised of the possibility of such damages. - - 9. Accepting Warranty or Additional Liability. While redistributing - the Work or Derivative Works thereof, You may choose to offer, - and charge a fee for, acceptance of support, warranty, indemnity, - or other liability obligations and/or rights consistent with this - License. However, in accepting such obligations, You may act only - on Your own behalf and on Your sole responsibility, not on behalf - of any other Contributor, and only if You agree to indemnify, - defend, and hold each Contributor harmless for any liability - incurred by, or claims asserted against, such Contributor by reason - of your accepting any such warranty or additional liability. - - END OF TERMS AND CONDITIONS - - APPENDIX: How to apply the Apache License to your work. - - To apply the Apache License to your work, attach the following - boilerplate notice, with the fields enclosed by brackets "[]" - replaced with your own identifying information. (Don't include - the brackets!) The text should be enclosed in the appropriate - comment syntax for the file format. We also recommend that a - file or class name and description of purpose be included on the - same "printed page" as the copyright notice for easier - identification within third-party archives. - - Copyright [yyyy] [name of copyright owner] - - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. 
diff --git a/tools/oos/README.md b/tools/oos/README.md deleted file mode 100644 index 869d405410570f8d9fdc5b827f6305eb5bbaa8fa..0000000000000000000000000000000000000000 --- a/tools/oos/README.md +++ /dev/null @@ -1,351 +0,0 @@ -# OpenStack SIG 开发工具 - -oos(openEuler OpenStack SIG)是OpenStack SIG提供的命令行工具。该工具为OpenStack SIG开发提供了若干功能。包括 - -1. 自动生成RPM Spec -2. 自动分析OpenStack软件包依赖 -3. 自动提交Spec PR到openEuler社区 -4. 获取OpenStack SIG CI失败的PR列表 -5. 为软件仓创建分支 - -oos在不断开发中,用户可以使用pypi上已发布的稳定版 - -``` -pip install openstack-sig-tool -``` - -## 自动生成RPM Spec - -分别支持单个生成RPM Spec和批量生成RPM Spec - -- 生成单个软件包的RPM Spec -```shell script -oos spec build --build-rpm --name stevedore --version 1.28.0 -# or: oos spec build -b -n stevedore -v 1.28.0 -``` - -- 批量化生成RPM Spec - -批量化生RPM Spec 需要预先准备一个`.csv`文件存放要生成RPM Spec的软件包列表,`.csv`文件中需要 -包含`pypi_name`和`version`两列。 -```shell script -oos spec build --projects-data projects.csv -# or: oos spec build -p projects.csv -``` - -除了上述基本用法,`oos spec build`命令支持的参数如下: -``` ---build-root - 指定build spec的根目录,默认为用户目(通常为root)录的rpmbuild/目录,建议使用默认 --n, --name - 生成单个软件包的RPM Spec的时候指定软件包pypi上的名称 --v, --version - 生成单个软件包的RPM Spec的时候指定软件包版本号 --p, --projects-data - 批量生成软件包RPM Spec的时候指定projects列表的csv文件,必须包含`pypi_name`和`version`两列 --q, --query - 过滤器,模糊匹配projects-data中的软件包名称,通常用于重新生成软件包列表中的某一个,如‘-q cinderclient’ --a, --arch - 指定生成Spec文件的arch,默认为'noarch' --py2, --python2 - 指定生成python2的软件包的Spec --sd, --short-description - 指定在生成spec的时候是否对description做截短处理,默认为True --nc, --no-check - 指定在生成的Spec文件中不添加check步骤 --b, --build-rpm - 指定是否在生成Spec的时候打rpm包,若不指定,只生成Spec,不打RPM包 --o, --output - 指定输出spec文件的位置,不指定的话默认生成在rpmbuild/SPECS/目录下面 --rs, --reuse-spec - 复用已存在的spec文件,不再重新生成。 - -注意:必选参数为--projects-data,或者--name和--version,若同时指定这3个参数,则自动忽略 ---projects-data参数。 -``` - -## 自动分析OpenStack软件包依赖 - -以OpenStack train为例, - -1. 调用脚本,生成缓存文件,默认存放在`train_cached_file`目录 - -``` -cd tools/oos/scripts -python3 generate_dependence.py train -本命令默认会生成Train版本SIG支持的所有OpenStack服务,用户也可以根据自己需求,指定openstack项目,例如 -python3 generate_dependence.py --projects nova,cinder train -``` - -2. 
调用oos命令,生成依赖分析结果 - -``` -oos dependence generate train_cached_file -``` - -其他支持的参数有: - -``` --c, --compare - 结果是否与openeuler社区仓库进行比对,生成建议 --cb, --compare-branch' - 指定openEuler比对的仓库分支,默认是master --t, --token - 如果使用了-c,需要同时指定gitee token,否则gitee可能会拒接访问。 - 或者配置环境变量GITEE_PAT也行。 --o, --output - 指定命令行生成的文件名,默认为result.csv -``` - -该命令运行完后,根目录下会生成1个结果文件,默认为`result.csv`。 - - -## 自动提交Spec PR到openEuler社区 - -可以通过`oos spec push`命令构建Spec文件并将其提交到openEuler社区 - -- 批量生成spec并提交到Gitee,需要预先准备软件包列表的`.csv`文件,以及Gitee的账号等信息,例子如下: -```shell script -export GITEE_PAT= -oos spec push --projects-data projects.csv --dest-branch master -# or: oos spec push -p projects.csv -d master -``` - -- 生成单个包的spec并提交: -```shell script -export GITEE_PAT= -oos spec push --name stevedore --version 1.28.0 -# or: oos spec push --n stevedore --v 1.28.0 -``` - -该命令所支持的参数如下: -``` ---build-root - [可选] 指定build spec的根目录,默认为用户目(通常为root)录的rpmbuild/目录,建议使用默认 --t,--gitee-pat - [必选] 个人Gitee账户personal access token,必选参数,可以使用GITEE_PAT环境变量指定 --e,--gitee-email - [可选] 个人Gitee账户email地址,可使用GITEE_EMAIL指定, 若在Gitee账户公开,可通过Token自动获取 --o --gitee-org - [可选] gitee组织的名称,默认为src-openeuler,必选参数,可以使用GITEE_ORG环境变量指定 --p, --projects-data - [可选] 软件包列表的.csv文件,必须包含`pypi_name`和`version`两列, 和“--version、--name”参数二选一 --d, --dest-branch - [可选] 指定push spec到openEuler仓库的目标分支名,默认为master --r, --repos-dir - [可选] 指定存放openEuler仓库的本地目录,默认为build root目录下面的src-repos目录 --q, --query - [可选] 过滤器,模糊匹配projects-data中的软件包名称,通常用于重新生成软件包列表中的某一个,如‘-q cinderclient’ --dp, --do-push - [可选] 指定是否执行push到gitee仓库上并提交PR,如果不指定则只会提交到本地的仓库中 --a, --arch - [可选] 指定生成Spec文件的arch,默认为'noarch' --py2, --python2 - [可选] 指定生成python2的软件包的Spec --nc, --no-check - [可选] 指定在生成的Spec文件中不添加check步骤 --rs, --reuse-spec - [可选] 复用已存在的spec文件,不再重新生成。 -``` -**注意:** `oos spec push`命令必选参数为`--gitee-pat` 即Gitee账号的token,可以指定 ---name,--version来提交单个包的spec,或者--projects-data指定包列表批量化提交, -其他参数均有默认值为可选参数。 -**注意:默认只是在本地repo提交,需要显示指定`-dp/--do-push`参数才能提交到Gitee上。** - -## 获取OpenStack SIG CI失败的PR列表 - -该工具能够扫描OpenStack SIG下面CI跑失败的PR,梳理成列表,包含PR责任人,失败日志链接等 - -1. 
调用oos命令, 将CI跑失败的PR信息梳理成列表输出 - -``` -oos repo pr-fetch --gitee-org GITEE_ORG -r REPO -s STATE -``` - -该命令所支持的参数如下: - -``` --g, --gitee-org - [可选] gitee组织的名称,默认为src-openeuler,可以使用GITEE_ORG环境变量指定 --r, --repo - [可选] 组织仓库的名称,默认为组织下的所有仓库 --s, --state - [可选] Pull Request 状态,选项有open、closed、merged、all,默认为open --o, --output - [可选] 输出文件名,默认为failed_PR_result.csv -``` - -该命令运行完后,目录下会生成1个结果文件,默认为`failed_PR_result.csv`。 - -## 为软件仓创建分支 - -可以使用`oos repo branch-create`命令为openeuler软件仓创建分支 - -- 为软件仓创建分支,需要提供要创建分支的软件仓列表`.csv`文件或者指定单个软件仓名称,对应新建分支信息以及Gitee的账号等信息, -以为openstack-nova仓创建openEuler-21.09分支为例: -```shell script -oos repo branch-create --repo openstack-nova -b openEuler-21.09 protected master -t GITEE_PAT -``` - -- 为软件仓批量创建多分支,需要提供要创建分支的软件仓列表`.csv`文件或者指定单个软件仓名称,对应新建分支信息以及Gitee的账号等信息, -以为repos.csv中软件仓创建openEuler-21.09分支和openEuler-22.03-LTS多分支为例,并提交pr为例: -```shell script -oos repo branch-create --repos-file repos.csv -b openEuler-21.09 protected master --b openEuler-22.03-LTS protected openEuler-22.03-LTS-Next -t GITEE_PAT --do-push -``` - -该命令所支持的参数如下: - -``` --t, --gitee-pat - [必选] 个人Gitee账户personal access token,可以使用GITEE_PAT环境变量指定 --e, --gitee-email - [可选] 个人Gitee账户email地址,可使用GITEE_EMAIL指定, 若在Gitee账户公开,可通过Token自动获取 --o, --gitee-org - [可选] repo所属的gitee组织名称,默认为src-openeuler --r, --repo - [可选] 软件仓名,和--repos-file参数二选一 --rf, --repos-file - [可选] openEuler社区软件仓库名的.csv文件,目前只需要包含`repo_name`一列,和--repo参数二选一 --b, --branches - [必选] 需要为软件包创建的分支信息,每个分支信息包含:要创建的分支名称,分支类型(一般为protected)和父分支名称, - 可以携带多个-b来批量创建多个分支 ---community-path - [可选] openeuler/community项目本地仓路径 --w, --work-branch - [可选] 本地工作分支,默认为openstack-create-branch --dp, --do-push - [可选] 指定是否执行push到gitee仓库上并提交PR,如果不指定则只会提交到本地的仓库中 -``` - -## 为软件仓删除分支 - -可以使用`oos repo branch-delete`命令为openeuler软件仓删除分支 - -- 为软件仓删除分支,需要提供要删除分支的软件仓列表`.csv`文件或者指定单个软件仓名称,对应需要删除的分支信息以及Gitee的账号等信息, -以为openstack-nova仓删除openEuler-21.09分支为例: -```shell script -oos repo branch-delete --repo openstack-nova -b openEuler-21.09 -t GITEE_PAT -``` - -- 为软件仓批量删除多个分支,需要提供要删除分支的软件仓列表`.csv`文件或者指定单个软件仓名称,对应需要删除的分支信息以及Gitee的账号等信息, -以为repos.csv中软件仓删除openEuler-21.09分支和openEuler-22.03-LTS多分支为例,并提交pr为例: -```shell script -oos repo branch-delete --repos-file repos.csv -b openEuler-21.09 -b openEuler-22.03-LTS -t GITEE_PAT --do-push -``` - -该命令所支持的参数如下: - -``` --t, --gitee-pat - [必选] 个人Gitee账户personal access token,可以使用GITEE_PAT环境变量指定 --e, --gitee-email - [可选] 个人Gitee账户email地址,可使用GITEE_EMAIL指定, 若在Gitee账户公开,可通过Token自动获取 --o, --gitee-org - [可选] repo所属的gitee组织名称,默认为src-openeuler --r, --repo - [可选] 软件仓名,和--repos-file参数二选一 --rf, --repos-file - [可选] openEuler社区软件仓库名的.csv文件,目前只需要包含`repo_name`一列,和--repo参数二选一 --b, --branch - [必选] 需要为软件仓删除的分支名称,可以携带多个-b来批量删除多个分支 ---community-path - [可选] openeuler/community项目本地仓路径 --w, --work-branch - [可选] 本地工作分支,默认为openstack-delete-branch --dp, --do-push - [可选] 指定是否执行push到gitee仓库上并提交PR,如果不指定则只会提交到本地的仓库中 -``` - -## 软件包放入OBS工程 - -可以使用`oos repo obs-create`命令将openEuler软件仓放入OBS工程,如果没有对应OBS工程,此命令会同时创建对应OBS工程 - -- 将单个软件放入OBS工程,以将openstack-nova放入openEuler:22.09:Epol工程为例,需要指定repo名,分支名以及Gitee账号信息: -```shell script -oos repo obs-create --repo openstack-nova -b openEuler-22.09 -t GITEE_PAT -``` - -- 将软件包放入OBS工程,默认是放入OBS对应工程的Epol仓,如果需要放入Mainline仓,可以通过--mainline参数来指定 - -以将openstack-releases放入openEuler:22.09:Mainline仓为例,需要指定repo名,分支名以及Gitee账号信息: -```shell script -oos repo obs-create --repo openstack-releases -b openEuler-22.09 --mainline -t GITEE_PAT -``` - -- 将多个软件放入OBS工程,以将repos.csv中软件放入openEuler:22.03:LTS:SP1:Epol:Multi-Version:OpenStacack:Train支为例,并提交pr: -```shell script -oos repo obs-create 
## Adding packages to OBS projects

The `oos repo obs-create` command puts an openEuler package repository into an OBS project. If the corresponding OBS project does not exist yet, the command creates it as well.

- To put a single package into an OBS project, for example to add openstack-nova to the openEuler:22.09:Epol project, specify the repo name, the branch name and the Gitee account information:
```shell script
oos repo obs-create --repo openstack-nova -b openEuler-22.09 -t GITEE_PAT
```

- By default packages are put into the Epol repository of the corresponding OBS project; to put them into the Mainline repository instead, pass the --mainline parameter.

  For example, to add openstack-releases to the openEuler:22.09:Mainline repository, specify the repo name, the branch name and the Gitee account information:
```shell script
oos repo obs-create --repo openstack-releases -b openEuler-22.09 --mainline -t GITEE_PAT
```

- To put multiple packages into an OBS project, for example to add the packages in repos.csv to the openEuler:22.03:LTS:SP1:Epol:Multi-Version:OpenStack:Train project and submit a PR:
```shell script
oos repo obs-create --repos-file repos.csv -b Multi-Version_OpenStack-Train_openEuler-22.03-LTS-SP1 -t GITEE_PAT --do-push
```

The parameters supported by this command are:

```
-t, --gitee-pat
    [Required] Personal access token of the Gitee account; can be set with the GITEE_PAT environment variable.
-e, --gitee-email
    [Optional] Email address of the Gitee account; can be set with GITEE_EMAIL. If it is public on the Gitee account, it can be fetched automatically via the token.
-r, --repo
    [Optional] Repository name; mutually exclusive with --repos-file.
-rf, --repos-file
    [Optional] `.csv` file with openEuler community repository names; currently only a `repo_name` column is needed. Mutually exclusive with --repo.
-b, --branch
    [Required] Branch of the repo to be put into the OBS project.
--mainline
    [Optional] Put the package into the mainline repository of the corresponding project; defaults to the Epol repository.
--obs-path
    [Optional] Local path of the src-openeuler/obs_meta repository.
-w, --work-branch
    [Optional] Local working branch, defaults to openstack-create-branch.
-dp, --do-push
    [Optional] Whether to push to the Gitee repository and open a PR; if omitted, changes are only committed to the local repository.
```

## Removing packages from OBS projects

The `oos repo obs-delete` command removes an openEuler package repository from an OBS project.

- To remove a single package from an OBS project, for example to remove python-mox from the openEuler:21.09:Epol project, specify the repo name, the branch name and the Gitee account information:
```shell script
oos repo obs-delete --repo python-mox -b openEuler-21.09 -t GITEE_PAT
```

- To remove multiple packages from an OBS project, for example to remove the packages in repos.csv from the openEuler:22.03:LTS:Epol:Multi-Version:OpenStack:Train project and submit a PR:
```shell script
oos repo obs-delete --repos-file repos.csv -b Multi-Version_OpenStack-Train_openEuler-22.03-LTS -t GITEE_PAT --do-push
```

The parameters supported by this command are:

```
-t, --gitee-pat
    [Required] Personal access token of the Gitee account; can be set with the GITEE_PAT environment variable.
-e, --gitee-email
    [Optional] Email address of the Gitee account; can be set with GITEE_EMAIL. If it is public on the Gitee account, it can be fetched automatically via the token.
-r, --repo
    [Optional] Repository name; mutually exclusive with --repos-file.
-rf, --repos-file
    [Optional] `.csv` file with openEuler community repository names; currently only a `repo_name` column is needed. Mutually exclusive with --repo.
-b, --branch
    [Required] Branch of the repo to be removed from the OBS project.
--obs-path
    [Optional] Local path of the src-openeuler/obs_meta repository.
-w, --work-branch
    [Optional] Local working branch, defaults to openstack-create-branch.
-dp, --do-push
    [Optional] Whether to push to the Gitee repository and open a PR; if omitted, changes are only committed to the local repository.
```

## Environment and dependencies

The `oos spec build` and `oos spec push` commands above depend on the `rpmbuild` tool, so the following packages need to be installed:
```shell script
yum install rpm-build rpmdevtools
```
In addition, the working directories required by the `rpmbuild` command must be prepared in advance by running:
```shell script
rpmdev-setuptree
```
When running `oos spec build` and `oos spec push`, the `--build-root` parameter should point to the root of the `rpmbuild` working directory; it defaults to the `rpmbuild/` directory under the current user's home directory.

Additionally, to make the tool easier to use, `Docker` can be used to quickly build a packaging environment; see the [README](https://gitee.com/openeuler/openstack/blob/master/tools/docker/README.md) in the `docker/` directory for details.
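Putting the environment steps above together, a minimal end-to-end packaging session might look like the sketch below. It only combines commands already documented in this file; the token and package are placeholders, and `~/rpmbuild` simply spells out the documented default of `--build-root`:

```shell script
# One-time environment preparation (see "Environment and dependencies" above)
yum install -y rpm-build rpmdevtools
rpmdev-setuptree                    # creates the rpmbuild/ working tree in the home directory

# Build the spec of a single package and submit it as a PR to openEuler
export GITEE_PAT=xxxxxxxx           # personal access token of your Gitee account
oos spec push --name stevedore --version 1.28.0 --build-root ~/rpmbuild --do-push
```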
diff --git a/tools/oos/etc/constants.yaml b/tools/oos/etc/constants.yaml deleted file mode 100644 index d774988c65731c942aa5a63711696265a14a0b28..0000000000000000000000000000000000000000 --- a/tools/oos/etc/constants.yaml +++ /dev/null @@ -1,1973 +0,0 @@ ---- -# : -pypi2pkgname: - Babel: babel - Django: Django - Flask-RESTful: flask-restful - Flask: flask - Jinja2: jinja2 - Mako: mako - MarkupSafe: markupsafe - Paste: paste - PasteDeploy: paste-deploy - Pillow: pillow - Pint: pint - PyJWT: jwt - PyMySQL: PyMySQL - PyNaCl: pynacl - PySocks: pysocks - Pygments: pygments - Routes: routes - SQLAlchemy: sqlalchemy - Sphinx: sphinx - Tempita: tempita - WSME: wsme - WebOb: webob - WebTest: webtest - Werkzeug: werkzeug - alabaster: sphinx-theme-alabaster - dnspython: dns - gitdb2: gitdb - oslosphinx: oslo-sphinx - ovs: openvswitch - prometheus-client: prometheus_client - pyenchant: enchant - pygraphviz: graphviz - pyinotify: inotify - pykerberos: kerberos - python-json-logger: json_logger - python-qpid-proton: qpid-proton - python-subunit: subunit - repoze.lru: repoze-lru - semantic-version: semantic_version - smmap2: smmap - systemd-python: systemd - -pypi2reponame: - Babel: babel - Django: django - Flask-RESTful: flask-restful - Flask: flask - Jinja2: jinja2 - Mako: mako - MarkupSafe: markupsafe - Paste: paste - PasteDeploy: paste-deploy - Pillow: pillow - Pint: pint - PrettyTable: prettytable - PyECLib: pyeclib - PyJWT: python-jwt - PyNaCl: python-pynacl - PySocks: pysocks - Pygments: python-pygments - Routes: routes - SQLAlchemy: sqlalchemy - Sphinx: sphinx - Tempita: tempita - WSME: wsme - WebOb: webob - WebTest: webtest - Werkzeug: werkzeug - Yappi: yappi - alabaster: python-sphinx-theme-alabaster - dfs_sdk: dfs-sdk - dnspython: dns - gitdb2: python-gitdb - grpcio: grpc - jaraco.collections: jaraco-collections - jaraco.functools: jaraco-functools - oslosphinx: python-oslo.sphinx - ovs: openvswitch - prometheus-client: python-prometheus_client - pycrypto: python-crypto - pygraphviz: python-graphviz - pyinotify: python-inotify - pykerberos: kerberos - pyldap: python-ldap - python-json-logger: python-json_logger - python-subunit: subunit - repoze.lru: repoze-lru - ruamel.yaml: ruamel-yaml - semantic-version: python-semantic_version - setuptools-git: setuptools_git - smmap2: python-smmap - sphinx_feature_classification: sphinx-feature-classification - sphinx-rtd-theme: python-sphinx_rtd_theme - sphinxcontrib.autoprogram: sphinxcontrib-autoprogram - systemd-python: python-systemd - zope.component: zope-component - zope.interface: zope-interface - -# Some projects' name in pypi json file doesn't correct, this mapping fix it. -# Wrong name : Right name(pypi name) -pypi_name_fix: - BeautifulSoup4: beautifulsoup4 - Click: click - IPython: ipython - Ipython: ipython - Six: six - Webob: WebOb - babel: Babel - cloud-sptheme: cloud_sptheme - couchdb: CouchDB - cython: Cython - django: Django - flask: Flask - jinja2: Jinja2 - logbook: Logbook - openstack.nose-plugin: openstack.nose_plugin - prometheus_client: prometheus-client - pygments: Pygments - pyjwt: PyJWT - pympler: Pympler - pymysql: PyMySQL - pyopenssl: pyOpenSSL - pyyaml: PyYAML - requests_mock: requests-mock - secretstorage: SecretStorage - sphinx: Sphinx - sphinxcontrib_issuetracker: sphinxcontrib-issuetracker - sqlalchemy-utils: SQLAlchemy-Utils - sqlalchemy: SQLAlchemy - webob: WebOb - werkzeug: Werkzeug - -# Some project's version doesn't exist, this mapping correct the version. 
-pypi_version_fix: - PrettyTable-0.7: 0.7.2 - Sphinx-1.6.0: 1.6.1 - aioeventlet-0.4: 0.5.1 - alabaster-0.7: 0.7.1 - attrs-19.0: 19.1.0 - bitmath-1.3.0: 1.3.0.1 - chardet-2.0: 2.2.1 - chardet-2.2: 2.2.1 - cloud_sptheme-1.10.1: 1.10.1.post20200504175005 - cradox-2.0.0: 2.1.2 - furo-2021.6.24: 2021.6.24b37 - html5lib-0.99999999pre: 1.0 - importlib-resources-1.6: 2.0.0 - lazy-object-proxy-1.4.*: 1.4.3 - msgpack-0.4.0: 0.5.0 - nbformat-5.0: 5.0.2 - openstackdocstheme-1.32.1: 2.0.0ß - pep517-0.9: 0.9.1 - pluggy-0.7: 0.7.1 - prompt-toolkit-2.0.0: 2.0.1 - py-1.5.0: 1.5.1 - pyOpenSSL-1.0.0: 16.0.0 - pyldap-2.4: 2.4.14 - pyngus-2.0.0: 2.0.3 - pytz-0a: 2020.1 - setuptools-0.6a2: 0.7.2 - stone-2.*: 3.0.0 - trollius-1.0: 1.0.4 - -black_list: - # Some projects don't exist on pypi. We only deal them by hand, skip them in oos. - - infinisim - - pyev - - rados - - python-rados - - rbd - # Some projects are out of date. - - argparse - # Some project don't support python3 - - anyjson - - mox - - enum34 - - futures - # Some projects are removed from openEuler. - # The projects which relies on this kind of projects should be removed as well. - - nose - - nosehtmloutput - - nosexcover - - openstack.nose_plugin - - django-babel - - CouchDB - - unicodecsv - -# : -pypi_license: - django-compressor: Apache-2.0 MIT - django-pyscss: BSD - jsonpath-rw-ext: Apache-2.0 - sqlalchemy-migrate: MIT - testresources: Apache-2.0 - XStatic-Angular-FileUpload: MIT - XStatic-Angular-lrdragndrop: MIT - XStatic-Bootstrap-Datepicker: Apache-2.0 - XStatic-Hogan: Apache-2.0 - XStatic-Jasmine: MIT - XStatic-jQuery: MIT - XStatic-JQuery-Migrate: MIT - XStatic-jquery-ui: MIT - XStatic-JQuery.quicksearch: MIT - XStatic-JQuery.TableSorter: MIT - XStatic-mdi: SIL OFL 1.1 - XStatic-Rickshaw: MIT - XStatic-smart-table: MIT - XStatic-Spin: MIT - XStatic-term.js: MIT - XStatic-tv4: Public Domain - URLObject: Unlicense - -pkg_description: - Babel: |- - Babel is an integrated collection of utilities that assist in internationalizing and - localizing Python applications, with an emphasis on web-based applications. - diskimage-builder: |- - diskimage-builder is a flexible suite of components for building a - wide-range of disk images, filesystem images and ramdisk images for - use with OpenStack. - elasticsearch7: |- - Official low-level client for Elasticsearch. Its goal is to provide common - ground for all Elasticsearch-related code in Python; because of this it tries - to be opinion-free and very extendable. - future: |- - This package intends to provides a compatibility layer for Python between its - two version release. The future and past packages are both provides for backports - and forwards, in which you are able to use a single, clean codebase to run under - Python3 environmets easily. With also providing futurize and pasteurize scripts, - you can convert you Python code to support both version. - networking-baremetal: |- - This project's goal is to provide deep integration between the Networking - service and the Bare Metal service and advanced networking features like - notifications of port status changes and routed networks support in clouds - with Bare Metal service. - networking-generic-switch: |- - The Modular Layer 2 (ml2) plugin driver, that allow to work with switches - from different vendors. It uses netmiko library that configures equipment - via SSH. - numpy: |- - NumPy is the fundamental package for scientific computing with Python. - It contains among other things: - 1. a powerful N-dimensional array object - 2. 
sophisticated (broadcasting) functions - 3. tools for integrating C/C++ and Fortran code - useful linear algebra, Fourier transform, and random number capabilities - Besides its obvious scientific uses, NumPy can also be used as an efficient - multi-dimensional container of generic data. Arbitrary data-types can be - defined. This allows NumPy to seamlessly and speedily integrate with a wide - variety of databases. - protobuf: |- - This package containers Protocol Buffers compiler for all programming languages. - pyflakes: |- - Pyflakes A simple program which checks Python source files for errors - pylint: |- - Pylint is a Python source code analyzer which looks for programming - errors, helps enforcing a coding standard and sniffs for some code - smells (as defined in Martin Fowler's Refactoring book). - Pylint can be seen as another PyChecker since nearly all tests you - can do with PyChecker can also be done with Pylint. However, Pylint - offers some more features, like checking length of lines of code, - checking if variable names are well-formed according to your coding - standard, or checking if declared interfaces are truly implemented, - and much more. - Additionally, it is possible to write plugins to add your own checks. - pyOpenSSL: |- - pyOpenSSL is a rather thin wrapper around (a subset of) the OpenSSL library. - With thin wrapper we mean that a lot of the object methods do nothing more - than calling a corresponding function in the OpenSSL library. - pyparsing: |- - The pyparsing module is an alternative approach to creating and executing simple - grammars, vs. the traditional lex/yacc approach, or the use of regular expressions. - The pyparsing module provides a library of classes that client code uses to - construct the grammar directly in Python code. - pyScss: |- - A Scss compiler for Python - pytest: |- - py.test provides simple, yet powerful testing for Python. - actdiag: |- - actdiag generate activity-diagram image file from spec-text file. - alembic: |- - Alembic is a database migrations tool written by the author of SQLAlchemy. A - migrations tool offers the following functionality: - 1. Can emit ALTER statements to a database in order to change the structure of - tables and other constructs - 2. Provides a system whereby "migration scripts" may be constructed; each script - indicates a particular series of steps that can "upgrade" a target database to a - new version, and optionally a series of steps that can "downgrade" similarly, - doing the same steps in reverse. - 3. Allows the scripts to execute in some sequential manner. - amqp: |- - This is a fork of amqplib which was originally written by Barry Pederson. - It is maintained by the Celery project, and used by kombu as a pure python - alternative when librabbitmq is not available. - This library should be API compatible with librabbitmq. - aniso8601: |- - A python library for parsing ISO 8601 strings. - aodhclient: |- - Python client library for Aodh. - appdirs: |- - A small Python 3 module for determining appropriate " + " platform-specific directories, - e.g. a "user data dir". - asgiref: |- - ASGI is a standard for Python asynchronous web apps and servers to communicate - with each other, and positioned as an asynchronous successor to WSGI. You can - read more at package includes ASGI base libraries, such as:Sync-to-async and - async-to-sync function wrappers, asgiref.sync Server base classes, - asgiref.server A WSGI-to-ASGI adapter, in asgiref. 
- astroid: |- - An abstract syntax tree for Python with inference support. - The aim of this module is to provide a common base representation of python - source code. It is currently the library powering pylint capabilities. - attrs: |- - attrs is an MIT-licensed Python package with class decorators that - ease the chores of implementing the most common attribute-related - object protocols. - automaton: |- - Automaton Friendly state machines for python. The goal of this library is to - provide well documented state machine classes and associated utilities. The - state machine pattern (or the implemented variation there-of) is a commonly used - pattern and has a multitude of various usages. Some of the usages for this - library include providing state & transition validation and - running/scheduling/analyzing the execution of tasks. - bandit: |- - A security linter from PyCQA - python-barbicanclient: |- - python-barbicanclient This is a client for the Barbican. - bashate: |- - This program attempts to be an automated style checker for bash scripts - to fill the same part of code review that pep8 does in most OpenStack - projects. It started from humble beginnings in the DevStack project, - and will continue to evolve over time. - bcrypt: |- - Good password hashing for your software and your servers. - This library should be compatible with py-bcrypt and it will run on - Python 2.7, 3.4+, and PyPy 2.6+. - beautifulsoup4: |- - This package provides a python library which is designed for quick - turnaround projects.It provides methods for navigating and modifying - a parse tree.It can help convert incoming documents to Unicode - and outgoing documents to utf-8. - bitmath: |- - simplifies many facets of interacting with file sizes in various units. - blockdiag: |- - * Generate block-diagram from dot like text (basic feature). - * Multilingualization for node-label (utf-8 only). - boto: |- - Boto is a Python package that provides interfaces to Amazon Web Services. - It supports over thirty services, such as S3 (Simple Storage Service), - SQS (Simple Queue Service), and EC2 (Elastic Compute Cloud) via their - REST and Query APIs. The goal of boto is to support the full breadth - and depth of Amazon Web Services. In addition, boto provides support - for other public services such as Google Storage in addition to private - cloud systems like Eucalyptus, OpenStack and Open Nebula. - boto3: |- - Boto3 is the Amazon Web Services (AWS) Software Development - Kit (SDK) for Python, which allows Python developers to - write software that makes use of services like Amazon S3 - and Amazon EC2. - botocore: |- - A low-level interface to a growing number of Amazon Web Services. The - botocore package is the foundation for the AWS CLI as well as boto3. - cachetools: |- - This module provides various memoizing collections and decorators, - including variants of the Python Standard Library's `@lru_cache`_ - function decorator. - cachez: |- - Cache decorator for global or instance level memoize. - castellan: |- - Generic Key Manager interface for OpenStack - certifi: |- - Certifi provides Mozilla carefully curated collection of Root Certificates for validating the - trustworthiness of SSL certificates while verifying the identity of TLS hosts. It has been - extracted from the Requests project. - cffi: |- - C Foreign Function Interface for Python. Interact with almost any C code from Python, - based on C-like declarations that you can often copy-paste from header files or documentation. 
- cfgv: |- - Validate configuration and produce human readable error messages. - chardet: |- - Universal encoding detector for Python 2 and 3 - python-cinderclient: |- - the OpenStack Cinder API This is a client for the OpenStack Cinder API. There's - a Python API (the cinderclient module), and a command-line script (cinder). - click: |- - Click is a Python package for creating beautiful command line interfaces - in a composable way with as little code as necessary. It's the - "Command Line Interface Creation Kit". It's highly configurable but comes - with sensible defaults out of the box. - cliff: |- - cliff is a framework for building command line programs. It uses - entry points to provide subcommands, output formatters, and other extensions. - cmd2: |- - quickly build feature-rich and user-friendly interactive command line applications in Python - colorama: |- - Makes ANSI escape character sequences (for producing colored terminal - text and cursor positioning) work under MS Windows. - construct: |- - Construct is a powerful declarative and symmetrical parser and builder for binary data. - Instead of writing imperative code to parse a piece of data, you declaratively define - a data structure that describes your data. As this data structure is not code, you can - use it in one direction to parse data into Pythonic objects, and in the other direction, - to build objects into binary data. - python-consul: |- - Python client for Consul (http://www.consul.io/). - coverage: |- - Coverage.py measures code coverage, typically during test execution. It uses - the code analysis tools and tracing hooks provided in the Python standard - library to determine which lines are executable, and which have been executed. - cryptography: |- - cryptography is a package designed to expose cryptographic primitives and - recipes to Python developers. - cursive: |- - Cursive implements OpenStack-specific validation of digital signatures. - daiquiri: |- - The daiquiri library provides an easy way to configure logging. - It also provides some custom formatters and handlers. - python-dateutil: |- - The dateutil module provides powerful extensions to the standard datetime module, available in Python. - ddt: |- - A library to multiply test cases - debtcollector: |- - A collection of Python deprecation patterns and strategies that help you collect your technical debt in a non-destructive manner. - decorator: |- - The goal of the decorator module is to make it easy to define signature-preserving - function decorators and decorator factories. It also includes an implementation of multiple dispatch and - other niceties (please check the docs). - defusedxml: |- - XML bomb protection for Python stdlib modules. - python-designateclient: |- - Client library and command line utility for interacting with Openstack Designate API - distlib: |- - A library of packaging functionality which is intended to be used as the - basis for third-party packaging tools. - Django: |- - A high-level Python Web framework that encourages rapid development and clean, pragmatic design. - django-appconf: |- - A helper class for handling configuration defaults of packaged Django apps gracefully. - django-compressor: |- - Django Compressor processes, combines and minifies linked and inline - Javascript or CSS in a Django template into cacheable static files. - It supports compilers such as coffeescript, LESS and SASS and is - extensible by custom processing steps. 
- django-debreach: |- - Basic/extra mitigation against the BREACH attack for Django projects. - When combined with rate limiting in your web-server, or by using something - like django-ratelimit, the techniques here should provide at least some - protection against the BREACH attack. - django-pyscss: |- - A collection of tools for making it easier to use pyScss within Django. - This version only supports pyScss 1.3.4 and greater. For pyScss 1.2 support, - you can use the 1.x series of django-pyscss. - dnspython: |- - dnspython is a DNS toolkit for Python. It supports - almost all record types. It can be used for queries, - zone transfers, and dynamic updates. It supports TSIG - authenticated messages and EDNS0. - dnspython provides both high and low level access to DNS. - The high level classes perform queries for data of a given - name, type, and class, and return an answer set. The low - level classes allow direct manipulation of DNS zones, - messages, names, and records. - doc8: |- - doc8 is an opinionated style checker for rst (with basic support for plain text) - styles of documentation. - docker: |- - A Python library for the Docker Engine API. - docutils: |- - Docutils is an open-source text processing system for processing plaintext - documentation into useful formats, such as HTML, LaTeX, man-pages, - open-document or XML. It includes reStructuredText, the easy to read, easy - to use, what-you-see-is-what-you-get plaintext markup language. - dogpile.cache: |- - A caching front-end based on the Dogpile lock. - python-dracclient: |- - Library for managing machines with Dell iDRAC cards. - dulwich: |- - Dulwich is a Python implementation of the Git file formats and protocols, - which does not depend on Git itself. - python-editor: |- - Python-editor is a library that provides the editor module for programmatically interfacing with your system's $EDITOR. - Editor first looks for the ${EDITOR} environment variable. If set, it uses the value as-is, without fallbacks. - If no $EDITOR is set, editor will search through a list of known editors, and use the first one that exists on the system. - enmerkar: |- - This package contains various utilities for integration of Babel into the - Django web framework: - * A message extraction plugin for Django templates. - * A middleware class that adds the Babel Locale object to requests. - * A set of template tags for date and number formatting. - etcd3: |- - Python client for the etcd API v3, supported under python 2.7, 3.4 and 3.5. - etcd3gw: |- - A python client for etcd3 grpc-gateway v3 API - eventlet: |- - Eventlet is a concurrent networking library for Python that allows you to change how you run your code, not how you write it. - extras: |- - python-extras is a set of extensions to the standard library. - fasteners: |- - A python package that provides useful locks. - feedparser: |- - Universal feed parser, handles RSS 0.9x, RSS 1.0, RSS 2.0, CDF, Atom 0.3, and Atom 1.0 feeds - filelock: |- - This package contains a single module, which implements a platform - independent file locking mechanism for Python. - fixtures: |- - Fixtures is a python contract that provides reusable state / support logic - for unit testing. It includes some helper and adaptation logic to write your - own fixtures using the fixtures contract. - flake8: |- - The modular source code checker: pep8 pyflakes and co. - flake8-docstrings: |- - A simple module that adds an extension for the fantastic pydocstyle tool to flake8. 
- flake8-import-order: |- - A flake8 and Pylama plugin that checks the ordering of your imports. It does - not check anything else about the imports. Merely that they are grouped and ordered correctly. - Flask: |- - Flask is a lightweight WSGI web application framework. It is designed - to make getting started quick and easy, with the ability to scale up - to complex applications. It began as a simple wrapper around Werkzeug - and Jinja and has become one of the most popular Python web application - frameworks. - Flask-RESTful: |- - Flask-RESTful provides the building blocks for creating a REST API. - freezegun: |- - Simulate the datetime module with the python-freezegun library, allowing your python - tests to travel through time. - funcparserlib: |- - Parser combinators are just higher-order functions that take parsers as their - arguments and return them as result values. - futurist: |- - Useful additions to futures, from the future. - gabbi: |- - Gabbi is a tool for running HTTP tests where requests and responses - are represented in a declarative YAML-based form. - geomet: |- - Convert GeoJSON to WKT/WKB (Well-Known Text/Binary), and vice versa. - gitdb: |- - GitDB allows you to access bare git repositories for reading and writing. - It aims at allowing full access to loose objects as well as packs with - performance and scalability in mind. It operates exclusively on streams, - allowing to handle large objects with a small memory footprint. - GitPython: |- - GitPython is a python library used to interact with git repositories, - high-level like git-porcelain, or low-level like git-plumbing. - python-glanceclient: |- - This is a client for the OpenStack Glance API. There's a Python API (the - glanceclient module), and a command-line script (glance). Each implements - 100% of the OpenStack Glance API - glance-store: |- - Glance store library. - graphviz: |- - This package makes it easier to create and render a graph description in - the DOT language which from the Python Graphviz graph drawing software. - greenlet: |- - The greenlet package is a spin-off of Stackless, a version of CPython - that supports micro-threads called "tasklets". Tasklets run pseudo-concurrently - (typically in a single or a few OS-level threads) and are synchronized - with data exchanges on "channels". - hacking: |- - hacking is a set of flake8 plugins that test and enforce the OpenStack StyleGuide - Hacking pins its dependencies, as a new release of some dependency can break - hacking based gating jobs. This is because new versions of dependencies can - introduce new rules, or make existing rules stricter. - python-heatclient: |- - This is a client library for Heat built on the Heat orchestration API. It provides a Python API (the heatclient - module) and a command-line tool (heat). - httplib2: |- - httplib2 is a comprehensive HTTP client library, httplib2.py supports many - features left out of other HTTP libraries. - identify: |- - File identification library for Python. - Given a file (or some information about a file), return a set of standardized tags identifying what the file is. - idna: |- - A library to support the Internationalised Domain Names in - Applications (IDNA) protocol as specified in RFC 5891 - http://tools.ietf.org/html/rfc5891. This version of the protocol - is often referred to as “IDNA2008” and can produce different - results from the earlier standard from 2003. - ifaddr: |- - ifaddr is a small Python library that allows you to find all the - IP addresses of the computer. 
It is tested on **Linux**, **OS X**, and - **Windows**. Other BSD derivatives like **OpenBSD**, **FreeBSD**, and - **NetBSD** should work too, but I haven't personally tested those. - **Solaris/Illumos** should also work. - This library is open source and released under the MIT License. It works - with Python 2.7 and 3.5+. - imagesize: |- - This module analyzes JPEG/JPEG 2000/PNG/GIF/TIFF image headers and returns image size. - importlib-metadata: |- - importlib_metadata is a library which provides an API for accessing an - installed package’s metadata (see PEP 566), such as its entry points or - its top-level name. - importlib-resources: |- - Importlib_resources is a backport of Python standard library. - iniconfig: |- - iniconfig is a small and simple INI-file parser module having a unique set of features: - tested against Python2.4 across to Python3.2, Jython, PyPy - maintains order of sections and entries - supports multi-line values with or without line-continuations - supports “#” comments everywhere - raises errors with proper line-numbers - no bells and whistles like automatic substitutions - iniconfig raises an Error if two sections have the same name. - pyinotify: |- - Pyinotify is a Python module for monitoring filesystems changes. - Pyinotify relies on a Linux Kernel feature (merged in kernel 2.6.13) called inotify. - inotify is an event-driven notifier, its notifications are exported from kernel space - to user space through three system calls. pyinotify binds these system calls and provides - an implementation on top of them offering a generic and abstract way to manipulate those - functionalities. - python-ironicclient: |- - This is a client for the OpenStack Bare Metal API. - python-ironic-inspector-client: |- - This is a client library and tool for Ironic. - ironic-lib: |- - A common library to be used exclusively by projects under the Ironic governance. - iso8601: |- - The package parses the most common forms of ISO 8601 date strings - (e.g. 2007-01-14T20:34:22+00:00) into datetime objects. - isort: |- - Isort is a Python utility / library to sort imports alphabetically, and automatically separated into sections and by type. - itsdangerous: |- - Various helpers to pass data to untrusted environments and to get it back safe and sound. - Data is cryptographically signed to ensure that a token has not been tampered with. - jaeger-client: |- - This is a client-side library that can be used to instrument Python apps for distributed trace collection, - and to send those traces to Jaeger. See the OpenTracing Python API for additional detail. - jeepney: |- - This is a low-level, pure Python DBus protocol client. - Jinja2: |- - Jinja2 is one of the most used template engines for Python. It is inspired by Django's - templating system but extends it with an expressive language that gives template authors - a more powerful set of tools. On top of that it adds sandboxed execution and optional - automatic escaping for applications where security is important. - jmespath: |- - JMESPath is a python library which allows you to declaratively specify how to - extract elements from a JSON document. - python-json-logger: |- - This library is provided to allow standard python logging to output log data - as json objects. With JSON we can make our logs more readable by machines and - we can stop writing custom parsers for syslog type records. - jsonpatch: |- - Library to apply JSON Patches according to RFC 6902 - Python 2 build. 
- jsonpath-rw: |- - This library provides a robust and significantly extended implementation - of JSONPath for Python. It is tested with Python 2.6, 2.7, 3.2, 3.3. - jsonpath-rw-ext: |- - Extensions for JSONPath RW - jsonpointer: |- - python-json-pointer is a Python library for resolving JSON pointers (RFC 6901). Python 2.7, 3.4+ and PyPy are supported. - jsonschema: |- - jsonschema is JSON Schema validator currently based on http://tools.ietf.org/html/draft-zyp-json-schema-03 - PyJWT: |- - PyJWT is a Python library which allows you to encode and decode JSON Web Tokens (JWT). - JWT is an open, industry-standard (RFC 7519) for representing claims securely between two parties. - kazoo: |- - Kazoo is a Python library designed to make working with Zookeeper a more hassle-free experience that is less prone to errors. - keyring: |- - On Linux, the KWallet backend relies on dbus-python_, which does not always - install correctly when using pip (compilation is needed). For best results, - install dbus-python as a system package. - keystoneauth1: |- - Keystoneauth provides a standard way to do authentication and service requests \ - within the OpenStack ecosystem. It is designed for use in conjunction with \ - the existing OpenStack clients and for simplifying the process of writing \ - new clients. - python-keystoneclient: |- - This is a client for the OpenStack Identity API, implemented by the Keystone team; it contains a Python API (the - keystoneclient module) for OpenStack's Identity Service. - keystonemiddleware: |- - OpenStack Identity API (Keystone) This package contains middleware modules - designed to provide authentication and authorization features to web services - kombu: |- - Kombu is a messaging library for Python. - The aim of Kombu is to make messaging in Python as easy as possible by - providing an idiomatic high-level interface for the AMQ protocol, and also - provide proven and tested solutions to common messaging problems. - AMQP is the Advanced Message Queuing Protocol, an open standard protocol - for message orientation, queuing, routing, reliability and security, for - which the RabbitMQ messaging server is the most popular implementation. - lazy-object-proxy: |- - A fast and thorough lazy object proxy that rebuilds a new - abstract syntax tree from Python's ast. - python-ldap: |- - python-ldap provides an object-oriented API for working with LDAP within - Python programs. It allows access to LDAP directory servers by wrapping the - OpenLDAP 2.x libraries, and contains modules for other LDAP-related tasks - (including processing LDIF, LDAPURLs, LDAPv3 schema, etc.). - ldappool: |- - A simple connector pool for python-ldap. - The pool keeps LDAP connectors alive and let you reuse them, - drastically reducing the time spent to initiate a ldap connection. - linecache2: |- - The linecache module allows one to get any line from any file, while - attempting to optimize internally, using a cache, the common case where many - lines are read from a single file. This package provides a backport of - linecache to older supported Python versions. - Logbook: |- - Logbook is a logging system for Python that replaces the standard library’s - logging module. It was designed with both complex and simple applications in - mind and the idea to make logging fun - logutils: |- - This package provides many handlers which beyond the scope of standard library - or are ported from newer Python releases for using in older versions of Python. 
- These handlers are designed for Python standard library's logging package. - lxml: |- - The lxml XML toolkit is a Pythonic binding for the C libraries libxml2 and libxslt. - It is unique in that it combines the speed and XML feature completeness of these libraries with - the simplicity of a native Python API, mostly compatible but superior to the well-known ElementTree API. - The latest release works with all CPython versions from 2.7 to 3.7. - Mako: |- - Python-mako is a template library for Python. It provides a familiar, non-XML - syntax which compiles into Python modules for maximum performance. Mako's syntax - and API borrows from the best ideas of many others, including Django templates, - Cheetah, Myghty, and Genshi. - MarkupSafe: |- - MarkupSafe implements a text object that escapes characters so it is safe to use in HTML and XML. - Characters that have special meanings are replaced so that they display as the actual characters. - This mitigates injection attacks, meaning untrusted user input can safely be displayed on a page. - mccabe: |- - Ned's script to check McCabe complexity. This module provides a plugin - for flake8, the Python code checker. - python-memcached: |- - This software is a 100% Python interface to the memcached memory cache - daemon. It is the client side software which allows storing values in one - or more, possibly remote, memcached servers. Search google for memcached - for more information. - microversion-parse: |- - A small set of functions to manage OpenStack microversion headers that can - be used in middleware, application handlers and decorators to effectively - manage microversions.Also included, in the middleware module, is a - MicroversionMiddleware that will process incoming microversion headers. - python-mimeparse: |- - This module provides basic functions for handling mime-types. - It can handle matching mime-types against a list of media-ranges. - python-mistralclient: |- - Team and repository tags Mistral is a workflow service. Most business processes - consist of multiple distinct interconnected steps that need to be executed in a - particular order in a distributed environment. A user can describe such a - process as a set of tasks and their transitions. - mock: |- - Mock is a Python module which provides a core mock class. It removes the need - to create a host of stubs throughout your test suite. After performing an - action, you can make assertions about which methods / attributes were used and - arguments they were called with. You can also specify return values and set - needed attributes in the normal way. - monotonic: |- - This module provides a monotonic() function which returns the value - (in fractional seconds) of a clock which never goes backwards. - more-itertools: |- - This is a python library for efficient use of itertools utility, which also - includes implementations of the recipes from the itertools documentation. - See https://pythonhosted.org/more-itertools/index.html for more information. - msgpack: |- - MessagePack is an efficient binary serialization format. It lets you exchange - data among multiple languages like JSON. But it's faster and smaller. This - package provides CPython bindings for reading and writing MessagePack data. - munch: |- - munch is a fork of David Schoonover's **Bunch** package, providing similar functionality. - mypy: |- - Add type annotations to your Python programs, and use mypy to type - check them. 
Mypy is essentially a Python linter on steroids, and it - can catch many programming errors by analyzing your program, without - actually having to run it. Mypy has a powerful type system with - features such as type inference, gradual typing, generics and union - types. - netaddr: |- - A pure Python network address representation and manipulation library - netifaces: |- - It’s been annoying me for some time that there’s no easy way to get the - address(es) of the machine’s network interfaces from Python. There is - a good reason for this difficulty, which is that it is virtually impossible - to do so in a portable manner. However, it seems to me that there should - be a package you can easy_install that will take care of working out the - details of doing so on the machine you’re using, then you can get on with - writing Python code without concerning yourself with the nitty gritty of - system-dependent low-level networking APIs. - This package attempts to solve that problem. - netmiko: |- - Multi-vendor library to simplify Paramiko SSH connections to network devices - networkx: |- - NetworkX is a Python package for the creation, manipulation, - and study of the structure, dynamics, and functions - of complex networks. - python-neutronclient: |- - Client library and command line utility for interacting with OpenStack Neutron's API - neutron-lib: |- - OpenStack Neutron library shared by all Neutron sub-projects. - python-novaclient: |- - This is a client for the OpenStack Nova API. There's a Python API (the - novaclient module), and a command-line script (nova). Each implements 100% of - the OpenStack Nova API. - oauth2client: |- - This is a python client module for accessing resources protected by OAuth 2.0 - oauthlib: |- - AuthLib is a framework which implements the logic of OAuth1 or OAuth2 - without assuming a specific HTTP request object or web framework. Use - it to graft OAuth client support onto your favorite HTTP library, or - provide support onto your favourite web framework. If you're a - maintainer of such a library, write a thin veneer on top of OAuthLib - and get OAuth support for very little effort. - python-openstackclient: |- - python-openstackclient is a unified command-line client for the OpenStack APIs. - openstackdocstheme: |- - Theme and extension support for Sphinx documentation that is published by Open Infrastructure Foundation projects. - openstacksdk: |- - A collection of libraries for building applications to work with OpenStack clouds. - os-brick: |- - Volume discovery and local storage management lib - osc-lib: |- - OpenStackClient Library - os-client-config: |- - The os-client-config is a library for collecting client configuration for - using an OpenStack cloud in a consistent and comprehensive manner. It - will find cloud config for as few as 1 cloud and as many as you want to - put in a config file. It will read environment variables and config files, - and it also contains some vendor specific default values so that you don't - have to know extra info to use OpenStack - osc-placement: |- - osc-placement OpenStackClient plugin for the Placement serviceThis is an - OpenStackClient plugin, that provides CLI for the Placement service. Python API - binding is not implemented Placement API consumers are encouraged to use the - REST API directly, CLI is provided only for convenience of users. - os-ken: |- - Os-ken is a fork of Ryu. 
It provides software components with well - defined API that make it easy for developers to create new network - management and control applications. - oslo.cache: |- - oslo.cache aims to provide a generic caching mechanism for OpenStack projects by - wrapping the dogpile. - oslo.concurrency: |- - OpenStack library for all concurrency-related code - oslo.config: |- - The Oslo configuration API supports parsing command line arguments and .ini style configuration files. - oslo.context: |- - Oslo Context Library The Oslo context library has helpers to maintain useful - information about a request context. The request context is usually populated in - the WSGI pipeline and used by various modules such as logging. - oslo.db: |- - OpenStack Common DB Code - oslo.i18n: |- - Internationalization and translation library - oslo.log: - The oslo.log (logging) configuration library provides - standardized configuration for all openstack projects. - It also provides custom formatters, handlers and support - for context specific logging (like resource id’s etc). - oslo.messaging: |- - Team and repository tags .. Change things from this point onOslo Messaging - Library The Oslo messaging API supports RPC and notifications over a number of - different messaging transports. - oslo.middleware: |- - Oslo middleware library includes components that can be injected into wsgi pipelines to intercept request/response flows. - The base class can be enhanced with functionality like add/delete/modification of http headers and support for limiting - size/connection etc. - oslo.policy: |- - An OpenStack library for policy. - oslo.privsep: |- - OpenStack library for privilege separation.This library helps applications - perform actions which require more or less privileges than they were started - with in a safe, easy to code and easy to use manner. - oslo.reports: |- - OpenStack library for creating Guru Meditation Reports and other reports - oslo.rootwrap: |- - OpenStack library for rootwrap - oslo.serialization: |- - The oslo.serialization library provides support for representing objects in - transmittable and storable formats, such as Base64, JSON and MessagePack. - oslo.service: |- - Library for running OpenStack services - oslosphinx: |- - Theme and extension support for Sphinx documentation from the OpenStack project. - oslo.upgradecheck: |- - Common code for writing OpenStack upgrade checks. - oslo.utils: |- - The oslo.utils library provides support for common utility type functions, - such as encoding, exception handling, string manipulation, and time handling. - oslo.versionedobjects: |- - OpenStack versioned objects library - oslo.vmware: |- - Oslo VMware library for OpenStack projects - oslotest: |- - OpenStack Testing Framework and Utilities - osprofiler: |- - OSProfiler is an OpenStack cross-project profiling library. - os-resource-classes: |- - A library containing standardized resource class names in the Placement service. - os-service-types: |- - Python library for consuming OpenStack sevice-types-authority data - os-testr: |- - ostestr is a testr wrapper that uses subunit-trace for output and builds - some helpful extra functionality around testr. - os-traits: |- - os-traits is an OpenStack library containing standardized trait strings - os-vif: |- - A library for plugging and unplugging virtual interfaces in OpenStack. - os-win: |- - This library contains Windows/Hyper-V code commonly used in the OpenStack - projects: nova, cinder, networking-hyperv. 
The library can be used in any - other OpenStack projects where it is needed. - os-xenapi: |- - XenAPI library for OpenStack projects. - ovsdbapp: |- - A library for writing Open vSwitch OVSDB-based applications. - packaging: |- - Reusable core utilities for various Python Packaging interoperability specifications. - This library provides utilities that implement the interoperability specifications - which have clearly one correct behaviour (eg: PEP 440) or benefit greatly from having - a single shared implementation (eg: PEP 425). - paramiko: |- - Paramiko is a combination of the Esperanto words for "paranoid" and "friend". It is a module - for Python 2.7/3.4+ that implements the SSH2 protocol for secure (encrypted and authenticated) - connections to remote machines. - passlib: |- - Passlib is a password hashing library for Python 2 & 3, which provides - cross-platform implementations of over 30 password hashing algorithms, as well - as a framework for managing existing password hashes. It's designed to be useful - for a wide range of tasks, from verifying a hash found in /etc/shadow, to - providing full-strength password hashing for multi-user applications. - Paste: |- - Paste provides several pieces of "middleware" (or filters) that can be nested - to build web applications. Each piece of middleware uses the WSGI (PEP 333) - interface, and should be compatible with other middleware based on those interfaces. - PasteDeploy: |- - This tool provides code to load WSGI applications and servers from URIs. These - URIs can refer to Python eggs for INI-style configuration files. Paste Script - provides commands to serve applications based on this configuration file. - pathspec: |- - pathspec is a utility library for pattern matching of file paths. So far this - only includes Git's wildmatch pattern matching which itself is derived from - Rsync's wildmatch. Git uses wildmatch for its gitignore files. - pbr: |- - PBR is a library that injects some useful and sensible default behaviors into - your setuptools run. It started off life as the chunks of code that were copied - between all of the OpenStack projects. Around the time that OpenStack hit 18 - different projects each with at least 3 active branches, it seems like a good - time to make that code into a proper re-usable library. - pecan: |- - A WSGI object-dispatching web framework, designed to be lean and fast with few dependencies. - pep257: |- - PEP 257 docstring style checker **pep257*is a static analysis tool for checking - compliance with Python PEP 257 - pep8: |- - pep8 Python style guide checker pep8 is a tool to check your Python code against - some of the style conventions in PEP 8... _PEP 8: architecture: Adding new - checks is easy.Parseable output: Jump to error location in your editor.Small: - Just one Python file, requires only stdlib. You can use just the pep8.py file - for this purpose.Comes with a comprehensive test suite.Installation You can - install, upgrade, uninstall pep8.py with these commands:: $ pip install pep8 $ - pip install --upgrade pep8 $ pip uninstall pep8There's also a package for - Debian/Ubuntu, but it's not always the latest version. - persist-queue: |- - persist-queue A thread-safe, disk-based queue for Python - pexpect: |- - Pexpect is a pure Python module for spawning child applications; controlling - them; and responding to expected patterns in their output. Pexpect works like - Don Libes' Expect. 
Pexpect allows your script to spawn a child application and - control it as if a human were typing commands. - Pexpect can be used for automating interactive applications such as ssh, ftp, - passwd, telnet, etc. It can be used to a automate setup scripts for duplicating - software package installations on different servers. It can be used for - automated software testing. Pexpect is in the spirit of Don Libes' Expect, but - Pexpect is pure Python. - pifpaf: |- - Pifpaf is a suite of fixtures and a command-line tool that allows to start and stop - daemons for a quick throw-away usage. - Pillow: |- - Pillow is the friendly PIL fork by Alex Clark and Contributors. PIL is the Python Imaging - Library by Fredrik Lundh and Contributors. As of 2019, Pillow development is supported by Tidelift. - Pint: |- - Pint is a Python package to define, operate and manipulate physical - quantities: the product of a numerical value and a unit of measurement. - It allows arithmetic operations between them and conversions from and - to different units. - It is distributed with a comprehensive list of physical units, prefixes - and constants. Due to its modular design, you can extend (or even rewrite!) - the complete list without changing the source code. It supports a lot of - numpy mathematical operations **without monkey patching or wrapping numpy**. - It has a complete test coverage. It runs in Python 3.6+ with no other dependency. - If you need Python 2.7 or 3.4/3.5 compatibility, use Pint 0.9. - It is licensed under BSD. - pluggy: |- - A minimalist production ready plugin system.This is the core - framework used by the pytest, tox, and devpi projects. - ply: |- - PLY is an implementation of lex and yacc parsing tools for Python. - Here is a list of its essential features: - * It is implemented entirely in Python. - * It uses LR-parsing which is reasonably efficient and well suited for larger - grammars. - * PLY provides most of the standard lex/yacc features including support - for empty productions, precedence rules, error recovery, and support - for ambiguous grammars. - * PLY is straightforward to use and provides very extensive error checking. - * PLY doesn't try to do anything more or less than provide the basic lex/yacc - functionality. In other words, it's not a large parsing framework or a - component of some larger system. - prettytable: |- - PrettyTable is a simple Python library designed to make it quick and easy to - represent tabular data in visually appealing ASCII tables. It was inspired by - the ASCII tables used in the PostgreSQL shell psql. PrettyTable allows for - selection of which columns are to be printed, independent alignment of columns - (left or right justified or centred) and printing of "sub-tables" by specifying - a row range. - proliantutils: |- - Proliantutils is a set of python utility libraries for interfacing and managing various - components (like iLO, HPSSA) for HPE iLO-based Servers. This library uses Redfish to - interact with Gen10 servers and RIBCL/RIS to interact with Gen8 and Gen9 servers. - A subset of proliantutils can be used to discover server properties (aka Discovery Engine). - prometheus-client: |- - Python client for the Prometheus monitoring system. - psutil: |- - psutil (process and system utilities) is a cross-platform library for retrieving information - on running processes and system utilization (CPU, memory, disks, network, sensors) in Python. 
- It is useful mainly for system monitoring, profiling and limiting process resources and - management of running processes.It implements many functionalities offered by classic UNIX - command line tools such as ps, top, iotop, lsof, netstat, ifconfig, free and others. - ptyprocess: |- - Launch a subprocess in a pseudo terminal (pty), and interact with both the - process and its pty. - py: |- - The py lib is a Python development support library featuring th\ - following tools and modules: - * py.path: uniform local and svn path objects - * py.apipkg: explicit API control and lazy-importing - * py.iniconfig: easy parsing of .ini files - * py.code: dynamic code generation and introspection - pyasn1: |- - Abstract Syntax Notation One (ASN.1) is a technology for exchanging structured data - in a universally understood, hardware agnostic way. Many industrial, security and - telephony applications heavily rely on ASN.1. - The pyasn1 library implements ASN.1 support in pure-Python. - pyasn1-modules: |- - A collection of ASN.1 modules expressed in form of pyasn1 classes. Includes - protocols PDUs definition (SNMP, LDAP etc.) and various data structures - (X.509, PKCS etc.). - pycadf: |- - This library provides an auditing data model based on - the Cloud Auditing Data Federation specification, primarily - for use by OpenStack. The goal is to establish strict expectations - about what auditors can expect from audit notifications. - pycodestyle: |- - pycodestyle (formerly pep8) is a tool to check your Python code against some - of the style conventions in PEP 8. - pycparser: |- - pycparser is a parser for the C language, written in pure Python. - It is a module designed to be easily integrated into applications - that need to parse C source code. - pycryptodomex: |- - PyCryptodome is a self-contained Python package of low-level cryptographic primitives. - pydot: |- - pydot written in pure Python is an interface to Graphviz, can parse and dump into the DOT language used by GraphViz, - and networkx can convert its graphs to pydot. - pydotplus: |- - PyDotPlus is an improved version of the old pydot project that - provides a Python Interface to Graphviz's Dot language. - pyghmi: |- - Pyghmi is a pure Python (mostly IPMI) server management library. - Pygments: |- - Pygments is a generic syntax highlighter suitable for use - in code hosting, forums, wikis or other applications that - need to prettify source code. - pymemcache: |- - A comprehensive, fast, pure Python memcached client - pymongo: |- - The PyMongo distribution contains tools for interacting with - MongoDB database from Python. - PyMongo supports MongoDB 2.6, 3.0, 3.2, 3.4, 3.6, 4.0 and 4.2. - PyMySQL: |- - This package contains a pure-Python MySQL client library, based on PEP 249. - Most public APIs are compatible with mysqlclient and MySQLdb. - PyNaCl: |- - PyNaCl is a Python binding to libsodium, which is a fork of the Networking and Cryptography library. - These libraries have a stated goal of improving usability, security and speed. It supports Python 2.7 - and 3.4+ as well as PyPy 2.6+. - pyngus: |- - Callback API implemented over Proton - pyperclip: |- - Pyperclip is a cross-platform Python module for copy and paste clipboard functions. It works with Python 2 and 3. - pyroute2: |- - Pyroute2 is a pure Python **netlink** library. The core requires only Python - stdlib, no 3rd party libraries. The library was started as an RTNL protocol - implementation, so the name is **pyroute2**, but now it supports many netlink - protocols. 
- pyrsistent: |- - Pyrsistent is a number of persistent collections (by some referred to as functional data structures). Persistent in - the sense that they are immutable. - pysaml2: |- - PySAML2 is a pure python implementation of SAML2. It contains all - necessary pieces for building a SAML2 service provider or an identity - provider. The distribution contains examples of both. Originally - written to work in a WSGI environment there are extensions that allow - you to use it with other frameworks. - pysendfile: |- - sendfile(2) is a system call which provides a "zero-copy" way of copying data - from one file descriptor to another (a socket). - pyserial: |- - Python Serial Port Extension - pysmi: |- - A pure-Python implementation of SNMP/SMI MIB parsing and conversion library. - pysnmp: |- - SNMP v1/v2c/v3 engine and Standard Applications suite written in pure-Python. - Supports Manager/Agent/Proxy roles, Manager/Agent-side MIBs, asynchronous - operation and multiple network transports. - pytest-metadata: |- - pytest plugin for test session metadata - pyudev: |- - This package supports almost all libudev functionality.The lisence - is LGPL.It is a python 2/3 binding to libudev which is a linux - library supporting device management.The usage of pyudev is simple - and you can use it after a quick learning. - rcssmin: |- - The minifier is based on the semantics of the YUI compressor, which itself - is based on the rule list by Isaac Schlueter. - redis: |- - The Python interface to the Redis key-value store. - reno: |- - Reno is a release notes manager designed with high throughput in mind, - supporting fast distributed development teams without introducing additional - development processes. Our goal is to encourage detailed and accurate release - notes for every release. - repoze-lru: |- - It is a LRU (least recently used) cache implementation. It works under - Python 2.7 and Python 3.4+. - requests: |- - Requests is an HTTP library, written in Python, as an alternative - to Python's builtin urllib2 which requires work (even method overrides) - to perform basic tasks. - requestsexceptions: |- - The python requests library bundles the urllib3 library, however, some - software distributions modify requests to remove the bundled library. - This makes some operations, such as supressing the "insecure platform - warning" messages that urllib emits difficult. This is a simple - library to find the correct path to exceptions in the requests library - regardless of whether they are bundled. - requests-mock: |- - Mocked responses for the requests library - restructuredtext-lint: |- - Lint reStructuredText linter files with an API or a CLI. - retrying: |- - Retrying is an Apache 2.0 licensed general-purpose retrying library, written in - Python, to simplify the task of adding retry behavior to just about anything. - retryz: |- - Retry decorator with a bunch of configuration parameters. - rfc3986: |- - A Python implementation of `RFC 3986`_ including validation and authority - parsing. - rjsmin: |- - Docs for rJSmin, which is a javascript minifier written in python. - Routes: |- - Routing Recognition and Generation Tools - capacity: |- - Data types to describe capacity - rtslib-fb: |- - API for generic Linux SCSI kernel target. Includes the 'target' service and targetctl tool for restoring configuration. 
- confetti: |- - Generic configuration mechanism - python-scciclient: |- - Python ServerView Common Command Interface (SCCI) Client Library - flux: |- - Artificial time library - gossip: |- - Signaling and hooking library - vintage: |- - Python library for deprecating code - seqdiag: |- - seqdiag generate sequence-diagram image file from spec-text file. - mitba: |- - Python library for caching results from functions and methods - setuptools: |- - Setuptools is a collection of enhancements to the Python distutils that allow - you to more easily build and distribute Python packages, especially ones that - have dependencies on other packages. - simplegeneric: |- - The package lets you define simple single-dispatch generic functions, akin to Python's built-in - generic functions like len(), iter() and so on. - pact: |- - Promises library in Python - six: |- - Python-six provides simple utilities for wrapping over differences Python 3. - waiting: |- - Utility for waiting for stuff to happen - storage-interfaces: |- - Abstract classes for representing storage-related objects - Sphinx: |- - Sphinx is a tool that makes it easy to create intelligent and - beautiful documentation for Python projects (or other documents - consisting of multiple reStructuredText sources), written by Georg - Brandl. It was originally created to translate the new Python - documentation, but has now been cleaned up in the hope that it will be - useful to many other projects. - sphinxcontrib-actdiag: |- - A sphinx extension for embedding activity diagram using actdiag. - sphinxcontrib-apidoc: |- - sphinx-apidoc is a tool for automatic generation of Sphinx sources - that using the autodoc extension, documents a whole package in the - style of other automatic API documentation tools. sphinx-apidoc does - not actually build documentation - rather it simply generates it. - sphinxcontrib-applehelp: |- - sphinxcontrib-applehelp is a sphinx extension which outputs Apple help books. - sphinxcontrib-blockdiag: |- - A sphinx extension for embedding block diagram using blockdiag. - sphinxcontrib-devhelp: |- - sphinxcontrib-devhelp is a sphinx extension which outputs Devhelp document. - sphinxcontrib-htmlhelp: |- - sphinxcontrib-htmlhelp is a sphinx extension which renders HTML help files. - sphinxcontrib-httpdomain: |- - This contrib extension provides a Sphinx domain for describing HTTP APIs. - sphinxcontrib-issuetracker: |- - Sphinx integration with different issuetrackers - sphinxcontrib-jsmath: |- - sphinxcontrib-jsmath is a sphinx extension which renders display math in HTML - via JavaScript. - sphinxcontrib-pecanwsme: |- - Extension to Sphinx for documenting APIs built with Pecan and WSME - sphinxcontrib-qthelp: |- - sphinxcontrib-qthelp is a sphinx extension which outputs QtHelp document. - sphinxcontrib-seqdiag: |- - A sphinx extension for embedding sequence diagram using seqdiag. - sphinxcontrib-serializinghtml: |- - sphinxcontrib-serializinghtml is a sphinx extension which outputs "serialized" - HTML files (json and pickle). - sphinxcontrib-svg2pdfconverter: |- - This extension converts SVG images to PDF in case the builder does not support SVG images natively (e.g. LaTeX). - sphinx-feature-classification: |- - This is a Sphinx directive that allows creating matrices of drivers - a project contains and which features they support. 
- alabaster: |- - This theme is a modified "Kr" Sphinx theme from @kennethreitz (especially as - used in his Requests project), which was itself originally based on @mitsuhiko's - theme used for Flask & related projects. - sqlalchemy-migrate: |- - Fork from http://code.google.com/p/sqlalchemy-migrate/ to get it working with - SQLAlchemy 0.8. - Inspired by Ruby on Rails' migrations, Migrate provides a way to deal with - database schema changes in `SQLAlchemy `_ projects. - Migrate extends SQLAlchemy to have database changeset handling. It provides a - database change repository mechanism which can be used from the command line as - well as from inside python code. - SQLAlchemy-Utils: |- - Various utility functions and custom data types for SQLAlchemy. - transaction: |- - Transaction management for Python - statsd: |- - statsd is a friendly front-end to Graphite. - glance-tempest-plugin: |- - Tempest plugin tests for Glance. - stevedore: |- - Manage dynamic plugins for Python applications - cassandra-driver: |- - DataStax Driver for Apache Cassandra - sphinxcontrib-autoprogram: |- - Documenting CLI programs - python-swiftclient: |- - This is a python client for the Swift API. There’s a Python API - (the swiftclient module), and a command-line script (swift). - python-zaqarclient: |- - Client Library for OpenStack Zaqar Messaging API - suds-jurko: |- - Lightweight SOAP client (Jurko's fork) - Tempita: |- - Tempita is a small templating language for text substitution. - This isn't meant to be the Next Big Thing in templating; it is just a handy - little templating language for when your project outgrows string.Template - or % substitution. It is small, it embeds Python in strings, and it does not - do much else. - tenacity: |- - Tenacity is a general-purpose retrying library to simplify the task of adding - retry behavior to just about anything. - termcolor: |- - ANSII Color formatting for output in terminal. - testrepository: |- - A repository of test results. - testresources: |- - pyunit extension for managing expensive test resources - testscenarios: |- - testscenarios provides clean dependency injection for python unittest style tests. - This can be used for interface testing (testing many implementations via a single test suite) - or for classic dependency injection (provide tests with dependencies externally to - the test code itself, allowing easy testing in different situations). - sysv-ipc: |- - System V IPC primitives (semaphores, shared memory and message queues) for Python - textfsm: |- - Python module which implements a template based state machine for parsing - semi-formatted text. Originally developed to allow programmatic access to - information returned from the command line interface (CLI) of networking - devices. - tinyrpc: |- - A small, modular, transport and protocol neutral RPC library that, among - other things, supports JSON-RPC and zmq. - toml: |- - TOML aims to be a minimal configuration file format that's easy to read due - to obvious semantics. TOML is designed to map unambiguously to a hash table. - TOML should be easy to parse into data structures in a wide variety of languages. - python-troveclient: |- - This is a client for the OpenStack Trove API. There is a Python API (the troveclient module), - and a command-line script (trove). Each implements 100% of the OpenStack Trove API. - tornado: |- - Tornado is an open source version of the scalable, non-blocking web server and tools. - traceback2: |- - A backport of traceback to older supported Pythons. 
- zake: |- - A python package that works to provide a nice set of testing utilities for - the kazoo library. - unittest2: |- - unittest2 is a backport of the new features added to - the unittest testing framework in Python 2.7 and onwards. - It is tested to run on Python 2.6, 2.7, 3.2, 3.3, 3.4 and pypy. - urllib3: |- - Python3 HTTP module with connection pooling and file POST abilities. - vine: |- - vine - Python Promises - rsa: |- - Python-RSA is a pure-Python RSA implementation. It supports - encryption and decryption, signing and verifying signatures, - and key generation according to PKCS#1 version 1.5. - s3transfer: |- - S3transfer is a Python library for managing Amazon S3 transfers. - scp: |- - The scp.py module uses a paramiko transport to send and recieve files via the - scp1 protocol. This is the protocol as referenced from the openssh scp program, - and has only been tested with this implementation. - warlock: |- - Warlock — self-validating Python objects using JSON schema - wcwidth: |- - This library is mainly for those implementing a Terminal Emulator, or - programs that carefully produce output to be interpreted by one. - POSIX.1-2001 and POSIX.1-2008 conforming systems provide wcwidth(3) - and wcswidth(3) C functions of which this python module's functions - precisely copy. These functions return the number of cells a unicode - string is expected to occupy. - webcolors: |- - a module for working with HTML/CSS color definitions. - SecretStorage: |- - This module provides a way for securely storing passwords and other secrets. - semantic-version: |- - This small python library provides a few tools to handle semantic versioning in Python. - websockify: |- - websockify: WebSockets support for any application/server - WebTest: |- - WebTest helps you test your WSGI-based web applications. This can be - any application that has a WSGI interface, including an application - written in a framework that supports WSGI (which includes most actively - developed Python web frameworks -- almost anything that even nominally - supports WSGI should be testable). - Werkzeug: |- - *werkzeug* German noun: "tool". Etymology: *werk* ("work"), *zeug* ("stuff") - Werkzeug is a comprehensive `WSGI`_ web application library. It began as - a simple collection of various utilities for WSGI applications and has - become one of the most advanced WSGI utility libraries. - wheel: |- - A built-package format for Python. - A wheel is a ZIP-format archive with a specially formatted filename and the - .whl extension. It is designed to contain all the files for a PEP 376 - compatible install in a way that is very close to the on-disk format. - wrapt: |- - The aim of the wrapt module is to provide a transparent object proxy for Python, - which can be used as the basis for the construction of function wrappers and decorator functions. - The wrapt module focuses very much on correctness. It therefore goes way beyond existing mechanisms - such as functools.wraps() to ensure that decorators preserve introspectability, signatures, - type checking abilities etc. The decorators that can be constructed using this module will work in - far more scenarios than typical decorators and provide more predictable and consistent behaviour. - wsgi-intercept: |- - Testing a WSGI application sometimes involves starting a server at a - local host and port, then pointing your test code to that address. 
- Instead, this library lets you intercept calls to any specific host/port - combination and redirect them into a `WSGI application`_ importable by - your test program. Thus, you can avoid spawning multiple processes or - threads to test your Web app. - setproctitle: |- - A Python module to customize the process title - xmltodict: |- - Python module that makes working with XML feel like you are working with JSON - XStatic: |- - The goal of XStatic family of packages is to provide static file packages - with minimal overhead - without selling you some dependencies you don't want. - XStatic has some minimal support code for working with the XStatic-* packages. - XStatic-Angular: |- - Angular JavaScript library packaged for setuptools (easy_install) / pip. - This package is intended to be used by **any** project that needs these files. - It intentionally does **not** provide any extra code except some metadata - **nor** has any extra requirements. You MAY use some minimal support code from - the XStatic base package, if you like. - You can find more info about the xstatic packaging way in the package `XStatic`. - XStatic-Angular-Bootstrap: |- - Angular-Bootstrap JavaScript library packaged for setuptools (easy_install) / pip. - This package is intended to be used by **any** project that needs these files. - It intentionally does **not** provide any extra code except some metadata - **nor** has any extra requirements. You MAY use some minimal support code from - the XStatic base package, if you like. - You can find more info about the xstatic packaging way in the package `XStatic`. - XStatic-Angular-FileUpload: |- - Angular-FileUpload JavaScript library packaged for setuptools (easy_install) / pip. - This package is intended to be used by **any** project that needs these files. - It intentionally does **not** provide any extra code except some metadata - **nor** has any extra requirements. You MAY use some minimal support code from - the XStatic base package, if you like. - You can find more info about the xstatic packaging way in the package `XStatic`. - XStatic-Angular-Gettext: |- - Angular-Gettext javascript library packaged for setuptools (easy_install) / pip. - This package is intended to be used by **any** project that needs these files. - It intentionally does **not** provide any extra code except some metadata - **nor** has any extra requirements. You MAY use some minimal support code from - the XStatic base package, if you like. - You can find more info about the xstatic packaging way in the package `XStatic`. - XStatic-Angular-lrdragndrop: |- - lrDragNDrop javascript library packaged for setuptools (easy_install) / pip. - XStatic-Angular-Schema-Form: |- - Angular JavaScript library packaged for setuptools (easy_install) / pip. - This package is intended to be used by **any** project that needs these files. - It intentionally does **not** provide any extra code except some metadata - **nor** has any extra requirements. You MAY use some minimal support code from - the XStatic base package, if you like. - You can find more info about the xstatic packaging way in the package `XStatic`. - XStatic-Bootstrap-Datepicker: |- - Bootstrap-Datepicker JavaScript library packaged for setuptools (easy_install) / pip. - This package is intended to be used by **any** project that needs these files. - It intentionally does **not** provide any extra code except some metadata - **nor** has any extra requirements. You MAY use some minimal support code from - the XStatic base package, if you like. 
- You can find more info about the xstatic packaging way in the package `XStatic`. - XStatic-Bootstrap-SCSS: |- - Bootstrap style library packaged for setuptools (easy_install) / pip. - This package is intended to be used by **any** project that needs these files. - It intentionally does **not** provide any extra code except some metadata - **nor** has any extra requirements. You MAY use some minimal support code from - the XStatic base package, if you like. - You can find more info about the xstatic packaging way in the package `XStatic`. - XStatic-bootswatch: |- - bootswatch javascript library packaged for setuptools (easy_install) / pip. - This package is intended to be used by **any** project that needs these files. - It intentionally does **not** provide any extra code except some metadata - **nor** has any extra requirements. You MAY use some minimal support code from - the XStatic base package, if you like. - You can find more info about the xstatic packaging way in the package - `XStatic`. - XStatic-D3: |- - D3 JavaScript library packaged for setuptools (easy_install) / pip. - This package is intended to be used by **any** project that needs these files. - It intentionally does **not** provide any extra code except some metadata - **nor** has any extra requirements. You MAY use some minimal support code from - the XStatic base package, if you like. - You can find more info about the xstatic packaging way in the package `XStatic`. - XStatic-Font-Awesome: |- - Font Awesome icons packaged for setuptools (easy_install) / pip. - This package is intended to be used by **any** project that needs these files. - It intentionally does **not** provide any extra code except some metadata - **nor** has any extra requirements. You MAY use some minimal support code from - the XStatic base package, if you like. - You can find more info about the xstatic packaging way in the package `XStatic`. - XStatic-Hogan: |- - Hogan JavaScript library packaged for setuptools (easy_install) / pip. - This package is intended to be used by **any** project that needs these files. - It intentionally does **not** provide any extra code except some metadata - **nor** has any extra requirements. You MAY use some minimal support code from - the XStatic base package, if you like. - You can find more info about the xstatic packaging way in the package `XStatic`. - XStatic-Jasmine: |- - Jasmine JavaScript library packaged for setuptools (easy_install) / pip. - This package is intended to be used by **any** project that needs these files. - It intentionally does **not** provide any extra code except some metadata - **nor** has any extra requirements. You MAY use some minimal support code from - the XStatic base package, if you like. - You can find more info about the xstatic packaging way in the package `XStatic`. - XStatic-jQuery: |- - jQuery javascript library packaged for setuptools (easy_install) / pip. - This package is intended to be used by **any** project that needs these files. - It intentionally does **not** provide any extra code except some metadata - **nor** has any extra requirements. You MAY use some minimal support code from - the XStatic base package, if you like. - You can find more info about the xstatic packaging way in the package `XStatic`. - XStatic-JQuery.quicksearch: |- - JQuery.quicksearch JavaScript library packaged for setuptools (easy_install)/pip. - This package is intended to be used by **any** project that needs these files. 
- It intentionally does **not** provide any extra code except some metadata - **nor** has any extra requirements. You MAY use some minimal support code from - the XStatic base package, if you like. - You can find more info about the xstatic packaging way in the package `XStatic`. - XStatic-JQuery.TableSorter: |- - JQuery.TableSorter JavaScript library packaged for setuptools (easy_install) / pip. - This package is intended to be used by **any** project that needs these files. - It intentionally does **not** provide any extra code except some metadata - **nor** has any extra requirements. You MAY use some minimal support code from - the XStatic base package, if you like. - You can find more info about the xstatic packaging way in the package `XStatic`. - XStatic-JQuery-Migrate: |- - JQuery-Migrate JavaScript library packaged for setuptools (easy_install) / pip. - This package is intended to be used by **any** project that needs these files. - It intentionally does **not** provide any extra code except some metadata - **nor** has any extra requirements. You MAY use some minimal support code from - the XStatic base package, if you like. - You can find more info about the xstatic packaging way in the package `XStatic`. - XStatic-jquery-ui: |- - jquery-ui javascript library packaged for setuptools (easy_install) / pip. - This package is intended to be used by **any** project that needs these files. - It intentionally does **not** provide any extra code except some metadata - **nor** has any extra requirements. You MAY use some minimal support code from - the XStatic base package, if you like. - You can find more info about the xstatic packaging way in the package `XStatic`. - XStatic-JSEncrypt: |- - JSEncrypt JavaScript library packaged for setuptools (easy_install) / pip. - This package is intended to be used by **any** project that needs these files. - It intentionally does **not** provide any extra code except some metadata - **nor** has any extra requirements. You MAY use some minimal support code from - the XStatic base package, if you like. - You can find more info about the xstatic packaging way in the package `XStatic`. - XStatic-mdi: |- - mdi javascript library packaged for setuptools (easy_install) / pip. - This package is intended to be used by **any** project that needs these files. - It intentionally does **not** provide any extra code except some metadata - **nor** has any extra requirements. You MAY use some minimal support code from - the XStatic base package, if you like. - You can find more info about the xstatic packaging way in the package - `XStatic`. - XStatic-objectpath: |- - Angular JavaScript library packaged for setuptools (easy_install) / pip. - This package is intended to be used by **any** project that needs these files. - It intentionally does **not** provide any extra code except some metadata - **nor** has any extra requirements. You MAY use some minimal support code from - the XStatic base package, if you like. - You can find more info about the xstatic packaging way in the package `XStatic`. - XStatic-Rickshaw: |- - Rickshaw JavaScript library packaged for setuptools (easy_install) / pip. - This package is intended to be used by **any** project that needs these files. - It intentionally does **not** provide any extra code except some metadata - **nor** has any extra requirements. You MAY use some minimal support code from - the XStatic base package, if you like. - You can find more info about the xstatic packaging way in the package `XStatic`. 
- XStatic-smart-table: |- - smart-table javascript library packaged for setuptools (easy_install) / pip. - This package is intended to be used by **any** project that needs these files. - It intentionally does **not** provide any extra code except some metadata - **nor** has any extra requirements. You MAY use some minimal support code from - the XStatic base package, if you like. - You can find more info about the xstatic packaging way in the package - `XStatic`. - XStatic-Spin: |- - Spin JavaScript library packaged for setuptools (easy_install) / pip. - This package is intended to be used by **any** project that needs these files. - It intentionally does **not** provide any extra code except some metadata - **nor** has any extra requirements. You MAY use some minimal support code from - the XStatic base package, if you like. - You can find more info about the xstatic packaging way in the package `XStatic`. - XStatic-term.js: |- - term.js javascript library packaged for setuptools (easy_install) / pip. - * term.js project: `Github chjj/term.js `_ - * XStatic package: `Github takluyver/XStatic-termjs `_ - This package is intended to be used by **any** project that needs these files. - It intentionally does **not** provide any extra code except some metadata - **nor** has any extra requirements. You MAY use some minimal support code from - the XStatic base package, if you like. - You can find more info about the xstatic packaging way in the package `XStatic`. - XStatic-tv4: |- - Angular JavaScript library packaged for setuptools (easy_install) / pip. - This package is intended to be used by **any** project that needs these files. - It intentionally does **not** provide any extra code except some metadata - **nor** has any extra requirements. You MAY use some minimal support code from - the XStatic base package, if you like. - You can find more info about the xstatic packaging way in the package `XStatic`. - xvfbwrapper: |- - Development documents and examples for xvfbwrapper - yappi: |- - Yappi, Yet Another Python Profiler, provides multithreading and cpu-time - support to profile python programs. - yaql: |- - YAQL (Yet Another Query Language) is an embeddable and extensible query - language, that allows performing complex queries against arbitrary objects. It - has a vast and comprehensive standard library of frequently used querying - functions and can be extend even further with user-specified functions. YAQL is - written in python and is distributed via PyPI. - simplejson: |- - Simple, fast, extensible JSON encoder/decoder for Python - zeroconf: |- - Pure Python Multicast DNS Service Discovery Library (Bonjour/Avahi compatible) - zipp: |- - A pathlib-compatible Zipfile object wrapper. A backport of the Path object. - zope-interface: |- - This package is intended to be independently reusable in any Python project. - It is maintained by the Zope Toolkit project. - This package provides an implementation of "object interfaces" for Python. - Interfaces are a mechanism for labeling objects as conforming to a given - API or contract. So, this package can be considered as implementation of - the Design By Contract methodology support in Python. - zstd: |- - Simple Python bindings for the Zstd compression library. - pytz: |- - pytz brings the Olson tz database into Python. This library allows - accurate and cross platform timezone calculations using Python 2.4 - or higher. 
It also solves the issue of ambiguous times at the end - of daylight saving time, which you can read more about in the - Python Library Reference (datetime.tzinfo). - pywbem: |- - A WBEM client allows issuing operations to a WBEM server, using the CIM (Common - Information Model) operations over HTTP (CIM-XML) protocol defined in the DMTF - standards DSP0200 and DSP0201. The CIM/WBEM infrastructure is used for a wide - variety of systems management tasks supported by systems running WBEM servers. - A WBEM indication listener allows receiving indications generated by a WBEM server. - PyYAML: |- - YAML is a data serialization format designed for human readability and - interaction with scripting languages. PyYAML is a YAML parser and emitter for - Python. - PyYAML features a complete YAML 1.1 parser, Unicode support, pickle support, - capable extension API, and sensible error messages. PyYAML supports standard - YAML tags and provides Python-specific tags that allow to represent an - arbitrary Python object. - PyYAML is applicable for a broad range of tasks from complex configuration - files to object serialization and persistence. - python-qpid-proton: |- - Proton is a high performance, lightweight messaging library. It can be used in - the widest range of messaging applications including brokers, client libraries, - routers, bridges, proxies, and more. Proton makes it trivial to integrate with - the AMQP 1.0 ecosystem from any platform, environment, or language. - python-subunit: |- - Subunit C bindings. See the python-subunit package for test processing - functionality. - thrift: |- - The thrift-qt package contains Qt bindings for thrift. - sphinxcontrib-programoutput: |- - Sphinx extension to include program output - typing-extensions: |- - Typing Extensions - Backported and Experimental Type Hints for Python. - The typing module was added to the standard library in Python 3.5, but many new features have been added to the module since then. - This means users of Python 3.5 - 3.6 who are unable to upgrade will not be able to take advantage of new types added to the typing - module, such as typing.Protocol or typing.TypedDict. - The typing_extensions module contains backports of these changes. Experimental types that will eventually be added to the typing - module are also included in typing_extensions, such as typing.ParamSpec and typing.TypeGuard. - pytest-html: |- - pytest plugin for generating HTML reports - krest: |- - The Kaminario REST (krest) is a client library that provides ORM like interface for working with Kaminario K2 REST API - python-rsdclient: |- - OpenStack client plugin for Rack Scale Design - storpool.spopenstack: |- - OpenStack helpers for the StorPool API - storpool: |- - Bindings for the StorPool distributed storage API - confget: |- - confget parse configuration files The confget library parses configuration files - (currently INI-style files and CGI QUERY_STRING environment variable) and allows - a program to use the values defined in them. It provides various options for - selecting the variable names and values to return and the configuration file - sections to fetch them from.The confget library may also be used as a command- - line tool with the same interface as the C implementation.The confget library is - fully typed.Specifying configuration values for the backends The confget. 
- cinder-tempest-plugin: |- - Tempest Integration for Cinder This directory contains additional Cinder - tempest tests.See the tempest plugin docs for information on using it: run all - tests from this plugin, install cinder into your environment. Then from the - tempest directory run:: $ tox -e all -cinder_tempest_plugin It is expected that - Cinder third party CI's use the all tox environment above for all test runs. - Developers can also use this locally to perform more extensive testing.Any - typical devstack instance should be able to run all Cinder plugin tests. - ironic-tempest-plugin: |- - This repository contains a Tempest plugin for OpenStack Bare Metal and Bare - Metal Introspection projects. - scripttest: |- - scripttest is a library to help you test your interactive command-line - applications. With it you can easily run the command (in a subprocess) - and see the output (stdout, stderr) and any file modifications. - python-octaviaclient: |- - Team and repository tags .. Change things from this point onpython-octaviaclient - Octavia client for OpenStack Load BalancingThis is an OpenStack Client (OSC) - plugin for Octavia, an OpenStack Load Balancing project.For more information - about Octavia see: more information about the OpenStack Client see: software: - Apache license Documentation: . - soupsieve: |- - Soup Sieve is a CSS selector library designed to be used with Beautiful Soup 4. - It aims to provide selecting, matching, and filtering using modern CSS selectors. - Soup Sieve currently provides selectors from the CSS level 1 specifications up - through the latest CSS level 4 drafts and beyond (though some are not yet implemented). - pyxcli: |- - IBM Python XCLI Client for Spectrum Accelerate Storage Family. - bunch: |- - Bunch is a dictionary that supports attribute-style access, a la JavaScript. - python-saharaclient: |- - This is a client for the OpenStack Sahara API. - xmlschema: |- - An XML Schema validator and decoder - elementpath: |- - XPath 1.0/2.0 parsers and selectors for ElementTree and lxml - pyeclib: |- - This library provides a simple Python interface for implementing erasure - codes. A number of back-end implementations is supported either directly - or through the C interface liberasurecode. - storops: |- - StorOps: The Python Library for VNX & Unity. - python-zunclient: |- - This is a client library for Zun built on the Zun API. - python-manilaclient: |- - This is a client for the OpenStack Manila API. - ntc-templates: |- - TextFSM Templates for Network Devices, and Python wrapper for TextFSM's CliTable. - mypy-extensions: |- - The "mypy_extensions" module defines experimental extensions to the standard - "typing" module that are supported by the mypy typechecker. - typed-ast: |- - It is a Python 3 package that provides a Python 2.7 and Python 3 parser similar to the standard ast library. - Unlike ast, the parsers in typed_ast include PEP 484 type comments and are independent of the version of Python - under which they are run. The typed_ast parsers produce the standard Python AST (plus type comments), and are - both fast and correct, as they are based on the CPython 2.7 and 3.7 parsers. typed_ast runs on CPython 3.5-3.8 - on Linux, OS X and Windows. - moto: |- - Moto is a library that allows your tests to easily mock out AWS Services. - responses: |- - A utility library for mocking out the requests Python library. 
- flake8-logging-format: |- - Flake8 extension to validate (lack of) logging format strings - nocasedict: |- - A case-insensitive ordered dictionary for Python. - nocaselist: |- - A case-insensitive list for Python. - yamlloader: |- - This module provides loaders and dumpers for PyYAML. - infi.dtypes.wwn: |- - Datatype for WWN - python-ibmcclient: |- - python-ibmcclient is a Python library to communicate with HUAWEI iBMC based systems. - whereto: |- - whereto is an app for testing redirect rules like what may appear in a .htaccess file - for Apache. It provides a way to test those rules in CI jobs. - python-muranoclient: |- - Murano Project introduces an application catalog, which allows application developers and - cloud administrators to publish various cloud-ready applications in a browsable categorised - catalog, which may be used by the cloud users (including the inexperienced ones) to pick-up - the needed applications and services and composes the reliable environments out of them in - a "push-the-button" manner. - yamllint: |- - A linter for YAML files. - yamllint does not only check for syntax validity, but for weirdnesses like key - repetition and cosmetic problems such as lines length, trailing spaces, indentation, etc. - pypowervm: |- - pypowervm provides a Python-based API wrapper for interaction with IBM PowerVM-based systems. - pytest-django: |- - Welcome to pytest-django! pytest-django allows you to test your Django - project/applications with the pytest testing tool - selenium: |- - Selenium Client Driver Introduction Python language bindings for Selenium - WebDriver.The selenium package is used to automate web browser interaction from - Python. - rsd-lib: |- - rsd-lib Extended Sushy library for Rack Scale DesignThis library extends the - existing Sushy library to include functionality for Intel RackScale Design - enabled hardware. - sushy-oem-idrac: |- - Dell EMC OEM extension for sushy Sushy is a client [library]( designed to - communicate with [Redfish]( based BMC.Redfish specification offers extensibility - mechanism to let hardware vendors introduce their own features with the common - Redfish framework. At the same time, sushy supports extending its data model by - loading extensions found within its "oem" namespace.The sushy-oem-idrac package - is a sushy extension package that aims at adding high-level hardware management - abstractions, that are specific to Dell EMC BMC (which is known under the name - of iDRAC), to the tree of sushy Redfish resources. - keystone-tempest-plugin: |- - Tempest plugin for functional testing of keystone's LDAP and federation - features. More information can be found in the keystone developer documentation. - dfs-sdk: |- - Datera Python SDK IntroductionThis is Python SDK version v1.2 for the - **Datera*Fabric Services API. - python-3parclient: |- - HPE 3PAR REST Client This is a Client library that can talk to the HPE 3PAR - Storage array. The 3PAR storage array has a REST web service interface and a - command line interface. This client library implements a simple interface for - talking with either interface, as needed. The python Requests library is used to - communicate with the REST interface. - proboscis: |- - Proboscis is a Python test framework that extends Python's built-in unittest - module and Nose with features from TestNG. - murano-pkg-check: |- - Team and repository tags .. 
Change things from this point on murano-pkg-check - Murano package validator toolAfter checking out tool from repository easiest - method to run tool - python-senlinclient: |- - Plugin for Senlin Clustering Service This is a client library for Senlin built - on the Senlin clustering API. It provides a plugin for the openstackclient - command-line tool. - purestorage: |- - Pure Storage FlashArray REST 1.X SDK This library is designed to provide a - simple interface for issuing commands to a Pure Storage FlashArray using a REST - API. It communicates with the array using the python requests HTTP library. - gnocchiclient: |- - gnocchiclient Python bindings to the Gnocchi APIThis is a client for Gnocchi - API - ujson: |- - UltraJSON is an ultra fast JSON encoder and decoder written in pure C with bindings for Python 3.6+ - python-lefthandclient: |- - HPE LeftHand/StoreVirtual HTTP REST Client - sphinx-testing: |- - sphinx-testing provides testing utility classes and functions for Sphinx extensions - opentracing: |- - This library is a Python platform API for OpenTracing. - threadloop: |- - Tornado IOLoop Backed Concurrent Futures. - python-watcherclient: |- - Python client library for Watcher API. - nodeenv: |- - Node.js virtual environment builder. - python-binary-memcached: |- - A pure python module to access memcached via its binary protocol with SASL auth support. - The main purpose of this module it to be able to communicate with memcached using binary protocol - and support authentication, so it can work with Heroku for example. - uhashring: |- - uhashring implements consistent hashing in pure Python. - os-api-ref: |- - Sphinx Extensions to support API reference sites in OpenStack - trove-tempest-plugin: |- - Tempest plugin for Trove Project It contains tempest tests for Trove project. - infi.dtypes.iqn: |- - IQN datatype in Python Datatype for iSCSI IQN in Python. - pre-commit: |- - A framework for managing and maintaining multi-language pre-commit hooks. - tempest-lib: |- - OpenStack Functional Testing Library - infinisdk: |- - Infinidat API SDK - api-object-schema: |- - Utilities for defining schemas of Pythonic objects interacting with external APIs - sentinels: |- - The sentinels module is a small utility providing the Sentinel class, along - with useful instances. - arrow: |- - Arrow: Better dates & times for Python. - smmap: |- - A pure Python implementation of a sliding window memory map manager - snowballstemmer: |- - This module uses it to accelerate for PyStemmer. It includes following language - algorithms: Danish, Dutch, English(Standard, Porter), Finnish, French, German, - Hungarian, Italian, Norwegian, Portuguese, Romanian, Russian, Spanish, Swedish, - Turkis. - sortedcontainers: |- - Sorted Containers is an Apache2 licensed sorted collections library, - written in pure-Python, and fast as C-extensions. - SQLAlchemy: |- - SQLAlchemy is an Object Relational Mapper (ORM) that provides a flexible, - high-level interface to SQL databases. It contains a powerful mapping layer - that users can choose to work as automatically or as manually, determining - relationships based on foreign keys or to bridge the gap between database - and domain by letting you define the join conditions explicitly. - sqlparse: |- - A non-validating SQL parser. 
- stestr: |- - A parallel Python test runner built around subunit - sushy: |- - Sushy is a small Python library to communicate with Redfish based systems - tabulate: |- - Pretty-print tabular data in Python, a library and a command-line - utility. - taskflow: |- - A library to do [jobs, tasks, flows] in a highly available, easy to understand - and declarative manner (and more!) to be used with OpenStack and other projects. - URLObject: |- - URLObject is a utility class for manipulating URLs. - testtools: |- - Testtools is a set of extensions to the Python standard library's unit testing framework. These - extensions have been derived from years of experience with unit testing in Python and come from - many different sources. - tooz: |- - The Tooz project aims at centralizing the most common distributed primitives - like group membership protocol, lock service and leader election by providing - a coordination API helping developers to build distributed applications. - virtualenv: |- - Virtualenv is a tool to create isolated Python environments. Since Python - 3.3, a subset of it has been integrated into the standard library under - the venv module. Note though, that the venv module does not offer all - features of this library (e.g. cannot create bootstrap scripts, cannot - create virtual environments for other python versions than the host python, - not relocatable, etc.). Tools in general as such still may prefer using - virtualenv for its ease of upgrading (via pip), unified handling of different - Python versions and some more advanced features. - voluptuous: |- - Voluptuous, despite the name, is a Python data validation library. - It is primarily intended for validating data coming into Python as - JSON, YAML, etc. - waitress: |- - Waitress is meant to be a production-quality pure-Python WSGI server - with very acceptable performance. It has no dependencies except ones - which live in the Python standard library. It runs on CPython on Unix - and Windows under Python 2.7+ and Python 3.5+. It is also known to run - on PyPy 1.6.0+ on UNIX. It supports HTTP/1.0 and HTTP/1.1. - WebOb: |- - WebOb provides wrappers around the WSGI request environment, - and an object to help create WSGI responses. The objects map - much of the specified behavior of HTTP, including header parsing - and accessors for other standard parts of the environment. - websocket-client: |- - websocket-client module is WebSocket client for python. - This provide the low level APIs for WebSocket. All APIs - are the synchronous functions. - websocket-client supports only hybi-13. - WSME: |- - Web Services Made Easy (WSME) simplify the writing of REST APIs, and extend - them with additional protocols. - pure-sasl: |- - pure-sasl is a pure python client-side SASL implementation. - At the moment, it supports the following mechanisms: ANONYMOUS, PLAIN, - EXTERNAL, CRAM-MD5, DIGEST-MD5, and GSSAPI. Support for other mechanisms - may be added in the future. Only GSSAPI supports a QOP higher than auth. - Always use TLS! Both Python 2 and Python 3 are supported. - Pympler: |- - Pympler is a development tool to measure, monitor and analyze the memory - behavior of Python objects in a running Python application.By pympling a Python - application, detailed insight in the size and the lifetime of Python objects can - be obtained. Undesirable or unexpected runtime behavior like memory bloat and - other "pymples" can easily be identified.Pympler integrates three previously - separate projects into a single, comprehensive profiling tool. 
Asizeof provides - basic size information for one or several Python objects, muppy is used for on- - line monitoring of a Python application and the class tracker provides off-line - analysis of the lifetime of selected Python objects. - codecov: |- - Hosted coverage reports for GitHub, Bitbucket and Gitlab - pep517: |- - Wrappers to build Python packages using PEP 517 hooks - pylama: |- - Code audit tool for Python and JavaScript. - memory-profiler: |- - Memory Profiler is a python module for monitoring memory consumption of a - process as well as line-by-line analysis of memory consumption for python - programs. It is a pure python module which depends on the psutil < module. - cotyledon: |- - Cotyledon provides a framework for defining long-running services. - pytimeparse: |- - A small Python library to parse various kinds of time expressions - xattr: |- - Extended attributes extend the basic attributes of files and directories in the file system. - They are stored as name:data pairs associated with file system objects (files, directories, symlinks, etc). - setuptools-rust: |- - Setuptools helpers for Rust Python extensions. Compile and distribute Python - extensions written in Rust as easily as if they were written in C. - liberasurecode: |- - An API library for Erasure Code, written in C. It provides a number - of pluggable backends, such as Intel ISA-L library. - repoze.sphinx.autointerface: |- - Sphinx extension: auto-generates API docs from Zope interfaces - Cython: |- - Cython is a language that makes writing C extensions - for Python as easy as Python itself. - python-psycopg2: |- - Psycopg is the most popular PostgreSQL adapter for the Python - programming language. Its core is a complete implementation of the Python DB - API 2.0 specifications. Several extensions allow access to many of the - features offered by PostgreSQL. - ansible: |- - Ansible is a radically simple model-driven configuration management, - multi-node deployment, and remote task execution system. Ansible works - over SSH and does not require any software or daemons to be installed - on remote nodes. Extension modules can be written in any language and - are transferred to managed machines automatically. - python-scrypt: |- - Scrypt is useful when encrypting password as it is possible to specify a - minimum amount of time to use when encrypting and decrypting. If, for example, - a password takes 0.05 seconds to verify, a user won’t notice the slight delay - when signing in, but doing a brute force search of several billion passwords - will take a considerable amount of time. This is in contrast to more traditional - hash functions such as MD5 or the SHA family which can be implemented extremely - fast on cheap hardware. - lz4: |- - LZ4 Bindings for Python - breathe: |- - Breathe is an extension to reStructuredText and Sphinx to be able to read and - render Doxygen xml output. - dib-utils: |- - These tools were originally part of the diskimage-builder project, but they - have uses outside of that project as well. Because disk space is at a premium - in base cloud images, pulling in all of diskimage-builder and its dependencies - just to use something like dib-run-parts is not desirable. This project allows - consumers to use the tools while pulling in only one small package with few/no - additional dependencies. - unicodecsv: |- - The unicodecsv is a drop-in replacement for Python 2.7’s csv module which - supports unicode strings without a hassle. 
Supported versions are python 2.7, - 3.3, 3.4, 3.5, and pypy 2.4.0. diff --git a/tools/oos/etc/inventory/all_in_one.yaml b/tools/oos/etc/inventory/all_in_one.yaml deleted file mode 100644 index bd84d542cc1439584ce7fd339910beeeac045910..0000000000000000000000000000000000000000 --- a/tools/oos/etc/inventory/all_in_one.yaml +++ /dev/null @@ -1,64 +0,0 @@ -all: - hosts: - controller: - ansible_host: - ansible_ssh_private_key_file: - ansible_ssh_user: root - vars: - mysql_root_password: root - mysql_project_password: root - rabbitmq_password: root - project_identity_password: root - enabled_service: - - keystone - - neutron - - cinder - - placement - - nova - - glance - - horizon - - aodh - - ceilometer - - cyborg - - gnocchi - - kolla - - heat - - swift - - trove - # - rally - - tempest - neutron_provider_interface_name: br-ex - default_ext_subnet_range: 10.100.100.0/24 - default_ext_subnet_gateway: 10.100.100.1 - neutron_dataplane_interface_name: eth1 - cinder_block_device: vdb - swift_storage_devices: - - vdc - swift_hash_path_suffix: ash - swift_hash_path_prefix: has - glance_api_workers: 2 - cinder_api_workers: 2 - nova_api_workers: 2 - nova_metadata_api_workers: 2 - nova_conductor_workers: 2 - nova_scheduler_workers: 2 - neutron_api_workers: 2 - children: - compute: - hosts: controller - storage: - hosts: controller - network: - hosts: controller - vars: - test-key: test-value - dashboard: - hosts: controller - vars: - allowed_host: '*' - kolla: - hosts: controller - vars: - # We add openEuler OS support for kolla in OpenStack Queens/Rocky release - # Set this var to true if you want to use it in Q/R - openeuler_plugin: false diff --git a/tools/oos/etc/inventory/cluster.yaml b/tools/oos/etc/inventory/cluster.yaml deleted file mode 100644 index e742a5f38760c818633de47f934b7b507d32dd3a..0000000000000000000000000000000000000000 --- a/tools/oos/etc/inventory/cluster.yaml +++ /dev/null @@ -1,80 +0,0 @@ -all: - hosts: - controller: - ansible_host: - ansible_ssh_private_key_file: - ansible_ssh_user: root - compute01: - ansible_host: - ansible_ssh_private_key_file: - ansible_ssh_user: root - compute02: - ansible_host: - ansible_ssh_private_key_file: - ansible_ssh_user: root - vars: - mysql_root_password: root - mysql_project_password: root - rabbitmq_password: root - project_identity_password: root - enabled_service: - - keystone - - neutron - - cinder - - placement - - nova - - glance - - horizon - - aodh - - ceilometer - - cyborg - - gnocchi - - kolla - - heat - - swift - - trove - # - rally - - tempest - neutron_provider_interface_name: br-ex - default_ext_subnet_range: 10.100.100.0/24 - default_ext_subnet_gateway: 10.100.100.1 - neutron_dataplane_interface_name: eth1 - cinder_block_device: vdb - swift_storage_devices: - - vdc - swift_hash_path_suffix: ash - swift_hash_path_prefix: has - glance_api_workers: 2 - cinder_api_workers: 2 - nova_api_workers: 2 - nova_metadata_api_workers: 2 - nova_conductor_workers: 2 - nova_scheduler_workers: 2 - neutron_api_workers: 2 - children: - compute: - children: - compute1: - hosts: compute01 - compute2: - hosts: compute02 - storage: - children: - storage1: - hosts: compute01 - storage2: - hosts: compute02 - network: - hosts: controller - vars: - test-key: test-value - dashboard: - hosts: controller - vars: - allowed_host: '*' - kolla: - hosts: controller - vars: - # We add openEuler OS support for kolla in OpenStack Queens/Rocky release - # Set this var to true if you want to use it in Q/R - openeuler_plugin: false diff --git 
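The `all_in_one.yaml` and `cluster.yaml` inventories above drive the oos Ansible deployment; the connection fields (`ansible_host`, `ansible_ssh_private_key_file`, `ansible_ssh_user`) are intentionally left blank and must be filled in before a run. A minimal sketch of a pre-flight check, assuming PyYAML is available and the inventory path is passed on the command line (the field and file names come from the inventories above; the check itself is only illustrative, not part of the oos tool):

```python
#!/usr/bin/python3
# Illustrative only: flag hosts in an oos inventory (e.g.
# tools/oos/etc/inventory/all_in_one.yaml) whose connection fields
# are still empty before handing the file to ansible-playbook.
import sys
import yaml  # PyYAML, described in the package list above

REQUIRED = ("ansible_host", "ansible_ssh_private_key_file", "ansible_ssh_user")

def check_inventory(path):
    with open(path) as f:
        data = yaml.safe_load(f)
    hosts = (data.get("all", {}) or {}).get("hosts") or {}
    missing = []
    for host, host_vars in hosts.items():
        for key in REQUIRED:
            if not (host_vars or {}).get(key):
                missing.append(f"{host}: {key}")
    return missing

if __name__ == "__main__":
    problems = check_inventory(sys.argv[1])
    if problems:
        print("Incomplete inventory entries:\n  " + "\n  ".join(problems))
        sys.exit(1)
    print("All hosts define the required connection settings.")
```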
a/tools/oos/etc/inventory/oos_inventory.j2 b/tools/oos/etc/inventory/oos_inventory.j2 deleted file mode 100644 index ec35e64e03ecd978e629ca8fa538ea841df9b881..0000000000000000000000000000000000000000 --- a/tools/oos/etc/inventory/oos_inventory.j2 +++ /dev/null @@ -1,123 +0,0 @@ -{ - "_meta": { - "hostvars": { - {% if oos_env_type == 'cluster' %} - "compute01": { - "ansible_host": "{{ compute01_ip }}" - }, - "compute02": { - "ansible_host": "{{ compute02_ip }}" - }, - {% endif %} - "controller": { - "ansible_host": "{{ controller_ip }}", - "horizon_allowed_host": "{{ horizon_allowed_host }}", - "kolla_openeuler_plugin": "{{ kolla_openeuler_plugin }}" - } - } - }, - "all": { - "children": [ - "compute", - "dashboard", - "kolla", - "network", - "storage", - "ungrouped" - ], - "vars": { - "ansible_python_interpreter": "/usr/bin/python3", - "cinder_block_device": "{{ cinder_block_device }}", - "default_ext_subnet_gateway": "{{ default_ext_subnet_gateway }}", - "default_ext_subnet_range": "{{ default_ext_subnet_range }}", - "enabled_service": [ - {% for service in enabled_service %} - "{{ service }}"{% if not loop.last %},{% endif %} - {% endfor %} - ], - "mysql_project_password": "{{ mysql_project_password }}", - "mysql_root_password": "{{ mysql_root_password }}", - "neutron_dataplane_interface_name": "{{ neutron_dataplane_interface_name }}", - "neutron_provider_interface_name": "{{ neutron_provider_interface_name }}", - "project_identity_password": "{{ project_identity_password }}", - "rabbitmq_password": "{{ rabbitmq_password }}", - "swift_hash_path_prefix": "{{ swift_hash_path_prefix }}", - "swift_hash_path_suffix": "{{ swift_hash_path_suffix }}", - "swift_storage_devices": [ - {% for device in swift_storage_devices %} - "{{ device }}"{% if not loop.last %},{% endif %} - {% endfor %} - ], - "openstack_release": "{{ openstack_release }}", - "oos_env_type": "{{ oos_env_type }}", - "keypair_dir": "{{ keypair_dir }}", - "cinder_api_workers": "{{ cinder_api_workers }}", - "glance_api_workers": "{{ glance_api_workers }}", - "neutron_api_workers": "{{ neutron_api_workers }}", - "nova_api_workers": "{{ nova_api_workers }}", - "nova_metadata_api_workers": "{{ nova_metadata_api_workers }}", - "nova_conductor_workers": "{{ nova_conductor_workers }}", - "nova_scheduler_workers": "{{ nova_scheduler_workers }}" - } - }, - {% if oos_env_type == 'cluster' %} - "compute": { - "children": [ - "compute1", - "compute2" - ] - }, - "compute1": { - "hosts": [ - "compute01" - ] - }, - "compute2": { - "hosts": [ - "compute02" - ] - }, - "storage": { - "children": [ - "storage1", - "storage2" - ] - }, - "storage1": { - "hosts": [ - "compute01" - ] - }, - "storage2": { - "hosts": [ - "compute02" - ] - }, - {% else %} - "compute": { - "hosts": [ - "controller" - ] - }, - "storage": { - "hosts": [ - "controller" - ] - }, - {% endif %} - "dashboard": { - "hosts": [ - "controller" - ] - }, - "kolla": { - "hosts": [ - "controller" - ] - }, - "network": { - "hosts": [ - "controller" - ] - } -} diff --git a/tools/oos/etc/inventory/oos_inventory.py b/tools/oos/etc/inventory/oos_inventory.py deleted file mode 100755 index 21aa1abd57915ccd75c125d505ce89586bdb0829..0000000000000000000000000000000000000000 --- a/tools/oos/etc/inventory/oos_inventory.py +++ /dev/null @@ -1,83 +0,0 @@ -#!/usr/bin/python3 -import argparse -import configparser -import os -import sys - -import jinja2 - -import oos - - -def parse_inventory(inventory_template_dir, config): - env = 
jinja2.Environment(loader=jinja2.FileSystemLoader(inventory_template_dir)) - template = env.get_template('oos_inventory.j2') - template_vars = {'controller_ip': os.environ.get('CONTROLLER_IP'), - 'compute01_ip': os.environ.get('COMPUTE01_IP'), - 'compute02_ip': os.environ.get('COMPUTE02_IP'), - 'mysql_root_password': config.get('environment', 'mysql_root_password'), - 'mysql_project_password': config.get('environment', 'mysql_project_password'), - 'rabbitmq_password': config.get('environment', 'rabbitmq_password'), - 'project_identity_password': config.get('environment', 'project_identity_password'), - 'enabled_service': config.get('environment', 'enabled_service').split(','), - 'neutron_provider_interface_name': config.get('environment', 'neutron_provider_interface_name'), - 'default_ext_subnet_range': config.get('environment', 'default_ext_subnet_range'), - 'default_ext_subnet_gateway': config.get('environment', 'default_ext_subnet_gateway'), - 'neutron_dataplane_interface_name': config.get('environment', 'neutron_dataplane_interface_name'), - 'cinder_block_device': config.get('environment', 'cinder_block_device'), - 'swift_storage_devices': config.get('environment', 'swift_storage_devices').split(','), - 'swift_hash_path_suffix': config.get('environment', 'swift_hash_path_suffix'), - 'swift_hash_path_prefix': config.get('environment', 'swift_hash_path_prefix'), - 'glance_api_workers': config.get('environment', 'glance_api_workers'), - 'cinder_api_workers': config.get('environment', 'cinder_api_workers'), - 'nova_api_workers': config.get('environment', 'nova_api_workers'), - 'nova_metadata_api_workers': config.get('environment', 'nova_metadata_api_workers'), - 'nova_conductor_workers': config.get('environment', 'nova_conductor_workers'), - 'nova_scheduler_workers': config.get('environment', 'nova_scheduler_workers'), - 'neutron_api_workers': config.get('environment', 'neutron_api_workers'), - 'horizon_allowed_host': config.get('environment', 'horizon_allowed_host'), - 'kolla_openeuler_plugin': config.get('environment', 'kolla_openeuler_plugin'), - 'oos_env_type': os.environ.get('OOS_ENV_TYPE'), - 'openstack_release': os.environ.get('OpenStack_Release'), - 'keypair_dir': os.environ.get('keypair_dir') - } - output = template.render(template_vars) - return output - - -def init_config(): - search_paths = ['/etc/oos/', - os.path.join(os.path.dirname(oos.__path__[0]), 'etc'), - os.environ.get("OOS_CONF_DIR", ""), '/usr/local/etc/oos', - '/usr/etc/oos', - ] - inventory_template_dir = None - config = None - for conf_path in search_paths: - pkg_tpl = os.path.join(conf_path, "inventory/oos_inventory.j2") - conf_file = os.path.join(conf_path, "oos.conf") - if not inventory_template_dir and os.path.isfile(pkg_tpl): - inventory_template_dir = os.path.join(conf_path, "inventory") - if not config and os.path.isfile(conf_file): - config = configparser.ConfigParser() - config.read(conf_file) - return inventory_template_dir, config - - -def main(): - inventory_template_dir, config = init_config() - parser = argparse.ArgumentParser() - args_group = parser.add_mutually_exclusive_group(required=True) - args_group.add_argument('--list', action='store_true', - help='List inventories') - args_group.add_argument('--host', help='Show the specified host info') - parsed_args = parser.parse_args() - inventories = parse_inventory(inventory_template_dir, config) - if parsed_args.list: - print(inventories) - else: - print({}) - - -if __name__ == '__main__': - sys.exit(main()) diff --git a/tools/oos/etc/key_pair/id_rsa 
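The `oos_inventory.py` script above is an Ansible dynamic inventory: it locates `oos.conf` and the `oos_inventory.j2` template from its search paths, pulls the host addresses and environment type from environment variables (`CONTROLLER_IP`, `COMPUTE01_IP`, `COMPUTE02_IP`, `OOS_ENV_TYPE`, `OpenStack_Release`, `keypair_dir`), and prints the rendered JSON when invoked with `--list`. A minimal sketch of exercising it by hand, assuming the script is executable on `PATH`; the environment-variable names come from the script itself, while the addresses and release name below are placeholders:

```python
#!/usr/bin/python3
# Illustrative only: call the dynamic inventory the same way Ansible would,
# by exporting the variables it reads and requesting --list.
import json
import os
import subprocess

env = dict(os.environ,
           CONTROLLER_IP="192.0.2.10",   # placeholder addresses
           COMPUTE01_IP="192.0.2.11",
           COMPUTE02_IP="192.0.2.12",
           OOS_ENV_TYPE="cluster",       # or "all_in_one"
           OpenStack_Release="wallaby",  # placeholder release name
           keypair_dir="/root/.ssh")

result = subprocess.run(["oos_inventory.py", "--list"],
                        env=env, capture_output=True, text=True, check=True)
inventory = json.loads(result.stdout)
# With OOS_ENV_TYPE=cluster the hostvars cover controller, compute01 and compute02.
print(sorted(inventory["_meta"]["hostvars"]))
```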
b/tools/oos/etc/key_pair/id_rsa deleted file mode 100644 index 23901c64612063307dfe7d7bfa68bad15718ce5c..0000000000000000000000000000000000000000 --- a/tools/oos/etc/key_pair/id_rsa +++ /dev/null @@ -1,38 +0,0 @@ ------BEGIN OPENSSH PRIVATE KEY----- -b3BlbnNzaC1rZXktdjEAAAAABG5vbmUAAAAEbm9uZQAAAAAAAAABAAABlwAAAAdzc2gtcn -NhAAAAAwEAAQAAAYEAoAASpdnYji1aJHOifEgPrA3BkDblNS3gkfM3jlld69dAA3ONGmHg -eNhTmo6B0fscRDpyI03ii3/kHdr5SVOf7JPPq/RdVTYuvXa0L3kQqThrZvH4eFmw3yqVIq -BMLHEQjsHA8LYU7GE36ED+hSS83xD91+AAD2PqBqJ6N/oi4V+8Svfgaz3rtTidTYeDT3Zt -0zRoKUnKkoNXQ7yDIOLYBJWzQF6QC5Dyu5Cz4i8bApMhNZQv33yZ6n48na6NF7wkbfC44u -tYi4H2qte2sOVnNCGFJr2IK2vqPfP0+zuISjcy9YdEtQRDkGxSqOWg+cXom4+D0p8OJutJ -YKvA9b8jz42lxw2BX7DTgTPkahf4/NmF7kxAXtbDGjrWmlW/4YmI2lG4PlYIxrzLSrKHET -A6/xO5ekz6cG8rcTi60oY3QZhSyarCHg9ZXGU8xMp/RVbQGH0DF+3SDYv0lkLnUfPEc03/ -DofomYsF9hsQRPsOKu0rrJoYB35clDW0R8umYl6xAAAFgB3yt9Ad8rfQAAAAB3NzaC1yc2 -EAAAGBAKAAEqXZ2I4tWiRzonxID6wNwZA25TUt4JHzN45ZXevXQANzjRph4HjYU5qOgdH7 -HEQ6ciNN4ot/5B3a+UlTn+yTz6v0XVU2Lr12tC95EKk4a2bx+HhZsN8qlSKgTCxxEI7BwP -C2FOxhN+hA/oUkvN8Q/dfgAA9j6gaiejf6IuFfvEr34Gs967U4nU2Hg092bdM0aClJypKD -V0O8gyDi2ASVs0BekAuQ8ruQs+IvGwKTITWUL998mep+PJ2ujRe8JG3wuOLrWIuB9qrXtr -DlZzQhhSa9iCtr6j3z9Ps7iEo3MvWHRLUEQ5BsUqjloPnF6JuPg9KfDibrSWCrwPW/I8+N -pccNgV+w04Ez5GoX+PzZhe5MQF7Wwxo61ppVv+GJiNpRuD5WCMa8y0qyhxEwOv8TuXpM+n -BvK3E4utKGN0GYUsmqwh4PWVxlPMTKf0VW0Bh9Axft0g2L9JZC51HzxHNN/w6H6JmLBfYb -EET7DirtK6yaGAd+XJQ1tEfLpmJesQAAAAMBAAEAAAGANyuPQoz5dR0CRitxTbVzYfpkUh -v7sPiexPS+pWD/V8EjG42OjBhP1JuTSGn3LbaOqqAUl0PV6BAzUnAdIUGqlWLqavqZ7DYA -q+fwfaYbLp57ukWZTbZvnKQMRKJNYc2izfbVVqsST+e95WHz4WknjytGvFdK7gOfwKXpyr -9/o4LlZFxQj+oMCrL42rDtgErv17HscMA3D4omXv7zoDVYE0yjQIDa3oIekLp2rHldsOeW -vejZERDf6dGZiS2VDSgcRAw69crXzfcDK1DGB6WUqnizVGuFYbinNtxOY4ZsuY4ztpwAtS -ugnTg93GDn8FXnfPmy701r4o0ztVJELpNvcVUwSDCZ7fpSJLfvgwlIkbo9RGWUVdJzvm+d -fAqDA86uJuC/E+K1kQzyqySnfBv2wcsuydyxEA1gts4U0BZWayTmEDyfVLfb1SH+Rmvwmf -U8r+p6Nzz+boVIdwomcffhxpLSlvx8DaTs7dv/YiA0XrwnqMPtc9RqYgvFL6Upgo2hAAAA -wBB0A6exZiXkKcLBROFQp5JK7S3XCkDEoEI28PWfGuq+yW4n85qiTRYYpEMyIBmawcFaXq -lPqGrii/iLsAFhaOE1xEmWLMp6GhyLyY9LcS6g5Fi5tp19aecYAuF93GWCZSsv88eYYVWO -oPCyMj5LEt4kaMf2lwVBV8fINBxqAE0qDhRraMVEBI4pNRYTmW13sZBXMLnUscuSBXvRp8 -MVuksiH+/INHgpdZ9Wgibycox0pIeCnx5r8CpNfCzya1hkTgAAAMEAzgkfplltrRtxohpF -8LcAy+9Z6zhnKlpVastAbj1zQIb6wcqR6+LWnGVQj8+rhQDwAJYEUB3hqVm8mdElaH9wDg -99nzdg6y5Bw/WEywrEVD8n3lseHoRbn4ox/2Rt/CHTiaFLCxj76paMEnoYMU/td+SGMlsP -BOrBBNCytnUfDRU2vkXeMdwTPWGn3Aotuk2loGcgLvxyxXrgeOjpiF+NqCHCm3/rDaoGRr -dwN5y+hLK2lnOUu2VvxsTd98fe+d2FAAAAwQDGzQiG4wLO03DpP2KQD/2vtU8hheD8BAtq -mlxTqVHqxh1Gy3Srfdhthsfc4LYpUuVQfO5+lLaVniq2PyoZaJj9RvJI3DKNKGqNwQKr+s -pF7FQnbXGaNycldk1s7TUBRsnNUH90VYae5uyWxnZAg5TDlpmFCKgZFiigNrB4j88HgCxS -amadbnhXF3sqkShx19tzlhPUT7gaGWHqKaW90PuQPEljefciD2qMduRiSoDfP++U8WcXml -ayDAAHzalHHj0AAAAIcm9vdEB3eHkBAgM= ------END OPENSSH PRIVATE KEY----- diff --git a/tools/oos/etc/key_pair/id_rsa.pub b/tools/oos/etc/key_pair/id_rsa.pub deleted file mode 100644 index 7d8a2de71984268b74d892de0b7d718cde37eee5..0000000000000000000000000000000000000000 --- a/tools/oos/etc/key_pair/id_rsa.pub +++ /dev/null @@ -1 +0,0 @@ -ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCgABKl2diOLVokc6J8SA+sDcGQNuU1LeCR8zeOWV3r10ADc40aYeB42FOajoHR+xxEOnIjTeKLf+Qd2vlJU5/sk8+r9F1VNi69drQveRCpOGtm8fh4WbDfKpUioEwscRCOwcDwthTsYTfoQP6FJLzfEP3X4AAPY+oGono3+iLhX7xK9+BrPeu1OJ1Nh4NPdm3TNGgpScqSg1dDvIMg4tgElbNAXpALkPK7kLPiLxsCkyE1lC/ffJnqfjydro0XvCRt8Lji61iLgfaq17aw5Wc0IYUmvYgra+o98/T7O4hKNzL1h0S1BEOQbFKo5aD5xeibj4PSnw4m60lgq8D1vyPPjaXHDYFfsNOBM+RqF/j82YXuTEBe1sMaOtaaVb/hiYjaUbg+VgjGvMtKsocRMDr/E7l6TPpwbytxOLrShjdBmFLJqsIeD1lcZTzEyn9FVtAYfQMX7dINi/SWQudR88RzTf8Oh+iZiwX2GxBE+w4q7SusmhgHflyUNbRHy6ZiXrE= oos diff --git a/tools/oos/etc/oos.conf b/tools/oos/etc/oos.conf deleted file mode 100644 index 847e17ac58fa534861cc0fd9c5312334e317fe0a..0000000000000000000000000000000000000000 --- a/tools/oos/etc/oos.conf +++ /dev/null @@ -1,32 +0,0 @@ -[provider] -driver = huaweicloud - -[huaweicloud] -ak = -sk = -project_id = c847fb717563484fa95d18a75ae7831c -endpoint = https://ecs.ap-southeast-3.myhuaweicloud.com - -[environment] -mysql_root_password = root -mysql_project_password = root -rabbitmq_password = root -project_identity_password = root -enabled_service = keystone,neutron,cinder,placement,nova,glance,horizon,aodh,ceilometer,cyborg,gnocchi,kolla,heat,swift,trove,tempest -neutron_provider_interface_name = br-ex -default_ext_subnet_range = 10.100.100.0/24 -default_ext_subnet_gateway = 10.100.100.1 -neutron_dataplane_interface_name = eth1 -cinder_block_device = vdb -swift_storage_devices = vdc -swift_hash_path_suffix = ash -swift_hash_path_prefix = has -glance_api_workers = 2 -cinder_api_workers = 2 -nova_api_workers = 2 -nova_metadata_api_workers = 2 -nova_conductor_workers = 2 -nova_scheduler_workers = 2 -neutron_api_workers = 2 -horizon_allowed_host = * -kolla_openeuler_plugin = false diff --git a/tools/oos/etc/openeuler_repo.yaml b/tools/oos/etc/openeuler_repo.yaml deleted file mode 100644 index 31744ee7d64130f4f11dbf505909feca3b16f6d9..0000000000000000000000000000000000000000 --- a/tools/oos/etc/openeuler_repo.yaml +++ /dev/null @@ -1,8708 +0,0 @@ -A-FOT: Compiler -A-Ops: sig-ops -A-Tune: A-Tune -A-Tune-BPF-Collection: A-Tune -A-Tune-Collector: A-Tune -A-Tune-UI: A-Tune -AvxToNeon: sig-AccLib -BareBonesBrowserLaunch: sig-Java -BiSheng-Adoptium: Compiler -CPython: Compiler -CUnit: Base-service -CharLS: Desktop -ComputeLibrary: ai -Control-FREEC: sig-bio -Cython: Programming-language -DCache: Storage -DevIL: Application -Done: Private -DyscheOS-kernel: sig-DyscheOS -DyscheOS-meta: sig-DyscheOS -DyscheOS-utils: sig-DyscheOS -EulerRobot: sig-QA -FAudio: sig-compat-winapp -Fast-CDR: sig-ROS -Fast-DDS: sig-ROS -Flask-RESTful: Private -G11N: G11N -GAPP: dev-utils -GATK: sig-bio -GATK3: sig-bio -GConf2: Desktop -GearOS: sig-industrial-control -GeoIP: Networking -GeoIP-GeoLite-data: Networking -GraphicsMagick: Application -HdrHistogram: Application -HikariCP: sig-Java -ImageMagick: Others -Imath: Runtime -Intel-Arch-doc: sig-Intel-Arch -Intel-gcc: sig-Intel-Arch -Intel-glibc: sig-Intel-Arch -Intel-kernel: sig-Intel-Arch -Java-Packages: sig-Java -Judy: Base-service -Keras: Private -KubeHawk: sig-CloudNative -KubeHawkeyes: sig-CloudNative -KubeOS: sig-CloudNative -LZMA-SDK: Base-service -LibRaw: Programming-language -Lmod: Application -ModemManager: Networking -NestOS: sig-CloudNative -NetworkManager: Networking -NetworkManager-libreswan: System-tool -NutShell-Kernel: sig-RISC-V -NutShell-riscv-glibc: sig-RISC-V -NutShell-riscv-pk: sig-RISC-V -NutShell-systemd: sig-RISC-V -ORBit2: Desktop -OpenAMP: sig-embedded -OpenEXR: Runtime -OpenIPMI: Networking -PEGTL: 
dev-utils -PackageKit: Desktop -PackageKit-Qt: Desktop -Picard: sig-bio -PilotGo: sig-ops -PilotGo-plugins: sig-ops -PilotGo-web: sig-ops -Preempt_RT: sig-industrial-control -PyPAM: Others -PyQt4: sig-python-modules -PyYAML: sig-python-modules -QA: sig-QA -R-core: sig-bio -R-knitr: dev-utils -R-language: Application -R-rpm-macros: Programming-language -RISC-V: sig-RISC-V -SDL: Programming-language -SDL2: Others -SDL_sound: dev-utils -SuperLU: dev-utils -SuperLUMT: ai -TCP_option_address: Kernel -TeXamator: Base-service -Thunar: xfce -UniProton: sig-embedded -WALinuxAgent: sig-CloudNative -WLLVM: Compiler -X-diagnosis: sig-ops -Xaw3d: Desktop -XmlSchema: Application -a52dec: sig-epol -aalib: Others -aalto-xml: Application -aarch32-rootfs-builder: sig-Yocto -abattis-cantarell-fonts: Desktop -abduco: Application -abi-compliance-checker: sig-EasyLife -abi-dumper: sig-EasyLife -abichecker: dev-utils -abrt: Application -abrt-java-connector: Application -abrt-server-info-page: Application -abseil-cpp: Base-service -accountsservice: Desktop -accumulo: bigdata -acl: Base-service -acme-tiny: Application -acpica-tools: Base-service -acpid: Computing -actionlib: sig-ROS -activemq: Application -adb-enhanced: dev-utils -adcli: Base-service -adobe-mappings-cmap: Desktop -adobe-mappings-pdf: Desktop -adobe-source-code-pro-fonts: Private -adwaita-icon-theme: Desktop -adwaita-qt: Desktop -aesh: sig-Java -aespipe: sig-security-facility -afflib: Others -afterburn: sig-CloudNative -aha: Application -aide: Base-service -airline: Base-service -alertmanager: sig-CloudNative -allegro: Private -alluxio: bigdata -allwinner-kernel: sig-RaspberryPi -alpine: Application -alsa-firmware: Computing -alsa-lib: Computing -alsa-plugins: Runtime -alsa-tools: Computing -alsa-utils: Application -amanda: Application -amavis: Application -ambari: bigdata -ament_cmake: sig-ROS -ament_cmake_ros: sig-ROS -ament_index: sig-ROS -ament_lint: sig-ROS -ament_package: sig-ROS -amtk: Application -anaconda: sig-OS-Builder -anaconda-user-help: sig-OS-Builder -anbox: sig-recycle -android-json-org-java: sig-Java -angles: sig-ROS -annobin: Others -annotation-indexer: sig-Java -ansible: dev-utils -ansible-2.9: dev-utils -ansible-lint: sig-openstack -ansible-runner-service: oVirt -ant: sig-Java -ant-antunit: Base-service -ant-contrib: sig-Java -anthy: Others -antlr: dev-utils -antlr-maven-plugin: Application -antlr3: sig-Java -antlr32: Application -antlr4: sig-Java -aom: Desktop -aopalliance: sig-Java -apache-commons-beanutils: sig-Java -apache-commons-chain: Base-service -apache-commons-cli: sig-Java -apache-commons-codec: sig-Java -apache-commons-collections: Application -apache-commons-collections4: dev-utils -apache-commons-compress: sig-Java -apache-commons-configuration: sig-Java -apache-commons-csv: Application -apache-commons-daemon: Application -apache-commons-dbcp: Application -apache-commons-digester: sig-Java -apache-commons-discovery: sig-Java -apache-commons-el: Base-service -apache-commons-exec: sig-Java -apache-commons-fileupload: dev-utils -apache-commons-io: sig-Java -apache-commons-javaflow: Base-service -apache-commons-jci: Base-service -apache-commons-jexl: dev-utils -apache-commons-jxpath: sig-Java -apache-commons-lang: sig-Java -apache-commons-lang3: sig-Java -apache-commons-logging: Application -apache-commons-math: dev-utils -apache-commons-net: Networking -apache-commons-ognl: sig-Java -apache-commons-parent: sig-Java -apache-commons-pool: Application -apache-commons-pool2: dev-utils -apache-commons-validator: 
Application -apache-commons-vfs: sig-Java -apache-ivy: sig-Java -apache-log4j-extras: dev-utils -apache-logging-parent: Application -apache-mime4j: Application -apache-mina: Base-service -apache-mod_jk: Application -apache-orc: DB -apache-parent: sig-Java -apache-poi: dev-utils -apache-rat: Application -apache-resource-bundles: sig-Java -apache-rpm-macros: Application -apache-sshd: dev-utils -apache2-mod_xforward: Others -apacheds: sig-Java -apacheds-ldap-api: sig-Java -api-guarder: sig-EasyLife -apiguardian: sig-Java -apisix: sig-OpenResty -apisix-dashboard: sig-OpenResty -apisix-deps: sig-OpenResty -apiviz: dev-utils -apparmor: sig-security-facility -appstream: dev-utils -appstream-data: sig-recycle -appweb: Networking -apr: Base-service -apr-util: Base-service -aqute-bnd: sig-Java -argbash: Application -argon2: Base-service -args4j: sig-Java -argus: Application -argyllcms: System-tool -aria2: Application -aries-blueprint-annotation-api: Base-service -aries-blueprint-api: dev-utils -aries-blueprint-core: Application -aries-blueprint-parser: Application -aries-proxy-api: Application -aries-proxy-impl: Base-service -aries-quiesce-api: dev-utils -aries-util: dev-utils -ark: sig-KDE -arm-ml-examples: ai -arm-trusted-firmware: Base-service -armadillo: dev-utils -armnn: ai -arp-scan: Networking -arpack: sig-epol -arpack-ng: Computing -arptables: Networking -arpwatch: Networking -arquillian-core: sig-Java -artemis: sig-Java -asciidoc: Base-service -asdcplib: sig-epol -asio: dev-utils -aspectjweaver: sig-Java -aspell: Application -aspell-en: Application -assertj-core: Programming-language -assimp: Others -astream: Kernel -asymptote: dev-utils -async-libfuse: iSulad -at: Base-service -at-spi2-atk: Desktop -at-spi2-core: Desktop -atf: Base-service -atinject: sig-Java -atk: Desktop -atkmm: GNOME -atlas: Runtime -atmel-firmware: Networking -atomic: sig-Ostree-Assembly -atop: dev-utils -atril: sig-UKUI -attest-tools: sig-security-facility -attica: sig-UKUI -attr: Storage -audiofile: Base-service -audit: sig-security-facility -augeas: Desktop -auter: Application -authd: Base-service -authselect: Base-service -authz: iSulad -auto: dev-utils -auto_py2to3: dev-utils -autoconf: Base-service -autoconf-archive: Programming-language -autoconf213: Programming-language -autofdo: Compiler -autofs: System-tool -autogen: Base-service -automake: Base-service -automoc: dev-utils -autossh: Application -autotrace: Private -autotune: Others -avahi: Desktop -avalon-framework: sig-Java -avalon-logkit: sig-Java -avocado: sig-QA -avocado-vt: sig-QA -avor: bigdata -avro: Application -aws-sdk-java: sig-Java -axel: sig-epol -axiom: Base-service -azote: Desktop -b43-openfwwf: Networking -b43-tools: System-tool -babel: Base-service -babeld: Networking -babeltrace: Base-service -babeltrace2: Base-service -babl: Others -backintime: Desktop -backupninja: Application -bacula: System-tool -bam: dev-utils -bamf: sig-UKUI -bamtools: sig-bio -banner: Application -baobab: GNOME -barcode: Application -barman: DB -barrier: Application -base64coder: dev-utils -basesystem: Base-service -bash: Base-service -bash-argsparse: dev-utils -bash-completion: Base-service -batik: sig-Java -bazel: ai -bc: Base-service -bcache-tools: Storage -bcc: dev-utils -bcel: dev-utils -bcftools: sig-bio -bcm283x-firmware: Private -bcrypt: dev-utils -bea-stax: dev-utils -beakerlib: Others -bean-validation-api: sig-Java -beanstalkd: Computing -bedops: sig-bio -bedtools: sig-bio -beijing_est_institute_2021: sig-recycle -beust-jcommander: dev-utils -bgenix: sig-bio 
-bgmprovider: Compiler -biber: sig-perl-modules -bigdata: bigdata -bigtop: bigdata -bind: Networking -bind-dyndb-ldap: Networking -binutils: Compiler -biobambam2: sig-bio -bioinformatics: sig-bio -biometric-authentication: sig-UKUI -biosdevname: Others -bird: sig-epol -bishengjdk-11: Compiler -bishengjdk-17: Compiler -bishengjdk-8: Compiler -bishengjdk-build: Compiler -bishengjdk-riscv: Compiler -bison: Base-service -blackbox_exporter: sig-CloudNative -blaze: bigdata -blesschess: sig-minzuchess -blis: sig-epol -blivet-gui: sig-OS-Builder -blktrace: Storage -blog: sig-recycle -blueman: sig-UKUI -bluez: Base-service -blur-effect: Desktop -boilerpipe: Application -bolt: System-tool -bond_core: sig-ROS -boom-boot: Others -boost: Computing -booth: sig-Ha -bootstrap: sig-nodejs -bootupd: sig-OS-Builder -bouncycastle: dev-utils -bowtie2: sig-bio -box86: sig-compat-winapp -bpftrace: dev-utils -bpg-fonts: Desktop -bpytop: System-tool -brasero: Application -breeze-icon-theme: Others -bridge-method-injector: dev-utils -bridge-utils: Networking -brltty: Desktop -brotli: Base-service -brpc: dev-utils -bsf: dev-utils -bsh: sig-Java -btrfs-progs: Storage -bubblemail: GNOME -bubblewrap: Base-service -bucardo: DB -buildnumber-maven-plugin: sig-Java -buildroot: dev-utils -bullet: sig-compat-winapp -busybox: sig-CloudNative -bval: Base-service -bwa: Application -byacc: Programming-language -byaccj: sig-Java -byte-buddy: Base-service -bytelist: sig-Java -byteman: sig-Java -byzanz: Desktop -bzip2: Base-service -c-STAR: sig-bio -c-ares: Networking -c-blosc: dev-utils -c3p0: sig-Java -ca-certificates: Base-service -cachefilesd: Storage -cadvisor: sig-CloudNative -caffe: sig-recycle -caffeine: dev-utils -cairo: Desktop -cairomm: Desktop -caja: sig-mate-desktop -caja-extensions: sig-mate-desktop -cal10n: dev-utils -calc: Application -calcite: bigdata -callaudiod: GNOME -calls: GNOME -canal: DB -canfestival: sig-industrial-control -canfestival-xenomai: sig-industrial-control -canopennode: sig-industrial-control -capstone: Base-service -capsule: Virt -cartographer: sig-ROS -cartographer_ros: sig-ROS -cassandra-java-driver: sig-Java -castor: Base-service -castor-maven-plugin: sig-Java -catatonit: sig-CloudNative -catch1: dev-utils -catfish: xfce -catkin: sig-ROS -cbi-plugins: Base-service -ccache: Compiler -ccid: Storage -cdi-api: sig-Java -cdparanoia: Desktop -cdrdao: Application -cdrkit: sig-OS-Builder -celt051: Runtime -ceph: sig-ceph -ceph-ansible: sig-ceph -ceph-csi: sig-ceph -ceph-deploy: sig-ceph -ceph_dev: sig-ceph -cereal: dev-utils -ceres-solver: sig-epol -certmonger: sig-security-facility -cfitsio: dev-utils -cgdcbxd: System-tool -cgit: sig-epol -cglib: dev-utils -check: Programming-language -checkpolicy: sig-security-facility -checkstyle: Application -cheese: Desktop -chkconfig: Base-service -chromaprint: dev-utils -chromium: Application -chrony: Networking -chrpath: Base-service -ci-bot: sig-Gatekeeper -ci_check: Private -ci_project: Private -cifs-utils: Storage -cilium: sig-high-performance-network -cim-schema: System-tool -cinnamon: sig-cinnamon -cinnamon-control-center: sig-cinnamon -cinnamon-desktop: sig-cinnamon -cinnamon-menu: sig-cinnamon -cinnamon-screensaver: sig-cinnamon -cinnamon-session: sig-cinnamon -cinnamon-settings-daemon: sig-cinnamon -cinnamon-spices-applets: sig-cinnamon -cinnamon-spices-desklets: sig-cinnamon -cinnamon-spices-extensions: sig-cinnamon -cinnamon-spices-themes: sig-cinnamon -cinnamon-themes: sig-cinnamon -cinnamon-translations: sig-cinnamon -cjkuni-ukai-fonts: Desktop 
-cjkuni-uming-fonts: sig-mate-desktop -cjose: Application -cjs: sig-cinnamon -cjson: DB -ck: dev-utils -clamav: Others -clamav-unofficial-sigs: Private -clang: Compiler -class_loader: sig-ROS -classloader-leak-test-framework: sig-Java -classmate: sig-Java -claws-mail: sig-UKUI -cldr-emoji-annotation: Others -clevis: Base-service -cli-parser: Application -cli11: dev-utils -clibcni: iSulad -cliquer: sig-epol -clitest: Application -cloc: dev-utils -cloog: sig-compat-winapp -closure-compiler: sig-nodejs -cloud-init: Base-service -cloud-utils: Base-service -cloudnative: sig-CloudNative -clucene: Others -clutter: Desktop -clutter-gst2: Runtime -clutter-gst3: Desktop -clutter-gtk: Desktop -cmake: Programming-language -cmake_modules: sig-ROS -cmark: GNOME -cmocka: Programming-language -cmockery: sig-recycle -cobbler: sig-OS-Builder -cockpit: Desktop -cockpit-ovirt: oVirt -codegen: Application -codehaus-parent: sig-Java -codelite: sig-UKUI -codemodel: sig-Java -codenarc: Application -coffee-script: Application -cogl: Desktop -coin-or-cbc: sig-epol -coin-or-cgl: sig-epol -coin-or-clp: sig-epol -coin-or-coinutils: sig-epol -coin-or-data-miplib3: sig-epol -coin-or-data-netlib: sig-epol -coin-or-dylp: sig-epol -coin-or-osi: sig-epol -coin-or-sample: sig-epol -coin-or-vol: sig-epol -collectd: oVirt -colm: dev-utils -color-filesystem: Desktop -colord: Desktop -colord-gtk: System-tool -colordiff: dev-utils -common_interfaces: sig-ROS -common_msgs: sig-ROS -common_tutorials: sig-ROS -community: TC -community-issue: TC -compass-ci: sig-CICD -compass-ci-web: sig-CICD -compat-libgfortran: Private -compat-lua: sig-epol -compat-winapp: sig-compat-winapp -compface: Application -compiler-rt: Compiler -compiler-test: sig-QA -compliance: sig-compliance -compliance-sbom: sig-compliance -compress-lzf: Application -comps-extras: Desktop -concurrent-trees: dev-utils -conmon: Others -conntrack-tools: Base-service -console-login-helper-messages: sig-CloudNative -console-setup: Application -console_bridge: sig-ROS -console_bridge_vendor: sig-ROS -container-exception-logger: Others -container-selinux: sig-CloudNative -container-test: sig-QA -containerd: sig-CloudNative -containernetworking-plugins: sig-CloudNative -containers-common: sig-CloudNative -control_msgs: sig-ROS -control_toolbox: sig-ROS -convmv: Application -cookcc: Base-service -cookxml: sig-recycle -copy-jdk-configs: Packaging -coredns: sig-CloudNative -coreutils: Base-service -coro-mock: dev-utils -corosync: sig-Ha -corosync-qdevice: sig-Ha -courier-unicode: dev-utils -cowsay: sig-recycle -cpio: Base-service -cpp-hocon: Base-service -cpp-httplib: dev-utils -cppcheck: Programming-language -cpptasks: sig-Java -cppunit: Programming-language -cpuid: dev-utils -cracklib: sig-security-facility -crash: Base-service -crash-gcore-command: Programming-language -crash-trace-command: Programming-language -crda: sig-recycle -createrepo_c: Base-service -cri-o: sig-CloudNative -cri-tools: sig-CloudNative -criu: sig-ops -cronie: Base-service -crontabs: Base-service -crudini: sig-openstack -crun: sig-CloudNative -cryptacular: dev-utils -crypto-policies: sig-security-facility -cryptopp: sig-security-facility -cryptsetup: Storage -cscope: Programming-language -csmith: dev-utils -ct-ng: dev-utils -ctags: Base-service -cufflinks: sig-bio -culmus-fonts: Application -cups: Desktop -cups-filters: System-tool -cups-pk-helper: Desktop -curator: dev-utils -curl: Networking -curvesapi: dev-utils -custodia: System-tool -custom_build_tool: Application -cve-manager: Infrastructure -cvs: 
Programming-language -cvsps: Private -cxf: Base-service -cxf-build-utils: sig-Java -cxf-xjc-utils: sig-Java -cyclonedds: sig-ROS -cyrus-imapd: Application -cyrus-sasl: Base-service -czmq: dev-utils -d-feet: Desktop -daala: sig-epol -dain-snappy: sig-Java -dash: Application -datafu: bigdata -datanucleus-api-jdo: sig-Java -datanucleus-core: sig-Java -datanucleus-maven-parent: sig-Java -datanucleus-rdbms: sig-Java -dav1d: Desktop -dblatex: Application -dbus: Base-service -dbus-broker: Base-service -dbus-cpp: dev-utils -dbus-glib: Base-service -dbus-python: Base-service -dbusmenu-qt: dev-utils -dbxtool: Base-service -dcmtk: sig-epol -dconf: Desktop -dconf-editor: Desktop -dcraw: Application -ddcutil: sig-UKUI -dde: sig-DDE -dde-account-faces: sig-DDE -dde-api: sig-DDE -dde-calendar: sig-DDE -dde-clipboard: sig-DDE -dde-control-center: sig-DDE -dde-daemon: sig-DDE -dde-device-formatter: sig-DDE -dde-dock: sig-DDE -dde-file-manager: sig-DDE -dde-introduction: sig-DDE -dde-kwin: sig-DDE -dde-launcher: sig-DDE -dde-network-utils: sig-DDE -dde-polkit-agent: sig-DDE -dde-qt-dbus-factory: sig-DDE -dde-server-industry-config: sig-DDE -dde-session-shell: sig-DDE -dde-session-ui: sig-DDE -debian-keyring: Private -debootstrap: Others -debugedit: Base-service -decentxml: Application -deepin-anything: sig-DDE -deepin-clone: sig-DDE -deepin-compressor: sig-DDE -deepin-dbus-generator: sig-DDE -deepin-default-settings: sig-DDE -deepin-desktop-base: sig-DDE -deepin-desktop-schemas: sig-DDE -deepin-devicemanager: sig-DDE -deepin-draw: sig-DDE -deepin-editor: sig-DDE -deepin-font-manager: sig-DDE -deepin-gettext-tools: sig-DDE -deepin-graphics-driver-manager: sig-DDE -deepin-gtk-theme: sig-DDE -deepin-icon-theme: sig-DDE -deepin-image-viewer: sig-DDE -deepin-kwin: sig-recycle -deepin-log-viewer: sig-DDE -deepin-manual: sig-DDE -deepin-menu: sig-DDE -deepin-movie: sig-DDE -deepin-music: sig-DDE -deepin-qml-widgets: sig-DDE -deepin-reader: sig-DDE -deepin-rpm-installer: sig-DDE -deepin-screen-recorder: sig-DDE -deepin-shortcut-viewer: sig-DDE -deepin-sound-theme: sig-DDE -deepin-system-monitor: sig-DDE -deepin-terminal: sig-DDE -deepin-turbo: sig-DDE -deepin-upgrade-tool: sig-DDE -deepin-wallpapers: sig-DDE -dejagnu: Programming-language -dejavu-fonts: System-tool -delta: bigdata -deltarpm: Base-service -delve: dev-utils -derby: DB -desktop-file-utils: Desktop -devhelp: GNOME -dhcp: Networking -diagnostics: sig-ROS -dialog: Base-service -dibbler: sig-openstack -dict2xml: Private -dietlibc: Base-service -diffstat: Base-service -diffutils: Base-service -digest-list-tools: sig-security-facility -dim_tools: sig-security-facility -ding-libs: Base-service -discount: Application -diskimage-builder: sig-openstack -disomaster: Desktop -disruptor: Base-service -djvulibre: Others -dkms: Others -dleyna-connector-dbus: dev-utils -dleyna-core: dev-utils -dleyna-renderer: GNOME -dleyna-server: Application -dlib: ai -dlm: Application -dmidecode: Computing -dmraid: Storage -dnf: sig-OS-Builder -dnf-plugin-ovl: sig-OS-Builder -dnf-plugins-core: sig-OS-Builder -dnsjava: sig-Java -dnsmasq: Networking -dnssec-trigger: Application -docbook-dtds: Base-service -docbook-style-dsssl: Application -docbook-style-xsl: Base-service -docbook-utils: Application -docbook2X: Application -docbook5-schemas: Application -docbook5-style-xsl: Application -docker: sig-CloudNative -docker-anaconda-addon: sig-recycle -docker-client-java: sig-CloudNative -docker-compose: sig-CloudNative -docs: doc -dogtail: Others -dom4j: sig-Java -doracms: sig-cms 
-dos2unix: Base-service -dosfstools: Storage -dotconf: Programming-language -double-conversion: Computing -dovecot: Application -doxygen: Application -dpdk: sig-high-performance-network -dpkg: Others -dpu-core: sig-DPU -dracut: Base-service -drbd: sig-Ha -dropwatch: Networking -drpm: Base-service -druid: bigdata -dsoftbus_standard: sig-embedded -dtc: Base-service -dtkcore: sig-DDE -dtkcore2: sig-DDE -dtkgui: sig-DDE -dtkwidget: sig-DDE -dtkwidget2: sig-DDE -dtkwm: sig-DDE -duktape: Base-service -dump: Application -duoyibu-ai: sig-minzuchess -dust: Application -dvdplusrw-tools: Others -dwarves: sig-high-performance-network -dwz: Base-service -dynamic_reconfigure: sig-ROS -dyninst: Computing -e2fsprogs: Storage -easy-checker: sig-EasyLife -easymock: sig-Java -ebtables: Networking -ecj: dev-utils -eclipse: sig-Java -eclipse-cdt: sig-Java -eclipse-ecf: sig-Java -eclipse-egit: sig-Java -eclipse-emf: sig-Java -eclipse-gef: sig-Java -eclipse-jgit: sig-Java -eclipse-launchbar: sig-Java -eclipse-license: sig-Java -eclipse-linuxtools: sig-Java -eclipse-m2e-workspace: sig-Java -eclipse-mylyn: sig-Java -eclipse-photran: sig-Java -eclipse-ptp: sig-Java -eclipse-remote: sig-Java -eclipse-subclipse: sig-Java -eclipse-swtbot: sig-Java -eclipse-tm-terminal: sig-Java -eclipselink: sig-Java -eclipselink-persistence-api: sig-Java -ed: Base-service -ed25519-java: sig-Java -edac-utils: sig-recycle -edk2: Virt -efi-rpm-macros: sig-OS-Builder -efibootmgr: sig-OS-Builder -efivar: sig-OS-Builder -efl: sig-compat-winapp -eggo: sig-CloudNative -egl-wayland: Programming-language -eglexternalplatform: Programming-language -ehcache-core: sig-Java -ehcache-parent: sig-Java -ehcache-sizeof-agent: sig-Java -eigen: ai -eigen3: sig-epol -eigen3_cmake_module: sig-ROS -elfutils: Base-service -eli5: ai -elinks: Application -elixir: Application -emacs: Desktop -emacs-auctex: Others -embedded: sig-embedded -embedded-kernel: sig-embedded -emma: dev-utils -enca: dev-utils -enchant: Desktop -enchant2: Programming-language -engine-db-query: oVirt -engrampa: sig-UKUI -enscript: Application -ensmallen: ai -entr: System-tool -environment-modules: Base-service -eog: GNOME -eom: sig-mate-desktop -epiphany: Desktop -epstool: bigdata -epydoc: Base-service -erlang: Programming-language -erlang-eflame: Programming-language -erlang-erlsyslog: Programming-language -erlang-erlydtl: Programming-language -erlang-getopt: Programming-language -erlang-gettext: Programming-language -erlang-hamcrest: Programming-language -erlang-lfe: Programming-language -erlang-meck: Programming-language -erlang-mustache: Programming-language -erlang-neotoma: Programming-language -erlang-proper: Programming-language -erlang-protobuffs: Application -erlang-rebar: Application -erlang-rpm-macros: Programming-language -erlang-sd_notify: Programming-language -esc: Application -espeak-ng: Others -etcd: sig-CloudNative -etckeeper: Application -ethercat-igh: sig-industrial-control -ethtool: Networking -etmem: Storage -eulerfs: Kernel -eulerfs-test: Kernel -evince: GNOME -evo-inflector: sig-Java -evolution-data-server: Desktop -exec-maven-plugin: sig-Java -execstack: sig-Ha -executive_smach: sig-ROS -exempi: Base-service -exfat-utils: Application -exfatprogs: Application -exiv2: Desktop -exo: xfce -expat: Base-service -expect: Base-service -expresso: sig-nodejs -extfuse: Storage -extlinux-bootloader: Private -extra-cmake-modules: sig-UKUI -extra166y: sig-Java -ezmorph: dev-utils -f29-backgrounds: Private -f2fs-tools: dev-utils -faad2: sig-epol -fabtests: Others -facter: 
Base-service -fakechroot: System-tool -fakeroot: Programming-language -farstream02: Others -fastdb: DB -fastdfs: Storage -fasterxml-oss-parent: sig-Java -fastp: sig-bio -fastqc: sig-bio -fastutil: sig-Java -faust: Application -fbida: sig-epol -fbset: sig-epol -fcgi: dev-utils -fcitx: Desktop -fcitx-cloudpinyin: Desktop -fcitx-configtool: Desktop -fcitx-libpinyin: Desktop -fcitx-qt5: Desktop -fcitx-sunpinyin: Desktop -fcoe-utils: System-tool -fdupes: Application -feedbackd: GNOME -felix-bundlerepository: sig-Java -felix-framework: sig-Java -felix-gogo-command: sig-Java -felix-gogo-parent: sig-Java -felix-gogo-runtime: Application -felix-gogo-shell: Application -felix-main: sig-Java -felix-osgi-compendium: Base-service -felix-osgi-core: Base-service -felix-osgi-foundation: Base-service -felix-osgi-obr: Base-service -felix-osgi-obr-resolver: sig-Java -felix-parent: sig-Java -felix-scr: sig-Java -felix-scr-annotations: Application -felix-scr-generator: Application -felix-shell: Base-service -felix-utils: sig-Java -fence-agents: sig-Ha -fence-virt: sig-Ha -festival: Others -festival-freebsoft-utils: Others -fetch-crl: Application -fetchmail: Application -ffmpeg: Desktop -ffmpegthumbnailer: Desktop -fftw: Runtime -fftw2: Application -figlet: dev-utils -file: Storage -file-roller: GNOME -filebench: Storage -filesystem: Storage -filter: sig-ROS -findbugs: sig-Java -findbugs-bcel: sig-Java -findutils: Base-service -fio: Application -fipscheck: Base-service -firebird: DB -firefox: Application -firesim: sig-RISC-V -firewalld: Networking -fish: Desktop -flac: Computing -flang: Compiler -flatbuffers: dev-utils -flatpak: Programming-language -flatpak-builder: GNOME -flex: Base-service -flexiblas: sig-epol -flink: bigdata -flite: Others -fltk: Desktop -fluid-soundfont: Private -fluidsynth: Application -flume: bigdata -fmpp: Application -fmt: dev-utils -folks: GNOME -folks-telepathy: Private -fontawesome-fonts: System-tool -fontconfig: Desktop -fontforge: Application -fontpackages: sig-recycle -fonts-rpm-macros: Others -fonts-tweak-tool: Application -foomatic: DB -foomatic-db: DB -foonathan_memory_vendor: sig-ROS -fop: sig-Java -forbidden-apis: Application -foreman: sig-ops -foreman-installer: sig-ops -foreman-proxy: sig-ops -foreman-selinux: sig-ops -forge-parent: Application -fpaste: Base-service -fping: dev-utils -fprintd: System-tool -freeglut: Runtime -freeimage: dev-utils -freeipa: oVirt -freeipmi: System-tool -freemarker: dev-utils -freeradius: System-tool -freeradius-client: Networking -freerdp: Application -freetds: Runtime -freetype: Desktop -freexl: dev-utils -frei0r-plugins: Base-service -fribidi: Desktop -fros: Desktop -fstrm: sig-epol -fswatch: System-tool -ftgl: bigdata -ftp: Networking -fuse: Storage -fuse-exfat: Application -fuse-overlayfs: sig-CloudNative -fuse-python: Others -fuse-sshfs: sig-CloudNative -fuse3: Storage -fusesource-pom: Base-service -future: Base-service -fwupd: System-tool -fwupdate: Private -fxload: Storage -g2clib: sig-recycle -gajim: sig-mate-desktop -galera: Others -game-music-emu: dev-utils -gamemode: Desktop -gamin: sig-recycle -ganglia: Base-service -garcon: xfce -gavl: Others -gawk: Base-service -gazebo_ros_pkgs: sig-ROS -gazelle: sig-high-performance-network -gazelle-cni: sig-high-performance-network -gc: Base-service -gcab: GNOME -gcc: Compiler -gcc-anti-sca: Compiler -gcc-cross: Compiler -gcc_secure: Others -gcolor2: sig-mate-desktop -gcr: Desktop -gd: Desktop -gdal: Application -gdb: Computing -gdbm: Storage -gdbus-codegen-glibmm: sig-KIRAN-DESKTOP -gdcm: 
sig-epol -gdisk: Storage -gdk-pixbuf-xlib: Desktop -gdk-pixbuf2: Desktop -gdlmm: GNOME -gdm: Desktop -geany: Desktop -gearmand: Application -gecode: sig-epol -gedit: GNOME -gedit-control-your-tabs: GNOME -gegl04: Others -gemini-blueprint: Base-service -gencpp: sig-ROS -genders: Application -geneus: sig-ROS -genlisp: sig-ROS -genmsg: sig-ROS -gennodejs: sig-ROS -genpy: sig-ROS -genwqe-tools: Application -geo-coding: dev-utils -geoclue2: Desktop -geocode-glib: Desktop -geolatte-geom: DB -geolite2: sig-recycle -geometry: sig-ROS -geometry2: sig-ROS -geometry_tutorials: sig-ROS -geos: dev-utils -geronimo-annotation: sig-Java -geronimo-commonj: sig-Java -geronimo-ejb: dev-utils -geronimo-interceptor: sig-Java -geronimo-jaspic-spec: sig-Java -geronimo-jaxrpc: sig-Java -geronimo-jcache: dev-utils -geronimo-jcdi-1.0-api: dev-utils -geronimo-jms: sig-Java -geronimo-jpa: dev-utils -geronimo-jta: sig-Java -geronimo-osgi-support: dev-utils -geronimo-parent-poms: Application -geronimo-saaj: sig-Java -geronimo-validation: sig-Java -gettext: Base-service -gfbgraph: GNOME -gffcompare: sig-bio -gffread: sig-bio -gflags: Programming-language -gfs2-utils: System-tool -ghostscript: Base-service -gi-docgen: GNOME -giflib: Desktop -gigolo: xfce -gimp: Others -gio-qt: Desktop -giraph: bigdata -git: Base-service -git-basics: sig-OSCourse -git-cola: dev-utils -git-lfs: dev-utils -git-review: dev-utils -git-secret: dev-utils -git-tools: dev-utils -gitbook-theme-hugo: Infrastructure -gjs: Desktop -gl-manpages: Application -gl2ps: bigdata -gl_dependency: sig-ROS -glade: Desktop -glade3: sig-mate-desktop -glassfish-annotation-api: sig-Java -glassfish-dtd-parser: dev-utils -glassfish-ejb-api: Base-service -glassfish-el: sig-Java -glassfish-fastinfoset: Base-service -glassfish-gmbal: sig-Java -glassfish-hk2: sig-Java -glassfish-jax-rs-api: Networking -glassfish-jaxb: dev-utils -glassfish-jaxb-api: Others -glassfish-jaxrpc-api: sig-Java -glassfish-jsp: dev-utils -glassfish-jsp-api: sig-Java -glassfish-legal: DB -glassfish-management-api: sig-Java -glassfish-master-pom: Base-service -glassfish-pfl: sig-Java -glassfish-servlet-api: sig-Java -glassfish-toplink-essentials: sig-Java -glassfish-transaction-api: sig-Java -glassfish-websocket-api: Base-service -glew: Runtime -glib: Programming-language -glib-networking: GNOME -glib2: Base-service -glibc: Computing -glibmm24: Others -glm: dev-utils -globalization: G11N -glog: Application -glpk: ai -glslang: sig-epol -gluster_exporter: sig-CloudNative -glusterfs: Storage -glyphicons-halflings-fonts: Application -gmavenplus-plugin: Programming-language -gmetric4j: Application -gmetrics: dev-utils -gmime30: Base-service -gmp: Computing -gmt: Application -gnocchi: sig-openstack -gnome-abrt: Desktop -gnome-autoar: GNOME -gnome-backgrounds: GNOME -gnome-bluetooth: Desktop -gnome-boxes: Desktop -gnome-builder: GNOME -gnome-calculator: GNOME -gnome-calendar: GNOME -gnome-characters: GNOME -gnome-clocks: Desktop -gnome-color-manager: GNOME -gnome-common: Programming-language -gnome-connections: GNOME -gnome-console: GNOME -gnome-contacts: Desktop -gnome-control-center: GNOME -gnome-desktop: Application -gnome-desktop3: GNOME -gnome-dictionary: Desktop -gnome-disk-utility: GNOME -gnome-doc-utils: Desktop -gnome-font-viewer: GNOME -gnome-getting-started-docs: GNOME -gnome-icon-theme: Desktop -gnome-icon-theme-extras: sig-recycle -gnome-icon-theme-symbolic: sig-recycle -gnome-initial-setup: Desktop -gnome-keyring: Desktop -gnome-logs: GNOME -gnome-maps: GNOME -gnome-menus: Desktop 
-gnome-music: GNOME -gnome-online-accounts: GNOME -gnome-online-miners: GNOME -gnome-packagekit: Others -gnome-photos: GNOME -gnome-python2: sig-recycle -gnome-remote-desktop: GNOME -gnome-screenshot: Desktop -gnome-session: Desktop -gnome-settings-daemon: Desktop -gnome-shell: Desktop -gnome-shell-extension-appindicator: GNOME -gnome-shell-extension-bubblemail: GNOME -gnome-shell-extension-caffeine: GNOME -gnome-shell-extension-customize-ibus: GNOME -gnome-shell-extension-dash-to-dock: GNOME -gnome-shell-extension-desktop-icons: GNOME -gnome-shell-extension-disconnect-wifi: GNOME -gnome-shell-extension-gsconnect: GNOME -gnome-shell-extension-historymanager-prefix-search: GNOME -gnome-shell-extension-system-monitor-applet: GNOME -gnome-shell-extension-topicons-plus: GNOME -gnome-shell-extension-windowoverlay-icons: GNOME -gnome-shell-extensions: Desktop -gnome-shell-theme-flat-remix: GNOME -gnome-software: Desktop -gnome-system-monitor: Desktop -gnome-terminal: Desktop -gnome-text-editor: GNOME -gnome-themes-standard: GNOME -gnome-tour: GNOME -gnome-tweaks: GNOME -gnome-user-docs: GNOME -gnome-user-share: GNOME -gnome-vfs2: GNOME -gnome-video-effects: GNOME -gnome-weather: GNOME -gnu-efi: Programming-language -gnu-free-fonts: Desktop -gnu-getopt: sig-Java -gnulib: Base-service -gnupg: Application -gnupg2: sig-security-facility -gnuplot: Application -gnustep-base: sig-epol -gnustep-make: sig-epol -gnutls: sig-security-facility -go-bindata: sig-high-performance-network -go-compilers: sig-recycle -go-gitee: Infrastructure -go-ovirt-engine-sdk4: oVirt -go-rpm-macros: sig-OKD -go-srpm-macros: sig-OKD -goaccess: Application -gobject-introspection: Base-service -goebpf: sig-high-performance-network -golang: sig-golang -golang-github-coreos-go-iptables: sig-recycle -golang-github-cpuguy83-go-md2man: sig-recycle -golang-github-d2g-dhcp4: sig-recycle -golang-github-fsnotify-fsnotify: sig-recycle -golang-github-go-tomb-tomb: sig-recycle -golang-github-golang-sys: sig-recycle -golang-github-hpcloud-tail: sig-recycle -golang-github-onsi-ginkgo: sig-recycle -golang-github-onsi-gomega: sig-recycle -golang-github-russross-blackfriday: sig-recycle -golang-github-vishvananda-netlink: sig-recycle -golang-github-vishvananda-netns: sig-recycle -golang-googlecode-go-crypto: sig-recycle -golang-googlecode-goprotobuf: sig-recycle -golang-googlecode-net: sig-recycle -golang-googlecode-text: sig-recycle -golang-googlecode-tools: sig-recycle -golang-gopkg-yaml: sig-recycle -gom: GNOME -google-api-core: sig-openstack -google-api-python-client: sig-python-modules -google-auth-httplib2: sig-openstack -google-croscore-fonts: Desktop -google-crosextra-carlito-fonts: Application -google-droid-fonts: Desktop -google-gson: dev-utils -google-guice: sig-Java -google-http-java-client: sig-Java -google-noto-cjk-fonts: Desktop -google-noto-emoji-fonts: Desktop -google-noto-fonts: Desktop -google-oauth-java-client: sig-Java -google-roboto-slab-fonts: Application -google_benchmark_vendor: sig-ROS -googleapis-common-protos: sig-openstack -googletest: sig-ROS -goversioninfo: sig-OKD -gpac: sig-epol -gpars: sig-Java -gparted: sig-mate-desktop -gpdb: DB -gperf: Programming-language -gperftools: Computing -gpgme: Base-service -gphoto2: Base-service -gpm: Desktop -gradle: sig-Java -grafana: Application -grantlee: sig-UKUI -grantlee-qt5: sig-UKUI -graphene: GNOME -graphite2: Desktop -graphviz: Desktop -greatsql: DB -grep: Base-service -grilo: Desktop -grilo-plugins: GNOME -grizzly: dev-utils -grizzly-npn: sig-Java -groff: Base-service 
-gromacs: Application -groovy: sig-Java -groovy18: sig-Java -grpc: Networking -grub2: sig-OS-Builder -grubby: Base-service -gsbase: sig-Java -gsettings-desktop-schemas: Desktop -gsettings-qt: sig-UKUI -gshhg-gmt-nc4: Application -gsl: Runtime -gsm: Desktop -gsoap: Application -gsound: GNOME -gspell: GNOME -gssdp: Programming-language -gssntlmssp: dev-utils -gssproxy: Base-service -gst-editing-services: GNOME -gstreamer: sig-recycle -gstreamer-plugins-base: sig-recycle -gstreamer-plugins-good: sig-recycle -gstreamer1: Desktop -gstreamer1-libav: GNOME -gstreamer1-plugins-bad-free: Programming-language -gstreamer1-plugins-base: Desktop -gstreamer1-plugins-good: Others -gtest: Programming-language -gtk: Others -gtk-doc: GNOME -gtk-layer-shell: sig-mate-desktop -gtk-murrine-engine: sig-mate-desktop -gtk-vnc: GNOME -gtk2: Desktop -gtk2-engines: sig-mate-desktop -gtk3: Desktop -gtk4: Others -gtkmm24: dev-utils -gtkmm30: Others -gtksourceview3: Others -gtksourceview4: sig-mate-desktop -gtksourceview5: GNOME -gtkspell: sig-recycle -gtkspell3: dev-utils -gtkspellmm30: dev-utils -guacamole: Networking -guava: dev-utils -guava20: sig-Java -gubbi-fonts: Desktop -gucharmap: sig-mate-desktop -guile: Desktop -gumbo-parser: sig-UKUI -gupnp: Programming-language -gupnp-av: Base-service -gupnp-dlna: GNOME -gupnp-igd: Programming-language -gutenprint: System-tool -gv: Desktop -gvfs: Desktop -gvisor: sig-CloudNative -gyp: Application -gzip: Base-service -h2: DB -ha-api: sig-Ha -ha-web: sig-Ha -hadoop: bigdata -hadoop-3.1: bigdata -hamcrest: dev-utils -hands-on: sig-OSCourse -haproxy: System-tool -haproxy_exporter: sig-CloudNative -harbor: sig-CloudNative -hardinfo: dev-utils -hardlink: sig-recycle -harfbuzz: Desktop -haveged: Base-service -hawtbuf: sig-Java -hawtdispatch: dev-utils -hawtjni: sig-Java -hawtjni-runtime: Private -hazelcast: Application -hbase: bigdata -hddtemp: xfce -hdf: Application -hdf5: Runtime -hdparm: Storage -heketi: Storage -hello: dev-utils -help2man: Programming-language -hesiod: sig-recycle -hessian: sig-Java -hexedit: Base-service -hfsplus-tools: Others -hibernate: sig-Java -hibernate-commons-annotations: sig-Java -hibernate-jpa: sig-Java -hibernate-jpa-2.0-api: sig-Java -hibernate-jpa-2.1-api: sig-Java -hibernate-search: sig-Java -hibernate-validator: sig-Java -hibernate3: sig-Java -hibernate4: sig-Java -hicolor-icon-theme: Desktop -highlight: oVirt -hiredis: Base-service -hive: bigdata -hivex: System-tool -hostha: sig-openstack -hostname: Networking -hpc: sig-HPC -hpcrunner: sig-HPC -hping: dev-utils -hplip: System-tool -hppc: Base-service -hspell: Application -hsqldb: sig-Java -hsqldb1: Application -htop: dev-utils -htrace: Base-service -htslib: sig-bio -http-builder: sig-Java -http-parser: Networking -http_load: dev-utils -httpcomponents-asyncclient: Application -httpcomponents-client: sig-Java -httpcomponents-core: sig-Java -httpcomponents-project: dev-utils -httpd: Networking -httpry: Application -httpunit: sig-Java -hudi: bigdata -hue: bigdata -hunspell: Application -hunspell-ak: Application -hunspell-am: Application -hunspell-ar: Application -hunspell-as: Application -hunspell-ast: Application -hunspell-az: Application -hunspell-be: Application -hunspell-ber: Application -hunspell-bg: Application -hunspell-bn: Application -hunspell-br: Application -hunspell-ca: Application -hunspell-cop: Application -hunspell-csb: Application -hunspell-cv: Application -hunspell-cy: Application -hunspell-da: Application -hunspell-de: Application -hunspell-dsb: Application -hunspell-el: 
Application -hunspell-en: Application -hunspell-eo: Application -hunspell-es: Application -hunspell-et: Application -hunspell-eu: Application -hunspell-fa: Application -hunspell-fj: Application -hunspell-fo: Application -hunspell-fr: Application -hunspell-fur: Application -hunspell-fy: Application -hunspell-ga: Application -hunspell-gd: Application -hunspell-gl: Application -hunspell-grc: Application -hunspell-gu: Application -hunspell-gv: Application -hunspell-haw: Application -hunspell-hil: Application -hunspell-hr: Application -hunspell-hsb: Application -hunspell-ht: Application -hunspell-hu: Application -hunspell-hy: Application -hunspell-ia: Application -hunspell-id: Application -hunspell-is: Application -hunspell-it: Application -hunspell-kk: Application -hunspell-km: Application -hunspell-kn: Application -hunspell-ko: Application -hunspell-ku: Application -hunspell-ky: Application -hunspell-la: Application -hunspell-lb: Application -hunspell-ln: Application -hunspell-lt: Application -hunspell-mai: Application -hunspell-mg: Application -hunspell-mi: Application -hunspell-mk: Application -hunspell-ml: Application -hunspell-mn: Application -hunspell-mos: Application -hunspell-mr: Application -hunspell-ms: Application -hunspell-mt: Application -hunspell-nds: Application -hunspell-ne: Application -hunspell-nl: Application -hunspell-no: Application -hunspell-nr: Application -hunspell-nso: Application -hunspell-ny: Application -hunspell-oc: Application -hunspell-om: Application -hunspell-or: Application -hunspell-pa: Application -hunspell-pl: Application -hunspell-pt: Application -hunspell-qu: Application -hunspell-ro: Application -hunspell-ru: Application -hunspell-rw: Application -hunspell-sc: Application -hunspell-se: Application -hunspell-si: Application -hunspell-sk: Application -hunspell-sl: Application -hunspell-smj: Application -hunspell-so: Application -hunspell-sq: Application -hunspell-sr: Application -hunspell-ss: Application -hunspell-st: Application -hunspell-sv: Application -hunspell-sw: Application -hunspell-ta: Application -hunspell-te: Application -hunspell-tet: Application -hunspell-th: Application -hunspell-ti: Application -hunspell-tl: Application -hunspell-tn: Application -hunspell-tpi: Application -hunspell-ts: Application -hunspell-uk: Application -hunspell-ur: Application -hunspell-uz: Application -hunspell-ve: Application -hunspell-vi: Application -hunspell-wa: Application -hunspell-xh: Application -hunspell-yi: Application -hunspell-zu: Application -hwdata: Computing -hwinfo: Computing -hwloc: Application -hyperscan: Desktop -hyphen: Application -hyphen-as: Application -hyphen-bg: Application -hyphen-bn: Application -hyphen-ca: Application -hyphen-cy: Application -hyphen-da: Application -hyphen-de: Application -hyphen-el: Application -hyphen-es: Application -hyphen-eu: Application -hyphen-fa: Application -hyphen-fo: Application -hyphen-fr: Application -hyphen-ga: Application -hyphen-gl: Application -hyphen-gu: Application -hyphen-hi: Application -hyphen-hsb: Application -hyphen-ia: Application -hyphen-id: Application -hyphen-is: Application -hyphen-it: Application -hyphen-kn: Application -hyphen-ku: Application -hyphen-lt: Application -hyphen-ml: Application -hyphen-mn: Application -hyphen-mr: Application -hyphen-nl: Application -hyphen-or: Application -hyphen-pa: Application -hyphen-pl: Application -hyphen-pt: Application -hyphen-ro: Application -hyphen-ru: Application -hyphen-sa: Application -hyphen-sk: Application -hyphen-sl: Application -hyphen-sv: 
Application -hyphen-ta: Application -hyphen-te: Application -hyphen-tk: Application -hyphen-uk: Application -i2c-tools: Computing -i40e: Networking -iSulad: iSulad -iSulad-img: sig-CloudNative -iavf: Networking -ibis: bigdata -ibus: Desktop -ibus-hangul: Desktop -ibus-kkc: Desktop -ibus-libpinyin: Desktop -ibus-libzhuyin: Desktop -ibus-m17n: Desktop -ibus-sayura: Desktop -ibus-table: Desktop -ibus-table-array30: Desktop -ibus-table-chinese: Desktop -ibus-theme-tools: Desktop -ibus-typing-booster: Desktop -icc-profiles-openicc: Application -iceberg: bigdata -icedtea-web: Compiler -icfg: Networking -icon-naming-utils: Desktop -icoutils: Others -icu: Base-service -icu4j: sig-Java -id3lib: sig-epol -idlj-maven-plugin: sig-Java -idm-console-framework: Application -iftop: dev-utils -igh-ethercat-xenomai: sig-industrial-control -ignite: bigdata -ignition: sig-OKD -iio-sensor-proxy: Application -ilmbase: Programming-language -im-chooser: sig-mate-desktop -ima-evm-utils: Base-service -imageTailor: sig-OS-Builder -image_common: sig-ROS -image_pipeline: sig-ROS -image_transport: sig-ROS -image_transport_plugins: sig-ROS -imake: Desktop -imgbased: oVirt -imlib2: sig-UKUI -impala: bigdata -imsettings: Desktop -imwheel: sig-UKUI -incubator-mxnet: ai -indent: Application -indicator-china-weather: sig-UKUI -infiniband-diags: sig-recycle -infinispan: sig-Java -influxdb_exporter: sig-CloudNative -infrastructure: Infrastructure -inih: dev-utils -iniparser: dev-utils -initial-setup: System-tool -initscripts: Networking -inkscape: Private -inotify-tools: Application -inst-source-utils: sig-perl-modules -install-scripts: sig-OS-Builder -integration-test: sig-QA -intel-cmt-cat: Programming-language -intel-device-plugins-for-kubernetes: sig-confidential-computing -intel-sgx-ssl: sig-confidential-computing -interactive_markers: sig-ROS -internal-issue: Private -intltool: Programming-language -invokebinder: sig-Java -inxi: sig-cinnamon -ioping: Application -ioprocess: oVirt -iotop: Storage -iowatcher: sig-recycle -iozone: dev-utils -ipcalc: Networking -iperf2: Application -iperf3: Application -ipipe: sig-industrial-control -ipmitool: Networking -iproute: Networking -iprutils: Storage -ipset: Networking -iptables: Networking -iptraf-ng: Networking -iptstate: Networking -iputils: Networking -ipvsadm: Networking -ipwatchd: dev-utils -ipxe: sig-OS-Builder -ipython: Private -irclib: sig-Java -ironjacamar: sig-Java -irqbalance: Computing -irrXML: Private -irrlicht: Others -irssi: Application -isa-l: dev-utils -isl: sig-compat-winapp -iso-codes: Base-service -isomd5sum: Base-service -isorelax: Base-service -istack-commons: Base-service -isula-build: iSulad -isula-transform: iSulad -itext: sig-Java -itrustee_client: sig-confidential-computing -itrustee_sdk: sig-confidential-computing -itrustee_tzdriver: sig-confidential-computing -itstool: Programming-language -ivtv-firmware: Base-service -iw: Networking -jBCrypt: dev-utils -jFormatString: sig-Java -jack-audio-connection-kit: sig-compat-winapp -jackcess: sig-Java -jackcess-encrypt: sig-Java -jackson: sig-Java -jackson-annotations: sig-Java -jackson-bom: sig-Java -jackson-core: sig-Java -jackson-databind: sig-Java -jackson-dataformat-xml: sig-Java -jackson-dataformats-binary: sig-Java -jackson-dataformats-text: sig-Java -jackson-datatype-joda: sig-Java -jackson-datatypes-collections: sig-Java -jackson-jaxrs-providers: sig-Java -jackson-modules-base: sig-Java -jackson-parent: sig-Java -jacoco: sig-Java -jacorb: sig-Java -jaf: sig-Java -jai-imageio-core: Private 
-jakarta-commons-httpclient: sig-Java -jakarta-el: dev-utils -jakarta-oro: dev-utils -jakarta-server-pages: dev-utils -jakarta-servlet: sig-Java -jamm: Computing -jamonapi: sig-Java -jandex: sig-Java -jandex-maven-plugin: sig-Java -janino: sig-Java -jansi: sig-Java -jansi-native: sig-Java -jansson: Base-service -jarjar: sig-Java -jasper: sig-recycle -jasperreports: sig-Java -jastow: sig-Java -jasypt: sig-Java -jatl: sig-Java -java: Private -java-atk-wrapper: Application -java-base64: sig-Java -java-client-kubevirt: oVirt -java-comment-preprocessor: sig-Java -java-libpst: sig-Java -java-oauth: sig-Java -java-ovirt-engine-sdk4: oVirt -java-service-wrapper: sig-Java -java-uuid-generator: sig-Java -java-xmlbuilder: sig-Java -java_cup: dev-utils -javacc: sig-Java -javacc-maven-plugin: sig-Java -javaewah: sig-Java -javamail: Application -javapackages-tools: sig-Java -javaparser: dev-utils -javapoet: sig-Java -javassist: sig-Java -jaxb2-common-basics: sig-Java -jaxb2-maven-plugin: sig-Java -jaxen: sig-Java -jberet: sig-Java -jbig2dec: Desktop -jbigkit: Desktop -jboss-annotations-1.2-api: dev-utils -jboss-batch-1.0-api: sig-Java -jboss-classfilewriter: sig-Java -jboss-common-beans: sig-Java -jboss-concurrency-1.0-api: sig-Java -jboss-connector-1.6-api: dev-utils -jboss-connector-1.7-api: sig-Java -jboss-dmr: sig-Java -jboss-ejb-3.1-api: sig-Java -jboss-ejb-3.2-api: sig-Java -jboss-ejb-client: sig-Java -jboss-ejb3-ext-api: sig-Java -jboss-el: Programming-language -jboss-el-2.2-api: sig-Java -jboss-el-3.0-api: sig-Java -jboss-iiop-client: sig-Java -jboss-integration: sig-Java -jboss-interceptors-1.1-api: sig-Java -jboss-interceptors-1.2-api: sig-Java -jboss-invocation: sig-Java -jboss-jacc-1.4-api: sig-Java -jboss-jacc-1.5-api: sig-Java -jboss-jaspi-1.0-api: sig-Java -jboss-jaspi-1.1-api: sig-Java -jboss-jaxb-2.2-api: sig-Java -jboss-jaxrpc-1.1-api: sig-Java -jboss-jaxrs-2.0-api: dev-utils -jboss-jaxws-2.2-api: sig-Java -jboss-jms-1.1-api: sig-Java -jboss-jms-2.0-api: Application -jboss-jsf-2.1-api: dev-utils -jboss-jsf-2.2-api: sig-Java -jboss-jsp-2.2-api: sig-Java -jboss-jsp-2.3-api: dev-utils -jboss-jstl-1.2-api: dev-utils -jboss-logging: sig-Java -jboss-logging-tools: sig-Java -jboss-logging-tools1: sig-Java -jboss-logmanager: sig-Java -jboss-marshalling: sig-Java -jboss-metadata: sig-Java -jboss-modules: sig-Java -jboss-msc: sig-Java -jboss-negotiation: sig-Java -jboss-parent: sig-CloudNative -jboss-remote-naming: sig-Java -jboss-remoting: sig-Java -jboss-remoting-jmx: sig-Java -jboss-rmi-1.0-api: sig-Java -jboss-sasl: sig-Java -jboss-servlet-2.5-api: sig-Java -jboss-servlet-3.0-api: sig-Java -jboss-servlet-3.1-api: dev-utils -jboss-specs-parent: sig-Java -jboss-stdio: dev-utils -jboss-threads: sig-Java -jboss-transaction: Private -jboss-transaction-1.1-api: sig-Java -jboss-transaction-1.2-api: sig-Java -jboss-transaction-spi: sig-Java -jboss-vfs: sig-Java -jboss-websocket-1.0-api: dev-utils -jboss-websocket-1.1-api: sig-Java -jbossws-api: sig-Java -jbossws-parent: sig-Java -jcifs: sig-Java -jcip-annotations: sig-Java -jcodings: sig-Java -jcommon: sig-Java -jcsp: sig-Java -jctools: sig-Java -jdbi: sig-Java -jdeparser1: Base-service -jdeparser2: sig-Java -jdepend: Application -jdependency: sig-Java -jdiff: sig-Java -jdo-api: sig-Java -jdo2-api: sig-Java -jdom: sig-Java -jdom2: sig-Java -je: sig-Java -jedis: Application -jemalloc: Runtime -jenkins-executable-war: sig-Java -jenkins-xstream: sig-Java -jeromq: Base-service -jersey: sig-Java -jersey1: sig-Java -jetbrains-annotations: dev-utils 
-jets3t: sig-Java -jettison: dev-utils -jetty: sig-Java -jetty-alpn: sig-recycle -jetty-alpn-api: sig-Java -jetty-artifact-remote-resources: sig-Java -jetty-assembly-descriptors: sig-Java -jetty-build-support: sig-Java -jetty-distribution-remote-resources: sig-Java -jetty-parent: dev-utils -jetty-schemas: sig-Java -jetty-test-helper: sig-Java -jetty-test-policy: sig-Java -jetty-toolchain: sig-Java -jetty-version-maven-plugin: sig-Java -jetty8: sig-recycle -jexcelapi: dev-utils -jffi: dev-utils -jflex: sig-Java -jfreechart: sig-Java -jfsutils: Base-service -jgit: sig-Java -jgroups: sig-Java -jhighlight: sig-Java -jibx: sig-Java -jimtcl: Programming-language -jing-trang: sig-Java -jitterentropy-library: Base-service -jline: sig-Java -jline1: sig-Java -jmatio: sig-Java -jmh: sig-Java -jmock: sig-Java -jna: sig-Java -jnr-constants: dev-utils -jnr-enxio: dev-utils -jnr-ffi: dev-utils -jnr-netdb: dev-utils -jnr-posix: dev-utils -jnr-unixsocket: dev-utils -jnr-x86asm: sig-Java -joda-convert: dev-utils -joda-time: sig-Java -johnzon: sig-Java -joint_state_publisher: sig-ROS -jomolhari-fonts: Desktop -joni: dev-utils -jopt-simple: dev-utils -jose: Base-service -jpegoptim: Application -jq: Base-service -jruby: sig-ruby -js-excanvas: dev-utils -js-jquery: sig-nodejs -js-jquery1: sig-recycle -js-jquery2: sig-recycle -js-sizzle: sig-nodejs -js-underscore: Private -jsch: Application -jsch-agent-proxy: sig-Java -json-c: Base-service -json-glib: Desktop -json-lib: dev-utils -json-path: sig-Java -json-smart: sig-Java -json_simple: Application -jsoncpp: Programming-language -jsonic: sig-Java -jsonnet: Application -jsonp: Application -jsonrpc-glib: GNOME -jsoup: sig-Java -jspc: sig-Java -jsr-305: sig-Java -jsr-311: sig-Java -jss: Application -jtidy: sig-Java -jtoaster: sig-Java -jtreg: Compiler -jts: sig-Java -jul-to-slf4j-stub: sig-Java -julietaula-montserrat-fonts: System-tool -junit: dev-utils -junit-addons: sig-Java -junit5: dev-utils -junitperf: sig-Java -juniversalchardet: sig-Java -jupyter: bigdata -jvnet-parent: sig-Java -jwnl: sig-Java -jxrlib: Desktop -jython: sig-Java -jzlib: Application -kabi-dw: Kernel -kacst-fonts: Desktop -kae_driver: sig-AccLib -kafka: bigdata -kafka-python: sig-openstack -kata-containers: sig-CloudNative -kata-micro-kernel: sig-CloudNative -kata_integration: sig-CloudNative -katello: sig-ops -katran: sig-high-performance-network -kbackup: sig-KDE -kbd: Desktop -kbox: sig-ops -kbuild-standalone: Kernel -kde-filesystem: System-tool -kde-settings: Desktop -kdecoration: sig-KDE -kdevelop: sig-KDE -kdevelopkdevelop-pg-qt: sig-KDE -kdl_parser: sig-ROS -kdump-anaconda-addon: Base-service -keepalived: Networking -kernel: Kernel -kernel-portal: Kernel -kexec-tools: Base-service -keybinder: xfce -keybinder3: Desktop -keycloak-httpd-client-install: sig-security-facility -keyrings-filesystem: Others -keyutils: sig-security-facility -kf5: sig-UKUI -kf5-attica: sig-KDE -kf5-bluez-qt: sig-KDE -kf5-frameworkintegration: sig-KDE -kf5-kactivities: sig-KDE -kf5-karchive: sig-UKUI -kf5-kauth: sig-UKUI -kf5-kbookmarks: sig-KDE -kf5-kcmutils: sig-KDE -kf5-kcodecs: sig-UKUI -kf5-kcompletion: sig-KDE -kf5-kconfig: sig-UKUI -kf5-kconfigwidgets: sig-UKUI -kf5-kcoreaddons: sig-UKUI -kf5-kcrash: sig-KDE -kf5-kdbusaddons: sig-KDE -kf5-kdeclarative: sig-KDE -kf5-kded: sig-KDE -kf5-kdelibs4support: sig-KDE -kf5-kdesignerplugin: sig-KDE -kf5-kdesu: sig-KDE -kf5-kdewebkit: sig-KDE -kf5-kdoctools: sig-UKUI -kf5-kemoticons: sig-KDE -kf5-kglobalaccel: sig-KDE -kf5-kguiaddons: sig-UKUI -kf5-khtml: sig-KDE 
-kf5-ki18n: sig-UKUI -kf5-kiconthemes: sig-KDE -kf5-kidletime: sig-UKUI -kf5-kinit: sig-KDE -kf5-kio: sig-KDE -kf5-kirigami2: sig-KDE -kf5-kitemmodels: sig-KDE -kf5-kitemviews: sig-KDE -kf5-kjobwidgets: sig-KDE -kf5-kjs: sig-KDE -kf5-knewstuff: sig-KDE -kf5-knotifications: sig-KDE -kf5-knotifyconfig: sig-KDE -kf5-kpackage: sig-KDE -kf5-kparts: sig-KDE -kf5-kplotting: sig-KDE -kf5-kpty: sig-KDE -kf5-krunner: sig-KDE -kf5-kservice: sig-KDE -kf5-ktexteditor: sig-KDE -kf5-ktextwidgets: sig-KDE -kf5-kunitconversion: sig-KDE -kf5-kwallet: sig-KDE -kf5-kwayland: sig-UKUI -kf5-kwidgetsaddons: sig-UKUI -kf5-kwindowsystem: sig-UKUI -kf5-kxmlgui: sig-KDE -kf5-networkmanager-qt: sig-KDE -kf5-plasma: sig-KDE -kf5-solid: sig-UKUI -kf5-sonnet: sig-KDE -kf5-syntax-highlighting: sig-KDE -kf5-threadweaver: sig-KDE -khmeros-fonts: System-tool -kim-api: Application -kiran-authentication-service: sig-KIRAN-DESKTOP -kiran-avatar-editor: sig-KIRAN-DESKTOP -kiran-biometrics: sig-KIRAN-DESKTOP -kiran-calculator: sig-KIRAN-DESKTOP -kiran-calendar: sig-KIRAN-DESKTOP -kiran-cc-daemon: sig-KIRAN-DESKTOP -kiran-control-panel: sig-KIRAN-DESKTOP -kiran-cpanel-account: sig-KIRAN-DESKTOP -kiran-cpanel-appearance: sig-KIRAN-DESKTOP -kiran-cpanel-display: sig-KIRAN-DESKTOP -kiran-cpanel-keybinding: sig-KIRAN-DESKTOP -kiran-cpanel-keyboard: sig-KIRAN-DESKTOP -kiran-cpanel-menu: sig-KIRAN-DESKTOP -kiran-cpanel-mouse: sig-KIRAN-DESKTOP -kiran-cpanel-power: sig-KIRAN-DESKTOP -kiran-cpanel-timedate: sig-KIRAN-DESKTOP -kiran-desktop: sig-KIRAN-DESKTOP -kiran-flameshot: sig-KIRAN-DESKTOP -kiran-gtk-theme: sig-KIRAN-DESKTOP -kiran-icon-theme: sig-KIRAN-DESKTOP -kiran-log: sig-KIRAN-DESKTOP -kiran-menu: sig-KIRAN-DESKTOP -kiran-panel: sig-KIRAN-DESKTOP -kiran-qdbusxml2cpp: sig-KIRAN-DESKTOP -kiran-qt5-integration: sig-KIRAN-DESKTOP -kiran-screensaver: sig-KIRAN-DESKTOP -kiran-screensaver-dialog: sig-KIRAN-DESKTOP -kiran-session-guard: sig-KIRAN-DESKTOP -kiran-session-manager: sig-KIRAN-DESKTOP -kiran-themes: sig-KIRAN-DESKTOP -kiran-wallpapers: sig-KIRAN-DESKTOP -kiran-widgets-qt5: sig-KIRAN-DESKTOP -kite-sdk: bigdata -kiwi: Base-service -kiwi-dlimage: Private -kiwi-template-openEuler: Private -kml_adapter: sig-AccLib -kmod: Computing -kmod-drbd90: sig-Ha -kmod-kvdo: Runtime -kmodtool: dev-utils -knox: bigdata -kohsuke-pom: Application -kpatch: Base-service -krb5: Base-service -kronosnet: Networking -kryo: sig-Java -ksc-defender: sig-security-facility -kscreenlocker: sig-KDE -ksh: Base-service -kubeedge: sig-Edge -kubekey: sig-KubeSphere -kubernetes: sig-CloudNative -kubevirt: sig-CloudNative -kudu: bigdata -kunpengsecl: sig-security-facility -kurdit-unikurd-web-fonts: Desktop -kvm-bindings: Virt -kvm-ioctls: Virt -kwayland-integration: sig-KDE -kwayland-server: sig-KDE -kwin: sig-KDE -kxml: sig-Java -kylin-burner: sig-UKUI -kylin-calculator: sig-UKUI -kylin-display-switch: sig-UKUI -kylin-installer: sig-UKUI -kylin-ipmsg: sig-UKUI -kylin-music: sig-UKUI -kylin-nm: sig-UKUI -kylin-photo-viewer: sig-UKUI -kylin-printer: sig-UKUI -kylin-recorder: sig-UKUI -kylin-scanner: sig-UKUI -kylin-screenshot: sig-UKUI -kylin-software-center: sig-UKUI -kylin-usb-creator: sig-UKUI -kylin-user-guide: sig-UKUI -kylin-video: sig-UKUI -kyotocabinet: Others -kyua: Base-service -labltk: Programming-language -ladspa: dev-utils -lame: Others -lammps: sig-HPC -langpacks: sig-recycle -langtable: Base-service -language-detector: sig-Java -lanzhou_university_2021: sig-recycle -lapack: Programming-language -laser_assembler: sig-ROS -laser_filters: sig-ROS 
-laser_geometry: sig-ROS -laser_pipeline: sig-ROS -lasso: Base-service -lastpass-cli: Application -latex2html: Others -latexmk: sig-perl-modules -lato-fonts: Desktop -latrace: sig-recycle -launch: sig-ROS -launch_ros: sig-ROS -layer-shell-qt: sig-KDE -lcdf-typetools: Application -lcms2: Desktop -lcr: iSulad -ldapjdk: dev-utils -ldaptive: sig-Java -ldns: Networking -leatherman: Base-service -ledmon: Application -lensfun: dev-utils -lep: sig-embedded -leptonica: Base-service -less: Base-service -lettuce: sig-Java -leveldb: System-tool -leveldb-java: sig-Java -leveldbjni: sig-Java -lfs-course: sig-OSCourse -lftp: Networking -li-wen: sig-EasyLife -lib-shim-v2: iSulad -libCoAP: sig-embedded -libEMF: Desktop -libICE: Desktop -libIDL: Base-service -libSM: Desktop -libX11: Desktop -libXScrnSaver: Programming-language -libXau: Desktop -libXaw: Desktop -libXcomposite: Desktop -libXcursor: Desktop -libXdamage: Desktop -libXdmcp: Desktop -libXext: Desktop -libXfixes: Desktop -libXfont2: Desktop -libXft: Desktop -libXi: Desktop -libXinerama: Desktop -libXmu: Desktop -libXp: Programming-language -libXpm: Desktop -libXpresent: sig-mate-desktop -libXrandr: Desktop -libXrender: Desktop -libXres: Desktop -libXt: Desktop -libXtst: Desktop -libXv: Desktop -libXvMC: Desktop -libXxf86dga: Desktop -libXxf86misc: sig-recycle -libXxf86vm: Desktop -libabigail: Application -libadwaita: GNOME -libaec: Application -libaesgm: Others -libaio: Storage -libao: Runtime -libappindicator: Desktop -libappstream-glib: Programming-language -libapr1: Application -libarchive: Base-service -libart_lgpl: Desktop -libass: Application -libassuan: Networking -libasyncns: Desktop -libatasmart: Desktop -libatomic_ops: Computing -libavc1394: Runtime -libblockdev: Storage -libbluray: Desktop -libbonobo: Desktop -libbonoboui: Desktop -libboundscheck: sig-libboundscheck -libbpf: sig-high-performance-network -libbs2b: Application -libbsd: Base-service -libburn: Others -libburn1: Application -libbytesize: Base-service -libcaca: sig-epol -libcacard: Desktop -libcanberra: Desktop -libcap: sig-security-facility -libcap-ng: Base-service -libcareplus: Virt -libcbor: Base-service -libcddb: sig-epol -libcdio: Desktop -libcdio-paranoia: Desktop -libcec: sig-epol -libcgroup: sig-CloudNative -libchamplain: GNOME -libclc: Base-service -libcomps: Base-service -libconfig: Base-service -libconfuse: Base-service -libcroco: sig-recycle -libcrystalhd: sig-UKUI -libcue: Desktop -libcutl: sig-KIRAN-DESKTOP -libcxx: Compiler -libcxxabi: Compiler -libcyaml: Base-service -libdaemon: Base-service -libdap: sig-recycle -libdatrie: Base-service -libdazzle: Desktop -libdb: Base-service -libdbi: Base-service -libdbusextended-qt5: Desktop -libdbusmenu: Programming-language -libdc1394: sig-epol -libdca: sig-epol -libdeflate: sig-bio -libdmapsharing: Application -libdmx: Desktop -libdnet: Networking -libdnf: sig-OS-Builder -libdrm: Desktop -libdv: Programming-language -libdvbpsi: sig-epol -libdvdnav: Application -libdvdread: Application -libdwarf: Programming-language -libeasyfc: Application -libebml: sig-epol -libecap: Base-service -libecb: dev-utils -libedit: Base-service -libell: Programming-language -libepoxy: Desktop -liberasurecode: sig-openstack -liberation-fonts: System-tool -liberation-sans-fonts: sig-recycle -libesmtp: Networking -libestr: Base-service -libetpan: sig-UKUI -libev: Base-service -libevdev: Computing -libevent: Base-service -libevhtp: sig-CloudNative -libewf: Others -libexif: Desktop -libfabric: Programming-language -libfastcommon: Storage 
-libfastjson: Base-service -libffado: sig-compat-winapp -libffi: Base-service -libfm: xfce -libfontenc: Desktop -libfprint: sig-UKUI -libfreenect: sig-epol -libftdi: sig-embedded -libgadu: Application -libgcrypt: Networking -libgdata: Desktop -libgdiplus: Base-service -libgdither: Others -libgdl: GNOME -libgee: Desktop -libgeotiff: dev-utils -libgexiv2: Base-service -libgit2: Base-service -libgit2-glib: Base-service -libglade2: Desktop -libglademm24: xfce -libglvnd: Desktop -libgnome: GNOME -libgnome-keyring: Programming-language -libgnomecanvas: GNOME -libgnomecanvasmm26: xfce -libgnomekbd: GNOME -libgnomeui: Others -libgovirt: Others -libgpg-error: Base-service -libgphoto2: System-tool -libgsasl: Application -libgsf: Base-service -libgssglue: dev-utils -libgta: dev-utils -libgtop2: Desktop -libgudev: Desktop -libguess: Desktop -libguestfs: System-tool -libgusb: Desktop -libgweather: Desktop -libgxim: Desktop -libgxps: Desktop -libhandy: dev-utils -libhangul: System-tool -libharu: sig-epol -libhbaapi: Base-service -libhbalinux: Others -libhdfs: bigdata -libhugetlbfs: Computing -libibmad: sig-recycle -libical: Base-service -libid3tag: Others -libidn: Base-service -libidn2: Base-service -libiec61883: Runtime -libieee1284: Runtime -libijs: Computing -libimagequant: Programming-language -libimobiledevice: Desktop -libindicator: Programming-language -libinput: Computing -libiodbc: bigdata -libipt: Computing -libiptcdata: Desktop -libisal: Desktop -libiscsi: Storage -libisoburn: Base-service -libisofs: Others -libisofs1: Private -libjpeg-turbo: Desktop -libkae: sig-AccLib -libkate: sig-recycle -libkcapi: Base-service -libkeepalive: dev-utils -libkefir: sig-high-performance-network -libkkc: Application -libkkc-data: Private -libkml: sig-recycle -libkomparediff2: sig-KDE -libksba: Base-service -libkscreen-qt5: sig-UKUI -libksysguard: sig-KDE -libldac: GNOME -libldb: Desktop -libldm: Storage -liblockfile: Application -liblognorm: System-tool -liblouis: Application -libmad: Others -libmatchbox: Desktop -libmatekbd: sig-mate-desktop -libmatemixer: sig-mate-desktop -libmateweather: sig-mate-desktop -libmatroska: sig-epol -libmaus2: sig-bio -libmaxminddb: Base-service -libmbim: Networking -libmediaart: Desktop -libmediainfo: Desktop -libmemcached: Programming-language -libmetal: sig-embedded -libmetalink: Base-service -libmicrodns: sig-epol -libmicrohttpd: Application -libmikmod: Application -libmng: Desktop -libmnl: Base-service -libmodbus: sig-industrial-control -libmodbus-xenomai: sig-industrial-control -libmodman: sig-recycle -libmodplug: dev-utils -libmodulemd: Base-service -libmp4v2: sig-epol -libmpc: Computing -libmpcdec: Runtime -libmpd: xfce -libmpeg2: Application -libmpris-qt5: Desktop -libmspack: Base-service -libmtp: Application -libmusicbrainz5: Application -libmypaint: Others -libmysofa: sig-epol -libnatpmp: sig-epol -libndp: Networking -libnet: Networking -libnetconf2: sig-industrial-control -libnetfilter_conntrack: Networking -libnetfilter_cthelper: Base-service -libnetfilter_cttimeout: Base-service -libnetfilter_queue: Networking -libnetwork: sig-CloudNative -libnfnetlink: Networking -libnfs: Base-service -libnftnl: Base-service -libnice: Runtime -libnl3: Networking -libnma: Base-service -libnotify: Desktop -libnsl2: Base-service -libntlm: Application -liboauth: Base-service -libofa: sig-recycle -libogg: Computing -liboggz: sig-recycle -liboil: sig-recycle -libomp: Private -libomxil-bellagio: Base-service -libopenmpt: sig-epol -libopenraw: Others -libosinfo: Base-service -libotf: 
System-tool -libpaper: Base-service -libpcap: Networking -libpciaccess: Storage -libpeas: GNOME -libpfm: Programming-language -libpinyin: Others -libpipeline: Base-service -libplist: Base-service -libpng: Base-service -libpng12: sig-recycle -libportal: GNOME -libpq: DB -libproxy: Networking -libpsl: Base-service -libpwquality: sig-security-facility -libqb: Computing -libqmi: Networking -libqtxdg: sig-UKUI -libquvi: Base-service -libquvi-scripts: Base-service -librabbitmq: Runtime -libraqm: Desktop -libraw1394: System-tool -librdkafka: Programming-language -librelp: Programming-language -librepo: Base-service -libreport: Base-service -libreswan: System-tool -librevenge: dev-utils -librpcsecgss: dev-utils -librsvg2: Desktop -librsync: sig-epol -librttopo: sig-epol -librx: sig-epol -libsamplerate: Computing -libsane-hpaio: Private -libsass: Base-service -libseccomp: Base-service -libsecret: Base-service -libselinux: sig-security-facility -libsemanage: sig-security-facility -libsepol: sig-security-facility -libserf: Networking -libsexy: Desktop -libshout: Runtime -libsigcpp20: Others -libsigsegv: Base-service -libslirp: sig-CloudNative -libsmbios: System-tool -libsmi: Runtime -libsndfile: Computing -libsodium: Others -libsolv: sig-OS-Builder -libsoup: Desktop -libsoup3: GNOME -libspatialaudio: sig-epol -libspatialite: dev-utils -libspectre: Programming-language -libspiro: Others -libsrtp: Programming-language -libssh: Networking -libssh2: Networking -libstatgrab: sig-UKUI -libstatistics_collector: sig-ROS -libstemmer: Programming-language -libstoragemgmt: Runtime -libsvm: ai -libsysstat: sig-UKUI -libtalloc: Storage -libtar: Base-service -libtasn1: Base-service -libtcnative: Application -libtdb: Base-service -libteam: Base-service -libtevent: Storage -libthai: Computing -libtheora: Base-service -libtiff: Desktop -libtiger: sig-epol -libtimezonemap: Desktop -libtins: sig-high-performance-network -libtirpc: Networking -libtomcrypt: Base-service -libtommath: Base-service -libtool: Base-service -libtorrent: sig-epol -libtpms: sig-security-facility -libtraceevent: Programming-language -libucil: sig-epol -libudfread: xfce -libumem: Computing -libunicap: sig-epol -libuninameslist: dev-utils -libunistring: Base-service -libunwind: Base-service -libupnp: sig-UKUI -liburing: Storage -libusb: Storage -libusbmuxd: Storage -libusbx: Storage -libuser: Base-service -libutempter: Base-service -libuv: Programming-language -libva: Runtime -libvarlink: sig-CloudNative -libvdpau: Runtime -libverto: Base-service -libvirt: Virt -libvirt-glib: Virt -libvirt-python: Virt -libvisual: Computing -libvma: sig-high-performance-network -libvncserver: GNOME -libvoikko: Runtime -libvorbis: Base-service -libvpx: Application -libwacom: Computing -libwbxml: Application -libwd: sig-AccLib -libwebp: Desktop -libwebsockets: iSulad -libwmf: Others -libwnck: sig-mate-desktop -libwnck3: Desktop -libwpd: dev-utils -libwpe: dev-utils -libwpg: dev-utils -libx86emu: Desktop -libxcb: Desktop -libxcrypt: Base-service -libxcvt: GNOME -libxfce4ui: xfce -libxfce4util: xfce -libxkbcommon: Desktop -libxkbfile: Desktop -libxklavier: Desktop -libxml2: Base-service -libxml2-rust: Base-service -libxmlb: Others -libxmlpp: sig-compat-winapp -libxshmfence: Desktop -libxslt: Base-service -libxsmm: ai -libyami: Application -libyaml: Base-service -libyaml_vendor: sig-ROS -libyang: sig-high-performance-network -libyang1: sig-industrial-control -libytnef: sig-UKUI -libyubikey: dev-utils -libzapojit: Others -libzen: Desktop -libzip: Programming-language 
-lightcouch: sig-Java -lightdm: Desktop -lightdm-gtk: Desktop -lightdm-gtk-greeter: sig-recycle -lightdm-kiran-greeter: sig-KIRAN-DESKTOP -lightgbm: ai -lighttpd: Networking -lilv: sig-epol -lilypond: sig-desktop-apps -linkchecker: Application -linux-firmware: Computing -linux-operation: sig-OSCourse -linux-sgx: sig-confidential-computing -linux-sgx-driver: sig-confidential-computing -linux-system-roles: oVirt -linux-test-project: sig-QA -linuxconsoletools: Application -linuxdoc-tools: Application -linuxptp: Application -lirc: sig-epol -live555: sig-epol -livy: DB -lklug-fonts: System-tool -lksctp-tools: Application -llama: bigdata -lldb: dev-utils -lldpad: Networking -llvm: Compiler -llvm-bolt: Compiler -llvm-libunwind: Compiler -lm_sensors: Computing -lmbench: dev-utils -lmdb: Base-service -lmfit: Application -lockdev: Computing -lodash: sig-nodejs -log4cplus: dev-utils -log4cpp: dev-utils -log4j: sig-Java -log4j-jboss-logmanager: sig-Java -log4j12: Application -logback: sig-Java -logrotate: Base-service -logwatch: System-tool -lohit-assamese-fonts: Application -lohit-bengali-fonts: Application -lohit-devanagari-fonts: Application -lohit-gujarati-fonts: Application -lohit-gurmukhi-fonts: Application -lohit-kannada-fonts: Application -lohit-malayalam-fonts: Application -lohit-marathi-fonts: Application -lohit-nepali-fonts: Application -lohit-odia-fonts: Application -lohit-tamil-fonts: Application -lohit-telugu-fonts: Application -lorax: sig-OS-Builder -low-memory-monitor: Desktop -lpg: Application -lrzsz: Application -lshw: Base-service -lshw-B.02.18: sig-recycle -lsof: Base-service -lsscsi: Storage -lsyncd: Application -ltrace: Programming-language -lttng-ust: Computing -lua: Base-service -lua-expat: Base-service -lua-filesystem: Programming-language -lua-json: Base-service -lua-lpeg: Base-service -lua-lunit: Programming-language -lua-posix: Programming-language -lua-socket: Networking -lua-term: Application -luajit: Base-service -luarocks: sig-OpenResty -lucene: sig-Java -lucene3: sig-Java -lucene4: sig-Java -luksmeta: Storage -lunar-date: sig-KIRAN-DESKTOP -lutok: Base-service -lv2: sig-epol -lvm2: Storage -lwip: sig-high-performance-network -lxappearance: xfce -lxc: iSulad -lxcfs: iSulad -lxcfs-tools: iSulad -lxde-common: xfce -lxde-icon-theme: xfce -lxdm: xfce -lxhotkey: xfce -lxinput: xfce -lxlauncher: xfce -lxmenu-data: xfce -lxpanel: xfce -lxqt-build-tools: sig-UKUI -lxsession: Desktop -lxshortcut: xfce -lxtask: xfce -lxterminal: xfce -lynx: Application -lz4: Base-service -lz4-java: sig-Java -lzip: bigdata -lzma: sig-recycle -lzma-java: sig-Java -lzo: Base-service -lzop: Base-service -m17n-db: System-tool -m17n-lib: System-tool -m2crypto: Runtime -m4: Base-service -mac-robber: Others -madan-fonts: Desktop -mahout: bigdata -mailcap: Base-service -maildrop: Application -mailman: Application -mailx: Desktop -mainline.list: Private -make: Base-service -makeself: oVirt -malaga: Private -malaga-suomi-voikko: Private -mallard-rng: Programming-language -man-db: Base-service -man-pages: Base-service -man2html: sig-epol -manifest: Private -mapserver: Application -marco: sig-mate-desktop -mariadb: DB -mariadb-connector-c: Base-service -mariadb-connector-odbc: DB -marisa: Others -marketing: Marketing -masscan: sig-epol -mate-applets: sig-mate-desktop -mate-backgrounds: sig-mate-desktop -mate-calc: sig-mate-desktop -mate-common: sig-mate-desktop -mate-control-center: sig-mate-desktop -mate-desktop: sig-mate-desktop -mate-icon-theme: sig-mate-desktop -mate-media: sig-mate-desktop -mate-menus: 
sig-mate-desktop -mate-notification-daemon: sig-mate-desktop -mate-panel: sig-mate-desktop -mate-polkit: sig-mate-desktop -mate-power-manager: sig-mate-desktop -mate-screensaver: sig-mate-desktop -mate-session-manager: sig-mate-desktop -mate-settings-daemon: sig-mate-desktop -mate-system-monitor: sig-mate-desktop -mate-terminal: sig-mate-desktop -mate-themes: sig-mate-desktop -mate-user-guide: sig-mate-desktop -mate-utils: sig-mate-desktop -mathjax: sig-UKUI -maven: sig-Java -maven-antrun-plugin: sig-Java -maven-archiver: sig-Java -maven-artifact-resolver: sig-Java -maven-artifact-transfer: sig-Java -maven-assembly-plugin: sig-Java -maven-checkstyle-plugin: sig-Java -maven-clean-plugin: sig-Java -maven-common-artifact-filters: sig-Java -maven-compiler-plugin: sig-Java -maven-dependency-analyzer: sig-Java -maven-dependency-plugin: sig-Java -maven-dependency-tree: sig-Java -maven-doxia: sig-Java -maven-doxia-sitetools: sig-Java -maven-eclipse-plugin: sig-Java -maven-enforcer: sig-Java -maven-file-management: sig-Java -maven-filtering: sig-Java -maven-gpg-plugin: sig-Java -maven-idea-plugin: sig-Java -maven-injection-plugin: sig-Java -maven-install-plugin: sig-Java -maven-invoker: sig-Java -maven-invoker-plugin: sig-Java -maven-jar-plugin: sig-Java -maven-jarsigner-plugin: sig-Java -maven-javadoc-plugin: sig-Java -maven-jaxb2-plugin: sig-Java -maven-license-plugin: sig-Java -maven-local: sig-Java -maven-mapping: sig-Java -maven-native: sig-Java -maven-osgi: dev-utils -maven-parent: sig-Java -maven-plugin-build-helper: sig-Java -maven-plugin-bundle: sig-Java -maven-plugin-testing: sig-Java -maven-plugin-tools: sig-Java -maven-plugins-pom: sig-Java -maven-processor-plugin: sig-Java -maven-release: sig-Java -maven-remote-resources-plugin: sig-Java -maven-replacer: sig-Java -maven-reporting-api: sig-Java -maven-reporting-exec: sig-Java -maven-reporting-impl: sig-Java -maven-resolver: sig-Java -maven-resources-plugin: sig-Java -maven-scm: sig-Java -maven-script-interpreter: sig-Java -maven-shade-plugin: sig-Java -maven-shared: sig-Java -maven-shared-incremental: sig-Java -maven-shared-io: sig-Java -maven-shared-jar: sig-Java -maven-shared-jarsigner: sig-Java -maven-shared-utils: sig-Java -maven-site-plugin: sig-Java -maven-source-plugin: sig-Java -maven-surefire: sig-Java -maven-verifier: sig-Java -maven-verifier-plugin: Base-service -maven-wagon: sig-Java -maven-war-plugin: sig-Java -maven2: sig-Java -mavibot: sig-Java -mc: Application -mcelog: Base-service -mchange-commons: sig-Java -mcpp: Base-service -mcstrans: sig-security-facility -mdadm: Storage -mdb: dev-utils -mdm: sig-cinnamon -mdm-themes: sig-cinnamon -meanwhile: Programming-language -mecab: Base-service -media-player-info: Application -media_export: sig-ROS -meld: sig-desktop-apps -memcached: Application -memcached_exporter: sig-CloudNative -memkind: Computing -memleax: dev-utils -memory-scan: Storage -memoryfilesystem: sig-Java -memtester: dev-utils -memwatch: dev-utils -menu-cache: xfce -mercurial: Base-service -mesa: Desktop -mesa-demos: Runtime -mesa-libGLU: Desktop -mesa-libGLw: sig-recycle -meson: Programming-language -message_filters: sig-ROS -message_generation: sig-ROS -message_runtime: sig-ROS -metacity: Desktop -metadata-extractor2: Application -metainf-services: sig-Java -metis: Application -metrics: sig-Java -microcode_ctl: System-tool -migration-assistant: sig-Migration -mikmod: Application -mimepull: sig-Java -mimick_vendor: sig-ROS -mina-ftpserver: Application -mingw-binutils: sig-compat-winapp -mingw-crt: 
sig-compat-winapp -mingw-filesystem: sig-compat-winapp -mingw-gcc: sig-compat-winapp -mingw-headers: sig-compat-winapp -mingw-spice-vdagent: oVirt -mingw-srvany: Private -mingw-wine-gecko: sig-compat-winapp -mingw-winpthreads: sig-compat-winapp -miniasm: dev-utils -minicom: System-tool -minimap2: dev-utils -minlog: dev-utils -mkeuleros: Private -mksh: System-tool -mlocate: Base-service -mlpack: ai -mm-common: GNOME -mobile-broadband-provider-info: Networking -mocha: dev-utils -mock: dev-utils -mockito: Programming-language -mod_auth_gssapi: System-tool -mod_auth_openidc: sig-security-facility -mod_authnz_pam: sig-security-facility -mod_fcgid: System-tool -mod_http2: Networking -mod_intercept_form_submit: Application -mod_lookup_identity: Application -mod_perl: sig-perl-modules -mod_security: System-tool -mod_security_crs: Base-service -mod_wsgi: Application -modello: Base-service -mojarra: sig-Java -mojo-parent: sig-Java -mokutil: sig-security-facility -mom: oVirt -mongo-c-driver: Others -mongo-java-driver: sig-Java -mongo-java-driver2: Base-service -mongo-tools: DB -mongodb: sig-recycle -mono: Base-service -moosefs: sig-OKD -morfologik-stemming: Base-service -morphia: dev-utils -mosquitto: Application -motif: Runtime -mousepad: xfce -mousetweaks: Application -mozilla-filesystem: Desktop -mozjs52: sig-recycle -mozjs60: sig-recycle -mozjs68: sig-recycle -mozjs78: Desktop -mp: sig-epol -mpfr: Computing -mpg123: sig-UKUI -mpi4py: sig-epol -mpich: Programming-language -mpv: Desktop -mrtg: Application -msgpack-c: sig-epol -mstflint: System-tool -msv: Application -mt-st: Others -mtd-utils: sig-embedded -mtdev: Base-service -mtools: Storage -mtr: Networking -mtx: System-tool -muffin: sig-cinnamon -mugen: sig-QA -mujs: Desktop -multilib-rpm-config: Packaging -multipath-tools: Storage -multitail: dev-utils -multithreadedtc: Base-service -multiverse: sig-Java -mumps: sig-epol -munge: Application -munge-maven-plugin: Base-service -musescore: sig-desktop-apps -musl: Computing -mustache-java: sig-Java -mutt: Application -mutter: GNOME -mvapich2: Programming-language -mvel: Base-service -mx4j: sig-Java -mxml: Application -mxparser: sig-Java -mybatis: sig-Java -mybatis-generator: sig-Java -mybatis-parent: sig-Java -mypaint-brushes: Others -mysema-commons-lang: Base-service -mysql: Others -mysql-connector-java: dev-utils -mysql-selinux: sig-security-facility -mysql5: DB -mysqltuner: DB -mythes: Application -n5p-core: sig-n5p -nafees-web-naskh-fonts: Desktop -nagios: System-tool -nagios-plugins: Networking -nailgun: Base-service -nankai_university_2021: sig-recycle -nano: Application -nanomsg: DB -narayana: sig-Java -nasm: Programming-language -native-platform: sig-Java -native-turbo: A-Tune -native-turbo-kernel: A-Tune -nautilus: GNOME -nautilus-sendto: Private -nauty: sig-epol -navigation: sig-ROS -navigation_msgs: sig-ROS -navilu-fonts: Desktop -nbdkit: Others -ncbi-blast: sig-bio -nccl: ai -ncdu: dev-utils -ncompress: Base-service -ncurses: Base-service -ndctl: Storage -ndisc6: Programming-language -neXtaw: dev-utils -neethi: Networking -nekohtml: Application -nemo: sig-cinnamon -nemo-extensions: sig-cinnamon -neo4j: DB -neofetch: dev-utils -neon: Programming-language -nestos-installer: sig-CloudNative -net-snmp: Networking -net-tools: Networking -netcdf: dev-utils -netcdf-cxx: sig-epol -netcf: Networking -netdata: Base-service -nethogs: dev-utils -netlabel_tools: System-tool -netopeer2: sig-industrial-control -netpbm: Application -netperf: dev-utils -netsniff-ng: sig-epol -nettle: Base-service -netty: 
sig-Java -netty-tcnative: sig-Java -netty3: sig-Java -network-manager-applet: Networking -networking-baremetal: sig-openstack -networking-generic-switch: sig-openstack -new.list: Private -newlib: Computing -newt: Base-service -nexus: Application -nfdump: sig-epol -nfs-fontmanager: Application -nfs-utils: Storage -nfs4-acl-tools: Storage -nftables: Networking -nghttp2: Networking -nginx: Packaging -nilfs-utils: Others -nim: Programming-language -ninja-build: Programming-language -nispor: oVirt -nmap: Networking -nmon: dev-utils -nmstate: oVirt -nng: dev-utils -node-gyp: sig-nodejs -node_exporter: sig-CloudNative -nodejs: sig-nodejs -nodejs-abbrev: sig-nodejs -nodejs-acorn: sig-nodejs -nodejs-ansi: sig-nodejs -nodejs-ansi-font: sig-nodejs -nodejs-ansi-regex: sig-nodejs -nodejs-ansi-styles: sig-nodejs -nodejs-appium: sig-nodejs -nodejs-are-we-there-yet: sig-nodejs -nodejs-argparse: sig-nodejs -nodejs-argv-parse: sig-ops -nodejs-array-differ: sig-nodejs -nodejs-array-index: sig-nodejs -nodejs-array-union: sig-nodejs -nodejs-array-uniq: sig-nodejs -nodejs-arrify: sig-nodejs -nodejs-asap: sig-nodejs -nodejs-asn1: sig-nodejs -nodejs-assert-plus: sig-nodejs -nodejs-assertion-error: sig-nodejs -nodejs-async: sig-nodejs -nodejs-aws-sign2: sig-nodejs -nodejs-babel-core: sig-ops -nodejs-babel-loader: sig-nodejs -nodejs-balanced-match: sig-nodejs -nodejs-better-assert: sig-nodejs -nodejs-bindings: sig-nodejs -nodejs-bl: sig-nodejs -nodejs-block-stream: sig-nodejs -nodejs-bluebird: sig-nodejs -nodejs-boom: sig-nodejs -nodejs-brace-expansion: sig-nodejs -nodejs-buffer-equal: sig-nodejs -nodejs-builtin-modules: sig-nodejs -nodejs-bunker: sig-nodejs -nodejs-burrito: sig-nodejs -nodejs-bytes: sig-nodejs -nodejs-caller-callsite: sig-nodejs -nodejs-caller-path: sig-nodejs -nodejs-callsite: sig-nodejs -nodejs-callsites: sig-nodejs -nodejs-caseless: sig-nodejs -nodejs-chai: sig-nodejs -nodejs-chalk: sig-nodejs -nodejs-character-parser: sig-nodejs -nodejs-charm: sig-nodejs -nodejs-cjson: sig-nodejs -nodejs-clean-css: sig-nodejs -nodejs-cli-color: sig-nodejs -nodejs-clone: sig-nodejs -nodejs-closure-compiler: sig-nodejs -nodejs-colors: sig-nodejs -nodejs-combined-stream: sig-nodejs -nodejs-commander: sig-nodejs -nodejs-commonmark: sig-nodejs -nodejs-compression-webpack-plugin: sig-nodejs -nodejs-concat-map: sig-nodejs -nodejs-concat-stream: sig-nodejs -nodejs-console-dot-log: sig-nodejs -nodejs-constantinople: sig-nodejs -nodejs-core-util-is: sig-nodejs -nodejs-cryptiles: sig-nodejs -nodejs-css: sig-nodejs -nodejs-css-loader: sig-nodejs -nodejs-css-parse: sig-nodejs -nodejs-css-stringify: sig-nodejs -nodejs-ctype: sig-nodejs -nodejs-d: sig-nodejs -nodejs-dateformat: sig-nodejs -nodejs-debug: sig-nodejs -nodejs-deep-eql: sig-nodejs -nodejs-deep-equal: sig-nodejs -nodejs-deep-is: sig-nodejs -nodejs-defence: sig-nodejs -nodejs-defence-cli: sig-nodejs -nodejs-define-properties: sig-nodejs -nodejs-defined: sig-nodejs -nodejs-delayed-stream: sig-nodejs -nodejs-delegates: sig-nodejs -nodejs-diff: sig-nodejs -nodejs-difflet: sig-nodejs -nodejs-difflib: sig-nodejs -nodejs-docopt: sig-nodejs -nodejs-dotenv: sig-ops -nodejs-dreamopt: sig-nodejs -nodejs-duplexer: sig-nodejs -nodejs-ebnf-parser: sig-nodejs -nodejs-ejs: sig-nodejs -nodejs-end-of-stream: sig-nodejs -nodejs-entities: sig-nodejs -nodejs-es-abstract: sig-nodejs -nodejs-es-to-primitive: sig-nodejs -nodejs-es5-ext: sig-nodejs -nodejs-es6-iterator: sig-nodejs -nodejs-es6-symbol: sig-nodejs -nodejs-es6-weak-map: sig-nodejs -nodejs-escape-string-regexp: sig-nodejs 
-nodejs-escodegen: sig-nodejs -nodejs-esprima: sig-nodejs -nodejs-estraverse: sig-nodejs -nodejs-esutils: sig-nodejs -nodejs-event-emitter: sig-nodejs -nodejs-eventemitter2: sig-nodejs -nodejs-events-to-array: sig-nodejs -nodejs-exit: sig-nodejs -nodejs-expect-dot-js: sig-nodejs -nodejs-expose-loader: sig-nodejs -nodejs-extend: sig-nodejs -nodejs-extract-text-webpack-plugin: sig-ops -nodejs-eyes: sig-nodejs -nodejs-fast-levenshtein: sig-nodejs -nodejs-faye-websocket: sig-nodejs -nodejs-figures: sig-nodejs -nodejs-file-loader: sig-ops -nodejs-fileset: sig-nodejs -nodejs-fill-keys: sig-nodejs -nodejs-find-up: sig-nodejs -nodejs-findup-sync: sig-nodejs -nodejs-flot: sig-nodejs -nodejs-for-each: sig-nodejs -nodejs-foreach: sig-nodejs -nodejs-foreman-js: sig-nodejs -nodejs-forever-agent: sig-nodejs -nodejs-form-data: sig-nodejs -nodejs-formatio: sig-nodejs -nodejs-from: sig-nodejs -nodejs-fstream: sig-nodejs -nodejs-function-bind: sig-nodejs -nodejs-gauge: sig-nodejs -nodejs-gaze: sig-nodejs -nodejs-generate-function: sig-nodejs -nodejs-generate-object-property: sig-nodejs -nodejs-getobject: sig-nodejs -nodejs-github-url-from-git: sig-nodejs -nodejs-glob: sig-nodejs -nodejs-globule: sig-nodejs -nodejs-graceful-fs: sig-nodejs -nodejs-graceful-readlink: sig-nodejs -nodejs-growl: sig-nodejs -nodejs-grunt: sig-nodejs -nodejs-grunt-cli: sig-nodejs -nodejs-grunt-contrib-clean: sig-nodejs -nodejs-grunt-contrib-internal: sig-nodejs -nodejs-grunt-contrib-nodeunit: sig-nodejs -nodejs-grunt-contrib-uglify: sig-nodejs -nodejs-grunt-contrib-watch: sig-nodejs -nodejs-grunt-known-options: sig-nodejs -nodejs-grunt-legacy-log: sig-nodejs -nodejs-grunt-legacy-log-utils: sig-nodejs -nodejs-grunt-legacy-util: sig-nodejs -nodejs-gzip-size: sig-nodejs -nodejs-handlebars: sig-nodejs -nodejs-har-validator: sig-nodejs -nodejs-has: sig-nodejs -nodejs-has-ansi: sig-nodejs -nodejs-has-color: sig-nodejs -nodejs-has-flag: sig-nodejs -nodejs-has-symbols: sig-nodejs -nodejs-has-unicode: sig-nodejs -nodejs-hash_file: sig-nodejs -nodejs-hashish: sig-nodejs -nodejs-hawk: sig-nodejs -nodejs-heap: sig-nodejs -nodejs-hoek: sig-nodejs -nodejs-hooker: sig-nodejs -nodejs-hosted-git-info: sig-nodejs -nodejs-http-signature: sig-nodejs -nodejs-iconv: sig-nodejs -nodejs-iconv-lite: sig-nodejs -nodejs-image-size: sig-nodejs -nodejs-inflight: sig-nodejs -nodejs-inherits: sig-nodejs -nodejs-inherits1: sig-nodejs -nodejs-interpret: sig-nodejs -nodejs-intl: sig-ops -nodejs-is: sig-nodejs -nodejs-is-builtin-module: sig-nodejs -nodejs-is-callable: sig-nodejs -nodejs-is-date-object: sig-nodejs -nodejs-is-function: sig-nodejs -nodejs-is-my-json-valid: sig-nodejs -nodejs-is-object: sig-nodejs -nodejs-is-property: sig-nodejs -nodejs-is-regex: sig-nodejs -nodejs-is-symbol: sig-nodejs -nodejs-is-typedarray: sig-nodejs -nodejs-isarray: sig-nodejs -nodejs-isexe: sig-nodejs -nodejs-isstream: sig-nodejs -nodejs-istanbul: sig-nodejs -nodejs-jade: sig-nodejs -nodejs-jed: sig-ops -nodejs-jison: sig-nodejs -nodejs-jison-lex: sig-nodejs -nodejs-jju: sig-nodejs -nodejs-js-yaml: sig-nodejs -nodejs-json-diff: sig-nodejs -nodejs-json-parse-helpfulerror: sig-nodejs -nodejs-json-stringify-safe: sig-nodejs -nodejs-jsonify: sig-nodejs -nodejs-jsonpointer: sig-nodejs -nodejs-jsonselect: sig-nodejs -nodejs-less: sig-nodejs -nodejs-less-plugin-clean-css: sig-nodejs -nodejs-levn: sig-nodejs -nodejs-lex-parser: sig-nodejs -nodejs-load-grunt-tasks: sig-nodejs -nodejs-locate-path: sig-nodejs -nodejs-lolex: sig-nodejs -nodejs-lru-queue: sig-nodejs -nodejs-make-arrow-function: 
sig-nodejs -nodejs-make-generator-function: sig-nodejs -nodejs-maxmin: sig-nodejs -nodejs-mdurl: sig-nodejs -nodejs-memoizee: sig-nodejs -nodejs-merge-descriptors: sig-nodejs -nodejs-mime: sig-nodejs -nodejs-mime-db: sig-nodejs -nodejs-mime-types: sig-nodejs -nodejs-minimatch: sig-nodejs -nodejs-minimist: sig-nodejs -nodejs-mkdirp: sig-nodejs -nodejs-mock-fs: sig-nodejs -nodejs-module-not-found-error: sig-nodejs -nodejs-monocle: sig-nodejs -nodejs-ms: sig-nodejs -nodejs-multimatch: sig-nodejs -nodejs-nan: sig-nodejs -nodejs-nan0: sig-nodejs -nodejs-nan1: sig-nodejs -nodejs-next-tick: sig-nodejs -nodejs-node-sass: sig-ops -nodejs-node-uuid: sig-nodejs -nodejs-nodemon: sig-nodejs -nodejs-nomnom: sig-nodejs -nodejs-nopt: sig-nodejs -nodejs-noptify: sig-nodejs -nodejs-normalize-package-data: sig-nodejs -nodejs-npmlog: sig-nodejs -nodejs-oauth-sign: sig-nodejs -nodejs-object-assign: sig-nodejs -nodejs-object-dot-assign: sig-nodejs -nodejs-object-inspect: sig-nodejs -nodejs-object-is: sig-nodejs -nodejs-object-keys: sig-nodejs -nodejs-once: sig-nodejs -nodejs-optimist: sig-nodejs -nodejs-optionator: sig-nodejs -nodejs-os-homedir: sig-nodejs -nodejs-os-tmpdir: sig-nodejs -nodejs-osenv: sig-nodejs -nodejs-p-limit: sig-nodejs -nodejs-p-locate: sig-nodejs -nodejs-package: sig-nodejs -nodejs-packaging: sig-nodejs -nodejs-paperboy: sig-nodejs -nodejs-path-array: sig-nodejs -nodejs-path-exists: sig-nodejs -nodejs-path-is-absolute: sig-nodejs -nodejs-path-parse: sig-nodejs -nodejs-pinkie: sig-nodejs -nodejs-pinkie-promise: sig-nodejs -nodejs-pkg-up: sig-nodejs -nodejs-prelude-ls: sig-nodejs -nodejs-pretty-bytes: sig-nodejs -nodejs-process-nextick-args: sig-nodejs -nodejs-promise: sig-nodejs -nodejs-promises-aplus-tests: sig-nodejs -nodejs-proxyquire: sig-nodejs -nodejs-qs: sig-nodejs -nodejs-raw-body: sig-nodejs -nodejs-react-intl: sig-nodejs -nodejs-read-package-json: sig-nodejs -nodejs-readable-stream: sig-nodejs -nodejs-readdirp: sig-nodejs -nodejs-rechoir: sig-nodejs -nodejs-replace-require-self: sig-nodejs -nodejs-request: sig-nodejs -nodejs-require-directory: sig-nodejs -nodejs-require-inject: sig-nodejs -nodejs-require-uncached: sig-nodejs -nodejs-requirejs: sig-nodejs -nodejs-resolve: sig-nodejs -nodejs-resolve-from: sig-nodejs -nodejs-resolve-pkg: sig-nodejs -nodejs-resumer: sig-nodejs -nodejs-rimraf: sig-nodejs -nodejs-rollup: sig-nodejs -nodejs-runforcover: sig-nodejs -nodejs-safe-buffer: sig-nodejs -nodejs-samsam: sig-nodejs -nodejs-sass-loader: sig-nodejs -nodejs-semver: sig-nodejs -nodejs-set-immediate-shim: sig-nodejs -nodejs-shelljs: sig-nodejs -nodejs-should: sig-nodejs -nodejs-should-equal: sig-nodejs -nodejs-should-format: sig-nodejs -nodejs-should-type: sig-nodejs -nodejs-simple-assert: sig-nodejs -nodejs-sinon: sig-nodejs -nodejs-slide: sig-nodejs -nodejs-sntp: sig-nodejs -nodejs-source-map: sig-nodejs -nodejs-source-map-support: sig-nodejs -nodejs-spdx-correct: sig-nodejs -nodejs-spdx-exceptions: sig-nodejs -nodejs-spdx-expression-parse: sig-nodejs -nodejs-spdx-license-ids: sig-nodejs -nodejs-sprintf-js: sig-nodejs -nodejs-stream-replace: sig-nodejs -nodejs-string: sig-nodejs -nodejs-string-dot-prototype-dot-repeat: sig-nodejs -nodejs-string-dot-prototype-dot-trim: sig-nodejs -nodejs-string_decoder: sig-nodejs -nodejs-stringstream: sig-nodejs -nodejs-strip-ansi: sig-nodejs -nodejs-strip-json-comments: sig-nodejs -nodejs-style-loader: sig-nodejs -nodejs-supports-color: sig-nodejs -nodejs-tap: sig-nodejs -nodejs-tap-parser: sig-nodejs -nodejs-tape: sig-nodejs -nodejs-tar: sig-nodejs 
-nodejs-temporary: sig-nodejs -nodejs-test: sig-nodejs -nodejs-through: sig-nodejs -nodejs-through2: sig-nodejs -nodejs-timers-ext: sig-nodejs -nodejs-tiny-lr-fork: sig-nodejs -nodejs-tough-cookie: sig-nodejs -nodejs-transformers: sig-nodejs -nodejs-traverse: sig-nodejs -nodejs-tunnel-agent: sig-nodejs -nodejs-type-check: sig-nodejs -nodejs-type-detect: sig-nodejs -nodejs-typescript: sig-nodejs -nodejs-uglifyjs-webpack-plugin: sig-ops -nodejs-underscore: sig-nodejs -nodejs-underscore-dot-string: sig-nodejs -nodejs-unpipe: sig-nodejs -nodejs-uri-path: sig-nodejs -nodejs-url-loader: sig-ops -nodejs-util: sig-nodejs -nodejs-util-deprecate: sig-nodejs -nodejs-validate-npm-package-license: sig-nodejs -nodejs-vows: sig-nodejs -nodejs-webpack: sig-nodejs -nodejs-webpack-stats-plugin: sig-nodejs -nodejs-websocket-driver: sig-nodejs -nodejs-which: sig-nodejs -nodejs-window-size: sig-nodejs -nodejs-with: sig-nodejs -nodejs-wordwrap: sig-nodejs -nodejs-wrappy: sig-nodejs -nodejs-xtend: sig-nodejs -nodejs-yamlish: sig-nodejs -nodejs-yargs: sig-nodejs -nodejs-yarn: oVirt -nodejsporter: dev-utils -nodelet_core: sig-ROS -nodeunit: sig-nodejs -noggit: sig-Java -notebook: bigdata -notepadqq: Desktop -notification-daemon: Networking -nototools: Programming-language -novnc: sig-openstack -npth: Computing -nrpe: System-tool -nsis-simple-service-plugin: oVirt -nspr: Computing -nss: sig-security-facility -nss-altfiles: Application -nss-mdns: Application -nss-pam-ldapd: Base-service -nss-pem: sig-security-facility -nss_nis: Base-service -nss_wrapper: Application -ntfs-3g: Application -ntp: Networking -ntpstat: Networking -numactl: Computing -numad: Computing -numpy: Programming-language -nv-codec-headers: Desktop -nvme-cli: System-tool -nvme-snsd: sig-REDF -nvmetcli: System-tool -nvml: Programming-language -nvwa: sig-ops -oath-toolkit: sig-security-facility -objectweb-asm: Application -objectweb-asm3: Application -objectweb-pom: sig-Java -objenesis: sig-Java -obs-build: Others -obs-bundled-gems: Others -obs-env: Programming-language -obs-server: Others -obs-service-download_files: Others -obs-service-extract_file: Others -obs-service-rust2rpm: Others -obs-service-set_version: Others -obs_meta: sig-release-management -ocaml: Programming-language -ocaml-calendar: dev-utils -ocaml-camlp4: dev-utils -ocaml-camomile: dev-utils -ocaml-cppo: dev-utils -ocaml-csexp: sig-confidential-computing -ocaml-csv: dev-utils -ocaml-curses: dev-utils -ocaml-dune: sig-confidential-computing -ocaml-extlib: dev-utils -ocaml-fileutils: dev-utils -ocaml-findlib: dev-utils -ocaml-gettext: dev-utils -ocaml-libvirt: dev-utils -ocaml-ocamlbuild: Programming-language -ocaml-ounit: Application -ocaml-xml-light: dev-utils -oci-systemd-hook: sig-recycle -ocl-icd: Base-service -octave: ai -oddjob: Base-service -oec-application: sig-Compatibility-Infra -oec-hardware: sig-Compatibility-Infra -oecp: sig-Compatibility-Infra -oemaker: sig-OS-Builder -ogdi: dev-utils -ohc: sig-Java -okhttp: ai -okteta: sig-KDE -oldstandard-sfd-fonts: Application -ompi: ai -onboard: Others -oneDNN: ai -ongres-scram: dev-utils -ongres-stringprep: sig-Java -oniguruma: Base-service -oozie: bigdata -opa-psm2: dev-utils -open-chinese-fonts: sig-mate-desktop -open-iscsi: Storage -open-isns: Storage -open-sans-fonts: System-tool -open-source-summer: sig-OSCourse -openEuler-Advisor: sig-EasyLife -openEuler-bootstrap: dev-utils -openEuler-indexhtml: Base-service -openEuler-latest-release: Others -openEuler-logos: Base-service -openEuler-lsb: sig-release-management 
-openEuler-menus: sig-KDE -openEuler-pkginfo: dev-utils -openEuler-release: Base-service -openEuler-repos: Base-service -openEuler-rpm-config: Base-service -openEuler_chroot: Private -openRSO: Kernel -openal-soft: Application -openapi-schema-validator: sig-python-modules -openapi-spec-validator: sig-python-modules -openblas: Programming-language -openbox: Desktop -opencc: Others -opencl: ai -opencl-clhpp: ai -opencl-filesystem: ai -opencl-headers: ai -openconnect: Application -opencore-amr: Desktop -opencryptoki: dev-utils -opencv: ai -opendesign: sig-OpenDesign -opendesign-backend: sig-OpenDesign -opendesign-build: sig-OpenDesign -opendesign-components: sig-OpenDesign -opendesign-datastat: sig-OpenDesign -opendesign-deployment: sig-OpenDesign -opendesign-internship: sig-OpenDesign -opendesign-miniprogram: sig-OpenDesign -opendesign-templates: sig-OpenDesign -openeuler-docker-images: sig-CloudNative -openeuler-jenkins: sig-Gatekeeper -openeuler-obs: sig-Gatekeeper -openeuler-os-build: sig-Gatekeeper -openeuler-wiki-bot: bigdata -opengauss-dcf: DB -opengauss-server: DB -openhpi: System-tool -openjade: Application -openjdk-1.8.0: Compiler -openjdk-11: Compiler -openjdk-17: Compiler -openjdk-latest: Compiler -openjfx11: Compiler -openjfx8: Compiler -openjpa: sig-Java -openjpeg: sig-recycle -openjpeg2: Desktop -openldap: Networking -openmotif: Private -openmpi: Application -opennlp: Application -opennn: ai -openoffice-lv: Application -openoffice.org-dict-cs_CZ: Application -openpgm: dev-utils -openpmix: ai -openresty: sig-OpenResty -openresty-openssl: sig-OpenResty -openresty-openssl111: sig-OpenResty -openresty-pcre: sig-OpenResty -openresty-valgrind: sig-OpenResty -openresty-zlib: sig-OpenResty -opensbi: sig-RISC-V -opensc: Base-service -openscap: Programming-language -openshift-ansible: sig-OKD -openslam_gmapping: sig-ROS -openslide: sig-epol -openslp: Networking -opensm: Application -opensource-intern: sig-OSCourse -opensp: Application -openssh: Networking -openssl: sig-security-facility -openssl-pkcs11: sig-security-facility -openssl_tpm2_engine: sig-security-facility -openstack: sig-openstack -openstack-aodh: sig-openstack -openstack-ceilometer: sig-openstack -openstack-cinder: sig-openstack -openstack-cyborg: sig-openstack -openstack-glance: sig-openstack -openstack-heat: sig-openstack -openstack-heat-agents: sig-openstack -openstack-horizon: sig-openstack -openstack-ironic: sig-openstack -openstack-ironic-inspector: sig-openstack -openstack-ironic-python-agent: sig-openstack -openstack-ironic-python-agent-builder: sig-openstack -openstack-ironic-staging-drivers: sig-openstack -openstack-java-sdk: oVirt -openstack-keystone: sig-openstack -openstack-kolla: sig-openstack -openstack-kolla-ansible: sig-openstack -openstack-kolla-ansible-plugin: sig-openstack -openstack-kolla-plugin: sig-openstack -openstack-macros: sig-openstack -openstack-neutron: sig-openstack -openstack-nova: sig-openstack -openstack-panko: sig-openstack -openstack-placement: sig-openstack -openstack-plugin: sig-openstack -openstack-rally: sig-openstack -openstack-rally-plugins: sig-openstack -openstack-releases: sig-openstack -openstack-swift: sig-openstack -openstack-tempest: sig-openstack -openstack-trove: sig-openstack -opentest4j: Application -openvpn: Application -openvswitch: Networking -openvswitch-kmod: sig-recycle -openwebbeans: sig-Java -openwsman: System-tool -operator-manager: sig-CloudNative -options: dev-utils -optipng: Application -opus: Computing -opusfile: Others -orage: sig-recycle -orc: Base-service 
-orca: Desktop -origin: sig-OKD -orocos_kdl: sig-ROS -orocos_kinematics_dynamics: sig-ROS -os-maven-plugin: sig-Java -os-prober: Base-service -osc: Others -oscap-anaconda-addon: sig-security-facility -osgi-annotation: sig-Java -osgi-compendium: sig-Java -osgi-core: sig-Java -osinfo-db: Base-service -osinfo-db-tools: Base-service -osrf_pycommon: sig-ROS -osrf_testing_tools_cpp: sig-ROS -ostree: Base-service -ostree_assembly: sig-Ostree-Assembly -otopi: oVirt -overpass-fonts: System-tool -ovirt-ansible-cluster-upgrade: oVirt -ovirt-ansible-collection: oVirt -ovirt-ansible-disaster-recovery: oVirt -ovirt-ansible-engine-setup: oVirt -ovirt-ansible-hosted-engine-setup: oVirt -ovirt-ansible-image-template: oVirt -ovirt-ansible-infra: oVirt -ovirt-ansible-manageiq: oVirt -ovirt-ansible-repositories: oVirt -ovirt-ansible-roles: oVirt -ovirt-ansible-shutdown-env: oVirt -ovirt-ansible-v2v-conversion-host: oVirt -ovirt-ansible-vm-infra: oVirt -ovirt-cockpit-sso: oVirt -ovirt-dependencies: oVirt -ovirt-engine: oVirt -ovirt-engine-api-explorer: oVirt -ovirt-engine-api-model: oVirt -ovirt-engine-appliance: oVirt -ovirt-engine-cli: oVirt -ovirt-engine-dwh: oVirt -ovirt-engine-extension-aaa-jdbc: oVirt -ovirt-engine-extension-aaa-ldap: oVirt -ovirt-engine-extension-aaa-misc: oVirt -ovirt-engine-extension-logger-log4j: oVirt -ovirt-engine-extensions-api: oVirt -ovirt-engine-metrics: oVirt -ovirt-engine-nodejs: oVirt -ovirt-engine-nodejs-modules: oVirt -ovirt-engine-ui-extensions: oVirt -ovirt-engine-wildfly: oVirt -ovirt-engine-wildfly-overlay: oVirt -ovirt-guest-agent: oVirt -ovirt-guest-agent-windows: oVirt -ovirt-guest-tools-iso: oVirt -ovirt-host: oVirt -ovirt-host-deploy: oVirt -ovirt-hosted-engine-ha: oVirt -ovirt-hosted-engine-setup: oVirt -ovirt-imageio: oVirt -ovirt-imageio-common: oVirt -ovirt-imageio-daemon: oVirt -ovirt-imageio-proxy: oVirt -ovirt-iso-uploader: oVirt -ovirt-jboss-modules-maven-plugin: oVirt -ovirt-lldp-labeler: oVirt -ovirt-log-collector: oVirt -ovirt-node-ng: oVirt -ovirt-node-ng-image-update: oVirt -ovirt-provider-ovn: oVirt -ovirt-release43: oVirt -ovirt-scheduler-proxy: oVirt -ovirt-setup-lib: oVirt -ovirt-vmconsole: oVirt -ovirt-web-ui: oVirt -p11-kit: Base-service -p7zip: dev-utils -pacemaker: sig-Ha -pacemaker-mgmt: sig-Ha -package-reinforce-test: sig-QA -pakchois: Packaging -paktype-naqsh-fonts: Desktop -paktype-naskh-basic-fonts: Desktop -paktype-tehreer-fonts: Desktop -pam: sig-security-facility -pam_krb5: Application -pango: Desktop -pangomm: Programming-language -pangox-compat: sig-mate-desktop -papi: Programming-language -papirus-icon-theme: Others -paps: Application -paranamer: Base-service -paratype-pt-sans-fonts: Desktop -parboiled: sig-Java -parfait: dev-utils -parole: xfce -parquet-format: bigdata -partclone: System-tool -parted: Storage -passenger: Application -passivetex: Private -passwd: Base-service -passwd_group_generator: sig-security-facility -patch: Base-service -patch-tracking: sig-EasyLife -patchelf: sig-epol -patchutils: Application -pavucontrol: Application -pax: Application -pbzip2: Application -pcaudiolib: Others -pciutils: Storage -pcl_conversion: sig-ROS -pcl_msgs: sig-ROS -pcmanfm: xfce -pcp: Application -pcre: Base-service -pcre2: Base-service -pcs: sig-Ha -pcsc-lite: Storage -pdf-renderer: sig-Java -pdfbox: sig-Java -pdfpc: Application -pdoc: sig-python-modules -pdsh: Application -pegdown: sig-Java -peking_university_2021: sig-recycle -peony: sig-UKUI -peony-extensions: sig-UKUI -perception_pcl: sig-ROS -percona-server: DB -percona-toolkit: 
DB -percona-xtrabackup: DB -performance_test_fixture: sig-ROS -perftest: Application -perl: Base-service -perl-Acme-Damn: sig-perl-modules -perl-Algorithm-Combinatorics: sig-perl-modules -perl-Algorithm-Dependency: sig-perl-modules -perl-Algorithm-Diff: sig-perl-modules -perl-Algorithm-Diff-XS: sig-perl-modules -perl-Algorithm-LUHN: sig-perl-modules -perl-Algorithm-Loops: sig-perl-modules -perl-Algorithm-NaiveBayes: sig-perl-modules -perl-Alien-Build: sig-perl-modules -perl-Alien-Libxml2: sig-perl-modules -perl-Alien-Packages: sig-perl-modules -perl-Any-Moose: sig-perl-modules -perl-Any-URI-Escape: sig-perl-modules -perl-AnyEvent: sig-perl-modules -perl-Apache-LogFormat-Compiler: sig-perl-modules -perl-Apache-Session: sig-perl-modules -perl-Apache-Session-Wrapper: sig-perl-modules -perl-App-Cmd: sig-perl-modules -perl-App-FatPacker: sig-perl-modules -perl-AppConfig: sig-perl-modules -perl-Archive-Any-Lite: sig-perl-modules -perl-Archive-Tar: sig-perl-modules -perl-Archive-Zip: Programming-language -perl-Authen-SASL: Programming-language -perl-B-COW: sig-perl-modules -perl-B-Compiling: sig-perl-modules -perl-B-Debug: sig-perl-modules -perl-B-Hooks-EndOfScope: sig-perl-modules -perl-B-Hooks-OP-Annotation: sig-perl-modules -perl-B-Hooks-OP-Check: sig-perl-modules -perl-B-Hooks-OP-PPAddr: sig-perl-modules -perl-B-Hooks-Parser: sig-perl-modules -perl-B-Keywords: sig-perl-modules -perl-B-Lint: Application -perl-B-Utils: sig-perl-modules -perl-BSD-Resource: sig-perl-modules -perl-BSSolv: sig-perl-modules -perl-BibTeX-Parser: sig-perl-modules -perl-Biblio-EndnoteStyle: sig-perl-modules -perl-Bit-Vector: Programming-language -perl-Browser-Open: sig-perl-modules -perl-Business-CreditCard: sig-perl-modules -perl-Business-Hours: sig-perl-modules -perl-Business-ISBN: sig-perl-modules -perl-Business-ISBN-Data: sig-perl-modules -perl-Business-ISMN: sig-perl-modules -perl-Business-ISSN: sig-perl-modules -perl-Business-Stripe: sig-perl-modules -perl-CAD-Format-STL: sig-perl-modules -perl-CBOR-XS: sig-perl-modules -perl-CDDB: sig-perl-modules -perl-CGI: sig-perl-modules -perl-CGI-Ajax: sig-perl-modules -perl-CGI-Application: sig-perl-modules -perl-CGI-Application-PSGI: sig-perl-modules -perl-CGI-Application-Plugin-ConfigAuto: sig-perl-modules -perl-CGI-Application-Plugin-DBH: sig-perl-modules -perl-CGI-Application-Plugin-DBIC-Schema: sig-perl-modules -perl-CGI-Application-Plugin-DevPopup: sig-perl-modules -perl-CGI-Application-Plugin-ErrorPage: sig-perl-modules -perl-CGI-Application-Plugin-FillInForm: sig-perl-modules -perl-CGI-Application-Plugin-FormState: sig-perl-modules -perl-CGI-Application-Plugin-JSON: sig-perl-modules -perl-CGI-Application-Plugin-LinkIntegrity: sig-perl-modules -perl-CGI-Application-Plugin-MessageStack: sig-perl-modules -perl-CGI-Application-Plugin-Redirect: sig-perl-modules -perl-CGI-Application-Plugin-Session: sig-perl-modules -perl-CGI-Application-Plugin-Stream: sig-perl-modules -perl-CGI-Application-Plugin-TT: sig-perl-modules -perl-CGI-Application-Standard-Config: sig-perl-modules -perl-CGI-Deurl-XS: sig-perl-modules -perl-CGI-Emulate-PSGI: sig-perl-modules -perl-CGI-Ex: sig-perl-modules -perl-CGI-Fast: sig-perl-modules -perl-CGI-FormBuilder: sig-perl-modules -perl-CGI-PSGI: sig-perl-modules -perl-CGI-Prototype: sig-perl-modules -perl-CGI-Session: sig-perl-modules -perl-CGI-Session-Driver-memcached: sig-perl-modules -perl-CGI-Simple: sig-perl-modules -perl-CLASS: sig-perl-modules -perl-CPAN: sig-perl-modules -perl-CPAN-Changes: sig-perl-modules -perl-CPAN-Common-Index: 
sig-perl-modules -perl-CPAN-DistnameInfo: sig-perl-modules -perl-CPAN-Meta: sig-perl-modules -perl-CPAN-Meta-Check: sig-perl-modules -perl-CPAN-Meta-Requirements: sig-perl-modules -perl-CPAN-Meta-YAML: sig-perl-modules -perl-CSS-DOM: sig-perl-modules -perl-CSS-Minifier-XS: sig-perl-modules -perl-CSS-Squish: sig-perl-modules -perl-CSS-Tiny: sig-perl-modules -perl-Cache-Cache: sig-perl-modules -perl-Cache-FastMmap: sig-perl-modules -perl-Cache-LRU: sig-perl-modules -perl-Cache-Memcached: sig-perl-modules -perl-Canary-Stability: sig-perl-modules -perl-Capture-Tiny: Programming-language -perl-Carp: sig-perl-modules -perl-Carp-Assert: sig-perl-modules -perl-Carp-Assert-More: sig-perl-modules -perl-Carp-Clan: Programming-language -perl-Carp-Fix-1_25: sig-perl-modules -perl-Carton: sig-perl-modules -perl-Catalyst-Manual: sig-perl-modules -perl-Catalyst-Plugin-CustomErrorMessage: sig-perl-modules -perl-Chatbot-Eliza: sig-perl-modules -perl-Check-ISA: sig-perl-modules -perl-Child: sig-perl-modules -perl-ClamAV-Client: sig-perl-modules -perl-Class-Accessor: sig-perl-modules -perl-Class-Accessor-Chained: sig-perl-modules -perl-Class-Accessor-Classy: sig-perl-modules -perl-Class-Accessor-Grouped: sig-perl-modules -perl-Class-Accessor-Lite: sig-perl-modules -perl-Class-Adapter: sig-perl-modules -perl-Class-Autouse: sig-perl-modules -perl-Class-Base: sig-perl-modules -perl-Class-C3: sig-perl-modules -perl-Class-C3-Adopt-NEXT: sig-perl-modules -perl-Class-C3-Componentised: sig-perl-modules -perl-Class-C3-XS: sig-perl-modules -perl-Class-Can: sig-perl-modules -perl-Class-Container: sig-perl-modules -perl-Class-Data-Accessor: sig-perl-modules -perl-Class-Date: sig-perl-modules -perl-Class-ErrorHandler: sig-perl-modules -perl-Class-Factory-Util: sig-perl-modules -perl-Class-Field: sig-perl-modules -perl-Class-ISA: sig-perl-modules -perl-Class-Inspector: sig-perl-modules -perl-Class-Load: sig-perl-modules -perl-Class-Load-XS: sig-perl-modules -perl-Class-Method-Modifiers: sig-perl-modules -perl-Class-MethodMaker: sig-perl-modules -perl-Class-Prototyped: sig-perl-modules -perl-Class-Refresh: sig-perl-modules -perl-Class-ReturnValue: sig-perl-modules -perl-Class-Std: sig-perl-modules -perl-Class-Std-Fast: sig-perl-modules -perl-Class-Throwable: sig-perl-modules -perl-Class-Tiny: sig-perl-modules -perl-Class-Trigger: sig-perl-modules -perl-Class-Unload: sig-perl-modules -perl-Class-Utils: sig-perl-modules -perl-Class-Virtual: sig-perl-modules -perl-Class-XSAccessor: sig-perl-modules -perl-Clipboard: sig-perl-modules -perl-Clone: sig-perl-modules -perl-Clone-Choose: sig-perl-modules -perl-Clone-PP: sig-perl-modules -perl-Color-Library: sig-perl-modules -perl-Command-Runner: sig-perl-modules -perl-Commandable: sig-perl-modules -perl-Compress-Bzip2: sig-perl-modules -perl-Compress-LZ4: sig-perl-modules -perl-Compress-LZF: sig-perl-modules -perl-Compress-Raw-Bzip2: sig-perl-modules -perl-Compress-Raw-Zlib: sig-perl-modules -perl-Compress-Snappy: sig-perl-modules -perl-Config-Any: sig-perl-modules -perl-Config-Auto: sig-perl-modules -perl-Config-AutoConf: Programming-language -perl-Config-Extend-MySQL: sig-perl-modules -perl-Config-General: sig-perl-modules -perl-Config-GitLike: sig-perl-modules -perl-Config-Grammar: sig-perl-modules -perl-Config-INI: sig-perl-modules -perl-Config-INI-Reader-Multiline: sig-perl-modules -perl-Config-INI-Reader-Ordered: sig-perl-modules -perl-Config-IniFiles: sig-perl-modules -perl-Config-Perl-V: sig-perl-modules -perl-Config-Properties: sig-perl-modules -perl-Config-Std: 
sig-perl-modules -perl-Config-Tiny: sig-perl-modules -perl-Config-ZOMG: sig-perl-modules -perl-Const-Fast: sig-perl-modules -perl-Context-Preserve: sig-perl-modules -perl-Contextual-Return: sig-perl-modules -perl-Convert-ASN1: sig-perl-modules -perl-Convert-BER: sig-perl-modules -perl-Convert-Base32: sig-perl-modules -perl-Convert-Base64: sig-perl-modules -perl-Convert-Bencode: sig-perl-modules -perl-Convert-BinHex: sig-perl-modules -perl-Convert-Binary-C: sig-perl-modules -perl-Convert-Color: sig-perl-modules -perl-Convert-Color-XTerm: sig-perl-modules -perl-Convert-NLS_DATE_FORMAT: sig-perl-modules -perl-Convert-TNEF: sig-perl-modules -perl-Convert-UU: sig-perl-modules -perl-Cookie-Baker: sig-perl-modules -perl-Cpanel-JSON-XS: sig-perl-modules -perl-Crypt-Blowfish: sig-perl-modules -perl-Crypt-CBC: sig-perl-modules -perl-Crypt-Cracklib: sig-perl-modules -perl-Crypt-DES: sig-perl-modules -perl-Crypt-ECB: sig-perl-modules -perl-Crypt-GPG: sig-perl-modules -perl-Crypt-GeneratePassword: sig-perl-modules -perl-Crypt-IDEA: sig-perl-modules -perl-Crypt-OpenSSL-Bignum: sig-perl-modules -perl-Crypt-OpenSSL-DSA: sig-perl-modules -perl-Crypt-OpenSSL-EC: sig-perl-modules -perl-Crypt-OpenSSL-Guess: sig-perl-modules -perl-Crypt-OpenSSL-PKCS10: sig-perl-modules -perl-Crypt-OpenSSL-RSA: Programming-language -perl-Crypt-OpenSSL-Random: Programming-language -perl-Crypt-OpenSSL-X509: sig-perl-modules -perl-Crypt-PasswdMD5: sig-perl-modules -perl-Crypt-RandPasswd: sig-perl-modules -perl-Crypt-Rijndael: sig-perl-modules -perl-Crypt-Salsa20: sig-perl-modules -perl-Crypt-SaltedHash: sig-perl-modules -perl-Crypt-ScryptKDF: sig-perl-modules -perl-Crypt-URandom: sig-perl-modules -perl-Crypt-UnixCrypt_XS: sig-perl-modules -perl-Crypt-X509: sig-perl-modules -perl-Cwd-Guard: sig-perl-modules -perl-Cwd-utf8: sig-perl-modules -perl-DBD-MariaDB: DB -perl-DBD-Mock: sig-perl-modules -perl-DBD-MySQL: DB -perl-DBD-Pg: sig-perl-modules -perl-DBD-SQLite: DB -perl-DBD-SQLite2: sig-perl-modules -perl-DBD-XBase: sig-perl-modules -perl-DBI: sig-perl-modules -perl-DBICx-AutoDoc: sig-perl-modules -perl-DBIx-Class: sig-perl-modules -perl-DBIx-Class-Candy: sig-perl-modules -perl-DBIx-Class-Cursor-Cached: sig-perl-modules -perl-DBIx-Class-IntrospectableM2M: sig-perl-modules -perl-DBIx-Class-OptimisticLocking: sig-perl-modules -perl-DBIx-Class-Schema-Config: sig-perl-modules -perl-DBIx-Connector: sig-perl-modules -perl-DBIx-DBSchema: sig-perl-modules -perl-DBIx-Introspector: sig-perl-modules -perl-DBIx-RunSQL: sig-perl-modules -perl-DBIx-Safe: DB -perl-DBIx-Simple: sig-perl-modules -perl-DBIx-XHTML_Table: sig-perl-modules -perl-DBM-Deep: sig-perl-modules -perl-DB_File: sig-perl-modules -perl-Daemon-Control: sig-perl-modules -perl-Data-AsObject: sig-perl-modules -perl-Data-Binary: sig-perl-modules -perl-Data-Compare: sig-perl-modules -perl-Data-Dmp: sig-perl-modules -perl-Data-Dump: Programming-language -perl-Data-Dump-Streamer: sig-perl-modules -perl-Data-Dumper: sig-perl-modules -perl-Data-Dumper-Concise: sig-perl-modules -perl-Data-Dumper-Names: sig-perl-modules -perl-Data-Munge: sig-perl-modules -perl-Data-OptList: Programming-language -perl-Data-Page: sig-perl-modules -perl-Data-Perl: sig-perl-modules -perl-Data-Section: Programming-language -perl-Data-Section-Simple: sig-perl-modules -perl-Data-Stream-Bulk: sig-perl-modules -perl-Data-TreeDumper: sig-perl-modules -perl-Data-Tumbler: sig-perl-modules -perl-Data-UUID: Programming-language -perl-Data-Validate-Type: sig-perl-modules -perl-Database-DumpTruck: sig-perl-modules 
-perl-Date-Calc: Programming-language -perl-Date-Calc-XS: sig-perl-modules -perl-Date-Easter: sig-perl-modules -perl-Date-Holidays-DE: sig-perl-modules -perl-Date-ISO8601: sig-perl-modules -perl-Date-JD: sig-perl-modules -perl-Date-Leapyear: sig-perl-modules -perl-Date-Manip: Programming-language -perl-Date-Simple: sig-perl-modules -perl-Date-Tiny: sig-perl-modules -perl-Debug-ShowStuff: sig-perl-modules -perl-Declare-Constraints-Simple: sig-perl-modules -perl-Devel-Autoflush: sig-perl-modules -perl-Devel-Caller: sig-perl-modules -perl-Devel-Caller-IgnoreNamespaces: sig-perl-modules -perl-Devel-CheckBin: sig-perl-modules -perl-Devel-CheckCompiler: sig-perl-modules -perl-Devel-CheckLib: Programming-language -perl-Devel-CheckOS: sig-perl-modules -perl-Devel-Confess: sig-perl-modules -perl-Devel-Cycle: sig-perl-modules -perl-Devel-Dumpvar: sig-perl-modules -perl-Devel-EnforceEncapsulation: sig-perl-modules -perl-Devel-FindPerl: sig-perl-modules -perl-Devel-Gladiator: sig-perl-modules -perl-Devel-GlobalDestruction: sig-perl-modules -perl-Devel-GoFaster: sig-perl-modules -perl-Devel-Hexdump: sig-perl-modules -perl-Devel-LexAlias: sig-perl-modules -perl-Devel-MAT-Dumper: sig-perl-modules -perl-Devel-OverloadInfo: sig-perl-modules -perl-Devel-OverrideGlobalRequire: sig-perl-modules -perl-Devel-PPPort: sig-perl-modules -perl-Devel-PartialDump: sig-perl-modules -perl-Devel-Pragma: sig-perl-modules -perl-Devel-Refcount: sig-perl-modules -perl-Devel-SelfStubber: sig-perl-modules -perl-Devel-SimpleTrace: sig-perl-modules -perl-Devel-Size: sig-perl-modules -perl-Devel-StackTrace-AsHTML: sig-perl-modules -perl-Devel-StackTrace-WithLexicals: sig-perl-modules -perl-Devel-StringInfo: sig-perl-modules -perl-Devel-Symdump: sig-perl-modules -perl-Devel-Timer: sig-perl-modules -perl-Devel-Trace: sig-perl-modules -perl-Diff-LibXDiff: sig-perl-modules -perl-Digest: Base-service -perl-Digest-BubbleBabble: sig-perl-modules -perl-Digest-HMAC: Programming-language -perl-Digest-JHash: sig-perl-modules -perl-Digest-MD2: sig-perl-modules -perl-Digest-MD4: sig-perl-modules -perl-Digest-MD5: sig-perl-modules -perl-Digest-MD5-File: sig-perl-modules -perl-Digest-Nilsimsa: sig-perl-modules -perl-Digest-Perl-MD5: sig-perl-modules -perl-Digest-SHA: sig-perl-modules -perl-Digest-SHA1: sig-perl-modules -perl-Digest-SHA3: sig-perl-modules -perl-Dir-Manifest: sig-perl-modules -perl-Dir-Self: sig-perl-modules -perl-Directory-Scratch: sig-perl-modules -perl-Dist-Metadata: sig-perl-modules -perl-EBook-EPUB: sig-perl-modules -perl-ElasticSearch-SearchBuilder: sig-perl-modules -perl-Email-Abstract: sig-perl-modules -perl-Email-Address: sig-perl-modules -perl-Email-Address-List: sig-perl-modules -perl-Email-Address-XS: sig-perl-modules -perl-Email-Date: sig-perl-modules -perl-Email-Date-Format: sig-perl-modules -perl-Email-MIME: sig-perl-modules -perl-Email-MIME-Attachment-Stripper: sig-perl-modules -perl-Email-MIME-ContentType: sig-perl-modules -perl-Email-MIME-Encodings: sig-perl-modules -perl-Email-MessageID: sig-perl-modules -perl-Email-Reply: sig-perl-modules -perl-Email-Send: sig-perl-modules -perl-Email-Sender: sig-perl-modules -perl-Email-Simple: sig-perl-modules -perl-Encode: sig-perl-modules -perl-Encode-Detect: sig-perl-modules -perl-Encode-IMAPUTF7: sig-perl-modules -perl-Encode-Locale: sig-perl-modules -perl-Encode-Newlines: sig-perl-modules -perl-Env: sig-perl-modules -perl-Env-C: sig-perl-modules -perl-Env-Path: sig-perl-modules -perl-Env-Sanctify: sig-perl-modules -perl-Error: sig-perl-modules -perl-Error-Pure: 
sig-perl-modules -perl-Error-Pure-Output-Text: sig-perl-modules -perl-Eval-Closure: sig-perl-modules -perl-Eval-LineNumbers: sig-perl-modules -perl-Eval-WithLexicals: sig-perl-modules -perl-Event: sig-perl-modules -perl-Excel-Writer-XLSX: sig-perl-modules -perl-Exception-Base: sig-perl-modules -perl-Exception-Class: sig-perl-modules -perl-Exception-Class-TryCatch: sig-perl-modules -perl-Exception-Tiny: sig-perl-modules -perl-Expect: sig-perl-modules -perl-Expect-Simple: sig-perl-modules -perl-Export-Attrs: sig-perl-modules -perl-Exporter: sig-perl-modules -perl-Exporter-Declare: sig-perl-modules -perl-Exporter-Declare-Magic: sig-perl-modules -perl-Exporter-Easy: sig-perl-modules -perl-Exporter-Lite: sig-perl-modules -perl-Exporter-Tiny: sig-perl-modules -perl-ExtUtils-AutoInstall: sig-perl-modules -perl-ExtUtils-CBuilder: Programming-language -perl-ExtUtils-CChecker: sig-perl-modules -perl-ExtUtils-Config: sig-perl-modules -perl-ExtUtils-Depends: sig-mate-desktop -perl-ExtUtils-HasCompiler: sig-perl-modules -perl-ExtUtils-Helpers: sig-perl-modules -perl-ExtUtils-InferConfig: sig-perl-modules -perl-ExtUtils-Install: sig-perl-modules -perl-ExtUtils-InstallPaths: sig-perl-modules -perl-ExtUtils-LibBuilder: sig-perl-modules -perl-ExtUtils-MakeMaker: Base-service -perl-ExtUtils-Manifest: sig-perl-modules -perl-ExtUtils-ParseXS: sig-perl-modules -perl-ExtUtils-PkgConfig: sig-mate-desktop -perl-ExtUtils-TBone: sig-perl-modules -perl-ExtUtils-Typemap: sig-perl-modules -perl-ExtUtils-Typemaps-Default: sig-perl-modules -perl-FCGI: sig-perl-modules -perl-FCGI-ProcManager: sig-perl-modules -perl-FFI-CheckLib: sig-perl-modules -perl-Fedora-VSP: Programming-language -perl-Fennec-Lite: sig-perl-modules -perl-File-BOM: sig-perl-modules -perl-File-BaseDir: sig-perl-modules -perl-File-CheckTree: sig-perl-modules -perl-File-ConfigDir: sig-perl-modules -perl-File-Copy-Recursive: sig-perl-modules -perl-File-Copy-Recursive-Reduced: sig-perl-modules -perl-File-DesktopEntry: sig-perl-modules -perl-File-FcntlLock: sig-perl-modules -perl-File-Fetch: sig-perl-modules -perl-File-Find-Object: sig-perl-modules -perl-File-Find-Object-Rule: sig-perl-modules -perl-File-Find-Rule: sig-perl-modules -perl-File-Find-Rule-PPI: sig-perl-modules -perl-File-Find-Rule-Perl: sig-perl-modules -perl-File-Find-Rule-VCS: sig-perl-modules -perl-File-Find-utf8: sig-perl-modules -perl-File-FindLib: sig-perl-modules -perl-File-Flat: sig-perl-modules -perl-File-HomeDir: sig-perl-modules -perl-File-KeePass: sig-perl-modules -perl-File-Listing: Programming-language -perl-File-LoadLines: sig-perl-modules -perl-File-MMagic: sig-perl-modules -perl-File-Map: sig-perl-modules -perl-File-MimeInfo: sig-perl-modules -perl-File-Modified: sig-perl-modules -perl-File-NCopy: sig-perl-modules -perl-File-NFSLock: sig-perl-modules -perl-File-Next: sig-perl-modules -perl-File-Object: sig-perl-modules -perl-File-Path: sig-perl-modules -perl-File-Path-Tiny: sig-perl-modules -perl-File-PathList: sig-perl-modules -perl-File-Pid: sig-perl-modules -perl-File-Read: sig-perl-modules -perl-File-ReadBackwards: sig-perl-modules -perl-File-Remove: sig-perl-modules -perl-File-SearchPath: sig-perl-modules -perl-File-Share: sig-perl-modules -perl-File-ShareDir: sig-perl-modules -perl-File-ShareDir-Install: sig-perl-modules -perl-File-ShareDir-ProjectDistDir: sig-perl-modules -perl-File-Slurp: Programming-language -perl-File-Slurp-Tiny: sig-perl-modules -perl-File-Slurper: sig-perl-modules -perl-File-Spec-Native: sig-perl-modules -perl-File-Sync: sig-perl-modules 
-perl-File-Temp: sig-perl-modules -perl-File-Touch: sig-perl-modules -perl-File-Type: sig-perl-modules -perl-File-Type-WebImages: sig-perl-modules -perl-File-Which: sig-perl-modules -perl-File-Zglob: sig-perl-modules -perl-File-chdir: sig-perl-modules -perl-File-chmod: sig-perl-modules -perl-File-pushd: sig-perl-modules -perl-FileHandle-Fmode: sig-perl-modules -perl-FileHandle-Unget: sig-perl-modules -perl-Filesys-Notify-Simple: sig-perl-modules -perl-Filter: sig-perl-modules -perl-Filter-Simple: sig-perl-modules -perl-Finance-YahooQuote: sig-perl-modules -perl-Flow: sig-perl-modules -perl-Font-TTF: Programming-language -perl-Format-Human-Bytes: sig-perl-modules -perl-FreezeThaw: sig-perl-modules -perl-GD: sig-perl-modules -perl-GD-Barcode: sig-perl-modules -perl-GD-SVG: sig-perl-modules -perl-GPS-OID: sig-perl-modules -perl-GSSAPI: sig-perl-modules -perl-Games-Solitaire-Verify: sig-perl-modules -perl-Geo-Constants: sig-perl-modules -perl-Geo-Ellipsoids: sig-perl-modules -perl-Geo-Forward: sig-perl-modules -perl-Geo-Functions: sig-perl-modules -perl-Geo-IP: sig-perl-modules -perl-Geo-IPfree: sig-perl-modules -perl-Geo-Inverse: sig-perl-modules -perl-Geography-Countries: sig-perl-modules -perl-Getopt-ArgvFile: sig-perl-modules -perl-Getopt-Euclid: sig-perl-modules -perl-Getopt-Long: sig-perl-modules -perl-Getopt-Long-Descriptive: sig-perl-modules -perl-Getopt-Lucid: sig-perl-modules -perl-Getopt-Simple: sig-perl-modules -perl-Git-Repository: sig-perl-modules -perl-Git-Repository-Plugin-AUTOLOAD: sig-perl-modules -perl-Git-Version-Compare: sig-perl-modules -perl-Git-Wrapper: sig-perl-modules -perl-Glib: sig-mate-desktop -perl-Graph: sig-perl-modules -perl-Graphics-ColorNames: sig-perl-modules -perl-Graphics-ColorNames-WWW: sig-perl-modules -perl-Graphics-ColorNamesLite-WWW: sig-perl-modules -perl-Growl-GNTP: sig-perl-modules -perl-Guard: sig-perl-modules -perl-HTML-Defang: sig-perl-modules -perl-HTML-Encoding: sig-perl-modules -perl-HTML-FillInForm: sig-perl-modules -perl-HTML-Form: sig-perl-modules -perl-HTML-Format: sig-perl-modules -perl-HTML-GenToc: sig-perl-modules -perl-HTML-HTML5-Entities: sig-perl-modules -perl-HTML-LinkList: sig-perl-modules -perl-HTML-Lint: sig-perl-modules -perl-HTML-Mason: sig-perl-modules -perl-HTML-Parser: Programming-language -perl-HTML-Quoted: sig-perl-modules -perl-HTML-RewriteAttributes: sig-perl-modules -perl-HTML-Scrubber: sig-perl-modules -perl-HTML-SimpleParse: sig-perl-modules -perl-HTML-Strip: sig-perl-modules -perl-HTML-StripScripts: sig-perl-modules -perl-HTML-StripScripts-Parser: sig-perl-modules -perl-HTML-Table: sig-perl-modules -perl-HTML-TagCloud: sig-perl-modules -perl-HTML-Tagset: Programming-language -perl-HTML-Template: sig-perl-modules -perl-HTML-Template-Pro: sig-perl-modules -perl-HTML-Tiny: sig-perl-modules -perl-HTML-TokeParser-Simple: sig-perl-modules -perl-HTTP-Body: sig-perl-modules -perl-HTTP-BrowserDetect: sig-perl-modules -perl-HTTP-Cache-Transparent: sig-perl-modules -perl-HTTP-CookieMonster: sig-perl-modules -perl-HTTP-Cookies: Programming-language -perl-HTTP-Daemon: sig-perl-modules -perl-HTTP-Date: Programming-language -perl-HTTP-Exception: sig-perl-modules -perl-HTTP-Headers-Fast: sig-perl-modules -perl-HTTP-Link-Parser: sig-perl-modules -perl-HTTP-Lite: sig-perl-modules -perl-HTTP-Message: Programming-language -perl-HTTP-MultiPartParser: sig-perl-modules -perl-HTTP-Negotiate: Programming-language -perl-HTTP-Parser: sig-perl-modules -perl-HTTP-Parser-XS: sig-perl-modules -perl-HTTP-Request-AsCGI: sig-perl-modules 
-perl-HTTP-Request-Params: sig-perl-modules -perl-HTTP-Server-Simple: sig-perl-modules -perl-HTTP-Server-Simple-PSGI: sig-perl-modules -perl-HTTP-Thin: sig-perl-modules -perl-HTTP-Tiny: sig-perl-modules -perl-HTTP-Tiny-Multipart: sig-perl-modules -perl-HTTP-Tinyish: sig-perl-modules -perl-Ham-Reference-QRZ: sig-perl-modules -perl-HarfBuzz-Shaper: sig-perl-modules -perl-Hash-Case: sig-perl-modules -perl-Hash-Diff: sig-perl-modules -perl-Hash-Flatten: sig-perl-modules -perl-Hash-Merge: sig-perl-modules -perl-Hash-Merge-Simple: sig-perl-modules -perl-Hash-MoreUtils: sig-perl-modules -perl-Hash-MultiValue: sig-perl-modules -perl-Hash-Util-FieldHash-Compat: sig-perl-modules -perl-Hook-LexWrap: sig-perl-modules -perl-IO: sig-perl-modules -perl-IO-All: sig-perl-modules -perl-IO-Any: sig-perl-modules -perl-IO-Compress: sig-perl-modules -perl-IO-HTML: Programming-language -perl-IO-Interactive: sig-perl-modules -perl-IO-Interface: sig-perl-modules -perl-IO-Multiplex: sig-perl-modules -perl-IO-Pager: sig-perl-modules -perl-IO-Pipely: sig-perl-modules -perl-IO-Prompt-Tiny: sig-perl-modules -perl-IO-Prompter: sig-perl-modules -perl-IO-Pty-Easy: sig-perl-modules -perl-IO-Socket-INET6: Programming-language -perl-IO-Socket-IP: sig-perl-modules -perl-IO-Socket-SSL: sig-perl-modules -perl-IO-Socket-Socks: sig-perl-modules -perl-IO-Socket-Timeout: sig-perl-modules -perl-IO-String: Programming-language -perl-IO-Stty: sig-perl-modules -perl-IO-Tee: sig-perl-modules -perl-IO-TieCombine: sig-perl-modules -perl-IO-Tty: sig-perl-modules -perl-IO-stringy: sig-perl-modules -perl-IPC-Cmd: sig-perl-modules -perl-IPC-Run: sig-perl-modules -perl-IPC-Run3: sig-perl-modules -perl-IPC-ShareLite: sig-perl-modules -perl-IPC-SysV: sig-perl-modules -perl-IPC-System-Simple: sig-perl-modules -perl-IPTables-ChainMgr: sig-perl-modules -perl-IPTables-Parse: sig-perl-modules -perl-IRC-Utils: sig-perl-modules -perl-Image-Base: sig-perl-modules -perl-Image-ExifTool: sig-perl-modules -perl-Image-Info: sig-perl-modules -perl-Image-Math-Constrain: sig-perl-modules -perl-Image-Size: sig-perl-modules -perl-Image-Xbm: sig-perl-modules -perl-Image-Xpm: sig-perl-modules -perl-Import-Into: sig-perl-modules -perl-Importer: sig-perl-modules -perl-Inline: sig-perl-modules -perl-Iterator-Simple: sig-perl-modules -perl-Iterator-Simple-Lookahead: sig-perl-modules -perl-JSON: Programming-language -perl-JSON-MaybeXS: sig-perl-modules -perl-JSON-PP: sig-perl-modules -perl-JSON-Parse: sig-perl-modules -perl-JSON-Pointer: sig-perl-modules -perl-JSON-RPC-Common: sig-perl-modules -perl-JSON-Tiny: sig-perl-modules -perl-JSON-XS: sig-perl-modules -perl-JavaScript-Beautifier: sig-perl-modules -perl-L: sig-perl-modules -perl-LWP-MediaTypes: Programming-language -perl-LWP-Online: sig-perl-modules -perl-LWP-Protocol-https: Programming-language -perl-LaTeX-ToUnicode: sig-perl-modules -perl-Language-Functional: sig-perl-modules -perl-Lchown: sig-perl-modules -perl-Lemplate: sig-perl-modules -perl-Lexical-Persistence: sig-perl-modules -perl-Lexical-SealRequireHints: sig-perl-modules -perl-Library-CallNumber-LC: sig-perl-modules -perl-Lingua-EN-Alphabet-Shaw: sig-perl-modules -perl-Lingua-EN-Fathom: sig-perl-modules -perl-Lingua-EN-FindNumber: sig-perl-modules -perl-Lingua-EN-Inflect: sig-perl-modules -perl-Lingua-EN-Inflect-Number: sig-perl-modules -perl-Lingua-EN-Number-IsOrdinal: sig-perl-modules -perl-Lingua-EN-Numbers: sig-perl-modules -perl-Lingua-EN-Numbers-Easy: sig-perl-modules -perl-Lingua-EN-Numbers-Ordinate: sig-perl-modules 
-perl-Lingua-EN-PluralToSingular: sig-perl-modules -perl-Lingua-EN-Sentence: sig-perl-modules -perl-Lingua-EN-Syllable: sig-perl-modules -perl-Lingua-EN-Words2Nums: sig-perl-modules -perl-Lingua-Flags: sig-perl-modules -perl-Lingua-Identify: sig-perl-modules -perl-Lingua-KO-Hangul-Util: sig-perl-modules -perl-Lingua-PT-Stemmer: sig-perl-modules -perl-Lingua-Stem-Ru: sig-perl-modules -perl-Lingua-Stem-Snowball: sig-perl-modules -perl-Lingua-Translit: sig-perl-modules -perl-Linux-Pid: sig-perl-modules -perl-List-AllUtils: sig-perl-modules -perl-List-MoreUtils: sig-perl-modules -perl-List-MoreUtils-XS: sig-perl-modules -perl-List-Pairwise: sig-perl-modules -perl-List-SomeUtils: sig-perl-modules -perl-List-SomeUtils-XS: sig-perl-modules -perl-List-UtilsBy: sig-perl-modules -perl-Locale-Codes: sig-perl-modules -perl-Locale-Currency-Format: sig-perl-modules -perl-Locale-MO-File: sig-perl-modules -perl-Locale-Maketext: sig-perl-modules -perl-Locale-Maketext-Gettext: sig-perl-modules -perl-Locale-Maketext-Lexicon: sig-perl-modules -perl-Locale-Maketext-Simple: sig-perl-modules -perl-Locale-Msgfmt: sig-perl-modules -perl-Locale-PO: sig-perl-modules -perl-Locale-SubCountry: sig-perl-modules -perl-Locale-TextDomain-OO: sig-perl-modules -perl-Locale-TextDomain-OO-Util: sig-perl-modules -perl-Locale-US: sig-perl-modules -perl-Locale-Utils-PlaceholderBabelFish: sig-perl-modules -perl-Locale-Utils-PlaceholderMaketext: sig-perl-modules -perl-Locale-Utils-PlaceholderNamed: sig-perl-modules -perl-Log-Any: sig-perl-modules -perl-Log-Contextual: sig-perl-modules -perl-Log-Dispatch: sig-perl-modules -perl-Log-Handler: sig-perl-modules -perl-Log-Log4perl: sig-perl-modules -perl-Log-Message: sig-perl-modules -perl-Log-Message-Simple: sig-perl-modules -perl-Log-Trace: sig-perl-modules -perl-Log-Trivial: sig-perl-modules -perl-Log-ger: sig-perl-modules -perl-MIME-Base32: sig-perl-modules -perl-MIME-Base64: sig-perl-modules -perl-MIME-Charset: sig-perl-modules -perl-MIME-EncWords: sig-perl-modules -perl-MIME-Lite: sig-perl-modules -perl-MIME-Types: sig-perl-modules -perl-MIME-tools: sig-perl-modules -perl-MP3-Info: sig-perl-modules -perl-MRO-Compat: Programming-language -perl-Mail-AuthenticationResults: sig-perl-modules -perl-Mail-Box: sig-perl-modules -perl-Mail-Box-POP3: sig-perl-modules -perl-Mail-Box-Parser-C: sig-perl-modules -perl-Mail-DKIM: Programming-language -perl-Mail-IMAPTalk: sig-perl-modules -perl-Mail-JMAPTalk: sig-perl-modules -perl-Mail-Message: sig-perl-modules -perl-Mail-SPF: sig-perl-modules -perl-Mail-Sender: sig-perl-modules -perl-Mail-Sendmail: sig-perl-modules -perl-Mail-Transport: sig-perl-modules -perl-MailTools: Programming-language -perl-Makefile-DOM: sig-perl-modules -perl-MasonX-Request-WithApacheSession: sig-perl-modules -perl-Math-Base36: sig-perl-modules -perl-Math-Base85: sig-perl-modules -perl-Math-BaseCnv: sig-perl-modules -perl-Math-BigInt: sig-perl-modules -perl-Math-BigInt-FastCalc: sig-perl-modules -perl-Math-BigRat: sig-perl-modules -perl-Math-Calc-Units: sig-perl-modules -perl-Math-Cartesian-Product: sig-perl-modules -perl-Math-Complex: sig-perl-modules -perl-Math-ConvexHull: sig-perl-modules -perl-Math-ConvexHull-MonotoneChain: sig-perl-modules -perl-Math-Derivative: sig-perl-modules -perl-Math-Expression-Evaluator: sig-perl-modules -perl-Math-FFT: sig-perl-modules -perl-Math-Int64: sig-perl-modules -perl-Math-MatrixReal: sig-perl-modules -perl-Math-Polygon: sig-perl-modules -perl-Math-Round: sig-perl-modules -perl-Math-Spline: sig-perl-modules -perl-Math-Utils: 
sig-perl-modules -perl-Math-Vec: sig-perl-modules -perl-MemHandle: sig-perl-modules -perl-Memoize: sig-perl-modules -perl-Menlo: sig-perl-modules -perl-Menlo-Legacy: sig-perl-modules -perl-Meta-Builder: sig-perl-modules -perl-Method-Signatures-Simple: sig-perl-modules -perl-Metrics-Any: sig-perl-modules -perl-Mixin-ExtraFields: sig-perl-modules -perl-Mixin-Linewise: sig-perl-modules -perl-Mock-Config: sig-perl-modules -perl-Mock-Quick: sig-perl-modules -perl-Mock-Sub: sig-perl-modules -perl-Modern-Perl: sig-perl-modules -perl-Module-Build: Programming-language -perl-Module-Build-Deprecated: sig-perl-modules -perl-Module-Build-Pluggable: sig-perl-modules -perl-Module-Build-Tiny: sig-perl-modules -perl-Module-Build-Using-PkgConfig: sig-perl-modules -perl-Module-CPANfile: sig-perl-modules -perl-Module-Compile: sig-perl-modules -perl-Module-CoreList: sig-perl-modules -perl-Module-Data: sig-perl-modules -perl-Module-Depends: sig-perl-modules -perl-Module-Extract: sig-perl-modules -perl-Module-Extract-Namespaces: sig-perl-modules -perl-Module-Extract-Use: sig-perl-modules -perl-Module-Find: sig-perl-modules -perl-Module-Install: sig-perl-modules -perl-Module-Install-AuthorRequires: sig-perl-modules -perl-Module-Install-AuthorTests: sig-perl-modules -perl-Module-Install-Authority: sig-perl-modules -perl-Module-Install-AutoLicense: sig-perl-modules -perl-Module-Install-AutoManifest: sig-perl-modules -perl-Module-Install-ExtraTests: sig-perl-modules -perl-Module-Install-GithubMeta: sig-perl-modules -perl-Module-Install-ManifestSkip: sig-perl-modules -perl-Module-Install-ReadmeFromPod: Programming-language -perl-Module-Install-ReadmeMarkdownFromPod: Programming-language -perl-Module-Install-Repository: Programming-language -perl-Module-Install-TestBase: sig-perl-modules -perl-Module-Install-TrustMetaYml: sig-perl-modules -perl-Module-Load: sig-perl-modules -perl-Module-Load-Conditional: sig-perl-modules -perl-Module-Load-Util: sig-perl-modules -perl-Module-Manifest: sig-perl-modules -perl-Module-Manifest-Skip: Programming-language -perl-Module-Mask: sig-perl-modules -perl-Module-Math-Depends: sig-perl-modules -perl-Module-Metadata: sig-perl-modules -perl-Module-Package: Programming-language -perl-Module-Package-Au: Programming-language -perl-Module-Path: sig-perl-modules -perl-Module-Pluggable: sig-perl-modules -perl-Module-Reader: sig-perl-modules -perl-Module-Refresh: sig-perl-modules -perl-Module-Runtime: Programming-language -perl-Module-Runtime-Conflicts: sig-perl-modules -perl-Module-ScanDeps: Programming-language -perl-Module-Signature: sig-perl-modules -perl-Module-Starter: sig-perl-modules -perl-Module-Util: sig-perl-modules -perl-MogileFS-Client: sig-perl-modules -perl-MogileFS-Utils: sig-perl-modules -perl-Mojo-DOM58: sig-perl-modules -perl-Mojolicious: sig-perl-modules -perl-Monitoring-Plugin: sig-perl-modules -perl-Monotone-AutomateStdio: sig-perl-modules -perl-Moo: Programming-language -perl-MooX: sig-perl-modules -perl-MooX-Cmd: sig-perl-modules -perl-MooX-ConfigFromFile: sig-perl-modules -perl-MooX-File-ConfigDir: sig-perl-modules -perl-MooX-HandlesVia: sig-perl-modules -perl-MooX-HasEnv: sig-perl-modules -perl-MooX-Locale-Passthrough: sig-perl-modules -perl-MooX-Locale-TextDomain-OO: sig-perl-modules -perl-MooX-Log-Any: sig-perl-modules -perl-MooX-Role-Parameterized: sig-perl-modules -perl-MooX-Roles-Pluggable: sig-perl-modules -perl-MooX-Singleton: sig-perl-modules -perl-MooX-StrictConstructor: sig-perl-modules -perl-Moose: sig-perl-modules -perl-Moose-Autobox: sig-perl-modules 
-perl-MooseX-Aliases: sig-perl-modules -perl-MooseX-App-Cmd: sig-perl-modules -perl-MooseX-ArrayRef: sig-perl-modules -perl-MooseX-Async: sig-perl-modules -perl-MooseX-Attribute-Chained: sig-perl-modules -perl-MooseX-CascadeClearing: sig-perl-modules -perl-MooseX-ClassAttribute: sig-perl-modules -perl-MooseX-CoercePerAttribute: sig-perl-modules -perl-MooseX-ConfigFromFile: sig-perl-modules -perl-MooseX-Configuration: sig-perl-modules -perl-MooseX-Daemonize: sig-perl-modules -perl-MooseX-Emulate-Class-Accessor-Fast: sig-perl-modules -perl-MooseX-Getopt: sig-perl-modules -perl-MooseX-GlobRef: sig-perl-modules -perl-MooseX-Has-Options: sig-perl-modules -perl-MooseX-Has-Sugar: sig-perl-modules -perl-MooseX-InsideOut: sig-perl-modules -perl-MooseX-Iterator: sig-perl-modules -perl-MooseX-LazyRequire: sig-perl-modules -perl-MooseX-MarkAsMethods: sig-perl-modules -perl-MooseX-Meta-TypeConstraint-ForceCoercion: sig-perl-modules -perl-MooseX-Meta-TypeConstraint-Mooish: sig-perl-modules -perl-MooseX-MethodAttributes: sig-perl-modules -perl-MooseX-MultiInitArg: sig-perl-modules -perl-MooseX-NonMoose: sig-perl-modules -perl-MooseX-Object-Pluggable: sig-perl-modules -perl-MooseX-OneArgNew: sig-perl-modules -perl-MooseX-POE: sig-perl-modules -perl-MooseX-Param: sig-perl-modules -perl-MooseX-Params-Validate: sig-perl-modules -perl-MooseX-RelatedClassRoles: sig-perl-modules -perl-MooseX-Role-Cmd: sig-perl-modules -perl-MooseX-Role-Matcher: sig-perl-modules -perl-MooseX-Role-Parameterized: sig-perl-modules -perl-MooseX-Role-Strict: sig-perl-modules -perl-MooseX-Role-Tempdir: sig-perl-modules -perl-MooseX-SemiAffordanceAccessor: sig-perl-modules -perl-MooseX-SetOnce: sig-perl-modules -perl-MooseX-SimpleConfig: sig-perl-modules -perl-MooseX-Singleton: sig-perl-modules -perl-MooseX-StrictConstructor: sig-perl-modules -perl-MooseX-TraitFor-Meta-Class-BetterAnonClassNames: sig-perl-modules -perl-MooseX-Traits: sig-perl-modules -perl-MooseX-Traits-Pluggable: sig-perl-modules -perl-MooseX-Types: sig-perl-modules -perl-MooseX-Types-Common: sig-perl-modules -perl-MooseX-Types-LoadableClass: sig-perl-modules -perl-MooseX-Types-Path-Class: sig-perl-modules -perl-MooseX-Types-Path-Tiny: sig-perl-modules -perl-MooseX-Types-Perl: sig-perl-modules -perl-MooseX-Types-Stringlike: sig-perl-modules -perl-Mozilla-CA: Base-service -perl-Mozilla-LDAP: sig-perl-modules -perl-Mozilla-PublicSuffix: sig-perl-modules -perl-NNTPClient: sig-perl-modules -perl-NTLM: Programming-language -perl-Net-AMQP: sig-perl-modules -perl-Net-BGP: sig-perl-modules -perl-Net-CIDR: sig-perl-modules -perl-Net-CIDR-Lite: sig-perl-modules -perl-Net-DNS: Programming-language -perl-Net-DNS-Resolver-Mock: sig-perl-modules -perl-Net-DNS-Resolver-Programmable: sig-perl-modules -perl-Net-DNS-SEC: sig-perl-modules -perl-Net-Daemon: sig-perl-modules -perl-Net-Domain-TLD: sig-perl-modules -perl-Net-Google-AuthSub: sig-perl-modules -perl-Net-HL7: sig-perl-modules -perl-Net-HTTP: Programming-language -perl-Net-INET6Glue: sig-perl-modules -perl-Net-IP: sig-perl-modules -perl-Net-IP-Match-Regexp: sig-perl-modules -perl-Net-IP-Minimal: sig-perl-modules -perl-Net-LDAP-SID: sig-perl-modules -perl-Net-LibIDN: sig-perl-modules -perl-Net-LibIDN2: sig-perl-modules -perl-Net-MQTT-Simple: sig-perl-modules -perl-Net-OAuth: sig-perl-modules -perl-Net-OpenSSH: sig-perl-modules -perl-Net-POP3S: sig-perl-modules -perl-Net-Ping-External: sig-perl-modules -perl-Net-Random: sig-perl-modules -perl-Net-RawIP: sig-perl-modules -perl-Net-SFTP-Foreign: sig-perl-modules -perl-Net-SMTP-SSL: 
sig-perl-modules -perl-Net-SMTPS: sig-perl-modules -perl-Net-SNMP: sig-perl-modules -perl-Net-SSLeay: sig-perl-modules -perl-Net-Server: sig-perl-modules -perl-Net-Server-SS-PreFork: sig-perl-modules -perl-Net-Telnet: sig-perl-modules -perl-Net-Telnet-Cisco: sig-perl-modules -perl-Net-UPnP: sig-perl-modules -perl-NetAddr-IP: Programming-language -perl-Nmap-Parser: sig-perl-modules -perl-Number-Bytes-Human: sig-perl-modules -perl-Number-Compare: sig-perl-modules -perl-Number-Format: sig-perl-modules -perl-Number-Misc: sig-perl-modules -perl-Number-Range: sig-perl-modules -perl-Number-Tolerant: sig-perl-modules -perl-OLE-Storage_Lite: sig-perl-modules -perl-Object-Accessor: sig-perl-modules -perl-Object-HashBase: sig-perl-modules -perl-Object-Pluggable: sig-perl-modules -perl-Object-Realize-Later: sig-perl-modules -perl-Object-Signature: sig-perl-modules -perl-Object-Tiny: sig-perl-modules -perl-Ouch: sig-perl-modules -perl-PAR: sig-perl-modules -perl-PAR-Dist: sig-perl-modules -perl-PBKDF2-Tiny: sig-perl-modules -perl-PDF-Create: sig-perl-modules -perl-PDF-Reuse: sig-perl-modules -perl-PFT: sig-perl-modules -perl-PHP-Serialization: sig-perl-modules -perl-POD2-Base: sig-perl-modules -perl-POE: sig-perl-modules -perl-POE-Test-Loops: sig-perl-modules -perl-POSIX-strftime-Compiler: sig-perl-modules -perl-POSIX-strptime: sig-perl-modules -perl-PPI: sig-perl-modules -perl-PPI-HTML: sig-perl-modules -perl-PPI-XS: sig-perl-modules -perl-PPIx-EditorTools: sig-perl-modules -perl-PPIx-QuoteLike: sig-perl-modules -perl-PPIx-Regexp: sig-perl-modules -perl-PSGI: sig-perl-modules -perl-Package-Anon: sig-perl-modules -perl-Package-Constants: sig-perl-modules -perl-Package-DeprecationManager: sig-perl-modules -perl-Package-Generator: sig-perl-modules -perl-Package-New: sig-perl-modules -perl-Package-Stash: sig-perl-modules -perl-Package-Variant: sig-perl-modules -perl-PadWalker: sig-perl-modules -perl-Palm: sig-perl-modules -perl-Palm-PDB: sig-perl-modules -perl-Panotools-Script: sig-perl-modules -perl-Parallel-ForkManager: sig-perl-modules -perl-Parallel-Iterator: sig-perl-modules -perl-Parallel-Pipes: sig-perl-modules -perl-Parallel-Runner: sig-perl-modules -perl-Parallel-Scoreboard: sig-perl-modules -perl-Params-CallbackRequest: sig-perl-modules -perl-Params-Check: sig-perl-modules -perl-Params-Coerce: sig-perl-modules -perl-Params-Util: Programming-language -perl-Params-Validate: sig-perl-modules -perl-Parse-DMIDecode: sig-perl-modules -perl-Parse-Debian-Packages: sig-perl-modules -perl-Parse-EDID: sig-perl-modules -perl-Parse-ErrorString-Perl: sig-perl-modules -perl-Parse-ExuberantCTags: sig-perl-modules -perl-Parse-Gitignore: sig-perl-modules -perl-Parse-MIME: sig-perl-modules -perl-Parse-PMFile: sig-perl-modules -perl-Parse-Yapp: Programming-language -perl-Path-Class: Programming-language -perl-Path-FindDev: sig-perl-modules -perl-Path-IsDev: sig-perl-modules -perl-Path-Iterator-Rule: sig-perl-modules -perl-Path-ScanINC: sig-perl-modules -perl-Path-Tiny: Programming-language -perl-Path-Tiny-Rule: sig-perl-modules -perl-PathTools: sig-perl-modules -perl-Pegex: sig-perl-modules -perl-Perl-OSType: sig-perl-modules -perl-Perl-PrereqScanner: sig-perl-modules -perl-Perl-Stripper: sig-perl-modules -perl-Perl-Tidy: sig-perl-modules -perl-Perl-Tidy-Sweetened: sig-perl-modules -perl-Perl-Version: sig-perl-modules -perl-Perl6-Caller: sig-perl-modules -perl-Perl6-Junction: sig-perl-modules -perl-Perl6-Slurp: sig-perl-modules -perl-PerlIO-Layers: sig-perl-modules -perl-PerlIO-buffersize: sig-perl-modules 
-perl-PerlIO-eol: sig-perl-modules -perl-PerlIO-gzip: sig-perl-modules -perl-PerlIO-locale: sig-perl-modules -perl-PerlIO-utf8_strict: sig-perl-modules -perl-PerlIO-via-QuotedPrint: sig-perl-modules -perl-PerlIO-via-Timeout: sig-perl-modules -perl-Perlilog: sig-perl-modules -perl-PkgConfig-LibPkgConf: sig-perl-modules -perl-Pod-Checker: sig-perl-modules -perl-Pod-Constants: sig-perl-modules -perl-Pod-Coverage: sig-perl-modules -perl-Pod-Coverage-Moose: sig-perl-modules -perl-Pod-Coverage-TrustPod: sig-perl-modules -perl-Pod-Elemental: sig-perl-modules -perl-Pod-Elemental-PerlMunger: sig-perl-modules -perl-Pod-Escapes: sig-perl-modules -perl-Pod-Eventual: sig-perl-modules -perl-Pod-LaTeX: Application -perl-Pod-Markdown: Programming-language -perl-Pod-Markdown-Github: sig-perl-modules -perl-Pod-MinimumVersion: sig-perl-modules -perl-Pod-POM: sig-perl-modules -perl-Pod-Parser: sig-perl-modules -perl-Pod-Perldoc: sig-perl-modules -perl-Pod-Plainer: sig-perl-modules -perl-Pod-PseudoPod: sig-perl-modules -perl-Pod-Simple: sig-perl-modules -perl-Pod-Simple-Wiki: sig-perl-modules -perl-Pod-Snippets: sig-perl-modules -perl-Pod-Spell: sig-perl-modules -perl-Pod-Spell-CommonMistakes: sig-perl-modules -perl-Pod-Strip: sig-perl-modules -perl-Pod-Tidy: sig-perl-modules -perl-Pod-Usage: sig-perl-modules -perl-Pod-Wrap: sig-perl-modules -perl-Pod-Xhtml: sig-perl-modules -perl-Printer: sig-perl-modules -perl-Proc-Daemon: sig-perl-modules -perl-Proc-InvokeEditor: sig-perl-modules -perl-Proc-PID-File: sig-perl-modules -perl-Proc-ProcessTable: sig-perl-modules -perl-Proc-Simple: sig-perl-modules -perl-Proc-Terminator: sig-perl-modules -perl-Proc-Wait3: sig-perl-modules -perl-Promises: sig-perl-modules -perl-RDF-NS: sig-perl-modules -perl-RDF-NS-Curated: sig-perl-modules -perl-RDF-Prefixes: sig-perl-modules -perl-REST-Client: sig-perl-modules -perl-RPM2: sig-perl-modules -perl-Readonly: sig-perl-modules -perl-ReadonlyX: sig-perl-modules -perl-Redis: sig-perl-modules -perl-Ref-Util: sig-perl-modules -perl-Ref-Util-XS: sig-perl-modules -perl-Regexp-Assemble: sig-perl-modules -perl-Regexp-Assemble-Compressed: sig-perl-modules -perl-Regexp-Common: sig-perl-modules -perl-Regexp-Common-net-CIDR: sig-perl-modules -perl-Regexp-Grammars: sig-perl-modules -perl-Regexp-IPv6: sig-perl-modules -perl-Regexp-Pattern: sig-perl-modules -perl-Regexp-Stringify: sig-perl-modules -perl-Regexp-Util: sig-perl-modules -perl-Retry: sig-perl-modules -perl-Return-MultiLevel: sig-perl-modules -perl-Return-Value: sig-perl-modules -perl-Role-Basic: sig-perl-modules -perl-Role-Identifiable: sig-perl-modules -perl-Role-Tiny: Programming-language -perl-Roman: sig-perl-modules -perl-Router-Simple: sig-perl-modules -perl-SGMLSpm: Programming-language -perl-SNMP_Session: sig-perl-modules -perl-SQL-Abstract: sig-perl-modules -perl-SQL-Interp: sig-perl-modules -perl-SQL-Library: sig-perl-modules -perl-SQL-ReservedWords: sig-perl-modules -perl-STD: sig-perl-modules -perl-SUPER: sig-perl-modules -perl-SVG: sig-perl-modules -perl-SVG-Parser: sig-perl-modules -perl-Safe-Isa: sig-perl-modules -perl-Scalar-Construct: sig-perl-modules -perl-Scalar-List-Utils: sig-perl-modules -perl-Scalar-String: sig-perl-modules -perl-Schedule-Cron: sig-perl-modules -perl-Scope-Guard: sig-perl-modules -perl-Scope-Upper: sig-perl-modules -perl-Scriptalicious: sig-perl-modules -perl-SelfLoader: sig-perl-modules -perl-Server-Starter: sig-perl-modules -perl-Set-Array: sig-perl-modules -perl-Set-Crontab: sig-perl-modules -perl-Set-Infinite: sig-perl-modules 
-perl-Set-IntSpan: sig-perl-modules -perl-Set-Scalar: sig-perl-modules -perl-Set-Tiny: sig-perl-modules -perl-Shell: sig-perl-modules -perl-Shell-Guess: sig-perl-modules -perl-Smart-Comments: sig-perl-modules -perl-Snowball-Swedish: sig-perl-modules -perl-Socket: sig-perl-modules -perl-Socket-MsgHdr: sig-perl-modules -perl-Socket6: Programming-language -perl-Software-License: Programming-language -perl-Software-License-CCpack: sig-perl-modules -perl-Sort-Key: sig-perl-modules -perl-Sort-MergeSort: sig-perl-modules -perl-Sort-Naturally: sig-perl-modules -perl-Sort-Versions: sig-perl-modules -perl-Spellunker: sig-perl-modules -perl-Spiffy: sig-perl-modules -perl-Spreadsheet-ParseExcel: sig-perl-modules -perl-Statistics-Basic: sig-perl-modules -perl-Statistics-CaseResampling: sig-perl-modules -perl-Statistics-ChiSquare: sig-perl-modules -perl-Statistics-Contingency: sig-perl-modules -perl-Statistics-Descriptive: sig-perl-modules -perl-Storable: Base-service -perl-Stream-Buffered: sig-perl-modules -perl-String-Approx: sig-perl-modules -perl-String-Base: sig-perl-modules -perl-String-CRC32: sig-perl-modules -perl-String-CamelCase: sig-perl-modules -perl-String-Copyright: sig-perl-modules -perl-String-Dirify: sig-perl-modules -perl-String-Escape: sig-perl-modules -perl-String-Format: sig-perl-modules -perl-String-Formatter: sig-perl-modules -perl-String-Interpolate-Named: sig-perl-modules -perl-String-Print: sig-perl-modules -perl-String-Random: sig-perl-modules -perl-String-RewritePrefix: sig-perl-modules -perl-String-ShellQuote: Programming-language -perl-String-Similarity: sig-perl-modules -perl-String-Tagged: sig-perl-modules -perl-String-Tagged-Terminal: sig-perl-modules -perl-String-Trim: sig-perl-modules -perl-String-Truncate: sig-perl-modules -perl-String-Util: sig-perl-modules -perl-Struct-Dumb: sig-perl-modules -perl-Sub-Attribute: sig-perl-modules -perl-Sub-Exporter: sig-perl-modules -perl-Sub-Exporter-ForMethods: sig-perl-modules -perl-Sub-Exporter-GlobExporter: sig-perl-modules -perl-Sub-Exporter-Progressive: sig-perl-modules -perl-Sub-Identify: sig-perl-modules -perl-Sub-Infix: sig-perl-modules -perl-Sub-Info: sig-perl-modules -perl-Sub-Install: Programming-language -perl-Sub-Name: Programming-language -perl-Sub-Override: sig-perl-modules -perl-Sub-Prototype: sig-perl-modules -perl-Sub-Quote: Programming-language -perl-Sub-Uplevel: sig-perl-modules -perl-Sub-WrapPackages: sig-perl-modules -perl-Switch: sig-perl-modules -perl-Symbol-Global-Name: sig-perl-modules -perl-Symbol-Util: sig-perl-modules -perl-Syntax-Keyword-Gather: sig-perl-modules -perl-Syntax-Keyword-Junction: sig-perl-modules -perl-Syntax-Keyword-Try: sig-perl-modules -perl-Sys-CPU: sig-perl-modules -perl-Sys-Hostname-Long: sig-perl-modules -perl-Sys-Info: sig-perl-modules -perl-Sys-Info-Base: sig-perl-modules -perl-Sys-MemInfo: sig-perl-modules -perl-Sys-Mmap: sig-perl-modules -perl-Sys-Statistics-Linux: sig-perl-modules -perl-Sys-Syslog: sig-perl-modules -perl-Sys-Virt: Virt -perl-System-Command: sig-perl-modules -perl-System-Info: sig-perl-modules -perl-TAP-Formatter-HTML: sig-perl-modules -perl-TAP-Harness-Archive: sig-perl-modules -perl-TAP-Harness-JUnit: sig-perl-modules -perl-TAP-SimpleOutput: sig-perl-modules -perl-TOML-Parser: sig-perl-modules -perl-Taint-Util: sig-perl-modules -perl-Tangerine: sig-perl-modules -perl-Tapper: sig-perl-modules -perl-Task-Kensho-Exceptions: sig-perl-modules -perl-Task-Moose: sig-perl-modules -perl-Task-Weaken: sig-perl-modules -perl-TeX-Encode: sig-perl-modules -perl-TeX-Hyphen: 
sig-perl-modules -perl-Template-Alloy: sig-perl-modules -perl-Template-Multilingual: sig-perl-modules -perl-Template-Plugin-Class: sig-perl-modules -perl-Template-Plugin-Cycle: sig-perl-modules -perl-Template-Tiny: sig-perl-modules -perl-Template-Toolkit: sig-perl-modules -perl-Template-Toolkit-Simple: sig-perl-modules -perl-Term-ANSIColor: sig-perl-modules -perl-Term-Cap: sig-perl-modules -perl-Term-Chrome: sig-perl-modules -perl-Term-Clui: sig-perl-modules -perl-Term-EditorEdit: sig-perl-modules -perl-Term-Encoding: sig-perl-modules -perl-Term-ProgressBar: sig-perl-modules -perl-Term-Size: sig-perl-modules -perl-Term-Table: sig-perl-modules -perl-Term-UI: sig-perl-modules -perl-TermReadKey: sig-perl-modules -perl-Test-API: sig-perl-modules -perl-Test-Abortable: sig-perl-modules -perl-Test-Assert: sig-perl-modules -perl-Test-Assertions: sig-perl-modules -perl-Test-Base: sig-perl-modules -perl-Test-CPAN-Meta: Private -perl-Test-CPAN-Meta-JSON: sig-perl-modules -perl-Test-CPAN-Meta-YAML: sig-perl-modules -perl-Test-CheckChanges: sig-perl-modules -perl-Test-CheckDeps: sig-perl-modules -perl-Test-Class: sig-perl-modules -perl-Test-Class-Most: sig-perl-modules -perl-Test-CleanNamespaces: sig-perl-modules -perl-Test-Cmd: sig-perl-modules -perl-Test-Command: sig-perl-modules -perl-Test-Compile: sig-perl-modules -perl-Test-ConsistentVersion: sig-perl-modules -perl-Test-Deep: Programming-language -perl-Test-Deep-Fuzzy: sig-perl-modules -perl-Test-Deep-Type: sig-perl-modules -perl-Test-Dependencies: sig-perl-modules -perl-Test-Differences: sig-perl-modules -perl-Test-Dir: sig-perl-modules -perl-Test-Directory: sig-perl-modules -perl-Test-Dist-VersionSync: sig-perl-modules -perl-Test-Distribution: sig-perl-modules -perl-Test-Dynamic: sig-perl-modules -perl-Test-EOL: sig-perl-modules -perl-Test-Exception: sig-perl-modules -perl-Test-Exception-LessClever: sig-perl-modules -perl-Test-Exit: sig-perl-modules -perl-Test-Expect: sig-perl-modules -perl-Test-FailWarnings: Programming-language -perl-Test-Fatal: Programming-language -perl-Test-File: sig-perl-modules -perl-Test-File-Contents: sig-perl-modules -perl-Test-File-ShareDir: sig-perl-modules -perl-Test-Filename: sig-perl-modules -perl-Test-Fixme: sig-perl-modules -perl-Test-HTTP-Server-Simple: sig-perl-modules -perl-Test-Harness: sig-perl-modules -perl-Test-Harness-Straps: sig-perl-modules -perl-Test-HasVersion: sig-perl-modules -perl-Test-HexDifferences: sig-perl-modules -perl-Test-HexString: sig-perl-modules -perl-Test-Identity: sig-perl-modules -perl-Test-InDistDir: Programming-language -perl-Test-Inter: sig-perl-modules -perl-Test-Is: sig-perl-modules -perl-Test-JSON: sig-perl-modules -perl-Test-LWP-UserAgent: sig-perl-modules -perl-Test-LeakTrace: sig-perl-modules -perl-Test-LectroTest: sig-perl-modules -perl-Test-LoadAllModules: sig-perl-modules -perl-Test-LongString: sig-perl-modules -perl-Test-Manifest: sig-perl-modules -perl-Test-Memory-Cycle: sig-perl-modules -perl-Test-Metrics-Any: sig-perl-modules -perl-Test-Mock-LWP: sig-perl-modules -perl-Test-Mock-Time: sig-perl-modules -perl-Test-MockModule: sig-perl-modules -perl-Test-MockObject: sig-perl-modules -perl-Test-MockRandom: sig-perl-modules -perl-Test-MockTime: sig-perl-modules -perl-Test-Modern: sig-perl-modules -perl-Test-Mojibake: sig-perl-modules -perl-Test-Moose-More: sig-perl-modules -perl-Test-More-UTF8: sig-perl-modules -perl-Test-Most: sig-perl-modules -perl-Test-Name-FromLine: sig-perl-modules -perl-Test-Needs: Programming-language -perl-Test-Nginx: sig-perl-modules 
-perl-Test-NiceDump: sig-perl-modules -perl-Test-NoBreakpoints: sig-perl-modules -perl-Test-NoPlan: sig-perl-modules -perl-Test-NoTabs: sig-perl-modules -perl-Test-NoWarnings: Programming-language -perl-Test-Number-Delta: sig-perl-modules -perl-Test-Object: sig-perl-modules -perl-Test-Output: sig-perl-modules -perl-Test-POE-Client-TCP: sig-perl-modules -perl-Test-POE-Server-TCP: sig-perl-modules -perl-Test-Pod: Programming-language -perl-Test-Pod-Content: sig-perl-modules -perl-Test-Pod-Coverage: Programming-language -perl-Test-Pod-No404s: sig-perl-modules -perl-Test-Pod-Spelling-CommonMistakes: sig-perl-modules -perl-Test-Portability-Files: sig-perl-modules -perl-Test-Prereq: sig-perl-modules -perl-Test-Regexp: sig-perl-modules -perl-Test-Regression: sig-perl-modules -perl-Test-Requires: Programming-language -perl-Test-Requires-Git: sig-perl-modules -perl-Test-RequiresInternet: sig-perl-modules -perl-Test-Roo: sig-perl-modules -perl-Test-Routine: sig-perl-modules -perl-Test-Run: sig-perl-modules -perl-Test-Run-CmdLine: sig-perl-modules -perl-Test-Script-Run: sig-perl-modules -perl-Test-SharedFork: sig-perl-modules -perl-Test-Simple: sig-perl-modules -perl-Test-Spelling: sig-perl-modules -perl-Test-Strict: sig-perl-modules -perl-Test-SubCalls: sig-perl-modules -perl-Test-Synopsis: sig-perl-modules -perl-Test-TCP: sig-perl-modules -perl-Test-Taint: sig-perl-modules -perl-Test-Time: sig-perl-modules -perl-Test-TinyMocker: sig-perl-modules -perl-Test-Toolbox: sig-perl-modules -perl-Test-TrailingSpace: sig-perl-modules -perl-Test-Trap: sig-perl-modules -perl-Test-Unit-Lite: sig-perl-modules -perl-Test-UseAllModules: sig-perl-modules -perl-Test-Vars: sig-perl-modules -perl-Test-WWW-Selenium: sig-perl-modules -perl-Test-Warn: sig-perl-modules -perl-Test-Warnings: Programming-language -perl-Test-Without-Module: sig-perl-modules -perl-Test-WriteVariants: sig-perl-modules -perl-Test-YAML: sig-perl-modules -perl-Test-YAML-Valid: sig-perl-modules -perl-Test-mysqld: sig-perl-modules -perl-Test-utf8: sig-perl-modules -perl-Test2-Suite: sig-perl-modules -perl-TestML: sig-perl-modules -perl-Text-ASCIITable: sig-perl-modules -perl-Text-Affixes: sig-perl-modules -perl-Text-Aligner: sig-perl-modules -perl-Text-Autoformat: sig-perl-modules -perl-Text-Balanced: sig-perl-modules -perl-Text-CSV-Separator: sig-perl-modules -perl-Text-CharWidth: Programming-language -perl-Text-Clip: sig-perl-modules -perl-Text-Diff: sig-perl-modules -perl-Text-Diff-HTML: sig-perl-modules -perl-Text-Diff-Parser: sig-perl-modules -perl-Text-FindIndent: sig-perl-modules -perl-Text-Format: sig-perl-modules -perl-Text-FormatTable: sig-perl-modules -perl-Text-Fuzzy: sig-perl-modules -perl-Text-Glob: Programming-language -perl-Text-Haml: sig-perl-modules -perl-Text-Levenshtein-Damerau: sig-perl-modules -perl-Text-Levenshtein-Damerau-XS: sig-perl-modules -perl-Text-Markdown: sig-perl-modules -perl-Text-MultiMarkdown: sig-perl-modules -perl-Text-Ngram: sig-perl-modules -perl-Text-PDF: sig-perl-modules -perl-Text-ParseWords: sig-perl-modules -perl-Text-Quoted: sig-perl-modules -perl-Text-Reflow: sig-perl-modules -perl-Text-Reform: sig-perl-modules -perl-Text-Roman: sig-perl-modules -perl-Text-Soundex: sig-perl-modules -perl-Text-Sprintf-Named: sig-perl-modules -perl-Text-Table: sig-perl-modules -perl-Text-Table-Tiny: sig-perl-modules -perl-Text-Tabs-Wrap: Programming-language -perl-Text-TabularDisplay: sig-perl-modules -perl-Text-Template: Programming-language -perl-Text-Template-Simple: sig-perl-modules -perl-Text-Textile: 
sig-perl-modules -perl-Text-Unidecode: Programming-language -perl-Text-VCardFast: sig-perl-modules -perl-Text-VisualWidth-PP: sig-perl-modules -perl-Text-WikiFormat: sig-perl-modules -perl-Text-WordDiff: sig-perl-modules -perl-Text-WrapI18N: Programming-language -perl-Text-Wrapper: sig-perl-modules -perl-Text-vCard: sig-perl-modules -perl-Text-vFile-asData: sig-perl-modules -perl-Text-xSV: sig-perl-modules -perl-Thread-Queue: sig-perl-modules -perl-Thread-SigMask: sig-perl-modules -perl-Throwable: sig-perl-modules -perl-Tie-Cache: sig-perl-modules -perl-Tie-Cycle: sig-perl-modules -perl-Tie-DBI: sig-perl-modules -perl-Tie-DataUUID: sig-perl-modules -perl-Tie-EncryptedHash: sig-perl-modules -perl-Tie-Handle-Offset: sig-perl-modules -perl-Tie-Hash-MultiValue: sig-perl-modules -perl-Tie-IxHash: sig-perl-modules -perl-Tie-Simple: sig-perl-modules -perl-Tie-Sub: sig-perl-modules -perl-Time-Clock: sig-perl-modules -perl-Time-Duration: sig-perl-modules -perl-Time-Duration-Parse: sig-perl-modules -perl-Time-HiRes: sig-perl-modules -perl-Time-Interval: sig-perl-modules -perl-Time-Local: sig-perl-modules -perl-Time-Mock: sig-perl-modules -perl-Time-Moment: sig-perl-modules -perl-Time-ParseDate: sig-perl-modules -perl-Time-Period: sig-perl-modules -perl-Time-Piece: sig-perl-modules -perl-Time-Tiny: sig-perl-modules -perl-Time-Warp: sig-perl-modules -perl-Time-Zone: Private -perl-Time-timegm: sig-perl-modules -perl-Time-y2038: sig-perl-modules -perl-TimeDate: Programming-language -perl-Tk: sig-perl-modules -perl-Tk-Canvas-GradientColor: sig-perl-modules -perl-Tk-ColoredButton: sig-perl-modules -perl-Tk-DoubleClick: sig-perl-modules -perl-Tk-FontDialog: sig-perl-modules -perl-Tk-ObjScanner: sig-perl-modules -perl-Tk-Pod: sig-perl-modules -perl-Tk-Text-SuperText: sig-perl-modules -perl-Tree: sig-perl-modules -perl-Tree-DAG_Node: sig-perl-modules -perl-Tree-R: sig-perl-modules -perl-Tree-Simple: sig-perl-modules -perl-Try-Tiny: Programming-language -perl-Type-Tiny-XS: sig-perl-modules -perl-Types-Serialiser: sig-perl-modules -perl-UNIVERSAL-can: sig-perl-modules -perl-UNIVERSAL-isa: sig-perl-modules -perl-UNIVERSAL-require: sig-perl-modules -perl-URI: sig-perl-modules -perl-URI-Encode: sig-perl-modules -perl-URI-Escape-XS: sig-perl-modules -perl-URI-Find: sig-perl-modules -perl-URI-Find-Simple: sig-perl-modules -perl-URI-FromHash: sig-perl-modules -perl-URI-Nested: sig-perl-modules -perl-URI-Query: sig-perl-modules -perl-URI-Title: sig-perl-modules -perl-URI-db: sig-perl-modules -perl-URI-ws: sig-perl-modules -perl-URL-Encode: sig-perl-modules -perl-URL-Encode-XS: sig-perl-modules -perl-UUID: sig-perl-modules -perl-UUID-Tiny: sig-perl-modules -perl-Unicode-CaseFold: sig-perl-modules -perl-Unicode-Casing: sig-perl-modules -perl-Unicode-CheckUTF8: sig-perl-modules -perl-Unicode-CheckUTF8-PP: sig-perl-modules -perl-Unicode-Collate: sig-perl-modules -perl-Unicode-EastAsianWidth: sig-perl-modules -perl-Unicode-LineBreak: sig-perl-modules -perl-Unicode-Map: sig-perl-modules -perl-Unicode-Normalize: sig-perl-modules -perl-Unicode-String: sig-perl-modules -perl-Unicode-UTF8: Programming-language -perl-Unix-Process: sig-perl-modules -perl-Unix-Syslog: Private -perl-User: sig-perl-modules -perl-User-Identity: sig-perl-modules -perl-Validation-Class: sig-perl-modules -perl-Variable-Magic: sig-perl-modules -perl-Verilog-Readmem: sig-perl-modules -perl-Version-Next: sig-perl-modules -perl-Version-Requirements: sig-perl-modules -perl-WWW-DuckDuckGo: sig-perl-modules -perl-WWW-GoodData: sig-perl-modules 
-perl-WWW-Mechanize: sig-perl-modules -perl-WWW-RobotRules: Programming-language -perl-WWW-Shorten: sig-perl-modules -perl-WWW-Splunk: sig-perl-modules -perl-WWW-Twilio-API: sig-perl-modules -perl-Want: sig-perl-modules -perl-WebService-Linode: sig-perl-modules -perl-WebService-Validator-HTML-W3C: sig-perl-modules -perl-Win32-ShellQuote: sig-perl-modules -perl-XML-Atom-SimpleFeed: sig-perl-modules -perl-XML-Bare: sig-perl-modules -perl-XML-Catalog: sig-perl-modules -perl-XML-CommonNS: sig-perl-modules -perl-XML-DOM: sig-perl-modules -perl-XML-Dumper: sig-cinnamon -perl-XML-Fast: sig-perl-modules -perl-XML-FeedPP: sig-perl-modules -perl-XML-Flow: sig-perl-modules -perl-XML-Generator: sig-perl-modules -perl-XML-Generator-PerlData: sig-perl-modules -perl-XML-Hash-LX: sig-perl-modules -perl-XML-LibXML: Programming-language -perl-XML-LibXML-Debugging: sig-perl-modules -perl-XML-LibXML-PrettyPrint: sig-perl-modules -perl-XML-LibXML-Simple: sig-perl-modules -perl-XML-Merge: sig-perl-modules -perl-XML-NamespaceFactory: sig-perl-modules -perl-XML-NamespaceSupport: Programming-language -perl-XML-Parser: sig-perl-modules -perl-XML-Parser-Lite: sig-perl-modules -perl-XML-Parser-Lite-Tree: sig-perl-modules -perl-XML-Parser-Lite-Tree-XPath: sig-perl-modules -perl-XML-RegExp: sig-perl-modules -perl-XML-SAX: Programming-language -perl-XML-SAX-Base: Programming-language -perl-XML-SAX-ExpatXS: sig-perl-modules -perl-XML-SemanticDiff: sig-perl-modules -perl-XML-Simple: Programming-language -perl-XML-Stream: sig-perl-modules -perl-XML-Structured: sig-perl-modules -perl-XML-Tidy: sig-perl-modules -perl-XML-Tidy-Tiny: sig-perl-modules -perl-XML-Tiny: sig-perl-modules -perl-XML-TokeParser: sig-perl-modules -perl-XML-TreePP: sig-perl-modules -perl-XML-Twig: sig-perl-modules -perl-XML-Writer: sig-perl-modules -perl-XML-XPath: Programming-language -perl-XML-XPathEngine: sig-perl-modules -perl-XXX: sig-perl-modules -perl-YAML: Programming-language -perl-YAML-LibYAML: sig-perl-modules -perl-YAML-PP: sig-perl-modules -perl-YAML-Syck: sig-perl-modules -perl-YAML-Tiny: Programming-language -perl-ZMQ-Constants: sig-perl-modules -perl-accessors: sig-perl-modules -perl-aliased: sig-perl-modules -perl-autobox: sig-perl-modules -perl-autobox-Core: sig-perl-modules -perl-autodie: sig-perl-modules -perl-bareword-filehandles: sig-perl-modules -perl-bignum: sig-perl-modules -perl-boolean: Application -perl-common-sense: sig-perl-modules -perl-constant: sig-perl-modules -perl-constant-boolean: sig-perl-modules -perl-constant-defer: sig-perl-modules -perl-constant-tiny: sig-perl-modules -perl-curry: sig-perl-modules -perl-enum: sig-perl-modules -perl-experimental: sig-perl-modules -perl-failures: sig-perl-modules -perl-generators: Programming-language -perl-gettext: Programming-language -perl-inc-latest: Programming-language -perl-indirect: sig-perl-modules -perl-latest: sig-perl-modules -perl-lexical-underscore: sig-perl-modules -perl-lib-abs: sig-perl-modules -perl-lib-relative: sig-perl-modules -perl-libintl-perl: Programming-language -perl-libnet: sig-perl-modules -perl-libwww-perl: Programming-language -perl-libxml-perl: Programming-language -perl-local-lib: sig-perl-modules -perl-mixin: sig-perl-modules -perl-multidimensional: sig-perl-modules -perl-namespace-autoclean: sig-perl-modules -perl-namespace-clean: sig-perl-modules -perl-namespace-sweep: sig-perl-modules -perl-parent: sig-perl-modules -perl-perlfaq: sig-perl-modules -perl-pmtools: sig-perl-modules -perl-podlators: sig-perl-modules -perl-prefork: sig-perl-modules 
-perl-re-engine-PCRE2: sig-perl-modules -perl-strictures: Programming-language -perl-syntax: sig-perl-modules -perl-threads: sig-perl-modules -perl-threads-shared: sig-perl-modules -perl-v6: sig-perl-modules -perl-version: sig-perl-modules -perlporter: dev-utils -pesign: Application -pesign-obs-integration: sig-security-facility -pgadmin4-server: DB -pgpool2: DB -phantomjs: bigdata -phodav: GNOME -phoenix: DB -phonon: Others -phonon-backend-gstreamer: Private -php: Base-service -php-pear: sig-epol -php-pecl-zip: sig-epol -physfs: dev-utils -picketbox: sig-Java -picketbox-commons: sig-Java -picketbox-xacml: sig-Java -pidgin: Application -pig: DB -pigpio: sig-RaspberryPi -pigz: Base-service -pinentry: Desktop -pinfo: System-tool -pipewire: Desktop -pistache: sig-high-performance-network -pixman: Desktop -pkcs11-helper: Runtime -pkg_management_info: sig-epol -pkgconf: Base-service -pkgconfig: Private -pkgdiff: sig-EasyLife -pkgporter: dev-utils -pkgship: sig-EasyLife -pki-core: Application -plasma-breeze: sig-KDE -plasma-wayland-protocols: sig-UKUI -platform: sig-epol -platform_build: sig-recycle -platform_frameworks_base: sig-recycle -platform_frameworks_native: sig-recycle -platform_frameworks_opt_net_wifi: sig-recycle -platform_hardware_libhardware_legacy: sig-recycle -platform_hardware_ril: sig-recycle -platform_manifests: sig-recycle -platform_packages_apps_PackageInstaller: sig-recycle -platform_system_core: sig-recycle -plexus-ant-factory: sig-Java -plexus-archiver: sig-Java -plexus-bsh-factory: sig-Java -plexus-build-api: sig-Java -plexus-cipher: sig-Java -plexus-classworlds: sig-Java -plexus-cli: sig-Java -plexus-compiler: sig-Java -plexus-component-api: sig-Java -plexus-component-factories-pom: Base-service -plexus-components-pom: sig-Java -plexus-containers: sig-Java -plexus-i18n: sig-Java -plexus-interactivity: sig-Java -plexus-interpolation: sig-Java -plexus-io: sig-Java -plexus-languages: sig-Java -plexus-pom: sig-Java -plexus-resources: sig-Java -plexus-sec-dispatcher: sig-Java -plexus-utils: sig-Java -plexus-velocity: sig-Java -plotutils: Application -pluginlib: sig-ROS -pluma: sig-UKUI -plymouth: Desktop -plymouth-theme-kiran: sig-KIRAN-DESKTOP -pmix: Base-service -pngcrush: Application -pngquant: Application -pnm2ppa: Application -po4a: Application -poco: sig-ROS -podman: sig-CloudNative -policycoreutils: sig-security-facility -polkit: Base-service -polkit-gnome: Base-service -polkit-pkla-compat: Base-service -polkit-qt-1: sig-UKUI -poly2tri: Others -polycube: sig-high-performance-network -polyglot: sig-Java -poppler: Desktop -poppler-data: Desktop -popt: Base-service -portals-pom: dev-utils -portaudio: dev-utils -portlet-2.0-api: dev-utils -portreserve: System-tool -postfix: Networking -postgresql: DB -postgresql-13: DB -postgresql-jdbc: sig-Java -postgresql-odbc: DB -potrace: Application -powermock: sig-Java -powertop: Base-service -ppp: Networking -pps-tools: System-tool -pptp: Application -predixy: Application -prefetch_tuning: A-Tune -prelink: Application -presto: bigdata -process1: dev-utils -procinfo: Others -procmail: Networking -procps-ng: Computing -proftpd: Application -proguard: sig-Java -proj: Others -prometheus: sig-CloudNative -promu: sig-CloudNative -properties-maven-plugin: sig-Java -proselint: Application -protobuf: sig-CloudNative -protobuf-c: Runtime -protobuf2: bigdata -protoparser: sig-Java -protostream: sig-Java -proxool: sig-Java -proxytoys: sig-Java -prrte: ai -ps_mem: Computing -psacct: Application -psm: dev-utils -psmisc: Computing -pstoedit: 
Application -psutils: Application -ptcr: sig-CloudNative -ptpython: sig-python-modules -publicsuffix-list: Base-service -pugixml: sig-epol -pulp: sig-OKD -pulseaudio: Computing -puppet-agent-puppet-strings: sig-ops -pushgateway: sig-CloudNative -pv: Base-service -pwgen: sig-recycle -py3c: sig-python-modules -pyOpenSSL: sig-security-facility -pyScss: sig-python-modules -pyang: sig-high-performance-network -pyang-swagger: sig-high-performance-network -pyatspi: Desktop -pybind11: sig-python-modules -pycairo: Desktop -pycharm-community: sig-UKUI -pyelftools: Programming-language -pyflakes: Programming-language -pygobject2: sig-recycle -pygobject3: Base-service -pygtk2: sig-recycle -pyisula: iSulad -pykickstart: Base-service -pyliblzma: Base-service -pylint: Application -pyorbit: sig-recycle -pyparsing: Base-service -pyparted: Base-service -pyporter: dev-utils -pysendfile: Private -pyserial: Base-service -pytest: Programming-language -python-3parclient: sig-openstack -python-APScheduler: sig-python-modules -python-ATpy: sig-python-modules -python-AWSIoTPythonSDK: sig-python-modules -python-Arpeggio: sig-python-modules -python-Automat: sig-python-modules -python-Bottleneck: Private -python-Brotli: sig-python-modules -python-Cerberus: sig-python-modules -python-Cerealizer: sig-python-modules -python-Chameleon: sig-python-modules -python-ConfigArgParse: sig-python-modules -python-CouchDB: sig-recycle -python-Cython: sig-python-modules -python-ECPy: sig-python-modules -python-EditorConfig: sig-python-modules -python-Faker: sig-python-modules -python-Flask-APScheduler: sig-python-modules -python-Flask-Admin: sig-python-modules -python-Flask-Assets: sig-python-modules -python-Flask-AutoIndex: sig-python-modules -python-Flask-Bootstrap: sig-python-modules -python-Flask-Cache: sig-python-modules -python-Flask-Classy: sig-python-modules -python-Flask-Cors: sig-python-modules -python-Flask-HTTPAuth: sig-python-modules -python-Flask-Limiter: sig-python-modules -python-Flask-Mail: sig-python-modules -python-Flask-Mako: sig-python-modules -python-Flask-OAuth: sig-python-modules -python-Flask-OpenID: sig-python-modules -python-Flask-Paranoid: sig-python-modules -python-Flask-Principal: sig-python-modules -python-Flask-RSTPages: sig-python-modules -python-Flask-SQLAlchemy: sig-python-modules -python-Flask-Script: sig-python-modules -python-Flask-Silk: sig-python-modules -python-FormEncode: sig-python-modules -python-GitPython: sig-python-modules -python-GridDataFormats: sig-python-modules -python-HeapDict: sig-python-modules -python-IPy: Networking -python-JPype1: sig-python-modules -python-JSON_minify: sig-python-modules -python-JayDeBeApi: sig-python-modules -python-Kajiki: sig-python-modules -python-Keras: A-Tune -python-Keras_Preprocessing: A-Tune -python-Lasagne: sig-python-modules -python-Logbook: sig-python-modules -python-ModestMaps: sig-python-modules -python-Naked: sig-python-modules -python-Parsley: sig-python-modules -python-PyDispatcher: sig-python-modules -python-PyDrive: sig-python-modules -python-PyLaTeX: sig-python-modules -python-PyLink: sig-python-modules -python-PyMI: sig-openstack -python-PyMeeus: sig-python-modules -python-PyMySQL: sig-python-modules -python-PyNLPl: sig-python-modules -python-PyOpenGL: sig-python-modules -python-PyPDF2: sig-python-modules -python-PyRSS2Gen: sig-python-modules -python-PySimpleSOAP: sig-python-modules -python-PySnooper: sig-python-modules -python-Pympler: sig-python-modules -python-Pyphen: sig-python-modules -python-QtAwesome: sig-python-modules 
-python-QtPy: sig-python-modules -python-SALib: sig-python-modules -python-SQLAlchemy-Utils: sig-python-modules -python-SecretStorage: Programming-language -python-Send2Trash: sig-python-modules -python-Slowloris: sig-python-modules -python-TGScheduler: sig-python-modules -python-Theano: sig-python-modules -python-Trololio: sig-python-modules -python-TurboGears2: sig-python-modules -python-Twiggy: sig-python-modules -python-URLObject: sig-openstack -python-Unipath: sig-python-modules -python-WSGIProxy2: sig-python-modules -python-XStatic: sig-python-modules -python-XStatic-Angular: sig-openstack -python-XStatic-Angular-Bootstrap: sig-openstack -python-XStatic-Angular-FileUpload: sig-openstack -python-XStatic-Angular-Gettext: sig-openstack -python-XStatic-Angular-Schema-Form: sig-openstack -python-XStatic-Angular-lrdragndrop: sig-openstack -python-XStatic-Bootstrap-Datepicker: sig-openstack -python-XStatic-Bootstrap-SCSS: sig-openstack -python-XStatic-D3: sig-openstack -python-XStatic-Font-Awesome: sig-openstack -python-XStatic-Hogan: sig-openstack -python-XStatic-JQuery-Migrate: sig-openstack -python-XStatic-JQuery.TableSorter: sig-openstack -python-XStatic-JQuery.quicksearch: sig-openstack -python-XStatic-JSEncrypt: sig-openstack -python-XStatic-Jasmine: sig-openstack -python-XStatic-Patternfly-Bootstrap-Treeview: sig-python-modules -python-XStatic-Rickshaw: sig-openstack -python-XStatic-Spin: sig-openstack -python-XStatic-bootswatch: sig-openstack -python-XStatic-jQuery: sig-openstack -python-XStatic-jquery-ui: sig-openstack -python-XStatic-mdi: sig-openstack -python-XStatic-objectpath: sig-openstack -python-XStatic-roboto-fontface: sig-openstack -python-XStatic-smart-table: sig-openstack -python-XStatic-term.js: sig-openstack -python-XStatic-tv4: sig-openstack -python-XlsxWriter: sig-python-modules -python-Yapsy: sig-python-modules -python-ZEO: sig-python-modules -python-aaargh: sig-python-modules -python-abclient: sig-python-modules -python-abimap: sig-python-modules -python-absl-py: A-Tune -python-actdiag: sig-python-modules -python-adb-shell: sig-python-modules -python-adodbapi: sig-python-modules -python-aenum: sig-python-modules -python-aexpect: sig-python-modules -python-agate: sig-python-modules -python-agate-dbf: sig-python-modules -python-agate-excel: sig-python-modules -python-agate-sql: sig-python-modules -python-aiodns: sig-python-modules -python-aiofiles: sig-python-modules -python-aiohttp: Programming-language -python-aiohttp-negotiate: sig-python-modules -python-aiomqtt: sig-python-modules -python-aiomysql: sig-python-modules -python-aioodbc: sig-python-modules -python-aiorpcX: sig-python-modules -python-aiosmtpd: sig-python-modules -python-aiosnmp: sig-python-modules -python-aiostream: sig-python-modules -python-aiounittest: sig-python-modules -python-aiowinreg: sig-python-modules -python-aiozeroconf: sig-python-modules -python-airspeed: sig-python-modules -python-alembic: sig-python-modules -python-alsa: sig-python-modules -python-altgraph: sig-python-modules -python-amqp: sig-python-modules -python-animatplot: sig-python-modules -python-aniso8601: sig-python-modules -python-ansi2html: sig-python-modules -python-ansible-inventory-grapher: sig-python-modules -python-ansible-review: sig-python-modules -python-ansible-runner: oVirt -python-ansicolor: sig-python-modules -python-ansicolors: sig-python-modules -python-anyjson: sig-python-modules -python-anykeystore: sig-python-modules -python-anymarkup: sig-python-modules -python-anymarkup-core: sig-python-modules 
-python-anytree: sig-python-modules -python-aodhclient: sig-openstack -python-api-object-schema: sig-openstack -python-apipkg: sig-python-modules -python-appdirs: sig-python-modules -python-apptools: sig-python-modules -python-apypie: sig-python-modules -python-argcomplete: sig-python-modules -python-argh: sig-python-modules -python-argon2-cffi: sig-python-modules -python-argparse: sig-openstack -python-argparse-manpage: sig-python-modules -python-args: sig-python-modules -python-arpy: sig-python-modules -python-arrow: sig-openstack -python-asciitree: sig-python-modules -python-asgiref: sig-python-modules -python-asn1crypto: Base-service -python-aspectlib: sig-python-modules -python-aspy: sig-python-modules -python-asteval: sig-python-modules -python-astor: A-Tune -python-astral: sig-python-modules -python-astroML: sig-python-modules -python-astroid: sig-python-modules -python-astroplan: sig-python-modules -python-astropy: sig-python-modules -python-astropy-healpix: sig-python-modules -python-astropy-helpers: sig-python-modules -python-astroquery: sig-python-modules -python-astunparse: sig-python-modules -python-async-timeout: sig-python-modules -python-async_generator: sig-python-modules -python-asyncssh: sig-python-modules -python-asynctest: sig-python-modules -python-asysocks: sig-python-modules -python-atomicwrites: Programming-language -python-atpublic: sig-python-modules -python-attrs: Programming-language -python-audioread: sig-python-modules -python-augeas: Programming-language -python-auth.credential: sig-python-modules -python-authheaders: sig-python-modules -python-authres: sig-python-modules -python-automaton: sig-openstack -python-autopep8: sig-python-modules -python-babelfish: sig-python-modules -python-backcall: sig-python-modules -python-backlash: sig-python-modules -python-backoff: sig-python-modules -python-backports: Base-service -python-backports-functools_lru_cache: Private -python-backports-shutil_get_terminal_size: Private -python-backports-ssl_match_hostname: Networking -python-backports-unittest_mock: sig-python-modules -python-backports_abc: Programming-language -python-baluhn: sig-python-modules -python-bandit: sig-python-modules -python-barbicanclient: sig-openstack -python-bashate: sig-python-modules -python-bcc: sig-python-modules -python-bcrypt: sig-python-modules -python-beaker: Base-service -python-beanbag: sig-python-modules -python-beautifulsoup4: sig-openstack -python-behave: sig-python-modules -python-beniget: sig-python-modules -python-betamax: sig-python-modules -python-bids-validator: sig-python-modules -python-bigsuds: sig-python-modules -python-billiard: sig-python-modules -python-binary-memcached: sig-openstack -python-binaryornot: sig-python-modules -python-binstruct: sig-python-modules -python-bintrees: sig-python-modules -python-bitcoinlib: sig-python-modules -python-bitmath: sig-python-modules -python-bitstring: sig-python-modules -python-blazarclient: sig-openstack -python-blaze: sig-python-modules -python-bleach: sig-python-modules -python-blessed: sig-python-modules -python-blessings: sig-python-modules -python-blindspin: sig-python-modules -python-blinker: Programming-language -python-blivet: sig-OS-Builder -python-blockdiag: sig-python-modules -python-blowfish: sig-python-modules -python-blurb: sig-python-modules -python-bobo: sig-python-modules -python-booleanOperations: sig-python-modules -python-boom: sig-python-modules -python-boto: sig-python-modules -python-boto3: sig-python-modules -python-botocore: sig-python-modules 
-python-bottle: Programming-language -python-bottle-sqlite: sig-python-modules -python-branca: sig-python-modules -python-breathe: sig-python-modules -python-bson: sig-python-modules -python-btrfs: sig-python-modules -python-bucky: sig-python-modules -python-bugzilla: sig-python-modules -python-bunch: sig-openstack -python-bytesize: sig-python-modules -python-bz2file: sig-python-modules -python-cached_property: sig-python-modules -python-cachelib: sig-python-modules -python-cachetools: sig-python-modules -python-cachez: sig-python-modules -python-cairocffi: sig-python-modules -python-canonicaljson: sig-python-modules -python-capacity: sig-openstack -python-capturer: sig-python-modules -python-caribou: sig-python-modules -python-case: sig-python-modules -python-cassandra-driver: sig-openstack -python-castellan: sig-openstack -python-catkin-sphinx: sig-python-modules -python-cccolutils: sig-python-modules -python-ccdproc: sig-python-modules -python-cchardet: sig-python-modules -python-ceilometermiddleware: sig-openstack -python-celery: sig-python-modules -python-certbot: sig-python-modules -python-certifi: sig-python-modules -python-cffi: Base-service -python-cfgv: sig-python-modules -python-chardet: Base-service -python-charset-normalizer: Base-service -python-check-manifest: sig-python-modules -python-cheetah: Base-service -python-cheroot: sig-python-modules -python-cherrypy: sig-python-modules -python-cinder-tempest-plugin: sig-openstack -python-cinderclient: sig-openstack -python-citeproc-py: sig-python-modules -python-cjdns: sig-python-modules -python-click: Programming-language -python-click-completion: sig-python-modules -python-click-help-colors: sig-python-modules -python-click-log: sig-python-modules -python-click-man: sig-python-modules -python-click-threading: sig-python-modules -python-clickclick: sig-python-modules -python-cliff: sig-openstack -python-cliff-tablib: sig-python-modules -python-cligj: sig-python-modules -python-clint: sig-python-modules -python-cloud_sptheme: sig-python-modules -python-cloudpickle: sig-python-modules -python-cltk: sig-python-modules -python-clufter: sig-python-modules -python-cmarkgfm: sig-python-modules -python-cmd2: sig-python-modules -python-cmdln: sig-python-modules -python-cmigemo: sig-python-modules -python-cocotb: sig-python-modules -python-codecov: sig-python-modules -python-colcon-bazel: sig-python-modules -python-colorama: sig-python-modules -python-colorclass: sig-python-modules -python-coloredlogs: sig-python-modules -python-colorful: sig-python-modules -python-colorlog: sig-python-modules -python-colorspacious: sig-python-modules -python-colour: sig-python-modules -python-colour-runner: sig-python-modules -python-commonmark: Base-service -python-concurrent-log-handler: sig-python-modules -python-conditional: sig-python-modules -python-condor: sig-python-modules -python-confetti: sig-openstack -python-confget: sig-openstack -python-configobj: Storage -python-configparser: Programming-language -python-configshell: Programming-language -python-confluent-kafka: sig-openstack -python-confuse: sig-python-modules -python-congressclient: sig-openstack -python-connexion: sig-python-modules -python-constantly: sig-python-modules -python-construct: Programming-language -python-consul: sig-openstack -python-contextlib2: sig-python-modules -python-contextvars: sig-python-modules -python-convertdate: sig-python-modules -python-copr: sig-python-modules -python-copr-common: sig-python-modules -python-copr-messaging: sig-python-modules 
-python-cotyledon: sig-openstack -python-cov-core: sig-python-modules -python-coverage: Desktop -python-cpio: Base-service -python-cram: sig-python-modules -python-crank: sig-python-modules -python-crashtest: sig-python-modules -python-crayons: sig-python-modules -python-crcelk: sig-python-modules -python-croniter: sig-python-modules -python-crypto: Base-service -python-cryptography: Base-service -python-cryptography-vectors: Programming-language -python-cson: sig-python-modules -python-css-parser: sig-python-modules -python-cssmin: sig-python-modules -python-cssselect: sig-python-modules -python-cssselect2: sig-python-modules -python-cssutils: sig-mate-desktop -python-csvkit: sig-python-modules -python-cu2qu: sig-python-modules -python-cups: Programming-language -python-curio: sig-python-modules -python-cursive: sig-openstack -python-curtsies: sig-python-modules -python-cvss: sig-python-modules -python-cxxfilt: sig-python-modules -python-cyborgclient: sig-openstack -python-cycler: sig-python-modules -python-cypy: sig-python-modules -python-d2to1: sig-python-modules -python-daemon: oVirt -python-daemonize: sig-python-modules -python-daiquiri: sig-python-modules -python-dasbus: sig-OS-Builder -python-dataclasses: sig-python-modules -python-datanommer.consumer: sig-python-modules -python-datanommer.models: sig-python-modules -python-datashape: sig-python-modules -python-dateparser: sig-python-modules -python-dateutil: Base-service -python-dbfread: sig-python-modules -python-dbus-client-gen: sig-python-modules -python-dbus-signature-pyparsing: sig-python-modules -python-dbusmock: GNOME -python-ddt: sig-python-modules -python-deap: sig-python-modules -python-debrepo: sig-python-modules -python-debtcollector: sig-openstack -python-decorator: Base-service -python-deepmerge: sig-python-modules -python-defusedxml: sig-python-modules -python-demjson: sig-python-modules -python-deprecation: sig-python-modules -python-designateclient: sig-openstack -python-dfs-sdk: sig-openstack -python-dib-utils: sig-openstack -python-dict.sorted: sig-python-modules -python-dict2xml: sig-python-modules -python-dictdumper: sig-python-modules -python-diff-match-patch: sig-python-modules -python-dill: sig-python-modules -python-dingz: sig-python-modules -python-dirq: sig-python-modules -python-discover: sig-openstack -python-diskcache: sig-python-modules -python-distlib: sig-python-modules -python-distro: Programming-language -python-distro-info: sig-python-modules -python-distutils-extra: xfce -python-dj-database-url: sig-python-modules -python-dj-email-url: sig-python-modules -python-dj-search-url: sig-python-modules -python-django: sig-python-modules -python-django-ajax-selects: sig-python-modules -python-django-angular: sig-python-modules -python-django-annoying: sig-python-modules -python-django-appconf: sig-python-modules -python-django-auth-ldap: sig-python-modules -python-django-authority: sig-python-modules -python-django-babel: sig-recycle -python-django-cache-url: sig-python-modules -python-django-cacheops: sig-python-modules -python-django-compressor: sig-python-modules -python-django-contrib-comments: sig-python-modules -python-django-cors-headers: sig-python-modules -python-django-crispy-forms: sig-python-modules -python-django-debreach: sig-python-modules -python-django-debug-toolbar: sig-python-modules -python-django-discover-runner: sig-python-modules -python-django-filter: sig-python-modules -python-django-haystack: sig-python-modules -python-django-health-check: sig-python-modules 
-python-django-helpdesk: sig-python-modules -python-django-ipware: sig-python-modules -python-django-macros: sig-python-modules -python-django-markdownx: sig-python-modules -python-django-mptt: sig-python-modules -python-django-nose: sig-python-modules -python-django-pipeline: sig-python-modules -python-django-pyscss: sig-python-modules -python-django-pytest: sig-python-modules -python-django-redis: sig-python-modules -python-django-registration: sig-python-modules -python-django-rest-framework: sig-python-modules -python-django-reversion: sig-python-modules -python-django-robots: sig-python-modules -python-django-rules: sig-python-modules -python-django-simple-captcha: sig-python-modules -python-django-tagging: sig-python-modules -python-django-tastypie: sig-python-modules -python-django-timezone-field: sig-python-modules -python-django-tinymce: sig-python-modules -python-djangoql: sig-python-modules -python-dkimpy: sig-python-modules -python-dmidecode: Desktop -python-dnf: sig-python-modules -python-dns: Programming-language -python-dns-lexicon: sig-python-modules -python-dnspython: sig-python-modules -python-doc8: sig-openstack -python-docker: sig-python-modules -python-docker-pycreds: sig-python-modules -python-docker-squash: sig-python-modules -python-dockerfile-parse: sig-python-modules -python-dockerpty: sig-python-modules -python-docopt: Base-service -python-docutils: Programming-language -python-docx: sig-python-modules -python-dogpile.cache: sig-python-modules -python-doit: sig-python-modules -python-dominate: sig-python-modules -python-dotenv: sig-python-modules -python-dpath: sig-python-modules -python-dpkt: sig-python-modules -python-dracclient: sig-openstack -python-drat: sig-python-modules -python-duecredit: sig-python-modules -python-dulwich: sig-python-modules -python-easyargs: sig-python-modules -python-easygui: sig-python-modules -python-ecdsa: sig-python-modules -python-editdistance: sig-python-modules -python-editor: sig-python-modules -python-elasticsearch2: sig-openstack -python-elasticsearch7: sig-EasyLife -python-elementpath: sig-openstack -python-eli5: sig-python-modules -python-email_reply_parser: sig-python-modules -python-emcee: sig-python-modules -python-emoji: sig-python-modules -python-enchant: Programming-language -python-enmerkar: sig-python-modules -python-entrypoints: Programming-language -python-enum34: Base-service -python-envisage: sig-python-modules -python-enzyme: sig-python-modules -python-epdb: sig-python-modules -python-epub: sig-python-modules -python-esdk-obs-python: Application -python-estimator: A-Tune -python-et_xmlfile: sig-python-modules -python-etcd3: sig-openstack -python-etcd3gw: sig-openstack -python-ethtool: Desktop -python-evdev: sig-python-modules -python-eventlet: sig-python-modules -python-execnet: sig-python-modules -python-exif: sig-python-modules -python-extras: Programming-language -python-ez_setup: sig-python-modules -python-f5-icontrol-rest: sig-python-modules -python-fabulous: sig-python-modules -python-falcon: sig-python-modules -python-fastavro: sig-python-modules -python-fasteners: sig-python-modules -python-fastimport: sig-python-modules -python-fastjsonschema: bigdata -python-fastnumbers: sig-python-modules -python-fastpurge: sig-python-modules -python-faust: Programming-language -python-fauxquests: sig-python-modules -python-fdb: sig-python-modules -python-feedgenerator: sig-python-modules -python-feedparser: sig-python-modules -python-fido2: sig-python-modules -python-fields: sig-python-modules -python-filelock: 
sig-python-modules -python-filetype: sig-python-modules -python-fire: sig-python-modules -python-firebirdsql: sig-python-modules -python-firehose: sig-python-modules -python-firewall: sig-python-modules -python-first: sig-python-modules -python-fisx: sig-python-modules -python-fixtures: Programming-language -python-flake8: sig-python-modules -python-flake8-docstrings: sig-openstack -python-flake8-import-order: sig-python-modules -python-flake8-logging-format: sig-openstack -python-flake8-polyfill: sig-python-modules -python-flaky: sig-python-modules -python-flann: sig-python-modules -python-flask: Programming-language -python-flask-multistatic: sig-python-modules -python-flask-oidc: sig-python-modules -python-flask-restful: sig-python-modules -python-flask-restplus: sig-python-modules -python-flask-restx: sig-python-modules -python-flask-session: sig-python-modules -python-flask-testing: sig-python-modules -python-flask-whooshee: sig-python-modules -python-flatpak-module-tools: sig-python-modules -python-flexmock: sig-python-modules -python-flickrapi: sig-python-modules -python-flit: Programming-language -python-flock: sig-python-modules -python-flower: sig-python-modules -python-flufl.bounce: sig-python-modules -python-flufl.i18n: sig-python-modules -python-flufl.lock: sig-python-modules -python-flufl.testing: sig-python-modules -python-flup: sig-python-modules -python-flux: sig-openstack -python-fontMath: sig-python-modules -python-fontdump: sig-python-modules -python-fontname: sig-python-modules -python-fonttools: Programming-language -python-formats: sig-python-modules -python-freezegun: Programming-language -python-friendlyloris: sig-python-modules -python-frozendict: sig-python-modules -python-fsmonitor: sig-python-modules -python-fsspec: sig-python-modules -python-fuckit: sig-python-modules -python-funcparserlib: sig-python-modules -python-funcsigs: Programming-language -python-furl: sig-python-modules -python-fusepy: sig-python-modules -python-futures: Programming-language -python-futurist: sig-openstack -python-fuzzyfinder: sig-python-modules -python-fypp: sig-python-modules -python-gabbi: sig-python-modules -python-gast: sig-python-modules -python-gatspy: sig-python-modules -python-gccinvocation: sig-python-modules -python-gearbox: sig-python-modules -python-genshi: Programming-language -python-genty: sig-python-modules -python-geographiclib: sig-python-modules -python-geojson: sig-python-modules -python-geomet: sig-python-modules -python-gerrit-view: sig-python-modules -python-gerritlib: sig-python-modules -python-gerrymander: sig-python-modules -python-getmac: sig-python-modules -python-gevent: Programming-language -python-gflags: sig-python-modules -python-ghp-import2: sig-python-modules -python-git-url-parse: sig-python-modules -python-gitapi: sig-python-modules -python-gitdb: sig-python-modules -python-githubpy: sig-python-modules -python-gitlab: sig-python-modules -python-glad: sig-python-modules -python-glance-store: sig-openstack -python-glance-tempest-plugin: sig-openstack -python-glanceclient: sig-openstack -python-glances_api: sig-python-modules -python-glob2: sig-python-modules -python-gnocchiclient: sig-openstack -python-gntp: sig-python-modules -python-gnupg: sig-python-modules -python-google-apputils: Base-service -python-google-auth: sig-python-modules -python-google-auth-oauthlib: sig-python-modules -python-google-compute-engine: sig-python-modules -python-google-pasta: A-Tune -python-gossip: sig-openstack -python-gpxpy: sig-python-modules -python-grabbit: 
sig-python-modules -python-grabserial: sig-python-modules -python-graphviz: sig-python-modules -python-greenlet: Programming-language -python-gssapi: Base-service -python-guizero: sig-python-modules -python-gunicorn: sig-python-modules -python-gwebsockets: sig-python-modules -python-h11: sig-python-modules -python-h2: sig-python-modules -python-h5io: sig-python-modules -python-h5py: A-Tune -python-hacking: sig-openstack -python-hamcrest: sig-python-modules -python-hdfs: sig-python-modules -python-heat-cfntools: sig-openstack -python-heatclient: sig-openstack -python-heketi: sig-python-modules -python-hgapi: sig-python-modules -python-hidapi: sig-openstack -python-hkdf: sig-python-modules -python-hl7: sig-python-modules -python-hole: sig-python-modules -python-holidays: sig-python-modules -python-horovod: ai -python-hpack: sig-python-modules -python-hstspreload: sig-python-modules -python-html2text: sig-python-modules -python-html5lib: Networking -python-htmlmin: sig-python-modules -python-httmock: sig-python-modules -python-http_client: sig-python-modules -python-httpbin: Private -python-httpie: sig-python-modules -python-httplib2: Runtime -python-httpretty: Programming-language -python-httpsig_cffi: sig-python-modules -python-httptools: sig-python-modules -python-httpx: sig-python-modules -python-humanfriendly: sig-python-modules -python-humanize: sig-python-modules -python-humblewx: sig-python-modules -python-hupper: sig-python-modules -python-husl: sig-python-modules -python-hvac: sig-python-modules -python-hwdata: sig-python-modules -python-hyperframe: sig-python-modules -python-hyperlink: sig-python-modules -python-hyperopt: A-Tune -python-hypothesis: Programming-language -python-hypothesis-fspaths: sig-python-modules -python-ibmcclient: sig-openstack -python-icalendar: sig-python-modules -python-icdiff: sig-python-modules -python-identify: sig-python-modules -python-idna: Networking -python-idstools: sig-python-modules -python-ifaddr: sig-python-modules -python-ifcfg: sig-python-modules -python-igor: sig-python-modules -python-igraph: sig-python-modules -python-imagesize: Programming-language -python-img2pdf: sig-python-modules -python-importlib-metadata: sig-python-modules -python-importlib-resources: sig-python-modules -python-importmagic: sig-python-modules -python-incremental: sig-python-modules -python-inema: sig-python-modules -python-infi.dtypes.iqn: sig-openstack -python-infi.dtypes.wwn: sig-openstack -python-infinisdk: sig-openstack -python-inflection: sig-python-modules -python-iniconfig: sig-python-modules -python-iniparse: Base-service -python-injector: sig-python-modules -python-inotify: Base-service -python-interfile: sig-python-modules -python-intervaltree: sig-python-modules -python-into-dbus-python: sig-python-modules -python-invoke: sig-python-modules -python-iowait: sig-python-modules -python-ipaddress: Networking -python-ipdb: sig-python-modules -python-ipgetter2: sig-python-modules -python-iptools: sig-python-modules -python-ipykernel: bigdata -python-ipython_genutils: sig-python-modules -python-ironic-inspector-client: sig-openstack -python-ironic-lib: sig-openstack -python-ironic-prometheus-exporter: sig-openstack -python-ironic-tempest-plugin: sig-openstack -python-ironic-ui: sig-openstack -python-ironicclient: sig-openstack -python-iso8601: Programming-language -python-isodate: sig-python-modules -python-isort: sig-python-modules -python-itsdangerous: Programming-language -python-jaeger-client: sig-openstack -python-jaraco-classes: sig-python-modules 
-python-jaraco-collections: sig-python-modules -python-jaraco-functools: sig-python-modules -python-jaraco-text: sig-python-modules -python-jaraco.packaging: sig-openstack -python-javalang: sig-python-modules -python-jdcal: sig-python-modules -python-jedi: sig-epol -python-jeepney: sig-python-modules -python-jenkins: sig-Gatekeeper -python-jinja2: Base-service -python-jinja2-time: sig-python-modules -python-jinja2_pluralize: sig-python-modules -python-jira: sig-python-modules -python-jmespath: sig-python-modules -python-joblib: sig-python-modules -python-josepy: sig-python-modules -python-journal-brief: sig-python-modules -python-jsmin: sig-python-modules -python-json-tricks: A-Tune -python-json2table: sig-python-modules -python-json5: sig-python-modules -python-json_logger: sig-python-modules -python-jsonmodels: sig-python-modules -python-jsonpatch: Base-service -python-jsonpath-rw: sig-python-modules -python-jsonpath-rw-ext: sig-python-modules -python-jsonpointer: Base-service -python-jsonschema: Base-service -python-junitxml: sig-python-modules -python-jupyter-client: bigdata -python-jupyter-core: bigdata -python-justbytes: sig-python-modules -python-jwcrypto: sig-python-modules -python-jwt: Base-service -python-kaitaistruct: sig-python-modules -python-kaptan: sig-python-modules -python-karborclient: sig-openstack -python-kazoo: sig-openstack -python-kdcproxy: sig-python-modules -python-keras-applications: Private -python-keras-rl2: A-Tune -python-kerberos: sig-python-modules -python-keyczar: sig-python-modules -python-keyring: Programming-language -python-keystone-tempest-plugin: sig-openstack -python-keystoneauth1: sig-openstack -python-keystoneclient: sig-openstack -python-keystonemiddleware: sig-openstack -python-kickstart: sig-python-modules -python-kitchen: sig-python-modules -python-kiwisolver: sig-python-modules -python-klusta: sig-python-modules -python-kmod: Base-service -python-kombu: sig-python-modules -python-krbcontext: sig-python-modules -python-krest: sig-openstack -python-kubernetes: sig-CloudNative -python-landslide: sig-python-modules -python-langtable: sig-python-modules -python-lark-parser: sig-python-modules -python-lasso: sig-python-modules -python-latexcodec: sig-python-modules -python-launchpadlib: sig-python-modules -python-lazr.config: sig-python-modules -python-lazr.delegates: sig-python-modules -python-lazr.restfulclient: sig-python-modules -python-lazr.smtptest: sig-python-modules -python-lazr.uri: sig-python-modules -python-lazy-object-proxy: sig-python-modules -python-lazyarray: sig-python-modules -python-ldap: sig-python-modules -python-ldap3: sig-python-modules -python-ldappool: sig-openstack -python-leather: sig-python-modules -python-lefthandclient: sig-openstack -python-lesscpy: sig-python-modules -python-lexicon: sig-python-modules -python-lhsmdu: Base-service -python-libNeuroML: sig-python-modules -python-libarchive-c: sig-python-modules -python-libcloud: sig-python-modules -python-libevdev: sig-python-modules -python-liblinear: sig-python-modules -python-libmount: sig-python-modules -python-libnacl: sig-python-modules -python-libnl: sig-python-modules -python-librosa: sig-python-modules -python-libsass: sig-python-modules -python-libtmux: sig-python-modules -python-libusb1: sig-python-modules -python-libvoikko: sig-python-modules -python-libyang: sig-python-modules -python-lightgbm: sig-python-modules -python-limits: sig-python-modules -python-linecache2: Programming-language -python-linux-procfs: Base-service -python-liquidctl: sig-python-modules 
-python-listparser: sig-python-modules -python-lit: Programming-language -python-littleutils: sig-python-modules -python-livereload: sig-python-modules -python-lmdb: sig-python-modules -python-locket: sig-python-modules -python-lockfile: sig-python-modules -python-logging_tree: sig-python-modules -python-logutils: sig-python-modules -python-logzero: sig-python-modules -python-losant-rest: sig-python-modules -python-lrparsing: sig-python-modules -python-lttngust: sig-python-modules -python-luftdaten: sig-python-modules -python-lupa: sig-python-modules -python-lxml: Base-service -python-lz4: sig-openstack -python-m2r: sig-python-modules -python-magic: sig-python-modules -python-magnumclient: sig-openstack -python-mailer: sig-python-modules -python-makeelf: sig-python-modules -python-mako: Base-service -python-manhole: sig-python-modules -python-manilaclient: sig-openstack -python-manuel: sig-python-modules -python-maps: sig-python-modules -python-markdown: Programming-language -python-markdown2: sig-python-modules -python-markupsafe: Base-service -python-marshmallow: sig-python-modules -python-matplotlib: sig-python-modules -python-mccabe: sig-python-modules -python-mdx_gh_links: sig-python-modules -python-med: sig-python-modules -python-meh: Base-service -python-memcached: sig-python-modules -python-memory-profiler: sig-openstack -python-metaextract: sig-python-modules -python-metar: sig-python-modules -python-micawber: sig-python-modules -python-microfs: sig-python-modules -python-microversion-parse: sig-openstack -python-migen: sig-python-modules -python-migrate: sig-python-modules -python-mimeparse: Programming-language -python-mimerender: sig-python-modules -python-minibelt: sig-python-modules -python-minidb: sig-python-modules -python-minidump: sig-python-modules -python-mistralclient: sig-openstack -python-mistune: sig-python-modules -python-mitba: sig-openstack -python-mitmproxy: sig-python-modules -python-mne: sig-python-modules -python-mne-bids: sig-python-modules -python-mnemonic: sig-python-modules -python-mock: Programming-language -python-mode: Programming-language -python-modernize: sig-python-modules -python-moksha.common: sig-python-modules -python-monascaclient: sig-openstack -python-mongoengine: sig-python-modules -python-monotonic: sig-python-modules -python-more-itertools: Programming-language -python-moto: sig-openstack -python-mox: Base-service -python-mox3: sig-openstack -python-mpmath: sig-python-modules -python-msgpack: sig-python-modules -python-mtg: sig-python-modules -python-multi_key_dict: sig-recycle -python-multidict: sig-python-modules -python-multio: sig-python-modules -python-multipart: sig-python-modules -python-multipledispatch: sig-python-modules -python-multiprocessing: sig-openstack -python-munch: sig-python-modules -python-munkres: sig-python-modules -python-murano-pkg-check: sig-openstack -python-muranoclient: sig-openstack -python-musicbrainzngs: sig-python-modules -python-mutagen: sig-python-modules -python-mwclient: sig-python-modules -python-mygpoclient: sig-python-modules -python-mypy: sig-python-modules -python-mypy-extensions: sig-openstack -python-mysqlclient: sig-python-modules -python-myst-parser: bigdata -python-nb2plots: sig-python-modules -python-nbconvert: bigdata -python-nbformat: bigdata -python-nbsphinx: bigdata -python-nbval: bigdata -python-nbxmpp: sig-mate-desktop -python-ndjson-testrunner: sig-python-modules -python-neomodel: sig-python-modules -python-neotime: sig-python-modules -python-neovim: sig-python-modules 
-python-netaddr: Programming-language -python-netdata: sig-python-modules -python-netifaces: A-Tune -python-netmiko: sig-openstack -python-networkx: A-Tune -python-neutron-lib: sig-openstack -python-neutron-tempest-plugin: sig-openstack -python-neutronclient: sig-openstack -python-ngram: sig-python-modules -python-nine: sig-python-modules -python-nltk: sig-python-modules -python-nmap: sig-python-modules -python-nmrglue: sig-python-modules -python-nni: A-Tune -python-nocasedict: sig-openstack -python-nocaselist: sig-openstack -python-nodeenv: sig-openstack -python-nose: sig-recycle -python-nose-cov: sig-python-modules -python-nose-exclude: sig-python-modules -python-nose-ignore-docstring: sig-python-modules -python-nose-parameterized: sig-python-modules -python-nose-progressive: sig-python-modules -python-nose-timer: sig-python-modules -python-nose2: sig-python-modules -python-nose_fixes: sig-python-modules -python-nosehtmloutput: sig-openstack -python-nosexcover: sig-openstack -python-notario: sig-python-modules -python-notify2: sig-python-modules -python-notmuch: sig-python-modules -python-novaclient: sig-openstack -python-ns1-python: sig-python-modules -python-nss: sig-python-modules -python-ntc-templates: sig-openstack -python-ntlm-auth: sig-python-modules -python-ntplib: Desktop -python-nudatus: sig-python-modules -python-num2words: sig-python-modules -python-numexpr: Private -python-numpoly: sig-python-modules -python-numpydoc: Private -python-oauth2client: sig-python-modules -python-oauthlib: Base-service -python-octaviaclient: sig-openstack -python-odML: sig-python-modules -python-odfpy: sig-python-modules -python-odo: sig-python-modules -python-offtrac: sig-python-modules -python-ofxparse: sig-python-modules -python-okaara: sig-python-modules -python-olefile: sig-python-modules -python-oletools: sig-python-modules -python-openidc-client: sig-python-modules -python-openpyxl: sig-python-modules -python-opensensemap-api: sig-python-modules -python-openstack-doc-tools: sig-openstack -python-openstack.nose_plugin: sig-openstack -python-openstackclient: sig-openstack -python-openstackdocstheme: sig-openstack -python-openstacksdk: sig-openstack -python-opentracing: sig-openstack -python-openvswitch: sig-python-modules -python-opt-einsum: A-Tune -python-ordered-set: Base-service -python-orderedmultidict: sig-python-modules -python-orderedset: Private -python-os-api-ref: sig-openstack -python-os-apply-config: sig-openstack -python-os-brick: sig-openstack -python-os-client-config: sig-openstack -python-os-collect-config: sig-openstack -python-os-faults: sig-openstack -python-os-ken: sig-openstack -python-os-refresh-config: sig-openstack -python-os-resource-classes: sig-openstack -python-os-service-types: sig-openstack -python-os-testr: sig-openstack -python-os-traits: sig-openstack -python-os-vif: sig-openstack -python-os-win: sig-openstack -python-os-xenapi: sig-openstack -python-osc-lib: sig-openstack -python-osc-placement: sig-openstack -python-oslo.cache: sig-openstack -python-oslo.concurrency: sig-openstack -python-oslo.config: sig-openstack -python-oslo.context: sig-openstack -python-oslo.db: sig-openstack -python-oslo.i18n: sig-openstack -python-oslo.log: sig-openstack -python-oslo.messaging: sig-openstack -python-oslo.middleware: sig-openstack -python-oslo.policy: sig-openstack -python-oslo.privsep: sig-openstack -python-oslo.reports: sig-openstack -python-oslo.rootwrap: sig-openstack -python-oslo.serialization: sig-openstack -python-oslo.service: sig-openstack -python-oslo.sphinx: 
sig-openstack -python-oslo.upgradecheck: sig-openstack -python-oslo.utils: sig-openstack -python-oslo.versionedobjects: sig-openstack -python-oslo.vmware: sig-openstack -python-oslotest: sig-openstack -python-osprofiler: sig-openstack -python-outcome: sig-python-modules -python-outdated: sig-python-modules -python-ovirt-engine-sdk4: oVirt -python-ovsdbapp: sig-openstack -python-packaging: Programming-language -python-packit: sig-python-modules -python-pacpy: sig-python-modules -python-pact: sig-openstack -python-paho-mqtt: sig-python-modules -python-pallets-sphinx-themes: sig-python-modules -python-pamela: sig-python-modules -python-pandas: sig-python-modules -python-pandocfilters: sig-python-modules -python-paperwork-backend: sig-python-modules -python-parameterized: sig-python-modules -python-paramiko: Programming-language -python-parse: sig-python-modules -python-parse_type: sig-python-modules -python-parsedatetime: sig-python-modules -python-parso: sig-python-modules -python-passlib: sig-python-modules -python-pasta: Private -python-paste: Networking -python-paste-deploy: sig-python-modules -python-pastel: sig-python-modules -python-patch-ng: sig-python-modules -python-path: sig-python-modules -python-pathlib: sig-openstack -python-pathlib2: sig-python-modules -python-pathspec: sig-python-modules -python-pathtools: sig-python-modules -python-patool: sig-python-modules -python-patsy: sig-python-modules -python-pbkdf2: sig-python-modules -python-pbr: Programming-language -python-pdc-client: sig-python-modules -python-pdfkit: sig-python-modules -python-pdfminer: sig-python-modules -python-pdfrw: sig-python-modules -python-pdir2: sig-python-modules -python-pecan: sig-python-modules -python-peewee: sig-python-modules -python-pendulum: sig-python-modules -python-pep257: sig-openstack -python-pep517: sig-python-modules -python-pep8: sig-openstack -python-pep8-naming: sig-python-modules -python-periodictable: sig-python-modules -python-persist-queue: sig-python-modules -python-pexpect: sig-python-modules -python-pg8000: sig-python-modules -python-pgpdump: sig-python-modules -python-phonenumbers: sig-python-modules -python-phpserialize: sig-python-modules -python-pickleshare: Private -python-pid: Base-service -python-piexif: sig-python-modules -python-pifpaf: sig-openstack -python-pigpio: sig-python-modules -python-pika: sig-openstack -python-pika-pool: sig-python-modules -python-pillow: sig-python-modules -python-pint: sig-python-modules -python-pip: Base-service -python-pip-api: sig-openstack -python-pipdeptree: sig-python-modules -python-pipreqs: sig-openstack -python-pkgconfig: A-Tune -python-pkginfo: sig-python-modules -python-pkgwat.api: sig-python-modules -python-plaintable: sig-python-modules -python-platformdirs: Application -python-player: sig-python-modules -python-plink: sig-python-modules -python-pluggy: Programming-language -python-pluginbase: sig-python-modules -python-pluginlib: sig-python-modules -python-plugnplay: sig-python-modules -python-plumbum: sig-python-modules -python-ply: Base-service -python-pocketlint: Base-service -python-podcastparser: sig-python-modules -python-polib: Base-service -python-portalocker: sig-python-modules -python-portend: sig-python-modules -python-posix_ipc: sig-python-modules -python-power: sig-python-modules -python-poyo: sig-python-modules -python-praw: sig-python-modules -python-pre-commit: sig-openstack -python-precis_i18n: sig-mate-desktop -python-pretend: Programming-language -python-prettyprinter: sig-python-modules -python-prettytable: 
Base-service -python-priority: sig-python-modules -python-proboscis: sig-openstack -python-process-tests: sig-python-modules -python-productmd: Base-service -python-profilehooks: sig-python-modules -python-progress: sig-python-modules -python-progressbar2: sig-python-modules -python-proliantutils: sig-openstack -python-prometheus-api-client: sig-python-modules -python-prometheus_client: sig-python-modules -python-prompt-toolkit: sig-python-modules -python-prompt_toolkit: Private -python-psutil: Programming-language -python-psycogreen: sig-python-modules -python-psycopg2: sig-python-modules -python-ptyprocess: sig-python-modules -python-publicsuffix2: sig-python-modules -python-pudb: sig-python-modules -python-pulsectl: sig-python-modules -python-pungi: sig-python-modules -python-pure-sasl: sig-python-modules -python-purestorage: sig-openstack -python-pvc: sig-python-modules -python-py: Programming-language -python-py-cpuinfo: sig-python-modules -python-py-make: sig-python-modules -python-py2pack: sig-python-modules -python-pyLibravatar: sig-python-modules -python-pyModbusTCP: sig-python-modules -python-pyPEG2: sig-python-modules -python-pyRFC3339: sig-python-modules -python-pyTelegramBotAPI: sig-python-modules -python-pyactivetwo: sig-python-modules -python-pyaes: sig-python-modules -python-pyaml: Base-service -python-pyasn1: Programming-language -python-pyasn1-modules: sig-python-modules -python-pybeam: sig-python-modules -python-pybtex: sig-python-modules -python-pybtex-docutils: sig-python-modules -python-pycadf: sig-openstack -python-pycares: sig-python-modules -python-pycdlib: sig-OS-Builder -python-pyclipper: sig-python-modules -python-pycodestyle: sig-python-modules -python-pycollada: sig-python-modules -python-pycparser: Base-service -python-pycryptodome: sig-python-modules -python-pycryptodomex: sig-python-modules -python-pycscope: sig-python-modules -python-pycurl: Base-service -python-pydbus: Base-service -python-pydenticon: sig-python-modules -python-pydicom: sig-python-modules -python-pydocstyle: sig-python-modules -python-pydot: sig-python-modules -python-pydotplus: sig-openstack -python-pyeclib: sig-openstack -python-pyelectro: sig-python-modules -python-pyephem: sig-python-modules -python-pyface: sig-python-modules -python-pyfakefs: sig-python-modules -python-pyftdi: sig-python-modules -python-pyftpdlib: sig-python-modules -python-pygal: sig-python-modules -python-pygatt: sig-python-modules -python-pygeoip: sig-python-modules -python-pyghmi: sig-openstack -python-pyglet: Private -python-pygments: Programming-language -python-pygments-style-solarized: sig-python-modules -python-pyhcl: sig-python-modules -python-pyi2cflash: sig-python-modules -python-pyinstaller: sig-python-modules -python-pykalman: sig-python-modules -python-pykeepass: sig-python-modules -python-pylama: sig-openstack -python-pylast: sig-python-modules -python-pylev: sig-python-modules -python-pylons-sphinx-themes: sig-python-modules -python-pymatreader: sig-python-modules -python-pymemcache: sig-python-modules -python-pymoc: sig-python-modules -python-pymod2pkg: sig-python-modules -python-pymongo: Programming-language -python-pynacl: sig-python-modules -python-pynetdicom: sig-python-modules -python-pyngus: sig-python-modules -python-pynvml: sig-python-modules -python-pyocr: sig-python-modules -python-pyotp: sig-python-modules -python-pypandoc: sig-python-modules -python-pyperclip: sig-python-modules -python-pypng: sig-python-modules -python-pypowervm: sig-openstack -python-pyprocdev: sig-python-modules 
-python-pyquery: sig-python-modules -python-pyrad: sig-python-modules -python-pyramid_fas_openid: sig-python-modules -python-pyreadline: sig-python-modules -python-pyroute2: sig-python-modules -python-pyrpm: sig-python-modules -python-pyrsistent: sig-python-modules -python-pyrtlsdr: sig-python-modules -python-pysaml2: sig-python-modules -python-pysb: sig-python-modules -python-pysendfile: sig-python-modules -python-pyserial: sig-python-modules -python-pyshark: sig-python-modules -python-pyshp: sig-python-modules -python-pyside: Private -python-pysmi: sig-python-modules -python-pysnmp: sig-python-modules -python-pysocks: Base-service -python-pyspf: sig-python-modules -python-pyspiflash: sig-python-modules -python-pysrt: sig-python-modules -python-pystache: sig-python-modules -python-pystalk: sig-python-modules -python-pystoi: sig-python-modules -python-pystray: sig-python-modules -python-pytest-arraydiff: sig-python-modules -python-pytest-asyncio: sig-python-modules -python-pytest-beakerlib: sig-python-modules -python-pytest-cache: sig-python-modules -python-pytest-catchlog: sig-python-modules -python-pytest-cov: Base-service -python-pytest-datafiles: sig-python-modules -python-pytest-django: sig-openstack -python-pytest-doctestplus: sig-python-modules -python-pytest-expect: Base-service -python-pytest-faulthandler: sig-python-modules -python-pytest-fixture-config: Base-service -python-pytest-flakes: sig-python-modules -python-pytest-forked: sig-python-modules -python-pytest-helpers-namespace: sig-python-modules -python-pytest-html: sig-openstack -python-pytest-httpbin: Private -python-pytest-isort: sig-python-modules -python-pytest-lazy-fixture: sig-python-modules -python-pytest-metadata: sig-python-modules -python-pytest-mock: Base-service -python-pytest-multihost: sig-python-modules -python-pytest-openfiles: sig-python-modules -python-pytest-ordering: sig-python-modules -python-pytest-pep8: sig-python-modules -python-pytest-random-order: sig-python-modules -python-pytest-relaxed: sig-python-modules -python-pytest-remotedata: sig-python-modules -python-pytest-repeat: sig-python-modules -python-pytest-rerunfailures: sig-python-modules -python-pytest-runner: sig-python-modules -python-pytest-shutil: sig-python-modules -python-pytest-sourceorder: sig-python-modules -python-pytest-subtests: sig-python-modules -python-pytest-sugar: sig-python-modules -python-pytest-testmon: sig-python-modules -python-pytest-timeout: sig-python-modules -python-pytest-toolbox: sig-python-modules -python-pytest-tornado: sig-python-modules -python-pytest-virtualenv: Base-service -python-pytest-watch: sig-python-modules -python-pytest-xdist: sig-python-modules -python-pytest-xprocess: sig-python-modules -python-pythonwebhdfs: A-Tune -python-pytimeparse: sig-python-modules -python-pytoml: Programming-language -python-pytools: sig-python-modules -python-pytrailer: sig-python-modules -python-pytzdata: sig-python-modules -python-pyudev: Base-service -python-pyusb: sig-python-modules -python-pyvit: sig-python-modules -python-pyvmomi: sig-python-modules -python-pyvo: sig-python-modules -python-pyxcli: sig-openstack -python-pyxdg: sig-python-modules -python-pyxs: sig-python-modules -python-pyzabbix: sig-python-modules -python-pyzmq: sig-python-modules -python-pyzolib: sig-python-modules -python-qcelemental: sig-python-modules -python-qrcode: Base-service -python-qrcodegen: sig-python-modules -python-qt5: sig-python-modules -python-quantities: sig-python-modules -python-queuelib: sig-python-modules -python-random2: 
sig-python-modules -python-randomize: sig-python-modules -python-rangehttpserver: sig-python-modules -python-rarfile: sig-python-modules -python-ratelimitingfilter: sig-python-modules -python-rawkit: sig-python-modules -python-rbd-iscsi-client: sig-openstack -python-rcssmin: sig-python-modules -python-rdflib: sig-python-modules -python-readme-renderer: sig-python-modules -python-rebulk: sig-python-modules -python-recommonmark: Base-service -python-redis: Base-service -python-regex: sig-python-modules -python-remoto: sig-python-modules -python-renderspec: sig-python-modules -python-reno: sig-openstack -python-reportlab: sig-python-modules -python-repoze-lru: Base-service -python-repoze.sphinx.autointerface: sig-python-modules -python-repoze.tm2: sig-python-modules -python-repoze.who: sig-python-modules -python-repoze.who.plugins.sa: sig-python-modules -python-requests: Networking -python-requests-aws: sig-openstack -python-requests-file: Base-service -python-requests-ftp: Networking -python-requests-gssapi: sig-python-modules -python-requests-kerberos: sig-python-modules -python-requests-mock: sig-openstack -python-requests-ntlm: sig-python-modules -python-requests-oauthlib: sig-python-modules -python-requests-toolbelt: sig-python-modules -python-requests-unixsocket: bigdata -python-requestsexceptions: sig-openstack -python-requirementslib: sig-openstack -python-responses: sig-openstack -python-restfly: sig-python-modules -python-restructuredtext-lint: sig-openstack -python-restsh: sig-python-modules -python-retask: sig-python-modules -python-retrying: sig-python-modules -python-retryz: sig-python-modules -python-rfc3339-validator: sig-python-modules -python-rfc3986: sig-python-modules -python-ripozo: sig-python-modules -python-rjsmin: sig-python-modules -python-rmtest: sig-python-modules -python-rnc2rng: sig-python-modules -python-robotframework: sig-ROS -python-rocket: sig-python-modules -python-roman: sig-python-modules -python-rope: sig-python-modules -python-rosinstall: sig-python-modules -python-routes: sig-python-modules -python-rpdb: sig-python-modules -python-rpkg: sig-python-modules -python-rply: sig-python-modules -python-rpm-generators: Base-service -python-rpmfluff: Private -python-rsa: sig-python-modules -python-rsd-lib: sig-openstack -python-rsdclient: sig-openstack -python-rst.linker: sig-openstack -python-rst2txt: Programming-language -python-rstcheck: sig-python-modules -python-rtslib: sig-python-modules -python-rtslib-fb: sig-openstack -python-ruamel-yaml: sig-python-modules -python-ruamel-yaml-clib: sig-python-modules -python-ruffus: sig-python-modules -python-rustcfg: sig-python-modules -python-rxjson: sig-python-modules -python-ryu: sig-openstack -python-s3transfer: sig-python-modules -python-saharaclient: sig-openstack -python-saml: sig-python-modules -python-sanction: sig-python-modules -python-scales: sig-python-modules -python-scandir: sig-python-modules -python-scapy: sig-python-modules -python-scciclient: sig-openstack -python-schedutils: Base-service -python-schema: sig-python-modules -python-scikit-learn: sig-python-modules -python-scikit-optimize: sig-python-modules -python-scons: Programming-language -python-scour: Private -python-scp: sig-python-modules -python-scramp: sig-python-modules -python-scripttest: sig-openstack -python-scripttester: sig-python-modules -python-scrypt: sig-python-modules -python-searchlightclient: sig-openstack -python-selectors2: sig-python-modules -python-selenium: sig-openstack -python-semantic_version: sig-python-modules 
-python-semver: sig-python-modules -python-senlinclient: sig-openstack -python-sentinels: sig-openstack -python-sentry-sdk: sig-python-modules -python-seqdiag: sig-python-modules -python-serpy: sig-python-modules -python-service-identity: sig-python-modules -python-setproctitle: sig-python-modules -python-setuptools: Base-service -python-setuptools-rust: sig-openstack -python-setuptools_git: sig-python-modules -python-setuptools_hg: sig-python-modules -python-setuptools_scm: Programming-language -python-sh: sig-python-modules -python-shamir-mnemonic: sig-python-modules -python-shodan: sig-python-modules -python-shortuuid: sig-python-modules -python-should_dsl: sig-python-modules -python-sieve: sig-python-modules -python-simplebayes: sig-python-modules -python-simpleeval: sig-python-modules -python-simplegeneric: sig-python-modules -python-simplejson: sig-python-modules -python-simpleline: Base-service -python-simplepam: sig-python-modules -python-simplevisor: sig-python-modules -python-simpy: sig-python-modules -python-singledispatch: Programming-language -python-siphash: sig-python-modules -python-six: Base-service -python-slack-cleaner: sig-python-modules -python-slacker: sig-python-modules -python-slip: Base-service -python-slixmpp: sig-python-modules -python-slugify: sig-python-modules -python-smartypants: GNOME -python-smmap: sig-python-modules -python-sniffio: sig-python-modules -python-snowballstemmer: Programming-language -python-snuggs: sig-python-modules -python-social-auth-app-flask: sig-python-modules -python-social-auth-app-flask-sqlalchemy: sig-python-modules -python-social-auth-storage-sqlalchemy: sig-python-modules -python-sockjs-tornado: sig-python-modules -python-socks5line: sig-python-modules -python-sortedcontainers: sig-python-modules -python-soupsieve: sig-openstack -python-spake2: sig-python-modules -python-spdx-lookup: sig-python-modules -python-speaklater: sig-python-modules -python-spec: sig-recycle -python-speedtest-cli: sig-python-modules -python-speg: sig-python-modules -python-sphinx: Programming-language -python-sphinx-argparse: sig-python-modules -python-sphinx-bootstrap-theme: sig-python-modules -python-sphinx-epytext: sig-python-modules -python-sphinx-feature-classification: sig-python-modules -python-sphinx-gallery: sig-python-modules -python-sphinx-intl: sig-python-modules -python-sphinx-issues: sig-python-modules -python-sphinx-notfound-page: sig-python-modules -python-sphinx-testing: sig-openstack -python-sphinx-theme-alabaster: Programming-language -python-sphinx_lv2_theme: sig-epol -python-sphinx_rtd_theme: Programming-language -python-sphinxcontrib-actdiag: sig-python-modules -python-sphinxcontrib-apidoc: sig-python-modules -python-sphinxcontrib-applehelp: sig-python-modules -python-sphinxcontrib-autoprogram: sig-openstack -python-sphinxcontrib-bibtex: sig-python-modules -python-sphinxcontrib-blockdiag: sig-python-modules -python-sphinxcontrib-devhelp: sig-python-modules -python-sphinxcontrib-fulltoc: sig-python-modules -python-sphinxcontrib-github-alt: bigdata -python-sphinxcontrib-htmlhelp: sig-python-modules -python-sphinxcontrib-httpdomain: sig-python-modules -python-sphinxcontrib-issuetracker: sig-python-modules -python-sphinxcontrib-jsmath: sig-python-modules -python-sphinxcontrib-log-cabinet: Programming-language -python-sphinxcontrib-pecanwsme: sig-python-modules -python-sphinxcontrib-programoutput: sig-openstack -python-sphinxcontrib-qthelp: sig-python-modules -python-sphinxcontrib-seqdiag: sig-python-modules 
-python-sphinxcontrib-serializinghtml: sig-python-modules -python-sphinxcontrib-spelling: Base-service -python-sphinxcontrib-svg2pdfconverter: sig-python-modules -python-sphinxcontrib-websupport: Programming-language -python-sphinxtesters: sig-python-modules -python-sphobjinv: sig-python-modules -python-spur: sig-python-modules -python-sqlalchemy: Programming-language -python-sqlalchemy-collectd: sig-python-modules -python-sqlalchemy-migrate: sig-openstack -python-sqlalchemy_schemadisplay: sig-python-modules -python-sqlparse: sig-python-modules -python-sseclient: sig-python-modules -python-statistics: sig-python-modules -python-statsd: sig-python-modules -python-statsmodels: sig-python-modules -python-stdlib-list: sig-python-modules -python-stem: sig-python-modules -python-stestr: sig-openstack -python-stevedore: sig-openstack -python-stomper: sig-python-modules -python-stompest: sig-python-modules -python-storage-interfaces: sig-openstack -python-storops: sig-openstack -python-storpool: sig-openstack -python-storpool.spopenstack: sig-openstack -python-straight-plugin: sig-python-modules -python-strict-rfc3339: sig-python-modules -python-stuf: sig-python-modules -python-subliminal: sig-python-modules -python-subprocess32: sig-recycle -python-subunit2sql: sig-openstack -python-suds-jurko: sig-openstack -python-suds2: sig-python-modules -python-supersmoother: sig-python-modules -python-supervisor: sig-python-modules -python-sure: Programming-language -python-sushy: sig-openstack -python-sushy-oem-idrac: sig-openstack -python-svg: sig-python-modules -python-svg.path: sig-python-modules -python-svgwrite: sig-python-modules -python-swiftclient: sig-openstack -python-sympy: sig-python-modules -python-systemd: Base-service -python-sysv-ipc: sig-openstack -python-tables: Private -python-tablib: sig-openstack -python-tabulate: sig-python-modules -python-tambo: sig-python-modules -python-taskflow: sig-openstack -python-tasklib: sig-python-modules -python-tbgrep: sig-python-modules -python-tblib: sig-python-modules -python-tbtrim: sig-python-modules -python-tempdir: sig-python-modules -python-tempest-lib: sig-openstack -python-tempita: Base-service -python-tempora: sig-python-modules -python-tenacity: sig-python-modules -python-tensorboard: A-Tune -python-tensorboard-plugin-wit: A-Tune -python-termcolor: sig-python-modules -python-terminado: bigdata -python-terminaltables: sig-python-modules -python-test-server: sig-python-modules -python-testpath: sig-python-modules -python-testrepository: sig-python-modules -python-testresources: sig-python-modules -python-testscenarios: Programming-language -python-testtools: Programming-language -python-texext: sig-python-modules -python-text-unidecode: sig-python-modules -python-textfsm: sig-openstack -python-textparser: sig-python-modules -python-texttable: sig-python-modules -python-tftpy: sig-python-modules -python-tgext.crud: sig-python-modules -python-threadloop: sig-openstack -python-threadpoolctl: sig-python-modules -python-timeout-decorator: sig-python-modules -python-timeunit: sig-python-modules -python-tinydb: sig-python-modules -python-tinyrpc: sig-python-modules -python-toml: sig-python-modules -python-tomli: sig-python-modules -python-toolz: sig-python-modules -python-tooz: sig-openstack -python-tornado: Programming-language -python-tox: sig-python-modules -python-tqdm: sig-python-modules -python-traceback2: Programming-language -python-traitlets: sig-python-modules -python-traitsui: sig-python-modules -python-transaction: sig-openstack 
-python-translationstring: sig-python-modules -python-treelib: sig-ops -python-trove-dashboard: sig-openstack -python-trove-tempest-plugin: sig-openstack -python-troveclient: sig-openstack -python-trustme: sig-python-modules -python-tvb-data: sig-python-modules -python-tw2.core: sig-python-modules -python-tw2.forms: sig-python-modules -python-tw2.jqplugins.ui: sig-python-modules -python-tw2.jquery: sig-python-modules -python-twilio: sig-python-modules -python-twine: sig-python-modules -python-twisted: sig-python-modules -python-txWS: sig-python-modules -python-txZMQ: sig-python-modules -python-typed-ast: sig-openstack -python-typing: sig-python-modules -python-typing-extensions: sig-openstack -python-typogrify: sig-python-modules -python-tzlocal: sig-python-modules -python-u-msgpack-python: Base-service -python-ucam-webauth: sig-python-modules -python-uhashring: sig-openstack -python-ujson: sig-openstack -python-unicodecsv: sig-recycle -python-unidiff: sig-python-modules -python-unittest2: Programming-language -python-upoints: sig-python-modules -python-uritemplate: sig-python-modules -python-urlgrabber: Programming-language -python-urllib3: Networking -python-urllib_gssapi: sig-python-modules -python-urwid: Programming-language -python-urwidtrees: sig-python-modules -python-utils: sig-python-modules -python-utmp: sig-python-modules -python-uwsgidecorators: sig-recycle -python-vagrantpy: sig-python-modules -python-validators: sig-python-modules -python-varlink: sig-python-modules -python-vatnumber: sig-python-modules -python-vconnector: sig-python-modules -python-vdirsyncer: sig-python-modules -python-verboselogs: sig-python-modules -python-versiontools: sig-python-modules -python-vine: sig-python-modules -python-vintage: sig-openstack -python-virtualenv: Programming-language -python-virtualenv-api: sig-python-modules -python-virtualenv-clone: sig-python-modules -python-virtualenvwrapper: sig-python-modules -python-visidata: sig-python-modules -python-visitor: sig-python-modules -python-visvis: sig-python-modules -python-vitrageclient: sig-openstack -python-vobject: sig-python-modules -python-volkszaehler: sig-python-modules -python-voluptuous: sig-python-modules -python-vpoller: sig-python-modules -python-vulture: sig-python-modules -python-w3lib: sig-python-modules -python-wadllib: sig-python-modules -python-waiting: sig-openstack -python-waitress: sig-python-modules -python-walkdir: sig-python-modules -python-warlock: sig-python-modules -python-watchdog: sig-python-modules -python-watcherclient: sig-openstack -python-wcwidth: sig-python-modules -python-weakrefmethod: sig-openstack -python-webassets: sig-python-modules -python-webcolors: sig-python-modules -python-webencodings: Base-service -python-webob: sig-python-modules -python-websocket-client: sig-python-modules -python-websockets: sig-python-modules -python-websockify: sig-openstack -python-webtest: sig-python-modules -python-webthing-ws: sig-python-modules -python-werkzeug: Programming-language -python-wheel: Programming-language -python-whereto: sig-openstack -python-which: sig-recycle -python-whichcraft: sig-python-modules -python-whisper: sig-python-modules -python-whitenoise: sig-python-modules -python-whois: sig-python-modules -python-whoosh: Programming-language -python-wikipedia: sig-python-modules -python-wikitcms: sig-python-modules -python-winacl: sig-python-modules -python-winrm: sig-python-modules -python-wmi: sig-openstack -python-wrapt: sig-python-modules -python-wrapt-1.10: sig-python-modules 
-python-wsgi-intercept: sig-python-modules -python-wsme: sig-openstack -python-wsproto: sig-python-modules -python-wtf-peewee: sig-python-modules -python-wurlitzer: sig-python-modules -python-www-authenticate: sig-python-modules -python-wxnatpy: sig-python-modules -python-xarray: sig-python-modules -python-xattr: sig-openstack -python-xcffib: sig-python-modules -python-xclarityclient: sig-openstack -python-xgboost: sig-python-modules -python-xlib: sig-python-modules -python-xlrd: sig-python-modules -python-xlwt: sig-python-modules -python-xml2rfc: sig-python-modules -python-xmlrunner: sig-python-modules -python-xmlschema: sig-openstack -python-xmltodict: sig-python-modules -python-xmod: sig-python-modules -python-xpath-expressions: sig-python-modules -python-xtermcolor: sig-python-modules -python-xunitparser: sig-python-modules -python-xvfbwrapper: sig-python-modules -python-xxhash: sig-python-modules -python-yamllint: sig-openstack -python-yamlloader: sig-openstack -python-yamlordereddictloader: sig-python-modules -python-yappi: sig-python-modules -python-yaql: sig-python-modules -python-yara: sig-python-modules -python-yarg: sig-python-modules -python-yarl: sig-python-modules -python-yaspin: sig-python-modules -python-yaswfp: sig-python-modules -python-yattag: sig-python-modules -python-yubico: sig-python-modules -python-yubikey-manager: sig-python-modules -python-zVMCloudConnector: sig-openstack -python-zabbix-api-erigones: sig-python-modules -python-zake: sig-openstack -python-zanata2fedmsg: sig-python-modules -python-zaqarclient: sig-openstack -python-zarr: sig-python-modules -python-zc-lockfile: sig-python-modules -python-zc.customdoctests: sig-python-modules -python-zdaemon: sig-python-modules -python-zeroconf: sig-python-modules -python-zipp: sig-python-modules -python-zipstream: sig-python-modules -python-zmq: sig-python-modules -python-zope-component: sig-python-modules -python-zope-configuration: sig-python-modules -python-zope-deferredimport: sig-python-modules -python-zope-deprecation: sig-python-modules -python-zope-event: sig-python-modules -python-zope-hookable: sig-python-modules -python-zope-interface: sig-python-modules -python-zope-proxy: sig-python-modules -python-zope-schema: sig-python-modules -python-zope.dottedname: sig-python-modules -python-zope.fixers: sig-python-modules -python-zope.i18n: sig-python-modules -python-zope.i18nmessageid: sig-python-modules -python-zope.testing: sig-python-modules -python-zstandard: sig-python-modules -python-zstd: sig-python-modules -python-zunclient: sig-openstack -python2: sig-recycle -python2-typing: sig-recycle -python3: Base-service -python3-docs: sig-python-modules -python3-mallard-ducktype: Base-service -python_cmake_module: sig-ROS -python_qt_binding: sig-ROS -pytorch: ai -pytz: Desktop -pyusb: dev-utils -pywbem: Programming-language -pyxattr: Base-service -pyxdg: Programming-language -qca: sig-KDE -qdox: Base-service -qemu: Virt -qfs: bigdata -qgnomeplatform: GNOME -qhull: Others -qjson: sig-UKUI -qpdf: Programming-language -qperf: Application -qpid-proton: Base-service -qpid-proton-java: sig-Java -qrencode: Desktop -qrupdate: bigdata -qscintilla: bigdata -qstardict: sig-desktop-apps -qt: Runtime -qt-assistant-adp: Others -qt-at-spi: Desktop -qt-gui: sig-ROS -qt-mobility: Others -qt-settings: Desktop -qt5: Desktop -qt5-doc: Others -qt5-qt3d: Programming-language -qt5-qtbase: Programming-language -qt5-qtcanvas3d: Programming-language -qt5-qtcharts: sig-UKUI -qt5-qtconnectivity: Programming-language -qt5-qtdeclarative: 
Programming-language -qt5-qtdoc: Programming-language -qt5-qtenginio: Others -qt5-qtgraphicaleffects: Programming-language -qt5-qtimageformats: Programming-language -qt5-qtlocation: Programming-language -qt5-qtmultimedia: Programming-language -qt5-qtquickcontrols: Programming-language -qt5-qtquickcontrols2: Programming-language -qt5-qtscript: Programming-language -qt5-qtsensors: Programming-language -qt5-qtserialbus: Programming-language -qt5-qtserialport: Programming-language -qt5-qtspeech: Programming-language -qt5-qtsvg: Programming-language -qt5-qttools: Programming-language -qt5-qttranslations: Programming-language -qt5-qtvirtualkeyboard: Programming-language -qt5-qtwayland: Programming-language -qt5-qtwebchannel: Programming-language -qt5-qtwebengine: Others -qt5-qtwebkit: Others -qt5-qtwebsockets: Programming-language -qt5-qtx11extras: Programming-language -qt5-qtxmlpatterns: Programming-language -qt5-ukui-platformtheme: sig-UKUI -qt5dxcb-plugin: sig-DDE -qt5integration: sig-DDE -qtchooser: sig-UKUI -qtwebkit: sig-recycle -quartz: sig-Java -quay: sig-OKD -quazip-qt5: sig-UKUI -querydsl3: sig-Java -quilt: Application -quota: Storage -qwt_dependency: sig-ROS -rabbitmq-java-client: sig-Java -rabbitmq-server: Application -racon: sig-bio -radiaTest: sig-QA -radvd: Networking -ragel: dev-utils -rain: bigdata -randomizedtesting: Base-service -ranger: dev-utils -rapidjson: Base-service -raptor2: Others -rarian: Base-service -rasdaemon: Base-service -raspberrypi: sig-RaspberryPi -raspberrypi-bluetooth: sig-RaspberryPi -raspberrypi-build: sig-RaspberryPi -raspberrypi-eeprom: sig-RaspberryPi -raspberrypi-firmware: sig-RaspberryPi -raspberrypi-kernel: sig-RaspberryPi -raspberrypi-userland: sig-RaspberryPi -raspi-config: sig-RaspberryPi -rasqal: Application -rcl: sig-ROS -rcl_interfaces: sig-ROS -rcl_logging: sig-ROS -rclcpp: sig-ROS -rcpputils: sig-ROS -rcs: Others -rcutilsl: sig-ROS -rdate: Application -rdiff-backup: Application -rdma-core: sig-high-performance-network -re2: Others -re2c: sig-mate-desktop -readline: Base-service -realmd: Base-service -realtime_support: sig-ROS -realtime_tools: sig-ROS -rear: Others -recode: Base-service -redhat-menus: sig-recycle -redis: Others -redis-protocol: Application -redis5: bigdata -redis6: bigdata -redland: Runtime -redshift: sig-UKUI -reflectasm: dev-utils -reflections: sig-Java -regexp: Application -reiserfs-utils: Others -relaxngDatatype: dev-utils -relaxngcc: dev-utils -release-management: sig-release-management -release-tools: sig-EasyLife -remotetea: sig-Java -replacer: Base-service -repo: Private -reproducible-builds: sig-reproducible-builds -resource-agents: Others -resource_retriever: sig-ROS -rest: Desktop -resteasy: sig-Java -rfkill: System-tool -rhash: Runtime -rhino: sig-Java -rhnlib: Programming-language -rhq-plugin-annotations: sig-Java -rhythmbox: Application -riemann-c-client: oVirt -rinetd: Application -risc-v-kernel: sig-RISC-V -ristretto: xfce -rmic-maven-plugin: Base-service -rmw: sig-ROS -rmw_connext: sig-ROS -rmw_cyclonedds: sig-ROS -rmw_dds_common: sig-ROS -rmw_fastrtps: sig-ROS -rmw_implementation: sig-ROS -rng-tools: Base-service -rngom: Application -robodoc: Application -robot_state_publisher: sig-ROS -robust-http-client: sig-Java -rockchip: sig-RaspberryPi -rockchip-kernel: sig-RaspberryPi -rocksdb: System-tool -rome: sig-Java -rootfiles: Base-service -rootsh: Others -ros: sig-ROS -ros1_bridge: sig-ROS -ros2_demos: sig-ROS -ros2_example_interfaces: sig-ROS -ros2_examples: sig-ROS -ros2_system_tests: sig-ROS -ros2_tracing: 
sig-ROS -ros2cli: sig-ROS -ros2cli_common_extensions: sig-ROS -ros_comm: sig-ROS -ros_comm_msgs: sig-ROS -ros_control: sig-ROS -ros_controllers: sig-ROS -ros_environment: sig-ROS -ros_testing: sig-ROS -ros_tutorials: sig-ROS -rosbag2: sig-ROS -rosbag_migration_rule: sig-ROS -rosconsole: sig-ROS -rosconsole_bridge: sig-ROS -roscpp_core: sig-ROS -rosidl: sig-ROS -rosidl_dds: sig-ROS -rosidl_defaults: sig-ROS -rosidl_python: sig-ROS -rosidl_runtime_py: sig-ROS -rosidl_typesupport: sig-ROS -rosidl_typesupport_connext: sig-ROS -rosidl_typesupport_fastrtps: sig-ROS -roslint: sig-ROS -roslisp: sig-ROS -rospack: sig-ROS -rpcbind: Networking -rpcsvc-proto: Application -rpm: Base-service -rpm-mpi-hooks: Private -rpm-ostree: sig-OKD -rpm-ostree-toolbox: sig-Ostree-Assembly -rpmdevtools: Packaging -rpmlint: Programming-language -rpmrebuild: Base-service -rpyutils: sig-ROS -rqt: sig-ROS -rqt_action: sig-ROS -rqt_bag: sig-ROS -rqt_common_plugins: sig-ROS -rqt_console: sig-ROS -rqt_dep: sig-ROS -rqt_graph: sig-ROS -rqt_image_view: sig-ROS -rqt_launch: sig-ROS -rqt_logger_level: sig-ROS -rqt_moveit: sig-ROS -rqt_msg: sig-ROS -rqt_nav_view: sig-ROS -rqt_plot: sig-ROS -rqt_pose_view: sig-ROS -rqt_publisher: sig-ROS -rqt_py_console: sig-ROS -rqt_reconfigure: sig-ROS -rqt_robot_dashboard: sig-ROS -rqt_robot_monitor: sig-ROS -rqt_robot_plugins: sig-ROS -rqt_robot_steering: sig-ROS -rqt_runtime_monitor: sig-ROS -rqt_rviz: sig-ROS -rqt_service_caller: sig-ROS -rqt_shell: sig-ROS -rqt_srv: sig-ROS -rqt_tf_tree: sig-ROS -rqt_top: sig-ROS -rqt_topic: sig-ROS -rqt_web: sig-ROS -rrdtool: Application -rsh: Private -rstudio: sig-bio -rsync: Base-service -rsyslog: Base-service -rt-tests: sig-industrial-control -rtcheck: sig-industrial-control -rteval: sig-industrial-control -rtkit: Desktop -rtorrent: Application -rubberband: Desktop -rubik: sig-CloudNative -ruby: sig-ruby -ruby-augeas: sig-ruby -ruby-common: sig-ruby -rubygem-Ascii85: sig-ruby -rubygem-RedCloth: sig-ruby -rubygem-ZenTest: sig-ruby -rubygem-abrt: sig-ruby -rubygem-actioncable: sig-ruby -rubygem-actionmailbox: sig-Ha -rubygem-actionmailer: sig-ruby -rubygem-actionpack: sig-ruby -rubygem-actiontext: sig-Ha -rubygem-actionview: sig-ruby -rubygem-activejob: sig-ruby -rubygem-activemodel: sig-ruby -rubygem-activemodel-serializers-xml: sig-ruby -rubygem-activerecord: sig-ruby -rubygem-activerecord-nulldb-adapter: sig-ruby -rubygem-activerecord-session_store: sig-ruby -rubygem-activeresource: sig-ruby -rubygem-activestorage: sig-ruby -rubygem-activesupport: sig-ruby -rubygem-addressable: sig-ruby -rubygem-afm: sig-ruby -rubygem-algebrick: sig-ops -rubygem-ancestry: sig-ruby -rubygem-ansi: sig-ruby -rubygem-apipie-dsl: sig-ruby -rubygem-apipie-params: sig-ruby -rubygem-apipie-rails: sig-ruby -rubygem-arel: sig-ruby -rubygem-aruba: sig-ruby -rubygem-asciidoctor: sig-ruby -rubygem-atomic: sig-recycle -rubygem-audited: sig-ruby -rubygem-autoprefixer-rails: sig-ruby -rubygem-backports: sig-ruby -rubygem-bacon: sig-ruby -rubygem-bcrypt: sig-ruby -rubygem-bindex: sig-ruby -rubygem-bootsnap: sig-ruby -rubygem-bootstrap-sass: sig-ops -rubygem-bson: sig-ruby -rubygem-builder: sig-ruby -rubygem-bundler: sig-ruby -rubygem-bundler_ext: sig-ops -rubygem-byebug: sig-ruby -rubygem-capybara: sig-ruby -rubygem-childprocess: sig-ruby -rubygem-chronic: sig-ruby -rubygem-clamp: sig-ruby -rubygem-coderay: sig-ruby -rubygem-coffee-rails: sig-ops -rubygem-coffee-script: sig-ruby -rubygem-coffee-script-source: sig-ruby -rubygem-concurrent-ruby: sig-ruby -rubygem-connection_pool: 
sig-ruby -rubygem-contracts: sig-ruby -rubygem-cool.io: sig-ruby -rubygem-crack: sig-ruby -rubygem-crass: sig-ruby -rubygem-creole: sig-ruby -rubygem-cucumber: sig-ruby -rubygem-cucumber-core: sig-ruby -rubygem-cucumber-expressions: sig-ruby -rubygem-cucumber-tag_expressions: sig-ruby -rubygem-cucumber-wire: sig-ruby -rubygem-curb: sig-ruby -rubygem-daemons: sig-ruby -rubygem-dalli: sig-ruby -rubygem-deacon: sig-ops -rubygem-deep_cloneable: sig-ruby -rubygem-delorean: sig-ruby -rubygem-diff-lcs: sig-ruby -rubygem-dig_rb: sig-ruby -rubygem-docile: sig-ruby -rubygem-domain_name: sig-ruby -rubygem-dynflow: sig-ruby -rubygem-ejs: sig-ruby -rubygem-elasticsearch-ruby: sig-ops -rubygem-erubi: sig-ruby -rubygem-erubis: sig-ruby -rubygem-ethon: sig-ruby -rubygem-eventmachine: sig-ruby -rubygem-excon: sig-ruby -rubygem-execjs: sig-ruby -rubygem-expression_parser: sig-ruby -rubygem-facter: sig-ruby -rubygem-fakefs: sig-ruby -rubygem-faraday: sig-ruby -rubygem-faraday-em_http: sig-ops -rubygem-faraday-em_synchrony: sig-ops -rubygem-faraday-excon: sig-ops -rubygem-faraday-httpclient: sig-ops -rubygem-faraday-net_http: sig-ops -rubygem-faraday-net_http_persistent: sig-ops -rubygem-faraday-patron: sig-ops -rubygem-faraday-rack: sig-ops -rubygem-fast_gettext: sig-ruby -rubygem-fattr: sig-ruby -rubygem-ffi: sig-ruby -rubygem-flexmock: sig-ruby -rubygem-fluent-plugin-elasticsearch: sig-ops -rubygem-fluentd: sig-ruby -rubygem-fog-core: sig-ruby -rubygem-font-awesome-sass: sig-ruby -rubygem-formatador: sig-ruby -rubygem-friendly_id: sig-ruby -rubygem-gem2rpm: sig-ruby -rubygem-get_process_mem: sig-ruby -rubygem-gettext: sig-ops -rubygem-gettext_i18n_rails_js: sig-ruby -rubygem-gherkin: sig-ruby -rubygem-globalid: sig-ruby -rubygem-graphql: sig-ruby -rubygem-graphql-batch: sig-ruby -rubygem-haml: sig-ruby -rubygem-hashdiff: sig-ruby -rubygem-hashery: sig-ruby -rubygem-hashie: sig-ruby -rubygem-highline: sig-ruby -rubygem-hiredis: sig-Ha -rubygem-hpricot: sig-recycle -rubygem-http-cookie: sig-ruby -rubygem-http_parser: sig-ruby -rubygem-httpclient: sig-ruby -rubygem-i18n: sig-ruby -rubygem-idn: sig-ruby -rubygem-introspection: sig-ruby -rubygem-jbuilder: sig-ruby -rubygem-jquery-rails: sig-ruby -rubygem-jquery-ui-rails: sig-ops -rubygem-json_pure: sig-ruby -rubygem-jwt: sig-ruby -rubygem-kafo: sig-ruby -rubygem-kafo_parsers: sig-ruby -rubygem-kafo_wizards: sig-ruby -rubygem-kramdown: Application -rubygem-kramdown-parser-gfm: sig-ruby -rubygem-launchy: sig-ruby -rubygem-ldap_fluff: sig-ruby -rubygem-liquid: sig-ruby -rubygem-listen: sig-ruby -rubygem-little-plugger: sig-ruby -rubygem-locale: sig-ops -rubygem-logging: sig-ruby -rubygem-loofah: sig-ruby -rubygem-mail: sig-ruby -rubygem-marcel: sig-ruby -rubygem-maruku: sig-ruby -rubygem-memcache-client: sig-ruby -rubygem-metaclass: sig-ruby -rubygem-method_source: sig-ruby -rubygem-mime-types: sig-ruby -rubygem-mime-types-data: sig-ruby -rubygem-mimemagic: sig-ruby -rubygem-mini_magick: sig-ruby -rubygem-mini_mime: sig-ruby -rubygem-minitest: sig-ruby -rubygem-minitest-reporters: sig-ruby -rubygem-minitest4: sig-ruby -rubygem-mocha: sig-ruby -rubygem-msgpack: sig-ruby -rubygem-multi_json: sig-ruby -rubygem-multi_test: sig-ruby -rubygem-multipart-post: sig-ruby -rubygem-mustache: sig-ruby -rubygem-mustermann: sig-ruby -rubygem-net-ldap: sig-ruby -rubygem-net-ping: sig-ops -rubygem-net-scp: sig-ruby -rubygem-net-ssh: sig-ruby -rubygem-netrc: sig-ruby -rubygem-nio4r: sig-ruby -rubygem-nokogiri: sig-ruby -rubygem-oauth: sig-ruby -rubygem-oj: sig-ruby -rubygem-open4: 
sig-ruby -rubygem-ovirt-engine-sdk4: oVirt -rubygem-pathspec: sig-ruby -rubygem-patternfly-sass: sig-ruby -rubygem-pdf-core: sig-ruby -rubygem-pdf-inspector: sig-ruby -rubygem-pdf-reader: sig-ruby -rubygem-pg: sig-Ha -rubygem-pkg-config: sig-ruby -rubygem-power_assert: sig-ruby -rubygem-powerbar: sig-ops -rubygem-prawn: sig-ruby -rubygem-prawn-table: sig-ruby -rubygem-pry: sig-ruby -rubygem-pry-nav: sig-ruby -rubygem-public_suffix: sig-ruby -rubygem-puma: sig-ruby -rubygem-rabl: sig-ruby -rubygem-racc: sig-Ha -rubygem-rack: sig-ruby -rubygem-rack-cache: sig-ruby -rubygem-rack-cors: sig-ruby -rubygem-rack-protection: sig-ruby -rubygem-rack-test: sig-ruby -rubygem-rails: sig-ruby -rubygem-rails-controller-testing: sig-ruby -rubygem-rails-css_parser: sig-ruby -rubygem-rails-dom-testing: sig-ruby -rubygem-rails-html-sanitizer: sig-ruby -rubygem-rails-i18n: sig-ruby -rubygem-railties: sig-ruby -rubygem-rake-compiler: sig-ruby -rubygem-rb-inotify: sig-ruby -rubygem-rdiscount: sig-ruby -rubygem-record_tag_helper: sig-ruby -rubygem-redcarpet: sig-ruby -rubygem-redis: sig-ruby -rubygem-regexp_parser: sig-ruby -rubygem-regexp_property_values: sig-ruby -rubygem-responders: sig-ruby -rubygem-rest-client: sig-ruby -rubygem-rgen: sig-ruby -rubygem-roadie: sig-ruby -rubygem-roadie-rails: sig-ops -rubygem-ronn: sig-recycle -rubygem-ronn-ng: sig-ruby -rubygem-rouge: sig-ruby -rubygem-rr: sig-Ha -rubygem-rspec: sig-ruby -rubygem-rspec-core: sig-ruby -rubygem-rspec-expectations: sig-ruby -rubygem-rspec-its: sig-ruby -rubygem-rspec-mocks: sig-ruby -rubygem-rspec-rails: sig-ruby -rubygem-rspec-support: sig-ruby -rubygem-rspec2: sig-ruby -rubygem-rspec2-core: sig-ruby -rubygem-rspec2-expectations: sig-ruby -rubygem-rspec2-mocks: sig-ruby -rubygem-ruby-progressbar: sig-ruby -rubygem-ruby-rc4: sig-ruby -rubygem-ruby-shadow: sig-ruby -rubygem-ruby2_keywords: sig-ops -rubygem-ruby2ruby: sig-ops -rubygem-rubyzip: sig-ruby -rubygem-safe_yaml: sig-ruby -rubygem-safemode: sig-ruby -rubygem-sass: sig-ruby -rubygem-sass-rails: sig-ruby -rubygem-sassc: sig-Ha -rubygem-sassc-rails: sig-Ha -rubygem-scoped_search: sig-ruby -rubygem-sdoc: sig-ruby -rubygem-secure_headers: sig-ruby -rubygem-selenium-webdriver: sig-ruby -rubygem-sequel: sig-ruby -rubygem-serverengine: sig-ruby -rubygem-session: sig-recycle -rubygem-sexp_processor: sig-ruby -rubygem-shindo: sig-ruby -rubygem-shoulda: sig-ruby -rubygem-shoulda-context: sig-ruby -rubygem-shoulda-matchers: sig-ruby -rubygem-sigdump: sig-ruby -rubygem-simplecov: sig-ruby -rubygem-simplecov-html: sig-ruby -rubygem-sinatra: sig-ruby -rubygem-slop: sig-ruby -rubygem-spring: sig-ruby -rubygem-sprockets: sig-ruby -rubygem-sprockets-rails: sig-ruby -rubygem-sqlite3: sig-ruby -rubygem-sshkey: sig-ruby -rubygem-statsd-instrument: sig-ruby -rubygem-strptime: sig-ruby -rubygem-temple: sig-ruby -rubygem-test-unit-rr: sig-Ha -rubygem-test_declarative: sig-ruby -rubygem-text: sig-ops -rubygem-thin: sig-ruby -rubygem-thor: sig-ruby -rubygem-thread_order: sig-ruby -rubygem-thread_safe: sig-ruby -rubygem-tilt: sig-ruby -rubygem-timecop: sig-ruby -rubygem-ttfunk: sig-ruby -rubygem-turbolinks: sig-ruby -rubygem-turbolinks-source: sig-ruby -rubygem-typhoeus: sig-ruby -rubygem-tzinfo: sig-ruby -rubygem-tzinfo-data: sig-ruby -rubygem-uglifier: sig-ruby -rubygem-unf: sig-ruby -rubygem-unf_ext: sig-ruby -rubygem-validates_lengths_from_database: sig-ruby -rubygem-webmock: sig-ruby -rubygem-webpack-rails: sig-ops -rubygem-webrick: sig-ruby -rubygem-websocket: sig-ruby -rubygem-websocket-driver: sig-ruby 
-rubygem-websocket-extensions: sig-ruby -rubygem-wikicloth: sig-ruby -rubygem-will_paginate: sig-ruby -rubygem-xpath: sig-ruby -rubygem-yajl-ruby: sig-ruby -rubygem-yard: sig-ruby -rubygem-zeitwerk: sig-Ha -rubyporter: dev-utils -runc: sig-CloudNative -rust: sig-Rust -rust-cbindgen: sig-Rust -rust-packaging: sig-Rust -rust-srpm-macros: sig-epol -rustup: sig-Rust -rviz: sig-ROS -rxjava: sig-Java -rxtx: sig-Java -rygel: GNOME -s-tui: dev-utils -s3fs-fuse: Storage -saab-fonts: Desktop -sac: sig-Java -safelease: oVirt -samba: Networking -samtools: sig-bio -samyak-fonts: Desktop -sane-backends: System-tool -sane-frontends: Application -sanlock: System-tool -sassc: Others -sat4j: sig-Java -satyr: Desktop -saxon: dev-utils -saxpath: sig-Java -sbc: Desktop -sbd: sig-Ha -sbinary: sig-Java -sblim-cmpi-devel: Programming-language -sblim-sfcCommon: Others -sblim-sfcb: System-tool -sblim-sfcc: System-tool -sbt: sig-Java -scala: sig-Java -scalapack: sig-epol -scannotation: sig-Java -scap-security-guide: sig-security-facility -scap-workbench: sig-security-facility -schroedinger: sig-epol -scipy: Programming-language -scl-utils: Application -scotch: sig-epol -screen: Base-service -scrub: Application -scsi-target-utils: Others -sdparm: Storage -seabios: Others -seahorse: Desktop -seastar: sig-high-performance-network -secGear: sig-confidential-computing -secpaver: sig-security-facility -security-committee: security-committee -security-facility: sig-security-facility -security-tool: sig-security-facility -sed: Base-service -selinux-policy: sig-security-facility -sendmail: Desktop -sentencepiece: ai -seqtk: sig-bio -sequence-library: sig-Java -serd: sig-epol -serp: sig-Java -setools: Base-service -setroubleshoot: sig-security-facility -setroubleshoot-plugins: Base-service -setserial: Desktop -setup: Base-service -setuptool: Private -sezpoz: sig-Java -sg3_utils: Storage -sgabios: Base-service -sgml-common: Desktop -sgpio: Base-service -shaderc: Desktop -shadow: Base-service -shapelib: Computing -shared-desktop-ontologies: sig-UKUI -shared-mime-info: Desktop -sharutils: Base-service -shibboleth-java-parent-v3: sig-Java -shibboleth-java-support: sig-Java -shim: Base-service -shim-unsigned-aarch64: sig-recycle -shotwell: sig-UKUI -shrinkwrap: sig-Java -shrinkwrap-descriptors: sig-Java -shrinkwrap-resolver: sig-Java -si-units: Base-service -siege: sig-epol -sig-Edge: sig-Edge -sig-OSCourse: sig-OSCourse -sig-OpenBoard: sig-OpenBoard -sig-epol: sig-epol -sigar: sig-Java -signpost-core: sig-Java -sil-abyssinica-fonts: System-tool -sil-nuosu-fonts: Desktop -sil-padauk-fonts: Desktop -sil-scheherazade-fonts: System-tool -simde: dev-utils -simple: sig-Java -simple-scan: GNOME -simple-xml: sig-Java -sip: Others -sisu: sig-Java -sisu-mojos: sig-Java -skkdic: Application -skopeo: sig-CloudNative -skylark: Virt -slam_gmapping: sig-ROS -slang: Base-service -slapi-nis: Application -sleuthkit: Others -slf4j: sig-Java -slf4j-jboss-logmanager: sig-Java -slirp4netns: sig-CloudNative -slurm: dev-utils -smartdenovo: sig-bio -smartmontools: Storage -smc-fonts: System-tool -smp_utils: Storage -snakeyaml: Base-service -snapd-glib: Application -snappy: Base-service -snappy-java: Base-service -sni-qt: Private -snmp4j: oVirt -snowball-java: sig-Java -socat: Application -socket_wrapper: Programming-language -soem: sig-industrial-control -soes: sig-industrial-control -sofia-sip: GNOME -softhsm: sig-security-facility -solr: Application -sombok: Base-service -sonatype-oss-parent: sig-Java -sonatype-plugins-parent: sig-Java 
-sonic-buildimage: sig-ONL -sonic-linux-kernel: sig-ONL -sord: sig-epol -sos: Base-service -sos-collector: Private -sound-theme-freedesktop: Desktop -soundtouch: Application -source-highlight: Desktop -sox: Others -soxr: Desktop -spamassassin: Application -spark: bigdata -sparsehash: sig-epol -spatial4j: sig-Java -spawn-fcgi: Networking -spdk: Storage -spdlog_vendor: sig-ROS -spec-version-maven-plugin: Application -speech-dispatcher: System-tool -speex: Base-service -speexdsp: Base-service -sphinx: Others -spice: Desktop -spice-gtk: Desktop -spice-html5: sig-openstack -spice-parent: Desktop -spice-protocol: Programming-language -spice-vdagent: Desktop -spirv-headers: sig-compat-winapp -spirv-llvm-translator: Compiler -spirv-tools: sig-compat-winapp -spock: Programming-language -spring-ldap: sig-Java -springframework: sig-Java -springframework-amqp: sig-Java -springframework-batch: sig-Java -springframework-data-commons: sig-Java -springframework-data-mongodb: sig-Java -springframework-data-redis: sig-Java -springframework-hateoas: sig-Java -springframework-plugin: sig-Java -springframework-retry: sig-Java -spymemcached: sig-Java -sqlite: DB -sqlite-jdbc: dev-utils -sqljet: sig-Java -squashfs-tools: Storage -squid: Networking -sratom: sig-epol -sros2: sig-ROS -srt: Desktop -sscg: Base-service -ssh-key-dir: sig-CloudNative -sshj: sig-Java -sshpass: Application -sslext: sig-Java -sssd: Base-service -stage: sig-ROS -stage_ros: sig-ROS -stalld: sig-CloudNative -stapler: sig-Java -stapler-adjunct-timeline: sig-Java -star: Base-service -stardict: sig-desktop-apps -startdde: sig-DDE -startup-notification: Base-service -stax-ex: dev-utils -stax2-api: sig-Java -staxmapper: sig-Java -std_msgs: sig-ROS -stix-fonts: Desktop -storm: bigdata -stortrace: Storage -strace: Computing -stratovirt: Virt -stream-lib: sig-Java -stress-ng: dev-utils -stringtemplate: dev-utils -stringtemplate4: Base-service -stringtie: sig-bio -strongswan: sig-security-facility -struts: sig-Java -stunnel: Application -subscription-manager: sig-recycle -subunit: Programming-language -subversion: Base-service -sudo: Base-service -suitesparse: Others -summer2022: sig-OSCourse -sundials: ai -sunpinyin: Desktop -supermin: System-tool -sushi: Base-service -svnkit: sig-Java -sw-committee: sig-sw-arch -swagger-codegen: sig-high-performance-network -swagger-core: sig-Java -swagger-spec-validator: sig-python-modules -swagger-ui-bundle: sig-python-modules -swig: Programming-language -switcheroo-control: Desktop -swt-chart: sig-Java -swtpm: sig-security-facility -symlinks: Base-service -sync-bot: sig-EasyLife -sysbench: dev-utils -sysconftool: sig-epol -syscontainer-tools: iSulad -sysdig: A-Tune -sysfsutils: Storage -sysget: dev-utils -syslinux: sig-OS-Builder -syslinux-tftpboot: Private -sysmonitor: sig-ops -sysprof: Desktop -sysrepo: sig-industrial-control -sysstat: Base-service -system-config-firewall: Private -system-config-language: sig-UKUI -system-config-printer: System-tool -system-config-users: sig-mate-desktop -system-config-users-docs: sig-mate-desktop -system-storage-manager: Storage -systemd: Base-service -systemd-bootchart: dev-utils -systemtap: Computing -t-digest: Application -t1utils: Application -taglib: Desktop -taglist-enable: Private -tagsoup: sig-Java -takari-archiver: sig-Java -takari-incrementalbuild: sig-Java -takari-lifecycle: sig-Java -takari-plugin-testing: sig-Java -takari-pom: sig-Java -tang: System-tool -tango_icons_vendor: sig-ROS -tar: Base-service -targetcli: Application -tarsier: sig-high-performance-network 
-tascalate-asmx: sig-Java -tascalate-javaflow: sig-Java -tbb: Programming-language -tboot: Application -tcl: Base-service -tcllib: Base-service -tclx: Base-service -tcp_wrappers: Networking -tcpdump: Networking -tcsh: Base-service -technical-certification: sig-Compatibility-Infra -teckit: dev-utils -telegraf: bigdata -telepathy-filesystem: Desktop -telepathy-glib: Desktop -telepathy-logger: Desktop -telnet: Networking -template-glib: GNOME -tengine: Application -tensorflow: ai -tepl: GNOME -tesseract: Application -tesseract-tessdata: Application -test-interface: sig-Java -test-tools: sig-QA -test_interface_files: sig-ROS -testng: Application -teuthology: sig-ceph -tex-fonts-hebrew: Application -texi2html: Application -texinfo: Base-service -texlive: Application -texlive-base: Application -texlive-filesystem: Application -texlive-split-a: Application -texlive-split-b: Application -texlive-split-c: Application -texlive-split-d: Application -texlive-split-e: Application -texlive-split-f: Application -texlive-split-g: Application -texlive-split-h: Application -texlive-split-i: Application -texlive-split-j: Application -texlive-split-k: Application -texlive-split-l: Application -texlive-split-m: Application -texlive-split-n: Application -texlive-split-o: Application -texlive-split-p: Application -texlive-split-q: Application -texlive-split-r: Application -texlive-split-s: Application -texlive-split-t: Application -texlive-split-u: Application -texlive-split-v: Application -texlive-split-w: Application -texlive-split-x: Application -texlive-split-y: Application -texlive-split-z: Application -tez: DB -tftp: Networking -thai-scalable-fonts: Desktop -the_silver_searcher: dev-utils -thin-provisioning-tools: Storage -thonny: Desktop -thredds: sig-Java -three-eight-nine-ds-base: Application -thrift: Base-service -thunar-archive-plugin: xfce -thunar-media-tags-plugin: xfce -thunar-vcs-plugin: xfce -thunar-volman: xfce -thunarx-python: xfce -thunderbird: sig-desktop-apps -thx: Private -tibetan-machine-uni-fonts: Desktop -tidb: DB -tidy: Others -tig: dev-utils -tigervnc: Desktop -tika: sig-Java -tilda: sig-desktop-apps -tiles: sig-Java -time: Base-service -time-api: sig-Java -time-shutdown: sig-UKUI -timedatex: Base-service -tinycdb: Runtime -tinyxml: sig-compat-winapp -tinyxml2: Programming-language -tinyxml2_vendor: sig-ROS -tinyxml_vendor: sig-ROS -tipcutils: Networking -tito: dev-utils -tix: Programming-language -tk: Desktop -tldr: Application -tlsf: sig-ROS -tmpwatch: Base-service -tmux: Desktop -tng: Base-service -tofrodos: dev-utils -tog-pegasus: System-tool -tokyocabinet: Base-service -tomcat: Application -tomcat-native: Private -tomcat-taglibs-parent: sig-Java -tomcat-taglibs-standard: Application -tomcatjss: Base-service -tool-collections: Infrastructure -toolbox: sig-CloudNative -torque: Application -totem: Base-service -totem-pl-parser: Base-service -tp-libvirt: sig-QA -tp-qemu: sig-QA -tpm-quote-tools: Application -tpm-tools: Application -tpm2-abrmd: sig-security-facility -tpm2-abrmd-selinux: sig-recycle -tpm2-tools: sig-security-facility -tpm2-tss: sig-security-facility -trace-cmd: Programming-language -traceroute: Networking -tracker: Base-service -tracker-miners: Base-service -tracker3: GNOME -tracker3-miners: GNOME -trafficserver: Networking -transfig: Application -transmission: Desktop -tre: Application -tree: Storage -treelayout: sig-Java -trilead-putty-extension: sig-Java -trilead-ssh2: sig-Java -trimmomatic: sig-bio -trousers: Base-service -trucker: dev-utils -tslib: 
sig-compat-winapp -tss2: sig-security-facility -ttembed: Application -ttfautohint: Application -ttmkfdir: Application -tumbler: xfce -tuna: Others -tuned: Computing -tuscany-sdo-java: Base-service -twolame: Others -txt2man: sig-epol -txw2: sig-Java -tycho: sig-Java -tycho-extras: sig-Java -typesafe-config: sig-Java -tzdata: Computing -u2f-hidraw-policy: Others -uadk: sig-AccLib -uadk_engine: sig-AccLib -uboot-tools: sig-OS-Builder -ubu-keyring: Private -ubuntukylin-default-settings: sig-UKUI -uchardet: Others -ucs-miscfixed-fonts: Others -udisks2: Storage -udisks2-qt5: Desktop -uget: sig-desktop-apps -uglify-js: sig-nodejs -uglify-js1: sig-nodejs -uid_wrapper: Application -uima-addons: sig-Java -uima-parent-pom: sig-Java -uimaj: sig-Java -ukui-biometric-auth: sig-UKUI -ukui-biometric-manager: sig-UKUI -ukui-bluetooth: sig-UKUI -ukui-control-center: sig-UKUI -ukui-desktop-environment: sig-UKUI -ukui-greeter: sig-UKUI -ukui-indicators: sig-UKUI -ukui-interface: sig-UKUI -ukui-kwin: sig-UKUI -ukui-media: sig-UKUI -ukui-menu: sig-UKUI -ukui-notebook: sig-UKUI -ukui-notification-daemon: sig-UKUI -ukui-panel: sig-UKUI -ukui-paste: sig-UKUI -ukui-power-manager: sig-UKUI -ukui-screensaver: sig-UKUI -ukui-screenshot: sig-UKUI -ukui-search: sig-UKUI -ukui-search-extensions: sig-UKUI -ukui-session-manager: sig-UKUI -ukui-settings-daemon: sig-UKUI -ukui-sidebar: sig-UKUI -ukui-system-monitor: sig-UKUI -ukui-themes: sig-UKUI -ukui-user-guide: sig-UKUI -ukui-wallpapers: sig-UKUI -ukui-window-switch: sig-UKUI -ukwm: sig-UKUI -ukylin-feedback-client: sig-UKUI -umoci: sig-CloudNative -umockdev: Base-service -uname-build-checks: Base-service -unbound: Networking -unboundid-ldapsdk: oVirt -uncrustify_vendor: sig-ROS -undertow: sig-Java -unicode-emoji: Base-service -unicode-ucd: System-tool -uniconvertor: Application -unique: Base-service -unique3: sig-mate-desktop -unique_identifier_msgs: sig-ROS -unit-api: Base-service -units: Application -univocity-parsers: Base-service -unixODBC: DB -unixbench: dev-utils -unrtf: Application -unzip: Base-service -uom-lib: Base-service -uom-parent: Base-service -uom-se: dev-utils -uom-systems: Base-service -update-desktop-files: Desktop -uperf: Application -upower: Computing -urdf: sig-ROS -urdf_sim_tutorial: sig-ROS -urdf_tutorial: sig-ROS -urdfdom: sig-ROS -urdfdom_headers: sig-ROS -urdfdom_py: sig-ROS -uriparser: dev-utils -urjtag: sig-embedded -urlview: Others -urw-base35-fonts: Desktop -usb_modeswitch: System-tool -usb_modeswitch-data: Application -usbguard: Application -usbmuxd: Runtime -usbredir: Storage -usbutils: Storage -user-committee: user-committee -usermode: Base-service -userspace-rcu: Computing -ustr: Base-service -utf8cpp: dev-utils -utf8proc: Base-service -uthash: Base-service -util-linux: Base-service -uuid: Programming-language -uwsgi: Application -v2v-conversion-host: oVirt -v4l-utils: System-tool -vala: GNOME -valgrind: Programming-language -vamp-plugin-sdk: Desktop -vapoursynth: sig-epol -varnish: System-tool -vboot-utils: Base-service -vcftools: sig-bio -vconfig: Networking -vdo: Runtime -vdsm: oVirt -vdsm-jsonrpc-java: oVirt -vectorBlas: bigdata -velocity: sig-Java -velocity-tools: sig-Java -vhostmd: oVirt -vid.stab: Desktop -viewnior: Desktop -vim: Base-service -vim-airline: Application -vim-ansible: Application -vinagre: Application -vino: Desktop -virglrenderer: Virt -virt-manager: oVirt -virt-top: Private -virt-viewer: oVirt -virt-what: sig-CloudNative -vision_opencv: sig-ROS -visualization_tutorials: sig-ROS -vkd3d: sig-compat-winapp -vlc: 
sig-epol -vmaf: sig-epol -vmtop: Virt -vnpy: sig-desktop-apps -vo-amrwbenc: Desktop -volume_key: Base-service -vorbis-java: sig-Java -vorbis-tools: Application -voroplusplus: Application -vpnc-script: Application -vsftpd: Networking -vte: Application -vte291: GNOME -vtk: sig-epol -vulkan-headers: Base-service -vulkan-loader: Base-service -waf: Programming-language -watchdog: System-tool -wavpack: Application -wayca-scheduler: sig-WayCa -wayca-scheduler-bench: sig-WayCa -wayland: Desktop -wayland-protocols: Application -wdiff: Application -web-assets: Application -webbench: dev-utils -webkit2gtk3: Desktop -webkit_dependency: sig-ROS -webrtc-audio-processing: Desktop -website: Infrastructure -website-v2: Infrastructure -weechat: Application -weld-api: sig-Java -weld-core: sig-Java -weld-parent: sig-Java -weston: Application -wget: Networking -which: Base-service -whois: sig-epol -wildfly-build-tools: sig-Java -wildfly-common: sig-Java -wildfly-core: sig-Java -wildfly-elytron: sig-Java -wildfly-security-manager: sig-Java -wildmidi: Application -wine: sig-compat-winapp -wine-app: sig-compat-winapp -wine-mono: sig-compat-winapp -wireguard-tools: Networking -wireless-regdb: sig-mate-desktop -wireless-tools: sig-mate-desktop -wireshark: Application -wisdom-advisor: A-Tune -wmctrl: Application -woff2: Desktop -woodstox-core: dev-utils -wordpress: sig-epol -words: Base-service -wpa_supplicant: Base-service -wpebackend-fdo: dev-utils -wqy-microhei-fonts: Desktop -wqy-zenhei-fonts: Desktop -wrf: sig-HPC -wrk: Networking -ws-commons-util: dev-utils -ws-jaxme: sig-Java -ws-xmlschema: sig-Java -wsdl4j: sig-Java -wsl: sig-OS-Builder -wsmancli: System-tool -wss4j: sig-Java -wtdbg2: sig-bio -wuhan_uni_tech_2021: sig-recycle -wxGTK3: Desktop -wxPython: sig-python-modules -x264: Desktop -x265: Desktop -x3270: Desktop -xacro: sig-ROS -xalan-j2: Base-service -xapian-core: Others -xapool: sig-Java -xapps: sig-cinnamon -xarchiver: xfce -xbanish: Application -xbean: sig-Java -xcb-proto: Programming-language -xcb-util: Desktop -xcb-util-cursor: Programming-language -xcb-util-image: Programming-language -xcb-util-keysyms: Programming-language -xcb-util-renderutil: Programming-language -xcb-util-wm: Programming-language -xdelta: Programming-language -xdg-dbus-proxy: Application -xdg-desktop-portal: Desktop -xdg-desktop-portal-gtk: Application -xdg-user-dirs: Desktop -xdg-user-dirs-gtk: Desktop -xdg-utils: Desktop -xdp-cpumap-tc: Networking -xdp-tools: sig-high-performance-network -xenomai: sig-industrial-control -xerces-c: Application -xerces-j2: Base-service -xfburn: xfce -xfce-polkit: xfce -xfce-theme-manager: xfce -xfce4-appfinder: xfce -xfce4-battery-plugin: xfce -xfce4-calculator-plugin: xfce -xfce4-clipman-plugin: xfce -xfce4-cpufreq-plugin: xfce -xfce4-cpugraph-plugin: xfce -xfce4-datetime-plugin: xfce -xfce4-dev-tools: xfce -xfce4-dict: xfce -xfce4-diskperf-plugin: xfce -xfce4-embed-plugin: sig-recycle -xfce4-eyes-plugin: xfce -xfce4-fsguard-plugin: xfce -xfce4-genmon-plugin: xfce -xfce4-hardware-monitor-plugin: sig-recycle -xfce4-mailwatch-plugin: xfce -xfce4-mount-plugin: xfce -xfce4-mpc-plugin: xfce -xfce4-netload-plugin: xfce -xfce4-notes-plugin: xfce -xfce4-notifyd: xfce -xfce4-panel: xfce -xfce4-panel-profiles: xfce -xfce4-places-plugin: xfce -xfce4-power-manager: xfce -xfce4-pulseaudio-plugin: xfce -xfce4-screensaver: xfce -xfce4-screenshooter: xfce -xfce4-sensors-plugin: xfce -xfce4-session: xfce -xfce4-settings: xfce -xfce4-smartbookmark-plugin: xfce -xfce4-statusnotifier-plugin: xfce 
-xfce4-systemload-plugin: xfce -xfce4-taskmanager: xfce -xfce4-terminal: xfce -xfce4-time-out-plugin: xfce -xfce4-timer-plugin: xfce -xfce4-vala: xfce -xfce4-verve-plugin: xfce -xfce4-volumed-pulse: xfce -xfce4-wavelan-plugin: xfce -xfce4-weather-plugin: xfce -xfce4-whiskermenu-plugin: xfce -xfce4-xkb-plugin: xfce -xfconf: xfce -xfdashboard: xfce -xfdesktop: xfce -xfsdump: Storage -xfsprogs: Storage -xfwm4: xfce -xhtml1-dtds: Others -xhtml2fo-style-xsl: Private -xinetd: Networking -xkeyboard-config: Desktop -xml-commons-apis: Application -xml-commons-resolver: Base-service -xml-maven-plugin: sig-Java -xml-security: sig-Java -xmlbeans: sig-Java -xmlbeans-maven-plugin: sig-Java -xmlenc: sig-Java -xmlgraphics-commons: dev-utils -xmlpull: sig-Java -xmlrpc: sig-Java -xmlrpc-c: Networking -xmlsec1: Base-service -xmlstarlet: Base-service -xmlstreambuffer: Application -xmlto: Base-service -xmltoman: Application -xmltool: Application -xmlunit: sig-Java -xmms: Application -xmpcore: sig-Java -xmvn: sig-Java -xmvn-connector-gradle: Private -xmvn-tools: Private -xnio: sig-Java -xom: dev-utils -xorg-sgml-doctools: Private -xorg-x11-apps: Desktop -xorg-x11-docs: Desktop -xorg-x11-drivers: Desktop -xorg-x11-drv-armsoc: Desktop -xorg-x11-drv-ati: Desktop -xorg-x11-drv-dummy: Desktop -xorg-x11-drv-evdev: Desktop -xorg-x11-drv-fbdev: Desktop -xorg-x11-drv-intel: Desktop -xorg-x11-drv-libinput: Desktop -xorg-x11-drv-nouveau: Desktop -xorg-x11-drv-qxl: Desktop -xorg-x11-drv-synaptics: sig-UKUI -xorg-x11-drv-v4l: Desktop -xorg-x11-drv-vesa: Desktop -xorg-x11-drv-vmware: Desktop -xorg-x11-drv-wacom: Desktop -xorg-x11-font-utils: Desktop -xorg-x11-fonts: Desktop -xorg-x11-proto-devel: Programming-language -xorg-x11-server: Desktop -xorg-x11-server-utils: Desktop -xorg-x11-util-macros: Programming-language -xorg-x11-utils: Desktop -xorg-x11-xauth: Desktop -xorg-x11-xbitmaps: Desktop -xorg-x11-xinit: Desktop -xorg-x11-xkb-utils: Desktop -xorg-x11-xtrans-devel: Runtime -xpp3: sig-Java -xrestop: Desktop -xsane: Application -xsd: sig-KIRAN-DESKTOP -xsom: sig-Java -xstream: sig-Java -xterm: Desktop -xvattr: Desktop -xvidcore: Desktop -xxhash: dev-utils -xz: Base-service -xz-java: Application -yaffs2: sig-embedded -yajl: Base-service -yakuake: sig-KDE -yaml-cpp: Base-service -yaml-cpp03: Base-service -yaml_cpp_vendor: sig-ROS -yapet: System-tool -yasm: Base-service -ycsb: bigdata -yecht: sig-Java -yelp: Desktop -yelp-tools: GNOME -yelp-xsl: Desktop -ykpers: dev-utils -yocto-embedded-tools: sig-Yocto -yocto-meta-openeuler: sig-Yocto -yocto-opkg-utils: sig-Yocto -yocto-poky: sig-Yocto -yocto-pseudo: sig-Yocto -you-get: Application -youker-assistant: sig-UKUI -yp-tools: Desktop -ypbind: Desktop -ypserv: Desktop -yum-metadata-parser: sig-recycle -zabbix: Base-service -zbar: Application -zchunk: sig-CloudNative -zd1211-firmware: Networking -zenity: Desktop -zephyr: sig-Zephyr -zephyr-cn: sig-Zephyr -zeppelin: bigdata -zerofree: Others -zeromq: dev-utils -zimg: Desktop -zinc: sig-Java -zincati: sig-CloudNative -zip: Base-service -zipkin: sig-epol -zlib: Base-service -zlog: sig-KIRAN-DESKTOP -znerd-oss-parent: sig-Java -zookeeper: bigdata -zopfli: Base-service -zram-generator: sig-CloudNative -zsh: Base-service -zssh: Desktop -zstd: Base-service -zvbi: Desktop -zxing: sig-Java -zziplib: Base-service diff --git a/tools/oos/etc/openeuler_sig_repo.yaml b/tools/oos/etc/openeuler_sig_repo.yaml deleted file mode 100644 index 95788fd21f29965df8ada3f50e3b7e2ded0b9503..0000000000000000000000000000000000000000 --- 
a/tools/oos/etc/openeuler_sig_repo.yaml +++ /dev/null @@ -1,309 +0,0 @@ -ansible-lint: sig-openstack -crudini: sig-openstack -dibbler: sig-openstack -diskimage-builder: sig-openstack -gnocchi: sig-openstack -google-api-core: sig-openstack -google-auth-httplib2: sig-openstack -googleapis-common-protos: sig-openstack -hostha: sig-openstack -kafka-python: sig-openstack -liberasurecode: sig-openstack -networking-baremetal: sig-openstack -networking-generic-switch: sig-openstack -novnc: sig-openstack -openstack: sig-openstack -openstack-aodh: sig-openstack -openstack-ceilometer: sig-openstack -openstack-cinder: sig-openstack -openstack-cyborg: sig-openstack -openstack-glance: sig-openstack -openstack-heat: sig-openstack -openstack-heat-agents: sig-openstack -openstack-horizon: sig-openstack -openstack-ironic: sig-openstack -openstack-ironic-inspector: sig-openstack -openstack-ironic-python-agent: sig-openstack -openstack-ironic-python-agent-builder: sig-openstack -openstack-ironic-staging-drivers: sig-openstack -openstack-keystone: sig-openstack -openstack-kolla: sig-openstack -openstack-kolla-ansible: sig-openstack -openstack-kolla-ansible-plugin: sig-openstack -openstack-kolla-plugin: sig-openstack -openstack-macros: sig-openstack -openstack-neutron: sig-openstack -openstack-nova: sig-openstack -openstack-panko: sig-openstack -openstack-placement: sig-openstack -openstack-plugin: sig-openstack -openstack-rally: sig-openstack -openstack-rally-plugins: sig-openstack -openstack-releases: sig-openstack -openstack-swift: sig-openstack -openstack-tempest: sig-openstack -openstack-trove: sig-openstack -python-3parclient: sig-openstack -python-PyMI: sig-openstack -python-URLObject: sig-openstack -python-XStatic-Angular: sig-openstack -python-XStatic-Angular-Bootstrap: sig-openstack -python-XStatic-Angular-FileUpload: sig-openstack -python-XStatic-Angular-Gettext: sig-openstack -python-XStatic-Angular-Schema-Form: sig-openstack -python-XStatic-Angular-lrdragndrop: sig-openstack -python-XStatic-Bootstrap-Datepicker: sig-openstack -python-XStatic-Bootstrap-SCSS: sig-openstack -python-XStatic-D3: sig-openstack -python-XStatic-Font-Awesome: sig-openstack -python-XStatic-Hogan: sig-openstack -python-XStatic-JQuery-Migrate: sig-openstack -python-XStatic-JQuery.TableSorter: sig-openstack -python-XStatic-JQuery.quicksearch: sig-openstack -python-XStatic-JSEncrypt: sig-openstack -python-XStatic-Jasmine: sig-openstack -python-XStatic-Rickshaw: sig-openstack -python-XStatic-Spin: sig-openstack -python-XStatic-bootswatch: sig-openstack -python-XStatic-jQuery: sig-openstack -python-XStatic-jquery-ui: sig-openstack -python-XStatic-mdi: sig-openstack -python-XStatic-objectpath: sig-openstack -python-XStatic-roboto-fontface: sig-openstack -python-XStatic-smart-table: sig-openstack -python-XStatic-term.js: sig-openstack -python-XStatic-tv4: sig-openstack -python-aodhclient: sig-openstack -python-api-object-schema: sig-openstack -python-argparse: sig-openstack -python-arrow: sig-openstack -python-automaton: sig-openstack -python-barbicanclient: sig-openstack -python-beautifulsoup4: sig-openstack -python-binary-memcached: sig-openstack -python-blazarclient: sig-openstack -python-bunch: sig-openstack -python-capacity: sig-openstack -python-cassandra-driver: sig-openstack -python-castellan: sig-openstack -python-ceilometermiddleware: sig-openstack -python-cinder-tempest-plugin: sig-openstack -python-cinderclient: sig-openstack -python-cliff: sig-openstack -python-confetti: sig-openstack -python-confget: sig-openstack 
-python-confluent-kafka: sig-openstack -python-congressclient: sig-openstack -python-consul: sig-openstack -python-cotyledon: sig-openstack -python-cursive: sig-openstack -python-cyborgclient: sig-openstack -python-debtcollector: sig-openstack -python-designateclient: sig-openstack -python-dfs-sdk: sig-openstack -python-dib-utils: sig-openstack -python-discover: sig-openstack -python-doc8: sig-openstack -python-dracclient: sig-openstack -python-elasticsearch2: sig-openstack -python-elementpath: sig-openstack -python-etcd3: sig-openstack -python-etcd3gw: sig-openstack -python-flake8-docstrings: sig-openstack -python-flake8-logging-format: sig-openstack -python-flux: sig-openstack -python-futurist: sig-openstack -python-glance-store: sig-openstack -python-glance-tempest-plugin: sig-openstack -python-glanceclient: sig-openstack -python-gnocchiclient: sig-openstack -python-gossip: sig-openstack -python-hacking: sig-openstack -python-heat-cfntools: sig-openstack -python-heatclient: sig-openstack -python-hidapi: sig-openstack -python-ibmcclient: sig-openstack -python-infi.dtypes.iqn: sig-openstack -python-infi.dtypes.wwn: sig-openstack -python-infinisdk: sig-openstack -python-ironic-inspector-client: sig-openstack -python-ironic-lib: sig-openstack -python-ironic-prometheus-exporter: sig-openstack -python-ironic-tempest-plugin: sig-openstack -python-ironic-ui: sig-openstack -python-ironicclient: sig-openstack -python-jaeger-client: sig-openstack -python-jaraco.packaging: sig-openstack -python-karborclient: sig-openstack -python-kazoo: sig-openstack -python-keystone-tempest-plugin: sig-openstack -python-keystoneauth1: sig-openstack -python-keystoneclient: sig-openstack -python-keystonemiddleware: sig-openstack -python-krest: sig-openstack -python-ldappool: sig-openstack -python-lefthandclient: sig-openstack -python-lz4: sig-openstack -python-magnumclient: sig-openstack -python-manilaclient: sig-openstack -python-memory-profiler: sig-openstack -python-microversion-parse: sig-openstack -python-mistralclient: sig-openstack -python-mitba: sig-openstack -python-monascaclient: sig-openstack -python-moto: sig-openstack -python-mox3: sig-openstack -python-multiprocessing: sig-openstack -python-murano-pkg-check: sig-openstack -python-muranoclient: sig-openstack -python-mypy-extensions: sig-openstack -python-netmiko: sig-openstack -python-neutron-lib: sig-openstack -python-neutron-tempest-plugin: sig-openstack -python-neutronclient: sig-openstack -python-nocasedict: sig-openstack -python-nocaselist: sig-openstack -python-nodeenv: sig-openstack -python-nosehtmloutput: sig-openstack -python-nosexcover: sig-openstack -python-novaclient: sig-openstack -python-ntc-templates: sig-openstack -python-octaviaclient: sig-openstack -python-openstack-doc-tools: sig-openstack -python-openstack.nose_plugin: sig-openstack -python-openstackclient: sig-openstack -python-openstackdocstheme: sig-openstack -python-openstacksdk: sig-openstack -python-opentracing: sig-openstack -python-os-api-ref: sig-openstack -python-os-apply-config: sig-openstack -python-os-brick: sig-openstack -python-os-client-config: sig-openstack -python-os-collect-config: sig-openstack -python-os-faults: sig-openstack -python-os-ken: sig-openstack -python-os-refresh-config: sig-openstack -python-os-resource-classes: sig-openstack -python-os-service-types: sig-openstack -python-os-testr: sig-openstack -python-os-traits: sig-openstack -python-os-vif: sig-openstack -python-os-win: sig-openstack -python-os-xenapi: sig-openstack -python-osc-lib: sig-openstack 
-python-osc-placement: sig-openstack -python-oslo.cache: sig-openstack -python-oslo.concurrency: sig-openstack -python-oslo.config: sig-openstack -python-oslo.context: sig-openstack -python-oslo.db: sig-openstack -python-oslo.i18n: sig-openstack -python-oslo.log: sig-openstack -python-oslo.messaging: sig-openstack -python-oslo.middleware: sig-openstack -python-oslo.policy: sig-openstack -python-oslo.privsep: sig-openstack -python-oslo.reports: sig-openstack -python-oslo.rootwrap: sig-openstack -python-oslo.serialization: sig-openstack -python-oslo.service: sig-openstack -python-oslo.sphinx: sig-openstack -python-oslo.upgradecheck: sig-openstack -python-oslo.utils: sig-openstack -python-oslo.versionedobjects: sig-openstack -python-oslo.vmware: sig-openstack -python-oslotest: sig-openstack -python-osprofiler: sig-openstack -python-ovsdbapp: sig-openstack -python-pact: sig-openstack -python-pathlib: sig-openstack -python-pep257: sig-openstack -python-pep8: sig-openstack -python-pifpaf: sig-openstack -python-pika: sig-openstack -python-pip-api: sig-openstack -python-pipreqs: sig-openstack -python-pre-commit: sig-openstack -python-proboscis: sig-openstack -python-proliantutils: sig-openstack -python-purestorage: sig-openstack -python-pycadf: sig-openstack -python-pydotplus: sig-openstack -python-pyeclib: sig-openstack -python-pyghmi: sig-openstack -python-pylama: sig-openstack -python-pypowervm: sig-openstack -python-pytest-django: sig-openstack -python-pytest-html: sig-openstack -python-pyxcli: sig-openstack -python-rbd-iscsi-client: sig-openstack -python-reno: sig-openstack -python-requests-aws: sig-openstack -python-requests-mock: sig-openstack -python-requestsexceptions: sig-openstack -python-requirementslib: sig-openstack -python-responses: sig-openstack -python-restructuredtext-lint: sig-openstack -python-rsd-lib: sig-openstack -python-rsdclient: sig-openstack -python-rst.linker: sig-openstack -python-rtslib-fb: sig-openstack -python-ryu: sig-openstack -python-saharaclient: sig-openstack -python-scciclient: sig-openstack -python-scripttest: sig-openstack -python-searchlightclient: sig-openstack -python-selenium: sig-openstack -python-senlinclient: sig-openstack -python-sentinels: sig-openstack -python-setuptools-rust: sig-openstack -python-soupsieve: sig-openstack -python-sphinx-testing: sig-openstack -python-sphinxcontrib-autoprogram: sig-openstack -python-sphinxcontrib-programoutput: sig-openstack -python-sqlalchemy-migrate: sig-openstack -python-stestr: sig-openstack -python-stevedore: sig-openstack -python-storage-interfaces: sig-openstack -python-storops: sig-openstack -python-storpool: sig-openstack -python-storpool.spopenstack: sig-openstack -python-subunit2sql: sig-openstack -python-suds-jurko: sig-openstack -python-sushy: sig-openstack -python-sushy-oem-idrac: sig-openstack -python-swiftclient: sig-openstack -python-sysv-ipc: sig-openstack -python-tablib: sig-openstack -python-taskflow: sig-openstack -python-tempest-lib: sig-openstack -python-textfsm: sig-openstack -python-threadloop: sig-openstack -python-tooz: sig-openstack -python-transaction: sig-openstack -python-trove-dashboard: sig-openstack -python-trove-tempest-plugin: sig-openstack -python-troveclient: sig-openstack -python-typed-ast: sig-openstack -python-typing-extensions: sig-openstack -python-uhashring: sig-openstack -python-ujson: sig-openstack -python-vintage: sig-openstack -python-vitrageclient: sig-openstack -python-waiting: sig-openstack -python-watcherclient: sig-openstack -python-weakrefmethod: sig-openstack 
-python-websockify: sig-openstack -python-whereto: sig-openstack -python-wmi: sig-openstack -python-wsme: sig-openstack -python-xattr: sig-openstack -python-xclarityclient: sig-openstack -python-xmlschema: sig-openstack -python-yamllint: sig-openstack -python-yamlloader: sig-openstack -python-zVMCloudConnector: sig-openstack -python-zake: sig-openstack -python-zaqarclient: sig-openstack -python-zunclient: sig-openstack -spice-html5: sig-openstack diff --git a/tools/oos/etc/openstack_release.yaml b/tools/oos/etc/openstack_release.yaml deleted file mode 100644 index ec89c3b5ae13e1ec13d84e5b7e25f71d31a39957..0000000000000000000000000000000000000000 --- a/tools/oos/etc/openstack_release.yaml +++ /dev/null @@ -1,2578 +0,0 @@ -queens: - aodh: 6.0.1 - aodhclient: 1.0.0 - automaton: 1.14.0 - barbican: 6.0.1 - bifrost: 5.0.4 - blazar: 1.0.0 - blazar-dashboard: 1.0.1 - blazar-nova: 1.0.1 - castellan: 0.17.0 - ceilometer: 10.0.1 - ceilometer-powervm: 6.0.1 - ceilometermiddleware: 1.2.0 - cinder: 12.0.10 - cliff: 2.11.1 - cloudkitty: 7.0.0 - cloudkitty-dashboard: 7.0.0 - congress-dashboard: 2.0.1 - debtcollector: 1.19.0 - designate: 6.0.1 - designate-dashboard: 6.0.1 - freezer: 6.0.0 - freezer-api: 6.0.0 - freezer-dr: 6.0.0 - freezer-web-ui: 6.0.0 - futurist: 1.6.0 - glance: 16.0.1 - glance_store: 0.23.0 - heat: 10.0.3 - heat-agents: 1.5.4 - heat-dashboard: 1.0.3 - heat-translator: 1.0.0 - horizon: 13.0.3 - instack: 8.1.0 - instack-undercloud: 8.4.9 - ironic: 10.1.10 - ironic-inspector: 7.2.4 - ironic-lib: 2.12.4 - ironic-python-agent: 3.2.4 - ironic-ui: 3.1.3 - karbor: 1.0.0 - karbor-dashboard: 1.0.0 - keystone: 13.0.4 - keystoneauth1: 3.4.1 - keystonemiddleware: 4.22.0 - kolla: 6.2.4 - kolla-ansible: 6.2.3 - kuryr-kubernetes: 0.4.7 - kuryr-lib: 0.7.0 - kuryr-libnetwork: 1.0.0 - ldappool: 2.2.1 - magnum: 6.3.0 - magnum-ui: 4.0.1 - manila: 6.3.2 - manila-ui: 2.13.1 - mistral: 6.0.6 - mistral-dashboard: 6.0.4 - mistral-extra: 6.0.4 - mistral-lib: 0.4.0 - monasca-agent: 2.6.3 - monasca-api: 2.5.1 - monasca-common: 2.8.0 - monasca-kibana-plugin: 1.2.0 - monasca-log-api: 2.6.1 - monasca-notification: 1.13.1 - monasca-persister: 1.10.2 - monasca-statsd: 1.9.0 - monasca-ui: 1.12.2 - mox3: 0.24.0 - murano: 5.0.0 - murano-agent: 3.4.0 - murano-dashboard: 5.0.0 - networking-bagpipe: 8.0.1 - networking-baremetal: 1.0.1 - networking-bgpvpn: 8.0.1 - networking-generic-switch: 1.0.1 - networking-hyperv: 6.0.0 - networking-midonet: 6.0.0 - networking-odl: 12.0.1 - networking-ovn: 4.0.4 - networking-sfc: 6.0.0 - neutron: 12.1.1 - neutron-dynamic-routing: 12.0.1 - neutron-fwaas: 12.0.2 - neutron-fwaas-dashboard: 1.3.1 - neutron-lbaas: 12.0.0 - neutron-lbaas-dashboard: 4.0.0 - neutron-lib: 1.13.0 - neutron-vpnaas: 12.0.1 - neutron-vpnaas-dashboard: 1.2.3 - nova: 17.0.13 - octavia: 2.1.2 - octavia-dashboard: 1.0.2 - openstack-congress: 7.0.2 - openstack-release-test: 0.12.0 - openstack_requirements: 1.2.0 - openstacksdk: 0.11.4 - os-apply-config: 8.3.2 - os-brick: 2.3.9 - os-client-config: 1.29.0 - os-collect-config: 8.3.1 - os-net-config: 8.5.1 - os-refresh-config: 8.3.1 - os-traits: 0.5.0 - os-win: 3.0.1 - os_vif: 1.9.2 - osc-lib: 1.9.0 - osc-placement: 1.0.0 - oslo.cache: 1.28.1 - oslo.concurrency: 3.25.1 - oslo.config: 5.2.1 - oslo.context: 2.20.0 - oslo.db: 4.33.4 - oslo.i18n: 3.19.0 - oslo.log: 3.36.0 - oslo.messaging: 5.35.6 - oslo.middleware: 3.34.0 - oslo.policy: 1.33.2 - oslo.privsep: 1.27.0 - oslo.reports: 1.26.0 - oslo.rootwrap: 5.13.0 - oslo.serialization: 2.24.0 - oslo.service: 1.29.1 - oslo.utils: 3.35.1 
- oslo.versionedobjects: 1.31.3 - oslo.vmware: 2.26.0 - oslosphinx: 4.18.0 - oslotest: 3.2.0 - osprofiler: 1.15.2 - ovsdbapp: 0.10.5 - panko: 4.0.2 - pankoclient: 0.4.1 - patrole: 0.3.0 - paunch: 2.5.3 - puppet-aodh: 12.4.0 - puppet-barbican: 12.4.0 - puppet-ceilometer: 12.5.0 - puppet-cinder: 12.4.1 - puppet-cloudkitty: 1.0.0 - puppet-congress: 12.4.0 - puppet-designate: 12.4.0 - puppet-ec2api: 12.4.0 - puppet-freezer: 1.0.0 - puppet-glance: 12.5.0 - puppet-glare: 1.0.0 - puppet-gnocchi: 12.4.0 - puppet-heat: 12.4.0 - puppet-horizon: 12.4.0 - puppet-ironic: 12.4.0 - puppet-keystone: 12.4.0 - puppet-magnum: 12.2.0 - puppet-manila: 12.5.1 - puppet-mistral: 12.4.0 - puppet-monasca: 1.1.0 - puppet-murano: 12.4.0 - puppet-neutron: 12.4.1 - puppet-nova: 12.5.0 - puppet-octavia: 12.4.0 - puppet-openstack_extras: 12.4.0 - puppet-openstacklib: 12.4.0 - puppet-oslo: 12.4.0 - puppet-ovn: 12.4.0 - puppet-panko: 12.4.0 - puppet-qdr: 1.0.0 - puppet-rally: 0.1.0 - puppet-sahara: 12.4.0 - puppet-swift: 12.4.0 - puppet-tacker: 12.4.0 - puppet-tempest: 12.5.0 - puppet-tripleo: 8.5.1 - puppet-trove: 12.4.0 - puppet-vitrage: 2.4.0 - puppet-vswitch: 8.4.0 - puppet-watcher: 12.4.0 - puppet-zaqar: 12.4.0 - pycadf: 2.7.0 - python-barbicanclient: 4.6.1 - python-blazarclient: 1.0.1 - python-brick-cinderclient-ext: 0.8.0 - python-cinderclient: 3.5.0 - python-cloudkittyclient: 1.2.0 - python-congressclient: 1.9.0 - python-designateclient: 2.9.0 - python-freezerclient: 1.6.0 - python-glanceclient: 2.10.1 - python-heatclient: 1.14.1 - python-ironic-inspector-client: 3.1.2 - python-ironicclient: 2.2.2 - python-karborclient: 1.0.0 - python-keystoneclient: 3.15.1 - python-magnumclient: 2.9.1 - python-manilaclient: 1.21.2 - python-mistralclient: 3.3.0 - python-monascaclient: 1.10.1 - python-muranoclient: 1.0.1 - python-neutronclient: 6.7.0 - python-novaclient: 10.1.1 - python-octaviaclient: 1.4.1 - python-openstackclient: 3.14.3 - python-saharaclient: 1.5.0 - python-searchlightclient: 1.3.0 - python-senlinclient: 1.7.0 - python-solumclient: 2.6.1 - python-swiftclient: 3.5.1 - python-tackerclient: 0.11.0 - python-tripleoclient: 9.3.1 - python-troveclient: 2.14.0 - python-vitrageclient: 2.1.0 - python-watcher: 1.8.1 - python-watcherclient: 1.6.0 - python-zaqarclient: 1.9.0 - python-zunclient: 1.1.0 - requestsexceptions: 1.4.0 - sahara: 8.0.3 - sahara-dashboard: 8.0.2 - sahara-extra: 8.0.1 - sahara-image-elements: 8.0.2 - searchlight: 4.0.0 - searchlight-ui: 4.0.0 - senlin: 5.0.1 - senlin-dashboard: 0.8.0 - shade: 1.27.2 - solum: 5.5.1 - solum-dashboard: 2.3.0 - stevedore: 1.28.0 - storlets: 1.0.0 - sushy: 1.3.4 - swift: 2.17.1 - tacker: 0.9.0 - tacker-horizon: 0.11.0 - taskflow: 3.1.0 - tempest: 18.0.0 - tooz: 1.60.2 - tosca-parser: 0.9.0 - tricircle: 5.0.0 - tricircleclient: 0.3.0 - tripleo-common: 8.7.1 - tripleo-heat-templates: 9.0.0.0b1 - tripleo-image-elements: 8.0.3 - tripleo-ipsec: 8.1.0 - tripleo-puppet-elements: 8.1.1 - tripleo-ui: 8.3.2 - tripleo-validations: 8.5.0 - trove: 9.0.0 - trove-dashboard: 10.0.0 - vitrage: 2.3.0 - vitrage-dashboard: 1.4.2 - watcher-dashboard: 1.8.0 - zaqar: 6.0.1 - zaqar-ui: 4.0.1 - zun: 1.0.1 - zun-ui: 1.0.0 -rocky: - aodh: 7.0.0 - aodhclient: 1.1.1 - automaton: 1.15.0 - barbican: 7.0.0 - barbican_tempest_plugin: 0.1.0 - bifrost: 5.1.5 - blazar: 2.0.0 - blazar-dashboard: 1.2.0 - blazar-nova: 1.1.1 - blazar_tempest_plugin: 0.1.0 - castellan: 0.19.0 - ceilometer: 11.1.0 - ceilometer-powervm: 7.0.0 - ceilometermiddleware: 1.3.0 - cinder: 13.0.9 - cinder_tempest_plugin: 0.1.0 - cliff: 
2.13.0 - cloudkitty: 8.0.1 - cloudkitty-dashboard: 8.0.1 - cloudkitty_tempest_plugin: 1.0.0 - congress-dashboard: 3.0.1 - congress-tempest-plugin: 0.1.0 - debtcollector: 1.20.0 - designate: 7.0.1 - designate-dashboard: 7.0.0 - designate-tempest-plugin: 0.5.0 - ec2-api: 7.1.0 - ec2api-tempest-plugin: 0.1.0 - futurist: 1.7.0 - glance: 17.0.1 - glance_store: 0.26.2 - heat-agents: 1.7.1 - heat-dashboard: 1.4.2 - heat-tempest-plugin: 0.2.0 - heat-translator: 1.1.1 - horizon: 14.1.0 - instack: 9.1.0 - instack-undercloud: 9.5.1 - ironic: 11.1.4 - ironic-inspector: 8.0.4 - ironic-lib: 2.14.3 - ironic-python-agent: 3.3.3 - ironic-tempest-plugin: 1.2.1 - ironic-ui: 3.3.1 - karbor: 1.1.0 - karbor-dashboard: 1.1.0 - keystone: 14.2.0 - keystone_tempest_plugin: 0.1.0 - keystoneauth1: 3.10.1 - keystonemiddleware: 5.2.2 - kolla: 7.1.1 - kolla-ansible: 7.2.1 - kuryr-kubernetes: 0.5.4 - kuryr-lib: 0.8.0 - kuryr-libnetwork: 2.0.1 - kuryr-tempest-plugin: 0.3.0 - magnum: 7.2.0 - magnum-ui: 5.0.1 - magnum_tempest_plugin: 0.1.0 - manila: 7.4.1 - manila-tempest-plugin: 0.1.0 - manila-ui: 2.16.2 - masakari: 6.0.0 - masakari-dashboard: 0.2.0 - masakari-monitors: 6.0.0 - mistral: 7.1.0 - mistral-dashboard: 7.1.0 - mistral-extra: 7.1.0 - mistral-lib: 1.0.1 - mistral_tempest_tests: 0.1.0 - monasca-agent: 2.8.1 - monasca-api: 2.7.1 - monasca-common: 2.11.0 - monasca-log-api: 2.7.1 - monasca-notification: 1.14.1 - monasca-persister: 1.12.1 - monasca-statsd: 1.10.1 - monasca-tempest-plugin: 0.2.0 - monasca-ui: 1.14.1 - mox3: 0.26.0 - murano: 6.0.0 - murano-agent: 3.5.1 - murano-dashboard: 6.0.0 - murano-tempest-plugin: 0.1.0 - networking-bagpipe: 9.0.2 - networking-baremetal: 1.2.1 - networking-bgpvpn: 9.0.2 - networking-generic-switch: 1.2.0 - networking-hyperv: 7.0.1 - networking-midonet: 7.0.0 - networking-odl: 13.0.1 - networking-ovn: 5.1.0 - networking-powervm: 7.0.0 - networking-sfc: 7.0.0 - neutron: 13.0.7 - neutron-dynamic-routing: 13.1.0 - neutron-fwaas: 13.0.3 - neutron-fwaas-dashboard: 1.5.0 - neutron-lbaas: 13.0.1 - neutron-lbaas-dashboard: 5.0.1 - neutron-lib: 1.18.0 - neutron-vpnaas: 13.0.2 - neutron-vpnaas-dashboard: 1.4.0 - neutron_tempest_plugin: 0.2.0 - nova: 18.3.0 - nova_powervm: 7.0.0 - octavia: 3.2.2 - octavia-dashboard: 2.0.2 - octavia-tempest-plugin: 0.2.0 - openstack-congress: 8.0.1 - openstack-cyborg: 1.0.0 - openstack-heat: 11.0.3 - openstack-release-test: 1.1.0 - openstacksdk: 0.17.3 - os-acc: 0.1.0 - os-apply-config: 9.1.2 - os-brick: 2.5.10 - os-client-config: 1.31.2 - os-collect-config: 9.2.1 - os-net-config: 9.4.1 - os-refresh-config: 9.1.1 - os-traits: 0.9.0 - os-win: 4.0.1 - os_vif: 1.11.2 - osc-lib: 1.11.1 - osc-placement: 1.3.0 - oslo.cache: 1.30.4 - oslo.concurrency: 3.27.0 - oslo.config: 6.4.2 - oslo.context: 2.21.0 - oslo.db: 4.40.2 - oslo.i18n: 3.21.0 - oslo.log: 3.39.2 - oslo.messaging: 8.1.4 - oslo.middleware: 3.36.0 - oslo.policy: 1.38.1 - oslo.privsep: 1.29.2 - oslo.reports: 1.28.0 - oslo.rootwrap: 5.14.2 - oslo.serialization: 2.27.0 - oslo.service: 1.31.8 - oslo.utils: 3.36.5 - oslo.versionedobjects: 1.33.3 - oslo.vmware: 2.31.0 - oslotest: 3.6.0 - osprofiler: 2.3.1 - ovsdbapp: 0.12.5 - panko: 5.0.0 - pankoclient: 0.5.0 - patrole: 0.4.0 - paunch: 3.2.2 - puppet-aodh: 13.3.1 - puppet-barbican: 13.3.1 - puppet-ceilometer: 13.3.1 - puppet-cinder: 13.3.2 - puppet-cloudkitty: 2.3.1 - puppet-congress: 13.3.1 - puppet-designate: 13.3.1 - puppet-ec2api: 13.3.1 - puppet-freezer: 2.3.1 - puppet-glance: 13.3.1 - puppet-glare: 2.3.1 - puppet-gnocchi: 13.3.1 - puppet-heat: 13.3.1 - 
puppet-horizon: 13.3.1 - puppet-ironic: 13.3.1 - puppet-keystone: 13.3.1 - puppet-magnum: 13.3.1 - puppet-manila: 13.3.2 - puppet-mistral: 13.3.1 - puppet-monasca: 2.3.1 - puppet-murano: 13.3.1 - puppet-neutron: 13.3.1 - puppet-nova: 13.3.1 - puppet-octavia: 13.3.1 - puppet-openstack_extras: 13.3.1 - puppet-openstacklib: 13.3.1 - puppet-oslo: 13.3.1 - puppet-ovn: 13.3.1 - puppet-panko: 13.3.1 - puppet-qdr: 2.3.1 - puppet-rally: 1.3.1 - puppet-sahara: 13.3.1 - puppet-swift: 13.3.1 - puppet-tacker: 13.3.1 - puppet-tempest: 13.3.1 - puppet-tripleo: 9.5.1 - puppet-trove: 13.3.1 - puppet-vitrage: 3.3.1 - puppet-vswitch: 9.3.1 - puppet-watcher: 13.3.1 - puppet-zaqar: 13.3.1 - pycadf: 2.8.0 - python-barbicanclient: 4.7.2 - python-blazarclient: 2.0.1 - python-brick-cinderclient-ext: 0.9.0 - python-cinderclient: 4.0.3 - python-cloudkittyclient: 2.0.1 - python-congressclient: 1.11.0 - python-cyborgclient: 0.2.0 - python-designateclient: 2.10.0 - python-glanceclient: 2.13.2 - python-heatclient: 1.16.3 - python-ironic-inspector-client: 3.3.0 - python-ironicclient: 2.5.4 - python-karborclient: 1.1.0 - python-keystoneclient: 3.17.0 - python-magnumclient: 2.10.0 - python-manilaclient: 1.24.2 - python-masakariclient: 5.2.0 - python-mistralclient: 3.7.0 - python-monascaclient: 1.12.1 - python-muranoclient: 1.1.1 - python-neutronclient: 6.9.1 - python-novaclient: 11.0.1 - python-octaviaclient: 1.6.2 - python-openstackclient: 3.16.3 - python-qinlingclient: 2.0.0 - python-saharaclient: 2.0.0 - python-senlinclient: 1.8.0 - python-solumclient: 2.7.1 - python-swiftclient: 3.6.1 - python-tackerclient: 0.14.0 - python-tripleoclient: 10.7.1 - python-troveclient: 2.16.0 - python-vitrageclient: 2.3.0 - python-watcher: 1.12.1 - python-watcherclient: 2.1.1 - python-zaqarclient: 1.10.0 - python-zunclient: 2.1.0 - qinling: 1.0.0 - sahara: 9.0.2 - sahara-dashboard: 9.0.2 - sahara-extra: 9.2.0 - sahara-image-elements: 9.0.2 - sahara-tests: 0.7.0 - senlin: 6.0.0 - senlin-dashboard: 0.9.0 - senlin-tempest-plugin: 0.1.0 - shade: 1.29.0 - solum: 5.7.0 - solum-dashboard: 2.5.0 - solum-tempest-plugin: 0.1.0 - stevedore: 1.29.0 - storlets: 2.0.0 - sushy: 1.6.1 - swift: 2.19.2 - tacker: 0.10.0 - tacker-horizon: 0.12.0 - taskflow: 3.2.0 - telemetry_tempest_plugin: 0.2.0 - tempest: 19.0.0 - tooz: 1.62.1 - tosca-parser: 1.0.1 - tricircle: 5.1.0 - tricircleclient: 0.4.0 - tripleo-common: 9.6.1 - tripleo-heat-templates: 9.4.1 - tripleo-image-elements: 9.1.1 - tripleo-ipsec: 9.0.0 - tripleo-puppet-elements: 9.1.1 - tripleo-ui: 9.3.0 - tripleo-validations: 9.4.0 - trove: 10.0.0 - trove-dashboard: 11.0.0 - trove_tempest_plugin: 0.1.0 - vitrage: 3.3.0 - vitrage-dashboard: 1.6.2 - vitrage-tempest-plugin: 1.1.0 - watcher-dashboard: 1.11.0 - watcher-tempest-plugin: 1.0.0 - zaqar: 7.0.0 - zaqar-ui: 5.0.0 - zaqar_tempest_plugin: 0.1.0 - zun: 2.1.0 - zun-tempest-plugin: 2.0.0 - zun-ui: 2.0.0 -stein: - aodh: 8.0.1 - aodhclient: 1.2.0 - automaton: 1.16.0 - barbican: 8.0.1 - barbican_tempest_plugin: 0.2.0 - barbican_tempest_plugin-stein: last - bifrost: 6.0.5 - blazar: 3.0.1 - blazar-dashboard: 1.3.1 - blazar-nova: 1.2.0 - blazar_tempest_plugin: 0.2.0 - blazar_tempest_plugin-stein: last - castellan: 1.2.3 - ceilometer: 12.1.1 - ceilometer-powervm: 8.0.0 - ceilometermiddleware: 1.4.0 - cinder: 14.3.1 - cinder-tempest-plugin-stein: last - cinder_tempest_plugin: 0.2.0 - cliff: 2.14.1 - cloudkitty: 9.0.1 - cloudkitty-dashboard: 8.1.0 - cloudkitty_tempest_plugin: 1.1.0 - cloudkitty_tempest_plugin-stein: last - congress-dashboard: 4.0.0 - 
congress-tempest-plugin: 0.2.0 - debtcollector: 1.21.0 - designate: 8.0.1 - designate-dashboard: 8.0.0 - designate-tempest-plugin: 0.6.0 - designate-tempest-plugin-stein: last - ec2-api: 8.0.0 - ec2api-tempest-plugin: 0.2.0 - ec2api-tempest-plugin-stein: last - freezer: 7.1.0 - freezer-api: 7.1.0 - freezer-dr: 7.1.0 - freezer-web-ui: 7.1.0 - freezer_tempest_plugin: 0.1.0 - freezer_tempest_plugin-stein: last - futurist: 1.8.1 - glance: 18.0.1 - glance_store: 0.28.1 - heat-agents: 1.8.0 - heat-dashboard: 1.5.1 - heat-tempest-plugin: 0.3.0 - heat-tempest-plugin-stein: last - heat-translator: 1.3.1 - horizon: 15.3.2 - ironic: 12.1.6 - ironic-inspector: 8.2.5 - ironic-lib: 2.16.4 - ironic-python-agent: 3.6.5 - ironic-tempest-plugin: 1.3.0 - ironic-tempest-plugin-stein: last - ironic-ui: 3.4.1 - karbor: 1.3.0 - karbor-dashboard: 1.2.1 - keystone: 15.0.1 - keystone_tempest_plugin: 0.2.0 - keystone_tempest_plugin-stein: last - keystoneauth1: 3.13.2 - keystonemiddleware: 6.0.1 - kolla: 8.0.5 - kolla-ansible: 8.3.0 - kuryr-kubernetes: 1.0.1 - kuryr-lib: 0.9.0 - kuryr-libnetwork: 3.0.1 - kuryr-tempest-plugin: 0.4.1 - kuryr-tempest-plugin-stein: last - magnum: 8.2.1 - magnum-ui: 5.1.0 - magnum_tempest_plugin: 0.2.0 - magnum_tempest_plugin-stein: last - manila: 8.1.4 - manila-tempest-plugin: 0.3.0 - manila-tempest-plugin-stein: last - manila-ui: 2.18.1 - masakari: 7.1.0 - masakari-dashboard: 0.3.1 - masakari-monitors: 7.0.1 - metalsmith: 0.11.1 - mistral: 8.1.0 - mistral-dashboard: 8.1.0 - mistral-extra: 8.1.0 - mistral-lib: 1.1.1 - mistral_tempest_tests: 0.2.0 - mistral_tempest_tests-stein: last - monasca-agent: 2.10.1 - monasca-api: 3.0.1 - monasca-common: 2.13.0 - monasca-events-api: 0.3.0 - monasca-log-api: 2.9.0 - monasca-notification: 1.17.1 - monasca-persister: 1.14.0 - monasca-statsd: 1.11.0 - monasca-tempest-plugin: 1.0.0 - monasca-tempest-plugin-stein: last - monasca-ui: 1.15.1 - mox3: 0.27.0 - murano: 7.1.0 - murano-agent: 3.6.0 - murano-dashboard: 7.0.0 - murano-tempest-plugin: 1.0.0 - murano-tempest-plugin-stein: last - networking-bagpipe: 10.0.1 - networking-baremetal: 1.3.0 - networking-bgpvpn: 10.0.0 - networking-generic-switch: 1.3.1 - networking-hyperv: 7.2.1 - networking-midonet: 8.0.0 - networking-odl: 14.0.1 - networking-ovn: 6.1.1 - networking-powervm: 8.0.0 - networking-sfc: 8.0.0 - neutron: 14.4.2 - neutron-dynamic-routing: 14.0.0 - neutron-fwaas: 14.0.1 - neutron-fwaas-dashboard: 2.0.2 - neutron-lbaas: 14.0.1 - neutron-lbaas-dashboard: 6.0.1 - neutron-lib: 1.25.1 - neutron-tempest-plugin: 0.3.0 - neutron-tempest-plugin-stein: last - neutron-vpnaas: 14.0.1 - neutron-vpnaas-dashboard: 1.5.2 - nova: 19.3.2 - nova_powervm: 8.0.0 - octavia: 4.1.4 - octavia-dashboard: 3.1.0 - octavia-lib: 1.1.1 - octavia-tempest-plugin: 1.0.0 - octavia-tempest-plugin-stein: last - openstack-congress: 9.0.0 - openstack-cyborg: 2.0.0 - openstack-heat: 12.2.0 - openstack-placement: 1.1.0 - openstack-release-test: 1.4.2 - openstacksdk: 0.27.1 - os-acc: 0.2.0 - os-apply-config: 10.3.0 - os-brick: 2.8.7 - os-client-config: 1.32.0 - os-collect-config: 10.3.1 - os-ken: 0.3.1 - os-net-config: 10.4.2 - os-refresh-config: 10.2.2 - os-resource-classes: 0.3.0 - os-traits: 0.11.0 - os-win: 4.2.1 - os_vif: 1.15.2 - osc-lib: 1.12.1 - osc-placement: 1.5.0 - oslo.cache: 1.33.4 - oslo.concurrency: 3.29.1 - oslo.config: 6.8.2 - oslo.context: 2.22.2 - oslo.db: 4.45.0 - oslo.i18n: 3.23.1 - oslo.log: 3.42.5 - oslo.messaging: 9.5.2 - oslo.middleware: 3.37.1 - oslo.policy: 2.1.3 - oslo.privsep: 1.32.2 - oslo.reports: 1.29.2 
- oslo.rootwrap: 5.15.3 - oslo.serialization: 2.28.2 - oslo.service: 1.38.1 - oslo.upgradecheck: 0.2.1 - oslo.utils: 3.40.7 - oslo.versionedobjects: 1.35.1 - oslo.vmware: 2.32.2 - oslotest: 3.7.2 - osprofiler: 2.6.1 - oswin-tempest-plugin: 0.2.0 - oswin-tempest-plugin-stein: last - ovsdbapp: 0.15.1 - panko: 6.0.0 - pankoclient: 0.6.0 - patrole: 0.5.0 - paunch: 4.5.2 - puppet-aodh: 14.4.0 - puppet-barbican: 14.4.0 - puppet-ceilometer: 14.4.0 - puppet-cinder: 14.4.0 - puppet-cloudkitty: 3.4.0 - puppet-congress: 14.4.0 - puppet-designate: 14.4.0 - puppet-ec2api: 14.4.0 - puppet-freezer: 3.4.0 - puppet-glance: 14.4.0 - puppet-glare: 3.4.0 - puppet-gnocchi: 14.4.0 - puppet-heat: 14.4.0 - puppet-horizon: 14.4.0 - puppet-ironic: 14.4.0 - puppet-keystone: 14.4.0 - puppet-magnum: 14.4.0 - puppet-manila: 14.4.1 - puppet-mistral: 14.4.0 - puppet-monasca: 3.4.0 - puppet-murano: 14.4.0 - puppet-neutron: 14.4.0 - puppet-nova: 14.4.0 - puppet-octavia: 14.4.0 - puppet-openstack_extras: 14.4.0 - puppet-openstacklib: 14.4.0 - puppet-oslo: 14.4.0 - puppet-ovn: 14.4.0 - puppet-panko: 14.4.0 - puppet-placement: 1.2.0 - puppet-qdr: 3.4.0 - puppet-rally: 2.4.0 - puppet-sahara: 14.4.0 - puppet-senlin: 1.2.0 - puppet-swift: 14.4.0 - puppet-tacker: 14.4.0 - puppet-tempest: 14.4.0 - puppet-tripleo: 10.5.2 - puppet-trove: 14.4.0 - puppet-vitrage: 4.4.0 - puppet-vswitch: 10.4.0 - puppet-watcher: 14.4.0 - puppet-zaqar: 14.4.0 - pycadf: 2.9.0 - python-barbicanclient: 4.8.1 - python-blazarclient: 2.1.0 - python-brick-cinderclient-ext: 0.10.0 - python-cinderclient: 4.2.2 - python-cloudkittyclient: 2.1.1 - python-congressclient: 1.12.0 - python-cyborgclient: 0.3.0 - python-designateclient: 2.11.0 - python-freezerclient: 2.1.0 - python-glanceclient: 2.16.0 - python-heatclient: 1.17.1 - python-ironic-inspector-client: 3.5.0 - python-ironicclient: 2.7.3 - python-karborclient: 1.2.0 - python-keystoneclient: 3.19.1 - python-magnumclient: 2.12.0 - python-manilaclient: 1.27.0 - python-masakariclient: 5.4.0 - python-mistralclient: 3.8.1 - python-monascaclient: 1.15.0 - python-muranoclient: 1.2.0 - python-neutronclient: 6.12.1 - python-novaclient: 13.0.2 - python-octaviaclient: 1.8.2 - python-openstackclient: 3.18.1 - python-qinlingclient: 2.1.0 - python-saharaclient: 2.2.1 - python-searchlightclient: 1.5.1 - python-senlinclient: 1.10.1 - python-solumclient: 2.8.0 - python-swiftclient: 3.7.1 - python-tackerclient: 0.15.0 - python-tripleoclient: 11.5.2 - python-troveclient: 2.17.1 - python-vitrageclient: 2.7.0 - python-watcher: 2.0.0 - python-watcherclient: 2.2.0 - python-zaqarclient: 1.11.0 - python-zunclient: 3.3.1 - qinling: 2.0.0 - qinling-dashboard: 1.0.0 - sahara: 10.0.1 - sahara-dashboard: 10.0.2 - sahara-extra: 9.3.0 - sahara-image-elements: 10.0.2 - sahara-plugin-ambari: 1.0.0 - sahara-plugin-cdh: 1.0.1 - sahara-plugin-mapr: 1.0.2 - sahara-plugin-spark: 1.0.0 - sahara-plugin-storm: 1.0.0 - sahara-plugin-vanilla: 1.0.0 - sahara-tests: 0.9.0 - sahara-tests-stein: last - searchlight: 6.0.0 - searchlight-ui: 6.0.0 - senlin: 7.0.0 - senlin-dashboard: 0.10.1 - senlin-tempest-plugin: 0.2.0 - senlin-tempest-plugin-stein: last - shade: 1.31.0 - solum: 6.0.1 - solum-dashboard: 2.6.0 - solum-tempest-plugin: 1.0.0 - solum-tempest-plugin-stein: last - stevedore: 1.30.1 - storlets: 3.0.0 - sushy: 1.8.2 - swift: 2.21.1 - tacker: 1.0.1 - tacker-horizon: 0.14.1 - taskflow: 3.5.0 - telemetry_tempest_plugin: 0.3.0 - telemetry_tempest_plugin-stein: last - tempest: 20.0.0 - tempest-horizon: 0.1.0 - tempest-stein: last - tooz: 1.64.3 - 
tosca-parser: 1.4.0 - tricircle: 6.0.0 - tricircleclient: 0.5.0 - tripleo-common: 10.8.2 - tripleo-heat-templates: 10.6.2 - tripleo-image-elements: 10.4.2 - tripleo-ipsec: 9.1.0 - tripleo-puppet-elements: 10.3.3 - tripleo-validations: 10.5.2 - trove: 11.0.0 - trove-dashboard: 12.0.0 - trove_tempest_plugin: 0.2.0 - trove_tempest_plugin-stein: last - vitrage: 4.3.2 - vitrage-dashboard: 1.9.1 - vitrage-tempest-plugin: 2.2.1 - vitrage-tempest-plugin-stein: last - watcher-dashboard: 1.12.0 - watcher-tempest-plugin: 1.1.0 - watcher-tempest-plugin-stein: last - zaqar: 8.0.1 - zaqar-ui: 6.0.0 - zaqar_tempest_plugin: 0.2.0 - zaqar_tempest_plugin-stein: last - zun: 3.0.1 - zun-tempest-plugin: 3.0.0 - zun-tempest-plugin-stein: last - zun-ui: 3.0.1 -train: - aodh: 9.0.1 - aodhclient: 1.3.0 - automaton: 1.17.0 - barbican: 9.0.1 - barbican_tempest_plugin: 0.3.0 - barbican_tempest_plugin-train: last - bifrost: 7.2.2 - blazar: 4.0.1 - blazar-dashboard: 2.0.1 - blazar-nova: 1.3.0 - blazar_tempest_plugin: 0.3.0 - blazar_tempest_plugin-train: last - castellan: 1.3.4 - ceilometer: 13.1.2 - ceilometer-powervm: 9.0.0 - ceilometermiddleware: 1.5.0 - cinder: 15.6.0 - cinder-tempest-plugin: 0.3.0 - cinder-tempest-plugin-train: last - cinderlib: 1.0.1 - cliff: 2.16.0 - cloudkitty: 11.1.0 - cloudkitty-dashboard: 9.0.0 - cloudkitty_tempest_plugin: 1.2.0 - cloudkitty_tempest_plugin-train: last - compute-hyperv: 9.0.1 - congress-dashboard: 5.0.0 - congress-tempest-plugin: 0.3.0 - cyborg-tempest-plugin: 0.1.0 - debtcollector: 1.22.0 - designate: 9.0.2 - designate-dashboard: 9.0.1 - designate-tempest-plugin: 0.7.0 - designate-tempest-plugin-train: last - ec2-api: 9.0.1 - ec2api-tempest-plugin: 0.3.0 - ec2api-tempest-plugin-train: last - freezer: 7.2.0 - freezer-api: 7.2.0 - freezer-dr: 7.2.0 - freezer-web-ui: 7.2.0 - freezer_tempest_plugin: 0.2.0 - freezer_tempest_plugin-train: last - futurist: 1.9.0 - glance: 19.0.4 - glance_store: 1.0.1 - heat-agents: 1.10.0 - heat-dashboard: 2.0.2 - heat-tempest-plugin: 0.4.0 - heat-tempest-plugin-train: last - heat-translator: 1.4.1 - horizon: 16.2.2 - ironic: 13.0.7 - ironic-inspector: 9.2.4 - ironic-lib: 2.21.3 - ironic-prometheus-exporter: 1.1.2 - ironic-python-agent: 5.0.4 - ironic-tempest-plugin: 1.5.1 - ironic-tempest-plugin-train: last - ironic-ui: 3.5.5 - karbor: 1.4.0 - karbor-dashboard: 1.3.0 - kayobe: 7.3.0 - kayobe-config: 7.3.0 - kayobe-config-dev: 7.3.0 - keystone: 16.0.2 - keystone_tempest_plugin: 0.3.0 - keystone_tempest_plugin-train: last - keystoneauth1: 3.17.4 - keystonemiddleware: 7.0.1 - kolla: 9.4.0 - kolla-ansible: 9.3.2 - kolla-cli: 9.0.0 - kuryr-kubernetes: 1.1.2 - kuryr-lib: 1.1.1 - kuryr-libnetwork: 4.0.2 - kuryr-tempest-plugin: 0.5.0 - kuryr-tempest-plugin-train: last - magnum: 9.4.1 - magnum-ui: 5.3.1 - magnum_tempest_plugin: 0.3.0 - magnum_tempest_plugin-train: last - manila: 9.1.5 - manila-tempest-plugin: 0.4.1 - manila-tempest-plugin-train: last - manila-ui: 2.19.2 - masakari: 8.1.2 - masakari-dashboard: 1.0.1 - masakari-monitors: 8.0.2 - metalsmith: 0.15.1 - mistral: 9.1.0 - mistral-dashboard: 9.0.1 - mistral-extra: 9.0.0 - mistral-lib: 1.2.1 - mistral_tempest_tests: 0.3.0 - mistral_tempest_tests-train: last - monasca-agent: 2.12.1 - monasca-api: 3.2.0 - monasca-common: 2.16.1 - monasca-events-api: 0.4.0 - monasca-log-api: 2.11.0 - monasca-notification: 1.18.0 - monasca-persister: 1.15.0 - monasca-statsd: 1.12.0 - monasca-tempest-plugin: 1.1.0 - monasca-tempest-plugin-train: last - monasca-ui: 1.17.2 - mox3: 0.28.0 - murano: 8.1.1 - murano-agent: 
4.0.0 - murano-dashboard: 8.0.0 - murano-tempest-plugin: 1.1.0 - murano-tempest-plugin-train: last - networking-bagpipe: 11.0.2 - networking-baremetal: 1.4.0 - networking-bgpvpn: 11.0.1 - networking-generic-switch: 2.1.0 - networking-hyperv: 7.3.1 - networking-midonet: 9.0.0 - networking-odl: 15.0.0 - networking-ovn: 7.4.1 - networking-powervm: 9.0.0 - networking-sfc: 9.0.1 - neutron: 15.3.4 - neutron-dynamic-routing: 15.0.1 - neutron-fwaas: 15.0.1 - neutron-fwaas-dashboard: 2.1.0 - neutron-lib: 1.29.2 - neutron-tempest-plugin: 0.6.0 - neutron-tempest-plugin-train: last - neutron-vpnaas: 15.0.0 - neutron-vpnaas-dashboard: 1.6.1 - nova: 20.6.1 - nova_powervm: 9.0.0 - octavia: 5.1.2 - octavia-dashboard: 4.0.2 - octavia-lib: 1.4.0 - octavia-tempest-plugin: 1.2.0 - octavia-tempest-plugin-train: last - openstack-congress: 10.0.0 - openstack-cyborg: 3.0.1 - openstack-heat: 13.1.0 - openstack-placement: 2.0.1 - openstack-release-test: 2.0.0 - openstacksdk: 0.36.5 - os-apply-config: 10.6.0 - os-brick: 2.10.7 - os-client-config: 1.33.0 - os-collect-config: 10.6.0 - os-ken: 0.4.1 - os-net-config: 11.5.0 - os-refresh-config: 10.4.1 - os-win: 4.3.3 - os_vif: 1.17.0 - osc-lib: 1.14.1 - osc-placement: 1.7.0 - oslo.cache: 1.37.1 - oslo.concurrency: 3.30.1 - oslo.config: 6.11.3 - oslo.context: 2.23.1 - oslo.db: 5.0.2 - oslo.i18n: 3.24.0 - oslo.limit: 0.1.1 - oslo.log: 3.44.3 - oslo.messaging: 10.2.4 - oslo.middleware: 3.38.1 - oslo.policy: 2.3.4 - oslo.privsep: 1.33.5 - oslo.reports: 1.30.0 - oslo.rootwrap: 5.16.1 - oslo.serialization: 2.29.3 - oslo.service: 1.40.2 - oslo.upgradecheck: 0.3.2 - oslo.utils: 3.41.6 - oslo.versionedobjects: 1.36.1 - oslo.vmware: 2.34.1 - oslotest: 3.8.1 - osprofiler: 2.8.2 - oswin-tempest-plugin: 0.3.0 - oswin-tempest-plugin-train: last - ovsdbapp: 0.17.5 - panko: 7.1.0 - pankoclient: 0.7.0 - patrole: 0.7.0 - paunch: 5.5.1 - puppet-aodh: 15.5.0 - puppet-barbican: 15.5.0 - puppet-ceilometer: 15.5.0 - puppet-cinder: 15.5.0 - puppet-cloudkitty: 4.4.1 - puppet-congress: 15.4.0 - puppet-designate: 15.6.0 - puppet-ec2api: 15.4.0 - puppet-freezer: 4.4.0 - puppet-glance: 15.5.0 - puppet-glare: 4.4.0 - puppet-gnocchi: 15.5.0 - puppet-heat: 15.5.0 - puppet-horizon: 15.5.0 - puppet-ironic: 15.5.0 - puppet-keystone: 15.5.0 - puppet-magnum: 15.5.0 - puppet-manila: 15.5.0 - puppet-mistral: 15.5.0 - puppet-monasca: 4.4.0 - puppet-murano: 15.5.1 - puppet-neutron: 15.6.0 - puppet-nova: 15.8.1 - puppet-octavia: 15.5.0 - puppet-openstack_extras: 15.4.1 - puppet-openstacklib: 15.5.0 - puppet-oslo: 15.5.0 - puppet-ovn: 15.5.0 - puppet-panko: 15.5.0 - puppet-placement: 2.5.0 - puppet-qdr: 4.4.0 - puppet-rally: 3.4.0 - puppet-sahara: 15.4.1 - puppet-senlin: 2.4.0 - puppet-swift: 15.5.0 - puppet-tacker: 15.4.1 - puppet-tempest: 15.4.1 - puppet-tripleo: 11.7.0 - puppet-trove: 15.4.1 - puppet-vitrage: 5.4.0 - puppet-vswitch: 11.5.0 - puppet-watcher: 15.4.1 - puppet-zaqar: 15.4.0 - pycadf: 2.10.0 - python-barbicanclient: 4.9.0 - python-blazarclient: 2.2.1 - python-brick-cinderclient-ext: 0.12.0 - python-cinderclient: 5.0.2 - python-cloudkittyclient: 3.1.0 - python-congressclient: 1.13.0 - python-cyborgclient: 0.4.0 - python-designateclient: 3.0.0 - python-freezerclient: 2.2.0 - python-glanceclient: 2.17.1 - python-heatclient: 1.18.1 - python-ironic-inspector-client: 3.7.1 - python-ironicclient: 3.1.2 - python-karborclient: 1.3.0 - python-keystoneclient: 3.21.0 - python-magnumclient: 2.16.0 - python-manilaclient: 1.29.0 - python-masakariclient: 5.5.0 - python-mistralclient: 3.10.0 - 
python-monascaclient: 1.16.0 - python-muranoclient: 1.3.0 - python-neutronclient: 6.14.1 - python-novaclient: 15.1.1 - python-octaviaclient: 1.10.1 - python-openstackclient: 4.0.2 - python-qinlingclient: 4.0.0 - python-saharaclient: 2.3.0 - python-searchlightclient: 1.6.0 - python-senlinclient: 1.11.1 - python-solumclient: 2.9.0 - python-swiftclient: 3.8.1 - python-tackerclient: 0.16.1 - python-tripleoclient: 12.6.0 - python-troveclient: 3.0.1 - python-vitrageclient: 3.0.0 - python-watcher: 3.0.2 - python-watcherclient: 2.4.0 - python-zaqarclient: 1.12.0 - python-zunclient: 3.5.1 - qinling: 3.0.0 - qinling-dashboard: 2.0.0 - sahara: 11.0.0 - sahara-dashboard: 11.0.1 - sahara-extra: 10.0.0 - sahara-image-elements: 11.0.1 - sahara-plugin-ambari: 2.0.0 - sahara-plugin-cdh: 2.0.0 - sahara-plugin-mapr: 2.0.1 - sahara-plugin-spark: 2.0.0 - sahara-plugin-storm: 2.0.0 - sahara-plugin-vanilla: 2.0.0 - sahara-tests: 0.9.1 - sahara-tests-train: last - searchlight: 7.0.0 - searchlight-ui: 7.0.0 - senlin: 8.0.1 - senlin-dashboard: 0.11.0 - senlin-tempest-plugin: 0.3.0 - senlin-tempest-plugin-train: last - shade: 1.32.0 - solum: 7.0.0 - solum-dashboard: 3.0.1 - solum-tempest-plugin: 1.1.0 - solum-tempest-plugin-train: last - stevedore: 1.31.0 - storlets: 4.0.0 - sushy: 2.0.5 - swift: 2.23.3 - tacker: 2.0.2 - tacker-horizon: 0.15.0 - taskflow: 3.7.1 - telemetry_tempest_plugin: 0.4.0 - telemetry_tempest_plugin-train: last - tempest: 22.1.0 - tempest-horizon: 0.2.0 - tempest-train: last - tooz: 1.66.3 - tosca-parser: 1.6.1 - tricircle: 7.0.0 - tricircleclient: 0.6.0 - tripleo-common: 11.7.0 - tripleo-heat-templates: 11.6.0 - tripleo-image-elements: 10.6.3 - tripleo-ipsec: 9.2.0 - tripleo-puppet-elements: 11.3.1 - tripleo-validations: 11.6.0 - trove: 12.1.0 - trove-dashboard: 13.0.0 - trove_tempest_plugin: 0.3.0 - trove_tempest_plugin-train: last - vitrage: 5.0.2 - vitrage-dashboard: 2.0.0 - vitrage-tempest-plugin: 3.0.0 - vitrage-tempest-plugin-train: last - watcher-dashboard: 2.0.0 - watcher-tempest-plugin: 1.2.0 - watcher-tempest-plugin-train: last - zaqar: 9.0.1 - zaqar-ui: 7.0.0 - zaqar_tempest_plugin: 0.3.0 - zaqar_tempest_plugin-train: last - zun: 4.0.2 - zun-tempest-plugin: 3.1.0 - zun-tempest-plugin-train: last - zun-ui: 4.0.1 -ussuri: - adjutant-ui: 0.5.1 - aodh: 10.0.0 - aodhclient: 2.0.1 - automaton: 2.0.1 - barbican: 10.1.0 - barbican_tempest_plugin: 1.0.0 - bifrost: 8.1.2 - blazar: 5.0.1 - blazar-dashboard: 3.0.1 - blazar-nova: 2.0.0 - blazar_tempest_plugin: 0.4.0 - castellan: 3.0.4 - ceilometer: 14.1.0 - ceilometermiddleware: 2.0.0 - cinder: 16.4.2 - cinder-tempest-plugin: 1.0.0 - cinderlib: 2.1.0 - cliff: 3.1.0 - cloudkitty: 12.1.1 - cloudkitty-dashboard: 10.0.0 - cloudkitty_tempest_plugin: 2.0.0 - compute-hyperv: 10.0.1 - congress-dashboard: 6.0.0 - congress-tempest-plugin: 1.0.0 - cyborg-tempest-plugin: 1.0.0 - debtcollector: 2.0.1 - designate: 10.0.2 - designate-dashboard: 10.0.0 - designate-tempest-plugin: 0.8.0 - ec2-api: 10.0.0 - ec2api-tempest-plugin: 1.0.0 - freezer: 8.0.0 - freezer-api: 8.0.0 - freezer-dr: 8.0.0 - freezer-web-ui: 8.0.0 - freezer_tempest_plugin: 1.0.0 - futurist: 2.1.1 - glance: 20.2.0 - glance_store: 2.0.1 - heat-agents: 2.0.0 - heat-dashboard: 3.0.1 - heat-tempest-plugin: 1.0.0 - heat-translator: 2.0.0 - horizon: 18.3.5 - ironic: 15.0.2 - ironic-inspector: 10.1.3 - ironic-lib: 4.2.3 - ironic-prometheus-exporter: 2.0.1 - ironic-python-agent: 6.1.3 - ironic-tempest-plugin: 2.0.0 - ironic-ui: 4.0.1 - karbor: 1.5.0 - karbor-dashboard: 1.4.0 - kayobe: 8.2.0 - 
kayobe-config: 8.1.1 - kayobe-config-dev: 8.1.1 - keystone: 17.0.1 - keystone_tempest_plugin: 0.4.0 - keystoneauth1: 4.0.1 - keystonemiddleware: 9.0.0 - kolla: 10.4.0 - kolla-ansible: 10.4.0 - kolla-cli: 10.0.0 - kuryr-kubernetes: 2.1.0 - kuryr-lib: 2.0.1 - kuryr-libnetwork: 5.0.1 - kuryr-tempest-plugin: 0.6.0 - magnum: 10.1.0 - magnum-ui: 6.0.1 - magnum_tempest_plugin: 1.0.0 - manila: 10.2.0 - manila-tempest-plugin: 1.0.0 - manila-ui: 3.1.0 - masakari: 9.1.3 - masakari-dashboard: 2.0.2 - masakari-monitors: 9.0.3 - metalsmith: 1.0.1 - mistral: 10.0.0 - mistral-dashboard: 10.0.0 - mistral-extra: 10.0.1 - mistral-lib: 2.1.0 - mistral_tempest_tests: 1.0.0 - monasca-agent: 3.0.3 - monasca-api: 4.1.0 - monasca-common: 3.1.0 - monasca-events-api: 1.0.0 - monasca-notification: 2.0.1 - monasca-persister: 2.0.1 - monasca-statsd: 2.0.0 - monasca-tempest-plugin: 2.0.0 - monasca-ui: 2.0.2 - mox3: 1.0.0 - murano: 9.0.0 - murano-agent: 5.0.0 - murano-dashboard: 9.0.0 - murano-tempest-plugin: 2.0.0 - networking-bagpipe: 12.0.1 - networking-baremetal: 2.0.0 - networking-bgpvpn: 12.0.0 - networking-generic-switch: 3.0.0 - networking-hyperv: 8.0.1 - networking-midonet: 10.0.0 - networking-odl: 16.0.0 - networking-sfc: 10.0.1 - neutron: 16.4.2 - neutron-dynamic-routing: 16.0.0 - neutron-fwaas: 16.0.0 - neutron-fwaas-dashboard: 3.0.0 - neutron-lib: 2.3.0 - neutron-tempest-plugin: 1.1.0 - neutron-vpnaas: 16.0.0 - neutron-vpnaas-dashboard: 2.0.0 - nova: 21.2.4 - octavia: 6.2.2 - octavia-dashboard: 5.0.2 - octavia-lib: 2.0.0 - octavia-tempest-plugin: 1.4.0 - openstack-congress: 11.0.0 - openstack-cyborg: 4.0.1 - openstack-heat: 14.2.0 - openstack-placement: 3.0.1 - openstack-release-test: 3.0.1 - openstacksdk: 0.46.1 - os-apply-config: 11.2.3 - os-brick: 3.0.8 - os-client-config: 2.1.0 - os-collect-config: 11.0.4 - os-ken: 1.0.0 - os-net-config: 12.3.6 - os-refresh-config: 11.0.1 - os-win: 5.0.2 - os_vif: 2.0.1 - osc-lib: 2.0.0 - osc-placement: 2.0.0 - oslo.cache: 2.3.1 - oslo.concurrency: 4.0.3 - oslo.config: 8.0.5 - oslo.context: 3.0.3 - oslo.db: 8.1.1 - oslo.i18n: 4.0.1 - oslo.limit: 1.0.2 - oslo.log: 4.1.3 - oslo.messaging: 12.1.6 - oslo.middleware: 4.0.2 - oslo.policy: 3.1.2 - oslo.privsep: 2.1.2 - oslo.reports: 2.0.1 - oslo.rootwrap: 6.0.2 - oslo.serialization: 3.1.2 - oslo.service: 2.1.2 - oslo.upgradecheck: 1.0.1 - oslo.utils: 4.1.2 - oslo.versionedobjects: 2.0.2 - oslo.vmware: 3.3.1 - oslotest: 4.2.0 - osprofiler: 3.1.0 - oswin-tempest-plugin: 1.0.0 - ovn-octavia-provider: 0.1.3 - ovsdbapp: 1.2.3 - panko: 8.1.0 - pankoclient: 1.0.1 - patrole: 0.9.0 - paunch: 7.0.4 - puppet-aodh: 16.4.0 - puppet-barbican: 16.4.0 - puppet-ceilometer: 16.4.0 - puppet-cinder: 16.4.0 - puppet-cloudkitty: 5.4.0 - puppet-congress: 16.3.0 - puppet-designate: 16.4.0 - puppet-ec2api: 16.4.0 - puppet-freezer: 5.3.0 - puppet-glance: 16.5.0 - puppet-glare: 5.3.0 - puppet-gnocchi: 16.4.0 - puppet-heat: 16.4.0 - puppet-horizon: 16.4.0 - puppet-ironic: 16.5.0 - puppet-keystone: 16.4.0 - puppet-magnum: 16.4.0 - puppet-manila: 16.4.0 - puppet-mistral: 16.4.0 - puppet-monasca: 5.3.0 - puppet-murano: 16.3.1 - puppet-neutron: 16.5.0 - puppet-nova: 16.6.0 - puppet-octavia: 16.4.0 - puppet-openstack_extras: 16.4.0 - puppet-openstacklib: 16.4.0 - puppet-oslo: 16.4.0 - puppet-ovn: 16.5.0 - puppet-panko: 16.4.0 - puppet-placement: 3.4.0 - puppet-qdr: 5.3.1 - puppet-rally: 4.3.1 - puppet-sahara: 16.4.0 - puppet-senlin: 3.3.0 - puppet-swift: 16.4.0 - puppet-tacker: 16.4.0 - puppet-tempest: 16.3.1 - puppet-tripleo: 12.7.1 - puppet-trove: 16.4.0 - 
puppet-vitrage: 6.4.0 - puppet-vswitch: 12.4.0 - puppet-watcher: 16.4.0 - puppet-zaqar: 16.4.0 - pycadf: 3.0.0 - python-adjutant: 0.5.1 - python-adjutantclient: 0.5.0 - python-barbicanclient: 4.10.0 - python-blazarclient: 3.0.1 - python-brick-cinderclient-ext: 1.0.2 - python-cinderclient: 7.0.2 - python-cloudkittyclient: 4.0.0 - python-congressclient: 2.0.1 - python-cyborgclient: 1.1.1 - python-designateclient: 4.0.0 - python-freezerclient: 3.0.1 - python-glanceclient: 3.1.2 - python-heatclient: 2.1.0 - python-ironic-inspector-client: 4.1.0 - python-ironicclient: 4.1.0 - python-karborclient: 2.0.0 - python-keystoneclient: 4.0.0 - python-magnumclient: 3.0.1 - python-manilaclient: 2.1.0 - python-masakariclient: 6.0.0 - python-mistralclient: 4.0.1 - python-monascaclient: 2.1.0 - python-muranoclient: 2.0.1 - python-neutronclient: 7.1.1 - python-novaclient: 17.0.1 - python-octaviaclient: 2.0.2 - python-openstackclient: 5.2.2 - python-qinlingclient: 5.0.1 - python-saharaclient: 3.1.0 - python-searchlightclient: 2.0.1 - python-senlinclient: 2.0.1 - python-solumclient: 3.1.0 - python-swiftclient: 3.9.1 - python-tackerclient: 1.1.1 - python-tripleoclient: 13.4.6 - python-troveclient: 3.3.2 - python-vitrageclient: 4.0.1 - python-watcher: 4.0.1 - python-watcherclient: 3.0.0 - python-zaqarclient: 1.13.2 - python-zunclient: 4.0.1 - qinling: 4.0.0 - qinling-dashboard: 3.0.0 - sahara: 12.0.0 - sahara-dashboard: 12.0.0 - sahara-extra: 11.0.0 - sahara-image-elements: 12.0.1 - sahara-plugin-ambari: 3.0.0 - sahara-plugin-cdh: 3.0.1 - sahara-plugin-mapr: 3.0.1 - sahara-plugin-spark: 3.0.1 - sahara-plugin-storm: 3.0.0 - sahara-plugin-vanilla: 3.0.0 - sahara-tests: 0.10.0 - searchlight: 8.0.0 - searchlight-ui: 8.0.0 - senlin: 9.0.1 - senlin-dashboard: 1.0.0 - senlin-tempest-plugin: 1.0.0 - shade: 1.33.0 - solum: 8.0.0 - solum-dashboard: 4.0.0 - solum-tempest-plugin: 2.0.0 - stevedore: 1.32.0 - storlets: 5.0.0 - sushy: 3.2.3 - sushy-cli: 0.2.0 - swift: 2.25.2 - tacker: 3.0.1 - tacker-horizon: 1.0.0 - taskflow: 4.1.0 - telemetry_tempest_plugin: 1.0.0 - tempest: 24.0.0 - tempest-horizon: 1.0.0 - tooz: 2.3.1 - tosca-parser: 2.0.0 - tricircle: 8.0.0 - tricircleclient: 1.0.0 - tripleo-common: 12.4.7 - tripleo-heat-templates: 12.4.6 - tripleo-image-elements: 12.0.2 - tripleo-ipsec: 9.3.2 - tripleo-puppet-elements: 12.3.4 - tripleo-validations: 12.3.6 - trove: 13.0.1 - trove-dashboard: 14.1.0 - trove_tempest_plugin: 1.0.0 - vitrage: 7.1.0 - vitrage-dashboard: 3.1.0 - vitrage-tempest-plugin: 4.0.0 - watcher-dashboard: 3.0.1 - watcher-tempest-plugin: 2.0.0 - zaqar: 10.0.0 - zaqar-ui: 8.0.0 - zaqar_tempest_plugin: 1.0.0 - zun: 5.0.1 - zun-tempest-plugin: 4.0.0 - zun-ui: 5.0.0 -victoria: - adjutant-ui: 1.0.0 - ansible-role-lunasa-hsm: 1.0.0 - aodh: 11.0.0 - aodhclient: 2.1.1 - automaton: 2.2.0 - barbican: 11.0.0 - barbican_tempest_plugin: 1.1.0 - bifrost: 9.1.0 - blazar: 6.0.2 - blazar-dashboard: 4.0.0 - blazar-nova: 2.1.0 - blazar_tempest_plugin: 0.5.0 - castellan: 3.6.1 - ceilometer: 15.1.0 - ceilometermiddleware: 2.1.0 - cinder: 17.4.0 - cinder-tempest-plugin: 1.2.0 - cinderlib: 3.1.0 - cliff: 3.4.0 - cloudkitty: 13.0.2 - cloudkitty-dashboard: 11.0.1 - cloudkitty_tempest_plugin: 2.1.0 - compute-hyperv: 11.0.0 - cyborg-tempest-plugin: 1.1.0 - debtcollector: 2.2.0 - designate: 11.0.2 - designate-dashboard: 11.0.0 - designate-tempest-plugin: 0.9.0 - ec2-api: 11.1.0 - ec2api-tempest-plugin: 1.1.0 - freezer: 9.0.0 - freezer-api: 9.0.0 - freezer-dr: 9.0.0 - freezer-web-ui: 9.0.0 - freezer_tempest_plugin: 1.1.0 - futurist: 
2.3.0 - glance: 21.1.0 - glance_store: 2.3.1 - heat-agents: 2.1.1 - heat-dashboard: 4.0.1 - heat-tempest-plugin: 1.1.0 - heat-translator: 2.1.0 - horizon: 18.6.4 - ironic: 16.0.5 - ironic-inspector: 10.4.2 - ironic-lib: 4.4.2 - ironic-prometheus-exporter: 2.1.1 - ironic-python-agent: 6.4.4 - ironic-tempest-plugin: 2.1.0 - ironic-ui: 4.2.0 - karbor: 1.6.0 - karbor-dashboard: 1.5.1 - kayobe: 9.4.0 - kayobe-config: 9.4.0 - kayobe-config-dev: 9.3.0 - keystone: 18.1.0 - keystone_tempest_plugin: 0.5.0 - keystoneauth1: 4.2.1 - keystonemiddleware: 9.1.0 - kolla: 11.3.0 - kolla-ansible: 11.4.0 - kuryr-kubernetes: 3.1.0 - kuryr-lib: 2.1.1 - kuryr-libnetwork: 6.0.0 - kuryr-tempest-plugin: 0.7.0 - magnum: 11.2.1 - magnum-ui: 7.0.1 - magnum_tempest_plugin: 1.1.0 - manila: 11.1.2 - manila-tempest-plugin: 1.2.0 - manila-ui: 4.1.0 - masakari: 10.0.3 - masakari-dashboard: 3.0.1 - masakari-monitors: 10.0.2 - metalsmith: 1.2.1 - mistral: 11.0.0 - mistral-dashboard: 11.0.1 - mistral-extra: 10.1.0 - mistral-lib: 2.3.0 - mistral_tempest_tests: 1.1.0 - monasca-agent: 4.0.1 - monasca-api: 5.0.0 - monasca-common: 3.2.0 - monasca-events-api: 2.0.0 - monasca-notification: 3.0.0 - monasca-persister: 3.0.0 - monasca-statsd: 2.1.0 - monasca-tempest-plugin: 2.1.0 - monasca-ui: 3.0.0 - murano: 10.0.0 - murano-agent: 6.0.1 - murano-dashboard: 10.0.0 - murano-tempest-plugin: 2.1.0 - networking-bagpipe: 13.0.0 - networking-baremetal: 3.0.0 - networking-bgpvpn: 13.0.0 - networking-generic-switch: 4.0.1 - networking-hyperv: 9.0.0 - networking-midonet: 11.0.0 - networking-odl: 17.0.0 - networking-sfc: 11.0.0 - neutron: 17.4.1 - neutron-dynamic-routing: 17.0.0 - neutron-lib: 2.6.2 - neutron-tempest-plugin: 1.2.0 - neutron-vpnaas: 17.0.0 - neutron-vpnaas-dashboard: 3.0.0 - nova: 22.4.0 - octavia: 7.1.2 - octavia-dashboard: 6.0.1 - octavia-lib: 2.2.0 - octavia-tempest-plugin: 1.5.0 - openstack-cyborg: 5.0.1 - openstack-heat: 15.1.0 - openstack-placement: 4.0.0 - openstack-release-test: 3.3.1 - openstacksdk: 0.50.0 - os-apply-config: 12.0.1 - os-brick: 4.0.5 - os-collect-config: 12.0.2 - os-ken: 1.2.1 - os-net-config: 13.3.0 - os-refresh-config: 12.0.1 - os-win: 5.2.0 - os_vif: 2.2.1 - osc-lib: 2.2.1 - osc-placement: 2.1.0 - oslo.cache: 2.6.3 - oslo.concurrency: 4.3.1 - oslo.config: 8.3.4 - oslo.context: 3.1.2 - oslo.db: 8.4.1 - oslo.i18n: 5.0.1 - oslo.limit: 1.2.1 - oslo.log: 4.4.0 - oslo.messaging: 12.5.2 - oslo.middleware: 4.1.1 - oslo.policy: 3.5.0 - oslo.privsep: 2.4.0 - oslo.reports: 2.2.0 - oslo.rootwrap: 6.2.0 - oslo.serialization: 4.0.2 - oslo.service: 2.4.1 - oslo.upgradecheck: 1.1.1 - oslo.utils: 4.6.1 - oslo.versionedobjects: 2.3.0 - oslo.vmware: 3.7.0 - oslotest: 4.4.1 - osprofiler: 3.4.0 - oswin-tempest-plugin: 1.1.0 - ovn-octavia-provider: 0.4.1 - ovsdbapp: 1.6.1 - panko: 9.0.0 - pankoclient: 1.1.0 - patrole: 0.10.0 - puppet-aodh: 17.6.0 - puppet-barbican: 17.5.0 - puppet-ceilometer: 17.5.0 - puppet-cinder: 17.5.0 - puppet-cloudkitty: 6.6.0 - puppet-designate: 17.5.0 - puppet-ec2api: 17.5.0 - puppet-freezer: 6.4.0 - puppet-glance: 17.7.0 - puppet-glare: 6.4.0 - puppet-gnocchi: 17.5.0 - puppet-heat: 17.5.0 - puppet-horizon: 17.5.0 - puppet-ironic: 17.5.0 - puppet-keystone: 17.5.0 - puppet-magnum: 17.5.0 - puppet-manila: 17.5.0 - puppet-mistral: 17.5.0 - puppet-monasca: 6.4.0 - puppet-murano: 17.4.0 - puppet-neutron: 17.7.0 - puppet-nova: 17.7.0 - puppet-octavia: 17.5.0 - puppet-openstack_extras: 17.5.0 - puppet-openstacklib: 17.4.1 - puppet-oslo: 17.5.0 - puppet-ovn: 17.6.0 - puppet-panko: 17.5.0 - puppet-placement: 
4.5.0 - puppet-qdr: 6.4.0 - puppet-rally: 5.4.0 - puppet-sahara: 17.5.0 - puppet-senlin: 4.4.0 - puppet-swift: 17.5.0 - puppet-tacker: 17.5.0 - puppet-tempest: 17.4.0 - puppet-tripleo: 13.7.0 - puppet-trove: 17.4.0 - puppet-vitrage: 7.5.0 - puppet-vswitch: 13.5.0 - puppet-watcher: 17.5.0 - puppet-zaqar: 17.5.0 - pycadf: 3.1.1 - python-adjutant: 1.0.0 - python-adjutantclient: 0.7.0 - python-barbicanclient: 5.0.1 - python-blazarclient: 3.1.1 - python-brick-cinderclient-ext: 1.2.0 - python-cinderclient: 7.2.2 - python-cloudkittyclient: 4.1.0 - python-cyborgclient: 1.2.1 - python-designateclient: 4.1.0 - python-freezerclient: 4.0.0 - python-glanceclient: 3.2.2 - python-heatclient: 2.2.1 - python-ironic-inspector-client: 4.4.0 - python-ironicclient: 4.4.1 - python-karborclient: 2.1.0 - python-keystoneclient: 4.1.1 - python-magnumclient: 3.2.2 - python-manilaclient: 2.3.2 - python-masakariclient: 6.1.1 - python-mistralclient: 4.1.1 - python-monascaclient: 2.2.1 - python-muranoclient: 2.1.1 - python-neutronclient: 7.2.1 - python-novaclient: 17.2.1 - python-octaviaclient: 2.2.1 - python-openstackclient: 5.4.0 - python-qinlingclient: 5.1.1 - python-saharaclient: 3.2.1 - python-searchlightclient: 2.1.1 - python-senlinclient: 2.1.1 - python-solumclient: 3.2.0 - python-swiftclient: 3.10.1 - python-tackerclient: 1.3.0 - python-tripleoclient: 14.3.0 - python-troveclient: 5.1.2 - python-vitrageclient: 4.1.1 - python-watcher: 5.0.0 - python-watcherclient: 3.1.1 - python-zaqarclient: 2.0.1 - python-zunclient: 4.1.1 - qinling: 5.0.0 - qinling-dashboard: 4.0.0 - sahara: 13.0.0 - sahara-dashboard: 13.0.0 - sahara-extra: 12.0.0 - sahara-image-elements: 13.0.0 - sahara-plugin-ambari: 4.0.0 - sahara-plugin-cdh: 4.0.0 - sahara-plugin-mapr: 4.0.0 - sahara-plugin-spark: 4.0.0 - sahara-plugin-storm: 4.0.0 - sahara-plugin-vanilla: 4.0.0 - sahara-tests: 0.11.0 - searchlight: 9.0.0 - searchlight-ui: 9.0.0 - senlin: 10.0.0 - senlin-dashboard: 2.0.0 - senlin-tempest-plugin: 1.1.0 - solum: 9.0.0 - solum-dashboard: 5.0.0 - solum-tempest-plugin: 2.1.0 - stevedore: 3.2.2 - storlets: 6.0.0 - sushy: 3.4.6 - sushy-cli: 0.3.1 - swift: 2.26.0 - tacker: 4.1.0 - tacker-horizon: 2.0.0 - taskflow: 4.5.0 - telemetry_tempest_plugin: 1.1.0 - tempest: 25.0.1 - tempest-horizon: 1.1.0 - tooz: 2.7.2 - tosca-parser: 2.1.1 - tripleo-common: 13.3.0 - tripleo-heat-templates: 13.6.0 - tripleo-image-elements: 12.2.3 - tripleo-ipsec: 10.0.1 - tripleo-puppet-elements: 13.1.2 - tripleo-validations: 13.5.0 - trove: 14.1.0 - trove-dashboard: 15.0.1 - trove_tempest_plugin: 1.1.0 - vitrage: 7.3.0 - vitrage-dashboard: 3.2.0 - vitrage-tempest-plugin: 5.1.0 - watcher-dashboard: 4.0.1 - watcher-tempest-plugin: 2.1.0 - zaqar: 11.0.0 - zaqar-ui: 9.0.0 - zaqar_tempest_plugin: 1.1.0 - zun: 6.0.0 - zun-tempest-plugin: 4.1.0 - zun-ui: 6.0.0 -wallaby: - adjutant-ui: 2.0.0 - ansible-role-atos-hsm: 1.0.0 - ansible-role-lunasa-hsm: 1.1.0 - ansible-role-thales-hsm: 1.0.0 - aodh: 12.0.0 - aodhclient: 2.2.0 - automaton: 2.3.1 - barbican: 12.0.0 - barbican_tempest_plugin: 1.3.0 - bifrost: 10.2.0 - blazar: 7.0.0 - blazar-dashboard: 5.0.0 - blazar-nova: 2.2.0 - blazar_tempest_plugin: 0.6.1 - castellan: 3.7.2 - ceilometer: 16.0.1 - ceilometermiddleware: 2.2.0 - cinder: 18.2.0 - cinder-tempest-plugin: 1.4.0 - cinderlib: 4.0.0 - cliff: 3.7.0 - cloudkitty: 14.0.1 - cloudkitty-dashboard: 12.0.0 - cloudkitty_tempest_plugin: 2.3.0 - compute-hyperv: 12.0.0 - cyborg-tempest-plugin: 1.2.0 - designate: 12.0.1 - designate-dashboard: 12.0.0 - designate-tempest-plugin: 0.11.0 - ec2-api: 
12.0.0 - ec2api-tempest-plugin: 1.2.1 - freezer: 10.0.0 - freezer-api: 10.0.0 - freezer-dr: 10.0.0 - freezer-web-ui: 10.0.0 - freezer_tempest_plugin: 1.2.0 - glance: 22.1.0 - glance-tempest-plugin: 0.1.0 - glance_store: 2.5.0 - heat-agents: 2.2.0 - heat-dashboard: 5.0.0 - heat-tempest-plugin: 1.2.0 - heat-translator: 2.3.0 - horizon: 19.2.0 - ironic: 17.0.4 - ironic-inspector: 10.6.1 - ironic-lib: 4.6.3 - ironic-prometheus-exporter: 2.2.0 - ironic-python-agent: 7.0.2 - ironic-python-agent-builder: 2.8.0 - ironic-tempest-plugin: 2.2.0 - ironic-ui: 4.3.0 - kayobe: 10.1.0 - kayobe-config: 10.0.0 - kayobe-config-dev: 10.1.0 - keystone: 19.0.0 - keystone_tempest_plugin: 0.7.0 - keystoneauth1: 4.3.1 - keystonemiddleware: 9.2.0 - kolla: 12.1.0 - kolla-ansible: 12.3.0 - kuryr-kubernetes: 4.0.0 - kuryr-lib: 2.3.0 - kuryr-libnetwork: 7.0.0 - kuryr-tempest-plugin: 0.9.0 - magnum: 12.1.0 - magnum-ui: 8.0.1 - magnum_tempest_plugin: 1.3.0 - manila: 12.1.2 - manila-tempest-plugin: 1.4.0 - manila-ui: 5.1.0 - masakari: 11.0.1 - masakari-dashboard: 4.0.1 - masakari-monitors: 11.0.2 - metalsmith: 1.4.3 - mistral: 12.0.0 - mistral-dashboard: 12.0.1 - mistral-extra: 11.0.0 - mistral-lib: 2.4.0 - mistral_tempest_tests: 1.2.0 - monasca-agent: 5.0.0 - monasca-api: 6.0.0 - monasca-common: 3.3.0 - monasca-events-api: 3.0.0 - monasca-notification: 4.0.0 - monasca-persister: 4.0.0 - monasca-statsd: 2.2.0 - monasca-tempest-plugin: 2.2.1 - monasca-ui: 4.0.0 - murano: 11.0.0 - murano-agent: 7.0.0 - murano-dashboard: 11.0.0 - murano-tempest-plugin: 2.2.0 - networking-bagpipe: 14.0.0 - networking-baremetal: 4.0.0 - networking-bgpvpn: 14.0.0 - networking-generic-switch: 5.0.0 - networking-hyperv: 10.0.0 - networking-odl: 18.0.0 - networking-sfc: 12.0.0 - neutron: 18.3.0 - neutron-dynamic-routing: 18.1.0 - neutron-lib: 2.10.2 - neutron-tempest-plugin: 1.4.0 - neutron-vpnaas: 18.0.0 - neutron-vpnaas-dashboard: 4.0.0 - nova: 23.2.0 - octavia: 8.0.1 - octavia-dashboard: 7.0.0 - octavia-lib: 2.3.1 - octavia-tempest-plugin: 1.7.0 - openstack-cyborg: 6.0.0 - openstack-heat: 16.0.0 - openstack-placement: 5.0.1 - openstacksdk: 0.55.1 - os-brick: 4.3.3 - os-ken: 1.4.1 - os-net-config: 14.2.0 - os-win: 5.4.0 - os_vif: 2.4.0 - osc-lib: 2.3.1 - osc-placement: 2.2.0 - oslo.cache: 2.7.1 - oslo.config: 8.5.1 - oslo.context: 3.2.1 - oslo.db: 8.5.1 - oslo.limit: 1.3.1 - oslo.messaging: 12.7.3 - oslo.metrics: 0.2.2 - oslo.middleware: 4.2.1 - oslo.policy: 3.7.1 - oslo.privsep: 2.5.1 - oslo.serialization: 4.1.1 - oslo.service: 2.5.1 - oslo.upgradecheck: 1.3.1 - oslo.utils: 4.8.2 - oslo.versionedobjects: 2.4.1 - oslo.vmware: 3.8.1 - oswin-tempest-plugin: 1.2.0 - ovn-octavia-provider: 1.0.0 - ovsdbapp: 1.9.2 - panko: 10.0.0 - pankoclient: 1.2.0 - patrole: 0.12.0 - puppet-aodh: 18.4.1 - puppet-barbican: 18.4.1 - puppet-ceilometer: 18.4.1 - puppet-cinder: 18.5.0 - puppet-cloudkitty: 7.4.0 - puppet-designate: 18.5.0 - puppet-ec2api: 18.4.0 - puppet-freezer: 18.4.1 - puppet-glance: 18.5.0 - puppet-glare: 7.4.0 - puppet-gnocchi: 18.4.1 - puppet-heat: 18.4.0 - puppet-horizon: 18.5.0 - puppet-ironic: 18.6.0 - puppet-keystone: 18.5.0 - puppet-magnum: 18.4.1 - puppet-manila: 18.5.0 - puppet-mistral: 18.4.0 - puppet-monasca: 7.4.0 - puppet-murano: 18.4.0 - puppet-neutron: 18.5.0 - puppet-nova: 18.5.0 - puppet-octavia: 18.4.1 - puppet-openstack_extras: 18.5.0 - puppet-openstacklib: 18.5.0 - puppet-oslo: 18.4.1 - puppet-ovn: 18.5.0 - puppet-panko: 18.4.0 - puppet-placement: 5.4.1 - puppet-qdr: 7.4.0 - puppet-rally: 6.4.0 - puppet-sahara: 18.4.0 - 
puppet-senlin: 5.4.0 - puppet-swift: 18.5.0 - puppet-tacker: 18.4.0 - puppet-tempest: 18.5.0 - puppet-tripleo: 14.2.2 - puppet-trove: 18.4.0 - puppet-vitrage: 8.4.0 - puppet-vswitch: 14.4.1 - puppet-watcher: 18.4.0 - puppet-zaqar: 18.4.0 - python-adjutant: 2.0.0 - python-adjutantclient: 0.8.0 - python-barbicanclient: 5.1.0 - python-blazarclient: 3.2.0 - python-brick-cinderclient-ext: 1.3.0 - python-cinderclient: 7.4.1 - python-cloudkittyclient: 4.2.0 - python-cyborgclient: 1.3.0 - python-designateclient: 4.2.0 - python-freezerclient: 4.2.0 - python-glanceclient: 3.3.0 - python-heatclient: 2.3.1 - python-ironic-inspector-client: 4.5.0 - python-ironicclient: 4.6.3 - python-keystoneclient: 4.2.0 - python-magnumclient: 3.4.1 - python-manilaclient: 2.6.3 - python-masakariclient: 7.0.0 - python-mistralclient: 4.2.0 - python-monascaclient: 2.3.0 - python-muranoclient: 2.2.0 - python-neutronclient: 7.3.0 - python-novaclient: 17.4.0 - python-octaviaclient: 2.3.1 - python-openstackclient: 5.5.1 - python-saharaclient: 3.3.0 - python-senlinclient: 2.2.1 - python-solumclient: 3.3.0 - python-swiftclient: 3.11.1 - python-tackerclient: 1.6.0 - python-tripleoclient: 16.4.0 - python-troveclient: 7.0.0 - python-vitrageclient: 4.3.0 - python-watcher: 6.0.0 - python-watcherclient: 3.2.0 - python-zaqarclient: 2.1.0 - python-zunclient: 4.2.0 - sahara: 14.0.0 - sahara-dashboard: 14.0.0 - sahara-extra: 13.0.0 - sahara-image-elements: 14.0.0 - sahara-plugin-ambari: 5.0.0 - sahara-plugin-cdh: 5.0.0 - sahara-plugin-mapr: 5.0.0 - sahara-plugin-spark: 5.0.0 - sahara-plugin-storm: 5.0.0 - sahara-plugin-vanilla: 5.0.0 - sahara-tests: 0.13.0 - senlin: 11.0.0 - senlin-dashboard: 3.0.0 - senlin-tempest-plugin: 1.3.0 - solum: 10.0.0 - solum-dashboard: 6.0.0 - solum-tempest-plugin: 2.2.0 - stevedore: 3.3.1 - storlets: 7.0.0 - sushy: 3.7.5 - sushy-cli: 0.4.0 - swift: 2.27.0 - tacker: 5.0.0 - tacker-horizon: 3.0.0 - telemetry_tempest_plugin: 1.3.0 - tempest: 27.0.0 - tooz: 2.8.2 - tosca-parser: 2.3.0 - tripleo-common: 15.4.0 - tripleo-heat-templates: 14.3.0 - tripleo-image-elements: 13.1.2 - tripleo-puppet-elements: 14.1.2 - tripleo-validations: 14.2.1 - trove: 15.0.0 - trove-dashboard: 16.0.0 - trove_tempest_plugin: 1.2.0 - vitrage: 7.4.0 - vitrage-dashboard: 3.3.0 - vitrage-tempest-plugin: 5.3.0 - watcher-dashboard: 5.0.0 - watcher-tempest-plugin: 2.2.0 - zaqar: 12.0.0 - zaqar-ui: 10.0.0 - zaqar_tempest_plugin: 1.2.1 - zun: 7.0.0 - zun-tempest-plugin: 4.3.0 - zun-ui: 7.0.0 -xena: - adjutant-ui: 3.0.0 - ansible-role-atos-hsm: 2.0.0 - ansible-role-lunasa-hsm: 2.0.0 - ansible-role-thales-hsm: 2.0.0 - aodh: 13.0.0 - aodhclient: 2.3.1 - automaton: 2.4.0 - barbican: 13.0.0 - barbican_tempest_plugin: 1.5.0 - bifrost: 11.2.1 - blazar: 8.0.0 - blazar-dashboard: 6.0.0 - blazar-nova: 2.3.0 - blazar_tempest_plugin: 0.7.0 - castellan: 3.9.1 - ceilometer: 17.0.1 - ceilometermiddleware: 2.3.0 - cinder: 19.1.0 - cinder-tempest-plugin: 1.5.0 - cinderlib: 4.1.0 - cliff: 3.9.0 - cloudkitty: 15.0.0 - cloudkitty-dashboard: 13.0.0 - cloudkitty_tempest_plugin: 2.4.1 - compute-hyperv: 13.0.0 - cyborg-tempest-plugin: 1.3.0 - designate: 13.0.0 - designate-dashboard: 13.0.0 - designate-tempest-plugin: 0.12.0 - ec2-api: 13.0.0 - ec2api-tempest-plugin: 1.3.0 - freezer: 11.0.0 - freezer-api: 11.0.0 - freezer-dr: 11.0.0 - freezer-web-ui: 11.0.0 - freezer_tempest_plugin: 1.3.1 - glance: 23.0.0 - glance-tempest-plugin: 0.2.0 - glance_store: 2.7.0 - heat-agents: 3.0.0 - heat-dashboard: 6.0.0 - heat-tempest-plugin: 1.4.0 - heat-translator: 2.4.1 - horizon: 
20.1.2 - ironic: 18.2.1 - ironic-inspector: 10.8.0 - ironic-lib: 5.0.1 - ironic-prometheus-exporter: 3.0.0 - ironic-python-agent: 8.2.1 - ironic-python-agent-builder: 3.0.0 - ironic-tempest-plugin: 2.3.1 - ironic-ui: 5.0.0 - kayobe: 11.0.1 - kayobe-config: 11.0.0 - kayobe-config-dev: 11.0.0 - keystone: 20.0.0 - keystone_tempest_plugin: 0.8.0 - keystoneauth1: 4.4.0 - keystonemiddleware: 9.3.0 - kolla: 13.0.1 - kolla-ansible: 13.0.1 - kuryr-kubernetes: 5.0.0 - kuryr-lib: 2.4.0 - kuryr-libnetwork: 8.0.0 - kuryr-tempest-plugin: 0.11.0 - magnum: 13.0.0 - magnum-ui: 9.0.0 - magnum_tempest_plugin: 1.5.0 - manila: 13.0.3 - manila-tempest-plugin: 1.6.0 - manila-ui: 6.0.0 - masakari: 12.0.0 - masakari-dashboard: 5.0.0 - masakari-monitors: 12.0.0 - metalsmith: 1.5.2 - mistral: 13.0.0 - mistral-dashboard: 13.0.0 - mistral-extra: 11.1.0 - mistral-lib: 2.5.0 - mistral_tempest_tests: 1.3.0 - monasca-agent: 6.0.0 - monasca-api: 7.0.0 - monasca-common: 3.4.0 - monasca-events-api: 4.0.0 - monasca-notification: 5.0.0 - monasca-persister: 5.0.0 - monasca-statsd: 2.3.0 - monasca-tempest-plugin: 2.3.0 - monasca-ui: 5.0.0 - murano: 12.0.0 - murano-agent: 8.0.0 - murano-dashboard: 12.0.0 - murano-tempest-plugin: 2.3.1 - networking-bagpipe: 15.0.0 - networking-baremetal: 5.0.0 - networking-bgpvpn: 15.0.0 - networking-generic-switch: 6.0.0 - networking-hyperv: 11.0.0 - networking-odl: 19.0.0 - networking-sfc: 13.0.0 - neutron: 19.2.0 - neutron-dynamic-routing: 19.1.0 - neutron-lib: 2.15.2 - neutron-tempest-plugin: 1.7.0 - neutron-vpnaas: 19.0.0 - neutron-vpnaas-dashboard: 5.0.0 - nova: 24.1.0 - octavia: 9.0.1 - octavia-dashboard: 8.0.0 - octavia-lib: 2.4.1 - octavia-tempest-plugin: 1.8.1 - openstack-cyborg: 7.0.0 - openstack-heat: 17.0.1 - openstack-placement: 6.0.0 - openstacksdk: 0.59.0 - os-brick: 5.0.2 - os-ken: 2.1.0 - os-win: 5.5.0 - os_vif: 2.6.0 - osc-lib: 2.4.2 - osc-placement: 3.1.1 - oslo.cache: 2.8.2 - oslo.config: 8.7.1 - oslo.context: 3.3.2 - oslo.db: 11.0.0 - oslo.limit: 1.4.0 - oslo.messaging: 12.9.3 - oslo.metrics: 0.3.1 - oslo.middleware: 4.4.0 - oslo.policy: 3.8.3 - oslo.privsep: 2.6.2 - oslo.serialization: 4.2.0 - oslo.service: 2.6.2 - oslo.upgradecheck: 1.4.0 - oslo.utils: 4.10.2 - oslo.versionedobjects: 2.5.0 - oslo.vmware: 3.9.2 - oswin-tempest-plugin: 1.3.0 - ovn-octavia-provider: 1.1.1 - ovsdbapp: 1.12.1 - patrole: 0.13.0 - puppet-aodh: 19.4.0 - puppet-barbican: 19.4.0 - puppet-ceilometer: 19.4.0 - puppet-cinder: 19.4.0 - puppet-cloudkitty: 8.4.0 - puppet-designate: 19.4.0 - puppet-ec2api: 19.4.0 - puppet-freezer: 19.1.0 - puppet-glance: 19.4.0 - puppet-gnocchi: 19.4.0 - puppet-heat: 19.4.0 - puppet-horizon: 19.4.0 - puppet-ironic: 19.4.0 - puppet-keystone: 19.4.0 - puppet-magnum: 19.4.0 - puppet-manila: 19.4.0 - puppet-mistral: 19.4.0 - puppet-monasca: 8.1.0 - puppet-murano: 19.4.0 - puppet-neutron: 19.4.0 - puppet-nova: 19.4.0 - puppet-octavia: 19.4.0 - puppet-openstack_extras: 19.4.0 - puppet-openstacklib: 19.4.0 - puppet-oslo: 19.4.0 - puppet-ovn: 19.4.0 - puppet-placement: 6.4.0 - puppet-qdr: 8.4.0 - puppet-rally: 7.4.0 - puppet-sahara: 19.4.0 - puppet-senlin: 6.4.0 - puppet-swift: 19.4.0 - puppet-tacker: 19.4.0 - puppet-tempest: 19.5.0 - puppet-trove: 19.4.0 - puppet-vitrage: 9.4.0 - puppet-vswitch: 15.4.0 - puppet-watcher: 19.4.0 - puppet-zaqar: 19.4.0 - python-adjutant: 3.0.0 - python-adjutantclient: 0.9.0 - python-barbicanclient: 5.2.0 - python-blazarclient: 3.3.1 - python-brick-cinderclient-ext: 1.4.0 - python-cinderclient: 8.1.0 - python-cloudkittyclient: 4.3.0 - 
python-cyborgclient: 1.5.0 - python-designateclient: 4.3.0 - python-freezerclient: 4.3.0 - python-glanceclient: 3.5.0 - python-heatclient: 2.4.0 - python-ironic-inspector-client: 4.6.0 - python-ironicclient: 4.8.1 - python-keystoneclient: 4.3.0 - python-magnumclient: 3.5.0 - python-manilaclient: 3.0.2 - python-masakariclient: 7.1.0 - python-mistralclient: 4.3.0 - python-monascaclient: 2.4.0 - python-muranoclient: 2.3.0 - python-neutronclient: 7.6.0 - python-novaclient: 17.6.0 - python-octaviaclient: 2.4.0 - python-openstackclient: 5.6.0 - python-saharaclient: 3.4.0 - python-senlinclient: 2.3.0 - python-solumclient: 3.4.0 - python-swiftclient: 3.12.0 - python-tackerclient: 1.8.0 - python-troveclient: 7.1.1 - python-vitrageclient: 4.4.0 - python-watcher: 7.0.0 - python-watcherclient: 3.3.0 - python-zaqarclient: 2.2.0 - python-zunclient: 4.3.0 - sahara: 15.0.0 - sahara-dashboard: 15.0.0 - sahara-extra: 14.0.0 - sahara-image-elements: 15.0.0 - sahara-plugin-ambari: 6.0.0 - sahara-plugin-cdh: 6.0.0 - sahara-plugin-mapr: 6.0.0 - sahara-plugin-spark: 6.0.0 - sahara-plugin-storm: 6.0.0 - sahara-plugin-vanilla: 6.0.0 - sahara-tests: 0.14.0 - senlin: 12.0.0 - senlin-dashboard: 4.0.0 - senlin-tempest-plugin: 1.4.0 - solum: 11.0.0 - solum-dashboard: 7.0.0 - solum-tempest-plugin: 2.3.0 - stevedore: 3.4.0 - storlets: 8.0.0 - sushy: 3.12.2 - swift: 2.28.0 - tacker: 6.0.0 - tacker-horizon: 4.0.0 - telemetry_tempest_plugin: 1.5.0 - tempest: 29.0.0 - tosca-parser: 2.4.1 - trove: 16.0.0 - trove-dashboard: 17.0.0 - trove_tempest_plugin: 1.3.0 - vitrage: 7.5.0 - vitrage-dashboard: 3.4.0 - vitrage-tempest-plugin: 5.4.0 - watcher-dashboard: 6.0.0 - watcher-tempest-plugin: 2.3.0 - zaqar: 13.0.0 - zaqar-ui: 11.0.0 - zaqar_tempest_plugin: 1.3.1 - zun: 8.0.0 - zun-tempest-plugin: 4.4.1 - zun-ui: 8.0.0 -yoga: - adjutant-ui: 4.0.0 - ansible-collection-kolla: 1.0.0 - ansible-role-atos-hsm: 3.0.0 - ansible-role-lunasa-hsm: 3.0.0 - ansible-role-thales-hsm: 3.0.0 - aodh: 14.0.0 - aodhclient: 2.4.1 - automaton: 2.5.0 - barbican: 14.0.0 - barbican_tempest_plugin: 1.6.0 - bifrost: 14.0.0 - blazar: 9.0.0 - blazar-dashboard: 7.0.0 - blazar-nova: 2.5.0 - blazar_tempest_plugin: 0.8.0 - castellan: 3.10.2 - ceilometer: 18.0.0 - ceilometermiddleware: 2.4.1 - cinder: 20.0.0 - cinder-tempest-plugin: 1.6.0 - cliff: 3.10.1 - cloudkitty: 16.0.0 - cloudkitty-dashboard: 14.0.0 - cloudkitty_tempest_plugin: 2.5.0 - compute-hyperv: 14.0.0 - cyborg-tempest-plugin: 1.4.0 - designate: 14.0.0 - designate-dashboard: 14.0.0 - designate-tempest-plugin: 0.13.0 - ec2-api: 14.0.1 - ec2api-tempest-plugin: 1.4.0 - freezer: 12.0.0 - freezer-api: 12.0.0 - freezer-dr: 12.0.0 - freezer-web-ui: 12.0.0 - freezer_tempest_plugin: 1.4.0 - glance: 24.0.0 - glance-tempest-plugin: 0.3.0 - glance_store: 3.0.0 - heat-agents: 4.0.0 - heat-dashboard: 7.0.0 - heat-tempest-plugin: 1.5.0 - heat-translator: 2.5.0 - horizon: 22.1.0 - ironic: 20.1.0 - ironic-inspector: 10.11.0 - ironic-lib: 5.2.0 - ironic-prometheus-exporter: 3.1.0 - ironic-python-agent: 8.5.0 - ironic-python-agent-builder: 4.0.1 - ironic-tempest-plugin: 2.4.0 - ironic-ui: 5.1.0 - kayobe: 12.0.0 - kayobe-config: 12.0.0 - kayobe-config-dev: 12.0.0 - keystone: 21.0.0 - keystone_tempest_plugin: 0.9.0 - keystoneauth1: 4.5.0 - keystonemiddleware: 9.4.0 - kolla: 14.0.0 - kolla-ansible: 14.0.0 - kuryr-kubernetes: 6.0.0 - kuryr-lib: 2.5.0 - kuryr-libnetwork: 9.0.0 - kuryr-tempest-plugin: 0.12.0 - magnum: 14.0.0 - magnum-ui: 10.0.0 - magnum_tempest_plugin: 1.6.0 - manila: 14.0.0 - manila-tempest-plugin: 1.7.0 - 
manila-ui: 7.0.0 - masakari: 13.0.0 - masakari-dashboard: 6.0.0 - masakari-monitors: 13.0.0 - metalsmith: 1.6.2 - mistral: 14.0.0 - mistral-dashboard: 14.0.0 - mistral-extra: 11.3.0 - mistral-lib: 2.6.0 - mistral_tempest_tests: 1.4.0 - monasca-agent: 7.0.0 - monasca-api: 8.0.0 - monasca-common: 3.5.0 - monasca-events-api: 5.0.0 - monasca-notification: 6.0.0 - monasca-persister: 6.0.0 - monasca-statsd: 2.4.0 - monasca-tempest-plugin: 2.4.0 - monasca-ui: 6.0.0 - murano: 13.0.0 - murano-agent: 9.0.0 - murano-dashboard: 13.0.0 - murano-tempest-plugin: 2.4.0 - networking-bagpipe: 16.0.0 - networking-baremetal: 5.1.0 - networking-bgpvpn: 16.0.0 - networking-generic-switch: 6.1.0 - networking-hyperv: 12.0.0 - networking-odl: 20.0.0 - networking-sfc: 14.0.0 - neutron: 20.0.0 - neutron-dynamic-routing: 20.0.0 - neutron-lib: 2.20.0 - neutron-tempest-plugin: 1.9.0 - neutron-vpnaas: 20.0.0 - neutron-vpnaas-dashboard: 6.0.0 - nova: 25.0.0 - octavia: 10.0.0 - octavia-dashboard: 9.0.0 - octavia-lib: 2.5.0 - octavia-tempest-plugin: 1.9.0 - openstack-cyborg: 8.0.0 - openstack-heat: 18.0.0 - openstack-placement: 7.0.0 - openstacksdk: 0.61.0 - os-brick: 5.2.0 - os-ken: 2.3.1 - os-win: 5.6.0 - os_vif: 2.7.1 - osc-lib: 2.5.0 - osc-placement: 3.2.0 - oslo.cache: 2.10.1 - oslo.config: 8.8.0 - oslo.context: 4.1.0 - oslo.db: 11.2.0 - oslo.limit: 1.5.1 - oslo.messaging: 12.13.0 - oslo.metrics: 0.4.0 - oslo.middleware: 4.5.1 - oslo.policy: 3.11.0 - oslo.privsep: 2.7.0 - oslo.serialization: 4.3.0 - oslo.service: 2.8.0 - oslo.upgradecheck: 1.5.0 - oslo.utils: 4.12.3 - oslo.versionedobjects: 2.6.0 - oslo.vmware: 3.10.0 - oswin-tempest-plugin: 1.4.0 - ovn-octavia-provider: 2.0.0 - ovsdbapp: 1.15.2 - patrole: 0.14.0 - puppet-aodh: 20.3.0 - puppet-barbican: 20.3.0 - puppet-ceilometer: 20.3.0 - puppet-cinder: 20.3.0 - puppet-cloudkitty: 9.3.0 - puppet-designate: 20.3.0 - puppet-ec2api: 20.3.0 - puppet-glance: 20.3.0 - puppet-gnocchi: 20.3.0 - puppet-heat: 20.3.0 - puppet-horizon: 20.3.0 - puppet-ironic: 20.3.0 - puppet-keystone: 20.3.0 - puppet-magnum: 20.3.0 - puppet-manila: 20.3.0 - puppet-mistral: 20.3.0 - puppet-murano: 20.3.0 - puppet-neutron: 20.3.0 - puppet-nova: 20.3.0 - puppet-octavia: 20.3.0 - puppet-openstack_extras: 20.3.0 - puppet-openstacklib: 20.3.0 - puppet-oslo: 20.3.0 - puppet-ovn: 20.3.0 - puppet-placement: 7.3.0 - puppet-qdr: 9.3.0 - puppet-rally: 8.3.0 - puppet-sahara: 20.3.0 - puppet-swift: 20.3.0 - puppet-tacker: 20.3.0 - puppet-tempest: 20.3.0 - puppet-trove: 20.3.0 - puppet-vitrage: 10.3.0 - puppet-vswitch: 16.3.0 - puppet-watcher: 20.3.1 - puppet-zaqar: 20.3.1 - python-adjutant: 4.0.0 - python-adjutantclient: 0.10.0 - python-barbicanclient: 5.3.0 - python-blazarclient: 3.4.0 - python-brick-cinderclient-ext: 1.5.0 - python-cinderclient: 8.3.0 - python-cloudkittyclient: 4.5.0 - python-cyborgclient: 1.7.0 - python-designateclient: 4.5.0 - python-freezerclient: 4.4.0 - python-glanceclient: 3.6.0 - python-heatclient: 2.5.1 - python-ironic-inspector-client: 4.7.1 - python-ironicclient: 4.11.0 - python-keystoneclient: 4.4.0 - python-magnumclient: 3.6.0 - python-manilaclient: 3.3.0 - python-masakariclient: 7.2.0 - python-mistralclient: 4.4.0 - python-monascaclient: 2.5.0 - python-muranoclient: 2.4.1 - python-neutronclient: 7.8.0 - python-novaclient: 17.7.0 - python-octaviaclient: 2.5.0 - python-openstackclient: 5.8.0 - python-saharaclient: 3.5.0 - python-senlinclient: 2.4.0 - python-solumclient: 3.5.0 - python-swiftclient: 3.13.1 - python-tackerclient: 1.10.0 - python-troveclient: 7.2.0 - 
python-vitrageclient: 4.5.0 - python-watcher: 8.0.0 - python-watcherclient: 3.4.0 - python-zaqarclient: 2.3.0 - python-zunclient: 4.4.0 - sahara: 16.0.0 - sahara-dashboard: 16.0.0 - sahara-extra: 15.0.0 - sahara-image-elements: 16.0.0 - sahara-plugin-ambari: 7.0.0 - sahara-plugin-cdh: 7.0.0 - sahara-plugin-mapr: 7.0.0 - sahara-plugin-spark: 7.0.0 - sahara-plugin-storm: 7.0.0 - sahara-plugin-vanilla: 7.0.0 - sahara-tests: 0.15.0 - senlin: 13.0.0 - senlin-dashboard: 5.0.0 - senlin-tempest-plugin: 1.5.0 - solum: 12.0.0 - solum-dashboard: 8.0.0 - solum-tempest-plugin: 2.4.0 - stevedore: 3.5.0 - storlets: 9.0.0 - sushy: 4.1.1 - swift: 2.29.1 - tacker: 7.0.0 - tacker-horizon: 5.0.0 - tap-as-a-service: 9.0.0 - telemetry_tempest_plugin: 1.6.0 - tempest: 30.1.0 - tosca-parser: 2.5.1 - trove: 17.0.0 - trove-dashboard: 18.0.0 - trove_tempest_plugin: 1.4.0 - vitrage: 8.0.1 - vitrage-dashboard: 3.5.0 - vitrage-tempest-plugin: 5.5.0 - watcher-dashboard: 7.0.0 - watcher-tempest-plugin: 2.4.0 - zaqar: 14.0.0 - zaqar-ui: 12.0.0 - zaqar_tempest_plugin: 1.4.0 - zun: 9.0.0 - zun-tempest-plugin: 4.5.0 - zun-ui: 9.0.0 diff --git a/tools/oos/etc/package.spec.j2 b/tools/oos/etc/package.spec.j2 deleted file mode 100644 index 1223a5454dba1c43041c2e1013c0b7862befb1c8..0000000000000000000000000000000000000000 --- a/tools/oos/etc/package.spec.j2 +++ /dev/null @@ -1,143 +0,0 @@ -%global _empty_manifest_terminate_build 0 -Name: {{ spec_name }} -Version: {{ version }} -Release: 1 -Summary: {{ pkg_summary }} -License: {{ pkg_license }} -URL: {{ pkg_home }} -Source0: {{ source_url }} -{% if not build_arch %} -BuildArch: noarch -{% endif %} -%description -{{ description }} - -%package -n {{ pkg_name }} -Summary: {{ pkg_summary }} -Provides: {{ provides }} -{% if base_build_requires %} -# Base build requires -{% for br in base_build_requires %} -BuildRequires: {{ br }} -{% endfor %} -{% endif %} -{% if dev_requires %} -# General requires -{% for dr in dev_requires %} -{% if dr not in base_build_requires %} -BuildRequires: {{ dr }} -{% endif %} -{% endfor %} -{% endif %} -{% if test_requires %} -# Tests running requires -{% for tr in test_requires %} -{% if tr not in base_build_requires %} -BuildRequires: {{ tr }} -{% endif %} -{% endfor %} -{% endif %} -{% if dev_requires %} -# General requires -{% for dr in dev_requires %} -Requires: {{ dr }} -{% endfor %} -{% endif %} -{% if test_requires %} -# Tests running requires -{% for tr in test_requires %} -Requires: {{ tr }} -{% endfor %} -{% endif %} -%description -n {{ pkg_name }} -{{ description }} - -%package help -Summary: {{ pkg_summary }} -Provides: {{ pkg_name }}-doc -%description help -{{ description }} - -%prep -%autosetup -n {{ source_file_dir }} - -%build -{% if python2 %} -%py2_build -{% else %} -%py3_build -{% endif %} - -%install -{% if python2 %} -%py2_install -{% else %} -%py3_install -{% endif %} - -install -d -m755 %{buildroot}/%{_pkgdocdir} -if [ -d doc ]; then cp -arf doc %{buildroot}/%{_pkgdocdir}; fi -if [ -d docs ]; then cp -arf docs %{buildroot}/%{_pkgdocdir}; fi -if [ -d example ]; then cp -arf example %{buildroot}/%{_pkgdocdir}; fi -if [ -d examples ]; then cp -arf examples %{buildroot}/%{_pkgdocdir}; fi -pushd %{buildroot} -if [ -d usr/lib ]; then - find usr/lib -type f -printf "/%h/%f\n" >> filelist.lst -fi -if [ -d usr/lib64 ]; then - find usr/lib64 -type f -printf "/%h/%f\n" >> filelist.lst -fi -if [ -d usr/bin ]; then - find usr/bin -type f -printf "/%h/%f\n" >> filelist.lst -fi -if [ -d usr/sbin ]; then - find usr/sbin -type f -printf "/%h/%f\n" >> 
filelist.lst -fi -touch doclist.lst -if [ -d usr/share/man ]; then - find usr/share/man -type f -printf "/%h/%f.gz\n" >> doclist.lst -fi -popd -mv %{buildroot}/filelist.lst . -mv %{buildroot}/doclist.lst . - -{% if add_check %} -%check -{% if python2 %} -%{__python2} setup.py test -{% else %} -%{__python3} setup.py test -{% endif %} -{% endif %} - -%files -n {{ pkg_name }} -f filelist.lst -{% if not build_arch %} -{% if python2 %} -%dir %{python2_sitelib}/* -{% else %} -%dir %{python3_sitelib}/* -{% endif %} -{% else %} -{% if python2 %} -%dir %{python2_sitearch}/* -{% else %} -%dir %{python3_sitearch}/* -{% endif %} -{% endif %} - -%files help -f doclist.lst -%{_docdir}/* - -%changelog -* {{ today }} OpenStack_SIG - {{ version }}-1 -{% if old_changelog %} -- {{ up_down_grade }} package {{ pkg_name }} to version {{ version }} -{% else %} -- Init package {{ pkg_name }} of version {{ version }} -{% endif %} - -{% if old_changelog %} -{% for cl in old_changelog %} -{{ cl }} -{% endfor %} -{% endif %} \ No newline at end of file diff --git a/tools/oos/etc/playbooks/aodh.yaml b/tools/oos/etc/playbooks/aodh.yaml deleted file mode 100644 index 8318c4d15c3374f0dc6a5fefdb7d9fdd87472992..0000000000000000000000000000000000000000 --- a/tools/oos/etc/playbooks/aodh.yaml +++ /dev/null @@ -1,94 +0,0 @@ -- name: Install aodh - hosts: controller - become: yes - roles: - - role: init_database - vars: - database: aodh - user: aodh - - role: create_identity_user - vars: - user: aodh - - role: create_identity_service - vars: - service: aodh - type: alarming - description: Telemetry - endpoint: http://{{ hostvars['controller']['ansible_default_ipv4']['address'] }}:8042 - tasks: - - name: Get pyparsing version of OpenStack - shell: | - yum list |grep pyparsing |grep OpenStack | awk '{print $2}' - register: pyparing_version - - - name: Install aodh package - yum: - name: - - python3-pyparsing-{{ pyparing_version.stdout }} - - openstack-aodh-api - - openstack-aodh-evaluator - - openstack-aodh-notifier - - openstack-aodh-listener - - openstack-aodh-expirer - - python3-aodhclient - allow_downgrade: true - - - name: Initialize config file - shell: | - cat << EOF > /etc/aodh/aodh.conf - [database] - connection = mysql+pymysql://aodh:{{ mysql_project_password }}@{{ hostvars['controller']['ansible_default_ipv4']['address'] }}/aodh - - [DEFAULT] - transport_url = rabbit://openstack:{{ rabbitmq_password }}@{{ hostvars['controller']['ansible_default_ipv4']['address'] }} - auth_strategy = keystone - - [keystone_authtoken] - www_authenticate_uri = http://{{ hostvars['controller']['ansible_default_ipv4']['address'] }}:5000 - auth_url = http://{{ hostvars['controller']['ansible_default_ipv4']['address'] }}:5000 - memcached_servers = {{ hostvars['controller']['ansible_default_ipv4']['address'] }}:11211 - auth_type = password - project_domain_id = default - user_domain_id = default - project_name = service - username = aodh - password = {{ project_identity_password }} - - [service_credentials] - auth_type = password - auth_url = http://{{ hostvars['controller']['ansible_default_ipv4']['address'] }}:5000/v3 - project_domain_id = default - user_domain_id = default - project_name = service - username = aodh - password = {{ project_identity_password }} - interface = internalURL - region_name = RegionOne - EOF - - - name: Sync database - shell: aodh-dbsync - - - name: Start openstack-aodh-api service - systemd: - name: openstack-aodh-api - state: started - enabled: True - - - name: Start openstack-aodh-evaluator service - systemd: - 
name: openstack-aodh-evaluator - state: started - enabled: True - - - name: Start openstack-aodh-notifier service - systemd: - name: openstack-aodh-notifier - state: started - enabled: True - - - name: Start openstack-aodh-listener service - systemd: - name: openstack-aodh-listener - state: started - enabled: True diff --git a/tools/oos/etc/playbooks/ceilometer.yaml b/tools/oos/etc/playbooks/ceilometer.yaml deleted file mode 100644 index 88499af335c37b5b2525b8aa7cf1df39fb2399ed..0000000000000000000000000000000000000000 --- a/tools/oos/etc/playbooks/ceilometer.yaml +++ /dev/null @@ -1,55 +0,0 @@ -- name: Install ceilometer - hosts: controller - become: yes - roles: - - role: init_database - vars: - database: ceilometer - user: ceilometer - - role: create_identity_user - vars: - user: ceilometer - tasks: - - name: Create ceilometer identity service - shell: | - source ~/.admin-openrc - openstack service create --name ceilometer --description "Telemetry" metering - - - name: Install ceilometer package - yum: - name: - - openstack-ceilometer-notification - - openstack-ceilometer-central - - - name: Initialize config file - shell: | - cat << EOF > /etc/ceilometer/ceilometer.conf - [DEFAULT] - transport_url = rabbit://openstack:{{ rabbitmq_password }}@{{ hostvars['controller']['ansible_default_ipv4']['address'] }} - - [service_credentials] - auth_type = password - auth_url = http://{{ hostvars['controller']['ansible_default_ipv4']['address'] }}:5000/v3 - project_domain_id = default - user_domain_id = default - project_name = service - username = ceilometer - password = {{ project_identity_password }} - interface = internalURL - region_name = RegionOne - EOF - - - name: Sync database - shell: ceilometer-upgrade - - - name: Start openstack-ceilometer-notification service - systemd: - name: openstack-ceilometer-notification - state: started - enabled: True - - - name: Start openstack-ceilometer-central service - systemd: - name: openstack-ceilometer-central - state: started - enabled: True diff --git a/tools/oos/etc/playbooks/cinder.yaml b/tools/oos/etc/playbooks/cinder.yaml deleted file mode 100644 index be1b46461154889fe9b766488cfd499bfdc66673..0000000000000000000000000000000000000000 --- a/tools/oos/etc/playbooks/cinder.yaml +++ /dev/null @@ -1,141 +0,0 @@ -# TODO: Add cinder-backup support -- name: Install Cinder controller - hosts: controller - become: yes - roles: - - role: init_database - vars: - database: cinder - user: cinder - - role: create_identity_user - vars: - user: cinder - - role: create_identity_service - vars: - service: cinderv2 - type: volumev2 - description: "OpenStack Block Storage" - endpoint: http://{{ hostvars['controller']['ansible_default_ipv4']['address'] }}:8776/v2/%\(project_id\)s - - role: create_identity_service - vars: - service: cinderv3 - type: volumev3 - description: "OpenStack Block Storage" - endpoint: http://{{ hostvars['controller']['ansible_default_ipv4']['address'] }}:8776/v3/%\(project_id\)s - tasks: - - name: Install cinder package - yum: - name: - - openstack-cinder-api - - openstack-cinder-scheduler - -- name: Install Cinder storage - hosts: storage - become: yes - tasks: - - name: Install cinder package - yum: - name: - - lvm2 - - device-mapper-persistent-data - - scsi-target-utils - - rpcbind - - nfs-utils - - openstack-cinder-volume - - openstack-cinder-backup - - - name: Initialize block device - shell: | - pvcreate /dev/{{ cinder_block_device }} - vgcreate cinder-volumes /dev/{{ cinder_block_device }} - -- name: Config Cinder - hosts: all - 
become: yes - tasks: - - name: Initialize cinder config file - shell: | - cat << EOF > /etc/cinder/cinder.conf - [DEFAULT] - osapi_volume_workers = {{ cinder_api_workers }} - transport_url = rabbit://openstack:{{ rabbitmq_password }}@{{ hostvars['controller']['ansible_default_ipv4']['address'] }} - auth_strategy = keystone - my_ip = {{ ansible_default_ipv4['address'] }} - enabled_backends = lvm - backup_driver=cinder.backup.drivers.nfs.NFSBackupDriver - # backup_share=HOST:PATH - - [database] - connection = mysql+pymysql://cinder:{{ mysql_project_password }}@{{ hostvars['controller']['ansible_default_ipv4']['address'] }}/cinder - - [keystone_authtoken] - www_authenticate_uri = http://{{ hostvars['controller']['ansible_default_ipv4']['address'] }}:5000 - auth_url = http://{{ hostvars['controller']['ansible_default_ipv4']['address'] }}:5000 - memcached_servers = {{ hostvars['controller']['ansible_default_ipv4']['address'] }}:11211 - auth_type = password - project_domain_name = Default - user_domain_name = Default - project_name = service - username = cinder - password = {{ project_identity_password }} - - [oslo_concurrency] - lock_path = /var/lib/cinder/tmp - - [lvm] - volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver - volume_group = cinder-volumes - iscsi_protocol = iscsi - iscsi_helper = tgtadm - EOF - -- name: Complete Cinder controller - hosts: controller - become: yes - tasks: - - name: Sync cinder database - shell: su -s /bin/sh -c "cinder-manage db sync" cinder - - - name: Start openstack-cinder-api service - systemd: - name: openstack-cinder-api - state: started - enabled: True - - - name: Start openstack-cinder-scheduler service - systemd: - name: openstack-cinder-scheduler - state: started - enabled: True - -- name: Complete Cinder storage - hosts: storage - become: yes - tasks: - - name: Update tgtd config file - shell: | - cat >> /etc/tgt/tgtd.conf << EOF - include /var/lib/cinder/volumes/* - EOF - - - name: Start openstack-cinder-volume service - systemd: - name: openstack-cinder-volume - state: started - enabled: True - - - name: Start tgtd service - systemd: - name: tgtd - state: started - enabled: True - -- name: Complete Nova compute - hosts: compute - become: yes - tasks: - - name: Start iscsid service - systemd: - name: iscsid - state: started - enabled: True diff --git a/tools/oos/etc/playbooks/cleanup.yaml b/tools/oos/etc/playbooks/cleanup.yaml deleted file mode 100644 index d5c42cf7c6758fd60b79f00949732184fe5c14b6..0000000000000000000000000000000000000000 --- a/tools/oos/etc/playbooks/cleanup.yaml +++ /dev/null @@ -1,94 +0,0 @@ -- name: Clean up OpenStack environment - hosts: all - become: yes - tasks: - - name: Cleanup block storage - shell: | - vgremove -y cinder-volumes - pvremove /dev/{{ cinder_block_device }} - ignore_errors: yes - - - name: Cleanup interface - shell: | - ip add del {{ default_ext_subnet_gateway }}/24 dev {{ neutron_provider_interface_name }} - ip link set {{ neutron_provider_interface_name }} down - ip tuntap del {{ neutron_provider_interface_name }} mode tap - ignore_errors: yes - - - name: Remove OpenStack related Package - yum: - name: - - openstack-aodh-api - - openstack-aodh-evaluator - - openstack-aodh-notifier - - openstack-aodh-listener - - openstack-aodh-expirer - - openstack-ceilometer-notification - - openstack-ceilometer-central - - openstack-cinder-api - - openstack-cinder-scheduler - - openstack-cinder-volume - - openstack-cinder-backup - - openstack-cyborg - - openstack-glance - - openstack-heat-api - - 
openstack-heat-api-cfn - - openstack-heat-engine - - openstack-dashboard - - openstack-keystone - - openstack-kolla - - openstack-kolla-ansible - - openstack-kolla-plugin - - openstack-kolla-ansible-plugin - - openstack-neutron - - openstack-neutron-linuxbridge - - openstack-neutron-ml2 - - openstack-neutron-metering-agent - - openstack-neutron-linuxbridge - - openstack-nova-api - - openstack-nova-conductor - - openstack-nova-novncproxy - - openstack-nova-scheduler - - openstack-nova-compute - - openstack-placement-api - - openstack-rally - - openstack-rally-plugins - - openstack-swift-proxy - - openstack-swift-account - - openstack-swift-container - - openstack-swift-object - - openstack-tempest - - openstack-trove - - gnocchi-api - - gnocchi-metricd - - mariadb - - mariadb-server - - memcached - - rabbitmq-server - - redis - - httpd - - openstack-release-{{ openstack_release }} - autoremove: true - state: absent - - - name: Cleanup database files - shell: | - rm -rf /var/lib/mysql - ignore_errors: yes - - - name: Cleanup config files - shell: | - rm -rf /etc/nova /etc/keystone /etc/glance /etc/neutron /etc/cinder - rm -rf /etc/aodh /etc/ceilometer /etc/cyborg /etc/gnocchi /etc/heat - rm -rf /etc/kolla /etc/placement /etc/rally /etc/swift /etc/tempest - rm -rf /etc/openstack-dashboard - ignore_errors: yes - - - name: Clean /etc/fstab item - shell: | - sed -i "/\/srv\/node\/.*xfs/d" /etc/fstab - ignore_errors: yes - - - name: Cleanup tempest folder - shell: rm -rf mytest - ignore_errors: yes diff --git a/tools/oos/etc/playbooks/cyborg.yaml b/tools/oos/etc/playbooks/cyborg.yaml deleted file mode 100644 index 387071907f59c4b8e24712b3562f8fb3a88d729c..0000000000000000000000000000000000000000 --- a/tools/oos/etc/playbooks/cyborg.yaml +++ /dev/null @@ -1,85 +0,0 @@ -# TODO: cyborg now is all-in-one mode. Add cluster support in the future. 
-- name: Install Cyborg - hosts: controller - become: yes - roles: - - role: init_database - vars: - database: cyborg - user: cyborg - - role: create_identity_user - vars: - user: cyborg - - role: create_identity_service - vars: - service: cyborg - type: accelerator - description: Acceleration Service - endpoint: http://{{ hostvars['controller']['ansible_default_ipv4']['address'] }}:6666/v1 - tasks: - - name: Install cyborg package - yum: - name: - - openstack-cyborg - - - name: Initialize config file - shell: | - cat << EOF > /etc/cyborg/cyborg.conf - [DEFAULT] - transport_url = rabbit://openstack:{{ rabbitmq_password }}@{{ hostvars['controller']['ansible_default_ipv4']['address'] }}:5672/ - use_syslog = False - state_path = /var/lib/cyborg - debug = True - - [database] - connection = mysql+pymysql://cyborg:{{ mysql_project_password }}@{{ hostvars['controller']['ansible_default_ipv4']['address'] }}/cyborg - - [service_catalog] - project_domain_id = default - user_domain_id = default - project_name = service - password = {{ project_identity_password }} - username = cyborg - auth_url = http://{{ hostvars['controller']['ansible_default_ipv4']['address'] }}:5000/v3 - auth_type = password - - [placement] - project_domain_name = Default - project_name = service - user_domain_name = Default - password = {{ project_identity_password }} - username = placement - auth_url = http://{{ hostvars['controller']['ansible_default_ipv4']['address'] }}:5000 - auth_type = password - - [keystone_authtoken] - memcached_servers = {{ hostvars['controller']['ansible_default_ipv4']['address'] }}:11211 - project_domain_name = Default - project_name = service - user_domain_name = Default - password = {{ project_identity_password }} - username = cyborg - auth_url = http://{{ hostvars['controller']['ansible_default_ipv4']['address'] }}:5000 - auth_type = password - EOF - - - name: Sync database - shell: cyborg-dbsync --config-file /etc/cyborg/cyborg.conf upgrade - - - name: Start openstack-cyborg-api service - systemd: - name: openstack-cyborg-api - state: started - enabled: True - - - name: Start openstack-cyborg-conductor service - systemd: - name: openstack-cyborg-conductor - state: started - enabled: True - - - name: Start openstack-cyborg-agent service - systemd: - name: openstack-cyborg-agent - state: started - enabled: True diff --git a/tools/oos/etc/playbooks/entry.yaml b/tools/oos/etc/playbooks/entry.yaml deleted file mode 100644 index b04153fe3d6c6008c80ad453952ed8a6705b23df..0000000000000000000000000000000000000000 --- a/tools/oos/etc/playbooks/entry.yaml +++ /dev/null @@ -1,72 +0,0 @@ -- import_playbook: pre.yaml -- import_playbook: mariadb.yaml -- import_playbook: rabbitmq.yaml -- import_playbook: memcached.yaml - -- import_playbook: keystone.yaml - when: - - '"keystone" in enabled_service' - -- import_playbook: glance.yaml - when: - - '"glance" in enabled_service' - -- import_playbook: placement.yaml - when: - - '"placement" in enabled_service' - -- import_playbook: nova.yaml - when: - - '"nova" in enabled_service' - -- import_playbook: neutron.yaml - when: - - '"neutron" in enabled_service' - -- import_playbook: cinder.yaml - when: - - '"cinder" in enabled_service' - -- import_playbook: trove.yaml - when: - - '"trove" in enabled_service' - -- import_playbook: horizon.yaml - when: - - '"horizon" in enabled_service' - -- import_playbook: aodh.yaml - when: - - '"aodh" in enabled_service' - -- import_playbook: gnocchi.yaml - when: - - '"gnocchi" in enabled_service' - -- import_playbook: ceilometer.yaml - 
when: - - '"ceilometer" in enabled_service' - -- import_playbook: rally.yaml - when: - - '"rally" in enabled_service' - -- import_playbook: kolla.yaml - when: - - '"kolla" in enabled_service' - -- import_playbook: tempest.yaml - when: - - '"tempest" in enabled_service' - -- import_playbook: cyborg.yaml - when: - - '"cyborg" in enabled_service' - -- import_playbook: heat.yaml - when: - - '"heat" in enabled_service' - -- import_playbook: swift.yaml - when: - - '"swift" in enabled_service' diff --git a/tools/oos/etc/playbooks/glance.yaml b/tools/oos/etc/playbooks/glance.yaml deleted file mode 100644 index be47f4ad5b9787626060ab983f8ad50843c0a1fd..0000000000000000000000000000000000000000 --- a/tools/oos/etc/playbooks/glance.yaml +++ /dev/null @@ -1,60 +0,0 @@ -- name: Install Glance - hosts: controller - become: yes - roles: - - role: init_database - vars: - database: glance - user: glance - - role: create_identity_user - vars: - user: glance - - role: create_identity_service - vars: - service: glance - type: image - description: OpenStack Image - endpoint: http://{{ hostvars['controller']['ansible_default_ipv4']['address'] }}:9292 - tasks: - - name: Install glance package - yum: - name: - - openstack-glance - - - name: Initialize config file - shell: | - cat << EOF > /etc/glance/glance.conf - [DEFAULT] - workers = {{ glance_api_workers }} - - [database] - connection = mysql+pymysql://glance:{{ mysql_project_password }}@{{ hostvars['controller']['ansible_default_ipv4']['address'] }}/glance - - [keystone_authtoken] - www_authenticate_uri = http://{{ hostvars['controller']['ansible_default_ipv4']['address'] }}:5000 - auth_url = http://{{ hostvars['controller']['ansible_default_ipv4']['address'] }}:5000 - memcached_servers = {{ hostvars['controller']['ansible_default_ipv4']['address'] }}:11211 - auth_type = password - project_domain_name = Default - user_domain_name = Default - project_name = service - username = glance - password = {{ project_identity_password }} - - [paste_deploy] - flavor = keystone - - [glance_store] - stores = file,http - default_store = file - filesystem_store_datadir = /var/lib/glance/images/ - EOF - - - name: Sync database - shell: su -s /bin/sh -c "glance-manage db_sync" glance - - - name: Start openstack-glance-api service - systemd: - name: openstack-glance-api - state: started - enabled: True diff --git a/tools/oos/etc/playbooks/gnocchi.yaml b/tools/oos/etc/playbooks/gnocchi.yaml deleted file mode 100644 index c16be85baaf1ba97dc337e4cb8cd71e749dbec25..0000000000000000000000000000000000000000 --- a/tools/oos/etc/playbooks/gnocchi.yaml +++ /dev/null @@ -1,83 +0,0 @@ -- name: Install gnocchi - hosts: controller - become: yes - roles: - - role: init_database - vars: - database: gnocchi - user: gnocchi - - role: create_identity_user - vars: - user: gnocchi - - role: create_identity_service - vars: - service: gnocchi - type: metric - description: Metric Service - endpoint: http://{{ hostvars['controller']['ansible_default_ipv4']['address'] }}:8041 - tasks: - - name: Install gnocchi package - yum: - name: - - redis - - python3-uWSGI - - openstack-gnocchi-api - - openstack-gnocchi-metricd - - python3-gnocchiclient - - - name: Update redis config file - shell: | - cat << EOF >> /etc/redis.conf - bind {{ hostvars['controller']['ansible_default_ipv4']['address'] }} - EOF - - - name: Initialize config file - shell: | - cat << EOF > /etc/gnocchi/gnocchi.conf - [api] - auth_mode = keystone - port = 8041 - uwsgi_mode = http-socket - - [keystone_authtoken] - auth_type = password - 
auth_url = http://{{ hostvars['controller']['ansible_default_ipv4']['address'] }}:5000/v3 - project_domain_name = Default - user_domain_name = Default - project_name = service - username = gnocchi - password = {{ project_identity_password }} - interface = internalURL - region_name = RegionOne - - [indexer] - url = mysql+pymysql://gnocchi:{{ mysql_project_password }}@{{ hostvars['controller']['ansible_default_ipv4']['address'] }}/gnocchi - - [storage] - # coordination_url is not required but specifying one will improve - # performance with better workload division across workers. - coordination_url = redis://{{ hostvars['controller']['ansible_default_ipv4']['address'] }}:6379 - file_basepath = /var/lib/gnocchi - driver = file - EOF - - - name: Sync database - shell: gnocchi-upgrade - - - name: Start redis service - systemd: - name: redis - state: started - enabled: True - - - name: Start openstack-gnocchi-api service - systemd: - name: openstack-gnocchi-api - state: started - enabled: True - - - name: Start openstack-gnocchi-metricd service - systemd: - name: openstack-gnocchi-metricd - state: started - enabled: True diff --git a/tools/oos/etc/playbooks/heat.yaml b/tools/oos/etc/playbooks/heat.yaml deleted file mode 100644 index 02d69702f388d62c550f394b6be87b9e0a40ddec..0000000000000000000000000000000000000000 --- a/tools/oos/etc/playbooks/heat.yaml +++ /dev/null @@ -1,95 +0,0 @@ -- name: Install Heat - hosts: controller - become: yes - roles: - - role: init_database - vars: - database: heat - user: heat - - role: create_identity_user - vars: - user: heat - - role: create_identity_service - vars: - service: heat - type: orchestration - description: Orchestration - endpoint: http://{{ hostvars['controller']['ansible_default_ipv4']['address'] }}:8004/v1/%\(tenant_id\)s - - role: create_identity_service - vars: - service: heat-cfn - type: cloudformation - description: Orchestration - endpoint: http://{{ hostvars['controller']['ansible_default_ipv4']['address'] }}:8000/v1 - tasks: - - name: Add stack identity resource - shell: | - source ~/.admin-openrc - openstack user create --domain heat --password {{ project_identity_password }} heat_domain_admin - openstack role add --domain heat --user-domain heat --user heat_domain_admin admin - openstack role create heat_stack_owner - openstack role create heat_stack_user - - - name: Install heat package - yum: - name: - - openstack-heat-api - - openstack-heat-api-cfn - - openstack-heat-engine - - - name: Initialize config file - shell: | - cat << EOF > /etc/heat/heat.conf - [DEFAULT] - transport_url = rabbit://openstack:{{ rabbitmq_password }}@{{ hostvars['controller']['ansible_default_ipv4']['address'] }} - heat_metadata_server_url = http://{{ hostvars['controller']['ansible_default_ipv4']['address'] }}:8000 - heat_waitcondition_server_url = http://{{ hostvars['controller']['ansible_default_ipv4']['address'] }}:8000/v1/waitcondition - stack_domain_admin = heat_domain_admin - stack_domain_admin_password = {{ project_identity_password }} - stack_user_domain_name = heat - - [database] - connection = mysql+pymysql://heat:{{ mysql_project_password }}@{{ hostvars['controller']['ansible_default_ipv4']['address'] }}/heat - - [keystone_authtoken] - www_authenticate_uri = http://{{ hostvars['controller']['ansible_default_ipv4']['address'] }}:5000 - auth_url = http://{{ hostvars['controller']['ansible_default_ipv4']['address'] }}:5000 - memcached_servers = {{ hostvars['controller']['ansible_default_ipv4']['address'] }}:11211 - auth_type = password - 
project_domain_name = default - user_domain_name = default - project_name = service - username = heat - password = {{ project_identity_password }} - - [trustee] - auth_type = password - auth_url = http://{{ hostvars['controller']['ansible_default_ipv4']['address'] }}:5000 - username = heat - password = {{ project_identity_password }} - user_domain_name = default - - [clients_keystone] - auth_uri = http://{{ hostvars['controller']['ansible_default_ipv4']['address'] }}:5000 - EOF - - - name: Sync database - shell: su -s /bin/sh -c "heat-manage db_sync" heat - - - name: Start openstack-heat-api service - systemd: - name: openstack-heat-api - state: started - enabled: True - - - name: Start openstack-heat-api-cfn service - systemd: - name: openstack-heat-api-cfn - state: started - enabled: True - - - name: Start openstack-heat-engine service - systemd: - name: openstack-heat-engine - state: started - enabled: True diff --git a/tools/oos/etc/playbooks/horizon.yaml b/tools/oos/etc/playbooks/horizon.yaml deleted file mode 100644 index 22d490213e6bdfbd4f93351d9753b05e913ff05a..0000000000000000000000000000000000000000 --- a/tools/oos/etc/playbooks/horizon.yaml +++ /dev/null @@ -1,42 +0,0 @@ -- name: Install openstack dashboard - hosts: dashboard - become: yes - tasks: - - name: Install horizon package - yum: - name: - - openstack-dashboard - - - name: Initialize dashboard config file - shell: | - sed -i "s/^OPENSTACK_HOST.*/OPENSTACK_HOST = \"{{ hostvars['controller']['ansible_default_ipv4']['address'] }}\"/" /etc/openstack-dashboard/local_settings - sed -i "s/^ALLOWED_HOSTS.*/ALLOWED_HOSTS = [\"localhost\", \"{{ horizon_allowed_host }}\"]/" /etc/openstack-dashboard/local_settings - sed -i "s/^OPENSTACK_KEYSTONE_URL.*/OPENSTACK_KEYSTONE_URL = \"http:\/\/%s:5000\/v3\" % OPENSTACK_HOST/" /etc/openstack-dashboard/local_settings - - - name: Update dashboard config file - blockinfile: - path: /etc/openstack-dashboard/local_settings - insertafter: EOF - block: | - SESSION_ENGINE = 'django.contrib.sessions.backends.cache' - CACHES = { - 'default': { - 'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache', - 'LOCATION': '{{ hostvars['controller']['ansible_default_ipv4']['address'] }}:11211', - } - } - OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True - OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default" - OPENSTACK_KEYSTONE_DEFAULT_ROLE = "member" - - OPENSTACK_API_VERSIONS = { - "identity": 3, - "image": 2, - "volume": 3, - } - - - name: Restart httpd service for dashboard - systemd: - name: httpd - state: restarted - enabled: True diff --git a/tools/oos/etc/playbooks/init.yaml b/tools/oos/etc/playbooks/init.yaml deleted file mode 100644 index 5274e4684c55fe2125ee0a24f0564ab37134da0a..0000000000000000000000000000000000000000 --- a/tools/oos/etc/playbooks/init.yaml +++ /dev/null @@ -1,202 +0,0 @@ -- name: Initialize test resource - hosts: controller - become: yes - tasks: - - name: Install required package - yum: - name: - - wget - - - name: Create test flavor - shell: | - source ~/.admin-openrc - openstack flavor create --disk 1 --vcpus 2 --ram 1024 --id 1 --public my-flavor - openstack flavor create --disk 1 --vcpus 2 --ram 1024 --id 2 --public my-flavor-alt - - - name: Download test image - shell: | - source ~/.admin-openrc - if [[ `uname -m` == 'aarch64' ]];then - wget http://download.cirros-cloud.net/0.5.2/cirros-0.5.2-aarch64-disk.img -O cirros-0.5.2.img - else - wget http://download.cirros-cloud.net/0.5.2/cirros-0.5.2-x86_64-disk.img -O cirros-0.5.2.img - fi - - - name: Create test image - shell: | 
- source ~/.admin-openrc - openstack image create --disk-format qcow2 --container-format bare --file ./cirros-0.5.2.img --public my-image -c id -f value - register: image_id - - - name: Create test image alt - shell: | - source ~/.admin-openrc - openstack image create --disk-format qcow2 --container-format bare --file ./cirros-0.5.2.img --public my-image-alt -c id -f value - register: image_id_alt - - - name: Create test public network - shell: | - source ~/.admin-openrc - openstack network create --external --share public-network --provider-network-type flat --provider-physical-network provider --default -c id -f value - register: public_network_id - - - name: Create test role - shell: | - source ~/.admin-openrc - openstack role create ResellerAdmin - - - name: Load glance metadata - shell: glance-manage db_load_metadefs - - - name: Create other network resource - shell: | - source ~/.admin-openrc - # Create default shared subnet pool for tempest test - openstack subnet pool create --pool-prefix 192.168.253.0/24 --default --share --default-prefix-length 26 default_subnet_pool - # Init the ext subnet - openstack subnet create --subnet-range {{ default_ext_subnet_range }} --gateway {{ default_ext_subnet_gateway }} --network public-network public-subnet - # Init the private network - openstack network create --internal --share private-network - openstack subnet create --subnet-range 172.188.0.0/16 --network private-network private-subnet - # Connect the simulative ext network with private network via a router - openstack router create my-router - openstack router set my-router --external-gateway {{ public_network_id.stdout }} - openstack router add subnet my-router private-subnet - # Update security group rule - openstack security group rule create default --ingress --protocol icmp - openstack security group rule create default --ingress --protocol tcp --dst-port 22 - - - name: Genereate tempest folder - shell: | - tempest init mytest - - cat << EOF > /root/mytest/etc/tempest-cirros.conf - [DEFAULT] - log_dir = /root/mytest/logs - log_file = tempest.log - - [auth] - admin_username = admin - admin_password = root - admin_project_name = admin - admin_domain_name = Default - - [identity] - auth_version = v3 - uri_v3 = http://{{ hostvars['controller']['ansible_default_ipv4']['address'] }}:5000/v3 - - [identity-feature-enabled] - security_compliance = true - project_tags = true - application_credentials = true - - [compute] - flavor_ref = 1 - flavor_ref_alt = 2 - image_ref = {{ image_id.stdout }} - image_ref_alt = {{ image_id_alt.stdout }} - min_microversion = 2.1 - max_microversion = 2.79 - min_compute_nodes = 2 - fixed_network_name = private-network - build_timeout = 120 - - [scenario] - img_file = /root/cirros-0.5.2.img - img_container_format = bare - img_disk_format = qcow2 - - [compute-feature-enabled] - change_password = false - swap_volume = true - volume_multiattach = true - resize = true - #volume_backed_live_migration = true - #block_migration_for_live_migration = true - #block_migrate_cinder_iscsi = true - #scheduler_enabled_filters = DifferentHostFilter - vnc_console = true - live_migration = false - - [oslo_concurrency] - lock_path = /root/mytest/tempest_lock - - [volume] - min_microversion = 3.0 - max_microversion = 3.59 - backend_names = lvm - build_timeout = 120 - - [volume-feature-enabled] - backup = false - multi_backend = true - manage_volume = true - manage_snapshot = true - extend_attached_volume = true - - [service_available] - nova = true - cinder = true - neutron = true - 
glance = true - horizon = true - heat = true - placement = true - swift = true - keystone = true - - [placement] - min_microversion = 1.0 - max_microversion = 1.36 - - [network] - public_network_id = {{ public_network_id.stdout }} - project_network_cidr = 172.189.0.0/16 - floating_network_name = public-network - build_timeout = 120 - - [network-feature-enabled] - port_security = true - ipv6_subnet_attributes = true - qos_placement_physnet = true - - [image] - build_timeout = 120 - - [image-feature-enabled] - import_image = true - - [object-storage-feature-enabled] - container_sync = false - - [validation] - image_ssh_user = cirros - image_ssh_password = gocubsgo - image_alt_ssh_user = cirros - image_alt_ssh_password = gocubsgo - ping_timeout = 60 - ssh_timeout = 120 - - [debug] - trace_requests = .* - EOF - - # Some tests fail on openEuler due to some error that unable to repair at - # the comment. Skip them by hand. - - name: Generate black list for cirros - shell: | - cat << EOF > /root/mytest/black_list_file - ^tempest.api.object_storage.test_container_sync_middleware - ^tempest.scenario.test_encrypted_cinder_volumes.TestEncryptedCinderVolumes - ^tempest.scenario.test_volume_boot_pattern.TestVolumeBootPattern.test_boot_server_from_encrypted_volume_luks - ^tempest.scenario.test_volume_boot_pattern.TestVolumeBootPattern.test_volume_boot_pattern - ^tempest.scenario.test_stamp_pattern.TestStampPattern.test_stamp_pattern - ^tempest.scenario.test_server_basic_ops.TestServerBasicOps.test_server_basic_ops - ^tempest.api.compute.servers.test_device_tagging.TaggedBootDevicesTest.test_tagged_attachment - ^tempest.api.compute.servers.test_device_tagging.TaggedBootDevicesTest.test_tagged_boot_devices - ^tempest.api.compute.servers.test_server_actions.ServerActionsTestJSON.test_change_server_password - EOF - - - name: Genereate tempest folder - debug: - msg: "The environment is ready for test, please login controller and run `tempest run` command in mytest folder. Or run 'oos env test' command for tempest test." 
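For reference, the manual run mentioned in the final message above reduces to the sketch below, assuming the `mytest` workspace, `tempest-cirros.conf`, and `black_list_file` generated by this playbook (the `test.yaml` playbook further down automates the same steps):

```shell
# Run the cirros-based suite from the workspace created by init.yaml,
# skipping the cases listed in the generated exclude list.
cd /root/mytest
tempest run --config-file etc/tempest-cirros.conf --exclude-list black_list_file
```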
diff --git a/tools/oos/etc/playbooks/keystone.yaml b/tools/oos/etc/playbooks/keystone.yaml deleted file mode 100644 index ebd07532941cf78ed1cbaf534656615ab9afd8ab..0000000000000000000000000000000000000000 --- a/tools/oos/etc/playbooks/keystone.yaml +++ /dev/null @@ -1,80 +0,0 @@ -- name: Install openstack-keystone - hosts: controller - become: yes - roles: - - role: init_database - vars: - database: keystone - user: keystone - tasks: - - name: Install openstack-keystone - yum: - name: - - openstack-keystone - - httpd - - mod_wsgi - - python3-openstackclient - - - name: Update config file - shell: | - cat << EOF > /etc/keystone/keystone.conf - [database] - connection = mysql+pymysql://keystone:{{ mysql_project_password }}@{{ hostvars['controller']['ansible_default_ipv4']['address'] }}/keystone - - [token] - provider = fernet - - [security_compliance] - unique_last_password_count = 2 - lockout_failure_attempts = 2 - lockout_duration = 5 - EOF - - - name: Sync database schema - shell: keystone-manage db_sync - - - name: Generate fernet keys - shell: | - keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone - keystone-manage credential_setup --keystone-user keystone --keystone-group keystone - - - name: Bootstrap identity resource - shell: | - keystone-manage bootstrap --bootstrap-password {{ project_identity_password }} \ - --bootstrap-admin-url http://{{ hostvars['controller']['ansible_default_ipv4']['address'] }}:5000/v3/ \ - --bootstrap-internal-url http://{{ hostvars['controller']['ansible_default_ipv4']['address'] }}:5000/v3/ \ - --bootstrap-public-url http://{{ hostvars['controller']['ansible_default_ipv4']['address'] }}:5000/v3/ \ - --bootstrap-region-id RegionOne - - - name: Update httpd ServerName - shell: sed -i "s/ServerName.*/ServerName {{ hostvars['controller']['ansible_default_ipv4']['address'] }}/" /etc/httpd/conf/httpd.conf - - - name: Enable keystone httpd app - file: - src: /usr/share/keystone/wsgi-keystone.conf - dest: /etc/httpd/conf.d/wsgi-keystone.conf - state: link - - - name: Start httpd service - systemd: - name: httpd - state: restarted - enabled: True - - - name: Generate admin environment file - shell: | - cat << EOF > ~/.admin-openrc - export OS_PROJECT_DOMAIN_NAME=Default - export OS_USER_DOMAIN_NAME=Default - export OS_PROJECT_NAME=admin - export OS_USERNAME=admin - export OS_PASSWORD={{ project_identity_password }} - export OS_AUTH_URL=http://{{ hostvars['controller']['ansible_default_ipv4']['address'] }}:5000/v3 - export OS_IDENTITY_API_VERSION=3 - export OS_IMAGE_API_VERSION=2 - EOF - - - name: Create project 'service' - shell: | - source ~/.admin-openrc - openstack project create --domain default --description "Service Project" service diff --git a/tools/oos/etc/playbooks/kolla.yaml b/tools/oos/etc/playbooks/kolla.yaml deleted file mode 100644 index 56fb54e7410851592e4a7b654bcfa26a7c8d6619..0000000000000000000000000000000000000000 --- a/tools/oos/etc/playbooks/kolla.yaml +++ /dev/null @@ -1,16 +0,0 @@ -- name: Install Kolla - hosts: kolla - become: yes - tasks: - - name: Install kolla package - yum: - name: - - openstack-kolla - - openstack-kolla-ansible - - - name: Install kolla openEuler plugin package - yum: - name: - - openstack-kolla-plugin - - openstack-kolla-ansible-plugin - when: openeuler_plugin|default(false)|bool diff --git a/tools/oos/etc/playbooks/mariadb.yaml b/tools/oos/etc/playbooks/mariadb.yaml deleted file mode 100644 index 9a7c4ec9ac6b98392b19807105e71437d45d076d..0000000000000000000000000000000000000000 ---
a/tools/oos/etc/playbooks/mariadb.yaml +++ /dev/null @@ -1,53 +0,0 @@ -- name: Install Database - hosts: controller - become: yes - tasks: - - name: Install Database package - yum: - name: - - mariadb - - mariadb-server - - python3-PyMySQL - - - name: Config Database - shell: | - cat << EOF > /etc/my.cnf.d/openstack.cnf - [mysqld] - bind-address = {{ hostvars['controller']['ansible_default_ipv4']['address'] }} - default-storage-engine = innodb - innodb_file_per_table = on - max_connections = 4096 - collation-server = utf8_general_ci - character-set-server = utf8 - EOF - - - name: Start Database - systemd: - name: mariadb - state: started - enabled: True - - - name: Sets the root password - mysql_user: - login_user: root - name: root - password: "{{ mysql_root_password }}" - login_unix_socket: "/var/lib/mysql/mysql.sock" - ignore_errors: yes - - - name: Deletes anonymous MySQL server user - mysql_user: - login_user: root - login_password: "{{ mysql_root_password }}" - name: '' - host_all: yes - state: absent - login_unix_socket: "/var/lib/mysql/mysql.sock" - - - name: Removes the MySQL test database - mysql_db: - login_user: root - login_password: "{{ mysql_root_password }}" - name: test - state: absent - login_unix_socket: "/var/lib/mysql/mysql.sock" diff --git a/tools/oos/etc/playbooks/memcached.yaml b/tools/oos/etc/playbooks/memcached.yaml deleted file mode 100644 index 5ee4ab08cfe280d5bf8e2f7c1d6166c08f7cf2a4..0000000000000000000000000000000000000000 --- a/tools/oos/etc/playbooks/memcached.yaml +++ /dev/null @@ -1,18 +0,0 @@ -- name: Install Memcached - hosts: controller - become: yes - tasks: - - name: Install Memcached package - yum: - name: - - memcached - - python3-memcached - - - name: Config Memcached - shell: sed -i "s/OPTIONS.*/OPTIONS=\"-l 127.0.0.1,::1,"{{ hostvars['controller']['ansible_default_ipv4']['address'] }}"\"/" /etc/sysconfig/memcached - - - name: Start Memcached - systemd: - name: memcached - state: started - enabled: True diff --git a/tools/oos/etc/playbooks/neutron.yaml b/tools/oos/etc/playbooks/neutron.yaml deleted file mode 100644 index ad32bd15551d47491a227cce48d93652a942723c..0000000000000000000000000000000000000000 --- a/tools/oos/etc/playbooks/neutron.yaml +++ /dev/null @@ -1,223 +0,0 @@ -- name: Install Neutron controller - hosts: controller - become: yes - roles: - - role: init_database - vars: - database: neutron - user: neutron - - role: create_identity_user - vars: - user: neutron - - role: create_identity_service - vars: - service: neutron - type: network - description: "OpenStack Networking" - endpoint: http://{{ hostvars['controller']['ansible_default_ipv4']['address'] }}:9696 - tasks: - - name: Install neutron package - yum: - name: - - openstack-neutron - - openstack-neutron-linuxbridge - - openstack-neutron-ml2 - - openstack-neutron-metering-agent - - ebtables - - ipset - - - name: Initialize l3 config file - shell: | - cat << EOF > /etc/neutron/l3_agent.ini - [DEFAULT] - interface_driver = linuxbridge - EOF - - - name: Initialize DHCP config file - shell: | - cat << EOF > /etc/neutron/dhcp_agent.ini - [DEFAULT] - interface_driver = linuxbridge - dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq - enable_isolated_metadata = true - EOF - - - name: Initialize Metadata config file - shell: | - cat << EOF > /etc/neutron/metadata_agent.ini - [DEFAULT] - nova_metadata_host = {{ hostvars['controller']['ansible_default_ipv4']['address'] }} - metadata_proxy_shared_secret = secret - EOF - -- name: Install Neutron compute - hosts: compute - become: yes - tasks: 
- - name: Install neutron compute package - yum: - name: - - openstack-neutron-linuxbridge - - ebtables - - ipset - -- name: Initialize the needed neutron config files - hosts: all - become: yes - tasks: - - name: Initialize neutron config file - shell: | - cat << EOF > /etc/neutron/neutron.conf - [database] - connection = mysql+pymysql://neutron:{{ mysql_project_password }}@{{ hostvars['controller']['ansible_default_ipv4']['address'] }}/neutron - - [DEFAULT] - core_plugin = ml2 - service_plugins = router, metering - allow_overlapping_ips = true - transport_url = rabbit://openstack:{{ rabbitmq_password }}@{{ hostvars['controller']['ansible_default_ipv4']['address'] }} - auth_strategy = keystone - notify_nova_on_port_status_changes = true - notify_nova_on_port_data_changes = true - api_workers = {{ neutron_api_workers }} - - [keystone_authtoken] - www_authenticate_uri = http://{{ hostvars['controller']['ansible_default_ipv4']['address'] }}:5000 - auth_url = http://{{ hostvars['controller']['ansible_default_ipv4']['address'] }}:5000 - memcached_servers = {{ hostvars['controller']['ansible_default_ipv4']['address'] }}:11211 - auth_type = password - project_domain_name = Default - user_domain_name = Default - project_name = service - username = neutron - password = {{ project_identity_password }} - - [nova] - auth_url = http://{{ hostvars['controller']['ansible_default_ipv4']['address'] }}:5000 - auth_type = password - project_domain_name = Default - user_domain_name = Default - region_name = RegionOne - project_name = service - username = nova - password = {{ project_identity_password }} - - [oslo_concurrency] - lock_path = /var/lib/neutron/tmp - EOF - - - name: Initialize ml2 config file - shell: | - cat << EOF > /etc/neutron/plugins/ml2/ml2_conf.ini - [ml2] - type_drivers = flat,vlan,vxlan - tenant_network_types = vxlan - mechanism_drivers = linuxbridge,l2population - extension_drivers = port_security - - [ml2_type_flat] - flat_networks = provider - - [ml2_type_vxlan] - vni_ranges = 1:1000 - - [securitygroup] - enable_ipset = true - EOF - - ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini - -- name: Initialize linux-bridge config file - hosts: all - become: yes - tasks: - - name: set key - set_fact: - interface_key: "{{ neutron_dataplane_interface_name }}" - - - name: Initialize linux-bridge config file - shell: | - cat << EOF > /etc/neutron/plugins/ml2/linuxbridge_agent.ini - [vxlan] - enable_vxlan = true - local_ip = {{ ansible_facts[interface_key]['ipv4']['address'] }} - l2_population = true - - [securitygroup] - enable_security_group = true - firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver - EOF - -- name: Initialize linux-bridge config file for external access via network node - hosts: controller - become: yes - tasks: - - name: Initialize linux-bridge config file - shell: | - cat << EOF >> /etc/neutron/plugins/ml2/linuxbridge_agent.ini - [linux_bridge] - physical_interface_mappings = provider:{{ neutron_provider_interface_name }} - EOF - -- name: Prepare the external linux taps on network node - hosts: controller - become: yes - tasks: - - name: Create and Init the external tap on network node - shell: | - ip tuntap add {{ neutron_provider_interface_name }} mode tap - ip link set {{ neutron_provider_interface_name }} up - ip add add {{ default_ext_subnet_gateway }}/24 dev {{ neutron_provider_interface_name }} - -- name: Complete Neutron controller install - hosts: controller - become: yes - tasks: - - name: Sync database - shell: su -s 
/bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron - - - name: Restart openstack-nova-api - systemd: - name: openstack-nova-api - state: restarted - enabled: True - - - name: Start neutron-server - systemd: - name: neutron-server - state: started - enabled: True - - - name: Start neutron-dhcp-agent - systemd: - name: neutron-dhcp-agent - state: started - enabled: True - - - name: Start neutron-metadata-agent - systemd: - name: neutron-metadata-agent - state: started - enabled: True - - - name: Start neutron-l3-agent - systemd: - name: neutron-l3-agent - state: started - enabled: True - - - name: Start neutron-metering-agent - systemd: - name: neutron-metering-agent - state: started - enabled: True - -- name: Complete Neutron compute install - hosts: all - become: yes - tasks: - - name: Start neutron-linuxbridge-agent - systemd: - name: neutron-linuxbridge-agent - state: started - enabled: True diff --git a/tools/oos/etc/playbooks/nova.yaml b/tools/oos/etc/playbooks/nova.yaml deleted file mode 100644 index 2ccf01d41b9dfab0de737e8cf14576cebe9b4726..0000000000000000000000000000000000000000 --- a/tools/oos/etc/playbooks/nova.yaml +++ /dev/null @@ -1,287 +0,0 @@ -- name: Install Nova controller - hosts: controller - become: yes - roles: - - role: init_database - vars: - database: nova - user: nova - - role: init_database - vars: - database: nova_api - user: nova - - role: init_database - vars: - database: nova_cell0 - user: nova - - role: create_identity_user - vars: - user: nova - - role: create_identity_service - vars: - service: nova - type: compute - description: "OpenStack Compute" - endpoint: http://{{ hostvars['controller']['ansible_default_ipv4']['address'] }}:8774/v2.1 - tasks: - - name: Install nova package - yum: - name: - - openstack-nova-api - - openstack-nova-conductor - - openstack-nova-novncproxy - - openstack-nova-scheduler - -- name: Install Nova compute - hosts: compute - become: yes - tasks: - - name: Install nova-compute package - yum: - name: - - openstack-nova-compute - - dmidecode - - - name: Install edk2 for aarch64 - yum: - name: - - edk2-aarch64 - when: ansible_architecture == "aarch64" - - - name: Check node architecture - shell: | - mkdir -p /etc/qemu/firmware - cat << EOF > /etc/qemu/firmware/edk2-aarch64.json - { - "description": "UEFI firmware for ARM64 virtual machines", - "interface-types": [ - "uefi" - ], - "mapping": { - "device": "flash", - "executable": { - "filename": "/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw", - "format": "raw" - }, - "nvram-template": { - "filename": "/usr/share/edk2/aarch64/vars-template-pflash.raw", - "format": "raw" - } - }, - "targets": [ - { - "architecture": "aarch64", - "machines": [ - "virt-*" - ] - } - ], - "features": [ - ], - "tags": [ - ] - } - EOF - when: ansible_architecture == "aarch64" - -- name: Config mutual trust for nova - hosts: compute - become: yes - tasks: - - name: Change login shell for nova - shell: | - usermod -s /bin/bash nova - - name: Create directory - file: - path: /var/lib/nova/.ssh - owner: nova - group: nova - mode: 0755 - state: directory - - name: Copy file for hosts - copy: - src: "{{ item.src }}" - dest: "{{ item.dest }}" - mode: "{{ item.mode }}" - owner: nova - group: nova - remote_src: yes - with_items: - - { src: '/root/.ssh/id_rsa',dest: '/var/lib/nova/.ssh/id_rsa', mode: '0600'} - - { src: '/root/.ssh/authorized_keys',dest: '/var/lib/nova/.ssh/authorized_keys', mode: '0644'} - - name: Initialize 
config file - shell: | - cat << EOF > /var/lib/nova/.ssh/config - Host * - StrictHostKeyChecking no - EOF - -- name: Init config file - hosts: all - become: yes - tasks: - - name: Initialize config file - shell: | - cat << EOF > /etc/nova/nova.conf - [DEFAULT] - osapi_compute_workers = {{ nova_api_workers }} - metadata_workers = {{ nova_metadata_api_workers }} - enabled_apis = osapi_compute,metadata - transport_url = rabbit://openstack:{{ rabbitmq_password }}@{{ hostvars['controller']['ansible_default_ipv4']['address'] }}:5672/ - my_ip = {{ ansible_default_ipv4['address'] }} - use_neutron = true - firewall_driver = nova.virt.firewall.NoopFirewallDriver - compute_driver=libvirt.LibvirtDriver - instances_path = /var/lib/nova/instances/ - lock_path = /var/lib/nova/tmp - - [api_database] - connection = mysql+pymysql://nova:{{ mysql_project_password }}@{{ hostvars['controller']['ansible_default_ipv4']['address'] }}/nova_api - - [database] - connection = mysql+pymysql://nova:{{ mysql_project_password }}@{{ hostvars['controller']['ansible_default_ipv4']['address'] }}/nova - - [api] - auth_strategy = keystone - - [keystone_authtoken] - www_authenticate_uri = http://{{ hostvars['controller']['ansible_default_ipv4']['address'] }}:5000/ - auth_url = http://{{ hostvars['controller']['ansible_default_ipv4']['address'] }}:5000/ - memcached_servers = {{ hostvars['controller']['ansible_default_ipv4']['address'] }}:11211 - auth_type = password - project_domain_name = Default - user_domain_name = Default - project_name = service - username = nova - password = {{ project_identity_password }} - - [vnc] - enabled = true - server_listen = {{ ansible_default_ipv4['address'] }} - server_proxyclient_address = {{ ansible_default_ipv4['address'] }} - novncproxy_base_url = http://{{ hostvars['controller']['ansible_default_ipv4']['address'] }}:6080/vnc_auto.html - - [glance] - api_servers = http://{{ hostvars['controller']['ansible_default_ipv4']['address'] }}:9292 - - [oslo_concurrency] - lock_path = /var/lib/nova/tmp - - [placement] - region_name = RegionOne - project_domain_name = Default - project_name = service - auth_type = password - user_domain_name = Default - auth_url = http://{{ hostvars['controller']['ansible_default_ipv4']['address'] }}:5000/v3 - username = placement - password = {{ project_identity_password }} - - [neutron] - auth_url = http://{{ hostvars['controller']['ansible_default_ipv4']['address'] }}:5000 - auth_type = password - project_domain_name = default - user_domain_name = default - region_name = RegionOne - project_name = service - username = neutron - password = {{ project_identity_password }} - service_metadata_proxy = true - metadata_proxy_shared_secret = secret - - [conductor] - workers = {{ nova_conductor_workers }} - - [scheduler] - workers = {{ nova_scheduler_workers }} - EOF - -# TODO: add kvm support -- name: Update Nova compute - hosts: compute - become: yes - tasks: - - name: Update config file - shell: | - cat << EOF >> /etc/nova/nova.conf - - [libvirt] - virt_type = qemu - num_pcie_ports = 28 - EOF - if [[ `uname -m` == 'aarch64' ]];then - cat << EOF >> /etc/nova/nova.conf - cpu_mode = custom - cpu_model = cortex-a72 - EOF - - mkdir -p /usr/share/AAVMF - chown nova:nova /usr/share/AAVMF - - ln -s /usr/share/edk2/aarch64/QEMU_EFI-pflash.raw /usr/share/AAVMF/AAVMF_CODE.fd - ln -s /usr/share/edk2/aarch64/vars-template-pflash.raw /usr/share/AAVMF/AAVMF_VARS.fd - - cat << EOF >> /etc/libvirt/qemu.conf - nvram = ["/usr/share/AAVMF/AAVMF_CODE.fd:/usr/share/AAVMF/AAVMF_VARS.fd", - 
"/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw:/usr/share/edk2/aarch64/vars-template-pflash.raw"] - EOF - fi - -- name: Complete Nova controller install - hosts: controller - become: yes - tasks: - - name: Sync database - shell: | - su -s /bin/sh -c "nova-manage api_db sync" nova - su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova - su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova - su -s /bin/sh -c "nova-manage db sync" nova - su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova - - - name: Start openstack-nova-api service - systemd: - name: openstack-nova-api - state: started - enabled: True - - - name: Start openstack-nova-scheduler service - systemd: - name: openstack-nova-scheduler - state: started - enabled: True - - - name: Start openstack-nova-conductor service - systemd: - name: openstack-nova-conductor - state: started - enabled: True - - - name: Start openstack-nova-novncproxy service - systemd: - name: openstack-nova-novncproxy - state: started - enabled: True - -- name: Complete Nova compute install - hosts: compute - become: yes - tasks: - - name: Start libvirtd service - systemd: - name: libvirtd - state: started - enabled: True - - - name: Start openstack-nova-compute service - systemd: - name: openstack-nova-compute - state: started - enabled: True - -- name: Discover compute node - hosts: controller - become: yes - tasks: - - name: Discover compute node - shell: su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova diff --git a/tools/oos/etc/playbooks/placement.yaml b/tools/oos/etc/playbooks/placement.yaml deleted file mode 100644 index a0206444c2e3d2bdad92ee6f36b488d0821c9a26..0000000000000000000000000000000000000000 --- a/tools/oos/etc/playbooks/placement.yaml +++ /dev/null @@ -1,52 +0,0 @@ -- name: Install Placement - hosts: controller - become: yes - roles: - - role: init_database - vars: - database: placement - user: placement - - role: create_identity_user - vars: - user: placement - - role: create_identity_service - vars: - service: placement - type: placement - description: Placement API - endpoint: http://{{ hostvars['controller']['ansible_default_ipv4']['address'] }}:8778 - tasks: - - name: Install placement package - yum: - name: - - openstack-placement-api - - python3-osc-placement - - - name: Initialize config file - shell: | - cat << EOF > /etc/placement/placement.conf - [placement_database] - connection = mysql+pymysql://placement:{{ mysql_project_password }}@{{ hostvars['controller']['ansible_default_ipv4']['address'] }}/placement - - [api] - auth_strategy = keystone - - [keystone_authtoken] - auth_url = http://{{ hostvars['controller']['ansible_default_ipv4']['address'] }}:5000/v3 - memcached_servers = {{ hostvars['controller']['ansible_default_ipv4']['address'] }}:11211 - auth_type = password - project_domain_name = Default - user_domain_name = Default - project_name = service - username = placement - password = {{ project_identity_password }} - EOF - - - name: Sync database - shell: su -s /bin/sh -c "placement-manage db sync" placement - - - name: Restart httpd service - systemd: - name: httpd - state: restarted - enabled: True diff --git a/tools/oos/etc/playbooks/pre.yaml b/tools/oos/etc/playbooks/pre.yaml deleted file mode 100644 index 43877c5c8f19cc82339dcc85bd185a02b394d4ad..0000000000000000000000000000000000000000 --- a/tools/oos/etc/playbooks/pre.yaml +++ /dev/null @@ -1,70 +0,0 @@ -- name: Config OpenStack yum repo - hosts: all - become: yes - tasks: - - name: Install openstack-releases package - 
yum: - name: - - openstack-release-{{ openstack_release }} - ignore_errors: yes - - # huaweicloud mirror is broken. Enable this task once it works. - # - name: Update yum mirror - # shell: | - # sed -i "s/repo.openeuler.org/repo.huaweicloud.com/g" /etc/yum.repos.d/openEuler.repo - # sed -i "s/repo.openeuler.org/repo.huaweicloud.com/g" /etc/yum.repos.d/openstack-{{ openstack_release }}.repo - # ignore_errors: yes - -- name: Config mutual trust for root - hosts: all - become: yes - tasks: - - name: Copy file for hosts - copy: - src: "{{keypair_dir}}/id_rsa" - dest: "/root/.ssh" - owner: root - group: root - mode: 0600 - -- name: Prepare - hosts: all - become: yes - tasks: - - name: Set hostname for hosts - shell: | - cat << EOF >> /etc/hosts - {{ hostvars['controller']['ansible_default_ipv4']['address'] }} {{ hostvars['controller']['ansible_hostname'] }} - {{ hostvars['compute01']['ansible_default_ipv4']['address'] }} {{ hostvars['compute01']['ansible_hostname'] }} - {{ hostvars['compute02']['ansible_default_ipv4']['address'] }} {{ hostvars['compute02']['ansible_hostname'] }} - EOF - when: oos_env_type == 'cluster' - - - name: Setup useful environment for cluster - set_fact: - controller_local_control_ip: "{{ hostvars['controller']['ansible_default_ipv4']['address'] }}" - compute01_local_control_ip: "{{ hostvars['compute01']['ansible_default_ipv4']['address'] }}" - compute02_local_control_ip: "{{ hostvars['compute02']['ansible_default_ipv4']['address'] }}" - when: oos_env_type == 'cluster' - - - name: Setup useful environment for all-in-on - set_fact: - controller_local_control_ip: "{{ hostvars['controller']['ansible_default_ipv4']['address'] }}" - when: oos_env_type == 'all_in_one' - -- name: Update python bin - hosts: all - become: yes - tasks: - - name: Install python - yum: - name: - - python2 - - python3 - ignore_errors: yes - - - name: Update python bin link - shell: | - rm -f /usr/bin/python - ln -s /usr/bin/python3 /usr/bin/python - ignore_errors: yes diff --git a/tools/oos/etc/playbooks/rabbitmq.yaml b/tools/oos/etc/playbooks/rabbitmq.yaml deleted file mode 100644 index bc1f269221247746fa9dabe274f1ea7809bcf513..0000000000000000000000000000000000000000 --- a/tools/oos/etc/playbooks/rabbitmq.yaml +++ /dev/null @@ -1,19 +0,0 @@ -- name: Install Message Queue - hosts: controller - become: yes - tasks: - - name: Install RabbitMQ package - yum: - name: - - rabbitmq-server - - - name: Start RabbitMQ - systemd: - name: rabbitmq-server - state: started - enabled: True - - - name: Config RabbitMQ - shell: | - rabbitmqctl add_user openstack {{ rabbitmq_password }} - rabbitmqctl set_permissions openstack ".*" ".*" ".*" diff --git a/tools/oos/etc/playbooks/rally.yaml b/tools/oos/etc/playbooks/rally.yaml deleted file mode 100644 index f901e129ef5298ace77f7eef884ec2bf776d61f2..0000000000000000000000000000000000000000 --- a/tools/oos/etc/playbooks/rally.yaml +++ /dev/null @@ -1,9 +0,0 @@ -- name: Install Rally - hosts: controller - become: yes - tasks: - - name: Install rally package - yum: - name: - - openstack-rally - - openstack-rally-plugins diff --git a/tools/oos/etc/playbooks/roles/create_identity_service/tasks/main.yaml b/tools/oos/etc/playbooks/roles/create_identity_service/tasks/main.yaml deleted file mode 100644 index 56a73d661bf92dd353367304dd382749a41f310b..0000000000000000000000000000000000000000 --- a/tools/oos/etc/playbooks/roles/create_identity_service/tasks/main.yaml +++ /dev/null @@ -1,7 +0,0 @@ -- name: Create {{ service }} identity service - shell: | - source ~/.admin-openrc - 
openstack service create --name {{ service }} --description "{{ description }}" {{ type }} - openstack endpoint create --region RegionOne {{ type }} public {{ endpoint }} - openstack endpoint create --region RegionOne {{ type }} internal {{ endpoint }} - openstack endpoint create --region RegionOne {{ type }} admin {{ admin_endpoint | default(endpoint) }} diff --git a/tools/oos/etc/playbooks/roles/create_identity_user/tasks/main.yaml b/tools/oos/etc/playbooks/roles/create_identity_user/tasks/main.yaml deleted file mode 100644 index 91fe8fd556b1ed72da6d79cd58ca0a787943db91..0000000000000000000000000000000000000000 --- a/tools/oos/etc/playbooks/roles/create_identity_user/tasks/main.yaml +++ /dev/null @@ -1,5 +0,0 @@ -- name: Create {{ user }} identity user - shell: | - source ~/.admin-openrc - openstack user create --domain default --password {{ project_identity_password }} {{ user }} - openstack role add --project service --user {{ user }} admin diff --git a/tools/oos/etc/playbooks/roles/init_database/tasks/main.yaml b/tools/oos/etc/playbooks/roles/init_database/tasks/main.yaml deleted file mode 100644 index 46e6eb8ff6330d4f86fd6af7df57cab094fabfb2..0000000000000000000000000000000000000000 --- a/tools/oos/etc/playbooks/roles/init_database/tasks/main.yaml +++ /dev/null @@ -1,30 +0,0 @@ -- name: Create {{ database }} database - mysql_db: - login_user: root - login_password: "{{ mysql_root_password }}" - name: "{{ database }}" - state: present - login_unix_socket: "/var/lib/mysql/mysql.sock" - -- name: grant {{ database }} database user privilege(local) - mysql_user: - login_user: root - login_password: "{{ mysql_root_password }}" - name: "{{ user }}" - password: "{{ mysql_project_password }}" - priv: "{{ database }}.*:ALL" - append_privs: yes - state: present - login_unix_socket: "/var/lib/mysql/mysql.sock" - -- name: grant {{ database }} database user privilege(remote) - mysql_user: - login_user: root - login_password: "{{ mysql_root_password }}" - name: "{{ user }}" - password: "{{ mysql_project_password }}" - host: "%" - priv: "{{ database }}.*:ALL" - append_privs: yes - state: present - login_unix_socket: "/var/lib/mysql/mysql.sock" diff --git a/tools/oos/etc/playbooks/swift.yaml b/tools/oos/etc/playbooks/swift.yaml deleted file mode 100644 index 8964f99ad497596c3fb426f964d91d0e0321e756..0000000000000000000000000000000000000000 --- a/tools/oos/etc/playbooks/swift.yaml +++ /dev/null @@ -1,273 +0,0 @@ -- name: Install Swift controller - hosts: controller - become: yes - roles: - - role: create_identity_user - vars: - user: swift - - role: create_identity_service - vars: - service: swift - type: object-store - description: "OpenStack Object Storage" - endpoint: http://{{ hostvars['controller']['ansible_default_ipv4']['address'] }}:8080/v1/AUTH_%\(project_id\)s - admin_endpoint: http://{{ hostvars['controller']['ansible_default_ipv4']['address'] }}:8080/v1/ - tasks: - - name: Install swift packages - yum: - name: - - openstack-swift-proxy - - python3-swiftclient - - python3-keystoneclient - - python3-keystonemiddleware - - memcached - - rsync - -- name: Install Swift storage - hosts: storage - become: yes - tasks: - - name: Install packages - yum: - name: - - xfsprogs - - rsync - - - name: Prepare the storage device - shell: | - mkfs.xfs /dev/{{ item }} - mkdir -p /srv/node/{{ item }} - cat << EOF >> /etc/fstab - UUID=`blkid -s UUID -o value /dev/{{ item }}` /srv/node/{{ item }} xfs noatime 0 2 - EOF - mount /srv/node/{{ item }} - with_items: "{{ swift_storage_devices }}" - - - name: Install 
swift packages - yum: - name: - - openstack-swift-account - - openstack-swift-container - - openstack-swift-object - - - name: Initialize swift config files - shell: | - sed -i "/^bind_ip/cbind_ip= {{ ansible_default_ipv4['address'] }}" /etc/swift/{{ item }} - chown -R swift:swift /srv/node - mkdir -p /var/cache/swift - chown -R root:swift /var/cache/swift - chmod -R 775 /var/cache/swift - with_items: - - account-server.conf - - container-server.conf - - object-server.conf - -- name: Create the initial rings and init config file - hosts: controller - become: yes - tasks: - - name: Create initial rings for cluster - shell: | - cd /etc/swift - swift-ring-builder account.builder create 10 1 1 - swift-ring-builder container.builder create 10 1 1 - swift-ring-builder object.builder create 10 1 1 - - swift-ring-builder account.builder add --region 1 --zone 1 --ip "{{ compute01_local_control_ip }}" --port 6202 --device {{ item }} --weight 100 - swift-ring-builder account.builder add --region 1 --zone 1 --ip "{{ compute02_local_control_ip }}" --port 6202 --device {{ item }} --weight 100 - swift-ring-builder container.builder add --region 1 --zone 1 --ip "{{ compute01_local_control_ip }}" --port 6201 --device {{ item }} --weight 100 - swift-ring-builder container.builder add --region 1 --zone 1 --ip "{{ compute02_local_control_ip }}" --port 6201 --device {{ item }} --weight 100 - swift-ring-builder object.builder add --region 1 --zone 1 --ip "{{ compute01_local_control_ip }}" --port 6200 --device {{ item }} --weight 100 - swift-ring-builder object.builder add --region 1 --zone 1 --ip "{{ compute02_local_control_ip }}" --port 6200 --device {{ item }} --weight 100 - - swift-ring-builder account.builder rebalance - swift-ring-builder container.builder rebalance - swift-ring-builder object.builder rebalance - with_items: "{{ swift_storage_devices }}" - when: oos_env_type == 'cluster' - - - name: Create initial rings for all-in-one - shell: | - cd /etc/swift - swift-ring-builder account.builder create 10 1 1 - swift-ring-builder container.builder create 10 1 1 - swift-ring-builder object.builder create 10 1 1 - - swift-ring-builder account.builder add --region 1 --zone 1 --ip "{{ controller_local_control_ip }}" --port 6202 --device {{ item }} --weight 100 - swift-ring-builder container.builder add --region 1 --zone 1 --ip "{{ controller_local_control_ip }}" --port 6201 --device {{ item }} --weight 100 - swift-ring-builder object.builder add --region 1 --zone 1 --ip "{{ controller_local_control_ip }}" --port 6200 --device {{ item }} --weight 100 - - swift-ring-builder account.builder rebalance - swift-ring-builder container.builder rebalance - swift-ring-builder object.builder rebalance - with_items: "{{ swift_storage_devices }}" - when: oos_env_type == 'all_in_one' - - - name: Init swift config file - shell: | - cat << EOF > /etc/swift/swift.conf - [swift-hash] - swift_hash_path_suffix = {{ swift_hash_path_suffix }} - swift_hash_path_prefix = {{ swift_hash_path_prefix }} - [storage-policy:0] - name = Policy-0 - default = yes - EOF - - - name: Init swift proxy config file - shell: | - cat << EOF > /etc/swift/proxy-server.conf - [DEFAULT] - bind_port = 8080 - workers = 2 - user = swift - swift_dir = /etc/swift - - [pipeline:main] - pipeline = catch_errors gatekeeper healthcheck proxy-logging cache container_sync bulk ratelimit authtoken formpost tempurl keystoneauth staticweb crossdomain container-quotas account-quotas slo dlo versioned_writes proxy-logging proxy-server - - [app:proxy-server] - use = 
egg:swift#proxy - account_autocreate = True - - [filter:catch_errors] - use = egg:swift#catch_errors - - [filter:gatekeeper] - use = egg:swift#gatekeeper - - [filter:healthcheck] - use = egg:swift#healthcheck - - [filter:proxy-logging] - use = egg:swift#proxy_logging - - [filter:cache] - use = egg:swift#memcache - memcache_servers = {{ hostvars['controller']['ansible_default_ipv4']['address'] }}:11211 - - [filter:container_sync] - use = egg:swift#container_sync - - [filter:bulk] - use = egg:swift#bulk - - [filter:ratelimit] - use = egg:swift#ratelimit - - [filter:authtoken] - paste.filter_factory = keystonemiddleware.auth_token:filter_factory - www_authenticate_uri = http://{{ hostvars['controller']['ansible_default_ipv4']['address'] }}:5000 - auth_url = http://{{ hostvars['controller']['ansible_default_ipv4']['address'] }}:5000 - memcached_servers = {{ hostvars['controller']['ansible_default_ipv4']['address'] }}:11211 - auth_type = password - project_domain_id = default - user_domain_id = default - project_name = service - username = swift - password = {{ project_identity_password }} - delay_auth_decision = True - - [filter:keystoneauth] - use = egg:swift#keystoneauth - operator_roles = admin,user,member - - [filter:container-quotas] - use = egg:swift#container_quotas - - [filter:account-quotas] - use = egg:swift#account_quotas - - [filter:slo] - use = egg:swift#slo - - [filter:dlo] - use = egg:swift#dlo - - [filter:versioned_writes] - use = egg:swift#versioned_writes - allow_versioned_writes=True - - [filter:staticweb] - use = egg:swift#staticweb - - [filter:crossdomain] - use = egg:swift#crossdomain - - [filter:tempurl] - use = egg:swift#tempurl - - [filter:formpost] - use = egg:swift#formpost - EOF - - - name: ensure proper ownership of the config directory - shell: | - chown -R root:swift /etc/swift - -- name: distribute rings and config file from controller to storage nodes - hosts: storage - become: yes - tasks: - - name: copy rings from controller to storage nodes - synchronize: - src: /etc/swift/{{ item }} - dest: /etc/swift/{{ item }} - rsync_path: /usr/bin/rsync - delegate_to: controller - with_items: - - swift.conf - - account.ring.gz - - container.ring.gz - - object.ring.gz - - - name: ensure proper ownership of the config directory - shell: | - chown -R root:swift /etc/swift - -- name: Start controller services - hosts: controller - become: yes - tasks: - - name: Start openstack-swift-proxy service - systemd: - name: openstack-swift-proxy - state: started - enabled: True - -- name: Start storage services - hosts: storage - become: yes - tasks: - - name: Start swift account services - systemd: - name: "{{ item }}" - state: started - enabled: True - with_items: - - openstack-swift-account.service - - openstack-swift-account-auditor.service - - openstack-swift-account-reaper.service - - openstack-swift-account-replicator.service - - - name: Start swift container services - systemd: - name: "{{ item }}" - state: started - enabled: True - with_items: - - openstack-swift-container.service - - openstack-swift-container-auditor.service - - openstack-swift-container-replicator.service - - openstack-swift-container-updater.service - - - name: Start swift object services - systemd: - name: "{{ item }}" - state: started - enabled: True - with_items: - - openstack-swift-object.service - - openstack-swift-object-auditor.service - - openstack-swift-object-replicator.service - - openstack-swift-object-updater.service diff --git a/tools/oos/etc/playbooks/tempest.yaml 
b/tools/oos/etc/playbooks/tempest.yaml deleted file mode 100644 index ec63b5af9451ec0c3db1dd36d2104fd7054a1ecf..0000000000000000000000000000000000000000 --- a/tools/oos/etc/playbooks/tempest.yaml +++ /dev/null @@ -1,8 +0,0 @@ -- name: Install Tempest - hosts: controller - become: yes - tasks: - - name: Install tempest package - yum: - name: - - openstack-tempest diff --git a/tools/oos/etc/playbooks/test.yaml b/tools/oos/etc/playbooks/test.yaml deleted file mode 100644 index 61c4e616ab656b0ff3690cd334457afef892067b..0000000000000000000000000000000000000000 --- a/tools/oos/etc/playbooks/test.yaml +++ /dev/null @@ -1,43 +0,0 @@ -- name: Run tempest test - hosts: controller - become: true - tasks: - - name: Echo start Info - debug: - msg: "Start running tempest test. This may take a while." - - - name: Run tempest test for cirros - shell: - cmd: tempest run --config-file etc/tempest-cirros.conf --exclude-list black_list_file - chdir: ~/mytest - async: 14400 - poll: 10 - register: tempest_output - ignore_errors: yes - - - name: Fetching tempest stdout - async_status: - jid: "{{ tempest_output.ansible_job_id }}" - register: job_result - until: job_result.finished - delay: 5 - retries: 10 - ignore_errors: yes - - - name: Format the test result - shell: - cmd: | - # ensure the tools are installed. - yum install python3-stestr python3-os-testr -y - stestr last --subunit >> test_result - subunit2html test_result - chdir: ~/mytest - - - name: Fetch the test result - fetch: - src: /root/mytest/results.html - dest: /tmp/results.html - - - name: Echo finish Info - debug: - msg: "Test is finished. Please see the result file 'results.html' in the /tmp folder." diff --git a/tools/oos/etc/playbooks/trove.yaml b/tools/oos/etc/playbooks/trove.yaml deleted file mode 100644 index 4405e9a09dc9bbcb3e6e55d4cb15411c4ccdb7db..0000000000000000000000000000000000000000 --- a/tools/oos/etc/playbooks/trove.yaml +++ /dev/null @@ -1,105 +0,0 @@ -- name: Install trove controller - hosts: controller - become: yes - roles: - - role: init_database - vars: - database: trove - user: trove - - role: create_identity_user - vars: - user: trove - - role: create_identity_service - vars: - service: trove - type: database - description: "OpenStack Trove" - endpoint: http://{{ hostvars['controller']['ansible_default_ipv4']['address'] }}:8779/v1.0/%\(tenant_id\)s - tasks: - - name: Install trove package - yum: - name: - - openstack-trove - - python3-troveclient - -- name: Init config file - hosts: controller - become: yes - tasks: - - name: Initialize /etc/trove/trove.conf - shell: | - cat << EOF > /etc/trove/trove.conf - [DEFAULT] - log_dir = /var/log/trove - trove_auth_url = http://{{ hostvars['controller']['ansible_default_ipv4']['address'] }}:5000 - nova_compute_url = http://{{ hostvars['controller']['ansible_default_ipv4']['address'] }}:8774/v2.1 - cinder_url = http://{{ hostvars['controller']['ansible_default_ipv4']['address'] }}:8776/v2 - swift_url = http://{{ hostvars['controller']['ansible_default_ipv4']['address'] }}:8080/v1/AUTH_ - - rpc_backend = rabbit - transport_url = rabbit://openstack:{{ rabbitmq_password }}@{{ hostvars['controller']['ansible_default_ipv4']['address'] }}:5672/ - - auth_strategy = keystone - add_addresses = True - api_paste_config = /etc/trove/api-paste.ini - - nova_proxy_admin_user = admin - nova_proxy_admin_pass = {{ project_identity_password }} - nova_proxy_admin_tenant_name = service - taskmanager_manager = trove.taskmanager.manager.Manager - use_nova_server_config_drive = True -
network_driver=trove.network.neutron.NeutronDriver - network_label_regex=.* - - [database] - connection = mysql+pymysql://trove:{{ mysql_project_password }}@{{ hostvars['controller']['ansible_default_ipv4']['address'] }}/trove - - [keystone_authtoken] - www_authenticate_uri = http://{{ hostvars['controller']['ansible_default_ipv4']['address'] }}:5000/ - auth_url = http://{{ hostvars['controller']['ansible_default_ipv4']['address'] }}:5000/ - auth_type = password - project_domain_name = Default - user_domain_name = Default - project_name = service - username = trove - password = {{ project_identity_password }} - EOF - - - name: Initialize /etc/trove/trove-guestagent.conf - shell: | - cat << EOF > /etc/trove/trove-guestagent.conf - [DEFAULT] - rabbit_host = {{ hostvars['controller']['ansible_default_ipv4']['address'] }} - rabbit_password = {{ rabbitmq_password }} - trove_auth_url = http://{{ hostvars['controller']['ansible_default_ipv4']['address'] }}:5000/ - EOF - -- name: Sync database - hosts: controller - become: yes - tasks: - - name: Sync database - shell: | - su -s /bin/sh -c "trove-manage db_sync" trove - -- name: Complete trove install - hosts: controller - become: yes - tasks: - - name: Start openstack-trove-api service - systemd: - name: openstack-trove-api - state: started - enabled: True - - - name: Start openstack-trove-taskmanager service - systemd: - name: openstack-trove-taskmanager - state: started - enabled: True - - - name: Start openstack-trove-conductor service - systemd: - name: openstack-trove-conductor - state: started - enabled: True diff --git a/tools/oos/example/README.md b/tools/oos/example/README.md deleted file mode 100644 index 0e7e1be9f1cb49a874044d52224e780103bd05ff..0000000000000000000000000000000000000000 --- a/tools/oos/example/README.md +++ /dev/null @@ -1,5 +0,0 @@ -该目录包含了`python3 scripts/generate_dependence.py`命令的缓存文件和结果文件示例 - -这些缓存文件生成于2022-03-11,有一定时效性,仅供参考。 - -train_cached_file目录包含了`python3 scripts/generate_dependence.py train`命令生成的文件。 diff --git a/tools/oos/example/train_cached_file/Babel.json b/tools/oos/example/train_cached_file/Babel.json deleted file mode 100644 index 936bcba79dd442de97e6c32c52a03df6cca76042..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/Babel.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "Babel", "version_dict": {"version": "2.7.0", "eq_version": "", "ge_version": "0.8", "lt_version": "", "ne_version": [], "upper_version": "2.7.0"}, "deep": {"count": 6, "list": ["aodh", "futurist", "hacking", "mock", "Sphinx", "Jinja2", "Babel"]}, "requires": {"pytz": {"eq_version": "", "ge_version": "2015.7", "lt_version": "", "ne_version": [], "upper_version": "2019.2", "version": "2019.2"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/CouchDB.json b/tools/oos/example/train_cached_file/CouchDB.json deleted file mode 100644 index c69db594fbb9a4de8ba436395417d79236e47c8a..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/CouchDB.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "CouchDB", "version_dict": {"version": "1.2", "eq_version": "", "ge_version": "0.8", "lt_version": "", "ne_version": [], "upper_version": "1.2"}, "deep": {"count": 1, "list": ["trove", "CouchDB"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/Cython.json b/tools/oos/example/train_cached_file/Cython.json deleted file mode 100644 index 2d964813c77b5d53fe98a84b9af7bfd18f83ed71..0000000000000000000000000000000000000000 --- 
a/tools/oos/example/train_cached_file/Cython.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "Cython", "version_dict": {"version": "0.29.7", "eq_version": "", "ge_version": "0.29.7", "lt_version": "", "ne_version": [], "upper_version": ""}, "deep": {"count": 8, "list": ["aodh", "futurist", "hacking", "oslosphinx", "openstackdocstheme", "os-api-ref", "beautifulsoup4", "lxml", "Cython"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/Django.json b/tools/oos/example/train_cached_file/Django.json deleted file mode 100644 index 79b80f99133d55af43517a36789d978b6262b981..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/Django.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "Django", "version_dict": {"version": "2.0.13", "eq_version": "", "ge_version": "1.11", "lt_version": "2.1", "ne_version": [], "upper_version": "2.0.13"}, "deep": {"count": 1, "list": ["horizon", "Django"]}, "requires": {"pytz": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "2019.2", "version": "2019.2"}, "argon2-cffi": {"eq_version": "", "ge_version": "16.1.0", "lt_version": "", "ne_version": [], "upper_version": "", "version": "16.1.0"}, "bcrypt": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "3.1.7", "version": "3.1.7"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/Flask-RESTful.json b/tools/oos/example/train_cached_file/Flask-RESTful.json deleted file mode 100644 index c20e2785f8e7c363bcca53a1484520f9c5cdc629..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/Flask-RESTful.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "Flask-RESTful", "version_dict": {"version": "0.3.7", "eq_version": "", "ge_version": "0.3.5", "lt_version": "", "ne_version": [], "upper_version": "0.3.7"}, "deep": {"count": 1, "list": ["keystone", "Flask-RESTful"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/Flask.json b/tools/oos/example/train_cached_file/Flask.json deleted file mode 100644 index d9cafa8011e0cef798f667692621c3f2c0a4d49b..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/Flask.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "Flask", "version_dict": {"version": "1.1.1", "eq_version": "", "ge_version": "1.0.2", "lt_version": "", "ne_version": ["0.11"], "upper_version": "1.1.1"}, "deep": {"count": 1, "list": ["keystone", "Flask"]}, "requires": {"Werkzeug": {"eq_version": "", "ge_version": "0.15", "lt_version": "", "ne_version": [], "upper_version": "0.15.6", "version": "0.15.6"}, "Jinja2": {"eq_version": "", "ge_version": "2.10.1", "lt_version": "", "ne_version": [], "upper_version": "2.10.1", "version": "2.10.1"}, "itsdangerous": {"eq_version": "", "ge_version": "0.24", "lt_version": "", "ne_version": [], "upper_version": "1.1.0", "version": "1.1.0"}, "click": {"eq_version": "", "ge_version": "5.1", "lt_version": "", "ne_version": [], "upper_version": "", "version": "5.1"}, "Sphinx": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "2.2.0", "version": "2.2.0"}, "pallets-sphinx-themes": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}, "sphinxcontrib-log-cabinet": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}, "sphinx-issues": {"eq_version": "", "ge_version": "", 
"lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}, "python-dotenv": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/GitPython.json b/tools/oos/example/train_cached_file/GitPython.json deleted file mode 100644 index ea59c59bfdb80c0c1c201603863a03dcd52f394b..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/GitPython.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "GitPython", "version_dict": {"version": "3.0.2", "eq_version": "", "ge_version": "1.0.1", "lt_version": "", "ne_version": [], "upper_version": "3.0.2"}, "deep": {"count": 10, "list": ["aodh", "futurist", "hacking", "oslosphinx", "openstackdocstheme", "os-api-ref", "stestr", "cliff", "stevedore", "bandit", "GitPython"]}, "requires": {"gitdb2": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.0.5", "version": "2.0.5"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/Jinja2.json b/tools/oos/example/train_cached_file/Jinja2.json deleted file mode 100644 index 821c73ec3cb5f59469d5fa39f2ca9a90e3e282a9..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/Jinja2.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "Jinja2", "version_dict": {"version": "2.10.1", "eq_version": "", "ge_version": "2.3", "lt_version": "", "ne_version": [], "upper_version": "2.10.1"}, "deep": {"count": 5, "list": ["aodh", "futurist", "hacking", "mock", "Sphinx", "Jinja2"]}, "requires": {"MarkupSafe": {"eq_version": "", "ge_version": "0.23", "lt_version": "", "ne_version": [], "upper_version": "1.1.1", "version": "1.1.1"}, "Babel": {"eq_version": "", "ge_version": "0.8", "lt_version": "", "ne_version": [], "upper_version": "2.7.0", "version": "2.7.0"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/MarkupSafe.json b/tools/oos/example/train_cached_file/MarkupSafe.json deleted file mode 100644 index 410a2a57b531cf108c9d370b1080fc151caee1b9..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/MarkupSafe.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "MarkupSafe", "version_dict": {"version": "1.1.1", "eq_version": "", "ge_version": "0.23", "lt_version": "", "ne_version": [], "upper_version": "1.1.1"}, "deep": {"count": 6, "list": ["aodh", "futurist", "hacking", "mock", "Sphinx", "Jinja2", "MarkupSafe"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/Paste.json b/tools/oos/example/train_cached_file/Paste.json deleted file mode 100644 index 8eb1cbcbce85ce7455b38ad1d9365181ad0af1c8..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/Paste.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "Paste", "version_dict": {"version": "3.2.0", "eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "3.2.0"}, "deep": {"count": 4, "list": ["aodh", "keystonemiddleware", "WebTest", "PasteDeploy", "Paste"]}, "requires": {"six": {"eq_version": "", "ge_version": "1.4.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "flup": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}, "python-openid": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}}} \ No newline at end 
of file diff --git a/tools/oos/example/train_cached_file/PasteDeploy.json b/tools/oos/example/train_cached_file/PasteDeploy.json deleted file mode 100644 index 7d73704c2cff3574f92e8edbd4267c0f2375d659..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/PasteDeploy.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "PasteDeploy", "version_dict": {"version": "2.0.1", "eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "2.0.1"}, "deep": {"count": 3, "list": ["aodh", "keystonemiddleware", "WebTest", "PasteDeploy"]}, "requires": {"Paste": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "3.2.0", "version": "3.2.0"}, "Sphinx": {"eq_version": "", "ge_version": "1.7.5", "lt_version": "", "ne_version": [], "upper_version": "2.2.0", "version": "2.2.0"}, "pylons-sphinx-themes": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/Pint.json b/tools/oos/example/train_cached_file/Pint.json deleted file mode 100644 index 737ba0315b0bd4200560170ac5d2ba3ff02aaa4c..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/Pint.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "Pint", "version_dict": {"version": "0.9", "eq_version": "", "ge_version": "0.5", "lt_version": "", "ne_version": [], "upper_version": "0.9"}, "deep": {"count": 1, "list": ["horizon", "Pint"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/PrettyTable.json b/tools/oos/example/train_cached_file/PrettyTable.json deleted file mode 100644 index 94a775c75fe715d26dcae48de4ea9ea020a381e7..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/PrettyTable.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "PrettyTable", "version_dict": {"version": "0.7.2", "eq_version": "", "ge_version": "0.7.2", "lt_version": "0.8", "ne_version": [], "upper_version": ""}, "deep": {"count": 8, "list": ["aodh", "futurist", "hacking", "oslosphinx", "openstackdocstheme", "os-api-ref", "stestr", "cliff", "PrettyTable"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/PyECLib.json b/tools/oos/example/train_cached_file/PyECLib.json deleted file mode 100644 index 59101e22e6cdf6bf2a8d7b616b42a6a29860ba84..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/PyECLib.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "PyECLib", "version_dict": {"version": "1.3.1", "eq_version": "", "ge_version": "1.3.1", "lt_version": "", "ne_version": [], "upper_version": ""}, "deep": {"count": 1, "list": ["swift", "PyECLib"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/PyJWT.json b/tools/oos/example/train_cached_file/PyJWT.json deleted file mode 100644 index 6242a8cc8b8736bca84c56d991e6cd139e82afeb..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/PyJWT.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "PyJWT", "version_dict": {"version": "1.7.1", "eq_version": "", "ge_version": "1.0.0", "lt_version": "", "ne_version": [], "upper_version": "1.7.1"}, "deep": {"count": 16, "list": ["aodh", "futurist", "hacking", "oslosphinx", "openstackdocstheme", "os-api-ref", "stestr", "cliff", "stevedore", "bandit", "oslotest", "os-client-config", "openstacksdk", "os-service-types", "keystoneauth1", "oauthlib", "PyJWT"]}, "requires": 
{"cryptography": {"eq_version": "", "ge_version": "1.4", "lt_version": "", "ne_version": [], "upper_version": "2.8", "version": "2.8"}, "flake8": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}, "flake8-import-order": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}, "pep8-naming": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}, "pytest": {"eq_version": "", "ge_version": "4.0.1", "lt_version": "5.0.0", "ne_version": [], "upper_version": "5.1.2", "version": "5.1.2"}, "pytest-cov": {"eq_version": "", "ge_version": "2.6.0", "lt_version": "3.0.0", "ne_version": [], "upper_version": "", "version": "2.6.0"}, "pytest-runner": {"eq_version": "", "ge_version": "4.2", "lt_version": "5.0.0", "ne_version": [], "upper_version": "", "version": "4.2"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/PyMySQL.json b/tools/oos/example/train_cached_file/PyMySQL.json deleted file mode 100644 index 96bf9aa912746926bcdf95f662e27596f4dc23d5..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/PyMySQL.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "PyMySQL", "version_dict": {"version": "0.9.3", "eq_version": "", "ge_version": "0.7.6", "lt_version": "", "ne_version": [], "upper_version": "0.9.3"}, "deep": {"count": 9, "list": ["aodh", "futurist", "hacking", "oslosphinx", "openstackdocstheme", "os-api-ref", "stestr", "subunit2sql", "oslo.db", "PyMySQL"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/PySocks.json b/tools/oos/example/train_cached_file/PySocks.json deleted file mode 100644 index 58106d2edaf38a8574dc786d35acc1e792189f14..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/PySocks.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "PySocks", "version_dict": {"version": "1.5.6", "eq_version": "", "ge_version": "1.5.6", "lt_version": "2.0", "ne_version": ["1.5.7"], "upper_version": ""}, "deep": {"count": 13, "list": ["aodh", "futurist", "hacking", "mock", "Sphinx", "sphinxcontrib-applehelp", "pytest", "pluggy", "importlib-metadata", "zipp", "jaraco.packaging", "requests", "urllib3", "PySocks"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/PyYAML.json b/tools/oos/example/train_cached_file/PyYAML.json deleted file mode 100644 index 15b268b274dcbdcff6c56f98f3302e40e196a751..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/PyYAML.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "PyYAML", "version_dict": {"version": "5.1.2", "eq_version": "", "ge_version": "3.12", "lt_version": "", "ne_version": [], "upper_version": "5.1.2"}, "deep": {"count": 6, "list": ["aodh", "futurist", "hacking", "oslosphinx", "openstackdocstheme", "os-api-ref", "PyYAML"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/Pygments.json b/tools/oos/example/train_cached_file/Pygments.json deleted file mode 100644 index 7917346ddd4c9a2f5f171156d63ff41f7484dada..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/Pygments.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "Pygments", "version_dict": {"version": "2.6.1", "eq_version": "", "ge_version": "2.0", "lt_version": "", "ne_version": [], "upper_version": "2.6.1"}, "deep": {"count": 5, "list": ["aodh", "futurist", 
"hacking", "mock", "Sphinx", "Pygments"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/Routes.json b/tools/oos/example/train_cached_file/Routes.json deleted file mode 100644 index 456b60e58c835e4dbd60d0723f4210b7d9a8b88b..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/Routes.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "Routes", "version_dict": {"version": "2.4.1", "eq_version": "", "ge_version": "2.3.1", "lt_version": "", "ne_version": [], "upper_version": "2.4.1"}, "deep": {"count": 4, "list": ["aodh", "keystonemiddleware", "oslo.messaging", "oslo.service", "Routes"]}, "requires": {"repoze.lru": {"eq_version": "", "ge_version": "0.3", "lt_version": "", "ne_version": [], "upper_version": "0.7", "version": "0.7"}, "six": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "WebOb": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "1.8.5", "version": "1.8.5"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/SQLAlchemy-Utils.json b/tools/oos/example/train_cached_file/SQLAlchemy-Utils.json deleted file mode 100644 index 69409a7622bb887b70beed1be6c4d03caf55744c..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/SQLAlchemy-Utils.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "SQLAlchemy-Utils", "version_dict": {"version": "0.34.2", "eq_version": "", "ge_version": "0.30.11", "lt_version": "", "ne_version": [], "upper_version": "0.34.2"}, "deep": {"count": 2, "list": ["cinder", "taskflow", "SQLAlchemy-Utils"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/SQLAlchemy.json b/tools/oos/example/train_cached_file/SQLAlchemy.json deleted file mode 100644 index e816b977fe86f0ab0eda8eb9b6efa5a163e22331..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/SQLAlchemy.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "SQLAlchemy", "version_dict": {"version": "1.3.8", "eq_version": "", "ge_version": "1.0.10", "lt_version": "", "ne_version": ["1.1.5", "1.1.6", "1.1.7", "1.1.8"], "upper_version": "1.3.8"}, "deep": {"count": 9, "list": ["aodh", "futurist", "hacking", "oslosphinx", "openstackdocstheme", "os-api-ref", "stestr", "subunit2sql", "oslo.db", "SQLAlchemy"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/SecretStorage.json b/tools/oos/example/train_cached_file/SecretStorage.json deleted file mode 100644 index 744a5b0b98d61e9d0e6053429ecce26eb929782a..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/SecretStorage.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "SecretStorage", "version_dict": {"version": "3.1.1", "eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "3.1.1"}, "deep": {"count": 8, "list": ["aodh", "futurist", "hacking", "mock", "Sphinx", "setuptools", "jaraco.tidelift", "keyring", "SecretStorage"]}, "requires": {"cryptography": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "2.8", "version": "2.8"}, "jeepney": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "0.4.1", "version": "0.4.1"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/Sphinx.json b/tools/oos/example/train_cached_file/Sphinx.json deleted file mode 100644 index 
62359bb6bbe80e7e33105cabfd60a4b7169159f9..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/Sphinx.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "Sphinx", "version_dict": {"version": "2.2.0", "eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "2.2.0"}, "deep": {"count": 4, "list": ["aodh", "futurist", "hacking", "mock", "Sphinx"]}, "requires": {"sphinxcontrib-applehelp": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "1.0.1", "version": "1.0.1"}, "sphinxcontrib-devhelp": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "1.0.1", "version": "1.0.1"}, "sphinxcontrib-jsmath": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "1.0.1", "version": "1.0.1"}, "sphinxcontrib-htmlhelp": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "1.0.2", "version": "1.0.2"}, "sphinxcontrib-serializinghtml": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "1.1.3", "version": "1.1.3"}, "sphinxcontrib-qthelp": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "1.0.2", "version": "1.0.2"}, "Jinja2": {"eq_version": "", "ge_version": "2.3", "lt_version": "", "ne_version": [], "upper_version": "2.10.1", "version": "2.10.1"}, "Pygments": {"eq_version": "", "ge_version": "2.0", "lt_version": "", "ne_version": [], "upper_version": "2.6.1", "version": "2.6.1"}, "docutils": {"eq_version": "", "ge_version": "0.12", "lt_version": "", "ne_version": [], "upper_version": "0.15.2", "version": "0.15.2"}, "snowballstemmer": {"eq_version": "", "ge_version": "1.1", "lt_version": "", "ne_version": [], "upper_version": "1.9.1", "version": "1.9.1"}, "Babel": {"eq_version": "", "ge_version": "1.3", "lt_version": "", "ne_version": ["2.0"], "upper_version": "2.7.0", "version": "2.7.0"}, "alabaster": {"eq_version": "", "ge_version": "0.7", "lt_version": "0.8", "ne_version": [], "upper_version": "0.7.12", "version": "0.7.12"}, "imagesize": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "1.1.0", "version": "1.1.0"}, "requests": {"eq_version": "", "ge_version": "2.5.0", "lt_version": "", "ne_version": [], "upper_version": "2.22.0", "version": "2.22.0"}, "setuptools": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "57.5.0", "version": "57.5.0"}, "packaging": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "19.1", "version": "19.1"}, "sphinxcontrib-websupport": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "1.1.2", "version": "1.1.2"}, "pytest": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "5.1.2", "version": "5.1.2"}, "pytest-cov": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}, "html5lib": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}, "flake8": {"eq_version": "", "ge_version": "3.5.0", "lt_version": "", "ne_version": [], "upper_version": "", "version": "3.5.0"}, "flake8-import-order": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}, "mypy": {"eq_version": "", "ge_version": "0.720", "lt_version": "", "ne_version": [], 
"upper_version": "0.720", "version": "0.720"}, "docutils-stubs": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/Tempita.json b/tools/oos/example/train_cached_file/Tempita.json deleted file mode 100644 index e5610f49bbadc4d544618df6a6deb6d8cd0794a7..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/Tempita.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "Tempita", "version_dict": {"version": "0.5.2", "eq_version": "", "ge_version": "0.4", "lt_version": "", "ne_version": [], "upper_version": "0.5.2"}, "deep": {"count": 10, "list": ["aodh", "futurist", "hacking", "oslosphinx", "openstackdocstheme", "os-api-ref", "stestr", "subunit2sql", "oslo.db", "sqlalchemy-migrate", "Tempita"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/WSME.json b/tools/oos/example/train_cached_file/WSME.json deleted file mode 100644 index 258b0edb1d3346ee8f1d30e686a9001f4bf0fcde..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/WSME.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "WSME", "version_dict": {"version": "0.9.3", "eq_version": "", "ge_version": "0.8", "lt_version": "", "ne_version": [], "upper_version": "0.9.3"}, "deep": {"count": 1, "list": ["aodh", "WSME"]}, "requires": {"six": {"eq_version": "", "ge_version": "1.9.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "WebOb": {"eq_version": "", "ge_version": "1.2.3", "lt_version": "", "ne_version": [], "upper_version": "1.8.5", "version": "1.8.5"}, "simplegeneric": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "0.8.1", "version": "0.8.1"}, "pytz": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "2019.2", "version": "2019.2"}, "netaddr": {"eq_version": "", "ge_version": "0.7.12", "lt_version": "", "ne_version": [], "upper_version": "0.7.19", "version": "0.7.19"}, "Sphinx": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "2.2.0", "version": "2.2.0"}, "cloud_sptheme": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/WebOb.json b/tools/oos/example/train_cached_file/WebOb.json deleted file mode 100644 index abbc3c1b6719a72d04dfc54e63fddab31577132a..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/WebOb.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "WebOb", "version_dict": {"version": "1.8.5", "eq_version": "", "ge_version": "1.7.1", "lt_version": "", "ne_version": [], "upper_version": "1.8.5"}, "deep": {"count": 2, "list": ["aodh", "keystonemiddleware", "WebOb"]}, "requires": {"Sphinx": {"eq_version": "", "ge_version": "1.7.5", "lt_version": "", "ne_version": [], "upper_version": "2.2.0", "version": "2.2.0"}, "pylons-sphinx-themes": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}, "pytest": {"eq_version": "", "ge_version": "3.1.0", "lt_version": "", "ne_version": [], "upper_version": "5.1.2", "version": "5.1.2"}, "coverage": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "4.5.4", "version": "4.5.4"}, "pytest-cov": {"eq_version": "", "ge_version": "", "lt_version": "", 
"ne_version": [], "upper_version": "", "version": "unknown"}, "pytest-xdist": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/WebTest.json b/tools/oos/example/train_cached_file/WebTest.json deleted file mode 100644 index 34dad042dcc6787de5ed40e9ea38b93bb8296cf1..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/WebTest.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "WebTest", "version_dict": {"version": "2.0.33", "eq_version": "", "ge_version": "2.0.27", "lt_version": "", "ne_version": [], "upper_version": "2.0.33"}, "deep": {"count": 2, "list": ["aodh", "keystonemiddleware", "WebTest"]}, "requires": {"six": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "WebOb": {"eq_version": "", "ge_version": "1.2", "lt_version": "", "ne_version": [], "upper_version": "1.8.5", "version": "1.8.5"}, "waitress": {"eq_version": "", "ge_version": "0.8.5", "lt_version": "", "ne_version": [], "upper_version": "1.3.1", "version": "1.3.1"}, "beautifulsoup4": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "4.8.0", "version": "4.8.0"}, "Sphinx": {"eq_version": "", "ge_version": "1.8.1", "lt_version": "", "ne_version": [], "upper_version": "2.2.0", "version": "2.2.0"}, "docutils": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "0.15.2", "version": "0.15.2"}, "pylons-sphinx-themes": {"eq_version": "", "ge_version": "1.0.8", "lt_version": "", "ne_version": [], "upper_version": "", "version": "1.0.8"}, "nose": {"eq_version": "", "ge_version": "", "lt_version": "1.3.0", "ne_version": [], "upper_version": "1.3.7", "version": "1.3.7"}, "coverage": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "4.5.4", "version": "4.5.4"}, "mock": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "3.0.5", "version": "3.0.5"}, "PasteDeploy": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "2.0.1", "version": "2.0.1"}, "WSGIProxy2": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}, "pyquery": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/Werkzeug.json b/tools/oos/example/train_cached_file/Werkzeug.json deleted file mode 100644 index e2220bc149c1996e93e701de81a91ffc869ef211..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/Werkzeug.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "Werkzeug", "version_dict": {"version": "0.15.6", "eq_version": "", "ge_version": "0.15", "lt_version": "", "ne_version": [], "upper_version": "0.15.6"}, "deep": {"count": 2, "list": ["keystone", "Flask", "Werkzeug"]}, "requires": {"termcolor": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "1.1.0", "version": "1.1.0"}, "watchdog": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/XStatic-Angular-Bootstrap.json b/tools/oos/example/train_cached_file/XStatic-Angular-Bootstrap.json deleted 
file mode 100644 index dfa8937bee91378136c46af97138946cbf093e16..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/XStatic-Angular-Bootstrap.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "XStatic-Angular-Bootstrap", "version_dict": {"version": "2.2.0.0", "eq_version": "", "ge_version": "2.2.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.2.0.0"}, "deep": {"count": 1, "list": ["horizon", "XStatic-Angular-Bootstrap"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/XStatic-Angular-FileUpload.json b/tools/oos/example/train_cached_file/XStatic-Angular-FileUpload.json deleted file mode 100644 index 2a1c681ebf7017ca131cbb420331fed6600baa71..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/XStatic-Angular-FileUpload.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "XStatic-Angular-FileUpload", "version_dict": {"version": "12.0.4.0", "eq_version": "", "ge_version": "12.0.4.0", "lt_version": "", "ne_version": [], "upper_version": "12.0.4.0"}, "deep": {"count": 1, "list": ["horizon", "XStatic-Angular-FileUpload"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/XStatic-Angular-Gettext.json b/tools/oos/example/train_cached_file/XStatic-Angular-Gettext.json deleted file mode 100644 index 7e15bd47bca64487fb7ffec4a5344498a5851adb..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/XStatic-Angular-Gettext.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "XStatic-Angular-Gettext", "version_dict": {"version": "2.3.8.0", "eq_version": "", "ge_version": "2.3.8.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.8.0"}, "deep": {"count": 1, "list": ["horizon", "XStatic-Angular-Gettext"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/XStatic-Angular-Schema-Form.json b/tools/oos/example/train_cached_file/XStatic-Angular-Schema-Form.json deleted file mode 100644 index adeabdb1292e06e8f73197595c4c104b828d0e3a..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/XStatic-Angular-Schema-Form.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "XStatic-Angular-Schema-Form", "version_dict": {"version": "0.8.13.0", "eq_version": "", "ge_version": "0.8.13.0", "lt_version": "", "ne_version": [], "upper_version": "0.8.13.0"}, "deep": {"count": 1, "list": ["horizon", "XStatic-Angular-Schema-Form"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/XStatic-Angular-lrdragndrop.json b/tools/oos/example/train_cached_file/XStatic-Angular-lrdragndrop.json deleted file mode 100644 index fdceeb22114ea85db3c10da386504eae9f48029d..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/XStatic-Angular-lrdragndrop.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "XStatic-Angular-lrdragndrop", "version_dict": {"version": "1.0.2.4", "eq_version": "", "ge_version": "1.0.2.2", "lt_version": "", "ne_version": [], "upper_version": "1.0.2.4"}, "deep": {"count": 1, "list": ["horizon", "XStatic-Angular-lrdragndrop"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/XStatic-Angular.json b/tools/oos/example/train_cached_file/XStatic-Angular.json deleted file mode 100644 index d88026835f9424459def23aac92365369927481d..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/XStatic-Angular.json +++ /dev/null @@ -1 +0,0 @@ -{"name": 
"XStatic-Angular", "version_dict": {"version": "1.5.8.0", "eq_version": "", "ge_version": "1.5.8.0", "lt_version": "", "ne_version": [], "upper_version": "1.5.8.0"}, "deep": {"count": 1, "list": ["horizon", "XStatic-Angular"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/XStatic-Bootstrap-Datepicker.json b/tools/oos/example/train_cached_file/XStatic-Bootstrap-Datepicker.json deleted file mode 100644 index a4557bd59d1e596885504b0ddef8e3b406317245..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/XStatic-Bootstrap-Datepicker.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "XStatic-Bootstrap-Datepicker", "version_dict": {"version": "1.3.1.0", "eq_version": "", "ge_version": "1.3.1.0", "lt_version": "", "ne_version": [], "upper_version": "1.3.1.0"}, "deep": {"count": 1, "list": ["horizon", "XStatic-Bootstrap-Datepicker"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/XStatic-Bootstrap-SCSS.json b/tools/oos/example/train_cached_file/XStatic-Bootstrap-SCSS.json deleted file mode 100644 index 90e352842fb5153cb8b202392ea59aa61fe3fdb4..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/XStatic-Bootstrap-SCSS.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "XStatic-Bootstrap-SCSS", "version_dict": {"version": "3.3.7.1", "eq_version": "", "ge_version": "3.3.7.1", "lt_version": "", "ne_version": [], "upper_version": "3.3.7.1"}, "deep": {"count": 1, "list": ["horizon", "XStatic-Bootstrap-SCSS"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/XStatic-D3.json b/tools/oos/example/train_cached_file/XStatic-D3.json deleted file mode 100644 index 95c0f00f5493fa699d7be8f65e98f6d84d123d10..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/XStatic-D3.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "XStatic-D3", "version_dict": {"version": "3.5.17.0", "eq_version": "", "ge_version": "3.5.17.0", "lt_version": "", "ne_version": [], "upper_version": "3.5.17.0"}, "deep": {"count": 1, "list": ["horizon", "XStatic-D3"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/XStatic-Font-Awesome.json b/tools/oos/example/train_cached_file/XStatic-Font-Awesome.json deleted file mode 100644 index 22764afc1907e6f0063c162f2397cbb86db6f09b..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/XStatic-Font-Awesome.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "XStatic-Font-Awesome", "version_dict": {"version": "4.7.0.0", "eq_version": "", "ge_version": "4.7.0.0", "lt_version": "", "ne_version": [], "upper_version": "4.7.0.0"}, "deep": {"count": 1, "list": ["horizon", "XStatic-Font-Awesome"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/XStatic-Hogan.json b/tools/oos/example/train_cached_file/XStatic-Hogan.json deleted file mode 100644 index 35b2fd1df445069b2357694ff8588b2a0e99acda..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/XStatic-Hogan.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "XStatic-Hogan", "version_dict": {"version": "2.0.0.2", "eq_version": "", "ge_version": "2.0.0.2", "lt_version": "", "ne_version": [], "upper_version": "2.0.0.2"}, "deep": {"count": 1, "list": ["horizon", "XStatic-Hogan"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/XStatic-JQuery-Migrate.json 
b/tools/oos/example/train_cached_file/XStatic-JQuery-Migrate.json deleted file mode 100644 index 4432cb9bb129405b2f1a83be73db22313846fa24..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/XStatic-JQuery-Migrate.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "XStatic-JQuery-Migrate", "version_dict": {"version": "1.2.1.1", "eq_version": "", "ge_version": "1.2.1.1", "lt_version": "", "ne_version": [], "upper_version": "1.2.1.1"}, "deep": {"count": 1, "list": ["horizon", "XStatic-JQuery-Migrate"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/XStatic-JQuery.TableSorter.json b/tools/oos/example/train_cached_file/XStatic-JQuery.TableSorter.json deleted file mode 100644 index bd539927a12f3b49730d2ec8867b9d2fd5ae0324..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/XStatic-JQuery.TableSorter.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "XStatic-JQuery.TableSorter", "version_dict": {"version": "2.14.5.1", "eq_version": "", "ge_version": "2.14.5.1", "lt_version": "", "ne_version": [], "upper_version": "2.14.5.1"}, "deep": {"count": 1, "list": ["horizon", "XStatic-JQuery.TableSorter"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/XStatic-JQuery.quicksearch.json b/tools/oos/example/train_cached_file/XStatic-JQuery.quicksearch.json deleted file mode 100644 index f99718bb8a6459df66830f12bf7b82e4672ac64f..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/XStatic-JQuery.quicksearch.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "XStatic-JQuery.quicksearch", "version_dict": {"version": "2.0.3.1", "eq_version": "", "ge_version": "2.0.3.1", "lt_version": "", "ne_version": [], "upper_version": "2.0.3.1"}, "deep": {"count": 1, "list": ["horizon", "XStatic-JQuery.quicksearch"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/XStatic-JSEncrypt.json b/tools/oos/example/train_cached_file/XStatic-JSEncrypt.json deleted file mode 100644 index bbc6fd6def243e75def002a100772fbeb290f9cf..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/XStatic-JSEncrypt.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "XStatic-JSEncrypt", "version_dict": {"version": "2.3.1.1", "eq_version": "", "ge_version": "2.3.1.1", "lt_version": "", "ne_version": [], "upper_version": "2.3.1.1"}, "deep": {"count": 1, "list": ["horizon", "XStatic-JSEncrypt"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/XStatic-Jasmine.json b/tools/oos/example/train_cached_file/XStatic-Jasmine.json deleted file mode 100644 index ee05a88efcb28a5723b86f411be72335617a77a7..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/XStatic-Jasmine.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "XStatic-Jasmine", "version_dict": {"version": "2.4.1.2", "eq_version": "", "ge_version": "2.4.1.1", "lt_version": "", "ne_version": [], "upper_version": "2.4.1.2"}, "deep": {"count": 1, "list": ["horizon", "XStatic-Jasmine"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/XStatic-Rickshaw.json b/tools/oos/example/train_cached_file/XStatic-Rickshaw.json deleted file mode 100644 index efe60240b6ecd3a53c30b7e291ff812d5ecc86c9..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/XStatic-Rickshaw.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "XStatic-Rickshaw", "version_dict": 
{"version": "1.5.0.0", "eq_version": "", "ge_version": "1.5.0.0", "lt_version": "", "ne_version": [], "upper_version": "1.5.0.0"}, "deep": {"count": 1, "list": ["horizon", "XStatic-Rickshaw"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/XStatic-Spin.json b/tools/oos/example/train_cached_file/XStatic-Spin.json deleted file mode 100644 index 1bbb29fd2cd8ffab4d572924bdf3be77c11b84bd..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/XStatic-Spin.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "XStatic-Spin", "version_dict": {"version": "1.2.5.2", "eq_version": "", "ge_version": "1.2.5.2", "lt_version": "", "ne_version": [], "upper_version": "1.2.5.2"}, "deep": {"count": 1, "list": ["horizon", "XStatic-Spin"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/XStatic-bootswatch.json b/tools/oos/example/train_cached_file/XStatic-bootswatch.json deleted file mode 100644 index 9d9d1fca96faf8303d1b05cbdd94cba5a34b43fb..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/XStatic-bootswatch.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "XStatic-bootswatch", "version_dict": {"version": "3.3.7.0", "eq_version": "", "ge_version": "3.3.7.0", "lt_version": "", "ne_version": [], "upper_version": "3.3.7.0"}, "deep": {"count": 1, "list": ["horizon", "XStatic-bootswatch"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/XStatic-jQuery.json b/tools/oos/example/train_cached_file/XStatic-jQuery.json deleted file mode 100644 index c96a208b2e08280d65d307bd8f9b37b73c369793..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/XStatic-jQuery.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "XStatic-jQuery", "version_dict": {"version": "1.12.4.1", "eq_version": "", "ge_version": "1.8.2.1", "lt_version": "2", "ne_version": [], "upper_version": "1.12.4.1"}, "deep": {"count": 1, "list": ["horizon", "XStatic-jQuery"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/XStatic-jquery-ui.json b/tools/oos/example/train_cached_file/XStatic-jquery-ui.json deleted file mode 100644 index a4d21bc32e9f675f46269ea2f5fbe0f03eace320..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/XStatic-jquery-ui.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "XStatic-jquery-ui", "version_dict": {"version": "1.12.1.1", "eq_version": "", "ge_version": "1.10.4.1", "lt_version": "", "ne_version": [], "upper_version": "1.12.1.1"}, "deep": {"count": 1, "list": ["horizon", "XStatic-jquery-ui"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/XStatic-mdi.json b/tools/oos/example/train_cached_file/XStatic-mdi.json deleted file mode 100644 index 179680f77283868daa90a10f73f0e79b87c91eeb..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/XStatic-mdi.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "XStatic-mdi", "version_dict": {"version": "1.6.50.2", "eq_version": "", "ge_version": "1.4.57.0", "lt_version": "", "ne_version": [], "upper_version": "1.6.50.2"}, "deep": {"count": 1, "list": ["horizon", "XStatic-mdi"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/XStatic-objectpath.json b/tools/oos/example/train_cached_file/XStatic-objectpath.json deleted file mode 100644 index 
7526223d9210dd5e68bc2d0b40be72b121600e3a..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/XStatic-objectpath.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "XStatic-objectpath", "version_dict": {"version": "1.2.1.0", "eq_version": "", "ge_version": "1.2.1.0", "lt_version": "", "ne_version": [], "upper_version": "1.2.1.0"}, "deep": {"count": 1, "list": ["horizon", "XStatic-objectpath"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/XStatic-roboto-fontface.json b/tools/oos/example/train_cached_file/XStatic-roboto-fontface.json deleted file mode 100644 index 29ea676252f3a12ebedb486099e7256f503007f2..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/XStatic-roboto-fontface.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "XStatic-roboto-fontface", "version_dict": {"version": "0.5.0.0", "eq_version": "", "ge_version": "0.5.0.0", "lt_version": "", "ne_version": [], "upper_version": "0.5.0.0"}, "deep": {"count": 1, "list": ["horizon", "XStatic-roboto-fontface"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/XStatic-smart-table.json b/tools/oos/example/train_cached_file/XStatic-smart-table.json deleted file mode 100644 index 8448d918c6072ab747b96faf393829248dab5d55..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/XStatic-smart-table.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "XStatic-smart-table", "version_dict": {"version": "1.4.13.2", "eq_version": "", "ge_version": "1.4.13.2", "lt_version": "", "ne_version": [], "upper_version": "1.4.13.2"}, "deep": {"count": 1, "list": ["horizon", "XStatic-smart-table"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/XStatic-term.js.json b/tools/oos/example/train_cached_file/XStatic-term.js.json deleted file mode 100644 index 0fb54915fe1a3b59037c99c700a2b2419c2f8558..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/XStatic-term.js.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "XStatic-term.js", "version_dict": {"version": "0.0.7.0", "eq_version": "", "ge_version": "0.0.7.0", "lt_version": "", "ne_version": [], "upper_version": "0.0.7.0"}, "deep": {"count": 1, "list": ["horizon", "XStatic-term.js"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/XStatic-tv4.json b/tools/oos/example/train_cached_file/XStatic-tv4.json deleted file mode 100644 index fedf07cf292586fc7941c1a32c117d88467a691f..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/XStatic-tv4.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "XStatic-tv4", "version_dict": {"version": "1.2.7.0", "eq_version": "", "ge_version": "1.2.7.0", "lt_version": "", "ne_version": [], "upper_version": "1.2.7.0"}, "deep": {"count": 1, "list": ["horizon", "XStatic-tv4"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/XStatic.json b/tools/oos/example/train_cached_file/XStatic.json deleted file mode 100644 index 0b98c2793e6508b6bca0bf23e1fd0069849491ad..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/XStatic.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "XStatic", "version_dict": {"version": "1.0.2", "eq_version": "", "ge_version": "1.0.0", "lt_version": "", "ne_version": [], "upper_version": "1.0.2"}, "deep": {"count": 1, "list": ["horizon", "XStatic"]}, "requires": {}} \ No newline at end of file diff --git 
a/tools/oos/example/train_cached_file/Yappi.json b/tools/oos/example/train_cached_file/Yappi.json deleted file mode 100644 index 99b7cc6a85bfd912aed70ade6511052fac420c57..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/Yappi.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "Yappi", "version_dict": {"version": "1.0", "eq_version": "", "ge_version": "1.0", "lt_version": "", "ne_version": [], "upper_version": ""}, "deep": {"count": 4, "list": ["aodh", "keystonemiddleware", "oslo.messaging", "oslo.service", "Yappi"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/alabaster.json b/tools/oos/example/train_cached_file/alabaster.json deleted file mode 100644 index a6f4154d60c4126c589a0b266b92d94670134c7e..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/alabaster.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "alabaster", "version_dict": {"version": "0.7.12", "eq_version": "", "ge_version": "0.7", "lt_version": "0.8", "ne_version": [], "upper_version": "0.7.12"}, "deep": {"count": 5, "list": ["aodh", "futurist", "hacking", "mock", "Sphinx", "alabaster"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/alembic.json b/tools/oos/example/train_cached_file/alembic.json deleted file mode 100644 index 5b9492c2d983b01f3cf4bdc6dfdcd18c43b79f77..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/alembic.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "alembic", "version_dict": {"version": "1.1.0", "eq_version": "", "ge_version": "0.4.1", "lt_version": "", "ne_version": [], "upper_version": "1.1.0"}, "deep": {"count": 8, "list": ["aodh", "futurist", "hacking", "oslosphinx", "openstackdocstheme", "os-api-ref", "stestr", "subunit2sql", "alembic"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/amqp.json b/tools/oos/example/train_cached_file/amqp.json deleted file mode 100644 index 4fee21325cc99131541e35b86a8d313b36d9eefc..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/amqp.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "amqp", "version_dict": {"version": "2.5.2", "eq_version": "", "ge_version": "2.4.1", "lt_version": "", "ne_version": [], "upper_version": "2.5.2"}, "deep": {"count": 3, "list": ["aodh", "keystonemiddleware", "oslo.messaging", "amqp"]}, "requires": {"vine": {"eq_version": "", "ge_version": "1.1.3", "lt_version": "5.0.0a1", "ne_version": [], "upper_version": "1.3.0", "version": "1.3.0"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/ansible.json b/tools/oos/example/train_cached_file/ansible.json deleted file mode 100644 index 2184a9c846b4efb2a59ff68149c10b1fa4653be2..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/ansible.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "ansible", "version_dict": {"version": "2.5", "eq_version": "", "ge_version": "2.5", "lt_version": "", "ne_version": [], "upper_version": ""}, "deep": {"count": 1, "list": ["ironic", "ansible"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/aodh.json b/tools/oos/example/train_cached_file/aodh.json deleted file mode 100644 index 91b2926af5babac7f3f1b41fabbf9835d17c7f95..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/aodh.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "aodh", "version_dict": {"version": "9.0.1", "eq_version": "9.0.1", 
"ge_version": "", "lt_version": "", "ne_version": [], "upper_version": ""}, "deep": {"count": 0, "list": ["aodh"]}, "requires": {"tenacity": {"eq_version": "", "ge_version": "3.2.1", "lt_version": "", "ne_version": [], "upper_version": "5.1.1", "version": "5.1.1"}, "croniter": {"eq_version": "", "ge_version": "0.3.4", "lt_version": "", "ne_version": [], "upper_version": "0.3.30", "version": "0.3.30"}, "futurist": {"eq_version": "", "ge_version": "0.11.0", "lt_version": "", "ne_version": [], "upper_version": "1.9.0", "version": "1.9.0"}, "jsonschema": {"eq_version": "", "ge_version": "2.6.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.2", "version": "3.0.2"}, "keystonemiddleware": {"eq_version": "", "ge_version": "2.2.0", "lt_version": "", "ne_version": ["4.19.0"], "upper_version": "7.0.1", "version": "7.0.1"}, "gnocchiclient": {"eq_version": "", "ge_version": "3.1.0", "lt_version": "", "ne_version": [], "upper_version": "7.0.5", "version": "7.0.5"}, "lxml": {"eq_version": "", "ge_version": "2.3", "lt_version": "", "ne_version": [], "upper_version": "4.4.1", "version": "4.4.1"}, "oslo.db": {"eq_version": "", "ge_version": "4.8.0", "lt_version": "", "ne_version": ["4.13.1", "4.13.2", "4.15.0"], "upper_version": "5.0.2", "version": "5.0.2"}, "oslo.config": {"eq_version": "", "ge_version": "2.6.0", "lt_version": "", "ne_version": [], "upper_version": "6.11.3", "version": "6.11.3"}, "oslo.i18n": {"eq_version": "", "ge_version": "1.5.0", "lt_version": "", "ne_version": [], "upper_version": "3.24.0", "version": "3.24.0"}, "oslo.log": {"eq_version": "", "ge_version": "1.2.0", "lt_version": "", "ne_version": [], "upper_version": "3.44.3", "version": "3.44.3"}, "oslo.policy": {"eq_version": "", "ge_version": "0.5.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.4", "version": "2.3.4"}, "oslo.upgradecheck": {"eq_version": "", "ge_version": "0.1.1", "lt_version": "", "ne_version": [], "upper_version": "0.3.2", "version": "0.3.2"}, "PasteDeploy": {"eq_version": "", "ge_version": "1.5.0", "lt_version": "", "ne_version": [], "upper_version": "2.0.1", "version": "2.0.1"}, "pbr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "5.4.3", "version": "5.4.3"}, "pecan": {"eq_version": "", "ge_version": "0.8.0", "lt_version": "", "ne_version": [], "upper_version": "1.3.3", "version": "1.3.3"}, "oslo.messaging": {"eq_version": "", "ge_version": "5.2.0", "lt_version": "", "ne_version": [], "upper_version": "10.2.4", "version": "10.2.4"}, "oslo.middleware": {"eq_version": "", "ge_version": "3.22.0", "lt_version": "", "ne_version": [], "upper_version": "3.38.1", "version": "3.38.1"}, "oslo.utils": {"eq_version": "", "ge_version": "3.5.0", "lt_version": "", "ne_version": [], "upper_version": "3.41.6", "version": "3.41.6"}, "python-keystoneclient": {"eq_version": "", "ge_version": "1.6.0", "lt_version": "", "ne_version": [], "upper_version": "3.21.0", "version": "3.21.0"}, "pytz": {"eq_version": "", "ge_version": "2013.6", "lt_version": "", "ne_version": [], "upper_version": "2019.2", "version": "2019.2"}, "requests": {"eq_version": "", "ge_version": "2.5.2", "lt_version": "", "ne_version": [], "upper_version": "2.22.0", "version": "2.22.0"}, "six": {"eq_version": "", "ge_version": "1.9.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "stevedore": {"eq_version": "", "ge_version": "1.5.0", "lt_version": "", "ne_version": [], "upper_version": "1.31.0", "version": "1.31.0"}, "tooz": {"eq_version": "", 
"ge_version": "1.28.0", "lt_version": "", "ne_version": [], "upper_version": "1.66.3", "version": "1.66.3"}, "voluptuous": {"eq_version": "", "ge_version": "0.8.10", "lt_version": "", "ne_version": [], "upper_version": "0.11.7", "version": "0.11.7"}, "WebOb": {"eq_version": "", "ge_version": "1.2.3", "lt_version": "", "ne_version": [], "upper_version": "1.8.5", "version": "1.8.5"}, "WSME": {"eq_version": "", "ge_version": "0.8", "lt_version": "", "ne_version": [], "upper_version": "0.9.3", "version": "0.9.3"}, "cachetools": {"eq_version": "", "ge_version": "1.1.6", "lt_version": "", "ne_version": [], "upper_version": "3.1.1", "version": "3.1.1"}, "cotyledon": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "1.7.3", "version": "1.7.3"}, "keystoneauth1": {"eq_version": "", "ge_version": "2.1", "lt_version": "", "ne_version": [], "upper_version": "3.17.4", "version": "3.17.4"}, "debtcollector": {"eq_version": "", "ge_version": "1.2.0", "lt_version": "", "ne_version": [], "upper_version": "1.22.0", "version": "1.22.0"}, "python-octaviaclient": {"eq_version": "", "ge_version": "1.8.0", "lt_version": "", "ne_version": [], "upper_version": "1.10.1", "version": "1.10.1"}, "python-dateutil": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "2.8.0", "version": "2.8.0"}, "python-heatclient": {"eq_version": "", "ge_version": "1.17.0", "lt_version": "", "ne_version": [], "upper_version": "1.18.1", "version": "1.18.1"}, "openstackdocstheme": {"eq_version": "", "ge_version": "1.20.0", "lt_version": "", "ne_version": [], "upper_version": "1.31.1", "version": "1.31.1"}, "reno": {"eq_version": "", "ge_version": "0.1.1", "lt_version": "", "ne_version": [], "upper_version": "2.11.3", "version": "2.11.3"}, "Sphinx": {"eq_version": "", "ge_version": "1.6.2", "lt_version": "", "ne_version": ["1.6.6", "1.6.7", "2.1.0"], "upper_version": "2.2.0", "version": "2.2.0"}, "sphinxcontrib-httpdomain": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "1.7.0", "version": "1.7.0"}, "sphinxcontrib-pecanwsme": {"eq_version": "", "ge_version": "0.8", "lt_version": "", "ne_version": [], "upper_version": "0.10.0", "version": "0.10.0"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/aodhclient.json b/tools/oos/example/train_cached_file/aodhclient.json deleted file mode 100644 index 2de405ae7944fb3dd33d43aba78656cae03b22f9..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/aodhclient.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "aodhclient", "version_dict": {"version": "1.3.0", "eq_version": "", "ge_version": "0.9.0", "lt_version": "", "ne_version": [], "upper_version": "1.3.0"}, "deep": {"count": 3, "list": ["aodh", "gnocchiclient", "python-openstackclient", "aodhclient"]}, "requires": {"pbr": {"eq_version": "", "ge_version": "1.4", "lt_version": "", "ne_version": [], "upper_version": "5.4.3", "version": "5.4.3"}, "cliff": {"eq_version": "", "ge_version": "1.14.0", "lt_version": "", "ne_version": ["1.16.0"], "upper_version": "2.16.0", "version": "2.16.0"}, "osc-lib": {"eq_version": "", "ge_version": "1.0.1", "lt_version": "", "ne_version": [], "upper_version": "1.14.1", "version": "1.14.1"}, "oslo.i18n": {"eq_version": "", "ge_version": "1.5.0", "lt_version": "", "ne_version": [], "upper_version": "3.24.0", "version": "3.24.0"}, "oslo.serialization": {"eq_version": "", "ge_version": "1.4.0", "lt_version": "", "ne_version": [], "upper_version": 
"2.29.3", "version": "2.29.3"}, "oslo.utils": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.41.6", "version": "3.41.6"}, "osprofiler": {"eq_version": "", "ge_version": "1.4.0", "lt_version": "", "ne_version": [], "upper_version": "2.8.2", "version": "2.8.2"}, "keystoneauth1": {"eq_version": "", "ge_version": "1.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.17.4", "version": "3.17.4"}, "six": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "pyparsing": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "2.4.2", "version": "2.4.2"}, "coverage": {"eq_version": "", "ge_version": "3.6", "lt_version": "", "ne_version": [], "upper_version": "4.5.4", "version": "4.5.4"}, "oslotest": {"eq_version": "", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "3.8.1", "version": "3.8.1"}, "reno": {"eq_version": "", "ge_version": "1.6.2", "lt_version": "", "ne_version": [], "upper_version": "2.11.3", "version": "2.11.3"}, "tempest": {"eq_version": "", "ge_version": "10", "lt_version": "", "ne_version": [], "upper_version": "22.1.0", "version": "22.1.0"}, "stestr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.5.1", "version": "2.5.1"}, "testtools": {"eq_version": "", "ge_version": "1.4.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.0", "version": "2.3.0"}, "pifpaf": {"eq_version": "", "ge_version": "0.23", "lt_version": "", "ne_version": [], "upper_version": "2.2.2", "version": "2.2.2"}, "gnocchi": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}, "mock": {"eq_version": "", "ge_version": "1.2", "lt_version": "", "ne_version": [], "upper_version": "3.0.5", "version": "3.0.5"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/appdirs.json b/tools/oos/example/train_cached_file/appdirs.json deleted file mode 100644 index 24170bafd0c4920d30daf4234a865dffea1be2fe..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/appdirs.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "appdirs", "version_dict": {"version": "1.4.3", "eq_version": "", "ge_version": "1.3.0", "lt_version": "", "ne_version": [], "upper_version": "1.4.3"}, "deep": {"count": 13, "list": ["aodh", "futurist", "hacking", "oslosphinx", "openstackdocstheme", "os-api-ref", "stestr", "cliff", "stevedore", "bandit", "oslotest", "os-client-config", "openstacksdk", "appdirs"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/argon2-cffi.json b/tools/oos/example/train_cached_file/argon2-cffi.json deleted file mode 100644 index 8dfdb5829302af4d7b7736794cd383ff596e62ea..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/argon2-cffi.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "argon2-cffi", "version_dict": {"version": "16.1.0", "eq_version": "", "ge_version": "16.1.0", "lt_version": "", "ne_version": [], "upper_version": ""}, "deep": {"count": 2, "list": ["horizon", "Django", "argon2-cffi"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/astroid.json b/tools/oos/example/train_cached_file/astroid.json deleted file mode 100644 index 6b62a67182e84dec8dc5bc46f289a88003735a72..0000000000000000000000000000000000000000 --- 
a/tools/oos/example/train_cached_file/astroid.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "astroid", "version_dict": {"version": "2.1.0", "eq_version": "2.1.0", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": ""}, "deep": {"count": 1, "list": ["horizon", "astroid"]}, "requires": {"lazy-object-proxy": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "1.6.0", "version": "1.6.0"}, "six": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "wrapt": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "1.11.2", "version": "1.11.2"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/atomicwrites.json b/tools/oos/example/train_cached_file/atomicwrites.json deleted file mode 100644 index 1cb9062c4d3ff7591fb72701079127a23a6a7cb0..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/atomicwrites.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "atomicwrites", "version_dict": {"version": "1.3.0", "eq_version": "", "ge_version": "1.0", "lt_version": "", "ne_version": [], "upper_version": "1.3.0"}, "deep": {"count": 7, "list": ["aodh", "futurist", "hacking", "mock", "Sphinx", "sphinxcontrib-applehelp", "pytest", "atomicwrites"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/attrs.json b/tools/oos/example/train_cached_file/attrs.json deleted file mode 100644 index 9771532b41eb60547be37e714b1973f723d43ca4..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/attrs.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "attrs", "version_dict": {"version": "19.1.0", "eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "19.1.0"}, "deep": {"count": 8, "list": ["aodh", "futurist", "hacking", "mock", "Sphinx", "sphinxcontrib-applehelp", "pytest", "packaging", "attrs"]}, "requires": {"Sphinx": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "2.2.0", "version": "2.2.0"}, "zope.interface": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}, "coverage": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "4.5.4", "version": "4.5.4"}, "hypothesis": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}, "Pympler": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}, "pytest": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "5.1.2", "version": "5.1.2"}, "six": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/automaton.json b/tools/oos/example/train_cached_file/automaton.json deleted file mode 100644 index 398da26c6c9cd18a234ffec26b706914fe5c95f2..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/automaton.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "automaton", "version_dict": {"version": "1.17.0", "eq_version": "", "ge_version": "1.9.0", "lt_version": "", "ne_version": [], "upper_version": "1.17.0"}, "deep": {"count": 2, "list": ["cinder", "taskflow", "automaton"]}, "requires": {"pbr": {"eq_version": "", 
"ge_version": "2.0.0", "lt_version": "", "ne_version": ["2.1.0"], "upper_version": "5.4.3", "version": "5.4.3"}, "six": {"eq_version": "", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "PrettyTable": {"eq_version": "", "ge_version": "0.7.2", "lt_version": "0.8", "ne_version": [], "upper_version": "", "version": "0.7.2"}, "hacking": {"eq_version": "", "ge_version": "0.10.0", "lt_version": "0.11", "ne_version": [], "upper_version": "", "version": "0.10.0"}, "doc8": {"eq_version": "", "ge_version": "0.6.0", "lt_version": "", "ne_version": [], "upper_version": "0.8.0", "version": "0.8.0"}, "coverage": {"eq_version": "", "ge_version": "4.0", "lt_version": "", "ne_version": ["4.4"], "upper_version": "4.5.4", "version": "4.5.4"}, "Sphinx": {"eq_version": "", "ge_version": "1.6.2", "lt_version": "", "ne_version": ["1.6.6", "1.6.7"], "upper_version": "2.2.0", "version": "2.2.0"}, "openstackdocstheme": {"eq_version": "", "ge_version": "1.18.1", "lt_version": "", "ne_version": [], "upper_version": "1.31.1", "version": "1.31.1"}, "oslotest": {"eq_version": "", "ge_version": "3.2.0", "lt_version": "", "ne_version": [], "upper_version": "3.8.1", "version": "3.8.1"}, "stestr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.5.1", "version": "2.5.1"}, "testtools": {"eq_version": "", "ge_version": "2.2.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.0", "version": "2.3.0"}, "reno": {"eq_version": "", "ge_version": "2.5.0", "lt_version": "", "ne_version": [], "upper_version": "2.11.3", "version": "2.11.3"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/azure-common.json b/tools/oos/example/train_cached_file/azure-common.json deleted file mode 100644 index a75bdd7917ad9a2f46f6f5a8de7cfca520da4190..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/azure-common.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "azure-common", "version_dict": {"version": "1.1.5", "eq_version": "", "ge_version": "1.1.5", "lt_version": "", "ne_version": [], "upper_version": ""}, "deep": {"count": 5, "list": ["aodh", "keystonemiddleware", "oslo.messaging", "kombu", "azure-servicebus", "azure-common"]}, "requires": {"azure-nspkg": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "", "version": "2.0.0"}, "msrestazure": {"eq_version": "", "ge_version": "0.4.0", "lt_version": "0.5.0", "ne_version": [], "upper_version": "", "version": "0.4.0"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/azure-nspkg.json b/tools/oos/example/train_cached_file/azure-nspkg.json deleted file mode 100644 index bb7f4804ea138cd94076b4e637382f31594aa2ed..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/azure-nspkg.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "azure-nspkg", "version_dict": {"version": "2.0.0", "eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": ""}, "deep": {"count": 6, "list": ["aodh", "keystonemiddleware", "oslo.messaging", "kombu", "azure-servicebus", "azure-common", "azure-nspkg"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/azure-servicebus.json b/tools/oos/example/train_cached_file/azure-servicebus.json deleted file mode 100644 index 10716b123919b8792e2f20bf0d1b7c85dd57fe3f..0000000000000000000000000000000000000000 --- 
a/tools/oos/example/train_cached_file/azure-servicebus.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "azure-servicebus", "version_dict": {"version": "0.21.1", "eq_version": "", "ge_version": "0.21.1", "lt_version": "", "ne_version": [], "upper_version": ""}, "deep": {"count": 4, "list": ["aodh", "keystonemiddleware", "oslo.messaging", "kombu", "azure-servicebus"]}, "requires": {"azure-common": {"eq_version": "", "ge_version": "1.1.5", "lt_version": "", "ne_version": [], "upper_version": "", "version": "1.1.5"}, "azure-nspkg": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "", "version": "2.0.0"}, "requests": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "2.22.0", "version": "2.22.0"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/bandit.json b/tools/oos/example/train_cached_file/bandit.json deleted file mode 100644 index a363b393a7016223d00ed76ba8f843870a05ad51..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/bandit.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "bandit", "version_dict": {"version": "1.6.0", "eq_version": "1.6.0", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": ""}, "deep": {"count": 1, "list": ["cinder", "bandit"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/bashate.json b/tools/oos/example/train_cached_file/bashate.json deleted file mode 100644 index fa1f8fd066223a64ff317337f622f3837e6f4e29..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/bashate.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "bashate", "version_dict": {"version": "0.6.0", "eq_version": "", "ge_version": "0.5.1", "lt_version": "", "ne_version": [], "upper_version": "0.6.0"}, "deep": {"count": 1, "list": ["ironic", "bashate"]}, "requires": {"pbr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": ["2.1.0"], "upper_version": "5.4.3", "version": "5.4.3"}, "Babel": {"eq_version": "", "ge_version": "2.3.4", "lt_version": "", "ne_version": ["2.4.0"], "upper_version": "2.7.0", "version": "2.7.0"}, "hacking": {"eq_version": "", "ge_version": "0.10.0", "lt_version": "0.11", "ne_version": [], "upper_version": "", "version": "0.10.0"}, "mock": {"eq_version": "", "ge_version": "1.2", "lt_version": "", "ne_version": [], "upper_version": "3.0.5", "version": "3.0.5"}, "coverage": {"eq_version": "", "ge_version": "3.6", "lt_version": "", "ne_version": [], "upper_version": "4.5.4", "version": "4.5.4"}, "discover": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}, "fixtures": {"eq_version": "", "ge_version": "1.3.1", "lt_version": "", "ne_version": [], "upper_version": "3.0.0", "version": "3.0.0"}, "python-subunit": {"eq_version": "", "ge_version": "0.0.18", "lt_version": "", "ne_version": [], "upper_version": "1.4.0", "version": "1.4.0"}, "Sphinx": {"eq_version": "", "ge_version": "1.6.2", "lt_version": "", "ne_version": [], "upper_version": "2.2.0", "version": "2.2.0"}, "openstackdocstheme": {"eq_version": "", "ge_version": "1.11.0", "lt_version": "", "ne_version": [], "upper_version": "1.31.1", "version": "1.31.1"}, "testrepository": {"eq_version": "", "ge_version": "0.0.18", "lt_version": "", "ne_version": [], "upper_version": "0.0.20", "version": "0.0.20"}, "testscenarios": {"eq_version": "", "ge_version": "0.4", "lt_version": "", "ne_version": [], "upper_version": "0.5.0", 
"version": "0.5.0"}, "testtools": {"eq_version": "", "ge_version": "1.4.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.0", "version": "2.3.0"}, "reno": {"eq_version": "", "ge_version": "1.8.0", "lt_version": "", "ne_version": ["2.3.1"], "upper_version": "2.11.3", "version": "2.11.3"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/bcrypt.json b/tools/oos/example/train_cached_file/bcrypt.json deleted file mode 100644 index b2c309e2500928e86b4c43f918a6b5c5e9512e9a..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/bcrypt.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "bcrypt", "version_dict": {"version": "3.1.7", "eq_version": "", "ge_version": "3.1.3", "lt_version": "", "ne_version": [], "upper_version": "3.1.7"}, "deep": {"count": 15, "list": ["aodh", "futurist", "hacking", "oslosphinx", "openstackdocstheme", "os-api-ref", "stestr", "cliff", "stevedore", "bandit", "oslotest", "os-client-config", "python-glanceclient", "tempest", "paramiko", "bcrypt"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/beautifulsoup4.json b/tools/oos/example/train_cached_file/beautifulsoup4.json deleted file mode 100644 index a838b2bcef1e3bb27976ca03cbe77c7bc3fe98a7..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/beautifulsoup4.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "beautifulsoup4", "version_dict": {"version": "4.8.0", "eq_version": "", "ge_version": "4.6.0", "lt_version": "", "ne_version": [], "upper_version": "4.8.0"}, "deep": {"count": 6, "list": ["aodh", "futurist", "hacking", "oslosphinx", "openstackdocstheme", "os-api-ref", "beautifulsoup4"]}, "requires": {"soupsieve": {"eq_version": "", "ge_version": "1.2", "lt_version": "", "ne_version": [], "upper_version": "1.9.3", "version": "1.9.3"}, "html5lib": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}, "lxml": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "4.4.1", "version": "4.4.1"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/betamax.json b/tools/oos/example/train_cached_file/betamax.json deleted file mode 100644 index f043c6e2f120d2855d28f149b1ce0b591eb6fc86..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/betamax.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "betamax", "version_dict": {"version": "0.8.1", "eq_version": "", "ge_version": "0.7.0", "lt_version": "", "ne_version": [], "upper_version": "0.8.1"}, "deep": {"count": 15, "list": ["aodh", "futurist", "hacking", "oslosphinx", "openstackdocstheme", "os-api-ref", "stestr", "cliff", "stevedore", "bandit", "oslotest", "os-client-config", "openstacksdk", "os-service-types", "keystoneauth1", "betamax"]}, "requires": {"requests": {"eq_version": "", "ge_version": "2.0", "lt_version": "", "ne_version": [], "upper_version": "2.22.0", "version": "2.22.0"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/bitmath.json b/tools/oos/example/train_cached_file/bitmath.json deleted file mode 100644 index 51cbb4fa3f87e9daee00aa16a81ae76586311a33..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/bitmath.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "bitmath", "version_dict": {"version": "1.3.3.1", "eq_version": "", "ge_version": "1.3.0", "lt_version": "", "ne_version": [], "upper_version": "1.3.3.1"}, "deep": {"count": 2, 
"list": ["cinder", "storops", "bitmath"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/boto.json b/tools/oos/example/train_cached_file/boto.json deleted file mode 100644 index e8d045a557f88e779a79d7849dec40963a61aa0d..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/boto.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "boto", "version_dict": {"version": "2.49.0", "eq_version": "", "ge_version": "2.32.1", "lt_version": "", "ne_version": [], "upper_version": "2.49.0"}, "deep": {"count": 1, "list": ["swift", "boto"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/boto3.json b/tools/oos/example/train_cached_file/boto3.json deleted file mode 100644 index 73bb0dd23ce93400f32a38942a9f2f8419544e84..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/boto3.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "boto3", "version_dict": {"version": "1.9.225", "eq_version": "", "ge_version": "1.4.4", "lt_version": "", "ne_version": [], "upper_version": "1.9.225"}, "deep": {"count": 4, "list": ["aodh", "keystonemiddleware", "oslo.messaging", "kombu", "boto3"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/botocore.json b/tools/oos/example/train_cached_file/botocore.json deleted file mode 100644 index 317c629ef92f23dcb74b2002117d0ee704eac0e6..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/botocore.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "botocore", "version_dict": {"version": "1.12.225", "eq_version": "", "ge_version": "1.12", "lt_version": "", "ne_version": [], "upper_version": "1.12.225"}, "deep": {"count": 1, "list": ["swift", "botocore"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/brotlipy.json b/tools/oos/example/train_cached_file/brotlipy.json deleted file mode 100644 index 385ba1a400f0a24c119ae04a3259d6b96e24760b..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/brotlipy.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "brotlipy", "version_dict": {"version": "0.6.0", "eq_version": "", "ge_version": "0.6.0", "lt_version": "", "ne_version": [], "upper_version": ""}, "deep": {"count": 13, "list": ["aodh", "futurist", "hacking", "mock", "Sphinx", "sphinxcontrib-applehelp", "pytest", "pluggy", "importlib-metadata", "zipp", "jaraco.packaging", "requests", "urllib3", "brotlipy"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/cachetools.json b/tools/oos/example/train_cached_file/cachetools.json deleted file mode 100644 index f8a6c1540677187f6fb303f75b54482f165b0b7c..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/cachetools.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "cachetools", "version_dict": {"version": "3.1.1", "eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.1.1"}, "deep": {"count": 3, "list": ["aodh", "keystonemiddleware", "oslo.messaging", "cachetools"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/cachez.json b/tools/oos/example/train_cached_file/cachez.json deleted file mode 100644 index b4f9e52be66f1d16f6fcbfeff7bc4be91b69d683..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/cachez.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "cachez", "version_dict": {"version": "0.1.2", 
"eq_version": "", "ge_version": "0.1.0", "lt_version": "", "ne_version": [], "upper_version": "0.1.2"}, "deep": {"count": 2, "list": ["cinder", "storops", "cachez"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/capacity.json b/tools/oos/example/train_cached_file/capacity.json deleted file mode 100644 index 285473b9143ec72713f5203ada56b81ba011e06f..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/capacity.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "capacity", "version_dict": {"version": "1.3.14", "eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "1.3.14"}, "deep": {"count": 1, "list": ["cinder", "capacity"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/cassandra-driver.json b/tools/oos/example/train_cached_file/cassandra-driver.json deleted file mode 100644 index ddc6f48c9190fd1aed2ebe124ce3c63f7efe2229..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/cassandra-driver.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "cassandra-driver", "version_dict": {"version": "3.19.0", "eq_version": "", "ge_version": "2.1.4", "lt_version": "", "ne_version": ["3.6.0"], "upper_version": "3.19.0"}, "deep": {"count": 1, "list": ["trove", "cassandra-driver"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/castellan.json b/tools/oos/example/train_cached_file/castellan.json deleted file mode 100644 index cf1268503f82389e0e6b0043ad468f2272d0f6ce..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/castellan.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "castellan", "version_dict": {"version": "1.3.4", "eq_version": "", "ge_version": "0.16.0", "lt_version": "", "ne_version": [], "upper_version": "1.3.4"}, "deep": {"count": 2, "list": ["cinder", "os-brick", "castellan"]}, "requires": {"pbr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": ["2.1.0"], "upper_version": "5.4.3", "version": "5.4.3"}, "Babel": {"eq_version": "", "ge_version": "2.3.4", "lt_version": "", "ne_version": ["2.4.0"], "upper_version": "2.7.0", "version": "2.7.0"}, "cryptography": {"eq_version": "", "ge_version": "2.1", "lt_version": "", "ne_version": [], "upper_version": "2.8", "version": "2.8"}, "python-barbicanclient": {"eq_version": "", "ge_version": "4.5.2", "lt_version": "", "ne_version": [], "upper_version": "4.9.0", "version": "4.9.0"}, "oslo.config": {"eq_version": "", "ge_version": "6.4.0", "lt_version": "", "ne_version": [], "upper_version": "6.11.3", "version": "6.11.3"}, "oslo.context": {"eq_version": "", "ge_version": "2.19.2", "lt_version": "", "ne_version": [], "upper_version": "2.23.1", "version": "2.23.1"}, "oslo.i18n": {"eq_version": "", "ge_version": "3.15.3", "lt_version": "", "ne_version": [], "upper_version": "3.24.0", "version": "3.24.0"}, "oslo.log": {"eq_version": "", "ge_version": "3.36.0", "lt_version": "", "ne_version": [], "upper_version": "3.44.3", "version": "3.44.3"}, "oslo.utils": {"eq_version": "", "ge_version": "3.33.0", "lt_version": "", "ne_version": [], "upper_version": "3.41.6", "version": "3.41.6"}, "stevedore": {"eq_version": "", "ge_version": "1.20.0", "lt_version": "", "ne_version": [], "upper_version": "1.31.0", "version": "1.31.0"}, "keystoneauth1": {"eq_version": "", "ge_version": "3.4.0", "lt_version": "", "ne_version": [], "upper_version": "3.17.4", "version": "3.17.4"}, "requests": {"eq_version": "", 
"ge_version": "2.14.2", "lt_version": "", "ne_version": ["2.20.0"], "upper_version": "2.22.0", "version": "2.22.0"}, "hacking": {"eq_version": "", "ge_version": "0.12.0", "lt_version": "0.14", "ne_version": ["0.13.0"], "upper_version": "", "version": "0.12.0"}, "coverage": {"eq_version": "", "ge_version": "4.0", "lt_version": "", "ne_version": ["4.4"], "upper_version": "4.5.4", "version": "4.5.4"}, "python-subunit": {"eq_version": "", "ge_version": "1.0.0", "lt_version": "", "ne_version": [], "upper_version": "1.4.0", "version": "1.4.0"}, "Sphinx": {"eq_version": "", "ge_version": "1.6.2", "lt_version": "", "ne_version": ["1.6.6", "1.6.7"], "upper_version": "2.2.0", "version": "2.2.0"}, "openstackdocstheme": {"eq_version": "", "ge_version": "1.18.1", "lt_version": "", "ne_version": [], "upper_version": "1.31.1", "version": "1.31.1"}, "oslotest": {"eq_version": "", "ge_version": "3.2.0", "lt_version": "", "ne_version": [], "upper_version": "3.8.1", "version": "3.8.1"}, "stestr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.5.1", "version": "2.5.1"}, "fixtures": {"eq_version": "", "ge_version": "3.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.0", "version": "3.0.0"}, "testscenarios": {"eq_version": "", "ge_version": "0.4", "lt_version": "", "ne_version": [], "upper_version": "0.5.0", "version": "0.5.0"}, "testtools": {"eq_version": "", "ge_version": "2.2.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.0", "version": "2.3.0"}, "bandit": {"eq_version": "", "ge_version": "1.1.0", "lt_version": "1.6.0", "ne_version": [], "upper_version": "", "version": "1.1.0"}, "reno": {"eq_version": "", "ge_version": "2.5.0", "lt_version": "", "ne_version": [], "upper_version": "2.11.3", "version": "2.11.3"}, "pifpaf": {"eq_version": "", "ge_version": "0.10.0", "lt_version": "", "ne_version": [], "upper_version": "2.2.2", "version": "2.2.2"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/ceilometer.json b/tools/oos/example/train_cached_file/ceilometer.json deleted file mode 100644 index dff75990bb0111bac118bc0825abad9abcc9262a..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/ceilometer.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "ceilometer", "version_dict": {"version": "13.1.2", "eq_version": "13.1.2", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": ""}, "deep": {"count": 0, "list": ["ceilometer"]}, "requires": {"cachetools": {"eq_version": "", "ge_version": "2.1.0", "lt_version": "", "ne_version": [], "upper_version": "3.1.1", "version": "3.1.1"}, "cotyledon": {"eq_version": "", "ge_version": "1.3.0", "lt_version": "", "ne_version": [], "upper_version": "1.7.3", "version": "1.7.3"}, "futurist": {"eq_version": "", "ge_version": "1.8.0", "lt_version": "", "ne_version": [], "upper_version": "1.9.0", "version": "1.9.0"}, "jsonpath-rw-ext": {"eq_version": "", "ge_version": "1.1.3", "lt_version": "", "ne_version": [], "upper_version": "1.2.2", "version": "1.2.2"}, "lxml": {"eq_version": "", "ge_version": "3.4.1", "lt_version": "", "ne_version": [], "upper_version": "4.4.1", "version": "4.4.1"}, "monotonic": {"eq_version": "", "ge_version": "0.6", "lt_version": "", "ne_version": [], "upper_version": "1.5", "version": "1.5"}, "msgpack": {"eq_version": "", "ge_version": "0.5.0", "lt_version": "", "ne_version": [], "upper_version": "0.6.1", "version": "0.6.1"}, "oslo.concurrency": {"eq_version": "", "ge_version": "3.26.0", "lt_version": "", 
"ne_version": [], "upper_version": "3.30.1", "version": "3.30.1"}, "oslo.config": {"eq_version": "", "ge_version": "5.2.0", "lt_version": "", "ne_version": [], "upper_version": "6.11.3", "version": "6.11.3"}, "oslo.i18n": {"eq_version": "", "ge_version": "3.15.3", "lt_version": "", "ne_version": [], "upper_version": "3.24.0", "version": "3.24.0"}, "oslo.log": {"eq_version": "", "ge_version": "3.36.0", "lt_version": "", "ne_version": [], "upper_version": "3.44.3", "version": "3.44.3"}, "oslo.reports": {"eq_version": "", "ge_version": "1.18.0", "lt_version": "", "ne_version": [], "upper_version": "1.30.0", "version": "1.30.0"}, "oslo.rootwrap": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "5.16.1", "version": "5.16.1"}, "pbr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "5.4.3", "version": "5.4.3"}, "oslo.messaging": {"eq_version": "", "ge_version": "6.2.0", "lt_version": "", "ne_version": [], "upper_version": "10.2.4", "version": "10.2.4"}, "oslo.upgradecheck": {"eq_version": "", "ge_version": "0.1.1", "lt_version": "", "ne_version": [], "upper_version": "0.3.2", "version": "0.3.2"}, "oslo.utils": {"eq_version": "", "ge_version": "3.37.0", "lt_version": "", "ne_version": [], "upper_version": "3.41.6", "version": "3.41.6"}, "oslo.privsep": {"eq_version": "", "ge_version": "1.32.0", "lt_version": "", "ne_version": [], "upper_version": "1.33.5", "version": "1.33.5"}, "pysnmp": {"eq_version": "", "ge_version": "4.2.3", "lt_version": "5.0.0", "ne_version": [], "upper_version": "4.4.11", "version": "4.4.11"}, "python-glanceclient": {"eq_version": "", "ge_version": "2.8.0", "lt_version": "", "ne_version": [], "upper_version": "2.17.1", "version": "2.17.1"}, "python-keystoneclient": {"eq_version": "", "ge_version": "3.15.0", "lt_version": "", "ne_version": [], "upper_version": "3.21.0", "version": "3.21.0"}, "keystoneauth1": {"eq_version": "", "ge_version": "3.9.0", "lt_version": "", "ne_version": [], "upper_version": "3.17.4", "version": "3.17.4"}, "python-neutronclient": {"eq_version": "", "ge_version": "6.7.0", "lt_version": "", "ne_version": [], "upper_version": "6.14.1", "version": "6.14.1"}, "python-novaclient": {"eq_version": "", "ge_version": "9.1.0", "lt_version": "", "ne_version": [], "upper_version": "15.1.1", "version": "15.1.1"}, "python-swiftclient": {"eq_version": "", "ge_version": "3.2.0", "lt_version": "", "ne_version": [], "upper_version": "3.8.1", "version": "3.8.1"}, "python-cinderclient": {"eq_version": "", "ge_version": "3.3.0", "lt_version": "", "ne_version": [], "upper_version": "5.0.2", "version": "5.0.2"}, "PyYAML": {"eq_version": "", "ge_version": "3.12", "lt_version": "", "ne_version": [], "upper_version": "5.1.2", "version": "5.1.2"}, "requests": {"eq_version": "", "ge_version": "2.8.1", "lt_version": "", "ne_version": ["2.9.0"], "upper_version": "2.22.0", "version": "2.22.0"}, "six": {"eq_version": "", "ge_version": "1.9.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "stevedore": {"eq_version": "", "ge_version": "1.20.0", "lt_version": "", "ne_version": [], "upper_version": "1.31.0", "version": "1.31.0"}, "tenacity": {"eq_version": "", "ge_version": "4.4.0", "lt_version": "", "ne_version": [], "upper_version": "5.1.1", "version": "5.1.1"}, "tooz": {"eq_version": "", "ge_version": "1.47.0", "lt_version": "", "ne_version": [], "upper_version": "1.66.3", "version": "1.66.3"}, "os-xenapi": {"eq_version": "", "ge_version": "0.3.3", 
"lt_version": "", "ne_version": [], "upper_version": "0.3.4", "version": "0.3.4"}, "oslo.cache": {"eq_version": "", "ge_version": "1.26.0", "lt_version": "", "ne_version": [], "upper_version": "1.37.1", "version": "1.37.1"}, "gnocchiclient": {"eq_version": "", "ge_version": "7.0.0", "lt_version": "", "ne_version": [], "upper_version": "7.0.5", "version": "7.0.5"}, "python-zaqarclient": {"eq_version": "", "ge_version": "1.3.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "coverage": {"eq_version": "", "ge_version": "4.0", "lt_version": "", "ne_version": ["4.4"], "upper_version": "4.5.4", "version": "4.5.4"}, "fixtures": {"eq_version": "", "ge_version": "3.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.0", "version": "3.0.0"}, "mock": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.5", "version": "3.0.5"}, "os-win": {"eq_version": "", "ge_version": "3.0.0", "lt_version": "", "ne_version": [], "upper_version": "4.3.3", "version": "4.3.3"}, "oslotest": {"eq_version": "", "ge_version": "3.2.0", "lt_version": "", "ne_version": [], "upper_version": "3.8.1", "version": "3.8.1"}, "oslo.vmware": {"eq_version": "", "ge_version": "2.17.0", "lt_version": "", "ne_version": [], "upper_version": "2.34.1", "version": "2.34.1"}, "pyOpenSSL": {"eq_version": "", "ge_version": "17.1.0", "lt_version": "", "ne_version": [], "upper_version": "19.1.0", "version": "19.1.0"}, "testscenarios": {"eq_version": "", "ge_version": "0.4", "lt_version": "", "ne_version": [], "upper_version": "0.5.0", "version": "0.5.0"}, "testtools": {"eq_version": "", "ge_version": "2.2.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.0", "version": "2.3.0"}, "gabbi": {"eq_version": "", "ge_version": "1.30.0", "lt_version": "", "ne_version": [], "upper_version": "1.49.0", "version": "1.49.0"}, "requests-aws": {"eq_version": "", "ge_version": "0.1.4", "lt_version": "", "ne_version": [], "upper_version": "0.1.8", "version": "0.1.8"}, "stestr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.5.1", "version": "2.5.1"}, "Sphinx": {"eq_version": "", "ge_version": "1.6.2", "lt_version": "", "ne_version": ["1.6.6", "1.6.7"], "upper_version": "2.2.0", "version": "2.2.0"}, "sphinxcontrib-httpdomain": {"eq_version": "", "ge_version": "1.3.0", "lt_version": "", "ne_version": [], "upper_version": "1.7.0", "version": "1.7.0"}, "sphinxcontrib-blockdiag": {"eq_version": "", "ge_version": "1.5.4", "lt_version": "", "ne_version": [], "upper_version": "1.5.5", "version": "1.5.5"}, "reno": {"eq_version": "", "ge_version": "2.5.0", "lt_version": "", "ne_version": [], "upper_version": "2.11.3", "version": "2.11.3"}, "os-api-ref": {"eq_version": "", "ge_version": "1.4.0", "lt_version": "", "ne_version": [], "upper_version": "1.6.2", "version": "1.6.2"}, "openstackdocstheme": {"eq_version": "", "ge_version": "1.18.1", "lt_version": "", "ne_version": [], "upper_version": "1.31.1", "version": "1.31.1"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/certifi.json b/tools/oos/example/train_cached_file/certifi.json deleted file mode 100644 index 8db03f366fda023a3d3a414b46444a6c21fa2e1f..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/certifi.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "certifi", "version_dict": {"version": "2019.6.16", "eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": 
"2019.6.16"}, "deep": {"count": 13, "list": ["aodh", "futurist", "hacking", "mock", "Sphinx", "sphinxcontrib-applehelp", "pytest", "pluggy", "importlib-metadata", "zipp", "jaraco.packaging", "requests", "urllib3", "certifi"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/cffi.json b/tools/oos/example/train_cached_file/cffi.json deleted file mode 100644 index 875f88638039578d0c2bbc487eefb95e5b83122e..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/cffi.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "cffi", "version_dict": {"version": "1.12.3", "eq_version": "", "ge_version": "1.0.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.3"}, "deep": {"count": 11, "list": ["aodh", "futurist", "hacking", "oslosphinx", "openstackdocstheme", "os-api-ref", "stestr", "subunit2sql", "oslo.db", "pifpaf", "xattr", "cffi"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/chardet.json b/tools/oos/example/train_cached_file/chardet.json deleted file mode 100644 index 189bdab16459a68ea14d9488464a9474d6c4dcd9..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/chardet.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "chardet", "version_dict": {"version": "3.0.4", "eq_version": "", "ge_version": "3.0.2", "lt_version": "3.1.0", "ne_version": [], "upper_version": "3.0.4"}, "deep": {"count": 12, "list": ["aodh", "futurist", "hacking", "mock", "Sphinx", "sphinxcontrib-applehelp", "pytest", "pluggy", "importlib-metadata", "zipp", "jaraco.packaging", "requests", "chardet"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/cinder-tempest-plugin.json b/tools/oos/example/train_cached_file/cinder-tempest-plugin.json deleted file mode 100644 index 324cc50488b37869bdbadd0b669a4b46622d1219..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/cinder-tempest-plugin.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "cinder-tempest-plugin", "version_dict": {"version": "0.3.0", "eq_version": "0.3.0", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": ""}, "deep": {"count": 0, "list": ["cinder-tempest-plugin"]}, "requires": {"pbr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": ["2.1.0"], "upper_version": "5.4.3", "version": "5.4.3"}, "oslo.config": {"eq_version": "", "ge_version": "5.1.0", "lt_version": "", "ne_version": [], "upper_version": "6.11.3", "version": "6.11.3"}, "six": {"eq_version": "", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "oslo.serialization": {"eq_version": "", "ge_version": "2.18.0", "lt_version": "", "ne_version": ["2.19.1"], "upper_version": "2.29.3", "version": "2.29.3"}, "tempest": {"eq_version": "", "ge_version": "17.1.0", "lt_version": "", "ne_version": [], "upper_version": "22.1.0", "version": "22.1.0"}, "hacking": {"eq_version": "", "ge_version": "0.12.0", "lt_version": "0.14", "ne_version": ["0.13.0"], "upper_version": "", "version": "0.12.0"}, "coverage": {"eq_version": "", "ge_version": "4.0", "lt_version": "", "ne_version": ["4.4"], "upper_version": "4.5.4", "version": "4.5.4"}, "python-subunit": {"eq_version": "", "ge_version": "1.0.0", "lt_version": "", "ne_version": [], "upper_version": "1.4.0", "version": "1.4.0"}, "Sphinx": {"eq_version": "", "ge_version": "1.6.2", "lt_version": "", "ne_version": ["1.6.6", "1.6.7"], "upper_version": "2.2.0", "version": 
"2.2.0"}, "oslotest": {"eq_version": "", "ge_version": "3.2.0", "lt_version": "", "ne_version": [], "upper_version": "3.8.1", "version": "3.8.1"}, "testrepository": {"eq_version": "", "ge_version": "0.0.18", "lt_version": "", "ne_version": [], "upper_version": "0.0.20", "version": "0.0.20"}, "testtools": {"eq_version": "", "ge_version": "2.2.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.0", "version": "2.3.0"}, "openstackdocstheme": {"eq_version": "", "ge_version": "1.18.1", "lt_version": "", "ne_version": [], "upper_version": "1.31.1", "version": "1.31.1"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/cinder.json b/tools/oos/example/train_cached_file/cinder.json deleted file mode 100644 index 9b33902bbe6bb7bbc3958545fa8e3c84c8b9160e..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/cinder.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "cinder", "version_dict": {"version": "15.6.0", "eq_version": "15.6.0", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": ""}, "deep": {"count": 0, "list": ["cinder"]}, "requires": {"pbr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": ["2.1.0"], "upper_version": "5.4.3", "version": "5.4.3"}, "decorator": {"eq_version": "", "ge_version": "3.4.0", "lt_version": "", "ne_version": [], "upper_version": "4.4.0", "version": "4.4.0"}, "defusedxml": {"eq_version": "", "ge_version": "0.5.0", "lt_version": "", "ne_version": [], "upper_version": "0.6.0", "version": "0.6.0"}, "eventlet": {"eq_version": "", "ge_version": "0.22.0", "lt_version": "", "ne_version": ["0.23.0", "0.25.0"], "upper_version": "0.25.2", "version": "0.25.2"}, "greenlet": {"eq_version": "", "ge_version": "0.4.10", "lt_version": "", "ne_version": [], "upper_version": "0.4.15", "version": "0.4.15"}, "httplib2": {"eq_version": "", "ge_version": "0.9.1", "lt_version": "", "ne_version": [], "upper_version": "0.13.1", "version": "0.13.1"}, "iso8601": {"eq_version": "", "ge_version": "0.1.11", "lt_version": "", "ne_version": [], "upper_version": "0.1.12", "version": "0.1.12"}, "jsonschema": {"eq_version": "", "ge_version": "2.6.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.2", "version": "3.0.2"}, "ipaddress": {"eq_version": "", "ge_version": "1.0.17", "lt_version": "", "ne_version": [], "upper_version": "1.0.22", "version": "1.0.22"}, "keystoneauth1": {"eq_version": "", "ge_version": "3.7.0", "lt_version": "", "ne_version": [], "upper_version": "3.17.4", "version": "3.17.4"}, "keystonemiddleware": {"eq_version": "", "ge_version": "4.21.0", "lt_version": "", "ne_version": [], "upper_version": "7.0.1", "version": "7.0.1"}, "lxml": {"eq_version": "", "ge_version": "3.4.1", "lt_version": "", "ne_version": ["3.7.0"], "upper_version": "4.4.1", "version": "4.4.1"}, "oauth2client": {"eq_version": "", "ge_version": "1.5.0", "lt_version": "", "ne_version": ["4.0.0"], "upper_version": "4.1.3", "version": "4.1.3"}, "oslo.config": {"eq_version": "", "ge_version": "5.2.0", "lt_version": "", "ne_version": [], "upper_version": "6.11.3", "version": "6.11.3"}, "oslo.concurrency": {"eq_version": "", "ge_version": "3.26.0", "lt_version": "", "ne_version": [], "upper_version": "3.30.1", "version": "3.30.1"}, "oslo.context": {"eq_version": "", "ge_version": "2.22.0", "lt_version": "", "ne_version": [], "upper_version": "2.23.1", "version": "2.23.1"}, "oslo.db": {"eq_version": "", "ge_version": "4.27.0", "lt_version": "", "ne_version": [], "upper_version": "5.0.2", "version": "5.0.2"}, 
"oslo.log": {"eq_version": "", "ge_version": "3.36.0", "lt_version": "", "ne_version": [], "upper_version": "3.44.3", "version": "3.44.3"}, "oslo.messaging": {"eq_version": "", "ge_version": "6.4.0", "lt_version": "", "ne_version": [], "upper_version": "10.2.4", "version": "10.2.4"}, "oslo.middleware": {"eq_version": "", "ge_version": "3.31.0", "lt_version": "", "ne_version": [], "upper_version": "3.38.1", "version": "3.38.1"}, "oslo.policy": {"eq_version": "", "ge_version": "1.44.1", "lt_version": "", "ne_version": [], "upper_version": "2.3.4", "version": "2.3.4"}, "oslo.privsep": {"eq_version": "", "ge_version": "1.32.0", "lt_version": "", "ne_version": [], "upper_version": "1.33.5", "version": "1.33.5"}, "oslo.reports": {"eq_version": "", "ge_version": "1.18.0", "lt_version": "", "ne_version": [], "upper_version": "1.30.0", "version": "1.30.0"}, "oslo.rootwrap": {"eq_version": "", "ge_version": "5.8.0", "lt_version": "", "ne_version": [], "upper_version": "5.16.1", "version": "5.16.1"}, "oslo.serialization": {"eq_version": "", "ge_version": "2.18.0", "lt_version": "", "ne_version": ["2.19.1"], "upper_version": "2.29.3", "version": "2.29.3"}, "oslo.service": {"eq_version": "", "ge_version": "1.31.0", "lt_version": "", "ne_version": [], "upper_version": "1.40.2", "version": "1.40.2"}, "oslo.upgradecheck": {"eq_version": "", "ge_version": "0.1.0", "lt_version": "", "ne_version": [], "upper_version": "0.3.2", "version": "0.3.2"}, "oslo.utils": {"eq_version": "", "ge_version": "3.34.0", "lt_version": "", "ne_version": [], "upper_version": "3.41.6", "version": "3.41.6"}, "oslo.versionedobjects": {"eq_version": "", "ge_version": "1.31.2", "lt_version": "", "ne_version": [], "upper_version": "1.36.1", "version": "1.36.1"}, "osprofiler": {"eq_version": "", "ge_version": "1.4.0", "lt_version": "", "ne_version": [], "upper_version": "2.8.2", "version": "2.8.2"}, "paramiko": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.6.0", "version": "2.6.0"}, "Paste": {"eq_version": "", "ge_version": "2.0.2", "lt_version": "", "ne_version": [], "upper_version": "3.2.0", "version": "3.2.0"}, "PasteDeploy": {"eq_version": "", "ge_version": "1.5.0", "lt_version": "", "ne_version": [], "upper_version": "2.0.1", "version": "2.0.1"}, "PrettyTable": {"eq_version": "", "ge_version": "0.7.1", "lt_version": "0.8", "ne_version": [], "upper_version": "", "version": "0.7.1"}, "psutil": {"eq_version": "", "ge_version": "3.2.2", "lt_version": "", "ne_version": [], "upper_version": "5.6.3", "version": "5.6.3"}, "pyparsing": {"eq_version": "", "ge_version": "2.1.0", "lt_version": "", "ne_version": [], "upper_version": "2.4.2", "version": "2.4.2"}, "python-barbicanclient": {"eq_version": "", "ge_version": "4.5.2", "lt_version": "", "ne_version": [], "upper_version": "4.9.0", "version": "4.9.0"}, "python-glanceclient": {"eq_version": "", "ge_version": "2.15.0", "lt_version": "", "ne_version": [], "upper_version": "2.17.1", "version": "2.17.1"}, "python-keystoneclient": {"eq_version": "", "ge_version": "3.15.0", "lt_version": "", "ne_version": [], "upper_version": "3.21.0", "version": "3.21.0"}, "python-novaclient": {"eq_version": "", "ge_version": "9.1.0", "lt_version": "", "ne_version": [], "upper_version": "15.1.1", "version": "15.1.1"}, "python-swiftclient": {"eq_version": "", "ge_version": "3.2.0", "lt_version": "", "ne_version": [], "upper_version": "3.8.1", "version": "3.8.1"}, "pytz": {"eq_version": "", "ge_version": "2015.7", "lt_version": "", "ne_version": [], 
"upper_version": "2019.2", "version": "2019.2"}, "requests": {"eq_version": "", "ge_version": "2.14.2", "lt_version": "", "ne_version": ["2.20.0"], "upper_version": "2.22.0", "version": "2.22.0"}, "retrying": {"eq_version": "", "ge_version": "1.2.3", "lt_version": "", "ne_version": ["1.3.0"], "upper_version": "1.3.3", "version": "1.3.3"}, "Routes": {"eq_version": "", "ge_version": "2.3.1", "lt_version": "", "ne_version": [], "upper_version": "2.4.1", "version": "2.4.1"}, "taskflow": {"eq_version": "", "ge_version": "3.2.0", "lt_version": "", "ne_version": [], "upper_version": "3.7.1", "version": "3.7.1"}, "rtslib-fb": {"eq_version": "", "ge_version": "2.1.65", "lt_version": "", "ne_version": [], "upper_version": "2.1.69", "version": "2.1.69"}, "six": {"eq_version": "", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "SQLAlchemy": {"eq_version": "", "ge_version": "1.0.10", "lt_version": "", "ne_version": ["1.1.5", "1.1.6", "1.1.7", "1.1.8"], "upper_version": "1.3.8", "version": "1.3.8"}, "sqlalchemy-migrate": {"eq_version": "", "ge_version": "0.11.0", "lt_version": "", "ne_version": [], "upper_version": "0.12.0", "version": "0.12.0"}, "stevedore": {"eq_version": "", "ge_version": "1.20.0", "lt_version": "", "ne_version": [], "upper_version": "1.31.0", "version": "1.31.0"}, "suds-jurko": {"eq_version": "", "ge_version": "0.6", "lt_version": "", "ne_version": [], "upper_version": "0.6", "version": "0.6"}, "WebOb": {"eq_version": "", "ge_version": "1.7.1", "lt_version": "", "ne_version": [], "upper_version": "1.8.5", "version": "1.8.5"}, "oslo.i18n": {"eq_version": "", "ge_version": "3.15.3", "lt_version": "", "ne_version": [], "upper_version": "3.24.0", "version": "3.24.0"}, "oslo.vmware": {"eq_version": "", "ge_version": "2.17.0", "lt_version": "", "ne_version": [], "upper_version": "2.34.1", "version": "2.34.1"}, "os-brick": {"eq_version": "", "ge_version": "2.10.5", "lt_version": "", "ne_version": [], "upper_version": "2.10.7", "version": "2.10.7"}, "os-win": {"eq_version": "", "ge_version": "4.1.0", "lt_version": "", "ne_version": [], "upper_version": "4.3.3", "version": "4.3.3"}, "tooz": {"eq_version": "", "ge_version": "1.58.0", "lt_version": "", "ne_version": [], "upper_version": "1.66.3", "version": "1.66.3"}, "google-api-python-client": {"eq_version": "", "ge_version": "1.4.2", "lt_version": "", "ne_version": [], "upper_version": "1.7.11", "version": "1.7.11"}, "castellan": {"eq_version": "", "ge_version": "0.16.0", "lt_version": "", "ne_version": [], "upper_version": "1.3.4", "version": "1.3.4"}, "cryptography": {"eq_version": "", "ge_version": "2.1.4", "lt_version": "", "ne_version": [], "upper_version": "2.8", "version": "2.8"}, "cursive": {"eq_version": "", "ge_version": "0.2.1", "lt_version": "", "ne_version": [], "upper_version": "0.2.2", "version": "0.2.2"}, "hacking": {"eq_version": "", "ge_version": "1.1.0", "lt_version": "1.2.0", "ne_version": [], "upper_version": "", "version": "1.1.0"}, "coverage": {"eq_version": "", "ge_version": "4.0", "lt_version": "", "ne_version": ["4.4"], "upper_version": "4.5.4", "version": "4.5.4"}, "ddt": {"eq_version": "", "ge_version": "1.2.1", "lt_version": "", "ne_version": [], "upper_version": "1.2.1", "version": "1.2.1"}, "fixtures": {"eq_version": "", "ge_version": "3.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.0", "version": "3.0.0"}, "mock": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.5", "version": 
"3.0.5"}, "os-api-ref": {"eq_version": "", "ge_version": "1.4.0", "lt_version": "", "ne_version": [], "upper_version": "1.6.2", "version": "1.6.2"}, "oslotest": {"eq_version": "", "ge_version": "3.2.0", "lt_version": "", "ne_version": [], "upper_version": "3.8.1", "version": "3.8.1"}, "pycodestyle": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "2.6.0", "ne_version": [], "upper_version": "", "version": "2.0.0"}, "PyMySQL": {"eq_version": "", "ge_version": "0.7.6", "lt_version": "", "ne_version": [], "upper_version": "0.9.3", "version": "0.9.3"}, "psycopg2": {"eq_version": "", "ge_version": "2.7", "lt_version": "", "ne_version": [], "upper_version": "2.8.3", "version": "2.8.3"}, "SQLAlchemy-Utils": {"eq_version": "", "ge_version": "0.33.11", "lt_version": "", "ne_version": [], "upper_version": "0.34.2", "version": "0.34.2"}, "testtools": {"eq_version": "", "ge_version": "2.2.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.0", "version": "2.3.0"}, "testresources": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.0.1", "version": "2.0.1"}, "testscenarios": {"eq_version": "", "ge_version": "0.4", "lt_version": "", "ne_version": [], "upper_version": "0.5.0", "version": "0.5.0"}, "tempest": {"eq_version": "", "ge_version": "17.1.0", "lt_version": "", "ne_version": [], "upper_version": "22.1.0", "version": "22.1.0"}, "bandit": {"eq_version": "1.6.0", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "1.6.0"}, "doc8": {"eq_version": "", "ge_version": "0.6.0", "lt_version": "", "ne_version": [], "upper_version": "0.8.0", "version": "0.8.0"}, "mox3": {"eq_version": "", "ge_version": "0.28.0", "lt_version": "", "ne_version": [], "upper_version": "0.28.0", "version": "0.28.0"}, "os-service-types": {"eq_version": "", "ge_version": "1.6.0", "lt_version": "", "ne_version": [], "upper_version": "1.7.0", "version": "1.7.0"}, "msgpack": {"eq_version": "", "ge_version": "0.5.6", "lt_version": "", "ne_version": [], "upper_version": "0.6.1", "version": "0.6.1"}, "Babel": {"eq_version": "", "ge_version": "2.7.0", "lt_version": "", "ne_version": [], "upper_version": "2.7.0", "version": "2.7.0"}, "python-3parclient": {"eq_version": "", "ge_version": "4.1.0", "lt_version": "", "ne_version": [], "upper_version": "4.2.11", "version": "4.2.11"}, "krest": {"eq_version": "", "ge_version": "1.3.0", "lt_version": "", "ne_version": [], "upper_version": "1.3.1", "version": "1.3.1"}, "purestorage": {"eq_version": "", "ge_version": "1.6.0", "lt_version": "", "ne_version": [], "upper_version": "1.17.0", "version": "1.17.0"}, "pyOpenSSL": {"eq_version": "", "ge_version": "1.0.0", "lt_version": "", "ne_version": [], "upper_version": "19.1.0", "version": "19.1.0"}, "python-lefthandclient": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "", "version": "2.0.0"}, "pywbem": {"eq_version": "", "ge_version": "0.7.0", "lt_version": "", "ne_version": [], "upper_version": "0.14.4", "version": "0.14.4"}, "pyxcli": {"eq_version": "", "ge_version": "1.1.5", "lt_version": "", "ne_version": [], "upper_version": "", "version": "1.1.5"}, "protobuf": {"eq_version": "", "ge_version": "3.6.1", "lt_version": "", "ne_version": [], "upper_version": "3.9.1", "version": "3.9.1"}, "rados": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}, "rbd": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": 
"", "version": "unknown"}, "storops": {"eq_version": "", "ge_version": "1.1.0", "lt_version": "", "ne_version": [], "upper_version": "1.2.1", "version": "1.2.1"}, "infinisdk": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "141.1.0", "version": "141.1.0"}, "capacity": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "1.3.14", "version": "1.3.14"}, "infi.dtypes.wwn": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "0.1.1", "version": "0.1.1"}, "infi.dtypes.iqn": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "0.4.0", "version": "0.4.0"}, "storpool": {"eq_version": "", "ge_version": "4.0.0", "lt_version": "", "ne_version": [], "upper_version": "5.1.0", "version": "5.1.0"}, "storpool.spopenstack": {"eq_version": "", "ge_version": "2.2.1", "lt_version": "", "ne_version": [], "upper_version": "2.2.1", "version": "2.2.1"}, "openstackdocstheme": {"eq_version": "", "ge_version": "1.20.0", "lt_version": "", "ne_version": [], "upper_version": "1.31.1", "version": "1.31.1"}, "reno": {"eq_version": "", "ge_version": "2.5.0", "lt_version": "", "ne_version": [], "upper_version": "2.11.3", "version": "2.11.3"}, "Sphinx": {"eq_version": "", "ge_version": "1.6.3", "lt_version": "", "ne_version": ["1.6.6", "1.6.7", "2.1.0"], "upper_version": "2.2.0", "version": "2.2.0"}, "sphinxcontrib-apidoc": {"eq_version": "", "ge_version": "0.2.0", "lt_version": "", "ne_version": [], "upper_version": "0.3.0", "version": "0.3.0"}, "sphinx-feature-classification": {"eq_version": "", "ge_version": "0.1.0", "lt_version": "", "ne_version": [], "upper_version": "0.4.1", "version": "0.4.1"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/click.json b/tools/oos/example/train_cached_file/click.json deleted file mode 100644 index 79e7cb7beb18c24bb4c71d3dbfe628c7e804df0d..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/click.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "click", "version_dict": {"version": "5.1", "eq_version": "", "ge_version": "5.1", "lt_version": "", "ne_version": [], "upper_version": ""}, "deep": {"count": 2, "list": ["keystone", "Flask", "click"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/cliff.json b/tools/oos/example/train_cached_file/cliff.json deleted file mode 100644 index b86c6043b442cf2b13ae12047688f8813ab2fc34..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/cliff.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "cliff", "version_dict": {"version": "2.16.0", "eq_version": "", "ge_version": "2.8.0", "lt_version": "", "ne_version": [], "upper_version": "2.16.0"}, "deep": {"count": 7, "list": ["aodh", "futurist", "hacking", "oslosphinx", "openstackdocstheme", "os-api-ref", "stestr", "cliff"]}, "requires": {"pbr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": ["2.1.0"], "upper_version": "5.4.3", "version": "5.4.3"}, "cmd2": {"eq_version": "", "ge_version": "0.8.0", "lt_version": "0.9.0", "ne_version": ["0.8.3"], "upper_version": "0.8.9", "version": "0.8.9"}, "PrettyTable": {"eq_version": "", "ge_version": "0.7.2", "lt_version": "0.8", "ne_version": [], "upper_version": "", "version": "0.7.2"}, "pyparsing": {"eq_version": "", "ge_version": "2.1.0", "lt_version": "", "ne_version": [], "upper_version": "2.4.2", "version": "2.4.2"}, "six": {"eq_version": "", "ge_version": 
"1.10.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "stevedore": {"eq_version": "", "ge_version": "1.20.0", "lt_version": "", "ne_version": [], "upper_version": "1.31.0", "version": "1.31.0"}, "unicodecsv": {"eq_version": "", "ge_version": "0.8.0", "lt_version": "", "ne_version": [], "upper_version": "0.14.1", "version": "0.14.1"}, "PyYAML": {"eq_version": "", "ge_version": "3.12", "lt_version": "", "ne_version": [], "upper_version": "5.1.2", "version": "5.1.2"}, "python-subunit": {"eq_version": "", "ge_version": "1.0.0", "lt_version": "", "ne_version": [], "upper_version": "1.4.0", "version": "1.4.0"}, "testrepository": {"eq_version": "", "ge_version": "0.0.18", "lt_version": "", "ne_version": [], "upper_version": "0.0.20", "version": "0.0.20"}, "testtools": {"eq_version": "", "ge_version": "2.2.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.0", "version": "2.3.0"}, "mock": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.5", "version": "3.0.5"}, "testscenarios": {"eq_version": "", "ge_version": "0.4", "lt_version": "", "ne_version": [], "upper_version": "0.5.0", "version": "0.5.0"}, "coverage": {"eq_version": "", "ge_version": "4.0", "lt_version": "", "ne_version": ["4.4"], "upper_version": "4.5.4", "version": "4.5.4"}, "Sphinx": {"eq_version": "", "ge_version": "1.6.2", "lt_version": "", "ne_version": ["1.6.6", "1.6.7", "2.1.0"], "upper_version": "2.2.0", "version": "2.2.0"}, "bandit": {"eq_version": "", "ge_version": "1.1.0", "lt_version": "", "ne_version": [], "upper_version": "", "version": "1.1.0"}, "openstackdocstheme": {"eq_version": "", "ge_version": "1.18.1", "lt_version": "", "ne_version": [], "upper_version": "1.31.1", "version": "1.31.1"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/cmd2.json b/tools/oos/example/train_cached_file/cmd2.json deleted file mode 100644 index 774d59c4df3b3edd8ea70405ca0fba16136fbdf6..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/cmd2.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "cmd2", "version_dict": {"version": "0.8.9", "eq_version": "", "ge_version": "0.8.0", "lt_version": "0.9.0", "ne_version": ["0.8.3"], "upper_version": "0.8.9"}, "deep": {"count": 8, "list": ["aodh", "futurist", "hacking", "oslosphinx", "openstackdocstheme", "os-api-ref", "stestr", "cliff", "cmd2"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/colorama.json b/tools/oos/example/train_cached_file/colorama.json deleted file mode 100644 index 02b47f7d26d92152b295157c0184cec91485518c..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/colorama.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "colorama", "version_dict": {"version": "0.4.1", "eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "0.4.1"}, "deep": {"count": 2, "list": ["ceilometer", "gabbi", "colorama"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/confget.json b/tools/oos/example/train_cached_file/confget.json deleted file mode 100644 index 19e199cf7bdc8bcd39b861094b73963d5bf6ff00..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/confget.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "confget", "version_dict": {"version": "2.3.0", "eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "2.3.0"}, "deep": {"count": 2, "list": 
["cinder", "storpool", "confget"]}, "requires": {"six": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/configparser.json b/tools/oos/example/train_cached_file/configparser.json deleted file mode 100644 index 3bb1cdc665b426c2e54f42c18e573ad5585e3fc4..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/configparser.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "configparser", "version_dict": {"version": "3.8.1", "eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "3.8.1"}, "deep": {"count": 6, "list": ["aodh", "futurist", "hacking", "mock", "Sphinx", "flake8", "configparser"]}, "requires": {"Sphinx": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "2.2.0", "version": "2.2.0"}, "jaraco.packaging": {"eq_version": "", "ge_version": "3.2", "lt_version": "", "ne_version": [], "upper_version": "", "version": "3.2"}, "rst.linker": {"eq_version": "", "ge_version": "1.9", "lt_version": "", "ne_version": [], "upper_version": "", "version": "1.9"}, "pytest": {"eq_version": "", "ge_version": "3.5", "lt_version": "", "ne_version": ["3.7.3"], "upper_version": "5.1.2", "version": "5.1.2"}, "pytest-checkdocs": {"eq_version": "", "ge_version": "1.2", "lt_version": "", "ne_version": [], "upper_version": "", "version": "1.2"}, "pytest-flake8": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/confluent-kafka.json b/tools/oos/example/train_cached_file/confluent-kafka.json deleted file mode 100644 index b86132f251f0235c0b94ab9480f86bc50d75d917..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/confluent-kafka.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "confluent-kafka", "version_dict": {"version": "1.1.0", "eq_version": "", "ge_version": "0.11.6", "lt_version": "", "ne_version": [], "upper_version": "1.1.0"}, "deep": {"count": 3, "list": ["aodh", "keystonemiddleware", "oslo.messaging", "confluent-kafka"]}, "requires": {"fastavro": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}, "requests": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "2.22.0", "version": "2.22.0"}, "avro-python3": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/construct.json b/tools/oos/example/train_cached_file/construct.json deleted file mode 100644 index 2dee41d2a9ef2c8d9a447d8815aa67a86eaf86a7..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/construct.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "construct", "version_dict": {"version": "2.8.22", "eq_version": "", "ge_version": "2.8.10", "lt_version": "2.9", "ne_version": [], "upper_version": "2.8.22"}, "deep": {"count": 1, "list": ["ironic-inspector", "construct"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/contextlib2.json b/tools/oos/example/train_cached_file/contextlib2.json deleted file mode 100644 index b012f2b743ab726fc6055ce88cda02c81dd30adb..0000000000000000000000000000000000000000 --- 
a/tools/oos/example/train_cached_file/contextlib2.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "contextlib2", "version_dict": {"version": "0.5.5", "eq_version": "", "ge_version": "0.4.0", "lt_version": "", "ne_version": [], "upper_version": "0.5.5"}, "deep": {"count": 2, "list": ["aodh", "futurist", "contextlib2"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/cotyledon.json b/tools/oos/example/train_cached_file/cotyledon.json deleted file mode 100644 index 49e5cdb5936a6d311480ab37f298c408bc5ad019..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/cotyledon.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "cotyledon", "version_dict": {"version": "1.7.3", "eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "1.7.3"}, "deep": {"count": 1, "list": ["aodh", "cotyledon"]}, "requires": {"setproctitle": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "1.1.10", "version": "1.1.10"}, "sphinx-rtd-theme": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}, "Sphinx": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "2.2.0", "version": "2.2.0"}, "oslo.config": {"eq_version": "", "ge_version": "3.14.0", "lt_version": "", "ne_version": [], "upper_version": "6.11.3", "version": "6.11.3"}, "mock": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "3.0.5", "version": "3.0.5"}, "pytest": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "5.1.2", "version": "5.1.2"}, "pytest-cov": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}, "pytest-xdist": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/coverage.json b/tools/oos/example/train_cached_file/coverage.json deleted file mode 100644 index 4bedf68a9312bf89407274bf6e7ae93610a379bc..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/coverage.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "coverage", "version_dict": {"version": "4.5.4", "eq_version": "", "ge_version": "3.6", "lt_version": "", "ne_version": [], "upper_version": "4.5.4"}, "deep": {"count": 3, "list": ["aodh", "futurist", "hacking", "coverage"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/cradox.json b/tools/oos/example/train_cached_file/cradox.json deleted file mode 100644 index 0bd468197deaadfd3c5284c624e3f49a91b8f57a..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/cradox.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "cradox", "version_dict": {"version": "2.1.2", "eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": ""}, "deep": {"count": 1, "list": ["gnocchi", "cradox"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/croniter.json b/tools/oos/example/train_cached_file/croniter.json deleted file mode 100644 index 813ca539d37155913b5ae7d9f7a9f6fe82c24203..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/croniter.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "croniter", "version_dict": {"version": "0.3.30", "eq_version": "", 
"ge_version": "0.3.4", "lt_version": "", "ne_version": [], "upper_version": "0.3.30"}, "deep": {"count": 1, "list": ["aodh", "croniter"]}, "requires": {"python-dateutil": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "2.8.0", "version": "2.8.0"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/cryptography.json b/tools/oos/example/train_cached_file/cryptography.json deleted file mode 100644 index 1a2095b55392a3002562a9ea5735666efe3b5e3b..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/cryptography.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "cryptography", "version_dict": {"version": "2.8", "eq_version": "", "ge_version": "2.8", "lt_version": "", "ne_version": [], "upper_version": "2.8"}, "deep": {"count": 14, "list": ["aodh", "futurist", "hacking", "mock", "Sphinx", "sphinxcontrib-applehelp", "pytest", "pluggy", "importlib-metadata", "zipp", "jaraco.packaging", "requests", "urllib3", "pyOpenSSL", "cryptography"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/cssselect.json b/tools/oos/example/train_cached_file/cssselect.json deleted file mode 100644 index 138f31254d0334565e1a624b81d661a43d438976..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/cssselect.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "cssselect", "version_dict": {"version": "0.7", "eq_version": "", "ge_version": "0.7", "lt_version": "", "ne_version": [], "upper_version": ""}, "deep": {"count": 8, "list": ["aodh", "futurist", "hacking", "oslosphinx", "openstackdocstheme", "os-api-ref", "beautifulsoup4", "lxml", "cssselect"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/cursive.json b/tools/oos/example/train_cached_file/cursive.json deleted file mode 100644 index c8b1845ac227936bb9cca11f22aa72b26acea700..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/cursive.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "cursive", "version_dict": {"version": "0.2.2", "eq_version": "", "ge_version": "0.2.1", "lt_version": "", "ne_version": [], "upper_version": "0.2.2"}, "deep": {"count": 1, "list": ["cinder", "cursive"]}, "requires": {"pbr": {"eq_version": "", "ge_version": "1.6", "lt_version": "", "ne_version": [], "upper_version": "5.4.3", "version": "5.4.3"}, "cryptography": {"eq_version": "", "ge_version": "1.0", "lt_version": "", "ne_version": ["1.3.0"], "upper_version": "2.8", "version": "2.8"}, "oslo.serialization": {"eq_version": "", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "2.29.3", "version": "2.29.3"}, "oslo.utils": {"eq_version": "", "ge_version": "3.16.0", "lt_version": "", "ne_version": [], "upper_version": "3.41.6", "version": "3.41.6"}, "oslo.i18n": {"eq_version": "", "ge_version": "2.1.0", "lt_version": "", "ne_version": [], "upper_version": "3.24.0", "version": "3.24.0"}, "oslo.log": {"eq_version": "", "ge_version": "1.14.0", "lt_version": "", "ne_version": [], "upper_version": "3.44.3", "version": "3.44.3"}, "castellan": {"eq_version": "", "ge_version": "0.4.0", "lt_version": "", "ne_version": [], "upper_version": "1.3.4", "version": "1.3.4"}, "hacking": {"eq_version": "", "ge_version": "0.11.0", "lt_version": "0.12", "ne_version": [], "upper_version": "", "version": "0.11.0"}, "coverage": {"eq_version": "", "ge_version": "3.6", "lt_version": "", "ne_version": [], "upper_version": "4.5.4", "version": "4.5.4"}, 
"python-subunit": {"eq_version": "", "ge_version": "0.0.18", "lt_version": "", "ne_version": [], "upper_version": "1.4.0", "version": "1.4.0"}, "mock": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.5", "version": "3.0.5"}, "Sphinx": {"eq_version": "", "ge_version": "1.6.2", "lt_version": "", "ne_version": ["1.6.6"], "upper_version": "2.2.0", "version": "2.2.0"}, "openstackdocstheme": {"eq_version": "", "ge_version": "1.17.0", "lt_version": "", "ne_version": [], "upper_version": "1.31.1", "version": "1.31.1"}, "oslotest": {"eq_version": "", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "3.8.1", "version": "3.8.1"}, "testrepository": {"eq_version": "", "ge_version": "0.0.18", "lt_version": "", "ne_version": [], "upper_version": "0.0.20", "version": "0.0.20"}, "testscenarios": {"eq_version": "", "ge_version": "0.4", "lt_version": "", "ne_version": [], "upper_version": "0.5.0", "version": "0.5.0"}, "testtools": {"eq_version": "", "ge_version": "1.4.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.0", "version": "2.3.0"}, "reno": {"eq_version": "", "ge_version": "1.8.0", "lt_version": "", "ne_version": [], "upper_version": "2.11.3", "version": "2.11.3"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/daiquiri.json b/tools/oos/example/train_cached_file/daiquiri.json deleted file mode 100644 index 59d97c1ad71a379721a68f2e4f649030f3f3e1e7..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/daiquiri.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "daiquiri", "version_dict": {"version": "1.6.0", "eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "1.6.0"}, "deep": {"count": 10, "list": ["aodh", "futurist", "hacking", "oslosphinx", "openstackdocstheme", "os-api-ref", "stestr", "subunit2sql", "oslo.db", "pifpaf", "daiquiri"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/ddt.json b/tools/oos/example/train_cached_file/ddt.json deleted file mode 100644 index b073eb3deb4a4d9193caadd82387fcfed3aef26c..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/ddt.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "ddt", "version_dict": {"version": "1.2.1", "eq_version": "", "ge_version": "1.0.1", "lt_version": "", "ne_version": [], "upper_version": "1.2.1"}, "deep": {"count": 18, "list": ["aodh", "futurist", "hacking", "oslosphinx", "openstackdocstheme", "os-api-ref", "stestr", "cliff", "stevedore", "bandit", "oslotest", "os-client-config", "openstacksdk", "os-service-types", "keystoneauth1", "oslo.config", "oslo.log", "oslo.utils", "ddt"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/debtcollector.json b/tools/oos/example/train_cached_file/debtcollector.json deleted file mode 100644 index ca66513a320b52b19cfd38e296c181caf785093e..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/debtcollector.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "debtcollector", "version_dict": {"version": "1.22.0", "eq_version": "", "ge_version": "1.2.0", "lt_version": "", "ne_version": [], "upper_version": "1.22.0"}, "deep": {"count": 16, "list": ["aodh", "futurist", "hacking", "oslosphinx", "openstackdocstheme", "os-api-ref", "stestr", "cliff", "stevedore", "bandit", "oslotest", "os-client-config", "openstacksdk", "os-service-types", "keystoneauth1", "oslo.config", "debtcollector"]}, "requires": 
{"pbr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": ["2.1.0"], "upper_version": "5.4.3", "version": "5.4.3"}, "six": {"eq_version": "", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "wrapt": {"eq_version": "", "ge_version": "1.7.0", "lt_version": "", "ne_version": [], "upper_version": "1.11.2", "version": "1.11.2"}, "hacking": {"eq_version": "", "ge_version": "0.10.0", "lt_version": "0.11", "ne_version": [], "upper_version": "", "version": "0.10.0"}, "coverage": {"eq_version": "", "ge_version": "4.0", "lt_version": "", "ne_version": ["4.4"], "upper_version": "4.5.4", "version": "4.5.4"}, "python-subunit": {"eq_version": "", "ge_version": "1.0.0", "lt_version": "", "ne_version": [], "upper_version": "1.4.0", "version": "1.4.0"}, "Sphinx": {"eq_version": "", "ge_version": "1.6.5", "lt_version": "", "ne_version": ["1.6.6", "1.6.7"], "upper_version": "2.2.0", "version": "2.2.0"}, "openstackdocstheme": {"eq_version": "", "ge_version": "1.18.1", "lt_version": "", "ne_version": [], "upper_version": "1.31.1", "version": "1.31.1"}, "stestr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.5.1", "version": "2.5.1"}, "testtools": {"eq_version": "", "ge_version": "2.2.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.0", "version": "2.3.0"}, "fixtures": {"eq_version": "", "ge_version": "3.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.0", "version": "3.0.0"}, "doc8": {"eq_version": "", "ge_version": "0.6.0", "lt_version": "", "ne_version": [], "upper_version": "0.8.0", "version": "0.8.0"}, "reno": {"eq_version": "", "ge_version": "2.5.0", "lt_version": "", "ne_version": [], "upper_version": "2.11.3", "version": "2.11.3"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/decorator.json b/tools/oos/example/train_cached_file/decorator.json deleted file mode 100644 index bd30b9d6006961ec87eb28530bcd5c9fef4cd4b6..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/decorator.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "decorator", "version_dict": {"version": "4.4.0", "eq_version": "", "ge_version": "3.4.0", "lt_version": "", "ne_version": [], "upper_version": "4.4.0"}, "deep": {"count": 13, "list": ["aodh", "futurist", "hacking", "oslosphinx", "openstackdocstheme", "os-api-ref", "stestr", "cliff", "stevedore", "bandit", "oslotest", "os-client-config", "openstacksdk", "decorator"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/defusedxml.json b/tools/oos/example/train_cached_file/defusedxml.json deleted file mode 100644 index 503f1b414a5ea0fbff7c89f45474988b99694c21..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/defusedxml.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "defusedxml", "version_dict": {"version": "0.6.0", "eq_version": "", "ge_version": "0.5.0", "lt_version": "", "ne_version": [], "upper_version": "0.6.0"}, "deep": {"count": 1, "list": ["cinder", "defusedxml"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/diskimage-builder.json b/tools/oos/example/train_cached_file/diskimage-builder.json deleted file mode 100644 index a3f2a2cf3c60a01b73c5bc64256d8632033e00d8..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/diskimage-builder.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "diskimage-builder", "version_dict": 
{"version": "2.30.0", "eq_version": "", "ge_version": "1.1.2", "lt_version": "", "ne_version": ["1.6.0", "1.7.0", "1.7.1"], "upper_version": "2.30.0"}, "deep": {"count": 1, "list": ["trove", "diskimage-builder"]}, "requires": {"Babel": {"eq_version": "", "ge_version": "2.3.4", "lt_version": "", "ne_version": ["2.4.0"], "upper_version": "2.7.0", "version": "2.7.0"}, "networkx": {"eq_version": "", "ge_version": "1.10", "lt_version": "", "ne_version": [], "upper_version": "2.3", "version": "2.3"}, "pbr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": ["2.1.0"], "upper_version": "5.4.3", "version": "5.4.3"}, "PyYAML": {"eq_version": "", "ge_version": "3.12", "lt_version": "", "ne_version": [], "upper_version": "5.1.2", "version": "5.1.2"}, "six": {"eq_version": "", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "stevedore": {"eq_version": "", "ge_version": "1.20.0", "lt_version": "", "ne_version": [], "upper_version": "1.31.0", "version": "1.31.0"}, "hacking": {"eq_version": "", "ge_version": "1.1.0", "lt_version": "1.2.0", "ne_version": [], "upper_version": "", "version": "1.1.0"}, "pylint": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}, "fixtures": {"eq_version": "", "ge_version": "3.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.0", "version": "3.0.0"}, "mock": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.5", "version": "3.0.5"}, "oslotest": {"eq_version": "", "ge_version": "3.2.0", "lt_version": "", "ne_version": [], "upper_version": "3.8.1", "version": "3.8.1"}, "testrepository": {"eq_version": "", "ge_version": "0.0.18", "lt_version": "", "ne_version": [], "upper_version": "0.0.20", "version": "0.0.20"}, "testtools": {"eq_version": "", "ge_version": "2.2.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.0", "version": "2.3.0"}, "coverage": {"eq_version": "", "ge_version": "4.0", "lt_version": "", "ne_version": ["4.4"], "upper_version": "4.5.4", "version": "4.5.4"}, "openstackdocstheme": {"eq_version": "", "ge_version": "1.18.1", "lt_version": "", "ne_version": [], "upper_version": "1.31.1", "version": "1.31.1"}, "Sphinx": {"eq_version": "", "ge_version": "1.6.2", "lt_version": "", "ne_version": ["1.6.6", "1.6.7", "2.1.0"], "upper_version": "2.2.0", "version": "2.2.0"}, "reno": {"eq_version": "", "ge_version": "2.5.0", "lt_version": "", "ne_version": [], "upper_version": "2.11.3", "version": "2.11.3"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/django-appconf.json b/tools/oos/example/train_cached_file/django-appconf.json deleted file mode 100644 index 0ff787817068c5561f45056971edd831778d8aa1..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/django-appconf.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "django-appconf", "version_dict": {"version": "1.0.3", "eq_version": "", "ge_version": "1.0", "lt_version": "", "ne_version": [], "upper_version": "1.0.3"}, "deep": {"count": 2, "list": ["horizon", "django-compressor", "django-appconf"]}, "requires": {"Django": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "2.0.13", "version": "2.0.13"}, "six": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}}} \ No newline at end of file diff --git 
a/tools/oos/example/train_cached_file/django-babel.json b/tools/oos/example/train_cached_file/django-babel.json deleted file mode 100644 index 76c54e9fc8ff49aa91eef79fc9464e0bf1530d3e..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/django-babel.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "django-babel", "version_dict": {"version": "0.6.2", "eq_version": "", "ge_version": "0.6.2", "lt_version": "", "ne_version": [], "upper_version": "0.6.2"}, "deep": {"count": 1, "list": ["horizon", "django-babel"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/django-compressor.json b/tools/oos/example/train_cached_file/django-compressor.json deleted file mode 100644 index 5c0c88870ea6373ac74f3a06c7f2cf8ce5d67f19..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/django-compressor.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "django-compressor", "version_dict": {"version": "2.3", "eq_version": "", "ge_version": "2.0", "lt_version": "", "ne_version": [], "upper_version": "2.3"}, "deep": {"count": 1, "list": ["horizon", "django-compressor"]}, "requires": {"django-appconf": {"eq_version": "", "ge_version": "1.0", "lt_version": "", "ne_version": [], "upper_version": "1.0.3", "version": "1.0.3"}, "rcssmin": {"eq_version": "1.0.6", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "1.0.6", "version": "1.0.6"}, "rjsmin": {"eq_version": "1.1.0", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "1.1.0", "version": "1.1.0"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/django-debreach.json b/tools/oos/example/train_cached_file/django-debreach.json deleted file mode 100644 index 5a054a8f679264df48e5b1ae34e409d5aeb61423..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/django-debreach.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "django-debreach", "version_dict": {"version": "1.5.2", "eq_version": "", "ge_version": "1.4.2", "lt_version": "", "ne_version": [], "upper_version": "1.5.2"}, "deep": {"count": 1, "list": ["horizon", "django-debreach"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/django-pyscss.json b/tools/oos/example/train_cached_file/django-pyscss.json deleted file mode 100644 index 6399617819df819cc5e30bddfe8061271e05e4f2..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/django-pyscss.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "django-pyscss", "version_dict": {"version": "2.0.2", "eq_version": "", "ge_version": "2.0.2", "lt_version": "", "ne_version": [], "upper_version": "2.0.2"}, "deep": {"count": 1, "list": ["horizon", "django-pyscss"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/dnspython.json b/tools/oos/example/train_cached_file/dnspython.json deleted file mode 100644 index c293c425a2e1140d88d5e0ccec6f7801b6a56406..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/dnspython.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "dnspython", "version_dict": {"version": "1.16.0", "eq_version": "", "ge_version": "1.15.0", "lt_version": "", "ne_version": [], "upper_version": "1.16.0"}, "deep": {"count": 19, "list": ["aodh", "futurist", "hacking", "oslosphinx", "openstackdocstheme", "os-api-ref", "stestr", "cliff", "stevedore", "bandit", "oslotest", "os-client-config", "openstacksdk", "os-service-types", "keystoneauth1", "oslo.config", 
"oslo.log", "oslo.utils", "eventlet", "dnspython"]}, "requires": {"pycryptodome": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}, "ecdsa": {"eq_version": "", "ge_version": "0.13", "lt_version": "", "ne_version": [], "upper_version": "", "version": "0.13"}, "idna": {"eq_version": "", "ge_version": "2.1", "lt_version": "", "ne_version": [], "upper_version": "2.8", "version": "2.8"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/doc8.json b/tools/oos/example/train_cached_file/doc8.json deleted file mode 100644 index d8834a542cfa5d2d1abb16af5d337c76b5afbbbf..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/doc8.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "doc8", "version_dict": {"version": "0.8.0", "eq_version": "", "ge_version": "0.6.0", "lt_version": "", "ne_version": [], "upper_version": "0.8.0"}, "deep": {"count": 17, "list": ["aodh", "futurist", "hacking", "oslosphinx", "openstackdocstheme", "os-api-ref", "stestr", "cliff", "stevedore", "bandit", "oslotest", "os-client-config", "openstacksdk", "os-service-types", "keystoneauth1", "oslo.config", "debtcollector", "doc8"]}, "requires": {"chardet": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "3.0.4", "version": "3.0.4"}, "docutils": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "0.15.2", "version": "0.15.2"}, "restructuredtext-lint": {"eq_version": "", "ge_version": "0.7", "lt_version": "", "ne_version": [], "upper_version": "1.3.0", "version": "1.3.0"}, "six": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "stevedore": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "1.31.0", "version": "1.31.0"}, "doc8": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "0.8.0", "version": "0.8.0"}, "hacking": {"eq_version": "", "ge_version": "0.9.2", "lt_version": "0.10", "ne_version": [], "upper_version": "", "version": "0.9.2"}, "nose": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "1.3.7", "version": "1.3.7"}, "oslosphinx": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "4.18.0", "version": "4.18.0"}, "Sphinx": {"eq_version": "", "ge_version": "1.1.2", "lt_version": "1.3", "ne_version": ["1.2.0"], "upper_version": "2.2.0", "version": "2.2.0"}, "testtools": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "2.3.0", "version": "2.3.0"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/docker.json b/tools/oos/example/train_cached_file/docker.json deleted file mode 100644 index 3ac38380f6816c222f5bcbed1c7937587b32b60c..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/docker.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "docker", "version_dict": {"version": "4.0.2", "eq_version": "", "ge_version": "2.4.2", "lt_version": "", "ne_version": [], "upper_version": "4.0.2"}, "deep": {"count": 4, "list": ["aodh", "gnocchiclient", "python-openstackclient", "python-zunclient", "docker"]}, "requires": {"six": {"eq_version": "", "ge_version": "1.4.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "websocket-client": {"eq_version": "", "ge_version": "0.32.0", "lt_version": 
"", "ne_version": [], "upper_version": "0.56.0", "version": "0.56.0"}, "requests": {"eq_version": "", "ge_version": "2.14.2", "lt_version": "", "ne_version": ["2.18.0"], "upper_version": "2.22.0", "version": "2.22.0"}, "paramiko": {"eq_version": "", "ge_version": "2.4.2", "lt_version": "", "ne_version": [], "upper_version": "2.6.0", "version": "2.6.0"}, "pyOpenSSL": {"eq_version": "", "ge_version": "17.5.0", "lt_version": "", "ne_version": [], "upper_version": "19.1.0", "version": "19.1.0"}, "cryptography": {"eq_version": "", "ge_version": "1.3.4", "lt_version": "", "ne_version": [], "upper_version": "2.8", "version": "2.8"}, "idna": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.8", "version": "2.8"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/docutils.json b/tools/oos/example/train_cached_file/docutils.json deleted file mode 100644 index 908cf4eec72ead660d8f5dcb528c39a2760afd14..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/docutils.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "docutils", "version_dict": {"version": "0.15.2", "eq_version": "", "ge_version": "0.12", "lt_version": "", "ne_version": [], "upper_version": "0.15.2"}, "deep": {"count": 5, "list": ["aodh", "futurist", "hacking", "mock", "Sphinx", "docutils"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/dogpile.cache.json b/tools/oos/example/train_cached_file/dogpile.cache.json deleted file mode 100644 index b82740a9f491ffab139ae8c9c605287544435566..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/dogpile.cache.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "dogpile.cache", "version_dict": {"version": "0.7.1", "eq_version": "", "ge_version": "0.6.2", "lt_version": "", "ne_version": [], "upper_version": "0.7.1"}, "deep": {"count": 13, "list": ["aodh", "futurist", "hacking", "oslosphinx", "openstackdocstheme", "os-api-ref", "stestr", "cliff", "stevedore", "bandit", "oslotest", "os-client-config", "openstacksdk", "dogpile.cache"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/dulwich.json b/tools/oos/example/train_cached_file/dulwich.json deleted file mode 100644 index c24c0fbe4dd13f78c82669916cd15c88833950c6..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/dulwich.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "dulwich", "version_dict": {"version": "0.19.13", "eq_version": "", "ge_version": "0.15.0", "lt_version": "", "ne_version": [], "upper_version": "0.19.13"}, "deep": {"count": 5, "list": ["aodh", "futurist", "hacking", "oslosphinx", "openstackdocstheme", "dulwich"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/ecdsa.json b/tools/oos/example/train_cached_file/ecdsa.json deleted file mode 100644 index 9f22c21cd34eeccfd6e5ac5e581d5d4942f9d14b..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/ecdsa.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "ecdsa", "version_dict": {"version": "0.13", "eq_version": "", "ge_version": "0.13", "lt_version": "", "ne_version": [], "upper_version": ""}, "deep": {"count": 20, "list": ["aodh", "futurist", "hacking", "oslosphinx", "openstackdocstheme", "os-api-ref", "stestr", "cliff", "stevedore", "bandit", "oslotest", "os-client-config", "openstacksdk", "os-service-types", "keystoneauth1", "oslo.config", "oslo.log", "oslo.utils", "eventlet", 
"dnspython", "ecdsa"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/elasticsearch.json b/tools/oos/example/train_cached_file/elasticsearch.json deleted file mode 100644 index d12be72bf6631d3308958abad3183737a62085cb..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/elasticsearch.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "elasticsearch", "version_dict": {"version": "2.4.1", "eq_version": "", "ge_version": "2.0.0", "lt_version": "3.0.0", "ne_version": [], "upper_version": "2.4.1"}, "deep": {"count": 4, "list": ["aodh", "gnocchiclient", "osc-lib", "osprofiler", "elasticsearch"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/entrypoints.json b/tools/oos/example/train_cached_file/entrypoints.json deleted file mode 100644 index b9ee5d286c27ddb3b6833b98234de55b7fe2c5b7..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/entrypoints.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "entrypoints", "version_dict": {"version": "0.3", "eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "0.3"}, "deep": {"count": 8, "list": ["aodh", "futurist", "hacking", "mock", "Sphinx", "setuptools", "jaraco.tidelift", "keyring", "entrypoints"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/etcd3.json b/tools/oos/example/train_cached_file/etcd3.json deleted file mode 100644 index ca933511292648c66b8dd622c80fd53e65d4ddcc..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/etcd3.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "etcd3", "version_dict": {"version": "0.10.0", "eq_version": "", "ge_version": "0.6.2", "lt_version": "", "ne_version": [], "upper_version": "0.10.0"}, "deep": {"count": 2, "list": ["aodh", "tooz", "etcd3"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/etcd3gw.json b/tools/oos/example/train_cached_file/etcd3gw.json deleted file mode 100644 index 1f639fe31e6b5a204db6916e6f9ec697fcc621ff..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/etcd3gw.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "etcd3gw", "version_dict": {"version": "0.2.4", "eq_version": "", "ge_version": "0.2.0", "lt_version": "", "ne_version": [], "upper_version": "0.2.4"}, "deep": {"count": 3, "list": ["aodh", "keystonemiddleware", "oslo.cache", "etcd3gw"]}, "requires": {"pbr": {"eq_version": "", "ge_version": "2.0", "lt_version": "", "ne_version": [], "upper_version": "5.4.3", "version": "5.4.3"}, "urllib3": {"eq_version": "", "ge_version": "1.15.1", "lt_version": "", "ne_version": [], "upper_version": "1.25.3", "version": "1.25.3"}, "requests": {"eq_version": "", "ge_version": "2.10.0", "lt_version": "", "ne_version": ["2.12.2", "2.13.0"], "upper_version": "2.22.0", "version": "2.22.0"}, "six": {"eq_version": "", "ge_version": "1.9.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "futurist": {"eq_version": "", "ge_version": "0.11.0", "lt_version": "", "ne_version": ["0.15.0"], "upper_version": "1.9.0", "version": "1.9.0"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/eventlet.json b/tools/oos/example/train_cached_file/eventlet.json deleted file mode 100644 index c48d263da4c006beedff94f762eb358b3013f384..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/eventlet.json +++ /dev/null @@ -1 
+0,0 @@ -{"name": "eventlet", "version_dict": {"version": "0.25.2", "eq_version": "", "ge_version": "0.18.2", "lt_version": "", "ne_version": ["0.18.3", "0.20.1", "0.21.0", "0.23.0"], "upper_version": "0.25.2"}, "deep": {"count": 18, "list": ["aodh", "futurist", "hacking", "oslosphinx", "openstackdocstheme", "os-api-ref", "stestr", "cliff", "stevedore", "bandit", "oslotest", "os-client-config", "openstacksdk", "os-service-types", "keystoneauth1", "oslo.config", "oslo.log", "oslo.utils", "eventlet"]}, "requires": {"dnspython": {"eq_version": "", "ge_version": "1.15.0", "lt_version": "", "ne_version": [], "upper_version": "1.16.0", "version": "1.16.0"}, "greenlet": {"eq_version": "", "ge_version": "0.3", "lt_version": "", "ne_version": [], "upper_version": "0.4.15", "version": "0.4.15"}, "monotonic": {"eq_version": "", "ge_version": "1.4", "lt_version": "", "ne_version": [], "upper_version": "1.5", "version": "1.5"}, "six": {"eq_version": "", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/extras.json b/tools/oos/example/train_cached_file/extras.json deleted file mode 100644 index ff35f4e329c15a0cbc557f33387d24e16e2ac095..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/extras.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "extras", "version_dict": {"version": "1.0.0", "eq_version": "", "ge_version": "1.0.0", "lt_version": "", "ne_version": [], "upper_version": "1.0.0"}, "deep": {"count": 13, "list": ["aodh", "futurist", "hacking", "oslosphinx", "openstackdocstheme", "os-api-ref", "stestr", "cliff", "stevedore", "bandit", "oslotest", "os-client-config", "openstacksdk", "extras"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/fasteners.json b/tools/oos/example/train_cached_file/fasteners.json deleted file mode 100644 index aa99ecf2a68c7cdf94c5baca0129e61544d6d30f..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/fasteners.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "fasteners", "version_dict": {"version": "0.14.1", "eq_version": "", "ge_version": "0.7.0", "lt_version": "", "ne_version": [], "upper_version": "0.14.1"}, "deep": {"count": 15, "list": ["aodh", "futurist", "hacking", "oslosphinx", "openstackdocstheme", "os-api-ref", "stestr", "cliff", "stevedore", "bandit", "oslotest", "os-client-config", "python-glanceclient", "tempest", "oslo.concurrency", "fasteners"]}, "requires": {"six": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "monotonic": {"eq_version": "", "ge_version": "0.1", "lt_version": "", "ne_version": [], "upper_version": "1.5", "version": "1.5"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/fixtures.json b/tools/oos/example/train_cached_file/fixtures.json deleted file mode 100644 index 1ad413b4707a82c27ec8dbb39cf9f4e63565dcad..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/fixtures.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "fixtures", "version_dict": {"version": "3.0.0", "eq_version": "", "ge_version": "0.3.14", "lt_version": "", "ne_version": [], "upper_version": "3.0.0"}, "deep": {"count": 3, "list": ["aodh", "futurist", "hacking", "fixtures"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/flake8-docstrings.json 
b/tools/oos/example/train_cached_file/flake8-docstrings.json deleted file mode 100644 index 322847f676106c731206e10da86872912cec328e..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/flake8-docstrings.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "flake8-docstrings", "version_dict": {"version": "0.2.1.post1", "eq_version": "0.2.1.post1", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": ""}, "deep": {"count": 15, "list": ["aodh", "futurist", "hacking", "oslosphinx", "openstackdocstheme", "os-api-ref", "stestr", "cliff", "stevedore", "bandit", "oslotest", "os-client-config", "openstacksdk", "os-service-types", "keystoneauth1", "flake8-docstrings"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/flake8-import-order.json b/tools/oos/example/train_cached_file/flake8-import-order.json deleted file mode 100644 index 3f2b4e59146cbc8088a48337bda84ae100f3a189..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/flake8-import-order.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "flake8-import-order", "version_dict": {"version": "0.17.1", "eq_version": "", "ge_version": "0.17.1", "lt_version": "", "ne_version": [], "upper_version": ""}, "deep": {"count": 15, "list": ["aodh", "futurist", "hacking", "oslosphinx", "openstackdocstheme", "os-api-ref", "stestr", "cliff", "stevedore", "bandit", "oslotest", "os-client-config", "openstacksdk", "os-service-types", "keystoneauth1", "flake8-import-order"]}, "requires": {"pycodestyle": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}, "setuptools": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "57.5.0", "version": "57.5.0"}, "enum34": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "1.1.6", "version": "1.1.6"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/flake8.json b/tools/oos/example/train_cached_file/flake8.json deleted file mode 100644 index cd9eec4b3a788c836835dfb2489da280f2a80d12..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/flake8.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "flake8", "version_dict": {"version": "3.5.0", "eq_version": "", "ge_version": "3.5.0", "lt_version": "", "ne_version": [], "upper_version": ""}, "deep": {"count": 5, "list": ["aodh", "futurist", "hacking", "mock", "Sphinx", "flake8"]}, "requires": {"enum34": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "1.1.6", "version": "1.1.6"}, "configparser": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "3.8.1", "version": "3.8.1"}, "pyflakes": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}, "pycodestyle": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}, "mccabe": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/freezegun.json b/tools/oos/example/train_cached_file/freezegun.json deleted file mode 100644 index 01e6d7c0e1c47c7ba297fdb223dd7726f2fc371c..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/freezegun.json +++ /dev/null @@ -1 +0,0 @@ -{"name": 
"freezegun", "version_dict": {"version": "0.3.12", "eq_version": "", "ge_version": "0.3.6", "lt_version": "", "ne_version": [], "upper_version": "0.3.12"}, "deep": {"count": 1, "list": ["keystone", "freezegun"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/future.json b/tools/oos/example/train_cached_file/future.json deleted file mode 100644 index d68bdeee7392727665f6dc16c738cc79e7486133..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/future.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "future", "version_dict": {"version": "0.17.1", "eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "0.17.1"}, "deep": {"count": 7, "list": ["aodh", "futurist", "hacking", "oslosphinx", "openstackdocstheme", "os-api-ref", "stestr", "future"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/futurist.json b/tools/oos/example/train_cached_file/futurist.json deleted file mode 100644 index 47d3c1079c08390f0cd7a2f9dd8b183cddf3b5d4..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/futurist.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "futurist", "version_dict": {"version": "1.9.0", "eq_version": "", "ge_version": "0.11.0", "lt_version": "", "ne_version": [], "upper_version": "1.9.0"}, "deep": {"count": 1, "list": ["aodh", "futurist"]}, "requires": {"pbr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": ["2.1.0"], "upper_version": "5.4.3", "version": "5.4.3"}, "six": {"eq_version": "", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "monotonic": {"eq_version": "", "ge_version": "0.6", "lt_version": "", "ne_version": [], "upper_version": "1.5", "version": "1.5"}, "contextlib2": {"eq_version": "", "ge_version": "0.4.0", "lt_version": "", "ne_version": [], "upper_version": "0.5.5", "version": "0.5.5"}, "PrettyTable": {"eq_version": "", "ge_version": "0.7.1", "lt_version": "0.8", "ne_version": [], "upper_version": "", "version": "0.7.1"}, "hacking": {"eq_version": "", "ge_version": "0.10.0", "lt_version": "0.11", "ne_version": [], "upper_version": "", "version": "0.10.0"}, "eventlet": {"eq_version": "", "ge_version": "0.18.2", "lt_version": "", "ne_version": ["0.18.3", "0.20.1"], "upper_version": "0.25.2", "version": "0.25.2"}, "doc8": {"eq_version": "", "ge_version": "0.6.0", "lt_version": "", "ne_version": [], "upper_version": "0.8.0", "version": "0.8.0"}, "coverage": {"eq_version": "", "ge_version": "4.0", "lt_version": "", "ne_version": ["4.4"], "upper_version": "4.5.4", "version": "4.5.4"}, "python-subunit": {"eq_version": "", "ge_version": "1.0.0", "lt_version": "", "ne_version": [], "upper_version": "1.4.0", "version": "1.4.0"}, "oslotest": {"eq_version": "", "ge_version": "3.2.0", "lt_version": "", "ne_version": [], "upper_version": "3.8.1", "version": "3.8.1"}, "stestr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.5.1", "version": "2.5.1"}, "testscenarios": {"eq_version": "", "ge_version": "0.4", "lt_version": "", "ne_version": [], "upper_version": "0.5.0", "version": "0.5.0"}, "testtools": {"eq_version": "", "ge_version": "2.2.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.0", "version": "2.3.0"}, "Sphinx": {"eq_version": "", "ge_version": "1.6.2", "lt_version": "", "ne_version": ["1.6.6", "1.6.7"], "upper_version": "2.2.0", "version": "2.2.0"}, 
"openstackdocstheme": {"eq_version": "", "ge_version": "1.18.1", "lt_version": "", "ne_version": [], "upper_version": "1.31.1", "version": "1.31.1"}, "reno": {"eq_version": "", "ge_version": "2.5.0", "lt_version": "", "ne_version": [], "upper_version": "2.11.3", "version": "2.11.3"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/gabbi.json b/tools/oos/example/train_cached_file/gabbi.json deleted file mode 100644 index aa8b059b5fe2339033fb198fd38cd676af7085d3..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/gabbi.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "gabbi", "version_dict": {"version": "1.49.0", "eq_version": "", "ge_version": "1.30.0", "lt_version": "", "ne_version": [], "upper_version": "1.49.0"}, "deep": {"count": 1, "list": ["ceilometer", "gabbi"]}, "requires": {"pbr": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "5.4.3", "version": "5.4.3"}, "pytest": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "5.1.2", "version": "5.1.2"}, "six": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "PyYAML": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "5.1.2", "version": "5.1.2"}, "urllib3": {"eq_version": "", "ge_version": "1.11.0", "lt_version": "", "ne_version": [], "upper_version": "1.25.3", "version": "1.25.3"}, "certifi": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "2019.6.16", "version": "2019.6.16"}, "jsonpath-rw-ext": {"eq_version": "", "ge_version": "1.0.0", "lt_version": "", "ne_version": [], "upper_version": "1.2.2", "version": "1.2.2"}, "wsgi-intercept": {"eq_version": "", "ge_version": "1.8.1", "lt_version": "", "ne_version": [], "upper_version": "1.8.1", "version": "1.8.1"}, "colorama": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "0.4.1", "version": "0.4.1"}, "testtools": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "2.3.0", "version": "2.3.0"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/gitdb2.json b/tools/oos/example/train_cached_file/gitdb2.json deleted file mode 100644 index 3e59ee7e38d85913cd4d8d1137b99aed6c89511b..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/gitdb2.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "gitdb2", "version_dict": {"version": "2.0.5", "eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.0.5"}, "deep": {"count": 11, "list": ["aodh", "futurist", "hacking", "oslosphinx", "openstackdocstheme", "os-api-ref", "stestr", "cliff", "stevedore", "bandit", "GitPython", "gitdb2"]}, "requires": {"smmap2": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.0.5", "version": "2.0.5"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/glance-store.json b/tools/oos/example/train_cached_file/glance-store.json deleted file mode 100644 index 35b8123147f4f8fe8548ea80218e03ccd2b307ea..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/glance-store.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "glance-store", "version_dict": {"version": "1.0.1", "eq_version": "", "ge_version": "1.0.0", "lt_version": "", "ne_version": [], "upper_version": "1.0.1"}, "deep": 
{"count": 1, "list": ["glance", "glance-store"]}, "requires": {"oslo.config": {"eq_version": "", "ge_version": "5.2.0", "lt_version": "", "ne_version": [], "upper_version": "6.11.3", "version": "6.11.3"}, "oslo.i18n": {"eq_version": "", "ge_version": "3.15.3", "lt_version": "", "ne_version": [], "upper_version": "3.24.0", "version": "3.24.0"}, "oslo.serialization": {"eq_version": "", "ge_version": "2.18.0", "lt_version": "", "ne_version": ["2.19.1"], "upper_version": "2.29.3", "version": "2.29.3"}, "oslo.utils": {"eq_version": "", "ge_version": "3.33.0", "lt_version": "", "ne_version": [], "upper_version": "3.41.6", "version": "3.41.6"}, "oslo.concurrency": {"eq_version": "", "ge_version": "3.26.0", "lt_version": "", "ne_version": [], "upper_version": "3.30.1", "version": "3.30.1"}, "stevedore": {"eq_version": "", "ge_version": "1.20.0", "lt_version": "", "ne_version": [], "upper_version": "1.31.0", "version": "1.31.0"}, "eventlet": {"eq_version": "", "ge_version": "0.18.2", "lt_version": "", "ne_version": ["0.18.3", "0.20.1"], "upper_version": "0.25.2", "version": "0.25.2"}, "six": {"eq_version": "", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "jsonschema": {"eq_version": "", "ge_version": "2.6.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.2", "version": "3.0.2"}, "keystoneauth1": {"eq_version": "", "ge_version": "3.4.0", "lt_version": "", "ne_version": [], "upper_version": "3.17.4", "version": "3.17.4"}, "python-keystoneclient": {"eq_version": "", "ge_version": "3.8.0", "lt_version": "", "ne_version": [], "upper_version": "3.21.0", "version": "3.21.0"}, "requests": {"eq_version": "", "ge_version": "2.14.2", "lt_version": "", "ne_version": [], "upper_version": "2.22.0", "version": "2.22.0"}, "python-cinderclient": {"eq_version": "", "ge_version": "3.3.0", "lt_version": "", "ne_version": [], "upper_version": "5.0.2", "version": "5.0.2"}, "os-brick": {"eq_version": "", "ge_version": "2.2.0", "lt_version": "", "ne_version": [], "upper_version": "2.10.7", "version": "2.10.7"}, "oslo.rootwrap": {"eq_version": "", "ge_version": "5.8.0", "lt_version": "", "ne_version": [], "upper_version": "5.16.1", "version": "5.16.1"}, "oslo.privsep": {"eq_version": "", "ge_version": "1.23.0", "lt_version": "", "ne_version": [], "upper_version": "1.33.5", "version": "1.33.5"}, "httplib2": {"eq_version": "", "ge_version": "0.9.1", "lt_version": "", "ne_version": [], "upper_version": "0.13.1", "version": "0.13.1"}, "python-swiftclient": {"eq_version": "", "ge_version": "3.2.0", "lt_version": "", "ne_version": [], "upper_version": "3.8.1", "version": "3.8.1"}, "hacking": {"eq_version": "", "ge_version": "0.12.0", "lt_version": "0.14", "ne_version": ["0.13.0"], "upper_version": "", "version": "0.12.0"}, "doc8": {"eq_version": "", "ge_version": "0.6.0", "lt_version": "", "ne_version": [], "upper_version": "0.8.0", "version": "0.8.0"}, "mock": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.5", "version": "3.0.5"}, "coverage": {"eq_version": "", "ge_version": "4.0", "lt_version": "", "ne_version": ["4.4"], "upper_version": "4.5.4", "version": "4.5.4"}, "fixtures": {"eq_version": "", "ge_version": "3.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.0", "version": "3.0.0"}, "python-subunit": {"eq_version": "", "ge_version": "1.0.0", "lt_version": "", "ne_version": [], "upper_version": "1.4.0", "version": "1.4.0"}, "requests-mock": {"eq_version": "", "ge_version": "1.2.0", 
"lt_version": "", "ne_version": [], "upper_version": "1.6.0", "version": "1.6.0"}, "stestr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.5.1", "version": "2.5.1"}, "testscenarios": {"eq_version": "", "ge_version": "0.4", "lt_version": "", "ne_version": [], "upper_version": "0.5.0", "version": "0.5.0"}, "testtools": {"eq_version": "", "ge_version": "2.2.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.0", "version": "2.3.0"}, "oslotest": {"eq_version": "", "ge_version": "3.2.0", "lt_version": "", "ne_version": [], "upper_version": "3.8.1", "version": "3.8.1"}, "os-testr": {"eq_version": "", "ge_version": "1.0.0", "lt_version": "", "ne_version": [], "upper_version": "1.1.0", "version": "1.1.0"}, "oslo.vmware": {"eq_version": "", "ge_version": "2.17.0", "lt_version": "", "ne_version": [], "upper_version": "2.34.1", "version": "2.34.1"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/glance.json b/tools/oos/example/train_cached_file/glance.json deleted file mode 100644 index 4edac725dfe478e7214478fc25dd0cb8eac244b0..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/glance.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "glance", "version_dict": {"version": "19.0.4", "eq_version": "19.0.4", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": ""}, "deep": {"count": 0, "list": ["glance"]}, "requires": {"pbr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": ["2.1.0"], "upper_version": "5.4.3", "version": "5.4.3"}, "defusedxml": {"eq_version": "", "ge_version": "0.5.0", "lt_version": "", "ne_version": [], "upper_version": "0.6.0", "version": "0.6.0"}, "SQLAlchemy": {"eq_version": "", "ge_version": "1.0.10", "lt_version": "", "ne_version": ["1.1.5", "1.1.6", "1.1.7", "1.1.8"], "upper_version": "1.3.8", "version": "1.3.8"}, "eventlet": {"eq_version": "", "ge_version": "0.22.0", "lt_version": "", "ne_version": ["0.23.0", "0.25.0"], "upper_version": "0.25.2", "version": "0.25.2"}, "PasteDeploy": {"eq_version": "", "ge_version": "1.5.0", "lt_version": "", "ne_version": [], "upper_version": "2.0.1", "version": "2.0.1"}, "Routes": {"eq_version": "", "ge_version": "2.3.1", "lt_version": "", "ne_version": [], "upper_version": "2.4.1", "version": "2.4.1"}, "WebOb": {"eq_version": "", "ge_version": "1.8.1", "lt_version": "", "ne_version": [], "upper_version": "1.8.5", "version": "1.8.5"}, "sqlalchemy-migrate": {"eq_version": "", "ge_version": "0.11.0", "lt_version": "", "ne_version": [], "upper_version": "0.12.0", "version": "0.12.0"}, "sqlparse": {"eq_version": "", "ge_version": "0.2.2", "lt_version": "", "ne_version": [], "upper_version": "0.3.0", "version": "0.3.0"}, "alembic": {"eq_version": "", "ge_version": "0.8.10", "lt_version": "", "ne_version": [], "upper_version": "1.1.0", "version": "1.1.0"}, "httplib2": {"eq_version": "", "ge_version": "0.9.1", "lt_version": "", "ne_version": [], "upper_version": "0.13.1", "version": "0.13.1"}, "oslo.config": {"eq_version": "", "ge_version": "5.2.0", "lt_version": "", "ne_version": [], "upper_version": "6.11.3", "version": "6.11.3"}, "oslo.concurrency": {"eq_version": "", "ge_version": "3.26.0", "lt_version": "", "ne_version": [], "upper_version": "3.30.1", "version": "3.30.1"}, "oslo.context": {"eq_version": "", "ge_version": "2.19.2", "lt_version": "", "ne_version": [], "upper_version": "2.23.1", "version": "2.23.1"}, "oslo.upgradecheck": {"eq_version": "", "ge_version": "0.1.0", "lt_version": 
"", "ne_version": [], "upper_version": "0.3.2", "version": "0.3.2"}, "oslo.utils": {"eq_version": "", "ge_version": "3.33.0", "lt_version": "", "ne_version": [], "upper_version": "3.41.6", "version": "3.41.6"}, "stevedore": {"eq_version": "", "ge_version": "1.20.0", "lt_version": "", "ne_version": [], "upper_version": "1.31.0", "version": "1.31.0"}, "futurist": {"eq_version": "", "ge_version": "1.2.0", "lt_version": "", "ne_version": [], "upper_version": "1.9.0", "version": "1.9.0"}, "taskflow": {"eq_version": "", "ge_version": "2.16.0", "lt_version": "", "ne_version": [], "upper_version": "3.7.1", "version": "3.7.1"}, "keystoneauth1": {"eq_version": "", "ge_version": "3.4.0", "lt_version": "", "ne_version": [], "upper_version": "3.17.4", "version": "3.17.4"}, "keystonemiddleware": {"eq_version": "", "ge_version": "4.17.0", "lt_version": "", "ne_version": [], "upper_version": "7.0.1", "version": "7.0.1"}, "WSME": {"eq_version": "", "ge_version": "0.8.0", "lt_version": "", "ne_version": [], "upper_version": "0.9.3", "version": "0.9.3"}, "PrettyTable": {"eq_version": "", "ge_version": "0.7.1", "lt_version": "0.8", "ne_version": [], "upper_version": "", "version": "0.7.1"}, "Paste": {"eq_version": "", "ge_version": "2.0.2", "lt_version": "", "ne_version": [], "upper_version": "3.2.0", "version": "3.2.0"}, "jsonschema": {"eq_version": "", "ge_version": "2.6.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.2", "version": "3.0.2"}, "pyOpenSSL": {"eq_version": "", "ge_version": "17.1.0", "lt_version": "", "ne_version": [], "upper_version": "19.1.0", "version": "19.1.0"}, "six": {"eq_version": "", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "oslo.db": {"eq_version": "", "ge_version": "4.27.0", "lt_version": "", "ne_version": [], "upper_version": "5.0.2", "version": "5.0.2"}, "oslo.i18n": {"eq_version": "", "ge_version": "3.15.3", "lt_version": "", "ne_version": [], "upper_version": "3.24.0", "version": "3.24.0"}, "oslo.log": {"eq_version": "", "ge_version": "3.36.0", "lt_version": "", "ne_version": [], "upper_version": "3.44.3", "version": "3.44.3"}, "oslo.messaging": {"eq_version": "", "ge_version": "5.29.0", "lt_version": "", "ne_version": ["9.0.0"], "upper_version": "10.2.4", "version": "10.2.4"}, "oslo.middleware": {"eq_version": "", "ge_version": "3.31.0", "lt_version": "", "ne_version": [], "upper_version": "3.38.1", "version": "3.38.1"}, "oslo.reports": {"eq_version": "", "ge_version": "1.18.0", "lt_version": "", "ne_version": [], "upper_version": "1.30.0", "version": "1.30.0"}, "oslo.policy": {"eq_version": "", "ge_version": "1.30.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.4", "version": "2.3.4"}, "retrying": {"eq_version": "", "ge_version": "1.2.3", "lt_version": "", "ne_version": ["1.3.0"], "upper_version": "1.3.3", "version": "1.3.3"}, "osprofiler": {"eq_version": "", "ge_version": "1.4.0", "lt_version": "", "ne_version": [], "upper_version": "2.8.2", "version": "2.8.2"}, "glance-store": {"eq_version": "", "ge_version": "1.0.0", "lt_version": "", "ne_version": [], "upper_version": "1.0.1", "version": "1.0.1"}, "debtcollector": {"eq_version": "", "ge_version": "1.2.0", "lt_version": "", "ne_version": [], "upper_version": "1.22.0", "version": "1.22.0"}, "cryptography": {"eq_version": "", "ge_version": "2.1", "lt_version": "", "ne_version": [], "upper_version": "2.8", "version": "2.8"}, "cursive": {"eq_version": "", "ge_version": "0.2.1", "lt_version": "", "ne_version": [], "upper_version": 
"0.2.2", "version": "0.2.2"}, "iso8601": {"eq_version": "", "ge_version": "0.1.11", "lt_version": "", "ne_version": [], "upper_version": "0.1.12", "version": "0.1.12"}, "os-win": {"eq_version": "", "ge_version": "3.0.0", "lt_version": "", "ne_version": [], "upper_version": "4.3.3", "version": "4.3.3"}, "castellan": {"eq_version": "", "ge_version": "0.17.0", "lt_version": "", "ne_version": [], "upper_version": "1.3.4", "version": "1.3.4"}, "hacking": {"eq_version": "", "ge_version": "0.12.0", "lt_version": "0.14", "ne_version": ["0.13.0"], "upper_version": "", "version": "0.12.0"}, "Babel": {"eq_version": "", "ge_version": "2.3.4", "lt_version": "", "ne_version": ["2.4.0"], "upper_version": "2.7.0", "version": "2.7.0"}, "coverage": {"eq_version": "", "ge_version": "4.0", "lt_version": "", "ne_version": ["4.4"], "upper_version": "4.5.4", "version": "4.5.4"}, "ddt": {"eq_version": "", "ge_version": "1.0.1", "lt_version": "", "ne_version": [], "upper_version": "1.2.1", "version": "1.2.1"}, "fixtures": {"eq_version": "", "ge_version": "3.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.0", "version": "3.0.0"}, "mock": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.5", "version": "3.0.5"}, "Sphinx": {"eq_version": "", "ge_version": "1.6.2", "lt_version": "", "ne_version": ["1.6.6", "1.6.7", "2.1.0"], "upper_version": "2.2.0", "version": "2.2.0"}, "requests": {"eq_version": "", "ge_version": "2.14.2", "lt_version": "", "ne_version": [], "upper_version": "2.22.0", "version": "2.22.0"}, "testrepository": {"eq_version": "", "ge_version": "0.0.18", "lt_version": "", "ne_version": [], "upper_version": "0.0.20", "version": "0.0.20"}, "testresources": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.0.1", "version": "2.0.1"}, "testscenarios": {"eq_version": "", "ge_version": "0.4", "lt_version": "", "ne_version": [], "upper_version": "0.5.0", "version": "0.5.0"}, "testtools": {"eq_version": "", "ge_version": "2.2.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.0", "version": "2.3.0"}, "psutil": {"eq_version": "", "ge_version": "3.2.2", "lt_version": "", "ne_version": [], "upper_version": "5.6.3", "version": "5.6.3"}, "oslotest": {"eq_version": "", "ge_version": "3.2.0", "lt_version": "", "ne_version": [], "upper_version": "3.8.1", "version": "3.8.1"}, "stestr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.5.1", "version": "2.5.1"}, "doc8": {"eq_version": "", "ge_version": "0.6.0", "lt_version": "", "ne_version": [], "upper_version": "0.8.0", "version": "0.8.0"}, "PyMySQL": {"eq_version": "", "ge_version": "0.7.6", "lt_version": "", "ne_version": [], "upper_version": "0.9.3", "version": "0.9.3"}, "psycopg2": {"eq_version": "", "ge_version": "2.6.2", "lt_version": "", "ne_version": [], "upper_version": "2.8.3", "version": "2.8.3"}, "pysendfile": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.0.1", "version": "2.0.1"}, "xattr": {"eq_version": "", "ge_version": "0.9.2", "lt_version": "", "ne_version": [], "upper_version": "0.9.6", "version": "0.9.6"}, "python-swiftclient": {"eq_version": "", "ge_version": "3.2.0", "lt_version": "", "ne_version": [], "upper_version": "3.8.1", "version": "3.8.1"}, "os-api-ref": {"eq_version": "", "ge_version": "1.4.0", "lt_version": "", "ne_version": [], "upper_version": "1.6.2", "version": "1.6.2"}, "openstackdocstheme": {"eq_version": "", 
"ge_version": "1.20.0", "lt_version": "", "ne_version": [], "upper_version": "1.31.1", "version": "1.31.1"}, "reno": {"eq_version": "", "ge_version": "2.5.0", "lt_version": "", "ne_version": [], "upper_version": "2.11.3", "version": "2.11.3"}, "sphinxcontrib-apidoc": {"eq_version": "", "ge_version": "0.2.0", "lt_version": "", "ne_version": [], "upper_version": "0.3.0", "version": "0.3.0"}, "whereto": {"eq_version": "", "ge_version": "0.3.0", "lt_version": "", "ne_version": [], "upper_version": "0.4.0", "version": "0.4.0"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/gnocchi.json b/tools/oos/example/train_cached_file/gnocchi.json deleted file mode 100644 index ca2f9a986a5f8349ade11a43451db8bcb748e850..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/gnocchi.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "gnocchi", "version_dict": {"version": "4.3.5", "eq_version": "4.3.5", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": ""}, "deep": {"count": 0, "list": ["gnocchi"]}, "requires": {"numpy": {"eq_version": "", "ge_version": "1.9.0", "lt_version": "", "ne_version": [], "upper_version": "1.17.2", "version": "1.17.2"}, "iso8601": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "0.1.12", "version": "0.1.12"}, "oslo.config": {"eq_version": "", "ge_version": "3.22.0", "lt_version": "", "ne_version": [], "upper_version": "6.11.3", "version": "6.11.3"}, "oslo.policy": {"eq_version": "", "ge_version": "0.3.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.4", "version": "2.3.4"}, "oslo.middleware": {"eq_version": "", "ge_version": "3.22.0", "lt_version": "", "ne_version": [], "upper_version": "3.38.1", "version": "3.38.1"}, "pytimeparse": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}, "pecan": {"eq_version": "", "ge_version": "0.9", "lt_version": "", "ne_version": [], "upper_version": "1.3.3", "version": "1.3.3"}, "jsonpatch": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "1.24", "version": "1.24"}, "cotyledon": {"eq_version": "", "ge_version": "1.5.0", "lt_version": "", "ne_version": [], "upper_version": "1.7.3", "version": "1.7.3"}, "six": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "stevedore": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "1.31.0", "version": "1.31.0"}, "ujson": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "1.35", "version": "1.35"}, "voluptuous": {"eq_version": "", "ge_version": "0.8.10", "lt_version": "", "ne_version": [], "upper_version": "0.11.7", "version": "0.11.7"}, "Werkzeug": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "0.15.6", "version": "0.15.6"}, "tenacity": {"eq_version": "", "ge_version": "4.6.0", "lt_version": "", "ne_version": [], "upper_version": "5.1.1", "version": "5.1.1"}, "WebOb": {"eq_version": "", "ge_version": "1.4.1", "lt_version": "", "ne_version": [], "upper_version": "1.8.5", "version": "1.8.5"}, "Paste": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "3.2.0", "version": "3.2.0"}, "PasteDeploy": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "2.0.1", "version": "2.0.1"}, "monotonic": {"eq_version": "", 
"ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "1.5", "version": "1.5"}, "daiquiri": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "1.6.0", "version": "1.6.0"}, "pyparsing": {"eq_version": "", "ge_version": "2.2.0", "lt_version": "", "ne_version": [], "upper_version": "2.4.2", "version": "2.4.2"}, "lz4": {"eq_version": "", "ge_version": "0.9.0", "lt_version": "", "ne_version": [], "upper_version": "", "version": "0.9.0"}, "tooz": {"eq_version": "", "ge_version": "1.38", "lt_version": "", "ne_version": [], "upper_version": "1.66.3", "version": "1.66.3"}, "cachetools": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "3.1.1", "version": "3.1.1"}, "cradox": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "", "version": "2.0.0"}, "python-rados": {"eq_version": "", "ge_version": "12.2.0", "lt_version": "", "ne_version": [], "upper_version": "", "version": "12.2.0"}, "chardet": {"eq_version": "", "ge_version": "", "lt_version": "4", "ne_version": [], "upper_version": "3.0.4", "version": "3.0.4"}, "Sphinx": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "2.2.0", "version": "2.2.0"}, "sphinx-rtd-theme": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}, "sphinxcontrib-httpdomain": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "1.7.0", "version": "1.7.0"}, "PyYAML": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "5.1.2", "version": "5.1.2"}, "Jinja2": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "2.10.1", "version": "2.10.1"}, "reno": {"eq_version": "", "ge_version": "1.6.2", "lt_version": "", "ne_version": [], "upper_version": "2.11.3", "version": "2.11.3"}, "keystonemiddleware": {"eq_version": "", "ge_version": "4.0.0", "lt_version": "", "ne_version": ["4.19.0"], "upper_version": "7.0.1", "version": "7.0.1"}, "PyMySQL": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "0.9.3", "version": "0.9.3"}, "oslo.db": {"eq_version": "", "ge_version": "4.29.0", "lt_version": "", "ne_version": [], "upper_version": "5.0.2", "version": "5.0.2"}, "SQLAlchemy": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "1.3.8", "version": "1.3.8"}, "SQLAlchemy-Utils": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "0.34.2", "version": "0.34.2"}, "alembic": {"eq_version": "", "ge_version": "0.7.6", "lt_version": "", "ne_version": ["0.8.1", "0.9.0"], "upper_version": "1.1.0", "version": "1.1.0"}, "psycopg2": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "2.8.3", "version": "2.8.3"}, "python-snappy": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}, "protobuf": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "3.9.1", "version": "3.9.1"}, "redis": {"eq_version": "", "ge_version": "2.10.0", "lt_version": "", "ne_version": [], "upper_version": "3.3.8", "version": "3.3.8"}, "hiredis": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "1.0.0", "version": "1.0.0"}, "boto3": {"eq_version": "", "ge_version": "", "lt_version": "", 
"ne_version": [], "upper_version": "1.9.225", "version": "1.9.225"}, "botocore": {"eq_version": "", "ge_version": "1.5", "lt_version": "", "ne_version": [], "upper_version": "1.12.225", "version": "1.12.225"}, "python-swiftclient": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "3.8.1", "version": "3.8.1"}, "pifpaf": {"eq_version": "", "ge_version": "1.0.1", "lt_version": "", "ne_version": [], "upper_version": "2.2.2", "version": "2.2.2"}, "gabbi": {"eq_version": "", "ge_version": "1.37.0", "lt_version": "", "ne_version": [], "upper_version": "1.49.0", "version": "1.49.0"}, "coverage": {"eq_version": "", "ge_version": "3.6", "lt_version": "", "ne_version": [], "upper_version": "4.5.4", "version": "4.5.4"}, "fixtures": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "3.0.0", "version": "3.0.0"}, "mock": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "3.0.5", "version": "3.0.5"}, "python-subunit": {"eq_version": "", "ge_version": "0.0.18", "lt_version": "", "ne_version": [], "upper_version": "1.4.0", "version": "1.4.0"}, "os-testr": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "1.1.0", "version": "1.1.0"}, "testrepository": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "0.0.20", "version": "0.0.20"}, "testscenarios": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "0.5.0", "version": "0.5.0"}, "testresources": {"eq_version": "", "ge_version": "0.2.4", "lt_version": "", "ne_version": [], "upper_version": "2.0.1", "version": "2.0.1"}, "testtools": {"eq_version": "", "ge_version": "0.9.38", "lt_version": "", "ne_version": [], "upper_version": "2.3.0", "version": "2.3.0"}, "WebTest": {"eq_version": "", "ge_version": "2.0.16", "lt_version": "", "ne_version": [], "upper_version": "2.0.33", "version": "2.0.33"}, "wsgi-intercept": {"eq_version": "", "ge_version": "1.4.1", "lt_version": "", "ne_version": [], "upper_version": "1.8.1", "version": "1.8.1"}, "xattr": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": ["0.9.4"], "upper_version": "0.9.6", "version": "0.9.6"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/gnocchiclient.json b/tools/oos/example/train_cached_file/gnocchiclient.json deleted file mode 100644 index 64d3b1ac0d9ff16331f6151aea7750edcb2c562c..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/gnocchiclient.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "gnocchiclient", "version_dict": {"version": "7.0.5", "eq_version": "", "ge_version": "3.1.0", "lt_version": "", "ne_version": [], "upper_version": "7.0.5"}, "deep": {"count": 1, "list": ["aodh", "gnocchiclient"]}, "requires": {"pbr": {"eq_version": "", "ge_version": "1.4", "lt_version": "", "ne_version": [], "upper_version": "5.4.3", "version": "5.4.3"}, "cliff": {"eq_version": "", "ge_version": "2.10", "lt_version": "", "ne_version": [], "upper_version": "2.16.0", "version": "2.16.0"}, "ujson": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "1.35", "version": "1.35"}, "keystoneauth1": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.17.4", "version": "3.17.4"}, "six": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "futurist": 
{"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "1.9.0", "version": "1.9.0"}, "iso8601": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "0.1.12", "version": "0.1.12"}, "monotonic": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "1.5", "version": "1.5"}, "python-dateutil": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "2.8.0", "version": "2.8.0"}, "debtcollector": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "1.22.0", "version": "1.22.0"}, "Sphinx": {"eq_version": "", "ge_version": "1.1.2", "lt_version": "", "ne_version": ["1.2.0", "1.3b1"], "upper_version": "2.2.0", "version": "2.2.0"}, "sphinx-rtd-theme": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}, "osc-lib": {"eq_version": "", "ge_version": "0.3.0", "lt_version": "", "ne_version": [], "upper_version": "1.14.1", "version": "1.14.1"}, "python-openstackclient": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "4.0.2", "version": "4.0.2"}, "pytest": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "5.1.2", "version": "5.1.2"}, "pytest-xdist": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/google-api-python-client.json b/tools/oos/example/train_cached_file/google-api-python-client.json deleted file mode 100644 index cf0bcd61c77f414aa0874ecda5125a968dd783b9..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/google-api-python-client.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "google-api-python-client", "version_dict": {"version": "1.7.11", "eq_version": "", "ge_version": "1.4.2", "lt_version": "", "ne_version": [], "upper_version": "1.7.11"}, "deep": {"count": 1, "list": ["cinder", "google-api-python-client"]}, "requires": {"httplib2": {"eq_version": "", "ge_version": "0.9.2", "lt_version": "1dev", "ne_version": [], "upper_version": "0.13.1", "version": "0.13.1"}, "google-auth": {"eq_version": "", "ge_version": "1.4.1", "lt_version": "", "ne_version": [], "upper_version": "1.6.3", "version": "1.6.3"}, "google-auth-httplib2": {"eq_version": "", "ge_version": "0.0.3", "lt_version": "", "ne_version": [], "upper_version": "0.0.3", "version": "0.0.3"}, "six": {"eq_version": "", "ge_version": "1.6.1", "lt_version": "2dev", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "uritemplate": {"eq_version": "", "ge_version": "3.0.0", "lt_version": "4dev", "ne_version": [], "upper_version": "3.0.0", "version": "3.0.0"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/google-auth-httplib2.json b/tools/oos/example/train_cached_file/google-auth-httplib2.json deleted file mode 100644 index b3762cc0062c3f58e19db9fb78fe7a4c2ddebb4f..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/google-auth-httplib2.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "google-auth-httplib2", "version_dict": {"version": "0.0.3", "eq_version": "", "ge_version": "0.0.3", "lt_version": "", "ne_version": [], "upper_version": "0.0.3"}, "deep": {"count": 2, "list": ["cinder", "google-api-python-client", "google-auth-httplib2"]}, "requires": {"google-auth": {"eq_version": 
"", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "1.6.3", "version": "1.6.3"}, "httplib2": {"eq_version": "", "ge_version": "0.9.1", "lt_version": "", "ne_version": [], "upper_version": "0.13.1", "version": "0.13.1"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/google-auth.json b/tools/oos/example/train_cached_file/google-auth.json deleted file mode 100644 index f4f9e8a70c4e0c76ed56e4e7044bbb57ac95af17..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/google-auth.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "google-auth", "version_dict": {"version": "1.6.3", "eq_version": "", "ge_version": "1.4.1", "lt_version": "", "ne_version": [], "upper_version": "1.6.3"}, "deep": {"count": 2, "list": ["cinder", "google-api-python-client", "google-auth"]}, "requires": {"pyasn1-modules": {"eq_version": "", "ge_version": "0.2.1", "lt_version": "", "ne_version": [], "upper_version": "0.2.6", "version": "0.2.6"}, "rsa": {"eq_version": "", "ge_version": "3.1.4", "lt_version": "", "ne_version": [], "upper_version": "4.0", "version": "4.0"}, "six": {"eq_version": "", "ge_version": "1.9.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "cachetools": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.1.1", "version": "3.1.1"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/graphviz.json b/tools/oos/example/train_cached_file/graphviz.json deleted file mode 100644 index 11a3a97ad05e9598f1e51bf6e940934968f44374..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/graphviz.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "graphviz", "version_dict": {"version": "0.13", "eq_version": "", "ge_version": "0.4", "lt_version": "", "ne_version": ["0.5.0"], "upper_version": "0.13"}, "deep": {"count": 1, "list": ["kolla", "graphviz"]}, "requires": {"Sphinx": {"eq_version": "", "ge_version": "1.7", "lt_version": "", "ne_version": [], "upper_version": "2.2.0", "version": "2.2.0"}, "sphinx-rtd-theme": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}, "mock": {"eq_version": "", "ge_version": "2", "lt_version": "", "ne_version": [], "upper_version": "3.0.5", "version": "3.0.5"}, "pytest": {"eq_version": "", "ge_version": "3.4", "lt_version": "", "ne_version": ["3.10.0"], "upper_version": "5.1.2", "version": "5.1.2"}, "pytest-mock": {"eq_version": "", "ge_version": "1.8", "lt_version": "", "ne_version": [], "upper_version": "", "version": "1.8"}, "pytest-cov": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/greenlet.json b/tools/oos/example/train_cached_file/greenlet.json deleted file mode 100644 index fdd227de5c70b749bebb3b20f41e9846ec954518..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/greenlet.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "greenlet", "version_dict": {"version": "0.4.15", "eq_version": "", "ge_version": "0.3", "lt_version": "", "ne_version": [], "upper_version": "0.4.15"}, "deep": {"count": 19, "list": ["aodh", "futurist", "hacking", "oslosphinx", "openstackdocstheme", "os-api-ref", "stestr", "cliff", "stevedore", "bandit", "oslotest", "os-client-config", "openstacksdk", "os-service-types", "keystoneauth1", "oslo.config", "oslo.log", 
"oslo.utils", "eventlet", "greenlet"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/gssapi.json b/tools/oos/example/train_cached_file/gssapi.json deleted file mode 100644 index 6237df71be1c20d5fef1b5079e01ab4d4040b4d8..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/gssapi.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "gssapi", "version_dict": {"version": "1.4.1", "eq_version": "", "ge_version": "1.4.1", "lt_version": "", "ne_version": [], "upper_version": ""}, "deep": {"count": 15, "list": ["aodh", "futurist", "hacking", "oslosphinx", "openstackdocstheme", "os-api-ref", "stestr", "cliff", "stevedore", "bandit", "oslotest", "os-client-config", "python-glanceclient", "tempest", "paramiko", "gssapi"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/hacking.json b/tools/oos/example/train_cached_file/hacking.json deleted file mode 100644 index cd9bfde12d25dc0b78ea6d38d4e91e27b2a6b36c..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/hacking.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "hacking", "version_dict": {"version": "1.1.0", "eq_version": "", "ge_version": "1.1.0", "lt_version": "1.2.0", "ne_version": [], "upper_version": ""}, "deep": {"count": 3, "list": ["aodh", "keystonemiddleware", "oslo.cache", "hacking"]}, "requires": {"pbr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": ["2.1.0"], "upper_version": "5.4.3", "version": "5.4.3"}, "flake8": {"eq_version": "", "ge_version": "2.6.0", "lt_version": "2.7.0", "ne_version": [], "upper_version": "", "version": "2.6.0"}, "six": {"eq_version": "", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "coverage": {"eq_version": "", "ge_version": "4.0", "lt_version": "", "ne_version": ["4.4"], "upper_version": "4.5.4", "version": "4.5.4"}, "fixtures": {"eq_version": "", "ge_version": "3.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.0", "version": "3.0.0"}, "mock": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.5", "version": "3.0.5"}, "python-subunit": {"eq_version": "", "ge_version": "1.0.0", "lt_version": "", "ne_version": [], "upper_version": "1.4.0", "version": "1.4.0"}, "Sphinx": {"eq_version": "", "ge_version": "1.6.2", "lt_version": "", "ne_version": ["1.6.6", "1.6.7"], "upper_version": "2.2.0", "version": "2.2.0"}, "openstackdocstheme": {"eq_version": "", "ge_version": "1.18.1", "lt_version": "", "ne_version": [], "upper_version": "1.31.1", "version": "1.31.1"}, "testrepository": {"eq_version": "", "ge_version": "0.0.18", "lt_version": "", "ne_version": [], "upper_version": "0.0.20", "version": "0.0.20"}, "testscenarios": {"eq_version": "", "ge_version": "0.4", "lt_version": "", "ne_version": [], "upper_version": "0.5.0", "version": "0.5.0"}, "testtools": {"eq_version": "", "ge_version": "2.2.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.0", "version": "2.3.0"}, "eventlet": {"eq_version": "", "ge_version": "0.18.2", "lt_version": "", "ne_version": ["0.18.3", "0.20.1"], "upper_version": "0.25.2", "version": "0.25.2"}, "reno": {"eq_version": "", "ge_version": "2.5.0", "lt_version": "", "ne_version": [], "upper_version": "2.11.3", "version": "2.11.3"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/hiredis.json b/tools/oos/example/train_cached_file/hiredis.json deleted file mode 100644 
index 4d873354c06b53af3cd6b0acc571057cb5dec168..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/hiredis.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "hiredis", "version_dict": {"version": "1.0.0", "eq_version": "", "ge_version": "0.1.3", "lt_version": "", "ne_version": [], "upper_version": "1.0.0"}, "deep": {"count": 5, "list": ["aodh", "keystonemiddleware", "oslo.messaging", "kombu", "redis", "hiredis"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/horizon.json b/tools/oos/example/train_cached_file/horizon.json deleted file mode 100644 index 53222e03e3f77ade54a335b4c8131a3626ad4454..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/horizon.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "horizon", "version_dict": {"version": "16.2.2", "eq_version": "16.2.2", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": ""}, "deep": {"count": 0, "list": ["horizon"]}, "requires": {"pbr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": ["2.1.0"], "upper_version": "5.4.3", "version": "5.4.3"}, "Babel": {"eq_version": "", "ge_version": "2.3.4", "lt_version": "", "ne_version": ["2.4.0"], "upper_version": "2.7.0", "version": "2.7.0"}, "Django": {"eq_version": "", "ge_version": "1.11", "lt_version": "2.1", "ne_version": [], "upper_version": "2.0.13", "version": "2.0.13"}, "django-babel": {"eq_version": "", "ge_version": "0.6.2", "lt_version": "", "ne_version": [], "upper_version": "0.6.2", "version": "0.6.2"}, "django-compressor": {"eq_version": "", "ge_version": "2.0", "lt_version": "", "ne_version": [], "upper_version": "2.3", "version": "2.3"}, "django-debreach": {"eq_version": "", "ge_version": "1.4.2", "lt_version": "", "ne_version": [], "upper_version": "1.5.2", "version": "1.5.2"}, "django-pyscss": {"eq_version": "", "ge_version": "2.0.2", "lt_version": "", "ne_version": [], "upper_version": "2.0.2", "version": "2.0.2"}, "futurist": {"eq_version": "", "ge_version": "1.2.0", "lt_version": "", "ne_version": [], "upper_version": "1.9.0", "version": "1.9.0"}, "iso8601": {"eq_version": "", "ge_version": "0.1.11", "lt_version": "", "ne_version": [], "upper_version": "0.1.12", "version": "0.1.12"}, "keystoneauth1": {"eq_version": "", "ge_version": "3.4.0", "lt_version": "", "ne_version": [], "upper_version": "3.17.4", "version": "3.17.4"}, "netaddr": {"eq_version": "", "ge_version": "0.7.18", "lt_version": "", "ne_version": [], "upper_version": "0.7.19", "version": "0.7.19"}, "oslo.concurrency": {"eq_version": "", "ge_version": "3.26.0", "lt_version": "", "ne_version": [], "upper_version": "3.30.1", "version": "3.30.1"}, "oslo.config": {"eq_version": "", "ge_version": "5.2.0", "lt_version": "", "ne_version": [], "upper_version": "6.11.3", "version": "6.11.3"}, "oslo.i18n": {"eq_version": "", "ge_version": "3.15.3", "lt_version": "", "ne_version": [], "upper_version": "3.24.0", "version": "3.24.0"}, "oslo.policy": {"eq_version": "", "ge_version": "1.30.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.4", "version": "2.3.4"}, "oslo.serialization": {"eq_version": "", "ge_version": "2.18.0", "lt_version": "", "ne_version": ["2.19.1"], "upper_version": "2.29.3", "version": "2.29.3"}, "oslo.upgradecheck": {"eq_version": "", "ge_version": "0.1.1", "lt_version": "", "ne_version": [], "upper_version": "0.3.2", "version": "0.3.2"}, "oslo.utils": {"eq_version": "", "ge_version": "3.33.0", "lt_version": "", "ne_version": [], "upper_version": "3.41.6", 
"version": "3.41.6"}, "osprofiler": {"eq_version": "", "ge_version": "2.3.0", "lt_version": "", "ne_version": [], "upper_version": "2.8.2", "version": "2.8.2"}, "Pint": {"eq_version": "", "ge_version": "0.5", "lt_version": "", "ne_version": [], "upper_version": "0.9", "version": "0.9"}, "pymongo": {"eq_version": "", "ge_version": "3.0.2", "lt_version": "", "ne_version": ["3.1"], "upper_version": "3.9.0", "version": "3.9.0"}, "pyScss": {"eq_version": "", "ge_version": "1.3.7", "lt_version": "", "ne_version": [], "upper_version": "1.3.7", "version": "1.3.7"}, "python-cinderclient": {"eq_version": "", "ge_version": "4.0.1", "lt_version": "", "ne_version": [], "upper_version": "5.0.2", "version": "5.0.2"}, "python-glanceclient": {"eq_version": "", "ge_version": "2.8.0", "lt_version": "", "ne_version": [], "upper_version": "2.17.1", "version": "2.17.1"}, "python-keystoneclient": {"eq_version": "", "ge_version": "3.15.0", "lt_version": "", "ne_version": [], "upper_version": "3.21.0", "version": "3.21.0"}, "python-neutronclient": {"eq_version": "", "ge_version": "6.7.0", "lt_version": "", "ne_version": [], "upper_version": "6.14.1", "version": "6.14.1"}, "python-novaclient": {"eq_version": "", "ge_version": "9.1.0", "lt_version": "", "ne_version": [], "upper_version": "15.1.1", "version": "15.1.1"}, "python-swiftclient": {"eq_version": "", "ge_version": "3.2.0", "lt_version": "", "ne_version": [], "upper_version": "3.8.1", "version": "3.8.1"}, "pytz": {"eq_version": "", "ge_version": "2013.6", "lt_version": "", "ne_version": [], "upper_version": "2019.2", "version": "2019.2"}, "PyYAML": {"eq_version": "", "ge_version": "3.12", "lt_version": "", "ne_version": [], "upper_version": "5.1.2", "version": "5.1.2"}, "requests": {"eq_version": "", "ge_version": "2.14.2", "lt_version": "", "ne_version": [], "upper_version": "2.22.0", "version": "2.22.0"}, "semantic-version": {"eq_version": "", "ge_version": "2.3.1", "lt_version": "", "ne_version": [], "upper_version": "2.8.2", "version": "2.8.2"}, "six": {"eq_version": "", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "XStatic": {"eq_version": "", "ge_version": "1.0.0", "lt_version": "", "ne_version": [], "upper_version": "1.0.2", "version": "1.0.2"}, "XStatic-Angular": {"eq_version": "", "ge_version": "1.5.8.0", "lt_version": "", "ne_version": [], "upper_version": "1.5.8.0", "version": "1.5.8.0"}, "XStatic-Angular-Bootstrap": {"eq_version": "", "ge_version": "2.2.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.2.0.0", "version": "2.2.0.0"}, "XStatic-Angular-FileUpload": {"eq_version": "", "ge_version": "12.0.4.0", "lt_version": "", "ne_version": [], "upper_version": "12.0.4.0", "version": "12.0.4.0"}, "XStatic-Angular-Gettext": {"eq_version": "", "ge_version": "2.3.8.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.8.0", "version": "2.3.8.0"}, "XStatic-Angular-lrdragndrop": {"eq_version": "", "ge_version": "1.0.2.2", "lt_version": "", "ne_version": [], "upper_version": "1.0.2.4", "version": "1.0.2.4"}, "XStatic-Angular-Schema-Form": {"eq_version": "", "ge_version": "0.8.13.0", "lt_version": "", "ne_version": [], "upper_version": "0.8.13.0", "version": "0.8.13.0"}, "XStatic-Bootstrap-Datepicker": {"eq_version": "", "ge_version": "1.3.1.0", "lt_version": "", "ne_version": [], "upper_version": "1.3.1.0", "version": "1.3.1.0"}, "XStatic-Bootstrap-SCSS": {"eq_version": "", "ge_version": "3.3.7.1", "lt_version": "", "ne_version": [], "upper_version": "3.3.7.1", "version": 
"3.3.7.1"}, "XStatic-bootswatch": {"eq_version": "", "ge_version": "3.3.7.0", "lt_version": "", "ne_version": [], "upper_version": "3.3.7.0", "version": "3.3.7.0"}, "XStatic-D3": {"eq_version": "", "ge_version": "3.5.17.0", "lt_version": "", "ne_version": [], "upper_version": "3.5.17.0", "version": "3.5.17.0"}, "XStatic-Hogan": {"eq_version": "", "ge_version": "2.0.0.2", "lt_version": "", "ne_version": [], "upper_version": "2.0.0.2", "version": "2.0.0.2"}, "XStatic-Font-Awesome": {"eq_version": "", "ge_version": "4.7.0.0", "lt_version": "", "ne_version": [], "upper_version": "4.7.0.0", "version": "4.7.0.0"}, "XStatic-Jasmine": {"eq_version": "", "ge_version": "2.4.1.1", "lt_version": "", "ne_version": [], "upper_version": "2.4.1.2", "version": "2.4.1.2"}, "XStatic-jQuery": {"eq_version": "", "ge_version": "1.8.2.1", "lt_version": "2", "ne_version": [], "upper_version": "1.12.4.1", "version": "1.12.4.1"}, "XStatic-JQuery-Migrate": {"eq_version": "", "ge_version": "1.2.1.1", "lt_version": "", "ne_version": [], "upper_version": "1.2.1.1", "version": "1.2.1.1"}, "XStatic-JQuery.quicksearch": {"eq_version": "", "ge_version": "2.0.3.1", "lt_version": "", "ne_version": [], "upper_version": "2.0.3.1", "version": "2.0.3.1"}, "XStatic-JQuery.TableSorter": {"eq_version": "", "ge_version": "2.14.5.1", "lt_version": "", "ne_version": [], "upper_version": "2.14.5.1", "version": "2.14.5.1"}, "XStatic-jquery-ui": {"eq_version": "", "ge_version": "1.10.4.1", "lt_version": "", "ne_version": [], "upper_version": "1.12.1.1", "version": "1.12.1.1"}, "XStatic-JSEncrypt": {"eq_version": "", "ge_version": "2.3.1.1", "lt_version": "", "ne_version": [], "upper_version": "2.3.1.1", "version": "2.3.1.1"}, "XStatic-mdi": {"eq_version": "", "ge_version": "1.4.57.0", "lt_version": "", "ne_version": [], "upper_version": "1.6.50.2", "version": "1.6.50.2"}, "XStatic-objectpath": {"eq_version": "", "ge_version": "1.2.1.0", "lt_version": "", "ne_version": [], "upper_version": "1.2.1.0", "version": "1.2.1.0"}, "XStatic-Rickshaw": {"eq_version": "", "ge_version": "1.5.0.0", "lt_version": "", "ne_version": [], "upper_version": "1.5.0.0", "version": "1.5.0.0"}, "XStatic-roboto-fontface": {"eq_version": "", "ge_version": "0.5.0.0", "lt_version": "", "ne_version": [], "upper_version": "0.5.0.0", "version": "0.5.0.0"}, "XStatic-smart-table": {"eq_version": "", "ge_version": "1.4.13.2", "lt_version": "", "ne_version": [], "upper_version": "1.4.13.2", "version": "1.4.13.2"}, "XStatic-Spin": {"eq_version": "", "ge_version": "1.2.5.2", "lt_version": "", "ne_version": [], "upper_version": "1.2.5.2", "version": "1.2.5.2"}, "XStatic-term.js": {"eq_version": "", "ge_version": "0.0.7.0", "lt_version": "", "ne_version": [], "upper_version": "0.0.7.0", "version": "0.0.7.0"}, "XStatic-tv4": {"eq_version": "", "ge_version": "1.2.7.0", "lt_version": "", "ne_version": [], "upper_version": "1.2.7.0", "version": "1.2.7.0"}, "hacking": {"eq_version": "", "ge_version": "1.1.0", "lt_version": "2", "ne_version": [], "upper_version": "", "version": "1.1.0"}, "astroid": {"eq_version": "2.1.0", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "2.1.0"}, "bandit": {"eq_version": "", "ge_version": "1.4.0", "lt_version": "1.6.3", "ne_version": ["1.6.0"], "upper_version": "", "version": "1.4.0"}, "coverage": {"eq_version": "", "ge_version": "4.0", "lt_version": "", "ne_version": ["4.4"], "upper_version": "4.5.4", "version": "4.5.4"}, "doc8": {"eq_version": "", "ge_version": "0.6.0", "lt_version": "", "ne_version": [], 
"upper_version": "0.8.0", "version": "0.8.0"}, "flake8-import-order": {"eq_version": "0.12", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "0.12"}, "mock": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.5", "version": "3.0.5"}, "mox3": {"eq_version": "", "ge_version": "0.20.0", "lt_version": "", "ne_version": [], "upper_version": "0.28.0", "version": "0.28.0"}, "nodeenv": {"eq_version": "", "ge_version": "0.9.4", "lt_version": "", "ne_version": [], "upper_version": "1.3.3", "version": "1.3.3"}, "python-memcached": {"eq_version": "", "ge_version": "1.59", "lt_version": "", "ne_version": [], "upper_version": "1.59", "version": "1.59"}, "pylint": {"eq_version": "2.2.2", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "2.2.2"}, "selenium": {"eq_version": "", "ge_version": "2.50.1", "lt_version": "", "ne_version": [], "upper_version": "3.141.0", "version": "3.141.0"}, "testscenarios": {"eq_version": "", "ge_version": "0.4", "lt_version": "", "ne_version": [], "upper_version": "0.5.0", "version": "0.5.0"}, "testtools": {"eq_version": "", "ge_version": "2.2.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.0", "version": "2.3.0"}, "xvfbwrapper": {"eq_version": "", "ge_version": "0.1.3", "lt_version": "", "ne_version": [], "upper_version": "0.2.9", "version": "0.2.9"}, "openstackdocstheme": {"eq_version": "", "ge_version": "1.20.0", "lt_version": "", "ne_version": [], "upper_version": "1.31.1", "version": "1.31.1"}, "reno": {"eq_version": "", "ge_version": "2.5.0", "lt_version": "", "ne_version": [], "upper_version": "2.11.3", "version": "2.11.3"}, "Sphinx": {"eq_version": "", "ge_version": "1.6.2", "lt_version": "", "ne_version": ["1.6.6", "1.6.7"], "upper_version": "2.2.0", "version": "2.2.0"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/httplib2.json b/tools/oos/example/train_cached_file/httplib2.json deleted file mode 100644 index a97431772568ba46a6a8695fb37495b4855c0fb0..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/httplib2.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "httplib2", "version_dict": {"version": "0.13.1", "eq_version": "", "ge_version": "0.7.5", "lt_version": "", "ne_version": [], "upper_version": "0.13.1"}, "deep": {"count": 11, "list": ["aodh", "futurist", "hacking", "oslosphinx", "openstackdocstheme", "os-api-ref", "stestr", "subunit2sql", "oslo.db", "sqlalchemy-migrate", "tempest-lib", "httplib2"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/hypothesis.json b/tools/oos/example/train_cached_file/hypothesis.json deleted file mode 100644 index b45fdbf3b69e95892dd339fa07dfa8ebd8ce4f91..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/hypothesis.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "hypothesis", "version_dict": {"version": "3.56", "eq_version": "", "ge_version": "3.56", "lt_version": "", "ne_version": [], "upper_version": ""}, "deep": {"count": 7, "list": ["aodh", "futurist", "hacking", "mock", "Sphinx", "sphinxcontrib-applehelp", "pytest", "hypothesis"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/idna.json b/tools/oos/example/train_cached_file/idna.json deleted file mode 100644 index eda75608a591010c781948c5221fe2242daf893b..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/idna.json +++ /dev/null @@ 
-1 +0,0 @@ -{"name": "idna", "version_dict": {"version": "2.8", "eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.8"}, "deep": {"count": 12, "list": ["aodh", "futurist", "hacking", "mock", "Sphinx", "sphinxcontrib-applehelp", "pytest", "pluggy", "importlib-metadata", "zipp", "jaraco.packaging", "requests", "idna"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/imagesize.json b/tools/oos/example/train_cached_file/imagesize.json deleted file mode 100644 index b44cd709b05b5b44298395e28d4a26f79e69dc82..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/imagesize.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "imagesize", "version_dict": {"version": "1.1.0", "eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "1.1.0"}, "deep": {"count": 5, "list": ["aodh", "futurist", "hacking", "mock", "Sphinx", "imagesize"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/importlib-metadata.json b/tools/oos/example/train_cached_file/importlib-metadata.json deleted file mode 100644 index bbece1ba6af736f328d78844fd6a304cf4f2373a..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/importlib-metadata.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "importlib-metadata", "version_dict": {"version": "0.20", "eq_version": "", "ge_version": "0.12", "lt_version": "", "ne_version": [], "upper_version": "0.20"}, "deep": {"count": 8, "list": ["aodh", "futurist", "hacking", "mock", "Sphinx", "sphinxcontrib-applehelp", "pytest", "pluggy", "importlib-metadata"]}, "requires": {"zipp": {"eq_version": "", "ge_version": "0.5", "lt_version": "", "ne_version": [], "upper_version": "0.6.0", "version": "0.6.0"}, "Sphinx": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "2.2.0", "version": "2.2.0"}, "rst.linker": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/importlib-resources.json b/tools/oos/example/train_cached_file/importlib-resources.json deleted file mode 100644 index c99dd52faf08beff13478a0d928ed2b501409374..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/importlib-resources.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "importlib-resources", "version_dict": {"version": "2.0.0", "eq_version": "", "ge_version": "1.6", "lt_version": "", "ne_version": [], "upper_version": ""}, "deep": {"count": 7, "list": ["aodh", "futurist", "hacking", "mock", "Sphinx", "setuptools", "jaraco.tidelift", "importlib-resources"]}, "requires": {"Sphinx": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "2.2.0", "version": "2.2.0"}, "rst.linker": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}, "jaraco.packaging": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/infi.dtypes.iqn.json b/tools/oos/example/train_cached_file/infi.dtypes.iqn.json deleted file mode 100644 index 8b6111981b0bf76a8df5d7083f69fed394259f93..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/infi.dtypes.iqn.json +++ /dev/null @@ -1 +0,0 @@ -{"name": 
"infi.dtypes.iqn", "version_dict": {"version": "0.4.0", "eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "0.4.0"}, "deep": {"count": 1, "list": ["cinder", "infi.dtypes.iqn"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/infi.dtypes.wwn.json b/tools/oos/example/train_cached_file/infi.dtypes.wwn.json deleted file mode 100644 index 850c6c9716bd486c1343e65733c3a8e7e6c92384..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/infi.dtypes.wwn.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "infi.dtypes.wwn", "version_dict": {"version": "0.1.1", "eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "0.1.1"}, "deep": {"count": 1, "list": ["cinder", "infi.dtypes.wwn"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/infinisdk.json b/tools/oos/example/train_cached_file/infinisdk.json deleted file mode 100644 index 4418b7005a397b1a522fc56f958d7baba83cbd74..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/infinisdk.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "infinisdk", "version_dict": {"version": "141.1.0", "eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "141.1.0"}, "deep": {"count": 1, "list": ["cinder", "infinisdk"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/ipaddress.json b/tools/oos/example/train_cached_file/ipaddress.json deleted file mode 100644 index cce4b0d4292534ede48c1de8e87e6905574df847..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/ipaddress.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "ipaddress", "version_dict": {"version": "1.0.22", "eq_version": "", "ge_version": "1.0.17", "lt_version": "", "ne_version": [], "upper_version": "1.0.22"}, "deep": {"count": 18, "list": ["aodh", "futurist", "hacking", "oslosphinx", "openstackdocstheme", "os-api-ref", "stestr", "cliff", "stevedore", "bandit", "oslotest", "os-client-config", "openstacksdk", "os-service-types", "keystoneauth1", "oslo.config", "oslo.log", "oslo.serialization", "ipaddress"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/ironic-inspector.json b/tools/oos/example/train_cached_file/ironic-inspector.json deleted file mode 100644 index bab9f57c1eb1cd5b5de7c6f48fc12c0ac6f0a123..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/ironic-inspector.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "ironic-inspector", "version_dict": {"version": "9.2.4", "eq_version": "9.2.4", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": ""}, "deep": {"count": 0, "list": ["ironic-inspector"]}, "requires": {"automaton": {"eq_version": "", "ge_version": "1.9.0", "lt_version": "", "ne_version": [], "upper_version": "1.17.0", "version": "1.17.0"}, "alembic": {"eq_version": "", "ge_version": "0.8.10", "lt_version": "", "ne_version": [], "upper_version": "1.1.0", "version": "1.1.0"}, "Babel": {"eq_version": "", "ge_version": "2.3.4", "lt_version": "", "ne_version": ["2.4.0"], "upper_version": "2.7.0", "version": "2.7.0"}, "construct": {"eq_version": "", "ge_version": "2.8.10", "lt_version": "2.9", "ne_version": [], "upper_version": "2.8.22", "version": "2.8.22"}, "eventlet": {"eq_version": "", "ge_version": "0.18.2", "lt_version": "", "ne_version": ["0.18.3", "0.20.1"], "upper_version": "0.25.2", "version": 
"0.25.2"}, "Flask": {"eq_version": "", "ge_version": "0.10", "lt_version": "", "ne_version": ["0.11"], "upper_version": "1.1.1", "version": "1.1.1"}, "futurist": {"eq_version": "", "ge_version": "1.2.0", "lt_version": "", "ne_version": [], "upper_version": "1.9.0", "version": "1.9.0"}, "ironic-lib": {"eq_version": "", "ge_version": "2.17.0", "lt_version": "", "ne_version": [], "upper_version": "2.21.3", "version": "2.21.3"}, "jsonpath-rw": {"eq_version": "", "ge_version": "1.2.0", "lt_version": "2.0", "ne_version": [], "upper_version": "1.4.0", "version": "1.4.0"}, "jsonschema": {"eq_version": "", "ge_version": "2.6.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.2", "version": "3.0.2"}, "keystoneauth1": {"eq_version": "", "ge_version": "3.4.0", "lt_version": "", "ne_version": [], "upper_version": "3.17.4", "version": "3.17.4"}, "keystonemiddleware": {"eq_version": "", "ge_version": "4.18.0", "lt_version": "", "ne_version": [], "upper_version": "7.0.1", "version": "7.0.1"}, "netaddr": {"eq_version": "", "ge_version": "0.7.18", "lt_version": "", "ne_version": [], "upper_version": "0.7.19", "version": "0.7.19"}, "pbr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": ["2.1.0"], "upper_version": "5.4.3", "version": "5.4.3"}, "python-ironicclient": {"eq_version": "", "ge_version": "2.3.0", "lt_version": "", "ne_version": ["2.5.2", "2.7.1", "3.0.0"], "upper_version": "3.1.2", "version": "3.1.2"}, "pytz": {"eq_version": "", "ge_version": "2013.6", "lt_version": "", "ne_version": [], "upper_version": "2019.2", "version": "2019.2"}, "openstacksdk": {"eq_version": "", "ge_version": "0.30.0", "lt_version": "", "ne_version": [], "upper_version": "0.36.5", "version": "0.36.5"}, "oslo.concurrency": {"eq_version": "", "ge_version": "3.26.0", "lt_version": "", "ne_version": [], "upper_version": "3.30.1", "version": "3.30.1"}, "oslo.config": {"eq_version": "", "ge_version": "5.2.0", "lt_version": "", "ne_version": [], "upper_version": "6.11.3", "version": "6.11.3"}, "oslo.context": {"eq_version": "", "ge_version": "2.19.2", "lt_version": "", "ne_version": [], "upper_version": "2.23.1", "version": "2.23.1"}, "oslo.db": {"eq_version": "", "ge_version": "4.27.0", "lt_version": "", "ne_version": [], "upper_version": "5.0.2", "version": "5.0.2"}, "oslo.i18n": {"eq_version": "", "ge_version": "3.15.3", "lt_version": "", "ne_version": [], "upper_version": "3.24.0", "version": "3.24.0"}, "oslo.log": {"eq_version": "", "ge_version": "3.36.0", "lt_version": "", "ne_version": [], "upper_version": "3.44.3", "version": "3.44.3"}, "oslo.messaging": {"eq_version": "", "ge_version": "5.32.0", "lt_version": "", "ne_version": [], "upper_version": "10.2.4", "version": "10.2.4"}, "oslo.middleware": {"eq_version": "", "ge_version": "3.31.0", "lt_version": "", "ne_version": [], "upper_version": "3.38.1", "version": "3.38.1"}, "oslo.policy": {"eq_version": "", "ge_version": "1.30.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.4", "version": "2.3.4"}, "oslo.rootwrap": {"eq_version": "", "ge_version": "5.8.0", "lt_version": "", "ne_version": [], "upper_version": "5.16.1", "version": "5.16.1"}, "oslo.serialization": {"eq_version": "", "ge_version": "2.18.0", "lt_version": "", "ne_version": ["2.19.1"], "upper_version": "2.29.3", "version": "2.29.3"}, "oslo.service": {"eq_version": "", "ge_version": "1.24.0", "lt_version": "", "ne_version": ["1.28.1"], "upper_version": "1.40.2", "version": "1.40.2"}, "oslo.utils": {"eq_version": "", "ge_version": "3.33.0", "lt_version": "", 
"ne_version": [], "upper_version": "3.41.6", "version": "3.41.6"}, "retrying": {"eq_version": "", "ge_version": "1.2.3", "lt_version": "", "ne_version": ["1.3.0"], "upper_version": "1.3.3", "version": "1.3.3"}, "six": {"eq_version": "", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "stevedore": {"eq_version": "", "ge_version": "1.20.0", "lt_version": "", "ne_version": [], "upper_version": "1.31.0", "version": "1.31.0"}, "SQLAlchemy": {"eq_version": "", "ge_version": "1.0.10", "lt_version": "", "ne_version": ["1.1.5", "1.1.6", "1.1.7", "1.1.8"], "upper_version": "1.3.8", "version": "1.3.8"}, "tooz": {"eq_version": "", "ge_version": "1.64.0", "lt_version": "", "ne_version": [], "upper_version": "1.66.3", "version": "1.66.3"}, "bandit": {"eq_version": "", "ge_version": "1.1.0", "lt_version": "2.0.0", "ne_version": ["1.6.0"], "upper_version": "", "version": "1.1.0"}, "coverage": {"eq_version": "", "ge_version": "4.0", "lt_version": "", "ne_version": ["4.4"], "upper_version": "4.5.4", "version": "4.5.4"}, "doc8": {"eq_version": "", "ge_version": "0.6.0", "lt_version": "", "ne_version": [], "upper_version": "0.8.0", "version": "0.8.0"}, "flake8-import-order": {"eq_version": "", "ge_version": "0.13", "lt_version": "", "ne_version": [], "upper_version": "", "version": "0.13"}, "hacking": {"eq_version": "", "ge_version": "1.0.0", "lt_version": "1.2.0", "ne_version": [], "upper_version": "", "version": "1.0.0"}, "mock": {"eq_version": "", "ge_version": "3.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.5", "version": "3.0.5"}, "Sphinx": {"eq_version": "", "ge_version": "1.6.2", "lt_version": "", "ne_version": ["1.6.6", "1.6.7"], "upper_version": "2.2.0", "version": "2.2.0"}, "sphinxcontrib-svg2pdfconverter": {"eq_version": "", "ge_version": "0.1.0", "lt_version": "", "ne_version": [], "upper_version": "0.1.0", "version": "0.1.0"}, "openstackdocstheme": {"eq_version": "", "ge_version": "1.20.0", "lt_version": "", "ne_version": [], "upper_version": "1.31.1", "version": "1.31.1"}, "os-api-ref": {"eq_version": "", "ge_version": "1.4.0", "lt_version": "", "ne_version": [], "upper_version": "1.6.2", "version": "1.6.2"}, "pymemcache": {"eq_version": "", "ge_version": "1.2.9", "lt_version": "", "ne_version": ["1.3.0"], "upper_version": "2.2.2", "version": "2.2.2"}, "stestr": {"eq_version": "", "ge_version": "1.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.5.1", "version": "2.5.1"}, "reno": {"eq_version": "", "ge_version": "2.5.0", "lt_version": "", "ne_version": [], "upper_version": "2.11.3", "version": "2.11.3"}, "fixtures": {"eq_version": "", "ge_version": "3.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.0", "version": "3.0.0"}, "testresources": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.0.1", "version": "2.0.1"}, "testscenarios": {"eq_version": "", "ge_version": "0.4", "lt_version": "", "ne_version": [], "upper_version": "0.5.0", "version": "0.5.0"}, "oslotest": {"eq_version": "", "ge_version": "3.2.0", "lt_version": "", "ne_version": [], "upper_version": "3.8.1", "version": "3.8.1"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/ironic-lib.json b/tools/oos/example/train_cached_file/ironic-lib.json deleted file mode 100644 index c60bc7c3d6cdf4fbf1e4fa51b216c55b5e950857..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/ironic-lib.json +++ /dev/null @@ -1 +0,0 @@ -{"name": 
"ironic-lib", "version_dict": {"version": "2.21.3", "eq_version": "", "ge_version": "2.17.1", "lt_version": "", "ne_version": [], "upper_version": "2.21.3"}, "deep": {"count": 1, "list": ["ironic", "ironic-lib"]}, "requires": {"pbr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": ["2.1.0"], "upper_version": "5.4.3", "version": "5.4.3"}, "oslo.concurrency": {"eq_version": "", "ge_version": "3.26.0", "lt_version": "", "ne_version": [], "upper_version": "3.30.1", "version": "3.30.1"}, "oslo.config": {"eq_version": "", "ge_version": "5.2.0", "lt_version": "", "ne_version": [], "upper_version": "6.11.3", "version": "6.11.3"}, "oslo.i18n": {"eq_version": "", "ge_version": "3.15.3", "lt_version": "", "ne_version": [], "upper_version": "3.24.0", "version": "3.24.0"}, "oslo.serialization": {"eq_version": "", "ge_version": "2.18.0", "lt_version": "", "ne_version": ["2.19.1"], "upper_version": "2.29.3", "version": "2.29.3"}, "oslo.service": {"eq_version": "", "ge_version": "1.24.0", "lt_version": "", "ne_version": ["1.28.1"], "upper_version": "1.40.2", "version": "1.40.2"}, "oslo.utils": {"eq_version": "", "ge_version": "3.33.0", "lt_version": "", "ne_version": [], "upper_version": "3.41.6", "version": "3.41.6"}, "requests": {"eq_version": "", "ge_version": "2.14.2", "lt_version": "", "ne_version": [], "upper_version": "2.22.0", "version": "2.22.0"}, "six": {"eq_version": "", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "oslo.log": {"eq_version": "", "ge_version": "3.36.0", "lt_version": "", "ne_version": [], "upper_version": "3.44.3", "version": "3.44.3"}, "zeroconf": {"eq_version": "", "ge_version": "0.19.1", "lt_version": "", "ne_version": [], "upper_version": "0.23.0", "version": "0.23.0"}, "coverage": {"eq_version": "", "ge_version": "4.0", "lt_version": "", "ne_version": ["4.4"], "upper_version": "4.5.4", "version": "4.5.4"}, "eventlet": {"eq_version": "", "ge_version": "0.18.2", "lt_version": "", "ne_version": ["0.18.3", "0.20.1"], "upper_version": "0.25.2", "version": "0.25.2"}, "flake8-import-order": {"eq_version": "", "ge_version": "0.13", "lt_version": "", "ne_version": [], "upper_version": "", "version": "0.13"}, "hacking": {"eq_version": "", "ge_version": "1.0.0", "lt_version": "1.1.0", "ne_version": [], "upper_version": "", "version": "1.0.0"}, "mock": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.5", "version": "3.0.5"}, "stestr": {"eq_version": "", "ge_version": "1.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.5.1", "version": "2.5.1"}, "oslotest": {"eq_version": "", "ge_version": "3.2.0", "lt_version": "", "ne_version": [], "upper_version": "3.8.1", "version": "3.8.1"}, "testscenarios": {"eq_version": "", "ge_version": "0.4", "lt_version": "", "ne_version": [], "upper_version": "0.5.0", "version": "0.5.0"}, "testtools": {"eq_version": "", "ge_version": "2.2.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.0", "version": "2.3.0"}, "doc8": {"eq_version": "", "ge_version": "0.6.0", "lt_version": "", "ne_version": [], "upper_version": "0.8.0", "version": "0.8.0"}, "Sphinx": {"eq_version": "", "ge_version": "1.6.2", "lt_version": "", "ne_version": ["1.6.6", "1.6.7"], "upper_version": "2.2.0", "version": "2.2.0"}, "openstackdocstheme": {"eq_version": "", "ge_version": "1.20.0", "lt_version": "", "ne_version": [], "upper_version": "1.31.1", "version": "1.31.1"}}} \ No newline at end of file diff --git 
a/tools/oos/example/train_cached_file/ironic-prometheus-exporter.json b/tools/oos/example/train_cached_file/ironic-prometheus-exporter.json deleted file mode 100644 index f92104dab7326e908df9b5e16706c97c688babff..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/ironic-prometheus-exporter.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "ironic-prometheus-exporter", "version_dict": {"version": "1.1.2", "eq_version": "1.1.2", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": ""}, "deep": {"count": 0, "list": ["ironic-prometheus-exporter"]}, "requires": {"pbr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": ["2.1.0"], "upper_version": "5.4.3", "version": "5.4.3"}, "stevedore": {"eq_version": "", "ge_version": "1.20.0", "lt_version": "", "ne_version": [], "upper_version": "1.31.0", "version": "1.31.0"}, "oslo.messaging": {"eq_version": "", "ge_version": "9.4.0", "lt_version": "", "ne_version": [], "upper_version": "10.2.4", "version": "10.2.4"}, "Flask": {"eq_version": "", "ge_version": "1.0.0", "lt_version": "", "ne_version": [], "upper_version": "1.1.1", "version": "1.1.1"}, "prometheus_client": {"eq_version": "", "ge_version": "0.6.0", "lt_version": "", "ne_version": [], "upper_version": "", "version": "0.6.0"}, "flake8": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}, "stestr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.5.1", "version": "2.5.1"}, "oslotest": {"eq_version": "", "ge_version": "3.2.0", "lt_version": "", "ne_version": [], "upper_version": "3.8.1", "version": "3.8.1"}, "openstackdocstheme": {"eq_version": "", "ge_version": "1.20.0", "lt_version": "", "ne_version": [], "upper_version": "1.31.1", "version": "1.31.1"}, "reno": {"eq_version": "", "ge_version": "2.5.0", "lt_version": "", "ne_version": [], "upper_version": "2.11.3", "version": "2.11.3"}, "Sphinx": {"eq_version": "", "ge_version": "1.6.2", "lt_version": "", "ne_version": ["1.6.6", "1.6.7", "2.1.0"], "upper_version": "2.2.0", "version": "2.2.0"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/ironic-python-agent.json b/tools/oos/example/train_cached_file/ironic-python-agent.json deleted file mode 100644 index 01cd74f8e721d63bab74b9339ab188eb43ed258c..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/ironic-python-agent.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "ironic-python-agent", "version_dict": {"version": "5.0.4", "eq_version": "5.0.4", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": ""}, "deep": {"count": 0, "list": ["ironic-python-agent"]}, "requires": {"pbr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": ["2.1.0"], "upper_version": "5.4.3", "version": "5.4.3"}, "eventlet": {"eq_version": "", "ge_version": "0.18.2", "lt_version": "", "ne_version": ["0.18.3", "0.20.1"], "upper_version": "0.25.2", "version": "0.25.2"}, "iso8601": {"eq_version": "", "ge_version": "0.1.11", "lt_version": "", "ne_version": [], "upper_version": "0.1.12", "version": "0.1.12"}, "netaddr": {"eq_version": "", "ge_version": "0.7.18", "lt_version": "", "ne_version": [], "upper_version": "0.7.19", "version": "0.7.19"}, "netifaces": {"eq_version": "", "ge_version": "0.10.4", "lt_version": "", "ne_version": [], "upper_version": "0.10.9", "version": "0.10.9"}, "oslo.config": {"eq_version": "", "ge_version": "5.2.0", "lt_version": "", 
"ne_version": [], "upper_version": "6.11.3", "version": "6.11.3"}, "oslo.concurrency": {"eq_version": "", "ge_version": "3.26.0", "lt_version": "", "ne_version": [], "upper_version": "3.30.1", "version": "3.30.1"}, "oslo.log": {"eq_version": "", "ge_version": "3.36.0", "lt_version": "", "ne_version": [], "upper_version": "3.44.3", "version": "3.44.3"}, "oslo.serialization": {"eq_version": "", "ge_version": "2.18.0", "lt_version": "", "ne_version": ["2.19.1"], "upper_version": "2.29.3", "version": "2.29.3"}, "oslo.service": {"eq_version": "", "ge_version": "1.24.0", "lt_version": "", "ne_version": ["1.28.1"], "upper_version": "1.40.2", "version": "1.40.2"}, "oslo.utils": {"eq_version": "", "ge_version": "3.33.0", "lt_version": "", "ne_version": [], "upper_version": "3.41.6", "version": "3.41.6"}, "pecan": {"eq_version": "", "ge_version": "1.0.0", "lt_version": "", "ne_version": ["1.0.2", "1.0.3", "1.0.4", "1.2"], "upper_version": "1.3.3", "version": "1.3.3"}, "Pint": {"eq_version": "", "ge_version": "0.5", "lt_version": "", "ne_version": [], "upper_version": "0.9", "version": "0.9"}, "psutil": {"eq_version": "", "ge_version": "3.2.2", "lt_version": "", "ne_version": [], "upper_version": "5.6.3", "version": "5.6.3"}, "pyudev": {"eq_version": "", "ge_version": "0.18", "lt_version": "", "ne_version": [], "upper_version": "0.21.0", "version": "0.21.0"}, "requests": {"eq_version": "", "ge_version": "2.14.2", "lt_version": "", "ne_version": [], "upper_version": "2.22.0", "version": "2.22.0"}, "rtslib-fb": {"eq_version": "", "ge_version": "2.1.65", "lt_version": "", "ne_version": [], "upper_version": "2.1.69", "version": "2.1.69"}, "six": {"eq_version": "", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "stevedore": {"eq_version": "", "ge_version": "1.20.0", "lt_version": "", "ne_version": [], "upper_version": "1.31.0", "version": "1.31.0"}, "WSME": {"eq_version": "", "ge_version": "0.8.0", "lt_version": "", "ne_version": [], "upper_version": "0.9.3", "version": "0.9.3"}, "ironic-lib": {"eq_version": "", "ge_version": "2.17.0", "lt_version": "", "ne_version": [], "upper_version": "2.21.3", "version": "2.21.3"}, "hacking": {"eq_version": "", "ge_version": "1.0.0", "lt_version": "1.1.0", "ne_version": [], "upper_version": "", "version": "1.0.0"}, "coverage": {"eq_version": "", "ge_version": "4.0", "lt_version": "", "ne_version": ["4.4"], "upper_version": "4.5.4", "version": "4.5.4"}, "mock": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.5", "version": "3.0.5"}, "testtools": {"eq_version": "", "ge_version": "2.2.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.0", "version": "2.3.0"}, "oslotest": {"eq_version": "", "ge_version": "3.2.0", "lt_version": "", "ne_version": [], "upper_version": "3.8.1", "version": "3.8.1"}, "stestr": {"eq_version": "", "ge_version": "1.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.5.1", "version": "2.5.1"}, "bashate": {"eq_version": "", "ge_version": "0.5.1", "lt_version": "", "ne_version": [], "upper_version": "0.6.0", "version": "0.6.0"}, "flake8-import-order": {"eq_version": "", "ge_version": "0.13", "lt_version": "", "ne_version": [], "upper_version": "", "version": "0.13"}, "bandit": {"eq_version": "", "ge_version": "1.1.0", "lt_version": "2.0.0", "ne_version": ["1.6.0"], "upper_version": "", "version": "1.1.0"}, "doc8": {"eq_version": "", "ge_version": "0.6.0", "lt_version": "", "ne_version": [], "upper_version": "0.8.0", 
"version": "0.8.0"}, "Sphinx": {"eq_version": "", "ge_version": "1.6.2", "lt_version": "", "ne_version": ["1.6.6", "1.6.7"], "upper_version": "2.2.0", "version": "2.2.0"}, "sphinxcontrib-pecanwsme": {"eq_version": "", "ge_version": "0.8.0", "lt_version": "", "ne_version": [], "upper_version": "0.10.0", "version": "0.10.0"}, "openstackdocstheme": {"eq_version": "", "ge_version": "1.20.0", "lt_version": "", "ne_version": [], "upper_version": "1.31.1", "version": "1.31.1"}, "reno": {"eq_version": "", "ge_version": "2.5.0", "lt_version": "", "ne_version": [], "upper_version": "2.11.3", "version": "2.11.3"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/ironic-tempest-plugin.json b/tools/oos/example/train_cached_file/ironic-tempest-plugin.json deleted file mode 100644 index 8413c0d3b5c75833aabc87e03ec1fb30fc955f49..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/ironic-tempest-plugin.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "ironic-tempest-plugin", "version_dict": {"version": "1.5.1", "eq_version": "1.5.1", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": ""}, "deep": {"count": 0, "list": ["ironic-tempest-plugin"]}, "requires": {"pbr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": ["2.1.0"], "upper_version": "5.4.3", "version": "5.4.3"}, "oslo.config": {"eq_version": "", "ge_version": "5.2.0", "lt_version": "", "ne_version": [], "upper_version": "6.11.3", "version": "6.11.3"}, "oslo.log": {"eq_version": "", "ge_version": "3.36.0", "lt_version": "", "ne_version": [], "upper_version": "3.44.3", "version": "3.44.3"}, "oslo.serialization": {"eq_version": "", "ge_version": "2.18.0", "lt_version": "", "ne_version": ["2.19.1"], "upper_version": "2.29.3", "version": "2.29.3"}, "oslo.utils": {"eq_version": "", "ge_version": "3.33.0", "lt_version": "", "ne_version": [], "upper_version": "3.41.6", "version": "3.41.6"}, "fixtures": {"eq_version": "", "ge_version": "3.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.0", "version": "3.0.0"}, "six": {"eq_version": "", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "tempest": {"eq_version": "", "ge_version": "17.1.0", "lt_version": "", "ne_version": [], "upper_version": "22.1.0", "version": "22.1.0"}, "hacking": {"eq_version": "", "ge_version": "0.12.0", "lt_version": "0.14", "ne_version": ["0.13.0"], "upper_version": "", "version": "0.12.0"}, "Sphinx": {"eq_version": "", "ge_version": "1.6.2", "lt_version": "", "ne_version": ["1.6.6", "1.6.7"], "upper_version": "2.2.0", "version": "2.2.0"}, "openstackdocstheme": {"eq_version": "", "ge_version": "1.20.0", "lt_version": "", "ne_version": [], "upper_version": "1.31.1", "version": "1.31.1"}, "reno": {"eq_version": "", "ge_version": "2.5.0", "lt_version": "", "ne_version": [], "upper_version": "2.11.3", "version": "2.11.3"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/ironic-ui.json b/tools/oos/example/train_cached_file/ironic-ui.json deleted file mode 100644 index 7464d9b8c45d5980188a51b1557c68fc61524845..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/ironic-ui.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "ironic-ui", "version_dict": {"version": "3.5.5", "eq_version": "3.5.5", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": ""}, "deep": {"count": 0, "list": ["ironic-ui"]}, "requires": {"pbr": {"eq_version": "", "ge_version": 
"2.0.0", "lt_version": "", "ne_version": ["2.1.0"], "upper_version": "5.4.3", "version": "5.4.3"}, "python-ironicclient": {"eq_version": "", "ge_version": "2.3.0", "lt_version": "", "ne_version": ["2.5.2", "2.7.1"], "upper_version": "3.1.2", "version": "3.1.2"}, "horizon": {"eq_version": "", "ge_version": "16.0.0", "lt_version": "", "ne_version": [], "upper_version": "16.2.2", "version": "16.2.2"}, "hacking": {"eq_version": "", "ge_version": "0.12.0", "lt_version": "0.14", "ne_version": ["0.13.0"], "upper_version": "", "version": "0.12.0"}, "coverage": {"eq_version": "", "ge_version": "4.0", "lt_version": "", "ne_version": ["4.4"], "upper_version": "4.5.4", "version": "4.5.4"}, "python-subunit": {"eq_version": "", "ge_version": "1.0.0", "lt_version": "", "ne_version": [], "upper_version": "1.4.0", "version": "1.4.0"}, "oslotest": {"eq_version": "", "ge_version": "3.2.0", "lt_version": "", "ne_version": [], "upper_version": "3.8.1", "version": "3.8.1"}, "testrepository": {"eq_version": "", "ge_version": "0.0.18", "lt_version": "", "ne_version": [], "upper_version": "0.0.20", "version": "0.0.20"}, "testscenarios": {"eq_version": "", "ge_version": "0.4", "lt_version": "", "ne_version": [], "upper_version": "0.5.0", "version": "0.5.0"}, "testtools": {"eq_version": "", "ge_version": "2.2.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.0", "version": "2.3.0"}, "selenium": {"eq_version": "", "ge_version": "2.50.1", "lt_version": "", "ne_version": [], "upper_version": "3.141.0", "version": "3.141.0"}, "xvfbwrapper": {"eq_version": "", "ge_version": "0.1.3", "lt_version": "", "ne_version": [], "upper_version": "0.2.9", "version": "0.2.9"}, "Sphinx": {"eq_version": "", "ge_version": "1.6.2", "lt_version": "", "ne_version": ["1.6.6", "1.6.7", "2.1.0"], "upper_version": "2.2.0", "version": "2.2.0"}, "openstackdocstheme": {"eq_version": "", "ge_version": "1.20.0", "lt_version": "", "ne_version": [], "upper_version": "1.31.1", "version": "1.31.1"}, "reno": {"eq_version": "", "ge_version": "2.5.0", "lt_version": "", "ne_version": [], "upper_version": "2.11.3", "version": "2.11.3"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/ironic.json b/tools/oos/example/train_cached_file/ironic.json deleted file mode 100644 index 6106dd02f69708d84f0e8893dbc75faae6eaf3f3..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/ironic.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "ironic", "version_dict": {"version": "13.0.7", "eq_version": "13.0.7", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": ""}, "deep": {"count": 0, "list": ["ironic"]}, "requires": {"pbr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": ["2.1.0"], "upper_version": "5.4.3", "version": "5.4.3"}, "SQLAlchemy": {"eq_version": "", "ge_version": "1.0.10", "lt_version": "", "ne_version": ["1.1.5", "1.1.6", "1.1.7", "1.1.8"], "upper_version": "1.3.8", "version": "1.3.8"}, "alembic": {"eq_version": "", "ge_version": "0.8.10", "lt_version": "", "ne_version": [], "upper_version": "1.1.0", "version": "1.1.0"}, "automaton": {"eq_version": "", "ge_version": "1.9.0", "lt_version": "", "ne_version": [], "upper_version": "1.17.0", "version": "1.17.0"}, "eventlet": {"eq_version": "", "ge_version": "0.18.2", "lt_version": "", "ne_version": ["0.18.3", "0.20.1"], "upper_version": "0.25.2", "version": "0.25.2"}, "WebOb": {"eq_version": "", "ge_version": "1.7.1", "lt_version": "", "ne_version": [], "upper_version": "1.8.5", "version": "1.8.5"}, 
"python-cinderclient": {"eq_version": "", "ge_version": "3.3.0", "lt_version": "", "ne_version": ["4.0.0"], "upper_version": "5.0.2", "version": "5.0.2"}, "python-neutronclient": {"eq_version": "", "ge_version": "6.7.0", "lt_version": "", "ne_version": [], "upper_version": "6.14.1", "version": "6.14.1"}, "python-glanceclient": {"eq_version": "", "ge_version": "2.8.0", "lt_version": "", "ne_version": [], "upper_version": "2.17.1", "version": "2.17.1"}, "keystoneauth1": {"eq_version": "", "ge_version": "3.15.0", "lt_version": "", "ne_version": [], "upper_version": "3.17.4", "version": "3.17.4"}, "ironic-lib": {"eq_version": "", "ge_version": "2.17.1", "lt_version": "", "ne_version": [], "upper_version": "2.21.3", "version": "2.21.3"}, "python-swiftclient": {"eq_version": "", "ge_version": "3.2.0", "lt_version": "", "ne_version": [], "upper_version": "3.8.1", "version": "3.8.1"}, "pytz": {"eq_version": "", "ge_version": "2013.6", "lt_version": "", "ne_version": [], "upper_version": "2019.2", "version": "2019.2"}, "stevedore": {"eq_version": "", "ge_version": "1.20.0", "lt_version": "", "ne_version": [], "upper_version": "1.31.0", "version": "1.31.0"}, "pysendfile": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.0.1", "version": "2.0.1"}, "oslo.concurrency": {"eq_version": "", "ge_version": "3.26.0", "lt_version": "", "ne_version": [], "upper_version": "3.30.1", "version": "3.30.1"}, "oslo.config": {"eq_version": "", "ge_version": "5.2.0", "lt_version": "", "ne_version": [], "upper_version": "6.11.3", "version": "6.11.3"}, "oslo.context": {"eq_version": "", "ge_version": "2.19.2", "lt_version": "", "ne_version": [], "upper_version": "2.23.1", "version": "2.23.1"}, "oslo.db": {"eq_version": "", "ge_version": "4.27.0", "lt_version": "", "ne_version": [], "upper_version": "5.0.2", "version": "5.0.2"}, "oslo.rootwrap": {"eq_version": "", "ge_version": "5.8.0", "lt_version": "", "ne_version": [], "upper_version": "5.16.1", "version": "5.16.1"}, "oslo.i18n": {"eq_version": "", "ge_version": "3.15.3", "lt_version": "", "ne_version": [], "upper_version": "3.24.0", "version": "3.24.0"}, "oslo.log": {"eq_version": "", "ge_version": "3.36.0", "lt_version": "", "ne_version": [], "upper_version": "3.44.3", "version": "3.44.3"}, "oslo.middleware": {"eq_version": "", "ge_version": "3.31.0", "lt_version": "", "ne_version": [], "upper_version": "3.38.1", "version": "3.38.1"}, "oslo.policy": {"eq_version": "", "ge_version": "1.30.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.4", "version": "2.3.4"}, "oslo.reports": {"eq_version": "", "ge_version": "1.18.0", "lt_version": "", "ne_version": [], "upper_version": "1.30.0", "version": "1.30.0"}, "oslo.serialization": {"eq_version": "", "ge_version": "2.18.0", "lt_version": "", "ne_version": ["2.19.1"], "upper_version": "2.29.3", "version": "2.29.3"}, "oslo.service": {"eq_version": "", "ge_version": "1.24.0", "lt_version": "", "ne_version": ["1.28.1"], "upper_version": "1.40.2", "version": "1.40.2"}, "oslo.upgradecheck": {"eq_version": "", "ge_version": "0.1.0", "lt_version": "", "ne_version": [], "upper_version": "0.3.2", "version": "0.3.2"}, "oslo.utils": {"eq_version": "", "ge_version": "3.33.0", "lt_version": "", "ne_version": [], "upper_version": "3.41.6", "version": "3.41.6"}, "osprofiler": {"eq_version": "", "ge_version": "1.5.0", "lt_version": "", "ne_version": [], "upper_version": "2.8.2", "version": "2.8.2"}, "os-traits": {"eq_version": "", "ge_version": "0.4.0", "lt_version": "", 
"ne_version": [], "upper_version": "0.16.0", "version": "0.16.0"}, "pecan": {"eq_version": "", "ge_version": "1.0.0", "lt_version": "", "ne_version": ["1.0.2", "1.0.3", "1.0.4", "1.2"], "upper_version": "1.3.3", "version": "1.3.3"}, "requests": {"eq_version": "", "ge_version": "2.14.2", "lt_version": "", "ne_version": [], "upper_version": "2.22.0", "version": "2.22.0"}, "rfc3986": {"eq_version": "", "ge_version": "0.3.1", "lt_version": "", "ne_version": [], "upper_version": "1.3.2", "version": "1.3.2"}, "six": {"eq_version": "", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "jsonpatch": {"eq_version": "", "ge_version": "1.16", "lt_version": "", "ne_version": ["1.20"], "upper_version": "1.24", "version": "1.24"}, "WSME": {"eq_version": "", "ge_version": "0.9.3", "lt_version": "", "ne_version": [], "upper_version": "0.9.3", "version": "0.9.3"}, "Jinja2": {"eq_version": "", "ge_version": "2.10", "lt_version": "", "ne_version": [], "upper_version": "2.10.1", "version": "2.10.1"}, "keystonemiddleware": {"eq_version": "", "ge_version": "4.17.0", "lt_version": "", "ne_version": [], "upper_version": "7.0.1", "version": "7.0.1"}, "oslo.messaging": {"eq_version": "", "ge_version": "5.29.0", "lt_version": "", "ne_version": [], "upper_version": "10.2.4", "version": "10.2.4"}, "retrying": {"eq_version": "", "ge_version": "1.2.3", "lt_version": "", "ne_version": ["1.3.0"], "upper_version": "1.3.3", "version": "1.3.3"}, "oslo.versionedobjects": {"eq_version": "", "ge_version": "1.31.2", "lt_version": "", "ne_version": [], "upper_version": "1.36.1", "version": "1.36.1"}, "jsonschema": {"eq_version": "", "ge_version": "2.6.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.2", "version": "3.0.2"}, "psutil": {"eq_version": "", "ge_version": "3.2.2", "lt_version": "", "ne_version": [], "upper_version": "5.6.3", "version": "5.6.3"}, "futurist": {"eq_version": "", "ge_version": "1.2.0", "lt_version": "", "ne_version": [], "upper_version": "1.9.0", "version": "1.9.0"}, "tooz": {"eq_version": "", "ge_version": "1.58.0", "lt_version": "", "ne_version": [], "upper_version": "1.66.3", "version": "1.66.3"}, "openstacksdk": {"eq_version": "", "ge_version": "0.31.2", "lt_version": "", "ne_version": [], "upper_version": "0.36.5", "version": "0.36.5"}, "hacking": {"eq_version": "", "ge_version": "1.0.0", "lt_version": "1.1.0", "ne_version": [], "upper_version": "", "version": "1.0.0"}, "coverage": {"eq_version": "", "ge_version": "4.0", "lt_version": "", "ne_version": ["4.4"], "upper_version": "4.5.4", "version": "4.5.4"}, "ddt": {"eq_version": "", "ge_version": "1.0.1", "lt_version": "", "ne_version": [], "upper_version": "1.2.1", "version": "1.2.1"}, "doc8": {"eq_version": "", "ge_version": "0.6.0", "lt_version": "", "ne_version": [], "upper_version": "0.8.0", "version": "0.8.0"}, "fixtures": {"eq_version": "", "ge_version": "3.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.0", "version": "3.0.0"}, "mock": {"eq_version": "", "ge_version": "3.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.5", "version": "3.0.5"}, "Babel": {"eq_version": "", "ge_version": "2.3.4", "lt_version": "", "ne_version": ["2.4.0"], "upper_version": "2.7.0", "version": "2.7.0"}, "PyMySQL": {"eq_version": "", "ge_version": "0.7.6", "lt_version": "", "ne_version": [], "upper_version": "0.9.3", "version": "0.9.3"}, "iso8601": {"eq_version": "", "ge_version": "0.1.11", "lt_version": "", "ne_version": [], "upper_version": "0.1.12", "version": 
"0.1.12"}, "oslotest": {"eq_version": "", "ge_version": "3.2.0", "lt_version": "", "ne_version": [], "upper_version": "3.8.1", "version": "3.8.1"}, "stestr": {"eq_version": "", "ge_version": "1.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.5.1", "version": "2.5.1"}, "psycopg2": {"eq_version": "", "ge_version": "2.7.3", "lt_version": "", "ne_version": [], "upper_version": "2.8.3", "version": "2.8.3"}, "testtools": {"eq_version": "", "ge_version": "2.2.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.0", "version": "2.3.0"}, "testresources": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.0.1", "version": "2.0.1"}, "testscenarios": {"eq_version": "", "ge_version": "0.4", "lt_version": "", "ne_version": [], "upper_version": "0.5.0", "version": "0.5.0"}, "WebTest": {"eq_version": "", "ge_version": "2.0.27", "lt_version": "", "ne_version": [], "upper_version": "2.0.33", "version": "2.0.33"}, "bashate": {"eq_version": "", "ge_version": "0.5.1", "lt_version": "", "ne_version": [], "upper_version": "0.6.0", "version": "0.6.0"}, "flake8-import-order": {"eq_version": "", "ge_version": "0.13", "lt_version": "", "ne_version": [], "upper_version": "", "version": "0.13"}, "Pygments": {"eq_version": "", "ge_version": "2.2.0", "lt_version": "", "ne_version": [], "upper_version": "2.6.1", "version": "2.6.1"}, "bandit": {"eq_version": "", "ge_version": "1.1.0", "lt_version": "", "ne_version": ["1.6.0"], "upper_version": "", "version": "1.1.0"}, "proliantutils": {"eq_version": "", "ge_version": "2.9.1", "lt_version": "", "ne_version": [], "upper_version": "", "version": "2.9.1"}, "pysnmp": {"eq_version": "", "ge_version": "4.3.0", "lt_version": "5.0.0", "ne_version": [], "upper_version": "4.4.11", "version": "4.4.11"}, "python-scciclient": {"eq_version": "", "ge_version": "0.8.0", "lt_version": "", "ne_version": [], "upper_version": "", "version": "0.8.0"}, "python-dracclient": {"eq_version": "", "ge_version": "3.0.0", "lt_version": "4.0.0", "ne_version": [], "upper_version": "", "version": "3.0.0"}, "python-xclarityclient": {"eq_version": "", "ge_version": "0.1.6", "lt_version": "", "ne_version": [], "upper_version": "", "version": "0.1.6"}, "sushy": {"eq_version": "", "ge_version": "1.9.0", "lt_version": "", "ne_version": [], "upper_version": "2.0.5", "version": "2.0.5"}, "ansible": {"eq_version": "", "ge_version": "2.5", "lt_version": "", "ne_version": [], "upper_version": "", "version": "2.5"}, "python-ibmcclient": {"eq_version": "", "ge_version": "0.1.0", "lt_version": "0.3.0", "ne_version": ["0.2.1"], "upper_version": "", "version": "0.1.0"}, "openstackdocstheme": {"eq_version": "", "ge_version": "1.20.0", "lt_version": "", "ne_version": [], "upper_version": "1.31.1", "version": "1.31.1"}, "os-api-ref": {"eq_version": "", "ge_version": "1.4.0", "lt_version": "", "ne_version": [], "upper_version": "1.6.2", "version": "1.6.2"}, "reno": {"eq_version": "", "ge_version": "2.5.0", "lt_version": "", "ne_version": [], "upper_version": "2.11.3", "version": "2.11.3"}, "Sphinx": {"eq_version": "", "ge_version": "1.6.2", "lt_version": "", "ne_version": ["1.6.6", "1.6.7", "2.1.0"], "upper_version": "2.2.0", "version": "2.2.0"}, "sphinxcontrib-apidoc": {"eq_version": "", "ge_version": "0.2.0", "lt_version": "", "ne_version": [], "upper_version": "0.3.0", "version": "0.3.0"}, "sphinxcontrib-pecanwsme": {"eq_version": "", "ge_version": "0.10.0", "lt_version": "", "ne_version": [], "upper_version": "0.10.0", "version": "0.10.0"}, 
"sphinxcontrib-seqdiag": {"eq_version": "", "ge_version": "0.8.4", "lt_version": "", "ne_version": [], "upper_version": "0.8.5", "version": "0.8.5"}, "sphinxcontrib-svg2pdfconverter": {"eq_version": "", "ge_version": "0.1.0", "lt_version": "", "ne_version": [], "upper_version": "0.1.0", "version": "0.1.0"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/iso8601.json b/tools/oos/example/train_cached_file/iso8601.json deleted file mode 100644 index 076a76c12fb3151c7edff4a93ef9f89c2d1ca40d..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/iso8601.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "iso8601", "version_dict": {"version": "0.1.12", "eq_version": "", "ge_version": "0.1.11", "lt_version": "", "ne_version": [], "upper_version": "0.1.12"}, "deep": {"count": 15, "list": ["aodh", "futurist", "hacking", "oslosphinx", "openstackdocstheme", "os-api-ref", "stestr", "cliff", "stevedore", "bandit", "oslotest", "os-client-config", "openstacksdk", "os-service-types", "keystoneauth1", "iso8601"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/isodate.json b/tools/oos/example/train_cached_file/isodate.json deleted file mode 100644 index 7cc2eb30c1b67494e4caeb003ec0378afc36d2d5..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/isodate.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "isodate", "version_dict": {"version": "0.5.4", "eq_version": "", "ge_version": "0.5.4", "lt_version": "", "ne_version": [], "upper_version": ""}, "deep": {"count": 8, "list": ["aodh", "keystonemiddleware", "oslo.messaging", "kombu", "azure-servicebus", "azure-common", "msrestazure", "msrest", "isodate"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/isort.json b/tools/oos/example/train_cached_file/isort.json deleted file mode 100644 index 720c1c8e7c67dbb74093fe563cbb97537c4e7c8b..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/isort.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "isort", "version_dict": {"version": "4.3.21", "eq_version": "4.3.21", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": ""}, "deep": {"count": 2, "list": ["neutron", "ovsdbapp", "isort"]}, "requires": {"pipreqs": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}, "requirementslib": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}, "toml": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}, "pip-api": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}, "appdirs": {"eq_version": "", "ge_version": "1.4.0", "lt_version": "", "ne_version": [], "upper_version": "1.4.3", "version": "1.4.3"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/itsdangerous.json b/tools/oos/example/train_cached_file/itsdangerous.json deleted file mode 100644 index b1cbcdf139bb4706a732de106ec053d9291a4fda..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/itsdangerous.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "itsdangerous", "version_dict": {"version": "1.1.0", "eq_version": "", "ge_version": "0.24", "lt_version": "", "ne_version": [], "upper_version": "1.1.0"}, "deep": {"count": 2, "list": ["keystone", "Flask", 
"itsdangerous"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/jaeger-client.json b/tools/oos/example/train_cached_file/jaeger-client.json deleted file mode 100644 index bd81c3ffe2c235f1323618fe03dc387f8808cf07..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/jaeger-client.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "jaeger-client", "version_dict": {"version": "4.1.0", "eq_version": "", "ge_version": "3.8.0", "lt_version": "", "ne_version": [], "upper_version": "4.1.0"}, "deep": {"count": 4, "list": ["aodh", "gnocchiclient", "osc-lib", "osprofiler", "jaeger-client"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/jaraco.packaging.json b/tools/oos/example/train_cached_file/jaraco.packaging.json deleted file mode 100644 index 26d119930407d97b4013e7ee7b5e144c376aa30d..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/jaraco.packaging.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "jaraco.packaging", "version_dict": {"version": "8.2", "eq_version": "", "ge_version": "8.2", "lt_version": "", "ne_version": [], "upper_version": ""}, "deep": {"count": 6, "list": ["aodh", "futurist", "hacking", "mock", "Sphinx", "setuptools", "jaraco.packaging"]}, "requires": {"setuptools": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "57.5.0", "version": "57.5.0"}, "Sphinx": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "2.2.0", "version": "2.2.0"}, "jaraco.packaging": {"eq_version": "", "ge_version": "3.2", "lt_version": "", "ne_version": [], "upper_version": "", "version": "3.2"}, "rst.linker": {"eq_version": "", "ge_version": "1.9", "lt_version": "", "ne_version": [], "upper_version": "", "version": "1.9"}, "pytest": {"eq_version": "", "ge_version": "3.5", "lt_version": "", "ne_version": ["3.7.3"], "upper_version": "5.1.2", "version": "5.1.2"}, "pytest-checkdocs": {"eq_version": "", "ge_version": "1.2.3", "lt_version": "", "ne_version": [], "upper_version": "", "version": "1.2.3"}, "pytest-flake8": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}, "pytest-cov": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}, "jaraco.test": {"eq_version": "", "ge_version": "3.2.0", "lt_version": "", "ne_version": [], "upper_version": "", "version": "3.2.0"}, "pytest-black": {"eq_version": "", "ge_version": "0.3.7", "lt_version": "", "ne_version": [], "upper_version": "", "version": "0.3.7"}, "pytest-mypy": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/jaraco.path.json b/tools/oos/example/train_cached_file/jaraco.path.json deleted file mode 100644 index 56085557c470385af8e000cdeeb3eb2a6705991d..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/jaraco.path.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "jaraco.path", "version_dict": {"version": "3.2.0", "eq_version": "", "ge_version": "3.2.0", "lt_version": "", "ne_version": [], "upper_version": ""}, "deep": {"count": 6, "list": ["aodh", "futurist", "hacking", "mock", "Sphinx", "setuptools", "jaraco.path"]}, "requires": {"pyobjc": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", 
"version": "unknown"}, "Sphinx": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "2.2.0", "version": "2.2.0"}, "jaraco.packaging": {"eq_version": "", "ge_version": "8.2", "lt_version": "", "ne_version": [], "upper_version": "", "version": "8.2"}, "rst.linker": {"eq_version": "", "ge_version": "1.9", "lt_version": "", "ne_version": [], "upper_version": "", "version": "1.9"}, "pytest": {"eq_version": "", "ge_version": "3.5", "lt_version": "", "ne_version": ["3.7.3"], "upper_version": "5.1.2", "version": "5.1.2"}, "pytest-checkdocs": {"eq_version": "", "ge_version": "1.2.3", "lt_version": "", "ne_version": [], "upper_version": "", "version": "1.2.3"}, "pytest-flake8": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}, "pytest-cov": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}, "pytest-enabler": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}, "pytest-black": {"eq_version": "", "ge_version": "0.3.7", "lt_version": "", "ne_version": [], "upper_version": "", "version": "0.3.7"}, "pytest-mypy": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/jaraco.test.json b/tools/oos/example/train_cached_file/jaraco.test.json deleted file mode 100644 index 567a396149bbf3ac9a92e9a49a7776650590a378..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/jaraco.test.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "jaraco.test", "version_dict": {"version": "3.2.0", "eq_version": "", "ge_version": "3.2.0", "lt_version": "", "ne_version": [], "upper_version": ""}, "deep": {"count": 7, "list": ["aodh", "futurist", "hacking", "mock", "Sphinx", "setuptools", "jaraco.packaging", "jaraco.test"]}, "requires": {"toml": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}, "jaraco.functools": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}, "jaraco.context": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}, "more-itertools": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "7.2.0", "version": "7.2.0"}, "jaraco.collections": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}, "Sphinx": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "2.2.0", "version": "2.2.0"}, "jaraco.packaging": {"eq_version": "", "ge_version": "3.2", "lt_version": "", "ne_version": [], "upper_version": "", "version": "3.2"}, "rst.linker": {"eq_version": "", "ge_version": "1.9", "lt_version": "", "ne_version": [], "upper_version": "", "version": "1.9"}, "pytest": {"eq_version": "", "ge_version": "3.5", "lt_version": "", "ne_version": ["3.7.3"], "upper_version": "5.1.2", "version": "5.1.2"}, "pytest-checkdocs": {"eq_version": "", "ge_version": "1.2.3", "lt_version": "", "ne_version": [], "upper_version": "", "version": "1.2.3"}, "pytest-flake8": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}, "pytest-black": {"eq_version": "", "ge_version": 
"0.3.7", "lt_version": "", "ne_version": [], "upper_version": "", "version": "0.3.7"}, "pytest-cov": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}, "pytest-mypy": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/jaraco.tidelift.json b/tools/oos/example/train_cached_file/jaraco.tidelift.json deleted file mode 100644 index 3bb99598b171f340ff4b00b91304da2c6f47e2f5..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/jaraco.tidelift.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "jaraco.tidelift", "version_dict": {"version": "1.4", "eq_version": "", "ge_version": "1.4", "lt_version": "", "ne_version": [], "upper_version": ""}, "deep": {"count": 6, "list": ["aodh", "futurist", "hacking", "mock", "Sphinx", "setuptools", "jaraco.tidelift"]}, "requires": {"autocommand": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}, "requests-toolbelt": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "0.9.1", "version": "0.9.1"}, "keyring": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "19.1.0", "version": "19.1.0"}, "importlib-resources": {"eq_version": "", "ge_version": "1.6", "lt_version": "", "ne_version": [], "upper_version": "", "version": "1.6"}, "Sphinx": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "2.2.0", "version": "2.2.0"}, "jaraco.packaging": {"eq_version": "", "ge_version": "8.2", "lt_version": "", "ne_version": [], "upper_version": "", "version": "8.2"}, "rst.linker": {"eq_version": "", "ge_version": "1.9", "lt_version": "", "ne_version": [], "upper_version": "", "version": "1.9"}, "pytest": {"eq_version": "", "ge_version": "4.6", "lt_version": "", "ne_version": [], "upper_version": "5.1.2", "version": "5.1.2"}, "pytest-checkdocs": {"eq_version": "", "ge_version": "2.4", "lt_version": "", "ne_version": [], "upper_version": "", "version": "2.4"}, "pytest-flake8": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}, "pytest-cov": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}, "pytest-enabler": {"eq_version": "", "ge_version": "1.0.1", "lt_version": "", "ne_version": [], "upper_version": "", "version": "1.0.1"}, "types-docutils": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}, "pytest-black": {"eq_version": "", "ge_version": "0.3.7", "lt_version": "", "ne_version": [], "upper_version": "", "version": "0.3.7"}, "pytest-mypy": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/jeepney.json b/tools/oos/example/train_cached_file/jeepney.json deleted file mode 100644 index 4f105dfa2e5cfa2abdb62b87c05cdd284c1d3f9c..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/jeepney.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "jeepney", "version_dict": {"version": "0.4.1", "eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "0.4.1"}, "deep": {"count": 9, "list": ["aodh", 
"futurist", "hacking", "mock", "Sphinx", "setuptools", "jaraco.tidelift", "keyring", "SecretStorage", "jeepney"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/jmespath.json b/tools/oos/example/train_cached_file/jmespath.json deleted file mode 100644 index c18beb1ce19a7446904e5c4254c9ae1e87977d50..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/jmespath.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "jmespath", "version_dict": {"version": "0.9.4", "eq_version": "", "ge_version": "0.9.0", "lt_version": "", "ne_version": [], "upper_version": "0.9.4"}, "deep": {"count": 13, "list": ["aodh", "futurist", "hacking", "oslosphinx", "openstackdocstheme", "os-api-ref", "stestr", "cliff", "stevedore", "bandit", "oslotest", "os-client-config", "openstacksdk", "jmespath"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/jsonpatch.json b/tools/oos/example/train_cached_file/jsonpatch.json deleted file mode 100644 index 59f6b4caddae0a7e9b3e1b9b982437789351b566..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/jsonpatch.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "jsonpatch", "version_dict": {"version": "1.24", "eq_version": "", "ge_version": "1.16", "lt_version": "", "ne_version": ["1.20"], "upper_version": "1.24"}, "deep": {"count": 13, "list": ["aodh", "futurist", "hacking", "oslosphinx", "openstackdocstheme", "os-api-ref", "stestr", "cliff", "stevedore", "bandit", "oslotest", "os-client-config", "openstacksdk", "jsonpatch"]}, "requires": {"jsonpointer": {"eq_version": "", "ge_version": "1.9", "lt_version": "", "ne_version": [], "upper_version": "2.0", "version": "2.0"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/jsonpath-rw-ext.json b/tools/oos/example/train_cached_file/jsonpath-rw-ext.json deleted file mode 100644 index 7e290b7f813103e15aa5a6fa10ea1dc0afb8ada5..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/jsonpath-rw-ext.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "jsonpath-rw-ext", "version_dict": {"version": "1.2.2", "eq_version": "", "ge_version": "1.1.3", "lt_version": "", "ne_version": [], "upper_version": "1.2.2"}, "deep": {"count": 1, "list": ["ceilometer", "jsonpath-rw-ext"]}, "requires": {"jsonpath-rw": {"eq_version": "", "ge_version": "1.2.0", "lt_version": "", "ne_version": [], "upper_version": "1.4.0", "version": "1.4.0"}, "pbr": {"eq_version": "", "ge_version": "1.8", "lt_version": "", "ne_version": [], "upper_version": "5.4.3", "version": "5.4.3"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/jsonpath-rw.json b/tools/oos/example/train_cached_file/jsonpath-rw.json deleted file mode 100644 index b71b283ab58d530150a3c39130b9677d038a9d08..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/jsonpath-rw.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "jsonpath-rw", "version_dict": {"version": "1.4.0", "eq_version": "", "ge_version": "1.2.0", "lt_version": "", "ne_version": [], "upper_version": "1.4.0"}, "deep": {"count": 2, "list": ["ceilometer", "jsonpath-rw-ext", "jsonpath-rw"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/jsonpointer.json b/tools/oos/example/train_cached_file/jsonpointer.json deleted file mode 100644 index c0d408bbf5cc370a2fba3e425d1cd6194260ba01..0000000000000000000000000000000000000000 --- 
a/tools/oos/example/train_cached_file/jsonpointer.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "jsonpointer", "version_dict": {"version": "2.0", "eq_version": "", "ge_version": "1.9", "lt_version": "", "ne_version": [], "upper_version": "2.0"}, "deep": {"count": 14, "list": ["aodh", "futurist", "hacking", "oslosphinx", "openstackdocstheme", "os-api-ref", "stestr", "cliff", "stevedore", "bandit", "oslotest", "os-client-config", "openstacksdk", "jsonpatch", "jsonpointer"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/jsonschema.json b/tools/oos/example/train_cached_file/jsonschema.json deleted file mode 100644 index cc0e71c96a8da179aacf5afa9b9be0069249ca28..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/jsonschema.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "jsonschema", "version_dict": {"version": "3.0.2", "eq_version": "", "ge_version": "2.6.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.2"}, "deep": {"count": 13, "list": ["aodh", "futurist", "hacking", "oslosphinx", "openstackdocstheme", "os-api-ref", "stestr", "cliff", "stevedore", "bandit", "oslotest", "os-client-config", "openstacksdk", "jsonschema"]}, "requires": {"attrs": {"eq_version": "", "ge_version": "17.4.0", "lt_version": "", "ne_version": [], "upper_version": "19.1.0", "version": "19.1.0"}, "pyrsistent": {"eq_version": "", "ge_version": "0.14.0", "lt_version": "", "ne_version": [], "upper_version": "0.15.4", "version": "0.15.4"}, "setuptools": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "57.5.0", "version": "57.5.0"}, "six": {"eq_version": "", "ge_version": "1.11.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "idna": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "2.8", "version": "2.8"}, "jsonpointer": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "2.0", "version": "2.0"}, "rfc3987": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}, "strict-rfc3339": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}, "webcolors": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "1.10", "version": "1.10"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/kazoo.json b/tools/oos/example/train_cached_file/kazoo.json deleted file mode 100644 index e3be25812bf9c4745eb408a214403ab958201ebf..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/kazoo.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "kazoo", "version_dict": {"version": "2.6.1", "eq_version": "", "ge_version": "1.3.1", "lt_version": "", "ne_version": [], "upper_version": "2.6.1"}, "deep": {"count": 4, "list": ["aodh", "keystonemiddleware", "oslo.messaging", "kombu", "kazoo"]}, "requires": {"six": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "coverage": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "4.5.4", "version": "4.5.4"}, "mock": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "3.0.5", "version": "3.0.5"}, "nose": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "1.3.7", "version": "1.3.7"}, 
"flake8": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}, "pure-sasl": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}, "objgraph": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/keyring.json b/tools/oos/example/train_cached_file/keyring.json deleted file mode 100644 index 23c019e89dcf0319508a8c15555b19d4ed1ffbbd..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/keyring.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "keyring", "version_dict": {"version": "19.1.0", "eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "19.1.0"}, "deep": {"count": 7, "list": ["aodh", "futurist", "hacking", "mock", "Sphinx", "setuptools", "jaraco.tidelift", "keyring"]}, "requires": {"entrypoints": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "0.3", "version": "0.3"}, "SecretStorage": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "3.1.1", "version": "3.1.1"}, "Sphinx": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "2.2.0", "version": "2.2.0"}, "jaraco.packaging": {"eq_version": "", "ge_version": "3.2", "lt_version": "", "ne_version": [], "upper_version": "", "version": "3.2"}, "rst.linker": {"eq_version": "", "ge_version": "1.9", "lt_version": "", "ne_version": [], "upper_version": "", "version": "1.9"}, "pytest": {"eq_version": "", "ge_version": "3.5", "lt_version": "", "ne_version": ["3.7.3"], "upper_version": "5.1.2", "version": "5.1.2"}, "pytest-checkdocs": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}, "pytest-flake8": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}, "pytest-black-multipy": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/keystone-tempest-plugin.json b/tools/oos/example/train_cached_file/keystone-tempest-plugin.json deleted file mode 100644 index a1550f03109e9f6500b8f1ffe81d2bab699468bc..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/keystone-tempest-plugin.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "keystone-tempest-plugin", "version_dict": {"version": "0.3.0", "eq_version": "0.3.0", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": ""}, "deep": {"count": 0, "list": ["keystone-tempest-plugin"]}, "requires": {"pbr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": ["2.1.0"], "upper_version": "5.4.3", "version": "5.4.3"}, "lxml": {"eq_version": "", "ge_version": "3.4.1", "lt_version": "", "ne_version": ["3.7.0"], "upper_version": "4.4.1", "version": "4.4.1"}, "tempest": {"eq_version": "", "ge_version": "17.1.0", "lt_version": "", "ne_version": [], "upper_version": "22.1.0", "version": "22.1.0"}, "oslo.config": {"eq_version": "", "ge_version": "5.2.0", "lt_version": "", "ne_version": [], "upper_version": "6.11.3", "version": "6.11.3"}, "testtools": {"eq_version": "", "ge_version": "2.2.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.0", 
"version": "2.3.0"}, "six": {"eq_version": "", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "requests": {"eq_version": "", "ge_version": "2.14.2", "lt_version": "", "ne_version": [], "upper_version": "2.22.0", "version": "2.22.0"}, "hacking": {"eq_version": "", "ge_version": "1.1.0", "lt_version": "1.2.0", "ne_version": [], "upper_version": "", "version": "1.1.0"}, "Sphinx": {"eq_version": "", "ge_version": "1.6.2", "lt_version": "", "ne_version": ["1.6.6", "1.6.7"], "upper_version": "2.2.0", "version": "2.2.0"}, "openstackdocstheme": {"eq_version": "", "ge_version": "1.18.1", "lt_version": "", "ne_version": [], "upper_version": "1.31.1", "version": "1.31.1"}, "reno": {"eq_version": "", "ge_version": "2.5.0", "lt_version": "", "ne_version": [], "upper_version": "2.11.3", "version": "2.11.3"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/keystone.json b/tools/oos/example/train_cached_file/keystone.json deleted file mode 100644 index bb8164ec8f7638eec43bc464dd84fbd797e3d1a5..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/keystone.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "keystone", "version_dict": {"version": "16.0.2", "eq_version": "16.0.2", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": ""}, "deep": {"count": 0, "list": ["keystone"]}, "requires": {"Babel": {"eq_version": "", "ge_version": "2.3.4", "lt_version": "", "ne_version": ["2.4.0"], "upper_version": "2.7.0", "version": "2.7.0"}, "pbr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": ["2.1.0"], "upper_version": "5.4.3", "version": "5.4.3"}, "WebOb": {"eq_version": "", "ge_version": "1.7.1", "lt_version": "", "ne_version": [], "upper_version": "1.8.5", "version": "1.8.5"}, "Flask": {"eq_version": "", "ge_version": "1.0.2", "lt_version": "", "ne_version": ["0.11"], "upper_version": "1.1.1", "version": "1.1.1"}, "Flask-RESTful": {"eq_version": "", "ge_version": "0.3.5", "lt_version": "", "ne_version": [], "upper_version": "0.3.7", "version": "0.3.7"}, "cryptography": {"eq_version": "", "ge_version": "2.1", "lt_version": "", "ne_version": [], "upper_version": "2.8", "version": "2.8"}, "six": {"eq_version": "", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "SQLAlchemy": {"eq_version": "", "ge_version": "1.1.0", "lt_version": "", "ne_version": [], "upper_version": "1.3.8", "version": "1.3.8"}, "sqlalchemy-migrate": {"eq_version": "", "ge_version": "0.11.0", "lt_version": "", "ne_version": [], "upper_version": "0.12.0", "version": "0.12.0"}, "stevedore": {"eq_version": "", "ge_version": "1.20.0", "lt_version": "", "ne_version": [], "upper_version": "1.31.0", "version": "1.31.0"}, "passlib": {"eq_version": "", "ge_version": "1.7.0", "lt_version": "", "ne_version": [], "upper_version": "1.7.1", "version": "1.7.1"}, "python-keystoneclient": {"eq_version": "", "ge_version": "3.8.0", "lt_version": "", "ne_version": [], "upper_version": "3.21.0", "version": "3.21.0"}, "keystonemiddleware": {"eq_version": "", "ge_version": "7.0.0", "lt_version": "", "ne_version": [], "upper_version": "7.0.1", "version": "7.0.1"}, "bcrypt": {"eq_version": "", "ge_version": "3.1.3", "lt_version": "", "ne_version": [], "upper_version": "3.1.7", "version": "3.1.7"}, "scrypt": {"eq_version": "", "ge_version": "0.8.0", "lt_version": "", "ne_version": [], "upper_version": "0.8.13", "version": "0.8.13"}, "oslo.cache": 
{"eq_version": "", "ge_version": "1.26.0", "lt_version": "", "ne_version": [], "upper_version": "1.37.1", "version": "1.37.1"}, "oslo.concurrency": {"eq_version": "", "ge_version": "3.26.0", "lt_version": "", "ne_version": [], "upper_version": "3.30.1", "version": "3.30.1"}, "oslo.config": {"eq_version": "", "ge_version": "5.2.0", "lt_version": "", "ne_version": [], "upper_version": "6.11.3", "version": "6.11.3"}, "oslo.context": {"eq_version": "", "ge_version": "2.22.0", "lt_version": "", "ne_version": [], "upper_version": "2.23.1", "version": "2.23.1"}, "oslo.messaging": {"eq_version": "", "ge_version": "5.29.0", "lt_version": "", "ne_version": [], "upper_version": "10.2.4", "version": "10.2.4"}, "oslo.db": {"eq_version": "", "ge_version": "4.27.0", "lt_version": "", "ne_version": [], "upper_version": "5.0.2", "version": "5.0.2"}, "oslo.i18n": {"eq_version": "", "ge_version": "3.15.3", "lt_version": "", "ne_version": [], "upper_version": "3.24.0", "version": "3.24.0"}, "oslo.log": {"eq_version": "", "ge_version": "3.44.0", "lt_version": "", "ne_version": [], "upper_version": "3.44.3", "version": "3.44.3"}, "oslo.middleware": {"eq_version": "", "ge_version": "3.31.0", "lt_version": "", "ne_version": [], "upper_version": "3.38.1", "version": "3.38.1"}, "oslo.policy": {"eq_version": "", "ge_version": "2.3.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.4", "version": "2.3.4"}, "oslo.serialization": {"eq_version": "", "ge_version": "2.18.0", "lt_version": "", "ne_version": ["2.19.1"], "upper_version": "2.29.3", "version": "2.29.3"}, "oslo.upgradecheck": {"eq_version": "", "ge_version": "0.1.0", "lt_version": "", "ne_version": [], "upper_version": "0.3.2", "version": "0.3.2"}, "oslo.utils": {"eq_version": "", "ge_version": "3.33.0", "lt_version": "", "ne_version": [], "upper_version": "3.41.6", "version": "3.41.6"}, "oauthlib": {"eq_version": "", "ge_version": "0.6.2", "lt_version": "", "ne_version": [], "upper_version": "3.1.0", "version": "3.1.0"}, "pysaml2": {"eq_version": "", "ge_version": "4.5.0", "lt_version": "", "ne_version": [], "upper_version": "4.8.0", "version": "4.8.0"}, "PyJWT": {"eq_version": "", "ge_version": "1.6.1", "lt_version": "", "ne_version": [], "upper_version": "1.7.1", "version": "1.7.1"}, "dogpile.cache": {"eq_version": "", "ge_version": "0.6.2", "lt_version": "", "ne_version": [], "upper_version": "0.7.1", "version": "0.7.1"}, "jsonschema": {"eq_version": "", "ge_version": "2.6.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.2", "version": "3.0.2"}, "pycadf": {"eq_version": "", "ge_version": "1.1.0", "lt_version": "", "ne_version": ["2.0.0"], "upper_version": "2.10.0", "version": "2.10.0"}, "msgpack": {"eq_version": "", "ge_version": "0.5.0", "lt_version": "", "ne_version": [], "upper_version": "0.6.1", "version": "0.6.1"}, "osprofiler": {"eq_version": "", "ge_version": "1.4.0", "lt_version": "", "ne_version": [], "upper_version": "2.8.2", "version": "2.8.2"}, "pytz": {"eq_version": "", "ge_version": "2013.6", "lt_version": "", "ne_version": [], "upper_version": "2019.2", "version": "2019.2"}, "hacking": {"eq_version": "", "ge_version": "1.1.0", "lt_version": "1.2.0", "ne_version": [], "upper_version": "", "version": "1.1.0"}, "pep257": {"eq_version": "0.7.0", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "0.7.0"}, "flake8-docstrings": {"eq_version": "0.2.1.post1", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "0.2.1.post1"}, "bashate": {"eq_version": "", 
"ge_version": "0.5.1", "lt_version": "", "ne_version": [], "upper_version": "0.6.0", "version": "0.6.0"}, "os-testr": {"eq_version": "", "ge_version": "1.0.0", "lt_version": "", "ne_version": [], "upper_version": "1.1.0", "version": "1.1.0"}, "freezegun": {"eq_version": "", "ge_version": "0.3.6", "lt_version": "", "ne_version": [], "upper_version": "0.3.12", "version": "0.3.12"}, "coverage": {"eq_version": "", "ge_version": "4.0", "lt_version": "", "ne_version": ["4.4"], "upper_version": "4.5.4", "version": "4.5.4"}, "fixtures": {"eq_version": "", "ge_version": "3.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.0", "version": "3.0.0"}, "lxml": {"eq_version": "", "ge_version": "3.4.1", "lt_version": "", "ne_version": ["3.7.0"], "upper_version": "4.4.1", "version": "4.4.1"}, "mock": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.5", "version": "3.0.5"}, "oslotest": {"eq_version": "", "ge_version": "3.2.0", "lt_version": "", "ne_version": [], "upper_version": "3.8.1", "version": "3.8.1"}, "WebTest": {"eq_version": "", "ge_version": "2.0.27", "lt_version": "", "ne_version": [], "upper_version": "2.0.33", "version": "2.0.33"}, "stestr": {"eq_version": "", "ge_version": "1.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.5.1", "version": "2.5.1"}, "testtools": {"eq_version": "", "ge_version": "2.2.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.0", "version": "2.3.0"}, "tempest": {"eq_version": "", "ge_version": "17.1.0", "lt_version": "", "ne_version": [], "upper_version": "22.1.0", "version": "22.1.0"}, "requests": {"eq_version": "", "ge_version": "2.14.2", "lt_version": "", "ne_version": [], "upper_version": "2.22.0", "version": "2.22.0"}, "openstackdocstheme": {"eq_version": "", "ge_version": "1.20.0", "lt_version": "", "ne_version": [], "upper_version": "1.31.1", "version": "1.31.1"}, "Sphinx": {"eq_version": "", "ge_version": "1.6.2", "lt_version": "", "ne_version": ["1.6.6", "1.6.7", "2.1.0"], "upper_version": "2.2.0", "version": "2.2.0"}, "sphinxcontrib-apidoc": {"eq_version": "", "ge_version": "0.2.0", "lt_version": "", "ne_version": [], "upper_version": "0.3.0", "version": "0.3.0"}, "sphinxcontrib-seqdiag": {"eq_version": "", "ge_version": "0.8.4", "lt_version": "", "ne_version": [], "upper_version": "0.8.5", "version": "0.8.5"}, "sphinx-feature-classification": {"eq_version": "", "ge_version": "0.3.2", "lt_version": "", "ne_version": [], "upper_version": "0.4.1", "version": "0.4.1"}, "sphinxcontrib-blockdiag": {"eq_version": "", "ge_version": "1.5.5", "lt_version": "", "ne_version": [], "upper_version": "1.5.5", "version": "1.5.5"}, "reno": {"eq_version": "", "ge_version": "2.5.0", "lt_version": "", "ne_version": [], "upper_version": "2.11.3", "version": "2.11.3"}, "os-api-ref": {"eq_version": "", "ge_version": "1.4.0", "lt_version": "", "ne_version": [], "upper_version": "1.6.2", "version": "1.6.2"}, "python-ldap": {"eq_version": "", "ge_version": "3.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.2.0", "version": "3.2.0"}, "ldappool": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.4.1", "version": "2.4.1"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/keystoneauth1.json b/tools/oos/example/train_cached_file/keystoneauth1.json deleted file mode 100644 index d23d15992e250a6a1aeecc81efc55faee1dafbb3..0000000000000000000000000000000000000000 --- 
a/tools/oos/example/train_cached_file/keystoneauth1.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "keystoneauth1", "version_dict": {"version": "3.17.4", "eq_version": "", "ge_version": "3.4.0", "lt_version": "", "ne_version": [], "upper_version": "3.17.4"}, "deep": {"count": 14, "list": ["aodh", "futurist", "hacking", "oslosphinx", "openstackdocstheme", "os-api-ref", "stestr", "cliff", "stevedore", "bandit", "oslotest", "os-client-config", "openstacksdk", "os-service-types", "keystoneauth1"]}, "requires": {"iso8601": {"eq_version": "", "ge_version": "0.1.11", "lt_version": "", "ne_version": [], "upper_version": "0.1.12", "version": "0.1.12"}, "os-service-types": {"eq_version": "", "ge_version": "1.2.0", "lt_version": "", "ne_version": [], "upper_version": "1.7.0", "version": "1.7.0"}, "pbr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": ["2.1.0"], "upper_version": "5.4.3", "version": "5.4.3"}, "requests": {"eq_version": "", "ge_version": "2.14.2", "lt_version": "", "ne_version": [], "upper_version": "2.22.0", "version": "2.22.0"}, "six": {"eq_version": "", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "stevedore": {"eq_version": "", "ge_version": "1.20.0", "lt_version": "", "ne_version": [], "upper_version": "1.31.0", "version": "1.31.0"}, "betamax": {"eq_version": "", "ge_version": "0.7.0", "lt_version": "", "ne_version": [], "upper_version": "0.8.1", "version": "0.8.1"}, "fixtures": {"eq_version": "", "ge_version": "3.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.0", "version": "3.0.0"}, "mock": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.5", "version": "3.0.5"}, "requests-kerberos": {"eq_version": "", "ge_version": "0.8.0", "lt_version": "", "ne_version": [], "upper_version": "0.12.0", "version": "0.12.0"}, "oauthlib": {"eq_version": "", "ge_version": "0.6.2", "lt_version": "", "ne_version": [], "upper_version": "3.1.0", "version": "3.1.0"}, "lxml": {"eq_version": "", "ge_version": "3.4.1", "lt_version": "", "ne_version": ["3.7.0"], "upper_version": "4.4.1", "version": "4.4.1"}, "PyYAML": {"eq_version": "", "ge_version": "3.12", "lt_version": "", "ne_version": [], "upper_version": "5.1.2", "version": "5.1.2"}, "bandit": {"eq_version": "", "ge_version": "1.1.0", "lt_version": "1.6.0", "ne_version": [], "upper_version": "", "version": "1.1.0"}, "coverage": {"eq_version": "", "ge_version": "4.0", "lt_version": "", "ne_version": ["4.4"], "upper_version": "4.5.4", "version": "4.5.4"}, "flake8-docstrings": {"eq_version": "0.2.1.post1", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "0.2.1.post1"}, "flake8-import-order": {"eq_version": "", "ge_version": "0.17.1", "lt_version": "", "ne_version": [], "upper_version": "", "version": "0.17.1"}, "hacking": {"eq_version": "", "ge_version": "0.12.0", "lt_version": "0.14", "ne_version": ["0.13.0"], "upper_version": "", "version": "0.12.0"}, "openstackdocstheme": {"eq_version": "", "ge_version": "1.18.1", "lt_version": "", "ne_version": [], "upper_version": "1.31.1", "version": "1.31.1"}, "oslo.config": {"eq_version": "", "ge_version": "5.2.0", "lt_version": "", "ne_version": [], "upper_version": "6.11.3", "version": "6.11.3"}, "oslo.utils": {"eq_version": "", "ge_version": "3.33.0", "lt_version": "", "ne_version": [], "upper_version": "3.41.6", "version": "3.41.6"}, "oslotest": {"eq_version": "", "ge_version": "3.2.0", "lt_version": "", "ne_version": [], 
"upper_version": "3.8.1", "version": "3.8.1"}, "reno": {"eq_version": "", "ge_version": "2.5.0", "lt_version": "", "ne_version": [], "upper_version": "2.11.3", "version": "2.11.3"}, "requests-mock": {"eq_version": "", "ge_version": "1.2.0", "lt_version": "", "ne_version": [], "upper_version": "1.6.0", "version": "1.6.0"}, "stestr": {"eq_version": "", "ge_version": "1.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.5.1", "version": "2.5.1"}, "testresources": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.0.1", "version": "2.0.1"}, "testtools": {"eq_version": "", "ge_version": "2.2.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.0", "version": "2.3.0"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/keystonemiddleware.json b/tools/oos/example/train_cached_file/keystonemiddleware.json deleted file mode 100644 index 873f49b5fbf7ed2c488518999cb8bf4b5e848eea..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/keystonemiddleware.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "keystonemiddleware", "version_dict": {"version": "7.0.1", "eq_version": "", "ge_version": "2.2.0", "lt_version": "", "ne_version": ["4.19.0"], "upper_version": "7.0.1"}, "deep": {"count": 1, "list": ["aodh", "keystonemiddleware"]}, "requires": {"keystoneauth1": {"eq_version": "", "ge_version": "3.12.0", "lt_version": "", "ne_version": [], "upper_version": "3.17.4", "version": "3.17.4"}, "oslo.cache": {"eq_version": "", "ge_version": "1.26.0", "lt_version": "", "ne_version": [], "upper_version": "1.37.1", "version": "1.37.1"}, "oslo.config": {"eq_version": "", "ge_version": "5.2.0", "lt_version": "", "ne_version": [], "upper_version": "6.11.3", "version": "6.11.3"}, "oslo.context": {"eq_version": "", "ge_version": "2.19.2", "lt_version": "", "ne_version": [], "upper_version": "2.23.1", "version": "2.23.1"}, "oslo.i18n": {"eq_version": "", "ge_version": "3.15.3", "lt_version": "", "ne_version": [], "upper_version": "3.24.0", "version": "3.24.0"}, "oslo.log": {"eq_version": "", "ge_version": "3.36.0", "lt_version": "", "ne_version": [], "upper_version": "3.44.3", "version": "3.44.3"}, "oslo.serialization": {"eq_version": "", "ge_version": "2.18.0", "lt_version": "", "ne_version": ["2.19.1"], "upper_version": "2.29.3", "version": "2.29.3"}, "oslo.utils": {"eq_version": "", "ge_version": "3.33.0", "lt_version": "", "ne_version": [], "upper_version": "3.41.6", "version": "3.41.6"}, "pbr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": ["2.1.0"], "upper_version": "5.4.3", "version": "5.4.3"}, "pycadf": {"eq_version": "", "ge_version": "1.1.0", "lt_version": "", "ne_version": ["2.0.0"], "upper_version": "2.10.0", "version": "2.10.0"}, "python-keystoneclient": {"eq_version": "", "ge_version": "3.20.0", "lt_version": "", "ne_version": [], "upper_version": "3.21.0", "version": "3.21.0"}, "requests": {"eq_version": "", "ge_version": "2.14.2", "lt_version": "", "ne_version": [], "upper_version": "2.22.0", "version": "2.22.0"}, "six": {"eq_version": "", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "WebOb": {"eq_version": "", "ge_version": "1.7.1", "lt_version": "", "ne_version": [], "upper_version": "1.8.5", "version": "1.8.5"}, "hacking": {"eq_version": "", "ge_version": "0.10.0", "lt_version": "0.11", "ne_version": [], "upper_version": "", "version": "0.10.0"}, "flake8-docstrings": {"eq_version": "0.2.1.post1", 
"ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "0.2.1.post1"}, "coverage": {"eq_version": "", "ge_version": "4.0", "lt_version": "", "ne_version": ["4.4"], "upper_version": "4.5.4", "version": "4.5.4"}, "cryptography": {"eq_version": "", "ge_version": "2.1", "lt_version": "", "ne_version": [], "upper_version": "2.8", "version": "2.8"}, "fixtures": {"eq_version": "", "ge_version": "3.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.0", "version": "3.0.0"}, "mock": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.5", "version": "3.0.5"}, "oslotest": {"eq_version": "", "ge_version": "3.2.0", "lt_version": "", "ne_version": [], "upper_version": "3.8.1", "version": "3.8.1"}, "requests-mock": {"eq_version": "", "ge_version": "1.2.0", "lt_version": "", "ne_version": [], "upper_version": "1.6.0", "version": "1.6.0"}, "stevedore": {"eq_version": "", "ge_version": "1.20.0", "lt_version": "", "ne_version": [], "upper_version": "1.31.0", "version": "1.31.0"}, "stestr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.5.1", "version": "2.5.1"}, "testresources": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.0.1", "version": "2.0.1"}, "testtools": {"eq_version": "", "ge_version": "2.2.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.0", "version": "2.3.0"}, "python-memcached": {"eq_version": "", "ge_version": "1.56", "lt_version": "", "ne_version": [], "upper_version": "1.59", "version": "1.59"}, "WebTest": {"eq_version": "", "ge_version": "2.0.27", "lt_version": "", "ne_version": [], "upper_version": "2.0.33", "version": "2.0.33"}, "oslo.messaging": {"eq_version": "", "ge_version": "5.29.0", "lt_version": "", "ne_version": [], "upper_version": "10.2.4", "version": "10.2.4"}, "bandit": {"eq_version": "", "ge_version": "1.1.0", "lt_version": "", "ne_version": ["1.6.0"], "upper_version": "", "version": "1.1.0"}, "doc8": {"eq_version": "", "ge_version": "0.6.0", "lt_version": "", "ne_version": [], "upper_version": "0.8.0", "version": "0.8.0"}, "openstackdocstheme": {"eq_version": "", "ge_version": "1.20.0", "lt_version": "", "ne_version": [], "upper_version": "1.31.1", "version": "1.31.1"}, "reno": {"eq_version": "", "ge_version": "2.5.0", "lt_version": "", "ne_version": [], "upper_version": "2.11.3", "version": "2.11.3"}, "Sphinx": {"eq_version": "", "ge_version": "1.6.2", "lt_version": "", "ne_version": ["1.6.6", "1.6.7", "2.1.0"], "upper_version": "2.2.0", "version": "2.2.0"}, "sphinxcontrib-apidoc": {"eq_version": "", "ge_version": "0.2.0", "lt_version": "", "ne_version": [], "upper_version": "0.3.0", "version": "0.3.0"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/kolla-ansible.json b/tools/oos/example/train_cached_file/kolla-ansible.json deleted file mode 100644 index 219b66d8fb890985c9fa6419826080b08f66720a..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/kolla-ansible.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "kolla-ansible", "version_dict": {"version": "9.3.2", "eq_version": "9.3.2", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": ""}, "deep": {"count": 0, "list": ["kolla-ansible"]}, "requires": {"pbr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": ["2.1.0"], "upper_version": "5.4.3", "version": "5.4.3"}, "docker": {"eq_version": "", "ge_version": "2.4.2", 
"lt_version": "", "ne_version": [], "upper_version": "4.0.2", "version": "4.0.2"}, "Jinja2": {"eq_version": "", "ge_version": "2.10", "lt_version": "", "ne_version": [], "upper_version": "2.10.1", "version": "2.10.1"}, "six": {"eq_version": "", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "oslo.config": {"eq_version": "", "ge_version": "5.2.0", "lt_version": "", "ne_version": [], "upper_version": "6.11.3", "version": "6.11.3"}, "oslo.utils": {"eq_version": "", "ge_version": "3.33.0", "lt_version": "", "ne_version": [], "upper_version": "3.41.6", "version": "3.41.6"}, "setuptools": {"eq_version": "", "ge_version": "21.0.0", "lt_version": "", "ne_version": ["24.0.0", "34.0.0", "34.0.1", "34.0.2", "34.0.3", "34.1.0", "34.1.1", "34.2.0", "34.3.0", "34.3.1", "34.3.2", "36.2.0"], "upper_version": "57.5.0", "version": "57.5.0"}, "PyYAML": {"eq_version": "", "ge_version": "3.12", "lt_version": "", "ne_version": [], "upper_version": "5.1.2", "version": "5.1.2"}, "netaddr": {"eq_version": "", "ge_version": "0.7.18", "lt_version": "", "ne_version": [], "upper_version": "0.7.19", "version": "0.7.19"}, "cryptography": {"eq_version": "", "ge_version": "2.1", "lt_version": "", "ne_version": [], "upper_version": "2.8", "version": "2.8"}, "jmespath": {"eq_version": "", "ge_version": "0.9.3", "lt_version": "", "ne_version": [], "upper_version": "0.9.4", "version": "0.9.4"}, "bandit": {"eq_version": "", "ge_version": "1.1.0", "lt_version": "1.6.3", "ne_version": [], "upper_version": "", "version": "1.1.0"}, "bashate": {"eq_version": "", "ge_version": "0.5.1", "lt_version": "", "ne_version": [], "upper_version": "0.6.0", "version": "0.6.0"}, "beautifulsoup4": {"eq_version": "", "ge_version": "4.6.0", "lt_version": "", "ne_version": [], "upper_version": "4.8.0", "version": "4.8.0"}, "coverage": {"eq_version": "", "ge_version": "4.0", "lt_version": "", "ne_version": ["4.4"], "upper_version": "4.5.4", "version": "4.5.4"}, "doc8": {"eq_version": "", "ge_version": "0.6.0", "lt_version": "", "ne_version": [], "upper_version": "0.8.0", "version": "0.8.0"}, "extras": {"eq_version": "", "ge_version": "1.0.0", "lt_version": "", "ne_version": [], "upper_version": "1.0.0", "version": "1.0.0"}, "hacking": {"eq_version": "", "ge_version": "1.1.0", "lt_version": "1.2.0", "ne_version": [], "upper_version": "", "version": "1.1.0"}, "oslo.log": {"eq_version": "", "ge_version": "3.36.0", "lt_version": "", "ne_version": [], "upper_version": "3.44.3", "version": "3.44.3"}, "oslotest": {"eq_version": "", "ge_version": "3.2.0", "lt_version": "", "ne_version": [], "upper_version": "3.8.1", "version": "3.8.1"}, "PrettyTable": {"eq_version": "", "ge_version": "0.7.1", "lt_version": "0.8", "ne_version": [], "upper_version": "", "version": "0.7.1"}, "pytz": {"eq_version": "", "ge_version": "2013.6", "lt_version": "", "ne_version": [], "upper_version": "2019.2", "version": "2019.2"}, "stestr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.5.1", "version": "2.5.1"}, "testscenarios": {"eq_version": "", "ge_version": "0.4", "lt_version": "", "ne_version": [], "upper_version": "0.5.0", "version": "0.5.0"}, "testtools": {"eq_version": "", "ge_version": "2.2.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.0", "version": "2.3.0"}, "openstackdocstheme": {"eq_version": "", "ge_version": "1.19.0", "lt_version": "", "ne_version": [], "upper_version": "1.31.1", "version": "1.31.1"}, "reno": {"eq_version": "", 
"ge_version": "2.5.0", "lt_version": "", "ne_version": [], "upper_version": "2.11.3", "version": "2.11.3"}, "Sphinx": {"eq_version": "", "ge_version": "1.8.0", "lt_version": "", "ne_version": ["2.1.0"], "upper_version": "2.2.0", "version": "2.2.0"}, "sphinxcontrib-svg2pdfconverter": {"eq_version": "", "ge_version": "0.1.0", "lt_version": "", "ne_version": [], "upper_version": "0.1.0", "version": "0.1.0"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/kolla.json b/tools/oos/example/train_cached_file/kolla.json deleted file mode 100644 index 6af0e6116f2cc9873fdd2ee8dc90f151a626b803..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/kolla.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "kolla", "version_dict": {"version": "9.4.0", "eq_version": "9.4.0", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": ""}, "deep": {"count": 0, "list": ["kolla"]}, "requires": {"pbr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": ["2.1.0"], "upper_version": "5.4.3", "version": "5.4.3"}, "docker": {"eq_version": "", "ge_version": "2.4.2", "lt_version": "", "ne_version": [], "upper_version": "4.0.2", "version": "4.0.2"}, "Jinja2": {"eq_version": "", "ge_version": "2.8", "lt_version": "", "ne_version": [], "upper_version": "2.10.1", "version": "2.10.1"}, "GitPython": {"eq_version": "", "ge_version": "1.0.1", "lt_version": "", "ne_version": [], "upper_version": "3.0.2", "version": "3.0.2"}, "six": {"eq_version": "", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "oslo.config": {"eq_version": "", "ge_version": "5.1.0", "lt_version": "", "ne_version": [], "upper_version": "6.11.3", "version": "6.11.3"}, "oslo.utils": {"eq_version": "", "ge_version": "3.33.0", "lt_version": "", "ne_version": [], "upper_version": "3.41.6", "version": "3.41.6"}, "setuptools": {"eq_version": "", "ge_version": "21.0", "lt_version": "", "ne_version": ["24.0.0", "34.0.0", "34.0.1", "34.0.2", "34.0.3", "34.1.0", "34.1.1", "34.2.0", "34.3.0", "34.3.1", "34.3.2", "36.2.0"], "upper_version": "57.5.0", "version": "57.5.0"}, "netaddr": {"eq_version": "", "ge_version": "0.7.18", "lt_version": "", "ne_version": [], "upper_version": "0.7.19", "version": "0.7.19"}, "bandit": {"eq_version": "", "ge_version": "1.1.0", "lt_version": "1.6.3", "ne_version": ["1.6.0"], "upper_version": "", "version": "1.1.0"}, "bashate": {"eq_version": "", "ge_version": "0.5.1", "lt_version": "", "ne_version": [], "upper_version": "0.6.0", "version": "0.6.0"}, "beautifulsoup4": {"eq_version": "", "ge_version": "4.6.0", "lt_version": "", "ne_version": [], "upper_version": "4.8.0", "version": "4.8.0"}, "coverage": {"eq_version": "", "ge_version": "4.0", "lt_version": "", "ne_version": ["4.4"], "upper_version": "4.5.4", "version": "4.5.4"}, "ddt": {"eq_version": "", "ge_version": "1.0.1", "lt_version": "", "ne_version": [], "upper_version": "1.2.1", "version": "1.2.1"}, "extras": {"eq_version": "", "ge_version": "1.0.0", "lt_version": "", "ne_version": [], "upper_version": "1.0.0", "version": "1.0.0"}, "graphviz": {"eq_version": "", "ge_version": "0.4", "lt_version": "", "ne_version": ["0.5.0"], "upper_version": "0.13", "version": "0.13"}, "hacking": {"eq_version": "", "ge_version": "1.1.0", "lt_version": "1.2.0", "ne_version": [], "upper_version": "", "version": "1.1.0"}, "oslo.log": {"eq_version": "", "ge_version": "3.36.0", "lt_version": "", "ne_version": [], "upper_version": "3.44.3", "version": 
"3.44.3"}, "oslotest": {"eq_version": "", "ge_version": "3.2.0", "lt_version": "", "ne_version": [], "upper_version": "3.8.1", "version": "3.8.1"}, "PrettyTable": {"eq_version": "", "ge_version": "0.7.1", "lt_version": "0.8", "ne_version": [], "upper_version": "", "version": "0.7.1"}, "PyYAML": {"eq_version": "", "ge_version": "3.10", "lt_version": "", "ne_version": [], "upper_version": "5.1.2", "version": "5.1.2"}, "python-barbicanclient": {"eq_version": "", "ge_version": "4.0.0", "lt_version": "", "ne_version": [], "upper_version": "4.9.0", "version": "4.9.0"}, "python-heatclient": {"eq_version": "", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "1.18.1", "version": "1.18.1"}, "python-neutronclient": {"eq_version": "", "ge_version": "6.3.0", "lt_version": "", "ne_version": [], "upper_version": "6.14.1", "version": "6.14.1"}, "python-openstackclient": {"eq_version": "", "ge_version": "3.12.0", "lt_version": "", "ne_version": [], "upper_version": "4.0.2", "version": "4.0.2"}, "python-swiftclient": {"eq_version": "", "ge_version": "3.2.0", "lt_version": "", "ne_version": [], "upper_version": "3.8.1", "version": "3.8.1"}, "pytz": {"eq_version": "", "ge_version": "2013.6", "lt_version": "", "ne_version": [], "upper_version": "2019.2", "version": "2019.2"}, "stestr": {"eq_version": "", "ge_version": "2.2.0", "lt_version": "", "ne_version": [], "upper_version": "2.5.1", "version": "2.5.1"}, "testscenarios": {"eq_version": "", "ge_version": "0.4", "lt_version": "", "ne_version": [], "upper_version": "0.5.0", "version": "0.5.0"}, "testtools": {"eq_version": "", "ge_version": "2.2.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.0", "version": "2.3.0"}, "doc8": {"eq_version": "", "ge_version": "0.6.0", "lt_version": "", "ne_version": [], "upper_version": "0.8.0", "version": "0.8.0"}, "openstackdocstheme": {"eq_version": "", "ge_version": "1.19.0", "lt_version": "", "ne_version": [], "upper_version": "1.31.1", "version": "1.31.1"}, "reno": {"eq_version": "", "ge_version": "2.5.0", "lt_version": "", "ne_version": [], "upper_version": "2.11.3", "version": "2.11.3"}, "Sphinx": {"eq_version": "", "ge_version": "1.8.0", "lt_version": "", "ne_version": ["2.1.0"], "upper_version": "2.2.0", "version": "2.2.0"}, "sphinxcontrib-svg2pdfconverter": {"eq_version": "", "ge_version": "0.1.0", "lt_version": "", "ne_version": [], "upper_version": "0.1.0", "version": "0.1.0"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/kombu.json b/tools/oos/example/train_cached_file/kombu.json deleted file mode 100644 index d164a915bedc6cd53d4135ae49057a94f5cb9969..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/kombu.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "kombu", "version_dict": {"version": "4.6.6", "eq_version": "", "ge_version": "4.6.1", "lt_version": "", "ne_version": ["4.0.2"], "upper_version": "4.6.6"}, "deep": {"count": 3, "list": ["aodh", "keystonemiddleware", "oslo.messaging", "kombu"]}, "requires": {"amqp": {"eq_version": "", "ge_version": "2.5.2", "lt_version": "2.6", "ne_version": [], "upper_version": "2.5.2", "version": "2.5.2"}, "importlib-metadata": {"eq_version": "", "ge_version": "0.18", "lt_version": "", "ne_version": [], "upper_version": "0.20", "version": "0.20"}, "azure-servicebus": {"eq_version": "", "ge_version": "0.21.1", "lt_version": "", "ne_version": [], "upper_version": "", "version": "0.21.1"}, "azure-storage-queue": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": 
[], "upper_version": "", "version": "unknown"}, "python-consul": {"eq_version": "", "ge_version": "0.6.0", "lt_version": "", "ne_version": [], "upper_version": "1.1.0", "version": "1.1.0"}, "librabbitmq": {"eq_version": "", "ge_version": "1.5.2", "lt_version": "", "ne_version": [], "upper_version": "", "version": "1.5.2"}, "pymongo": {"eq_version": "", "ge_version": "3.3.0", "lt_version": "", "ne_version": [], "upper_version": "3.9.0", "version": "3.9.0"}, "msgpack": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "0.6.1", "version": "0.6.1"}, "pyro4": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}, "qpid-python": {"eq_version": "", "ge_version": "0.26", "lt_version": "", "ne_version": [], "upper_version": "1.36.0.post1", "version": "1.36.0.post1"}, "qpid-tools": {"eq_version": "", "ge_version": "0.26", "lt_version": "", "ne_version": [], "upper_version": "", "version": "0.26"}, "redis": {"eq_version": "", "ge_version": "3.3.11", "lt_version": "", "ne_version": [], "upper_version": "3.3.8", "version": "3.3.8"}, "softlayer-messaging": {"eq_version": "", "ge_version": "1.0.3", "lt_version": "", "ne_version": [], "upper_version": "", "version": "1.0.3"}, "SQLAlchemy": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "1.3.8", "version": "1.3.8"}, "boto3": {"eq_version": "", "ge_version": "1.4.4", "lt_version": "", "ne_version": [], "upper_version": "1.9.225", "version": "1.9.225"}, "pycurl": {"eq_version": "7.43.0.2", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "7.43.0.2"}, "PyYAML": {"eq_version": "", "ge_version": "3.10", "lt_version": "", "ne_version": [], "upper_version": "5.1.2", "version": "5.1.2"}, "kazoo": {"eq_version": "", "ge_version": "1.3.1", "lt_version": "", "ne_version": [], "upper_version": "2.6.1", "version": "2.6.1"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/krest.json b/tools/oos/example/train_cached_file/krest.json deleted file mode 100644 index ba327ebcd84607bc961c5abf0f6306ced552f816..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/krest.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "krest", "version_dict": {"version": "1.3.1", "eq_version": "", "ge_version": "1.3.0", "lt_version": "", "ne_version": [], "upper_version": "1.3.1"}, "deep": {"count": 1, "list": ["cinder", "krest"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/lazy-object-proxy.json b/tools/oos/example/train_cached_file/lazy-object-proxy.json deleted file mode 100644 index 6243dada9c6cef40bcbbaeb50686f8bcd022786e..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/lazy-object-proxy.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "lazy-object-proxy", "version_dict": {"version": "1.6.0", "eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "1.6.0"}, "deep": {"count": 5, "list": ["openstack-heat", "neutron-lib", "os-ken", "pylint", "astroid", "lazy-object-proxy"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/ldappool.json b/tools/oos/example/train_cached_file/ldappool.json deleted file mode 100644 index 6eaa2a70784d37ea386b34ff7238824aa8e2b43d..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/ldappool.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "ldappool", 
"version_dict": {"version": "2.4.1", "eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.4.1"}, "deep": {"count": 1, "list": ["keystone", "ldappool"]}, "requires": {"python-ldap": {"eq_version": "", "ge_version": "3.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.2.0", "version": "3.2.0"}, "PrettyTable": {"eq_version": "", "ge_version": "0.7.2", "lt_version": "0.8", "ne_version": [], "upper_version": "", "version": "0.7.2"}, "hacking": {"eq_version": "", "ge_version": "1.1.0", "lt_version": "1.2.0", "ne_version": [], "upper_version": "", "version": "1.1.0"}, "flake8-docstrings": {"eq_version": "0.2.1.post1", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "0.2.1.post1"}, "coverage": {"eq_version": "", "ge_version": "4.0", "lt_version": "", "ne_version": ["4.4"], "upper_version": "4.5.4", "version": "4.5.4"}, "fixtures": {"eq_version": "", "ge_version": "3.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.0", "version": "3.0.0"}, "Sphinx": {"eq_version": "", "ge_version": "1.6.2", "lt_version": "", "ne_version": ["1.6.6", "1.6.7"], "upper_version": "2.2.0", "version": "2.2.0"}, "openstackdocstheme": {"eq_version": "", "ge_version": "1.18.1", "lt_version": "", "ne_version": [], "upper_version": "1.31.1", "version": "1.31.1"}, "stestr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.5.1", "version": "2.5.1"}, "testresources": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.0.1", "version": "2.0.1"}, "testtools": {"eq_version": "", "ge_version": "2.2.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.0", "version": "2.3.0"}, "reno": {"eq_version": "", "ge_version": "2.5.0", "lt_version": "", "ne_version": [], "upper_version": "2.11.3", "version": "2.11.3"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/librabbitmq.json b/tools/oos/example/train_cached_file/librabbitmq.json deleted file mode 100644 index a43ed8f2d5b30ea5f7a0267dc47f35affcd51c53..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/librabbitmq.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "librabbitmq", "version_dict": {"version": "1.5.2", "eq_version": "", "ge_version": "1.5.2", "lt_version": "", "ne_version": [], "upper_version": ""}, "deep": {"count": 4, "list": ["aodh", "keystonemiddleware", "oslo.messaging", "kombu", "librabbitmq"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/lxml.json b/tools/oos/example/train_cached_file/lxml.json deleted file mode 100644 index fc6edd17d8a2255f62d03285f77496cc35030697..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/lxml.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "lxml", "version_dict": {"version": "4.4.1", "eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "4.4.1"}, "deep": {"count": 7, "list": ["aodh", "futurist", "hacking", "oslosphinx", "openstackdocstheme", "os-api-ref", "beautifulsoup4", "lxml"]}, "requires": {"cssselect": {"eq_version": "", "ge_version": "0.7", "lt_version": "", "ne_version": [], "upper_version": "", "version": "0.7"}, "html5lib": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}, "beautifulsoup4": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "4.8.0", 
"version": "4.8.0"}, "Cython": {"eq_version": "", "ge_version": "0.29.7", "lt_version": "", "ne_version": [], "upper_version": "", "version": "0.29.7"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/lz4.json b/tools/oos/example/train_cached_file/lz4.json deleted file mode 100644 index 739629e0ed2877be5e36a1807db83aa1426e85e5..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/lz4.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "lz4", "version_dict": {"version": "0.9.0", "eq_version": "", "ge_version": "0.9.0", "lt_version": "", "ne_version": [], "upper_version": ""}, "deep": {"count": 1, "list": ["gnocchi", "lz4"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/matplotlib.json b/tools/oos/example/train_cached_file/matplotlib.json deleted file mode 100644 index 3fcd96474c613393bcf027e79d2d2744ef1cdda7..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/matplotlib.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "matplotlib", "version_dict": {"version": "1.4", "eq_version": "", "ge_version": "1.4", "lt_version": "", "ne_version": [], "upper_version": ""}, "deep": {"count": 8, "list": ["aodh", "futurist", "hacking", "oslosphinx", "openstackdocstheme", "os-api-ref", "stestr", "subunit2sql", "matplotlib"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/mccabe.json b/tools/oos/example/train_cached_file/mccabe.json deleted file mode 100644 index f0b950cdca457759a0227aa4522aeca0e48cd317..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/mccabe.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "mccabe", "version_dict": {"version": "0.2.1", "eq_version": "0.2.1", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": ""}, "deep": {"count": 3, "list": ["aodh", "futurist", "hacking", "mccabe"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/microversion-parse.json b/tools/oos/example/train_cached_file/microversion-parse.json deleted file mode 100644 index 58aa25c23c8375fd1c0fe1761a75929dd2979f52..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/microversion-parse.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "microversion-parse", "version_dict": {"version": "0.2.1", "eq_version": "", "ge_version": "0.2.1", "lt_version": "", "ne_version": [], "upper_version": "0.2.1"}, "deep": {"count": 1, "list": ["nova", "microversion-parse"]}, "requires": {"WebOb": {"eq_version": "", "ge_version": "1.2.3", "lt_version": "", "ne_version": [], "upper_version": "1.8.5", "version": "1.8.5"}, "hacking": {"eq_version": "", "ge_version": "0.10.2", "lt_version": "0.11", "ne_version": [], "upper_version": "", "version": "0.10.2"}, "coverage": {"eq_version": "", "ge_version": "3.6", "lt_version": "", "ne_version": [], "upper_version": "4.5.4", "version": "4.5.4"}, "Sphinx": {"eq_version": "", "ge_version": "1.1.2", "lt_version": "1.3", "ne_version": ["1.2.0", "1.3b1"], "upper_version": "2.2.0", "version": "2.2.0"}, "oslosphinx": {"eq_version": "", "ge_version": "2.5.0", "lt_version": "", "ne_version": ["3.4.0"], "upper_version": "4.18.0", "version": "4.18.0"}, "stestr": {"eq_version": "", "ge_version": "1.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.5.1", "version": "2.5.1"}, "testtools": {"eq_version": "", "ge_version": "1.4.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.0", "version": "2.3.0"}, 
"gabbi": {"eq_version": "", "ge_version": "1.35.0", "lt_version": "", "ne_version": [], "upper_version": "1.49.0", "version": "1.49.0"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/mock.json b/tools/oos/example/train_cached_file/mock.json deleted file mode 100644 index 7dafd4eefa8acca8b079082f077f326773826868..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/mock.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "mock", "version_dict": {"version": "3.0.5", "eq_version": "", "ge_version": "1.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.5"}, "deep": {"count": 3, "list": ["aodh", "futurist", "hacking", "mock"]}, "requires": {"six": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "twine": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}, "wheel": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}, "blurb": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}, "Sphinx": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "2.2.0", "version": "2.2.0"}, "pytest": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "5.1.2", "version": "5.1.2"}, "pytest-cov": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/monotonic.json b/tools/oos/example/train_cached_file/monotonic.json deleted file mode 100644 index 419707e1b644a4f75b7b1be3de8b3c56c2679260..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/monotonic.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "monotonic", "version_dict": {"version": "1.5", "eq_version": "", "ge_version": "0.6", "lt_version": "", "ne_version": [], "upper_version": "1.5"}, "deep": {"count": 2, "list": ["aodh", "futurist", "monotonic"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/more-itertools.json b/tools/oos/example/train_cached_file/more-itertools.json deleted file mode 100644 index 7b0e5198d4741b320cc36bdc11a96e58cb76d625..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/more-itertools.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "more-itertools", "version_dict": {"version": "7.2.0", "eq_version": "", "ge_version": "4.0.0", "lt_version": "", "ne_version": [], "upper_version": "7.2.0"}, "deep": {"count": 7, "list": ["aodh", "futurist", "hacking", "mock", "Sphinx", "sphinxcontrib-applehelp", "pytest", "more-itertools"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/mox3.json b/tools/oos/example/train_cached_file/mox3.json deleted file mode 100644 index 782193763864a13e40c7a38549b632bdab5c790b..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/mox3.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "mox3", "version_dict": {"version": "0.28.0", "eq_version": "", "ge_version": "0.20.0", "lt_version": "", "ne_version": [], "upper_version": "0.28.0"}, "deep": {"count": 11, "list": ["aodh", "futurist", "hacking", "oslosphinx", "openstackdocstheme", "os-api-ref", "stestr", "cliff", "stevedore", "bandit", "oslotest", "mox3"]}, "requires": 
{"pbr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": ["2.1.0"], "upper_version": "5.4.3", "version": "5.4.3"}, "fixtures": {"eq_version": "", "ge_version": "3.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.0", "version": "3.0.0"}, "flake8": {"eq_version": "", "ge_version": "2.5.4", "lt_version": "2.6.0", "ne_version": [], "upper_version": "", "version": "2.5.4"}, "coverage": {"eq_version": "", "ge_version": "4.0", "lt_version": "", "ne_version": ["4.4"], "upper_version": "4.5.4", "version": "4.5.4"}, "python-subunit": {"eq_version": "", "ge_version": "1.0.0", "lt_version": "", "ne_version": [], "upper_version": "1.4.0", "version": "1.4.0"}, "stestr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.5.1", "version": "2.5.1"}, "testtools": {"eq_version": "", "ge_version": "2.2.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.0", "version": "2.3.0"}, "six": {"eq_version": "", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "Sphinx": {"eq_version": "", "ge_version": "1.6.5", "lt_version": "", "ne_version": ["1.6.6", "1.6.7"], "upper_version": "2.2.0", "version": "2.2.0"}, "openstackdocstheme": {"eq_version": "", "ge_version": "1.18.1", "lt_version": "", "ne_version": [], "upper_version": "1.31.1", "version": "1.31.1"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/msgpack.json b/tools/oos/example/train_cached_file/msgpack.json deleted file mode 100644 index 54798125ceb47b78e5cfc3e93d80ff265d3803c5..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/msgpack.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "msgpack", "version_dict": {"version": "0.6.1", "eq_version": "", "ge_version": "0.5.2", "lt_version": "", "ne_version": [], "upper_version": "0.6.1"}, "deep": {"count": 18, "list": ["aodh", "futurist", "hacking", "oslosphinx", "openstackdocstheme", "os-api-ref", "stestr", "cliff", "stevedore", "bandit", "oslotest", "os-client-config", "openstacksdk", "os-service-types", "keystoneauth1", "oslo.config", "oslo.log", "oslo.serialization", "msgpack"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/msrest.json b/tools/oos/example/train_cached_file/msrest.json deleted file mode 100644 index d2b717603255277da76ba50a488c04030de5f7cb..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/msrest.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "msrest", "version_dict": {"version": "0.4.0", "eq_version": "", "ge_version": "0.4.0", "lt_version": "", "ne_version": [], "upper_version": ""}, "deep": {"count": 7, "list": ["aodh", "keystonemiddleware", "oslo.messaging", "kombu", "azure-servicebus", "azure-common", "msrestazure", "msrest"]}, "requires": {"certifi": {"eq_version": "", "ge_version": "2015.9.6.2", "lt_version": "", "ne_version": [], "upper_version": "2019.6.16", "version": "2019.6.16"}, "chardet": {"eq_version": "", "ge_version": "2.3.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.4", "version": "3.0.4"}, "enum34": {"eq_version": "", "ge_version": "1.0.4", "lt_version": "", "ne_version": [], "upper_version": "1.1.6", "version": "1.1.6"}, "isodate": {"eq_version": "", "ge_version": "0.5.4", "lt_version": "", "ne_version": [], "upper_version": "", "version": "0.5.4"}, "keyring": {"eq_version": "", "ge_version": "5.6", "lt_version": "", "ne_version": [], "upper_version": "19.1.0", "version": 
"19.1.0"}, "requests": {"eq_version": "", "ge_version": "2.7.0", "lt_version": "", "ne_version": [], "upper_version": "2.22.0", "version": "2.22.0"}, "requests-oauthlib": {"eq_version": "", "ge_version": "0.5.0", "lt_version": "", "ne_version": [], "upper_version": "1.2.0", "version": "1.2.0"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/msrestazure.json b/tools/oos/example/train_cached_file/msrestazure.json deleted file mode 100644 index 623e40bde1b31b1bba358d27e94a647137517e6d..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/msrestazure.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "msrestazure", "version_dict": {"version": "0.4.0", "eq_version": "", "ge_version": "0.4.0", "lt_version": "0.5.0", "ne_version": [], "upper_version": ""}, "deep": {"count": 6, "list": ["aodh", "keystonemiddleware", "oslo.messaging", "kombu", "azure-servicebus", "azure-common", "msrestazure"]}, "requires": {"msrest": {"eq_version": "", "ge_version": "0.4.0", "lt_version": "", "ne_version": [], "upper_version": "", "version": "0.4.0"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/munch.json b/tools/oos/example/train_cached_file/munch.json deleted file mode 100644 index a6cab519aded3a29c461d48ccfe3d5b173b4e917..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/munch.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "munch", "version_dict": {"version": "2.3.2", "eq_version": "", "ge_version": "2.1.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.2"}, "deep": {"count": 13, "list": ["aodh", "futurist", "hacking", "oslosphinx", "openstackdocstheme", "os-api-ref", "stestr", "cliff", "stevedore", "bandit", "oslotest", "os-client-config", "openstacksdk", "munch"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/murano-pkg-check.json b/tools/oos/example/train_cached_file/murano-pkg-check.json deleted file mode 100644 index eb18dbce01b4ca32bb48e4a7bf64e66ba75b9ac6..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/murano-pkg-check.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "murano-pkg-check", "version_dict": {"version": "0.3.0", "eq_version": "", "ge_version": "0.3.0", "lt_version": "", "ne_version": [], "upper_version": "0.3.0"}, "deep": {"count": 4, "list": ["aodh", "gnocchiclient", "python-openstackclient", "python-muranoclient", "murano-pkg-check"]}, "requires": {"pbr": {"eq_version": "", "ge_version": "1.8", "lt_version": "", "ne_version": [], "upper_version": "5.4.3", "version": "5.4.3"}, "PyYAML": {"eq_version": "", "ge_version": "3.10.0", "lt_version": "", "ne_version": [], "upper_version": "5.1.2", "version": "5.1.2"}, "yaql": {"eq_version": "", "ge_version": "1.1.0", "lt_version": "", "ne_version": [], "upper_version": "1.1.3", "version": "1.1.3"}, "six": {"eq_version": "", "ge_version": "1.9.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "stevedore": {"eq_version": "", "ge_version": "1.17.1", "lt_version": "", "ne_version": [], "upper_version": "1.31.0", "version": "1.31.0"}, "semantic-version": {"eq_version": "", "ge_version": "2.3.1", "lt_version": "", "ne_version": [], "upper_version": "2.8.2", "version": "2.8.2"}, "oslo.i18n": {"eq_version": "", "ge_version": "2.1.0", "lt_version": "", "ne_version": [], "upper_version": "3.24.0", "version": "3.24.0"}, "hacking": {"eq_version": "", "ge_version": "0.11.0", "lt_version": "0.12", "ne_version": [], 
"upper_version": "", "version": "0.11.0"}, "coverage": {"eq_version": "", "ge_version": "4.0", "lt_version": "", "ne_version": [], "upper_version": "4.5.4", "version": "4.5.4"}, "python-subunit": {"eq_version": "", "ge_version": "0.0.18", "lt_version": "", "ne_version": [], "upper_version": "1.4.0", "version": "1.4.0"}, "Sphinx": {"eq_version": "", "ge_version": "1.2.1", "lt_version": "1.4", "ne_version": ["1.3b1"], "upper_version": "2.2.0", "version": "2.2.0"}, "oslosphinx": {"eq_version": "", "ge_version": "4.7.0", "lt_version": "", "ne_version": [], "upper_version": "4.18.0", "version": "4.18.0"}, "oslotest": {"eq_version": "", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "3.8.1", "version": "3.8.1"}, "testrepository": {"eq_version": "", "ge_version": "0.0.18", "lt_version": "", "ne_version": [], "upper_version": "0.0.20", "version": "0.0.20"}, "testscenarios": {"eq_version": "", "ge_version": "0.4", "lt_version": "", "ne_version": [], "upper_version": "0.5.0", "version": "0.5.0"}, "testtools": {"eq_version": "", "ge_version": "1.4.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.0", "version": "2.3.0"}, "reno": {"eq_version": "", "ge_version": "1.8.0", "lt_version": "", "ne_version": [], "upper_version": "2.11.3", "version": "2.11.3"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/mypy-extensions.json b/tools/oos/example/train_cached_file/mypy-extensions.json deleted file mode 100644 index 12bb9c8cd2a0b1aa5004f7abce45c1d7f5983338..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/mypy-extensions.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "mypy-extensions", "version_dict": {"version": "0.4.1", "eq_version": "", "ge_version": "0.4.0", "lt_version": "0.5.0", "ne_version": [], "upper_version": "0.4.1"}, "deep": {"count": 7, "list": ["aodh", "futurist", "hacking", "mock", "Sphinx", "sphinxcontrib-applehelp", "mypy", "mypy-extensions"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/mypy.json b/tools/oos/example/train_cached_file/mypy.json deleted file mode 100644 index e9121126657e8db769132945bc3076e301734022..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/mypy.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "mypy", "version_dict": {"version": "0.720", "eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "0.720"}, "deep": {"count": 6, "list": ["aodh", "futurist", "hacking", "mock", "Sphinx", "sphinxcontrib-applehelp", "mypy"]}, "requires": {"typed-ast": {"eq_version": "", "ge_version": "1.4.0", "lt_version": "1.5.0", "ne_version": [], "upper_version": "1.4.0", "version": "1.4.0"}, "typing-extensions": {"eq_version": "", "ge_version": "3.7.4", "lt_version": "", "ne_version": [], "upper_version": "3.7.4", "version": "3.7.4"}, "mypy-extensions": {"eq_version": "", "ge_version": "0.4.0", "lt_version": "0.5.0", "ne_version": [], "upper_version": "0.4.1", "version": "0.4.1"}, "psutil": {"eq_version": "", "ge_version": "4.0", "lt_version": "", "ne_version": [], "upper_version": "5.6.3", "version": "5.6.3"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/netaddr.json b/tools/oos/example/train_cached_file/netaddr.json deleted file mode 100644 index e205106a3a663bd0a4c403a0d27999c005f75b87..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/netaddr.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "netaddr", "version_dict": 
{"version": "0.7.19", "eq_version": "", "ge_version": "0.7.18", "lt_version": "", "ne_version": [], "upper_version": "0.7.19"}, "deep": {"count": 16, "list": ["aodh", "futurist", "hacking", "oslosphinx", "openstackdocstheme", "os-api-ref", "stestr", "cliff", "stevedore", "bandit", "oslotest", "os-client-config", "openstacksdk", "os-service-types", "keystoneauth1", "oslo.config", "netaddr"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/netifaces.json b/tools/oos/example/train_cached_file/netifaces.json deleted file mode 100644 index aa88f23d0492c26ff8bc4c239232b2a41bd15626..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/netifaces.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "netifaces", "version_dict": {"version": "0.10.9", "eq_version": "", "ge_version": "0.10.4", "lt_version": "", "ne_version": [], "upper_version": "0.10.9"}, "deep": {"count": 18, "list": ["aodh", "futurist", "hacking", "oslosphinx", "openstackdocstheme", "os-api-ref", "stestr", "cliff", "stevedore", "bandit", "oslotest", "os-client-config", "openstacksdk", "os-service-types", "keystoneauth1", "oslo.config", "oslo.log", "oslo.utils", "netifaces"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/netmiko.json b/tools/oos/example/train_cached_file/netmiko.json deleted file mode 100644 index aece9ef640cee492aa4130c583ccfcffb8d8b312..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/netmiko.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "netmiko", "version_dict": {"version": "2.4.2", "eq_version": "", "ge_version": "2.4.1", "lt_version": "", "ne_version": [], "upper_version": "2.4.2"}, "deep": {"count": 1, "list": ["networking-generic-switch", "netmiko"]}, "requires": {"setuptools": {"eq_version": "", "ge_version": "38.4.0", "lt_version": "", "ne_version": [], "upper_version": "57.5.0", "version": "57.5.0"}, "paramiko": {"eq_version": "", "ge_version": "2.4.3", "lt_version": "", "ne_version": [], "upper_version": "2.6.0", "version": "2.6.0"}, "scp": {"eq_version": "", "ge_version": "0.13.2", "lt_version": "", "ne_version": [], "upper_version": "0.13.2", "version": "0.13.2"}, "pyserial": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "3.4", "version": "3.4"}, "textfsm": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "1.1.0", "version": "1.1.0"}, "PyYAML": {"eq_version": "5.1", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "5.1.2", "version": "5.1"}, "pytest": {"eq_version": "", "ge_version": "4.6.3", "lt_version": "", "ne_version": [], "upper_version": "5.1.2", "version": "5.1.2"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/networking-baremetal.json b/tools/oos/example/train_cached_file/networking-baremetal.json deleted file mode 100644 index 5de588a30f779d4f4b2b0d4923cd0864ff7a9b79..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/networking-baremetal.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "networking-baremetal", "version_dict": {"version": "1.4.0", "eq_version": "1.4.0", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": ""}, "deep": {"count": 0, "list": ["networking-baremetal"]}, "requires": {"neutron-lib": {"eq_version": "", "ge_version": "1.18.0", "lt_version": "", "ne_version": [], "upper_version": "1.29.1", "version": "1.29.1"}, "oslo.config": {"eq_version": "", 
"ge_version": "5.2.0", "lt_version": "", "ne_version": [], "upper_version": "6.11.3", "version": "6.11.3"}, "oslo.i18n": {"eq_version": "", "ge_version": "3.15.3", "lt_version": "", "ne_version": [], "upper_version": "3.24.0", "version": "3.24.0"}, "oslo.log": {"eq_version": "", "ge_version": "3.36.0", "lt_version": "", "ne_version": [], "upper_version": "3.44.3", "version": "3.44.3"}, "oslo.utils": {"eq_version": "", "ge_version": "3.33.0", "lt_version": "", "ne_version": [], "upper_version": "3.41.6", "version": "3.41.6"}, "oslo.messaging": {"eq_version": "", "ge_version": "5.29.0", "lt_version": "", "ne_version": [], "upper_version": "10.2.4", "version": "10.2.4"}, "pbr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": ["2.1.0"], "upper_version": "5.4.3", "version": "5.4.3"}, "python-ironicclient": {"eq_version": "", "ge_version": "2.3.0", "lt_version": "", "ne_version": [], "upper_version": "3.1.2", "version": "3.1.2"}, "tooz": {"eq_version": "", "ge_version": "1.58.0", "lt_version": "", "ne_version": [], "upper_version": "1.66.3", "version": "1.66.3"}, "neutron": {"eq_version": "", "ge_version": "13.0.0.0b1", "lt_version": "", "ne_version": [], "upper_version": "15.3.4", "version": "15.3.4"}, "bashate": {"eq_version": "", "ge_version": "0.5.1", "lt_version": "", "ne_version": [], "upper_version": "0.6.0", "version": "0.6.0"}, "hacking": {"eq_version": "", "ge_version": "0.12.0", "lt_version": "0.14", "ne_version": ["0.13.0"], "upper_version": "", "version": "0.12.0"}, "coverage": {"eq_version": "", "ge_version": "4.0", "lt_version": "", "ne_version": ["4.4"], "upper_version": "4.5.4", "version": "4.5.4"}, "oslotest": {"eq_version": "", "ge_version": "3.2.0", "lt_version": "", "ne_version": [], "upper_version": "3.8.1", "version": "3.8.1"}, "python-subunit": {"eq_version": "", "ge_version": "1.0.0", "lt_version": "", "ne_version": [], "upper_version": "1.4.0", "version": "1.4.0"}, "testtools": {"eq_version": "", "ge_version": "2.2.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.0", "version": "2.3.0"}, "stestr": {"eq_version": "", "ge_version": "1.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.5.1", "version": "2.5.1"}, "testscenarios": {"eq_version": "", "ge_version": "0.4", "lt_version": "", "ne_version": [], "upper_version": "0.5.0", "version": "0.5.0"}, "Sphinx": {"eq_version": "", "ge_version": "1.6.2", "lt_version": "", "ne_version": ["1.6.6", "1.6.7", "2.1.0"], "upper_version": "2.2.0", "version": "2.2.0"}, "openstackdocstheme": {"eq_version": "", "ge_version": "1.20.0", "lt_version": "", "ne_version": [], "upper_version": "1.31.1", "version": "1.31.1"}, "reno": {"eq_version": "", "ge_version": "2.5.0", "lt_version": "", "ne_version": [], "upper_version": "2.11.3", "version": "2.11.3"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/networking-generic-switch.json b/tools/oos/example/train_cached_file/networking-generic-switch.json deleted file mode 100644 index 851e94f249209aff05e78d9d57262687a3e56132..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/networking-generic-switch.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "networking-generic-switch", "version_dict": {"version": "2.1.0", "eq_version": "2.1.0", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": ""}, "deep": {"count": 0, "list": ["networking-generic-switch"]}, "requires": {"stevedore": {"eq_version": "", "ge_version": "1.20.0", "lt_version": "", "ne_version": [], "upper_version": 
"1.31.0", "version": "1.31.0"}, "netmiko": {"eq_version": "", "ge_version": "2.4.1", "lt_version": "", "ne_version": [], "upper_version": "2.4.2", "version": "2.4.2"}, "neutron": {"eq_version": "", "ge_version": "13.0.0.0b1", "lt_version": "", "ne_version": [], "upper_version": "15.3.4", "version": "15.3.4"}, "neutron-lib": {"eq_version": "", "ge_version": "1.18.0", "lt_version": "", "ne_version": [], "upper_version": "1.29.1", "version": "1.29.1"}, "oslo.config": {"eq_version": "", "ge_version": "5.2.0", "lt_version": "", "ne_version": [], "upper_version": "6.11.3", "version": "6.11.3"}, "oslo.i18n": {"eq_version": "", "ge_version": "3.15.3", "lt_version": "", "ne_version": [], "upper_version": "3.24.0", "version": "3.24.0"}, "oslo.log": {"eq_version": "", "ge_version": "3.36.0", "lt_version": "", "ne_version": [], "upper_version": "3.44.3", "version": "3.44.3"}, "oslo.utils": {"eq_version": "", "ge_version": "3.33.0", "lt_version": "", "ne_version": [], "upper_version": "3.41.6", "version": "3.41.6"}, "six": {"eq_version": "", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "tenacity": {"eq_version": "", "ge_version": "4.4.0", "lt_version": "", "ne_version": [], "upper_version": "5.1.1", "version": "5.1.1"}, "tooz": {"eq_version": "", "ge_version": "1.58.0", "lt_version": "", "ne_version": [], "upper_version": "1.66.3", "version": "1.66.3"}, "mock": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.5", "version": "3.0.5"}, "coverage": {"eq_version": "", "ge_version": "4.0", "lt_version": "", "ne_version": ["4.4"], "upper_version": "4.5.4", "version": "4.5.4"}, "fixtures": {"eq_version": "", "ge_version": "3.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.0", "version": "3.0.0"}, "stestr": {"eq_version": "", "ge_version": "1.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.5.1", "version": "2.5.1"}, "hacking": {"eq_version": "", "ge_version": "1.0.0", "lt_version": "1.2.0", "ne_version": [], "upper_version": "", "version": "1.0.0"}, "flake8-import-order": {"eq_version": "0.11", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "0.11"}, "bashate": {"eq_version": "", "ge_version": "0.5.1", "lt_version": "", "ne_version": [], "upper_version": "0.6.0", "version": "0.6.0"}, "doc8": {"eq_version": "", "ge_version": "0.6.0", "lt_version": "", "ne_version": [], "upper_version": "0.8.0", "version": "0.8.0"}, "futurist": {"eq_version": "", "ge_version": "1.2.0", "lt_version": "", "ne_version": [], "upper_version": "1.9.0", "version": "1.9.0"}, "reno": {"eq_version": "", "ge_version": "2.5.0", "lt_version": "", "ne_version": [], "upper_version": "2.11.3", "version": "2.11.3"}, "Sphinx": {"eq_version": "", "ge_version": "1.6.2", "lt_version": "", "ne_version": ["1.6.6", "1.6.7", "2.1.0"], "upper_version": "2.2.0", "version": "2.2.0"}, "sphinxcontrib-pecanwsme": {"eq_version": "", "ge_version": "0.8.0", "lt_version": "", "ne_version": [], "upper_version": "0.10.0", "version": "0.10.0"}, "sphinxcontrib-seqdiag": {"eq_version": "", "ge_version": "0.8.4", "lt_version": "", "ne_version": [], "upper_version": "0.8.5", "version": "0.8.5"}, "openstackdocstheme": {"eq_version": "", "ge_version": "1.20.0", "lt_version": "", "ne_version": [], "upper_version": "1.31.1", "version": "1.31.1"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/networkx.json b/tools/oos/example/train_cached_file/networkx.json 
deleted file mode 100644 index 85bf4cf6fb02f6c6d5ae00445f907d4d4c86c125..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/networkx.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "networkx", "version_dict": {"version": "2.3", "eq_version": "", "ge_version": "1.10", "lt_version": "", "ne_version": [], "upper_version": "2.3"}, "deep": {"count": 2, "list": ["cinder", "taskflow", "networkx"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/neutron-lib.json b/tools/oos/example/train_cached_file/neutron-lib.json deleted file mode 100644 index abe7f9760f901cabdf8bad2f6201f61cd42e88d3..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/neutron-lib.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "neutron-lib", "version_dict": {"version": "1.29.1", "eq_version": "", "ge_version": "1.14.0", "lt_version": "", "ne_version": [], "upper_version": "1.29.1"}, "deep": {"count": 1, "list": ["openstack-heat", "neutron-lib"]}, "requires": {"pbr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": ["2.1.0"], "upper_version": "5.4.3", "version": "5.4.3"}, "SQLAlchemy": {"eq_version": "", "ge_version": "1.2.0", "lt_version": "", "ne_version": [], "upper_version": "1.3.8", "version": "1.3.8"}, "pecan": {"eq_version": "", "ge_version": "1.0.0", "lt_version": "", "ne_version": ["1.0.2", "1.0.3", "1.0.4", "1.2"], "upper_version": "1.3.3", "version": "1.3.3"}, "keystoneauth1": {"eq_version": "", "ge_version": "3.4.0", "lt_version": "", "ne_version": [], "upper_version": "3.17.4", "version": "3.17.4"}, "netaddr": {"eq_version": "", "ge_version": "0.7.18", "lt_version": "", "ne_version": [], "upper_version": "0.7.19", "version": "0.7.19"}, "six": {"eq_version": "", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "stevedore": {"eq_version": "", "ge_version": "1.20.0", "lt_version": "", "ne_version": [], "upper_version": "1.31.0", "version": "1.31.0"}, "os-ken": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "0.4.1", "version": "0.4.1"}, "oslo.concurrency": {"eq_version": "", "ge_version": "3.26.0", "lt_version": "", "ne_version": [], "upper_version": "3.30.1", "version": "3.30.1"}, "oslo.config": {"eq_version": "", "ge_version": "5.2.0", "lt_version": "", "ne_version": [], "upper_version": "6.11.3", "version": "6.11.3"}, "oslo.context": {"eq_version": "", "ge_version": "2.19.2", "lt_version": "", "ne_version": [], "upper_version": "2.23.1", "version": "2.23.1"}, "oslo.db": {"eq_version": "", "ge_version": "4.37.0", "lt_version": "", "ne_version": [], "upper_version": "5.0.2", "version": "5.0.2"}, "oslo.i18n": {"eq_version": "", "ge_version": "3.15.3", "lt_version": "", "ne_version": [], "upper_version": "3.24.0", "version": "3.24.0"}, "oslo.log": {"eq_version": "", "ge_version": "3.36.0", "lt_version": "", "ne_version": [], "upper_version": "3.44.3", "version": "3.44.3"}, "oslo.messaging": {"eq_version": "", "ge_version": "5.29.0", "lt_version": "", "ne_version": [], "upper_version": "10.2.4", "version": "10.2.4"}, "oslo.policy": {"eq_version": "", "ge_version": "1.30.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.4", "version": "2.3.4"}, "oslo.serialization": {"eq_version": "", "ge_version": "2.18.0", "lt_version": "", "ne_version": ["2.19.1"], "upper_version": "2.29.3", "version": "2.29.3"}, "oslo.service": {"eq_version": "", "ge_version": "1.24.0", "lt_version": "", 
"ne_version": ["1.28.1"], "upper_version": "1.40.2", "version": "1.40.2"}, "oslo.utils": {"eq_version": "", "ge_version": "3.33.0", "lt_version": "", "ne_version": [], "upper_version": "3.41.6", "version": "3.41.6"}, "oslo.versionedobjects": {"eq_version": "", "ge_version": "1.31.2", "lt_version": "", "ne_version": [], "upper_version": "1.36.1", "version": "1.36.1"}, "osprofiler": {"eq_version": "", "ge_version": "1.4.0", "lt_version": "", "ne_version": [], "upper_version": "2.8.2", "version": "2.8.2"}, "setproctitle": {"eq_version": "", "ge_version": "1.1.10", "lt_version": "", "ne_version": [], "upper_version": "1.1.10", "version": "1.1.10"}, "WebOb": {"eq_version": "", "ge_version": "1.7.1", "lt_version": "", "ne_version": [], "upper_version": "1.8.5", "version": "1.8.5"}, "os-traits": {"eq_version": "", "ge_version": "0.9.0", "lt_version": "", "ne_version": [], "upper_version": "0.16.0", "version": "0.16.0"}, "hacking": {"eq_version": "", "ge_version": "1.1.0", "lt_version": "1.2.0", "ne_version": [], "upper_version": "", "version": "1.1.0"}, "bandit": {"eq_version": "", "ge_version": "1.1.0", "lt_version": "", "ne_version": ["1.6.0"], "upper_version": "", "version": "1.1.0"}, "coverage": {"eq_version": "", "ge_version": "4.0", "lt_version": "", "ne_version": ["4.4"], "upper_version": "4.5.4", "version": "4.5.4"}, "fixtures": {"eq_version": "", "ge_version": "3.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.0", "version": "3.0.0"}, "flake8-import-order": {"eq_version": "0.12", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "0.12"}, "python-subunit": {"eq_version": "", "ge_version": "1.0.0", "lt_version": "", "ne_version": [], "upper_version": "1.4.0", "version": "1.4.0"}, "oslotest": {"eq_version": "", "ge_version": "3.2.0", "lt_version": "", "ne_version": [], "upper_version": "3.8.1", "version": "3.8.1"}, "reno": {"eq_version": "", "ge_version": "2.5.0", "lt_version": "", "ne_version": [], "upper_version": "2.11.3", "version": "2.11.3"}, "stestr": {"eq_version": "", "ge_version": "1.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.5.1", "version": "2.5.1"}, "testresources": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.0.1", "version": "2.0.1"}, "testscenarios": {"eq_version": "", "ge_version": "0.4", "lt_version": "", "ne_version": [], "upper_version": "0.5.0", "version": "0.5.0"}, "testtools": {"eq_version": "", "ge_version": "2.2.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.0", "version": "2.3.0"}, "Sphinx": {"eq_version": "", "ge_version": "1.6.2", "lt_version": "", "ne_version": ["1.6.6", "1.6.7"], "upper_version": "2.2.0", "version": "2.2.0"}, "openstackdocstheme": {"eq_version": "", "ge_version": "1.18.1", "lt_version": "", "ne_version": [], "upper_version": "1.31.1", "version": "1.31.1"}, "os-api-ref": {"eq_version": "", "ge_version": "1.4.0", "lt_version": "", "ne_version": [], "upper_version": "1.6.2", "version": "1.6.2"}, "mock": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.5", "version": "3.0.5"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/neutron-tempest-plugin.json b/tools/oos/example/train_cached_file/neutron-tempest-plugin.json deleted file mode 100644 index 06162ba5192ea954c9cfd333cfdb258f4130dc92..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/neutron-tempest-plugin.json +++ /dev/null @@ -1 +0,0 @@ -{"name": 
"neutron-tempest-plugin", "version_dict": {"version": "0.6.0", "eq_version": "0.6.0", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": ""}, "deep": {"count": 0, "list": ["neutron-tempest-plugin"]}, "requires": {"pbr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": ["2.1.0"], "upper_version": "5.4.3", "version": "5.4.3"}, "neutron-lib": {"eq_version": "", "ge_version": "1.25.0", "lt_version": "", "ne_version": [], "upper_version": "1.29.1", "version": "1.29.1"}, "oslo.config": {"eq_version": "", "ge_version": "5.2.0", "lt_version": "", "ne_version": [], "upper_version": "6.11.3", "version": "6.11.3"}, "ipaddress": {"eq_version": "", "ge_version": "1.0.17", "lt_version": "", "ne_version": [], "upper_version": "1.0.22", "version": "1.0.22"}, "netaddr": {"eq_version": "", "ge_version": "0.7.18", "lt_version": "", "ne_version": [], "upper_version": "0.7.19", "version": "0.7.19"}, "oslo.log": {"eq_version": "", "ge_version": "3.36.0", "lt_version": "", "ne_version": [], "upper_version": "3.44.3", "version": "3.44.3"}, "oslo.serialization": {"eq_version": "", "ge_version": "2.18.0", "lt_version": "", "ne_version": ["2.19.1"], "upper_version": "2.29.3", "version": "2.29.3"}, "oslo.utils": {"eq_version": "", "ge_version": "3.33.0", "lt_version": "", "ne_version": [], "upper_version": "3.41.6", "version": "3.41.6"}, "paramiko": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.6.0", "version": "2.6.0"}, "six": {"eq_version": "", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "tempest": {"eq_version": "", "ge_version": "17.1.0", "lt_version": "", "ne_version": [], "upper_version": "22.1.0", "version": "22.1.0"}, "tenacity": {"eq_version": "", "ge_version": "3.2.1", "lt_version": "", "ne_version": [], "upper_version": "5.1.1", "version": "5.1.1"}, "ddt": {"eq_version": "", "ge_version": "1.0.1", "lt_version": "", "ne_version": [], "upper_version": "1.2.1", "version": "1.2.1"}, "testtools": {"eq_version": "", "ge_version": "2.2.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.0", "version": "2.3.0"}, "testscenarios": {"eq_version": "", "ge_version": "0.4", "lt_version": "", "ne_version": [], "upper_version": "0.5.0", "version": "0.5.0"}, "eventlet": {"eq_version": "", "ge_version": "0.18.2", "lt_version": "", "ne_version": ["0.18.3", "0.20.1"], "upper_version": "0.25.2", "version": "0.25.2"}, "debtcollector": {"eq_version": "", "ge_version": "1.2.0", "lt_version": "", "ne_version": [], "upper_version": "1.22.0", "version": "1.22.0"}, "hacking": {"eq_version": "", "ge_version": "0.12.0", "lt_version": "0.13", "ne_version": [], "upper_version": "", "version": "0.12.0"}, "coverage": {"eq_version": "", "ge_version": "4.0", "lt_version": "", "ne_version": ["4.4"], "upper_version": "4.5.4", "version": "4.5.4"}, "flake8-import-order": {"eq_version": "0.12", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "0.12"}, "python-subunit": {"eq_version": "", "ge_version": "1.0.0", "lt_version": "", "ne_version": [], "upper_version": "1.4.0", "version": "1.4.0"}, "Sphinx": {"eq_version": "", "ge_version": "1.6.2", "lt_version": "", "ne_version": ["1.6.6", "1.6.7", "2.1.0"], "upper_version": "2.2.0", "version": "2.2.0"}, "oslotest": {"eq_version": "", "ge_version": "3.2.0", "lt_version": "", "ne_version": [], "upper_version": "3.8.1", "version": "3.8.1"}, "stestr": {"eq_version": "", "ge_version": "1.0.0", 
"lt_version": "", "ne_version": [], "upper_version": "2.5.1", "version": "2.5.1"}, "openstackdocstheme": {"eq_version": "", "ge_version": "1.20.0", "lt_version": "", "ne_version": [], "upper_version": "1.31.1", "version": "1.31.1"}, "reno": {"eq_version": "", "ge_version": "2.5.0", "lt_version": "", "ne_version": [], "upper_version": "2.11.3", "version": "2.11.3"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/neutron.json b/tools/oos/example/train_cached_file/neutron.json deleted file mode 100644 index d5b4d5f1cf88a671e97746019d85690d99b862fb..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/neutron.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "neutron", "version_dict": {"version": "15.3.4", "eq_version": "15.3.4", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": ""}, "deep": {"count": 0, "list": ["neutron"]}, "requires": {"pbr": {"eq_version": "", "ge_version": "4.0.0", "lt_version": "", "ne_version": [], "upper_version": "5.4.3", "version": "5.4.3"}, "Paste": {"eq_version": "", "ge_version": "2.0.2", "lt_version": "", "ne_version": [], "upper_version": "3.2.0", "version": "3.2.0"}, "PasteDeploy": {"eq_version": "", "ge_version": "1.5.0", "lt_version": "", "ne_version": [], "upper_version": "2.0.1", "version": "2.0.1"}, "Routes": {"eq_version": "", "ge_version": "2.3.1", "lt_version": "", "ne_version": [], "upper_version": "2.4.1", "version": "2.4.1"}, "debtcollector": {"eq_version": "", "ge_version": "1.19.0", "lt_version": "", "ne_version": [], "upper_version": "1.22.0", "version": "1.22.0"}, "decorator": {"eq_version": "", "ge_version": "3.4.0", "lt_version": "", "ne_version": [], "upper_version": "4.4.0", "version": "4.4.0"}, "eventlet": {"eq_version": "", "ge_version": "0.18.2", "lt_version": "", "ne_version": ["0.18.3", "0.20.1"], "upper_version": "0.25.2", "version": "0.25.2"}, "pecan": {"eq_version": "", "ge_version": "1.3.2", "lt_version": "", "ne_version": [], "upper_version": "1.3.3", "version": "1.3.3"}, "httplib2": {"eq_version": "", "ge_version": "0.9.1", "lt_version": "", "ne_version": [], "upper_version": "0.13.1", "version": "0.13.1"}, "requests": {"eq_version": "", "ge_version": "2.14.2", "lt_version": "", "ne_version": [], "upper_version": "2.22.0", "version": "2.22.0"}, "Jinja2": {"eq_version": "", "ge_version": "2.10", "lt_version": "", "ne_version": [], "upper_version": "2.10.1", "version": "2.10.1"}, "keystonemiddleware": {"eq_version": "", "ge_version": "4.17.0", "lt_version": "", "ne_version": [], "upper_version": "7.0.1", "version": "7.0.1"}, "netaddr": {"eq_version": "", "ge_version": "0.7.18", "lt_version": "", "ne_version": [], "upper_version": "0.7.19", "version": "0.7.19"}, "netifaces": {"eq_version": "", "ge_version": "0.10.4", "lt_version": "", "ne_version": [], "upper_version": "0.10.9", "version": "0.10.9"}, "neutron-lib": {"eq_version": "", "ge_version": "1.29.1", "lt_version": "", "ne_version": [], "upper_version": "1.29.1", "version": "1.29.1"}, "python-neutronclient": {"eq_version": "", "ge_version": "6.7.0", "lt_version": "", "ne_version": [], "upper_version": "6.14.1", "version": "6.14.1"}, "tenacity": {"eq_version": "", "ge_version": "3.2.1", "lt_version": "", "ne_version": [], "upper_version": "5.1.1", "version": "5.1.1"}, "SQLAlchemy": {"eq_version": "", "ge_version": "1.2.0", "lt_version": "", "ne_version": [], "upper_version": "1.3.8", "version": "1.3.8"}, "WebOb": {"eq_version": "", "ge_version": "1.8.2", "lt_version": "", "ne_version": [], "upper_version": 
"1.8.5", "version": "1.8.5"}, "keystoneauth1": {"eq_version": "", "ge_version": "3.14.0", "lt_version": "", "ne_version": [], "upper_version": "3.17.4", "version": "3.17.4"}, "alembic": {"eq_version": "", "ge_version": "0.8.10", "lt_version": "", "ne_version": [], "upper_version": "1.1.0", "version": "1.1.0"}, "six": {"eq_version": "", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "stevedore": {"eq_version": "", "ge_version": "1.20.0", "lt_version": "", "ne_version": [], "upper_version": "1.31.0", "version": "1.31.0"}, "oslo.cache": {"eq_version": "", "ge_version": "1.26.0", "lt_version": "", "ne_version": [], "upper_version": "1.37.1", "version": "1.37.1"}, "oslo.concurrency": {"eq_version": "", "ge_version": "3.26.0", "lt_version": "", "ne_version": [], "upper_version": "3.30.1", "version": "3.30.1"}, "oslo.config": {"eq_version": "", "ge_version": "5.2.0", "lt_version": "", "ne_version": [], "upper_version": "6.11.3", "version": "6.11.3"}, "oslo.context": {"eq_version": "", "ge_version": "2.19.2", "lt_version": "", "ne_version": [], "upper_version": "2.23.1", "version": "2.23.1"}, "oslo.db": {"eq_version": "", "ge_version": "4.37.0", "lt_version": "", "ne_version": [], "upper_version": "5.0.2", "version": "5.0.2"}, "oslo.i18n": {"eq_version": "", "ge_version": "3.15.3", "lt_version": "", "ne_version": [], "upper_version": "3.24.0", "version": "3.24.0"}, "oslo.log": {"eq_version": "", "ge_version": "3.36.0", "lt_version": "", "ne_version": [], "upper_version": "3.44.3", "version": "3.44.3"}, "oslo.messaging": {"eq_version": "", "ge_version": "5.29.0", "lt_version": "", "ne_version": [], "upper_version": "10.2.4", "version": "10.2.4"}, "oslo.middleware": {"eq_version": "", "ge_version": "3.31.0", "lt_version": "", "ne_version": [], "upper_version": "3.38.1", "version": "3.38.1"}, "oslo.policy": {"eq_version": "", "ge_version": "1.30.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.4", "version": "2.3.4"}, "oslo.privsep": {"eq_version": "", "ge_version": "1.32.0", "lt_version": "", "ne_version": [], "upper_version": "1.33.5", "version": "1.33.5"}, "oslo.reports": {"eq_version": "", "ge_version": "1.18.0", "lt_version": "", "ne_version": [], "upper_version": "1.30.0", "version": "1.30.0"}, "oslo.rootwrap": {"eq_version": "", "ge_version": "5.8.0", "lt_version": "", "ne_version": [], "upper_version": "5.16.1", "version": "5.16.1"}, "oslo.serialization": {"eq_version": "", "ge_version": "2.18.0", "lt_version": "", "ne_version": ["2.19.1"], "upper_version": "2.29.3", "version": "2.29.3"}, "oslo.service": {"eq_version": "", "ge_version": "1.24.0", "lt_version": "", "ne_version": ["1.28.1"], "upper_version": "1.40.2", "version": "1.40.2"}, "oslo.upgradecheck": {"eq_version": "", "ge_version": "0.1.0", "lt_version": "", "ne_version": [], "upper_version": "0.3.2", "version": "0.3.2"}, "oslo.utils": {"eq_version": "", "ge_version": "3.33.0", "lt_version": "", "ne_version": [], "upper_version": "3.41.6", "version": "3.41.6"}, "oslo.versionedobjects": {"eq_version": "", "ge_version": "1.35.1", "lt_version": "", "ne_version": [], "upper_version": "1.36.1", "version": "1.36.1"}, "osprofiler": {"eq_version": "", "ge_version": "2.3.0", "lt_version": "", "ne_version": [], "upper_version": "2.8.2", "version": "2.8.2"}, "os-ken": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "0.4.1", "version": "0.4.1"}, "ovs": {"eq_version": "", "ge_version": "2.8.0", "lt_version": "", "ne_version": [], 
"upper_version": "2.11.8", "version": "2.11.8"}, "ovsdbapp": {"eq_version": "", "ge_version": "0.12.1", "lt_version": "", "ne_version": [], "upper_version": "0.17.5", "version": "0.17.5"}, "psutil": {"eq_version": "", "ge_version": "3.2.2", "lt_version": "", "ne_version": [], "upper_version": "5.6.3", "version": "5.6.3"}, "pyroute2": {"eq_version": "", "ge_version": "0.5.3", "lt_version": "", "ne_version": [], "upper_version": "0.5.6", "version": "0.5.6"}, "python-novaclient": {"eq_version": "", "ge_version": "9.1.0", "lt_version": "", "ne_version": [], "upper_version": "15.1.1", "version": "15.1.1"}, "openstacksdk": {"eq_version": "", "ge_version": "0.31.2", "lt_version": "", "ne_version": [], "upper_version": "0.36.5", "version": "0.36.5"}, "python-designateclient": {"eq_version": "", "ge_version": "2.7.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.0", "version": "3.0.0"}, "os-xenapi": {"eq_version": "", "ge_version": "0.3.1", "lt_version": "", "ne_version": [], "upper_version": "0.3.4", "version": "0.3.4"}, "os-vif": {"eq_version": "", "ge_version": "1.15.1", "lt_version": "", "ne_version": [], "upper_version": "1.17.0", "version": "1.17.0"}, "hacking": {"eq_version": "", "ge_version": "1.1.0", "lt_version": "1.2.0", "ne_version": [], "upper_version": "", "version": "1.1.0"}, "bandit": {"eq_version": "", "ge_version": "1.1.0", "lt_version": "", "ne_version": ["1.6.0"], "upper_version": "", "version": "1.1.0"}, "coverage": {"eq_version": "", "ge_version": "4.0", "lt_version": "", "ne_version": ["4.4"], "upper_version": "4.5.4", "version": "4.5.4"}, "fixtures": {"eq_version": "", "ge_version": "3.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.0", "version": "3.0.0"}, "flake8-import-order": {"eq_version": "0.12", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "0.12"}, "pycodestyle": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "2.6.0", "ne_version": [], "upper_version": "", "version": "2.0.0"}, "mock": {"eq_version": "", "ge_version": "3.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.5", "version": "3.0.5"}, "python-subunit": {"eq_version": "", "ge_version": "1.0.0", "lt_version": "", "ne_version": [], "upper_version": "1.4.0", "version": "1.4.0"}, "testtools": {"eq_version": "", "ge_version": "2.2.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.0", "version": "2.3.0"}, "testresources": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.0.1", "version": "2.0.1"}, "testscenarios": {"eq_version": "", "ge_version": "0.4", "lt_version": "", "ne_version": [], "upper_version": "0.5.0", "version": "0.5.0"}, "WebTest": {"eq_version": "", "ge_version": "2.0.27", "lt_version": "", "ne_version": [], "upper_version": "2.0.33", "version": "2.0.33"}, "oslotest": {"eq_version": "", "ge_version": "3.2.0", "lt_version": "", "ne_version": [], "upper_version": "3.8.1", "version": "3.8.1"}, "stestr": {"eq_version": "", "ge_version": "1.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.5.1", "version": "2.5.1"}, "reno": {"eq_version": "", "ge_version": "2.5.0", "lt_version": "", "ne_version": [], "upper_version": "2.11.3", "version": "2.11.3"}, "ddt": {"eq_version": "", "ge_version": "1.0.1", "lt_version": "", "ne_version": [], "upper_version": "1.2.1", "version": "1.2.1"}, "astroid": {"eq_version": "2.1.0", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "2.1.0"}, "pylint": {"eq_version": "2.2.0", 
"ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "2.2.0"}, "isort": {"eq_version": "4.3.21", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "4.3.21"}, "PyMySQL": {"eq_version": "", "ge_version": "0.7.6", "lt_version": "", "ne_version": [], "upper_version": "0.9.3", "version": "0.9.3"}, "bashate": {"eq_version": "", "ge_version": "0.5.1", "lt_version": "", "ne_version": [], "upper_version": "0.6.0", "version": "0.6.0"}, "Sphinx": {"eq_version": "", "ge_version": "1.6.2", "lt_version": "", "ne_version": ["1.6.6", "1.6.7", "2.1.0"], "upper_version": "2.2.0", "version": "2.2.0"}, "openstackdocstheme": {"eq_version": "", "ge_version": "1.30.0", "lt_version": "", "ne_version": [], "upper_version": "1.31.1", "version": "1.31.1"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/nodeenv.json b/tools/oos/example/train_cached_file/nodeenv.json deleted file mode 100644 index 7d0a0ea2e0822486c76c2f58212ba72b09928736..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/nodeenv.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "nodeenv", "version_dict": {"version": "1.3.3", "eq_version": "", "ge_version": "0.9.4", "lt_version": "", "ne_version": [], "upper_version": "1.3.3"}, "deep": {"count": 1, "list": ["horizon", "nodeenv"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/nova.json b/tools/oos/example/train_cached_file/nova.json deleted file mode 100644 index 679130387c5b72195f4946dde13fa332c31c04da..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/nova.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "nova", "version_dict": {"version": "20.6.1", "eq_version": "20.6.1", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": ""}, "deep": {"count": 0, "list": ["nova"]}, "requires": {"pbr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": ["2.1.0"], "upper_version": "5.4.3", "version": "5.4.3"}, "SQLAlchemy": {"eq_version": "", "ge_version": "1.2.19", "lt_version": "", "ne_version": [], "upper_version": "1.3.8", "version": "1.3.8"}, "decorator": {"eq_version": "", "ge_version": "3.4.0", "lt_version": "", "ne_version": [], "upper_version": "4.4.0", "version": "4.4.0"}, "eventlet": {"eq_version": "", "ge_version": "0.20.0", "lt_version": "", "ne_version": ["0.20.1"], "upper_version": "0.25.2", "version": "0.25.2"}, "Jinja2": {"eq_version": "", "ge_version": "2.10", "lt_version": "", "ne_version": [], "upper_version": "2.10.1", "version": "2.10.1"}, "keystonemiddleware": {"eq_version": "", "ge_version": "4.20.0", "lt_version": "", "ne_version": [], "upper_version": "7.0.1", "version": "7.0.1"}, "lxml": {"eq_version": "", "ge_version": "3.4.1", "lt_version": "", "ne_version": ["3.7.0"], "upper_version": "4.4.1", "version": "4.4.1"}, "Routes": {"eq_version": "", "ge_version": "2.3.1", "lt_version": "", "ne_version": [], "upper_version": "2.4.1", "version": "2.4.1"}, "cryptography": {"eq_version": "", "ge_version": "2.7", "lt_version": "", "ne_version": [], "upper_version": "2.8", "version": "2.8"}, "WebOb": {"eq_version": "", "ge_version": "1.8.2", "lt_version": "", "ne_version": [], "upper_version": "1.8.5", "version": "1.8.5"}, "greenlet": {"eq_version": "", "ge_version": "0.4.10", "lt_version": "", "ne_version": ["0.4.14"], "upper_version": "0.4.15", "version": "0.4.15"}, "PasteDeploy": {"eq_version": "", "ge_version": "1.5.0", "lt_version": "", "ne_version": [], 
"upper_version": "2.0.1", "version": "2.0.1"}, "Paste": {"eq_version": "", "ge_version": "2.0.2", "lt_version": "", "ne_version": [], "upper_version": "3.2.0", "version": "3.2.0"}, "PrettyTable": {"eq_version": "", "ge_version": "0.7.1", "lt_version": "0.8", "ne_version": [], "upper_version": "", "version": "0.7.1"}, "sqlalchemy-migrate": {"eq_version": "", "ge_version": "0.11.0", "lt_version": "", "ne_version": [], "upper_version": "0.12.0", "version": "0.12.0"}, "netaddr": {"eq_version": "", "ge_version": "0.7.18", "lt_version": "", "ne_version": [], "upper_version": "0.7.19", "version": "0.7.19"}, "netifaces": {"eq_version": "", "ge_version": "0.10.4", "lt_version": "", "ne_version": [], "upper_version": "0.10.9", "version": "0.10.9"}, "paramiko": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.6.0", "version": "2.6.0"}, "Babel": {"eq_version": "", "ge_version": "2.3.4", "lt_version": "", "ne_version": ["2.4.0"], "upper_version": "2.7.0", "version": "2.7.0"}, "iso8601": {"eq_version": "", "ge_version": "0.1.11", "lt_version": "", "ne_version": [], "upper_version": "0.1.12", "version": "0.1.12"}, "jsonschema": {"eq_version": "", "ge_version": "2.6.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.2", "version": "3.0.2"}, "python-cinderclient": {"eq_version": "", "ge_version": "3.3.0", "lt_version": "", "ne_version": ["4.0.0"], "upper_version": "5.0.2", "version": "5.0.2"}, "keystoneauth1": {"eq_version": "", "ge_version": "3.16.0", "lt_version": "", "ne_version": [], "upper_version": "3.17.4", "version": "3.17.4"}, "python-neutronclient": {"eq_version": "", "ge_version": "6.7.0", "lt_version": "", "ne_version": [], "upper_version": "6.14.1", "version": "6.14.1"}, "python-glanceclient": {"eq_version": "", "ge_version": "2.8.0", "lt_version": "", "ne_version": [], "upper_version": "2.17.1", "version": "2.17.1"}, "requests": {"eq_version": "", "ge_version": "2.14.2", "lt_version": "", "ne_version": [], "upper_version": "2.22.0", "version": "2.22.0"}, "six": {"eq_version": "", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "stevedore": {"eq_version": "", "ge_version": "1.20.0", "lt_version": "", "ne_version": [], "upper_version": "1.31.0", "version": "1.31.0"}, "websockify": {"eq_version": "", "ge_version": "0.8.0", "lt_version": "", "ne_version": [], "upper_version": "0.9.0", "version": "0.9.0"}, "oslo.cache": {"eq_version": "", "ge_version": "1.26.0", "lt_version": "", "ne_version": [], "upper_version": "1.37.1", "version": "1.37.1"}, "oslo.concurrency": {"eq_version": "", "ge_version": "3.26.0", "lt_version": "", "ne_version": [], "upper_version": "3.30.1", "version": "3.30.1"}, "oslo.config": {"eq_version": "", "ge_version": "6.1.0", "lt_version": "", "ne_version": [], "upper_version": "6.11.3", "version": "6.11.3"}, "oslo.context": {"eq_version": "", "ge_version": "2.19.2", "lt_version": "", "ne_version": [], "upper_version": "2.23.1", "version": "2.23.1"}, "oslo.log": {"eq_version": "", "ge_version": "3.36.0", "lt_version": "", "ne_version": [], "upper_version": "3.44.3", "version": "3.44.3"}, "oslo.reports": {"eq_version": "", "ge_version": "1.18.0", "lt_version": "", "ne_version": [], "upper_version": "1.30.0", "version": "1.30.0"}, "oslo.serialization": {"eq_version": "", "ge_version": "2.21.1", "lt_version": "", "ne_version": ["2.19.1"], "upper_version": "2.29.3", "version": "2.29.3"}, "oslo.upgradecheck": {"eq_version": "", "ge_version": "0.1.1", 
"lt_version": "", "ne_version": [], "upper_version": "0.3.2", "version": "0.3.2"}, "oslo.utils": {"eq_version": "", "ge_version": "3.40.2", "lt_version": "", "ne_version": [], "upper_version": "3.41.6", "version": "3.41.6"}, "oslo.db": {"eq_version": "", "ge_version": "4.44.0", "lt_version": "", "ne_version": [], "upper_version": "5.0.2", "version": "5.0.2"}, "oslo.rootwrap": {"eq_version": "", "ge_version": "5.8.0", "lt_version": "", "ne_version": [], "upper_version": "5.16.1", "version": "5.16.1"}, "oslo.messaging": {"eq_version": "", "ge_version": "7.0.0", "lt_version": "", "ne_version": [], "upper_version": "10.2.4", "version": "10.2.4"}, "oslo.policy": {"eq_version": "", "ge_version": "1.35.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.4", "version": "2.3.4"}, "oslo.privsep": {"eq_version": "", "ge_version": "1.33.2", "lt_version": "", "ne_version": [], "upper_version": "1.33.5", "version": "1.33.5"}, "oslo.i18n": {"eq_version": "", "ge_version": "3.15.3", "lt_version": "", "ne_version": [], "upper_version": "3.24.0", "version": "3.24.0"}, "oslo.service": {"eq_version": "", "ge_version": "1.40.1", "lt_version": "", "ne_version": [], "upper_version": "1.40.2", "version": "1.40.2"}, "rfc3986": {"eq_version": "", "ge_version": "1.1.0", "lt_version": "", "ne_version": [], "upper_version": "1.3.2", "version": "1.3.2"}, "oslo.middleware": {"eq_version": "", "ge_version": "3.31.0", "lt_version": "", "ne_version": [], "upper_version": "3.38.1", "version": "3.38.1"}, "psutil": {"eq_version": "", "ge_version": "3.2.2", "lt_version": "", "ne_version": [], "upper_version": "5.6.3", "version": "5.6.3"}, "oslo.versionedobjects": {"eq_version": "", "ge_version": "1.35.0", "lt_version": "", "ne_version": [], "upper_version": "1.36.1", "version": "1.36.1"}, "os-brick": {"eq_version": "", "ge_version": "2.6.1", "lt_version": "", "ne_version": [], "upper_version": "2.10.7", "version": "2.10.7"}, "os-resource-classes": {"eq_version": "", "ge_version": "0.4.0", "lt_version": "", "ne_version": [], "upper_version": "0.5.0", "version": "0.5.0"}, "os-traits": {"eq_version": "", "ge_version": "0.16.0", "lt_version": "", "ne_version": [], "upper_version": "0.16.0", "version": "0.16.0"}, "os-vif": {"eq_version": "", "ge_version": "1.14.0", "lt_version": "", "ne_version": [], "upper_version": "1.17.0", "version": "1.17.0"}, "os-win": {"eq_version": "", "ge_version": "3.0.0", "lt_version": "", "ne_version": [], "upper_version": "4.3.3", "version": "4.3.3"}, "castellan": {"eq_version": "", "ge_version": "0.16.0", "lt_version": "", "ne_version": [], "upper_version": "1.3.4", "version": "1.3.4"}, "microversion-parse": {"eq_version": "", "ge_version": "0.2.1", "lt_version": "", "ne_version": [], "upper_version": "0.2.1", "version": "0.2.1"}, "os-xenapi": {"eq_version": "", "ge_version": "0.3.3", "lt_version": "", "ne_version": [], "upper_version": "0.3.4", "version": "0.3.4"}, "tooz": {"eq_version": "", "ge_version": "1.58.0", "lt_version": "", "ne_version": [], "upper_version": "1.66.3", "version": "1.66.3"}, "cursive": {"eq_version": "", "ge_version": "0.2.1", "lt_version": "", "ne_version": [], "upper_version": "0.2.2", "version": "0.2.2"}, "pypowervm": {"eq_version": "", "ge_version": "1.1.15", "lt_version": "", "ne_version": [], "upper_version": "1.1.20", "version": "1.1.20"}, "retrying": {"eq_version": "", "ge_version": "1.3.3", "lt_version": "", "ne_version": ["1.3.0"], "upper_version": "1.3.3", "version": "1.3.3"}, "os-service-types": {"eq_version": "", "ge_version": "1.7.0", "lt_version": "", 
"ne_version": [], "upper_version": "1.7.0", "version": "1.7.0"}, "taskflow": {"eq_version": "", "ge_version": "2.16.0", "lt_version": "", "ne_version": [], "upper_version": "3.7.1", "version": "3.7.1"}, "python-dateutil": {"eq_version": "", "ge_version": "2.5.3", "lt_version": "", "ne_version": [], "upper_version": "2.8.0", "version": "2.8.0"}, "zVMCloudConnector": {"eq_version": "", "ge_version": "1.3.0", "lt_version": "", "ne_version": [], "upper_version": "1.4.1", "version": "1.4.1"}, "futurist": {"eq_version": "", "ge_version": "1.8.0", "lt_version": "", "ne_version": [], "upper_version": "1.9.0", "version": "1.9.0"}, "openstacksdk": {"eq_version": "", "ge_version": "0.35.0", "lt_version": "", "ne_version": [], "upper_version": "0.36.5", "version": "0.36.5"}, "hacking": {"eq_version": "", "ge_version": "1.1.0", "lt_version": "1.2.0", "ne_version": [], "upper_version": "", "version": "1.1.0"}, "contextlib2": {"eq_version": "", "ge_version": "0.5.5", "lt_version": "", "ne_version": [], "upper_version": "0.5.5", "version": "0.5.5"}, "coverage": {"eq_version": "", "ge_version": "4.0", "lt_version": "", "ne_version": ["4.4"], "upper_version": "4.5.4", "version": "4.5.4"}, "ddt": {"eq_version": "", "ge_version": "1.0.1", "lt_version": "", "ne_version": [], "upper_version": "1.2.1", "version": "1.2.1"}, "fixtures": {"eq_version": "", "ge_version": "3.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.0", "version": "3.0.0"}, "mock": {"eq_version": "", "ge_version": "3.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.5", "version": "3.0.5"}, "mox3": {"eq_version": "", "ge_version": "0.20.0", "lt_version": "", "ne_version": [], "upper_version": "0.28.0", "version": "0.28.0"}, "psycopg2": {"eq_version": "", "ge_version": "2.7", "lt_version": "", "ne_version": [], "upper_version": "2.8.3", "version": "2.8.3"}, "PyMySQL": {"eq_version": "", "ge_version": "0.7.6", "lt_version": "", "ne_version": [], "upper_version": "0.9.3", "version": "0.9.3"}, "pycodestyle": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "2.6.0", "ne_version": [], "upper_version": "", "version": "2.0.0"}, "python-barbicanclient": {"eq_version": "", "ge_version": "4.5.2", "lt_version": "", "ne_version": [], "upper_version": "4.9.0", "version": "4.9.0"}, "python-ironicclient": {"eq_version": "", "ge_version": "2.7.0", "lt_version": "", "ne_version": ["2.7.1"], "upper_version": "3.1.2", "version": "3.1.2"}, "requests-mock": {"eq_version": "", "ge_version": "1.2.0", "lt_version": "", "ne_version": [], "upper_version": "1.6.0", "version": "1.6.0"}, "oslotest": {"eq_version": "", "ge_version": "3.8.0", "lt_version": "", "ne_version": [], "upper_version": "3.8.1", "version": "3.8.1"}, "stestr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.5.1", "version": "2.5.1"}, "osprofiler": {"eq_version": "", "ge_version": "1.4.0", "lt_version": "", "ne_version": [], "upper_version": "2.8.2", "version": "2.8.2"}, "testresources": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.0.1", "version": "2.0.1"}, "testscenarios": {"eq_version": "", "ge_version": "0.4", "lt_version": "", "ne_version": [], "upper_version": "0.5.0", "version": "0.5.0"}, "testtools": {"eq_version": "", "ge_version": "2.2.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.0", "version": "2.3.0"}, "bandit": {"eq_version": "", "ge_version": "1.1.0", "lt_version": "", "ne_version": [], "upper_version": "", "version": "1.1.0"}, 
"gabbi": {"eq_version": "", "ge_version": "1.35.0", "lt_version": "", "ne_version": [], "upper_version": "1.49.0", "version": "1.49.0"}, "wsgi-intercept": {"eq_version": "", "ge_version": "1.7.0", "lt_version": "", "ne_version": [], "upper_version": "1.8.1", "version": "1.8.1"}, "oslo.vmware": {"eq_version": "", "ge_version": "2.17.0", "lt_version": "", "ne_version": [], "upper_version": "2.34.1", "version": "2.34.1"}, "Sphinx": {"eq_version": "", "ge_version": "1.8.0", "lt_version": "", "ne_version": ["2.1.0"], "upper_version": "2.2.0", "version": "2.2.0"}, "sphinxcontrib-actdiag": {"eq_version": "", "ge_version": "0.8.5", "lt_version": "", "ne_version": [], "upper_version": "0.8.5", "version": "0.8.5"}, "sphinxcontrib-seqdiag": {"eq_version": "", "ge_version": "0.8.4", "lt_version": "", "ne_version": [], "upper_version": "0.8.5", "version": "0.8.5"}, "sphinxcontrib-svg2pdfconverter": {"eq_version": "", "ge_version": "0.1.0", "lt_version": "", "ne_version": [], "upper_version": "0.1.0", "version": "0.1.0"}, "sphinx-feature-classification": {"eq_version": "", "ge_version": "0.2.0", "lt_version": "", "ne_version": [], "upper_version": "0.4.1", "version": "0.4.1"}, "os-api-ref": {"eq_version": "", "ge_version": "1.4.0", "lt_version": "", "ne_version": [], "upper_version": "1.6.2", "version": "1.6.2"}, "openstackdocstheme": {"eq_version": "", "ge_version": "1.30.0", "lt_version": "", "ne_version": [], "upper_version": "1.31.1", "version": "1.31.1"}, "reno": {"eq_version": "", "ge_version": "2.5.0", "lt_version": "", "ne_version": [], "upper_version": "2.11.3", "version": "2.11.3"}, "whereto": {"eq_version": "", "ge_version": "0.3.0", "lt_version": "", "ne_version": [], "upper_version": "0.4.0", "version": "0.4.0"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/numpy.json b/tools/oos/example/train_cached_file/numpy.json deleted file mode 100644 index a0b50657f24102d9dfeb78a90c63b7c05f480c44..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/numpy.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "numpy", "version_dict": {"version": "1.17.2", "eq_version": "", "ge_version": "1.9.0", "lt_version": "", "ne_version": [], "upper_version": "1.17.2"}, "deep": {"count": 1, "list": ["gnocchi", "numpy"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/oauth2client.json b/tools/oos/example/train_cached_file/oauth2client.json deleted file mode 100644 index f494cea725622e196309f2963b952057a98190d1..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/oauth2client.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "oauth2client", "version_dict": {"version": "4.1.3", "eq_version": "", "ge_version": "1.5.0", "lt_version": "", "ne_version": ["4.0.0"], "upper_version": "4.1.3"}, "deep": {"count": 1, "list": ["cinder", "oauth2client"]}, "requires": {"httplib2": {"eq_version": "", "ge_version": "0.9.1", "lt_version": "", "ne_version": [], "upper_version": "0.13.1", "version": "0.13.1"}, "pyasn1": {"eq_version": "", "ge_version": "0.1.7", "lt_version": "", "ne_version": [], "upper_version": "0.4.7", "version": "0.4.7"}, "pyasn1-modules": {"eq_version": "", "ge_version": "0.0.5", "lt_version": "", "ne_version": [], "upper_version": "0.2.6", "version": "0.2.6"}, "rsa": {"eq_version": "", "ge_version": "3.1.4", "lt_version": "", "ne_version": [], "upper_version": "4.0", "version": "4.0"}, "six": {"eq_version": "", "ge_version": "1.6.1", "lt_version": "", "ne_version": [], "upper_version": 
"1.12.0", "version": "1.12.0"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/oauthlib.json b/tools/oos/example/train_cached_file/oauthlib.json deleted file mode 100644 index 5c516330c7476393a9d3b6b9ddc768b67238dac9..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/oauthlib.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "oauthlib", "version_dict": {"version": "3.1.0", "eq_version": "", "ge_version": "0.6.2", "lt_version": "", "ne_version": [], "upper_version": "3.1.0"}, "deep": {"count": 15, "list": ["aodh", "futurist", "hacking", "oslosphinx", "openstackdocstheme", "os-api-ref", "stestr", "cliff", "stevedore", "bandit", "oslotest", "os-client-config", "openstacksdk", "os-service-types", "keystoneauth1", "oauthlib"]}, "requires": {"cryptography": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "2.8", "version": "2.8"}, "blinker": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}, "PyJWT": {"eq_version": "", "ge_version": "1.0.0", "lt_version": "", "ne_version": [], "upper_version": "1.7.1", "version": "1.7.1"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/openstack-cyborg.json b/tools/oos/example/train_cached_file/openstack-cyborg.json deleted file mode 100644 index c4b84220895741c3e36d8c9e163b7f0621edf444..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/openstack-cyborg.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "openstack-cyborg", "version_dict": {"version": "3.0.1", "eq_version": "3.0.1", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": ""}, "deep": {"count": 0, "list": ["openstack-cyborg"]}, "requires": {"SQLAlchemy": {"eq_version": "", "ge_version": "0.9.0", "lt_version": "", "ne_version": ["1.1.5", "1.1.6", "1.1.7", "1.1.8"], "upper_version": "1.3.8", "version": "1.3.8"}, "WSME": {"eq_version": "", "ge_version": "0.8.0", "lt_version": "", "ne_version": [], "upper_version": "0.9.3", "version": "0.9.3"}, "alembic": {"eq_version": "", "ge_version": "0.8.10", "lt_version": "", "ne_version": [], "upper_version": "1.1.0", "version": "1.1.0"}, "eventlet": {"eq_version": "", "ge_version": "0.12.0", "lt_version": "", "ne_version": ["0.18.3", "0.20.1", "0.21.0", "0.23.0", "0.25.0"], "upper_version": "0.25.2", "version": "0.25.2"}, "jsonpatch": {"eq_version": "", "ge_version": "1.16", "lt_version": "", "ne_version": ["1.20"], "upper_version": "1.24", "version": "1.24"}, "keystonemiddleware": {"eq_version": "", "ge_version": "4.17.0", "lt_version": "", "ne_version": [], "upper_version": "7.0.1", "version": "7.0.1"}, "mock": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.5", "version": "3.0.5"}, "os-resource-classes": {"eq_version": "", "ge_version": "0.5.0", "lt_version": "", "ne_version": [], "upper_version": "0.5.0", "version": "0.5.0"}, "oslo.concurrency": {"eq_version": "", "ge_version": "3.26.0", "lt_version": "", "ne_version": [], "upper_version": "3.30.1", "version": "3.30.1"}, "oslo.config": {"eq_version": "", "ge_version": "1.1.0", "lt_version": "", "ne_version": ["4.3.0", "4.4.0"], "upper_version": "6.11.3", "version": "6.11.3"}, "oslo.context": {"eq_version": "", "ge_version": "2.9.0", "lt_version": "", "ne_version": [], "upper_version": "2.23.1", "version": "2.23.1"}, "oslo.db": {"eq_version": "", "ge_version": "4.1.0", "lt_version": "", "ne_version": [], "upper_version": 
"5.0.2", "version": "5.0.2"}, "oslo.i18n": {"eq_version": "", "ge_version": "1.5.0", "lt_version": "", "ne_version": [], "upper_version": "3.24.0", "version": "3.24.0"}, "oslo.log": {"eq_version": "", "ge_version": "1.14.0", "lt_version": "", "ne_version": [], "upper_version": "3.44.3", "version": "3.44.3"}, "oslo.messaging": {"eq_version": "", "ge_version": "5.29.0", "lt_version": "", "ne_version": [], "upper_version": "10.2.4", "version": "10.2.4"}, "oslo.policy": {"eq_version": "", "ge_version": "0.5.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.4", "version": "2.3.4"}, "oslo.privsep": {"eq_version": "", "ge_version": "1.32.0", "lt_version": "", "ne_version": [], "upper_version": "1.33.5", "version": "1.33.5"}, "oslo.service": {"eq_version": "", "ge_version": "1.0.0", "lt_version": "", "ne_version": ["1.28.1"], "upper_version": "1.40.2", "version": "1.40.2"}, "oslo.upgradecheck": {"eq_version": "", "ge_version": "0.1.0", "lt_version": "", "ne_version": [], "upper_version": "0.3.2", "version": "0.3.2"}, "oslo.utils": {"eq_version": "", "ge_version": "3.33.0", "lt_version": "", "ne_version": [], "upper_version": "3.41.6", "version": "3.41.6"}, "oslo.versionedobjects": {"eq_version": "", "ge_version": "1.31.2", "lt_version": "", "ne_version": [], "upper_version": "1.36.1", "version": "1.36.1"}, "pbr": {"eq_version": "", "ge_version": "0.11", "lt_version": "", "ne_version": ["2.1.0"], "upper_version": "5.4.3", "version": "5.4.3"}, "pecan": {"eq_version": "", "ge_version": "1.0.0", "lt_version": "", "ne_version": ["1.0.2", "1.0.3", "1.0.4", "1.2"], "upper_version": "1.3.3", "version": "1.3.3"}, "psutil": {"eq_version": "", "ge_version": "3.2.2", "lt_version": "", "ne_version": [], "upper_version": "5.6.3", "version": "5.6.3"}, "python-glanceclient": {"eq_version": "", "ge_version": "2.3.0", "lt_version": "", "ne_version": [], "upper_version": "2.17.1", "version": "2.17.1"}, "six": {"eq_version": "", "ge_version": "1.8.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "stevedore": {"eq_version": "", "ge_version": "1.5.0", "lt_version": "", "ne_version": [], "upper_version": "1.31.0", "version": "1.31.0"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/openstack-heat.json b/tools/oos/example/train_cached_file/openstack-heat.json deleted file mode 100644 index e6f53762b8401e765901b00cb24aab7b2be6b69e..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/openstack-heat.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "openstack-heat", "version_dict": {"version": "13.1.0", "eq_version": "13.1.0", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": ""}, "deep": {"count": 0, "list": ["openstack-heat"]}, "requires": {"Babel": {"eq_version": "", "ge_version": "2.3.4", "lt_version": "", "ne_version": ["2.4.0"], "upper_version": "2.7.0", "version": "2.7.0"}, "PasteDeploy": {"eq_version": "", "ge_version": "1.5.0", "lt_version": "", "ne_version": [], "upper_version": "2.0.1", "version": "2.0.1"}, "PyYAML": {"eq_version": "", "ge_version": "3.12", "lt_version": "", "ne_version": [], "upper_version": "5.1.2", "version": "5.1.2"}, "Routes": {"eq_version": "", "ge_version": "2.3.1", "lt_version": "", "ne_version": [], "upper_version": "2.4.1", "version": "2.4.1"}, "SQLAlchemy": {"eq_version": "", "ge_version": "1.0.10", "lt_version": "", "ne_version": ["1.1.5", "1.1.6", "1.1.7", "1.1.8"], "upper_version": "1.3.8", "version": "1.3.8"}, "WebOb": {"eq_version": "", "ge_version": 
"1.7.1", "lt_version": "", "ne_version": [], "upper_version": "1.8.5", "version": "1.8.5"}, "aodhclient": {"eq_version": "", "ge_version": "0.9.0", "lt_version": "", "ne_version": [], "upper_version": "1.3.0", "version": "1.3.0"}, "croniter": {"eq_version": "", "ge_version": "0.3.4", "lt_version": "", "ne_version": [], "upper_version": "0.3.30", "version": "0.3.30"}, "cryptography": {"eq_version": "", "ge_version": "2.1", "lt_version": "", "ne_version": [], "upper_version": "2.8", "version": "2.8"}, "eventlet": {"eq_version": "", "ge_version": "0.18.2", "lt_version": "", "ne_version": ["0.18.3", "0.20.1", "0.21.0", "0.23.0", "0.25.0"], "upper_version": "0.25.2", "version": "0.25.2"}, "keystoneauth1": {"eq_version": "", "ge_version": "3.4.0", "lt_version": "", "ne_version": [], "upper_version": "3.17.4", "version": "3.17.4"}, "keystonemiddleware": {"eq_version": "", "ge_version": "4.17.0", "lt_version": "", "ne_version": [], "upper_version": "7.0.1", "version": "7.0.1"}, "lxml": {"eq_version": "", "ge_version": "3.4.1", "lt_version": "", "ne_version": ["3.7.0"], "upper_version": "4.4.1", "version": "4.4.1"}, "netaddr": {"eq_version": "", "ge_version": "0.7.18", "lt_version": "", "ne_version": [], "upper_version": "0.7.19", "version": "0.7.19"}, "neutron-lib": {"eq_version": "", "ge_version": "1.14.0", "lt_version": "", "ne_version": [], "upper_version": "1.29.1", "version": "1.29.1"}, "openstacksdk": {"eq_version": "", "ge_version": "0.11.2", "lt_version": "", "ne_version": [], "upper_version": "0.36.5", "version": "0.36.5"}, "oslo.cache": {"eq_version": "", "ge_version": "1.26.0", "lt_version": "", "ne_version": [], "upper_version": "1.37.1", "version": "1.37.1"}, "oslo.concurrency": {"eq_version": "", "ge_version": "3.26.0", "lt_version": "", "ne_version": [], "upper_version": "3.30.1", "version": "3.30.1"}, "oslo.config": {"eq_version": "", "ge_version": "5.2.0", "lt_version": "", "ne_version": [], "upper_version": "6.11.3", "version": "6.11.3"}, "oslo.context": {"eq_version": "", "ge_version": "2.19.2", "lt_version": "", "ne_version": [], "upper_version": "2.23.1", "version": "2.23.1"}, "oslo.db": {"eq_version": "", "ge_version": "4.27.0", "lt_version": "", "ne_version": [], "upper_version": "5.0.2", "version": "5.0.2"}, "oslo.i18n": {"eq_version": "", "ge_version": "3.15.3", "lt_version": "", "ne_version": [], "upper_version": "3.24.0", "version": "3.24.0"}, "oslo.log": {"eq_version": "", "ge_version": "3.36.0", "lt_version": "", "ne_version": [], "upper_version": "3.44.3", "version": "3.44.3"}, "oslo.messaging": {"eq_version": "", "ge_version": "5.29.0", "lt_version": "", "ne_version": [], "upper_version": "10.2.4", "version": "10.2.4"}, "oslo.middleware": {"eq_version": "", "ge_version": "3.31.0", "lt_version": "", "ne_version": [], "upper_version": "3.38.1", "version": "3.38.1"}, "oslo.policy": {"eq_version": "", "ge_version": "1.30.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.4", "version": "2.3.4"}, "oslo.reports": {"eq_version": "", "ge_version": "1.18.0", "lt_version": "", "ne_version": [], "upper_version": "1.30.0", "version": "1.30.0"}, "oslo.serialization": {"eq_version": "", "ge_version": "2.18.0", "lt_version": "", "ne_version": ["2.19.1"], "upper_version": "2.29.3", "version": "2.29.3"}, "oslo.service": {"eq_version": "", "ge_version": "1.24.0", "lt_version": "", "ne_version": ["1.28.1"], "upper_version": "1.40.2", "version": "1.40.2"}, "oslo.upgradecheck": {"eq_version": "", "ge_version": "0.1.0", "lt_version": "", "ne_version": [], "upper_version": 
"0.3.2", "version": "0.3.2"}, "oslo.utils": {"eq_version": "", "ge_version": "3.37.0", "lt_version": "", "ne_version": [], "upper_version": "3.41.6", "version": "3.41.6"}, "oslo.versionedobjects": {"eq_version": "", "ge_version": "1.31.2", "lt_version": "", "ne_version": [], "upper_version": "1.36.1", "version": "1.36.1"}, "osprofiler": {"eq_version": "", "ge_version": "1.4.0", "lt_version": "", "ne_version": [], "upper_version": "2.8.2", "version": "2.8.2"}, "pbr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": ["2.1.0"], "upper_version": "5.4.3", "version": "5.4.3"}, "python-barbicanclient": {"eq_version": "", "ge_version": "4.5.2", "lt_version": "", "ne_version": [], "upper_version": "4.9.0", "version": "4.9.0"}, "python-blazarclient": {"eq_version": "", "ge_version": "1.0.1", "lt_version": "", "ne_version": [], "upper_version": "2.2.1", "version": "2.2.1"}, "python-cinderclient": {"eq_version": "", "ge_version": "3.3.0", "lt_version": "", "ne_version": [], "upper_version": "5.0.2", "version": "5.0.2"}, "python-designateclient": {"eq_version": "", "ge_version": "2.7.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.0", "version": "3.0.0"}, "python-glanceclient": {"eq_version": "", "ge_version": "2.8.0", "lt_version": "", "ne_version": [], "upper_version": "2.17.1", "version": "2.17.1"}, "python-heatclient": {"eq_version": "", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "1.18.1", "version": "1.18.1"}, "python-keystoneclient": {"eq_version": "", "ge_version": "3.8.0", "lt_version": "", "ne_version": [], "upper_version": "3.21.0", "version": "3.21.0"}, "python-magnumclient": {"eq_version": "", "ge_version": "2.3.0", "lt_version": "", "ne_version": [], "upper_version": "2.16.0", "version": "2.16.0"}, "python-manilaclient": {"eq_version": "", "ge_version": "1.16.0", "lt_version": "", "ne_version": [], "upper_version": "1.29.0", "version": "1.29.0"}, "python-mistralclient": {"eq_version": "", "ge_version": "3.1.0", "lt_version": "", "ne_version": ["3.2.0"], "upper_version": "3.10.0", "version": "3.10.0"}, "python-monascaclient": {"eq_version": "", "ge_version": "1.12.0", "lt_version": "", "ne_version": [], "upper_version": "1.16.0", "version": "1.16.0"}, "python-neutronclient": {"eq_version": "", "ge_version": "6.7.0", "lt_version": "", "ne_version": [], "upper_version": "6.14.1", "version": "6.14.1"}, "python-novaclient": {"eq_version": "", "ge_version": "9.1.0", "lt_version": "", "ne_version": [], "upper_version": "15.1.1", "version": "15.1.1"}, "python-octaviaclient": {"eq_version": "", "ge_version": "1.3.0", "lt_version": "", "ne_version": [], "upper_version": "1.10.1", "version": "1.10.1"}, "python-openstackclient": {"eq_version": "", "ge_version": "3.12.0", "lt_version": "", "ne_version": [], "upper_version": "4.0.2", "version": "4.0.2"}, "python-saharaclient": {"eq_version": "", "ge_version": "1.4.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.0", "version": "2.3.0"}, "python-swiftclient": {"eq_version": "", "ge_version": "3.2.0", "lt_version": "", "ne_version": [], "upper_version": "3.8.1", "version": "3.8.1"}, "python-troveclient": {"eq_version": "", "ge_version": "2.2.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.1", "version": "3.0.1"}, "python-zaqarclient": {"eq_version": "", "ge_version": "1.3.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "python-zunclient": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], 
"upper_version": "3.5.1", "version": "3.5.1"}, "pytz": {"eq_version": "", "ge_version": "2013.6", "lt_version": "", "ne_version": [], "upper_version": "2019.2", "version": "2019.2"}, "requests": {"eq_version": "", "ge_version": "2.14.2", "lt_version": "", "ne_version": [], "upper_version": "2.22.0", "version": "2.22.0"}, "six": {"eq_version": "", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "sqlalchemy-migrate": {"eq_version": "", "ge_version": "0.11.0", "lt_version": "", "ne_version": [], "upper_version": "0.12.0", "version": "0.12.0"}, "stevedore": {"eq_version": "", "ge_version": "1.20.0", "lt_version": "", "ne_version": [], "upper_version": "1.31.0", "version": "1.31.0"}, "tenacity": {"eq_version": "", "ge_version": "4.4.0", "lt_version": "", "ne_version": [], "upper_version": "5.1.1", "version": "5.1.1"}, "yaql": {"eq_version": "", "ge_version": "1.1.3", "lt_version": "", "ne_version": [], "upper_version": "1.1.3", "version": "1.1.3"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/openstack-placement.json b/tools/oos/example/train_cached_file/openstack-placement.json deleted file mode 100644 index ef2558a507b19108ed39cbdf825b4798d3fd2466..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/openstack-placement.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "openstack-placement", "version_dict": {"version": "2.0.1", "eq_version": "2.0.1", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": ""}, "deep": {"count": 0, "list": ["openstack-placement"]}, "requires": {"Routes": {"eq_version": "", "ge_version": "2.3.1", "lt_version": "", "ne_version": [], "upper_version": "2.4.1", "version": "2.4.1"}, "SQLAlchemy": {"eq_version": "", "ge_version": "1.2.19", "lt_version": "", "ne_version": [], "upper_version": "1.3.8", "version": "1.3.8"}, "WebOb": {"eq_version": "", "ge_version": "1.8.2", "lt_version": "", "ne_version": [], "upper_version": "1.8.5", "version": "1.8.5"}, "jsonschema": {"eq_version": "", "ge_version": "2.6.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.2", "version": "3.0.2"}, "keystonemiddleware": {"eq_version": "", "ge_version": "4.18.0", "lt_version": "", "ne_version": [], "upper_version": "7.0.1", "version": "7.0.1"}, "microversion-parse": {"eq_version": "", "ge_version": "0.2.1", "lt_version": "", "ne_version": [], "upper_version": "0.2.1", "version": "0.2.1"}, "os-resource-classes": {"eq_version": "", "ge_version": "0.5.0", "lt_version": "", "ne_version": [], "upper_version": "0.5.0", "version": "0.5.0"}, "os-traits": {"eq_version": "", "ge_version": "0.16.0", "lt_version": "", "ne_version": [], "upper_version": "0.16.0", "version": "0.16.0"}, "oslo.concurrency": {"eq_version": "", "ge_version": "3.26.0", "lt_version": "", "ne_version": [], "upper_version": "3.30.1", "version": "3.30.1"}, "oslo.config": {"eq_version": "", "ge_version": "6.7.0", "lt_version": "", "ne_version": [], "upper_version": "6.11.3", "version": "6.11.3"}, "oslo.context": {"eq_version": "", "ge_version": "2.19.2", "lt_version": "", "ne_version": [], "upper_version": "2.23.1", "version": "2.23.1"}, "oslo.db": {"eq_version": "", "ge_version": "4.40.0", "lt_version": "", "ne_version": [], "upper_version": "5.0.2", "version": "5.0.2"}, "oslo.log": {"eq_version": "", "ge_version": "3.36.0", "lt_version": "", "ne_version": [], "upper_version": "3.44.3", "version": "3.44.3"}, "oslo.middleware": {"eq_version": "", "ge_version": "3.31.0", "lt_version": "", 
"ne_version": [], "upper_version": "3.38.1", "version": "3.38.1"}, "oslo.policy": {"eq_version": "", "ge_version": "1.35.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.4", "version": "2.3.4"}, "oslo.serialization": {"eq_version": "", "ge_version": "2.18.0", "lt_version": "", "ne_version": ["2.19.1"], "upper_version": "2.29.3", "version": "2.29.3"}, "oslo.upgradecheck": {"eq_version": "", "ge_version": "0.2.0", "lt_version": "", "ne_version": [], "upper_version": "0.3.2", "version": "0.3.2"}, "oslo.utils": {"eq_version": "", "ge_version": "3.37.0", "lt_version": "", "ne_version": [], "upper_version": "3.41.6", "version": "3.41.6"}, "pbr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": ["2.1.0"], "upper_version": "5.4.3", "version": "5.4.3"}, "requests": {"eq_version": "", "ge_version": "2.14.2", "lt_version": "", "ne_version": [], "upper_version": "2.22.0", "version": "2.22.0"}, "setuptools": {"eq_version": "", "ge_version": "21.0.0", "lt_version": "", "ne_version": ["24.0.0", "34.0.0", "34.0.1", "34.0.2", "34.0.3", "34.1.0", "34.1.1", "34.2.0", "34.3.0", "34.3.1", "34.3.2", "36.2.0"], "upper_version": "57.5.0", "version": "57.5.0"}, "six": {"eq_version": "", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/openstackdocstheme.json b/tools/oos/example/train_cached_file/openstackdocstheme.json deleted file mode 100644 index 14e65be02efa66810dd2b7b64ddc934c7a4f16fe..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/openstackdocstheme.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "openstackdocstheme", "version_dict": {"version": "1.31.1", "eq_version": "", "ge_version": "1.17.0", "lt_version": "", "ne_version": [], "upper_version": "1.31.1"}, "deep": {"count": 4, "list": ["aodh", "futurist", "hacking", "oslosphinx", "openstackdocstheme"]}, "requires": {"pbr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": ["2.1.0"], "upper_version": "5.4.3", "version": "5.4.3"}, "dulwich": {"eq_version": "", "ge_version": "0.15.0", "lt_version": "", "ne_version": [], "upper_version": "0.19.13", "version": "0.19.13"}, "hacking": {"eq_version": "", "ge_version": "1.1.0", "lt_version": "1.2.0", "ne_version": [], "upper_version": "", "version": "1.1.0"}, "Sphinx": {"eq_version": "", "ge_version": "1.6.2", "lt_version": "", "ne_version": ["1.6.6", "1.6.7"], "upper_version": "2.2.0", "version": "2.2.0"}, "os-api-ref": {"eq_version": "", "ge_version": "1.4.0", "lt_version": "", "ne_version": [], "upper_version": "1.6.2", "version": "1.6.2"}, "reno": {"eq_version": "", "ge_version": "2.5.0", "lt_version": "", "ne_version": [], "upper_version": "2.11.3", "version": "2.11.3"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/openstacksdk.json b/tools/oos/example/train_cached_file/openstacksdk.json deleted file mode 100644 index e8e558cbb5a4878e266143ff8e730a1d79dfc71f..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/openstacksdk.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "openstacksdk", "version_dict": {"version": "0.36.5", "eq_version": "", "ge_version": "0.13.0", "lt_version": "", "ne_version": [], "upper_version": "0.36.5"}, "deep": {"count": 12, "list": ["aodh", "futurist", "hacking", "oslosphinx", "openstackdocstheme", "os-api-ref", "stestr", "cliff", "stevedore", "bandit", "oslotest", "os-client-config", 
"openstacksdk"]}, "requires": {"pbr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": ["2.1.0"], "upper_version": "5.4.3", "version": "5.4.3"}, "PyYAML": {"eq_version": "", "ge_version": "3.12", "lt_version": "", "ne_version": [], "upper_version": "5.1.2", "version": "5.1.2"}, "appdirs": {"eq_version": "", "ge_version": "1.3.0", "lt_version": "", "ne_version": [], "upper_version": "1.4.3", "version": "1.4.3"}, "requestsexceptions": {"eq_version": "", "ge_version": "1.2.0", "lt_version": "", "ne_version": [], "upper_version": "1.4.0", "version": "1.4.0"}, "jsonpatch": {"eq_version": "", "ge_version": "1.16", "lt_version": "", "ne_version": ["1.20"], "upper_version": "1.24", "version": "1.24"}, "six": {"eq_version": "", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "os-service-types": {"eq_version": "", "ge_version": "1.7.0", "lt_version": "", "ne_version": [], "upper_version": "1.7.0", "version": "1.7.0"}, "keystoneauth1": {"eq_version": "", "ge_version": "3.16.0", "lt_version": "", "ne_version": [], "upper_version": "3.17.4", "version": "3.17.4"}, "munch": {"eq_version": "", "ge_version": "2.1.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.2", "version": "2.3.2"}, "decorator": {"eq_version": "", "ge_version": "3.4.0", "lt_version": "", "ne_version": [], "upper_version": "4.4.0", "version": "4.4.0"}, "jmespath": {"eq_version": "", "ge_version": "0.9.0", "lt_version": "", "ne_version": [], "upper_version": "0.9.4", "version": "0.9.4"}, "ipaddress": {"eq_version": "", "ge_version": "1.0.17", "lt_version": "", "ne_version": [], "upper_version": "1.0.22", "version": "1.0.22"}, "iso8601": {"eq_version": "", "ge_version": "0.1.11", "lt_version": "", "ne_version": [], "upper_version": "0.1.12", "version": "0.1.12"}, "netifaces": {"eq_version": "", "ge_version": "0.10.4", "lt_version": "", "ne_version": [], "upper_version": "0.10.9", "version": "0.10.9"}, "dogpile.cache": {"eq_version": "", "ge_version": "0.6.2", "lt_version": "", "ne_version": [], "upper_version": "0.7.1", "version": "0.7.1"}, "cryptography": {"eq_version": "", "ge_version": "2.1", "lt_version": "", "ne_version": [], "upper_version": "2.8", "version": "2.8"}, "hacking": {"eq_version": "", "ge_version": "1.0", "lt_version": "1.2", "ne_version": [], "upper_version": "", "version": "1.0"}, "coverage": {"eq_version": "", "ge_version": "4.0", "lt_version": "", "ne_version": ["4.4"], "upper_version": "4.5.4", "version": "4.5.4"}, "extras": {"eq_version": "", "ge_version": "1.0.0", "lt_version": "", "ne_version": [], "upper_version": "1.0.0", "version": "1.0.0"}, "fixtures": {"eq_version": "", "ge_version": "3.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.0", "version": "3.0.0"}, "jsonschema": {"eq_version": "", "ge_version": "2.6.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.2", "version": "3.0.2"}, "mock": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.5", "version": "3.0.5"}, "prometheus-client": {"eq_version": "", "ge_version": "0.4.2", "lt_version": "", "ne_version": [], "upper_version": "0.7.1", "version": "0.7.1"}, "python-subunit": {"eq_version": "", "ge_version": "1.0.0", "lt_version": "", "ne_version": [], "upper_version": "1.4.0", "version": "1.4.0"}, "oslo.config": {"eq_version": "", "ge_version": "6.1.0", "lt_version": "", "ne_version": [], "upper_version": "6.11.3", "version": "6.11.3"}, "oslotest": {"eq_version": "", "ge_version": 
"3.2.0", "lt_version": "", "ne_version": [], "upper_version": "3.8.1", "version": "3.8.1"}, "requests-mock": {"eq_version": "", "ge_version": "1.2.0", "lt_version": "", "ne_version": [], "upper_version": "1.6.0", "version": "1.6.0"}, "statsd": {"eq_version": "", "ge_version": "3.3.0", "lt_version": "", "ne_version": [], "upper_version": "3.3.0", "version": "3.3.0"}, "stestr": {"eq_version": "", "ge_version": "1.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.5.1", "version": "2.5.1"}, "testrepository": {"eq_version": "", "ge_version": "0.0.18", "lt_version": "", "ne_version": [], "upper_version": "0.0.20", "version": "0.0.20"}, "testscenarios": {"eq_version": "", "ge_version": "0.4", "lt_version": "", "ne_version": [], "upper_version": "0.5.0", "version": "0.5.0"}, "testtools": {"eq_version": "", "ge_version": "2.2.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.0", "version": "2.3.0"}, "doc8": {"eq_version": "", "ge_version": "0.8.0", "lt_version": "", "ne_version": [], "upper_version": "0.8.0", "version": "0.8.0"}, "Pygments": {"eq_version": "", "ge_version": "2.2.0", "lt_version": "", "ne_version": [], "upper_version": "2.6.1", "version": "2.6.1"}, "Sphinx": {"eq_version": "", "ge_version": "1.6.2", "lt_version": "", "ne_version": ["1.6.6", "1.6.7"], "upper_version": "2.2.0", "version": "2.2.0"}, "docutils": {"eq_version": "", "ge_version": "0.11", "lt_version": "", "ne_version": [], "upper_version": "0.15.2", "version": "0.15.2"}, "openstackdocstheme": {"eq_version": "", "ge_version": "1.18.1", "lt_version": "", "ne_version": [], "upper_version": "1.31.1", "version": "1.31.1"}, "beautifulsoup4": {"eq_version": "", "ge_version": "4.6.0", "lt_version": "", "ne_version": [], "upper_version": "4.8.0", "version": "4.8.0"}, "reno": {"eq_version": "", "ge_version": "2.5.0", "lt_version": "", "ne_version": [], "upper_version": "2.11.3", "version": "2.11.3"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/ordereddict.json b/tools/oos/example/train_cached_file/ordereddict.json deleted file mode 100644 index de000d6c3d403be5d41ff961c42d13814c8d19cd..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/ordereddict.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "ordereddict", "version_dict": {"version": "1.1", "eq_version": "", "ge_version": "1.1", "lt_version": "", "ne_version": [], "upper_version": ""}, "deep": {"count": 2, "list": ["cinder", "pywbem", "ordereddict"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/os-api-ref.json b/tools/oos/example/train_cached_file/os-api-ref.json deleted file mode 100644 index a2e402d178560c20ffa02ca46cbc763f79f1be9d..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/os-api-ref.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "os-api-ref", "version_dict": {"version": "1.6.2", "eq_version": "", "ge_version": "1.4.0", "lt_version": "", "ne_version": [], "upper_version": "1.6.2"}, "deep": {"count": 5, "list": ["aodh", "futurist", "hacking", "oslosphinx", "openstackdocstheme", "os-api-ref"]}, "requires": {"pbr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": ["2.1.0"], "upper_version": "5.4.3", "version": "5.4.3"}, "PyYAML": {"eq_version": "", "ge_version": "3.12", "lt_version": "", "ne_version": [], "upper_version": "5.1.2", "version": "5.1.2"}, "six": {"eq_version": "", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, 
"Sphinx": {"eq_version": "", "ge_version": "1.6.2", "lt_version": "", "ne_version": ["1.6.6", "1.6.7"], "upper_version": "2.2.0", "version": "2.2.0"}, "openstackdocstheme": {"eq_version": "", "ge_version": "1.18.1", "lt_version": "", "ne_version": [], "upper_version": "1.31.1", "version": "1.31.1"}, "hacking": {"eq_version": "", "ge_version": "1.1.0", "lt_version": "1.2.0", "ne_version": [], "upper_version": "", "version": "1.1.0"}, "coverage": {"eq_version": "", "ge_version": "4.0", "lt_version": "", "ne_version": ["4.4"], "upper_version": "4.5.4", "version": "4.5.4"}, "python-subunit": {"eq_version": "", "ge_version": "1.0.0", "lt_version": "", "ne_version": [], "upper_version": "1.4.0", "version": "1.4.0"}, "testrepository": {"eq_version": "", "ge_version": "0.0.18", "lt_version": "", "ne_version": [], "upper_version": "0.0.20", "version": "0.0.20"}, "testtools": {"eq_version": "", "ge_version": "2.2.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.0", "version": "2.3.0"}, "sphinx-testing": {"eq_version": "", "ge_version": "0.7.2", "lt_version": "", "ne_version": [], "upper_version": "1.0.1", "version": "1.0.1"}, "beautifulsoup4": {"eq_version": "", "ge_version": "4.6.0", "lt_version": "", "ne_version": [], "upper_version": "4.8.0", "version": "4.8.0"}, "stestr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.5.1", "version": "2.5.1"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/os-brick.json b/tools/oos/example/train_cached_file/os-brick.json deleted file mode 100644 index 6138137afad0e502c9faf9b28cb157f336bc92e4..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/os-brick.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "os-brick", "version_dict": {"version": "2.10.7", "eq_version": "", "ge_version": "2.10.5", "lt_version": "", "ne_version": [], "upper_version": "2.10.7"}, "deep": {"count": 1, "list": ["cinder", "os-brick"]}, "requires": {"pbr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": ["2.1.0"], "upper_version": "5.4.3", "version": "5.4.3"}, "Babel": {"eq_version": "", "ge_version": "2.3.4", "lt_version": "", "ne_version": ["2.4.0"], "upper_version": "2.7.0", "version": "2.7.0"}, "eventlet": {"eq_version": "", "ge_version": "0.18.2", "lt_version": "", "ne_version": ["0.18.3", "0.20.1"], "upper_version": "0.25.2", "version": "0.25.2"}, "oslo.concurrency": {"eq_version": "", "ge_version": "3.26.0", "lt_version": "", "ne_version": [], "upper_version": "3.30.1", "version": "3.30.1"}, "oslo.context": {"eq_version": "", "ge_version": "2.19.2", "lt_version": "", "ne_version": [], "upper_version": "2.23.1", "version": "2.23.1"}, "oslo.log": {"eq_version": "", "ge_version": "3.36.0", "lt_version": "", "ne_version": [], "upper_version": "3.44.3", "version": "3.44.3"}, "oslo.i18n": {"eq_version": "", "ge_version": "3.15.3", "lt_version": "", "ne_version": [], "upper_version": "3.24.0", "version": "3.24.0"}, "oslo.privsep": {"eq_version": "", "ge_version": "1.23.0", "lt_version": "", "ne_version": [], "upper_version": "1.33.5", "version": "1.33.5"}, "oslo.service": {"eq_version": "", "ge_version": "1.24.0", "lt_version": "", "ne_version": ["1.28.1"], "upper_version": "1.40.2", "version": "1.40.2"}, "oslo.utils": {"eq_version": "", "ge_version": "3.33.0", "lt_version": "", "ne_version": [], "upper_version": "3.41.6", "version": "3.41.6"}, "requests": {"eq_version": "", "ge_version": "2.14.2", "lt_version": "", "ne_version": [], 
"upper_version": "2.22.0", "version": "2.22.0"}, "retrying": {"eq_version": "", "ge_version": "1.2.3", "lt_version": "", "ne_version": ["1.3.0"], "upper_version": "1.3.3", "version": "1.3.3"}, "six": {"eq_version": "", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "os-win": {"eq_version": "", "ge_version": "3.0.0", "lt_version": "", "ne_version": [], "upper_version": "4.3.3", "version": "4.3.3"}, "hacking": {"eq_version": "", "ge_version": "1.1.0", "lt_version": "1.2.0", "ne_version": [], "upper_version": "", "version": "1.1.0"}, "coverage": {"eq_version": "", "ge_version": "4.0", "lt_version": "", "ne_version": ["4.4"], "upper_version": "4.5.4", "version": "4.5.4"}, "ddt": {"eq_version": "", "ge_version": "1.0.1", "lt_version": "", "ne_version": [], "upper_version": "1.2.1", "version": "1.2.1"}, "reno": {"eq_version": "", "ge_version": "2.5.0", "lt_version": "", "ne_version": [], "upper_version": "2.11.3", "version": "2.11.3"}, "Sphinx": {"eq_version": "", "ge_version": "1.6.3", "lt_version": "", "ne_version": ["1.6.6", "1.6.7", "2.1.0"], "upper_version": "2.2.0", "version": "2.2.0"}, "openstackdocstheme": {"eq_version": "", "ge_version": "1.20.0", "lt_version": "", "ne_version": [], "upper_version": "1.31.1", "version": "1.31.1"}, "oslotest": {"eq_version": "", "ge_version": "3.2.0", "lt_version": "", "ne_version": [], "upper_version": "3.8.1", "version": "3.8.1"}, "testscenarios": {"eq_version": "", "ge_version": "0.4", "lt_version": "", "ne_version": [], "upper_version": "0.5.0", "version": "0.5.0"}, "testtools": {"eq_version": "", "ge_version": "2.2.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.0", "version": "2.3.0"}, "stestr": {"eq_version": "", "ge_version": "1.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.5.1", "version": "2.5.1"}, "oslo.vmware": {"eq_version": "", "ge_version": "2.17.0", "lt_version": "", "ne_version": [], "upper_version": "2.34.1", "version": "2.34.1"}, "castellan": {"eq_version": "", "ge_version": "0.16.0", "lt_version": "", "ne_version": [], "upper_version": "1.3.4", "version": "1.3.4"}, "bandit": {"eq_version": "", "ge_version": "1.1.0", "lt_version": "", "ne_version": [], "upper_version": "", "version": "1.1.0"}, "os-api-ref": {"eq_version": "", "ge_version": "1.4.0", "lt_version": "", "ne_version": [], "upper_version": "1.6.2", "version": "1.6.2"}, "sphinxcontrib-apidoc": {"eq_version": "", "ge_version": "0.2.0", "lt_version": "", "ne_version": [], "upper_version": "0.3.0", "version": "0.3.0"}, "sphinx-feature-classification": {"eq_version": "", "ge_version": "0.1.0", "lt_version": "", "ne_version": [], "upper_version": "0.4.1", "version": "0.4.1"}, "mock": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.5", "version": "3.0.5"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/os-client-config.json b/tools/oos/example/train_cached_file/os-client-config.json deleted file mode 100644 index e9ce9b1c0fbcc3e7b71dc1d434300f769d4838cc..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/os-client-config.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "os-client-config", "version_dict": {"version": "1.33.0", "eq_version": "", "ge_version": "1.28.0", "lt_version": "", "ne_version": [], "upper_version": "1.33.0"}, "deep": {"count": 11, "list": ["aodh", "futurist", "hacking", "oslosphinx", "openstackdocstheme", "os-api-ref", "stestr", "cliff", "stevedore", "bandit", 
"oslotest", "os-client-config"]}, "requires": {"openstacksdk": {"eq_version": "", "ge_version": "0.13.0", "lt_version": "", "ne_version": [], "upper_version": "0.36.5", "version": "0.36.5"}, "hacking": {"eq_version": "", "ge_version": "1.1.0", "lt_version": "1.2.0", "ne_version": [], "upper_version": "", "version": "1.1.0"}, "coverage": {"eq_version": "", "ge_version": "4.0", "lt_version": "", "ne_version": ["4.4"], "upper_version": "4.5.4", "version": "4.5.4"}, "extras": {"eq_version": "", "ge_version": "1.0.0", "lt_version": "", "ne_version": [], "upper_version": "1.0.0", "version": "1.0.0"}, "fixtures": {"eq_version": "", "ge_version": "3.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.0", "version": "3.0.0"}, "jsonschema": {"eq_version": "", "ge_version": "2.6.0", "lt_version": "3.0.0", "ne_version": [], "upper_version": "3.0.2", "version": "3.0.2"}, "mock": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.5", "version": "3.0.5"}, "python-glanceclient": {"eq_version": "", "ge_version": "2.8.0", "lt_version": "", "ne_version": [], "upper_version": "2.17.1", "version": "2.17.1"}, "python-subunit": {"eq_version": "", "ge_version": "1.0.0", "lt_version": "", "ne_version": [], "upper_version": "1.4.0", "version": "1.4.0"}, "oslotest": {"eq_version": "", "ge_version": "3.2.0", "lt_version": "", "ne_version": [], "upper_version": "3.8.1", "version": "3.8.1"}, "stestr": {"eq_version": "", "ge_version": "1.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.5.1", "version": "2.5.1"}, "testscenarios": {"eq_version": "", "ge_version": "0.4", "lt_version": "", "ne_version": [], "upper_version": "0.5.0", "version": "0.5.0"}, "testtools": {"eq_version": "", "ge_version": "2.2.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.0", "version": "2.3.0"}, "docutils": {"eq_version": "", "ge_version": "0.11", "lt_version": "", "ne_version": [], "upper_version": "0.15.2", "version": "0.15.2"}, "Sphinx": {"eq_version": "", "ge_version": "1.6.2", "lt_version": "", "ne_version": ["1.6.6", "1.6.7"], "upper_version": "2.2.0", "version": "2.2.0"}, "openstackdocstheme": {"eq_version": "", "ge_version": "1.18.1", "lt_version": "", "ne_version": [], "upper_version": "1.31.1", "version": "1.31.1"}, "reno": {"eq_version": "", "ge_version": "2.5.0", "lt_version": "", "ne_version": [], "upper_version": "2.11.3", "version": "2.11.3"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/os-ken.json b/tools/oos/example/train_cached_file/os-ken.json deleted file mode 100644 index 7b6c9e4dcef833b04a01bbe5eeeb1ca9bbbdb4dc..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/os-ken.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "os-ken", "version_dict": {"version": "0.4.1", "eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "0.4.1"}, "deep": {"count": 2, "list": ["openstack-heat", "neutron-lib", "os-ken"]}, "requires": {"pbr": {"eq_version": "", "ge_version": "2.0", "lt_version": "", "ne_version": [], "upper_version": "5.4.3", "version": "5.4.3"}, "eventlet": {"eq_version": "", "ge_version": "0.18.2", "lt_version": "", "ne_version": ["0.18.3", "0.20.1", "0.21.0", "0.23.0"], "upper_version": "0.25.2", "version": "0.25.2"}, "msgpack": {"eq_version": "", "ge_version": "0.3.0", "lt_version": "", "ne_version": [], "upper_version": "0.6.1", "version": "0.6.1"}, "netaddr": {"eq_version": "", "ge_version": "0.7.18", "lt_version": "", "ne_version": [], 
"upper_version": "0.7.19", "version": "0.7.19"}, "oslo.config": {"eq_version": "", "ge_version": "2.5.0", "lt_version": "", "ne_version": [], "upper_version": "6.11.3", "version": "6.11.3"}, "ovs": {"eq_version": "", "ge_version": "2.6.0", "lt_version": "", "ne_version": [], "upper_version": "2.11.8", "version": "2.11.8"}, "Routes": {"eq_version": "", "ge_version": "2.3.1", "lt_version": "", "ne_version": [], "upper_version": "2.4.1", "version": "2.4.1"}, "six": {"eq_version": "", "ge_version": "1.4.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "tinyrpc": {"eq_version": "", "ge_version": "0.6", "lt_version": "", "ne_version": [], "upper_version": "1.0.3", "version": "1.0.3"}, "WebOb": {"eq_version": "", "ge_version": "1.2", "lt_version": "", "ne_version": [], "upper_version": "1.8.5", "version": "1.8.5"}, "hacking": {"eq_version": "", "ge_version": "0.12.0", "lt_version": "0.13", "ne_version": [], "upper_version": "", "version": "0.12.0"}, "coverage": {"eq_version": "", "ge_version": "4.0", "lt_version": "", "ne_version": ["4.4"], "upper_version": "4.5.4", "version": "4.5.4"}, "python-subunit": {"eq_version": "", "ge_version": "0.0.18", "lt_version": "", "ne_version": [], "upper_version": "1.4.0", "version": "1.4.0"}, "oslotest": {"eq_version": "", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "3.8.1", "version": "3.8.1"}, "stestr": {"eq_version": "", "ge_version": "1.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.5.1", "version": "2.5.1"}, "testtools": {"eq_version": "", "ge_version": "1.4.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.0", "version": "2.3.0"}, "mock": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.5", "version": "3.0.5"}, "nose": {"eq_version": "", "ge_version": "1.3.7", "lt_version": "", "ne_version": [], "upper_version": "1.3.7", "version": "1.3.7"}, "pycodestyle": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "", "version": "2.0.0"}, "pylint": {"eq_version": "1.9.2", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "1.9.2"}, "Sphinx": {"eq_version": "", "ge_version": "1.6.2", "lt_version": "", "ne_version": ["1.6.6", "1.6.7"], "upper_version": "2.2.0", "version": "2.2.0"}, "openstackdocstheme": {"eq_version": "", "ge_version": "1.18.1", "lt_version": "", "ne_version": [], "upper_version": "1.31.1", "version": "1.31.1"}, "reno": {"eq_version": "", "ge_version": "2.5.0", "lt_version": "", "ne_version": [], "upper_version": "2.11.3", "version": "2.11.3"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/os-resource-classes.json b/tools/oos/example/train_cached_file/os-resource-classes.json deleted file mode 100644 index 494d2e25c32a92f987d6636a777136e19c23b718..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/os-resource-classes.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "os-resource-classes", "version_dict": {"version": "0.5.0", "eq_version": "", "ge_version": "0.5.0", "lt_version": "", "ne_version": [], "upper_version": "0.5.0"}, "deep": {"count": 1, "list": ["openstack-cyborg", "os-resource-classes"]}, "requires": {"pbr": {"eq_version": "", "ge_version": "2.0", "lt_version": "", "ne_version": [], "upper_version": "5.4.3", "version": "5.4.3"}, "hacking": {"eq_version": "", "ge_version": "0.12.0", "lt_version": "0.13", "ne_version": [], "upper_version": "", "version": 
"0.12.0"}, "coverage": {"eq_version": "", "ge_version": "4.0", "lt_version": "", "ne_version": ["4.4"], "upper_version": "4.5.4", "version": "4.5.4"}, "python-subunit": {"eq_version": "", "ge_version": "0.0.18", "lt_version": "", "ne_version": [], "upper_version": "1.4.0", "version": "1.4.0"}, "oslotest": {"eq_version": "", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "3.8.1", "version": "3.8.1"}, "stestr": {"eq_version": "", "ge_version": "1.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.5.1", "version": "2.5.1"}, "testtools": {"eq_version": "", "ge_version": "1.4.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.0", "version": "2.3.0"}, "Sphinx": {"eq_version": "", "ge_version": "1.6.2", "lt_version": "", "ne_version": ["1.6.6", "1.6.7"], "upper_version": "2.2.0", "version": "2.2.0"}, "openstackdocstheme": {"eq_version": "", "ge_version": "1.18.1", "lt_version": "", "ne_version": [], "upper_version": "1.31.1", "version": "1.31.1"}, "reno": {"eq_version": "", "ge_version": "2.5.0", "lt_version": "", "ne_version": [], "upper_version": "2.11.3", "version": "2.11.3"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/os-service-types.json b/tools/oos/example/train_cached_file/os-service-types.json deleted file mode 100644 index 78f9453adb676b96eda292eca507090711edfd90..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/os-service-types.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "os-service-types", "version_dict": {"version": "1.7.0", "eq_version": "", "ge_version": "1.7.0", "lt_version": "", "ne_version": [], "upper_version": "1.7.0"}, "deep": {"count": 13, "list": ["aodh", "futurist", "hacking", "oslosphinx", "openstackdocstheme", "os-api-ref", "stestr", "cliff", "stevedore", "bandit", "oslotest", "os-client-config", "openstacksdk", "os-service-types"]}, "requires": {"pbr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": ["2.1.0"], "upper_version": "5.4.3", "version": "5.4.3"}, "hacking": {"eq_version": "", "ge_version": "1.1.0", "lt_version": "1.2.0", "ne_version": [], "upper_version": "", "version": "1.1.0"}, "coverage": {"eq_version": "", "ge_version": "4.0", "lt_version": "", "ne_version": ["4.4"], "upper_version": "4.5.4", "version": "4.5.4"}, "python-subunit": {"eq_version": "", "ge_version": "1.0.0", "lt_version": "", "ne_version": [], "upper_version": "1.4.0", "version": "1.4.0"}, "Sphinx": {"eq_version": "", "ge_version": "1.6.2", "lt_version": "", "ne_version": ["1.6.6", "1.6.7"], "upper_version": "2.2.0", "version": "2.2.0"}, "stestr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.5.1", "version": "2.5.1"}, "oslotest": {"eq_version": "", "ge_version": "3.2.0", "lt_version": "", "ne_version": [], "upper_version": "3.8.1", "version": "3.8.1"}, "testscenarios": {"eq_version": "", "ge_version": "0.4", "lt_version": "", "ne_version": [], "upper_version": "0.5.0", "version": "0.5.0"}, "requests-mock": {"eq_version": "", "ge_version": "1.2.0", "lt_version": "", "ne_version": [], "upper_version": "1.6.0", "version": "1.6.0"}, "openstackdocstheme": {"eq_version": "", "ge_version": "1.18.1", "lt_version": "", "ne_version": [], "upper_version": "1.31.1", "version": "1.31.1"}, "keystoneauth1": {"eq_version": "", "ge_version": "3.4.0", "lt_version": "", "ne_version": [], "upper_version": "3.17.4", "version": "3.17.4"}, "reno": {"eq_version": "", "ge_version": "2.5.0", "lt_version": "", "ne_version": 
[], "upper_version": "2.11.3", "version": "2.11.3"}, "six": {"eq_version": "", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/os-testr.json b/tools/oos/example/train_cached_file/os-testr.json deleted file mode 100644 index fb92aaeace2dee0c34d00d081a687dbcc238c59e..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/os-testr.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "os-testr", "version_dict": {"version": "1.1.0", "eq_version": "", "ge_version": "1.0.0", "lt_version": "", "ne_version": [], "upper_version": "1.1.0"}, "deep": {"count": 9, "list": ["aodh", "futurist", "hacking", "oslosphinx", "openstackdocstheme", "os-api-ref", "stestr", "subunit2sql", "oslo.db", "os-testr"]}, "requires": {"pbr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": ["2.1.0"], "upper_version": "5.4.3", "version": "5.4.3"}, "stestr": {"eq_version": "", "ge_version": "1.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.5.1", "version": "2.5.1"}, "python-subunit": {"eq_version": "", "ge_version": "1.0.0", "lt_version": "", "ne_version": [], "upper_version": "1.4.0", "version": "1.4.0"}, "testtools": {"eq_version": "", "ge_version": "2.2.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.0", "version": "2.3.0"}, "hacking": {"eq_version": "", "ge_version": "1.1.0", "lt_version": "1.2.0", "ne_version": [], "upper_version": "", "version": "1.1.0"}, "coverage": {"eq_version": "", "ge_version": "4.0", "lt_version": "", "ne_version": ["4.4"], "upper_version": "4.5.4", "version": "4.5.4"}, "oslotest": {"eq_version": "", "ge_version": "3.2.0", "lt_version": "", "ne_version": [], "upper_version": "3.8.1", "version": "3.8.1"}, "testscenarios": {"eq_version": "", "ge_version": "0.4", "lt_version": "", "ne_version": [], "upper_version": "0.5.0", "version": "0.5.0"}, "ddt": {"eq_version": "", "ge_version": "1.0.1", "lt_version": "", "ne_version": [], "upper_version": "1.2.1", "version": "1.2.1"}, "six": {"eq_version": "", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "openstackdocstheme": {"eq_version": "", "ge_version": "1.18.1", "lt_version": "", "ne_version": [], "upper_version": "1.31.1", "version": "1.31.1"}, "reno": {"eq_version": "", "ge_version": "2.5.0", "lt_version": "", "ne_version": [], "upper_version": "2.11.3", "version": "2.11.3"}, "Sphinx": {"eq_version": "", "ge_version": "1.6.2", "lt_version": "", "ne_version": ["1.6.6", "1.6.7"], "upper_version": "2.2.0", "version": "2.2.0"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/os-traits.json b/tools/oos/example/train_cached_file/os-traits.json deleted file mode 100644 index 4d7d64dc76104004f324b51c3634db354a2b73aa..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/os-traits.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "os-traits", "version_dict": {"version": "0.16.0", "eq_version": "", "ge_version": "0.9.0", "lt_version": "", "ne_version": [], "upper_version": "0.16.0"}, "deep": {"count": 2, "list": ["openstack-heat", "neutron-lib", "os-traits"]}, "requires": {"pbr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": ["2.1.0"], "upper_version": "5.4.3", "version": "5.4.3"}, "six": {"eq_version": "", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, 
"hacking": {"eq_version": "", "ge_version": "1.1.0", "lt_version": "1.2.0", "ne_version": [], "upper_version": "", "version": "1.1.0"}, "coverage": {"eq_version": "", "ge_version": "4.0", "lt_version": "", "ne_version": ["4.4"], "upper_version": "4.5.4", "version": "4.5.4"}, "python-subunit": {"eq_version": "", "ge_version": "1.0.0", "lt_version": "", "ne_version": [], "upper_version": "1.4.0", "version": "1.4.0"}, "oslotest": {"eq_version": "", "ge_version": "3.2.0", "lt_version": "", "ne_version": [], "upper_version": "3.8.1", "version": "3.8.1"}, "stestr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.5.1", "version": "2.5.1"}, "testscenarios": {"eq_version": "", "ge_version": "0.4", "lt_version": "", "ne_version": [], "upper_version": "0.5.0", "version": "0.5.0"}, "testtools": {"eq_version": "", "ge_version": "2.2.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.0", "version": "2.3.0"}, "Sphinx": {"eq_version": "", "ge_version": "1.6.2", "lt_version": "", "ne_version": ["1.6.6", "1.6.7"], "upper_version": "2.2.0", "version": "2.2.0"}, "openstackdocstheme": {"eq_version": "", "ge_version": "1.18.1", "lt_version": "", "ne_version": [], "upper_version": "1.31.1", "version": "1.31.1"}, "reno": {"eq_version": "", "ge_version": "2.5.0", "lt_version": "", "ne_version": [], "upper_version": "2.11.3", "version": "2.11.3"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/os-vif.json b/tools/oos/example/train_cached_file/os-vif.json deleted file mode 100644 index e27b86d006ed763e63f45e077b57fb8b1e9fec3e..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/os-vif.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "os-vif", "version_dict": {"version": "1.17.0", "eq_version": "", "ge_version": "1.15.1", "lt_version": "", "ne_version": [], "upper_version": "1.17.0"}, "deep": {"count": 1, "list": ["neutron", "os-vif"]}, "requires": {"pbr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": ["2.1.0"], "upper_version": "5.4.3", "version": "5.4.3"}, "netaddr": {"eq_version": "", "ge_version": "0.7.18", "lt_version": "", "ne_version": [], "upper_version": "0.7.19", "version": "0.7.19"}, "oslo.concurrency": {"eq_version": "", "ge_version": "3.20.0", "lt_version": "", "ne_version": [], "upper_version": "3.30.1", "version": "3.30.1"}, "oslo.config": {"eq_version": "", "ge_version": "5.1.0", "lt_version": "", "ne_version": [], "upper_version": "6.11.3", "version": "6.11.3"}, "oslo.log": {"eq_version": "", "ge_version": "3.30.0", "lt_version": "", "ne_version": [], "upper_version": "3.44.3", "version": "3.44.3"}, "oslo.i18n": {"eq_version": "", "ge_version": "3.15.3", "lt_version": "", "ne_version": [], "upper_version": "3.24.0", "version": "3.24.0"}, "oslo.privsep": {"eq_version": "", "ge_version": "1.23.0", "lt_version": "", "ne_version": [], "upper_version": "1.33.5", "version": "1.33.5"}, "oslo.versionedobjects": {"eq_version": "", "ge_version": "1.28.0", "lt_version": "", "ne_version": [], "upper_version": "1.36.1", "version": "1.36.1"}, "ovsdbapp": {"eq_version": "", "ge_version": "0.12.1", "lt_version": "", "ne_version": [], "upper_version": "0.17.5", "version": "0.17.5"}, "pyroute2": {"eq_version": "", "ge_version": "0.5.2", "lt_version": "", "ne_version": [], "upper_version": "0.5.6", "version": "0.5.6"}, "six": {"eq_version": "", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "stevedore": 
{"eq_version": "", "ge_version": "1.20.0", "lt_version": "", "ne_version": [], "upper_version": "1.31.0", "version": "1.31.0"}, "debtcollector": {"eq_version": "", "ge_version": "1.19.0", "lt_version": "", "ne_version": [], "upper_version": "1.22.0", "version": "1.22.0"}, "hacking": {"eq_version": "", "ge_version": "1.1.0", "lt_version": "1.2.0", "ne_version": [], "upper_version": "", "version": "1.1.0"}, "coverage": {"eq_version": "", "ge_version": "4.0", "lt_version": "", "ne_version": ["4.4"], "upper_version": "4.5.4", "version": "4.5.4"}, "python-subunit": {"eq_version": "", "ge_version": "1.0.0", "lt_version": "", "ne_version": [], "upper_version": "1.4.0", "version": "1.4.0"}, "oslotest": {"eq_version": "", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "3.8.1", "version": "3.8.1"}, "ovs": {"eq_version": "", "ge_version": "2.9.2", "lt_version": "", "ne_version": [], "upper_version": "2.11.8", "version": "2.11.8"}, "stestr": {"eq_version": "", "ge_version": "1.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.5.1", "version": "2.5.1"}, "testrepository": {"eq_version": "", "ge_version": "0.0.18", "lt_version": "", "ne_version": [], "upper_version": "0.0.20", "version": "0.0.20"}, "testscenarios": {"eq_version": "", "ge_version": "0.4", "lt_version": "", "ne_version": [], "upper_version": "0.5.0", "version": "0.5.0"}, "Sphinx": {"eq_version": "", "ge_version": "1.6.2", "lt_version": "", "ne_version": ["1.6.6", "1.6.7", "2.1.0"], "upper_version": "2.2.0", "version": "2.2.0"}, "openstackdocstheme": {"eq_version": "", "ge_version": "1.20.0", "lt_version": "", "ne_version": [], "upper_version": "1.31.1", "version": "1.31.1"}, "reno": {"eq_version": "", "ge_version": "2.5.0", "lt_version": "", "ne_version": [], "upper_version": "2.11.3", "version": "2.11.3"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/os-win.json b/tools/oos/example/train_cached_file/os-win.json deleted file mode 100644 index b860993551642404ca4f34e1b0cf57008061742a..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/os-win.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "os-win", "version_dict": {"version": "4.3.3", "eq_version": "", "ge_version": "3.0.0", "lt_version": "", "ne_version": [], "upper_version": "4.3.3"}, "deep": {"count": 1, "list": ["ceilometer", "os-win"]}, "requires": {"pbr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": ["2.1.0"], "upper_version": "5.4.3", "version": "5.4.3"}, "Babel": {"eq_version": "", "ge_version": "2.3.4", "lt_version": "", "ne_version": ["2.4.0"], "upper_version": "2.7.0", "version": "2.7.0"}, "eventlet": {"eq_version": "", "ge_version": "0.18.2", "lt_version": "", "ne_version": ["0.18.3", "0.20.1"], "upper_version": "0.25.2", "version": "0.25.2"}, "oslo.concurrency": {"eq_version": "", "ge_version": "3.26.0", "lt_version": "", "ne_version": [], "upper_version": "3.30.1", "version": "3.30.1"}, "oslo.config": {"eq_version": "", "ge_version": "5.2.0", "lt_version": "", "ne_version": [], "upper_version": "6.11.3", "version": "6.11.3"}, "oslo.log": {"eq_version": "", "ge_version": "3.36.0", "lt_version": "", "ne_version": [], "upper_version": "3.44.3", "version": "3.44.3"}, "oslo.utils": {"eq_version": "", "ge_version": "3.33.0", "lt_version": "", "ne_version": [], "upper_version": "3.41.6", "version": "3.41.6"}, "oslo.i18n": {"eq_version": "", "ge_version": "3.15.3", "lt_version": "", "ne_version": [], "upper_version": "3.24.0", "version": "3.24.0"}, 
"hacking": {"eq_version": "", "ge_version": "1.1.0", "lt_version": "1.2.0", "ne_version": [], "upper_version": "", "version": "1.1.0"}, "coverage": {"eq_version": "", "ge_version": "4.0", "lt_version": "", "ne_version": ["4.4"], "upper_version": "4.5.4", "version": "4.5.4"}, "ddt": {"eq_version": "", "ge_version": "1.0.1", "lt_version": "", "ne_version": [], "upper_version": "1.2.1", "version": "1.2.1"}, "docutils": {"eq_version": "", "ge_version": "0.11", "lt_version": "", "ne_version": [], "upper_version": "0.15.2", "version": "0.15.2"}, "Sphinx": {"eq_version": "", "ge_version": "1.6.2", "lt_version": "", "ne_version": ["1.6.6", "1.6.7"], "upper_version": "2.2.0", "version": "2.2.0"}, "oslotest": {"eq_version": "", "ge_version": "3.2.0", "lt_version": "", "ne_version": [], "upper_version": "3.8.1", "version": "3.8.1"}, "stestr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.5.1", "version": "2.5.1"}, "testscenarios": {"eq_version": "", "ge_version": "0.4", "lt_version": "", "ne_version": [], "upper_version": "0.5.0", "version": "0.5.0"}, "testtools": {"eq_version": "", "ge_version": "2.2.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.0", "version": "2.3.0"}, "openstackdocstheme": {"eq_version": "", "ge_version": "1.18.1", "lt_version": "", "ne_version": [], "upper_version": "1.31.1", "version": "1.31.1"}, "reno": {"eq_version": "", "ge_version": "2.5.0", "lt_version": "", "ne_version": [], "upper_version": "2.11.3", "version": "2.11.3"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/os-xenapi.json b/tools/oos/example/train_cached_file/os-xenapi.json deleted file mode 100644 index 7023a6259480d6e988af7bd9ab04f1e5ee6af76f..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/os-xenapi.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "os-xenapi", "version_dict": {"version": "0.3.4", "eq_version": "", "ge_version": "0.3.3", "lt_version": "", "ne_version": [], "upper_version": "0.3.4"}, "deep": {"count": 1, "list": ["ceilometer", "os-xenapi"]}, "requires": {"pbr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": ["2.1.0"], "upper_version": "5.4.3", "version": "5.4.3"}, "Babel": {"eq_version": "", "ge_version": "2.3.4", "lt_version": "", "ne_version": ["2.4.0"], "upper_version": "2.7.0", "version": "2.7.0"}, "eventlet": {"eq_version": "", "ge_version": "0.18.2", "lt_version": "", "ne_version": ["0.18.3", "0.20.1"], "upper_version": "0.25.2", "version": "0.25.2"}, "oslo.concurrency": {"eq_version": "", "ge_version": "3.26.0", "lt_version": "", "ne_version": [], "upper_version": "3.30.1", "version": "3.30.1"}, "oslo.log": {"eq_version": "", "ge_version": "3.36.0", "lt_version": "", "ne_version": [], "upper_version": "3.44.3", "version": "3.44.3"}, "oslo.utils": {"eq_version": "", "ge_version": "3.33.0", "lt_version": "", "ne_version": [], "upper_version": "3.41.6", "version": "3.41.6"}, "oslo.i18n": {"eq_version": "", "ge_version": "3.15.3", "lt_version": "", "ne_version": [], "upper_version": "3.24.0", "version": "3.24.0"}, "paramiko": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.6.0", "version": "2.6.0"}, "six": {"eq_version": "", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "hacking": {"eq_version": "", "ge_version": "0.11.0", "lt_version": "0.12", "ne_version": [], "upper_version": "", "version": "0.11.0"}, "coverage": 
{"eq_version": "", "ge_version": "4.0", "lt_version": "", "ne_version": ["4.4"], "upper_version": "4.5.4", "version": "4.5.4"}, "python-subunit": {"eq_version": "", "ge_version": "1.0.0", "lt_version": "", "ne_version": [], "upper_version": "1.4.0", "version": "1.4.0"}, "Sphinx": {"eq_version": "", "ge_version": "1.6.2", "lt_version": "", "ne_version": ["1.6.6", "1.6.7"], "upper_version": "2.2.0", "version": "2.2.0"}, "oslosphinx": {"eq_version": "", "ge_version": "4.7.0", "lt_version": "", "ne_version": [], "upper_version": "4.18.0", "version": "4.18.0"}, "oslotest": {"eq_version": "", "ge_version": "3.2.0", "lt_version": "", "ne_version": [], "upper_version": "3.8.1", "version": "3.8.1"}, "os-testr": {"eq_version": "", "ge_version": "1.0.0", "lt_version": "", "ne_version": [], "upper_version": "1.1.0", "version": "1.1.0"}, "testrepository": {"eq_version": "", "ge_version": "0.0.18", "lt_version": "", "ne_version": [], "upper_version": "0.0.20", "version": "0.0.20"}, "testscenarios": {"eq_version": "", "ge_version": "0.4", "lt_version": "", "ne_version": [], "upper_version": "0.5.0", "version": "0.5.0"}, "testtools": {"eq_version": "", "ge_version": "2.2.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.0", "version": "2.3.0"}, "reno": {"eq_version": "", "ge_version": "2.5.0", "lt_version": "", "ne_version": [], "upper_version": "2.11.3", "version": "2.11.3"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/osc-lib.json b/tools/oos/example/train_cached_file/osc-lib.json deleted file mode 100644 index e1090c2fa0e5c2d47ffdbf58aee1cef0aaa18c3c..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/osc-lib.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "osc-lib", "version_dict": {"version": "1.14.1", "eq_version": "", "ge_version": "0.3.0", "lt_version": "", "ne_version": [], "upper_version": "1.14.1"}, "deep": {"count": 2, "list": ["aodh", "gnocchiclient", "osc-lib"]}, "requires": {"pbr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": ["2.1.0"], "upper_version": "5.4.3", "version": "5.4.3"}, "six": {"eq_version": "", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "Babel": {"eq_version": "", "ge_version": "2.3.4", "lt_version": "", "ne_version": ["2.4.0"], "upper_version": "2.7.0", "version": "2.7.0"}, "cliff": {"eq_version": "", "ge_version": "2.8.0", "lt_version": "", "ne_version": ["2.9.0"], "upper_version": "2.16.0", "version": "2.16.0"}, "keystoneauth1": {"eq_version": "", "ge_version": "3.7.0", "lt_version": "", "ne_version": [], "upper_version": "3.17.4", "version": "3.17.4"}, "openstacksdk": {"eq_version": "", "ge_version": "0.15.0", "lt_version": "", "ne_version": [], "upper_version": "0.36.5", "version": "0.36.5"}, "oslo.i18n": {"eq_version": "", "ge_version": "3.15.3", "lt_version": "", "ne_version": [], "upper_version": "3.24.0", "version": "3.24.0"}, "oslo.utils": {"eq_version": "", "ge_version": "3.33.0", "lt_version": "", "ne_version": [], "upper_version": "3.41.6", "version": "3.41.6"}, "simplejson": {"eq_version": "", "ge_version": "3.5.1", "lt_version": "", "ne_version": [], "upper_version": "3.16.0", "version": "3.16.0"}, "stevedore": {"eq_version": "", "ge_version": "1.20.0", "lt_version": "", "ne_version": [], "upper_version": "1.31.0", "version": "1.31.0"}, "hacking": {"eq_version": "", "ge_version": "0.10.0", "lt_version": "0.11", "ne_version": [], "upper_version": "", "version": "0.10.0"}, "coverage": {"eq_version": 
"", "ge_version": "4.0", "lt_version": "", "ne_version": ["4.4"], "upper_version": "4.5.4", "version": "4.5.4"}, "fixtures": {"eq_version": "", "ge_version": "3.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.0", "version": "3.0.0"}, "mock": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.5", "version": "3.0.5"}, "oslotest": {"eq_version": "", "ge_version": "3.2.0", "lt_version": "", "ne_version": [], "upper_version": "3.8.1", "version": "3.8.1"}, "requests-mock": {"eq_version": "", "ge_version": "1.1.0", "lt_version": "", "ne_version": [], "upper_version": "1.6.0", "version": "1.6.0"}, "Sphinx": {"eq_version": "", "ge_version": "1.6.5", "lt_version": "", "ne_version": ["1.6.6", "1.6.7"], "upper_version": "2.2.0", "version": "2.2.0"}, "os-testr": {"eq_version": "", "ge_version": "1.0.0", "lt_version": "", "ne_version": [], "upper_version": "1.1.0", "version": "1.1.0"}, "testrepository": {"eq_version": "", "ge_version": "0.0.18", "lt_version": "", "ne_version": [], "upper_version": "0.0.20", "version": "0.0.20"}, "testtools": {"eq_version": "", "ge_version": "2.2.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.0", "version": "2.3.0"}, "osprofiler": {"eq_version": "", "ge_version": "1.4.0", "lt_version": "", "ne_version": [], "upper_version": "2.8.2", "version": "2.8.2"}, "bandit": {"eq_version": "", "ge_version": "1.1.0", "lt_version": "", "ne_version": [], "upper_version": "", "version": "1.1.0"}, "openstackdocstheme": {"eq_version": "", "ge_version": "1.18.1", "lt_version": "", "ne_version": [], "upper_version": "1.31.1", "version": "1.31.1"}, "reno": {"eq_version": "", "ge_version": "2.5.0", "lt_version": "", "ne_version": [], "upper_version": "2.11.3", "version": "2.11.3"}, "sphinxcontrib-apidoc": {"eq_version": "", "ge_version": "0.2.0", "lt_version": "", "ne_version": [], "upper_version": "0.3.0", "version": "0.3.0"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/osc-placement.json b/tools/oos/example/train_cached_file/osc-placement.json deleted file mode 100644 index 39d2e1f12cb058683c16d7421fd2d8a7f5140e39..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/osc-placement.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "osc-placement", "version_dict": {"version": "1.7.0", "eq_version": "1.7.0", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": ""}, "deep": {"count": 0, "list": ["osc-placement"]}, "requires": {"pbr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "5.4.3", "version": "5.4.3"}, "six": {"eq_version": "", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "keystoneauth1": {"eq_version": "", "ge_version": "3.3.0", "lt_version": "", "ne_version": [], "upper_version": "3.17.4", "version": "3.17.4"}, "simplejson": {"eq_version": "", "ge_version": "3.16.0", "lt_version": "", "ne_version": [], "upper_version": "3.16.0", "version": "3.16.0"}, "osc-lib": {"eq_version": "", "ge_version": "1.2.0", "lt_version": "", "ne_version": [], "upper_version": "1.14.1", "version": "1.14.1"}, "oslo.utils": {"eq_version": "", "ge_version": "3.37.0", "lt_version": "", "ne_version": [], "upper_version": "3.41.6", "version": "3.41.6"}, "hacking": {"eq_version": "", "ge_version": "0.12.0", "lt_version": "0.13", "ne_version": [], "upper_version": "", "version": "0.12.0"}, "coverage": {"eq_version": "", "ge_version": "4.0", "lt_version": 
"", "ne_version": [], "upper_version": "4.5.4", "version": "4.5.4"}, "oslotest": {"eq_version": "", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "3.8.1", "version": "3.8.1"}, "python-openstackclient": {"eq_version": "", "ge_version": "3.3.0", "lt_version": "", "ne_version": [], "upper_version": "4.0.2", "version": "4.0.2"}, "stestr": {"eq_version": "", "ge_version": "1.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.5.1", "version": "2.5.1"}, "wsgi-intercept": {"eq_version": "", "ge_version": "1.7.0", "lt_version": "", "ne_version": [], "upper_version": "1.8.1", "version": "1.8.1"}, "Sphinx": {"eq_version": "", "ge_version": "1.6.2", "lt_version": "", "ne_version": ["1.6.6", "1.6.7", "2.1.0"], "upper_version": "2.2.0", "version": "2.2.0"}, "sphinx-feature-classification": {"eq_version": "", "ge_version": "0.2.0", "lt_version": "", "ne_version": [], "upper_version": "0.4.1", "version": "0.4.1"}, "openstackdocstheme": {"eq_version": "", "ge_version": "1.24.0", "lt_version": "", "ne_version": [], "upper_version": "1.31.1", "version": "1.31.1"}, "cliff": {"eq_version": "", "ge_version": "2.14", "lt_version": "", "ne_version": [], "upper_version": "2.16.0", "version": "2.16.0"}, "reno": {"eq_version": "", "ge_version": "2.5.0", "lt_version": "", "ne_version": [], "upper_version": "2.11.3", "version": "2.11.3"}, "whereto": {"eq_version": "", "ge_version": "0.3.0", "lt_version": "", "ne_version": [], "upper_version": "0.4.0", "version": "0.4.0"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/oslo.cache.json b/tools/oos/example/train_cached_file/oslo.cache.json deleted file mode 100644 index 3089db2c5c58fe7686a89b989cfae811c435c554..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/oslo.cache.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "oslo.cache", "version_dict": {"version": "1.37.1", "eq_version": "", "ge_version": "1.26.0", "lt_version": "", "ne_version": [], "upper_version": "1.37.1"}, "deep": {"count": 2, "list": ["aodh", "keystonemiddleware", "oslo.cache"]}, "requires": {"dogpile.cache": {"eq_version": "", "ge_version": "0.6.2", "lt_version": "", "ne_version": [], "upper_version": "0.7.1", "version": "0.7.1"}, "six": {"eq_version": "", "ge_version": "1.11.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "oslo.config": {"eq_version": "", "ge_version": "5.2.0", "lt_version": "", "ne_version": [], "upper_version": "6.11.3", "version": "6.11.3"}, "oslo.i18n": {"eq_version": "", "ge_version": "3.15.3", "lt_version": "", "ne_version": [], "upper_version": "3.24.0", "version": "3.24.0"}, "oslo.log": {"eq_version": "", "ge_version": "3.36.0", "lt_version": "", "ne_version": [], "upper_version": "3.44.3", "version": "3.44.3"}, "oslo.utils": {"eq_version": "", "ge_version": "3.33.0", "lt_version": "", "ne_version": [], "upper_version": "3.41.6", "version": "3.41.6"}, "hacking": {"eq_version": "", "ge_version": "1.1.0", "lt_version": "1.2.0", "ne_version": [], "upper_version": "", "version": "1.1.0"}, "mock": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.5", "version": "3.0.5"}, "oslotest": {"eq_version": "", "ge_version": "3.2.0", "lt_version": "", "ne_version": [], "upper_version": "3.8.1", "version": "3.8.1"}, "pifpaf": {"eq_version": "", "ge_version": "0.10.0", "lt_version": "", "ne_version": [], "upper_version": "2.2.2", "version": "2.2.2"}, "bandit": {"eq_version": "", "ge_version": "1.1.0", 
"lt_version": "1.6.0", "ne_version": [], "upper_version": "", "version": "1.1.0"}, "stestr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.5.1", "version": "2.5.1"}, "python-memcached": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "1.59", "version": "1.59"}, "pymongo": {"eq_version": "", "ge_version": "3.0.2", "lt_version": "", "ne_version": ["3.1"], "upper_version": "3.9.0", "version": "3.9.0"}, "etcd3gw": {"eq_version": "", "ge_version": "0.2.0", "lt_version": "", "ne_version": [], "upper_version": "0.2.4", "version": "0.2.4"}, "openstackdocstheme": {"eq_version": "", "ge_version": "1.18.1", "lt_version": "", "ne_version": [], "upper_version": "1.31.1", "version": "1.31.1"}, "Sphinx": {"eq_version": "", "ge_version": "1.6.2", "lt_version": "", "ne_version": ["1.6.6", "1.6.7"], "upper_version": "2.2.0", "version": "2.2.0"}, "reno": {"eq_version": "", "ge_version": "2.5.0", "lt_version": "", "ne_version": [], "upper_version": "2.11.3", "version": "2.11.3"}, "sphinxcontrib-apidoc": {"eq_version": "", "ge_version": "0.2.0", "lt_version": "", "ne_version": [], "upper_version": "0.3.0", "version": "0.3.0"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/oslo.concurrency.json b/tools/oos/example/train_cached_file/oslo.concurrency.json deleted file mode 100644 index 6927d7459c2a14458edc9e38efd32eed06a4d37c..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/oslo.concurrency.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "oslo.concurrency", "version_dict": {"version": "3.30.1", "eq_version": "", "ge_version": "3.26.0", "lt_version": "", "ne_version": [], "upper_version": "3.30.1"}, "deep": {"count": 14, "list": ["aodh", "futurist", "hacking", "oslosphinx", "openstackdocstheme", "os-api-ref", "stestr", "cliff", "stevedore", "bandit", "oslotest", "os-client-config", "python-glanceclient", "tempest", "oslo.concurrency"]}, "requires": {"pbr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": ["2.1.0"], "upper_version": "5.4.3", "version": "5.4.3"}, "oslo.config": {"eq_version": "", "ge_version": "5.2.0", "lt_version": "", "ne_version": [], "upper_version": "6.11.3", "version": "6.11.3"}, "oslo.i18n": {"eq_version": "", "ge_version": "3.15.3", "lt_version": "", "ne_version": [], "upper_version": "3.24.0", "version": "3.24.0"}, "oslo.utils": {"eq_version": "", "ge_version": "3.33.0", "lt_version": "", "ne_version": [], "upper_version": "3.41.6", "version": "3.41.6"}, "six": {"eq_version": "", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "fasteners": {"eq_version": "", "ge_version": "0.7.0", "lt_version": "", "ne_version": [], "upper_version": "0.14.1", "version": "0.14.1"}, "hacking": {"eq_version": "", "ge_version": "1.1.0", "lt_version": "1.2.0", "ne_version": [], "upper_version": "", "version": "1.1.0"}, "oslotest": {"eq_version": "", "ge_version": "3.2.0", "lt_version": "", "ne_version": [], "upper_version": "3.8.1", "version": "3.8.1"}, "coverage": {"eq_version": "", "ge_version": "4.0", "lt_version": "", "ne_version": ["4.4"], "upper_version": "4.5.4", "version": "4.5.4"}, "fixtures": {"eq_version": "", "ge_version": "3.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.0", "version": "3.0.0"}, "stestr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.5.1", "version": "2.5.1"}, "eventlet": 
{"eq_version": "", "ge_version": "0.18.2", "lt_version": "", "ne_version": ["0.18.3", "0.20.1"], "upper_version": "0.25.2", "version": "0.25.2"}, "bandit": {"eq_version": "", "ge_version": "1.1.0", "lt_version": "1.6.0", "ne_version": [], "upper_version": "", "version": "1.1.0"}, "Sphinx": {"eq_version": "", "ge_version": "1.6.2", "lt_version": "", "ne_version": ["1.6.6", "1.6.7"], "upper_version": "2.2.0", "version": "2.2.0"}, "openstackdocstheme": {"eq_version": "", "ge_version": "1.18.1", "lt_version": "", "ne_version": [], "upper_version": "1.31.1", "version": "1.31.1"}, "reno": {"eq_version": "", "ge_version": "2.5.0", "lt_version": "", "ne_version": [], "upper_version": "2.11.3", "version": "2.11.3"}, "sphinxcontrib-apidoc": {"eq_version": "", "ge_version": "0.2.0", "lt_version": "", "ne_version": [], "upper_version": "0.3.0", "version": "0.3.0"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/oslo.config.json b/tools/oos/example/train_cached_file/oslo.config.json deleted file mode 100644 index be77cd605f02d2052259737289c0ab3e4e09d2c0..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/oslo.config.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "oslo.config", "version_dict": {"version": "6.11.3", "eq_version": "", "ge_version": "5.2.0", "lt_version": "", "ne_version": [], "upper_version": "6.11.3"}, "deep": {"count": 15, "list": ["aodh", "futurist", "hacking", "oslosphinx", "openstackdocstheme", "os-api-ref", "stestr", "cliff", "stevedore", "bandit", "oslotest", "os-client-config", "openstacksdk", "os-service-types", "keystoneauth1", "oslo.config"]}, "requires": {"debtcollector": {"eq_version": "", "ge_version": "1.2.0", "lt_version": "", "ne_version": [], "upper_version": "1.22.0", "version": "1.22.0"}, "netaddr": {"eq_version": "", "ge_version": "0.7.18", "lt_version": "", "ne_version": [], "upper_version": "0.7.19", "version": "0.7.19"}, "six": {"eq_version": "", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "stevedore": {"eq_version": "", "ge_version": "1.20.0", "lt_version": "", "ne_version": [], "upper_version": "1.31.0", "version": "1.31.0"}, "oslo.i18n": {"eq_version": "", "ge_version": "3.15.3", "lt_version": "", "ne_version": [], "upper_version": "3.24.0", "version": "3.24.0"}, "rfc3986": {"eq_version": "", "ge_version": "1.2.0", "lt_version": "", "ne_version": [], "upper_version": "1.3.2", "version": "1.3.2"}, "PyYAML": {"eq_version": "", "ge_version": "3.12", "lt_version": "", "ne_version": [], "upper_version": "5.1.2", "version": "5.1.2"}, "requests": {"eq_version": "", "ge_version": "2.18.0", "lt_version": "", "ne_version": [], "upper_version": "2.22.0", "version": "2.22.0"}, "hacking": {"eq_version": "", "ge_version": "1.1.0", "lt_version": "1.2.0", "ne_version": [], "upper_version": "", "version": "1.1.0"}, "fixtures": {"eq_version": "", "ge_version": "3.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.0", "version": "3.0.0"}, "testscenarios": {"eq_version": "", "ge_version": "0.4", "lt_version": "", "ne_version": [], "upper_version": "0.5.0", "version": "0.5.0"}, "stestr": {"eq_version": "", "ge_version": "2.1.0", "lt_version": "", "ne_version": [], "upper_version": "2.5.1", "version": "2.5.1"}, "testtools": {"eq_version": "", "ge_version": "2.2.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.0", "version": "2.3.0"}, "oslotest": {"eq_version": "", "ge_version": "3.2.0", "lt_version": "", "ne_version": [], 
"upper_version": "3.8.1", "version": "3.8.1"}, "oslo.log": {"eq_version": "", "ge_version": "3.36.0", "lt_version": "", "ne_version": [], "upper_version": "3.44.3", "version": "3.44.3"}, "coverage": {"eq_version": "", "ge_version": "4.0", "lt_version": "", "ne_version": ["4.4"], "upper_version": "4.5.4", "version": "4.5.4"}, "mock": {"eq_version": "", "ge_version": "3.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.5", "version": "3.0.5"}, "requests-mock": {"eq_version": "", "ge_version": "1.5.0", "lt_version": "", "ne_version": [], "upper_version": "1.6.0", "version": "1.6.0"}, "bandit": {"eq_version": "", "ge_version": "1.1.0", "lt_version": "1.6.0", "ne_version": [], "upper_version": "", "version": "1.1.0"}, "Sphinx": {"eq_version": "", "ge_version": "1.6.2", "lt_version": "", "ne_version": ["1.6.6", "1.6.7", "2.1.0"], "upper_version": "2.2.0", "version": "2.2.0"}, "sphinxcontrib-apidoc": {"eq_version": "", "ge_version": "0.2.0", "lt_version": "", "ne_version": [], "upper_version": "0.3.0", "version": "0.3.0"}, "openstackdocstheme": {"eq_version": "", "ge_version": "1.20.0", "lt_version": "", "ne_version": [], "upper_version": "1.31.1", "version": "1.31.1"}, "reno": {"eq_version": "", "ge_version": "2.5.0", "lt_version": "", "ne_version": [], "upper_version": "2.11.3", "version": "2.11.3"}, "doc8": {"eq_version": "", "ge_version": "0.6.0", "lt_version": "", "ne_version": [], "upper_version": "0.8.0", "version": "0.8.0"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/oslo.context.json b/tools/oos/example/train_cached_file/oslo.context.json deleted file mode 100644 index cf10f3f8f0f1828cd751a719a960db318c03540a..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/oslo.context.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "oslo.context", "version_dict": {"version": "2.23.1", "eq_version": "", "ge_version": "2.20.0", "lt_version": "", "ne_version": [], "upper_version": "2.23.1"}, "deep": {"count": 17, "list": ["aodh", "futurist", "hacking", "oslosphinx", "openstackdocstheme", "os-api-ref", "stestr", "cliff", "stevedore", "bandit", "oslotest", "os-client-config", "openstacksdk", "os-service-types", "keystoneauth1", "oslo.config", "oslo.log", "oslo.context"]}, "requires": {"pbr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": ["2.1.0"], "upper_version": "5.4.3", "version": "5.4.3"}, "debtcollector": {"eq_version": "", "ge_version": "1.2.0", "lt_version": "", "ne_version": [], "upper_version": "1.22.0", "version": "1.22.0"}, "fixtures": {"eq_version": "", "ge_version": "3.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.0", "version": "3.0.0"}, "hacking": {"eq_version": "", "ge_version": "1.1.0", "lt_version": "1.2.0", "ne_version": [], "upper_version": "", "version": "1.1.0"}, "oslotest": {"eq_version": "", "ge_version": "3.2.0", "lt_version": "", "ne_version": [], "upper_version": "3.8.1", "version": "3.8.1"}, "coverage": {"eq_version": "", "ge_version": "4.0", "lt_version": "", "ne_version": ["4.4"], "upper_version": "4.5.4", "version": "4.5.4"}, "stestr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.5.1", "version": "2.5.1"}, "bandit": {"eq_version": "", "ge_version": "1.1.0", "lt_version": "1.6.0", "ne_version": [], "upper_version": "", "version": "1.1.0"}, "openstackdocstheme": {"eq_version": "", "ge_version": "1.18.1", "lt_version": "", "ne_version": [], "upper_version": "1.31.1", "version": "1.31.1"}, "Sphinx": 
{"eq_version": "", "ge_version": "1.6.2", "lt_version": "", "ne_version": ["1.6.6", "1.6.7"], "upper_version": "2.2.0", "version": "2.2.0"}, "reno": {"eq_version": "", "ge_version": "2.5.0", "lt_version": "", "ne_version": [], "upper_version": "2.11.3", "version": "2.11.3"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/oslo.db.json b/tools/oos/example/train_cached_file/oslo.db.json deleted file mode 100644 index a3ba53a906b2b794e82777ed60cce4527756ff80..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/oslo.db.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "oslo.db", "version_dict": {"version": "5.0.2", "eq_version": "", "ge_version": "2.1.0", "lt_version": "", "ne_version": [], "upper_version": "5.0.2"}, "deep": {"count": 8, "list": ["aodh", "futurist", "hacking", "oslosphinx", "openstackdocstheme", "os-api-ref", "stestr", "subunit2sql", "oslo.db"]}, "requires": {"pbr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": ["2.1.0"], "upper_version": "5.4.3", "version": "5.4.3"}, "alembic": {"eq_version": "", "ge_version": "0.9.6", "lt_version": "", "ne_version": [], "upper_version": "1.1.0", "version": "1.1.0"}, "debtcollector": {"eq_version": "", "ge_version": "1.2.0", "lt_version": "", "ne_version": [], "upper_version": "1.22.0", "version": "1.22.0"}, "oslo.i18n": {"eq_version": "", "ge_version": "3.15.3", "lt_version": "", "ne_version": [], "upper_version": "3.24.0", "version": "3.24.0"}, "oslo.config": {"eq_version": "", "ge_version": "5.2.0", "lt_version": "", "ne_version": [], "upper_version": "6.11.3", "version": "6.11.3"}, "oslo.utils": {"eq_version": "", "ge_version": "3.33.0", "lt_version": "", "ne_version": [], "upper_version": "3.41.6", "version": "3.41.6"}, "SQLAlchemy": {"eq_version": "", "ge_version": "1.0.10", "lt_version": "", "ne_version": ["1.1.5", "1.1.6", "1.1.7", "1.1.8"], "upper_version": "1.3.8", "version": "1.3.8"}, "sqlalchemy-migrate": {"eq_version": "", "ge_version": "0.11.0", "lt_version": "", "ne_version": [], "upper_version": "0.12.0", "version": "0.12.0"}, "stevedore": {"eq_version": "", "ge_version": "1.20.0", "lt_version": "", "ne_version": [], "upper_version": "1.31.0", "version": "1.31.0"}, "six": {"eq_version": "", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "testresources": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.0.1", "version": "2.0.1"}, "testscenarios": {"eq_version": "", "ge_version": "0.4", "lt_version": "", "ne_version": [], "upper_version": "0.5.0", "version": "0.5.0"}, "hacking": {"eq_version": "", "ge_version": "1.1.0", "lt_version": "1.2.0", "ne_version": [], "upper_version": "", "version": "1.1.0"}, "coverage": {"eq_version": "", "ge_version": "4.0", "lt_version": "", "ne_version": ["4.4"], "upper_version": "4.5.4", "version": "4.5.4"}, "eventlet": {"eq_version": "", "ge_version": "0.18.2", "lt_version": "", "ne_version": ["0.18.3", "0.20.1"], "upper_version": "0.25.2", "version": "0.25.2"}, "fixtures": {"eq_version": "", "ge_version": "3.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.0", "version": "3.0.0"}, "mock": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.5", "version": "3.0.5"}, "python-subunit": {"eq_version": "", "ge_version": "1.0.0", "lt_version": "", "ne_version": [], "upper_version": "1.4.0", "version": "1.4.0"}, "oslotest": {"eq_version": "", 
"ge_version": "3.2.0", "lt_version": "", "ne_version": [], "upper_version": "3.8.1", "version": "3.8.1"}, "oslo.context": {"eq_version": "", "ge_version": "2.19.2", "lt_version": "", "ne_version": [], "upper_version": "2.23.1", "version": "2.23.1"}, "stestr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.5.1", "version": "2.5.1"}, "testtools": {"eq_version": "", "ge_version": "2.2.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.0", "version": "2.3.0"}, "os-testr": {"eq_version": "", "ge_version": "1.0.0", "lt_version": "", "ne_version": [], "upper_version": "1.1.0", "version": "1.1.0"}, "bandit": {"eq_version": "", "ge_version": "1.1.0", "lt_version": "1.6.0", "ne_version": [], "upper_version": "", "version": "1.1.0"}, "pifpaf": {"eq_version": "", "ge_version": "0.10.0", "lt_version": "", "ne_version": [], "upper_version": "2.2.2", "version": "2.2.2"}, "PyMySQL": {"eq_version": "", "ge_version": "0.7.6", "lt_version": "", "ne_version": [], "upper_version": "0.9.3", "version": "0.9.3"}, "psycopg2": {"eq_version": "", "ge_version": "2.7.0", "lt_version": "", "ne_version": [], "upper_version": "2.8.3", "version": "2.8.3"}, "openstackdocstheme": {"eq_version": "", "ge_version": "1.18.1", "lt_version": "", "ne_version": [], "upper_version": "1.31.1", "version": "1.31.1"}, "Sphinx": {"eq_version": "", "ge_version": "1.6.2", "lt_version": "", "ne_version": ["1.6.6", "1.6.7"], "upper_version": "2.2.0", "version": "2.2.0"}, "doc8": {"eq_version": "", "ge_version": "0.6.0", "lt_version": "", "ne_version": [], "upper_version": "0.8.0", "version": "0.8.0"}, "reno": {"eq_version": "", "ge_version": "2.5.0", "lt_version": "", "ne_version": [], "upper_version": "2.11.3", "version": "2.11.3"}, "sphinxcontrib-apidoc": {"eq_version": "", "ge_version": "0.2.0", "lt_version": "", "ne_version": [], "upper_version": "0.3.0", "version": "0.3.0"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/oslo.i18n.json b/tools/oos/example/train_cached_file/oslo.i18n.json deleted file mode 100644 index acd9ffdc7e4815e9c30984636448f3f58b2a84f2..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/oslo.i18n.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "oslo.i18n", "version_dict": {"version": "3.24.0", "eq_version": "", "ge_version": "3.15.3", "lt_version": "", "ne_version": [], "upper_version": "3.24.0"}, "deep": {"count": 16, "list": ["aodh", "futurist", "hacking", "oslosphinx", "openstackdocstheme", "os-api-ref", "stestr", "cliff", "stevedore", "bandit", "oslotest", "os-client-config", "openstacksdk", "os-service-types", "keystoneauth1", "oslo.config", "oslo.i18n"]}, "requires": {"pbr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": ["2.1.0"], "upper_version": "5.4.3", "version": "5.4.3"}, "Babel": {"eq_version": "", "ge_version": "2.3.4", "lt_version": "", "ne_version": ["2.4.0"], "upper_version": "2.7.0", "version": "2.7.0"}, "six": {"eq_version": "", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "hacking": {"eq_version": "", "ge_version": "1.1.0", "lt_version": "1.2.0", "ne_version": [], "upper_version": "", "version": "1.1.0"}, "stestr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.5.1", "version": "2.5.1"}, "mock": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.5", "version": "3.0.5"}, "oslotest": 
{"eq_version": "", "ge_version": "3.2.0", "lt_version": "", "ne_version": [], "upper_version": "3.8.1", "version": "3.8.1"}, "coverage": {"eq_version": "", "ge_version": "4.0", "lt_version": "", "ne_version": ["4.4"], "upper_version": "4.5.4", "version": "4.5.4"}, "testscenarios": {"eq_version": "", "ge_version": "0.4", "lt_version": "", "ne_version": [], "upper_version": "0.5.0", "version": "0.5.0"}, "oslo.config": {"eq_version": "", "ge_version": "5.2.0", "lt_version": "", "ne_version": [], "upper_version": "6.11.3", "version": "6.11.3"}, "bandit": {"eq_version": "", "ge_version": "1.1.0", "lt_version": "1.6.0", "ne_version": [], "upper_version": "", "version": "1.1.0"}, "Sphinx": {"eq_version": "", "ge_version": "1.6.5", "lt_version": "", "ne_version": ["1.6.6", "1.6.7"], "upper_version": "2.2.0", "version": "2.2.0"}, "openstackdocstheme": {"eq_version": "", "ge_version": "1.18.1", "lt_version": "", "ne_version": [], "upper_version": "1.31.1", "version": "1.31.1"}, "reno": {"eq_version": "", "ge_version": "2.5.0", "lt_version": "", "ne_version": [], "upper_version": "2.11.3", "version": "2.11.3"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/oslo.log.json b/tools/oos/example/train_cached_file/oslo.log.json deleted file mode 100644 index e5ab2fef0aee0da83b8503d9c58490c7c28f0180..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/oslo.log.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "oslo.log", "version_dict": {"version": "3.44.3", "eq_version": "", "ge_version": "3.36.0", "lt_version": "", "ne_version": [], "upper_version": "3.44.3"}, "deep": {"count": 16, "list": ["aodh", "futurist", "hacking", "oslosphinx", "openstackdocstheme", "os-api-ref", "stestr", "cliff", "stevedore", "bandit", "oslotest", "os-client-config", "openstacksdk", "os-service-types", "keystoneauth1", "oslo.config", "oslo.log"]}, "requires": {"pbr": {"eq_version": "", "ge_version": "3.1.1", "lt_version": "", "ne_version": [], "upper_version": "5.4.3", "version": "5.4.3"}, "six": {"eq_version": "", "ge_version": "1.11.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "oslo.config": {"eq_version": "", "ge_version": "5.2.0", "lt_version": "", "ne_version": [], "upper_version": "6.11.3", "version": "6.11.3"}, "oslo.context": {"eq_version": "", "ge_version": "2.20.0", "lt_version": "", "ne_version": [], "upper_version": "2.23.1", "version": "2.23.1"}, "oslo.i18n": {"eq_version": "", "ge_version": "3.20.0", "lt_version": "", "ne_version": [], "upper_version": "3.24.0", "version": "3.24.0"}, "oslo.utils": {"eq_version": "", "ge_version": "3.36.0", "lt_version": "", "ne_version": [], "upper_version": "3.41.6", "version": "3.41.6"}, "oslo.serialization": {"eq_version": "", "ge_version": "2.25.0", "lt_version": "", "ne_version": [], "upper_version": "2.29.3", "version": "2.29.3"}, "debtcollector": {"eq_version": "", "ge_version": "1.19.0", "lt_version": "", "ne_version": [], "upper_version": "1.22.0", "version": "1.22.0"}, "pyinotify": {"eq_version": "", "ge_version": "0.9.6", "lt_version": "", "ne_version": [], "upper_version": "0.9.6", "version": "0.9.6"}, "python-dateutil": {"eq_version": "", "ge_version": "2.7.0", "lt_version": "", "ne_version": [], "upper_version": "2.8.0", "version": "2.8.0"}, "monotonic": {"eq_version": "", "ge_version": "1.4", "lt_version": "", "ne_version": [], "upper_version": "1.5", "version": "1.5"}, "hacking": {"eq_version": "", "ge_version": "0.12.0", "lt_version": "0.14", "ne_version": ["0.13.0"], 
"upper_version": "", "version": "0.12.0"}, "stestr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.5.1", "version": "2.5.1"}, "testtools": {"eq_version": "", "ge_version": "2.3.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.0", "version": "2.3.0"}, "mock": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.5", "version": "3.0.5"}, "oslotest": {"eq_version": "", "ge_version": "3.3.0", "lt_version": "", "ne_version": [], "upper_version": "3.8.1", "version": "3.8.1"}, "coverage": {"eq_version": "", "ge_version": "4.5.1", "lt_version": "", "ne_version": [], "upper_version": "4.5.4", "version": "4.5.4"}, "bandit": {"eq_version": "", "ge_version": "1.1.0", "lt_version": "1.6.0", "ne_version": [], "upper_version": "", "version": "1.1.0"}, "fixtures": {"eq_version": "", "ge_version": "3.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.0", "version": "3.0.0"}, "Sphinx": {"eq_version": "", "ge_version": "1.6.2", "lt_version": "", "ne_version": ["1.6.6", "1.6.7", "2.1.0"], "upper_version": "2.2.0", "version": "2.2.0"}, "openstackdocstheme": {"eq_version": "", "ge_version": "1.20.0", "lt_version": "", "ne_version": [], "upper_version": "1.31.1", "version": "1.31.1"}, "reno": {"eq_version": "", "ge_version": "2.5.0", "lt_version": "", "ne_version": [], "upper_version": "2.11.3", "version": "2.11.3"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/oslo.messaging.json b/tools/oos/example/train_cached_file/oslo.messaging.json deleted file mode 100644 index 2671147c063ac188e22f097be7f0610879da1da7..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/oslo.messaging.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "oslo.messaging", "version_dict": {"version": "10.2.4", "eq_version": "", "ge_version": "5.29.0", "lt_version": "", "ne_version": [], "upper_version": "10.2.4"}, "deep": {"count": 2, "list": ["aodh", "keystonemiddleware", "oslo.messaging"]}, "requires": {"pbr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": ["2.1.0"], "upper_version": "5.4.3", "version": "5.4.3"}, "futurist": {"eq_version": "", "ge_version": "1.2.0", "lt_version": "", "ne_version": [], "upper_version": "1.9.0", "version": "1.9.0"}, "oslo.config": {"eq_version": "", "ge_version": "5.2.0", "lt_version": "", "ne_version": [], "upper_version": "6.11.3", "version": "6.11.3"}, "oslo.log": {"eq_version": "", "ge_version": "3.36.0", "lt_version": "", "ne_version": [], "upper_version": "3.44.3", "version": "3.44.3"}, "oslo.utils": {"eq_version": "", "ge_version": "3.33.0", "lt_version": "", "ne_version": [], "upper_version": "3.41.6", "version": "3.41.6"}, "oslo.serialization": {"eq_version": "", "ge_version": "2.18.0", "lt_version": "", "ne_version": ["2.19.1"], "upper_version": "2.29.3", "version": "2.29.3"}, "oslo.service": {"eq_version": "", "ge_version": "1.24.0", "lt_version": "", "ne_version": ["1.28.1"], "upper_version": "1.40.2", "version": "1.40.2"}, "stevedore": {"eq_version": "", "ge_version": "1.20.0", "lt_version": "", "ne_version": [], "upper_version": "1.31.0", "version": "1.31.0"}, "debtcollector": {"eq_version": "", "ge_version": "1.2.0", "lt_version": "", "ne_version": [], "upper_version": "1.22.0", "version": "1.22.0"}, "monotonic": {"eq_version": "", "ge_version": "0.6", "lt_version": "", "ne_version": [], "upper_version": "1.5", "version": "1.5"}, "six": {"eq_version": "", "ge_version": "1.10.0", 
"lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "cachetools": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.1.1", "version": "3.1.1"}, "WebOb": {"eq_version": "", "ge_version": "1.7.1", "lt_version": "", "ne_version": [], "upper_version": "1.8.5", "version": "1.8.5"}, "PyYAML": {"eq_version": "", "ge_version": "3.12", "lt_version": "", "ne_version": [], "upper_version": "5.1.2", "version": "5.1.2"}, "amqp": {"eq_version": "", "ge_version": "2.4.1", "lt_version": "", "ne_version": [], "upper_version": "2.5.2", "version": "2.5.2"}, "kombu": {"eq_version": "", "ge_version": "4.6.1", "lt_version": "", "ne_version": ["4.0.2"], "upper_version": "4.6.6", "version": "4.6.6"}, "oslo.middleware": {"eq_version": "", "ge_version": "3.31.0", "lt_version": "", "ne_version": [], "upper_version": "3.38.1", "version": "3.38.1"}, "hacking": {"eq_version": "", "ge_version": "1.1.0", "lt_version": "1.2.0", "ne_version": [], "upper_version": "", "version": "1.1.0"}, "fixtures": {"eq_version": "", "ge_version": "3.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.0", "version": "3.0.0"}, "mock": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.5", "version": "3.0.5"}, "stestr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.5.1", "version": "2.5.1"}, "testscenarios": {"eq_version": "", "ge_version": "0.4", "lt_version": "", "ne_version": [], "upper_version": "0.5.0", "version": "0.5.0"}, "testtools": {"eq_version": "", "ge_version": "2.2.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.0", "version": "2.3.0"}, "oslotest": {"eq_version": "", "ge_version": "3.2.0", "lt_version": "", "ne_version": [], "upper_version": "3.8.1", "version": "3.8.1"}, "pifpaf": {"eq_version": "", "ge_version": "2.2.0", "lt_version": "", "ne_version": [], "upper_version": "2.2.2", "version": "2.2.2"}, "confluent-kafka": {"eq_version": "", "ge_version": "0.11.6", "lt_version": "", "ne_version": [], "upper_version": "1.1.0", "version": "1.1.0"}, "coverage": {"eq_version": "", "ge_version": "4.0", "lt_version": "", "ne_version": ["4.4"], "upper_version": "4.5.4", "version": "4.5.4"}, "pyngus": {"eq_version": "", "ge_version": "2.2.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.0", "version": "2.3.0"}, "bandit": {"eq_version": "", "ge_version": "1.1.0", "lt_version": "1.6.0", "ne_version": [], "upper_version": "", "version": "1.1.0"}, "eventlet": {"eq_version": "", "ge_version": "0.18.2", "lt_version": "", "ne_version": ["0.18.3", "0.20.1"], "upper_version": "0.25.2", "version": "0.25.2"}, "greenlet": {"eq_version": "", "ge_version": "0.4.10", "lt_version": "", "ne_version": [], "upper_version": "0.4.15", "version": "0.4.15"}, "openstackdocstheme": {"eq_version": "", "ge_version": "1.20.0", "lt_version": "", "ne_version": [], "upper_version": "1.31.1", "version": "1.31.1"}, "Sphinx": {"eq_version": "", "ge_version": "1.6.2", "lt_version": "", "ne_version": ["1.6.6", "1.6.7", "2.1.0"], "upper_version": "2.2.0", "version": "2.2.0"}, "reno": {"eq_version": "", "ge_version": "2.5.0", "lt_version": "", "ne_version": [], "upper_version": "2.11.3", "version": "2.11.3"}, "tenacity": {"eq_version": "", "ge_version": "3.2.1", "lt_version": "", "ne_version": [], "upper_version": "5.1.1", "version": "5.1.1"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/oslo.middleware.json 
b/tools/oos/example/train_cached_file/oslo.middleware.json deleted file mode 100644 index 26abcf2d80e61f7a78b6a3950a267a2673d1d1ea..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/oslo.middleware.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "oslo.middleware", "version_dict": {"version": "3.38.1", "eq_version": "", "ge_version": "3.31.0", "lt_version": "", "ne_version": [], "upper_version": "3.38.1"}, "deep": {"count": 3, "list": ["aodh", "keystonemiddleware", "oslo.messaging", "oslo.middleware"]}, "requires": {"pbr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": ["2.1.0"], "upper_version": "5.4.3", "version": "5.4.3"}, "Jinja2": {"eq_version": "", "ge_version": "2.10", "lt_version": "", "ne_version": [], "upper_version": "2.10.1", "version": "2.10.1"}, "oslo.config": {"eq_version": "", "ge_version": "5.2.0", "lt_version": "", "ne_version": [], "upper_version": "6.11.3", "version": "6.11.3"}, "oslo.context": {"eq_version": "", "ge_version": "2.19.2", "lt_version": "", "ne_version": [], "upper_version": "2.23.1", "version": "2.23.1"}, "oslo.i18n": {"eq_version": "", "ge_version": "3.15.3", "lt_version": "", "ne_version": [], "upper_version": "3.24.0", "version": "3.24.0"}, "oslo.utils": {"eq_version": "", "ge_version": "3.33.0", "lt_version": "", "ne_version": [], "upper_version": "3.41.6", "version": "3.41.6"}, "six": {"eq_version": "", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "stevedore": {"eq_version": "", "ge_version": "1.20.0", "lt_version": "", "ne_version": [], "upper_version": "1.31.0", "version": "1.31.0"}, "WebOb": {"eq_version": "", "ge_version": "1.8.0", "lt_version": "", "ne_version": [], "upper_version": "1.8.5", "version": "1.8.5"}, "debtcollector": {"eq_version": "", "ge_version": "1.2.0", "lt_version": "", "ne_version": [], "upper_version": "1.22.0", "version": "1.22.0"}, "statsd": {"eq_version": "", "ge_version": "3.2.1", "lt_version": "", "ne_version": [], "upper_version": "3.3.0", "version": "3.3.0"}, "fixtures": {"eq_version": "", "ge_version": "3.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.0", "version": "3.0.0"}, "hacking": {"eq_version": "", "ge_version": "1.1.0", "lt_version": "1.2.0", "ne_version": [], "upper_version": "", "version": "1.1.0"}, "mock": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.5", "version": "3.0.5"}, "oslotest": {"eq_version": "", "ge_version": "3.2.0", "lt_version": "", "ne_version": [], "upper_version": "3.8.1", "version": "3.8.1"}, "testtools": {"eq_version": "", "ge_version": "2.2.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.0", "version": "2.3.0"}, "coverage": {"eq_version": "", "ge_version": "4.0", "lt_version": "", "ne_version": ["4.4"], "upper_version": "4.5.4", "version": "4.5.4"}, "oslo.serialization": {"eq_version": "", "ge_version": "2.18.0", "lt_version": "", "ne_version": ["2.19.1"], "upper_version": "2.29.3", "version": "2.29.3"}, "bandit": {"eq_version": "", "ge_version": "1.1.0", "lt_version": "1.6.0", "ne_version": [], "upper_version": "", "version": "1.1.0"}, "stestr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.5.1", "version": "2.5.1"}, "openstackdocstheme": {"eq_version": "", "ge_version": "1.20.0", "lt_version": "", "ne_version": [], "upper_version": "1.31.1", "version": "1.31.1"}, "Sphinx": {"eq_version": "", "ge_version": "1.6.2", "lt_version": "", 
"ne_version": ["1.6.6", "1.6.7", "2.1.0"], "upper_version": "2.2.0", "version": "2.2.0"}, "reno": {"eq_version": "", "ge_version": "2.5.0", "lt_version": "", "ne_version": [], "upper_version": "2.11.3", "version": "2.11.3"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/oslo.policy.json b/tools/oos/example/train_cached_file/oslo.policy.json deleted file mode 100644 index 9a26c6b77774e22540ba7e929faf2cf344900af6..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/oslo.policy.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "oslo.policy", "version_dict": {"version": "2.3.4", "eq_version": "", "ge_version": "0.5.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.4"}, "deep": {"count": 1, "list": ["aodh", "oslo.policy"]}, "requires": {"requests": {"eq_version": "", "ge_version": "2.14.2", "lt_version": "", "ne_version": [], "upper_version": "2.22.0", "version": "2.22.0"}, "oslo.config": {"eq_version": "", "ge_version": "5.2.0", "lt_version": "", "ne_version": [], "upper_version": "6.11.3", "version": "6.11.3"}, "oslo.context": {"eq_version": "", "ge_version": "2.22.0", "lt_version": "", "ne_version": [], "upper_version": "2.23.1", "version": "2.23.1"}, "oslo.i18n": {"eq_version": "", "ge_version": "3.15.3", "lt_version": "", "ne_version": [], "upper_version": "3.24.0", "version": "3.24.0"}, "oslo.serialization": {"eq_version": "", "ge_version": "2.18.0", "lt_version": "", "ne_version": ["2.19.1"], "upper_version": "2.29.3", "version": "2.29.3"}, "PyYAML": {"eq_version": "", "ge_version": "3.12", "lt_version": "", "ne_version": [], "upper_version": "5.1.2", "version": "5.1.2"}, "six": {"eq_version": "", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "stevedore": {"eq_version": "", "ge_version": "1.20.0", "lt_version": "", "ne_version": [], "upper_version": "1.31.0", "version": "1.31.0"}, "hacking": {"eq_version": "", "ge_version": "1.1.0", "lt_version": "1.2.0", "ne_version": [], "upper_version": "", "version": "1.1.0"}, "oslotest": {"eq_version": "", "ge_version": "3.2.0", "lt_version": "", "ne_version": [], "upper_version": "3.8.1", "version": "3.8.1"}, "requests-mock": {"eq_version": "", "ge_version": "1.2.0", "lt_version": "", "ne_version": [], "upper_version": "1.6.0", "version": "1.6.0"}, "stestr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.5.1", "version": "2.5.1"}, "coverage": {"eq_version": "", "ge_version": "4.0", "lt_version": "", "ne_version": ["4.4"], "upper_version": "4.5.4", "version": "4.5.4"}, "bandit": {"eq_version": "", "ge_version": "1.1.0", "lt_version": "1.6.0", "ne_version": [], "upper_version": "", "version": "1.1.0"}, "openstackdocstheme": {"eq_version": "", "ge_version": "1.18.1", "lt_version": "", "ne_version": [], "upper_version": "1.31.1", "version": "1.31.1"}, "Sphinx": {"eq_version": "", "ge_version": "1.6.5", "lt_version": "", "ne_version": ["1.6.6", "1.6.7"], "upper_version": "2.2.0", "version": "2.2.0"}, "sphinxcontrib-apidoc": {"eq_version": "", "ge_version": "0.2.0", "lt_version": "", "ne_version": [], "upper_version": "0.3.0", "version": "0.3.0"}, "reno": {"eq_version": "", "ge_version": "2.5.0", "lt_version": "", "ne_version": [], "upper_version": "2.11.3", "version": "2.11.3"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/oslo.privsep.json b/tools/oos/example/train_cached_file/oslo.privsep.json deleted file mode 100644 index 
afea7a9b288c26f04c5d2c39adf61e1dc685a4bd..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/oslo.privsep.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "oslo.privsep", "version_dict": {"version": "1.33.5", "eq_version": "", "ge_version": "1.32.0", "lt_version": "", "ne_version": [], "upper_version": "1.33.5"}, "deep": {"count": 1, "list": ["ceilometer", "oslo.privsep"]}, "requires": {"oslo.log": {"eq_version": "", "ge_version": "3.36.0", "lt_version": "", "ne_version": [], "upper_version": "3.44.3", "version": "3.44.3"}, "oslo.i18n": {"eq_version": "", "ge_version": "3.15.3", "lt_version": "", "ne_version": [], "upper_version": "3.24.0", "version": "3.24.0"}, "oslo.config": {"eq_version": "", "ge_version": "5.2.0", "lt_version": "", "ne_version": [], "upper_version": "6.11.3", "version": "6.11.3"}, "oslo.utils": {"eq_version": "", "ge_version": "3.33.0", "lt_version": "", "ne_version": [], "upper_version": "3.41.6", "version": "3.41.6"}, "cffi": {"eq_version": "", "ge_version": "1.7.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.3", "version": "1.12.3"}, "eventlet": {"eq_version": "", "ge_version": "0.18.2", "lt_version": "", "ne_version": ["0.18.3", "0.20.1"], "upper_version": "0.25.2", "version": "0.25.2"}, "greenlet": {"eq_version": "", "ge_version": "0.4.10", "lt_version": "", "ne_version": [], "upper_version": "0.4.15", "version": "0.4.15"}, "msgpack": {"eq_version": "", "ge_version": "0.5.0", "lt_version": "", "ne_version": [], "upper_version": "0.6.1", "version": "0.6.1"}, "hacking": {"eq_version": "", "ge_version": "1.1.0", "lt_version": "1.2.0", "ne_version": [], "upper_version": "", "version": "1.1.0"}, "oslotest": {"eq_version": "", "ge_version": "3.2.0", "lt_version": "", "ne_version": [], "upper_version": "3.8.1", "version": "3.8.1"}, "mock": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.5", "version": "3.0.5"}, "fixtures": {"eq_version": "", "ge_version": "3.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.0", "version": "3.0.0"}, "stestr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.5.1", "version": "2.5.1"}, "bandit": {"eq_version": "", "ge_version": "1.1.0", "lt_version": "1.6.0", "ne_version": [], "upper_version": "", "version": "1.1.0"}, "openstackdocstheme": {"eq_version": "", "ge_version": "1.18.1", "lt_version": "", "ne_version": [], "upper_version": "1.31.1", "version": "1.31.1"}, "Sphinx": {"eq_version": "", "ge_version": "1.6.2", "lt_version": "", "ne_version": ["1.6.6", "1.6.7"], "upper_version": "2.2.0", "version": "2.2.0"}, "reno": {"eq_version": "", "ge_version": "2.5.0", "lt_version": "", "ne_version": [], "upper_version": "2.11.3", "version": "2.11.3"}, "sphinxcontrib-apidoc": {"eq_version": "", "ge_version": "0.2.0", "lt_version": "", "ne_version": [], "upper_version": "0.3.0", "version": "0.3.0"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/oslo.reports.json b/tools/oos/example/train_cached_file/oslo.reports.json deleted file mode 100644 index c55de716e57c01768d293f553d86bd6fa19e1f2c..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/oslo.reports.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "oslo.reports", "version_dict": {"version": "1.30.0", "eq_version": "", "ge_version": "1.18.0", "lt_version": "", "ne_version": [], "upper_version": "1.30.0"}, "deep": {"count": 1, "list": ["ceilometer", "oslo.reports"]}, "requires": {"pbr": 
{"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": ["2.1.0"], "upper_version": "5.4.3", "version": "5.4.3"}, "Jinja2": {"eq_version": "", "ge_version": "2.10", "lt_version": "", "ne_version": [], "upper_version": "2.10.1", "version": "2.10.1"}, "oslo.serialization": {"eq_version": "", "ge_version": "2.18.0", "lt_version": "", "ne_version": ["2.19.1"], "upper_version": "2.29.3", "version": "2.29.3"}, "psutil": {"eq_version": "", "ge_version": "3.2.2", "lt_version": "", "ne_version": [], "upper_version": "5.6.3", "version": "5.6.3"}, "six": {"eq_version": "", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "oslo.i18n": {"eq_version": "", "ge_version": "3.15.3", "lt_version": "", "ne_version": [], "upper_version": "3.24.0", "version": "3.24.0"}, "oslo.utils": {"eq_version": "", "ge_version": "3.33.0", "lt_version": "", "ne_version": [], "upper_version": "3.41.6", "version": "3.41.6"}, "hacking": {"eq_version": "", "ge_version": "1.1.0", "lt_version": "1.2.0", "ne_version": [], "upper_version": "", "version": "1.1.0"}, "oslotest": {"eq_version": "", "ge_version": "3.2.0", "lt_version": "", "ne_version": [], "upper_version": "3.8.1", "version": "3.8.1"}, "stestr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.5.1", "version": "2.5.1"}, "oslo.config": {"eq_version": "", "ge_version": "5.2.0", "lt_version": "", "ne_version": [], "upper_version": "6.11.3", "version": "6.11.3"}, "eventlet": {"eq_version": "", "ge_version": "0.18.2", "lt_version": "", "ne_version": ["0.18.3", "0.20.1"], "upper_version": "0.25.2", "version": "0.25.2"}, "greenlet": {"eq_version": "", "ge_version": "0.4.10", "lt_version": "", "ne_version": [], "upper_version": "0.4.15", "version": "0.4.15"}, "coverage": {"eq_version": "", "ge_version": "4.0", "lt_version": "", "ne_version": ["4.4"], "upper_version": "4.5.4", "version": "4.5.4"}, "bandit": {"eq_version": "", "ge_version": "1.1.0", "lt_version": "", "ne_version": [], "upper_version": "", "version": "1.1.0"}, "openstackdocstheme": {"eq_version": "", "ge_version": "1.18.1", "lt_version": "", "ne_version": [], "upper_version": "1.31.1", "version": "1.31.1"}, "Sphinx": {"eq_version": "", "ge_version": "1.6.2", "lt_version": "", "ne_version": ["1.6.6", "1.6.7"], "upper_version": "2.2.0", "version": "2.2.0"}, "reno": {"eq_version": "", "ge_version": "2.5.0", "lt_version": "", "ne_version": [], "upper_version": "2.11.3", "version": "2.11.3"}, "sphinxcontrib-apidoc": {"eq_version": "", "ge_version": "0.2.0", "lt_version": "", "ne_version": [], "upper_version": "0.3.0", "version": "0.3.0"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/oslo.rootwrap.json b/tools/oos/example/train_cached_file/oslo.rootwrap.json deleted file mode 100644 index e2db393c413c2b365763ccfa28b9af9b78dee783..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/oslo.rootwrap.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "oslo.rootwrap", "version_dict": {"version": "5.16.1", "eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "5.16.1"}, "deep": {"count": 1, "list": ["ceilometer", "oslo.rootwrap"]}, "requires": {"six": {"eq_version": "", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "hacking": {"eq_version": "", "ge_version": "1.1.0", "lt_version": "1.2.0", "ne_version": [], "upper_version": "", "version": 
"1.1.0"}, "fixtures": {"eq_version": "", "ge_version": "3.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.0", "version": "3.0.0"}, "testtools": {"eq_version": "", "ge_version": "2.2.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.0", "version": "2.3.0"}, "stestr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.5.1", "version": "2.5.1"}, "Sphinx": {"eq_version": "", "ge_version": "1.6.2", "lt_version": "", "ne_version": ["1.6.6", "1.6.7"], "upper_version": "2.2.0", "version": "2.2.0"}, "openstackdocstheme": {"eq_version": "", "ge_version": "1.18.1", "lt_version": "", "ne_version": [], "upper_version": "1.31.1", "version": "1.31.1"}, "oslotest": {"eq_version": "", "ge_version": "3.2.0", "lt_version": "", "ne_version": [], "upper_version": "3.8.1", "version": "3.8.1"}, "mock": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.5", "version": "3.0.5"}, "eventlet": {"eq_version": "", "ge_version": "0.18.2", "lt_version": "", "ne_version": ["0.18.3", "0.20.1"], "upper_version": "0.25.2", "version": "0.25.2"}, "reno": {"eq_version": "", "ge_version": "2.5.0", "lt_version": "", "ne_version": [], "upper_version": "2.11.3", "version": "2.11.3"}, "bandit": {"eq_version": "", "ge_version": "1.1.0", "lt_version": "", "ne_version": [], "upper_version": "", "version": "1.1.0"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/oslo.serialization.json b/tools/oos/example/train_cached_file/oslo.serialization.json deleted file mode 100644 index da6df7680a03a7d27a75b87af191e2760539c8de..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/oslo.serialization.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "oslo.serialization", "version_dict": {"version": "2.29.3", "eq_version": "", "ge_version": "2.25.0", "lt_version": "", "ne_version": [], "upper_version": "2.29.3"}, "deep": {"count": 17, "list": ["aodh", "futurist", "hacking", "oslosphinx", "openstackdocstheme", "os-api-ref", "stestr", "cliff", "stevedore", "bandit", "oslotest", "os-client-config", "openstacksdk", "os-service-types", "keystoneauth1", "oslo.config", "oslo.log", "oslo.serialization"]}, "requires": {"pbr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": ["2.1.0"], "upper_version": "5.4.3", "version": "5.4.3"}, "six": {"eq_version": "", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "msgpack": {"eq_version": "", "ge_version": "0.5.2", "lt_version": "", "ne_version": [], "upper_version": "0.6.1", "version": "0.6.1"}, "oslo.utils": {"eq_version": "", "ge_version": "3.33.0", "lt_version": "", "ne_version": [], "upper_version": "3.41.6", "version": "3.41.6"}, "pytz": {"eq_version": "", "ge_version": "2013.6", "lt_version": "", "ne_version": [], "upper_version": "2019.2", "version": "2019.2"}, "PyYAML": {"eq_version": "", "ge_version": "3.12", "lt_version": "", "ne_version": [], "upper_version": "5.1.2", "version": "5.1.2"}, "hacking": {"eq_version": "", "ge_version": "1.1.0", "lt_version": "1.2.0", "ne_version": [], "upper_version": "", "version": "1.1.0"}, "ipaddress": {"eq_version": "", "ge_version": "1.0.17", "lt_version": "", "ne_version": [], "upper_version": "1.0.22", "version": "1.0.22"}, "mock": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.5", "version": "3.0.5"}, "netaddr": {"eq_version": "", "ge_version": "0.7.18", 
"lt_version": "", "ne_version": [], "upper_version": "0.7.19", "version": "0.7.19"}, "stestr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.5.1", "version": "2.5.1"}, "oslotest": {"eq_version": "", "ge_version": "3.2.0", "lt_version": "", "ne_version": [], "upper_version": "3.8.1", "version": "3.8.1"}, "oslo.i18n": {"eq_version": "", "ge_version": "3.15.3", "lt_version": "", "ne_version": [], "upper_version": "3.24.0", "version": "3.24.0"}, "coverage": {"eq_version": "", "ge_version": "4.0", "lt_version": "", "ne_version": ["4.4"], "upper_version": "4.5.4", "version": "4.5.4"}, "bandit": {"eq_version": "", "ge_version": "1.1.0", "lt_version": "", "ne_version": [], "upper_version": "", "version": "1.1.0"}, "openstackdocstheme": {"eq_version": "", "ge_version": "1.18.1", "lt_version": "", "ne_version": [], "upper_version": "1.31.1", "version": "1.31.1"}, "Sphinx": {"eq_version": "", "ge_version": "1.6.2", "lt_version": "", "ne_version": ["1.6.6", "1.6.7"], "upper_version": "2.2.0", "version": "2.2.0"}, "reno": {"eq_version": "", "ge_version": "2.5.0", "lt_version": "", "ne_version": [], "upper_version": "2.11.3", "version": "2.11.3"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/oslo.service.json b/tools/oos/example/train_cached_file/oslo.service.json deleted file mode 100644 index 594d91e1ef96a46b11d8a3524b567523d63ae5ec..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/oslo.service.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "oslo.service", "version_dict": {"version": "1.40.2", "eq_version": "", "ge_version": "1.24.0", "lt_version": "", "ne_version": ["1.28.1"], "upper_version": "1.40.2"}, "deep": {"count": 3, "list": ["aodh", "keystonemiddleware", "oslo.messaging", "oslo.service"]}, "requires": {"WebOb": {"eq_version": "", "ge_version": "1.7.1", "lt_version": "", "ne_version": [], "upper_version": "1.8.5", "version": "1.8.5"}, "debtcollector": {"eq_version": "", "ge_version": "1.2.0", "lt_version": "", "ne_version": [], "upper_version": "1.22.0", "version": "1.22.0"}, "eventlet": {"eq_version": "", "ge_version": "0.18.2", "lt_version": "", "ne_version": ["0.18.3", "0.20.1"], "upper_version": "0.25.2", "version": "0.25.2"}, "fixtures": {"eq_version": "", "ge_version": "3.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.0", "version": "3.0.0"}, "greenlet": {"eq_version": "", "ge_version": "0.4.10", "lt_version": "", "ne_version": [], "upper_version": "0.4.15", "version": "0.4.15"}, "monotonic": {"eq_version": "", "ge_version": "0.6", "lt_version": "", "ne_version": [], "upper_version": "1.5", "version": "1.5"}, "oslo.utils": {"eq_version": "", "ge_version": "3.40.2", "lt_version": "", "ne_version": [], "upper_version": "3.41.6", "version": "3.41.6"}, "oslo.concurrency": {"eq_version": "", "ge_version": "3.25.0", "lt_version": "", "ne_version": [], "upper_version": "3.30.1", "version": "3.30.1"}, "oslo.config": {"eq_version": "", "ge_version": "5.1.0", "lt_version": "", "ne_version": [], "upper_version": "6.11.3", "version": "6.11.3"}, "oslo.log": {"eq_version": "", "ge_version": "3.36.0", "lt_version": "", "ne_version": [], "upper_version": "3.44.3", "version": "3.44.3"}, "six": {"eq_version": "", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "oslo.i18n": {"eq_version": "", "ge_version": "3.15.3", "lt_version": "", "ne_version": [], "upper_version": "3.24.0", "version": "3.24.0"}, "PasteDeploy": 
{"eq_version": "", "ge_version": "1.5.0", "lt_version": "", "ne_version": [], "upper_version": "2.0.1", "version": "2.0.1"}, "Routes": {"eq_version": "", "ge_version": "2.3.1", "lt_version": "", "ne_version": [], "upper_version": "2.4.1", "version": "2.4.1"}, "Paste": {"eq_version": "", "ge_version": "2.0.2", "lt_version": "", "ne_version": [], "upper_version": "3.2.0", "version": "3.2.0"}, "Yappi": {"eq_version": "", "ge_version": "1.0", "lt_version": "", "ne_version": [], "upper_version": "", "version": "1.0"}, "hacking": {"eq_version": "", "ge_version": "1.1.0", "lt_version": "1.2.0", "ne_version": [], "upper_version": "", "version": "1.1.0"}, "mock": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.5", "version": "3.0.5"}, "oslotest": {"eq_version": "", "ge_version": "3.2.0", "lt_version": "", "ne_version": [], "upper_version": "3.8.1", "version": "3.8.1"}, "requests": {"eq_version": "", "ge_version": "2.14.2", "lt_version": "", "ne_version": [], "upper_version": "2.22.0", "version": "2.22.0"}, "stestr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.5.1", "version": "2.5.1"}, "doc8": {"eq_version": "", "ge_version": "0.6.0", "lt_version": "", "ne_version": [], "upper_version": "0.8.0", "version": "0.8.0"}, "coverage": {"eq_version": "", "ge_version": "4.0", "lt_version": "", "ne_version": ["4.4"], "upper_version": "4.5.4", "version": "4.5.4"}, "bandit": {"eq_version": "", "ge_version": "1.1.0", "lt_version": "1.6.0", "ne_version": [], "upper_version": "", "version": "1.1.0"}, "openstackdocstheme": {"eq_version": "", "ge_version": "1.18.1", "lt_version": "", "ne_version": [], "upper_version": "1.31.1", "version": "1.31.1"}, "Sphinx": {"eq_version": "", "ge_version": "1.6.2", "lt_version": "", "ne_version": ["1.6.6", "1.6.7"], "upper_version": "2.2.0", "version": "2.2.0"}, "reno": {"eq_version": "", "ge_version": "2.5.0", "lt_version": "", "ne_version": [], "upper_version": "2.11.3", "version": "2.11.3"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/oslo.upgradecheck.json b/tools/oos/example/train_cached_file/oslo.upgradecheck.json deleted file mode 100644 index e485604888a1099396f386ff8e3ceef9eb077f31..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/oslo.upgradecheck.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "oslo.upgradecheck", "version_dict": {"version": "0.3.2", "eq_version": "", "ge_version": "0.1.1", "lt_version": "", "ne_version": [], "upper_version": "0.3.2"}, "deep": {"count": 1, "list": ["aodh", "oslo.upgradecheck"]}, "requires": {"Babel": {"eq_version": "", "ge_version": "1.3", "lt_version": "", "ne_version": [], "upper_version": "2.7.0", "version": "2.7.0"}, "oslo.config": {"eq_version": "", "ge_version": "5.2.0", "lt_version": "", "ne_version": [], "upper_version": "6.11.3", "version": "6.11.3"}, "oslo.i18n": {"eq_version": "", "ge_version": "3.15.3", "lt_version": "", "ne_version": [], "upper_version": "3.24.0", "version": "3.24.0"}, "PrettyTable": {"eq_version": "", "ge_version": "0.7.1", "lt_version": "0.8", "ne_version": [], "upper_version": "", "version": "0.7.1"}, "hacking": {"eq_version": "", "ge_version": "0.10.0", "lt_version": "0.11", "ne_version": [], "upper_version": "", "version": "0.10.0"}, "oslotest": {"eq_version": "", "ge_version": "1.5.1", "lt_version": "", "ne_version": [], "upper_version": "3.8.1", "version": "3.8.1"}, "stestr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": 
"", "ne_version": [], "upper_version": "2.5.1", "version": "2.5.1"}, "Sphinx": {"eq_version": "", "ge_version": "1.6.2", "lt_version": "", "ne_version": ["1.6.6", "1.6.7"], "upper_version": "2.2.0", "version": "2.2.0"}, "reno": {"eq_version": "", "ge_version": "2.5.0", "lt_version": "", "ne_version": [], "upper_version": "2.11.3", "version": "2.11.3"}, "openstackdocstheme": {"eq_version": "", "ge_version": "1.18.1", "lt_version": "", "ne_version": [], "upper_version": "1.31.1", "version": "1.31.1"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/oslo.utils.json b/tools/oos/example/train_cached_file/oslo.utils.json deleted file mode 100644 index 29b86cdf1acfbea662f3770e97901f4bea7c9bd2..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/oslo.utils.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "oslo.utils", "version_dict": {"version": "3.41.6", "eq_version": "", "ge_version": "3.36.0", "lt_version": "", "ne_version": [], "upper_version": "3.41.6"}, "deep": {"count": 17, "list": ["aodh", "futurist", "hacking", "oslosphinx", "openstackdocstheme", "os-api-ref", "stestr", "cliff", "stevedore", "bandit", "oslotest", "os-client-config", "openstacksdk", "os-service-types", "keystoneauth1", "oslo.config", "oslo.log", "oslo.utils"]}, "requires": {"pbr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": ["2.1.0"], "upper_version": "5.4.3", "version": "5.4.3"}, "six": {"eq_version": "", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "iso8601": {"eq_version": "", "ge_version": "0.1.11", "lt_version": "", "ne_version": [], "upper_version": "0.1.12", "version": "0.1.12"}, "oslo.i18n": {"eq_version": "", "ge_version": "3.15.3", "lt_version": "", "ne_version": [], "upper_version": "3.24.0", "version": "3.24.0"}, "monotonic": {"eq_version": "", "ge_version": "0.6", "lt_version": "", "ne_version": [], "upper_version": "1.5", "version": "1.5"}, "pytz": {"eq_version": "", "ge_version": "2013.6", "lt_version": "", "ne_version": [], "upper_version": "2019.2", "version": "2019.2"}, "netaddr": {"eq_version": "", "ge_version": "0.7.18", "lt_version": "", "ne_version": [], "upper_version": "0.7.19", "version": "0.7.19"}, "netifaces": {"eq_version": "", "ge_version": "0.10.4", "lt_version": "", "ne_version": [], "upper_version": "0.10.9", "version": "0.10.9"}, "debtcollector": {"eq_version": "", "ge_version": "1.2.0", "lt_version": "", "ne_version": [], "upper_version": "1.22.0", "version": "1.22.0"}, "pyparsing": {"eq_version": "", "ge_version": "2.1.0", "lt_version": "", "ne_version": [], "upper_version": "2.4.2", "version": "2.4.2"}, "hacking": {"eq_version": "", "ge_version": "1.1.0", "lt_version": "1.2.0", "ne_version": [], "upper_version": "", "version": "1.1.0"}, "eventlet": {"eq_version": "", "ge_version": "0.18.2", "lt_version": "", "ne_version": ["0.18.3", "0.20.1", "0.21.0", "0.23.0"], "upper_version": "0.25.2", "version": "0.25.2"}, "fixtures": {"eq_version": "", "ge_version": "3.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.0", "version": "3.0.0"}, "testscenarios": {"eq_version": "", "ge_version": "0.4", "lt_version": "", "ne_version": [], "upper_version": "0.5.0", "version": "0.5.0"}, "testtools": {"eq_version": "", "ge_version": "2.2.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.0", "version": "2.3.0"}, "oslotest": {"eq_version": "", "ge_version": "3.2.0", "lt_version": "", "ne_version": [], "upper_version": "3.8.1", 
"version": "3.8.1"}, "ddt": {"eq_version": "", "ge_version": "1.0.1", "lt_version": "", "ne_version": [], "upper_version": "1.2.1", "version": "1.2.1"}, "stestr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.5.1", "version": "2.5.1"}, "coverage": {"eq_version": "", "ge_version": "4.0", "lt_version": "", "ne_version": ["4.4"], "upper_version": "4.5.4", "version": "4.5.4"}, "mock": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.5", "version": "3.0.5"}, "oslo.config": {"eq_version": "", "ge_version": "5.2.0", "lt_version": "", "ne_version": [], "upper_version": "6.11.3", "version": "6.11.3"}, "bandit": {"eq_version": "", "ge_version": "1.1.0", "lt_version": "1.6.0", "ne_version": [], "upper_version": "", "version": "1.1.0"}, "Sphinx": {"eq_version": "", "ge_version": "1.6.2", "lt_version": "", "ne_version": ["1.6.6", "1.6.7"], "upper_version": "2.2.0", "version": "2.2.0"}, "openstackdocstheme": {"eq_version": "", "ge_version": "1.18.1", "lt_version": "", "ne_version": [], "upper_version": "1.31.1", "version": "1.31.1"}, "reno": {"eq_version": "", "ge_version": "2.5.0", "lt_version": "", "ne_version": [], "upper_version": "2.11.3", "version": "2.11.3"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/oslo.versionedobjects.json b/tools/oos/example/train_cached_file/oslo.versionedobjects.json deleted file mode 100644 index 6656f8fe8c47116408ad041c58a442b87a85ed98..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/oslo.versionedobjects.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "oslo.versionedobjects", "version_dict": {"version": "1.36.1", "eq_version": "", "ge_version": "1.31.2", "lt_version": "", "ne_version": [], "upper_version": "1.36.1"}, "deep": {"count": 1, "list": ["cinder", "oslo.versionedobjects"]}, "requires": {"six": {"eq_version": "", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "oslo.concurrency": {"eq_version": "", "ge_version": "3.26.0", "lt_version": "", "ne_version": [], "upper_version": "3.30.1", "version": "3.30.1"}, "oslo.config": {"eq_version": "", "ge_version": "5.2.0", "lt_version": "", "ne_version": [], "upper_version": "6.11.3", "version": "6.11.3"}, "oslo.context": {"eq_version": "", "ge_version": "2.19.2", "lt_version": "", "ne_version": [], "upper_version": "2.23.1", "version": "2.23.1"}, "oslo.messaging": {"eq_version": "", "ge_version": "5.29.0", "lt_version": "", "ne_version": [], "upper_version": "10.2.4", "version": "10.2.4"}, "oslo.serialization": {"eq_version": "", "ge_version": "2.18.0", "lt_version": "", "ne_version": ["2.19.1"], "upper_version": "2.29.3", "version": "2.29.3"}, "oslo.utils": {"eq_version": "", "ge_version": "3.33.0", "lt_version": "", "ne_version": [], "upper_version": "3.41.6", "version": "3.41.6"}, "iso8601": {"eq_version": "", "ge_version": "0.1.11", "lt_version": "", "ne_version": [], "upper_version": "0.1.12", "version": "0.1.12"}, "oslo.log": {"eq_version": "", "ge_version": "3.36.0", "lt_version": "", "ne_version": [], "upper_version": "3.44.3", "version": "3.44.3"}, "oslo.i18n": {"eq_version": "", "ge_version": "3.15.3", "lt_version": "", "ne_version": [], "upper_version": "3.24.0", "version": "3.24.0"}, "WebOb": {"eq_version": "", "ge_version": "1.7.1", "lt_version": "", "ne_version": [], "upper_version": "1.8.5", "version": "1.8.5"}, "netaddr": {"eq_version": "", "ge_version": "0.7.18", "lt_version": "", 
"ne_version": [], "upper_version": "0.7.19", "version": "0.7.19"}, "hacking": {"eq_version": "", "ge_version": "1.1.0", "lt_version": "1.2.0", "ne_version": [], "upper_version": "", "version": "1.1.0"}, "oslotest": {"eq_version": "", "ge_version": "3.2.0", "lt_version": "", "ne_version": [], "upper_version": "3.8.1", "version": "3.8.1"}, "testtools": {"eq_version": "", "ge_version": "2.2.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.0", "version": "2.3.0"}, "coverage": {"eq_version": "", "ge_version": "4.0", "lt_version": "", "ne_version": ["4.4"], "upper_version": "4.5.4", "version": "4.5.4"}, "jsonschema": {"eq_version": "", "ge_version": "2.6.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.2", "version": "3.0.2"}, "stestr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.5.1", "version": "2.5.1"}, "mock": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.5", "version": "3.0.5"}, "fixtures": {"eq_version": "", "ge_version": "3.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.0", "version": "3.0.0"}, "bandit": {"eq_version": "", "ge_version": "1.1.0", "lt_version": "1.6.0", "ne_version": [], "upper_version": "", "version": "1.1.0"}, "openstackdocstheme": {"eq_version": "", "ge_version": "1.18.1", "lt_version": "", "ne_version": [], "upper_version": "1.31.1", "version": "1.31.1"}, "Sphinx": {"eq_version": "", "ge_version": "1.6.2", "lt_version": "", "ne_version": ["1.6.6", "1.6.7"], "upper_version": "2.2.0", "version": "2.2.0"}, "reno": {"eq_version": "", "ge_version": "2.5.0", "lt_version": "", "ne_version": [], "upper_version": "2.11.3", "version": "2.11.3"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/oslo.vmware.json b/tools/oos/example/train_cached_file/oslo.vmware.json deleted file mode 100644 index c567b06f5e2b60a6e156a79a905854365e1802ab..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/oslo.vmware.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "oslo.vmware", "version_dict": {"version": "2.34.1", "eq_version": "", "ge_version": "2.17.0", "lt_version": "", "ne_version": [], "upper_version": "2.34.1"}, "deep": {"count": 1, "list": ["ceilometer", "oslo.vmware"]}, "requires": {"pbr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": ["2.1.0"], "upper_version": "5.4.3", "version": "5.4.3"}, "stevedore": {"eq_version": "", "ge_version": "1.20.0", "lt_version": "", "ne_version": [], "upper_version": "1.31.0", "version": "1.31.0"}, "netaddr": {"eq_version": "", "ge_version": "0.7.18", "lt_version": "", "ne_version": [], "upper_version": "0.7.19", "version": "0.7.19"}, "six": {"eq_version": "", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "oslo.i18n": {"eq_version": "", "ge_version": "3.15.3", "lt_version": "", "ne_version": [], "upper_version": "3.24.0", "version": "3.24.0"}, "oslo.utils": {"eq_version": "", "ge_version": "3.33.0", "lt_version": "", "ne_version": [], "upper_version": "3.41.6", "version": "3.41.6"}, "PyYAML": {"eq_version": "", "ge_version": "3.12", "lt_version": "", "ne_version": [], "upper_version": "5.1.2", "version": "5.1.2"}, "lxml": {"eq_version": "", "ge_version": "3.4.1", "lt_version": "", "ne_version": ["3.7.0"], "upper_version": "4.4.1", "version": "4.4.1"}, "suds-jurko": {"eq_version": "", "ge_version": "0.6", "lt_version": "", "ne_version": [], "upper_version": "0.6", 
"version": "0.6"}, "eventlet": {"eq_version": "", "ge_version": "0.18.2", "lt_version": "", "ne_version": ["0.18.3", "0.20.1"], "upper_version": "0.25.2", "version": "0.25.2"}, "requests": {"eq_version": "", "ge_version": "2.14.2", "lt_version": "", "ne_version": [], "upper_version": "2.22.0", "version": "2.22.0"}, "urllib3": {"eq_version": "", "ge_version": "1.21.1", "lt_version": "", "ne_version": [], "upper_version": "1.25.3", "version": "1.25.3"}, "oslo.concurrency": {"eq_version": "", "ge_version": "3.26.0", "lt_version": "", "ne_version": [], "upper_version": "3.30.1", "version": "3.30.1"}, "oslo.context": {"eq_version": "", "ge_version": "2.19.2", "lt_version": "", "ne_version": [], "upper_version": "2.23.1", "version": "2.23.1"}, "hacking": {"eq_version": "", "ge_version": "0.12.0", "lt_version": "0.14", "ne_version": ["0.13.0"], "upper_version": "", "version": "0.12.0"}, "fixtures": {"eq_version": "", "ge_version": "3.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.0", "version": "3.0.0"}, "mock": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.5", "version": "3.0.5"}, "testtools": {"eq_version": "", "ge_version": "2.2.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.0", "version": "2.3.0"}, "stestr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.5.1", "version": "2.5.1"}, "coverage": {"eq_version": "", "ge_version": "4.0", "lt_version": "", "ne_version": ["4.4"], "upper_version": "4.5.4", "version": "4.5.4"}, "bandit": {"eq_version": "", "ge_version": "1.1.0", "lt_version": "", "ne_version": [], "upper_version": "", "version": "1.1.0"}, "ddt": {"eq_version": "", "ge_version": "1.0.1", "lt_version": "", "ne_version": [], "upper_version": "1.2.1", "version": "1.2.1"}, "openstackdocstheme": {"eq_version": "", "ge_version": "1.20.0", "lt_version": "", "ne_version": [], "upper_version": "1.31.1", "version": "1.31.1"}, "Sphinx": {"eq_version": "", "ge_version": "1.6.2", "lt_version": "", "ne_version": ["1.6.6", "1.6.7"], "upper_version": "2.2.0", "version": "2.2.0"}, "reno": {"eq_version": "", "ge_version": "2.5.0", "lt_version": "", "ne_version": [], "upper_version": "2.11.3", "version": "2.11.3"}, "sphinxcontrib-apidoc": {"eq_version": "", "ge_version": "0.2.0", "lt_version": "", "ne_version": [], "upper_version": "0.3.0", "version": "0.3.0"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/oslosphinx.json b/tools/oos/example/train_cached_file/oslosphinx.json deleted file mode 100644 index 1d717760e3b878ba3b82dce926ee6cb3841018cc..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/oslosphinx.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "oslosphinx", "version_dict": {"version": "4.18.0", "eq_version": "", "ge_version": "2.2.0", "lt_version": "", "ne_version": [], "upper_version": "4.18.0"}, "deep": {"count": 3, "list": ["aodh", "futurist", "hacking", "oslosphinx"]}, "requires": {"pbr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": ["2.1.0"], "upper_version": "5.4.3", "version": "5.4.3"}, "requests": {"eq_version": "", "ge_version": "2.14.2", "lt_version": "", "ne_version": [], "upper_version": "2.22.0", "version": "2.22.0"}, "six": {"eq_version": "", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "hacking": {"eq_version": "", "ge_version": "0.12.0", "lt_version": "0.14", "ne_version": ["0.13.0"], 
"upper_version": "", "version": "0.12.0"}, "Sphinx": {"eq_version": "", "ge_version": "1.6.2", "lt_version": "", "ne_version": [], "upper_version": "2.2.0", "version": "2.2.0"}, "openstackdocstheme": {"eq_version": "", "ge_version": "1.17.0", "lt_version": "", "ne_version": [], "upper_version": "1.31.1", "version": "1.31.1"}, "reno": {"eq_version": "", "ge_version": "2.5.0", "lt_version": "", "ne_version": [], "upper_version": "2.11.3", "version": "2.11.3"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/oslotest.json b/tools/oos/example/train_cached_file/oslotest.json deleted file mode 100644 index a47ff71cf521d4dbdac498bdafbcf39970973c6c..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/oslotest.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "oslotest", "version_dict": {"version": "3.8.1", "eq_version": "", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "3.8.1"}, "deep": {"count": 10, "list": ["aodh", "futurist", "hacking", "oslosphinx", "openstackdocstheme", "os-api-ref", "stestr", "cliff", "stevedore", "bandit", "oslotest"]}, "requires": {"fixtures": {"eq_version": "", "ge_version": "3.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.0", "version": "3.0.0"}, "python-subunit": {"eq_version": "", "ge_version": "1.0.0", "lt_version": "", "ne_version": [], "upper_version": "1.4.0", "version": "1.4.0"}, "six": {"eq_version": "", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "stestr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.5.1", "version": "2.5.1"}, "testtools": {"eq_version": "", "ge_version": "2.2.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.0", "version": "2.3.0"}, "mock": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.5", "version": "3.0.5"}, "mox3": {"eq_version": "", "ge_version": "0.20.0", "lt_version": "", "ne_version": [], "upper_version": "0.28.0", "version": "0.28.0"}, "os-client-config": {"eq_version": "", "ge_version": "1.28.0", "lt_version": "", "ne_version": [], "upper_version": "1.33.0", "version": "1.33.0"}, "debtcollector": {"eq_version": "", "ge_version": "1.2.0", "lt_version": "", "ne_version": [], "upper_version": "1.22.0", "version": "1.22.0"}, "hacking": {"eq_version": "", "ge_version": "1.1.0", "lt_version": "1.2.0", "ne_version": [], "upper_version": "", "version": "1.1.0"}, "coverage": {"eq_version": "", "ge_version": "4.0", "lt_version": "", "ne_version": ["4.4"], "upper_version": "4.5.4", "version": "4.5.4"}, "Sphinx": {"eq_version": "", "ge_version": "1.6.5", "lt_version": "", "ne_version": ["1.6.6", "1.6.7"], "upper_version": "2.2.0", "version": "2.2.0"}, "openstackdocstheme": {"eq_version": "", "ge_version": "1.18.1", "lt_version": "", "ne_version": [], "upper_version": "1.31.1", "version": "1.31.1"}, "oslo.config": {"eq_version": "", "ge_version": "5.2.0", "lt_version": "", "ne_version": [], "upper_version": "6.11.3", "version": "6.11.3"}, "reno": {"eq_version": "", "ge_version": "2.5.0", "lt_version": "", "ne_version": [], "upper_version": "2.11.3", "version": "2.11.3"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/osprofiler.json b/tools/oos/example/train_cached_file/osprofiler.json deleted file mode 100644 index 9f77944aac1f7bcf6ae00f70460aa24dcff95765..0000000000000000000000000000000000000000 --- 
a/tools/oos/example/train_cached_file/osprofiler.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "osprofiler", "version_dict": {"version": "2.8.2", "eq_version": "", "ge_version": "1.4.0", "lt_version": "", "ne_version": [], "upper_version": "2.8.2"}, "deep": {"count": 3, "list": ["aodh", "gnocchiclient", "osc-lib", "osprofiler"]}, "requires": {"netaddr": {"eq_version": "", "ge_version": "0.7.18", "lt_version": "", "ne_version": [], "upper_version": "0.7.19", "version": "0.7.19"}, "oslo.concurrency": {"eq_version": "", "ge_version": "3.26.0", "lt_version": "", "ne_version": [], "upper_version": "3.30.1", "version": "3.30.1"}, "oslo.serialization": {"eq_version": "", "ge_version": "2.18.0", "lt_version": "", "ne_version": [], "upper_version": "2.29.3", "version": "2.29.3"}, "oslo.utils": {"eq_version": "", "ge_version": "3.33.0", "lt_version": "", "ne_version": [], "upper_version": "3.41.6", "version": "3.41.6"}, "PrettyTable": {"eq_version": "", "ge_version": "0.7.2", "lt_version": "0.8", "ne_version": [], "upper_version": "", "version": "0.7.2"}, "requests": {"eq_version": "", "ge_version": "2.14.2", "lt_version": "", "ne_version": [], "upper_version": "2.22.0", "version": "2.22.0"}, "six": {"eq_version": "", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "WebOb": {"eq_version": "", "ge_version": "1.7.1", "lt_version": "", "ne_version": [], "upper_version": "1.8.5", "version": "1.8.5"}, "hacking": {"eq_version": "", "ge_version": "0.12.0", "lt_version": "0.14", "ne_version": ["0.13.0"], "upper_version": "", "version": "0.12.0"}, "coverage": {"eq_version": "", "ge_version": "4.0", "lt_version": "", "ne_version": [], "upper_version": "4.5.4", "version": "4.5.4"}, "ddt": {"eq_version": "", "ge_version": "1.0.1", "lt_version": "", "ne_version": [], "upper_version": "1.2.1", "version": "1.2.1"}, "mock": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.5", "version": "3.0.5"}, "stestr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.5.1", "version": "2.5.1"}, "testtools": {"eq_version": "", "ge_version": "2.2.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.0", "version": "2.3.0"}, "openstackdocstheme": {"eq_version": "", "ge_version": "1.18.1", "lt_version": "", "ne_version": [], "upper_version": "1.31.1", "version": "1.31.1"}, "Sphinx": {"eq_version": "", "ge_version": "1.6.2", "lt_version": "", "ne_version": [], "upper_version": "2.2.0", "version": "2.2.0"}, "bandit": {"eq_version": "", "ge_version": "1.1.0", "lt_version": "", "ne_version": [], "upper_version": "", "version": "1.1.0"}, "pymongo": {"eq_version": "", "ge_version": "3.0.2", "lt_version": "", "ne_version": ["3.1"], "upper_version": "3.9.0", "version": "3.9.0"}, "elasticsearch": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "3.0.0", "ne_version": [], "upper_version": "2.4.1", "version": "2.4.1"}, "redis": {"eq_version": "", "ge_version": "2.10.0", "lt_version": "", "ne_version": [], "upper_version": "3.3.8", "version": "3.3.8"}, "reno": {"eq_version": "", "ge_version": "2.5.0", "lt_version": "", "ne_version": [], "upper_version": "2.11.3", "version": "2.11.3"}, "jaeger-client": {"eq_version": "", "ge_version": "3.8.0", "lt_version": "", "ne_version": [], "upper_version": "4.1.0", "version": "4.1.0"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/ovs.json b/tools/oos/example/train_cached_file/ovs.json deleted file mode 
100644 index eab907eb6153a8b2f05931c28d6a028b13d737d1..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/ovs.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "ovs", "version_dict": {"version": "2.11.8", "eq_version": "", "ge_version": "2.6.0", "lt_version": "", "ne_version": [], "upper_version": "2.11.8"}, "deep": {"count": 3, "list": ["openstack-heat", "neutron-lib", "os-ken", "ovs"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/ovsdbapp.json b/tools/oos/example/train_cached_file/ovsdbapp.json deleted file mode 100644 index bf806b163a482723f5ca6eb1580fa89be45394d2..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/ovsdbapp.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "ovsdbapp", "version_dict": {"version": "0.17.5", "eq_version": "", "ge_version": "0.12.1", "lt_version": "", "ne_version": [], "upper_version": "0.17.5"}, "deep": {"count": 1, "list": ["neutron", "ovsdbapp"]}, "requires": {"fixtures": {"eq_version": "", "ge_version": "3.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.0", "version": "3.0.0"}, "netaddr": {"eq_version": "", "ge_version": "0.7.18", "lt_version": "", "ne_version": [], "upper_version": "0.7.19", "version": "0.7.19"}, "ovs": {"eq_version": "", "ge_version": "2.8.0", "lt_version": "", "ne_version": [], "upper_version": "2.11.8", "version": "2.11.8"}, "pbr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": ["2.1.0"], "upper_version": "5.4.3", "version": "5.4.3"}, "six": {"eq_version": "", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "hacking": {"eq_version": "", "ge_version": "0.12.0", "lt_version": "0.13", "ne_version": [], "upper_version": "", "version": "0.12.0"}, "coverage": {"eq_version": "", "ge_version": "4.0", "lt_version": "", "ne_version": ["4.4"], "upper_version": "4.5.4", "version": "4.5.4"}, "isort": {"eq_version": "4.3.21", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "4.3.21"}, "python-subunit": {"eq_version": "", "ge_version": "1.0.0", "lt_version": "", "ne_version": [], "upper_version": "1.4.0", "version": "1.4.0"}, "Sphinx": {"eq_version": "", "ge_version": "1.6.2", "lt_version": "", "ne_version": ["1.6.6", "1.6.7"], "upper_version": "2.2.0", "version": "2.2.0"}, "openstackdocstheme": {"eq_version": "", "ge_version": "1.18.1", "lt_version": "", "ne_version": [], "upper_version": "1.31.1", "version": "1.31.1"}, "oslotest": {"eq_version": "", "ge_version": "3.2.0", "lt_version": "", "ne_version": [], "upper_version": "3.8.1", "version": "3.8.1"}, "os-testr": {"eq_version": "", "ge_version": "1.0.0", "lt_version": "", "ne_version": [], "upper_version": "1.1.0", "version": "1.1.0"}, "pylint": {"eq_version": "1.9.2", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "1.9.2"}, "stestr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.5.1", "version": "2.5.1"}, "testscenarios": {"eq_version": "", "ge_version": "0.4", "lt_version": "", "ne_version": [], "upper_version": "0.5.0", "version": "0.5.0"}, "testtools": {"eq_version": "", "ge_version": "2.2.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.0", "version": "2.3.0"}, "reno": {"eq_version": "", "ge_version": "2.5.0", "lt_version": "", "ne_version": [], "upper_version": "2.11.3", "version": "2.11.3"}}} \ No newline at end of file diff --git 
a/tools/oos/example/train_cached_file/packaging.json b/tools/oos/example/train_cached_file/packaging.json deleted file mode 100644 index b66ca9ce7a0f55f81117997f72a8d63c2e605162..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/packaging.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "packaging", "version_dict": {"version": "19.1", "eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "19.1"}, "deep": {"count": 7, "list": ["aodh", "futurist", "hacking", "mock", "Sphinx", "sphinxcontrib-applehelp", "pytest", "packaging"]}, "requires": {"attrs": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "19.1.0", "version": "19.1.0"}, "pyparsing": {"eq_version": "", "ge_version": "2.0.2", "lt_version": "", "ne_version": [], "upper_version": "2.4.2", "version": "2.4.2"}, "six": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/pandas.json b/tools/oos/example/train_cached_file/pandas.json deleted file mode 100644 index b1bd3882bfae2d16588b76f72de3a839487ee301..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/pandas.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "pandas", "version_dict": {"version": "0.11", "eq_version": "", "ge_version": "0.11", "lt_version": "", "ne_version": [], "upper_version": ""}, "deep": {"count": 8, "list": ["aodh", "futurist", "hacking", "oslosphinx", "openstackdocstheme", "os-api-ref", "stestr", "subunit2sql", "pandas"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/panko.json b/tools/oos/example/train_cached_file/panko.json deleted file mode 100644 index 975de745c759e4126e9ef5f1181a384a8e527f7f..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/panko.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "panko", "version_dict": {"version": "7.1.0", "eq_version": "7.1.0", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": ""}, "deep": {"count": 0, "list": ["panko"]}, "requires": {"debtcollector": {"eq_version": "", "ge_version": "1.2.0", "lt_version": "", "ne_version": [], "upper_version": "1.22.0", "version": "1.22.0"}, "tenacity": {"eq_version": "", "ge_version": "3.1.0", "lt_version": "", "ne_version": [], "upper_version": "5.1.1", "version": "5.1.1"}, "keystonemiddleware": {"eq_version": "", "ge_version": "4.0.0", "lt_version": "", "ne_version": ["4.1.0", "4.19.0"], "upper_version": "7.0.1", "version": "7.0.1"}, "lxml": {"eq_version": "", "ge_version": "2.3", "lt_version": "", "ne_version": [], "upper_version": "4.4.1", "version": "4.4.1"}, "oslo.db": {"eq_version": "", "ge_version": "4.1.0", "lt_version": "", "ne_version": [], "upper_version": "5.0.2", "version": "5.0.2"}, "oslo.config": {"eq_version": "", "ge_version": "3.9.0", "lt_version": "", "ne_version": [], "upper_version": "6.11.3", "version": "6.11.3"}, "oslo.i18n": {"eq_version": "", "ge_version": "2.1.0", "lt_version": "", "ne_version": [], "upper_version": "3.24.0", "version": "3.24.0"}, "oslo.log": {"eq_version": "", "ge_version": "1.14.0", "lt_version": "", "ne_version": [], "upper_version": "3.44.3", "version": "3.44.3"}, "oslo.policy": {"eq_version": "", "ge_version": "0.5.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.4", "version": "2.3.4"}, "oslo.reports": {"eq_version": "", "ge_version": "0.6.0", "lt_version": "", 
"ne_version": [], "upper_version": "1.30.0", "version": "1.30.0"}, "Paste": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "3.2.0", "version": "3.2.0"}, "PasteDeploy": {"eq_version": "", "ge_version": "1.5.0", "lt_version": "", "ne_version": [], "upper_version": "2.0.1", "version": "2.0.1"}, "pbr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "5.4.3", "version": "5.4.3"}, "pecan": {"eq_version": "", "ge_version": "1.0.0", "lt_version": "", "ne_version": [], "upper_version": "1.3.3", "version": "1.3.3"}, "oslo.middleware": {"eq_version": "", "ge_version": "3.10.0", "lt_version": "", "ne_version": [], "upper_version": "3.38.1", "version": "3.38.1"}, "oslo.serialization": {"eq_version": "", "ge_version": "2.25.0", "lt_version": "", "ne_version": [], "upper_version": "2.29.3", "version": "2.29.3"}, "oslo.utils": {"eq_version": "", "ge_version": "3.5.0", "lt_version": "", "ne_version": [], "upper_version": "3.41.6", "version": "3.41.6"}, "PyYAML": {"eq_version": "", "ge_version": "3.1.0", "lt_version": "", "ne_version": [], "upper_version": "5.1.2", "version": "5.1.2"}, "six": {"eq_version": "", "ge_version": "1.9.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "SQLAlchemy": {"eq_version": "", "ge_version": "1.0.10", "lt_version": "", "ne_version": ["1.1.5", "1.1.6", "1.1.7", "1.1.8"], "upper_version": "1.3.8", "version": "1.3.8"}, "stevedore": {"eq_version": "", "ge_version": "1.9.0", "lt_version": "", "ne_version": [], "upper_version": "1.31.0", "version": "1.31.0"}, "WebOb": {"eq_version": "", "ge_version": "1.2.3", "lt_version": "", "ne_version": [], "upper_version": "1.8.5", "version": "1.8.5"}, "WSME": {"eq_version": "", "ge_version": "0.8", "lt_version": "", "ne_version": [], "upper_version": "0.9.3", "version": "0.9.3"}, "alembic": {"eq_version": "", "ge_version": "0.7.6", "lt_version": "", "ne_version": ["0.8.1", "0.9.0"], "upper_version": "1.1.0", "version": "1.1.0"}, "python-dateutil": {"eq_version": "", "ge_version": "2.4.2", "lt_version": "", "ne_version": [], "upper_version": "2.8.0", "version": "2.8.0"}, "pymongo": {"eq_version": "", "ge_version": "3.0.2", "lt_version": "", "ne_version": ["3.1"], "upper_version": "3.9.0", "version": "3.9.0"}, "elasticsearch": {"eq_version": "", "ge_version": "1.3.0", "lt_version": "", "ne_version": [], "upper_version": "2.4.1", "version": "2.4.1"}, "coverage": {"eq_version": "", "ge_version": "3.6", "lt_version": "", "ne_version": [], "upper_version": "4.5.4", "version": "4.5.4"}, "fixtures": {"eq_version": "", "ge_version": "1.3.1", "lt_version": "2.0", "ne_version": [], "upper_version": "3.0.0", "version": "3.0.0"}, "mock": {"eq_version": "", "ge_version": "1.2", "lt_version": "", "ne_version": [], "upper_version": "3.0.5", "version": "3.0.5"}, "PyMySQL": {"eq_version": "", "ge_version": "0.6.2", "lt_version": "", "ne_version": [], "upper_version": "0.9.3", "version": "0.9.3"}, "oslotest": {"eq_version": "", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "3.8.1", "version": "3.8.1"}, "psycopg2": {"eq_version": "", "ge_version": "2.5", "lt_version": "", "ne_version": [], "upper_version": "2.8.3", "version": "2.8.3"}, "python-subunit": {"eq_version": "", "ge_version": "0.0.18", "lt_version": "", "ne_version": [], "upper_version": "1.4.0", "version": "1.4.0"}, "Sphinx": {"eq_version": "", "ge_version": "1.6.2", "lt_version": "", "ne_version": ["1.6.6", "1.6.7", "2.1.0"], 
"upper_version": "2.2.0", "version": "2.2.0"}, "openstackdocstheme": {"eq_version": "", "ge_version": "1.20.0", "lt_version": "", "ne_version": [], "upper_version": "1.31.1", "version": "1.31.1"}, "sphinxcontrib-httpdomain": {"eq_version": "", "ge_version": "1.6.1", "lt_version": "", "ne_version": [], "upper_version": "1.7.0", "version": "1.7.0"}, "sphinxcontrib-pecanwsme": {"eq_version": "", "ge_version": "0.8.0", "lt_version": "", "ne_version": [], "upper_version": "0.10.0", "version": "0.10.0"}, "stestr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.5.1", "version": "2.5.1"}, "testtools": {"eq_version": "", "ge_version": "1.4.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.0", "version": "2.3.0"}, "gabbi": {"eq_version": "", "ge_version": "1.11.0", "lt_version": "", "ne_version": [], "upper_version": "1.49.0", "version": "1.49.0"}, "os-testr": {"eq_version": "", "ge_version": "0.4.1", "lt_version": "", "ne_version": [], "upper_version": "1.1.0", "version": "1.1.0"}, "WebTest": {"eq_version": "", "ge_version": "2.0", "lt_version": "", "ne_version": [], "upper_version": "2.0.33", "version": "2.0.33"}, "pifpaf": {"eq_version": "", "ge_version": "0.0.11", "lt_version": "", "ne_version": [], "upper_version": "2.2.2", "version": "2.2.2"}, "reno": {"eq_version": "", "ge_version": "2.7.0", "lt_version": "", "ne_version": [], "upper_version": "2.11.3", "version": "2.11.3"}, "SQLAlchemy-Utils": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "0.34.2", "version": "0.34.2"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/paramiko.json b/tools/oos/example/train_cached_file/paramiko.json deleted file mode 100644 index 5e27ba0eaa2e2d5495977bcfe43ae1d2a35c4555..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/paramiko.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "paramiko", "version_dict": {"version": "2.6.0", "eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.6.0"}, "deep": {"count": 14, "list": ["aodh", "futurist", "hacking", "oslosphinx", "openstackdocstheme", "os-api-ref", "stestr", "cliff", "stevedore", "bandit", "oslotest", "os-client-config", "python-glanceclient", "tempest", "paramiko"]}, "requires": {"bcrypt": {"eq_version": "", "ge_version": "3.1.3", "lt_version": "", "ne_version": [], "upper_version": "3.1.7", "version": "3.1.7"}, "cryptography": {"eq_version": "", "ge_version": "2.5", "lt_version": "", "ne_version": [], "upper_version": "2.8", "version": "2.8"}, "pynacl": {"eq_version": "", "ge_version": "1.0.1", "lt_version": "", "ne_version": [], "upper_version": "", "version": "1.0.1"}, "pyasn1": {"eq_version": "", "ge_version": "0.1.7", "lt_version": "", "ne_version": [], "upper_version": "0.4.7", "version": "0.4.7"}, "gssapi": {"eq_version": "", "ge_version": "1.4.1", "lt_version": "", "ne_version": [], "upper_version": "", "version": "1.4.1"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/passlib.json b/tools/oos/example/train_cached_file/passlib.json deleted file mode 100644 index e07d353f9240fcfeb72ecbc7063372315b8df39b..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/passlib.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "passlib", "version_dict": {"version": "1.7.1", "eq_version": "", "ge_version": "1.7.0", "lt_version": "", "ne_version": [], "upper_version": "1.7.1"}, "deep": {"count": 1, "list": 
["keystone", "passlib"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/pathlib2.json b/tools/oos/example/train_cached_file/pathlib2.json deleted file mode 100644 index 6c39fc74a5bd99f04d1b772802e3ad5f67b151e6..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/pathlib2.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "pathlib2", "version_dict": {"version": "2.3.4", "eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "2.3.4"}, "deep": {"count": 10, "list": ["aodh", "futurist", "hacking", "mock", "Sphinx", "sphinxcontrib-applehelp", "pytest", "pluggy", "importlib-metadata", "zipp", "pathlib2"]}, "requires": {"six": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "scandir": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "1.10.0", "version": "1.10.0"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/pbr.json b/tools/oos/example/train_cached_file/pbr.json deleted file mode 100644 index e748ef3c4d5f9b1c1e174f6e9393a272be7bb51e..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/pbr.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "pbr", "version_dict": {"version": "5.4.3", "eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": ["2.1.0"], "upper_version": "5.4.3"}, "deep": {"count": 2, "list": ["aodh", "futurist", "pbr"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/pecan.json b/tools/oos/example/train_cached_file/pecan.json deleted file mode 100644 index 81bf59e833d156a61a77fa67ade8f57d8f23dd93..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/pecan.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "pecan", "version_dict": {"version": "1.3.3", "eq_version": "", "ge_version": "0.8.0", "lt_version": "", "ne_version": [], "upper_version": "1.3.3"}, "deep": {"count": 1, "list": ["aodh", "pecan"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/pep257.json b/tools/oos/example/train_cached_file/pep257.json deleted file mode 100644 index a296f97438641557644592dfe2b38bac9367149d..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/pep257.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "pep257", "version_dict": {"version": "0.7.0", "eq_version": "0.7.0", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": ""}, "deep": {"count": 1, "list": ["keystone", "pep257"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/pep8.json b/tools/oos/example/train_cached_file/pep8.json deleted file mode 100644 index 372ede008c8960cf8b811e444fa4f8a1e44bc181..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/pep8.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "pep8", "version_dict": {"version": "1.5.7", "eq_version": "1.5.7", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": ""}, "deep": {"count": 3, "list": ["aodh", "futurist", "hacking", "pep8"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/persist-queue.json b/tools/oos/example/train_cached_file/persist-queue.json deleted file mode 100644 index 2d73ba92ffe3658c02af29455145db8cdb2558cb..0000000000000000000000000000000000000000 --- 
a/tools/oos/example/train_cached_file/persist-queue.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "persist-queue", "version_dict": {"version": "0.4.2", "eq_version": "", "ge_version": "0.2.3", "lt_version": "", "ne_version": [], "upper_version": "0.4.2"}, "deep": {"count": 2, "list": ["cinder", "storops", "persist-queue"]}, "requires": {"msgpack": {"eq_version": "", "ge_version": "0.5.6", "lt_version": "", "ne_version": [], "upper_version": "0.6.1", "version": "0.6.1"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/pexpect.json b/tools/oos/example/train_cached_file/pexpect.json deleted file mode 100644 index 120229f77dabe3ad7c149fd92fecbbe0dbcc7184..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/pexpect.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "pexpect", "version_dict": {"version": "4.7.0", "eq_version": "", "ge_version": "3.1", "lt_version": "", "ne_version": ["3.3"], "upper_version": "4.7.0"}, "deep": {"count": 1, "list": ["trove", "pexpect"]}, "requires": {"ptyprocess": {"eq_version": "", "ge_version": "0.5", "lt_version": "", "ne_version": [], "upper_version": "0.6.0", "version": "0.6.0"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/pifpaf.json b/tools/oos/example/train_cached_file/pifpaf.json deleted file mode 100644 index ee55ac812c4e0604d31015172ae8adcec88f4fd3..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/pifpaf.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "pifpaf", "version_dict": {"version": "2.2.2", "eq_version": "", "ge_version": "0.10.0", "lt_version": "", "ne_version": [], "upper_version": "2.2.2"}, "deep": {"count": 9, "list": ["aodh", "futurist", "hacking", "oslosphinx", "openstackdocstheme", "os-api-ref", "stestr", "subunit2sql", "oslo.db", "pifpaf"]}, "requires": {"daiquiri": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "1.6.0", "version": "1.6.0"}, "click": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}, "pbr": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "5.4.3", "version": "5.4.3"}, "Jinja2": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "2.10.1", "version": "2.10.1"}, "six": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "fixtures": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "3.0.0", "version": "3.0.0"}, "psutil": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "5.6.3", "version": "5.6.3"}, "xattr": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "0.9.6", "version": "0.9.6"}, "uwsgi": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}, "requests": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "2.22.0", "version": "2.22.0"}, "testrepository": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "0.0.20", "version": "0.0.20"}, "testtools": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "2.3.0", "version": "2.3.0"}, "os-testr": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "1.1.0", "version": "1.1.0"}, 
"mock": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "3.0.5", "version": "3.0.5"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/pip.json b/tools/oos/example/train_cached_file/pip.json deleted file mode 100644 index 2d4ba2fd976cde8436ed34ff147b44e4db618a58..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/pip.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "pip", "version_dict": {"version": "19.1", "eq_version": "", "ge_version": "19.1", "lt_version": "", "ne_version": [], "upper_version": ""}, "deep": {"count": 6, "list": ["aodh", "futurist", "hacking", "mock", "Sphinx", "setuptools", "pip"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/pluggy.json b/tools/oos/example/train_cached_file/pluggy.json deleted file mode 100644 index 73e8217652391f52f07d39e88bf0c10d7889c74e..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/pluggy.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "pluggy", "version_dict": {"version": "0.12.0", "eq_version": "", "ge_version": "0.12", "lt_version": "1.0", "ne_version": [], "upper_version": "0.12.0"}, "deep": {"count": 7, "list": ["aodh", "futurist", "hacking", "mock", "Sphinx", "sphinxcontrib-applehelp", "pytest", "pluggy"]}, "requires": {"importlib-metadata": {"eq_version": "", "ge_version": "0.12", "lt_version": "", "ne_version": [], "upper_version": "0.20", "version": "0.20"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/ply.json b/tools/oos/example/train_cached_file/ply.json deleted file mode 100644 index 8a37836219de30c30865a335fa50973660363b4d..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/ply.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "ply", "version_dict": {"version": "3.11", "eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "3.11"}, "deep": {"count": 5, "list": ["aodh", "gnocchiclient", "python-openstackclient", "python-muranoclient", "yaql", "ply"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/proboscis.json b/tools/oos/example/train_cached_file/proboscis.json deleted file mode 100644 index 5858c4096b0096109f41fabf9f8199c01266282e..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/proboscis.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "proboscis", "version_dict": {"version": "1.2.6.0", "eq_version": "", "ge_version": "1.2.5.3", "lt_version": "", "ne_version": [], "upper_version": "1.2.6.0"}, "deep": {"count": 1, "list": ["trove", "proboscis"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/proliantutils.json b/tools/oos/example/train_cached_file/proliantutils.json deleted file mode 100644 index 7af7cc11c9825fadb3102c8ce1f250ec327e2029..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/proliantutils.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "proliantutils", "version_dict": {"version": "2.9.1", "eq_version": "", "ge_version": "2.9.1", "lt_version": "", "ne_version": [], "upper_version": ""}, "deep": {"count": 1, "list": ["ironic", "proliantutils"]}, "requires": {"pbr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "5.4.3", "version": "5.4.3"}, "six": {"eq_version": "", "ge_version": "1.9.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": 
"1.12.0"}, "oslo.concurrency": {"eq_version": "", "ge_version": "3.8.0", "lt_version": "", "ne_version": [], "upper_version": "3.30.1", "version": "3.30.1"}, "oslo.serialization": {"eq_version": "", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "2.29.3", "version": "2.29.3"}, "oslo.utils": {"eq_version": "", "ge_version": "3.20.0", "lt_version": "", "ne_version": [], "upper_version": "3.41.6", "version": "3.41.6"}, "jsonschema": {"eq_version": "", "ge_version": "2.6.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.2", "version": "3.0.2"}, "requests": {"eq_version": "", "ge_version": "2.10.0", "lt_version": "", "ne_version": ["2.12.2", "2.13.0"], "upper_version": "2.22.0", "version": "2.22.0"}, "retrying": {"eq_version": "", "ge_version": "1.2.3", "lt_version": "", "ne_version": ["1.3.0"], "upper_version": "1.3.3", "version": "1.3.3"}, "pysnmp": {"eq_version": "", "ge_version": "4.2.3", "lt_version": "5.0.0", "ne_version": [], "upper_version": "4.4.11", "version": "4.4.11"}, "sushy": {"eq_version": "", "ge_version": "1.8.0", "lt_version": "", "ne_version": [], "upper_version": "2.0.5", "version": "2.0.5"}, "mock": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "3.0.5", "version": "3.0.5"}, "coverage": {"eq_version": "", "ge_version": "3.6", "lt_version": "", "ne_version": [], "upper_version": "4.5.4", "version": "4.5.4"}, "stestr": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": ["2.3.0"], "upper_version": "2.5.1", "version": "2.5.1"}, "testtools": {"eq_version": "", "ge_version": "1.4.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.0", "version": "2.3.0"}, "ddt": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "1.2.1", "version": "1.2.1"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/prometheus-client.json b/tools/oos/example/train_cached_file/prometheus-client.json deleted file mode 100644 index d54a8760d73fd8a0411552c9af87bd92c4a2080d..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/prometheus-client.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "prometheus-client", "version_dict": {"version": "0.7.1", "eq_version": "", "ge_version": "0.4.2", "lt_version": "", "ne_version": [], "upper_version": "0.7.1"}, "deep": {"count": 13, "list": ["aodh", "futurist", "hacking", "oslosphinx", "openstackdocstheme", "os-api-ref", "stestr", "cliff", "stevedore", "bandit", "oslotest", "os-client-config", "openstacksdk", "prometheus-client"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/prometheus_client.json b/tools/oos/example/train_cached_file/prometheus_client.json deleted file mode 100644 index 406921f808ad217220389839edf92d345da51bb2..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/prometheus_client.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "prometheus_client", "version_dict": {"version": "0.6.0", "eq_version": "", "ge_version": "0.6.0", "lt_version": "", "ne_version": [], "upper_version": ""}, "deep": {"count": 1, "list": ["ironic-prometheus-exporter", "prometheus_client"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/protobuf.json b/tools/oos/example/train_cached_file/protobuf.json deleted file mode 100644 index d36eaa9d956ef77d6e4c04da0ac276e5f00bc52d..0000000000000000000000000000000000000000 --- 
a/tools/oos/example/train_cached_file/protobuf.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "protobuf", "version_dict": {"version": "3.9.1", "eq_version": "", "ge_version": "3.6.1", "lt_version": "", "ne_version": [], "upper_version": "3.9.1"}, "deep": {"count": 1, "list": ["cinder", "protobuf"]}, "requires": {"six": {"eq_version": "", "ge_version": "1.9", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "setuptools": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "57.5.0", "version": "57.5.0"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/psutil.json b/tools/oos/example/train_cached_file/psutil.json deleted file mode 100644 index 8f85f47ceb5b4acf71394ae57b70667993a7c407..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/psutil.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "psutil", "version_dict": {"version": "5.6.3", "eq_version": "", "ge_version": "4.0", "lt_version": "", "ne_version": [], "upper_version": "5.6.3"}, "deep": {"count": 7, "list": ["aodh", "futurist", "hacking", "mock", "Sphinx", "sphinxcontrib-applehelp", "mypy", "psutil"]}, "requires": {"enum34": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "1.1.6", "version": "1.1.6"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/psycopg2.json b/tools/oos/example/train_cached_file/psycopg2.json deleted file mode 100644 index d853d3dcc0e4fd244ce24d30cfe8f2d1dfddf46c..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/psycopg2.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "psycopg2", "version_dict": {"version": "2.8.3", "eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "2.8.3"}, "deep": {"count": 10, "list": ["aodh", "futurist", "hacking", "oslosphinx", "openstackdocstheme", "os-api-ref", "stestr", "subunit2sql", "oslo.db", "sqlalchemy-migrate", "psycopg2"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/ptyprocess.json b/tools/oos/example/train_cached_file/ptyprocess.json deleted file mode 100644 index 1c7cd50a778ce2249d3422d325d7d2f08c556898..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/ptyprocess.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "ptyprocess", "version_dict": {"version": "0.6.0", "eq_version": "", "ge_version": "0.5", "lt_version": "", "ne_version": [], "upper_version": "0.6.0"}, "deep": {"count": 2, "list": ["trove", "pexpect", "ptyprocess"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/purestorage.json b/tools/oos/example/train_cached_file/purestorage.json deleted file mode 100644 index b6759f860c281656dd271a3d7867aeeb23c29be5..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/purestorage.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "purestorage", "version_dict": {"version": "1.17.0", "eq_version": "", "ge_version": "1.6.0", "lt_version": "", "ne_version": [], "upper_version": "1.17.0"}, "deep": {"count": 1, "list": ["cinder", "purestorage"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/py.json b/tools/oos/example/train_cached_file/py.json deleted file mode 100644 index 012b168f15806f6b6f8e10c27158e6bd8930caf9..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/py.json +++ /dev/null @@ -1 +0,0 @@ 
-{"name": "py", "version_dict": {"version": "1.8.0", "eq_version": "", "ge_version": "1.5.0", "lt_version": "", "ne_version": [], "upper_version": "1.8.0"}, "deep": {"count": 7, "list": ["aodh", "futurist", "hacking", "mock", "Sphinx", "sphinxcontrib-applehelp", "pytest", "py"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/pyOpenSSL.json b/tools/oos/example/train_cached_file/pyOpenSSL.json deleted file mode 100644 index 72326a50b99cc6e099826b4b8c19db3a9cd15893..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/pyOpenSSL.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "pyOpenSSL", "version_dict": {"version": "19.1.0", "eq_version": "", "ge_version": "0.14", "lt_version": "", "ne_version": [], "upper_version": "19.1.0"}, "deep": {"count": 13, "list": ["aodh", "futurist", "hacking", "mock", "Sphinx", "sphinxcontrib-applehelp", "pytest", "pluggy", "importlib-metadata", "zipp", "jaraco.packaging", "requests", "urllib3", "pyOpenSSL"]}, "requires": {"cryptography": {"eq_version": "", "ge_version": "2.8", "lt_version": "", "ne_version": [], "upper_version": "2.8", "version": "2.8"}, "six": {"eq_version": "", "ge_version": "1.5.2", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "Sphinx": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "2.2.0", "version": "2.2.0"}, "sphinx-rtd-theme": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}, "flaky": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}, "pretend": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}, "pytest": {"eq_version": "", "ge_version": "3.0.1", "lt_version": "", "ne_version": [], "upper_version": "5.1.2", "version": "5.1.2"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/pyScss.json b/tools/oos/example/train_cached_file/pyScss.json deleted file mode 100644 index 7afaf7a266202ed05699ccb57be8d3c3277b809d..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/pyScss.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "pyScss", "version_dict": {"version": "1.3.7", "eq_version": "", "ge_version": "1.3.7", "lt_version": "", "ne_version": [], "upper_version": "1.3.7"}, "deep": {"count": 1, "list": ["horizon", "pyScss"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/pyasn1-modules.json b/tools/oos/example/train_cached_file/pyasn1-modules.json deleted file mode 100644 index 6a703dc9c352be960cdc6c927ac1d3aba1d7afe5..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/pyasn1-modules.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "pyasn1-modules", "version_dict": {"version": "0.2.6", "eq_version": "", "ge_version": "0.0.5", "lt_version": "", "ne_version": [], "upper_version": "0.2.6"}, "deep": {"count": 2, "list": ["cinder", "oauth2client", "pyasn1-modules"]}, "requires": {"pyasn1": {"eq_version": "", "ge_version": "0.4.6", "lt_version": "0.5.0", "ne_version": [], "upper_version": "0.4.7", "version": "0.4.7"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/pyasn1.json b/tools/oos/example/train_cached_file/pyasn1.json deleted file mode 100644 index 254ffa7c4a87595efc460b9e5ab5f5c7ff2ca727..0000000000000000000000000000000000000000 --- 
a/tools/oos/example/train_cached_file/pyasn1.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "pyasn1", "version_dict": {"version": "0.4.7", "eq_version": "", "ge_version": "0.1.7", "lt_version": "", "ne_version": [], "upper_version": "0.4.7"}, "deep": {"count": 15, "list": ["aodh", "futurist", "hacking", "oslosphinx", "openstackdocstheme", "os-api-ref", "stestr", "cliff", "stevedore", "bandit", "oslotest", "os-client-config", "python-glanceclient", "tempest", "paramiko", "pyasn1"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/pycadf.json b/tools/oos/example/train_cached_file/pycadf.json deleted file mode 100644 index e4acc3041406d950a1420ef749a8159855f3b511..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/pycadf.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "pycadf", "version_dict": {"version": "2.10.0", "eq_version": "", "ge_version": "1.1.0", "lt_version": "", "ne_version": ["2.0.0"], "upper_version": "2.10.0"}, "deep": {"count": 2, "list": ["aodh", "keystonemiddleware", "pycadf"]}, "requires": {"oslo.config": {"eq_version": "", "ge_version": "5.2.0", "lt_version": "", "ne_version": [], "upper_version": "6.11.3", "version": "6.11.3"}, "oslo.serialization": {"eq_version": "", "ge_version": "2.18.0", "lt_version": "", "ne_version": ["2.19.1"], "upper_version": "2.29.3", "version": "2.29.3"}, "pytz": {"eq_version": "", "ge_version": "2013.6", "lt_version": "", "ne_version": [], "upper_version": "2019.2", "version": "2019.2"}, "six": {"eq_version": "", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "debtcollector": {"eq_version": "", "ge_version": "1.2.0", "lt_version": "", "ne_version": [], "upper_version": "1.22.0", "version": "1.22.0"}, "hacking": {"eq_version": "", "ge_version": "0.10.0", "lt_version": "0.11", "ne_version": [], "upper_version": "", "version": "0.10.0"}, "flake8-docstrings": {"eq_version": "0.2.1.post1", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "0.2.1.post1"}, "coverage": {"eq_version": "", "ge_version": "4.0", "lt_version": "", "ne_version": ["4.4"], "upper_version": "4.5.4", "version": "4.5.4"}, "fixtures": {"eq_version": "", "ge_version": "3.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.0", "version": "3.0.0"}, "python-subunit": {"eq_version": "", "ge_version": "1.0.0", "lt_version": "", "ne_version": [], "upper_version": "1.4.0", "version": "1.4.0"}, "mock": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.5", "version": "3.0.5"}, "stestr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.5.1", "version": "2.5.1"}, "testtools": {"eq_version": "", "ge_version": "2.2.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.0", "version": "2.3.0"}, "openstackdocstheme": {"eq_version": "", "ge_version": "1.18.1", "lt_version": "", "ne_version": [], "upper_version": "1.31.1", "version": "1.31.1"}, "Sphinx": {"eq_version": "", "ge_version": "1.6.2", "lt_version": "", "ne_version": ["1.6.6", "1.6.7"], "upper_version": "2.2.0", "version": "2.2.0"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/pycodestyle.json b/tools/oos/example/train_cached_file/pycodestyle.json deleted file mode 100644 index 0d45237822320ae952af093870711c7331899933..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/pycodestyle.json +++ 
/dev/null @@ -1 +0,0 @@ -{"name": "pycodestyle", "version_dict": {"version": "2.0.0", "eq_version": "", "ge_version": "2.0.0", "lt_version": "2.6.0", "ne_version": [], "upper_version": ""}, "deep": {"count": 1, "list": ["cinder", "pycodestyle"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/pycryptodomex.json b/tools/oos/example/train_cached_file/pycryptodomex.json deleted file mode 100644 index a944c19331ba6b5c24b529495988e3e83d0d0f06..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/pycryptodomex.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "pycryptodomex", "version_dict": {"version": "3.9.0", "eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "3.9.0"}, "deep": {"count": 2, "list": ["ceilometer", "pysnmp", "pycryptodomex"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/pycurl.json b/tools/oos/example/train_cached_file/pycurl.json deleted file mode 100644 index de631337ac26b68607c56ce6f05a0d2b52a9eb83..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/pycurl.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "pycurl", "version_dict": {"version": "7.43.0.2", "eq_version": "7.43.0.2", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": ""}, "deep": {"count": 4, "list": ["aodh", "keystonemiddleware", "oslo.messaging", "kombu", "pycurl"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/pydot.json b/tools/oos/example/train_cached_file/pydot.json deleted file mode 100644 index 8762dc15f8634b1ca3758f832518314e864aca62..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/pydot.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "pydot", "version_dict": {"version": "1.4.1", "eq_version": "", "ge_version": "1.2.4", "lt_version": "", "ne_version": [], "upper_version": "1.4.1"}, "deep": {"count": 2, "list": ["cinder", "taskflow", "pydot"]}, "requires": {"pyparsing": {"eq_version": "", "ge_version": "2.1.4", "lt_version": "", "ne_version": [], "upper_version": "2.4.2", "version": "2.4.2"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/pydotplus.json b/tools/oos/example/train_cached_file/pydotplus.json deleted file mode 100644 index b90d0b461d23776a65891c2f38876fc3ab3ab17f..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/pydotplus.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "pydotplus", "version_dict": {"version": "2.0.2", "eq_version": "", "ge_version": "2.0.2", "lt_version": "", "ne_version": [], "upper_version": "2.0.2"}, "deep": {"count": 2, "list": ["cinder", "taskflow", "pydotplus"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/pyflakes.json b/tools/oos/example/train_cached_file/pyflakes.json deleted file mode 100644 index 0b52753c2bbdaedfd220efb92319edcdb0d2cd6d..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/pyflakes.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "pyflakes", "version_dict": {"version": "0.8.1", "eq_version": "0.8.1", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": ""}, "deep": {"count": 3, "list": ["aodh", "futurist", "hacking", "pyflakes"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/pyghmi.json b/tools/oos/example/train_cached_file/pyghmi.json deleted file mode 100644 index 
bcb6152f2c84e7534b3d888f1e834a446eb98e8e..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/pyghmi.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "pyghmi", "version_dict": {"version": "1.4.1", "eq_version": "", "ge_version": "1.0.22", "lt_version": "", "ne_version": [], "upper_version": "1.4.1"}, "deep": {"count": 2, "list": ["ironic", "python-scciclient", "pyghmi"]}, "requires": {"cryptography": {"eq_version": "", "ge_version": "2.1", "lt_version": "", "ne_version": [], "upper_version": "2.8", "version": "2.8"}, "hacking": {"eq_version": "", "ge_version": "1.1.0", "lt_version": "1.2.0", "ne_version": [], "upper_version": "", "version": "1.1.0"}, "coverage": {"eq_version": "", "ge_version": "4.0", "lt_version": "", "ne_version": [], "upper_version": "4.5.4", "version": "4.5.4"}, "fixtures": {"eq_version": "", "ge_version": "3.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.0", "version": "3.0.0"}, "python-subunit": {"eq_version": "", "ge_version": "1.0.0", "lt_version": "", "ne_version": [], "upper_version": "1.4.0", "version": "1.4.0"}, "Sphinx": {"eq_version": "", "ge_version": "1.6.5", "lt_version": "", "ne_version": [], "upper_version": "2.2.0", "version": "2.2.0"}, "openstackdocstheme": {"eq_version": "", "ge_version": "1.18.1", "lt_version": "", "ne_version": [], "upper_version": "1.31.1", "version": "1.31.1"}, "stestr": {"eq_version": "", "ge_version": "1.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.5.1", "version": "2.5.1"}, "testscenarios": {"eq_version": "", "ge_version": "0.4", "lt_version": "", "ne_version": [], "upper_version": "0.5.0", "version": "0.5.0"}, "testtools": {"eq_version": "", "ge_version": "2.2.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.0", "version": "2.3.0"}, "oslotest": {"eq_version": "", "ge_version": "3.2.0", "lt_version": "", "ne_version": [], "upper_version": "3.8.1", "version": "3.8.1"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/pygments-github-lexers.json b/tools/oos/example/train_cached_file/pygments-github-lexers.json deleted file mode 100644 index 7dc4950970768e31fa2390ffea90f9affb6d0ee2..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/pygments-github-lexers.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "pygments-github-lexers", "version_dict": {"version": "0.0.5", "eq_version": "0.0.5", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": ""}, "deep": {"count": 6, "list": ["aodh", "futurist", "hacking", "mock", "Sphinx", "setuptools", "pygments-github-lexers"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/pyinotify.json b/tools/oos/example/train_cached_file/pyinotify.json deleted file mode 100644 index 339240d50ce294f769ee59f02b1778bb9737dad9..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/pyinotify.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "pyinotify", "version_dict": {"version": "0.9.6", "eq_version": "", "ge_version": "0.9.6", "lt_version": "", "ne_version": [], "upper_version": "0.9.6"}, "deep": {"count": 17, "list": ["aodh", "futurist", "hacking", "oslosphinx", "openstackdocstheme", "os-api-ref", "stestr", "cliff", "stevedore", "bandit", "oslotest", "os-client-config", "openstacksdk", "os-service-types", "keystoneauth1", "oslo.config", "oslo.log", "pyinotify"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/pykerberos.json 
b/tools/oos/example/train_cached_file/pykerberos.json deleted file mode 100644 index 511b940efdbbe8fc9890262b575803d28a93623d..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/pykerberos.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "pykerberos", "version_dict": {"version": "1.2.1", "eq_version": "", "ge_version": "1.1.8", "lt_version": "2.0.0", "ne_version": [], "upper_version": "1.2.1"}, "deep": {"count": 16, "list": ["aodh", "futurist", "hacking", "oslosphinx", "openstackdocstheme", "os-api-ref", "stestr", "cliff", "stevedore", "bandit", "oslotest", "os-client-config", "openstacksdk", "os-service-types", "keystoneauth1", "requests-kerberos", "pykerberos"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/pylint.json b/tools/oos/example/train_cached_file/pylint.json deleted file mode 100644 index 851d3591b5fde8a66bd854f52875e655fb617cb6..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/pylint.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "pylint", "version_dict": {"version": "2.2.2", "eq_version": "2.2.2", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": ""}, "deep": {"count": 1, "list": ["horizon", "pylint"]}, "requires": {"astroid": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "", "version": "2.0.0"}, "isort": {"eq_version": "", "ge_version": "4.2.5", "lt_version": "", "ne_version": [], "upper_version": "", "version": "4.2.5"}, "mccabe": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/pylons-sphinx-themes.json b/tools/oos/example/train_cached_file/pylons-sphinx-themes.json deleted file mode 100644 index 542e5b7148f930896cbdca6351bb9cfa4ee85278..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/pylons-sphinx-themes.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "pylons-sphinx-themes", "version_dict": {"version": "1.0.9", "eq_version": "", "ge_version": "1.0.9", "lt_version": "", "ne_version": [], "upper_version": ""}, "deep": {"count": 4, "list": ["aodh", "keystonemiddleware", "WebTest", "waitress", "pylons-sphinx-themes"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/pymemcache.json b/tools/oos/example/train_cached_file/pymemcache.json deleted file mode 100644 index fb4ea13309a712c7e721bf6e8ded415c88736abf..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/pymemcache.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "pymemcache", "version_dict": {"version": "2.2.2", "eq_version": "", "ge_version": "1.2.9", "lt_version": "", "ne_version": ["1.3.0"], "upper_version": "2.2.2"}, "deep": {"count": 2, "list": ["aodh", "tooz", "pymemcache"]}, "requires": {"six": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/pymongo.json b/tools/oos/example/train_cached_file/pymongo.json deleted file mode 100644 index ae1cb80649607fb431cd38fa58f0ec6b94830705..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/pymongo.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "pymongo", "version_dict": {"version": "3.9.0", "eq_version": "", "ge_version": "3.0.2", "lt_version": "", "ne_version": ["3.1"], "upper_version": 
"3.9.0"}, "deep": {"count": 3, "list": ["aodh", "keystonemiddleware", "oslo.cache", "pymongo"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/pynacl.json b/tools/oos/example/train_cached_file/pynacl.json deleted file mode 100644 index a6e177b46647bce5e426217e9b08ae6587ae6464..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/pynacl.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "pynacl", "version_dict": {"version": "1.0.1", "eq_version": "", "ge_version": "1.0.1", "lt_version": "", "ne_version": [], "upper_version": ""}, "deep": {"count": 15, "list": ["aodh", "futurist", "hacking", "oslosphinx", "openstackdocstheme", "os-api-ref", "stestr", "cliff", "stevedore", "bandit", "oslotest", "os-client-config", "python-glanceclient", "tempest", "paramiko", "pynacl"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/pyngus.json b/tools/oos/example/train_cached_file/pyngus.json deleted file mode 100644 index 137d3ee236d73c389c6871c38350808f8e36e157..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/pyngus.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "pyngus", "version_dict": {"version": "2.3.0", "eq_version": "", "ge_version": "2.2.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.0"}, "deep": {"count": 3, "list": ["aodh", "keystonemiddleware", "oslo.messaging", "pyngus"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/pyparsing.json b/tools/oos/example/train_cached_file/pyparsing.json deleted file mode 100644 index 8a195128e75b79be7c44675e12c37064a5ffb3a9..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/pyparsing.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "pyparsing", "version_dict": {"version": "2.4.2", "eq_version": "", "ge_version": "2.0.2", "lt_version": "", "ne_version": [], "upper_version": "2.4.2"}, "deep": {"count": 8, "list": ["aodh", "futurist", "hacking", "mock", "Sphinx", "sphinxcontrib-applehelp", "pytest", "packaging", "pyparsing"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/pypowervm.json b/tools/oos/example/train_cached_file/pypowervm.json deleted file mode 100644 index 10547bed9bdecf11e546fbae348005c44c199ad7..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/pypowervm.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "pypowervm", "version_dict": {"version": "1.1.20", "eq_version": "", "ge_version": "1.1.15", "lt_version": "", "ne_version": [], "upper_version": "1.1.20"}, "deep": {"count": 1, "list": ["nova", "pypowervm"]}, "requires": {"lxml": {"eq_version": "", "ge_version": "3.4.1", "lt_version": "", "ne_version": ["3.7.0"], "upper_version": "4.4.1", "version": "4.4.1"}, "oslo.concurrency": {"eq_version": "", "ge_version": "3.8.0", "lt_version": "", "ne_version": [], "upper_version": "3.30.1", "version": "3.30.1"}, "oslo.context": {"eq_version": "", "ge_version": "2.12.0", "lt_version": "", "ne_version": [], "upper_version": "2.23.1", "version": "2.23.1"}, "oslo.i18n": {"eq_version": "", "ge_version": "2.1.0", "lt_version": "", "ne_version": [], "upper_version": "3.24.0", "version": "3.24.0"}, "oslo.log": {"eq_version": "", "ge_version": "3.11.0", "lt_version": "", "ne_version": [], "upper_version": "3.44.3", "version": "3.44.3"}, "oslo.utils": {"eq_version": "", "ge_version": "3.20.0", "lt_version": "", "ne_version": [], "upper_version": "3.41.6", "version": 
"3.41.6"}, "pbr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "5.4.3", "version": "5.4.3"}, "pyasn1": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "0.4.7", "version": "0.4.7"}, "pyasn1-modules": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "0.2.6", "version": "0.2.6"}, "pytz": {"eq_version": "", "ge_version": "2013.6", "lt_version": "", "ne_version": [], "upper_version": "2019.2", "version": "2019.2"}, "requests": {"eq_version": "", "ge_version": "2.10.0", "lt_version": "", "ne_version": ["2.13.0", "2.12.2"], "upper_version": "2.22.0", "version": "2.22.0"}, "six": {"eq_version": "", "ge_version": "1.9.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "taskflow": {"eq_version": "", "ge_version": "2.16.0", "lt_version": "", "ne_version": [], "upper_version": "3.7.1", "version": "3.7.1"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/pyroute2.json b/tools/oos/example/train_cached_file/pyroute2.json deleted file mode 100644 index e43b480a0411cd1714678e08d114c8f7d907f90e..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/pyroute2.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "pyroute2", "version_dict": {"version": "0.5.6", "eq_version": "", "ge_version": "0.5.3", "lt_version": "", "ne_version": [], "upper_version": "0.5.6"}, "deep": {"count": 1, "list": ["neutron", "pyroute2"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/pyrsistent.json b/tools/oos/example/train_cached_file/pyrsistent.json deleted file mode 100644 index 06e46c86c28f33d3e9b9eebcab2e50a147c7aa13..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/pyrsistent.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "pyrsistent", "version_dict": {"version": "0.15.4", "eq_version": "", "ge_version": "0.14.0", "lt_version": "", "ne_version": [], "upper_version": "0.15.4"}, "deep": {"count": 14, "list": ["aodh", "futurist", "hacking", "oslosphinx", "openstackdocstheme", "os-api-ref", "stestr", "cliff", "stevedore", "bandit", "oslotest", "os-client-config", "openstacksdk", "jsonschema", "pyrsistent"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/pysaml2.json b/tools/oos/example/train_cached_file/pysaml2.json deleted file mode 100644 index c059423446e4c044f9d539db6d3e2976fbd74cc0..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/pysaml2.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "pysaml2", "version_dict": {"version": "4.8.0", "eq_version": "", "ge_version": "4.5.0", "lt_version": "", "ne_version": [], "upper_version": "4.8.0"}, "deep": {"count": 1, "list": ["keystone", "pysaml2"]}, "requires": {"cryptography": {"eq_version": "", "ge_version": "1.4", "lt_version": "", "ne_version": [], "upper_version": "2.8", "version": "2.8"}, "defusedxml": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "0.6.0", "version": "0.6.0"}, "pyOpenSSL": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "19.1.0", "version": "19.1.0"}, "python-dateutil": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "2.8.0", "version": "2.8.0"}, "pytz": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "2019.2", "version": 
"2019.2"}, "requests": {"eq_version": "", "ge_version": "1.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.22.0", "version": "2.22.0"}, "six": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "paste": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}, "zope.interface": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}, "repoze.who": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/pysendfile.json b/tools/oos/example/train_cached_file/pysendfile.json deleted file mode 100644 index d972f29c1c94f48b7d535ec4482a59abdf4a2469..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/pysendfile.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "pysendfile", "version_dict": {"version": "2.0.1", "eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.0.1"}, "deep": {"count": 1, "list": ["glance", "pysendfile"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/pyserial.json b/tools/oos/example/train_cached_file/pyserial.json deleted file mode 100644 index 2230633928cd7de8b2007630d5232613e547690e..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/pyserial.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "pyserial", "version_dict": {"version": "3.4", "eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "3.4"}, "deep": {"count": 2, "list": ["networking-generic-switch", "netmiko", "pyserial"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/pysmi.json b/tools/oos/example/train_cached_file/pysmi.json deleted file mode 100644 index a81d6e5525fd719b8238be47c940db279be4f374..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/pysmi.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "pysmi", "version_dict": {"version": "0.3.4", "eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "0.3.4"}, "deep": {"count": 2, "list": ["ceilometer", "pysnmp", "pysmi"]}, "requires": {"ply": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "3.11", "version": "3.11"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/pysnmp.json b/tools/oos/example/train_cached_file/pysnmp.json deleted file mode 100644 index 132cbc1e21a422661769bddba5da25d533f7507d..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/pysnmp.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "pysnmp", "version_dict": {"version": "4.4.11", "eq_version": "", "ge_version": "4.2.3", "lt_version": "5.0.0", "ne_version": [], "upper_version": "4.4.11"}, "deep": {"count": 1, "list": ["ceilometer", "pysnmp"]}, "requires": {"pysmi": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "0.3.4", "version": "0.3.4"}, "pycryptodomex": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "3.9.0", "version": "3.9.0"}, "pyasn1": {"eq_version": "", "ge_version": "0.2.3", "lt_version": "", "ne_version": [], "upper_version": "0.4.7", "version": "0.4.7"}}} \ No newline at end of file diff --git 
a/tools/oos/example/train_cached_file/pytest-black.json b/tools/oos/example/train_cached_file/pytest-black.json deleted file mode 100644 index fc8736b5257c79fab4900618ec8d9e9856681f14..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/pytest-black.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "pytest-black", "version_dict": {"version": "0.3.7", "eq_version": "", "ge_version": "0.3.7", "lt_version": "", "ne_version": [], "upper_version": ""}, "deep": {"count": 8, "list": ["aodh", "futurist", "hacking", "mock", "Sphinx", "setuptools", "jaraco.packaging", "jaraco.test", "pytest-black"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/pytest-checkdocs.json b/tools/oos/example/train_cached_file/pytest-checkdocs.json deleted file mode 100644 index 55dc75f3ddffee89d450e4f64de3bf8861eb1508..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/pytest-checkdocs.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "pytest-checkdocs", "version_dict": {"version": "2.4", "eq_version": "", "ge_version": "2.4", "lt_version": "", "ne_version": [], "upper_version": ""}, "deep": {"count": 7, "list": ["aodh", "futurist", "hacking", "mock", "Sphinx", "setuptools", "jaraco.tidelift", "pytest-checkdocs"]}, "requires": {"docutils": {"eq_version": "", "ge_version": "0.15", "lt_version": "", "ne_version": [], "upper_version": "0.15.2", "version": "0.15.2"}, "pep517": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}, "jaraco.functools": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}, "Sphinx": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "2.2.0", "version": "2.2.0"}, "jaraco.packaging": {"eq_version": "", "ge_version": "8.2", "lt_version": "", "ne_version": [], "upper_version": "", "version": "8.2"}, "rst.linker": {"eq_version": "", "ge_version": "1.9", "lt_version": "", "ne_version": [], "upper_version": "", "version": "1.9"}, "pytest": {"eq_version": "", "ge_version": "4.6", "lt_version": "", "ne_version": [], "upper_version": "5.1.2", "version": "5.1.2"}, "pytest-checkdocs": {"eq_version": "", "ge_version": "1.2.3", "lt_version": "", "ne_version": [], "upper_version": "", "version": "1.2.3"}, "pytest-flake8": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}, "pytest-cov": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}, "pytest-enabler": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}, "pytest-black": {"eq_version": "", "ge_version": "0.3.7", "lt_version": "", "ne_version": [], "upper_version": "", "version": "0.3.7"}, "pytest-mypy": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/pytest-cov.json b/tools/oos/example/train_cached_file/pytest-cov.json deleted file mode 100644 index 5093917328f4522dbce2da9559f5073aaf231745..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/pytest-cov.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "pytest-cov", "version_dict": {"version": "2.6.0", "eq_version": "", "ge_version": "2.6.0", "lt_version": "3.0.0", "ne_version": [], 
"upper_version": ""}, "deep": {"count": 17, "list": ["aodh", "futurist", "hacking", "oslosphinx", "openstackdocstheme", "os-api-ref", "stestr", "cliff", "stevedore", "bandit", "oslotest", "os-client-config", "openstacksdk", "os-service-types", "keystoneauth1", "oauthlib", "PyJWT", "pytest-cov"]}, "requires": {"pytest": {"eq_version": "", "ge_version": "2.9", "lt_version": "", "ne_version": [], "upper_version": "5.1.2", "version": "5.1.2"}, "coverage": {"eq_version": "", "ge_version": "4.4", "lt_version": "", "ne_version": [], "upper_version": "4.5.4", "version": "4.5.4"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/pytest-enabler.json b/tools/oos/example/train_cached_file/pytest-enabler.json deleted file mode 100644 index 4ed29eee631c77b2fd65bac2fd65d108e313362f..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/pytest-enabler.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "pytest-enabler", "version_dict": {"version": "1.0.1", "eq_version": "", "ge_version": "1.0.1", "lt_version": "", "ne_version": [], "upper_version": ""}, "deep": {"count": 7, "list": ["aodh", "futurist", "hacking", "mock", "Sphinx", "setuptools", "jaraco.tidelift", "pytest-enabler"]}, "requires": {"toml": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}, "jaraco.functools": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}, "jaraco.context": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}, "more-itertools": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "7.2.0", "version": "7.2.0"}, "Sphinx": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "2.2.0", "version": "2.2.0"}, "jaraco.packaging": {"eq_version": "", "ge_version": "8.2", "lt_version": "", "ne_version": [], "upper_version": "", "version": "8.2"}, "rst.linker": {"eq_version": "", "ge_version": "1.9", "lt_version": "", "ne_version": [], "upper_version": "", "version": "1.9"}, "pytest": {"eq_version": "", "ge_version": "3.5", "lt_version": "", "ne_version": ["3.7.3"], "upper_version": "5.1.2", "version": "5.1.2"}, "pytest-checkdocs": {"eq_version": "", "ge_version": "1.2.3", "lt_version": "", "ne_version": [], "upper_version": "", "version": "1.2.3"}, "pytest-flake8": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}, "pytest-cov": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}, "pytest-enabler": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}, "pytest-black": {"eq_version": "", "ge_version": "0.3.7", "lt_version": "", "ne_version": [], "upper_version": "", "version": "0.3.7"}, "pytest-mypy": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/pytest-mock.json b/tools/oos/example/train_cached_file/pytest-mock.json deleted file mode 100644 index 8e18696db0e52c08a8826457a8c6d61d1368817f..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/pytest-mock.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "pytest-mock", "version_dict": {"version": "1.8", "eq_version": 
"", "ge_version": "1.8", "lt_version": "", "ne_version": [], "upper_version": ""}, "deep": {"count": 2, "list": ["kolla", "graphviz", "pytest-mock"]}, "requires": {"pytest": {"eq_version": "", "ge_version": "2.7", "lt_version": "", "ne_version": [], "upper_version": "5.1.2", "version": "5.1.2"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/pytest-runner.json b/tools/oos/example/train_cached_file/pytest-runner.json deleted file mode 100644 index be94615054827d5a2ccbb0e40cc6bc89e2573cc8..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/pytest-runner.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "pytest-runner", "version_dict": {"version": "4.2", "eq_version": "", "ge_version": "4.2", "lt_version": "5.0.0", "ne_version": [], "upper_version": ""}, "deep": {"count": 17, "list": ["aodh", "futurist", "hacking", "oslosphinx", "openstackdocstheme", "os-api-ref", "stestr", "cliff", "stevedore", "bandit", "oslotest", "os-client-config", "openstacksdk", "os-service-types", "keystoneauth1", "oauthlib", "PyJWT", "pytest-runner"]}, "requires": {"Sphinx": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "2.2.0", "version": "2.2.0"}, "jaraco.packaging": {"eq_version": "", "ge_version": "3.2", "lt_version": "", "ne_version": [], "upper_version": "", "version": "3.2"}, "rst.linker": {"eq_version": "", "ge_version": "1.9", "lt_version": "", "ne_version": [], "upper_version": "", "version": "1.9"}, "pytest": {"eq_version": "", "ge_version": "2.8", "lt_version": "", "ne_version": [], "upper_version": "5.1.2", "version": "5.1.2"}, "pytest-sugar": {"eq_version": "", "ge_version": "0.9.1", "lt_version": "", "ne_version": [], "upper_version": "", "version": "0.9.1"}, "collective.checkdocs": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}, "pytest-flake8": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}, "pytest-virtualenv": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/pytest-sugar.json b/tools/oos/example/train_cached_file/pytest-sugar.json deleted file mode 100644 index 4ddb52481b587211ed26dbb487ed123dd1e34815..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/pytest-sugar.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "pytest-sugar", "version_dict": {"version": "0.9.1", "eq_version": "", "ge_version": "0.9.1", "lt_version": "", "ne_version": [], "upper_version": ""}, "deep": {"count": 18, "list": ["aodh", "futurist", "hacking", "oslosphinx", "openstackdocstheme", "os-api-ref", "stestr", "cliff", "stevedore", "bandit", "oslotest", "os-client-config", "openstacksdk", "os-service-types", "keystoneauth1", "oauthlib", "PyJWT", "pytest-runner", "pytest-sugar"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/pytest-timeout.json b/tools/oos/example/train_cached_file/pytest-timeout.json deleted file mode 100644 index 2f58670fd34bff8a883f6d6c60cb623d46e39c1b..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/pytest-timeout.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "pytest-timeout", "version_dict": {"version": "1.3.0", "eq_version": "", "ge_version": "1.3.0", "lt_version": "2", "ne_version": [], "upper_version": ""}, "deep": 
{"count": 7, "list": ["aodh", "futurist", "hacking", "mock", "Sphinx", "setuptools", "virtualenv", "pytest-timeout"]}, "requires": {"pytest": {"eq_version": "", "ge_version": "3.6.0", "lt_version": "", "ne_version": [], "upper_version": "5.1.2", "version": "5.1.2"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/pytest-virtualenv.json b/tools/oos/example/train_cached_file/pytest-virtualenv.json deleted file mode 100644 index a5a0f8cdd4e7919ee76ee37b54d6d255a4483e41..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/pytest-virtualenv.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "pytest-virtualenv", "version_dict": {"version": "1.2.7", "eq_version": "", "ge_version": "1.2.7", "lt_version": "", "ne_version": [], "upper_version": ""}, "deep": {"count": 6, "list": ["aodh", "futurist", "hacking", "mock", "Sphinx", "setuptools", "pytest-virtualenv"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/pytest.json b/tools/oos/example/train_cached_file/pytest.json deleted file mode 100644 index 71bee4b2a94d79184973a80161e335bde8f7a85e..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/pytest.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "pytest", "version_dict": {"version": "5.1.2", "eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "5.1.2"}, "deep": {"count": 6, "list": ["aodh", "futurist", "hacking", "mock", "Sphinx", "sphinxcontrib-applehelp", "pytest"]}, "requires": {"py": {"eq_version": "", "ge_version": "1.5.0", "lt_version": "", "ne_version": [], "upper_version": "1.8.0", "version": "1.8.0"}, "packaging": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "19.1", "version": "19.1"}, "attrs": {"eq_version": "", "ge_version": "17.4.0", "lt_version": "", "ne_version": [], "upper_version": "19.1.0", "version": "19.1.0"}, "more-itertools": {"eq_version": "", "ge_version": "4.0.0", "lt_version": "", "ne_version": [], "upper_version": "7.2.0", "version": "7.2.0"}, "atomicwrites": {"eq_version": "", "ge_version": "1.0", "lt_version": "", "ne_version": [], "upper_version": "1.3.0", "version": "1.3.0"}, "pluggy": {"eq_version": "", "ge_version": "0.12", "lt_version": "1.0", "ne_version": [], "upper_version": "0.12.0", "version": "0.12.0"}, "wcwidth": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "0.1.7", "version": "0.1.7"}, "argcomplete": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}, "hypothesis": {"eq_version": "", "ge_version": "3.56", "lt_version": "", "ne_version": [], "upper_version": "", "version": "3.56"}, "mock": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "3.0.5", "version": "3.0.5"}, "nose": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "1.3.7", "version": "1.3.7"}, "requests": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "2.22.0", "version": "2.22.0"}, "xmlschema": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/python-3parclient.json b/tools/oos/example/train_cached_file/python-3parclient.json deleted file mode 100644 index 
f552e74de3105849db031519ed9df27a8baa3e21..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/python-3parclient.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "python-3parclient", "version_dict": {"version": "4.2.11", "eq_version": "", "ge_version": "4.1.0", "lt_version": "", "ne_version": [], "upper_version": "4.2.11"}, "deep": {"count": 1, "list": ["cinder", "python-3parclient"]}, "requires": {"eventlet": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "0.25.2", "version": "0.25.2"}, "paramiko": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "2.6.0", "version": "2.6.0"}, "requests": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "2.22.0", "version": "2.22.0"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/python-barbicanclient.json b/tools/oos/example/train_cached_file/python-barbicanclient.json deleted file mode 100644 index ec194f8489e7a9578ce7c75613fdd4eded445eb9..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/python-barbicanclient.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "python-barbicanclient", "version_dict": {"version": "4.9.0", "eq_version": "", "ge_version": "4.5.2", "lt_version": "", "ne_version": [], "upper_version": "4.9.0"}, "deep": {"count": 3, "list": ["aodh", "gnocchiclient", "python-openstackclient", "python-barbicanclient"]}, "requires": {"pbr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": ["2.1.0"], "upper_version": "5.4.3", "version": "5.4.3"}, "requests": {"eq_version": "", "ge_version": "2.14.2", "lt_version": "", "ne_version": [], "upper_version": "2.22.0", "version": "2.22.0"}, "six": {"eq_version": "", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "cliff": {"eq_version": "", "ge_version": "2.8.0", "lt_version": "", "ne_version": ["2.9.0"], "upper_version": "2.16.0", "version": "2.16.0"}, "keystoneauth1": {"eq_version": "", "ge_version": "3.4.0", "lt_version": "", "ne_version": [], "upper_version": "3.17.4", "version": "3.17.4"}, "oslo.i18n": {"eq_version": "", "ge_version": "3.15.3", "lt_version": "", "ne_version": [], "upper_version": "3.24.0", "version": "3.24.0"}, "oslo.serialization": {"eq_version": "", "ge_version": "2.18.0", "lt_version": "", "ne_version": ["2.19.1"], "upper_version": "2.29.3", "version": "2.29.3"}, "oslo.utils": {"eq_version": "", "ge_version": "3.33.0", "lt_version": "", "ne_version": [], "upper_version": "3.41.6", "version": "3.41.6"}, "coverage": {"eq_version": "", "ge_version": "4.0", "lt_version": "", "ne_version": ["4.4"], "upper_version": "4.5.4", "version": "4.5.4"}, "hacking": {"eq_version": "", "ge_version": "1.1.0", "lt_version": "1.2.0", "ne_version": [], "upper_version": "", "version": "1.1.0"}, "fixtures": {"eq_version": "", "ge_version": "3.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.0", "version": "3.0.0"}, "requests-mock": {"eq_version": "", "ge_version": "1.2.0", "lt_version": "", "ne_version": [], "upper_version": "1.6.0", "version": "1.6.0"}, "mock": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.5", "version": "3.0.5"}, "stestr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.5.1", "version": "2.5.1"}, "testtools": {"eq_version": "", "ge_version": "2.2.0", "lt_version": "", 
"ne_version": [], "upper_version": "2.3.0", "version": "2.3.0"}, "oslotest": {"eq_version": "", "ge_version": "3.2.0", "lt_version": "", "ne_version": [], "upper_version": "3.8.1", "version": "3.8.1"}, "nose": {"eq_version": "", "ge_version": "1.3.7", "lt_version": "", "ne_version": [], "upper_version": "1.3.7", "version": "1.3.7"}, "oslo.config": {"eq_version": "", "ge_version": "5.2.0", "lt_version": "", "ne_version": [], "upper_version": "6.11.3", "version": "6.11.3"}, "python-openstackclient": {"eq_version": "", "ge_version": "3.12.0", "lt_version": "", "ne_version": [], "upper_version": "4.0.2", "version": "4.0.2"}, "Sphinx": {"eq_version": "", "ge_version": "1.6.2", "lt_version": "", "ne_version": ["1.6.6", "1.6.7"], "upper_version": "2.2.0", "version": "2.2.0"}, "openstackdocstheme": {"eq_version": "", "ge_version": "1.18.1", "lt_version": "", "ne_version": [], "upper_version": "1.31.1", "version": "1.31.1"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/python-blazarclient.json b/tools/oos/example/train_cached_file/python-blazarclient.json deleted file mode 100644 index 30505eee17a030890b78b4383dcc7d1163b455db..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/python-blazarclient.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "python-blazarclient", "version_dict": {"version": "2.2.1", "eq_version": "", "ge_version": "1.0.1", "lt_version": "", "ne_version": [], "upper_version": "2.2.1"}, "deep": {"count": 1, "list": ["openstack-heat", "python-blazarclient"]}, "requires": {"pbr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": ["2.1.0"], "upper_version": "5.4.3", "version": "5.4.3"}, "cliff": {"eq_version": "", "ge_version": "2.8.0", "lt_version": "", "ne_version": ["2.9.0"], "upper_version": "2.16.0", "version": "2.16.0"}, "PrettyTable": {"eq_version": "", "ge_version": "0.7.1", "lt_version": "0.8", "ne_version": [], "upper_version": "", "version": "0.7.1"}, "six": {"eq_version": "", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "Babel": {"eq_version": "", "ge_version": "2.3.4", "lt_version": "", "ne_version": ["2.4.0"], "upper_version": "2.7.0", "version": "2.7.0"}, "oslo.i18n": {"eq_version": "", "ge_version": "3.15.3", "lt_version": "", "ne_version": [], "upper_version": "3.24.0", "version": "3.24.0"}, "oslo.log": {"eq_version": "", "ge_version": "3.36.0", "lt_version": "", "ne_version": [], "upper_version": "3.44.3", "version": "3.44.3"}, "oslo.utils": {"eq_version": "", "ge_version": "3.33.0", "lt_version": "", "ne_version": [], "upper_version": "3.41.6", "version": "3.41.6"}, "keystoneauth1": {"eq_version": "", "ge_version": "3.4.0", "lt_version": "", "ne_version": [], "upper_version": "3.17.4", "version": "3.17.4"}, "hacking": {"eq_version": "", "ge_version": "1.1.0", "lt_version": "1.2.0", "ne_version": [], "upper_version": "", "version": "1.1.0"}, "mock": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.5", "version": "3.0.5"}, "oslotest": {"eq_version": "", "ge_version": "3.2.0", "lt_version": "", "ne_version": [], "upper_version": "3.8.1", "version": "3.8.1"}, "fixtures": {"eq_version": "", "ge_version": "3.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.0", "version": "3.0.0"}, "testrepository": {"eq_version": "", "ge_version": "0.0.18", "lt_version": "", "ne_version": [], "upper_version": "0.0.20", "version": "0.0.20"}, "testtools": {"eq_version": "", 
"ge_version": "2.2.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.0", "version": "2.3.0"}, "coverage": {"eq_version": "", "ge_version": "4.0", "lt_version": "", "ne_version": ["4.4"], "upper_version": "4.5.4", "version": "4.5.4"}, "openstackdocstheme": {"eq_version": "", "ge_version": "1.18.1", "lt_version": "", "ne_version": [], "upper_version": "1.31.1", "version": "1.31.1"}, "reno": {"eq_version": "", "ge_version": "2.5.0", "lt_version": "", "ne_version": [], "upper_version": "2.11.3", "version": "2.11.3"}, "Sphinx": {"eq_version": "", "ge_version": "1.6.2", "lt_version": "", "ne_version": ["1.6.6", "1.6.7"], "upper_version": "2.2.0", "version": "2.2.0"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/python-cinderclient.json b/tools/oos/example/train_cached_file/python-cinderclient.json deleted file mode 100644 index 5d45dae892c6d8f942fb98fcfc36d6b761c99d62..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/python-cinderclient.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "python-cinderclient", "version_dict": {"version": "5.0.2", "eq_version": "", "ge_version": "3.3.0", "lt_version": "", "ne_version": ["4.0.0"], "upper_version": "5.0.2"}, "deep": {"count": 4, "list": ["aodh", "gnocchiclient", "python-openstackclient", "python-novaclient", "python-cinderclient"]}, "requires": {"pbr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": ["2.1.0"], "upper_version": "5.4.3", "version": "5.4.3"}, "PrettyTable": {"eq_version": "", "ge_version": "0.7.1", "lt_version": "0.8", "ne_version": [], "upper_version": "", "version": "0.7.1"}, "keystoneauth1": {"eq_version": "", "ge_version": "3.4.0", "lt_version": "", "ne_version": [], "upper_version": "3.17.4", "version": "3.17.4"}, "simplejson": {"eq_version": "", "ge_version": "3.5.1", "lt_version": "", "ne_version": [], "upper_version": "3.16.0", "version": "3.16.0"}, "Babel": {"eq_version": "", "ge_version": "2.3.4", "lt_version": "", "ne_version": ["2.4.0"], "upper_version": "2.7.0", "version": "2.7.0"}, "six": {"eq_version": "", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "oslo.i18n": {"eq_version": "", "ge_version": "3.15.3", "lt_version": "", "ne_version": [], "upper_version": "3.24.0", "version": "3.24.0"}, "oslo.utils": {"eq_version": "", "ge_version": "3.33.0", "lt_version": "", "ne_version": [], "upper_version": "3.41.6", "version": "3.41.6"}, "requests": {"eq_version": "", "ge_version": "2.14.2", "lt_version": "", "ne_version": ["2.20.0"], "upper_version": "2.22.0", "version": "2.22.0"}, "hacking": {"eq_version": "", "ge_version": "0.12.0", "lt_version": "0.14", "ne_version": ["0.13.0"], "upper_version": "", "version": "0.12.0"}, "coverage": {"eq_version": "", "ge_version": "4.0", "lt_version": "", "ne_version": ["4.4"], "upper_version": "4.5.4", "version": "4.5.4"}, "ddt": {"eq_version": "", "ge_version": "1.0.1", "lt_version": "", "ne_version": [], "upper_version": "1.2.1", "version": "1.2.1"}, "fixtures": {"eq_version": "", "ge_version": "3.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.0", "version": "3.0.0"}, "mock": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.5", "version": "3.0.5"}, "reno": {"eq_version": "", "ge_version": "2.5.0", "lt_version": "", "ne_version": [], "upper_version": "2.11.3", "version": "2.11.3"}, "requests-mock": {"eq_version": "", "ge_version": "1.2.0", "lt_version": "", 
"ne_version": [], "upper_version": "1.6.0", "version": "1.6.0"}, "tempest": {"eq_version": "", "ge_version": "17.1.0", "lt_version": "", "ne_version": [], "upper_version": "22.1.0", "version": "22.1.0"}, "testtools": {"eq_version": "", "ge_version": "2.2.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.0", "version": "2.3.0"}, "stestr": {"eq_version": "", "ge_version": "1.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.5.1", "version": "2.5.1"}, "oslo.serialization": {"eq_version": "", "ge_version": "2.18.0", "lt_version": "", "ne_version": ["2.19.1"], "upper_version": "2.29.3", "version": "2.29.3"}, "bandit": {"eq_version": "", "ge_version": "1.1.0", "lt_version": "", "ne_version": [], "upper_version": "", "version": "1.1.0"}, "openstackdocstheme": {"eq_version": "", "ge_version": "1.20.0", "lt_version": "", "ne_version": [], "upper_version": "1.31.1", "version": "1.31.1"}, "Sphinx": {"eq_version": "", "ge_version": "1.6.2", "lt_version": "", "ne_version": ["1.6.6", "1.6.7", "2.1.0"], "upper_version": "2.2.0", "version": "2.2.0"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/python-congressclient.json b/tools/oos/example/train_cached_file/python-congressclient.json deleted file mode 100644 index 0eaca16c520593f29f8ce5f38ff66f75b207e1d4..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/python-congressclient.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "python-congressclient", "version_dict": {"version": "1.13.0", "eq_version": "", "ge_version": "1.9.0", "lt_version": "2000", "ne_version": [], "upper_version": "1.13.0"}, "deep": {"count": 3, "list": ["aodh", "gnocchiclient", "python-openstackclient", "python-congressclient"]}, "requires": {"pbr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": ["2.1.0"], "upper_version": "5.4.3", "version": "5.4.3"}, "Babel": {"eq_version": "", "ge_version": "2.3.4", "lt_version": "", "ne_version": ["2.4.0"], "upper_version": "2.7.0", "version": "2.7.0"}, "cliff": {"eq_version": "", "ge_version": "2.8.0", "lt_version": "", "ne_version": ["2.9.0"], "upper_version": "2.16.0", "version": "2.16.0"}, "keystoneauth1": {"eq_version": "", "ge_version": "3.4.0", "lt_version": "", "ne_version": [], "upper_version": "3.17.4", "version": "3.17.4"}, "oslo.i18n": {"eq_version": "", "ge_version": "3.15.3", "lt_version": "", "ne_version": [], "upper_version": "3.24.0", "version": "3.24.0"}, "oslo.log": {"eq_version": "", "ge_version": "3.36.0", "lt_version": "", "ne_version": [], "upper_version": "3.44.3", "version": "3.44.3"}, "oslo.serialization": {"eq_version": "", "ge_version": "2.18.0", "lt_version": "", "ne_version": ["2.19.1"], "upper_version": "2.29.3", "version": "2.29.3"}, "six": {"eq_version": "", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "hacking": {"eq_version": "", "ge_version": "1.1.0", "lt_version": "1.2.0", "ne_version": [], "upper_version": "", "version": "1.1.0"}, "coverage": {"eq_version": "", "ge_version": "4.0", "lt_version": "", "ne_version": ["4.4"], "upper_version": "4.5.4", "version": "4.5.4"}, "fixtures": {"eq_version": "", "ge_version": "3.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.0", "version": "3.0.0"}, "stestr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.5.1", "version": "2.5.1"}, "testtools": {"eq_version": "", "ge_version": "2.2.0", "lt_version": "", "ne_version": [], 
"upper_version": "2.3.0", "version": "2.3.0"}, "mock": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.5", "version": "3.0.5"}, "Sphinx": {"eq_version": "", "ge_version": "1.6.2", "lt_version": "", "ne_version": ["1.6.6", "1.6.7"], "upper_version": "2.2.0", "version": "2.2.0"}, "sphinxcontrib-apidoc": {"eq_version": "", "ge_version": "0.2.0", "lt_version": "", "ne_version": [], "upper_version": "0.3.0", "version": "0.3.0"}, "openstackdocstheme": {"eq_version": "", "ge_version": "1.18.1", "lt_version": "", "ne_version": [], "upper_version": "1.31.1", "version": "1.31.1"}, "reno": {"eq_version": "", "ge_version": "2.5.0", "lt_version": "", "ne_version": [], "upper_version": "2.11.3", "version": "2.11.3"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/python-consul.json b/tools/oos/example/train_cached_file/python-consul.json deleted file mode 100644 index 5e39b751db3973a7f92a413e84d50d797fa37211..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/python-consul.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "python-consul", "version_dict": {"version": "1.1.0", "eq_version": "", "ge_version": "0.6.0", "lt_version": "", "ne_version": [], "upper_version": "1.1.0"}, "deep": {"count": 4, "list": ["aodh", "keystonemiddleware", "oslo.messaging", "kombu", "python-consul"]}, "requires": {"requests": {"eq_version": "", "ge_version": "2.0", "lt_version": "", "ne_version": [], "upper_version": "2.22.0", "version": "2.22.0"}, "six": {"eq_version": "", "ge_version": "1.4", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "aiohttp": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}, "tornado": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "5.1.1", "version": "5.1.1"}, "twisted": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}, "treq": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/python-cyborgclient.json b/tools/oos/example/train_cached_file/python-cyborgclient.json deleted file mode 100644 index 13e7ab1277ddfa2e969ff059e48b5b6ded2fde00..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/python-cyborgclient.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "python-cyborgclient", "version_dict": {"version": "0.4.0", "eq_version": "0.4.0", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": ""}, "deep": {"count": 0, "list": ["python-cyborgclient"]}, "requires": {"pbr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": ["2.1.0"], "upper_version": "5.4.3", "version": "5.4.3"}, "Babel": {"eq_version": "", "ge_version": "2.3.4", "lt_version": "", "ne_version": ["2.4.0"], "upper_version": "2.7.0", "version": "2.7.0"}, "six": {"eq_version": "", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "keystoneauth1": {"eq_version": "", "ge_version": "3.3.0", "lt_version": "", "ne_version": [], "upper_version": "3.17.4", "version": "3.17.4"}, "stevedore": {"eq_version": "", "ge_version": "1.20.0", "lt_version": "", "ne_version": [], "upper_version": "1.31.0", "version": "1.31.0"}, "requests": {"eq_version": "", "ge_version": 
"2.14.2", "lt_version": "", "ne_version": [], "upper_version": "2.22.0", "version": "2.22.0"}, "oslo.i18n": {"eq_version": "", "ge_version": "3.15.3", "lt_version": "", "ne_version": [], "upper_version": "3.24.0", "version": "3.24.0"}, "oslo.log": {"eq_version": "", "ge_version": "3.36.0", "lt_version": "", "ne_version": [], "upper_version": "3.44.3", "version": "3.44.3"}, "oslo.serialization": {"eq_version": "", "ge_version": "2.18.0", "lt_version": "", "ne_version": ["2.19.1"], "upper_version": "2.29.3", "version": "2.29.3"}, "oslo.utils": {"eq_version": "", "ge_version": "3.33.0", "lt_version": "", "ne_version": [], "upper_version": "3.41.6", "version": "3.41.6"}, "os-client-config": {"eq_version": "", "ge_version": "1.28.0", "lt_version": "", "ne_version": [], "upper_version": "1.33.0", "version": "1.33.0"}, "osc-lib": {"eq_version": "", "ge_version": "1.8.0", "lt_version": "", "ne_version": [], "upper_version": "1.14.1", "version": "1.14.1"}, "PrettyTable": {"eq_version": "", "ge_version": "0.7.1", "lt_version": "0.8", "ne_version": [], "upper_version": "", "version": "0.7.1"}, "cryptography": {"eq_version": "", "ge_version": "1.9", "lt_version": "", "ne_version": ["2.0"], "upper_version": "2.8", "version": "2.8"}, "decorator": {"eq_version": "", "ge_version": "3.4.0", "lt_version": "", "ne_version": [], "upper_version": "4.4.0", "version": "4.4.0"}, "hacking": {"eq_version": "", "ge_version": "0.12.0", "lt_version": "0.13", "ne_version": [], "upper_version": "", "version": "0.12.0"}, "coverage": {"eq_version": "", "ge_version": "4.0", "lt_version": "", "ne_version": ["4.4"], "upper_version": "4.5.4", "version": "4.5.4"}, "python-subunit": {"eq_version": "", "ge_version": "0.0.18", "lt_version": "", "ne_version": [], "upper_version": "1.4.0", "version": "1.4.0"}, "Sphinx": {"eq_version": "", "ge_version": "1.6.2", "lt_version": "", "ne_version": [], "upper_version": "2.2.0", "version": "2.2.0"}, "oslotest": {"eq_version": "", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "3.8.1", "version": "3.8.1"}, "stestr": {"eq_version": "", "ge_version": "1.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.5.1", "version": "2.5.1"}, "testtools": {"eq_version": "", "ge_version": "1.4.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.0", "version": "2.3.0"}, "openstackdocstheme": {"eq_version": "", "ge_version": "1.11.0", "lt_version": "", "ne_version": [], "upper_version": "1.31.1", "version": "1.31.1"}, "reno": {"eq_version": "", "ge_version": "1.8.0", "lt_version": "", "ne_version": [], "upper_version": "2.11.3", "version": "2.11.3"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/python-dateutil.json b/tools/oos/example/train_cached_file/python-dateutil.json deleted file mode 100644 index a133f33f1f272f2d49206f46b536c6a9b13fc9b9..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/python-dateutil.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "python-dateutil", "version_dict": {"version": "2.8.0", "eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "2.8.0"}, "deep": {"count": 2, "list": ["aodh", "croniter", "python-dateutil"]}, "requires": {"six": {"eq_version": "", "ge_version": "1.5", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/python-designateclient.json b/tools/oos/example/train_cached_file/python-designateclient.json deleted 
file mode 100644 index 68cbb681988720b83a3f5a8cb04bfe7a79968bf9..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/python-designateclient.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "python-designateclient", "version_dict": {"version": "3.0.0", "eq_version": "", "ge_version": "2.7.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.0"}, "deep": {"count": 3, "list": ["aodh", "gnocchiclient", "python-openstackclient", "python-designateclient"]}, "requires": {"cliff": {"eq_version": "", "ge_version": "2.8.0", "lt_version": "", "ne_version": ["2.9.0"], "upper_version": "2.16.0", "version": "2.16.0"}, "jsonschema": {"eq_version": "", "ge_version": "2.6.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.2", "version": "3.0.2"}, "osc-lib": {"eq_version": "", "ge_version": "1.8.0", "lt_version": "", "ne_version": [], "upper_version": "1.14.1", "version": "1.14.1"}, "oslo.serialization": {"eq_version": "", "ge_version": "2.18.0", "lt_version": "", "ne_version": ["2.19.1"], "upper_version": "2.29.3", "version": "2.29.3"}, "oslo.utils": {"eq_version": "", "ge_version": "3.33.0", "lt_version": "", "ne_version": [], "upper_version": "3.41.6", "version": "3.41.6"}, "pbr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": ["2.1.0"], "upper_version": "5.4.3", "version": "5.4.3"}, "keystoneauth1": {"eq_version": "", "ge_version": "3.4.0", "lt_version": "", "ne_version": [], "upper_version": "3.17.4", "version": "3.17.4"}, "requests": {"eq_version": "", "ge_version": "2.14.2", "lt_version": "", "ne_version": [], "upper_version": "2.22.0", "version": "2.22.0"}, "six": {"eq_version": "", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "stevedore": {"eq_version": "", "ge_version": "1.20.0", "lt_version": "", "ne_version": [], "upper_version": "1.31.0", "version": "1.31.0"}, "debtcollector": {"eq_version": "", "ge_version": "1.2.0", "lt_version": "", "ne_version": [], "upper_version": "1.22.0", "version": "1.22.0"}, "hacking": {"eq_version": "", "ge_version": "1.1.0", "lt_version": "1.2.0", "ne_version": [], "upper_version": "", "version": "1.1.0"}, "coverage": {"eq_version": "", "ge_version": "4.0", "lt_version": "", "ne_version": ["4.4"], "upper_version": "4.5.4", "version": "4.5.4"}, "mock": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.5", "version": "3.0.5"}, "oslo.config": {"eq_version": "", "ge_version": "5.2.0", "lt_version": "", "ne_version": [], "upper_version": "6.11.3", "version": "6.11.3"}, "oslotest": {"eq_version": "", "ge_version": "3.2.0", "lt_version": "", "ne_version": [], "upper_version": "3.8.1", "version": "3.8.1"}, "os-testr": {"eq_version": "", "ge_version": "1.0.0", "lt_version": "", "ne_version": [], "upper_version": "1.1.0", "version": "1.1.0"}, "python-subunit": {"eq_version": "", "ge_version": "1.0.0", "lt_version": "", "ne_version": [], "upper_version": "1.4.0", "version": "1.4.0"}, "requests-mock": {"eq_version": "", "ge_version": "1.2.0", "lt_version": "", "ne_version": [], "upper_version": "1.6.0", "version": "1.6.0"}, "stestr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.5.1", "version": "2.5.1"}, "reno": {"eq_version": "", "ge_version": "2.7.0", "lt_version": "", "ne_version": [], "upper_version": "2.11.3", "version": "2.11.3"}, "tempest": {"eq_version": "", "ge_version": "17.1.0", "lt_version": "", "ne_version": [], "upper_version": 
"22.1.0", "version": "22.1.0"}, "Sphinx": {"eq_version": "", "ge_version": "1.6.2", "lt_version": "", "ne_version": ["1.6.6", "1.6.7"], "upper_version": "2.2.0", "version": "2.2.0"}, "openstackdocstheme": {"eq_version": "", "ge_version": "1.18.1", "lt_version": "", "ne_version": [], "upper_version": "1.31.1", "version": "1.31.1"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/python-dracclient.json b/tools/oos/example/train_cached_file/python-dracclient.json deleted file mode 100644 index d672c63e8dac298c19cbf05a9da5ac84e62425e6..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/python-dracclient.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "python-dracclient", "version_dict": {"version": "3.0.0", "eq_version": "", "ge_version": "3.0.0", "lt_version": "4.0.0", "ne_version": [], "upper_version": ""}, "deep": {"count": 1, "list": ["ironic", "python-dracclient"]}, "requires": {"lxml": {"eq_version": "", "ge_version": "2.3", "lt_version": "", "ne_version": [], "upper_version": "4.4.1", "version": "4.4.1"}, "pbr": {"eq_version": "", "ge_version": "1.6", "lt_version": "", "ne_version": [], "upper_version": "5.4.3", "version": "5.4.3"}, "requests": {"eq_version": "", "ge_version": "2.10.0", "lt_version": "", "ne_version": [], "upper_version": "2.22.0", "version": "2.22.0"}, "coverage": {"eq_version": "", "ge_version": "3.6", "lt_version": "", "ne_version": [], "upper_version": "4.5.4", "version": "4.5.4"}, "doc8": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "0.8.0", "version": "0.8.0"}, "hacking": {"eq_version": "", "ge_version": "1.1.0", "lt_version": "1.2.0", "ne_version": [], "upper_version": "", "version": "1.1.0"}, "mock": {"eq_version": "", "ge_version": "2.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.5", "version": "3.0.5"}, "requests-mock": {"eq_version": "", "ge_version": "1.0", "lt_version": "", "ne_version": [], "upper_version": "1.6.0", "version": "1.6.0"}, "Sphinx": {"eq_version": "", "ge_version": "1.2.1", "lt_version": "1.3", "ne_version": ["1.3b1"], "upper_version": "2.2.0", "version": "2.2.0"}, "oslosphinx": {"eq_version": "", "ge_version": "2.5.0", "lt_version": "", "ne_version": ["3.4.0"], "upper_version": "4.18.0", "version": "4.18.0"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/python-glanceclient.json b/tools/oos/example/train_cached_file/python-glanceclient.json deleted file mode 100644 index 2f991e0f3c3f4d328fcdea0ce35df0973a5736ff..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/python-glanceclient.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "python-glanceclient", "version_dict": {"version": "2.17.1", "eq_version": "", "ge_version": "2.8.0", "lt_version": "", "ne_version": [], "upper_version": "2.17.1"}, "deep": {"count": 12, "list": ["aodh", "futurist", "hacking", "oslosphinx", "openstackdocstheme", "os-api-ref", "stestr", "cliff", "stevedore", "bandit", "oslotest", "os-client-config", "python-glanceclient"]}, "requires": {"pbr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": ["2.1.0"], "upper_version": "5.4.3", "version": "5.4.3"}, "PrettyTable": {"eq_version": "", "ge_version": "0.7.1", "lt_version": "0.8", "ne_version": [], "upper_version": "", "version": "0.7.1"}, "keystoneauth1": {"eq_version": "", "ge_version": "3.6.2", "lt_version": "", "ne_version": [], "upper_version": "3.17.4", "version": "3.17.4"}, "requests": {"eq_version": "", "ge_version": 
"2.14.2", "lt_version": "", "ne_version": [], "upper_version": "2.22.0", "version": "2.22.0"}, "warlock": {"eq_version": "", "ge_version": "1.2.0", "lt_version": "2", "ne_version": [], "upper_version": "1.3.3", "version": "1.3.3"}, "six": {"eq_version": "", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "oslo.utils": {"eq_version": "", "ge_version": "3.33.0", "lt_version": "", "ne_version": [], "upper_version": "3.41.6", "version": "3.41.6"}, "oslo.i18n": {"eq_version": "", "ge_version": "3.15.3", "lt_version": "", "ne_version": [], "upper_version": "3.24.0", "version": "3.24.0"}, "wrapt": {"eq_version": "", "ge_version": "1.7.0", "lt_version": "", "ne_version": [], "upper_version": "1.11.2", "version": "1.11.2"}, "pyOpenSSL": {"eq_version": "", "ge_version": "17.1.0", "lt_version": "", "ne_version": [], "upper_version": "19.1.0", "version": "19.1.0"}, "hacking": {"eq_version": "", "ge_version": "1.1.0", "lt_version": "1.2.0", "ne_version": [], "upper_version": "", "version": "1.1.0"}, "coverage": {"eq_version": "", "ge_version": "4.0", "lt_version": "", "ne_version": ["4.4"], "upper_version": "4.5.4", "version": "4.5.4"}, "mock": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.5", "version": "3.0.5"}, "os-client-config": {"eq_version": "", "ge_version": "1.28.0", "lt_version": "", "ne_version": [], "upper_version": "1.33.0", "version": "1.33.0"}, "stestr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.5.1", "version": "2.5.1"}, "testtools": {"eq_version": "", "ge_version": "2.2.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.0", "version": "2.3.0"}, "testscenarios": {"eq_version": "", "ge_version": "0.4", "lt_version": "", "ne_version": [], "upper_version": "0.5.0", "version": "0.5.0"}, "fixtures": {"eq_version": "", "ge_version": "3.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.0", "version": "3.0.0"}, "requests-mock": {"eq_version": "", "ge_version": "1.2.0", "lt_version": "", "ne_version": [], "upper_version": "1.6.0", "version": "1.6.0"}, "tempest": {"eq_version": "", "ge_version": "17.1.0", "lt_version": "", "ne_version": [], "upper_version": "22.1.0", "version": "22.1.0"}, "openstackdocstheme": {"eq_version": "", "ge_version": "1.20.0", "lt_version": "", "ne_version": [], "upper_version": "1.31.1", "version": "1.31.1"}, "reno": {"eq_version": "", "ge_version": "2.5.0", "lt_version": "", "ne_version": [], "upper_version": "2.11.3", "version": "2.11.3"}, "Sphinx": {"eq_version": "", "ge_version": "1.6.2", "lt_version": "", "ne_version": ["1.6.6", "1.6.7", "2.1.0"], "upper_version": "2.2.0", "version": "2.2.0"}, "sphinxcontrib-apidoc": {"eq_version": "", "ge_version": "0.2.0", "lt_version": "", "ne_version": [], "upper_version": "0.3.0", "version": "0.3.0"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/python-heatclient.json b/tools/oos/example/train_cached_file/python-heatclient.json deleted file mode 100644 index 26557343ef11ed427034cef2d37419c5e3c9c3de..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/python-heatclient.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "python-heatclient", "version_dict": {"version": "1.18.1", "eq_version": "", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "1.18.1"}, "deep": {"count": 3, "list": ["aodh", "gnocchiclient", "python-openstackclient", 
"python-heatclient"]}, "requires": {"Babel": {"eq_version": "", "ge_version": "2.3.4", "lt_version": "", "ne_version": ["2.4.0"], "upper_version": "2.7.0", "version": "2.7.0"}, "pbr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": ["2.1.0"], "upper_version": "5.4.3", "version": "5.4.3"}, "cliff": {"eq_version": "", "ge_version": "2.8.0", "lt_version": "", "ne_version": ["2.9.0"], "upper_version": "2.16.0", "version": "2.16.0"}, "iso8601": {"eq_version": "", "ge_version": "0.1.11", "lt_version": "", "ne_version": [], "upper_version": "0.1.12", "version": "0.1.12"}, "osc-lib": {"eq_version": "", "ge_version": "1.8.0", "lt_version": "", "ne_version": [], "upper_version": "1.14.1", "version": "1.14.1"}, "PrettyTable": {"eq_version": "", "ge_version": "0.7.2", "lt_version": "0.8", "ne_version": [], "upper_version": "", "version": "0.7.2"}, "oslo.i18n": {"eq_version": "", "ge_version": "3.15.3", "lt_version": "", "ne_version": [], "upper_version": "3.24.0", "version": "3.24.0"}, "oslo.serialization": {"eq_version": "", "ge_version": "2.18.0", "lt_version": "", "ne_version": ["2.19.1"], "upper_version": "2.29.3", "version": "2.29.3"}, "oslo.utils": {"eq_version": "", "ge_version": "3.33.0", "lt_version": "", "ne_version": [], "upper_version": "3.41.6", "version": "3.41.6"}, "keystoneauth1": {"eq_version": "", "ge_version": "3.4.0", "lt_version": "", "ne_version": [], "upper_version": "3.17.4", "version": "3.17.4"}, "python-swiftclient": {"eq_version": "", "ge_version": "3.2.0", "lt_version": "", "ne_version": [], "upper_version": "3.8.1", "version": "3.8.1"}, "PyYAML": {"eq_version": "", "ge_version": "3.12", "lt_version": "", "ne_version": [], "upper_version": "5.1.2", "version": "5.1.2"}, "requests": {"eq_version": "", "ge_version": "2.14.2", "lt_version": "", "ne_version": [], "upper_version": "2.22.0", "version": "2.22.0"}, "six": {"eq_version": "", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "hacking": {"eq_version": "", "ge_version": "0.12.0", "lt_version": "0.14", "ne_version": ["0.13.0"], "upper_version": "", "version": "0.12.0"}, "coverage": {"eq_version": "", "ge_version": "4.0", "lt_version": "", "ne_version": ["4.4"], "upper_version": "4.5.4", "version": "4.5.4"}, "fixtures": {"eq_version": "", "ge_version": "3.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.0", "version": "3.0.0"}, "requests-mock": {"eq_version": "", "ge_version": "1.2.0", "lt_version": "", "ne_version": [], "upper_version": "1.6.0", "version": "1.6.0"}, "mock": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.5", "version": "3.0.5"}, "mox3": {"eq_version": "", "ge_version": "0.20.0", "lt_version": "", "ne_version": [], "upper_version": "0.28.0", "version": "0.28.0"}, "python-openstackclient": {"eq_version": "", "ge_version": "3.12.0", "lt_version": "", "ne_version": [], "upper_version": "4.0.2", "version": "4.0.2"}, "stestr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.5.1", "version": "2.5.1"}, "tempest": {"eq_version": "", "ge_version": "17.1.0", "lt_version": "", "ne_version": [], "upper_version": "22.1.0", "version": "22.1.0"}, "testscenarios": {"eq_version": "", "ge_version": "0.4", "lt_version": "", "ne_version": [], "upper_version": "0.5.0", "version": "0.5.0"}, "testtools": {"eq_version": "", "ge_version": "2.2.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.0", "version": "2.3.0"}, 
"openstackdocstheme": {"eq_version": "", "ge_version": "1.18.1", "lt_version": "", "ne_version": [], "upper_version": "1.31.1", "version": "1.31.1"}, "reno": {"eq_version": "", "ge_version": "2.5.0", "lt_version": "", "ne_version": [], "upper_version": "2.11.3", "version": "2.11.3"}, "Sphinx": {"eq_version": "", "ge_version": "1.6.2", "lt_version": "", "ne_version": ["1.6.6", "1.6.7"], "upper_version": "2.2.0", "version": "2.2.0"}, "sphinxcontrib-httpdomain": {"eq_version": "", "ge_version": "1.3.0", "lt_version": "", "ne_version": [], "upper_version": "1.7.0", "version": "1.7.0"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/python-ibmcclient.json b/tools/oos/example/train_cached_file/python-ibmcclient.json deleted file mode 100644 index 590276afa3da26da53f77e527df7eae99f8d6949..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/python-ibmcclient.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "python-ibmcclient", "version_dict": {"version": "0.1.0", "eq_version": "", "ge_version": "0.1.0", "lt_version": "0.3.0", "ne_version": ["0.2.1"], "upper_version": ""}, "deep": {"count": 1, "list": ["ironic", "python-ibmcclient"]}, "requires": {"requests": {"eq_version": "", "ge_version": "2.14.2", "lt_version": "", "ne_version": [], "upper_version": "2.22.0", "version": "2.22.0"}, "six": {"eq_version": "", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "coverage": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "4.5.4", "version": "4.5.4"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/python-ironic-inspector-client.json b/tools/oos/example/train_cached_file/python-ironic-inspector-client.json deleted file mode 100644 index 6d9f40ea7108415aaca24115260bcb8d5673fdc0..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/python-ironic-inspector-client.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "python-ironic-inspector-client", "version_dict": {"version": "3.7.1", "eq_version": "", "ge_version": "1.5.0", "lt_version": "", "ne_version": [], "upper_version": "3.7.1"}, "deep": {"count": 3, "list": ["aodh", "gnocchiclient", "python-openstackclient", "python-ironic-inspector-client"]}, "requires": {"keystoneauth1": {"eq_version": "", "ge_version": "3.4.0", "lt_version": "", "ne_version": [], "upper_version": "3.17.4", "version": "3.17.4"}, "osc-lib": {"eq_version": "", "ge_version": "1.8.0", "lt_version": "", "ne_version": [], "upper_version": "1.14.1", "version": "1.14.1"}, "oslo.i18n": {"eq_version": "", "ge_version": "3.15.3", "lt_version": "", "ne_version": [], "upper_version": "3.24.0", "version": "3.24.0"}, "oslo.utils": {"eq_version": "", "ge_version": "3.33.0", "lt_version": "", "ne_version": [], "upper_version": "3.41.6", "version": "3.41.6"}, "pbr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": ["2.1.0"], "upper_version": "5.4.3", "version": "5.4.3"}, "PyYAML": {"eq_version": "", "ge_version": "3.12", "lt_version": "", "ne_version": [], "upper_version": "5.1.2", "version": "5.1.2"}, "requests": {"eq_version": "", "ge_version": "2.14.2", "lt_version": "", "ne_version": [], "upper_version": "2.22.0", "version": "2.22.0"}, "six": {"eq_version": "", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "coverage": {"eq_version": "", "ge_version": "4.0", "lt_version": "", "ne_version": ["4.4"], 
"upper_version": "4.5.4", "version": "4.5.4"}, "doc8": {"eq_version": "", "ge_version": "0.6.0", "lt_version": "", "ne_version": [], "upper_version": "0.8.0", "version": "0.8.0"}, "fixtures": {"eq_version": "", "ge_version": "3.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.0", "version": "3.0.0"}, "flake8-import-order": {"eq_version": "", "ge_version": "0.13", "lt_version": "", "ne_version": [], "upper_version": "", "version": "0.13"}, "hacking": {"eq_version": "", "ge_version": "1.0.0", "lt_version": "1.2.0", "ne_version": [], "upper_version": "", "version": "1.0.0"}, "mock": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.5", "version": "3.0.5"}, "requests-mock": {"eq_version": "", "ge_version": "1.2.0", "lt_version": "", "ne_version": [], "upper_version": "1.6.0", "version": "1.6.0"}, "oslo.concurrency": {"eq_version": "", "ge_version": "3.25.0", "lt_version": "", "ne_version": [], "upper_version": "3.30.1", "version": "3.30.1"}, "python-openstackclient": {"eq_version": "", "ge_version": "3.12.0", "lt_version": "", "ne_version": [], "upper_version": "4.0.2", "version": "4.0.2"}, "Sphinx": {"eq_version": "", "ge_version": "1.6.2", "lt_version": "", "ne_version": ["1.6.6", "1.6.7", "2.1.0"], "upper_version": "2.2.0", "version": "2.2.0"}, "openstackdocstheme": {"eq_version": "", "ge_version": "1.20.0", "lt_version": "", "ne_version": [], "upper_version": "1.31.1", "version": "1.31.1"}, "reno": {"eq_version": "", "ge_version": "2.5.0", "lt_version": "", "ne_version": [], "upper_version": "2.11.3", "version": "2.11.3"}, "sphinxcontrib-apidoc": {"eq_version": "", "ge_version": "0.2.0", "lt_version": "", "ne_version": [], "upper_version": "0.3.0", "version": "0.3.0"}, "sphinxcontrib-svg2pdfconverter": {"eq_version": "", "ge_version": "0.1.0", "lt_version": "", "ne_version": [], "upper_version": "0.1.0", "version": "0.1.0"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/python-ironicclient.json b/tools/oos/example/train_cached_file/python-ironicclient.json deleted file mode 100644 index f645583c1d6c4d718fda9a9f323e099fc9d357e8..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/python-ironicclient.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "python-ironicclient", "version_dict": {"version": "3.1.2", "eq_version": "", "ge_version": "2.3.0", "lt_version": "", "ne_version": [], "upper_version": "3.1.2"}, "deep": {"count": 3, "list": ["aodh", "gnocchiclient", "python-openstackclient", "python-ironicclient"]}, "requires": {"pbr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": ["2.1.0"], "upper_version": "5.4.3", "version": "5.4.3"}, "appdirs": {"eq_version": "", "ge_version": "1.3.0", "lt_version": "", "ne_version": [], "upper_version": "1.4.3", "version": "1.4.3"}, "dogpile.cache": {"eq_version": "", "ge_version": "0.6.2", "lt_version": "", "ne_version": [], "upper_version": "0.7.1", "version": "0.7.1"}, "jsonschema": {"eq_version": "", "ge_version": "2.6.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.2", "version": "3.0.2"}, "keystoneauth1": {"eq_version": "", "ge_version": "3.4.0", "lt_version": "", "ne_version": [], "upper_version": "3.17.4", "version": "3.17.4"}, "osc-lib": {"eq_version": "", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "1.14.1", "version": "1.14.1"}, "oslo.config": {"eq_version": "", "ge_version": "5.2.0", "lt_version": "", "ne_version": [], "upper_version": "6.11.3", 
"version": "6.11.3"}, "oslo.i18n": {"eq_version": "", "ge_version": "3.15.3", "lt_version": "", "ne_version": [], "upper_version": "3.24.0", "version": "3.24.0"}, "oslo.serialization": {"eq_version": "", "ge_version": "2.18.0", "lt_version": "", "ne_version": ["2.19.1"], "upper_version": "2.29.3", "version": "2.29.3"}, "oslo.utils": {"eq_version": "", "ge_version": "3.33.0", "lt_version": "", "ne_version": [], "upper_version": "3.41.6", "version": "3.41.6"}, "PyYAML": {"eq_version": "", "ge_version": "3.12", "lt_version": "", "ne_version": [], "upper_version": "5.1.2", "version": "5.1.2"}, "requests": {"eq_version": "", "ge_version": "2.14.2", "lt_version": "", "ne_version": [], "upper_version": "2.22.0", "version": "2.22.0"}, "six": {"eq_version": "", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "hacking": {"eq_version": "", "ge_version": "1.0.0", "lt_version": "1.1.0", "ne_version": [], "upper_version": "", "version": "1.0.0"}, "coverage": {"eq_version": "", "ge_version": "4.0", "lt_version": "", "ne_version": ["4.4"], "upper_version": "4.5.4", "version": "4.5.4"}, "doc8": {"eq_version": "", "ge_version": "0.6.0", "lt_version": "", "ne_version": [], "upper_version": "0.8.0", "version": "0.8.0"}, "fixtures": {"eq_version": "", "ge_version": "3.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.0", "version": "3.0.0"}, "requests-mock": {"eq_version": "", "ge_version": "1.2.0", "lt_version": "", "ne_version": [], "upper_version": "1.6.0", "version": "1.6.0"}, "mock": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.5", "version": "3.0.5"}, "Babel": {"eq_version": "", "ge_version": "2.3.4", "lt_version": "", "ne_version": ["2.4.0"], "upper_version": "2.7.0", "version": "2.7.0"}, "oslotest": {"eq_version": "", "ge_version": "3.2.0", "lt_version": "", "ne_version": [], "upper_version": "3.8.1", "version": "3.8.1"}, "testtools": {"eq_version": "", "ge_version": "2.2.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.0", "version": "2.3.0"}, "tempest": {"eq_version": "", "ge_version": "17.1.0", "lt_version": "", "ne_version": [], "upper_version": "22.1.0", "version": "22.1.0"}, "stestr": {"eq_version": "", "ge_version": "1.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.5.1", "version": "2.5.1"}, "ddt": {"eq_version": "", "ge_version": "1.0.1", "lt_version": "", "ne_version": [], "upper_version": "1.2.1", "version": "1.2.1"}, "python-openstackclient": {"eq_version": "", "ge_version": "3.12.0", "lt_version": "", "ne_version": [], "upper_version": "4.0.2", "version": "4.0.2"}, "openstackdocstheme": {"eq_version": "", "ge_version": "1.20.0", "lt_version": "", "ne_version": [], "upper_version": "1.31.1", "version": "1.31.1"}, "reno": {"eq_version": "", "ge_version": "2.5.0", "lt_version": "", "ne_version": [], "upper_version": "2.11.3", "version": "2.11.3"}, "Sphinx": {"eq_version": "", "ge_version": "1.6.2", "lt_version": "", "ne_version": ["1.6.6", "1.6.7"], "upper_version": "2.2.0", "version": "2.2.0"}, "sphinxcontrib-apidoc": {"eq_version": "", "ge_version": "0.2.0", "lt_version": "", "ne_version": [], "upper_version": "0.3.0", "version": "0.3.0"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/python-karborclient.json b/tools/oos/example/train_cached_file/python-karborclient.json deleted file mode 100644 index 7ae1f88ff4d26cf4a91e00b271294ceb2800f337..0000000000000000000000000000000000000000 --- 
a/tools/oos/example/train_cached_file/python-karborclient.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "python-karborclient", "version_dict": {"version": "1.3.0", "eq_version": "", "ge_version": "0.6.0", "lt_version": "", "ne_version": [], "upper_version": "1.3.0"}, "deep": {"count": 3, "list": ["aodh", "gnocchiclient", "python-openstackclient", "python-karborclient"]}, "requires": {"pbr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": ["2.1.0"], "upper_version": "5.4.3", "version": "5.4.3"}, "PrettyTable": {"eq_version": "", "ge_version": "0.7.1", "lt_version": "0.8", "ne_version": [], "upper_version": "", "version": "0.7.1"}, "keystoneauth1": {"eq_version": "", "ge_version": "3.4.0", "lt_version": "", "ne_version": [], "upper_version": "3.17.4", "version": "3.17.4"}, "requests": {"eq_version": "", "ge_version": "2.14.2", "lt_version": "", "ne_version": [], "upper_version": "2.22.0", "version": "2.22.0"}, "simplejson": {"eq_version": "", "ge_version": "3.5.1", "lt_version": "", "ne_version": [], "upper_version": "3.16.0", "version": "3.16.0"}, "Babel": {"eq_version": "", "ge_version": "2.3.4", "lt_version": "", "ne_version": ["2.4.0"], "upper_version": "2.7.0", "version": "2.7.0"}, "six": {"eq_version": "", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "osc-lib": {"eq_version": "", "ge_version": "1.8.0", "lt_version": "", "ne_version": [], "upper_version": "1.14.1", "version": "1.14.1"}, "oslo.utils": {"eq_version": "", "ge_version": "3.33.0", "lt_version": "", "ne_version": [], "upper_version": "3.41.6", "version": "3.41.6"}, "oslo.log": {"eq_version": "", "ge_version": "3.36.0", "lt_version": "", "ne_version": [], "upper_version": "3.44.3", "version": "3.44.3"}, "oslo.i18n": {"eq_version": "", "ge_version": "3.15.3", "lt_version": "", "ne_version": [], "upper_version": "3.24.0", "version": "3.24.0"}, "hacking": {"eq_version": "", "ge_version": "0.12.0", "lt_version": "0.14", "ne_version": ["0.13.0"], "upper_version": "", "version": "0.12.0"}, "coverage": {"eq_version": "", "ge_version": "4.0", "lt_version": "", "ne_version": ["4.4"], "upper_version": "4.5.4", "version": "4.5.4"}, "python-subunit": {"eq_version": "", "ge_version": "1.0.0", "lt_version": "", "ne_version": [], "upper_version": "1.4.0", "version": "1.4.0"}, "docutils": {"eq_version": "", "ge_version": "0.11", "lt_version": "", "ne_version": [], "upper_version": "0.15.2", "version": "0.15.2"}, "oslotest": {"eq_version": "", "ge_version": "3.2.0", "lt_version": "", "ne_version": [], "upper_version": "3.8.1", "version": "3.8.1"}, "python-openstackclient": {"eq_version": "", "ge_version": "3.12.0", "lt_version": "", "ne_version": [], "upper_version": "4.0.2", "version": "4.0.2"}, "requests-mock": {"eq_version": "", "ge_version": "1.2.0", "lt_version": "", "ne_version": [], "upper_version": "1.6.0", "version": "1.6.0"}, "testrepository": {"eq_version": "", "ge_version": "0.0.18", "lt_version": "", "ne_version": [], "upper_version": "0.0.20", "version": "0.0.20"}, "testscenarios": {"eq_version": "", "ge_version": "0.4", "lt_version": "", "ne_version": [], "upper_version": "0.5.0", "version": "0.5.0"}, "testtools": {"eq_version": "", "ge_version": "2.2.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.0", "version": "2.3.0"}, "Sphinx": {"eq_version": "", "ge_version": "1.6.2", "lt_version": "", "ne_version": ["1.6.6", "1.6.7"], "upper_version": "2.2.0", "version": "2.2.0"}, "openstackdocstheme": {"eq_version": "", "ge_version": 
"1.18.1", "lt_version": "", "ne_version": [], "upper_version": "1.31.1", "version": "1.31.1"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/python-keystoneclient.json b/tools/oos/example/train_cached_file/python-keystoneclient.json deleted file mode 100644 index 638c2a0fc7d2c0ea9f25e95fa49594a09dbac284..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/python-keystoneclient.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "python-keystoneclient", "version_dict": {"version": "3.21.0", "eq_version": "", "ge_version": "3.20.0", "lt_version": "", "ne_version": [], "upper_version": "3.21.0"}, "deep": {"count": 2, "list": ["aodh", "keystonemiddleware", "python-keystoneclient"]}, "requires": {"pbr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": ["2.1.0"], "upper_version": "5.4.3", "version": "5.4.3"}, "debtcollector": {"eq_version": "", "ge_version": "1.2.0", "lt_version": "", "ne_version": [], "upper_version": "1.22.0", "version": "1.22.0"}, "keystoneauth1": {"eq_version": "", "ge_version": "3.4.0", "lt_version": "", "ne_version": [], "upper_version": "3.17.4", "version": "3.17.4"}, "oslo.config": {"eq_version": "", "ge_version": "5.2.0", "lt_version": "", "ne_version": [], "upper_version": "6.11.3", "version": "6.11.3"}, "oslo.i18n": {"eq_version": "", "ge_version": "3.15.3", "lt_version": "", "ne_version": [], "upper_version": "3.24.0", "version": "3.24.0"}, "oslo.serialization": {"eq_version": "", "ge_version": "2.18.0", "lt_version": "", "ne_version": ["2.19.1"], "upper_version": "2.29.3", "version": "2.29.3"}, "oslo.utils": {"eq_version": "", "ge_version": "3.33.0", "lt_version": "", "ne_version": [], "upper_version": "3.41.6", "version": "3.41.6"}, "requests": {"eq_version": "", "ge_version": "2.14.2", "lt_version": "", "ne_version": [], "upper_version": "2.22.0", "version": "2.22.0"}, "six": {"eq_version": "", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "stevedore": {"eq_version": "", "ge_version": "1.20.0", "lt_version": "", "ne_version": [], "upper_version": "1.31.0", "version": "1.31.0"}, "hacking": {"eq_version": "", "ge_version": "1.1.0", "lt_version": "1.2.0", "ne_version": [], "upper_version": "", "version": "1.1.0"}, "flake8-docstrings": {"eq_version": "0.2.1.post1", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "0.2.1.post1"}, "coverage": {"eq_version": "", "ge_version": "4.0", "lt_version": "", "ne_version": ["4.4"], "upper_version": "4.5.4", "version": "4.5.4"}, "fixtures": {"eq_version": "", "ge_version": "3.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.0", "version": "3.0.0"}, "keyring": {"eq_version": "", "ge_version": "5.5.1", "lt_version": "", "ne_version": [], "upper_version": "19.1.0", "version": "19.1.0"}, "lxml": {"eq_version": "", "ge_version": "3.4.1", "lt_version": "", "ne_version": ["3.7.0"], "upper_version": "4.4.1", "version": "4.4.1"}, "mock": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.5", "version": "3.0.5"}, "oauthlib": {"eq_version": "", "ge_version": "0.6.2", "lt_version": "", "ne_version": [], "upper_version": "3.1.0", "version": "3.1.0"}, "oslotest": {"eq_version": "", "ge_version": "3.2.0", "lt_version": "", "ne_version": [], "upper_version": "3.8.1", "version": "3.8.1"}, "requests-mock": {"eq_version": "", "ge_version": "1.2.0", "lt_version": "", "ne_version": [], "upper_version": "1.6.0", 
"version": "1.6.0"}, "tempest": {"eq_version": "", "ge_version": "17.1.0", "lt_version": "", "ne_version": [], "upper_version": "22.1.0", "version": "22.1.0"}, "stestr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.5.1", "version": "2.5.1"}, "testresources": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.0.1", "version": "2.0.1"}, "testscenarios": {"eq_version": "", "ge_version": "0.4", "lt_version": "", "ne_version": [], "upper_version": "0.5.0", "version": "0.5.0"}, "testtools": {"eq_version": "", "ge_version": "2.2.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.0", "version": "2.3.0"}, "bandit": {"eq_version": "", "ge_version": "1.1.0", "lt_version": "", "ne_version": ["1.6.0"], "upper_version": "", "version": "1.1.0"}, "openstackdocstheme": {"eq_version": "", "ge_version": "1.20.0", "lt_version": "", "ne_version": [], "upper_version": "1.31.1", "version": "1.31.1"}, "Sphinx": {"eq_version": "", "ge_version": "1.6.2", "lt_version": "", "ne_version": ["1.6.6", "1.6.7"], "upper_version": "2.2.0", "version": "2.2.0"}, "reno": {"eq_version": "", "ge_version": "2.5.0", "lt_version": "", "ne_version": [], "upper_version": "2.11.3", "version": "2.11.3"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/python-ldap.json b/tools/oos/example/train_cached_file/python-ldap.json deleted file mode 100644 index e264b17111af71463c3b60c81fcd84d5ca8d700f..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/python-ldap.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "python-ldap", "version_dict": {"version": "3.2.0", "eq_version": "", "ge_version": "3.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.2.0"}, "deep": {"count": 1, "list": ["keystone", "python-ldap"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/python-lefthandclient.json b/tools/oos/example/train_cached_file/python-lefthandclient.json deleted file mode 100644 index f7a65234e8baa103c2262f24121531a8c42b8c5f..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/python-lefthandclient.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "python-lefthandclient", "version_dict": {"version": "2.0.0", "eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": ""}, "deep": {"count": 1, "list": ["cinder", "python-lefthandclient"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/python-magnumclient.json b/tools/oos/example/train_cached_file/python-magnumclient.json deleted file mode 100644 index 36caa9434f5174f2f2a87c1a70b297f9d5b01263..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/python-magnumclient.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "python-magnumclient", "version_dict": {"version": "2.16.0", "eq_version": "", "ge_version": "2.3.0", "lt_version": "", "ne_version": [], "upper_version": "2.16.0"}, "deep": {"count": 1, "list": ["openstack-heat", "python-magnumclient"]}, "requires": {"pbr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": ["2.1.0"], "upper_version": "5.4.3", "version": "5.4.3"}, "Babel": {"eq_version": "", "ge_version": "2.3.4", "lt_version": "", "ne_version": ["2.4.0"], "upper_version": "2.7.0", "version": "2.7.0"}, "six": {"eq_version": "", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": 
"1.12.0"}, "keystoneauth1": {"eq_version": "", "ge_version": "3.4.0", "lt_version": "", "ne_version": [], "upper_version": "3.17.4", "version": "3.17.4"}, "stevedore": {"eq_version": "", "ge_version": "1.20.0", "lt_version": "", "ne_version": [], "upper_version": "1.31.0", "version": "1.31.0"}, "requests": {"eq_version": "", "ge_version": "2.14.2", "lt_version": "", "ne_version": [], "upper_version": "2.22.0", "version": "2.22.0"}, "oslo.i18n": {"eq_version": "", "ge_version": "3.15.3", "lt_version": "", "ne_version": [], "upper_version": "3.24.0", "version": "3.24.0"}, "oslo.log": {"eq_version": "", "ge_version": "3.36.0", "lt_version": "", "ne_version": [], "upper_version": "3.44.3", "version": "3.44.3"}, "oslo.serialization": {"eq_version": "", "ge_version": "2.18.0", "lt_version": "", "ne_version": ["2.19.1"], "upper_version": "2.29.3", "version": "2.29.3"}, "oslo.utils": {"eq_version": "", "ge_version": "3.33.0", "lt_version": "", "ne_version": [], "upper_version": "3.41.6", "version": "3.41.6"}, "os-client-config": {"eq_version": "", "ge_version": "1.28.0", "lt_version": "", "ne_version": [], "upper_version": "1.33.0", "version": "1.33.0"}, "osc-lib": {"eq_version": "", "ge_version": "1.8.0", "lt_version": "", "ne_version": [], "upper_version": "1.14.1", "version": "1.14.1"}, "PrettyTable": {"eq_version": "", "ge_version": "0.7.2", "lt_version": "0.8", "ne_version": [], "upper_version": "", "version": "0.7.2"}, "cryptography": {"eq_version": "", "ge_version": "2.1", "lt_version": "", "ne_version": [], "upper_version": "2.8", "version": "2.8"}, "decorator": {"eq_version": "", "ge_version": "3.4.0", "lt_version": "", "ne_version": [], "upper_version": "4.4.0", "version": "4.4.0"}, "hacking": {"eq_version": "", "ge_version": "0.12.0", "lt_version": "0.14", "ne_version": ["0.13.0"], "upper_version": "", "version": "0.12.0"}, "bandit": {"eq_version": "", "ge_version": "1.1.0", "lt_version": "", "ne_version": ["1.6.0"], "upper_version": "", "version": "1.1.0"}, "coverage": {"eq_version": "", "ge_version": "4.0", "lt_version": "", "ne_version": ["4.4"], "upper_version": "4.5.4", "version": "4.5.4"}, "fixtures": {"eq_version": "", "ge_version": "3.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.0", "version": "3.0.0"}, "python-openstackclient": {"eq_version": "", "ge_version": "3.12.0", "lt_version": "", "ne_version": [], "upper_version": "4.0.2", "version": "4.0.2"}, "oslotest": {"eq_version": "", "ge_version": "3.2.0", "lt_version": "", "ne_version": [], "upper_version": "3.8.1", "version": "3.8.1"}, "osprofiler": {"eq_version": "", "ge_version": "1.4.0", "lt_version": "", "ne_version": [], "upper_version": "2.8.2", "version": "2.8.2"}, "stestr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.5.1", "version": "2.5.1"}, "testscenarios": {"eq_version": "", "ge_version": "0.4", "lt_version": "", "ne_version": [], "upper_version": "0.5.0", "version": "0.5.0"}, "testtools": {"eq_version": "", "ge_version": "2.2.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.0", "version": "2.3.0"}, "mock": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.5", "version": "3.0.5"}, "Sphinx": {"eq_version": "", "ge_version": "1.6.2", "lt_version": "", "ne_version": ["1.6.6", "1.6.7"], "upper_version": "2.2.0", "version": "2.2.0"}, "openstackdocstheme": {"eq_version": "", "ge_version": "1.18.1", "lt_version": "", "ne_version": [], "upper_version": "1.31.1", "version": "1.31.1"}, 
"reno": {"eq_version": "", "ge_version": "2.5.0", "lt_version": "", "ne_version": [], "upper_version": "2.11.3", "version": "2.11.3"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/python-manilaclient.json b/tools/oos/example/train_cached_file/python-manilaclient.json deleted file mode 100644 index 369abc3a204c244b1f0f35bba78a26784b4b6d14..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/python-manilaclient.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "python-manilaclient", "version_dict": {"version": "1.29.0", "eq_version": "", "ge_version": "1.16.0", "lt_version": "", "ne_version": [], "upper_version": "1.29.0"}, "deep": {"count": 1, "list": ["openstack-heat", "python-manilaclient"]}, "requires": {"pbr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": ["2.1.0"], "upper_version": "5.4.3", "version": "5.4.3"}, "ipaddress": {"eq_version": "", "ge_version": "1.0.17", "lt_version": "", "ne_version": [], "upper_version": "1.0.22", "version": "1.0.22"}, "oslo.config": {"eq_version": "", "ge_version": "5.2.0", "lt_version": "", "ne_version": [], "upper_version": "6.11.3", "version": "6.11.3"}, "oslo.log": {"eq_version": "", "ge_version": "3.36.0", "lt_version": "", "ne_version": [], "upper_version": "3.44.3", "version": "3.44.3"}, "oslo.serialization": {"eq_version": "", "ge_version": "2.18.0", "lt_version": "", "ne_version": ["2.19.1"], "upper_version": "2.29.3", "version": "2.29.3"}, "oslo.utils": {"eq_version": "", "ge_version": "3.33.0", "lt_version": "", "ne_version": [], "upper_version": "3.41.6", "version": "3.41.6"}, "PrettyTable": {"eq_version": "", "ge_version": "0.7.1", "lt_version": "0.8", "ne_version": [], "upper_version": "", "version": "0.7.1"}, "requests": {"eq_version": "", "ge_version": "2.14.2", "lt_version": "", "ne_version": [], "upper_version": "2.22.0", "version": "2.22.0"}, "simplejson": {"eq_version": "", "ge_version": "3.5.1", "lt_version": "", "ne_version": [], "upper_version": "3.16.0", "version": "3.16.0"}, "Babel": {"eq_version": "", "ge_version": "2.3.4", "lt_version": "", "ne_version": ["2.4.0"], "upper_version": "2.7.0", "version": "2.7.0"}, "six": {"eq_version": "", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "python-keystoneclient": {"eq_version": "", "ge_version": "3.8.0", "lt_version": "", "ne_version": [], "upper_version": "3.21.0", "version": "3.21.0"}, "debtcollector": {"eq_version": "", "ge_version": "1.2.0", "lt_version": "", "ne_version": [], "upper_version": "1.22.0", "version": "1.22.0"}, "hacking": {"eq_version": "", "ge_version": "0.12.0", "lt_version": "0.14", "ne_version": ["0.13.0"], "upper_version": "", "version": "0.12.0"}, "coverage": {"eq_version": "", "ge_version": "4.0", "lt_version": "", "ne_version": ["4.4"], "upper_version": "4.5.4", "version": "4.5.4"}, "ddt": {"eq_version": "", "ge_version": "1.0.1", "lt_version": "", "ne_version": [], "upper_version": "1.2.1", "version": "1.2.1"}, "fixtures": {"eq_version": "", "ge_version": "3.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.0", "version": "3.0.0"}, "mock": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.5", "version": "3.0.5"}, "os-testr": {"eq_version": "", "ge_version": "1.0.0", "lt_version": "", "ne_version": [], "upper_version": "1.1.0", "version": "1.1.0"}, "tempest": {"eq_version": "", "ge_version": "17.1.0", "lt_version": "", "ne_version": [], 
"upper_version": "22.1.0", "version": "22.1.0"}, "testtools": {"eq_version": "", "ge_version": "2.2.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.0", "version": "2.3.0"}, "python-openstackclient": {"eq_version": "", "ge_version": "3.12.0", "lt_version": "", "ne_version": [], "upper_version": "4.0.2", "version": "4.0.2"}, "openstackdocstheme": {"eq_version": "", "ge_version": "1.18.1", "lt_version": "", "ne_version": [], "upper_version": "1.31.1", "version": "1.31.1"}, "Sphinx": {"eq_version": "", "ge_version": "1.6.2", "lt_version": "", "ne_version": ["1.6.6", "1.6.7"], "upper_version": "2.2.0", "version": "2.2.0"}, "reno": {"eq_version": "", "ge_version": "2.5.0", "lt_version": "", "ne_version": [], "upper_version": "2.11.3", "version": "2.11.3"}, "sphinxcontrib-programoutput": {"eq_version": "", "ge_version": "0.11", "lt_version": "", "ne_version": [], "upper_version": "0.14", "version": "0.14"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/python-memcached.json b/tools/oos/example/train_cached_file/python-memcached.json deleted file mode 100644 index c0ca2eb14b3775761fd463bd580758ff1b1197e8..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/python-memcached.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "python-memcached", "version_dict": {"version": "1.59", "eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "1.59"}, "deep": {"count": 3, "list": ["aodh", "keystonemiddleware", "oslo.cache", "python-memcached"]}, "requires": {"six": {"eq_version": "", "ge_version": "1.4.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/python-mistralclient.json b/tools/oos/example/train_cached_file/python-mistralclient.json deleted file mode 100644 index ef0e46534f7ac1310683902c5717b25f0821913b..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/python-mistralclient.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "python-mistralclient", "version_dict": {"version": "3.10.0", "eq_version": "", "ge_version": "3.1.0", "lt_version": "", "ne_version": ["3.2.0"], "upper_version": "3.10.0"}, "deep": {"count": 3, "list": ["aodh", "gnocchiclient", "python-openstackclient", "python-mistralclient"]}, "requires": {"cliff": {"eq_version": "", "ge_version": "2.8.0", "lt_version": "", "ne_version": ["2.9.0"], "upper_version": "2.16.0", "version": "2.16.0"}, "osc-lib": {"eq_version": "", "ge_version": "1.8.0", "lt_version": "", "ne_version": [], "upper_version": "1.14.1", "version": "1.14.1"}, "oslo.utils": {"eq_version": "", "ge_version": "3.33.0", "lt_version": "", "ne_version": [], "upper_version": "3.41.6", "version": "3.41.6"}, "oslo.i18n": {"eq_version": "", "ge_version": "3.15.3", "lt_version": "", "ne_version": [], "upper_version": "3.24.0", "version": "3.24.0"}, "oslo.serialization": {"eq_version": "", "ge_version": "2.18.0", "lt_version": "", "ne_version": ["2.19.1"], "upper_version": "2.29.3", "version": "2.29.3"}, "pbr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": ["2.1.0"], "upper_version": "5.4.3", "version": "5.4.3"}, "keystoneauth1": {"eq_version": "", "ge_version": "3.4.0", "lt_version": "", "ne_version": [], "upper_version": "3.17.4", "version": "3.17.4"}, "PyYAML": {"eq_version": "", "ge_version": "3.12", "lt_version": "", "ne_version": [], "upper_version": "5.1.2", "version": "5.1.2"}, "requests": {"eq_version": "", 
"ge_version": "2.14.2", "lt_version": "", "ne_version": [], "upper_version": "2.22.0", "version": "2.22.0"}, "six": {"eq_version": "", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "stevedore": {"eq_version": "", "ge_version": "1.20.0", "lt_version": "", "ne_version": [], "upper_version": "1.31.0", "version": "1.31.0"}, "hacking": {"eq_version": "", "ge_version": "1.1.0", "lt_version": "1.2.0", "ne_version": [], "upper_version": "", "version": "1.1.0"}, "python-openstackclient": {"eq_version": "", "ge_version": "3.12.0", "lt_version": "", "ne_version": [], "upper_version": "4.0.2", "version": "4.0.2"}, "mock": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.5", "version": "3.0.5"}, "oslotest": {"eq_version": "", "ge_version": "3.2.0", "lt_version": "", "ne_version": [], "upper_version": "3.8.1", "version": "3.8.1"}, "requests-mock": {"eq_version": "", "ge_version": "1.2.0", "lt_version": "", "ne_version": [], "upper_version": "1.6.0", "version": "1.6.0"}, "tempest": {"eq_version": "", "ge_version": "17.1.0", "lt_version": "", "ne_version": [], "upper_version": "22.1.0", "version": "22.1.0"}, "osprofiler": {"eq_version": "", "ge_version": "1.4.0", "lt_version": "", "ne_version": [], "upper_version": "2.8.2", "version": "2.8.2"}, "stestr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.5.1", "version": "2.5.1"}, "docutils": {"eq_version": "", "ge_version": "0.11", "lt_version": "", "ne_version": [], "upper_version": "0.15.2", "version": "0.15.2"}, "Sphinx": {"eq_version": "", "ge_version": "1.6.2", "lt_version": "", "ne_version": ["1.6.6", "1.6.7", "2.1.0"], "upper_version": "2.2.0", "version": "2.2.0"}, "reno": {"eq_version": "", "ge_version": "2.5.0", "lt_version": "", "ne_version": [], "upper_version": "2.11.3", "version": "2.11.3"}, "openstackdocstheme": {"eq_version": "", "ge_version": "1.30.0", "lt_version": "", "ne_version": [], "upper_version": "1.31.1", "version": "1.31.1"}, "sphinxcontrib-apidoc": {"eq_version": "", "ge_version": "0.2.0", "lt_version": "", "ne_version": [], "upper_version": "0.3.0", "version": "0.3.0"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/python-monascaclient.json b/tools/oos/example/train_cached_file/python-monascaclient.json deleted file mode 100644 index e036f8830c3dd441f50dce15c4196f7015b771e4..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/python-monascaclient.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "python-monascaclient", "version_dict": {"version": "1.16.0", "eq_version": "", "ge_version": "1.12.0", "lt_version": "", "ne_version": [], "upper_version": "1.16.0"}, "deep": {"count": 1, "list": ["openstack-heat", "python-monascaclient"]}, "requires": {"osc-lib": {"eq_version": "", "ge_version": "1.8.0", "lt_version": "", "ne_version": [], "upper_version": "1.14.1", "version": "1.14.1"}, "oslo.serialization": {"eq_version": "", "ge_version": "2.18.0", "lt_version": "", "ne_version": ["2.19.1"], "upper_version": "2.29.3", "version": "2.29.3"}, "oslo.utils": {"eq_version": "", "ge_version": "3.33.0", "lt_version": "", "ne_version": [], "upper_version": "3.41.6", "version": "3.41.6"}, "Babel": {"eq_version": "", "ge_version": "2.3.4", "lt_version": "", "ne_version": ["2.4.0"], "upper_version": "2.7.0", "version": "2.7.0"}, "iso8601": {"eq_version": "", "ge_version": "0.1.11", "lt_version": "", "ne_version": [], 
"upper_version": "0.1.12", "version": "0.1.12"}, "pbr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": ["2.1.0"], "upper_version": "5.4.3", "version": "5.4.3"}, "PrettyTable": {"eq_version": "", "ge_version": "0.7.2", "lt_version": "0.8", "ne_version": [], "upper_version": "", "version": "0.7.2"}, "PyYAML": {"eq_version": "", "ge_version": "3.12", "lt_version": "", "ne_version": [], "upper_version": "5.1.2", "version": "5.1.2"}, "six": {"eq_version": "", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "hacking": {"eq_version": "", "ge_version": "1.1.0", "lt_version": "1.2.0", "ne_version": [], "upper_version": "", "version": "1.1.0"}, "bandit": {"eq_version": "", "ge_version": "1.1.0", "lt_version": "", "ne_version": [], "upper_version": "", "version": "1.1.0"}, "coverage": {"eq_version": "", "ge_version": "4.0", "lt_version": "", "ne_version": ["4.4"], "upper_version": "4.5.4", "version": "4.5.4"}, "oslotest": {"eq_version": "", "ge_version": "3.2.0", "lt_version": "", "ne_version": [], "upper_version": "3.8.1", "version": "3.8.1"}, "stestr": {"eq_version": "", "ge_version": "1.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.5.1", "version": "2.5.1"}, "testscenarios": {"eq_version": "", "ge_version": "0.4", "lt_version": "", "ne_version": [], "upper_version": "0.5.0", "version": "0.5.0"}, "testtools": {"eq_version": "", "ge_version": "2.2.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.0", "version": "2.3.0"}, "doc8": {"eq_version": "", "ge_version": "0.6.0", "lt_version": "", "ne_version": [], "upper_version": "0.8.0", "version": "0.8.0"}, "Sphinx": {"eq_version": "", "ge_version": "1.6.5", "lt_version": "", "ne_version": ["1.6.6", "1.6.7"], "upper_version": "2.2.0", "version": "2.2.0"}, "reno": {"eq_version": "", "ge_version": "2.5.0", "lt_version": "", "ne_version": [], "upper_version": "2.11.3", "version": "2.11.3"}, "openstackdocstheme": {"eq_version": "", "ge_version": "1.18.1", "lt_version": "", "ne_version": [], "upper_version": "1.31.1", "version": "1.31.1"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/python-muranoclient.json b/tools/oos/example/train_cached_file/python-muranoclient.json deleted file mode 100644 index 5e334727130be4a2226ef79ccc6e71772ce54009..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/python-muranoclient.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "python-muranoclient", "version_dict": {"version": "1.3.0", "eq_version": "", "ge_version": "0.8.2", "lt_version": "", "ne_version": [], "upper_version": "1.3.0"}, "deep": {"count": 3, "list": ["aodh", "gnocchiclient", "python-openstackclient", "python-muranoclient"]}, "requires": {"pbr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": ["2.1.0"], "upper_version": "5.4.3", "version": "5.4.3"}, "PrettyTable": {"eq_version": "", "ge_version": "0.7.2", "lt_version": "0.8", "ne_version": [], "upper_version": "", "version": "0.7.2"}, "python-glanceclient": {"eq_version": "", "ge_version": "2.8.0", "lt_version": "", "ne_version": [], "upper_version": "2.17.1", "version": "2.17.1"}, "python-keystoneclient": {"eq_version": "", "ge_version": "3.8.0", "lt_version": "", "ne_version": [], "upper_version": "3.21.0", "version": "3.21.0"}, "iso8601": {"eq_version": "", "ge_version": "0.1.11", "lt_version": "", "ne_version": [], "upper_version": "0.1.12", "version": "0.1.12"}, "six": {"eq_version": "", "ge_version": 
"1.10.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "Babel": {"eq_version": "", "ge_version": "2.3.4", "lt_version": "", "ne_version": ["2.4.0"], "upper_version": "2.7.0", "version": "2.7.0"}, "pyOpenSSL": {"eq_version": "", "ge_version": "17.1.0", "lt_version": "", "ne_version": [], "upper_version": "19.1.0", "version": "19.1.0"}, "requests": {"eq_version": "", "ge_version": "2.14.2", "lt_version": "", "ne_version": [], "upper_version": "2.22.0", "version": "2.22.0"}, "PyYAML": {"eq_version": "", "ge_version": "3.12", "lt_version": "", "ne_version": [], "upper_version": "5.1.2", "version": "5.1.2"}, "yaql": {"eq_version": "", "ge_version": "1.1.3", "lt_version": "", "ne_version": [], "upper_version": "1.1.3", "version": "1.1.3"}, "osc-lib": {"eq_version": "", "ge_version": "1.8.0", "lt_version": "", "ne_version": [], "upper_version": "1.14.1", "version": "1.14.1"}, "murano-pkg-check": {"eq_version": "", "ge_version": "0.3.0", "lt_version": "", "ne_version": [], "upper_version": "0.3.0", "version": "0.3.0"}, "oslo.serialization": {"eq_version": "", "ge_version": "2.18.0", "lt_version": "", "ne_version": ["2.19.1"], "upper_version": "2.29.3", "version": "2.29.3"}, "oslo.utils": {"eq_version": "", "ge_version": "3.33.0", "lt_version": "", "ne_version": [], "upper_version": "3.41.6", "version": "3.41.6"}, "oslo.log": {"eq_version": "", "ge_version": "3.36.0", "lt_version": "", "ne_version": [], "upper_version": "3.44.3", "version": "3.44.3"}, "oslo.i18n": {"eq_version": "", "ge_version": "3.15.3", "lt_version": "", "ne_version": [], "upper_version": "3.24.0", "version": "3.24.0"}, "hacking": {"eq_version": "", "ge_version": "0.12.0", "lt_version": "0.14", "ne_version": ["0.13.0"], "upper_version": "", "version": "0.12.0"}, "coverage": {"eq_version": "", "ge_version": "4.0", "lt_version": "", "ne_version": ["4.4"], "upper_version": "4.5.4", "version": "4.5.4"}, "fixtures": {"eq_version": "", "ge_version": "3.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.0", "version": "3.0.0"}, "mock": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.5", "version": "3.0.5"}, "requests-mock": {"eq_version": "", "ge_version": "1.1.0", "lt_version": "", "ne_version": [], "upper_version": "1.6.0", "version": "1.6.0"}, "tempest": {"eq_version": "", "ge_version": "17.1.0", "lt_version": "", "ne_version": [], "upper_version": "22.1.0", "version": "22.1.0"}, "stestr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.5.1", "version": "2.5.1"}, "testscenarios": {"eq_version": "", "ge_version": "0.4", "lt_version": "", "ne_version": [], "upper_version": "0.5.0", "version": "0.5.0"}, "testtools": {"eq_version": "", "ge_version": "2.2.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.0", "version": "2.3.0"}, "oslotest": {"eq_version": "", "ge_version": "3.2.0", "lt_version": "", "ne_version": [], "upper_version": "3.8.1", "version": "3.8.1"}, "os-testr": {"eq_version": "", "ge_version": "1.0.0", "lt_version": "", "ne_version": [], "upper_version": "1.1.0", "version": "1.1.0"}, "openstackdocstheme": {"eq_version": "", "ge_version": "1.20.0", "lt_version": "", "ne_version": [], "upper_version": "1.31.1", "version": "1.31.1"}, "Sphinx": {"eq_version": "", "ge_version": "1.6.2", "lt_version": "", "ne_version": ["1.6.6", "1.6.7", "2.1.0"], "upper_version": "2.2.0", "version": "2.2.0"}, "reno": {"eq_version": "", "ge_version": "2.5.0", "lt_version": "", 
"ne_version": [], "upper_version": "2.11.3", "version": "2.11.3"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/python-neutronclient.json b/tools/oos/example/train_cached_file/python-neutronclient.json deleted file mode 100644 index ce778436a37e7a687dbb7767c74a82d442dba27c..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/python-neutronclient.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "python-neutronclient", "version_dict": {"version": "6.14.1", "eq_version": "", "ge_version": "6.7.0", "lt_version": "", "ne_version": [], "upper_version": "6.14.1"}, "deep": {"count": 4, "list": ["aodh", "gnocchiclient", "python-openstackclient", "python-novaclient", "python-neutronclient"]}, "requires": {"pbr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": ["2.1.0"], "upper_version": "5.4.3", "version": "5.4.3"}, "cliff": {"eq_version": "", "ge_version": "2.8.0", "lt_version": "", "ne_version": ["2.9.0"], "upper_version": "2.16.0", "version": "2.16.0"}, "debtcollector": {"eq_version": "", "ge_version": "1.2.0", "lt_version": "", "ne_version": [], "upper_version": "1.22.0", "version": "1.22.0"}, "iso8601": {"eq_version": "", "ge_version": "0.1.11", "lt_version": "", "ne_version": [], "upper_version": "0.1.12", "version": "0.1.12"}, "netaddr": {"eq_version": "", "ge_version": "0.7.18", "lt_version": "", "ne_version": [], "upper_version": "0.7.19", "version": "0.7.19"}, "osc-lib": {"eq_version": "", "ge_version": "1.8.0", "lt_version": "", "ne_version": [], "upper_version": "1.14.1", "version": "1.14.1"}, "oslo.i18n": {"eq_version": "", "ge_version": "3.15.3", "lt_version": "", "ne_version": [], "upper_version": "3.24.0", "version": "3.24.0"}, "oslo.log": {"eq_version": "", "ge_version": "3.36.0", "lt_version": "", "ne_version": [], "upper_version": "3.44.3", "version": "3.44.3"}, "oslo.serialization": {"eq_version": "", "ge_version": "2.18.0", "lt_version": "", "ne_version": ["2.19.1"], "upper_version": "2.29.3", "version": "2.29.3"}, "oslo.utils": {"eq_version": "", "ge_version": "3.33.0", "lt_version": "", "ne_version": [], "upper_version": "3.41.6", "version": "3.41.6"}, "os-client-config": {"eq_version": "", "ge_version": "1.28.0", "lt_version": "", "ne_version": [], "upper_version": "1.33.0", "version": "1.33.0"}, "keystoneauth1": {"eq_version": "", "ge_version": "3.4.0", "lt_version": "", "ne_version": [], "upper_version": "3.17.4", "version": "3.17.4"}, "python-keystoneclient": {"eq_version": "", "ge_version": "3.8.0", "lt_version": "", "ne_version": [], "upper_version": "3.21.0", "version": "3.21.0"}, "requests": {"eq_version": "", "ge_version": "2.14.2", "lt_version": "", "ne_version": [], "upper_version": "2.22.0", "version": "2.22.0"}, "simplejson": {"eq_version": "", "ge_version": "3.5.1", "lt_version": "", "ne_version": [], "upper_version": "3.16.0", "version": "3.16.0"}, "six": {"eq_version": "", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "Babel": {"eq_version": "", "ge_version": "2.3.4", "lt_version": "", "ne_version": ["2.4.0"], "upper_version": "2.7.0", "version": "2.7.0"}, "hacking": {"eq_version": "", "ge_version": "1.1.0", "lt_version": "", "ne_version": [], "upper_version": "", "version": "1.1.0"}, "bandit": {"eq_version": "", "ge_version": "1.1.0", "lt_version": "", "ne_version": ["1.6.0"], "upper_version": "", "version": "1.1.0"}, "coverage": {"eq_version": "", "ge_version": "4.0", "lt_version": "", "ne_version": ["4.4"], 
"upper_version": "4.5.4", "version": "4.5.4"}, "fixtures": {"eq_version": "", "ge_version": "3.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.0", "version": "3.0.0"}, "flake8-import-order": {"eq_version": "0.12", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "0.12"}, "mock": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.5", "version": "3.0.5"}, "oslotest": {"eq_version": "", "ge_version": "3.2.0", "lt_version": "", "ne_version": [], "upper_version": "3.8.1", "version": "3.8.1"}, "osprofiler": {"eq_version": "", "ge_version": "2.3.0", "lt_version": "", "ne_version": [], "upper_version": "2.8.2", "version": "2.8.2"}, "python-openstackclient": {"eq_version": "", "ge_version": "3.12.0", "lt_version": "", "ne_version": [], "upper_version": "4.0.2", "version": "4.0.2"}, "python-subunit": {"eq_version": "", "ge_version": "1.0.0", "lt_version": "", "ne_version": [], "upper_version": "1.4.0", "version": "1.4.0"}, "requests-mock": {"eq_version": "", "ge_version": "1.2.0", "lt_version": "", "ne_version": [], "upper_version": "1.6.0", "version": "1.6.0"}, "stestr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.5.1", "version": "2.5.1"}, "testtools": {"eq_version": "", "ge_version": "2.2.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.0", "version": "2.3.0"}, "testscenarios": {"eq_version": "", "ge_version": "0.4", "lt_version": "", "ne_version": [], "upper_version": "0.5.0", "version": "0.5.0"}, "tempest": {"eq_version": "", "ge_version": "17.1.0", "lt_version": "", "ne_version": [], "upper_version": "22.1.0", "version": "22.1.0"}, "openstackdocstheme": {"eq_version": "", "ge_version": "1.18.1", "lt_version": "", "ne_version": [], "upper_version": "1.31.1", "version": "1.31.1"}, "reno": {"eq_version": "", "ge_version": "2.5.0", "lt_version": "", "ne_version": [], "upper_version": "2.11.3", "version": "2.11.3"}, "Sphinx": {"eq_version": "", "ge_version": "1.6.2", "lt_version": "", "ne_version": ["1.6.6", "1.6.7"], "upper_version": "2.2.0", "version": "2.2.0"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/python-novaclient.json b/tools/oos/example/train_cached_file/python-novaclient.json deleted file mode 100644 index 7539fe39b0604c57653eb9f0c534499a3c022952..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/python-novaclient.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "python-novaclient", "version_dict": {"version": "15.1.1", "eq_version": "", "ge_version": "15.0.0", "lt_version": "", "ne_version": [], "upper_version": "15.1.1"}, "deep": {"count": 3, "list": ["aodh", "gnocchiclient", "python-openstackclient", "python-novaclient"]}, "requires": {"pbr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": ["2.1.0"], "upper_version": "5.4.3", "version": "5.4.3"}, "keystoneauth1": {"eq_version": "", "ge_version": "3.5.0", "lt_version": "", "ne_version": [], "upper_version": "3.17.4", "version": "3.17.4"}, "iso8601": {"eq_version": "", "ge_version": "0.1.11", "lt_version": "", "ne_version": [], "upper_version": "0.1.12", "version": "0.1.12"}, "oslo.i18n": {"eq_version": "", "ge_version": "3.15.3", "lt_version": "", "ne_version": [], "upper_version": "3.24.0", "version": "3.24.0"}, "oslo.serialization": {"eq_version": "", "ge_version": "2.18.0", "lt_version": "", "ne_version": ["2.19.1"], "upper_version": "2.29.3", "version": "2.29.3"}, "oslo.utils": 
{"eq_version": "", "ge_version": "3.33.0", "lt_version": "", "ne_version": [], "upper_version": "3.41.6", "version": "3.41.6"}, "PrettyTable": {"eq_version": "", "ge_version": "0.7.2", "lt_version": "0.8", "ne_version": [], "upper_version": "", "version": "0.7.2"}, "simplejson": {"eq_version": "", "ge_version": "3.5.1", "lt_version": "", "ne_version": [], "upper_version": "3.16.0", "version": "3.16.0"}, "six": {"eq_version": "", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "Babel": {"eq_version": "", "ge_version": "2.3.4", "lt_version": "", "ne_version": ["2.4.0"], "upper_version": "2.7.0", "version": "2.7.0"}, "hacking": {"eq_version": "", "ge_version": "1.1.0", "lt_version": "1.2.0", "ne_version": [], "upper_version": "", "version": "1.1.0"}, "bandit": {"eq_version": "", "ge_version": "1.1.0", "lt_version": "", "ne_version": [], "upper_version": "", "version": "1.1.0"}, "coverage": {"eq_version": "", "ge_version": "4.0", "lt_version": "", "ne_version": ["4.4"], "upper_version": "4.5.4", "version": "4.5.4"}, "ddt": {"eq_version": "", "ge_version": "1.0.1", "lt_version": "", "ne_version": [], "upper_version": "1.2.1", "version": "1.2.1"}, "fixtures": {"eq_version": "", "ge_version": "3.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.0", "version": "3.0.0"}, "mock": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.5", "version": "3.0.5"}, "python-keystoneclient": {"eq_version": "", "ge_version": "3.8.0", "lt_version": "", "ne_version": [], "upper_version": "3.21.0", "version": "3.21.0"}, "python-cinderclient": {"eq_version": "", "ge_version": "3.3.0", "lt_version": "", "ne_version": ["4.0.0"], "upper_version": "5.0.2", "version": "5.0.2"}, "python-glanceclient": {"eq_version": "", "ge_version": "2.8.0", "lt_version": "", "ne_version": [], "upper_version": "2.17.1", "version": "2.17.1"}, "python-neutronclient": {"eq_version": "", "ge_version": "6.7.0", "lt_version": "", "ne_version": [], "upper_version": "6.14.1", "version": "6.14.1"}, "requests-mock": {"eq_version": "", "ge_version": "1.2.0", "lt_version": "", "ne_version": [], "upper_version": "1.6.0", "version": "1.6.0"}, "openstacksdk": {"eq_version": "", "ge_version": "0.11.2", "lt_version": "", "ne_version": [], "upper_version": "0.36.5", "version": "0.36.5"}, "osprofiler": {"eq_version": "", "ge_version": "1.4.0", "lt_version": "", "ne_version": [], "upper_version": "2.8.2", "version": "2.8.2"}, "stestr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.5.1", "version": "2.5.1"}, "testscenarios": {"eq_version": "", "ge_version": "0.4", "lt_version": "", "ne_version": [], "upper_version": "0.5.0", "version": "0.5.0"}, "testtools": {"eq_version": "", "ge_version": "2.2.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.0", "version": "2.3.0"}, "tempest": {"eq_version": "", "ge_version": "17.1.0", "lt_version": "", "ne_version": [], "upper_version": "22.1.0", "version": "22.1.0"}, "Sphinx": {"eq_version": "", "ge_version": "1.6.2", "lt_version": "", "ne_version": ["1.6.6", "1.6.7", "2.1.0"], "upper_version": "2.2.0", "version": "2.2.0"}, "openstackdocstheme": {"eq_version": "", "ge_version": "1.30.0", "lt_version": "", "ne_version": [], "upper_version": "1.31.1", "version": "1.31.1"}, "reno": {"eq_version": "", "ge_version": "2.5.0", "lt_version": "", "ne_version": [], "upper_version": "2.11.3", "version": "2.11.3"}, "sphinxcontrib-apidoc": 
{"eq_version": "", "ge_version": "0.2.0", "lt_version": "", "ne_version": [], "upper_version": "0.3.0", "version": "0.3.0"}, "whereto": {"eq_version": "", "ge_version": "0.3.0", "lt_version": "", "ne_version": [], "upper_version": "0.4.0", "version": "0.4.0"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/python-octaviaclient.json b/tools/oos/example/train_cached_file/python-octaviaclient.json deleted file mode 100644 index bf63d65ecd5488dc47334a1d9bb5926a355ea086..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/python-octaviaclient.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "python-octaviaclient", "version_dict": {"version": "1.10.1", "eq_version": "", "ge_version": "1.3.0", "lt_version": "", "ne_version": [], "upper_version": "1.10.1"}, "deep": {"count": 3, "list": ["aodh", "gnocchiclient", "python-openstackclient", "python-octaviaclient"]}, "requires": {"Babel": {"eq_version": "", "ge_version": "2.3.4", "lt_version": "", "ne_version": ["2.4.0"], "upper_version": "2.7.0", "version": "2.7.0"}, "cliff": {"eq_version": "", "ge_version": "2.8.0", "lt_version": "", "ne_version": ["2.9.0"], "upper_version": "2.16.0", "version": "2.16.0"}, "keystoneauth1": {"eq_version": "", "ge_version": "3.4.0", "lt_version": "", "ne_version": [], "upper_version": "3.17.4", "version": "3.17.4"}, "netifaces": {"eq_version": "", "ge_version": "0.10.4", "lt_version": "", "ne_version": [], "upper_version": "0.10.9", "version": "0.10.9"}, "python-neutronclient": {"eq_version": "", "ge_version": "6.7.0", "lt_version": "", "ne_version": [], "upper_version": "6.14.1", "version": "6.14.1"}, "python-openstackclient": {"eq_version": "", "ge_version": "3.12.0", "lt_version": "", "ne_version": [], "upper_version": "4.0.2", "version": "4.0.2"}, "osc-lib": {"eq_version": "", "ge_version": "1.8.0", "lt_version": "", "ne_version": [], "upper_version": "1.14.1", "version": "1.14.1"}, "oslo.serialization": {"eq_version": "", "ge_version": "2.18.0", "lt_version": "", "ne_version": ["2.19.1"], "upper_version": "2.29.3", "version": "2.29.3"}, "oslo.utils": {"eq_version": "", "ge_version": "3.33.0", "lt_version": "", "ne_version": [], "upper_version": "3.41.6", "version": "3.41.6"}, "pbr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": ["2.1.0"], "upper_version": "5.4.3", "version": "5.4.3"}, "requests": {"eq_version": "", "ge_version": "2.14.2", "lt_version": "", "ne_version": [], "upper_version": "2.22.0", "version": "2.22.0"}, "six": {"eq_version": "", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "hacking": {"eq_version": "", "ge_version": "0.12.0", "lt_version": "0.14", "ne_version": ["0.13.0"], "upper_version": "", "version": "0.12.0"}, "requests-mock": {"eq_version": "", "ge_version": "1.1.0", "lt_version": "", "ne_version": [], "upper_version": "1.6.0", "version": "1.6.0"}, "coverage": {"eq_version": "", "ge_version": "4.0", "lt_version": "", "ne_version": ["4.4"], "upper_version": "4.5.4", "version": "4.5.4"}, "mock": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.5", "version": "3.0.5"}, "python-subunit": {"eq_version": "", "ge_version": "1.0.0", "lt_version": "", "ne_version": [], "upper_version": "1.4.0", "version": "1.4.0"}, "oslotest": {"eq_version": "", "ge_version": "3.2.0", "lt_version": "", "ne_version": [], "upper_version": "3.8.1", "version": "3.8.1"}, "stestr": {"eq_version": "", "ge_version": "2.0.0", 
"lt_version": "", "ne_version": [], "upper_version": "2.5.1", "version": "2.5.1"}, "testscenarios": {"eq_version": "", "ge_version": "0.4", "lt_version": "", "ne_version": [], "upper_version": "0.5.0", "version": "0.5.0"}, "Sphinx": {"eq_version": "", "ge_version": "1.6.2", "lt_version": "", "ne_version": ["1.6.6", "1.6.7", "2.1.0"], "upper_version": "2.2.0", "version": "2.2.0"}, "openstackdocstheme": {"eq_version": "", "ge_version": "1.20.0", "lt_version": "", "ne_version": [], "upper_version": "1.31.1", "version": "1.31.1"}, "reno": {"eq_version": "", "ge_version": "2.5.0", "lt_version": "", "ne_version": [], "upper_version": "2.11.3", "version": "2.11.3"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/python-openstackclient.json b/tools/oos/example/train_cached_file/python-openstackclient.json deleted file mode 100644 index 94e9aea19bef945c67a90ec5ff6d5df137f217c3..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/python-openstackclient.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "python-openstackclient", "version_dict": {"version": "4.0.2", "eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "4.0.2"}, "deep": {"count": 2, "list": ["aodh", "gnocchiclient", "python-openstackclient"]}, "requires": {"pbr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": ["2.1.0"], "upper_version": "5.4.3", "version": "5.4.3"}, "six": {"eq_version": "", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "Babel": {"eq_version": "", "ge_version": "2.3.4", "lt_version": "", "ne_version": ["2.4.0"], "upper_version": "2.7.0", "version": "2.7.0"}, "cliff": {"eq_version": "", "ge_version": "2.8.0", "lt_version": "", "ne_version": ["2.9.0"], "upper_version": "2.16.0", "version": "2.16.0"}, "keystoneauth1": {"eq_version": "", "ge_version": "3.6.2", "lt_version": "", "ne_version": [], "upper_version": "3.17.4", "version": "3.17.4"}, "openstacksdk": {"eq_version": "", "ge_version": "0.17.0", "lt_version": "", "ne_version": [], "upper_version": "0.36.5", "version": "0.36.5"}, "osc-lib": {"eq_version": "", "ge_version": "1.14.0", "lt_version": "", "ne_version": [], "upper_version": "1.14.1", "version": "1.14.1"}, "oslo.i18n": {"eq_version": "", "ge_version": "3.15.3", "lt_version": "", "ne_version": [], "upper_version": "3.24.0", "version": "3.24.0"}, "oslo.utils": {"eq_version": "", "ge_version": "3.33.0", "lt_version": "", "ne_version": [], "upper_version": "3.41.6", "version": "3.41.6"}, "python-glanceclient": {"eq_version": "", "ge_version": "2.8.0", "lt_version": "", "ne_version": [], "upper_version": "2.17.1", "version": "2.17.1"}, "python-keystoneclient": {"eq_version": "", "ge_version": "3.17.0", "lt_version": "", "ne_version": [], "upper_version": "3.21.0", "version": "3.21.0"}, "python-novaclient": {"eq_version": "", "ge_version": "15.0.0", "lt_version": "", "ne_version": [], "upper_version": "15.1.1", "version": "15.1.1"}, "python-cinderclient": {"eq_version": "", "ge_version": "3.3.0", "lt_version": "", "ne_version": [], "upper_version": "5.0.2", "version": "5.0.2"}, "hacking": {"eq_version": "", "ge_version": "1.1.0", "lt_version": "1.2.0", "ne_version": [], "upper_version": "", "version": "1.1.0"}, "coverage": {"eq_version": "", "ge_version": "4.0", "lt_version": "", "ne_version": ["4.4"], "upper_version": "4.5.4", "version": "4.5.4"}, "fixtures": {"eq_version": "", "ge_version": "3.0.0", "lt_version": "", "ne_version": [], 
"upper_version": "3.0.0", "version": "3.0.0"}, "flake8-import-order": {"eq_version": "0.13", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "0.13"}, "mock": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.5", "version": "3.0.5"}, "oslotest": {"eq_version": "", "ge_version": "3.2.0", "lt_version": "", "ne_version": [], "upper_version": "3.8.1", "version": "3.8.1"}, "requests": {"eq_version": "", "ge_version": "2.14.2", "lt_version": "", "ne_version": [], "upper_version": "2.22.0", "version": "2.22.0"}, "requests-mock": {"eq_version": "", "ge_version": "1.2.0", "lt_version": "", "ne_version": [], "upper_version": "1.6.0", "version": "1.6.0"}, "stevedore": {"eq_version": "", "ge_version": "1.20.0", "lt_version": "", "ne_version": [], "upper_version": "1.31.0", "version": "1.31.0"}, "os-client-config": {"eq_version": "", "ge_version": "1.28.0", "lt_version": "", "ne_version": [], "upper_version": "1.33.0", "version": "1.33.0"}, "stestr": {"eq_version": "", "ge_version": "1.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.5.1", "version": "2.5.1"}, "testtools": {"eq_version": "", "ge_version": "2.2.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.0", "version": "2.3.0"}, "tempest": {"eq_version": "", "ge_version": "17.1.0", "lt_version": "", "ne_version": [], "upper_version": "22.1.0", "version": "22.1.0"}, "osprofiler": {"eq_version": "", "ge_version": "1.4.0", "lt_version": "", "ne_version": [], "upper_version": "2.8.2", "version": "2.8.2"}, "bandit": {"eq_version": "", "ge_version": "1.1.0", "lt_version": "", "ne_version": ["1.6.0"], "upper_version": "", "version": "1.1.0"}, "wrapt": {"eq_version": "", "ge_version": "1.7.0", "lt_version": "", "ne_version": [], "upper_version": "1.11.2", "version": "1.11.2"}, "aodhclient": {"eq_version": "", "ge_version": "0.9.0", "lt_version": "", "ne_version": [], "upper_version": "1.3.0", "version": "1.3.0"}, "gnocchiclient": {"eq_version": "", "ge_version": "3.3.1", "lt_version": "", "ne_version": [], "upper_version": "7.0.5", "version": "7.0.5"}, "python-barbicanclient": {"eq_version": "", "ge_version": "4.5.2", "lt_version": "", "ne_version": [], "upper_version": "4.9.0", "version": "4.9.0"}, "python-congressclient": {"eq_version": "", "ge_version": "1.9.0", "lt_version": "2000", "ne_version": [], "upper_version": "1.13.0", "version": "1.13.0"}, "python-designateclient": {"eq_version": "", "ge_version": "2.7.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.0", "version": "3.0.0"}, "python-heatclient": {"eq_version": "", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "1.18.1", "version": "1.18.1"}, "python-ironicclient": {"eq_version": "", "ge_version": "2.3.0", "lt_version": "", "ne_version": [], "upper_version": "3.1.2", "version": "3.1.2"}, "python-ironic-inspector-client": {"eq_version": "", "ge_version": "1.5.0", "lt_version": "", "ne_version": [], "upper_version": "3.7.1", "version": "3.7.1"}, "python-karborclient": {"eq_version": "", "ge_version": "0.6.0", "lt_version": "", "ne_version": [], "upper_version": "1.3.0", "version": "1.3.0"}, "python-mistralclient": {"eq_version": "", "ge_version": "3.1.0", "lt_version": "", "ne_version": ["3.2.0"], "upper_version": "3.10.0", "version": "3.10.0"}, "python-muranoclient": {"eq_version": "", "ge_version": "0.8.2", "lt_version": "", "ne_version": [], "upper_version": "1.3.0", "version": "1.3.0"}, "python-neutronclient": {"eq_version": "", 
"ge_version": "6.7.0", "lt_version": "", "ne_version": [], "upper_version": "6.14.1", "version": "6.14.1"}, "python-octaviaclient": {"eq_version": "", "ge_version": "1.3.0", "lt_version": "", "ne_version": [], "upper_version": "1.10.1", "version": "1.10.1"}, "python-rsdclient": {"eq_version": "", "ge_version": "0.1.0", "lt_version": "", "ne_version": [], "upper_version": "0.2.0", "version": "0.2.0"}, "python-saharaclient": {"eq_version": "", "ge_version": "1.4.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.0", "version": "2.3.0"}, "python-searchlightclient": {"eq_version": "", "ge_version": "1.0.0", "lt_version": "", "ne_version": [], "upper_version": "1.6.0", "version": "1.6.0"}, "python-senlinclient": {"eq_version": "", "ge_version": "1.1.0", "lt_version": "", "ne_version": [], "upper_version": "1.11.1", "version": "1.11.1"}, "python-troveclient": {"eq_version": "", "ge_version": "2.2.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.1", "version": "3.0.1"}, "python-zaqarclient": {"eq_version": "", "ge_version": "1.0.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "python-zunclient": {"eq_version": "", "ge_version": "3.4.0", "lt_version": "", "ne_version": [], "upper_version": "3.5.1", "version": "3.5.1"}, "openstackdocstheme": {"eq_version": "", "ge_version": "1.23.2", "lt_version": "", "ne_version": [], "upper_version": "1.31.1", "version": "1.31.1"}, "reno": {"eq_version": "", "ge_version": "2.5.0", "lt_version": "", "ne_version": [], "upper_version": "2.11.3", "version": "2.11.3"}, "Sphinx": {"eq_version": "", "ge_version": "1.6.5", "lt_version": "", "ne_version": ["1.6.6", "1.6.7"], "upper_version": "2.2.0", "version": "2.2.0"}, "sphinxcontrib-apidoc": {"eq_version": "", "ge_version": "0.2.0", "lt_version": "", "ne_version": [], "upper_version": "0.3.0", "version": "0.3.0"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/python-pcre.json b/tools/oos/example/train_cached_file/python-pcre.json deleted file mode 100644 index 8d64f29f3e82cfd94cbe9452e10b831dd24cddcc..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/python-pcre.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "python-pcre", "version_dict": {"version": "0.7", "eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "0.7"}, "deep": {"count": 5, "list": ["aodh", "gnocchiclient", "python-openstackclient", "python-novaclient", "whereto", "python-pcre"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/python-rsdclient.json b/tools/oos/example/train_cached_file/python-rsdclient.json deleted file mode 100644 index 5785a51cce347eed062a06490aefbac3d6fa7d05..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/python-rsdclient.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "python-rsdclient", "version_dict": {"version": "0.2.0", "eq_version": "", "ge_version": "0.1.0", "lt_version": "", "ne_version": [], "upper_version": "0.2.0"}, "deep": {"count": 3, "list": ["aodh", "gnocchiclient", "python-openstackclient", "python-rsdclient"]}, "requires": {"pbr": {"eq_version": "", "ge_version": "2.0", "lt_version": "", "ne_version": [], "upper_version": "5.4.3", "version": "5.4.3"}, "cliff": {"eq_version": "", "ge_version": "2.8.0", "lt_version": "", "ne_version": [], "upper_version": "2.16.0", "version": "2.16.0"}, "osc-lib": {"eq_version": "", "ge_version": "1.7.0", "lt_version": "", "ne_version": [], 
"upper_version": "1.14.1", "version": "1.14.1"}, "rsd-lib": {"eq_version": "", "ge_version": "0.0.1", "lt_version": "", "ne_version": [], "upper_version": "1.1.0", "version": "1.1.0"}, "six": {"eq_version": "", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "hacking": {"eq_version": "", "ge_version": "0.12.0", "lt_version": "0.13", "ne_version": [], "upper_version": "", "version": "0.12.0"}, "coverage": {"eq_version": "", "ge_version": "4.0", "lt_version": "", "ne_version": ["4.4"], "upper_version": "4.5.4", "version": "4.5.4"}, "Sphinx": {"eq_version": "", "ge_version": "1.6.2", "lt_version": "", "ne_version": [], "upper_version": "2.2.0", "version": "2.2.0"}, "oslotest": {"eq_version": "", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "3.8.1", "version": "3.8.1"}, "testrepository": {"eq_version": "", "ge_version": "0.0.18", "lt_version": "", "ne_version": [], "upper_version": "0.0.20", "version": "0.0.20"}, "testtools": {"eq_version": "", "ge_version": "1.4.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.0", "version": "2.3.0"}, "openstackdocstheme": {"eq_version": "", "ge_version": "1.11.0", "lt_version": "", "ne_version": [], "upper_version": "1.31.1", "version": "1.31.1"}, "reno": {"eq_version": "", "ge_version": "1.8.0", "lt_version": "", "ne_version": [], "upper_version": "2.11.3", "version": "2.11.3"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/python-saharaclient.json b/tools/oos/example/train_cached_file/python-saharaclient.json deleted file mode 100644 index 6ac52bedf6e1750550ae5d02bb1cca10cacf4b70..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/python-saharaclient.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "python-saharaclient", "version_dict": {"version": "2.3.0", "eq_version": "", "ge_version": "1.4.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.0"}, "deep": {"count": 3, "list": ["aodh", "gnocchiclient", "python-openstackclient", "python-saharaclient"]}, "requires": {"pbr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": ["2.1.0"], "upper_version": "5.4.3", "version": "5.4.3"}, "Babel": {"eq_version": "", "ge_version": "2.3.4", "lt_version": "", "ne_version": ["2.4.0"], "upper_version": "2.7.0", "version": "2.7.0"}, "keystoneauth1": {"eq_version": "", "ge_version": "3.4.0", "lt_version": "", "ne_version": [], "upper_version": "3.17.4", "version": "3.17.4"}, "osc-lib": {"eq_version": "", "ge_version": "1.11.0", "lt_version": "", "ne_version": [], "upper_version": "1.14.1", "version": "1.14.1"}, "oslo.log": {"eq_version": "", "ge_version": "3.36.0", "lt_version": "", "ne_version": [], "upper_version": "3.44.3", "version": "3.44.3"}, "oslo.serialization": {"eq_version": "", "ge_version": "2.18.0", "lt_version": "", "ne_version": ["2.19.1"], "upper_version": "2.29.3", "version": "2.29.3"}, "oslo.i18n": {"eq_version": "", "ge_version": "3.15.3", "lt_version": "", "ne_version": [], "upper_version": "3.24.0", "version": "3.24.0"}, "oslo.utils": {"eq_version": "", "ge_version": "3.33.0", "lt_version": "", "ne_version": [], "upper_version": "3.41.6", "version": "3.41.6"}, "python-openstackclient": {"eq_version": "", "ge_version": "3.12.0", "lt_version": "", "ne_version": [], "upper_version": "4.0.2", "version": "4.0.2"}, "requests": {"eq_version": "", "ge_version": "2.14.2", "lt_version": "", "ne_version": [], "upper_version": "2.22.0", "version": "2.22.0"}, "six": 
{"eq_version": "", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "hacking": {"eq_version": "", "ge_version": "0.12.0", "lt_version": "0.14", "ne_version": ["0.13.0"], "upper_version": "", "version": "0.12.0"}, "coverage": {"eq_version": "", "ge_version": "4.0", "lt_version": "", "ne_version": ["4.4"], "upper_version": "4.5.4", "version": "4.5.4"}, "mock": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.5", "version": "3.0.5"}, "oslotest": {"eq_version": "", "ge_version": "3.2.0", "lt_version": "", "ne_version": [], "upper_version": "3.8.1", "version": "3.8.1"}, "stestr": {"eq_version": "", "ge_version": "1.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.5.1", "version": "2.5.1"}, "requests-mock": {"eq_version": "", "ge_version": "1.2.0", "lt_version": "", "ne_version": [], "upper_version": "1.6.0", "version": "1.6.0"}, "openstackdocstheme": {"eq_version": "", "ge_version": "1.18.1", "lt_version": "", "ne_version": [], "upper_version": "1.31.1", "version": "1.31.1"}, "reno": {"eq_version": "", "ge_version": "2.5.0", "lt_version": "", "ne_version": [], "upper_version": "2.11.3", "version": "2.11.3"}, "Sphinx": {"eq_version": "", "ge_version": "1.6.2", "lt_version": "", "ne_version": ["1.6.6", "1.6.7", "2.1.0"], "upper_version": "2.2.0", "version": "2.2.0"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/python-scciclient.json b/tools/oos/example/train_cached_file/python-scciclient.json deleted file mode 100644 index 319dc41dec49f12fae8b77e00de68aad61438643..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/python-scciclient.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "python-scciclient", "version_dict": {"version": "0.8.0", "eq_version": "", "ge_version": "0.8.0", "lt_version": "", "ne_version": [], "upper_version": ""}, "deep": {"count": 1, "list": ["ironic", "python-scciclient"]}, "requires": {"pbr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": ["2.1.0"], "upper_version": "5.4.3", "version": "5.4.3"}, "Babel": {"eq_version": "", "ge_version": "2.3.4", "lt_version": "", "ne_version": ["2.4.0"], "upper_version": "2.7.0", "version": "2.7.0"}, "pyghmi": {"eq_version": "", "ge_version": "1.0.22", "lt_version": "", "ne_version": [], "upper_version": "1.4.1", "version": "1.4.1"}, "pysnmp": {"eq_version": "", "ge_version": "4.2.3", "lt_version": "", "ne_version": [], "upper_version": "4.4.11", "version": "4.4.11"}, "requests": {"eq_version": "", "ge_version": "2.14.2", "lt_version": "", "ne_version": [], "upper_version": "2.22.0", "version": "2.22.0"}, "six": {"eq_version": "", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "oslo.utils": {"eq_version": "", "ge_version": "3.33.0", "lt_version": "", "ne_version": [], "upper_version": "3.41.6", "version": "3.41.6"}, "oslo.serialization": {"eq_version": "", "ge_version": "2.18.0", "lt_version": "", "ne_version": ["2.19.1"], "upper_version": "2.29.3", "version": "2.29.3"}, "hacking": {"eq_version": "", "ge_version": "0.12.0", "lt_version": "0.14", "ne_version": ["0.13.0"], "upper_version": "", "version": "0.12.0"}, "coverage": {"eq_version": "", "ge_version": "4.0", "lt_version": "", "ne_version": ["4.4"], "upper_version": "4.5.4", "version": "4.5.4"}, "fixtures": {"eq_version": "", "ge_version": "3.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.0", 
"version": "3.0.0"}, "mock": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.5", "version": "3.0.5"}, "python-subunit": {"eq_version": "", "ge_version": "1.0.0", "lt_version": "", "ne_version": [], "upper_version": "1.4.0", "version": "1.4.0"}, "oslotest": {"eq_version": "", "ge_version": "3.2.0", "lt_version": "", "ne_version": [], "upper_version": "3.8.1", "version": "3.8.1"}, "stestr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.5.1", "version": "2.5.1"}, "testscenarios": {"eq_version": "", "ge_version": "0.4", "lt_version": "", "ne_version": [], "upper_version": "0.5.0", "version": "0.5.0"}, "testtools": {"eq_version": "", "ge_version": "2.2.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.0", "version": "2.3.0"}, "requests-mock": {"eq_version": "", "ge_version": "1.2.0", "lt_version": "", "ne_version": [], "upper_version": "1.6.0", "version": "1.6.0"}, "Sphinx": {"eq_version": "", "ge_version": "1.6.2", "lt_version": "", "ne_version": ["1.6.6", "1.6.7"], "upper_version": "2.2.0", "version": "2.2.0"}, "oslosphinx": {"eq_version": "", "ge_version": "4.7.0", "lt_version": "", "ne_version": [], "upper_version": "4.18.0", "version": "4.18.0"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/python-searchlightclient.json b/tools/oos/example/train_cached_file/python-searchlightclient.json deleted file mode 100644 index 1fbad686ed342da66d9be9568749e68245aa8057..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/python-searchlightclient.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "python-searchlightclient", "version_dict": {"version": "1.6.0", "eq_version": "", "ge_version": "1.0.0", "lt_version": "", "ne_version": [], "upper_version": "1.6.0"}, "deep": {"count": 3, "list": ["aodh", "gnocchiclient", "python-openstackclient", "python-searchlightclient"]}, "requires": {"Babel": {"eq_version": "", "ge_version": "2.3.4", "lt_version": "", "ne_version": ["2.4.0"], "upper_version": "2.7.0", "version": "2.7.0"}, "pbr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": ["2.1.0"], "upper_version": "5.4.3", "version": "5.4.3"}, "osc-lib": {"eq_version": "", "ge_version": "1.8.0", "lt_version": "", "ne_version": [], "upper_version": "1.14.1", "version": "1.14.1"}, "PrettyTable": {"eq_version": "", "ge_version": "0.7.1", "lt_version": "0.8", "ne_version": [], "upper_version": "", "version": "0.7.1"}, "oslo.i18n": {"eq_version": "", "ge_version": "3.15.3", "lt_version": "", "ne_version": [], "upper_version": "3.24.0", "version": "3.24.0"}, "oslo.serialization": {"eq_version": "", "ge_version": "2.18.0", "lt_version": "", "ne_version": ["2.19.1"], "upper_version": "2.29.3", "version": "2.29.3"}, "oslo.utils": {"eq_version": "", "ge_version": "3.33.0", "lt_version": "", "ne_version": [], "upper_version": "3.41.6", "version": "3.41.6"}, "python-keystoneclient": {"eq_version": "", "ge_version": "3.8.0", "lt_version": "", "ne_version": [], "upper_version": "3.21.0", "version": "3.21.0"}, "python-openstackclient": {"eq_version": "", "ge_version": "3.12.0", "lt_version": "", "ne_version": [], "upper_version": "4.0.2", "version": "4.0.2"}, "PyYAML": {"eq_version": "", "ge_version": "3.12", "lt_version": "", "ne_version": [], "upper_version": "5.1.2", "version": "5.1.2"}, "requests": {"eq_version": "", "ge_version": "2.14.2", "lt_version": "", "ne_version": [], "upper_version": "2.22.0", "version": "2.22.0"}, 
"six": {"eq_version": "", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "hacking": {"eq_version": "", "ge_version": "1.1.0", "lt_version": "1.2.0", "ne_version": [], "upper_version": "", "version": "1.1.0"}, "coverage": {"eq_version": "", "ge_version": "4.0", "lt_version": "", "ne_version": ["4.4"], "upper_version": "4.5.4", "version": "4.5.4"}, "fixtures": {"eq_version": "", "ge_version": "3.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.0", "version": "3.0.0"}, "mock": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.5", "version": "3.0.5"}, "stestr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.5.1", "version": "2.5.1"}, "testtools": {"eq_version": "", "ge_version": "2.2.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.0", "version": "2.3.0"}, "openstackdocstheme": {"eq_version": "", "ge_version": "1.18.1", "lt_version": "", "ne_version": [], "upper_version": "1.31.1", "version": "1.31.1"}, "Sphinx": {"eq_version": "", "ge_version": "1.6.2", "lt_version": "", "ne_version": ["1.6.6", "1.6.7"], "upper_version": "2.2.0", "version": "2.2.0"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/python-senlinclient.json b/tools/oos/example/train_cached_file/python-senlinclient.json deleted file mode 100644 index 818e5766df54c1a68924ed2f9467f1b6147a7fe0..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/python-senlinclient.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "python-senlinclient", "version_dict": {"version": "1.11.1", "eq_version": "", "ge_version": "1.1.0", "lt_version": "", "ne_version": [], "upper_version": "1.11.1"}, "deep": {"count": 3, "list": ["aodh", "gnocchiclient", "python-openstackclient", "python-senlinclient"]}, "requires": {"Babel": {"eq_version": "", "ge_version": "2.3.4", "lt_version": "", "ne_version": ["2.4.0"], "upper_version": "2.7.0", "version": "2.7.0"}, "pbr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": ["2.1.0"], "upper_version": "5.4.3", "version": "5.4.3"}, "PrettyTable": {"eq_version": "", "ge_version": "0.7.2", "lt_version": "0.8", "ne_version": [], "upper_version": "", "version": "0.7.2"}, "keystoneauth1": {"eq_version": "", "ge_version": "3.4.0", "lt_version": "", "ne_version": [], "upper_version": "3.17.4", "version": "3.17.4"}, "openstacksdk": {"eq_version": "", "ge_version": "0.24.0", "lt_version": "", "ne_version": [], "upper_version": "0.36.5", "version": "0.36.5"}, "osc-lib": {"eq_version": "", "ge_version": "1.8.0", "lt_version": "", "ne_version": [], "upper_version": "1.14.1", "version": "1.14.1"}, "oslo.i18n": {"eq_version": "", "ge_version": "3.15.3", "lt_version": "", "ne_version": [], "upper_version": "3.24.0", "version": "3.24.0"}, "oslo.serialization": {"eq_version": "", "ge_version": "2.18.0", "lt_version": "", "ne_version": ["2.19.1"], "upper_version": "2.29.3", "version": "2.29.3"}, "oslo.utils": {"eq_version": "", "ge_version": "3.33.0", "lt_version": "", "ne_version": [], "upper_version": "3.41.6", "version": "3.41.6"}, "python-heatclient": {"eq_version": "", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "1.18.1", "version": "1.18.1"}, "PyYAML": {"eq_version": "", "ge_version": "3.12", "lt_version": "", "ne_version": [], "upper_version": "5.1.2", "version": "5.1.2"}, "requests": {"eq_version": "", "ge_version": "2.14.2", 
"lt_version": "", "ne_version": [], "upper_version": "2.22.0", "version": "2.22.0"}, "six": {"eq_version": "", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "bandit": {"eq_version": "", "ge_version": "1.1.0", "lt_version": "", "ne_version": [], "upper_version": "", "version": "1.1.0"}, "hacking": {"eq_version": "", "ge_version": "1.1.0", "lt_version": "1.2.0", "ne_version": [], "upper_version": "", "version": "1.1.0"}, "coverage": {"eq_version": "", "ge_version": "4.0", "lt_version": "", "ne_version": ["4.4"], "upper_version": "4.5.4", "version": "4.5.4"}, "fixtures": {"eq_version": "", "ge_version": "3.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.0", "version": "3.0.0"}, "requests-mock": {"eq_version": "", "ge_version": "1.2.0", "lt_version": "", "ne_version": [], "upper_version": "1.6.0", "version": "1.6.0"}, "mock": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.5", "version": "3.0.5"}, "python-openstackclient": {"eq_version": "", "ge_version": "3.12.0", "lt_version": "", "ne_version": [], "upper_version": "4.0.2", "version": "4.0.2"}, "oslotest": {"eq_version": "", "ge_version": "3.2.0", "lt_version": "", "ne_version": [], "upper_version": "3.8.1", "version": "3.8.1"}, "setuptools": {"eq_version": "", "ge_version": "21.0.0", "lt_version": "", "ne_version": ["24.0.0", "34.0.0", "34.0.1", "34.0.2", "34.0.3", "34.1.0", "34.1.1", "34.2.0", "34.3.0", "34.3.1", "34.3.2", "36.2.0"], "upper_version": "57.5.0", "version": "57.5.0"}, "tempest": {"eq_version": "", "ge_version": "17.1.0", "lt_version": "", "ne_version": [], "upper_version": "22.1.0", "version": "22.1.0"}, "stestr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.5.1", "version": "2.5.1"}, "testscenarios": {"eq_version": "", "ge_version": "0.4", "lt_version": "", "ne_version": [], "upper_version": "0.5.0", "version": "0.5.0"}, "testtools": {"eq_version": "", "ge_version": "2.2.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.0", "version": "2.3.0"}, "openstackdocstheme": {"eq_version": "", "ge_version": "1.18.1", "lt_version": "", "ne_version": [], "upper_version": "1.31.1", "version": "1.31.1"}, "Sphinx": {"eq_version": "", "ge_version": "1.6.2", "lt_version": "", "ne_version": ["1.6.6", "1.6.7"], "upper_version": "2.2.0", "version": "2.2.0"}, "reno": {"eq_version": "", "ge_version": "2.5.0", "lt_version": "", "ne_version": [], "upper_version": "2.11.3", "version": "2.11.3"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/python-subunit.json b/tools/oos/example/train_cached_file/python-subunit.json deleted file mode 100644 index d6f97f2134a14ac6f517864e9fb8c9ab13381137..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/python-subunit.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "python-subunit", "version_dict": {"version": "1.4.0", "eq_version": "", "ge_version": "0.0.18", "lt_version": "", "ne_version": [], "upper_version": "1.4.0"}, "deep": {"count": 3, "list": ["aodh", "futurist", "hacking", "python-subunit"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/python-swiftclient.json b/tools/oos/example/train_cached_file/python-swiftclient.json deleted file mode 100644 index e0aa84a53b2f4b9eae93e04399f82c8c13dce046..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/python-swiftclient.json +++ 
/dev/null @@ -1 +0,0 @@ -{"name": "python-swiftclient", "version_dict": {"version": "3.8.1", "eq_version": "", "ge_version": "3.2.0", "lt_version": "", "ne_version": [], "upper_version": "3.8.1"}, "deep": {"count": 4, "list": ["aodh", "gnocchiclient", "python-openstackclient", "python-heatclient", "python-swiftclient"]}, "requires": {"requests": {"eq_version": "", "ge_version": "1.1.0", "lt_version": "", "ne_version": [], "upper_version": "2.22.0", "version": "2.22.0"}, "six": {"eq_version": "", "ge_version": "1.9.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "hacking": {"eq_version": "", "ge_version": "1.1.0", "lt_version": "1.2.0", "ne_version": [], "upper_version": "", "version": "1.1.0"}, "coverage": {"eq_version": "", "ge_version": "4.0", "lt_version": "", "ne_version": ["4.4"], "upper_version": "4.5.4", "version": "4.5.4"}, "keystoneauth1": {"eq_version": "", "ge_version": "3.4.0", "lt_version": "", "ne_version": [], "upper_version": "3.17.4", "version": "3.17.4"}, "mock": {"eq_version": "", "ge_version": "1.2.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.5", "version": "3.0.5"}, "stestr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.5.1", "version": "2.5.1"}, "Sphinx": {"eq_version": "", "ge_version": "1.6.2", "lt_version": "", "ne_version": ["1.6.6", "1.6.7", "2.1.0"], "upper_version": "2.2.0", "version": "2.2.0"}, "reno": {"eq_version": "", "ge_version": "2.5.0", "lt_version": "", "ne_version": [], "upper_version": "2.11.3", "version": "2.11.3"}, "openstackdocstheme": {"eq_version": "", "ge_version": "1.20.0", "lt_version": "", "ne_version": [], "upper_version": "1.31.1", "version": "1.31.1"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/python-troveclient.json b/tools/oos/example/train_cached_file/python-troveclient.json deleted file mode 100644 index 2f6d2102afc251b3ec26eb6e4ebe7d296ea95b3e..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/python-troveclient.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "python-troveclient", "version_dict": {"version": "3.0.1", "eq_version": "", "ge_version": "2.2.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.1"}, "deep": {"count": 3, "list": ["aodh", "gnocchiclient", "python-openstackclient", "python-troveclient"]}, "requires": {"pbr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": ["2.1.0"], "upper_version": "5.4.3", "version": "5.4.3"}, "PrettyTable": {"eq_version": "", "ge_version": "0.7.2", "lt_version": "0.8", "ne_version": [], "upper_version": "", "version": "0.7.2"}, "requests": {"eq_version": "", "ge_version": "2.14.2", "lt_version": "", "ne_version": [], "upper_version": "2.22.0", "version": "2.22.0"}, "simplejson": {"eq_version": "", "ge_version": "3.5.1", "lt_version": "", "ne_version": [], "upper_version": "3.16.0", "version": "3.16.0"}, "oslo.i18n": {"eq_version": "", "ge_version": "3.15.3", "lt_version": "", "ne_version": [], "upper_version": "3.24.0", "version": "3.24.0"}, "oslo.utils": {"eq_version": "", "ge_version": "3.33.0", "lt_version": "", "ne_version": [], "upper_version": "3.41.6", "version": "3.41.6"}, "Babel": {"eq_version": "", "ge_version": "2.3.4", "lt_version": "", "ne_version": ["2.4.0"], "upper_version": "2.7.0", "version": "2.7.0"}, "keystoneauth1": {"eq_version": "", "ge_version": "3.4.0", "lt_version": "", "ne_version": [], "upper_version": "3.17.4", "version": "3.17.4"}, "six": {"eq_version": 
"", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "python-swiftclient": {"eq_version": "", "ge_version": "3.2.0", "lt_version": "", "ne_version": [], "upper_version": "3.8.1", "version": "3.8.1"}, "python-mistralclient": {"eq_version": "", "ge_version": "3.1.0", "lt_version": "", "ne_version": ["3.2.0"], "upper_version": "3.10.0", "version": "3.10.0"}, "osc-lib": {"eq_version": "", "ge_version": "1.8.0", "lt_version": "", "ne_version": [], "upper_version": "1.14.1", "version": "1.14.1"}, "hacking": {"eq_version": "", "ge_version": "1.1.0", "lt_version": "1.2.0", "ne_version": [], "upper_version": "", "version": "1.1.0"}, "coverage": {"eq_version": "", "ge_version": "4.0", "lt_version": "", "ne_version": ["4.4"], "upper_version": "4.5.4", "version": "4.5.4"}, "fixtures": {"eq_version": "", "ge_version": "3.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.0", "version": "3.0.0"}, "oslotest": {"eq_version": "", "ge_version": "3.2.0", "lt_version": "", "ne_version": [], "upper_version": "3.8.1", "version": "3.8.1"}, "python-openstackclient": {"eq_version": "", "ge_version": "3.12.0", "lt_version": "", "ne_version": [], "upper_version": "4.0.2", "version": "4.0.2"}, "requests-mock": {"eq_version": "", "ge_version": "1.2.0", "lt_version": "", "ne_version": [], "upper_version": "1.6.0", "version": "1.6.0"}, "stestr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.5.1", "version": "2.5.1"}, "testscenarios": {"eq_version": "", "ge_version": "0.4", "lt_version": "", "ne_version": [], "upper_version": "0.5.0", "version": "0.5.0"}, "testtools": {"eq_version": "", "ge_version": "2.2.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.0", "version": "2.3.0"}, "mock": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.5", "version": "3.0.5"}, "httplib2": {"eq_version": "", "ge_version": "0.9.1", "lt_version": "", "ne_version": [], "upper_version": "0.13.1", "version": "0.13.1"}, "Sphinx": {"eq_version": "", "ge_version": "1.6.2", "lt_version": "", "ne_version": ["1.6.6", "1.6.7"], "upper_version": "2.2.0", "version": "2.2.0"}, "sphinxcontrib-apidoc": {"eq_version": "", "ge_version": "0.2.0", "lt_version": "", "ne_version": [], "upper_version": "0.3.0", "version": "0.3.0"}, "reno": {"eq_version": "", "ge_version": "2.5.0", "lt_version": "", "ne_version": [], "upper_version": "2.11.3", "version": "2.11.3"}, "openstackdocstheme": {"eq_version": "", "ge_version": "1.18.1", "lt_version": "", "ne_version": [], "upper_version": "1.31.1", "version": "1.31.1"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/python-xclarityclient.json b/tools/oos/example/train_cached_file/python-xclarityclient.json deleted file mode 100644 index 5a947234ab1fc3a8655658420b86fad6537ff845..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/python-xclarityclient.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "python-xclarityclient", "version_dict": {"version": "0.1.6", "eq_version": "", "ge_version": "0.1.6", "lt_version": "", "ne_version": [], "upper_version": ""}, "deep": {"count": 1, "list": ["ironic", "python-xclarityclient"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/python-zaqarclient.json b/tools/oos/example/train_cached_file/python-zaqarclient.json deleted file mode 100644 index 
c5bd1c84684330580068dfd88443201de9baf517..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/python-zaqarclient.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "python-zaqarclient", "version_dict": {"version": "1.12.0", "eq_version": "", "ge_version": "1.0.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0"}, "deep": {"count": 3, "list": ["aodh", "gnocchiclient", "python-openstackclient", "python-zaqarclient"]}, "requires": {"pbr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": ["2.1.0"], "upper_version": "5.4.3", "version": "5.4.3"}, "requests": {"eq_version": "", "ge_version": "2.14.2", "lt_version": "", "ne_version": [], "upper_version": "2.22.0", "version": "2.22.0"}, "six": {"eq_version": "", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "stevedore": {"eq_version": "", "ge_version": "1.20.0", "lt_version": "", "ne_version": [], "upper_version": "1.31.0", "version": "1.31.0"}, "jsonschema": {"eq_version": "", "ge_version": "2.6.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.2", "version": "3.0.2"}, "oslo.i18n": {"eq_version": "", "ge_version": "3.15.3", "lt_version": "", "ne_version": [], "upper_version": "3.24.0", "version": "3.24.0"}, "oslo.log": {"eq_version": "", "ge_version": "3.36.0", "lt_version": "", "ne_version": [], "upper_version": "3.44.3", "version": "3.44.3"}, "oslo.utils": {"eq_version": "", "ge_version": "3.33.0", "lt_version": "", "ne_version": [], "upper_version": "3.41.6", "version": "3.41.6"}, "keystoneauth1": {"eq_version": "", "ge_version": "3.4.0", "lt_version": "", "ne_version": [], "upper_version": "3.17.4", "version": "3.17.4"}, "osc-lib": {"eq_version": "", "ge_version": "1.8.0", "lt_version": "", "ne_version": [], "upper_version": "1.14.1", "version": "1.14.1"}, "hacking": {"eq_version": "", "ge_version": "0.12.0", "lt_version": "0.14", "ne_version": ["0.13.0"], "upper_version": "", "version": "0.12.0"}, "fixtures": {"eq_version": "", "ge_version": "3.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.0", "version": "3.0.0"}, "mock": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.5", "version": "3.0.5"}, "testtools": {"eq_version": "", "ge_version": "2.2.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.0", "version": "2.3.0"}, "stestr": {"eq_version": "", "ge_version": "1.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.5.1", "version": "2.5.1"}, "coverage": {"eq_version": "", "ge_version": "4.0", "lt_version": "", "ne_version": ["4.4"], "upper_version": "4.5.4", "version": "4.5.4"}, "ddt": {"eq_version": "", "ge_version": "1.0.1", "lt_version": "", "ne_version": [], "upper_version": "1.2.1", "version": "1.2.1"}, "requests-mock": {"eq_version": "", "ge_version": "1.2.0", "lt_version": "", "ne_version": [], "upper_version": "1.6.0", "version": "1.6.0"}, "Sphinx": {"eq_version": "", "ge_version": "1.6.2", "lt_version": "", "ne_version": ["1.6.6", "1.6.7"], "upper_version": "2.2.0", "version": "2.2.0"}, "os-client-config": {"eq_version": "", "ge_version": "1.28.0", "lt_version": "", "ne_version": [], "upper_version": "1.33.0", "version": "1.33.0"}, "openstackdocstheme": {"eq_version": "", "ge_version": "1.18.1", "lt_version": "", "ne_version": [], "upper_version": "1.31.1", "version": "1.31.1"}, "reno": {"eq_version": "", "ge_version": "2.5.0", "lt_version": "", "ne_version": [], "upper_version": "2.11.3", "version": 
"2.11.3"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/python-zunclient.json b/tools/oos/example/train_cached_file/python-zunclient.json deleted file mode 100644 index 7a8180496fb2b517cc667005a6320e5f006bb066..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/python-zunclient.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "python-zunclient", "version_dict": {"version": "3.5.1", "eq_version": "", "ge_version": "3.4.0", "lt_version": "", "ne_version": [], "upper_version": "3.5.1"}, "deep": {"count": 3, "list": ["aodh", "gnocchiclient", "python-openstackclient", "python-zunclient"]}, "requires": {"pbr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": ["2.1.0"], "upper_version": "5.4.3", "version": "5.4.3"}, "PrettyTable": {"eq_version": "", "ge_version": "0.7.1", "lt_version": "0.8", "ne_version": [], "upper_version": "", "version": "0.7.1"}, "python-openstackclient": {"eq_version": "", "ge_version": "3.12.0", "lt_version": "", "ne_version": [], "upper_version": "4.0.2", "version": "4.0.2"}, "keystoneauth1": {"eq_version": "", "ge_version": "3.4.0", "lt_version": "", "ne_version": [], "upper_version": "3.17.4", "version": "3.17.4"}, "osc-lib": {"eq_version": "", "ge_version": "1.8.0", "lt_version": "", "ne_version": [], "upper_version": "1.14.1", "version": "1.14.1"}, "oslo.i18n": {"eq_version": "", "ge_version": "3.15.3", "lt_version": "", "ne_version": [], "upper_version": "3.24.0", "version": "3.24.0"}, "oslo.log": {"eq_version": "", "ge_version": "3.36.0", "lt_version": "", "ne_version": [], "upper_version": "3.44.3", "version": "3.44.3"}, "oslo.utils": {"eq_version": "", "ge_version": "3.33.0", "lt_version": "", "ne_version": [], "upper_version": "3.41.6", "version": "3.41.6"}, "websocket-client": {"eq_version": "", "ge_version": "0.44.0", "lt_version": "", "ne_version": [], "upper_version": "0.56.0", "version": "0.56.0"}, "docker": {"eq_version": "", "ge_version": "2.4.2", "lt_version": "", "ne_version": [], "upper_version": "4.0.2", "version": "4.0.2"}, "PyYAML": {"eq_version": "", "ge_version": "3.12", "lt_version": "", "ne_version": [], "upper_version": "5.1.2", "version": "5.1.2"}, "bandit": {"eq_version": "", "ge_version": "1.1.0", "lt_version": "", "ne_version": [], "upper_version": "", "version": "1.1.0"}, "coverage": {"eq_version": "", "ge_version": "4.0", "lt_version": "", "ne_version": ["4.4"], "upper_version": "4.5.4", "version": "4.5.4"}, "doc8": {"eq_version": "", "ge_version": "0.6.0", "lt_version": "", "ne_version": [], "upper_version": "0.8.0", "version": "0.8.0"}, "ddt": {"eq_version": "", "ge_version": "1.0.1", "lt_version": "", "ne_version": [], "upper_version": "1.2.1", "version": "1.2.1"}, "hacking": {"eq_version": "", "ge_version": "0.12.0", "lt_version": "0.14", "ne_version": ["0.13.0"], "upper_version": "", "version": "0.12.0"}, "oslotest": {"eq_version": "", "ge_version": "3.2.0", "lt_version": "", "ne_version": [], "upper_version": "3.8.1", "version": "3.8.1"}, "osprofiler": {"eq_version": "", "ge_version": "1.4.0", "lt_version": "", "ne_version": [], "upper_version": "2.8.2", "version": "2.8.2"}, "stestr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.5.1", "version": "2.5.1"}, "python-subunit": {"eq_version": "", "ge_version": "1.0.0", "lt_version": "", "ne_version": [], "upper_version": "1.4.0", "version": "1.4.0"}, "tempest": {"eq_version": "", "ge_version": "17.1.0", "lt_version": "", "ne_version": [], 
"upper_version": "22.1.0", "version": "22.1.0"}, "testresources": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.0.1", "version": "2.0.1"}, "testscenarios": {"eq_version": "", "ge_version": "0.4", "lt_version": "", "ne_version": [], "upper_version": "0.5.0", "version": "0.5.0"}, "testtools": {"eq_version": "", "ge_version": "2.2.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.0", "version": "2.3.0"}, "openstackdocstheme": {"eq_version": "", "ge_version": "1.20.0", "lt_version": "", "ne_version": [], "upper_version": "1.31.1", "version": "1.31.1"}, "Sphinx": {"eq_version": "", "ge_version": "1.6.2", "lt_version": "", "ne_version": ["1.6.6", "1.6.7", "2.1.0"], "upper_version": "2.2.0", "version": "2.2.0"}, "reno": {"eq_version": "", "ge_version": "2.5.0", "lt_version": "", "ne_version": [], "upper_version": "2.11.3", "version": "2.11.3"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/pytz.json b/tools/oos/example/train_cached_file/pytz.json deleted file mode 100644 index 9857a8bc9716e1bf8d99f4ba328e952d575557fe..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/pytz.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "pytz", "version_dict": {"version": "2019.2", "eq_version": "", "ge_version": "2015.7", "lt_version": "", "ne_version": [], "upper_version": "2019.2"}, "deep": {"count": 7, "list": ["aodh", "futurist", "hacking", "mock", "Sphinx", "Jinja2", "Babel", "pytz"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/pyudev.json b/tools/oos/example/train_cached_file/pyudev.json deleted file mode 100644 index d9b3cf91a00d73babdc384c06cc0fbef27b389e7..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/pyudev.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "pyudev", "version_dict": {"version": "0.21.0", "eq_version": "", "ge_version": "0.16.1", "lt_version": "", "ne_version": [], "upper_version": "0.21.0"}, "deep": {"count": 2, "list": ["cinder", "rtslib-fb", "pyudev"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/pywbem.json b/tools/oos/example/train_cached_file/pywbem.json deleted file mode 100644 index 6bc69a548bb567b28a2868d9b46219fa3f7c52c7..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/pywbem.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "pywbem", "version_dict": {"version": "0.14.4", "eq_version": "", "ge_version": "0.7.0", "lt_version": "", "ne_version": [], "upper_version": "0.14.4"}, "deep": {"count": 1, "list": ["cinder", "pywbem"]}, "requires": {"mock": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.5", "version": "3.0.5"}, "pbr": {"eq_version": "", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "5.4.3", "version": "5.4.3"}, "ply": {"eq_version": "", "ge_version": "3.10", "lt_version": "", "ne_version": [], "upper_version": "3.11", "version": "3.11"}, "six": {"eq_version": "", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "ordereddict": {"eq_version": "", "ge_version": "1.1", "lt_version": "", "ne_version": [], "upper_version": "", "version": "1.1"}, "PyYAML": {"eq_version": "", "ge_version": "3.13", "lt_version": "", "ne_version": [], "upper_version": "5.1.2", "version": "5.1.2"}}} \ No newline at end of file diff --git 
a/tools/oos/example/train_cached_file/pyxcli.json b/tools/oos/example/train_cached_file/pyxcli.json deleted file mode 100644 index 3f356908c7656722af43aa48a64c8bc9b4a3f5d6..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/pyxcli.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "pyxcli", "version_dict": {"version": "1.1.5", "eq_version": "", "ge_version": "1.1.5", "lt_version": "", "ne_version": [], "upper_version": ""}, "deep": {"count": 1, "list": ["cinder", "pyxcli"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/qpid-python.json b/tools/oos/example/train_cached_file/qpid-python.json deleted file mode 100644 index 86184839972516b833c168800c5a9904dca4b594..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/qpid-python.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "qpid-python", "version_dict": {"version": "1.36.0.post1", "eq_version": "", "ge_version": "0.26", "lt_version": "", "ne_version": [], "upper_version": "1.36.0.post1"}, "deep": {"count": 4, "list": ["aodh", "keystonemiddleware", "oslo.messaging", "kombu", "qpid-python"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/qpid-tools.json b/tools/oos/example/train_cached_file/qpid-tools.json deleted file mode 100644 index de25f6dcefc6d2b0be0fa9b829acf31999c991e5..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/qpid-tools.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "qpid-tools", "version_dict": {"version": "0.26", "eq_version": "", "ge_version": "0.26", "lt_version": "", "ne_version": [], "upper_version": ""}, "deep": {"count": 4, "list": ["aodh", "keystonemiddleware", "oslo.messaging", "kombu", "qpid-tools"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/rcssmin.json b/tools/oos/example/train_cached_file/rcssmin.json deleted file mode 100644 index cc26ca2a57aa446b88fd2948189e4626f4964ffb..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/rcssmin.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "rcssmin", "version_dict": {"version": "1.0.6", "eq_version": "1.0.6", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "1.0.6"}, "deep": {"count": 2, "list": ["horizon", "django-compressor", "rcssmin"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/redis.json b/tools/oos/example/train_cached_file/redis.json deleted file mode 100644 index 9821d33626487e721dd6d7475f3012fe80c8e95b..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/redis.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "redis", "version_dict": {"version": "3.3.8", "eq_version": "", "ge_version": "3.3.11", "lt_version": "", "ne_version": [], "upper_version": "3.3.8"}, "deep": {"count": 4, "list": ["aodh", "keystonemiddleware", "oslo.messaging", "kombu", "redis"]}, "requires": {"hiredis": {"eq_version": "", "ge_version": "0.1.3", "lt_version": "", "ne_version": [], "upper_version": "1.0.0", "version": "1.0.0"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/reno.json b/tools/oos/example/train_cached_file/reno.json deleted file mode 100644 index 27fac56bb3285a1035b1cdf18e55aa16eb1bf91e..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/reno.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "reno", "version_dict": {"version": "2.11.3", "eq_version": "", "ge_version": "2.5.0", 
"lt_version": "", "ne_version": [], "upper_version": "2.11.3"}, "deep": {"count": 17, "list": ["aodh", "futurist", "hacking", "oslosphinx", "openstackdocstheme", "os-api-ref", "stestr", "cliff", "stevedore", "bandit", "oslotest", "os-client-config", "openstacksdk", "os-service-types", "keystoneauth1", "oslo.config", "debtcollector", "reno"]}, "requires": {"pbr": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "5.4.3", "version": "5.4.3"}, "PyYAML": {"eq_version": "", "ge_version": "3.10", "lt_version": "", "ne_version": [], "upper_version": "5.1.2", "version": "5.1.2"}, "six": {"eq_version": "", "ge_version": "1.9.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "dulwich": {"eq_version": "", "ge_version": "0.15.0", "lt_version": "", "ne_version": [], "upper_version": "0.19.13", "version": "0.19.13"}, "hacking": {"eq_version": "", "ge_version": "0.12.0", "lt_version": "0.14", "ne_version": ["0.13.0"], "upper_version": "", "version": "0.12.0"}, "mock": {"eq_version": "", "ge_version": "1.2", "lt_version": "", "ne_version": [], "upper_version": "3.0.5", "version": "3.0.5"}, "coverage": {"eq_version": "", "ge_version": "4.0", "lt_version": "", "ne_version": ["4.4"], "upper_version": "4.5.4", "version": "4.5.4"}, "python-subunit": {"eq_version": "", "ge_version": "0.0.18", "lt_version": "", "ne_version": [], "upper_version": "1.4.0", "version": "1.4.0"}, "openstackdocstheme": {"eq_version": "", "ge_version": "1.11.0", "lt_version": "", "ne_version": [], "upper_version": "1.31.1", "version": "1.31.1"}, "stestr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.5.1", "version": "2.5.1"}, "testscenarios": {"eq_version": "", "ge_version": "0.4", "lt_version": "", "ne_version": [], "upper_version": "0.5.0", "version": "0.5.0"}, "testtools": {"eq_version": "", "ge_version": "1.4.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.0", "version": "2.3.0"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/repoze.lru.json b/tools/oos/example/train_cached_file/repoze.lru.json deleted file mode 100644 index 87d37354ba7331328602e5466436e50d4ec3666e..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/repoze.lru.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "repoze.lru", "version_dict": {"version": "0.7", "eq_version": "", "ge_version": "0.3", "lt_version": "", "ne_version": [], "upper_version": "0.7"}, "deep": {"count": 5, "list": ["aodh", "keystonemiddleware", "oslo.messaging", "oslo.service", "Routes", "repoze.lru"]}, "requires": {"Sphinx": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "2.2.0", "version": "2.2.0"}, "coverage": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "4.5.4", "version": "4.5.4"}, "nose": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "1.3.7", "version": "1.3.7"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/requests-aws.json b/tools/oos/example/train_cached_file/requests-aws.json deleted file mode 100644 index 7224923e1b108a134ae6ebf0dd37436c86d8dace..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/requests-aws.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "requests-aws", "version_dict": {"version": "0.1.8", "eq_version": "", "ge_version": "0.1.4", "lt_version": "", "ne_version": [], 
"upper_version": "0.1.8"}, "deep": {"count": 1, "list": ["ceilometer", "requests-aws"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/requests-kerberos.json b/tools/oos/example/train_cached_file/requests-kerberos.json deleted file mode 100644 index d168533068933e0a71ba6dd09077ea1d7ba647cd..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/requests-kerberos.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "requests-kerberos", "version_dict": {"version": "0.12.0", "eq_version": "", "ge_version": "0.8.0", "lt_version": "", "ne_version": [], "upper_version": "0.12.0"}, "deep": {"count": 15, "list": ["aodh", "futurist", "hacking", "oslosphinx", "openstackdocstheme", "os-api-ref", "stestr", "cliff", "stevedore", "bandit", "oslotest", "os-client-config", "openstacksdk", "os-service-types", "keystoneauth1", "requests-kerberos"]}, "requires": {"requests": {"eq_version": "", "ge_version": "1.1.0", "lt_version": "", "ne_version": [], "upper_version": "2.22.0", "version": "2.22.0"}, "cryptography": {"eq_version": "", "ge_version": "1.3", "lt_version": "2", "ne_version": [], "upper_version": "2.8", "version": "2.8"}, "pykerberos": {"eq_version": "", "ge_version": "1.1.8", "lt_version": "2.0.0", "ne_version": [], "upper_version": "1.2.1", "version": "1.2.1"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/requests-mock.json b/tools/oos/example/train_cached_file/requests-mock.json deleted file mode 100644 index 381943e2ba0895e7b4daa84d6e07b7ad9a5d2b4f..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/requests-mock.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "requests-mock", "version_dict": {"version": "1.6.0", "eq_version": "", "ge_version": "1.2.0", "lt_version": "", "ne_version": [], "upper_version": "1.6.0"}, "deep": {"count": 14, "list": ["aodh", "futurist", "hacking", "oslosphinx", "openstackdocstheme", "os-api-ref", "stestr", "cliff", "stevedore", "bandit", "oslotest", "os-client-config", "openstacksdk", "os-service-types", "requests-mock"]}, "requires": {"requests": {"eq_version": "", "ge_version": "2.3", "lt_version": "", "ne_version": [], "upper_version": "2.22.0", "version": "2.22.0"}, "six": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "fixtures": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "3.0.0", "version": "3.0.0"}, "mock": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "3.0.5", "version": "3.0.5"}, "purl": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}, "pytest": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "5.1.2", "version": "5.1.2"}, "Sphinx": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "2.2.0", "version": "2.2.0"}, "testrepository": {"eq_version": "", "ge_version": "0.0.18", "lt_version": "", "ne_version": [], "upper_version": "0.0.20", "version": "0.0.20"}, "testtools": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "2.3.0", "version": "2.3.0"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/requests-oauthlib.json b/tools/oos/example/train_cached_file/requests-oauthlib.json deleted file mode 100644 index 
9579cc648a0f39449715db2453f22513e2c5aa88..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/requests-oauthlib.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "requests-oauthlib", "version_dict": {"version": "1.2.0", "eq_version": "", "ge_version": "0.5.0", "lt_version": "", "ne_version": [], "upper_version": "1.2.0"}, "deep": {"count": 8, "list": ["aodh", "keystonemiddleware", "oslo.messaging", "kombu", "azure-servicebus", "azure-common", "msrestazure", "msrest", "requests-oauthlib"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/requests-toolbelt.json b/tools/oos/example/train_cached_file/requests-toolbelt.json deleted file mode 100644 index b0b094b29545c389ec89092e7f03009979ce183d..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/requests-toolbelt.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "requests-toolbelt", "version_dict": {"version": "0.9.1", "eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "0.9.1"}, "deep": {"count": 7, "list": ["aodh", "futurist", "hacking", "mock", "Sphinx", "setuptools", "jaraco.tidelift", "requests-toolbelt"]}, "requires": {"requests": {"eq_version": "", "ge_version": "2.0.1", "lt_version": "3.0.0", "ne_version": [], "upper_version": "2.22.0", "version": "2.22.0"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/requests.json b/tools/oos/example/train_cached_file/requests.json deleted file mode 100644 index 05be9231168eee57c3f1437a11795cb0efd12ffa..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/requests.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "requests", "version_dict": {"version": "2.22.0", "eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "2.22.0"}, "deep": {"count": 11, "list": ["aodh", "futurist", "hacking", "mock", "Sphinx", "sphinxcontrib-applehelp", "pytest", "pluggy", "importlib-metadata", "zipp", "jaraco.packaging", "requests"]}, "requires": {"chardet": {"eq_version": "", "ge_version": "3.0.2", "lt_version": "3.1.0", "ne_version": [], "upper_version": "3.0.4", "version": "3.0.4"}, "idna": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.8", "version": "2.8"}, "urllib3": {"eq_version": "", "ge_version": "1.21.1", "lt_version": "1.26", "ne_version": ["1.25.0", "1.25.1"], "upper_version": "1.25.3", "version": "1.25.3"}, "certifi": {"eq_version": "", "ge_version": "2017.4.17", "lt_version": "", "ne_version": [], "upper_version": "2019.6.16", "version": "2019.6.16"}, "pyOpenSSL": {"eq_version": "", "ge_version": "0.14", "lt_version": "", "ne_version": [], "upper_version": "19.1.0", "version": "19.1.0"}, "cryptography": {"eq_version": "", "ge_version": "1.3.4", "lt_version": "", "ne_version": [], "upper_version": "2.8", "version": "2.8"}, "PySocks": {"eq_version": "", "ge_version": "1.5.6", "lt_version": "", "ne_version": ["1.5.7"], "upper_version": "", "version": "1.5.6"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/requestsexceptions.json b/tools/oos/example/train_cached_file/requestsexceptions.json deleted file mode 100644 index 0d85aa69c898fa6218271db74b127140374aa38c..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/requestsexceptions.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "requestsexceptions", "version_dict": {"version": "1.4.0", "eq_version": "", "ge_version": "1.2.0", 
"lt_version": "", "ne_version": [], "upper_version": "1.4.0"}, "deep": {"count": 13, "list": ["aodh", "futurist", "hacking", "oslosphinx", "openstackdocstheme", "os-api-ref", "stestr", "cliff", "stevedore", "bandit", "oslotest", "os-client-config", "openstacksdk", "requestsexceptions"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/restructuredtext-lint.json b/tools/oos/example/train_cached_file/restructuredtext-lint.json deleted file mode 100644 index b01d11d5f13297eec157740c228e3a2c9fa13040..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/restructuredtext-lint.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "restructuredtext-lint", "version_dict": {"version": "1.3.0", "eq_version": "", "ge_version": "0.7", "lt_version": "", "ne_version": [], "upper_version": "1.3.0"}, "deep": {"count": 18, "list": ["aodh", "futurist", "hacking", "oslosphinx", "openstackdocstheme", "os-api-ref", "stestr", "cliff", "stevedore", "bandit", "oslotest", "os-client-config", "openstacksdk", "os-service-types", "keystoneauth1", "oslo.config", "debtcollector", "doc8", "restructuredtext-lint"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/retrying.json b/tools/oos/example/train_cached_file/retrying.json deleted file mode 100644 index 0def8419da6f072bbb9d42abbcb3c225d0a83ced..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/retrying.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "retrying", "version_dict": {"version": "1.3.3", "eq_version": "", "ge_version": "1.2.3", "lt_version": "", "ne_version": ["1.3.0"], "upper_version": "1.3.3"}, "deep": {"count": 1, "list": ["cinder", "retrying"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/retryz.json b/tools/oos/example/train_cached_file/retryz.json deleted file mode 100644 index 9c9bd9a573b127e3d3b31676ca4e823155a1f0d5..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/retryz.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "retryz", "version_dict": {"version": "0.1.9", "eq_version": "", "ge_version": "0.1.8", "lt_version": "", "ne_version": [], "upper_version": "0.1.9"}, "deep": {"count": 2, "list": ["cinder", "storops", "retryz"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/rfc3986.json b/tools/oos/example/train_cached_file/rfc3986.json deleted file mode 100644 index 06849495f4ad38128617873615aab1a4323b0e6e..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/rfc3986.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "rfc3986", "version_dict": {"version": "1.3.2", "eq_version": "", "ge_version": "1.2.0", "lt_version": "", "ne_version": [], "upper_version": "1.3.2"}, "deep": {"count": 16, "list": ["aodh", "futurist", "hacking", "oslosphinx", "openstackdocstheme", "os-api-ref", "stestr", "cliff", "stevedore", "bandit", "oslotest", "os-client-config", "openstacksdk", "os-service-types", "keystoneauth1", "oslo.config", "rfc3986"]}, "requires": {"idna": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "2.8", "version": "2.8"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/rjsmin.json b/tools/oos/example/train_cached_file/rjsmin.json deleted file mode 100644 index 3a8971204ea8a181010464e13c4818037f7ec17a..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/rjsmin.json +++ 
/dev/null @@ -1 +0,0 @@ -{"name": "rjsmin", "version_dict": {"version": "1.1.0", "eq_version": "1.1.0", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "1.1.0"}, "deep": {"count": 2, "list": ["horizon", "django-compressor", "rjsmin"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/rsa.json b/tools/oos/example/train_cached_file/rsa.json deleted file mode 100644 index 9a5a54f65176b1d7c4213104ccc4174f92ab6f63..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/rsa.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "rsa", "version_dict": {"version": "4.0", "eq_version": "", "ge_version": "3.1.4", "lt_version": "", "ne_version": [], "upper_version": "4.0"}, "deep": {"count": 2, "list": ["cinder", "oauth2client", "rsa"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/rsd-lib.json b/tools/oos/example/train_cached_file/rsd-lib.json deleted file mode 100644 index 0a3d4e4d3f936e924ab4f8c3236ea1150dd664dd..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/rsd-lib.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "rsd-lib", "version_dict": {"version": "1.1.0", "eq_version": "", "ge_version": "0.0.1", "lt_version": "", "ne_version": [], "upper_version": "1.1.0"}, "deep": {"count": 4, "list": ["aodh", "gnocchiclient", "python-openstackclient", "python-rsdclient", "rsd-lib"]}, "requires": {"pbr": {"eq_version": "", "ge_version": "2.0", "lt_version": "", "ne_version": [], "upper_version": "5.4.3", "version": "5.4.3"}, "sushy": {"eq_version": "", "ge_version": "1.8.1", "lt_version": "", "ne_version": [], "upper_version": "2.0.5", "version": "2.0.5"}, "jsonschema": {"eq_version": "", "ge_version": "2.6.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.2", "version": "3.0.2"}, "hacking": {"eq_version": "", "ge_version": "0.12.0", "lt_version": "0.13", "ne_version": [], "upper_version": "", "version": "0.12.0"}, "coverage": {"eq_version": "", "ge_version": "4.0", "lt_version": "", "ne_version": ["4.4"], "upper_version": "4.5.4", "version": "4.5.4"}, "python-subunit": {"eq_version": "", "ge_version": "0.0.18", "lt_version": "", "ne_version": [], "upper_version": "1.4.0", "version": "1.4.0"}, "Sphinx": {"eq_version": "", "ge_version": "1.6.2", "lt_version": "", "ne_version": [], "upper_version": "2.2.0", "version": "2.2.0"}, "oslotest": {"eq_version": "", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "3.8.1", "version": "3.8.1"}, "testrepository": {"eq_version": "", "ge_version": "0.0.18", "lt_version": "", "ne_version": [], "upper_version": "0.0.20", "version": "0.0.20"}, "testtools": {"eq_version": "", "ge_version": "1.4.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.0", "version": "2.3.0"}, "openstackdocstheme": {"eq_version": "", "ge_version": "1.11.0", "lt_version": "", "ne_version": [], "upper_version": "1.31.1", "version": "1.31.1"}, "reno": {"eq_version": "", "ge_version": "1.8.0", "lt_version": "", "ne_version": [], "upper_version": "2.11.3", "version": "2.11.3"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/rst.linker.json b/tools/oos/example/train_cached_file/rst.linker.json deleted file mode 100644 index d59160fc1b70ac109c8b629855ab1f21c67602a7..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/rst.linker.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "rst.linker", "version_dict": {"version": "1.9", "eq_version": "", 
"ge_version": "1.9", "lt_version": "", "ne_version": [], "upper_version": ""}, "deep": {"count": 10, "list": ["aodh", "futurist", "hacking", "mock", "Sphinx", "sphinxcontrib-applehelp", "pytest", "pluggy", "importlib-metadata", "zipp", "rst.linker"]}, "requires": {"python-dateutil": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "2.8.0", "version": "2.8.0"}, "six": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/rtslib-fb.json b/tools/oos/example/train_cached_file/rtslib-fb.json deleted file mode 100644 index c17d6cd1871dddac63f4028eba71f16c6a50df54..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/rtslib-fb.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "rtslib-fb", "version_dict": {"version": "2.1.69", "eq_version": "", "ge_version": "2.1.65", "lt_version": "", "ne_version": [], "upper_version": "2.1.69"}, "deep": {"count": 1, "list": ["cinder", "rtslib-fb"]}, "requires": {"pyudev": {"eq_version": "", "ge_version": "0.16.1", "lt_version": "", "ne_version": [], "upper_version": "0.21.0", "version": "0.21.0"}, "six": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/scandir.json b/tools/oos/example/train_cached_file/scandir.json deleted file mode 100644 index f2d3ed8b2dcb4f1e5ec33da92e95539af754017b..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/scandir.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "scandir", "version_dict": {"version": "1.10.0", "eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "1.10.0"}, "deep": {"count": 11, "list": ["aodh", "futurist", "hacking", "mock", "Sphinx", "sphinxcontrib-applehelp", "pytest", "pluggy", "importlib-metadata", "zipp", "pathlib2", "scandir"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/scp.json b/tools/oos/example/train_cached_file/scp.json deleted file mode 100644 index 892af2e6517949a9f3bee1524e4c7f184931a7c8..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/scp.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "scp", "version_dict": {"version": "0.13.2", "eq_version": "", "ge_version": "0.13.2", "lt_version": "", "ne_version": [], "upper_version": "0.13.2"}, "deep": {"count": 2, "list": ["networking-generic-switch", "netmiko", "scp"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/scrypt.json b/tools/oos/example/train_cached_file/scrypt.json deleted file mode 100644 index 420a095aa3701c1623b21a5a90f8911b61dbb646..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/scrypt.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "scrypt", "version_dict": {"version": "0.8.13", "eq_version": "", "ge_version": "0.8.0", "lt_version": "", "ne_version": [], "upper_version": "0.8.13"}, "deep": {"count": 1, "list": ["keystone", "scrypt"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/selenium.json b/tools/oos/example/train_cached_file/selenium.json deleted file mode 100644 index b74fc9ac2b85a13fd6473a6f6d853818654d28fe..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/selenium.json +++ /dev/null 
@@ -1 +0,0 @@ -{"name": "selenium", "version_dict": {"version": "3.141.0", "eq_version": "", "ge_version": "2.50.1", "lt_version": "", "ne_version": [], "upper_version": "3.141.0"}, "deep": {"count": 1, "list": ["horizon", "selenium"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/semantic-version.json b/tools/oos/example/train_cached_file/semantic-version.json deleted file mode 100644 index 8a3f86d21857b018a48c3f19d4a1a6832a016974..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/semantic-version.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "semantic-version", "version_dict": {"version": "2.8.2", "eq_version": "", "ge_version": "2.3.1", "lt_version": "", "ne_version": [], "upper_version": "2.8.2"}, "deep": {"count": 5, "list": ["aodh", "gnocchiclient", "python-openstackclient", "python-muranoclient", "murano-pkg-check", "semantic-version"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/setproctitle.json b/tools/oos/example/train_cached_file/setproctitle.json deleted file mode 100644 index 460908580bff3be674fdcc5c10fb21d6257c3484..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/setproctitle.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "setproctitle", "version_dict": {"version": "1.1.10", "eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "1.1.10"}, "deep": {"count": 2, "list": ["aodh", "cotyledon", "setproctitle"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/setuptools.json b/tools/oos/example/train_cached_file/setuptools.json deleted file mode 100644 index 613d66fa20e63b8b070ba22d4cb2a7d5560ebf13..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/setuptools.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "setuptools", "version_dict": {"version": "57.5.0", "eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "57.5.0"}, "deep": {"count": 5, "list": ["aodh", "futurist", "hacking", "mock", "Sphinx", "setuptools"]}, "requires": {"Sphinx": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "2.2.0", "version": "2.2.0"}, "jaraco.packaging": {"eq_version": "", "ge_version": "8.2", "lt_version": "", "ne_version": [], "upper_version": "", "version": "8.2"}, "rst.linker": {"eq_version": "", "ge_version": "1.9", "lt_version": "", "ne_version": [], "upper_version": "", "version": "1.9"}, "jaraco.tidelift": {"eq_version": "", "ge_version": "1.4", "lt_version": "", "ne_version": [], "upper_version": "", "version": "1.4"}, "pygments-github-lexers": {"eq_version": "0.0.5", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "0.0.5"}, "sphinx-inline-tabs": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}, "sphinxcontrib-towncrier": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}, "furo": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}, "pytest": {"eq_version": "", "ge_version": "4.6", "lt_version": "", "ne_version": [], "upper_version": "5.1.2", "version": "5.1.2"}, "pytest-checkdocs": {"eq_version": "", "ge_version": "2.4", "lt_version": "", "ne_version": [], "upper_version": "", "version": "2.4"}, "pytest-flake8": 
{"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}, "pytest-cov": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}, "pytest-enabler": {"eq_version": "", "ge_version": "1.0.1", "lt_version": "", "ne_version": [], "upper_version": "", "version": "1.0.1"}, "mock": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "3.0.5", "version": "3.0.5"}, "flake8-2020": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}, "virtualenv": {"eq_version": "", "ge_version": "13.0.0", "lt_version": "", "ne_version": [], "upper_version": "16.7.5", "version": "16.7.5"}, "pytest-virtualenv": {"eq_version": "", "ge_version": "1.2.7", "lt_version": "", "ne_version": [], "upper_version": "", "version": "1.2.7"}, "wheel": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}, "paver": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}, "pip": {"eq_version": "", "ge_version": "19.1", "lt_version": "", "ne_version": [], "upper_version": "", "version": "19.1"}, "jaraco.envs": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}, "pytest-xdist": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}, "jaraco.path": {"eq_version": "", "ge_version": "3.2.0", "lt_version": "", "ne_version": [], "upper_version": "", "version": "3.2.0"}, "pytest-black": {"eq_version": "", "ge_version": "0.3.7", "lt_version": "", "ne_version": [], "upper_version": "", "version": "0.3.7"}, "pytest-mypy": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/simplegeneric.json b/tools/oos/example/train_cached_file/simplegeneric.json deleted file mode 100644 index 68f83a03e61612ef2ab191e9c83f052c4b784059..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/simplegeneric.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "simplegeneric", "version_dict": {"version": "0.8.1", "eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "0.8.1"}, "deep": {"count": 2, "list": ["aodh", "WSME", "simplegeneric"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/simplejson.json b/tools/oos/example/train_cached_file/simplejson.json deleted file mode 100644 index eddcc2e8e16b2f12e056fef5f99574fd77baf897..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/simplejson.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "simplejson", "version_dict": {"version": "3.16.0", "eq_version": "", "ge_version": "3.5.1", "lt_version": "", "ne_version": [], "upper_version": "3.16.0"}, "deep": {"count": 3, "list": ["aodh", "gnocchiclient", "osc-lib", "simplejson"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/six.json b/tools/oos/example/train_cached_file/six.json deleted file mode 100644 index fe4e56aeb279d71a6e27c7ce72fb28ab85e48c79..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/six.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "six", "version_dict": 
{"version": "1.12.0", "eq_version": "", "ge_version": "1.5", "lt_version": "", "ne_version": [], "upper_version": "1.12.0"}, "deep": {"count": 3, "list": ["aodh", "croniter", "python-dateutil", "six"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/smmap2.json b/tools/oos/example/train_cached_file/smmap2.json deleted file mode 100644 index 0387d1130308316ce6a90f637ec3dd5339074d1c..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/smmap2.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "smmap2", "version_dict": {"version": "2.0.5", "eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.0.5"}, "deep": {"count": 12, "list": ["aodh", "futurist", "hacking", "oslosphinx", "openstackdocstheme", "os-api-ref", "stestr", "cliff", "stevedore", "bandit", "GitPython", "gitdb2", "smmap2"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/snowballstemmer.json b/tools/oos/example/train_cached_file/snowballstemmer.json deleted file mode 100644 index 65f149c41962cebc19eac3d9848529e4792590fd..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/snowballstemmer.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "snowballstemmer", "version_dict": {"version": "1.9.1", "eq_version": "", "ge_version": "1.1", "lt_version": "", "ne_version": [], "upper_version": "1.9.1"}, "deep": {"count": 5, "list": ["aodh", "futurist", "hacking", "mock", "Sphinx", "snowballstemmer"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/softlayer-messaging.json b/tools/oos/example/train_cached_file/softlayer-messaging.json deleted file mode 100644 index 5cd750114fb0842401573ab320b0da2c1cc5264e..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/softlayer-messaging.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "softlayer-messaging", "version_dict": {"version": "1.0.3", "eq_version": "", "ge_version": "1.0.3", "lt_version": "", "ne_version": [], "upper_version": ""}, "deep": {"count": 4, "list": ["aodh", "keystonemiddleware", "oslo.messaging", "kombu", "softlayer-messaging"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/soupsieve.json b/tools/oos/example/train_cached_file/soupsieve.json deleted file mode 100644 index 61205ea626db5cb0137e32993768d3db2cda49a7..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/soupsieve.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "soupsieve", "version_dict": {"version": "1.9.3", "eq_version": "", "ge_version": "1.2", "lt_version": "", "ne_version": [], "upper_version": "1.9.3"}, "deep": {"count": 7, "list": ["aodh", "futurist", "hacking", "oslosphinx", "openstackdocstheme", "os-api-ref", "beautifulsoup4", "soupsieve"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/sphinx-feature-classification.json b/tools/oos/example/train_cached_file/sphinx-feature-classification.json deleted file mode 100644 index 7217e53373238691423359cbd85391f1d93fd291..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/sphinx-feature-classification.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "sphinx-feature-classification", "version_dict": {"version": "0.4.1", "eq_version": "", "ge_version": "0.1.0", "lt_version": "", "ne_version": [], "upper_version": "0.4.1"}, "deep": {"count": 2, "list": ["cinder", "os-brick", 
"sphinx-feature-classification"]}, "requires": {"docutils": {"eq_version": "", "ge_version": "0.11", "lt_version": "", "ne_version": [], "upper_version": "0.15.2", "version": "0.15.2"}, "pbr": {"eq_version": "", "ge_version": "2.0", "lt_version": "", "ne_version": [], "upper_version": "5.4.3", "version": "5.4.3"}, "hacking": {"eq_version": "", "ge_version": "0.12.0", "lt_version": "0.13", "ne_version": [], "upper_version": "", "version": "0.12.0"}, "coverage": {"eq_version": "", "ge_version": "4.0", "lt_version": "", "ne_version": ["4.4"], "upper_version": "4.5.4", "version": "4.5.4"}, "openstackdocstheme": {"eq_version": "", "ge_version": "1.17.0", "lt_version": "", "ne_version": [], "upper_version": "1.31.1", "version": "1.31.1"}, "oslotest": {"eq_version": "", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "3.8.1", "version": "3.8.1"}, "ddt": {"eq_version": "", "ge_version": "1.0.1", "lt_version": "", "ne_version": [], "upper_version": "1.2.1", "version": "1.2.1"}, "python-subunit": {"eq_version": "", "ge_version": "0.0.18", "lt_version": "", "ne_version": [], "upper_version": "1.4.0", "version": "1.4.0"}, "Sphinx": {"eq_version": "", "ge_version": "1.5.1", "lt_version": "", "ne_version": ["1.6.1"], "upper_version": "2.2.0", "version": "2.2.0"}, "testrepository": {"eq_version": "", "ge_version": "0.0.18", "lt_version": "", "ne_version": [], "upper_version": "0.0.20", "version": "0.0.20"}, "testtools": {"eq_version": "", "ge_version": "1.4.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.0", "version": "2.3.0"}, "reno": {"eq_version": "", "ge_version": "1.8.0", "lt_version": "", "ne_version": [], "upper_version": "2.11.3", "version": "2.11.3"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/sphinx-rtd-theme.json b/tools/oos/example/train_cached_file/sphinx-rtd-theme.json deleted file mode 100644 index ac66d31ac81920106030de5002bb7bd3a2b5f160..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/sphinx-rtd-theme.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "sphinx-rtd-theme", "version_dict": {"version": "0.4.2", "eq_version": "", "ge_version": "0.4.2", "lt_version": "1", "ne_version": [], "upper_version": ""}, "deep": {"count": 7, "list": ["aodh", "futurist", "hacking", "mock", "Sphinx", "setuptools", "virtualenv", "sphinx-rtd-theme"]}, "requires": {"Sphinx": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "2.2.0", "version": "2.2.0"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/sphinx-testing.json b/tools/oos/example/train_cached_file/sphinx-testing.json deleted file mode 100644 index f90f1b189dfd463c6b4d47503037d4a96c093ce2..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/sphinx-testing.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "sphinx-testing", "version_dict": {"version": "1.0.1", "eq_version": "", "ge_version": "0.7.2", "lt_version": "", "ne_version": [], "upper_version": "1.0.1"}, "deep": {"count": 6, "list": ["aodh", "futurist", "hacking", "oslosphinx", "openstackdocstheme", "os-api-ref", "sphinx-testing"]}, "requires": {"Sphinx": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "2.2.0", "version": "2.2.0"}, "six": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}}} \ No newline at end of file diff --git 
a/tools/oos/example/train_cached_file/sphinxcontrib-actdiag.json b/tools/oos/example/train_cached_file/sphinxcontrib-actdiag.json deleted file mode 100644 index 470c33898336bf3d6d80fb4656f8695334560c65..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/sphinxcontrib-actdiag.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "sphinxcontrib-actdiag", "version_dict": {"version": "0.8.5", "eq_version": "", "ge_version": "0.8.5", "lt_version": "", "ne_version": [], "upper_version": "0.8.5"}, "deep": {"count": 1, "list": ["nova", "sphinxcontrib-actdiag"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/sphinxcontrib-apidoc.json b/tools/oos/example/train_cached_file/sphinxcontrib-apidoc.json deleted file mode 100644 index fdd09015cd2d44ffb9bb160df5329e85ec6554f9..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/sphinxcontrib-apidoc.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "sphinxcontrib-apidoc", "version_dict": {"version": "0.3.0", "eq_version": "", "ge_version": "0.2.0", "lt_version": "", "ne_version": [], "upper_version": "0.3.0"}, "deep": {"count": 16, "list": ["aodh", "futurist", "hacking", "oslosphinx", "openstackdocstheme", "os-api-ref", "stestr", "cliff", "stevedore", "bandit", "oslotest", "os-client-config", "openstacksdk", "os-service-types", "keystoneauth1", "oslo.config", "sphinxcontrib-apidoc"]}, "requires": {"pbr": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "5.4.3", "version": "5.4.3"}, "Sphinx": {"eq_version": "", "ge_version": "1.6.0", "lt_version": "", "ne_version": [], "upper_version": "2.2.0", "version": "2.2.0"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/sphinxcontrib-applehelp.json b/tools/oos/example/train_cached_file/sphinxcontrib-applehelp.json deleted file mode 100644 index 7fab19ebc4088a98f115aba3a0afa94ca6494625..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/sphinxcontrib-applehelp.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "sphinxcontrib-applehelp", "version_dict": {"version": "1.0.1", "eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "1.0.1"}, "deep": {"count": 5, "list": ["aodh", "futurist", "hacking", "mock", "Sphinx", "sphinxcontrib-applehelp"]}, "requires": {"pytest": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "5.1.2", "version": "5.1.2"}, "flake8": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}, "mypy": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "0.720", "version": "0.720"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/sphinxcontrib-blockdiag.json b/tools/oos/example/train_cached_file/sphinxcontrib-blockdiag.json deleted file mode 100644 index 44662e11394c0ce7802c05741a8a05e89e1d7be1..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/sphinxcontrib-blockdiag.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "sphinxcontrib-blockdiag", "version_dict": {"version": "1.5.5", "eq_version": "", "ge_version": "1.5.4", "lt_version": "", "ne_version": [], "upper_version": "1.5.5"}, "deep": {"count": 1, "list": ["ceilometer", "sphinxcontrib-blockdiag"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/sphinxcontrib-devhelp.json 
b/tools/oos/example/train_cached_file/sphinxcontrib-devhelp.json deleted file mode 100644 index 326635a989b1bac1ef4c5bfce183e6a1c29e23ad..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/sphinxcontrib-devhelp.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "sphinxcontrib-devhelp", "version_dict": {"version": "1.0.1", "eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "1.0.1"}, "deep": {"count": 5, "list": ["aodh", "futurist", "hacking", "mock", "Sphinx", "sphinxcontrib-devhelp"]}, "requires": {"pytest": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "5.1.2", "version": "5.1.2"}, "flake8": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}, "mypy": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "0.720", "version": "0.720"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/sphinxcontrib-htmlhelp.json b/tools/oos/example/train_cached_file/sphinxcontrib-htmlhelp.json deleted file mode 100644 index 8da85fe775cc197106d2319d239ba32aaad41108..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/sphinxcontrib-htmlhelp.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "sphinxcontrib-htmlhelp", "version_dict": {"version": "1.0.2", "eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "1.0.2"}, "deep": {"count": 5, "list": ["aodh", "futurist", "hacking", "mock", "Sphinx", "sphinxcontrib-htmlhelp"]}, "requires": {"pytest": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "5.1.2", "version": "5.1.2"}, "flake8": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}, "mypy": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "0.720", "version": "0.720"}, "html5lib": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/sphinxcontrib-httpdomain.json b/tools/oos/example/train_cached_file/sphinxcontrib-httpdomain.json deleted file mode 100644 index 98a6356fafc955c0ee670b5434f252f43b0a0d17..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/sphinxcontrib-httpdomain.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "sphinxcontrib-httpdomain", "version_dict": {"version": "1.7.0", "eq_version": "", "ge_version": "1.3.0", "lt_version": "", "ne_version": [], "upper_version": "1.7.0"}, "deep": {"count": 4, "list": ["aodh", "gnocchiclient", "python-openstackclient", "python-heatclient", "sphinxcontrib-httpdomain"]}, "requires": {"Sphinx": {"eq_version": "", "ge_version": "1.5", "lt_version": "", "ne_version": [], "upper_version": "2.2.0", "version": "2.2.0"}, "six": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/sphinxcontrib-jsmath.json b/tools/oos/example/train_cached_file/sphinxcontrib-jsmath.json deleted file mode 100644 index f0d149bb365f2bbfdb598f89460eb7ff5d97fbff..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/sphinxcontrib-jsmath.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "sphinxcontrib-jsmath", "version_dict": 
{"version": "1.0.1", "eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "1.0.1"}, "deep": {"count": 5, "list": ["aodh", "futurist", "hacking", "mock", "Sphinx", "sphinxcontrib-jsmath"]}, "requires": {"pytest": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "5.1.2", "version": "5.1.2"}, "flake8": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}, "mypy": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "0.720", "version": "0.720"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/sphinxcontrib-pecanwsme.json b/tools/oos/example/train_cached_file/sphinxcontrib-pecanwsme.json deleted file mode 100644 index 889614691e72db47682d5743f6f0f092e78a213a..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/sphinxcontrib-pecanwsme.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "sphinxcontrib-pecanwsme", "version_dict": {"version": "0.10.0", "eq_version": "", "ge_version": "0.8", "lt_version": "", "ne_version": [], "upper_version": "0.10.0"}, "deep": {"count": 1, "list": ["aodh", "sphinxcontrib-pecanwsme"]}, "requires": {"six": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "sphinxcontrib-httpdomain": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "1.7.0", "version": "1.7.0"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/sphinxcontrib-programoutput.json b/tools/oos/example/train_cached_file/sphinxcontrib-programoutput.json deleted file mode 100644 index e2926048c506aff3bc636ff686d469e6733cc9d1..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/sphinxcontrib-programoutput.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "sphinxcontrib-programoutput", "version_dict": {"version": "0.14", "eq_version": "", "ge_version": "0.11", "lt_version": "", "ne_version": [], "upper_version": "0.14"}, "deep": {"count": 2, "list": ["openstack-heat", "python-manilaclient", "sphinxcontrib-programoutput"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/sphinxcontrib-qthelp.json b/tools/oos/example/train_cached_file/sphinxcontrib-qthelp.json deleted file mode 100644 index c454eae19ed65a765205607413300897cd325bdb..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/sphinxcontrib-qthelp.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "sphinxcontrib-qthelp", "version_dict": {"version": "1.0.2", "eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "1.0.2"}, "deep": {"count": 5, "list": ["aodh", "futurist", "hacking", "mock", "Sphinx", "sphinxcontrib-qthelp"]}, "requires": {"pytest": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "5.1.2", "version": "5.1.2"}, "flake8": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}, "mypy": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "0.720", "version": "0.720"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/sphinxcontrib-seqdiag.json b/tools/oos/example/train_cached_file/sphinxcontrib-seqdiag.json deleted file mode 100644 index 
42b5f2776842bc075101511477116ce3b89b9016..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/sphinxcontrib-seqdiag.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "sphinxcontrib-seqdiag", "version_dict": {"version": "0.8.5", "eq_version": "", "ge_version": "0.8.4", "lt_version": "", "ne_version": [], "upper_version": "0.8.5"}, "deep": {"count": 1, "list": ["ironic", "sphinxcontrib-seqdiag"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/sphinxcontrib-serializinghtml.json b/tools/oos/example/train_cached_file/sphinxcontrib-serializinghtml.json deleted file mode 100644 index f2ae01bbe108fd9f021e1e23b733fa48a84d7c24..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/sphinxcontrib-serializinghtml.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "sphinxcontrib-serializinghtml", "version_dict": {"version": "1.1.3", "eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "1.1.3"}, "deep": {"count": 5, "list": ["aodh", "futurist", "hacking", "mock", "Sphinx", "sphinxcontrib-serializinghtml"]}, "requires": {"pytest": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "5.1.2", "version": "5.1.2"}, "flake8": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}, "mypy": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "0.720", "version": "0.720"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/sphinxcontrib-svg2pdfconverter.json b/tools/oos/example/train_cached_file/sphinxcontrib-svg2pdfconverter.json deleted file mode 100644 index 4a985ad0c5c1aa88a63537d85874f1578378381c..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/sphinxcontrib-svg2pdfconverter.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "sphinxcontrib-svg2pdfconverter", "version_dict": {"version": "0.1.0", "eq_version": "", "ge_version": "0.1.0", "lt_version": "", "ne_version": [], "upper_version": "0.1.0"}, "deep": {"count": 14, "list": ["aodh", "futurist", "hacking", "oslosphinx", "openstackdocstheme", "os-api-ref", "stestr", "cliff", "stevedore", "bandit", "oslotest", "os-client-config", "python-glanceclient", "tempest", "sphinxcontrib-svg2pdfconverter"]}, "requires": {"Sphinx": {"eq_version": "", "ge_version": "0.6", "lt_version": "", "ne_version": [], "upper_version": "2.2.0", "version": "2.2.0"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/sphinxcontrib-websupport.json b/tools/oos/example/train_cached_file/sphinxcontrib-websupport.json deleted file mode 100644 index 2e7c035bed000ee235a03cf3fa1674f344a2bd47..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/sphinxcontrib-websupport.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "sphinxcontrib-websupport", "version_dict": {"version": "1.1.2", "eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "1.1.2"}, "deep": {"count": 5, "list": ["aodh", "futurist", "hacking", "mock", "Sphinx", "sphinxcontrib-websupport"]}, "requires": {"pytest": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "5.1.2", "version": "5.1.2"}, "mock": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "3.0.5", "version": "3.0.5"}}} \ No newline at end of file diff --git 
a/tools/oos/example/train_cached_file/sqlalchemy-migrate.json b/tools/oos/example/train_cached_file/sqlalchemy-migrate.json deleted file mode 100644 index c25cf16bf7b8f9356b05c5debfb23ee84d7e2bfc..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/sqlalchemy-migrate.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "sqlalchemy-migrate", "version_dict": {"version": "0.12.0", "eq_version": "", "ge_version": "0.11.0", "lt_version": "", "ne_version": [], "upper_version": "0.12.0"}, "deep": {"count": 9, "list": ["aodh", "futurist", "hacking", "oslosphinx", "openstackdocstheme", "os-api-ref", "stestr", "subunit2sql", "oslo.db", "sqlalchemy-migrate"]}, "requires": {"pbr": {"eq_version": "", "ge_version": "1.8", "lt_version": "", "ne_version": [], "upper_version": "5.4.3", "version": "5.4.3"}, "SQLAlchemy": {"eq_version": "", "ge_version": "0.9.6", "lt_version": "", "ne_version": [], "upper_version": "1.3.8", "version": "1.3.8"}, "decorator": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "4.4.0", "version": "4.4.0"}, "six": {"eq_version": "", "ge_version": "1.7.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "sqlparse": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "0.3.0", "version": "0.3.0"}, "Tempita": {"eq_version": "", "ge_version": "0.4", "lt_version": "", "ne_version": [], "upper_version": "0.5.2", "version": "0.5.2"}, "pep8": {"eq_version": "1.5.7", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "1.5.7"}, "pyflakes": {"eq_version": "0.8.1", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "0.8.1"}, "flake8": {"eq_version": "", "ge_version": "2.2.4", "lt_version": "", "ne_version": [], "upper_version": "", "version": "2.2.4"}, "hacking": {"eq_version": "", "ge_version": "0.10.0", "lt_version": "0.11", "ne_version": [], "upper_version": "", "version": "0.10.0"}, "coverage": {"eq_version": "", "ge_version": "3.6", "lt_version": "", "ne_version": [], "upper_version": "4.5.4", "version": "4.5.4"}, "discover": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}, "feedparser": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}, "fixtures": {"eq_version": "", "ge_version": "0.3.14", "lt_version": "", "ne_version": [], "upper_version": "3.0.0", "version": "3.0.0"}, "mock": {"eq_version": "", "ge_version": "1.2", "lt_version": "", "ne_version": [], "upper_version": "3.0.5", "version": "3.0.5"}, "mox": {"eq_version": "", "ge_version": "0.5.3", "lt_version": "", "ne_version": [], "upper_version": "0.5.3", "version": "0.5.3"}, "psycopg2": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "2.8.3", "version": "2.8.3"}, "python-subunit": {"eq_version": "", "ge_version": "0.0.18", "lt_version": "", "ne_version": [], "upper_version": "1.4.0", "version": "1.4.0"}, "Sphinx": {"eq_version": "", "ge_version": "1.1.2", "lt_version": "1.2", "ne_version": [], "upper_version": "2.2.0", "version": "2.2.0"}, "sphinxcontrib-issuetracker": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}, "testrepository": {"eq_version": "", "ge_version": "0.0.17", "lt_version": "", "ne_version": [], "upper_version": "0.0.20", "version": "0.0.20"}, "testtools": {"eq_version": 
"", "ge_version": "0.9.34", "lt_version": "0.9.36", "ne_version": [], "upper_version": "2.3.0", "version": "2.3.0"}, "tempest-lib": {"eq_version": "", "ge_version": "0.1.0", "lt_version": "", "ne_version": [], "upper_version": "", "version": "0.1.0"}, "scripttest": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}, "pylint": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}, "pytz": {"eq_version": "", "ge_version": "2010h", "lt_version": "", "ne_version": [], "upper_version": "2019.2", "version": "2019.2"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/sqlparse.json b/tools/oos/example/train_cached_file/sqlparse.json deleted file mode 100644 index 4a6371b6fbd29222814b2a3a899e7a7bdb5ab016..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/sqlparse.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "sqlparse", "version_dict": {"version": "0.3.0", "eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "0.3.0"}, "deep": {"count": 10, "list": ["aodh", "futurist", "hacking", "oslosphinx", "openstackdocstheme", "os-api-ref", "stestr", "subunit2sql", "oslo.db", "sqlalchemy-migrate", "sqlparse"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/statsd.json b/tools/oos/example/train_cached_file/statsd.json deleted file mode 100644 index dad98f0e63cd5d982a8fdd567f102827dcc5cebb..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/statsd.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "statsd", "version_dict": {"version": "3.3.0", "eq_version": "", "ge_version": "3.3.0", "lt_version": "", "ne_version": [], "upper_version": "3.3.0"}, "deep": {"count": 13, "list": ["aodh", "futurist", "hacking", "oslosphinx", "openstackdocstheme", "os-api-ref", "stestr", "cliff", "stevedore", "bandit", "oslotest", "os-client-config", "openstacksdk", "statsd"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/stestr.json b/tools/oos/example/train_cached_file/stestr.json deleted file mode 100644 index 42f0a5bffb2fe74f3337106904924e22d8ca5f42..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/stestr.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "stestr", "version_dict": {"version": "2.5.1", "eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.5.1"}, "deep": {"count": 6, "list": ["aodh", "futurist", "hacking", "oslosphinx", "openstackdocstheme", "os-api-ref", "stestr"]}, "requires": {"future": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "0.17.1", "version": "0.17.1"}, "pbr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": ["2.1.0", "4.0.0", "4.0.1", "4.0.2", "4.0.3"], "upper_version": "5.4.3", "version": "5.4.3"}, "cliff": {"eq_version": "", "ge_version": "2.8.0", "lt_version": "", "ne_version": [], "upper_version": "2.16.0", "version": "2.16.0"}, "python-subunit": {"eq_version": "", "ge_version": "1.3.0", "lt_version": "", "ne_version": [], "upper_version": "1.4.0", "version": "1.4.0"}, "fixtures": {"eq_version": "", "ge_version": "3.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.0", "version": "3.0.0"}, "six": {"eq_version": "", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": 
"1.12.0"}, "testtools": {"eq_version": "", "ge_version": "2.2.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.0", "version": "2.3.0"}, "PyYAML": {"eq_version": "", "ge_version": "3.10.0", "lt_version": "", "ne_version": [], "upper_version": "5.1.2", "version": "5.1.2"}, "voluptuous": {"eq_version": "", "ge_version": "0.8.9", "lt_version": "", "ne_version": [], "upper_version": "0.11.7", "version": "0.11.7"}, "subunit2sql": {"eq_version": "", "ge_version": "1.8.0", "lt_version": "", "ne_version": [], "upper_version": "", "version": "1.8.0"}, "hacking": {"eq_version": "", "ge_version": "0.11.0", "lt_version": "0.12", "ne_version": [], "upper_version": "", "version": "0.11.0"}, "mock": {"eq_version": "", "ge_version": "2.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.5", "version": "3.0.5"}, "coverage": {"eq_version": "", "ge_version": "4.0", "lt_version": "", "ne_version": [], "upper_version": "4.5.4", "version": "4.5.4"}, "ddt": {"eq_version": "", "ge_version": "1.0.1", "lt_version": "", "ne_version": [], "upper_version": "1.2.1", "version": "1.2.1"}, "doc8": {"eq_version": "", "ge_version": "0.8.0", "lt_version": "", "ne_version": [], "upper_version": "0.8.0", "version": "0.8.0"}, "Sphinx": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": ["1.6.6", "1.6.7", "2.1.0"], "upper_version": "2.2.0", "version": "2.2.0"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/stevedore.json b/tools/oos/example/train_cached_file/stevedore.json deleted file mode 100644 index 6a93a920ce613c6618391343722417c800cd1ae5..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/stevedore.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "stevedore", "version_dict": {"version": "1.31.0", "eq_version": "", "ge_version": "1.20.0", "lt_version": "", "ne_version": [], "upper_version": "1.31.0"}, "deep": {"count": 8, "list": ["aodh", "futurist", "hacking", "oslosphinx", "openstackdocstheme", "os-api-ref", "stestr", "cliff", "stevedore"]}, "requires": {"pbr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": ["2.1.0"], "upper_version": "5.4.3", "version": "5.4.3"}, "six": {"eq_version": "", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "mock": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.5", "version": "3.0.5"}, "coverage": {"eq_version": "", "ge_version": "4.0", "lt_version": "", "ne_version": ["4.4"], "upper_version": "4.5.4", "version": "4.5.4"}, "stestr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.5.1", "version": "2.5.1"}, "Sphinx": {"eq_version": "", "ge_version": "1.6.2", "lt_version": "", "ne_version": ["1.6.6", "1.6.7"], "upper_version": "2.2.0", "version": "2.2.0"}, "bandit": {"eq_version": "", "ge_version": "1.1.0", "lt_version": "1.6.0", "ne_version": [], "upper_version": "", "version": "1.1.0"}, "openstackdocstheme": {"eq_version": "", "ge_version": "1.18.1", "lt_version": "", "ne_version": [], "upper_version": "1.31.1", "version": "1.31.1"}, "reno": {"eq_version": "", "ge_version": "2.5.0", "lt_version": "", "ne_version": [], "upper_version": "2.11.3", "version": "2.11.3"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/storops.json b/tools/oos/example/train_cached_file/storops.json deleted file mode 100644 index 
95f8584f9a9d3794ec51a7b25cea1effc3e49c35..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/storops.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "storops", "version_dict": {"version": "1.2.1", "eq_version": "", "ge_version": "1.1.0", "lt_version": "", "ne_version": [], "upper_version": "1.2.1"}, "deep": {"count": 1, "list": ["cinder", "storops"]}, "requires": {"requests": {"eq_version": "", "ge_version": "2.8.1", "lt_version": "", "ne_version": ["2.20.0", "2.9.0"], "upper_version": "2.22.0", "version": "2.22.0"}, "PyYAML": {"eq_version": "", "ge_version": "3.10", "lt_version": "", "ne_version": [], "upper_version": "5.1.2", "version": "5.1.2"}, "six": {"eq_version": "", "ge_version": "1.9.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "python-dateutil": {"eq_version": "", "ge_version": "2.4.2", "lt_version": "", "ne_version": [], "upper_version": "2.8.0", "version": "2.8.0"}, "retryz": {"eq_version": "", "ge_version": "0.1.8", "lt_version": "", "ne_version": [], "upper_version": "0.1.9", "version": "0.1.9"}, "cachez": {"eq_version": "", "ge_version": "0.1.0", "lt_version": "", "ne_version": [], "upper_version": "0.1.2", "version": "0.1.2"}, "bitmath": {"eq_version": "", "ge_version": "1.3.0", "lt_version": "", "ne_version": [], "upper_version": "1.3.3.1", "version": "1.3.3.1"}, "persist-queue": {"eq_version": "", "ge_version": "0.2.3", "lt_version": "", "ne_version": [], "upper_version": "0.4.2", "version": "0.4.2"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/storpool.json b/tools/oos/example/train_cached_file/storpool.json deleted file mode 100644 index 1ed1992eb9d77f458015a4a9b95766a09962931b..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/storpool.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "storpool", "version_dict": {"version": "5.1.0", "eq_version": "", "ge_version": "4.0.0", "lt_version": "", "ne_version": [], "upper_version": "5.1.0"}, "deep": {"count": 1, "list": ["cinder", "storpool"]}, "requires": {"confget": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "2.3.0", "version": "2.3.0"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/storpool.spopenstack.json b/tools/oos/example/train_cached_file/storpool.spopenstack.json deleted file mode 100644 index b70b7cd4f294479c26ab22658952d626d8e7e51b..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/storpool.spopenstack.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "storpool.spopenstack", "version_dict": {"version": "2.2.1", "eq_version": "", "ge_version": "2.2.1", "lt_version": "", "ne_version": [], "upper_version": "2.2.1"}, "deep": {"count": 1, "list": ["cinder", "storpool.spopenstack"]}, "requires": {"storpool": {"eq_version": "", "ge_version": "4.0.0", "lt_version": "", "ne_version": [], "upper_version": "5.1.0", "version": "5.1.0"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/subunit2sql.json b/tools/oos/example/train_cached_file/subunit2sql.json deleted file mode 100644 index 8f5ebebe817a887ddf80232c8d3c95a7eaeacff3..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/subunit2sql.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "subunit2sql", "version_dict": {"version": "1.8.0", "eq_version": "", "ge_version": "1.8.0", "lt_version": "", "ne_version": [], "upper_version": ""}, "deep": {"count": 7, "list": ["aodh", "futurist", 
"hacking", "oslosphinx", "openstackdocstheme", "os-api-ref", "stestr", "subunit2sql"]}, "requires": {"alembic": {"eq_version": "", "ge_version": "0.4.1", "lt_version": "", "ne_version": [], "upper_version": "1.1.0", "version": "1.1.0"}, "oslo.config": {"eq_version": "", "ge_version": "1.4.0.0a3", "lt_version": "", "ne_version": [], "upper_version": "6.11.3", "version": "6.11.3"}, "oslo.db": {"eq_version": "", "ge_version": "2.1.0", "lt_version": "", "ne_version": [], "upper_version": "5.0.2", "version": "5.0.2"}, "pbr": {"eq_version": "", "ge_version": "1.0.0", "lt_version": "", "ne_version": [], "upper_version": "5.4.3", "version": "5.4.3"}, "python-subunit": {"eq_version": "", "ge_version": "0.0.18", "lt_version": "", "ne_version": [], "upper_version": "1.4.0", "version": "1.4.0"}, "six": {"eq_version": "", "ge_version": "1.5.2", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "SQLAlchemy": {"eq_version": "", "ge_version": "0.8.2", "lt_version": "", "ne_version": [], "upper_version": "1.3.8", "version": "1.3.8"}, "stevedore": {"eq_version": "", "ge_version": "1.3.0", "lt_version": "", "ne_version": [], "upper_version": "1.31.0", "version": "1.31.0"}, "python-dateutil": {"eq_version": "", "ge_version": "2.4.2", "lt_version": "", "ne_version": [], "upper_version": "2.8.0", "version": "2.8.0"}, "pandas": {"eq_version": "", "ge_version": "0.11", "lt_version": "", "ne_version": [], "upper_version": "", "version": "0.11"}, "matplotlib": {"eq_version": "", "ge_version": "1.4", "lt_version": "", "ne_version": [], "upper_version": "", "version": "1.4"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/suds-jurko.json b/tools/oos/example/train_cached_file/suds-jurko.json deleted file mode 100644 index 3abed662cee446165731927af85481c7fd0d6e48..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/suds-jurko.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "suds-jurko", "version_dict": {"version": "0.6", "eq_version": "", "ge_version": "0.6", "lt_version": "", "ne_version": [], "upper_version": "0.6"}, "deep": {"count": 2, "list": ["ceilometer", "oslo.vmware", "suds-jurko"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/sushy.json b/tools/oos/example/train_cached_file/sushy.json deleted file mode 100644 index 42632ebd5325c7be508b1f9c31b6e0305cddc8df..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/sushy.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "sushy", "version_dict": {"version": "2.0.5", "eq_version": "", "ge_version": "1.8.1", "lt_version": "", "ne_version": [], "upper_version": "2.0.5"}, "deep": {"count": 5, "list": ["aodh", "gnocchiclient", "python-openstackclient", "python-rsdclient", "rsd-lib", "sushy"]}, "requires": {"pbr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": ["2.1.0"], "upper_version": "5.4.3", "version": "5.4.3"}, "requests": {"eq_version": "", "ge_version": "2.14.2", "lt_version": "", "ne_version": [], "upper_version": "2.22.0", "version": "2.22.0"}, "six": {"eq_version": "", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "python-dateutil": {"eq_version": "", "ge_version": "2.7.0", "lt_version": "", "ne_version": [], "upper_version": "2.8.0", "version": "2.8.0"}, "stevedore": {"eq_version": "", "ge_version": "1.29.0", "lt_version": "", "ne_version": [], "upper_version": "1.31.0", "version": "1.31.0"}, "hacking": 
{"eq_version": "", "ge_version": "1.0.0", "lt_version": "1.1.0", "ne_version": [], "upper_version": "", "version": "1.0.0"}, "coverage": {"eq_version": "", "ge_version": "4.0", "lt_version": "", "ne_version": ["4.4"], "upper_version": "4.5.4", "version": "4.5.4"}, "python-subunit": {"eq_version": "", "ge_version": "1.0.0", "lt_version": "", "ne_version": [], "upper_version": "1.4.0", "version": "1.4.0"}, "Sphinx": {"eq_version": "", "ge_version": "1.6.2", "lt_version": "", "ne_version": ["1.6.6", "1.6.7"], "upper_version": "2.2.0", "version": "2.2.0"}, "openstackdocstheme": {"eq_version": "", "ge_version": "1.20.0", "lt_version": "", "ne_version": [], "upper_version": "1.31.1", "version": "1.31.1"}, "oslotest": {"eq_version": "", "ge_version": "3.2.0", "lt_version": "", "ne_version": [], "upper_version": "3.8.1", "version": "3.8.1"}, "stestr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.5.1", "version": "2.5.1"}, "testscenarios": {"eq_version": "", "ge_version": "0.4", "lt_version": "", "ne_version": [], "upper_version": "0.5.0", "version": "0.5.0"}, "testtools": {"eq_version": "", "ge_version": "2.2.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.0", "version": "2.3.0"}, "reno": {"eq_version": "", "ge_version": "2.5.0", "lt_version": "", "ne_version": [], "upper_version": "2.11.3", "version": "2.11.3"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/swift.json b/tools/oos/example/train_cached_file/swift.json deleted file mode 100644 index 6050d50cc63c2c683b104adba44372df6d826991..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/swift.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "swift", "version_dict": {"version": "2.23.3", "eq_version": "2.23.3", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": ""}, "deep": {"count": 0, "list": ["swift"]}, "requires": {"eventlet": {"eq_version": "", "ge_version": "0.25.0", "lt_version": "", "ne_version": [], "upper_version": "0.25.2", "version": "0.25.2"}, "greenlet": {"eq_version": "", "ge_version": "0.3.2", "lt_version": "", "ne_version": [], "upper_version": "0.4.15", "version": "0.4.15"}, "netifaces": {"eq_version": "", "ge_version": "0.8", "lt_version": "", "ne_version": ["0.10.0", "0.10.1"], "upper_version": "0.10.9", "version": "0.10.9"}, "PasteDeploy": {"eq_version": "", "ge_version": "1.3.3", "lt_version": "", "ne_version": [], "upper_version": "2.0.1", "version": "2.0.1"}, "lxml": {"eq_version": "", "ge_version": "3.4.1", "lt_version": "", "ne_version": [], "upper_version": "4.4.1", "version": "4.4.1"}, "requests": {"eq_version": "", "ge_version": "2.14.2", "lt_version": "", "ne_version": [], "upper_version": "2.22.0", "version": "2.22.0"}, "six": {"eq_version": "", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "xattr": {"eq_version": "", "ge_version": "0.4", "lt_version": "", "ne_version": [], "upper_version": "0.9.6", "version": "0.9.6"}, "PyECLib": {"eq_version": "", "ge_version": "1.3.1", "lt_version": "", "ne_version": [], "upper_version": "", "version": "1.3.1"}, "cryptography": {"eq_version": "", "ge_version": "2.0.2", "lt_version": "", "ne_version": [], "upper_version": "2.8", "version": "2.8"}, "ipaddress": {"eq_version": "", "ge_version": "1.0.16", "lt_version": "", "ne_version": [], "upper_version": "1.0.22", "version": "1.0.22"}, "hacking": {"eq_version": "", "ge_version": "0.11.0", "lt_version": "0.12", "ne_version": 
[], "upper_version": "", "version": "0.11.0"}, "coverage": {"eq_version": "", "ge_version": "3.6", "lt_version": "", "ne_version": [], "upper_version": "4.5.4", "version": "4.5.4"}, "nose": {"eq_version": "", "ge_version": "1.3.7", "lt_version": "", "ne_version": [], "upper_version": "1.3.7", "version": "1.3.7"}, "nosexcover": {"eq_version": "", "ge_version": "1.0.10", "lt_version": "", "ne_version": [], "upper_version": "1.0.11", "version": "1.0.11"}, "nosehtmloutput": {"eq_version": "", "ge_version": "0.0.3", "lt_version": "", "ne_version": [], "upper_version": "0.0.5", "version": "0.0.5"}, "os-testr": {"eq_version": "", "ge_version": "0.8.0", "lt_version": "", "ne_version": [], "upper_version": "1.1.0", "version": "1.1.0"}, "mock": {"eq_version": "", "ge_version": "2.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.5", "version": "3.0.5"}, "python-swiftclient": {"eq_version": "", "ge_version": "3.2.0", "lt_version": "", "ne_version": [], "upper_version": "3.8.1", "version": "3.8.1"}, "python-keystoneclient": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": ["2.1.0"], "upper_version": "3.21.0", "version": "3.21.0"}, "reno": {"eq_version": "", "ge_version": "1.8.0", "lt_version": "", "ne_version": [], "upper_version": "2.11.3", "version": "2.11.3"}, "python-openstackclient": {"eq_version": "", "ge_version": "3.12.0", "lt_version": "", "ne_version": [], "upper_version": "4.0.2", "version": "4.0.2"}, "boto": {"eq_version": "", "ge_version": "2.32.1", "lt_version": "", "ne_version": [], "upper_version": "2.49.0", "version": "2.49.0"}, "boto3": {"eq_version": "", "ge_version": "1.9", "lt_version": "", "ne_version": [], "upper_version": "1.9.225", "version": "1.9.225"}, "botocore": {"eq_version": "", "ge_version": "1.12", "lt_version": "", "ne_version": [], "upper_version": "1.12.225", "version": "1.12.225"}, "requests-mock": {"eq_version": "", "ge_version": "1.2.0", "lt_version": "", "ne_version": [], "upper_version": "1.6.0", "version": "1.6.0"}, "fixtures": {"eq_version": "", "ge_version": "3.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.0", "version": "3.0.0"}, "keystonemiddleware": {"eq_version": "", "ge_version": "4.17.0", "lt_version": "", "ne_version": [], "upper_version": "7.0.1", "version": "7.0.1"}, "bandit": {"eq_version": "", "ge_version": "1.1.0", "lt_version": "", "ne_version": [], "upper_version": "", "version": "1.1.0"}, "docutils": {"eq_version": "", "ge_version": "0.11", "lt_version": "", "ne_version": [], "upper_version": "0.15.2", "version": "0.15.2"}, "Sphinx": {"eq_version": "", "ge_version": "1.6.2", "lt_version": "", "ne_version": [], "upper_version": "2.2.0", "version": "2.2.0"}, "openstackdocstheme": {"eq_version": "", "ge_version": "1.30.0", "lt_version": "", "ne_version": [], "upper_version": "1.31.1", "version": "1.31.1"}, "os-api-ref": {"eq_version": "", "ge_version": "1.0.0", "lt_version": "", "ne_version": [], "upper_version": "1.6.2", "version": "1.6.2"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/sysv-ipc.json b/tools/oos/example/train_cached_file/sysv-ipc.json deleted file mode 100644 index 48c0a4f9ab4f0b3d8c79717e1f7ff864c29c7132..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/sysv-ipc.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "sysv-ipc", "version_dict": {"version": "1.0.0", "eq_version": "", "ge_version": "0.6.8", "lt_version": "", "ne_version": [], "upper_version": "1.0.0"}, "deep": {"count": 2, "list": ["aodh", "tooz", 
"sysv-ipc"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/taskflow.json b/tools/oos/example/train_cached_file/taskflow.json deleted file mode 100644 index 8897feec78f3618c9561292dd7980cf39c4a09a4..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/taskflow.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "taskflow", "version_dict": {"version": "3.7.1", "eq_version": "", "ge_version": "3.2.0", "lt_version": "", "ne_version": [], "upper_version": "3.7.1"}, "deep": {"count": 1, "list": ["cinder", "taskflow"]}, "requires": {"pbr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": ["2.1.0"], "upper_version": "5.4.3", "version": "5.4.3"}, "six": {"eq_version": "", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "futurist": {"eq_version": "", "ge_version": "1.2.0", "lt_version": "", "ne_version": [], "upper_version": "1.9.0", "version": "1.9.0"}, "fasteners": {"eq_version": "", "ge_version": "0.7.0", "lt_version": "", "ne_version": [], "upper_version": "0.14.1", "version": "0.14.1"}, "networkx": {"eq_version": "", "ge_version": "1.10", "lt_version": "", "ne_version": [], "upper_version": "2.3", "version": "2.3"}, "contextlib2": {"eq_version": "", "ge_version": "0.4.0", "lt_version": "", "ne_version": [], "upper_version": "0.5.5", "version": "0.5.5"}, "stevedore": {"eq_version": "", "ge_version": "1.20.0", "lt_version": "", "ne_version": [], "upper_version": "1.31.0", "version": "1.31.0"}, "jsonschema": {"eq_version": "", "ge_version": "2.6.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.2", "version": "3.0.2"}, "automaton": {"eq_version": "", "ge_version": "1.9.0", "lt_version": "", "ne_version": [], "upper_version": "1.17.0", "version": "1.17.0"}, "oslo.utils": {"eq_version": "", "ge_version": "3.33.0", "lt_version": "", "ne_version": [], "upper_version": "3.41.6", "version": "3.41.6"}, "oslo.serialization": {"eq_version": "", "ge_version": "2.18.0", "lt_version": "", "ne_version": ["2.19.1"], "upper_version": "2.29.3", "version": "2.29.3"}, "tenacity": {"eq_version": "", "ge_version": "4.4.0", "lt_version": "", "ne_version": [], "upper_version": "5.1.1", "version": "5.1.1"}, "cachetools": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.1.1", "version": "3.1.1"}, "pydot": {"eq_version": "", "ge_version": "1.2.4", "lt_version": "", "ne_version": [], "upper_version": "1.4.1", "version": "1.4.1"}, "kazoo": {"eq_version": "", "ge_version": "2.2", "lt_version": "", "ne_version": [], "upper_version": "2.6.1", "version": "2.6.1"}, "zake": {"eq_version": "", "ge_version": "0.1.6", "lt_version": "", "ne_version": [], "upper_version": "0.2.2", "version": "0.2.2"}, "redis": {"eq_version": "", "ge_version": "2.10.0", "lt_version": "", "ne_version": [], "upper_version": "3.3.8", "version": "3.3.8"}, "kombu": {"eq_version": "", "ge_version": "4.0.0", "lt_version": "", "ne_version": ["4.0.2"], "upper_version": "4.6.6", "version": "4.6.6"}, "eventlet": {"eq_version": "", "ge_version": "0.18.2", "lt_version": "", "ne_version": ["0.18.3", "0.20.1", "0.21.0"], "upper_version": "0.25.2", "version": "0.25.2"}, "SQLAlchemy": {"eq_version": "", "ge_version": "1.0.10", "lt_version": "", "ne_version": ["1.1.5", "1.1.6", "1.1.7", "1.1.8"], "upper_version": "1.3.8", "version": "1.3.8"}, "alembic": {"eq_version": "", "ge_version": "0.8.10", "lt_version": "", "ne_version": [], "upper_version": "1.1.0", 
"version": "1.1.0"}, "SQLAlchemy-Utils": {"eq_version": "", "ge_version": "0.30.11", "lt_version": "", "ne_version": [], "upper_version": "0.34.2", "version": "0.34.2"}, "PyMySQL": {"eq_version": "", "ge_version": "0.7.6", "lt_version": "", "ne_version": [], "upper_version": "0.9.3", "version": "0.9.3"}, "psycopg2": {"eq_version": "", "ge_version": "2.7.0", "lt_version": "", "ne_version": [], "upper_version": "2.8.3", "version": "2.8.3"}, "pydotplus": {"eq_version": "", "ge_version": "2.0.2", "lt_version": "", "ne_version": [], "upper_version": "2.0.2", "version": "2.0.2"}, "hacking": {"eq_version": "", "ge_version": "0.10.0", "lt_version": "0.11", "ne_version": [], "upper_version": "", "version": "0.10.0"}, "oslotest": {"eq_version": "", "ge_version": "3.2.0", "lt_version": "", "ne_version": [], "upper_version": "3.8.1", "version": "3.8.1"}, "mock": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.5", "version": "3.0.5"}, "testtools": {"eq_version": "", "ge_version": "2.2.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.0", "version": "2.3.0"}, "testscenarios": {"eq_version": "", "ge_version": "0.4", "lt_version": "", "ne_version": [], "upper_version": "0.5.0", "version": "0.5.0"}, "stestr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.5.1", "version": "2.5.1"}, "Sphinx": {"eq_version": "", "ge_version": "1.6.2", "lt_version": "", "ne_version": ["1.6.6", "1.6.7"], "upper_version": "2.2.0", "version": "2.2.0"}, "openstackdocstheme": {"eq_version": "", "ge_version": "1.18.1", "lt_version": "", "ne_version": [], "upper_version": "1.31.1", "version": "1.31.1"}, "reno": {"eq_version": "", "ge_version": "2.5.0", "lt_version": "", "ne_version": [], "upper_version": "2.11.3", "version": "2.11.3"}, "doc8": {"eq_version": "", "ge_version": "0.6.0", "lt_version": "", "ne_version": [], "upper_version": "0.8.0", "version": "0.8.0"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/tempest-lib.json b/tools/oos/example/train_cached_file/tempest-lib.json deleted file mode 100644 index 49807cf7a1d10efb5c03318bc82a5e66b903571d..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/tempest-lib.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "tempest-lib", "version_dict": {"version": "0.1.0", "eq_version": "", "ge_version": "0.1.0", "lt_version": "", "ne_version": [], "upper_version": ""}, "deep": {"count": 10, "list": ["aodh", "futurist", "hacking", "oslosphinx", "openstackdocstheme", "os-api-ref", "stestr", "subunit2sql", "oslo.db", "sqlalchemy-migrate", "tempest-lib"]}, "requires": {"pbr": {"eq_version": "", "ge_version": "0.6", "lt_version": "1.0", "ne_version": ["0.7"], "upper_version": "5.4.3", "version": "5.4.3"}, "Babel": {"eq_version": "", "ge_version": "1.3", "lt_version": "", "ne_version": [], "upper_version": "2.7.0", "version": "2.7.0"}, "fixtures": {"eq_version": "", "ge_version": "0.3.14", "lt_version": "", "ne_version": [], "upper_version": "3.0.0", "version": "3.0.0"}, "oslo.config": {"eq_version": "", "ge_version": "1.4.0", "lt_version": "", "ne_version": [], "upper_version": "6.11.3", "version": "6.11.3"}, "iso8601": {"eq_version": "", "ge_version": "0.1.9", "lt_version": "", "ne_version": [], "upper_version": "0.1.12", "version": "0.1.12"}, "jsonschema": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "3.0.0", "ne_version": [], "upper_version": "3.0.2", "version": "3.0.2"}, "httplib2": {"eq_version": "", 
"ge_version": "0.7.5", "lt_version": "", "ne_version": [], "upper_version": "0.13.1", "version": "0.13.1"}, "six": {"eq_version": "", "ge_version": "1.7.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "hacking": {"eq_version": "", "ge_version": "0.9.2", "lt_version": "0.10", "ne_version": [], "upper_version": "", "version": "0.9.2"}, "coverage": {"eq_version": "", "ge_version": "3.6", "lt_version": "", "ne_version": [], "upper_version": "4.5.4", "version": "4.5.4"}, "discover": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}, "python-subunit": {"eq_version": "", "ge_version": "0.0.18", "lt_version": "", "ne_version": [], "upper_version": "1.4.0", "version": "1.4.0"}, "Sphinx": {"eq_version": "", "ge_version": "1.1.2", "lt_version": "1.3", "ne_version": ["1.2.0", "1.3b1"], "upper_version": "2.2.0", "version": "2.2.0"}, "oslosphinx": {"eq_version": "", "ge_version": "2.2.0", "lt_version": "", "ne_version": [], "upper_version": "4.18.0", "version": "4.18.0"}, "oslotest": {"eq_version": "", "ge_version": "1.2.0", "lt_version": "", "ne_version": [], "upper_version": "3.8.1", "version": "3.8.1"}, "testrepository": {"eq_version": "", "ge_version": "0.0.18", "lt_version": "", "ne_version": [], "upper_version": "0.0.20", "version": "0.0.20"}, "testscenarios": {"eq_version": "", "ge_version": "0.4", "lt_version": "", "ne_version": [], "upper_version": "0.5.0", "version": "0.5.0"}, "testtools": {"eq_version": "", "ge_version": "0.9.36", "lt_version": "", "ne_version": ["1.2.0"], "upper_version": "2.3.0", "version": "2.3.0"}, "mock": {"eq_version": "", "ge_version": "1.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.5", "version": "3.0.5"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/tempest.json b/tools/oos/example/train_cached_file/tempest.json deleted file mode 100644 index de2d24f4a5ff67e862078315e328639f71eb1888..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/tempest.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "tempest", "version_dict": {"version": "22.1.0", "eq_version": "", "ge_version": "17.1.0", "lt_version": "", "ne_version": [], "upper_version": "22.1.0"}, "deep": {"count": 13, "list": ["aodh", "futurist", "hacking", "oslosphinx", "openstackdocstheme", "os-api-ref", "stestr", "cliff", "stevedore", "bandit", "oslotest", "os-client-config", "python-glanceclient", "tempest"]}, "requires": {"pbr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": ["2.1.0"], "upper_version": "5.4.3", "version": "5.4.3"}, "cliff": {"eq_version": "", "ge_version": "2.8.0", "lt_version": "", "ne_version": ["2.9.0"], "upper_version": "2.16.0", "version": "2.16.0"}, "jsonschema": {"eq_version": "", "ge_version": "2.6.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.2", "version": "3.0.2"}, "testtools": {"eq_version": "", "ge_version": "2.2.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.0", "version": "2.3.0"}, "paramiko": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.6.0", "version": "2.6.0"}, "netaddr": {"eq_version": "", "ge_version": "0.7.18", "lt_version": "", "ne_version": [], "upper_version": "0.7.19", "version": "0.7.19"}, "oslo.concurrency": {"eq_version": "", "ge_version": "3.26.0", "lt_version": "", "ne_version": [], "upper_version": "3.30.1", "version": "3.30.1"}, "oslo.config": {"eq_version": "", "ge_version": 
"5.2.0", "lt_version": "", "ne_version": [], "upper_version": "6.11.3", "version": "6.11.3"}, "oslo.log": {"eq_version": "", "ge_version": "3.36.0", "lt_version": "", "ne_version": [], "upper_version": "3.44.3", "version": "3.44.3"}, "stestr": {"eq_version": "", "ge_version": "1.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.5.1", "version": "2.5.1"}, "oslo.serialization": {"eq_version": "", "ge_version": "2.18.0", "lt_version": "", "ne_version": ["2.19.1"], "upper_version": "2.29.3", "version": "2.29.3"}, "oslo.utils": {"eq_version": "", "ge_version": "3.33.0", "lt_version": "", "ne_version": [], "upper_version": "3.41.6", "version": "3.41.6"}, "six": {"eq_version": "", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "fixtures": {"eq_version": "", "ge_version": "3.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.0", "version": "3.0.0"}, "PyYAML": {"eq_version": "", "ge_version": "3.12", "lt_version": "", "ne_version": [], "upper_version": "5.1.2", "version": "5.1.2"}, "python-subunit": {"eq_version": "", "ge_version": "1.0.0", "lt_version": "", "ne_version": [], "upper_version": "1.4.0", "version": "1.4.0"}, "stevedore": {"eq_version": "", "ge_version": "1.20.0", "lt_version": "", "ne_version": [], "upper_version": "1.31.0", "version": "1.31.0"}, "PrettyTable": {"eq_version": "", "ge_version": "0.7.1", "lt_version": "0.8", "ne_version": [], "upper_version": "", "version": "0.7.1"}, "urllib3": {"eq_version": "", "ge_version": "1.21.1", "lt_version": "", "ne_version": [], "upper_version": "1.25.3", "version": "1.25.3"}, "debtcollector": {"eq_version": "", "ge_version": "1.2.0", "lt_version": "", "ne_version": [], "upper_version": "1.22.0", "version": "1.22.0"}, "unittest2": {"eq_version": "", "ge_version": "1.1.0", "lt_version": "", "ne_version": [], "upper_version": "1.1.0", "version": "1.1.0"}, "hacking": {"eq_version": "", "ge_version": "1.1.0", "lt_version": "1.2.0", "ne_version": [], "upper_version": "", "version": "1.1.0"}, "mock": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.5", "version": "3.0.5"}, "coverage": {"eq_version": "", "ge_version": "4.0", "lt_version": "", "ne_version": ["4.4"], "upper_version": "4.5.4", "version": "4.5.4"}, "oslotest": {"eq_version": "", "ge_version": "3.2.0", "lt_version": "", "ne_version": [], "upper_version": "3.8.1", "version": "3.8.1"}, "flake8-import-order": {"eq_version": "0.11", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "0.11"}, "openstackdocstheme": {"eq_version": "", "ge_version": "1.20.0", "lt_version": "", "ne_version": [], "upper_version": "1.31.1", "version": "1.31.1"}, "reno": {"eq_version": "", "ge_version": "2.5.0", "lt_version": "", "ne_version": [], "upper_version": "2.11.3", "version": "2.11.3"}, "Sphinx": {"eq_version": "", "ge_version": "1.6.2", "lt_version": "", "ne_version": ["1.6.6", "1.6.7", "2.1.0"], "upper_version": "2.2.0", "version": "2.2.0"}, "sphinxcontrib-svg2pdfconverter": {"eq_version": "", "ge_version": "0.1.0", "lt_version": "", "ne_version": [], "upper_version": "0.1.0", "version": "0.1.0"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/tenacity.json b/tools/oos/example/train_cached_file/tenacity.json deleted file mode 100644 index 3a95b3e5fd99ab80fc5c3351a691445377f3c5e0..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/tenacity.json +++ /dev/null @@ -1 +0,0 @@ 
-{"name": "tenacity", "version_dict": {"version": "5.1.1", "eq_version": "", "ge_version": "3.2.1", "lt_version": "", "ne_version": [], "upper_version": "5.1.1"}, "deep": {"count": 1, "list": ["aodh", "tenacity"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/termcolor.json b/tools/oos/example/train_cached_file/termcolor.json deleted file mode 100644 index 871c2ead6d3eb4553350b0aef768304821017c30..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/termcolor.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "termcolor", "version_dict": {"version": "1.1.0", "eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "1.1.0"}, "deep": {"count": 3, "list": ["keystone", "Flask", "Werkzeug", "termcolor"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/testrepository.json b/tools/oos/example/train_cached_file/testrepository.json deleted file mode 100644 index 5286d05688ff9527afaa74dc367498b7b8fc5371..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/testrepository.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "testrepository", "version_dict": {"version": "0.0.20", "eq_version": "", "ge_version": "0.0.18", "lt_version": "", "ne_version": [], "upper_version": "0.0.20"}, "deep": {"count": 6, "list": ["aodh", "futurist", "hacking", "oslosphinx", "openstackdocstheme", "os-api-ref", "testrepository"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/testresources.json b/tools/oos/example/train_cached_file/testresources.json deleted file mode 100644 index ed25933747a85ec99635207e055ed18db295b085..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/testresources.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "testresources", "version_dict": {"version": "2.0.1", "eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.0.1"}, "deep": {"count": 15, "list": ["aodh", "futurist", "hacking", "oslosphinx", "openstackdocstheme", "os-api-ref", "stestr", "cliff", "stevedore", "bandit", "oslotest", "os-client-config", "openstacksdk", "os-service-types", "keystoneauth1", "testresources"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/testscenarios.json b/tools/oos/example/train_cached_file/testscenarios.json deleted file mode 100644 index 7e7379ad874ad311d6e2a4bd746cee029ba5527c..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/testscenarios.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "testscenarios", "version_dict": {"version": "0.5.0", "eq_version": "", "ge_version": "0.4", "lt_version": "", "ne_version": [], "upper_version": "0.5.0"}, "deep": {"count": 10, "list": ["aodh", "futurist", "hacking", "oslosphinx", "openstackdocstheme", "os-api-ref", "stestr", "cliff", "stevedore", "bandit", "testscenarios"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/testtools.json b/tools/oos/example/train_cached_file/testtools.json deleted file mode 100644 index 3c9514659913e11fd7522757be1e23401885bf57..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/testtools.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "testtools", "version_dict": {"version": "2.3.0", "eq_version": "", "ge_version": "2.2.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.0"}, "deep": {"count": 6, "list": 
["aodh", "futurist", "hacking", "oslosphinx", "openstackdocstheme", "os-api-ref", "testtools"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/textfsm.json b/tools/oos/example/train_cached_file/textfsm.json deleted file mode 100644 index 1d23d8da4fd170e9f6f011b32dc85db662ea5e36..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/textfsm.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "textfsm", "version_dict": {"version": "1.1.0", "eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "1.1.0"}, "deep": {"count": 2, "list": ["networking-generic-switch", "netmiko", "textfsm"]}, "requires": {"future": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "0.17.1", "version": "0.17.1"}, "six": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/tinyrpc.json b/tools/oos/example/train_cached_file/tinyrpc.json deleted file mode 100644 index 203cb384bda91300748a5e653db5c1d4f8d52d8c..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/tinyrpc.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "tinyrpc", "version_dict": {"version": "1.0.3", "eq_version": "", "ge_version": "0.6", "lt_version": "", "ne_version": [], "upper_version": "1.0.3"}, "deep": {"count": 3, "list": ["openstack-heat", "neutron-lib", "os-ken", "tinyrpc"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/tooz.json b/tools/oos/example/train_cached_file/tooz.json deleted file mode 100644 index 7f5680c067ba84527fbe164706129b78f69de563..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/tooz.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "tooz", "version_dict": {"version": "1.66.3", "eq_version": "", "ge_version": "1.28.0", "lt_version": "", "ne_version": [], "upper_version": "1.66.3"}, "deep": {"count": 1, "list": ["aodh", "tooz"]}, "requires": {"pbr": {"eq_version": "", "ge_version": "1.6", "lt_version": "", "ne_version": [], "upper_version": "5.4.3", "version": "5.4.3"}, "stevedore": {"eq_version": "", "ge_version": "1.16.0", "lt_version": "", "ne_version": [], "upper_version": "1.31.0", "version": "1.31.0"}, "six": {"eq_version": "", "ge_version": "1.9.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "voluptuous": {"eq_version": "", "ge_version": "0.8.9", "lt_version": "", "ne_version": [], "upper_version": "0.11.7", "version": "0.11.7"}, "msgpack": {"eq_version": "", "ge_version": "0.4.0", "lt_version": "", "ne_version": [], "upper_version": "0.6.1", "version": "0.6.1"}, "fasteners": {"eq_version": "", "ge_version": "0.7", "lt_version": "", "ne_version": [], "upper_version": "0.14.1", "version": "0.14.1"}, "tenacity": {"eq_version": "", "ge_version": "3.2.1", "lt_version": "", "ne_version": [], "upper_version": "5.1.1", "version": "5.1.1"}, "futurist": {"eq_version": "", "ge_version": "1.2.0", "lt_version": "", "ne_version": [], "upper_version": "1.9.0", "version": "1.9.0"}, "oslo.utils": {"eq_version": "", "ge_version": "3.15.0", "lt_version": "", "ne_version": [], "upper_version": "3.41.6", "version": "3.41.6"}, "oslo.serialization": {"eq_version": "", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "2.29.3", "version": "2.29.3"}, "mock": {"eq_version": "", "ge_version": "2.0", 
"lt_version": "", "ne_version": [], "upper_version": "3.0.5", "version": "3.0.5"}, "python-subunit": {"eq_version": "", "ge_version": "0.0.18", "lt_version": "", "ne_version": [], "upper_version": "1.4.0", "version": "1.4.0"}, "testtools": {"eq_version": "", "ge_version": "1.4.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.0", "version": "2.3.0"}, "coverage": {"eq_version": "", "ge_version": "3.6", "lt_version": "", "ne_version": [], "upper_version": "4.5.4", "version": "4.5.4"}, "fixtures": {"eq_version": "", "ge_version": "3.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.0", "version": "3.0.0"}, "pifpaf": {"eq_version": "", "ge_version": "0.10.0", "lt_version": "", "ne_version": [], "upper_version": "2.2.2", "version": "2.2.2"}, "os-testr": {"eq_version": "", "ge_version": "0.8.0", "lt_version": "", "ne_version": [], "upper_version": "1.1.0", "version": "1.1.0"}, "stestr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.5.1", "version": "2.5.1"}, "Sphinx": {"eq_version": "", "ge_version": "1.6.2", "lt_version": "", "ne_version": ["1.6.6", "1.6.7", "2.1.0"], "upper_version": "2.2.0", "version": "2.2.0"}, "openstackdocstheme": {"eq_version": "", "ge_version": "1.11.0", "lt_version": "", "ne_version": [], "upper_version": "1.31.1", "version": "1.31.1"}, "reno": {"eq_version": "", "ge_version": "1.8.0", "lt_version": "", "ne_version": [], "upper_version": "2.11.3", "version": "2.11.3"}, "python-consul": {"eq_version": "", "ge_version": "0.4.7", "lt_version": "", "ne_version": [], "upper_version": "1.1.0", "version": "1.1.0"}, "sysv-ipc": {"eq_version": "", "ge_version": "0.6.8", "lt_version": "", "ne_version": [], "upper_version": "1.0.0", "version": "1.0.0"}, "zake": {"eq_version": "", "ge_version": "0.1.6", "lt_version": "", "ne_version": [], "upper_version": "0.2.2", "version": "0.2.2"}, "redis": {"eq_version": "", "ge_version": "2.10.0", "lt_version": "", "ne_version": [], "upper_version": "3.3.8", "version": "3.3.8"}, "psycopg2": {"eq_version": "", "ge_version": "2.5", "lt_version": "", "ne_version": [], "upper_version": "2.8.3", "version": "2.8.3"}, "PyMySQL": {"eq_version": "", "ge_version": "0.6.2", "lt_version": "", "ne_version": [], "upper_version": "0.9.3", "version": "0.9.3"}, "pymemcache": {"eq_version": "", "ge_version": "1.2.9", "lt_version": "", "ne_version": ["1.3.0"], "upper_version": "2.2.2", "version": "2.2.2"}, "etcd3": {"eq_version": "", "ge_version": "0.6.2", "lt_version": "", "ne_version": [], "upper_version": "0.10.0", "version": "0.10.0"}, "etcd3gw": {"eq_version": "", "ge_version": "0.1.0", "lt_version": "", "ne_version": [], "upper_version": "0.2.4", "version": "0.2.4"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/tornado.json b/tools/oos/example/train_cached_file/tornado.json deleted file mode 100644 index e0e82aa109f60e2abaf79dc2e2cb2fbcdaf7480d..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/tornado.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "tornado", "version_dict": {"version": "5.1.1", "eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "5.1.1"}, "deep": {"count": 5, "list": ["aodh", "keystonemiddleware", "oslo.messaging", "kombu", "python-consul", "tornado"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/towncrier.json b/tools/oos/example/train_cached_file/towncrier.json deleted file mode 100644 index 
212c00a90a85895eb586c80c9a27ed77d921cf48..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/towncrier.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "towncrier", "version_dict": {"version": "18.5.0", "eq_version": "", "ge_version": "18.5.0", "lt_version": "", "ne_version": [], "upper_version": ""}, "deep": {"count": 7, "list": ["aodh", "futurist", "hacking", "mock", "Sphinx", "setuptools", "virtualenv", "towncrier"]}, "requires": {"click": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}, "incremental": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}, "Jinja2": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "2.10.1", "version": "2.10.1"}, "toml": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/trove-dashboard.json b/tools/oos/example/train_cached_file/trove-dashboard.json deleted file mode 100644 index 33213dcc2e1c9d4f5371cc0e00c34de1344ed768..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/trove-dashboard.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "trove-dashboard", "version_dict": {"version": "13.0.0", "eq_version": "13.0.0", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": ""}, "deep": {"count": 0, "list": ["trove-dashboard"]}, "requires": {"pbr": {"eq_version": "", "ge_version": "1.6", "lt_version": "", "ne_version": [], "upper_version": "5.4.3", "version": "5.4.3"}, "oslo.log": {"eq_version": "", "ge_version": "3.30.0", "lt_version": "", "ne_version": [], "upper_version": "3.44.3", "version": "3.44.3"}, "python-swiftclient": {"eq_version": "", "ge_version": "2.2.0", "lt_version": "", "ne_version": [], "upper_version": "3.8.1", "version": "3.8.1"}, "python-troveclient": {"eq_version": "", "ge_version": "1.2.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.1", "version": "3.0.1"}, "horizon": {"eq_version": "", "ge_version": "14.0.0.0b3", "lt_version": "", "ne_version": [], "upper_version": "16.2.2", "version": "16.2.2"}, "hacking": {"eq_version": "", "ge_version": "1.1.0", "lt_version": "1.2.0", "ne_version": [], "upper_version": "", "version": "1.1.0"}, "coverage": {"eq_version": "", "ge_version": "3.6", "lt_version": "", "ne_version": [], "upper_version": "4.5.4", "version": "4.5.4"}, "ddt": {"eq_version": "", "ge_version": "0.7.0", "lt_version": "", "ne_version": [], "upper_version": "1.2.1", "version": "1.2.1"}, "mock": {"eq_version": "", "ge_version": "1.2", "lt_version": "", "ne_version": [], "upper_version": "3.0.5", "version": "3.0.5"}, "python-subunit": {"eq_version": "", "ge_version": "0.0.18", "lt_version": "", "ne_version": [], "upper_version": "1.4.0", "version": "1.4.0"}, "selenium": {"eq_version": "", "ge_version": "2.50.1", "lt_version": "", "ne_version": [], "upper_version": "3.141.0", "version": "3.141.0"}, "Sphinx": {"eq_version": "", "ge_version": "1.6.2", "lt_version": "", "ne_version": [], "upper_version": "2.2.0", "version": "2.2.0"}, "openstackdocstheme": {"eq_version": "", "ge_version": "1.17.0", "lt_version": "", "ne_version": [], "upper_version": "1.31.1", "version": "1.31.1"}, "testrepository": {"eq_version": "", "ge_version": "0.0.18", "lt_version": "", "ne_version": [], "upper_version": "0.0.20", "version": "0.0.20"}, "testscenarios": 
{"eq_version": "", "ge_version": "0.4", "lt_version": "", "ne_version": [], "upper_version": "0.5.0", "version": "0.5.0"}, "testtools": {"eq_version": "", "ge_version": "1.4.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.0", "version": "2.3.0"}, "xvfbwrapper": {"eq_version": "", "ge_version": "0.1.3", "lt_version": "", "ne_version": [], "upper_version": "0.2.9", "version": "0.2.9"}, "reno": {"eq_version": "", "ge_version": "1.6.2", "lt_version": "", "ne_version": [], "upper_version": "2.11.3", "version": "2.11.3"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/trove-tempest-plugin.json b/tools/oos/example/train_cached_file/trove-tempest-plugin.json deleted file mode 100644 index 20419bdddccf1acf83a8258840b2bd8c5401d7e3..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/trove-tempest-plugin.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "trove-tempest-plugin", "version_dict": {"version": "0.3.0", "eq_version": "0.3.0", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": ""}, "deep": {"count": 0, "list": ["trove-tempest-plugin"]}, "requires": {"pbr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": ["2.1.0"], "upper_version": "5.4.3", "version": "5.4.3"}, "six": {"eq_version": "", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "oslo.config": {"eq_version": "", "ge_version": "5.2.0", "lt_version": "", "ne_version": [], "upper_version": "6.11.3", "version": "6.11.3"}, "oslo.serialization": {"eq_version": "", "ge_version": "2.18.0", "lt_version": "", "ne_version": ["2.19.1"], "upper_version": "2.29.3", "version": "2.29.3"}, "testtools": {"eq_version": "", "ge_version": "2.2.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.0", "version": "2.3.0"}, "tempest": {"eq_version": "", "ge_version": "17.1.0", "lt_version": "", "ne_version": [], "upper_version": "22.1.0", "version": "22.1.0"}, "hacking": {"eq_version": "", "ge_version": "0.12.0", "lt_version": "0.13", "ne_version": [], "upper_version": "", "version": "0.12.0"}, "Sphinx": {"eq_version": "", "ge_version": "1.6.2", "lt_version": "", "ne_version": ["1.6.6", "1.6.7"], "upper_version": "2.2.0", "version": "2.2.0"}, "openstackdocstheme": {"eq_version": "", "ge_version": "1.18.1", "lt_version": "", "ne_version": [], "upper_version": "1.31.1", "version": "1.31.1"}, "reno": {"eq_version": "", "ge_version": "2.5.0", "lt_version": "", "ne_version": [], "upper_version": "2.11.3", "version": "2.11.3"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/trove.json b/tools/oos/example/train_cached_file/trove.json deleted file mode 100644 index 9191eecc02f800d0f39535ac227f8776eb4e986f..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/trove.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "trove", "version_dict": {"version": "12.1.0", "eq_version": "12.1.0", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": ""}, "deep": {"count": 0, "list": ["trove"]}, "requires": {"pbr": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": ["2.1.0"], "upper_version": "5.4.3", "version": "5.4.3"}, "SQLAlchemy": {"eq_version": "", "ge_version": "1.0.10", "lt_version": "", "ne_version": ["1.1.5", "1.1.6", "1.1.7", "1.1.8"], "upper_version": "1.3.8", "version": "1.3.8"}, "eventlet": {"eq_version": "", "ge_version": "0.18.2", "lt_version": "", "ne_version": ["0.18.3", "0.20.1"], 
"upper_version": "0.25.2", "version": "0.25.2"}, "keystonemiddleware": {"eq_version": "", "ge_version": "4.17.0", "lt_version": "", "ne_version": [], "upper_version": "7.0.1", "version": "7.0.1"}, "Routes": {"eq_version": "", "ge_version": "2.3.1", "lt_version": "", "ne_version": [], "upper_version": "2.4.1", "version": "2.4.1"}, "WebOb": {"eq_version": "", "ge_version": "1.7.1", "lt_version": "", "ne_version": [], "upper_version": "1.8.5", "version": "1.8.5"}, "PasteDeploy": {"eq_version": "", "ge_version": "1.5.0", "lt_version": "", "ne_version": [], "upper_version": "2.0.1", "version": "2.0.1"}, "Paste": {"eq_version": "", "ge_version": "2.0.2", "lt_version": "", "ne_version": [], "upper_version": "3.2.0", "version": "3.2.0"}, "sqlalchemy-migrate": {"eq_version": "", "ge_version": "0.11.0", "lt_version": "", "ne_version": [], "upper_version": "0.12.0", "version": "0.12.0"}, "netaddr": {"eq_version": "", "ge_version": "0.7.18", "lt_version": "", "ne_version": [], "upper_version": "0.7.19", "version": "0.7.19"}, "httplib2": {"eq_version": "", "ge_version": "0.9.1", "lt_version": "", "ne_version": [], "upper_version": "0.13.1", "version": "0.13.1"}, "lxml": {"eq_version": "", "ge_version": "3.4.1", "lt_version": "", "ne_version": ["3.7.0"], "upper_version": "4.4.1", "version": "4.4.1"}, "passlib": {"eq_version": "", "ge_version": "1.7.0", "lt_version": "", "ne_version": [], "upper_version": "1.7.1", "version": "1.7.1"}, "python-heatclient": {"eq_version": "", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "1.18.1", "version": "1.18.1"}, "python-novaclient": {"eq_version": "", "ge_version": "9.1.0", "lt_version": "", "ne_version": [], "upper_version": "15.1.1", "version": "15.1.1"}, "python-cinderclient": {"eq_version": "", "ge_version": "3.3.0", "lt_version": "", "ne_version": [], "upper_version": "5.0.2", "version": "5.0.2"}, "python-keystoneclient": {"eq_version": "", "ge_version": "3.8.0", "lt_version": "", "ne_version": [], "upper_version": "3.21.0", "version": "3.21.0"}, "python-swiftclient": {"eq_version": "", "ge_version": "3.2.0", "lt_version": "", "ne_version": [], "upper_version": "3.8.1", "version": "3.8.1"}, "python-designateclient": {"eq_version": "", "ge_version": "2.7.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.0", "version": "3.0.0"}, "python-neutronclient": {"eq_version": "", "ge_version": "6.7.0", "lt_version": "", "ne_version": [], "upper_version": "6.14.1", "version": "6.14.1"}, "python-glanceclient": {"eq_version": "", "ge_version": "2.8.0", "lt_version": "", "ne_version": [], "upper_version": "2.17.1", "version": "2.17.1"}, "iso8601": {"eq_version": "", "ge_version": "0.1.11", "lt_version": "", "ne_version": [], "upper_version": "0.1.12", "version": "0.1.12"}, "jsonschema": {"eq_version": "", "ge_version": "2.6.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.2", "version": "3.0.2"}, "Jinja2": {"eq_version": "", "ge_version": "2.10", "lt_version": "", "ne_version": [], "upper_version": "2.10.1", "version": "2.10.1"}, "pexpect": {"eq_version": "", "ge_version": "3.1", "lt_version": "", "ne_version": ["3.3"], "upper_version": "4.7.0", "version": "4.7.0"}, "oslo.config": {"eq_version": "", "ge_version": "5.2.0", "lt_version": "", "ne_version": [], "upper_version": "6.11.3", "version": "6.11.3"}, "oslo.context": {"eq_version": "", "ge_version": "2.19.2", "lt_version": "", "ne_version": [], "upper_version": "2.23.1", "version": "2.23.1"}, "oslo.i18n": {"eq_version": "", "ge_version": "3.15.3", "lt_version": "", 
"ne_version": [], "upper_version": "3.24.0", "version": "3.24.0"}, "oslo.middleware": {"eq_version": "", "ge_version": "3.31.0", "lt_version": "", "ne_version": [], "upper_version": "3.38.1", "version": "3.38.1"}, "oslo.serialization": {"eq_version": "", "ge_version": "2.18.0", "lt_version": "", "ne_version": ["2.19.1"], "upper_version": "2.29.3", "version": "2.29.3"}, "oslo.service": {"eq_version": "", "ge_version": "1.24.0", "lt_version": "", "ne_version": ["1.28.1"], "upper_version": "1.40.2", "version": "1.40.2"}, "oslo.upgradecheck": {"eq_version": "", "ge_version": "0.1.0", "lt_version": "", "ne_version": [], "upper_version": "0.3.2", "version": "0.3.2"}, "oslo.utils": {"eq_version": "", "ge_version": "3.33.0", "lt_version": "", "ne_version": [], "upper_version": "3.41.6", "version": "3.41.6"}, "oslo.concurrency": {"eq_version": "", "ge_version": "3.26.0", "lt_version": "", "ne_version": [], "upper_version": "3.30.1", "version": "3.30.1"}, "PyMySQL": {"eq_version": "", "ge_version": "0.7.6", "lt_version": "", "ne_version": [], "upper_version": "0.9.3", "version": "0.9.3"}, "Babel": {"eq_version": "", "ge_version": "2.3.4", "lt_version": "", "ne_version": ["2.4.0"], "upper_version": "2.7.0", "version": "2.7.0"}, "six": {"eq_version": "", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "stevedore": {"eq_version": "", "ge_version": "1.20.0", "lt_version": "", "ne_version": [], "upper_version": "1.31.0", "version": "1.31.0"}, "oslo.messaging": {"eq_version": "", "ge_version": "5.29.0", "lt_version": "", "ne_version": [], "upper_version": "10.2.4", "version": "10.2.4"}, "osprofiler": {"eq_version": "", "ge_version": "1.4.0", "lt_version": "", "ne_version": [], "upper_version": "2.8.2", "version": "2.8.2"}, "oslo.log": {"eq_version": "", "ge_version": "3.36.0", "lt_version": "", "ne_version": [], "upper_version": "3.44.3", "version": "3.44.3"}, "oslo.db": {"eq_version": "", "ge_version": "4.27.0", "lt_version": "", "ne_version": [], "upper_version": "5.0.2", "version": "5.0.2"}, "xmltodict": {"eq_version": "", "ge_version": "0.10.1", "lt_version": "", "ne_version": [], "upper_version": "0.12.0", "version": "0.12.0"}, "cryptography": {"eq_version": "", "ge_version": "2.1.4", "lt_version": "", "ne_version": [], "upper_version": "2.8", "version": "2.8"}, "oslo.policy": {"eq_version": "", "ge_version": "1.30.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.4", "version": "2.3.4"}, "diskimage-builder": {"eq_version": "", "ge_version": "1.1.2", "lt_version": "", "ne_version": ["1.6.0", "1.7.0", "1.7.1"], "upper_version": "2.30.0", "version": "2.30.0"}, "hacking": {"eq_version": "", "ge_version": "0.12.0", "lt_version": "0.14", "ne_version": ["0.13.0"], "upper_version": "", "version": "0.12.0"}, "bandit": {"eq_version": "", "ge_version": "1.1.0", "lt_version": "", "ne_version": [], "upper_version": "", "version": "1.1.0"}, "os-api-ref": {"eq_version": "", "ge_version": "1.4.0", "lt_version": "", "ne_version": [], "upper_version": "1.6.2", "version": "1.6.2"}, "reno": {"eq_version": "", "ge_version": "2.5.0", "lt_version": "", "ne_version": [], "upper_version": "2.11.3", "version": "2.11.3"}, "coverage": {"eq_version": "", "ge_version": "4.0", "lt_version": "", "ne_version": ["4.4"], "upper_version": "4.5.4", "version": "4.5.4"}, "nose": {"eq_version": "", "ge_version": "1.3.7", "lt_version": "", "ne_version": [], "upper_version": "1.3.7", "version": "1.3.7"}, "nosexcover": {"eq_version": "", "ge_version": "1.0.10", 
"lt_version": "", "ne_version": [], "upper_version": "1.0.11", "version": "1.0.11"}, "openstackdocstheme": {"eq_version": "", "ge_version": "1.18.1", "lt_version": "", "ne_version": [], "upper_version": "1.31.1", "version": "1.31.1"}, "openstack.nose_plugin": {"eq_version": "", "ge_version": "0.7", "lt_version": "", "ne_version": [], "upper_version": "", "version": "0.7"}, "WebTest": {"eq_version": "", "ge_version": "2.0.27", "lt_version": "", "ne_version": [], "upper_version": "2.0.33", "version": "2.0.33"}, "wsgi-intercept": {"eq_version": "", "ge_version": "1.4.1", "lt_version": "", "ne_version": [], "upper_version": "1.8.1", "version": "1.8.1"}, "proboscis": {"eq_version": "", "ge_version": "1.2.5.3", "lt_version": "", "ne_version": [], "upper_version": "1.2.6.0", "version": "1.2.6.0"}, "python-troveclient": {"eq_version": "", "ge_version": "2.2.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.1", "version": "3.0.1"}, "mock": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "3.0.5", "version": "3.0.5"}, "testtools": {"eq_version": "", "ge_version": "2.2.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.0", "version": "2.3.0"}, "pymongo": {"eq_version": "", "ge_version": "3.0.2", "lt_version": "", "ne_version": ["3.1"], "upper_version": "3.9.0", "version": "3.9.0"}, "redis": {"eq_version": "", "ge_version": "2.10.0", "lt_version": "", "ne_version": [], "upper_version": "3.3.8", "version": "3.3.8"}, "psycopg2": {"eq_version": "", "ge_version": "2.6.2", "lt_version": "", "ne_version": [], "upper_version": "2.8.3", "version": "2.8.3"}, "cassandra-driver": {"eq_version": "", "ge_version": "2.1.4", "lt_version": "", "ne_version": ["3.6.0"], "upper_version": "3.19.0", "version": "3.19.0"}, "CouchDB": {"eq_version": "", "ge_version": "0.8", "lt_version": "", "ne_version": [], "upper_version": "1.2", "version": "1.2"}, "stestr": {"eq_version": "", "ge_version": "1.1.0", "lt_version": "", "ne_version": [], "upper_version": "2.5.1", "version": "2.5.1"}, "doc8": {"eq_version": "", "ge_version": "0.6.0", "lt_version": "", "ne_version": [], "upper_version": "0.8.0", "version": "0.8.0"}, "astroid": {"eq_version": "1.6.5", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "1.6.5"}, "pylint": {"eq_version": "1.9.2", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "1.9.2"}, "oslotest": {"eq_version": "", "ge_version": "3.2.0", "lt_version": "", "ne_version": [], "upper_version": "3.8.1", "version": "3.8.1"}, "tenacity": {"eq_version": "", "ge_version": "4.9.0", "lt_version": "", "ne_version": [], "upper_version": "5.1.1", "version": "5.1.1"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/typed-ast.json b/tools/oos/example/train_cached_file/typed-ast.json deleted file mode 100644 index 96140d3e2c0fd9092a8c86ea38afbd7ac589bc53..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/typed-ast.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "typed-ast", "version_dict": {"version": "1.4.0", "eq_version": "", "ge_version": "1.4.0", "lt_version": "1.5.0", "ne_version": [], "upper_version": "1.4.0"}, "deep": {"count": 7, "list": ["aodh", "futurist", "hacking", "mock", "Sphinx", "sphinxcontrib-applehelp", "mypy", "typed-ast"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/typing-extensions.json b/tools/oos/example/train_cached_file/typing-extensions.json deleted file 
mode 100644 index 2fab72a626a68d76dd016371678fbb70fd12056e..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/typing-extensions.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "typing-extensions", "version_dict": {"version": "3.7.4", "eq_version": "", "ge_version": "3.7.4", "lt_version": "", "ne_version": [], "upper_version": "3.7.4"}, "deep": {"count": 7, "list": ["aodh", "futurist", "hacking", "mock", "Sphinx", "sphinxcontrib-applehelp", "mypy", "typing-extensions"]}, "requires": {"typing": {"eq_version": "", "ge_version": "3.7.4", "lt_version": "", "ne_version": [], "upper_version": "3.7.4.1", "version": "3.7.4.1"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/typing.json b/tools/oos/example/train_cached_file/typing.json deleted file mode 100644 index bbfee39c0c4dc24dabf8e8775009b1c9a0a02d9e..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/typing.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "typing", "version_dict": {"version": "3.7.4.1", "eq_version": "", "ge_version": "3.7.4", "lt_version": "", "ne_version": [], "upper_version": "3.7.4.1"}, "deep": {"count": 8, "list": ["aodh", "futurist", "hacking", "mock", "Sphinx", "sphinxcontrib-applehelp", "mypy", "typing-extensions", "typing"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/ujson.json b/tools/oos/example/train_cached_file/ujson.json deleted file mode 100644 index 8e8cbafdc928223621a2cd03b668c3195532b645..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/ujson.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "ujson", "version_dict": {"version": "1.35", "eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "1.35"}, "deep": {"count": 2, "list": ["aodh", "gnocchiclient", "ujson"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/unicodecsv.json b/tools/oos/example/train_cached_file/unicodecsv.json deleted file mode 100644 index be01acb2dc2543e88cc418a4bcccab87f052aa5d..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/unicodecsv.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "unicodecsv", "version_dict": {"version": "0.14.1", "eq_version": "", "ge_version": "0.8.0", "lt_version": "", "ne_version": [], "upper_version": "0.14.1"}, "deep": {"count": 8, "list": ["aodh", "futurist", "hacking", "oslosphinx", "openstackdocstheme", "os-api-ref", "stestr", "cliff", "unicodecsv"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/unittest2.json b/tools/oos/example/train_cached_file/unittest2.json deleted file mode 100644 index c69552e55059c88646be4f5dc2d3290065490aee..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/unittest2.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "unittest2", "version_dict": {"version": "1.1.0", "eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "1.1.0"}, "deep": {"count": 10, "list": ["aodh", "futurist", "hacking", "mock", "Sphinx", "sphinxcontrib-applehelp", "pytest", "pluggy", "importlib-metadata", "zipp", "unittest2"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/unknown b/tools/oos/example/train_cached_file/unknown deleted file mode 100644 index f6689591f4658aa3769b851c755ce1d49b3db843..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/unknown +++ 
/dev/null @@ -1,72 +0,0 @@ -discover -twine -wheel -blurb -zope.interface -Pympler -flaky -pretend -argcomplete -xmlschema -html5lib -pytest-flake8 -pytest-black-multipy -toml -jaraco.functools -jaraco.context -jaraco.collections -pytest-mypy -autocommand -pep517 -types-docutils -sphinx-inline-tabs -sphinxcontrib-towncrier -furo -flake8-2020 -incremental -pytest-xdist -pytest-localserver -pypiserver -xonsh -paver -jaraco.envs -pyobjc -docutils-stubs -purl -blinker -pep8-naming -collective.checkdocs -pycryptodome -rfc3987 -strict-rfc3339 -feedparser -sphinxcontrib-issuetracker -scripttest -uwsgi -flup -python-openid -WSGIProxy2 -pyquery -azure-storage-queue -aiohttp -twisted -treq -pyro4 -pure-sasl -objgraph -fastavro -avro-python3 -sphinxcontrib.autoprogram -cloud_sptheme -watchdog -pallets-sphinx-themes -sphinxcontrib-log-cabinet -sphinx-issues -python-dotenv -paste -repoze.who -pipreqs -requirementslib -pip-api -pytimeparse -python-snappy diff --git a/tools/oos/example/train_cached_file/uritemplate.json b/tools/oos/example/train_cached_file/uritemplate.json deleted file mode 100644 index 4d73215dcc0809f00cbc9db78d850b7496697df3..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/uritemplate.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "uritemplate", "version_dict": {"version": "3.0.0", "eq_version": "", "ge_version": "3.0.0", "lt_version": "4dev", "ne_version": [], "upper_version": "3.0.0"}, "deep": {"count": 2, "list": ["cinder", "google-api-python-client", "uritemplate"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/urllib3.json b/tools/oos/example/train_cached_file/urllib3.json deleted file mode 100644 index 4d8c85861e67025b0304165590f21a5400c089f2..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/urllib3.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "urllib3", "version_dict": {"version": "1.25.3", "eq_version": "", "ge_version": "1.21.1", "lt_version": "1.26", "ne_version": ["1.25.0", "1.25.1"], "upper_version": "1.25.3"}, "deep": {"count": 12, "list": ["aodh", "futurist", "hacking", "mock", "Sphinx", "sphinxcontrib-applehelp", "pytest", "pluggy", "importlib-metadata", "zipp", "jaraco.packaging", "requests", "urllib3"]}, "requires": {"brotlipy": {"eq_version": "", "ge_version": "0.6.0", "lt_version": "", "ne_version": [], "upper_version": "", "version": "0.6.0"}, "pyOpenSSL": {"eq_version": "", "ge_version": "0.14", "lt_version": "", "ne_version": [], "upper_version": "19.1.0", "version": "19.1.0"}, "cryptography": {"eq_version": "", "ge_version": "1.3.4", "lt_version": "", "ne_version": [], "upper_version": "2.8", "version": "2.8"}, "idna": {"eq_version": "", "ge_version": "2.0.0", "lt_version": "", "ne_version": [], "upper_version": "2.8", "version": "2.8"}, "certifi": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "2019.6.16", "version": "2019.6.16"}, "PySocks": {"eq_version": "", "ge_version": "1.5.6", "lt_version": "2.0", "ne_version": ["1.5.7"], "upper_version": "", "version": "1.5.6"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/vine.json b/tools/oos/example/train_cached_file/vine.json deleted file mode 100644 index db0efb285ce93f752cf754d47b278dd8c8dadda6..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/vine.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "vine", "version_dict": {"version": "1.3.0", "eq_version": "", "ge_version": "1.1.3", "lt_version": 
"5.0.0a1", "ne_version": [], "upper_version": "1.3.0"}, "deep": {"count": 4, "list": ["aodh", "keystonemiddleware", "oslo.messaging", "amqp", "vine"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/virtualenv.json b/tools/oos/example/train_cached_file/virtualenv.json deleted file mode 100644 index 3336f0438953a3033050d57ed0c81422a4ddaf67..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/virtualenv.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "virtualenv", "version_dict": {"version": "16.7.5", "eq_version": "", "ge_version": "13.0.0", "lt_version": "", "ne_version": [], "upper_version": "16.7.5"}, "deep": {"count": 6, "list": ["aodh", "futurist", "hacking", "mock", "Sphinx", "setuptools", "virtualenv"]}, "requires": {"Sphinx": {"eq_version": "", "ge_version": "1.8.0", "lt_version": "2", "ne_version": [], "upper_version": "2.2.0", "version": "2.2.0"}, "towncrier": {"eq_version": "", "ge_version": "18.5.0", "lt_version": "", "ne_version": [], "upper_version": "", "version": "18.5.0"}, "sphinx-rtd-theme": {"eq_version": "", "ge_version": "0.4.2", "lt_version": "1", "ne_version": [], "upper_version": "", "version": "0.4.2"}, "pytest": {"eq_version": "", "ge_version": "4.0.0", "lt_version": "5", "ne_version": [], "upper_version": "5.1.2", "version": "5.1.2"}, "coverage": {"eq_version": "", "ge_version": "4.5.0", "lt_version": "5", "ne_version": [], "upper_version": "4.5.4", "version": "4.5.4"}, "pytest-timeout": {"eq_version": "", "ge_version": "1.3.0", "lt_version": "2", "ne_version": [], "upper_version": "", "version": "1.3.0"}, "six": {"eq_version": "", "ge_version": "1.10.0", "lt_version": "2", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "pytest-xdist": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}, "pytest-localserver": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}, "pypiserver": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}, "xonsh": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/voluptuous.json b/tools/oos/example/train_cached_file/voluptuous.json deleted file mode 100644 index 5424397dee2f5f691ecbc7acbb7fbdd4366103fa..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/voluptuous.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "voluptuous", "version_dict": {"version": "0.11.7", "eq_version": "", "ge_version": "0.8.9", "lt_version": "", "ne_version": [], "upper_version": "0.11.7"}, "deep": {"count": 7, "list": ["aodh", "futurist", "hacking", "oslosphinx", "openstackdocstheme", "os-api-ref", "stestr", "voluptuous"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/waitress.json b/tools/oos/example/train_cached_file/waitress.json deleted file mode 100644 index 43df89fa17da12196e19691b02b9cc0cff808908..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/waitress.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "waitress", "version_dict": {"version": "1.3.1", "eq_version": "", "ge_version": "0.8.5", "lt_version": "", "ne_version": [], "upper_version": "1.3.1"}, "deep": {"count": 3, "list": ["aodh", "keystonemiddleware", "WebTest", "waitress"]}, 
"requires": {"Sphinx": {"eq_version": "", "ge_version": "1.8.1", "lt_version": "", "ne_version": [], "upper_version": "2.2.0", "version": "2.2.0"}, "docutils": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "0.15.2", "version": "0.15.2"}, "pylons-sphinx-themes": {"eq_version": "", "ge_version": "1.0.9", "lt_version": "", "ne_version": [], "upper_version": "", "version": "1.0.9"}, "nose": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "1.3.7", "version": "1.3.7"}, "coverage": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "4.5.4", "version": "4.5.4"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/warlock.json b/tools/oos/example/train_cached_file/warlock.json deleted file mode 100644 index aa8b2860c54181fcfb4ee01a5c653fd7ac2bc445..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/warlock.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "warlock", "version_dict": {"version": "1.3.3", "eq_version": "", "ge_version": "1.2.0", "lt_version": "2", "ne_version": [], "upper_version": "1.3.3"}, "deep": {"count": 13, "list": ["aodh", "futurist", "hacking", "oslosphinx", "openstackdocstheme", "os-api-ref", "stestr", "cliff", "stevedore", "bandit", "oslotest", "os-client-config", "python-glanceclient", "warlock"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/wcwidth.json b/tools/oos/example/train_cached_file/wcwidth.json deleted file mode 100644 index 43e9bac603514e84bc2ee667e2aaaeafde3c2982..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/wcwidth.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "wcwidth", "version_dict": {"version": "0.1.7", "eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "0.1.7"}, "deep": {"count": 7, "list": ["aodh", "futurist", "hacking", "mock", "Sphinx", "sphinxcontrib-applehelp", "pytest", "wcwidth"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/webcolors.json b/tools/oos/example/train_cached_file/webcolors.json deleted file mode 100644 index a2adfeb5967e01ad88c113bf82c16305debcb5b4..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/webcolors.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "webcolors", "version_dict": {"version": "1.10", "eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "1.10"}, "deep": {"count": 14, "list": ["aodh", "futurist", "hacking", "oslosphinx", "openstackdocstheme", "os-api-ref", "stestr", "cliff", "stevedore", "bandit", "oslotest", "os-client-config", "openstacksdk", "jsonschema", "webcolors"]}, "requires": {"six": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/websocket-client.json b/tools/oos/example/train_cached_file/websocket-client.json deleted file mode 100644 index a799cd1db0a1bb376b43f05306721454ad3719d2..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/websocket-client.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "websocket-client", "version_dict": {"version": "0.56.0", "eq_version": "", "ge_version": "0.44.0", "lt_version": "", "ne_version": [], "upper_version": "0.56.0"}, "deep": {"count": 4, "list": ["aodh", "gnocchiclient", 
"python-openstackclient", "python-zunclient", "websocket-client"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/websockify.json b/tools/oos/example/train_cached_file/websockify.json deleted file mode 100644 index 5c4ec1dc4870214027b144a13c7d5fac857dec92..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/websockify.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "websockify", "version_dict": {"version": "0.9.0", "eq_version": "", "ge_version": "0.8.0", "lt_version": "", "ne_version": [], "upper_version": "0.9.0"}, "deep": {"count": 1, "list": ["nova", "websockify"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/whereto.json b/tools/oos/example/train_cached_file/whereto.json deleted file mode 100644 index b8c393cc7fc4dc4d079d1ca03bdf1d358ad634ec..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/whereto.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "whereto", "version_dict": {"version": "0.4.0", "eq_version": "", "ge_version": "0.3.0", "lt_version": "", "ne_version": [], "upper_version": "0.4.0"}, "deep": {"count": 4, "list": ["aodh", "gnocchiclient", "python-openstackclient", "python-novaclient", "whereto"]}, "requires": {"pbr": {"eq_version": "", "ge_version": "2.0", "lt_version": "", "ne_version": [], "upper_version": "5.4.3", "version": "5.4.3"}, "python-pcre": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "0.7", "version": "0.7"}, "hacking": {"eq_version": "", "ge_version": "0.12.0", "lt_version": "0.13", "ne_version": [], "upper_version": "", "version": "0.12.0"}, "coverage": {"eq_version": "", "ge_version": "4.0", "lt_version": "", "ne_version": ["4.4"], "upper_version": "4.5.4", "version": "4.5.4"}, "python-subunit": {"eq_version": "", "ge_version": "0.0.18", "lt_version": "", "ne_version": [], "upper_version": "1.4.0", "version": "1.4.0"}, "Sphinx": {"eq_version": "", "ge_version": "1.6.2", "lt_version": "", "ne_version": [], "upper_version": "2.2.0", "version": "2.2.0"}, "oslotest": {"eq_version": "", "ge_version": "1.10.0", "lt_version": "", "ne_version": [], "upper_version": "3.8.1", "version": "3.8.1"}, "testrepository": {"eq_version": "", "ge_version": "0.0.18", "lt_version": "", "ne_version": [], "upper_version": "0.0.20", "version": "0.0.20"}, "testtools": {"eq_version": "", "ge_version": "1.4.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.0", "version": "2.3.0"}, "openstackdocstheme": {"eq_version": "", "ge_version": "1.17.0", "lt_version": "", "ne_version": [], "upper_version": "1.31.1", "version": "1.31.1"}, "reno": {"eq_version": "", "ge_version": "1.8.0", "lt_version": "", "ne_version": [], "upper_version": "2.11.3", "version": "2.11.3"}, "sphinxcontrib.autoprogram": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "", "version": "unknown"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/wrapt.json b/tools/oos/example/train_cached_file/wrapt.json deleted file mode 100644 index a1db2c56854ac63665f72a17c2c2779982e5997e..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/wrapt.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "wrapt", "version_dict": {"version": "1.11.2", "eq_version": "", "ge_version": "1.7.0", "lt_version": "", "ne_version": [], "upper_version": "1.11.2"}, "deep": {"count": 17, "list": ["aodh", "futurist", "hacking", "oslosphinx", 
"openstackdocstheme", "os-api-ref", "stestr", "cliff", "stevedore", "bandit", "oslotest", "os-client-config", "openstacksdk", "os-service-types", "keystoneauth1", "oslo.config", "debtcollector", "wrapt"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/wsgi-intercept.json b/tools/oos/example/train_cached_file/wsgi-intercept.json deleted file mode 100644 index 25b7e1886fc6fa8c006ed5971ce851ca47afb772..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/wsgi-intercept.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "wsgi-intercept", "version_dict": {"version": "1.8.1", "eq_version": "", "ge_version": "1.8.1", "lt_version": "", "ne_version": [], "upper_version": "1.8.1"}, "deep": {"count": 2, "list": ["ceilometer", "gabbi", "wsgi-intercept"]}, "requires": {"six": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "Sphinx": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "2.2.0", "version": "2.2.0"}, "pytest": {"eq_version": "", "ge_version": "2.4", "lt_version": "", "ne_version": [], "upper_version": "5.1.2", "version": "5.1.2"}, "httplib2": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "0.13.1", "version": "0.13.1"}, "requests": {"eq_version": "", "ge_version": "2.0.1", "lt_version": "", "ne_version": [], "upper_version": "2.22.0", "version": "2.22.0"}, "urllib3": {"eq_version": "", "ge_version": "1.11.0", "lt_version": "", "ne_version": [], "upper_version": "1.25.3", "version": "1.25.3"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/xattr.json b/tools/oos/example/train_cached_file/xattr.json deleted file mode 100644 index d2cfbf4a9c535e2105579256a0367777d7e8c865..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/xattr.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "xattr", "version_dict": {"version": "0.9.6", "eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "0.9.6"}, "deep": {"count": 10, "list": ["aodh", "futurist", "hacking", "oslosphinx", "openstackdocstheme", "os-api-ref", "stestr", "subunit2sql", "oslo.db", "pifpaf", "xattr"]}, "requires": {"cffi": {"eq_version": "", "ge_version": "1.0.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.3", "version": "1.12.3"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/xmltodict.json b/tools/oos/example/train_cached_file/xmltodict.json deleted file mode 100644 index 3975cf7586b5411f46cf915d80874fd94b4f612b..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/xmltodict.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "xmltodict", "version_dict": {"version": "0.12.0", "eq_version": "", "ge_version": "0.10.1", "lt_version": "", "ne_version": [], "upper_version": "0.12.0"}, "deep": {"count": 1, "list": ["trove", "xmltodict"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/xvfbwrapper.json b/tools/oos/example/train_cached_file/xvfbwrapper.json deleted file mode 100644 index c433d0307d8c123000bae1aa8c19dfba7da48e52..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/xvfbwrapper.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "xvfbwrapper", "version_dict": {"version": "0.2.9", "eq_version": "", "ge_version": "0.1.3", "lt_version": "", "ne_version": [], "upper_version": "0.2.9"}, "deep": 
{"count": 1, "list": ["horizon", "xvfbwrapper"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/yaql.json b/tools/oos/example/train_cached_file/yaql.json deleted file mode 100644 index b69aaefd14fefac51420fb5430da3ee6127d916e..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/yaql.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "yaql", "version_dict": {"version": "1.1.3", "eq_version": "", "ge_version": "1.1.3", "lt_version": "", "ne_version": [], "upper_version": "1.1.3"}, "deep": {"count": 4, "list": ["aodh", "gnocchiclient", "python-openstackclient", "python-muranoclient", "yaql"]}, "requires": {"pbr": {"eq_version": "", "ge_version": "1.8", "lt_version": "", "ne_version": [], "upper_version": "5.4.3", "version": "5.4.3"}, "Babel": {"eq_version": "", "ge_version": "1.3", "lt_version": "", "ne_version": [], "upper_version": "2.7.0", "version": "2.7.0"}, "python-dateutil": {"eq_version": "", "ge_version": "2.4.2", "lt_version": "", "ne_version": [], "upper_version": "2.8.0", "version": "2.8.0"}, "ply": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "3.11", "version": "3.11"}, "six": {"eq_version": "", "ge_version": "1.9.0", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}, "hacking": {"eq_version": "", "ge_version": "0.12.0", "lt_version": "0.14", "ne_version": ["0.13.0"], "upper_version": "", "version": "0.12.0"}, "coverage": {"eq_version": "", "ge_version": "3.6", "lt_version": "", "ne_version": [], "upper_version": "4.5.4", "version": "4.5.4"}, "fixtures": {"eq_version": "", "ge_version": "1.3.1", "lt_version": "", "ne_version": [], "upper_version": "3.0.0", "version": "3.0.0"}, "python-subunit": {"eq_version": "", "ge_version": "0.0.18", "lt_version": "", "ne_version": [], "upper_version": "1.4.0", "version": "1.4.0"}, "Sphinx": {"eq_version": "", "ge_version": "1.1.2", "lt_version": "1.3", "ne_version": ["1.2.0", "1.3b1"], "upper_version": "2.2.0", "version": "2.2.0"}, "oslosphinx": {"eq_version": "", "ge_version": "2.5.0", "lt_version": "", "ne_version": [], "upper_version": "4.18.0", "version": "4.18.0"}, "testrepository": {"eq_version": "", "ge_version": "0.0.18", "lt_version": "", "ne_version": [], "upper_version": "0.0.20", "version": "0.0.20"}, "testscenarios": {"eq_version": "", "ge_version": "0.4", "lt_version": "", "ne_version": [], "upper_version": "0.5.0", "version": "0.5.0"}, "testtools": {"eq_version": "", "ge_version": "1.4.0", "lt_version": "", "ne_version": [], "upper_version": "2.3.0", "version": "2.3.0"}, "reno": {"eq_version": "", "ge_version": "1.8.0", "lt_version": "", "ne_version": [], "upper_version": "2.11.3", "version": "2.11.3"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/zVMCloudConnector.json b/tools/oos/example/train_cached_file/zVMCloudConnector.json deleted file mode 100644 index f0bc9b19035028358c5db64fef79f522670aa77a..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/zVMCloudConnector.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "zVMCloudConnector", "version_dict": {"version": "1.4.1", "eq_version": "", "ge_version": "1.3.0", "lt_version": "", "ne_version": [], "upper_version": "1.4.1"}, "deep": {"count": 1, "list": ["nova", "zVMCloudConnector"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/zake.json b/tools/oos/example/train_cached_file/zake.json deleted file mode 100644 index 
170292b275c0f34a62e3d0f5e2d24ab388b38694..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/zake.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "zake", "version_dict": {"version": "0.2.2", "eq_version": "", "ge_version": "0.1.6", "lt_version": "", "ne_version": [], "upper_version": "0.2.2"}, "deep": {"count": 2, "list": ["aodh", "tooz", "zake"]}, "requires": {"kazoo": {"eq_version": "", "ge_version": "1.3.1", "lt_version": "", "ne_version": ["2.1"], "upper_version": "2.6.1", "version": "2.6.1"}, "six": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "1.12.0", "version": "1.12.0"}}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/zeroconf.json b/tools/oos/example/train_cached_file/zeroconf.json deleted file mode 100644 index d4ead2283101584410ca6ed0b6a2a2cf09724ccd..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/zeroconf.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "zeroconf", "version_dict": {"version": "0.23.0", "eq_version": "", "ge_version": "0.19.1", "lt_version": "", "ne_version": [], "upper_version": "0.23.0"}, "deep": {"count": 2, "list": ["ironic", "ironic-lib", "zeroconf"]}, "requires": {}} \ No newline at end of file diff --git a/tools/oos/example/train_cached_file/zipp.json b/tools/oos/example/train_cached_file/zipp.json deleted file mode 100644 index cafc75d3bd63e50d8c03b061a7f9c0c1ccb41ce9..0000000000000000000000000000000000000000 --- a/tools/oos/example/train_cached_file/zipp.json +++ /dev/null @@ -1 +0,0 @@ -{"name": "zipp", "version_dict": {"version": "0.6.0", "eq_version": "", "ge_version": "0.5", "lt_version": "", "ne_version": [], "upper_version": "0.6.0"}, "deep": {"count": 9, "list": ["aodh", "futurist", "hacking", "mock", "Sphinx", "sphinxcontrib-applehelp", "pytest", "pluggy", "importlib-metadata", "zipp"]}, "requires": {"more-itertools": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "7.2.0", "version": "7.2.0"}, "Sphinx": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "2.2.0", "version": "2.2.0"}, "jaraco.packaging": {"eq_version": "", "ge_version": "3.2", "lt_version": "", "ne_version": [], "upper_version": "", "version": "3.2"}, "rst.linker": {"eq_version": "", "ge_version": "1.9", "lt_version": "", "ne_version": [], "upper_version": "", "version": "1.9"}, "pathlib2": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "2.3.4", "version": "2.3.4"}, "contextlib2": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "0.5.5", "version": "0.5.5"}, "unittest2": {"eq_version": "", "ge_version": "", "lt_version": "", "ne_version": [], "upper_version": "1.1.0", "version": "1.1.0"}}} \ No newline at end of file diff --git a/tools/oos/oos/__init__.py b/tools/oos/oos/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/tools/oos/oos/cli.py b/tools/oos/oos/cli.py deleted file mode 100644 index b57ec414834350e9b3654900c48bf2ffde8b26b4..0000000000000000000000000000000000000000 --- a/tools/oos/oos/cli.py +++ /dev/null @@ -1,20 +0,0 @@ -import click - -from oos.commands.repo import cli as repo_cli -from oos.commands.environment import cli as environment_cli -from oos.commands.dependence import cli as dep_cli -from oos.commands.spec import cli as spec_cli - - -@click.group() -def run(): - pass - - -def 
main(): - # Add more command group if needed. - run.add_command(spec_cli.group) - run.add_command(dep_cli.group) - run.add_command(environment_cli.group) - run.add_command(repo_cli.group) - run() diff --git a/tools/oos/oos/commands/__init__.py b/tools/oos/oos/commands/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/tools/oos/oos/commands/dependence/__init__.py b/tools/oos/oos/commands/dependence/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/tools/oos/oos/commands/dependence/cli.py b/tools/oos/oos/commands/dependence/cli.py deleted file mode 100644 index 191c4a4f50b11e3490fd01f4fdf3de3e5135cbac..0000000000000000000000000000000000000000 --- a/tools/oos/oos/commands/dependence/cli.py +++ /dev/null @@ -1,143 +0,0 @@ -import csv -import json -import os -from pathlib import Path - -import click -from packaging import version as p_version - -from oos.common import gitee -from oos.common import utils - - -class CountDependence(object): - def __init__(self, output, token, location): - self.output = output + ".csv" if not output.endswith(".csv") else output - if not Path(location).exists(): - raise Exception("The cache folder doesn't exist") - self.location = location - self.token = token if token else os.environ.get("GITEE_PAT") - - def _generate_without_compare(self, file_list): - with open(self.output, "w") as csv_file: - writer = csv.writer(csv_file) - writer.writerow(["Project", "Version", "Requires", "Depth"]) - for file_name in file_list: - if file_name == 'unknown': - with open(self.location + '/' + file_name, 'r', encoding='utf-8') as fp: - for project in fp.readlines(): - writer.writerow([project.split('\n')[0], '', '', '']) - else: - with open(self.location + '/' + file_name, 'r', encoding='utf8') as fp: - project_dict = json.load(fp) - writer.writerow([ - project_dict['name'], - project_dict['version_dict']['version'], - project_dict['requires'].keys(), - project_dict['deep']['count'] - ]) - - def _get_repo_version(self, repo_name, compare_branch): - print('fetch %s info from gitee' % repo_name) - if not gitee.has_branch('src-openeuler', repo_name, compare_branch, self.token): - return '', False - repo_version = gitee.get_gitee_project_version('src-openeuler', repo_name, compare_branch, self.token) - return repo_version, True - - def _get_version_and_status(self, repo_name, project_version, project_eq_version, - project_lt_version, project_ne_version, project_upper_version, compare_branch): - if not repo_name: - return '', 'Need Create Repo' - repo_version, has_branch = self._get_repo_version(repo_name, compare_branch) - if not has_branch: - return '', 'Need Create Branch' - if not repo_version: - return '', 'Need Init Branch' - if p_version.parse(repo_version) == p_version.parse(project_version): - return repo_version, 'OK' - if project_upper_version: - if p_version.parse(repo_version) > p_version.parse(project_upper_version): - return repo_version, 'Need Downgrade' - else: - if p_version.parse(repo_version) > p_version.parse(project_version): - if project_version and project_version == project_eq_version: - status = 'Need Downgrade' - elif repo_version not in project_ne_version: - if not project_lt_version: - status = 'OK' - elif p_version.parse(repo_version) < p_version.parse(project_lt_version): - status = 'OK' - else: - status = 'Need Downgrade' - else: - status = 'Need Downgrade' - return 
repo_version, status - return repo_version,'Need Upgrade' - - def _generate_with_compare(self, file_list, compare_branch): - with open(self.output, "w") as csv_file: - writer=csv.writer(csv_file) - writer.writerow(["Project Name", "openEuler Repo", "SIG", "Repo version", - "Required (Min) Version", "lt Version", "ne Version", "Upper Version", "Status", - "Requires", "Depth"]) - for file_name in file_list: - with open(self.location + '/' + file_name, 'r', encoding='utf8') as fp: - if file_name == 'unknown': - project_list = [{'name': project} for project in fp.read().splitlines()] - else: - project_list = [json.load(fp)] - for project_dict in project_list: - project_name = project_dict['name'] - version_dict = project_dict.get('version_dict') - project_version = version_dict['version'] if version_dict else '' - project_eq_version = version_dict['eq_version'] if version_dict else '' - project_lt_version = version_dict['lt_version'] if version_dict else '' - project_ne_version = version_dict['ne_version'] if version_dict else [] - project_upper_version = version_dict['upper_version'] if version_dict else '' - requires = list(project_dict['requires'].keys()) if project_dict.get('requires') else [] - deep_count = project_dict['deep']['count'] if project_dict.get('deep') else '' - repo_name, sig = utils.get_openeuler_repo_name_and_sig(project_name) - repo_version, status = self._get_version_and_status(repo_name, - project_version, project_eq_version, project_lt_version, - project_ne_version, project_upper_version, compare_branch) - if project_version and project_version == project_eq_version: - project_version += '(Must)' - writer.writerow([ - project_name, - repo_name, - sig, - repo_version, - project_version, - project_lt_version, - project_ne_version, - project_upper_version, - status, - requires, - deep_count - ] - ) - - def get_all_dep(self, compare, compare_branch): - """fetch all related dependent packages""" - file_list = os.listdir(self.location) - if not compare: - self._generate_without_compare(file_list) - else: - self._generate_with_compare(file_list, compare_branch) - - -@click.group(name='dependence', help='package dependence related commands') -def group(): - pass - - -@group.command(name='generate', help='generate required package list for the specified OpenStack release') -@click.option('-c', '--compare', is_flag=True, help='Check the project in openEuler community or not') -@click.option('-cb', '--compare-branch', default='master', help='Branch to compare with') -@click.option('-o', '--output', default='result', help='Output file name, default: result.csv') -@click.option('-t', '--token', help='Personal gitee access token used for fetching info from gitee') -@click.argument('location', type=click.Path(dir_okay=True)) -def generate(compare, compare_branch, output, token, location): - myobj = CountDependence(output, token, location) - myobj.get_all_dep(compare, compare_branch) - print("Success generate dependencies, the result is saved into %s file" % output) diff --git a/tools/oos/oos/commands/environment/__init__.py b/tools/oos/oos/commands/environment/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/tools/oos/oos/commands/environment/cli.py b/tools/oos/oos/commands/environment/cli.py deleted file mode 100644 index 18d14909d4770cf894458d378b634d0c6cef4915..0000000000000000000000000000000000000000 --- a/tools/oos/oos/commands/environment/cli.py +++ /dev/null @@ -1,325 +0,0 @@ -import os 
-import platform -import sqlite3 -import subprocess -import time - -import click -from huaweicloudsdkcore.auth.credentials import BasicCredentials -from huaweicloudsdkcore.exceptions import exceptions -from huaweicloudsdkcore.http.http_config import HttpConfig -from huaweicloudsdkecs.v2 import * -import prettytable - -from oos.commands.environment import sqlite_ops -from oos.common import ANSIBLE_PLAYBOOK_DIR, ANSIBLE_INVENTORY_DIR, KEY_DIR, CONFIG - - -# TODO: Update the mapping or make it discoverable -OE_OS_RELEASE = { - '20.03-lts-sp1': ['train'], - '20.03-lts-sp2': ['rocky', 'queens'], - '20.03-lts-sp3': ['rocky', 'queens', 'train'], - '22.03-lts': ['train', 'wallaby'] -} -FLAVOR_MAPPING = { - 'small_x86': 'c6.large.2', - 'medium_x86': 'c6.xlarge.2', - 'large_x86': 'c6.2xlarge.2', - 'small_aarch64': 'kc1.large.2', - 'medium_aarch64': 'kc1.xlarge.2', - 'large_aarch64': 'kc1.2xlarge.2' -} - -IMAGE_MAPPING = { - '22.03-lts_x86': '399dcb80-53ed-495c-96c5-807bb2b134a0', - '22.03-lts_aarch64': 'cdf284dd-86fa-4d2d-be59-c317d9d59d51', - '20.03-lts-sp1_x86': "479b599f-2e7d-49d7-89ba-1c134d5a7eb3", - '20.03-lts-sp1_aarch64': "ee1c6b7e-fcc7-422a-aeee-2d62eb647703", - '20.03-lts-sp2_x86': "7db7ef61-9b3f-4a36-9525-ebe5257010cd", - '20.03-lts-sp2_aarch64': "fcbbd404-1945-4791-b8c2-98216dcf0eaa", - '20.03-lts-sp3_x86': '7f7961bf-2d5f-4370-ae07-03f33b0b3565', - '20.03-lts-sp3_aarch64': '1ec9b082-9166-473b-9f78-86ba37f0774a' -} - -VPC_ID = '288ffe75-a44e-4332-9fdc-435fd5fbe51b' -VPC_MAPPING = { - # vpc_id: sub_net_id - VPC_ID: ['08dbb5f3-329f-4c08-9f1e-038eabef7d44', '1987d67b-1299-46b2-8f83-bc4149f1796b'] -} - -TABLE_COLUMN = ['Provider', 'Name', 'UUID', 'IP', 'Flavor', 'openEuler_release', 'OpenStack_release', 'create_time'] - -OPENEULER_DEFAULT_USER = "root" -OPENEULER_DEFAULT_PASSWORD = "openEuler12#$" - - -@click.group(name='env', help='OpenStack Cluster Action') -def group(): - pass - - -def _init_ecs_client(): - # TODO: 支持更多provider,插件化 - provider = CONFIG.get('provider', 'driver', fallback='huaweicloud') - ak = CONFIG.get(provider, 'ak') - sk = CONFIG.get(provider, 'sk') - project_id = CONFIG.get(provider, 'project_id') - endpoint = CONFIG.get(provider, 'endpoint') - if not ak or not sk: - raise click.ClickException("No credentials info provided") - if not project_id or not endpoint : - raise click.ClickException("No project id or endpoint provided") - config = HttpConfig.get_default_config() - credentials = BasicCredentials(ak, sk, project_id) - - ecs_client = EcsClient.new_builder() \ - .with_http_config(config) \ - .with_credentials(credentials) \ - .with_endpoint(endpoint) \ - .build() - return provider, ecs_client - - -@group.command(name='list', help='List environment') -def list(): - table = prettytable.PrettyTable(TABLE_COLUMN) - res = sqlite_ops.list_targets() - for raw in res: - table.add_row(raw) - print(table) - - -@group.command(name='create', help='Create environment') -@click.option('-r', '--release', required=True, - type=click.Choice(OE_OS_RELEASE.keys())) -@click.option('-f', '--flavor', required=True, - type=click.Choice(['small', 'medium', 'large'])) -@click.option('-a', '--arch', required=True, - type=click.Choice(['x86', 'aarch64'])) -@click.option('-n', '--name', required=True, - help='The cluster/all_in_one name') -@click.argument('target', type=click.Choice(['cluster', 'all_in_one'])) -def create(release, flavor, arch, name, target): - # TODO: - # 1. 
支持秘钥注入,当前openEuler云镜像不支持该功能 - if name in ['all_in_one', 'cluster']: - raise click.ClickException("Can not name all_in_one or cluster.") - vm = sqlite_ops.get_target_column(target=name, col_name='*') - if vm: - raise click.ClickException("The target name should be unique.") - - find_sshpass = subprocess.getoutput("which sshpass") - has_sshpass = find_sshpass and find_sshpass.find("no sshpass") == -1 - if not has_sshpass: - print("Warning: sshpass is not installed. It'll fail to sync " - "key-pair to the target VMs. Please do the sync step by hand.") - provider, ecs_client = _init_ecs_client() - request = CreateServersRequest() - listPrePaidServerDataVolumeDataVolumesServer = [ - PrePaidServerDataVolume( - volumetype="SAS", - size=100 - ), - PrePaidServerDataVolume( - volumetype="SAS", - size=100 - ) - ] - rootVolumePrePaidServerRootVolume = PrePaidServerRootVolume( - volumetype="SAS", - size=100 - ) - listPrePaidServerSecurityGroupSecurityGroupsServer = [ - PrePaidServerSecurityGroup( - id="fc28e87a-819e-42c5-8015-28f07e671842" - ) - ] - bandwidthPrePaidServerEipBandwidth = PrePaidServerEipBandwidth( - sharetype="PER", - size=1 - ) - eipPrePaidServerEip = PrePaidServerEip( - iptype="5_bgp", - bandwidth=bandwidthPrePaidServerEipBandwidth - ) - publicipPrePaidServerPublicip = PrePaidServerPublicip( - eip=eipPrePaidServerEip - ) - listPrePaidServerNicNicsServer = [ - PrePaidServerNic( - subnet_id=VPC_MAPPING[VPC_ID][0] - ), - PrePaidServerNic( - subnet_id=VPC_MAPPING[VPC_ID][1] - ) - ] - serverPrePaidServer = PrePaidServer( - image_ref=IMAGE_MAPPING[f"{release}_{arch}"], - flavor_ref=FLAVOR_MAPPING[f"{flavor}_{arch}"], - name=f"{name}_oos_vm", - vpcid=VPC_ID, - nics=listPrePaidServerNicNicsServer, - publicip=publicipPrePaidServerPublicip, - count=1 if target == 'all_in_one' else 3, - is_auto_rename=False, - security_groups=listPrePaidServerSecurityGroupSecurityGroupsServer, - root_volume=rootVolumePrePaidServerRootVolume, - data_volumes=listPrePaidServerDataVolumeDataVolumesServer - ) - request.body = CreateServersRequestBody( - server=serverPrePaidServer - ) - print("Creating target VMs") - response = ecs_client.create_servers(request) - table = prettytable.PrettyTable(TABLE_COLUMN) - for server_id in response.server_ids: - while True: - print("Waiting for the VM becoming active") - ip = None - created = None - try: - request = ShowServerRequest() - request.server_id = server_id - response = ecs_client.show_server(request) - except exceptions.ClientRequestException as ex: - if ex.status_code == 404: - time.sleep(3) - continue - for _, addresses in response.server.addresses.items(): - for address in addresses: - if address.os_ext_ip_stype == 'floating': - ip = address.addr - created = response.server.created - break - if ip and created: - break - time.sleep(3) - print("Success created the target VMs") - if has_sshpass: - print("Preparing the mutual trust for ssh") - cmds = [f'ssh-keygen -f ~/.ssh/known_hosts -R "{ip}"', - f'ssh-keygen -R "{ip}"', - f'sshpass -p {OPENEULER_DEFAULT_PASSWORD} ssh-copy-id -i "{KEY_DIR}/id_rsa.pub" -o StrictHostKeyChecking=no "{OPENEULER_DEFAULT_USER}@{ip}"'] - for cmd in cmds: - subprocess.getoutput(cmd) - print(f"All is done, you can now login the target with the key in " - f"{KEY_DIR}") - sqlite_ops.insert_target(provider, name, server_id, ip, flavor, release, None, created) - table.add_row([provider, name, server_id, ip, flavor, release, None, created]) - print(table) - - -@group.command(name='delete', - help='Delete environment by cluster/all_in_one name') 
-@click.argument('name', type=str) -def delete(name): - _, ecs_client = _init_ecs_client() - server_info = sqlite_ops.get_target_column(name, 'uuid') - for server_id in server_info: - request = DeleteServersRequest() - listServerIdServersbody = [ - ServerId( - id=server_id[0] - ) - ] - request.body = DeleteServersRequestBody( - servers=listServerIdServersbody, - delete_volume=True, - delete_publicip=True - ) - response = ecs_client.delete_servers(request) - print(response) - sqlite_ops.delete_target(name) - - -def _run_action(target, action): - ips = sqlite_ops.get_target_column(target, 'ip') - if len(ips) == 1: - os.environ.setdefault('CONTROLLER_IP', ips[0][0]) - os.environ.setdefault('OOS_ENV_TYPE', 'all_in_one') - elif len(ips) == 3: - os.environ.setdefault('CONTROLLER_IP', ips[0][0]) - os.environ.setdefault('COMPUTE01_IP', ips[1][0]) - os.environ.setdefault('COMPUTE02_IP', ips[2][0]) - os.environ.setdefault('OOS_ENV_TYPE', 'cluster') - else: - raise click.ClickException(f"Can't find the environment {target}") - inventory_file = os.path.join(ANSIBLE_INVENTORY_DIR, 'oos_inventory.py') - playbook_entry = os.path.join(ANSIBLE_PLAYBOOK_DIR, f'{action}.yaml') - private_key = os.path.join(KEY_DIR, 'id_rsa') - user = 'root' - - if 'openEuler' in platform.platform() or 'oe1' in platform.platform(): - os.chmod(private_key, 0o400) - - cmd = ['ansible-playbook', '-i', inventory_file, - '--private-key', private_key, - '--user', user, - playbook_entry] - print(cmd) - subprocess.call(cmd) - - -@group.command(name='setup', help='Setup OpenStack Cluster') -@click.option('-r', '--release', required=True, - help='OpenStack release to install, like train, wallaby...') -@click.argument('target') -def setup(release, target): - oe = sqlite_ops.get_target_column(target, 'openEuler_release')[0][0] - if release.lower() not in OE_OS_RELEASE[oe]: - print("%s does not support openstack %s" % (oe, release)) - return - if target in ['all_in_one', 'cluster']: - inventory_file = os.path.join(ANSIBLE_INVENTORY_DIR, target+'.yaml') - playbook_entry = os.path.join(ANSIBLE_PLAYBOOK_DIR, 'entry.yaml') - cmd = ['ansible-playbook', '-i', inventory_file, playbook_entry] - subprocess.call(cmd) - else: - os.environ.setdefault('OpenStack_Release', release.lower()) - os.environ.setdefault('keypair_dir', KEY_DIR) - _run_action(target, 'entry') - sql = 'UPDATE resource SET openstack_release=?' 
- sqlite_ops.exe_sql(sql, (release.lower(),)) - - -@group.command(name='init', - help='Initialize the base OpenStack resource for the Cluster') -@click.argument('target') -def init(target): - if target in ['all_in_one', 'cluster']: - inventory_file = os.path.join(ANSIBLE_INVENTORY_DIR, target+'.yaml') - playbook_entry = os.path.join(ANSIBLE_PLAYBOOK_DIR, 'init.yaml') - cmd = ['ansible-playbook', '-i', inventory_file, playbook_entry] - subprocess.call(cmd) - else: - _run_action(target, 'init') - - -@group.command(name='test', help='Run tempest on the Cluster') -@click.argument('target') -def test(target): - if target in ['all_in_one', 'cluster']: - inventory_file = os.path.join(ANSIBLE_INVENTORY_DIR, target+'.yaml') - playbook_entry = os.path.join(ANSIBLE_PLAYBOOK_DIR, 'test.yaml') - cmd = ['ansible-playbook', '-i', inventory_file, playbook_entry] - subprocess.call(cmd) - else: - _run_action(target, 'test') - - -@group.command(name='clean', help='Clean up the Cluster') -@click.argument('target') -def clean(target): - if target in ['all_in_one', 'cluster']: - inventory_file = os.path.join(ANSIBLE_INVENTORY_DIR, target+'.yaml') - playbook_entry = os.path.join(ANSIBLE_PLAYBOOK_DIR, 'cleanup.yaml') - cmd = ['ansible-playbook', '-i', inventory_file, playbook_entry] - subprocess.call(cmd) - else: - res = sqlite_ops.get_target_column(target, 'openstack_release') - os.environ.setdefault('OpenStack_Release', res[0][0]) - _run_action(target, 'cleanup') - sql = 'UPDATE resource SET openstack_release=?' - sqlite_ops.exe_sql(sql, (None,)) - os.environ.pop('OpenStack_Release') diff --git a/tools/oos/oos/commands/environment/sqlite_ops.py b/tools/oos/oos/commands/environment/sqlite_ops.py deleted file mode 100644 index c74b529918c82afc884aea74bbdbfc21ee763b90..0000000000000000000000000000000000000000 --- a/tools/oos/oos/commands/environment/sqlite_ops.py +++ /dev/null @@ -1,52 +0,0 @@ -import sqlite3 - -from oos.common import SQL_DB - - -def exe_query_sql(sql, *args): - connect = sqlite3.connect(SQL_DB) - cur = connect.cursor() - try: - cur.execute(sql, *args) - result = cur.fetchall() - except Exception as e: - print(e) - finally: - cur.close() - connect.close() - return result - - -def exe_sql(sql, *args): - connect = sqlite3.connect(SQL_DB) - cur = connect.cursor() - try: - cur.execute(sql, *args) - connect.commit() - except Exception as e: - connect.rollback() - print(e) - finally: - cur.close() - connect.close() - - -def get_target_column(target, col_name): - sql = "SELECT %s from resource where name=?" % col_name - return exe_query_sql(sql, (target,)) - - -def delete_target(target): - sql = "DELETE from resource where name=?" 
- exe_sql(sql, (target,)) - - -def list_targets(): - sql = 'SELECT * FROM resource ORDER BY create_time' - return exe_query_sql(sql) - - -def insert_target(*args): - sql = "INSERT INTO resource VALUES (?,?,?,?,?,?,?,?)" - exe_sql(sql, args) - diff --git a/tools/oos/oos/commands/repo/__init__.py b/tools/oos/oos/commands/repo/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/tools/oos/oos/commands/repo/cli.py b/tools/oos/oos/commands/repo/cli.py deleted file mode 100644 index 9a4c6402f3741ea1a64a2af2603331985c41289b..0000000000000000000000000000000000000000 --- a/tools/oos/oos/commands/repo/cli.py +++ /dev/null @@ -1,493 +0,0 @@ -#!/usr/bin/env python3 - -import os -import shutil - -import click -import csv -import pandas -import yaml - -from bs4 import BeautifulSoup -from functools import partial -from multiprocessing import Pool - -from oos.commands.repo.repo_class import PkgGitRepo -from oos.common import gitee -from oos.common import OPENEULER_SIG_REPO -from pathlib import Path - - -def __get_repos(repo_name, repos_file): - if not (repo_name or repos_file): - raise click.ClickException( - "You must specify repos file or specific repo name!") - - repos = set() - if repo_name: - repos.add(repo_name) - else: - repo_data = pandas.read_csv(repos_file) - repos_df = pandas.DataFrame(repo_data, columns=["repo_name"]) - if repos_df.empty: - raise click.ClickException( - "You must specify repos file or specific repo name!") - for row in repos_df.itertuples(): - repos.add(row.repo_name) - return repos - - -def __prepare_local_repo(gitee_pat, gitee_email, work_branch, - repo_org, repo_name, repo_path): - git_user, g_email = gitee.get_user_info(gitee_pat) - git_email = gitee_email or g_email - if not git_email: - raise click.ClickException( - "Your email was not publicized in gitee, need to manually " - "specified by -e or --gitee-email") - - local_repo = PkgGitRepo(gitee_pat, repo_org, - git_user, git_email, - repo_name=repo_name) - if repo_path and os.path.exists(repo_path): - local_repo.repo_dir = repo_path - else: - repo_dir = os.path.join(Path.home(), repo_name) - if os.path.exists(repo_dir): - local_repo.repo_dir = repo_dir - else: - local_repo.fork_repo() - local_repo.clone_repo(str(Path.home())) - local_repo.add_branch(work_branch, 'master') - return local_repo - - -def __find_repo_yaml_file(repo_name, community_path, gitee_org): - file_name = repo_name + '.yaml' - cmd = 'find %(community_path)s -name ' \ - '"%(file_name)s" |grep %(gitee_org)s' % { - "community_path": community_path, - "file_name": file_name, - "gitee_org": gitee_org} - lines = os.popen(cmd).readlines() - if not lines: - print('Can not find yaml file for repo %s in community' % repo_name) - return - - return lines[0][:-1] - - -def __parse_project_from_branch(branch, is_mainline=False): - meta_dir_base = 'OBS_PRJ_meta' - is_multi = False - - if branch == 'master': - main_pro = 'openEuler' - obs_pro = main_pro + ':Mainline' if is_mainline else ':Epol' - elif 'oepkg' in branch: - parts = branch.split('_') - main_pro = parts[2].replace('oe', 'openEuler').replace('-', ':') - stack = parts[1].replace('-', ':') - obs_pro = main_pro + ':' + parts[0] + ':' + stack - elif 'Multi-Version' in branch: - is_multi = True - parts = branch.split('_') - main_pro = parts[2].replace('-', ':') - stack = parts[1].replace('-', ':') - obs_pro = main_pro + ':Epol:' + parts[0] + ':' + stack - else: - main_pro = obs_pro = branch.replace('-', ':') - if not 
is_mainline: - obs_pro = main_pro + ":Epol" - - meta_dir = os.path.join(meta_dir_base, branch) - if is_multi: - meta_dir = os.path.join(meta_dir_base, 'multi_version', branch) - - return main_pro, obs_pro, meta_dir, is_multi - - -def __prepare_obs_project(obs_dir, branch, is_mainline=False, - gitee_user=None): - main_pro, obs_pro, meta_dir, multi = __parse_project_from_branch( - branch, is_mainline) - - if multi: - project_dir = os.path.join(obs_dir, 'multi_version', - branch, obs_pro) - else: - project_dir = os.path.join(obs_dir, branch, obs_pro) - - if not os.path.exists(project_dir): - # the obs project does not exist, create it first - os.makedirs(project_dir) - meta_dir = os.path.join(obs_dir, meta_dir) - os.mkdir(meta_dir) - prefix = main_pro.replace(':', '_').lower() - mpro = ' \n' - ' \n' - ' <description/>\n' - ' <person userid="Admin" role="maintainer"/>\n' - ' <person userid="%(gitee_user)s" role="maintainer"/>\n' - ' <build>\n' - ' <enable/>\n' - ' </build>\n' - ' <repository name="standard_x86_64">\n' - '%(mpro)s %(x86_repo)s' - '%(mpro)s %(x86_epol)s' - ' <arch>x86_64</arch>\n' - ' </repository>\n' - ' <repository name="standard_aarch64">\n' - '%(mpro)s %(aarch64_repo)s' - '%(mpro)s %(aarch64_epol)s' - ' <arch>aarch64</arch>\n' - ' </repository>\n' - '</project>\n' % {'obs_pro': obs_pro, - 'gitee_user': gitee_user, - 'mpro': mpro, - 'x86_repo': x86_repo, - 'aarch64_repo': aarch64_repo, - 'x86_epol': x86_epol, - 'aarch64_epol': aarch64_epol}) - return project_dir - - -def __get_failed_info(repo, gitee_org, param): - repo_obj = PkgGitRepo(gitee_org=gitee_org, repo_name=repo) - prs = repo_obj.get_pr_list(param) - results = [] - - # 筛选失败信息 - for pr in prs: - try: - if list(filter(lambda label: label['name'] == 'ci_successful', - pr['labels'])): - continue - except Exception: - return results - - # PR链接 责任人 - result = [repo, pr['html_url'], pr['user']['name']] - comments = repo_obj.pr_get_comments(str(pr['number'])) - - table = [] - for com in comments: - if com['body'].startswith('<table>'): - i = 0 - rows = BeautifulSoup(com['body'], 'lxml').select('tr')[1:] - for row in rows: - cols = row.find_all('td') - cols = [cols[0].text.strip(), cols[1].text.strip().split(':')[-1]] - a = row.find_all('a') - cols.append(table[i - 1][2] if len(a) == 0 else a[0].get('href')) - # cols[0]-信息 cols[1]-Failed cols[2]-链接 - table.append(cols) - i += 1 - break - - summary = ' '.join([x[0] for x in filter(lambda row: row[1] == 'FAILED', table)]) - link = ' '.join([x[2] for x in filter(lambda row: row[1] == 'FAILED', table)]) - - result.append(summary) - result.append(link) - results.append(result) - - return results - - -@click.group(name='repo', help='Management for openEuler repositories') -def group(): - pass - - -@group.command(name="branch-create", help='Create branches for repos') -@click.option("-rf", "--repos-file", - help="File of openEuler repos in csv, includes 'repo_name' " - "column now") -@click.option("-r", "--repo", help="Repo name to create branch") -@click.option("-b", "--branches", nargs=3, type=click.Tuple([str, str, str]), - multiple=True, required=True, - help="Branch info to create for openEuler repos, the format is: " - "'-b branch-name branch-type(always is 'protected') " - "parent-branch' you can specify multiple times for this") -@click.option("-t", "--gitee-pat", envvar='GITEE_PAT', required=True, - help="Gitee personal access token") -@click.option("-e", "--gitee-email", envvar='GITEE_EMAIL', - help="Email address for git commit changes, automatically " - "query from 
gitee if you have public in gitee") -@click.option("-o", "--gitee-org", envvar='GITEE_ORG', required=True, - default="src-openeuler", show_default=True, - help="Gitee organization name of repos") -@click.option("--community-path", - help="Path of openeuler/community in local") -@click.option("-w", "--work-branch", default='openstack-create-branch', - help="Local working branch of openeuler/community") -@click.option('-dp', '--do-push', is_flag=True, - help="Do PUSH or not, if push it will create pr") -def branch_create(repos_file, repo, branches, gitee_pat, gitee_email, - gitee_org, community_path, work_branch, do_push): - repos = __get_repos(repo, repos_file) - community_repo = __prepare_local_repo( - gitee_pat, gitee_email, work_branch, - 'openeuler', 'community', community_path) - - for repo in repos: - yaml_file = __find_repo_yaml_file( - repo, community_repo.repo_dir, gitee_org) - if not yaml_file: - continue - - with open(yaml_file, 'r', encoding='utf-8') as f: - data = yaml.load(f, Loader=yaml.FullLoader) - for bn, bt, bp in branches: - for exist in data['branches']: - if exist['name'] == bn: - print('The branch %s of %s is already exist' % ( - bn, data['name'])) - break - else: - print('Create branch %s for %s' % (bn, data['name'])) - data['branches'].append({'name': bn, - 'type': bt, - 'create_from': bp}) - - with open(yaml_file, 'w', encoding='utf-8') as nf: - yaml.dump(data, nf, default_flow_style=False, sort_keys=False) - - commit_msg = 'Create branches for OpenStack packages' - community_repo.commit(commit_msg, do_push) - if do_push: - community_repo.create_pr(work_branch, 'master', commit_msg) - - -@group.command(name="branch-delete", help='Delete branches for repos') -@click.option("-rf", "--repos-file", - help="File of openEuler repos in csv, includes 'repo_name' " - "column now") -@click.option("-r", "--repo", help="Repo name to delete branch") -@click.option("-b", "--branch", multiple=True, required=True, - help="Branch name to delete for openEuler repos, " - "you can specify multiple times for this") -@click.option("-t", "--gitee-pat", envvar='GITEE_PAT', required=True, - help="Gitee personal access token") -@click.option("-e", "--gitee-email", envvar='GITEE_EMAIL', - help="Email address for git commit changes, automatically " - "query from gitee if you have public in gitee") -@click.option("-o", "--gitee-org", envvar='GITEE_ORG', required=True, - default="src-openeuler", show_default=True, - help="Gitee organization name of repos") -@click.option("--community-path", - help="Path of openeuler/community in local") -@click.option("-w", "--work-branch", default='openstack-delete-branch', - help="Local working branch of openeuler/community") -@click.option('-dp', '--do-push', is_flag=True, - help="Do PUSH or not, if push it will create pr") -def branch_delete(repos_file, repo, branch, gitee_pat, gitee_email, - gitee_org, community_path, work_branch, do_push): - repos = __get_repos(repo, repos_file) - community_repo = __prepare_local_repo( - gitee_pat, gitee_email, work_branch, - 'openeuler', 'community', community_path) - - for repo in repos: - yaml_file = __find_repo_yaml_file( - repo, community_repo.repo_dir, gitee_org) - - with open(yaml_file, 'r', encoding='utf-8') as f: - data = yaml.load(f, Loader=yaml.FullLoader) - for bn in branch: - for exist in data['branches'][::]: - if exist['name'] == bn: - data['branches'].remove(exist) - print('Delete the branch %s for %s successful!' 
% - (bn, data['name'])) - break - else: - print('Can not delete branch %s for %s: not exist' % - (bn, data['name'])) - - with open(yaml_file, 'w', encoding='utf-8') as nf: - yaml.dump(data, nf, default_flow_style=False, sort_keys=False) - - commit_msg = 'Delete branches for OpenStack packages' - community_repo.commit(commit_msg, do_push) - if do_push: - community_repo.create_pr(work_branch, 'master', commit_msg) - - -@group.command(name="obs-create", help='Add repos into OBS project') -@click.option("-rf", "--repos-file", - help="File of openEuler repos in csv, includes 'repo_name' " - "column now") -@click.option("-r", "--repo", help="Repo name to put into OBS project") -@click.option("-b", "--branch", required=True, - help="The branch name of repo to put into OBS project") -@click.option("--mainline", is_flag=True, - help='Whether to put repo into mainline of project') -@click.option("-t", "--gitee-pat", envvar='GITEE_PAT', required=True, - help="Gitee personal access token") -@click.option("-e", "--gitee-email", envvar='GITEE_EMAIL', - help="Email address for git commit changes, automatically " - "query from gitee if you have public in gitee") -@click.option("--obs-path", - help="Path of src-openeuler/obs_meta in local") -@click.option("-w", "--work-branch", default='obs-add-repo', - help="Local working branch of src-openeuler/obs_meta") -@click.option('-dp', '--do-push', is_flag=True, - help="Do PUSH or not, if push it will create pr") -def obs_create(repos_file, repo, branch, mainline, gitee_pat, gitee_email, - obs_path, work_branch, do_push): - repos = __get_repos(repo, repos_file) - obs_repo = __prepare_local_repo( - gitee_pat, gitee_email, work_branch, - 'src-openeuler', 'obs_meta', obs_path) - - project_dir = __prepare_obs_project(obs_repo.repo_dir, - branch, mainline, - obs_repo.gitee_user) - - for repo in repos: - repo_dir = os.path.join(project_dir, repo) - if os.path.exists(repo_dir): - print("The repo %s is already in project %s" % ( - repo, project_dir)) - continue - os.mkdir(repo_dir) - _service_file = os.path.join(repo_dir, '_service') - with open(_service_file, 'w', encoding='utf-8') as f: - f.write('<services>\n' - ' <service name="tar_scm_kernel_repo">\n' - ' <param name="scm">repo</param>\n' - ' <param name="url">next/%s/%s</param>\n' - ' </service>\n' - '</services>\n' % (branch, repo)) - - commit_msg = 'Put repos into OBS project' - obs_repo.commit(commit_msg, do_push) - if do_push: - obs_repo.create_pr(work_branch, 'master', commit_msg) - - -@group.command(name="obs-delete", help='Remove repos from OBS project') -@click.option("-rf", "--repos-file", - help="File of openEuler repos in csv, includes 'repo_name' " - "column now") -@click.option("-r", "--repo", help="Repo name to remove from OBS project") -@click.option("-b", "--branch", required=True, - help="The branch name of repo to remove from OBS project") -@click.option("-t", "--gitee-pat", envvar='GITEE_PAT', required=True, - help="Gitee personal access token") -@click.option("-e", "--gitee-email", envvar='GITEE_EMAIL', - help="Email address for git commit changes, automatically " - "query from gitee if you have public in gitee") -@click.option("--obs-path", - help="Path of src-openeuler/obs_meta in local") -@click.option("-w", "--work-branch", default='obs-remove-repo', - help="Local working branch of src-openeuler/obs_meta") -@click.option('-dp', '--do-push', is_flag=True, - help="Do PUSH or not, if push it will create pr") -def obs_delete(repos_file, repo, branch, gitee_pat, gitee_email, - obs_path, 
work_branch, do_push): - repos = __get_repos(repo, repos_file) - obs_repo = __prepare_local_repo( - gitee_pat, gitee_email, work_branch, - 'src-openeuler', 'obs_meta', obs_path) - - branch_dir = os.path.join(obs_repo.repo_dir, branch) - if not os.path.exists(branch_dir): - print("The branch %s does not exist in obs %s" % ( - branch, obs_repo.repo_dir)) - return - for repo in repos: - cmd = 'find %s -name %s' % (branch_dir, repo) - lines = os.popen(cmd).readlines() - if not lines: - print("The repo %s does not exist under branch %s" % ( - repo, branch_dir)) - continue - repo_dir = lines[0][:-1] - shutil.rmtree(repo_dir) - print("Remove repo %s successful!!" % repo_dir) - - commit_msg = 'Remove repos from OBS project' - obs_repo.commit(commit_msg, do_push) - if do_push: - obs_repo.create_pr(work_branch, 'master', commit_msg) - - -@group.command(name='pr-comment', help='Add comment for PR') -@click.option("-t", "--gitee-pat", envvar='GITEE_PAT', required=True, - help="Gitee personal access token") -@click.option("-o", "--gitee-org", envvar='GITEE_ORG', required=True, - show_default=True, default="src-openeuler", - help="Gitee organization name of openEuler") -@click.option("-p", "--projects-data", - help="File of projects list, includes 'repo_name', " - "'pr_num' 2 columns ") -@click.option('--repo', help="Specify repo to add comment") -@click.option('--pr', '--pr-num', help="Specify PR of repo to add comment") -@click.option('-c', '--comment', required=True, help="Comment to PR") -def pr_comment(gitee_pat, gitee_org, projects_data, - repo, pr, comment): - if not ((repo and pr) or projects_data): - raise click.ClickException("You must specify projects_data file or " - "specific repo and pr number!") - if repo and pr: - if projects_data: - click.secho("You have specified repo and PR number, " - "the projects_data will be ignore.", fg='red') - repo = PkgGitRepo(gitee_pat, gitee_org, repo_name=repo) - repo.pr_add_comment(comment, pr) - return - projects = pandas.read_csv(projects_data) - projects_data = pandas.DataFrame(projects, columns=["repo_name", "pr_num"]) - if projects_data.empty: - click.echo("Projects list is empty, exit!") - return - for row in projects_data.itertuples(): - click.secho("Start to comment repo: %s, PR: %s" % - (row.repo_name, row.pr_num), bg='blue', fg='white') - repo = PkgGitRepo(gitee_pat, gitee_org, repo_name=row.repo_name) - repo.pr_add_comment(comment, row.pr_num) - - -@group.command(name='pr-fetch', help='Fetch pull request which CI is failed') -@click.option('-g', '--gitee-org', envvar='GITEE_ORG', show_default=True, - default='src-openeuler', help='Gitee organization name of openEuler') -@click.option('-r', '--repos', help='Specify repo to get failed PR, ' - 'format can be like repo1,repo2,...') -@click.option('-s', '--state', type=click.Choice(['open', 'closed', 'merged', 'all']), - default='open', help='Specify the state of failed PR') -@click.option('-o', '--output', default='failed_PR_result.csv', show_default=True, - help='Specify output file') -def ci_failed_pr(gitee_org, repos, state, output): - if repos is None: - repos = list(OPENEULER_SIG_REPO.keys()) - else: - repos = repos.split(',') - - param = {'state': state, 'labels': 'ci_failed'} - - with Pool() as pool: - results = pool.map( - partial(__get_failed_info, gitee_org=gitee_org, - param=param), repos) - - # 记录最终结果 - outputs = sum(results, []) - outputs.insert(0, ['Repo', 'PR Link', 'Owner', 'Summary', 'Log Link']) - - if output is None: - output = 'failed_PR_result.csv' - - with open(output, 'w', 
encoding='utf-8-sig') as f: - csv_writer = csv.writer(f) - csv_writer.writerows(outputs) diff --git a/tools/oos/oos/commands/repo/repo_class.py b/tools/oos/oos/commands/repo/repo_class.py deleted file mode 100644 index b2d92a5fc7f224529be0e05f8025da627a8cb11d..0000000000000000000000000000000000000000 --- a/tools/oos/oos/commands/repo/repo_class.py +++ /dev/null @@ -1,168 +0,0 @@ -import os -import subprocess - -import click -import requests - -from oos.common import CONSTANTS -from oos.common import utils - - -class PkgGitRepo(object): - def __init__(self, gitee_pat=None, gitee_org='src-openeuler', - gitee_user=None, gitee_email=None, - pypi_name=None, repo_name=None): - self.pypi_name = pypi_name - self.gitee_org = gitee_org - self.gitee_pat = gitee_pat - self.gitee_user = gitee_user - self.gitee_email = gitee_email - self.not_found = False - self.branch_not_found = False - self.repo_dir = '' - self.commit_pushed = False - if not repo_name: - self.repo_name, _ = utils.get_openeuler_repo_name_and_sig( - self.pypi_name) - else: - self.repo_name = repo_name - - def fork_repo(self): - try: - url = "https://gitee.com/api/v5/repos/%s/%s/forks" % ( - self.gitee_org, self.repo_name) - resp = requests.request("POST", url, - data={"access_token": self.gitee_pat}) - if resp.status_code == 404: - click.echo("Repo not found for: %s/%s" % (self.gitee_org, - self.repo_name), - err=True) - self.not_found = True - elif resp.status_code != 201: - click.echo("Fork repo failed, %s" % resp.text, err=True) - except requests.RequestException as e: - click.echo("HTTP request to gitee failed: %s" % e, err=True) - - def clone_repo(self, src_dir): - clone_url = "https://gitee.com/%s/%s" % ( - self.gitee_user, self.repo_name) - click.echo("Cloning source repo from: %s" % clone_url) - repo_dir = os.path.join(src_dir, self.repo_name) - if os.path.exists(repo_dir): - subprocess.call(["rm", "-fr", repo_dir]) - subprocess.call(["git", "clone", clone_url, repo_dir]) - self.repo_dir = os.path.join(src_dir, self.repo_name) - - def add_branch(self, src_branch, dest_branch): - url = "https://gitee.com/api/v5/repos/{gitee_org}/{repo_name}/" \ - "branches/{dest_branch}".format(gitee_org=self.gitee_org, - repo_name=self.repo_name, - dest_branch=dest_branch) - resp = requests.request("GET", url) - if resp.status_code == 404: - click.echo("Branch: %s not found for project: %s/%s" % - (dest_branch, self.gitee_org, self.repo_name), - err=True) - self.branch_not_found = True - return - click.echo("Adding branch for %s/%s" % (self.gitee_org, self.repo_name)) - cmd = 'cd %(repo_dir)s; ' \ - 'git config --global user.email "%(gitee_email)s";' \ - 'git config --global user.name "%(gitee_user)s";' \ - 'git remote add upstream "https://gitee.com/%(gitee_org)s/' \ - '%(repo_name)s";' \ - 'git remote update;' \ - 'git checkout -b %(src_branch)s upstream/%(dest_branch)s; ' % { - "repo_dir": self.repo_dir, - "gitee_user": self.gitee_user, - "gitee_email": self.gitee_email, - "gitee_org": self.gitee_org, - "repo_name": self.repo_name, - "src_branch": src_branch, - "dest_branch": dest_branch} - click.echo("CMD: %s" % cmd) - status = subprocess.call(cmd, shell=True) - if status != 0: - raise Exception("Add branch %s for repo %s FAILED!!" 
% ( - src_branch, self.repo_name)) - - def commit(self, commit_message, do_push=True): - click.echo("Commit changes for %s/%s" % ( - self.gitee_org, self.repo_name)) - commit_cmd = 'cd %(repo_dir)s/; ' \ - 'git add .; ' \ - 'git commit -am "%(commit_message)s";' \ - 'git remote set-url origin https://%(gitee_user)s:' \ - '%(gitee_pat)s@gitee.com/%(gitee_user)s/%(repo_name)s;' \ - % {"repo_dir": self.repo_dir, - "repo_name": self.repo_name, - "gitee_user": self.gitee_user, - "gitee_pat": self.gitee_pat, - "commit_message": commit_message} - if do_push: - commit_cmd += 'git push origin -f' - self.commit_pushed = True - click.echo("CMD: %s" % commit_cmd) - subprocess.call(commit_cmd, shell=True) - - def create_pr(self, src_branch, remote_branch, tittle): - if not self.commit_pushed: - click.secho("WARNING: Commit was not pushed of %s, exit!" % - self.repo_name, fg='red') - return - click.echo("Creating pull request for project: %s" % self.repo_name) - try: - url = "https://gitee.com/api/v5/repos/%s/%s/pulls" % ( - self.gitee_org, self.repo_name) - resp = requests.request( - "POST", url, data={"access_token": self.gitee_pat, - "title": tittle, - "head": self.gitee_user + ":" + src_branch, - "base": remote_branch}) - if resp.status_code != 201: - click.echo("Create pull request failed, %s" % resp.text) - except requests.RequestException as e: - click.echo("HTTP request to gitee failed: %s" % e, err=True) - - def delete_fork(self): - url = 'https://gitee.com/api/v5/repos/%s/%s?access_token=%s' % ( - self.gitee_user, self.repo_name, self.gitee_pat) - resp = requests.request("DELETE", url) - if resp.status_code == 404: - click.echo("Repo %s/%s not found" % ( - self.gitee_user, self.repo_name)) - - def pr_add_comment(self, comment, pr_num): - click.echo("Adding comment: %s for project: %s in PR: %s" % ( - comment, self.repo_name, pr_num)) - url = 'https://gitee.com/api/v5/repos/%s/%s/pulls/%s/comments' % ( - self.gitee_org, self.repo_name, pr_num) - body = {"access_token": "%s" % self.gitee_pat, - "body": "%s" % comment} - resp = requests.request("POST", url, data=body) - if resp.status_code != 201: - click.echo("Comment PR %s failed, reason: %s" % - (pr_num, resp.reason), err=True) - - def pr_get_comments(self, pr_num): - click.echo("Getting comments for %s/%s in PR: %s" % ( - self.gitee_org, self.repo_name, pr_num)) - url = 'https://gitee.com/api/v5/repos/%s/%s/pulls/%s/comments' % ( - self.gitee_org, self.repo_name, pr_num) - param = {'comment_type': 'pr_comment', 'direction': 'desc'} - resp = requests.get(url, params=param) - if resp.status_code != 200: - click.echo("Getting comments of %s failed, reason: %s" % - (pr_num, resp.reason), err=True) - return resp.json() - - def get_pr_list(self, filter=None): - click.echo("Getting PR list for %s/%s" % ( - self.gitee_org, self.repo_name)) - url = 'https://gitee.com/api/v5/repos/%s/%s/pulls' % ( - self.gitee_org, self.repo_name) - resp = requests.get(url, params=filter) - if resp.status_code != 200: - click.echo("Getting PR list failed, reason: %s" % resp.reason, - err=True) - return resp.json() diff --git a/tools/oos/oos/commands/spec/__init__.py b/tools/oos/oos/commands/spec/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/tools/oos/oos/commands/spec/cli.py b/tools/oos/oos/commands/spec/cli.py deleted file mode 100644 index 9467a7001e0ae42561029a4b02cf3d3cfda901b3..0000000000000000000000000000000000000000 --- a/tools/oos/oos/commands/spec/cli.py +++ /dev/null 
@@ -1,320 +0,0 @@ -import glob -import os -from pathlib import Path -import subprocess - -import click -import pandas - -from oos.commands.repo.repo_class import PkgGitRepo -from oos.commands.spec.spec_class import RPMSpec -from oos.common import gitee - - -class SpecPush(object): - def __init__(self, build_root, gitee_pat, gitee_email, gitee_org, - name, version, projects_data_file, dest_branch, src_branch, - repos_dir, query, arch, python2, no_check, reuse_spec): - self.build_root = build_root - self.repos_dir = os.path.join(self.build_root, repos_dir) - self.projects_data_file = projects_data_file - self.gitee_org = gitee_org - self.gitee_pat = gitee_pat - self.name = name - self.version = version - self.arch = arch - self.python2 = python2 - self.no_check = no_check - self.dest_branch = dest_branch - self.src_branch = src_branch - self.missed_repos = [] - self.missed_deps = [] - self.projects_missed_branch = [] - self.build_failed = [] - self.check_stage_failed = [] - self.query = query - self.reuse_spec = reuse_spec - - g_user, g_email = gitee.get_user_info(self.gitee_pat) - self.gitee_email = gitee_email or g_email - if not self.gitee_email: - raise click.ClickException( - "Your email was not publicized in gitee, need to manually " - "specified by --gitee-email") - self.gitee_user = g_user - - @property - def projects_data(self): - if self.name and self.version: - return None - projects = pandas.read_csv(self.projects_data_file) - project_df = pandas.DataFrame(projects, - columns=["pypi_name", "version"]) - if self.query: - project_df = project_df.set_index('pypi_name', drop=False, ).filter( - like=self.query, axis=0) - return project_df - - def _get_old_changelog_version(self, repo_obj): - old_version = None - old_changelog = None - spec_f = glob.glob(os.path.join(repo_obj.repo_dir, '*.spec')) - if not spec_f: - return None, None - spec_f = spec_f[0] - with open(spec_f) as f_spec: - lines = f_spec.readlines() - for l_num, line in enumerate(lines): - if 'Version:' in line: - old_version = line.partition(':')[2].strip() - if '%changelog' in line: - old_changelog = [cl.rstrip() for cl in lines[l_num + 1:]] - break - - return old_changelog, old_version - - def _copy_spec_source(self, spec_obj, repo_obj): - if not (spec_obj.spec_path and spec_obj.source_path): - click.secho("ERROR: Spec or Source file not found for: %s" - % spec_obj.pypi_name, fg='red') - return - if not repo_obj.repo_dir: - click.secho("Repo was not cloned: %s" % spec_obj.pypi_name, - fg='red') - return - - click.echo("Copying spec file and source package for: %s" - % spec_obj.pypi_name) - - rm_cmd = "rm -fr %(repo_dir)s/*.spec; rm -fr %(repo_dir)s/*.tar.gz; " \ - "rm -fr %(repo_dir)s/*.zip; rm -fr %(repo_dir)s/*.patch" \ - % {"repo_dir": repo_obj.repo_dir} - click.echo("CMD: %s" % rm_cmd) - subprocess.call(rm_cmd, shell=True) - - cp_spec_cmd = "yes | cp %s %s" % (spec_obj.spec_path, repo_obj.repo_dir) - click.echo("CMD: %s" % cp_spec_cmd) - subprocess.call(cp_spec_cmd, shell=True) - - cp_src_pkg_cmd = "yes | cp %s %s" % (spec_obj.source_path, - repo_obj.repo_dir) - click.echo("CMD: %s" % cp_src_pkg_cmd) - subprocess.call(cp_src_pkg_cmd, shell=True) - - def _build_one(self, pypi_name, version, do_push): - repo_obj = PkgGitRepo(self.gitee_pat, self.gitee_org, - self.gitee_user, self.gitee_email, - pypi_name=pypi_name) - repo_obj.fork_repo() - if repo_obj.not_found: - self.missed_repos.append(repo_obj.repo_name) - return - - repo_obj.clone_repo(self.repos_dir) - repo_obj.add_branch(self.src_branch, self.dest_branch) - if 
repo_obj.branch_not_found: - self.projects_missed_branch.append(pypi_name) - return - old_changelog, old_version = self._get_old_changelog_version(repo_obj) - - spec_obj = RPMSpec(pypi_name, version, self.arch, self.python2, - add_check=not self.no_check, - old_changelog=old_changelog, old_version=old_version) - commit_msg = "Update package %s of version %s" % (pypi_name, version) - spec_obj.build_package(self.build_root, reuse_spec=self.reuse_spec) - if spec_obj.build_failed: - if spec_obj.check_stage_failed: - self.check_stage_failed.append(pypi_name) - self.build_failed.append(pypi_name) - return - - spec_obj.check_deps() - if spec_obj.deps_missed: - self.missed_deps.append({pypi_name: list(spec_obj.deps_missed)}) - - self._copy_spec_source(spec_obj, repo_obj) - repo_obj.commit(commit_msg, do_push=do_push) - repo_obj.create_pr(self.src_branch, self.dest_branch, commit_msg) - - def build_all(self, do_push=False): - if self.name and self.version: - if self.projects_data: - click.echo("Package name and version has been specified, ignore" - " projects data!") - pkg_amount = 1 - self._build_one(self.name, self.version, do_push) - else: - if self.projects_data.empty: - click.echo("Projects list is empty, exit!") - return - pkg_amount = len(self.projects_data.index) - for row in self.projects_data.itertuples(): - click.secho("Start to handle project: %s" % row.pypi_name, - bg='blue', fg='white') - self._build_one(row.pypi_name, row.version, do_push) - - click.secho("=" * 20 + "Summary" + "=" * 20, fg='black', bg='green') - failed = (len(self.build_failed) + len(self.missed_repos) + - len(self.missed_deps) + len(self.projects_missed_branch)) - click.secho("%s projects handled, failed %s" % ( - pkg_amount, failed), fg='yellow') - click.secho("Source repos not found: %s" % self.missed_repos, - fg='red') - click.secho("Miss requires: %s" % self.missed_deps, fg='red') - click.secho("Projects missed dest branch: %s" % - self.projects_missed_branch, fg='red') - click.secho("Build failed packages: %s" % self.build_failed, - fg='red') - click.secho("Check stage failed packages: %s" % self.check_stage_failed, - fg='red') - - click.secho("=" * 20 + "Summary" + "=" * 20, fg='black', bg='green') - - -def _rpmbuild_env_ensure(build_root): - rpmbuild_cmd = subprocess.call(["rpmbuild", "--help"], shell=True) - tree_cmd = subprocess.call(["rpmdev-setuptree", "--help"], - shell=True) - if rpmbuild_cmd != 0 or tree_cmd != 0: - raise click.ClickException("You must install rpm-build tools, e.g. 
" - "yum isntall -y rpm-build rpmdevtools") - - for rb_dir in ['SPECS', 'SOURCES', 'BUILD', 'RPMS']: - if not os.path.exists(os.path.join(build_root, rb_dir)): - raise click.ClickException( - "You must setup the rpm build directories by running " - "'rpmdev-setuptree' command and specify the build_root the " - "path of 'rpmbuild/' directory.") - - -@click.group(name='spec', help='RPM spec related commands') -def group(): - pass - - -@group.command(name='push', help='Build RPM spec and push to Gitee repo') -@click.option("--build-root", envvar='BUILD_ROOT', - default=os.path.join(str(Path.home()), 'rpmbuild'), - help="Building root directory") -@click.option("-t", "--gitee-pat", envvar='GITEE_PAT', required=True, - help="Gitee personal access token") -@click.option("-e", "--gitee-email", envvar='GITEE_EMAIL', - help="Email address for git commit changes, automatically " - "query from gitee if you have public in gitee") -@click.option("-o", "--gitee-org", envvar='GITEE_ORG', required=True, - show_default=True, - default="src-openeuler", - help="Gitee organization name of openEuler") -@click.option("-n", "--name", help="Name of package to build") -@click.option("-v", "--version", default='latest', help="Package version") -@click.option("-p", "--projects-data", - help="File of projects list, includes 'pypi_name'," - " 'version' 2 columns ") -@click.option("-d", "--dest-branch", default='master', show_default=True, - help="Target remote branch to create PR, default as master") -@click.option("-s", "--src-branch", default='openstack-pkg-support', - show_default=True, - help="Local source branch to create PR") -@click.option("-r", "--repos-dir", default='src-repos', show_default=True, - help="Directory for storing source repo locally") -@click.option('-q', '--query', - help="Filter, fuzzy match the 'pypi_name' of projects list, e.g. 
" - "'-q novaclient.") -@click.option("-a", "--arch", is_flag=True, - help="Build module with arch, noarch by default.") -@click.option("-py2", "--python2", is_flag=True, help="Build python2 package") -@click.option('-dp', '--do-push', is_flag=True, help="Do PUSH or not") -@click.option("-nc", "--no-check", is_flag=True, - help="Do not add %check step in spec") -@click.option('-rs', '--reuse-spec', is_flag=True, - help="Reuse existed spec file") -def push(build_root, gitee_pat, gitee_email, gitee_org, name, version, - projects_data, dest_branch, src_branch, repos_dir, query, arch, - python2, do_push, no_check, reuse_spec): - if not (name or projects_data): - raise click.ClickException("You must specify projects_data file or " - "specific package name!") - if build_root: - _rpmbuild_env_ensure(build_root) - spec_push = SpecPush(build_root=build_root, gitee_pat=gitee_pat, - gitee_email=gitee_email, gitee_org=gitee_org, - name=name, version=version, - projects_data_file=projects_data, - dest_branch=dest_branch, src_branch=src_branch, - repos_dir=repos_dir, query=query, - arch=arch, python2=python2, no_check=no_check, - reuse_spec=reuse_spec) - spec_push.build_all(do_push) - - -@group.command(name='build', help='Build RPM spec locally') -@click.option("--build-root", envvar='BUILD_ROOT', - default=os.path.join(str(Path.home()), 'rpmbuild'), - help="Building root directory") -@click.option("-n", "--name", help="Name of package to build") -@click.option("-v", "--version", default='latest', help="Package version") -@click.option("-p", "--projects-data", help="File of projects list, includes " - "'pypi_name', 'version' 2 columns ") -@click.option('-q', '--query', - help="Filter, fuzzy match the 'pypi_name' of projects list, e.g. " - "'-q novaclient.") -@click.option("-a", "--arch", is_flag=True, - help="Build module with arch, noarch by default.") -@click.option("-py2", "--python2", is_flag=True, help="Build python2 package") -@click.option('-sd', '--short-description', is_flag=True, default=True, - help="Shorten description") -@click.option("-nc", "--no-check", is_flag=True, - help="Do not add %check step in spec") -@click.option("-b", "--build-rpm", is_flag=True, help="Build rpm package") -@click.option("-o", "--output", help="Specify output file of generated Spec") -@click.option('-rs', '--reuse-spec', is_flag=True, - help="Reuse existed spec file") -def build(build_root, name, version, projects_data, query, arch, python2, - short_description, no_check, build_rpm, output, reuse_spec): - if build_root and build_rpm: - _rpmbuild_env_ensure(build_root) - if not (name or projects_data): - raise click.ClickException("You must specify projects_data file or " - "specific package name!") - if name and version: - if projects_data: - click.secho("You have specified package name and version, " - "the projects_data will be ignore.", fg='red') - spec_obj = RPMSpec(name, version, arch, python2, short_description, - not no_check) - if build_rpm: - spec_obj.build_package(build_root, output, reuse_spec) - return - spec_obj.generate_spec(build_root, output, reuse_spec) - return - projects = pandas.read_csv(projects_data) - projects_data = pandas.DataFrame(projects, columns=["pypi_name", "version"]) - if query: - projects_data = projects_data.set_index('pypi_name', drop=False).filter( - like=query, axis=0) - if projects_data.empty: - click.echo("Projects list is empty, exit!") - return - failed_pkgs = [] - check_stage_failed = [] - for row in projects_data.itertuples(): - click.secho("Start to build spec for: %s, 
version: %s" % - (row.pypi_name, row.version), bg='blue', fg='white') - spec_obj = RPMSpec(row.pypi_name, row.version, arch, python2, - short_description, not no_check) - if build_rpm: - spec_obj.build_package(build_root, output, reuse_spec) - else: - spec_obj.generate_spec(build_root, output, reuse_spec) - if spec_obj.build_failed: - failed_pkgs.append(spec_obj.pypi_name) - if spec_obj.check_stage_failed: - check_stage_failed.append(spec_obj.pypi_name) - - click.secho("=" * 20 + "Summary" + "=" * 20, fg='black', bg='green') - click.secho("%s projects handled, failed %s" % ( - len(projects_data.index), len(failed_pkgs)), fg='yellow') - click.secho("Built failed projects: %s" % failed_pkgs, fg='red') - click.secho("Projects built failed in check stage: %s" % - check_stage_failed, fg='red') - click.secho("=" * 20 + "Summary" + "=" * 20, fg='black', bg='green') diff --git a/tools/oos/oos/commands/spec/spec_class.py b/tools/oos/oos/commands/spec/spec_class.py deleted file mode 100644 index 6708bf7ec66e16cf8db74de394a2b5202de41c94..0000000000000000000000000000000000000000 --- a/tools/oos/oos/commands/spec/spec_class.py +++ /dev/null @@ -1,358 +0,0 @@ -# NOTE: some code of this py file is copy from the pyporter tool of openEuler -# community:https://gitee.com/openeuler/pyporter - -import datetime -import json -import os -import re -import subprocess -import textwrap - -import click -import jinja2 -import urllib.request - -from oos.common import CONSTANTS -from oos.common import SPEC_TEMPLET_DIR - - -class RPMSpec(object): - def __init__(self, pypi_name, version='latest', arch=None, - python2=False, short_description=True, add_check=True, - old_changelog=None, old_version=None): - self.pypi_name = pypi_name - # use 'latest' as version if version is NaN - self.version = 'latest' if version != version else version - self.shorten_description = short_description - self.arch = arch - self.python2 = python2 - self.spec_path = '' - self.source_path = '' - self.deps_missed = set() - self.build_failed = False - self.check_stage_failed = False - self.add_check = add_check - self.old_changelog = old_changelog - self.old_version = old_version - - self._pypi_json = None - self._spec_name = "" - self._pkg_name = "" - self._pkg_summary = "" - self._pkg_home = "" - self._pkg_license = "" - self._source_url = "" - self._source_file = "" - self._source_file_dir = "" - self._base_build_requires = [] - self._dev_requires = [] - self._test_requires = [] - self._check_supported = True - - @property - def pypi_json(self): - if not self._pypi_json: - url_template = 'https://pypi.org/pypi/{name}/{version}/json' - url_template_latest = 'https://pypi.org/pypi/{name}/json' - if self.version == 'latest': - url = url_template_latest.format(name=self.pypi_name) - else: - url = url_template.format(name=self.pypi_name, - version=self.version) - with urllib.request.urlopen(url) as u: - self._pypi_json = json.loads(u.read().decode('utf-8')) - return self._pypi_json - - @property - def spec_name(self): - if not self._spec_name: - self._spec_name = self.pypi_json["info"]["name"].replace(".", "-") - if not self._spec_name.startswith("python-"): - self._spec_name = "python-" + self._spec_name - return self._spec_name - - def _pypi2pkg_name(self, pypi_name): - prefix = 'python2-' if self.python2 else 'python3-' - if pypi_name in CONSTANTS['pypi2pkgname']: - pkg_name = CONSTANTS['pypi2pkgname'][pypi_name] - else: - pkg_name = pypi_name.lower().replace('.', '-') - if pkg_name.startswith('python-'): - pkg_name = pkg_name[7:] - return 
prefix + pkg_name - - @property - def pkg_name(self): - if not self._pkg_name: - self._pkg_name = self._pypi2pkg_name(self.pypi_name) - return self._pkg_name - - @property - def pkg_summary(self): - if not self._pkg_summary: - self._pkg_summary = self.pypi_json["info"]["summary"] - return self._pkg_summary - - @property - def pkg_home(self): - if not self._pkg_home: - project_urls = self.pypi_json["info"]["project_urls"] - if project_urls: - self._pkg_home = project_urls.get("Homepage") - else: - self._pkg_home = self.pypi_json["info"]["project_url"] - return self._pkg_home - - @property - def module_name(self): - return self.pypi_json["info"]["name"] - - @property - def version_num(self): - return self.pypi_json["info"]["version"] - - def _is_upgrade(self): - if not self.old_version: - return False - try: - old_version = float(self.old_version) - new_version = float(self.version_num) - return new_version > old_version - except ValueError: - return str(self.version_num) > str(self.old_version) - - def _get_provide_name(self): - return self.pkg_name if self.python2 else self.pkg_name.replace( - 'python3-', 'python-') - - def _get_license(self): - if CONSTANTS['pypi_license'].get(self.module_name): - return CONSTANTS['pypi_license'][self.module_name] - if (self.pypi_json["info"]["license"] != "" and - self.pypi_json["info"]["license"] != "UNKNOWN"): - org_license = self.pypi_json["info"]["license"] - else: - for k in self.pypi_json["info"]["classifiers"]: - if k.startswith("License"): - ks = k.split("::") - if len(ks) <= 2: - org_license = 'UNKNOWN' - else: - org_license = ks[2].strip() - break - else: - org_license = 'UNKNOWN' - # openEuler CI is a little stiff. It hard-codes the License name. - # We change the format here to satisfy openEuler CI's requirement. - if "Apache" in org_license: - return "Apache-2.0" - if "BSD" in org_license: - return "BSD" - if "MIT" in org_license: - return "MIT" - return org_license - - def _init_source_info(self): - urls_info = self.pypi_json['urls'] - for url_info in urls_info: - if url_info["packagetype"] == "sdist": - self._source_file = url_info["filename"] - self._source_url = url_info["url"] - if self._source_file: - self._source_file_dir = self._source_file.partition( - '-' + self.version_num)[0] + '-%{version}' - - def _get_description(self, shorten=True): - if self.pypi_name in CONSTANTS['pkg_description']: - return CONSTANTS['pkg_description'][self.pypi_name] - org_description = self.pypi_json["info"]["description"] - if not shorten: - return org_description - cut_dot = org_description.find('.', 80 * 8) - cut_br = org_description.find('\n', 80 * 8) - if cut_dot > -1: - shorted = org_description[:cut_dot + 1] - elif cut_br > -1: - shorted = org_description[:cut_br] - else: - shorted = org_description - spec_description = re.sub( - r'\s+', ' ', # multiple whitespaces \ - # general URLs - re.sub(r'\w+:\/{2}[\d\w-]+(\.[\d\w-]+)*(?:(?:\/[^\s/]*))*', '', - # delimiters - re.sub('(#|=|---|~|`_|-\s|\*\s|`)*', '', - # very short lines, typically titles - re.sub('((\r?\n)|^).{0,8}((\r?\n)|$)', '', - # PyPI's version and downloads tags - re.sub( - '((\r*.. 
image::|:target:) https?|' - '(:align:|:alt:))[^\n]*\n', '', - shorted))))) - return '\n'.join(textwrap.wrap(spec_description, 80)) - - def _parse_requires(self): - self._base_build_requires = [] - self._dev_requires = [] - self._test_requires = [] - - if self.python2: - self._base_build_requires = ['python2-devel', 'python2-setuptools', - 'python2-pbr', 'python2-pip', - 'python2-wheel'] - else: - self._base_build_requires = ['python3-devel', 'python3-setuptools', - 'python3-pbr', 'python3-pip', - 'python3-wheel'] - if self.arch: - if self.python2: - self._base_build_requires.append('python2-cffi') - else: - self._base_build_requires.append('python3-cffi') - self._base_build_requires.extend(['gcc', 'gdb']) - - pypi_requires = self.pypi_json["info"]["requires_dist"] - if pypi_requires is None: - return - for r in pypi_requires: - req, _, condition = r.partition(";") - striped = condition.replace('\"', '').replace( - '\'', '').replace(' ', '') - if 'platform==win32' in striped: - click.secho("Requires %s is Windows platform specific" % req) - continue - match_py_ver = True - for py_cond in ("python_version==", "python_version<=", - "python_version<"): - if py_cond in striped: - py_ver = re.findall(r'\d+\.?\d*', - striped.partition(py_cond)[2]) - if (py_ver and (py_ver[0] < '2.7.3' or - '3' < py_ver[0] < '3.8.3')): - match_py_ver = False - break - if not match_py_ver: - click.secho("[INFO] Requires %s is not match python version, " - "skipped" % req) - continue - - r_name, _, r_ver = req.rstrip().partition(' ') - r_pkg = self._pypi2pkg_name(r_name) - if 'extra==test' in striped: - self._test_requires.append(r_pkg) - else: - self._dev_requires.append(r_pkg) - - def generate_spec(self, build_root, output_file=None, reuse_spec=False): - self._init_source_info() - self._parse_requires() - if output_file: - self.spec_path = output_file - else: - self.spec_path = os.path.join( - build_root, "SPECS/", self.spec_name) + '.spec' - if reuse_spec: - if not os.path.exists(self.spec_path): - click.secho("Spec file no existed with reuse spec parameter " - "specified" % self.pypi_name, fg='red') - self.build_failed = True - return - env = jinja2.Environment(trim_blocks=True, lstrip_blocks=True, - loader=jinja2.FileSystemLoader( - SPEC_TEMPLET_DIR)) - template = env.get_template('package.spec.j2') - up_down_grade = 'Upgrade' if self._is_upgrade() else "Downgrade" - - test_requires = self._test_requires if self.add_check else [] - template_vars = {'spec_name': self.spec_name, - 'version': self.version_num, - 'pkg_summary': self.pkg_summary, - 'pkg_license': self._get_license(), - 'pkg_home': self.pkg_home, - 'source_url': self._source_url, - 'build_arch': self.arch, - 'pkg_name': self.pkg_name, - 'provides': self._get_provide_name(), - 'base_build_requires': self._base_build_requires, - 'dev_requires': self._dev_requires, - 'test_requires': test_requires, - 'description': self._get_description(), - 'today': datetime.date.today().strftime("%a %b %d %Y"), - 'add_check': self.add_check, - 'python2': self.python2, - "source_file_dir": self._source_file_dir, - "old_changelog": self.old_changelog, - "up_down_grade": up_down_grade - } - output = template.render(template_vars) - with open(self.spec_path, 'w') as f: - f.write(output) - - def _verify_check_stage(self, build_root): - # Verify the %check stage of spec file - if not self.add_check: - return - pkg_src_dir = os.path.join(build_root, 'BUILD', self._source_file_dir) - if self.python2: - cmd = "cd %s; python2 setup.py test" % pkg_src_dir - else: - cmd = "cd 
%s; python3 setup.py test" % pkg_src_dir - status = subprocess.call(cmd, shell=True) - if status != 0: - click.secho("Run check stage failed: %s" % self.pypi_name, fg='red') - output = subprocess.run(cmd, shell=True, stderr=subprocess.STDOUT, - stdout=subprocess.PIPE) - if "invalid command 'test'" in str(output.stdout): - click.secho("Does not support setup.py test command of %s, " - "skip check stage." % self.pypi_name, fg='yellow') - self._check_supported = False - self.check_stage_failed = True - return - - def build_package(self, build_root, output_file=None, reuse_spec=False): - self.generate_spec(build_root, output_file, reuse_spec) - if not self.spec_path: - return - status = subprocess.call(["dnf", "builddep", '-y', self.spec_path]) - if status != 0: - click.secho("Project: %s built failed, install dependencies failed." - % self.pypi_name, fg='red') - self.build_failed = True - return - status = subprocess.call(["rpmbuild", - "--undefine=_disable_source_fetch", "-ba", - self.spec_path]) - if status != 0: - self.build_failed = True - if self._check_supported and self.add_check: - self._verify_check_stage(build_root) - if not self._check_supported: - click.secho("Project: %s does not support check stage, " - "re-generate " "spec." % self.pypi_name, - fg='yellow') - self.add_check = False - self.build_failed = False - self.build_package(build_root, output_file) - if self.build_failed: - click.secho("Project: %s built failed, need to manually fix." % - self.pypi_name, fg='red') - return - - self.source_path = os.path.join(build_root, "SOURCES/", - self._source_file) - if not os.path.isfile(self.source_path): - click.secho("Project: %s built failed, source file not found." % - self.pypi_name, fg='red') - self.build_failed = True - return - - def check_deps(self, all_repo_names=None): - self._parse_requires() - for r in self._dev_requires + self._test_requires: - in_list = True - if (all_repo_names and r.replace("python2", "python").lower() - not in all_repo_names or []): - in_list = False - status, _ = subprocess.getstatusoutput("yum info %s" % r) - if status != 0 and not in_list: - self.deps_missed.add(r) diff --git a/tools/oos/oos/common/__init__.py b/tools/oos/oos/common/__init__.py deleted file mode 100644 index 944f65f2a5f49c7d3d3d2e97b57e42adb6d20ea8..0000000000000000000000000000000000000000 --- a/tools/oos/oos/common/__init__.py +++ /dev/null @@ -1,92 +0,0 @@ -import configparser -import os -from pathlib import Path -import sqlite3 - -import click -import yaml - -import oos - - -CONSTANTS = None -SPEC_TEMPLET_DIR = None -OPENEULER_REPO = None -OPENEULER_SIG_REPO = None -OPENSTACK_RELEASE_MAP = None -ANSIBLE_PLAYBOOK_DIR = None -ANSIBLE_INVENTORY_DIR = None -KEY_DIR = None -CONFIG = None -SQL_DB = '/etc/oos/data.db' - - -search_paths = ['/etc/oos/', - os.path.join(os.path.dirname(oos.__path__[0]), 'etc'), - os.environ.get("OOS_CONF_DIR", ""), '/usr/local/etc/oos', - '/usr/etc/oos', - ] -conf_paths = ['/etc/oos/oos.conf', '/usr/local/etc/oos/oos.conf'] - - -for conf_path in search_paths: - cons = os.path.join(conf_path, "constants.yaml") - pkg_tpl = os.path.join(conf_path, "package.spec.j2") - openeuler_repo = os.path.join(conf_path, "openeuler_repo.yaml") - openeuler_sig_repo = os.path.join(conf_path, "openeuler_sig_repo.yaml") - openstack_release = os.path.join(conf_path, "openstack_release.yaml") - playbook_path = os.path.join(conf_path, "playbooks") - inventory_path = os.path.join(conf_path, "inventory") - key_path = os.path.join(conf_path, "key_pair") - if os.path.isfile(cons) 
and not CONSTANTS: - CONSTANTS = yaml.safe_load(open(cons, encoding="utf-8")) - if os.path.isfile(pkg_tpl) and not SPEC_TEMPLET_DIR: - SPEC_TEMPLET_DIR = conf_path - if os.path.isfile(openeuler_repo) and not OPENEULER_REPO: - OPENEULER_REPO = yaml.safe_load(open(openeuler_repo, encoding="utf-8")) - if os.path.isfile(openeuler_sig_repo) and not OPENEULER_SIG_REPO: - OPENEULER_SIG_REPO = yaml.safe_load(open(openeuler_sig_repo, encoding="utf-8")) - if os.path.isfile(openstack_release) and not OPENSTACK_RELEASE_MAP: - OPENSTACK_RELEASE_MAP = yaml.safe_load(open(openstack_release, encoding="utf-8")) - if os.path.isdir(playbook_path) and not ANSIBLE_PLAYBOOK_DIR: - ANSIBLE_PLAYBOOK_DIR = playbook_path - if os.path.isdir(inventory_path) and not ANSIBLE_INVENTORY_DIR: - ANSIBLE_INVENTORY_DIR = inventory_path - if os.path.isdir(key_path) and not KEY_DIR: - KEY_DIR = key_path - -for fp in conf_paths: - if os.path.exists(fp): - CONFIG = configparser.ConfigParser() - CONFIG.read(fp) - break - -if not Path(SQL_DB).exists(): - try: - Path(SQL_DB).parents[0].mkdir(parents=True) - except FileExistsError: - pass - Path(SQL_DB).touch() - connect = sqlite3.connect(SQL_DB) - cur = connect.cursor() - cur.execute('''CREATE TABLE resource - (provider, name, uuid, ip, flavor, openeuler_release, openstack_release, create_time)''') - connect.commit() - connect.close() - -if not CONSTANTS: - raise click.ClickException("constants.yaml is missing") -if not SPEC_TEMPLET_DIR: - raise click.ClickException("package.spec.j2 is missing") -if not OPENEULER_REPO: - raise click.ClickException("openeuler_repo is missing") -if not OPENSTACK_RELEASE_MAP: - raise click.ClickException("openstack_release.yaml is missing") -if not ANSIBLE_PLAYBOOK_DIR: - raise click.ClickException("ansible playbook dir is missing") -if not ANSIBLE_INVENTORY_DIR: - raise click.ClickException("ansible inventory dir is missing") -if not CONFIG: - raise click.ClickException("Unable to locate config file") -if not KEY_DIR: - raise click.ClickException("Unable to locate key pair file") diff --git a/tools/oos/oos/common/gitee.py b/tools/oos/oos/common/gitee.py deleted file mode 100644 index d68ea288022a3703eb7800dab773e6602927eab9..0000000000000000000000000000000000000000 --- a/tools/oos/oos/common/gitee.py +++ /dev/null @@ -1,65 +0,0 @@ -import json - -import requests - - -def get_gitee_project_tree(owner, project, branch, access_token=None): - """Get project content tree from gitee""" - headers = { - 'Content-Type': 'application/json;charset=UTF-8', - } - url = 'https://gitee.com/api/v5/repos/%s/%s/git/trees/%s' % (owner, project, branch) - if access_token: - url = url + '?access_token=%s' % access_token - response = requests.get(url, headers=headers) - return json.loads(response.content.decode()) - - -def get_gitee_project_version(owner, project, branch, access_token=None): - """Get project version""" - version = '' - file_tree = get_gitee_project_tree(owner, project, branch, access_token) - for file in file_tree['tree']: - if file['path'].endswith('tar.gz') or \ - file['path'].endswith('tar.bz2') or \ - file['path'].endswith('.zip') or \ - file['path'].endswith('.tgz'): - if file['path'].endswith('tar.gz') or file['path'].endswith('tar.bz2'): - sub_str = file['path'].rsplit('.', 2)[0] - else: - sub_str = file['path'].rsplit('.', 1)[0] - if '-' in sub_str: - version = sub_str.rsplit('-', 1)[1].strip('v') - elif '_' in sub_str: - version = sub_str.rsplit('_', 1)[1].strip('v') - else: - version = sub_str.strip('v') - break - - return version - - -def 
has_branch(owner, project, branch, access_token=None): - """Check if the repo has specified branch""" - headers = { - 'Content-Type': 'application/json;charset=UTF-8', - } - url = 'https://gitee.com/api/v5/repos/%s/%s/branches/%s' % (owner, project, branch) - if access_token: - url = url + '?access_token=%s' % access_token - response = requests.get(url, headers=headers) - - if response.status_code != 200: - return False - else: - return True - - -def get_user_info(token): - user_info_url = 'https://gitee.com/api/v5/user?access_token=%s' % token - user_info = requests.request('GET', user_info_url).json() - gitee_user = user_info['login'] - if not user_info.get('email'): - return gitee_user, None - gitee_email = user_info['email'] if '@' in user_info['email'] else None - return gitee_user, gitee_email diff --git a/tools/oos/oos/common/pypi.py b/tools/oos/oos/common/pypi.py deleted file mode 100644 index b284426efad2326c5413d0500e8f67eaca7169b3..0000000000000000000000000000000000000000 --- a/tools/oos/oos/common/pypi.py +++ /dev/null @@ -1,14 +0,0 @@ -import json - -import requests - - -def get_json_from_pypi(project, version=None): - if version and version != 'latest': - url = 'https://pypi.org/pypi/%s/%s/json' % (project, version) - else: - url = 'https://pypi.org/pypi/%s/json' % project - response = requests.get(url) - if response.status_code != 200: - raise Exception("%s-%s doesn't exist on pypi" % (project, version)) - return json.loads(response.content.decode()) diff --git a/tools/oos/oos/common/utils.py b/tools/oos/oos/common/utils.py deleted file mode 100644 index d747aca9b74a67807b213c2d7a7849e04342d2b3..0000000000000000000000000000000000000000 --- a/tools/oos/oos/common/utils.py +++ /dev/null @@ -1,14 +0,0 @@ -from oos.common import CONSTANTS -from oos.common import OPENEULER_REPO - - -def get_openeuler_repo_name_and_sig(pypi_name): - openeuler_name = CONSTANTS['pypi2reponame'].get(pypi_name, pypi_name) - if OPENEULER_REPO.get('python-' + openeuler_name): - return 'python-'+openeuler_name, OPENEULER_REPO['python-'+openeuler_name] - elif OPENEULER_REPO.get(openeuler_name): - return openeuler_name, OPENEULER_REPO[openeuler_name], - elif OPENEULER_REPO.get('openstack-'+openeuler_name): - return 'openstack-'+openeuler_name, OPENEULER_REPO['openstack-'+openeuler_name] - else: - return '', '' diff --git a/tools/oos/requirements.txt b/tools/oos/requirements.txt deleted file mode 100644 index 49db07719feeeba601f87ff7d1064e0582e0f5f5..0000000000000000000000000000000000000000 --- a/tools/oos/requirements.txt +++ /dev/null @@ -1,14 +0,0 @@ -ansible==2.9.27 -bs4 -click -huaweicloudsdkecs -Jinja2 -lxml -markdown -packaging -pandas -prettytable -pymdown-extensions -pyyaml -requests -xmltodict diff --git a/tools/oos/scripts/README.md b/tools/oos/scripts/README.md deleted file mode 100644 index 948d3a5696fd138c5fc45f589fa55a7ad4d03279..0000000000000000000000000000000000000000 --- a/tools/oos/scripts/README.md +++ /dev/null @@ -1,77 +0,0 @@ -# 脚本集合 - -本目录包含一些开发脚本, 开发者可以手动调用。同时我们也配置了Github Action CI,每日会把相关执行结果推送PR到本项目中,或发送邮件给相关负责人。 - -1. check_obs_status.py - - 功能: 检查OBS上OpenStack SIG软件包构建情况。 - - 输入: `python3 check_obs_status.py markdown` - - 输出: `result.md` - - 输入: `python3 check_obs_status.py html` - - 输出: `result_attach.html`, `result_body.html` - - 输入: `python3 check_obs_status.py gitee` - - 输出: [Gitee issue](https://gitee.com/openeuler/openstack/issues) - - 环境变量: - - `OBS_USER_NAME` - `OBS_USER_PASSWORD` - `GITEE_USER_TOKEN` - -2. 
fetch_openstack_release_mapping.py - - 功能: 获取OpenStack社区上游最新的各组件的版本号 - - 输入: `python3 fetch_openstack_release_mapping.py` - - 输出: `openstack_release.yaml` - - 环境变量: None - -3. fetch_openeuler_repo_name.py - - 功能: 获取src-openEuler最新的仓库名列表, 可以指定目标sig列表 - - 输入: `python3 fetch_openeuler_repo_name.py local` - - 输出: `openeuler_repo.yaml` - - 输入: `python3 fetch_openeuler_repo_name.py --sig sig1,sig2 remote` - - 输出: `openeuler_repo.yaml` - - 环境变量: - - `GITEE_USER_TOKEN` - -4. generate_dependence.py - - 功能:生成指定OpenStack版本指定项目的依赖项目json文件。 - - 输入: `python3 generate_dependence.py --project xxx yyy` - - 例如: - `python3 generate_dependence.py --project nova train` - `python3 generate_dependence.py train` - - 输出:指定OpenStack版本的目录,其中包含各个项目的json文件。 - - example目录中包含了train版本生成的文件示例。 - - 环境变量: None - -5. check_openstack_ci_status.py - - 功能:获取openstack社区openEuler相关CI的最新5次执行结果 - - 输入: `python3 check_openstack_ci_status.py` - - 输出:包含最新5次CI结果的html文件 - - 环境变量: None diff --git a/tools/oos/scripts/check_obs_status.py b/tools/oos/scripts/check_obs_status.py deleted file mode 100755 index 5a22548db3929bba0f5ed5fe00e6b8b6991b2605..0000000000000000000000000000000000000000 --- a/tools/oos/scripts/check_obs_status.py +++ /dev/null @@ -1,222 +0,0 @@ -#!/usr/bin/python3 -import datetime -import json -import os -import sys - -import markdown -import requests -import xmltodict -import yaml - - -BRANCHS = [ - 'openEuler:20.03:LTS:SP2:oepkg:openstack:queens', - 'openEuler:20.03:LTS:SP2:oepkg:openstack:rocky', - 'openEuler:20.03:LTS:SP2:oepkg:openstack:common', - 'openEuler:20.03:LTS:SP3:oepkg:openstack:queens', - 'openEuler:20.03:LTS:SP3:oepkg:openstack:rocky', - 'openEuler:20.03:LTS:SP3:oepkg:openstack:common', - 'openEuler:20.03:LTS:SP3:Epol', - 'openEuler:21.03:Epol', - 'openEuler:21.09:Epol', - 'openEuler:22.03:LTS:Next:Epol:Multi-Version:OpenStack:Train', - 'openEuler:22.03:LTS:Next:Epol:Multi-Version:OpenStack:Wallaby', - 'openEuler:22.03:LTS:Epol:Multi-Version:OpenStack:Train', - 'openEuler:22.03:LTS:Epol:Multi-Version:OpenStack:Wallaby', - 'openEuler:Epol', -] - - -OBS_PACKAGE_BUILD_RESULT_URL = 'https://build.openeuler.org/build/%(branch)s/_result' -OBS_PROJECT_URL = 'https://build.openeuler.org/package/show/%(branch)s/%(project)s' -PROJECT_MARKDOWN_FORMAT = '[%(project)s](%(url)s)' -GITEE_ISSUE_LIST_URL = 'https://gitee.com/api/v5/repos/openeuler/openstack/issues?state=open&labels=kind/obs-failed&sort=created&direction=desc&page=1&per_page=20' -GITEE_ISSUE_CREATE_URL = 'https://gitee.com/api/v5/repos/openeuler/issues' -GITEE_ISSUE_UPDATE_URL = 'https://gitee.com/api/v5/repos/openeuler/issues/%s' -SIG_PROJECT_URL = 'https://gitee.com/openeuler/community/raw/master/sig/sig-openstack/sig-info.yaml' - -OBS_USER_NAME = os.environ.get('OBS_USER_NAME') -OBS_USER_PASSWORD = os.environ.get('OBS_USER_PASSWORD') -GITEE_USER_TOKEN = os.environ.get('GITEE_USER_TOKEN') - - -def get_openstack_sig_project(): - project_list = [] - sig_dict = yaml.safe_load(requests.get(SIG_PROJECT_URL).content.decode()) - for item in sig_dict['repositories']: - project_list.append(item['repo'].split('/')[-1]) - return project_list - - -# The result dict format will be like: -# { -# 'branch_name': { -# 'package_name': { -# 'x86_64': 'fail reason', -# 'aarch64': 'fail reason' -# } -# }, -# 'branch_name': 'Success', -# 'branch_name': 'Unknown', -# } -def check_status(): - white_list = get_openstack_sig_project() - branch_session = requests.session() - branch_session.auth = (OBS_USER_NAME, OBS_USER_PASSWORD) - result = {} - for branch in BRANCHS: - sub_res = 
{} - res = branch_session.get(OBS_PACKAGE_BUILD_RESULT_URL % {'branch': branch}, verify=False) - obs_result = xmltodict.parse(res.content.decode())['resultlist']['result'] - for each_arch in obs_result: - if each_arch['@state'] == 'unknown': - result[branch] = 'Unknown' - break - arch = each_arch['@arch'] - if not each_arch.get('status'): - result[branch] = 'No Content' - break - arch_result = each_arch['status'] - for package in arch_result: - package_name = package['@package'] - package_status = package['@code'] - if ('oepkg' in branch or 'Multi' in branch or package_name in white_list) and package_status in ['unresolvable', 'failed', 'broken']: - project_key = PROJECT_MARKDOWN_FORMAT % {'project': package_name, 'url': OBS_PROJECT_URL % {'branch': branch, 'project': package_name}} - if not sub_res.get(project_key): - sub_res[project_key] = {} - sub_res[project_key][arch] = package.get('details', 'build failed') - else: - if sub_res: - result[branch] = sub_res - else: - result[branch] = 'Success' - return result - - -def get_obs_issue(): - headers = { - 'Content-Type': 'application/json;charset=UTF-8', - } - issue_list = requests.get(GITEE_ISSUE_LIST_URL, headers=headers).content.decode() - issue_list = json.loads(issue_list) - if issue_list: - return issue_list[0]['number'] - else: - return None - - -def update_issue(issue_number, result_str): - headers = { - 'Content-Type': 'application/json;charset=UTF-8', - } - body = { - "access_token": GITEE_USER_TOKEN, - "repo": "openstack", - "body": result_str, - } - response = requests.patch(GITEE_ISSUE_UPDATE_URL % issue_number, headers=headers, params=body) - if response.status_code != 200: - raise Exception("Failed update gitee issue") - -def create_issue(result_str): - headers = { - 'Content-Type': 'application/json;charset=UTF-8', - } - body = { - "access_token": GITEE_USER_TOKEN, - "repo": "openstack", - "title": "[CI] OBS Build Failed", - "body": result_str, - "labels": "kind/obs-failed", - "assignee": "huangtianhua", - "collaborators": "xiyuanwang" - } - response = requests.post(GITEE_ISSUE_CREATE_URL, headers=headers, params=body) - if response.status_code != 201: - raise Exception("Failed create gitee issue") - - -def create_or_update_issue(result_str): - issue_number = get_obs_issue() - if issue_number: - update_issue(issue_number, result_str) - else: - create_issue(result_str) - - -def format_content_for_markdown(input_dict): - output = "" - today = datetime.datetime.now() - output += '## check date: %s-%s-%s\n' % (today.year, today.month, today.day) - if input_dict: - for branch, project_info in input_dict.items(): - output += '## %s\n' % branch - output += ' \n' - if isinstance(project_info, str): - output += '%s\n' % project_info - continue - for project_name, status in project_info.items(): - output += ' %s:\n' % project_name - if status.get('x86_64'): - output += ' x86_64: %s\n' % status['x86_64'] - if status.get('aarch64'): - output += ' aarch64: %s\n' % status['aarch64'] - else: - output += 'All package build success.' 
- - return output - - -def format_content_for_html(input_dict): - output_attach = "" - output_body = "" - today = datetime.datetime.now() - output_body += '# check date: %s-%s-%s\n\n' % (today.year, today.month, today.day) - output_body += 'See the attached file for the failed branch\n\n' - if input_dict: - for branch, project_info in input_dict.items(): - if isinstance(project_info, str): - output_body += '## %s\n\n' % branch - output_body += '%s\n' % project_info - continue - output_attach += '## %s\n\n' % branch - output_attach += '??? note "Detail"\n' - for project_name, status in project_info.items(): - output_attach += ' %s:\n\n' % project_name - if status.get('x86_64'): - output_attach += ' x86_64: %s\n' % status['x86_64'] - if status.get('aarch64'): - output_attach += ' aarch64: %s\n' % status['aarch64'] - output_attach += '\n' - else: - output_body += 'All package build success.' - - return output_attach, output_body - - -def main(): - try: - output_type = sys.argv[1] - except IndexError: - print("Please specify the output type: markdown, html or gitee") - exit(1) - result = check_status() - if output_type == 'markdown': - output = format_content_for_markdown(result) - with open('result.md', 'w') as f: - f.write(markdown.markdown(output)) - elif output_type == 'html': - result_str_attach, result_str_body= format_content_for_html(result) - with open('result_attach.html', 'w') as f: - html = markdown.markdown(result_str_attach, extensions=['pymdownx.details']) - f.write(html) - with open('result_body.html', 'w') as f: - html = markdown.markdown(result_str_body, extensions=['pymdownx.details']) - f.write(html) - elif output_type == 'gitee': - create_or_update_issue(result) - - -if __name__ == '__main__': - main() diff --git a/tools/oos/scripts/check_openstack_ci_status.py b/tools/oos/scripts/check_openstack_ci_status.py deleted file mode 100644 index 2cd99b4460db99417739d015ad6a8987a0ec95d8..0000000000000000000000000000000000000000 --- a/tools/oos/scripts/check_openstack_ci_status.py +++ /dev/null @@ -1,32 +0,0 @@ -import datetime -import json - -import markdown -import requests - - -jobs = ['kolla-ansible-openeuler-source', 'devstack-platform-openEuler-20.03-SP2'] -zuul_url = 'https://zuul.opendev.org/api/tenant/openstack/builds?job_name=%s' - - -def get_ci_result(job): - response = requests.get(zuul_url % job) - return json.loads(response.content.decode('utf8')) - - -if __name__ == '__main__': - today = datetime.datetime.now() - output_body = '# check date: %s-%s-%s\n\n' % (today.year, today.month, today.day) - for job in jobs: - output_body += '## %s\n\n' % job - output_body += 'Recent five job results: \n\n' - res = get_ci_result(job) - output_body += '| Number| Result | Time | LOG |\n' - output_body += '|-|-|-|-|\n' - for i in range(5): - output_body += '| %s| **%s** | %s | %s |\n' % (i, res[i]['result'], res[i]['start_time'], res[i]['log_url']) - output_body += '\n' - - with open('result_body.html', 'w') as f: - html = markdown.markdown(output_body, extensions=['pymdownx.extra', 'pymdownx.magiclink']) - f.write(html) diff --git a/tools/oos/scripts/fetch_openeuler_repo_name.py b/tools/oos/scripts/fetch_openeuler_repo_name.py deleted file mode 100755 index 3000f41be4a861fc163b6bb9bf0a05605bcdf88c..0000000000000000000000000000000000000000 --- a/tools/oos/scripts/fetch_openeuler_repo_name.py +++ /dev/null @@ -1,117 +0,0 @@ -#!/usr/bin/python3 -import base64 -import json -import os -import sys -import yaml - -import click -import requests - - -def get_tree(target_hash, 
token=os.environ.get("GITEE_USER_TOKEN", ''), verify=True): - url = f"https://gitee.com/api/v5/repos/openeuler/community/git/trees/{target_hash}?access_token={token}" - headers = { - 'Content-Type': 'application/json;charset=UTF-8', - } - response = requests.get(url, headers=headers, verify=verify) - tree = json.loads(response.content.decode()) - return tree['tree'] - - -def get_project_name(target_hash, token=os.environ.get("GITEE_USER_TOKEN", ''), verify=True): - url = f"https://gitee.com/api/v5/repos/openeuler/community/git/blobs/{target_hash}?access_token={token}" - headers = { - 'Content-Type': 'application/json;charset=UTF-8', - } - response = requests.get(url, headers=headers, verify=verify) - content = json.loads(response.content.decode()) - project_name = base64.b64decode(content['content']).decode().split('\n')[0].split(' ')[-1].rstrip("\r") - return project_name - - -def parser_remote(target_sigs): - community_tree = get_tree('master') - for node in community_tree: - if node['path'] == 'sig': - sigs_tree_hash = node['sha'] - break - sigs_tree = get_tree(sigs_tree_hash) - result = {} - for sig in sigs_tree: - if sig['type'] == 'blob': - print(f"{sig['path']} is not a sig, skip it.") - continue - if target_sigs and sig['path'] not in target_sigs: - continue - sig_name = sig['path'] - sig_tree = get_tree(sig['sha']) - rpms_tree_hash_list = [] - for node in sig_tree: - if node['path'] in ['src-openeuler', 'openeuler']: - rpms_tree_hash_list.append(node['sha']) - if not rpms_tree_hash_list: - print(f"There is no src-openEuler project for sig {sig_name}") - continue - for rpms_tree_hash in rpms_tree_hash_list: - all_rpms_tree = get_tree(rpms_tree_hash) - for node in all_rpms_tree: - sub_rpms_tree = get_tree(node['sha']) - for rpm in sub_rpms_tree: - rpm_name = get_project_name(rpm['sha']) - exist_sig = result.get(rpm_name) - if exist_sig and exist_sig != sig_name: - print(f"Warning: the {rpm_name} contains in different sig: {exist_sig}, {sig_name}") - result[rpm_name] = sig_name - print(f"Adding {rpm_name} in {sig_name} sig") - return result - - -def get_file_path(path, file_list): - if path.endswith('community'): - path = os.path.join(path, 'sig') - dir_or_files = os.listdir(path) - for dir_file in dir_or_files: - dir_file_path = os.path.join(path, dir_file) - if os.path.isdir(dir_file_path): - get_file_path(dir_file_path, file_list) - else: - file_list.append(dir_file_path) - - -def parser_local(path, target_sigs): - result = {} - file_list = [] - get_file_path(path, file_list) - for file in file_list: - if not file.endswith('.yaml'): - continue - elif file.endswith('sig-info.yaml'): - continue - sig_name = file.split('/community/sig/')[1].split('/')[0] - if target_sigs and sig_name not in target_sigs: - continue - project_name = file.split('/')[-1].split('.yaml')[0] - exist_sig = result.get(project_name) - if exist_sig and exist_sig != sig_name: - print(f"Warning: the {project_name} contains in different sig: {exist_sig}, {sig_name}") - result[project_name] = sig_name - return result - - -@click.command() -@click.option('--sig', default='', help='"The sig format should be like: sig1,sig2,sig3...') -@click.option('--path', default='./community', help='"The community repo') -@click.argument('way', type=click.Choice(['local', 'remote'])) -def parser(sig, path, way): - target_sigs = sig.split(',') if sig else [] - if way == 'remote': - result = parser_remote(target_sigs) - else: - result = parser_local(path, target_sigs) - with open('openeuler_repo.yaml', 'w') as fp: - 
fp.write(yaml.dump(result)) - - -if __name__ == '__main__': - parser() diff --git a/tools/oos/scripts/fetch_openstack_release_mapping.py b/tools/oos/scripts/fetch_openstack_release_mapping.py deleted file mode 100755 index 554e294200a6d47d1292474e13a7c8d7e90eaaa4..0000000000000000000000000000000000000000 --- a/tools/oos/scripts/fetch_openstack_release_mapping.py +++ /dev/null @@ -1,48 +0,0 @@ -#!/usr/bin/python3 -from packaging import version -import re - -import requests -import yaml - - -releases = [ - 'queens', - 'rocky', - 'train', - 'stein', - 'ussuri', - 'victoria', - 'wallaby', - 'xena', - 'yoga' -] - - -all_res = dict() -for release in releases: - url = 'https://releases.openstack.org/' + release - url_os_content = requests.get(url, verify=True).content.decode() - - # get all links, which ends .tar.gz from HTML - links = re.findall(r'https://.*\.tar\.gz', url_os_content) - results = dict() - for pkg_link in links: - # get name and package informations from link - tmp = pkg_link.split("/") - pkg_full_name = tmp[4] - pkg_name = pkg_full_name[0:pkg_full_name.rfind('-')] - pkg_ver = pkg_full_name[ - pkg_full_name.rfind('-') + 1:pkg_full_name.rfind('.tar')] - # check if package with version are in results, - # and check for higher version - if pkg_name not in results: - results[pkg_name] = pkg_ver - else: - # if current versions < new version, then update it - if version.parse(results.get(pkg_name)) < version.parse(pkg_ver): - results[pkg_name] = pkg_ver - all_res[release] = results - -with open('openstack_release.yaml', 'w') as fp: - fp.write(yaml.dump(all_res)) diff --git a/tools/oos/scripts/generate_dependence.py b/tools/oos/scripts/generate_dependence.py deleted file mode 100755 index 4e640c199cb8bb9a1acc08fc69bbb7331247d08e..0000000000000000000000000000000000000000 --- a/tools/oos/scripts/generate_dependence.py +++ /dev/null @@ -1,398 +0,0 @@ -#!/usr/bin/python3 -import copy -import json -import os -from pathlib import Path -import re - -import click -import oos -from packaging import version as p_version -import requests -import yaml - -CONSTANTS = None -OPENSTACK_RELEASE_MAP = None -UPPER = dict() -_SEARVICE = [ - # service - "aodh", - "ceilometer", - "cinder", - "openstack-cyborg", - "glance", - "openstack-heat", - "horizon", - "ironic", - "keystone", - "kolla", - "kolla-ansible", - "neutron", - "nova", - "panko", - "openstack-placement", - "swift", - "trove", - # client - "python-openstackclient", - "osc-placement", - "python-cyborgclient", - # ui - "ironic-ui", - "trove-dashboard", - # test - "tempest", - "cinder-tempest-plugin", - "ironic-tempest-plugin", - "keystone-tempest-plugin", - "neutron-tempest-plugin", - "trove-tempest-plugin", - # library - "ironic-inspector", - "ironic-prometheus-exporter", - "ironic-python-agent", - "networking-baremetal", - "networking-generic-switch", -] -SUPPORT_RELEASE = { - "queens": { - "base_service": _SEARVICE + ['barbican'], - }, - "rocky": { - "base_service": _SEARVICE + ['barbican'], - }, - "train": { - "base_service": _SEARVICE, - "extra_service": { - "gnocchi": "4.3.5" - } - }, - "wallaby": { - "base_service": _SEARVICE, - "extra_service": { - "gnocchi": "4.3.5" - } - }, -} - - -class Project(object): - def __init__(self, name, version, - eq_version='', ge_version='', lt_version='', ne_version=None, - upper_version='', deep_count=0, deep_list=None, requires=None): - self.name = name - self.version = version - self.eq_version = eq_version - self.ge_version = ge_version - self.lt_version = lt_version - self.ne_version = ne_version if 
ne_version else [] - self.upper_version = upper_version - self.deep_list = deep_list if deep_list else [] - self.deep_list.append(self.name) - self.requires = requires if requires else {} - self.deep_count = deep_count - - self.dep_file = [ - "requirements.txt", - "test-requirements.txt", - "driver-requirements.txt", - "doc/requirements.txt" - ] - - def _refresh(self, local_project): - is_out_of_date = False - if p_version.parse(self.version) > p_version.parse(local_project.version): - is_out_of_date = True - if not is_out_of_date: - self.name = local_project.name - self.version = local_project.version - self.eq_version = local_project.eq_version - self.ge_version = local_project.ge_version - self.lt_version = local_project.lt_version - self.ne_version = local_project.ne_version - self.deep_count = local_project.deep_count - self.deep_list = local_project.deep_list - self.requires = local_project.requires - return is_out_of_date - - def refresh_from_local(self, file_path): - with open(file_path, 'r', encoding='utf8') as fp: - project_dict = json.load(fp) - local_project = Project.from_dict(**project_dict) - is_out_of_date = self._refresh(local_project) - return is_out_of_date - - def refresh_from_upstream(self, file_path): - if not self._generate_cache_from_opendev(): - self._generate_cache_from_pypi() - with open(file_path, 'w', encoding='utf8') as fp: - json.dump(self.to_dict(), fp, ensure_ascii=False) - - def _is_legal(self, line): - """check the input requires line is legal or not""" - if line == '': - return False - if line.startswith('#') or line.startswith('-r'): - return False - # win32 and dev requires should be excluded. - if re.search(r"(sys_platform|extra|platform_system)[ ]*==[ \\'\"]*(win32|dev|Windows)", line): - return False - if re.search(r"python_version[ ]*==[ \\'\"]*2\.7", line): - return False - python_version_ge_regex = [r"(?<=python_version>=)[0-9\.'\"]+", r"(?<=python_version >=) [0-9\.'\"]+"] - python_version_lt_regex = [r"(?<=python_version<)[0-9\.'\"]+", r"(?<=python_version <) [0-9\.'\"]+"] - python_version_eq_regex = [r"(?<=python_version==)[0-9\.'\"]+", r"(?<=python_version <) [0-9\.'\"]+"] - for regex in python_version_ge_regex: - if re.search(regex, line) and re.search(regex, line).group() in ['3.9', "'3.9'", '"3.9"']: - return False - for regex in python_version_lt_regex: - if re.search(regex, line) and re.search(regex, line).group() in ['3.8', "'3.8'", '"3.8"']: - return False - for regex in python_version_eq_regex: - if re.search(regex, line) and re.search(regex, line).group() not in ['3.8', "'3.8'", '"3.8"']: - return False - return True - - def _analysis_version_range(self, version_range): - # TODO: analysis improvement. 
- if version_range.get('eq_version'): - return version_range['eq_version'] - if version_range.get('upper_version'): - return version_range['upper_version'] - if version_range.get('ge_version'): - return version_range['ge_version'] - return 'unknown' - - def _update_requires(self, requires_list): - project_version_ge_regex = r"(?<=>=)[0-9a-zA-Z\.\*]+" - project_version_lt_regex = r"(?<=<)[0-9a-zA-Z\.\*]+" - project_version_eq_regex = r"(?<===)[0-9a-zA-Z\.\*]+" - project_version_ne_regex = r"(?<=!=)[0-9a-zA-Z\.\*]+" - for line in requires_list: - if self._is_legal(line): - required_project_name = re.search(r"^[a-zA-Z0-9_\.\-]+", line).group() - required_project_name = CONSTANTS['pypi_name_fix'].get(required_project_name, required_project_name) - - required_project_info = { - "eq_version": re.search(project_version_eq_regex, line).group() if re.search(project_version_eq_regex, line) else '', - "ge_version": re.search(project_version_ge_regex, line).group() if re.search(project_version_ge_regex, line) else '', - "lt_version": re.search(project_version_lt_regex, line).group() if re.search(project_version_lt_regex, line) else '', - "ne_version": re.findall(project_version_ne_regex, line), - "upper_version": UPPER.get(required_project_name, '') - } - required_project_info['version'] = self._analysis_version_range(required_project_info) - - self.requires[required_project_name] = required_project_info - - def _generate_cache_from_opendev(self): - file_content = "" - for file_name in self.dep_file: - url = "https://opendev.org/openstack/%s/raw/tag/%s/%s" % (self.name, self.version, file_name) - response = requests.get(url, verify=True) - if response.status_code == 200: - file_content += response.content.decode() - else: - if file_name == "requirements.txt": - break - else: - continue - if not file_content: - return False - self._update_requires(file_content.split('\n')) - return True - - def _get_json_from_pypi(self, project, version=None): - if version and version != 'latest': - url = 'https://pypi.org/pypi/%s/%s/json' % (project, version) - else: - url = 'https://pypi.org/pypi/%s/json' % project - response = requests.get(url, verify=True) - if response.status_code != 200: - raise Exception("%s-%s doesn't exist on pypi" % (project, version)) - return json.loads(response.content.decode()) - - def _generate_cache_from_pypi(self): - requires_list = self._get_json_from_pypi(self.name, self.version)["info"]["requires_dist"] - if requires_list: - self._update_requires(requires_list) - - @classmethod - def from_dict(cls, **args): - name = args['name'] - version = args['version_dict']['version'] - eq_version = args['version_dict']['eq_version'] - ge_version = args['version_dict']['ge_version'] - lt_version = args['version_dict']['lt_version'] - ne_version = args['version_dict']['ne_version'] - upper_version = args['version_dict']['upper_version'] - deep_count = args['deep']['count'] - deep_list = args['deep']['list'] - requires = args['requires'] - return cls( - name, version, eq_version, ge_version, lt_version, - ne_version, upper_version, deep_count, deep_list, requires - ) - - def to_dict(self): - return { - 'name': self.name, - 'version_dict': { - 'version': self.version, - 'eq_version': self.eq_version, - 'ge_version': self.ge_version, - 'lt_version': self.lt_version, - 'ne_version': self.ne_version, - 'upper_version': self.upper_version, - }, - 'deep': { - 'count': self.deep_count, - 'list': self.deep_list, - }, - 'requires': self.requires, - } - - -class InitDependence(object): - def __init__(self, 
openstack_release, projects): - self.pypi_cache_path = "./%s_cached_file" % openstack_release - self.unknown_file = self.pypi_cache_path + "/" + "unknown" - self.unknown_list = [] - self.loaded_list = [] - - self.project_dict = dict() - if projects: - for project in projects.split(","): - version =OPENSTACK_RELEASE_MAP[openstack_release].get(project, OPENSTACK_RELEASE_MAP[openstack_release].get(project.replace("-", "_"))) - if version: - self.project_dict[project] = version - else: - print("%s doesn't support %s" % (openstack_release, project)) - else: - for project in SUPPORT_RELEASE[openstack_release]['base_service']: - version = OPENSTACK_RELEASE_MAP[openstack_release].get(project, OPENSTACK_RELEASE_MAP[openstack_release].get(project.replace("-", "_"))) - if version: - self.project_dict[project] = version - else: - print("%s doesn't support %s" % (openstack_release, project)) - for project, version in SUPPORT_RELEASE[openstack_release]['extra_service'].items(): - self.project_dict[project] = version - - def _cache_dependencies(self, project_obj): - """Cache dependencies by recursion way""" - if project_obj.name in CONSTANTS['black_list']: - print("%s is in black list, skip now" % project_obj.name) - return - if project_obj.version == 'unknown': - print("The version of %s is not specified, skip now" % project_obj.name) - if project_obj.name not in self.unknown_list: - if Path(self.unknown_file).exists(): - with open(self.unknown_file, 'a') as fp: - fp.write(project_obj.name + "\n") - self.unknown_list.append(project_obj.name) - else: - with open(self.unknown_file, 'w') as fp: - fp.write(project_obj.name + "\n") - self.unknown_list.append(project_obj.name) - return - file_path = self.pypi_cache_path + "/" + "%s.json" % project_obj.name - if Path(file_path).exists(): - is_out_of_date = project_obj.refresh_from_local(file_path) - if not is_out_of_date: - print('Cache %s exists, loading from cache, deep %s' % (project_obj.name, project_obj.deep_count)) - if project_obj.name in self.loaded_list: - return - else: - self.loaded_list.append(project_obj.name) - else: - print('Cache %s exists but out of date, loading from upstream, deep %s' % (project_obj.name, project_obj.deep_count)) - project_obj.refresh_from_upstream(file_path) - else: - # Load and cache info from upstream - print('Cache %s doesn\'t exists, loading from upstream, deep %s' % (project_obj.name, project_obj.deep_count)) - project_obj.refresh_from_upstream(file_path) - for name, version_range in project_obj.requires.items(): - if name in project_obj.deep_list: - continue - version = version_range['version'] - version = CONSTANTS['pypi_version_fix'].get("%s-%s" % (name, version), version) - child_project_obj = Project( - name, version, eq_version=version_range['eq_version'], ge_version=version_range['ge_version'], - lt_version=version_range['lt_version'], ne_version=version_range['ne_version'], - upper_version=version_range['upper_version'], - deep_count=project_obj.deep_count+1, deep_list=copy.deepcopy(project_obj.deep_list) - ) - self._cache_dependencies(child_project_obj) - - def _pre(self): - if Path(self.pypi_cache_path).exists(): - print("Cache folder exists, Using the cached file first. 
" - "Please delete the cache folder if you want to " - "generate the new cache.") - else: - print("Creating Cache folder %s" % self.pypi_cache_path) - Path(self.pypi_cache_path).mkdir(parents=True) - - def _post(self): - all_project = set(self.project_dict.keys()) - file_list = os.listdir(self.pypi_cache_path) - for file_name in file_list: - if not file_name.endswith('.json'): - continue - with open(self.pypi_cache_path + '/' + file_name, 'r', encoding='utf8') as fp: - project_dict = json.load(fp) - all_project.update(project_dict['requires'].keys()) - for file_name in file_list: - project_name = os.path.splitext(file_name)[0] - if project_name not in list(all_project) and file_name.endswith('.json'): - print("%s is required by nothing, remove it." % project_name) - os.remove(self.pypi_cache_path + '/' + file_name) - with open(self.unknown_file, 'r') as fp: - content = fp.readlines() - with open(self.unknown_file, 'w') as fp: - for project in content: - if project.split('\n')[0] + '.json' not in file_list: - fp.write(project) - - def init_all_dep(self): - """download and cache all related requirement file""" - if not self.project_dict: - return - self._pre() - for name, version in self.project_dict.items(): - project_obj = Project(name, version, eq_version=version) - self._cache_dependencies(project_obj) - self._post() - -@click.command() -@click.option('-p', '--projects', default=None, help='Specify the projects to be generated. Format should be like project1,project2') -@click.argument('release', type=click.Choice(SUPPORT_RELEASE.keys())) -def run(projects, release): - upper_url = "https://opendev.org/openstack/requirements/raw/branch/stable/%s/upper-constraints.txt" % release - upper_projects = requests.get(upper_url, verify=True).content.decode().split('\n') - for upper_project in upper_projects: - if not upper_project: - continue - project_name, project_version = upper_project.split('===') - project_version = project_version.split(';')[0] - UPPER[project_name] = project_version - - InitDependence(release, projects).init_all_dep() - - -if __name__ == '__main__': - search_paths = ['/etc/oos/', - os.path.join(os.path.dirname(oos.__path__[0]), 'etc'), - os.environ.get("OOS_CONF_DIR", ""), '/usr/local/etc/oos', - '/usr/etc/oos', - ] - for conf_path in search_paths: - cons = os.path.join(conf_path, "constants.yaml") - openstack_release = os.path.join(conf_path, "openstack_release.yaml") - if (os.path.isfile(cons) - and os.path.isfile(openstack_release)): - CONSTANTS = yaml.safe_load(open(cons, 'r', encoding='utf-8')) - OPENSTACK_RELEASE_MAP = yaml.safe_load(open(openstack_release)) - break - else: - raise Exception("The constants or openstack release file are not found!") - run() diff --git a/tools/oos/setup.cfg b/tools/oos/setup.cfg deleted file mode 100644 index 20154106812908df67d3f1422f2e727acd0bf60a..0000000000000000000000000000000000000000 --- a/tools/oos/setup.cfg +++ /dev/null @@ -1,39 +0,0 @@ -[metadata] -name = openstack-sig-tool -summary = The command line tool for openEuler OpenStack SIG -description-file = - README.md -long_description_content_type = text/markdown -author = openEuler OpenStack SIG -home-page = https://gitee.com/openeuler/openstack/ -python-requires = >=3.7 -classifier = - Intended Audience :: Information Technology - Intended Audience :: System Administrators - License :: OSI Approved :: Apache Software License - Operating System :: POSIX :: Linux - Programming Language :: Python - Programming Language :: Python :: 3 - Programming Language :: Python :: 3.7 - 
Programming Language :: Python :: 3.8 - Programming Language :: Python :: 3.9 - -[pbr] -warnerrors = True - -[files] -packages = - oos - -data_files = - etc/oos = etc/* - etc/oos/playbooks = etc/playbooks/* - etc/oos/inventory = etc/inventory/* - etc/oos/key_pair = etc/key_pair/* - -[entry_points] -console_scripts = - oos = oos.cli:main - -[wheel] -universal = 1 diff --git a/tools/oos/setup.py b/tools/oos/setup.py deleted file mode 100644 index 9f78acbfc5e27a489272e4a5e63576eab150a7f2..0000000000000000000000000000000000000000 --- a/tools/oos/setup.py +++ /dev/null @@ -1,21 +0,0 @@ -#!/usr/bin/env python -# Copyright (c) Huawei 2021 -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or -# implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import setuptools - -setuptools.setup( - setup_requires=['pbr>=1.3'], - pbr=True)
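
For orientation, here is a minimal usage sketch of the helpers removed in the hunks above. It is not part of the deleted sources: it only calls functions whose full definitions appear in the diff (`oos.common.gitee` and `oos.common.pypi`, packaged via the `setup.cfg` shown last), and the Gitee owner, repository, and branch names are hypothetical placeholders.

```python
# Minimal sketch, assuming the oos package from this diff is installed
# (pip install . with the pbr-based setup.py/setup.cfg above) and that
# network access to gitee.com and pypi.org is available.
from oos.common.gitee import get_gitee_project_version, has_branch
from oos.common.pypi import get_json_from_pypi

# Hypothetical placeholders -- replace with a real owner/repo/branch.
OWNER, PROJECT, BRANCH = 'src-openeuler', 'python-example', 'master'

# has_branch() returns True when the Gitee repo exposes the given branch.
if has_branch(OWNER, PROJECT, BRANCH):
    # get_gitee_project_version() derives the version from the source
    # tarball name (tar.gz/tar.bz2/zip/tgz) found in the branch tree.
    print(get_gitee_project_version(OWNER, PROJECT, BRANCH))

# get_json_from_pypi() wraps https://pypi.org/pypi/<project>/json and
# raises if the project or version does not exist on PyPI.
meta = get_json_from_pypi('oslo.log', 'latest')
print(meta['info']['version'])
```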